# Particle and guiding-center orbits in crossed electric and magnetic fields

Alain J. Brizard

arXiv: http://arxiv.org/abs/2302.07119v2 (2023-02-14)
###### Abstract
The problem of the charged-particle motion in crossed electric and magnetic fields is investigated, and the validity of the guiding-center representation is assessed in comparison with the exact particle dynamics. While the magnetic field is considered to be straight and uniform, the (perpendicular) radial electric field is nonuniform. The Hamiltonian guiding-center theory of charged-particle motion is presented for arbitrary radial electric fields, and explicit examples are provided for the case of a linear radial electric field.
## I Introduction
The importance of radial electric fields in rotating magnetized plasmas has been a topic of great interest for a few decades [1; 2; 3; 4] because of the role of sheared \(E\times B\) flows in the stabilization of turbulent magnetized plasmas. The guiding-center analysis for particle dynamics in the presence of background (equilibrium) radial electric fields [5; 6], and its extension to the gyrokinetic analysis of turbulent magnetized plasmas [7; 8; 9; 10; 11], has also been a topic of great interest. The case of rotating magnetized plasmas due to the presence of radial electric fields continues to attract attention [12; 13].
In the present work, we apply the Hamiltonian guiding-center theory developed in Ref. [5] for general magnetic geometry to the problem of charged-particle motion in the presence of a uniform magnetic field \(\mathbf{B}=B_{0}\,\mathbf{\widehat{z}}\) and a nonuniform electric field \(\mathbf{E}=-\,\nabla\Phi\). Using the magnetic coordinates \((\psi,\theta,z)\), the magnetic field \(\mathbf{B}=\nabla\times\mathbf{A}\) can be represented in terms of the magnetic vector potential \(\mathbf{A}=\psi\,\nabla\theta\), where the magnetic flux is expressed in terms of cylindrical coordinates as \(\psi(r)=B_{0}r^{2}/2\), while the electric potential \(\Phi=\Phi(\psi)\) is assumed to be a flux function. While the standard assumption is that the \(E\times B\) rotation frequency \(\omega_{E}(\psi)\equiv c\,\Phi^{\prime}(\psi)\ll\Omega_{0}\) is small compared to the gyrofrequency \(\Omega_{0}=qB_{0}/mc\) (for a particle of mass \(m\) and charge \(q\)), this does not require the \(E\times B\) velocity \(r\,\omega_{E}(\psi)\) to be small compared to the thermal velocity [7; 13], especially in the edge of a confined magnetized plasma.
The remainder of the paper is organized as follows. First, the dynamics of a charged particle moving in the two-dimensional plane perpendicular to a constant magnetic field under the influence of a nonlinear radial electric field (Sec. II) and a linear radial electric field (Sec. III) is discussed, where explicit results are presented for the linear case. Next, the guiding-center analysis for particle dynamics in a nonlinear radial electric field is presented in Sec. IV while explicit guiding-center results are presented in Sec. V for a linear radial electric field.
## II Particle Lagrangian dynamics in nonlinear radial electric field
The two-dimensional motion of a charged particle in the plane perpendicular to the magnetic field is represented by the particle Lagrangian expressed in magnetic coordinates as
\[L = \left(\frac{q}{c}\,\mathbf{A}\;+\;m\,\mathbf{v}\right)\cdot \dot{\mathbf{x}}\;-\;\left(\frac{m}{2}\,|\mathbf{v}|^{2}\;+\;q\,\Phi\right) \tag{1}\] \[= \frac{q}{c}\,\mathbf{A}\!\cdot\!\dot{\mathbf{x}}\;+\;\frac{m}{2} \,|\dot{\mathbf{x}}|^{2}\;-\;q\,\Phi\] \[= \frac{q}{c}\,\psi\,\dot{\theta}\;+\;\frac{q\psi}{c\Omega_{0}} \left(\frac{\dot{\psi}^{2}}{4\,\psi^{2}}+\dot{\theta}^{2}\right)\;-\;q\,\Phi( \psi),\]
where a dot denotes a time derivative and the particle's velocity
\[\mathbf{v}\;\equiv\;\dot{\mathbf{x}}\;=\;\dot{\psi}\;\frac{\partial\mathbf{x }}{\partial\psi}\,+\,\dot{\theta}\;\frac{\partial\mathbf{x}}{\partial\theta} \;=\;\sqrt{\frac{2\psi}{B_{0}}}\left(\frac{\dot{\psi}}{2\psi}\,\widehat{r} \;+\;\dot{\theta}\,\widehat{\theta}\right) \tag{2}\]
is expressed in the plane perpendicular to the magnetic field, with Jacobian \(\mathbf{\widehat{z}}\cdot(\partial\mathbf{x}/\partial\psi\times\partial \mathbf{x}/\partial\theta)=1/B_{0}\). Since the Lagrangian (1) is independent of the azimuthal angle \(\theta\), the azimuthal canonical angular momentum
\[p_{\theta}\;\equiv\;\frac{\partial L}{\partial\dot{\theta}}\;=\;\frac{q}{c}\,\psi\left(1\;+\;2\,\dot{\theta}/\Omega_{0}\right) \tag{3}\]
is a constant of motion (i.e., \(dp_{\theta}/dt=\partial L/\partial\theta=0\)). By using the initial conditions \((\psi_{0},\dot{\theta}_{0}=\omega_{0})\), we may write the azimuthal canonical angular momentum as
\[p_{\theta}\;=\;\frac{q}{c}\,\psi_{0}\,(1\;+\;2\,\omega_{0}/\Omega_{0})\;\equiv \;\frac{q}{c}\,\overline{\psi}_{0}, \tag{4}\]
and, thus, the azimuthal angular velocity is
\[\dot{\theta}(\psi)\;=\;\frac{\Omega_{0}}{2}\left(\frac{\overline{\psi}_{0}}{ \psi}-1\right), \tag{5}\]
which indicates that the direction of the azimuthal rotation reverses whenever \(\psi\) crosses \(\overline{\psi}_{0}\).
Next, we construct the Routh-Lagrange function (or Routhian [14]) in order to obtain a reduced Lagrangian formulation of the \(\psi\)-dynamics:
\[\mathsf{R}(\psi,\dot{\psi})\;\equiv\;L-\dot{\theta}\,\partial L/\partial\dot{ \theta},\]
and we obtain
\[{\sf R}(\psi,\dot{\psi})\;=\;\frac{q}{c\Omega_{0}}\left(\frac{\dot{\psi}^{2}}{4\, \psi}\right)\;-\;V(\psi), \tag{6}\]
where the effective potential is
\[V(\psi)\;=\;q\,\Phi(\psi)\;+\;\frac{q\Omega_{0}}{4c\,\psi}\left(\overline{\psi} _{0}-\psi\right)^{2}. \tag{7}\]
Hence, the Routh-Euler-Lagrange equation
\[\frac{d}{dt}\left(\frac{\partial{\sf R}}{\partial\dot{\psi}}\right)\;=\;\frac {\partial{\sf R}}{\partial\psi}\]
yields the nonlinear second-order ordinary differential equation for the normalized magnetic flux \(\chi\equiv\psi/\overline{\psi}_{0}\):
\[\chi^{\prime\prime}\;=\;\frac{1}{2\chi}\left[(\chi^{\prime})^{2}\;+\;\left(1\; -\;\chi^{2}\right)\right]\;-\;2\epsilon\,\chi\,\nu(\chi), \tag{8}\]
where a prime denotes a derivative with respect to the normalized time \(t^{\prime}\equiv\Omega_{0}\,t\) and the dimensionless parameter
\[\epsilon\;\equiv\;c\,\Phi^{\prime}(\overline{\psi}_{0})/\Omega_{0} \tag{9}\]
denotes the ratio of the \(E\times B\) azimuthal frequency \(\omega_{E}(\overline{\psi}_{0})=c\Phi^{\prime}(\overline{\psi}_{0})\) at \(\psi=\overline{\psi}_{0}\) to the gyrofrequency \(\Omega_{0}\), and we define the function
\[\nu(\chi)\;\equiv\;\Phi^{\prime}(\overline{\psi}_{0}\chi)/\Phi^{\prime}( \overline{\psi}_{0}), \tag{10}\]
which equals one if \(\Phi(\psi)\) is a linear function of \(\psi\).
We note that Eq. (8) possesses an energy conservation law, with normalization \({\cal E}\equiv(q\Omega_{0}\overline{\psi}_{0}/2c)\,\overline{\cal E}\), where
\[\overline{\cal E} = \frac{1}{2\chi}\left[(\chi^{\prime})^{2}\;+\;(1-\chi)^{2}\right] \;+\;\frac{2\epsilon\,\Phi(\overline{\psi}_{0}\chi)}{\overline{\psi}_{0}\, \Phi^{\prime}(\overline{\psi}_{0})} \tag{11}\] \[\equiv \frac{(\chi^{\prime})^{2}}{2\chi}\;+\;U(\chi),\]
so that the solution for \(\chi(t^{\prime})\) can be found by inverting the integral solution
\[t^{\prime}(\chi)\;\equiv\;\Omega_{0}\,t(\chi)\;=\;\pm\int_{\chi_{b}}^{\chi} \frac{ds}{\sqrt{2s\left[\overline{\cal E}-U(s)\right]}}, \tag{12}\]
where the initial condition \(\chi_{b}\) can be chosen as a turning point (\(\chi^{\prime}=0\)) defined by the condition \(\overline{\cal E}=U(\chi_{b})\).
Lastly, the solution of Eq. (12) can be obtained analytically by quadrature for an electric potential \(\Phi(\psi)\) represented as a polynomial up to third order in \(\psi\). Once the solution \(\chi(t^{\prime})\) is found by inverting Eq. (12), the solution for the azimuthal angle \(\theta(t^{\prime})\) is found by integrating
\[\theta^{\prime}(t^{\prime})\;=\;\frac{1}{2}\left(\frac{1}{\chi(t^{\prime})}\; -\;1\right). \tag{13}\]
Hence, a particle orbit can be generated from these solutions, which can be expressed in polar form as
\[x(t^{\prime}) = r(t^{\prime})\;\cos\theta(t^{\prime}), \tag{14}\] \[y(t^{\prime}) = r(t^{\prime})\;\sin\theta(t^{\prime}), \tag{15}\]
where \(r(t^{\prime})=\sqrt{2\psi(t^{\prime})/B_{0}}\).
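As a cross-check on the quadrature solution, the reduced dynamics can also be integrated numerically. The following minimal Python sketch (not part of the paper) integrates Eq. (8) together with Eq. (13) for an assumed linear potential, \(\nu(\chi)=1\), in normalized units (\(B_{0}=\overline{\psi}_{0}=1\)), and reconstructs the planar orbit from Eqs. (14)-(15); the values of \(\epsilon\) and the initial flux \(\chi_{b}\) are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.5                 # epsilon = c Phi'(psi_bar_0)/Omega_0 (illustrative value)
nu = lambda chi: 1.0      # nu(chi) = 1 for a linear radial electric field

def rhs(tp, y):
    chi, chip, theta = y
    chipp = (chip**2 + 1.0 - chi**2) / (2.0 * chi) - 2.0 * eps * chi * nu(chi)  # Eq. (8)
    thetap = 0.5 * (1.0 / chi - 1.0)                                            # Eq. (13)
    return [chip, chipp, thetap]

# Start at a radial turning point: chi(0) = chi_b with chi'(0) = 0 and theta(0) = 0.
chi_b = 0.4
sol = solve_ivp(rhs, (0.0, 60.0), [chi_b, 0.0, 0.0], max_step=0.01, rtol=1e-10, atol=1e-12)

chi, theta = sol.y[0], sol.y[2]
r = np.sqrt(2.0 * chi)                           # r = sqrt(2 psi/B_0) in normalized units
x, y = r * np.cos(theta), r * np.sin(theta)      # Eqs. (14)-(15)

# Consistency check: the normalized energy (11), with U(chi) = (1-chi)^2/(2 chi) + 2 eps chi
# for the linear case, should stay constant along the orbit.
E_bar = sol.y[1]**2 / (2.0 * chi) + (1.0 - chi)**2 / (2.0 * chi) + 2.0 * eps * chi
print("energy drift:", np.ptp(E_bar))
```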
## III Particle dynamics in a linear radial electric field
In previous work by White, Hassam, and Brizard [12], a nonuniform radial electric field was represented by the electric potential \(\Phi(\psi)=a\,\sqrt{\psi}+b\,\psi\), where \(a\) and \(b\) are constants associated with a uniform radial electric field and a radial electric field with constant gradient, respectively.
Here, we look at charged particle dynamics in the electric potential
\[\Phi(\psi)\;=\;\Phi^{\prime}(\overline{\psi}_{0})\,\psi, \tag{16}\]
so that, using \(\psi=B_{0}\,(x^{2}+y^{2})/2\), we find a linear radial electric field
\[{\bf E}\;=\;-\,\nabla\Phi\;=\;-\,B_{0}\,\Phi^{\prime}(\overline{\psi}_{0})\,(x\,\widehat{\sf x}+y\,\widehat{\sf y})\,. \tag{17}\]
This nonuniform radial electric field yields a nonuniform \(E\times B\) velocity
\[{\bf u}\;=\;{\bf E}\times\frac{c\,\widehat{\bf z}}{B_{0}}\;=\;\epsilon\,\Omega_{0}\,(x\,\widehat{\sf y}-y\,\widehat{\sf x})\,, \tag{18}\]
which has constant parallel vorticity \(\widehat{\bf z}\cdot\nabla\times{\bf u}=2\epsilon\,\Omega_{0}\).
In the ordering recently considered by Joseph [13], an electric oscillation frequency is defined as \(\Omega_{E}\equiv(-\,q\,\nabla\,{\bf\cdot}\,{\bf E}/m)^{\frac{1}{2}}\), so that we find \(\Omega_{E}=\Omega_{0}\,\sqrt{2\,\epsilon}\) from Eqs. (9) and (17). Hence, according to the Joseph ordering [13], our study is situated between the large-flow ordering \(\Omega_{E}/\Omega_{0}={\cal O}(\epsilon)\) and the maximal ordering \(\Omega_{E}/\Omega_{0}={\cal O}(1)\).
In recent work, Kabin [15] considered an additional divergenceless electric field \({\bf E}_{1}=E_{10}^{\prime}\,(y\,\widehat{\bf x}+x\,\widehat{\sf y})\), but since it is generated by an electric potential \(\Phi_{1}(\psi,\theta)=-\,E_{10}^{\prime}\,x\,y=-\,(E_{10}^{\prime}/B_{0})\,\psi \,\sin(2\theta)\) that breaks the invariance of the azimuthal canonical angular momentum, it will not be considered here.
### Normal-mode analysis
Using the electric field (17), the equations of motion are expressed in Cartesian coordinates as
\[x^{\prime\prime} = -\,\epsilon\,x\;+\;y^{\prime}, \tag{19}\] \[y^{\prime\prime} = -\,\epsilon\,y\;-\;x^{\prime}, \tag{20}\]
which have an azimuthal canonical angular momentum invariant
\[\overline{p}_{\theta}\;=\;x\,y^{\prime}\;-\;y\,x^{\prime}\;+\;\frac{1}{2}\,| \mathbf{x}|^{2}\;\equiv\;1, \tag{21}\]
which follows from Eq. (4), and an energy invariant
\[\overline{\cal E} = |\mathbf{x}^{\prime}|^{2}\;+\;\epsilon\,|\mathbf{x}|^{2}\;+\; \frac{1}{|\mathbf{x}|^{2}}\left(1\;-\;\overline{p}_{\theta}^{2}\right)\;+\; \overline{p}_{\theta}\;-\;1 \tag{22}\] \[\equiv |\mathbf{x}^{\prime}|^{2}\;+\;\epsilon\,|\mathbf{x}|^{2},\]
which follows from Eqs. (11) and (21). We note that, since the equations (19)-(20) are linear in \(x\) and \(y\), they can be arbitrarily normalized.
Using the standard normal-mode analysis, where \(x=\overline{x}\,\exp(i\omega t^{\prime})\) and \(y=\overline{y}\,\exp(i\omega t^{\prime})\), we obtain the matrix equation
\[\left(\begin{array}{cc}\epsilon-\omega^{2}&-i\,\omega\\ i\,\omega&\epsilon-\omega^{2}\end{array}\right)\cdot\left(\begin{array}{c} \overline{x}\\ \overline{y}\end{array}\right)\;=\;0, \tag{23}\]
which has non-trivial solutions only if
\[\left(\epsilon-\omega^{2}\right)^{2}\;-\;\omega^{2}\;=\;0, \tag{24}\]
with solutions \(\pm\,\omega_{+}\) and \(\pm\,\omega_{-}\), where
\[\omega_{\pm}\;=\;\frac{1}{2}\left(1\;\pm\;\sqrt{1+4\,\epsilon}\right), \tag{25}\]
and \(\omega_{-}<0<\omega_{+}\) under the assumption that \(\epsilon>0\). By inspection, the general solutions for Eqs. (19)-(20) are
\[x(t^{\prime}) = a\,\cos(\omega_{+}t^{\prime}+\alpha)\;+\;b\,\cos(\omega_{-}t^{ \prime}+\beta), \tag{26}\] \[y(t^{\prime}) = -\,a\,\sin(\omega_{+}t^{\prime}+\alpha)\;-\;b\,\sin(\omega_{-}t^ {\prime}+\beta), \tag{27}\]
where the constants \((a,\alpha;b,\beta)\) are chosen from initial conditions. We note that the normalized magnetic flux \(\chi\equiv(x^{2}+y^{2})/2\) is expressed as
\[\chi(t^{\prime})\;=\;\frac{1}{2}\left(a^{2}\;+\;b^{2}\right)\;+\;a\,b\,\cos( \tau+\delta), \tag{28}\]
where \(\tau\equiv(\omega_{+}-\omega_{-})t^{\prime}=\sqrt{1+4\epsilon}\,t^{\prime}\) and \(\delta\equiv\alpha-\beta\).
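As a quick numerical sanity check (not from the paper), the normal-mode solution (26)-(27) can be evaluated directly and tested against the invariants (21)-(22); the amplitudes and phases below are arbitrary choices.

```python
import numpy as np

eps = 0.5
wp = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * eps))   # omega_+, Eq. (25)
wm = 0.5 * (1.0 - np.sqrt(1.0 + 4.0 * eps))   # omega_-
a, alpha, b, beta = 0.3, 1.1, 0.9, 0.2        # arbitrary constants (a, alpha; b, beta)

tp = np.linspace(0.0, 50.0, 5001)
A, B = wp * tp + alpha, wm * tp + beta
x, y = a * np.cos(A) + b * np.cos(B), -a * np.sin(A) - b * np.sin(B)   # Eqs. (26)-(27)
xp = -a * wp * np.sin(A) - b * wm * np.sin(B)
yp = -a * wp * np.cos(A) - b * wm * np.cos(B)

# The canonical angular momentum (21) and the energy |x'|^2 + eps |x|^2 of Eq. (22)
# are both constant in time (their numerical spread is at round-off level).
p_theta = x * yp - y * xp + 0.5 * (x**2 + y**2)
energy = xp**2 + yp**2 + eps * (x**2 + y**2)
print("spread of p_theta:", np.ptp(p_theta), " spread of energy:", np.ptp(energy))
```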
### Integral orbital solution
We now consider the integral orbital solution (12), where the effective potential
\[U(\chi)\;=\;\frac{(1-\chi)^{2}}{2\chi}\;+\;2\epsilon\,\chi \tag{29}\]
has a minimum \(U(\chi_{0})=1/\chi_{0}-1\) at
\[\chi_{0}\;\equiv\;1/\sqrt{1+4\epsilon}. \tag{30}\]
Hence, a real orbital radial solution exists for \(\overline{\cal E}\geq U(\chi_{0})\) and the radicand in Eq. (12) can be expressed as
\[2s\left[\overline{\cal E}-U(s)\right]\;=\;(1+4\epsilon)\left[\chi_{0}^{2}\tan^{2}\phi\;-\;(s-\chi_{0}\sec\phi)^{2}\right],\]
where we defined
\[(1+\overline{\cal E})\chi_{0}\;\equiv\;\sec\phi\;\geq\;1, \tag{31}\]
i.e., the radial motion is periodic when the energy is above the minimum of \(U(\chi)\), with \(0\leq\phi<\pi/2\). The orbital solution is, therefore, expressed as
\[\chi(t^{\prime})\;\equiv\;\chi_{0}\left(\sec\phi\;-\;\tan\phi\;\cos\tau\right), \tag{32}\]
where \(\tau=t^{\prime}/\chi_{0}=\sqrt{1+4\epsilon}\,t^{\prime}\) and \(\chi(0)\) is chosen to be at the lower turning point: \(\chi(0)=\chi_{0}\left(\sec\phi-\tan\phi\right)\). By comparing this solution with the normal-mode solution (28), we obtain \(\delta=\pi\), with \(a^{2}+b^{2}=2\chi_{0}\,\sec\phi\) and \(a\,b=\chi_{0}\,\tan\phi\), from which we obtain
\[a(\epsilon,\phi) = b(\epsilon,\phi)\,\tan(\phi/2), \tag{33}\] \[b(\epsilon,\phi) = \sqrt{\chi_{0}(\epsilon)\left(1+\sec\phi\right)},\]
so that Eqs. (22)-(21) yield
\[\overline{p}_{\theta} = a^{2}\left(\frac{1}{2}-\omega_{+}\right)\;+\;b^{2}\left(\frac{ 1}{2}-\omega_{-}\right)\;=\;1,\] \[\overline{\cal E} = a^{2}\,\left(\omega_{+}^{2}\;+\;\epsilon\right)\;+\;b^{2}\, \left(\omega_{-}^{2}\;+\;\epsilon\right)\;=\;\sec\phi/\chi_{0}-1,\]
which follow from Eqs. (4) and (31), respectively.
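These relations are easy to verify numerically; the short sketch below (not part of the paper) evaluates the amplitudes of Eq. (33) for the parameters quoted for Fig. 2 and checks that they reproduce \(\overline{p}_{\theta}=1\) and \(\overline{\cal E}=\sec\phi/\chi_{0}-1\).

```python
import numpy as np

eps, phi = 6.0, np.pi / 4                       # values quoted for Fig. 2
chi0 = 1.0 / np.sqrt(1.0 + 4.0 * eps)           # Eq. (30)
wp, wm = 0.5 * (1.0 + 1.0 / chi0), 0.5 * (1.0 - 1.0 / chi0)   # Eq. (25)

b = np.sqrt(chi0 * (1.0 + 1.0 / np.cos(phi)))   # Eq. (33)
a = b * np.tan(phi / 2.0)

p_theta = a**2 * (0.5 - wp) + b**2 * (0.5 - wm)         # should equal 1
E_bar = a**2 * (wp**2 + eps) + b**2 * (wm**2 + eps)     # should equal sec(phi)/chi0 - 1
print(p_theta, E_bar, 1.0 / (chi0 * np.cos(phi)) - 1.0)
```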
Using the orbital solution (32), the solution for the azimuthal angle is obtained from Eq. (13) as
\[\theta(t^{\prime}) = -\,\frac{t^{\prime}}{2}\;+\;\frac{1}{2}\int_{0}^{\tau}\frac{dz}{ \sec\phi-\tan\phi\;\cos z} \tag{34}\] \[\equiv -\,\frac{t^{\prime}}{2}\;+\;\vartheta(\tau,\phi),\]
where
\[\vartheta(\tau,\phi) = \arctan\left[i\left(\sec\phi\;-\;\tan\phi\;e^{i\tau}\right)\right] \tag{35}\] \[-\;\arctan\left[i\left(\sec\phi\;-\;\tan\phi\right)\right]\] \[\equiv -\,\frac{i}{2}\ln\left(\frac{1-e^{i\tau}\,\cot(\phi/2)}{e^{i\tau }-\cot(\phi/2)}\right),\]
which vanishes at \(\tau=0\) and, as expected from Eq. (34), \(\vartheta(\tau,\phi)\rightarrow\tau/2\) as \(\phi\to 0\), i.e., \(\theta(t^{\prime})\rightarrow-\,\omega_{-}\,t^{\prime}\).
From the radial solution (32), we find that the radial period is \(T=2\pi\,\chi_{0}\) and the azimuthal angular deviation between successive radial maxima (or minima) is obtained from Eq. (34) as \(\Delta\theta\equiv\theta(T)-\theta(0)=\pi\,(1-\chi_{0})\), which implies that the planar curve \((x(t^{\prime}),y(t^{\prime}))\) closes upon itself only if \(\chi_{0}\) is a rational number. We also note that the planar curve initiates retrograde motion near the upper radial turning point \(\chi_{0}\left(\sec\phi+\tan\phi\right)\) when \(\phi>\arcsin[(1-\chi_{0}^{2})/(1+\chi_{0}^{2})]=\arcsin[2\epsilon/(1+2 \epsilon)]\).
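Both statements can be checked by direct quadrature of Eq. (34); the sketch below (not from the paper) evaluates the azimuthal advance over one radial period for assumed values of \(\epsilon\) and \(\phi\) (those of Fig. 1) and compares it with \(\pi\,(1-\chi_{0})\).

```python
import numpy as np
from scipy.integrate import quad

eps, phi = 0.5, np.pi / 3                       # illustrative values (those of Fig. 1)
chi0 = 1.0 / np.sqrt(1.0 + 4.0 * eps)

# theta(T) - theta(0) = -T/2 + (1/2) int_0^{2 pi} dz / (sec(phi) - tan(phi) cos z), Eq. (34)
integral, _ = quad(lambda z: 1.0 / (1.0 / np.cos(phi) - np.tan(phi) * np.cos(z)),
                   0.0, 2.0 * np.pi)
T = 2.0 * np.pi * chi0                          # radial period in normalized time t'
delta_theta = -0.5 * T + 0.5 * integral
print(delta_theta, np.pi * (1.0 - chi0))        # the two values agree
```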
Figures 1-2 show two cases parametrized by different values of \((\epsilon,\phi)\). In Fig. 1, the value \(\epsilon=1/2\) causes \(\chi_{0}=1/\sqrt{3}\) to be irrational and the planar curve \((x(t^{\prime}),y(t^{\prime}))\) does not close upon itself. The planar curve also exhibits retrograde motion since \(\phi=\pi/3>\arcsin(1/2)=\pi/6\). In Fig. 2, on the other hand, the value \(\epsilon=6\) is chosen so that \(\chi_{0}=1/5\) is rational and, therefore, the planar curve \((x(t^{\prime}),y(t^{\prime}))\) closes upon itself (after 5 radial cycles). Since \(\phi=\pi/4<\arcsin(12/13)\simeq 3\pi/8\), however, the planar curve does not exhibit retrograde motion.
When expressed in terms of Cartesian coordinates, the orbital solution is expressed as
\[x(t^{\prime})\;=\;\sqrt{2\,\chi(t^{\prime})}\;\cos\theta(t^{\prime})\;=\;b(\epsilon,\phi)\left[\cos(\omega_{-}t^{\prime})\;-\;\tan(\phi/2)\;\cos(\omega_{+}t^{\prime})\right], \tag{36}\] \[y(t^{\prime})\;=\;\sqrt{2\,\chi(t^{\prime})}\;\sin\theta(t^{\prime})\;=\;b(\epsilon,\phi)\left[\tan(\phi/2)\;\sin(\omega_{+}t^{\prime})\;-\;\sin(\omega_{-}t^{\prime})\right], \tag{37}\]
where we selected the phases \(\alpha=\pi\) and \(\beta=0\). We note that, when the orbital solution (32)-(34) is evaluated at the minimum (\(\phi=0\)) of the effective potential \(U(\chi)\), we find a circular solution, with a constant radius \(b=\sqrt{2\chi_{0}}\) (with \(a=0\)), and \(\theta(t^{\prime})=-\,\omega_{-}\,t^{\prime}\).
This completes our analysis of the charged-particle motion in a uniform magnetic field \(B_{0}\,\widehat{\bf z}=\nabla\psi\times\nabla\theta\) with a linear radial electric field \({\bf E}=-\,\nabla\Phi=-\,\Phi_{0}^{\prime}\,\nabla\psi\) with constant \(E\times B\) parallel vorticity.
## IV Guiding-center analysis for a nonlinear radial electric field
In this Section, we proceed with the guiding-center analysis of a general radial electric field \({\bf E}=-\,\nabla\Phi(\psi)\) with the dimensionless parameter (9) considered in the limit \(\epsilon\ll 1\). The purpose of the guiding-center analysis is to derive a reduced dynamical description in which the fast gyromotion has been transformed away (not averaged!).
The Hamiltonian guiding-center theory of charged-particle motion in the presence of electric and magnetic fields was presented in Refs. [5; 6], and was recently summarized in Ref. [10], for the case of a nonuniform magnetic field. Here, we apply the same perturbation analysis for the simpler case of a uniform magnetic field.
### Particle Lagrangian in a drifting frame
The guiding-center analysis begins by shifting the lab reference frame to a reference frame drifting with the \(E\times B\) velocity
\[{\bf u} = \frac{c\widehat{\bf z}}{B_{0}}\cross\nabla\Phi\;=\;c\,\Phi^{\prime}(\psi)\;\frac{\partial{\bf x}}{\partial\theta}\;=\;\epsilon\,\Omega_{0}\,\nu(\psi)\;\frac{\partial{\bf x}}{\partial\theta} \tag{38}\] \[= \frac{2c}{B_{0}}\,\Phi^{\prime}(\psi)\,\psi\;\nabla\theta\;=\;\frac{\Omega_{0}}{B_{0}}\;\epsilon\,\Psi_{1}(\psi)\;\nabla\theta,\]
which is directed along the azimuthal direction, with \(E\times B\) parallel vorticity
\[\widehat{\bf z}\!\cdot\!\nabla\cross{\bf u}\;=\;\frac{c}{B_{0}}\,\nabla^{2}\Phi\;=\;\epsilon\,\Omega_{0}\,\Psi_{1}^{\prime}(\psi). \tag{39}\]
Here, the first-order correction \(\Psi_{1}(\psi)\) is defined as
\[\Psi_{1}(\psi)\;\equiv\;\psi\left(2\,\frac{c\Phi^{\prime}(\psi)}{\epsilon\, \Omega_{0}}\right)\;=\;2\;\psi\,\nu(\psi), \tag{40}\]
so that the phase-space position of a charged particle is transformed as \(({\bf x},{\bf v})\rightarrow({\bf x},{\bf w})\), where \({\bf w}\equiv{\bf v}-{\bf u}\) denotes the relative particle velocity in the drifting frame.
Hence, the shifted particle Lagrangian becomes
\[L_{E} = \left[\frac{q}{c}\,\psi\,\nabla\theta\ +\ m\,({\bf w}+{\bf u}) \right]\!\cdot\!\dot{\bf x} \tag{41}\] \[-\,\left(q\,\Phi\ +\ \frac{m}{2}|{\bf w}+{\bf u}|^{2}\right),\]
where \(|{\bf w}+{\bf u}|^{2}=|{\bf w}|^{2}+|{\bf u}|^{2}+2\,{\bf w}\!\cdot\!{\bf u}\). We note that we restrict our analysis to two-dimensional motion in the \((x,y)\)-plane, where \({\bf w}\) is expressed in particle space as
\[{\bf w}\;=\;\dot{\bf x}\;-\;{\bf u}\;=\;\dot{\psi}\,\frac{\partial{\bf x}}{ \partial\psi}\;+\;\left[\dot{\theta}\;-\;c\,\Phi^{\prime}(\psi)\right]\frac{ \partial{\bf x}}{\partial\theta}. \tag{42}\]
Here, the magnitude of \({\bf w}\) depends on the lowest-order magnetic moment \(\mu_{0}\): \(w=|{\bf w}|=\sqrt{2\,\mu_{0}B_{0}/m}\), while the unit vector \(\widehat{\perp}\equiv{\bf w}/w\) depends on the spatial coordinates \((\psi,\theta)\) as well as the lowest-order gyroangle \(\zeta_{0}\).
We now consider a guiding-center transformation \((\psi,\theta;{\bf w})\rightarrow(\Psi,\Theta;\mu,\zeta)\), where \((\Psi,\Theta)\) denote the guiding-center coordinates, \(\mu\) denotes the guiding-center magnetic moment, and \(\zeta\) denotes the guiding-center gyroangle that is canonically conjugate to the guiding-center gyroaction \(J=\mu\,B_{0}/\Omega_{0}\). The analysis begins with renormalizing the mass of the particle as \(m\rightarrow\epsilon\,m\) (which is analogous to performing an expansion in \(1/\Omega_{0}\)), so that the shifted particle Lagrangian is expressed as \(L_{E}=L_{E0}+\epsilon\,L_{E1}\), where the lowest-order particle Lagrangian is
\[L_{E0}\;=\;(q/c)\psi\,\dot{\theta}\;-\;q\,\Phi(\psi), \tag{43}\]
while the first-order particle Lagrangian is
\[L_{E1} = m\,\left[{\bf w}\!\cdot\left(\frac{\partial{\bf x}}{\partial \psi}\,\dot{\psi}+\frac{\partial{\bf x}}{\partial\theta}\,\dot{\theta}\right) \;-\;{\bf w}\!\cdot\!c\,\Phi^{\prime}(\psi)\frac{\partial{\bf x}}{\partial \theta}\right] \tag{44}\] \[+\;\frac{q}{c}\,\Psi_{1}(\psi)\,\dot{\theta}\;-\;\left(q\,\Phi_{1 }(\psi)\;+\;\mu_{0}\,B_{0}\right),\]
which explicitly displays the gyroangle-dependent relative velocity \({\bf w}\), and
\[\frac{1}{2}m|{\bf u}|^{2}\;=\;q\,\psi\,\Phi^{\prime}\;\left(\frac{c\Phi^{ \prime}}{\Omega_{0}}\right)\;\equiv\;\epsilon\,q\,\Phi_{1}(\psi) \tag{45}\]
introduces the first-order correction \(\Phi_{1}(\psi)\) to the electrostatic potential \(\Phi(\psi)\). We note that the gradient of \(\Phi_{1}\) introduces centrifugal effects in the guiding-center dynamics of a charged particle [3].
### Guiding-center dynamics in a drifting frame
The purpose of the guiding-center transformation is to remove the linear contributions from the gyroangle-dependent relative velocity \({\bf w}\) from Eq. (44). As a result of this transformation, the shifted guiding-center Lagrangian is generically expressed as
\[L_{E\rm gc}\;\equiv\;\frac{q}{c}\,\Psi^{*}\dot{\Theta}\;+\;J\left(\dot{\zeta }\;-\;\dot{\Theta}\right)-\left(q\,\Phi^{*}+\mu\;B^{*}\right), \tag{46}\]
where \((\Psi^{*},\Phi^{*},B^{*})\) are functions of \(\Psi\) that will be derived after the guiding-center transformation is defined (see Sec. IV.4). We note that the terms \(J\left(\dot{\zeta}-\dot{\Theta}\right)\) appear in order to satisfy gyrogauge invariance, with the gyrogauge vector \(\mathbf{\cal R}\equiv\nabla\mathbf{\hat{\bf e}}_{1}\!\cdot \!\mathbf{\hat{\bf e}}_{2}=\nabla\Theta\) calculated from cylindrical geometry, so that \(\mathbf{\cal R}\!\cdot\!\dot{\bf X}=\dot{\Theta}\).
The guiding-center equation of motion for the two-dimensional guiding-center position \({\bf X}\) is obtained from the guiding-center Lagrangian (46) as
\[\dot{\bf X}\;=\;\frac{c\,\widehat{\bf z}}{qB_{\parallel}^{*}}\;\boldsymbol{\times}\;(q\,\nabla\Phi^{*}\;+\;\mu\;\nabla B^{*})\,, \tag{47}\]
where
\[B_{\parallel}^{*}\equiv\mathbf{\hat{\bf z}}\!\cdot\!{\bf B}^{*}\;=\; \mathbf{\hat{\bf z}}\!\cdot\!\nabla\Psi^{*}\;\mathbf{\times} \;\nabla\Theta\;=\;B_{0}\;d\Psi^{*}/d\Psi, \tag{48}\]
while the equation for the guiding-center gyroangle \(\zeta\) is expressed as
\[\dot{\zeta}\;=\;\Omega_{0}\;B^{*}/B_{0}\;+\;\mathbf{\cal R}\!\cdot\! \dot{\bf X}. \tag{49}\]
From Noether's Theorem [14], we easily conclude that \(\Psi\) and \(\mu\) are guiding-center constants of motion since the guiding-center Lagrangian (46) is independent of the angles \(\Theta\) and \(\zeta\).
When considering the guiding-center motion in physical space, we find the Cartesian representation for a circle: \(X(t)=R\,\cos\Theta(t)\) and \(Y(t)=R\,\sin\Theta(t)\), with radius \(R\equiv\sqrt{2\Psi/B_{0}}\). We also immediately find that the guiding-center energy
\[{\cal E}_{\rm gc}\;=\;q\,\Phi^{*}(\Psi)\;+\;\mu\;B^{*}(\Psi) \tag{50}\]
and the guiding-center azimuthal canonical angular momentum
\[P_{\Theta\rm gc}\;\equiv\;\partial L_{E\rm gc}/\partial\dot{\Theta}\;=\;(q/c) \,\Psi^{*}(\Psi)-J \tag{51}\]
are guiding-center constants of motion.
We will now construct explicit expressions for \((\Psi^{*},\Phi^{*},B^{*})\) as functions of \(\Psi\), represented as expansions in powers of \(\epsilon\), once again interpreted through the mass renormalization \(m\rightarrow\epsilon\,m\).
### Guiding-center transformation
The derivation of the guiding-center transformation that leads from the particle Lagrangian (41) to the guiding-center Lagrangian (46) begins with the separation of a generic Lagrangian \(L=p_{\alpha}\,\dot{z}^{\alpha}-H\) into a symplectic part \(p_{\alpha}\dot{z}^{\alpha}\), which is then converted into the symplectic one-form \(\gamma=p_{\alpha}\,\mathsf{d}z^{\alpha}\) (where \(\mathsf{d}\) denotes an exterior derivative), and a Hamiltonian part \(H\).
Next, we construct the guiding-center transformation as an asymptotic expansion in powers of \(\epsilon\) for each guiding-center phase-space coordinate \(Z^{\alpha}=(\Psi,\Theta,\mu,\zeta)\) in terms of the particle phase-space coordinates \(z^{\alpha}=(\psi,\theta,\mu_{0},\zeta_{0})\):
\[Z^{\alpha}\;=\;z^{\alpha}+\epsilon\,G_{1}^{\alpha}+\epsilon^{2}\left(G_{2}^{ \alpha}\;+\;\frac{1}{2}\;G_{1}^{\beta}\frac{\partial G_{1}^{\alpha}}{\partial z ^{\beta}}\right)+\cdots, \tag{52}\]
where the components \((G_{n}^{\psi},G_{n}^{\theta},G_{n}^{\mu},G_{n}^{\zeta})\) are chosen at \(n\)th order in order to derive an \(n\)th-order guiding-center
Lagrangian that is independent of the guiding-center gyroangle. Once these components are derived, we return the particle mass to its physical value \(\epsilon m\to m\).
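As a toy illustration of the structure of Eq. (52) (not part of the paper's derivation), the following symbolic sketch checks, for a single coordinate \(z\) with generic generator functions, that composing the exponentials of the first- and second-order generating vector fields reproduces the expansion (52) through second order in \(\epsilon\).

```python
import sympy as sp

z, eps = sp.symbols('z epsilon')
G1 = sp.Function('G_1')(z)
G2 = sp.Function('G_2')(z)

def lie_push(f, G, power, nmax=2):
    # Apply exp(eps**power * G * d/dz) to f(z), keeping nmax terms of the exponential series.
    result, term = f, f
    for n in range(1, nmax + 1):
        term = eps**power * G * sp.diff(term, z) / n
        result = result + term
    return sp.expand(result)

Z = lie_push(lie_push(z, G1, 1), G2, 2)   # composition of the two Lie transforms
expected = z + eps * G1 + eps**2 * (G2 + sp.Rational(1, 2) * G1 * sp.diff(G1, z))

diff = sp.expand(Z - expected)
# All coefficients through second order in eps vanish, as in Eq. (52).
print([sp.simplify(diff.coeff(eps, n)) for n in range(3)])
```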
Using the standard methods of Lie-transform perturbation theory [16], the new symplectic one-form
\[\mathsf{T}_{\rm gc}^{-1}\gamma+\mathsf{d}S\;\equiv\;P_{\alpha}(\Psi,\Theta;\mu )\,\mathsf{d}Z^{\alpha} \tag{53}\]
where \(S\) is an arbitrary gauge function, and the new Hamiltonian
\[\mathsf{T}_{\rm gc}^{-1}H\;\equiv\;H_{\rm gc}(\Psi,\Theta;\mu) \tag{54}\]
are obtained at each order in \(\epsilon\), where the guiding-center push-forward operator
\[\mathsf{T}_{\rm gc}^{-1}\;=\;\cdots\exp(-\epsilon^{2}\pounds_{2})\,\exp(- \epsilon\pounds_{1})\]
is expressed in terms of Lie derivatives \(\pounds_{n}\) generated by the vector field \(\mathsf{G}_{n}\), which are then used in the guiding-center transformation (52).
Using the ordering (9), the phase-space Lagrangian symplectic one-form \(\gamma_{E}=\gamma_{E0}+\epsilon\,\gamma_{E1}\) is expressed as
\[\gamma_{E0} = (q/c)\,\psi\,\mathsf{d}\theta, \tag{55}\] \[\gamma_{E1} = \frac{q}{c}\,\Psi_{1}(\psi)\;\mathsf{d}\theta\;+\;m\mathbf{w} \boldsymbol{\cdot}\left(\frac{\partial\mathbf{x}}{\partial\psi}\,\mathsf{d} \psi+\frac{\partial\mathbf{x}}{\partial\theta}\,\mathsf{d}\theta\right), \tag{56}\]
where \(\Psi_{1}(\psi)\) is defined in Eq. (40), while the zeroth and first-order Hamiltonians, on the other hand, are
\[H_{E0} = q\,\Phi(\psi), \tag{57}\] \[H_{E1} = q\,\Phi_{1}(\psi)\;+\;\mu_{0}\,B_{0}\;+\;m\,\mathbf{w} \boldsymbol{\cdot}c\,\Phi^{\prime}(\psi)\frac{\partial\mathbf{x}}{\partial \theta}, \tag{58}\]
where \(\Phi_{1}(\psi)\) is defined in Eq. (45).
#### iii.2.1 Zeroth-order analysis
By definition, the zeroth-order guiding-center symplectic one-form is
\[\Gamma_{E{\rm gc}0}\;\equiv\;(q/c)\,\Psi\,\mathsf{d}\Theta, \tag{59}\]
where \((\Psi,\Theta)\) denotes the guiding-center position. The zeroth-order guiding-center Hamiltonian, on the other hand, is
\[H_{E{\rm gc}0}\;\equiv\;q\,\Phi(\Psi), \tag{60}\]
so that the zeroth-order guiding-center Lagrangian is
\[L_{E{\rm gc}0}\;=\;(q/c)\Psi\,\dot{\Theta}\;-\;q\,\Phi(\Psi), \tag{61}\]
which yields the zeroth-order equation of motion \(\dot{\Theta}=c\,\Phi^{\prime}(\Psi)\), while the azimuthal canonical angular momentum conservation law \((q/c)\,\dot{\Psi}=\partial L_{E{\rm gc}0}/\partial\Theta=0\) implies that \(\Psi\) is conserved at the lowest order.
#### iii.2.2 First-order analysis
Next, the first-order guiding-center symplectic one-form is constructed as
\[\Gamma_{E{\rm gc}1} = \frac{q}{c}\Psi_{1}\,\mathsf{d}\theta\;+\;m\mathbf{w} \boldsymbol{\cdot}\left(\frac{\partial\mathbf{x}}{\partial\psi}\,\mathsf{d} \psi+\frac{\partial\mathbf{x}}{\partial\theta}\,\mathsf{d}\theta\right) \tag{62}\] \[-\;\frac{q}{c}\left(G_{1}^{\psi}\,\mathsf{d}\theta\;-\;G_{1}^{ \theta}\,\mathsf{d}\psi\right)\] \[\equiv \frac{q}{c}\Psi_{1}(\Psi)\;\mathsf{d}\Theta,\]
where \(S_{1}=0\) at this order, and the gyroangle-dependent relative velocity \(\mathbf{w}\) is removed by choosing the spatial components
\[G_{1}^{\psi} = (B_{0}/\Omega_{0})\,\mathbf{w}\boldsymbol{\cdot}\partial\mathbf{ x}/\partial\theta, \tag{63}\] \[G_{1}^{\theta} = -\,(B_{0}/\Omega_{0})\,\mathbf{w}\boldsymbol{\cdot}\partial \mathbf{x}/\partial\psi, \tag{64}\]
which yields the standard result [6]
\[G_{1}^{\mathbf{x}}\;=\;\mathbf{w}\boldsymbol{\times}\,\frac{\widehat{\mathbf{z}}}{\Omega_{0}}\;=\;\frac{1}{\Omega_{0}}\,\frac{\partial\mathbf{w}}{\partial\zeta}\;\equiv\;-\,\boldsymbol{\rho}, \tag{65}\]
where the relative velocity \(\mathbf{w}\equiv\Omega_{0}\,\partial\boldsymbol{\rho}/\partial\zeta_{0}\) is defined in Eq. (42). We note that by returning the particle mass to its physical value \(\epsilon m\to m\), the components (63)-(64) are, in fact, zeroth-order in \(\epsilon\) and, therefore, we will need to derive the components \((G_{2}^{\psi},G_{2}^{\theta})\) at second order.
The first-order guiding-center Hamiltonian is constructed as
\[H_{E{\rm gc}1} = q\,\Phi_{1}\;+\;\mu_{0}\,B_{0}\;+\;\frac{B_{0}}{\Omega_{0}}\,q\, \Phi^{\prime}\;\mathbf{w}\boldsymbol{\cdot}\,\frac{\partial\mathbf{x}}{ \partial\theta}\;-\;q\,\Phi^{\prime}\;G_{1}^{\psi} \tag{66}\] \[= q\,\Phi_{1}(\Psi)\;+\;\mu\,B_{0},\]
where we used Eq. (63) to cancel the gyroangle-dependent relative velocity \(\mathbf{w}\). Hence, the first-order guiding-center Lagrangian is
\[L_{E{\rm gc}1}\;=\;\frac{q}{c}\,\Psi_{1}(\Psi)\,\dot{\Theta}\;-\;\left(q\,\Phi _{1}(\Psi)\;+\;\mu\,B_{0}\right), \tag{67}\]
which preserves the conservation law of \(\Psi\) of the zeroth-order guiding-center Lagrangian.
#### iii.2.3 Second-order analysis
At second order, the second-order guiding-center symplectic one-form is constructed as
\[\Gamma_{\rm Egc2} = -\,\frac{q}{c}\left[\left(G_{2}^{\psi}\,{\rm d}\theta-G_{2}^{\theta}\,{\rm d}\psi\right)\;+\;\Psi_{1}^{\prime}\left(G_{1}^{\psi}\,{\rm d}\theta-G_{1}^{\theta}\,{\rm d}\psi\right)\right]\;-\;\frac{m}{2}\left(G_{1}^{\mu}\,\frac{\partial{\bf w}}{\partial\mu_{0}}+G_{1}^{\zeta}\,\frac{\partial{\bf w}}{\partial\zeta_{0}}\right)\,\cdot\,\left(\frac{\partial{\bf x}}{\partial\psi}\,{\rm d}\psi+\frac{\partial{\bf x}}{\partial\theta}\,{\rm d}\theta\right) \tag{68}\] \[+\;\frac{m}{2}\;G_{1}^{\bf x}\,\cdot\,\left[\left(\frac{\partial{\bf w}}{\partial\mu_{0}}\,{\rm d}\mu_{0}+\frac{\partial{\bf w}}{\partial\zeta_{0}}\,{\rm d}\zeta_{0}\right)\;-\;{\rm d}{\bf x}\times{\bf\nabla}\times{\bf w}\right]\;\equiv\;J\,\left({\rm d}\zeta\;-\;\boldsymbol{\mathcal{R}}\,\boldsymbol{\cdot}\,{\rm d}{\bf X}\right),\]
where \(S_{2}=0\) at this order, \(J\equiv\mu\,B_{0}/\Omega_{0}\) is the guiding-center gyroaction, with its canonically-conjugate guiding-center gyroangle \(\zeta\), and the gyrogauge vector \({\mathbf{\cal R}}\equiv\nabla\widehat{\bf e}_{1}\,{\cdot}\,\widehat{ \bf e}_{2}=\nabla\Theta\) is calculated from cylindrical geometry (with \(\widehat{\bf e}_{1}=\widehat{r}\) and \(\widehat{\bf e}_{2}=\widehat{\theta}=\widehat{\bf z}\times\widehat{\bf e}_{1}\)). Here, we use the identity
\[\nabla\times{\bf w}\;=\;\frac{\partial{\bf w}}{\partial\zeta_{0}}\,\mathbf{\times}\,{\mathbf{\cal R}}, \tag{69}\]
which follows from the alternate definition \({\mathbf{\cal R}}=\nabla\,\widehat{\perp}\,{\cdot}\,\widehat{\rho}\), where \({\bf w}\equiv w\,\widehat{\perp}\) (\(w\) is constant in a uniform magnetic field) and \(\widehat{\bf z}=\widehat{\perp}\times\widehat{\rho}\), so that
\[\frac{m}{2}\left({\mathbf{\rho}}\times{\bf\nabla}\times{\bf w}\right) \,{\cdot}\,{\rm d}{\bf x}=J_{0}\,{\mathbf{\cal R}}\,{\cdot}\,{\rm d }{\bf x}+\frac{m}{2}\left({\mathbf{\rho}}\cdot{\mathbf{\cal R }}\right)\frac{\partial{\bf w}}{\partial\zeta}\,{\cdot}\,{\rm d}{\bf x}.\]
Hence, Eq. (68) yields the second-order spatial components
\[G_{2}^{\psi} = -\,\Psi_{1}^{\prime}\;G_{1}^{\psi}-\frac{B_{0}}{2\Omega_{0}} \left(G_{1}^{\mu}\,\frac{\partial{\bf w}}{\partial\mu_{0}}+g_{1}^{\zeta}\frac{ \partial{\bf w}}{\partial\zeta_{0}}\right){\cdot}\frac{\partial{\bf x}}{ \partial\theta}, \tag{70}\] \[G_{2}^{\theta} = -\,\Psi_{1}^{\prime}\;G_{1}^{\theta}+\frac{B_{0}}{2\Omega_{0}} \left(G_{1}^{\mu}\,\frac{\partial{\bf w}}{\partial\mu_{0}}+g_{1}^{\zeta}\frac{ \partial{\bf w}}{\partial\zeta_{0}}\right){\cdot}\frac{\partial{\bf x}}{ \partial\psi}, \tag{71}\]
where \(g_{1}^{\zeta}\equiv G_{1}^{\zeta}+{\mathbf{\rho}}\cdot{\mathbf{\cal R}}\). The second-order spatial vector field is, therefore, expressed as
\[G_{2}^{\bf x}\;=\;\Psi_{1}^{\prime}\,{\mathbf{\rho}}+\frac{1}{2} \left(G_{1}^{\mu}\,\frac{\partial{\mathbf{\rho}}}{\partial\mu_{0}}+g _{1}^{\zeta}\frac{\partial{\mathbf{\rho}}}{\partial\zeta_{0}}\right), \tag{72}\]
where we substituted Eq. (65).
We now turn our attention to the second-order guiding-center Hamiltonian, which is constructed as
\[H_{\rm Egc2} = -\,q\left(\Phi^{\prime}G_{2}^{\psi}+\Phi_{1}^{\prime}G_{1}^{ \psi}\right)-B_{0}\,G_{1}^{\mu} \tag{73}\] \[+\,\frac{m}{2}\left[({\bf u}{\mathbf{\rho}}):\nabla{\bf w }\;+\;({\bf w}{\mathbf{\rho}}):\nabla{\bf u}\right]\] \[-\,\frac{m}{2}\left(G_{1}^{\mu}\frac{\partial{\bf w}}{\partial \mu_{0}}+G_{1}^{\zeta}\frac{\partial{\bf w}}{\partial\zeta_{0}}\right)\,{\cdot} \,{\bf u}.\]
First, we note that
\[\frac{m}{2}\;({\bf u}{\mathbf{\rho}}):\nabla{\bf w}\;=\;-\,\frac{m{ \bf u}}{2}{\cdot}\,\frac{\partial{\bf w}}{\partial\zeta_{0}}\left({\mathbf{\rho}}\,{\cdot}\,{\mathbf{\cal R}}\right), \tag{74}\]
while
\[\frac{m}{2}\;({\bf w}{\mathbf{\rho}}):\nabla{\bf u}\;=\;-\,J_{0}{ \mathfrak{a}}_{1}:\nabla{\bf u}\;-\;\frac{1}{2}\,J_{0}\widehat{\bf z}\,{\cdot} \,\nabla\times{\bf u}, \tag{75}\]
where the dyadic tensor \({\mathfrak{a}}_{1}\equiv-\frac{1}{2}\left(\widehat{\perp}\widehat{\rho}+ \widehat{\rho}\widehat{\perp}\right)\) is explicitly gyroangle-dependent. Hence, inserting these expressions into Eq. (73), while using Eq. (70), we obtain
\[H_{\rm Egc2} = q\left(\Phi^{\prime}\Psi_{1}^{\prime}\;-\;\Phi_{1}^{\prime}\right) G_{1}^{\psi}\;-\;B_{0}\,G_{1}^{\mu} \tag{76}\] \[-\;J_{0}\left({\mathfrak{a}}_{1}:\nabla{\bf u}\;+\;\frac{\widehat{ \bf z}}{2}{\cdot}\,\nabla\times{\bf u}\right),\]
where the terms \(G_{1}^{\psi}\) and \({\mathfrak{a}}_{1}\) are explicitly gyroangle-dependent and must be removed from the guiding-center Hamiltonian. The second-order guiding-center Hamiltonian is, therefore, defined as
\[H_{\rm Egc2}\;\equiv\;-\,B_{0}\;\left\langle G_{1}^{\mu}\right\rangle\;-\;J\; \frac{\widehat{\bf z}}{2}{\cdot}\,\nabla\times{\bf u}, \tag{77}\]
where the gyroangle-dependent part of \(G_{1}^{\mu}\) is defined as
\[\widetilde{G}_{1}^{\mu}\;=\;\frac{q}{B_{0}}\left(\Phi^{\prime}\,\Psi_{1}^{ \prime}\;-\;\Phi_{1}^{\prime}\right)G_{1}^{\psi}\;-\;\frac{J_{0}}{B_{0}}\;{ \mathfrak{a}}_{1}:\nabla{\bf u}. \tag{78}\]
The remaining first-order components \(\left\langle G_{1}^{\mu}\right\rangle\) and \(G_{1}^{\zeta}\) must now be determined at third order.
By combining the symplectic structure (68) and the Hamiltonian (77), the second-order guiding-center Lagrangian is expressed as
\[L_{\rm Egc2} = J\,\left(\dot{\zeta}-\dot{\Theta}\right)+B_{0}\;\left\langle G_{1}^{ \mu}\right\rangle+J\;\frac{\widehat{\bf z}}{2}{\cdot}\,\mathbf{ \nabla\times}\,{\bf u}, \tag{79}\]
which now introduces the gyromotion dynamics.
#### iii.2.4 Third-order analysis
Because of the smallness of the ordering parameter \(\epsilon\), there is no interest (at this time) in deriving third-order corrections to the guiding-center Lagrangian. The missing first-order components (\(\left\langle G_{1}^{\mu}\right\rangle\), \(G_{1}^{\zeta}\)), however, are determined at third order in the guiding-center analysis from the identities [5]
\[G_{1}^{\mu} = -\;\mu_{0}(\widehat{\bf z}/\Omega_{0})\,{\cdot}\,\nabla\times{\bf u} \;+\;(\Omega_{0}/B_{0})\;\partial\overline{S}_{3}/\partial\zeta_{0}, \tag{80}\] \[G_{1}^{\zeta} = -\;(\Omega_{0}/B_{0})\;\partial S_{3}/\partial\mu_{0}, \tag{81}\]
where the third-order scalar functions \((S_{3},\overline{S}_{3})\) are explicitly gyroangle-dependent, with
\[\overline{S}_{3}\;\equiv\;S_{3}\;-\;\frac{2}{3}\;\mu_{0}\left(B_{0}/\Omega_{0} \right){\mathbf{\rho}}\,{\cdot}\,{\mathbf{\cal R}}. \tag{82}\]
First, by gyroangle averaging both sides of Eq. (80), we immediately find that
\[\left\langle G_{1}^{\mu}\right\rangle\;\equiv\;-\;\mu\;(\widehat{\mathsf{z}}/ \Omega_{0})\,\boldsymbol{\cdot}\,\nabla\,\boldsymbol{\times}\,\mathbf{u}\;=\;- \;\mu\;\epsilon\,\Psi_{1}^{\prime}, \tag{83}\]
and the second-order guiding-center Hamiltonian (77) becomes
\[H_{E\mathrm{gc}2}\;=\;J\;\frac{\widehat{\mathsf{z}}}{2}\,\boldsymbol{\cdot}\,\nabla\,\boldsymbol{\times}\,\mathbf{u}\;=\;\frac{J}{2}\;\epsilon\,\Omega_{0}\,\Psi_{1}^{\prime}, \tag{84}\]
while, using \(\mathbf{w}=\Omega_{0}\,\partial\boldsymbol{\rho}/\partial\zeta_{0}\), Eq. (78) yields
\[\frac{\partial\overline{S}_{3}}{\partial\zeta_{0}}\;=\;\frac{qB_{0}}{\Omega_{0}}\left(\Phi^{\prime}\,\Psi_{1}^{\prime}\;-\;\Phi_{1}^{\prime}\right)\frac{\partial\boldsymbol{\rho}}{\partial\zeta_{0}}\,\boldsymbol{\cdot}\,\frac{\partial\mathbf{x}}{\partial\theta}\;-\;\frac{J_{0}}{\Omega_{0}}\frac{\partial\mathsf{a}_{2}}{\partial\zeta_{0}}:\nabla\mathbf{u},\]
where \(\mathsf{a}_{1}\equiv\partial\mathsf{a}_{2}/\partial\zeta\) and \(\mathsf{a}_{2}\equiv\frac{1}{4}(\widehat{\perp}\widehat{\perp}-\widehat{ \rho}\widehat{\rho})\), which is solved as
\[\overline{S}_{3}\;=\;\frac{qB_{0}}{\Omega_{0}}\left(\Phi^{\prime}\,\Psi_{1}^{ \prime}\;-\;\Phi_{1}^{\prime}\right)\boldsymbol{\rho}\,\cdot\frac{\partial \mathbf{x}}{\partial\theta}\;-\;\frac{J_{0}}{\Omega_{0}}\;\mathsf{a}_{2}: \nabla\mathbf{u}.\]
We now use Eq. (82) to obtain
\[S_{3} = \frac{qB_{0}}{\Omega_{0}}\left(\Phi^{\prime}\,\Psi_{1}^{\prime}\; -\;\Phi_{1}^{\prime}\right)\boldsymbol{\rho}\,\boldsymbol{\cdot}\,\frac{ \partial\mathbf{x}}{\partial\theta}\;-\;\frac{J_{0}}{\Omega_{0}}\;\mathsf{a}_{ 2}:\nabla\mathbf{u} \tag{85}\] \[+\;\frac{2}{3}\;\mu_{0}\left(B_{0}/\Omega_{0}\right)\boldsymbol{ \rho}\,\boldsymbol{\cdot}\,\boldsymbol{\mathcal{R}},\]
which can be inserted into Eq. (81) to obtain
\[G_{1}^{\zeta} = -\,\boldsymbol{\rho}\,\boldsymbol{\cdot}\,\boldsymbol{\mathcal{R }}\;+\;\frac{\mathsf{a}_{2}}{\Omega_{0}}:\nabla\mathbf{u} \tag{86}\] \[-\;q\left(\Phi^{\prime}\,\Psi_{1}^{\prime}\;-\;\Phi_{1}^{\prime} \right)\frac{\partial\boldsymbol{\rho}}{\partial\mu_{0}}\,\boldsymbol{\cdot} \,\frac{\partial\mathbf{x}}{\partial\theta}.\]
We note that the first term on the right side of Eq. (85) is required to preserve gyrogauge invariance.
### Guiding-center Lagrangian in a drifting frame
By combining all relevant orders, and restoring the physical mass \(\epsilon m\to m\), we construct the guiding-center Lagrangian in the drifting frame
\[L_{E\mathrm{gc}}\equiv\frac{q}{c}\,\Psi^{*}\dot{\Theta}+J\left(\dot{\zeta}- \dot{\Theta}\right)-\left(q\,\Phi^{*}+\mu\,B^{*}\right), \tag{87}\]
where
\[\Psi^{*}(\Psi) \equiv \Psi+\Psi_{1}\;=\;\Psi\left(1\;+\;2\,\epsilon\,\nu(\Psi)\right), \tag{88}\] \[\Phi^{*}(\Psi) \equiv \Phi+\Phi_{1}\;=\;\Phi(\Psi)\;+\;\epsilon\,\Psi\,\Phi^{\prime}( \Psi)\,\nu(\Psi),\] (89) \[B^{*}(\Psi) \equiv B_{0}\left(1+\frac{1}{2}\,\Psi_{1}^{\prime}\right), \tag{90}\]
and \(\epsilon=c\,\Phi^{\prime}(\Psi_{0})/\Omega_{0}\) returns to its physical interpretation, and \(\nu(\Psi)\equiv\Phi^{\prime}(\Psi)/\Phi^{\prime}(\Psi_{0})\). The Euler-Lagrange guiding-center equations of motion for the guiding-center angles \(\Theta\) and \(\zeta\) are
\[\dot{\Theta} = \frac{c}{q}\left(q\,\frac{d\Phi^{*}}{d\Psi^{*}}+\mu\;\frac{dB^{*} }{d\Psi^{*}}\right)\equiv\Omega(\Psi,\mu), \tag{91}\] \[\dot{\zeta} = \Omega_{0}\,B^{*}(\Psi)/B_{0}\;+\;\dot{\Theta}, \tag{92}\]
where we note that the guiding-center azimuthal angular velocity (90) depends on the guiding-center magnetic moment \(\mu\) for nonlinear radial electric fields since \(\Psi_{1}^{\prime\prime}\neq 0\). Since the guiding-center azimuthal angle \(\Theta\) is ignorable, the guiding-center azimuthal canonical angular momentum
\[P_{\mathrm{gc}\Theta}\;\equiv\;\frac{\partial L_{E\mathrm{gc}}}{\partial\dot{ \Theta}}\;=\;\frac{q}{c}\left(\Psi\;+\;\Psi_{1}\right)\;-\;J \tag{93}\]
is conserved, which follows from the conservation of \(\Psi\) and \(J\). It is also immediately clear that the guiding-center energy \(\mathcal{E}_{\mathrm{gc}}\equiv q\Phi^{*}(\Psi)+\mu\,B^{*}(\Psi)\) is also a constant of motion.
We note that the term \(\frac{1}{2}\,\mu B_{0}\,\Psi_{1}^{\prime}\) in Eqs. (87) and (88) can be interpreted as a finite-Larmor-radius correction to the electrostatic potential energy
\[q\left\langle\Phi(\mathbf{X}+\boldsymbol{\rho})\right\rangle-q\,\Phi(\mathbf{X} )=\frac{q}{2}\left\langle\boldsymbol{\rho}\boldsymbol{\rho}\right\rangle: \nabla\nabla\Phi=\frac{1}{2}\,\mu B_{0}\,\Psi_{1}^{\prime}.\]
Hence, the guiding-center Hamiltonian can be expressed as
\[H_{\mathrm{gc}}\;=\;q\,\left\langle\Phi(\Psi-G_{1}^{\psi})\right\rangle\;+\;q\, \Phi_{1}(\Psi)\;+\;\mu\,B_{0}, \tag{94}\]
where \(q\,\Phi_{1}\equiv m\,|\mathbf{u}|^{2}/2\).
Lastly, we note that the guiding-center position can be expressed in Cartesian coordinates as \((X,Y)\), where
\[X(t) = \sqrt{2\,\Psi/B_{0}}\;\cos\left[\Omega(\Psi,\mu)\;t\right], \tag{95}\] \[Y(t) = \sqrt{2\,\Psi/B_{0}}\;\sin\left[\Omega(\Psi,\mu)\;t\right], \tag{96}\]
which can then be compared with the Cartesian coordinates \((x,y)\) of the particle position given by Eqs. (14)-(15). Hence, because of the conservation law of \(\Psi\), the guiding-center moves on a circle with constant radius \(\sqrt{2\,\Psi/B_{0}}\), at a constant angular velocity \(\Omega(\Psi,\mu)\).
### Guiding-center conservation laws
We have just discovered that the guiding-center motion conserves the guiding-center magnetic flux \(\Psi\) and the guiding-center magnetic moment \(\mu\). First, the guiding-center magnetic flux \(\Psi\) can be constructed from the particle dynamics directly from the expansion
\[\Psi\;=\;\psi\;+\;G_{1}^{\psi}\;+\;G_{2}^{\psi}\;+\;\frac{1}{2}\;G_{1}^{\beta} \frac{\partial G_{1}^{\psi}}{\partial z^{\beta}}+\cdots \tag{97}\]
In Eq. (97), we find
\[G_{1}^{\beta}\frac{\partial G_{1}^{\psi}}{\partial z^{\beta}} = \frac{B_{0}}{\Omega_{0}}\left[G_{1}^{\mathbf{x}}\,\boldsymbol{\cdot}\,\nabla\mathbf{w}\,\boldsymbol{\cdot}\,\frac{\partial\mathbf{x}}{\partial\theta}\;+\;G_{1}^{\mathbf{x}}\,\boldsymbol{\cdot}\,\nabla\left(\frac{\partial\mathbf{x}}{\partial\theta}\right)\,\boldsymbol{\cdot}\,\mathbf{w}\right] \tag{98}\] \[+\;\frac{B_{0}}{\Omega_{0}}\left(G_{1}^{\mu}\,\frac{\partial\mathbf{w}}{\partial\mu_{0}}\;+\;G_{1}^{\zeta}\,\frac{\partial\mathbf{w}}{\partial\zeta_{0}}\right)\,\boldsymbol{\cdot}\,\frac{\partial\mathbf{x}}{\partial\theta}.\]
Here, using \(G_{1}^{\mathbf{x}}=-\,\mathbf{\rho}\), we find
\[G_{1}^{\mathbf{x}}\,\mathbf{\cdot}\nabla\mathbf{w}\cdot\frac{\partial\mathbf{x}}{ \partial\theta}\;=\;(\mathbf{\rho}\,\mathbf{\cdot}\,\mathbf{\mathcal{R}})\;\frac{\partial \mathbf{w}}{\partial\zeta_{0}}\,\cdot\frac{\partial\mathbf{x}}{\partial\theta},\]
while
\[G_{1}^{\mathbf{x}}\,\mathbf{\cdot}\,\nabla\left(\frac{\partial\mathbf{x}}{ \partial\theta}\right)\,\mathbf{\cdot}\,\mathbf{w}\;=\;\widehat{\mathbf{\mathbf{z}}} \cdot\left(\mathbf{w}\,\mathbf{\times}\,\mathbf{\rho}\right)\;=\;\frac{|\mathbf{w}|^{ 2}}{\Omega_{0}}\;=\;\frac{2J_{0}}{m},\]
where \(J_{0}=\mu_{0}B_{0}/\Omega_{0}\) is the lowest-order gyroaction, so that
\[\frac{1}{2}\,G_{1}^{\beta}\frac{\partial G_{1}^{\psi}}{\partial z^{\beta}}\;= \;\frac{c}{q}\,J_{0}\;+\;\frac{B_{0}}{2\Omega_{0}}\left(G_{1}^{\mu}\,\frac{ \partial\mathbf{w}}{\partial\mu_{0}}\;+\;g_{1}^{\zeta}\,\frac{\partial \mathbf{w}}{\partial\zeta_{0}}\right)\,\cdot\frac{\partial\mathbf{x}}{ \partial\theta},\]
where \(g_{1}^{\zeta}=G_{1}^{\zeta}+\mathbf{\rho}\,\mathbf{\cdot}\,\mathbf{\mathcal{R}}\). Since \(G_{2}^{\psi}\), given by Eq. (70), is
\[G_{2}^{\psi}=-\,\Psi_{1}^{\prime}\;G_{1}^{\psi}-\frac{B_{0}}{2\Omega_{0}} \left(G_{1}^{\mu}\,\frac{\partial\mathbf{w}}{\partial\mu_{0}}+g_{1}^{\zeta} \frac{\partial\mathbf{w}}{\partial\zeta_{0}}\right)\,\cdot\frac{\partial \mathbf{x}}{\partial\theta},\]
then
\[G_{2}^{\psi}\;+\;\frac{1}{2}\;G_{1}^{\beta}\frac{\partial G_{1}^{\psi}}{ \partial z^{\beta}}\;=\;-\,\Psi_{1}^{\prime}\;G_{1}^{\psi}\;+\;\frac{c}{q}J_{ 0}.\]
Hence, the guiding-center magnetic flux \(\Psi\) is defined as
\[\Psi\;=\;\psi\;+\;(1\;-\;\Psi_{1}^{\prime})\,G_{1}^{\psi}\;+\;\frac{c}{q}\,J_ {0}\;+\;\cdots, \tag{98}\]
where \(\Psi_{1}^{\prime}=2\,c\,(\Phi^{\prime}+\psi\,\Phi^{\prime\prime})/\Omega_{0}\) and \(G_{1}^{\psi}=2\,\psi\,(\dot{\theta}-c\,\Phi^{\prime})/\Omega_{0}\). We also note that the gyroangle-averaged magnetic flux \(\langle\psi\rangle=\Psi-(c/q)\,J\) is not equal to the guiding-center magnetic flux \(\Psi\).
Next, the guiding-center magnetic moment \(\mu_{\rm gc}\) can be constructed from the particle dynamics directly from the expansion
\[\mu\;=\;\mu_{0}\;+\;G_{1}^{\mu}+\cdots, \tag{99}\]
where the lowest-order magnetic moment \(\mu_{0}=m|\mathbf{w}|^{2}/2B_{0}\) is
\[\mu_{0}\;=\;\frac{q\,\psi}{c\Omega_{0}B_{0}}\left[\left(\frac{\dot{\psi}}{2 \psi}\right)^{2}+\left(\dot{\theta}-c\Phi^{\prime}\right)^{2}\right], \tag{100}\]
and
\[G_{1}^{\mu}\;= -\;\frac{\mu_{0}}{\Omega_{0}}\left(\mathfrak{a}_{1}:\nabla \mathbf{u}\;+\;\widehat{\mathbf{\mathbf{z}}}\,\mathbf{\cdot}\,\mathbf{\nabla}\,\mathbf{\times }\,\mathbf{u}\right) \tag{101}\] \[+\frac{q}{B_{0}}\left(\Phi^{\prime}\,\Psi_{1}^{\prime}\;-\;\Phi_{ 1}^{\prime}\right)G_{1}^{\psi},\]
with
\[\Phi^{\prime}\,\Psi_{1}^{\prime}-\Phi_{1}^{\prime} = \frac{c\Phi^{\prime}}{\Omega_{0}}\left[(2\,\Phi^{\prime}+2\psi\; \Phi^{\prime\prime})\;-\;(\Phi^{\prime}+2\psi\;\Phi^{\prime\prime})\right]\] \[= \frac{\Omega_{0}}{c}\left(\frac{c\Phi^{\prime}}{\Omega_{0}} \right)^{2}.\]
Here, we use Eq. (75) to write
\[-\frac{\mu_{0}}{\Omega_{0}}\;\mathfrak{a}_{1}:\nabla\mathbf{u}\;=\;\frac{m}{2B _{0}}\left(\mathbf{w}\mathbf{\rho}\right):\nabla\mathbf{u}\;+\;\frac{\mu_{0} \widehat{\mathbf{\mathbf{z}}}}{2\Omega_{0}}\,\mathbf{\cdot}\,\mathbf{\nabla}\,\mathbf{\times }\,\mathbf{u},\]
where
\[\frac{m}{2B_{0}}\;(\mathbf{w}\mathbf{\rho}):\nabla\mathbf{u} = -\frac{q\psi}{B_{0}}\;\left(\Phi^{\prime}+2\psi\,\Phi^{\prime \prime}\right)\left(\dot{\theta}-\frac{c\Phi^{\prime}}{\Omega_{0}}\right)^{2} \tag{102}\] \[-\;\frac{q\psi}{B_{0}}\;\Phi^{\prime}\left(\frac{\dot{\psi}}{2\psi }\right)^{2},\]
and
\[\frac{\mu_{0}\widehat{\mathbf{\mathbf{z}}}}{2\Omega_{0}}\,\mathbf{\cdot}\,\mathbf{\nabla}\, \mathbf{\times}\,\mathbf{u}=\frac{1}{2}\,\mu_{0}\,\Psi_{1}^{\prime}\;=\;\mu_{0} \left(\Phi^{\prime}+\psi\;\Phi^{\prime\prime}\right). \tag{103}\]
From Eq. (99), we thus find the simple relation \(\langle\mu_{0}\rangle=\mu-\langle G_{1}^{\mu}\rangle\equiv\mu\,d\Psi^{\ast}/d\Psi\). The conservation laws of the guiding-center magnetic flux (98) and the guiding-center magnetic moment (99) will be explored in Sec. V for the case of a linear radial electric field.
Lastly, we establish the validity of the guiding-center representation by verifying that the guiding-center pullback \(\mathsf{T}_{\rm gc}P_{\rm gc\Theta}\) of the guiding-center azimuthal canonical angular momentum (92) is equal to the particle azimuthal canonical angular momentum (3): \(\mathsf{T}_{\rm gc}P_{\rm gc\Theta}=p_{\theta}\). Here, the expansion of the guiding-center pull-back \(\mathsf{T}_{\rm gc}P_{\rm gc\Theta}\)
\[\mathsf{T}_{\rm gc}P_{\rm gc\Theta} = \frac{q}{c}\,\psi\;+\;\epsilon\,\frac{q}{c}\left(\Psi_{1}+G_{1}^{ \psi}\right)\;-\;\epsilon^{2}\;J \tag{104}\] \[+\;\epsilon^{2}\frac{q}{c}\left(G_{2}^{\psi}+\frac{1}{2}\,\mathsf{ G}_{1}\cdot\mathsf{d}G_{1}^{\psi}+G_{1}^{\psi}\,\Psi_{1}^{\prime}\right)\] \[= \frac{q}{c}\,\psi\left(1\;+\;2\,\dot{\theta}/\Omega_{0}\right)\;=\;p _{\theta}\]
yields the particle azimuthal canonical angular momentum \(p_{\theta}\) up to second order in \(\epsilon\). Hence, the guiding-center transformation (52) generated by the components \((G_{1}^{\alpha},G_{2}^{\alpha},...)\) is faithful to the exact conservation laws of the particle dynamics.
### Guiding-center polarization and magnetization
Polarization and magnetization are pillars of the reduced Vlasov-Maxwell dynamical description of self-consistent magnetized plasmas [17; 18; 19]. We now calculate the guiding-center polarization and magnetization in the lab frame, which are each defined as the sum of a contribution associated with the transformation to the drifting frame and a contribution in the drifting frame directly calculated from the guiding-center transformation.
We begin with the guiding-center polarization, which is expressed in terms of the electric-dipole definition
\[\mathbf{\pi}_{\rm gc}\;\equiv\;q\,\left(\mathbf{\rho}_{E}\;+\;\langle\mathbf{\rho}_{\rm gc }\rangle\right), \tag{105}\]
where the lowest-order electric displacement
\[\mathbf{\rho}_{E}\;\equiv\;\frac{\widehat{\mathbf{z}}}{\Omega_{0}}\cross{\bf u}\;=\;-\; \frac{c\,\nabla\Phi}{B_{0}\Omega_{0}}\;=\;-\;\frac{c\,\Phi^{\prime}(\psi)}{B_{0} \Omega_{0}}\;\nabla\psi \tag{106}\]
involves the radial electric field, as expected. The contribution associated with the guiding-center transformation is constructed from the guiding-center displacement \(\mathbf{\rho}_{\rm gc}\equiv\mathsf{T}_{\rm gc}^{-1}{\bf x}-{\bf X}\), which is expressed as
\[\mathbf{\rho}_{\rm gc} = -\,\epsilon\,G_{1}^{\bf x}\;-\;\epsilon^{2}\left(G_{2}^{\bf x}\;- \;\frac{1}{2}\,{\sf G}_{1}\cdot{\sf d}G_{1}^{\bf x}\right)\;+\;\cdots\] \[= \epsilon\,(1-\epsilon\,\Psi_{1}^{\prime})\;\mathbf{\rho}-\frac{ \epsilon^{2}\widehat{\mathbf{z}}}{\Omega_{0}}\cross\,\left(G_{1}^{\mu}\frac{\partial {\bf w}}{\partial\mu}+g_{1}^{\zeta}\frac{\partial{\bf w}}{\partial\zeta} \right),\]
where we have restored the mass renormalization \(m\to\epsilon\,m\). Given the fact that the lowest-order gyroradius \(\mathbf{\rho}\) is gyroangle-dependent, the gyroangle-averaged guiding-center displacement yields the expression
\[\left\langle\mathbf{\rho}_{\rm gc}\right\rangle = -\,\frac{\epsilon^{2}\widehat{\mathbf{z}}}{\Omega_{0}}\cross\, \left\langle G_{1}^{\mu}\frac{\partial{\bf w}}{\partial\mu}+g_{1}^{\zeta}\frac {\partial{\bf w}}{\partial\zeta}\right\rangle \tag{107}\] \[= \frac{\epsilon^{2}c}{B_{0}\Omega_{0}}\left(\Phi^{\prime}\,\Psi_{ 1}^{\prime}\;-\;\Phi_{1}^{\prime}\right)\nabla\psi,\]
where we used Eqs. (80)-(81). By adding the two contributions (106) and (107) in Eq. (105), we find the net guiding-center electric-dipole moment
\[\mathbf{\pi}_{\rm gc}\;=\;-\;\frac{cq}{B_{0}\Omega_{0}}\left[\Phi^{\prime}\;+\; \epsilon\,(\Phi_{1}^{\prime}-\Phi^{\prime}\Psi_{1}^{\prime})\right]\nabla\psi, \tag{108}\]
which contains first-order guiding-center corrections to the lowest-order electric displacement. We now show that, using Eq. (94), Eq. (108) can be expressed as
\[\mathbf{\pi}_{\rm gc} \equiv \frac{q\widehat{\mathbf{z}}}{\Omega_{0}}\cross\dot{\bf X}\;=\;-\; \frac{cq\,\nabla\Phi^{*}}{B_{\parallel}^{*}\Omega_{0}}\;=\;-\;\frac{cq}{B_{0 }\Omega_{0}}\;\frac{d\Phi^{*}}{d\Psi^{*}}\,\nabla\psi \tag{109}\] \[= -\;\frac{cq}{B_{0}\Omega_{0}}\left(\frac{\Phi^{\prime}+\epsilon \,\Phi_{1}^{\prime}}{1\;+\;\epsilon\,\Psi_{1}^{\prime}}\right)\nabla\psi,\]
which yields Eq. (108) if we explicitly expand Eq. (109) in powers of \(\epsilon\) and keep only terms up to second order.
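For a quick numerical illustration of this expansion (not in the paper), one can compare the exact coefficient of Eq. (109) with the truncated coefficient of Eq. (108) for an assumed quadratic potential \(\Phi(\psi)=\psi+\psi^{2}/2\) in normalized units (\(\overline{\psi}_{0}=1\)), using \(\Psi_{1}=2\psi\,\nu(\psi)\) from Eq. (40) and \(\Phi_{1}=\psi\,\nu(\psi)\,\Phi^{\prime}(\psi)\) as in Eq. (89):

```python
psi, eps = 0.7, 0.02                       # sample flux value and ordering parameter
# Derivatives for the assumed potential Phi(psi) = psi + psi**2/2 (so Phi'(1) = 2):
Phip = 1.0 + psi                           # Phi'(psi)
Psi1p = 1.0 + 2.0 * psi                    # d/dpsi [2 psi nu(psi)], nu = Phi'(psi)/Phi'(1)
Phi1p = (1.0 + psi) * (1.0 + 3.0 * psi) / 2.0   # d/dpsi [psi nu(psi) Phi'(psi)]

exact = (Phip + eps * Phi1p) / (1.0 + eps * Psi1p)     # coefficient appearing in Eq. (109)
trunc = Phip + eps * (Phi1p - Phip * Psi1p)            # coefficient appearing in Eq. (108)
print(exact, trunc, abs(exact - trunc))                # the difference is O(eps**2)
```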
We note that the drifting-frame guiding-center polarization contribution can also be calculated from the guiding-center Lagrangian (86), which can be rewritten as
\[L_{\rm Egc} = \left(\frac{q}{c}\,\Psi\,\nabla\Theta\;+\;m\,{\bf u}\right)\, \cdot\dot{\bf X}+J\,\left(\dot{\zeta}-\mathbf{\mathcal{R}}\mathbf{\cdot}\dot{\bf X}\right)\] \[-\;\left(q\,\Phi\;+\;\frac{m}{2}\,|{\bf u}|^{2}\;+\;\mu\,B_{0}\;- \;\frac{q}{2}\left\langle\mathbf{\rho}\mathbf{\rho}\right\rangle:\nabla{\bf E}\right).\]
Hence, we find
\[\frac{\partial L_{\rm Egc}}{\partial{\bf E}}\;=\;\frac{q\widehat{\mathbf{z}}}{ \Omega_{0}}\cross\dot{\bf X}\;-\;\frac{qc\,{\bf E}}{B_{0}\Omega_{0}}\;=\;\frac{ q\widehat{\mathbf{z}}}{\Omega_{0}}\cross\dot{\bf X}-q\,\mathbf{\rho}_{E}, \tag{111}\]
while the quadrupole contribution
\[\nabla\mathbf{\cdot}\left(\frac{\partial L_{\rm Egc}}{\partial\nabla{\bf E}} \right)\;=\;\frac{q}{2}\;\nabla\mathbf{\cdot}\left\langle\mathbf{\rho}\mathbf{\rho}\right\rangle =\;0\]
vanishes in a uniform magnetic field.
Next, we calculate the guiding-center intrinsic magnetic dipole
\[\mathbf{\mu}_{\rm gc} \equiv \frac{q\Omega_{0}}{2c}\left\langle\mathbf{\rho}_{\rm gc}\cross\frac{\partial\mathbf{\rho}_{\rm gc}}{\partial\zeta}\right\rangle\] \[= \frac{q\Omega_{0}}{2c}\left(1\;-\;2\,\epsilon\,\Psi_{1}^{\prime}\right)\left\langle\mathbf{\rho}\cross\frac{\partial\mathbf{\rho}}{\partial\zeta}\right\rangle\] \[-\;\frac{\epsilon q}{c}\left\langle\left[\widehat{\bf z}\cross\left(G_{1}^{\mu}\frac{\partial{\bf w}}{\partial\mu}+g_{1}^{\zeta}\frac{\partial{\bf w}}{\partial\zeta}\right)\right]\boldsymbol{\times}\frac{\partial\mathbf{\rho}}{\partial\zeta}\right\rangle.\]
The lowest-order contribution makes use of the definition \(\left\langle\mathbf{\rho}\cross\partial\mathbf{\rho}/\partial\zeta\right\rangle=-(2 \mu_{0}B_{0}/m\Omega_{0}^{2})\,\widehat{\bf z}\), so that we find
\[\frac{q\Omega_{0}}{2c}\left(1\;-\;2\,\epsilon\,\Psi_{1}^{\prime}\right)\left\langle \mathbf{\rho}\cross\frac{\partial\mathbf{\rho}}{\partial\zeta}\right\rangle\;=\;-\; \mu_{0}\left(1\;-\;2\,\epsilon\,\Psi_{1}^{\prime}\right)\,\widehat{\bf z},\]
while the first-order contribution is
\[\frac{\epsilon q}{c}\left\langle\left[\widehat{\bf z}\cross\left(G_{1}^{\mu} \frac{\partial{\bf w}}{\partial\mu}+g_{1}^{\zeta}\frac{\partial{\bf w}}{ \partial\zeta}\right)\right]\cross\frac{\partial\mathbf{\rho}}{\partial\zeta}\right\rangle \;=\;-\;\epsilon\,\mu_{0}\Psi_{1}^{\prime}\,\widehat{\bf z}.\]
If we combine these results, we obtain the simple formula
\[\mathbf{\mu}_{\rm gc}\;\equiv\;-\mu_{0}\;(B_{0}/B_{\parallel}^{*})\,\widehat{\bf z }\;=\;-\,\mu_{0}\left(1\;-\;\epsilon\,\Psi_{1}^{\prime}\right)\,\widehat{\bf z}, \tag{113}\]
after an expansion in powers of \(\epsilon\) is carried out.
## V Guiding-center dynamics for a linear radial electric field
In this Section, we return to the case of a linear radial electric field, where \(\Phi^{\prime}=\epsilon\,\Omega_{0}/c\) and \(\Phi^{\prime\prime}=0\), so that \(\epsilon\Psi_{1}^{\prime}=2\epsilon\) and \(\epsilon\Phi_{1}^{\prime}=\epsilon^{2}\Omega_{0}/c\). In this case, the guiding-center azimuthal angular velocity (90) is \(\Omega(\Psi)=\Omega_{0}\,\overline{\omega}(\epsilon)\), where \(\overline{\omega}(\epsilon)=\epsilon\,(1+\epsilon)/(1+2\epsilon)\), i.e., in the limit \(\epsilon\ll 1\), the guiding-center azimuthal angular velocity is proportional to \(\epsilon=c\,\Phi^{\prime}(\overline{\psi}_{0})/\Omega_{0}\). As was noted below Eq. (90), the guiding-center azimuthal angular velocity \(\Omega(\Psi)\) is independent of the guiding-center magnetic moment \(\mu\) for a linear radial electric field since \(\Psi_{1}^{\prime}\) is a constant.
Here, we will use the dimensionless ordering parameter \(\epsilon=1/30\), instead of the standard value \(1/1000\) commonly assumed in guiding-center theory, in order to show how far the perturbation analysis can be pushed toward nonstandard orderings; e.g., according to Joseph's ordering [13], we find \(\Omega_{E}/\Omega_{0}=\sqrt{2\epsilon}\simeq 25\%\).
### Guiding-center conservation laws
In the case of a linear radial electric field, the guiding-center magnetic flux (98) becomes
\[\Psi(\tau) = (1-2\epsilon)\,\overline{\psi}_{0}\;+\;\frac{c}{q\Omega_{0}}\left( \mathcal{E}\;-\;q\,\Phi_{0}\right) \tag{114}\] \[+\;5\epsilon^{2}\,\psi(\tau),\]
where \(\Phi_{0}\equiv\Phi(\overline{\psi}_{0})\) and the time dependence (with \(\tau=\sqrt{1+4\epsilon}\,\Omega_{0}\,t\)) has been pushed from zeroth order to second order in \(\epsilon\) as a result of the guiding-center transformation (98). Next, the guiding-center magnetic moment (99) becomes
\[\mu(\tau) = (1-2\,\epsilon)\;\frac{\mathcal{E}}{B_{0}}\;-\;(1-3\,\epsilon)\; \frac{q\Phi_{0}}{B_{0}} \tag{115}\] \[-\;4\,\epsilon^{3}\left(\frac{q\Omega_{0}}{cB_{0}}\right)\psi( \tau),\]
where \(\mu_{0}\) is given in Eq. (100):
\[\mu_{0}\;=\;(\mathcal{E}-q\,\Phi_{0})/B_{0}\;+\;\epsilon^{2}\left(\frac{q \Omega_{0}}{cB_{0}}\right)\psi(\tau). \tag{116}\]
Hence, the time dependence has been pushed from second order to third order in \(\epsilon\) as a result of the guiding-center transformation (99).
In Fig. 3, we see the normalized lowest-order magnetic flux \(\chi(\tau)\equiv\psi(\tau)/\overline{\psi}_{0}\) (gray) and the normalized guiding-center magnetic flux \(\chi_{\rm gc}\) (black) for the case of a linear radial electric field with \(\epsilon=1/30\) and \(\phi=\pi/10\). We clearly see that the large-amplitude oscillation in \(\psi(\tau)\) has been greatly reduced in Eq. (114) by a factor of \(\epsilon^{2}\). We also see that the normalized guiding-center magnetic flux \(\chi_{\rm gc}\) is NOT equal to the averaged normalized magnetic flux \(\langle\chi\rangle=(a^{2}+b^{2})/2=\chi_{0}\sec\phi\), shown as a dashed horizontal line in Fig. 3.
In Fig. 4, we see the normalized lowest-order magnetic moment \(\overline{\mu}_{0}\) (gray) and the normalized guiding-center magnetic moment \(\overline{\mu}\) (black) for the case of a linear radial electric field with \(\epsilon=1/30\) and \(\phi=\pi/10\), each normalized by \((q\Omega_{0}/cB_{0})\overline{\psi}_{0}\). We clearly see that, while the lowest-order magnetic moment (116) shows some oscillations with small amplitudes (at order \(\epsilon^{2}\)), the guiding-center magnetic moment (115) is fairly flat, with minimal-amplitude oscillations (at order \(\epsilon^{3}\)).
### Guiding-center dynamics
Lastly, the plots of \(x(t^{\prime})\) and \(X(t^{\prime})\), as well as the parametric plots of \((x,y)\) and \((X,Y)\), are shown in Figs. 5 and 6, respectively, for the case of a linear radial electric field with \(\epsilon=1/30\) and \(\phi=\pi/10\). We clearly see how well the guiding-center position (94)-(95):
\[X(t^{\prime}) = \sqrt{2\,\Psi(\tau)/B_{0}}\;\cos\left(t^{\prime}\,\overline{ \omega}(\epsilon)\right), \tag{117}\] \[Y(t^{\prime}) = \sqrt{2\,\Psi(\tau)/B_{0}}\;\sin\left(t^{\prime}\,\overline{ \omega}(\epsilon)\right) \tag{118}\]
follows the particle position (36)-(37). Hence, the guiding-center transformation introduced in Sec. IV has achieved its purpose in building guiding-center invariants \(\Psi\) and \(\mu\) to higher order in the perturbation analysis from the lowest-order coordinates \(\psi\) and \(\mu_{0}\). In addition, the guiding-center dynamics faithfully follows the particle dynamics.
Figure 4: Plots of the normalized magnetic moment \(\overline{\mu}_{0}(\tau)\) (gray) and the normalized guiding-center magnetic moment \(\overline{\mu}(\tau)\) (black) for the case of a linear radial electric field with \(\epsilon=1/30\) and \(\phi=\pi/10\).
Figure 5: Plots of \(x(t^{\prime})\) and \(X(t^{\prime})\) shown as gray and black curves, respectively, in the range \(0\leq t^{\prime}\leq 4\pi/\overline{\omega}(\epsilon)\) for the case of a linear radial electric field with \(\epsilon=1/30\) and \(\phi=\pi/10\).
## VI Summary
The presence of a nonuniform electric field adds a significant element of complexity in the guiding-center analysis of particle motion in crossed electric and magnetic fields, which are quite common in laboratory and space magnetized plasmas. In the present work, we greatly simplified the guiding-center analysis presented in Ref. [5] by considering a nonuniform radial electric field in the presence of a uniform magnetic field.
The case of a nonlinear radial electric field is a topic of great interest in the investigation of turbulence and transport in rotating magnetized plasmas [1; 2; 3; 8; 9] and was recently explored by Wang _et al._[11], who performed gyrokinetic studies of ion-temperature-gradient (ITG) turbulence and transport in the scrape-off layer (SOL) region of a field-reversed magnetized plasma.
The results of our guiding-center analysis for the case of a linear radial electric field confirm the faithfulness of the guiding-center representation. For a nonlinear radial electric field with quadratic nonlinearity
\[\Phi(\psi)\;=\;\Phi_{0}\;+\;\Phi_{0}^{\prime}\,(\psi-\overline{\psi}_{0})\;+ \;\frac{1}{2}\,\Phi_{0}^{\prime\prime}\,(\psi-\overline{\psi}_{0})^{2},\]
the radial integral solution (12) involves Weierstrass elliptic functions (for example, see Ref. [14]), and the energy dependence of the guiding-center azimuthal angular velocity (90) becomes important. Additional comments concerning guiding-center orbits in a nonlinear radial electric field in a uniform magnetic field can be found in the recent work by Joseph [13]. Future work will consider other orbital effects of a nonlinear radial electric field, such as the orbit-squeezing effect [20; 21; 22], which may be explored in the limit of a uniform magnetic field, as well as applications of the general guiding-center theory presented in Sec. IV to the case of a nonlinear radial electric field in a nonuniform magnetic field.
###### Acknowledgements.
The Author wishes to acknowledge useful discussions with K. Kabin on the charged-particle dynamics in the presence of a linear radial electric field. The present work was supported by the National Science Foundation grant PHY-2206302.
**Data Availability Statement**
The Mathematica code used to generate the plots in the present manuscript is available upon request.
|
2301.07449 | Absorption and scattering of a high dimensional noncommutative black
hole | In this work, we investigate the scattering of massless plane scalar waves by
the high dimensional noncommutative Schwarzschild-Tangherlini black hole. We
use the partial wave approach to determine the scattering and absorption cross
sections in the incident wavelength range. Our numerical results demonstrate
that the bigger the noncommutative parameter, the smaller the maximum value of
the related partial absorption cross section, however the tendency is slightly.
We also discovered that when the noncommutative parameter is weak, the
absorption cross section of the high dimensional black hole oscillates in the
low frequency zone. The total absorption cross section oscillates around the
geometrical optical limit in the high frequency range, and the scattering
characteristics of black holes with various parameters are visibly different.
The influence on the differential scattering cross section is particularly
pronounced at large angles. | Mao-Yuan Wan, Chen Wu | 2023-01-18T11:45:04Z | http://arxiv.org/abs/2301.07449v1 | # Absorption and scattering of a high dimensional non-commutative black hole
###### Abstract
In this work, we investigate the scattering of massless plane scalar waves by the high dimensional non-commutative Schwarzschild-Tangherlini black hole. We use the partial wave approach to determine the scattering and absorption cross sections over a range of incident wavelengths. Our numerical results demonstrate that the larger the non-commutative parameter, the smaller the maximum of the corresponding partial absorption cross section, although this trend is slight. We also find that when the non-commutative parameter is small, the absorption cross section of the high dimensional black hole oscillates in the low frequency zone. The total absorption cross section oscillates around the geometrical optical limit in the high frequency range, and the scattering characteristics of black holes with different parameters are visibly different. The influence on the differential scattering cross section is particularly pronounced at large angles.
## I Introduction
The problem of understanding how to avoid singularities in black hole spacetimes is important in general relativity. Researchers have been very interested in non-commutative black holes as an approach motivated by quantum gravity [1; 2]. The idea arose from the perspective of string theory [3]. The existence of a minimal length is a natural requirement once the quantum properties of phase space are taken into account. As a result, the presence of singularities can be prevented by constraining the minimum scale of space and time, i.e., by replacing the point-like matter distribution with a smeared matter distribution [4; 5]. Black holes are thus an essential application area for non-commutative geometry, and understanding non-commutativity can help us better comprehend black hole evaporation. Susskind [6] suggested in 1993 that string effects should not be neglected in the correspondence between strings and black holes. String effects in black holes inspired by non-commutative geometry are in some respects analogous to the non-commutative field theory derived from string theory [7]. In 1999, Seiberg and Witten [8] showed that non-commutative field theory describes certain low-energy effective theories of open strings in nontrivial backgrounds, thereby providing a theoretical foundation for the study of non-commutative spacetime. String theory studies show that the spacetime coordinates become non-commutative operators on the \(D\)-brane, with commutator
\[[\hat{x}_{\mu},\hat{x}_{\nu}]=i\theta_{\mu\nu}=i\frac{C_{\mu\nu}}{\Lambda_{ \rm NC}^{2}}, \tag{1}\]
where \(\hat{x}_{\mu}\) are the spacetime coordinate operators, \(\theta_{\mu\nu}\) is an antisymmetric constant tensor with dimensions of length squared, \(\Lambda_{\rm NC}\) is the mass scale associated with non-commutativity, and \(C_{\mu\nu}\) is commonly regarded as a frame-independent dimensionless antisymmetric matrix rather than a tensor. Eq. (1) provides the starting point for non-commutative corrections to quantum field theory.
Nicolini and his colleagues [7] point out that non-commutativity is a characteristic of the manifold itself, not a superposition of geometric structures. As a result, non-commutativity affects the source term and has no effect on the Einstein-tensor part of the field equations. That is, the point-like mass density in the energy-momentum tensor of the Einstein equations is replaced by a Gaussian smeared matter distribution [7]. In addition to the Gaussian smeared matter distribution, there are also the Lorentz [9] and Rayleigh [10] distributions. This results in a non-commutative, self-regular black hole solution devoid of singularities. Nicolini and his colleagues [7] were the first to obtain a 4-dimensional non-commutative geometrically inspired Schwarzschild black hole. The solution was then extended to the charged case [11] and to extra spatial dimensions [12]. Modesto and Nicolini [13] extended it to the general case of electrically charged rotating non-commutative black holes in 2010. Non-commutative black holes have also been generalized to higher dimensional black holes [14; 15] and higher dimensional charged black holes [16; 17]. Nozari and Mehdipour [9] investigated the Hawking radiation of non-commutative Schwarzschild black holes. Many other researchers have investigated the thermodynamic aspects of non-commutative black holes [18; 19; 20; 21]. Yan [22; 23] used the WKB method to calculate the quasinormal modes (QNMs) of non-commutative black holes. Konoplya, Bronnikov and collaborators [24; 25] used the WKB method to study the quasinormal ringing of regular black holes.
Since non-commutative spacetime coordinates introduce a new fundamental length scale, it is also interesting to investigate the influence of this parameter on the absorption and scattering of matter waves in black hole spacetimes. During the 1970s and 1980s, considerable effort [26] was devoted to the study of the scattering of various plane waves of frequency \(\omega\) by black holes. The scattering results can be regarded as a fingerprint of the black hole and may in principle be observed. Recently, these topics have again attracted a great deal of interest [27; 28; 29; 30; 31]. However, real astrophysical black holes are surrounded by many fields. To understand black holes as fully as possible we should consider as many fields as possible, but this complicates the calculation, so we simplify the problem. The simplest case, a scalar wave in a spherically symmetric black hole spacetime, already displays several basic features of the problem, such as the high-frequency limit of the absorption cross section tending to the area of the black hole shadow, or Hawking radiation. In more complicated cases, the analytical intricacy might obscure this basic simplicity. Thus, we consider massless scalar waves. Can such scattering computations be used in astrophysics? For real, astrophysically complicated black holes, we must admit that observation has yet to provide a completely convincing candidate to which these ideas might be applied. Photons can only probe a black hole outside its shadow, owing to the spacetime structure and the astrophysical environment surrounding it; the underlying geometrical structure could only be detected by gravitational waves [26]. In principle, high-precision measurements of the massless scalar wave fluxes scattered by black holes might one day be used to estimate a black hole's hair. Even in the absence of experimental confirmation, we believe that research into black hole scattering will continue to further our knowledge of how black holes interact with their surroundings.
The reasons this subject attracts increasing attention are as follows: 1) the quasinormal modes of black holes are thought to represent the poles of the corresponding black hole scattering matrix; 2) scattering and absorption by black holes are related to many interesting phenomena, such as the glory, photon orbits, and superradiant scattering [32; 33; 34; 35; 36; 37; 38; 39; 40; 41]. In Refs. [42; 43; 44], Sanchez showed numerically that the absorption cross section of the Schwarzschild black hole for massless scalar waves oscillates around the geometrical optical limit.
In recent years, the partial wave approach has been widely employed in studies of black hole scattering, particularly the scattering of various fields by black holes [45; 46; 47; 48; 49]. Chen and his coworkers [50] investigated the scattering and absorption cross sections of massless scalar waves by a black hole surrounded by a magnetic field. Crispino et al. [51] extended the study of the scattering theory of scalar waves to the Reissner-Nordstrom black hole spacetime. Huang et al. [52] studied the scattering and absorption cross sections of massless scalar waves by the Bardeen black hole. It is worth noting that the Bardeen black hole is a type of regular black hole, i.e., one of the singularity-free black hole models. Anacleto et al. [53] investigated the scattering and absorption cross sections of non-commutative Schwarzschild black holes with Gaussian and Lorentz smeared matter distributions. Scattering by non-commutative BTZ black holes has also been studied [54]. Scalar wave scattering by spherically symmetric \(D\)-dimensional black holes has also been studied in string theory [55].
The major focus of this study is the scalar scattering process of the \(D\)-dimensional non-commutative Schwarzschild-Tangherlini black hole with a smeared matter distribution, as well as the influence of different \(\theta\) values on the scalar absorption and scattering cross sections. The structure of this paper is as follows. Sect. II presents the perturbation equation and the effective potential for a scalar perturbation in the given spacetime background and briefly discusses the \(D\)-dimensional non-commutative Schwarzschild-Tangherlini black hole with a smeared matter distribution. We also explore the allowed range of the distribution parameter \(k\) under the hoop conjecture, as well as the range of the non-commutative parameter \(\theta\) for which an event horizon exists for different dimensions \(D\) and values of \(k\). In Sect. III, we analyze the classical dynamics of the massless scalar field; the focus of Sect. III is the absorption and scattering cross sections of massless scalar fields for the \(D\)-dimensional non-commutative Schwarzschild-Tangherlini black holes. Finally, we conclude in Sect. IV. Throughout this paper we use natural units \(c=\hbar=G=k_{B}=1\).
## II The basic equation
### The metric
Physically, non-commutative geometry is represented by matter that is smeared rather than squeezed to a point at the origin, and the energy-momentum tensor modified by the smeared matter distribution corresponds to an anisotropic fluid. Many researchers [56; 57; 58; 59] have recently demonstrated that the Gaussian smeared matter distribution may be replaced by other distributions, as long as the distribution has a sharp peak at the origin comparable to the Dirac \(\delta\)-function and its integral is finite. The general density of smeared matter is given in Ref. [59],
\[\rho_{\rm matter}(r)=\left[\frac{M}{\pi^{\frac{D-1}{2}}(4\theta)^{\frac{D+k-1}{2}}}\cdot\frac{\Gamma\left(\frac{D-1}{2}\right)}{\Gamma\left(\frac{D+k-1}{2}\right)}\right]r^{k}e^{-\frac{r^{2}}{4\theta}}, \tag{2}\]
where \(M\) is the black hole's mass and \(\theta\) is a positive non-commutative parameter. \(k\) is a non-negative integer: the Gaussian smeared matter distribution corresponds to \(k=0\), the Rayleigh distribution to \(k=1\), the Maxwell-Boltzmann distribution to \(k=2\), and so on. \(\Gamma(x)\) is the gamma function.
The average radius corresponding to the mass density distribution of the matter can be calculated using Eq.
(2),
\[\bar{r}=\int\limits_{0}^{+\infty}\frac{\rho_{\rm matter}(r)}{M}r{\rm d}V_{D-1}= \sqrt{4\theta}\frac{\Gamma\left(\frac{D+k}{2}\right)}{\Gamma\left(\frac{D+k-1}{ 2}\right)}, \tag{3}\]
where \({\rm d}V_{D-1}\) represents the \((D-1)\)-dimensional volume element. The larger the value of \(\theta\), the more dispersed the matter distribution; the smaller the value of \(\theta\), the more concentrated the matter distribution. The mean radius of the matter satisfies \(\bar{r}\to 0\) in the limit \(\theta\to 0\), which indicates that the distribution of matter collapses to a point and the non-commutativity of spacetime disappears. As a result, this spacetime non-commutativity can be described as a small effect superimposed on ordinary spacetime, with the non-commutative influence mostly confined to the region around the average radius of the matter.
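For illustration, Eqs. (2)-(3) are straightforward to evaluate numerically; the following sketch (our own, with purely illustrative parameter values) uses SciPy's gamma function.

```python
import numpy as np
from scipy.special import gamma

def rho_matter(r, M, theta, D, k):
    """Smeared matter density of Eq. (2)."""
    norm = (M / (np.pi**((D - 1) / 2.0) * (4.0 * theta)**((D + k - 1) / 2.0))
            * gamma((D - 1) / 2.0) / gamma((D + k - 1) / 2.0))
    return norm * r**k * np.exp(-r**2 / (4.0 * theta))

def mean_radius(theta, D, k):
    """Mean radius of the smeared distribution, Eq. (3)."""
    return np.sqrt(4.0 * theta) * gamma((D + k) / 2.0) / gamma((D + k - 1) / 2.0)

# Gaussian case (k = 0) in D = 4 with an illustrative theta
print(mean_radius(theta=0.1, D=4, k=0))          # r_bar -> 0 as theta -> 0
print(rho_matter(r=0.5, M=1.0, theta=0.1, D=4, k=0))
```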
The metric of a static spherically symmetric non-commutative Schwarzschild black hole must satisfy the following requirements: 1) the radial matter distribution function \(\rho_{\rm matter}(r)\) is spherically symmetric; 2) covariant energy conservation, \(\nabla_{\mu}T^{\mu\nu}=0\); 3) the Schwarzschild-like property \(g_{00}=-g_{rr}^{-1}\). As a result, the spherically symmetric energy-momentum tensor of the matter source [60; 17] can be expressed as \(T_{\mu\nu}=(\rho+p_{\vartheta_{i}})U_{\mu}U_{\nu}+p_{\vartheta_{i}}g_{\mu\nu}+(p_{r}-p_{\vartheta_{i}})X_{\mu}X_{\nu}\), where \(U_{\mu}\) is the fluid velocity and \(X_{\mu}\) is the unit vector along the radial direction,
\[[T^{\mu}_{\ \ \nu}]_{\rm matter}=diag[-\rho_{\rm matter}(r),p_{r},p_{ \vartheta_{1}},\ldots,p_{\vartheta_{D-2}}], \tag{4}\]
where \(p_{r}=-\rho_{\rm matter}(r)\) denotes radial pressure, while \(p_{\vartheta_{i}}=-\rho_{\rm matter}(r)-[r/(D-2)]\partial_{r}\rho_{\rm matter }(r)\) denotes tangential pressure. The Einstein field equation can be written as
\[R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=8\pi G_{D}[T_{\mu\nu}]_{\rm matter}, \tag{5}\]
where \(G_{D}\) is the gravitational constant in \(D\)-dimensional spacetime, \(G_{D}=m_{\rm Pl}^{-2}=l_{\rm Pl}^{-2}\), where \(m_{\rm Pl}\) and \(l_{\rm Pl}\) represent the Planck mass and length, respectively.
In the \(D\)-dimensional Schwarzschild-Tangherlini spacetime, the metric of a non-commutative spherically symmetric black hole is
\[{\rm d}s^{2}=-f(r){\rm d}t^{2}+f(r)^{-1}{\rm d}r^{2}+r^{2}{\rm d}\Omega_{D-2}^ {2}, \tag{6}\]
where \({\rm d}\Omega_{D-2}^{2}\) denotes the line element on the \((D-2)\)-dimensional unit sphere, and
\[{\rm d}\Omega_{D-2}^{2}= {\rm d}\vartheta_{1}^{2}+\sin^{2}\vartheta_{1}{\rm d}\vartheta_{2}^{2}+\ldots+(\sin^{2}\vartheta_{1}\ldots\sin^{2}\vartheta_{D-3}){\rm d}\vartheta_{D-2}^{2}= \sum_{j=1}^{D-2}\left(\prod_{i=1}^{j-1}\sin^{2}\vartheta_{i}\right){\rm d}\vartheta_{j}^{2}, \tag{7}\]
here, the coordinate system \((t,r,\vartheta_{1},\vartheta_{2},\ldots,\vartheta_{D-2})\) is utilized. The lapse function [21] is
\[f(r)=1-\frac{16\pi G_{D}m(r)}{(D-2)\Omega_{D-2}r^{D-3}}, \tag{8}\]
where \(\Omega_{D-2}\) denotes the volume of a \((D-2)\)-dimensional unit sphere [61; 62].
\[\Omega_{D-2}=\frac{2\pi^{\frac{D-1}{2}}}{\Gamma\left(\frac{D-1}{2}\right)}. \tag{9}\]
\(m(r)=\int_{0}^{+\infty}\rho_{\rm matter}(r)\Omega_{D-2}r^{2}{\rm d}r\) denotes the black hole's mass distribution, which is given by
\[m(r)=\frac{M}{\Gamma\left(\frac{D+k-1}{2}\right)}\gamma\left(\frac{D+k-1}{2},\frac{r^{2}}{4\theta}\right). \tag{10}\]
This enables us to rewrite the lapse function as follows:
\[f(r)=1-\frac{16\pi\frac{M}{\Gamma\left(\frac{D+k-1}{2}\right)}}{(D-2)\Omega_{ D-2}r^{D-3}}\cdot\gamma\left(\frac{D+k-1}{2},\frac{r^{2}}{4\theta}\right). \tag{11}\]
\(\gamma(a,x)\) is the lower incomplete gamma function. Geometric units are utilized throughout the rest of this text such that \(G_{D}=c=\hbar=k_{B}=1\).
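As an illustration of Eq. (11), the following sketch evaluates the lapse function and locates the outer horizon numerically. Note that SciPy's `gammainc(a, x)` is the regularized lower incomplete gamma, i.e., exactly the ratio \(\gamma(a,x)/\Gamma(a)\) appearing in Eq. (11); the parameter values and the scanning grid are illustrative choices of ours.

```python
import numpy as np
from scipy.special import gamma, gammainc
from scipy.optimize import brentq

def lapse(r, M, theta, D, k):
    """Lapse function f(r) of Eq. (11), in units G_D = c = 1."""
    a = (D + k - 1) / 2.0
    omega_d2 = 2.0 * np.pi**((D - 1) / 2.0) / gamma((D - 1) / 2.0)   # Eq. (9)
    prefac = 16.0 * np.pi * M / ((D - 2) * omega_d2 * r**(D - 3))
    # gammainc(a, x) = gamma_lower(a, x) / Gamma(a), the regularized form
    return 1.0 - prefac * gammainc(a, r**2 / (4.0 * theta))

def outer_horizon(M, theta, D, k, r_max=50.0):
    """Largest root of f(r) = 0, located by scanning for the last sign change."""
    rs = np.linspace(1e-3, r_max, 5000)
    fs = lapse(rs, M, theta, D, k)
    sign_change = np.where(np.sign(fs[:-1]) != np.sign(fs[1:]))[0]
    if len(sign_change) == 0:
        return None   # no horizon for this theta (theta too large)
    i = sign_change[-1]
    return brentq(lapse, rs[i], rs[i + 1], args=(M, theta, D, k))

print(outer_horizon(M=1.0, theta=0.1, D=4, k=0))   # close to 2M for small theta
```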
We next introduce several special black hole solutions, which are limiting cases of the non-commutative \(D\)-dimensional Schwarzschild black hole solution with a general smeared matter distribution. Eq. (11) approaches the lapse function of a \(D\)-dimensional Schwarzschild black hole [63] in the commutative limit \(\theta\to 0\) or for \(r\gg\sqrt{\theta}\). Eq. (11) describes the non-commutative \(D\)-dimensional Schwarzschild black hole [15] with a Gaussian smeared matter distribution when \(k=0\). When both of the above conditions are satisfied, Eq. (11) reduces to the Schwarzschild case.
### Perturbation equation and effective potential for scalar field
For these idealized black holes, we discuss only the scattering of massless scalar waves. This is the most basic scenario, and it allows us to easily grasp some properties of the black hole. Although this study focuses on massless scalar fields, identical strategies may be applied to any scattering problem. In this situation, the massless scalar wave in the spacetime background of Eq. (6) is described by the Klein-Gordon equation
\[(-g)^{-\frac{1}{2}}\partial_{\mu}[(-g)^{\frac{1}{2}}g^{\mu\nu}\partial_{\nu} \Psi]=0, \tag{12}\]
where \(g\) denotes the determinant of \(g_{\mu\nu}\) and \(\Psi\) represents the scalar field, which can be decomposed as [64]
\[\Psi(t,r,\vartheta_{i})=\sum_{l,m}r^{\frac{2-D}{2}}\psi_{l}(t,r)Y_{l,m}( \vartheta_{1},\vartheta_{2},\ldots,\vartheta_{D-2}), \tag{13}\]
where \(\vartheta_{i}\) denotes the \((D-2)\) angular coordinates, \(Y_{l,m}(\vartheta_{1},\vartheta_{2},\ldots,\vartheta_{D-2})\) denotes the \(D\)-dimensional spherical harmonic function, and \(\psi_{l}(t,r)=e^{i\omega t}\Phi_{\omega}(r)\). Using the tortoise coordinate transform \(r_{*}=\int f(r)^{-1}\mathrm{d}r\) and the separation of radial and angular variables, the Klein-Gordon equation may be reduced to a Schrodinger-like wave equation,
\[\left[\frac{\partial^{2}}{\partial r_{*}^{2}}+\omega^{2}-V_{\theta,k}(r_{*}) \right]\Phi_{\omega}=0. \tag{14}\]
The effective potential is
\[V_{\theta,k}(r)= f(r)\left[\frac{l(l+D-3)}{r^{2}}+\frac{D-2}{2r}\frac{\partial f (r)}{\partial r}\right]\] \[+ f(r)\left[\frac{(D-2)(D-4)}{4r^{2}}f(r)\right], \tag{15}\]
where \(l=0,1,2,\dots\) is the multipole number. The tortoise coordinate \(r_{*}\) is defined on the interval \((-\infty,+\infty)\) such that \(r_{*}\rightarrow+\infty\) corresponds to spatial infinity \(r\rightarrow+\infty\) and \(r_{*}\rightarrow-\infty\) corresponds to the event horizon \(r_{\mathrm{eh}}\). The effective potential defined above is positive and takes the form of a potential barrier that approaches zero at both spatial infinity and the event horizon. We find that as the parameters \(k\) or \(D\) grow, so does the maximum value \(V_{\mathrm{max}}\) of the effective potential barrier, whereas when the parameter \(\theta\) grows, the maximum value of the potential barrier \(V_{\theta,k}^{\mathrm{max}}(r)\) decreases. The influence of the parameter \(\theta\) becomes practically negligible as the value of \(k\) grows.
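The behavior of the barrier described above can be checked with a short numerical sketch of Eq. (15); here \(\partial f/\partial r\) is approximated by a central difference rather than evaluated analytically, which is adequate for plotting purposes, and the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gamma, gammainc

def lapse(r, M, theta, D, k):
    a = (D + k - 1) / 2.0
    omega_d2 = 2.0 * np.pi**((D - 1) / 2.0) / gamma((D - 1) / 2.0)
    return 1.0 - 16.0 * np.pi * M / ((D - 2) * omega_d2 * r**(D - 3)) \
               * gammainc(a, r**2 / (4.0 * theta))

def potential(r, M, theta, D, k, l, dr=1e-5):
    """Effective potential V_{theta,k}(r) of Eq. (15), with f'(r) by central difference."""
    f = lapse(r, M, theta, D, k)
    dfdr = (lapse(r + dr, M, theta, D, k) - lapse(r - dr, M, theta, D, k)) / (2.0 * dr)
    return f * (l * (l + D - 3) / r**2
                + (D - 2) * dfdr / (2.0 * r)
                + (D - 2) * (D - 4) * f / (4.0 * r**2))

r = np.linspace(2.05, 20.0, 500)               # outside the D = 4, k = 0 horizon (~2M)
V = potential(r, M=1.0, theta=0.1, D=4, k=0, l=2)
print(r[np.argmax(V)], V.max())                # location and height of the barrier
```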
### The allowable value of \(k\) and the valid range of \(\theta\)
According to Eq. (11), the formula for \(M\) and the event horizon radius \(r_{\mathrm{eh}}\) can be obtained.
\[M=\frac{(D-2)\Omega_{D-2}r_{\mathrm{eh}}^{D-3}}{16\pi}\cdot\frac{\Gamma\left( \frac{D+k-1}{2}\right)}{\gamma\left(\frac{D+k-1}{2},\frac{r_{\mathrm{eh}}^{2} }{4\theta}\right)}, \tag{16}\]
where \(r_{\mathrm{eh}}\) is the greatest root of \(f(r)=0\). The radius of an extreme black hole's event horizon \(r_{\mathrm{eh}}\) satisfies the relation \(\partial M/\partial r_{\mathrm{eh}}=0\), therefore, \(r_{\mathrm{eh}}\) satisfies the following formula,
\[2x_{\mathrm{eh}}^{D+k-1}e^{-x_{\mathrm{eh}}^{2}}=(D-3)\cdot\gamma\left(\frac{ D+k-1}{2},x_{\mathrm{eh}}^{2}\right), \tag{17}\]
where \(x_{\mathrm{eh}}\) is defined by \(x_{\mathrm{eh}}=r_{\mathrm{eh}}/\sqrt{4\theta}\).
The hoop conjecture [65; 66] needs to be considered here: the mean radius of the matter mass distribution must be smaller than the radius of the extreme black hole's event horizon, i.e., \(\bar{r}\leq r_{\mathrm{eh}}\) or \(\bar{x}\leq x_{\mathrm{eh}}\), where \(\bar{x}=\bar{r}/\sqrt{4\theta}\). Furthermore, the hoop conjecture ensures that the smallest (extreme) black holes have zero temperature and zero heat capacity. Black holes would not have an extreme configuration if the hoop conjecture were violated. Combining this with Eq. (3), the following condition is obtained:
\[\frac{\Gamma\left(\frac{D+k}{2}\right)}{\Gamma\left(\frac{D+k-1}{2}\right)} \leq x_{\mathrm{eh}}(D,k). \tag{18}\]
As a result, the permissible range of \(k\) for the non-commutative \(D\)-dimensional Schwarzschild-Tangherlini black hole corresponding to different values of \(D\) can be determined using Eq. (18), as shown in TABLE I. As TABLE I shows, the Gaussian matter distribution (\(k=0\)) is only applicable to non-commutative Schwarzschild-Tangherlini black holes in \(D=4\) and \(D=5\) spacetime.
It is worth mentioning that the equation \(f(r)=0\) for a \(D\)-dimensional Schwarzschild-Tangherlini black hole can have 0, 1, or 2 roots, i.e., 0, 1, or 2 horizons, depending on the value of \(\theta\). As a result, we may obtain the extreme value of the \(\theta\) parameter at which the inner and outer horizons coincide, denoted \(\theta_{\rm max}\). We compute \(\theta_{\rm max}\) for the permitted \(k\) in \(4\leq D\leq 11\) spacetime, as indicated in FIG. 1, and present the \(\theta_{\rm max}\) for the six smallest allowed values of \(k\) in TABLE 2 for clarity. We find that for \(4\leq D\leq 11\), \(\theta_{\rm max}\) decreases as the value of \(k\) grows.
\begin{table}
\begin{tabular}{c c c c c c c} \(D\) & \(k=0\) & \(k=1\) & \(k=2\) & \(k=3\) & \(k=4\) \\
4 & 0.275811 & 0.214662 & 0.17807 & 0.153293 & 0.135218 \\
5 & 0.0633279 & 0.0496268 & 0.0412099 & 0.035442 & 0.0312102 \\ \hline \(D\) & \(k=4\) & \(k=5\) & \(k=6\) & \(k=7\) & \(k=8\) \\
6 & 0.0231343 & 0.0207527 & 0.0188472 & 0.0172844 & 0.0159774 \\ \hline \(D\) & \(k=8\) & \(k=9\) & \(k=10\) & \(k=11\) & \(k=12\) \\
7 & 0.0150971 & 0.0140589 & 0.0131622 & 0.0123793 & 0.0116894 \\ \hline \(D\) & \(k=14\) & \(k=15\) & \(k=16\) & \(k=17\) & \(k=18\) \\
8 & 0.0107588 & 0.0102581 & 0.00980412 & 0.00939053 & 0.00901203 \\ \hline \(D\) & \(k=22\) & \(k=23\) & \(k=24\) & \(k=25\) & \(k=26\) \\
9 & 0.00818131 & 0.007913 & 0.00766244 & 0.00742791 & 0.00720788 \\ \hline \(D\) & \(k=32\) & \(k=33\) & \(k=34\) & \(k=35\) & \(k=36\) \\
10 & 0.00652586 & 0.00636871 & 0.00621922 & 0.00607684 & 0.00594106 \\ \hline \(D\) & \(k=43\) & \(k=44\) & \(k=45\) & \(k=46\) & \(k=47\) \\
11 & 0.0054962 & 0.0053935 & 0.00529468 & 0.00519954 & 0.00510786 \\ \end{tabular}
\end{table}
Table 2: \(\theta_{\mathrm{max}}\) values for different \(k\) and \(D\) values.
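The entries of TABLE 2 can be reproduced by solving Eq. (17) for \(x_{\rm eh}\), checking the hoop condition (18), and then inverting Eq. (16) at the extremal radius. The sketch below is our own reconstruction of that procedure, assuming the mass is normalized to \(M=1\); for example, it returns \(\theta_{\rm max}\simeq 0.2758\) for \(D=4\), \(k=0\), matching the first entry of TABLE 2.

```python
import numpy as np
from scipy.special import gamma, gammainc
from scipy.optimize import brentq

def x_extremal(D, k):
    """Solve Eq. (17) for the extremal horizon x_eh = r_eh / sqrt(4*theta)."""
    a = (D + k - 1) / 2.0
    g = lambda x: 2.0 * x**(D + k - 1) * np.exp(-x**2) \
                  - (D - 3) * gamma(a) * gammainc(a, x**2)
    return brentq(g, 1e-3, 20.0)

def hoop_allowed(D, k):
    """Hoop-conjecture condition, Eq. (18): mean radius <= extremal horizon."""
    return gamma((D + k) / 2.0) / gamma((D + k - 1) / 2.0) <= x_extremal(D, k)

def theta_max(D, k, M=1.0):
    """Extremal theta obtained by inverting Eq. (16) at x_eh = x_extremal."""
    a = (D + k - 1) / 2.0
    x = x_extremal(D, k)
    omega_d2 = 2.0 * np.pi**((D - 1) / 2.0) / gamma((D - 1) / 2.0)
    four_theta = (16.0 * np.pi * M * gammainc(a, x**2)
                  / ((D - 2) * omega_d2 * x**(D - 3)))**(2.0 / (D - 3))
    return four_theta / 4.0

print(hoop_allowed(4, 0), theta_max(4, 0))   # True, ~0.2758 (first entry of Table 2)
print(hoop_allowed(6, 0))                    # False: k = 0 violates the hoop condition for D = 6
```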
### Shadow of \(4-D\) cases
We discuss the shadow in \(4-D\) non-commutative Schwarzschild-Tangherlini black hole spacetimes in this subsection. In general, the geodesics in black hole spacetimes are described using the formula [51]
\[\dot{s}^{2}=-f(r)\dot{t}^{2}+f^{-1}(r)\dot{r}^{2}+r^{2}(\dot{\theta}^{2}+\sin^ {2}\theta\dot{\phi}^{2})=\lambda, \tag{19}\]
where the overdot denotes the derivative with respect to an affine parameter. For massless particles we have \(\lambda=0\).
It is useful to introduce the function \(h(r)^{2}\)
\[h(r)^{2}=\frac{r^{2}}{f(r)}, \tag{20}\]
In terms of \(h(r)^{2}\), the radius of the outermost photon sphere \(r_{\rm ph}\) is the largest root of the equation [67]
\[\frac{\rm d}{\rm d r}h(r)^{2}=0, \tag{21}\]
The critical impact parameter \(b_{\rm cr}\) and the radius of the photon sphere \(r_{\rm ph}\) are related by
\[b_{\rm cr}=h(r_{\rm ph}). \tag{22}\]
The shadow radius \(r_{\rm sh}\) is given by
\[r_{\rm sh}=\left.\frac{r}{\sqrt{f(r)}}\right|_{r_{\rm ph}}. \tag{23}\]
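A minimal numerical sketch of Eqs. (20)-(23) for the \(4-D\) case is given below; the horizon and photon sphere are located numerically, and the parameter values are illustrative. For small \(\theta\) the result approaches the Schwarzschild values \(r_{\rm ph}=3M\) and \(r_{\rm sh}=b_{\rm cr}=3\sqrt{3}\,M\).

```python
import numpy as np
from scipy.special import gammainc
from scipy.optimize import brentq, minimize_scalar

def lapse4(r, M, theta, k):
    """4-D lapse function, i.e., Eq. (11) with D = 4."""
    return 1.0 - (2.0 * M / r) * gammainc((3 + k) / 2.0, r**2 / (4.0 * theta))

def shadow(M, theta, k):
    f = lambda r: lapse4(r, M, theta, k)
    rs = np.linspace(0.05, 20.0 * M, 4000)
    i = np.where(np.sign(f(rs)[:-1]) != np.sign(f(rs)[1:]))[0][-1]
    r_eh = brentq(f, rs[i], rs[i + 1])                 # outer horizon
    h2 = lambda r: r**2 / f(r)                         # Eq. (20)
    # for these metrics the outermost extremum of h^2 outside the horizon is a minimum
    r_ph = minimize_scalar(h2, bounds=(1.001 * r_eh, 20.0 * M),
                           method='bounded').x         # Eq. (21)
    b_cr = np.sqrt(h2(r_ph))                           # Eq. (22)
    r_sh = r_ph / np.sqrt(f(r_ph))                     # Eq. (23)
    return r_ph, b_cr, r_sh

print(shadow(M=1.0, theta=0.1, k=0))   # ~ (3, 3*sqrt(3), 3*sqrt(3)) for small theta
```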
The constraints derived in Ref. [68] will be used to limit the parameters of the \(4-D\) non-commutative Schwarzschild-Tangherlini black hole in what follows. The \(1\sigma\) and \(2\sigma\) constraints are, respectively,
\[4.54\lesssim r_{\rm sh}/M\lesssim 5.22, \tag{24}\]
and
\[4.20\lesssim r_{\rm sh}/M\lesssim 5.56. \tag{25}\]
These constraints are consistent with those reported in Ref. [69]. The estimates come from two completely independent observations, Keck-based and VLTI-based.
In Fig. 2 we plot the shadow radius of the \(4-D\) non-commutative Schwarzschild-Tangherlini black hole as a function of the non-commutative parameter \(\theta\) for \(k=0,1,2,3,4\). We find that neither \(k\) nor \(\theta\) has much effect on the black hole shadow. We only consider the 4-dimensional case here, since the estimates from these observations are based on 4-dimensional GR and cannot easily be extended to the higher-dimensional situation.
Because of the existence of the innermost unstable photon sphere at \(r=r_{\rm ph}\), photons can leave the photon sphere at any angle. This feature indicates that the glory phenomenon will occur [51]. Glories, like those seen in optics, are bright spots or halos that appear in the scattering intensity in the backward direction, whose intensity and size depend on the wavelength of the incident perturbation. The approximation developed by Matzner et al. [66] may be used to estimate the magnitude and size of these bright spots.
## III Numerical results
To determine the absorption and scattering cross sections of the scalar field scattered by the black hole, we must solve the radial equation with appropriate boundary conditions. For our scattering problem, we consider plane scalar waves incoming from past null infinity. Therefore, we are interested in solutions of Eq. (14) subject to the following boundary conditions,
\[\phi_{\omega l}(r_{*})\approx\begin{cases}A_{\rm in}e^{-i\omega r_{*}}+A_{\rm out }e^{i\omega r_{*}}&\text{for $r_{*}\to\infty$;}\\ e^{-i\omega r_{*}}&\text{for $r_{*}\to-\infty$}\end{cases} \tag{26}\]
and satisfy the conservation relation \(1+|A_{\rm out}|^{2}=|A_{\rm in}|^{2}\). The scattered wave's phase shift is defined as
\[e^{2i\delta_{l}}=(-1)^{l+1}\frac{A_{\rm out}}{A_{\rm in}}. \tag{27}\]
Figure 2: Shadow radius \(r_{sh}\) of the \(4-D\) non-commutative Schwarzschild-Tangherlini black hole as a function of the non-commutative parameter \(\theta\) (\(k=0,1,2,3,4\) corresponding to the yellow, red, green, blue and black solid lines, respectively). The value of \(\theta\) starts from \(\theta\to 0\). The white and light gray zones match the EHT horizon-scale image of Sgr A* at \(1\sigma\) and \(2\sigma\), respectively. The gray area represents values outside these limits.
According to quantum mechanics, when an incident wave travels through a potential barrier, some of it is reflected back and part of it is transmitted across the potential barrier. The transmission amplitude \(T_{l}\) and reflection amplitude \(R_{l}\) fulfill flux conservation.
\[|T_{l}|^{2}+|R_{l}|^{2}=1, \tag{28}\]
where \(T_{l}\) and \(R_{l}\) denote the transmission amplitude \(T_{l}=1/A_{\rm in}\) and the reflection amplitude \(R_{l}=A_{\rm out}/A_{\rm in}\), respectively.
Figure 3: The partial absorption cross section and total absorption cross section of high dimensional non–commutative black holes with different values of \(k\) and \(D\) are plotted respectively, and the mass is normalized to M=1. The parameter \(\theta\) has a slight effect on the absorption cross section.
### Absorption cross section
It is well known that the total absorption cross section can be calculated as
\[\sigma_{\rm abs}=\sum_{l=0}^{\infty}\sigma_{\rm abs}^{(l)}, \tag{29}\]
where \(\sigma_{\rm abs}^{(l)}\) denotes the partial absorption cross section and can be expressed by transmission coefficient:
\[\sigma_{\rm abs}^{(l)}=\frac{\pi}{\omega^{2}}(2l+1)(1-|e^{2i\delta_{l}}|^{2}) =\frac{\pi}{\omega^{2}}(2l+1)|T_{\omega l}|^{2}. \tag{30}\]
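One schematic numerical route to the transmission coefficients is to integrate Eq. (14) outward from just outside the horizon, starting with the purely ingoing behaviour of Eq. (26), and to read off \(A_{\rm in}\) from the solution at large radius. The sketch below is not necessarily the method used in this work; the starting radius, matching radius, and tolerances are illustrative and would require convergence checks. It then assembles the partial absorption cross sections via Eq. (30).

```python
import numpy as np
from scipy.special import gamma, gammainc
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

D, k, theta, M = 4, 0, 0.1, 1.0   # illustrative parameters

def f(r):
    a = (D + k - 1) / 2.0
    omega_d2 = 2.0 * np.pi**((D - 1) / 2.0) / gamma((D - 1) / 2.0)
    return 1.0 - 16.0 * np.pi * M / ((D - 2) * omega_d2 * r**(D - 3)) \
               * gammainc(a, r**2 / (4.0 * theta))

def V(r, l, dr=1e-6):
    fr = f(r)
    dfdr = (f(r + dr) - f(r - dr)) / (2.0 * dr)
    return fr * (l * (l + D - 3) / r**2 + (D - 2) * dfdr / (2.0 * r)
                 + (D - 2) * (D - 4) * fr / (4.0 * r**2))

rs = np.linspace(0.05, 20.0, 4000)
i = np.where(np.sign(f(rs)[:-1]) != np.sign(f(rs)[1:]))[0][-1]
r_eh = brentq(f, rs[i], rs[i + 1])                # outer event horizon

def transmission(omega, l, r_far=400.0):
    """|T_l|^2 from integrating Eq. (14) outward with the ingoing condition of Eq. (26)."""
    def rhs(r, y):
        u = y[0] + 1j * y[1]                      # Phi
        w = y[2] + 1j * y[3]                      # dPhi/dr_*
        du = w / f(r)                             # d/dr = (1/f) d/dr_*
        dw = (V(r, l) - omega**2) * u / f(r)
        return [du.real, du.imag, dw.real, dw.imag]
    r0 = r_eh * (1.0 + 1e-4)
    y0 = [1.0, 0.0, 0.0, -omega]                  # Phi = e^{-i omega r_*}, with r_*(r0) = 0
    sol = solve_ivp(rhs, (r0, r_far), y0, method='DOP853', rtol=1e-7, atol=1e-9)
    u = sol.y[0, -1] + 1j * sol.y[1, -1]
    w = sol.y[2, -1] + 1j * sol.y[3, -1]
    A_in = 0.5 * (u + 1j * w / omega)             # ingoing amplitude (up to a phase)
    return 1.0 / abs(A_in)**2

omega = 0.6
sigma_l = [np.pi / omega**2 * (2 * l + 1) * transmission(omega, l) for l in range(6)]
print(sigma_l, sum(sigma_l))                      # partial terms and their sum, Eqs. (29)-(30)
```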
In FIG. 3, for various values of the \(k\) and \(D\) parameters, we illustrate the partial absorption cross sections \(\sigma_{\rm abs}^{(l)}\) (for \(l=0,1,2,3,4,5\)) and
Figure 4: The logarithm values of the differential scattering cross sections of high dimensional non–commutative black holes with different values of \(k\) and \(D\) are plotted in the range of \(\vartheta=0^{\circ}\) to \(\vartheta=180^{\circ}\), with \(\omega M\)=1 fixed.
total absorption cross sections \(\sigma_{\rm abs}\) of the \(D\)-dimensional non-commutative black holes with different metric parameters. The mass \(M\) is normalized to 1. It is clear that each partial absorption cross section \(\sigma_{\rm abs}^{(l)}\) tends to vanish as \(\omega M\) increases. In the zero-frequency limit, the partial wave with \(l=0\) provides a nonzero absorption cross section [51]. Furthermore, for each value of \(l>0\), the corresponding partial absorption cross section \(\sigma_{\rm abs}^{(l)}\) starts from zero, reaches a peak, and decreases asymptotically in the high frequency zone. The larger the value of \(l\), (i) the smaller the corresponding maximum of \(\sigma_{\rm abs}^{(l)}\), and (ii) the larger the value of \(\omega M\) associated with the peak of \(\sigma_{\rm abs}^{(l)}\). This is consistent with the fact that the larger the value of \(l\), the higher the scattering potential \(V_{\theta,k}\).
FIG. 3 further demonstrates that the parameter \(\theta\) has only a slight influence on the absorption cross section. For the \(D=4\) cases with \(\theta\rightarrow\theta_{\rm max}\), the absorption cross section shows an increasing trend, whereas in the \(D\geq 5\) cases it shows a decreasing trend. The total absorption cross section \(\sigma_{\rm abs}\) oscillates as it approaches the geometric optics limit: when \(\omega M\gg 1\) reaches the high frequency zone, the total absorption cross section goes to the geometrical optical limit \(\sigma_{\rm abs}^{\rm hf}=\pi b_{\rm cr}^{2}\)[51], where \(b_{\rm cr}\) is the critical impact parameter [67]. A larger value of the dimensionless number \(\omega M\) is required to attain the geometrical optical limit as \(D\) increases.
For \(D=4\), we show the cases \(k=\)0, 1, and 2. With increasing \(k\), \(\theta_{\rm max}\) becomes smaller and smaller. We can see that the value of \(k\) has little effect on the absorption cross section. For \(D=4\) and \(k=0\), the absorption cross section tends to the Schwarzschild case as \(\theta\to 0\), which is consistent with the expected outcome.
When \(D\geq 5\), the situation changes dramatically. As \(D\) increases, the absorption cross section decreases, as usual. However, in the low frequency range, when \(\theta\) is small and \(l\) is relatively small, the absorption cross section begins to oscillate and shows a significant peak. The absorption cross section curve in the low frequency zone is smooth when \(\theta\) is larger and approaches \(\theta_{\rm max}\). In both circumstances, the high frequency region is smooth and the oscillations disappear. In contrast to the \(D=4\) case, the total absorption cross section oscillates only slightly and approaches the limiting value at much larger \(\omega M\).
### Scattering cross section
It is widely known that the scattering amplitude is defined as
\[g(\vartheta)=\frac{1}{2i\omega}\sum_{l=0}^{\infty}(2l+1)(e^{2i\delta_{l}}-1)P _{l}(\cos\vartheta) \tag{31}\]
and the differential scattering cross section
\[\frac{\mathrm{d}\sigma_{\rm sca}}{\mathrm{d}\Omega}=|g(\vartheta)|^{2}. \tag{32}\]
As a direct consequence, the scattering cross section is obtained
\[\sigma_{\rm sca}(\omega)=\int\frac{\mathrm{d}\sigma}{\mathrm{d}\Omega}\mathrm{d}\Omega=\frac{\pi}{\omega^{2}}\sum_{l=0}^{\infty}(2l+1)|e^{2i\delta_{l}}-1|^{2}. \tag{33}\]
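Given a set of phase shifts \(\delta_{l}\) extracted from Eq. (27), the partial-wave sums (31)-(32) can be assembled directly. The sketch below uses purely illustrative placeholder phase shifts (not computed from the metric); in practice many partial waves are required, and a series-reduction or convergence-acceleration technique is typically needed, especially near the forward direction.

```python
import numpy as np
from scipy.special import eval_legendre

def dsigma_domega(angles, phase_shifts, omega):
    """Differential scattering cross section from Eqs. (31)-(32), given
    (possibly complex) phase shifts delta_l for l = 0, ..., l_max."""
    x = np.cos(angles)
    g = np.zeros_like(angles, dtype=complex)
    for l, delta in enumerate(phase_shifts):
        g += (2 * l + 1) * (np.exp(2j * delta) - 1.0) * eval_legendre(l, x)
    g /= 2j * omega
    return np.abs(g)**2

# purely illustrative placeholder phase shifts (hypothetical values)
omega = 1.0
deltas = 0.4 * np.exp(-0.3 * np.arange(20)) * (1 + 0.2j)
angles = np.linspace(0.1, np.pi, 200)
print(dsigma_domega(angles, deltas, omega)[-1])   # value near 180 degrees (backward glory region)
```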
FIG. 4 shows the differential scattering cross section as a function of the scattering angle at \(\omega M=1\). For some black holes, we find that the scattering flux is strengthened and its width becomes narrower in the forward direction, i.e., the scalar field scattering becomes more concentrated. At fixed frequency in the low frequency range, the glory peak is higher and the glory width becomes narrower. As a result, the glory phenomena in the forward and backward directions lend themselves advantageously to astronomical observation. Furthermore, we can see that the impact of non-commutativity on the differential scattering cross section is stronger at large angles.
In the \(D=4\) cases, the glory peaks become narrower as the non-commutative parameter \(\theta\) increases; nevertheless, they remain wider than in the higher-dimensional cases. When \(D\geq 5\), with \(\omega M=1\) still fixed, the scattering pattern is similar to high frequency scattering and the glory peaks are quite narrow.
## IV Conclusions
In summary, we have investigated the scattering and absorption cross sections of the massless scalar field for several high dimensional non-commutative black holes with smeared matter sources. We have investigated the dependence of these cross sections on the frequency \(\omega M\) and on the parameters \(D\), \(k\), and \(\theta\), and we have exhibited results for various values of these parameters.
Let us summarize the scattering and absorption of massless scalar fields by high dimensional non-commutative black holes. For the absorption cross section in \(D=4\) with fixed \(k\), our computational results indicate that the larger \(\theta\) is, the lower the associated total absorption cross section is. The total absorption cross section oscillates as it approaches the geometrical optical limit in the high frequency zone. In the \(D\geq 5\) cases, the absorption cross section shows a decreasing trend; the total absorption cross section oscillates only slightly as it approaches the geometrical optical limit, and it does so at much higher frequencies compared with the \(D=4\) cases.
For the scattering cross section at fixed \(\omega M\), the scattering flux is intensified and its width narrows in the forward direction for some black holes. The scattering of the scalar field becomes more concentrated, and the glory phenomenon in the forward and backward directions lends itself well to astronomical observation. The glory peaks become thinner as the non-commutative parameter \(\theta\) rises when \(D=4\). When \(D\geq 5\), with \(\omega M=1\) likewise fixed, the scattering pattern resembles high frequency scattering and the glory widths are relatively small compared with \(D=4\). The influence of non-commutativity on the differential scattering cross section is particularly pronounced at large angles.
In theory, high-precision measurements of massless scalar wave fluxes scattered by black holes might one day be used to estimate a black hole's hair. A more immediate possibility is that scattering and absorption patterns may be observed with black hole analogue systems created in the laboratory [70; 71].
###### Acknowledgements.
The authors thank Dr. Z.N. Yan for his positive help and useful discussion. This work was supported by the National Key Research and Develop Program of China under Contract No. 2018YFA0404404.
|
2308.14658 | Adversarial Predictions of Data Distributions Across Federated
Internet-of-Things Devices | Federated learning (FL) is increasingly becoming the default approach for
training machine learning models across decentralized Internet-of-Things (IoT)
devices. A key advantage of FL is that no raw data are communicated across the
network, providing an immediate layer of privacy. Despite this, recent works
have demonstrated that data reconstruction can be done with the locally trained
model updates which are communicated across the network. However, many of these
works have limitations with regard to how the gradients are computed in
backpropagation. In this work, we demonstrate that the model weights shared in
FL can expose revealing information about the local data distributions of IoT
devices. This leakage could expose sensitive information to malicious actors in
a distributed system. We further discuss results which show that injecting
noise into model weights is ineffective at preventing data leakage without
seriously harming the global model accuracy. | Samir Rajani, Dario Dematties, Nathaniel Hudson, Kyle Chard, Nicola Ferrier, Rajesh Sankaran, Peter Beckman | 2023-08-28T15:40:50Z | http://arxiv.org/abs/2308.14658v1 | # Adversarial Predictions of Data Distributions Across Federated Internet-of-Things Devices
###### Abstract
Federated learning (FL) is increasingly becoming the default approach for training machine learning models across decentralized Internet-of-Things (IoT) devices. A key advantage of FL is that no raw data are communicated across the network, providing an immediate layer of privacy. Despite this, recent works have demonstrated that data reconstruction can be done with the locally trained model updates which are communicated across the network. However, many of these works have limitations with regard to how the gradients are computed in backpropagation. In this work, we demonstrate that the model weights shared in FL can expose revealing information about the local data distributions of IoT devices. This leakage could expose sensitive information to malicious actors in a distributed system. We further discuss results which show that injecting noise into model weights is ineffective at preventing data leakage without seriously harming the global model accuracy.
Federated Learning, Data Leakage, Internet-of-Things, Data Privacy, Sensor Networks
## I Introduction
_Federated Learning_ (FL) is a method for decentralized machine learning which can seamlessly train models across many _Internet-of-Things_ (IoT) devices. In conventional decentralized machine learning, IoT devices send their training data to a central location (e.g., remote cloud server) to train a deep neural network or some other model. This approach is unattractive due to its high communication cost [1] and its requirement that all data--including possibly sensitive data--be shared across a network [2]. Under FL, no raw data are communicated over the network. Instead, the central server is sent only locally trained model weights provided by the IoT devices. In short, FL begins with a central server initializing a "global" machine learning model with random weights, and then the FL training loop begins. The FL training loop, depicted in Fig. 1, is as follows: _(i)_ the central server shares the current global model with the IoT devices; _(ii)_ the IoT devices receive a copy of the global model and train it on their private data; _(iii)_ the IoT devices send back their locally trained copies of the model to the central server; _(iv)_ the central server aggregates all the received locally trained models to update the global model; and the loop repeats.
Because no raw data are communicated over the network in FL, it is a promising method for training a model on sensitive or private data. However, since FL requires that clients send their model updates across a network, the extent to which FL actually protects client data is unclear.
Client models are trained on local data, meaning they could contain implicit revealing information about private training samples. Recent work has shown that gradients from a small batch of samples can be used to produce pixel-wise reconstructions of images and token-wise reconstructions of text [3]. While these findings are alarming, their application is limited to settings in which the gradients being transmitted are from a single, small batch of at most eight samples [3], or to a limited extent when gradients are averaged over a larger number of samples [4]. Other work [5] has shown that it is possible to analytically reconstruct ground-truth labels for training samples from gradients, but this technique is viable only when gradients are known for every single training sample. By contrast, in typical FL settings, client models undergo one or more epochs of training over their local data before their parameters are sent to the server, rather than just a single gradient update on a small batch of samples.
In this work, we show that even in realistic federated
Fig. 1: The standard steps for FL process: global model distribution, client model training, client model communication, and aggregation. The steps in red outline the procedure by which an adversary might conduct an attack.
settings, sharing client model gradients or parameters leaks private information about training samples. In particular, we show that an adversary could train a deep neural network to extract a client's label distribution from its shared model parameters with near-perfect accuracy. In many federated settings, the ability of an adversary to predict data distributions of IoT devices can be dangerous. Consider, for example, a federated language modelling task in which the words of text messages sent by users of mobile phones serve as labels. In this setting, it is conceivable that an adversary who is able to reconstruct the labels of the samples on client devices might be able to deduce identifiable information about a user. An adversary might, for example, be able to make a list of clients whose text messages contain information related to banking, and then target these clients in the future, with more refined attacks. Moreover, the clients participating in FL might be under the false assumption that their privacy is assured because training samples remain on their device. To understand the extent to which user data is protected, we need a deeper understanding of what kinds of attacks are possible and what kinds of mitigation strategies are effective.
Our contributions can be summarized as follows: 1) we show empirically that client model parameters contain information about the labels of samples on which they were trained; 2) we perform several experiments that suggest explanations for the presence of this information; 3) we demonstrate that a deep neural network can be trained to recover information about the distributions of client labels; and 4) we test the ability of simple defenses, such as adding Gaussian and Laplacian noise to gradients, to prevent the leakage of information about client label distributions. Our findings have significant implications for the use of FL in privacy-sensitive settings.
## II Related Work
Much of the popularity of FL can be credited to the intuitive idea that it improves data privacy in decentralized machine learning since no raw data are communicated across the network [6, 7, 8]. However, recent works have begun to challenge this assumption [5]. Zhu et al. coin the notion of "deep leakage from gradients" to refer to their demonstrated optimization-based approach for reconstructing data from gradient updates [3]. Similarly, Geiping et al. demonstrate a numerical reconstruction method which reconstructs images from a neural network gradient update [4]. However, these works have to relax the problem of data reconstruction substantially to demonstrate the effectiveness of their techniques. Specifically, their approaches are most effective when the gradient update from training the model is done on a single image. They both note that multi-image reconstruction can be done but that the effectiveness of the technique suffers when gradients are averaged across modest batch sizes \(B\geq 8\).
These growing concerns around data leakage from model updates have encouraged exploration into additional privacy-preserving techniques for FL. Some of the most popular include homomorphic encryption [9, 10] and differential privacy [11, 12, 13, 14]. Homomorphic encryption is a technique that encrypts data in such a way that they are equivariant to many mathematical transformations. This means that the global model can be updated by an average of encrypted model weights. These averaged model weights can then be decrypted to train the models locally without corrupting the local models [15]. A central concern surrounding homomorphic encryption for FL is that the resource cost of encrypting data on resource-constrained IoT devices is non-trivial [16, 17]. Differential privacy takes a different, less resource-costly approach. This technique directly injects an amount of noise, \(\epsilon>0\), into client model updates before they are sent to aggregation node [18, 19]. While cheaper to perform, there is an obvious trade-off to consider. If \(\epsilon\) is too large, the global model's accuracy will suffer; if \(\epsilon\) is too small, privacy benefits are reduced.
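As a concrete, if simplified, illustration of the noise-injection idea, the snippet below perturbs a flattened weight vector with Gaussian or Laplacian noise before it would be shared. Calibrating the noise scale to a formal differential-privacy guarantee additionally requires a sensitivity analysis, which is omitted here; the weight vector and scale are stand-in values.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=1000)          # stand-in for a flattened client model update

def perturb(w, scale, kind="gaussian"):
    """Add Gaussian or Laplacian noise to a weight vector before sharing it."""
    noise = rng.normal(0.0, scale, w.shape) if kind == "gaussian" \
            else rng.laplace(0.0, scale, w.shape)
    return w + noise

noisy = perturb(weights, scale=0.1, kind="laplacian")
print(np.linalg.norm(noisy - weights) / np.linalg.norm(weights))   # relative distortion
```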
Our work takes a fundamentally different approach from prior works in the FL privacy literature. While the works on data reconstruction from model gradients are critically important in the FL domain, we argue that they rely on unrealistic simplifying assumptions related to the amount of data model updates are trained on to support the reconstruction. Instead, we consider a more realistic scenario where the amount of data on IoT devices is larger than a handful of data samples (i.e., \(\gg 8\)). We then show that even with this more challenging scenario, the local data _distributions_ of IoT devices in an FL system can still be approximated based on the model weights.
## III Problem Description & Research Questions
We consider a standard two-tier FL system comprised of decentralized IoT devices with local, private data and a central aggregation node which manages the FL process. We seek to learn the extent to which client information is compromised by sharing client gradients. In particular, could an adversary use the model parameters shared by clients to deduce information about their private training data?
Further, we study an adversary who has access to information about the federated training (e.g., an honest-but-curious server). In the case of a supervised objective, we assume the adversary has access to the dataset labels and their order in the output layer of the network. In the case of an unsupervised objective, as we will show in Section IV-A, model representations contain relevant information about the input features of the data, and an adversary could potentially still deduce information about client data distributions. We also assume the adversary has access to the client optimization parameters, such as the learning rate \(\eta\) and the number of local training epochs \(E\). We focus on the possibility that an adversary could deduce information about the distribution of a client's training samples. Client training data are often not independently and identically distributed (non-IID), and their data distributions may contain private information.
Specifically, we are interested in answering the following questions: **Q1.** What is the relationship between client model parameters and the data distribution on which the parameters were learned? **Q2.** Can a client model's parameters be used
directly to predict the distribution on which they were learned? We expand on our approach to each of these questions below.
### _Visualizing Client Model Parameters_
To understand the relationship between client model parameters and the data distribution on which they were learned, we train clients on a variety of data distributions and visualize their model parameters using principal component analysis (PCA). We refer to clients trained on synthetic data distributions as "dummy clients," and to the space of dimension-reduced model parameters as a "model-latent space."
In Section IV, we show several experiments which explain the relationship between the position of a client model in the model-latent space to the client's data distribution. By performing a layer-by-layer decomposition of the client model parameters, and by examining the behavior of client model parameters when the training objective is unsupervised, we also propose an explanation for this clustering.
### _Predicting Client Data Distributions_
The existence of a relationship between client model parameters and the data distribution used for learning them raises a second question: can a model's position in the model-latent space be used directly to predict a client's label distribution? To answer this question, we introduce the notion of a "meta-dataset." For client models trained on disjoint training samples \(\mathcal{P}_{k}\) from some original dataset, the meta-dataset is a mapping from client model parameters \(w_{k}\) to label distributions \(\text{Pr}(y=y_{i}\ |\ i\in\mathcal{P}_{k})\). We propose that an adversary might be capable of training a deep neural network on a synthetic meta-dataset, enabling them to extract information about clients.
In Algorithm 1, we propose one potential strategy an adversary might use to obtain information about client data distributions. Later, in Section IV, we validate empirically the effectiveness of this strategy in the presence of a simple defense: the addition of Gaussian and Laplacian noise to gradients. The adversary first intercepts the global model \(w_{t}\) sent by the server to the clients (line 1). They then construct a meta-dataset by creating a large number of dummy clients whose local training data comes from a synthetic non-IID distribution (lines 3-6). The client models are initialized with the intercepted global model parameters and trained on their local training data. Then, their trained parameters \(\hat{w}^{k}\) are reduced to some small number of dimensions using PCA (lines 8-10). Each dummy client provides one training sample in the meta-dataset, in which the PCA-reduced model parameters are the inputs, and the label distributions are the outputs. The adversary then trains a neural network on the meta-dataset (lines 11-14); the dimension of the PCA-reduced model parameters \(\hat{w}^{k}_{\text{pca}}\) should be sufficiently small such that they can fit in the input layer. A key assumption is that, in order to train the "dummy" client models, the adversary can access a set of proxy training samples that are similar to the training samples on the client devices. Upon intercepting model parameters sent from the clients to the server, they could apply the same PCA model to these parameters, and use their trained label distribution predictor to perform inference (lines 15-17).
```
 1  intercept global model weights, \(w_{t}\)
 2  \(\hat{S}^{t}\leftarrow\) set of dummy clients
 3  foreach dummy client \(k\in\hat{S}^{t}\) do
 4      \(\hat{\mathcal{P}}^{k}\leftarrow\) (random subset of proxy data)
 5      \(\hat{w}^{k}\leftarrow\) ClientUpdate\((k,w_{t})\)
 6      \(y^{k}\leftarrow\text{Pr}(y=y_{i}\ |\ i\in\mathcal{P}_{k})\)
 7  \(W_{\text{pca}}\leftarrow\) (fit PCA model on the \(\hat{w}^{k}\))
 8  foreach dummy client \(k\in\hat{S}^{t}\) do
 9      \(\hat{w}^{k}_{\text{pca}}\leftarrow\) (apply \(W_{\text{pca}}\) to \(\hat{w}^{k}\))
10  initialize label distribution predictor \(f_{\theta}\)
11  \(\mathcal{B}\leftarrow\) (split the \((\hat{w}^{k}_{\text{pca}},y^{k})\) into batches of size \(\hat{B}\))
12  foreach local epoch \(e=1,\dots,\hat{E}\) do
13      foreach batch \(b\in\mathcal{B}\) do
14          \(\theta\leftarrow\theta-\eta^{\prime}\nabla\hat{\ell}(\theta;b)\)
15  foreach client \(k\in S_{t}\) do
16      intercept client model weights \(w^{k}_{t+1}\)
17      \(y^{k}_{\text{(pred)}}\gets f_{\theta}(w^{k}_{t+1})\)
18
19  Procedure ClientUpdate\((k,w)\)
20      \(\mathcal{B}\leftarrow\) (split \(\hat{\mathcal{P}}_{k}\) into batches of size \(B\))
21      foreach local epoch \(e=1,\dots,E\) do
22          foreach batch \(b\in\mathcal{B}\) do
23              \(w\gets w-\eta\nabla\ell(w;b)\)
24      return \(w\) to server
```
**Algorithm 1** Adversarial data distribution prediction
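To make the attack pipeline concrete, the following is a minimal end-to-end sketch of Algorithm 1 using PyTorch and scikit-learn. It is not the implementation used in our experiments: the tiny MLP, the number of dummy clients, and helpers such as `flatten_params` and `client_update` are illustrative stand-ins, and random tensors replace the proxy data.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

def flatten_params(model):
    # Concatenate every parameter tensor into one flat vector.
    return torch.cat([p.detach().flatten() for p in model.parameters()]).numpy()

def make_model():
    return nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))

def client_update(global_state, data, labels, epochs=1, lr=1e-2):
    # Dummy client: start from the intercepted global weights and train locally.
    model = make_model()
    model.load_state_dict(global_state)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), labels).backward()
        opt.step()
    return flatten_params(model)

torch.manual_seed(0)
rng = np.random.default_rng(0)
global_state = make_model().state_dict()          # intercepted global model

# Build the meta-dataset: PCA-reduced dummy-client weights -> label distribution.
meta_X, meta_y = [], []
for _ in range(200):
    label_dist = rng.dirichlet(np.full(10, 0.5))        # synthetic non-IID split
    labels = torch.as_tensor(rng.choice(10, 256, p=label_dist))
    data = torch.randn(256, 784)                        # stand-in for proxy samples
    meta_X.append(client_update(global_state, data, labels))
    meta_y.append(label_dist)

pca = PCA(n_components=10).fit(np.stack(meta_X))        # W_pca in Algorithm 1
X = torch.tensor(pca.transform(np.stack(meta_X)), dtype=torch.float32)
y = torch.tensor(np.stack(meta_y), dtype=torch.float32)

# Label-distribution predictor f_theta (a small stand-in for the 8-layer MLP).
predictor = nn.Sequential(nn.Linear(10, 256), nn.ReLU(),
                          nn.Linear(256, 10), nn.Softmax(dim=-1))
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    nn.functional.mse_loss(predictor(X), y).backward()
    opt.step()

# Attack step: PCA-reduce an intercepted client update and predict its labels.
intercepted = client_update(global_state, torch.randn(256, 784),
                            torch.randint(0, 10, (256,)))
pred = predictor(torch.tensor(pca.transform(intercepted[None]), dtype=torch.float32))
```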
## IV Experimental Design & Results
### _Visualizing Client Model Parameters_
As a starting point for understanding the relationship between client model parameters and their label distributions, we replicate and extend an experiment performed by Wang et al. [20]. We initialize a global model, which is broadcast to 100 clients with disjoint training sets. For each of the ten labels in the dataset, ten clients are provided with a synthetic non-IID training set, according to one of two sampling schemes. In the "80%-20% sampling" scheme, we provide each client with a data distribution in which 80% of the samples have one label, and the remaining 20% are uniformly distributed across the remaining labels. In the "Dirichlet sampling" scheme, client data distributions are sampled from a Dirichlet distribution. Each client is trained for one epoch, and all client model parameters are reduced to two dimensions via PCA for visualization.
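The sampling and visualization step can be sketched as follows; `client_params` is a placeholder for the flattened, locally trained client weights rather than real MNIST or CIFAR-10 models.

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

def sample_80_20(labels, dominant, n, rng):
    # 80% of the samples carry the dominant label, 20% are spread over the rest.
    dom = np.flatnonzero(labels == dominant)
    rest = np.flatnonzero(labels != dominant)
    n_dom = int(0.8 * n)
    return np.concatenate([rng.choice(dom, n_dom, replace=False),
                           rng.choice(rest, n - n_dom, replace=False)])

rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=60000)           # stand-in for MNIST labels
dominants = [c % 10 for c in range(100)]           # ten clients per dominant label
client_sets = [sample_80_20(labels, d, 500, rng) for d in dominants]

# After one local epoch per client, flatten each client's weights and project
# them to two dimensions; points are colored by the client's dominant label.
client_params = rng.normal(size=(100, 2048))       # placeholder for real weights
xy = PCA(n_components=2).fit_transform(client_params)
plt.scatter(xy[:, 0], xy[:, 1], c=dominants, cmap="tab10")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.show()
```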
We perform experiments using supervised classification tasks on the MNIST dataset using a multi-layer perceptron (MLP), and on the CIFAR-10 dataset using a convolutional neural network (CNN); the results of these experiments are shown in Fig. 2. For both datasets and architectures, we observe the same phenomenon: client model parameters are
clustered according to their dominant labels, suggesting that they contain implicit information about the samples on which they were learned. We might hypothesize that this clustering originates from one or both of two distinct phenomena:
1. Images belonging to the same class have similar features, leading clients with the same dominant label to train models with similar parameters.
2. Images belonging to the same class have the same labels, and the nature of the supervised classification task means client models trained on sample distributions with the same dominant label will have similar parameters. Zhao et al. demonstrate a related phenomenon analytically for gradients from a single training sample [5].
If the clustering is a result of the first phenomenon, we would expect that client models with semantically similar dominant labels (e.g., car and truck) would be close together in the model-latent space, and that clustering would occur even with an unsupervised training objective. Meanwhile, if the clustering is a result of the second phenomenon, we would expect the positions of dominant-label clusters to be arbitrary.
To understand which of these phenomena accounts for the clustering, we perform two more experiments. First, we examine how the model-latent space behaves when we study each layer of the neural network in isolation. Fig. 3 shows the PCA-reduced client model parameters for each layer of the CNN used for CIFAR-10. We find that in the earlier convolutional layers, client models trained on distributions with semantically similar dominant labels (e.g., cat and dog, car and truck, plane and ship) are close together in the model-latent space. In the final fully-connected layers, this semantic clustering disappears, suggesting that the clustering is primarily a result of the labels provided in the supervised classification task. We hypothesize, therefore, that the formation of dominant-label clusters is a result of both phenomena we identified, the first manifesting in the earlier layers, which extract features from input samples, and the second manifesting in the later layers, designed for the classification task.
To validate this hypothesis, we perform a second experiment, in which we train an autoencoder on the MNIST dataset. If part of the clustering phenomenon can be accounted for by feature similarity, we would expect some clustering to occur even with an unsupervised task. Indeed, as shown in Fig. 4, we observe a strong clustering effect in the model-latent space according to the dominant label. The problem of label deficiency at the edge suggests the application of FL in unsupervised and self-supervised settings [21]. The finding that clustering occurs even when labels are not used in training suggests that in these applications, and in instances where the adversary does not have access to the labels used for a supervised objective, client privacy might be leaked.
### _Predicting Client Data Distributions_
To determine the extent to which the adversarial strategy proposed in Algorithm 1 might work in a practical federated setting, and to test the viability of adding noise to gradients as a defense against this strategy, we run experiments on the
Fig. 3: Layer-by-layer clustering of PCA-reduced client model parameters on the CIFAR-10 dataset. Semantic clustering appears to occur in the earlier convolutional layers, suggesting that the clustering is the result of the similarity in features of images belonging to the same class. Note, for example, that client models trained primarily on images of animals are concentrated in the bottom-left corner of the model-latent space for the first convolutional layer. Meanwhile, the clustering appears roughly arbitrary in the final fully-connected layer, suggesting that the clustering here is a result of labels.
Fig. 2: Clustering of PCA-reduced client model parameters on the MNIST and CIFAR-10 datasets according to the dominant label in their training sets.
MNIST and CIFAR-10 datasets, injecting three different scales of Gaussian and Laplacian noise into client gradients.
For our experiments, the training set of the meta-dataset contains client model parameters trained on samples from the training set of the original dataset. Meanwhile, the test set of the meta-dataset contains client model parameters trained on samples from the test set of the original dataset. This train/test split reflects an assumption that the proxy data accessible by the adversary is different from the training data on IoT devices, but drawn from the same distribution. To generate synthetic non-IID client label distributions on which dummy clients are trained, we sample from a symmetric Dirichlet distribution, where the concentration parameter \(\alpha\) governs the uniformity of the client label distribution. To capture a wide range of uniformities, we construct the meta-dataset using an equal number of client label distributions with each of the concentration parameters \(\alpha\in\{0.1,1,10,100,1000\}\). Example client label distributions are shown in Fig. 5. For each experiment, the training split of the meta-dataset contains 10,000 samples in total (2,000 per value of \(\alpha\)), and the test split contains 2,000 samples in total (400 per value of \(\alpha\)). We use a simplified model in which the adversary only performs the attack during the first round of FedAvg. In other words, the adversary intercepts the initial global model \(w_{0}\) and the client models \(w_{1}^{k}\). In constructing the meta-dataset, we reduce the client model parameters to 10 dimensions via PCA.
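A small illustration of how the synthetic non-IID label distributions are drawn (values follow the per-\(\alpha\) counts stated above; this is a sketch, not our experiment code):

```python
import numpy as np

rng = np.random.default_rng(0)
alphas = [0.1, 1, 10, 100, 1000]
clients_per_alpha = 2000            # 10,000 training meta-samples in total

label_dists = np.stack([rng.dirichlet(np.full(10, a))
                        for a in alphas for _ in range(clients_per_alpha)])
# alpha = 0.1 gives highly skewed clients, alpha = 1000 nearly uniform ones.
print(label_dists[0].round(2))      # skewed
print(label_dists[-1].round(2))     # close to uniform
```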
For each level of noise, we first measure the accuracy of a global model trained with FedAvg across 500 rounds, in which training samples are assigned to clients uniformly at random. In our tests, we use \(K=100\) clients, each receiving 500 training samples. We use a client fraction of \(C=0.1\) and train each client for \(E=1\) local epoch before aggregation. For MNIST, the client updates use a batch size of 32 and a learning rate of \(10^{-4}\); for CIFAR-10, the updates use a batch size of 32 and a learning rate of \(10^{-3}\). To reduce the effects of random model initialization and client sampling, we perform eight trials for each dataset and level of noise.
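The gradient-noise defense can be sketched with a hypothetical helper such as the one below; the function name and integration point are ours, not from any FL framework:

```python
import torch

def add_noise_to_grads(model, scale=1e-3, kind="gaussian"):
    # Perturb each parameter's gradient before the optimizer step.
    for p in model.parameters():
        if p.grad is None:
            continue
        if kind == "gaussian":
            noise = torch.randn_like(p.grad) * scale
        else:  # "laplacian"
            noise = torch.distributions.Laplace(0.0, scale).sample(p.grad.shape).to(p.grad)
        p.grad.add_(noise)

# Inside a client update:
#   loss.backward()
#   add_noise_to_grads(model, scale=1e-2, kind="laplacian")
#   optimizer.step()
```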
Then, we train a label distribution predictor on the training set of the meta-dataset and assess its performance on the test set. The predictor uses an MLP architecture consisting of an input layer with 10 units, eight hidden layers with 1000 units each and ReLU activations, and an output layer with 10 units and a softmax activation to produce a valid probability distribution over the labels.
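A direct PyTorch rendering of that predictor might look like the following sketch (layer sizes follow the description above):

```python
import torch.nn as nn

def make_predictor(in_dim=10, hidden=1000, depth=8, out_dim=10):
    layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers += [nn.Linear(hidden, out_dim), nn.Softmax(dim=-1)]
    return nn.Sequential(*layers)

predictor = make_predictor()   # input: 10-d PCA vector, output: 10-way distribution
```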
The results of the meta-dataset training are shown for the CIFAR-10 dataset in Fig. 6; experiments on MNIST showed almost identical results. We find that with \(10^{-3}\)-scale Gaussian and Laplacian noise, the global model is able to learn, but the adversary can still completely uncover the client data distributions. Furthermore, even with \(10^{-2}\)-scale noise, at which the model performance is notably compromised, the noise provided almost no benefit in preventing privacy leakage. Consequently, our findings suggest that the adversarial strategy proposed in Algorithm 1 is capable of predicting client label distributions with high accuracy, and that adding noise to client gradients is insufficient to protect against the adversarial model.
## V Conclusions & Future Directions
We have introduced an algorithm by which, in realistic settings, an adversary can predict the label distributions of federated clients. Our approach lays the groundwork for a general class of algorithms in which an adversary creates a meta-dataset which maps client model parameters to some property of their local data. Future work might investigate other approaches within this class of algorithms, such as predicting the presence of a particular type of sample, rather than a full label distribution. More research is also necessary to determine the effectiveness of the proposed adversarial technique on more complex datasets and with self-supervised training objectives. Finally, we have shown that injecting noise into client gradients is insufficient to protect privacy. Further work is needed to determine the effectiveness of other defensive strategies in limiting these forms of attacks.
## Acknowledgment
We want to thank Bhupendra A. Raut, Seongha Park, Robert C. Jackson, Sean Shahkarami, Yongho Kim, and Ian Foster for their insightful comments, ideas, and support. This work
Fig. 4: Clustering of autoencoder parameters trained with an unsupervised objective on MNIST. The clustering phenomenon occurs even in the absence of labels, suggesting that on simple datasets, it can be attributed in part to the similarity of features in training samples with the same label.
Fig. 5: Example client label distributions sampled from a Dirichlet distribution, used to create synthetic non-IID data. We constructed a meta-dataset by training clients which had sample distributions drawn using concentration parameters \(\alpha\in\{0.1,1,10,100,1000\}\), which provided a wide range of uniformities.
was supported in part by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357, the National Science Foundation's Mid-Scale Research Infrastructure grant, NSF-OAC-1935984, and the U.S. Department of Energy, Office of Science, under grant DOE-145-SE-PRJ1009506. This research used resources of the Argonne Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC02-06CH11357.
|
2310.01779 | HallE-Control: Controlling Object Hallucination in Large Multimodal
Models | Current Large Multimodal Models (LMMs) achieve remarkable progress, yet there
remains significant uncertainty regarding their ability to accurately apprehend
visual details, that is, in performing detailed captioning. To address this, we
introduce $\textit{CCEval}$, a GPT-4 assisted evaluation method for detailed
captioning. Interestingly, while LMMs demonstrate minimal object existence
hallucination in existing VQA benchmarks, our proposed evaluation reveals
continued susceptibility to such hallucinations. In this paper, we make the
first attempt to investigate such hallucination from different aspects,
including image resolution, the language decoder size, and instruction data
amount, quality, granularity. Our findings underscore the unwarranted inference
when the language description includes details at a finer object granularity
than what the vision module can ground or verify, thus inducing hallucination.
To control such hallucinations, we further attribute the reliability of
captioning to contextual knowledge (involving only contextually grounded
objects) and parametric knowledge (containing inferred objects by the model).
Thus, we introduce $\textit{HallE-Control}$, a controllable LMM in terms of
$\textbf{Hall}$ucination in object $\textbf{E}$xistence. HallE-Control can
condition the captioning to shift between (i) exclusively depicting contextual
knowledge for grounded objects and (ii) blending it with parametric knowledge
to imagine inferred objects. Our method reduces hallucination by 44% compared
to LLaVA$_{7B}$ and maintains the object coverage. | Bohan Zhai, Shijia Yang, Chenfeng Xu, Sheng Shen, Kurt Keutzer, Chunyuan Li, Manling Li | 2023-10-03T04:01:27Z | http://arxiv.org/abs/2310.01779v3 | # Halle-Switch: Controlling Object Hallucination in Large Vision Language Models
###### Abstract
Current large vision-language models (LVLMs) achieve remarkable progress, yet there remains significant uncertainty regarding their ability to accurately apprehend visual details, that is, in performing detailed captioning. To address this, we introduce _CCEval_, a GPT-4 assisted evaluation method tailored for detailed captioning. Interestingly, while LVLMs demonstrate minimal object existence hallucination in existing VQA benchmarks, our proposed evaluation reveals continued susceptibility to such hallucinations. In this paper, we make the first attempt to investigate such hallucination from different aspects, including image resolution, the language decoder size, and instruction data amount, quality, granularity. Our findings underscore the unwarranted inference when the language description includes details at a finer object granularity than what the vision module can ground or verify, thus inducing hallucination. To control such hallucinations, we further attribute the reliability of captioning to contextual knowledge (involving only contextually grounded objects) and parametric knowledge (containing inferred objects by the model). Thus, we introduce _HallE-Switch_, a controllable LVLM in terms of **Hall**ucination in object **E**xistence. HallE-Switch can condition the captioning to shift between (i) exclusively depicting contextual knowledge for grounded objects and (ii) blending it with parametric knowledge to imagine inferred objects. Our method reduces hallucination by 44% compared to \(\text{LLaVA}_{7B}\) and maintains the same object coverage. Our code is publicly available at [https://github.com/bronyayang/HallE_Switch](https://github.com/bronyayang/HallE_Switch)
## 1 Introduction
In recent years, Large Vision-Language Models (LVLMs) (Liu et al., 2023; Dai et al., 2023; Li et al., 2023; Zhu et al., 2023) have achieved significant progress, advancing tasks such as detailed captioning, visual conversations, and vision question-answering (VQA) (Goyal et al., 2017; Liu et al., 2023; Hudson & Manning, 2019; Fu et al., 2023). However, similar to Large Language Models (LLMs) (Touvron et al., 2023; Team, 2023; OpenAI, 2022) in the NLP domain, LVLMs confront the issue of hallucination. This is particularly severe in detailed image captioning, which hinders the performance of downstream applications in robotics (Huang et al., 2023), visual search (Hu et al., 2023), etc. To better understand and address this challenge, we first outline three types of object hallucinations frequently observed in detailed captions: (1) Object Existence Hallucination - The detailed image description references objects that are not present; (2) Object Attribute Hallucination - The detailed image description inaccurately characterizes objects, misrepresenting attributes such as color, shape, and size; (3) Object Relationship Hallucination - The detailed image description inaccurately depicts the relationships or interactions among objects, including erroneous relative positions, interaction states, and actions involving two or more objects. In this work, we mainly focus on defining the metric, analyzing the cause, and addressing the problem of **object existence hallucination**.
Evaluating detailed captions is inherently complex. Some of the efforts, including benchmarks like POPE (Li et al., 2023), evaluate object hallucination using VQA. Such a bias towards VQA-based evaluations might result in an incomplete assessment of detailed captions, which requires obtaining a comprehensive view of visual details. To bridge this gap, we introduce _CCEval_, designed specifically for object existence hallucination in detailed captions. To avoid the model gaining an unfair advantage by favoring shorter descriptions, CCEval maintains consistency in metrics such as average sentence length and the number of objects. Notably, even models that performed well on VQA-based object hallucination benchmarks showed substantial hallucinations when evaluated with CCEval.
In our exploration to uncover the underlying cause of object existence hallucination, we look into various factors including the size of the language decoder, the quantity, quality, and granularity of instruction data, and the input resolution to the vision encoder. We conclude the most crucial factor to be the alignment between objects mentioned in the training captions and those the vision encoder can perceive. During training of LVLMs, the goal is to establish a one-to-one correspondence between objects mentioned in the caption and those present in the image. Objects successfully grounded by the vision encoder form accurate associations, internalizing them as **contextual knowledge**. Conversely, objects in the language that the vision encoder fails to ground create word-word semantic associations, which can be attributed to the generalization from the **parametric knowledge** within the model's parameters. During inference, when the model draws from such parametric knowledge, any misalignment can manifest as hallucination, as the model attempts to "guess" details not grounded by the vision module.
To address such hallucination, we are motivated by the observation that _not all hallucination is bad, and it is more desirable to control the generalization rather than outright remove all imagined objects_. Recognizing the significance of both contextual and parametric knowledge in ensuring generation reliability, we present _HallE-Switch_, a novel approach to control the extent of expressed hallucination or parametric knowledge. We curate a 33k dataset similar to LLaVA (Liu et al., 2023), incorporating both pure contextual knowledge and a blend of contextual knowledge with marked parametric knowledge. Leveraging this dataset, we train a lightweight single linear layer that controls the frozen LVLM. As demonstrated in Figure 1, a single continuous parameter adjustment (e.g. \(-1\rightarrow+1\)) during inference enables the model to produce detailed captions with only contextual knowledge (e.g., \(-1\)) or blended with parametric knowledge (e.g., \(+1\)). Furthermore, the inferred objects from parametric knowledge are automatically highlighted with distinct tokens (e.g., [object]) for human reference. This method offers the advantage of preserving object count and coverage as well as sentence length, while effectively controlling object existence hallucination.
Overall, our contributions are:
* A novel evaluation method for detailed caption object existence hallucination, with metrics such as object count, coverage, and average sentence length, alongside hallucination assessment;
Figure 1: The figure shows that HallE-Switch uses a single continuous parameter during inference to control imagination in the outputted caption. A switch value of "\(-1\)" makes the model use solely contextual knowledge [visually grounded objects], such as trees, buses, cars, and streets. A switch value of "\(+1\)" makes the model incorporate parametric knowledge [notified objects], such as people, clouds, and traffic lights, with the [object] marker labeling those inferred objects.
* A comprehensive analysis on LVLM components that influence hallucination, with a specific focus on alignment issues in the vision encoder and instruction data;
* A first approach to control object existence hallucination within detailed captions.
## 2 Hallucination Analysis
Object existence hallucination can be influenced by several factors, including the language decoder, instruction data, and vision encoder. In our analysis, we address each factor individually. For a diverse methodological exploration, we select LLaVA, InstructBLIP (Dai et al., 2023), and Shikra (Chen et al., 2023): LLaVA and Shikra share the same model structure; Shikra and InstructBLIP use mixed-dataset and multi-task instruction data; InstructBLIP finetunes only the Q-former, while the others finetune the projector and LLM. More details about the models can be found in the Appendix.
### Benchmarks
There are two primary approaches, VQA-based and caption-based benchmarks, for evaluating object existence hallucination in LVLMs.
**VQA-based benchmarks** pose questions about objects within images. For a model to be considered hallucination-free, it should address these visual questions accurately. Notably, a large proportion of questions are simply binary, typically asking about the presence or attributes of objects.
The POPE benchmark evaluates object existence hallucination by a polling-based query method, consisting of a series of yes/no questions on sampled objects from visual instructions. POPE contains three sets: random, popular, and adversarial. These subsets respectively focus on randomly selected objects, frequently occurring objects, and those objects that co-occur in training sets. We perform the POPE evaluation on the MSCOCO (Lin et al., 2014) dataset. The MME (Fu et al., 2023) coarse-grained recognition task constructs yes/no questions similarly but selects objects at random. This benchmark has 30 images, with each image paired with two questions: one positive and one negative.
In Table 1, LLaVA\({}_{7B}\) exhibits the greatest degree of hallucination, whereas Shikra outperforms other models in both POPE and MME. Specifically, Shikra shows a significantly higher F1 score in both POPE-popular and POPE-adversarial categories, while LLaVA\({}_{7B}\) displays the lowest. Additionally, Shikra's "Yes" ratio is closer to a balanced 50% compared to other models. However, in subsequent sections, we demonstrate that these observations from VQA-based benchmarks are not consistent with those from caption-based benchmarks.
**Caption-based benchmarks**, like CHAIR, begin by splitting the sentence and extracting nouns. Subsequently, CHAIR augments the ground truth objects by incorporating hard-coded synonyms and phrases, forming a ground truth set. The benchmark then identifies hallucinated objects by comparing the objects in the caption with this ground truth set. CHAIR computes CHAIR\({}_{i}\) and CHAIR\({}_{s}\)
\begin{table}
\begin{tabular}{l|l|c c c c c} \hline \hline
**Benchmark** & **Model** & **Accuracy\(\uparrow\)** & **Precision\(\uparrow\)** & **Recall\(\uparrow\)** & **F1\(\uparrow\)** & **Yes (\%)** \\ \hline \multirow{4}{*}{POPE - Random} & LLAVA\({}_{7B}\) & 73.13 & 66.95 & 94.53 & 78.39 & 72.78 \\ & LLAVA\({}_{13B}\) & 78.49 & 73.57 & 90.93 & 81.34 & 63.71 \\ & Shikra\({}_{7B}\) & 86.99 & 94.77 & 79.13 & 86.25 & 43.04 \\ & InstructBLIP\({}_{7B}\) & 86.60 & 80.74 & 96.13 & 87.77 & 59.53 \\ \hline \multirow{4}{*}{POPE - Popular} & LLAVA\({}_{7B}\) & 59.87 & 55.88 & 93.80 & 70.03 & 83.93 \\ & LLAVA\({}_{13B}\) & 70.80 & 64.73 & 91.40 & 75.79 & 70.60 \\ & Shikra\({}_{7B}\) & 84.35 & 88.10 & 79.43 & 83.54 & 45.08 \\ & InstructBLIP\({}_{7B}\) & 71.27 & 64.20 & 96.13 & 76.99 & 74.87 \\ \hline \multirow{4}{*}{POPE - Adversarial} & LLAVA\({}_{7B}\) & 57.06 & 54.07 & 93.930 & 68.63 & 86.87 \\ & LLAVA\({}_{13B}\) & 63.93 & 90.91 & 91.07 & 71.63 & 77.13 \\ & Shikra\({}_{7B}\) & 82.88 & 85.32 & 79.43 & 82.27 & 46.55 \\ & InstructBLIP\({}_{7B}\) & 72.10 & 65.13 & 95.13 & 73.22 & 73.03 \\ \hline \multirow{4}{*}{**Benchmark**} & **Model** & **Existence\(\uparrow\)** & **Count\(\uparrow\)** & **Position\(\uparrow\)** & **Color\(\uparrow\)** & **Total\(\uparrow\)** \\ \cline{2-6} & LLAVA\({}_{7B}\) & 150.00 & 48.33 & 50.00 & 55.00 & 303.33 \\ \cline{1-1} & LLAVA\({}_{13B}\) & 180.00 & 113.33 & 55.00 & 95.00 & 443.33 \\ \cline{1-1} & Shikra\({}_{7B}\) & 185.00 & 118.33 & 75.00 & 155.00 & 533.33 \\ \cline{1-1} & InstructBLIP\({}_{7B}\) & 185.00 & 60.00 & 50.00 & 125.00 & 420.00 \\ \hline \hline \end{tabular}
\end{table}
Table 1: We evaluate LLaVA Vicuna\({}_{7B}\), LLaVA Vicuna\({}_{13B}\), Shikra\({}_{7B}\), InstructBLIP Vicuna\({}_{7B}\) public checkpoints on VQA-based benchmarks, including POPE and MME.
as follows:
\[\text{CHAIR}_{i}=\frac{|\{\text{hallucinated objects}\}|}{|\{\text{all objects mentioned}\}|}\]
\[\text{CHAIR}_{s}=\frac{|\{\text{sentences with hallucinated object}\}|}{|\{\text{all sentences}\}|}\]
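For reference, a minimal sketch of these two scores, assuming object extraction and synonym matching have already been performed (object sets per caption); this is not the official CHAIR implementation:

```python
def chair_scores(caption_objects, ground_truths):
    # caption_objects: list of sets of objects mentioned in each caption.
    # ground_truths:   list of sets of ground-truth objects for each image.
    total = hallucinated = bad_sentences = 0
    for mentioned, gt in zip(caption_objects, ground_truths):
        bad = mentioned - gt              # mentioned but not in the image
        total += len(mentioned)
        hallucinated += len(bad)
        bad_sentences += bool(bad)
    chair_i = hallucinated / max(total, 1)
    chair_s = bad_sentences / max(len(caption_objects), 1)
    return chair_i, chair_s

print(chair_scores([{"dog", "frisbee", "tree"}], [{"dog", "frisbee"}]))  # (0.33..., 1.0)
```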
Table 2(left) reveals that while InstructBLIP exhibits minimal object existence hallucination, it averages a mere 0.8 objects per sentence. In contrast, LLaVA\({}_{13B}\) and Shikra manifest a higher degree of hallucination, but they also generate more detailed captions, outputting as many as 7.6 and 7.5 objects per sentence, respectively. We find comparing object hallucinations is impractical when there is a significant disparity in average sentence length and the number of objects.
Apart from these disparities, the use of a hard-coded ground truth set is another challenge. To counter these challenges, we introduce _CCEval_, a GPT-4 assisted evaluation for detailed captions. We first prompt LVLMs to generate detailed captions on 100 randomly sampled images from Visual Genome (Krishna et al., 2017). Subsequently, utilizing GPT-4's in-context learning capabilities, we extract individual objects from these captions and identify hallucinated ones by referencing the provided ground truth objects. On top of CHAIR (Rohrbach et al., 2018), we introduce the "coverage" metric to ensure that the captions are detailed enough. This metric computes the ratio of objects in the caption that match the ground truth to the total number of ground truth objects. We additionally record and balance the average number of objects as well as the average length of captions across all cases. More details on the prompts of CCEval can be found in the Appendix.
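The coverage metric can be sketched in the same style (again assuming pre-extracted object sets):

```python
def coverage(caption_objects, ground_truths):
    matched = sum(len(m & gt) for m, gt in zip(caption_objects, ground_truths))
    total_gt = sum(len(gt) for gt in ground_truths)
    return matched / max(total_gt, 1)

print(coverage([{"dog", "frisbee", "tree"}], [{"dog", "frisbee", "grass"}]))  # 0.66...
```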
As reflected in Table 2(right), when subjected to consistent constraints--average sentence length approximately 100 words and around 9 objects per sentence--all models exhibit comparably sub-optimal results. Interestingly, while Shikra surpasses other models in VQA-based benchmarks, especially POPE, it under-performs in CCEval. This suggests that object existence hallucination in detailed captions is not consistently captured by VQA-based evaluations.
\begin{table}
\begin{tabular}{l|c c c|c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{4}{c}{**CHAIR**} & \multicolumn{4}{c}{**CCEval (Ours)**} \\ & _CHAIR\({}_{i}\)_ & _CHAIR\({}_{i}\)_ & **Avg. Length\({}^{\dagger}\)** & **Avg. Object\({}^{\dagger}\)** & _CHAIR\({}_{i}\)_ & _CEAIR\({}_{i}\)_ & **Avg. Length\({}^{\dagger}\)** & **Avg. Object\({}^{\dagger}\)** \\ \hline LLaVA\({}_{2B}\) & 24.1 & 9.1 & 42.5 & 3.7 & 72.00 & 19.7 & 32.74 & 92.27 & 9.19 \\ LLaVA\({}_{13B}\) & 60.6 & 18.4 & 90.2 & 7.6 & 79.00 & 23.80 & 33.56 & 108.02 & 9.28 \\ Shikra\({}_{13B}\) & 59.1 & 16.6 & 91.2 & 7.5 & 83.00 & 24.40 & 33.29 & 109.37 & 9.10 \\ InstructBLIP\({}_{13B}\) & 1.4 & 1.7 & 2.3 & 0.8 & 72.00 & 22.30 & 29.76 & 108.42 & 8.04 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between CHAIR and our evaluation method, CCEval.
\begin{table}
\begin{tabular}{l|l|c c c c} \hline \hline \multirow{2}{*}{**Benchmark**} & **Model** & **Accuracy\({}^{\dagger}\)** & **Precision\({}^{\dagger}\)** & **Recall\({}^{\dagger}\)** & **F1\({}^{\dagger}\)** & **Yes (\%)** \\ \hline \multirow{4}{*}{POPE - Random} & LLaVA\({}_{13B}\) & 75.77 & 69.79 & 93.47 & 79.91 & 69.04 \\ & LLaVA\({}_{13B}\) & 78.49 & 73.57 & 90.93 & 81.34 & 63.71 \\ & ILaVA\({}_{13B}\) & 78.14 & 73.18 & 90.93 & 81.09 & 64.05 \\ & InstructBLIP\({}_{13B}\) & 86.60 & 80.74 & 96.13 & 87.77 & 59.53 \\ & InstructBLIP\({}_{13B}\) & 88.73 & 86.67 & 92.33 & 89.41 & 54.91 \\ \hline \multirow{4}{*}{POPE - Popular} & LLaVA\({}_{7B}\) & 65.07 & 59.60 & 93.53 & 72.81 & 78.47 \\ & ILaVA\({}_{13B}\) & 70.80 & 64.73 & 91.40 & 75.79 & 70.60 \\ & ILaVA\({}_{13B}\) & 72.43 & 66.45 & 90.60 & 76.67 & 68.17 \\ & InstructBLIP\({}_{13B}\) & 71.27 & 64.20 & 96.13 & 76.99 & 74.87 \\ & InstructBLIP\({}_{13B}\) & 80.53 & 74.70 & 92.33 & 82.59 & 61.80 \\ \hline \multirow{4}{*}{POPE - Adversarial} & LLaVA\({}_{7B}\) & 57.07 & 54.07 & 93.93 & 68.63 & 86.87 \\ & LLaVA\({}_{13B}\) & 63.93 & 59.03 & 91.07 & 71.63 & 77.13 \\ & InstructBLIP\({}_{13B}\) & 66.30 & 60.91 & 91.00 & 72.98 & 74.70 \\ & InstructBLIP\({}_{13B}\) & 72.10 & 65.13 & 95.13 & 77.32 & 73.03 \\ & InstructBLIP\({}_{13B}\) & 73.97 & 67.53 & 92.33 & 78.01 & 68.37 \\ \hline \multirow{4}{*}{**Benchmark**} & **Model** & _CHAIR\({}_{i}\)_ & _CHAIR\({}_{i}\)_ & **Coverage\({}^{\dagger}\)** & **Avg. Length\({}^{\dagger}\)** & **Avg. Object\({}^{\dagger}\)** \\ \cline{2-6} & LLaVA\({}_{7B}\) & 82.00 & 25.30 & 31.58 & 109.89 & 9.31 \\ \cline{1-1} & LLaVA\({}_{13B}\) & 79.00 & 23.80 & 33.56 & 108.02 & 9.28 \\ \cline{1-1} & LLaVA\({}_{13B}\) & 82.00 & 21.80 & 31.26 & 106.85 & 9.07 \\ \cline{1-1} & InstructBLIP\({}_{13B}\) & 72.00 & 22.30 & 29.76 & 108.42 & 8.04 \\ \cline{1-1} & InstructBLIP\({}_{13B}\) & 64.00 & 16.70 & 33.60 & 101.63 & 8.06 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance of LLaVA and InstructBLIP with different sizes of language decoder. LLaVA are trained on CC-595k for stage one and Instruction-150k for stage two.
### Language Decoder
We investigate if expanding the size of the language backbone can mitigate object existence hallucination. As detailed in Table 3, the language decoder of LLaVA is increased from 7B to 33B, and for InstructBLIP, it is increased from 7B to 13B. The result shows that hallucination for LLaVA is reduced on POPE but not on CCEval. For InstructBLIP, CHAIR\({}_{i}\) and CHAIR\({}_{s}\) are reduced by 8 and 5.6 on CCEval, respectively. However, although there is a gain from scaling up the language backbone, it is neither consistent nor salient, suggesting that the language decoder is not a primary factor in reducing hallucination.
### Data
Similar to our approach with the language decoder, we begin by scaling up the volume of instruction finetuning data, ranging from 80K to 2.4M. As illustrated in Table 4, the LLaVA\({}_{7B}\) model, finetuned on 80K instruction data, exhibits fewer object existence hallucinations compared to the models finetuned on 150K and SVIT (Zhao et al., 2023a). The result suggests that extra data without a quality guarantee may increase hallucination for both VQA-based and caption-based evaluations. Given that all three datasets are generated by GPT-4, we question the quality of the data. LRV also raises this concern, suggesting that the training data itself might contain hallucinations. Some examples of training data are presented in the Appendix. Interestingly, our examination shows no object existence hallucination: the objects in the captions are contained in the ground truth objects from MSCOCO. However, we identify that certain ground truth objects are challenging for human observers to ground, due to factors like size, resolution, and occlusion. This led us to hypothesize that the vision encoder might also struggle to ground these objects effectively.
\begin{table}
\begin{tabular}{l|l|c c c c c} \hline \hline
**Benchmark** & **Vision Encoder** & \(\textit{CHAR}_{\pm}\downarrow\) & \(\textit{CHAR}_{\pm}\downarrow\) & **Coverage\(\uparrow\)** & **Avg. Length\(\uparrow\)** & **Avg. Object\(\uparrow\)** \\ \hline \multirow{3}{*}{CCEval (Ours)} & CLIP-L-122x & 79.00 & 21.70 & 32.04 & 110.36 & 9.12 \\ & CLIP-L-336x & 79.00 & 18.90 & 36.00 & 111.55 & 9.19 \\ \cline{1-1} & CLIP-L-224x (SW) & 72.00 & 18.70 & 36.89 & 110.43 & 8.65 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Performance of LLaVA\({}_{7B}\) and with sliding window technique (SW).
\begin{table}
\begin{tabular}{l|l|c c c c c} \hline \hline
**Benchmark** & **Finetune Data** & **Accuracy\(\uparrow\)** & **Precision\(\uparrow\)** & **Recall\(\uparrow\)** & **F1\(\uparrow\)** & **Yes (\%)** \\ \hline \multirow{3}{*}{POPE - Random} & 80K & 73.13 & 66.95 & 94.53 & 78.39 & 72.78 \\ & 158K & 75.77 & 69.79 & 93.47 & 79.91 & 69.04 \\ & SVIT & 52.34 & 52.00 & 97.87 & 67.92 & 97.01 \\ \hline \multirow{3}{*}{POPE - Popular} & 80K & 59.87 & 55.88 & 93.80 & 70.03 & 83.93 \\ & 158K & 65.07 & 59.60 & 93.53 & 72.81 & 78.47 \\ & SVIT & 50.77 & 50.43 & 90.47 & 64.76 & 89.70 \\ \hline \multirow{3}{*}{POPE - Adversarial} & 80K & 57.07 & 54.07 & 93.93 & 68.63 & 86.87 \\ & 158K & 58.47 & 55.00 & 93.07 & 69.14 & 84.6 \\ & SVIT & 51.37 & 50.77 & 90.33 & 65.00 & 88.97 \\ \hline \multirow{3}{*}{**Benchmark**} & **Finetune Data** & _CHAR\({}_{\pm}\downarrow\)_ & _CHAR\({}_{\pm}\downarrow\)_ & **Coverage\(\uparrow\)** & **Avg. Length\(\uparrow\)** & **Avg. Object\(\uparrow\)** \\ \hline \multirow{3}{*}{CCEval (Ours)} & 80K & 72.00 & 19.70 & 32.74 & 92.27 & 9.19 \\ & 158K & 82.00 & 25.30 & 33.58 & 109.89 & 9.31 \\ \cline{1-1} & SVIT & 87.00 & 23.30 & 47.46 & 296.63 & 18.14 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of LLaVA\({}_{7B}\) with different sizes of data. 80K and 158K contains 80K and 158K data respectively, and SVIT contains 2.4M.
\begin{table}
\begin{tabular}{l|l|c c c c c} \hline \hline
**Benchmark** & **Vision Encoder** & \(\textit{CHAR}_{\pm}\downarrow\) & \(\textit{CHAR}_{\pm}\downarrow\) & **Coverage\(\uparrow\)** & **Avg. Length\(\uparrow\)** & **Avg. Object\(\uparrow\)** \\ \hline \multirow{3}{*}{CCEval (Ours)} & CLIP-L-112x & 79.00 & 21.70 & 32.04 & 110.36 & 9.12 \\ & CLIP-L-224x & 74.00 & 19.30 & 32.83 & 113.03 & 9.18 \\ \cline{1-1} & CLIP-L-336x & 64.00 & 16.00 & 33.37 & 108.52 & 8.92 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of LLaVA with Llama 2\({}_{13B}\) language decoder and CLIP-Large vision encoder with different input resolutions.
### Vision Encoder
Intuitively, increasing image resolution enhances the model's perception of finer details, thus making the grounding of objects mentioned in the caption easier. To verify our hypothesis from the previous section, we increase the input image resolution for the vision encoder. Specifically, for our evaluation, the resolution for LLaVA\({}_{7B}\) was incremented from 224x to full resolution using a sliding window approach for efficiency, as detailed in the Appendix. Table 6 shows a consistent decrease in hallucination and increase in object coverage. Additionally, we assess LLaVA with Llama 2\({}_{13B}\), varying the resolution from 112x to 336x. For the 112x112 resolution, the original image was downscaled to 112x and subsequently upscaled to 224x before being input to CLIP-Large-224x. Table 5 gives a consistent observation that larger input resolution can reduce hallucination.
**Conclusion.** Through our systematic analysis of object existence hallucination, we summarize several insights: (1) Enlarging the language decoder does mitigate hallucination, but the improvements are not large. (2) Expanding the volume of instruction data actually increases hallucination. Upon inspection of the training data, we find that certain objects described in the captions might not be grounded by the vision encoder. (3) To validate our hypothesis in (2), we show that improving the input image resolution significantly reduces hallucination by enhancing the model's grounding ability.
Reflecting on these findings, we attempt to provide an explanation for the cause of object existence hallucination in detailed captions. The process of image captioning in LVLMs can be perceived as a form of information mapping or translation. Ideally, the goal is to have a direct one-to-one correspondence between objects identified in the image and those mentioned in the captions. Objects successfully grounded by the vision encoder form accurate correspondences, which the model internalizes as **contextual knowledge**, following (Neeman et al., 2023). When objects in the training caption fail to be grounded by the vision encoder, the model learns **parametric knowledge**, the knowledge encoded in the model's parameters. This kind of knowledge is the association of objects in the language with other words instead of with the corresponding image object features. During inference, when the model draws from parametric knowledge, it attempts to "guess" details not grounded by the vision module, and this is perceived as object existence hallucination.
## 3 Hallucination Controlling
Following our previous explanation that parametric knowledge leads to hallucination, we could eliminate the parametric knowledge and prevent the resulting model from guessing related objects. However, such guessing is necessary depending on the details required by the downstream task, as shown in
Figure 2: (a) shows the overall training pipeline of HallE-Switch. When generating data, we use RAM to separate ground truth objects into visually grounded and omitted objects. Then, we utilize GPT-4 to convert this existing list of grounded objects into a caption as contextual data, and we assign \(\varepsilon\) as \(-1\). We put brackets around omitted objects in the original LLaVA caption as parametric joint data and assign \(\varepsilon\) as \(+1\). During training, we supervise using contextual-only data and parametric joint data, passing in \(\varepsilon\) as \(-1\) or \(+1\), respectively.
(b) shows that our method consistently outperforms the LLaVA baselines and InstructBLIP\({}_{13B}\). "Indication" is w/ ind. in Table 7, and "Switch" is \(-1\) in Table 8.
the Appendix. Therefore, our approach works towards controlling and balancing parametric knowledge rather than eliminating it.
We introduce _HallE-Switch_, a LVLM designed to control the extent of parametric knowledge within detailed captions. For this purpose, we developed two datasets: the first captures solely contextual knowledge, while the second merges both contextual and parametric knowledge. Using these datasets, we integrated a control projector into the model to control parametric knowledge.
### Data Generation
**Grouping using RAM.** We begin by passing MSCOCO's ground truth objects to the open vocabulary detector RAM (Zhang et al., 2023b). RAM categorizes the objects into two groups: "grounded" (contextual group) and "omitted" (parametric group). This step aims to simulate the maximum visual granularity achievable by a vision encoder in LVLMs.
**Contextual Data Generation.** Our first dataset involves generating detailed captions using only objects from the contextual group. To do this, we feed MSCOCO source labels (including object classes, bounding boxes, and short captions) into GPT-4. We adhere to LLaVA's caption creation pipeline and provide the prompt in the Appendix.
**Parametric Joint Data Generation.** The second dataset incorporates both contextual and parametric knowledge. Here, we begin with LLaVA's original detailed captions and annotate objects from the parametric group with special tokens. Specifically, we enclose the "omitted" objects with brackets. Formally, if \(S\) denotes the original image caption sentence and \(X=\{x_{1},...,x_{n}\}\) represents a set of undetected objects, our data processing can be represented as:
\[S_{new}=\text{replace}(S,x_{i},[x_{i}])\]
The purpose of bracketing the parametric objects is twofold: it serves as an indicator during inference and provides a hint during training.
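A small sketch of this replacement step; exact string handling (e.g., word-boundary matching) is our assumption, as the construction above does not specify it:

```python
import re

def mark_parametric(caption, undetected_objects):
    # Wrap every object that the detector could not ground in brackets.
    for x in undetected_objects:
        caption = re.sub(rf"\b{re.escape(x)}\b", f"[{x}]", caption)
    return caption

print(mark_parametric("a street with buses, people and traffic lights",
                      ["people", "traffic lights"]))
# a street with buses, [people] and [traffic lights]
```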
### Hallucination Switch
Inspired by LM-Switch (Han et al., 2023), we address hallucination by adding a control parameter \(\varepsilon\) that serves as a "switching value", e.g., \(+1\) for permitted imagination and \(-1\) for restricting imagination, as depicted in Figure 2(a). Let \(M\) represent the LLM with fixed parameters: \(M(x)=H(e_{v})\), where \(H\) stands for the LM head and \(e_{v}=B(x)\) is the output word embedding from the LM backbone. We modify \(M\) to \(M^{\prime}=M(\varepsilon W)\), thus making the output word embedding \(e^{\prime}_{v}=e_{v}+\varepsilon We_{v}\), leading to the derived model \(M^{\prime}\) as:
\[M^{\prime}(x)=H(B(x)+\varepsilon W(B(x))).\]
The learned projector \(W\) can be regarded as the transformation from a generic word space to the object sensitive word space, where word-word semantic correspondence is optimized to object correspondence, and \(\varepsilon\) governs the intensity of imagining related objects.
**Training.** To train such a controlling parameter, we leverage the contrastive training data covering both contextual and parametric datasets in Sec 3.1. For data with only contextual knowledge, we assign \(\varepsilon=-1\) when inputting into the model. In contrast, for data with both contextual and parametric knowledge, we use \(\varepsilon=1\). Notably, only the linear layer \(W\) is fine-tuned throughout the training phase.
**Inference.** At the inference stage, \(\varepsilon\) can adopt any value within the interval \([-1,1]\). Specifically, an \(\varepsilon\) value of \(-1\) corresponds to minimal reliance on parametric knowledge, whereas a value of \(1\) indicates a strong inclination towards such knowledge. A detailed theoretical explanation of why HallE-Switch works is given in the Appendix.
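The mechanism amounts to a single trainable linear map applied to the backbone's output embeddings. A minimal PyTorch sketch is given below; the class and argument names are ours, and the toy linear layers stand in for the frozen LVLM backbone and LM head:

```python
import torch
import torch.nn as nn

class HallucinationSwitch(nn.Module):
    def __init__(self, backbone, lm_head, hidden_dim):
        super().__init__()
        self.backbone = backbone                       # frozen B(x)
        self.lm_head = lm_head                         # frozen H
        self.W = nn.Linear(hidden_dim, hidden_dim, bias=False)  # only trainable part
        for frozen in (self.backbone, self.lm_head):
            for p in frozen.parameters():
                p.requires_grad = False

    def forward(self, x, epsilon):
        e_v = self.backbone(x)                         # output word embeddings
        return self.lm_head(e_v + epsilon * self.W(e_v))

# Toy usage: epsilon = -1 restricts to contextual knowledge, +1 allows
# parametric knowledge; any value in [-1, 1] interpolates at inference time.
model = HallucinationSwitch(nn.Linear(16, 32), nn.Linear(32, 100), hidden_dim=32)
logits = model(torch.randn(2, 16), epsilon=-1.0)
```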
## 4 Experiment
### Finetune on Parametric Joint Data
Before we present experiments on HallE-Switch, we show the upper-bound experiment results on how well the model can indicate parametric knowledge. We directly finetune the LLaVA model
on parametric joint data. Intuitively, the model is trained on data indicating parametric knowledge, so its output should identify parametric knowledge accurately. Specifically, the model should put a bracket around every "guessed" object to indicate hallucination.
Therefore, we evaluate the object hallucination in three different settings: 1. Evaluation only on indicated objects: We run CCEval only on objects inside the brackets. The result should reflect a high level of hallucination. 2. Evaluation without indicated objects: We disregard objects in brackets and calculate CCEval. The result should reflect a low level of hallucination. 3. Evaluation with indicated objects: We calculate CCEval on all objects. Due to the modification of the settings, we slightly change the definition of the CHAIR scores in CCEval, as detailed in the Appendix.
**Evaluation only on indicated objects.** The hallucination level for indicated objects, denoted as \(CHAIR_{i}\), is 63.90 for LLaVA\({}_{7B}\) and 62.31 for LLaVA\({}_{13B}\). It is considerably higher than the baselines of all other models. Concurrently, their coverage is 12.01 and 19.09 for LLaVA\({}_{7B}\) and LLaVA\({}_{13B}\), respectively, both of which are significantly lower than the coverage of 33.58 for the LLaVA\({}_{7B}\) baseline model. The experimental results show that objects within the special tokens have a significantly higher hallucination rate and a lower coverage rate, which supports our assumption that the objects inside the indication tokens come from parametric knowledge.
**Evaluation without indicated objects.** For objects outside of the special token scope, we found that hallucination is markedly reduced: CHAIR\({}_{s}\) decreased from 82 to 57, a 30.5% improvement, and CHAIR\({}_{i}\) decreased by 32.4%, from 25.3 to 17.10, compared to the baseline. This suggests that the model is less prone to make erroneous assumptions for objects not marked by brackets. This is interesting because the model perfectly captures the intention of marking parametric objects in the training data and replicates the behavior during inference.
**Evaluation with indicated objects.** We observe a significant decline in hallucination without any reduction in object coverage. For LLaVA\({}_{7B}\), CHAIR\({}_{i}\) improved from 25.30 to 14.00, a 44.66% improvement. For LLaVA\({}_{13B}\), CHAIR\({}_{i}\) improved from 16 to 9.86, a 38.38% improvement.
### Hallucination Controlling
During model inference, we select 4 different values of \(\varepsilon\), ranging from \(-1\) to \(+1\). As shown in Table 8, we evaluate the HallE-Switch\({}_{7B}\) and HallE-Switch\({}_{13B}\) models, which use LLaVA as the backbone. For the 7B model, we train it after removing the special tokens in the parametric joint data, showing that indication is not a necessary condition for the switch to work. \(\varepsilon=-1\) means the switch is trained purely on contextual-only data, which tries to minimize hallucination, whereas \(\varepsilon=+1\) switches to maximize parametric
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline
**Setting** & **LLM** & **Resolution** & _CHAIR\({}_{s}\)_ & _CHAIR\({}_{s}\)_ & **Coverage\({}^{\dagger}\)** & **Avg. Length\({}^{\dagger}\)** & **Avg. Object\({}^{\dagger}\)** \\ \hline
158K baseline & LLaVA\({}_{7B}\) & 224x & 82.00 & 25.30 & 33.58 & 109.89 & 9.31 \\ only ind. & LLaVA\({}_{7B}\) & 224x & 53.00 & 63.90 & 12.01 & – & 1.66 \\ w/o ind. & LLaVA\({}_{7B}\) & 224x & 57.00 & 17.10 & 37.60 & 108.94 & 7.63 \\ w/ ind & LLaVA\({}_{7B}\) & 224x & 57.00 & 14.00 & 37.62 & 108.94 & 9.22 \\ \hline
158K baseline & LLaVA Alama 21.18 & 336x & 64.00 & 16.00 & 33.37 & 108.52 & 8.92 \\ only ind. & LLaVA Alama 21.18 & 336x & 52.00 & 62.31 & 19.90 & – & 1.3 \\ w/o ind. & LLaVA Alama 21.18 & 336x & 52.00 & 11.62 & 34.70 & 106.94 & 7.23 \\ w/ ind & LLaVAVAlama 21.18 & 336x & 52.00 & 9.86 & 39.31 & 106.94 & 8.52 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Comparison between baselines and the effect of indication on \(CCEval\). ’Only ind’ means evaluation only on indicated objects; ’w/o ind’ means evaluation without indicated objects; ’w/ ind’ means evaluation without indicated objects; ’w/ ind’ means evaluation with indicated objects.
\begin{table}
\begin{tabular}{l|c c|c c|c c} \hline \hline
**Switch** & **LLM** & **Resolution** & _CHAIR\({}_{s}\)_ & _CHAIR\({}_{s}\)_ & **Coverage\({}^{\dagger}\)** & **Avg. Length\({}^{\dagger}\)** & **Avg. Object\({}^{\dagger}\)** \\ \hline
1 & LLaVA\({}_{7B}\) & 224x & 89.00 & 26.60 & 32.80 & 108.54 & 9.72 \\
0.5 & LLaVA\({}_{7B}\) & 224x & 85.00 & 27.92 & 34.02 & 109.33 & 8.81 \\ -0.5 & LLaVA\({}_{7B}\) & 224x & 81.00 & 24.88 & 35.87 & 118.08 & 8.04 \\ \hline
1 & LLaVA\({}_{7B}\) & 224x & 76.00 & 20.90 & 33.88 & 133.79 & 8.02 \\ \hline
1 & LLaVA Lima 21.18 & 336x & 65.00 & 14.58 & 36.14 & 102.18 & 8.37 \\
0.5 & LLaVA Lima 21.18 & 336x & 65.00 & 14.44 & 32.32 & 103.51 & 8.45 \\ -0.5 & LLaVA Lima 21.18 & 336x & 66.00 & 13.79 & 33.07 & 105.57 & 8.41 \\ \hline
1 & LLaVA Lima 21.18 & 336x & 43.00 & 6.37 & 34.37 & 136.28 & 8.79 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Performance of \(HallE\)_Switch_.
knowledge output. The results show that as \(\varepsilon\) increases, CHAIR\({}_{i}\) increases from 20.90 to 26.6, while the coverage stays at a similar level.
For the 13B model, we keep the indication inside the parametric joint data. HallE-Switch\({}_{13B}\) achieves the best object existence hallucination scores. With the switch set to \(-1\) and indication, we have CHAIR\({}_{s}=43\) versus the baseline's 64 and CHAIR\({}_{i}=6.37\) versus the baseline's 16, and the coverage of the model does not decrease.
## 5 Related Work
**Large Vision-Language Models (LVLMs).** The rapid advancements in Large Language Models (LLMs) (Touvron et al., 2023; Chung et al., 2022; Touvron et al., 2023; Anil et al., 2023; Driess et al., 2023; Scao et al., 2022; OpenAI, 2023), combined with a surge in open-source initiatives, have paved the way for the emergence of large vision-language models (Liu et al., 2023; Goyal et al., 2017; Zhu et al., 2023; Sun et al., 2023; Ye et al., 2023; Bai et al., 2023; Chen et al., 2023; Peng et al., 2023). LLaVA introduced the concept of integrating a simple projector during LLM fine-tuning. Chatspot (Zhao et al., 2023) follows LLaVA's model structure, but embeds regions of interest into the instruction data. GPT4RoI (Zhang et al., 2023) and Shikra (Chen et al., 2023) add grounding tasks to LLaVA-structured models, and achieve great performance on various tasks. Instead of using a detector to provide region information to the model, we use a detector to filter objects for alignment between vision and language information. Concurrently, BLIP2 (Li et al., 2023) and InstructBLIP (Dai et al., 2023) presented Q-former-based LVLMs. Multimodal-GPT (Gong et al., 2023) and Otter (Li et al., 2023) aim to improve OpenFlamingo's (Alayrac et al., 2023) instruction adherence. mPLUG-Owl (Ye et al., 2023) suggests a two-step method: first training the vision model, then refining the language model using techniques like LoRA. Our work utilizes a linear layer to control object existence hallucination within LVLMs.
**Evaluation on LVLMs.** The evaluation of large vision-and-language models (LVLMs) (Yu et al., 2023; Liu et al., 2023) is notably challenging due to the intricate nature of the generation tasks they undertake. Some of the VQA-based benchmarks (Antol et al., 2015; Hudson & Manning, 2019; Gurari et al., 2018) require models to identify objects, colors, or quantities, while others (Liu et al., 2023; Li et al., 2023; Lu et al., 2022) offer multiple-choice questions. POPE (Li et al., 2023) and MME (Fu et al., 2023) include object hallucination evaluations such as paired yes/no questions on object existence, color, counting, OCR, etc. While VQA-based benchmarks are cheap and straightforward, we find that they cannot accurately reflect object hallucination for detailed captions. Besides VQA benchmarks, ROUGE (Lin, 2004; Elliott & Keller, 2014) uses n-grams to evaluate similarity between ground truth and model inferences. CIDEr (Vedantam et al., 2015) is a triplet-based method of collecting human annotations to measure consensus. CHAIR (Rohrbach et al., 2018) evaluates caption hallucination based on object concepts. These methods are constrained by ground truth length or word variance and cannot clearly reflect hallucination with object coverage information. Wang et al. (2023) try to use a language model to predict whether a caption contains hallucination, which is cheaper than GPT-4. Our work introduces CCEval, including CHAIR metrics, object coverage, average sentence length, and number of objects, to overcome the limitations of previous evaluations.
**Hallucination.** Hallucinations (Ji et al., 2023; Shi et al., 2023; Lin et al., 2021) have been widely studied in traditional NLG (Ji et al., 2023) tasks, including machine translation (Zhou et al., 2020; Lee et al., 2019), data-to-text (Rebuffel et al., 2021; Kasner & Dusek, 2022; Lee et al., 2022), summarization (Cao et al., 2022), dialogue (Dziri et al., 2022) and QA (Shuster et al., 2021). For LVLMs, previous studies have mainly focused on object hallucination (Marino et al., 2019; MacLeod et al., 2017; Li et al., 2023). POPE (Li et al., 2023) reveals that object existence hallucination may be related to label distributions, such as object co-occurrence. Earlier than POPE, Biten et al. (2022) balance object co-occurrence to decrease hallucination. LRV (Liu et al., 2023) finds the cause of hallucination in VQA benchmarks, especially unbalanced answer distribution and lack of negation information. Our work raises another important cause: misalignment between the vision and language information captured by models. More interestingly, we can explain balancing labels in Biten et al. (2022) as trying to weaken the parametric knowledge caused by the misalignment.
## 6 Conclusion
In summary, this study delves deep into the object hallucination phenomena within the detailed captions of LVLMs, advancing understanding of the accuracy and unwarranted inference in describing visual details. We introduce a novel and comprehensive evaluation method for object existence hallucination in detailed captions. We conduct an in-depth and component-wise analysis of LVLMs, meticulously examining each element that might result in hallucination. We further identify an alignment issue between the vision encoder and the instruction data. To alleviate such hallucination, we introduce controlling parameters over LVLMs to condition the inference of objects.
|
2308.03324 | Grid homology for spatial graphs and a Künneth formula of connected
sum | In this paper, we research the grid homology for spatial graphs with cut
edges. We show that the grid homology for spatial graph $f$ is trivial if $f$
has sinks, sources, or cut edges. As an application, we give purely
combinatorial proofs of some formulas including a K\"{u}nneth formula for the
knot Floer homology of connected sums in the framework of the grid homology. | Hajime Kubota | 2023-08-07T06:16:50Z | http://arxiv.org/abs/2308.03324v3 | # Grid homology for spatial graphs and a kunneth formula of connected sums
###### Abstract.
We define the hat and tilde versions of the grid homology for spatial graphs possibly with sinks, sources, or cut edges by extending the grid homology for spatial graphs developed by Harvey and O'Donnol [4]. We show that the grid homology of a spatial graph \(f\) is trivial if \(f\) has sinks, sources, or cut edges. As an application, we give a purely combinatorial proof of a Künneth formula for the knot Floer homology of connected sums in the framework of the grid homology.
## 1. Introduction
Knot Floer homology is a powerful invariant of knots developed by Ozsvath and Szabo [11] and Rasmussen [13] independently. It is called a categorification of the Alexander polynomial since the graded Euler characteristic coincides with the Alexander polynomial.
Grid homology is a combinatorial reconstruction of knot Floer homology developed by Manolescu, Ozsvath, Szabo, and Thurston [8]. Grid homology enables us to calculate knot Floer homology without geometrical arguments. So an interesting problem is to give a purely combinatorial proof of known results in knot Floer homology using grid homology. For example, Sarkar [14] defined a combinatorial version of the Ozsvath-Szabo Tau-invariant, which was originally defined in [10], and gave a purely combinatorial proof that the Ozsvath-Szabo Tau-invariant is a concordance invariant. Similarly, Foldvari [2] reconstructed the combinatorial Upsilon invariant using grid homology. Then the author [5] proved that it is a concordance invariant.
In 2017, Harvey and O'Donnol [4] extended grid homology to a certain class of oriented spatial graphs called transverse spatial graphs. For a sinkless and sourceless transverse spatial graph \(f\), they defined a sutured manifold \((E(f),\gamma(f))\) determined by \(f\) and showed that the hat version of the grid homology of \(f\) is isomorphic to sutured Floer homology of \((E(f),\gamma(f))\)[4, Theorem 6.6]. As a corollary, the graded Euler characteristic of their hat version coincides with the torsion invariant \(T(E(f),\gamma(f))\in\mathbb{Z}[H_{1}(E(f))]\) of Friedl, Juhasz, and Rasmussen [3].
Bao [1] defined Floer homology for embedded bipartite graphs. Harvey and O'Donnol showed that Bao's Floer homology is essentially the same as their grid homology.
In this paper, based on the work of Harvey and O'Donnol [4], we first quickly define the hat version of the grid homology for transverse spatial graphs that may have sinks or sources. For sinkless and sourceless transverse spatial graphs, they first defined the minus version and then the hat version using it. The conditions of being sinkless and sourceless are necessary to define their minus version. On the other hand, we define the hat version directly, without this technical condition.
We define a cut edge for a spatial graph (see Definition 1.2). We then observe the grid homology for a transverse spatial graph \(f\colon G\to S^{3}\) when \(f\) has sinks, sources, or cut edges. We show that the grid homology for \(f\) is trivial if \(f\) has sinks, sources, or cut edges (Theorem 1.4).
As applications of Theorem 1.4, we give some formulas (Corollary 1.6-Theorem 1.9), including a Künneth formula for the knot Floer homology of connected sums (Corollary 1.8). In particular, the behavior of knot Floer homology under connected sums is well known [11, 7]. However, no proof of this formula in the framework of grid homology has been known.
### MOY graphs
An oriented spatial graph \(f\) is the image of an embedding of a directed graph in \(S^{3}\). Intuitively, a **transverse spatial graph** is an oriented spatial graph such that for each vertex, there is a small disk that separates the incoming edges and the outgoing edges. See [4, Definition 2.2] for the definition of transverse spatial graphs.
Let \(E(f)\) denote the set of edges of \(f\) and \(V(f)\) the set of vertices of \(f\). For \(v\in V(f)\), let \(\operatorname{In}(v)\) be the set of edges incoming to \(v\) and \(\operatorname{Out}(v)\) be the set of edges outgoing from \(v\).
**Definition 1.1**.:
1. A **balanced coloring**\(\omega\) for \(f\) is a map \(E(f)\to\mathbb{Z}\) satisfying \(\sum_{e\in\operatorname{In}(v)}\omega(e)=\sum_{e\in\operatorname{Out}(v)} \omega(e)\) for each \(v\in V(f)\).
2. An **MOY graph**\((f,\omega)\) is a pair of a transverse spatial graph and a balanced coloring of \(f\).
We call a vertex \(v\) a **sink** if \(v\) has only incoming edges. Analogously, we call a vertex \(v\) a **source** if \(v\) has only outgoing edges.
**Definition 1.2**.: Let \(f\) be a spatial graph. An edge \(e\in E(f)\) is a **cut edge** if there exists an embedded 2-sphere \(\Sigma\subset S^{3}\) such that \(\Sigma\cap(f(G)-\operatorname{Int}(e))=\emptyset\) and \(e\) meets \(\Sigma\) transversely in a single point.
**Remark 1.3**.:
* The 2-sphere \(\Sigma\) in the above definition is a _cutting sphere_ introduced by Taniyama [15].
* Cut edges for abstract graphs differ from those for spatial graphs. An edge \(e\) of an abstract graph is called a cut edge if \(G-e\) has one more connected component than \(G\).
* Some spatial graphs \(f\colon G\to S^{3}\) have no cut edge even if \(G\) has cut edges as an abstract graph (Figure 1).
### Main results
**Theorem 1.4**.: _Let \((f,\omega)\) be an MOY graph._
1. _If_ \(f\) _has a sink or source, then_ \(\widehat{HF}(f,\omega)=0\)_._
2. _If_ \(f\) _has a cut edge as a spatial graph, then_ \(\widehat{HF}(f,\omega)=0\)_._
We define the disjoint union, the connected sum, and the wedge sum of two MOY graphs as follows:
**Definition 1.5**.: Suppose \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\) are two MOY graphs and \((v_{1},v_{2})\in V(f_{1})\times V(f_{2})\).
1. Let \((f_{1}\sqcup f_{2},\omega_{1}\sqcup\omega_{2})\) be an MOY graph as a disjoint union of \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\), where \(\omega_{1}\sqcup\omega_{2}\) is naturally determined by \(\omega_{1},\omega_{2}\).
2. If \(\omega_{1}(v_{1})=\omega_{2}(v_{2})\), let \((f_{1}\#_{(v_{1},v_{2})}f_{2},\omega_{1}\#\omega_{2})\) be an MOY graph obtained from \((f_{1}\sqcup f_{2},\omega_{1}\sqcup\omega_{2})\) as in Figure 2, where \(\omega_{1}\#\omega_{2}\) is naturally determined by \(\omega_{1},\omega_{2}\).
3. Let \((f_{1}\vee_{(v_{1},v_{2})}f_{2},\omega_{1}\vee\omega_{2})\) be an MOY graph obtained from \(f_{1}\sqcup f_{2}\) by identifying \(v_{1}\) and \(v_{2}\), where \(\omega_{1}\vee\omega_{2}\) is naturally determined by \(\omega_{1},\omega_{2}\).
**Corollary 1.6**.: _Let \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\) be two MOY graphs. For any pair \((v_{1},v_{2})\in V(f_{1})\times V(f_{2})\), we have_
\[\widehat{HF}(f_{1}\vee_{(v_{1},v_{2})}f_{2},\omega_{1}\vee\omega_{2})\cong \widehat{HF}(f_{1},\omega_{1})\otimes\widehat{HF}(f_{2},\omega_{2})\]
_as absolute Maslov graded, relative Alexander graded \(\mathbb{F}\)-vector spaces._
Let \(W(i)\) be a two-dimensional graded vector space \(W(i)\cong\mathbb{F}_{0,0}\oplus\mathbb{F}_{-1,-i}\), where \(\mathbb{F}=\mathbb{Z}/2\mathbb{Z}\). For a bigraded \(\mathbb{F}\)-vector space \(X\), the corresponding **shift** of \(X\), denoted \(X[\![a,b]\!]\), is the bigraded \(\mathbb{F}\)-vector space so that \(X[\![a,b]\!]_{d,s}=X_{d+a,s+b}\). Then, we have
\[X\otimes W(i)\cong X\oplus X[\![1,i]\!].\]
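To make the bookkeeping of shifts concrete, here is a minimal Python sketch (not part of the paper) that models a bigraded \(\mathbb{F}\)-vector space by the dimensions of its graded pieces and verifies the identity \(X\otimes W(i)\cong X\oplus X[\![1,i]\!]\) on dimensions; the space `X` below is an arbitrary hypothetical example.

```python
from collections import defaultdict

def shift(X, a, b):
    """X[[a,b]]_{d,s} = X_{d+a,s+b}; a bigraded space is a dict {(d, s): dimension}."""
    return {(d - a, s - b): dim for (d, s), dim in X.items()}

def tensor(X, Y):
    """Graded tensor product: gradings add, dimensions multiply."""
    Z = defaultdict(int)
    for (d1, s1), m in X.items():
        for (d2, s2), k in Y.items():
            Z[(d1 + d2, s1 + s2)] += m * k
    return dict(Z)

def W(i):
    """The two-dimensional space W(i) = F_{0,0} + F_{-1,-i}."""
    return {(0, 0): 1, (-1, -i): 1}

def direct_sum(*spaces):
    Z = defaultdict(int)
    for X in spaces:
        for key, dim in X.items():
            Z[key] += dim
    return dict(Z)

# Check X (x) W(i) = X (+) X[[1, i]] on an arbitrary example with i = 2.
X = {(0, 0): 1, (-1, -2): 3}
assert tensor(X, W(2)) == direct_sum(X, shift(X, 1, 2))
```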
**Theorem 1.7**.: _Let \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\) be two MOY graphs. Let \((v_{1},v_{2})\in V(f_{1})\times V(f_{2})\) be a pair of vertices with \(\omega_{1}(v_{1})=\omega_{2}(v_{2})\). Then we have_
\[\widehat{HF}(f_{1}\#_{(v_{1},v_{2})}f_{2},\omega_{1}\#\omega_{2})\cong \widehat{HF}(f_{1},\omega_{1})\otimes\widehat{HF}(f_{2},\omega_{2})\otimes W(\omega_{1}(v_{1})),\]
_as absolute Maslov graded, relative Alexander graded \(\mathbb{F}\)-vector spaces._
Figure 1. Spatial handcuff graphs. The rightmost one has no cut edge as a spatial graph.
Figure 2. MOY graphs in Definition 1.5
As a corollary of Theorem 1.7, we give a purely combinatorial proof of a Kunneth formula for the knot Floer homology of connected sums.
**Corollary 1.8**.: _Let \(K_{1}\) and \(K_{2}\) be two knots. Then we have_
\[\widehat{HFK}(K_{1}\#K_{2})\cong\widehat{HFK}(K_{1})\otimes\widehat{HFK}(K_{2}),\]
_as bigraded \(\mathbb{F}\)-vector spaces._
**Theorem 1.9**.: _Let \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\) be two MOY graphs. Then we have_
\[\widehat{HF}(f_{1}\sqcup f_{2},\omega_{1}\sqcup\omega_{2})\cong\widehat{HF}(f_ {1},\omega_{1})\otimes\widehat{HF}(f_{2},\omega_{2})\otimes W(0),\]
_as absolute Maslov graded, relative Alexander graded \(\mathbb{F}\)-vector spaces._
### Outline of the paper
In Section 2, we quickly review the grid homology for MOY graphs. In Section 3, we give the proof of Theorem 1.4 (1). In Section 4, we define some acyclic chain complexes used for the proof of Theorem 1.4 (2), which we give in Section 5. In Sections 6-8, we prove Corollary 1.6, Theorem 1.7 together with Corollary 1.8, and Theorem 1.9, respectively. Finally, in Section 9, we give an application of Theorem 1.4 and some examples.
## 2. Grid homology for general MOY graphs
### The definition of the grid chain complex
This section provides an overview of the grid homology for MOY graphs. It can be defined immediately from the grid homology for transverse spatial graphs by modifying its Alexander grading. For the grid homology for sinkless and sourceless transverse spatial graphs, see [4].
Harvey and O'Donnol [4] defined the minus version of the grid homology for transverse spatial graphs and then the hat and tilde versions. The sinkless and sourceless condition is necessary for their minus version but is not necessary for the tilde and hat versions. In fact, the minus version requires this condition to ensure that \(\partial^{-}\circ\partial^{-}=0\). Referring to [12, Remark 4.6.13], we will introduce the tilde and hat versions without using the minus version.
A **planar graph grid diagram**\(g\) is an \(n\times n\) grid of squares, some of which are decorated with an \(X\)-, \(O\)-, or \(O^{*}\)-marking, subject to the following conditions.
1. There is exactly one \(O\) or \(O^{*}\) on each row and column.
2. If a row or column has no \(X\)-marking or more than one \(X\)-marking, then that row or column has an \(O^{*}\).
3. \(O\)'s (or \(O^{*}\)'s) and \(X\)'s do not share the same square.
We denote the set of \(O\)- and \(O^{*}\)-markings by \(\mathbb{O}\), the set of \(O^{*}\)-markings by \(\mathbb{O}^{*}\), and the set of \(X\)-markings by \(\mathbb{X}\). We label the markings as \(\{O_{i}\}_{i=1}^{n}\) and \(\{X_{j}\}_{j=1}^{m}\), and we assume that \(O_{1},\ldots,O_{V}\) are the \(O^{*}\)-markings.
**Remark 2.1**.: In this paper, we allow grid diagrams to have "isolated \(O^{*}\)'s": \(O^{*}\)-markings with no \(X\) in the row or column. In this case, an isolated \(O^{*}\)-marking represents an isolated vertex.
A graph grid diagram realizes a transverse spatial graph by drawing horizontal segments from the \(O\)- (or \(O^{*}\)-) markings to the \(X\)-markings in each row and vertical ones from the \(X\)-markings to the \(O\)- (or \(O^{*}\)-) markings in each column, and assuming that the vertical segments always cross above the horizontal ones. The \(O^{*}\)-markings correspond to vertices of the transverse spatial graph, and the \(O\)- and \(X\)-markings to interior points of its edges.
Throughout the paper, we only consider graph grid diagrams representing MOY graphs: any \(O\)-marking is connected to some \(O^{*}\)-marking by segments.
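The conditions above are easy to check by machine. The following Python sketch (our own illustration, not from [4]) stores a diagram as sets of marked cells and tests conditions (1)-(3); the example diagram at the end is hypothetical.

```python
def is_planar_graph_grid_diagram(n, Os, O_stars, Xs):
    """Os, O_stars, Xs are sets of cells (column, row) in an n x n grid;
    O_stars is the subset of O-markings decorated with *."""
    O_all = Os | O_stars
    if O_all & Xs:                       # (3) O's and X's do not share a square
        return False
    for axis in (0, 1):                  # check every column (axis 0) and row (axis 1)
        for line in range(n):
            o_count = sum(1 for c in O_all if c[axis] == line)
            x_count = sum(1 for c in Xs if c[axis] == line)
            if o_count != 1:             # (1) exactly one O or O* per row and column
                return False
            has_star = any(c[axis] == line for c in O_stars)
            if x_count != 1 and not has_star:
                return False             # (2) zero X's or several X's forces an O* there
    return True

# A hypothetical 3 x 3 example: one O*-marking, two O-markings, four X-markings.
print(is_planar_graph_grid_diagram(
    3, Os={(1, 0), (2, 1)}, O_stars={(0, 2)}, Xs={(1, 2), (2, 2), (0, 0), (0, 1)}))
```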
**Definition 2.2**.: For a graph grid diagram \(g\) representing \(f\) with balanced coloring \(\omega\), a **weight**\(\omega_{g}\colon\mathbb{O}\cup\mathbb{X}\to\mathbb{Z}\) is the map naturally determined by \(\omega\) as follows:
* \(\omega_{g}(O_{i})=\omega(e)\) if \(O_{i}\) corresponds to the interior of the edge \(e\).
* \(\omega_{g}(X_{j})=\omega(e)\) if \(X_{j}\) corresponds to the interior of the edge \(e\).
* \(\omega_{g}(O_{i})=\sum_{e\in\operatorname{In}(v)}\omega(e)=\sum_{e\in \operatorname{Out}(v)}\omega(e)\) if \(O_{i}\) is decorated by \(*\) and corresponds to the vertex \(v\).
We abbreviate \(\omega_{g}\) to \(\omega\) as long as there is no confusion. We remark that if \(O_{i}\) represents a sink or source, then \(\omega(O_{i})=0\).
We regard a graph grid diagram as a diagram of the torus obtained by identifying edges in a natural way. This is called a **toroidal graph grid diagram**. We assume that every toroidal diagram is oriented naturally. We write the horizontal circles and vertical circles which separate the torus into \(n\times n\) squares as \(\boldsymbol{\alpha}=\{\alpha_{i}\}_{i=1}^{n}\) and \(\boldsymbol{\beta}=\{\beta_{j}\}_{j=1}^{n}\) respectively.
Any two graph grid diagrams representing the same transverse spatial graph are connected by a finite sequence of the graph grid moves [4, Theorem 3.6]. The graph grid moves are the following three moves (refer to [4]) :
* **Cyclic permutation** (the left of Figure 3) permuting the rows or columns cyclically.
* **Commutation\({}^{\prime}\)** (the right of Figure 3) permuting two adjacent columns satisfying the following condition; there are vertical line segments \(\operatorname{LS}_{1},\operatorname{LS}_{2}\) on the torus such that (1) \(\operatorname{LS}_{1}\cup\operatorname{LS}_{2}\) contains all the \(X\)'s and \(O\)'s in the two columns, (2) the projection of \(\operatorname{LS}_{1}\cup\operatorname{LS}_{2}\) to a single vertical circle \(\beta_{i}\) is \(\beta_{i}\), and (3) the projection of their endpoints \(\partial(\operatorname{LS}_{1})\cup\partial(\operatorname{LS}_{2})\) to a single circle \(\beta_{i}\) is precisely two points. Permuting two rows is defined in the same way.
* **(De-)stabilization\({}^{\prime}\)** (Figure 4) Let \(g\) be an \(n\times n\) graph grid diagram and choose an \(X\)-marking. Then \(g^{\prime}\) is called a stabilization\({}^{\prime}\) of \(g\) if it is an \((n+1)\times(n+1)\) graph grid diagram obtained by adding a new row and column next to the chosen \(X\)-marking of \(g\), moving that \(X\)-marking to the new column, and placing one new \(O\)-marking just above it and one new \(X\)-marking just to its upper left. The inverse of stabilization\({}^{\prime}\) is called destabilization\({}^{\prime}\).
These moves are also valid for MOY graphs. Reidemeister moves around sinks and sources can be realized by these moves in the same way as the general vertices.
A **state**\(\mathbf{x}\) of \(g\) is a bijection \(\boldsymbol{\alpha}\to\boldsymbol{\beta}\), in other words, an \(n\)-tuple of points in the torus such that each horizontal circle has exactly one point of \(\mathbf{x}\) and each vertical circle has
exactly one point of \(\mathbf{x}\). We denote by \(\mathbf{S}(g)\) the set of states of \(g\). We describe a state as \(n\) points on the graph grid diagram (Figure 5).
For \(\mathbf{x},\mathbf{y}\in\mathbf{S}(g)\), a **domain**\(p\) from \(\mathbf{x}\) to \(\mathbf{y}\) is a formal sum of the closures of squares satisfying the following conditions:
* \(p\) is divided by \(\boldsymbol{\alpha}\cup\boldsymbol{\beta}\)
* \(\partial(\partial_{\alpha}p)=\mathbf{y}-\mathbf{x}\) and \(\partial(\partial_{\beta}p)=\mathbf{x}-\mathbf{y}\), where \(\partial_{\alpha}p\) is the portion of the boundary of \(p\) in the horizontal circles \(\alpha_{1}\cup\dots\cup\alpha_{n}\) and \(\partial_{\beta}p\) is the portion of the boundary of \(p\) in the vertical ones.
A domain \(p\) is **positive** if the coefficient of any square is nonnegative. Here, we always consider positive domains. Let \(\pi(\mathbf{x},\mathbf{y})\) denote the set of positive domains from \(\mathbf{x}\) to \(\mathbf{y}\).
Figure 5. An example of a state and a rectangle
Figure 3. Cyclic permutation and commutation\({}^{\prime}\), gray lines are LS\({}_{1}\) and LS\({}_{2}\)
Let \(\mathbf{x},\mathbf{y}\in\mathbf{S}(g)\) be two states with \(|\mathbf{x}\cap\mathbf{y}|=n-2\). A **rectangle**\(r\) from \(\mathbf{x}\) to \(\mathbf{y}\) is a domain such that \(\partial(r)\) is the union of four segments. A rectangle \(r\) is **empty** if \(\mathbf{x}\cap\operatorname{Int}(r)=\mathbf{y}\cap\operatorname{Int}(r)=\emptyset\). Let \(\operatorname{Rect}^{\circ}(\mathbf{x},\mathbf{y})\) be the set of empty rectangles from \(\mathbf{x}\) to \(\mathbf{y}\). If \(|\mathbf{x}\cap\mathbf{y}|\neq n-2\), then we define \(\operatorname{Rect}^{\circ}(\mathbf{x},\mathbf{y})=\emptyset\).
For two domains \(p_{1}\in\pi(\mathbf{x},\mathbf{y})\) and \(p_{2}\in\pi(\mathbf{y},\mathbf{z})\), the **composite domain**\(p_{1}*p_{2}\) is the domain from \(\mathbf{x}\) to \(\mathbf{z}\) such that the coefficient of each square is the sum of the coefficient of the square of \(p_{1}\) and \(p_{2}\).
**Definition 2.3**.: Let \(\widetilde{CF}(g,\omega)\) be the \(\mathbb{F}\)-vector space generated by \(\mathbf{S}(g)\), equipped with the endomorphism
\[\widetilde{\partial}(\mathbf{x})=\sum_{\mathbf{y}\in\mathbf{S}(g)}\#\{r\in \operatorname{Rect}^{\circ}(\mathbf{x},\mathbf{y})|r\cap\mathbb{O}=r\cap \mathbb{X}=\emptyset\}\cdot\mathbf{y},\]
where \(\#\{\cdot\}\) counts rectangles modulo \(2\).
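As an illustration of Definition 2.3 (our own sketch, not taken from [4] or [8]), the mod-2 count of empty rectangles can be computed directly on a planar realization. In the Python code below a state is a tuple `x` with `x[c]` the row of its point on the vertical circle `c`, markings are cells `(column, row)`, and we use the convention, read off from \(\partial(\partial_{\alpha}p)=\mathbf{y}-\mathbf{x}\), that a rectangle from \(\mathbf{x}\) to \(\mathbf{y}\) has its lower-left and upper-right corners at points of \(\mathbf{x}\).

```python
def span(start, end, n):
    """Cells swept going rightward/upward from `start` to `end` on the torus (mod n)."""
    return [(start + s) % n for s in range((end - start) % n)]

def rectangles(x, y, n):
    """The two rectangles from state x to state y (empty list if none exist).
    Each rectangle is a pair (column cells, row cells)."""
    diff = [c for c in range(n) if x[c] != y[c]]
    if len(diff) != 2:
        return []
    i, j = diff
    if x[i] != y[j] or x[j] != y[i]:
        return []
    return [(span(i, j, n), span(x[i], x[j], n)),
            (span(j, i, n), span(x[j], x[i], n))]

def is_empty(rect, x, n):
    """No point of x (hence of y) lies in the interior of the rectangle."""
    cols, rows = rect
    return not any(c in cols[1:] and x[c] in rows[1:] for c in range(n))

def tilde_coefficient(x, y, Os, Xs, n):
    """Coefficient of y in the tilde differential of x: the mod-2 number of
    empty rectangles from x to y containing no O- and no X-marking."""
    markings = set(Os) | set(Xs)
    count = 0
    for cols, rows in rectangles(x, y, n):
        cells = {(c, r) for c in cols for r in rows}
        if is_empty((cols, rows), x, n) and not (cells & markings):
            count += 1
    return count % 2
```

For instance, on a \(2\times 2\) torus with no markings the two states are connected by two empty rectangles, so `tilde_coefficient` returns \(0\); this degenerate case is only a sanity check of the code, not a graph grid diagram.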
**Definition 2.4**.: Let \(\widehat{CF}(g,\omega)\) be the \(\mathbb{F}\)-vector space with basis \(\{U_{V+1}^{k_{V+1}}\cdots U_{n}^{k_{n}}\cdot\mathbf{x}\,|\,k_{i}\geq 0,\mathbf{x}\in\mathbf{S}(g)\}\), equipped with the endomorphism defined as
\[\widehat{\partial}(\mathbf{x})=\sum_{\mathbf{y}\in\mathbf{S}(g)}\left(\sum_{ \{r\in\operatorname{Rect}^{\circ}(\mathbf{x},\mathbf{y})|r\cap\mathbb{X}=r \cap\mathbb{O}^{*}=\emptyset\}}U_{V+1}^{O_{V+1}(r)}\cdots U_{n}^{O_{n}(r)} \right)\mathbf{y}.\]
A **planar realization** of a toroidal diagram \(g\) is a planar figure obtained by cutting it along some \(\alpha_{i}\) and \(\beta_{j}\) and putting it on \([0,n)\times[0,n)\subset\mathbb{R}^{2}\) in a natural way.
For two points \((a_{1},a_{2}),(b_{1},b_{2})\in\mathbb{R}^{2}\), we say \((a_{1},a_{2})<(b_{1},b_{2})\) if \(a_{1}<b_{1}\) and \(a_{2}<b_{2}\). For two finite sets of points \(A,B\subset\mathbb{R}^{2}\), let \(\mathcal{I}(A,B)\) be the number of pairs \(a\in A,b\in B\) with \(a<b\), and let \(\mathcal{J}(A,B)=(\mathcal{I}(A,B)+\mathcal{I}(B,A))/2\).
We regard the \(n\) points of a state as lying on lattice points of \(\mathbb{R}^{2}\), and each \(O\)- and \(X\)-marking as located at \((l+\frac{1}{2},m+\frac{1}{2})\) for some \(l,m\in\{0,1,\ldots,n-1\}\).
**Definition 2.5**.: Let \(\omega\) be a weight of \(g\). Take a planar realization of \(g\). For \(\mathbf{x}\in\mathbf{S}(g)\), the Maslov grading \(M(\mathbf{x})\) and the Alexander grading \(A(\mathbf{x})\) are defined by
\[M(\mathbf{x}) =\mathcal{J}(\mathbf{x}-\mathbb{O},\mathbf{x}-\mathbb{O})+1, \tag{2.1}\] \[A(\mathbf{x}) =\mathcal{J}(\mathbf{x},\sum_{j=1}^{m}\omega(X_{j})\cdot X_{j}-\sum_{i=1}^{n}\omega(O_{i})\cdot O_{i}). \tag{2.2}\]
These two gradings are extended to the whole of \(\widehat{CF}(g,\omega)\) by
\[M(U_{i})=-2,\ A(U_{i})=-\omega(O_{i})\ (i=V+1,\ldots,n). \tag{2.3}\]
The Maslov grading is well-defined on the toroidal diagram [8, Lemma 2.4]. The Alexander grading is not well-defined on the toroidal diagram; however, the relative Alexander grading \(A^{rel}(\mathbf{x},\mathbf{y})=A(\mathbf{x})-A(\mathbf{y})\) is well-defined [4, Corollary 4.14].
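Since \(\mathcal{I}\), \(\mathcal{J}\), and the gradings of Definition 2.5 are defined by finite counts, they are straightforward to compute. The Python sketch below (our own illustration) takes a planar realization with state points at lattice points and markings at half-integer centers, together with the weights \(\omega(O_{i})\) and \(\omega(X_{j})\), and evaluates \(M\) and \(A\) using the bilinearity of \(\mathcal{J}\); the \(2\times 2\) example at the end is hypothetical.

```python
def count_I(A, B):
    """I(A, B): number of pairs (a, b) in A x B with a < b coordinatewise."""
    return sum(1 for (a1, a2) in A for (b1, b2) in B if a1 < b1 and a2 < b2)

def J(A, B):
    return (count_I(A, B) + count_I(B, A)) / 2

def maslov(x, Os):
    """M(x) = J(x - O, x - O) + 1, expanded bilinearly:
    J(x,x) - 2 J(x,O) + J(O,O) + 1."""
    return J(x, x) - 2 * J(x, Os) + J(Os, Os) + 1

def alexander(x, X_marks, O_marks, wX, wO):
    """A(x) = J(x, sum_j w(X_j) X_j - sum_i w(O_i) O_i), expanded bilinearly."""
    return (sum(w * J(x, [p]) for p, w in zip(X_marks, wX))
            - sum(w * J(x, [p]) for p, w in zip(O_marks, wO)))

# Hypothetical 2 x 2 example: state points at lattice points, markings at centers.
x = [(0, 0), (1, 1)]
Os = [(0.5, 1.5), (1.5, 0.5)]
Xs = [(0.5, 0.5), (1.5, 1.5)]
print(maslov(x, Os), alexander(x, Xs, Os, wX=[1, 1], wO=[1, 1]))
```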
**Proposition 2.6**.: \(\widetilde{CF}(g,\omega)\) _and \(\widehat{CF}(g,\omega)\) are absolute Maslov graded, relative Alexander graded chain complexes. We denote their homologies by \(\widetilde{HF}(g,\omega)\) and \(\widehat{HF}(g,\omega)\) respectively._
**Proof.** By the same argument as [4, Proposition 4.18], the differentials \(\widetilde{\partial}\) and \(\widehat{\partial}\) drop the Maslov grading by one and preserve the Alexander grading.
We will use the notations of [12, Lemma 4.6.13] to show that \(\widetilde{\partial}^{2}=0\) and \(\widehat{\partial}^{2}=0\). The cases (R-1) and (R-2) can be shown in the same way. The case (R-3) is slightly different. When \(\mathbf{x}=\mathbf{z}\), the composite domain of two empty rectangles is a thin annulus because the rectangles are empty. Since every row and column has an (isolated) \(O^{*}\)-marking or at least one \(X\)-marking, such a domain cannot appear in the hat and tilde versions. \(\square\)
### The invariance of \(\widehat{HF}\)
The invariance of our hat version follows immediately from the invariance of the hat version of Harvey and O'Donnol [4] because our definitions except for the Alexander grading are the same as theirs. To prove the invariance, it is sufficient to recall the chain maps Harvey and O'Donnol gave and to take the induced maps.
So the following propositions follow immediately.
**Proposition 2.7**.: _Let \(g\) and \(g^{\prime}\) be two graph grid diagrams for an MOY graph \((f,\omega)\). Let \(\omega_{g}\) and \(\omega_{g^{\prime}}\) be weights for \(g\) and \(g^{\prime}\) respectively determined by \(\omega\). Then there is an isomorphism of absolute Maslov graded, relative Alexander graded \(\mathbb{F}\)-vector spaces_
\[\widehat{HF}(g,\omega_{g})\cong\widehat{HF}(g^{\prime},\omega_{g^{\prime}}).\]
_We denote this common homology by \(\widehat{HF}(f,\omega)\)._
**Proof.** Let \(g\) and \(g^{\prime}\) be two graph grid diagrams for \(f\). By [4, Theorem 3.6], it is sufficient to check the case that \(g^{\prime}\) is obtained from \(g\) by a single graph grid move.
If \(g^{\prime}\) is obtained from \(g\) by a single cyclic permutation, then the natural correspondence of states induces the isomorphism of chain complexes \(\widehat{CF}(g,\omega_{g})\cong\widehat{CF}(g^{\prime},\omega_{g^{\prime}})\).
If \(g^{\prime}\) is obtained from \(g\) by a single commutation\({}^{\prime}\) or stabilization\({}^{\prime}\), then the quasi-isomorphism of [4, Proposition 5.1 or 5.5] induces a quasi-isomorphism of our hat chain complexes. Let \(\phi\) be the quasi-isomorphism of chain complexes of \(\mathbb{F}[U_{1},\dots,U_{V}]\)-modules for their minus version. We can take the induced map \(\widehat{\phi}\) of chain complexes of \(\mathbb{F}\)-vector spaces for their hat version by setting \(U_{1}=\dots=U_{V}=0\). Since the definitions of their hat version and our hat version are the same except for the Alexander gradings, \(\widehat{\phi}\) also works as a quasi-isomorphism for our hat chain complexes. We remark that \(g\) sometimes has isolated \(O^{*}\)'s for our hat version, but the argument of counting rectangles works similarly. \(\square\)
**Proposition 2.8**.: _Let \(g\) be an \(n\times n\) graph grid diagram. Then there is an isomorphism as absolute Maslov graded, relative Alexander graded \(\mathbb{F}\)-vector spaces_
\[\widetilde{HF}(g,\omega_{g})\cong\widehat{HF}(g,\omega_{g})\bigotimes_{i=V+1 }^{n}W(\omega_{g}(O_{i})). \tag{2.4}\]
**Proof.** It is shown by the same arguments as [4, Proposition 4.21, Lemma 4.31, and Proposition 4.32].
## 3. The proof of Theorem 1.4 (1)
Theorem 1.4 (1) is the easier part. We will check that the grid chain complex for an MOY graph with sinks or sources can be written as a mapping cone \(\operatorname{Cone}(\partial_{N}^{I}\colon\widetilde{N}\to\widetilde{I})\) such that the chain map \(\partial_{N}^{I}\) is a quasi-isomorphism. A standard argument of homological algebra then verifies that its homology vanishes.
_Proof of Theorem 1.4 (1)._ We will show the case of an MOY graph with a source. The case of a sink can be proved in the same way by reflecting the graph grid diagram along the diagonal line.
Let \((f,\omega)\) be an MOY graph with a source. Let \(f^{\prime}\) be the spatial graph obtained from \(f\) by removing the source and all edges outgoing from the source. Take the balanced coloring \(\omega^{\prime}\) for \(f^{\prime}\) naturally determined by \(\omega\).
Choose one vertex \(v\) of \(f\) adjacent to the source. Take two graph grid diagrams \(g\) and \(g^{\prime}\) for \(f\) and \(f^{\prime}\) respectively. Suppose that \(g^{\prime}\) is obtained from \(g\) by removing the top row and rightmost column of \(g\) and that the \(O^{*}\)-marking of \(g\) corresponding to \(v\) is in the rightmost square of the top row of \(g^{\prime}\) (Figure 6).
According to Proposition 2.8, it is sufficient to check that the homology of the tilde version vanishes. Take a point \(c=\alpha_{n+1}\cap\beta_{n+1}\). We will decompose the set of states \(\mathbf{S}(g)\) as the disjoint union \(\mathbf{I}(g)\cup\mathbf{N}(g)\), where \(\mathbf{I}(g)=\{\mathbf{x}\in\mathbf{S}(g)|c\in\mathbf{x}\}\) and \(\mathbf{N}(g)=\{\mathbf{x}\in\mathbf{S}(g)|c\notin\mathbf{x}\}\). This decomposition gives a decomposition \(\widetilde{CF}(g,\omega)=\widetilde{I}\oplus\widetilde{N}\) as a vector space, where \(\widetilde{I}\) and \(\widetilde{N}\) are the spans of \(\mathbf{I}(g),\mathbf{N}(g)\) respectively. Then we can write the differential on \(\widetilde{CF}(g,\omega)\) as
\[\widetilde{\partial}=\begin{pmatrix}\partial_{I}^{I}&\partial_{N}^{I}\\ 0&\partial_{N}^{N}\end{pmatrix},\]
and \(\widetilde{CF}(g,\omega)=\operatorname{Cone}(\partial_{N}^{I}\colon( \widetilde{N},\partial_{N}^{N})\to(\widetilde{I},\partial_{I}^{I}))\).
To see that \(\widetilde{HF}(g,\omega)=0\), we will check that the chain map \(\partial_{N}^{I}\colon(\widetilde{N},\partial_{N}^{N})\to(\widetilde{I}, \partial_{I}^{I})\) is a quasi-isomorphism. Let \(\mathbb{O}=\{O_{0},O_{1},\ldots,O_{n}\}\) denote the set of \(O\)-markings of \(g\)
Figure 6. Two graph grid diagrams representing \(f\) and \(f^{\prime}\) respectively
and \(\mathbb{O}^{\prime}=\{O_{1},\ldots,O_{n}\}\) denote the set of \(O\)-markings of \(g^{\prime}\). We regard \(O_{0}\) as the \(O^{*}\)-marking of \(g\) representing the source.
Let \(\mathcal{H}\colon\widetilde{I}\to\widetilde{N}\) and \(\mathcal{H}_{N}\colon\widetilde{N}\to\widetilde{N}\) be two linear maps defined by for \(\mathbf{x}\in\mathbf{I}(g)\) and \(\mathbf{y}\in\mathbf{N}(g)\),
\[\mathcal{H}(\mathbf{x}) =\sum_{\mathbf{y}^{\prime}\in\mathbf{N}(g)}\#\{r\in\operatorname {Rect}^{\circ}(\mathbf{x},\mathbf{y}^{\prime})|r\cap(\mathbb{O}\setminus O_{ 0})=r\cap\mathbb{X}=\emptyset,O_{0}\in r\}\cdot\mathbf{y}^{\prime},\] \[\mathcal{H}_{N}(\mathbf{y}) =\sum_{\mathbf{y}^{\prime}\in\mathbf{N}(g)}\#\{r\in\operatorname {Rect}^{\circ}(\mathbf{y},\mathbf{y}^{\prime})|r\cap(\mathbb{O}\setminus O_{ 0})=r\cap\mathbb{X}=\emptyset,O_{0}\in r\}\cdot\mathbf{y}^{\prime}.\]
Then it is straightforward to see that \(\partial_{N}^{I}\circ\mathcal{H}=\operatorname{id}_{\widetilde{I}}\) and \(\mathcal{H}\circ\partial_{N}^{I}+\partial_{N}^{N}\circ\mathcal{H}_{N}+ \mathcal{H}_{N}\circ\partial_{N}^{N}=\operatorname{id}_{\widetilde{N}}\) by counting all rectangles appearing in these equations. So \(\partial_{N}^{I}\colon(\widetilde{N},\partial_{N}^{N})\to(\widetilde{I}, \partial_{I}^{I})\) is a quasi-isomorphism.
## 4. Preparations for Theorem 1.4 (2)
We will prepare some special chain complexes whose homology is trivial. These chain complexes will appear as subcomplexes of the grid chain complex.
**Lemma 4.1**.: _Let \(C\) be a chain complex and \(C^{\prime}\) be its subcomplex. Then, \(H(C)\cong H(C/C^{\prime})\) if and only if \(H(C^{\prime})=0\)._
**Proof.** It follows immediately from the long exact sequence in homology associated with the short exact sequence \(0\to C^{\prime}\to C\to C/C^{\prime}\to 0\). \(\square\)
For simplicity, we often use the following graph grid diagrams.
**Definition 4.2**.: An \(n\times n\) graph grid diagram is **good** if it satisfies the following two conditions:
* The leftmost square of the top row and the rightmost square of the bottom row each contain an \(O\)- or \(O^{*}\)-marking,
* The rightmost square of the top row has an \(X\)-marking.
### A special chain complex (1)
Let \(g_{a}\) be an \((n+1)\times(n+1)\) graph grid diagram such that the upper left \(n\times n\) block can be viewed as a good graph grid diagram and that the rightmost column of \(g_{a}\) has an \(O^{*}\)-marking and an \(X\)-marking as in Figure 7. Let \(g_{b}\) be an \((n+1)\times(n+1)\) graph grid diagram constructed similarly. We remark that the upper left \(n\times n\) block of \(g_{a}\) and the lower right \(n\times n\) block of \(g_{b}\) are arbitrary except for the markings specified above.
Let \(\omega_{a},\omega_{b}\) be weights for \(g_{a},g_{b}\) respectively. Take two points \(c,d\) and an area \(e\) as in Figure 7. Similarly to the proof of Theorem 1.4 (1), we decompose \(\mathbf{S}(g_{a})\) as the disjoint union \(\mathbf{S}(g_{a})=\mathbf{S}(g_{a},c)\sqcup\mathbf{S}(g_{a},d)\sqcup\mathbf{S}(g_{a},e)\), where
\[\mathbf{S}(g_{a},c) =\{\mathbf{x}\in\mathbf{S}(g_{a})|c\in\mathbf{x}\},\] \[\mathbf{S}(g_{a},d) =\{\mathbf{x}\in\mathbf{S}(g_{a})|d\in\mathbf{x}\},\] \[\mathbf{S}(g_{a},e) =\{\mathbf{x}\in\mathbf{S}(g_{a})|c,d\notin\mathbf{x}\}.\]
We define \(\mathbf{S}(g_{b},c),\mathbf{S}(g_{b},d)\), and \(\mathbf{S}(g_{b},e)\) in the same manner. Then we get the splitting of the vector space \(\widetilde{CF}(g_{a},\omega_{a})\cong N_{a}(c)\oplus I_{a}(d)\oplus N_{a}(e)\), where \(N_{a}(c)\), \(I_{a}(d)\), and \(N_{a}(e)\) are the spans of \(\mathbf{S}(g_{a},c),\mathbf{S}(g_{a},d)\), and \(\mathbf{S}(g_{a},e)\) respectively. We also obtain the splitting \(\widetilde{CF}(g_{b},\omega_{b})\cong N_{b}(c)\oplus I_{b}(d)\oplus N_{b}(e)\).
We remark that \(N_{a}(c)\) and \(N_{a}(e)\) are subcomplexes of \(\widetilde{CF}(g_{a},\omega_{a})\) and \(I_{a}(d)\) is a quotient complex. Indeed, let \(r\in\mathrm{Rect}^{\circ}(\mathbf{x},\mathbf{y})\) be an empty rectangle on \(g_{a}\) counted by the differential. If \(\mathbf{x}\in\mathbf{S}(g_{a},c)\), then clearly we have \(\mathbf{y}\in\mathbf{S}(g_{a},c)\); therefore \(N_{a}(c)\) is a subcomplex of \(\widetilde{CF}(g_{a},\omega_{a})\). The case of \(N_{a}(e)\) can be proved in the same way.
**Lemma 4.3**.: \(H(N_{a}(e))=0\) _and \(H(N_{b}(e))=0\)._
**Proof.** Let \(f_{a}\) be the spatial graph associated with \(g_{a}\). Then clearly \(f_{a}\) has a sink, so \(\widetilde{HF}(g_{a},\omega_{a})=0\) by Theorem 1.4 (1) and Proposition 2.8. Using the same notation as the proof of Theorem 1.4 (1), careful observation shows that \(\widetilde{CF}(g_{a},\omega_{a})/N_{a}(e)\cong\mathrm{Cone}(\partial_{a}\colon I_{a}(d)\to N_{a}(c))\) as chain complexes, where \(\partial_{a}\) is the chain map defined by
\[\partial_{a}(\mathbf{x})=\sum_{\mathbf{y}\in\mathbf{S}(g_{a},c)}\#\{r\in \mathrm{Rect}^{\circ}(\mathbf{x},\mathbf{y})|r\cap\mathbb{O}=r\cap\mathbb{X}= \emptyset\}\cdot\mathbf{y}.\]
Since \(\partial_{a}\) always moves both \(d\) and the point on the horizontal circle containing \(c\), \(\partial_{a}\) is an isomorphism of chain complexes. Thus \(H(\mathrm{Cone}(\partial_{a}\colon I_{a}(d)\to N_{a}(c)))=0\), and since \(\widetilde{HF}(g_{a},\omega_{a})=0\), Lemma 4.1 shows that \(H(N_{a}(e))=0\). The same proof works for \(N_{b}(e)\). We remark that this lemma holds independently of the choice of weights for \(g_{a}\) and \(g_{b}\). \(\square\)
### A special chain complex (2)
We will introduce certain chain complexes using _grid diagram-like diagrams_. Let \(n\geqq 2\) be a fixed integer and \(g_{n}\) be an \(n\times n\) diagram as in the right of Figure 7. Suppose that \(g_{n}\) has exactly one \(O^{*}\)-marking and \(2n-2\)\(X\)-markings and that there is no marking in the lower left \((n-1)\times(n-1)\) block. Although \(g_{n}\) is not a graph grid diagram, we can define a chain complex in the same manner as a graph grid diagram because \(g_{n}\) has at least one \(X\)-marking in each row and column.
We define \(\mathbb{O}^{*},\mathbb{X},\mathbf{S}(g_{n}),\mathrm{Rect}^{\circ}\), and Maslov grading in the same manner as general grid homology in Section 2.1. We remark that \(\#\mathbb{O}^{*}=1\) and \(\#\mathbb{X}=2n-2\).
Figure 7. Left: special graph grid diagrams \(g_{a}\) and \(g_{b}\). Right: a special diagram \(g_{n}\)
**Definition 4.4**.: The chain complex \(C_{n}\) is the \(\mathbb{F}\)-vector space generated by \(\mathbf{S}(g_{n})\), whose differential is defined by
\[\partial_{n}(\mathbf{x})=\sum_{\mathbf{y}\in\mathbf{S}(g)}\#\{r\in\mathrm{Rect }^{\circ}(\mathbf{x},\mathbf{y})|r\cap\mathbb{O}^{*}=r\cap\mathbb{X}=\emptyset \}\cdot\mathbf{y},\]
where \(\#\{\cdot\}\) counts rectangles modulo \(2\).
By the definition of the Maslov grading, \(\partial_{n}\) drops the Maslov grading by one.
**Proposition 4.5**.: _Let \(C_{n}\) be a chain complex of Definition 4.4. Then we have \(H(C_{n})=0\) for any \(n\geqq 2\)._
**Proof.** We prove this by induction on \(n\).
For \(n=2\), the chain complex \(C_{2}\) is generated by two states \(\mathbf{x}_{12}\) and \(\mathbf{x}_{21}\), and we have
\[\partial(\mathbf{x}_{12})=\mathbf{x}_{21},\ \partial(\mathbf{x}_{21})=0.\]
So \(H(C_{2})=0\).
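Small cases like \(C_{2}\) can also be checked by machine: over \(\mathbb{F}=\mathbb{Z}/2\mathbb{Z}\), the total homology dimension of a complex is determined by the rank of its boundary matrix. The following Python sketch (our own illustration, assuming numpy is available) verifies \(H(C_{2})=0\) from the differential written above.

```python
import numpy as np

def gf2_rank(M):
    """Rank over F_2 by Gaussian elimination."""
    M = np.array(M, dtype=np.uint8) % 2
    rank = 0
    for c in range(M.shape[1]):
        pivot = next((r for r in range(rank, M.shape[0]) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(M.shape[0]):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def total_homology_dim(D):
    """dim H = dim ker D - dim im D for a square boundary matrix D with D^2 = 0."""
    r = gf2_rank(D)
    return len(D[0]) - r - r

# C_2 has basis (x_12, x_21) with d(x_12) = x_21 and d(x_21) = 0.
D = [[0, 0],
     [1, 0]]
assert total_homology_dim(D) == 0  # H(C_2) = 0
```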
For the inductive step, we will divide \(C_{n}\) into many subcomplexes isomorphic to \(C_{n-1}\). Take \(n\) points \(c_{1},\ldots,c_{n}\) on the vertical circle \(\beta_{1}\) as in Figure 7. For \(i=1,\ldots,n\), let \(C_{n}(i)\) be the span of the set of states containing \(c_{i}\). Then we have \(C_{n}=C_{n}(1)\oplus\cdots\oplus C_{n}(n)\) as a vector space.
Let \(r\in\mathrm{Rect}^{\circ}(\mathbf{x},\mathbf{y})\) be an empty rectangle, and let \(c_{j}=\mathbf{x}\cap\beta_{1}\) and \(c_{k}=\mathbf{y}\cap\beta_{1}\). Then we have \(j\geqq k\) because the rightmost column is filled with markings. Hence we obtain a sequence of subcomplexes
\[C_{n}(n)\subset C_{n}(n)\oplus C_{n}(n-1)\subset\cdots\subset C_{n}(n)\oplus \cdots\oplus C_{n}(1)=C_{n}.\]
This sequence gives a sequence of chain complexes \(C_{n}(n),C_{n}(n-1),\ldots,C_{n}(1)\), where \(C_{n}(i)\) is the quotient complex of \(C_{n}(i)\oplus\cdots\oplus C_{n}(n)\) by \(C_{n}(i+1)\oplus\cdots\oplus C_{n}(n)\) for \(i=n-1,\ldots,1\).
Obviously, we have \(C_{n}(i)\cong C_{n-1}\) for \(i=1,\ldots,n\), so \(H(C_{n}(i))=0\) by the induction hypothesis. Applying Lemma 4.1 repeatedly verifies that \(H(C_{n})=0\). \(\Box\)
## 5. The proof of Theorem 1.4 (2)
Let \((f,\omega)\) be an MOY graph such that \(f\) has a cut edge as a spatial graph (Definition 1.2). If at least one of the two vertices connected by the cut edge is a sink or a source, then Theorem 1.4 (1) shows that \(\widehat{HF}(f,\omega)=0\). So we may suppose that the two vertices connected by the cut edge are neither sinks nor sources.
### The structure of the grid chain complex
Let \((f,\omega)\) be an MOY graph such that \(f\) has a cut edge as a spatial graph and that the two vertices connected by it are neither sinks nor sources. Then there exists a \(2n\times 2n\) graph grid diagram \(g\) for \(f\). Consider four \(n\times n\) blocks obtained by cutting \(g\) along the horizontal circles \(\alpha_{1}\cup\alpha_{n+1}\) and the vertical circles \(\beta_{1}\cup\beta_{n+1}\). We will call the four blocks \(g_{11}\), \(g_{12}\), \(g_{21}\), and \(g_{22}\) respectively (see Figure 8). We can assume that \(g\) satisfies the following conditions:
* \(g_{11}\) and \(g_{22}\) can be viewed as good graph grid diagrams.
* The rightmost square of the bottom row of \(g_{11}\) has an \(O^{*}\)-marking.
* \(g_{12}\) has no \(O\)- or \(O^{*}\)-markings and only one \(X\)-marking in the leftmost square of the bottom row.
* \(g_{21}\) has no markings.
* The leftmost square of the top row of \(g_{22}\) has an \(O^{*}\)-marking.
Take a point \(c\) as the intersection point \(\alpha_{n+1}\cap\beta_{n+1}\). Using the same notations as the proof of Theorem 1.4 (1), we can write \(\widetilde{CF}(g,\omega)\) as a mapping cone \(\operatorname{Cone}(\partial_{I}^{N}\colon(\widetilde{I},\partial_{I})\to( \widetilde{N},\partial_{N}))\).
For \(k=0,\ldots,n\), let \(\mathbf{S}(g_{11},k)\) be the set of \(k\)-tuples of points of \((\alpha_{n+1}\cup\cdots\cup\alpha_{2n})\cap(\beta_{1}\cup\cdots\cup\beta_{n})\) such that each horizontal and vertical circle contains at most one of the points. We define \(\mathbf{S}(g_{12},k)\), \(\mathbf{S}(g_{21},k)\), and \(\mathbf{S}(g_{22},k)\) in the same way. Then we will represent each state \(\mathbf{x}\in\mathbf{S}(g)\) uniquely as
\[\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix},\]
where \(\mathbf{x}_{11}\in\mathbf{S}(g_{11},n-k)\), \(\mathbf{x}_{12}\in\mathbf{S}(g_{12},k)\), \(\mathbf{x}_{21}\in\mathbf{S}(g_{21},k)\), and \(\mathbf{x}_{22}\in\mathbf{S}(g_{22},n-k)\). Using this representation, we decompose the set of grid states \(\mathbf{S}(g)\) as the disjoint union \(\mathbf{S}_{0}(g)\cup\cdots\cup\mathbf{S}_{n}(g)\), where \(\mathbf{S}_{k}(g)\) is the set of states represented by \(\mathbf{S}(g_{11},n-k)\), \(\mathbf{S}(g_{12},k)\), \(\mathbf{S}(g_{21},k)\), and \(\mathbf{S}(g_{22},n-k)\).
Now we give the splitting of the vector space
\[\widetilde{CF}(g,\omega)\cong\left(\bigoplus_{k=1}^{n}I_{k}\right)\oplus\left( \bigoplus_{k=0}^{n}N_{k}\right),\]
where \(I_{k}\) is the span of \(\mathbf{S}_{k}(g)\cap\mathbf{I}(g)\), and \(N_{k}\) is the span of \(\mathbf{S}_{k}(g)\cap\mathbf{N}(g)\). We remark that \(\mathbf{S}_{0}(g)\cap\mathbf{I}(g)=\emptyset\).
The differential of \(\widetilde{CF}(g,\omega)\), denoted by \(\widetilde{\partial}\), satisfies that for \(k=1,\ldots,n\),
\[\widetilde{\partial}(I_{k}) \subset I_{k-1}\oplus I_{k}\oplus N_{k-1},\] \[\widetilde{\partial}(N_{k}) \subset N_{k-1}\oplus N_{k}.\]
This relation can be expressed by the following schematic picture:
\[\begin{array}{ccccccccc}I_{n}&\longrightarrow&I_{n-1}&\longrightarrow&\cdots&\longrightarrow&I_{1}&&\\ &\searrow&&\searrow&&\searrow&&\searrow&\\ N_{n}&\longrightarrow&N_{n-1}&\longrightarrow&\cdots&\longrightarrow&N_{1}&\longrightarrow&N_{0}\end{array}\]
Figure 8. Left: A \(2n\times 2n\) graph grid diagram. Right: an example of \(2n\times 2n\) diagram for a spatial handcuff graph.
The top row of the picture represents the chain complex \((\widetilde{I},\partial_{I})\), and the bottom row represents the chain complex \((\widetilde{N},\partial_{N})\).
### The main idea of the proof
The main idea of the proof is to see that the following chain complexes are acyclic:
* \(\mathcal{C}=\mathrm{Cone}(\partial_{I}^{N}|_{I_{1}}\colon I_{1}\to N_{0})\), which is the subcomplex of \(\widetilde{CF}(g,\omega)\),
* \(N_{i}\), which is the subcomplex of \(\widetilde{CF}(g,\omega)/(\mathcal{C}\oplus N_{1}\oplus\cdots\oplus N_{i-1})\) for \(i=1,\ldots,n\),
* \(I_{i}\), which is the subcomplex of \(\widetilde{CF}(g,\omega)/(\widetilde{N}\oplus I_{1}\oplus I_{2}\oplus\cdots\oplus I_{i-1})\) for \(i=2,\ldots,n-1\),
* \(I_{n}\), which is the quotient complex \(\widetilde{CF}(g,\omega)/(\widetilde{N}\oplus I_{1}\oplus I_{2}\oplus\cdots\oplus I_{n-1})\).
Then Lemma 4.1 verifies Theorem 1.4 (2). We will see that these chain complexes are decomposed into finitely many acyclic chain complexes introduced in Sections 4.1 and 4.2.
**Lemma 5.1**.: \(\mathcal{C}\) _is acyclic._
**Proof.** Since any state \(\mathbf{x}\in\widetilde{I}\) satisfies \(\widetilde{\partial}\circ\widetilde{\partial}(\mathbf{x})=(\partial_{I}\circ \partial_{I}+\partial_{I}^{N}\circ\partial_{I}+\partial_{N}\circ\partial_{I}^{N })(\mathbf{x})=0\), we have \(\partial_{I}^{N}\circ\partial_{I}+\partial_{N}\circ\partial_{I}^{N}=0\) and thus \(\partial_{I}^{N}\) is a chain map.
For any state \(\mathbf{x}\) of \(I_{1}\), the chain map \(\partial_{I}^{N}|_{I_{1}}\) only counts the rectangle that has \(c\) and \(\mathbf{x}_{21}\) as its corners. So \(\partial_{I}^{N}|_{I_{1}}\) maps generators bijectively and is thus an isomorphism.
Since \(\partial_{I}^{N}|_{I_{1}}\) is an isomorphism of chain complexes, the usual argument of mapping cone shows that \(H(\mathcal{C})=0\). \(\square\)
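For the reader's convenience, the mapping-cone argument used here (and in the proof of Lemma 4.3) can be made completely explicit; the following is a standard observation, written out under the assumption that we work over \(\mathbb{F}\), so all signs may be ignored. If \(f\colon(A,\partial_{A})\to(B,\partial_{B})\) is an isomorphism of chain complexes, the cone \(\operatorname{Cone}(f)=A\oplus B\) carries the differential \(D(a,b)=(\partial_{A}a,\,f(a)+\partial_{B}b)\). The map \(h(a,b)=(f^{-1}(b),0)\) satisfies
\[(Dh+hD)(a,b)=\big(\partial_{A}f^{-1}(b)+f^{-1}(f(a)+\partial_{B}b),\,f(f^{-1}(b))\big)=(a,b),\]
since \(f^{-1}\partial_{B}=\partial_{A}f^{-1}\). Thus \(h\) is a contracting homotopy and \(H(\operatorname{Cone}(f))=0\).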
**Lemma 5.2**.: \(N_{1}\) _is acyclic._
**Proof.** The generators of \(N_{1}\) are the states \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\) with \(\mathbf{x}_{12}\in\mathbf{S}(g_{12},1)\) and \(\{c\}\neq\mathbf{x}_{12}\). Let \(N_{1p}\) be the span of the set of states whose point \(\mathbf{x}_{12}\) is in the gray area \(p\) in Figure 9. We define \(N_{1q}\) and \(N_{1r}\) in the same way. Then we have the splitting of the vector space \(N_{1}\cong N_{1p}\oplus N_{1q}\oplus N_{1r}\). Here \(N_{1p}\) is a subcomplex of \(N_{1}\), and the corresponding quotient complex decomposes as \(N_{1q}\oplus N_{1r}\).
Figure 9. A state of \(N_{1}\). Either \(p\), \(q\), or \(r\) contains one point of the state.
We will show the following isomorphisms of chain complexes:
\[N_{1p}\cong N_{a}(e)\otimes_{\mathbb{F}}N_{b}(e), \tag{5.1}\] \[N_{1q}\cong I_{a}(d)\otimes_{\mathbb{F}}N_{b}(e), \tag{5.2}\] \[N_{1r}\cong N_{a}(e)\otimes_{\mathbb{F}}I_{b}(d), \tag{5.3}\]
where \(N_{\bullet}(e)\) and \(I_{\bullet}(d)\) (\(\bullet=a,b\)) are the special chain complexes in Section 4.1.
Let \(\boldsymbol{\alpha}^{\bullet}=\{\alpha_{i}^{\bullet}\}_{i}\) and \(\boldsymbol{\beta}^{\bullet}=\{\beta_{j}^{\bullet}\}_{j}\) (\(\bullet=g,g_{a},g_{b}\)) be the horizontal and vertical circles of the corresponding graph grid diagram.
We first verify (5.1). Let \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\) be a state of \(N_{1p}\). Write \(\mathbf{x}_{12}=\alpha_{n+i}^{g}\cap\beta_{n+j}^{g}\) and \(\mathbf{x}_{21}=\alpha_{k}^{g}\cap\beta_{l}^{g}\). They determine the states \(\mathbf{x}_{a}\in\mathbf{S}(g_{a},e)\) and \(\mathbf{x}_{b}\in\mathbf{S}(g_{b},e)\) as follows:
* \(\mathbf{x}_{a}\) consists of \(\alpha_{i+1}^{g_{a}}\cap\beta_{n+1}^{g_{a}}\), \(\alpha_{1}^{g_{a}}\cap\beta_{l}^{g_{a}}\), and the \(n-1\) points in the upper left \(n\times n\) block naturally determined by \(\mathbf{x}_{11}\).
* \(\mathbf{x}_{b}\) consists of \(\alpha_{n+1}^{g_{b}}\cap\beta_{j+1}^{g_{b}}\), \(\alpha_{k}^{g_{b}}\cap\beta_{1}^{g_{b}}\), and the \(n-1\) points in the lower right \(n\times n\) block naturally determined by \(\mathbf{x}_{22}\).
Figure 10. Upper: a state mapped by \(f_{1p}\). Lower: a state mapped by \(f_{1q}\).
Figure 11. Rectangles of \(g\) and the corresponding rectangles of \(g_{a},g_{b}\).
Then define the linear map \(f_{1p}\colon N_{1p}\to N_{a}(e)\otimes_{\mathbb{F}}N_{b}(e)\) by \(f_{1p}(\mathbf{x})=\mathbf{x}_{a}\otimes\mathbf{x}_{b}\).
Let \(\partial_{N_{1p}}\), \(\partial_{N_{a}}\), and \(\partial_{N_{b}}\) be the differentials of \(N_{1p}\), \(N_{a}(e)\), and \(N_{b}(e)\) respectively. Then \(f_{1p}\) is a bijection. Any rectangle \(r\) counted by \(\partial_{N_{1p}}\) satisfies either \(r\cap\mathrm{Int}(g_{11})\neq\emptyset\) or \(r\cap\mathrm{Int}(g_{22})\neq\emptyset\). A rectangle with \(r\cap\mathrm{Int}(g_{11})\neq\emptyset\) (resp. \(r\cap\mathrm{Int}(g_{22})\neq\emptyset\)) is realized by a rectangle of \(g_{a}\) (resp. \(g_{b}\); see Figure 11). So we have \(f_{1p}\circ\partial_{N_{1p}}=(\partial_{N_{a}}\otimes\mathrm{id}+\mathrm{id}\otimes\partial_{N_{b}})\circ f_{1p}\). Therefore \(f_{1p}\) is an isomorphism of chain complexes.
Next, we show (5.2). Let \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\) be a state of \(N_{1q}\). In this case, \(\mathbf{x}_{12}\) is on the \((n+1)\)-th horizontal circle \(\alpha_{n+1}\). The same algorithm as in the case of \(N_{1p}\) determines a linear map \(f_{1q}\colon N_{1q}\to I_{a}(d)\otimes_{\mathbb{F}}N_{b}(e)\). This map is also an isomorphism. Note that, as a chain complex, the differential of \(I_{a}(d)\) does not move the point \(d=\alpha_{2}\cap\beta_{n+1}\). (5.3) can be shown in the same way as (5.2).
By Lemma 4.3, and since the homology of a tensor product of chain complexes over the field \(\mathbb{F}\) is the tensor product of their homologies, we see that \(H(N_{1p})=0\), \(H(N_{1q})=0\), and \(H(N_{1r})=0\). Then we conclude that \(H(N_{1})=0\). \(\square\)
**Lemma 5.3**.: \(N_{k}\) _and \(I_{k}\) are acyclic for \(k=2,\ldots,n\)._
In this case, these complexes are divided into finitely many copies of the special chain complex (2) in Section 4.2.
**Definition 5.4**.: For a state \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\) of \(N_{k}\) or \(I_{k}\) (\(k=2,\ldots,n\)), the **modified Maslov grading**\(M^{\prime}\) for \(\mathbf{x}\) is defined by \(M^{\prime}(\mathbf{x})=M(\mathbf{x}_{11})+M(\mathbf{x}_{22})\).
**Lemma 5.5**.: _The differentials of \(N_{2},\ldots,N_{n}\) and \(I_{2},\ldots,I_{n}\) preserve or drop the modified Maslov grading. Moreover, the modified Maslov grading is preserved if and only if the differential does not change \(\mathbf{x}_{11}\) or \(\mathbf{x}_{22}\)._
Proof.: Let \(r\) be an empty rectangle counted by the differential. Then \(r\) does not change \(\mathbf{x}_{11}\) and \(\mathbf{x}_{22}\) simultaneously. If both \(\mathbf{x}_{11}\) and \(\mathbf{x}_{22}\) are preserved, then the modified Maslov grading is preserved.
Suppose that \(r\) changes \(\mathbf{x}_{11}\) and preserves \(\mathbf{x}_{22}\). Then there are three cases for \(r\):
1. \(r\) changes only \(\mathbf{x}_{11}\). It is clear that the modified Maslov grading drops.
2. \(r\) changes only \(\mathbf{x}_{11}\) and \(\mathbf{x}_{21}\). Suppose that \(r\cap\alpha_{n+1}\neq\emptyset\). Let \(x_{1}\) and \(x_{2}\) be the intersection points at the northeast and northwest corners of \(r\) respectively. Let \(\beta_{i}\) be the vertical circle containing \(x_{2}\), and \(\beta_{i+k}\) the vertical circle containing \(x_{1}\). Then \(\mathcal{J}(\mathbf{x}_{11},\mathbb{O})\) drops by \(k\) because there are \(k\) \(O\)-markings above the segment connecting \(x_{1}\) and \(x_{2}\). On the other hand, \(\mathcal{J}(\mathbf{x}_{11},\mathbf{x}_{11})\) drops by at most \(k-1\) because there are at most \(k-1\) points above the segment. Therefore \(M^{\prime}\) drops. The case that \(r\cap\alpha_{1}\neq\emptyset\) can be shown similarly.
3. \(r\) changes only \(\mathbf{x}_{11}\) and \(\mathbf{x}_{12}\). The same argument shows that \(M^{\prime}\) drops.
The case that \(r\) changes \(\mathbf{x}_{22}\) and preserves \(\mathbf{x}_{11}\) is the same. \(\square\)
Proof of Lemma 5.3.: We first show that \(N_{2},\ldots,N_{n}\) are acyclic. Let \(m=\min\{M^{\prime}(\mathbf{x})|\mathbf{x}\in\mathbf{S}_{k}(g)\cap\mathbf{N}(g)\}\) and \(M=\max\{M^{\prime}(\mathbf{x})|\mathbf{x}\in\mathbf{S}_{k}(g)\cap\mathbf{N}(g)\}\). We obtain the splitting of the vector space \(N_{k}=\bigoplus_{i=m}^{M}N_{k}^{i}\), where \(N_{k}^{i}\) is the span of the grid states whose modified Maslov grading is \(i\).
According to Lemma 5.5, we have a sequence of subcomplexes,
\[N_{k}^{m}\subset(N_{k}^{m}\oplus N_{k}^{m+1})\subset(N_{k}^{m}\oplus N_{k}^{m+ 1}\oplus N_{k}^{m+2})\subset\cdots\subset\bigoplus_{i=m}^{M}N_{k}^{i}=N_{k}.\]
This sequence gives a sequence of chain complexes \(N_{k}^{m},\ldots,N_{k}^{M}\), where \(N_{k}^{i}\) is the quotient complex of \((N_{k}^{m}\oplus\cdots\oplus N_{k}^{i})\) by \((N_{k}^{m}\oplus\cdots\oplus N_{k}^{i-1})\) for \(i=m+1,\ldots,M\).
Lemma 4.1 implies that it is sufficient to see that the homology of \(N_{k}^{i}\) vanishes for \(i=m,\ldots,M\).
For \(i=m,\ldots,M\), let \(\mathbf{S}_{k}(g|M^{\prime}=i)\) be the set of pairs \(\{(\mathbf{x}_{11},\mathbf{x}_{22})\in\mathbf{S}(g_{11},n-k)\times\mathbf{S}(g_{22},n-k)\,|\,M(\mathbf{x}_{11})+M(\mathbf{x}_{22})=i\}\).
We have the decomposition of the vector space
\[N_{k}^{i}=\bigoplus_{(\mathbf{y}_{11},\mathbf{y}_{22})\in\mathbf{S}_{k}(g|M^{ \prime}=i)}N(\mathbf{y}_{11},\mathbf{y}_{22}), \tag{5.4}\]
where \(N(\mathbf{y}_{11},\mathbf{y}_{22})\) is the span of the set of states \(\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\in\mathbf{S}_{k}(g)\cap\mathbf{N} (g)\) with \(\mathbf{x}_{11}=\mathbf{y}_{11}\) and \(\mathbf{x}_{22}=\mathbf{y}_{22}\). Again using Lemma 5.5, we can regard (5.4) as a decomposition of the chain complex. Clearly each summand \(N(\mathbf{y}_{11},\mathbf{y}_{22})\) is isomorphic to \(C_{k}\otimes C_{k}\), where \(C_{k}\) is the special chain complex in Section 4.2. Therefore we have \(H(N(\mathbf{y}_{11},\mathbf{y}_{22}))=0\) and hence \(H(N_{k}^{i})=0\). Lemma 4.1 shows that \(H(N_{k})=0\).
The same argument shows that \(I_{k}\) is decomposed into finitely many copies of \(C_{k-1}\otimes C_{k}\), and \(H(I_{k})=0\) follows. We remark that \(I_{2}\) is decomposed into finitely many copies of \(C_{2}\). \(\square\)
## 6. The proof of Corollary 1.6
For two MOY graphs \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\), let \((f_{1}\vee_{(v_{1},v_{2})}f_{2},\omega_{1}\vee\omega_{2})\) be the MOY graph of Definition 1.5, and let \(f\) be the transverse spatial graph consisting of \(f_{1}\sqcup f_{2}\) and a cut edge from \(v_{1}\) to \(v_{2}\). Take a balanced coloring \(\omega\) for \(f\) naturally determined by \(\omega_{1}\) and \(\omega_{2}\).
Let \(g\) be a \(2n\times 2n\) graph grid diagram for \(f\) and \(g_{\vee}\) a \((2n-1)\times(2n-1)\) graph grid diagram for \(f_{1}\vee_{(v_{1},v_{2})}f_{2}\) as in Figure 12. Let \(g_{ij}\) \((i,j\in\{1,2\})\) be the four \(n\times n\) blocks of \(g\) as in Figure 12. We can assume the following conditions:
* \(g_{11}\) and \(g_{22}\) can be viewed as good graph grid diagrams,
* \(g_{11}\) and the upper left \(n\times n\) block of \(g_{\vee}\) are the same,
* \(g_{22}\) and the lower right \(n\times n\) block of \(g_{\vee}\) are the same,
* \(g_{12}\) has no \(O\)- or \(O^{*}\)-markings and exactly one \(X\)-marking, located in the leftmost square of its bottom row,
* \(g_{21}\) has no markings,
* The upper right and the lower left \((n-1)\times(n-1)\) blocks of \(g_{\vee}\) have no markings.
Since \(g\) represents a transverse spatial graph with a cut edge, Section 5.1 implies that the chain complex \(\widetilde{CF}(g,\omega)\) can be written as \(\operatorname{Cone}(\partial_{I}^{N}\colon\widetilde{I}\to\widetilde{N})\).
The following lemma will be used several times:
**Lemma 6.1**.: _Let \(g\) be a \(2n\times 2n\) graph grid diagram and \(g_{1}\), \(g_{2}\) be two \(n\times n\) graph grid diagram. Suppose that they satisfy the following conditions:_
* \(g_{1}\) _and_ \(g_{2}\) _are good graph grid diagrams,_
* \(g_{1}\) _and the upper left_ \(n\times n\) _block of_ \(g\) _are the same,_
* \(g_{2}\) _and the lower right_ \(n\times n\) _block of_ \(g\) _are the same,_
* _The lower left_ \(n\times n\) _block has no markings._
_Let \(\omega\) be a weight for \(g\) and \(\omega_{i}\) be a weight for \(g_{i}\) given by the restriction of \(\omega\). Then for \(N_{0}\), which is a subcomplex of \(\widetilde{CF}(g,\omega)\) (Section 5.1), there is an isomorphism_
\[N_{0}\cong\widetilde{CF}(g_{1},\omega_{1})\otimes\widetilde{CF}(g_{2},\omega_ {2})\llbracket-1,0\rrbracket.\]
**Proof.** Let \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\) be a state of \(N_{0}\). Since the upper left and lower right \(n\times n\) blocks are good graph grid diagrams, no empty rectangle from \(\mathbf{x}\) changes \(\mathbf{x}_{11}\) and \(\mathbf{x}_{22}\) simultaneously. So the natural correspondence \(\mathbf{x}\mapsto\mathbf{x}_{11}\otimes\mathbf{x}_{22}\) induces an isomorphism \(N_{0}\cong\widetilde{CF}(g_{1},\omega_{1})\otimes\widetilde{CF}(g_{2},\omega_{2})\llbracket-1,0\rrbracket\). A direct computation shows that this correspondence increases the Maslov grading by one and preserves the Alexander grading. \(\square\)
Proof of Corollary 1.6.: Since \(g\) represents a transverse spatial graph with a cut edge, Lemmas 5.1-5.3 and 6.1 imply that \(H(\widetilde{I})\cong H(I_{1})\cong H(N_{0})\llbracket 1,0\rrbracket\cong\widetilde{HF}(g_{1},\omega_{1})\otimes\widetilde{HF}(g_{2},\omega_{2})\).
Let \(\phi\colon\widetilde{I}\to\widetilde{CF}(g_{\vee},\omega_{1}\vee\omega_{2})\) be the linear map defined by \(\phi(\mathbf{x}\cup\{c\})=\mathbf{x}\) for a state \(\mathbf{x}\cup\{c\}\) of \(\widetilde{I}\). The same argument as [12, Lemma 5.2.5] shows that \(\phi\) is an isomorphism of absolute Maslov graded, relative Alexander graded chain complexes. Therefore we have
\[\widetilde{HF}(g_{\vee},\omega_{1}\vee\omega_{2})\cong\widetilde{HF}(g_{1}, \omega_{1})\otimes\widetilde{HF}(g_{2},\omega_{2}).\]
Proposition 2.8 gives
\[\widehat{HF}(g_{\vee},\omega_{1}\vee\omega_{2})\cong\widehat{HF}(g_{1}, \omega_{1})\otimes\widehat{HF}(g_{2},\omega_{2}).\]
Figure 12. Two graph grid diagrams \(g\) and \(g_{\vee}\)
## 7. The proof of the connected sum formula
Let \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\) be two MOY graphs. Suppose that there is a pair \((v_{1},v_{2})\in V(f_{1})\times V(f_{2})\) such that \(\omega_{1}(v_{1})=\omega_{2}(v_{2})\). Let \(f\) be the transverse spatial graph consisting of \(f_{1}\sqcup f_{2}\) and a cut edge from \(v_{1}\) to \(v_{2}\). Take a balanced coloring \(\omega\) for \(f\) naturally determined by \(\omega_{1}\) and \(\omega_{2}\). Then we can take two \(2n\times 2n\) graph grid diagrams \(g\) and \(g_{\#}\) for \(f\) and \(f_{1}\#_{(v_{1},v_{2})}f_{2}\) respectively as in Figure 13. Let \(c\) be the intersection point \(\alpha_{n+1}\cap\beta_{n+1}\) on \(g\) and \(g_{\#}\). We can assume that \(g\) and \(g_{\#}\) coincide except for the \(2\times 2\) block around \(c\).
We decompose the set of states as \(\mathbf{S}(g_{\#})=\mathbf{I}(g_{\#})\sqcup\mathbf{N}(g_{\#})\), where \(\mathbf{I}(g_{\#})\) is the set of states containing \(c\). Using the spans of them, we obtain the splitting of the vector space \(\widetilde{CF}(g_{\#})\cong\widetilde{N}_{\#}\oplus\widetilde{I}_{\#}\). Then we can write the chain complex of \(g_{\#}\) as \(\widetilde{CF}(g_{\#})=\mathrm{Cone}(\partial_{N}^{I}\colon\widetilde{N}_{\# }\to\widetilde{I}_{\#})\), where \(\partial_{N}^{I}\) is the chain map counting empty rectangles from a state of \(\mathbf{N}(g_{\#})\) to a state of \(\mathbf{I}(g_{\#})\).
Since \(g\) represents a transverse spatial graph with a cut edge, Section 5.1 implies that its chain complex is written as \(\widetilde{CF}(g)=\mathrm{Cone}(\partial_{I}^{N}\colon\widetilde{I}\to \widetilde{N})\). Recall that \(\widetilde{I}\) has a subcomplex \(I_{1}\) and \(\widetilde{N}\) has a subcomplex \(N_{0}\) such that \(H(\widetilde{I})\cong H(I_{1})\) and \(H(\widetilde{N})\cong H(N_{0})\).
**Lemma 7.1**.: _As absolute Maslov graded, relative Alexander graded chain complexes, there are natural isomorphisms \(\widetilde{I}\cong\widetilde{I}_{\#}[\![1,\omega_{1}(v_{1})]\!]\) and \(\widetilde{N}\cong\widetilde{N}_{\#}[\![-1,0]\!]\)._
**Proof.** The natural bijections \(\mathbf{I}(g)\to\mathbf{I}(g_{\#})\) and \(\mathbf{N}(g)\to\mathbf{N}(g_{\#})\) give isomorphisms because every empty rectangle \(r\) counted by the differentials of \(\widetilde{I}\), \(\widetilde{N}\), \(\widetilde{I}_{\#}\), and \(\widetilde{N}_{\#}\) is disjoint from the interior of the \(2\times 2\) block around \(c\).
A simple computation shows that the bijection \(\mathbf{I}(g)\to\mathbf{I}(g_{\#})\) drops the Maslov grading by one and the Alexander grading by \(\omega_{1}(v_{1})\), and that the bijection \(\mathbf{N}(g)\to\mathbf{N}(g_{\#})\) increases the Maslov grading by one and preserves the Alexander grading. \(\square\)
**Lemma 7.2**.: _The map \(H(\partial_{\#})\colon H(\widetilde{N}_{\#})\to H(\widetilde{I}_{\#})\) induced on homology by the chain map \(\partial_{\#}=\partial_{N}^{I}\) is trivial._
Figure 13. Graph grid diagrams for \(f\) and \(f_{1}\#_{(v_{1},v_{2})}f_{2}\)
**Proof.** Using the isomorphism \(\widetilde{N}\cong\widetilde{N}_{\#}[\![-1,0]\!]\), let \(N_{\#0}\) be the subcomplex of \(\widetilde{N}_{\#}\) corresponding to \(N_{0}\). Then we have \(H(\widetilde{N}_{\#})\cong H(N_{\#0})\) since \(H(\widetilde{N})\cong H(N_{0})\). To show this lemma, it is sufficient to see that \(\partial_{\#}(N_{\#0})=0\).
Let \(r\) be a rectangle from a state of \(N_{\#0}\) to a state of \(\widetilde{I}_{\#}\). Then \(r\) must contain one of the \(X\)-markings drawn in the right of Figure 13. Therefore we have \(\partial_{\#}(N_{\#0})=0\). \(\square\)
Proof of Theorem 1.7.: Lemma 7.2 implies
\[\widetilde{HF}(f_{1}\#_{(v_{1},v_{2})}f_{2},\omega_{1}\#\omega_{2})\cong H( \widetilde{I}_{\#})\oplus H(\widetilde{N}_{\#}).\]
Using Lemmas 5.1-5.3, 6.1, and 7.1, we have
\[\widetilde{HF}(f_{1}\#_{(v_{1},v_{2})}f_{2},\omega_{1}\#\omega_{2})\cong \widetilde{HF}(g_{1},\omega_{1})\otimes\widetilde{HF}(g_{2},\omega_{2})\otimes W (\omega_{1}(v_{1})).\]
Then Proposition 2.8 gives
\[\widehat{HF}(f_{1}\#_{(v_{1},v_{2})}f_{2},\omega_{1}\#\omega_{2})\cong \widehat{HF}(g_{1},\omega_{1})\otimes\widehat{HF}(g_{2},\omega_{2})\otimes W (\omega_{1}(v_{1})).\]
Proof of Corollary 1.8.: We will regard a knot as a transverse spatial graph consisting of one vertex and one edge. If we take the balanced coloring that sends the only edge to one, our grid homology \(\widehat{HF}\) coincides with the original grid homology \(\widehat{GH}\), and thus with knot Floer homology \(\widehat{HFK}\), up to a shift of the Alexander grading. Since the Alexander gradings of \(\widehat{GH}\) and \(\widehat{HFK}\) depend only on the knot type, it is sufficient to prove the connected sum formula for \(\widehat{HF}\).
From now on we use only balanced colorings that send every edge to one, so we write \(\widehat{HF}(f)\) instead of \(\widehat{HF}(f,\omega)\). For \(i=1,2\), let \(v_{i}\) be the only vertex of \(K_{i}\). Then \(K_{1}\#_{(v_{1},v_{2})}K_{2}\) (Definition 1.5) is a transverse spatial graph consisting of two vertices and two edges. By contracting one of the two edges, we obtain a transverse spatial graph corresponding to \(K_{1}\#K_{2}\).
Theorem 1.7 implies
\[\widehat{HF}(K_{1}\#_{(v_{1},v_{2})}K_{2})\cong\widehat{HF}(K_{1})\otimes \widehat{HF}(K_{2})\otimes W(1),\]
as absolute Maslov graded, relative Alexander graded vector spaces. Contracting one edge of \(K_{1}\#_{(v_{1},v_{2})}K_{2}\) yields a transverse spatial graph corresponding to \(K_{1}\#K_{2}\), so by [6, Theorem 1.9] we have
\[\widehat{HF}(K_{1}\#K_{2})\cong\widehat{HF}(K_{1})\otimes\widehat{HF}(K_{2}),\]
as absolute Maslov graded, relative Alexander graded vector spaces. Then the connected sum formula for \(\widehat{GH}\) and \(\widehat{HFK}\) follows. \(\square\)
## 8. The proof of Theorem 1.9
Let \(g_{1}\) and \(g_{2}\) be two \(n\times n\) graph grid diagrams for \((f_{1},\omega_{1})\) and \((f_{2},\omega_{2})\) respectively. Then there is a natural \(2n\times 2n\) graph grid diagram \(g_{\sqcup}\) for \(f_{1}\sqcup f_{2}\) built from \(g_{1}\) and \(g_{2}\). Take the \((2n+4)\times(2n+4)\) graph grid diagram \(g^{\prime}\) obtained from \(g_{\sqcup}\) by adding two rows and columns as in Figure 14. Let \(\omega_{\sqcup}\) be a weight for \(g_{\sqcup}\) naturally determined by \(\omega_{1}\) and \(\omega_{2}\).
Let \(\omega^{\prime}\) be the weight for \(g^{\prime}\) sending the markings representing the two unknots to one and the other markings to the same integers as \(\omega_{\sqcup}\).
The diagram \(g^{\prime}\) represents the spatial graph consisting of the disjoint union of \(f_{1}\), \(f_{2}\), and two unknots. As in the left of Figure 8, let \(g_{ij}\) (\(i,j\in\{1,2\}\)) be the four \((n+2)\times(n+2)\) blocks obtained by cutting \(g^{\prime}\) along \(\alpha_{1}\), \(\alpha_{n+3}\), \(\beta_{1}\), and \(\beta_{n+3}\).
The following lemma is proved quickly as an extension of [12, Lemma 8.4.2], using the same argument.
**Lemma 8.1**.: _Let \((f,\omega)\) be an MOY graph. Let \((\mathcal{O},\omega_{\mathcal{O}})\) be the MOY graph where \(\mathcal{O}\) is the unknot consisting of one vertex and one edge, and \(\omega_{\mathcal{O}}\) sends the only edge to one. Then there is an isomorphism of absolute Maslov graded, relative Alexander graded \(\mathbb{F}\)-vector spaces_
\[\widehat{HF}(f\sqcup\mathcal{O},\omega\sqcup\omega_{\mathcal{O}})\cong \widehat{HF}(f,\omega)\otimes W(0). \tag{8.1}\]
**Proof.** In [12, Lemmas 8.4.2 and 8.4.6], quasi-isomorphisms and chain homotopy equivalences are constructed for the minus version. Consider the induced maps on our hat version. Then the analogues of the proofs of [12, Lemmas 8.4.2 and 8.4.6] prove (8.1). \(\square\)
Let \(c\) be the intersection point \(\alpha_{n+3}\cap\beta_{n+3}\) on \(g^{\prime}\). Using the same notations as Section 5.1, we can write the chain complex for \(g^{\prime}\) as \(\widetilde{CF}(g^{\prime},\omega^{\prime})=\operatorname{Cone}(\partial^{ \prime}\colon\widetilde{I}\to\widetilde{N})\), where \(\partial^{\prime}\) is the chain map counting empty rectangles from a state of \(\widetilde{I}\) to a state of \(\widetilde{N}\).
We will examine the structure of subcomplexes \(N_{0}\) and \(I_{1}\). Take two points \(d=\alpha_{n+3}\cap\beta_{1}\) and \(e=\alpha_{1}\cap\beta_{n+3}\) on \(g^{\prime}\). We decompose the set of states \(\mathbf{S}(g^{\prime})\) as the disjoint union \(\mathbf{S}(g^{\prime})=II\sqcup IN\sqcup NI\sqcup NN\), where
\[II =\{\mathbf{x}\in\mathbf{S}(g^{\prime})|d,e\in\mathbf{x}\},\] \[IN =\{\mathbf{x}\in\mathbf{S}(g^{\prime})|d\in\mathbf{x},e\notin\mathbf{x}\},\] \[NI =\{\mathbf{x}\in\mathbf{S}(g^{\prime})|d\notin\mathbf{x},e\in\mathbf{x}\},\] \[NN =\{\mathbf{x}\in\mathbf{S}(g^{\prime})|d,e\notin\mathbf{x}\}.\]
Figure 14. The graph grid diagrams \(g_{\sqcup}\) and \(g^{\prime}\).
Because of the markings representing the two unknots, the spans of these sets give a decomposition of the chain complex \(N_{0}\cong\widetilde{II}\oplus\widetilde{IN}\oplus\widetilde{NI}\oplus\widetilde{NN}\). Let \(\mathbb{II}\), \(\mathbb{IN}\), \(\mathbb{NI}\), and \(\mathbb{NN}\) be the subcomplexes of \(I_{1}\) isomorphic to \(\widetilde{II}\), \(\widetilde{IN}\), \(\widetilde{NI}\), and \(\widetilde{NN}\), respectively. Then we have the decomposition of the chain complex \(I_{1}\cong\mathbb{II}\oplus\mathbb{IN}\oplus\mathbb{NI}\oplus\mathbb{NN}\).
**Remark 8.2**.: In this case, the chain map \(\partial^{\prime}|_{I_{1}}\colon I_{1}\to N_{0}\) is not an isomorphism. The isomorphism \(I_{1}\cong N_{0}\) is given by the chain map counting only the empty rectangle whose northeast corner is \(c\).
**Proposition 8.3**.: _The induced map on homology \(H(\partial^{\prime})\colon H(\widetilde{I})\to H(\widetilde{N})\) is trivial._
**Proof.** Let \(\zeta\) be a non-zero element of \(H(\widetilde{I})\). We can assume that \(\zeta\) is represented by the sum of the states of \(I_{1}\) as \(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}\). We will give the element \(\eta\in N_{0}\) such that \(\partial_{N}(\eta)=\partial^{\prime}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})\).
For \(i=1,\ldots,s\), let \(\mathbf{x}_{i}=\begin{pmatrix}\mathbf{x}_{i_{11}}&\mathbf{x}_{i_{12}}\\ \mathbf{x}_{i_{21}}&\mathbf{x}_{i_{22}}\end{pmatrix}\). We remark that \(\mathbf{x}_{i_{12}}=\{c\}=\{\alpha_{n+3}\cap\beta_{n+3}\}\).
Since \(I_{1}\) is decomposed into four chain complexes, it is sufficient to consider the following four cases.
**Case 1.**\(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}\) is a cycle of \(\mathbb{II}\). Then \(\mathbf{x}_{i_{21}}=\{\alpha_{1}\cap\beta_{1}\}\). In this case, we have \(\partial^{\prime}(\mathbf{x}_{i})=0\) because \(\partial^{\prime}\) counts exactly two empty rectangles, whose northeast and southwest corners are \(\mathbf{x}_{i_{12}}\) and \(\mathbf{x}_{i_{21}}\) respectively; these two rectangles connect \(\mathbf{x}_{i}\) to the same state, so their contributions cancel over \(\mathbb{F}\).
**Case 2.**\(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}\) is a cycle of \(\mathbb{IN}\). Then \(\mathbf{x}_{i_{21}}\) is on the vertical circle \(\beta_{1}\) and \(\mathbf{x}_{i_{21}}\neq\{\alpha_{1}\cap\beta_{1}\}\). Let \(\mathbf{x}_{i_{21}}=\{\alpha_{j}\cap\beta_{1}\}\). In this case, \(\mathbf{x}_{i_{22}}\) has a point on \(\alpha_{1}\). Write it as \(\alpha_{1}\cap\beta_{k}\).
Consider a linear map \(F\colon\mathbb{IN}\to N_{1}\) whose value on \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\in\mathbb{IN}\) is given by \(F(\mathbf{x})=\begin{pmatrix}\mathbf{y}_{11}&\mathbf{y}_{12}\\ \mathbf{y}_{21}&\mathbf{y}_{22}\end{pmatrix}\), where
\[\mathbf{y}_{11} =\mathbf{x}_{11},\] \[\mathbf{y}_{12} =\{\alpha_{n+3}\cap\beta_{k}\},\] \[\mathbf{y}_{21} =\{\alpha_{1}\cap\beta_{1}\},\] \[\mathbf{y}_{22} =(\mathbf{x}_{22}\cup\{\alpha_{j}\cap\beta_{n+3}\})\setminus\{ \alpha_{1}\cap\beta_{k}\}.\]
**Claim 1**.: \(\partial_{N}(F(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}))=\partial^{\prime}( \mathbf{x}_{1}+\cdots+\mathbf{x}_{s})\)_._
**Proof.** For each \(i\), \(\partial^{\prime}(\mathbf{x}_{i})\) is the sum of two states of \(\widetilde{N}\). A direct computation shows that \(\partial_{N}(F(\mathbf{x}_{i}))\) contains these two states. Let \(\partial_{N}(F(\mathbf{x}_{i}))=\partial^{\prime}(\mathbf{x}_{i})+\phi(F( \mathbf{x}_{i}))\).
We will show that
\[\phi(F(\mathbf{x}_{i}))=F(\partial_{I}(\mathbf{x}_{i})). \tag{8.2}\]
Consider the empty rectangles counted by \(\partial_{I}(\mathbf{x}_{i})\) and \(\phi(F(\mathbf{x}_{i}))\). Let \(r\in\operatorname{Rect}^{\circ}(\mathbf{x}_{i},\mathbf{z})\) be an empty rectangle counted by \(\partial_{I}(\mathbf{x}_{i})\). We will see that there exists a corresponding empty rectangle \(r^{\prime}\in\operatorname{Rect}^{\circ}(F(\mathbf{x}_{i}),\mathbf{z}^{\prime})\) counted by \(\phi(F(\mathbf{x}_{i}))\). We have three cases (see Figure 15).
* If \(r\) moves the point \(\alpha_{j}\cap\beta_{1}\in\mathbf{x}_{i}\), then \(r\cap g_{11}=\emptyset\) and \(r\cap g_{22}\neq\emptyset\). Let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{j}\cap\beta_{1}\) with \(\alpha_{j}\cap\beta_{n+3}\). Since the two points \(\beta_{1}\cap\mathbf{z}\) and \(\beta_{n+3}\cap\mathbf{z}^{\prime}\) are on the same horizontal circle, we have \(F(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) moves the point \(\alpha_{1}\cap\beta_{k}\in\mathbf{x}_{i}\), then \(r\cap g_{11}=\emptyset\) and \(r\cap g_{22}\neq\emptyset\). Let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{1}\cap\beta_{k}\) with \(\alpha_{n+3}\cap\beta_{k}\). Since the two points \(\alpha_{1}\cap\mathbf{z}\) and \(\alpha_{n+3}\cap\mathbf{z}^{\prime}\) are on the same vertical circle, we have \(F(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) preserves \(\alpha_{j}\cap\beta_{1},\alpha_{1}\cap\beta_{k}\in\mathbf{x}_{i}\), then \(r^{\prime}=r\). Clearly we have \(F(\mathbf{z})=\mathbf{z}^{\prime}\).
Conversely, for each empty rectangle \(r^{\prime}\) of \(\phi(F(\mathbf{x}_{i}))\), there exists an empty rectangle of \(\partial_{I}(\mathbf{x}_{i})\) corresponding to \(r^{\prime}\). Thus (8.2) is proved.
Finally, (8.2) gives
\[\partial_{N}(F(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})) =\partial^{\prime}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})+\phi(F( \mathbf{x}_{1}))+\cdots+\phi(F(\mathbf{x}_{s}))\] \[=\partial^{\prime}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})+F( \partial_{I}(\mathbf{x}_{1}))+\cdots+F(\partial_{I}(\mathbf{x}_{s}))\] \[=\partial^{\prime}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})+F( \partial_{I}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}))\] \[=\partial^{\prime}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}).\]
**Case 3.**\(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}\) is a cycle of \(\mathbb{N}\mathbb{I}\). The same result as Case 2 is proved by switching \(\{\alpha_{i}\}_{i=1}^{2n+4}\) and \(\{\beta_{i}\}_{i=1}^{2n+4}\).
**Case 4.**\(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}\) is a cycle of \(\mathbb{N}\mathbb{N}\). Then the point \(\mathbf{x}_{i_{21}}\) is not on \(\alpha_{1}\) or \(\beta_{1}\). Let \(\mathbf{x}_{i_{21}}=\{\alpha_{j}\cap\beta_{k}\}\). In this case, \(\mathbf{x}_{i_{11}}\) has a point on \(\beta_{1}\) and \(\mathbf{x}_{i_{22}}\) has a point on \(\alpha_{1}\). Let \(\alpha_{l}\cap\beta_{1}\in\mathbf{x}_{i_{11}}\) and \(\alpha_{1}\cap\beta_{m}\in\mathbf{x}_{i_{22}}\).
Figure 15. **Case 2.** The correspondence between empty rectangles from \(\mathbf{x}_{i}\) and from \(F(\mathbf{x}_{i})\).
Consider three linear maps \(G,H,I\colon\mathbb{N}\mathbb{N}\to N_{1}\) whose values on \(\mathbf{x}=\begin{pmatrix}\mathbf{x}_{11}&\mathbf{x}_{12}\\ \mathbf{x}_{21}&\mathbf{x}_{22}\end{pmatrix}\in\mathbb{N}\mathbb{N}\) are given by
\[G(\mathbf{x})=\begin{pmatrix}\mathbf{y}_{11}&\mathbf{y}_{12}\\ \mathbf{y}_{21}&\mathbf{y}_{22}\end{pmatrix},H(\mathbf{x})=\begin{pmatrix}\mathbf{z}_{11}&\mathbf{z}_{12}\\ \mathbf{z}_{21}&\mathbf{z}_{22}\end{pmatrix},I(\mathbf{x})=\begin{pmatrix}\mathbf{w}_{11}&\mathbf{w}_{12}\\ \mathbf{w}_{21}&\mathbf{w}_{22}\end{pmatrix},\]
where
\[\mathbf{y}_{11} =(\mathbf{x}_{11}\cup\{\alpha_{n+3}\cap\beta_{k}\})\setminus\{ \alpha_{l}\cap\beta_{1}\},\] \[\mathbf{y}_{12} =\{\alpha_{l}\cap\beta_{m}\},\] \[\mathbf{y}_{21} =\{\alpha_{1}\cap\beta_{1}\},\] \[\mathbf{y}_{22} =(\mathbf{x}_{22}\cup\{\alpha_{j}\cap\beta_{n+3}\})\setminus\{ \alpha_{1}\cap\beta_{m}\},\]
\[\mathbf{z}_{11} =(\mathbf{x}_{11}\cup\{\alpha_{l}\cap\beta_{1}\})\setminus\{\alpha_ {l}\cap\beta_{1}\},\] \[\mathbf{z}_{12} =\{\alpha_{n+3}\cap\beta_{m}\},\] \[\mathbf{z}_{21} =\{\alpha_{1}\cap\beta_{k}\},\] \[\mathbf{z}_{22} =(\mathbf{x}_{22}\cup\{\alpha_{j}\cap\beta_{n+3}\})\setminus\{ \alpha_{1}\cap\beta_{m}\},\]
\[\mathbf{w}_{11} =(\mathbf{x}_{11}\cup\{\alpha_{n+3}\cap\beta_{k}\})\setminus\{ \alpha_{l}\cap\beta_{1}\},\] \[\mathbf{w}_{12} =\{\alpha_{l}\cap\beta_{n+3}\},\] \[\mathbf{w}_{21} =\{\alpha_{j}\cap\beta_{1}\},\] \[\mathbf{w}_{22} =(\mathbf{x}_{22}\cup\{\alpha_{1}\cap\beta_{m}\})\setminus\{ \alpha_{1}\cap\beta_{m}\}.\]
**Claim 2**.: \(\partial_{N}\big{(}\sum_{i=1}^{s}(G(\mathbf{x}_{i})+H(\mathbf{x}_{i})+I(\mathbf{x}_{i}))\big{)}=\partial^{\prime}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})\)_._
**Proof.** For each \(i\), \(\partial^{\prime}(\mathbf{x}_{i})\) is the sum of three states of \(\widetilde{N}\).
Figure 17. **Case 4.** The rectangles counted by \(\partial_{I}\) and the related rectangles counted by \(\phi\).
For \(G(\mathbf{x}_{i})\), exactly three empty rectangles counted by \(\partial_{N}\) have the point \(\alpha_{1}\cap\beta_{1}\) as their corner. Let \(\partial_{N}(G(\mathbf{x}_{i}))=\psi(G(\mathbf{x}_{i}))+\phi(G(\mathbf{x}_{i}))\), where \(\psi(G(\mathbf{x}_{i}))\) is the sum of the three states obtained by counting these three rectangles and \(\phi(G(\mathbf{x}_{i}))\) is the sum of the others.
For \(H(\mathbf{x}_{i})\), exactly two rectangles counted by \(\partial_{N}\) move two of the four points on \(\alpha_{1}\), \(\alpha_{n+3}\), \(\beta_{1}\), and \(\beta_{n+3}\). Write \(\partial_{N}(H(\mathbf{x}_{i}))=\psi(H(\mathbf{x}_{i}))+\phi(H(\mathbf{x}_{i}))\), where \(\psi(H(\mathbf{x}_{i}))\) is the sum of the two states obtained by counting these two rectangles and \(\phi(H(\mathbf{x}_{i}))\) is the sum of the others.
For \(I(\mathbf{x}_{i})\), write \(\partial_{N}(I(\mathbf{x}_{i}))=\psi(I(\mathbf{x}_{i}))+\phi(I(\mathbf{x}_{i}))\) in the same way as \(H(\mathbf{x}_{i})\).
A direct computation (see Figure 16) shows
\[\partial^{\prime}(\mathbf{x}_{i})=\psi(G(\mathbf{x}_{i}))+\psi(H(\mathbf{x}_{ i}))+\psi(I(\mathbf{x}_{i})). \tag{8.3}\]
Then, we will prove the following equations:
\[\phi(G(\mathbf{x}_{i})) =G(\partial_{I}(\mathbf{x}_{i})), \tag{8.4}\] \[\phi(H(\mathbf{x}_{i})) =H(\partial_{I}(\mathbf{x}_{i})), \tag{8.5}\] \[\phi(I(\mathbf{x}_{i})) =I(\partial_{I}(\mathbf{x}_{i})). \tag{8.6}\]
Consider the empty rectangles counted by \(\partial_{I}(\mathbf{x}_{i})\) and \(\phi(G(\mathbf{x}_{i}))\). Let \(r\) be an empty rectangle counted by \(\partial_{I}(\mathbf{x}_{i})\). Suppose that \(r\in\operatorname{Rect}^{\circ}(\mathbf{x}_{i},\mathbf{z})\). There exists the corresponding empty rectangle \(r^{\prime}\in\operatorname{Rect}^{\circ}(G(\mathbf{x}_{i}),\mathbf{z}^{ \prime})\) counted by \(\phi(G(\mathbf{x}_{i}))\). We have five cases (see Figure 17).
* If \(r\) moves the point \(\alpha_{j}\cap\beta_{k}\in\mathbf{x}_{i}\) and \(r\cap g_{11}\neq\emptyset\), let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{j}\cap\beta_{k}\) with \(\alpha_{n+3}\cap\beta_{k}\). Since the two points \(\alpha_{j}\cap\mathbf{z}\) and \(\alpha_{n+3}\cap\mathbf{z}^{\prime}\) are on the same vertical circle, we have \(G(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) moves the point \(\alpha_{j}\cap\beta_{k}\in\mathbf{x}_{i}\) and \(r\cap g_{22}\neq\emptyset\), let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{j}\cap\beta_{k}\) with \(\alpha_{j}\cap\beta_{n+3}\). Since the two points \(\beta_{k}\cap\mathbf{z}\) and \(\beta_{n+3}\cap\mathbf{z}^{\prime}\) are on the same horizontal circle, we have \(G(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) moves the point \(\alpha_{1}\cap\beta_{m}\in\mathbf{x}_{i}\), then \(r\cap g_{11}=\emptyset\) and \(r\cap g_{22}\neq\emptyset\). Let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{1}\cap\beta_{m}\) with \(\alpha_{l}\cap\beta_{m}\). Since the two points \(\beta_{1}\cap\mathbf{z}\) and \(\beta_{m}\cap\mathbf{z}^{\prime}\) are on the same horizontal circle, we have \(G(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) preserves \(\alpha_{j}\cap\beta_{k},\alpha_{l}\cap\beta_{1},\alpha_{1}\cap\beta_{m}\in\mathbf{x}_{i}\), then let \(r^{\prime}=r\). We have \(G(\mathbf{z})=\mathbf{z}^{\prime}\).
Conversely, for each empty rectangle \(r^{\prime}\) of \(\phi(G(\mathbf{x}_{i}))\), there exists an empty rectangle of \(\partial_{I}(\mathbf{x}_{i})\) corresponding to \(r^{\prime}\). Thus (8.4) is proved.
Consider the empty rectangles counted by \(\partial_{I}(\mathbf{x}_{i})\) and \(\phi(H(\mathbf{x}_{i}))\). Let \(r\) be an empty rectangle appearing in \(\partial_{I}(\mathbf{x}_{i})\). Suppose that \(r\in\operatorname{Rect}^{\circ}(\mathbf{x}_{i},\mathbf{z})\). There exists the corresponding empty rectangle \(r^{\prime}\in\operatorname{Rect}^{\circ}(H(\mathbf{x}_{i}),\mathbf{z}^{ \prime})\) appearing in \(\phi(H(\mathbf{x}_{i}))\) as follows. We have five cases.
* If \(r\) moves the point \(\alpha_{j}\cap\beta_{k}\in\mathbf{x}_{i}\) and \(r\cap g_{11}\neq\emptyset\), let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{j}\cap\beta_{k}\) with \(\alpha_{1}\cap\beta_{k}\). Since the two points \(\alpha_{j}\cap\mathbf{z}\) and \(\alpha_{1}\cap\mathbf{z}^{\prime}\) are on the same vertical circle, we have \(H(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) moves the point \(\alpha_{j}\cap\beta_{k}\in\mathbf{x}_{i}\) and \(r\cap g_{22}\neq\emptyset\), let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{j}\cap\beta_{k}\) with \(\alpha_{j}\cap\beta_{n+3}\). Since the two points \(\beta_{k}\cap\mathbf{z}\) and \(\beta_{n+3}\cap\mathbf{z}^{\prime}\) are on the same horizontal circle, we have \(H(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) moves the point \(\alpha_{1}\cap\beta_{m}\in\mathbf{x}_{i}\), then \(r\cap g_{11}=\emptyset\) and \(r\cap g_{22}\neq\emptyset\). Let \(r^{\prime}\) be the rectangle obtained from \(r\) by replacing the corner point \(\alpha_{1}\cap\beta_{m}\) with \(\alpha_{n+3}\cap\beta_{m}\). Since the two points \(\alpha_{1}\cap\mathbf{z}\) and \(\alpha_{n+3}\cap\mathbf{z}^{\prime}\) are on the same vertical circle, we have \(H(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) moves the point \(\alpha_{l}\cap\beta_{1}\in\mathbf{x}_{i}\), then \(r\cap g_{11}\neq\emptyset\) and \(r\cap g_{22}=\emptyset\). Let \(r^{\prime}=r\) and we have \(H(\mathbf{z})=\mathbf{z}^{\prime}\).
* If \(r\) preserves \(\alpha_{j}\cap\beta_{k},\alpha_{l}\cap\beta_{1},\alpha_{1}\cap\beta_{m}\in\mathbf{x}_{i}\), then \(r^{\prime}=r\). We have \(H(\mathbf{z})=\mathbf{z}^{\prime}\).
Conversely, for each empty rectangle \(r^{\prime}\) of \(\phi(H(\mathbf{x}_{i}))\), there exists an empty rectangle of \(\partial_{I}(\mathbf{x}_{i})\) corresponding to \(r^{\prime}\). Thus (8.5) is proved.
(8.6) can be proved in the same way as (8.5).
Finally, (8.3)-(8.6) give
\[\partial_{N}\Big{(}\sum_{i=1}^{s}(G(\mathbf{x}_{i})+H(\mathbf{x}_{i})+I(\mathbf{x}_{i}))\Big{)}\] \[=\partial^{\prime}(\sum_{i=1}^{s}\mathbf{x}_{i})+\sum_{i=1}^{s}\phi(G(\mathbf{x}_{i}))+\sum_{i=1}^{s}\phi(H(\mathbf{x}_{i}))+\sum_{i=1}^{s}\phi(I(\mathbf{x}_{i}))\] \[=\partial^{\prime}(\sum_{i=1}^{s}\mathbf{x}_{i})+G(\partial_{I}(\sum_{i=1}^{s}\mathbf{x}_{i}))+H(\partial_{I}(\sum_{i=1}^{s}\mathbf{x}_{i}))+I(\partial_{I}(\sum_{i=1}^{s}\mathbf{x}_{i}))\] \[=\partial^{\prime}(\sum_{i=1}^{s}\mathbf{x}_{i}),\]
where the last equality uses that \(\partial_{I}(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s})=0\) since \(\mathbf{x}_{1}+\cdots+\mathbf{x}_{s}\) is a cycle. This proves Claim 2 and completes Case 4, finishing the proof of Proposition 8.3. \(\square\)
**Proof of Theorem 1.9.** Using Lemma 8.1, we have
\[\widehat{HF}(g^{\prime},\omega^{\prime})\cong\widehat{HF}(g_{\sqcup},\omega_{ \sqcup})\otimes W(0)^{\otimes 2}.\]
Proposition 2.8 gives
\[\widetilde{HF}(g^{\prime},\omega^{\prime})\cong\widetilde{HF}(g_{\sqcup},\omega_{\sqcup})\otimes W(0)^{\otimes 2}\otimes W(1)^{\otimes 2}.\]
Let \(g_{1}\sqcup\mathcal{O}\) (respectively \(g_{2}\sqcup\mathcal{O}\)) be an \((n+2)\times(n+2)\) graph grid diagram which is the same as the upper left (respectively lower right) \((n+2)\times(n+2)\) block of \(g^{\prime}\). Let \(\omega_{1}\sqcup\mathcal{O}\) and \(\omega_{2}\sqcup\mathcal{O}\) be weights for \(g_{1}\sqcup\mathcal{O}\) and \(g_{2}\sqcup\mathcal{O}\) respectively naturally induced by \(\omega^{\prime}\). Then Lemma 6.1 and Proposition 8.3 imply that
\[\widetilde{HF}(g^{\prime},\omega^{\prime})\cong\widetilde{HF}(g_{1}\sqcup\mathcal{O},\omega_{1}\sqcup\mathcal{O})\otimes\widetilde{HF}(g_{2}\sqcup\mathcal{O},\omega_{2}\sqcup\mathcal{O})\otimes W(0).\]
Proposition 2.8 and Lemma 8.1 give
\[\widetilde{HF}(g_{i}\sqcup\mathcal{O},\omega_{i}\sqcup\mathcal{O})\cong\widetilde{HF}(g_{i},\omega_{i})\otimes W(0)\otimes W(1),\]
for \(i=1,2\). Combining these equations, we have
\[\widetilde{HF}(g_{\sqcup},\omega_{\sqcup})\otimes W(0)^{\otimes 2}\otimes W(1)^{\otimes 2}\cong\widetilde{HF}(g_{1},\omega_{1})\otimes\widetilde{HF}(g_{2},\omega_{2})\otimes W(0)^{\otimes 3}\otimes W(1)^{\otimes 2}.\]
Since we are working with \(\mathbb{F}\)-vector spaces, the factor \(W(0)^{\otimes 2}\otimes W(1)^{\otimes 2}\) can be cancelled. So we obtain
\[\widetilde{HF}(g_{\sqcup},\omega_{\sqcup})\cong\widetilde{HF}(g_{1},\omega_{1})\otimes\widetilde{HF}(g_{2},\omega_{2})\otimes W(0).\]
Finally, Proposition 2.8 gives
\[\widehat{HF}(g_{\sqcup},\omega_{\sqcup})\cong\widehat{HF}(g_{1},\omega_{1})\otimes\widehat{HF}(g_{2},\omega_{2})\otimes W(0).\]
## 9. An application and examples
Let \(G\) be an abstract graph. \(G\) is **planar** if there is an embedding of \(G\) into \(\mathbb{R}^{2}\). For a planar graph \(G\), a spatial graph \(f(G)\) is **trivial** if \(f(G)\) is ambient isotopic to an embedding of \(G\) into \(\mathbb{R}^{2}\subset\mathbb{R}^{3}\). It is known that a trivial spatial embedding of a planar graph is unique up to ambient isotopy in \(\mathbb{R}^{3}\)[9].
Grid homology gives an obstruction to a spatial handcuff graph being trivial.
**Corollary 9.1**.: _Let \(f\colon G\to S^{3}\) be a spatial embedding of a handcuff graph. Take any balanced coloring \(\omega\) for \(f\). If \(\widehat{HF}(f,\omega)\) is nontrivial, then \(f(G)\) is nontrivial._
**Proof.** A handcuff graph is clearly planar. The trivial embedding of it has a cut edge. Then Theorem 1.4 (2) completes the proof. \(\square\)
**Remark 9.2**.:
* This corollary holds for any planar graph with a cut edge.
* The converse of this corollary does not hold. For example, the grid homology of the nontrivial spatial graph in the center of Figure 1 is also trivial.
Figure 18. Three graph grid diagrams for spatial handcuff graphs.
Here are some computations of grid homology for spatial handcuff graphs. Let \(g_{1}\), \(g_{2}\), and \(g_{3}\) be three graph grid diagrams as in Figure 18. Suppose that their balanced colorings, denoted by \(\omega_{i}\) (\(i=1,2,3\)), take the values \(a\) and \(b\) on the two loops and zero on the edge connecting the two distinct vertices. Direct computations show that
\[\widehat{HF}(g_{1},\omega_{1})= 0,\] \[\widehat{HF}(g_{2},\omega_{2})= \mathbb{F}_{(0,a+b)}\oplus\mathbb{F}_{(-1,a)}\oplus\mathbb{F}_{(-1,b)}\oplus\mathbb{F}_{(-2,0)},\] \[\widehat{HF}(g_{3},\omega_{3})= \mathbb{F}_{(1,a+b)}\oplus\mathbb{F}_{(0,a+b)}\oplus\mathbb{F}_{(0,a)}\oplus\mathbb{F}_{(0,b)}\] \[\oplus\mathbb{F}_{(-1,a)}\oplus\mathbb{F}_{(-1,b)}\oplus\mathbb{F}_{(-1,0)}\oplus\mathbb{F}_{(-2,0)},\]
where the relative Alexander grading is shifted for simplicity.
## 10. Acknowledgement
I would like to express my sincere gratitude to my supervisor, Tetsuya Ito, for helpful discussions and corrections. This work was supported by JST, the establishment of university fellowships towards the creation of science technology innovation, Grant Number JPMJFS2123.
|
2310.19136 | Outlier-robust additive matrix decomposition | We study least-squares trace regression when the parameter is the sum of a
$r$-low-rank matrix and a $s$-sparse matrix and a fraction $\epsilon$ of the
labels is corrupted. For subgaussian distributions and feature-dependent noise,
we highlight three needed design properties, each one derived from a different
process inequality: a "product process inequality", "Chevet's inequality" and a
"multiplier process inequality". These properties handle, simultaneously,
additive decomposition, label contamination and design-noise interaction. They
imply the near-optimality of a tractable estimator with respect to the
effective dimensions $d_{eff,r}$ and $d_{eff,s}$ of the low-rank and sparse
components, $\epsilon$ and the failure probability $\delta$. The near-optimal
rate is $\mathsf{r}(n,d_{eff,r}) + \mathsf{r}(n,d_{eff,s}) +
\sqrt{(1+\log(1/\delta))/n} + \epsilon\log(1/\epsilon)$, where
$\mathsf{r}(n,d_{eff,r})+\mathsf{r}(n,d_{eff,s})$ is the optimal rate in
average with no contamination. Our estimator is adaptive to
$(s,r,\epsilon,\delta)$ and, for fixed absolute constant $c>0$, it attains the
mentioned rate with probability $1-\delta$ uniformly over all
$\delta\ge\exp(-cn)$. Without matrix decomposition, our analysis also entails
optimal bounds for a robust estimator adapted to the noise variance. Our
estimators are based on "sorted" versions of Huber's loss. We present
simulations matching the theory. In particular, it reveals the superiority of
"sorted" Huber's losses over the classical Huber's loss. | Philip Thompson | 2023-10-29T19:51:50Z | http://arxiv.org/abs/2310.19136v2 | # Outlier-robust additive matrix decomposition and robust matrix completion
###### Abstract
We study least-squares trace regression when the parameter is the sum of an \(r\)-low-rank matrix and an \(s\)-sparse matrix and a fraction \(\epsilon\) of the labels is corrupted. For subgaussian distributions, we highlight three design properties. The first, termed \(\mathrm{PP}\), handles additive decomposition and follows from a product process inequality. The second, termed \(\mathrm{IP}\), handles both label contamination and additive decomposition. It follows from Chevet's inequality. The third, termed \(\mathrm{MP}\), handles the interaction between the design and feature-dependent noise. It follows from a multiplier process inequality. Jointly, these properties entail the near-optimality of a tractable estimator with respect to the effective dimensions \(d_{\mathrm{eff},r}\) and \(d_{\mathrm{eff},s}\) for the low-rank and sparse components, \(\epsilon\) and the failure probability \(\delta\). This rate has the form \(\mathsf{r}(n,d_{\mathrm{eff},r})+\mathsf{r}(n,d_{\mathrm{eff},s})+\sqrt{(1+ \log(1/\delta))/n}+\epsilon\log(1/\epsilon)\). Here, \(\mathsf{r}(n,d_{\mathrm{eff},r})+\mathsf{r}(n,d_{\mathrm{eff},s})\) is the optimal uncontaminated rate, independent of \(\delta\). Our estimator is adaptive to \((s,r,\epsilon,\delta)\) and, for fixed absolute constant \(c>0\), it attains the mentioned rate with probability \(1-\delta\) uniformly over all \(\delta\geq\exp(-cn)\). Without matrix decomposition, our analysis also entails optimal bounds for a robust estimator adapted to the noise variance. Finally, we consider robust matrix completion. We highlight a new property for this problem: one can robustly and optimally estimate the incomplete matrix regardless of the _magnitude of the corruption_. Our estimators are based on "sorted" versions of Huber's loss. We present simulations matching the theory. In particular, they reveal the superiority of the "sorted" Huber loss over the classical Huber loss.
## 1 Introduction
Outlier-robust estimation has been a topic studied for many decades since the seminal work by Huber [55]. One of the objectives of the field is to devise estimators which are less sensitive to outlier contamination. The formalization of outlyingness and the construction of robust estimators matured in several directions. One common assumption is that the adversary can only change a fraction \(\epsilon\) of the original sample. For an extensive overview we refer, e.g., to [51, 71, 56] and references therein.
Within a very general framework, the minimax optimality of several robust estimation problems has been recently obtained in a series of elegant works by [17, 18, 49]. The construction, however, is based on Tukey's depth, a hard computational problem in higher dimensions. A recent trend of research, initiated by [35, 63], has focused on the optimality of robust estimators within computationally tractable algorithms. The _oblivious_ model assumes the contamination is independent of the original sample. In the _adversarial_ model, the outliers may depend arbitrarily on the sample. For instance, optimal mean estimators for the adversarial model can now be computed in nearly-linear time [24, 41, 33, 31]. We refer to [36] for an extensive survey.
In the realm of robust linear regression, two broad lines of investigation exist: (1) one in which only the response (label) is contaminated and (2) the more general setting in which the covariates (features) are also corrupted [37, 39]. Model (1), albeit less general, has been considered in many applications and studied in numerous past and recent works [13, 67, 94, 48, 85, 38]. It also has some connection with the problems of robust matrix completion [14, 21, 19, 67, 58] and matrix decomposition [16, 14, 104, 54, 2]. Both models (1)-(2) have been considered assuming adversarial or oblivious contamination. For instance, an interesting property of model (1) with oblivious contamination is the existence of consistent estimators, a property not shared by the adversarial model. See for instance [97, 8, 94, 48, 85].
In this work, we consider a new broader model: robust trace regression with additive matrix decomposition (RTRMD). It corresponds to the least-squares trace regression model in which the parameter is the sum of a low-rank matrix and a sparse matrix and, simultaneously, the labels are adversarially contaminated. As such, we are interested in high-dimensional estimation/prediction: the sample is corrupted by \(o\) outliers and its sample size \(n\) is much smaller than the extrinsic dimension \(p\). We focus on subgaussian distributions and pay attention to the following points:
1. _Adversarial label corruption in high dimensions._ The parameter is a \(d_{1}\times d_{2}\) matrix and \(n\ll p:=d_{1}d_{2}\). The general model RTRMD includes, as particular cases, \(s\)-sparse linear regression [96] and noisy \(r\)-low-rank matrix sensing [78]. One practical appeal of the established theory of high-dimensional estimation is the existence of efficient estimators adapted to \((s,r)\) -- without resorting to Lepski's method. Likewise, we assume no knowledge of \((s,r,o)\). We also present new results for robust matrix completion [58].
2. _Noise heterogeneity._ The majority of the literature on (robust) least-squares regression, within the framework of \(M\)-estimation with decomposable regularizers [80], assumes feature-independent noise. We avoid this assumption and identify new design properties and concentration inequalities needed for this case. See Section 10 for a discussion.
3. _Subgaussian rates and uniform confidence level._ Minimax rates are defined on average or as a function of the failure probability \(\delta\). For instance, the first seminal bounds in sparse linear regression were of the form \(\sqrt{s\log(p/s\delta)/n}\) -- optimal in average. As a function of \(\delta\), the optimal rate is the "subgaussian" rate \(\sqrt{s\log(p/s)/n}+\sqrt{\log(1/\delta)/n}\) -- for which \(\log(1/\delta)\) does not multiply the "effective dimension". A second point is to what extent the estimator _depends on \(\delta\)_ -- a relevant point in practice. Let \(c>0\) be an absolute constant. Without knowing \(\delta\), does exist an estimator that "automatically" attains the optimal subgaussian rate _across all \(\delta\geq\exp(-cn)\)_ with probability at least \(1-\delta\)? We construct robust estimators with this property under the assumptions of items (a)-(b), including a robust estimator adapted to the noise variance.
4. _Matrix decomposition._ This challenging problem was considered in [102, 16, 14, 104, 54, 72] assuming the "incoherence" condition and in [2] assuming the milder "low-spikeness" condition. See also Chapter 7 of the book [53]. These works do not consider label contamination. A general framework is proposed in [2] assuming a specific design property (see their Definition 2). In the applications considered in [2], this property is straightforwardly satisfied: either the design is the identity or the fixed design is invertible. For instance, multi-task learning has an invertible design in high dimensions (one has \(n\geq d_{1}\) albeit \(n\ll d_{1}d_{2}\)). In this regard, trace regression with additive matrix decomposition is fundamentally a different model: with high probability, the random design is singular in the regime \(n\ll d_{1}d_{2}\). To our knowledge, there is currently no optimal statistical theory for RTRMD -- even without label contamination. Under assumptions (a)-(c), this work identifies three new design properties to prove optimality for this problem: \(\mathrm{PP}\), \(\mathrm{IP}\) and \(\mathrm{MP}\) -- these are discussed in detail in Section 9. See also Section 7.3.
The rest of the paper is organized as follows. We start by presenting some useful notation. Section 2 states our general framework, Section 3 presents our estimators and Section 4 exemplifies with concrete models. Sections 5 and 6 state new results for these models. They also present references to later sections concerning technical contributions. In Section 7, we review related literature and compare with our results. In Section 8, we state two new concentration inequalities for the multiplier and product processes. We believe they can be useful beyond the scope of this paper. In Section 9, we state our needed design properties and apply these inequalities to prove them. Section 10 is dedicated to a preliminary discussion on the role of our multiplier process inequality (Theorem 7 in Section 8) in the framework of \(M\)-estimation with decomposable regularizers. Sections 11-12 present the proofs of our main results, namely, Theorems 16 and 22. Theorem 16 is a deterministic bound for RTRMD assuming certain design properties. Theorem 22 is a deterministic bound for robust trace regression adaptive to the noise variance (in the case there is no matrix decomposition). Finally, Sections 13 and 14 finish with simulations and a final discussion. Additional proofs are presented in the Supplemental Material.
_Notation_.: We set \(\mathbb{R}^{p}:=\mathbb{R}^{d_{1}\times d_{2}}\). For a vector \(\mathbf{v}\), \(\|\mathbf{v}\|_{k}\) denotes its \(\ell_{k}\)-norm (\(1\leq k\leq\infty\)) and \(\|\mathbf{v}\|_{0}\) is the number of its nonzero entries. We use similar notation for matrices (considered as vectors). We denote the Frobenius norm by \(\|\cdot\|_{F}\), the nuclear norm by \(\|\cdot\|_{N}\) and the operator norm by \(\|\cdot\|_{\mathrm{op}}\). Given norm \(\mathcal{R}\) and \(\mathbf{V}\in\mathbb{R}^{p}\setminus\{0\}\), \(\Psi_{\mathcal{R}}(\mathbf{V}):=\nicefrac{{\mathcal{R}(\mathbf{V})}}{{\| \mathbf{V}\|_{F}}}\). The inner product in \(\mathbb{R}^{p}\) will be denoted by \(\big{\{}\mathbf{V},\mathbf{W}\big{\}}=\mathrm{tr}(\mathbf{V}^{\top}\mathbf{W})\). For vectors, we use the notation \(\langle\cdot,\cdot\rangle\). If \(a\leq Cb\) for some absolute constant \(C>0\), we write \(a\lesssim b\) or \(a\leq\mathcal{O}(b)\). We write \(a\asymp b\) if \(a\lesssim b\) and \(b\lesssim a\). Given \(\ell\in\mathbb{N}\), \([\ell]:=\{1,\ldots,\ell\}\). The \(\psi_{2}\)-Orlicz norm will be denoted by \(|\cdot|_{\psi_{2}}\). Throughout the paper, given \(\ell\in\mathbb{N}\), \(A^{(\ell)}:=A/\sqrt{\ell}\) whenever \(A\) is a number, vector or function.
We now recall the definition of the Slope norm [11]. Given a nonincreasing positive sequence \(\mathbf{\omega}:=\{\omega_{i}\}_{i\in[n]}\), the Slope norm at a point \(\mathbf{u}\in\mathbb{R}^{n}\) is defined by \(\|\mathbf{u}\|_{\sharp}:=\sum_{i\in[n]}\omega_{i}\mathbf{u}_{i}^{\sharp}\), where \(\mathbf{u}_{1}^{\sharp}\geq\ldots\geq\mathbf{u}_{n}^{\sharp}\) denotes the nonincreasing rearrangement of the absolute coordinates of \(\mathbf{u}\). Throughout this paper, unless otherwise stated, \(\mathbf{\omega}\in\mathbb{R}^{n}\) will be the sequence with coordinates \(\omega_{i}=\sqrt{\log(An/i)}\) for some \(A\geq 2\). Recall that \(\Omega:=\{\sum_{i=1}^{o}\omega_{i}^{2}\}^{1/2}\asymp\{o\log(n/o)\}^{1/2}\)[6]. With some abuse of notation, \(\|\cdot\|_{\sharp}\) will also denote the Slope norm in \(\mathbb{R}^{p}\) with sequence \(\bar{\omega}_{j}=\sqrt{\log(\bar{A}p/j)}\) for some \(\bar{A}\geq 2\).
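For later reference, here is a short check of the stated order of the partial sums of the squared weights; it is a standard computation using Stirling's formula \(\log(o!)=o\log o-o+O(\log o)\):
\[\sum_{i=1}^{o}\omega_{i}^{2}=\sum_{i=1}^{o}\log(An/i)=o\log(An)-\log(o!)=o\log(Aen/o)+O(\log o)\asymp o\log(n/o),\]
where the last step uses that \(A\geq 2\) is a fixed constant and, say, \(o\leq n/2\) (a harmless restriction since the contamination fraction \(o/n\) is assumed small throughout).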
\(\mathbf{X}\in\mathbb{R}^{p}\) will denote a random matrix with distribution \(\Pi\) and covariance operator \(\mathfrak{S}\) -- seen as a vector, \(\mathbf{\Sigma}\) is its covariance matrix. Given \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathbb{R}^{p})^{2}\times\mathbb{R}^{n}\), we define the bilinear form \(\big{\{}\mathbf{V},\mathbf{W}\big{\}}_{\Pi}:=\mathbb{E}[\big{\{}\mathbf{X},\mathbf{V}\big{\}}\big{\{}\mathbf{X},\mathbf{W}\big{\}}]=\big{\{}\mathfrak{S}(\mathbf{V}),\mathbf{W}\big{\}}\) and the pseudo-norms \(\|\mathbf{V}\|_{\Pi}:=\big{\{}\mathbf{V},\mathbf{V}\big{\}}_{\Pi}^{1/2}\), \(\|[\mathbf{V},\mathbf{W},\mathbf{u}]\|_{\Pi}:=\{\|\mathbf{V}\|_{\Pi}^{2}+\|\mathbf{W}\|_{\Pi}^{2}+\|\mathbf{u}\|_{2}^{2}\}^{1/2}\), \(\|[\mathbf{V},\mathbf{W}]\|_{\Pi}:=\|[\mathbf{V},\mathbf{W},\mathbf{0}]\|_{\Pi}\) and \(\|[\mathbf{V},\mathbf{u}]\|_{\Pi}:=\|[\mathbf{V},\mathbf{0},\mathbf{u}]\|_{\Pi}\). Given \(\mathbb{C}\subset\mathbb{R}^{p}\), let \(\mu(\mathbb{C}):=\sup_{\mathbf{V}\in\mathbb{C}}\|\mathbf{V}\|_{F}/\|\mathbf{V}\|_{\Pi}\). Next, we define the unit balls \(\mathbb{B}_{\Pi}:=\{\mathbf{V}\in\mathbb{R}^{p}:\|\mathbf{V}\|_{\Pi}\leq 1\}, \mathbb{B}_{F}:=\{\mathbf{V}\in\mathbb{R}^{p}:\|\mathbf{V}\|_{F}\leq 1\},\)\(\mathbb{B}_{\ell}^{k}:=\{\mathbf{v}\in\mathbb{R}^{k}:\|\mathbf{v}\|_{\ell}\leq 1\}\) and, for given norm \(\mathcal{R}\) in \(\mathbb{R}^{k}\), \(\mathbb{B}_{\mathcal{R}}:=\{\mathbf{v}:\mathcal{R}(\mathbf{v})\leq 1\}\). All the corresponding unit spheres will take the symbol \(\mathbb{S}\). Finally, the _Gaussian width_ of a compact set \(\mathcal{B}\subset\mathbb{R}^{k\times\ell}\) is the quantity \(\mathscr{G}(\mathcal{B}):=\mathbb{E}[\sup_{\mathbf{V}\in\mathcal{B}}\langle \mathbf{V},\mathbf{\Xi}\rangle],\) where \(\mathbf{\Xi}\in\mathbb{R}^{k\times\ell}\) is a random matrix with iid \(\mathcal{N}(0,1)\) entries.
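As a quick numerical illustration of the last definition: by duality between the nuclear and operator norms, \(\sup_{\|\mathbf{V}\|_{N}\leq 1}\langle\mathbf{V},\mathbf{\Xi}\rangle=\|\mathbf{\Xi}\|_{\mathrm{op}}\), so the Gaussian width of the nuclear-norm unit ball equals \(\mathbb{E}\|\mathbf{\Xi}\|_{\mathrm{op}}\lesssim\sqrt{d_{1}}+\sqrt{d_{2}}\). Below is a minimal Monte Carlo sketch of this fact; the function name, Monte Carlo size and example dimensions are ours, purely for illustration.

```python
import numpy as np

def gaussian_width_nuclear_ball(d1, d2, n_mc=2000, seed=0):
    """Monte Carlo estimate of the Gaussian width of {V : ||V||_N <= 1}.
    By norm duality, sup_{||V||_N <= 1} <V, Xi> = ||Xi||_op, so the width is E||Xi||_op."""
    rng = np.random.default_rng(seed)
    vals = [np.linalg.norm(rng.standard_normal((d1, d2)), 2)  # spectral norm of Xi
            for _ in range(n_mc)]
    return float(np.mean(vals))

# Example: the estimate is roughly sqrt(d1) + sqrt(d2) (about 17 for d1=100, d2=50).
# print(gaussian_width_nuclear_ball(100, 50))
```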
## 2 General framework
_Assumption 1_ (Adversarial label contamination).: Let \(\{(y_{i}^{\circ},\mathbf{X}_{i}^{\circ})\}_{i\in[n]}\) be an iid copy of a feature-label pair \((\mathbf{X},y)\in\mathbb{R}^{p}\times\mathbb{R}\). We assume available a sample \(\{(y_{i},\mathbf{X}_{i})\}_{i\in[n]}\) such that \(\mathbf{X}_{i}=\mathbf{X}_{i}^{\circ}\) for all \(i\in[n]\) and at most \(o\) arbitrary outliers modify the label sample \(\{y_{i}^{\circ}\}_{i\in[n]}\).
Letting \(\mathbf{y}=(y_{i})_{i\in[n]}\), our underlying model is
\[\mathbf{y}=\mathbf{f}+\sqrt{n}\mathbf{\theta}^{*}+\mathbf{\xi}, \tag{1}\]
where \(\mathbf{f}:=(f_{i})_{i\in[n]}\) and \(\mathbf{\xi}:=(\xi_{i})_{i\in[n]}\) are, respectively, vectors of iid copies of unknown random variables \(f,\xi\in\mathbb{R}\) and \(\mathbf{\theta}^{*}\in\mathbb{R}^{n}\) is an arbitrary unknown vector with at most \(o\) nonzero coordinates.1 We assume \(\xi\) and \(\mathbf{X}\) are centered and \(\mathbb{E}[\xi\mathbf{X}]=0\). The number \(\epsilon:=o/n\) is referred to as the "contamination fraction".
Footnote 1: Nothing more is assumed on \(\mathbf{\theta}^{*}\). For instance, it can depend arbitrarily on the data set. We are not concerned with \(\mathbf{\theta}^{*}\) itself and rather see it as a nuisance parameter.
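For concreteness, here is a minimal simulation of the contaminated model (1) in the well-specified sparse-vector case (\(f_{i}=\langle\mathbf{x}_{i},\mathbf{b}^{*}\rangle\)). All sizes, magnitudes and distributions below are illustrative choices of ours, not prescriptions from the paper.

```python
import numpy as np

def make_contaminated_sample(n=200, p=1000, s=5, o=10, sigma=1.0, seed=0):
    """Draw (X, y) from y = X b* + sqrt(n) theta* + xi with an s-sparse b*,
    o corrupted labels and Gaussian design and noise (a simple subgaussian choice)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n, p))                    # isotropic design
    b_star = np.zeros(p)
    b_star[:s] = 1.0                                   # s-sparse parameter
    theta_star = np.zeros(n)
    out_idx = rng.choice(n, size=o, replace=False)
    theta_star[out_idx] = rng.uniform(5.0, 10.0, o)    # arbitrary corruptions on o labels
    xi = sigma * rng.standard_normal(n)                # feature-independent noise here
    y = X @ b_star + np.sqrt(n) * theta_star + xi
    return X, y, b_star, theta_star
```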
Define the _design operator_\(\mathfrak{X}:\mathbb{R}^{p}\to\mathbb{R}^{n}\) with components \(\mathfrak{X}_{i}(\mathbf{V}):=\big{\{}\mathbf{X}_{i},\mathbf{V}\big{\}}\). In its general form, our goal is to estimate \(\mathbf{f}\) assuming that, for some \([\mathbf{B},\mathbf{\Gamma}]\in(\mathbb{R}^{p})^{2}\), the average approximation error \((\nicefrac{{1}}{{n}})\|\mathfrak{X}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}\|_{2}^{2}\) is "small". To be precise, assuming \(\epsilon\leq c\) for some constant \(c\in(0,1/2)\), we would like to design an estimator \([\hat{\mathbf{B}},\hat{\mathbf{\Gamma}}]\) satisfying, with probability at least \(1-\delta\), "oracle inequalities" of the form
\[\|\mathfrak{X}^{(n)}(\hat{\mathbf{B}}+\hat{\mathbf{\Gamma}})-\mathbf{f}^{(n)}\|_{2}^ {2}\leq\inf_{[\mathbf{B},\mathbf{\Gamma}]\in\mathcal{F}}\Big{\{}C\|\mathfrak{X}^ {(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}+r_{\mathbf{B}, \mathbf{\Gamma}}(n,d_{\mathrm{eff}},\epsilon,\delta)\Big{\}}\,. \tag{2}\]
In the above, \(C\approx 1\) is a universal constant, \(\mathcal{F}\) is a class of parameters associated to well-known parsimonious properties -- e.g., sparsity or low-rankness -- and \(r_{\mathbf{B},\mathbf{\Gamma}}(n,d_{\mathrm{eff}},\epsilon,\delta)\) is an appropriate rate that depends on the "effective dimension" \(d_{\mathrm{eff}}\) of \(\mathcal{F}\). In Section 5, we specify different classes of interest, each associated to the examples of Section 4.
When the model is "well-specified", there is \([\mathbf{B}^{*},\mathbf{\Gamma}^{*}]\in\mathcal{F}\) such that
\[[\mathbf{B}^{*},\mathbf{\Gamma}^{*}]\in\operatorname*{argmin}_{[\mathbf{B}, \mathbf{\Gamma}]\in(\mathbb{R}^{p})^{2}}\mathbb{E}\left[y-\langle\mathbf{X}, \mathbf{B}+\mathbf{\Gamma}\rangle\right]^{2}. \tag{3}\]
This corresponds to (1) with \(f:=\langle\mathbf{X},\mathbf{B}^{*}+\boldsymbol{\Gamma}^{*}\rangle\). In this case, \(\|\mathfrak{X}^{(n)}(\mathbf{B}^{*}+\boldsymbol{\Gamma}^{*})-\boldsymbol{f}^{(n)}\|_{2}=0\), Assumption 1 and (1) are equivalent and the rate \(r_{\mathbf{B}^{*},\boldsymbol{\Gamma}^{*}}(n,d_{\mathrm{eff}},\epsilon,\delta)\) is the minimax optimal rate for the well-specified model. More broadly, our goal is to obtain, under the assumptions discussed in the introduction, inequalities of the form (2), ensuring optimal statistical guarantees for least-squares trace regression in the presence of either additive matrix decomposition, label contamination or inexact parsimony.
## 3 Our estimators
Our estimators are based on the following new class of losses.
_Definition 1_ (Sorted Huber-type losses).: Define the functions \(\rho_{1}(\boldsymbol{u}):=\|\boldsymbol{u}\|_{2}\) and \(\rho_{2}(\boldsymbol{u}):=\frac{1}{2}\|\boldsymbol{u}\|_{2}^{2}\) over \(\mathbb{R}^{n}\). For \(q\in\{1,2\}\) and \(\tau>0\), let \(\rho_{\tau\boldsymbol{\omega},q}:\mathbb{R}^{n}\to\mathbb{R}_{+}\) be the infimal convolution of \(\rho_{q}\) and \(\tau\|\cdot\|_{\sharp}\), i.e.,
\[\rho_{\tau\boldsymbol{\omega},q}(\boldsymbol{u}):=\min_{\boldsymbol{z}\in \mathbb{R}^{n}}\rho_{q}(\boldsymbol{u}-\boldsymbol{z})+\tau\|\boldsymbol{z} \|_{\sharp}.\]
Finally, define the loss \(\mathcal{L}_{\tau\boldsymbol{\omega},q}(\mathbf{B}):=\rho_{\tau\boldsymbol{\omega},q}\left(\nicefrac{{\boldsymbol{y}-\mathfrak{X}(\mathbf{B})}}{{\sqrt{n}}}\right).\)
These losses are convex. For instance, when \(q=2\), \(\rho_{\tau\boldsymbol{\omega},2}\) is the optimal value of the problem defining the proximal mapping of \(\tau\|\cdot\|_{\sharp}\). When \(\omega_{1}=\ldots=\omega_{n}=1\), separability implies the explicit expression:
\[\mathcal{L}_{\tau\boldsymbol{\omega},2}(\mathbf{B})=\tau^{2}\sum_{i=1}^{n} \Phi\left(\frac{y_{i}-\mathfrak{X}_{i}(\mathbf{B})}{\tau\sqrt{n}}\right),\]
where \(\Phi:\mathbb{R}\to\mathbb{R}\) is Huber's function \(\Phi(t)=\min\{\nicefrac{{1}}{{2}}t^{2},|t|-\nicefrac{{1}}{{2}}\}\). Thus, Huber regression corresponds to \(M\)-estimation with the loss \(\mathcal{L}_{\tau\boldsymbol{\omega},2}\) with _constant_ weighting sequence \(\boldsymbol{\omega}\). In this work we advocate the use of the new loss \(\mathcal{L}_{\tau\boldsymbol{\omega},2}\) with _varying weights_. Throughout this paper, we fix the sequence \(\boldsymbol{\omega}:=(\omega_{i})_{i\in[n]}\) to be \(\omega_{i}:=\sqrt{\log(An/i)}\) for some \(A\geq 2\). It corresponds to a "Sorted" generalization of Huber's loss.
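To make Definition 1 concrete, here is a minimal numpy sketch that evaluates \(\rho_{\tau\boldsymbol{\omega},2}\) through the proximal map of \(\tau\|\cdot\|_{\sharp}\) (the minimizer of the infimal convolution); the prox is computed with a standard stack/pool-adjacent-violators routine for the sorted-\(\ell_{1}\) norm. The function names are ours and this is only a sketch, not the implementation used in the simulations of Section 13.

```python
import numpy as np

def prox_sorted_l1(v, lam):
    """Proximal map of z -> sum_i lam_i * |z|_(i) (sorted-l1 / Slope norm),
    for a nonincreasing, nonnegative weight vector lam; stack/PAV-type routine."""
    v = np.asarray(v, dtype=float)
    sign, u = np.sign(v), np.abs(v)
    order = np.argsort(-u)                 # sort |v| in nonincreasing order
    z = u[order] - lam
    sums, counts = [], []
    for zi in z:                           # pool adjacent blocks so averages decrease
        sums.append(zi); counts.append(1)
        while len(sums) > 1 and sums[-2] / counts[-2] <= sums[-1] / counts[-1]:
            s, c = sums.pop(), counts.pop()
            sums[-1] += s; counts[-1] += c
    x_sorted = np.concatenate([np.full(c, max(s / c, 0.0)) for s, c in zip(sums, counts)])
    x = np.empty_like(u)
    x[order] = x_sorted                    # undo the sorting
    return sign * x

def sorted_huber_loss(u, tau, A=2.0):
    """rho_{tau*omega,2}(u) = min_z 0.5*||u - z||_2^2 + tau*||z||_sharp, with
    omega_i = sqrt(log(A*n/i)); the minimizer is z* = prox_{tau*||.||_sharp}(u)."""
    u = np.asarray(u, dtype=float)
    n = u.size
    omega = np.sqrt(np.log(A * n / np.arange(1, n + 1)))
    z = prox_sorted_l1(u, tau * omega)
    slope_z = float(np.dot(omega, np.sort(np.abs(z))[::-1]))
    return 0.5 * np.sum((u - z) ** 2) + tau * slope_z
```

With constant weights, `sorted_huber_loss` reduces (up to the scaling above) to the usual Huber loss, matching the display preceding this paragraph.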
In high-dimensions, we additionally use regularization norms \((\mathcal{R},\mathcal{S})\). We consider the estimator
\[\begin{array}{rcl}[\hat{\mathbf{B}},\hat{\boldsymbol{\Gamma}}]&\in&\operatorname {argmin}_{[\mathbf{B},\boldsymbol{\Gamma}]\in(\mathbf{R}^{p})^{2}}\mathcal{L }_{\tau\boldsymbol{\omega},2}(\mathbf{B}+\boldsymbol{\Gamma})+\lambda \mathcal{R}(\mathbf{B})+\chi\mathcal{S}(\boldsymbol{\Gamma})\\ &\text{s.t.}&\|\mathbf{B}\|_{\infty}\leq\mathsf{a},\end{array} \tag{4}\]
where \(\lambda,\chi,\tau>0\) and \(\mathsf{a}\in(0,\infty]\) are tuning parameters. If we are not concerned with matrix decomposition, we instead consider, for \(q\in\{1,2\}\), estimators of the form
\[\hat{\mathbf{B}} \in \operatorname{argmin}_{\mathbf{B}\in\mathbb{R}^{p}}\mathcal{L}_{ \tau\boldsymbol{\omega},q}(\mathbf{B})+\lambda\mathcal{R}(\mathbf{B}). \tag{5}\]
By adding an extra variable \(\hat{\boldsymbol{\theta}}\) -- aiming at estimating the nuisance parameter \(\boldsymbol{\theta}^{*}\) -- the solution of (4) can equivalently be found by solving
\[\begin{array}{rcl}\min_{[\mathbf{B},\boldsymbol{\Gamma},\boldsymbol{\theta} ]\in(\mathbb{R}^{p})^{2}\times\mathbb{R}^{n}}\frac{1}{2n}\sum_{i=1}^{n}\left(y_{ i}-(\mathbf{X}_{i},\mathbf{B}+\boldsymbol{\Gamma})+\sqrt{n}\boldsymbol{\theta}_{i} )^{2}+\lambda\mathcal{R}(\mathbf{B})+\chi\mathcal{S}(\boldsymbol{\Gamma})+ \tau\|\boldsymbol{\theta}\|_{\sharp}\\ \text{s.t.}&\|\mathbf{B}\|_{\infty}\leq\mathsf{a}.\end{array} \tag{6}\]
Estimator (6) is a concrete example of regularization with three norms.
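To make the three-norm structure concrete, here is a small cvxpy sketch of (6), instantiated with \(\mathcal{R}\) the nuclear norm and \(\mathcal{S}\) the entrywise \(\ell_{1}\) norm. The Slope penalty is written through the identity \(\|\boldsymbol{\theta}\|_{\sharp}=\sum_{k=1}^{n}(\omega_{k}-\omega_{k+1})\sum_{i\leq k}\boldsymbol{\theta}_{i}^{\sharp}\) (with \(\omega_{n+1}:=0\)), a nonnegative combination of `sum_largest` atoms and hence convex. The function name, array layout and solver defaults are ours; this is a readable sketch for small instances, not an efficient implementation.

```python
import numpy as np
import cvxpy as cp

def fit_matrix_decomposition(Xs, y, lam, chi, tau, a, A=2.0):
    """cvxpy sketch of estimator (6): nuclear norm on B, entrywise l1 on Gamma,
    Slope norm on theta, and the entrywise constraint ||B||_inf <= a.
    Xs is an (n, d1, d2) array of measurement matrices X_i."""
    n, d1, d2 = Xs.shape
    omega = np.sqrt(np.log(A * n / np.arange(1, n + 1)))
    diff = np.append(omega[:-1] - omega[1:], omega[-1])      # omega_k - omega_{k+1} >= 0
    B, Gamma, theta = cp.Variable((d1, d2)), cp.Variable((d1, d2)), cp.Variable(n)
    preds = cp.hstack([cp.sum(cp.multiply(Xs[i], B + Gamma)) for i in range(n)])
    slope = sum(diff[k] * cp.sum_largest(cp.abs(theta), k + 1) for k in range(n))
    obj = (cp.sum_squares(y - preds + np.sqrt(n) * theta) / (2 * n)
           + lam * cp.normNuc(B) + chi * cp.sum(cp.abs(Gamma)) + tau * slope)
    prob = cp.Problem(cp.Minimize(obj), [cp.max(cp.abs(B)) <= a])
    prob.solve()
    return B.value, Gamma.value, theta.value
```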
Similarly, the solution of (5) with \(q=2\) can be found by solving
\[\min_{[\mathbf{B},\boldsymbol{\theta}]\in\mathbb{R}^{p}\times\mathbb{R}^{n}} \frac{1}{2n}\sum_{i=1}^{n}\left(y_{i}-(\mathbf{X}_{i},\mathbf{B})+\sqrt{n} \boldsymbol{\theta}_{i})^{2}+\lambda\mathcal{R}(\mathbf{B})+\tau\|\boldsymbol{ \theta}\|_{\sharp}. \tag{7}\]
The practical appeal of these problems is that they can be computed by alternated convex optimization using standard Lasso, Slope or nuclear norm solvers [1].
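As an illustration of the alternating scheme in the sparse-vector case (\(\mathcal{R}=\|\cdot\|_{1}\)), here is a minimal sketch for (7): the \(\mathbf{b}\)-step is a standard Lasso on the shifted response \(\boldsymbol{y}+\sqrt{n}\boldsymbol{\theta}\), and the \(\boldsymbol{\theta}\)-step has the closed form \(\boldsymbol{\theta}=\operatorname{prox}_{\tau\|\cdot\|_{\sharp}}\big{(}-(\boldsymbol{y}-\mathbf{X}\mathbf{b})/\sqrt{n}\big{)}\). It reuses `prox_sorted_l1` from the sketch after Definition 1; the iteration count and scikit-learn solver defaults are ours, for illustration only.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sorted_huber_lasso(X, y, lam, tau, A=2.0, n_iter=50):
    """Alternating minimization for (7) with R = ||.||_1 (sparse-vector case)."""
    n, p = X.shape
    omega = np.sqrt(np.log(A * n / np.arange(1, n + 1)))
    theta = np.zeros(n)
    lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=10_000)
    for _ in range(n_iter):
        # b-step: (1/2n)||(y + sqrt(n)*theta) - X b||^2 + lam*||b||_1  (standard Lasso)
        lasso.fit(X, y + np.sqrt(n) * theta)
        b = lasso.coef_
        # theta-step: exact minimizer via the Slope prox
        theta = prox_sorted_l1(-(y - X @ b) / np.sqrt(n), tau * omega)
    return b, theta
```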
Finally, the solution of (5) with \(q=1\) can be found by solving
\[\min_{[\mathbf{B},\boldsymbol{\theta}]\in\mathds{R}^{p}\times\mathds{R}^{n}}\left\{ \tfrac{1}{n}\sum_{i=1}^{n}\left(y_{i}-\left\langle\mathds{X}_{i},\mathbf{B} \right\rangle+\sqrt{n}\boldsymbol{\theta}_{i}\right)^{2}\right\}^{\frac{1}{2}}+ \lambda\mathcal{R}(\mathbf{B})+\tau\|\boldsymbol{\theta}\|_{\sharp}. \tag{8}\]
When \(\boldsymbol{\theta}\equiv 0\) and \(\mathcal{R}=\|\cdot\|_{1}\), the above estimator corresponds to the square-root lasso estimator [7]. When not concerned with matrix decomposition, we will show that the robust estimator (8) achieves the same guarantees as (7) -- under the same set of assumptions (a)-(c) in Section 1 -- while being adaptive to the variance of the noise. Of course, the computational cost of (8) is higher.
_Remark 1_ (Huber loss versus "Sorted" Huber loss).: Let us illustrate considering the well-specified model with \(\Gamma^{*}\equiv 0\) and \(q=2\). It is well known that, in linear regression, Huber's estimator can be cast as a least-squares estimator in the augmented variable \([\boldsymbol{b},\boldsymbol{\theta}]\) with the penalization \(\|\boldsymbol{\theta}\|_{1}\)[9, 42]. In this work, we penalize \(\|\boldsymbol{\theta}\|_{\sharp}\) -- i.e., we fit with the new loss in Definition 1. Figure 1 plots the estimation error as a function of \(\epsilon\) in sparse linear regression using synthetic contaminated data. We refer to Section 13 for details. We can see that the "Sorted" Huber loss significantly outperforms Huber regression. In this work, we present near-optimal bounds trying to explain this significant empirical observation. The intuition is that the former loss is tuned according to the magnitude of each individual label outlier. Differently, Huber regression processes all label data points indistinguishably.
## 4 Motivating examples
### Robust least-squares trace regression with additive matrix decomposition
Matrix decomposition is motivated by several applications. We refer to [2] and Chapter 7 of the book [53] for a precise discussion and further references. They consider the general framework: to estimate the pair \([\mathbf{B}^{*},\Gamma^{*}]\in(\mathds{R}^{p})^{2}\) given a noisy linear observation of its sum. Precisely, their model is \(\mathbf{Y}=\mathfrak{X}(\mathbf{B}^{*}+\Gamma^{*})+\boldsymbol{\Xi}\) where the design \(\mathfrak{X}:\mathds{R}^{p}\rightarrow\mathbb{R}^{n\times m}\) takes values in matrices with \(n\) iid rows and \(\boldsymbol{\Xi}\in\mathbb{R}^{n\times m}\) is a noise matrix independent of \(\mathfrak{X}\) with \(n\) centered iid rows. It is further required that \(\mathbf{B}^{*}\) is low-rank, \(\Gamma^{*}\) is a sparse matrix and that a "low-spikeness" condition holds. Three subproblems are analyzed in the framework of [2]: factor analysis, robust covariance estimation and multi-task learning. The first two problems have identity designs (\(\mathfrak{X}=I\)). In multi-task learning, the design is invertible with high probability. Trace regression with additive matrix decomposition corresponds to their model with \(m=1\) and design \(\mathfrak{X}_{i}(\mathbf{B}):=\left\langle\mathds{X}_{i},\mathbf{B}\right\rangle\). It can be alternatively modeled by (3) with \(\mathbf{B}^{*}\) having rank \(r\ll d_{1}\wedge d_{2}\) and \(\Gamma^{*}\) having at most \(s\) nonzero entries (assuming \(\boldsymbol{\theta}^{*}\equiv 0\)). As discussed in item (d) of the introduction, the design property introduced in [2] is not guaranteed to hold for this problem. In fact, we are not aware of any optimality theory for this problem and one of the contributions of this work is to show that a product process inequality is _key_ in establishing it. More broadly, we wish the noise to depend on \(\mathfrak{X}\) arbitrarily.
More broadly, we also consider additive matrix decomposition when labels are contaminated (Assumption 1) -- including miss-specification, see (1)-(3) and the discussion therein. A second contribution of this work is to identify three new design properties, consequences of inequalities for the product and multiplier processes and Chevet's inequality, that entail optimality for this model. This general problem is the main goal of this work -- it motivates the general Theorem 16 in Section 11. Its proof and required conditions permeate our main results.
Figure 1: Huber vs "Sorted" Huber losses in sparse regression: \(\sqrt{\texttt{MSE}}\) versus \(\epsilon\).
To finish, we require the so called "low-spikeness" assumption: there exists \(\mathsf{a}^{*}>0\) such that for any potential parameter \(\mathbf{B}\), \(\|\mathbf{B}\|_{\infty}\leq\nicefrac{{\mathsf{a}^{*}}}{{\sqrt{n}}}\). For this problem, we consider estimator (4) with tuning \(\mathsf{a}:=\mathsf{a}^{*}/\sqrt{n}\), \(\mathcal{R}:=\|\cdot\|_{N}\) and \(\mathcal{S}:=\|\cdot\|_{1}\).
Next, we mention two particular submodels of RTRMD for which our general theory imply novel results.
#### 4.1.1 Robust least-squares sparse regression
Least-squares sparse regression [98, 10] is a particular case of model (3) with \(d_{1}=p\), \(d_{2}=1\), \(\mathbf{x}:=\mathbf{X}\in\mathbb{R}^{p}\), \(\mathbf{b}^{*}:=\mathbf{B}^{*}\in\mathbb{R}^{p}\), \(\mathbf{\Gamma}^{*}\equiv\mathbf{0}\) and \(\|\mathbf{b}^{*}\|_{0}\leq s\ll p\). In case of label contamination (\(0\neq\|\mathbf{\theta}^{*}\|_{0}\leq o\)), we consider estimators (5) with \(q\in\{1,2\}\) and \(\mathcal{R}:=\|\cdot\|_{1}\) or \(\mathcal{R}=\|\cdot\|_{\sharp}\). In fact, we consider the general model (1) allowing miss-specification.
#### 4.1.2 Robust least-squares trace regression
Least-squares trace regression or matrix sensing [88, 87, 89, 78] correspond to model (3) with \(\mathbf{B}^{*}\in\mathbb{R}^{p}\) having rank \(r\ll d_{1}\wedge d_{2}\) and \(\mathbf{\Gamma}^{*}\equiv\mathbf{0}\). With contaminated labels, we consider estimators (5) with \(q\in\{1,2\}\) and \(\mathcal{R}:=\|\cdot\|_{N}\). We consider the general model (1) so to allow miss-specification.
### Robust matrix completion
The matrix completion problem consists in estimating a matrix \(\mathbf{B}^{*}\in\mathds{R}^{p}\) of rank \(r\ll d_{1}\wedge d_{2}\) having sampled an incomplete subset of its entries (for simplicity, we will not consider the miss-specified model). Mathematically, it corresponds to trace regression where \(\mathbf{X}\) has a discrete distribution \(\Pi\) supported on \(\mathcal{X}:=\{\mathbf{e}_{j}\bar{\mathbf{e}}_{k}^{\top}:j\in[d_{1}],k\in[d_{2}]\}\), where \(\{\mathbf{e}_{j}\}\) and \(\{\bar{\mathbf{e}}_{k}\}\) denote the canonical basis. More generally, the measurements can be noisy. In robust matrix completion, a fraction of the clean sampled entries is corrupted by arbitrary outliers. There are two different statistical questions of the same problem: (i) estimation of \(\mathbf{B}^{*}\)_and_ its contaminated entries or (ii) estimating only \(\mathbf{B}^{*}\). Both models (i)-(ii) can be rewritten in terms of our general framework (1)-(3). Still, we show new theoretical and empirical evidence that their statistical guarantees are very different. See Section 6.
In [58], the authors aim at answering question (i). Thus, they parametrize this problem as trace regression with additive matrix decomposition with no label contamination2, that is, model (1) with \(f=\{\mathbf{X},\mathbf{B}^{*}+\mathbf{\Gamma}^{*}\}\), \(0\neq\|\mathbf{\Gamma}^{*}\|_{0}\leq o\) and \(\mathbf{\theta}^{*}=\mathbf{0}\). Indeed, by definition of \(\mathcal{X}\), the sampled entries are \(\mathbf{B}^{*}_{j(i)k(i)}+\mathbf{\Gamma}^{*}_{j(i)k(i)}\), with \(\mathbf{\Gamma}^{*}_{j(i)k(i)}\) being the corruption of the sampled entries. In establishing minimax bounds, they consider, for fixed \(\mathsf{a}>0\), the class of parameters \(\mathbf{B}^{*}+\mathbf{\Gamma}^{*}\in\mathds{R}^{p}\) satisfying \(\mathrm{rank}(\mathbf{B}^{*})\leq r\), \(\|\mathbf{\Gamma}^{*}\|_{0}\leq o\) and \(\|\mathbf{B}^{*}\|_{\infty}\vee\|\mathbf{\Gamma}^{*}\|_{\infty}\leq\mathsf{a}\). They prove their estimator is near optimal _assuming known_\(\mathsf{a}\geq\|\mathbf{B}^{*}\|_{\infty}\vee\|\mathbf{\Gamma}^{*}\|_{\infty}\). Additionally, they show the optimal statistical rate fundamentally depends on the corruption level \(\|\mathbf{\Gamma}^{*}\|_{\infty}\).
Footnote 2: We emphasize that this is a very different model than trace regression with matrix decomposition discussed in Section 4.1 (with \(\mathbf{\theta}^{*}=\mathbf{0}\)). Indeed, in the first model the design is discrete supported on \(\mathcal{X}\). In the second model, the design is \(L\)-subgaussian. It is known that discrete designs are essentially “heavy-tailed” in high dimensions. This fundamental difference impacts the optimal corruption rate \(\omega(\epsilon)\). As we shall see, ignoring log factors, subgaussian designs have \(\omega(\epsilon)\asymp\epsilon\) while discrete designs have \(\omega(\epsilon)\asymp\sqrt{\epsilon}\).
In this work, we are mainly interested in question (ii) -- which is enough for most applications. In this case, noisy robust matrix completion is equivalent to model (1) with \(f=\langle\mathbf{X},\mathbf{B}^{*}\rangle\), \(\mathbf{\Gamma}^{*}=\mathbf{0}\) and \(0\neq\|\mathbf{\theta}^{*}\|_{0}\leq o\). Here, the outlyingness is captured with a different parameter class: \([\mathbf{B}^{*},\mathbf{\theta}^{*}]\in\mathds{R}^{p}\times\mathbb{R}^{n}\) satisfying \(\mathrm{rank}(\mathbf{B}^{*})\leq r\), \(\|\mathbf{\theta}^{*}\|_{0}\leq o\) and \(\|\mathbf{B}^{*}\|_{\infty}\leq\mathsf{a}\). _Assuming known_ \(\mathsf{a}\geq\|\mathbf{B}^{*}\|_{\infty}\), _but unknown_ \(\|\mathbf{\theta}^{*}\|_{\infty}\), we consider estimator (5) with \(q=2\) and penalization \(\mathcal{R}(\mathbf{B}):=\|\mathbf{B}\|_{N}/\sqrt{p}\). For simplicity, we do not consider the case \(q=1\) nor the setting with model miss-specification.
Results for robust sparse/low-rank regression with subgaussian designs
We state in this section optimal guarantees for the estimators (4)-(5) of models of Section 4.1. These are in fact particular "concrete" consequences of our main results: Theorems 7-8, Proposition 2, Theorem 16 and Theorem 22. Being somewhat technical, we state them in later sections, respectively, Sections 8, 9, 11 and 12 -- including a preliminary discussion in Section 10. Theorem 16 is a general deterministic result for estimator (4) and RTRMD. It assumes specific design properties presented in Definition 9 in Section 9. Proposition 2 in this section ensures these properties are satisfied with high probability. Its proof requires new concentration inequalities stated in Theorems 7-8 of Section 8. Theorem 22 in Section 12 is a general deterministic result for estimator (5) with \(q=1\) and the problem of robust trace regression (with no matrix decomposition). This theorem ensures this estimator is near-optimal adaptively to the noise variance. In a nutshell, Theorem 16 is our main deterministic result: the proof of Theorem 22 is largely inspired by the proof of the first. These points are explained in detail later.
Next, we work with distributions satisfying the following assumption.
_Assumption 2_.: \(\mathbf{X}\) is centered and \(L\)-subgaussian for some \(L\geq 1\), that is, \(|\big{\{}\mathbf{X},\mathbf{V}\big{\}}|_{\psi_{2}}\leq L\|\mathbf{V}\|_{\Pi}\) for all \(\mathbf{V}\in\mathds{R}^{p}\). Additionally, \(1\leq|\xi|_{\psi_{2}}=:\sigma<\infty\).
Define
\[\rho_{1}(\mathbf{\Sigma}):=\max_{j\in[p]}\mathbf{\Sigma}_{jj}^{1/2},\quad\text{ and }\quad\rho_{N}(\mathbf{\Sigma}):=\sup_{\|\mathbf{z}\|_{2}=\|\boldsymbol{\nu}\|_{2}=1}\{\mathbb{E}(\mathbf{z}^{\top}\mathbf{X}\boldsymbol{\nu})^{2}\}^{1/2}.\]
Throughout the paper, we let \(\mathbf{\Delta}^{\hat{\mathbf{\theta}}}:=\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}\), \(\mathbf{\Delta}_{\mathbf{B}}:=\widehat{\mathbf{B}}-\mathbf{B}\) and \(\mathbf{\Delta}_{\mathbf{\Gamma}}:=\widehat{\mathbf{\Gamma}}-\mathbf{\Gamma},\) where \([\hat{\mathbf{B}},\hat{\mathbf{\Gamma}},\hat{\mathbf{\theta}}]\) denote estimators and \(\mathbf{B},\mathbf{\Gamma}\in\mathds{R}^{p}\) are given points.
Next, we give guarantees for the estimator (4) and RTRMD. Given \(r,s\in\mathbb{N}\) and \(\mathsf{a}^{*},\mathsf{c}>0\), we define the class
\[\mathcal{F}(r,s,\mathsf{a}^{*},\mathsf{c}):=\left\{[\mathbf{B},\mathbf{\Gamma}]\in(\mathds{R}^{p})^{2}:\begin{array}{c}\mathrm{rank}(\mathbf{B})\leq r,\|\mathbf{\Gamma}\|_{0}\leq s,\\ (\nicefrac{{1}}{{n}})\|\mathfrak{X}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}\|_{2}^{2}\leq\mathsf{c}\sigma^{2},\\ \|\mathbf{B}\|_{\infty}\leq\frac{\mathsf{a}^{*}}{\sqrt{n}}\end{array}\right\}.\]
Let \(\omega(\epsilon):=\epsilon\log(1/\epsilon)\). Given \(C_{1}>0\), define
\[r_{n,r,s,\delta}(\mathsf{a}^{*},C_{1}):=L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt {n}}+L^{2}\left[\sqrt{\frac{r(d_{1}+d_{2})}{n}}+\sqrt{\frac{s\log p}{n}}\right] +\left(1+\frac{1}{C_{1}\sigma L}\right)\mathsf{a}^{*}\sqrt{\frac{s}{n}}.\]
**Theorem 2** (Robust trace regression with additive matrix decomposition).: _Grant Assumptions 1-2, model (1) and assume \(\mathbf{X}\) is isotropic. Then there are absolute constants \(\mathsf{c}\in(0,1)\), \(c_{1}\in(0,1/2)\) and \(C_{0},C_{1}\geq 1\) such that the following holds. Suppose \(C_{1}^{2}L^{4}\epsilon\log(1/\epsilon)\leq c_{1}\). Given \(\mathsf{a}^{*}>0\), let \([\hat{\mathbf{B}},\hat{\mathbf{\Gamma}}]\) be the solution of (4) with \(\mathcal{R}:=\|\cdot\|_{N}\), \(\mathcal{S}:=\|\cdot\|_{1}\) and with tuning parameters \(\mathsf{a}:=\mathsf{a}^{*}/\sqrt{n}\), \(\lambda\asymp\sigma L^{2}\sqrt{\nicefrac{{\log p}}{{n}}}+\nicefrac{{\mathsf{ a}^{*}}}{{\sqrt{n}}}\) and \(\tau\asymp C_{1}L^{2}\sigma/\sqrt{n}\). Assume that_
\[n\gtrsim\left[L^{4}\cdot r(d_{1}+d_{2})\right]\bigvee\left[\left(L^{4}\log p+ (\mathsf{a}^{*})^{2}\right)s\right].\]
_Then, for any \(\delta\in(0,1)\) such that \(\delta\geq\exp\left(-\frac{n}{C_{0}L^{4}}\right),\) on an event of probability \(\geq 1-\delta\), for all \([\mathbf{B},\mathbf{\Gamma}]\in\mathcal{F}(r,s,\mathsf{a}^{*},\mathsf{c})\),_
\[(\nicefrac{{\mathsf{a}}}{{2}})\|\mathbf{\Delta}_{\mathbf{B}}\|_{ N}+(\nicefrac{{\mathsf{a}}}{{2}})\|\mathbf{\Delta}_{\mathbf{\Gamma}}\|_{1}+\| \mathbf{\bar{x}}^{(n)}(\hat{\mathbf{B}}+\hat{\mathbf{\Gamma}})-\mathbf{f}^{(n)} \|_{2}^{2} \leq\left(1+\frac{\mathcal{O}(1)}{C_{1}^{2}L^{2}}\right)\|\mathbf{ \bar{x}}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}\] \[+\mathcal{O}(\sigma^{2})r_{n,r,s,\delta}^{2}(\mathsf{a}^{*},C_{1} )+\mathcal{O}(C_{1}^{2}\sigma^{2}L^{6})\omega^{2}(\epsilon), \tag{9}\]
_and also_
\[\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi} \leq\left(\mathcal{O}(1)+\frac{\mathcal{O}(1)}{C_{1}L}\right)\|\mathbf{\bar {x}}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}+\mathcal{O}(\sigma)r _{n,r,s,\delta}(\mathsf{a}^{*},C_{1})+\mathcal{O}(C_{1}\sigma L^{3})\omega( \epsilon). \tag{10}\]
Theorem 2 states oracle inequalities of the form (2).3 The next proposition ensures the rate in Theorem 2 is optimal up to a log factor.4 Its proof follows from similar arguments in [2] for the noisy matrix decomposition problem with identity design. Define the class
Footnote 3: In case the approximation error is \(\mathcal{O}(\sigma)r_{n,r,s,\delta}(\mathbf{a}^{*},1)\) — and assuming \(L^{2}\epsilon\log(1/\epsilon)\leq c_{1}\) and \(\tau\simeq\sigma L/\sqrt{n}\) — we can obtain slightly improved bounds for the estimator in Theorem 2. Namely, the error coefficients \(\mathcal{O}(1)/C_{1}^{2}L^{2}\) in (9) and \(\mathcal{O}(1)/C_{1}L\) in (10) are improved to \(\mathcal{O}(\sigma^{2})r_{n,r,s,\delta}^{2}(\mathbf{a}^{*},1)\) and \(\mathcal{O}(\sigma)r_{n,r,s,\delta}(\mathbf{a}^{*},1)\) respectively. Additionally, the corruption errors \(\mathcal{O}(C_{1}^{2}\sigma^{2}L^{6})\omega^{2}(\epsilon)\) in (9) and \(\mathcal{O}(C_{1}\sigma L^{3})\omega(\epsilon)\) in (10) are improved to \(\mathcal{O}(\sigma^{2}L^{4})\omega^{2}(\epsilon)\) and \(\mathcal{O}(\sigma L^{2})\omega(\epsilon)\) respectively.
Footnote 4: By the general theory of [17], the corruption term \(\omega(\epsilon)\) is optimal (up to a log term). Thus, it is sufficient to give a lower bound for the non-corrupted model.
\[\mathcal{A}(r,s,\mathbf{a}^{*})=\left\{\mathbf{\Theta}^{*}:=[ \mathbf{B}^{*},\mathbf{\Gamma}^{*}]\in(\mathds{R}^{p})^{2}:\mathrm{rank}( \mathbf{B}^{*})\leq r,\|\mathbf{\Gamma}^{*}\|_{0}\leq s,\|\mathbf{B}^{*}\|_{ \infty}\leq\frac{\mathbf{a}^{*}}{\sqrt{n}}\right\}.\]
For any \(\mathbf{\Theta}^{*}:=[\mathbf{B}^{*},\mathbf{\Gamma}^{*}]\in(\mathds{R}^{p})^ {2}\), let \(\mathbb{P}_{\mathbf{\Theta}^{*}}\) denote the distribution of the data \(\{y_{i},\mathbf{X}_{i}\}_{i\in[n]}\) satisfying (3) with parameters \([\mathbf{B}^{*},\mathbf{\Gamma}^{*}]\). Finally, for some \(\sigma>0\), let
\[\Psi_{n}(r,s,\mathbf{a}^{*}):=\sigma^{2}\left\{\frac{r(d_{1}+d_{2 })}{n}+\frac{s}{n}\log\left(\frac{p-s}{s/2}\right)\right\}+(\mathbf{a}^{*})^{ 2}\frac{s}{n}.\]
**Proposition 1**.: _Assume that \(\{\xi_{i}\}_{i\in[n]}\) are iid \(\mathcal{N}(0,\sigma^{2})\) independent of \(\{\mathbf{X}_{i}\}_{i\in[n]}\), \(\mathbf{X}\) is isotropic and \(\|\mathbf{B}^{*}\|_{\infty}\leq\mathbf{a}^{*}/\sqrt{n}\). Assume \(d_{1},d_{2}\geq 10\), \(\mathbf{a}^{*}\geq 32\sqrt{\log p}\) and \(s<p\)._
_Then there exist universal constants \(c>0\) and \(\beta\in(0,1)\) such that_
\[\inf_{\hat{\mathbf{\Theta}}}\sup_{\mathbf{\Theta}^{*}\in\mathcal{A}(r,s,\mathbf{a}^{*})}\mathbb{P}_{\mathbf{\Theta}^{*}}\left\{\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}_{\mathbf{\Gamma}^{*}}]\|_{\Pi}\geq c\Psi_{n}^{\frac{1}{2}}(r,s,\mathbf{a}^{*})\right\}\geq\beta,\]
_where the infimum is taken over all estimators \(\hat{\mathbf{\Theta}}=[\hat{\mathbf{B}},\hat{\mathbf{\Gamma}}]\) constructed from the data \(\{y_{i},\mathbf{X}_{i}\}_{i\in[n]}\)._
Next, we state guarantees for the estimator (5) with \(q=2\) for three different parameter classes associated to the subproblems of Sections 4.1.1 and 4.1.2. Let \(\|\cdot\|\) denote either \(\|\cdot\|_{0}\) or the rank operation \(\mathrm{rank}(\cdot)\) and \(\mathbb{C}_{\mathbf{B}}\subset\mathds{R}^{p}\) be a cone parametrized by a point \(\mathbf{B}\in\mathds{R}^{p}\). Given \(d,d_{\mathrm{eff}}\in\mathbb{N}\) and \(\rho,\mathsf{c},\mathsf{C}>0\), let \(\mathcal{F}(d,d_{\mathrm{eff}},\rho,\mathsf{C}):=\{\mathbf{B}\in\mathds{R}^{p} :\|\mathbf{B}\|\leq d,\mathsf{C}L^{2}\rho\mu\left(\mathbb{C}_{\mathbf{B}} \right)\sqrt{d_{\mathrm{eff}}/n}\leq 1\}\) and
\[\mathcal{F}(d,d_{\mathrm{eff}},\rho,\mathsf{c},\mathsf{C}):=\left\{ \mathbf{B}\in\mathcal{F}(d,d_{\mathrm{eff}},\rho,\mathsf{C}):(\nicefrac{{1}}{{ n}})\|\mathfrak{X}(\mathbf{B})-\mathbf{f}\|_{2}^{2}\leq\sigma^{2}\right\}. \tag{11}\]
Consider three cases:
1. In sparse regression (Section 4.1.1), take \(\mathcal{R}:=\|\cdot\|_{1}\) and \(\lambda\asymp L^{2}\sigma\rho_{1}(\mathbf{\Sigma})\sqrt{\nicefrac{{\log p}}{{n}}}\). Set \(\|\cdot\|:=\|\cdot\|_{0}\), \(\rho:=\rho_{1}(\mathbf{\Sigma})\), \(d:=s\), \(d_{\mathrm{eff}}:=s\log p\), and \(\mathbb{C}_{\mathbf{b}}:=\mathcal{C}_{\mathbf{b},\|\cdot\|_{1}}(6)\) for given \(\mathbf{b}\) — see Section 11 for the definition of this cone.
2. In sparse regression, take \(\mathcal{R}:=\|\cdot\|_{\sharp}\), the Slope norm in \(\mathbb{R}^{p}\) (a minimal computational sketch of this norm is given right after this list), and \(\lambda\asymp L^{2}\sigma\rho_{1}(\mathbf{\Sigma})/\sqrt{n}\). Set \(\|\cdot\|:=\|\cdot\|_{0}\), \(\rho:=\rho_{1}(\mathbf{\Sigma})\), \(d:=s\), \(d_{\mathrm{eff}}:=s\log(ep/s)\), and the cone \(\mathbb{C}_{\mathbf{b}}:=\overline{\mathcal{C}}_{s}(6)\) for each \(\mathbf{b}\) such that \(\|\mathbf{b}\|_{0}\leq s\) -- see Section 32 in the Supplement for the definition of this cone.
3. In trace regression (Section 4.1.2), take \(\mathcal{R}:=\|\cdot\|_{N}\) and \(\lambda\asymp L^{2}\sigma\rho_{N}(\mathbf{\Sigma})\sqrt{(d_{1}+d_{2})/n}\). Set \(\|\cdot\|:=\mathrm{rank}(\cdot)\), \(\rho:=\rho_{N}(\mathbf{\Sigma})\), \(d:=r\), \(d_{\mathrm{eff}}:=r(d_{1}+d_{2})\) and the cone \(\mathbb{C}_{\mathbf{B}}:=\mathcal{C}_{\mathbf{B},\|\cdot\|_{N}}(6)\) for given \(\mathbf{B}\) -- see Section 11.
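For concreteness, here is a minimal sketch (an illustration only) of evaluating the Slope norm \(\|\cdot\|_{\sharp}\) used in case (ii). The specific weight sequence is not fixed in this excerpt, so the common choice \(\omega_{j}=\sqrt{\log(2p/j)}\) is assumed when no weights are supplied.

```python
import numpy as np

def slope_norm(b, weights=None):
    """Sorted-l1 (Slope) norm: sum_j w_j * |b|_(j), with |b|_(1) >= |b|_(2) >= ...

    The weight sequence is an assumption of this sketch; a common choice,
    w_j = sqrt(log(2p/j)), is used when none is supplied."""
    b = np.asarray(b, dtype=float)
    p = b.size
    if weights is None:
        weights = np.sqrt(np.log(2.0 * p / np.arange(1, p + 1)))
    abs_sorted = np.sort(np.abs(b))[::-1]          # decreasing |b|_(j)
    return float(np.dot(weights, abs_sorted))

# Example: with uniform weights the Slope norm reduces to the l1 norm of case (i).
b = np.array([0.5, -2.0, 0.0, 1.0])
print(slope_norm(b))                                            # weighted sorted-l1 value
print(slope_norm(b, weights=np.ones(b.size)), np.abs(b).sum())  # both equal 3.5
```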
Let us define
\[r_{n,d_{\mathrm{eff}},\delta}(\rho,\mu):=L\frac{1+\sqrt{\log(1/ \delta)}}{\sqrt{n}}+L^{2}\rho\mu\sqrt{\frac{d_{\mathrm{eff}}}{n}}. \tag{12}\]
**Theorem 3** (Robust sparse/low-rank regression).: _Grant Assumptions 1-2 and model (1). Then there are absolute constants \(\mathsf{c},c_{1}\in(0,1/2)\) and \(\mathsf{C},C_{0},C_{1}\geq 1\) such that the following holds. Suppose \(C_{1}^{2}L^{4}\epsilon\log(1/\epsilon)\leq c_{1}\) and take \(\tau\asymp C_{1}L^{2}\sigma/\sqrt{n}\). Let \(\hat{\mathbf{B}}\) be the solution of (5) with \(q=2\) -- with the tuning corresponding to each of the cases (i)-(iii). Consider the three different classes of type (11) for each of the cases (i)-(iii)._
_Then, for any \(\delta\in(0,1)\) such that \(\delta\geq\exp\left(-\frac{n}{C_{0}L^{4}}\right),\) on an event of probability \(\geq\,1-\delta\), for all \(\mathbf{B}\in\mathcal{F}(d,d_{\mathrm{eff}},\rho,\mathsf{c},\mathsf{C})\),_
\[(\nicefrac{{\lambda}}{{2}})\mathcal{R}(\mathbf{\Delta_{B}})+\|\mathfrak{X}^{(n)}(\hat{\mathbf{B}})-\boldsymbol{f}^{(n)}\|_{2}^{2} \leq\left(1+\frac{\mathcal{O}(1)}{C_{1}^{2}L^{2}}\right)\|\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}^{2}\] \[+\mathcal{O}(\sigma^{2})r_{n,d_{\mathrm{eff}},\delta}^{2}(\rho,\mu(\mathbb{C}_{\mathbf{B}}))+\mathcal{O}(C_{1}^{2}\sigma^{2}L^{6})\omega^{2}(\epsilon), \tag{13}\]
_and also_
\[\|\mathbf{\Delta_{B}}\|_{\Pi}\leq\left(\mathcal{O}(1)+\frac{\mathcal{O}(1)}{C _{1}L}\right)\|\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}+ \mathcal{O}(\sigma)r_{n,d_{\mathrm{eff}},\delta}(\rho,\mu(\mathbb{C}_{ \mathbf{B}}))+\mathcal{O}(C_{1}\sigma L^{3})\omega(\epsilon). \tag{14}\]
From the quadratic process inequality, we may replace \(\rho_{1}(\mathbf{\Sigma})\) with \(\hat{\rho}_{1}:=\max_{j\in[p]}\|\mathbb{X}_{\bullet,j}\|_{2}\) and \(\rho_{N}(\mathbf{\Sigma})\) by its empirical counterpart \(\hat{\rho}_{N}\). We now present estimation guarantees for the estimator (5) with \(q=1\). Consider three cases:
* Grant case (i) above but with \(\lambda\asymp L\rho_{1}(\mathbf{\Sigma})\sqrt{\log p/n}\).
* Grant case (ii) above but with \(\lambda\asymp L\rho_{1}(\mathbf{\Sigma})/\sqrt{n}\).
* Grant case (iii) above but with \(\lambda\asymp L\rho_{N}(\mathbf{\Sigma})\sqrt{(d_{1}+d_{2})/n}\).
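As an illustration of the empirical quantities just mentioned, the sketch below computes \(\hat{\rho}_{1}=\max_{j\in[p]}\|\mathbb{X}_{\bullet,j}\|_{2}\) on a simulated design and plugs it into the tunings of cases (i) and (i'). The \(/\sqrt{n}\) normalization of the design and the unit proportionality constants are assumptions made only for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))          # design with rows X_i (assumed convention)
sigma = 1.0                              # noise scale, assumed known for case (i)
L = 1.0                                  # subgaussian constant, illustration only

# Empirical counterpart of rho_1(Sigma): max column norm of the /sqrt(n)-normalized design.
rho1_hat = np.max(np.linalg.norm(X / np.sqrt(n), axis=0))

# Illustrative tunings with proportionality constants set to one (an assumption):
lam_case_i      = L**2 * sigma * rho1_hat * np.sqrt(np.log(p) / n)   # case (i),  q = 2
lam_case_iprime = L * rho1_hat * np.sqrt(np.log(p) / n)              # case (i'), q = 1 (sigma-adaptive)
print(rho1_hat, lam_case_i, lam_case_iprime)
```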
**Theorem 4** (\(\sigma\)-adaptive robust sparse/low-rank regression).: _Grant Assumptions 1-2 and model (1). Then there are absolute constants \(c_{1}\in(0,1/2)\) and \(C,C_{0}\geq 1\) such that the following holds. Suppose \(L^{2}\epsilon\log(1/\epsilon)\leq c_{1}\) and take \(\tau\asymp L/\sqrt{n}\). Let \(\mathbf{\tilde{B}}\) be the solution of (5) with \(q=1\) -- with the tuning corresponding to each of the cases (i')-(iii'). Consider the three different classes of type (11) for each of the cases (i')-(iii')._
_Let \(\delta\in(0,1)\) such that \(\delta\geq\exp\left(-\frac{n}{C_{0}(L^{4}\vee\sigma^{2})}\right).\) Let \(\mu_{*}:=\sup_{\mathbf{B}\in\mathcal{F}(d,d_{\mathrm{eff}},\rho,\mathsf{C})}\mu(\mathbb{C}_{\mathbf{B}})\) and \(\mathsf{c}_{0,n}\asymp r_{n,d_{\mathrm{eff}},\delta}(\rho,\mu_{*})\). Then, on an event of probability \(\geq 1-\delta\), it holds that, for all \(\mathbf{B}\in\mathcal{F}(d,d_{\mathrm{eff}},\rho,\mathsf{c}_{0,n}^{2},\mathsf{C})\),_
\[(\nicefrac{{\lambda}}{{2}})\mathcal{R}(\mathbf{\Delta_{B}})+\| \mathfrak{X}^{(n)}(\mathbf{\tilde{B}})-\boldsymbol{f}^{(n)}\|_{2}^{2} \leq\left(1+\mathcal{O}(\mathsf{c}_{0,n}^{2})\right)\|\mathfrak{X} ^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}^{2}\] \[+\mathcal{O}(\sigma^{2})r_{n,d_{\mathrm{eff}},\delta}^{2}(\rho, \mu_{*})+\mathcal{O}(\sigma^{2}L^{4})\omega^{2}(\epsilon), \tag{15}\]
_and also_
\[\|\mathbf{\Delta_{B}}\|_{\Pi}\leq\left(\mathcal{O}(1)+\mathcal{O}(\mathsf{c}_ {0,n})\right)\|\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}+ \mathcal{O}(\sigma)r_{n,d_{\mathrm{eff}},\delta}(\rho,\mu_{*})+\mathcal{O}( \sigma L^{2})\omega(\epsilon). \tag{16}\]
In all previous theorems, the corresponding estimator is adaptive to \(\delta\) and the confidence level is \(1-\delta\) across any \(\delta\geq\exp(-cn)\) for a fixed constant \(c>0\). The estimators are also adaptive to \((s,r,o)\), and, in Theorems 3-4, adaptive to \(\mu(\mathbb{C}_{\mathbf{B}})\). In case there is no matrix decomposition, estimator (5) with \(q=1\) achieves, up to constants, the same rate as estimator (5) with \(q=2\), with the advantage of being adaptive to \(\sigma.\) On the other hand, for \(q=1\) the approximation error must be \(\mathcal{O}(\sigma)r_{n,d_{\mathrm{eff}},\delta}(\rho,\mu_{*})\) while for \(q=2\) the approximation error is only required to be \(\mathcal{O}(\sigma).\)5
Footnote 5: In case the approximation error is \(\mathcal{O}(\sigma)r_{n,d_{\mathrm{eff}},\delta}(\rho,\mu_{*})\) and assuming \(L^{2}\epsilon\log(1/\epsilon)\leq c_{1}\), the same bounds (15)-(16) are valid for the estimator (5) with \(q=2\) — with the tuning specified in cases (i)-(iii) and \(\tau\asymp\sigma L/\sqrt{n}\).
_Remark 2_ (\(\delta\)-adaptive subgaussian rates with feature-dependent noise).: Within the framework of \(M\)-estimation with decomposable regularizers, the classical oracle inequalities for sparse and trace regression [10, 80] are optimal on average -- but suboptimal in \(\delta\) and assume estimators with knowledge of \(\delta\). [6] was the first work to obtain the subgaussian rate with \(\delta\)-adaptive estimators for sparse linear regression. [34] later generalized the bounds of [6] for the square-root Lasso estimator. Their proof strategy, however, fundamentally assumes the _noise is independent of features_ (see Section 10). A corollary of Theorems 3-4
with no contamination is the previously unknown result that the same estimators in [6, 34] attain the subgaussian rate adaptively to \(\delta\)_assuming nothing on \((\mathbf{X},\xi)\) other than marginal subgaussianity._ We refer to Section 10 and Remark 7 in Section 8 for an explanation of why removing the independence assumption is challenging in the framework of \(M\)-estimation with decomposable regularizers. See also Section 7.3. To finish, we remark that when \(\epsilon=0\) we could prove "sharp" oracle inequalities -- that is, with constant \(C=1\) in (2).
## 6 Results for robust matrix completion
For robust matrix completion (Section 4.2), we follow a distributional assumption similar to the one in [60, 58].
_Assumption 3_.: \(\mathbf{X}\) has a discrete distribution \(\Pi=\{\pi_{k,\ell}\}_{(k,\ell)\in[d_{1}]\times[d_{2}]}\) with support on \(\mathcal{X}\). Let \(d:=d_{1}+d_{2}\), \(m=d_{1}\wedge d_{2}\), \(R_{k}:=\sum_{\ell=1}^{d_{2}}\pi_{k,\ell}\), \(C_{\ell}:=\sum_{k=1}^{d_{1}}\pi_{k,\ell}\) and \(L:=m\cdot\max_{k\in[d_{1}],\ell\in[d_{2}]}\{R_{k},C_{\ell}\}\). Assume \(\sigma:=|\xi|_{\psi_{2}}\geq 1\) and \(\sigma_{\xi}^{2}:=\mathbb{E}[\xi^{2}]\geq 1\).
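A minimal sketch of the quantities in Assumption 3, computed directly from a given sampling distribution \(\Pi\) (the simulated \(\Pi\) below is an assumption of the example; under uniform sampling the definitions give \(L=1\)):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 6, 8
Pi = rng.random((d1, d2))
Pi /= Pi.sum()                      # sampling distribution {pi_kl} over the entries

R = Pi.sum(axis=1)                  # row marginals R_k
C = Pi.sum(axis=0)                  # column marginals C_l
d = d1 + d2
m = min(d1, d2)
L = m * max(R.max(), C.max())       # L as in Assumption 3

# Sanity check: under uniform sampling pi_kl = 1/(d1*d2), R_k = 1/d1, C_l = 1/d2 and L = 1.
print(d, m, L)
```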
Given \(\delta\in(0,1)\) and \(\mu>0\), define
\[r_{n,\delta}(\mu):=\mu\sqrt{\frac{Lpr}{mn}}\sqrt{\log(d/\delta)}+\mu\log^{1/2} \left(\frac{\sigma m}{\sigma_{\xi}}\right)\frac{\sqrt{pr}}{n}\log(d/\delta)+ \frac{\sqrt{1+\log(1/\delta)}}{\sqrt{n}}.\]
**Theorem 5**.: _Grant Assumptions 1 and \(3\) and suppose \(\operatorname{rank}(\mathbf{B}^{*})\leq r\) and \(\|\mathbf{B}^{*}\|_{\infty}\leq\mathsf{a}\) for some known \(\mathsf{a}>0\). Suppose that \(\epsilon<0.5\) and fix \(\delta\in(0,1)\). Let_
\[\tilde{\mathbf{B}} \in \operatorname{argmin}_{\mathbf{B}\in\mathbb{R}^{p}}\mathcal{L}_{ \tau\omega,2}(\mathbf{B})+\lambda\frac{\|\mathbf{B}\|_{N}}{\sqrt{p}}\] \[\text{s.t.} \|\mathbf{B}\|_{\infty}\leq\mathsf{a},\]
_with \(\tau\asymp\frac{\sigma\lor\mathsf{a}}{\sqrt{n}}\) and_
\[\lambda\asymp\left[(\sigma\lor\mathsf{a})\sqrt{\frac{Lp}{mn}}\sqrt{\log(d/\delta)}\right]\bigvee\left[(\sigma\lor\mathsf{a})\log^{1/2}\left(\frac{\sigma m}{\sigma_{\xi}}\right)\frac{\sqrt{p}}{n}\log(d/\delta)\right].\]
_Let \(\mathbb{C}_{\mathbf{B}^{*}}:=\mathcal{C}_{\mathbf{B}^{*},\|\cdot\|_{N}}(4)\) be the cone defined in Section 11 and \(\mu(\mathbf{B}^{*}):=\mu(\mathbb{C}_{\mathbf{B}^{*}})/\sqrt{p}\). Then, with probability at least \(1-\delta\),_
\[\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi} \lesssim(\mathsf{a}\lor\sigma)r_{n,\delta}(\mu(\mathbf{B}^{*}))+(\mathsf{a}\lor\sigma)\sqrt{\epsilon\log(1/\epsilon)},\] \[\lambda\frac{\|\mathbf{\Delta}_{\mathbf{B}}\|_{N}}{\sqrt{p}}+\tau\|\mathbf{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp} \lesssim(\mathsf{a}\lor\sigma)^{2}r_{n,\delta}^{2}(\mu(\mathbf{B}^{*}))+(\mathsf{a}\lor\sigma)^{2}\epsilon\log(1/\epsilon).\]
For simplicity, let us assume \(d_{1}=d_{2}=d\) and \(L=1\) (the case of well-conditioned sampling) and ignore logs. By Theorem 5, the estimation error is \(\mathsf{a}\sqrt{(dr/n)\log(1/\delta)}+\mathsf{a}\sqrt{\log(1/\delta)/n}+\mathsf{a}\sqrt{\epsilon\log(1/\epsilon)}\). It is possible to show that this bound is optimal up to log factors (using a lower bound argument similar to the one in [58]). In Theorem 5, \(\mu(\mathbf{B}^{*})\) is the restricted condition number measuring how much the sampling distribution differs from the uniform distribution.
## 7 Related work and contributions
### Robust sparse least-squares regression
This model has been the subject of numerous works. From a methodological point of view, the \(\ell_{1}\)-penalized Huber's estimator has been considered in [90, 91, 66]. Empirical evaluation for the choice of tuning parameters is comprehensively studied in these papers. For the adversarial model with Gaussian data, fast rates for this estimator have been obtained in [13, 64, 29, 28, 82]. The near-optimal rate on average was shown only recently in [30]. They obtain the rate \(\sqrt{s\log(p/\delta)/n}+\epsilon\log(n/\delta)\), with a breakdown point
\(\epsilon\leq c/\log n\) for a constant \(c>0\). Under different conditions, the same estimator was later shown in [27] to attain the subgaussian rate with breakdown point \(\epsilon\leq c\) without the extra factor \(\log(1/\epsilon)\) and allowing feature-dependent heavy-tailed noise. These two works are the most closely related to our particular Theorem 3 for robust sparse linear regression.
_Remark 3_ (Subgaussian rates and \(\delta\)-adaptive estimators).: We start by mentioning that, unlike ours, the analysis of regularized Huber regression in [27, 30] assumes knowledge of \(\delta\) and [30] does not achieve the \(\delta\)-subgaussian rate. We refer to the next Section 7.3 for a discussion on why this point is challenging.
_Remark 4_ (Comparison with [27]).: The result in [27] is interesting: the subgaussian rate is attainable with the standard Huber loss -- without the extra \(\log(1/\epsilon)\). It also allows feature-dependent heavy-tailed noise -- assuming some extra mild conditions. Still, we argue that, in the subgaussian setting, this result is much weaker than our Theorem 3 for a sparse parameter. We give two main reasons:
1. Even though not explicitly stated, the proof in [27] is specific to the _oblivious model_ -- a much weaker model than the adversarial one considered in this work. If \(\mathcal{O}\) denotes the index set of outliers, they fundamentally use that,6 for all \([\mathbf{b},\mathbf{b}^{\prime}]\), \[\left|\frac{1}{o}\sum_{i\in\mathcal{O}}(|\langle\mathbf{x}_{i},\mathbf{b}-\mathbf{b}^{ \prime}\rangle|-\mathbb{E}[|\langle\mathbf{x},\mathbf{b}-\mathbf{b}^{\prime}\rangle|]) \right|_{\psi_{2}}\lesssim\frac{\|\mathbf{b}-\mathbf{b}^{\prime}\|_{\Pi}}{\sqrt{o}}.\] This follows from Hoeffding's inequality, but only if \(\{\mathbf{x}_{i}\}_{i\in\mathcal{O}}\) is iid for _fixed_\(\mathcal{O}\). In the adversarial model, \(\mathcal{O}\) is an arbitrary random variable dependent on the data set. Footnote 6: See equation (42) in page 3595 in [27].
2. [27] attains the optimal rate for \(\ell_{1}\)-regularized Huber regression with penalization \[\lambda\asymp\sigma\left(\sqrt{\frac{\log p}{n}}\bigvee\mu(\mathbb{C}_{\mathbf{b }^{*}})\sqrt{\frac{\log(1/\delta)}{sn}}\bigvee\mu(\mathbb{C}_{\mathbf{b}^{*}}) \frac{o}{\sqrt{sn}}\right),\] (18) and, in case the noise is subgaussian, \(\tau\asymp\sigma\). Our tuning \((\lambda,\tau)\) follows the very different scaling \(\lambda\asymp\sigma\sqrt{\log p/n},\tau\asymp\sigma/\sqrt{n}\). One notable difference is that our tuning is adaptive to \((s,o,\mu(\mathbb{C}_{\mathbf{b}^{*}}),\delta)\), without resorting to Lepski's method. If we focus on \((s,o,\mu(\mathbb{C}_{\mathbf{b}^{*}}),\delta)\)-adaptive estimators, our guarantees and simulation results are significantly in favor of sorted Huber-type losses instead of the standard Huber loss.
We argue that the different scaling (18) is a consequence of a different proof method. The proof in [27] is based on _"localization" arguments_ for regularized empirical risk minimization (ERM) [73, 65].7 Being more precise, [27] is able to show that regularized ERM with convex Lipschitz losses [3, 26] is robust against contaminated labels8_for the restricted oblivious model_. In this approach, one uses the fact that the loss based on Huber's function satisfies the so called "Bernstein's condition" -- under additional mild noise conditions.9 Our proof method does not follow the "localization" literature but rather the literature on _\(M\)-estimation with decomposable regularizers_[10, 80, 6]. In this approach, we do not use Lipschitz continuity of Huber-type losses. In fact, our analysis uses a loss based on the square cost and defined over an augmented variable -- see (6)-(7).
Footnote 7: This approach has a vast history. State-of-the-art results were given in the seminal paper [73] — introducing the “small-ball method” for ERM with the square loss. With it, proper localized control of the quadratic and multiplier processes entails optimal rates. [65] generalized this method to analyze regularized ERM. A key tool in this work is the so-called “sparsity equation”. This elegant method entails, in particular, optimality of Lasso, Slope and trace regression.
Footnote 8: The works [3, 26] also use the “sparsity equation” but, unlike [73, 65], do not use explicit concentration of the quadratic/multiplier processes. For convex Lipschitz losses satisfying the “Bernstein condition”, localized concentration of the empirical process suffices.
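To make the augmented-variable viewpoint above concrete, the sketch below verifies numerically the classical scalar identity \(\min_{\theta}\{\tfrac{1}{2}(r-\theta)^{2}+\tau|\theta|\}=\mathrm{Huber}_{\tau}(r)\), which underlies square-cost formulations over an augmented corruption variable. The exact displays (6)-(7) are not reproduced in this excerpt, so this is an illustration of the principle rather than of the estimator itself.

```python
import numpy as np

def huber(r, tau):
    """Standard Huber function: r^2/2 if |r| <= tau, else tau*|r| - tau^2/2."""
    r = np.abs(r)
    return np.where(r <= tau, 0.5 * r**2, tau * r - 0.5 * tau**2)

def augmented_min(r, tau):
    """min over theta of 0.5*(r - theta)^2 + tau*|theta|,
    attained at theta = soft-threshold(r, tau)."""
    theta = np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)
    return 0.5 * (r - theta)**2 + tau * np.abs(theta)

r = np.linspace(-5, 5, 1001)
tau = 1.3
assert np.allclose(huber(r, tau), augmented_min(r, tau))
# Replacing the l1 penalty on the augmented (corruption) variable by a sorted-l1
# penalty gives the "sorted Huber-type" losses discussed in the text.
```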
_Remark 5_ (Comparison with [30]).: Granting reasons (i)-(ii) in Remark 4, [30] is the closest work to ours -- indeed, they consider the adversarial model and \((s,o,\mu(\mathbb{C}_{\mathbf{b}^{*}}))\)-adaptive estimators. Our most noted
improvements in terms of rate guarantees are three-fold. First, we show that sorted Huber-type regression has improved bounds compared to Huber regression: the corruption error \(\epsilon\log n\) and breakdown point \(c/\log n\) of the latter are replaced by \(\epsilon\log(1/\epsilon)\) and breakdown point \(c\). We give numerical evidence of the superiority of sorted Huber regression compared to standard Huber regression (see Figure 1). Adaptations of Huber regression have been studied before. Still, the changes usually involve changing only the tuning hyper-parameters. To our knowledge, our theoretical and empirical results for sorted Huber-type losses were previously unknown. Second, unlike the results in [30], our rates for sparse regression use \(\delta\)-adaptive estimators attaining the optimal \(\delta\)-subgaussian rate under weaker assumptions -- namely, subgaussian feature-dependent noise. For a justification on why this improvement is non trivial, we recall Remarks 2-3 and refer to Section 7.3 and pointers therein. Finally, our third contribution is to give optimal guarantees for robust sparse regression with \(\sigma\)-adaptive estimators under the same set of assumptions. In Section 12, we explain that the proof of Theorem 4 is an adaptation of the proof of Theorem 3. We also present oracle inequalities for corrupted misspecified models.
To finish, we remark that our improvements on [30] are substantial in terms of proof techniques and structural properties. In fact, our main focus is the broader model RTRMD. We refer to the next Section 7.3 and sections therein for a detailed discussion on this point.
_Remark 6_ (Further references in robust sparse regression).: We complement our review by mentioning some literature analyzing different contamination models, e.g. dense bounded noise [101, 67, 81, 46, 1]. This setting is also studied in [57] with the LAD-estimator [100]. Alternatively, refined analyses of iterative thresholding methods were considered in [9, 8, 94, 77]. They obtain sharp breakdown points and consistency bounds for the oblivious model. Works on sparse linear regression with covariate contamination were considered early on by [20] and, more recently, in [4], albeit with worse rates and breakdown points compared to the response contamination model. Works by [68, 69] have also studied the optimality of sparse linear regression in models with error-in-variables and missing-data covariates. Although out of scope, we mention for completeness that tractable algorithms for linear regression with covariate contamination have been intensively investigated in the low-dimensional scaling (\(n\geq p\)), with initial works by [37, 39, 86] and more recent ones in [32, 25, 84, 83].
### Robust trace regression
The first bounds on trace regression (with no corruption) were presented, e.g., in [78, 89, 103, 80] following the framework of \(M\)-estimation with decomposable regularizers.10 Trace regression with label-feature contamination is studied in detail in [49]. This paper is based on Tukey's depth, which is computationally hard in high dimensions. The recent papers [44, 92] focus on models with heavy-tailed noise. The interesting paper [92] considers label contamination as well, but follows a methodology based on non-convex optimization, presenting bounds for a gradient-descent method. As such, it is hard to compare their results with ours, as they follow a different set of assumptions. For instance, they assume Huber's contamination model -- which is more restrictive than the adversarial model considered in this work. Other minor differences include the assumptions that the covariance matrix is invertible, the noise has positive density around the origin and that some conditions are satisfied to ensure good initialization.
Footnote 10: The complementary works [65, 3, 26] also study trace regression with different techniques. See Remark 4.
Robust trace regression is not considered in [27, 30]. Still, their methods could be applied to this problem. In that case, the exact same comments in Remarks 3-5 would still apply -- with \((s,\log p)\) replaced by \((r,d_{1}+d_{2})\). As mentioned in Section 7.1, our improvements in guarantees and assumptions are in fact consequences of new proof techniques and structural design properties motivated by the study of the broader model RTRMD. We discuss this point in the next section.
### Robust trace regression with additive matrix decomposition
The statistical theory for this model is the main contribution of this work. In fact, the improvements discussed in Sections 7.1-7.2 are particular instances of the new techniques and definitions we develop for this general model. As mentioned in the introduction, additive matrix decomposition was already
extensively studied in [102, 16, 14, 104, 54, 72, 2]. Still, there is currently no optimality theory for additive matrix decomposition in trace regression -- nor its extension with label contamination. For this problem, we assume the low-spikeness condition, already considered in [2]. In this preliminary section, we briefly comment on three new design properties we develop in order to establish an optimality theory for RTRMD. A detailed discussion is referred to later sections.
When the parameter is the sum of a low-rank and a sparse matrix, the random design in trace regression is singular with high probability. We identify a concentration inequality for the product process (see our Theorem 8) as the sufficient property to prove restricted strong convexity for this model. In Definition 9 in Section 9, we denote this property by \(\mathrm{PP}\). To the best of our knowledge, this is a novel application of product processes in high-dimensional statistics. This technical property is "sharp", in the sense that replacing it with other naive methods, e.g. dual-norm inequalities, fails to entail the optimal rate.
Unfortunately, \(\mathrm{PP}\) is no longer sufficient in the case of label contamination. To handle it, we take inspiration from [30]. This work identified one design property sufficient to handle label contamination when the parameter is sparse. This property, termed "incoherence principle" (\(\mathrm{IP}\)), is derived in [30] from Chevet's inequality. [30] is not concerned with matrix decomposition, and as such, \(\mathrm{PP}\) is unnecessary. We show that a _generalized_ version of \(\mathrm{IP}\) (see Definition 9 in Section 9) and the new property \(\mathrm{PP}\) are _jointly_ sufficient properties to ensure restricted strong convexity for RTRMD and to optimally control the "design-corruption interaction". In [30], the particular version of \(\mathrm{IP}\) is defined over the pair \([\mathbf{v},\mathbf{u}]\in\mathbb{R}^{p}\times\mathbb{R}^{n}\) and norms \((\mathcal{R},\|\cdot\|_{\sharp})\) -- \(\mathbf{v}\) associated with the parameter vector and \(\mathbf{u}\) with the corruption vector. For RTRMD, we need a generalized version of \(\mathrm{IP}\) over the triple \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathds{R}^{p})^{2}\times\mathbb{R}^{n}\) and norms \((\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp})\): a coordinate for the low-rank component, a second coordinate for the sparse component and a coordinate for the corruption vector. Again, \(\mathrm{PP}\) and \(\mathrm{IP}\) are "sharp": mere use of dual-norm inequalities fails to achieve optimality. See Remarks 8-9 in Section 9 and Remarks 10-12 in Section 11.
The third design property we use enables us to achieve the optimal \(\delta\)-subgaussian rate with \(\delta\)-adaptive estimators, even when the noise is feature-dependent. This property, denoted by \(\mathrm{MP}\) in Definition 9 in Section 9, follows from a new concentration inequality for the multiplier process (see our Theorem 7 in Section 8). In the framework of \(M\)-estimation with decomposable regularizers, \(\mathrm{MP}\) is a classical property used to control the "design-noise interaction" [10, 2, 6, 30]. The typical way to prove this property is via the dual-norm inequality [10, 2, 30]. Unfortunately, this approach fails to entail the subgaussian rate and \(\delta\)-adaptivity. For sparse parameters and no contamination, [6] was the first to present a suitable version of \(\mathrm{MP}\) entailing the subgaussian rate with \(\delta\)-adaptive estimators. Nevertheless, in Section 10 we explain in detail why the techniques in [6] fundamentally do not imply \(\mathrm{MP}\) in case the _noise depends on the features_. See also Remark 7 in Section 8. To conclude, our version of \(\mathrm{MP}\) in Definition 9 in Section 9 is more general than that of [6] so as to handle feature-dependent noise, additive matrix decomposition and label contamination: \(\mathrm{MP}\) for RTRMD is defined over the triple \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathds{R}^{p})^{2}\times\mathbb{R}^{n}\) and norms \((\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp})\).
To finish, as discussed in Section 7.1, the relevance of sorted Huber-type losses also applies to RTRMD. Using the standard Huber's loss, we would have rate \(\omega(\epsilon)\asymp\epsilon\log n\) instead of \(\omega(\epsilon)\asymp\epsilon\log(1/\epsilon)\), breakdown point \(\epsilon\leq c/\log n\) instead of \(c\), for some constant \(c\in(0,1/2)\), and similar empirical performance as in Figure 1.
### Robust matrix completion
The literature on matrix completion with nuclear norm relaxation [45, 93] is extensive. For instance, bounds for exact recovery were first obtained by [15] assuming "incoherence". [79] considers noisy matrix completion with the different notion of "low-spikeness". There are numerous works considering either exact recovery or noisy estimation. As we are mainly interested in the corrupted model, a complete overview is out of scope. We refer to further references in [60, 58] and the recent works [22, 23, 70, 99] for a comprehensive review on matrix completion under the "incoherence" assumption. In this paper, we assume we know an upper bound \(\mathsf{a}\geq\|\mathbf{B}^{*}\|_{\infty}\). This is assumed in [47, 59, 50, 61, 12]. It is weaker than the incoherence assumption, but it requires the hyper-parameter \(\mathsf{a}\) (which can be more demanding in practice). Both frameworks are hard to compare so we only review works assuming knowledge of \(\mathsf{a}\).
Robustness is considered in [58] and [43, 76, 44]. [43, 76, 44] are mainly concerned with heavy-tailed noise. Like this work, [58] focuses on outlier contamination. Suppose for simplicity we sample the entries
of a \(d\times d\) square matrix \(\mathbf{B}^{*}\) with rank \(r\). Compared to [58], we obtain improved optimal rates and hyper-parameter tuning in terms of the _corruption level_. Ignoring logs, the optimal rate in [58] when estimating _both_ the parameter and the corruption is \(\mathsf{a}\sqrt{(dr/n)\log(1/\delta)}+\mathsf{a}\sqrt{\log(1/\delta)/n}+\mathsf{a}\sqrt{\epsilon\log(1/\epsilon)}\). To attain this rate, [58] assumes knowledge of some \(\mathsf{a}>0\) upper bounding the sup norm of \(\mathbf{B}^{*}\)_and the corrupted entries_. We show in Theorem 5 that, when only estimating \(\mathbf{B}^{*}\), the optimal rate has the same expression but requires only \(\mathsf{a}\geq\|\mathbf{B}^{*}\|_{\infty}\) -- _regardless of the magnitudes of the corruption entries_. Interestingly, the tuning parameter \(\mathsf{a}\) in our Theorem 5 is the same as in matrix completion with no label contamination. This is not only of theoretical relevance. We have verified in our simulations that, with extremely high accuracy, _increasing the magnitude of the corruption 1000-fold does not change the estimation error or the estimator tuning_. See Table 1 in Section 13. In Figure 3(b) in Section 13, we confirm the dependence on \(\epsilon\) predicted in the rate of Theorem 5. To our knowledge, this "robustness" property with respect to the corruption level was not known in the matrix completion literature.
We conclude this section by mentioning that the proof of Theorem 5 differs significantly from the proofs of Theorems 2-3. Indeed, we use Bernstein-type concentration inequalities for bounded processes and require different dimension-reduction cones. For lack of space, we refer the details to Section 38 in the supplement.
## 8 Results for the multiplier and product processes
In this section we present new concentration inequalities for subgaussian multiplier and product processes with optimal dependence on \((d_{\mathrm{eff}},\delta)\). The notation in this section is independent of all previous sections. Throughout this section, \((B,\mathcal{B},\mathbf{P})\) is a probability space, \((\xi,X)\) is a random (possibly not independent) pair taking values on \(\mathbb{R}\times B\) and \(X\) has marginal distribution \(\mathbf{P}\). \(\{(\xi_{i},X_{i})\}_{i\in[n]}\) will denote iid copies of \((\xi,X)\) and \(\tilde{\mathbf{P}}\) denotes the empirical measure associated to \(\{X_{i}\}_{i\in[n]}\). The _multiplier process_ over functions \(f\in F\) is defined as
\[M(f):=\frac{1}{n}\sum_{i\in[n]}(\xi_{i}f(X_{i})-\mathbb{E}[\xi f(X)]).\]
For instance, the _empirical process_ is a particular case when \(\xi\equiv 1\). The _product process_ is defined as
\[A(f,g):=\frac{1}{n}\sum_{i\in[n]}\bigg{\{}f(X_{i})g(X_{i})-\mathbb{E}f(X_{i}) g(X_{i})\bigg{\}},\]
over two distinct subgaussian classes \(F\) and \(G\) of measurable functions. When \(F=G\), the corresponding process is often termed the _quadratic process_.
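As a plain numerical illustration of these definitions (not of the chaining bounds below), the sketch evaluates \(M(f)\) and \(A(f,g)\) over small finite classes of linear functions \(f_{t}(x)=\langle t,x\rangle\), for which the population terms are available in closed form; the specific noise model is an assumption of the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 5
X = rng.standard_normal((n, d))                 # iid copies X_i
xi = 0.3 * X[:, 0] + rng.standard_normal(n)     # noise allowed to depend on X

# Finite classes F, G of linear functions f_t(x) = <t, x>.
T_F = rng.standard_normal((20, d))
T_G = rng.standard_normal((20, d))

def M(t):
    """Multiplier process at f_t; here E[xi * f_t(X)] = 0.3 * t[0] in closed form."""
    return np.mean(xi * (X @ t)) - 0.3 * t[0]

def A(t, s):
    """Product process at (f_t, g_s); E[f_t(X) g_s(X)] = <t, s> for standard normal X."""
    return np.mean((X @ t) * (X @ s)) - t @ s

sup_M = max(abs(M(t)) for t in T_F)
sup_A = max(abs(A(t, s)) for t in T_F for s in T_G)
print(sup_M, sup_A)
```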
There is a large literature on concentration of these processes and their use in risk minimization. One pioneering idea is that of "generic chaining", first developed by Talagrand for the empirical process [95]. This method was refined by Dirksen, Bednorz, Mendelson and collaborators, e.g., in [75, 40, 5, 74]. The following notion of complexity is used in generic chaining bounds.11
Footnote 11: The pioneering work by Talagrand presented the, somewhat mysterious, \(\gamma_{2}\)-functional as a measure of complexity of the class. The “truncated” \(\gamma_{2,p}\)-functional was presented recently by Dirksen [40].
_Definition 6_ (\(\gamma_{2,p}\)-functional).: Let \((T,\mathsf{d})\) be a pseudo-metric space. We say a sequence \((T_{k})\) of subsets of \(T\) is _admissible_ if \(|T_{0}|=1\) and \(|T_{k}|\leq 2^{2^{k}}\) for \(k\in\mathbb{N}\) and \(\cup_{k\geq 0}T_{k}\) is dense in \(T\). Let \(\mathcal{A}\) denote the class of all such admissible subset sequences. Given \(p\geq 1\), the \(\gamma_{2,p}\)-functional with respect to \((T,\mathsf{d})\) is the quantity
\[\gamma_{2,p}(T):=\inf_{(T_{k})\in\mathcal{A}}\sup_{t\in T}\sum_{k\geq\lfloor \log_{2}p\rfloor}2^{k/2}\mathsf{d}(t,T_{k}).\]
We will say that \((T_{k})\in\mathcal{A}\) is optimal if it achieves the infimum above. Set \(\gamma_{2}(T):=\gamma_{2,1}(T)\).
Let \(L_{\psi_{2}}=L_{\psi_{2}}(\mathbf{P})\) be the family of measurable functions \(f:B\to\mathbb{R}\) having finite \(\psi_{2}\)-norm
\[\|f\|_{\psi_{2}}:=|f(X)|_{\psi_{2}}:=\inf\{c>0:\mathbb{E}[\psi_{2}(f(X)/c)]\leq 1\}\]
where \(\psi_{2}(t):=e^{t^{2}}-1\). We assume that the \(\psi_{2}\)-norm of \(\xi\), denoted also by \(\|\xi\|_{\psi_{2}}\), is finite. Given \(f,g\in L_{\psi_{2}}\), we define the pseudo-distance \(\mathsf{d}(f,g):=\|f-g\|_{\psi_{2}}\). Given a subclass \(F\subset L_{\psi_{2}}\), we let \(\Delta(F):=\sup_{f,f^{\prime}\in F}\mathsf{d}(f,f^{\prime})\) and \(\bar{\Delta}(F):=\sup_{f\in F}\mathsf{d}(f,0)\). We prove the following two results in Sections B and C in the supplement.
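As a side illustration of this convention, the following sketch estimates the \(\psi_{2}\)-norm of a sample by bisection on the defining inequality. It is a noisy numerical approximation of a population quantity and is only meant to fix ideas.

```python
import numpy as np

def psi2_norm_empirical(sample, tol=1e-6):
    """Empirical estimate of the psi_2 (Orlicz) norm:
    smallest c with mean(exp((sample/c)^2) - 1) <= 1, found by bisection."""
    sample = np.asarray(sample, dtype=float)

    def constraint(c):
        return np.mean(np.expm1((sample / c) ** 2)) - 1.0

    lo, hi = 1e-8, np.max(np.abs(sample)) + 1e-8
    while constraint(hi) > 0:          # enlarge until the inequality holds
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if constraint(mid) > 0:
            lo = mid
        else:
            hi = mid
    return hi

rng = np.random.default_rng(3)
z = rng.standard_normal(100_000)
# Typically close to sqrt(8/3) ~ 1.63 for a standard Gaussian under this convention
# (the empirical estimate is noisy because exp(Z^2) has heavy tails).
print(psi2_norm_empirical(z))
```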
**Theorem 7** (Multiplier process).: _There exists a universal constant \(c>0\) such that for all \(f_{0}\in F\), \(n\geq 1\), \(u\geq 1\) and \(v\geq 1\), with probability at least \(1-ce^{-u/4}-ce^{-nv}\),_
\[\sup_{f\in F}|M(f)-M(f_{0})|\lesssim\left(\sqrt{v}+1\right)\|\xi\|_{\psi_{2}} \frac{\gamma_{2}(F)}{\sqrt{n}}+\left(\sqrt{\frac{2u}{n}}+\frac{u}{n}+\sqrt{ \frac{uv}{n}}\right)\|\xi\|_{\psi_{2}}\bar{\Delta}(F)\]
**Theorem 8** (Product process).: _Let \(F,G\) be subclasses of \(L_{\psi_{2}}\). There exist universal constants \(c,C>0\), such that for all \(n\geq 1\) and \(u\geq 1\), with probability at least \(1-e^{-u}\),_
\[\sup_{(f,g)\in F\times G}|A(f,g)| \leq C\left[\frac{\gamma_{2}(F)\gamma_{2}(G)}{n}+\bar{\Delta}(F) \frac{\gamma_{2}(G)}{\sqrt{n}}+\bar{\Delta}(G)\frac{\gamma_{2}(F)}{\sqrt{n}}\right]\] \[+c\sup_{(f,g)\in F\times G}\|fg-\mathbf{P}fg\|_{\psi_{1}}\left( \sqrt{\frac{u}{n}}+\frac{u}{n}\right).\]
_Remark 7_ (Confidence level & complexity).: Mendelson [74] established impressive concentration inequalities for the multiplier and product processes. In fact, they hold for much more general \((\xi,X)\) having heavier tails (see Theorems 1.9, 1.13 and 4.4 in [74]). When specifying these bounds to subgaussian classes and noise, however, the confidence parameter \(u>0\) multiplies the complexities \(\gamma_{2}(F)\) and \(\gamma_{2}(G)\) -- unlike our Theorems 7-8. For a related discussion regarding the empirical and quadratic processes, we refer to Remark 3.3(ii) and observations before Corollary 5.7 in Dirksen's paper [40]. This technical point is crucial in our proof to show that our class of estimators attains the \(\delta\)-subgaussian rate in the high-dimensional regime with \(\delta\)-adaptive estimators. We refer to Section 10 for a discussion on this topic. Note that we can take \(v\asymp 1\) for failure probability \(\delta\geq e^{-c^{\prime}n}\) for an absolute constant \(c^{\prime}>0\). Our proofs are motivated by Dirksen's method for the quadratic process [40] and Talagrand's proof for the empirical process [95].12
Footnote 12: They are not corollaries of Dirksen’s results. For instance, Theorem 8 cannot be derived from the quadratic process inequality and the parallelogram law. Indeed, we fundamentally need \(F\neq G\).
## 9 Properties for subgaussian distributions
In what follows, \(\mathfrak{M}(\mathbf{V},\mathbf{u}):=\mathfrak{X}(\mathbf{V})+\sqrt{n}\mathbf{u}\), \(\mathcal{R}\) and \(\mathcal{S}\) are norms on \(\mathbb{R}^{p}\) and \(\mathcal{Q}\) is a norm on \(\mathbb{R}^{n}\). Throughout this section, \((\mathbf{X},\xi)\in\mathbb{R}^{p}\times\mathbb{R}\) satisfies Assumption 2. Next, we define the empirical bilinear form
\[\langle\!\langle\mathbf{V},\mathbf{W}\rangle\!\rangle_{n}:=\frac{1}{n}\sum_{i\in[n]}\langle\!\langle\mathbf{X}_{i},\mathbf{V}\rangle\!\rangle\langle\!\langle\mathbf{X}_{i},\mathbf{W}\rangle\!\rangle=\langle\mathfrak{X}^{(n)}(\mathbf{V}),\mathfrak{X}^{(n)}(\mathbf{W})\rangle.\]
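A direct computational rendering of this bilinear form (an illustration only; the isotropic Gaussian design below is an assumption of the example):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d1, d2 = 300, 4, 5
X_list = rng.standard_normal((n, d1, d2))     # design matrices X_i (vectorized dimension p = d1*d2)

def inner(A, B):
    """Trace inner product <<A, B>> = sum_{k,l} A_kl B_kl."""
    return np.sum(A * B)

def bilinear_n(V, W):
    """Empirical bilinear form <<V, W>>_n = (1/n) sum_i <<X_i, V>> <<X_i, W>>."""
    a = np.einsum('ikl,kl->i', X_list, V)
    b = np.einsum('ikl,kl->i', X_list, W)
    return np.mean(a * b)

V = rng.standard_normal((d1, d2))
W = rng.standard_normal((d1, d2))
# For the isotropic Gaussian design above, <V, W>_Pi equals the trace inner product <<V, W>>,
# so the empirical value should concentrate around it.
print(bilinear_n(V, W), inner(V, W))
```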
In what follows, \(\{\mathsf{a}_{i},\mathsf{b}_{i},\mathsf{c}_{i},\mathsf{d}_{i},\mathsf{f}_{i}\}\) are fixed positive numbers.
**Definition 9**.:
1. \(\mathfrak{X}\) satisfies \(\mathrm{RSC}_{\mathcal{R}}(\mathsf{a}_{1},\mathsf{a}_{2})\) if for all \(\mathbf{V}\in\mathbb{R}^{p}\), \[\left\|\mathfrak{X}^{(n)}(\mathbf{V})\right\|_{2}\geq\mathsf{a}_{1}\|\mathbf{V} \|_{\Pi}-\mathsf{a}_{2}\mathcal{R}(\mathbf{V}).\]
2. \(\mathfrak{X}\) satisfies \(\mathrm{PP}_{\mathcal{R},\mathcal{S}}(\mathsf{c}_{1},\mathsf{c}_{2},\mathsf{c}_ {3},\mathsf{c}_{4})\) if for all \([\mathbf{V},\mathbf{W}]\in(\mathbb{R}^{p})^{2}\), \[|\langle\!\langle\mathbf{V},\mathbf{W}\rangle\!\rangle_{n}-\langle\!\langle \mathbf{V},\mathbf{W}\rangle\!\rangle_{\Pi}| \leq\mathsf{c}_{1}\left\|\mathbf{V}\right\|_{\Pi}\|\mathbf{W}\|_{\Pi}+ \mathsf{c}_{2}\mathcal{R}(\mathbf{V})\|\mathbf{W}\|_{\Pi}+\mathsf{c}_{3}\left\| \mathbf{V}\right\|_{\Pi}\mathcal{S}(\mathbf{W})\] \[\quad+\mathsf{c}_{4}\mathcal{R}(\mathbf{V})\mathcal{S}(\mathbf{W}).\]
3. \(\mathfrak{X}\) satisfies \(\mathrm{IP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{b}_{1},\mathsf{b}_{2}, \mathsf{b}_{3},\mathsf{b}_{4})\) if for all \([\mathbf{V},\mathbf{W},\mathbf{\boldsymbol{u}}]\in(\mathds{R}^{p})^{2}\times \mathbb{R}^{n}\), \[|\langle\boldsymbol{u},\mathfrak{X}^{(n)}(\mathbf{V}+\mathbf{W})\rangle| \leq\mathsf{b}_{1}\left\|[\mathbf{V},\mathbf{W}]\right\|_{\Pi} \|\boldsymbol{u}\|_{2}+\mathsf{b}_{2}\mathcal{R}(\mathbf{V})\|\boldsymbol{u} \|_{2}+\mathsf{b}_{3}\mathcal{S}(\mathbf{W})\|\boldsymbol{u}\|_{2}\] \[+\mathsf{b}_{4}\left\|[\mathbf{V},\mathbf{W}]\right\|_{\Pi} \mathcal{Q}(\boldsymbol{u}).\]
4. \(\mathfrak{X}\) satisfies \(\mathrm{ARSC}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{d}_{1},\mathsf{d}_ {2},\mathsf{d}_{3},\mathsf{d}_{4})\) if for all \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\in(\mathds{R}^{p})^{2}\times\mathbb{R}^ {n}\), \[\left\{\|\mathfrak{M}^{(n)}(\mathbf{V}+\mathbf{W},\boldsymbol{u})\|_{2}^{2}-2 \langle\mathbf{V},\mathbf{W}\rangle_{\Pi}\right\}_{+}^{\frac{1}{2}}\geq \mathsf{d}_{1}\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}-\mathsf{d}_{2} \mathcal{R}(\mathbf{V})-\mathsf{d}_{3}\mathcal{S}(\mathbf{W})-\mathsf{d}_{4} \mathcal{Q}(\boldsymbol{u}).\]
5. \((\mathfrak{X},\boldsymbol{\xi})\) satisfies \(\mathrm{MP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{f}_{1},\mathsf{f}_{ 2},\mathsf{f}_{3},\mathsf{f}_{4})\) if for all \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\in(\mathds{R}^{p})^{2}\times\mathbb{R}^ {n}\), \[|\langle\boldsymbol{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{V}+\mathbf{W}, \boldsymbol{u})\rangle|\leq\mathsf{f}_{1}\|[\mathbf{V},\mathbf{W},\boldsymbol{ u}]\|_{\Pi}+\mathsf{f}_{2}\mathcal{R}(\mathbf{V})+\mathsf{f}_{3}\mathcal{S}( \mathbf{W})+\mathsf{f}_{4}\mathcal{Q}(\boldsymbol{u}).\]
In the next lemmas, we show that \(\mathrm{RSC}\) and \(\mathrm{ARSC}\) are consequences of \(\mathrm{PP}\) and \(\mathrm{IP}\).
**Lemma 10**.: _Suppose \(\mathfrak{X}\) satisfies \(\mathrm{PP}_{\mathcal{R},\mathcal{R}}(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4})\) with \(\alpha_{1}\in(0,1)\). Then \(\mathrm{RSC}_{\mathcal{R}}(\mathsf{a}_{1},\mathsf{a}_{2})\) holds with constants \(\mathsf{a}_{1}:=\sqrt{(3/4)(1-\alpha_{1})}\) and \(\mathsf{a}_{2}:=\left\{\frac{(\alpha_{2}+\alpha_{3})^{2}}{(1-\alpha_{1})}+ \alpha_{4}\right\}^{1/2}\)._
**Lemma 11** (Lemma 7 in [30]).: _Suppose \(\mathfrak{X}\) satisfies \(\mathrm{RSC}_{\mathcal{R}}(\mathsf{a}_{1},\mathsf{a}_{2})\) and \(\mathrm{IP}_{\mathcal{R},0,\mathcal{Q}}(\mathsf{b}_{1},\mathsf{b}_{2},0, \mathsf{b}_{4})\). Suppose further that \(\mathsf{a}_{1}\in(0,1)\) and \(\mathsf{b}_{1}<3\mathsf{a}_{1}^{2}/4\). Then \(\mathrm{ARSC}_{\mathcal{R},\mathcal{Q}}(\mathsf{d}_{1},\mathsf{d}_{2},0, \mathsf{d}_{4})\) holds with constants \(\mathsf{d}_{1}:=\sqrt{(3/4)\mathsf{a}_{1}^{2}-\mathsf{b}_{1}}\), \(\mathsf{d}_{2}:=\frac{2\mathsf{b}_{2}}{\mathsf{a}_{1}}+\mathsf{a}_{2},\) and \(\mathsf{d}_{4}:=\frac{2\mathsf{b}_{4}}{\mathsf{a}_{1}}\)._
**Lemma 12**.: _Suppose \(\mathfrak{X}\) satisfies \(\mathrm{RSC}_{\mathcal{R}}(\mathsf{a}_{1},\mathsf{a}_{2})\), \(\mathrm{RSC}_{\mathcal{S}}(\bar{\mathsf{a}}_{1},\bar{\mathsf{a}}_{2})\), \(\mathrm{IP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{b}_{1},\mathsf{b}_{2},\mathsf{b}_{3},\mathsf{b}_{4})\) and \(\mathrm{PP}_{\mathcal{R},\mathcal{S}}(\mathsf{c}_{1},\mathsf{c}_{2},\mathsf{c} _{3},\mathsf{c}_{4})\) with \(\mathsf{c}_{4}=\gamma\mathsf{c}_{2}\mathsf{c}_{3}\) for some \(\gamma>0\). Suppose further that \(\mathsf{a}_{1},\bar{\mathsf{a}}_{1}\in(0,1)\) and \(\mathsf{b}_{1}+\mathsf{c}_{1}<3(\mathsf{a}_{1}\wedge\bar{\mathsf{a}}_{1})^{2}/4\). Then \(\mathrm{ARSC}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{d}_{1},\mathsf{d}_ {2},\mathsf{d}_{3},\mathsf{d}_{4})\) holds with constants \(\mathsf{d}_{1}:=\sqrt{(3/4)(\mathsf{a}_{1}\wedge\bar{\mathsf{a}}_{1})^{2}-( \mathsf{b}_{1}+\mathsf{c}_{1})}\) and_
\[\mathsf{d}_{2}:=\left\{\frac{8(\mathsf{b}_{2}^{2}+\mathsf{c}_{2}^{2})}{( \mathsf{a}_{1}\wedge\bar{\mathsf{a}}_{1})^{2}}+\gamma\mathsf{c}_{2}^{2}\right\} ^{\frac{1}{2}}+\mathsf{a}_{2},\quad\mathsf{d}_{3}:=\left\{\frac{8(\mathsf{b}_{ 2}^{2}+\mathsf{c}_{2}^{2})}{(\mathsf{a}_{1}\wedge\bar{\mathsf{a}}_{1})^{2}}+ \gamma\mathsf{c}_{3}^{2}\right\}^{\frac{1}{2}}+\bar{\mathsf{a}}_{2},\quad\mathsf{ d}_{4}:=\frac{2\sqrt{2}\mathsf{b}_{4}}{(\mathsf{a}_{1}\wedge\bar{ \mathsf{a}}_{1})}.\]
In the following sections, we will not explicitly use \(\mathrm{PP}\). From the previous lemmas, it should be clear that this property is implicitly used when we invoke \(\mathrm{RSC}\) and \(\mathrm{ARSC}\). The fundamental properties we need to show for subgaussian designs and noises are \(\mathrm{PP}\), \(\mathrm{IP}\) and \(\mathrm{MP}\). The next proposition ensures these properties hold with high probability.
**Proposition 2**.: _Grant Assumption 2. There is universal constant \(C>0\) such that the following holds. Let \(\delta\in(0,1)\)._
1. _With probability_ \(\geq 1-\delta\)_,_ \(\mathrm{PP}_{\mathcal{R},\mathcal{S}}(\mathsf{c}_{1},\mathsf{c}_{2},\mathsf{c}_{3}, \mathsf{c}_{4})\) _is satisfied with constants_ \[\mathsf{c}_{1} =CL^{2}\left(\frac{1+\log(1/\delta)}{n}+\frac{1+\sqrt{\log(1/ \delta)}}{\sqrt{n}}\right),\] \[\mathsf{c}_{2} =CL^{2}\left(\frac{1}{n}+\frac{1}{\sqrt{n}}\right)\mathscr{G} \left(\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\right),\] \[\mathsf{c}_{3} =CL^{2}\left(\frac{1}{n}+\frac{1}{\sqrt{n}}\right)\mathscr{G} \left(\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{S}})\right),\] \[\mathsf{c}_{4} =\frac{CL^{2}}{n}\mathscr{G}\left(\mathfrak{S}^{1/2}(\mathbb{B}_{ \mathcal{R}})\right)\cdot\mathscr{G}\left(\mathfrak{S}^{1/2}(\mathbb{B}_{ \mathcal{S}})\right).\]
2. _Suppose that_ \(\mathsf{c}(\delta):=CL^{2}(\frac{1+\log(1/\delta)}{n}+\frac{1+\sqrt{\log(1/ \delta)}}{\sqrt{n}})<1\)_. Then, with probability_ \(\geq 1-\delta\)_,_ \(\mathrm{RSC}_{\mathcal{R}}(\mathsf{a}_{1},\mathsf{a}_{2})\) _holds with constants_ \(\mathsf{a}_{1}=\sqrt{(3/4)(1-\mathsf{c}(\delta))}\) _and_ \[\mathsf{a}_{2}=\left(\frac{2CL^{2}}{\sqrt{1-\mathsf{c}(\delta)}}\left(\frac{1}{n}+ \frac{1}{\sqrt{n}}\right)+\frac{CL}{\sqrt{n}}\right)\mathscr{G}\left(\mathfrak{S}^{1/ 2}(\mathbb{B}_{\mathcal{R}})\right).\]
3. _With probability_ \(\geq\ 1\ -\delta\)_,_ \(\mathrm{IP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{b}_{1},\mathsf{b}_{2}, \mathsf{b}_{3},\mathsf{b}_{4})\) _holds with constants_ \(\mathsf{b}_{1}\ =\ CL\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}\)_,_ \(\mathsf{b}_{2}\ =\ CL\frac{\mathscr{G}(\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{S}}))}{ \sqrt{n}},\mathsf{b}_{3}\ =\ CL\frac{\mathscr{G}(\mathfrak{S}^{1/2}(\mathbb{B}_{ \mathcal{S}}))}{\sqrt{n}}\) _and_ \(\mathsf{b}_{4}=CL\frac{\mathscr{G}(\mathbb{B}_{\mathcal{Q}})}{\sqrt{n}}\)_._
4. _Define the quantities_ \[\triangle_{n}(\delta) :=(\nicefrac{{1}}{{\sqrt{n}}})[1+\sqrt{\log(1/\delta)}]+( \nicefrac{{1}}{{n}})[1+\log(1/\delta)+\sqrt{\log(1/\delta)}],\] \[\lozenge_{n}(\delta) :=(\nicefrac{{1}}{{\sqrt{n}}})[1+(\nicefrac{{1}}{{\sqrt{n}}}) \sqrt{\log(1/\delta)}]+(\nicefrac{{1}}{{n}}).\] _With probability_ \(\geq 1-\delta\)_,_ \(\mathrm{MP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{f}_{1},\mathsf{f}_{ 2},\mathsf{f}_{3},\mathsf{f}_{4})\) _holds with constants_ \(\mathsf{f}_{1}:=C\sigma L\triangle_{n}(\delta)\)_,_ \(\mathsf{f}_{2}:=C\sigma L\lozenge_{n}(\delta)\cdot\mathscr{G}(\mathfrak{S}^{1/ 2}(\mathbb{B}_{\mathcal{R}}))\)_,_ \(\mathsf{f}_{3}:=C\sigma L\lozenge_{n}(\delta)\cdot\mathscr{G}(\mathfrak{S}^{1/ 2}(\mathbb{B}_{\mathcal{S}}))\) _and_ \(\mathsf{f}_{4}:=\frac{\sigma}{\sqrt{n}}\mathscr{G}\ (\mathbb{B}_{\mathcal{Q}})\)_._
Proof sketch.: \(\mathrm{PP}\) follows from the concentration inequality for the product process given in Theorem 8 and a two-parameter peeling argument. \(\mathrm{RSC}\) follows from \(\mathrm{PP}\) and Lemma 10.13 To prove \(\mathrm{IP}\), we invoke Chevet's inequality for subgaussian processes twice -- for each of the norms \((\mathcal{R},\mathcal{Q})\) -- and a two-parameter peeling lemma. To prove \(\mathrm{MP}\), we invoke the multiplier process inequality of Theorem 7 twice -- for each of the norms \((\mathcal{R},\mathcal{Q})\). We also concentrate the linear process \(\mathbf{u}\mapsto\langle\mathbf{u},\mathbf{\xi}\rangle\) using a symmetrization-comparison argument with a Gaussian linear process. From these three bounds and a one-parameter peeling lemma, \(\mathrm{MP}\) follows. The proofs of these claims are referred to Sections 17, 18, 19 and 20 of the supplement.
Footnote 13: Alternatively, \(\mathrm{RSC}\) could be proved from Dirksen-Bednorz inequality for the quadratic process [40, 5] and a one-parameter peeling lemma.
\(\mathrm{RSC}\) is the well known "restricted strong convexity" used to analyze regularized \(M\)-estimators with decomposable norms [10, 69, 80, 6]. \(\mathrm{MP}\) over a single variable \(\mathbf{V}\) and norm \(\mathcal{R}\) is also well known to be useful. We generalize this concept over the triple \([\mathbf{V},\mathbf{W},\mathbf{u}]\) and norms \((\mathcal{R},\mathcal{S},\mathcal{Q})\). See the next Section 10 for a discussion on this notion. Similarly, \(\mathrm{IP}\), \(\mathrm{PP}\) and \(\mathrm{ARSC}\) -- an abbreviation for "augmented" restricted convexity -- are new properties on the triplet \([\mathbf{V},\mathbf{W},\mathbf{u}]\) which we show to be useful for RTRMD.
_Remark 8_ (Design properties in [2]).: The main design property in [2] is stated in their Definition 2. In our terminology, it is equivalent to \(\mathrm{ARSC}_{\mathcal{R},\mathcal{S},0}(\mathsf{d}_{1},\mathsf{d}_{2}, \mathsf{d}_{3},0)\), a particular version over the pair \([\mathbf{V},\mathbf{W}]\in(\mathbb{R}^{p})^{2}\). Their Theorem 1 states deterministic bounds assuming this property. Still, they guarantee this property holds only for two simple cases. The first is identity designs, for which \(\mathrm{ARSC}\) is trivially satisfied with \(\mathsf{d}_{1}=1\) and \(\mathsf{d}_{2}=\mathsf{d}_{3}=0\). The second case is multi-task learning. Using Dirksen's inequality for the quadratic process, it is straightforward to show that the design for this problem is invertible with high probability.14 Even without label contamination, proving \(\mathrm{ARSC}_{\mathcal{R},\mathcal{S},0}(\mathsf{d}_{1},\mathsf{d}_{2}, \mathsf{d}_{3},0)\) for trace regression with additive matrix decomposition is non trivial. Indeed, we need the technical property \(\mathrm{PP}\), stated in Proposition 2, which follows from our Theorem 8.
Footnote 14: The design components are \(\mathfrak{X}_{i}(\mathbf{B}):=\mathbf{x}_{i}^{\top}\mathbf{B}\). When \(n\gtrsim d_{1}\), standard concentration inequalities imply \(\frac{1}{n}\sum_{i=1}^{n}(\mathbf{x}_{i}^{\top}\mathbf{b})^{2}\geq c\|\mathbf{b}\|_{\Pi}^{2}\) for all \(\mathbf{b}\in\mathbb{R}^{d_{1}}\) with high probability for some absolute constant \(c\in(0,1)\). Thus, \(\frac{1}{n}\|\mathfrak{X}(\mathbf{B})\|_{2}^{2}\geq c\|\mathbf{B}\|_{\Pi}^{2}\) for all \(\mathbf{B}\in\mathbb{R}^{p}\).
The model in [2] is not concerned with label contamination. Using our terminology, they do not need \(\mathrm{IP}\) (\(\mathsf{b}_{i}\equiv 0\)) and it is enough to set \(\mathcal{Q}\equiv 0\), \(\mathsf{c}_{4}=\mathsf{d}_{4}=\mathsf{f}_{4}=0\). Additionally, they implicitly use \(\mathrm{MP}_{\mathcal{R},\mathcal{S},0}(0,\mathsf{f}_{2},\mathsf{f}_{3},0)\) with \(\mathsf{f}_{1}=0\), resorting to the dual-norm inequality. As explained in the next section, this is the technical reason their bounds are optimal on average but suboptimal in \(\delta\). We recall that they assume the noise is independent of the features.
_Remark 9_ (Design properties in [30]).: [30] studies robust sparse regression with Huber's loss in the Gaussian setting. In this quest, they require the particular properties \(\mathrm{IP}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(\mathsf{b}_{1},\mathsf{b}_{2},0, \mathsf{b}_{4})\) and \(\mathrm{ARSC}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(\mathsf{d}_{1},\mathsf{d}_{2},0, \mathsf{d}_{4})\) over the pair \([\mathbf{v},\mathbf{u}]\in\mathbb{R}^{p}\times\mathbb{R}^{n}\).15 Even for this particular setting, we show the relevance of considering the "sorted Huber's loss" (taking \(\mathcal{Q}=\|\cdot\|_{2}\)). Indeed, we improve the rate and breakdown points (in terms of log terms) and the "practical" constant in the convergence rate (recall the simulation in Figure 1).
Footnote 15: They use the notation \(\mathrm{TP}\) for \(\mathrm{RSC}\) and \(\mathrm{ATP}\) for \(\mathrm{ARSC}\).
Their model is not concerned with matrix decomposition. Using our terminology, they do not need \(\mathrm{PP}\) (\(\mathsf{c}_{i}\equiv 0\)) and it is enough to set \(\mathcal{S}\equiv 0\), \(\mathsf{b}_{3}=\mathsf{d}_{3}=\mathsf{f}_{3}=0\). Our framework deals with the broader model RTRMD. By Lemma 12 and Proposition 2, RTRMD requires a non trivial interplay between the product process inequality of Theorem 8 and Chevet's inequality (see Section 19 in the supplement). Both inequalities are
needed as they imply, respectively, \(\mathrm{PP}\) and the general version of \(\mathrm{IP}\) over the triple \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathbb{R}^{p})^{2}\times\mathbb{R}^{n}\) and norms \((\mathcal{R},\mathcal{S},\mathcal{Q})\).
To conclude, the analysis in [30] implicitly uses the particular version \(\mathrm{MP}_{\mathcal{R},0,\mathcal{Q}}(0,\mathrm{f}_{2},0,\mathrm{f}_{4})\) with \(\mathrm{f}_{1}=0\) -- indeed, they resort to the dual-norm inequality. As in the previous remark, this approach leads to near-optimal bounds in \((n,s,p,\epsilon)\) on average but suboptimal in \(\delta\). We prove the general version of \(\mathrm{MP}\) using the multiplier process inequality of Theorem 7. [30] also assumes the noise is independent of the features.
## 10 \(\mathrm{MP}\) in \(M\)-estimation with decomposable regularizers
Let \(\hat{\mathbf{B}}\) be the least-squares estimator with penalization \(\lambda\mathcal{R}\) -- when there is no contamination, the model is well-specified and \(\mathbf{\Gamma}^{*}\equiv 0\). By the first-order condition, one gets
\[\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}^{*}})\|_{2}^{2}\leq(\nicefrac{{1}}{{n}})\langle\mathbf{\xi},\mathfrak{X}(\mathbf{\Delta}_{\mathbf{B}^{*}})\rangle+\lambda\big{(}\mathcal{R}(\mathbf{B}^{*})-\mathcal{R}(\hat{\mathbf{B}})\big{)}.\]
In the "Lasso proof", the standard argument is to properly upper bound the RHS and lower bound the LHS -- using the regularization effect of the decomposable norm \(\mathcal{R}\). For the upper bound, the typical way is to use the dual-norm inequality:
\[(\nicefrac{{1}}{{n}})\langle\mathbf{\xi},\mathfrak{X}(\mathbf{\Delta}_{\mathbf{B}^{* }})\rangle\leq(\nicefrac{{1}}{{n}})\mathcal{R}^{*}(\mathfrak{X}^{*}(\mathbf{\xi} ))\cdot\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}^{*}}), \tag{19}\]
where \(\mathcal{R}^{*}\) denotes the dual-norm of \(\mathcal{R}\) and \(\mathfrak{X}^{*}(\mathbf{\xi}):=\sum_{i=1}^{n}\xi_{i}\mathbf{X}_{i}\), with \(\mathfrak{X}^{*}\) the adjoint operator of \(\mathfrak{X}\). The displayed bound is \(\mathrm{MP}_{\mathcal{R},0,0}(0,\mathrm{f}_{2},0,0)\) with \(\mathrm{f}_{2}=(1/n)\mathcal{R}^{*}(\mathfrak{X}^{*}(\mathbf{\xi}))\) -- justifying the well-known condition \(\lambda\gtrsim(1/n)\mathcal{R}^{*}(\mathfrak{X}^{*}(\mathbf{\xi}))\) that ensures \(\mathbf{\Delta}_{\mathbf{B}^{*}}\) lies in a dimension-reduction cone. \(\mathrm{RSC}\) can then be invoked for the lower bound -- indeed, it implies strong convexity over the dimension-reduction cone.
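The dual-norm step (19) is elementary to check numerically in the \(\ell_{1}\) case, where \(\mathcal{R}^{*}=\|\cdot\|_{\infty}\). The sketch below is only a sanity check of the inequality, with simulated Gaussian data as an assumption of the example.

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 200, 80
X = rng.standard_normal((n, p))
xi = rng.standard_normal(n)
v = rng.standard_normal(p)

lhs = abs(xi @ (X @ v)) / n                        # (1/n) <xi, X v>
dual = np.max(np.abs(X.T @ xi)) / n                # (1/n) R*(X* xi) for R = l1, R* = l_inf
rhs = dual * np.sum(np.abs(v))                     # dual-norm (Hoelder) bound as in (19)
assert lhs <= rhs + 1e-12
print(lhs, rhs)
```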
Consider trace regression with \(\mathcal{R}=\|\cdot\|_{N}\). As first shown in [78], if we use (19) we must take \(\lambda\asymp\sigma\rho_{N}(\mathbf{\Sigma})(\sqrt{(d_{1}+d_{2})/n}+\sqrt{\log(1/ \delta)/n})\), implying the estimation rate
\[\sigma\rho_{N}(\mathbf{\Sigma})\sqrt{r(d_{1}+d_{2})/n}+\sigma\rho_{N}(\mathbf{\Sigma} )\sqrt{r\log(1/\delta)/n}.\]
This seminal result is optimal on average -- but suboptimal in \(\delta\). In the case of sparse regression, the same approach leads to the rate \(\sigma\rho_{1}(\mathbf{\Sigma})\sqrt{s\log p/n}+\sigma\rho_{1}(\mathbf{\Sigma})\sqrt{s\log(1/\delta)/n}.\) To our knowledge, [6] was the first to attain the \(\delta\)-subgaussian rate for sparse regression. See also [34] for extensions of their results to the square-root Lasso and Slope estimators. Let \(\mathbb{X}\) denote the design matrix satisfying \(\max_{j\in[p]}\|\mathbb{X}_{\mathbf{\ast},j}\|_{2}\leq 1\) and assume that \(\mathbf{\xi}\) is independent of \(\mathbb{X}\). In their Theorem 9.1, they show that, with probability \(\geq 1-\delta\), for all \(\mathbf{v}\in\mathbb{R}^{p}\),
\[(\nicefrac{{1}}{{n}})\langle\mathbf{\xi},\mathbb{X}\mathbf{v}\rangle\leq\tilde{ \mathrm{f}}_{1}\|\mathbb{X}^{(n)}\mathbf{v}\|_{2}+\tilde{\mathrm{f}}_{2}\|\mathbf{v}\|_{ \sharp},\]
with \(\tilde{\mathrm{f}}_{1}\asymp\sigma(\nicefrac{{1+\sqrt{\log(1/\delta)}}}{{\sqrt{n}}})\) and \(\tilde{\mathrm{f}}_{2}\asymp\sigma/\sqrt{n}\). The above bound and an upper bound on the quadratic process imply \(\mathrm{MP}_{\|\cdot\|_{\sharp},0,0}(\mathrm{f}_{1},\mathrm{f}_{2},0,0)\) with \(\mathrm{f}_{1}\asymp\tilde{\mathrm{f}}_{1}\) and \(\mathrm{f}_{2}\asymp\tilde{\mathrm{f}}_{2}\).16 It is a substantial improvement upon (19) in the sense that \(\tilde{\mathrm{f}}_{2}\) does not depend on \(\delta\), which appears only in the "variance" constant \(\tilde{\mathrm{f}}_{1}\). This allows us to take \(\lambda\asymp\sigma/\sqrt{n}\), entailing the subgaussian rate \(\sigma\sqrt{s\log(p/s)/n}+\sigma\sqrt{\log(1/\delta)/n}.\) In conclusion, showing \(\mathrm{MP}_{\mathcal{R},0,0}(\mathrm{f}_{1},\mathrm{f}_{2},0,0)\) with finer arguments than (19) -- with proper coefficient \(\mathrm{f}_{1}\neq 0\) -- is the technical reason [6] attains the \(\delta\)-subgaussian rate with \(\delta\)-adaptive estimators in the framework of \(M\)-estimation with decomposable regularizers.
Footnote 16: Dirksen’s inequality implies, for suitable constants \(\tilde{\mathrm{s}}_{1},\tilde{\mathrm{s}}_{2}>0\), \(\|\mathbb{X}^{(n)}\mathbf{v}\|_{2}\leq\tilde{\mathrm{s}}_{1}\|\mathbf{v}\|_{\Pi}+\tilde{\mathrm{s}}_{2}\|\mathbf{v}\|_{\sharp}\) for all \(\mathbf{v}\) with high probability.
_Theorem 9.1 in [6] fundamentally assumes \(\mathbf{\xi}\) is independent of \(\mathbf{X}\)._17 It is natural to ask if independence can be dropped. With feature-dependent noise, Theorem 7 and a peeling argument entail \(\mathrm{MP}_{\|\cdot\|_{\sharp},0,0}(\mathrm{f}_{1},\mathrm{f}_{2},0,0)\)
with high probability and constants \(\mathsf{f}_{1}\asymp\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}\) and \(\mathsf{f}_{2}\asymp\sigma L\rho_{1}(\mathbf{\Sigma})/\sqrt{n}\) -- assuming \(n\gtrsim 1+\log(1/\delta)\). More generally, we prove \(\operatorname{MP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathsf{f}_{1},\mathsf{f}_{2},\mathsf{f}_{3},\mathsf{f}_{4})\) with non-zero constants \(\mathsf{f}_{3}\) and \(\mathsf{f}_{4}\asymp\sigma L\mathscr{G}(\mathbb{B}_{\mathcal{Q}})/\sqrt{n}\) for general regularization norms \((\mathcal{R},\mathcal{S},\mathcal{Q})\). When \(\mathcal{Q}=\|\cdot\|_{\sharp}\), we show this property is useful to handle label contamination and/or additive matrix decomposition. These properties entail the \(\delta\)-subgaussian rate for the robust \(\delta\)-adaptive estimators (4)-(5) -- assuming nothing but (marginal) subgaussianity of \(\mathbf{X}\) and \(\mathbf{\xi}\).
## 11 First general theorem
The main result of this section is the general Theorem 16. To state it, we fix the positive constants \(\{\mathsf{a}_{i}\}\), \(\{\mathsf{b}_{i}\}\), \(\{\mathsf{c}_{i}\}\), \(\{\mathsf{d}_{i}\}\) and \(\{\mathsf{f}_{i}\}\) in Definition 9. Theorems 2-3 are consequences of Theorem 16 and Proposition 2 -- which ensure that the required design properties hold with high probability.
We start with some definitions.
_Definition 13_ (Decomposable norms [80, 61]).: A norm \(\mathcal{R}\) over \(\mathbb{R}^{p}\) is said to be decomposable if, for all \(\mathbf{B}\in\mathbb{R}^{p}\), there exists a linear map \(\mathbf{V}\mapsto\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V})\) such that, for all \(\mathbf{V}\in\mathds{R}^{p}\), defining \(\mathcal{P}_{\mathbf{B}}(\mathbf{V}):=\mathbf{V}-\mathcal{P}_{\mathbf{B}}^{ \perp}(\mathbf{V})\),
* \(\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{B})=0\),
* \(\langle\mathcal{P}_{\mathbf{B}}(\mathbf{V}),\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V})\rangle=0\),
* \(\mathcal{R}(\mathbf{V})=\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))+ \mathcal{R}(\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V}))\).
When \(\mathcal{R}=\|\cdot\|_{1}\), \(\mathcal{P}_{\mathbf{B}}\) is the projection onto the support of the vector \(\mathbf{B}\). When \(\mathcal{R}=\|\cdot\|_{N}\), \(\mathcal{P}_{\mathbf{B}}\) is the projection onto the "low-rank" support of the matrix \(\mathbf{B}\) -- see Section 21 in the supplement for a precise definition. In what follows, \(\mathcal{R}\) and \(\mathcal{S}\) will be decomposable norms on \(\mathds{R}^{p}\); we shall also need Definition 14 below.
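To make Definition 13 concrete, here is a minimal R sketch (ours, not part of the paper's code base) that checks the three decomposability properties numerically for \(\mathcal{R}=\|\cdot\|_{1}\), where \(\mathcal{P}_{\mathbf{B}}\) keeps the coordinates in the support of \(\mathbf{B}\); the dimensions and vectors are placeholder choices.

```r
# l1 norm: P_B keeps the coordinates in the support of B, P_B^perp zeroes them.
set.seed(1)
p <- 10
B <- c(rep(10, 3), rep(0, p - 3))            # sparse "reference" vector
V <- rnorm(p)
S <- which(B != 0)                           # support of B
P_B      <- function(V) { W <- rep(0, p); W[S] <- V[S]; W }
P_B_perp <- function(V) V - P_B(V)

all.equal(P_B_perp(B), rep(0, p))            # (i)   P_B^perp(B) = 0
sum(P_B(V) * P_B_perp(V))                    # (ii)  orthogonality: equals 0
all.equal(sum(abs(V)),                       # (iii) R(V) = R(P_B(V)) + R(P_B^perp(V))
          sum(abs(P_B(V))) + sum(abs(P_B_perp(V))))
```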
_Definition 14_ (Dimension-reduction cone).: Given \([\mathbf{B},\mathbf{\Gamma}]\in(\mathds{R}^{p})^{2}\), let \(\mathcal{P}_{\mathbf{B}}\) and \(\mathcal{P}_{\mathbf{\Gamma}}\) be the projection maps associated to \((\mathcal{R},\mathbf{B})\) and \((\mathcal{S},\mathbf{\Gamma})\) respectively. Fix \(c_{0},\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\eta\geq 0\). Let \(\mathcal{C}_{\mathbf{B},\mathbf{\Gamma},\mathcal{R},\mathcal{S}}(c_{0},\gamma _{\mathcal{R}},\gamma_{\mathcal{S}},\eta)\) be the cone of points \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathds{R}^{p})^{2}\times\mathbb{R}^{n}\) satisfying
\[\gamma_{\mathcal{R}}\mathcal{R}(\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V})) +\gamma_{\mathcal{S}}\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}^{\perp}( \mathbf{W}))+\sum_{i=o+1}^{n}\omega_{i}\mathbf{u}_{i}^{\sharp}\leq c_{0}\left[ \gamma_{\mathcal{R}}\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))+\gamma _{\mathcal{S}}\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+\eta\| \mathbf{u}\|_{2}\right].\]
We will sometimes omit some of the subscripts when they are clear in the context.
The one-dimensional cone \(\mathcal{C}_{\mathbf{B},\mathcal{R}}(c_{0}):=\mathcal{C}_{\mathbf{B},\mathbf{ 0},\mathcal{R},\mathcal{Q}}(c_{0},1,0,0)\) is well known in high-dimensional statistics. In the analysis of RTRMD, the three-dimensional cone \(\mathcal{C}_{\mathbf{B},\mathbf{\Gamma},\mathcal{R},\mathcal{S}}\) is useful, as in the sketch below. Some additional notation is also needed when dealing with contamination in misspecified models; see Definition 15.
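As an illustration, the following hedged R sketch tests membership in the cone of Definition 14 with \(\mathcal{R}=\mathcal{S}=\|\cdot\|_{1}\); the weights \(\omega_{i}\), the constants \((c_{0},\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\eta)\) and the test points are illustrative choices of ours, not the paper's.

```r
set.seed(2)
p <- 20; n <- 50; o <- 5                      # o = number of corrupted labels
c0 <- 3; gamma_R <- 1; gamma_S <- 1; eta <- 1 # placeholder constants
omega <- sqrt(log(2 * n / (1:n)))             # decreasing weights (illustrative)

B   <- c(rep(1, 4), rep(0, p - 4)); Gam <- c(rep(0, p - 4), rep(1, 4))
V   <- B + rnorm(p, sd = 0.1)                 # errors concentrated on the supports
W   <- Gam + rnorm(p, sd = 0.1)
u   <- c(rnorm(o, sd = 5), rnorm(n - o, sd = 0.1))

proj    <- function(x, supp) { y <- rep(0, length(x)); y[supp] <- x[supp]; y }
u_sharp <- sort(abs(u), decreasing = TRUE)    # non-increasing rearrangement of |u|

lhs <- gamma_R * sum(abs(V - proj(V, which(B != 0)))) +
       gamma_S * sum(abs(W - proj(W, which(Gam != 0)))) +
       sum(omega[(o + 1):n] * u_sharp[(o + 1):n])
rhs <- c0 * (gamma_R * sum(abs(proj(V, which(B != 0)))) +
             gamma_S * sum(abs(proj(W, which(Gam != 0)))) +
             eta * sqrt(sum(u^2)))
lhs <= rhs    # TRUE would mean [V, W, u] lies in the cone for these choices
```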
_Definition 15_.: Let \(\mathbf{B},\mathbf{\Gamma},\mathbf{V},\mathbf{W}\in\mathds{R}^{p}\) and non-negative numbers \((c_{0},\alpha,\mathsf{f},r)\). Define the quantities \(R_{\mathcal{R},c_{0}}(\mathbf{V}|\mathbf{B}):=\Psi_{\mathcal{R}}(\mathcal{P}_{ \mathbf{B}}(\mathbf{V}))\mu(\mathcal{C}_{\mathbf{B},\mathcal{R}}(2c_{0}))\) and \(R_{\mathcal{S},c_{0}}(\mathbf{W}|\mathbf{\Gamma}):=\Psi_{\mathcal{S}}( \mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))\mu(\mathcal{C}_{\mathbf{\Gamma}, \mathcal{S}}(2c_{0}))\). Additionally, set
\[r_{\lambda,\chi,\alpha,c_{0}}(\mathbf{V},\mathbf{W}|\mathbf{B},\mathbf{\Gamma }):=\{\lambda^{2}R_{\mathcal{R},c_{0}}^{2}(\mathbf{V}|\mathbf{B})+\chi^{2}R_{ \mathcal{S},c_{0}}^{2}(\mathbf{W}|\mathbf{\Gamma})+\alpha^{2}\}^{1/2}.\]
Finally, set \(\blacktriangle_{2}(\mathsf{f},r):=(\nicefrac{{1}}{{\mathsf{d}_{1}^{2}}}) \left(4\mathsf{f}+3r\right)^{2}\) and \(\bigtriangleup_{2}(\mathsf{f},r):=(\nicefrac{{1}}{{\mathsf{d}_{1}}}) \left(16\mathsf{f}+12r\right).\)
**Theorem 16** (\(q=2\) & matrix decomposition).: _Grant Assumption 1 and model (1). Consider the solution \([\hat{\mathbf{B}},\hat{\mathbf{\Gamma}}]\) of (4) with hyper-parameters \(\lambda,\chi,\tau>0\) and \(\mathsf{a}\in(0,\infty]\). Let \(\mathsf{c}_{*},\mathsf{f}_{*}\geq 0\) be absolute constants and suppose:_
* \((\mathfrak{X},\mathbf{\xi})\) _satisfies the_ \(\operatorname{MP}_{\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp}}(\mathsf{f}_{1}, \mathsf{f}_{2},\mathsf{f}_{3},\mathsf{f}_{4})\)_._
* \(\mathfrak{X}\) _satisfies the_ \(\operatorname{ARSC}_{\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp}}(\mathsf{d}_{1}, \mathsf{d}_{2},\mathsf{d}_{3},\mathsf{d}_{4})\)_._
* \(\mathfrak{X}\) _satisfies the_ \(\operatorname{IP}_{\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp}}\left(\mathsf{b}_{1}, \mathsf{b}_{2},\mathsf{b}_{3},\mathsf{b}_{4}\right)\)_._
* _The hyper-parameters_ \((\lambda,\chi,\tau)\) _satisfy_ \(\tau\geq 4[(\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4})]\)_,_ \[\lambda\geq 4[(\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}+2\mathsf{c}_{*} \sigma\mathsf{b}_{2})]\text{ and }\chi\geq 4[(\sigma\mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{*}+2 \mathsf{c}_{*}\sigma\mathsf{b}_{3})].\]
* \(\hat{\mathsf{f}}_{1}:=\mathsf{f}_{1}+\mathsf{b}_{1}(\mathsf{c}_{*}\sigma)+(2\mathsf{b}_{1}/\tau)(\mathsf{c}_{*}^{2}\sigma^{2})\leq\sigma\mathsf{d}_{1}/2\)_._
_Take any \(D\geq 0\) and any \([\mathbf{B},\mathbf{\Gamma}]\) satisfying the constraints_
\[\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}=D, \tag{20}\] \[\|\mathbf{B}\|_{\infty}\leq\mathsf{a}, \tag{21}\] \[|\langle\!\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle_{\Pi}|\leq\mathsf{f}_{*}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}}), \tag{22}\] \[r:=r_{\lambda,\chi,\tau\Omega,3}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}|\mathbf{B},\mathbf{\Gamma})<\sigma\mathsf{d}_{1}, \tag{23}\] \[D^{2}+\blacktriangle_{2}(\mathsf{f}_{1},r)\leq\mathsf{c}_{*}^{2}\sigma^{2}, \tag{24}\] \[\big[D^{2}/(\sigma\mathsf{d}_{1})\big]\vee\big[(2\sqrt{2}/\mathsf{d}_{1})D+\bigtriangleup_{2}(\mathsf{f}_{1},r)\big]\leq\mathsf{c}_{*}\sigma. \tag{25}\]
_Define the quantities \(\hat{r}:=r_{\lambda,\chi,0,3}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{ \Gamma}}|\mathbf{B},\mathbf{\Gamma})\) and_
\[F:=\mathsf{f}_{1}+\mathsf{b}_{1}\Big[\big(D^{2}/(\sigma\mathsf{d}_{1})\big)\vee\big((2\sqrt{2}/\mathsf{d}_{1})D+\bigtriangleup_{2}(\mathsf{f}_{1},r)\big)\Big]+(2\mathsf{b}_{4}/\tau)\big(D^{2}+\blacktriangle_{2}(\mathsf{f}_{1},r)\big).\]
_Then, it holds that_
\[(1/2)\left(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\chi\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})\right)+\|\mathfrak{X}^{(n)}(\hat{\mathbf{B}}+\hat{\mathbf{\Gamma}})-\mathbf{f}^{(n)}\|_{2}^{2}\leq D^{2}+\blacktriangle_{2}(F,\hat{r}), \tag{26}\] \[\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\leq\big(D^{2}/(\sigma\mathsf{d}_{1})\big)\vee\big[(2\sqrt{2}/\mathsf{d}_{1})D+\bigtriangleup_{2}(F,\hat{r})\big]. \tag{27}\]
The conditions (i)-(iii) are the required design properties. Condition (iv) prescribes the "optimal" level of the hyper-parameters \((\lambda,\chi,\tau)\) in terms of the design constants, the noise level and the low-spikeness constant \(\mathsf{f}_{*}\). Notice that \((\lambda,\chi)\) also depend on the constant \(\mathsf{c}_{*}\) -- this constant is related to the constraints (20) and (24)-(25). As explained later, these constraints identify the effect of the corruption error \(\mathbf{\Delta}^{\mathbf{\theta}}\) and of the misspecification error on the choice of \((\lambda,\chi)\). The constraints (21)-(22) encode the low-spikeness assumption. Finally, condition (v) and constraint (23) encode the minimal sample size and maximum breakdown point. Notice that the corruption and misspecification errors also impact condition (v) via the constant \(\mathsf{c}_{*}\).
The proof of Theorem 16 proceeds via intermediate lemmas, stated next. These are proven in the supplement. We start with the next lemma, a consequence of the first-order condition of (6). In the following, we grant Assumption 1 and model (1) and set \(\mathbf{\Delta}=\mathfrak{X}(\hat{\mathbf{B}}+\hat{\mathbf{\Gamma}})-\mathbf{f}\).
**Lemma 17**.: _For all \([\mathbf{B},\mathbf{\Gamma}]\in(\mathbb{R}^{p})^{2}\) such that \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\),_
\[\langle\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}, \mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}},\bm {\Delta}^{\hat{\mathbf{\theta}}})\rangle \leq\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{ B}}+\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle\] \[+\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{ \mathbf{B}})\big{)}+\chi\big{(}\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}( \hat{\mathbf{\Gamma}})\big{)}+\tau\big{(}\|\mathbf{\theta}^{*}\|_{\sharp}-\|\hat{ \mathbf{\theta}}\|_{\sharp}\big{)}. \tag{28}\]
Next, we upper (and lower) bound (28) using \(\mathrm{MP}\) (and \(\mathrm{ARSC}\)). In the case of additive matrix decomposition, we require an additional condition related to the spikeness assumption.
**Lemma 18**.: _Suppose conditions (i)-(ii) of Theorem 16 hold. For some \(\mathsf{f}_{*}\geq 0\), let \([\mathbf{B},\mathbf{\Gamma}]\) be such that \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\) and \(|\langle\!\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle_{\Pi}|\leq\mathsf{f}_{*}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})\). Define the quantities_
\[\bar{\boldsymbol{\Delta}}:=((\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}))\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})+((\sigma\mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{*}))\mathcal{S}(\boldsymbol{\Delta}_{\mathbf{\Gamma}})+((\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4}))\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp},\] \[\boldsymbol{\nabla}:=\lambda\big(\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big)+\chi\big(\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})\big)+\tau\big(\|\boldsymbol{\theta}^{*}\|_{\sharp}-\|\hat{\boldsymbol{\theta}}\|_{\sharp}\big).\]
_Then_
\[\|\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}^{2}+\Big(\mathsf{d}_{1}\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}-(\nicefrac{\mathsf{a}}{\sigma})\Big)_{+}^{2}\leq\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}+2\mathsf{f}_{1}\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}+\bar{\boldsymbol{\Delta}}+2\boldsymbol{\nabla}. \tag{29}\]
To illustrate, consider well-specified trace regression with label contamination -- namely, \(\mathbf{B}=\mathbf{B}^{*}\),
\(\mathbf{\Gamma}=\mathbf{\Gamma}^{*}=\mathbf{0}\) and \(\mathfrak{X}^{(n)}(\mathbf{B}^{*})-\boldsymbol{f}^{(n)}=\mathbf{0}\). In this case, \(\chi=\mathsf{f}_{*}=0\), \(\mathsf{a}=\infty\) and it is sufficient that \(\mathrm{MP}\) and \(\mathrm{ARSC}\) hold with \(\mathsf{d}_{3}=\mathsf{f}_{3}=0\) and \(\mathcal{S}\equiv 0\). Using (28), a proof similar to that of (29) yields
\[\|\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}^{*}}, \boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\|_{2}^{2} \leq\frac{\mathsf{f}_{1}}{\mathsf{c}_{1}}\|\mathfrak{M}^{(n)}( \boldsymbol{\Delta}_{\mathbf{B}^{*}},\boldsymbol{\Delta}^{\hat{\boldsymbol{ \theta}}})\|_{2}+\left(\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{c}_{2}}{ \mathsf{c}_{1}}\right)\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}^{*}})+ \left(\mathsf{f}_{3}+\frac{\mathsf{f}_{1}\mathsf{c}_{3}}{\mathsf{c}_{1}} \right)\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp}\] \[+\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{ \mathbf{B}})\big{)}+\tau\big{(}\|\boldsymbol{\theta}^{*}\|_{\sharp}-\|\hat{ \boldsymbol{\theta}}\|_{\sharp}\big{)}. \tag{30}\]
In the case \(\mathsf{f}_{1}=0\), the above bound and decomposability of norms can be used to show that \([\boldsymbol{\Delta}_{\mathbf{B}^{*}},\mathbf{0},\boldsymbol{\Delta}^{\hat{ \boldsymbol{\theta}}}]\in\mathcal{C}_{\mathbf{B}^{*},\mathbf{0}}(c_{0},\gamma, 0,\Omega)\) for some \(c_{0}>0\) and \(\gamma:=\lambda/\tau\) -- provided the penalization \((\lambda,\tau)\) is large enough. This would be the approach using the dual-norm inequality. Instead, we resort to Theorem 7 to obtain \(\mathrm{MP}\) with \(\mathsf{f}_{1}\neq 0\) -- enabling us to obtain the optimal rate in \(\delta\). \(\mathrm{ARSC}\) can be further used to lower bound (30) -- as it implies restricted strong convexity over \(\mathcal{C}_{\mathbf{B}^{*},\mathbf{0}}(c_{0},\gamma,0,\Omega)\). Inequality (29) is a nontrivial generalization of (30) that handles misspecification and additive matrix decomposition.18
Footnote 18: More precisely, (29) gives a recursion in the variable \(\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\) -- instead of \(\|\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}+\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\|_{2}\). This technical point is needed in the case of matrix decomposition, accounting for the "bias" \(|\langle\!\langle\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle_{\Pi}|\leq\mathsf{f}_{*}\mathcal{S}(\boldsymbol{\Delta}_{\mathbf{\Gamma}})\).
_Remark 10_ (Relevance of \(\mathrm{PP}\) and \(\mathrm{IP}\)).: One should not take for granted the fact that \(\mathrm{PP}_{\mathcal{R},\mathcal{S}}(\mathsf{c}_{1},\mathsf{c}_{2},\mathsf{c}_ {3},\mathsf{c}_{4})\) and \(\mathrm{IP}_{\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp}}(\mathsf{b}_{1},\mathsf{ b}_{2},\mathsf{b}_{3},\mathsf{b}_{4})\) over the triplet \([\mathbf{V},\mathbf{W},\mathbf{u}]\) are _implicitly_ invoked in Lemma 18. Indeed, by Lemmas 10 and 12, both properties entail \(\mathrm{ARSC}_{\mathcal{R},\mathcal{S},\|\cdot\|_{\sharp}}(\mathsf{d}_{1}, \mathsf{d}_{2},\mathsf{d}_{3},\mathsf{d}_{4})\). The optimal bound for \(\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}, \boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\) is obtained by invoking \(\mathrm{ARSC}\) with sharp constants (as stated in Proposition 2). One could ask whether a more "direct" approach could lead to the optimal rate -- for instance, one that does not resort to such technical definitions. It turns out that the mere use of dual-norm inequalities is suboptimal.
Before proceeding, we state the next lemma, which gives the useful bounds (31)-(32) for points in \(\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}\). For convenience, given \([\mathbf{V},\mathbf{W},\mathbf{B},\mathbf{\Gamma},\boldsymbol{u}]\), we define
\[\triangle_{\lambda,\chi,\tau}(\mathbf{V},\mathbf{W},\boldsymbol{u}|\mathbf{B},\mathbf{\Gamma}):=(\nicefrac{3\lambda}{2})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B}})(\mathbf{V})-(\nicefrac{\lambda}{2})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B}}^{\perp})(\mathbf{V})+(\nicefrac{3\chi}{2})(\mathcal{S}\circ\mathcal{P}_{\mathbf{\Gamma}})(\mathbf{W})-(\nicefrac{\chi}{2})(\mathcal{S}\circ\mathcal{P}_{\mathbf{\Gamma}}^{\perp})(\mathbf{W})+(\nicefrac{3\tau\eta}{2})\|\boldsymbol{u}\|_{2}-(\nicefrac{\tau}{2})\sum_{i=o+1}^{n}\omega_{i}\boldsymbol{u}_{i}^{\sharp}.\]
**Lemma 19**.: _Define \(\gamma_{\mathcal{R}}:=\lambda/\tau\) and \(\gamma_{\mathcal{S}}:=\chi/\tau\) and let \(c_{0},\eta>0\). Then, for any \([\mathbf{B},\mathbf{\Gamma}]\) and \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\in\mathcal{C}_{\mathbf{B},\mathbf{ \Gamma}}(c_{0},\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\eta)\),_
\[\triangle_{\lambda,\chi,\tau}(\mathbf{V},\mathbf{W},\boldsymbol{u} |\mathbf{B},\mathbf{\Gamma}) \leq(\nicefrac{{3\!\!\!/}}{{2}})r_{\lambda,\chi,\tau\eta,c_{0}}( \mathbf{V},\mathbf{W}|\mathbf{B},\mathbf{\Gamma})\cdot\|[\mathbf{V},\mathbf{W}, \boldsymbol{u}]\|_{\Pi}, \tag{31}\] \[\lambda\mathcal{R}(\mathbf{V})+\chi\mathcal{S}(\mathbf{W})+\tau\| \boldsymbol{u}\|_{\sharp} \leq 2(c_{0}+1)\cdot r_{\lambda,\chi,\tau\eta,c_{0}}(\mathbf{V},\mathbf{W}| \mathbf{B},\mathbf{\Gamma})\cdot\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}. \tag{32}\]
The previous lemmas entail the next proposition.
**Proposition 3**.: _Suppose the conditions (i)-(ii) of Theorem 16 hold and, additionally,_
* \(\lambda\geq 4[(\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2})],\chi\geq 4[(\sigma \mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{*})]\) _and_ \(\tau\geq 4[(\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4})]\)_._
* \(2\mathsf{f}_{1}\leq\sigma\mathsf{d}_{1}\)_._
_For any \(D\geq 0\) and \([\mathbf{B},\mathbf{\Gamma}]\) satisfying the constraints (20), (21), (22) and (23),_
\[(1/2)\big(\lambda\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})+\chi\mathcal{S}(\boldsymbol{\Delta}_{\mathbf{\Gamma}})+\tau\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp}\big)+\|\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}^{2}\leq D^{2}+\blacktriangle_{2}(\mathsf{f}_{1},r), \tag{33}\]
_where \(r:=r_{\lambda,\chi,\tau\Omega,3}(\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{ \Delta}_{\mathbf{\Gamma}}|\mathbf{B},\mathbf{\Gamma})\). Additionally,_
\[\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\leq\big(D^{2}/(\sigma\mathsf{d}_{1})\big)\vee\big[(2\sqrt{2}/\mathsf{d}_{1})D+\bigtriangleup_{2}(\mathsf{f}_{1},r)\big]. \tag{34}\]
Using Proposition 2, we can show the bound in (33) is a near-optimal oracle inequality for the triplet \(\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\). Still, it is suboptimal for \(\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\). Next, we show that the bounds on \(\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\) implied
by Proposition 3 are enough to obtain the near-optimal rate for \(\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\). First, we prove the following lemma -- an easy consequence of the first-order condition for fixed \(\boldsymbol{\theta}\equiv\hat{\boldsymbol{\theta}}\).
**Lemma 20**.: _For all \([\mathbf{B},\mathbf{\Gamma}]\in(\mathds{R}^{p})^{2}\) such that \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\),_
\[\langle\mathbf{\Delta}^{(n)},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B }}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle \leq\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{ X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle\] \[+\lambda(\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}}))+ \chi(\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})). \tag{35}\]
In the case of label contamination and/or additive matrix decomposition, a key difference from the standard "Lasso proof" is that \(\mathrm{MP}_{\mathcal{R},0,0}(\mathrm{f}_{1},\mathrm{f}_{2},0,0)\) does not give a sufficient upper bound for (35). Indeed, the noise \(\mathbf{\xi}^{(n)}\) is shifted by \(-\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\) and the multiplier process depends on the decomposed error \(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}}\). Suppose \(\mathrm{MP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathrm{f}_{1},\mathrm{f}_{ 2},\mathrm{f}_{3},\mathrm{f}_{4})\) and \(\mathrm{IP}_{\mathcal{R},\mathcal{S},\mathcal{Q}}(\mathrm{b}_{1},\mathrm{b}_{ 2},\mathrm{b}_{3},\mathrm{b}_{4})\) hold. The next lemma states that, if the nuisance error \(\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\) is "not too large" compared to the noise, then the "perturbed multiplier process" \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}}]\mapsto\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}}, \mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle\) in (35) can be effectively upper bounded. For the lower bound, we assume \(\mathrm{ARSC}\) and the low-spikeness condition hold.
**Lemma 21**.: _Suppose conditions (i)-(iii) of Theorem 16 hold. For some \(\mathsf{f}_{*}\geq 0\), let \([\mathbf{B},\mathbf{\Gamma}]\) be such that \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\) and \(|\langle\!\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle_{\Pi}|\leq\mathsf{f}_{*}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})\). Define the quantities_
\[\hat{\boldsymbol{\Delta}}:=\left((\sigma\mathsf{d}_{2})\vee\left(2\mathsf{f}_{2}+2\mathsf{b}_{2}\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}\right)\right)\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})+\left((\sigma\mathsf{d}_{3})\vee\left(2\mathsf{f}_{3}+2\mathsf{f}_{*}+2\mathsf{b}_{3}\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}\right)\right)\mathcal{S}(\boldsymbol{\Delta}_{\mathbf{\Gamma}}),\] \[\hat{\boldsymbol{\nabla}}:=\lambda\big(\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big)+\chi\big(\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})\big).\]
_Then_
\[\|\boldsymbol{\Delta}^{(n)}\|_{2}^{2}+\big(\mathsf{d}_{1}\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}-(\nicefrac{\mathsf{a}}{\sigma})\big)_{+}^{2}\leq\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}+\left(2\mathsf{f}_{1}+2\mathsf{b}_{1}\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}+2\mathsf{b}_{4}\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp}\right)\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}+\hat{\boldsymbol{\Delta}}+2\hat{\boldsymbol{\nabla}}. \tag{36}\]
With no contamination (\(\mathbf{\Delta}^{\hat{\mathbf{\theta}}}=\mathbf{0}\), \(\tau=0\)), Lemmas 18 and 21 coincide. In that case, Proposition 3 implies the optimal bound for \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\). Lemma 21 improves upon Lemma 18 in the case of label contamination. Next, we discuss how this lemma is used in the proof of Theorem 16. The complete proof is presented in the supplement.
Lemma 21 gives a recursion on the parameter error \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\) -- instead of \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}]\) as in Lemma 18. Instead of a parameter estimate, \(\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\) is seen as a nuisance error perturbing the noise levels \((\mathsf{f}_{1},\mathsf{f}_{2},\mathsf{f}_{3})\) and \((\sigma\mathsf{d}_{1},\sigma\mathsf{d}_{2},\sigma\mathsf{d}_{3})\). As expected, this perturbation affects the tuning of the hyper-parameters \((\lambda,\chi)\) and the corresponding rate for \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\). For (36) to be meaningful, \(\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\) must be small enough compared to the noise. Precisely, the auxiliary Proposition 3 implies that \(\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\leq\mathsf{c}_{*}\sigma\) and \(\tau\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\leq\mathsf{c}_{*}^{2}\sigma^{2}\) if we assume (24)-(25) hold. Using Proposition 2, we can show that these conditions hold with high probability -- including misspecified models satisfying \((1/n)\|\mathfrak{X}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}\|_{2}^{2}\lesssim\sigma^{2}\).
_Remark 11_ (The relevance of \(\mathrm{IP}\)).: \(\mathrm{IP}\) over the triplet \([\mathbf{V},\mathbf{W},\mathbf{u}]\) is a major tool in the proof of Lemma 21. Examining this lemma, we see that the thresholds for \((\lambda,\chi)\) are perturbed, respectively, by terms of order \(\mathsf{b}_{2}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\) and \(\mathsf{b}_{3}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\), and the coefficient of \(\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\) in inequality (36) is perturbed by a term of order \(\mathsf{b}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{b}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\). The rate optimality of estimator (6) follows from these precise expressions and the bounds \(\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\leq\mathsf{c}_{*}\sigma\) and \(\tau\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\leq\mathsf{c}_{*}^{2}\sigma^{2}\). Indeed, Proposition 2 reveals that the constants \((\mathsf{b}_{1},\mathsf{b}_{2},\mathsf{b}_{3},\mathsf{b}_{4})\) are sharp -- it can be shown that the mere use of dual-norm inequalities instead of \(\mathrm{IP}\) does not entail the optimal rate for \(\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\).
_Remark 12_ (Proofs in [30]).: [30] studies well-specified robust sparse regression with Huber's loss and Gaussian distributions. There are two "high-level ideas" used in [30] which we borrow. Their first idea is to identify \(\mathrm{IP}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(\mathsf{b}_{1},\mathsf{b}_{2},0, \mathsf{b}_{4})\) over the pair \([\mathbf{v},\mathbf{u}]\in\mathbb{R}^{p}\times\mathbb{R}^{n}\) as the sufficient design property to handle label contamination with Huber's loss -- in the case where the parameter is known to be sparse and the model is well-specified.
To prove this property they use Chevet's inequality for Gaussian processes. Their second idea is to treat \(\mathbf{\Delta}^{\hat{\theta}}\) as a nuisance parameter, using a "two-stage" proof. The first stage establishes the optimal bound for the pair \([\mathbf{\Delta}_{b^{*}},\mathbf{\Delta}^{\hat{\theta}}]\). The second establishes the optimal bound for \(\mathbf{\Delta}_{b^{*}}\), using the nuisance bound for \(\mathbf{\Delta}^{\hat{\theta}}\).
This work is concerned with a significantly more general model: misspecified RTRMD. As such, the arguments in the proof of Theorem 16 differ substantially, both in technical details and in the structural design properties. On a fundamental level, we give three contributions when compared to the analysis in [30]. The first is to identify the new design property \(\mathrm{PP}_{\mathcal{R},\mathcal{S}}(\mathsf{c}_{1},\mathsf{c}_{2},\mathsf{c }_{3},\mathsf{c}_{4})\) over the pair \([\mathbf{V},\mathbf{W}]\in(\mathds{R}^{p})^{2}\) as the sufficient property to handle additive matrix decomposition in trace regression.
The second is to identify the more general version \(\mathrm{IP}_{\mathcal{R},\mathcal{S},\|\cdot\|_{2}}(\mathsf{b}_{1},\mathsf{b }_{2},\mathsf{b}_{3},\mathsf{b}_{4})\) over the triplet \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathds{R}^{p})^{2}\times\mathbb{R}^{n}\), and its relation with \(\mathrm{PP}\), as the sufficient design properties to handle, simultaneously, label contamination and additive matrix decomposition. For this, we fundamentally need to use Chevet's inequality and the new product process inequality of Theorem 8. We are not aware of a similar application of product processes in high-dimensional estimation.
The proof in [30] handles the design-noise interaction in the simplest way: invoking \(\mathrm{MP}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(0,\mathsf{f}_{2},0,\mathsf{f}_{4})\) over the pair \([\mathbf{v},\mathbf{u}]\) via the dual-norm inequality. Consequently, they do not attain the subgaussian rate. Our third contribution is to derive a new multiplier process inequality (Theorem 7) and obtain the general version \(\mathrm{MP}_{\mathcal{R},\mathcal{S},\|\cdot\|_{2}}(\mathsf{f}_{1},\mathsf{f}_{ 2},\mathsf{f}_{3},\mathsf{f}_{4})\) over the triplet \([\mathbf{V},\mathbf{W},\mathbf{u}]\). It leads to sharp constants in terms of \(\delta\) when the noise is feature-dependent -- recall the discussion in Section 10 and comparison with [6]. The \(\mathrm{MP}\) property with constant \(\mathsf{f}_{1}\neq 0\) is fundamental to achieve the optimal subgaussian rate in \(\delta\) with \(\delta\)-adaptive estimators.
## 12 Second general theorem
In what follows, \(\mathcal{R}\) is a decomposable norm in \(\mathds{R}^{p}\). The main result of this section is Theorem 22. To state it, we fix the positive constants \(\{\mathsf{a}_{i}\}\), \(\{\mathsf{b}_{i}\}\), \(\{\mathsf{c}_{i}\}\), \(\{\mathsf{d}_{i}\}\) and \(\{\mathsf{f}_{i}\}\) in Definition 9. Theorem 4 will follow from Theorem 22 and Proposition 2. Before stating Theorem 22, we simplify some of the notation in Definition 15. Given \(\mathbf{B},\mathbf{V}\in\mathds{R}^{p}\) and \(\alpha,c_{0}>0\), we define \(r_{\lambda,\alpha,c_{0}}(\mathbf{V}|\mathbf{B}):=\{\lambda^{2}R_{\mathcal{R},c _{0}}^{2}(\mathbf{V}|\mathbf{B})+\alpha^{2}\}^{1/2}\). Given \(\mathsf{f},\mathsf{c}>0\), we let \(\blacktriangle_{1}(\mathsf{f},\mathsf{c}):=\left(\nicefrac{{2}}{{\mathsf{d}_{1}} }\right)\left(2\mathsf{f}+\mathsf{c}\right)^{2}\) and \(\bigtriangleup_{1}(\mathsf{f},\mathsf{c}):=\left(\nicefrac{{4}}{{\mathsf{d}_{1}} }\right)\left(2\mathsf{f}+\mathsf{c}\right).\)
**Theorem 22** (\(q=1\) & no matrix decomposition).: _Grant Assumption 1 and model (1). Consider the solution \(\hat{\mathbf{B}}\) of (5) with \(q=1\) and hyper-parameters \(\lambda,\tau>0\). Let \(\hat{\sigma}:=\|\mathbf{\xi}^{(n)}\|_{2}\) and constants \(\mathsf{c}_{\star}>0\) and \(\mathsf{c}_{n}\geq 0\) such that \(\mathsf{c}_{\star}+\mathsf{c}_{n}\in[0,1/4)\). Assume that:_
1. \((\mathfrak{X},\mathbf{\xi})\) _satisfies the_ \(\operatorname{MP}_{\mathcal{R},0,\|\cdot\|_{2}}(\mathsf{f}_{1},\mathsf{f}_{2},0, \mathsf{f}_{4})\)_._
2. \(\mathfrak{X}\) _satisfies the_ \(\mathrm{ARSC}_{\mathcal{R},0,\|\cdot\|_{2}}(\mathsf{d}_{1},\mathsf{d}_{2},0, \mathsf{d}_{4})\)_._
3. \(\mathfrak{X}\) _satisfies the_ \(\mathrm{IP}_{\mathcal{R},0,\|\cdot\|_{2}}(\mathsf{b}_{1},\mathsf{b}_{2},0, \mathsf{b}_{4})\)_._
4. _The hyper-parameters_ \((\lambda,\tau)\) _satisfy_ \[(1-4(\mathsf{c}_{n}+\mathsf{c}_{\star}))\hat{\sigma}\lambda \geq 4\left[\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{d}_{2}}{ \mathsf{d}_{1}}+\left(\mathsf{b}_{2}+\frac{\mathsf{b}_{1}\mathsf{d}_{2}}{ \mathsf{d}_{1}}\right)\mathsf{c}_{\star}\hat{\sigma}+2\big(\mathsf{b}_{1}/(\tau\mathsf{d}_{1})\big)\mathsf{c}_{\star}^{2}\hat{\sigma}^{2}\right],\] \[(1-4\mathsf{c}_{n})\hat{\sigma}\tau \geq 4\left[\mathsf{f}_{4}+\frac{\mathsf{f}_{1}\mathsf{d}_{4}}{ \mathsf{d}_{1}}\right].\]
5. _For_ \(\hat{\mathsf{f}}_{1}:=\mathsf{f}_{1}+\mathsf{b}_{1}(\mathsf{c}_{\star}\sigma)+2\big(\mathsf{b}_{1}/(\hat{\sigma}\tau)\big)(\mathsf{c}_{\star}^{2}\sigma^{2})\) _one has_ \(56[(\hat{\mathsf{f}}_{1}/\mathsf{d}_{1})+\hat{\sigma}\mathsf{c}_{n}]\leq 3\hat{\sigma}\)_._
_Take any \(D\geq 0\) and any \(\mathbf{B}\) satisfying the constraints_
\[\|\mathfrak{X}^{(n)}(\mathbf{B})-\mathbf{f}^{(n)}\|_{2}=D\leq\hat{\sigma}\mathsf{c}_{n}, \tag{37}\] \[r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(\mathbf{\Delta}_{\mathbf{B}}|\mathbf{B})\leq\left\{\frac{1}{112}\bigvee\frac{1}{28[(\mathsf{d}_{2}/\lambda)\vee(4/(\mathsf{d}_{1}\tau))]}\right\}\hat{\sigma}\mathsf{d}_{1}, \tag{38}\] \[D^{2}+\blacktriangle_{1}(\mathsf{f}_{1},\mathsf{f}_{2})\leq\mathsf{c}_{\star}^{2}\hat{\sigma}^{2}, \tag{39}\]
\[(\nicefrac{4D}{\mathsf{d}_{1}})+(\nicefrac{2}{\mathsf{d}_{1}})\bigtriangleup_{1}(\mathsf{f}_{1},R)\leq\mathsf{c}_{\star}\hat{\sigma}, \tag{40}\]
_where \(R:=(\hat{\sigma}\mathsf{c}_{n})\vee(3r)\)._
_Define the quantities \(\hat{r}:=r_{\hat{\sigma}\lambda,0,6}(\mathbf{\Delta}_{\mathbf{B}}|\mathbf{B})\), \(\hat{R}:=(\hat{\sigma}\mathsf{c}_{n})\vee(3\hat{r})\) and_
\[F:=\mathsf{f}_{1}+\mathsf{b}_{1}\left[(\nicefrac{4D}{\mathsf{d}_{1}})+(\nicefrac{2}{\mathsf{d}_{1}})\bigtriangleup_{1}(\mathsf{f}_{1},R)\right]+2\big(\mathsf{b}_{1}/(\hat{\sigma}\tau)\big)\left[D^{2}+\blacktriangle_{1}(\mathsf{f}_{1},R)\right].\]
_Then_
\[(\nicefrac{\hat{\sigma}\lambda}{2})\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\|\mathfrak{X}^{(n)}(\hat{\mathbf{B}})-\mathbf{f}^{(n)}\|_{2}^{2}\leq D^{2}+\blacktriangle_{1}(F,\hat{R}), \tag{41}\] \[\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}\leq 2D+\bigtriangleup_{1}(F,\hat{R}), \tag{42}\] \[\|\mathbf{\Delta}_{\mathbf{B}}\|_{\Pi}\leq(\nicefrac{1}{\mathsf{d}_{1}})\big[2D+\bigtriangleup_{1}(F,\hat{R})\big]+\big(2\mathsf{d}_{2}/(\mathsf{d}_{1}\hat{\sigma}\lambda)\big)\big[D^{2}+\blacktriangle_{1}(F,\hat{R})\big]. \tag{43}\]
Theorem 22 has a similar format to Theorem 16 but with a few differences. The first obvious one is that constraints (21)-(22) are excluded. Indeed, Theorem 22 is not concerned with matrix decomposition (\(\mathsf{a}=\infty\), \({\rm f}_{3}={\rm d}_{3}={\rm b}_{3}={\rm f}_{*}=0\), \(\mathcal{S}\equiv 0\)). The main difference is that \((\hat{\sigma}\lambda,\hat{\sigma}\tau)\) in condition (iv) of Theorem 22 replaces \((\lambda,\tau)\) in condition (iv) of Theorem 16. By Bernstein's inequality, \(\sigma/2\leq\hat{\sigma}\leq 3\sigma/2\) with high probability -- assuming \(n\gtrsim\sigma^{2}(1+\log(1/\delta))\). This justifies why the tuning \((\lambda,\tau)\) in Theorem 4 is adaptive to \(\sigma\). Another difference is that the constraints (37) and (39)-(40) require the constants \((\mathsf{c}_{n},\mathsf{c}_{*})\) to be \(\mathcal{O}(1)r_{n,d_{\rm eff},\delta}(\rho,\mu_{*})\) -- recall (12). This justifies why the approximation error in Theorem 4 must be \(\mathcal{O}(\sigma)r_{n,d_{\rm eff},\delta}(\rho,\mu_{*})\) instead of \(\mathcal{O}(\sigma)\) as in Theorem 3.
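The concentration claim for \(\hat{\sigma}\) made above can be checked with a small Monte Carlo in R (ours), under the assumption that \(\boldsymbol{\xi}^{(n)}=\boldsymbol{\xi}/\sqrt{n}\), so that \(\hat{\sigma}=\|\boldsymbol{\xi}\|_{2}/\sqrt{n}\).

```r
set.seed(3)
n <- 1000; sigma <- 2
sigma_hat <- replicate(1e4, sqrt(sum(rnorm(n, sd = sigma)^2) / n))
mean(sigma_hat >= sigma / 2 & sigma_hat <= 3 * sigma / 2)   # essentially 1
quantile(sigma_hat, c(0.001, 0.999))                        # tightly around sigma = 2
```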
From the previous discussion, it is not surprising that the road maps of the proofs of Theorems 22 and 4 are, respectively, adaptations of the proofs of Theorems 16 and 3. The details are left to Section 34 in the supplement. For completeness, we present a brief description of the proof of Theorem 22. To facilitate the comparison, we use the proof of Theorem 16 in Section 11 as a reference and give pointers to the changes in the supplement.
1. Lemma 32 in Section 34 is a variation of Lemma 17. Inequality (91) in Lemma 32 is similar to (28) but with the addition of the error term (44), which involves the misspecification error \(\boldsymbol{\varepsilon}_{\mathbf{B}}:=\mathfrak{X}(\mathbf{B})-\mathbf{f}\). This term appears because we use the adaptive loss \(\mathcal{L}_{\tau\omega,1}\). To address it, Lemma 32 states the auxiliary inequality (90).
2. Lemma 33 in Section 34 is a variation of Lemma 18; its inequality (93) plays the role of (29).19 Examining Lemma 33, we note that the misspecification errors \(\lambda\|\boldsymbol{\varepsilon}_{\mathbf{B}}^{(n)}\|_{2}\) and \(\tau\|\boldsymbol{\varepsilon}_{\mathbf{B}}^{(n)}\|_{2}\) impact the choice of \((\lambda,\tau)\). This fact forces the constant \(\mathsf{c}_{n}\) to be of the order of the minimax rate. Like Lemma 32, Lemma 33 states the auxiliary inequality (92).
3. Lemma 33 entails Proposition 8 in Section 34, a variation of Proposition 3.
4. Lemma 34 in Section 34 is a variation of Lemma 20. Inequality (103) in Lemma 34 is analogous to (35) with the addition of the error term (44). To address it, Lemma 34 states the auxiliary inequality (102).
5. Finally, Lemma 35 in Section 34 is a variation of Lemma 21. The major difference is the addition of the term (45) in inequality (106) of Lemma 35.20 Consequently, the corruption error \(\lambda\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}\) and the misspecification error \(\lambda\|\boldsymbol{\varepsilon}_{\mathbf{B}}^{(n)}\|_{2}\) affect the choice of \(\lambda\). This is why the constants \((\mathsf{c}_{n},\mathsf{c}_{\star})\) need to be of the order of the minimax rate. To address (45), Lemma 35 states the auxiliary inequality (105).
Footnote 19: Because it handles matrix decomposition, inequality (29) states a recursion in the variable \(\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\). The presence of the error term (44) justifies why inequality (93) states a recursion in the variable \(\|\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}+\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\|_{2}\).
Footnote 20: Inequality (36) states a recursion in the variable \(\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\) while (106) states a recursion in the variable \(\|\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}})\|_{2}\).
The proof of Theorem 22 follows from Lemma 35 and Proposition 8. Proposition 8 is invoked to give precise bounds on the nuisance error \(\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\).
## 13 Simulation results
We report simulation results in R with synthetic data demonstrating excellent agreement between theory and practice. The code is in [https://github.com/philipthomp/Outlier-robust-regression](https://github.com/philipthomp/Outlier-robust-regression). In all regression problems we simulate \(\mathcal{N}(0,1)\) noise. For trace-regression (sparse, low-rank or matrix decomposition) the design is simulated with iid \(\mathcal{N}(0,1)\) entries. For robust matrix completion, the entries of the low-rank matrix are sampled uniformly -- in that case, we use the scaled design \(\sqrt{p}\mathbf{X}_{i}\) when computing the estimator. Our solvers use a batch version of a proximal gradient method on the separable variables \([\mathbf{B},\boldsymbol{\Gamma},\boldsymbol{\theta}]\).21
Footnote 21: The proximal maps of the Slope and \(\ell_{1}\) norms are computed with the function prox_sorted_L1() of the glmSLOPE package [1]. The (\(\ell_{\infty}\)-constrained) proximal map of the nuclear norm is computed via (\(\ell_{\infty}\)-constrained) soft-thresholding of the singular value decomposition.
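For concreteness, the sketch below spells out one batch proximal-gradient update of the kind described above. It is written from scratch in base R rather than taken from the repository, and it makes simplifying assumptions: a plain \(\ell_{1}\) prox for \(\boldsymbol{\theta}\) instead of the sorted-\(\ell_{1}\) prox, squared loss, a fixed step size, and no particular scaling of the design.

```r
soft <- function(x, t) sign(x) * pmax(abs(x) - t, 0)        # l1 soft-thresholding

prox_nuclear <- function(B, t, a = Inf) {                   # a = optional l_infty bound
  s <- svd(B)
  B_new <- s$u %*% diag(soft(s$d, t), nrow = length(s$d)) %*% t(s$v)
  pmin(pmax(B_new, -a), a)                                  # l_infty-constrained version
}

prox_grad_step <- function(X, y, B, theta, lambda, tau, step, a = Inf) {
  # X: n x d1 x d2 array of design matrices, y: length-n response vector
  n <- length(y)
  resid      <- apply(X, 1, function(Xi) sum(Xi * B)) + theta - y
  grad_B     <- apply(sweep(X, 1, resid, "*"), c(2, 3), sum) / n
  grad_theta <- resid / n
  list(B     = prox_nuclear(B - step * grad_B, step * lambda, a),
       theta = soft(theta - step * grad_theta, step * tau))
}
```

Iterating `prox_grad_step` until the updates stabilize gives a simple batch solver for this simplified objective.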
_Robust sparse linear regression._ In our simulation, \(p=100\), \(n=1000\). We simulate for 3 different sparsity indexes \(s=15,25,35\) over a grid of \(\epsilon\). Respectively, \(\boldsymbol{b}^{*}\) and \(\boldsymbol{\theta}^{*}\) are set with the first \(s\) and \(o\) entries equal to \(10\) and zero otherwise. We solve (7) taking \(A=10\) and \(\mathcal{R}\) to be the Slope norm with \(\bar{A}=10\). For each \(s\), the plot of the square root of the mean squared error (\(\sqrt{\mathtt{MSE}}\)) as a function of \(\epsilon\) (averaged over 100 repetitions) is shown in Figure 2(a). As predicted by Theorem 3, the plot fits a linear growth in \(\epsilon\) fairly well. Taking \(\mathcal{R}\) to be the Slope norm, we also compare Huber's regression (H) with the "Sorted Huber's loss" (S) in Definition 1 for \(s=25\). Fixing all model parameters, we increase the nonzero entries of \(\boldsymbol{b}^{*}\) and \(\boldsymbol{\theta}^{*}\) to \(50\). In Figure 1, we show the corresponding plots of the square root of the MSE as a function of \(\epsilon\) (averaged over 100 repetitions). We see that "S" clearly outperforms "H". The first \(25\) entries estimated by "H" and "S" fluctuated around \(40\) and \(48\), respectively.
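Below is a short R sketch (ours, not the repository code) of the data-generating process just described; the way the corruption \(\boldsymbol{\theta}^{*}\) enters the labels and the normalization of the reported error are assumptions on our part, since model (1) is not restated here.

```r
set.seed(4)
n <- 1000; p <- 100; s <- 25; eps <- 0.10; o <- floor(eps * n)
b_star     <- c(rep(10, s), rep(0, p - s))   # first s entries equal to 10
theta_star <- c(rep(10, o), rep(0, n - o))   # first o entries equal to 10
X <- matrix(rnorm(n * p), n, p)              # iid N(0,1) design
y <- as.vector(X %*% b_star) + theta_star + rnorm(n)   # additive per-label corruption (assumed)
# one natural reading of the reported error, given an estimate b_hat:
# sqrt_mse <- sqrt(mean((b_hat - b_star)^2))
```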
_Robust low-rank trace regression._ In our simulation, \(d_{1}=d_{2}=10\) and \(n=1000\). This model is simulated for 3 different rank values \(r=1,2,3\) over a grid of \(\epsilon\). The low-rank parameter is generated by randomly choosing the spaces of left and right singular vectors with all nonzero singular values equal to \(10\). The corruption vector \(\boldsymbol{\theta}^{*}\) is set to have the first \(o\) entries equal to \(10\) and zero otherwise. We solve (7) taking \(\mathcal{R}\) to be the nuclear norm and \(A=10\). For each \(r\), the plot of \(\sqrt{\mathtt{MSE}}\) as a function of \(\epsilon\) (averaged over 50 repetitions) is given in Figure 3(a). The plot fits a linear growth in \(\epsilon\) fairly well, as predicted by Theorem 3.
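The low-rank parameter described above can be generated as in the following hedged R sketch: random orthonormal left and right singular spaces with all nonzero singular values equal to 10.

```r
set.seed(5)
d1 <- 10; d2 <- 10; r <- 2
U <- qr.Q(qr(matrix(rnorm(d1 * d1), d1, d1)))[, 1:r, drop = FALSE]  # random orthonormal columns
V <- qr.Q(qr(matrix(rnorm(d2 * d2), d2, d2)))[, 1:r, drop = FALSE]
B_star <- U %*% diag(rep(10, r), nrow = r) %*% t(V)
round(svd(B_star)$d, 6)   # the first r singular values are 10, the rest are 0
```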
_Trace-regression with additive matrix decomposition._ We simulate with \(d_{1}=d_{2}=10\) and \(n=1000\) and no label contamination. The low-rank parameter is generated by randomly choosing the spaces of left and right singular vectors such that \(\|\mathbf{B}^{*}\|_{\infty}\leq\mathsf{a}^{*}/\sqrt{n}\) with \(\mathsf{a}^{*}=1\). The sparse parameter is simulated with non-zero entries equal to 10 at positions chosen uniformly at random. We solve (6) in two settings. First, this model is
Figure 2: Robust sparse linear regression: \(\sqrt{\mathtt{MSE}}\) versus \(\epsilon\) for different levels of sparsity.
simulated for 2 different sparsity values \(s=5,80\) over a grid of ranks \(r\). For each \(s\), the plot of MSE versus \(r\) (averaged over 20 repetitions) is given in Figure 4(a). Secondly, we simulate for rank \(r=5\) over a grid of sparsity levels \(s\). The plot of the MSE versus \(s\) (averaged over 20 repetitions) is given in Figure 4(b). Both plots fit fairly well a linear growth as predicted by Theorem 2.
_Robust matrix completion._ In our simulation, \(d_{1}=d_{2}=10\) and \(n=80\). We simulate for \(3\) different rank values \(r=1,5,10\) over a grid of \(\epsilon\). The low-rank parameter is generated by randomly choosing the spaces of left and right singular vectors such that \(\|\mathbf{B}^{*}\|_{\infty}\leq 1=:\mathsf{a}\). The corruption vector \(\mathbf{\theta}^{*}\) is set to have the first \(o\) entries equal to \(10\) and zero otherwise. We solve (17) taking \(A=10\). For each \(r\), the plot of MSE as a function of \(\epsilon\) (averaged over 100 repetitions) is given in Figure 3(b). The plot fits a linear growth in \(\epsilon\) fairly well, as predicted by Theorem 5.22 We also study how the estimation error varies with respect to the corruption level \(\|\mathbf{\theta}^{*}\|_{\infty}\). Fixing the previous model parameters and taking \(r=3\), we present in Table 1 the \(\sqrt{\text{MSE}}\) versus \(\epsilon\in[0.05,0.35]\) (averaged over 100 repetitions) for \(\|\mathbf{\theta}^{*}\|_{\infty}=10,100,1000\). For a precise comparison, the same data set is used for the three values of \(\|\mathbf{\theta}^{*}\|_{\infty}\) in each repetition. As predicted by Theorem 5, the resulting error is adaptive and robust with respect to the corruption level \(\|\mathbf{\theta}^{*}\|_{\infty}\).
Footnote 22: We have noted, however, that the log factor \(\log(C/\epsilon)\) in the MSE rate of the form \(\epsilon\mapsto\epsilon\log(C/\epsilon)\), as predicted in Theorem 5, is more noticeable near \(\epsilon=0\).
## 14 Discussion
We present theoretical and empirical improvements by resorting to "sorted" variations of the classical Huber loss. Instead of the \(\ell_{1}\) norm, the loss in Definition 1 is based on a variational formula regularized by the slope norm -- treating each label outlier magnitude individually. This use of the slope norm refines an existing
Figure 4: Trace regression with additive matrix decomposition: plot of MSE.
Figure 3: Robust trace regression & matrix completion: error versus \(\epsilon\).
robust loss and should be contrasted with [11, 6]. There, the slope norm is used as a refined regularization norm when estimating the sparse parameter.23 More generally, this work identifies three new design properties, \(\mathrm{PP}\), \(\mathrm{IP}\) and \(\mathrm{MP}\), which jointly entail that the robust estimator (4) is near-optimal in terms of dimension, \(\epsilon\) and \(\delta\) -- when dealing simultaneously with matrix decomposition, label contamination, feature-dependent noise and misspecification. These properties are based on new sharp concentration inequalities. We believe these could be useful elsewhere, e.g., in non-parametric least squares regression [52, 62] and compressive sensing theory [40]. We reemphasize that there seems to be no prior estimation theory for RTRMD -- with \(\mathrm{PP}\) being a new application of the product process inequality. Under the incoherence assumption, it would be interesting to investigate further when exact recovery is possible or how nonconvex optimization approaches behave on this general model [23].
Footnote 23: While the estimator (7) of \([\mathbf{b}^{*},\mathbf{\theta}^{*}]\) can be written as a slope estimator with an augmented design, this point of view is not enough to entail the optimal rate. For instance, \(\mathrm{IP}\) is crucially needed to entail optimality when estimating only \(\mathbf{b}^{*}\). See comments after Proposition 3.
## References
* [1] B. Adcock, A. Bao, J. Jakeman, and A. Narayan. Compressed sensing with sparse corruptions: Fault-tolerant sparse collocation approximations. _SIAM/ASA Journal on Uncertainty Quantification_, 6(4):1424-1453, 2018.
* [2] A. Agarwal, S. Negahban, and M. Wainwright. Noisy matrix decomposition via convex relaxation: optimal rates in high dimensions. _Ann. Statist._, 40(2):1171-1197, 2012.
* 2144, 2019.
* [4] Sivaraman Balakrishnan, Simon S. Du, Jerry Li, and Aarti Singh. Computationally efficient robust sparse estimation in high dimensions. In Satyen Kale and Ohad Shamir, editors, _Proceedings of the 2017 Conference on Learning Theory_, volume 65 of _Proceedings of Machine Learning Research_, pages 169-212, Amsterdam, Netherlands, 07-10 Jul 2017. PMLR.
* [5] W. Bednorz. Concentration via chaining method and its applications. _arxiv 1405.0676_, 2014.
* [6] Pierre C. Bellec, Guillaume Lecue, and Alexandre B. Tsybakov. Slope meets lasso: Improved oracle bounds and optimality. _Ann. Statist._, 46(6B):3603-3642, 12 2018.
* [7] A. Belloni, V. Chernozhukov, and L. Wang. Square-root lasso: pivotal recovery of sparse signals via conic programming. _Biometrika_, 98(4):791-806, 2011.
* [8] Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, and Purushottam Kar. Consistent robust regression. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 30, pages 2110-2119. Curran Associates, Inc., 2017.
* [9] Kush Bhatia, Prateek Jain, and Purushottam Kar. Robust regression via hard thresholding. In C. Cortes, N. Lawrence, D. Lee, M. Sugiyama, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 28, pages 721-729. Curran Associates, Inc., 2015.
* [10] P. J. Bickel, Y. Ritov, and A. B. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. _Ann. Statist._, 37(4):1705-1732, 2009.
* [11] Malgorzata Bogdan, Ewoout van den Berg, Chiara Sabatti, Weijie Su, and Emmanuel J. Candes. Slope--adaptive variable selection via convex optimization. _Ann. Appl. Stat._, 9(3):1103-1140, 09 2015.
* [12] T. Tony Cai and Wen-Xin Zhou. Matrix completion via max-norm constrained optimization. _Electron. J. Statist._, 10(1):1493-1525, 2016.
* [13] E. Candes and P. A. Randall. Highly robust error correction by convex programming. _IEEE Trans. Inform. Theory_, 54(7):2829-2840, 2008.
* [14] Emmanuel J. Candes, Xiaodong Li, Yi Ma, and John Wright. Robust principal component analysis? _J. ACM_, 58(3), June 2011.
* [15] Emmanuel Candes and Benjamin Recht. Exact matrix completion via convex optimization. _Found. Comput. Math._, 9(717), 2009.
* [16] V. Chandrasekaran, S. Sanghavi, Pablo A. Parrilo, and A. S Willsky. Rank-sparsity incoherence for matrix decomposition. _SIAM J. Optim._, 21(2):572-596, 2011.
* [17] Mengjie Chen, Chao Gao, and Zhao Ren. A general decision theory for huber's \(\epsilon\)-contamination model. _Electron. J. Statist._, 10(2):3752-3774, 2016.
* [18] Mengjie Chen, Chao Gao, and Zhao Ren. Robust covariance and scatter matrix estimation under huber's contamination model. _Ann. Statist._, 46(5):1932-1960, 10 2018.
* [19] Y. Chen, A. Jalali, S. Sanghavi, and C. Caramanis. Low-rank matrix recovery from errors and erasures. _IEEE Transactions on Information Theory_, 59(7):4324-4337, 2013.
* [20] Yudong Chen, Constantine Caramanis, and Shie Mannor. Robust sparse regression under adversarial corruption. In Sanjoy Dasgupta and David McAllester, editors, _Proceedings of the 30th International Conference on Machine Learning_, volume 28 of _Proceedings of Machine Learning Research_, pages 774-782, Atlanta, Georgia, USA, 17-19 Jun 2013. PMLR.
* [21] Yudong Chen, Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust matrix completion and corrupted columns. In Lise Getoor and Tobias Scheffer, editors, _Proceedings of the 28th International Conference on Machine Learning (ICML-11)_, ICML '11, pages 873-880, New York, NY, USA, June 2011. ACM.
* [22] Yuxin Chen, Yuejie Chi, Jianqing Fan, Cong Ma, and Yuling Yan. Noisy matrix completion: Understanding statistical guarantees for convex relaxation via nonconvex optimization. _SIAM Journal on Optimization_, 30(4):3098-3121, 2020.
* 2971, 2021.
* [24] Yu Cheng, Ilias Diakonikolas, and Rong Ge. High-dimensional robust mean estimation in nearly-linear time. In _Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms_, SODA '19, page 2755-2771, USA, 2019. Society for Industrial and Applied Mathematics.
* [25] Y. Cherapanamjeri, E. Aras, N. Tipuraneni, M.I. Jordan, N. Flammarion, and PL. Bartlett. Optimal robust linear regression in nearly linear time. _arxiv 2007.08137_, 2020.
* [26] G. Chinot, G. Lecue, and M. Lerasle. Robust statistical learning with Lipschitz and convex loss functions. _Probab. Theory Relat. Fields_, 176(3):897-940, 2020.
* [27] Geoffrey Chinot. Erm and rrm are optimal estimators for regression problems when malicious outliers corrupt the labels. _Electron. J. Statist._, 14(2):3563-3605, 2020.
* [28] Arnak Dalalyan and Yin Chen. Fused sparsity and robust estimation for linear models with unknown variance. In F. Pereira, C. J. C. Burges, L. Bottou, and K. Q. Weinberger, editors, _Advances in Neural Information Processing Systems_, volume 25, pages 1259-1267. Curran Associates, Inc., 2012.
* [29] Arnak Dalalyan and Renaud Keriven. L1-penalized robust estimation for a class of inverse problems arising in multiview geometry. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, editors, _Advances in Neural Information Processing Systems_, volume 22, pages 441-449. Curran Associates, Inc., 2009.
* [30] Arnak Dalalyan and Philip Thompson. Outlier-robust estimation of a sparse linear model using \(\ell_{1}\)-penalized Huber's M-estimator. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32, pages 13189-13198. Curran Associates, Inc., 2019.
* 1219, 2022.
* [32] Jules Depersin. A spectral algorithm for robust regression with subgaussian rates. _arxiv 2007.06072_, 2020.
* 536, 2022.
* 766, 2018.
* [35] I. Diakonikolas, G. Kamath, D. M. Kane, J. Li, A. Moitra, and A. Stewart. Robust estimators in high dimensions without the computational intractability. In _2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)_, pages 655-664, 2016.
* [36] I. Diakonikolas and D. Kane. Recent advances in algorithmic high-dimensional robust statistics. _arxiv 1911.05911_, 2019.
* [37] Ilias Diakonikolas, Gautam Kamath, Daniel Kane, Jerry Li, Jacob Steinhardt, and Alistair Stewart. Sever: A robust meta-algorithm for stochastic optimization. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, _Proceedings of the 36th International Conference on Machine Learning_, volume 97 of _Proceedings of Machine Learning Research_, pages 1596-1606, Long Beach, California, USA, 2019. PMLR.
* [38] Ilias Diakonikolas, Sushrut Karmalkar, Jong Ho Park, and Christos Tzamos. Distribution-independent regression for generalized linear models with oblivious corruptions. In _Proceedings of Thirty Sixth Conference on Learning Theory_, pages 5453-5475, 2023.
* [39] Ilias Diakonikolas, Weilhao Kong, and Alistair Stewart. Efficient algorithms and lower bounds for robust linear regression. In _Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms_, SODA '19, page 2745-2754, USA, 2019. Society for Industrial and Applied Mathematics.
* [40] Sjoerd Dirksen. Tail bounds via generic chaining. _Electron. J. Probab._, 20:1-29, 2015.
* [41] Yihe Dong, Samuel Hopkins, and Jerry Li. Quantum entropy scoring for fast robust mean estimation and improved outlier detection. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alche-Buc, E. Fox, and R. Garnett, editors, _Advances in Neural Information Processing Systems_, volume 32, pages 6067-6077. Curran Associates, Inc., 2019.
* [42] D. Donoho and A. Montanari. High dimensional robust m-estimation: asymptotic variance via approximate message passing. _Probab. Theory Relat. Fields_, 166:935-969, 2016.
* [43] Andreas Elsener and Sara van de Geer. Robust low-rank matrix estimation. _Annals of Statistics_, 46(6B):3481-3509, 2018.
* 1266, 2021.
* [45] M. Fazel. Matrix rank minimization with applications. _PhD thesis, Stanford University_, 2002.
* [46] R. Foygel and L. Mackey. Corrupted sensing: Novel guarantees for separating structured signals. _IEEE Transactions on Information Theory_, 60(2):1223-1247, 2014.
* [47] S. Gaiffas and G. Lecue. Sharp oracle inequalities for high-dimensional matrix prediction. _IEEE Transactions on Information Theory_, 57(10):6942-6957, 2011.
* [48] C. Gao and J. Lafferty. Model repair: Robust recovery of over-parameterized statistical models. _arxiv 2005.09912_, 2020.
* [49] Chao Gao. Robust regression via multivariate regression depth. _Bernoulli_, 26(2):1139-1170, 05 2020.
* [50] Stephane Gaiffas and Olga Klopp. High dimensional matrix estimation with unknown variance of the noise. _Statistica Sinica_, 27(1):115-145, 2017.
* [51] F. Hampel, E. Ronchetti, P. Rousseeuw, and W. Stahel. _Robust statistics: the approach based on influence functions_. Wiley Series in Probability and Statistics. Wiley, 2011.
* 2319, 2019.
* [53] Trevor Hastie, Robert Tibshirani, and Martin Wainwright. _Statistical Learning with Sparsity: The Lasso and Generalizations_. CRC Press, 2015.
* [54] Daniel Hsu, Sham M Kakade, and Tong Zhang. Robust matrix decomposition with sparse corruptions. _IEEE Transactions on Information Theory_, 57:7221-7234, 2011.
* [55] Peter J. Huber. Robust estimation of a location parameter. _Ann. Math. Statist._, 35(1):73-101, 1964.
* [56] Peter J. Huber and Elvezio M. Ronchetti. _Robust statistics_. Wiley Series in Probability and Statistics. Wiley, 2011.
* [57] S. Karmalkar and E. Price. Compressed sensing with adversarial sparse noise via l1 regression. _arxiv 1809.08055_, 2018.
* [58] O. Klopp, K. Lounici, and A.B. Tsybakov. Robust matrix completion. _Probab. Theory Relat. Fields_, 169(523-564), 2017.
* [59] Olga Klopp. Rank penalized estimators for high-dimensional matrices. _Electron. J. Statist._, 5:1161-1183, 2011.
* [60] Olga Klopp. Noisy low-rank matrix completion with general sampling distribution. _Bernoulli_, 20(1):282-303, 02 2014.
* [61] Vladimir Koltchinskii, Karim Lounici, and Alexandre B. Tsybakov. Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion. _Ann. Statist._, 39(5):2302-2329, 10 2011.
* 302, 2022.
* [63] K. A. Lai, A. B. Rao, and S. Vempala. Agnostic estimation of mean and covariance. In _2016 IEEE 57th Annual Symposium on Foundations of Computer Science (FOCS)_, pages 665-674, 2016.
* [64] J. N. Laska, M. A. Davenport, and R. G. Baraniuk. Exact signal recovery from sparsely corrupted measurements through the pursuit of justice. In _2009 Conference Record of the Forty-Third Asilomar Conference on Signals, Systems and Computers_, pages 1556-1560, 2009.
* 641, 2018.
* [66] Yoonkyung Lee, Steven N. MacEachern, and Yoonsuh Jung. Regularization of case-specific parameters for robustness and efficiency. _Statist. Sci._, 27(3):350-372, 08 2012.
* [67] Xiaodong Li. Compressed sensing and matrix completion with constant proportion of corruptions. _Constr. Approx._, 37:73-99, 2013.
* [68] P. Loh and M. J. Wainwright. Corrupted and missing predictors: Minimax bounds for high-dimensional linear regression. In _2012 IEEE International Symposium on Information Theory Proceedings_, pages 2601-2605, 2012.
* [69] Po-Ling Loh and Martin J. Wainwright. High-dimensional regression with noisy and missing data: Provable guarantees with nonconvexity. _Ann. Statist._, 40(3):1637-1664, 06 2012.
* [70] Cong Ma, Kaizheng Wang, Yuejie Chi, and Yuxin Chen. Implicit regularization in nonconvex statistical estimation: Gradient descent converges linearly for phase retrieval, matrix completion, and blind deconvolution. _Foundations of Computational Mathematics_, 20(3):451-632, 2020.
* [71] R. A. Maronna, D. R. Martin, and V. J. Yohai. _Robust Statistics: Theory and Methods_. Wiley Series in Probability and Statistics. Wiley, 2006.
- 1160, 2011.
* [73] Shahar Mendelson. Learning without concentration. In Maria Florina Balcan, Vitaly Feldman, and Csaba Szepesvari, editors, _Proceedings of The 27th Conference on Learning Theory_, volume 35 of _Proceedings of Machine Learning Research_, pages 25-39. PMLR, 13-15 Jun 2014.
* 3680, 2016. In Memoriam: Evarist Gine.
* [75] Shahar Mendelson, Alain Pajor, and Nicole Tomczak-Jaegermann. Reconstruction and subgaussian operators in asymptotic geometric analysis. _Geometric and Functional Analysis_, 17(4):1248-1282, 2007.
* 2903, 2018.
* [77] Bhaskar Mukhoty, Govind Gopakumar, Prateek Jain, and Purushottam Kar. Globally-convergent iteratively reweighted least squares for robust regression problems. In Kamalika Chaudhuri and Masashi Sugiyama, editors, _Proceedings of Machine Learning Research_, volume 89 of _Proceedings of Machine Learning Research_, pages 313-322. PMLR, 16-18 Apr 2019.
* [78] Sahand Negahban and Martin J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. _Ann. Statist._, 39(2):1069-1097, 04 2011.
* [79] Sahand Negahban and Martin J. Wainwright. Restricted strong convexity and weighted matrix completion: Optimal bounds with noise. _Journal of Machine Learning Research_, 13(53):1665-1697, 2012.
* [80] Sahand N. Negahban, Pradeep Ravikumar, Martin J. Wainwright, and Bin Yu. A unified framework for high-dimensional analysis of \(m\)-estimators with decomposable regularizers. _Statist. Sci._, 27(4):538-557, 11 2012.
* [81] N. H. Nguyen and T. D. Tran. Exact recoverability from dense corrupted observations via \(\ell_{1}\)-minimization. _IEEE Transactions on Information Theory_, 59(4):2017-2035, 2013.
* [82] N. H. Nguyen and T. D. Tran. Robust lasso with missing and grossly corrupted observations. _IEEE Trans. Inform. Theory,_ 59(4):2036-2058, 2013.
* [83] Roberto I. Oliveira, Zoraida E. Rico, and Philip Thompson. A spectral least-squares-type method for heavy-tailed corrupted regression with unknown covariance & heterogeneous noise. _arxiv 2209.02856_, 2022.
* [84] A. Pensia, V. Jog, and P.-L. Loh. Robust regression with covariate filtering: Heavy tails and adversarial contamination. _arxiv 2009.12976_, 2020.
* [85] Scott Pesme and Nicolas Flammarion. Online robust regression via SGD on the l1 loss. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin, editors, _Advances in Neural Information Processing Systems_, volume 33, pages 2540-2552. Curran Associates, Inc., 2020.
* [86] Adarsh Prasad, Arun Sai Suggala, Sivaraman Balakrishnan, and Pradeep Ravikumar. Robust estimation via robust gradient estimation. _Journal of the Royal Statistical Society: Series B (Statistical Methodology)_, 82(3):601-627, 2020.
* [87] B. Recht, W. Xu, and B. Hassibi. Null space conditions and thresholds for rank minimization. _Mathematical Programming_, 127:175-202, 2011.
* [88] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. _SIAM Review_, 52(3):471-501, 2010.
* [89] Angelika Rohde and Alexandre B. Tsybakov. Estimation of high-dimensional low-rank matrices. _Ann. Statist._, 39(2):887-930, 04 2011.
* [90] S. Sardy, P. Tseng, and A. Bruce. Robust wavelet denoising. _IEEE Transactions on Signal Processing_, 49(6):1146-1152, Jun 2001.
* [91] Yiyuan She and Art B. Owen. Outlier detection using nonconvex penalized regression. _Journal of the American Statistical Association_, 106(494):626-639, 2011.
* [92] Yinan Shen, Jingyang Li, Jian-Feng Cai, and Dong Xia. Computationally efficient and statistically optimal robust high-dimensional linear regression. _arxiv 2305.06199_, 2023.
* [93] N. Srebro. Learning with matrix factorizations. _PhD Thesis, MIT_, 2004.
* [94] Arun Sai Suggala, Kush Bhatia, Pradeep Ravikumar, and Prateek Jain. Adaptive hard thresholding for near-optimal consistent robust regression. In Alina Beygelzimer and Daniel Hsu, editors, _Proceedings of the Thirty-Second Conference on Learning Theory_, volume 99 of _Proceedings of Machine Learning Research_, pages 2892-2897, Phoenix, USA, 25-28 Jun 2019. PMLR.
* [95] M. Talagrand. _Upper and Lower Bounds for Stochastic Processes_. A series of modern surveys in mathematics. Springer, Berlin, Heidelberg, 2014.
* [96] R. Tibshirani. Regression shrinkage and selection via the Lasso. _Journal of the Royal Statistical Society. Series B_, 58(1):267-288, 1996.
* [97] E. Tsakonas, J. Jalden, N. D. Sidiropoulos, and B. Ottersten. Convergence of the huber regression m-estimate in the presence of dense outliers. _IEEE Signal Processing Letters_, 21(10):1211-1214, 2014.
* [98] Sara A. van de Geer and Peter Buhlmann. On the conditions used to prove oracle results for the lasso. _Electron. J. Statist._, 3:1360-1392, 2009.
* [99] Bingyan Wang and Jianqing Fan. Robust Matrix Completion with Heavy-tailed Noise. _arXiv:2206.04276_, 2022.
* [100] Hansheng Wang, Guodong Li, and Guohua Jiang. Robust regression shrinkage and consistent variable selection through the lad-lasso. _Journal of Business & Economic Statistics_, 25(3):347-355, 2007.
* [101] J. Wright and Y. Ma. Dense error correction via l1-minimization. In _2009 IEEE International Conference on Acoustics, Speech and Signal Processing_, pages 3033-3036, 2009.
* [102] John Wright, Arvind Ganesh, Shankar Rao, Yigang Peng, and Yi Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Y. Bengio, D. Schuurmans, J. Lafferty, C. Williams, and A. Culotta, editors, _Advances in Neural Information Processing Systems_, volume 22. Curran Associates, Inc., 2009.
* 2745, 2016.
* [104] Huan Xu, Constantine Caramanis, and Sujay Sanghavi. Robust pca via outlier pursuit. In J. Lafferty, C. Williams, J. Shawe-Taylor, R. Zemel, and A. Culotta, editors, _Advances in Neural Information Processing Systems_, volume 23, pages 2496-2504. Curran Associates, Inc., 2010.
## Summary of the paper
* 1 Introduction
* 2 General framework
* 3 Our estimators
* 4 Motivating examples
* 4.1 Robust least-squares trace regression with additive matrix decomposition
* 4.1.1 Robust least-squares sparse regression
* 4.1.2 Robust least-squares trace regression
* 4.2 Robust matrix completion
* 5 Results for robust sparse/low-rank regression with subgaussian designs
* 6 Results for robust matrix completion
* 7 Related work and contributions
* 7.1 Robust sparse least-squares regression
* 7.2 Robust trace regression
* 7.3 Robust trace regression with additive matrix decomposition
* 7.4 Robust matrix completion
* 8 Results for the multiplier and product processes
* 9 Properties for subgaussian distributions
* 10 \(\mathrm{MP}\) in \(M\)-estimation with decomposable regularizers
* 11 First general theorem
* 12 Second general theorem
* 13 Simulation results
* 14 Discussion
* 15 Proof of Lemma 10
* 16 Proof of Lemma 12
* 17 Proof of Proposition 2, item (i)
* 18 Proof of Proposition 2, item (ii)
* 19 Proof of Proposition 2, item (iii)
* 20 Proof of Proposition 2, item (iv)
* 21 Lemmas for decomposable norms
**Supplementary Material**
In this supplement we give detailed proofs of the results stated in the main text. It is organized as follows:
* The supplement starts by proving Lemmas 10 and 12 and Proposition 2 in Section 9 of the main text; see Sections 15 to 20. Auxiliary lemmas are recalled in Section 21.
* We then pass to the proof of Theorem 16 in Section 11 of the main text. The auxiliary lemmas are proved in order from Sections 22 to 27. The proof of Theorem 16 is concluded in Section 28.
* The proof of Theorem 2 in Section 5 of the main text is given in Section 29. It follows from Theorem 16 and Proposition 2. The lower bound, Proposition 1, is proved in Section 30.
* The proof of Theorem 3 in Section 5 of the main text is given in Sections 31 to 33. It also follows from Theorem 16 and Proposition 2.
* The proof of Theorem 22 in Section 12 of the main text is presented in Section 34. This theorem and Proposition 2 entail Theorem 4 in Section 5 of the main text -- its proof is given in Sections 35 to 37.
* The proof of Theorem 5 in Section 6 of the main text is given in Section 38.
* The proofs of Theorems 7 and 8 in Section 8 of the main text and needed peeling lemmas are presented in the Appendix.
Unless otherwise stated, \(C>0\) and \(c\in(0,1)\) will denote universal constants that may change within the text.
## 15 Proof of Lemma 10
From \(\mathrm{PP}_{\mathcal{R},\mathcal{R}}(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_ {4})\),
\[\left|\,\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2}-\|\mathbf{V}\|_{\Pi}^{2}\,\right|\leq\alpha_{1}\|\mathbf{V}\|_{\Pi}^{2}+(\alpha_{2}+\alpha_{3})\mathcal{R}(\mathbf{V})\|\mathbf{V}\|_{\Pi}+\alpha_{4}\mathcal{R}^{2}(\mathbf{V})\leq\|\mathbf{V}\|_{\Pi}^{2}\left(\alpha_{1}+\frac{\alpha^{2}}{2}\right)+\mathcal{R}^{2}(\mathbf{V})\left(\alpha_{4}+\frac{(\alpha_{2}+\alpha_{3})^{2}}{2\alpha^{2}}\right).\]
Let \(0<\alpha<\sqrt{2(1-\alpha_{1})}\). Rearranging and taking the square-root, it follows that \(\mathrm{RSC}_{\mathcal{R}}(\mathtt{a}_{1},\mathtt{a}_{2})\) holds with constants \(\mathtt{a}_{1}:=\{1-\alpha_{1}-(\nicefrac{{\alpha^{2}}}{{2}})\}^{1/2}\) and \(\mathtt{a}_{2}:=\{\frac{(\alpha_{2}+\alpha_{3})^{2}}{2\alpha^{2}}+\alpha_{4} \}^{1/2}.\) The proof is finished once we set \(\alpha^{2}:=(1-\alpha_{1})/2\).
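For the reader's convenience, the passage from the first to the second bound in the display above is Young's inequality: for any \(\alpha>0\),
\[(\alpha_{2}+\alpha_{3})\,\mathcal{R}(\mathbf{V})\,\|\mathbf{V}\|_{\Pi}\;\leq\;\frac{\alpha^{2}}{2}\|\mathbf{V}\|_{\Pi}^{2}+\frac{(\alpha_{2}+\alpha_{3})^{2}}{2\alpha^{2}}\mathcal{R}^{2}(\mathbf{V}),\]
which is \(2ab\leq\alpha^{2}a^{2}+b^{2}/\alpha^{2}\) applied with \(a=\|\mathbf{V}\|_{\Pi}\) and \(b=(\alpha_{2}+\alpha_{3})\mathcal{R}(\mathbf{V})\).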
## 16 Proof of Lemma 12
Let \(\alpha:=(\mathtt{a}_{1}\wedge\bar{\mathtt{a}}_{1})/2\sqrt{2}\) and \(\beta:=\mathtt{b}_{1}+\mathtt{c}_{1}.\) By assumption \(b:=\beta+2\alpha^{2}<(\mathtt{a}_{1}\wedge\bar{\mathtt{a}}_{1})^{2}\) and \(\mathtt{d}_{1}^{2}=(\mathtt{a}_{1}\wedge\bar{\mathtt{a}}_{1})^{2}-b.\) From \(\mathrm{RSC}\),
\[\begin{aligned}\mathtt{d}_{1}^{2}\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}^{2}&\leq\mathtt{a}_{1}^{2}\|\mathbf{V}\|_{\Pi}^{2}+\bar{\mathtt{a}}_{1}^{2}\|\mathbf{W}\|_{\Pi}^{2}+\|\boldsymbol{u}\|_{2}^{2}-b\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}^{2}\\&\leq\left(\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}+\mathtt{a}_{2}\mathcal{R}(\mathbf{V})\right)^{2}+\left(\|\mathfrak{X}^{(n)}(\mathbf{W})\|_{2}+\bar{\mathtt{a}}_{2}\mathcal{S}(\mathbf{W})\right)^{2}+\|\boldsymbol{u}\|_{2}^{2}-b\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}^{2},\end{aligned}\]
so, after taking the square-root,24
Footnote 24: Here we used the relation \(\sqrt{(A+B)^{2}+(\bar{A}+\bar{B})^{2}+C^{2}}\leq\sqrt{A^{2}+\bar{A}^{2}+C^{2}}+B+\bar{B}\) for positive numbers \((A,\bar{A},B,\bar{B},C)\).
\[\mathtt{d}_{1}\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}\leq\left\{\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2}+\|\mathfrak{X}^{(n)}(\mathbf{W})\|_{2}^{2}+\left(\|\boldsymbol{u}\|_{2}^{2}-b\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}^{2}\right)_{+}\right\}^{\frac{1}{2}}+\mathsf{a}_{2}\mathcal{R}(\mathbf{V})+\bar{\mathsf{a}}_{2}\mathcal{S}(\mathbf{W}). \tag{46}\]
In what follows, it is enough to consider the case \(\|\mathbf{u}\|_{2}^{2}\geq b\|[\mathbf{V},\mathbf{W},\mathbf{u}]\|_{\Pi}^{2}\).
Taking the squares,
\[\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2}+\|\mathfrak{X}^{(n)}( \mathbf{W})\|_{2}^{2}+\|\mathbf{u}\|_{2}^{2} =\|\mathfrak{X}^{(n)}(\mathbf{V}+\mathbf{W})+\mathbf{u}\|_{2}^{2}\] \[-2\langle\mathfrak{X}^{(n)}(\mathbf{V}),\mathfrak{X}^{(n)}( \mathbf{W})\rangle-2\langle\mathfrak{X}^{(n)}(\mathbf{V}+\mathbf{W}),\mathbf{u}\rangle.\]
By IP and Young's inequality,
\[\begin{aligned}T_{1}:=-2\langle\mathfrak{X}^{(n)}(\mathbf{V}+\mathbf{W}),\mathbf{u}\rangle&\leq 2\mathsf{b}_{1}\left\|[\mathbf{V},\mathbf{W}]\right\|_{\Pi}\|\mathbf{u}\|_{2}+2\mathsf{b}_{2}\mathcal{R}(\mathbf{V})\|\mathbf{u}\|_{2}+2\mathsf{b}_{3}\mathcal{S}(\mathbf{W})\|\mathbf{u}\|_{2}+2\mathsf{b}_{4}\left\|[\mathbf{V},\mathbf{W}]\right\|_{\Pi}\mathcal{Q}(\mathbf{u})\\&\leq(\mathsf{b}_{1}+\alpha^{2})(\|[\mathbf{V},\mathbf{W}]\|_{\Pi}^{2}+\|\mathbf{u}\|_{2}^{2})+\frac{\mathsf{b}_{2}^{2}}{\alpha^{2}}\mathcal{R}^{2}(\mathbf{V})+\frac{\mathsf{b}_{3}^{2}}{\alpha^{2}}\mathcal{S}^{2}(\mathbf{W})+\frac{\mathsf{b}_{4}^{2}}{\alpha^{2}}\mathcal{Q}^{2}(\mathbf{u}).\end{aligned}\]
Let \(T_{2}:=-2\langle\mathfrak{X}^{(n)}(\mathbf{V}),\mathfrak{X}^{(n)}(\mathbf{W})\rangle\). By PP and Young's inequality,
\[T_{2}+2\langle\mathbf{V},\mathbf{W}\rangle_{\Pi} \leq 2\mathsf{c}_{1}\left\|\mathbf{V}\right\|_{\Pi}\|\mathbf{W} \|_{\Pi}+2\mathsf{c}_{2}\mathcal{R}(\mathbf{V})\|\mathbf{W}\|_{\Pi}+2\mathsf{ c}_{3}\left\|\mathbf{V}\right\|_{\Pi}\mathcal{S}(\mathbf{W})\] \[+2\gamma\mathsf{c}_{2}\mathsf{c}_{3}\mathcal{R}(\mathbf{V}) \mathcal{S}(\mathbf{W})\] \[\leq(\mathsf{c}_{1}+\alpha^{2})(\|\mathbf{V}\|_{\Pi}^{2}+\| \mathbf{W}\|_{\Pi}^{2})\] \[+\left(\frac{\mathsf{c}_{2}^{2}}{\alpha^{2}}+\gamma\mathsf{c}_{2 }^{2}\right)\mathcal{R}^{2}(\mathbf{V})+\left(\frac{\mathsf{c}_{3}^{2}}{ \alpha^{2}}+\gamma\mathsf{c}_{3}^{2}\right)\mathcal{S}^{2}(\mathbf{W}).\]
Let \(T_{0}:=\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2}+\|\mathfrak{X}^{(n)}( \mathbf{W})\|_{2}^{2}+\|\mathbf{u}\|_{2}^{2}\). We thus conclude that
\[T_{0} \leq\|\mathfrak{M}^{(n)}(\mathbf{V}+\mathbf{W},\mathbf{u})\|_{2}^{2} -2\langle\mathbf{V},\mathbf{W}\rangle_{\Pi}\] \[+(\mathsf{b}_{1}+\mathsf{c}_{1}+2\alpha^{2})\|[\mathbf{V},\mathbf{ W}]\|_{\Pi}^{2}+(\mathsf{b}_{1}+\alpha^{2})\|\mathbf{u}\|_{2}^{2}\] \[+\left(\frac{\mathsf{b}_{2}^{2}}{\alpha^{2}}+\frac{\mathsf{c}_{2 }^{2}}{\alpha^{2}}+\gamma\mathsf{c}_{2}^{2}\right)\mathcal{R}^{2}(\mathbf{V}) +\left(\frac{\mathsf{b}_{3}^{2}}{\alpha^{2}}+\frac{\mathsf{c}_{3}^{2}}{\alpha ^{2}}+\gamma\mathsf{c}_{3}^{2}\right)\mathcal{S}^{2}(\mathbf{W})+\frac{\mathsf{ b}_{4}^{2}}{\alpha^{2}}\mathcal{Q}^{2}(\mathbf{u}). \tag{47}\]
From (46)-(47) and definitions of \((\beta,b)\),
\[\mathsf{d}_{1}\|[\mathbf{V},\mathbf{W},\mathbf{u}]\|_{\Pi} \leq\left\{T_{0}-b\|[\mathbf{V},\mathbf{W},\mathbf{u}]\|_{\Pi}^{2} \right\}^{\frac{1}{2}}+\mathsf{a}_{2}\mathcal{R}(\mathbf{V})+\bar{\mathsf{a}}_ {2}\mathcal{S}(\mathbf{W})\] \[\leq\left\{\|\mathfrak{M}^{(n)}(\mathbf{V}+\mathbf{W},\mathbf{u})\|_{ 2}^{2}-2\langle\mathbf{V},\mathbf{W}\rangle_{\Pi}\right\}_{+}^{\frac{1}{2}}\] \[+\left(\left\{\frac{\mathsf{b}_{2}^{2}}{\alpha^{2}}+\frac{ \mathsf{c}_{2}^{2}}{\alpha^{2}}+\gamma\mathsf{c}_{2}^{2}\right\}^{\frac{1}{2}}+ \mathsf{a}_{2}\right)\mathcal{R}(\mathbf{V})+\left(\left\{\frac{\mathsf{b}_{3}^ {2}}{\alpha^{2}}+\frac{\mathsf{c}_{3}^{2}}{\alpha^{2}}+\gamma\mathsf{c}_{3}^{2 }\right\}^{\frac{1}{2}}+\bar{\mathsf{a}}_{2}\right)\mathcal{S}(\mathbf{W})\] \[+\frac{\mathsf{b}_{4}}{\alpha}\mathcal{Q}(\mathbf{u}).\]
## 17 Proof of Proposition 2, item (i)
We start with the following lemma.
**Lemma 23**.: _Suppose that \(\mathbf{X}\) is \(L\)-subgaussian. Let \(\mathcal{B}_{1}\) and \(\mathcal{B}_{2}\) be bounded subsets of \(\mathbb{B}_{\Pi}\). For any \(n\geq 1\) and \(t\geq 1\), with probability at least \(1-e^{-t}\), it holds that_
\[\sup_{[\mathbf{V},\mathbf{W}]\in\mathcal{B}_{1}\times\mathcal{B}_{2}}|\langle\mathbf{V},\mathbf{W}\rangle_{n}-\langle\mathbf{V},\mathbf{W}\rangle_{\Pi}|\leq\frac{CL^{2}}{n}\mathscr{G}\big{(}\mathfrak{S}^{1/2}(\mathcal{B}_{1})\big{)}\mathscr{G}\big{(}\mathfrak{S}^{1/2}(\mathcal{B}_{2})\big{)}+\frac{CL^{2}}{\sqrt{n}}\left[\mathscr{G}\big{(}\mathfrak{S}^{1/2}(\mathcal{B}_{1})\big{)}+\mathscr{G}\big{(}\mathfrak{S}^{1/2}(\mathcal{B}_{2})\big{)}\right]+CL^{2}\left(\frac{t}{n}+\sqrt{\frac{t}{n}}\right).\]
Lemma 23 is immediate from Theorem 8 applied to the classes \(F:=\{\langle\cdot,\mathbf{V}\rangle:\mathbf{V}\in\mathcal{B}_{1}\}\) and \(G:=\{\langle\cdot,\mathbf{W}\rangle:\mathbf{W}\in\mathcal{B}_{2}\}\) and Talagrand's majorizing theorems [95].25 The next proposition is a restatement of item (i) of Proposition 2. We prove it using Lemma 23 and the peeling Lemma 42 in Appendix A.
Footnote 25: By these theorems, \(\gamma_{2}(F)\asymp L\mathcal{G}\big{(}\mathfrak{S}^{1/2}(\mathcal{B}_{1}))\). We also note that \(\bar{\Delta}(F)\lesssim L\).
**Proposition 4** (pp).: _Suppose that \(\mathbf{X}\) is \(L\)-subgaussian. For all \(\delta\in(0,1)\) and \(n\in\mathbb{N}\), with probability at least \(1-\delta\), the following property holds: for all \([\mathbf{V},\mathbf{W}]\in(\mathds{R}^{p})^{2}\),_
\[|\{\mathbf{W},\mathbf{V}\}_{n}-\left\langle\mathbf{W},\mathbf{V} \right\rangle_{\Pi}| \leq CL^{2}\left(\frac{1+\log(1/\delta)}{n}+\frac{1+\sqrt{\log( 1/\delta)}}{\sqrt{n}}\right)\|\mathbf{V}\|_{\Pi}\|\mathbf{W}\|_{\Pi}\] \[+CL^{2}\left(\frac{1}{n}+\frac{1}{\sqrt{n}}\right)\mathcal{G} \left(\mathcal{R}(\mathbf{V})\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}}) \cap\|\mathbf{V}\|_{\Pi}\mathbb{B}_{F}\right)\|\mathbf{W}\|_{\Pi}\] \[+CL^{2}\left(\frac{1}{n}+\frac{1}{\sqrt{n}}\right)\mathcal{G} \left(\mathcal{S}(\mathbf{W})\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{S}}) \cap\|\mathbf{W}\|_{\Pi}\mathbb{B}_{F}\right)\left\|\mathbf{V}\right\|_{\Pi}\] \[+\frac{CL^{2}}{n}\mathcal{G}\left(\mathcal{R}(\mathbf{V}) \mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\cap\|\mathbf{V}\|_{\Pi}\mathbb{ B}_{F}\right)\cdot\mathcal{G}\left(\mathcal{S}(\mathbf{W})\mathfrak{S}^{1/2}( \mathbb{B}_{\mathcal{S}})\cap\|\mathbf{W}\|_{\Pi}\mathbb{B}_{F}\right).\]
Proof.: Let \(r,\bar{r}>0\) and define the sets
\[V_{1}:=\{\mathbf{V}:\|\mathbf{V}\|_{\Pi}\leq 1,\mathcal{R}(\mathbf{V})\leq r \},\quad V_{2}:=\{\mathbf{W}:\|\mathbf{W}\|_{\Pi}\leq 1,\mathcal{S}(\mathbf{W}) \leq\bar{r}\}.\]
Note that,
\[\mathcal{G}\left(\mathfrak{S}^{1/2}(V_{1})\right) \leq r\mathcal{G}\left(\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R} })\cap r^{-1}\mathbb{B}_{F}\right)=:g(r),\] \[\mathcal{G}\left(\mathfrak{S}^{1/2}(V_{2})\right) \leq\bar{r}\mathcal{G}\left(\mathfrak{S}^{1/2}(\mathbb{B}_{ \mathcal{S}})\cap\bar{r}^{-1}\mathbb{B}_{F}\right)=:\bar{g}(\bar{r}).\]
By Lemma 23, for any \(r,\bar{r}>0\) and \(\delta\in(0,1/c]\), we have with probability at least \(1-c\delta\),
\[\sup_{[\mathbf{V},\mathbf{W}]\in V_{1}\times V_{2}}|\langle\mathbf{W},\mathbf{V}\rangle_{n}-\langle\mathbf{W},\mathbf{V}\rangle_{\Pi}|\leq\frac{CL^{2}}{n}\cdot g(r)\bar{g}(\bar{r})+\frac{CL^{2}}{\sqrt{n}}[g(r)+\bar{g}(\bar{r})]+CL^{2}\left(\sqrt{\frac{\log(1/\delta)}{n}}+\frac{\log(1/\delta)}{n}\right).\]
We now invoke Lemma 42 with the set \(V:=\mathbb{B}_{\Pi}\times\mathbb{B}_{\Pi}\), functions
\[M(\mathbf{V},\mathbf{W}) :=-\left|\{\mathbf{W},\mathbf{V}\}_{n}-\{\mathbf{W},\mathbf{V}\} _{\Pi}\right|,\] \[h(\mathbf{V},\mathbf{W}) :=\mathcal{R}(\mathbf{V})\] \[\bar{h}(\mathbf{V},\mathbf{W}) :=\mathcal{S}(\mathbf{W}),\]
functions \(g\) and \(\bar{g}\) as stated above and constant \(b:=CL^{2}\). The claim follows from that lemma and the homogeneity of norms.
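The Gaussian mean widths appearing above are easy to approximate numerically. The following minimal Python sketch (an illustration added here, not part of the original argument) estimates \(\mathscr{G}(\mathbb{B}_{\mathcal{R}})\) for \(\mathcal{R}=\|\cdot\|_{1}\) in \(\mathbb{R}^{p}\) with \(\mathfrak{S}=\mathbf{I}\), using that \(\sup_{\|t\|_{1}\leq 1}\langle g,t\rangle=\|g\|_{\infty}\), so that \(\mathscr{G}(\mathbb{B}_{1})=\mathbb{E}\|g\|_{\infty}\approx\sqrt{2\log p}\).

```python
import numpy as np

def gaussian_width_l1_ball(p, n_mc=2000, seed=0):
    """Monte Carlo estimate of the Gaussian mean width of the unit l1 ball in R^p:
    G(B_1) = E sup_{||t||_1 <= 1} <g, t> = E ||g||_inf, with g ~ N(0, I_p)."""
    rng = np.random.default_rng(seed)
    g = rng.standard_normal((n_mc, p))
    return np.abs(g).max(axis=1).mean()

for p in (10, 100, 1000):
    print(f"p={p:5d}  estimate={gaussian_width_l1_ball(p):.3f}  "
          f"sqrt(2 log p)={np.sqrt(2 * np.log(p)):.3f}")
```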
## 18 Proof of Proposition 2, item (ii)
By item (i) of Proposition 2, with probability at least \(1-\delta\), \(\operatorname{PP}_{\mathcal{R},\mathcal{R}}\) holds with constants
\[\mathsf{c}_{1} =CL^{2}\left(\frac{1+\log(1/\delta)}{n}+\frac{1+\sqrt{\log(1/\delta )}}{\sqrt{n}}\right),\] \[\mathsf{c}_{2}=\mathsf{c}_{3} =CL^{2}\left(\frac{1}{n}+\frac{1}{\sqrt{n}}\right)\mathscr{G} \left(\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\right),\] \[\mathsf{c}_{4} =\frac{CL^{2}}{n}\mathscr{G}^{2}\left(\mathfrak{S}^{1/2}( \mathbb{B}_{\mathcal{R}})\right).\]
The claim follows from this and Lemma 10 with constant \(\alpha^{2}=(1-\mathsf{c}_{1})/2\) -- noting that, by assumption \(\mathsf{c}_{1}\in(0,1)\).
## 19 Proof of Proposition 2, item (iii)
We start with the following lemma, stating a high-probability version of Chevet's inequality. This result is suggested as an exercise in Vershynin [1]. We give a proof for completeness.
**Lemma 24**.: _Suppose that \(\mathbf{X}\) is \(L\)-subgaussian. Let \(V\) be any bounded subset of \(\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\). Define \(V_{1}:=\{\mathbf{V}:\exists\,\boldsymbol{u}\text{ s.t. }(\mathbf{V},\boldsymbol{u})\in V\}\) and \(V_{2}:=\{\boldsymbol{u}:\exists\,\mathbf{V}\text{ s.t. }(\mathbf{V},\boldsymbol{u})\in V\}\)._
_Then, for any \(n\geq 1\) and \(t>0\), with probability at least \(1-2\exp(-t^{2})\),_
\[\sup_{[\mathbf{V},\boldsymbol{u}]\in V}\langle\boldsymbol{u},\mathfrak{X}( \mathbf{V})\rangle\leq CL[\mathscr{G}\big{(}\mathfrak{S}^{1/2}(V_{1}))+ \mathscr{G}\big{(}V_{2}\big{)}+t].\]
Proof.: For each \((\mathbf{V},\boldsymbol{u})\in V\), we define
\[Z_{\mathbf{V},\boldsymbol{u}}:=\langle\boldsymbol{u},\mathfrak{X}(\mathbf{V}) \rangle=\sum_{i\in[n]}\boldsymbol{u}_{i}(\mathbf{X}_{i},\mathbf{V}),\qquad W_ {\mathbf{V},\boldsymbol{u}}:=L(\langle\mathbf{V},\mathfrak{S}^{1/2}( \boldsymbol{\Xi})\rangle+\langle\boldsymbol{u},\boldsymbol{\xi}\rangle),\]
where \(\boldsymbol{\Xi}\in\mathbb{R}^{p}\) and \(\boldsymbol{\xi}\in\mathbb{R}^{n}\) are independent each one having iid \(\mathcal{N}(0,1)\) entries. Therefore, \((\mathbf{V},\boldsymbol{u})\mapsto W_{\mathbf{V},\boldsymbol{u}}\) defines a centered Gaussian process indexed by \(V\).
We may easily bound the \(\psi_{2}\)-norm of the increments using rotation invariance of sub-Gaussian random variables. Indeed, using that \(\{\mathbf{X}_{i}\}\) is an iid sequence and Proposition 2.6.1 in [1], given \([\mathbf{V},\boldsymbol{u}]\) and \([\mathbf{V}^{\prime},\boldsymbol{u}^{\prime}]\) in \(V\),
\[\begin{aligned}|Z_{\mathbf{V},\boldsymbol{u}}-Z_{\mathbf{V}^{\prime},\boldsymbol{u}^{\prime}}|_{\psi_{2}}^{2}&=\Big{|}\sum_{i\in[n]}\langle\mathbf{X}_{i},\boldsymbol{u}_{i}\mathbf{V}-\boldsymbol{u}_{i}^{\prime}\mathbf{V}^{\prime}\rangle\Big{|}_{\psi_{2}}^{2}\\&\leq C\sum_{i\in[n]}\big{|}\langle\mathbf{X}_{i},\boldsymbol{u}_{i}\mathbf{V}-\boldsymbol{u}_{i}^{\prime}\mathbf{V}^{\prime}\rangle\big{|}_{\psi_{2}}^{2}\\&\leq 2C\sum_{i\in[n]}\big{|}\langle\mathbf{X}_{i},(\boldsymbol{u}_{i}-\boldsymbol{u}_{i}^{\prime})\mathbf{V}\rangle\big{|}_{\psi_{2}}^{2}+2C\sum_{i\in[n]}\big{|}\langle\mathbf{X}_{i},\boldsymbol{u}_{i}^{\prime}(\mathbf{V}-\mathbf{V}^{\prime})\rangle\big{|}_{\psi_{2}}^{2}\\&\leq 2CL^{2}\|\boldsymbol{u}-\boldsymbol{u}^{\prime}\|_{2}^{2}\|\mathbf{V}\|_{\Pi}^{2}+2CL^{2}\|\boldsymbol{u}^{\prime}\|_{2}^{2}\|\mathbf{V}-\mathbf{V}^{\prime}\|_{\Pi}^{2}\leq 2CL^{2}\mathsf{d}^{2}([\mathbf{V},\boldsymbol{u}],[\mathbf{V}^{\prime},\boldsymbol{u}^{\prime}]),\end{aligned} \tag{48}\]
with the pseudo-metric \(\mathsf{d}([\mathbf{V},\boldsymbol{u}],[\mathbf{V}^{\prime},\boldsymbol{u}^{ \prime}]):=\sqrt{\|\boldsymbol{u}-\boldsymbol{u}^{\prime}\|_{2}^{2}+\|\mathbf{V }-\mathbf{V}^{\prime}\|_{\Pi}^{2}}\), using that \(\|\mathbf{V}\|_{\Pi}\leq 1\) and \(\|\boldsymbol{u}^{\prime}\|_{2}\leq 1\). On the other hand, by definition of the process \(W\) it is easy to check that
\[\mathbb{E}[(W_{\mathbf{V},\boldsymbol{u}}-W_{\mathbf{V}^{\prime},\boldsymbol{ u}^{\prime}})^{2}]=L^{2}(\|\mathbf{V}-\mathbf{V}^{\prime}\|_{\Pi}^{2}+\| \boldsymbol{u}-\boldsymbol{u}^{\prime}\|_{2}^{2}). \tag{49}\]
From (48),(49), we conclude that the processes \(W\) and \(Z\) satisfy the conditions of Talagrand's majoration and minoration generic chaining bounds for sub-Gaussian processes (e.g. Theorems 8.5.5 and 8.6.1 in [1]).
Hence, for any \(t\geq 0\), with probability at least \(1-2e^{-t^{2}}\),
\[\sup_{[\mathbf{V},\mathbf{u}]\in V}|Z_{\mathbf{V},\mathbf{u}}|\leq CL\left\{\mathbb{E} \left[\sup_{[\mathbf{V},\mathbf{u}]\in V}W_{\mathbf{V},\mathbf{u}}\right]+t\right\}.\]
In the above we used that \(Z_{\mathbf{V}_{0},\boldsymbol{u}_{0}}=0\) at \([\mathbf{V}_{0},\boldsymbol{u}_{0}]=0\) and that the diameter of \(V\subset\mathbb{B}_{\Pi}^{m_{1}\times m_{2}}\times\mathbb{B}_{2}^{n}\) under the metric \(\mathsf{d}\) is less than \(2\sqrt{2}\). We also have
\[\mathbb{E}\bigg{[}\sup_{[\mathbf{V},\mathbf{u}]\in V}W_{\mathbf{V},\mathbf{u}}\bigg{]} \leq\mathbb{E}\bigg{[}\sup_{\mathbf{V}\in V_{1}}\left\langle\mathbf{\Xi}, \mathfrak{S}^{1/2}(\mathbf{V})\right\rangle\bigg{]}+\mathbb{E}\bigg{[}\sup_{ \mathbf{u}\in V_{2}}\left\langle\mathbf{u},\mathbf{\xi}\right\rangle\bigg{]}=\mathscr{G} \big{(}\mathfrak{S}^{1/2}(V_{1})\big{)}+\mathscr{G}(V_{2}).\]
Joining the two previous inequalities completes the proof of the claimed inequality.
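In the simplest instance, with \(\mathfrak{S}=\mathbf{I}\) and \(V=\mathbb{B}_{F}\times\mathbb{B}_{2}^{n}\), the supremum in Lemma 24 is the operator norm of the \(n\times p\) matrix whose rows are the \(\mathbf{X}_{i}\), and the lemma recovers the classical bound \(\mathbb{E}\|\mathbf{X}\|_{\mathrm{op}}\lesssim\sqrt{n}+\sqrt{p}\). A minimal Monte Carlo sketch of this special case (added here as an illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, n_mc = 200, 500, 50

# sup over ||V||_2 <= 1, ||u||_2 <= 1 of <u, X V> is the largest singular value of X.
op_norms = [np.linalg.norm(rng.standard_normal((n, p)), 2) for _ in range(n_mc)]
print(f"mean ||X||_op ~ {np.mean(op_norms):.1f},  sqrt(n) + sqrt(p) = {np.sqrt(n) + np.sqrt(p):.1f}")
```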
The next proposition is a restatement of item (iii) of Proposition 2. We prove it using Lemma 24 and the peeling Lemma 43 in Appendix A.
**Proposition 5** (IP).: _Suppose that \(\mathbf{X}\) is \(L\)-subgaussian. For all \(\delta\in(0,1)\) and \(n\in\mathbb{N}\), with probability at least \(1-\delta\), the following property holds: for all \([\mathbf{V},\mathbf{W},\mathbf{u}]\in(\mathbb{R}^{p})^{2}\times\mathbb{R}^{n}\),_
\[\langle\mathbf{u},\mathfrak{X}^{(n)}(\mathbf{V}+\mathbf{W})\rangle \leq CL\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}\|[\mathbf{V}, \mathbf{W}]\|_{\Pi}\|\mathbf{u}\|_{2}\] \[+CL\frac{\mathscr{G}\big{(}\mathcal{R}(\mathbf{V})\mathfrak{S}^{ 1/2}(\mathbb{B}_{\mathcal{R}})\cap\|[\mathbf{V},\mathbf{W}]\|_{\Pi}\mathbb{B}_ {F}\big{)}}{\sqrt{n}}\|\mathbf{u}\|_{2}\] \[+CL\frac{\mathscr{G}\big{(}\mathcal{Q}(\mathbf{u})\mathfrak{B}_{ \mathcal{Q}}\cap\|\mathbf{u}\|_{2}\mathbb{B}_{2}^{n}\big{)}}{\sqrt{n}}\|[\mathbf{V },\mathbf{W}]\|_{\Pi}.\]
Proof.: Let \(R_{1},R_{2},R_{3}>0\) and define the sets
\[V_{1}:=\{\mathbf{V}\in\mathbb{R}^{p}:\|\mathbf{V}\|_{\Pi}\leq 1,\mathcal{R}(\mathbf{V})\leq R_{1}\}, \tag{50}\] \[V_{2}:=\{\mathbf{u}\in\mathbb{R}^{n}:\|\mathbf{u}\|_{2}\leq 1,\mathcal{Q}(\mathbf{u})\leq R_{2}\}, \tag{51}\] \[V_{3}:=\{\mathbf{W}\in\mathbb{R}^{p}:\|\mathbf{W}\|_{\Pi}\leq 1,\mathcal{S}(\mathbf{W})\leq R_{3}\}. \tag{52}\]
We note that
\[\mathscr{G}\left(\mathfrak{S}^{1/2}(V_{1})\right) \leq R_{1}\mathscr{G}\left(\mathfrak{S}^{1/2}(\mathbb{B}_{ \mathcal{R}})\cap R_{1}^{-1}\mathbb{B}_{F}\right)=:g_{1}(R_{1}), \tag{53}\] \[\mathscr{G}(V_{2}) \leq R_{2}\mathscr{G}\left(\mathbb{B}_{\mathcal{Q}}\cap R_{2}^{- 1}\mathbb{B}_{2}^{n}\right)=:g_{2}(R_{2}),\] (54) \[\mathscr{G}\left(\mathfrak{S}^{1/2}(V_{3})\right) \leq R_{3}\mathscr{G}\left(\mathfrak{S}^{1/2}(\mathbb{B}_{ \mathcal{S}})\cap R_{3}^{-1}\mathbb{B}_{F}\right)=:g_{3}(R_{3}). \tag{55}\]
By Lemma 24, we have that, for any \(R_{1},R_{2}>0\) and \(\delta\in(0,1)\), with probability at least \(1-\delta\), the following inequality holds:
\[\sup_{[\mathbf{V},\mathbf{u}]\in V_{1}\times V_{2}}\langle\mathbf{u},\mathfrak{X}^{(n )}(\mathbf{V})\rangle\leq\frac{CL}{\sqrt{n}}g_{1}(R_{1})+\frac{CL}{\sqrt{n}}g _{2}(R_{2})+\frac{CL}{\sqrt{n}}\sqrt{\log(2/\delta)}.\]
Next, we invoke Lemma 43 with the set \(V:=\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\), functions
\[M(\mathbf{V},\mathbf{u}) :=-\langle\mathbf{u},\mathfrak{X}^{(n)}(\mathbf{V})\rangle,\] \[h(\mathbf{V},\mathbf{u}) :=\mathcal{R}(\mathbf{V})\] \[\bar{h}(\mathbf{V},\mathbf{u}) :=\mathcal{Q}(\mathbf{u}),\]
functions \(g:=g_{1}\) and \(\bar{g}:=g_{2}\) and constant \(b:=CL\). By this lemma, given \(\delta\in(0,1)\), with probability at
least \(1-\delta\), for all \([\mathbf{V},\mathbf{u}]\in\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\),
\[\langle\mathbf{u},\mathfrak{X}^{(n)}(\mathbf{V})\rangle \leq\frac{CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{R}(\mathbf{V}) \mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\cap\mathbb{B}_{F}\right)\] \[+\frac{CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{Q}(\mathbf{u})\mathbb{B }_{\mathcal{Q}}\cap\mathbb{B}_{2}^{n}\right)\] \[+\frac{CL}{\sqrt{n}}(1+\sqrt{\log(1/\delta)}).\]
Similarly, we will invoke Lemma 24 with set \(V_{3}\times V_{2}\) and Lemma 43 with set \(V:=\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\), functions
\[M(\mathbf{W},\mathbf{u}):=-\langle\mathbf{u},\mathfrak{X}^{(n)}(\mathbf{W})\rangle,\qquad h(\mathbf{W},\mathbf{u}):=\mathcal{S}(\mathbf{W}),\qquad\bar{h}(\mathbf{W},\mathbf{u}):=\mathcal{Q}(\mathbf{u}),\]
functions \(g:=g_{3}\) and \(\bar{g}:=g_{2}\) and constant \(b:=CL\). We obtain that, for any \(\delta\in(0,1)\), with probability at least \(1-\delta\), for all \([\mathbf{W},\mathbf{u}]\in\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\),
\[\langle\mathbf{u},\mathfrak{X}^{(n)}(\mathbf{W})\rangle \leq\frac{CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{S}(\mathbf{W}) \mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{S}})\cap\mathbb{B}_{F}\right)\] \[+\frac{CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{Q}(\mathbf{u})\mathbb{B }_{\mathcal{Q}}\cap\mathbb{B}_{2}^{n}\right)\] \[+\frac{CL}{\sqrt{n}}(1+\sqrt{\log(1/\delta)}).\]
By a union bound, we obtain that, for every \(\delta\in(0,1)\), with probability at least \(1-\delta\), for all \([\mathbf{V},\mathbf{W},\mathbf{u}]\in\mathbb{B}_{\Pi}\times\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\),
\[\langle\mathbf{u},\mathfrak{X}^{(n)}(\mathbf{V}+\mathbf{W})\rangle \leq\frac{CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{R}(\mathbf{V}) \mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\cap\mathbb{B}_{F}\right)+\frac{ CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{S}(\mathbf{W})\mathfrak{S}^{1/2}( \mathbb{B}_{\mathcal{S}})\cap\mathbb{B}_{F}\right)\] \[+\frac{2CL}{\sqrt{n}}\mathscr{G}\left(\mathcal{Q}(\mathbf{u})\mathbb{ B}_{\mathcal{Q}}\cap\mathbb{B}_{2}^{n}\right)\] \[+\frac{2CL}{\sqrt{n}}(1+\sqrt{\log(1/\delta)}).\]
To finish, we use that, for any \([\mathbf{V},\mathbf{W},\mathbf{u}]\) with non-zero coordinates, \(\left[\frac{\mathbf{V}}{\|[\mathbf{V},\mathbf{W}]\|_{\Pi}},\frac{\mathbf{W}}{\|[\mathbf{V},\mathbf{W}]\|_{\Pi}},\frac{\mathbf{u}}{\|\mathbf{u}\|_{2}}\right]\) belongs to \(\mathbb{B}_{\Pi}\times\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\), and then use homogeneity of norms.
## 20 Proof of Proposition 2, item (iv)
We start by stating two auxiliary lemmas. The next result follows from a tail symmetrization-contraction argument and the Gaussian concentration inequality.
**Lemma 25** (Proposition 9.2 in [6]).: _Assume \(\sigma:=|\xi|_{\psi_{2}}<\infty\). Let \(U\) be any bounded subset of \(\mathbb{B}_{2}^{n}\). For any \(n\geq 1\) and \(t>0\), with probability at least \(1-\exp(-t^{2}/2)\),_
\[\sup_{\mathbf{u}\in U}\langle\mathbf{\xi},\mathbf{u}\rangle\leq C\sigma\left[\mathscr{G} \left(U\right)+t\right].\]
Next, we state the following lemma.
**Lemma 26**.: _Suppose that \(\mathbf{X}\) is \(L\)-subgaussian. Let \(V\) be a bounded subset of \(\mathbb{B}_{\Pi}\). There exists universal
constant \(c>0\), such that for all \(n\geq 1\), \(u,v\geq 1\), with probability at least \(1-ce^{-u/4}-ce^{-ne}\),_
\[\sup_{\mathbf{V}\in V}\langle\boldsymbol{\xi}^{(n)},\mathfrak{X}^{(n)}(\mathbf{V })\rangle\leq C\left(\sqrt{v}+1\right)\frac{\sigma L}{\sqrt{n}}\mathscr{G} \big{(}\mathfrak{S}^{1/2}(V)\big{)}+C\sigma L\left(\sqrt{\frac{2u}{n}}+\frac{ u}{n}+\sqrt{\frac{uv}{n}}\right).\]
The previous lemma is immediate from Theorem 7 applied to the class \(F:=\{\mathbf{V}\in\mathcal{B}:\langle\!\cdot,\mathbf{V}\rangle\!\}\) and Talagrand's majorizing theorems.
Finally, the next proposition is a restatement of item (iv) of Proposition 2. We prove it using Lemmas 25 and 26 and two peeling lemmas: Lemma 43 (with \(\bar{g}=\bar{h}\equiv 0\)) and Lemma 44 in Appendix A. We recall the following definition:
\[\triangle_{n}(\delta):=(\nicefrac{{1}}{{\sqrt{n}}})[1+\sqrt{\log(1/\delta)}] +(\nicefrac{{1}}{{n}})[1+\log(1/\delta)+\sqrt{\log(1/\delta)}].\]
Next, we also define the functions
\[g_{\mathcal{R}}(\mathbf{V},\mathbf{W},\boldsymbol{u}) \coloneqq\mathscr{G}\left(\mathcal{R}(\mathbf{V})\mathfrak{S}^{1/ 2}(\mathbb{B}_{\mathcal{R}})\cap\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{ \Pi}\mathbb{B}_{F}\right),\] \[g_{\mathcal{S}}(\mathbf{V},\mathbf{W},\boldsymbol{u}) \coloneqq\mathscr{G}\left(\mathcal{S}(\mathbf{W})\mathfrak{S}^{ 1/2}(\mathbb{B}_{\mathcal{S}})\cap\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_ {\Pi}\mathbb{B}_{F}\right),\] \[g_{\mathcal{Q}}(\mathbf{V},\mathbf{W},\boldsymbol{u}) \coloneqq\mathscr{G}\left(\mathcal{Q}(\boldsymbol{u})\mathbb{B}_{ \mathcal{Q}}\cap\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}\mathbb{B}_{2 }^{n}\right).\]
**Proposition 6** (\(\mathrm{MP}\)).: _For all \(n\in\mathbb{N}\) and all \(\delta\in(0,1)\), with probability at least \(1-\delta\), for all \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\in\mathbb{B}_{\Pi}\times\mathbb{B}_{\Pi} \times\mathbb{B}_{2}^{n}\),_
\[\langle\boldsymbol{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{V}+ \mathbf{W},\boldsymbol{u})\rangle \leq C\sigma L\cdot\triangle_{n}(\delta)\cdot\|[\mathbf{V}, \mathbf{W},\boldsymbol{u}]\|_{\Pi}\] \[+C\sigma L\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\log(1/ \delta)}]\frac{1}{\sqrt{n}}+\frac{1}{n}\right\}g_{\mathcal{R}}(\mathbf{V}, \mathbf{W},\boldsymbol{u})\] \[+C\sigma L\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\log(1/ \delta)}]\frac{1}{\sqrt{n}}+\frac{1}{n}\right\}g_{\mathcal{S}}(\mathbf{V}, \mathbf{W},\boldsymbol{u})\] \[+C\frac{\sigma}{\sqrt{n}}g_{\mathcal{Q}}(\mathbf{V},\mathbf{W}, \boldsymbol{u}).\]
Proof.: Given \(R_{1},R_{2},R_{3}>0\), we recall the definitions of the sets \(V_{1}\), \(V_{2}\) and \(V_{3}\) in (50), (51) and (52) and functions \(g_{1}\), \(g_{2}\) and \(g_{3}\) in (53), (54) and (55).
By Lemma 26, there is a constant \(c\geq 1\) such that, for any \(R_{1}>0\) and \(\delta\in(0,1/c]\), with probability at least \(1-2c\delta\),
\[\sup_{\mathbf{V}\in V_{1}}\langle\boldsymbol{\xi}^{(n)}, \mathfrak{X}^{(n)}(\mathbf{V})\rangle \leq C\left(\sqrt{\frac{\log(1/\delta)}{n}}+1\right)\frac{\sigma L }{\sqrt{n}}g_{1}(R_{1})\] \[+C\sigma L\left(\sqrt{\frac{\log(1/\delta)}{n}}+\frac{\log(1/ \delta)}{n}\right).\]
Next, we invoke Lemma 44 with the set \(V:=\mathbb{B}_{\Pi}\), functions
\[M(\mathbf{V}) \coloneqq-\langle\boldsymbol{\xi}^{(n)},\mathfrak{X}^{(n)}( \mathbf{V})\rangle,\] \[h(\mathbf{V}) \coloneqq\mathcal{R}(\mathbf{V}),\]
function \(g:=g_{1}\) and constant \(b:=C\sigma L\). By this lemma, given \(\delta\in(0,1/2c]\), with probability at least \(1-2c\delta\), for all \(\mathbf{V}\in\mathbb{B}_{\Pi}\),
\[\langle\boldsymbol{\xi}^{(n)},\mathfrak{X}^{(n)}(\mathbf{V})\rangle \leq C\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\log(1/\delta) }]\frac{\sigma L}{\sqrt{n}}+\frac{\sigma L}{n}\right\}\mathscr{G}\left( \mathcal{R}(\mathbf{V})\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\cap \mathbb{B}_{F}\right)\] \[+C(\nicefrac{{\sigma L}}{{\sqrt{n}}})[1+\sqrt{\log(1/\delta)}]+C (\nicefrac{{\sigma L}}{{n}})[\log(1/\delta)+\sqrt{\log(1/\delta)}].\]
Proceeding exactly like above but with set \(V_{3}\), norm \(\mathcal{S}\) and function \(g:=g_{3}\), we get that for all \(\delta\in(0,1/2c]\), with probability at least \(1-2c\delta\), for all \(\mathbf{W}\in\mathbb{B}_{\Pi}\),
\[\langle\boldsymbol{\xi}^{(n)},\boldsymbol{\mathfrak{X}}^{(n)}( \mathbf{W})\rangle \leq C\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\log(1/\delta) }]\frac{\sigma L}{\sqrt{n}}+\frac{\sigma L}{n}\right\}\mathcal{G}\left( \mathcal{S}(\mathbf{W})\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{S}})\cap \mathbb{B}_{F}\right)\] \[+C(\nicefrac{{\sigma L}}{{\sqrt{n}}})[1+\sqrt{\log(1/\delta)}]+C( \nicefrac{{\sigma L}}{{n}})[\log(1/\delta)+\sqrt{\log(1/\delta)}].\]
Finally, by Lemma 25, for any \(R_{2}>0\) and \(\delta\in(0,1)\), with probability at least \(1-\delta\),
\[\sup_{\boldsymbol{u}\in V_{2}}\langle\boldsymbol{\xi}^{(n)},\boldsymbol{u} \rangle\leq C\frac{\sigma}{\sqrt{n}}\left[g_{2}(R_{2})+\sqrt{\log(1/\delta)} \right].\]
We now invoke Lemma 43 with set \(V:=\mathbb{B}_{2}^{n}\), functions
\[M(\boldsymbol{u}) :=-\langle\boldsymbol{\xi}^{(n)},\boldsymbol{u}\rangle,\] \[h(\boldsymbol{u}) :=\mathcal{Q}(\boldsymbol{u}),\]
function \(g:=g_{2}\) (and \(\bar{g}=\bar{h}\equiv 0\)) and constant \(b:=C\sigma\). We get that, for any \(\delta\in(0,1)\), with probability at least \(1-\delta\), for all \(\boldsymbol{u}\in\mathbb{B}_{2}^{n}\),
\[\langle\boldsymbol{\xi}^{(n)},\boldsymbol{u}\rangle\leq C\frac{\sigma}{\sqrt{ n}}\mathcal{G}\left(\mathcal{Q}(\boldsymbol{u})\mathbb{B}_{\mathcal{Q}}\cap \mathbb{B}_{2}^{n}\right)+C\frac{\sigma}{\sqrt{n}}\left[1+\sqrt{\log(1/\delta)} \right].\]
By a union bound, we obtain that, for every \(\delta\in(0,1/(4c+1)]\), with probability at least \(1-(4c+1)\delta\), for all \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\in\mathbb{B}_{\Pi}\times\mathbb{B}_{\Pi}\times\mathbb{B}_{2}^{n}\),
\[\begin{aligned}\langle\boldsymbol{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{V}+\mathbf{W},\boldsymbol{u})\rangle&=\langle\boldsymbol{\xi}^{(n)},\mathfrak{X}^{(n)}(\mathbf{V})\rangle+\langle\boldsymbol{\xi}^{(n)},\mathfrak{X}^{(n)}(\mathbf{W})\rangle+\langle\boldsymbol{\xi}^{(n)},\boldsymbol{u}\rangle\\&\leq C\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\log(1/\delta)}]\frac{\sigma L}{\sqrt{n}}+\frac{\sigma L}{n}\right\}\mathcal{G}\left(\mathcal{R}(\mathbf{V})\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{R}})\cap\mathbb{B}_{F}\right)\\&\quad+C\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\log(1/\delta)}]\frac{\sigma L}{\sqrt{n}}+\frac{\sigma L}{n}\right\}\mathcal{G}\left(\mathcal{S}(\mathbf{W})\mathfrak{S}^{1/2}(\mathbb{B}_{\mathcal{S}})\cap\mathbb{B}_{F}\right)\\&\quad+C\frac{\sigma}{\sqrt{n}}\mathcal{G}\left(\mathcal{Q}(\boldsymbol{u})\mathbb{B}_{\mathcal{Q}}\cap\mathbb{B}_{2}^{n}\right)\\&\quad+3C(\nicefrac{{\sigma L}}{{\sqrt{n}}})[1+\sqrt{\log(1/\delta)}]+2C(\nicefrac{{\sigma L}}{{n}})[\log(1/\delta)+\sqrt{\log(1/\delta)}],\end{aligned}\]
where we used that \(L\geq 1\).
To finish, we use that, for any \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\) with non-zero coordinates, the vector
\[\left[\frac{\mathbf{V}}{\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}}, \frac{\mathbf{W}}{\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}},\frac{ \boldsymbol{u}}{\|[\mathbf{V},\mathbf{W},\boldsymbol{u}]\|_{\Pi}}\right]\]
belongs to \(\mathbb{B}_{\Pi}\times\mathbb{B}_{\Pi}\times\mathbb{S}_{2}^{n}\) and use homogeneity of norms.
## 21 Lemmas for decomposable norms
Recall Definition 13 in Section 11. We first remind the reader that the \(\ell_{1}\) and nuclear norms are decomposable.
_Example 1_ (\(\ell_{1}\)-norm).: Given \(\mathbf{B}\in\mathds{R}^{p}\) with _sparsity support_ \(\mathscr{S}(\mathbf{B}):=\{[j,k]:\mathbf{B}_{j,k}\neq 0\}\), the \(\ell_{1}\)-norm in \(\mathds{R}^{p}\) satisfies the above decomposability condition with the map \(\mathbf{V}\mapsto\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V}):=\mathbf{V}_{\mathscr{S}(\mathbf{B})^{c}}\), where \(\mathbf{V}_{\mathscr{S}(\mathbf{B})^{c}}\) denotes the \(d_{1}\times d_{2}\) matrix whose entries are zero at indices in \(\mathscr{S}(\mathbf{B})\).
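A minimal numerical check of Example 1 (added as an illustration; it assumes that the decomposability of Definition 13 amounts to the usual additivity of the norm over \(\mathscr{S}(\mathbf{B})\) and its complement):

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 6, 5
support = rng.random((d1, d2)) < 0.2                     # sparsity support S(B)
B = np.where(support, rng.standard_normal((d1, d2)), 0.0)

V = rng.standard_normal((d1, d2))
P_perp_V = np.where(support, 0.0, V)                     # P_B^perp(V): zero out entries of S(B)

l1 = lambda M: np.abs(M).sum()
# Additivity of the l1 norm across S(B) and its complement:
print(np.isclose(l1(B + P_perp_V), l1(B) + l1(P_perp_V)))   # True
```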
_Example 2_ (Nuclear norm).: Let \(\mathbf{B}\in\mathds{R}^{p}\) with rank \(r:=\mathrm{rank}(\mathbf{B})\), singular values \(\{\sigma_{j}\}_{j\in[r]}\) and singular vector decomposition \(\mathbf{B}=\sum_{j\in[r]}\sigma_{j}\boldsymbol{u}_{j}\boldsymbol{v}_{j}^{\top}\). Here \(\{\boldsymbol{u}_{j}\}_{j\in[r]}\) are the left singular vectors spanning the subspace \(\mathcal{U}\) and \(\{\boldsymbol{v}_{j}\}_{j\in[r]}\) are the right singular vectors spanning the subspace \(\mathcal{V}\). The pair \((\mathcal{U},\mathcal{V})\) is sometimes referred
as the _low-rank support_ of \(\mathbf{B}\). Given a subspace \(S\subset\mathbb{R}^{\ell}\), let \(\mathbf{P}_{S^{\perp}}\) denote the matrix defining the orthogonal projection onto \(S^{\perp}\). Then, the map \(\mathbf{V}\mapsto\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V}):=\mathbf{P}_{\mathcal{U}^{\perp}}\mathbf{V}\mathbf{P}_{\mathcal{V}^{\perp}}^{\top}\) satisfies the decomposability condition for the nuclear norm \(\|\cdot\|_{N}\).
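Similarly, the nuclear norm is additive over pairs of matrices whose column and row spaces are orthogonal, which is what the map of Example 2 produces; a minimal numerical check (illustration only):

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, r = 8, 6, 2
U = np.linalg.qr(rng.standard_normal((d1, r)))[0]        # column space of B
Vr = np.linalg.qr(rng.standard_normal((d2, r)))[0]       # row space of B
B = U @ np.diag(rng.uniform(1.0, 2.0, r)) @ Vr.T

P_U_perp = np.eye(d1) - U @ U.T                           # projector onto U^perp
P_V_perp = np.eye(d2) - Vr @ Vr.T                         # projector onto V^perp
Delta = P_U_perp @ rng.standard_normal((d1, d2)) @ P_V_perp   # Delta = P_B^perp(M)

nuc = lambda A: np.linalg.norm(A, "nuc")
# ||B + Delta||_N = ||B||_N + ||Delta||_N since the supports are orthogonal:
print(np.isclose(nuc(B + Delta), nuc(B) + nuc(Delta)))        # True
```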
In the framework of regularized least-squares regression, decomposability is mostly useful because of the following lemma.
**Lemma 27** ([80]).: _Let \(\mathcal{R}\) be a decomposable norm over \(\mathds{R}^{p}\). Let \(\mathbf{B},\hat{\mathbf{B}}\in\mathds{R}^{p}\) and \(\mathbf{V}:=\hat{\mathbf{B}}-\mathbf{B}\). Then, for any \(\nu\in[0,1]\),_
\[\nu\mathcal{R}(\mathbf{V})+\mathcal{R}(\mathbf{B})-\mathcal{R}( \hat{\mathbf{B}})\leq(1+\nu)\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}) )-(1-\nu)\mathcal{R}(\mathcal{P}_{\mathbf{B}}^{\perp}(\mathbf{V})).\]
Next, we state a well-known lemma for the Slope norm that improves upon the previous lemma when the latter is specialized to the \(\ell_{1}\)-norm.
**Lemma 28** ([6]).: _Let \(o\in[n]\) and \(\boldsymbol{\theta},\hat{\boldsymbol{\theta}}\in\mathbb{R}^{n}\) be such that \(\|\boldsymbol{\theta}\|_{0}\leq o\). Set \(\boldsymbol{u}:=\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\). Then \(\|\boldsymbol{\theta}\|_{\sharp}-\|\hat{\boldsymbol{\theta}}\|_{\sharp}\leq\sum_{i=1}^{o}\omega_{i}\boldsymbol{u}_{i}^{\sharp}-\sum_{i=o+1}^{n}\omega_{i}\boldsymbol{u}_{i}^{\sharp}\). In particular, for any \(\nu\in[0,1]\),_
\[\nu\|\boldsymbol{u}\|_{\sharp}+\|\boldsymbol{\theta}\|_{\sharp}- \|\hat{\boldsymbol{\theta}}\|_{\sharp}\leq(1+\nu)\sum_{i=1}^{o}\omega_{i} \boldsymbol{u}_{i}^{\sharp}-(1-\nu)\sum_{i=o+1}^{n}\omega_{i}\boldsymbol{u}_{ i}^{\sharp}.\]
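The inequality of Lemma 28 is easy to check numerically; the sketch below assumes that \(\|\boldsymbol{\theta}\|_{\sharp}=\sum_{i}\omega_{i}\boldsymbol{\theta}_{i}^{\sharp}\) denotes the Slope norm, where \(\boldsymbol{\theta}^{\sharp}\) is the non-increasing rearrangement of \((|\boldsymbol{\theta}_{i}|)\) and \(\omega_{1}\geq\dots\geq\omega_{n}>0\) (this reading of the notation is taken from the main text and is an assumption here).

```python
import numpy as np

def slope_norm(x, w):
    """Slope norm: sum_i w_i * x_i^sharp, with x^sharp the non-increasing
    rearrangement of |x| and non-increasing weights w."""
    return np.sum(w * np.sort(np.abs(x))[::-1])

rng = np.random.default_rng(3)
n, o = 50, 5
w = np.sort(rng.uniform(0.5, 2.0, n))[::-1]               # non-increasing weights

theta = np.zeros(n)
theta[:o] = rng.standard_normal(o)                         # ||theta||_0 <= o
theta_hat = rng.standard_normal(n)
u_sharp = np.sort(np.abs(theta_hat - theta))[::-1]

lhs = slope_norm(theta, w) - slope_norm(theta_hat, w)
rhs = np.sum(w[:o] * u_sharp[:o]) - np.sum(w[o:] * u_sharp[o:])
print(lhs <= rhs + 1e-12)                                  # True, as predicted by Lemma 28
```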
## 22 Proof of Lemma 17
The first order condition of (6) at \([\hat{\mathbf{B}},\hat{\mathbf{\Gamma}},\hat{\boldsymbol{\theta}}]\) is equivalent to the statement: there exist \(\mathbf{V}\in\partial\mathcal{R}(\hat{\mathbf{B}})\), \(\mathbf{W}\in\partial\mathcal{S}(\hat{\mathbf{\Gamma}})\) and \(\boldsymbol{u}\in\partial\|\hat{\boldsymbol{\theta}}\|_{\sharp}\) such that for all \([\mathbf{B},\mathbf{\Gamma},\boldsymbol{\theta}]\) such that \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\),
\[\begin{aligned}\sum_{i\in[n]}\left[y_{i}^{(n)}-\mathfrak{X}_{i}^{(n)}(\widehat{\mathbf{B}}+\widehat{\mathbf{\Gamma}})-\hat{\boldsymbol{\theta}}_{i}\right]\left\langle\mathbf{X}_{i}^{(n)},\hat{\mathbf{B}}-\mathbf{B}\right\rangle&\geq\lambda\langle\!\langle\mathbf{V},\hat{\mathbf{B}}-\mathbf{B}\rangle\!\rangle,\\ \sum_{i\in[n]}\left[y_{i}^{(n)}-\mathfrak{X}_{i}^{(n)}(\widehat{\mathbf{B}}+\widehat{\mathbf{\Gamma}})-\hat{\boldsymbol{\theta}}_{i}\right]\left\langle\mathbf{X}_{i}^{(n)},\hat{\mathbf{\Gamma}}-\mathbf{\Gamma}\right\rangle&\geq\chi\langle\!\langle\mathbf{W},\hat{\mathbf{\Gamma}}-\mathbf{\Gamma}\rangle\!\rangle,\\ \langle\boldsymbol{y}^{(n)}-\mathfrak{X}^{(n)}(\hat{\mathbf{B}}+\hat{\mathbf{\Gamma}})-\hat{\boldsymbol{\theta}},\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\rangle&\geq\tau\langle\boldsymbol{u},\hat{\boldsymbol{\theta}}-\boldsymbol{\theta}\rangle.\end{aligned}\]
Setting \(\boldsymbol{\theta}=\boldsymbol{\theta}^{*}\) and using that \(\boldsymbol{y}^{(n)}=\boldsymbol{f}^{(n)}+\boldsymbol{\theta}^{*}+\boldsymbol {\xi}^{(n)}\), we obtain, for \([\mathbf{B},\mathbf{\Gamma}]\) such that \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\),
\[\begin{aligned}\sum_{i\in[n]}\left[\boldsymbol{\Delta}_{i}^{(n)}+\boldsymbol{\Delta}_{i}^{\hat{\boldsymbol{\theta}}}\right]\langle\!\langle\mathbf{X}_{i}^{(n)},\boldsymbol{\Delta}_{\mathbf{B}}\rangle\!\rangle&\leq\sum_{i\in[n]}\xi_{i}^{(n)}\langle\!\langle\mathbf{X}_{i}^{(n)},\boldsymbol{\Delta}_{\mathbf{B}}\rangle\!\rangle-\lambda\langle\!\langle\mathbf{V},\boldsymbol{\Delta}_{\mathbf{B}}\rangle\!\rangle,\\ \sum_{i\in[n]}\left[\boldsymbol{\Delta}_{i}^{(n)}+\boldsymbol{\Delta}_{i}^{\hat{\boldsymbol{\theta}}}\right]\langle\!\langle\mathbf{X}_{i}^{(n)},\boldsymbol{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle&\leq\sum_{i\in[n]}\xi_{i}^{(n)}\langle\!\langle\mathbf{X}_{i}^{(n)},\boldsymbol{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle-\chi\langle\!\langle\mathbf{W},\boldsymbol{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle,\\ \left\langle\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\right\rangle&\leq\langle\boldsymbol{\xi}^{(n)},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\rangle-\tau\langle\boldsymbol{u},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\rangle.\end{aligned}\]
Summing the above inequalities,
\[\begin{aligned}\langle\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}},\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}+\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\rangle&\leq\langle\boldsymbol{\xi}^{(n)},\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}+\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\rangle-\lambda\langle\!\langle\mathbf{V},\boldsymbol{\Delta}_{\mathbf{B}}\rangle\!\rangle-\chi\langle\!\langle\mathbf{W},\boldsymbol{\Delta}_{\mathbf{\Gamma}}\rangle\!\rangle-\tau\langle\boldsymbol{u},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\rangle\\&\leq\langle\boldsymbol{\xi}^{(n)},\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}+\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\rangle+\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}+\chi\big{(}\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})\big{)}+\tau\big{(}\|\boldsymbol{\theta}^{*}\|_{\sharp}-\|\hat{\boldsymbol{\theta}}\|_{\sharp}\big{)},\end{aligned}\]
where we used that26\(-\langle\!\langle\boldsymbol{\Delta}_{\mathbf{B}},\mathbf{V}\rangle\!\rangle \leq\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\), \(-\langle\!\langle\boldsymbol{\Delta}_{\mathbf{\Gamma}},\mathbf{W}\rangle\!\rangle \leq\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})\) and \(-\langle\!\langle\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}},\boldsymbol{u} \rangle\!\rangle\leq\|\boldsymbol{\theta}^{*}\|_{\sharp}-\|\hat{\boldsymbol{ \theta}}\|_{\sharp}\).
## 23 Proof of Lemma 18
By the parallelogram law,
\[\langle\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{M} ^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\bm {\theta}}})\rangle=\] \[=\frac{1}{2}\|\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_ {2}^{2}+\frac{1}{2}\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_ {\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{2}^{2}-\frac{1}{2}\|\mathfrak{ X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}.\]
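(The "parallelogram law" invoked here is the polarization identity \(\langle a,b\rangle=\tfrac{1}{2}\|a\|_{2}^{2}+\tfrac{1}{2}\|b\|_{2}^{2}-\tfrac{1}{2}\|a-b\|_{2}^{2}\), applied with \(a=\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\) and \(b=\mathfrak{M}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}+\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\); the third term corresponds to \(a-b=\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\boldsymbol{f}^{(n)}\), a reading of the notation inferred from the display above.)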
\(\mathrm{ARSC}\) implies in particular that
\[\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{2}^{2}\geq\left(\mathsf{d}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}-\mathsf{d}_{2}\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})-\mathsf{d}_{3}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})-\mathsf{d}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\right)_{+}^{2}-2|\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle_{\Pi}|,\]
noticing that, by assumption,
\[|\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle_{\Pi}|\leq\mathsf{f}_{*}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}}).\]
\(\mathrm{MP}\) implies that
\[\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta} _{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle\leq\mathsf{f}_{1}\|[ \mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}]\|_{\Pi}+\mathsf{f}_{2}\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\mathsf{ f}_{3}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})+\mathsf{f}_{4}\|\mathbf{\Delta}^{ \hat{\mathbf{\theta}}}\|_{\sharp}.\]
The claim of the lemma follows from the four previous displays and Lemma 17.
## 24 Proof of Lemma 19
In what follows, \(R_{\mathbf{B}}\,:=\,R_{\mathcal{R}}(\mathbf{V}|\mathbf{B})\,=\,\Psi_{ \mathcal{R}}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))\mu(\mathcal{C}_{\mathbf{B} }(2c_{0}))\). Similarly, \(R_{\mathbf{\Gamma}}:=R_{\mathcal{S}}(\mathbf{W}|\mathbf{\Gamma})\). By Cauchy-Schwarz,
\[\begin{aligned}\triangle_{\lambda,\chi,\tau}(\mathbf{V},\mathbf{W},\mathbf{u})&\leq(\nicefrac{{3}}{{2}})\left(\lambda\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))+\chi\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+\tau\eta\|\mathbf{u}\|_{2}\right)\\&\leq(\nicefrac{{3}}{{2}})\{\lambda^{2}R_{\mathbf{B}}^{2}+\chi^{2}R_{\mathbf{\Gamma}}^{2}+\tau^{2}\eta^{2}\}^{1/2}\|[\mathbf{V},\mathbf{W},\mathbf{u}]\|_{\Pi},\end{aligned}\]
that is, (31). We now split our argument into four cases.
**Case 1:** \(\mathbf{V}\in\mathcal{C}_{\mathbf{B}}(2c_{0})\) and \(\mathbf{W}\in\mathcal{C}_{\mathbf{\Gamma}}(2c_{0})\).
Decomposability of \((\mathcal{R},\mathcal{S})\) and \([\mathbf{V},\mathbf{W},\mathbf{u}]\in\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(c_{0}, \gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\eta)\) and Cauchy-Schwarz further imply
\[\begin{aligned}\lambda\mathcal{R}(\mathbf{V})+\chi\mathcal{S}(\mathbf{W})+\tau\|\mathbf{u}\|_{\sharp}&\leq(c_{0}+1)(\lambda\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))+\chi\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+\tau\eta\|\mathbf{u}\|_{2})\\&\leq(c_{0}+1)\{\lambda^{2}R_{\mathbf{B}}^{2}+\chi^{2}R_{\mathbf{\Gamma}}^{2}+\tau^{2}\eta^{2}\}^{1/2}\|[\mathbf{V},\mathbf{W},\mathbf{u}]\|_{\Pi}.\end{aligned}\]
**Case 2:** \(\mathbf{V}\notin\mathcal{C}_{\mathbf{B}}(2c_{0})\) and \(\mathbf{W}\in\mathcal{C}_{\mathbf{\Gamma}}(2c_{0})\).
As \([\mathbf{V},\mathbf{W},\mathbf{u}]\in\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(c_{0}, \gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\eta)\), we get
\[c_{0}\gamma_{\mathcal{R}}\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))\leq c _{0}\left[\gamma_{\mathcal{S}}\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+ \eta\|\mathbf{u}\|_{2}\right].\]
Hence,
\[\begin{aligned}\lambda\mathcal{R}(\mathbf{V})+\chi\mathcal{S}(\mathbf{W})+\tau\big{\|}\mathbf{u}\big{\|}_{\sharp}&\leq(c_{0}+1)(\lambda\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))+\chi\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+\tau\eta\|\mathbf{u}\|_{2})\\&\leq 2(c_{0}+1)(\chi\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+\tau\eta\|\mathbf{u}\|_{2})\\&\leq 2(c_{0}+1)\{\chi^{2}R_{\mathbf{\Gamma}}^{2}+\tau^{2}\eta^{2}\}^{1/2}\|[\mathbf{W},\mathbf{u}]\|_{\Pi}.\end{aligned}\]
**Case 3:** \(\mathbf{V}\in\mathcal{C}_{\mathbf{B}}(2c_{0})\) and \(\mathbf{W}\notin\mathcal{C}_{\mathbf{\Gamma}}(2c_{0})\).
This case follows very similarly to Case 2, exchanging the roles between \((\mathbf{V},\mathcal{R})\) and \((\mathbf{W},\mathcal{S})\). This leads
to the bounds:
\[\lambda\mathcal{R}(\mathbf{V})+\chi\mathcal{S}(\mathbf{W})+\tau\|\boldsymbol{u}\|_{ \sharp}\leq 2(c_{0}+1)\{\lambda^{2}R_{\mathbf{B}}^{2}+\tau^{2}\eta^{2}\}^{1/2}\| [\mathbf{V},\boldsymbol{u}]\|_{\Pi}.\]
**Case 4:** \(\mathbf{V}\notin\mathcal{C}_{\mathbf{B}}(2c_{0})\) and \(\mathbf{W}\notin\mathcal{C}_{\mathbf{\Gamma}}(2c_{0})\).
As \([\mathbf{V},\mathbf{W},\boldsymbol{u}]\in\mathcal{C}_{\mathbf{B},\mathbf{ \Gamma}}(c_{0},\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\eta)\), we get
\[c_{0}\left[\gamma_{\mathcal{R}}\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V} ))+\gamma_{\mathcal{S}}\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}) )\right]\leq c_{0}\eta\|\boldsymbol{u}\|_{2}.\]
Hence,
\[\begin{aligned}\lambda\mathcal{R}(\mathbf{V})+\chi\mathcal{S}(\mathbf{W})+\tau\big{\|}\boldsymbol{u}\big{\|}_{\sharp}&\leq(c_{0}+1)(\lambda\mathcal{R}(\mathcal{P}_{\mathbf{B}}(\mathbf{V}))+\chi\mathcal{S}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{W}))+\tau\eta\|\boldsymbol{u}\|_{2})\\&\leq 2(c_{0}+1)\tau\eta\|\boldsymbol{u}\|_{2}.\end{aligned}\]
Relation (32) follows by taking the largest bounds among all four cases.
## 25 Proof of Proposition 3
Let \(\blacksquare:=(\nicefrac{{\lambda}}{{4}})\mathcal{R}(\boldsymbol{\Delta}_{ \mathbf{B}})+(\nicefrac{{\lambda}}{{4}})\mathcal{S}(\boldsymbol{\Delta}_{ \mathbf{\Gamma}})+(\nicefrac{{\gamma}}{{4}})\|\boldsymbol{\Delta}^{\hat{ \boldsymbol{\theta}}}\|_{\sharp}\). By Lemma 18,
\[2\blacksquare+\|\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}^{2}+(\mathsf{d}_{1}\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}-(\nicefrac{{1}}{{\sigma}})\boldsymbol{\Delta})_{+}^{2}\leq\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\boldsymbol{f}^{(n)}\|_{2}^{2}+(2\mathsf{f}_{1}\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}-\boldsymbol{\Delta})+2(\boldsymbol{\Delta}+\blacksquare+\boldsymbol{\nabla}).\]
By Lemmas 27 and 28 (with \(\nu=1/2\)) and condition (iii'),
\[\boldsymbol{\Delta}+\blacksquare+\boldsymbol{\nabla} =[((\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}))+(\nicefrac{{ \lambda}}{{4}})]\,\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})+\lambda \big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[+[((\sigma\mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{*}))+( \nicefrac{{\lambda}}{{4}})]\,\mathcal{S}(\boldsymbol{\Delta}_{\mathbf{ \Gamma}})+\chi\big{(}\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{ \Gamma}})\big{)}\] \[+[((\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4}))+(\nicefrac{{ \gamma}}{{4}})]\,\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp} +\tau\big{(}\|\boldsymbol{\theta}^{*}\|_{\sharp}-\|\hat{\boldsymbol{\theta}} \|_{\sharp}\big{)}\] \[\leq(\nicefrac{{\lambda}}{{2}})\mathcal{R}(\boldsymbol{\Delta}_{ \mathbf{B}})+\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B }})\big{)}\] \[+(\nicefrac{{\chi}}{{2}})\mathcal{S}(\boldsymbol{\Delta}_{ \mathbf{\Gamma}})+\chi\big{(}\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{ \mathbf{\Gamma}})\big{)}\] \[+(\nicefrac{{\gamma}}{{2}})\|\boldsymbol{\Delta}^{\hat{ \boldsymbol{\theta}}}\|_{\sharp}+\tau\big{(}\|\boldsymbol{\theta}^{*}\|_{ \sharp}-\|\hat{\boldsymbol{\theta}}\|_{\sharp}\big{)}\] \[\leq\triangle_{\lambda,\chi,\tau}(\boldsymbol{\Delta}_{\mathbf{ B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{ \boldsymbol{\theta}}}|\mathbf{B},\mathbf{\Gamma}).\]
Next, we will define some local variables for convenience of notation. Let \(G:=\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}, \boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\), \(D:=\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\boldsymbol{f}^{(n)}\|_{2}\), \(x:=\|\boldsymbol{\Delta}^{(n)}+\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}} \|_{2},\) and \(r:=r_{\lambda,\chi,\tau}\Omega,\nicefrac{{3}}{{2}}(\boldsymbol{\Delta}_{ \mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}|\mathbf{B},\mathbf{\Gamma})\). Define also \(\triangle:=\triangle_{\lambda,\chi,\tau}(\boldsymbol{\Delta}_{\mathbf{B}}, \boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\boldsymbol{ \theta}}}|\mathbf{B},\mathbf{\Gamma})\) and
\[H :=(\nicefrac{{3\lambda}}{{2}})(\mathcal{R}\circ\mathcal{P}_{ \mathbf{B}})(\boldsymbol{\Delta}_{\mathbf{B}})+(\nicefrac{{3\chi}}{{2}})( \mathcal{S}\circ\mathcal{P}_{\mathbf{\Gamma}})(\boldsymbol{\Delta}_{\mathbf{ \Gamma}})+(\nicefrac{{3\tau}}{{2}})\|\boldsymbol{\Delta}^{\hat{\boldsymbol{ \theta}}}\|_{2},\] \[I :=(\nicefrac{{\lambda}}{{2}})(\mathcal{R}\circ\mathcal{P}_{ \mathbf{B}}^{\perp})(\boldsymbol{\Delta}_{\mathbf{B}})+(\nicefrac{{\chi}}{{2}})( \mathcal{S}\circ\mathcal{P}_{\mathbf{\Gamma}}^{\perp})(\boldsymbol{\Delta}_{ \mathbf{\Gamma}})+(\nicefrac{{\gamma}}{{2}})\sum_{i=o+1}^{n}\omega_{i}( \boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})_{i}^{\sharp}.\]
In particular, \(\triangle=H-I\).
The previous two bounds entail
\[2\blacksquare+x^{2}+(\mathsf{d}_{1}G-(\nicefrac{{\boldsymbol{\Delta}}}{{\sigma}}))_{+}^ {2}\leq D^{2}+(2\mathsf{f}_{1}G-\boldsymbol{\Delta})+2\triangle. \tag{56}\]
We split our argument into two cases.
**Case 1:** \(\mathsf{d}_{1}G\leq\boldsymbol{\Delta}/2\sigma\). We next show that \(\triangle\,\leq\,0\). If \(\triangle\,>\,0\) then \([\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}},\boldsymbol{\Delta}^{\hat{\theta}}]\in\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(3,\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},\Omega)\). By
Lemma 19 and (iii'), \(\blacktriangle\leq(\nicefrac{{1}}{{4}})[\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\chi\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})+\tau\big{\|}\mathbf{\Delta}^{\hat{\boldsymbol{\theta}}}\big{\|}_{\sharp}]\leq 2rG<2\sigma\mathrm{d}_{1}G,\) where we have used condition (23). This is a contradiction, showing that \(\triangle\leq 0\).
Now, condition (iv') implies that \((2\mathrm{f}_{1}G-\blacktriangle)\leq\sigma(\mathrm{d}_{1}G-(\blacktriangle/ \sigma))\leq-\blacktriangle/2.\) We thus obtain from (56) that
\[2\blacksquare+x^{2}+\frac{\blacktriangle}{2}\leq D^{2}. \tag{57}\]
In particular,
\[\mathrm{d}_{1}G\leq\frac{D^{2}}{\sigma}. \tag{58}\]
**Case 2:** \(\mathrm{d}_{1}G\geq\blacktriangle/2\sigma.\) In particular, from (56),
\[2\blacksquare+x^{2}+\frac{\mathrm{d}_{1}^{2}G^{2}}{4}\leq D^{2}+2\mathrm{f}_{1}G+2 \triangle. \tag{59}\]
We next consider two cases.
**Case 2.1:** \(\mathrm{f}_{1}G\geq H.\) Hence, \(\triangle\leq H\leq\mathrm{f}_{1}G.\) From (59),
\[2\blacksquare+x^{2}+\frac{\mathrm{d}_{1}^{2}G^{2}}{4}\leq D^{2}+4\mathrm{f}_{1}G.\]
From \(4\mathrm{f}_{1}G\leq\frac{16\mathrm{f}_{1}^{2}}{\mathrm{d}_{1}^{2}}+\frac{ \mathrm{d}_{1}^{2}G^{2}}{4}\), we get
\[2\blacksquare+x^{2}\leq D^{2}+\frac{16\mathrm{f}_{1}^{2}}{\mathrm{d}_{1}^{2}}. \tag{60}\]
If we use instead \(4\mathrm{f}_{1}G\leq\frac{32\mathrm{f}_{1}^{2}}{\mathrm{d}_{1}^{2}}+\frac{ \mathrm{d}_{1}^{2}G^{2}}{8}\), we get
\[\frac{\mathrm{d}_{1}^{2}G^{2}}{8}\leq D^{2}+\frac{32\mathrm{f}_{1}^{2}}{ \mathrm{d}_{1}^{2}}. \tag{61}\]
**Case 2.2:** \(\mathrm{f}_{1}G\leq H.\) Suppose first \(\triangle\leq 0\). From \(2\mathrm{f}_{1}G\leq\frac{4\mathrm{f}_{1}^{2}}{\mathrm{d}_{1}^{2}}+\frac{\mathrm{d}_{1}^{2}G^{2}}{4}\) and (59), we get
\[2\blacksquare+x^{2}\leq D^{2}+\frac{4\mathrm{f}_{1}^{2}}{\mathrm{d}_{1}^{2}}. \tag{62}\]
If instead we use \(2\mathrm{f}_{1}G\leq\frac{8\mathrm{f}_{1}^{2}}{\mathrm{d}_{1}^{2}}+\frac{ \mathrm{d}_{1}^{2}G^{2}}{8}\), we obtain
\[\frac{\mathrm{d}_{1}^{2}G^{2}}{8}\leq D^{2}+\frac{8\mathrm{f}_{1}^{2}}{ \mathrm{d}_{1}^{2}}. \tag{63}\]
Suppose now \(\triangle\geq 0\). Hence \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{ \Delta}^{\hat{\theta}}]\in\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(3,\gamma_{ \mathcal{R}},\gamma_{\mathcal{S}},\Omega)\). By Lemma 19, \(\triangle\leq 3rG/2\). From (59),
\[2\blacksquare+x^{2}+\frac{\mathrm{d}_{1}^{2}G^{2}}{4}\leq D^{2}+(2\mathrm{f}_{1}+3r )G.\]
Proceeding similarly to before, we obtain from the displayed bound that
\[2\blacksquare+x^{2} \leq D^{2}+\frac{(2\mathrm{f}_{1}+3r)^{2}}{\mathrm{d}_{1}^{2}}, \tag{64}\] \[\frac{\mathrm{d}_{1}^{2}G^{2}}{8} \leq D^{2}+\frac{2(2\mathrm{f}_{1}+3r)^{2}}{\mathrm{d}_{1}^{2}}. \tag{65}\]
The proof of (33) follows by taking the largest of the bounds in (57), (60), (62) and (64). The proof of (34) follows by taking the largest of the bounds in (58), (61), (63) and (65).
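For the reader's convenience, the absorption steps in Cases 2.1 and 2.2 all rely on the same elementary inequality: for \(a,b\geq 0\) and \(c>0\),
\[ab\;\leq\;\frac{a^{2}}{4c}+c\,b^{2},\]
applied with \(b=G\), \(a\in\{2\mathrm{f}_{1},\,4\mathrm{f}_{1},\,2\mathrm{f}_{1}+3r\}\) and \(c\in\{\mathrm{d}_{1}^{2}/4,\,\mathrm{d}_{1}^{2}/8\}\).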
## 26 Proof of Lemma 20
As \([\hat{\mathbf{B}},\hat{\mathbf{\Gamma}},\hat{\mathbf{\theta}}]\) is the minimizer of (6), in particular
\[[\hat{\mathbf{B}},\hat{\mathbf{\Gamma}}]\in\operatorname*{argmin}_{[\mathbf{B}, \mathbf{\Gamma}]:\|\mathbf{B}\|_{\infty}\leq\mathsf{a}}\left\{\frac{1}{2}\| \mathbf{y}^{(n)}-\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\hat{\mathbf{\theta}} \|_{2}^{2}+\lambda\mathcal{R}(\mathbf{B})+\chi\mathcal{S}(\mathbf{\Gamma}) \right\}.\]
By the first order condition, there exist \(\mathbf{V},\mathbf{W}\in\mathds{R}^{p}\) such that \(\mathcal{R}^{*}(\mathbf{V})\leq 1\), \(\left(\mathbf{V},\hat{\mathbf{B}}\right)=\mathcal{R}(\hat{\mathbf{B}})\), \(\mathcal{S}^{*}(\mathbf{W})\leq 1\), \(\left(\mathbf{W},\hat{\mathbf{\Gamma}}\right)=\mathcal{S}(\hat{\mathbf{\Gamma }})\), such that, for all \([\mathbf{B},\mathbf{\Gamma}]\) with \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\),
\[0 \leq\sum_{i\in[n]}\left[\mathfrak{X}_{i}^{(n)}(\hat{\mathbf{B}}+ \hat{\mathbf{\Gamma}})+\hat{\mathbf{\theta}}_{i}-y_{i}^{(n)}\right]\left\langle \mathbf{X}_{i}^{(n)},\mathbf{B}-\hat{\mathbf{B}}\right\rangle+\lambda(\! \mathbf{V},\mathbf{B}-\hat{\mathbf{B}}),\] \[0 \leq\sum_{i\in[n]}\left[\mathfrak{X}_{i}^{(n)}(\hat{\mathbf{B}}+ \hat{\mathbf{\Gamma}})+\hat{\mathbf{\theta}}_{i}-y_{i}^{(n)}\right]\left\langle \mathbf{X}_{i}^{(n)},\mathbf{\Gamma}-\hat{\mathbf{\Gamma}}\right\rangle+\chi( \!\mathbf{W},\mathbf{\Gamma}-\hat{\mathbf{\Gamma}}).\]
Summing the previous inequalities and using that \(\mathbf{y}^{(n)}=\mathbf{f}^{(n)}+\mathbf{\theta}^{*}+\mathbf{\xi}^{(n)}\) we obtain, for all \([\mathbf{B},\mathbf{\Gamma}]\) with \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\),
\[\left\langle\mathbf{\Delta}^{(n)},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\bm {\Delta}_{\mathbf{\Gamma}})\right\rangle\leq\left\langle\mathfrak{X}^{(n)}( \mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}}),\mathbf{\xi}^{(n)}-\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\right\rangle-\lambda(\!\mathbf{\Delta}_{\mathbf{B}}, \mathbf{V})-\chi(\!\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{W}).\]
Moreover, \(-(\!\mathbf{\Delta}_{\mathbf{B}},\mathbf{V})\leq\mathcal{R}(\mathbf{B})-\mathcal{ R}(\hat{\mathbf{B}})\). Similarly, \(-(\!\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{W})\!\leq\mathcal{S}(\mathbf{\Gamma })-\mathcal{S}(\hat{\mathbf{\Gamma}})\). Combining these bounds with the previous displays finishes the proof.
## 27 Proof of Lemma 21
By the parallelogram law,
\[\langle\mathbf{\Delta}^{(n)},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{ B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle=\] \[=\frac{1}{2}\|\mathbf{\Delta}^{(n)}\|_{2}^{2}+\frac{1}{2}\|\mathfrak{ X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\|_{2}^{2}- \frac{1}{2}\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}.\]
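The identity used here is the elementary expansion \(\langle a,b\rangle=\tfrac{1}{2}\|a\|_{2}^{2}+\tfrac{1}{2}\|b\|_{2}^{2}-\tfrac{1}{2}\|a-b\|_{2}^{2}\), applied with \(a=\mathbf{\Delta}^{(n)}\) and \(b=\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\), whose difference is the quantity appearing in the last term above:

\[\|a-b\|_{2}^{2}=\|a\|_{2}^{2}+\|b\|_{2}^{2}-2\langle a,b\rangle\quad\Longrightarrow\quad\langle a,b\rangle=\tfrac{1}{2}\|a\|_{2}^{2}+\tfrac{1}{2}\|b\|_{2}^{2}-\tfrac{1}{2}\|a-b\|_{2}^{2}.\]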
By \(\operatorname{ARSC}\) (with variable \(\mathbf{u}=\mathbf{0}\)),
\[\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\| _{2}^{2}\geq(\mathsf{d}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{ \Gamma}}]\|_{\Pi}-\mathsf{d}_{2}\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})-\mathsf{ d}_{3}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}}))_{+}^{2}-2|\langle\mathbf{\Delta}_{ \mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle|.\]
Also, by assumption,
\[|\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle|\leq \mathsf{f}_{*}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}}).\]
The previous three displays and (35) imply
\[\|\mathbf{\Delta}^{(n)}\|_{2}^{2}+(\mathsf{d}_{1}\|[\mathbf{\Delta}_{ \mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}-\mathsf{d}_{2}\mathcal{R}( \mathbf{\Delta}_{\mathbf{B}})-\mathsf{d}_{3}\mathcal{S}(\mathbf{\Delta}_{\mathbf{ \Gamma}}))_{+}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^ {2}\] \[+2\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{ X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle+2\mathsf{f}_{*} \mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})\] \[+2\lambda(\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}}))+2 \chi(\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})). \tag{66}\]
By IP,
\[\langle-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{X}^{(n)}(\mathbf{ \Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle \leq\mathsf{b}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{ \mathbf{\Gamma}}]\|_{\Pi}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{b}_{2} \mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+ \mathsf{b}_{3}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})\|\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}\|_{2}\] \[+\mathsf{b}_{4}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{ \Gamma}}]\|_{\Pi}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}.\]
By \(\operatorname{MP}\) (with variable \(\mathbf{u}=\mathbf{0}\)),
\[\langle\mathbf{\xi}^{(n)},\mathbf{\mathfrak{X}}^{(n)}(\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta }_{\mathbf{\Gamma}})\rangle\leq\mathsf{f}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{ \Delta}_{\mathbf{\Gamma}}]\|_{\Pi}+\mathsf{f}_{2}\mathcal{R}(\mathbf{\Delta}_{ \mathbf{B}})+\mathsf{f}_{3}\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}}).\]
The two previous displays imply
\[\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathbf{\mathfrak{X}}^{(n)} (\mathbf{\Delta}_{\mathbf{B}}+\mathbf{\Delta}_{\mathbf{\Gamma}})\rangle\] \[\leq\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_ {\Pi}\left(\mathsf{b}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{b}_{ 4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+\mathsf{f}_{1}\right)+ \mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})(\mathsf{b}_{2}\|\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}\|_{2}+\mathsf{f}_{2})+\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})( \mathsf{b}_{3}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{f}_{3}).\]
The proof of (36) follows from the previous display and (66).
## 28 Proof of Theorem 16
Let \(\hat{\mathbf{\Xi}}:=(\nicefrac{{\gamma}}{{4}})\mathcal{R}(\mathbf{\Delta}_{\mathbf{B }})+(\nicefrac{{\gamma}}{{4}})\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}}).\) By Lemma 21,
\[2\hat{\mathbf{\Xi}}+\|\mathbf{\Delta}^{(n)}\|_{2}^{2}+(\mathsf{d}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}-(\nicefrac{{\hat{\mathbf{\Delta}}}}{{\sigma}}))_{+}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}^{2}\] \[+\left(2\mathsf{f}_{1}+2\mathsf{b}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+2\mathsf{b}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\right)\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}-\hat{\mathbf{\Delta}}\] \[+2(\hat{\mathbf{\Delta}}+\hat{\mathbf{\Xi}}+\hat{\mathbf{\Psi}}). \tag{67}\]
Additionally, all conditions of Proposition 3 hold. Hence,
\[\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2} \leq(\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}})\bigvee\left((\nicefrac{{2\sqrt{2}}}{{\mathsf{d}_{1}}})D+\mathbf{\bigtriangleup}_{2}(\mathsf{f}_{1},r)\right)\leq\mathsf{c}_{*}\sigma, \tag{68}\] \[(\nicefrac{{\tau}}{{2}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp} \leq D^{2}+\mathbf{\bigtriangleup}_{2}(\mathsf{f}_{1},r)\leq\mathsf{c}_{*}^{2}\sigma^{2}, \tag{69}\]
where we have used conditions (24)-(25). In particular, by (iv),
\[(\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}+2\mathsf{b}_{2}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}) \leq\frac{\lambda}{4}, \tag{70}\] \[(\sigma\mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{*}+2\mathsf{b}_{3}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}) \leq\frac{\chi}{4}. \tag{71}\]
By Lemma 27 (with \(\nu=1/2\)) and (70)-(71),
\[\hat{\mathbf{\Delta}}+\hat{\mathbf{\Xi}}+\hat{\mathbf{\Psi}} =\left[((\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}+2\mathsf{b}_{2} \|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}))+(\nicefrac{{\gamma}}{{4}})\right] \mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\lambda\big{(}\mathcal{R}(\mathbf{B})- \mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[+\left[((\sigma\mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{* }+2\mathsf{b}_{3}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}))+(\nicefrac{{\gamma}}{{4 }})\right]\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})+\chi\big{(}\mathcal{S}( \mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})\big{)}\] \[\leq(\nicefrac{{\gamma}}{{2}})\mathcal{R}(\mathbf{\Delta}_{\mathbf{B }})+\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)} +(\nicefrac{{\gamma}}{{2}})\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})+\chi \big{(}\mathcal{S}(\mathbf{\Gamma})-\mathcal{S}(\hat{\mathbf{\Gamma}})\big{)}\] \[\leq\triangle_{\lambda,\chi,0}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{ \Delta}_{\mathbf{\Gamma}},\mathbf{0}|\mathbf{B},\mathbf{\Gamma})=:\triangle. \tag{72}\]
Next, we define the local variables
\[H :=(\nicefrac{{3\gamma}}{{2}})(\mathcal{R}\circ\mathcal{P}_{ \mathbf{B}})(\mathbf{\Delta}_{\mathbf{B}})+(\nicefrac{{3\gamma}}{{2}})(\mathcal{S} \circ\mathcal{P}_{\mathbf{\Gamma}})(\mathbf{\Delta}_{\mathbf{\Gamma}}),\] \[I :=(\nicefrac{{\gamma}}{{2}})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B }}^{\perp})(\mathbf{\Delta}_{\mathbf{B}})+(\nicefrac{{\gamma}}{{2}})(\mathcal{S} \circ\mathcal{P}_{\mathbf{\Gamma}}^{\perp})(\mathbf{\Delta}_{\mathbf{\Gamma}}),\]
noting that \(\triangle=H-I\). Recall \(D=\|\mathbf{\mathfrak{X}}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\mathbf{f}^{(n)}\|_{2}\). For convenience, we define the additional local variables \(G:=\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}]\|_{\Pi}\), \(x:=\|\mathbf{\Delta}^{(n)}\|_{2}\), and \(\hat{r}:=r_{\lambda,\chi,0,3}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{ \Gamma}}|\mathbf{B},\mathbf{\Gamma})\). Finally, let us define the auxiliary variables
\[F :=\mathsf{f}_{1}+\mathsf{b}_{1}\left((\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}})\bigvee\left((\nicefrac{{2\sqrt{2}}}{{\mathsf{d}_{1}}})D+\mathbf{\bigtriangleup}_{2}(\mathsf{f}_{1},r)\right)\right)+(2\mathsf{b}_{4}/\tau)\left(D^{2}+\mathbf{\bigtriangleup}_{2}(\mathsf{f}_{1},r)\right),\] \[\hat{\mathsf{f}}_{1} :=\mathsf{f}_{1}+\mathsf{b}_{1}(\mathsf{c}_{*}\sigma)+(2\mathsf{b}_{4}/\tau)(\mathsf{c}_{*}^{2}\sigma^{2}).\]
Note that, from (68)-(69) and condition (v), \(F\leq\hat{\mathsf{f}}_{1}\leq\sigma\mathsf{d}_{1}/2\).
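Spelling this out: by (68)-(69) the two \(D\)-dependent factors in the definition of \(F\) are at most \(\mathsf{c}_{*}\sigma\) and \(\mathsf{c}_{*}^{2}\sigma^{2}\), respectively, so that

\[F\leq\mathsf{f}_{1}+\mathsf{b}_{1}(\mathsf{c}_{*}\sigma)+(2\mathsf{b}_{4}/\tau)(\mathsf{c}_{*}^{2}\sigma^{2})=\hat{\mathsf{f}}_{1},\]

while \(\hat{\mathsf{f}}_{1}\leq\sigma\mathsf{d}_{1}/2\) by condition (v).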
From (67), (68)-(69), and (72),
\[2\hat{\mathbf{\Xi}}+x^{2}+(\mathsf{d}_{1}G-(\nicefrac{{\hat{\mathbf{\Delta}}}}{{\sigma}}))_{+}^{2}\leq D^{2}+(2FG-\hat{\mathbf{\Delta}})+2\triangle. \tag{73}\]
The rest of the proof uses arguments similar to those in the proof of Proposition 3. We split the argument into two cases.
**Case 1:**: \(\mathsf{d}_{1}G\leq\hat{\mathbf{\Delta}}/2\sigma\). We next show that \(\triangle\leq 0\). If \(\triangle>0\) then \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{0}]\in\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(3,\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},0)\). By Lemma 19 and (70)-(71), \(\hat{\mathbf{\Delta}}\leq(\nicefrac{{1}}{{4}})[\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\chi\mathcal{S}(\mathbf{\Delta}_{\mathbf{\Gamma}})]\leq 2\hat{r}G\leq 2rG<2\sigma\mathsf{d}_{1}G,\) where we have used \(\hat{r}\leq r\) and condition (23). This is a contradiction, showing that \(\triangle\leq 0\).

Now, since \(F\leq\sigma\mathsf{d}_{1}/2\) (as noted above), \((2FG-\hat{\mathbf{\Delta}})\leq\sigma(\mathsf{d}_{1}G-(\hat{\mathbf{\Delta}}/\sigma))\leq-\hat{\mathbf{\Delta}}/2.\) We thus obtain from (73) that

\[2\hat{\mathbf{\Xi}}+x^{2}+\frac{\hat{\mathbf{\Delta}}}{2}\leq D^{2}. \tag{74}\]

In particular,

\[\mathsf{d}_{1}G\leq\frac{D^{2}}{\sigma}. \tag{75}\]

**Case 2:**: \(\mathsf{d}_{1}G\geq\hat{\mathbf{\Delta}}/2\sigma.\) In particular, from (73),

\[2\hat{\mathbf{\Xi}}+x^{2}+\frac{\mathsf{d}_{1}^{2}G^{2}}{4}\leq D^{2}+2FG+2\triangle. \tag{76}\]

We next consider two cases.

**Case 2.1:**: \(FG\geq H.\) Hence, \(\triangle\leq H\leq FG.\) From (76),

\[2\hat{\mathbf{\Xi}}+x^{2}+\frac{\mathsf{d}_{1}^{2}G^{2}}{4}\leq D^{2}+4FG.\]

From \(4FG\leq\frac{16F^{2}}{\mathsf{d}_{1}^{2}}+\frac{\mathsf{d}_{1}^{2}G^{2}}{4}\), we get

\[2\hat{\mathbf{\Xi}}+x^{2}\leq D^{2}+\frac{16F^{2}}{\mathsf{d}_{1}^{2}}. \tag{77}\]

If we use instead \(4FG\leq\frac{32F^{2}}{\mathsf{d}_{1}^{2}}+\frac{\mathsf{d}_{1}^{2}G^{2}}{8}\), we get

\[\frac{\mathsf{d}_{1}^{2}G^{2}}{8}\leq D^{2}+\frac{32F^{2}}{\mathsf{d}_{1}^{2}}. \tag{78}\]

**Case 2.2:**: \(FG\leq H.\) Suppose first \(\triangle\leq 0\). From \(2FG\leq\frac{4F^{2}}{\mathsf{d}_{1}^{2}}+\frac{\mathsf{d}_{1}^{2}G^{2}}{4}\) and (76), we get

\[2\hat{\mathbf{\Xi}}+x^{2}\leq D^{2}+\frac{4F^{2}}{\mathsf{d}_{1}^{2}}. \tag{79}\]

If instead we use \(2FG\leq\frac{8F^{2}}{\mathsf{d}_{1}^{2}}+\frac{\mathsf{d}_{1}^{2}G^{2}}{8}\), we obtain

\[\frac{\mathsf{d}_{1}^{2}G^{2}}{8}\leq D^{2}+\frac{8F^{2}}{\mathsf{d}_{1}^{2}}. \tag{80}\]

Suppose now \(\triangle\geq 0\). Hence \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{0}]\in\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(3,\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},0)\). By Lemma 19, \(\triangle\leq 3\hat{r}G/2\). From (76),

\[2\hat{\mathbf{\Xi}}+x^{2}+\frac{\mathsf{d}_{1}^{2}G^{2}}{4}\leq D^{2}+(2F+3\hat{r})G.\]
Proceeding similarly as before, we obtain from the displayed bound that
\[2\hat{\mathbf{\Xi}}+x^{2} \leq D^{2}+\frac{(2F+3\hat{r})^{2}}{\mathsf{d}_{1}^{2}}, \tag{81}\] \[\frac{\mathsf{d}_{1}^{2}G^{2}}{8} \leq D^{2}+\frac{2(2F+3\hat{r})^{2}}{\mathsf{d}_{1}^{2}}. \tag{82}\]
The proof of (26) follows by taking the largest of the bounds in (74), (77), (79) and (81). The proof of (27) follows by taking the largest of the bounds in (75), (78), (80) and (82).
### Proof of Theorem 2
In the following \(\mathcal{R}:=\|\cdot\|_{N}\), \(\mathcal{S}:=\|\cdot\|_{1}\), \(\mathsf{a}=\mathsf{a}^{*}/\sqrt{n}\) and \(\mathsf{f}_{*}=2\mathsf{a}^{*}/\sqrt{n}\). Recall that \(\boldsymbol{\Sigma}\) is the identity matrix. We know that \(\mathscr{G}(\boldsymbol{\Sigma}^{1/2}\mathbb{B}_{\|\cdot\|_{N}})\lesssim\sqrt{d_{1}+d_{2}}\), \(\mathscr{G}(\boldsymbol{\Sigma}^{1/2}\mathbb{B}_{1}^{p})\lesssim\sqrt{\log p}\) and \(\mathscr{G}(\mathbb{B}_{\sharp})\lesssim 1\). Next, we assume that \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) for an absolute constant \(C_{0}\) to be determined below. We will also use that \(L\geq 1\). In the following, \(C>0\) is the universal constant stated in Proposition 2. Without loss of generality, we assume the constant \(\mathsf{c}_{*}\) in Theorem 16 is \(\geq 1\). No effort is made to optimize the numerical constants.
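Since \(\boldsymbol{\Sigma}\) is the identity here, the second of these width bounds is, for instance, just the classical estimate for the expected maximum of Gaussian coordinates (assuming, as is standard, that \(\mathscr{G}(T)\) denotes the Gaussian mean width \(\mathbb{E}\sup_{t\in T}\langle g,t\rangle\), with \(g\) a standard Gaussian vector):

\[\mathscr{G}(\mathbb{B}_{1}^{p})=\mathbb{E}\sup_{\|v\|_{1}\leq 1}\langle g,v\rangle=\mathbb{E}\|g\|_{\infty}\leq\sqrt{2\log(2p)}\lesssim\sqrt{\log p}.\]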
By Proposition 2(i) and taking \(C_{0}\geq 1\) large enough, we get that, on an event \(\mathcal{E}_{1}\) of probability \(\geq 1-\delta\), \(\operatorname{PP}_{\|\cdot\|_{N},\|\cdot\|_{1}}(\mathsf{c}_{1}(\delta), \mathsf{c}_{2},\mathsf{c}_{3},\mathsf{c}_{4})\) holds with constants
\[\mathsf{c}_{1}(\delta)\asymp CL^{2}\frac{1+\sqrt{\log(1/\delta)} }{\sqrt{n}},\quad\mathsf{c}_{2}\asymp CL^{2}\sqrt{\frac{d_{1}+d_{2}}{n}},\] \[\mathsf{c}_{3}\asymp CL^{2}\sqrt{\frac{\log p}{n}},\quad\mathsf{ c}_{4}\asymp CL^{2}\sqrt{\frac{d_{1}+d_{2}}{n}}\cdot\sqrt{\frac{\log p}{n}}.\]
By Proposition 2(ii) and taking \(C_{0}\gtrsim C^{2}\) large enough, we get that, on an event \(\mathcal{E}_{2}\) of probability \(\geq 1-2\delta\), \(\operatorname{RSC}_{\|\cdot\|_{N}}(\mathsf{a}_{1},\mathsf{a}_{2})\) and \(\operatorname{RSC}_{\|\cdot\|_{1}}(\bar{\mathsf{a}}_{1},\bar{\mathsf{a}}_{2})\) hold with constants \(\mathsf{a}_{1}=\bar{\mathsf{a}}_{1}\in(0,1)\) and
\[\mathsf{a}_{2}\asymp CL^{2}\sqrt{\frac{d_{1}+d_{2}}{n}},\quad\bar{\mathsf{a}} _{2}\asymp CL^{2}\sqrt{\frac{\log p}{n}}.\]
By Proposition 2(iii), on an event \(\mathcal{E}_{3}\) of probability \(\geq 1-\delta\), \(\operatorname{IP}_{\|\cdot\|_{N},\|\cdot\|_{1},\|\cdot\|_{\sharp}}(\mathsf{b} _{1}(\delta),\mathsf{b}_{2},\mathsf{b}_{3},\mathsf{b}_{4})\) holds with constants \(\mathsf{b}_{4}\asymp\frac{CL}{\sqrt{n}}\),
\[\mathsf{b}_{1}(\delta)\asymp CL\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}},\quad \mathsf{b}_{2}\asymp CL\sqrt{\frac{d_{1}+d_{2}}{n}},\quad\mathsf{b}_{3}\asymp CL \sqrt{\frac{\log p}{n}}.\]
By Proposition 2(iv) and \(C_{0}\geq 1\) large enough, we have that, on an event \(\mathcal{E}_{4}\) of probability \(\geq 1-\delta\), \(\operatorname{MP}_{\|\cdot\|_{N},\|\cdot\|_{1},\|\cdot\|_{\sharp}}(\mathsf{f}_{ 1}(\delta),\mathsf{f}_{2},\mathsf{f}_{3},\mathsf{f}_{4})\) holds with constants \(\mathsf{f}_{4}\asymp\frac{C\sigma}{\sqrt{n}}\),
\[\mathsf{f}_{1}(\delta)\asymp C\sigma L\frac{2+3\sqrt{\log(1/\delta)}}{\sqrt{n}},\quad\mathsf{f}_{2}\asymp C\sigma L\sqrt{\frac{d_{1}+d_{2}}{n}},\quad\mathsf{ f}_{3}\asymp C\sigma L\sqrt{\frac{\log p}{n}}.\]
Finally, by Lemma 12 and taking \(C_{0}\gtrsim C^{2}\) large enough, we have that, on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\cap\mathcal{E}_{4}\) of probability \(\geq 1-5\delta\), all the previously stated properties hold and \(\operatorname{ARSC}_{\|\cdot\|_{N},\|\cdot\|_{1},\|\cdot\|_{\sharp}}(\mathsf{d}_{1},\mathsf{d}_{2},\mathsf{d}_{3},\mathsf{d}_{4})\) holds with
constants \(\mathsf{d}_{1}\in(0,1)\), \(\mathsf{d}_{4}\asymp\frac{CL}{\sqrt{n}}\),
\[\mathsf{d}_{2}\asymp CL^{2}\sqrt{\frac{d_{1}+d_{2}}{n}},\quad\mathsf{d}_{3} \asymp CL^{2}\sqrt{\frac{\log p}{n}}.\]
The rest of the proof proceeds on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\cap\mathcal{E}_{4}\). We will invoke the general Theorem 16. Conditions (i)-(iii) were shown previously. We now verify conditions (iv)-(v). Note that
\[4[(\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}+2\mathsf{c}_{*}\sigma\mathsf{b}_ {2})]\lesssim(1+\mathsf{c}_{*})C\sigma L^{2}\sqrt{\frac{d_{1}+d_{2}}{n}}\asymp\lambda.\]
Also,
\[4[(\sigma\mathsf{d}_{3})\vee(2\mathsf{f}_{3}+2\mathsf{f}_{*}+2\mathsf{c}_{*} \sigma\mathsf{b}_{3})]\lesssim(1+\mathsf{c}_{*})C\sigma L^{2}\sqrt{\frac{\log p }{n}}+\frac{\mathsf{a}^{*}}{\sqrt{n}}\asymp\chi.\]
Next, we choose \(\tau\asymp C_{1}\mathsf{c}_{*}^{2}C\sigma L^{2}/\sqrt{n}\) for some absolute constant \(C_{1}\geq 1\). We have \(4[(\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4})]\lesssim C\sigma L/\sqrt{n} \lesssim\tau\). This shows (iv). Finally, by the choice of \(\tau\),
\[\hat{\mathsf{f}}_{1}=\mathsf{f}_{1}+\mathsf{c}_{*}(\sigma\mathsf{b}_{1})+2\mathsf{c}_{*}^{2}\sigma^{2}(\mathsf{b}_{4}/\tau)\lesssim(1+\mathsf{c}_{*})C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+\frac{1}{C_{1}L}\sigma,\]
which can be set strictly less than \(\sigma\mathsf{d}_{1}/2\asymp\sigma\) taking \(C_{0}\gtrsim C^{2}(1+\mathsf{c}_{*})^{2}\) and \(C_{1}\geq 1\) large enough. Hence, (v) holds.
In what follows, we verify the conditions (20)-(25). Let \([\mathbf{B},\mathbf{\Gamma}]\) with \(D=\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\boldsymbol{f}^{(n)}\|_{2}\), \(\operatorname{rank}(\mathbf{B})\leq r\), \(\|\mathbf{\Gamma}\|_{0}\leq s\) and \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}^{*}/\sqrt{n}\). Conditions (20)-(21) hold. Condition (22) also holds since, by the dual-norm inequality, isotropy and the facts that \(\|\hat{\mathbf{B}}\|_{\infty}\leq\mathsf{a}^{*}/\sqrt{n}\) and \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}^{*}/\sqrt{n}\),
\[|\langle\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}\rangle _{\Pi}|\leq\|\mathbf{\Delta}_{\mathbf{B}}\|_{\infty}\|\mathbf{\Delta}_{ \mathbf{\Gamma}}\|_{1}\leq\frac{2\mathsf{a}^{*}}{\sqrt{n}}\|\mathbf{\Delta}_{ \mathbf{\Gamma}}\|_{1}.\]
We have that \(\Psi_{\|\cdot\|_{N}}(\mathcal{P}_{\mathbf{B}}(\mathbf{\Delta}_{\mathbf{B}})) \leq\sqrt{r}\). By isotropy \(\mu(\mathbf{B}):=\mu\left(\mathcal{C}_{\mathbf{B},\|\cdot\|_{N}}(6)\right)=1\). Similarly, \(\Psi_{\|\cdot\|_{1}}(\mathcal{P}_{\mathbf{\Gamma}}(\mathbf{\Delta}_{\mathbf{ \Gamma}}))\leq\sqrt{s}\) and \(\mu(\mathbf{\Gamma}):=\mu\left(\mathcal{C}_{\mathbf{\Gamma},\|\cdot\|_{1}}(6) \right)=1\). By the choice of \((\lambda,\chi,\tau)\),
\[r^{2}\asymp(1+\mathsf{c}_{*})^{2}C^{2}\sigma^{2}L^{4}\cdot\frac{ r(d_{1}+d_{2})}{n}+(1+\mathsf{c}_{*})^{2}C^{2}\sigma^{2}L^{4}\cdot\frac{s\log p}{n}+ \frac{(\mathsf{a}^{*})^{2}s}{n}\] \[\qquad\qquad+C_{1}^{2}\mathsf{c}_{*}^{4}C^{2}\sigma^{2}L^{2} \epsilon\log(e/\epsilon),\]
where we used that \(\Omega^{2}\leq 2o\log(en/o)\). We have \(r<\sigma\mathsf{d}_{1}\asymp\sigma\) if
\[C_{2}(1+\mathsf{c}_{*})CL^{2}\cdot\sqrt{\frac{r(d_{1}+d_{2})}{n}} <1, \tag{83}\] \[C_{2}(1+\mathsf{c}_{*})CL^{2}\cdot\sqrt{\frac{s\log p}{n}}+ \mathsf{a}^{*}\sqrt{\frac{s}{n}} <1,\] (84) \[C_{1}\mathsf{c}_{*}^{2}CL^{2}\sqrt{\epsilon\log(e/\epsilon)}<c_{ 1}, \tag{85}\]
for some absolute constants \(C_{2}\geq 1\) and \(c_{1}\in(0,1)\).
Next, we show that \(D^{2}+\bigtriangleup_{2}(\mathsf{f}_{1},r)\leq\mathsf{c}_{*}^{2}\sigma^{2}\). For this to be true it is sufficient that \(D\leq\frac{\mathsf{c}_{*}\sigma}{3}\) and \(\bigtriangleup_{2}^{1/2}(\mathsf{f}_{1},r)\leq\frac{3\mathsf{c}_{*}\sigma}{4}\). Note that
\[\bigtriangleup_{2}^{1/2}(\mathsf{f}_{1},r)=\frac{4\mathsf{f}_{1}+3r}{\mathsf{d} _{1}}\lesssim C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+r,\]
which is not greater than \(\frac{3\mathsf{c}_{*}\sigma}{4}\) if
\[\mathcal{O}(1)CL\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}} \leq\frac{3\mathsf{c}_{*}}{16}\] \[\mathcal{O}(1)(1+\mathsf{c}_{*})CL^{2}\cdot\sqrt{\frac{r(d_{1}+d_ {2})}{n}} \leq\frac{3\mathsf{c}_{*}}{16},\] \[\mathcal{O}(1)(1+\mathsf{c}_{*})CL^{2}\cdot\sqrt{\frac{s\log p}{n }}+\mathcal{O}(1)\mathsf{a}^{*}\sqrt{\frac{s}{n}} \leq\frac{3\mathsf{c}_{*}}{16},\] \[\mathcal{O}(1)C_{1}\mathsf{c}_{*}^{2}CL^{2}\sqrt{\epsilon\log(e/ \epsilon)} \leq\frac{3\mathsf{c}_{*}}{16}.\]
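The sufficiency claimed above is elementary arithmetic: if \(D\leq\mathsf{c}_{*}\sigma/3\) and \(\bigtriangleup_{2}^{1/2}(\mathsf{f}_{1},r)\leq 3\mathsf{c}_{*}\sigma/4\), then

\[D^{2}+\bigtriangleup_{2}(\mathsf{f}_{1},r)\leq\frac{\mathsf{c}_{*}^{2}\sigma^{2}}{9}+\frac{9\,\mathsf{c}_{*}^{2}\sigma^{2}}{16}=\frac{97}{144}\,\mathsf{c}_{*}^{2}\sigma^{2}\leq\mathsf{c}_{*}^{2}\sigma^{2}.\]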
Next, we show that \([(D^{2}/\sigma\mathsf{d}_{1})]\bigvee\big{[}(2\sqrt{2}/\mathsf{d}_{1})D+\bigotimes_{2}(\mathsf{f}_{1},r)\big{]}\leq\mathsf{c}_{*}\sigma\). For that to happen it is sufficient that \(D\leq[(\sqrt{\mathsf{d}_{1}\mathsf{c}_{*}})\wedge(\mathsf{d}_{1}\mathsf{c}_{*}/6\sqrt{2})]\sigma\) and \(\bigotimes_{2}(\mathsf{f}_{1},r)\leq\frac{3\mathsf{c}_{*}\sigma}{4}\). Since \(\mathsf{d}_{1}\asymp 1\) and \(\mathsf{c}_{*}\geq 1\), the first relation is guaranteed if \(D\lesssim\sqrt{\mathsf{c}_{*}}\sigma\). We have
\[\bigotimes_{2}(\mathsf{f}_{1},r)=\frac{16\mathsf{f}_{1}+12r}{\mathsf{d}_{1}^{ 2}}\lesssim C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+r,\]
which is not greater than \(\frac{3\mathsf{c}_{*}\sigma}{4}\) if all the four conditions in the previous display are met -- up to enlarging the constants if necessary.
Optimizing the previous inequalities, we conclude that for conditions (20)-(25) to hold it is sufficient to take \(\mathsf{c}_{*}\asymp 1\) large enough, \(C_{0}\gtrsim C^{2}\) large enough and that (83), (84), (85) hold (possibly enlarging \(C_{2}\) if necessary) and \(D\lesssim\sigma\) holds (possibly decreasing the constant if necessary).
In conclusion, assume \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) with \(C_{0}\gtrsim C^{2}\) large enough and set the hyper-parameters to be
\[\lambda\asymp C\sigma L^{2}\sqrt{\frac{d_{1}+d_{2}}{n}},\quad\chi\asymp C \sigma L^{2}\sqrt{\frac{\log p}{n}}+\frac{\mathsf{a}^{*}}{\sqrt{n}},\quad \tau\asymp\frac{C_{1}C\sigma L^{2}}{\sqrt{n}}.\]
Assume (83), (84), (85) hold. Let \([\mathbf{B},\mathbf{\Gamma}]\) with \(D=\|\mathfrak{X}^{(n)}(\mathbf{B}+\mathbf{\Gamma})-\boldsymbol{f}^{(n)}\|_{2}\lesssim\sigma\), \(\mathrm{rank}(\mathbf{B})\leq r\), \(\|\mathbf{\Gamma}\|_{0}\leq s\) and \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}^{*}/\sqrt{n}\). Then conditions (i)-(v) and conditions (20)-(25) of Theorem 16 are satisfied, implying (26)-(27) for such \([\mathbf{B},\mathbf{\Gamma}]\). We now verify the statement of Theorem 2.
First, note that
\[\hat{r}^{2} \lesssim C^{2}\sigma^{2}L^{4}\cdot\frac{r(d_{1}+d_{2})}{n}+C^{2} \sigma^{2}L^{4}\cdot\frac{s\log p}{n}+\frac{(\mathsf{a}^{*})^{2}s}{n}\] \[r^{2} \lesssim\hat{r}^{2}+C_{1}^{2}C^{2}\sigma^{2}L^{4}\epsilon\log(e/ \epsilon),\]
Next, using that \(D\lesssim\sigma\), \(\mathsf{d}_{1}\asymp 1\), \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) and \(\mathsf{b}_{1}r\leq\frac{\sigma\mathsf{b}_{1}^{2}}{2}+\frac{r^{2}}{2\sigma}\),
\[F \lesssim\mathsf{f}_{1}+\mathsf{b}_{1}(\sigma+\mathsf{f}_{1}+r)+ \frac{1}{C_{1}L\sigma}(D^{2}+(\mathsf{f}_{1}+r)^{2})\] \[\lesssim(1+\mathsf{b}_{1})\mathsf{f}_{1}+\sigma\mathsf{b}_{1}+ \frac{\sigma\mathsf{b}_{1}^{2}}{2}+\frac{r^{2}}{2\sigma}+\frac{D}{C_{1}L}+ \frac{\mathsf{f}_{1}^{2}}{C_{1}L\sigma}+\frac{r^{2}}{C_{1}L\sigma}\] \[\lesssim\frac{D}{C_{1}L}+\mathsf{f}_{1}+\sigma\mathsf{b}_{1}+ \frac{r^{2}}{C_{1}L\sigma}.\]
From (83)-(85), we can write
\[\frac{r^{4}}{C_{1}^{2}L^{2}\sigma^{2}} \lesssim(\nicefrac{{C}}{{C_{1}}})^{2}\sigma^{2}L^{2}\cdot\frac{r (d_{1}+d_{2})}{n}+(\nicefrac{{C}}{{C_{1}}})^{2}\sigma^{2}L^{2}\cdot\frac{s\log p }{n}+\frac{1}{C_{1}^{2}L^{2}\sigma^{2}}\cdot\frac{(\mathsf{a}^{*})^{2}s}{n}\] \[+C_{1}^{2}C^{4}\sigma^{2}L^{6}\epsilon^{2}\log^{2}(e/\epsilon).\]
We have that
\[\spadesuit_{2}(F,\hat{r})=\frac{1}{\mathsf{d}_{1}^{2}}(4F+3\hat{r})^{2}\ \lesssim F^{2}+\hat{r}^{2}\lesssim\frac{D^{2}}{C_{1}^{2}L^{2}}+\mathsf{f}_{1}^{ 2}+\sigma^{2}\mathsf{b}_{1}^{2}+\hat{r}^{2}+\frac{r^{4}}{C_{1}^{2}L^{2}\sigma^ {2}}.\]
It follows that
\[D^{2}+\spadesuit_{2}(F,\hat{r}) \leq D^{2}\left(1+\frac{\mathcal{O}(1)}{C_{1}^{2}L^{2}}\right)\] \[+\mathcal{O}(1)C^{2}\cdot\sigma^{2}L^{2}\frac{1+\log(1/\delta)}{n}\] \[+\mathcal{O}(1)C^{2}\cdot\sigma^{2}L^{4}\cdot\frac{r(d_{1}+d_{2})}{n}+\mathcal{O}(1)C^{2}\cdot\sigma^{2}L^{4}\cdot\frac{s\log p}{n}+\left(1+\frac{\mathcal{O}(1)}{C_{1}^{2}\sigma^{2}L^{2}}\right)\frac{(\mathsf{a}^{*})^{2}s}{n}\] \[+\mathcal{O}(1)C^{4}\cdot C_{1}^{2}\sigma^{2}L^{6}\epsilon^{2}\log^{2}(e/\epsilon).\]
This establishes (9) in Theorem 2 using (26) in Theorem 16.
Note that, since \(D\lesssim\sigma\) and \(\mathsf{d}_{1}\asymp 1\),
\[\left(\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}}\right)\bigvee\left[\left( \nicefrac{{2\sqrt{2}}}{{\mathsf{d}_{1}}}\right)D+\spadesuit_{2}(F,\hat{r}) \right]\lesssim D+\spadesuit_{2}(F,\hat{r}).\]
Also, \(\spadesuit_{2}(F,\hat{r})\lesssim\spadesuit_{2}^{1/2}(F,\hat{r})\). Thus
\[\left(\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}}\right)\bigvee \left[\left(\nicefrac{{2\sqrt{2}}}{{\mathsf{d}_{1}}}\right)D+\spadesuit_{2}(F, \hat{r})\right] \leq D\left(\mathcal{O}(1)+\frac{\mathcal{O}(1)}{C_{1}L}\right)\] \[+\mathcal{O}(1)C\cdot\sigma L\frac{1+\sqrt{\log(1/\delta)}}{ \sqrt{n}}\] \[+\mathcal{O}(1)C\cdot\sigma L^{2}\cdot\sqrt{\frac{r(d_{1}+d_{2})} {n}}\] \[+\mathcal{O}(1)C\cdot\sigma L^{2}\cdot\sqrt{\frac{s\log p}{n}}+ \left(1+\frac{\mathcal{O}(1)}{C_{1}\sigma L}\right)\mathsf{a}^{*}\sqrt{\frac{ s}{n}}\] \[+\mathcal{O}(1)C^{2}\cdot C_{1}\sigma L^{3}\epsilon\log(e/ \epsilon).\]
This establishes (10) in Theorem 2 using (27) in Theorem 16.
## 30 Proof of Proposition 1
In trace-regression with matrix decomposition, the design is random. Up to conditioning on the feature data, the proof of Proposition 1 follows almost identical arguments to the proof of Theorem 2 in [2] for the matrix decomposition problem with identity design \((\mathfrak{X}=I)\). We present a sketch here for completeness.
We prepare the ground to apply Fano's method. For any \(\mathbf{\Theta}:=[\mathbf{B},\mathbf{\Gamma}]\in(\mathds{R}^{p})^{2}\), we define for convenience \(\|\mathbf{\Theta}\|_{F}^{2}:=\|\mathbf{B}\|_{F}^{2}+\|\mathbf{\Gamma}\|_{F}^{2}\). Given \(\eta>0\) and \(M\in\mathbb{N}\), an \(\eta\)-packing of \(\mathcal{A}(r,s,\mathsf{a}^{*})\) of size \(M\) is a finite subset \(\mathcal{A}=\{\mathbf{\Theta}_{1},\dots,\mathbf{\Theta}_{M}\}\) of \(\mathcal{A}(r,s,\mathsf{a}^{*})\) satisfying \(\|\mathbf{\Theta}_{\ell}-\mathbf{\Theta}_{k}\|_{F}\geq\eta\) for all \(\ell\neq k\). For model (3), let \(y_{1}^{n}:=\{y_{i}\}_{i\in[n]}\) and \(\mathbf{X}_{1}^{n}:=\{\mathbf{X}_{i}\}_{i\in[n]}\). For any \(k\in[M]\) and \(i\in[n]\), we will denote by \(P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{k}\) (or \(P_{y_{i}|\mathbf{X}_{1}^{n}}^{k}\)) the conditional distribution of \(y_{1}^{n}\) (or \(y_{i}\)) given \(\mathbf{X}_{1}^{n}\) corresponding to the model (3) with parameters \(\mathbf{\Theta}_{k}=[\mathbf{B}_{k},\mathbf{\Gamma}_{k}]\) belonging to the packing \(\mathcal{A}\). Being normal distributions, they are mutually absolutely continuous. We denote by \(\mathsf{KL}(\mathbf{P}\|\mathbf{Q})\) the Kullback-Leibler divergence between probability measures \(\mathbf{P}\) and \(\mathbf{Q}\). In that setting, Fano's method ensures that
\[\inf_{\hat{\mathbf{\Theta}}}\sup_{\mathbf{\Theta}^{*}\in\mathcal{A}(r,s,\mathsf{a}^{*})}\mathbb{P}_{\mathbf{\Theta}^{*}}\left\{\|\hat{\mathbf{\Theta}}-\mathbf{\Theta}^{*}\|_{F}^{2}\geq\eta^{2}\right\}\geq 1-\frac{\frac{1}{\binom{M}{2}}\sum_{k,\ell=1}^{M}\mathbb{E}_{\mathbf{X}_{1}^{n}}\mathsf{KL}\big{(}P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{k}\,\big{\|}\,P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{\ell}\big{)}+\log 2}{\log M}. \tag{86}\]
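The Kullback--Leibler terms in (86) are computed conditionally on the design, where both models are Gaussian with the same covariance; we use the standard identity, valid for any mean vectors \(\mu_{1},\mu_{2}\in\mathbb{R}^{n}\),

\[\mathsf{KL}\big{(}\mathcal{N}_{n}(\mu_{1},\sigma^{2}\mathbf{I})\,\big{\|}\,\mathcal{N}_{n}(\mu_{2},\sigma^{2}\mathbf{I})\big{)}=\frac{\|\mu_{1}-\mu_{2}\|_{2}^{2}}{2\sigma^{2}}.\]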
The proof follows from a union bound on the following two separate lower bounds.
_Lower bound on the low-spikeness bias._ It is sufficient to give a lower bound on \(\mathcal{A}(1,s,\mathsf{a}^{*})\). We define the subset \(\mathcal{A}\) of \(\mathcal{A}(1,s,\mathsf{a}^{*})\) with size \(M=4\) by
\[\mathcal{A}:=\{[\mathbf{B}^{*},-\mathbf{B}^{*}],[-\mathbf{B}^{*},\mathbf{B}^{* }],(\nicefrac{{1}}{{2}})[\mathbf{B}^{*},-\mathbf{B}^{*}],[\mathbf{0},\mathbf{ 0}]\}\]
using the matrix \(\mathbf{B}^{*}\in\mathds{R}^{p}\) defined by
\[\mathbf{B}^{*}:=\frac{\mathsf{a}^{*}}{\sqrt{n}}\left[\begin{array}{c}1\\ 0\\ \vdots\\ 0\end{array}\right]\underbrace{\left[\begin{array}{cccccc}1&1&\cdots&1&0& \cdots&0\end{array}\right]^{\top}}_{\boldsymbol{f}^{\top}},\]
where \(\boldsymbol{f}\in\mathbb{R}^{d_{2}}\) has \(s\) unit coordinates. It is easy to check (see the computation below) that \(\mathcal{A}\) is an \(\eta\)-packing of \(\mathcal{A}(1,s,\mathsf{a}^{*})\) with \(\eta=c_{0}\mathsf{a}^{*}\sqrt{\frac{s}{n}}\) for some constant \(c_{0}>0\). For any element \([\mathbf{B}_{k},\boldsymbol{\Gamma}_{k}]\) of \(\mathcal{A}\), \(\mathbf{B}_{k}+\boldsymbol{\Gamma}_{k}=0\), implying \(P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{k}\sim\mathcal{N}_{n}(0,n\sigma^{2}\mathbf{I})\). Hence, for any \(k\neq\ell\),
\[\mathsf{KL}\left(P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{k}\|P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{\ell}\right)=0.\]
From (86), one obtains a lower bound with rate of order \(\mathsf{a}^{*}\sqrt{\frac{s}{n}}\) with positive probability.
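For concreteness, the packing radius claimed above can be computed directly: \(\mathbf{B}^{*}\) has exactly \(s\) nonzero entries, each equal to \(\mathsf{a}^{*}/\sqrt{n}\), so that

\[\|\mathbf{B}^{*}\|_{F}=\mathsf{a}^{*}\sqrt{\frac{s}{n}},\]

and the closest pairs in \(\mathcal{A}\) (for instance \([\mathbf{B}^{*},-\mathbf{B}^{*}]\) and \((\nicefrac{{1}}{{2}})[\mathbf{B}^{*},-\mathbf{B}^{*}]\)) are at Frobenius distance \(\|\mathbf{B}^{*}\|_{F}/\sqrt{2}=(\mathsf{a}^{*}/\sqrt{2})\sqrt{s/n}\), which yields the stated \(\eta\) with \(c_{0}=1/\sqrt{2}\).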
_Lower bound on the estimation error._ From the packing constructions in Lemmas 5 and 6 in [2], one may show that, for \(d_{1},d_{2}\geq 10\), \(\mathsf{a}^{*}\geq 32\sqrt{\log p}\), \(s<p\) and any \(\eta>0\), there exists an \(\eta\)-packing \(\mathcal{A}=\{\boldsymbol{\Theta}_{k}\}_{k\in[M]}\) of \(\mathcal{A}(r,s,\mathsf{a}^{*})\) with size
\[M\geq\frac{1}{4}\exp\left\{\frac{s}{2}\log\frac{p-s}{s/2}+\frac{r(d_{1}+d_{2}) }{256}\right\}, \tag{87}\]
satisfying \(\|\boldsymbol{\Theta}_{k}\|_{F}\leq 3\eta\) for any \(k\in[M]\). By independence of \(\{\xi_{i}\}_{i\in[n]}\) and \(\mathbf{X}_{1}^{n}\) and isotropy,
\[\mathbb{E}_{\mathbf{X}_{1}^{n}}\mathsf{KL}\left(P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{k}\|P_{y_{1}^{n}|\mathbf{X}_{1}^{n}}^{\ell}\right)=\sum_{i\in[n]}\mathbb{E}_{\mathbf{X}_{1}^{n}}\mathsf{KL}\left(P_{y_{i}|\mathbf{X}_{i}}^{k}\|P_{y_{i}|\mathbf{X}_{i}}^{\ell}\right)=\frac{n\|\boldsymbol{\Theta}_{k}-\boldsymbol{\Theta}_{\ell}\|_{F}^{2}}{2\sigma^{2}}\leq\frac{18n}{\sigma^{2}}\eta^{2}.\]
From (86) and (87), one then checks that taking

\[\eta^{2}:=c_{0}\sigma^{2}\left\{\frac{r(d_{1}+d_{2})}{n}+\frac{s}{n}\log\left(\frac{p-s}{s/2}\right)\right\},\]
for some constant \(c_{0}>0\), one obtains a lower bound with rate of order \(\eta^{2}\) with positive probability.
## 31 Proof of Theorem 3, case (i)
In the following \(\mathcal{R}:=\|\cdot\|_{1}\) and \(\mathcal{S}\equiv 0\), \(\hat{\boldsymbol{\Gamma}}=\boldsymbol{\Gamma}=\boldsymbol{0}\), \(\mathsf{a}=\infty\) and \(\mathsf{f}_{*}=\chi=0\). A standard Gaussian maximal inequality implies \(\mathscr{G}(\boldsymbol{\Sigma}^{1/2}\mathbb{B}_{1}^{p})\lesssim\rho_{1}(\boldsymbol{\Sigma})\sqrt{\log p}\) and Proposition E.2 in [6] implies \(\mathscr{G}(\mathbb{B}_{\sharp})\lesssim 1\). Next, we assume that \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) for an absolute constant \(C_{0}\) to be determined below. We will also use that \(L\geq 1\). In the following, \(C>0\) is the universal constant stated in Proposition 2. Without loss of generality, we assume the constant \(\mathsf{c}_{*}\) in Theorem 16 is \(\geq 1\). No effort is made to optimize the numerical constants.
By Proposition 2(ii) and taking \(C_{0}\gtrsim C^{2}\) large enough, we get that, on an event \(\mathcal{E}_{1}\) of probability \(\geq 1-\delta\), \(\operatorname{RSC}_{\|\cdot\|_{1}}(\mathsf{a}_{1},\mathsf{a}_{2})\) holds with constants \(\mathsf{a}_{1}\in(0,1)\) and
\[\mathsf{a}_{2}\asymp CL^{2}\rho_{1}(\boldsymbol{\Sigma})\sqrt{\frac{\log p}{n}}.\]
By Proposition 2(iii), on an event \(\mathcal{E}_{2}\) of probability \(\geq 1-\delta\), \(\operatorname{IP}_{\|\cdot\|_{1},0,\|\cdot\|_{\sharp}}(\mathsf{b}_{1}(\delta), \mathsf{b}_{2},0,\mathsf{b}_{4})\) holds with constants
\(\mathsf{b}_{4}\asymp\frac{CL}{\sqrt{n}}\),
\[\mathsf{b}_{1}(\delta)\asymp CL\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}},\quad \text{and}\quad\mathsf{b}_{2}\asymp CL\rho_{1}(\mathbf{\Sigma})\sqrt{\frac{\log p}{n}}.\]
By Proposition 2(iv) and \(C_{0}\geq 1\) large enough, we have that, on an event \(\mathcal{E}_{3}\) of probability \(\geq 1-\delta\), \(\operatorname{MP}_{\|\cdot\|_{1},0,\|\cdot\|_{\sharp}}(\mathsf{f}_{1}(\delta),\mathsf{f}_{2},0,\mathsf{f}_{4})\) holds with constants \(\mathsf{f}_{4}\asymp\frac{C\sigma}{\sqrt{n}}\),
\[\mathsf{f}_{1}(\delta)\asymp C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}},\quad\text{and}\quad\mathsf{f}_{2}\asymp C\sigma L\rho_{1}(\mathbf{\Sigma})\sqrt{ \frac{\log p}{n}}.\]
Finally, from Lemma 11 and taking \(C_{0}\gtrsim C^{2}\) large enough, we have that, on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\) of probability \(\geq 1-3\delta\), all the previously stated properties hold and \(\operatorname{ARSC}_{\|\cdot\|_{1},0,\|\cdot\|_{\sharp}}(\mathsf{d}_{1},\mathsf{d}_{2},0,\mathsf{d}_{4})\) holds\({}^{27}\) with constants \(\mathsf{d}_{1}\in(0,1)\), \(\mathsf{d}_{4}\asymp\frac{CL}{\sqrt{n}}\),
Footnote 27: Since there is no matrix decomposition, the proof of Theorem 3 only needs a particular version of the mentioned properties – with \(\mathcal{S}=0\) and \(\mathbf{W}=\mathbf{0}\). We give a proof derived from the unified Theorem 16 – which handles matrix decomposition.
\[\mathsf{d}_{2}\asymp CL^{2}\rho_{1}(\mathbf{\Sigma})\sqrt{\frac{\log p}{n}}.\]
The rest of the proof proceeds on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\). We will invoke the general Theorem 16. Conditions (i)-(iii) were shown previously. We now verify conditions (iv)-(v). Note that
\[4[(\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2}+2\mathsf{c}_{*} \sigma\mathsf{b}_{2})]\lesssim(1+\mathsf{c}_{*})C\sigma L^{2}\rho_{1}(\mathbf{ \Sigma})\sqrt{\frac{\log p}{n}}\asymp\lambda.\]
Next, we choose \(\tau\asymp C_{1}\mathsf{c}_{*}^{2}C\sigma L^{2}/\sqrt{n}\) for some absolute constant \(C_{1}\geq 1\). We have \(4[(\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4})]\lesssim C\sigma L/\sqrt{n}\lesssim\tau\). This shows (iv). Finally, by the choice of \(\tau\),
\[\hat{\mathsf{f}}_{1}=\mathsf{f}_{1}+\mathsf{c}_{*}(\sigma\mathsf{b}_{1})+2\mathsf{c}_{*}^{2}\sigma^{2}(\mathsf{b}_{4}/\tau)\lesssim(1+\mathsf{c}_{*})C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+\frac{1}{C_{1}L}\sigma,\]
which can be set strictly less than \(\sigma\mathsf{d}_{1}/2\asymp\sigma\) taking \(C_{0}\gtrsim C^{2}(1+\mathsf{c}_{*})^{2}\) and \(C_{1}\geq 1\) large enough. Hence, (v) holds.
In what follows, we verify the conditions (20)-(25). Let \(\mathbf{b}\) with \(D=\left\lVert\mathfrak{X}^{(n)}(\mathbf{b})-\mathbf{f}^{(n)}\right\rVert_{2}\) and \(\left\lVert\mathbf{b}\right\rVert_{0}\leq s\). Note that conditions (21)-(22) are trivially satisfied. We have that \(\Psi_{\left\lVert\cdot\right\rVert_{1}}(\mathcal{P}_{\mathbf{b}}(\mathbf{\Delta}_{\mathbf{b}}))\leq\sqrt{s}\). Let us denote \(\mu(\mathbf{b}):=\mu\left(\mathcal{C}_{\mathbf{b},\left\lVert\cdot\right\rVert_{1}}(6)\right)\). By the choice of \((\lambda,\tau)\),
\[r^{2} \asymp(1+\mathsf{c}_{*})^{2}C^{2}\sigma^{2}L^{4}\rho_{1}^{2}(\mathbf{ \Sigma})\mu^{2}(\mathbf{b})\cdot\frac{s\log p}{n}\] \[+C_{1}^{2}\mathsf{c}_{*}^{4}C^{2}\sigma^{2}L^{4}\epsilon\log(e/ \epsilon),\]
where we used that \(\Omega^{2}\leq 2o\log(en/o)\). We have \(r<\sigma\mathsf{d}_{1}\asymp\sigma\) if
\[C_{2}(1+\mathsf{c}_{*})CL^{2}\rho_{1}(\mathbf{\Sigma})\mu(\mathbf{b}) \cdot\sqrt{\frac{s\log p}{n}}<1, \tag{88}\] \[C_{1}\mathsf{c}_{*}^{2}CL^{2}\sqrt{\epsilon\log(e/\epsilon)}<c_{1}, \tag{89}\]
for some absolute constants \(C_{2}\geq 1\) and \(c_{1}\in(0,1)\).
Next, we show that \(D^{2}+\mathbf{\phi}_{2}(\mathsf{f}_{1},r)\leq\mathsf{c}_{*}^{2}\sigma^{2}\). For this to be true it is sufficient that \(D\leq\frac{\mathsf{c}_{*}\sigma}{3}\) and \(\mathbf{\phi}_{2}^{1/2}(\mathsf{f}_{1},r)\leq\frac{\mathsf{3c}_{*}\sigma}{4}\). Note that
\[\mathbf{\phi}_{2}^{1/2}(\mathsf{f}_{1},r)=\frac{4\mathsf{f}_{1}+3r}{ \mathsf{d}_{1}}\lesssim C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+r,\]
which is not greater than \(\frac{3\mathsf{c}_{*}\sigma}{4}\) if
\[\mathcal{O}(1)CL\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}} \leq\frac{\mathsf{c}_{*}}{4}\] \[\mathcal{O}(1)(1+\mathsf{c}_{*})CL^{2}\rho_{1}(\mathbf{\Sigma})\mu( \mathbf{b})\cdot\sqrt{\frac{s\log p}{n}} \leq\frac{\mathsf{c}_{*}}{4},\] \[\mathcal{O}(1)C_{1}\mathsf{c}_{*}^{2}CL^{2}\sqrt{\epsilon\log(e/ \epsilon)} \leq\frac{\mathsf{c}_{*}}{4}.\]
Next, we show that \(\left[(\nicefrac{{D^{2}}}{{\sigma d_{1}}})\right]\bigvee\left[(\nicefrac{{2 \sqrt{2}}}{{d_{1}}})D+\bigtriangleup_{2}(\mathsf{f}_{1},r)\right]\leq\mathsf{c }_{*}\sigma\). For that to happen it is sufficient that \(D\leq[(\sqrt{\mathsf{d}_{1}\mathsf{c}_{*}})\wedge(\mathsf{d}_{1}\mathsf{c}_{* }/6\sqrt{2})]\sigma\) and \(\bigtriangleup_{2}(\mathsf{f}_{1},r)\leq\frac{3\mathsf{c}_{*}\sigma}{4}\). Since \(\mathsf{d}_{1}\asymp 1\) and \(\mathsf{c}_{*}\geq 1\), the first relation is guaranteed if \(D\lesssim\sqrt{\mathsf{c}_{*}}\sigma\). We have
\[\bigtriangleup_{2}(\mathsf{f}_{1},r)=\frac{16\mathsf{f}_{1}+12r} {\mathsf{d}_{1}^{2}}\lesssim C\sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n} }+r,\]
which is not greater than \(\frac{3\mathsf{c}_{*}\sigma}{4}\) if the three conditions in the previous display are met -- up to enlarging the constants if necessary.
Optimizing the previous inequalities, we conclude that for conditions (20)-(25) to hold it is sufficient to take \(\mathsf{c}_{*}\asymp 1\) large enough, \(C_{0}\gtrsim C^{2}\) large enough and that (88)-(89) and \(D\lesssim\sigma\) (possibly with a small enough constant) hold.
In conclusion, assume \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) with \(C_{0}\gtrsim C^{2}\) large enough and set the hyper-parameters to be
\[\lambda\asymp C\sigma L^{2}\rho_{1}(\mathbf{\Sigma})\sqrt{\frac{\log p }{n}},\quad\tau\asymp\frac{C_{1}C\sigma L^{2}}{\sqrt{n}}.\]
Assume (89) holds. Let \(\mathbf{b}\in\mathbb{R}^{p}\) be such that \(D=\|\mathfrak{X}^{(n)}(\mathbf{b})-\mathbf{f}^{(n)}\|_{2}\lesssim\sigma\) and such that (88) holds. Then conditions (i)-(v) and conditions (20)-(25) of Theorem 16 are satisfied, implying (26)-(27) for such \(\mathbf{b}\). We now verify the statement of Theorem 3, case (i).
First, note that
\[\hat{r}^{2}=s\lambda^{2}\mu^{2}(\mathbf{b}) \lesssim C^{2}\sigma^{2}L^{4}\rho_{1}^{2}(\mathbf{\Sigma})\mu^{2}(\bm {b})\cdot\frac{s\log p}{n}\] \[r^{2} \lesssim\hat{r}^{2}+C_{1}^{2}C^{2}\sigma^{2}L^{4}\epsilon\log(e/ \epsilon),\]
Next, using that \(D\lesssim\sigma\), \(\mathsf{d}_{1}\asymp 1\), \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) and \(\mathsf{b}_{1}r\leq\frac{\sigma\mathsf{b}_{1}^{2}}{2}+\frac{r^{2}}{2\sigma}\),
\[F \lesssim\mathsf{f}_{1}+\mathsf{b}_{1}(\sigma+\mathsf{f}_{1}+r)+ \frac{1}{C_{1}\sigma L}(D^{2}+(\mathsf{f}_{1}+r)^{2})\] \[\lesssim(1+\mathsf{b}_{1})\mathsf{f}_{1}+\sigma\mathsf{b}_{1}+ \frac{\sigma\mathsf{b}_{1}^{2}}{2}+\frac{r^{2}}{2\sigma}+\frac{D}{C_{1}L}+ \frac{\mathsf{f}_{1}^{2}}{C_{1}\sigma L}+\frac{r^{2}}{C_{1}\sigma L}\] \[\lesssim\frac{D}{C_{1}L}+\mathsf{f}_{1}+\sigma\mathsf{b}_{1}+ \frac{r^{2}}{C_{1}\sigma L}.\]
From (88)-(89), we can write
\[\frac{r^{4}}{C_{1}^{2}\sigma^{2}L^{2}}\lesssim(\nicefrac{{C}}{{C_{1}}})^{2} \sigma^{2}L^{2}\rho_{1}^{2}(\mathbf{\Sigma})\mu^{2}(\mathbf{b})\cdot\frac{s\log p}{n}+ C_{1}^{2}C^{4}\sigma^{2}L^{6}\epsilon^{2}\log^{2}(e/\epsilon).\]
We have that
\[\bigtriangleup_{2}(F,\hat{r})=\frac{1}{\mathsf{d}_{1}^{2}}(4F+3\hat{r})^{2}\lesssim F^{2}+\hat{r}^{2}\lesssim\frac{D^{2}}{C_{1}^{2}L^{2}}+\mathsf{f}_{1}^{2}+\sigma^{2}\mathsf{b}_{1}^{2}+\hat{r}^{2}+\frac{r^{4}}{C_{1}^{2}\sigma^{2}L^{2}}.\]
It follows that
\[D^{2}+\bigtriangleup_{2}(F,\hat{r}) \leq D^{2}\left(1+\frac{\mathcal{O}(1)}{C_{1}^{2}L^{2}}\right)\] \[+\mathcal{O}(1)C^{2}\cdot\sigma^{2}L^{2}\frac{1+\log(1/\delta)}{n }+\mathcal{O}(1)C^{2}\cdot\sigma^{2}L^{4}\rho_{1}^{2}(\mathbf{\Sigma})\mu^{2}( \boldsymbol{b})\cdot\frac{s\log p}{n}\] \[+\mathcal{O}(1)C^{4}\cdot C_{1}^{2}\sigma^{2}L^{6}\epsilon^{2} \log^{2}(e/\epsilon).\]
This establishes (13) in Theorem 3, case (i) using (26) in Theorem 16.
Note that, since \(D\lesssim\sigma\) and \(\mathsf{d}_{1}\asymp 1\),
\[(\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}})\bigvee\big{[}(\nicefrac{{2 \sqrt{2}}}{{\mathsf{d}_{1}}})D+\bigtriangleup_{2}(F,\hat{r})\big{]}\lesssim D +\bigtriangleup_{2}(F,\hat{r}).\]
Also, \(\bigtriangleup_{2}(F,\hat{r})\lesssim\bigtriangleup_{2}^{1/2}(F,\hat{r})\). Thus
\[(\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}})\bigvee\big{[}( \nicefrac{{2\sqrt{2}}}{{\mathsf{d}_{1}}})D+\bigtriangleup_{2}(F,\hat{r})\big{]} \leq D\left(\mathcal{O}(1)+\frac{\mathcal{O}(1)}{C_{1}L}\right)\] \[+\mathcal{O}(1)C\cdot\sigma L\frac{1+\sqrt{\log(1/\delta)}}{ \sqrt{n}}+\mathcal{O}(1)C\cdot\sigma L^{2}\rho_{1}(\mathbf{\Sigma})\mu( \boldsymbol{b})\cdot\sqrt{\frac{s\log p}{n}}\] \[+\mathcal{O}(1)C^{2}\cdot C_{1}\sigma L^{3}\epsilon\log(e/ \epsilon).\]
This establishes (14) in Theorem 3, case (i) using (27) in Theorem 16.
## 32 Proof of Theorem 3, case (ii)
We need to consider different cones. Recall that \(\Omega:=\sqrt{\sum_{i=1}^{o}\omega_{i}^{2}}\) and let \(\bar{\Omega}_{s}:=\sqrt{\sum_{j=1}^{s}\bar{\omega}_{j}^{2}}\).
_Definition \(29\)_.: For \(c_{0},\gamma>0\), let
\[\overline{\mathcal{C}}_{s}(c_{0}) :=\left\{\boldsymbol{v}\in\mathbb{R}^{p}:\sum_{j=s+1}^{p}\bar{ \omega}_{j}\boldsymbol{v}_{j}^{\sharp}\leq c_{0}\bar{\Omega}_{s}\|\boldsymbol{ v}\|_{2}\right\},\] \[\overline{\mathcal{C}}_{s}(c_{0},\gamma) :=\left\{[\boldsymbol{v},\boldsymbol{u}]\in\mathbb{R}^{p}\times \mathbb{R}^{n}:\gamma\sum_{j=s+1}^{p}\bar{\omega}_{j}\boldsymbol{v}_{j}^{ \sharp}+\sum_{i=o+1}^{n}\omega_{i}\boldsymbol{u}_{i}^{\sharp}\leq c_{0}\left[ \gamma\bar{\Omega}_{s}\|\boldsymbol{v}\|_{2}+\Omega\|\boldsymbol{u}\|_{2} \right]\right\}.\]
_Definition \(30\)_.: Given \([\boldsymbol{v},\boldsymbol{u}]\in\mathbb{R}^{p}\times\mathbb{R}^{n}\), \(s\in[n]\) and \(c_{0},\alpha>0\), define
\[r_{\lambda,\alpha,c_{0}}(s):=\{\lambda^{2}\bar{\Omega}_{s}^{2}\mu^{2}( \overline{\mathcal{C}}_{s}(2c_{0}))+\alpha^{2}\}^{1/2},\]
and
\[\triangle_{\lambda,\tau}(\boldsymbol{v},\boldsymbol{u}):=(\nicefrac{{3\lambda \bar{\Omega}_{s}}}{{2}})\|\boldsymbol{v}\|_{2}-(\nicefrac{{\lambda}}{{2}}) \sum_{j=s+1}^{p}\bar{\omega}_{j}\boldsymbol{v}_{j}^{\sharp}+(\nicefrac{{3\tau \Omega}}{{2}})\|\boldsymbol{u}\|_{2}-(\nicefrac{{\tau}}{{2}})\sum_{i=o+1}^{n }\omega_{i}\boldsymbol{u}_{i}^{\sharp}.\]
The proof with \(\mathcal{R}=\|\cdot\|_{\sharp}\), the Slope norm in \(\mathbb{R}^{p}\), follows a similar path to the proof of Theorem 3, case (i). First, we claim that a very similar theorem to Theorem 16 holds but with the minor changes:
We set \(\mathcal{S}\equiv 0\), \(\hat{\boldsymbol{\Gamma}}=\boldsymbol{\Gamma}=\boldsymbol{0}\), \(\mathsf{a}=\infty\) and \(\mathsf{f}_{*}=\chi=0\). As (21)-(22) are trivially satisfied they can be removed. Also, we replace \(r:=r_{\lambda,\chi,\tau\Omega,3}(\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}|\mathbf{B},\boldsymbol{\Gamma})\) with \(r:=r_{\lambda,\tau\Omega,3}(s)\), and \(\hat{r}:=r_{\lambda,\chi,0,3}(\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}_{\mathbf{\Gamma}}|\mathbf{B},\boldsymbol{\Gamma})\) with \(\hat{r}:=r_{\lambda,0,3}(s)\).
Let us call it Theorem 16\({}^{\prime}\). Using this theorem, setting \(\mu(\mathbf{b}):=\mu(\overline{\mathcal{C}}_{s}(6))\) for given \(\mathbf{b}\) with \(\|\mathbf{b}\|_{0}\leq s\), using the bound \(\mathscr{G}(\mathbf{\Sigma}^{1/2}\mathbb{B}_{\sharp})\lesssim\rho_{1}(\mathbf{\Sigma})\) -- which follows from Proposition E.2 in [6] -- and the fact that \(\bar{\Omega}_{s}^{2}\leq 2s\log(ep/s)\), the proof of Theorem 3, case (ii) follows very similar arguments to the case (i).
Next, we highlight the minor changes in the proof of Theorem 16\({}^{\prime}\). Lemmas 17-18 are unchanged. First, we obtain a variation of Lemma 19 -- with a similar proof.\({}^{28}\)
Footnote 28: In fact, we only need to consider the cases \(\mathbf{v}\in\overline{c}_{s}(2c_{0})\) and \(\mathbf{v}\notin\overline{c}_{s}(2c_{0})\).
**Lemma 31**.: _Define \(\gamma:=\lambda/\tau\) and let \(c_{0}>0\). Then, for any \([\mathbf{v},\mathbf{u}]\in\overline{c}_{s}(c_{0},\gamma)\),_
\[\triangle_{\lambda,\tau}(\mathbf{v},\mathbf{u}) \leq(\nicefrac{{3}}{{2}})\cdot r_{\lambda,\tau\Omega,c_{0}}(s) \cdot\|[\mathbf{v},\mathbf{u}]\|_{\Pi},\] \[\lambda\|\mathbf{v}\|_{\sharp}+\tau\|\mathbf{u}\|_{\sharp} \leq 2(c_{0}+1)\cdot r_{\lambda,\tau\Omega,c_{0}}(s)\cdot\|[\mathbf{v}, \mathbf{u}]\|_{\Pi}.\]
Using this lemma, we obtain a variation of Proposition 3 -- again with a similar proof, using \(\triangle:=\triangle_{\lambda,\tau}(\mathbf{\Delta}_{\mathbf{b}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\) instead of \(\triangle:=\triangle_{\lambda,\chi,\tau}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}|\mathbf{B},\mathbf{\Gamma})\) and \(r:=r_{\lambda,\tau\Omega,3}(s)\) instead of \(r:=r_{\lambda,\chi,\tau\Omega,3}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}|\mathbf{B},\mathbf{\Gamma})\).
**Proposition 7**.: _Suppose the conditions (i)-(ii) of Theorem 16\({}^{\prime}\) hold and, additionally,_
1. \(\lambda\geq 4[(\sigma\mathsf{d}_{2})\vee(2\mathsf{f}_{2})],\) _and_ \(\tau\geq 4[(\sigma\mathsf{d}_{4})\vee(2\mathsf{f}_{4})]\)_._
2. \(2\mathsf{f}_{1}\leq\sigma\mathsf{d}_{1}\)_._
_For any \(D\geq 0\) and \(\mathbf{b}\) satisfying the constraints (20) (with \(\mathbf{\Gamma}=\mathbf{0}\)) and (23) (with \(r:=r_{\lambda,\tau\Omega,3}(s)\)),_
\[(\nicefrac{{1}}{{2}})(\lambda\|\mathbf{\Delta}_{\mathbf{b}}\|_{\sharp}+\tau\| \mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp})+\|\mathbf{\Delta}^{(n)}+ \mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}^{2}\leq D^{2}+\mathbf{\phi}_{2}(\mathsf{f }_{1},r),\]
_where \(r:=r_{\lambda,\tau\Omega,3}(s)\). Moreover,_
\[\|[\mathbf{\Delta}_{\mathbf{b}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}\leq[(\nicefrac{{D^{2}}}{{\sigma\mathsf{d}_{1}}})]\bigvee\big{[}(\nicefrac{{2\sqrt{2}}}{{\mathsf{d}_{1}}})D+\mathbf{\triangle}_{2}(\mathsf{f}_{1},r)\big{]}\,.\]
Next, Lemmas 20-21 are unchanged. Using the previous auxiliary results, we claim that the proof of Theorem 16\({}^{\prime}\) follows the same arguments as in the proof of Theorem 16 -- using \(\triangle:=\triangle_{\lambda,0}(\mathbf{\Delta}_{\mathbf{b}},\mathbf{0})\) instead of \(\triangle:=\triangle_{\lambda,\chi,0}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}},\mathbf{0}|\mathbf{B},\mathbf{\Gamma})\), \(r:=r_{\lambda,0,3}(s)\) instead of \(r:=r_{\lambda,\chi,0,3}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{\Gamma}}|\mathbf{B},\mathbf{\Gamma})\) and \(\overline{\mathcal{C}}_{s}(3)\) instead of \(\mathcal{C}_{\mathbf{B},\mathbf{\Gamma}}(3,\gamma_{\mathcal{R}},\gamma_{\mathcal{S}},0)\).
## 33 Proof of Theorem 3, case (iii)
Invoking Theorem 16, the proof follows exactly the same guidelines as in the proof of Theorem 3, case (i), but using the nuclear norm \(\mathcal{R}:=\|\cdot\|_{N}\). We highlight next the minor changes. First, \(\mathscr{G}(\mathbf{\Sigma}^{1/2}(\mathbb{B}_{\|\cdot\|_{N}}))\leq\rho_{N}(\mathbf{\Sigma})(\sqrt{d_{1}}+\sqrt{d_{2}})\) by Lemma H.1 in [78] -- up to an absolute constant. As a consequence, in the constants \(\{\mathsf{a}_{i}\}\), \(\{\mathsf{b}_{i}\}\), \(\{\mathsf{c}_{i}\}\), \(\{\mathsf{d}_{i}\}\) and \(\{\mathsf{f}_{i}\}\), we have to replace the factor \(\rho_{1}(\mathbf{\Sigma})\sqrt{\frac{\log p}{n}}\) by \(\rho_{N}(\mathbf{\Sigma})\sqrt{\frac{d_{1}+d_{2}}{n}}\). Second, we have \(\Psi_{\|\cdot\|_{N}}(\mathcal{P}_{\mathbf{B}}(\mathbf{\Delta}_{\mathbf{B}}))\leq\sqrt{r}\) -- so that we replace \(\mu(\mathbf{b})=\mu\left(\mathcal{C}_{\mathbf{b},\|\cdot\|_{1}}(6)\right)\) by \(\mu(\mathbf{B})=\mu\left(\mathcal{C}_{\mathbf{B},\|\cdot\|_{N}}(6)\right)\).
## 34 Proof of Theorem 22
Throughout this section \(\mathcal{R}\) is a decomposable norm on \(\mathds{R}^{p}\) and we grant Assumption 1 and model (1). We set \(\mathbf{\xi}:=\mathbf{y}-\mathfrak{X}(\hat{\mathbf{B}})-\sqrt{n}\hat{\mathbf{\theta}}\), \(\mathbf{\Delta}:=\mathfrak{X}(\hat{\mathbf{B}})-\mathbf{f}\) and \(\hat{\sigma}:=\|\mathbf{\xi}^{(n)}\|_{2}\). Given \(\mathbf{B}\), we define the quantities \(\mathbf{\xi}_{\mathbf{B}}:=\mathbf{y}-\mathfrak{X}(\mathbf{B})-\sqrt{n}\mathbf{\theta}^{*}\) and \(\mathcal{E}_{\mathbf{B}}:=\mathfrak{X}(\mathbf{B})-\mathbf{f}\). Note that \(\mathbf{\xi}=\mathbf{\xi}_{\mathbf{B}}+\mathcal{E}_{\mathbf{B}}\).
**Lemma 32**.: _For all \(\mathbf{B}\in\mathds{R}^{p}\), it holds that_
\[0\leq\langle\mathbf{\xi}_{\mathbf{B}}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{ \mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle+(\hat{\sigma}\lambda) \big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}+(\hat{ \sigma}\tau)\big{(}\|\mathbf{\theta}^{*}\|_{\sharp}-\|\hat{\mathbf{\theta}}\|_{ \sharp}\big{)}\]
\[\leq(\|\mathbf{\xi}_{\mathbf{B}}^{(n)}\|_{2}-\|\mathbf{\xi}^{(n)}\|_{2}) \lambda\left(-\mathbf{\zeta}_{\mathbf{B}},\hat{\mathbf{B}}-\mathbf{B}\right)+(\|\mathbf{ \xi}_{\mathbf{B}}^{(n)}\|_{2}-\|\mathbf{\xi}^{(n)}\|_{2})\tau\left(-\{\mathbf{u}_{ \mathbf{\theta}},\hat{\mathbf{\theta}}-\mathbf{\theta}^{*}\}\right)\] \[\leq\|\mathbf{\xi}_{\mathbf{B}}^{(n)}\|_{2}\lambda\mathcal{R}(\mathbf{ \Delta}_{\mathbf{B}})+\|\mathbf{\xi}_{\mathbf{B}}^{(n)}\|_{2}\tau\|\mathbf{\Delta}^{ \hat{\mathbf{\theta}}}\|_{2},\]
where we used that \(\mathbf{\xi}_{\mathbf{B}}=\mathbf{\xi}-\mathcal{E}_{\mathbf{B}}\), \(|\langle\mathbf{\nabla}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{B}}\rangle|\leq\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})\) and \(|\langle\mathbf{u}_{\mathbf{\theta}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle|\leq\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\). Equation (90) follows from the three previous displays.
We now show (91). The first order condition of (8) at \([\hat{\mathbf{B}},\hat{\mathbf{\theta}}]\) is equivalent to the statement: there exist \(\mathbf{V}\in\partial\mathcal{R}(\hat{\mathbf{B}})\) and \(\mathbf{u}\in\partial\|\hat{\mathbf{\theta}}\|_{\sharp}\) such that for all \([\mathbf{B},\mathbf{\theta}]\),
\[\sum_{i\in[n]}\left[y_{i}^{(n)}-\mathfrak{X}_{i}^{(n)}(\hat{ \mathbf{B}})-\hat{\mathbf{\theta}}_{i}\right]\left(\mathbf{X}_{i}^{(n)},\hat{ \mathbf{B}}-\mathbf{B}\right)\geq\lambda\|\hat{\mathbf{\xi}}^{(n)}\|_{2}\langle \mathbf{V},\hat{\mathbf{B}}-\mathbf{B}\rangle,\] \[\langle\mathbf{y}^{(n)}-\mathfrak{X}^{(n)}(\hat{\mathbf{B}})-\hat{ \mathbf{\theta}},\hat{\mathbf{\theta}}-\mathbf{\theta}\rangle\geq\tau\|\hat{\mathbf{\xi}}^{(n) }\|_{2}\langle\mathbf{u},\hat{\mathbf{\theta}}-\mathbf{\theta}\rangle.\]
Setting \(\mathbf{\theta}=\mathbf{\theta}^{*}\) and using that \(\mathbf{y}^{(n)}=\mathbf{f}^{(n)}+\mathbf{\theta}^{*}+\mathbf{\xi}^{(n)}\) we obtain that for all \(\mathbf{B}\),
\[\sum_{i\in[n]}\left[\mathbf{\Delta}_{i}^{(n)}+\mathbf{\Delta}_{i}^{\hat{ \mathbf{\theta}}}\right]\left(\mathbf{X}_{i}^{(n)},\mathbf{\Delta}_{\mathbf{B}}\right) \leq\sum_{i\in[n]}\xi_{i}^{(n)}(\mathbf{X}_{i}^{(n)},\mathbf{\Delta}_{ \mathbf{B}})-\lambda\|\hat{\mathbf{\xi}}^{(n)}\|_{2}\langle\mathbf{V},\mathbf{\Delta} _{\mathbf{B}}\rangle,\] \[\left\langle\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}, \mathbf{\Delta}^{\hat{\mathbf{\theta}}}\right\rangle \leq\langle\mathbf{\xi}^{(n)},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle- \tau\|\hat{\mathbf{\xi}}^{(n)}\|_{2}\langle\mathbf{u},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle.\]
Summing the above inequalities,
\[\langle\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{M}^ {(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle \leq\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{ B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle\] \[-\lambda\|\hat{\mathbf{\xi}}^{(n)}\|_{2}(\mathbb{V},\mathbf{\Delta}_{ \mathbf{B}})-\tau\|\hat{\mathbf{\xi}}^{(n)}\|_{2}\langle\mathbf{u},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}}\rangle.\]
We now write
\[-\lambda\|\hat{\mathbf{\xi}}^{(n)}\|_{2}(\mathbb{V},\mathbf{\Delta}_{ \mathbf{B}})-\tau\|\hat{\mathbf{\xi}}^{(n)}\|_{2}\langle\mathbf{u},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}}\rangle\] \[=-\lambda\|\mathbf{\xi}^{(n)}\|_{2}(\mathbb{V},\mathbf{\Delta}_{\mathbf{ B}})-\tau\|\mathbf{\xi}^{(n)}\|_{2}\langle\mathbf{u},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle\] \[+\lambda(\|\mathbf{\xi}^{(n)}\|_{2}-\|\hat{\mathbf{\xi}}^{(n)}\|_{2})( \mathbb{V},\mathbf{\Delta}_{\mathbf{B}})+\tau(\|\mathbf{\xi}^{(n)}\|_{2}-\|\hat{\mathbf{ \xi}}^{(n)}\|_{2})\langle\mathbf{u},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle\]
Using that \(-\langle\mathbf{\Delta}_{\mathbf{B}},\mathbb{V}\rangle\leq\mathcal{R}(\mathbf{B}) -\mathcal{R}(\hat{\mathbf{B}})\) and \(-\langle\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathbf{u}\rangle\leq\|\mathbf{\theta}^{*}\| _{\sharp}-\|\hat{\mathbf{\theta}}\|_{\sharp}\), we get that
\[-\lambda\|\mathbf{\xi}^{(n)}\|_{2}\langle\mathbb{V},\mathbf{\Delta}_{ \mathbf{B}}\rangle-\tau\|\mathbf{\xi}^{(n)}\|_{2}\langle\mathbf{u},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}}\rangle\leq(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})- \mathcal{R}(\hat{\mathbf{B}})\big{)}+(\hat{\sigma}\tau)\big{(}\|\mathbf{\theta}^{ *}\|_{\sharp}-\|\hat{\mathbf{\theta}}\|_{\sharp}\big{)}.\]
Finally, using the facts that \(\hat{\mathbf{\xi}}^{(n)}=\mathbf{\xi}^{(n)}-\mathcal{E}_{\mathbf{B}}^{(n)}-\mathfrak{M} ^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\), \(|\{\mathbb{V},\mathbf{\Delta}_{\mathbf{B}}\}|\leq\mathcal{R}(\mathbf{\Delta}_{\mathbf{ B}})\) and \(|\langle\mathbf{u},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle|\leq\|\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}\|_{\sharp}\) we get
\[\lambda(\|\mathbf{\xi}^{(n)}\|_{2}-\|\hat{\mathbf{\xi}}^{(n)}\|_{2})( \mathbb{V},\mathbf{\Delta}_{\mathbf{B}})+\tau(\|\mathbf{\xi}^{(n)}\|_{2}-\|\hat{\mathbf{ \xi}}^{(n)}\|_{2})\langle\mathbf{u},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\rangle\] \[\leq\left(\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{ \Delta}^{\hat{\mathbf{\theta}}})\|_{2}+\|\mathbf{\varepsilon}_{\mathbf{B}}^{(n)}\|_{2 }\right)\left(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\|\mathbf{\Delta}^{ \hat{\mathbf{\theta}}}\|_{\sharp}\right).\]
Equation (91) follows from the four previous displays.
Next, we upper (and lower) bound (91) using \(\mathrm{MP}\) (and \(\mathrm{ARSC}\)).
**Lemma 33**.: _Suppose conditions (i)-(ii) of Theorem 22 hold. Let \(\mathbf{B}\in\mathds{R}^{p}\) and define the quantities_
\[\mathbf{\Delta}_{1} :=\left(\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{d}_{2}}{ \mathsf{d}_{1}}+\|\mathcal{E}_{\mathbf{B}}^{(n)}\|_{2}\lambda\right)\mathcal{R} (\mathbf{\Delta}_{\mathbf{B}})+\left(\mathsf{f}_{4}+\frac{\mathsf{f}_{1}\mathsf{d}_ {4}}{\mathsf{d}_{1}}+\|\mathcal{E}_{\mathbf{B}}^{(n)}\|_{2}\tau\right)\|\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp},\] \[\mathbf{\nabla}_{1} :=(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R} (\hat{\mathbf{B}})\big{)}+(\hat{\sigma}\tau)\big{(}\|\mathbf{\theta}^{*}\|_{\sharp }-\|\hat{\mathbf{\theta}}\|_{\sharp}\big{)}.\]
_Then_
\[0\leq\left(\frac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\|\mathcal{E}_{ \mathbf{B}}^{(n)}\|_{2}\right)\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{ \Delta}^{\hat{\mathbf{\theta}}})\|_{2}+\mathbf{\Delta}_{1}+\mathbf{\nabla}_{1}, \tag{92}\]
_and also_
\[\|\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}^{2}+\| \mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{ 2}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B})-\mathbf{f}^{(n)}\|_{2}^{2}\] \[+\frac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}\|\mathfrak{M}^{(n)}(\mathbf{ \Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{2}+2(\mathbf{\Delta}_{1}+ \mathbf{\nabla}_{1})\] \[+2\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}})\|_{2}\left(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\|\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\right). \tag{93}\]
Proof.: By the parallelogram law,
\[\langle\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}, \mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle=\] \[=\frac{1}{2}\|\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_ {2}^{2}+\frac{1}{2}\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{ \hat{\mathbf{\theta}}})\|_{2}^{2}-\frac{1}{2}\|\mathfrak{X}^{(n)}(\mathbf{B})-\mathbf{f}^ {(n)}\|_{2}^{2}.\]
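For concreteness, and assuming the conventions used earlier in the text, namely \(\mathbf{\Delta}^{(n)}:=\mathfrak{X}^{(n)}(\hat{\mathbf{B}})-\boldsymbol{f}^{(n)}\), \(\mathbf{\Delta}_{\mathbf{B}}:=\hat{\mathbf{B}}-\mathbf{B}\) and \(\mathfrak{M}^{(n)}(\mathbf{V},\boldsymbol{u}):=\mathfrak{X}^{(n)}(\mathbf{V})+\boldsymbol{u}\) (recalled here as assumptions, to be checked against the definitions in the main text), the displayed identity is the expansion \(\langle a,b\rangle=\tfrac{1}{2}\|a\|_{2}^{2}+\tfrac{1}{2}\|b\|_{2}^{2}-\tfrac{1}{2}\|a-b\|_{2}^{2}\) applied with
\[a:=\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\qquad b:=\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}),\qquad a-b=\mathbf{\Delta}^{(n)}-\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})=\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}.\]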
\(\mathrm{ARSC}\) implies in particular that
\[\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}})\|_{2}\geq\mathsf{d}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{ \hat{\mathbf{\theta}}}]\|_{\Pi}-\mathsf{d}_{2}\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})- \mathsf{d}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}.\]
\(\mathrm{MP}\) implies that
\[\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{ \hat{\mathbf{\theta}}})\rangle\leq\mathsf{f}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}+\mathsf{f}_{2}\mathcal{R}(\mathbf{\Delta}_{ \mathbf{B}})+\mathsf{f}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}.\]
The proof of (93) follows from the three previous displays and inequality (91) of Lemma 32.
The proof of (92) follows from the two previous displays, inequality (90) and the fact that
\[\langle\mathbf{\xi}_{\mathbf{B}}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta} _{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle =\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle-\langle\mathcal{E}_{\mathbf{B}}^{(n )},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta} }})\rangle\] \[\leq\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{ \mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\rangle+\|\mathcal{E}_{\mathbf{B }}^{(n)}\|_{2}\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{ \hat{\mathbf{\theta}}})\|_{2}.\]
The previous lemmas entail the next proposition. For convenience, given \([\mathbf{V},\mathbf{B},\mathbf{u}]\), we define
\[\triangle_{\lambda,\tau}(\mathbf{V},\mathbf{u}|\mathbf{B}) :=(\nicefrac{3\lambda}{2})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B}})(\mathbf{V})-(\nicefrac{\lambda}{2})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B}}^{\perp})(\mathbf{V})\] \[+(\nicefrac{3\tau\Omega}{2})\|\mathbf{u}\|_{2}-(\nicefrac{\tau}{2})\sum_{i=o+1}^{n}\omega_{i}\mathbf{u}_{i}^{\sharp}.\]
**Proposition 8**.: _Suppose the conditions (i)-(ii) of Theorem 22 hold and, additionally, for some \(\mathsf{c}_{n}\in[0,1/4)\),_
* (iii') \((1-4\mathsf{c}_{n})\hat{\sigma}\lambda\geq 4[\mathsf{f}_{2}+(\nicefrac{\mathsf{f}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}})]\) _and_ \((1-4\mathsf{c}_{n})\hat{\sigma}\tau\geq 4[\mathsf{f}_{4}+(\nicefrac{\mathsf{f}_{1}\mathsf{d}_{4}}{\mathsf{d}_{1}})]\)_._
* (iv') \(56\left(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)\leq 3\hat{\sigma}\)_._
_For any \(D\geq 0\) and \(\mathbf{B}\) satisfying the constraints (37)-(38),_
\[(\nicefrac{\hat{\sigma}}{2})(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp})+\|\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}^{2}\leq D^{2}+\mathbf{\triangle}_{1}\left(\mathsf{f}_{1},R\right), \tag{94}\]
_where \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(\mathbf{\Delta}_{\mathbf{B}}|\mathbf{B})\) and \(R:=(\hat{\sigma}\mathsf{c}_{n})\vee(3r)\). Moreover,_
\[\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}) \|_{2}\leq 2D+\mathbf{\triangle}_{1}\left(\mathsf{f}_{1},R\right). \tag{95}\]
Proof.: Let \(\blacksquare_{1}:=(\nicefrac{\hat{\sigma}\lambda}{4})\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+(\nicefrac{\hat{\sigma}\tau}{4})\|\mathbf{\Delta}^{\hat{\theta}}\|_{\sharp}\). By Lemma 33 and \(\|\mathcal{E}_{\mathbf{B}}^{(n)}\|_{2}\leq\hat{\sigma}\mathsf{c}_{n}\), we have
\[0\leq\left(\frac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n} \right)\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}})\|_{2}+\mathbf{\triangle}_{1}+\mathbf{\blacktriangledown}_{1},\]
and also
\[2\blacksquare_{1}+\|\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\mathbf{\theta}}} \|_{2}^{2}+\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}})\|_{2}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B})-\mathbf{f}^{(n)}\|_{2}^{2}\] \[+(\nicefrac{{2\mathsf{f}_{1}}}{{\mathsf{d}_{1}}})\|\mathfrak{M}^{ (n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{2}+2(\mathbf{ \triangle}_{1}+\blacksquare_{1}+\mathbf{\blacktriangledown}_{1})\] \[+2\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{ \mathbf{\theta}}})\|_{2}\left(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\| \mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\right).\]
By Lemmas 27 and 28 (with \(\nu=1/2\)), condition (iii') and \(\|\mathcal{E}_{\mathbf{B}}\|_{2}\leq\hat{\sigma}\mathsf{c}_{n}\),
\[\mathbf{\triangle}_{1}+\blacksquare_{1}+\mathbf{\blacktriangledown}_{1} \leq\left[\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}}+\hat{\sigma}\lambda\mathsf{c}_{n}+(\nicefrac{\hat{\sigma}\lambda}{4})\right]\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[+\left[\mathsf{f}_{4}+\frac{\mathsf{f}_{1}\mathsf{d}_{4}}{\mathsf{d}_{1}}+\hat{\sigma}\tau\mathsf{c}_{n}+(\nicefrac{\hat{\sigma}\tau}{4})\right]\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+(\hat{\sigma}\tau)\big{(}\|\mathbf{\theta}^{*}\|_{\sharp}-\|\hat{\mathbf{\theta}}\|_{\sharp}\big{)}\] \[\leq(\nicefrac{\hat{\sigma}\lambda}{2})\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[+(\nicefrac{\hat{\sigma}\tau}{2})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+(\hat{\sigma}\tau)\big{(}\|\mathbf{\theta}^{*}\|_{\sharp}-\|\hat{\mathbf{\theta}}\|_{\sharp}\big{)}\] \[\leq\triangle_{\hat{\sigma}\lambda,\hat{\sigma}\tau}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}|\mathbf{B}).\]
Next, we will define some local variables for convenience of notation. Let \(G:=\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{ \theta}})\|_{2}\), \(D:=\|\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}\), \(x:=\|\mathbf{\Delta}^{(n)}+\mathbf{\Delta}^{\hat{\theta}}\|_{2}\), and \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(\mathbf{\Delta}_{\mathbf{B }}|\mathbf{B})\). Define also \(\triangle:=\triangle_{\hat{\sigma}\lambda,\hat{\sigma}\tau}(\mathbf{\Delta}_{ \mathbf{B}},\mathbf{\Delta}^{\hat{\theta}}|\mathbf{B})\) and
\[H :=(\nicefrac{{3\hat{\sigma}\lambda}}{{2}})(\mathcal{R}\circ \mathcal{P}_{\mathbf{B}})(\mathbf{\Delta}_{\mathbf{B}})+(\nicefrac{{3\hat{ \sigma}\tau\Omega}}{{2}})\|\mathbf{\Delta}^{\hat{\theta}}\|_{2},\] \[I :=(\nicefrac{{\hat{\sigma}\lambda}}{{2}})(\mathcal{R}\circ \mathcal{P}_{\mathbf{B}}^{\perp})(\mathbf{\Delta}_{\mathbf{B}})+(\nicefrac{{ \hat{\sigma}\tau}}{{2}})\sum_{i=o+1}^{n}\omega_{i}(\mathbf{\Delta}^{\hat{ \theta}})_{i}^{\sharp}.\]
In particular, \(\triangle=H-I\).
The previous three bounds entail the two inequalities:
\[0 \leq(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n})\,G+\triangle, \tag{96}\] \[2\blacksquare_{1}+x^{2}+G^{2} \leq D^{2}+(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}})\,G+2\left(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\|\mathbf{\Delta}^{\hat{\theta}}\|_{\sharp}\right)G+2\triangle. \tag{97}\]
We split our argument into two cases.
**Case 1:**: \((\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n})\,G\geq H\). Hence, \(\triangle\leq H\leq(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n})\,G\). From (96), \(I\leq(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n})\,G+H\leq 2\left(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G\). This fact and decomposability imply
\[\hat{\sigma}\left(\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\|\mathbf{\Delta}^{\hat{\theta}}\|_{\sharp}\right)\leq 2I+\frac{2H}{3}\leq\frac{14}{3}\left(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G.\]
From (97), we get
\[2\blacksquare_{1}+x^{2}+G^{2} \leq D^{2}+(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}})\,G+\frac{28}{3\hat{\sigma}}\left(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G^{2}+2\triangle\] \[\leq D^{2}+2\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G+\frac{28}{3\hat{\sigma}}\left(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G^{2},\]
which, together with condition (iv\({}^{\prime}\)), implies
\[2\blacksquare_{1}+x^{2}+\frac{G^{2}}{2}\leq D^{2}+2\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G.\]
From \(2\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G\leq 2\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)^{2}+\frac{G^{2}}{2}\), we get
\[2\blacksquare_{1}+x^{2}\leq D^{2}+2\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)^{2}. \tag{98}\]
If we use instead \(2\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)G\leq 4\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)^{2}+\frac{G^{2}}{4}\), we get
\[\frac{G^{2}}{4}\leq D^{2}+4\left(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\right)^{2}. \tag{99}\]
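Both steps use the elementary inequality \(2aG\leq\nu a^{2}+G^{2}/\nu\), valid for all \(a,G\geq 0\) and \(\nu>0\) since \((\sqrt{\nu}\,a-G/\sqrt{\nu})^{2}\geq 0\): (98) takes \(a=\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}}+\hat{\sigma}\mathsf{c}_{n}\) with \(\nu=2\), while (99) takes the same \(a\) with \(\nu=4\).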
**Case 2:**: \((\nicefrac{{t_{1}}}{{d_{1}}}+\hat{\sigma}\mathbf{c}_{n})\,G\leq H\). From (96), \(0\leq H+\triangle=2H-I\) so that \([\mathbf{\Delta}_{\mathbf{B}},\mathbf{0},\mathbf{\Delta}^{\hat{\theta}}]\in \mathcal{C}_{\mathbf{B},\mathbf{0}}(6,\gamma,0,\Omega)\), where we defined \(\gamma:=\lambda/\tau\). By Lemma 19,
\[\triangle \leq(\nicefrac{{3}}{{2}})r\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{ \Delta}^{\hat{\theta}}]\|_{\Pi},\] \[(\hat{\sigma}\lambda)\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+( \hat{\sigma}\tau)\big{\|}\mathbf{\Delta}^{\hat{\theta}}\big{\|}_{\sharp} \leq 14r\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\theta}}]\|_ {\Pi}.\]
The second inequality above and \(\mathrm{ARSC}\) imply that
\[\mathsf{d}_{1}\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\theta}}]\|_{\Pi} \leq G+[(\nicefrac{\mathsf{d}_{2}}{\lambda})\vee(\nicefrac{\mathsf{d}_{4}}{\tau})](\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+\tau\big{\|}\mathbf{\Delta}^{\hat{\theta}}\big{\|}_{\sharp})\] \[\leq G+[(\nicefrac{\mathsf{d}_{2}}{\lambda})\vee(\nicefrac{\mathsf{d}_{4}}{\tau})]\,14(\nicefrac{r}{\hat{\sigma}})\|[\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\theta}}]\|_{\Pi}.\]
By condition (38), \(14[(\nicefrac{\mathsf{d}_{2}}{\lambda})\vee(\nicefrac{\mathsf{d}_{4}}{\tau})](\nicefrac{r}{\hat{\sigma}})\leq\mathsf{d}_{1}/2\), implying \((\mathsf{d}_{1}/2)\|[\boldsymbol{\Delta}_{\mathbf{B}},\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}]\|_{\Pi}\leq G\). We conclude that
\[\triangle \leq(\nicefrac{{3r}}{{\mathsf{d}_{1}}})G,\] \[\hat{\sigma}\lambda\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}}) +\hat{\sigma}\tau\big{\|}\boldsymbol{\Delta}^{\boldsymbol{\theta}}\big{\|}_{ \sharp} \leq(\nicefrac{{28r}}{{\mathsf{d}_{1}}})G.\]
From (97),
\[2\boldsymbol{\blacksquare}_{1}+x^{2}+G^{2} \leq D^{2}+(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}})G+(\nicefrac{56r}{\hat{\sigma}\mathsf{d}_{1}})G^{2}+(\nicefrac{6r}{\mathsf{d}_{1}})G\] \[=D^{2}+\left(\frac{2\mathsf{f}_{1}+6r}{\mathsf{d}_{1}}\right)G+(\nicefrac{56r}{\hat{\sigma}\mathsf{d}_{1}})G^{2},\]
which, together with \(56r\leq\hat{\sigma}\mathsf{d}_{1}/2\) -- as stated in condition (38) --, entails
\[2\boldsymbol{\blacksquare}_{1}+x^{2}+\frac{G^{2}}{2} \leq D^{2}+2\left(\frac{\mathsf{f}_{1}+3r}{\mathsf{d}_{1}}\right)G.\]
Proceeding similarly as before, we obtain from the displayed bound that
\[2\boldsymbol{\blacksquare}_{1}+x^{2} \leq D^{2}+2\left(\frac{\mathsf{f}_{1}+3r}{\mathsf{d}_{1}}\right) ^{2}, \tag{100}\] \[\frac{G^{2}}{4} \leq D^{2}+4\left(\frac{\mathsf{f}_{1}+3r}{\mathsf{d}_{1}}\right) ^{2}. \tag{101}\]
The proof of (94) follows by taking the largest of the bounds in (98) and (100). The proof of (95) follows by taking the largest of the bounds in (99) and (101).
We will later invoke Proposition 8 in the proof of Theorem 22 when bounding the nuisance error \(\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\). Next, we prove the following lemma -- an easy consequence of the first order condition when fixing \(\boldsymbol{\theta}\equiv\hat{\boldsymbol{\theta}}\).
**Lemma 34**.: _For all \(\mathbf{B}\in\mathds{R}^{p}\),_
\[0\leq\langle\boldsymbol{\xi}_{\mathbf{B}}^{(n)}-\boldsymbol{\Delta}^{\hat{ \boldsymbol{\theta}}},\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}) \rangle+(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{ \mathbf{B}})\big{)}+\left(\|\mathcal{E}_{\mathbf{B}}^{(n)}\|_{2}+\|\boldsymbol {\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}\right)\lambda\mathcal{R}( \boldsymbol{\Delta}_{\mathbf{B}}). \tag{102}\]
_and also_
\[\langle\boldsymbol{\Delta}^{(n)},\mathfrak{X}^{(n)}(\boldsymbol{ \Delta}_{\mathbf{B}})\rangle \leq\langle\boldsymbol{\xi}^{(n)}-\boldsymbol{\Delta}^{\hat{ \boldsymbol{\theta}}},\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}) \rangle+(\hat{\sigma}\lambda)(\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{ \mathbf{B}}))\] \[+\left(\|\mathfrak{R}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}}, \boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}})\|_{2}+\|\mathcal{E}_{\mathbf{ B}}^{(n)}\|_{2}\right)\lambda\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}}). \tag{103}\]
Proof.: Observe that
\[\hat{\mathbf{B}}\in \operatorname*{argmin}_{\mathbf{B}}\left\{\|\boldsymbol{y}^{(n)}- \mathfrak{X}^{(n)}(\mathbf{B})-\hat{\boldsymbol{\theta}}\|_{2}+\lambda \mathcal{R}(\mathbf{B})\right\}.\]
Let us define \(\hat{\boldsymbol{\xi}}_{\mathbf{B}}:=\boldsymbol{y}-\mathfrak{X}(\mathbf{B})- \sqrt{n}\hat{\boldsymbol{\theta}}\). We first claim that
\[0\leq\langle\hat{\boldsymbol{\xi}}_{\mathbf{B}}^{(n)},\mathfrak{X}^{(n)}( \boldsymbol{\Delta}_{\mathbf{B}})\rangle+\|\hat{\boldsymbol{\xi}}_{\mathbf{B} }^{(n)}\|_{2}\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B }})\big{)}. \tag{104}\]
It is enough to assume \(\hat{\boldsymbol{\xi}}_{\mathbf{B}}\neq\boldsymbol{0}\). Comparing the minimality of \(\hat{\mathbf{B}}\) with \(\mathbf{B}\) and by convexity of \(\boldsymbol{u}\mapsto\|\boldsymbol{u}\|_{2}\),
\[0 \leq\|\hat{\boldsymbol{\xi}}_{\mathbf{B}}^{(n)}\|_{2}-\|\hat{ \boldsymbol{\xi}}^{(n)}\|_{2}+\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}( \hat{\mathbf{B}})\big{)}\] \[\leq-\left\langle\frac{\hat{\boldsymbol{\xi}}_{\mathbf{B}}^{(n)}}{ \|\hat{\boldsymbol{\xi}}_{\mathbf{B}}^{(n)}\|_{2}},\hat{\boldsymbol{\xi}}^{(n)}- \boldsymbol{\xi}_{\mathbf{B}}^{(n)}\right\rangle+\lambda\big{(}\mathcal{R}( \mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)},\]
implying (104) since \(\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}=\hat{\mathbf{\xi}}^{(n)}+\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\). Note also that \(\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}=\mathbf{\xi}_{\mathbf{B}}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\).
We now write
\[\|\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}\|_{2}\lambda\big{(}\mathcal{R} (\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)} =\|\mathbf{\xi}^{(n)}\|_{2}\lambda\big{(}\mathcal{R}(\mathbf{B})- \mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[+(\|\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}\|_{2}-\|\mathbf{\xi}^{(n)}\|_{2 })\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}.\]
By convexity of \(\mathbf{W}\mapsto\mathcal{R}(\mathbf{W})\), there exists \(\mathbf{V}_{\mathbf{B}}\in\partial\mathcal{R}(\mathbf{B})\) such that
\[(\|\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}\|_{2}-\|\mathbf{\xi}^{(n)}\|_{ 2})\lambda\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)} \leq(\|\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}\|_{2}-\|\mathbf{\xi}^{(n)}\| _{2})\lambda\left(-(\!\mathbf{V}_{\mathbf{B}},\hat{\mathbf{B}}-\mathbf{B})\right)\] \[\leq(\|\mathbf{\xi}_{\mathbf{B}}^{(n)}\|_{2}+\|\mathbf{\Delta}^{\hat{\bm {\theta}}}\|_{2})\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}}),\]
where we used that \(\hat{\mathbf{\xi}}_{\mathbf{B}}^{(n)}=\mathbf{\xi}^{(n)}-\mathbf{\varepsilon}_{\mathbf{B}}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\) and \(|(\!\mathbf{V}_{\mathbf{B}},\mathbf{\Delta}_{\mathbf{B}}\!)|\leq\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})\). Equation (102) follows from the two previous displays and (104).
We now prove (103). The KKT conditions imply that there exists \(\mathbf{V}\in\mathds{R}^{p}\) with \(\mathcal{R}^{*}(\mathbf{V})\leq 1\) and \((\!\mathbf{V},\hat{\mathbf{B}}\!)=\mathcal{R}(\hat{\mathbf{B}})\) such that, for all \(\mathbf{B}\in\mathds{R}^{p}\),
\[0\leq\sum_{i\in[n]}\left[\mathfrak{X}_{i}^{(n)}(\widehat{\mathbf{B}})+\widehat {\mathbf{\theta}}_{i}-y_{i}^{(n)}\right](\!\mathbf{X}_{i}^{(n)},\mathbf{B}-\hat{ \mathbf{B}})+\lambda\|\hat{\mathbf{\xi}}^{(n)}\|_{2}(\!\mathbf{V},\mathbf{B}-\hat{ \mathbf{B}}).\]
Using that \(\mathbf{y}^{(n)}=\mathbf{f}^{(n)}+\mathbf{\theta}^{*}+\mathbf{\xi}^{(n)}\), we arrive at
\[0\leq\Big{\langle}\mathbf{\Delta}^{(n)},-\mathfrak{X}^{(n)}(\mathbf{\Delta}_{ \mathbf{B}})\Big{\rangle}+\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta }}},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle-\lambda\|\hat{\mathbf{\xi} }^{(n)}\|_{2}(\!\mathbf{\Delta}_{\mathbf{B}},\mathbf{V}).\]
We now write
\[-\lambda\|\hat{\mathbf{\xi}}^{(n)}\|_{2}(\!\mathbf{V},\mathbf{\Delta}_{ \mathbf{B}}\!) =-\lambda\|\mathbf{\xi}^{(n)}\|_{2}(\!\mathbf{V},\mathbf{\Delta}_{ \mathbf{B}}\!)\] \[+\lambda(\|\mathbf{\xi}^{(n)}\|_{2}-\|\hat{\mathbf{\xi}}^{(n)}\|_{2})(\! \mathbf{V},\mathbf{\Delta}_{\mathbf{B}}\!).\]
Using that \(-(\!\mathbf{\Delta}_{\mathbf{B}},\mathbf{V})\leq\mathcal{R}(\mathbf{B})-\mathcal{ R}(\hat{\mathbf{B}})\), we get that
\[-\lambda\|\mathbf{\xi}^{(n)}\|_{2}(\!\mathbf{V},\mathbf{\Delta}_{\mathbf{B}}\!)\leq( \hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B} })\big{)}.\]
Finally, using the facts that \(\hat{\mathbf{\xi}}^{(n)}=\mathbf{\xi}^{(n)}-\mathbf{\varepsilon}_{\mathbf{B}}^{(n)}- \mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\) and \(|(\!\mathbf{V},\mathbf{\Delta}_{\mathbf{B}}\!)|\leq\mathcal{R}(\mathbf{\Delta}_{\mathbf{ B}})\) we get
\[\lambda(\|\mathbf{\xi}^{(n)}\|_{2}-\|\hat{\mathbf{\xi}}^{(n)}\|_{2})(\! \mathbf{V},\mathbf{\Delta}_{\mathbf{B}}\!)\leq\Big{(}\|\mathfrak{M}^{(n)}(\mathbf{ \Delta}_{\mathbf{B}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{2}+\|\mathbf{\varepsilon} _{\mathbf{B}}^{(n)}\|_{2}\Big{)}\,\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}}).\]
Equation (103) follows from the four previous displays.
Using the previous lemma and IP -- in addition to \(\mathrm{MP}\) and \(\mathrm{ARSC}\) --, we obtain the following lemma.
**Lemma 35**.: _Suppose conditions (i)-(iii) of Theorem 16 hold. Define the quantities_
\[\hat{\mathbf{\lambda}}_{1} :=\left[\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}}+\left(\mathsf{b}_{2}+\lambda+\frac{\mathsf{b}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}}\right)\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\frac{\mathsf{b}_{4}\mathsf{d}_{2}}{\mathsf{d}_{1}}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+\lambda\|\mathbf{\varepsilon}_{\mathbf{B}}^{(n)}\|_{2}\right]\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}}),\] \[\hat{\mathbf{\nu}}_{1} :=(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}.\]
_Then_
\[0\leq\left[\left(\mathsf{f}_{1}\!/\!\mathsf{d}_{1}\right)+\left(\mathsf{b}_{1}\!/\!\mathsf{d}_{1}\right)\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\left(\mathsf{b}_{4}\!/\!\mathsf{d}_{1}\right)\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+\|\mathbf{\varepsilon}_{\mathbf{B}}^{(n)}\|_{2}\right]\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}+\hat{\mathbf{\lambda}}_{1}+\hat{\mathbf{\nu}}_{1}, \tag{105}\]
_and also_
\[\|\mathbf{\Delta}^{(n)}\|_{2}^{2}+\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B})-\mathbf{f}^{(n)}\|_{2}^{2}\] \[+\left[(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}})+(\nicefrac{2\mathsf{b}_{1}}{\mathsf{d}_{1}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+(\nicefrac{2\mathsf{b}_{4}}{\mathsf{d}_{1}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\right]\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}\] \[+2(\hat{\mathbf{\lambda}}_{1}+\hat{\mathbf{\nu}}_{1})+2\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}\lambda\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}}). \tag{106}\]
Proof.: By the parallelogram law,
\[\langle\mathbf{\Delta}^{(n)},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle= \frac{1}{2}\|\mathbf{\Delta}^{(n)}\|_{2}^{2}+\frac{1}{2}\|\mathfrak{X}^{(n)}(\mathbf{ \Delta}_{\mathbf{B}})\|_{2}^{2}-\frac{1}{2}\|\mathfrak{X}^{(n)}(\mathbf{B})- \mathbf{f}^{(n)}\|_{2}^{2}.\]
The previous display, (103) and \(\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}},\mathbf{\Delta}^{\mathbf{\theta}})\|_{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}+\|\mathbf{\Delta}^{\mathbf{ \theta}}\|_{2}\) imply
\[\|\mathbf{\Delta}^{(n)}\|_{2}^{2}+\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{ \mathbf{B}})\|_{2}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B})-\mathbf{f}^{(n)}\|_{2}^{2}\] \[+2(\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{X}^{( n)}(\mathbf{\Delta}_{\mathbf{B}}))\] \[+2(\hat{\sigma}\lambda)(\mathcal{R}(\mathbf{B})-\mathcal{R}( \hat{\mathbf{B}}))\] \[+2\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}\lambda \mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})+2(\|\mathcal{E}_{\mathbf{B}}^{(n)}\|_{2 }+\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2})\lambda\mathcal{R}(\mathbf{\Delta}_{ \mathbf{B}}). \tag{107}\]
By IP (with variable \(\mathbf{W}=\mathbf{0}\)),
\[\langle-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{ \mathbf{B}})\rangle\leq\mathsf{b}_{1}\|\mathbf{\Delta}_{\mathbf{B}}\|_{\Pi}\|\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{b}_{2}\mathcal{R}(\mathbf{\Delta}_{ \mathbf{B}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{b}_{4}\|\mathbf{ \Delta}_{\mathbf{B}}\|_{\Pi}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}.\]
By MP (with variables \(\mathbf{W}=\mathbf{0}\) and \(\mathbf{u}=\mathbf{0}\)),
\[\langle\mathbf{\xi}^{(n)},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle\leq \mathsf{f}_{1}\|\mathbf{\Delta}_{\mathbf{B}}\|_{\Pi}+\mathsf{f}_{2}\mathcal{R}( \mathbf{\Delta}_{\mathbf{B}}).\]
By ARSC (with variables \(\mathbf{W}=\mathbf{0}\) and \(\mathbf{u}=\mathbf{0}\)),
\[\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}\geq\mathsf{d}_{1}\|\mathbf{ \Delta}_{\mathbf{B}}\|_{\Pi}-\mathsf{d}_{2}\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}}).\]
The three previous displays imply
\[\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{X }^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle \leq\|\mathbf{\Delta}_{\mathbf{B}}\|_{\Pi}\left(\mathsf{b}_{1}\|\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{b}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}\|_{\sharp}+\mathsf{f}_{1}\right)+\mathcal{R}(\mathbf{\Delta}_{\mathbf{B} })(\mathsf{b}_{2}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{f}_{2})\] \[\leq\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}\left[ \nicefrac{{\mathsf{b}_{1}}}{{\mathsf{d}_{1}}}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}} \|_{2}+\nicefrac{{\mathsf{b}_{4}}}{{\mathsf{d}_{1}}}\|\mathbf{\Delta}^{\mathbf{\theta}} \|_{\sharp}+\mathsf{(f}_{1}\!/\mathsf{d}_{1})\right]\] \[+\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})\left[\nicefrac{{\mathsf{d}_ {2}}}{{\mathsf{d}_{1}}}\left(\mathsf{b}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}} \|_{2}+\mathsf{b}_{4}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+\mathsf{f}_{1 }\right)+\mathsf{b}_{2}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\mathsf{f}_{2} \right].\]
The proof of (106) follows from the previous display and (107).
The proof of (105) follows from the previous display, inequality (102) and the fact that
\[\langle\mathbf{\xi}_{\mathbf{B}}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle =\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}},\mathfrak{X }^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle-\langle\mathcal{E}_{\mathbf{B}}^{(n)}, \mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle\] \[\leq\langle\mathbf{\xi}^{(n)}-\mathbf{\Delta}^{\hat{\mathbf{\theta}}}, \mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\rangle+\|\mathcal{E}_{\mathbf{B}}^{(n)} \|_{2}\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}.\]
We conclude with the proof of Theorem 22. It uses Proposition 8 and Lemma 35.
Proof of Theorem 22.: Let \(\hat{\mathbf{\Pi}}_{1}:=(\nicefrac{\hat{\sigma}\lambda}{4})\mathcal{R}(\mathbf{\Delta}_{\mathbf{B}})\). By Lemma 35 and \(\|\mathcal{E}_{\mathbf{B}}^{(n)}\|_{2}\leq\hat{\sigma}c_{n}\), we have
\[0\leq\left[(\nicefrac{\mathsf{f}_{1}}{\mathsf{d}_{1}})+(\nicefrac{\mathsf{b}_{1}}{\mathsf{d}_{1}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+(\nicefrac{\mathsf{b}_{4}}{\mathsf{d}_{1}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}+\hat{\sigma}c_{n}\right]\|\mathfrak{X}^{(n)}(\mathbf{\Delta}_{\mathbf{B}})\|_{2}+\hat{\mathbf{\lambda}}_{1}+\hat{\mathbf{\nu}}_{1}, \tag{108}\]
and also
\[2\hat{\boldsymbol{\Pi}}_{1}+\|\boldsymbol{\Delta}^{(n)}\|_{2}^{2}+\|\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}})\|_{2}^{2} \leq\|\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}^{2}\] \[+\left[(\nicefrac{2\mathsf{f}_{1}}{\mathsf{d}_{1}})+(\nicefrac{2\mathsf{b}_{1}}{\mathsf{d}_{1}})\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{2}+(\nicefrac{2\mathsf{b}_{4}}{\mathsf{d}_{1}})\|\boldsymbol{\Delta}^{\hat{\boldsymbol{\theta}}}\|_{\sharp}\right]\|\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}})\|_{2}\] \[+2(\hat{\boldsymbol{\lambda}}_{1}+\hat{\boldsymbol{\Pi}}_{1}+\hat{\boldsymbol{\nu}}_{1})+2\|\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}})\|_{2}\lambda\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}}). \tag{109}\]
For convenience, let \(R:=(\hat{\sigma}\mathsf{c}_{n})\vee(3r)\). All conditions of Proposition 8 hold. Hence,
\[\|\boldsymbol{\Delta}^{\hat{\theta}}\|_{2} \leq(\nicefrac{4D}{\mathsf{d}_{1}})+(\nicefrac{2}{\mathsf{d}_{1}})\boldsymbol{\Delta}_{1}\left(\mathsf{f}_{1},R\right)\leq\mathsf{c}_{*}\hat{\sigma}, \tag{110}\] \[(\nicefrac{\hat{\sigma}\tau}{2})\|\boldsymbol{\Delta}^{\hat{\theta}}\|_{\sharp} \leq D^{2}+\boldsymbol{\phi}_{1}\left(\mathsf{f}_{1},R\right)\leq\mathsf{c}_{*}^{2}\hat{\sigma}^{2}, \tag{111}\]
where we have used conditions (39)-(40). In particular, by the condition on \(\lambda\) in (iv),
\[\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}}+\left( \mathsf{b}_{2}+\lambda+\frac{\mathsf{b}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}} \right)\|\boldsymbol{\Delta}^{\hat{\theta}}\|_{2}+\frac{\mathsf{b}_{4}\mathsf{ d}_{2}}{\mathsf{d}_{1}}\|\boldsymbol{\Delta}^{\hat{\theta}}\|_{\sharp}+\hat{ \sigma}\lambda\mathsf{c}_{n}\leq\frac{\hat{\sigma}\lambda}{4}. \tag{112}\]
By Lemma 27 (with \(\nu=1/2\)), (112) and \(\|\mathcal{E}_{\mathbf{B}}\|_{2}\leq\hat{\sigma}\mathsf{c}_{n}\),
\[\hat{\boldsymbol{\lambda}}_{1}+\hat{\boldsymbol{\Pi}}_{1}+\hat{\boldsymbol{\nu}}_{1} \leq\left[\mathsf{f}_{2}+\frac{\mathsf{f}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}}+\left(\mathsf{b}_{2}+\lambda+\frac{\mathsf{b}_{1}\mathsf{d}_{2}}{\mathsf{d}_{1}}\right)\|\boldsymbol{\Delta}^{\hat{\theta}}\|_{2}+\frac{\mathsf{b}_{4}\mathsf{d}_{2}}{\mathsf{d}_{1}}\|\boldsymbol{\Delta}^{\hat{\theta}}\|_{\sharp}+\hat{\sigma}\lambda\mathsf{c}_{n}+(\nicefrac{\hat{\sigma}\lambda}{4})\right]\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})\] \[+(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[\leq(\nicefrac{\hat{\sigma}\lambda}{2})\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})+(\hat{\sigma}\lambda)\big{(}\mathcal{R}(\mathbf{B})-\mathcal{R}(\hat{\mathbf{B}})\big{)}\] \[\leq\triangle_{\hat{\sigma}\lambda,0}(\boldsymbol{\Delta}_{\mathbf{B}},\mathbf{0}|\mathbf{B})=:\triangle. \tag{113}\]
Next, we define the local variables
\[H :=(\nicefrac{3\hat{\sigma}\lambda}{2})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B}})(\boldsymbol{\Delta}_{\mathbf{B}}),\] \[I :=(\nicefrac{\hat{\sigma}\lambda}{2})(\mathcal{R}\circ\mathcal{P}_{\mathbf{B}}^{\perp})(\boldsymbol{\Delta}_{\mathbf{B}}).\]
In particular, \(\triangle=H-I\). Recall \(D:=\|\mathfrak{X}^{(n)}(\mathbf{B})-\boldsymbol{f}^{(n)}\|_{2}\). Define also \(G:=\|\mathfrak{X}^{(n)}(\boldsymbol{\Delta}_{\mathbf{B}})\|_{2}\), \(x:=\|\boldsymbol{\Delta}^{(n)}\|_{2}\), and \(\hat{r}:=r_{\hat{\sigma}\lambda,0,6}(\boldsymbol{\Delta}_{\mathbf{B}}|\mathbf{B})\). Finally, let us define the auxiliary variables
\[F :=\mathsf{f}_{1}+\mathsf{b}_{1}\left[(\nicefrac{{4D}}{{ \mathfrak{d}_{1}}})+(\nicefrac{{2}}{{\mathfrak{d}_{1}}})\boldsymbol{\Delta}_{ 1}\left(\mathsf{f}_{1},R\right)\right]+2(\nicefrac{{\mathfrak{b}_{4}}}{{ \mathfrak{d}_{7}}})\left[D^{2}+\boldsymbol{\phi}_{1}\left(\mathsf{f}_{1},R \right)\right],\] \[\hat{\mathsf{f}}_{1} :=\mathsf{f}_{1}+\mathsf{b}_{1}(\mathsf{c}_{*}\sigma)+2( \nicefrac{{\mathfrak{b}_{4}}}{{\mathfrak{d}_{7}}})(\mathsf{c}_{*}^{2}\sigma^{2}).\]
Note that, from (110)-(111), we have \(F\leq\hat{\mathsf{f}}_{1}\).
From (108)-(109), (110)-(111) and (113),
\[0 \leq(\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n })\,G+\triangle, \tag{114}\] \[2\hat{\boldsymbol{\Pi}}_{1}+x^{2}+G^{2} \leq D^{2}+2(\nicefrac{{F}}{{\mathfrak{d}_{1}}})G+2\lambda\mathcal{R}( \boldsymbol{\Delta}_{\mathbf{B}})G+2\triangle. \tag{115}\]
The rest of the proof is similar to the proof of Proposition 8.
We split our argument into two cases.
**Case 1:**: \((\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n})\,G\geq H\). Hence, \(\triangle\leq H\leq(\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n })\,G\). From (114), \(I\leq(\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n})\,G+H\leq 2 \left(\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n}\right)\,G\). This fact and decomposability imply
\[\hat{\sigma}\lambda\mathcal{R}(\boldsymbol{\Delta}_{\mathbf{B}})\leq 2I+\frac{2H}{3} \leq\frac{14}{3}\left(\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n }\right)G.\]
From (115), we get
\[2\hat{\boldsymbol{\Pi}}_{1}+x^{2}+G^{2} \leq D^{2}+2(\nicefrac{{F}}{{\mathfrak{d}_{1}}})G+\frac{28}{3\hat{ \sigma}}\left(\nicefrac{{F}}{{\mathfrak{d}_{1}}}+\hat{\sigma}\mathsf{c}_{n} \right)G^{2}+2\triangle\]
\[2\hat{\boldsymbol{\Pi}}_{1}+x^{2}+\frac{G^{2}}{2}\leq D^{2}+2\left(\frac{F+3\hat{r}}{\mathsf{d}_{1}}\right)G.\]
Proceeding similarly as before, we obtain from the displayed bound that
\[2\hat{\boldsymbol{\Pi}}_{1}+x^{2} \leq D^{2}+2\left(\frac{F+3\hat{r}}{\mathsf{d}_{1}}\right)^{2}, \tag{119}\] \[\frac{G^{2}}{4} \leq D^{2}+4\left(\frac{F+3\hat{r}}{\mathsf{d}_{1}}\right)^{2}. \tag{120}\]
The proof of (41) follows by taking the largest of the bounds in (116) and (119). The proof of (42) follows by taking the largest of the bounds in (117) and (120). The proof of (43) follows from (118) -- namely, \(\mathrm{ARSC}\) -- and (41)-(42).
## 35 Proof sketch of Theorem 4, case (i')
In the following \(\mathcal{R}:=\|\cdot\|_{1}\). Next, we assume that \(n\geq C_{0}L^{4}(1+\log(1/\delta))\) and \(n\geq C_{0}\sigma^{2}(1+\log(1/\delta))\) for an absolute constant \(C_{0}\) to be determined below. We will also use that \(L\geq 1\). In the following, \(C>0\) is the universal constant stated in Proposition 2. No effort is made to optimize the numerical constants.
Invoking Proposition 2 and taking \(C_{0}\gtrsim C^{2}\), we know that on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\) -- stated in Section 31 -- of probability \(\geq 1-3\delta\), the properties \(\mathrm{RSC}_{\|\cdot\|_{1}}(\mathtt{a}_{1},\mathtt{a}_{2})\), \(\mathrm{IP}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(\mathtt{b}_{1}(\delta),\mathtt{b} _{2},0,\mathtt{b}_{4})\), \(\mathrm{MP}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(\mathtt{f}_{1}(\delta),\mathtt{f} _{2},0,\mathtt{f}_{4})\) and \(\mathrm{ARSC}_{\|\cdot\|_{1},0,\|\cdot\|_{1}}(\mathtt{d}_{1},\mathtt{d}_{2},0,\mathtt{d}_{4})\) all hold -- see Section 31 for the expression of the constants \(\{\mathtt{a}_{i}\}\), \(\{\mathtt{b}_{i}\}\), \(\{\mathtt{d}_{i}\}\) and \(\{\mathtt{f}_{i}\}\). Enlarging \(C_{0}\) if necessary, Bernstein's inequality implies that, on an event \(\mathcal{E}_{4}\) of probability at least \(1-\delta\), \(\sigma/2\leq\hat{\sigma}\leq 3\sigma/2\). The proof will work on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\cap\mathcal{E}_{3}\cap\mathcal{E}_{4}\) of probability \(\geq 1-4\delta\).
Next, we will use the notation \(\mu(\boldsymbol{b}):=\mu\left(\mathcal{C}_{\boldsymbol{b},\|\cdot\|_{1}}(6)\right)\) and
\[\mathcal{F} :=\mathcal{F}(s,s\log p,\rho_{1}(\boldsymbol{\Sigma}),\mathsf{C}),\] \[\mathcal{F}_{0} :=\mathcal{F}(s,s\log p,\rho_{1}(\boldsymbol{\Sigma}),\mathtt{c} _{0,n}^{2},\mathsf{C}),\]
for \(\mathtt{c}_{0,n}\) and \(\mathsf{C}\) to be determined. By definition \(\mu_{*}:=\sup_{\boldsymbol{b}\in\mathcal{F}}\mu(\boldsymbol{b})<\infty\). We will take
\[\mathtt{c}_{0,n}\asymp r_{n,s\log p,\delta}(\rho_{1}(\boldsymbol{\Sigma}), \mu_{*})=L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+L^{2}\rho_{1}(\boldsymbol{ \Sigma})\mu_{*}\sqrt{\frac{s\log p}{n}}.\]
Also, we take \(\mathtt{c}_{n}:=2\mathtt{c}_{0,n}\) and \(\mathtt{c}_{*}\asymp\mathtt{c}_{n}\).
Next, we invoke Theorem 22. Conditions (i)-(iii) are met. Now, we note that, by definition of \(\mathcal{F}\) and assumptions of the theorem,
\[\mathcal{O}(1)CL^{2}\rho_{1}(\boldsymbol{\Sigma})\mu_{*}\sqrt{ \frac{s\log p}{n}}<1, \tag{121}\] \[CL\sqrt{\epsilon\log(e/\epsilon)}<c_{1}, \tag{122}\]
choosing appropriate absolute constants \(c_{1}\in(0,1)\) and \(\mathsf{C}\geq 1\). In that case, by changing constants if necessary, we can assume \(\mathtt{c}_{n},\mathtt{c}_{*}\in(0,1)\) are small enough. Thus, if we choose \(\lambda\asymp C\sigma L^{2}\rho_{1}(\boldsymbol{\Sigma})\sqrt{\log p/n}\) and \(\tau\asymp C\sigma L/\sqrt{n}\), it is easy to check that conditions (iv)-(v) of Theorem 22 are met.
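To make the scaling of these tuning parameters concrete, here is a minimal numerical sketch. The constants \(C\), \(L\), \(\sigma\), \(\rho_{1}(\boldsymbol{\Sigma})\) and \(\mu_{*}\) are placeholders chosen purely for illustration (they are problem-dependent quantities in the text, not values it prescribes), and the function name `tuning_rates` is ours.

```python
import numpy as np

def tuning_rates(n, p, s, delta, L=1.0, sigma=1.0, rho1=1.0, mu_star=1.0, C=1.0):
    """Evaluate the rates c_{0,n}, lambda and tau appearing in the proof sketch.

    All constants are illustrative placeholders (assumptions), not values
    fixed by the text.
    """
    c0n = L * (1.0 + np.sqrt(np.log(1.0 / delta))) / np.sqrt(n) \
        + L**2 * rho1 * mu_star * np.sqrt(s * np.log(p) / n)
    lam = C * sigma * L**2 * rho1 * np.sqrt(np.log(p) / n)  # lambda ~ C sigma L^2 rho1 sqrt(log p / n)
    tau = C * sigma * L / np.sqrt(n)                        # tau ~ C sigma L / sqrt(n)
    return c0n, lam, tau

# Example: all three rates shrink as n grows, at the advertised sqrt(log p / n) scale.
print(tuning_rates(n=10_000, p=500, s=10, delta=0.05))
```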
It remains to verify conditions (37)-(40) for \(\boldsymbol{b}\in\mathcal{F}_{0}\). Condition (37) is met: \(D=\|\mathfrak{X}^{(n)}(\boldsymbol{b})-\boldsymbol{f}^{(n)}\|_{2}\leq\mathtt{c}_{0,n}\sigma\leq 2\mathtt{c}_{0,n}\hat{\sigma}\). Let \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(\boldsymbol{\Delta}_{\boldsymbol{b}}|\boldsymbol{b})\). By the choice of \((\lambda,\tau)\), one checks that (38) is satisfied if (121)-(122) hold.
Let \(R:=(\hat{\sigma}\mathtt{c}_{n})\vee(3r)\) and \(r_{*}:=\sup_{\boldsymbol{b}\in\mathcal{F}}r\). We have \(R\lesssim r_{*}\). Next, we show that \(D^{2}+\boldsymbol{\phi}_{1}(\mathtt{f}_{1},R)\leq\mathtt{c}_{*}^{2}\sigma^{2}\). For this to be true it is sufficient that \(D\leq\frac{\mathtt{c}_{*}\sigma}{3}\) and \(\boldsymbol{\phi}_{1}^{1/2}(\mathtt{f}_{1},R)\leq\frac{3\mathtt{c}_{*}\sigma}{4}\). By adjusting constants, we can have \(D\leq\hat{\sigma}\mathtt{c}_{n}\leq\frac{\mathtt{c}_{*}\sigma}{3}\). Note that
\[\boldsymbol{\phi}_{1}^{1/2}(\mathtt{f}_{1},R)\lesssim\mathtt{f}_{1}+R\lesssim C \sigma L\frac{1+\sqrt{\log(1/\delta)}}{\sqrt{n}}+r_{*},\]
which, by definition of \(\mathtt{c}_{*}\) and (121)-(122), can be shown to be not greater than \(\frac{3\mathtt{c}_{*}\sigma}{4}\) -- adjusting the numerical constants if necessary. Hence, condition (39) holds. The verification of (40) is similar.
We now verify the statement of Theorem 4, case (i') -- using the bounds (41)-(42) of Theorem 22. Let \(\hat{r}:=r_{\hat{\sigma}\lambda,0,6}(\boldsymbol{\Delta}_{\mathbf{B}}|\mathbf{B})\), \(\hat{R}:=(\hat{\sigma}\mathtt{c}_{n})\vee(3\hat{r})\) and \(\hat{r}_{*}:=\sup_{\boldsymbol{b}\in\mathcal{F}}\hat{r}\). We have \(\hat{R}\lesssim\hat{r}_{*}\). We claim that, using \(D\lesssim\sigma\mathtt{c}_{0,n}\) and (121)-(122), computations similar to those used in the proof of Theorem 3(i) to bound the terms \(D^{2}+\boldsymbol{\phi}_{1}(F,\hat{R})\) and \(2D+\boldsymbol{\phi}_{1}(F,\hat{R})\) entail the bounds in (15)-(16).
## 36 Proof sketch of Theorem 4, case (ii')
The proof of Theorem 4(ii') needs Definitions 29-30 used in the proof of Theorem 3, case (ii) -- see Section 32.
The proof with \(\mathcal{R}=\|\cdot\|_{\sharp}\), the Slope norm in \(\mathbb{R}^{p}\), follows a similar path to the proof of Theorem 4, case (i'). We claim that a very similar theorem to Theorem 22 holds but with the following minor changes:
We replace \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(\boldsymbol{\Delta}_{\mathbf{B}}|\mathbf{B})\) with \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(s)\) and \(\hat{r}:=r_{\hat{\sigma}\lambda,0,6}(\boldsymbol{\Delta}_{\mathbf{B}}|\mathbf{B})\) with \(\hat{r}:=r_{\hat{\sigma}\lambda,0,6}(s)\).
Let us call it Theorem 22\({}^{\prime}\). Using this theorem, setting \(\mu(\boldsymbol{b}):=\mu(\overline{\mathcal{C}}_{s}(6))\) for given \(\boldsymbol{b}\) with \(\|\boldsymbol{b}\|_{0}\leq s\), using the bound \(\mathscr{G}(\boldsymbol{\Sigma}^{1/2}\mathbb{B}_{\sharp})\lesssim\rho_{1}( \boldsymbol{\Sigma})\) -- which follows from Proposition E.2 in [6] -- and the fact that \(\overline{\Omega}_{s}\leq 2s\log(ep/s)\), the proof of Theorem 4, case (ii') follows very similar arguments to the case (i').
Next, we highlight the minor changes in the proof of Theorem 22\({}^{\prime}\). Lemmas 32-33 are unchanged. Instead of Lemma 19 we used Lemma 31 -- see Section 32. Using these lemmas, we obtain a variation of Proposition 8 -- again with a similar proof, using \(\triangle:=\triangle_{\hat{\sigma}\lambda,\hat{\sigma}\tau}(\boldsymbol{ \Delta}_{\mathbf{b}},\boldsymbol{\Delta}^{\boldsymbol{\theta}})\) instead of \(\triangle:=\triangle_{\hat{\sigma}\lambda,\hat{\sigma}\tau}(\boldsymbol{ \Delta}_{\mathbf{B}},\boldsymbol{\Delta}^{\boldsymbol{\theta}}|\mathbf{B})\) and \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(s)\) instead of \(r:=r_{\hat{\sigma}\lambda,\hat{\sigma}\tau\Omega,6}(\boldsymbol{\Delta}_{ \mathbf{B}}|\mathbf{B})\). Lemmas 34-35 are unchanged. Using all these auxiliary results, we claim that the proof of Theorem 22\({}^{\prime}\) follows the same arguments in the proof of Theorem 22 -- using \(\triangle:=\triangle_{\hat{\sigma}\lambda,0}(\boldsymbol{\Delta}_{\mathbf{b}}, \mathbf{0})\) instead of \(\triangle:=\triangle_{\hat{\sigma}\lambda,0}(\boldsymbol{\Delta}_{\mathbf{B}},\mathbf{0}|\mathbf{B})\), \(r:=r_{\hat{\sigma}\lambda,0,6}(s)\) instead of \(r:=r_{\hat{\sigma}\lambda,0,3}(\boldsymbol{\Delta}_{\mathbf{B}}|\mathbf{B})\) and \(\overline{\mathcal{C}}_{s}(6)\) instead of \(C_{\mathbf{B}}(6)\).
## 37 Proof sketch of Theorem 4, case (iii')
The exact same comments in Section 33 apply -- but invoking Theorem 22 instead of Theorem 16.
## 38 Proof of Theorem 5
In this section, \(\boldsymbol{\Gamma}^{*}\equiv\mathbf{0}\) and \(\|\mathbf{B}^{*}\|_{\infty}\leq\mathsf{a}<\infty\). For ease of reference, we recall that the estimator \(\hat{\mathbf{B}}\) in the statement of Theorem 5 is equivalent to
\[\min_{[\mathbf{B},\boldsymbol{\theta}]\in\mathds{R}^{p}\times \mathbb{R}^{n}}\frac{1}{2n}\sum_{i=1}^{n}\left(y_{i}-\left(\mathbf{X}_{i}, \mathbf{B}\right)+\sqrt{n}\boldsymbol{\theta}_{i}\right)^{2}+\tfrac{\lambda}{ \sqrt{p}}\mathcal{R}(\mathbf{B})+\tau\|\boldsymbol{\theta}\|_{\sharp}\] s.t. \[\|\mathbf{B}\|_{\infty}\leq\mathsf{a}.\]
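As a reading aid, the following minimal sketch evaluates the objective above for a candidate pair \([\mathbf{B},\boldsymbol{\theta}]\). It follows the display verbatim (including the \(+\sqrt{n}\,\boldsymbol{\theta}_{i}\) sign and the \(\lambda/\sqrt{p}\) scaling); the weight vector \(\omega\), the \(\ell_{1}\) stand-in passed for \(\mathcal{R}\), and all variable names are illustrative assumptions, and the box constraint \(\|\mathbf{B}\|_{\infty}\leq\mathsf{a}\) would be enforced by whatever solver is used.

```python
import numpy as np

def slope_norm(theta, omega):
    """Sorted-l1 (Slope) norm: sum_i omega_i * |theta|_(i), with omega non-increasing."""
    return float(np.sum(np.sort(omega)[::-1] * np.sort(np.abs(theta))[::-1]))

def objective(B, theta, y, X, lam, tau, omega, reg=lambda B: np.sum(np.abs(B))):
    """Objective of the display above; `reg` stands in for R (l1 here for simplicity,
    while Section 38 takes R to be the nuclear norm of the matrix-shaped B)."""
    n, p = X.shape
    resid = y - X @ B + np.sqrt(n) * theta      # residual, sign as in the display
    return (resid @ resid) / (2 * n) \
        + (lam / np.sqrt(p)) * reg(B) \
        + tau * slope_norm(theta, omega)
```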
### Design properties for "heavy-tailed" discrete designs
Recall Definition 9 in Section 9. We will work with a variation of \(\mathrm{MP}\).
_Definition 36_ (\(\mathrm{MP}\)).: Let \(\mathcal{R}\) be a norm on \(\mathbb{R}^{p}\) and \(\mathcal{Q}\) be a norm on \(\mathbb{R}^{n}\). Given non-negative numbers \((\mathsf{f}_{1},\mathsf{f}_{2},\mathsf{f}_{4})\), we say \((\mathfrak{X},\boldsymbol{\xi})\) satisfies \(\mathrm{MP}_{\mathcal{R},\mathcal{Q}}(\mathsf{f}_{1},\mathsf{f}_{2},\mathsf{f}_{ 4})\) if
\[\forall[\mathbf{V},\boldsymbol{u}]\in\mathds{R}^{p}\times\mathbb{R}^{n},\quad |\langle\boldsymbol{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{V},\boldsymbol{u}) \rangle|\leq\mathsf{f}_{1}\|\boldsymbol{u}\|_{2}+\mathsf{f}_{2}\mathcal{R}( \mathbf{V})+\mathsf{f}_{4}\mathcal{Q}(\boldsymbol{u}).\]
We will also need additional cone definitions.
_Definition 37_.: Let \(\mathcal{R}\) be a norm on \(\mathbb{R}^{p}\) and \(\mathcal{Q}\) be a norm on \(\mathbb{R}^{n}\). Given \(\mathsf{h}>0\), we define the cones
\[\mathfrak{C}(\mathsf{h}) :=\left\{\mathbf{V}:\mathsf{h}\|\mathbf{V}\|_{\infty}\leq\| \mathbf{V}\|_{\Pi}\right\},\] \[\mathcal{C}_{\mathcal{R}^{(p)}}(\mathsf{h}) :=\left\{\mathbf{V}:\mathsf{h}\|\mathbf{V}\|_{\infty}\mathcal{R}^{( p)}(\mathbf{V})\leq\|\mathbf{V}\|_{\Pi}^{2}\right\},\] \[\mathsf{C}_{\mathcal{Q}}(\mathsf{h}) :=\left\{[\mathbf{V},\boldsymbol{u}]:\mathsf{h}\|\mathbf{V}\|_{ \infty}\mathcal{Q}(\boldsymbol{u})\leq\|[\mathbf{V},\boldsymbol{u}]\|_{\Pi}^{2} \right\}.\]
The aim of this section is to prove that, with high probability, \(\mathrm{ARSC}(\mathsf{c}_{1},0,0,0)\) holds over an appropriate cone \(\mathbb{C}\) and \(\mathrm{MP}_{\mathcal{R}^{(p)},\|.\|_{\sharp}}(\mathsf{f}_{1},\mathsf{f}_{2}, \mathsf{f}_{4})\) holds everywhere. Throughout this section, we additionally assume \((\mathbf{X},\xi)\in\mathds{R}^{p}\times\mathbb{R}\) satisfies Assumption 3 and \(\{(\mathbf{X}_{i},\xi_{i})\}_{i\in[n]}\) is an iid copy of \((\mathbf{X},\xi)\). Given a compact set \(V\subset\mathbb{R}^{p}\), we shall need the Gaussian complexity
\[\mathcal{G}(V):=\mathbb{E}\left[\mathscr{G}\left(\mathfrak{X}\left(V\right) \right)\right]=\mathbb{E}\left[\mathbb{E}\left[\sup_{\mathbf{V}\in V}\sum_{i=1 }^{n}\mathfrak{X}_{i}(\mathbf{V})\boldsymbol{\xi}_{i}\bigg{|}\mathfrak{X} \right]\right],\]
where \(\mathbf{\xi}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{n})\) is independent of \(\mathbf{X}\). Define also \(\tilde{\mathcal{G}}(V):=\sqrt{p}\mathcal{G}(V)\). We note that, by the dual-norm inequality, given \(\mathbf{V}\in\mathbb{R}^{p}\), \(|\langle\mathbf{X},\mathbf{V}\rangle|\leq\|\mathbf{V}\|_{\infty}\). In particular, \(\|\mathbf{V}\|_{\Pi}=\mathbb{E}^{1/2}(\mathbf{X},\mathbf{V})^{2}\leq\|\mathbf{ V}\|_{\infty}\).
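As an illustration of this definition, the following sketch approximates the inner conditional Gaussian complexity \(\mathscr{G}(\mathfrak{X}(V))\) for one sampled design by Monte Carlo, with \(V\) replaced by a finite candidate set; the finite set, the sampled design and the sample sizes are assumptions made purely to have something computable.

```python
import numpy as np

def gaussian_complexity_mc(X, V_candidates, n_mc=500, seed=0):
    """Monte Carlo proxy for E[sup_V sum_i <X_i, V> xi_i | X], xi ~ N(0, I_n),
    with the supremum restricted to the rows of V_candidates (an illustrative
    assumption; the text takes a supremum over a continuous set)."""
    rng = np.random.default_rng(seed)
    scores = X @ V_candidates.T                 # (n, K): scores[i, k] = <X_i, V_k>
    n = X.shape[0]
    sups = [np.max(rng.standard_normal(n) @ scores) for _ in range(n_mc)]
    return float(np.mean(sups))

# Example with a random design and a handful of candidate directions.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
V_candidates = rng.standard_normal((50, 20))
print(gaussian_complexity_mc(X, V_candidates))
```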
The next result states a _lower bound_ on the quadratic process. The main tool to prove this lemma is a _one-sided_ version of Bernstein's inequality for bounded processes due to Bousquet [4].
**Lemma 38**.: _Let \(\mathbb{S}_{\infty}^{p}:=\{\mathbf{V}\in\mathbb{R}^{p}:\|\mathbf{V}\|_{\infty }=1\}\) and \(V\subset R\mathbb{B}_{\Pi}\cap\mathbb{S}_{\infty}^{p}\) for some \(R\in[0,1]\). Then, for any \(n\geq 1\) and \(t\geq 0\), with probability at least \(1-\exp(-t)\),_
\[\sup_{\mathbf{V}\in V}\frac{1}{n}\sum_{i\in[n]}\left[\|\mathbf{V}\|_{\Pi}^{2}- \langle\mathbf{X}_{i},\mathbf{V}\rangle^{2}\right]\leq 2\sqrt{2\pi}\frac{ \mathcal{G}(V)}{n}+2R\sqrt{\frac{t}{n}}+\frac{t}{3n}.\]
Proof.: Define \(X_{i,\mathbf{V}}:=\|\mathbf{V}\|_{\Pi}^{2}-\langle\mathbf{X}_{i},\mathbf{V} \rangle^{2}\). By the symmetrization inequality (e.g. Exercise 11.5 in [4]), we have
\[\mathbb{E}\left[\sup_{\mathbf{V}\in V}\sum_{i\in[n]}X_{i,\mathbf{V}}\right] \leq 2\mathbb{E}\left[\sup_{\mathbf{V}\in V}\sum_{i\in[n]}\epsilon_{i}X_{i, \mathbf{V}}\right],\]
where \(\{\epsilon_{i}\}_{i\in[n]}\) are iid Rademacher variables independent of \(X:=\{\mathbf{X}_{i}\}_{i\in[n]}\). One may bound the Rademacher complexity by the Gaussian complexity as
\[\mathbb{E}\left[\sup_{\mathbf{V}\in V}\sum_{i\in[n]}\epsilon_{i}X_{i,\mathbf{ V}}\bigg{|}X\right]\leq\sqrt{\frac{\pi}{2}}\mathbb{E}\left[\sup_{\mathbf{V}\in V} \sum_{i\in[n]}g_{i}X_{i,\mathbf{V}}\bigg{|}X\right],\]
where \(\{g_{i}\}_{i\in[n]}\) is an iid \(\mathcal{N}(0,1)\) sequence independent of \(\{\mathbf{X}_{i}\}_{i\in[n]}\).
We will now use a standard argument via Slepian's inequality over the randomness of \(\{g_{i}\}_{i\in[n]}\) for the process \(\mathbf{V}\mapsto Z_{\mathbf{V}}:=\sum_{i\in[n]}g_{i}X_{i,\mathbf{V}}\). One has
\[\mathbb{E}[(Z_{\mathbf{V}}-Z_{\mathbf{V}^{\prime}})^{2}|X]=\sum_{i\in[n]} \langle\mathbf{X}_{i},\mathbf{V}-\mathbf{V}^{\prime}\rangle^{2}\langle \mathbf{X}_{i},\mathbf{V}+\mathbf{V}^{\prime}\rangle^{2}\leq 4\|\tilde{ \mathcal{X}}(\mathbf{V}-\mathbf{V}^{\prime})\|_{2}^{2}.\]
Define the Gaussian process \(W_{\mathbf{V}}:=2\langle\tilde{\mathcal{X}}(\mathbf{V}),\mathbf{\xi}\rangle\) with \(\mathbf{\xi}\sim\mathcal{N}_{n}(0,\mathbf{I}_{n})\) independent of \(\{\mathbf{X}_{i}\}_{i\in[n]}\). Under the above conditions, Slepian's inequality implies
\[\mathbb{E}\left[\sup_{\mathbf{V}\in V}Z_{\mathbf{V}}\bigg{|}X\right]\leq \mathbb{E}\left[\sup_{\mathbf{V}\in V}W_{\mathbf{V}}\bigg{|}X\right]=2 \mathcal{G}(\tilde{\mathcal{X}}(V)).\]
Define \(Z:=\sup_{\mathbf{V}\in V}\sum_{i\in[n]}X_{i,\mathbf{V}}\). The previous displays imply \(\mathbb{E}Z\leq 2\sqrt{2\pi}\mathbb{E}[\mathcal{G}(\tilde{\mathcal{X}}(V))]\).
We now establish concentration. For all \(i\in[n]\) and \(\mathbf{V}\in V\), \(X_{i,\mathbf{V}}\leq R^{2}\leq 1\). Since, for each \(\mathbf{V}\in V\), \(\{X_{i,\mathbf{V}}\}_{i\in[n]}\) are independent and identically distributed, we may apply Bousquet's inequality (e.g. Corollary 12.2 in [4]). We thus get that, for all \(t\geq 0\), with probability at least \(1-\exp(-t)\),
\[Z\leq\mathbb{E}Z+\sigma\sqrt{2t}+\frac{t}{3},\]
where \(\sigma^{2}:=\sup_{\mathbf{V}\in V}\sum_{i\in[n]}\mathbb{E}[X_{i,\mathbf{V}}^{2}]\). Since for \(\mathbf{V}\in V\), \(\|\mathbf{V}\|_{\Pi}\vee|\langle\mathbf{X}_{i},\mathbf{V}\rangle|\leq\|\mathbf{V}\|_{\infty}\leq 1\), it is easy to check that \(\mathbb{E}((\mathbf{X}_{i},\mathbf{V})^{2}-\|\mathbf{V}\|_{\Pi}^{2})^{2}\leq\mathbb{E}(\mathbf{X}_{i},\mathbf{V})^{4}+\|\mathbf{V}\|_{\Pi}^{4}\leq 2R^{2}\). In particular,
\[\sigma^{2}=\sup_{\mathbf{V}\in V}\sum_{i\in[n]}\mathbb{E}[X_{i,\mathbf{V}}^{ 2}]\leq 2R^{2}n.\]
This finishes the proof.
The next proposition is proved using the previous lemma and a peeling argument.
**Proposition 9** (Rsc).: _There are universal constants \(\mathsf{c}_{0},c_{0}\in(0,1)\) for which the following holds. Let \(\delta\in(0,1)\) and define the number_
\[\mathsf{h}_{\delta}:=\frac{\tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}})}{c_{0} n}\bigvee\frac{\sqrt{1+\log(1/\delta)}}{c_{0}\sqrt{n}}, \tag{123}\]
_and the cone \(\mathbb{C}(\delta):=\mathfrak{C}\left(\mathsf{h}_{\delta}\right)\bigcap \mathscr{C}_{\mathcal{R}^{(p)}}\left(\mathsf{h}_{\delta}\right).\) Then, with probability at least \(1-\delta\) the design operator \(\mathfrak{X}\) satisfies_
\[\inf_{\mathbf{V}\in\mathbb{C}(\delta)}\frac{\|\mathfrak{X}^{(n)}(\mathbf{V}) \|_{2}}{\|\mathbf{V}\|_{\Pi}}\geq\mathsf{c}_{0}.\]
Proof.: Fix \(\alpha>1\) and \(\delta\in(0,1)\). We define the set \(S(\delta):=\mathbb{C}(\delta)\cap\mathbb{S}_{\infty}\) and, for all \(\ell\in\mathbb{N}^{*}\), the set
\[S_{\ell}(\delta):=\left\{\mathbf{V}\in\mathbb{S}_{\infty}^{p}:\alpha^{\ell-1 }\mathsf{h}_{\delta}\leq\|\mathbf{V}\|_{\Pi}<\alpha^{\ell}\mathsf{h}_{\delta},\quad\mathsf{h}_{\delta}\mathcal{R}^{(p)}(\mathbf{V})\leq\|\mathbf{V}\|_{\Pi} ^{2}\right\}.\]
Note that for all \(\ell\in\mathbb{N}^{*}\), \(S_{\ell}(\delta)\subset r\mathbb{B}_{\mathcal{R}}\) with \(r:=\sqrt{p}\alpha^{2\ell}\mathsf{h}_{\delta}\). Hence,
\[\mathcal{G}(S_{\ell}(\delta))=\mathbb{E}[\mathcal{G}(\mathfrak{X}(S_{\ell}( \delta)))]\leq\sqrt{p}\alpha^{2\ell}\mathsf{h}_{\delta}\mathbb{E}[\mathcal{G} (\mathfrak{X}(\mathbb{B}_{\mathcal{R}}))]=\alpha^{2\ell}\mathsf{h}_{\delta} \tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}}).\]
For any \(\ell\in\mathbb{N}^{*}\), \(S_{\ell}(\delta)\subset\mathbb{S}_{\infty}^{p}\cap r_{\ell}\mathbb{B}_{\Pi}\) with \(r_{\ell}:=\min\{1,\alpha^{\ell}\mathsf{h}_{\delta}\}\). Hence, by Lemma 38 and the previous displayed bound, it holds that, for any \(\ell\in\mathbb{N}^{*}\) and \(t>0\), on an event \(\mathcal{E}_{\ell}(t)\) of probability \(\geq 1-e^{-t}\), for all \(\mathbf{V}\in S_{\ell}(\delta)\),
\[\|\mathbf{V}\|_{\Pi}^{2}-\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2}\leq 2\sqrt{2\pi}\frac{\alpha^{2\ell}\mathsf{h}_{\delta}\tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}})}{n}+2r_{\ell}\sqrt{\frac{t}{n}}+\frac{t}{3n}. \tag{124}\]
Let \(c\in(0,1)\) be an absolute constant to be determined. By a union bound, the event \(\mathcal{E}:=\cap_{\ell=1}^{\infty}\mathcal{E}_{\ell}\left(cn\alpha^{2\ell}\mathsf{h}_{\delta}^{2}\right)\) is such that
\[\mathbb{P}(\mathcal{E}^{c})\leq\sum_{\ell=1}^{\infty}\exp\left(-cn\alpha^{2\ell}\mathsf{h}_{\delta}^{2}\right)\leq\sum_{\ell=1}^{\infty}\exp\left(-2cn\log(\alpha)\mathsf{h}_{\delta}^{2}\cdot\ell\right)\leq\frac{\exp\left(-2cn\log(\alpha)\mathsf{h}_{\delta}^{2}\right)}{1-\exp\left(-2cn\log(\alpha)\mathsf{h}_{\delta}^{2}\right)}.\]
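Two elementary facts are used in this chain: first, \(\alpha^{2\ell}=e^{2\ell\log\alpha}\geq 2\ell\log\alpha\), which gives the middle inequality; second, the geometric series \(\sum_{\ell\geq 1}q^{\ell}=q/(1-q)\) with \(q:=\exp\left(-2cn\log(\alpha)\mathsf{h}_{\delta}^{2}\right)\in(0,1)\), which gives the last one.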
Next, we assume that
\[\mathsf{h}_{\delta}\geq\frac{\tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}})}{ cn}\bigvee\sqrt{\frac{\log(2)}{2c\log(\alpha)\cdot n}}\bigvee\sqrt{\frac{\log(2/ \delta)}{2c\log(\alpha)\cdot n}}.\]
In particular, \(\mathbb{P}(\mathcal{E}^{c})\leq\delta\) and \(\frac{\mathsf{h}_{\delta}\tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}})}{n}\leq c\,\mathsf{h}_{\delta}^{2}\).
The rest of the proof will occur on the event \(\mathcal{E}\). Let \(\mathbf{V}\in S(\delta)\). There is \(\ell\in\mathbb{N}^{*}\) such that \(\mathbf{V}\in S_{\ell}(\delta)\). Since \(\mathbb{S}_{\infty}^{p}\subset\mathbb{B}_{\Pi}\), we have two cases:
**Case 1:**: \(\alpha^{\ell}\mathsf{h}_{\delta}<1\). From (124),
\[\|\mathbf{V}\|_{\Pi}^{2}-\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2} \leq 2\sqrt{2\pi}\cdot c(\alpha^{\ell}\mathsf{h}_{\delta})^{2}+2 \sqrt{c}(\alpha^{\ell}\mathsf{h}_{\delta})^{2}+\frac{c}{3}(\alpha^{\ell} \mathsf{h}_{\delta})^{2}\] \[\leq\left(2\sqrt{2\pi}\cdot c+2\sqrt{c}+\frac{c}{3}\right)\alpha^{ 2}\|\mathbf{V}\|_{\Pi}^{2}.\]
**Case 2:**: \(\alpha^{\ell-1}\mathsf{h}_{\delta}<1\leq\alpha^{\ell}\mathsf{h}_{\delta}\). From (124),
\[\|\mathbf{V}\|_{\Pi}^{2}-\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2} \leq 2\sqrt{2\pi}\cdot c(\alpha^{\ell}\mathsf{h}_{\delta})^{2}+2 \sqrt{c}(\alpha^{\ell}\mathsf{h}_{\delta})+\frac{c}{3}(\alpha^{\ell}\mathsf{h}_{ \delta})^{2}\] \[\leq\left(2\sqrt{2\pi}\cdot c+2\sqrt{c}+\frac{c}{3}\right)(\alpha^ {\ell}\mathsf{h}_{\delta})^{2}\] \[\leq\left(2\sqrt{2\pi}\cdot c+2\sqrt{c}+\frac{c}{3}\right)\alpha^{ 2}\|\mathbf{V}\|_{\Pi}^{2}.\]
Choose \(c=c(\alpha)\in(0,1)\) small enough so that \(a:=((2\sqrt{2\pi}+3^{-1})c+2\sqrt{c})\alpha^{2}<1\) and set \(\mathsf{c}_{0}^{2}:=1-a\). We
conclude that \(\inf_{\mathbf{V}\in\mathbb{C}(\delta)\cap\mathbb{S}_{\infty}^{p}}\frac{\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}}{\|\mathbf{V}\|_{\Pi}}\geq\mathsf{c}_{0}.\) The fact that \(\inf_{\mathbf{V}\in\mathbb{C}(\delta)}\frac{\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}}{\|\mathbf{V}\|_{\Pi}}\geq\mathsf{c}_{0}\) follows from homogeneity of norms and the fact that \(\mathbb{C}(\delta)\) is a cone. This concludes the proof.
**Proposition 10** (\(\mathrm{ARSC}\)).: _Let \(\mathsf{c}_{0},c_{0}\in(0,1)\) be the universal constants of Proposition 9. Given \(\delta\in(0,1)\), let \(\mathsf{h}_{\delta}\) be the number in (123) and define the cone_
\[\mathbb{C}(\delta):=(\mathfrak{C}(\mathsf{h}_{\delta})\times\mathbb{R}^{n})\bigcap\left(\mathcal{C}_{\mathcal{R}^{(p)}}\left(\mathsf{h}_{\delta}\right)\times\mathbb{R}^{n}\right)\bigcap\mathsf{C}_{\|\cdot\|_{\sharp}}\left(\frac{4}{\mathsf{c}_{0}^{2}\sqrt{n}}\right).\]
_Then, with probability at least \(1-\delta\) the design operator \(\mathfrak{M}\) satisfies_
\[\inf_{[\mathbf{V},\mathbf{u}]\in\mathbb{C}(\delta)}\frac{\|\mathfrak{M}^{(n)}( \mathbf{V},\mathbf{u})\|_{2}}{\|[\mathbf{V},\mathbf{u}]\|_{\Pi}}\geq\mathsf{c}_{0}/ \sqrt{2}.\]
Proof.: The proof will happen on the event for which the statement of Proposition 9 is satisfied. First, we claim that, for all \([\mathbf{V},\mathbf{u}]\in\mathds{R}^{p}\times\mathbb{R}^{n}\),
\[\langle\mathfrak{X}^{(n)}(\mathbf{V}),\mathbf{u}\rangle\leq\frac{\|\mathbf{V}\|_{ \infty}}{\sqrt{n}}\|\mathbf{u}\|_{\sharp}.\]
Indeed, \(\sum_{i\in[n]}\mathbf{u}_{i}(\mathbf{X}_{i}^{(n)},\mathbf{V})\leq\frac{1}{\sqrt{ n}}\sum_{i\in[n]}\mathbf{u}_{i}^{\sharp}(\mathbf{X}_{i},\mathbf{V})^{\sharp}\leq \frac{\|\mathbf{V}\|_{\infty}}{\sqrt{n}}\|\mathbf{u}\|_{\sharp}\), since \(\min_{i\in[n]}\omega_{i}=\omega_{n}\geq 1\) as the sequence \(\{\omega_{i}\}\) is non-increasing.
From the above fact and Proposition 9, we have that, for all \([\mathbf{V},\mathbf{u}]\in\mathbb{C}(\delta)\),
\[\|\mathfrak{X}^{(n)}(\mathbf{V})+\mathbf{u}\|_{2}^{2} =\|\mathfrak{X}^{(n)}(\mathbf{V})\|_{2}^{2}+\|\mathbf{u}\|_{2}^{2}+2 \langle\mathfrak{X}^{(n)}(\mathbf{V}),\mathbf{u}\rangle\] \[\geq\mathsf{c}_{0}^{2}\|\mathbf{V}\|_{\Pi}^{2}+\|\mathbf{u}\|_{2}^{2} -2\frac{\|\mathbf{V}\|_{\infty}}{\sqrt{n}}\|\mathbf{u}\|_{\sharp}\] \[\geq(\mathsf{c}_{0}^{2}/2)\|\mathbf{V},\mathbf{u}\|_{\Pi}^{2}.\]
This finishes the proof.
We now aim to prove \(\mathrm{MP}\) with \(\mathcal{R}=\|\cdot\|_{N}\), the nuclear norm. We will use the following well-known result, which makes use of Bernstein-type inequalities for random matrices [2], [3].29
Footnote 29: Note that \(\max_{k,\ell}\{R_{k},C_{\ell}\}\geq\frac{1}{d_{1}d_{2}}\).
**Lemma 39** (Lemma 2 in [61], Lemma 5 in [60]).: _Suppose \(\{(\eta,\mathbf{X})\}\cup\{(\eta_{i},\mathbf{X}_{i})\}_{i\in[n]}\) is an iid sequence taking values in \(\mathbb{R}\times\mathcal{X}\) such that \(\eta\) has zero mean, variance \(\sigma_{\eta}^{2}\) and \(0<|\eta|_{\psi_{2}}<\infty\). For \(t>0\), define_
\[\triangle_{\eta}(t):=\max\left\{\sigma_{\eta}\sqrt{\max_{k,\ell}\{R_{k},C_{ \ell}\}\frac{(\log t+\log d)}{n}},|\eta|_{\psi_{2}}\log^{1/2}\left(\frac{|\eta |_{\psi_{2}}m}{\sigma_{\eta}}\right)\frac{(\log t+\log d)}{n}\right\}.\]
_Then for some absolute constant \(\mathsf{C}>0\), for all \(\delta\in(0,1)\), with probability at least \(1-\delta\),_
\[\left\|\frac{1}{n}\sum_{i\in[n]}\eta_{i}\mathbf{X}_{i}\right\|_{\mathrm{op}} \leq\mathsf{C}\triangle_{\eta}(2/\delta).\]
**Proposition 11** (\(\mathrm{MP}\)).: _For all \(\delta\in(0,1]\) and \(n\in\mathbb{N}\), with probability of at least \(1-\delta\), for all \([\mathbf{V},\mathbf{u}]\in\mathds{R}^{p}\times\mathbb{R}^{n}\),_
\[\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{V},\mathbf{u})\rangle\leq C\sqrt {p}\triangle_{\xi}(4/\delta)\cdot\mathcal{R}^{(p)}(\mathbf{V})+C\sigma\frac{ \sqrt{1+\log(2/\delta)}}{\sqrt{n}}\|\mathbf{u}\|_{2}+C\sigma\frac{\mathscr{G}( \mathbb{B}_{\sharp}^{n})}{\sqrt{n}}\|\mathbf{u}\|_{\sharp}.\]
Proof.: From Lemma 39, on an event \(\mathcal{E}_{1}\) of probability \(\geq 1-\delta/2\),
\[\left\|\frac{1}{n}\sum_{i\in[n]}\xi_{i}\mathbf{X}_{i}\right\|_{\mathrm{op}}\leq C \triangle_{\xi}(4/\delta).\]
We shall also use that, on an event \(\mathcal{E}_{2}\) of probability \(\geq 1-\delta/2\), for all \(\mathbf{u}\in\mathbb{R}^{n}\),
\[\frac{1}{\sqrt{n}}\sum_{i\in[n]}\xi_{i}\mathbf{u}_{i}\leq C\sigma\sqrt{\frac{1+ \log(2/\delta)}{n}}+C\sigma\frac{\mathscr{G}(\mathbb{B}_{\sharp}^{n})}{\sqrt{n }}\|\mathbf{u}\|_{\sharp}. \tag{125}\]
Indeed, from Lemma 25 in Section 20 with the set \(U:=\{\mathbf{u}\in\mathbb{B}_{2}^{n}:\|\mathbf{u}\|_{\sharp}\leq r\}\) we obtain that, with probability \(\geq 1-\delta\),
\[\sup_{\mathbf{u}\in U}\frac{\langle\mathbf{\xi},\mathbf{u}\rangle}{\sqrt{n}}\leq C\sigma \frac{\mathscr{G}(\mathbb{B}_{\sharp}^{n})}{\sqrt{n}}r+C\sigma\sqrt{\frac{2 \log(1/\delta)}{n}}.\]
This fact and Lemma 43 in the Appendix with set \(V:=\mathbb{B}_{2}^{n}\) and functions \(M(\mathbf{u}):=-\langle\mathbf{\xi},\mathbf{u}\rangle/\sqrt{n}\), \(h(\mathbf{u}):=\|\mathbf{u}\|_{\sharp}\) and \(g(r):=r\mathscr{G}(\mathbb{B}_{\sharp}^{n})\) (as well as \(\bar{h}\equiv 0\) and \(\bar{g}\equiv 0\)) entails that with probability \(\geq 1-\delta\), for all \(\mathbf{u}\in\mathbb{B}_{2}^{n}\),
\[\frac{1}{\sqrt{n}}\sum_{i\in[n]}\xi_{i}\mathbf{u}_{i}\leq C\sigma\sqrt{\frac{1+ \log(1/\delta)}{n}}+C\sigma\frac{\mathscr{G}(\mathbb{B}_{\sharp}^{n})}{\sqrt{ n}}\|\mathbf{u}\|_{\sharp}.\]
This fact and homogeneity of norms imply that, on the same event, the property displayed above holds for all \(\mathbf{u}\in\mathbb{R}^{n}\). This proves (125).
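To make the homogeneity step explicit (a routine verification, assuming, as in the statement of Proposition 11, that the first term of (125) carries the factor \(\|\mathbf{u}\|_{2}\)): for \(\mathbf{u}\neq\mathbf{0}\), apply the bound on \(\mathbb{B}_{2}^{n}\) to \(\mathbf{u}/\|\mathbf{u}\|_{2}\) and multiply by \(\|\mathbf{u}\|_{2}\), so that
\[\frac{1}{\sqrt{n}}\sum_{i\in[n]}\xi_{i}\mathbf{u}_{i}=\|\mathbf{u}\|_{2}\cdot\frac{1}{\sqrt{n}}\sum_{i\in[n]}\xi_{i}\frac{\mathbf{u}_{i}}{\|\mathbf{u}\|_{2}}\leq C\sigma\sqrt{\frac{1+\log(1/\delta)}{n}}\,\|\mathbf{u}\|_{2}+C\sigma\frac{\mathscr{G}(\mathbb{B}_{\sharp}^{n})}{\sqrt{n}}\|\mathbf{u}\|_{\sharp},\]
using \(\|\mathbf{u}/\|\mathbf{u}\|_{2}\|_{\sharp}=\|\mathbf{u}\|_{\sharp}/\|\mathbf{u}\|_{2}\); this is dominated by the right-hand side of (125).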
On the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\) of probability \(\geq 1-\delta\), the inequality claimed in the lemma holds for all \([\mathbf{V},\mathbf{u}]\in\mathbb{R}^{p}\times\mathbb{R}^{n}\) since, by the dual norm inequality,
\[\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{V},\mathbf{u})\rangle =\frac{1}{n}\sum_{i\in[n]}\xi_{i}\langle\!\langle\mathbf{X}_{i}, \mathbf{V}\rangle\!\rangle+\frac{1}{\sqrt{n}}\sum_{i\in[n]}\xi_{i}\mathbf{u}_{i}\] \[\leq\sqrt{p}\left\|\frac{1}{n}\sum_{i\in[n]}\xi_{i}\mathbf{X}_{i} \right\|_{\mathrm{op}}\mathcal{R}^{(p)}(\mathbf{V})+\frac{1}{\sqrt{n}}\sum_{i \in[n]}\xi_{i}\mathbf{u}_{i}.\]
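A remark on the \(\sqrt{p}\) factor in the last step (recorded for convenience; it is not part of the original argument): by trace duality between the operator and nuclear norms, and recalling the normalization \(\mathcal{R}^{(p)}=\mathcal{R}/\sqrt{p}\) with \(\mathcal{R}=\|\cdot\|_{N}\),
\[\frac{1}{n}\sum_{i\in[n]}\xi_{i}\langle\!\langle\mathbf{X}_{i},\mathbf{V}\rangle\!\rangle\leq\left\|\frac{1}{n}\sum_{i\in[n]}\xi_{i}\mathbf{X}_{i}\right\|_{\mathrm{op}}\|\mathbf{V}\|_{N}=\sqrt{p}\left\|\frac{1}{n}\sum_{i\in[n]}\xi_{i}\mathbf{X}_{i}\right\|_{\mathrm{op}}\mathcal{R}^{(p)}(\mathbf{V}).\]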
### Deterministic bounds
Throughout this section, \(\mathcal{R}\) is any decomposable norm. For convenience, we will work with the normalized regularization \(\mathcal{R}^{(p)}=\mathcal{R}/\sqrt{p}\). Recall Definition 14. Given \(c_{0},\gamma,\eta>0\), we use the notation \(\mathcal{C}_{\mathbf{B}^{*},\mathcal{R}^{(p)}}(c_{0},\gamma,\eta):=\mathcal{C}_{\mathbf{B}^{*},\mathbf{0},\mathcal{R}^{(p)},0}(c_{0},\gamma,0,\eta)\).
**Lemma 40**.: _Suppose that \(\|\mathbf{B}^{*}\|_{\infty}\leq\mathsf{a}\) and_
1. \((\mathfrak{X},\mathbf{\xi})\) _satisfies_ \(\mathrm{MP}_{\mathcal{R}^{(p)},\|\cdot\|_{\sharp}}(\mathsf{f}_{1},\mathsf{f}_{ 2},\mathsf{f}_{4})\) _in Definition_ 36_._
2. \(\lambda=\gamma\tau\geq 2\mathsf{f}_{2},\tau\geq 2\mathsf{f}_{4}\)_._
_Then_
\[\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta }}})\|_{2}^{2}\leq\mathsf{f}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\triangle, \tag{126}\]
_where_
\[\triangle:=(\nicefrac{{3\lambda}}{{2}})(\mathcal{R}^{(p)}\circ\mathcal{P}_{\mathbf{B}^{*}})(\mathbf{\Delta}_{\mathbf{B}^{*}})-(\nicefrac{{\lambda}}{{2}})(\mathcal{R}^{(p)}\circ\mathcal{P}_{\mathbf{B}^{*}}^{\perp})(\mathbf{\Delta}_{\mathbf{B}^{*}})+(\nicefrac{{3\tau\Omega}}{{2}})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}-(\nicefrac{{\tau}}{{2}})\sum_{i=o+1}^{n}\omega_{i}(\mathbf{\Delta}^{\hat{\mathbf{\theta}}})_{i}^{\sharp}.\]
_In particular, \([\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathcal{C}_{\mathbf{B}^{*},\mathcal{R}^{(p)}}\left(3,\gamma,\Omega+\frac{2\mathsf{f}_{1}}{3\tau}\right)\)._
Proof sketch.: A similar argument used in Lemma 17 entails
\[\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{ \hat{\mathbf{\theta}}})\|_{2}^{2} \leq\langle\mathbf{\xi}^{(n)},\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}^{*}},\bm {\Delta}^{\hat{\mathbf{\theta}}})\rangle+\lambda(\mathcal{R}^{(p)}(\mathbf{B}^{*}) -\mathcal{R}^{(p)}(\hat{\mathbf{B}}))+\tau(\|\mathbf{\theta}^{*}\|_{\sharp}-\|\hat {\mathbf{\theta}}\|_{\sharp}).\]
By conditions (i)-(ii) and Lemmas 27 and 28 (with \(\nu=1/2\)), we obtain (126).
**Theorem 41** (Robust matrix completion).: _Define the cone_
\[\mathbb{C}:=(\mathfrak{C}\left(\mathsf{h}_{1}\right)\times\mathbb{R}^{n}) \bigcap\left(\mathcal{C}_{\mathcal{R}^{(p)}}\left(\mathsf{h}_{2}\right) \times\mathbb{R}^{n}\right)\bigcap\mathsf{C}_{\|\cdot\|_{\sharp}}\left( \mathsf{h}_{3}\right),\]
_for positive constants \((\mathsf{h}_{1},\mathsf{h}_{2},\mathsf{h}_{3})\). Suppose that_
* \((\mathfrak{X},\mathbf{\xi})\) _satisfies the_ \(\mathrm{MP}_{\mathcal{R}^{(p)},\|\cdot\|_{\sharp}}(\mathsf{f}_{1},\mathsf{f}_{ 2},\mathsf{f}_{4})\)_._
* _For some_ \(\mathsf{c}_{1}>0\)_, the design satisfies_ \(\inf_{[\mathbf{V},\mathbf{u}]\in\mathbb{C}}\frac{\|\mathfrak{M}^{(n)}(\mathbf{V},\mathbf{u})\|_{2}}{\|[\mathbf{V},\mathbf{u}]\|_{\Pi}}\geq\mathsf{c}_{1}\)_._
* \(\lambda=\gamma\tau\geq 2\mathsf{f}_{2}\) _and_ \(\tau\geq 2\mathsf{f}_{4}\)_._
_Define the quantities \(\mathcal{L}:=\frac{\mathsf{a}\mathsf{h}_{2}}{\lambda}\vee\frac{\mathsf{a} \mathsf{h}_{3}}{\tau}\) and_
\[R:=\Psi_{\mathcal{R}^{(p)}}\left(\mathcal{P}_{\mathbf{B}^{*}}( \mathbf{\Delta}_{\mathbf{B}^{*}})\right)\mu\left(\mathcal{C}_{\mathbf{B}^{*}, \mathcal{R}^{(p)}}(4)\right).\]
_Then either \(\|\mathbf{\Delta}_{\mathbf{B}^{*}}\|_{\Pi}\leq 2\mathsf{a}\mathsf{h}_{1}\) or_
\[\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}] \|_{\Pi} \leq\left[\mathcal{L}\sqrt{64\lambda^{2}R^{2}+(32\tau\Omega+20 \mathsf{f}_{1})^{2}}\right]\bigvee\left[\frac{1}{\mathsf{c}_{1}^{2}}\sqrt{2.25 \lambda^{2}R^{2}+(6\tau\Omega+4\mathsf{f}_{1})^{2}}\right],\] \[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{B}^{*}})+\tau\|\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp} \leq\left[8\mathcal{L}(4\lambda^{2}R^{2}+(8\tau\Omega+5\mathsf{f}_ {1})^{2})\right]\bigvee\left[\frac{1}{\mathsf{c}_{1}^{2}}(9\lambda^{2}R^{2}+(1 6\tau\Omega+10\mathsf{f}_{1})^{2})\right].\]
Proof.: We will divide the proof into several cases. We recall that by Lemma 40, \([\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathscr{C}\) with \(\mathscr{C}:=\mathcal{C}_{\mathbf{B}^{*},\mathcal{R}^{(p)}}(3,\gamma,\Omega+2\frac{\mathsf{f}_{1}}{3\tau})\).
**Case 1:**: \(\mathbf{\Delta}_{\mathbf{B}^{*}}\notin\mathfrak{C}\left(\mathsf{h}_{1}\right)\).
By definition, \(\|\mathbf{\Delta}_{\mathbf{B}^{*}}\|_{\Pi}\leq\mathsf{h}_{1}\|\mathbf{\Delta}_{\mathbf{B }^{*}}\|_{\infty}\leq 2\mathsf{a}\mathsf{h}_{1}\) and we are done.
**Case 2:**: \([\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\notin\mathscr{C }_{\mathcal{R}^{(p)}}\left(\mathsf{h}_{2}\right)\).
In that case
\[\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}^{2} \leq\mathsf{h}_{2}\|\mathbf{\Delta}_{\mathbf{B}^{*}}\|_{\infty}\mathcal{R}^{(p)}( \mathbf{\Delta}_{\mathbf{B}^{*}})\leq 2\mathsf{a}(\nicefrac{{\mathsf{h}_{2}}}{{\lambda}}) \lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{B}^{*}}). \tag{127}\]
We now consider two cases.
**Case 2.1:**: \(4\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{\Delta}_{\mathbf{B}^{*}})) \geq\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}^{\perp}(\mathbf{\Delta}_{ \mathbf{B}^{*}}))\).
We have \(\mathbf{\Delta}_{\mathbf{B}^{*}}\in\mathcal{C}_{\mathbf{B}^{*},\mathcal{R}^{(p)}}(4)\). Decomposability of \(\mathcal{R}\), \([\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathscr{C}\) and Cauchy-Schwarz further imply
\[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{B}^{*}})+\tau\|\mathbf{ \Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp} \leq 4\lambda\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{ \Delta}_{\mathbf{B}^{*}}))+(4\tau\Omega+2\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{ \mathbf{\theta}}}\|_{2}\] \[\leq \sqrt{16\lambda^{2}R^{2}+(4\tau\Omega+2\mathsf{f}_{1})^{2}}\big{\| [}\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\big{]}\big{\|}_{ \Pi}. \tag{128}\]
The above display and (127) imply
\[\big{\|}[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\big{\|}_ {\Pi}\leq 4\mathsf{a}\big{(}\nicefrac{{\mathsf{h}_{2}}}{{\lambda}}\big{)}\sqrt{4 \lambda^{2}R^{2}+(2\tau\Omega+\mathsf{f}_{1})^{2}},\]
and
\[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{B}^{*}})+\tau\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp} \leq 8\text{a}(\nicefrac{{\text{h}_{2}}}{{\lambda}})[4\lambda^{2}R^{2}+(2\tau\Omega+\text{f}_{1})^{2}].\]
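To spell out the arithmetic behind the last two displays (a step-by-step check; nothing new is claimed): combining (127) with (128), via \(\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{B}^{*}})\leq\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{B}^{*}})+\tau\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\),
\[\big{\|}[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\big{\|}_{\Pi}^{2}\leq 2\mathsf{a}(\nicefrac{{\mathsf{h}_{2}}}{{\lambda}})\sqrt{16\lambda^{2}R^{2}+(4\tau\Omega+2\mathsf{f}_{1})^{2}}\,\big{\|}[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\big{\|}_{\Pi};\]
dividing by \(\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}\) and pulling the factor \(2\) inside the square root gives the first bound, and substituting the first bound back into (128) gives the second.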
**Case 2.2:**: \(4\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{\Delta}_{\mathbf{B}^{*}}))<\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}^{\perp}(\mathbf{\Delta}_{\mathbf{B}^{*}}))\).
As \([\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathscr{C}\),
\[\lambda\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{\mathrm{B}}^{*}}(\mathbf{ \Delta}_{\mathbf{\mathrm{B}}^{*}}))+\tau\sum_{i=o+1}^{n}\omega_{i}(\mathbf{\Delta}^{ \hat{\mathbf{\theta}}})_{i}^{\sharp}\leq(3\tau\Omega+2\text{f}_{1})\|\mathbf{\Delta}^{ \hat{\mathbf{\theta}}}\|_{2}. \tag{129}\]
This fact, decomposability of \(\mathcal{R}\) and Cauchy-Schwarz imply
\[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}})+ \tau\big{\|}\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\big{\|}_{\sharp} \leq 4\lambda\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{\mathrm{B}}^{*}}( \mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}}))+(4\tau\Omega+2\text{f}_{1})\|\mathbf{\Delta}^{ \hat{\mathbf{\theta}}}\|_{2}\] \[\leq(16\tau\Omega+10\text{f}_{1})\,\big{\|}\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}\big{\|}_{2}. \tag{130}\]
The above display and (127) imply
\[\big{\|}[\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}]\big{\|}_{\Pi}\leq 4\text{a}(\nicefrac{{\text{h}_{2}}}{{\lambda}})(8 \tau\Omega+5\text{f}_{1}),\]
and
\[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}})+\tau \big{\|}\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\big{\|}_{\sharp}\leq 8\text{a}( \nicefrac{{\text{h}_{2}}}{{\lambda}})(8\tau\Omega+5\text{f}_{1})^{2}.\]
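For the record, the constant \(16\tau\Omega+10\mathsf{f}_{1}\) in (130) is pure arithmetic from (129):
\[4\lambda\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{\Delta}_{\mathbf{B}^{*}}))\leq 4(3\tau\Omega+2\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}=(12\tau\Omega+8\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2},\]
to which the term \((4\tau\Omega+2\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\) is added.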
**Case 3:**: \([\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\notin \text{C}_{\|\cdot\|_{\sharp}}(\text{h}_{3})\).
In that case
\[\|[\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta} }}]\|_{\Pi}^{2}\leq\text{h}_{3}\|\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}}\|_{\infty} \|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}\leq 2\text{a}(\nicefrac{{\text{h}_{3}}}{{ \tau}})\tau\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp}.\]
As in Case 2, by dividing in the same two subcases and using the bounds (128) and (130) we get
\[\big{\|}[\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}]\big{\|}_{\Pi} \leq 4\text{a}(\nicefrac{{\text{h}_{3}}}{{\tau}})\sqrt{4\lambda^{2}R ^{2}+(2\tau\Omega+\text{f}_{1})^{2}},\] \[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}})+\tau \|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp} \leq 8\text{a}(\nicefrac{{\text{h}_{3}}}{{\tau}})[4\lambda^{2}R^{2}+(2 \tau\Omega+\text{f}_{1})^{2}].\]
or
\[\big{\|}[\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{ \theta}}}]\big{\|}_{\Pi} \leq 4\text{a}(\nicefrac{{\text{h}_{3}}}{{\tau}})(8\tau\Omega+5 \text{f}_{1}),\] \[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}})+\tau \big{\|}\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\big{\|}_{\sharp} \leq 8\text{a}(\nicefrac{{\text{h}_{3}}}{{\tau}})(8\tau\Omega+5 \text{f}_{1})^{2}.\]
**Case 4:**: \([\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathbb{C}\).
As above, we further split in two cases.
**Case 4.1:**: \(4\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{\mathrm{B}}^{*}}(\mathbf{\Delta}_{\mathbf{\mathrm{B}} ^{*}}))\geq\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{\mathrm{B}}^{*}}^{\perp}(\mathbf{ \Delta}_{\mathbf{\mathrm{B}}^{*}}))\).
Hence \(\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}}\in\mathcal{C}_{\mathbf{\mathrm{B}}^{*},\mathcal{R} ^{(p)}}(4)\). Decomposability of \(\mathcal{R}\), \([\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathscr{C}\) and Cauchy-Schwarz give
\[\text{f}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\triangle \leq(\nicefrac{{3\lambda}}{{2}})\mathcal{R}^{(p)}(\mathcal{P}_{ \mathbf{\mathrm{B}}^{*}}(\mathbf{\Delta}_{\mathbf{\mathrm{B}}^{*}}))+(\nicefrac{{3\tau\Omega }}{{2}}+\text{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\] \[\leq(\nicefrac{{3}}{{2}})\left\{\lambda^{2}R^{2}+\left(\tau\Omega+ \frac{2\text{f}_{1}}{3}\right)^{2}\right\}^{1/2}\|[\mathbf{\Delta}_{\mathbf{\mathrm{B}} ^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}.\]
From the above display, (126) and condition (ii), we obtain
\[\|[\mathbf{\Delta_{\mathbf{B}^{*}}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{ \Pi}\leq(\nicefrac{{3}}{{2}})\frac{\sqrt{\lambda^{2}R^{2}+(\tau\Omega+(\nicefrac{ {2t_{1}}}{{3}}))^{2}}}{\mathsf{c}_{1}^{2}}.\]
Finally, (128) and the previous display imply
\[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta_{\mathbf{B}^{*}}})+\tau\|\bm {\Delta}^{\hat{\mathbf{\theta}}}\|_{\sharp} \leq\frac{3}{\mathsf{c}_{1}^{2}}\sqrt{\lambda^{2}R^{2}+(\tau\Omega+ (\nicefrac{{2t_{1}}}{{3}}))^{2}}\sqrt{4\lambda^{2}R^{2}+(2\tau\Omega+\mathsf{ f}_{1})^{2}}\] \[\leq\frac{1}{\mathsf{c}_{1}^{2}}[9\lambda^{2}R^{2}+(3\tau\Omega+2 \mathsf{f}_{1})^{2}].\]
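To make the use of condition (ii) explicit in this case (spelling out a step that is only implicit above): since \([\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathbb{C}\), condition (ii) together with (126) and the previous chain gives
\[\mathsf{c}_{1}^{2}\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}^{2}\leq\|\mathfrak{M}^{(n)}(\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}})\|_{2}^{2}\leq\mathsf{f}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\triangle\leq(\nicefrac{{3}}{{2}})\left\{\lambda^{2}R^{2}+\left(\tau\Omega+\frac{2\mathsf{f}_{1}}{3}\right)^{2}\right\}^{1/2}\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi},\]
and dividing by \(\mathsf{c}_{1}^{2}\,\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}\) recovers the bound on \(\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}\) obtained above.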
**Case 4.2:**: \(4\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{\Delta_{\mathbf{B}^{*}}}) )<\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}^{\perp}(\mathbf{\Delta_{\mathbf{ B}^{*}}}))\).
As \([\mathbf{\Delta_{\mathbf{B}^{*}}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\in\mathscr{C}\), relation (129) holds. This fact implies that
\[\mathsf{f}_{1}\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}+\triangle\leq(\nicefrac{{3\lambda}}{{2}})\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{\Delta_{\mathbf{B}^{*}}}))+(\nicefrac{{3\tau\Omega}}{{2}}+\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\leq(6\tau\Omega+4\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}.\]
The above display, (126) and condition (ii) yield
\[\|[\mathbf{\Delta_{\mathbf{B}^{*}}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}\leq \frac{6\tau\Omega+4\mathsf{f}_{1}}{\mathsf{c}_{1}^{2}}.\]
Finally, (130) and the previous display imply
\[\lambda\mathcal{R}^{(p)}(\mathbf{\Delta_{\mathbf{B}^{*}}})+\tau\|\mathbf{\Delta}^{\hat {\mathbf{\theta}}}\|_{\sharp}\leq\frac{(16\tau\Omega+10\mathsf{f}_{1})^{2}}{ \mathsf{c}_{1}^{2}}.\]
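Analogously to Case 4.1, the constant \(6\tau\Omega+4\mathsf{f}_{1}\) is just arithmetic from (129):
\[(\nicefrac{{3\lambda}}{{2}})\mathcal{R}^{(p)}(\mathcal{P}_{\mathbf{B}^{*}}(\mathbf{\Delta}_{\mathbf{B}^{*}}))\leq(\nicefrac{{3}}{{2}})(3\tau\Omega+2\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2},\]
and adding \((\nicefrac{{3\tau\Omega}}{{2}}+\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\) gives \((6\tau\Omega+4\mathsf{f}_{1})\|\mathbf{\Delta}^{\hat{\mathbf{\theta}}}\|_{2}\); combined with (126) and condition (ii) as in Case 4.1, this yields the bound \(\|[\mathbf{\Delta}_{\mathbf{B}^{*}},\mathbf{\Delta}^{\hat{\mathbf{\theta}}}]\|_{\Pi}\leq(6\tau\Omega+4\mathsf{f}_{1})/\mathsf{c}_{1}^{2}\) displayed above.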
To finish, we note that the bounds in the statement of the theorem are the maximum of all the bounds established in the above cases.
### Proof of Theorem 5
We now set \(\mathcal{R}:=\|\cdot\|_{N}\). We have
\[\tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}})=\sqrt{p}\,\mathbb{E}\left[\mathscr{G}\left(\mathfrak{X}\left(\mathbb{B}_{\|\cdot\|_{N}}\right)\right)\right]\leq\sqrt{p}\,\mathbb{E}\left[\left\|\sum_{i\in[n]}g_{i}\mathbf{X}_{i}\right\|_{\mathrm{op}}\right],\]
with \(\{g_{i}\}_{i\in[n]}\) iid \(\mathcal{N}(0,1)\) independent of \(\{\mathbf{X}_{i}\}_{i\in[n]}\). The assumptions of Lemma 39 apply to \(\eta=\mathcal{N}(0,1)\) so integration yields
\[\frac{\tilde{\mathcal{G}}(\mathbb{B}_{\mathcal{R}})}{n}\lesssim\max\left\{ \sqrt{\frac{p\max_{k,\ell}\{R_{k},C_{\ell}\}}{n}(\log d)},\frac{\sqrt{p}}{n}( \log m)^{1/2}(\log d)\right\}.\]
As before, Proposition E.2 in [6] implies that \(\mathscr{G}(\mathbb{B}_{\|\cdot\|_{\sharp}}^{n})\lesssim 1\).
Fix \(\delta\in(0,1)\). Define the numbers \(\mathsf{h}_{3}\asymp\frac{1}{\sqrt{n}}\) and
\[\mathsf{h}_{1}=\mathsf{h}_{2}\asymp\sqrt{p\max_{k,\ell}\{R_{k},C_{\ell}\}\frac {\log d}{n}}\bigvee\sqrt{p}\log^{1/2}(m)\frac{\log d}{n}\bigvee\sqrt{\frac{1+ \log(1/\delta)}{n}},\]
and the cone
\[\bar{\mathbb{C}}(\delta):=\left(\mathfrak{C}\left(\mathsf{h}_{1}\right)\times\mathbb{R}^{n}\right)\bigcap\left(\mathscr{C}_{\mathcal{R}^{(p)}}\left(\mathsf{h}_{2}\right)\times\mathbb{R}^{n}\right)\bigcap\mathsf{C}_{\|\cdot\|_{\sharp}}\left(\mathsf{h}_{3}\right).\]
By Proposition 10, there is a universal constant \(c_{1}\in(0,1)\) such that, on an event \(\mathcal{E}_{1}\) of probability \(\geq 1-\delta/2\),
\[\inf_{[\mathbf{V},\mathbf{u}]\in\mathbb{C}(\delta)}\frac{\|\mathfrak{M}^{(n)}( \mathbf{V},\mathbf{u})\|_{2}}{\|[\mathbf{V},\mathbf{u}]\|_{\Pi}}\geq c_{1}.\]
Take
\[\mathsf{f}_{1} \asymp\sigma\frac{1+\sqrt{\log(2/\delta)}}{\sqrt{n}},\] \[\mathsf{f}_{2} \asymp\max\left\{\sigma_{\xi}\sqrt{p\max_{k,\ell}\{R_{k},C_{\ell}\}\frac{\log(4/\delta)+\log d}{n}},\;\sqrt{p}\,\sigma\log^{1/2}\left(\frac{\sigma m}{\sigma_{\xi}}\right)\frac{\log(4/\delta)+\log d}{n}\right\},\] \[\mathsf{f}_{4} \asymp\frac{\sigma}{\sqrt{n}}.\]
By Proposition 11, on an event \(\mathcal{E}_{2}\) of probability \(\geq 1-\delta/2\), \(\operatorname{MP}_{\|\cdot\|_{N},\|\cdot\|_{1}}(\mathsf{f}_{1},\mathsf{f}_{2},\mathsf{f}_{4})\) is satisfied.
The proof will now hold on the event \(\mathcal{E}_{1}\cap\mathcal{E}_{2}\) of probability \(\geq 1-\delta\). With the tuning parameters \(\lambda\asymp(\nicefrac{{\mathsf{f}_{2}}}{{\sigma}})(\mathsf{a}\vee\sigma)\) and \(\tau\asymp(\nicefrac{{\mathsf{f}_{4}}}{{\sigma}})(\mathsf{a}\vee\sigma)\) stated in Theorem 5, all conditions of Theorem 4 hold on such event, implying the claimed rates, noting that \(R\leq\sqrt{r}\mu(\mathbf{B}^{*})\) with
\[\mu(\mathbf{B}^{*}):=\mu^{(p)}\left(\mathcal{C}_{\mathbf{B}^{*},\|\cdot\|_{N} ^{(p)}}(4)\right)=\mu^{(p)}\left(\mathcal{C}_{\mathbf{B}^{*},\|\cdot\|_{N}}(4 )\right),\]
and that \(\mathcal{L}\lesssim 1\), since \(\mathsf{h}_{2}/\lambda\lesssim\frac{1}{\mathsf{a}\vee\sigma}\) and \(\mathsf{h}_{3}/\tau\lesssim\frac{1}{\mathsf{a}\vee\sigma}\).
## Appendix A Peeling lemmas
This section presents several peeling lemmas. Peeling is a well-known technique in empirical process theory used to lift confidence statements from a compact subset to the entire set. Throughout this section, \(g,\bar{g}\) are right-continuous, non-decreasing functions from \(\mathbb{R}_{+}\) to \(\mathbb{R}_{+}\), \(V\) is an arbitrary set and \(h,\bar{h}\) are functions from \(V\) to \(\mathbb{R}_{+}\). We let \(g^{-1}\) be the generalized inverse of \(g\) defined by \(g^{-1}(x)=\inf\{a\in\mathbb{R}_{+}:g(a)\geq x\}\); we use the same notation for the generalized inverse of \(\bar{g}\). \(\mathbb{N}^{*}\) is the set of natural numbers excluding zero. In what follows, \(C>0\) is a universal constant that may change within the text.
### Peeling for \(\operatorname{PP}\)
**Lemma 42**.: _Let \(b>0\) be a constant and \(c\geq 1\) be a universal constant. Assume that, for every \(r,\bar{r}>0\) and every \(\delta\in(0,1/c]\), the event \(A(r,\bar{r},\delta)\) defined by the inequality_
\[\inf_{\mathbf{v}\in V:(h,\bar{h})(\mathbf{v})\leq(r,\bar{r})}M(\mathbf{v})\geq-\frac{b}{n }g(r)\bar{g}(\bar{r})-\frac{b}{\sqrt{n}}(g(r)+\bar{g}(\bar{r}))-b\left(\sqrt {\frac{\log(1/\delta)}{n}}+\frac{\log(1/\delta)}{n}\right),\]
_has probability at least \(1-c\delta\)._
_Then, for every \(\delta\in(0,1/c]\), with probability at least \(1-c\delta\), it holds that, for all \(\mathbf{v}\in V\),_
\[M(\mathbf{v}) \geq-C\frac{b}{n}g\circ h(\mathbf{v})\cdot\bar{g}\circ\bar{h}(\mathbf{v} )-Cb\left(\frac{1}{n}+\frac{1}{\sqrt{n}}\right)g\circ h(\mathbf{v})-Cb\left(\frac {1}{n}+\frac{1}{\sqrt{n}}\right)\bar{g}\circ\bar{h}(\mathbf{v})\] \[-Cb\left(\frac{1}{\sqrt{n}}+\frac{1}{n}\right)-Cb\left(\sqrt{ \frac{\log(1/\delta)}{n}}+\frac{\log(1/\delta)}{n}\right).\]
Proof.: Let \(\mu>0\) and \(\eta,\epsilon>1\) be parameters to be chosen later on. We set \(\mu_{0}=0\) and, for \(k\geq 1\), \(\mu_{k}:=\mu\eta^{k-1}\). We may partition the set \(V\) into the sets
\[V_{k,\bar{k}}:=\{\mathbf{v}\in V:\mu_{k}\leq(g\circ h)(\mathbf{v})<\mu_{k+1},\mu_{\bar{ k}}\leq(\bar{g}\circ\bar{h})(\mathbf{v})<\mu_{\bar{k}+1}\},\]
defined for \(k,\bar{k}\in\mathbb{N}\). Given \(m\in\mathbb{N}^{\star}\), we set \(\nu_{m}:=g^{-1}(\mu_{m})\) and \(\bar{\nu}_{m}:=\bar{g}^{-1}(\mu_{m})\). Clearly, \(g(\nu_{m})=\mu_{m}\) and \(\bar{g}(\bar{\nu}_{m})=\mu_{m}\).
Fix \(\delta\in(0,1/c]\). A union bound and the fact that \(\sum_{k\geq 1,\bar{k}\geq 1}(k\bar{k})^{-1-\epsilon}\leq(1+\epsilon^{-1})^{2}\) imply that the event
\[A:=\bigcap_{k\geq 1,\bar{k}\geq 1}^{\infty}A\left(\nu_{k},\bar{\nu}_{\bar{k}}, \frac{\epsilon^{2}\delta}{(1+\epsilon)^{2}(k\bar{k})^{1+\epsilon}}\right),\]
has probability at least \(1-c\delta\).
To ease notation, define \(\triangle(\delta):=\log\{(1+\epsilon)^{2}/(\epsilon^{2}\delta)\}\) and \(\triangle_{k,\bar{k}}:=(1+\epsilon)\log(k\bar{k})\). We assume in the sequel that the event \(A\) is realized, that is,
\[\forall k,\bar{k}\in\mathbb{N}^{\star} \begin{cases}\forall\mathbf{v}\in V\text{ such that }(h,\bar{h})(\mathbf{v})\leq(\nu_{k},\bar{\nu}_{\bar{k}})\text{ we have}\\ M(\mathbf{v})\geq-\frac{b}{n}g(\nu_{k})\bar{g}(\bar{\nu}_{\bar{k}})-\frac{b}{\sqrt{n}}g(\nu_{k})-\frac{b}{\sqrt{n}}\bar{g}(\bar{\nu}_{\bar{k}})\\ -(b/\sqrt{n})\sqrt{\triangle(\delta)+\triangle_{k,\bar{k}}}-(b/n)[\triangle(\delta)+\triangle_{k,\bar{k}}].\end{cases} \tag{131}\]
For every \(\mathbf{v}\in V\), there are \(\ell,\bar{\ell}\in\mathbb{N}\) such that \(\mathbf{v}\in V_{\ell,\bar{\ell}}\). We now consider several cases.
**Case 1:**: \(\ell=\bar{\ell}=0\). Since \(h(\mathbf{v})\leq\nu_{1}\), \(\bar{h}(\mathbf{v})\leq\bar{\nu}_{1}\), (131) with \(k=\bar{k}=1\) leads to
\[M(\mathbf{v})\geq-\frac{b}{n}\mu^{2}-\frac{b}{\sqrt{n}}\mu-\frac{b}{\sqrt{n}}\mu- \frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}-\frac{b}{n}\triangle(\delta).\]
**Case 2:**: \(\ell\geq 1\) and \(\bar{\ell}\geq 1\). Using that \(h(\mathbf{v})\leq\nu_{\ell+1}\) and \(\bar{h}(\mathbf{v})\leq\bar{\nu}_{\bar{\ell}+1}\), we will invoke (131) for the indexes \((k,\bar{k})=(\ell+1,\bar{\ell}+1)\). Additionally, we observe that, since \(\mathbf{v}\in V_{\ell,\bar{\ell}}\) and \(\mu\eta^{\ell+1}=\eta^{2}\mu_{\ell}\),
\[g(\nu_{\ell+1})=\mu\eta^{\ell}=\mu\eta^{\ell+1}+\mu\eta^{\ell}-\mu\eta^{\ell+1}\leq\eta^{2}g\circ h(\mathbf{v})+\mu\eta^{\ell}-\mu\eta^{\ell+1}. \tag{132}\]
Similarly,
\[\bar{g}(\bar{\nu}_{\bar{\ell}+1}) \leq\eta^{2}\bar{g}\circ\bar{h}(\mathbf{v})+\mu\eta^{\bar{\ell}}-\mu\eta^{\bar{\ell}+1}, \tag{133}\] \[g(\nu_{\ell+1})\cdot\bar{g}(\bar{\nu}_{\bar{\ell}+1}) \leq\eta^{4}g\circ h(\mathbf{v})\cdot\bar{g}\circ\bar{h}(\mathbf{v})+\mu^{2}\eta^{\ell}\eta^{\bar{\ell}}-\mu^{2}\eta^{\ell+1}\eta^{\bar{\ell}+1}. \tag{134}\]
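The common ingredient in (132)-(134) is the membership \(\mathbf{v}\in V_{\ell,\bar{\ell}}\), which gives \(g\circ h(\mathbf{v})\geq\mu_{\ell}=\mu\eta^{\ell-1}\), hence
\[\mu\eta^{\ell+1}=\eta^{2}\mu\eta^{\ell-1}=\eta^{2}\mu_{\ell}\leq\eta^{2}\,g\circ h(\mathbf{v}),\]
and similarly \(\mu\eta^{\bar{\ell}+1}\leq\eta^{2}\,\bar{g}\circ\bar{h}(\mathbf{v})\); multiplying these two bounds produces the \(\eta^{4}\) term in (134).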
Next, we will use the previous bounds in (131). To ease notation, we define the quantity
\[\lozenge_{\ell,\bar{\ell}} :=\frac{b}{n}\mu^{2}(\eta^{\ell}\eta^{\bar{\ell}}-\eta^{\ell+1} \eta^{\bar{\ell}+1})+\frac{b}{\sqrt{n}}\mu(\eta^{\ell}-\eta^{\ell+1})+\frac{b} {\sqrt{n}}\mu(\eta^{\bar{\ell}}-\eta^{\bar{\ell}+1})\] \[+\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)+\triangle_{\ell+1,\bar{ \ell}+1}}-\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}\] \[+\frac{b}{n}[\triangle(\delta)+\triangle_{\ell+1,\bar{\ell}+1}]- \frac{b}{n}\triangle(\delta).\]
In conclusion, from (131) with \((k,\bar{k})=(\ell+1,\bar{\ell}+1)\) and the bounds (132), (133) and (134), we may write
\[M(\mathbf{v})+\lozenge_{\ell,\bar{\ell}} \geq-\frac{b}{n}\eta^{4}g\circ h(\mathbf{v})\cdot\bar{g}\circ\bar{h}( \mathbf{v})-\frac{b}{\sqrt{n}}\eta^{2}g\circ h(\mathbf{v})-\frac{b}{\sqrt{n}}\eta^{2} \bar{g}\circ\bar{h}(\mathbf{v})\] \[-\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}-\frac{b}{n}\triangle( \delta).\]
To finish, we claim that, by appropriately fixing \(\mu>0\) and \(\epsilon,\eta>1\), we can show \(\sup_{\ell,\bar{\ell}\geq 1}\lozenge_{\ell,\bar{\ell}}\leq 0\). We prove this claim next.
First,
\[T_{1}(\ell,\bar{\ell}) :=\frac{b}{n}\triangle_{\ell+1,\bar{\ell}+1}+\frac{b\mu^{2}}{n}( \eta^{\ell}\eta^{\bar{\ell}}-\eta^{\ell+1}\eta^{\bar{\ell}+1})\] \[=\frac{b}{n}\eta^{\ell}\eta^{\bar{\ell}}\left[(1+\epsilon)\frac{ \log(\ell+1)+\log(\bar{\ell}+1)}{\eta^{\ell}\eta^{\bar{\ell}}}+\mu^{2}(1-\eta^ {2})\right].\]
Since, for \(\eta>1\),
\[\sup_{\ell,\bar{\ell}\geq 1}\frac{\log(\ell+1)+\log(\bar{\ell}+1)}{\eta^{\ell} \eta^{\bar{\ell}}}\leq C,\]
we can fix \(\epsilon>1\) and \(\mu>0\) and take \(\eta>1\) large enough such that \(\sup_{\ell,\bar{\ell}\geq 1}T_{1}(\ell,\bar{\ell})\leq 0\).
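As an illustration only (one admissible choice of parameters, not necessarily the constants intended by the authors): the supremum above is non-increasing in \(\eta\), so it is bounded by a universal constant \(C\) once, say, \(\eta\geq 2\); taking for instance \(\epsilon=2\) and \(\mu=1\),
\[T_{1}(\ell,\bar{\ell})\leq\frac{b}{n}\eta^{\ell}\eta^{\bar{\ell}}\left[3C+1-\eta^{2}\right]\leq 0\qquad\text{whenever }\eta\geq\max\{2,\sqrt{1+3C}\}.\]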
Second, we note that
\[\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)+\triangle_{\ell+1,\bar{\ell}+1}}- \frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}\leq\frac{b}{\sqrt{n}}\sqrt{ \triangle_{\ell+1,\bar{\ell}+1}}.\]
Hence,
\[T_{2}(\ell,\bar{\ell}) :=\frac{b}{\sqrt{n}}\mu(\eta^{\ell}-\eta^{\ell+1})+\frac{b}{\sqrt{n}}\mu(\eta^{\bar{\ell}}-\eta^{\bar{\ell}+1})\] \[+\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)+\triangle_{\ell+1,\bar{\ell}+1}}-\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}\] \[\leq\frac{b}{\sqrt{n}}\left[\sqrt{\log(\ell+1)}+\mu(\eta^{\ell}-\eta^{\ell+1})\right]\] \[+\frac{b}{\sqrt{n}}\left[\sqrt{\log(\bar{\ell}+1)}+\mu(\eta^{\bar{\ell}}-\eta^{\bar{\ell}+1})\right]\] \[=\frac{b}{\sqrt{n}}\eta^{\ell}\left[\frac{\sqrt{\log(\ell+1)}}{\eta^{\ell}}+\mu(1-\eta)\right]\] \[+\frac{b}{\sqrt{n}}\eta^{\bar{\ell}}\left[\frac{\sqrt{\log(\bar{\ell}+1)}}{\eta^{\bar{\ell}}}+\mu(1-\eta)\right].\]
Since, for \(\eta>1\), \(\sup_{\ell\geq 1}\frac{\sqrt{\log(\ell+1)}}{\eta^{\ell}}\leq C,\) again, we may fix \(\epsilon>1\) and \(\mu>0\) and take \(\eta>1\) large enough such that \(\sup_{\ell,\bar{\ell}\geq 1}T_{2}(\ell,\bar{\ell})\leq 0\).
We thus conclude that \(\sup_{\ell,\bar{\ell}\geq 1}\Diamond_{\ell,\bar{\ell}}\leq\sup_{\ell,\bar{ \ell}\geq 1}[T_{1}(\ell,\bar{\ell})+T_{2}(\ell,\bar{\ell})]\leq 0,\) as claimed.
**Case 3:**: \(\ell\geq 1\) and \(\bar{\ell}=0\). Since \(h(\mathbf{v})\leq\nu_{\ell+1}\) and \(\bar{h}(\mathbf{v})\leq\bar{\nu}_{1}\), we get from (131) with \((k,\bar{k})=(\ell+1,1)\) and (132),
\[M(\mathbf{v})+\Box_{\ell}\geq-\frac{b}{n}\mu\eta^{2}g\circ h(\mathbf{v})-\frac{b}{ \sqrt{n}}\eta^{2}g\circ h(\mathbf{v})-\frac{b}{\sqrt{n}}\mu-\frac{b}{\sqrt{n}} \sqrt{\triangle(\delta)}-\frac{b}{n}\triangle(\delta),\]
where we have defined
\[\Box_{\ell} :=\frac{b}{n}\mu^{2}(\eta^{\ell}-\eta^{\ell+1})+\frac{b}{\sqrt{n} }\mu(\eta^{\ell}-\eta^{\ell+1})\] \[+\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)+\triangle_{\ell+1,1}}- \frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}\] \[+\frac{b}{n}[\triangle(\delta)+\triangle_{\ell+1,1}]-\frac{b}{n} \triangle(\delta).\]
We claim that, by appropriately fixing \(\mu>0\) and \(\epsilon>1\) and taking \(\eta>1\) large enough, we can show \(\sup_{\ell\geq 1}\Box_{\ell}\leq 0.\) Indeed, the reasoning is analogous to the one in Case 2 so we omit it.
**Case 4:**: \(\ell=0\) and \(\bar{\ell}\geq 1\). By analogy with Case 3, using that \(h(\mathbf{v})\leq\nu_{1}\) and \(\bar{h}(\mathbf{v})\leq\bar{\nu}_{\bar{\ell}+1}\), we get from (131)
with \((k,\bar{k})=(1,\bar{\ell}+1)\) and (133),
\[M(\mathbf{v})+\bigcirc_{\bar{\ell}}\geq-\frac{b}{n}\mu\eta^{2}\bar{g}\circ\bar{h}(\bm {v})-\frac{b}{\sqrt{n}}\mu-\frac{b}{\sqrt{n}}\eta^{2}\bar{g}\circ\bar{h}(\mathbf{v}) -\frac{b}{\sqrt{n}}\sqrt{\triangle(\delta)}-\frac{b}{n}\triangle(\delta),\]
where the term \(\bigcirc_{\bar{\ell}}\) can be shown to satisfy \(\sup_{\bar{\ell}\geq 1}\bigcirc_{\bar{\ell}}\leq 0,\) for fixed \(\mu>0\) and \(\epsilon>1\) and large enough \(\eta>1\).
The proof of the statement of the lemma is finished once we join the lower bounds established in the four cases.
### Peeling for \(\mathrm{IP}\)
The following lemma is a restatement of Lemma 6 in [30]. The proof follows by similar arguments used in the proof of Lemma 42.
**Lemma 43**.: _Let \(b>0\) be a constant and \(c\geq 1\) be a universal constant. Assume that, for every \(r,\bar{r}>0\) and every \(\delta\in(0,1/c]\), the event \(A(r,\bar{r},\delta)\) defined by the inequality_
\[\inf_{\mathbf{v}\in V:(h,\bar{h})(\mathbf{v})\leq(r,\bar{r})}M(\mathbf{v})\geq-\frac{b}{ \sqrt{n}}g(r)-\frac{b}{\sqrt{n}}\bar{g}(\bar{r})-\frac{b}{\sqrt{n}}\sqrt{\log (2/\delta)},\]
_has probability at least \(1-c\delta\)._
_Then, for every \(\delta\in(0,1/c]\), with probability at least \(1-c\delta\), it holds that, for all \(\mathbf{v}\in V\),_
\[M(\mathbf{v})\geq-\frac{Cb}{\sqrt{n}}g\circ h(\mathbf{v})-\frac{Cb}{\sqrt{n}}\bar{g} \circ\bar{h}(\mathbf{v})-\frac{Cb}{\sqrt{n}}(1+\sqrt{\log(1/\delta)}).\]
### Peeling for \(\mathrm{MP}\)
To prove Proposition 6, we will use Lemma 43 (with \(\bar{g}=\bar{h}\equiv 0\)) and the following lemma.
**Lemma 44**.: _Let \(b>0\) be a constant and \(c\geq 1\) be a universal constant. Assume that for every \(r>0\) and every \(\delta\in(0,1/c]\), the event \(A(r,\delta)\) defined by the inequality_
\[\inf_{\mathbf{v}\in V:h(\mathbf{v})\leq r}M(\mathbf{v}) \geq-\left[1+\frac{1}{\sqrt{n}}\sqrt{\log(1/\delta)}\right]b \frac{g(r)}{\sqrt{n}}\] \[-\frac{b}{\sqrt{n}}\sqrt{\log(1/\delta)}-\frac{b}{n}\log(1/\delta),\]
_has probability at least \(1-c\delta\)._
_Then, with probability at least \(1-c\delta\), we have that, for all \(\mathbf{v}\in V\),_
\[M(\mathbf{v}) \geq-C\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle( \delta)}]\frac{b}{\sqrt{n}}+\frac{b}{n}\right\}g\circ h(\mathbf{v})\] \[-C(\nicefrac{{b}}{{\sqrt{n}}})[1+\sqrt{\triangle(\delta)}]-C( \nicefrac{{b}}{{n}})[\triangle(\delta)+\sqrt{\triangle(\delta)}].\]
Proof.: Let \(\mu>0\) and \(\eta,\epsilon>1\) be parameters to be chosen later on. We set \(\mu_{0}=0\) and, for \(k\geq 1\), \(\mu_{k}:=\mu\eta^{k-1}\). Define, for \(k\in\mathbb{N}\), the sets
\[V_{k}:=\{\mathbf{v}\in V:\mu_{k}\leq(g\circ h)(\mathbf{v})<\mu_{k+1}\}.\]
For \(k\in\mathbb{N}^{\star}\), we set \(\nu_{k}:=g^{-1}(\mu_{k})\).
A union bound and the fact that \(\sum_{k\geq 1}k^{-1-\epsilon}\leq 1+\epsilon^{-1}\) imply that the event
\[A:=\bigcap_{k=1}^{\infty}A\left(\nu_{k},\frac{\epsilon\delta}{(1+\epsilon)k^{1 +\epsilon}}\right),\]
has probability at least \(1-c\delta\). For convenience, we define \(\triangle(t):=\log\{(1+\epsilon)/(\epsilon t)\}\) and \(\triangle_{k}:=(1+\epsilon)\log k\). Throughout the proof, assume that this event is realized:
\[\forall k\in\mathbb{N}^{*} \begin{cases}\forall\mathbf{v}\in V\text{ such that }h(\mathbf{v})\leq\nu_{k}\text{ we have}\\ M(\mathbf{v})\geq-[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)+\triangle_{k}}]\,b\frac{g(\nu_{k})}{\sqrt{n}}-(\nicefrac{{b}}{{\sqrt{n}}})\sqrt{\triangle(\delta)+\triangle_{k}}-(\nicefrac{{b}}{{n}})[\triangle(\delta)+\triangle_{k}].\end{cases} \tag{135}\]
For every \(\mathbf{v}\in V\), there is \(\ell\in\mathbb{N}\) such that \(\mathbf{v}\in V_{\ell}\).
**Case 1:**: \(\ell=0\). In that case, (135) with \(k=1\) and using \(g(\nu_{1})=\mu\) lead to
\[M(\mathbf{v})\geq-[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}](\nicefrac{{b\mu}}{{\sqrt{n}}})-(\nicefrac{{b}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}-(\nicefrac{{b}}{{n}})\triangle(\delta).\]
**Case 2:**: \(\ell\geq 1\). Since \(v\in V_{\ell}\), \(h(\mathbf{v})\leq\nu_{\ell+1}\), so we can invoke (135) for \(k=\ell+1\).
First, we make some observations. Recall that \(g(\nu_{\ell+1})=\mu\eta^{\ell}\). Since \(\eta>1\), there is a universal constant \(C\geq 1\) such that
\[\sqrt{\triangle_{\ell+1}}\mu\eta^{\ell}\leq C\sqrt{(1+\epsilon)}\mu\eta^{\ell +1}\leq C\sqrt{(1+\epsilon)}\eta^{2}g\circ h(\mathbf{v}).\]
Additionally,
\[\mu\eta^{\ell}=\mu\eta^{\ell+1}+\mu(\eta^{\ell}-\eta^{\ell+1})\leq\eta^{2}g \circ h(\mathbf{v})+\mu(\eta^{\ell}-\eta^{\ell+1}).\]
We thus conclude that
\[[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)+ \triangle_{\ell+1}}]b\frac{g(\nu_{\ell+1})}{\sqrt{n}} \leq[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}] \frac{b\mu\eta^{\ell}}{\sqrt{n}}+\sqrt{\triangle_{\ell+1}}\frac{b\mu\eta^{\ell }}{n}\] \[\leq[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}] \frac{b\eta^{2}}{\sqrt{n}}g\circ h(\mathbf{v})\] \[+[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}]\frac{b \mu}{\sqrt{n}}(\eta^{\ell}-\eta^{\ell+1})\] \[+C\sqrt{(1+\epsilon)}\frac{\eta^{2}b}{n}g\circ h(\mathbf{v}). \tag{136}\]
In order to use the previous bound in (135), it will be convenient to define the quantity:
\[\Diamond_{\ell} :=[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}]\frac{ b\mu}{\sqrt{n}}(\eta^{\ell}-\eta^{\ell+1})\] \[+(\nicefrac{{b}}{{\sqrt{n}}})\sqrt{\triangle(\delta)+\triangle_{ \ell+1}}-(\nicefrac{{b}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}\] \[+(\nicefrac{{b}}{{n}})[\triangle(\delta)+\triangle_{\ell+1}]-( \nicefrac{{b}}{{n}})\triangle(\delta).\]
In conclusion, from (135) (with \(k=\ell+1\)) and (136), we obtain
\[M(\mathbf{v})+\Diamond_{\ell} \geq-\left\{[1+(\nicefrac{{1}}{{\sqrt{n}}})\sqrt{\triangle( \delta)}]\frac{b\eta^{2}}{\sqrt{n}}+C\sqrt{(1+\epsilon)}\frac{\eta^{2}b}{n} \right\}g\circ h(\mathbf{v})\] \[-(\nicefrac{{b}}{{\sqrt{n}}})\sqrt{\triangle(\delta)}-( \nicefrac{{b}}{{n}})\triangle(\delta).\]
Next, we claim that, by appropriately fixing \(\mu>0\) and \(\epsilon,\eta>1\), we can show
\[\sup_{\ell\geq 1}\Diamond_{\ell}\leq 0. \tag{137}\]
We present the proof of the claim next.
First, since \(\triangle(\delta)\geq 1\),
\[T_{1}(\ell) :=\sqrt{\triangle(\delta)}\frac{b\mu}{n}(\eta^{\ell}-\eta^{\ell+1})+ \frac{b}{n}\triangle_{\ell+1}\] \[\leq\sqrt{\triangle(\delta)}\frac{b}{n}\left[\mu(\eta^{\ell}-\eta^ {\ell+1})+\log(\ell+1)\right]\] \[\leq\sqrt{\triangle(\delta)}\frac{b}{n}\eta^{\ell}\left[\mu(1- \eta)+\frac{\log(\ell+1)}{\eta^{\ell}}\right].\]
Since, for \(\eta>1\), \(\sup_{\ell\geq 1}\frac{\log(\ell+1)}{\eta^{\ell}}\leq C\), we may fix \(\epsilon>1\) and \(\mu>0\) and take \(\eta>1\) large enough such that \(\sup_{\ell\geq 1}T_{1}(\ell)\leq 0\).
Secondly,
\[T_{2}(\ell) :=\frac{b}{\sqrt{n}}\mu(\eta^{\ell}-\eta^{\ell+1})+\frac{b}{\sqrt {n}}\sqrt{\triangle(\delta)+\triangle_{\ell+1}}-\frac{b}{\sqrt{n}}\sqrt{ \triangle(\delta)}\] \[\leq\frac{b}{\sqrt{n}}\left[\sqrt{\log(\ell+1)}+\mu(\eta^{\ell}- \eta^{\ell+1})\right]\] \[\leq\frac{b}{\sqrt{n}}\eta^{\ell}\left[\frac{\sqrt{\log(\ell+1)} }{\eta^{\ell}}+\mu(1-\eta)\right].\]
Again, since, for \(\eta>1\), \(\sup_{\ell\geq 1}\frac{\sqrt{\log(\ell+1)}}{\eta^{\ell}}\leq C\), we may fix \(\epsilon>1\) and \(\mu>0\) and take \(\eta>1\) large enough such that \(\sup_{\ell\geq 1}T_{2}(\ell)\leq 0\).
In conclusion, since \(\lozenge_{\ell}=T_{1}(\ell)+T_{2}(\ell)\), we have proved (137).
Joining the lower bounds from both cases, we prove the statement of the lemma.
## Appendix B The Multiplier Process
In this section we prove Theorem 7. We refer to the definitions and notations in Section 8. Given \(f,g\in L_{\psi_{2}}\), we set \(\langle f,g\rangle_{n}:=\hat{\mathbf{P}}fg\) and \(\|f\|_{n}:=\sqrt{\langle f,f\rangle_{n}}\). We recall the Hölder-type inequality \(\|fg\|_{\psi_{1}}\leq\|f\|_{\psi_{2}}\|g\|_{\psi_{2}}\).
Our proof is inspired by Dirksen's method [40], which obtained concentration inequalities for the _quadratic process_. One key observation used by Dirksen [40] and Bednorz [5] is that one must bound the chain differently for \(k\leq\lfloor\log_{2}n\rfloor\), the so-called "subgaussian path", and \(k\geq\lfloor\log_{2}n\rfloor\), the "subexponential path". In bounding the multiplier process, we additionally introduce a "lazy-walked" chain, a technique already present in Talagrand's original bound for the _empirical process_ [95].
We first present some preliminary lemmas.
**Lemma 45**.: _Let \(f,f^{\prime}\in L_{\psi_{2}}\)._
_If for \(k\in\mathbb{N}\), \(2^{k/2}\leq\sqrt{n}\), then for any \(u\geq 1\), with probability at least \(1-2\exp(-(2^{k}+u))\),_
\[|M(f)-M(f^{\prime})|\leq\left[(1+\sqrt{2})\frac{2^{k/2}}{\sqrt{n}}+\sqrt{ \frac{2u}{n}}+\frac{u}{n}\right]\|\xi\|_{\psi_{2}}\|f-f^{\prime}\|_{\psi_{2}}.\]
_If for \(k\in\mathbb{N}\), \(2^{k/2}\geq\sqrt{n}\), then for any \(u\geq 1\), with probability at least \(1-2\exp(-(2^{k}+u))\),_
\[\|f-f^{\prime}\|_{n}\leq(\sqrt{u}+2^{k/2})\frac{[2(1+\sqrt{2})+1]^{1/2}}{ \sqrt{n}}\mathsf{d}(f,f^{\prime}).\]
Proof.: Suppose \(2^{\frac{k}{2}}\leq\sqrt{n}\). We first note that, by the triangle and Hölder-type inequalities for the norm \(\psi_{1}\), we have \(\xi f-\xi f^{\prime}\in L_{\psi_{1}}\). The Bernstein-type inequality implies that, for all \(v\geq 0\), with probability at least
\(1-2e^{-v}\),
\[|M(f)-M(f^{\prime})| =|\hat{\mathbf{P}}[(\xi f-\xi f^{\prime})-\mathbf{P}(\xi f-\xi f^{ \prime})]|\] \[\leq\|\xi f-\xi f^{\prime}-\mathbf{P}(\xi f-\xi f^{\prime})\|_{ \psi_{1}}\left(\sqrt{\frac{2v}{n}}+\frac{v}{n}\right).\]
Taking \(v:=2^{k}+u\) and using that \(2^{\frac{k}{2}}\leq\sqrt{n}\), we get
\[\sqrt{\frac{2v}{n}}+\frac{v}{n}\leq(1+\sqrt{2})\frac{2^{k/2}}{\sqrt{n}}+\sqrt {\frac{2u}{n}}+\frac{u}{n},\]
establishing the first inequality claimed in the lemma.
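To verify the elementary step above with \(v:=2^{k}+u\) (pure arithmetic, spelled out for completeness): by subadditivity of the square root and \(2^{k}\leq n\),
\[\sqrt{\frac{2(2^{k}+u)}{n}}+\frac{2^{k}+u}{n}\leq\sqrt{2}\,\frac{2^{k/2}}{\sqrt{n}}+\sqrt{\frac{2u}{n}}+\frac{2^{k/2}}{\sqrt{n}}+\frac{u}{n}=(1+\sqrt{2})\frac{2^{k/2}}{\sqrt{n}}+\sqrt{\frac{2u}{n}}+\frac{u}{n},\]
where \(2^{k}/n\leq 2^{k/2}/\sqrt{n}\) uses \(2^{k/2}\leq\sqrt{n}\).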
Suppose now \(2^{k/2}\geq\sqrt{n}\). The second inequality is proved in [40], Lemma 5.4. We include the proof for completeness with slightly better constants. We first claim that, for any \(v\geq 1\), with probability at least \(1-2\exp(-nv)\),
\[\|f-f^{\prime}\|_{n}\leq[2(1+\sqrt{2})+1]^{1/2}\|f-f^{\prime}\|_{\psi_{2}} \sqrt{v}.\]
Indeed, by Bernstein's inequality, for any \(v\geq 1\), with probability at least \(1-2e^{-nv}\),
\[|\hat{\mathbf{P}}[(f-f^{\prime})^{2}-\mathbf{P}(f-f^{\prime})^{2}]|\leq\|(f-f ^{\prime})^{2}-\mathbf{P}(f-f^{\prime})^{2}\|_{\psi_{1}}\left(\sqrt{2v}+v \right)\leq 2(1+\sqrt{2})\|(f-f^{\prime})^{2}\|_{\psi_{1}}v.\]
For \(v\geq 1\), we also have \(|\mathbf{P}(f-f^{\prime})^{2}|\leq\|(f-f^{\prime})^{2}\|_{\psi_{1}}\leq\|(f-f ^{\prime})^{2}\|_{\psi_{1}}v\). This, triangle inequality, \(\|(f-f^{\prime})^{2}\|_{\psi_{1}}=\|f-f^{\prime}\|_{\psi_{2}}^{2}\) and the display imply the claim.
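Written out, the chain compressed in the last sentence reads (nothing beyond the two displayed facts is used):
\[\|f-f^{\prime}\|_{n}^{2}=\hat{\mathbf{P}}(f-f^{\prime})^{2}\leq\big{|}\hat{\mathbf{P}}[(f-f^{\prime})^{2}-\mathbf{P}(f-f^{\prime})^{2}]\big{|}+\mathbf{P}(f-f^{\prime})^{2}\leq[2(1+\sqrt{2})+1]\,\|f-f^{\prime}\|_{\psi_{2}}^{2}\,v,\]
and taking square roots gives the claim.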
Now, since \(2^{k/2}\geq\sqrt{n}\), setting \(v:=2^{k}u\frac{1}{n}\geq 1\) we get \(\exp(-nv)=\exp(-2^{k}u)\), entailing the claim.
The following result is a straightforward modification of Lemma A.4 by Dirksen [40].
**Lemma 46**.: _Fix \(1\leq p<\infty\), \(u\geq 2\) and set \(\ell:=\lfloor\log_{2}p\rfloor\). For every \(k>\ell\) let \((\Omega_{i}^{(k)})_{i\in I_{k}}\) be a collection of events satisfying_
\[\mathbb{P}(\Omega_{i}^{(k)})\leq 2\exp(-(2^{k}+u)),\quad\text{for all}\ i\in I_{k}\]
_or_
\[\mathbb{P}(\Omega_{i}^{(k)})\leq 2\exp(-2^{k}u),\quad\text{for all}\ i\in I_{k}.\]
_If \(|I_{k}|\leq 2^{2^{k}+1}\), then for an absolute constant \(c>0\),_
\[\mathbb{P}\left(\cup_{k>\ell}\cup_{i\in I_{k}}\Omega_{i}^{(k)}\right)\leq c \exp(-pu/4).\]
Proof of Theorem 7.: Let \((F_{k})\) be an optimal admissible sequence for \(\gamma_{2}(F)\). Let \((\mathcal{F}_{k})\) be defined by \(\mathcal{F}_{0}:=F_{0}\) and \(\mathcal{F}_{k}:=\cup_{j\leq k}F_{j}\) so that \(|\mathcal{F}_{k}|\leq 2|F_{k}|=2^{2^{k}+1}\). Set \(k_{0}:=\min\{k\geq 1:2^{k/2}>\sqrt{n}\}\) and let us define \(\mathcal{I}:=\{k\in\mathbb{N}:\ell<k<k_{0}\}\) and \(\mathcal{J}:=\{k\in\mathbb{N}:k\geq k_{0}\}\). Given \(k\in\mathbb{N}\) and \(f\in F\), let \(\Pi_{k}(f)\in\operatorname*{argmin}_{f^{\prime}\in\mathcal{F}_{k}}\mathsf{d}(f,f^{\prime})\). Given \(f\in F\), we take some \(\Pi_{0}(f)\in F\) and, for any \(j\in\mathbb{N}\), we define the "lazy walk" chain selection by:
\[k_{j}(f):=\inf\left\{k\geq k_{j-1}(f):\mathsf{d}(f,\Pi_{k}(f))\leq\frac{1}{2}\mathsf{d}(f,\Pi_{k_{j-1}(f)}(f))\right\}.\]
For simplicity of notation, we will set \(\pi_{j}(f):=\Pi_{k_{j}(f)}(f)\). For \(f\in F\), our proof will rely on the chain:
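A consequence of the lazy-walk selection worth recording (it is what drives the geometric-series bound (143) below): each selected level at least halves the distance to \(f\), so
\[\mathsf{d}(f,\pi_{j}(f))\leq\tfrac{1}{2}\,\mathsf{d}(f,\pi_{j-1}(f))\leq 2^{-j}\,\mathsf{d}(f,\pi_{0}(f)).\]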
\[M(f)-M(\pi_{0}(f))=\sum_{j:k_{j}(f)\in\mathcal{J}}\left[M(\pi_{j+1}(f))-M(\pi_ {j}(f))\right]+\sum_{j:k_{j}(f)\in\mathcal{I}}\left[M(\pi_{j}(f))-M(\pi_{j-1}(f ))\right], \tag{138}\]
where we have used that \(\cup_{k\geq 0}\mathcal{F}_{k}\) is dense on \(F\).
Fix \(u\geq 1\). Given any \(k\in\mathbb{N}\), define the event \(\Omega_{k,\mathcal{I},u}\) for which, for all \(f,f^{\prime}\in\mathcal{F}_{k}\), we have
\[|M(f)-M(f^{\prime})|\leq\left[(1+\sqrt{2})\frac{2^{k/2}}{\sqrt{n}}+\sqrt{\frac {2u}{n}}+\frac{u}{n}\right]\|\xi\|_{\psi_{2}}\|f-f^{\prime}\|_{\psi_{2}}. \tag{139}\]
Define also the event \(\Omega_{k,\mathcal{J},u}\) for which, for all \(f,f^{\prime}\in\mathcal{F}_{k}\), we have
\[\|f-f^{\prime}\|_{n}\leq(\sqrt{u}+2^{k/2})\frac{[2(1+\sqrt{2})+1]^{1/2}}{\sqrt {n}}\|f-f^{\prime}\|_{\psi_{2}}. \tag{140}\]
For simplicity, define the vector \(\boldsymbol{\xi}:=(\xi_{i})_{i\in[n]}\) and \(\|\boldsymbol{\xi}\|_{n}:=\frac{1}{\sqrt{n}}\|\boldsymbol{\xi}\|_{2}\). Given \(v\geq 1\), we define the event \(\Omega_{\xi,v}\), for which
\[\|\boldsymbol{\xi}\|_{n}\leq[2(1+\sqrt{2})+1]^{1/2}\|\xi\|_{\psi_{2}}\sqrt{v}. \tag{141}\]
The event \(\Omega_{k,\mathcal{I},u}\) is obtained by a union bound over all possible pairs \((\pi_{k-1}(f),\pi_{k}(f))\), of which there are at most \(|\mathcal{F}_{k-1}||\mathcal{F}_{k}|\leq 2^{2^{k+1}}\). If \(\Omega_{\mathcal{I},u}:=\cap_{k\in\mathcal{I}}\Omega_{k,\mathcal{I},u}\), the first bound in Lemma 45 for \(k\in\mathcal{I}\) and Lemma 46 imply that there is a universal constant \(c>0\) such that
\[\mathbb{P}(\Omega_{\mathcal{I},u}^{c})\leq ce^{-u/4}.\]
Similarly, the second bound in Lemma 45 for \(k\in\mathcal{J}\) and Lemma 46 imply that for the event \(\Omega_{\mathcal{J},u}:=\cap_{k\in\mathcal{J}}\Omega_{k,\mathcal{J},u}\), we have
\[\mathbb{P}(\Omega_{\mathcal{J},u}^{c})\leq ce^{-u/4}.\]
Using Bernstein's inequality for \(\{\xi_{i}\}_{i\in[n]}\) we get \(\mathbb{P}(\Omega_{\xi,v}^{c})\leq ce^{-vn}\). Hence, the event \(\Omega_{u,v}:=\Omega_{\mathcal{I},u}\cap\Omega_{\mathcal{J},u}\cap\Omega_{\xi,v}\) has \(\mathbb{P}(\Omega_{u,v}^{c})\leq ce^{-u/4}+ce^{-vn}\).
We next fix \(u\geq 2\) and \(v\geq 1\) and assume throughout that \(\Omega_{u,v}\) holds. We now bound the chain over \(\mathcal{I}\) and \(\mathcal{J}\) separately.
Part 1: _The subgaussian path \(\mathcal{I}\)._ Given \(j\) such that \(k_{j}(f)\in\mathcal{I}\), since \(\pi_{j}(f),\pi_{j-1}(f)\in\mathcal{F}_{k_{j}(f)}\), we may apply (139) to \(k:=k_{j}(f)\) so that
\[|M(\pi_{j}(f))-M(\pi_{j-1}(f))|\leq\left[(1+\sqrt{2})\frac{2^{k_{j}(f)/2}}{ \sqrt{n}}+\sqrt{\frac{2u}{n}}+\frac{u}{n}\right]\|\xi\|_{\psi_{2}}\|\pi_{j}(f) -\pi_{j-1}(f)\|_{\psi_{2}}.\]
We note that, by triangle inequality and minimality of \(k_{j-1}(f)\),
\[\|\pi_{j}(f)-\pi_{j-1}(f)\|_{\psi_{2}}\leq\mathsf{d}(f,\mathcal{F}_{k_{j}(f)}) +\mathsf{d}(f,\mathcal{F}_{k_{j-1}(f)})\leq\mathsf{d}(f,\mathcal{F}_{k_{j}(f) })+2\mathsf{d}(f,\mathcal{F}_{k_{j}(f)-1}),\]
so that
\[\sum_{j:k_{j}(f)\in\mathcal{I}}2^{k_{j}(f)/2}\|\pi_{j}(f)-\pi_{j-1}(f)\|_{\psi _{2}}\leq(1+2\sqrt{2})\gamma_{2}(F). \tag{142}\]
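The factor \(2\) in the bound preceding (142) is the lazy-walk rule read backwards (a remark, not an extra assumption): when \(k_{j}(f)>k_{j-1}(f)\), the distance to \(f\) has not yet halved at level \(k_{j}(f)-1\), so
\[\mathsf{d}(f,\mathcal{F}_{k_{j-1}(f)})=\mathsf{d}(f,\pi_{j-1}(f))<2\,\mathsf{d}(f,\mathcal{F}_{k_{j}(f)-1}),\]
and the same bound holds trivially when \(k_{j}(f)=k_{j-1}(f)\) since the sets \(\mathcal{F}_{k}\) are nested.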
Moreover, by the definition of the lazy walked chain and a geometric series bound,
\[\sum_{j:k_{j}(f)\in\mathcal{I}}\|\pi_{j}(f)-\pi_{j-1}(f)\|_{\psi_{2}}\leq 4 \mathsf{d}(f,\mathcal{F}_{0})\leq 4\bar{\Delta}(F). \tag{143}\]
We thus conclude that
\[\left|\sum_{j:k_{j}(f)\in\mathcal{I}}[M(\pi_{j}(f))-M(\pi_{j-1}(f))]\right| \leq 4\left(\sqrt{\frac{2u}{n}}+\frac{u}{n}\right)\|\xi\|_{\psi_{2}}\bar{ \Delta}(F)+(1+\sqrt{2})(1+2\sqrt{2})\|\xi\|_{\psi_{2}}\frac{\gamma_{2}(F)}{ \sqrt{n}}. \tag{144}\]
**Part 2: _The subexponential path \(\mathcal{J}\)._ Let us denote by \(\mathbf{Q}\) the joint distribution of \((\xi,X)\) and \(\hat{\mathbf{Q}}\) the empirical distribution associated to \(\{(\xi_{i},X_{i})\}_{i\in[n]}\). In particular, \(M(f)=\hat{\mathbf{Q}}(\cdot)f-\mathbf{Q}\hat{\mathbf{Q}}(\cdot)f\). By Jensen's and triangle inequalities,
\[\left|\sum_{j:k_{j}(f)\in\mathcal{J}}[M(\pi_{j+1}(f))-M(\pi_{j}(f))]\right| \leq\sum_{j:k_{j}(f)\in\mathcal{J}}\hat{\mathbf{Q}}(\cdot)\left| \pi_{j+1}(f)-\pi_{j}(f)\right|\] \[+\sum_{j:k_{j}(f)\in\mathcal{J}}\mathbf{Q}\hat{\mathbf{Q}}(\cdot )\left|\pi_{j+1}(f)-\pi_{j}(f)\right|. \tag{145}\]
For convenience, we set \(\hat{T}_{j}:=\hat{\mathbf{Q}}(\cdot)\left|\pi_{j+1}(f)-\pi_{j}(f)\right|\).
Given \(j\) such that \(k_{j}(f)\in\mathcal{J}\), since \(\pi_{j+1}(f),\pi_{j}(f)\in\mathcal{F}_{k_{j+1}(f)}\), we may apply (140) to \(k:=k_{j+1}(f)\). This fact, (141) and Cauchy-Schwarz yield
\[\hat{T}_{j}\leq\|\boldsymbol{\xi}\|_{n}\|\pi_{j+1}(f)-\pi_{j}(f)\|_{n}\leq \sqrt{v}[2(1+\sqrt{2})+1]\|\xi\|_{\psi_{2}}(\sqrt{u}+2^{k_{j+1}(f)/2})\frac{1 }{\sqrt{n}}\|\pi_{j+1}(f)-\pi_{j}(f)\|_{\psi_{2}}.\]
In a similar fashion, we can also state that with probability at least \(1-ce^{-u/4}\),
\[\frac{\hat{T}_{j}}{\|\boldsymbol{\xi}\|_{n}}\leq(\sqrt{u}+2^{k_{j+1}(f)/2}) \frac{[2(1+\sqrt{2})+1]^{1/2}}{\sqrt{n}}\|\pi_{j+1}(f)-\pi_{j}(f)\|_{\psi_{2}},\]
so integrating the tail leads to
\[\left\{\mathbb{E}\left(\frac{\hat{T}_{j}}{\|\boldsymbol{\xi}\|_{n}}\right)^{2 }\right\}^{1/2}\leq c2^{k_{j+1}(f)/2}\frac{[2(1+\sqrt{2})+1]^{1/2}}{\sqrt{n}} \|\pi_{j+1}(f)-\pi_{j}(f)\|_{\psi_{2}},\]
and by Holder's inequality,
\[\mathbf{P}\hat{T}_{j}\leq\left\{\mathbb{E}\|\boldsymbol{\xi}\|_{n}^{2}\right\} ^{1/2}\left\{\mathbb{E}\left(\frac{\hat{T}_{j}}{\|\boldsymbol{\xi}\|_{n}} \right)^{2}\right\}^{1/2}\leq c[2(1+\sqrt{2})+1]^{1/2}\frac{\|\xi\|_{\psi_{2} }}{\sqrt{n}}\frac{2^{k_{j+1}(f)/2}\|\pi_{j+1}(f)-\pi_{j}(f)\|_{\psi_{2}}}{ \sqrt{n}}.\]
We thus conclude from (145) and an analogous reasoning to (142) and (143) that
\[\left|\sum_{j:k_{j}(f)\in\mathcal{J}}[M(\pi_{j+1}(f))-M(\pi_{j}(f))]\right| \leq c_{1}^{2}\sqrt{v}\|\xi\|_{\psi_{2}}\sqrt{u}\frac{4\bar{\Delta }(F)}{\sqrt{n}}\] \[+\left(c_{1}^{2}\sqrt{v}+c_{1}\frac{c}{\sqrt{n}}\right)\|\xi\|_{ \psi_{2}}(1+2\sqrt{2})\frac{\gamma_{2}(F)}{\sqrt{n}},\]
with \(c_{1}:=[2(1+\sqrt{2})+1]^{1/2}\).
From the above bound, (144) and (138) we conclude that, for any \(u\geq 2\) and \(v\geq 1\), on the event \(\Omega_{u,v}\) of probability at least \(1-ce^{-u/4}-ce^{-nv}\), we have the bound stated in the theorem.
## Appendix C The Product Process
In this section we prove Theorem 8, which we restate here in a more general form. We refer to notation and definitions in Section 8.
**Theorem 47** (Product process).: _Let \(F,G\) be subclasses of \(L_{\psi_{2}}\). For any \(1\leq p<\infty\),_
\[\left|\sup_{(f,g)\in F\times G}|A(f,g)|\right|_{p}\lesssim\frac{\gamma_{2,p}(F) \gamma_{2,p}(G)}{n}+\bar{\Delta}(F)\frac{\gamma_{2,p}(G)}{\sqrt{n}}+\bar{ \Delta}(G)\frac{\gamma_{2,p}(F)}{\sqrt{n}}+\bar{\Delta}(F)\bar{\Delta}(G) \left(\sqrt{\frac{p}{n}}+\frac{p}{n}\right).\]
_In particular, there exist universal constants \(c,C>0\), such that for all \(n\geq 1\) and \(u\geq 1\), with probability at least \(1-e^{-u}\),_
\[\sup_{(f,g)\in F\times G}|A(f,g)| \leq C\left[\frac{\gamma_{2}(F)\gamma_{2}(G)}{n}+\bar{\Delta}(F) \frac{\gamma_{2}(G)}{\sqrt{n}}+\bar{\Delta}(G)\frac{\gamma_{2}(F)}{\sqrt{n}}\right]\] \[+c\sup_{(f,g)\in F\times G}\|fg-\mathbf{P}fg\|_{\psi_{1}}\left( \sqrt{\frac{u}{n}}+\frac{u}{n}\right).\]
Before proving the theorem we will need some auxiliary results.
**Lemma 48**.: _Let \(f,f^{\prime}\) and \(g,g^{\prime}\) be in \(L_{\psi_{2}}\)._
_If for \(k\in\mathbb{N}\), \(2^{k/2}\leq\sqrt{n}\), then for any \(u\geq 1\), with probability at least \(1-2\exp(-2^{k}u)\),_
\[|A(f,g)-A(f^{\prime},g^{\prime})|\leq 2(1+\sqrt{2})\frac{u2^{k/2}}{\sqrt{n}}\|fg -f^{\prime}g^{\prime}\|_{\psi_{1}}.\]
_If for \(k\in\mathbb{N}\), \(2^{k/2}\geq\sqrt{n}\), then for any \(u\geq 1\), with probability at least \(1-2\exp(-2^{k}u)\),_
\[\|f-f^{\prime}\|_{n}\leq\sqrt{u}2^{k/2}\frac{[2(1+\sqrt{2})+1]^{1/2}}{\sqrt{n }}\mathsf{d}(f,f^{\prime}).\]
Proof.: The proof is as in the proof of Lemma 45, using Bernstein's inequality. A minor difference is that the larger scaling \(v:=2^{k}u\) is used for all \(u\geq 1\), instead of \(v:=2^{k}+u\) for all \(u>0\). The fact that \(\sqrt{2u}+u\leq(\sqrt{2}+1)u\) for \(u\geq 1\) is used in the first inequality of the lemma.
As in the proof of Theorem 7, we combine Dirksen's method [40] with Talagrand's [95] "lazy-walked" chain. One difference now is that we will explicitly need Dirksen's bound for the quadratic process
\[A(f):=\hat{\mathbf{P}}(f^{2}-\mathbf{P}f^{2}).\]
The following proposition is a corollary of the proof of Theorem 5.5 in [40].
**Proposition 12** (Dirksen [40], Theorem 5.5).: _Let \(F\subset L_{\psi_{2}}\). Given \(1\leq p<\infty\), set \(\ell:=\lfloor\log_{2}p\rfloor\) and \(k_{0}:=\min\{k>\ell:2^{k/2}>\sqrt{n}\}\). Let \((\mathcal{F}_{k})\) be an optimal admissible sequence for \(\gamma_{2,p}(F,\mathsf{d})\) and, for any \(f\in F\) and \(k\in\mathbb{N}\), let \(\pi_{k}(f)\in\operatorname*{argmin}_{f^{\prime}\in\mathcal{F}_{k}}\mathsf{d}(f,f^{\prime})\). Then there exists a universal constant \(c>0\) such that for all \(n\in\mathbb{N}\) and \(u\geq 2\), with probability at least \(1-ce^{-pu/4}\),_
\[\sup_{f\in F}\sup_{k\geq k_{0}}|A(\pi_{k}(f))|^{1/2}-\sup_{f\in F}|A(\pi_{k_{ 0}}(f))|^{1/2}\leq\sqrt{u}\left[25\frac{\gamma_{2,p}(F,\mathsf{d})}{\sqrt{n}} +\left(85\frac{\bar{\Delta}(F)\gamma_{2,p}(F,\mathsf{d})}{\sqrt{n}}\right)^{1/2 }\right].\]
_Moreover, for all \(n\in\mathbb{N}\) and \(u\geq 1\), with probability at least \(1-ce^{-pu/4}\),_
\[\sup_{f\in F}|A(\pi_{k_{0}}(f))|^{1/2}\leq\sqrt{u}[4(1+\sqrt{2})+2]^{1/2}\bar {\Delta}(F).\]
Finally, besides Lemma 46, we need some additional lemmas from Dirksen [40].
**Lemma 49** (Dirksen [40], Lemma A.3).: _Fix \(1\leq p<\infty\), set \(\ell:=\lfloor\log_{2}p\rfloor\) and let \((X_{t})_{t\in T}\) be a finite collection of real-valued random variables with \(|T|\leq 2^{2^{\ell}}\)._
_Then_
\[\left(\mathbb{E}\sup_{t\in T}|X_{t}|^{p}\right)^{1/p}\leq 2\sup_{t\in T}( \mathbb{E}|X_{t}|^{p})^{1/p}.\]
**Lemma 50** (Dirksen [40], Lemma A.5).: _Fix \(1\leq p<\infty\) and \(0<\alpha<\infty\). Let \(\gamma\geq 0\) and suppose that \(\xi\) is a positive random variable such that for some \(c\geq 1\) and \(u_{*}>0\), for all \(u\geq u_{*}\),_
\[\mathbb{P}(\xi>\gamma u)\leq c\exp(-pu^{\alpha}/4).\]
_Then, for a constant \(\tilde{c}_{\alpha}>0\), depending only on \(\alpha\),_
\[(\mathbb{E}\xi^{p})^{1/p}\leq\gamma(\tilde{c}_{\alpha}c+u_{*}).\]
**Lemma 51** (Dirksen [40], Lemma A.2).: _Let \(0<\alpha<\infty\). If a random variable \(X\) satisfies, for some \(a_{1},a_{2}>0\),_
\[\mathbb{P}(|X|\geq a_{1}u+a_{2}\sqrt{u})\leq\exp(-u),\]
_for all \(u\geq 0\), then, for absolute constant \(C>0\), for all \(p\geq 1\),_
\[(\mathbb{E}|X|^{p})^{1/p}\leq C(a_{1}p+a_{2}\sqrt{p}).\]
Proof of Theorem 47.: Let \((\mathcal{F}_{k})\) and \((\mathcal{G}_{k})\) be optimal admissible sequences for \(\gamma_{2,p}(F)\) and \(\gamma_{2,p}(G)\) respectively. Set \(\ell:=\lfloor\log_{2}p\rfloor\), \(k_{0}:=\min\{k>\ell:2^{k/2}>\sqrt{n}\}\) and let us define \(\mathcal{I}:=\{k\in\mathbb{N}:\ell<k<k_{0}\}\) and \(\mathcal{J}:=\{k\in\mathbb{N}:k\geq k_{0}\}\). Given \((f,g)\in F\times G\), for any \(k\in\mathbb{N}\), we take the usual selections \(\pi_{k}(f)\in\operatorname*{argmin}_{f^{\prime}\in\mathcal{F}_{k}}\mathsf{d}(f,f^{\prime})\), and \(\Pi_{k}(g)\in\operatorname*{argmin}_{g^{\prime}\in\mathcal{G}_{k}}\mathsf{d}(g,g^{\prime})\). For convenience, we also define \(\mathcal{P}_{k}(f,g):=A(\pi_{k}(f),\Pi_{k}(g))\) and \(\mathscr{P}_{k}(f,g):=\pi_{k}(f)\Pi_{k}(g)\). Our proof will rely on the chain:
\[A(f,g)-A(\pi_{\ell}(f),\Pi_{\ell}(g))=\sum_{k\in\mathcal{J}}\left[\mathcal{P} _{k+1}(f,g)-\mathcal{P}_{k}(f,g)\right]+\sum_{k\in\mathcal{I}}\left[\mathcal{P }_{k}(f,g)-\mathcal{P}_{k-1}(f,g)\right], \tag{146}\]
where we have used that \(\cup_{k\geq 0}\mathcal{F}_{k}\times\mathcal{G}_{k}\) is dense on \(F\times G\).
Fix \(u\geq 2\). Given any \(k\in\mathbb{N}\), define the event \(\Omega_{k,\mathcal{I},u,p}\) for which, for all \(f\in F\) and \(g\in G\), we have
\[|\mathcal{P}_{k}(f,g)-\mathcal{P}_{k-1}(f,g)|\leq 2(1+\sqrt{2})u2^{k/2}\frac{\|\mathscr{P}_{k}(f,g)-\mathscr{P}_{k-1}(f,g)\|_{\psi_{1}}}{\sqrt{n}}. \tag{147}\]
Define also the event \(\Omega_{k,\mathcal{J},u,p}\) for which, for all \(f\in F\) and \(g\in G\), we have both inequalities:
\[\|\pi_{k+1}(f)-\pi_{k}(f)\|_{n} \leq\sqrt{u}2^{k/2}\frac{[2(1+\sqrt{2})+1]^{1/2}}{\sqrt{n}}\|\pi _{k+1}(f)-\pi_{k}(f)\|_{\psi_{2}},\] \[\|\Pi_{k+1}(g)-\Pi_{k}(g)\|_{n} \leq\sqrt{u}2^{k/2}\frac{[2(1+\sqrt{2})+1]^{1/2}}{\sqrt{n}}\|\Pi _{k+1}(g)-\Pi_{k}(g)\|_{\psi_{2}}. \tag{148}\]
The event \(\Omega_{k,\mathcal{I},u,p}\) is obtained by a union bound over all possible 4-tuples \((\pi_{k-1}(f),\pi_{k}(f),\Pi_{k-1}(g),\Pi_{k}(g))\), of which there are at most \(|\mathcal{F}_{k-1}||\mathcal{F}_{k}||\mathcal{G}_{k}||\mathcal{G}_{k-1}|\leq 2^{2^{k+2}}\). If \(\Omega_{\mathcal{I},u,p}:=\cap_{k\in\mathcal{I}}\Omega_{k,\mathcal{I},u,p}\), the first bound in Lemma 48 for \(k\in\mathcal{I}\) and Lemma 46 (using that \(k>\ell\) over \(\mathcal{I}\)) imply that there is a universal constant \(c>0\) such that
\[\mathbb{P}(\Omega_{\mathcal{I},u,p}^{c})\leq ce^{-pu/4}.\]
Similarly, the second bound in Lemma 48 for \(k\in\mathcal{J}\) and Lemma 46 (using that \(k>\ell\) over \(\mathcal{J}\)) imply that for the event \(\Omega_{\mathcal{J},u,p}:=\cap_{k\in\mathcal{J}}\Omega_{k,\mathcal{J},u,p}\), we have
\[\mathbb{P}(\Omega_{\mathcal{J},u,p}^{c})\leq ce^{-pu/4}.\]
We now also define the event \(\Omega_{u,p}\) as the intersection of \(\Omega_{\mathcal{I},u,p}\cap\Omega_{\mathcal{J},u,p}\) and the events for which both inequalities of Proposition 12 hold for both classes \(F\) and \(G\). Clearly, by such proposition and the two
previous displays we have \(\mathbb{P}(\Omega_{u,p}^{c})\leq ce^{-pu/4}\) from a union bound.
We next fix \(u\geq 2\) and assume throughout that \(\Omega_{u,p}\) holds. We now bound the chain over \(\mathcal{I}\) and \(\mathcal{J}\) separately.
**Part 1: _The subgaussian path \(\mathcal{I}\)._**: From (147) and the inequality
\[\|\mathscr{P}_{k}(f,g)-\mathscr{P}_{k-1}(f,g)\|_{\psi_{1}} \leq\|\pi_{k}(f)-\pi_{k-1}(f)\|_{\psi_{2}}\|\Pi_{k}(g)\|_{\psi_{2} }+\|\pi_{k-1}(f)\|_{\psi_{2}}\|\Pi_{k}(g)-\Pi_{k-1}(g)\|_{\psi_{2}}\] \[\leq\bar{\Delta}(G)[\mathsf{d}(f,\pi_{k}(f))+\mathsf{d}(f,\pi_{k- 1}(f))]+\bar{\Delta}(F)[\mathsf{d}(g,\Pi_{k}(g))+\mathsf{d}(g,\Pi_{k-1}(g))]\]
we obtain
\[\left|\sum_{k\in\mathcal{I}}[\mathcal{P}_{k}(f,g)-\mathcal{P}_{k-1}(f,g)] \right|\leq 2(1+\sqrt{2})^{2}\frac{u}{\sqrt{n}}\left[\bar{\Delta}(F) \gamma_{2,p}(G)+\bar{\Delta}(G)\gamma_{2,p}(F)\right]. \tag{149}\]
**Part 2: _The subexponential path \(\mathcal{J}\)._**: Note that \(A(f,g)=\hat{\mathbf{P}}fg-\mathbf{P}(\hat{\mathbf{P}}fg)\) and thus, by Jensen's and triangle inequalities,
\[\left|\sum_{k\in\mathcal{J}}[\mathcal{P}_{k+1}(f,g)-\mathcal{P}_ {k}(f,g)]\right| \leq\left|\sum_{k\in\mathcal{J}}\hat{\mathbf{P}}[\mathscr{P}_{k+ 1}(f,g)-\mathscr{P}_{k}(f,g)]\right|+\left|\sum_{k\in\mathcal{J}}\mathbf{P} \hat{\mathbf{P}}[\mathscr{P}_{k+1}(f,g)-\mathscr{P}_{k}(f,g)]\right|\] \[\leq\sum_{k\in\mathcal{J}}\hat{\mathbf{P}}\left|\mathscr{P}_{k+1} (f,g)-\mathscr{P}_{k}(f,g)\right|+\sum_{k\in\mathcal{J}}\mathbf{P}\hat{ \mathbf{P}}\left|\mathscr{P}_{k+1}(f,g)-\mathscr{P}_{k}(f,g)\right|. \tag{150}\]
Let us denote \(\hat{T}_{k}:=\hat{\mathbf{P}}\left|\mathscr{P}_{k+1}(f,g)-\mathscr{P}_{k}(f,g)\right|\). We have the split
\[\hat{T}_{k}\leq\left|\hat{\mathbf{P}}\pi_{k}(f)[\Pi_{k+1}(g)-\Pi_{k}(g)] \right|+\left|\hat{\mathbf{P}}\Pi_{k+1}(g)[\pi_{k+1}(f)-\pi_{k}(f)]\right|.\]
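The split is the usual add-and-subtract identity (spelled out for convenience):
\[\mathscr{P}_{k+1}(f,g)-\mathscr{P}_{k}(f,g)=\pi_{k}(f)\,[\Pi_{k+1}(g)-\Pi_{k}(g)]+\Pi_{k+1}(g)\,[\pi_{k+1}(f)-\pi_{k}(f)],\]
followed by the triangle inequality for \(\hat{\mathbf{P}}|\cdot|\).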
By Cauchy-Schwarz,
\[\left|\hat{\mathbf{P}}\pi_{k}(f)[\Pi_{k+1}(g)-\Pi_{k}(g)]\right| \leq\|\pi_{k}(f)\|_{n}\|\Pi_{k+1}(g)-\Pi_{k}(g)\|_{n}\] \[\leq\left[\{A(\pi_{k}(f))\}^{1/2}+\{\mathbf{P}\pi_{k}^{2}(f)\}^{1/2}\right]\|\Pi_{k+1}(g)-\Pi_{k}(g)\|_{n},\]
which, together with (148), the bounds in Proposition 12 and \(\sqrt{u}\geq 1\), gives
\[\left|\hat{\mathbf{P}}\pi_{k}(f)[\Pi_{k+1}(g)-\Pi_{k}(g)]\right| \leq\frac{c_{2}u2^{k/2}}{\sqrt{n}}\left[c_{1}\bar{\Delta}(F)+25 \frac{\gamma_{2,p}(F)}{\sqrt{n}}+\left\{\frac{85\bar{\Delta}(F)\gamma_{2,p}(F )}{\sqrt{n}}\right\}^{1/2}\right]\mathsf{d}(\Pi_{k+1}(g),\Pi_{k}(g))\] \[\leq c_{2}\frac{u2^{k/2}}{\sqrt{n}}\mathsf{d}(\Pi_{k+1}(g),\Pi_{k }(g))\left[c_{3}\bar{\Delta}(F)+c_{4}\frac{\gamma_{2,p}(F)}{\sqrt{n}}\right],\]
by Young's inequality and constants \(c_{1}:=\{4(1+\sqrt{2})+2\}^{1/2}+1,c_{2}:=\{2(1+\sqrt{2})+1\}^{1/2},c_{3}:=c_{1}+\frac{\sqrt{85}}{2}\) and \(c_{4}:=25+\frac{\sqrt{85}}{2}\). An identical bound gives
\[\left|\hat{\mathbf{P}}\Pi_{k+1}(g)[\pi_{k+1}(f)-\pi_{k}(f)]\right| \leq c_{2}\frac{u2^{k/2}}{\sqrt{n}}\mathsf{d}(\pi_{k+1}(f),\pi_{k}(f))\left[c _{3}\bar{\Delta}(G)+c_{4}\frac{\gamma_{2,p}(G)}{\sqrt{n}}\right].\]
We thus conclude that
\[\hat{T}_{k} \leq c_{2}\frac{u2^{k/2}}{\sqrt{n}}\mathsf{d}(\Pi_{k+1}(g),\Pi_{k} (g))\left[c_{3}\bar{\Delta}(F)+c_{4}\frac{\gamma_{2,p}(F)}{\sqrt{n}}\right]\] \[+c_{2}\frac{u2^{k/2}}{\sqrt{n}}\mathsf{d}(\pi_{k+1}(f),\pi_{k}(f)) \left[c_{3}\bar{\Delta}(G)+c_{4}\frac{\gamma_{2,p}(G)}{\sqrt{n}}\right].\]
Note that, in fact, we have proved that the above bound on \(\hat{T}_{k}\) holds with probability at least
\(1-c\exp(-pu/4)\) for any \(u\geq 2\). Thus, from Lemma 50 we have, for some universal constant \(c_{0}>0\),
\[\mathbf{P}\hat{T}_{k} \leq c_{0}c_{2}\frac{u2^{k/2}}{\sqrt{n}}\text{d}(\Pi_{k+1}(g),\Pi_{ k}(g))\left[c_{3}\bar{\Delta}(F)+c_{4}\frac{\gamma_{2,p}(F)}{\sqrt{n}}\right]\] \[+c_{0}c_{2}\frac{u2^{k/2}}{\sqrt{n}}\text{d}(\pi_{k+1}(f),\pi_{k} (f))\left[c_{3}\bar{\Delta}(G)+c_{4}\frac{\gamma_{2,p}(G)}{\sqrt{n}}\right].\]
Using the previous two bounds in (150) gives, after using the triangle inequality for d, summing over \(k\in\mathcal{J}\) and using the definition of \(\gamma_{2,p}(F)\) and \(\gamma_{2,p}(G)\) (recalling that \(k>\ell\)),
\[\left|\sum_{k\in\mathcal{J}}[\mathcal{P}_{k+1}(f,g)-\mathcal{P}_{ k}(f,g)]\right| \leq(1+c_{0})(1+2^{-1/2})c_{2}\frac{u}{\sqrt{n}}\gamma_{2,p}(G) \left[c_{3}\bar{\Delta}(F)+c_{4}\frac{\gamma_{2,p}(F)}{\sqrt{n}}\right]\] \[+(1+c_{0})(1+2^{-1/2})c_{2}\frac{u}{\sqrt{n}}\gamma_{2,p}(F)\left[ c_{3}\bar{\Delta}(G)+c_{4}\frac{\gamma_{2,p}(G)}{\sqrt{n}}\right].\]
From the above bound, (149) and (146) we conclude that, for any \(u\geq 2\), on the event \(\Omega_{u,p}\) of probability at least \(1-e^{-pu/4}\), we have
\[\sup_{(f,g)\in F\times G}|A(f,g)|^{1/2}-\sup_{(f,g)\in F\times G} |A(\pi_{\ell}(f),\Pi_{\ell}(g))|^{1/2}\] \[\leq\sqrt{u}\left[\frac{c_{5}}{\sqrt{n}}\left(\bar{\Delta}(F) \gamma_{2,p}(G)+\bar{\Delta}(G)\gamma_{2,p}(F)\right)+c_{6}\frac{\gamma_{2,p}( F)\gamma_{2,p}(G)}{n}\right]^{1/2},\]
with \(c_{5}:=2(1+\sqrt{2}^{2}+(1+c_{0})(1+2^{-1/2}))c_{2}c_{3}\) and \(c_{6}:=2(1+c_{0})c_{2}c_{4}(1+2^{-1/2})\). This and Lemma 50 (with \(\alpha=2\)) imply that
\[\left|\sup_{(f,g)\in F\times G}|A(f,g)|^{1/2}-\sup_{(f,g)\in F \times G}|A(\pi_{\ell}(f),\Pi_{\ell}(g))|^{1/2}\right|_{p}\] \[\leq c\left[\frac{1}{\sqrt{n}}\left(\bar{\Delta}(F)\gamma_{2,p}( G)+\bar{\Delta}(G)\gamma_{2,p}(F)\right)+\frac{\gamma_{2,p}(F)\gamma_{2,p}(G)}{n} \right]^{1/2}.\]
We also have from Lemma 49,
\[\left(\mathbb{E}\sup_{(f,g)\in F\times G}|A(\pi_{\ell}(f),\Pi_{ \ell}(g))|^{p/2}\right)^{2/p} \leq 4\sup_{(f,g)\in F\times G}\left(\mathbb{E}|A(\pi_{\ell}(f), \Pi_{\ell}(g))|^{p/2}\right)^{2/p}\] \[\leq c\sup_{(f,g)\in F\times G}\left[\|\mathscr{P}_{\ell}(f,g)- \mathbf{P}\mathscr{P}_{\ell}(f,g)\|_{\psi_{1}}\left(\sqrt{\frac{p}{n}}+\frac{ p}{n}\right)\right],\]
where the second inequality follows from Bernstein's inequality for \(A(\pi_{\ell}(f),\Pi_{\ell}(g))\) and Lemma 51. The two previous displays finish the proof.
## References
* [1] Vershynin, R. (2018). _High-Dimensional Probability_. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge.
* [2] Ahlswede, R. and Winter, A. (2002). Strong converse for identification via quantum channels. _IEEE Transactions on Information Theory_, 48(3), 569-579.
* [3] Tropp, J. A. (2012). User-friendly tail bounds for sums of random matrices. _Foundations of Computational Mathematics_, 12, 389-434.
* [4] Boucheron, S., Lugosi, G. and Massart, P. (2013). _Concentration Inequalities: A Nonasymptotic Theory of Independence_. Oxford University Press, Oxford.
* [5] Ledoux, M. and Talagrand, M. (1991). _Probability in Banach Spaces_. A Series of Modern Surveys in Mathematics. Springer, Berlin, Heidelberg. |
2303.17037 | Non-Hermitian Orthogonal Polynomials on a Trefoil | We investigate asymptotic behavior of polynomials $ Q_n(z) $ satisfying
non-Hermitian orthogonality relations $$ \int_\Delta s^kQ_n(s)\rho(s)ds =0,
\quad k\in\{0,\ldots,n-1\}, $$ where $ \Delta $ is a Chebotar\"ev (minimal
capacity) contour connecting three non-collinear points and $ \rho(s) $ is a
Jacobi-type weight including a possible power-type singularity at the
Chebotar\"ev center of $ \Delta $. | Ahmad B. Barhoumi, Maxim L. Yattselev | 2023-03-29T21:39:53Z | http://arxiv.org/abs/2303.17037v1 | # Non-Hermitian orthogonal polynomials on a trefoil
###### Abstract.
We investigate asymptotic behavior of polynomials \(Q_{n}(z)\) satisfying non-Hermitian orthogonality relations
\[\int_{\Delta}s^{k}Q_{n}(s)\rho(s)\mathrm{d}s=0,\quad k\in\{0,\ldots,n-1\},\]
where \(\Delta\) is a Chebotarev (minimal capacity) contour connecting three non-collinear points and \(\rho(s)\) is a Jacobi-type weight including a possible power-type singularity at the Chebotarev center of \(\Delta\).
Key words and phrases: Non-Hermitian orthogonality, strong asymptotics, Padé approximation, Riemann-Hilbert analysis. 2020 Mathematics Subject Classification: 42C05, 41A20, 41A21. The first author was partially supported by the National Science Foundation under DMS-1812625. The research of the second author was supported in part by a grant from the Simons Foundation, CGM-706591. |
2309.02307 | Hidden subsystem symmetry protected states in competing topological
orders | We reveal the connection between two-dimensional subsystem symmetry-protected
topological (SSPT) states and two-dimensional topological orders via a
self-dual frustrated toric code model. This model, an enrichment of the toric
code (TC) with its dual interactions, can be mapped to a model defined on the
dual lattice with subsystem symmetries and subextensive ground state
degeneracy. The map connects exactly the frustrated TC to two copies of the
topological plaquette Ising model (TPIM), as a strong SSPT model with linear
subsystem symmetries. The membrane order parameter of the TPIM is exactly
mapped to dual TC stabilizers as the order parameter of the frustrated TC
model, SSPT gapless edge states of the TPIM are mapped to zero-energy dangling
operators under open boundaries, and the transition from the SSPT-ordered TPIM
to the trivial paramagnetic phase is mapped to the transition between two
distinct topological orders. We also demonstrate that this mapping can be used
to elucidate the structure of other SSPT models, reflecting the subtle linkage
between SSPT order and topological order in two dimensions. | Shi Feng | 2023-09-05T15:21:48Z | http://arxiv.org/abs/2309.02307v2 | # Hidden subsystem symmetry protected states in competing topological orders
###### Abstract
We reveal the connection between two-dimensional subsystem symmetry-protected topological (SSPT) states and two-dimensional topological orders via a self-dual frustrated toric code model. This model, an enrichment of the toric code (TC) with its dual interactions, can be mapped to a model defined on the dual lattice with subsystem symmetries and subextensive ground state degeneracy. The map connects exactly the frustrated TC to two copies of the topological plaquette Ising model (TPIM), as a strong SSPT model with linear subsystem symmetries. The membrane order parameter of TPIM is exactly mapped to dual TC stabilizers as the order parameter of the frustrated TC model, and the transition between the SSPT-ordered TPIM to the trivial paramagnetic phase is mapped to the transition between two distinct topological orders. We also demonstrate that this picture of frustrated TC can be used to construct other SSPT models, hinting at a subtle linkage between SSPT order and topological order in two dimensions.
The exploration of topological phases of matter has expanded our understanding of distinct states of matter beyond those conventionally defined by spontaneously broken symmetry. Three primary categories of topological phases have emerged: (1) Phases protected by an unbroken global symmetry, i.e. symmetry protected topological (SPT) phases exemplified by topological band insulators [1] and integer spin chains [2; 3]. (2) Topologically ordered phases [4; 5], as found in systems such as quantum spin liquids [6; 7; 8], Kitaev's toric code [9], and fractional quantum Hall systems [10], known for their inherent or emergent local gauge fields and braiding statistics. (3) The most recent and compelling addition, the Subsystem Symmetry Protected Topological (SSPT) phases [11; 12], which lie at an intermediate position between the first two categories, featuring a sub-global symmetry and non-local order parameter. Just as SPT states are known for boundary modes that are protected by global symmetries, SSPT states are also protected in a similar fashion, but by their subextensive subsystem symmetries. Such phases introduce new forms of matter characterized by subextensive topological ground-state degeneracy (GSD) and quasiparticle excitations with fractonic mobility [13; 14; 15; 16].
The primary objective of this paper is to elucidate the intricate relationship between 2D SSPT states and 2D topological order. We concentrate on the "strong SSPT" states [11; 12], distinct from the "weak SSPT" states in that they cannot be simply regarded as decoupled or weakly coupled SPT states in lower dimensions. While several related scenarios are well-understood, such as the connection between 2D global SPT orders and their dual 2D topological orders [18; 19], and between 3D SSPT orders and 3D fracton orders [20; 21], the relationship between 2D SSPT orders and 2D topological orders remains relatively less explored. In this work, drawing parallels with the hidden symmetry-breaking picture observed in an SPT phase after a non-local transformation [22; 23], we demonstrate the existence of hidden competing topological orders within 2D SSPT phases, or conversely, the hidden 2D SSPT phases within competing 2D topological orders. We note that previous studies have established a connection between the SSPT triangular cluster-state model and its dual topological order via the Xu-Moore duality [24; 20], we here take a different approach: by using a non-local map to relate the simplest topological order, the toric code (TC), to the simplest strong SSPT model, the topological plaquette Ising model (TPIM) [Fig.1].
Motivated by the self-duality of TPIM [12], it is natural to start from a manifestly self-dual model with topological order, though later we will show that half of it suffices to give a similar picture. Let us consider the following self-dual frustrated TC Hamiltonian (FTC), where we superimpose the TC interactions with its dual interactions
\[H_{\rm FTC}=-\left[\sum_{s}A_{s}+\sum_{p}B_{p}\right]-\alpha\left[\sum_{p}A_ {p}+\sum_{s}B_{s}\right] \tag{1}\]
Figure 1: Mapping between the self-dual frustrated TC model (Left) and two copies of TPIM (Right). Left: TC square lattice \(\Lambda\) in solid black lines, where qubits live on links; its equivalence in terms of checker-board lattice in Wen’s plaquette representation [17] – star operator corresponds to shaded square while plaquette operator to white square; and TC stabilizers \(A_{s}\) (red star), \(B_{p}\) (blue plaquette), and dual TC stabilizers \(A_{p}\) (red plaquette) and \(B_{s}\) (blue star). Right: The TPIM lattice. Two sublattices are denoted by black and gray lines, with qubits labelled in orange and purple. The two five-qubit interactions: \(\sigma^{z}_{\rm center}\prod_{i\in\rm corners}\sigma^{x}_{i}\), are circled in yellow and red.
The first two terms, \(A_{s}=\prod_{i\in s}\sigma_{i}^{x},\ B_{p}=\prod_{i\in p}\sigma_{i}^{z}\), are the \(X\) and \(Z\) parity checks in the canonical \(H_{\rm TC}\) of TC, while the dual interactions, \(A_{p}=\prod_{i\in p}\sigma_{i}^{x},\ B_{s}=\prod_{i\in s}\sigma_{i}^{z}\) in the latter two terms, are added as perturbations parameterized by \(\alpha\). It depicts the competition between two mutually frustrating TCs and enjoys a duality \(\alpha\leftrightarrow\alpha^{-1}\). At the transition \(\alpha=1\) both vertices and plaquettes play the same role. Note this is different from the 3D fracton orders, which typically involve stacking 2D toric codes to realize higher-dimensional fracton phases [16; 25; 26]. Here the perturbation is not disjoint from the original 2D TC and it does not commute with \(H_{\rm TC}\), e.g. the dual operator \(A_{p}\) and its neighboring plaquette \(B_{p^{\prime}}\) share a single link. The original toric code has a finite GSD; however, we will show that the two competing TCs in Eq. 1 can be mapped to a model with a subextensively infinite number of degenerate ground states, which is intimately related to SSPT states and linear symmetries that play a central role in 2D fractonic physics [14; 15; 27].
_Fractonic anyons:-_ A common signature of the emergence of subsystem symmetries is the effective dimensional reduction, i.e. the fractonic nature, of excitations [28; 14; 27]. Here we start from the mobility of anyons in leading-order perturbation theory as a rudimentary indicator, showing that the competition between the two \(Z_{2}\) topological orders gives rise to fractonic behaviors in 2D. Note that neither of the two terms in the perturbation of Eq. 1 can create a pair of anyons, so fractonic behavior is expected. At \(\alpha=0\), the TC model has four types of Abelian anyons: \(1,e,m,\epsilon=e\times m\), where \(e\) and \(m\) are the electric charge and magnetic flux in the context of \(Z_{2}\) gauge theory, behaving as self bosons and mutual semions, while the \(e\times m\) bound state \(\epsilon\) has fermionic statistics. The ground state is an anyon vacuum; under open boundary conditions, elementary excitations can be any one of the \(e\), \(m\) or \(e\times m\) anyons. It is natural to ask what the dynamics of these elementary particles is under an \(\alpha\neq 0\) perturbation. Indeed, it is known that when the TC is perturbed by a transverse field \(\sum_{i}\sigma_{i}^{x}\) or \(\sum_{i}\sigma_{i}^{z}\)[29; 30; 31], single \(e\) and \(m\) anyons disperse isotropically, while under \(\sum_{i}\sigma_{i}^{y}\) the fermionic \(\epsilon\) particles disperse linearly [32; 33]. However, it is clear from the leading-order perturbation that in the case of Eq. 1, none of these particles gains mobility; rather, it is the anyon bound states, e.g. the two bosonic bound states \(e_{1}\times e_{2}\) or \(m_{1}\times m_{2}\) with individual anyons aligned in the (off)diagonal direction, that develop partially mobile softened modes. The subscript is used to emphasize that they are spatially separated by a lattice constant and do not fuse to a trivial state. It is easy to see that these particles, while mobile, enjoy only constrained mobility. For example, a bound state created to be aligned with the diagonal direction can only move along the off-diagonal direction, as shown in Fig. 2(left). Nevertheless, the bound state created in the vertical or horizontal direction remains stable, as shown in Fig. 2(right). To leading order in perturbation theory, its effective dispersion can be described by
\[\varepsilon_{m_{1}\times m_{2}}(\mathbf{k})=\varepsilon_{e_{1}\times e_{2}}( \mathbf{k})=4-2\cos(\mathbf{k}\cdot\mathbf{d}) \tag{2}\]
with \(\mathbf{d}\) in the diagonal or off-diagonal direction. Hence the two-anyon bound state does not condense at the transition \(\alpha=1\); indeed, the phase transition is instead driven by the four-anyon bound state, which is free to move in all directions. These properties are similar to the 2D fractonic behavior of the plaquette Ising model (PIM), where four-spin-flip excitations can move without constraint, while single- or two-spin-flip excitations are either immobile or partially mobile along lines [14; 33].
_Order parameter:-_ For the ground-state properties of Eq. 1 without anyon excitations, we also define topological order parameters. In the large-\(\alpha\) limit, a vacuum state of the dual anyons, i.e. of \(A_{p}\) and \(B_{s}\) in the other set of TC stabilizers, is guaranteed in the ground state by duality. Naturally, we can use these operators, or their product in some (potentially disconnected) region \(\mathcal{M}\), as topological order parameters:
\[O_{p}=\left\langle\prod_{p\in\mathcal{M}}A_{p}\right\rangle,\ O_{s}=\left\langle \prod_{s\in\mathcal{M}}B_{s}\right\rangle \tag{3}\]
such that the phase of \(\alpha<(>)1\) corresponds to \(O=0(1)\). Here we restrict ourselves to open boundary conditions, ruling out the non-contractible Wilson loops for convenience. For the smallest \(\mathcal{M}\), the order parameter reduces to a single plaquette \(A_{p}\) or star \(B_{s}\).
_Map to TPIM:-_ Let us now introduce a new set of spin variables living on the vertices and faces of the dual lattice \(\tilde{\Lambda}\)[34; 33], where the new degrees of freedom are denoted by hollow squares in Fig. 3(a). We apply a non-local transformation, relating a Pauli matrix \(\tilde{\sigma}^{x}\) on the
Figure 2: The anyon mobility analysis in leading order perturbation theory. Left: A local perturbation by the dual TC operator \(A_{p}\) (in red) moves the two anyon bound state \(m_{1}\times m_{2}\) aligned in diagonal direction (deep blue) to its adjacent position (faint blue) along the off-diagonal direction, resulting in a bound-state dispersion along the one-dimensional direction. Right: The same perturbation transitions the horizontally aligned two-anyon bound state (deep blue) to a vertically aligned state (faint blue), and conversely. This results in a stable, immobile bound state.
dual lattice to a set of qubits on the original lattice, and identify \(\tilde{\sigma}^{z}\) as stabilizer operators:
\[\tilde{\sigma}^{x}_{j}=\prod_{i>j}\sigma^{y}_{i},\ \ \tilde{\sigma}^{z}_{j(s)}=A_{s},\ \ \tilde{\sigma}^{z}_{j(p)}=B_{p} \tag{4}\]
As illustrated in Fig. 3(a), \(\tilde{\sigma}^{x}_{j}\) on the dual lattice (orange square) is given by the product of \(\sigma^{y}_{i}\) on the original lattice (orange dots). One readily checks that this is a faithful representation that respects the Pauli algebra. Under this mapping, one copy of the TC can be treated as trivial Ising variables, while the other is responsible for all emerging non-trivial properties. For clarity, let us first rewrite the operators of the dual TC, \(B_{s}\) and \(A_{p}\), in terms of the original TC variables:
\[B_{s}=A_{s}\prod_{i\in s}\sigma^{y}_{i},\ \ A_{p}=\prod_{i\in p}\sigma^{y}_{i}\,B _{p} \tag{5}\]
by the identity \(\sigma^{a}\sigma^{b}=\delta_{ab}+i\epsilon^{abc}\sigma^{c}\). In terms of \(\tilde{\sigma}^{x}\) and \(\tilde{\sigma}^{z}\) defined in Eq. 4, they can be written as:
\[B_{s}=\tilde{\sigma}^{z}_{c(\tilde{p}_{V})}\prod_{i\in\tilde{p}_{V}}\tilde{ \sigma}^{x}_{i},\ \ A_{p}=\tilde{\sigma}^{z}_{c(\tilde{p}_{F})}\prod_{i\in\tilde{p}_{F}}\tilde{ \sigma}^{x}_{i} \tag{6}\]
Here \(\tilde{p}_{V}\) denotes a plaquette defined on the vertices of the dual lattice \(\tilde{\Lambda}\) that encloses one vertex in \(\tilde{\Lambda}\), as illustrated in Fig. 3(b), where the enclosed qubit and the qubits of the plaquette live in different emergent sublattices, colored by orange and purple squares. \(\tilde{p}_{F}\) denotes a plaquette defined on face centers of \(\tilde{\Lambda}\) that encloses another face center in \(\tilde{\Lambda}\), as illustrated in Fig. 3(c). Again the enclosed qubit and the qubits of the plaquette live in different emergent sublattices, colored by green and blue squares. \(c(\tilde{p}_{V})\) denotes the site that lies at the center of a vertex plaquette, while \(c(\tilde{p}_{F})\) denotes that of a face-center plaquette. After this mapping, it is easy to see that the FTC amounts to adding up interactions in \(V,F\in\tilde{\Lambda}\):
\[\mathcal{H}_{\eta}=-\alpha\sum_{c(\tilde{p}_{\eta})}\tilde{\sigma}^{z}_{c( \tilde{p}_{\eta})}\prod_{i\in\tilde{p}_{\eta}}\tilde{\sigma}^{x}_{i}-\sum_{c( \tilde{p}_{\eta})}\tilde{\sigma}^{z}_{c(\tilde{p}_{\eta})},\ \ \eta=V,F \tag{7}\]
where each \(\eta\), represented in Fig. 3(b) or (c), corresponds to one copy of TPIM. We use the calligraphic \(\mathcal{H}\) to remind the reader that it is one of the two disjoint TPIM Hamiltonians. Eq. 7 is equivalent to the TPIM (or cluster-state model) protected by a subextensive number of \(Z_{2}\times Z_{2}\) linear symmetries. For \(\alpha<1\), Eq. 7 reduces to an Ising paramagnet in \(\tilde{\Lambda}\), which is equivalent to a TC; while for \(\alpha>1\) the system becomes a strong SSPT state. For each copy, e.g. the one defined on the \(V\) lattice, the symmetry generators are given by:
\[U^{V_{i}}_{m}=\prod_{n\in\text{cols}}\tilde{\sigma}^{z}_{m,n},\ U^{V_{i}}_{n}= \prod_{m\in\text{rows}}\tilde{\sigma}^{z}_{m,n} \tag{8}\]
The notation \(U^{V_{i}}_{m(n)}\) stands for the linear unitary operators defined on each row (column) inside the emergent sublattice \(V_{i}\), where \(i=1,2\) corresponds to the purple and orange sites in Fig. 3(b), and \(m,n\) label the rows and columns of \(V_{i}\). Assuming the original lattice \(\Lambda\) is of size \(2L_{x}\times 2L_{y}\), Eq. 7 describes two copies of the TPIM, each defined on an \(L_{x}\times L_{y}\) sublattice having
\[\text{GSD}_{V(F)}=4^{L_{x}+L_{y}-1} \tag{9}\]
corresponding to the subextensive \(Z_{2}\times Z_{2}\) symmetry-protected edge modes of TPIM. Indeed, one may choose to perturb the TC Hamiltonian only by either one of \(A_{p}\) or \(B_{s}\) to get only one copy of the TPIM, and a lattice of \(L_{x}\times L_{y}\) would give the same GSD. In fact, a closer inspection of Eq. 6 and Eq. 7 readily reveals that a single
Figure 3: (a) An illustration of the transformation in Eq. 4. Qubits in \(\Lambda\) lattice are denoted by filled dots, while qubits in the dual lattice \(\tilde{\Lambda}\) are denoted by hollow squares. In Eq. 4, the non-local transformation relates a qubit in \(\tilde{\Lambda}\) (represented by the orange square) to the set of qubits in \(\Lambda\) (denoted by the orange dots) that are located within the dashed triangular region immediately to its left (thus the symbol \(>\)). (b,c) Two copies of TPIM in the dual lattices after the transformation of FTC. (b) One copy of TPIM is defined on vertices in \(\tilde{\Lambda}\), denoted by \(V\). The two emergent sublattices \(V_{1},V_{2}\in V\) are denoted by filled orange and red squares, corresponding to the two sublattices shown in Fig. 1(right). (c) The second TPIM copy defined on centers of the faces in \(\tilde{\Lambda}\), denoted by \(F\). The two emergent sublattices \(F_{1},F_{2}\in F\) are labelled by filled green and blue squares.
TPIM can emerge from half of the FTC, whereby only one of the \(V\) or \(F\) sublattices emerges:
\[-\sum_{s}\left(A_{s}+\alpha B_{s}\right)\cong-\sum_{p}\left(B_{p}+\alpha A_{p} \right)\mapsto H_{\text{TPIM}} \tag{10}\]
In this form the perturbation analysis of anyon mobility is no longer viable; however, it helps in singling out a minimal stabilizer model that is mappable to the strong SSPT-ordered TPIM in Eq. 7.
Given the correspondence between SSPT states and the FTC model, it would be interesting to relate the order parameters in the two cases. While there is no local order parameter (local in the \(\tilde{\Lambda}\) basis) distinguishing the small-\(\alpha\) and large-\(\alpha\) phases of Eq. 7, a two-dimensional membrane order parameter \(\tilde{O}\) is known to exist [12]:
\[\tilde{O}=\left\langle\prod_{i\in\mathcal{C}}\tilde{\sigma}_{i}^{x}\prod_{i \in\mathcal{M}}\tilde{\sigma}_{i}^{z}\right\rangle \tag{11}\]
where \(\mathcal{C}\) refers to the four corners of a square membrane of one sublattice, and \(\mathcal{M}\) refers to all sites of the other sublattice which are enclosed by \(\mathcal{C}\). It immediately follows that \(\tilde{O}\mapsto O_{p,s}\), i.e. \(\tilde{O}\) can be mapped, according to Eq. 4, to the product of disconnected stabilizer operators as defined in Eq. 3, which serves as a natural order parameter of the FTC in Eq. 1. In particular, if \(\mathcal{C}\) in Eq. 11 is chosen to be the smallest possible membrane, i.e. that which contains only one site of the other sublattice, \(\tilde{O}\) would be mapped to a single \(A_{p}\) or \(B_{s}\) of the dual TC. Such a correspondence between the FTC and SSPT provides a lucid intuition for the topological nature of strong SSPT states: the membrane order in SSPT cluster states is related to Abelian anyon excitations of a TC; or conversely, one may start from a TC model, where dressing an anyon operator with non-local string operators gives rise to a membrane topological order in SSPT cluster states.
_Map to other strong SSPT models:-_ We have established a mapping from the FTC to two copies of the TPIM, as illustrated in Fig. 3(b) and (c). A natural question is whether this insight is useful in the construction of other SSPT models; the answer is affirmative. Indeed, many different 2D SSPT models have been proposed using symmetry defect homology theory [20]. Here we give an example of mapping another frustrated TC model onto the SSPT cluster model on a triangular lattice, whose Hamiltonian is generated by the seven-qubit interactions shown in Fig. 4(left). This model has strong SSPT order, as proved in Ref. [11]. Consider the following frustrated TC Hamiltonian, which differs from Eq. 1:
\[\begin{split}& H_{\text{FTC}_{2}}=-\sum_{s}A_{s}-\sum_{p}B_{p}\\ &+\alpha\sum_{p}\sigma_{1,p}^{z}\sigma_{2,p}^{x}\sigma_{3,p}^{z} \sigma_{4,p}^{x}+\alpha\sum_{s}\sigma_{1,s}^{x}\sigma_{2,s}^{z}\sigma_{3,s}^{ x}\sigma_{4,s}^{z}\end{split} \tag{12}\]
where the symbol \(\sigma_{i,s(p)}^{z(x)}\) denotes the \(i\)-th spin in a plaquette or star. The index increases clockwise, starting from the bottom site. The perturbation term, up to a unitary rotation, is equivalent to an antiferromagnetic Xu-Moore model [24]. It does not commute with \(H_{\text{TC}}\), hence the topological order is again frustrated. For clarity we rewrite the perturbation terms as
\[\sigma_{1,p}^{z}\sigma_{2,p}^{x}\sigma_{3,p}^{z}\sigma_{4,p}^{x}=-\sigma_{2,p}^{y}\sigma_{4,p}^{y}B_{p} \tag{13}\] \[\sigma_{1,s}^{x}\sigma_{2,s}^{z}\sigma_{3,s}^{x}\sigma_{4,s}^{z}=-\sigma_{2,s}^{y}\sigma_{4,s}^{y}A_{s} \tag{14}\]
Applying the transformation in Eq. 4 to the right hand side gives us the cluster model defined on a triangular lattice, as is illustrated in Fig. 4. Other SSPT models with higher-order interactions can be generated by the same token, which we do not enumerate in the paper.
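As a sanity check of the single-site identity \(\sigma^{a}\sigma^{b}=\delta_{ab}+i\epsilon^{abc}\sigma^{c}\) used in Eqs. (5) and (13), the four-qubit operator identities can be verified numerically with Kronecker products. The short Python snippet below is our own illustrative check and not part of the original text; the site ordering within a star or plaquette is an arbitrary labeling choice.

```python
import numpy as np

# Single-qubit Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def on_sites(paulis, n=4):
    """Tensor product placing the given single-qubit Paulis on the listed
    sites (0-indexed) of an n-qubit register, identity elsewhere."""
    ops = [I2] * n
    for site, p in paulis.items():
        ops[site] = p
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

# Eq. (5): B_s = A_s * prod_{i in s} sigma^y_i on a four-qubit star
A_s = on_sites({0: X, 1: X, 2: X, 3: X})
B_s = on_sites({0: Z, 1: Z, 2: Z, 3: Z})
Y_prod = on_sites({0: Y, 1: Y, 2: Y, 3: Y})
assert np.allclose(B_s, A_s @ Y_prod)

# Eq. (13): sigma^z_1 sigma^x_2 sigma^z_3 sigma^x_4 = - sigma^y_2 sigma^y_4 B_p
B_p = on_sites({0: Z, 1: Z, 2: Z, 3: Z})
lhs = on_sites({0: Z, 1: X, 2: Z, 3: X})
rhs = -on_sites({1: Y, 3: Y}) @ B_p
assert np.allclose(lhs, rhs)

print("Pauli identities of Eqs. (5) and (13) verified on four qubits.")
```

The same bookkeeping verifies Eq. (14).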
_Relation to the plaquette Ising model:-_ To fully elucidate the physical picture, it is important to investigate the relationship between the plaquette Ising model (PIM) [24; 35] and the TC. Notably, the TPIM is known to map onto two copies of the PIM, leading to the designation of the former as the topological PIM (TPIM) [12]. To see the relation between the TC, the PIM, and the TPIM, consider the following Hamiltonian, an extended TC with additional "Y-checks", which we shall refer to as the YTC:
\[H_{\text{YTC}}=-\sum_{s}A_{s}-\sum_{p}B_{p}-\alpha\sum_{s}Y_{s}-\alpha\sum_{p }Y_{p} \tag{15}\]
Here we defined \(Y_{s(p)}=\prod_{i\in s(p)}\sigma_{i}^{y}\), which commutes with neither \(A_{s}\) nor \(B_{p}\). At small \(\alpha\), single \(e\) and \(m\) anyons and their fermionic bound state \(\epsilon=e\times m\) are either completely immobile or partially mobile along lines, and only the bosonic bound states \(e_{1}\times e_{2}\) or \(m_{1}\times m_{2}\) are completely mobile, with dispersion again described by Eq. 2. This perturbation picture of anyon mobility is similar to the case of the FTC; however, as we will discuss, the YTC is mapped to the PIM, instead of the SSPT-ordered TPIM. Following the same map in Eq. 4, it is straightforward to see that Eq. 15 is mapped to four copies of the
Figure 4: The triangular cluster model with SSPT order. The generator of the SSPT model is the seven-point interaction shown on the left side and the blue shadow on the right side. It can be mapped onto the frustrated 2D TC model defined in Eq. 12, whereby a triangular lattice emerges in the dual lattice. Here for clarity we label the face-centered sites of \(\tilde{\Lambda}\) in red, and the vertex sites of \(\tilde{\Lambda}\) in orange.
following terms
\[\mathcal{H}_{\xi}=-\alpha\sum_{\tilde{p}_{\xi}}\prod_{i\in\tilde{p}_{\xi}}\tilde{\sigma}^{x}_{i}-\sum_{i\in\tilde{\Lambda}_{\xi}}\tilde{\sigma}^{z}_{i} \tag{16}\]
where the subscript \(\xi\) in \(\tilde{p}_{\xi}\) and \(\tilde{\Lambda}_{\xi}\) labels the four distinct sublattices \(\xi=1,\cdots,4\) of \(\tilde{\Lambda}\). This is the PIM, which also enjoys the duality \(\alpha\leftrightarrow\alpha^{-1}\) and thus a transition at \(\alpha=1\). Indeed, as discussed in [32; 33], a Zeeman field in \(\sigma^{y}\) would induce a single copy of the PIM after the transformation, with no sublattice structure in \(\tilde{\Lambda}\). Each sublattice Hamiltonian in Eq. 16 enjoys a subextensive number of \(Z_{2}\) symmetries generated by \(U^{\xi}_{m}=\prod_{n\in\text{cols}}\tilde{\sigma}^{z}_{m,n}\), \(U^{\xi}_{n}=\prod_{m\in\text{rows}}\tilde{\sigma}^{z}_{m,n}\), where \(m,n\) are row and column indices bounded by the lattice size. A direct count yields a GSD for each pair of replicas equivalent to that of a TPIM, assuming the same lattice size characterized by either \(V\) or \(F\).
We have explicated the relation between the YTC and the PIM, but what is the relation between the YTC and the FTC regarding the hidden SSPT phase in the latter? Consider two of the four sublattices, \(\xi=1,2\), and define a non-local transformation to new variables \(\tilde{q}\): \(\tilde{q}^{x}_{i_{1}}=\tilde{\sigma}^{x}_{i_{1}}\prod_{i_{2}\geq i_{1}}\tilde{\sigma}^{z}_{i_{2}}\), \(\tilde{q}^{x}_{i_{2}}=\tilde{\sigma}^{x}_{i_{2}}\prod_{i_{1}\geq i_{2}}\tilde{\sigma}^{z}_{i_{1}}\), \(\tilde{q}^{z}_{i_{1(2)}}=\tilde{\sigma}^{z}_{i_{1(2)}}\), where \(i_{1}\in\tilde{\Lambda}_{\xi_{1}}\) and \(i_{2}\in\tilde{\Lambda}_{\xi_{2}}\). It is straightforward to check that under this transformation, the combination of the two copies in Eq. 16 becomes the TPIM Hamiltonian in Eq. 7 in the \(\tilde{q}\) variables. Here one may immediately recognize that the YTC, as the mapped model of PIMs, can be converted to the FTC, as the mapped TPIMs, by simply multiplying all \(Y_{s}\) and \(Y_{p}\) operators in Eq. 15 by \(A_{s}\) and \(B_{p}\). Hence the non-local transformation between the TPIM and the PIM in the dual lattices is achieved locally by applying canonical TC stabilizers, further establishing the deep relation between SSPT phases and the TC topological order. A summary of these mappings is presented in Table 1.
_Discussion, conclusion and outlook:-_ Given the aforementioned mappings, we briefly comment on the differences between weak SSPT phases, which can be viewed as stacked or weakly coupled 1D SPT phases that retain their individual symmetries, and strong SSPT phases, which cannot be reduced to merely decoupled or weakly coupled 1D SPTs. It is useful to consider the hidden symmetry-breaking picture in SPT phases. Take for instance a standard 1D SPT with \(Z_{2}\times Z_{2}\) symmetry and its corresponding non-local string order parameter. It is related to the symmetry-breaking picture of a hidden antiferromagnetic model after a non-local transformation [22; 23]. On the other hand, under the new mapping we introduced in Eq. 4, the TPIM can be mapped to two competing topological TC models in Eq. 1; or equivalently, a half-FTC stabilizer model, corresponding to a set of mutually frustrating TC stabilizers; and the membrane order parameter associated with the TPIM becomes a product of stabilizer operators in the TC, underlining its intrinsic many-body nature as opposed to weak SSPT systems. This also makes explicit the fact that the SSPT-ordered TPIM model cannot be continuously deformed into a trivial phase in the thermodynamic limit, and the self-duality of the TPIM is made manifestly clear in terms of the self-dual frustrated TC model. Interestingly, such a connection exists not only between the TPIM and the TC model, but also holds for other SSPT models such as the triangular cluster model, which can emerge by inducing a different frustration on the TC, hinting at a subtle linkage between topological order and strong SSPT order.
In summary, we have constructed exact maps connecting frustrated TC models to different SSPT-ordered models, showing that the novel SSPT phases of matter can be understood in terms of the simplest topological order in the TC model. Given the rapidly evolving theories and experiments in quantum stabilizers and measurement-induced dynamics in various platforms [36; 37; 38; 39], it is both theoretically and practically appealing that one may use frustrated TC-stabilizer measurements to simulate SSPT phases and their fractonic dynamics, and construct equivalences of different SSPT models as dressed TC models. Conversely, one may also relate the engineering of quantum error correcting systems to SPT and SSPT phases. Since the TPIM was originally proposed as a pathway to the one-way quantum computer [40; 41], it is intriguing to relate measurement-based quantum computation to the quantum error correction formalism. We believe that the above mapping, which relates SSPT phases to TC stabilizers, is suggestive enough to warrant further studies.
_Acknowledgement:-_ S. Feng acknowledges support from the Presidential Fellowship at The Ohio State University. S. Feng also thanks the Boulder Summer School 2023 at the University of Colorado Boulder for the enlightening lectures and discussions. Further gratitude is extended to X. Yang, N. Trivedi, A. Agarwala, and S. Bhattacharjee for their discussions and collaboration on related topics.
|
2302.00662 | Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous
Unobserved Confounders | Offline reinforcement learning is important in domains such as medicine,
economics, and e-commerce where online experimentation is costly, dangerous or
unethical, and where the true model is unknown. However, most methods assume
all covariates used in the behavior policy's action decisions are observed.
Though this assumption, sequential ignorability/unconfoundedness, likely does
not hold in observational data, most of the data that accounts for selection
into treatment may be observed, motivating sensitivity analysis. We study
robust policy evaluation and policy optimization in the presence of
sequentially-exogenous unobserved confounders under a sensitivity model. We
propose and analyze orthogonalized robust fitted-Q-iteration that uses
closed-form solutions of the robust Bellman operator to derive a loss
minimization problem for the robust Q function, and adds a bias-correction to
quantile estimation. Our algorithm enjoys the computational ease of
fitted-Q-iteration and statistical improvements (reduced dependence on quantile
estimation error) from orthogonalization. We provide sample complexity bounds,
insights, and show effectiveness both in simulations and on real-world
longitudinal healthcare data of treating sepsis. In particular, our model of
sequential unobserved confounders yields an online Markov decision process,
rather than partially observed Markov decision process: we illustrate how this
can enable warm-starting optimistic reinforcement learning algorithms with
valid robust bounds from observational data. | David Bruns-Smith, Angela Zhou | 2023-02-01T18:40:53Z | http://arxiv.org/abs/2302.00662v2 | # Robust Fitted-Q-Evaluation and Iteration under Sequentially Exogenous Unobserved Confounders
###### Abstract
Offline reinforcement learning is important in domains such as medicine, economics, and e-commerce where online experimentation is costly, dangerous or unethical, and where the true model is unknown. However, most methods assume all covariates used in the behavior policy's action decisions are observed. This untestable assumption may be incorrect. We study robust policy evaluation and policy optimization in the presence of unobserved confounders. We assume the extent of possible unobserved confounding can be bounded by a sensitivity model, and that the unobserved confounders are sequentially exogenous. We propose and analyze an (orthogonalized) robust fitted-Q-iteration that uses closed-form solutions of the robust Bellman operator to derive a loss minimization problem for the robust Q function. Our algorithm enjoys the computational ease of fitted-Q-iteration and statistical improvements (reduced dependence on quantile estimation error) from orthogonalization. We provide sample complexity bounds, insights, and show effectiveness in simulations.
## 1 Introduction
Sequential decision-making problems in medicine, economics, and e-commerce require the use of historical observational data for when online experimentation is costly, dangerous or unethical. But, precisely the same properties of a problem setting that require offline reinforcement learning also imply that the collected datasets may be vulnerable to unobserved variables that impact both the data collection policy and the outcomes. In general, these _unobserved confounders_ make unbiased estimation of the causal effect of the actions on the next state or reward impossible.
Our work develops practical sensitivity analysis for unobserved confounding under the structural restriction of memoryless unobserved confounders.
We briefly describe previous work most closely related to the robust policy evaluation and optimization approach, based on assumed bounds on the extent of possible unobserved confounding, that we develop here. An extensive line of work on sensitivity analysis in causal inference posits restrictions (an ambiguity set) on the extent of possible violations of sequential ignorability. We leverage the "marginal sensitivity model" of Tan (2012), which is a variant of a super-population version of Rosenbaum's sensitivity analysis for observational studies (Rosenbaum, 2004); these models have been used for offline single-timestep policy optimization as well (Aronow and Lee, 2013; Miratrix et al., 2018; Zhao et al., 2019; Yadlowsky et al., 2018; Kallus et al., 2018; Kallus and Zhou, 2020). In the sequential setting, Namkoong et al. (2020) use the Rosenbaum model and an additional assumption of _single-timestep_ unobserved confounding in finite-horizon off-policy evaluation. Kallus and Zhou (2020) partially identify the estimand of Liu et al. (2018) under an assumption of memoryless unobserved confounding and the MSM. Bruns-Smith (2021) applies the memoryless unobserved confounder model of Kallus and Zhou (2020) to the finite-horizon setting of Namkoong et al. (2020) and derives a robust fitted-Q evaluation, but the approach is essentially limited to a tabular setting with a small state/action space.
Fitted-Q evaluation/iteration is a well-studied, empirically successful, and practically appealing paradigm in offline reinforcement learning (RL), in particular for its complex function approximation via squared loss
minimization (Fu et al., 2021; Le et al., 2019). In this work we build on the implicitly tabular robust FQE strategy of Bruns-Smith (2021) to handle complex function approximation in policy optimization. We leverage recent advances in sensitivity analysis in non-sequential settings (Dorn et al., 2021; Dorn and Guo, 2022) which characterize the conditional robust-optimal solution as a conditional expected shortfall (Shapiro et al., 2021). Building on these single-timestep insights, we develop an orthogonalized robust fitted-Q-iteration via an orthogonalization of (Olma, 2021).
The contributions of our work are as follows. We leverage structural insights from recent work in sensitivity analysis to make robust FQE/FQI under sequentially exogenous confounders _computationally_ scalable via function approximation, as well as _statistically_ improved by orthogonalized estimation of the conditional expected shortfall (via leveraging the closed form solution of the robust Bellman operator). We derive a new _theoretical_ analysis, giving sample complexity guarantees for orthogonalized robust FQI based on the Rademacher complexity of conditional quantile and robust \(Q\) functions. Overall, these improvements help scale up robust policy learning to practical settings. We demonstrate effectiveness and improvements from orthogonalization in computational experiments.
## 2 Preliminaries
### Problem Setup with Unobserved State
We consider a Markov Decision Process comprised of a tuple \((\mathcal{S}\times\mathcal{U},\mathcal{A},R,P,\chi,H,\gamma).\) To avoid measurability issues, we assume that \(\mathcal{S}\) and \(\mathcal{U}\) are discrete but possibly countably infinite sets. We assume \(\mathcal{A}\) is finite. Let \(\mathcal{M}(X)\) denote probability measures on a set \(X\). \(P\) is the set of time \(t\) transition functions, \(P_{t}:\mathcal{S}\times\mathcal{U}\times\mathcal{A}\rightarrow\mathcal{M}( \mathcal{S}\times\mathcal{U})\); \(R\) is the set of time \(t\) reward maps, \(R_{t}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\); \(\chi:\mathcal{M}(\mathcal{S}\times\mathcal{U})\) is the initial state distribution; \(H\) is the horizon and \(\gamma\leq 1\) is the discount factor.
A policy, \(\pi\), is a set of maps \(\pi_{t}:\mathcal{S}\times\mathcal{U}\rightarrow\mathcal{M}(\mathcal{A})\). A trajectory of the MDP running policy \(\pi\) defines the random variables, \(S_{0},U_{0}\sim\chi\), and for all \(t\), \(A_{t}\sim\pi_{t}(S_{t},U_{t})\), \(S_{t+1},U_{t+1}\sim P_{t}(S_{t},U_{t},A_{t})\). We will write \(\mathbb{P}_{\pi}\) and \(\mathbb{E}_{\pi}\) to denote probabilities and expectations of these random variables. In a slight abuse of notation, we will write \(\pi_{t}(A_{t}|S_{t},U_{t})\) to denote the probability of \(A_{t}\) under the distribution \(\pi_{t}(S_{t},U_{t})\).
### Defining an MDP on Observables
In this paper, we consider the setting where the \(\mathcal{U}\) part of the state is _unobserved_. We will study policy evaluation and optimization for policies that are only a function of the observed state, \(S_{t}\). Note that the practitioner specifies the reward as a function of only the observed state \(\mathcal{S}\)1. Define the observed-state Q and value functions:
Footnote 1: This is essentially without loss of generality since the reward depends on \(S_{t+1}\) and \(S_{t+1}\) depends on \(U_{t}\).
\[Q_{t}^{\pi}(s,a) \coloneqq\mathbb{E}_{\pi}\Bigg{[}\sum_{j=t}^{H-1}\gamma^{j-t}R_{j}\Bigg{|}S_{t}=s,A_{t}=a\Bigg{]}\] \[V_{t}^{\pi}(s) \coloneqq\mathbb{E}_{\pi}\Bigg{[}\sum_{j=t}^{H-1}\gamma^{j-t}R_{j}\Bigg{|}S_{t}=s\Bigg{]}\]
where here and below we write \(R_{t}\coloneqq R_{t}(S_{t},A_{t},S_{t+1})\) when clear from context.
In an offline setting with data collected via an arbitrary policy \(\pi\), the dependence of both the transitions and the policy on \(U_{t}\) leads to so-called _unobserved confounding_. However, there is a second _distinct_ concern: conditional on observables alone the underlying formulation changes from a tractable MDP into an intractable POMDP problem. For example, \(\mathbb{P}_{\pi}(S_{t+1}|S_{t},A_{t})\) will vary with the choice of \(\pi\), meaning that we cannot apply standard RL techniques on the observed state alone to estimate \(Q_{t}^{\pi}\), even if there were no confounding.
Before proceeding, we isolate the unobserved confounding concern from the POMDP concern by introducing the following assumption:
**Assumption 1** (Memoryless unobserved confounders): _The unobserved state is drawn independently each period from \(U_{t}\sim P_{t}^{u}(S_{t})\), where \(P_{t}^{u}:\mathcal{S}\rightarrow\mathcal{M}(\mathcal{U})\) is some fixed set of maps._
**Lemma 1** (Observed State Markov Property): _For any \(t\), let \(H_{t}=\{S_{j},A_{j}:j\leq t\}\) be the history of the observed state and actions up to time \(t\). Given Assumption 1, for all \(t,s,a,h\),_
\[\mathbb{P}_{\pi}(S_{t+1}|S_{t}=s,A_{t}=a,H_{t-1}=h) =\mathbb{P}_{\pi}(S_{t+1}|S_{t}=s,A_{t}=a)\] \[\mathbb{P}_{\pi}(A_{t}|S_{t}=s,H_{t-1}=h) =\mathbb{P}_{\pi}(A_{t}|S_{t}=s)\]
Lemma 1 is an implication of Assumption 1 that can be tested from observed states and actions alone. As we will demonstrate, Assumption 1 rules out POMDP concerns but does not rule out bias from unobserved confounding. Therefore, Lemma 1 provides an observable test to separate these two settings. It is possible to use Lemma 1 directly as the core assumption at the cost of substantially more complexity. See the Appendix for discussion. Under Assumption 1, the setting with a policy \(\pi^{e}\) that only depends on the observed state is equivalent to a marginal MDP over the observed state alone:
**Proposition 1** (Marginal MDP): _Let \(\chi^{\text{marg}}\) be the marginal distribution of \(\chi\) over the observed state. Given Assumption 1, there exists \(P_{t}^{\text{marg}}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{M}(\mathcal{S})\) such that for any policies \(\pi^{e}\) and \(\pi^{e^{\prime}}\) that do not depend on \(U_{t}\) and for all \(s,a,t\): \(\mathbb{P}_{\pi^{e}}(S_{t+1}|S_{t}=s,A_{t}=a)=\mathbb{P}_{\pi^{e^{\prime}}}(S_{t+1}|S_{t}=s,A_{t}=a)=P_{t}^{\text{marg}}(S_{t+1}|s,a)\). Furthermore, we can define a new MDP, \((\mathcal{S},\mathcal{A},R,P^{\text{marg}},\chi^{\text{marg}},H,\gamma)\), with probabilities under policy \(\pi^{e}\) denoted \(\mathbb{P}_{\pi^{e}}^{\text{marg}}\) such that \(\forall\pi^{e}\),_
\[\mathbb{P}_{\pi^{e}}^{\text{marg}}(S_{0},A_{0},...S_{H},A_{H})=\mathbb{P}_{\pi ^{e}}(S_{0},A_{0},...S_{H},A_{H}).\]
Proposition 1 shows that under Assumption 1, policy evaluation and optimization over observed-state policies \(\pi^{e}\) are MDP problems in the observed state. To simplify notation, in what follows we will drop the superscripts and write \(P_{t}(s,a)\) to refer to the transition probabilities in the marginal MDP. Likewise, in what follows \(\pi^{e}\) will not depend on the unobserved state and we will write \(\pi^{e}(s)\) regardless of whether we are considering the full MDP or the marginal MDP. Similarly, we will write \(Q_{t}^{\pi^{e}}(s,a)\) and \(V_{t}^{\pi^{e}}(s)\) to refer to the Q and value functions in both MDPs. An optimal policy \(\pi^{*}\) among all \(\pi^{e}\) is guaranteed to exist by applying the results in Puterman (2014) to the marginal MDP. Denote the corresponding optimal value \(V_{t}^{*}(s)\coloneqq\sup_{\pi^{e}}V_{t}^{\pi^{e}}(s),\forall s,t\).
### Offline RL and Unobserved Confounding
We study offline policy evaluation/optimization using data collected under a policy \(\pi^{b}\) that depends on \(U_{t}\). Define the behavioral marginal transition probabilities \(P_{t}^{b}(s,a)\coloneqq\mathbb{P}_{\pi^{b}}(S_{t+1}|S_{t}=s,A_{t}=a)\). The key challenge in this setting is unobserved confounding. Because both the underlying transitions and the behavior policy depend on the unobserved variable \(U_{t}\), we have \(P_{t}^{b}(s,a)\neq P_{t}(s,a)\).
Define \(\pi_{t}^{b}(A_{t}|S_{t})\coloneqq\mathbb{P}_{\pi^{b}}(A_{t}|S_{t})\). Then the following proposition summarizes the bias from confounding:
**Proposition 2** (Confounding for Regression): _Given Assumption 1, for any \(s,a,\pi^{b},\) and any function \(f:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\),_
\[\mathbb{E}_{\pi^{e}}[f(S_{t},A_{t},S_{t+1})|S_{t}=s,A_{t}=a]=\mathbb{E}_{\pi^{ b}}\left[\frac{\pi_{t}^{b}(A_{t}|S_{t})}{\pi_{t}^{b}(A_{t}|S_{t},U_{t})}f(S_{t},A_{ t},S_{t+1})\middle|S_{t}=s,A_{t}=a\right].\]
In particular, letting \(f\) be an indicator for the next state gives the difference between \(P_{t}(s,a)\) and \(P_{t}^{b}(s,a)\).
Notice that the bias cannot be corrected because \(\pi_{t}^{b}(A_{t}|S_{t})/\pi_{t}^{b}(A_{t}|S_{t},U_{t})\) is unobserved. Furthermore, if \(\pi_{t}^{b}(A_{t}|S_{t},U_{t})\) can be arbitrary, then our estimates using data collected under \(\pi^{b}\) can be arbitrarily biased. To make progress, we follow the sensitivity analysis literature in causal inference and place a parameterized restriction on the difference between \(\pi_{t}^{b}(A_{t}|S_{t})\) and \(\pi_{t}^{b}(A_{t}|S_{t},U_{t})\):
**Assumption 2** (Marginal Sensitivity Model): _There exists \(\Lambda\) such that for all \(t,s\in\mathcal{S},u\in\mathcal{U},a\in\mathcal{A}\),_
\[\Lambda^{-1}\leq\left(\frac{\pi_{t}^{b}(a\mid s,u)}{1-\pi_{t}^{b}(a\mid s,u)} \right)/\left(\frac{\pi_{t}^{b}(a\mid s)}{1-\pi_{t}^{b}(a\mid s)}\right)\leq\Lambda. \tag{1}\]
Choosing \(\Lambda\) should be informed by domain knowledge. Using Assumption 2, we can express Proposition 2 as a weighted regression with bounded weights. Define the random variable \(W_{t}\coloneqq\frac{\pi_{t}^{b}(A_{t}|S_{t})}{\pi_{t}^{b}(A_{t}|S_{t},U_{t})}\). Then Assumption 2 implies the following bounds almost everywhere:
\[\alpha_{t}(S,A) \leq W_{t}\leq\beta_{t}(S,A), \tag{2}\] \[\alpha_{t}(S,A) \coloneqq\pi_{t}^{b}(A_{t}|S_{t})+\frac{1}{\Lambda}(1-\pi_{t}^{b }(A_{t}|S_{t})),\] \[\beta_{t}(S,A) \coloneqq\pi_{t}^{b}(A_{t}|S_{t})+\Lambda(1-\pi_{t}^{b}(A_{t}|S_{ t})).\]
Let \(MSM_{t}(\Lambda)\) denote the set of random variables \(\bar{W}_{t}\) that satisfy Equation (2). The worst-case value for the confounded regression can be computed by a linear program.
**Proposition 3**: _Given Assumption 1 and Assumption 2, the smallest value of \(\mathbb{E}_{\pi^{*}}[f(S_{t},A_{t},S_{t+1})|S_{t}=s,A_{t}=a]\) consistent with the observed data distribution is:_
\[\psi_{f,t}^{-}(s,a)\coloneqq\min_{\bar{W}_{t}\in MSM_{t}(\Lambda)} \mathbb{E}_{\pi^{b}}\left[\bar{W}_{t}f(S_{t},A_{t},S_{t+1})|S_{t}=s,A_{t}=a\right]\] _s.t._ \[\mathbb{E}_{\pi^{b}}\left[\bar{W}_{t}|S_{t}=s,A_{t}=a\right]=1.\]
_Similarly, the largest value, \(\psi_{f,t}^{+}(s,a)\), replaces the \(\min\) with a max._
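To make the optimization concrete, the following Python sketch (ours, not from the original paper) solves the empirical analogue of the linear program in Proposition 3 for a single \((s,a)\) cell with `scipy.optimize.linprog`, using the weight bounds \(\alpha_{t},\beta_{t}\) of Equation (2); the nominal propensity is treated as known here, whereas in practice it must be estimated.

```python
import numpy as np
from scipy.optimize import linprog

def msm_weight_bounds(pi_b, lam):
    """Bounds [alpha, beta] on W_t = pi_b(A|S) / pi_b(A|S,U) implied by Eq. (2)."""
    alpha = pi_b + (1.0 / lam) * (1.0 - pi_b)
    beta = pi_b + lam * (1.0 - pi_b)
    return alpha, beta

def worst_case_mean_lp(y, pi_b, lam, largest=False):
    """Empirical analogue of Proposition 3 for one (s, a) cell:
    optimize (1/n) sum_i w_i y_i over w_i in [alpha_i, beta_i]
    subject to (1/n) sum_i w_i = 1."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    alpha, beta = msm_weight_bounds(np.broadcast_to(pi_b, (n,)), lam)
    c = y / n if not largest else -y / n
    res = linprog(
        c,
        A_eq=np.ones((1, n)) / n,
        b_eq=np.array([1.0]),
        bounds=list(zip(alpha, beta)),
        method="highs",
    )
    return res.fun if not largest else -res.fun

# Toy example: one cell with 500 observed outcomes and a constant propensity.
rng = np.random.default_rng(0)
y = rng.normal(size=500)
print(worst_case_mean_lp(y, pi_b=0.5, lam=2.0),
      worst_case_mean_lp(y, pi_b=0.5, lam=2.0, largest=True))
```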
We can use this linear program to derive uncertainty sets for the unknown marginal transition probabilities. We apply Proposition 3 to \(f_{s^{\prime}}(S,A,S^{\prime})\coloneqq\mathbb{I}(S^{\prime}=s^{\prime})\) for each \(s^{\prime}\) to get \(P_{t}\in\mathcal{P}_{t}\), where
\[\mathcal{P}_{t}\coloneqq\left\{\bar{P}:\mathcal{S}\times\mathcal{A}\to \mathcal{M}(\mathcal{S}):\bar{P}(s^{\prime}|s,a)\in[\psi_{f_{s^{\prime}},t}^{- },\psi_{f_{s^{\prime}},t}^{+}]\right\}.\]
We let \(\mathcal{P}\) denote the set of possible transition probability maps for all \(t\).
### Robust Estimands and Bellman Operators
We have now shown that under Assumptions 1 and 2, we have a marginal MDP such that any data-generating policy \(\pi^{b}\) creates an uncertainty set \(\mathcal{P}\) for the confounded marginal transition probabilities. While point estimation is not possible, we can find the worst-case values of \(Q_{t}^{\pi^{e}}\) and \(V_{t}^{\pi^{e}}\) over the uncertainty set. This is called a Robust Markov Decision Process (RMDP) problem (see, e.g., Iyengar (2005a)), and we denote the corresponding robust Q and value functions \(\bar{Q}_{t}^{\pi^{e}}\) and \(\bar{V}_{t}^{\pi^{e}}\).
Define the following operators:
**Definition 1** (Robust Bellman Operators): _For any function \(g:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\),_
\[(\bar{\mathcal{T}}_{t}^{\pi^{e}}g)(s,a) \coloneqq\inf_{\bar{P}_{t}\in\mathcal{P}_{t}}\mathbb{E}_{S^{\prime}\sim\bar{P}_{t}(\cdot|s,a)}\Big{[}R_{t}+\gamma g(S^{\prime},\pi_{t+1}^{e})\Big{]}, \tag{3}\] \[(\bar{\mathcal{T}}_{t}^{*}g)(s,a) \coloneqq\inf_{\bar{P}_{t}\in\mathcal{P}_{t}}\mathbb{E}_{S^{\prime}\sim\bar{P}_{t}(\cdot|s,a)}\Big{[}R_{t}+\gamma\max_{a^{\prime}}[g(S^{\prime},a^{\prime})]\Big{]}, \tag{4}\]
_where \(g(S^{\prime},\pi_{t+1}^{e})\coloneqq\mathbb{E}_{A^{\prime}\sim\pi_{t+1}^{e}( S^{\prime})}[g(S^{\prime},A^{\prime})]\)._
Then using the results in Iyengar (2005b), we can define a typical robust Bellman equation:
**Proposition 4** (Robust Bellman Equation): _Let \(|\mathcal{A}|=2\) and let Assumptions 1 and 2 hold. Then applying the results in Iyengar (2005b) gives_
\[\bar{Q}_{t}^{\pi^{e}}(s,a) =\bar{\mathcal{T}}_{t}^{\pi^{e}}\bar{Q}_{t+1}^{\pi^{e}}(s,a),\] \[\bar{V}_{t}^{\pi^{e}}(s) =\mathbb{E}_{A\sim\pi_{t}^{e}(s)}[\bar{Q}_{t}^{\pi^{e}}(s,A)],\] \[\bar{Q}_{t}^{*}(s,a) =\bar{\mathcal{T}}_{t}^{*}\bar{Q}_{t+1}^{*}(s,a),\] \[\bar{V}_{t}^{*}(s) =\mathbb{E}_{A\sim\bar{\pi}_{t}^{*}(s)}[\bar{Q}_{t}^{*}(s,A)],\]
_where \(\bar{Q}_{t}^{*}\) and \(\bar{V}_{t}^{*}\) are the optimal robust Q and value function achieved by the policy \(\bar{\pi}^{*}\)._
When \(|\mathcal{A}|>2\), there is an important subtlety to note. The \(W_{t}\) that would achieve the minimum in Proposition 3 does not enforce the density constraint on \(\pi_{t}^{b}\)_across actions_. In the special case where there are only two actions, Dorn et al. (2021) show that the different minima _are_ simultaneously achievable, and therefore the RMDP is \(s,a\)_-rectangular_ and the results in Iyengar (2005b) apply directly. For \(|\mathcal{A}|>2\), the infimum in Equation (3) is _not_ generally simultaneously realizable (see the Appendix for a counter-example). Nonetheless, the robust Bellman operator corresponds to an \(s,a\)-rectangular relaxation of the RMDP, and Proposition 4 will hold with lower bounds instead of equalities.
## 3 Method
Our estimation strategy for robust policy optimization is a robust analog of Fitted-Q Iteration (FQI). Non-robust FQI estimates the non-robust Bellman optimality operator using least squares regression. We extend this approach to the robust Bellman operator. First in Section 3.1, we show that the robust Bellman operator admits a closed form as a conditional expectation of observables. Then in Section 3.2, we incorporate this insight into an orthogonalized confounding-robust FQI algorithm with function approximation. An exactly analagous procedure applies to robust FQE.
### Closed-Form for the Robust Bellman Operator
In this section, we use recent results to derive a _closed-form_ expression for the infimum in Proposition 3 in order to derive a feasible algorithm for continuous state spaces. This is an application of the results in Rockafellar et al. (2000) and Dorn et al. (2021).
Define \(\tau\coloneqq\Lambda/(1+\Lambda)\). For any \(s,a\), we define the conditional quantile:
\[Z_{t}^{1-\tau}(g\mid s,a)\coloneqq\inf_{z}\{z\colon P_{t}^{b}(g\leq z\mid S_{t}=s,A_{t}=a)\geq 1-\tau\}.\]
When clear from context, we use the shorthands \(Z_{t}^{1-\tau}\coloneqq Z_{t}^{1-\tau}(Y_{t}\mid S_{t},A_{t}),\alpha_{t} \coloneqq\alpha_{t}(S_{t},A_{t}),\beta_{t}\coloneqq\beta_{t}(S_{t},A_{t})\).
**Proposition 5**: _Let \(f:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\to\mathbb{R}\) be any function and define the random variable \(Y_{t}\coloneqq f(S_{t},A_{t},S_{t+1})\). Then:_
\[\psi_{f,t}^{-}(s,a) =\mathbb{E}_{\pi^{b}}\left[\alpha_{t}Y_{t}+(1-\alpha_{t})\frac{Y_ {t}\mathbb{I}(Y_{t}\leq Z_{t}^{1-\tau})}{1-\tau}\middle|S_{t}=s,A_{t}=a\right].\]
The worst-case value is a convex combination of the conditional mean and the conditional value-at-risk (CVaR). We provide a derivation here for intuition. Dorn et al. (2021) show that the linear program in Proposition 3 has a closed form solution corresponding to adversarial weights:
\[\psi_{f,t}^{-}(s,a) =\mathbb{E}_{\pi^{b}}\left[W_{t}^{*}Y_{t}|S_{t}=s,A_{t}=a\right]\] \[\text{where }W_{t}^{*} =\alpha_{t}\mathbb{I}(Y_{t}>Z_{t}^{1-\tau})+\beta_{t}\mathbb{I} (Y_{t}\leq Z_{t}^{1-\tau}).\]
We can derive the form in Proposition 5 with a few additional transformations. Define:
\[\mu_{t}(s,a) \coloneqq\mathbb{E}_{\pi^{b}}[Y_{t}|S_{t}=s,A_{t}=a],\] \[\text{CVaR}_{t}^{1-\tau}(s,a) \coloneqq\frac{1}{1-\tau}\mathbb{E}_{\pi^{b}}\left[Y_{t}\mathbb{I }(Y_{t}<Z_{t}^{1-\tau})|S_{t}=s,A_{t}=a\right].\]
We use the following identity for any random variables \(Y\) and \(X\):
\[\mathbb{E}[Y|X]=\mathbb{E}[Y\mathbb{I}(Y>Z^{1-\tau}(Y|X))|X]+ \mathbb{E}[Y\mathbb{I}(Y\leq Z^{1-\tau}(Y|X))|X]\]
to get:
\[\psi_{f,t}^{-}(s,a)=\alpha_{t}\mu_{t}(s,a)+(\beta_{t}-\alpha_{t}) (1-\tau)\text{CVaR}_{t}^{1-\tau}(s,a),\]
and some algebra gives \((\beta_{t}-\alpha_{t})(1-\tau)=(1-\alpha_{t})\), which gives the desired convex combination.
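In code, the closed form can be evaluated by plugging in an empirical \((1-\tau)\)-quantile, avoiding any optimization; the following sketch (again ours, for a single \((s,a)\) cell) mirrors Proposition 5 and should agree closely with the linear-programming value on the same sample.

```python
import numpy as np

def worst_case_mean_closed_form(y, pi_b, lam):
    """Plug-in version of Proposition 5 for one (s, a) cell:
    psi^- = alpha * E[Y] + (1 - alpha) * E[Y 1{Y <= Z}] / (1 - tau),
    with tau = Lambda / (1 + Lambda) and Z an empirical (1 - tau)-quantile."""
    y = np.asarray(y, dtype=float)
    tau = lam / (1.0 + lam)
    alpha = pi_b + (1.0 / lam) * (1.0 - pi_b)
    z = np.quantile(y, 1.0 - tau)                     # estimate of Z^{1-tau}
    shortfall = np.mean(y * (y <= z)) / (1.0 - tau)   # expected-shortfall term
    return alpha * np.mean(y) + (1.0 - alpha) * shortfall

rng = np.random.default_rng(0)
y = rng.normal(size=500)
print(worst_case_mean_closed_form(y, pi_b=0.5, lam=2.0))
```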
**Corollary 1** (Robust Bellman Operator Closed Form): _For any \(g:\mathcal{S}\times\mathcal{A}\to\mathbb{R}\), consider the case where:_
\[Y_{t}\coloneqq R_{t}+\gamma\max_{a^{\prime}}[g(S_{t+1},a^{\prime})].\]
_Then,_
\[(\bar{\mathcal{T}}_{t}^{*}g)(s,a)=\mathbb{E}_{\pi^{b}}\left[\alpha_{t}Y_{t}+(1- \alpha_{t})\frac{Y_{t}\mathbb{I}(Y_{t}\leq Z_{t}^{1-\tau})}{1-\tau}\bigg{|}S_ {t}=s,A_{t}=a\right],\]
_and likewise for \(\bar{\mathcal{T}}_{t}^{\pi^{e}}\) with \(\max_{a^{\prime}}\) replaced with \(\mathbb{E}_{A^{\prime}\sim\pi_{t}^{e}(S_{t+1})}\)._
**Remark 1** (Robust off-policy evaluation): _This work focuses on policy optimization and evaluation via closed-form solution of the robust Bellman operator and fitted-Q-iteration. In Appendix E we discuss an alternative analogous estimator of the policy value analogous to Jiang and Li (2016); Thomas et al. (2015); Namkoong et al. (2020) and differences from our setting._
### Estimation: Robust FQI with function approximation
Now assume that we observe \(n\) trajectories of length \(H\), \(\mathcal{D}_{\pi^{b}}\coloneqq\{(s_{t}^{(i)},a_{t}^{(i)},s_{t+1}^{(i)})_{t=0}^{H-1}\}_{i=1}^{n}\) collected from the underlying MDP using an unknown behavior policy \(\pi^{b}\) that depends on the unobserved state. We will write \(\hat{\mathbb{E}}_{t}\) to denote a sample average of the \(n\) data points collected at time \(t\), e.g. \(\hat{\mathbb{E}}_{t}[f(S_{t},A_{t},S_{t+1})]\coloneqq\frac{1}{n}\sum_{i=1}^{n}f(s_{t}^{(i)},a_{t}^{(i)},s_{t+1}^{(i)})\).
```
1: Estimate the marginal behavior policy \(\pi^{b}_{t}(a|s)\).
2: Compute \(\{\alpha_{t}(s_{t}^{(i)},a_{t}^{(i)})\}_{i=1}^{n}\) as in Equation (2).
3: Initialize \(\hat{\overline{Q}}_{H}=0\).
4:for\(t=H-1,\ldots,0\)do
5: Compute the nominal outcomes \(\{Y_{t}^{(i)}(\hat{\overline{Q}}_{t+1})\}_{i=1}^{n}\) as in Equation (5).
6: Fit \(\hat{Z}_{t}^{1-\tau}\) the \((1-\tau)\)th conditional quantile of the outcomes \(Y_{t}^{(i)}\).
7: Compute the pseudo-outcomes \(\{\tilde{Y}_{t}^{(i)}(\hat{Z}_{t}^{1-\tau},\hat{\overline{Q}}_{t+1})\}_{i=1}^{n}\) as in Equation (6).
8: Fit \(\hat{\overline{Q}}_{t}\) via least-squares regression of \(\hat{Y}_{t}^{(i)}\) against \((s_{t}^{(i)},a_{t}^{(i)})\).
9: Compute \(\pi_{t}^{*}(s)\in\arg\max_{a}\hat{\overline{Q}}_{t}(s,a)\).
10:endfor
```
**Algorithm 1** Confounding-Robust Fitted-Q-Iteration
**Nominal FQI.** FQI successively forms approximations \(\hat{Q}_{t}\) at each time step by using regression to estimate the conditional expectation (with respect to states and actions) of the Bellman residual,
\[Y_{t}(Q) \coloneqq R_{t}+\gamma\max_{a^{\prime}}\left[Q(S_{t+1},a^{\prime })\right], \tag{5}\] \[\hat{Q}_{t} \in\arg\min_{q_{t}}\hat{\mathbb{E}}_{t}[(Y_{t}(\hat{Q}_{t+1})-q_{ t}(S_{t},A_{t}))^{2}].\]
This regression step is a stochastic approximation of the Bellman optimality operator: replacing the expectation over the next-state transition with a stochastic sample thereof (realized from data).
**An orthogonalized robust FQE.** Algorithm 1 describes the algorithm for robust FQI. We iteratively fit \(\hat{\overline{Q}}_{t}\), then find the policy that is greedy with respect to the estimated robust \(Q\) function. We now describe the \(Q\)-fitting step in more detail. We form an empirical approximation of the robust Bellman operators in Equation (3) by regressing the pseudo-outcome in Corollary 1 against \(S\) and \(A\) in the data.
The regression target depends on the conditional quantile \(Z_{t}^{1-\tau}\), a _nuisance function_ that must be estimated but is not our substantive target of interest. To avoid transferring biased first-stage estimation error of \(Z_{t}^{1-\tau}\) to the Q-function, we introduce orthogonalization. An important literature on Neyman-orthogonality (sometimes called double/debiased machine learning, and related to semiparametric statistics) derives bias adjustments (Chernozhukov et al., 2018; Newey, 1994). (See Appendix A for more related work). In particular, we apply an orthogonalization of Olma (2021), which considers a truncated conditional expectation \(m(\eta,x)=\frac{1}{1-\tau}\mathbb{E}[Y\mathbb{I}\{Y\leq Z^{1-\tau}\}\mid X=x]\) and they show that
\[\tfrac{1}{1-\tau}\mathbb{E}[Y\mathbb{I}\{Y\leq Z^{1-\tau}\}-Z^{1-\tau}( \mathbb{I}\{Y\leq Z^{1-\tau}\}-(1-\tau))\mid X]\]
is Neyman-orthogonal with respect to error in \(Z^{1-\tau}\). We apply this orthogonal adjustment to Corollary 1 to obtain our regression target for robust FQE:
\[\tilde{Y}_{t}(Z,Q)\coloneqq\alpha_{t}Y_{t}(Q)+(1-\alpha_{t})\cdot\tfrac{1}{1- \tau}\Big{(}Y_{t}(Q)\mathbb{I}(Y_{t}(Q)\leq Z_{t}^{1-\tau})-Z\cdot[\mathbb{I} \{Y_{t}(Q)\leq Z\}-(1-\tau)]\Big{)} \tag{6}\]
It enjoys weaker dependence on first-stage estimation error in the quantile functions. Our time-\(t\) target of estimation is
\[\hat{\overline{Q}}_{t}\in\arg\min_{q_{t}}\hat{\mathbb{E}}_{t}[(\tilde{Y}_{t}( \hat{Z}_{t}^{1-\tau},\hat{\overline{Q}}_{t+1})-q_{t}(S_{t},A_{t}))^{2}]. \tag{7}\]
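To make the fitting step concrete, here is a minimal sketch of one backward pass of the orthogonalized update in Equations (6) and (7), assuming the weights \(\alpha_{t}(s,a)\) have already been computed as in Equation (2). The use of scikit-learn gradient boosting for the conditional quantile and for the \(Q\)-regression, and the helper name `robust_fqi_step`, are illustrative choices rather than the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def robust_fqi_step(s_t, a_t, r_t, s_next, alpha_t, q_next, actions, gamma=0.99, tau=0.2):
    """One backward step of orthogonalized robust FQI (illustrative sketch).

    s_t, a_t, r_t, s_next : arrays of transitions observed at time t
    alpha_t               : per-sample weights alpha_t(s, a), assumed precomputed
    q_next                : fitted Q-function at time t + 1 (None at the horizon)
    """
    # Nominal Bellman residual Y_t(Q), Equation (5).
    if q_next is None:
        y = np.asarray(r_t, dtype=float).copy()
    else:
        q_vals = np.column_stack(
            [q_next.predict(np.column_stack([s_next, np.full(len(s_next), a)])) for a in actions]
        )
        y = r_t + gamma * q_vals.max(axis=1)

    x = np.column_stack([s_t, a_t])

    # Fit the (1 - tau) conditional quantile Z_t^{1-tau} of Y given (S_t, A_t).
    z_model = GradientBoostingRegressor(loss="quantile", alpha=1.0 - tau).fit(x, y)
    z_hat = z_model.predict(x)

    # Orthogonalized pseudo-outcome, Equation (6).
    below = (y <= z_hat).astype(float)
    tail = (y * below - z_hat * (below - (1.0 - tau))) / (1.0 - tau)
    y_tilde = alpha_t * y + (1.0 - alpha_t) * tail

    # Least-squares regression of the pseudo-outcome, Equation (7).
    return GradientBoostingRegressor().fit(x, y_tilde)
```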
Estimation considerationsA large literature discusses methods for quantile regression (Koenker and Hallock, 2001; Meinshausen, 2006; Belloni and Chernozhukov, 2011), as well as conditional expected shortfall (Cai and Wang, 2008; Kato, 2012) and can guide the choice of function class for quantiles and \(\overline{Q}\) appropriately.
## 4 Analysis and Guarantees
We first describe the estimation benefits we receive from orthogonalization before discussing analysis of robust fitted-Q-evaluation and iteration, and insights. (All proofs are in the appendix).
### Estimation guarantees
**Assumption 3** (Bounded conditional density): _Assume that a.s., \(P_{t}(s_{t+1}\mid s_{t},a)<M_{P},\forall t,s_{t},s_{t+1}\)._
Assumption 3 is a common regularity condition for the analysis of quantiles. Define
\[\hat{\overline{Q}}_{t}(s,a) =\mathbb{E}_{\pi^{b}}[\tilde{Y}_{t}(\hat{Z}_{t},\hat{\overline{Q}}_{t+1})\mid s,a] \qquad\text{(feasible robust Q)},\] \[\widetilde{\overline{Q}}_{t}(s,a) =\mathbb{E}_{\pi^{b}}[\tilde{Y}_{t}(Z_{t},\hat{\overline{Q}}_{t+1})\mid s,a] \qquad\text{(oracle nuisance)}.\]
We show that \(\hat{\overline{Q}}_{t}(s,a)\) enjoys lower-order estimation error from \(\hat{Z}_{t}\).2 (The proof follows straightforwardly from various results in Semenova (2017); Olma (2021)).
Footnote 2: In theory, we would require data splitting unless we make Donsker-type assumptions. In the experiments (and algorithm description) in the interest of data-efficiency we do not data-split. The analysis can be readily extended. Recent work (Chen et al., 2022) shows rigorously data-splitting may not be necessary under stability conditions; extending that analysis to this setting would be interesting future work.
**Proposition 6** (CVaR estimation error): _For \(a\in\mathcal{A},t\in[T],\)_
\[\|\hat{\overline{Q}}_{t}(S,a)-\overline{Q}_{t}(S,a)\|\leq\|\widetilde{ \overline{Q}}_{t}(S,a)-\overline{Q}_{t}(S,a)\|+o_{p}(n^{-\frac{1}{2}})\]
We assume concentratability and Bellman completeness, as is standard in the offline RL literature (but certainly not innocuous).
**Assumption 4** (Concentratability): _Given a policy \(\pi\), let \(P_{t}^{\pi}\) denote the marginal distribution at time step \(t\), starting from \(s_{1}\) and following \(\pi\), and \(\mu_{t}\) denote the true marginal occupancy distribution under \(\pi_{b}\). There exists a parameter \(C\) such that_
\[\sup_{(s,a,t)\in\mathcal{S}\times\mathcal{A}\times[T]}\frac{\mathrm{d}P_{t}^{ \pi}}{\mathrm{d}\mu_{t}}(s,a)\leq C\quad\text{ for any policy }\pi.\]
**Assumption 5** (Approximate Bellman completeness): _There exists \(\epsilon>0\) such that, for all \(t\in[T],\)_
\[\sup_{q_{t+1}\in\mathcal{Q}_{t+1}}\inf_{q_{t}\in\mathcal{Q}_{t}}\left\|q_{t}- \overline{\mathcal{T}}_{t}^{*}q_{t+1}\right\|_{\mu_{t}}^{2}\leq\epsilon.\]
Concentratability is analogous to sequential overlap. It assumes a uniformly bounded density ratio between the true marginal occupancy distribution and those induced by policies encountered during policy optimization. Approximate Bellman completeness assumes that the function class \(\mathcal{Q}\) is approximately closed under the robust Bellman operator. We also make regularity assumptions for estimation.
**Assumption 6** (Estimation):
1. _Boundedness of outcomes:_ \(R_{t}\leq B_{R},\forall t\)__
2. _Rademacher complexity: The_ \(Q\)_-function class has Rademacher complexity_ \(\mathcal{R}(\mathcal{Q})\) _and the_ \(Z\)_-function class has Rademacher complexity_ \(\mathcal{R}(\mathcal{Z}).\)__
Although we ultimately seek an optimal policy, approaches based on fitted-Q-evaluation and iteration instead optimize the squared loss, which is related to the Bellman error, itself a surrogate for value suboptimality.
**Definition 2** (Bellman error): _Under data distribution \(\mu_{t}\), define the Bellman error of function \(q=(q_{1},\ldots,q_{T}):\)_
\[\mathcal{E}(q)=\tfrac{1}{T}\sum_{t=1}^{T}\lVert q_{t}-\overline{\mathcal{T}}_{ t}^{*}q_{t+1}\rVert_{\mu_{t}} \tag{8}\]
The next lemma, which appears as Duan et al. (2021, Lemma 3.2) and Xie and Jiang (2020, Thm. 2), justifies this approach by relating the Bellman error to value suboptimality. (Its proof follows directly via the robust Bellman operator and is omitted).
**Lemma 2** (Bellman error to value suboptimality): _Under Assumption 4, for any \(q\in\mathcal{Q},\) we have that, for \(\pi\) the policy that is greedy with respect to \(q,\)_
\[V_{1}^{*}(s_{1})-V_{1}^{\pi}(s_{1})\leq 2H\sqrt{C\cdot\mathcal{E}(q^{\pi})}\]
Finally, putting the above together, our sample complexity bound states that the Bellman error is on the order of \(O(n^{-\frac{1}{2}})\). We assume the behavior policy is essentially known, so this analysis omits estimation error in \(\pi_{b}\).
**Theorem 1** (Fitted Q Iteration guarantee): _Suppose Assumptions 3 to 6 and let \(B_{f},B_{\overline{Q}},B_{\mathcal{L}_{q}}\) be bounds on Bellman residual, \(\overline{Q}\), and the squared loss function that follow from assuming bounded rewards. Then, with probability \(>1-\delta,\)_
\[\mathcal{E}(\hat{Q})=\tfrac{1}{T}\sum_{t=1}^{T}\left\lVert\hat{Q}_{t}-\overline{\mathcal{T}}_{t}^{*}\hat{Q}_{t+1}\right\rVert_{\mu_{t}}^{2}\] \[\leq\epsilon+\frac{1}{T}\sum_{t=1}^{T}\left\{\left(8(1-\tau)+1\right)(2B_{f}+2B_{\overline{Q}_{t}})\mathcal{R}(\mathcal{Z})+2B_{\overline{Q}}\mathcal{R}(\mathcal{Q})+B_{\mathcal{L}_{q}}\sqrt{\frac{2\log(6T/\delta)}{n}}\right\}+o_{p}(n^{-\frac{1}{2}})\]
Proof sketch.As appears elsewhere in analysis of FQI (Duan et al., 2021), we may obtain the following standard decomposition:
\[\|\hat{\overline{Q}}_{t,\hat{Z}_{t}}-\overline{\mathcal{T}}_{t,\hat{Z}_{t}}^{*}\hat{\overline{Q}}_{t+1}\|_{\mu_{t}}^{2}=\mathbb{E}_{\mu}[\ell(\hat{\overline{Q}}_{t,\hat{Z}_{t}},\hat{\overline{Q}}_{t+1};\hat{Z}_{t})]\] \[\qquad-\mathbb{E}_{\mu}[\ell(\overline{Q}_{t,Z_{t}}^{\dagger},\hat{\overline{Q}}_{t+1};Z_{t})]+\|\overline{Q}_{t,Z_{t}}^{\dagger}-\overline{\mathcal{T}}_{t}^{*}\hat{\overline{Q}}_{t+1}\|_{\mu_{t}}^{2}\]
where \(\overline{Q}_{t,Z_{t}}^{\dagger}\) is the oracle squared loss minimizer (relative to the \(\hat{\overline{Q}}_{t+1}\) output from the algorithm). The last term is bounded by Assumption 5 (completeness). From here on our analysis differs with additional decomposition relative to estimated nuisances \(\hat{Z}_{t}\) and applying our orthogonality results in Proposition 6. Uniform convergence analysis follows by standard Rademacher complexity contraction.
### The Cost of Confounding with Infinite Data
While Theorem 1 analyses the difficulty of estimating the robust value function, here we additionally provide a brief analysis of how conservative the true robust value function is at the population-level for policy evaluation (not optimization). We consider a simplified Gaussian setting where we can analytically characterize the population-level difference between the nominal and robust value functions.
**Proposition 7**: _Let \(\mathcal{S}=\mathbb{R}\) and \(\mathcal{A}=\{0,1\}\). Define parameters \(\theta_{T},\theta_{R},\sigma_{T}\in\mathbb{R}\). Let \(P_{\pi^{b}}(S_{t+1}|S_{t},A_{t})=\mathcal{N}(\theta_{T}S_{t},\sigma_{T})\), \(R(s,a,s^{\prime})=\theta_{R}s^{\prime}\), \(\pi_{t}^{e}(1|S_{t})=0.5\), and consider some \(\pi^{b}\) such that \(\pi_{t}^{b}(A_{t}|S_{t})\) does not vary with \(S_{t}\). Finally, let \(\beta_{i}\coloneqq\theta_{R}\theta_{T}\sum_{k=0}^{i-1}\gamma^{k}\theta_{T}^{k}\) and notice that the nominal, non-robust value functions are \(V_{H-i}^{\pi^{e}}(s)=\beta_{i}s\). Then:_

\[|V_{1}^{\pi^{e}}(s)-\bar{V}_{1}^{\pi^{e}}(s)|\leq\frac{1}{16\theta_{T}}\left(\sum_{i=1}^{H-1}\gamma^{H-i}\beta_{i}\right)\sigma_{T}\log(\Lambda).\]
Note that the cost of robustness grows as the horizon \(H\) increases. However, the scaling with the _degree_ of confounding \(\Lambda\) is independent of \(H\), and has a modest \(\log(\Lambda)\) rate. This contrasts starkly with the difficulty of estimation summarized in Corollary 2, which scales quadratically with \(\Lambda\). Further population-level analysis of the cost of confounding is a promising direction for future work.
### Insights
**Bias-variance tradeoff in selection of \(\Lambda\).**
Although the constants in the sample complexity bound can be improved, we may obtain insights on how \(\Lambda\) affects estimation difficulty by studying a slightly reformulated regression target.
**Corollary 2**: _Assume that the same function classes \(\mathcal{Q},\mathcal{Z}\) are used for every timestep, and their Rademacher complexities are \(\mathcal{R}(\mathcal{Q})\leq C_{q}n^{-\frac{1}{2}},\mathcal{R}(\mathcal{Z})\leq C_{z}n^{-\frac{1}{2}}\). Then there exist absolute constants \(K,k\) such that_
\[\mathcal{E}(\hat{Q})\leq\epsilon+K\Lambda^{2}(T+k)^{2}n^{-\frac{1}{2}}.\]
The main takeaway is that the width of confidence bounds on the robust \(Q\) function increases quadratically in \(\Lambda\) (and the horizon). Though horizon dependence might be improved by assuming \(\sum R_{t}\leq B\) instead, the dependence on \(\Lambda\) illustrates robustness-variance-sharpness tradeoffs. Namely, as we increase \(\Lambda\), we estimate more extremal tail regions, which is simply more difficult. Sharper tail bounds on conditional expected shortfall estimation would also qualitatively yield similar insights. In practice, this can motivate choosing a _smaller_ value of \(\Lambda\) in order to obtain valid confidence bounds that are not overly conservative.
**Connections to pessimism in offline RL.** Pessimism is an important algorithmic design principle for offline RL in the _absence_ of unobserved confounders (Xie et al., 2021; Rashidinejad et al., 2021; Jin et al., 2021). Therefore, robust FQI with lower-confidence-bound-sized \(\Lambda\) gracefully degrades to a pessimistic offline RL method if unobserved confounders were, contrary to our method's use case, not actually present in the data. Conversely, pessimistic offline RL with _state-wise_ lower confidence bounds confers some robustness against unobserved confounders. But state-wise LCBs are viewed as overly conservative relative to a profiled lower bound on the average value (Xie et al., 2021).
## 5 Simulation Experiments
We perform simulation experiments in a mis-specified sparse linear setting with heteroskedastic conditional variance. We use the following behavioral marginal MDP:
\[\mathcal{S}\subset\mathbb{R}^{d}, \mathcal{A}=\{0,1\},S_{0}\sim\mathcal{N}(0,0.01),\] \[\pi_{b}(1|S_{t})=0.5,\forall S_{t}\] \[P_{t}^{b}(S_{t+1}|S_{t},A_{t})=\mathcal{N}(BS_{t}+\theta_{A}a, \max\{AS_{t}+\sigma,0\})\] \[R(S_{t},A_{t},S_{t+1})=\theta_{R}^{T}S_{t+1}\]
with parameters \(A,B\in\mathbb{R}^{d\times d},\theta_{R},\theta_{A}\in\mathbb{R}^{d},\sigma\in \mathbb{R}\) chosen such that \(AS_{t}+\sigma>0\) with probability vanishingly close to \(1\). The number of features \(d=25\) and \(A\) and \(B\) are chosen to be column-wise sparse, with \(5\) and \(20\) non-zero columns respectively. We collect a dataset of size \(n=5000\) from a single trajectory.
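For concreteness, a minimal sketch of this data-generating process is given below. The numerical values of \(A\), \(B\), \(\theta_{A}\), \(\theta_{R}\), and \(\sigma\) are placeholders (the exact sparse choices are in Appendix G.3), the second argument of the Gaussian is treated as a per-coordinate standard deviation, and the episode structure (independent length-\(T\) rollouts) is a simplifying assumption.

```python
import numpy as np

def simulate_behavior_data(n=5000, T=5, d=25, seed=0):
    """Generate rollouts from the behavioral marginal MDP (illustrative parameter values)."""
    rng = np.random.default_rng(seed)

    # Placeholder parameters: column-wise sparse A (5 non-zero cols) and B (20 non-zero cols).
    A = np.zeros((d, d)); A[:, :5] = 0.01 * rng.standard_normal((d, 5))
    B = np.zeros((d, d)); B[:, :20] = 0.1 * rng.standard_normal((d, 20))
    theta_A = rng.standard_normal(d)
    theta_R = rng.standard_normal(d)
    sigma = 1.0

    trajectories = []
    for _ in range(n):
        s = rng.normal(0.0, 0.1, size=d)            # S_0 ~ N(0, 0.01), 0.01 read as a variance
        traj = []
        for _ in range(T):
            a = rng.integers(0, 2)                  # pi_b(1 | s) = 0.5, independent of the state
            scale = np.maximum(A @ s + sigma, 0.0)  # heteroskedastic conditional std (assumption)
            s_next = B @ s + theta_A * a + scale * rng.standard_normal(d)
            r = theta_R @ s_next                    # R(S_t, A_t, S_{t+1}) = theta_R^T S_{t+1}
            traj.append((s, a, r, s_next))
            s = s_next
        trajectories.append(traj)
    return trajectories
```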
We estimate \(\bar{V}_{1}^{*}(s)\) for \(T=5\) and several different values of \(\Lambda\), using both the orthogonalized and non-orthogonalized robust losses. For function approximation for the conditional mean and conditional quantile, we use Lasso regression. Note that while this is correctly specified in the non-robust setting, the CVaR is _non-linear_ in the observed state due to the non-linear conditional standard deviation of \(\theta_{R}^{T}S_{t+1}\), and therefore the Lasso is a mis-specified model for the quantile and robust value functions. For details, see Appendix G.3.
We report the MSE of the value function estimate over \(100\) trials, alongside the average \(\ell_{2}\)-norm parameter error and the percentage of the time a wrong action is taken. The MSE and percentage of mistakes are computed in an iid holdout sample of size \(200,000\) drawn from the initial state distribution comparing the estimated value function / policy to an analytic ground truth. See the Appendix for details on the ground truth derivation.
The results illustrate two important phenomena. First, for both algorithms, the mean squared error gets worse as \(\Lambda\) increases. This is expected, given the quadratic dependence on \(\Lambda\) in Corollary 2. So while in practice we would like to certify robustness for higher levels of \(\Lambda\), the estimated lower bounds become less and less reliable, illustrating the tradeoff discussed in Section 4.3. Second, the non-orthogonal algorithm suffers from substantially worse mean-squared error and, as a result, selects a sub-optimal action more often, especially at high levels of \(\Lambda\). Naturally, since the quantiles become harder to estimate for higher \(\Lambda\), orthogonalization becomes more important. These numerical results emphasize that orthogonalization can be quite important in practice.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(\Lambda\) & Algorithm & MSE(\(V_{0}^{*}\)) & \(\ell_{2}\) Parameter Error & \% wrong action \\ \hline \hline
1 & FQI & 0.2927 & 2.506 & 0\% \\ \hline
2 & Non-Orthogonal & 0.6916 & 3.458 & 5e-5\% \\ & Orthogonal & 0.4119 & 2.678 & 0\% \\ \hline
5.25 & Non-Orthogonal & 10.87 & 7.263 & 0.39\% \\ & Orthogonal & 0.5552 & 3.110 & 0\% \\ \hline
8.5 & Non-Orthogonal & 50.72 & 17.32 & 2.5\% \\ & Orthogonal & 0.7113 & 3.410 & 4e-5\% \\ \hline
11.75 & Non-Orthogonal & 171.1 & 33.80 & 5.4\% \\ & Orthogonal & 1.336 & 3.666 & 6e-4\% \\ \hline
15 & Non-Orthogonal & 432.9 & 55.86 & 8.2\% \\ & Orthogonal & 2.687 & 3.931 & 4e-3\% \\ \hline \end{tabular}
\end{table}
Table 1: Simulation results versus ground-truth measured by value function MSE, Q function parameter error, and portion of the time a sub-optimal action is taken. The results compare non-orthogonal and orthogonal confounding robust FQI over five values of \(\Lambda\).
## 6 Additional Related Work
**Recent work.** Recent work [Panaganti et al., 2022] also proposes a robust fitted-Q-iteration algorithm for RMDPs. Although the broad algorithmic design is similar, we consider a different uncertainty set from their \(\ell_{1}\) set, and further introduce orthogonalization. In the single-timestep setting, further improvements are possible when targeting the ATE, such as in Dorn and Guo [2022], Dorn et al. [2021]. However, we need to recover the _entire robust Q-function_, i.e. conditional argmin of the upper (resp. lower) bounds for policy optimization, for an \((s,a)\)-rectangular robust Bellman operator.
**Other related work, with different approaches.** We briefly discuss central differences from this approach to others in related literature (and include an expanded discussion in the appendix). A rapidly growing line of work on offline reinforcement learning with unobserved confounders can broadly be divided into different categories: assuming point identification is available via instrumental variables [Wang et al., 2021]/latent variable models [Bennett and Kallus, 2019]/front-door identification [Shi et al., 2022b], proximal causal inference in POMDPs from temporal structure [Tennenholtz et al., 2019, Bennett et al., 2021, Uehara et al., 2022, Shi et al., 2022a] or additional proxies [Miao et al., 2022], or no-information partial identification (PI) bounds based only on the structure of probability distributions (law of total probability to obtain a partial order in [Han, 2022]/PI bounds with time-varying instrumental variables in [Chen and Zhang, 2021] based on Manski-Pepper bounds). Other works include [Fu et al., 2022, Liao et al., 2021, Saghafian, 2021].
We briefly summarize how our sensitivity-analysis-based approach differs from these other strategies. Although point identification is desirable when available, sensitivity analysis can be used when the assumptions underlying point identification (instrumental variables, front-door adjustment) _are not true_, as may be the case in practice. Proximal causal inference imposes additional (unverifiable) completeness assumptions on the latent variable structure, and is a statistically challenging ill-posed inverse problem. We study a more restricted model of memoryless unobserved confounders that leads to qualitatively different behavior than a generic POMDP, i.e. an online counterpart that is a marginal MDP, which could justify potential warmstarting approaches. Finally, sensitivity analysis, which relaxes strong assumptions, is less conservative and more informative than no-assumptions partial identification. Developing a _variety_ of identification approaches further is crucial both for analysts to use appropriate estimators/bounds, and methodologically to support falsifiability analyses.
## 7 Conclusion
We developed orthogonalized robust fitted-Q-iteration under sequentially observed exogenous confounders and analyzed its sample complexity. Computational experiments show the effectiveness of the approach and the benefits of orthogonalization. Interesting directions for future work include tighter statistical analysis, falsifiability-based analyses that draw on competing identification proposals, and elicitation of the sensitivity bound \(\Lambda\) required as input to the algorithm.
## 8 Acknowledgments
AZ acknowledges funding from the Foundations of Data Science Institute. This work was done in part while the author was visiting the Simons Institute for the Theory of Computing. |
2308.15552 | Pure Exploration under Mediators' Feedback | Stochastic multi-armed bandits are a sequential-decision-making framework,
where, at each interaction step, the learner selects an arm and observes a
stochastic reward. Within the context of best-arm identification (BAI)
problems, the goal of the agent lies in finding the optimal arm, i.e., the one
with highest expected reward, as accurately and efficiently as possible.
Nevertheless, the sequential interaction protocol of classical BAI problems,
where the agent has complete control over the arm being pulled at each round,
does not effectively model several decision-making problems of interest (e.g.,
off-policy learning, partially controllable environments, and human feedback).
For this reason, in this work, we propose a novel strict generalization of the
classical BAI problem that we refer to as best-arm identification under
mediators' feedback (BAI-MF). More specifically, we consider the scenario in
which the learner has access to a set of mediators, each of which selects the
arms on the agent's behalf according to a stochastic and possibly unknown
policy. The mediator, then, communicates back to the agent the pulled arm
together with the observed reward. In this setting, the agent's goal lies in
sequentially choosing which mediator to query to identify with high probability
the optimal arm while minimizing the identification time, i.e., the sample
complexity. To this end, we first derive and analyze a statistical lower bound
on the sample complexity specific to our general mediator feedback scenario.
Then, we propose a sequential decision-making strategy for discovering the best
arm under the assumption that the mediators' policies are known to the learner.
As our theory verifies, this algorithm matches the lower bound both almost
surely and in expectation. Finally, we extend these results to cases where the
mediators' policies are unknown to the learner obtaining comparable results. | Riccardo Poiani, Alberto Maria Metelli, Marcello Restelli | 2023-08-29T18:18:21Z | http://arxiv.org/abs/2308.15552v2 | # Pure Exploration under Mediators' Feedback
###### Abstract
Stochastic multi-armed bandits are a sequential-decision-making framework, where, at each interaction step, the learner selects an arm and observes a stochastic reward. Within the context of best-arm identification (BAI) problems, the goal of the agent lies in finding the optimal arm, i.e., the one with highest expected reward, as accurately and efficiently as possible. Nevertheless, the sequential interaction protocol of classical BAI problems, where the agent has complete control over the arm being pulled at each round, does not effectively model several decision-making problems of interest (e.g., off-policy learning, partially controllable environments, and human feedback). For this reason, in this work, we propose a novel strict generalization of the classical BAI problem that we refer to as best-arm identification under mediators' feedback (BAI-MF). More specifically, we consider the scenario in which the learner has access to a set of _mediators_, each of which selects the arms on the agent's behalf according to a stochastic and possibly _unknown_ policy. The mediator, then, communicates back to the agent the pulled arm together with the observed reward. In this setting, the agent's goal lies in sequentially choosing which mediator to query to identify with high probability the optimal arm while minimizing the identification time, i.e., the sample complexity. To this end, we first derive and analyze a statistical lower bound on the sample complexity specific to our general mediator feedback scenario. Then, we propose a sequential decision-making strategy for discovering the best arm under the assumption that the mediators' policies are known to the learner. As our theory verifies, this algorithm matches the lower bound both almost surely and in expectation. Finally, we extend these results to cases where the mediators' policies are unknown to the learner obtaining comparable results.
## Introduction
Stochastic multi-armed bandits (Lattimore and Szepesvari, 2020) are a sequential decision-making framework where, during each interaction round, the learner selects an arm and observes a sample drawn from its reward distribution. Contrary to regret minimization problems, where the agent aims at maximizing the cumulative reward, in _best-arm identification_ (BAI) scenarios (Even-Dar, Mannor, and Mansour, 2002), the agent's primary focus lies in computing the arm with the highest expected reward (i.e., the optimal arm) as accurately and efficiently as possible. More specifically, in the _fixed-confidence_ setting, given a maximal risk parameter \(\delta\), the agent's primary focus is on identifying, with probability at least \(1-\delta\), the optimal arm with a minimum number of samples.
Nevertheless, the sequential interaction protocol of classical BAI settings, in which the agent has complete control of the arm being pulled at each round (i.e., at each step, the agent chooses which arm to query), fails to adequately represent various decision-making problems that are of importance. In fact, in some relevant scenarios, the agent possesses only partial or no control over the arms being played. Consider, indeed, the following examples.
* **Off-Policy Learning.** Off-policy learning is a crucial aspect of decision-making theory that has gathered significant attention, especially within the Reinforcement Learning (RL) community (Sutton and Barto, 2018). Here, the agent continuously observes, at each round, actions sampled from a fixed behavioral policy, together with the corresponding rewards. The goal, here, consequently, lies in exploiting these off-policy interactions to identify the best arm with high probability.
* **Active Off-Policy Learning.** This scenario generalizes the off-policy setting previously presented. In this case, multiple behavioral policies are available to the agent. The learner can decide which behavioral policy to query to quickly identify the optimal arm. In practice, these behavioral policies can be, for instance, those of experts with the skill necessary to perform a subset of actions within the arm set. Another relevant example might arise in scenarios with human feedback (Li et al., 2019), where multiple humans can perform actions on the agent's behalf according to some private and personal policy.
* **Partially Controllable Environments.** Consider a situation in which arms are related to a set of safe plans that are executed, e.g., by a physical robot. In this scenario, whenever the agent pulls an arm, physical errors within the process might lead to performing a slightly different plan. Nevertheless, the learner might still be interested in identifying the optimal plan conditioned on the system disturbance. Indeed, at the cost of increasing the quality of the physical robot, the action error can be reduced. Consequently, the designer of the learning system might
exploit the available robot to identify the optimal plan, which can then be directly exploited in the more expensive and upgraded physical system.
As we can see, all these scenarios cannot be properly modeled with the usual bandit scheme as the agent has limited or no control over the arms being pulled during each interaction round. For this reason, in this work, we study a strict generalization of the classical BAI framework that (i) circumvents the limits of complete controllability that are typical of bandit frameworks and (ii) encompasses all the examples we discussed above. To this end, we introduce the best-arm identification problem under _mediators' feedback_, where the learner has access to a set of mediators, each of which will query arms on the agent's behalf according to some stochastic, _possibly unknown_ and _fixed_ behavioral policy. The mediator will then communicate back to the agent which action it has played, together with the observed reward realization. In this setting, the agent's goal lies in sequentially choosing which mediator to query to identify with high probability the optimal arm while minimizing the sample complexity. As one can verify, such formalism decouples the arms' pulls from the agent's choices, thus allowing us to properly model all the scenarios depicted above.
**Original Contributions and Outline.** After introducing the necessary notation and background, we formally describe the BAI problem under the mediators' feedback. We then derive a _lower bound_ on the identification time that reveals the typical max-min pure exploration structure that commonly arises in fixed-confidence BAI problems. To properly understand the statistical complexity of the problem, we further investigate this pure exploration game by presenting several analyses and discussions on the hardness of the mediators' feedback setting. Given these theoretical insights, and under the assumption that the mediators' policies are known to the learner, we develop an algorithm that sequentially chooses which mediator to query to identify the optimal arm with the fewest possible samples. Our analysis reveals that the presented algorithm is _asymptotically optimal_ (i.e., it matches the lower bound for \(\delta\to 0\)), both almost surely and in expectation. Finally, we extend all these results to the case in which the mediators' policies are unknown to the learner. Most surprisingly, a simple modification to our algorithm leads to theoretical results identical to the ones presented for the known policy setting. In other words, the two problems are characterized, at least for sufficiently small values of \(\delta\), by the same statistical complexity.
## Related Works
**Best-Arm Identification.** Since the seminal work of Even-Dar, Mannor, and Mansour (2002), the fixed-confidence BAI setting has gathered increasing attention within the community. In particular, considerable efforts have been dedicated to refining algorithms and statistical lower-bounds, all with the ultimate objective of constructing optimal identification strategies (e.g., Bubeck, Munos, and Stoltz 2009; Audibert, Bubeck, and Munos 2010; Karnin, Koren, and Somekh 2013; Jamieson et al. 2014; Jamieson and Nowak 2014; Kaufmann, Cappe, and Garivier 2016). In this context, and of particular relevance for our work, Garivier and Kaufmann (2016) have proposed the celebrated Track and Stop (TaS) algorithm, which attains _optimal_ statistical complexity in the asymptotic regime, i.e., \(\delta\to 0\). Building upon this work, numerous studies have been conducted to propose improvements and generalizations upon the TaS algorithm (e.g., Degenne and Koolen 2019; Wang, Tzeng, and Proutiere 2021; Degenne et al. 2020; Tirinzoni and Degenne 2022).
**Structured Best-Arm Identification.** One of the key factors that contributed to the success of TaS is its ability to emerge as a versatile framework that can be meticulously adapted to several variants of the BAI problem, such as linear (Jedra and Proutiere 2020) and spectral bandits (Kocak and Garivier 2020), multiple answers problems (Degenne and Koolen 2019), and many others (e.g., Garivier et al. 2017; Moulos 2019; Agrawal, Juneja, and Glynn 2020). Among problems with additional structure, our work is related to BAI under choice-based feedback (Feng, Caldentey, and Ryan 2022; Yang and Feng 2023), where a company sequentially shows sets of items to a population of customers and collects their choices. The objective is to identify the most preferred item with the least number of samples and with high-probability. Another relevant work is Russac et al. (2021), where the authors study the BAI problem in the presence of sub-populations. In more precise terms, the authors make the assumption that a population can be divided into distinct and similar subgroups. During each time step, one of these subgroups is sampled and an action (i.e., arm) is chosen. The observed outcome is a random draw from the selected arm, considering the characteristics of the current subgroup. To evaluate the effectiveness of each arm, a weighted average of its subpopulation means is used. Finally, our feedback structure is also related to BAI in contaminated bandits (Altschuler, Brunel, and Malek 2019; Mukherjee et al. 2021), where each arm pull has a probability \(\epsilon\) of generating a sample from an arbitrary distribution, rather than the true one. Nevertheless, we remark that none of these settings can be mapped to the mediators' feedback one and vice versa.
**Mediators' Feedback.** The mediator feedback terminology was introduced by Metelli et al. (2021) in the context of Policy Optimization (PO) in RL. Similar to the previous studies of Papini et al. (2019), the authors deal with the PO problem as a bandit where each policy in a given set is mapped to a distinct arm, whose reward is given by the usual cumulative RL return. Notice that, in this setting, the ability to perform actions in the environment is mediated by the policy set of the agent. For this reason, in our work, we adopt their terminology to disentangle the arms' pulls from the agent's choices. Along this line of work, we notice that, recently, a variant of this problem has also been studied in the context of non-stochastic bandits with expert advice (Eldowa et al. 2023). Here, during each round, the learner selects an expert that will perform an action on the agent's behalf according to some fixed distribution. Similar ideas have also been investigated in Sen, Shanmugam,
and Shakkottai (2018) for regret minimization in contextual bandits. More specifically, they assume access to a class of stochastic experts, where each expert is a conditional distribution over the arms given the context. Compared to our work, both Sen, Shanmugam, and Shakkottai (2018) and Eldowa et al. (2023) consider the problem of minimizing the regret against the best expert.
**Other Related Works.** Off-policy learning plays a vital role in decision-making theory and has garnered considerable interest, particularly in RL (Sutton and Barto 2018). In particular, the off-policy feedback has received extensive research in the offline RL literature (Levine et al. 2020), where the agent lacks the ability to directly interact with the environment and is instead limited to utilizing a fixed dataset gathered by possibly multiple and unknown behavioral policies. Finally, related to our work, Gabbianelli, Neu, and Papini (2023) have studied regret minimization in adversarial bandits with off-policy feedback. More specifically, the authors assume that the learner cannot directly observe its rewards, but instead sees the ones obtained by a behavioral and unknown policy that runs in parallel.
## Preliminaries
In this section, we provide the necessary notation and background that will be used throughout this document. We first describe in depth the best-arm identification framework, and then we formalize our mediators' feedback problem.
### Fixed-Confidence Best-Arm Identification
In fixed-confidence best-arm identification (BAI) problems (Even-Dar, Mannor, and Mansour 2002), the agent interacts with a set of \(K\) probability distributions \(\mathbf{\nu}=(\nu_{1},\ldots\nu_{K})\) with respective means \(\mathbf{\mu}=(\mu_{1},\ldots,\mu_{K})\). For simplicity, we assume that there is a unique optimal arm, and, w.l.o.g., \(\mu_{1}>\mu_{2}\geq\cdots\geq\mu_{K}\). In the rest of this work, we consider distributions within the one-dimensional canonical exponential family (Cappe et al. 2013), which are directly parameterized by their mean. 1 For this reason, with a little abuse of notation, we will often refer to the bandit model \(\mathbf{\nu}\) using the means of its arms \(\mathbf{\mu}\). We use the symbol \(\mathcal{M}\) to denote this class of bandit models with unique optimal arms. Given two distributions \(p,q\in\mathcal{M}\), we denote with \(d(p,q)\) the KL divergence between \(p\) and \(q\).
Footnote 1: The reader who is not familiar with the subject may consider Bernoulli or Gaussian distributions with known variance.
We now proceed by formalizing the interaction scheme between the agent and the bandit model. At every interaction step \(t\in\mathbb{N}\), the agent selects an arm \(A_{t}\in[K]\) and receives a new and independent reward \(X_{t}\sim\nu_{A_{t}}\). The procedure that defines how arms \(A_{t}\) are selected is often referred to as _sampling rule_. Given a maximal risk parameter \(\delta\in(0,1)\), the goal of the agent is to output the optimal arm (i.e., \(\hat{a}_{\tau_{\delta}}=1\)) with probability at least \(1-\delta\), while minimizing the _sample complexity_ \(\tau_{\delta}\in\mathbb{N}\). More formally, \(\tau_{\delta}\) is a stopping time that controls the end of the data acquisition phase, after which a decision \(\hat{a}_{\tau_{\delta}}\) is made. We refer to algorithms that satisfy \(\mathbb{P}\left(\hat{a}_{\tau_{\delta}}\notin\operatorname*{argmax}_{a\in[K]}\mu_{a}\right)\leq\delta\) as \(\delta\)-correct strategies.
### On the Complexity of Best-Arm Identification
We now describe in detail the statistical complexity of fixed-confidence BAI problems (Garivier and Kaufmann 2016). Given a bandit model \(\mathbf{\mu}\in\mathcal{M}\), let \(a^{*}(\mathbf{\mu})=\operatorname*{argmax}_{a\in[K]}\mu_{a}\). We introduce the set \(\text{Alt}(\mathbf{\mu})\) as the set of problems where the optimal arm is different w.r.t. to \(\mathbf{\mu}\), namely \(\text{Alt}(\mathbf{\mu})\coloneqq\left\{\mathbf{\lambda}\in\mathcal{M}:a^{*}(\mathbf{ \lambda})\neq a^{*}(\mathbf{\mu})\right\}\). Let \(\text{kl}(x,y)=x\log(x/y)+(1-x)\log((1-x)/(1-y))\). Then, for any \(\delta\)-correct algorithm it holds that:
\[\mathbb{E}_{\mathbf{\mu}}\left[\tau_{\delta}\right]\geq T^{*}(\mathbf{\mu})\text{kl}( \delta,1-\delta), \tag{1}\]
where:
\[T^{*}(\mathbf{\mu})^{-1}=\sup_{\omega\in\Delta_{K}}\inf_{\mathbf{\lambda}\in\text{Alt }(\mathbf{\mu})}\left(\sum_{a=1}^{K}\omega_{a}d(\mu_{a},\lambda_{a})\right). \tag{2}\]
We remark that, when \(\delta\to 0\), \(T^{*}(\mathbf{\mu})\) fully describes the statistical complexity of each problem \(\mathbf{\mu}\). More specifically, from Equation (1) we obtain:
\[\limsup_{\delta\to 0}\frac{\mathbb{E}_{\mathbf{\mu}}[\tau_{\delta}]}{\log(1/ \delta)}\geq T^{*}(\mathbf{\mu}). \tag{3}\]
For this reason, \(T^{*}(\mathbf{\mu})\) has played a crucial role in several BAI studies (e.g., Garivier and Kaufmann 2016; Wang, Tzeng, and Proutiere 2021; Tirinzoni and Degenne 2022). From Equation (2), we can see that \(T^{*}(\mathbf{\mu})^{-1}\) can be seen as a max-min game where the first player chooses a pull proportion among the different arms, and the second player chooses a hard-to-identify alternative problem where the optimal arm is different (Degenne, Koolen, and Menard 2019). In this sense, the unique maximizer of Equation (2), which we denote as \(\mathbf{\omega}^{*}(\mathbf{\mu})\), can be interpreted as the optimal proportion with which arms should be queried in order to identify \(a^{*}(\mathbf{\mu})\). Since solving Equation (2) requires access to quantities unknown to the learner, \(\mathbf{\omega}^{*}(\mathbf{\mu})\) often takes the name of _oracle weights_.
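For intuition, the characteristic time and the oracle weights can be computed numerically. The sketch below does so for the illustrative case of Gaussian arms with unit variance, for which the inner infimum of Equation (2) has the well-known closed form \(\min_{a\neq 1}\frac{\omega_{1}\omega_{a}}{\omega_{1}+\omega_{a}}\frac{(\mu_{1}-\mu_{a})^{2}}{2}\); the generic optimizer is an illustrative choice rather than the specialized procedure of Garivier and Kaufmann (2016).

```python
import numpy as np
from scipy.optimize import minimize

def char_time_inv(mu):
    """T*(mu)^{-1} and oracle weights for Gaussian arms with unit variance (sketch)."""
    mu = np.asarray(mu, dtype=float)
    best = int(np.argmax(mu))
    others = [a for a in range(len(mu)) if a != best]

    def inner(w):  # closed-form inner infimum over the alternative set
        return min(w[best] * w[a] / (w[best] + w[a]) * (mu[best] - mu[a]) ** 2 / 2.0
                   for a in others)

    def neg_obj(theta):  # maximize over the simplex via a softmax reparameterization
        w = np.exp(theta - theta.max()); w /= w.sum()
        return -inner(w)

    res = minimize(neg_obj, x0=np.zeros(len(mu)), method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 20000})
    w_star = np.exp(res.x - res.x.max()); w_star /= w_star.sum()
    return inner(w_star), w_star

val, w = char_time_inv([1.0, 0.8, 0.5, 0.3])
print(1.0 / val, w)   # characteristic time T*(mu) and the oracle pull proportions
```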
### Track and Stop Algorithm
The seminal work of Garivier and Kaufmann (2016) has presented the Track and Stop (TaS) algorithm, which is the first asymptotically optimal approach for the fixed-confidence BAI scenario; i.e., when \(\delta\to 0\), it guarantees to stop with sample complexity that matches the lower bound. The core idea behind TaS lies in solving an empirical version of Equation (2) to estimate the optimal oracle weights. Then, in order to match the optimal proportions \(\mathbf{\omega}^{*}(\mathbf{\mu})\) (which guarantees to achieve optimality), the sampling rule will allocate samples by _tracking_ this empirical estimation. We remark that this is combined with a forced exploration sampling strategy that ensures that the estimate of the mean of each arm, and consequently the estimate of \(\mathbf{\omega}^{*}(\mathbf{\mu})\), is sufficiently accurate. Lastly, as a stopping criterion, TaS employs the Generalized Likelihood Ratio (GLR) statistic to determine if enough information has been collected to infer, with a risk not exceeding \(\delta\), whether the mean of one arm is greater than that of all the others. 2
### Mediators' Feedback Best-Arm Identification
In this work, we study the following generalization of the best-arm identification problem. Given a bandit model \(\mathbf{\nu}\) with \(K\) arms, the learner cannot directly sample rewards from each arm \(\nu_{a}\), but, instead, it can query a set of \(E\) mediators, each of which is described by a _possibly unknown_ and _fixed_ behavioral policy \(\mathbf{\pi}_{e}\in\Delta_{K}\). More specifically, at each interaction step \(t\in\mathbb{N}\), the agent will select a mediator \(E_{t}\in[E]\), which, on the agent's behalf, will pull an arm \(A_{t}\sim\mathbf{\pi}_{\mathbf{E}_{t}}\) and will observe a reward \(X_{t}\sim\nu_{A_{t}}\). The mediator \(E_{t}\) will then communicate back to the agent both the action \(A_{t}\) and the observed reward \(X_{t}\). For brevity, we adopt the symbol \(\mathbf{\pi}\) as a shortcut for the set of mediators' policies \(\left(\mathbf{\pi}_{e}\right)_{e=1}^{E}\). Given a maximal risk parameter \(\delta\), the goal of the agent remains to identify with high probability the optimal arm within \(\mathbf{\mu}\) while minimizing the sample complexity \(\tau_{\delta}\). To this end, we restrict our study to the following scenarios.
**Assumption 1**.: _For any \(a\in[K]\) there exists \(e\in[E]\) such that \(\pi_{e}(a)>0\)._
Assumption 1 states that the mediators' policies jointly explore each action \(a\in[K]\) with positive probability. In other words, the agent should be able to gather information on each arm within the arm set. This is a very mild requirement that, as we shall see, is necessary for finite sample complexity results.
To conclude, we notice that the proposed interaction protocol is a strict generalization of the usual BAI framework. Indeed, whenever (i) the mediators' policies are known, (ii) \(E=K\), and (iii) for every arm \(a\in[K]\) the policy of the \(a\)-th mediator is a Dirac distribution on action \(a\), we recover the usual best-arm identification problem. In the rest of this document, we refer to this particular set of mediators' policies as \(\mathbf{\bar{\pi}}\).
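The resulting interaction protocol can be summarized by the following minimal simulator sketch; the Gaussian reward model and the specific numbers are illustrative assumptions.

```python
import numpy as np

class MediatorFeedbackBandit:
    """Mediators' feedback protocol: the agent picks a mediator, the mediator pulls the arm."""

    def __init__(self, mu, policies, seed=0):
        self.mu = np.asarray(mu, dtype=float)           # arm means
        self.policies = np.asarray(policies, float)     # E x K matrix, rows are mediator policies
        self.rng = np.random.default_rng(seed)

    def query(self, mediator):
        # The chosen mediator samples an arm on the agent's behalf ...
        arm = self.rng.choice(len(self.mu), p=self.policies[mediator])
        # ... and reports back both the pulled arm and the observed reward.
        reward = self.rng.normal(self.mu[arm], 1.0)     # unit-variance Gaussian rewards (assumption)
        return arm, reward

# Example: K = 3 arms, E = 2 mediators jointly covering every arm (Assumption 1).
env = MediatorFeedbackBandit(mu=[0.9, 0.5, 0.2],
                             policies=[[0.6, 0.4, 0.0],
                                       [0.1, 0.2, 0.7]])
arm, reward = env.query(mediator=0)
```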
## On the Statistical Complexity
This section discusses the intrinsic statistical complexity of the best-arm identification problem under mediators' feedback. More specifically, we provide and analyze a lower bound on the sample complexity that is necessary to identify the optimal arm with high probability. The following theorem summarizes our findings.
**Theorem 1**.: _Let \(\delta\in(0,1)\). For any \(\delta\)-correct strategy, any bandit model \(\mathbf{\mu}\), and any set of mediators \(\mathbf{\pi}\) it holds that:_
\[\mathbb{E}_{\mathbf{\mu},\mathbf{\pi}}\left[\tau_{\delta}\right]\geq\mathrm{kl}( \delta,1-\delta)T^{*}(\mathbf{\mu},\mathbf{\pi}), \tag{4}\]
_where \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) is defined as:_
\[\sup_{\mathbf{\omega}\in\Sigma_{E}}\inf_{\mathbf{\lambda}\in\mathrm{Alt}(\mathbf{\mu})} \left(\sum_{e=1}^{E}\omega_{e}\sum_{a=1}^{K}\pi_{e}(a)d(\mu_{a},\lambda_{a}) \right). \tag{5}\]
Theorem 1 deserves some comments. First of all, as we can appreciate from Equation (5), \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) exhibits the typical max-min game that describes lower bounds for standard best-arm identification problems. More specifically, the max-player determines the proportion with which each mediator should be queried, while the min-player chooses a hard alternative instance in which the optimal arm is modified. It has to be remarked that \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\), and, consequently, the oracle weights \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\), directly depend on the set of mediators' policies \(\mathbf{\pi}\). In other words, \(\mathbf{\pi}\) plays a crucial role in the statistical complexity of the problem. To further investigate this dependency, let us introduce some additional notation. Given \(\mathbf{\omega}\in\Sigma_{E}\), we define \(\mathbf{\tilde{\pi}}(\mathbf{\omega})\in\Sigma_{K}\), where \(\tilde{\pi}_{a}(\mathbf{\omega})=\sum_{e=1}^{E}\omega_{e}\pi_{e}(a)\) denotes the probability of playing an arm \(a\) when sampling mediators according to \(\mathbf{\omega}\). Then, let \(\widetilde{\Sigma}_{K}\subseteq\Sigma_{K}\) be the set of all the possible \(\mathbf{\tilde{\pi}}\) that can be generated starting from some \(\mathbf{\omega}\in\Sigma_{E}\). At this point, it is possible to rewrite \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) as:
\[\sup_{\mathbf{\tilde{\pi}}\in\widetilde{\Sigma}_{K}}\inf_{\mathbf{\lambda}\in\mathrm{Alt}(\mathbf{\mu})}\sum_{a=1}^{K}\tilde{\pi}_{a}d(\mu_{a},\lambda_{a}). \tag{6}\]
At this point, we notice that Equation (6) shares significant similarities with the definition of \(T^{*}(\mathbf{\mu})^{-1}\) for classical BAI problems; i.e., Equation (2). The only difference, indeed, lies in the fact that, under mediators' feedback, the max-player can only act on the restricted set \(\widetilde{\Sigma}_{K}\) rather than the entire simplex \(\Sigma_{K}\). In this sense, the max-min game is between the proportions of arm pulls that it is _possible_ to realize through the mediators \(\mathbf{\pi}\) and the hard alternative instance. In the rest of this document, we denote maximizers of Equation (6) with \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\). Given this interpretation of Theorem 1, we now proceed with some additional analysis that further investigates the statistical complexity of the problem.
### Comparison with classical BAI
At this point, one might question the connection between Theorem 1 and the conventional lower bound of classical BAI problems.
First of all, it is worth noting that Theorem 1 effectively generalizes existing statistical complexity results of the typical BAI problem, thus offering a broader perspective. Indeed, whenever the set of mediators' policies is equal to \(\mathbf{\bar{\pi}}\), Equation (4) directly reduces to Equation (1). In other words, \(T^{*}(\mathbf{\mu},\mathbf{\bar{\pi}})^{-1}\) is exactly \(T^{*}(\mathbf{\mu})^{-1}\). Furthermore, for a general set of mediators \(\mathbf{\pi}\), it is possible to derive the following result.
**Proposition 1**.: _For any bandit model \(\mathbf{\mu}\) and mediators' policies \(\mathbf{\pi}\) it holds that:_
\[T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\leq T^{*}(\mathbf{\mu},\mathbf{\bar{\pi}})^{-1}. \tag{7}\]
_Furthermore, \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}<T^{*}(\mathbf{\mu},\mathbf{\bar{\pi}})^{-1}\) holds if and only if \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\bar{\pi}})\notin\widetilde{\Sigma}_{K}\)._
From Equation (7), we can see that the mediators' feedback problem is always at least as difficult as the classical BAI setting. From an intuitive perspective, this result is expected. Indeed, from Equation (6), we know that the only difference between \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) and \(T^{*}(\mathbf{\mu},\mathbf{\bar{\pi}})^{-1}\) lies in the definition of \(\widetilde{\Sigma}_{K}\), which, as previously discussed, encodes the partial controllability over the arm space introduced by the mediators \(\mathbf{\pi}\). Furthermore, Proposition 1 fully characterizes the set of instances in which the mediators' feedback introduces additional challenges in identifying the optimal arm. More precisely, the lower bound of Theorem 1 separates from the one of classical BAI whenever the max-player
cannot pull, in expectation, arms according to the proportion \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\bar{\pi}})\) that results from the lower bound of the classical BAI problem.
### On the Action Covering Assumption
We now provide a formal justification of the action covering assumption, i.e., Assumption 1. To this end, we analyze the behavior of Theorem 1 in the single-mediator setting, that is, \(E=1\). More specifically, let us focus on the case in which there are two different actions, \(a_{1}\) and \(a_{2}\), associated with Gaussian reward distributions with unit variance and means \(\mu_{a_{1}}>\mu_{a_{2}}\). In this case, \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) reduces to:
\[\frac{1}{2}\frac{\pi_{e}(a_{1})\pi_{e}(a_{2})}{\pi_{e}(a_{1})+\pi_{e}(a_{2})} \Delta^{2}, \tag{8}\]
where \(\Delta=\mu_{a_{1}}-\mu_{a_{2}}\). In this context, it is easy to see that, as soon as \(\pi_{e}(a)\to 1\) for either of the two actions, \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) tends to \(0\), and, consequently, \(\mathbb{E}_{\mathbf{\mu},\mathbf{\pi}}[\tau_{\delta}]\to+\infty\). In this sense, we can appreciate that Assumption 1 is necessary for a finite sample complexity result. This should come as no surprise: if we cannot observe any realization from a certain arm \(a\in[K]\), we are unable to conclude whether \(a\) is optimal or not.
### Off-Policy Learning
Given the significant importance of off-policy learning within the sequential decision-making community, we now provide additional details on the lower bound for the case in which \(E=1\). We notice, indeed, that whenever \(E=1\), our setting reduces to the off-policy best-arm identification problem, where the learner continuously observes actions and rewards from another agent (i.e., the mediator). In this case, assuming for the sake of exposition Gaussian distributions with unit variance, \(T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}\) can be rewritten as:
\[T^{*}(\mathbf{\mu},\mathbf{\pi})^{-1}=\min_{a\neq 1}\frac{1}{2}\frac{\pi_{e}(1)\pi_{e} (a)}{\pi_{e}(1)+\pi_{e}(a)}\Delta_{a}^{2}, \tag{9}\]
where \(\Delta_{a}=\mu_{1}-\mu_{a}\). Equation (9) expresses the lower bound _only_ in term of the most difficult to identify alternative arm (i.e., the minimum over the different sub-optimal actions). Furthermore, contrary to what usually happens in classical BAI problems, this difficult to identify alternative arm is not the one with the smallest gap \(\Delta_{a}\), but there is a trade-off between \(\Delta_{a}\) and how easy it is to observe the mediator playing action \(a\), namely \(\pi_{e}(a)\).
### On \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\)

Finally, before moving into the details of our algorithm, we conclude with some more technical considerations on \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\). It has to be noticed that, compared to standard BAI problems, \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\) are, in general, not unique. In other words, the mappings \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\) are set-valued. 3 As shown by previous works [1], this sort of feature introduces significant challenges within the algorithmic design/analysis of, e.g., Track and Stop inspired algorithms. The following proposition shows relevant technical properties that help overcome these challenges.

Footnote 3: We refer the reader to the appendix for examples that illustrate the non-uniqueness of \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\).

**Proposition 2**.: _The sets \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\) are convex. Furthermore, the mappings \((\mathbf{\mu},\mathbf{\pi})\to\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) and \((\mathbf{\mu},\mathbf{\pi})\to\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\) are upper hemicontinuous._
The unfamiliar reader might think of upper hemicontinuity as a generalization of the continuity property for set-valued mappings [1]. 4 As our analysis will reveal, Proposition 2 will play a crucial role for the analysis of our algorithmic solution.
Footnote 4: Further technical details on this point are deferred to the appendix.
## Track and Stop under Mediators' Feedback
In this section, we continue by providing our algorithm for the best-arm identification problem under mediators' feedback. Here, we focus on the case in which the mediators' policies \(\mathbf{\pi}\) are known to the learner. We refer the reader to the next section for the case in which \(\mathbf{\pi}\) is unknown. As our algorithm, we adapt the TaS framework to our interaction setting in the following way.
**Sampling Rule.** As a sampling rule, we adopt C-tracking of the oracle mediator proportions \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\). More formally, let \(\mathbf{\hat{\mu}}(t)\) be the vector of estimates of the mean of each arm at time \(t\). We then compute any maximizer of the empirical version of Equation (5) and project it in \(L^{\infty}\) norm onto \(\Sigma_{E}^{\epsilon_{t}}=\{\mathbf{\omega}\in\Sigma_{E}:\forall i\;\omega_{i}\geq \epsilon_{t}\}\), where \(\epsilon_{t}\) is given by \(\epsilon_{t}=(E^{2}+t)^{-1/2}/2\). We notice that, in the original version of TaS, C-Tracking was applied to track optimal proportions between arms. Our algorithmic choice (i.e., tracking mediator proportions) is a direct consequence of the fact that we cannot directly track arm proportions (e.g., \(\mathbf{\tilde{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\)), but, instead, the learner can only decide which mediator will be queried at time \(t\). Furthermore, we also remark that from Proposition 2, \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\) is, in general, set-valued, and, consequently, significantly harder to track.
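A minimal sketch of one step of this sampling rule is given below. The helper `empirical_oracle_weights`, standing for any maximizer of the empirical version of Equation (5), is hypothetical, and the exact \(L^{\infty}\) projection is replaced by a simpler mixing-with-uniform floor that also guarantees every mediator weight is at least \(\epsilon_{t}\).

```python
import numpy as np

def floor_weights(omega, eps):
    """Mix with the uniform distribution so every entry is at least eps
    (a simple stand-in for the exact L-infinity projection onto the eps-simplex)."""
    omega = np.asarray(omega, dtype=float)
    return (1.0 - len(omega) * eps) * omega + eps

def sampling_step(t, mu_hat, policies, cum_weights, counts, empirical_oracle_weights):
    """One C-tracking step: update cumulative targets, then query the lagging mediator."""
    E = policies.shape[0]
    eps_t = 0.5 / np.sqrt(E ** 2 + t)                       # forced-exploration floor epsilon_t
    omega_hat = empirical_oracle_weights(mu_hat, policies)  # hypothetical helper (Equation (5))
    cum_weights += floor_weights(omega_hat, eps_t)
    e_t = int(np.argmax(cum_weights - counts))              # mediator whose count lags its target most
    counts[e_t] += 1
    return e_t, cum_weights, counts
```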
**Stopping Rule.** Since the goal lies in identifying the optimal arm \(a\in[K]\), we stick to the successful Generalized Likelihood Ratio (GLR) statistic \(Z(t)\) to decide when enough information has been gathered to confidently recommend which arm has the highest mean. More specifically, the algorithm stops whenever the following condition is verified:
\[Z(t)\coloneqq t\min_{\mathbf{\lambda}\in\text{Alt}(\mathbf{\hat{\mu}})}\sum_{a=1}^{K} \frac{N_{a}(t)}{t}d(\hat{\mu}_{a}(t),\lambda_{a})\geq\beta(t,\delta), \tag{10}\]
where \(N_{a}(t)\) denotes the number of pulls to arm \(a\) at time \(t\), and \(\beta(t,\delta)\) represents an exploration rate that is commonly set to \(\log\left(\frac{Ct^{a}}{\delta}\right)\), for some \(\alpha>1\) and appropriate constant \(C\).
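For the illustrative case of Gaussian arms with unit variance, the GLR statistic in Equation (10) reduces to a pairwise comparison between the empirical best arm and every other arm, as sketched below; the constants \(C\) and \(\alpha\) in the threshold are assumed values.

```python
import numpy as np

def glr_statistic_gaussian(counts, means):
    """Z(t) of Equation (10), specialized to unit-variance Gaussian arms."""
    counts = np.asarray(counts, dtype=float)
    means = np.asarray(means, dtype=float)
    best = int(np.argmax(means))
    z = np.inf
    for a in range(len(means)):
        if a == best:
            continue
        if counts[a] == 0 or counts[best] == 0:
            return 0.0                      # an unexplored arm cannot be ruled out yet
        gap = means[best] - means[a]
        z = min(z, counts[best] * counts[a] / (counts[best] + counts[a]) * gap ** 2 / 2.0)
    return z

def should_stop(counts, means, t, delta, C=2.0, alpha=1.2):
    """Stop as soon as Z(t) exceeds the exploration rate beta(t, delta) = log(C t^alpha / delta)."""
    return glr_statistic_gaussian(counts, means) >= np.log(C * t ** alpha / delta)
```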
**Recommendation Rule.** For the same reasons as the stopping rule, we rely on the recommendation rule of Garivier and Kaufmann (2016). Namely, our algorithm recommends the arm with the highest empirical mean \(\hat{a}_{\tau_{\delta}}=\operatorname*{argmax}_{a\in[K]}\hat{\mu}_{a}(\tau_{\delta})\).
### Theoretical Results
At this point, we are ready to present our theoretical analysis on the performance of our algorithm. First of all, we notice that, due to the choice of the stopping and recommendation rules, our algorithm is \(\delta\)-correct. More precisely, \(\mathbb{P}_{\mathbf{\mu},\mathbf{\pi}}\left(\tau_{\delta}<+\infty,\hat{a}_{\tau_{\delta}}\neq a^{*}\right)\leq\delta\) holds for all bandit models \(\mathbf{\mu}\) and all mediators' policies that satisfy Assumption 1.
We now continue by presenting sample complexity results. As in previous works, we begin with an almost sure convergence result, stated in the following theorem.
**Theorem 2**.: _Consider any \(\mathbf{\mu}\in\mathcal{M}\) and any \(\mathbf{\pi}\) such that Assumption 1 is satisfied. Let \(\alpha\in(1,e/2]\). It holds that:_
\[\mathbb{P}_{\mathbf{\mu},\mathbf{\pi}}\left(\limsup_{\delta\to 0}\frac{\tau_{ \delta}}{\log\left(1/\delta\right)}\leq\alpha T^{*}(\mathbf{\mu},\mathbf{\pi})\right)=1. \tag{11}\]
Similarly, it is possible to derive a result that directly controls the expectation of the stopping time \(\tau_{\delta}\). More specifically, we prove the following result.
**Theorem 3**.: _Consider any \(\mathbf{\mu}\in\mathcal{M}\) and any \(\mathbf{\pi}\) such that Assumption 1 is satisfied. Let \(\alpha\in(1,e/2]\). It holds that:_
\[\limsup_{\delta\to 0}\frac{\mathbb{E}_{\mathbf{\mu},\mathbf{\pi}}[\tau_{ \delta}]}{\log(1/\delta)}\leq\alpha T^{*}(\mathbf{\mu},\mathbf{\pi}). \tag{12}\]
In other words, Theorem 3 shows that in the asymptotic regime of \(\delta\to 0\), our algorithm matches the lower bound presented in Theorem 1.
### Technical Remarks
We conclude this section by discussing the technical challenges behind our analysis, which all arise from the peculiar type of feedback together with the non-uniqueness of the oracle weights \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\). More specifically, one of the main difficulties arises from the fact that the stopping rule deals with _arm-dependent_ quantities (e.g., the number of pulls at each arm), while the sampling rule and the forced exploration that arises from C-tracking solely focus on the optimal _mediators'_ proportions. In this sense, in order to guarantee optimality, at least in an almost-sure sense (i.e., Theorem 2), we need to ensure that our sampling rule, which aims at tracking the mediators' oracle weights \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\), actually converges to some \(\widetilde{\mathbf{\pi}}\in\widetilde{\mathbf{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})\). To this end, we derive the following result.
**Lemma 1**.: _Consider a sequence of pulls generated by C-tracking on the mediators' oracle weights \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\). Then, it holds that:_
\[\mathbb{P}_{\mathbf{\mu},\mathbf{\pi}}\left(\lim_{t\to+\infty}\inf_{\widetilde{\mathbf{\pi}}\in\widetilde{\mathbf{\pi}}^{*}(\mathbf{\mu},\mathbf{\pi})}\max_{a\in[K]}\left|\frac{N_{a}(t)}{t}-\widetilde{\pi}_{a}\right|=0\right)=1.\]
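For reference, a hedged sketch of one C-tracking step over the mediators, as used by the sampling rule referenced in Lemma 1; the \(\epsilon\)-projection and the forced-exploration schedule below are one common choice rather than the exact constants of our analysis.

```python
# Hedged sketch of C-tracking on the mediators' oracle weights.
import numpy as np

def clip_to_simplex(w, eps):
    # simple renormalization approximating the projection onto {x_i >= eps, sum x = 1}
    w = np.maximum(np.asarray(w, dtype=float), eps)
    return w / w.sum()

class CTracking:
    def __init__(self, n_mediators):
        self.cum_w = np.zeros(n_mediators)    # sum of projected oracle weights over rounds
        self.counts = np.zeros(n_mediators)   # number of times each mediator was queried

    def select(self, oracle_w, t):
        eps = 0.5 / np.sqrt(len(self.cum_w) ** 2 + t + 1)    # forced-exploration floor
        self.cum_w += clip_to_simplex(oracle_w, eps)
        mediator = int(np.argmin(self.counts - self.cum_w))  # track cumulative weights
        self.counts[mediator] += 1
        return mediator
```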
As one might expect, a modified version of Lemma 1 needs to be derived also for the proof of Theorem 3, in which we directly control the expectation of \(\tau_{\delta}\). In this case, we need to carefully define an event \(\mathcal{E}\) under which the following holds.
**Lemma 2**.: _There exists a constant \(T_{\epsilon}\) such that for \(T\geq T_{\epsilon}\) it holds that, on \(\mathcal{E}\), \(\mathcal{C}\)-tracking verifies:_
\[\forall t\geq\sqrt{T},\quad\inf_{\widetilde{\mathbf{\pi}}\in\widetilde{\mathbf{\pi}}^ {*}(\mathbf{\mu},\mathbf{\pi})}\max_{a\in[K]}\left|\frac{N_{a}(t)}{t}-\widetilde{\pi}_ {a}\right|\leq\epsilon. \tag{13}\]
As our analysis will reveal, here, and, in particular, in controlling the probability of the complementary event \(\mathcal{E}^{c}\), the sampling process of the mediators' feedback will technically play a crucial role. 5 We refer the reader to our proofs in the appendix for further details on this point.
Footnote 5: Intuitively, since Equation 13 is used to prove the asymptotic optimality, we need to ensure that \(\mathcal{E}^{c}\) is unlikely to happen.
## Unknown Mediators Policies
Up to this point, we have discussed the best-arm identification problem under mediators' feedback assuming that the learner has perfect knowledge of the set of mediators' policies \(\mathbf{\pi}\). In this section, we extend our results to the case in which the agent does not know \(\mathbf{\pi}\), but instead has to learn it directly from data.
Before detailing our theoretical findings, we notice that Theorem 1 still represents a valid lower bound for this more intricate setting. For this reason, one might be tempted to extend our algorithm, which tracks the optimal mediators' proportions \(\mathbf{\omega}^{*}(\mathbf{\mu},\mathbf{\pi})\), to the case in which the set \(\mathbf{\pi}\) is unknown to the learner. To this end, let \(\mathbf{\hat{\pi}}(t)\) be the matrix containing empirical estimates of each mediator's policy. Then, it is sufficient to modify the C-tracking sampling rule presented in the previous section by computing any maximizer of the empirical version of Equation (5) where _both_ \(\mathbf{\mu}\) and \(\mathbf{\pi}\) are replaced with \(\mathbf{\hat{\mu}}(t)\) and \(\mathbf{\hat{\pi}}(t)\), respectively. 6 As we shall now see, this simple modification allows us to derive results that are equivalent to Theorems 2 and 3. More specifically, we begin by showing the following almost sure convergence result.
Footnote 6: We notice that, in the previous section, \(\mathbf{\mu}\) was replaced with \(\mathbf{\hat{\mu}}(t)\) while \(\mathbf{\pi}\) was used directly since it was available to the learner.
**Theorem 4**.: _Consider any \(\mathbf{\mu}\in\mathcal{M}\) and any \(\mathbf{\pi}\) such that Assumption 1 is satisfied. Let \(\mathbf{\pi}\) be unknown to the learner prior to interacting with the environment. Let \(\alpha\in(1,e/2]\). It holds that:_
\[\mathbb{P}_{\mathbf{\mu},\mathbf{\pi}}\left(\limsup_{\delta\to 0}\frac{\tau_{ \delta}}{\log\left(1/\delta\right)}\leq\alpha T^{*}(\mathbf{\mu},\mathbf{\pi}) \right)=1. \tag{14}\]
Furthermore, as done in the previous section, it is possible to derive a result that directly controls the expectation of the stopping time \(\tau_{\delta}\).
**Theorem 5**.: _Consider any \(\mathbf{\mu}\in\mathcal{M}\) and any \(\mathbf{\pi}\) such that Assumption 1 is satisfied. Let \(\mathbf{\pi}\) be unknown to the learner prior to interacting with the environment. Let \(\alpha\in(1,e/2]\). It holds that:_
\[\limsup_{\delta\to 0}\frac{\mathbb{E}_{\mathbf{\mu},\mathbf{\pi}}[\tau_{ \delta}]}{\log(1/\delta)}\leq\alpha T^{*}(\mathbf{\mu},\mathbf{\pi}). \tag{15}\]
We now proceed by analyzing the results of Theorems 4 and 5. First of all, they fully extend the results of Theorems 2 and 3 to the unknown-policies setting. Notice, in particular, that the theoretical results of the unknown-policy setting, i.e., Equations (14) and (15), are _completely equivalent_ to the ones previously presented for the case in which \(\mathbf{\pi}\) is available to the learner, i.e., Equations (11) and (12). Furthermore, as a direct consequence of the fact that Theorem 1 represents a lower bound to the problem, it follows that the simple modification we presented at the beginning of this section is sufficient to derive an _asymptotically optimal_ algorithm even in the case in which \(\mathbf{\pi}\) is not available to the learner. Most importantly, these considerations imply that not knowing \(\mathbf{\pi}\) does not affect the statistical complexity of the problem, at least in the asymptotic regime \(\delta\to 0\). As a direct consequence, all the analysis and discussion presented in the lower bound section hold equivalently in both the known and the unknown policy settings.
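For completeness, a minimal sketch of the plug-in estimate \(\mathbf{\hat{\pi}}(t)\) used by the modified sampling rule; the uniform fallback for mediators that have not been queried yet is an illustrative choice, not a prescription of our analysis.

```python
# Hedged sketch: empirical estimate of the mediators' policies from observed pulls.
import numpy as np

class PolicyEstimator:
    def __init__(self, n_mediators, n_arms):
        self.counts = np.zeros((n_mediators, n_arms))

    def update(self, mediator, arm):
        self.counts[mediator, arm] += 1.0     # mediator proposed this arm

    def estimate(self):
        totals = self.counts.sum(axis=1, keepdims=True)
        uniform = 1.0 / self.counts.shape[1]  # fallback before the first query
        return np.where(totals > 0, self.counts / np.maximum(totals, 1.0), uniform)
```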
## Conclusions and Future Works
This study presents a theoretical investigation into the significant problem of best-arm identification under mediators' feedback. By decoupling the agent's decisions from the arms' pulls, we are able to extend the classical bandit interaction protocol to model various sequential decision-making problems of interest, including off-policy learning, active off-policy learning, and partially controllable environments. Within this context, we have first analyzed the statistical complexity of the problem by deriving a lower bound on the time required to identify the optimal arm with high probability. Given this result, we have presented several discussions of this complexity, including a game-theoretic interpretation, a comparison with the classical best-arm identification problem, and a characterization of the problems for which finite sample complexity results hold. We have then proposed an asymptotically optimal algorithm, both for the case in which the mediators' policies are known to the learner and for the case in which they are unknown.
To conclude, we outline different exciting future research directions. First of all, as commonly done in the recent best-arm identification literature, our work focused on the asymptotic regime of \(\delta\to 0\). It would be interesting to investigate the mediators' feedback setting under the moderate regime of \(\delta\) as well. Secondly, we addressed problems with a fixed set of mediators' policies. Extending our formulation to cases where the mediators' policies evolve over time presents a captivating avenue for future research. In this context, we conjecture that, due to the negative result related to Assumption 1, some structure on the evolution dynamics is required. Finally, although in our work we focused on the bandit scenario, it would be interesting to extend the mediators' feedback setting to the task of PAC learning nearly optimal RL policies while restricting the policy space to a set of fixed behaviors. Indeed, it is worth remarking that, at least to the best of our knowledge, the mediators' feedback problem has been investigated in RL only in the context of regret minimization (Papini et al., 2019; Metelli et al., 2021).
|
2306.12791 | On the Construction of Near-MDS Matrices | The optimal branch number of MDS matrices makes them a preferred choice for
designing diffusion layers in many block ciphers and hash functions. However,
in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch
numbers offer a better balance between security and efficiency as a diffusion
layer, compared to MDS matrices. In this paper, we study NMDS matrices,
exploring their construction in both recursive and nonrecursive settings. We
provide several theoretical results and explore the hardware efficiency of the
construction of NMDS matrices. Additionally, we make comparisons between the
results of NMDS and MDS matrices whenever possible. For the recursive approach,
we study the DLS matrices and provide some theoretical results on their use.
Some of the results are used to restrict the search space of the DLS matrices.
We also show that over a field of characteristic 2, any sparse matrix of order
$n\geq 4$ with fixed XOR value of 1 cannot be an NMDS when raised to a power of
$k\leq n$. Following that, we use the generalized DLS (GDLS) matrices to
provide some lightweight recursive NMDS matrices of several orders that perform
better than the existing matrices in terms of hardware cost or the number of
iterations. For the nonrecursive construction of NMDS matrices, we study
various structures, such as circulant and left-circulant matrices, and their
generalizations: Toeplitz and Hankel matrices. In addition, we prove that
Toeplitz matrices of order $n>4$ cannot be simultaneously NMDS and involutory
over a field of characteristic 2. Finally, we use GDLS matrices to provide some
lightweight NMDS matrices that can be computed in one clock cycle. The proposed
nonrecursive NMDS matrices of orders 4, 5, 6, 7, and 8 can be implemented with
24, 50, 65, 96, and 108 XORs over $\mathbb{F}_{2^4}$, respectively. | Kishan Chand Gupta, Sumit Kumar Pandey, Susanta Samanta | 2023-06-22T10:47:52Z | http://arxiv.org/abs/2306.12791v2 | # On the Construction of Near-MDS Matrices
###### Abstract
The optimal branch number of MDS matrices makes them a preferred choice for designing diffusion layers in many block ciphers and hash functions. However, in lightweight cryptography, Near-MDS (NMDS) matrices with sub-optimal branch numbers offer a better balance between security and efficiency as a diffusion layer, compared to MDS matrices. In this paper, we study NMDS matrices, exploring their construction in both recursive and nonrecursive settings. We provide several theoretical results and explore the hardware efficiency of the construction of NMDS matrices. Additionally, we make comparisons between the results of NMDS and MDS matrices whenever possible. For the recursive approach, we study the DLS matrices and provide some theoretical results on their use. Some of the results are used to restrict the search space of the DLS matrices. We also show that over a field of characteristic \(2\), any sparse matrix of order \(n\geq 4\) with fixed XOR value of \(1\) cannot be an NMDS when raised to a power of \(k\leq n\). Following that, we use the generalized DLS (GDLS) matrices to provide some lightweight recursive NMDS matrices of several orders that perform better than the existing matrices in terms of hardware cost or the number of iterations. For the nonrecursive construction of NMDS matrices, we study various structures, such as circulant and left-circulant matrices, and their generalizations: Toeplitz and Hankel matrices. In addition, we prove that Toeplitz matrices of order \(n>4\) cannot be simultaneously NMDS and involutory over a field of characteristic \(2\). Finally, we use GDLS matrices to provide some lightweight NMDS matrices that can be computed in one clock cycle. The proposed nonrecursive NMDS matrices of orders \(4\), \(5\), \(6\), \(7\), and \(8\) can be implemented with \(24\), \(50\), \(65\), \(96\), and \(108\) XORs over \(\mathbb{F}_{2^{4}}\), respectively.
Keywords:Diffusion Layer: Branch number MDS matrix Near-MDS matrix DLS matrix XOR count.
## 1 Introduction
Shannon's concepts of confusion and diffusion [38] are well demonstrated through the design of symmetric key cryptographic primitives. In many cases, the round function of the design uses both non-linear and linear layers to achieve confusion and diffusion, respectively. The focus of this paper is the construction of linear diffusion layers whose purpose is to maximize the spreading of internal dependencies. Optimal diffusion layers can be achieved by utilizing MDS matrices with maximum branch numbers. An example of this is the MixColumn operation in AES [13] which employs a \(4\times 4\) MDS matrix. The use of an MDS matrix makes AES robust against differential and linear attacks with a low number of rounds, which is ideal for low-latency applications.
The growth of ubiquitous computing such as IoT has highlighted new security needs, driving research in the field of lightweight cryptography. Currently, there is a growing focus on the study of lightweight diffusion matrices. Numerous proposals for constructing lightweight MDS and involutory MDS matrices have been made [7, 24, 30, 32, 36, 39]. It is important to note that elements of an MDS matrix must be nonzero, resulting in a high hardware implementation cost. To minimize hardware cost, Guo et al. [16, 17] proposed a novel design approach of using recursive MDS matrices, which have a low hardware area but require additional clock cycles. Subsequently, researchers have focused on designing recursive MDS matrices, producing a significant number of results [3, 8, 20, 22, 23, 34, 40, 42, 43]. Instead of recursive approach, Sajadieh et al. [35] constructed lightweight MDS matrices by the composition of different sparse matrices and the block cipher FUTURE [21] use this idea for the MDS matrix in its MixColumn operation.
We have gained a thorough understanding of local optimization techniques so far. As a consequence, recent efforts have shifted to addressing the problem on a more fundamental level, viewing it as the well-known Shortest Linear Straight-Line Problem (SLP), which was first used in [11] for global optimization of a pre-defined linear function. This leads to the construction of more lightweight MDS matrices [15, 26, 28, 44].
However, the trade-off between security and efficiency may not be optimal with MDS and recursive MDS matrices. Near-MDS (NMDS) matrices are characterized by having sub-optimal branch numbers, leading to a slower diffusion speed and a smaller number of minimum active Sboxes per round compared to ciphers using MDS matrices. However, studies such as [2, 4] have demonstrated that NMDS matrices, when used with a well-chosen permutation, can improve security against differential and linear cryptanalysis. Thus, NMDS matrices offer better trade-offs between security and efficiency when used in the diffusion layer of lightweight cryptography. Some recent lightweight block ciphers, such as PRIDE [1], Midori [4], MANTIS [6], FIDES [9] and PRINCE [10], have utilized NMDS matrices. The importance of lightweight symmetric key primitives with low power, low energy, or low latency is growing, and NMDS matrices are commonly employed in the construction of lightweight block ciphers. Nevertheless, NMDS and recursive NMDS matrices have received comparatively little study in the literature so far. This inspires us to introduce new results on NMDS matrices, and in this work we look at
both recursive NMDS matrices and NMDS matrices that can be constructed using one clock cycle.
**Contributions.** The primary focus of this paper is the study of NMDS matrices, examining their construction using both recursive and nonrecursive approaches. We present various theoretical results and also discuss the hardware efficiency of the construction of NMDS matrices. We also compare the results for NMDS matrices with those for MDS matrices whenever possible.
A naive exhaustive search for higher-order recursive NMDS matrices among DLS matrices is impractical. In this regard, we present some theoretical results that restrict the search space to a small domain. Following that, we observe that there are no \(k\)-NMDS (see Definition 4) DLS matrices over \(\mathbb{F}_{2^{4}}\) for orders \(n=5,6,7,8\) and \(k\in\{n-1,n\}\) if the fixed XOR (say \(\mathcal{K}\)) of the matrix is taken to be less than \(\left\lceil\frac{n}{2}\right\rceil\).
In [20, Theorem 3], the authors proved that for a DLS matrix \(M\) of order \(n\geq 2\), \(M^{k}\) is not MDS for \(k<n-1\). In Theorem 2, we demonstrate that for a DLS matrix \(M\) of order \(n\geq 2\), \(M^{k}\) is not NMDS for \(k<n-2\). Furthermore, we discuss the impact of the permutation \(\rho\) in a DLS matrix \(DLS(\rho;D_{1},D_{2})\) for the construction of recursive NMDS matrices. More specifically, in Corollary 6, we show that if \(\rho\) is not an \(n\)-cycle, a DLS matrix of order \(n\geq 2\), cannot be \(k\)-NMDS for \(k\leq n\).
In [19, Theorem 2], it was proved that no sparse matrix of order \(n\geq 3\) with \(\mathcal{K}=1\) can yield an MDS matrix when raised to the power \(n\). In Theorem 4, we show that over a field of characteristic \(2\), no sparse matrix of order \(n\geq 4\) with \(\mathcal{K}=1\) can yield an NMDS matrix when raised to any power \(k\leq n\). As a consequence, it is shown that for a \(k\)-NMDS matrix of order \(4\) with \(k\leq 4\), the lowest XOR count is \(2r\) over the field \(\mathbb{F}_{2^{r}}\).
Using GDLS matrices, we also introduce some lightweight recursive NMDS matrices of orders \(4,5,6\) and \(7\) that can be implemented with \(8,13,13,\) and \(18\) XORs over \(\mathbb{F}_{2^{4}}\), respectively. Besides searching over \(\mathbb{F}_{2^{4}}\), we also provide some efficient \(k\)-NMDS GDLS matrices with \(k\in\{n-1,n\}\) over \(GL(8,\mathbb{F}_{2})\) for various orders \(n\). Table 2 compares our results with the known results.
We examine different structures for the nonrecursive construction of NMDS matrices, including circulant and left-circulant matrices, as well as their generalizations such as Toeplitz and Hankel matrices. Proposition 3 in [27] demonstrates that circulant matrices of order \(n>4\) cannot be both NMDS and involutory over \(\mathbb{F}_{2^{r}}\). In Theorem 9, we prove that this result also holds for Toeplitz matrices.
According to [24, Lemma 5], no orthogonal circulant matrix of order \(2^{n}\) over the field \(\mathbb{F}_{2^{r}}\) can be MDS for \(n\geq 2\). However, in Remark 22, we see that NMDS circulant orthogonal matrices of any order may exist over the field \(\mathbb{F}_{2^{r}}\). A similar result holds for left-circulant matrices. Table 4 is provided to compare the involutory and orthogonal properties of MDS and NMDS matrices constructed from the circulant, left-circulant, Toeplitz, and Hankel families.
For Hadamard, circulant, or left-circulant NMDS matrices of order \(n\), we must have \(\mathcal{K}\geq n(n-2)\), which results in a high implementation cost for constructing NMDS matrices from such matrices. To address this issue, we use composition of different GDLS matrices to construct nonrecursive NMDS matrices. Our proposed nonrecursive NMDS matrices for orders 4, 5, 6, 7, and 8 can be implemented using 24, 50, 65, 96, and 108 XORs over \(\mathbb{F}_{2^{4}}\), respectively. Table 5 compares our results for nonrecursive NMDS matrices with the known results.
To compare with MDS matrices, we examine certain well-known results of MDS matrices and apply them to NMDS matrices. For example, in Remark 3 and Remark 4, we observe that a submatrix of an NMDS matrix may not necessarily be an NMDS matrix, and an NMDS matrix can also be singular. However, we prove that the inverse of a nonsingular NMDS matrix is also an NMDS matrix. In Corollary 3, we prove that if \(M\) is an NMDS matrix and \(D_{1}\) and \(D_{2}\) are two nonsingular diagonal matrices, then \(D_{1}MD_{2}\) is also an NMDS matrix.
#### Outline.
The remaining sections of this paper are structured as follows. In Section 2, definitions and preliminaries are presented. Section 3 discusses DLS matrices for the construction of recursive NMDS matrices. Some lightweight recursive NMDS matrices of various orders, using GDLS matrices, are proposed in Section 4. Section 5 discusses circulant, left-circulant, Toeplitz and Hankel matrices for the construction of nonrecursive NMDS matrices. Section 6 provides some lightweight nonrecursive NMDS matrices constructed from GDLS matrices. Section 7 concludes the paper and discusses some possible future work.
## 2 Definition and Preliminaries
Before discussing NMDS matrices in depth, let us recall some basic notations for finite fields, their representations, and the construction of matrices.
Let \(\mathbb{F}_{2^{r}}\) be the finite field of \(2^{r}\) elements and \(\mathbb{F}_{2^{r}}^{n}\) be the set of vectors of length \(n\) with entries from the finite field \(\mathbb{F}_{2^{r}}\). Let \(\mathbb{F}_{2^{r}}^{*}\) be the multiplicative group of \(\mathbb{F}_{2^{r}}\). It is a well-established fact that elements of a finite field with characteristic 2 can be represented as vectors with coefficients in \(\mathbb{F}_{2}\). In other words, there exists a vector space isomorphism from \(\mathbb{F}_{2^{r}}\) to \(\mathbb{F}_{2}^{r}\) defined by \(x=(x_{1}\alpha_{1}+x_{2}\alpha_{2}+\cdots+x_{r}\alpha_{r})\rightarrow(x_{1}, x_{2},\ldots,x_{r})\), where \(\{\alpha_{1},\alpha_{2},\ldots,\alpha_{r}\}\) is a basis of \(\mathbb{F}_{2^{r}}\). If \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{r}}\), every nonzero element of \(\mathbb{F}_{2^{r}}\) can be expressed as a power of \(\alpha\). Hence, \(\mathbb{F}_{2^{r}}^{*}=\left\{1,\alpha,\alpha^{2},\alpha^{3},\ldots,\alpha^{2^{r}-2}\right\}\). In favor of a more compact notation, we also use a hexadecimal representation of a finite field \(\mathbb{F}_{2^{r}}\) throughout the paper. For instance, \(\mathbb{F}_{2^{4}}/0\)x13 denotes the finite field \(\mathbb{F}_{2^{4}}\) constructed by the polynomial \(x^{4}+x+1\).
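A minimal sketch of this representation for \(\mathbb{F}_{2^{4}}/0\)x13, with field elements stored as integers whose bits are the coordinates with respect to the polynomial basis (Python is used here purely for illustration).

```python
# Arithmetic in F_{2^4}/0x13: elements are 4-bit integers, addition is XOR.
IRR, R = 0x13, 4   # x^4 + x + 1

def gf_mul(a, b, irr=IRR, r=R):
    """Carry-less multiplication followed by reduction modulo the irreducible polynomial."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << r):
            a ^= irr
    return res

# alpha = 0x2 is a primitive element: its powers enumerate the 15 nonzero elements.
powers, x = [], 1
for _ in range(15):
    powers.append(x)
    x = gf_mul(x, 0x2)
assert len(set(powers)) == 15 and x == 1
```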
A square matrix is a matrix with the same number of rows and columns. An \(n\times n\) matrix is known as a matrix of order \(n\). The ring of \(n\times n\) matrices over \(\mathbb{F}_{2^{r}}\) is denoted by \(M_{n}(\mathbb{F}_{2^{r}})\) and the general linear group consisting of nonsingular \(n\times n\) matrices over \(\mathbb{F}_{2^{r}}\) is denoted by \(GL(n,\mathbb{F}_{2^{r}})\).
The diffusion power of a linear transformation, as specified by a matrix, is quantified by its branch numbers [13, pages 130-132].
Definition 1: [13, page \(132\)] The differential branch number, \(\beta_{d}(M)\), of a matrix \(M\) of order \(n\) over the finite field \(\mathbb{F}_{2^{r}}\) is defined as the smallest number of nonzero components in both the input vector \(x\) and the output vector \(Mx\), as we consider all nonzero \(x\) in \((\mathbb{F}_{2^{r}})^{n}\) i.e.
\[\beta_{d}(M)=min_{x\neq\mathbf{0}}(w(x)+w(Mx))\text{,}\]
where \(w(x)\) represents the number of nonzero components in the vector \(x\).
Definition 2: [13, page \(132\)] The linear branch number, \(\beta_{l}(M)\), of a matrix \(M\) of order \(n\) over the finite field \(\mathbb{F}_{2^{r}}\) is defined as the smallest number of nonzero components in both the input vector \(x\) and the output vector \(M^{T}x\), as we consider all nonzero \(x\) in \((\mathbb{F}_{2^{r}})^{n}\) i.e.
\[\beta_{l}(M)=min_{x\neq\mathbf{0}}(w(x)+w(M^{T}x))\text{,}\]
where \(w(x)\) represents the number of nonzero components in the vector \(x\).
Remark 1: [13, page 144] The differential branch number \(\beta_{d}(M)\) of a matrix \(M\) is equal to the minimum distance of the linear code \(C\) generated by the matrix \([I\ |\ M]\). Furthermore, \(\beta_{l}(M)\) is equivalent to the minimum distance of the dual code of \(C\).
Remark 2: [13, page 132] It is important to note that the maximum value for both \(\beta_{d}(M)\) and \(\beta_{l}(M)\) is \(n+1\). While \(\beta_{d}(M)\) and \(\beta_{l}(M)\) are not always equal, a matrix with the highest possible differential or linear branch number will have the same value for both.
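For small orders and small fields, these branch numbers can be computed by brute force directly from the definitions; a sketch over \(\mathbb{F}_{2^{4}}/0\)x13, illustrated on a \(2\times 2\) MDS matrix (whose differential branch number is \(3=n+1\)).

```python
# Brute-force differential branch number over F_{2^4}/0x13 (feasible for small n only).
from itertools import product

def gf_mul(a, b, irr=0x13, r=4):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << r):
            a ^= irr
    return res

def mat_vec(M, x):
    # matrix-vector product; addition in the field is XOR
    out = []
    for row in M:
        s = 0
        for m_ij, x_j in zip(row, x):
            s ^= gf_mul(m_ij, x_j)
        out.append(s)
    return out

def wt(v):
    return sum(1 for c in v if c != 0)

def diff_branch_number(M, q=16):
    n = len(M)
    return min(wt(x) + wt(mat_vec(M, x))
               for x in product(range(q), repeat=n) if any(x))

assert diff_branch_number([[0x1, 0x1], [0x1, 0x2]]) == 3   # 2x2 MDS matrix
# the linear branch number is obtained in the same way from the transpose of M
```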
The singleton bound tells us that \(d\leq n-k+1\) for a \([n,k,d]\) code \(C\). We call \(C\) an MDS (maximum distance separable) code if \(d=n-k+1\). In this context, we state an important theorem from coding theory.
Theorem 1.: [31, page 321] _An \([n,k,d]\) code \(C\) with generator matrix \(G=[I\ |\ M]\), where \(M\) is a \(k\times(n-k)\) matrix, is MDS if and only if every square submatrix (formed from any \(i\) rows and any \(i\) columns, for any \(i=1,2,\ldots,min\{k,n-k\}\)) of \(M\) is nonsingular._
Another way to define an MDS matrix is as follows.
**Fact 1**: _A square matrix \(M\) is an MDS matrix if and only if every square submatrix of \(M\) is nonsingular._
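Fact 1 translates directly into an (exponential-time) test that enumerates every square submatrix; a sketch over \(\mathbb{F}_{2^{4}}/0\)x13, suitable only for the small orders considered in this paper.

```python
# MDS test via Fact 1: every square submatrix must be nonsingular.
from itertools import combinations

def gf_mul(a, b, irr=0x13, r=4):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << r):
            a ^= irr
    return res

def gf_det(A):
    # cofactor expansion; in characteristic 2 the alternating signs vanish
    if len(A) == 1:
        return A[0][0]
    d = 0
    for j in range(len(A)):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        d ^= gf_mul(A[0][j], gf_det(minor))
    return d

def is_mds(M):
    n = len(M)
    for k in range(1, n + 1):
        for rows in combinations(range(n), k):
            for cols in combinations(range(n), k):
                if gf_det([[M[i][j] for j in cols] for i in rows]) == 0:
                    return False
    return True

assert is_mds([[0x1, 0x1], [0x1, 0x2]]) and not is_mds([[0x1, 0x1], [0x1, 0x1]])
```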
In this paper, we discuss the diffusion matrices with the highest branch numbers among non-MDS matrices.
Definition 3: [27] A matrix \(M\) of order \(n\) is called a near-MDS (in short NMDS) matrix if \(\beta_{d}(M)=\ \beta_{l}(M)=n\).
In [14], an \([n,k,d]\) NMDS code \(C\) is defined by the conditions \(d=n-k\) and \(d^{\prime}=k\), where \(d^{\prime}\) is the minimum distance of the dual code of \(C\). Thus, by Remark 1, for a matrix \(M\) of order \(n\) with \(\beta_{d}(M)=\beta_{l}(M)=n\), the matrix \([I\mid M]\) is exactly a generator matrix of a \([2n,n,n]\) NMDS code. Thus, the following characterization of an NMDS matrix is obtained.
Lemma 1: _[_27, 41_]_ _Let \(M\) be a non-MDS matrix of order \(n\), where \(n\) is a positive integer with \(n\geq 2\). Then \(M\) is NMDS if and only if for any \(1\leq g\leq n-1\) each \(g\times(g+1)\) and \((g+1)\times g\) submatrix of \(M\) has at least one \(g\times g\) nonsingular submatrix._
In Lemma 1, if we assume \(g=1\), we can deduce that there is at most one \(0\) in each row and each column of an NMDS matrix. Hence, we have the following corollary.
Corollary 1: _An NMDS matrix \(M\) of order \(n\) must contain at least \(n^{2}-n\) nonzero elements._
Remark 3: We know that every square submatrices of an MDS matrix are MDS. However, a square submatrix of an NMDS matrix may not be an NMDS matrix. For example, consider the matrix
\[M=\begin{bmatrix}0&\alpha&1&\alpha+1\\ \alpha+1&0&\alpha&1\\ 1&\alpha+1&0&\alpha\\ \alpha&1&\alpha+1&0\end{bmatrix}\]
over \(\mathbb{F}_{2^{4}}/0\)x\(13\), where \(\alpha\) is a root of \(x^{4}+x+1\). Then it can be checked that \(M\) is an NMDS matrix. However, the \(2\times 2\) submatrix \(\begin{bmatrix}1&\alpha+1\\ \alpha&1\end{bmatrix}\) of \(M\) is an MDS matrix.
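The claim in Remark 3 can be confirmed with a direct implementation of the Lemma 1 test; the sketch below assumes the helpers gf_mul, gf_det and is_mds from the previous sketch are in scope.

```python
# NMDS test via Lemma 1 (builds on gf_mul, gf_det and is_mds defined above).
from itertools import combinations

def has_nonsingular(M, rows, cols, g):
    """Does the submatrix of M on the given rows and columns contain a nonsingular g x g submatrix?"""
    return any(gf_det([[M[i][j] for j in cs] for i in rs]) != 0
               for rs in combinations(rows, g) for cs in combinations(cols, g))

def is_nmds(M):
    n = len(M)
    if is_mds(M):            # an NMDS matrix is non-MDS by definition
        return False
    for g in range(1, n):
        for r in combinations(range(n), g):
            for c in combinations(range(n), g + 1):
                # g x (g+1) submatrix on (r, c) and (g+1) x g submatrix on (c, r)
                if not has_nonsingular(M, r, c, g) or not has_nonsingular(M, c, r, g):
                    return False
    return True

alpha = 0x2   # root of x^4 + x + 1
M = [[0x0, alpha, 0x1, alpha ^ 1],
     [alpha ^ 1, 0x0, alpha, 0x1],
     [0x1, alpha ^ 1, 0x0, alpha],
     [alpha, 0x1, alpha ^ 1, 0x0]]
print(is_nmds(M))   # expected: True (cf. Remark 3)
```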
Corollary 2: _If \(A\) is an NMDS matrix, then \(A^{T}\) is also an NMDS matrix._
Remark 4: We know that an MDS matrix cannot be singular, whereas an NMDS matrix can be singular. For example, the NMDS matrix \(M\) in Remark 3 is singular.
Lemma 2: _For a nonsingular NMDS matrix \(M\), its inverse \(M^{-1}\) is also an NMDS matrix._
Proof: Let \(G=[I\mid M]\) be a generator matrix of an NMDS code. Elementary row operations change \(G=[I\mid M]\) to \(G^{\prime}=[M^{-1}\mid I]\). Since elementary row operations do not change the code, \(G^{\prime}\) is also a generator matrix of the NMDS code. So the code defined by \([I\mid M^{-1}]\) has the same minimum distance. Therefore, \(M^{-1}\) is an NMDS matrix.
Definition 4: Let \(k\) be a positive integer. A matrix \(B\) is said to be recursive NMDS or \(k\)-NMDS if the matrix \(M=B^{k}\) is NMDS. If \(B\) is \(k\)-NMDS then we say \(B\) yields an NMDS matrix.
Example 1: For example, the matrix
\[B=\left[\begin{array}{cccc}0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 1&\alpha&0&0\end{array}\right]\]
is 10-NMDS, where \(\alpha\) is a primitive element of the field \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\).
Definition 5: A matrix \(D\) of order \(n\) is said to be diagonal if \((D)_{i,j}=0\) for \(i\neq j\). By setting \(d_{i}=(D)_{i,i}\), we denote the diagonal matrix \(D\) as \(diag(d_{1},d_{2},\ldots,d_{n})\).
It is obvious to see that the determinant of \(D\) is \(det(D)=\prod_{i=1}^{n}d_{i}\). Hence, the diagonal matrix \(D\) is nonsingular if and only if \(d_{i}\neq 0\) for \(1\leq i\leq n\).
It is worth exploring whether the NMDS property of a matrix remains invariant under the elementary row operation of multiplying a row or column by a nonzero scalar. Therefore, we have the following lemma.
Lemma 3: _Let \(M\) be an NMDS matrix over \(\mathbb{F}_{2^{r}}\). Then \(M^{\prime}\), obtained by multiplying a row (column) of \(M\) by any \(c\in\mathbb{F}_{2^{r}}^{*}\), is also NMDS._
Proof: Let \(B^{\prime}\) be an arbitrary \(g\times(g+1)\) (\((g+1)\times g\)) submatrix of \(M^{\prime}\), and let \(B\) be the corresponding submatrix of \(M\). Since \(M\) is an NMDS matrix, \(B\) must have a nonsingular \(g\times g\) submatrix \(I\). Let \(I^{\prime}\) be the corresponding submatrix of \(B^{\prime}\). If the submatrix \(I^{\prime}\) contains the row (column) that has been multiplied by \(c\), then \(det(I^{\prime})=c\cdot det(I)\); otherwise \(det(I^{\prime})=det(I)\). Thus, \(B^{\prime}\) contains a nonsingular \(g\times g\) submatrix \(I^{\prime}\). Therefore, by Lemma 1, \(M^{\prime}\) is also an NMDS matrix.
Let \(D=diag(c_{1},c_{2},\ldots,c_{n})\) be a diagonal matrix. Then the multiplication \(DM\) (\(MD\)) amounts to multiplying the \(i\)-th row (\(i\)-th column) of \(M\) by \(c_{i}\) for \(1\leq i\leq n\). Hence, we can generalize Lemma 3 as follows.
Corollary 3: _Let \(M\) be an NMDS matrix, then for any nonsingular diagonal matrices \(D_{1}\) and \(D_{2}\), \(D_{1}MD_{2}\) will also be an NMDS matrix._
Corollary 4: _Let \(B\) be a recursive NMDS matrix, then for any nonsingular diagonal matrix \(D\), \(DBD^{-1}\) will also be a recursive NMDS matrix._
Proof: Suppose \(D\) is a nonsingular diagonal matrix and \(B\) is \(k\)-NMDS i.e. \(B^{k}\) is an NMDS matrix. Then we have
\[(DBD^{-1})^{k}=\underbrace{DBD^{-1}\cdot DBD^{-1}\cdot\ \ldots\ \cdot DBD^{-1}}_{k- \mathrm{times}}=DB^{k}D^{-1}\]
Now since \(D\) is a nonsingular diagonal matrix and \(B^{k}\) is an NMDS matrix, from Corollary 3, we can say that \(DB^{k}D^{-1}\) is again an NMDS matrix. Hence, \(DBD^{-1}\) is a recursive NMDS matrix. More specifically \(DBD^{-1}\) is \(k\)-NMDS.
Definition 6: Let \(\rho\) be an element of the symmetric group \(S_{n}\) (set of all permutations over the set \(\{1,2,\ldots,n\}\) ). Then by \(\rho=\ [i_{1},\ i_{2},\ i_{3},\ \ldots\,i_{n}]\), where \(1\leq i_{j}\leq n\) for \(j=1,2,3,\ldots,n\), we mean \(\rho=\left(\begin{array}{cccc}1&2&3&\ldots&n\\ i_{1}&i_{2}&i_{3}&\ldots&i_{n}\end{array}\right)\) i.e. \(1\to i_{1}\), \(2\to i_{2}\), \(\ldots\), \(n\to i_{n}\).
Then the product of two permutations \(\rho_{1}=\ [i_{1},\ i_{2},\ i_{3},\ \ldots\,i_{n}]\) and \(\rho_{2}=\ [j_{1},\ j_{2},\ j_{3},\ \ldots\,j_{n}]\) is given by \(\rho_{1}\cdot\rho_{2}=[i_{j_{1}},\ i_{j_{2}},\ i_{j_{3}},\ \ldots\,i_{j_{n}}]\) and the inverse of a permutation \(\rho=\ [i_{1},\ i_{2},\ i_{3},\ \ldots\,i_{n}]\) is the permutation \(\delta=\ [j_{1},\ j_{2},\ j_{3},\ \ldots\,j_{n}]\) such that \(\rho\cdot\delta=\delta\cdot\rho=\ [1,2,3,\ \ldots\,n]\).
Example 2: For the two permutations \(\rho_{1}=[2,3,4,5,1,6]\) and \(\rho_{2}=[1,4,3,2,6,5]\) over \(S_{6}\), their product is given by
\[\rho_{1}\cdot\rho_{2}=[2,5,4,3,6,1]\ \text{and}\ \rho_{2}\cdot\rho_{1}=[4,3,2,6,1,5].\]
The inverse of the permutation \(\rho_{1}=[2,3,4,5,1,6]\) is given by \(\delta=[5,1,2,3,4,6]\).
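These operations are easy to mechanize; a small sketch of the one-line notation, checked against Example 2.

```python
# One-line permutation notation: composition and inversion (1-indexed images).
def compose(rho1, rho2):
    """rho1 . rho2 = [i_{j_1}, ..., i_{j_n}] with rho1 = [i_1,...,i_n], rho2 = [j_1,...,j_n]."""
    return [rho1[j - 1] for j in rho2]

def invert(rho):
    delta = [0] * len(rho)
    for pos, image in enumerate(rho, start=1):
        delta[image - 1] = pos
    return delta

rho1, rho2 = [2, 3, 4, 5, 1, 6], [1, 4, 3, 2, 6, 5]
assert compose(rho1, rho2) == [2, 5, 4, 3, 6, 1]
assert compose(rho2, rho1) == [4, 3, 2, 6, 1, 5]
assert invert(rho1) == [5, 1, 2, 3, 4, 6]
assert compose(rho1, invert(rho1)) == [1, 2, 3, 4, 5, 6]
```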
Definition 7: A permutation matrix \(P\) of order \(n\) related to a permutation \(\rho=\ [i_{1},\ \ \ i_{2},\ \ \ i_{3},\ \ \ldots,\ \ \ i_{n}]\) is the binary matrix which is obtained from the identity matrix of order \(n\) by permuting the rows (columns) according to the permutation \(\rho\).
In this paper, we will use the row permuted identity matrix to represent permutation matrices. For instance, the permutation matrix \(P\) related to the permutation \([4,2,3,1]\) is given by
\[P=\ \begin{bmatrix}0&0&0&1\\ 0&1&0&0\\ 0&0&1&0\\ 1&0&0&0\end{bmatrix}.\]
Note that a permutation matrix is invertible and the inverse of \(P\) is the transpose of \(P\), i.e. \(P^{-1}=P^{T}\). The product of two permutation matrices is a permutation matrix.
Definition 8: Let \(\rho\) be an element of the symmetric group \(S_{n}\). Then \(\rho\) is called a \(k\) length cycle or \(k\)-cycle, written \((j_{1}\ j_{2}\ j_{3}\ \ldots\ j_{k})\), if \(\rho=\begin{pmatrix}j_{1}&j_{2}&j_{3}&\ldots&j_{k}\\ j_{2}&j_{3}&j_{4}&\ldots&j_{1}\end{pmatrix}\) i.e. \(j_{1}\to j_{2}\), \(j_{2}\to j_{3}\), \(\ldots\), \(j_{k}\to j_{1}\).
For example, the permutation \(\rho=[3,2,4,1,5]\) can be written as \((1\ 3\ 4)\). So \(\rho\) is a 3-cycle in \(S_{5}\).
The branch number of a matrix is invariant under row (column) permutation. So, from [18], we get the following result.
Lemma 4: _[_18_]_ _For any permutation matrices \(P\) and \(Q\), the branch numbers of the two matrices \(M\) and \(PMQ\) are the same._
Corollary 5: _Let \(B\) be a recursive NMDS matrix, then for any permutation matrix \(P\), \(PBP^{-1}\) will also be a recursive NMDS matrix._
Definition 9: We say that a matrix \(M_{1}\) is diagonal (permutation) similar to a matrix \(M_{2}\) if \(M_{1}=DM_{2}D^{-1}\) (\(M_{1}=PM_{2}P^{-1}\)) for some nonsingular diagonal matrix \(D\) (permutation matrix \(P\)).
**Fact 2**: _A matrix that is diagonal (permutation) similar to a \(k\)-NMDS matrix is again a \(k\)-NMDS matrix._
**XOR count:** The efficiency of hardware implementation in a given operation is typically assessed by the amount of area required. It is worth noting that the diffusion matrix can only be implemented using XOR gates, which leads to the following definition.
Definition 10: [25] The direct XOR count (d-XOR count) of a matrix \(M\in GL(r,\mathbb{F}_{2})\), denoted by d-XOR(\(M\)), is determined by
\[\text{d-XOR}(M)=wt(M)-r,\]
where \(wt(M)\) denotes the number of ones in the matrix \(M\).
Definition 11: [25] The sequential XOR count (s-XOR count) of a matrix \(M\in GL(r,\mathbb{F}_{2})\), denoted by s-XOR(\(M\)), is equal to the smallest non-negative integer \(t\) such that \(M\) can be represented as
\[M=P\prod_{k=1}^{t}{(I+E_{i_{k},j_{k}})},\]
where \(P\) is a permutation matrix and \(E_{i_{k},j_{k}}\), with \(i_{k}\neq j_{k}\) for all \(k\), is a binary matrix with 1 as \((i_{k},j_{k})\)-th entry and 0 elsewhere.
When a basis of \(\mathbb{F}_{2^{r}}\) is given, the multiplication of \(\alpha\in\mathbb{F}_{2^{r}}\) given by \(x\mapsto\alpha x\) can be expressed as the multiplication of a matrix in \(GL(r,\mathbb{F}_{2})\). The matrix depends not only on \(\alpha\), but also on the choice of basis of \(\mathbb{F}_{2^{r}}\) over \(\mathbb{F}_{2}\). Let \(M_{\alpha,B}\) be the matrix representation of the mapping \(x\mapsto\alpha x\) with respect to the basis \(B\).
Definition 12: [25] Let \(\alpha\in\mathbb{F}_{2^{r}}\). Then the d-XOR count and s-XOR count of \(\alpha\), denoted by d-XOR(\(\alpha\)) and s-XOR(\(\alpha\)) respectively, is given by
\[\text{d-XOR($\alpha$)}=\ \min_{B}\text{d-XOR($M_{\alpha,B}$)}\ \ \text{and}\ \ \text{s-XOR($\alpha$)}=\ \min_{B}\text{s-XOR($M_{\alpha,B}$)},\]
where the minimum is taken over all bases of \(\mathbb{F}_{2^{r}}\) over \(\mathbb{F}_{2}\).
The d-XOR count (s-XOR count) of \(M_{\alpha,B}\) generally differs from the d-XOR count (s-XOR count) of \(M_{\alpha,B^{\prime}}\) for different bases \(B\) and \(B^{\prime}\). For more details about the two XOR count metrics, see [25] and the related references mentioned therein.
We denote the XOR count of \(\alpha\in\mathbb{F}_{2^{r}}\) as XOR(\(\alpha\)), which can either be the d-XOR count or the s-XOR count, unless specified otherwise.
The cost of implementing a diffusion matrix can be determined by adding up the XOR counts of each entry in the matrix. If a row has \(k_{i}\) nonzero elements from the field \(\mathbb{F}_{2^{r}}\), these \(k_{i}\) elements must be combined, which incurs a fixed XOR cost of \((k_{i}-1)r\). Therefore, if an \(n\) order matrix has \(k_{1},k_{2},\ldots,k_{n}\) nonzero elements in its \(n\) rows, the matrix incurs a fixed XOR cost of \(\sum_{i=1}^{n}(k_{i}-1)r\) in \(\mathbb{F}_{2^{r}}\).
The sum, \(\mathcal{K}=\sum_{i=1}^{n}(k_{i}-1)\), is referred to as the fixed XOR of the matrix in this paper. For an MDS matrix of order \(n\), its fixed XOR is \(\mathcal{K}=n(n-1)\). The XOR count of an \(n\) order matrix \(M\), denoted by \(XOR(M)\), over the field \(\mathbb{F}_{2^{r}}\) is calculated as \(\sum_{i,j=1}^{n}XOR((M)_{i,j})+\mathcal{K}\cdot r\), where \(XOR((M)_{i,j})\) is the XOR count of the entry \((M)_{i,j}\) in \(M\).
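A sketch of this bookkeeping over \(\mathbb{F}_{2^{4}}/0\)x13: per-element d-XOR counts are computed with respect to the polynomial basis (and are therefore only upper bounds on the minimized counts of Definition 12), and the XOR count of a matrix is assembled from them together with the fixed XOR \(\mathcal{K}\).

```python
# d-XOR counts w.r.t. the polynomial basis of F_{2^4}/0x13 and matrix XOR count.
R, IRR = 4, 0x13

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & (1 << R):
            a ^= IRR
    return res

def mult_matrix(alpha):
    """Binary matrix of x -> alpha*x; column j holds the bits of alpha * x^j."""
    return [[(gf_mul(alpha, 1 << j) >> i) & 1 for j in range(R)] for i in range(R)]

def d_xor(alpha):
    return sum(map(sum, mult_matrix(alpha))) - R           # wt(M_alpha) - r

def fixed_xor(M):
    return sum(sum(1 for c in row if c) - 1 for row in M)  # K = sum of (k_i - 1)

def xor_count(M):
    return sum(d_xor(c) for row in M for c in row if c) + fixed_xor(M) * R

print(d_xor(0x1), d_xor(0x2), d_xor(0x3))   # 0 1 5 with respect to this basis
```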
Recently, the search for MDS matrices that can be efficiently implemented using global optimization techniques has received a lot of attention [5, 15, 26, 28, 44]. An implementation that utilizes global optimization can result in significant cost savings compared to local optimization, as common intermediate values can be computed only once and reused. However, this paper focuses on the d-XOR (s-XOR) metric to calculate the implementation cost of the matrices and presents several new NMDS and recursive NMDS matrices with a lower XOR count than previously reported results.
**Other Notations:** Here are the other notations used in the paper.
1. The \((i,j)\)-th entry of a matrix \(A\) is denoted by \((A)_{i,j}\).
2. We denote by \(|A|\) for the number of nonzero entries in the matrix \(A\) and \(|A|\leq|B|\) means the number of nonzero elements in \(A\) is less than or equal to the number of nonzero elements in \(B\).
3. For two \(m\times n\) matrices \(A\) and \(B\), we symbolize \(A\leqq B\) if \((A)_{i,j}\neq 0\) implies \((B)_{i,j}\neq 0\).
4. \(\mathbb{F}_{2}[L]\) denotes the set of polynomials of \(L\) over \(\mathbb{F}_{2}\).
5. For simplicity, we use nonzero positions in each row of a binary matrix as a representation of the matrix. For example, \([[1,2,3],[1,3],[2]]\) represents the binary matrix \(\begin{bmatrix}1&1&1\\ 1&0&1\\ 0&1&0\end{bmatrix}\).
An MDS matrix must have all its entries nonzero. Therefore, any \(n\times n\) matrix cannot be MDS if the number of nonzero entries is less than \(n^{2}\). Whereas, an NMDS matrix of order \(n\) must have at least \(n^{2}-n\) nonzero entries. In the following section, we use this fact to obtain some interesting results on NMDS matrices.
## 3 Construction of Recursive NMDS matrices
The importance of recursive MDS matrices is that they are especially well suited for lightweight implementations: the diffusion layer is constructed by recursively executing the implementation of the sparse matrices, which requires some clock cycles. The use of recursive MDS matrices using companion matrices has been
observed in the PHOTON [16] family of hash functions and the block cipher LED [17] due to their ability to be constructed with a simple LFSR. Subsequently, Generalized-Feistel-Structure (GFS) [42], Diagonal-Serial-Invertible (DSI) [40] and diagonal-like sparse (DLS) [20] matrices were proposed for constructing recursive MDS matrices. On the other hand, the construction of NMDS matrices, as well as recursive NMDS matrices, has received comparatively little study. This inspires us to present some new results on NMDS matrices, and in the following section we consider DLS matrices for the construction of recursive NMDS matrices.
### Construction of NMDS matrices from DLS matrices
Definition 13: [20] Let \(\rho=\ [i_{1},\ i_{2},\ i_{3},\ \ldots\,i_{n}]\) be a permutation such that \(i_{k}\neq k\) for \(k=1,2,\ldots,n\), \(D_{1}\) be a nonsingular diagonal matrix and \(D_{2}\) be a diagonal matrix (may be singular). Then we will call the matrix
\[B=PD_{1}+D_{2}\]
as the diagonal-like sparse (DLS) matrix, where \(P\) is the permutation matrix of order \(n\) related to the permutation \(\rho\). Such a matrix is denoted by \(DLS(\rho;D_{1},D_{2})\).
Example 3: An example of a DLS matrix of order \(4\) is given by
\[DLS(\rho;D_{1},D_{2})=PD_{1}+D_{2}=\begin{bmatrix}0&0&0&1\\ 0&0&1&0\\ 1&0&0&0\\ 0&1&0&0\end{bmatrix}\cdot\begin{bmatrix}a&0&0&0\\ 0&b&0&0\\ 0&0&c&0\\ 0&0&0&d\end{bmatrix}+\begin{bmatrix}e&0&0&0\\ 0&0&0&0\\ 0&0&f&0\\ 0&0&0&0\end{bmatrix}=\begin{bmatrix}e&0&0&d\\ 0&0&c&0\\ a&0&f&0\\ 0&b&0&0\end{bmatrix},\]
where \(P\) is the permutation matrix related to \(\rho=[3,4,2,1]\) and \(D_{1}\)=\(diag(a,\,b,\)\(c,d)\) and \(D_{2}=diag(e,0,f,0)\).
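A sketch of Definition 13 that reproduces Example 3; the permutation-matrix convention used here (column \(j\) of \(P\) has its \(1\) in row \(\rho(j)\)) is one consistent reading of Definition 7, and the numeric values substituted for \(a,\ldots,f\) are placeholders for arbitrary field elements.

```python
# Building DLS(rho; D1, D2) = P*D1 + D2 from the permutation and the two diagonals.
def perm_matrix(rho):
    n = len(rho)
    P = [[0] * n for _ in range(n)]
    for j, image in enumerate(rho):   # column j sends e_j to e_{rho(j)}
        P[image - 1][j] = 1
    return P

def dls(rho, D1, D2):
    """D1, D2 are given as lists of diagonal entries; rho has no fixed points,
    so P*D1 and D2 never overlap and plain addition equals field addition."""
    n, P = len(rho), perm_matrix(rho)
    return [[P[i][j] * D1[j] + (D2[i] if i == j else 0) for j in range(n)]
            for i in range(n)]

a, b, c, d, e, f = 1, 2, 3, 4, 5, 6   # placeholder field elements
M = dls([3, 4, 2, 1], [a, b, c, d], [e, 0, f, 0])
assert M == [[e, 0, 0, d], [0, 0, c, 0], [a, 0, f, 0], [0, b, 0, 0]]   # as in Example 3
```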
In [29], the authors have obtained some lightweight recursive NMDS matrices with the lowest fixed XOR value of one (i.e., \(\mathcal{K}=1\)), but these require a large number of iterations. Thus, such matrices are not useful for low-latency purposes. In this paper, we consider the case of checking whether \(B^{k}\) is NMDS or not for \(k\leq n\).
In [20, Theorem 3], the authors proved that for a DLS matrix \(M\) of order \(n\), \(M^{k}\) contains at least one zero for \(0\leq k<n-1\) and \(n\geq 2\). In Theorem 2, we use a similar proof technique to show that for a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\), \(M\) cannot be \(k\)-NMDS for \(0\leq k<n-2\). For this, we need the following lemmas.
Lemma 5: _[_20_, Lemma 4]_ _For any permutation matrix \(P\) related to some permutation \(\rho\) and any diagonal matrix \(D\), we have \(DP=PD_{1}\) for some diagonal matrix \(D_{1}\). Also, the number of nonzero entries in \(D\) and \(D_{1}\) are same._
Lemma 6: _[_20_, Lemma 5]_ _Let \(M=P+D_{2}\) be an \(n\times n\) matrix, where \(D_{2}\) is a diagonal matrix (may be singular) and \(P\) is a permutation matrix. Then_
\[M^{r}\leqq P^{r}+P^{r-1}D+P^{r-2}D+\cdots+PD+D_{2}^{2}\]
_for \(r\geq 2\), where \(D\) denotes some nonsingular diagonal matrix._
From Corollary 1, we know that any matrix of order \(n\) cannot be NMDS if the number of nonzero entries is less than \(n^{2}-n\). We are using this fact for the proof of the following theorem.
Theorem 2: _Given a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\geq 2\), \(M^{k}\) is not NMDS for \(k<n-2\)._
Proof: We have \(M=DLS(\rho;D_{1},D_{2})\leqq P+D_{2}\), where \(P\) is the permutation matrix corresponding to \(\rho\). From Lemma 6, we have
\[\begin{split}|M^{k}|&\leq|P^{k}+P^{k-1}D+\cdots+PD+ D_{2}^{2}|\\ &\leq|P^{k}D|+|P^{k-1}D|+\cdots+|PD|+|D_{2}^{2}|.\end{split} \tag{1}\]
Since a power of a permutation matrix is again a permutation matrix, we have
\[|M^{k}|\leq\underbrace{|D|+|D|+\cdots+|D|}_{k\text{ times}}+|D_{2}^{2}|\leq kn +n. \tag{2}\]
Now for \(k<n-2\), we have
\[|M^{k}|<(n-2)n+n=n^{2}-n\implies|M^{k}|<n^{2}-n.\]
Hence, \(M^{k}\) is not NMDS for \(k<n-2\).
Remark 5: In Equation 2, if we assume \(k<n-1\), we can see that \(|M^{k}|<n^{2}\). Hence, given a DLS matrix \(M\), \(M^{k}\) is not MDS for \(k<n-1\).
Remark 6: From Theorem 2, we know that for a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\geq 2\), \(M^{k}\) is not NMDS for \(k<n-2\). However, there exist \(k\)-NMDS DLS matrices for \(k=n-2\). For example, consider the DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(4\) with \(\rho=[4,1,2,3]\), \(D_{1}=diag(\alpha^{2},\alpha^{2},\alpha^{2},\alpha^{2})\) and \(D_{2}=diag(\alpha^{2},1,\alpha^{2},1)\), where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) with \(\alpha^{4}+\alpha+1=0\). It can be checked that the matrix
\[M =DLS(\rho;D_{1},D_{2})\] \[=\begin{bmatrix}\alpha^{2}&\alpha^{2}&0&0\\ 0&1&\alpha^{2}&0\\ 0&0&\alpha^{2}&\alpha^{2}\\ \alpha^{2}&0&0&1\end{bmatrix}\]
is \(2\)-NMDS.
We will now explore how the permutation \(\rho\) impacts a DLS matrix \(DLS(\rho;D_{1},D_{2})\) in the construction of recursive NMDS matrices. For this, we will use the following lemma.
Lemma 7: _If \(\rho\) is not an \(n\)-cycle for a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\geq 2\), then \(M^{n-1}\) and \(M^{n}\) contain at most \(n^{2}-2n\) and \(n^{2}-n-2\) nonzero elements respectively._
Proof: If \(\rho\) is not an \(n\)-cycle, then it is a product of disjoint cycles in \(S_{n}\). Suppose that \(\rho=\rho_{1}\rho_{2}\ldots\rho_{v}\), where \(\rho_{i}\) is an \(r_{i}\)-cycle in \(S_{n}\) for \(i=1,2,\ldots,v\) and \(v\in\left\{2,3,\ldots,\left\lfloor\frac{n}{2}\right\rfloor\right\}\). Since, by the definition of a DLS matrix, \(\rho\) has no fixed points, we have \(2\leq r_{i}\leq n-2\) and \(r_{1}+r_{2}+\cdots+r_{v}=n\).
Now from Equation 2, we have \(|M^{n-1}|\leq n^{2}\) and \(|M^{n}|\leq n^{2}+n\). However, we can eliminate some counting of nonzero elements based on the following conditions:
1. For the permutation matrix \(P\) related to \(\rho\), \(P^{r_{i}}D\) has \(r_{i}\) nonzero elements in the diagonal. Also, \(D\) has \(n\) many nonzero elements in the diagonal.
2. \(P^{r_{i}+1}D\) and \(PD\) have \(r_{i}\) many nonzero elements in the same positions.
3. Since \(v\leq\left\lfloor\frac{n}{2}\right\rfloor\), at least two multiples of some \(r_{i}\) occurs in the set \(\left\{2,3,\cdots,n\right\}\). Thus, \(P^{r_{i}}D\) and \(P^{2r_{i}}D\) have at least \(r_{i}\) nonzero elements in the same diagonal position.
Therefore, we have
\[|M^{n-1}|\leq n^{2}-2\cdot(r_{1}+r_{2}+\cdots+r_{v})\leq n^{2}-2n\] \[\text{and }|M^{n}|\leq n^{2}+n-2\cdot(r_{1}+r_{2}+\cdots+r_{v})-r_ {i}\leq n^{2}-n-2.\]
Hence, the result.
Also, if \(\rho\) is not an \(n\)-cycle, then by Condition 1 of the above proof and Equation 2, we can say \(|M^{n-2}|<n^{2}-n\). Hence, by combining the results of Theorem 2 and Lemma 7, we obtain the following result.
Corollary 6: _For a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\geq 2\), if \(\rho\) is not an \(n\)-cycle, then \(M^{k}\) is not NMDS for \(k\leq n\)._
To this point, we have ignored the possibility that the diagonal of \(D_{2}\) contains zero entries. We now look at the case in which \(D_{2}\) is singular, i.e., its diagonal contains at least one zero.
Lemma 8: _In a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\geq 2\), if \(D_{2}\) is singular, then \(M^{k}\) cannot be NMDS for \(k\leq n-2\), even if \(\rho\) is an \(n\)-cycle._
Proof: If \(D_{2}\) is singular, having at least one zero in the diagonal then from Equation 1 we have,
\[|M^{k}| \leq|P^{k}D|+|P^{k-1}D|+\cdots+|PD|+|D_{i=0}|\] \[\leq\underbrace{n+n+\cdots+n}_{k\text{ times}}+(n-1)=kn+n-1. \tag{3}\]
where \(D_{i=0}\) is some diagonal matrix with a zero at the \(i\)-th diagonal position for some \(i\in\left\{1,2,\ldots,n\right\}\). Thus, for \(k\leq n-2\), we have \(|M^{k}|<n^{2}-n\). Hence, \(M\) cannot be \(k\)-NMDS for \(k\leq n-2\).
Remark 7: When \(D_{2}\) is singular, a DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\geq 2\) can still be \(k\)-NMDS for \(k=n-1\). For example, consider the DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(4\) with \(\rho=[4,1,2,3]\), \(D_{1}=diag(1,1,1,1)\) and \(D_{2}=diag(1,0,1,0)\) over \(\mathbb{F}_{2^{r}}\). It can be verified that the matrix
\[M =DLS(\rho;D_{1},D_{2})\] \[=\begin{bmatrix}1&1&0&0\\ 0&0&1&0\\ 0&0&1&1\\ 1&0&0&0\end{bmatrix}\]
is \(3\)-NMDS.
**Discussion:** From [20, Theorem 4], we know that for an \(n\)-MDS DLS matrix of order \(n\), we must have \(\mathcal{K}\geq\left\lceil\frac{n}{2}\right\rceil\). However, a value of \(\mathcal{K}\) smaller than \(\left\lceil\frac{n}{2}\right\rceil\) may already give at least \(n^{2}-n\) nonzero elements when the matrix is raised to the power \(n-1\). For example, consider the DLS matrix \(B=DLS(\rho;D_{1},D_{2})\) of order \(5\) over \(\mathbb{F}_{2^{4}}\), where \(\rho=[5,1,2,3,4]\), \(D_{1}=diag(\alpha^{4},\alpha^{4},1,1,1)\), \(D_{2}=diag(\alpha,0,0,1,0)\) and \(\alpha\) is a root of the constructing polynomial \(x^{4}+x+1\). Then \(B^{4}\) has \(21\) nonzero elements. However, \(B^{4}\) is not an NMDS matrix.
For NMDS matrices, we do not have an analogous lower bound on \(\mathcal{K}\) as in the MDS case. In the next section, we provide some theoretical results about NMDS matrices that help in determining the minimum value of \(\mathcal{K}\) for DLS matrices of order \(n\) that are \(k\)-NMDS with \(k=n-1\) or \(k=n\).
Remark 8: In [19, Theorem 2], the authors prove that any matrix \(M\) of order \(n\) with \(\mathcal{K}=1\) can have at most \(\frac{n(n+3)}{2}-1\) nonzero elements when it is raised to the power \(n\). Hence, for \(n\geq 5\), we have \(|M^{n}|<n^{2}-n\). Thus, for \(n\geq 5\), any matrix of order \(n\) with \(\mathcal{K}=1\) cannot be \(n\)-NMDS.
For \(n=4\), we have \(\frac{n(n+3)}{2}-1>n^{2}-n\). So it may seem that a matrix of order \(4\) with \(\mathcal{K}=1\) can be \(4\)-NMDS. However, in the following theorem, we will see that to be \(4\)-NMDS, a matrix of order \(4\) must have \(\mathcal{K}=2\).
Theorem 3: _There does not exist any \(4\)-NMDS matrix of order \(4\) with \(\mathcal{K}=1\) over a field of characteristic \(2\)._
Proof: A matrix \(M\) of order \(n\) can never be recursive NMDS if one of its rows or columns has all zero entries4. Also, if \(M\) contains exactly \(n\) nonzero elements in such a way that no row or column is all zero, then \(M\) is of the form \(M=PD\), where \(P\) is a permutation matrix and \(D\) is a diagonal matrix. Then, by Lemma 5, any power of \(M\) is again of the form \(P^{\prime}D^{\prime}\) for some permutation matrix \(P^{\prime}\) and diagonal matrix \(D^{\prime}\). Hence, \(M\) cannot be recursive NMDS.
Let \(\mathcal{S}\) be the set of all matrices \(M\) that contain \(n+1\) nonzero elements (so that \(\mathcal{K}=1\)) in such a way that no row or column is all zero. Then each \(M\in\mathcal{S}\) can be written as \(M=PD+A\), where \(A\) has only one nonzero element. Let this nonzero element lie in the \(i\)-th row of \(A\).
Now consider the permutation matrix \(P_{1}\) obtained from the identity matrix by permuting the row \(i\) to row 1. Now
\[P_{1}MP_{1}^{-1} =P_{1}(PD+A)P_{1}^{-1}\] \[=P_{1}PDP_{1}^{-1}+P_{1}AP_{1}^{-1}\]
By Lemma 5, we have \(DP_{1}^{-1}=P_{1}^{-1}D^{\prime}\) for some diagonal matrix \(D^{\prime}\). Thus we have
\[P_{1}MP_{1}^{-1} =P_{1}PP_{1}^{-1}D^{\prime}+P_{1}AP_{1}^{-1}\] \[=QD^{\prime}+A^{\prime},\]
where \(Q=P_{1}PP_{1}^{-1}\), \(A^{\prime}=P_{1}AP_{1}^{-1}\) and \(A^{\prime}\) has the nonzero element in its first row. Therefore, \(M\) is permutation similar to \(QD^{\prime}+A^{\prime}\). Now let \(\mathcal{S}^{\prime}\subset\mathcal{S}\) be the set of all matrices with two nonzero elements in the first row.
Since from Fact 2, we know that a permutation similar to a recursive NMDS matrix is also a recursive NMDS matrix, we simply need to check from the set \(\mathcal{S}^{\prime}\) for finding all recursive NMDS matrices with \(\mathcal{K}=1\).
It can be checked that there are only six matrix structures5 (see (4)) of order \(n=4\) in the set \(\mathcal{S}^{\prime}\) that can potentially be NMDS (i.e., have at least \(12\) nonzero elements) when they are raised to the power \(4\). However, all six structures
Footnote 5: For \(n=4\), there are a total of 72 elements in \(\mathcal{S}^{\prime}\), and by running a computer search, we have observed that there are only 6 matrix structures that have at least 12 nonzero elements when raised to the power 4.
\[\begin{bmatrix}*\,*\,0\,0\\ \,0\,0\,0\,*\\ \,0\,0\,0\,*\\ \,*\,0\,0\,0\end{bmatrix},\begin{bmatrix}*\,0\,0\,*\\ \,*\,0\,0\,0\,0\\ \,0\,0\,*\\ \,0\,0\,*\,0\end{bmatrix},\begin{bmatrix}*\,*\,0\,0\\ \,0\,0\,0\,*\\ \,0\,0\,0\,*\\ \,0\,0\,*\\ \,0\,0\,*\,0\end{bmatrix},\begin{bmatrix}*\,0\,0\,*\\ \,*\,0\,0\,0\,0\\ \,0\,0\,0\,*\\ \,0\,*\,0\,0\end{bmatrix},\begin{bmatrix}*\,0\,0\,*\\ \,*\,0\,0\,0\\ \,0\,0\,*\\ \,0\,*\,0\,0\end{bmatrix}\text{ and }\begin{bmatrix}*\,0\,0\,*\\ \,0\,0\,0\,*\\ \,0\,*\,0\,0\\ \,*\,0\,0\,0\end{bmatrix} \tag{4}\]
are also permutation similar. Now consider the first matrix structure and let
\[M=\begin{bmatrix}a\,\,x_{1}\,\,0\,\,\,0\\ \,0\,\,0\,\,x_{2}\,\,0\\ \,0\,0\,\,0\,\,x_{3}\\ x_{4}\,\,0\,\,0\,\,0\end{bmatrix},\]
where \(a,x_{1},x_{2},x_{3}\) and \(x_{4}\) are some nonzero elements in the field. Now consider the input vector of \(M\) as \([1,ax_{1}^{-1},0,0]^{T}\). The resultant vector after each iteration is
\[\begin{bmatrix}1\\ ax_{1}^{-1}\\ 0\\ 0\end{bmatrix}\xrightarrow[\text{i=1}]{}\begin{bmatrix}0\\ 0\\ 0\\ x_{4}\end{bmatrix}\xrightarrow[\text{i=2}]{}\begin{bmatrix}0\\ 0\\ x_{3}x_{4}\\ 0\end{bmatrix}\xrightarrow[\text{i=3}]{}\begin{bmatrix}0\\ x_{2}x_{3}x_{4}\\ 0\\ 0\end{bmatrix}\xrightarrow[\text{i=4}]{}\begin{bmatrix}x_{1}x_{2}x_{3}x_{4}\\ 0\\ 0\\ 0\end{bmatrix}\]
For each \(i\in\{1,2,3,4\}\), the total number of nonzero entries in the input vector and in the output vector after \(i\) iterations is \(3<4\), i.e., the branch number of \(M^{i}\) is less than \(4\). Therefore, \(M\) is not \(k\)-NMDS for \(k\leq 4\). Hence, there does not exist any \(4\)-NMDS matrix of order \(4\) with \(\mathcal{K}=1\) over a field of characteristic \(2\).
From [19, Lemma 3], we can easily check that for a matrix of order \(n\geq 4\) with \(\mathcal{K}=1\), \(|M^{k}|<n^{2}-n\) for \(k\leq n-1\). Thus, by using Remark 8 and Theorem 3, we have the following result.
Theorem 4: _For \(n\geq 4\), there does not exist any \(k\)-NMDS matrix of order \(n\) with \(\mathcal{K}=1\) and \(k\leq n\) over a field of characteristic \(2\)._
Remark 9: For \(n<4\), there may exist a \(k\)-NMDS matrix of order \(n\) with \(\mathcal{K}=1\) and \(k\leq n\). For example, the matrix \(B=\begin{bmatrix}0&1&0\\ 0&0&1\\ 1&1&0\end{bmatrix}\) is \(3\)-NMDS.
**Fact 3**: _Over a field of characteristic \(2\), a DLS matrix of order \(n\) with \(\mathcal{K}=1\) cannot be \(k\)-NMDS for \(n\geq 4\) and \(k\leq n\)._
Now we will discuss some equivalence classes of DLS matrices for the construction of recursive NMDS matrices.
### Equivalence classes of DLS matrices
If the DLS matrix \(DLS(\rho;D_{1},D_{2})\) of order \(n\) has fixed XOR \(\mathcal{K}=l\), the diagonal of \(D_{2}\) has \(l\) nonzero elements. Therefore, there are \({}^{n}C_{l}\) possible arrangements for distributing these \(l\) nonzero elements along the diagonal of \(D_{2}\).
Also, in a DLS matrix \(DLS(\rho;D_{1},D_{2})\), the permutation \(\rho=[i_{1},i_{2},\ldots,i_{n}]\) must satisfy \(i_{k}\neq k\) for \(k=1,2,\ldots,n\). In other words, \(\rho\) represents a derangement of a set of \(n\) elements. Therefore, there are \(D(n)\) possible choices6 for \(\rho\) in a DLS matrix, where \(D(n)\) denotes the number of derangements of a set of \(n\) elements [33].
Footnote 6: The formula for \(D(n)\) is given by \(D(n)=(n-1)[D(n-1)+D(n-2)]\) with initial conditions \(D(1)=0\) and \(D(0)=1\). For example, the values of \(D(n)\) are \(1,2,9,44\), and \(265\) for \(n=2,3,4,5\), and \(6\), respectively.
As a result, the search space for finding a recursive NMDS matrix from the DLS matrices over the field \(\mathbb{F}_{2^{r}}\) is given by \(D(n)\cdot{}^{n}C_{l}\cdot(2^{r})^{(n+l)}\). For example, the search space for finding a \(6\)-NMDS matrix from a DLS matrix of order \(6\), with \(\mathcal{K}=3\), over the field \(\mathbb{F}_{2^{4}}\) is \(265\cdot 20\cdot 2^{36}\approx 2^{48}\).
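A quick sketch of this estimate, with \(D(n)\) computed from the derangement recurrence in the footnote.

```python
# Search-space estimate D(n) * C(n, l) * (2^r)^(n + l) for DLS matrices.
from math import comb, log2

def derangements(n):
    d_prev, d_curr = 1, 0            # D(0) = 1, D(1) = 0
    if n == 0:
        return d_prev
    for k in range(2, n + 1):
        d_prev, d_curr = d_curr, (k - 1) * (d_curr + d_prev)
    return d_curr

def search_space(n, l, r):
    return derangements(n) * comb(n, l) * (2 ** r) ** (n + l)

# n = 6, K = l = 3 over F_{2^4}: about 2^48 candidates, matching the text above.
print(derangements(6), round(log2(search_space(6, 3, 4)), 1))   # 265 48.4
```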
However, we have drastically reduced the search space by defining some equivalence classes of DLS matrices.
Theorem 5: _Let \(a,a_{1},a_{2},\ldots,a_{n},a^{\prime}_{1},a^{\prime}_{2},\ldots,a^{\prime}_{n}\in\mathbb{F}_{2^{r}}^{*}\) be such that \(a=\prod_{i=1}^{n}a_{i}=\prod_{i=1}^{n}a^{\prime}_{i}\). Then for any diagonal matrix \(D_{2}\) over \(\mathbb{F}_{2^{r}}\), the DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\) is \(k\)-NMDS if and only if \(M^{\prime}=DLS(\rho;D^{\prime}_{1},D_{2})\) is \(k\)-NMDS, where \(k\in\{n-1,n\}\), \(D_{1}=diag(a_{1},a_{2},\ldots,a_{n})\) and \(D^{\prime}_{1}=diag(a^{\prime}_{1},a^{\prime}_{2},\ldots,a^{\prime}_{n})\)._
Proof: Suppose \(\rho=\ [i_{1},\ \ i_{2},\ \ i_{3},\ \ \ldots\,\ i_{n}]\) and \(P\) is the permutation matrix corresponding to \(\rho\).
Now for any nonsingular diagonal matrix \(D=diag(d_{1},d_{2},\ldots,d_{n})\), we have
\[DMD^{-1}=D(PD_{1}+D_{2})D^{-1}=DPD_{1}D^{-1}+D_{2}.\]
Now by Lemma 5, we have \(DP=PD^{\prime}\) where \(D^{\prime}=diag(d_{i_{1}},d_{i_{2}},\ldots,d_{i_{n}})\). Thus, we have
\[DMD^{-1}=P(D^{\prime}D_{1}D^{-1})+D_{2}.\]
If \(D^{\prime}D_{1}D^{-1}=D^{\prime}_{1}\), then we have
\[DMD^{-1}=PD^{\prime}_{1}+D_{2}=M^{\prime}. \tag{5}\]
Now if \(D^{\prime}D_{1}D^{-1}=D^{\prime}_{1}\), we have \(D^{\prime}D_{1}=D^{\prime}_{1}D\). Therefore, we have
\[\left.\begin{array}{rl}a_{1}d_{i_{1}}&=a^{\prime}_{1}d_{1}\\ a_{2}d_{i_{2}}&=a^{\prime}_{2}d_{2}\\ &\vdots\\ a_{n}d_{i_{n}}&=a^{\prime}_{n}d_{n}\end{array}\right\}\qquad\Longrightarrow \qquad\left\{\begin{array}{rl}d_{1}&=a_{1}d_{i_{1}}(a^{\prime}_{1})^{-1} \\ d_{2}&=a_{2}d_{i_{2}}(a^{\prime}_{2})^{-1}\\ &\vdots\\ d_{n}&=a_{n}d_{i_{n}}(a^{\prime}_{n})^{-1}.\end{array}\right. \tag{6}\]
From Corollary 6, we know that a DLS matrix of order \(n\) can be \(k\)-NMDS, with \(k\leq n\), only when \(\rho\) is an \(n\)-cycle. Thus, since \(\rho\) is an \(n\)-cycle, we have
\[a_{1}a_{2}\ldots a_{n}=a^{\prime}_{1}a^{\prime}_{2}\ldots a^{\prime}_{n}=a.\]
Now choosing \(d_{1}=1\), from Equation 6, we get the values of other \(d_{j}\)'s in terms of \(a_{i}\)'s and \(a^{\prime}_{i}\)'s for \(j=2,3,\ldots,n\). Also, from Corollary 4 and Equation 5, we can say that \(M\) is \(k\)-NMDS if and only if \(M^{{}^{\prime}}\) is \(k\)-NMDS.
Corollary 7: _Let \(a=\prod_{i=1}^{n}a_{i}\) for \(a_{1},a_{2},\ldots,a_{n}\in\mathbb{F}_{2^{r}}^{*}\). Then for any diagonal matrix \(D_{2}\) over \(\mathbb{F}_{2^{r}}\), the DLS matrix \(M=DLS(\rho;D_{1},D_{2})\) of order \(n\) is \(k\)-NMDS if and only if \(M^{{}^{\prime}}=DLS(\rho;D^{{}^{\prime}}_{1},D_{2})\) is \(k\)-NMDS, where \(k\in\{n-1,n\}\), \(D_{1}=diag(a_{1},a_{2},\ldots,a_{n})\) and \(D^{{}^{\prime}}_{1}=diag(a,1,1,\ldots,1)\)._
Remark 10: For any \(c\in\mathbb{F}_{2^{r}}^{*}\), if \(M\) is \(k\)-NMDS, then \(cM\) is also \(k\)-NMDS. Thus, if \(\rho\) is an \(n\)-cycle permutation, \(M=DLS(\rho;D_{1},D_{2})\) is diagonal similar to \(M^{\prime}=DLS(\rho;D^{\prime}_{1},D^{\prime}_{2})\), where \(D_{1}=diag(a_{1},a_{2},\ldots,a_{n})\), \(D^{\prime}_{1}=diag(c^{n}a,1,1,\ldots,1)\), \(D^{\prime}_{2}=c\cdot D_{2}\) and \(a=\prod_{i=1}^{n}a_{i}\). We know that \(x\mapsto x^{2^{l}}\) is an isomorphism over \(\mathbb{F}_{2^{r}}\). So when \(n=2^{l}\), there exists an element \(c=a^{-1/n}\in\mathbb{F}_{2^{r}}^{*}\). Hence, when \(n=2^{l}\), we can say that \(M\) is diagonal similar to \(M^{\prime\prime}=DLS(\rho;D^{\prime\prime}_{1},D^{\prime\prime}_{2})\), where \(D^{\prime\prime}_{1}=diag(1,1,1,\ldots,1)\) and \(D^{\prime\prime}_{2}\) is some diagonal matrix. Therefore, for \(k\in\{n-1,n\}\), \(M\) is \(k\)-NMDS if and only if \(M^{\prime\prime}\) is also \(k\)-NMDS.
We know that a matrix that is permutation similar to an NMDS matrix is again an NMDS matrix. Thus, we can reduce the search space further by eliminating the permutation-similar matrices from the search space. For this, we need the following lemma.
Lemma 9: _Let \(M_{1}=DLS(\rho_{1};D_{1},D_{2})\) be a DLS matrix of order \(n\) and let \(\rho_{2}\in S_{n}\) be conjugate to \(\rho_{1}\). Then \(M_{1}\) is \(k\)-NMDS if and only if \(M_{2}=DLS(\rho_{2};D_{1}^{{}^{\prime}},D_{2}^{{}^{\prime}})\) is \(k\)-NMDS, where \(D_{1}^{{}^{\prime}}\) and \(D_{2}^{{}^{\prime}}\) are some diagonal matrices._
Proof: Since \(\rho_{1}\) and \(\rho_{2}\) are conjugate, we have \(\sigma\rho_{1}\sigma^{-1}=\rho_{2}\), for some \(\sigma\in S_{n}\). Let \(P_{1},P_{2}\) and \(P\) be the permutation matrices related to \(\rho_{1},\rho_{2}\) and \(\sigma\) respectively. Then we have
\[PM_{1}P^{-1} =P(P_{1}D_{1}+D_{2})P^{-1}=PP_{1}D_{1}P^{-1}+PD_{2}P^{-1}\] \[=PP_{1}P^{-1}D_{1}^{{}^{\prime}}+PP^{-1}D_{2}^{{}^{\prime}},\]
where \(D_{1}P^{-1}=P^{-1}D_{1}^{{}^{\prime}}\) and \(D_{2}P^{-1}=P^{-1}D_{2}^{{}^{\prime}}\) for some diagonal matrices \(D_{1}^{{}^{\prime}}\) and \(D_{2}^{{}^{\prime}}\). Thus, we have \(PM_{1}P^{-1}=P_{2}D_{1}^{{}^{\prime}}+D_{2}^{{}^{\prime}}=M_{2}\). Since \(PM_{1}P^{-1}=M_{2}\), from Corollary 5 we can say that \(M_{1}\) is \(k\)-NMDS if and only if \(M_{2}\) is \(k\)-NMDS.
Remark 11: If \(D_{2}\) is singular, a DLS matrix \(M=DLS(\rho_{1};D_{1},D_{2})\) of order \(n\) cannot be \(k\)-NMDS for \(k\leq n-2\). Also, \(\rho\) must be an \(n\)-cycle for \(M\) to be \(k\)-NMDS with \(k=n-1\) or \(k=n\). In addition, the \(n\)-cycles in \(S_{n}\) are conjugate with each other. Therefore, to find the \(k\)-NMDS (with \(k=n-1\) and \(k=n\)) DLS matrices, we need to check only for the DLS matrices associated with one fixed \(n\)-cycle \(\rho\).
Now consider \(\mathbb{D}(n,\mathbb{F}_{2^{r}})\) to be the set of all DLS matrices \(DLS(\rho;D_{1},D_{2})\) of order \(n\), with \(\mathcal{K}=\left\lceil\frac{n}{2}\right\rceil\), over the field \(\mathbb{F}_{2^{r}}\) and define
\[\mathbb{D}^{{}^{\prime}}(n,\mathbb{F}_{2^{r}})=\{B\in\mathbb{D}(n,\mathbb{F}_ {2^{r}}):\ B=P^{{}^{\prime}}D_{1}^{{}^{\prime}}+D_{2}^{{}^{\prime}}\},\]
where \(P^{{}^{\prime}}\) is the permutation matrix related to the \(n\) length cycle \([2,3,4,\ldots,n-1,n,1]\)7 and \(D_{1}^{{}^{\prime}}=diag(a,1,1,\ldots,1)\).
Footnote 7: By Remark 11, any \(n\) length cycle can be chosen for the set \(\mathbb{D}^{{}^{\prime}}(n,\mathbb{F}_{2^{r}})\).
Thus, to find the \(k\)-NMDS (with \(k=n-1\) and \(k=n\)) DLS matrices over \(\mathbb{F}_{2^{r}}\), we need to check only for the DLS matrices in the set \(\mathbb{D}^{{}^{\prime}}(n,\mathbb{F}_{2^{r}})\).
Remark 12: From the discussion of [20, Section 3.3], we know that if \(\rho=[2,3,4,\ldots,n-1,n,1]\) and \(D_{2}\) has any two consecutive zero entries, then the DLS matrix of order \(n\) cannot be \(n\)-MDS. However, this result is not true for NMDS matrices. For example, consider the DLS matrix \(B=DLS(\rho;D_{1},D_{2})\) of order 4 over \(\mathbb{F}_{2^{4}}\), where \(\rho=[2,3,4,1]\), \(D_{1}=diag(1,1,1,1)\), \(D_{2}=diag(1,\alpha,0,0)\) and \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) with \(\alpha^{4}+\alpha+1=0\). Then it can be checked that \(B\) is 4-NMDS.
Thus, for finding \(k\)-NMDS DLS matrices with \(k\in\{n-1,n\}\) and \(\mathcal{K}=l\), we need to check all the \({}^{n}C_{l}\) arrangements of the \(l\) nonzero elements in the diagonal of \(D_{2}\). Therefore, the search space for finding \(k\)-NMDS DLS matrices, with \(k\in\{n-1,n\}\), over the field \(\mathbb{F}_{2^{r}}\) has been reduced from \(D(n)\cdot{}^{n}C_{l}\cdot(2^{r})^{(n+l)}\) to \({}^{n}C_{l}\cdot(2^{r})^{(1+l)}\). Then, by exhaustive search in the restricted domain, we obtain the results for the existence of \(k\)-NMDS DLS matrices over \(\mathbb{F}_{2^{4}}\) and \(\mathbb{F}_{2^{8}}\) for \(n=4,5,6,7,8\) listed in Table 1.
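For concreteness, the two search-space sizes can be compared with a few lines of arithmetic (a sketch; the helper names are ours):

```python
# Compare the full DLS search space D(n) * C(n, l) * (2^r)^(n + l) with the
# restricted space C(n, l) * (2^r)^(1 + l) obtained from the reductions above.
from math import comb, log2

def derangements(n: int) -> int:
    d = [1, 0]                                   # D(0), D(1)
    for k in range(2, n + 1):
        d.append((k - 1) * (d[-1] + d[-2]))
    return d[n]

def full_space(n: int, l: int, r: int) -> int:
    return derangements(n) * comb(n, l) * (2 ** r) ** (n + l)

def reduced_space(n: int, l: int, r: int) -> int:
    return comb(n, l) * (2 ** r) ** (1 + l)

# Example quoted earlier: order 6, K = 3, over GF(2^4).
n, l, r = 6, 3, 4
print(f"full    ~ 2^{log2(full_space(n, l, r)):.1f}")     # about 2^48.4
print(f"reduced ~ 2^{log2(reduced_space(n, l, r)):.1f}")  # about 2^20.3
```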
## 4 Construction of Recursive NMDS matrices from GDLS matrices
In this section, we present some lightweight recursive NMDS matrices of orders \(4,5,6,7\), and \(8\) from the GDLS matrices, introduced in [20].
Definition 14: [20] Let \(\rho_{1}=\ [i_{1},\ i_{2},\ i_{3},\ \ldots\,i_{n}]\) and \(\rho_{2}=\ [j_{1},\ j_{2},\ j_{3},\ \ldots\,j_{n}]\) be two permutations such that \(i_{k}\neq j_{k}\) for \(k=1,2,\ldots,n\), \(D_{1}\) be a nonsingular diagonal matrix and \(D_{2}\) be a diagonal matrix (may be singular). Then we will call the matrix
\[B=P_{1}D_{1}+P_{2}D_{2}\]
as the generalized DLS (GDLS) matrix, where \(P_{1}\) and \(P_{2}\) are the permutation matrices of order \(n\) related to the permutation \(\rho_{1}\) and \(\rho_{2}\) respectively. We will denote these matrices as \(GDLS(\rho_{1},\rho_{2};D_{1},D_{2})\).
From Lemma 8, we know that if \(D_{2}\) is singular, a DLS matrix \(DLS(\rho;D_{1},D_{2})\) can never be \(k\)-NMDS for \(k\leq n-2\). However, this result is not applicable to GDLS matrices. For instance, the GDLS matrix \(M=GDLS(\rho_{1},\rho_{2};D_{1},D_{2})\) of order \(7\) with \(\rho_{1}=[6,7,4,5,2,3,1]\), \(\rho_{2}=[3,2,1,4,7,6,5]\), \(D_{1}=diag(1,1,1,1,1,1,\alpha)\) and \(D_{2}=diag(1,0,\alpha^{2},0,\alpha,0,\alpha^{2})\) is \(5\)-NMDS, where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) with \(\alpha^{4}+\alpha+1=0\).
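A minimal sketch of how a GDLS matrix can be assembled from its defining data. Assumptions: field elements are written as integers with addition as XOR, and the permutation matrix of \(\rho=[i_{1},\ldots,i_{n}]\) sends the \(j\)-th basis vector to the \(i_{j}\)-th one (this convention reproduces the matrices displayed in this paper). The example rebuilds the order-7 matrix from the preceding paragraph, with \(\alpha\) written as 2 and \(\alpha^{2}\) as 4 in \(\mathbb{F}_{2^{4}}/0\)x\(13\).

```python
def gdls(rho1, rho2, d1, d2):
    """Return GDLS(rho1, rho2; D1, D2) = P1*D1 + P2*D2 as a list of rows."""
    n = len(rho1)
    m = [[0] * n for _ in range(n)]
    for j in range(n):
        # column j of P*D carries the diagonal entry D[j] in row rho[j]
        m[rho1[j] - 1][j] ^= d1[j]
        m[rho2[j] - 1][j] ^= d2[j]
    return m

a = 2     # alpha as the polynomial x
a2 = 4    # alpha^2 as x^2; still below x^4, so no modular reduction is needed
B = gdls([6, 7, 4, 5, 2, 3, 1], [3, 2, 1, 4, 7, 6, 5],
         [1, 1, 1, 1, 1, 1, a], [1, 0, a2, 0, a, 0, a2])
for row in B:
    print(row)
```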
Since GDLS matrices have the potential to generate NMDS matrices with fewer iterations, we select them for constructing recursive NMDS matrices. To find a recursive NMDS matrix, we begin with \(k=n-2\), and if this does not yield a result, we increase the value of \(k\).
From the definition of GDLS matrices, it can be observed that the size of the set of all GDLS matrices with \(\mathcal{K}=l\) over the field \(\mathbb{F}_{2^{r}}\) is \(n!\cdot D(n)\cdot^{n}C_{l}\cdot(2^{r})^{(n+l)}\)
\begin{table}
\begin{tabular}{|c|c||c|c|c|c|c|c|} \hline & & \multicolumn{2}{c|}{\(\mathcal{K}=2\)} & \multicolumn{2}{c|}{\(\mathcal{K}=3\)} & \multicolumn{2}{c|}{\(\mathcal{K}=4\)} \\ \cline{3-8} Order \(n\) & \(k\) & over \(\mathbb{F}_{2^{4}}\) & over \(\mathbb{F}_{2^{8}}\) & over \(\mathbb{F}_{2^{4}}\) & over \(\mathbb{F}_{2^{8}}\) & over \(\mathbb{F}_{2^{4}}\) & over \(\mathbb{F}_{2^{8}}\) \\ \hline
4 & 3 & Exists & Exists & Exists & – & – & – & – \\ & 4 & Exists & Exists & – & – & – & – \\ \hline
5 & 4 & DNE & DNE & Exists & Exists & – & – \\ & 5 & DNE & DNE & Exists & Exists & – & – \\ \hline
6 & 5 & DNE & DNE & Exists & Exists & – & – \\ & 6 & DNE & DNE & Exists & Exists & – & – \\ \hline
7 & 6 & DNE & DNE & DNE & * & Exists & Exists \\ & 7 & DNE & DNE & DNE & * & Exists & Exists \\ \hline
8 & 7 & DNE & DNE & DNE & DNE & DNE & Exists \\ & 8 & DNE & DNE & DNE & DNE & DNE & Exists \\ \hline \end{tabular} \({}^{*}\) Over \(\mathbb{F}_{2^{8}}\), we are unable to make a decision for \(n=7\) with \(\mathcal{K}=3\) since we were unable to perform an exhaustive search even in the restricted domain.
\end{table}
Table 1: \(k\)-NMDS DLS matrix of order \(n\) over the field \(\mathbb{F}_{2^{r}}\) with \(k=n-1\) and \(k=n\) (“**DNE**” stands for does not exist).
where \(D(n)\) represents the number of derangements for \(n\) distinct objects. This size is extremely large, making an exhaustive search impractical for obtaining a \(k\)-NMDS matrix of order \(n\geq 5\) from the GDLS matrices.
To minimize the search space, in most cases, we arbitrarily select \(\rho_{1}\) as the \(n\)-cycle \([n,\ 1,\ 2,\ \ldots,\ n-1]\). However, it is important to note that there is no inherent advantage in choosing \(\rho_{1}=[n,1,2,\ldots,n-1]\) for obtaining a recursive NMDS matrix. If we change \(\rho_{1}=[n,1,2,\ldots,n-1]\) to any permutation from \(S_{n}\), there is still a possibility of obtaining a recursive NMDS matrix.
Also, to find lightweight recursive NMDS matrices, we looked through the GDLS matrices of order \(n\) with \(\mathcal{K}=\left\lceil\frac{n}{2}\right\rceil\) whose entries are from the set \(\{1,\,\alpha,\)\(\alpha^{-1},\ \alpha^{2},\ \alpha^{-2}\}\), where \(\alpha\) is a primitive element and a root of the constructing polynomial of the field \(\mathbb{F}_{2^{r}}\). The search space for finding \(k\)-NMDS matrices of order \(n\geq 5\) remains large, even when considering the set \(\{1,\ \alpha,\ \ \alpha^{-1},\ \alpha^{2},\)\(\alpha^{-2}\}\). Therefore, to obtain \(k\)-NMDS matrices of order \(n=5,6,7,8\), we conduct a random search.
Also note that the implementation costs of the matrices presented in this section over a field are calculated by referring to the s-XOR count values of the corresponding field elements as provided in the table of [40, App. B].
### Construction of \(4\times 4\) Recursive NMDS matrices
In this section, we propose a GDLS matrix \(B\) of order \(4\) that yields a recursive NMDS matrix over the field \(\mathbb{F}_{2^{r}}\) for \(r\geq 1\). From Remark 8 and Theorem 3.1, it is known that there are no \(k\)-NMDS matrices of order \(4\) with \(\mathcal{K}=1\) and \(k\leq 4\) over a field of characteristic \(2\). Therefore, to obtain recursive NMDS matrices of order \(4\), we must choose \(\mathcal{K}\geq 2\). The proposed GDLS matrix is constructed by the permutations \(\rho_{1}=[2,3,4,1],\ \rho_{2}=[1,2,3,4]\) and diagonal matrices \(D_{1}=diag(1,1,1,1)\), \(D_{2}=diag(0,1,0,1)\).
\[B=GDLS(\rho_{1},\rho_{2};D_{1},D_{2})=\begin{bmatrix}0&0&0&1\\ 1&1&0&0\\ 0&1&0&0\\ 0&0&1&1\end{bmatrix} \tag{7}\]
The matrix \(B\) is a \(3\)-NMDS matrix with a XOR count of \(2\cdot r=2r\) over the field \(\mathbb{F}_{2^{r}}\).
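The claim can be checked mechanically for a concrete field. The sketch below (an illustration, not the authors' code) builds \(B\) from (7), cubes it over \(\mathbb{F}_{2^{4}}/0\)x\(13\), and brute-forces the branch numbers of \(B^{3}\) and its transpose; here we take "the branch numbers of \(M\) and \(M^{T}\) both equal \(n\)" as the working NMDS test, which is an assumption about the precise definition used earlier in the paper.

```python
IRR = 0x13                  # x^4 + x + 1, so elements of F_{2^4} are 4-bit integers

def gf_mul(a, b):
    """Multiply two elements of F_{2^4}/0x13."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= IRR
    return res

def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s ^= gf_mul(A[i][k], B[k][j])
            C[i][j] = s
    return C

def branch_number(M):
    """Brute-force min over nonzero v of wt(v) + wt(M v) over F_{2^4}^n."""
    n = len(M)
    best = 2 * n + 1
    for x in range(1, 16 ** n):
        v = [(x >> (4 * i)) & 0xF for i in range(n)]
        w = [0] * n
        for i in range(n):
            for k in range(n):
                if M[i][k] and v[k]:
                    w[i] ^= gf_mul(M[i][k], v[k])
        best = min(best, sum(e != 0 for e in v) + sum(e != 0 for e in w))
    return best

B = [[0, 0, 0, 1],
     [1, 1, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 1]]
B3 = mat_mul(mat_mul(B, B), B)
B3T = [list(col) for col in zip(*B3)]
print(branch_number(B3), branch_number(B3T))   # both values should be 4 if B is 3-NMDS
```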
Lemma 10: _For a \(k\)-NMDS matrix of order \(4\) with \(k\leq 4\), the lowest XOR count is \(2r\) over the field \(\mathbb{F}_{2^{r}}\)._
Proof: From Remark 8 and Theorem 3.1, we know that any matrix \(M\) of order \(4\) with \(\mathcal{K}=1\) cannot be \(k\)-NMDS for \(k\leq 4\). Hence, we must have \(\mathcal{K}\geq 2\). Therefore, we have \(XOR(M)\geq 2\cdot r\) over the field \(\mathbb{F}_{2^{r}}\).
Remark 13: For \(k\leq 4\), the proposed matrix \(B\) in (7) has the lowest XOR count among the \(k\)-NMDS matrices of order \(4\) over the field \(\mathbb{F}_{2^{r}}\) for \(r\geq 1\).
### Construction of \(5\times 5\) Recursive NMDS matrices
This section presents two GDLS matrices, \(A_{1}\) and \(A_{2}\), of order 5 that give NMDS matrices when raised to power 4 and 5, respectively, over the field \(\mathbb{F}_{2^{4}}\). We also looked for GDLS matrices \(M\) of order 5 such that \(M^{k}\) is NMDS for \(k\leq n-2\) and \(\mathcal{K}=3\), but we were unable to find any over \(\mathbb{F}_{2^{4}}\). Consider the GDLS matrices \(A_{1}\) and \(A_{2}\) of order 5 which are constructed as follows:
1. \(A_{1}\): \(\rho_{1}=[5,1,2,3,4],\rho_{2}=[3,2,5,4,1]\), \(D_{1}=diag(1,1,1,1,1)\) and \(D_{2}=diag(0,\alpha,0,1,\alpha^{-1})\)
2. \(A_{2}\): \(\rho_{1}=[5,1,2,3,4],\rho_{2}=[3,4,5,1,2]\), \(D_{1}=diag(1,1,1,1,1)\) and \(D_{2}=diag(0,1,0,1,\alpha)\) \[A_{1}=\begin{bmatrix}0&1&0&0&\alpha^{-1}\\ 0&\alpha&1&0&0\\ 0&0&0&1&0\\ 0&0&0&1&1\\ 1&0&0&0&0\end{bmatrix}\quad A_{2}=\begin{bmatrix}0&1&0&1&0\\ 0&0&1&0&\alpha\\ 0&0&0&1&0\\ 0&1&0&0&1\\ 1&0&0&0&0\end{bmatrix},\] (8)
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). It is easy to verify that the matrix \(A_{1}\) is a 4-NMDS matrix with a XOR count of \((1+1)+3\cdot 4=14\) and \(A_{2}\) is a 5-NMDS matrix with a XOR count of \(1+3\cdot 4=13\).
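The cost bookkeeping above can be written as a one-line formula. In the sketch below, the per-element s-XOR costs are hard-coded assumptions read off from the paper's own arithmetic (cost 1 for \(\alpha\) and \(\alpha^{-1}\) in \(\mathbb{F}_{2^{4}}/0\)x\(13\)); the authoritative values come from [40, App. B].

```python
# Assumed per-element multiplication costs over F_{2^4}/0x13 (illustrative only).
SXOR = {"1": 0, "alpha": 1, "alpha_inv": 1}

def xor_count(nontrivial_entries, K, r):
    """Sum of element costs plus K*r XORs for the row additions."""
    return sum(SXOR[e] for e in nontrivial_entries) + K * r

print(xor_count(["alpha", "alpha_inv"], K=3, r=4))   # A_1: (1 + 1) + 3*4 = 14
print(xor_count(["alpha"], K=3, r=4))                # A_2: 1 + 3*4 = 13
```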
In Lemma 11, we discuss the lowest XOR count of recursive NMDS matrices of order \(n\geq 5\). For this, we need the following result from [12].
Theorem 4.1: _[_12_]_ _A matrix of order \(n\), with 0 and 1 as entries, has a maximum branch number of \(\frac{2n+4}{3}\)._
Lemma 11: _Given a recursive NMDS matrix \(B\) of order \(n\geq 5\), with \(\mathcal{K}=l\), the lowest XOR count of \(B\) is \(XOR(\beta)+l\cdot r\) over the field \(\mathbb{F}_{2^{r}}\), where \(\beta\) (\(\neq 1\)) is a nonzero element in \(\mathbb{F}_{2^{r}}\) with the lowest XOR count value in that field._
Proof: An NMDS matrix of order \(n\) has branch number of \(n\). So from Theorem 4.1, we can say that a matrix of order \(n\), with entries from the set \(\{0,1\}\subseteq\mathbb{F}_{2^{r}}\), cannot be NMDS for \(n\geq 5\). If we take a matrix \(B\) with entries of 0 or 1, then the entries of \(B^{k}\) will remain in the set \(\{0,1\}\) for any power \(k\). So \(B\) must have an element \(\gamma\not\in\{0,1\}\). Therefore, \(XOR(B)\geq XOR(\beta)+l\cdot r\), where \(\beta\) (\(\neq 1\)) is a nonzero element in \(\mathbb{F}_{2^{r}}\) with the lowest XOR count value in that field.
Remark 14: Over the field \(\mathbb{F}_{2^{4}}\), the matrix \(A_{2}\) in (8) has the lowest XOR count among the 5-NMDS matrices of order 5 and \(\mathcal{K}=3\).
### Construction of \(6\times 6\) Recursive NMDS matrices
In this section, we introduce two lightweight GDLS matrices, \(B_{1}\) and \(B_{2}\), of order 6 with \(\mathcal{K}=3\). These matrices can be implemented with 14 and 13 XORs over the field \(\mathbb{F}_{2^{4}}\), respectively, and yield NMDS matrices when raised to the power of 5 and 6, respectively. The matrices \(B_{1}\) and \(B_{2}\) of order 6 are constructed as follows:
1. \(B_{1}\): \(\rho_{1}=[6,1,2,3,4,5],\rho_{2}=[1,2,3,4,5,6]\), \(D_{1}=diag(1,1,1,1,1,1)\) and \(D_{2}=diag(0,\alpha,0,\alpha^{-1},0,1)\)
2. \(B_{2}\): \(\rho_{1}=[6,1,2,3,4,5],\rho_{2}=[3,4,5,2,6,1]\), \(D_{1}=diag(1,1,1,1,1,1)\) and \(D_{2}=diag(0,\alpha,0,1,0,1)\)
\[B_{1}=\begin{bmatrix}0&1&0&0&0&0\\ 0&\alpha&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&\alpha^{-1}&1&0\\ 0&0&0&0&0&1\\ 1&0&0&0&0&1\end{bmatrix}\quad B_{2}=\begin{bmatrix}0&1&0&0&0&1\\ 0&0&1&1&0&0\\ 0&0&0&1&0&0\\ 0&\alpha&0&0&1&0\\ 0&0&0&0&0&1\\ 1&0&0&0&0&0\end{bmatrix}, \tag{9}\]
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). It can be checked that the matrix \(B_{1}\) is a 5-NMDS matrix with a XOR count of \((1+1)+3\cdot 4=14\) and \(B_{2}\) is a 6-NMDS matrix with a XOR count of \(1+3\cdot 4=13\).
We also searched for GDLS matrices \(M\) of order 6 such that \(M^{k}\) is NMDS for \(k\leq n-2\) and \(\mathcal{K}=3\), but we could not find such matrices over \(\mathbb{F}_{2^{4}}\).
Remark 15: It is not possible to have elements with XOR count 1 in \(\mathbb{F}_{2^{8}}\) due to the absence of a trinomial irreducible polynomial of degree 8 over \(\mathbb{F}_{2}\)[7, Theorem 2]. However, it is possible to have elements with an XOR count of 1 over rings.
Consider the binary matrix \(C=[[2],[3],[4],[5],[6],[7],[8],[1,3]]\) which is the companion matrix of \(x^{8}+x^{2}+1\) over \(\mathbb{F}_{2}\). If we replace \(\alpha\) by \(C\), then the matrices \(A_{1}\) and \(A_{2}\) in (8) and \(B_{1}\) and \(B_{2}\) in (9) will be 4-NMDS, 5-NMDS, 5-NMDS, and 6-NMDS over \(GL(8,\mathbb{F}_{2})\), respectively. In addition, the implementation cost of \(C\) and \(C^{-1}\) is 1 XOR. Hence, the implementation cost of \(A_{1}\), \(A_{2}\), \(B_{1}\) and \(B_{2}\) over \(GL(8,\mathbb{F}_{2})\) are \(26,25,26\) and 25 XORs, respectively.
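A short sketch expanding the bracket notation for \(C\) into an \(8\times 8\) binary matrix and counting the XOR gates of a naive implementation (ones per row minus one). Interpreting the notation as "each inner list gives the positions of the nonzero entries of a row" is our assumption, but it reproduces the 1-XOR cost quoted above.

```python
def from_rows(spec, n=8):
    """Build a binary matrix from, per row, the 1-based positions of its ones."""
    M = [[0] * n for _ in range(n)]
    for i, cols in enumerate(spec):
        for c in cols:
            M[i][c - 1] = 1
    return M

C = from_rows([[2], [3], [4], [5], [6], [7], [8], [1, 3]])
xors = sum(sum(row) - 1 for row in C)   # rows with a single one cost nothing
print(xors)                             # 1 XOR, matching the cost quoted for C
```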
Remark 16: Over the field \(\mathbb{F}_{2^{4}}\), the matrix \(B_{2}\) in (9) has the lowest XOR count among the 6-NMDS matrices of order 6 and \(\mathcal{K}=3\).
### Construction of \(7\times 7\) Recursive NMDS matrices
In this section, we propose three GDLS matrices of order 7 that yield NMDS matrices over the field \(\mathbb{F}_{2^{4}}\) for \(\mathcal{K}=4\). Consider the GDLS matrices \(B_{1},\ B_{2}\) and \(B_{3}\) of order 7 which are constructed as follows:
1. \(B_{1}\): \(\rho_{1}=[6,7,4,5,2,3,1],\ \rho_{2}=[3,2,1,4,7,6,5],\ D_{1}=diag(1,1,1,1,1,1,\) \(\alpha)\) and \(D_{2}=diag(1,0,\alpha^{2},0,\alpha,0,\alpha^{2})\)
2. \(B_{2}\): \(\rho_{1}=[7,1,2,3,4,5,6],\ \rho_{2}=[6,7,5,4,1,3,2],\ D_{1}=diag(1,\alpha^{-1}, \alpha,1,\alpha,\) \(\alpha,1)\) and \(D_{2}=diag(0,1,0,1,0,1,1)\)
3. \(B_{3}\): \(\rho_{1}=[7,1,2,3,4,5,6],\ \rho_{2}=[5,2,6,7,3,1,4],\ D_{1}=diag(1,1,1,1,1,1,\) \(1)\) and \(D_{2}=diag(0,\alpha^{-1},0,1,0,\alpha^{-1},1)\)
\[B_{1}=\begin{bmatrix}0&0&\alpha^{2}&0&0&0&\alpha\\ 0&0&0&0&1&0&0\\ 1&0&0&0&0&1&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&\alpha^{2}\\ 1&0&0&0&0&0&0\\ 0&1&0&0&\alpha&0&0\end{bmatrix}B_{2}=\begin{bmatrix}0&\alpha^{-1}&0&0&0&0&0\\ 0&0&\alpha&0&0&0&1\\ 0&0&0&1&0&1&0\\ 0&0&0&1&\alpha&0&0\\ 0&0&0&0&0&\alpha&0\\ 0&0&0&0&0&0&1\\ 1&1&0&0&0&0&0\end{bmatrix}B_{3}=\begin{bmatrix}0&1&0&0&0&\alpha^{-1}&0\\ 0&\alpha^{-1}&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&1\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1\\ 1&0&0&1&0&0&0\end{bmatrix}, \tag{10}\]
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). It can be verified that matrix \(B_{1}\) is a 5-NMDS matrix with an XOR count of \((1+2+1+2)+4\cdot 4=22\), \(B_{2}\) is a 6-NMDS matrix with an XOR count of \((1+1+1+1)+4\cdot 4=20\), and \(B_{3}\) is a 7-NMDS matrix with an XOR count of \((1+1)+4\cdot 4=18\).
Remark 17: If we replace \(\alpha\) by \(C\) (the binary matrix in Remark 15), then the matrices \(B_{1},\ B_{2}\) and \(B_{3}\) in (10) will be 5-NMDS, 6-NMDS and 7-NMDS over \(GL(8,\mathbb{F}_{2})\), respectively. The binary matrix \(C^{2}\) can be implemented with 2 XORs. Hence, \(B_{1},\ B_{2}\) and \(B_{3}\) can be implemented with \(38,36\) and \(34\) XORs, respectively, over \(GL(8,\mathbb{F}_{2})\).
Remark 18: We know that in a DLS matrix \(M=DLS(\rho_{1};D_{1},D_{2})\) of order \(n\geq 2\), if \(D_{2}\) is singular, then \(M^{k}\) cannot be NMDS for \(k\leq n-2\). However, the result is not true for GDLS matrices. For example the matrix \(B_{1}\) of order 7 in (10) is 5-NMDS.
### Construction of \(8\times 8\) Recursive NMDS matrices
As 4 and 8 are the most commonly used diffusion layer matrix sizes, we look for a \(k\)-NMDS GDLS matrix of order 8 over \(\mathbb{F}_{2^{4}}\). However, we were unable to find a GDLS matrix of order 8 that is 7-NMDS or 8-NMDS over this field. Instead, we propose two GDLS matrices of order 8 that yield NMDS matrices over the field \(\mathbb{F}_{2^{8}}\) with \(\mathcal{K}=4\). Consider the GDLS matrices \(B_{1}\) and \(B_{2}\) of order 8 which are constructed as follows:
1. \(B_{1}:\ \rho_{1}=[2,3,4,5,6,7,8,1],\ \rho_{2}=[3,8,5,2,1,4,6,7],\ D_{1}=diag(1,1,1,1,1,\alpha^{-2},1,1)\) and \(D_{2}=diag(1,0,\alpha,0,1,0,\alpha^{-1},0)\)
2. \(B_{2}:\ \rho_{1}=[2,3,4,5,6,7,8,1],\ \rho_{2}=[3,8,5,2,1,4,6,7],\ D_{1}=diag(1,\,1, \,1,\)\(\alpha^{2},1,1,1,1)\) and \(D_{2}=diag(\alpha,0,1,0,1,0,1,0)\)
\[B_{1}=\begin{bmatrix}0&0&0&0&1&0&0&1\\ 1&0&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&\alpha&1&0&0&0&0\\ 0&0&0&0&1&0&\alpha^{-1}&0\\ 0&0&0&0&0&\alpha^{-2}&0&0\\ 0&0&0&0&0&0&1&0\end{bmatrix}\quad B_{2}=\begin{bmatrix}0&0&0&0&1&0&0&1\\ 1&0&0&0&0&0&0&0\\ \alpha&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&1&\alpha^{2}&0&0&0&0\\ 0&0&0&0&1&0&1&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\end{bmatrix}, \tag{11}\]
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{8}}\) and a root of \(x^{8}+x^{7}+x^{6}+x+1\). The matrix \(B_{1}\) is a 7-NMDS matrix with an XOR count of \((4+3+3)+4\cdot 8=42\) and \(B_{2}\) is an 8-NMDS matrix with an XOR count of \((3+4)+4\cdot 8=39\).
Remark 19: Consider the binary matrix \(C_{8}=[[8],[1,2],[2,8],[3],[4],[5],[6],[7]]\) whose minimal polynomial is \(x^{8}+x^{7}+x^{2}+x+1\). Then by replacing \(\alpha\) by \(C_{8}\), the matrices \(B_{1}\), and \(B_{2}\) in (11) will be 7-NMDS and 8-NMDS over \(GL(8,\mathbb{F}_{2})\), respectively. In addition, the implementation cost of \(C_{8}\) is 2 XORs. Also, \(C_{8}^{-1}\), \(C_{8}^{2}\) and \(C_{8}^{-2}\) can be implemented with 2, 4 and 4 XORs respectively. Hence, \(B_{1}\) and \(B_{2}\) can be implemented with 40 and 38 XORs, respectively, over \(GL(8,\mathbb{F}_{2})\).
Until now, we have discussed NMDS matrices in a recursive setup. While these matrices have a low hardware cost, they do require some clock cycles. To use recursive NMDS (say, \(k\)-NMDS) matrices in an unrolled implementation, we have to add \(k\) copies of the matrix to the circuit in sequence, which may increase the cost of the diffusion layer. Thus, they might not be suitable for block ciphers such as PRINCE [10], MANTIS [6], and FUTURE [21], which operate within a single clock cycle. From the next section on, we will discuss nonrecursive constructions of NMDS matrices.
## 5 Construction of nonrecursive NMDS matrices
The construction of nonrecursive MDS matrices is typically based on specific matrix types such as circulant matrices, Hadamard matrices, Cauchy matrices, Vandermonde matrices, and Toeplitz matrices. A brief summary of such constructions is presented in [18]. Circulant and Hadamard matrices of order \(n\) can have at most \(n\) distinct elements; thus, these matrices are used to reduce the search space. Also, circulant matrices have the flexibility to be implemented in both round-based and serialized implementations [30]. In [27], the authors have studied the construction of NMDS matrices using circulant and Hadamard matrices and present some generic NMDS matrices of order \(n\) for the range of \(5\leq n\leq 9\).
Definition 15: An \(n\times n\) matrix \(M\) is said to be a right circulant (or circulant) matrix if its elements are determined by the elements of its first row \(x_{1},x_{2},\ldots,x_{n}\) as
\[M=Circ(x_{1},x_{2},\ldots,x_{n})=\begin{bmatrix}x_{1}&x_{2}&\ldots&x_{n}\\ x_{n}&x_{1}&\ldots&x_{n-1}\\ \vdots&\vdots&\vdots&\vdots\\ x_{2}&x_{3}&\ldots&x_{1}\end{bmatrix}.\]
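A minimal constructor for this definition (a sketch with 0-based indices; entries are kept symbolic):

```python
def circ(x):
    """Right-circulant matrix whose first row is x; row i is x rotated right i times."""
    n = len(x)
    return [[x[(j - i) % n] for j in range(n)] for i in range(n)]

# e.g. Circ(x1, x2, x3, x4)
for row in circ(["x1", "x2", "x3", "x4"]):
    print(row)
# ['x1','x2','x3','x4'], ['x4','x1','x2','x3'], ['x3','x4','x1','x2'], ['x2','x3','x4','x1']
```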
In the context of implementing block ciphers, we know that if an efficient matrix \(M\) used in encryption is involutory, then its inverse \(M^{-1}=M\) applied for decryption will also be efficient. Therefore, it is particularly important to locate NMDS matrices that are also involutory. In this regard, Li et al. [27] show that for \(n>4\), no circulant matrices of order \(n\) over \(\mathbb{F}_{2^{r}}\) can simultaneously be involutory and NMDS. We recall it in the following theorem.
\begin{table}
\begin{tabular}{|c c c c c c|} \hline Order \(n\) & Input & Iterations & Field/Ring & XOR count & References \\ \hline
4 & 4-bit & 34 & \(M_{4}(\mathbb{F}_{2})\) & **1** & [29] \\
4 & 4-bit & 16 & \(M_{4}(\mathbb{F}_{2})\) & 2 & [29] \\
4 & 4-bit & 10 & \(M_{4}(\mathbb{F}_{2})\) & 3 & [29] \\
4 & 4-bit & 7 & \(M_{4}(\mathbb{F}_{2})\) & 4 & [29] \\
4 & 4-bit & 5 & \(M_{4}(\mathbb{F}_{2})\) & 7 & [29] \\
4 & 4-bit & 3 & \(M_{4}(\mathbb{F}_{2})\) & 8 & [29] \\
4 & 4-bit & 3 & \(\mathbb{F}_{2^{4}}\) & 8 & Section 4.1 \\
4 & 4-bit & 3 & \(\mathbb{F}_{2^{4}}\) & 8 & [35] \\
4 & 4-bit & **2** & \(M_{4}(\mathbb{F}_{2})\) & 12 & [29] \\ \hline
4 & 8-bit & 66 & \(M_{8}(\mathbb{F}_{2})\) & **1** & [29] \\
4 & 8-bit & 34 & \(M_{8}(\mathbb{F}_{2})\) & 2 & [29] \\
4 & 8-bit & 16 & \(M_{8}(\mathbb{F}_{2})\) & 4 & [29] \\
4 & 8-bit & 10 & \(M_{8}(\mathbb{F}_{2})\) & 6 & [29] \\
4 & 8-bit & 7 & \(M_{8}(\mathbb{F}_{2})\) & 8 & [29] \\
4 & 8-bit & 3 & \(M_{8}(\mathbb{F}_{2})\) & 16 & [29] \\
4 & 8-bit & 3 & \(\mathbb{F}_{2^{8}}\) & 16 & Section 4.1 \\
4 & 8-bit & 3 & \(\mathbb{F}_{2^{8}}\) & 16 & [35] \\
4 & 8-bit & **2** & \(M_{4}(\mathbb{F}_{2})\) & 24 & [29] \\ \hline \hline
5 & 4-bit & 86 & \(M_{8}(\mathbb{F}_{2})\) & **1** & [29] \\
5 & 4-bit & 46 & \(M_{8}(\mathbb{F}_{2})\) & 2 & [29] \\
5 & 4-bit & 20 & \(M_{8}(\mathbb{F}_{2})\) & 3 & [29] \\
5 & 4-bit & 15 & \(M_{8}(\mathbb{F}_{2})\) & 4 & [29] \\
5 & 4-bit & 8 & \(M_{8}(\mathbb{F}_{2})\) & 8 & [29] \\
5 & 4-bit & 5 & \(\mathbb{F}_{2^{4}}/0\)x13 & 13 & Section 4.2 \\
5 & 4-bit & **4** & \(\mathbb{F}_{2^{4}}/0\)x13 & 14 & Section 4.2 \\ \hline
5 & 8-bit & 120 & \(M_{8}(\mathbb{F}_{2})\) & **1** & [29] \\
5 & 8-bit & 86 & \(M_{8}(\mathbb{F}_{2})\) & 2 & [29] \\
5 & 8-bit & 46 & \(M_{8}(\mathbb{F}_{2})\) & 4 & [29] \\
5 & 8-bit & 20 & \(M_{8}(\mathbb{F}_{2})\) & 6 & [29] \\
5 & 8-bit & 15 & \(M_{8}(\mathbb{F}_{2})\) & 8 & [29] \\
5 & 8-bit & 8 & \(M_{8}(\mathbb{F}_{2})\) & 16 & [29] \\
5 & 8-bit & 5 & \(GL(8,\mathbb{F}_{2})\) & 25 & Remark 15 \\
5 & 8-bit & **4** & \(GL(8,\mathbb{F}_{2})\) & 26 & Remark 15 \\ \hline \hline
6 & 4-bit & 6 & \(\mathbb{F}_{2^{4}}/0\)x13 & **13** & Section 4.3 \\
6 & 4-bit & **5** & \(\mathbb{F}_{2^{4}}/0\)x13 & 14 & Section 4.3 \\ \hline
6 & 8-bit & 6 & \(GL(8,\mathbb{F}_{2})\) & **25** & Remark 15 \\
6 & 8-bit & **5** & \(GL(8,\mathbb{F}_{2})\) & 26 & Remark 15 \\ \hline \hline
7 & 4-bit & 7 & \(\mathbb{F}_{2^{4}}/0\)x13 & **18** & Section 4.4 \\
7 & 4-bit & 6 & \(\mathbb{F}_{2^{4}}/0\)x13 & 20 & Section 4.4 \\
7 & 4-bit & **5** & \(\mathbb{F}_{2^{4}}/0\)x13 & 22 & Section 4.4 \\ \hline
7 & 8-bit & 7 & \(GL(8,\mathbb{F}_{2})\) & **34** & Remark 17 \\
7 & 8-bit & 6 & \(GL(8,\mathbb{F}_{2})\) & 36 & Remark 17 \\
7 & 8-bit & **5** & \(GL(8,\mathbb{F}_{2})\) & 38 & Remark 17 \\ \hline \hline
8 & 8-bit & 8 & \(\mathbb{F}_{2^{8}}/0\)x1\(c\)3 & 39 & Section 4.5 \\
8 & 8-bit & 8 & \(GL(8,\mathbb{F}_{2})\) & **38** & Section 4.5 \\
8 & 8-bit & **7** & \(\mathbb{F}_{2^{8}}/0\)x1\(c\)3 & 42 & Section 4.5 \\
8 & 8-bit & **7** & \(GL(8,\mathbb{F}_{2})\) & 40 & Section 4.5 \\ \hline \end{tabular}
\end{table}
Table 2: Comparison of recursive NMDS matrices of order \(n\).
**Theorem 7**.: _[_27_]_ _Over the field \(\mathbb{F}_{2^{r}}\), circulant involutory matrices of order \(n>4\) are not NMDS._
Remark 20: For \(n\leq 4\), there may exist circulant involutory NMDS matrices over \(\mathbb{F}_{2^{r}}\). For example, the circulant matrix \(Circ(0,1,1,1)\) of order \(4\) is both involutory and NMDS over the field \(\mathbb{F}_{2^{r}}\).
Remark 21: According to [18; 24], the above result is also true for circulant MDS matrices of order \(n\) with a modified lower bound of \(n\geq 3\).
For symmetric cryptography, having an orthogonal matrix as the linear diffusion layer simplifies decryption because the transpose of an orthogonal matrix is its inverse. This makes orthogonal matrices ideal for constructing the linear diffusion layer. Matrices of order \(2^{n}\) are particularly important in cryptography. However, as stated in [24, Lemma 5], for \(n\geq 2\), we know that any orthogonal circulant matrix of order \(2^{n}\) over the field \(\mathbb{F}_{2^{r}}\) is not MDS. But circulant MDS matrices of different orders may be orthogonal over \(\mathbb{F}_{2^{r}}\)[18, Remark 24].
Remark 22: NMDS circulant orthogonal matrices of any order may exist over the field \(\mathbb{F}_{2^{r}}\). For example, consider the circulant matrices \(Circ(0,\alpha^{3}+\alpha+1,\alpha^{3}+\alpha^{2}+\alpha,\alpha^{3}+1,\alpha^{ 3}+\alpha^{2}+1)\), \(Circ(0,1,\alpha,\alpha^{2}+\alpha+1,\alpha^{3}+\alpha+1,\alpha^{3}+\alpha^{2 }+\alpha)\), and \(Circ(0,1,\alpha,\alpha+1,\alpha+1,\alpha^{3}+\alpha^{2}+\alpha+1,1,\alpha^{3} +\alpha^{2})\) of order \(5,6,\) and \(8\), respectively, where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of the polynomial \(x^{4}+x+1\). It can be checked that these matrices are both orthogonal and NMDS.
In [30], the authors suggest a new category of matrices known as left-circulant matrices. These matrices retain the advantages of circulant matrices.
Definition 16: An \(n\times n\) matrix \(M\) is said to be a left-circulant matrix if each successive row is obtained by a left shift of the previous row i.e.
\[M=l\text{-}Circ(x_{1},x_{2},\ldots,x_{n})=\begin{bmatrix}x_{1}&x_{2}&\ldots& x_{n}\\ x_{2}&x_{3}&\ldots&x_{1}\\ \vdots&\vdots&\vdots&\vdots\\ x_{n}&x_{1}&\ldots&x_{n-1}\end{bmatrix}.\]
Note that a left-circulant matrix is symmetric; consequently, if the matrix is orthogonal, then it is involutory, and vice versa.
Remark 23: From Lemma 4, we know that if \(M\) is an NMDS matrix, then for any permutation matrix \(P\), \(PM\) is also an NMDS matrix. Additionally, as per Remark 22, it is possible to obtain a circulant NMDS matrix \(M=Circ(x_{1},x_{2},\ldots,x_{n})\) of any order \(n\) over \(\mathbb{F}_{2^{r}}\) that is orthogonal. Now, consider the permutation matrix \(P\) of order \(n\) as follows:
\[P=\begin{bmatrix}1&0&0&\ldots&0&0&0\\ 0&0&0&\ldots&0&0&1\\ 0&0&0&\ldots&0&1&0\\ 0&0&0&\ldots&1&0&0\\ \vdots&\vdots&\vdots&\ldots&\vdots&\vdots\\ 0&1&0&\ldots&0&0&0\end{bmatrix}.\]
It can be easily verified that \(PM\) = \(l\)-\(Circ(x_{1},x_{2},\ldots,x_{n})\). Also, since \(M\) is orthogonal and \(P\) is a permutation matrix, we have
\[(PM)^{T}=M^{T}P^{T}=M^{-1}P^{-1}=(PM)^{-1}.\]
Thus, pre-multiplying \(M\) with \(P\) will not alter its NMDS property and orthogonality. Consequently, the resulting matrix \(PM=l\)-\(Circ(x_{1},x_{2},\ldots,x_{n})\) will be both orthogonal and NMDS, making it an involutory NMDS matrix. Therefore, NMDS left-circulant involutory (orthogonal) matrices of any order may exist over the field \(\mathbb{F}_{2^{r}}\).
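The identity \(P\cdot Circ(x_{1},\ldots,x_{n})=l\text{-}Circ(x_{1},\ldots,x_{n})\) used in this remark is easy to check mechanically, since \(P\) only reorders rows. A short sketch for \(n=5\) with symbolic entries (0-based indices):

```python
def circ(x):
    n = len(x)
    return [[x[(j - i) % n] for j in range(n)] for i in range(n)]

def l_circ(x):
    n = len(x)
    return [[x[(i + j) % n] for j in range(n)] for i in range(n)]

def apply_P(M):
    """Row permutation given by P above: keep row 1, then reverse the remaining rows."""
    return [M[0]] + [M[len(M) - i] for i in range(1, len(M))]

x = [f"x{k}" for k in range(1, 6)]          # n = 5, entries kept symbolic
assert apply_P(circ(x)) == l_circ(x)
print("P * Circ equals l-Circ for n = 5")
```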
Definition 17: A \(2^{n}\times 2^{n}\) matrix \(M\) in \(\mathbb{F}_{2^{r}}\) is considered a Hadamard matrix if it can be written in the following form:
\[M=\begin{bmatrix}H_{1}&H_{2}\\ H_{2}&H_{1}\end{bmatrix}\]
where \(H_{1}\) and \(H_{2}\) are also Hadamard.
The most significant advantage of Hadamard matrices is the potential for constructing involutory matrices. If the elements of the matrix are chosen so that the first row sums to one, the resulting matrix will be involutory [18].
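A compact way to build such matrices is the index-XOR rule \(M_{i,j}=x_{i\oplus j}\) applied to the first row \(x_{0},\ldots,x_{2^{n}-1}\); that this rule yields the recursive block structure of Definition 17 is a standard fact we assume here. A sketch:

```python
def hadamard(x):
    """Finite-field Hadamard matrix from its first row; len(x) must be a power of two."""
    n = len(x)
    assert n & (n - 1) == 0
    return [[x[i ^ j] for j in range(n)] for i in range(n)]

for row in hadamard(["h0", "h1", "h2", "h3"]):
    print(row)
# ['h0','h1','h2','h3'], ['h1','h0','h3','h2'], ['h2','h3','h0','h1'], ['h3','h2','h1','h0']
```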
The absence of any zero entries is a necessary condition for matrices such as Hadamard, circulant and left-circulant matrices to be MDS. Therefore, these matrices result in a high implementation cost due to \(\mathcal{K}=n(n-1)\). Having zero entries (with a maximum of one zero per row or column) does not affect the NMDS property of these matrices, leading to a low implementation cost with \(\mathcal{K}=n(n-2)\). Taking advantage of this, the authors in [27] provided some generic lightweight involutory NMDS matrices of order 8 from Hadamard matrices.
Theorem 8: _For a Hadamard, circulant, or left-circulant NMDS matrix of order \(n\) over \(\mathbb{F}_{2^{r}}\) with \(n\geq 5\), the XOR count is at least \(XOR(\beta)\cdot n+n(n-2)\cdot r\) over the field \(\mathbb{F}_{2^{r}}\), where \(\beta\) (\(\neq 1\)) is a nonzero element in \(\mathbb{F}_{2^{r}}\) with the lowest XOR count value in that field._
Proof: An NMDS matrix \(B\) of order \(n\) has branch number of \(n\). Therefore, according to Theorem 4.1, a matrix of order \(n\) with \(n\geq 5\) and entries from the set \(\{0,1\}\subseteq\mathbb{F}_{2^{r}}\) cannot be NMDS. This means that \(B\) must contain an element \(\gamma\not\in\{0,1\}\). Additionally, for an NMDS matrix, we must have \(\mathcal{K}\geq n(n-2)\). Also, each row in a Hadamard, circulant, or left-circulant matrix is a rearrangement of the first row. Hence, for these matrices to be NMDS over the field \(\mathbb{F}_{2^{r}}\), the minimum XOR count must be \(XOR(\beta)\cdot n+n(n-2)\cdot r\).
The lowest XOR count value (of an element) in the field \(\mathbb{F}_{2^{4}}\) is one, which allows us to obtain the lowest possible XOR count of Hadamard, circulant, or left-circulant NMDS matrices of various orders over \(\mathbb{F}_{2^{4}}\) as shown in Table 3.
The use of Toeplitz matrices for the construction of MDS matrices has been explored in the literature [36, 37], and we will discuss them for the construction of NMDS matrices.
**Definition 18**.: _The \(n\times n\) matrix_
\[M=\begin{bmatrix}x_{1}&x_{2}&x_{3}&\ldots&x_{n-2}&x_{n-1}&x_{n}\\ y_{1}&x_{1}&x_{2}&\ldots&x_{n-3}&x_{n-2}&x_{n-1}\\ y_{2}&y_{1}&x_{1}&\ldots&x_{n-4}&x_{n-3}&x_{n-2}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ y_{n-1}&y_{n-2}&y_{n-3}&\ldots&y_{2}&y_{1}&x_{1}\end{bmatrix}\]
_is called a Toeplitz matrix of order \(n\)._
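A minimal constructor for Definition 18 (a sketch; the matrix is determined by its first row \(x_{1},\ldots,x_{n}\) and the sub-diagonal part \(y_{1},\ldots,y_{n-1}\) of its first column):

```python
def toeplitz(x, y):
    """Toeplitz matrix with first row x and first column x[0], y[0], ..., y[-1]."""
    n = len(x)
    return [[x[j - i] if j >= i else y[i - j - 1] for j in range(n)]
            for i in range(n)]

for row in toeplitz(["x1", "x2", "x3", "x4"], ["y1", "y2", "y3"]):
    print(row)
```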
It is easy to check that circulant matrices are a special type of Toeplitz matrices. Also, like circulant matrices, it is not possible for Toeplitz matrices of order \(n>4\) to be both NMDS and involutory over a field of characteristic \(2\).
**Theorem 9**.: _Over the field \(\mathbb{F}_{2^{r}}\), Toeplitz involutory matrices of order \(n>4\) are not NMDS._
Proof.: Let \(M\) be a Toeplitz matrix (as in Definition 18) of order \(n\) which is both involutory and NMDS over the field \(\mathbb{F}_{2^{r}}\), where \(n>4\). We will examine two scenarios: when \(n\) is even and when \(n\) is odd.
**Case 1:**\(n\) is even.
In an NMDS matrix, there may be a zero entry. So this case splits into two subcases: \(x_{n}\neq 0\) and \(x_{n}=0\).
**Case 1.1:** When \(x_{n}\neq 0\).
The \((n-1)\)-th element in the 1st row of \(M^{2}\) is
\[(M^{2})_{1,n-1} =M_{row(1)}\cdot M_{column(n-1)}\] \[=x_{1}x_{n-1}+x_{2}x_{n-2}+\cdots+x_{\frac{n}{2}}x_{\frac{n}{2}}+ \cdots+x_{n-1}x_{1}+x_{n}y_{1}\] \[=x_{\frac{n}{2}}^{2}+x_{n}y_{1}.\]
Since \(M\) is involutory, we have \((M^{2})_{1,n-1}=0\). Therefore, from above we have
\[x_{\frac{n}{2}}^{2}+x_{n}y_{1}=0 \tag{12}\] \[\implies y_{1}=x_{\frac{n}{2}}^{2}x_{n}^{-1}. \tag{13}\]
We have
\[(M^{2})_{1,n-2} =M_{row(1)}\cdot M_{column(n-2)}\] \[=x_{1}x_{n-2}+x_{2}x_{n-3}+\cdots+x_{\frac{n-2}{2}}x_{\frac{n}{2 }}+x_{\frac{n}{2}}x_{\frac{n-2}{2}}+\cdots\] \[\quad+x_{n-3}x_{2}+x_{n-2}x_{1}+x_{n-1}y_{1}+x_{n}y_{2}\] \[=x_{n-1}y_{1}+x_{n}y_{2}.\]
\begin{table}
\begin{tabular}{|c||c c c c|} \hline order \(n\) & 5 & 6 & 7 & 8 \\ \hline Lowest XOR count & 65 & 102 & 147 & 200 \\ \hline \end{tabular}
\end{table}
Table 3: Lowest possible XOR count of Hadamard, circulant, or left-circulant NMDS matrices of order \(n\) over \(\mathbb{F}_{2^{4}}\).
Also, \((M^{2})_{1,n-2}=0\), which results in
\[x_{n-1}y_{1}+x_{n}y_{2}=0 \tag{14}\]
Now, from Equation 13 and Equation 14, we have
\[y_{2}=x_{\frac{n}{2}}^{2}x_{n-1}x_{n}^{-2}. \tag{15}\]
Also, from \((M^{2})_{3,n-1}=0\), we have
\[x_{\frac{n-2}{2}}^{2}+x_{n-1}y_{2}=0\] \[\implies x_{\frac{n-2}{2}}^{2}+x_{n-1}\cdot x_{\frac{n}{2}}^{2}x_{n-1}x_{n}^{-2}=0 \qquad\text{[From Equation 15]}\] \[\implies x_{\frac{n-2}{2}}^{2}x_{n}^{2}=x_{\frac{n}{2}}^{2}x_{n-1}^{2}\] \[\implies x_{\frac{n-2}{2}}x_{n}=x_{\frac{n}{2}}x_{n-1} \qquad\text{[Since the characteristic of $\mathbb{F}_{2^{r}}$ is 2]} \tag{16}\]
Now consider the input vector \(v=[0,0,\ldots,\underbrace{x_{n}}_{\frac{n}{2}\text{-th}},0,\ldots,x_{\frac{n }{2}}]^{T}\) of \(M\). Therefore, we have
\[M\cdot v =[x_{\frac{n}{2}}x_{n}+x_{n}x_{\frac{n}{2}},\ x_{\frac{n-2}{2}}x_ {n}+x_{n-1}x_{\frac{n}{2}},\ *,\ldots,*,\underbrace{y_{1}x_{n}+x_{\frac{n}{2}}^{2}} _{(\frac{n}{2}+1)\text{-th}},*,\ldots,*]^{T}\] \[=[0,0,*,\ldots,\underbrace{0}_{(\frac{n}{2}+1)\text{-th}},*, \ldots,*]^{T},\]
where \(*\) denotes an entry that may or may not be zero. Here the second and \((\frac{n}{2}+1)\)-th coordinates of \(M\cdot v\) are zero by Equation 16 and Equation 12, respectively. Thus, the number of nonzero elements of the input vector \(v\) and the output vector \(M\cdot v\) together is at most \(2+(n-3)<n\), i.e., the branch number of \(M\) is less than \(n\). This contradicts that \(M\) is NMDS.
**Case 1.2:** When \(x_{n}=0\).
If \(x_{n}=0\), then from Equation 12, we conclude that \(x_{\frac{n}{2}}^{2}=0\) which implies \(x_{\frac{n}{2}}=0\). Therefore, the Toeplitz matrix \(M\) has two zero entries in its first row, which contradicts the fact that \(M\) is NMDS.
**Case 2:**\(n\) is odd.
The \(n\)-th element in the 1st row of \(M^{2}\) is
\[(M^{2})_{1,n} =M_{row(1)}\cdot M_{column(n)}\] \[=x_{1}x_{n}+x_{2}x_{n-1}+\cdots+x_{\frac{n+1}{2}}x_{\frac{n+1}{2} }+\cdots+x_{n-1}x_{2}+x_{n}x_{1}\] \[=x_{\frac{n+1}{2}}^{2}.\]
Also, we have \((M^{2})_{2,n-1}=x_{\frac{n-1}{2}}^{2}\). Therefore, since \(M\) is involutory, it follows that \((M^{2})_{1,n}=(M^{2})_{2,n-1}=0\), implying that \(x_{\frac{n-1}{2}}=x_{\frac{n+1}{2}}=0\). This means that \(M\) has two zero entries in its first row, which contradicts that \(M\) is an NMDS matrix. Hence, the proof.
Remark 24: Circulant matrices are a particular type of Toeplitz matrices, and thus, from Remark 20, we can say that for \(n\leq 4\), there may exist Toeplitz involutory NMDS matrices over \(\mathbb{F}_{2^{r}}\).
Remark 25: From [36, Theorem 2], we know that for \(n\geq 2\), any orthogonal Toeplitz matrix of order \(2^{n}\) over the field \(\mathbb{F}_{2^{r}}\) is not MDS. However, this result does not hold for NMDS matrices. Circulant matrices are a particular type of Toeplitz matrices, and thus, from Remark 22, we can say that Toeplitz orthogonal NMDS matrices of any order may exist over the field \(\mathbb{F}_{2^{r}}\).
Hankel matrices, introduced in [18] for MDS matrix construction, are similar to Toeplitz matrices in that each ascending skew diagonal from left to right is constant.
Definition 19: The \(n\times n\) matrix
\[M=\begin{bmatrix}x_{1}&x_{2}&x_{3}&\ldots&x_{n-1}&x_{n}\\ x_{2}&x_{3}&x_{4}&\ldots&x_{n}&y_{1}\\ x_{3}&x_{4}&x_{5}&\ldots&y_{1}&y_{2}\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots\\ x_{n}&y_{1}&y_{2}&\ldots&y_{n-2}&y_{n-1}\end{bmatrix}\]
is called a Hankel matrix.
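A minimal constructor for Definition 19 (a sketch; each anti-diagonal is constant, so the matrix is determined by \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n-1}\)):

```python
def hankel(x, y):
    """Hankel matrix with first row x and last column x[-1], y[0], ..., y[-1]."""
    s = list(x) + list(y)                 # anti-diagonal values
    n = len(x)
    return [[s[i + j] for j in range(n)] for i in range(n)]

for row in hankel(["x1", "x2", "x3", "x4"], ["y1", "y2", "y3"]):
    print(row)
```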
It is important to note that a left-circulant matrix is a special case of Hankel matrix. Hankel matrices are symmetric and may be described by their first row and last column. Thus an involutory (orthogonal) Hankel matrix is orthogonal (involutory).
Remark 26: From [18, Theorem 7.5], we know that for \(n\geq 2\), any involutory (orthogonal) Hankel matrix of order \(2^{n}\) over the field \(\mathbb{F}_{2^{r}}\) is not MDS. However, this result does not hold for NMDS matrices. Left-circulant matrices are a particular type of Hankel matrices, and thus, from Remark 23, we can say that Hankel involutory (orthogonal) NMDS matrices of any order may exist over the field \(\mathbb{F}_{2^{r}}\).
We close this section by presenting Table 4, which compares the involutory and orthogonal properties of MDS and NMDS matrices constructed from the circulant, left-circulant, Toeplitz and Hankel families.
## 6 Construction of nonrecursive NMDS matrices from GDLS matrices
Constructing NMDS matrices from circulant, left-circulant, Hadamard, Toeplitz or Hankel matrices of order \(n\) may result in a high implementation cost due to the requirement of having \(\mathcal{K}\geq n(n-2)\). To address this issue, in this section, we
present some lightweight nonrecursive NMDS matrices through the composition of various GDLS (see Definition 14) matrices, similar to the method used by the authors in [21, 35] for constructing MDS matrices.
To minimize the search space, in most cases, we arbitrarily select \(\rho_{1}\) as the \(n\)-cycle \([n,\ 1,\ 2,\ \ldots,\ n-1]\). However, it is important to note that there is no inherent advantage in choosing \(\rho_{1}=[n,1,2,\ldots,n-1]\) for obtaining an NMDS matrix. If we change \(\rho_{1}=[n,1,2,\ldots,n-1]\) to any permutation from \(S_{n}\), there is still a possibility of obtaining an NMDS matrix.
To search for lightweight nonrecursive NMDS matrices, we examine GDLS matrices of order \(n\) with \(\mathcal{K}=\left\lceil\frac{n}{2}\right\rceil\) and entries from the set \(\left\{1,\alpha,\alpha^{-1},\alpha^{2},\alpha^{-2}\right\}\), where \(\alpha\) is a primitive element and a root of the constructing polynomial of the field \(\mathbb{F}_{2^{r}}\). The search space for finding nonrecursive NMDS matrices of order \(n\geq 5\) remains large, even when considering this restricted entry set. Therefore, to obtain nonrecursive NMDS matrices of order \(n=5,6,7,8\), we conduct a random search. In addition, to construct nonrecursive NMDS matrices of order \(n\), we typically choose \(n-2\) GDLS matrices of the same structure. If this does not yield a result, we use \(n-1\) matrices instead.
Also note that the implementation costs of the matrices presented in this section over a field are calculated by referring to the s-XOR count values of the corresponding field elements as provided in the table of [40, App. B]. In Table 5, we compare our results for nonrecursive NMDS matrices with the existing results.
### Construction of \(\boldsymbol{4\times 4}\) nonrecursive NMDS matrices
From Remark 13, we know that the matrix \(B\) given in (7) has the lowest XOR count among all \(k\)-NMDS matrices with \(k\leq 4\) over \(\mathbb{F}_{2^{r}}\). The proposed GDLS
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Type & Property & Dimension & MDS & NMDS \\ \hline \hline \multirow{4}{*}{Circulant} & Involutory & \(n\times n\) & DNE & DNE \\ \cline{3-5} & & \(2^{n}\times 2^{n}\) & DNE & may exist \\ \cline{3-5} & Orthogonal & \(2n\times 2n\) & may exist & may exist \\ \cline{3-5} & & \((2n+1)\times(2n+1)\) & may exist & may exist \\ \hline \hline \multirow{2}{*}{left-Circulant} & Involutory & \(2^{n}\times 2^{n}\) & DNE & may exist \\ \cline{3-5} & & \(2n\times 2n\) & may exist & may exist \\ \cline{3-5} & & \((2n+1)\times(2n+1)\) & may exist & may exist \\ \hline \hline \multirow{4}{*}{Toeplitz} & Involutory & \(n\times n\) & DNE & DNE \\ \cline{3-5} & & \(2^{n}\times 2^{n}\) & DNE & may exist \\ \cline{3-5} & Orthogonal & \(2n\times 2n\) & may exist & may exist \\ \cline{3-5} & & \((2n+1)\times(2n+1)\) & may exist & may exist \\ \hline \hline \multirow{2}{*}{Hankel} & Involutory & \(2^{n}\times 2^{n}\) & DNE & may exist \\ \cline{3-5} & & \(2n\times 2n\) & may exist & may exist \\ \cline{3-5} & & \((2n+1)\times(2n+1)\) & may exist & may exist \\ \hline \end{tabular}
\end{table}
Table 4: Comparison of involutory and orthogonal properties of MDS and NMDS matrices over a finite field \(\mathbb{F}_{2^{r}}\) (“**DNE**” stands for does not exist).
matrix
\[B=\begin{bmatrix}0&0&0&1\\ 1&1&0&0\\ 0&1&0&0\\ 0&0&1&1\end{bmatrix}\]
is 3-NMDS over a field \(\mathbb{F}_{2^{r}}\). Therefore, we can obtain a nonrecursive NMDS matrix of order 4 by composing the matrix \(B\) with itself three times. This results in an implementation cost of \(3\cdot(2\cdot r)=6r\) over a field \(\mathbb{F}_{2^{r}}\).
### Construction of \(\mathbf{5\times 5}\) nonrecursive NMDS matrices
In this section, we propose three GDLS matrices, \(B_{1},B_{2}\) and \(B_{3}\), which are constructed by the permutations \(\rho_{1}=[5,1,2,3,4]\), \(\rho_{2}=[3,2,5,4,1]\) and the following diagonal matrices.
1. \(B_{1}\): \(\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,1,1)\) and \(D_{2}=diag(0,\alpha,0,1,1)\)
2. \(B_{2}\): \(\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,1,1)\) and \(D_{2}=diag(0,1,0,1,1)\)
3. \(B_{3}\): \(\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,1,1)\) and \(D_{2}=diag(0,1,0,\alpha,1)\),
\begin{table}
\begin{tabular}{|r l l r r|} \hline Order \(n\) & Input & Field/Ring & XOR count & References \\ \hline
4 & 4-bit & \(\mathbb{F}_{2^{4}}\) & 24 & [35] \\
4 & 4-bit & \(\mathbb{F}_{2^{4}}\) & 24 & Section 6.1 \\
4 & 8-bit & \(\mathbb{F}_{2^{8}}\) & 48 & [35] \\
4 & 8-bit & \(\mathbb{F}_{2^{8}}\) & 48 & Section 6.1 \\ \hline
5 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & 65 & [27] \\
5 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & **50** & Section 6.2 \\
5 & 8-bit & \(\mathbb{F}_{2^{8}}\)/0x11b & 130 & [27] \\
5 & 8-bit & \(GL(8,\mathbb{F}_{2})\) & **98** & Remark 27 \\ \hline
6 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & 108 & [27] \\
6 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & **65** & Section 6.3 \\
6 & 8-bit & \(\mathbb{F}_{2^{8}}\)/0x11b & 216 & [27] \\
6 & 8-bit & \(GL(8,\mathbb{F}_{2})\) & **125** & Remark 28 \\ \hline
7 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & 154 & [27] \\
7 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & **96** & Section 6.4 \\
7 & 8-bit & \(\mathbb{F}_{2^{8}}\)/0x11b & 308 & [27] \\
7 & 8-bit & \(GL(8,\mathbb{F}_{2})\) & **176** & Remark 29 \\ \hline
8 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & 216 & [27] \\
8 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & 108 & [35] \\
8 & 4-bit & \(\mathbb{F}_{2^{4}}\)/0x13 & 108 & Section 6.5 \\
8 & 8-bit & \(\mathbb{F}_{2^{8}}\)/0x11b & 432 & [27] \\
8 & 8-bit & \(GL(8,\mathbb{F}_{2})\) & 204 & [35] \\
8 & 8-bit & \(GL(8,\mathbb{F}_{2})\) & 204 & Remark 30 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of nonrecursive NMDS matrices of order \(n\).
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). Using these three GDLS matrices, we propose a \(5\times 5\) NMDS matrix as follows:
\[M=B_{2}B_{3}B_{1}B_{2}=\begin{bmatrix}0&1&0&0&1\\ 0&1&1&0&0\\ 0&0&0&1&0\\ 0&0&0&1&1\\ 1&0&0&0&0\end{bmatrix}\begin{bmatrix}0&1&0&0&1\\ 0&1&1&0&0\\ 0&0&0&1&0\\ 0&0&0&\alpha&1\\ 1&0&0&0&0\end{bmatrix}\begin{bmatrix}0&1&0&0&1\\ 0&\alpha&1&0&0\\ 0&0&0&1&0\\ 0&0&0&1&1\\ 1&0&0&0&0\end{bmatrix}\begin{bmatrix}0&1&0&0&1\\ 0&1&1&0&0\\ 0&0&0&1&0\\ 0&0&0&1&1\\ 1&0&0&0&0\end{bmatrix} \tag{17}\] \[=\begin{bmatrix}1&\alpha+1&\alpha+1&0&1\\ 1&\alpha&\alpha&1&0\\ \alpha&1&0&\alpha&\alpha+1\\ \alpha+1&0&1&\alpha&\alpha+1\\ 0&\alpha+1&\alpha&1&1\end{bmatrix}.\]
Now, \(XOR(M)=XOR(B_{1})+2\cdot XOR(B_{2})+XOR(B_{3})\). Therefore, \(M\) can be implemented with \((1+3\cdot 4)+2\cdot(0+3\cdot 4)+(1+3\cdot 4)=50\) XORs over the field \(\mathbb{F}_{2^{4}}/0\)x\(13\).
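The product in (17) can be recomputed mechanically. The sketch below (an illustration, not the authors' code) rebuilds the three GDLS factors from the parameters listed above, multiplies them over \(\mathbb{F}_{2^{4}}/0\)x\(13\) with \(\alpha\) represented by the integer 2, and prints \(M=B_{2}B_{3}B_{1}B_{2}\) so it can be compared entry by entry with (17) (\(\alpha\) prints as 2 and \(\alpha+1\) as 3).

```python
IRR = 0x13                  # x^4 + x + 1

def gf_mul(a, b):
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= IRR
    return res

def gdls(rho1, rho2, d1, d2):
    """GDLS(rho1, rho2; D1, D2) with the permutation convention used earlier."""
    n = len(rho1)
    m = [[0] * n for _ in range(n)]
    for j in range(n):
        m[rho1[j] - 1][j] ^= d1[j]
        m[rho2[j] - 1][j] ^= d2[j]
    return m

def mat_mul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0
            for k in range(n):
                s ^= gf_mul(A[i][k], B[k][j])
            C[i][j] = s
    return C

a = 2
rho1, rho2 = [5, 1, 2, 3, 4], [3, 2, 5, 4, 1]
B1 = gdls(rho1, rho2, [1] * 5, [0, a, 0, 1, 1])
B2 = gdls(rho1, rho2, [1] * 5, [0, 1, 0, 1, 1])
B3 = gdls(rho1, rho2, [1] * 5, [0, 1, 0, a, 1])

M = mat_mul(mat_mul(mat_mul(B2, B3), B1), B2)
for row in M:
    print(row)
```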
Remark 27: As discussed in Remark 15, if \(\alpha\) is replaced with \(C\), the matrix \(M\) from (17) will be NMDS over \(GL(8,\mathbb{F}_{2})\), with an implementation cost of \((1+3\cdot 8)+2\cdot(0+3\cdot 8)+(1+3\cdot 8)=98\) XORs.
### Construction of \(6\times 6\) nonrecursive NMDS matrices
In this section, we propose a lightweight \(6\times 6\) NMDS matrix \(M\) that can be implemented with 65 XORs over the field \(\mathbb{F}_{2^{4}}\). The matrix \(M\) is constructed from three GDLS matrices, \(B_{1}\), \(B_{2}\), and \(B_{3}\), of order 6, as \(M=B_{2}^{2}B_{1}B_{3}B_{2}\). These GDLS matrices are constructed using the permutations \(\rho_{1}=[6,1,2,3,4,5]\) and \(\rho_{2}=[5,6,1,2,3,4]\) and by the following diagonal matrices as follows:
1. \(B_{1}\): \(\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,\alpha,1,\alpha)\) and \(D_{2}=diag(0,\alpha,0,1,0,1)\)
2. \(B_{2}\): \(\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,1,1,1)\) and \(D_{2}=diag(0,1,0,1,0,1)\)
3. \(B_{3}\): \(\rho_{1},\rho_{2},\ D_{1}=diag(\alpha,1,1,\alpha^{-1},1,1)\) and \(D_{2}=diag(0,1,0,1,0,1)\)
\[B_{1}=\begin{bmatrix}0&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&\alpha&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&0&\alpha\\ 1&\alpha&0&0&0&0\end{bmatrix}\quad B_{2}=\begin{bmatrix}0&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&0&1\\ 1&1&0&0&0&0\end{bmatrix}\quad B_{3}=\begin{bmatrix}0&1&0&0&0&0\\ 0&0&1&1&0&0\\ 0&0&0&\alpha^{-1}&0&0\\ 0&0&0&0&1&1\\ 0&0&0&0&0&1\\ \alpha&1&0&0&0&0\end{bmatrix}, \tag{18}\]
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). Now it can be checked that \(M\) is NMDS over \(\mathbb{F}_{2^{4}}/0\)x\(13\) with an implementation cost of 65 XORs, calculated as \(XOR(M)=XOR(B_{1})+3\cdot XOR(B_{2})+XOR(B_{3})=(1+1+1+3\cdot 4)+3\cdot(0+3\cdot 4)+(1+ 1+3\cdot 4)=65\).
Remark 28: As discussed in Remark 15, if \(\alpha\) is replaced with \(C\), the matrix \(M\) constructed from \(B_{1},B_{2}\) and \(B_{3}\) in (18) will be NMDS over \(GL(8,\mathbb{F}_{2})\), with an implementation cost of 125 XORs.
### Construction of \(7\times 7\) nonrecursive NMDS matrices
This section presents three GDLS matrices, \(B_{1},B_{2}\), and \(B_{3}\), of order \(7\). These matrices are constructed using the permutations \(\rho_{1}=[6,7,4,5,2,3,1]\) and \(\rho_{2}=[3,2,1,4,7,6,5]\), along with specific diagonal matrices as follows:
1. \(B_{1}\colon\rho_{1},\rho_{2},\ D_{1}=diag(1,\alpha^{-1},1,\alpha^{-2},1,\alpha ^{2},1)\) and \(D_{2}=diag(1,0,1,0,1,0,1)\)
2. \(B_{2}\colon\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,1,1,1,1)\) and \(D_{2}=diag(1,0,1,0,1,0,1)\)
3. \(B_{3}\colon\rho_{1},\rho_{2},\ D_{1}=diag(1,1,1,1,1,1,\alpha^{-1})\) and \(D_{2}=diag(1,0,1,0,\alpha^{-2},0,1)\)
\[B_{1}=\begin{bmatrix}0&0&1&0&0&0&1\\ 0&0&0&0&1&0&0\\ 1&0&0&0&0&\alpha^{2}&0\\ 0&0&1&0&0&0&0\\ 0&0&0&\alpha^{-2}&0&0&1\\ 1&0&0&0&0&0&0\\ 0&\alpha^{-1}&0&0&1&0&0\end{bmatrix}\quad B_{2}=\begin{bmatrix}0&0&1&0&0&0&1\\ 0&0&0&0&1&0&0\\ 1&0&0&0&0&1&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&1\\ 1&0&0&0&0&0&0\\ 0&1&0&0&1&0&0\end{bmatrix}\quad B_{3}=\begin{bmatrix}0&0&1&0&0&0&\alpha^{-1}\\ 0&0&0&0&1&0&0\\ 1&0&0&0&0&1&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&1\\ 1&0&0&0&0&0&0\\ 0&1&0&0&\alpha^{-2}&0&0\end{bmatrix}, \tag{19}\]
where \(\alpha\) is a primitive element in \(\mathbb{F}_{2^{4}}\) and a root of the polynomial \(x^{4}+x+1\). Using these three GDLS matrices, we propose a \(7\times 7\) matrix \(M\) given by \(M=B_{3}B_{1}^{2}B_{3}B_{2}\). It can be verified that \(M\) is an NMDS matrix over \(\mathbb{F}_{2^{4}}/0\)x13 with an implementation cost of 96 XORs, which is calculated as \(XOR(M)=\ 2\cdot(1+2+2+4\cdot 4)+(0+4\cdot 4)+2\cdot(1+2+4\cdot 4)=96\).
Remark 29: By replacing \(\alpha\) with \(C\), as discussed in Remark 15, the matrix \(M\) constructed from \(B_{1},B_{2}\) and \(B_{3}\) in (19) becomes an NMDS over \(GL(8,\mathbb{F}_{2})\). Furthermore, the binary matrices \(C^{2}\) and \(C^{-2}\) can be implemented with 2 XORs. Consequently, the implementation cost of the matrix \(M\) is 176 XORs over \(GL(8,\mathbb{F}_{2})\).
### Construction of \(8\times 8\) nonrecursive NMDS matrices
In this section, we present a lightweight \(8\times 8\) matrix \(M\) over the field \(\mathbb{F}_{2^{4}}\) that can be implemented with 108 XORs, which meets the best known result. To construct the matrix \(M\), we use three GDLS matrices, \(B_{1}\), \(B_{2}\), and \(B_{3}\), of order 8, by \(M=B_{2}B_{1}B_{3}B_{2}^{3}\). These GDLS matrices are generated using the permutations \(\rho_{1}=[4,5,2,3,8,1,6,7]\) and \(\rho_{2}=[5,4,3,6,1,8,7,2]\), along with the following diagonal matrices.
1. \(B_{1}\colon\rho_{1},\rho_{2},D_{1}=diag(1,\alpha,1,\alpha,\ 1,\alpha,1,\alpha)\) and \(D_{2}=diag(1,0,1,0,\ 1,0,1,\\ 0)\)
2. \(B_{2}\colon\rho_{1},\rho_{2},D_{1}=diag(1,1,1,1,\ 1,1,1,1)\), \(D_{2}=diag(1,0,1,0,\ 1,0,1,0)\)
3. \(B_{3}\colon\rho_{1},\rho_{2},D_{1}=diag(1,1,1,1,\ 1,1,1,1)\) and \(D_{2}=diag(\alpha^{-2},\ 0,\ \alpha^{-2},\ 0,\\ \alpha^{-2},0,\alpha^{-2},0)\)
\[B_{1}=\begin{bmatrix}0&0&0&0&1&\alpha&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&1&\alpha&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ 1&\alpha&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1&\alpha\\ 0&0&0&0&1&0&0&0\end{bmatrix}\quad B_{2}=\begin{bmatrix}0&0&0&0&1&1&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&1&1&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ 1&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1&1\\ 0&0&0&0&1&0&0&0\end{bmatrix}\quad B_{3}=\begin{bmatrix}0&0&0&0&\alpha^{-2}&1&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&\alpha^{-2}&1&0&0&0&0\\ 1&0&0&0&0&0&0&0\\ \alpha^{-2}&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&\alpha^{-2}&1\\ 0&0&0&0&1&0&0&0\end{bmatrix}, \tag{20}\]
where \(\alpha\) is a primitive element of \(\mathbb{F}_{2^{4}}\) and a root of \(x^{4}+x+1\). Therefore, \(M\) can be implemented with \((1+1+1+1+4\cdot 4)+4\cdot(0+4\cdot 4)+(2+2+2+2+4\cdot 4)=108\) XORs.
Remark 30: As discussed in Remark 15, if we substitute \(\alpha\) with \(C\), the matrix \(M\) that is formed from \(B_{1},B_{2}\), and \(B_{3}\) in (20) becomes an NMDS matrix over \(GL(8,\mathbb{F}_{2})\). Also, the binary matrix \(C^{-2}\) can be implemented with only 2 XORs. Therefore, the implementation cost of the matrix \(M\) becomes 204 XORs over \(GL(8,\mathbb{F}_{2})\).
Remark 31: From Table 5, we can observe that the proposed nonrecursive NMDS matrices of order 4 and 8 have the same cost as given in the paper [35] for \(\mathbb{F}_{2^{4}}\) and \(\mathbb{F}_{2^{8}}\). However, it should be noted that the paper [35] does not provide NMDS matrices for the orders \(n=5\), \(6\), and \(7\). In contrast, our proposed nonrecursive NMDS matrices for orders 5, 6, and 7 not only fill this gap but also have a lower hardware cost compared to the existing nonrecursive NMDS matrices in the literature [27] (as shown in Table 5). Also, the GDLS matrix structure is not limited to even orders, unlike the GFS or EGFS matrix structure used in [35]. The GDLS matrix structure is applicable to matrices of all orders, enabling improvements in several parameters that are not achievable with the GFS or EGFS matrix structure.
## 7 Conclusion and Future Work
This paper examines the construction of NMDS matrices using both recursive and nonrecursive approaches, investigates various theoretical results, and presents some lightweight NMDS matrices of various orders in both approaches. Table 6 presents a summary of the implementation cost of the NMDS matrices given in this paper. We explore the DLS matrices and derive some theoretical results for the construction of recursive NMDS matrices. We prove that for \(n\geq 4\), there does not exist any \(k\)-NMDS sparse matrix of order \(n\) with \(\mathcal{K}=1\) and \(k\leq n\) over a field of characteristic 2. For the nonrecursive NMDS matrices, we examine the circulant, left-circulant, Toeplitz, and Hankel families of matrices. We prove that Toeplitz matrices of order \(n>4\) cannot be simultaneously NMDS and involutory over a field of characteristic 2. To compare with MDS matrices, we examine some well-known results of MDS matrices and apply them to NMDS matrices. For instance, Table 4 compares the involutory and orthogonal properties
of MDS and NMDS matrices constructed from circulant, left-circulant, Toeplitz, and Hankel matrices. We use GDLS matrices to provide some lightweight NMDS matrices that can be computed in one clock cycle. The proposed nonrecursive NMDS matrices of orders 4, 5, 6, 7, and 8 can be implemented with 24, 50, 65, 96, and 108 XORs over \(\mathbb{F}_{2^{4}}\), respectively. These results match the best-known lightweight NMDS matrices of orders 4 and 8, and outperform the best-known matrices of orders 5, 6, and 7.
In the literature, there has been an extensive study of the direct construction of MDS matrices using both recursive and nonrecursive methods. Nonrecursive direct constructions are mainly obtained from Cauchy and Vandermonde based constructions, while recursive direct constructions are derived from companion matrices by some coding theoretic techniques. However, there is no direct construction available for the NMDS matrices. Therefore, finding a direct construction method for NMDS matrices using both recursive and nonrecursive approaches could be a potential area for future work.
\begin{table}
\begin{tabular}{|c c c c c|} \hline Order \(n\) & Input & Type & Iterations & XOR count \\ \hline
4 & 4-bit & Recursive & 3 & 8 \\
4 & 8-bit & Recursive & 3 & 16 \\
5 & 4-bit & Recursive & 5 & 13 \\
5 & 8-bit & Recursive & 5 & 25 \\
6 & 4-bit & Recursive & 6 & 13 \\
6 & 8-bit & Recursive & 6 & 25 \\
7 & 4-bit & Recursive & 7 & 18 \\
7 & 8-bit & Recursive & 7 & 34 \\
8 & 8-bit & Recursive & 8 & 38 \\ \hline
4 & 4-bit & Nonrecursive & - & 24 \\
4 & 8-bit & Nonrecursive & - & 48 \\
5 & 4-bit & Nonrecursive & - & 50 \\
5 & 8-bit & Nonrecursive & - & 98 \\
6 & 4-bit & Nonrecursive & - & 65 \\
6 & 8-bit & Nonrecursive & - & 125 \\
7 & 4-bit & Nonrecursive & - & 96 \\
7 & 8-bit & Nonrecursive & - & 176 \\
8 & 4-bit & Nonrecursive & - & 108 \\
8 & 8-bit & Nonrecursive & - & 204 \\ \hline \end{tabular}
\end{table}
Table 6: A summary of results on NMDS matrices of this paper. |
2306.02582 | Enhancing Point Annotations with Superpixel and Confidence Learning
Guided for Improving Semi-Supervised OCT Fluid Segmentation | Automatic segmentation of fluid in Optical Coherence Tomography (OCT) images
is beneficial for ophthalmologists to make an accurate diagnosis. Although
semi-supervised OCT fluid segmentation networks enhance their performance by
introducing additional unlabeled data, the performance enhancement is limited.
To address this, we propose Superpixel and Confident Learning Guide Point
Annotations Network (SCLGPA-Net) based on the teacher-student architecture,
which can learn OCT fluid segmentation from limited fully-annotated data and
abundant point-annotated data. Specifically, we use points to annotate fluid
regions in unlabeled OCT images and the Superpixel-Guided Pseudo-Label
Generation (SGPLG) module generates pseudo-labels and pixel-level label trust
maps from the point annotations. The label trust maps provide an indication of
the reliability of the pseudo-labels. Furthermore, we propose the Confident
Learning Guided Label Refinement (CLGLR) module identifies error information in
the pseudo-labels and leads to further refinement. Experiments on the RETOUCH
dataset show that we are able to reduce the need for fully-annotated data by
94.22\%, closing the gap with the best fully supervised baselines to a mean IoU
of only 2\%. Furthermore, We constructed a private 2D OCT fluid segmentation
dataset for evaluation. Compared with other methods, comprehensive experimental
results demonstrate that the proposed method can achieve excellent performance
in OCT fluid segmentation. | Tengjin Weng, Yang Shen, Kai Jin, Zhiming Cheng, Yunxiang Li, Gewen Zhang, Shuai Wang, Yaqi Wang | 2023-06-05T04:21:00Z | http://arxiv.org/abs/2306.02582v3 | # Learning from Noisy Labels Generated by Extremely Point Annotations for OCT Fluid Segmentation
###### Abstract
Automatic segmentation of fluid in OCT (Optical Coherence Tomography) images is beneficial for ophthalmologists to make an accurate diagnosis. Currently, data-driven convolutional neural networks (CNNs) have achieved great success in OCT fluid segmentation. However, obtaining pixel-level masks of OCT images is time-consuming and requires expertise. The popular weakly-supervised strategy is to generate noisy pseudo-labels from weak annotations, but the noise information introduced may mislead the model training. To address this issue, (i) we propose a superpixel-guided method for generating noisy labels from weak point annotations, called Point to Noisy by Superpixel (PNS), which limits the network from over-fitting noise by assigning low confidence to suspiciously noisy label pixels, and (ii) we propose a Two-Stage Mean-Teacher-assisted Confident Learning (2SMTCL) method based on MTCL for multi-category OCT fluid segmentation, which alleviates the uncertainty and computing power consumption introduced by the real-time characterization noise of MTCL. For evaluation, we have constructed a 2D OCT fluid segmentation dataset. Compared with other state-of-art label-denoising methods, comprehensive experimental results demonstrate that the proposed method can achieve excellent performance in OCT fluid segmentation as well as label denoising. Our study provides an efficient, accurate, and practical solution for fluid segmentation of OCT images, which is expected to have a positive impact on the diagnosis and treatment of patients in the field of ophthalmology.
## 1 Introduction
The intricate anatomical structures and diverse disease symptoms associated with eye diseases often present substantial hurdles in diagnosis. Optical Coherence Tomography (OCT), a non-intrusive imaging technique [1, 2], offers high-definition, sectional imaging of the retina and optic nerve. This allows eye specialists to accurately observe and quantify retinal abnormalities. More specifically, retinal fluid in OCT, a key indicator in detecting and diagnosing eye conditions, is categorized into intraretinal fluid (IRF), subretinal fluid (SRF), and pigment epithelial detachment (PED) based on its location of accumulation. These fluids are vital biomarkers for ocular diseases like age-related macular degeneration (AMD) and retinal vein occlusion (RVO). Identifying the existence and location of these fluids assists eye doctors in diagnosing, monitoring, and planning effective treatment strategies to maintain vision. Nevertheless, manual analysis of OCT images can be time-consuming and prone to errors. Traditional segmentation methods, such as threshold-based [3], graph-based [4, 5], and machine learning-based [6] techniques have been used in OCT segmentation, but often fall short due to varying image quality, the necessity for extensive specialized knowledge, and limited generalization abilities.
In contrast to traditional segmentation methods that rely on carefully crafted handcrafted features, convolutional
neural networks (CNNs) can automatically learn and extract image features from the data itself. Therefore, various CNN-based methods have been developed for performing segmentation tasks, such as FCN [7], SegNet [8], DeepLab [9], and UNet [10]. The utilization of CNNs in medical image segmentation requires substantial amounts of data. Unfortunately, manual segmentation of medical images demands significant expertise and time. Obtaining an adequate quantity of accurately-labeled data from medical experts can be a difficult and challenging task, thereby posing obstacles to developing precise CNN models for medical image segmentation. Without enough clearly labeled pixel-level annotations, CNN-based segmentation methods often struggle to fit, leading to performance degradation. To address this problem, researchers choose to collect additional labeled data without quality control, such as crowdsourcing or noisy pseudo-labeled data generated based on weak supervision. However, directly combining clean labels with noisy labels may confuse the network during training and lead to performance degradation, negating the benefit provided by clean labels [11; 12]. Therefore, it is crucial to effectively and robustly utilize the additional information available in large amounts of noisy-labeled data.
To tackle the challenge of noisy labels, a range of label-denoising strategies has been proposed. These strategies are generally divided into two categories, depending on the way input data is partitioned. The first category comprises methods that gather and amalgamate data from various sources to indiscriminately train the model. The second category is designed for practical scenarios where professionals are asked to label or perform quality checks on a small dataset, distinguishing between clearly-labeled and noisy-labeled data. Our research is centered on this second approach, where prior knowledge is employed to aid the network in differentiating between straightforward and ambiguous labeled data. For this purpose, several techniques have been suggested, one of which is Mean-Teacher-assisted Confident Learning (MTCL[13]), a classic method of distinguishing data sources. It can robustly learn segmentation from limited high-quality labeled data and abundant low-quality labeled data. Specifically, the framework of MTCL can leverage the extra dark knowledge in low-quality labeled images based on perturbation-based unsupervised consistency, and effectively exploit the beneficial information in low-quality noisy labels through explicit label refinement. However, the Confident Learning (CL) module based on Cleanlab can only be calculated on the CPU, and it is time-consuming to apply CL to characterize label noise at every training iteration. Moreover, multi-category segmentation is more challenging than two-category segmentation, so the label noise characterized by CL in real time may be inaccurate.
In this work, we illustrate the application of noisy label learning (NLL) to the OCT fluid segmentation task from the following two aspects: (i) We propose Point to Noisy by Superpixel (PNS), which can generate noisy labels from weak point annotations via superpixel guidance, and generate label trust graphs to provide a confidence measure for each label pixel in the noisy labels. These label trust graphs can constrain the network from over-fitting noise by assigning lower confidence to suspected noisy label pixels. (ii) We choose MTCL as the NLL framework, which can robustly learn segmentation from limited high-quality labeled data and abundant low-quality labeled data. Considering that multi-category segmentation is more challenging than two-category segmentation, we propose a Two-Stage Mean-Teacher-assisted Confident Learning (2SMTCL) method for multi-category OCT fluid segmentation, which alleviates the uncertainty and computing power consumption introduced by the real-time noise characterization of MTCL. 2SMTCL trains two networks: a noisy network and a denoising network. Specifically, the first-stage noisy network is trained based on the teacher-student architecture, and then CL is introduced to characterize the pixel-level label noise and refine the noisy labels. Finally, the second-stage denoising network is trained with the denoised labels.
We evaluate the performance of 2SMTCL on the OCT fluid segmentation dataset employed in this study. The results show that our method can effectively exploit weak point annotations to improve segmentation performance, outperforming other competing methods. The contributions of our research are summarized as follows:
* To the best of our knowledge, we are the first researchers to apply NLL to the task of OCT fluid segmentation.
* We propose a superpixel-guided method for generating noisy labels from weak point annotations named PNS, which can constrain the network from over-fitting noise by assigning lower confidence to suspected noisy label pixels.
Figure 1: The Point to Noisy by Superpixel (PNS) method generates noisy labels and label trust graphs from point annotations, guided by superpixels. The superpixel image is obtained using the superpixel algorithm (SLIC). The noisy label and label trust graph are obtained by computing the similarity between each labeled superpixel block and its adjacent superpixel blocks.
* We propose 2SMTCL, a two-stage method based on MTCL for multi-category OCT fluid segmentation, which alleviates the uncertainty and computing power consumption introduced by the real-time noise characterization of MTCL.
* We have constructed a 2D OCT image segmentation dataset with corresponding ground truth annotations and point annotations. This dataset can serve as a valuable resource for training and evaluating deep learning models aimed at achieving accurate fluid segmentation.
## 2 Relate Work
### CNN-Based OCT Fluid Segmentation
Many successful OCT fluid segmentation methods use convolutional neural networks (CNNs) based on the UNet [10] architecture. Rashno _et al._[14] incorporated a graph shortest path technique as a post-processing step to enhance the predictive results of UNet for OCT fluid segmentation. To exploit the structural relationship between retinal layers and fluids, Xu _et al._[15] proposed a two-stage fluid segmentation framework. They first trained a retinal layer segmentation network to extract retinal layer maps which were used to constrain the fluid segmentation network in the second stage. Several other studies, such as [16, 17], employed a graph-cut method to generate retinal layer segmentation maps. These maps were then combined to train a UNet for fluid segmentation. Moreover, De _et al._[18] proposed a UNet-based architecture that can simultaneously segment retinal layers and fluids, utilizing pixel-level annotations of retinal layer and fluid masks to enhance OCT segmentation performance. Although various methods have been proposed with little difference in performance, the effectiveness of current OCT fluid segmentation methods relies heavily on a large number of datasets with annotations.
### Weakly-Supervised Segmentation
Given the inaccessibility of large amounts of fully annotated data, several researchers have developed various weakly supervised medical image segmentation methods. A prevalent approach involves generating noisy pseudo-labels from weak annotations and subsequently using these to train a segmentation model. Pu _et al._[19] proposed a technique that utilizes a graph neural network based on superpixels to create noisy pseudo-labels from weak annotations like points or scribbles. Nevertheless, this method could introduce two sources of error: inaccuracies in the generated noisy pseudo-labels and the subsequent errors in learning segmentation from these labels. To circumvent these issues, some strategies directly train segmentation models on partial annotations. Bearman _et al._[20], for instance, combined point-supervised and self-supervised techniques to master object segmentation within images. Other strategies like [21, 22] employ prior knowledge of constraint expressions to aid segmentation during training. In the realm of medical image segmentation, this prior knowledge can be incredibly useful, given the frequent availability of information about the target region in advance.
While existing weakly supervised methods have demonstrated their potential in reducing manual labor and improving segmentation performance, their application to OCT fluid segmentation has not been extensively investigated. He _et al._[23] introduced a method dubbed Intra-Slice Contrast Learning Network (ISCLNet) that relies on weak point supervision for 3D OCT fluid segmentation. However, in actual diagnoses, ophthalmologists typically only concentrate on a limited number of OCT images displaying fluid. The inter-image comparison technique deployed by ISCLNet can be challenging when dealing with incomplete OCT data. Motivated by NLL, we believe this method can be adapted to weakly supervised 2D OCT fluid segmentation.
Figure 2: Examples of retinas in OCT images with manual annotations. (a) The original OCT image with fluids; (b) The point annotations by points and lines; (c) The noisy labels generated by PNS; (d) The full mask annotations. In (c) and (d), the green, blue, and red contours denote PED, SRF, and IRF, respectively.
### Learning Segmentation with Noisy Labels
Previous work has pointed out that labeled data with noise can mislead network training and degrade network performance. Most existing noise-supervised learning works focus on image-level classification tasks [24], [25], [26] while more challenging pixel-wise segmentation tasks remain to be studied. Zhang _et al._[27] proposed a TriNet based on Co-teaching [25], which trains a third network using combined predictions from the first two networks to alleviate the misleading problem caused by label noise. Li _et al._[28] proposed a method that employs superpixels to guide the network for noise-aware training and refinement of noisy labels. Zhang _et al._[29] suggested a two-stage strategy for pre-training a network using a combination of different datasets, followed by fine-tuning the labels by Confident Learning to train a second network. Zhu _et al._[30] proposed a module for assessing the quality of image-level labels to identify high-quality labels for fine-tuning a network. Xu _et al._[13] developed the MTCL framework based on Mean-Teacher architecture and Confident Learning, which can robustly learn segmentation from limited high-quality labeled data and abundant low-quality labeled data. The KDEM [31] method is an extension of the semi-supervised learning approach proposed by [11], which introduces additional techniques such as knowledge distillation and entropy minimization regularization to further improve the segmentation performance. Yang _et al._[32] introduce a dual-branch network that can learn efficiently by processing accurate and noisy annotations separately. These methods demonstrate how to improve the network's ability to learn noisy labels and provide insights for future research in this area. Extensive experiments of many NLL methods on datasets such as JSRT [33] and ISIC [34] have achieved promising results, but limitations caused by the lack of OCT fluid segmentation datasets hinder the application of these methods in this field. Therefore, the effectiveness of NLL methods in OCT fluid segmentation remains largely unexplored.
## 3 Methodology
### Framework Overview
Our method divides the dataset into two groups: clearly-labeled data (CD) and noisy-labeled data (ND). The noisy labels and label trust graphs of ND are generated using weak point annotations via PNS. To simplify the description of our methodology, we define \(M\) samples to represent the CD, while the remaining \(N-M\) samples represent the ND. We denote the CD as \(\mathbf{D}_{c}=\{(\mathbf{X}_{(i)},\mathbf{Y}_{(i)})\}_{i=1}^{M}\) and the ND as \(\mathbf{D}_{n}=\{(\mathbf{X}_{(i)},\mathbf{\tilde{Y}}_{(i)},\mathbf{U}_{(i)}) \}_{i=M+1}^{N}\), where \(\mathbf{X}_{(i)}\in\mathbb{R}^{\Omega_{i}}\) represents the input 2D OCT images. \(\mathbf{Y}_{(i)},\mathbf{\tilde{Y}}_{(i)}\in\{0,1,2,3\}^{\Omega_{i}}\) (four types of segmentation tasks) denotes the clean segmentation label and noisy segmentation label of \(\mathbf{X}_{(i)}\), respectively. The label trust graph \(\mathbf{U}_{(i)}\) indicates the degree of trust of \(\mathbf{\tilde{Y}}_{(i)}\), where \(\mathbf{U}_{(i)}\subseteq\{0,0.1,\ldots,1\}^{\Omega_{i}}\).
Fig. 3 illustrates our method that aims to learn OCT fluid segmentation simultaneously from limited CD and abundant ND. The images of the CD are fed to the student model, and the images of the ND are both fed to the student model and teacher model. Simultaneously, the PNS method generates noisy labels and label trust graphs of ND. After obtaining the noisy network (student network) based on MT architecture, CL is used to characterize the label error in ND to obtain estimated error maps. The denoising network is trained with denoised labels and refined label trust graphs, which are obtained guided by the estimated error maps. Our method will be elaborated on the following two aspects:
(i) How to generate noisy labels and label trust graphs to constrain the network from over-fitting noise.
(ii) How 2SMTCL robustly learns multi-category OCT fluid segmentation from abundant noisy labels.
### Labels and Label Trust Graphs of ND Generated by PNS
Our proposed PNS can generate noisy labels from weak point annotations via superpixels guidance, and generate label trust graphs to provide a confidence measure for each label pixel in the noisy labels. These label trust graphs can constrain the network from over-fitting noise by assigning lower confidence to suspected noisy label pixels. Fig. 1 shows how noisy labels and label trust graphs are generated from point annotations via PNS.
#### 3.2.1 Superpixel-guided for Generating Noisy Labels from Weak Point Annotations
Our OCT fluid segmentation dataset includes fully annotated labels and weakly annotated labels for three types of fluids: PED, SRF, and IRF. For weak annotation, points are used to indicate the center of the fluid accumulation (for SRF and IRF) and lines (consisting of two or more points) to mark the bottom of the PED. This greatly simplifies the annotation process and reduces the required labor. The proposed PNS can generate noisy labels from these point annotations.
Formally, given an image \(\mathbf{X}\), the weak label is represented by \(\mathbf{Y}^{\prime}=\{Y^{\prime}_{i}\}_{i=1}^{n}\), \(Y^{\prime}_{i}\in\{1,2,\ldots,C\}\), where \(C\) is the number of semantic classes and \(n\) is the number of pixels. The superpixel image is obtained based on the SLIC [35] algorithm. We denote the superpixel image as \(\mathbf{S}=\{S_{i}\}_{i=1}^{n}\), where \(S_{i}\in\{1,2,\ldots,K\}\) and \(K\) is the number of superpixel blocks. Here \(S_{j}=k\) means that the pixel \(j\) belongs to the \(k^{th}\) superpixel block. We can represent all the pixels \(j\) that are included in the \(k^{th}\) superpixel block by \(\mathbf{\tilde{S}}=\{\tilde{S}_{k}\}_{k=1}^{K}\), where \(\tilde{S}_{k}=\{j:S_{j}=k\}\). Further, the superpixel label is
represented by \(\mathbf{\bar{Y}}=\{\bar{Y}_{k}\}_{k=1}^{K}\) and the initial values are zero. The following procedure illustrates how to convert weak label \(\mathbf{Y}^{\prime}\) to superpixel label \(\mathbf{\bar{Y}}\):
\[\bar{Y}_{k}=c,\exists\;(Y_{j}^{\prime}=c), \tag{1}\]
where \(j\in\tilde{S}_{k}\) and \(c\neq 0\). From this, we get the initial superpixel label \(\mathbf{\bar{Y}}\). Due to the scarcity of pixel annotations in \(\mathbf{Y}^{\prime}\), the majority of \(\bar{Y}_{k}\) values are equal to zero. We identify all \(\bar{Y}_{k}\) not equal to 0 and randomly select a superpixel block label \(\bar{Y}_{ms}\), whose corresponding superpixel block is \(\tilde{S}_{ms}\). We select one of the adjacent superpixel blocks \(\tilde{S}_{ns}\) of \(\tilde{S}_{ms}\) (all adjacent superpixel blocks for IRF and SRF, only the upper adjacent superpixel block for PED) and perform the following operation to propagate the label to \(\bar{Y}_{ns}\):
\[\bar{Y}_{ns}=\bar{Y}_{ms}\cdot\mathbb{I}(cos\_dis(\tilde{S}_{ms},\tilde{S}_{ns})\geq t), \tag{2}\]
where \(cos\_dis(\tilde{S}_{ms},\tilde{S}_{ns})\) represents the similarity between superpixel blocks and \(t\) is the similarity threshold (set to 0.6 for IRF and SRF, and to 0.5 for PED). The similarity of \(\tilde{S}_{ms}\) and \(\tilde{S}_{ns}\) is computed as follows:
\[cos\_dis(\tilde{S}_{ms},\tilde{S}_{ns})=\frac{\sum_{v=0}^{255}\left(\mathbf{O}_{ms}^{v}\right)\left(\mathbf{O}_{ns}^{v}\right)}{\sqrt{\sum_{v=0}^{255}\left(\mathbf{O}_{ms}^{v}\right)^{2}}\sqrt{\sum_{v=0}^{255}\left(\mathbf{O}_{ns}^{v}\right)^{2}}}, \tag{3}\]
where \(\mathbf{O}_{k}^{v}\) represents the number of pixels with value \(v\) contained in the \(k^{th}\) superpixel block. The number of occurrences of each pixel value in \(\tilde{S}_{ms}\) and \(\tilde{S}_{ns}\) is calculated by the following formula:
\[\mathbf{O}_{ms}^{v}=\sum_{j:S_{j}=ms}\mathbb{I}(\mathbf{X}[j]=v),\quad v\in[0,255], \tag{4}\] \[\mathbf{O}_{ns}^{v}=\sum_{j:S_{j}=ns}\mathbb{I}(\mathbf{X}[j]=v),\quad v\in[0,255].\]
If \(cos\_dis(\tilde{S}_{ms},\tilde{S}_{ns})\geq t\), we assign the value of \(\bar{Y}_{ms}\) to \(\bar{Y}_{ns}\), and the superpixel blocks adjacent to \(\tilde{S}_{ns}\) are then also likely to be similar to \(\tilde{S}_{ms}\). Therefore, the adjacent superpixel blocks of \(\tilde{S}_{ns}\) will be regarded as adjacent superpixel blocks of \(\tilde{S}_{ms}\). The processing of \(\bar{Y}_{ms}\) does not end until all the similarity values of adjacent superpixel blocks are less than the threshold \(t\). After processing all initial \(\bar{Y}_{k}\) not equal to 0, the superpixel label \(\mathbf{\bar{Y}}\) is converted to the pixel-wise noisy label \(\mathbf{\tilde{Y}}=\{\tilde{Y}_{i}\}_{i=1}^{n}\):
\[\tilde{Y}_{i}=\bar{Y}_{S_{i}}. \tag{5}\]
Fig. 2 shows the visualization of our noisy labels generated from point annotations. Next, we will describe how to generate label trust graphs.
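To make the procedure of Eqs. (1)-(5) concrete, the following Python sketch grows point annotations over similar superpixels. It assumes the superpixel map has been computed beforehand (e.g., with SLIC) and that an adjacency structure is available; the function and argument names are illustrative, and details such as the upper-only neighbourhood for PED and the confidence decay are omitted, so this is a simplified reading of PNS rather than the authors' exact implementation.

```python
import numpy as np
from collections import deque

def histogram(image, mask):
    # 256-bin intensity histogram of one superpixel block (Eq. 4).
    return np.bincount(image[mask].ravel(), minlength=256).astype(float)

def cos_dis(h1, h2):
    # Cosine similarity between two superpixel histograms (Eq. 3).
    denom = np.linalg.norm(h1) * np.linalg.norm(h2) + 1e-12
    return float(h1 @ h2) / denom

def pns_noisy_labels(image, superpixels, point_labels, adjacency, t=0.6):
    """Grow point annotations over similar superpixels (Eqs. 1-5), simplified.

    image        : (H, W) uint8 grayscale OCT image
    superpixels  : (H, W) int map of superpixel ids (e.g. from SLIC)
    point_labels : (H, W) int map, non-zero only at annotated points
    adjacency    : dict {superpixel id: set of neighbouring ids}
    """
    hists = {k: histogram(image, superpixels == k) for k in np.unique(superpixels)}
    sp_label = {}                       # superpixel-level label (bar Y)
    seeds = []
    for k in np.unique(superpixels):    # Eq. (1): seed superpixels hit by a point
        vals = point_labels[superpixels == k]
        if vals.max() > 0:
            sp_label[k] = int(vals.max())
            seeds.append(k)
    for ms in seeds:                    # region growing, Eq. (2)
        queue, visited = deque([ms]), {ms}
        while queue:
            cur = queue.popleft()
            for ns in adjacency[cur] - visited:
                visited.add(ns)
                if cos_dis(hists[ms], hists[ns]) >= t:
                    sp_label[ns] = sp_label[ms]
                    queue.append(ns)    # neighbours of ns become candidates too
    noisy = np.zeros_like(superpixels)  # Eq. (5): back to pixel-wise labels
    for k, c in sp_label.items():
        noisy[superpixels == k] = c
    return noisy
```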
#### 3.2.2 Label Trust Graph for Noise-robust Learning
In the process of generating noisy labels, it is unreasonable to give the same confidence to all noisy labeled pixels. Therefore, we propose a method to assign suitable confidence by measuring the actual distance between superpixel blocks. We introduce a pixel-wise label trust graph \(\mathbf{U}=\{U_{i}\}_{i=1}^{n}\) where \(U_{i}\in\{0,0.1,\dots,1\}\). The label trust graph is used to adjust the influence of each pixel's label during training, which can help to mitigate the impact of noise on the network. Specifically, all values of \(U_{i}\) are set to 0.5 (the \(U_{i}\) value of the pixels contained in the initial superpixel blocks with \(\bar{Y}_{k}\) not equal to 0 is set to 1), and \(\mathbf{U}\) is updated at the same time as \(\mathbf{\bar{Y}}\). If \(cos\_dis(\tilde{S}_{ms},\tilde{S}_{ns})\geq t\), we calculate the superpixel block distance between \(\tilde{S}_{ms}\) and \(\tilde{S}_{ns}\) and assign lower confidence values to the corresponding pixels of \(\mathbf{U}\) that are farther away from \(\tilde{S}_{ms}\).
Figure 3: Illustration of Two-Stage Mean-Teacher-assisted Confident Learning (2SMTCL). The images of the CD are fed to the student model, and the images of the ND are both fed to the student model and teacher model. Simultaneously, the PNS method generates noisy labels and label trust graphs of ND. After obtaining the noisy network (student network) based on MT architecture, CL is used to characterize the label error in ND to obtain estimated error maps \(\mathbf{X}_{err}\). The denoising network is trained with denoised labels and refined label trust graphs, which are obtained guided by the estimated error maps.
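The paper does not specify the exact mapping from superpixel distance to confidence, so the sketch below only illustrates the idea with an assumed linear fall-off clipped to [0.5, 1]; all names (`assign_trust`, `seed_ids`, `centroids`) are hypothetical.

```python
import numpy as np

def assign_trust(superpixels, sp_label, seed_ids, centroids):
    """Illustrative confidence assignment for the label trust graph U.

    superpixels : (H, W) superpixel id map
    sp_label    : dict {superpixel id: propagated label} from the PNS growing step
    seed_ids    : dict {superpixel id: id of the seed superpixel it grew from}
    centroids   : dict {superpixel id: (row, col) centroid}
    The decay used here (linear in distance, clipped to [0.5, 1]) is an assumption.
    """
    U = np.full(superpixels.shape, 0.5, dtype=float)
    dists = {k: float(np.linalg.norm(np.subtract(centroids[k], centroids[seed_ids[k]])))
             for k in sp_label}
    d_max = max(dists.values()) + 1e-6 if dists else 1.0
    for k in sp_label:
        conf = 1.0 if k == seed_ids[k] else max(0.5, 1.0 - dists[k] / d_max)
        U[superpixels == k] = round(conf, 1)   # quantize to {0.5, 0.6, ..., 1.0}
    return U
```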
### 2SMTCL for Multi-category OCT Fluid Segmentation
We choose MTCL, a one-stage framework for two-category segmentation, as the NLL framework. Based on MTCL, we propose a two-stage method, 2SMTCL, for multi-category OCT fluid segmentation. Unlike MTCL, which introduces CL in real time during training for label noise characterization, 2SMTCL is a two-stage NLL framework. Specifically, after training the first-stage noisy network based on the teacher-student architecture, we introduce CL to characterize the pixel-level label noise and refine the noisy labels, and then train the second-stage denoising network with the denoised labels. In both stages, the network architecture is MT and the CD remains unchanged. Our motivation is as follows: (i) real-time noise characterization by CL may mislead the training of the network because multi-category segmentation is more challenging than two-category segmentation; (ii) given that label trust graphs already constrain network training, real-time noise characterization by CL is unnecessary and time-consuming. More details about the framework of 2SMTCL are explained in the following.
#### 3.3.1 Training Noisy Network Based on Mean-Teacher Architecture
Previous studies have demonstrated that noisy labels can have a detrimental effect on model training. To address this challenge, we choose MTCL as the NLL framework, which involves partitioning the dataset into two categories: confident clearly-labeled data (CD) and non-confident noisy-labeled data (ND). The Mean-Teacher (MT) model, which is effective in Semi-Supervised Learning (SSL), is chosen as the basic network architecture. The MT architecture comprises a student model (updated through back-propagation) and a teacher model (updated based on the weights of the student model at different training stages). A great strength of the MT framework is its ability to leverage knowledge from image-only data using perturbation-based consistency regularization.
Formally, we denote the weights of the student model at training step \(t\) as \(\theta_{t}\). We update the teacher model's weights \(\widetilde{\theta}_{t}\) using an exponential moving average (EMA) strategy, which can be formulated as follows:
\[\widetilde{\theta}_{t}=\alpha\widetilde{\theta}_{t-1}+(1-\alpha)\theta_{t}, \tag{6}\]
where \(\alpha\) is the EMA decay rate, and it is set to 0.99, as recommended by [36]. Based on the smoothness assumption [37], we encourage the teacher model's temporal ensemble prediction to be consistent with that of the student model under different perturbations, such as adding random Gaussian noise \(\xi\) to the input images. The student network in MT serves as our first-stage noisy network for pixel-level label noise characterization in the next stage.
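A minimal PyTorch sketch of the EMA update in Eq. (6) is shown below; the function name is illustrative and, depending on the backbone, buffers such as batch-norm statistics may also need to be copied.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, alpha: float = 0.99):
    # Eq. (6): theta_teacher <- alpha * theta_teacher + (1 - alpha) * theta_student
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1.0 - alpha)

# Typical use inside the training loop, after each optimizer step on the student:
#   loss.backward(); optimizer.step(); ema_update(teacher, student, alpha=0.99)
```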
#### 3.3.2 Confident Learning for Multi-Category Pixel-Wise Conditional Label Errors
Despite the presence of label trust graphs \(\mathbf{U}\), which are designed to limit the impact of noisy labels on model learning, there remains the potential for label noise to be learned by the model. Confident Learning (CL) [24] is able to identify label errors in datasets and enhance training with noisy labels by estimating the joint distribution between the noisy (observed) labels \(\tilde{y}\) and the true (latent) labels \(y^{*}\), as assumed by Angluin [38]. This estimation enables CL to assign higher confidence to instances with more reliable labels and lower confidence to instances with more questionable labels, which allows the erroneous labels to be found. Zhang _et al._[29] pioneered the application of CL to medical image segmentation and achieved promising results. Moreover, many follow-up studies [12, 13] have proved the effectiveness of CL for medical image segmentation. However, most of the research is based on the segmentation task of binary classification; further research is needed to explore the effectiveness of CL for multi-category medical image segmentation.
Specifically, given an ND image \(\mathbf{X}\), we denote \(\mathbf{X}=(\mathbf{x},\tilde{y})^{n}\), where \(\tilde{y}\) denotes the pixel label and \(n=w\times h\) is the number of pixels in \(\mathbf{X}\). We can obtain the predicted probabilities \(\hat{\mathbf{P}}\) for \(m\) classes from the first-stage noisy network. Assuming that a pixel \(\mathbf{x}\) labeled \(\tilde{y}=i\) has a large enough predicted probability \(\hat{\mathbf{P}}_{j}(\mathbf{x})\geq t_{j}\), there is a possibility that the current annotation for \(\mathbf{x}\) is incorrect and that it actually belongs to the true latent label \(y^{*}=j\) (\(i\in\mathcal{C}_{m}\), \(j\in\mathcal{C}_{m}\), where \(\mathcal{C}_{m}\) denotes the set of \(m\) class labels). Here, we set the average predicted probability \(\hat{\mathbf{P}}_{j}(\mathbf{x})\) of all pixels labeled \(\tilde{y}=j\) as the threshold \(t_{j}\):
\[t_{j}:=\frac{1}{|\mathbf{X}_{\tilde{y}=j}|}\sum_{\mathbf{x}\in\mathbf{X}_{ \tilde{y}=j}}\hat{\mathbf{P}}_{j}(\mathbf{x}). \tag{7}\]
Based on this assumption, we can construct the confusion matrix \(\mathbf{C}_{\tilde{y},y^{*}}\) by counting the number of pixels \(\mathbf{x}\) that are
labeled as \(\tilde{y}=i\) and may actually belong to the true latent label \(y^{*}=j\). The \(\mathbf{C}_{\tilde{y},y^{*}}[i][j]\) represents the count of such pixels for which the observed label is \(\tilde{y}=i\) and the true latent label is \(y^{*}=j\):
\[\mathbf{C}_{\tilde{y},y^{*}}[i][j]:=\left|\hat{\mathbf{X}}_{\tilde{y}=i,y^{*}=j }\right|, \tag{8}\]
where
\[\hat{\mathbf{X}}_{\tilde{y}=i,y^{*}=j}:=\{\mathbf{x}\in\mathbf{X}_{\tilde{y}=i}: \mathbf{\hat{P}}_{j}(\mathbf{x})\geq t_{j}, \tag{9}\] \[j=\operatorname*{arg\,max}_{k\in\mathcal{C}_{m}:\hat{\mathbf{P}}_{k}(\mathbf{x})\geq t_{k}}\hat{\mathbf{P}}_{k}(\mathbf{x})\}.\]
After obtaining the confusion matrix \(\mathbf{C}_{\tilde{y},y^{*}}\) it needs to be normalized. Then, the joint distribution \(\mathbf{Q}_{\tilde{y},y^{*}}\) between the noisy labels and the true labels can be obtained by dividing each element in the confusion matrix by the total number of pixels:
\[\mathbf{Q}_{\tilde{y},y^{*}}[i][j]=\frac{\mathbf{\tilde{C}}_{\tilde{y},y^{*} }[i][j]}{\sum_{i\in\mathcal{C}_{m},j\in\mathcal{C}_{m}}\mathbf{\tilde{C}}_{ \tilde{y},y^{*}}[i][j]}, \tag{10}\]
where
\[\mathbf{\tilde{C}}_{\tilde{y},y^{*}}[i][j]=\frac{\mathbf{C}_{\tilde{y},y^{*}}[i][j]}{\sum_{j\in\mathcal{C}_{m}}\mathbf{C}_{\tilde{y},y^{*}}[i][j]}\cdot\left|\mathbf{X}_{\tilde{y}=i}\right|. \tag{11}\]
In order to identify label noise, we adopt the prune-by-noise-rate (PBNR) strategy, which removes examples with a high probability of being mislabeled: for every off-diagonal entry of \(\mathbf{Q}_{\tilde{y},y^{*}}[i][j]\), it selects \(n\cdot\mathbf{Q}_{\tilde{y},y^{*}}[i][j]\) pixels as mislabeled. Considering that our task is multi-category segmentation, we sort the returned error-label indices by self-confidence (the predicted probability of the given label) for each pixel and select the first \(80\%\) of error labels to form the binary estimated error map \(\mathbf{X}_{err}\), where "1" denotes that this pixel is identified as mislabeled. Such a pixel-level error map \(\mathbf{X}_{err}\) can guide the subsequent label refinement and label trust graph refinement process.
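The following NumPy sketch summarizes Eqs. (7)-(11) together with the PBNR pruning and the 80% self-confidence filtering described above. It is a simplified re-implementation for illustration, not the Cleanlab code actually used, and the exact ordering of pruned pixels may differ from the library's behaviour.

```python
import numpy as np

def estimate_error_map(probs, noisy_labels, keep_ratio=0.8):
    """Sketch of pixel-level CL (Eqs. 7-11) plus PBNR pruning.

    probs        : (n, m) predicted probabilities from the noisy network
    noisy_labels : (n,)   observed (noisy) pixel labels in {0..m-1}
    returns      : (n,)   binary error map X_err (1 = suspected mislabeled pixel)
    """
    n, m = probs.shape
    # Eq. (7): per-class thresholds = mean predicted probability of that class
    thresholds = np.array([probs[noisy_labels == j, j].mean()
                           if np.any(noisy_labels == j) else 1.0 for j in range(m)])
    # Eqs. (8)-(9): confident joint / confusion matrix C
    above = probs >= thresholds                      # candidate latent classes
    latent = np.where(above, probs, -np.inf).argmax(axis=1)
    valid = above.any(axis=1)
    C = np.zeros((m, m))
    np.add.at(C, (noisy_labels[valid], latent[valid]), 1)
    # Eqs. (10)-(11): calibrate rows to observed class counts, then normalize
    counts = np.array([(noisy_labels == i).sum() for i in range(m)])
    C_tilde = C / (C.sum(axis=1, keepdims=True) + 1e-12) * counts[:, None]
    Q = C_tilde / C_tilde.sum()
    # PBNR: for each off-diagonal (i, j), flag n*Q[i][j] pixels labeled i
    # with the lowest self-confidence p(label = i | x)
    self_conf = probs[np.arange(n), noisy_labels]
    err = np.zeros(n, dtype=np.uint8)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            k = int(round(n * Q[i, j]))
            idx = np.where(noisy_labels == i)[0]
            if k == 0 or idx.size == 0:
                continue
            err[idx[np.argsort(self_conf[idx])[:k]]] = 1
    # keep only the `keep_ratio` flagged pixels with the lowest self-confidence
    flagged = np.where(err == 1)[0]
    keep = flagged[np.argsort(self_conf[flagged])[: int(keep_ratio * flagged.size)]]
    out = np.zeros(n, dtype=np.uint8)
    out[keep] = 1
    return out
```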
#### 3.3.3 Label Refinement and Label Trust Graph Refinement
MTCL [13] proposes three different label refinement methods, among which MTCL_Hard has the best performance. We highly trust the accuracy of the estimated error map \(\mathbf{X}_{err}\) and impose the hard refinement on the given noisy labels \(\mathbf{\tilde{Y}}\). The predicted label \(\mathbf{\hat{Y}}=\{\hat{Y}_{i}\}_{i=1}^{n}\) is calculated from the prediction probability \(\mathbf{\hat{P}}\):
\[\hat{Y}_{i}=\operatorname*{arg\,max}_{c}\mathbf{\hat{P}}(c,i). \tag{12}\]
We denote the denoised label as \(\mathbf{\check{Y}}=\{\check{Y}_{i}\}_{i=1}^{n}\) (distinct from the predicted label \(\mathbf{\hat{Y}}\)), which is formulated by:
\[\check{Y}_{i}=\mathbb{I}(\mathbf{X}_{err}^{i}=0)\,\tilde{Y}_{i}+\mathbb{I}(\mathbf{X}_{err}^{i}=1)\,\hat{Y}_{i}. \tag{13}\]
Similar to the noisy label \(\mathbf{\tilde{Y}}\), the label trust graph \(\mathbf{U}\) requires modification, since the previous graph represented the trustworthiness of the unreliable noisy label. We denote \(\mathbf{\hat{U}}=\{\hat{U}_{i}\}_{i=1}^{n}\) as the refined label trust graph, which can be formulated as:
\[\hat{U}_{i}=\mathbb{I}(\mathbf{X}_{err}^{i}=0)U_{i}+\mathbb{I}(\mathbf{X}_{ err}^{i}=1)\delta, \tag{14}\]
where \(\delta\in[0,1]\) is the trust level of the estimated error map \(\mathbf{X}_{err}\), which we set to 1.
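Eqs. (12)-(14) amount to a simple element-wise selection, sketched below with NumPy; `denoised` corresponds to \(\mathbf{\check{Y}}\) above and all names are illustrative.

```python
import numpy as np

def refine_labels(noisy_labels, probs, err_map, trust, delta=1.0):
    """Hard refinement of labels and trust graph (Eqs. 12-14), as a sketch.

    noisy_labels : (n,)   noisy pixel labels (tilde Y)
    probs        : (n, m) predicted probabilities of the noisy network
    err_map      : (n,)   binary estimated error map X_err
    trust        : (n,)   label trust graph U in [0, 1]
    delta        : trust assigned to pixels flagged by X_err
    """
    predicted = probs.argmax(axis=1)                              # Eq. (12)
    denoised = np.where(err_map == 0, noisy_labels, predicted)    # Eq. (13)
    refined_trust = np.where(err_map == 0, trust, delta)          # Eq. (14)
    return denoised, refined_trust
```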
#### 3.3.4 Denoising Network Training
We replace \(\{\mathbf{\tilde{Y}},\mathbf{U}\}\) of ND with \(\{\mathbf{\check{Y}},\mathbf{\hat{U}}\}\) for the purpose of training the denoising network. The experimental parameters applied during the training of the noisy network are retained for the training of the denoising network. The student network obtained in the second stage of training is used as our final denoising network.
### Final Loss Function
The overall loss function of the model in the first stage is consistent with that in the second stage. In general, our total loss is divided into three parts: the supervised loss \(\mathcal{L}_{c}=\mathcal{L}_{c}^{ce}+\mathcal{L}_{c}^{dec}\) on CD, the perturbation-based consistency loss \(\mathcal{L}_{con}\), and the supervised loss \(\mathcal{L}_{n}=\mathcal{L}_{n}^{ce}\cdot\mathbf{U}^{\prime}+\mathcal{L}_{n}^{dec}\) on ND. The total loss is calculated by:
\[\mathcal{L}=\alpha\mathcal{L}_{c}+\beta\mathcal{L}_{n}+\lambda\mathcal{L}_{con}. \tag{15}\]
Here, \(\mathbf{U}^{\prime}\) in \(\mathcal{L}_{n}\) is \(\mathbf{U}\) when training the noisy network and is replaced by \(\mathbf{\hat{U}}\) for the denoising network. Empirically, \(\alpha\) and \(\beta\) are hyper-parameters and we set \(\alpha=1\), \(\beta=1\). The consistency loss \(\mathcal{L}_{con}\) is calculated by the pixel-wise mean squared error (MSE), and \(\lambda\) is a ramp-up trade-off weight commonly scheduled by the time-dependent Gaussian function [39] \(\lambda(t)=w_{max}\cdot e^{-5(1-\frac{t}{t_{max}})^{2}}\), where \(w_{max}\) is the maximum weight, commonly set to 0.1 [40], and \(t_{max}\) is the maximum training iteration. Such a \(\lambda\) schedule avoids the training being dominated by misleading targets at the start of online training.
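A hedged PyTorch sketch of Eq. (15) is given below. It keeps only the cross-entropy terms (the \(\mathcal{L}^{dec}\) components are omitted for brevity), weights the ND cross-entropy pixel-wise by the trust graph, and uses the Gaussian ramp-up for \(\lambda\); the function and tensor names are assumptions for illustration.

```python
import math
import torch
import torch.nn.functional as F

def ramp_up_weight(t, t_max, w_max=0.1):
    # Time-dependent Gaussian ramp-up for the consistency weight lambda(t).
    return w_max * math.exp(-5.0 * (1.0 - t / t_max) ** 2)

def total_loss(student_cd, y_cd, student_nd, y_nd, trust_nd,
               student_nd_pert, teacher_nd_pert, t, t_max,
               alpha=1.0, beta=1.0):
    """Sketch of Eq. (15): L = alpha*L_c + beta*L_n + lambda*L_con."""
    # Supervised CE on clearly-labeled data (CD); logits are (B, C, H, W)
    l_c = F.cross_entropy(student_cd, y_cd)
    # Trust-weighted CE on noisy-labeled data (ND): per-pixel CE scaled by U
    ce_nd = F.cross_entropy(student_nd, y_nd, reduction="none")   # (B, H, W)
    l_n = (ce_nd * trust_nd).mean()
    # Perturbation-based consistency between student and teacher predictions
    l_con = F.mse_loss(torch.softmax(student_nd_pert, dim=1),
                       torch.softmax(teacher_nd_pert, dim=1))
    return alpha * l_c + beta * l_n + ramp_up_weight(t, t_max) * l_con
```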
## 4 Experiments
### Datasets and Experimental Setup
#### 4.1.1 OCT Fluid Segmentation Dataset
The data for our experiments come from the Eye Center at the Second Affiliated Hospital, School of Medicine, Zhejiang University. The dataset consists of OCT images from various patients, taken at different times and with two distinct resolutions of 1476 \(\times\) 560 and 1520 \(\times\) 596. Given the extensive background area in the original images, the fluid regions appear comparatively small. To address this,
we centrally cropped all images and resized them to a resolution of 600 \(\times\) 250. A subset of OCT images rich in the fluid was selected for comprehensive and weak point labeling. The total dataset consists of 1704 OCT images, with 1304 images designated for training and 400 for testing. To ensure the reliability of our results, we meticulously partitioned the dataset such that data from a single patient was used exclusively for either training or testing. A detailed overview of our dataset can be found in Table 2.
#### 4.1.2 Baseline Approaches
Considering the lack of solid research on noisy label learning for OCT fluid segmentation, it is our objective to incorporate
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{3}{c|}{Settings} & \multicolumn{4}{c}{Metrics} \\ \cline{2-10} & CD & ND & Separate? & DSC & D\({}_{SRF}\) & D\({}_{IRF}\) & D\({}_{PED}\) & ASD & 95HD \\ \hline CD-Sup & 10\(\%\) & 0\(\%\) & - & 60.94 & 67.32 & 77.83 & 37.65 & 4.066 & 14.875 \\ CD\&ND-Sup & 10\(\%\) & 90\(\%\) & \(\times\) & 77.53 & 82.84 & 85.90 & 63.86 & 2.775 & 16.488 \\ \hline
2SRnT[29] & 10\(\%\) & 90\(\%\) & \(\times\) & 78.08 & 83.43 & 87.16 & 63.66 & 2.511 & 15.398 \\ Co-teaching [25] & 10\(\%\) & 90\(\%\) & \(\times\) & 80.28 & 86.42 & 85.14 & **69.26** & 2.904 & 17.895 \\ TriNet[27] & 10\(\%\) & 90\(\%\) & \(\times\) & 78.08 & 85.35 & 83.70 & 65.20 & 3.192 & 18.063 \\ \hline MTCL [13] & 10\(\%\) & 90\(\%\) & ✓ & 77.84 & **89.30** & 84.65 & 59.59 & **1.680** & 16.897 \\ Dast[32] & 10\(\%\) & 90\(\%\) & ✓ & 72.11 & 71.40 & 83.90 & 61.06 & 3.184 & 15.934 \\ \hline
2SMTCL & 10\(\%\) & 90\(\%\) & ✓ & **80.45** & 86.86 & **88.26** & 66.23 & 1.970 & **13.187** \\ \hline \hline CD-Sup & 20\(\%\) & 0\(\%\) & - & 65.26 & 73.60 & 81.57 & 40.62 & 3.670 & 16.184 \\ CD\&ND-Sup & 20\(\%\) & 80\(\%\) & \(\times\) & 77.84 & 84.28 & 85.94 & 63.31 & 2.834 & 19.499 \\ \hline
2SRnT[29] & 20\(\%\) & 80\(\%\) & \(\times\) & 78.97 & 84.37 & 85.92 & 66.63 & 1.939 & 17.576 \\ Co-teaching [25] & 20\(\%\) & 80\(\%\) & \(\times\) & 80.07 & 87.79 & 83.51 & 68.91 & 2.923 & 19.241 \\ TriNet[27] & 20\(\%\) & 80\(\%\) & \(\times\) & 80.00 & 88.02 & 83.13 & 68.86 & 2.621 & 16.871 \\ \hline MTCL [13] & 20\(\%\) & 80\(\%\) & ✓ & 78.67 & 83.71 & **87.43** & 64.88 & 2.129 & 14.716 \\ Dast[32] & 20\(\%\) & 80\(\%\) & ✓ & 75.41 & 79.09 & 77.70 & **69.45** & 3.807 & 14.417 \\ \hline
2SMTCL & 20\(\%\) & 80\(\%\) & ✓ & **80.86** & **88.57** & 85.87 & 68.12 & **1.333** & **13.184** \\ \hline \hline CD-Sup & 30\(\%\) & 0\(\%\) & - & 70.16 & 69.06 & 86.40 & 55.02 & 3.5061 & 15.111 \\ CD\&ND-Sup & 30\(\%\) & 70\(\%\) & \(\times\) & 79.32 & 87.08 & 83.75 & 67.13 & 1.772 & 15.710 \\ \hline
2SRnT[29] & 30\(\%\) & 70\(\%\) & \(\times\) & 79.16 & 88.39 & 86.67 & 62.42 & 2.465 & 15.436 \\ Co-teaching [25] & 30\(\%\) & 70\(\%\) & \(\times\) & 80.66 & 86.92 & 86.63 & 68.43 & 2.960 & 15.513 \\ TriNet[27] & 30\(\%\) & 70\(\%\) & \(\times\) & 80.00 & 85.41 & 83.03 & **71.54** & 2.509 & 15.324 \\ \hline MTCL [13] & 30\(\%\) & 70\(\%\) & ✓ & 80.35 & 88.19 & 83.25 & 69.59 & 2.105 & 16.355 \\ Dast[32] & 30\(\%\) & 70\(\%\) & ✓ & 76.83 & 80.05 & 85.01 & 65.43 & 2.114 & **12.308** \\ \hline
2SMTCL & 30\(\%\) & 70\(\%\) & ✓ & **82.87** & **89.46** & **88.78** & 70.37 & **1.696** & 12.896 \\ \hline \hline ND-Sup & 0\(\%\) & 100\(\%\) & - & 73.64 & 76.67 & 85.03 & 59.22 & 3.358 & 20.292 \\ CD-Sup & 100\(\%\) & 0\(\%\) & - & 83.53 & 89.40 & 87.88 & 73.31 & 1.339 & 11.593 \\ \hline \hline \end{tabular}
\end{table}
Table 1: OCT fluid segmentation studies. Comparison of the experimental results of state-of-the-art label-denoising methods under different ratios of CD and ND. The best results are in bold. (Dice unit: \(\%\), ASD and 95HD unit: mm)
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Dataset \(\backslash\) Fluid & PED & SRF & IRF & All Negative \\ \hline Train & 1064 & 691 & 227 & 29 \\ \hline Test & 155 & 147 & 148 & 84 \\ \hline \hline \end{tabular}
\end{table}
Table 2: The number of OCT images in our dataset containing each type of fluid.
an extensive range of baselines to facilitate comprehensive and purposeful comparisons across diverse scenarios. This will enable us to provide insights for future research in this field. The baselines can be categorized as follows:
* Fully supervised baselines: (i) **CD-Sup**: uses only CD to train the backbone (2D U-Net [10]) network; (ii) **ND-Sup**: uses only ND to train the backbone network; (iii) **CD&ND-Sup**: mixes both CD and ND to train the backbone network.
* Mix CD and ND: (i) **2SRnT[29]**: involves two stages for pre-training a network using a combination of different datasets, followed by fine-tuning the labels using confidence estimates to train a second network; (ii) **Co-teaching[25]**: a joint teaching method of the double network; (iii) **TriNet[27]**: a tri-network based noise-tolerant method extended from co-teaching.
* Separate CD and ND: (i) **MTCL[13]**: Mean-Teacher-assisted Confident Learning, which can robustly learn segmentation from limited high-quality labeled data and abundant low-quality labeled data; (ii) **Dast[32]**: a dual-branch network to separately learn from the accurate and noisy annotations.
#### 4.1.3 Implementation and Evaluation Metrics
Our method is implemented in Python with PyTorch, using an NVIDIA GeForce RTX 3090 GPU with 24GB memory. The network is trained using the RMSprop optimizer (weight decay = 1e-8, momentum = 0.9). The learning rate is initialized as 1e-5 and divided by 10 every 2000 iterations. We train for a total of 4000 iterations, by which point the network has converged. The batch size is set to (8, 8) for CD and ND separately, and to 16 when they are not distinguished during input to the network. We uniformly scale the images to 256 \(\times\) 128 and then directly input them into the student network. To conduct a comprehensive evaluation, we utilize three well-known indicators: dice score, average surface distance (ASD), and 95\(\%\) Hausdorff distance (95HD). The dice scores for each category, excluding the background, are presented.
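The stated training configuration can be reproduced with a few lines of PyTorch, sketched below; the backbone stand-in and loop skeleton are placeholders, not the actual 2D U-Net training code.

```python
import torch

# `student` is any 2D U-Net-style segmentation model; a 1x1 conv is a stand-in here.
student = torch.nn.Conv2d(1, 4, kernel_size=1)

optimizer = torch.optim.RMSprop(student.parameters(),
                                lr=1e-5, weight_decay=1e-8, momentum=0.9)
# Divide the learning rate by 10 every 2000 iterations, for 4000 iterations total.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=2000, gamma=0.1)

for iteration in range(4000):
    # ... forward pass, loss computation (Eq. 15), loss.backward() ...
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```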
### Experiments on OCT Fluid Segmentation
Table 1 presents the comparison results under 10\(\%\), 20\(\%\), and 30\(\%\) CD settings. Firstly, in the typical supervised settings of CD-Sup and CD&ND-Sup, the network performs poorly and can benefit from additional ND, although their labels contain noise. We hypothesize two possibilities: (i) The partially noisy labels generated by PNS are highly accurate and can provide reliable guidance for the network. This is also supported by the fact that ND-Sup alone reaches 73.64%. (ii) Even with only 30\(\%\) of the CD data, the network may still be under-fitting and potentially learn valuable features from the ND. When turning to the Mix CD and ND setting, the three baseline methods, 2SRnT, Co-teaching, and TriNet, have shown effective performance in mitigating the negative effects caused by noisy labels. In contrast, under the Separate CD and ND settings, MTCL has demonstrated a steady improvement ranging from 10\(\%\) to 30\(\%\). Although Dast has also shown improvement, it still falls short of the baseline performance (CD&ND-Sup). One possible explanation for this discrepancy is domain crossing, since Dast was originally designed to perform COVID-19 pneumonia lesion segmentation. Under the 10\(\%\) - 20\(\%\) CD setting, Co-teaching and MTCL achieved highly competitive results, but 2SMTCL still outperforms them in most metrics. When we increased the proportion of CD to 30\(\%\), our method substantially surpassed other label-denoising methods. Overall, in the OCT fluid segmentation task, our method achieves satisfactory results, suggesting that the label trust graph and estimated error map can accurately characterize the location of label noise, enabling the network to fully exploit these informative denoised labels. Fig. 5 presents the results of 2SMTCL and other approaches under the 30\(\%\) CD setting and 70\(\%\) ND setting. It is evident that the mask predicted by our method is closer to the ground truth, further demonstrating the effectiveness of 2SMTCL.
Figure 4: Examples of retinas in OCT images with manual annotations. (a) The original OCT images with fluids; (b) The noisy labels generated by PNS; (c) The denoised labels; (d) The full mask annotations.
### Analytical Ablation Study
To verify the effectiveness of each component, we propose different variants to perform ablation studies. Table 3, Table 4, and Table 5 show our ablation experiments. All ablation experiments are performed on the denoising network, with CD accounting for 30\(\%\) and ND for 70\(\%\).
Figure 5: Visualized segmentation results of different methods under the 30\(\%\) CD setting and 70\(\%\) ND setting. From top to bottom: Image, Co-teaching, TriNet, 2SRnT, MTCL, Dast, 2SMTCL, GT.
Table 3 demonstrates the impact of perturbation-based consistency learning and label trust graphs on the performance of the denoising network, where \(\delta\) refers to the setting of the label trust graphs and MT denotes the Mean-Teacher architecture. Without the MT architecture, the model's average dice score decreased by \(2.5\%\), performing worse than the noisy network. This shows that the network trained on the MT architecture can effectively utilize the pure image information of ND to improve performance. Furthermore, Table 3 indicates that the label trust information contained in the label trust graph can help the network avoid over-fitting noise. The average dice score of the final model was even less than \(80\%\) when trained without the label trust graph. In the refinement stage, trusting the estimated error maps (set to 1) can improve the performance of the network compared to discarding the estimated error maps (set to 0) or leaving them unchanged. These experimental results show that the components (perturbation-based consistency learning and label trust graphs) of 2SMTCL can effectively mitigate the negative effects of noisy labels on the network for OCT fluid segmentation.
Table 4 displays the impact of different hyper-parameters \(\alpha\) and \(\beta\) of the loss function (Equation 15) on the denoising network. As we already have the label trust graph to constrain the loss of ND, we chose to set \(\alpha=1\) and \(\beta=1\), which performed optimally in terms of most metrics. 2SMTCL with appropriate hyper-parameters achieved superior results and proved effective in denoising labels for deep learning-based OCT fluid segmentation.
As the bulk of the training data consists of noisy labels generated by point annotations, conducting ablation studies on the PNS method to yield the best noisy-labeled data is critical. We set the superpixel block size to 13 in our experiment, meaning each superpixel block encompasses approximately 169 pixels (13 \(\times\) 13 on average). A pivotal factor to consider is the setting of the similarity threshold. If the threshold for creating noisy labels is excessively high, the labels may convey insufficient information, leading to network under-fitting. Conversely, if the threshold is too low, the noisy labels could introduce an overwhelming amount of noise, adversely affecting network performance. Therefore, careful selection of an appropriate threshold for generating noisy labels is essential to strike a balance between the amount of information and noise in the labels for optimum network performance. Compared to SRF and IRF, we noted that visually distinguishing PED fluids can be more challenging. Therefore, we set a smaller similarity threshold for PED when determining the threshold. Table 5 presents the final impact of our generated noisy labels for network training under different similarity thresholds. We selected \(SI_{t}=0.6,P_{t}=0.5\) due to its superior performance across most metrics.
## 5 Discussion
### Visualization of Label Denoising
The PNS method is heavily dependent on the similarity among superpixel blocks, leading to the generation of noisy labels with noticeable gaps. These gaps can substantially impact the training process of the model. To rectify this, we utilized the label denoising module to mend the noisy labels with the assistance of reliable guidance. The denoised labels generated by our proposed method are more closely aligned with the ground truths compared to the original noisy labels. Our module proficiently fills in the gaps and refines the edges of the labels, resulting in notable improvements. The superior effectiveness of our label-denoising process is highlighted both in the final performance of the model and in visual depictions of the label-denoising process. As evidenced by the results in Fig. 4, our label denoising module exhibits impressive effectiveness.
### Future Works
In this research, we introduce a strategy that leverages point annotations to generate noisy labels, thereby decreasing
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Settings} & \multicolumn{6}{c}{Metrics} \\ \cline{2-9} & \(\alpha\) & \(\beta\) & DSC & D\({}_{SRF}\) & D\({}_{IRF}\) & D\({}_{PED}\) & ASD & 95HD \\ \hline \multirow{4}{*}{2SMTCL} & 1 & 1 & **82.87** & **89.46** & **88.78** & 70.37 & 1.696 & **12.896** \\ \cline{2-9} & 1 & 0.3 & 81.57 & 86.10 & 85.59 & **72.72** & 1.602 & 13.165 \\ \cline{1-1} \cline{2-9} & 1 & 0.5 & 80.99 & 87.60 & 83.79 & 71.57 & **1.469** & 13.250 \\ \hline \end{tabular}
\end{table}
Table 4: Ablation study of different loss weights \(\beta\) for ND. The best results are in bold. (Dice unit: \(\%\), ASD and 95HD unit: mm)
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c|}{Settings} & \multicolumn{6}{c}{Metrics} \\ \cline{2-9} & \(SI_{t}\) & \(P_{t}\) & DSC & D\({}_{SRF}\) & D\({}_{IRF}\) & D\({}_{PED}\) & ASD & 95HD \\ \hline \multirow{4}{*}{2SMTCL} & 0.6 & 0.5 & **82.87** & 89.46 & **88.78** & **70.37** & 1.696 & **12.896** \\ \cline{2-9} & 0.7 & 0.6 & 79.35 & 83.51 & 86.54 & 69.19 & **1.549** & 15.612 \\ \cline{1-1} \cline{2-9} & 0.5 & 0.4 & 81.52 & **90.87** & 83.55 & 70.16 & 1.58 & 13.526 \\ \hline \end{tabular}
\end{table}
Table 5: Ablation study of different similarity thresholds \(t\) for PED, IRF, and SRF when generating noisy labels by PNS. The best results are in bold. (Dice unit: \(\%\), ASD and 95HD unit: mm)
the reliance on pixel-level annotations for training segmentation models. Our approach is capable of robustly learning OCT fluid segmentation from a limited volume of fully annotated data and a substantial amount of weakly annotated data. Although our methodology has demonstrated encouraging results, it currently still relies on a modest amount of fully annotated data. As a future direction, we will try to devise a training framework that is solely dependent on weakly supervised annotations, which would further lessen the model's requirement for high-quality annotations.
## 6 Conclusion
In this study, we explored the efficacy of employing noisy label learning techniques for OCT fluid segmentation. Initially, we introduced a superpixel-guided method for generating noisy labels from weak point annotations, termed Point to Noisy by Superpixel (PNS). This technique restricts the network from over-fitting to noise by assigning low confidence to pixels with potentially noisy labels. Subsequently, we developed a Two-Stage Mean-Teacher-assisted Confident Learning (2SMTCL) method, designed for multi-category OCT fluid segmentation. This method is capable of segmenting OCT fluid utilizing limited clearly-labeled data and a significant quantity of noisy-labeled data. To substantiate the robustness and efficiency of our approach, we compiled an OCT fluid segmentation dataset. The empirical results displayed that our technique surpassed other label-denoising methods, delivering superior segmentation performance and demonstrating notable effectiveness in label denoising. Our study provides an efficient, accurate, and practical solution for fluid segmentation of OCT images, which is expected to have a positive impact on the diagnosis and treatment of patients in the field of ophthalmology.
|
2308.14894 | Multiscale Contextual Learning for Speech Emotion Recognition in
Emergency Call Center Conversations | Emotion recognition in conversations is essential for ensuring advanced
human-machine interactions. However, creating robust and accurate emotion
recognition systems in real life is challenging, mainly due to the scarcity of
emotion datasets collected in the wild and the inability to take into account
the dialogue context. The CEMO dataset, composed of conversations between
agents and patients during emergency calls to a French call center, fills this
gap. The nature of these interactions highlights the role of the emotional flow
of the conversation in predicting patient emotions, as context can often make a
difference in understanding actual feelings. This paper presents a multi-scale
conversational context learning approach for speech emotion recognition, which
takes advantage of this hypothesis. We investigated this approach on both
speech transcriptions and acoustic segments. Experimentally, our method uses
the previous or next information of the targeted segment. In the text domain,
we tested the context window using a wide range of tokens (from 10 to 100) and
at the speech turns level, considering inputs from both the same and opposing
speakers. According to our tests, the context derived from previous tokens has
a more significant influence on accurate prediction than the following tokens.
Furthermore, taking the last speech turn of the same speaker in the
conversation seems useful. In the acoustic domain, we conducted an in-depth
analysis of the impact of the surrounding emotions on the prediction. While
multi-scale conversational context learning using Transformers can enhance
performance in the textual modality for emergency call recordings,
incorporating acoustic context is more challenging. | Théo Deschamps-Berger, Lori Lamel, Laurence Devillers | 2023-08-28T20:31:45Z | http://arxiv.org/abs/2308.14894v1 | Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations
###### Abstract.
Emotion recognition in conversations is essential for ensuring advanced human-machine interactions. However, creating robust and accurate emotion recognition systems in real life is challenging, mainly due to the scarcity of emotion datasets collected in the wild and the inability to take into account the dialogue context. The CEMO dataset, composed of conversations between agents and patients during emergency calls to a French call center, fills this gap. The nature of these interactions highlights the role of the emotional flow of the conversation in predicting patient emotions, as context can often make a difference in understanding actual feelings. This paper presents a multi-scale conversational context learning approach for speech emotion recognition, which takes advantage of this hypothesis. We investigated this approach on both speech transcriptions and acoustic segments. Experimentally, our method uses the previous or next information of the targeted segment. In the text domain, we tested the context window using a wide range of tokens (from 10 to 100) and at the speech turns level, considering inputs from both the same and opposing speakers. According to our tests, the context derived from previous tokens has a more significant influence on accurate prediction than the following tokens. Furthermore, taking the last speech turn of the same speaker in the conversation seems useful. In the acoustic domain, we conducted an in-depth analysis of the impact of the surrounding emotions on the prediction. While multi-scale conversational context learning using Transformers can enhance performance in the textual modality for emergency call recordings, incorporating acoustic context is more challenging.
Speech emotion recognition, Multiscale contextual learning, Emotion Recognition in Conversation, Transformers, Emergency call center
## 1. Introduction and recent work
In recent years novel methods and techniques have been applied to speech-based downstream applications with a focus on the potential benefits of incorporating conversational information into such systems. This contextual information is usually derived from previous and subsequent utterances in the form of speech transcriptions or acoustic contexts.
An early significant approach by (Kang et al., 2017) utilized a bidirectional LSTM to assimilate context without distinguishing speakers. Extending this methodology, (Kang et al., 2017) incorporated a GRU structure within their ICON model to identify speaker relationships. Later, (Kang et al., 2017) converted conversations into a graph, employing a graph convolutional neural network for emotion classification. This work was further developed by (almost) the same team, who integrated common-sense knowledge to understand interlocutors' interactions (Kang et al., 2017).
Recent work by (Kang et al., 2017) has used new neural network structures for context understanding. An extension of this approach was proposed in (Kang et al., 2017) which introduced DialogueCRN to fully capture conversational context from a cognitive point of view. These papers illustrate the ongoing evolution of the field.
Ongoing research about conversational context in speech tasks has paralleled the rise of self-supervised pre-training models, which
are now popular for handling downstream tasks. These models have shown strong results across various speech task benchmarks, as highlighted in (Kang et al., 2017). Our paper proposes context-aware fine-tuning, which utilizes surrounding speech segments during fine-tuning to improve performance on downstream speech tasks and enriches Transformer embeddings through the integration of an auxiliary context module, as illustrated by (Kang et al., 2017) and by (Kang et al., 2018) with their emotion-aware Transformer Emoformer.
In the field of Speech Emotion Recognition, advances with Transformer models in deep learning have reached state-of-the-art performance on acted speech (Kang et al., 2019) and on widely-known open-source research databases like (Beng et al., 2019). Upon appropriate fine-tuning, Transformers are able to learn efficient representations of the inputs.
However, recognizing spontaneous emotions remains a challenge. Remarkably, Transformer encoder models have shown significant gains over classical approaches on spontaneous emotion recordings (Beng et al., 2019). Through a specific integration of multimodal fusion mechanisms, these models are highly capable of gathering efficient emotional cues across modalities (Beng et al., 2019). This paper leverages the French CEMO corpus, which consists of real-life conversational data collected in an emergency call center (Beng et al., 2019). This corpus provides an excellent opportunity to tackle the challenge of integrating conversational context in a realistic emergency setting.
Despite the effectiveness of Transformer models, their standard self-attention mechanism's quadratic complexity limits their application to relatively small windows (Beng et al., 2019). Cutting-edge research has focused on optimizing attention mechanisms toward lower complexity, as with FlashAttention (Beng et al., 2019); addressing this limitation paves the way for future models to be trained from scratch on huge datasets with wider context.
In this work we propose a multi-scale hierarchical training system adapted to pre-trained standard attention models made available by the French community. The proposed approach draws inspiration from recent work by (Kang et al., 2019). We evaluate the impact of different types of contextual information at the acoustic level and on manual speech transcriptions. Integrating the acoustic and linguistic context of dialogue into an emotion detection system remains a challenge, but this work aims to contribute to these ongoing efforts and to explain the impact and limitations of such systems.
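As a concrete illustration of how such textual context can be assembled, the sketch below prepends the previous (and optionally following) tokens, and optionally the last turn of the same speaker, to the target segment before classification; the choice of `camembert-base` and all function names are assumptions for illustration, not necessarily the models or code used in this work.

```python
from transformers import AutoTokenizer

# Any French pre-trained encoder could be plugged in here; "camembert-base" is
# only an assumption for illustration.
tokenizer = AutoTokenizer.from_pretrained("camembert-base")

def build_text_input(turns, target_idx, n_prev_tokens=50, n_next_tokens=0,
                     same_speaker_turn=False):
    """Build the text fed to the emotion classifier for one target segment.

    turns             : list of dicts like {"speaker": "caller", "text": "..."}
    target_idx        : index of the segment to classify
    n_prev_tokens     : preceding context tokens to keep (10-100 in our experiments)
    n_next_tokens     : following context tokens to keep
    same_speaker_turn : also prepend the last turn of the same speaker
    """
    target = turns[target_idx]
    prev_text = " ".join(t["text"] for t in turns[:target_idx])
    next_text = " ".join(t["text"] for t in turns[target_idx + 1:])

    prev_ids = tokenizer(prev_text, add_special_tokens=False)["input_ids"]
    next_ids = tokenizer(next_text, add_special_tokens=False)["input_ids"]
    prev_ids = prev_ids[-n_prev_tokens:] if n_prev_tokens > 0 else []
    next_ids = next_ids[:n_next_tokens]

    context_prev = tokenizer.decode(prev_ids)
    context_next = tokenizer.decode(next_ids)

    if same_speaker_turn:
        last_same = next((t["text"] for t in reversed(turns[:target_idx])
                          if t["speaker"] == target["speaker"]), "")
        context_prev = (last_same + " " + context_prev).strip()

    # Separator token marks the boundary between context and the target segment.
    sep = tokenizer.sep_token
    return f"{context_prev} {sep} {target['text']} {sep} {context_next}".strip()
```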
## 2. Conversational Corpus: Cemo
The emergency call center corpus presents a unique opportunity to examine real-world emotional expression. This rich 20+ hour dataset captures naturalistic interactions between callers in crisis and operators. As described by (Beng et al., 2019; Kang et al., 2017), it contains emotional annotations across situations ranging from medical emergencies to psychiatric distress. Segments were coded for major and minor emotions with fine-grained labels from 7 macro-classes.
The caller can be either the patient or a third party (family, friend, colleague, neighbor, stranger). The wide range of caller types (age, gender, origin), accents (regional, foreign), different vocal qualities (alterations due to alcohol/medication, a cold, etc.) also makes it an extremely diverse corpus. As shown in Table 1, the caller and Agent emotional profiles differ. Callers expressed intense emotions like fear, anger, and sadness, given their crisis state. In contrast, agents maintained a regulated presence, with more positive and neutral states, reflecting their professional role.
Inter-rater reliability highlights differences between callers and agents. Agreement on emotions was higher for callers than agents (Kappa 0.54 vs 0.35). This suggests agents regulate emotions, producing subtle expressions that are challenging to consistently code. Refining annotation schemes could better capture the complexity of agents' emotional states.
Data preparation is key for performance and robustness. As detailed in Table 2, a balanced CEMO subset (2h40) of 4224 segments was selected for training/validation/testing. The 4 classes were equally distributed with 1056 samples each. Fear and Neutral were subsampled, prioritizing speaker diversity. Anger was complemented with agent segments of annoyance/impatience, resulting in a class with less speaker diversity and possible bias. Positive had the most speakers and dialogues, suggesting heterogeneity. Manual transcriptions were performed with guidelines similar to the Amities project (Kang et al., 2017).
The transcriptions contain about 2499 nonspeech markers, primarily pauses, breath, and other mouth noises. The vocabulary size is 2.6k, with a mean and median of about 10 words per segment (min 1, max 47).
Figure 2 represents the transition probabilities between the emotion expressed in the previous speech turn and the target segment. The diagram illustrates the likelihood of moving from each prior
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline
**Caller** & **Segments** & **Speakers** & **Agent** & **Segments** & **Speakers** \\ \hline Total & 17679 & 870 & Total & 16523 & 7 \\ FEA & 7397 & 825 & NEU & 10059 & 7 \\ NEU & 7329 & 822 & POS & 4310 & 7 \\ POS & 1187 & 566 & ANG & 1213 & 6 \\ ANG & 417 & 146 & FEA & 437 & 7 \\ HUR & 261 & 67 & FEA/POS & 122 & 4 \\ SUR & 144 & 118 & ANG/POS & 65 & 4 \\ FEA/POS & 130 & 103 & ANG/FEA & 57 & 3 \\ FEA/SAD & 128 & 71 & POS/SUR & 24 & 4 \\ FEA/HUR & 116 & 55 & FEA/SUR & 16 & 4 \\ OTHER & 294 & 171 & OTHER & 52 & 3 \\ \hline \hline \end{tabular}
\end{table}
Table 1. The 10 most represented emotions and mixtures of emotions by caller and agent. FEA: Fear, NEU: Neutral, POS: Positive, ANG: Anger, SAD: Sadness, HUR: Hurt, SUR: Surprise OTHER: Sum of remaining classes
\begin{table}
\begin{tabular}{l r r r r r} \hline \hline
**CEMO\({}_{\text{s}}\)** & **ANG** & **FEA** & **NEU** & **POS** & **Total** \\ \hline \#Speech seg. & 1056 & 1056 & 1056 & 1056 & 4224 \\ \#Speakers & 149 & 537 & 450 & 544 & 812 \\ \#Dialogues & 280 & 504 & 425 & 516 & 735 \\ Total duration (mn) & 39 & 52 & 49 & 20 & 160 \\ Duration mean (s) & 2.2 & 2.9 & 2.8 & 1.1 & 2.3 \\ Vocabulary size & 1146 & 1500 & 1150 & 505 & 2600 \\ Avg. word count & 9.3 & 11.9 & 7.9 & 3.8 & 8.2 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Details of the CEMO subset of speech signals and manual transcripts. ANG: Anger, FEA: Fear, NEU: Neutral, POS: Positive, Total: Total number of segments.
emotion category (rows) to each target emotion (columns). Anger persists across turns at a 68% probability. Asymmetry exists between Anger and Fear, with Fear more often following Anger. Surprise is surprisingly followed by Anger, without any wordplay intended.
Figure 3 displays a histogram which illustrates the distribution of gap duration between the context and the target segment. This excludes contiguous segments, corresponding to 3040 Previous-to-target and 2895 Target-to-next segments. For non-contiguous segments, there are 1174 and 1152 respectively for Previous-to-target and Target-to-next segments. Notably, there are only 10 segments that lack any preceding context, and 177 segments that do not have any following context.
## 3. Methodology
Our approach aims at recognizing emotions from speech. The systems presented in this article are based on the incorporation of conversational context via pre-trained Transformer attention mechanisms. We have divided this section into two main parts, devoted to the single modalities (acoustic and textual). Our aim is to better understand the impact of context in these systems.
First, we tackled the textual modality, i.e., manual transcriptions of dialogues, incorporating the context in a "blind" way as a defined number of conversational elements (called tokens in pre-trained models). Then, we modified the scale of the contextual window as a function of speech turns, and conducted experiments on specific conversational segments.
In a second phase, we focused on the acoustic modality, where we exploited the context of speech turns that had been supported by the textual approach. We then extended this to hierarchical training, on the assumption that low-level cues for emotion prediction would be learned by the model during initial context-free training, and that incorporating conversational context in a second phase would enable higher-level information to be learned.
Our methodology is based on the application of specific Transformer encoder models: FlauBERT large (Han et al., 2017) and wav2vec2.0 large (Beng et al., 2017). These models use self-supervised learning to create meaningful abstractions from text and raw audio data. Prior research (Han et al., 2017) showed the successful adaptation of pre-trained models to detect discrete speech emotion labels from the CEMO corpus (Beng et al., 2017). From the available models, we chose to use the leBenchmark model (Wav2Vec2-FR-3K) (Deng et al., 2017), trained on 3,000 hours of French language data. This decision was guided by the model's performance on the CEMO corpus (Han et al., 2017).
The training database of the wav2vec2-FR-3K model comprises spontaneous dialogues recorded by telephone, some with emotional content, thus mirroring the characteristics of the CEMO corpus. The multi-head attention layers were fine-tuned for speech emotion recognition using the CEMO corpus. This was done under the assumption that the initial layers of the model (convolutional layers and embedding) are robust for this task (Han et al., 2017; Wang et al., 2018).
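A minimal sketch of such an adaptation is shown below; the LeBenchmark checkpoint name, the mean pooling, and the 4-class linear head are assumptions, and the sketch only illustrates freezing the convolutional feature extractor while the Transformer layers remain trainable. It is not the exact training code used here.

```python
import torch.nn as nn
from transformers import Wav2Vec2Model

class SpeechEmotionClassifier(nn.Module):
    def __init__(self, checkpoint="LeBenchmark/wav2vec2-FR-3K-large", n_classes=4):
        super().__init__()
        self.encoder = Wav2Vec2Model.from_pretrained(checkpoint)   # checkpoint name is assumed
        # Freeze the convolutional feature extractor; only the Transformer layers adapt.
        for p in self.encoder.feature_extractor.parameters():
            p.requires_grad = False
        self.head = nn.Linear(self.encoder.config.hidden_size, n_classes)

    def forward(self, input_values):                               # raw waveform (batch, samples)
        hidden = self.encoder(input_values).last_hidden_state      # (batch, frames, hidden)
        return self.head(hidden.mean(dim=1))                       # mean pooling as a simple stand-in
```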
## 4. Contextual Exploration of Textual Modality
In this research, we propose a fine-tuned system for detecting emotions on the CEMO dataset by incorporating semantic information from the preceding or following speech. During training, the context is concatenated with the speech inputs before being fed into a Transformer. The proposed system relies on the pre-trained multi-head attention layers of the FlauBERT model (Han et al., 2017) to learn the relationships between the latent states of the current segment and its context.
The multi-head attention mechanism allows the model to learn which parts of the segment to be predicted are relevant within its conversational context. To emphasize this weighting, we mask the embeddings yielded by the Transformer that correspond to the context. The remaining embeddings are fed into an attention pooling layer and classified into discrete emotions.
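The sketch below illustrates this masking-plus-pooling idea; the FlauBERT checkpoint name is assumed, special tokens are omitted for brevity, and the attention-pooling formulation is one possible choice rather than the exact implementation.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("flaubert/flaubert_large_cased")  # assumed checkpoint
encoder = AutoModel.from_pretrained("flaubert/flaubert_large_cased")

class AttentionPooling(nn.Module):
    def __init__(self, hidden):
        super().__init__()
        self.score = nn.Linear(hidden, 1)

    def forward(self, states, keep_mask):                  # states: (B, T, H), keep_mask: (B, T)
        scores = self.score(states).squeeze(-1)
        scores = scores.masked_fill(keep_mask == 0, float("-inf"))  # drop the context positions
        weights = torch.softmax(scores, dim=-1).unsqueeze(-1)
        return (weights * states).sum(dim=1)               # pooled target representation (B, H)

def encode_with_context(context, target):
    ctx = tokenizer(context, add_special_tokens=False)["input_ids"]
    tgt = tokenizer(target, add_special_tokens=False)["input_ids"]
    ids = torch.tensor([ctx + tgt])                        # context and target encoded jointly
    states = encoder(input_ids=ids).last_hidden_state
    keep = torch.zeros_like(ids)
    keep[:, len(ctx):] = 1                                 # keep only the target's embeddings
    return states, keep

states, keep = encode_with_context("oui bonjour j'appelle pour mon pere", "il ne respire plus")
logits = nn.Linear(states.size(-1), 4)(AttentionPooling(states.size(-1))(states, keep))  # 4 classes
```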
We first focused on a "blind" semantic approach where the context was selected by the number of tokens. The average duration of one token in the CEMO dataset is about 0.2s, i.e., an average of 5 tokens per second. We performed experiments with context windows of 0 to 100 tokens. The results are displayed in Figure 4, which shows the UA scores obtained in the prediction of the four discrete emotions. Two regression lines pass through the origin, which corresponds to the baseline experiment without context.
Figure 3. Histogram of gap duration between context segment and the segment to predict (segments with a gap of zero are excluded)
Figure 2. Transition between the previous and current emotion segments, we only show previous emotions which have at least 30 segments.
The positive impact of context is unevenly distributed between the anterior and posterior conversational contexts. The previous tokens are more useful for enriching the embeddings of the segment to be predicted. Limits to the interpretability of this approach arise from the semantic perspective, since we cannot be certain whether the token window starts in the middle of a sentence or of a speech turn.
To address this hypothesis, we conducted experiments at the speech turn level, using the previous or next segment of speech. We also extended the experiments to speaker type, which could have an impact on how the context is learned by the Transformer.
Table 3 details the different configurations we used. From the results, it seems that incorporating context from the same speaker outperforms the opposite-speaker approach, suggesting that the emotion of a sentence may be more influenced by the speaker's previous sentences than by the other speaker's. This makes intuitive sense, as people's emotions tend to be consistent within a short time frame and are likely to be less influenced by the immediate response from others. The Anger and Fear classes fluctuate the most with context, which may indicate that these emotional states are more complex or nuanced, and may be more influenced by context and speaker.
Contextual experiments at the speech-turn scale produced better or similar results to those obtained at the token scale (see Fig. 4 and Table 3). Even a large token window of up to 100 tokens (sub-words for FlauBERT), equivalent to around 20 seconds of speech, fails to achieve the best scores, regardless of whether the tokens are taken before or after the segment to be predicted.
In comparison, the average context speech-turn segment lasts 1 second; thus, the correct positioning and semantic content of the text is one of the keys to Speech Emotion Recognition performance.
## 5. Contextual Exploration of Acoustic Modality
Our approach to predicting emotions from acoustics is similar to that used for the text modality: we concatenate raw audio as input to the acoustic Transformer and mask the context-specific embeddings produced by the Transformer. At this stage, the wav2vec2 model applies a multi-head attention mechanism on both the surrounding segments and the target segment.
This mechanism allows the model to focus on different features in the segment and its surrounding context, potentially improving the emotional relevance of the embeddings produced.
To adjust the wav2vec2-FR-3K model to our needs, we added an attention pooling layer and a classifier. One drawback of this approach is the higher computational cost of the Transformer acoustic model compared to the textual one. Due to the specifications of our computational clusters, we are limited to a maximum length of around 6.5 seconds for the large wav2vec2 model.
Following the results obtained with speech-turn context in the text modality, we incorporate the context from the previous or next turn of the target segment. Furthermore, we implemented a novel way to enrich the wav2vec2 embeddings through a dedicated auxiliary context module influenced by (Kang et al., 2019). The auxiliary module, detailed in Fig. 5, gathers the embeddings from the surrounding segments into a context attention pooling layer. This pooling, together with a fully connected network, generates a context vector that provides a compact, informative representation of the surrounding context.
Figure 4. Prediction Accuracy vs. Context Token Count: Tokens number represent anterior/posterior tokens to the target segment. Accuracy is Unweighted Average (UA), in %
Figure 5. Illustration of CCFTE Concatenation of Context Features with Target Embeddings, figure from (Kang et al., 2019)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline
**Model** & **Context** & **from** & **ANG** & **FEA** & **NEU** & **POS** & **Total** \\ \hline \multirow{6}{*}{FlauBERT} & Previous & same speaker & 66.0 & 64.5 & 70.6 & 85.7 & **71.7** \\ & Next & same speaker & 70.4 & 59.7 & 72.5 & 83.6 & **71.5** \\ & Next & opposite speaker & 67.9 & 62.3 & 72.4 & 82.7 & **71.3** \\ & Previous & all speakers & 66.3 & 61.2 & 72.4 & 84.8 & **71.2** \\ & Next & all speakers & 64.1 & 66.3 & 68.6 & 85.2 & **71.0** \\ & Previous & opposite speaker & 59.4 & 66.1 & 71.0 & 84.3 & **70.2** \\ \hline FlauBERT & Without & - & 61.1 & 66.0 & 68.2 & 85.1 & **70.1** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Comparison of Textual Models (on manual transcriptions) using FlauBERT Embeddings with and without Contextual Information, sorted by UA: % Contextual information: 1st column: Previous or Next segments, 2nd column: same speaker, opposite speaker or all speakers
\[C_{i}=\text{FullyConnected}(\text{AttentionPooling}(E_{i},S_{i})) \tag{1}\]
In the equation above, \(C_{i}\) is the context vector for the \(i\)-th segment, \(E_{i}\) signifies the embeddings of the target segment, and \(S_{i}\) is the input segment. The context vector is then concatenated with each of the embeddings of the target segment, effectively feeding the contextual information into the final classifier prediction.
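A possible PyTorch reading of Eq. (1) and of the subsequent concatenation (CCFTE) is sketched below; the layer sizes, the pooling formulation, and which embeddings feed the pooling are assumptions rather than a reproduction of the implementation.

```python
import torch
import torch.nn as nn

class ContextVector(nn.Module):
    """Attention-pool the context embeddings, then project them, in the spirit of Eq. (1)."""
    def __init__(self, hidden):
        super().__init__()
        self.score = nn.Linear(hidden, 1)
        self.fc = nn.Linear(hidden, hidden)

    def forward(self, context_states):                        # (B, T_ctx, H)
        w = torch.softmax(self.score(context_states).squeeze(-1), dim=-1)
        pooled = (w.unsqueeze(-1) * context_states).sum(dim=1)
        return self.fc(pooled)                                 # context vector C_i, shape (B, H)

def ccfte(target_states, context_vector):
    """Concatenate the context vector with every target-frame embedding."""
    expanded = context_vector.unsqueeze(1).expand(-1, target_states.size(1), -1)
    return torch.cat([target_states, expanded], dim=-1)        # (B, T_tgt, 2H)
```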
Table 4 presents results evaluating the incorporation of contextual acoustic information to enhance the emotion recognition performance of wav2vec2 embeddings. Across conditions, two proposed context integration methods were examined - masking the context embeddings (MWCE) and concatenating context features with target embeddings (CCFTE) - using either previous or next utterances as context.
Notably, the baseline wav2vec2 model with no context elicited the highest total unweighted accuracy (UA) of 75.6%, exceeding all context-enhanced models. This suggests intrinsic limitations of the concatenation-based context integration approaches assessed. Combining MWCE and CCFTE with prior context came closest, at 75.4% UA, while next-turn context yielded negligible gains, indicating that contextual benefits may be asymmetric.
Despite the disappointing results of our preliminary experiments using acoustic models trained on isolated utterances, we continued to explore this approach, building on the prior textual results. We sought a way to leverage the meaningful contextual signals that may be present in adjacent turns. We shifted to a hierarchical training framework: first, acoustic models were trained on the target segments as isolated utterances without conversational context; subsequently, we fine-tuned the model on the surrounding conversation segments, thereby learning higher-level, context-dependent emotional cues. Simultaneously, we trained a parallel model from the same baseline checkpoint to serve as a comparison, ensuring that our fine-tuning process contributes positively to the emotion prediction task.
The obtained results, detailed in Table 5, demonstrate the limited gains achieved through hierarchical fine-tuning with concatenated context. Critically, all context-enhanced models fail to improve over the baseline wav2vec2 model at 76.2% UA. This implies significant shortcomings in the concatenation-based context integration paradigm.
Although small improvements are achieved using the previous context with MWCE+CCFTE, the global hierarchical learning methodology provides insignificant improvements to acoustic modeling. These results reveal shortcomings compared to text-based modeling approaches.
In particular, the minimal gains from concatenating context features (CCFTE) reveal that this technique inadequately incorporates conversational patterns. The embedding masking (MWCE) is somewhat more beneficial, but the context integration remains insufficient.
We furthermore tried other experiments, based on MFCC cues of the surrounding segments, which did not yield better results.
## 6. Analysis of Prediction Accuracy Based on the Previous Segment's Emotion
Figure 6 illustrates, for the 4 target emotions, the prediction accuracy as a function of the previous segment's emotion label. To compare the results obtained with conversational context, we took the same configuration, with context taken from previous segments (regardless of speaker), for two different sets of predictions: speech transcriptions (T) and acoustic (A). The models' performances are 71.2% (T) and 76.2% (A) UA, respectively (see Tables 3 and 4).
Across both experiments, the Positive emotion and Neutral state segments seem to be predicted most accurately when the previous emotion is also Positive, with results from 86.7% to 88.6% UA for both the acoustic and transcription models. The best results for Fear are obtained
\begin{table}
\begin{tabular}{l l l l l l l l l l} \hline \hline
**Model** & **Strategy** & **Context** & **MWCE** & **CCFTE** & **ANG** & **FEA** & **NEU** & **POS** & **Total** \\ \hline wav2vec2 & - & Without & - & - & 73.0 & 70.2 & 72.2 & 80.1 & 75.6 \\ \hline \multirow{3}{*}{ wav2vec2} & \multirow{3}{*}{Concatenation} & Previous & ✓ & ✓ & 71.1 & 73.1 & 70.7 & 86.6 & 75.4 \\ & & Next & ✓ & ✓ & 68.1 & 73.7 & 73.1 & 86.4 & 75.3 \\ \cline{1-1} & & Next & ✓ & ✓ & 71.2 & 62.7 & 75.6 & 85.4 & **74.9** \\ \cline{1-1} & & & & 66.4 & 63.0 & 79.2 & 85.4 & **73.5** \\ \hline \hline \end{tabular}
\end{table}
Table 4. Comparison of Acoustic Models Using wav2vec2 Embeddings with and without Contextual Information, MWCE: Masking w2v2 context embed., CCFTE: Concatenation of Context Features with Target Embeddings, sorted by UA
Figure 6. Prediction accuracy of the target emotions based on the previous segment’s emotion, % in UA
when the previous segment is Anger: 77.8% (A) and 70.8% (T). For the Anger class, a high UA is obtained for segments preceded by an expressed Fear emotion. The acoustic and textual models' results are heterogeneous for the Anger class: the acoustic model outperforms the textual model when the previous segment was Fear (89.5% (A) vs. 71.5% (T)); on the other hand, when the previous segment was Anger, the textual model clearly outperforms the acoustic model (68.7% (T) vs. 58.7% (A)).
## 7. Conclusion
This paper explored Multiscale Contextual Learning for Speech Emotion Recognition in emergency call center conversations using the CEMO corpus collected in-the-wild. We conducted experiments incorporating contextual information from both speech transcriptions and acoustic signals at varying scales of context. Overall, acoustic models demonstrate superior performance compared to text models (Tables 3 and 5).
For text modeling with FlauBERT's Transformer embeddings, the context derived from the previous segment has a more significant influence on accurate prediction than the following segment. Furthermore, taking the last speech turn of the same speaker in the conversation leads to better results (Table 3).
For acoustic modeling with wav2vec2.0 Transformer embeddings, we did not improve our results by using contextual information (Table 4). Despite pursuing a hierarchical training framework (Table 5), the results are disappointing and reveal challenges in effectively modeling sequential unimodal acoustic context using feature concatenation.
We also conducted an in-depth analysis of the impact of the previous emotions on the predictions. While multi-scale conversational context learning using Transformers can enhance performance in the textual modality for emergency call recordings, incorporating acoustic context is more challenging, see Table 4. Advanced context modeling techniques are needed to fully leverage conversational dependencies in speech emotion recognition. Extending the context to model inter-speaker dynamics and relationships throughout full conversations is an important direction. Advances in attention mechanisms to handle wider contexts will also enable further progress on context-aware speech emotion recognition.
## Ethics and reproducibility
The use of the CEMO database or any subsets of it, carefully respected ethical conventions and agreements ensuring the anonymity of the callers. All evaluations are performed on 5 folds with a classical cross-speaker folding strategy that is speaker independent between training, validation and test sets. During each fold, system training is optimized on the best Unweighted Accuracy (UA) of the validation set. The outputs of each fold are combined for the final results. The experiments were carried out using Pytorch on GPUs (Tesla V100 with 32 Gbytes of RAM). To ensure the reproducibility of the runs, we set a random seed to 0 and prevent our system from using non-deterministic algorithms. This work was performed using HPC resources from GENCI-IDRIS (Grant 2022-AD011011844R1).
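The folding strategy can be summarized by the sketch below, in which placeholder arrays with the dimensions of the CEMO subset stand in for the real segments, labels, and speaker identities; the grouping logic itself follows scikit-learn's GroupKFold.

```python
import numpy as np
import torch
from sklearn.model_selection import GroupKFold

torch.manual_seed(0)
np.random.seed(0)
torch.use_deterministic_algorithms(True)        # refuse non-deterministic kernels

# Placeholders with the CEMO subset's dimensions (4224 segments, 4 classes, 812 speakers)
segments = np.arange(4224)
labels = np.random.randint(0, 4, size=4224)
speakers = np.random.randint(0, 812, size=4224)

for train_idx, test_idx in GroupKFold(n_splits=5).split(segments, labels, groups=speakers):
    # No speaker appears in both the training and the test partition of a fold.
    assert not set(speakers[train_idx]) & set(speakers[test_idx])
    # ... train on train_idx, select on validation UA, evaluate on test_idx ...
```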
###### Acknowledgements.
The PhD thesis of Theo Deschamps-Berger is supported by the ANR AI Chair HUMAAINE at LISN-CNRS, led by Laurence Devillers and bringing together researchers in computer science, linguistics and behavioral economics from Paris-Saclay University. The data annotation work was partially financed by several EC projects: FP6-CHIL and NoE HUMAINE. The authors would like to thank M. Lesprit and J. Martel for their help with data annotation. The work is conducted in the framework of a convention between the APHP France and the LISN-CNRS. |
2301.09470 | Enabling Kernel Bypass Networking on gem5 | Full-system simulation of computer systems is critical to capture the complex
interplay between various hardware and software components in future systems.
Modeling the network subsystem is indispensable to the fidelity of the
full-system simulation due to the increasing importance of scale-out systems.
The network software stack has undergone major changes over the last decade,
and kernel-bypass networking stacks and data-plane networks are rapidly
replacing the conventional kernel network stack. Nevertheless, the current
state-of-the-art architectural simulators still use kernel networking which
precludes realistic network application scenarios. In this work, we enable
kernel bypass networking stack on gem5, the state-of-the-art full-system
architectural simulator. We extend gem5's NIC hardware model and device driver
to enable the support for userspace device drivers to run the DPDK framework.
We also implement a network load generator hardware model in gem5 to generate
various traffic patterns and perform per-packet timestamp and latency
measurements without introducing packet loss. Our experimental results show
that DPDK's simulated network bandwidth scales with the number of cores and NIC
ports. As two use cases, we analyze the sensitivity of (1) network performance
to several microarchitectural parameters, and (2) direct cache access (DCA)
technology to DPDK burst size. | Siddharth Agarwal, Minwoo Lee, Ren Wang, Mohammad Alian | 2023-01-23T15:01:50Z | http://arxiv.org/abs/2301.09470v2 | # Enabling Kernel Bypass Networking on gem5
###### Abstract
Full-system simulation of computer systems is critical to capture the complex interplay between various hardware and software components in future systems. Modeling the network subsystem is indispensable to the fidelity of the full-system simulation due to the increasing importance of scale-out systems. The network software stack has undergone major changes over the last decade, and kernel-bypass networking stacks and data-plane networks are rapidly replacing the conventional kernel network stack. Nevertheless, the current state-of-the-art architectural simulators still use kernel networking which precludes realistic network application scenarios. In this work, we enable kernel bypass networking stack on gem5, the state-of-the-art full-system architectural simulator. We extend gem5's NIC hardware model and device driver to enable the support for userspace device drivers to run the DPDK framework. We also implement a network load generator hardware model in gem5 to generate various traffic patterns and perform per-packet timestamp and latency measurements without introducing packet loss. Our experimental results show that DPDK's simulated network bandwidth scales with the number of cores and NIC ports. As two use cases, we analyze the sensitivity of (1) network performance to several microarchitectural parameters, and (2) direct cache access (DCA) technology to DPDK burst size.
Network, Kernel Bypass, DPDK, gem5
## 1 Introduction
The evolution of networking technology enabled hundreds of gigabits per second inter-server data transmission rates in datacenters, and terabit-per-second network interfaces are on the horizon [7]. Proper handling of such high network rates in the processor microarchitecture and memory hierarchy is necessary to deliver high-quality end-to-end performance for emerging exascale applications. Unfortunately, the current tools for modeling and evaluating future system architectures have outdated models for the networking subsystem and at most can deliver several tens of gigabits per second network data rates to the processor and memory hierarchy. For instance, gem5's baseline network interface card (NIC) model only delivers \(\sim\)10Gbps network bandwidth with a single NIC running an iperf TCP test (\(\sim\)20Gbps with 4 NICs). FireSim, which is an FPGA-based cycle-accurate simulator, supports networked simulation that delivers \(\sim\)1Gbps bandwidth per NIC [5].
With such low network bandwidth, the architectural simulation cannot be used for evaluating future terabit per second networked systems. In this work, we filled this gap in the gem5 simulator by enabling the DPDK software stack on gem5 that bypasses the Linux network software stack and delivers the maximum network bandwidth that a given processor architecture and memory hierarchy can sustain. In other words, we enable full-system gem5 to simulate networked systems in which the network stack is no longer the bottleneck; instead, in our setup, the processor and memory are the bottlenecks in network packet delivery.
In this paper, we describe the shortcomings of gem5's current network stack in evaluating future networked systems and explain our extensions to the gem5 hardware model and Linux kernel to enable a DPDK software stack with a polling mode driver (PMD) network interface. We also add a network traffic generator hardware model to gem5 that can be used to inject packets into the simulated network with configurable rate, packet size, and traffic pattern. Such hardware traffic generators are widely used in the industry to stress test the network subsystem without experiencing any packet loss. Our experimental results show that the bandwidth of our DPDK PMD interface scales with the number of simulated network ports and processor cores. Each simulated network port pinned to a simulated ARM core - loosely modeled after an ARM Cortex-A72 core - in a quad-core setup can sustain network receive and transmit bandwidth of \(\sim\)25Gbps when running the L2Fwd DPDK application without any packet loss. As illustrated in Fig.1, a single NIC port running the L2Fwd DPDK application sustains \(\sim\)53Gbps while iperf only sustains \(\sim\)10Gbps. The complete simulation setup and extensions are open sourced in the following git repository [https://github.com/agsiddharth/CAL-DPDK-GEM5](https://github.com/agsiddharth/CAL-DPDK-GEM5). The authors are also working on integrating the changes into the vanilla release of gem5.
## 2 Background and Motivation
Network packet processing in the Linux kernel suffers from the following bottlenecks: frequent system calls for packet transmission and reception, frequent buffer copies within kernel software stack and between kernel and userspace buffers, and long-latency interrupt processing and notification. Kernel bypass software stacks alleviate these bottlenecks by (1) reducing context switches between userspace and kernel space; (2) using large buffer allocations, huge pages, and zero-copy interfaces
Fig. 1: (a) Baseline dual-mode gem5 with Linux kernel software stack for evaluating networked systems, (b) what is enabled in this work: kernel bypass software stack with hardware load generator model in gem5.
to reduce buffer management and data movement overheads; (3) using polling for RX and TX completion notification. Many kernel bypass protocols have been proposed and implemented, including Data Plane Development Kit (DPDK) [4].
DPDK provides a userspace API for application developers. It reserves pinned hugepages and allows the NIC to DMA directly into the application's buffers. Since it is polling based, it eliminates context switching overheads as well. A DPDK application can be implemented in two modes:
_Run to completion_ mode where the packet processing loop is: (1) retrieve RX packets through polling mode driver (PMD) RX API, (2) process packets on the same logical core, (3) send pending packets through PMD TX API
_Pipeline mode_, which lets cores pass packets between each other via a ring buffer to process packets.
**Current gem5 Network Stack.**
The current NIC simulation object in gem5 loosely models the Intel(r) 8254x NIC series at a minimal functional level. Fig. 1(a) shows a two-node full-system simulated system connected through the simulated NIC and Ethernet Link. Default gem5 uses the kernel's interrupt-driven network stack and can only sustain up to \(\sim\)20Gbps network bandwidth using four powerful O3 ARM cores running at 3GHz frequency (see Sec. 4 for the detailed experimental methodology). Such low network bandwidth does not sufficiently exercise the hardware and software stack of future networked systems modeled with gem5. Thus, the current gem5 model is not useful for evaluating networked systems supporting hundreds of gigabits per second network throughput.
**Hardware Traffic Generators.** One of the main concerns when evaluating networked systems is to load the system-under-test with real traffic and measure network bandwidth and per-packet round-trip network latency without introducing extra latency or packet drops at the load generator node. The practice in the industry is to use hardware traffic generators that utilize FPGA line cards to generate packets with configurable traffic patterns, sizes, and protocols, and provide detailed network statistics per transmitted and received packet [6].
## 3 Linux Kernel Bypass in gem5
This section discusses the changes we made to gem5 and the DPDK framework to enable kernel bypass networking and implement the hardware load generator model in gem5. We do not make any changes to the Linux kernel.
### _Changes to gem5_
The changes in gem5 are limited to the PCI model, to enable the Userspace I/O (UIO) driver, and the NIC model, to enable byte-granular PCI configuration space accesses.
#### 3.1.1 Enable Userspace I/O Driver
The uio_pci_generic driver in Linux enables a userspace application to directly access the address space of a PCI device. DPDK uses this driver to give the userspace application access to the PCI config space and implement a polling mode driver. The default gem5 does not enable the uio_pci_generic driver during boot because the PCI Command Register is not fully implemented in gem5. Fig. 2 shows the first 8 bytes of the PCI configuration space, which include the 16-bit Command Register at offset 0x04. The baseline gem5 implements bits 0-9 of the Command Register but does not implement bit 10, which is the _interrupt disable_ bit. We implement the _interrupt disable_ bit in the gem5 PCI model so that the Linux kernel can disable interrupts for PCI devices on gem5, which is necessary to support the uio_pci_generic driver.
#### 3.1.2 Enable Byte-Granular Access to PCI Configuration Space
The default gem5 only supports 16-bit accesses to the Command Register shown in Fig. 2. In fact, this is reasonable since the size of the Command Register is 16 bits. However, we observed that DPDK accesses the Command Register using 8-bit memory accesses. Such byte-granular accesses are ignored in gem5, and therefore DPDK cannot properly read and write the upper half of the Command Register (offset 0x05 of the PCI config space). We extended the readConfig and writeConfig functions in the gem5 PCI model to enable byte-granular accesses to the Command Register.
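For illustration (this snippet is not from the paper or from DPDK), toggling the interrupt-disable bit through the config space exposed in sysfs only touches the byte at offset 0x05, since bit 10 of the 16-bit Command Register is bit 2 of its upper byte; the device model therefore has to honor 1-byte accesses at that offset. The device address below is a placeholder and root privileges are required.

```python
import os

CONFIG = "/sys/bus/pci/devices/0000:00:02.0/config"   # placeholder BDF; use the NIC's address

fd = os.open(CONFIG, os.O_RDWR)
upper = os.pread(fd, 1, 0x05)[0]                      # upper byte of the Command Register
os.pwrite(fd, bytes([upper | 0x04]), 0x05)            # set register bit 10 (bit 2 of this byte)
os.close(fd)
```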
#### 3.1.3 Implement Interrupt Mask Register in the NIC model
The last modification in gem5 is to implement the Interrupt Mask Register in the i8254xGBe device model. Interestingly, this register is included in the i8254xGBe model, but the read and write methods for accessing the register are not implemented in the current gem5 release. We implemented the read and write methods to enable DPDK to launch its polling mode driver.
#### 3.1.4 Enable the NIC Model to Correctly Operate with a PMD
NIC devices keep a handful of available descriptors (usually 32\(\sim\)64 descriptors) in an on-chip cache, called the _descriptor cache_, that can be populated upon receiving a packet. The descriptor cache improves performance because the NIC does not need to fetch available descriptors from the CPU memory on demand. The NIC gradually writes back the descriptor cache to the CPU memory (using DMA), and then the CPU is notified of received packets.
The current gem5 NIC model writes back the received descriptors based on a threshold set by the Linux kernel. Once the number of used descriptors is higher than this threshold, the NIC initiates a writeback. When using a PMD, the threshold registers in the NIC model are not properly set, and thus the NIC starts writing back the descriptors only when all of them are used. This means that packets are DMAed to the CPU memory in large batches (32\(\sim\)64 packets), which causes unrealistic pressure on the CPU memory subsystem and increases the possibility of packet drops at high receive rates. We implemented a parameter for the NIC with which the user can control the threshold of descriptor writebacks in gem5.
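In a gem5 Python configuration, such a knob could be exposed roughly as in the fragment below; the parameter name is purely illustrative, and the released patches should be consulted for the actual interface.

```python
from m5.objects import IGbE_e1000

nic = IGbE_e1000(pci_bus=0, pci_dev=0, pci_func=0)
# Hypothetical parameter: write received descriptors back in small batches instead of
# waiting until the whole descriptor cache (32-64 entries) has been consumed.
nic.rx_desc_wb_threshold = 4
```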
### _Changes to DPDK_
The DPDK Environment Abstraction Layer (EAL) relies on vendor ID checks to match a device and a PMD. We modify the DPDK source to skip these checks and force the matching of the gem5 device to the e1000 PMD. Unmodified DPDK cannot fetch the correct vendor ID when running on gem5 and therefore fails to call the proper PMD driver. We suspect this is because some manufacturer-specific information is missing in the gem5 NIC model. Note that skipping the vendor ID test does not have any impact on gem5 simulations as the current gem5 release
Fig. 2: First 8 bytes of the PCI configuration space that includes PCI Command Register.
has only the e1000 NIC model. If new NIC models are added to gem5, the DPDK framework should be recompiled after hard-coding the PMD driver to use a different NIC model.
### _Hardware Load Generator Model._
The hardware load generator model can generate packets at arbitrary rates and sizes. We implement a simulation object called EtherLoadGen that has a single Ethernet port and can directly connect to the NIC port of a simulated node, as shown in Fig. 1(b). Therefore, for simple network benchmarking, one does not need to run distributed or dual-mode gem5 simulations, and a single-system simulation is enough. This significantly improves the simulation speed. The parameters of EtherLoadGen are packet rate, packet size, and protocol. They can be statically set when launching a simulation, or a packet trace can be passed to the simulator to be replayed by EtherLoadGen. In the static mode, EtherLoadGen creates Ethernet packets with the specified size and sends them at a fixed rate to the Ethernet port. The only protocol that the static mode currently supports is plain Ethernet. If a trace file is provided, EtherLoadGen reads from the trace file and generates traffic based on the timestamps, sizes, and protocol in the trace file.
EtherLoadGen adds a timestamp to each outgoing packet at a configurable offset and compares the timestamp with the current tick on incoming packets to compute per-packet round-trip latency. EtherLoadGen reports mean, median, standard deviation, and tail latency of network packets in the statistics file. It also produces a packet drop percentage and a histogram of packet forwarding latency. The load generator model enables simple network benchmarking in gem5 without the need to simulate multiple system nodes. This is similar to the practice in the industry for using hardware load generators to evaluate the performance of the network [1].
EtherLoadGen also supports a bandwidth test mode where it gradually increases the bandwidth to find the maximum sustainable bandwidth of a server, which is the maximum bandwidth that a server can sustain without packet drops.
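A configuration sketch for wiring the load generator directly to a simulated NIC might look as follows; the parameter and port names are assumptions inferred from the description above rather than the exact interface of the released model.

```python
from m5.objects import EtherLoadGen, IGbE_e1000

nic = IGbE_e1000(pci_bus=0, pci_dev=0, pci_func=0)
# Static mode: fixed-size plain-Ethernet packets at a fixed rate (names are illustrative).
loadgen = EtherLoadGen(packet_size=1500, packet_rate='2Mpps')
loadgen.interface = nic.interface   # takes the place of the Ethernet link in a two-node setup
```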
## 4 Evaluation
### _Methodology_
Table I shows the baseline gem5 configuration we used for the experiments. We use iperf and L2Fwd for the kernel and DPDK experiments, respectively. In this section, we first explain how to build DPDK into a gem5 full-system disk image and write scripts to run a simple L2Fwd DPDK application and load it with the simulated hardware packet generator. Then we show the results that validate the correctness of our kernel bypass setup. Next, we use iperf and L2Fwd to compare the scalability of the kernel stack and DPDK on gem5. Lastly, we perform a sensitivity analysis of network bandwidth on microarchitecture configurations when using the kernel stack and DPDK.
To build a kernel and disk image for gem5, we use the buildroot tool. The kernel needs to be compiled with support for huge pages and the kernel module uio_pci_generic. Listing 1 shows the kernel config options that need to be enabled in the buildroot tool for DPDK.
```
CONFIG_HUGETLBFS=y
CONFIG_HUGETLB_PAGE=y
CONFIG_UIO=y
CONFIG_CT=y
CONFIG_UIO_PCI_GENERIC=m
```
Listing 1: Kernel CONFIG options needed for DPDK.
Listing 2 shows the bash script for bringing up the Userspace I/O (UIO) driver (line 1), binding it to a NIC port (line 2), allocating huge pages, and lastly, starting the _testpmd_ application. As shown in the listing, the procedure for running DPDK applications on gem5 is identical to running DPDK apps on bare metal hardware.
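Since the listing itself is not reproduced here, the following Python sketch performs the same four steps on a generic Linux system; the PCI vendor/device ID, hugepage count, and core count are illustrative assumptions, and a mounted hugetlbfs is presumed.

```python
import subprocess

subprocess.run(["modprobe", "uio_pci_generic"], check=True)           # 1) load the UIO driver

with open("/sys/bus/pci/drivers/uio_pci_generic/new_id", "w") as f:   # 2) bind the NIC by ID
    f.write("8086 100e")                                              #    placeholder vendor/device

with open("/proc/sys/vm/nr_hugepages", "w") as f:                     # 3) reserve huge pages
    f.write("256")

subprocess.run(["dpdk-testpmd", "-l", "0-3", "-n", "4"], check=True)  # 4) start testpmd
```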
### _Experimental Results_
In this section, we show experimental results that verify the functionality of the DPDK-enabled hardware/software stack on gem5 and illustrate the scalability of our kernel bypass network software stack on gem5. Since validating gem5's performance is out of the scope of this paper, we do not compare the network bandwidth of a bare metal system with our gem5 setup and only report the performance of gem5. There are ongoing efforts to validate gem5 performance for different ISAs [2].
To verify DPDK's functionality on the simulated system, we modify the L2Fwd sample application to print the content of the packets received from the network. We always receive the correct content regardless of the packet size and network configuration. This experimentally verifies the correct functionality of the kernel bypass stack on gem5.
Fig. 3(a) shows the _maximum sustainable bandwidth_ of a single gem5 node when equipped with up to 4 NICs, running iperf and L2Fwd. We define the _maximum sustainable bandwidth_ as the maximum bandwidth without packet drop. As explained in Sec. 3.3, EtherLoadGen supports a mode where it gradually increases the packet rate and monitors the responses received from the server to find the maximum sustainable bandwidth. Fig. 3(a) compares the bandwidth achieved using the Linux kernel stack (iperf configuration) and DPDK (L2Fwd configuration). Fig. 3(a) has two highlights: (1) DPDK delivers much higher bandwidth compared with the kernel stack. As shown in the figure, L2Fwd sustains 5.4\(\times\) and 4.9\(\times\) more bandwidth compared with iperf using 1 and 4 NICs, respectively. (2) Adding more NICs scales DPDK's network bandwidth better than the Linux kernel. As shown in the figure, moving from 3 NICs to 4 NICs, the DPDK software stack gains 24.1% higher bandwidth, while the Linux kernel stack only sustains 5.3% higher network bandwidth.
\begin{table}
\begin{tabular}{l c} \hline \hline Parameters & Baseline Values \\ \hline Core freq & 2GHz \\ Superscalar & 3 ways \\ ROB/IQ/LQ/SQ entries & 384/128/128/128 \\ Int \& FP physical registers & 128 \& 192 \\ Branch predictor/BTB entries & BiMode/2048 \\ Caches (size, assoc): L1I/L1D/L2 & 32KB/64KB,2/2MB,16 ways \\ L1I/L1D/L2 latency, MSHRs & 1/2/12 cycles, 2/6/16 MSHRs \\ DRAM/mem size & DDR4-3200-88/2GB \\ iocache & 24 cycles, 16 MSHRs \\ Network latency/BW & 1\(\mu\)s/200Gbps \\ DPDK & Version 20.11.3 \\ Operating system & Linux Linaro (kernel 5.4.0) \\ gem5 version & v21.1.0.2 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Simulation configuration.
## 5 Example Use Cases
### _Micro Architectural Sensitivity Analysis_
In this subsection, we analyze the sensitivity of the network bandwidth to various microarchitectural configurations. We perform the analysis for both the Linux kernel and kernel bypass network stacks and compare the results. We start with the baseline node configuration shown in Table I (2GHz CPU configuration in Fig. 3(b)). We then increase the CPU frequency (3GHz CPU), reduce the PCIe bus latency (low latency PCIe), double the number of memory channels (2x Mem Ch), double the size of the ROB and LSQ (2xROB/LSQ), double the number of load-store functional units in the processor pipeline (2xLSUs), double the size of the L1 data and instruction caches (2xL1D/I), double the size of the L2 and LLC caches (2xL2/LLC), and lastly enable direct cache access [3] to place received network data in the LLC instead of DRAM (DCA). Fig. 3(b) shows the maximum sustainable bandwidth for all the mentioned configurations. Note that the enhancements are cumulative, meaning that we apply each optimization on top of the previous one, i.e., the DCA configuration runs at 3GHz, has low latency PCIe, twice the number of memory channels, and twice the ROB, LSQ, L1D, L1I, L2, and LLC sizes of the configuration listed in Table I.
As shown in Fig.3(b), different microarchitectural parameters impact DPDK and Linux kernel stacks differently. For example, increasing CPU frequency alone improves DPDK bandwidth by 1.2% but improves Linux kernel stack bandwidth by 32.5%. This is due to the CPU intensity of the kernel stack compared to the userspace DPDK stack. Fig.3(b) clearly shows that conducting architectural research using an old software stack can lead to incorrect assumptions and optimizations.
### _Sensitivity of Direct Cache Access Performance to Burst Size_
As another use case for userspace networking and the network load generator, Fig. 4 plots the impact of the L2Fwd burst size on the writeback rate of the L2 and L3 caches when receiving 1024 packets in a short time interval. In Fig. 4(a), L2Fwd forwards in bursts of 32 packets, while in Fig. 4(b), L2Fwd waits until 1024 packets are received and then starts forwarding. The simulated node implements a non-inclusive L2 with DCA enabled. As shown, a large batch size results in LLC contention at the beginning of the burst arrival, as many packets are DMA-transferred to the LLC (into the ring buffer) within a short interval. A shorter batch size overlaps the processing of the received packets with the DMA from the NIC to the LLC, and since the L2 is non-inclusive, demand misses from the L2 make space in the LLC for incoming packets; therefore, Fig. 4(a) has a lower LLC writeback rate.
## 6 Conclusion
In this paper, we introduced gem5's userspace networking stack. We explained the changes we made to enable gem5 to run DPDK applications. We showed that the bandwidth of the L2Fwd DPDK application running on gem5 follows the same trend as when running on bare metal hardware. We showed that the bandwidth of DPDK applications running on gem5 scales significantly better than that of applications that use the Linux kernel stack. Using gem5 running iperf and L2Fwd, we performed a sensitivity analysis on microarchitecture optimizations and showed that kernel bypass network applications are sensitive to different microarchitecture optimizations than Linux kernel network applications. We released the source code, scripts, and instructions to create disk images, install DPDK, and run DPDK applications in the following git repository: [https://github.com/agsiddharth/CAL-DPDK-GEM5](https://github.com/agsiddharth/CAL-DPDK-GEM5).
## Acknowledgement
This research was in part supported by an NSF grant (CNS-2213807).
|
2304.14599 | Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled
Dataset of Tweets | One of the major challenges in automatic hate speech detection is the lack of
datasets that cover a wide range of biased and unbiased messages and that are
consistently labeled. We propose a labeling procedure that addresses some of
the common weaknesses of labeled datasets. We focus on antisemitic speech on
Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of
topics common in conversations about Jews, Israel, and antisemitism between
January 2019 and December 2021 by drawing from representative samples with
relevant keywords. Our annotation process aims to strictly apply a commonly
used definition of antisemitism by forcing annotators to specify which part of
the definition applies, and by giving them the option to personally disagree
with the definition on a case-by-case basis. Labeling tweets that call out
antisemitism, report antisemitism, or are otherwise related to antisemitism
(such as the Holocaust) but are not actually antisemitic can help reduce false
positives in automated detection. The dataset includes 1,250 tweets (18%) that
are antisemitic according to the International Holocaust Remembrance Alliance
(IHRA) definition of antisemitism. It is important to note, however, that the
dataset is not comprehensive. Many topics are still not covered, and it only
includes tweets collected from Twitter between January 2019 and December 2021.
Additionally, the dataset only includes tweets that were written in English.
Despite these limitations, we hope that this is a meaningful contribution to
improving the automated detection of antisemitic speech. | Gunther Jikeli, Sameer Karali, Daniel Miehling, Katharina Soemer | 2023-04-28T02:52:38Z | http://arxiv.org/abs/2304.14599v1 | # Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets
###### Abstract
One of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets.
We focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords.
Our annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.
The dataset includes 1,250 tweets (18%) that are antisemitic according to the International Holocaust Remembrance Alliance (IHRA) definition of antisemitism.
It is important to note, however, that the dataset is not comprehensive. Many topics are still not covered, and it only includes tweets collected from Twitter between January 2019 and December 2021. Additionally, the dataset only includes tweets that were written in English. Despite these limitations, we hope that this is a meaningful contribution to improving the automated detection of antisemitic speech.
## Introduction and Related Work
Automated hate speech detection has advanced significantly in recent years with the development of deep learning techniques and large-scale training data. Many studies have shown that machine learning algorithms, especially deep neural networks, can achieve high accuracy in detecting hate speech within test datasets.
Hate speech detection using automated methods such as BERT, ELECTRA, or Perspective API has been developed to investigate large datasets containing toxic speech patterns and conspiracy-related content [1, 1, 2].
However, the task of detecting hate speech remains challenging for several reasons [20]. First, the datasets on which the models are trained are relatively small. They do not include all variations of hate speech manifestations in a rapidly changing environment. Second, there is no single definition of hate speech. It can be expressed in a variety of ways, often involves a high degree of subjectivity, and depends on cultural, social, and historical factors, making it difficult to identify and classify consistently. Third, proper classification often requires more context than what is readily available, such as previous discussions in a thread or a history of ironic messages from particular users. Calling out hate speech or reporting on stereotypes often results in false positives. A quick test with ChatGPT can illustrate this. ChatGPT correctly identifies antisemitic stereotypes in the message _"Fox News trashes Georges Soros while praising Joe Rogan using some antisemitic tropes - puppet master using his money to control the world. Then Pete Hegseth goes into a Fant about the nonsense conspiracy theory Cultural Marxism. This is from Fox & Friends morning show."_ However, ChatGPT classifies it as an antisemitic message itself.
Despite these challenges, the research community has made significant progress in developing models that can detect hate speech. These models often rely on a combination of linguistic features, such as word n-grams, word embeddings, and contextual features,
such as the identity of the author or the presence of certain keywords.
It's worth noting that while the quality of these models has improved, they are not perfect and can still produce false positives or false negatives. As a result, it is important to use these models as part of a larger framework that includes human review and oversight. It is also important to ensure that these models are trained on diverse, representative data and that their results are interpretable and transparent.
Narrowing hate speech to hostile attitudes toward specific communities or groups of people can help improve automated detection models, as it may be easier to consistently label datasets with a precise definition.
While there are many labeled datasets on hate speech in general, few datasets focus specifically on antisemitism. Chandra et al. (2021) built a labeled dataset on antisemitism from 3,102 posts on Twitter and 3,509 posts on Gab, focusing on messages containing both images and text, as well as words related to Jews, such as "Jewish," "Hasidic," "Hebrew," "Semitic," "Judasitic," "israeli," "yahudi," "yehudi," and also slurs. The Twitter dataset, including IDs and annotations, is publicly available.1 Three annotators labeled posts as antisemitic or not and classified antisemitic posts into one of the four categories: political, economic, religious, or racial antisemitism. Steffen et al. (2022) labeled a dataset of 3,663 German-language Telegram messages about antisemitism and conspiracy theories. They retrieved the messages from Telegram channels that were used to protest government measures to contain the pandemic. The messages were posted between March 11, 2020, and December 19, 2021. Both projects make their datasets available publicly or, in the latter case, on request. Both projects use the International Holocaust Remembrance Alliance's Working Definition of Antisemitism (IHRA Definition) as a guideline for determining whether a message is antisemitic or not.2 This is also the case for Schwarzschild's comprehensive study of online messages in German Schwarz-Friesel (2019). Guhl et al.'s study of the German far-right Guhl et al. (2020), and the ongoing Decoding Antisemitism project, which examines online comments on articles in mainstream media outlets in English, German, and French Ascone et al. (2022); Becker and Allington (2021). We have shown that the definition can be successfully used to classify online messages when inferences from the definition, such as "classical antisemitic stereotypes," are spelled out Jikeli et al. (2019).
Footnote 1: [https://github.com/mohit3011/Online-Antisemitism-Detection-Using-MultimodalDeep-Learning](https://github.com/mohit3011/Online-Antisemitism-Detection-Using-MultimodalDeep-Learning)
Research on automated detection of hate speech and conspiracy theories related to the Covid-19 pandemic or QAnon has shown that Jews figure prominently in conspiracy fantasies Vergani et al. (2022); Hoseini et al. (2021); La Morgia et al. (2021). Automated detection of antisemitic speech could also help to identify conspiracy theories.
However, time-consuming manual annotation and consistent labeling are the bottleneck for most supervised machine learning projects. Our project on antisemitic tweets is in principle not different from many other projects on hate speech dataset, which include defining a classification scheme, labeling guidelines, collecting adequate data, preprocessing this data according to the task, training experts on labeling, and building a final corpus Pustejovsky and Stubbs (2013).
However, we propose a number of measures to ensure the production of high quality datasets. We are making the IDs and the text of our dataset available, along with our label of whether or not they are antisemitic. Later this year, we will add the label of calling out antisemitism. The usernames in the tweets are not anonymized, as we believe this information may be useful for further research.
## Generating Our Corpus
Our corpus covers a wide range of antisemitic and non-antisemitic conversations about Jews on Twitter from 2019 to 2021. We used Indiana University's Observatory on Social Media (OSoMe) database to identify relevant messages. The OSoMe database contains 10 percent of all live tweets on a statistically relevant basis. We queried with two keywords that are likely to result in a wide range of conversations about Jews as a religious, ethnic, or political community: "Jews" and "Israel." We then added samples with more targeted keywords that are likely to generate a high percentage of antisemitic tweets, namely the slurs
"K---s"3 and "ZioNazi*." We ran 14 queries for different time periods between January 2019 and December 2021. The queries returned tweet ID numbers. We then randomly sampled 2,000 tweets from each query, pulled the text and metadata from the Twitter archive from those that were still live, filtered for English tweets using Google's language detection library, and randomly selected 500 tweets from the remaining tweets.4 We took screenshots of the sampled tweets for documentation. We repeated this process for four queries, resulting in two samples for those queries and 18 samples in total, see Table 1.
Footnote 3: “K---s” stands for the antisemitic slur “kikes.”
Footnote 4: For the queries with the two slurs the sample size was smaller because fewer tweets remained after this process.
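The per-query sampling step described above can be summarized by the sketch below (not the project's actual code); langdetect stands in for Google's language-detection library, and the hydrated tweets are assumed to be already available as an id-to-text mapping.

```python
import random
from langdetect import detect

random.seed(0)   # seed chosen only to make this sketch reproducible

def is_english(text):
    try:
        return detect(text) == "en"
    except Exception:            # langdetect raises on empty or unparseable text
        return False

def sample_query(live_tweets, n_final=500):
    """live_tweets: {tweet_id: text} for the roughly 2,000 tweets pulled from the archive."""
    english = [tid for tid, text in live_tweets.items() if is_english(text)]
    return random.sample(english, min(n_final, len(english)))
```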
### Annotation
We annotated the tweets, considering the text, images, videos, and links, in their "natural" context, including threads. We used a detailed annotation guideline (Jikeli, Cavar, and Michling, 2019), based on the IHRA Definition, which has been endorsed and recommended by more than 30 governments and international organizations5 and is frequently used to monitor and record antisemitic incidents. We divided the definition into 12 paragraphs. Each of the paragraphs addresses different forms and types of antisemitism. We created an online annotation tool ([https://annotationportal.com](https://annotationportal.com)) to make labeling easier, more consistent, and less prone to errors, including in the process of recording the annotations. The portal displays the tweet and a clickable annotation form, see Figure 1. It automatically saves each annotation, including the time spent labeling each tweet.
Footnote 5: [https://www.holocaustremembrance.com/resources/workin](https://www.holocaustremembrance.com/resources/workin)
The Annotation Portal retrieves live tweets by referencing their ID number. Our annotators first look at the tweet, and if they are unsure of the meaning, they are prompted to look at the entire thread, replies, likes, links, and comments. A click on the visualized tweet opens a new tab in the browser, displaying the message on the Twitter page in its "natural" environment.
The portal is designed to help annotators consistently label messages as antisemitic or not according to the IHRA definition. After verifying that the message is still live and in English, they select from a drop-down menu where they classify the message as "confident antisemitic," "probably antisemitic," "probably not antisemitic," "confident not antisemitic," or "don't know." The annotation guideline, including the definition, is linked in a PDF document.
All annotators are familiar with the definition and have been trained on test samples. They have also taken at least one academic course on antisemitism or have done research on antisemitism. We consider them to be expert annotators. Eight such expert annotators of different religions and genders labeled the 18 samples, two for each sample in alternating configurations.
When annotators label a tweet as "probably" or "confident" antisemitic, they must also select an applicable section of the IHRA definition to move on to the next tweet. If annotators feel the tweet is antisemitic, but no section of the definition applies, they will classify the tweet as not antisemitic according to the IHRA definition, and check a box indicating that they disagree with the IHRA definition for that tweet. Annotators can also use this the other way around, that is, they can label the message as antisemitic according to the IHRA definition, but personally disagree with it in a particular case. The option to personally disagree
Figure 1: Annotation Portal with tweet example6
with the definition on a case-by-case basis is intended to encourage a stricter application of the IHRA definition rather than the individual definitions of the annotators.
Annotators are also asked to rate each tweet's sentiment toward Jews, Judaism, or Israel as very negative, negative, neutral, positive, or very positive. This further helps them apply the definition, because they can record that a tweet has a negative sentiment even if they are unable to find a part of the definition that applies. Messages that call out or report antisemitism are also flagged, as are tweets that are sarcastic and tweets that relate to the Holocaust, including comparisons to contemporary issues. Since the Covid pandemic, we have seen an increase in Holocaust distortions, most of which do not fall under the IHRA definition. We have therefore added a Holocaust distortion label.
As an element of what we consider annotation reliability, our annotators meet on a weekly basis to discuss potentially difficult tweets. A tweet can be considered difficult if its content or context is not easily understandable, or if it is unclear which paragraph of the IHRA definition applies.
Two annotators labeled each sample. After both annotators completed their annotations, they discussed their disagreements about whether or not the tweets were antisemitic.7 Reasons for disagreement included mislabeling due to fatigue, lack of understanding of the context, and overlooking some aspects of the messages. In almost all cases, the discussion led to an agreement. The tweets that did not reach agreement, that is, where one annotator labeled the message as antisemitic and the other did not, were not included in our final labeled dataset. Table 1 shows the annotation results after discussion with the remaining number of tweets and the percentage of antisemitic tweets.8
Footnote 7: Annotators discussed messages that one of them labeled as confident or probably antisemitic and the other one labeled as confident or probably not antisemitic or “I don’t know.”
Footnote 8: Before publishing the dataset, we checked for errors, including a possible overlap between tweets labeled as antisemitic and tweets calling out antisemitism. These cases were relabeled, resulting in the correction of 9 tweets.
Many dataset labeling projects provide kappa coefficients to measure the quality of inter-annotator agreement. This makes little sense in our case: kappa assumes independent classifications, but we discuss all disagreements, so few disagreements remain and the post-discussion kappa is an artificial value very close to 1. Our overall pre-discussion Cohen's kappa is 0.66, varying from sample to sample.9 Not surprisingly, it is lower when the data has a skewed distribution, that is, when either very few tweets in a sample are antisemitic or very few are not antisemitic. Kappa therefore does not seem appropriate for measuring annotation reliability for our dataset, and perhaps for social data annotation in general.
Footnote 9: Cohen’s kappa was calculated for dichotomized values, “I don’t know” falls under “not antisemitic.”
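As a hedged illustration of how the pre-discussion agreement can be computed on dichotomized ratings (the label strings below are placeholders for however the ratings are stored):

```python
# Sketch: pre-discussion Cohen's kappa on dichotomized ratings (footnote 9):
# "probably/confident antisemitic" -> 1, everything else (incl. "don't know") -> 0.
from sklearn.metrics import cohen_kappa_score

POSITIVE = {"probably antisemitic", "confident antisemitic"}


def dichotomize(ratings):
    return [1 if r in POSITIVE else 0 for r in ratings]


def pre_discussion_kappa(ratings_a, ratings_b):
    return cohen_kappa_score(dichotomize(ratings_a), dichotomize(ratings_b))
```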
The weekly group discussions and the discussions among the annotators helped the annotators to better understand the context of online conversations about events and online celebrities in the US, UK, India, or elsewhere. The annotators became increasingly familiar with the contexts because they often revolved around similar topics.
## Annotation Results
In our published labeled dataset, we use binary categories, treating ratings of "confident/probably not antisemitic" and "don't know" as not antisemitic, and "probably/confident antisemitic" as antisemitic. While annotators discussed disagreements about their antisemitism ratings, they did not discuss disagreements about their "calling out" ratings. The label "calling out" was more common than the label "antisemitism" in many samples, but it was inconsistent, especially in the annotations at the beginning of the project. We are relabeling the dataset for "calling out" and will publish the results in an update of the dataset. An overview of the annotation results of the label "antisemitism" per sample can be found in Table 1.
| Sample | Keyword | Time period | Tweets | Antisemitic |
|---|---|---|---|---|
| 1 | Jews | Jan.-Dec. 2019 | 432 | 6.3 % |
| 2 | Jews | Jan.-Dec. 2019 | 409 | 7.6 % |
| 3 | Jews | Jan.-Apr. 2020 | 460 | 12.2 % |
| 4 | Jews | Jan.-Apr. 2020 | 423 | 11.6 % |
| 5 | Jews | May-Aug. 2020 | 390 | 14.1 % |
| 6 | Jews | May-Aug. 2020 | 387 | 16.3 % |
| 7 | Jews | Sep.-Dec. 2020 | 402 | 10 % |
| 8 | Jews | Sep.-Dec. 2020 | 390 | 7.4 % |
| 9 | Jews | Jan.-Apr. 2021 | 453 | 6 % |
| 10 | Jews | May-Aug. 2021 | 414 | 19.6 % |
| 11 | Jews | Sep.-Dec. 2021 | 445 | 5.6 % |
| 12 | Israel | Jan.-Apr. 2020 | 294 | 11.6 % |
| 13 | Israel | May-Aug. 2020 | 408 | 13.7 % |
| 14 | Israel | Sep.-Dec. 2020 | 408 | 15.2 % |
| 15 | Israel | Jan.-Apr. 2021 | 414 | 12.3 % |
| 16 | ZioNazi* | Jan.-Dec. 2019 | 375 | 89.3 % |
| 17 | ZioNazi* | Jan.-Apr. 2020 | 154 | 85.7 % |
| 18 | kikes | Jan.-Dec. 2019 | 283 | 34.3 % |
| **Total** | | **Jan. 2019-Dec. 2021** | **6941** | **18 %** |

Table 1: Overview of samples in dataset
Our Gold Standard includes 6,941 tweets with keywords related to antisemitism and Jewish life, of which 1,250 tweets (18%) are antisemitic according to the IHRA definition. 1,499 tweets (22%) were sent in 2019, 3,716 tweets (54%) in 2020, and 1,726 tweets (25%) in 2021. Out of the 6,941 tweets, 4,605 tweets (66%) are from queries with the keyword "Jews," 1,524 tweets (22%) with the keyword "Israel," 529 tweets (8%) with the slur "ZioNazi*," and 283 tweets (4%) with the slur "K---s." Some of the keywords may also appear in other samples, e.g., a tweet may contain both the word "Jews" and "Israel."
Out of 4,605 tweets containing the keyword "Jews," 483 tweets (11%) are considered antisemitic. Out of 1,524 tweets containing the keyword "Israel," 203 tweets (13%) are antisemitic. Our dataset contains 283 tweets with the antisemitic slur "k---s." It is not surprising that many messages with this keyword are antisemitic (34% in our samples, or 97 messages), but we also noticed that many tweets containing this slur are calling out the use of the term. In contrast, the vast majority of tweets with the slur "ZioNazi*" are antisemitic: 467 out of 529, or 88%.
The labeled dataset on antisemitism is now awaiting testing, and to facilitate this we have made it available on Zenodo (Jikeli et al., 2023).
## Discussion
Our labeled dataset of 6,941 tweets is based on representative samples of tweets containing the common keywords "Jews" and "Israel" and keywords more likely to be used in antisemitic contexts, the slurs "ZioNazi*" and "k---s." It includes 1,250 tweets (18%) that are antisemitic according to IHRA's Working Definition of Antisemitism.
The majority of the tweets (66%) come from queries with the keyword "Jews," which is representative of a continuous time period from January 2019 to December 2021. It is reasonable to assume that our dataset is a good reflection of discussions on Twitter about Jews and covers the most prevalent topics, at least when the word "Jews" is mentioned and for the three-year period covered by the dataset. 483 tweets (11%) with the keyword "Jews" were labeled as antisemitic. It is also reasonable to assume that they cover most of the relevant topics of antisemitic discussions about Jews on Twitter during this period.
203 out of 1,524 (13%) tweets about Israel are antisemitic. They are likely to cover the main tropes and discussions during this period. However, this period does not include heightened tensions in the Israeli-Palestinian conflict, such as the flare-up of the conflict in May 2021. We will be updating our dataset with data from all of 2021 and 2022 in the near future.
The slurs "ZioNazi*" and "K---s" are used far less often by Twitter users than the words "Jews" and "Israel." While there are millions of tweets containing the latter words each year, there are "only" a few tens of thousands of tweets containing these slurs. The majority of tweets containing the slur "ZioNazi*" use it approvingly and have an antisemitic message (88%). This is not the case for "k---s," where only about one-third of the tweets are antisemitic.
The tweets in our dataset cover a wide range of antisemitic and non-antisemitic conversations about Jews. However, the dataset needs to be enlarged and constantly updated to cover all topics comprehensively. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.
It is important to annotate online messages in their "natural" context. Context, including pictures, memes, or previous comments within a thread, can completely change the meaning of a message.
The annotation process encourages annotators to apply a widely used definition of antisemitism consistently, even if they disagree on certain aspects or on certain cases. Our annotation process appears to be robust although it is difficult to measure because a key element of our annotation process is the discussion among annotators of their disagreements and weekly discussions of tweets that are difficult to classify. This violates the assumption of independent classification in kappa calculations. Percentage agreement and Cohen's kappa almost reach their maximum after annotators discuss and revisit tweets on which they previously disagreed. In the pre-discussion annotation, we do not aim for 100% agreement. Rather, we want annotators from different perspectives to fully understand the messages of each tweet, which they can then explain in discussions focused on deciding whether or not it is antisemitic according to the IHRA definition. The dataset only includes tweets with 100% agreement between annotators. The pre-discussion inter-rater reliability has a kappa value of 0.66. We consider the training of qualified annotators and the discussion process to be essential for building an accurately labeled dataset.
## Acknowledgements
This work used Jetstream2 at Indiana University through allocation HUM200003 from the Advanced Cyberinfrastructure Coordination Ecosystem: Services & Support (ACCESS) program, which is supported by National Science Foundation grants #2138259, #2138286, #2138307, #2137603, and #2138296.
We are grateful for the support of Indiana University's Observatory on Social Media (OSoMe) [1] and the contributions and annotations of all team members in our Social Media & Hate Research Lab at Indiana University's Institute for the Study of Contemporary Antisemitism, especially Grace Bland, Elisha S. Breton, Kathryn Cooper, Robin Forstenhausler, Sophie von Mariassy, Mabel Poindexter, Jenna Solomon, Clara Schilling, and Victor Tschiskale.
|
2301.06743 | On the Gamma-Ray Emission of the Andromeda Galaxy M31 | Using the $\gamma$-ray data obtained with the Large Area Telescope (LAT)
onboard {\it the Fermi Gamma-ray Space Telescope (Fermi)} for $\sim$14 years,
we examine the high energy emission emanating from the center of the Andromeda
Galaxy M31. Different from previously reported results, which show a seemingly
extended source, we instead find two individual point sources, one consistent
with being at the center and one 0\fdg4 south-east of the center. The emission
of the former is well described using a Log-Parabola model, similar to those of
previous studies, and that of the latter can be fitted with a power law. We
discuss the possible origins for the two sources. M31's central source, now
consistent with being a point source, necessitates a revisit of its previously
discussed originations with this new property taken into consideration, in
particular those cosmic rays or dark matter scenarios involving extended source
distributions. The SE source appears to have a projected distance of
$\sim$6\,kpc from M31's center, and the investigation is required as to whether
it is a source locally associated with M31, or is instead a background
extra-galactic one. | Yi Xing, Zhongxiang Wang, Dong Zheng, Jie Li | 2023-01-17T08:03:19Z | http://arxiv.org/abs/2301.06743v2 | # On the Gamma-Ray Emission of the Andromeda Galaxy M31
###### Abstract
Using the \(\gamma\)-ray data obtained with the Large Area Telescope (LAT) onboard _the Fermi Gamma-ray Space Telescope (Fermi)_ for \(\sim\)14 years, we examine the high energy emission emanating from the center of the Andromeda Galaxy M31. Different from previously reported results, which show a seemingly extended source, we instead find two individual point sources, one consistent with being at the center and one \(0\fdg 4\) south-east of the center. The emission of the former is well described using a Log-Parabola model, similar to those of previous studies, and that of the latter can be fitted with a power law. We discuss the possible origins for the two sources. M31's central source, now consistent with being a point source, necessitates a revisit of its previously discussed origins with this new property taken into consideration, in particular those cosmic-ray or dark matter scenarios involving extended source distributions. The SE source appears to have a projected distance of \(\sim\)6 kpc from M31's center, and further investigation is required to determine whether it is a source locally associated with M31 or instead a background extra-galactic one.
Gamma-ray sources (633); Andromeda Galaxy (39)
## 1 Introduction
The Andromeda Galaxy M31, located approximately 780 kpc away from our Milky Way (Conn et al., 2012), is one of a dozen galaxies that have been detected at \(\gamma\)-rays (Xi et al., 2020; Abdollahi et al., 2022). Utilizing the 2-yr data taken with the Large Area Telescope (LAT) onboard _the Fermi Gamma-ray Space Telescope (Fermi)_, the _Fermi_-LAT Collaboration (Abdo et al., 2010) first reported the detection of M31 at a 5\(\sigma\) confidence level. Following this initial report, different analyses of the LAT data have been performed for studies of the \(\gamma\)-ray emission of M31 (Li et al., 2016; Pshirkov et al., 2016; Ackermann et al., 2017; Karwin et al., 2019; Zimmer et al., 2022). It appears that the primary \(\gamma\)-ray emission of M31 coincides with its center, and efforts have been made to identify a possible extended structure in the emission, whose presence in M31 is expected and would reveal the existence and distribution of cosmic rays or supposedly massive dark matter. The observed \(\gamma\)-ray emission may be explained by the hadronic and/or leptonic processes of the former (e.g., McDaniel et al., 2019; Do et al., 2021) or by the decay or annihilation of the latter (e.g., Beck & Colafrancesco, 2017; McDaniel et al., 2018).
Among the reported analyses and results, a representative analysis was provided by the _Fermi_-LAT Collaboration in Ackermann et al. (2017). Using \(>7\)-yr LAT data, they tested a list of different point and extended source models in their analysis. They found that the \(\gamma\)-ray emission of M31 was consistent with it being at the center and described with a \(0\fdg 38\)-radius uniform-brightness disk (at a 4\(\sigma\) significance level).
Now with \(\sim\)14 years of \(\gamma\)-ray data collected with LAT, we conducted an analysis to study the \(\gamma\)-ray emission of M31, and found that, rather than one extended source, the emission actually stems from two sources, one at the center and one south-east of the center. In this paper, we report the analysis and results of this study. Below, Section 2 describes our analysis of the LAT data and provides the results, and in Section 3, we discuss the results and their implications for our understanding of M31's \(\gamma\)-ray emission.
### LAT Data and Baseline Source Model
We selected 0.1-500 GeV LAT events from the updated _Fermi_ Pass 8 database in the time period from 2008-08-04 15:43:39 (UTC) to 2022-09-26 23:16:35 (UTC). The region of interest (ROI) was set to be centered at M31's catalog position with a size of 20\({}^{\circ}\)\(\times\) 20\({}^{\circ}\), and the _CLEAN_ event class was used in the analysis. As recommended by the LAT team1, we included events with zenith angles less than 90 deg so as to prevent the Earth's limb contamination, and we also excluded events with 'bad' quality flags.
Footnote 1: [http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/](http://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/)
In the _Fermi_ LAT 12-yr source catalog (4FGL-DR3; Abdollahi et al., 2022), the \(\gamma\)-ray counterpart to M31 is listed as a point source (PS) modeled with a Log-Parabola (LP) spectral form, \(dN/dE=N_{0}(E/E_{b})^{-(\alpha+\beta log(E/E_{b}))}\). We considered this PS model as a baseline one (named 1PS) for M31 and made our source model by including all sources listed in 4FGL-DR3 within 20 deg of M31. The spectral forms and parameters of the sources are provided in the catalog. Unless stated otherwise, for our analysis, spectral parameters were always set free for the sources within 5 deg of M31, while the rest were fixed at their catalog values. The spectral model gll_iem_v07.fit was used for the Galactic diffuse emission, and the spectral file iso_P8R3_CLEAN_V3_v1.txt was used for the extragalactic diffuse emission, with their normalizations both set free and other parameters fixed.
### One Point Source Analysis
We performed the standard binned likelihood analysis of the LAT data in the 0.1-500 GeV band using the 1PS source model, where the scale parameter \(E_{b}\) in the LP model for M31 was fixed to the catalog value of 913.08 MeV. For M31's \(\gamma\)-ray emission, we obtained \(\alpha=2.2\pm\)0.2, \(\beta=0.36\pm\)0.16, and 0.1-500 GeV photon flux \(F_{\rm ph}=2.7\pm 1.2\times 10^{-9}\,{\rm photon\,cm^{-2}\,s^{-1}}\), which are consistent with those given in the catalog. The Test Statistic (TS) value obtained for the source was 108, and the corresponding likelihood value \(L_{\rm 1PS}\) (\(=177565.02\)) was used for the model comparisons in Section 2.3 below. We tested the effects of setting \(E_{b}\) free, and found the results to be nearly the same, only with larger uncertainties.
Figure 1: TS maps for the region of M31 in the energy ranges of 0.1–500 GeV (_left_ panel) and 2–500 GeV (_right_ panel). The image scale of each TS map is 0.05 degree pixel\({}^{-1}\), for which a color bar is drawn to indicate the TS value range. Also in each TS map, a white ellipse is plotted to show the M31 disk/halo boundary defined in Racine (1991) and green pluses to show the 4FGL-DR3 catalog sources, which include M31 (the center one in the ellipse). _Left:_ a cyan cross and circle mark the best-fit uniform-brightness disk model given in Ackermann et al. (2017), and a black cross and circle the best-fit disk model we determined (note the black cross is fixed at the central position of M31 given in SIMBAD database; cf., Section 2.3). _Right:_ a white cross and dashed circle mark the position and 2\(\sigma\) error circle respectively we determined for the M31 central emission, and a black dashed circle marks the 2\(\sigma\) error circle for the SE source. In addition, the position of M32 is marked by a green cross.
This indicates that the fitting was not sensitive to this parameter, given the relatively low TS value of the source. As such, we fixed \(E_{b}\) at the catalog value for the following analysis.
To carefully examine the analysis results, we calculated a 0.1-500 GeV TS map for a 5\({}^{\circ}\times\) 5\({}^{\circ}\) region centered at M31, in which all the catalog sources except M31 were removed. The TS map, shown in the left panel of Figure 1, seemingly suggests that the \(\gamma\)-ray emission at the central position of M31 is extended, though slightly elongated along the south-east (SE) direction. We mark the best-fit, uniform-brightness disk (radius 0\(\fdg\)38) reported in Ackermann et al. (2017) with a cyan circle in the figure, and as shown, the disk does enclose the emission. However, when we checked the TS maps at higher energy ranges, we found that the \(\gamma\)-ray emission is in fact resolved into two individual sources. In the right panel of Figure 1, we show a similar TS map but for the energy range 2-500 GeV. As can be seen, one source is at the center of M31, and the other lies SE of the center. We ran _gtfindsrc_ in Fermitools on the 2-500 GeV data to determine the position of this SE source, and obtained R.A.=11\(\fdg\)01, Decl.=+40\(\fdg\)95 (equinox J2000.0). The 1\(\sigma\) nominal uncertainty for the position was \(\simeq\)0\(\fdg\)03.
### Verification Analysis for Two Point Sources
Since the discovery of two sources can critically change our understanding of M31's \(\gamma\)-ray emission, we conducted various analyses to verify our results. First, we re-performed the maximum likelihood analysis with 2 PSs included in the source model (named 2PS) this time. The source at the center of M31 was still set to have a LP spectral form, while the SE source at the position obtained above was modeled with a simple power law (PL). We obtained \(\alpha=2.1\pm\)0.3, \(\beta=0.27\pm\)0.14, and \(F_{\rm ph}=2.4\pm 1.3\times 10^{-9}\,{\rm photon\,cm^{-2}\,s^{-1}}\) for the center one, with TS\(\simeq\)79, and for the SE one, we obtained PL index \(\Gamma=2.1\pm\)0.2 and \(F_{\rm ph}=1.7\pm 1.1\times 10^{-9}\,{\rm photon\,cm^{-2}\,s^{-1}}\) with TS\(\simeq 35\). These results are summarized in Table 1. We compared the likelihood value \(L_{\rm 2PS}\), from the 2PS model, to \(L_{\rm 1PS}\) using the formula \(\sqrt{2(\log L_{\rm 2PS}-\log L_{\rm 1PS})}\), and found the fit improved at a 6.0\(\sigma\) significance level.
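As a rough sketch (not the actual fitting code), the significance of adding the second source follows directly from the two log-likelihood values; the second number in the example below is a placeholder chosen only to reproduce the reported \(\sim\)6.0\(\sigma\).

```python
# Sketch of the likelihood-ratio significance sqrt(2 * (logL_2PS - logL_1PS)).
import math


def added_source_significance(logL_simple, logL_extended):
    return math.sqrt(2.0 * (logL_extended - logL_simple))


# 177565.02 is the quoted L_1PS value; the +18.0 is a placeholder, not the fit.
print(added_source_significance(177565.02, 177565.02 + 18.0))   # ~6.0 (sigma)
```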
We then proceeded to compare the 2PS model with the best-fit uniform-brightness disk model reported in Ackermann et al. (2017). We included the disk (centered at R.A.=10\(\fdg\)76, Decl.=41\(\fdg\)19; note that this position has SE offsets from the central position of M31) in the source model, whose spectral form was a LP, and obtained a TS value of 152 from the likelihood analysis (the parameter values are given in Table 1). In comparing the likelihood values from this disk model and the 2PS model, the latter was found to be better at a 3.7\(\sigma\) confidence level.
Finally, we also tested a uniform-brightness disk with the center fixed at that of M31 given in the SIMBAD2 database. The radius of the disk was searched from 0\(\fdg\)00 (i.e., a PS) to 0\(\fdg\)70 with a step of 0\(\fdg\)02. Comparison of the resulting likelihood values indicated that the best fit was obtained when the radius was 0\(\fdg\)32, while the TS value was 138. When comparing this best fit to the 2PS model, the likelihood values indicated that the latter was better at a 5.0\(\sigma\) significance level.
Footnote 2: [http://simbad.u-strasbg.fr/simbad/](http://simbad.u-strasbg.fr/simbad/)
Given the clear indications of two sources in the 2-500 GeV TS map, verified further with the additional analyses, we concluded that there are two \(\gamma\)-ray sources in the direction of M31. We then proceeded to check whether the central source at M31 would be extended or not. Once again, we included the SE one as a PS in the source model and set a uniform-brightness disk for the central one, with the radius varied in the same
Figure 2: \(\gamma\)-ray spectra of the central M31 and SE sources (black diamonds and red dots respectively), along which the flux upper limits are shown as the open symbols. The best-fit models for the two sources, the LP for the M31 central one and PL for the SE one, are shown as the black curve and red line respectively. For comparison, the model spectrum of our Galactic center source (4FGL J1745.6\(-\)2859) given in Abdollahi et al. (2022) is scaled by the distance ratio (8 kpc/780 kpc)\({}^{2}\) and shown as a blue dashed curve.
manner as the above, and then performed the likelihood analysis. Going over the results, no significant extension was found for the central source. Therefore this \(\gamma\)-ray source at M31's center is a PS based on the _Fermi_ LAT data. Furthermore by running gtfindsrc to the 2-500 GeV data, we also obtained its position: R.A.=10\(\fdg\)73, Decl.=+41\(\fdg\)32 (equinox J2000.0), with a 2\(\sigma\) error radius of 0\(\fdg\)16 (shown in the right panel of Figure 1). This position is consistent with that of the center of M31.
### Spectral Analysis
We extracted the \(\gamma\)-ray spectra of the two sources by performing the maximum likelihood analysis of the LAT data in 12 evenly divided energy bands in logarithm from 0.1-500 GeV. In the extraction, the spectral normalizations of sources within 5 deg of M31's catalog position were set as free parameters, while all other source parameters were fixed at values obtained from the above likelihood analysis. A PL spectral form was set for the two sources, with \(\Gamma\) fixed to 2. For the obtained spectral data points, we kept those with TS\(\geq\)4 and derived 95% flux upper limits for the rest. Figure 2 shows the spectra, and Table 2 provides the flux (or upper limit) values and their respective TS values.
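For reference, the band edges used for the spectral points can be generated as in the following trivial sketch, assuming 12 logarithmically spaced bins between 0.1 and 500 GeV.

```python
# 13 edges -> 12 logarithmically spaced energy bins between 0.1 and 500 GeV.
import numpy as np

edges = np.logspace(np.log10(0.1), np.log10(500.0), 13)
bands = list(zip(edges[:-1], edges[1:]))   # (E_low, E_high) for each spectral point
```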
The respective best-fit LP and PL spectral models for M31's central and SE sources, obtained from the above 2PS-model analysis, are also plotted in Figure 2. As shown, the best-fit models adequately describe the \(\gamma\)-ray spectra. We also tested other often-used spectral models for each of the two sources, for example a PL and a PL with an exponential cutoff (PLEC) for M31's central one. We did not find that any other model was better. The LP and PLEC models provided equally good fits to the emission of M31's central source, and they were better than a PL at a 2.7\(\sigma\) significance level. All three models provided a nearly equally good fit for the emission of the SE source, which was likely due to the low TS value (\(\simeq\)35) of the source.
### Variability Analysis
As a check, we searched for long-term variability of the two sources in 0.1-500 GeV by calculating the Variability Index TS\({}_{var}\) (Nolan et al., 2012). We extracted light curves with 87 time bins, each bin consisting of 60 days of data. Following the procedure introduced in Nolan et al. (2012), if the flux of a source is constant, TS\({}_{var}\) would be distributed as \(\chi^{2}\) with 86 degrees of freedom; variable sources would be identified when TS\({}_{var}\) is larger than 119.4 (at a 99% confidence level). The computed TS\({}_{var}\) values for the M31 central and SE sources were 73.1 and 76.4 respectively, indicating that the two sources did not show significant long-term flux variations.
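The 99%-confidence threshold quoted above is simply the corresponding \(\chi^{2}\) quantile; a quick check (sketch):

```python
# Sketch: the variability threshold is the 99% chi-square quantile for 86 dof.
from scipy.stats import chi2

threshold = chi2.ppf(0.99, df=86)                      # ~119.4
print(threshold, 73.1 > threshold, 76.4 > threshold)   # neither source is variable
```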
## 3 Discussion
After analyzing the \(\sim\)14-yr _Fermi_ LAT data for the M31 region, we obtained results different from those of previous reports. The \(\gamma\)-ray emission of M31 contains two sources, one at the center and one with offsets of 0\(\fdg\)33 in R.A. and \(-0\fdg\)32 in Decl. from the center. As the central one is brighter, its emission is still described with a LP model consistent with the previous ones (including that given in the 4FGL-DR3; Abdollahi et al., 2022). The emission of the SE one is described with a PL model, and one notable feature is that it mostly contains high-energy photons in \(\sim\)3.5-30 GeV, including \(\sim\)20 GeV photons that the central source does not have (cf. Figure 2 and Table 2). These findings thus drastically change the perception of M31's \(\gamma\)-ray emission.
As the central emission is consistent with being a PS, limiting its origin to M31's central region, the source of the emission should be located within \(\sim\)2.2 kpc of the center (at a source distance of 780 kpc) by considering the 2\(\sigma\) error radius of 0\(\fdg\)16. The origins involving some degree of extended distributions, such as cosmic rays (McDaniel et al., 2019; Do et al., 2021) or dark matter (e.g., Beck and Colafrancesco, 2017; McDaniel et al., 2018) in M31 should be revisited (also see Ackermann et al., 2017 and references therein). Another possible origin discussed is the old population of unresolved objects in the center, such as millisecond pulsars (MSPs; e.g., Ackermann et al., 2017; Eckner et al., 2018), since one competing scenario for the excess \(\gamma\)-ray emission of our own Galactic center is that the emission arises from MSPs (Brandt and Kocsis, 2015). We note that given the 0.1-500 GeV flux of \(1.9\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\) obtained from the 2PS model for the central source, its \(\gamma\)-ray luminosity is \(\simeq 1.4\times 10^{38}\) erg s\({}^{-1}\). This luminosity is much larger than those of known \(\gamma\)-ray sources in our Galaxy. For example, the luminosity is \(\sim\)50 times that of our Galactic center emission (cf., Figure 2), which is the most luminous one at \(\gamma\)-rays in our Galaxy (Cafardo et al., 2021; Abdollahi et al., 2022). Thus in order to explain the central emission of M31 with those of a population of MSPs, the number of MSPs would be required to be at least 15000 (Xing et al., in preparation, where the estimation method is fully described in Wu et al., 2022). Whether the central region of M31 can host such a large number of MSPs is uncertain (Fragione et al., 2019).
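The luminosity estimate quoted above follows from the flux and distance alone; a minimal check (sketch, assuming isotropic emission at 780 kpc):

```python
# Sketch: L = 4*pi*d^2*F for the quoted 0.1-500 GeV energy fluxes at 780 kpc.
import math

KPC_IN_CM = 3.0857e21


def luminosity(flux_cgs, distance_kpc=780.0):
    d = distance_kpc * KPC_IN_CM
    return 4.0 * math.pi * d * d * flux_cgs


print(luminosity(1.9e-12))   # ~1.4e38 erg/s, the central source
print(luminosity(1.5e-12))   # ~1.1e38 erg/s, close to the value quoted for the SE source below
```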
Though the SE source is significantly away from the center of M31, it still appears to be within the
M31's galactic region (cf., Figure 1). Given that 6659 sources have been detected with LAT over the whole sky (Abdollahi et al., 2022), the average source density is \(\sim 0.16\) deg\({}^{-2}\). Considering a circle of 0\(\fdg\)4 radius (the distance between the center of M31 and the SE source), the chance of finding two or more sources in such a circular region by coincidence is \(\sim 0.4\%\). Thus it is quite likely that the SE source is associated with M31. Its 0.1-500 GeV flux is \(\simeq 1.5\times 10^{-12}\) erg s\({}^{-1}\) cm\({}^{-2}\), implying the luminosity would be \(\sim 1.0\times 10^{38}\) erg s\({}^{-1}\) at M31's distance. This luminosity is still much larger than that of any Galactic source, which renders it difficult to identify its possible source types via simple property comparisons between the SE source and the luminous Galactic sources. We searched for sources within the SE source's 2\(\sigma\) error circle in other bands. In the optical band, there are two globular clusters (GCs) given in the SIMBAD database; however, associating the \(\gamma\)-ray emission with the GCs is difficult as a large number of MSPs would be required to be contained in them (e.g., Wu et al., 2022). In the X-ray band, Stiele et al. (2011) reported the results from the deep _XMM-Newton_ survey of M31. In their results, there were 12 X-ray sources within the error circle; among them, three were classified as foreground star candidates, one as a galaxy candidate, and eight had unknown classes. The last eight sources were generally faint, having X-ray fluxes of 2-7\(\times 10^{-15}\) erg s\({}^{-1}\) cm\({}^{-2}\). It is not clear whether one of them, located away from M31's center with projected distances of \(\sim 6\) kpc, could be the counterpart to the \(\gamma\)-ray source.
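The chance-coincidence estimate above can be reproduced, at least approximately, by treating the catalog sources as a Poisson field with the quoted mean density (a back-of-the-envelope sketch, not the authors' exact calculation):

```python
# Sketch: probability of >= 2 unrelated 4FGL-DR3 sources in a 0.4-deg circle,
# assuming a Poisson field with the quoted all-sky density.
import math

density = 6659 / 41253.0                 # sources per deg^2 (~0.16)
lam = density * math.pi * 0.4**2         # expected number in the circle
p_two_or_more = 1.0 - math.exp(-lam) * (1.0 + lam)
print(p_two_or_more)                     # ~3e-3, of the same order as the quoted ~0.4%
```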
Alternatively, if we consider the (less likely) possibility that the SE source is a background extra-galactic one, we first note that while the SE source's position is close, it does not coincide with that of the galaxy M32 (cf., right panel of Figure 1). Since the dominant extra-galactic sources in the \(\gamma\)-ray sky are Active Galactic Nuclei (AGN; Abdollahi et al., 2022) with significant radio emission (i.e., having radio jets; e.g., de Menezes et al., 2020), we checked for radio sources within the error circle and found four listed in either the SIMBAD database (Bystedt et al., 1984; Braun, 1990) or the Very Large Array Sky Survey Epoch 1 catalog (Gordon et al., 2020). Among them, one is the galaxy candidate previously mentioned, and the other three do not have obvious X-ray or optical counterparts. As given in Stiele et al. (2011), the X-ray position of this galaxy candidate is R.A.=10\(\fdg\)985375, Decl.=40\(\fdg\)9815833 (equinox J2000.0), with a 3\(\sigma\) nominal uncertainty of 3\(\farcs\)74, but looking into the determined radio or optical positions for this galaxy, we find that the X-ray error circle does not enclose the positions. The properties of this source remain to be further investigated. In conclusion, while the SE source exhibits a hard spectrum and constant \(\gamma\)-ray emission over the past 14 years, its nature remains uncertain.
This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
This work is supported by the Basic Research Program of Yunnan Province (No. 202201AS070005), the National Natural Science Foundation of China (12273033), and the Original Innovation Program of the Chinese Academy of Sciences (E085021002).
|
2310.06002 | LCOT: Linear circular optimal transport | The optimal transport problem for measures supported on non-Euclidean spaces
has recently gained ample interest in diverse applications involving
representation learning. In this paper, we focus on circular probability
measures, i.e., probability measures supported on the unit circle, and
introduce a new computationally efficient metric for these measures, denoted as
Linear Circular Optimal Transport (LCOT). The proposed metric comes with an
explicit linear embedding that allows one to apply Machine Learning (ML)
algorithms to the embedded measures and seamlessly modify the underlying metric
for the ML algorithm to LCOT. We show that the proposed metric is rooted in the
Circular Optimal Transport (COT) and can be considered the linearization of the
COT metric with respect to a fixed reference measure. We provide a theoretical
analysis of the proposed metric and derive the computational complexities for
pairwise comparison of circular probability measures. Lastly, through a set of
numerical experiments, we demonstrate the benefits of LCOT in learning
representations of circular measures. | Rocio Diaz Martin, Ivan Medri, Yikun Bai, Xinran Liu, Kangbai Yan, Gustavo K. Rohde, Soheil Kolouri | 2023-10-09T14:37:56Z | http://arxiv.org/abs/2310.06002v1 | # LCOT: Linear circular optimal transport
###### Abstract
The optimal transport problem for measures supported on non-Euclidean spaces has recently gained ample interest in diverse applications involving representation learning. In this paper, we focus on circular probability measures, i.e., probability measures supported on the unit circle, and introduce a new computationally efficient metric for these measures, denoted as Linear Circular Optimal Transport (LCOT). The proposed metric comes with an explicit linear embedding that allows one to apply Machine Learning (ML) algorithms to the embedded measures and seamlessly modify the underlying metric for the ML algorithm to LCOT. We show that the proposed metric is rooted in the Circular Optimal Transport (COT) and can be considered the linearization of the COT metric with respect to a fixed reference measure. We provide a theoretical analysis of the proposed metric and derive the computational complexities for pairwise comparison of circular probability measures. Lastly, through a set of numerical experiments, we demonstrate the benefits of LCOT in learning representations of circular measures.
## 1 Introduction
Optimal transport (OT) [39, 33] is a mathematical framework that seeks the most efficient way of transforming one probability measure into another. The OT framework leads to a geometrically intuitive and robust metric on the set of probability measures, referred to as the Wasserstein distance. It has become an increasingly popular tool in machine learning, data analysis, and computer vision [20, 19]. OT's applications encompass generative modeling [4, 37, 21], domain adaptation [11, 12], transfer learning [3, 27], supervised learning [14], clustering [16], image and pointcloud registration [15, 5, 25], and even inverse problems [31], among others. Recently, there has been an increasing interest in OT for measures supported on manifolds [7, 36]. This surging interest is primarily due to: 1) real-world data is often supported on a low-dimensional manifold embedded in larger-dimensional Euclidean spaces, and 2) many applications inherently involve non-Euclidean geometry, e.g., geophysical data or cortical signals in the brain.
In this paper, we are interested in efficiently comparing probability measures supported on the unit circle, aka circular probability measures, using the optimal transport framework. Such probability measures, with their densities often represented as circular/rose histograms, are prevalent in many applications, from computer vision and signal processing domains to geology and astronomy. For instance, in classic computer vision, the color content of an image can be accounted for by its hue in the HSV space, leading to one-dimensional circular histograms. Additionally, local image/shape descriptors are often represented via circular histograms, as evidenced in classic computer vision papers like SIFT [28] and ShapeContext [6]. In structural geology, the orientation of rock formations, such as bedding planes, fault lines, and joint sets, can be represented via circular histograms [38]. In signal processing, circular histograms are commonly used to represent the phase distribution of periodic signals [26]. Additionally, a periodic signal can be normalized and represented as a circular probability density function (PDF).
Notably, a large body of literature exists on circular statistics [18]. More specific to our work, however, are the seminal works of [13] and [34], which provide a thorough study of the OT problem and transportation distances on the circle (see also [8]). OT on circles has also been recently revisited in various papers [17, 7], further highlighting the topic's timeliness. Unlike OT on the real line, generally, the OT problem between probability measures defined on the circle does not have a closed-form solution. This stems from the intrinsic metric on the circle and the fact that there are two paths between any pair of points on a circle (i.e., clockwise and counter-clockwise). Interestingly, however, when one of the probability measures is the Lebesgue measure, i.e., the uniform distribution, the 2-Wasserstein distance on the circle has a closed-form solution, which we will discuss in the Background section.
We present the Linear Circular OT (LCOT), a new transport-based distance for circular probability measures. By leveraging the closed-form solution of the circular 2-Wasserstein distance between each distribution and the uniform distribution on the circle, our method sidesteps the need for optimization. Concisely, we determine the
Monge maps that push the uniform distribution to each input measure using the closed-form solution, then set the distance between the input measures based on the disparities between their respective Monge maps. Our approach draws parallels with the Linear Optimal Transport (LOT) framework proposed by [40] and can be seen as an extension of the cumulative distribution transform (CDT) presented by [32] to circular probability measures (see also, [2, 1]). The idea of linearized (unbalanced) optimal transport was also studied recently in various works [9, 30, 36, 10]. From a geometric perspective, we provide explicit logarithmic and exponential maps between the space of probability measures on the unit circle and the tangent space at a reference measure (e.g., the Lebesgue measure) [40, 9, 36]. Then, we define our distance in this tangent space, giving rise to the terminology 'Linear' Circular OT. The logarithmic map provides a linear embedding for the LCOT distance, while the exponential map inverts this embedding. We provide a theoretical analysis of the proposed metric, LCOT, and demonstrate its utility in various problems.
**Contributions.** Our specific contributions in this paper include 1) proposing a computationally efficient metric for circular probability measures, 2) providing a theoretical analysis of the proposed metric, including its computational complexity for pairwise comparison of a set of circular measures, and 3) demonstrating the robustness of the proposed metric in manifold learning, measure interpolation, and clustering/classification of probability measures.
## 2 Background
### Circle Space
The unit circle \(\mathbb{S}^{1}\) can be defined as the quotient space
\[\mathbb{S}^{1}=\mathbb{R}/\mathbb{Z}=\left\{\{x+n:\,n\in\mathbb{Z}\}:\,x\in[0,1)\right\}.\]
The above definition is equivalent to \([0,1]\) under the identification \(0=1\). For the sake of simplicity in this article, we treat them as indistinguishable. Furthermore, we adopt a parametrization of the circle as \([0,1)\), designating the North Pole as \(0\) and adopting a clockwise orientation. This will serve as our _canonical_ parametrization.
Let \(|\cdot|\) denote the absolute value on \(\mathbb{R}\). With the aim of avoiding any confusion, when necessary, we will denote it by \(|\cdot|_{\mathbb{R}}\). Then, a metric on \(\mathbb{S}^{1}\) can be defined as
\[|x-y|_{\mathbb{S}^{1}}:=\min\{|x-y|_{\mathbb{R}},1-|x-y|_{\mathbb{R}}\},\qquad x,y\in[0,1)\]
or, equivalently, as
\[|x-y|_{\mathbb{S}^{1}}:=\min_{k\in\mathbb{Z}}|x-y+k|_{\mathbb{R}},\]
where for the second formula \(x,y\in\mathbb{R}\) are understood as representatives of two classes of equivalence in \(\mathbb{R}/\mathbb{Z}\), but these two representatives \(x,y\) do not need to belong to \([0,1)\). It turns out that such a metric defines a geodesic distance: It is the smaller of the two
arc lengths between the points \(x\), \(y\) along the circumference (cf. [18]; here we parametrize angles between \(0\) and \(1\) instead of between \(0\) and \(2\pi\)). Besides, the circle \(\mathbb{S}^{1}\) can be endowed with a group structure. Indeed, as the quotient space \(\mathbb{R}/\mathbb{Z}\), it inherits the addition from \(\mathbb{R}\) modulo \(\mathbb{Z}\). Equivalently, for any \(x,y\in[0,1)\), we can define the operations \(+,-\) as
\[(x,y)\mapsto\begin{cases}x\pm y,&\text{ if }x\pm y\in[0,1)\\ x\pm y\mp 1,&\text{ otherwise.}\end{cases} \tag{1}\]
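For later reference, both the geodesic distance and the group operation admit one-line implementations once points are represented in the canonical parametrization \([0,1)\); a small sketch:

```python
# Sketch: geodesic distance |x - y|_{S^1} and addition on S^1 = R/Z,
# with points given in the canonical parametrization [0, 1).
def circ_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)


def circ_add(x, y):
    return (x + y) % 1.0


print(circ_dist(0.1, 0.9))   # ~0.2 (the shorter arc passes through the North Pole 0)
print(circ_add(0.7, 0.6))    # ~0.3 (i.e., 1.3 mod 1)
```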
### Distributions on the circle
Regarded as a set, \(\mathbb{S}^{1}\) can be identified with \([0,1)\). Thus, signals over \(\mathbb{S}^{1}\) can be interpreted as \(1\)-periodic functions on \(\mathbb{R}\). More generally, every measure \(\mu\in\mathcal{P}(\mathbb{S}^{1})\) can be regarded as a measure on \(\mathbb{R}\) by
\[\mu(A+n):=\mu(A),\qquad\text{ for every }A\subseteq[0,1)\text{ Borel subset, and }\,n\in\mathbb{Z}. \tag{2}\]
Then, its _cumulative distribution function_, denoted by \(F_{\mu}\), is defined as
\[F_{\mu}(y):=\mu([0,y))=\int_{0}^{y}d\mu,\qquad\forall y\in[0,1) \tag{3}\]
and can be extended to a function on \(\mathbb{R}\) by
\[F_{\mu}(y+n):=F_{\mu}(y)+n,\qquad\forall y\in[0,1),\,n\in\mathbb{Z}. \tag{4}\]
Figure 2 shows the concept of \(F_{\mu}\) and its extension to \(\mathbb{R}\).
In the rest of this article, we do not distinguish between the definition of measures on \(\mathbb{S}^{1}\) or their periodic extensions into \(\mathbb{R}\), as well as between their CDFs or their extended CDFs into \(\mathbb{R}\).
**Definition 2.1** (Cumulative distribution function with respect to a reference point).: Let \(\mu\in\mathcal{P}(\mathbb{S}^{1})\), and consider a reference point \(x_{0}\in\mathbb{S}^{1}\). Assume that \(\mathbb{S}^{1}\) is identified as
Figure 1: Visualization of densities (blue and orange) on \(\mathbb{S}^{1}\) and after unrolling them to \([0,1)\) by considering a cutting point \(x_{0}\). The blue density is the uniform distribution on \(\mathbb{S}^{1}\), represented as having height \(1\) over the unit circle in black.
\([0,1)\) according to our canonical parametrization. By abuse of notation, also denote by \(x_{0}\) the point in \([0,1)\) that corresponds to the given reference point when considering the canonical parametrization. We define
\[F_{\mu,x_{0}}(y):=F_{\mu}(x_{0}+y)-F_{\mu}(x_{0}).\]
The reference point \(x_{0}\) can be considered as the "origin" for parametrizing the circle as \([0,1)\) starting from \(x_{0}\). That is, \(x_{0}\) will correspond to \(0\), and from there, we move according to the clockwise orientation. Thus, we can think of \(x_{0}\) in the above definition as a "cutting point": A point where we cut \(\mathbb{S}^{1}\) into a line by \(x_{0}\) and so we can unroll PDFs and CDFs over the circle into \(\mathbb{R}\). See Figures 1 and 2.
Besides, note that \(F_{\mu,x_{0}}(0)=0\) and \(F_{\mu,x_{0}}(1)=1\) by the \(1\)-periodicity of \(\mu\). This is to emphasize that in the new system of coordinates, or in the new parametrization of \(\mathbb{S}^{1}\) as \([0,1)\) starting from \(x_{0}\), the new origin \(x_{0}\) plays the role of \(0\). Finally, notice that if \(x_{0}\) is the North Pole, which corresponds to \(0\) in the canonical parametrization of the circle, then \(F_{\mu,x_{0}}=F_{\mu}\).
**Definition 2.2**.: The quantile function \(F_{\mu,x_{0}}^{-1}:[0,1]\to[0,1]\) is defined as
\[F_{\mu,x_{0}}^{-1}(y):=\inf\{x:\,F_{\mu,x_{0}}(x)>y\}.\]
### Optimal transport on the circle
#### 2.3.1 Problem setup
Given \(\mu,\nu\in\mathcal{P}(\mathbb{S}^{1})\), let \(c(x,y):=h(|x-y|_{\mathbb{S}^{1}})\) be the cost of transporting a unit mass from \(x\) to \(y\) on the circle, where \(h:\mathbb{R}\to\mathbb{R}_{+}\) is a convex increasing function. The Circular Optimal Transport cost between \(\mu\) and \(\nu\) is defined as
\[COT_{h}(\mu,\nu):=\inf_{\gamma\in\Gamma(\mu,\nu)}\int_{\mathbb{S}^{1}\times \mathbb{S}^{1}}c(x,y)\,d\gamma(x,y), \tag{5}\]
Figure 2: Left: The density of a probability measure, \(\mu\). Middle: visualization of the periodic extension to \(\mathbb{R}\) of a CDF, \(F_{\mu}\), of measure \(\mu\) on \([0,1)\sim\mathbb{S}^{1}\). Right: Visualization of \(F_{\mu,x_{0}}\) given in Definition 2.1, where the parameterization of the circle is changed; now, the origin \(0\) is the point \(x_{0}\).
where \(\Gamma(\mu,\nu)\) is the set of all transport plans from \(\mu\) to \(\nu\); that is, \(\gamma\in\Gamma(\mu,\nu)\) means that \(\gamma\in\mathcal{P}(\mathbb{S}^{1}\times \mathbb{S}^{1})\) has first and second marginals \(\mu\) and \(\nu\), respectively. There always exists a minimizer \(\gamma^{*}\) of equation 5, and it is called a Kantorovich optimal plan (see, for example, [35, Th. 1.4]).
When \(h(x)=|x|^{p}\), for \(1\leq p<\infty\), we denote \(COT_{h}(\cdot,\cdot)=COT_{p}(\cdot,\cdot)\), and \(COT_{p}(\cdot,\cdot)^{1/p}\) defines a distance on \(\mathcal{P}(\mathbb{S}^{1})\). In general,
\[COT_{h}(\mu,\nu)\leq\inf_{M:\,M_{\#}\mu=\nu}\int_{\mathbb{S}^{1}}h(|M(x)-x|_{ \mathbb{S}^{1}})\,d\mu(x), \tag{6}\]
and a minimizer \(M^{*}:\mathbb{S}^{1}\rightarrow\mathbb{S}^{1}\) of the right-hand side of equation 6, among all maps \(M\) that push forward \(\mu\) to \(\nu\)1, might not exist. In this work, we will consider the cases where a minimizer \(M^{*}\) does exist, for example, when the reference measure \(\mu\) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{S}^{1}\) (see [29, 35]). In these scenarios, such a map \(M^{*}\) is called an optimal transportation map or a Monge map. Moreover, as \(\mu,\nu\in\mathcal{P}(\mathbb{S}^{1})\) can be regarded as measures on \(\mathbb{R}\) according to equation 2, we can work with transportation maps \(M:\mathbb{R}\rightarrow\mathbb{R}\) that are \(1\)-periodic functions satisfying \(M_{\#}\mu=\nu\).
Footnote 1: The pushforward \(M_{\#}\mu\) is defined by the change of variables \(\int\varphi(y)\,dM_{\#}\mu(y):=\int\varphi(M(x))\,d\mu(x)\), for every continuous function \(\varphi:\mathbb{S}^{1}\rightarrow\mathbb{C}\).
**Proposition 2.3**.: _Two equivalent formulations of \(COT_{h}\) are the following:_
\[COT_{h}(\mu,\nu) =\inf_{x_{0}\in[0,1)}\int_{0}^{1}h(|F_{\mu,x_{0}}^{-1}(x)-F_{\nu, x_{0}}^{-1}(x)|_{\mathbb{R}})\,dx \tag{7}\] \[=\inf_{\alpha\in\mathbb{R}}\int_{0}^{1}h(|F_{\mu}^{-1}(x)-F_{\nu }^{-1}(x-\alpha)|_{\mathbb{R}})\,dx. \tag{8}\]
_When there exist minimizers \(x_{cut}\) and \(\alpha_{\mu,\nu}\) of equation 7 and equation 8, respectively, the relation between them is given by_
\[\alpha_{\mu,\nu}=F_{\mu}(x_{cut})-F_{\nu}(x_{cut}). \tag{9}\]
_Moreover, if \(\mu=Unif(\mathbb{S}^{1})\) and \(h(x)=|x|^{2}\), it can be verified that \(\alpha_{\mu,\nu}\) is the antipodal of \(\mathbb{E}(\nu)\), i.e.,_
\[\alpha_{\mu,\nu}=x_{cut}-F_{\nu}(x_{cut})=\mathbb{E}(\nu)-1/2. \tag{10}\]
The proof of equation 8 in Proposition 2.3 is provided in [13] for the optimal coupling for any pair of probability measures on \(\mathbb{S}^{1}\). For the particular and enlightening case of discrete probability measures on \(\mathbb{S}^{1}\), we refer the reader to [34]. In that article, equation 7 is introduced. Finally, equation 10 is given for example in [7, Proposition 1].
Proposition 2.3 allows us to see the optimization problem of transporting measures supported on the circle as an optimization problem on the real line by looking for the best "cutting point" so that the circle can be unrolled into the real line by \(1\)-periodicity.
_Remark 2.4_.: The minimizer of equation 8 is unique (see equation 9), but there can be multiple minimizers for equation 7 (see Figure 8 in Appendix A.2). However, when a minimizer \(x_{cut}\) of equation 7 exists, it will lead to the optimal transportation displacement on a circular domain (see Section 2.3.2 below).
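To make equations 7-8 concrete, the sketch below evaluates \(COT_{p}\) for two circular densities discretized on a uniform grid of \([0,1)\) by brute-force scanning candidate shifts \(\alpha\) and using the extended quantile functions. This is illustrative only: it assumes strictly positive densities (so that the CDFs are invertible), approximates the integral by a Riemann sum, and ignores the more efficient solvers discussed in the references above.

```python
# Sketch: brute-force evaluation of COT_p via eq. 8 for discretized densities
# on a uniform grid of [0,1). Illustrative only; assumes strictly positive pdfs.
import numpy as np


def extended_quantile(pdf, q):
    """F^{-1}(q), using the periodic extension F^{-1}(y + k) = F^{-1}(y) + k."""
    pdf = np.asarray(pdf, dtype=float)
    n = len(pdf)
    cdf = np.concatenate(([0.0], np.cumsum(pdf / pdf.sum())))   # F at bin edges
    edges = np.arange(n + 1) / n
    k = np.floor(q)
    return np.interp(q - k, cdf, edges) + k                      # piecewise-linear inverse


def cot(pdf1, pdf2, p=2, n_alpha=401, n_t=1000):
    t = (np.arange(n_t) + 0.5) / n_t
    q1 = extended_quantile(pdf1, t)
    best = np.inf
    for alpha in np.linspace(-1.0, 1.0, n_alpha):                # candidate shifts
        diff = np.abs(q1 - extended_quantile(pdf2, t - alpha))
        best = min(best, float(np.mean(diff ** p)))              # integrand of eq. 8
    return best                                                  # COT_p cost (not the 1/p root)
```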
#### 2.3.2 A closed-form formula for the optimal circular displacement
Let \(x_{cut}\) be a minimizer of equation 7, that is,
\[COT_{h}(\mu,\nu)=\int_{0}^{1}h(|F^{-1}_{\mu,x_{cut}}(x)-F^{-1}_{\nu,x_{cut}}(x)|_ {\mathbb{R}})\,dx. \tag{11}\]
From equation 11, one can now emulate the Optimal Transport Theory on the real line (see, for e.g., [35]): The point \(x_{cut}\) provides a reference where one can "cut" the circle. Subsequently, computing the optimal transport between \(\mu\) and \(\nu\) boils down to solving an optimal transport problem between two distributions on the real line.
We consider the parametrization of \(\mathbb{S}^{1}\) as \([0,1)\) by setting \(x_{cut}\) as the origin and moving along the clockwise orientation. Let us use the notation \(\widetilde{x}\in[0,1)\) for the points given by such parametrization, and the notation \(x\in[0,1)\) for the canonical parametrization. That is, the change of coordinates from the two parametrizations is given by \(x=\widetilde{x}+x_{cut}\). Then, if \(\mu\) does not give mass to atoms, by equation 11 and the classical theory of Optimal Transport on the real line, the optimal transport map (Monge map) that takes a point \(\widetilde{x}\) to a point \(\widetilde{y}\) is given by
\[F^{-1}_{\nu,x_{cut}}\circ F_{\mu,x_{cut}}(\widetilde{x})=\widetilde{y} \tag{12}\]
That is, equation 12 defines a circular optimal transportation map from \(\mu\) to \(\nu\) written in the parametrization that establishes \(x_{cut}\) as the "origin." If we want to refer everything to the original labeling of the circle, that is, if we want to write equation 12 with respect to the canonical parametrization, we need to change coordinates
\[\begin{cases}\widetilde{x}=&x-x_{cut}\\ \widetilde{y}=&y-x_{cut}\end{cases}. \tag{13}\]
Therefore, a closed-form formula for an optimal circular transportation map in the canonical coordinates is given by
\[M^{\nu}_{\mu}(x):=F^{-1}_{\nu,x_{cut}}\circ F_{\mu,x_{cut}}(x-x_{cut})+x_{cut }=y,\qquad x\in[0,1), \tag{14}\]
and the corresponding optimal circular transport displacement that takes \(x\) to \(y\) is
\[M^{\nu}_{\mu}(x)-x=F^{-1}_{\nu,x_{cut}}\circ F_{\mu,x_{cut}}(x-x_{cut})-(x-x_ {cut}),\qquad x\in[0,1). \tag{15}\]
In summary, we condense the preceding discussion in the following result. The proof is provided in Appendix A.1. While the result builds upon prior work, drawing from [7, 34, 35], it offers an explicit formula for the optimal Monge map, a detail previously lacking in the literature.
**Theorem 2.5**.: _Let \(\mu,\nu\in\mathcal{P}(\mathbb{S}^{1})\). Assume that \(\mu\) is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{S}^{1}\) (that is, it does not give mass to atoms)._
1. _If_ \(x_{cut}\) _is a minimizer of equation_ 7_, then equation_ 14 _defines an optimal circular transportation map. (We will use the notation_ \(M_{\mu}^{\nu}\) _for the Monge map from_ \(\mu\) _to_ \(\nu\)_.)_
2. _If_ \(\alpha_{\mu,\nu}\) _minimizes equation_ 8_, then_ \[M_{\mu}^{\nu}(x)=F_{\nu}^{-1}(F_{\mu}(x)-\alpha_{\mu,\nu})\] (16)
3. _If_ \(x_{0},x_{1}\) _are two minimizers of equation_ 7_, then_ \[F_{\nu,x_{0}}^{-1}\circ F_{\mu,x_{0}}(x-x_{0})+x_{0}=F_{\nu,x_{1}}^{-1}\circ F_ {\mu,x_{1}}(x-x_{1})+x_{1}\qquad\forall\,x\in[0,1).\]
4. _The optimal map defined by the formula equation_ 14 _is unique. (The uniqueness is as functions on_ \(\mathbb{S}^{1}\)_, or as functions on_ \(\mathbb{R}\) _up to modulo_ \(\mathbb{Z}\)_)._
5. _If also_ \(\nu\) _does not give mass to atoms, then_ \((M_{\mu}^{\nu})^{-1}=M_{\nu}^{\mu}\)_._
Having established the necessary background, we are now poised to introduce our proposed metric. In the subsequent section, we present the Linear Circular Optimal Transport (LCOT) metric.
## 3 Method
### Linear Circular Optimal Transport Embedding (LCOT)
Following in the footsteps of [40], and starting from the COT framework, we will define an embedding for circular measures by computing the optimal displacement from a fixed reference measure. Then, the \(L^{p}\)-distance on the embedding space defines a new distance between circular measures (Theorem 3.6 below).
**Definition 3.1** (LCOT Embedding).: For a fixed reference measure \(\mu\in\mathcal{P}(\mathbb{S}^{1})\) that is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{S}^{1}\), we define the _Linear Circular Optimal Transport (LCOT) Embedding_ of a target measure \(\nu\in\mathcal{P}(\mathbb{S}^{1})\) with respect to the cost \(COT_{h}(\cdot,\cdot)\), for a convex increasing function \(h:\mathbb{R}\to\mathbb{R}_{+}\), by
\[\widehat{\nu}^{\mu,h}(x):=F_{\nu,x_{cut}}^{-1}(F_{\mu}(x-x_{cut}))-(x-x_{cut} )=F_{\nu}^{-1}(F_{\mu}(x)-\alpha_{\mu,\nu})-x,\quad x\in[0,1), \tag{17}\]
where \(x_{cut}\) is any minimizer of equation 7 and \(\alpha_{\mu,\nu}\) is the minimizer of equation 8.
The LCOT-Embedding corresponds to the optimal (circular) displacement that comes from the problem of transporting the reference measure \(\mu\) to the given target measure \(\nu\) with respect to a general cost \(COT_{h}(\cdot,\cdot)\) (see equation 16 from Theorem 2.5 and equation 15).
**Definition 3.2** (LCOT distance).: Under the settings of Definition 3.1, we define the LCOT-discrepancy by
\[LCOT_{\mu,h}(\nu_{1},\nu_{2}):=\int_{0}^{1}h\left(\min_{k\in\mathbb{Z}}\{| \widehat{\nu_{1}}^{\mu,h}(t)-\widehat{\nu_{2}}^{\mu,h}(t)+k|_{\mathbb{R}}\} \right)\,d\mu(t),\quad\forall\nu_{1},\nu_{2}\in\mathcal{P}(\mathbb{S}^{1}).\]
In particular, when \(h(\cdot)=|\cdot|^{p}\), for \(1\leq p<\infty\), we define
\[LCOT_{\mu,p}(\nu_{1},\nu_{2}) :=\|\widehat{\nu_{1}}^{\mu,h}-\widehat{\nu_{2}}^{\mu,h}\|_{L^{p}( \mathbb{S}^{1},d\mu)}^{p}\] \[=\int_{0}^{1}\left(\min_{k\in\mathbb{Z}}\{|\widehat{\nu_{1}}^{\mu,h}(t)-\widehat{\nu_{2}}^{\mu,h}(t)+k|_{\mathbb{R}}\}\right)^{p}\,d\mu(t),\]
where
\[L^{p}(\mathbb{S}^{1},d\mu):=\{f:\mathbb{S}^{1}\to\mathbb{R}\;|\quad\|f\|_{L^{ p}(\mathbb{S}^{1},d\mu)}:=\left(\int_{\mathbb{S}^{1}}|f(t)|_{\mathbb{S}^{1}}^{p }\,d\mu(t)\right)^{1/p}<\infty\}. \tag{18}\]
If \(\mu=Unif(\mathbb{S}^{1})\), we use the notation \(L^{p}(\mathbb{S}^{1}):=L^{p}(\mathbb{S}^{1},d\mu)\).
The embedding \(\nu\mapsto\widehat{\nu}\) as outlined by equation 17 is consistent with the definition of the Logarithm function given in [36, Definition 2.7] (we also refer to [40] for the LOT framework). However, the emphasis of the embedding in this paper is on computational efficiency, and a closed-form solution is provided. Additional details are available in Appendix A.5.
_Remark 3.3_.: If the reference measure is \(\mu=Unif(\mathbb{S}^{1})\), given a target measure \(\nu\in\mathcal{P}(\mathbb{S}^{1})\), we denote the LCOT-Embedding \(\widehat{\nu}^{\mu,h}\) of \(\nu\) with respect to the cost \(COT_{2}(\cdot,\cdot)\) (i.e., \(h(x)=|x|^{2}\)) simply by \(\widehat{\nu}\). Due to Theorem 2.5 and equation 10, the expression 17 reduces to
\[\widehat{\nu}(x):=F_{\nu}^{-1}\left(x-\mathbb{E}(\nu)+\frac{1}{2}\right)-x, \qquad x\in[0,1). \tag{19}\]
In this setting, we denote \(LCOT_{\mu,h}(\cdot,\cdot)\) simply by \(LCOT(\cdot,\cdot)\). That is, given \(\nu_{1},\nu_{2}\in\mathcal{P}(\mathbb{S}^{1})\),
\[LCOT(\nu_{1},\nu_{2}):=\|\widehat{\nu_{1}}-\widehat{\nu_{2}}\|_{L^{2}( \mathbb{S}^{1})}^{2}=\int_{0}^{1}\left(\min_{k\in\mathbb{Z}}\{|\widehat{\nu_{ 1}}(t)-\widehat{\nu_{2}}(t)+k|_{\mathbb{R}}\}\right)^{2}\,dt. \tag{20}\]
All our experiments are performed using the embedding \(\widehat{\nu}\) given by 19 due to the robustness of the closed-form formula 10 for the minimizer \(\alpha_{\mu,\nu}\) of equation 8 when \(h(x)=|x|^{2}\) and \(\mu=Unif(\mathbb{S}^{1})\).
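To make the closed-form embedding concrete, the following NumPy sketch evaluates equation 19 for a discrete circular measure with the uniform reference and compares two embeddings via equation 20. It is a minimal illustration, not the authors' code: the function names, the grid-based evaluation, and reading \(\mathbb{E}(\nu)\) as the mean of the support points in \([0,1)\) are our assumptions.

```python
import numpy as np

def lcot_embedding(samples, weights, grid):
    """Evaluate the LCOT embedding of equation 19 at the `grid` points in
    [0,1) for a discrete measure nu with support `samples` (in [0,1)) and
    masses `weights`; the reference is Unif(S^1).  Names are illustrative."""
    g = np.asarray(grid, dtype=float)
    order = np.argsort(samples)
    xs, ws = np.asarray(samples)[order], np.asarray(weights)[order]
    cdf = np.cumsum(ws)                      # F_nu at the support points
    mean = np.sum(xs * ws)                   # E(nu), support read in [0, 1)
    q = g - mean + 0.5                       # argument of F_nu^{-1} in eq. 19
    shift = np.floor(q)                      # use F_nu^{-1}(q + 1) = F_nu^{-1}(q) + 1
    idx = np.clip(np.searchsorted(cdf, q - shift, side="left"), 0, len(xs) - 1)
    emb = xs[idx] + shift - g                # optimal circular displacement
    return (emb + 0.5) % 1.0 - 0.5           # wrap to [-1/2, 1/2)

def lcot_distance(emb1, emb2):
    """Squared LCOT distance of equation 20 between two embeddings
    evaluated on the same uniform grid."""
    diff = np.abs(emb1 - emb2)
    diff = np.minimum(diff, 1.0 - diff)      # min over integer shifts k
    return np.mean(diff ** 2)
```

For instance, with `grid = np.linspace(0, 1, 200, endpoint=False)`, each measure is embedded once and any pair can then be compared in linear time, which is the source of the computational gains discussed in Section 3.3.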
_Remark 3.4_.: Let \(\mu\in\mathcal{P}(\mathbb{S}^{1})\) be absolutely continuous with respect to the Lebesgue measure on \(\mathbb{S}^{1}\). Then, for any \(\nu\in\mathcal{P}(\mathbb{S}^{1})\),
\[COT_{h}(\mu,\nu)=\int_{0}^{1}h\left(|\widehat{\nu}^{\mu,h}(t)|_{\mathbb{S}^{1} }\right)\,dt=\int_{0}^{1}h\left(|\widehat{\nu}^{\mu,h}(t)-\widehat{\mu}^{\mu,h} (t)|_{\mathbb{S}^{1}}\right)\,dt=LCOT_{\mu,h}(\mu,\nu).\]
In particular,
\[COT_{2}(\mu,\nu)=\|\widehat{\nu}\|_{L^{2}(\mathbb{S}^{1})}^{2}=\|\widehat{\nu }-\widehat{\mu}\|_{L^{2}(\mathbb{S}^{1})}^{2}=LCOT(\mu,\nu).\]
**Proposition 3.5** (Invertibility of the LCOT-Embedding.).: _Let \(\mu\in\mathcal{P}(\mathbb{S}^{1})\) be absolutely continuous with respect to the Lebesgue measure on \(\mathbb{S}^{1}\), and let \(\nu\in\mathcal{P}(\mathbb{S}^{1})\). Then,_
\[\nu=(\widehat{\nu}^{\mu,h}+\mathrm{id})_{\#}\mu.\]
We refer to Prop. A.1 in the Appendix for more properties of the LCOT-Embedding.
**Theorem 3.6**.: _Let \(\mu\in\mathcal{P}(\mathbb{S}^{1})\) be absolutely continuous with respect to the Lebesgue measure on \(\mathbb{S}^{1}\), and let \(h(x)=|x|^{p}\), for \(1\leq p<\infty\). Then \(LCOT_{\mu,p}(\cdot,\cdot)^{1/p}\) is a distance on \(\mathcal{P}(\mathbb{S}^{1})\). In particular, \(LCOT(\cdot,\cdot)^{1/2}\) is a distance on \(\mathcal{P}(\mathbb{S}^{1})\)._
### LCOT interpolation between circular measures
Given a COT Monge map and the LCOT embedding, we can compute a linear interpolation between circular measures (refer to [40] for a similar approach in the Euclidean setting). First, for arbitrary measures \(\sigma,\nu\in\mathcal{P}(\mathbb{S}^{1})\), the COT interpolation can be written as:
\[\rho_{t}^{COT}:=\left((1-t)\mathrm{id}+tM_{\sigma}^{\nu}\right)_{\#}\sigma, \qquad t\in[0,1]. \tag{21}\]
Similarly, for a fixed reference measure \(\mu\in\mathcal{P}(\mathbb{S}^{1})\), we can write the LCOT interpolation as:
\[\rho_{t}^{LCOT}:=\left((1-t)(\widehat{\sigma}+\mathrm{id})+t(\widehat{\nu}+ \mathrm{id})\right)_{\#}\mu,\qquad t\in[0,1], \tag{22}\]
where we have \(\rho_{t=0}^{COT}=\rho_{t=0}^{LCOT}=\sigma\) and \(\rho_{t=1}^{COT}=\rho_{t=1}^{LCOT}=\nu\). In Figure 3, we show such interpolations between the reference measure \(\mu\) and two arbitrary measures \(\nu_{1}\) and \(\nu_{2}\) for COT and LCOT. As can be seen, the COT and LCOT interpolations between \(\mu\) and \(\nu_{i}\)s coincide (by definition), while the interpolation between \(\nu_{1}\) and \(\nu_{2}\) is different for the two methods. We also provide an illustration of the logarithmic and exponential maps to, and from, the LCOT embedding.
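A minimal sketch of the LCOT interpolation in equation 22, seen as a pushforward of a uniform reference grid, is given below; it reuses the embedding arrays from the previous sketch, and the uniform discretisation of \(\mu\) is an assumption made for illustration.

```python
import numpy as np

def lcot_interpolation(emb_sigma, emb_nu, grid, t):
    """Particle locations of rho_t^{LCOT} in equation 22: push the uniform
    reference grid through the map (1-t)(sigma_hat + id) + t(nu_hat + id);
    each returned point carries mass 1/len(grid).  Names are illustrative."""
    g = np.asarray(grid, dtype=float)
    positions = (1.0 - t) * (np.asarray(emb_sigma) + g) + t * (np.asarray(emb_nu) + g)
    return positions % 1.0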
Figure 3: Left: Illustration of the LCOT embedding, the linearization process (logarithmic map), and measure interpolations. Right: Pairwise interpolations between reference measure \(\mu\) and measures \(\nu_{1}\) and \(\nu_{2}\), using formulas in 21 (COT) and 22 (LCOT).
### Time complexity of Linear COT distance between discrete measures
According to [13, Theorem 6.2], for discrete measures \(\nu_{1},\nu_{2}\) with \(N_{1},N_{2}\) sorted points, the _binary search_ algorithm requires \(\mathcal{O}((N_{1}+N_{2})\log(1/\epsilon))\) computational time to find an \(\epsilon\)-approximate solution for \(\alpha_{\nu_{1},\nu_{2}}\). If \(M\) is the least common denominator for all probability masses, an exact solution can be obtained in \(\mathcal{O}((N_{1}+N_{2})\ln M)\). Then, for a given \(\epsilon>0\) and \(K\) probability measures, \(\{\nu_{k}\}_{k=1}^{K}\), each with \(N\) points, the total time to pairwise compute the COT distance is \(\mathcal{O}(K^{2}N\ln(1/\epsilon))\). For LCOT, when the reference \(\mu\) is the Lebesgue measure, the optimal \(\alpha_{\mu,\nu_{k}}\) has a closed-form solution (see equation 10) and the time complexity for computing the LCOT embedding via equation 19 is \(\mathcal{O}(N)\). The LCOT distance calculation between \(\nu_{i}\) and \(\nu_{j}\) according to equation 20 requires \(\mathcal{O}(N)\) computations. Hence, the total time for pairwise LCOT distance computation between \(K\) probability measures, \(\{\nu_{k}\}_{k=1}^{K}\), each with \(N\) points, would be \(\mathcal{O}(K^{2}N+KN)\). See Appendix A.3 for further explanation.
To verify these time complexities, we evaluate the computational time for COT and LCOT algorithms and present the results in Figure 4. We generate \(K\) random discrete measures, \(\{\nu_{k}\}_{k=1}^{K}\), each with \(N\) samples, and for the reference measure, \(\mu\), we choose: 1) the uniform discrete measure, and 2) a random discrete measure, both with \(N_{0}=N\) samples. To calculate \(\alpha_{\mu,\nu_{k}}\), we considered the two scenarios, using the binary search [13] for the non-uniform reference, and using equation 10 for the uniform reference. We labeled them as, "uniform ref." and "non-uniform ref." Then, in our first experiment, we set \(K=2\) and measured the wall-clock time for calculating COT and LCOT while varying \(N\in\{100,200,\ldots,20000\}\). For our second experiment, and for \(N\in\{500,1000,5000\}\), we vary \(K\in\{2,4,6,\ldots,64\}\) and measure the total time for calculating pairwise COT and LCOT distances. The computational benefit of LCOT is evident from Figure 4.
Figure 4: Computational time analysis of COT and LCOT, for pairwise comparison of \(K\) discrete measures, each with \(N\) samples. Left: Wall-clock time for \(K=2\) and \(N\in\{500,1000,\ldots,5000\}\). Right: Wall-clock time for \(N\in\{500,1000,5000\}\), and \(K\in\{2,4,6,\ldots,64\}\). Solid lines are COT, dotted are LCOT with a uniform reference and dash-dotted are LCOT with a non-uniform reference.
Experiments
To better understand the geometry induced by the LCOT metric, we perform Multidimensional Scaling (MDS) [23] on a family of densities, where the discrepancy matrices are computed using LCOT, COT, OT (with a fixed cutting point), and the Euclidean distance.
**Experiment 1.** We generate three families of circular densities, calculate pairwise distances between them, and depict their MDS embedding in Figure 5. In short, the densities are chosen as follows; we start with two initial densities: (1) a von Mises centered at the south pole of the circle (\(\mu\)=0.5), (2) a bimodal von Mises centered at the east (\(\mu\)=0.25) and west (\(\mu\)=0.75) ends of the circle. Then, we create 20 equally distant circular translations of each of these densities to capture the geometry of the circle. Finally, we parametrize the COT geodesic between the initial densities and generate 20 extra densities on the geodesic. Figure 5 shows these densities in green, blue, and red, respectively. The representations given by the MDS visualizations show that LCOT and COT capture the geometry of the circle coded in the translation property in an intuitive fashion. In contrast, OT and Euclidean distances do not capture the underlying geometry of the problem.
**Experiment 2.** To assess the separability properties of the LCOT embedding, we follow a similar experiment design as in [24]. We consider six groups of circular density functions as in the third row of Figure 5: unimodal von Mises (axial: 0\({}^{\circ}\)), wrapped skew-normal, symmetric bimodal von Mises (axial: 0\({}^{\circ}\) and 180\({}^{\circ}\)), asymmetric bimodal von Mises (axial: 0\({}^{\circ}\) and 120\({}^{\circ}\)), symmetric trimodal von Mises (axial: 0\({}^{\circ}\), 120\({}^{\circ}\) and 240\({}^{\circ}\)), asymmetric trimodal von Mises (axial: 0\({}^{\circ}\), 240\({}^{\circ}\) and 225\({}^{\circ}\)). We assign a von Mises distribution with a small spread (\(\kappa=200\)) to each distribution's axis/axes to introduce random perturbations of these distributions. We generate 20 sets of simulated densities and sample each with 50-100 samples. Following the computation of pairwise distances among the sets of samples using LCOT, COT, OT, and Euclidean methods, we again employ MDS to visualize the separability of each approach across the six circular density classes mentioned above. The outcomes are presented in the bottom row of Figure 5. It can be seen that LCOT stands out for its superior clustering outcomes, featuring distinct boundaries between the actual classes, outperforming the other methods.
**Experiment 3.** In our last experiment, we consider the calculation of the barycenter of circular densities. Building upon Experiments 1 and 2, we generated unimodal, bimodal, and trimodal von Mises distributions. For each distribution's axis/axes, we assigned a von Mises distribution with a small spread (\(\kappa=200\)) to introduce random perturbations. These distributions are shown in Figure 6 (left). Subsequently, we computed both the Euclidean average of these densities and the LCOT barycenter. Notably, unlike COT, the invertible nature of the LCOT embedding allows us to directly calculate the barycenter as the inverse of the embedded distributions' average. The resulting barycenters are illustrated in Fig. 6. As observed, the LCOT method accurately recovers the correct barycenter without necessitating additional optimization steps.
## 5 Conclusion and discussion
In this paper, we present the Linear Circular Optimal Transport (LCOT) distance, a new metric for circular measures derived from the Linear Optimal Transport (LOT) framework [40, 22, 32, 9, 2, 30]. The LCOT offers 1) notable computational benefits over the COT metric, particularly in pairwise comparisons of numerous measures, and 2) a linear embedding in which the \(L^{2}(\mathbb{S}^{1})\) distance between embedded distributions equals the LCOT metric. We consolidated scattered results on circular OT into Theorem 2.5 and introduced the LCOT metric and embedding, validating LCOT as a metric in Theorem 3.6. In Section 3.3, we assess LCOT's computational complexity for pairwise comparisons of \(K\) circular measures, juxtaposing it with COT. We conclude by showcasing LCOT's empirical strengths via MDS embeddings on circular densities using different metrics.
Figure 5: MDS for embedding classes of probability densities into a Euclidean space of dimension 2, where the original pairwise distances (COT-distance, LOT-distance, Euclidean or \(L^{2}\)-distance) are preserved as well as possible.
## Acknowledgement
SK acknowledges partial support from the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR00112190135 and HR00112090023, and the Wellcome LEAP Foundation. GKR acknowledges support from ONR N000142212505, and NIH GM130825.
|
2302.09120 | Robot path planning using deep reinforcement learning | Autonomous navigation is challenging for mobile robots, especially in an
unknown environment. Commonly, the robot requires multiple sensors to map the
environment, locate itself, and make a plan to reach the target. However,
reinforcement learning methods offer an alternative to map-free navigation
tasks by learning the optimal actions to take. In this article, deep
reinforcement learning agents are implemented using variants of the deep Q
networks method, the D3QN and rainbow algorithms, for both the obstacle
avoidance and the goal-oriented navigation task. The agents are trained and
evaluated in a simulated environment. Furthermore, an analysis of the changes
in the behaviour and performance of the agents caused by modifications in the
reward function is conducted. | Miguel Quinones-Ramirez, Jorge Rios-Martinez, Victor Uc-Cetina | 2023-02-17T20:08:59Z | http://arxiv.org/abs/2302.09120v2 | # Robot path planning using deep reinforcement learning
###### Abstract
Autonomous navigation is challenging for mobile robots, especially in an unknown environment. Commonly, the robot requires multiple sensors to map the environment, locate itself, and make a plan to reach the target. However, reinforcement learning methods offer an alternative to map-free navigation tasks by learning the optimal actions to take. In this article, deep reinforcement learning agents are implemented using variants of the deep Q networks method, the D3QN and rainbow algorithms, for both the obstacle avoidance and the goal-oriented navigation task. The agents are trained and evaluated in a simulated environment. Furthermore, an analysis of the changes in the behaviour and performance of the agents caused by modifications in the reward function is conducted.
_Keywords--_ path planning; obstacle avoidance; deep reinforcement learning.
## 1 Introduction
Navigation competence is essential for mobile robots. To navigate autonomously, a robot must use its sensors' data to identify an optimal or suboptimal path to a target point while avoiding collisions. Generally, a map of the environment is constructed, and then a path planner algorithm is used to find a clear path. However, the task becomes daunting when dealing with sensor noise, tracking errors, and unpredictable surroundings. It also becomes challenging and time-consuming to update the obstacle map accurately, replan the navigation path and predict all possible situations the robot may encounter.
Alternatively, new methods that do not require maps to navigate have been proposed, such as the use of deep reinforcement learning (DRL), introduced by Mnih et al. in 2013 [27], which has shown the ability to solve complex tasks that require a lot of data processing by combining the reinforcement learning (RL) framework with the artificial neural networks from deep learning (DL). These methods have the advantages of being mapless, having a strong learning ability, lower sensor accuracy dependence, and requiring less human supervision and environment-dependent engineering. Contrary to other mapless navigation approaches, which require explicit programming of the robot's behaviour, DRL methods allow the robot to learn the optimal actions to take at each time step by associating them with observations of the environment and a reward signal. Furthermore, unlike pure deep learning methods, they do not require a dataset of labelled samples,
which is severely lacking in robotics. Instead, the robot is trained by directly interacting with its environment in a trial-and-error manner. Even when training in the real world proves costly, DRL allows a robot to learn in a simulated environment safely and then transfer the knowledge to a real robot, which is possible because of the generalisation ability of DL models. DRL robotic applications often treat sensor data as a representation of the environment's state, the most commonly used being ranging data, monocular camera images, and depth camera data. Among the sensors used to collect the data, RGB-D cameras are one of the most cost-efficient, lightweight, and information-rich, which allows them to be used for a wide range of applications. As a state representation, RGB images are sensitive to lighting and colour changes, which may be irrelevant to the navigation task. Still, depth images provide geometrical information about the surroundings and are represented as grayscale images, which have been proven to achieve good results in DRL methods applied to different domains. The introduction of deep reinforcement Learning in 2013 by DeepMind [27] demonstrated its potential by training agents that achieved better performance than human experts on Atari games. Since then, notable achievements of DRL methods have been primarily on gaming applications, such as AlphaGo [39] winning against the Go champion, AlphaZero [40] beating the champion chess program, and OpenAI Five [4] defeating professional teams in the online game DOTA 2. However, RL approaches to solve real-world problems have been proposed in several domains, including healthcare [50], analytics [6], language processing [43], networking [24], finances [41] and robotics [18].
Deep reinforcement learning approaches in navigation aim to benefit from learnt skills to solve conventional navigation problems, such as lack of generalisation, the need for fine-tuning or the inability to react in real-time, for applications where mobile robots operate in complex environments. Some of these scenarios include outdoor environments with uneven terrain and noisier sensor readings, dynamic environments where fast reaction times are required, and human environments where collaboration and safety measures are necessary. Deep reinforcement-based applications for navigation have been developed for social robotics, service robotics, unmanned ground vehicles and self-driving cars, among others.
For the autonomous navigation problem, DRL applications are focused on four scenarios, as studied by Zhu et al. in [52], which include local obstacle avoidance, indoor navigation, multi-robot navigation and social navigation. The applications are usually limited to one of those specific capabilities and are developed by conducting specialised research and adding expert knowledge to favour the convergence of the DRL methods. For that reason, little research has been done on moving from a simpler to a more complicated task. Moreover, few studies analyse the impact of the reward function on the agent's behaviour, as its design is tailored to solve the specific task, and no further comparison is made. Furthermore, the review of Zhu et al. [52], the survey of DRL algorithms for autonomous vehicles of Ye et al. [49], as well as the related works reviewed, indicate that among the most commonly used DRL algorithms are the Deep Q Networks (DQN) [27], Double DQN (DDQN) [44], Dueling DDQN (D3QN), Asynchronous Advantage Actor-Critic [26], Proximal Policy Optimization [35] and Deep Deterministic Policy Gradients [21]. However, since their introduction, improvements have been proposed in each algorithm's family of RL methods that lead to new state-of-the-art performances in their benchmark domain, such as continuous control or Atari Games. This means that more modern DRL methods could also improve the results in autonomous navigation-related tasks. In the present research, those problems are studied by training and evaluating different DRL agents in obstacle avoidance and goal-oriented navigation tasks, which were designed considering the challenges presented in the previous reviews. As mentioned by Zhu et al. in [52], the term \(mapless\) used to describe DRL-based navigation systems in this work refers to the use of lightweight localisation solutions, such as GPS and WiFi, to obtain the relative position of the goal point without a global map. Although the training environments were designed based
on the conditions of an indoor navigation scenario, the goal-oriented navigation task is referred to as such due to its focus on reaching a goal rather than the complexity of the environment.
This article introduces a mapless deep reinforcement learning approach to solve the autonomous navigation problem in indoor and static simulated environments using depth images. It focuses on analysing the different data required to train agents for obstacle avoidance and goal-oriented navigation tasks, studying the effect on their behaviour and performance by modifying the reward signal and changing the algorithm used. The proposed approach is implemented in the open-source mobile robot Turtlebot2 1, by using the Robotic Operating System (ROS) 2 as the robotics framework and Gazebo 3 as the robotics and physics simulator. However, the DRL framework can be applied to different mobile robots and using other robotic simulators as long as it is possible for the robot to perform the designated actions and the necessary sensory data is available. An initial idea about how a robot follows an RL approach in a navigation task is shown in Fig. 1.
Footnote 1: [https://www.turtlebot.com/turtlebot2/](https://www.turtlebot.com/turtlebot2/)
Footnote 2: [http://wiki.ros.org/](http://wiki.ros.org/)
Footnote 3: [https://gazebosim.org/home](https://gazebosim.org/home)
## 2 Autonomous Navigation
Autonomous navigation is one of the biggest challenges for a mobile robot. A robot must succeed at four building blocks to navigate autonomously: perception, localisation, cognition, and motion control [30]. Perception requires taking measurements, using different sensors, and extracting meaningful information from those measurements. Localisation involves determining the robot's absolute position in space and relative position concerning its goal and the obstacles. Cognition includes decision-making and its execution to achieve the highest-order goals. Moreover, motion control modulates the robot's motor outputs to achieve the desired trajectory. For a mobile robot, the navigation competence is required for its cognition. Given partial knowledge about its environment and a goal position, navigation encompasses the capability of the robot to act based on its knowledge and sensor values to reach the goal as efficiently as possible [30]. However, obstacle avoidance and path planning competencies are also required for autonomous navigation. There may need to be more than a behaviour or reactive navigation [15] for a mobile robot to reach a distant goal. Likewise, a plan might only be accomplished if the robot can react to unforeseen events. For that reason, modern navigation methods combine both competencies, sensor data and a map, to create a plan, execute it and make adjustments during motion.
Figure 1: An intuition of the RL framework applied to a robot. Based on an observation of the environment, the robot is given the optimal action to take.
### Obstacle Avoidance
Obstacle avoidance requires controlling the robot's trajectory to prevent collisions by making decisions based on sensor readings [30]. Unlike path planning, it is reactive and considers only a few steps ahead when making decisions. One of the simplest obstacle avoidance algorithms is the Bug Algorithm, which follows the contour of each obstacle to circumnavigate it. The robot stops its movement around the obstacle when it finds a minimum distance point towards its destination or a slope equal to its original one, meaning that it requires at least the robot's localisation [15]. As an obstacle avoidance approach with access to knowledge of its environment, the Bubble Band technique generates a subset of the free space around a robot that can be travelled without collision using a map and range information. A string of these so-called bubbles is later used to indicate the trajectory to the goal position [30]. For more robustness, the Vector Field Histogram (VFH) technique generates a 2D polar histogram of the environment around the robot based on its sensor readings. Then it converts it into a 1D polar histogram, where the x-axis represents the angle at which an obstacle was found and the y-axis the probability of it being there. Then, a path is chosen based on the obstacle density, the robot's alignment with the goal, and its steering angle [55]. The Dynamic Window Approach (DWA) is a method that goes a step further by considering the robot's kinematic constraints to select an appropriate combination of linear and angular velocities that allows it to avoid running into obstacles. Given the current robot's speed, the local version of DWA selects a set of tuples of linear and angular velocities which can be reached within the next sample period, also known as the dynamic window. Then, the set is reduced to only those tuples which allow the robot to stop before hitting an obstacle, and the best tuple is selected based on an objective function. The global version of DWA considers the distance to a goal in the objective function, allowing it to have a more long-term view [30]. Fuzzy Logic Controllers are an alternative approach that uses ambiguous and noisy data to make decisions by selecting a proper action based on a set of rules that model a reasoning capability. They improve the performance of mobile robots in complex environments, but at the cost of the complexity that entails designing the set of heuristics [37]. For a more detailed explanation of obstacle avoidance methods, the work of Shitsukane et al. [38] can be consulted.
### Path Planning
Path planning is defined as the problem of finding a sequence of valid configurations to move from a starting position to a goal position and requires a model of the environment transformed into a discrete map. However, most mobile robots use differential-drive systems, which impose nonholonomic constraints on their configuration. Furthermore, when they are on the ground, their path planning is often considered to take place in a 2D representation of the environment [30]. For that reason, typical representations of the environment include grid maps, metric maps, and topological maps. Path planning is classified by the environment and the knowledge that the robot has about it. If the robot has complete knowledge about its environment, it is known as a global path planning problem, in which the planner has to compute an optimal path to the goal. In contrast, a local path planner uses sensor readings to constantly obtain information about the robot's surroundings and follow a path while avoiding obstacles. Local path planning is associated with obstacle avoidance, while global path planning includes graph-based and potential field-based methods [16].
Graph Search methods rely on using a map that indicates the free and occupied space in the environment to build a graph and compute a solution. Then, graph search algorithms can be used to find a path, such as breadth-first search, depth-first search, Dijkstra's algorithm, or the A*
algorithm. Among these, the A* algorithm stands out for its consistency, speed, and ability to find the optimal solution, at the cost of being computationally more expensive and requiring a heuristic function and a path cost function, which may be difficult to define in some cases. Rapidly Exploring Random Trees (RRT) is also a fast alternative that does not require a heuristic function, and its lack of solution optimality was addressed by RRT*. Potential Field path planning methods define forces that either attract the mobile robot towards the goal or repel it from certain positions, such as obstacles. The environment is modelled based on the forces, and the robot is a point under its influence. As long as the robot can localise its position with respect to the potential field, it can compute its next action based on the forces surrounding it. A more in-depth analysis of the path planning problem can be seen in the work of Sanchez-Ibanez et al. [42].
### 2.3 Robot Navigation Systems
Autonomous navigation systems require a path-planning method, an obstacle-avoidance approach, and a localisation method to provide the necessary information for both. Sensor data may be used directly in some cases. However, without knowledge about its position relative to the goal, a mobile robot is limited to reactive behaviour, following a predetermined path, or chasing short-termed goals based on its sensor range [5, 15]. A broad classification of autonomous navigation techniques is whether they are used indoors or outdoors, as well as regarding their consideration of dynamic obstacles. Indoor environments have their working space clearly defined and the surface area physically delimited, and the boundaries are easily identifiable by the robot's path planning and obstacle avoidance algorithms. The limited space and predominance of flat surfaces favour map-based and map-building systems because the robustness and reliability outweigh the computing cost when the resources are available. On the other hand, outdoor navigation systems must deal with uneven terrain, noisier sensor readings due to environmental causes, and more uncertainty about the robot's whereabouts due to its unstructured environments. Navigation in dynamic environments is more complex, requiring not only estimating the position of static obstacles and boundaries but also constantly being on the lookout for movement or any other indication that an obstacle may be headed toward the robot's path. Dynamic navigation systems have a broader selection of applications but require fast updates. However, the inclusion of dynamic obstacles is beyond the scope of this work.
Indoor navigation techniques can also be categorised as map-based, map-building-based, or mapless, depending on the source of the goal-related information they use. Map-based approaches must be provided with a representation of the environment built by a different system beforehand. Map-building techniques can compute the model of the environment themselves and use it subsequently as a source of information. Mapless methods rely on their sensors alone, primarily on visual data, to infer knowledge about their goals' position based on the features detected during motion [5]. RL enables a nature-inspired approach, in which robots learn the optimal behaviour to fulfil a task by interacting with their environment instead of being programmed explicitly. Combined with the advancements in the DL field, it allows them to extract meaningful features from their environment and decide which actions to take without an explicit rule. A DRL-based approach allows a robot to behave similarly to other mapless methods and train specific tasks that complement or improve existing navigation systems.
### 2.4 Conventional Navigation
In most navigation problems, the robot does not have access to an accurate map of the environment, and the most popular approach to solve them is by using a map-building system. For that
reason, navigation is also referred to as the combination of localisation, map-building, and path planning. In that case, the standard technique is to perform the three tasks simultaneously, known as Simultaneous Localization and Mapping (SLAM) [5]. Different algorithms have been proposed to solve the SLAM problem, with the most commonly used being laser, sonar, or visual sensors. The SLAM problem has been studied for many years and has become the industry standard technique to solve navigation problems due to its robustness and reliability, despite the cost of computing and updating a map [2].
Even the most popular robotics framework in industry and academia, the Robot Operating System (ROS) 4, describes its default navigation system as a map-building system, which requires the computation of a map through odometry and sensor data, and the use of global and local path planners [19]. Commonly used algorithms in the navigation stack 5 include GMapping, Adaptive Monte Carlo Localization, \(A*\) for the global path planner, and DWA for the local path planner and obstacle avoidance. RL offers a mapless approach for solving navigation tasks, a better generalisation capability when combined with Deep Learning, and the ability to perform complex behaviours without engineering them. The RL framework allows for versatility and is not limited to using distances as observations and velocities as outputs but can be trained with different data depending on the task. Furthermore, its learning capability is not limited to pre-established rules. It learns to associate a given observation of the environment with the optimal action to fulfil the task as efficiently as possible. Also, the learning process is performed before the robot is put in motion, allowing simulators to be used as a safe training space. It also eases the load on the robot during movement because it already knows what action to take in each scenario.
Footnote 4: [http://wiki.ros.org/](http://wiki.ros.org/)
Footnote 5: [http://wiki.ros.org/navigation](http://wiki.ros.org/navigation)
## 3 Reinforcement Learning
Reinforcement Learning (RL) is one of the three essential Machine Learning (ML) paradigms. RL aims to enable an agent to learn the optimal behaviour to accomplish a task by repeatedly interacting with its environment [33], differing from supervised learning and unsupervised learning, which rely on given data sets.
The main elements of an RL problem are the agent, its possible actions, the environment it belongs to, the state of the environment at any given time, the reward the agent receives from the environment, and a policy that defines the agent's behaviour. The agent is associated with the model that carries out the decision-making progress and learns; it does not refer to a physical entity. The actions are the set of decisions that the agent can take to interact with its environment. The environment generally refers to anything that the agent cannot arbitrarily change. At the same time, a state is the complete description of the environment at a given time. The reward signal is a numerical value that indicates how well the agent performed and is perceived by the environment on each time step. Finally, the policy is a rule used by the agent for its decision-making process, which maps the states perceived from the environment to actions to be taken when being in them.
### Markov Decision Processes
Markov decision processes (MDPs) are used to formally define the interactions between a learning agent and its environment and as the mathematical foundation of an RL problem. An MDP is a system described by the set of states \(S\), the set of actions \(A\), the reward function \(R:S\times A\times S\to\mathbb{R}\)
and a transition probability function \(P:S\times R\times S\times A\rightarrow[0,1]\)[33]; and also obeys the Markov property
\[p(s^{\prime},r|s,a)=Pr\{S_{t}=s^{\prime},R_{t}=r|S_{t-1}=s,A_{t-1}=a\}\]
which establishes that future states only depend on the most recent state and action. MDPs are a formalization of sequential decision-making, where actions influence future states and rewards, and by using them, it is possible to predict all future rewards and states. When the agent has access to the transition probability function, also referred to as the model of the environment, it is possible to use model-based RL methods, which rely on state transitions and reward predictions to plan. However, in most cases, a ground-truth model of the environment is not available, and the agent must follow a model-free approach to learn purely from experience by associating states to actions through some computation.
### Returns and Episodes
At each time step \(t\), the agent observes the current state \(S_{t}=s\in S\) of the environment, proceeds to take an action \(A_{t}=a\in A\), and is provided with a reward \(R_{t+i}\) by the environment. Then, the environment transitions to a new state \(S_{t+1}=s^{\prime}\) and the cycle is repeated, as shown in Fig. 2. By looking for correlations between states, actions and rewards, the agent learns to perform its task efficiently [33].
The agent's goal is to maximize the cumulative reward it receives, also known as the return, which can be defined as the sum of the rewards at each time step:
\[G_{t}=R_{t+1}+R_{t+2}+R_{t+3}+...\]
To prevent an infinite amount of return, the concept of discounting is introduced, and the discounted return is defined as:
\[G_{t}=R_{t+1}+\gamma R_{t+2}+\gamma^{2}R_{t+3}+...=\sum_{k=0}^{\infty}\gamma^{k }R_{t+k+1}\]
where \(\gamma\) is the discount rate and determines the value of the future rewards, \(0\leq\gamma\leq 1\).
However, in many cases, the agent-environment interaction can be broken down into sub-sequences, called episodes, with a final time step \(T\). Each episode ends in a terminal state followed by a reset to a starting state.
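As a small concrete illustration of the discounted return defined above, the sketch below accumulates the rewards of a finished episode backwards; it is a generic Python helper under our own naming, not code from the paper.

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted return G_t for a finished episode; `rewards` lists
    R_{t+1}, ..., R_T in order and gamma is the discount rate."""
    G = 0.0
    for r in reversed(rewards):
        G = r + gamma * G   # G_t = R_{t+1} + gamma * G_{t+1}
    return G
```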
Figure 2: The agent-environment interaction. The agent observes the current state, selects an action, receives a reward and an observation of the new state.
### Policies and Value Functions
A policy maps states to probabilities of selecting each possible action. When an agent follows a policy \(\pi\), then \(\pi(a|s)\) is the probability of performing the action \(a\) when at the state \(s\). The goal of an RL algorithm is to discover an optimal policy \(\pi^{*}\) that prioritizes the best action to take at each state, so as to maximize \(G\)[33]. For that reason, it is useful to know how valuable a state is.
A value function \(v_{\pi}(s)\) is defined as the expected return when starting in a state \(s\) and subsequently following a particular policy \(\pi\):
\[v_{\pi}(s)=E_{\pi}[G_{t}\,|\,S_{t}=s]\]
Similarly, an action-value function \(q_{\pi}\) is defined as the expected return when starting from \(s\), taking the action \(a\), and thereafter following the policy \(\pi\):
\[q_{\pi}(s,a)=E_{\pi}[G_{t}\,|\,S_{t}=s,A_{t}=a]\]
A policy \(\pi\) can be compared to a different policy \(\pi^{\prime}\) given their expected returns
\[\pi\geq\pi^{\prime}\text{ if and only if }v_{\pi}(s)\geq v_{\pi^{\prime}}(s) \text{ for all }s\in S\]
The policy that is better than or equal to all others is considered the optimal policy \(\pi^{*}\) and is associated with an optimal state-value function \(v_{*}\) or an optimal action-value function \(q_{*}\), defined as
\[v_{*}(s) =\max_{\pi}v_{\pi}(s)\] \[q_{*}(s,a) =\max_{\pi}q_{\pi}(s,a)\]
Both types of value functions follow a consistency condition, the Bellman Equation, which expresses the relationship between the value of a state and the value of its possible successor state. The Bellman optimality equation for \(v_{*}\) and \(q_{*}\) are
\[v_{*}(s) =max_{a}E[R_{t+1}+\gamma v_{*}(S_{t+1})|S_{t}=s,A_{t}=a]\] \[q_{*}(s,a) =E[R_{t+1}+\gamma\max_{a^{\prime}}q_{*}(S_{t+1},a^{\prime})|S_{t} =s,A_{t}=a]\]
Depending on the RL method, there are different approaches to reaching optimal behaviour. Policy-based or policy optimization methods directly approximate the optimal policy of the agent, while value-based methods learn to estimate it through the use of value functions or state-action functions.
Also, off-policy RL methods use a behaviour policy, different from the target policy that is learnt and improved, to select actions and explore the environment; this is contrary to on-policy methods, where the target and behaviour policies are the same. Online methods, which update their parameters while observing a stream of experiences, can use two different policies and update them separately, while offline methods commonly optimise only the target policy and copy its parameters into the behaviour policy, storing and reusing experiences gathered at different points of the training through large replay buffers as a less memory-consuming approach.
### Temporal-Difference Learning
Temporal-Difference Learning refers to a class of model-free, value-based methods, which update their estimate of the value function based on previous estimates without waiting for a final outcome, also known as bootstrapping. Given some experience following a policy \(\pi\), TD methods update their estimate \(V\) of \(v_{\pi}\) at each time step \(t+1\) by using the observed reward \(R_{t+1}\) and the estimate \(V(S_{t+1})\)[33]
\[V(S_{t})\gets V(S_{t})+\alpha[R_{t+1}+\gamma V(S_{t+1})-V(S_{t})]\]
The most basic type of TD method is called the one-step TD because the target for the TD update is calculated using the value and reward of only the next time step. The quantity in brackets in the one-step TD is also called the TD error because it measures the difference between the estimated value of \(S_{t}\) and the better estimate \(R_{t+1}+\gamma V(S_{t+1})\), available one step later. As long as the step-size parameter \(\alpha\) is sufficiently small, one-step TD converges deterministically to a single answer.
The advantages of TD methods over others are that they do not require a model of the environment and do not need to wait until the end of the episode to learn.
### Q-Learning
Q-learning is an off-policy TD method and one of the most popular Reinforcement Learning algorithms. It is defined by the update to the action-value function:
\[Q(S_{t},A_{t})\gets Q(S_{t},A_{t})+\alpha[R_{t+1}+\gamma\max_{a}Q(S_{t+1},a)-Q(S_{t},A_{t})]\]
To approximate the optimal action-value function \(q_{*}\), the agent must visit all the state-action pairs and store and update their values, known as Q-values, in a tabular manner [46].
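As a minimal illustration of this update rule, the sketch below applies one tabular Q-learning step; it is generic NumPy code that assumes discrete state and action indices, and is not part of the navigation system described later.

```python
import numpy as np

def q_learning_update(Q, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
    """One tabular Q-learning update; Q is an (n_states, n_actions) array.
    Variable names are illustrative, not tied to the robot setup."""
    bootstrap = 0.0 if done else np.max(Q[s_next])   # max_a' Q(s', a')
    td_error = r + gamma * bootstrap - Q[s, a]
    Q[s, a] += alpha * td_error
    return Q
```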
## 4 Deep Reinforcement Learning
The previously described framework may be used to apply an RL approach to a robotics problem. In the case of autonomous navigation, the robot can be seen as the agent, its linear and angular velocities as the actions and the reward should incentive the robot to evade obstacles or move closer to its goal, as shown in Fig. 3. However, the challenge lies in defining an appropriate state that provides enough information for the robot to fulfil its task, especially for robots that operate in a three-dimensional space.
In 2013, Kober et al. published a survey [18] about the challenges and successes of Reinforcement Learning in Robotics, and one of the main challenges is the "Curse of Dimensionality". This holds, especially for robotics, where multiple sensor readings, degrees of freedom or images are needed to describe the robot's state space. However, in the same year, Google DeepMind proposed a novel algorithm, Deep Q Networks (DQN) [27], by combining the traditional Q-learning method with a Neural Network, which vastly outperformed all previous methods at playing Atari games with RGB images as inputs. This work started the trend of combining RL methods with Neural Networks from the DL field, which became a subfield known as Deep Reinforcement Learning.
When designing an agent that uses depth images as states, the improved computational capabilities and robustness of the DRL are needed for the agent to be able to process the data and extract meaningful features that allow it to differentiate and evaluate each state.
### Neural Networks
Artificial Neural Networks, or simply Neural Networks (NNs), are computing models based on a collection of connected nodes known as neurons, used to approximate high-dimensional and non-linear functions. The neurons are aggregated into layers, where different transformations are performed and associated with the weights adjusted for the network to learn. The neurons are inspired by the brain cells of the same name, and their design is based on the perceptron, introduced by Frank Rosenblatt in 1958 [31]. Each neuron's inputs are weighted, summed and added a bias before being passed through an activation function that applies a non-linear transformation, which is the main reason why they perform well in different applications.
Each NN has an input layer, where data is introduced, an output layer, where a prediction is given, and many hidden layers in between, where the values are computed. The more hidden layers are used, the better the capability of the network to abstract meaningful information from the data to make better predictions. For that reason, the term \(deep\) originates from using a larger amount of hidden layers, which was possible due to the increase in available computing power and memory, contrary to the earlier \(shallow\) networks.
The most basic type of neural network is a feedforward neural network [17], or multilayer perceptron, where each layer is composed of many neurons, and their output is connected to the input of the next layer. The layers of these types of NNs are known as feedforward, fully connected or linear layers due to their sequential nature and because all of the neurons are connected to the next layer. The number of neurons and the activation function for each layer can be modified, with the most commonly used being the \(reLu\), \(tanh\), \(sigmoid\) and \(softmax\) functions.
A specialised type of NN for processing data that has a grid-like shape is known as the Convolutional Neural Network (CNN) [17], and its most popular use is for processing images. CNNs have layers that perform a convolution instead of a matrix multiplication, known as convolutional layers. The convolution requires sliding a kernel, a smaller array of parameters, along the input matrix of the layer and performing a dot product in small windows of features, reducing the output data size. The size of the kernel, the number of kernels, the amount of stride that the kernel slides, and whether the input features are padded to keep their size after the operation, among other features, can be tuned for each convolutional layer. The convolution operation allows extracting high-level features from images, such as edges and colour. It performs better predictions, and the popularity of this type of network increased thanks to the results of trained models such as AlexNet, presented
Figure 3: An example of a robot described in the RL framework. The state is still to be defined.
in [20], and ResNet, proposed in [13].
### Deep Q-Networks
The Q-Learning algorithm's limitations to store and approximate the value of all state-pairs when the number of combinations is increased was addressed by Mnih et al. in [27]. They proposed an approach called Deep Q Networks that combined the Q-learning algorithm with Neural Networks.
The core idea was to approximate the Q-values using a Deep Neural Network (DNN) instead of storing them in a tabular manner. To that end, the value function was parametrised as \(Q(s,a;\theta_{i})\) by using the neural network's weights \(\theta\) at each time step \(i\)[28]. The Q-learning update becomes the loss function to train the neural network. The loss is given by:
\[L(\theta_{i})=E_{(s,a,r,s^{\prime})\sim U(D)}\left[\left(r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime};\theta_{i})-Q(s,a;\theta_{i})\right)^{2}\right]\]
where, at each time step \(t\), the agent's experience \(e_{t}=(s_{t},a_{t},r_{t},s_{t+1})\) is stored in an experience replay \(D_{t}=\{e_{1},...,e_{t}\}\), and a mini-batch of experiences \((s,a,r,s^{\prime})\) is drawn uniformly at random, \(U(D)\), to perform the update.
This method outperformed most state-of-the-art methods at Atari games without prior knowledge and by using raw images and established the beginning of Deep Reinforcement Learning.
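The PyTorch-flavoured sketch below shows how the loss above is typically computed from a uniformly sampled mini-batch; it is an illustrative fragment, not the authors' implementation, and it assumes the replay buffer is a list of transitions stored as tensors. In practice the bootstrap term is usually evaluated with a periodically copied target network, while here the online network is used, matching the loss as written.

```python
import random
import torch
import torch.nn.functional as F

def dqn_loss(q_net, replay_buffer, batch_size=32, gamma=0.99):
    """Sketch of the DQN loss: sample a uniform mini-batch of (s, a, r, s', done)
    tensors and regress Q(s, a) towards r + gamma * max_a' Q(s', a').
    Names are illustrative."""
    batch = random.sample(replay_buffer, batch_size)              # ~ U(D)
    s, a, r, s_next, done = map(torch.stack, zip(*batch))
    q_sa = q_net(s).gather(1, a.long().unsqueeze(1)).squeeze(1)   # Q(s, a; theta)
    with torch.no_grad():
        q_next = q_net(s_next).max(dim=1).values                  # max_a' Q(s', a')
        target = r + gamma * (1.0 - done.float()) * q_next
    return F.mse_loss(q_sa, target)
```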
### Double DQN
One disadvantage of the Q-learning algorithm, as evidenced by van Hasselt, is the overestimation of action values due to a positive bias from using the maximum action value as an approximation for the maximum expected action value. A double estimator method was proposed to decouple the action selection process from the evaluation and eliminate the bias, resulting in an underestimation of action values. Furthermore, van Hasselt et al. [44] extended the idea for its use in parametric equations and the DQN algorithm, proposing the variant Deep reinforcement learning with Double Q-learning (Double DQN or DDQN) by using two Neural Networks with different sets of weights. The main neural network picks the best next action \(a^{\prime}\) among all the available, and then the target neural network evaluates the action to know its Q-value. While the main neural network's weights are updated normally, the target neural network is updated every so often with a copy of the main neural network's weights. The Bellman equation in this algorithm has the shape:
\[Q(s,a;\theta)=r+\gamma Q(s^{\prime},argmax_{a^{\prime}}Q(s^{\prime},a^{\prime} ;\theta);\theta^{\prime})\]
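A short sketch of this decoupling is given below: the online network selects the next action and the target network evaluates it. Network and variable names are illustrative assumptions, kept consistent with the DQN sketch above.

```python
import torch

@torch.no_grad()
def double_dqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    """Double DQN target for a batch of transitions: the online network
    selects the next action, the target network evaluates it (sketch)."""
    a_next = q_net(s_next).argmax(dim=1, keepdim=True)           # selection
    q_eval = target_net(s_next).gather(1, a_next).squeeze(1)     # evaluation
    return r + gamma * (1.0 - done.float()) * q_eval
```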
### Prioritized Experience Replay
The Experience Replay, introduced by Lin [22], helped online RL methods to break the temporal correlations of the updates and to prevent the loss of rare experiences by mixing more and less recent experiences and allowing them to be used multiple times. However, experiences are sampled uniformly at random, without regard for each experience's value. The Prioritized Experience Replay (PER), proposed by Tom Schaul et al. [34], focuses on the effective use of the replay memory for learning by prioritising transitions which may be more valuable for the agent but rarely occur. The TD error \(\delta\) is used as a criterion to measure the importance of each transition by indicating how unexpected each transition is because it compares how far the value is from the next bootstrap
estimate. However, purely choosing the experiences with the most TD error would lead to overfitting. Therefore, a stochastic sampling method was proposed that interpolates greedy prioritisation and uniform random sampling.
So each transition \(i\) is given a priority value
\[p_{i}=|\delta_{i}|+\epsilon\]
where \(\epsilon\) is a small positive constant that prevents a transition from not being visited, such that \(p_{i}>0\). And the probability of sampling each transition \(i\) is given by
\[P(i)=\frac{p_{i}^{\alpha}}{\sum_{k}p_{k}^{\alpha}}\]
where the \(\alpha\) determines how much prioritization is used, with \(\alpha=0\) corresponding with the uniform case. And to prevent the bias toward high-priority samples introduced by the change of distribution in the stochastic updates, importance-sampling (IS) weights are used
\[w_{i}=\left(\frac{1}{N}\frac{1}{P(i)}\right)^{\beta}\]
where \(N\) is the size of the replay buffer, and the Q-learning update is performed using \(w_{i}\delta_{i}\) instead of \(\delta_{i}\). The hyperparameter \(\beta\) controls how much the IS weights affect learning and is linearly annealed from an initial value \(\beta_{0}\), with \(0<\beta_{0}<1\), to \(1\).
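The sketch below turns stored TD errors into priorities, sampling probabilities and importance-sampling weights following the formulas above. It is an array-based illustration (practical implementations use a sum-tree for efficiency), and normalising the weights by their maximum is a common practical choice rather than something stated here.

```python
import numpy as np

def per_sample(td_errors, batch_size, alpha=0.6, beta=0.4, eps=1e-6):
    """Prioritized sampling sketch: priorities p_i, probabilities P(i) and
    importance-sampling weights w_i for a mini-batch (illustrative names)."""
    p = np.abs(td_errors) + eps                  # p_i = |delta_i| + eps
    probs = p ** alpha / np.sum(p ** alpha)      # P(i)
    idx = np.random.choice(len(p), size=batch_size, p=probs)
    w = (len(p) * probs[idx]) ** (-beta)         # importance-sampling weights
    w /= w.max()                                 # normalise for stability
    return idx, w
```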
### Dueling Network
The Dueling Network architecture, proposed by Xie et al. [45], splits the Q-values between the value function V(s) and the advantage function A(s, a). The first one estimates the reward collected from the state \({}^{\prime}s^{\prime}\), while the second one estimates how much better one action is compared to the others.
The Q-value is defined by:
\[Q(s,a)=V(s)+A(s,a)\]
For that reason, the Dueling Network has two streams to separately estimate state values and the advantages for each action and combine them to output Q-values for each action. To prevent the Q-value equation from being unidentifiable, the advantage function estimator is forced to have zero advantage at a chosen action:
\[Q(s,a)=V(s)+\left(A(s,a)-\frac{1}{|A|}\sum_{a^{\prime}}A(s,a^{\prime})\right)\]
Because the dueling architecture shares the same input-output interface, it can be combined with other Q network-based architectures. One of the algorithms which significantly improved when combined with a dueling architecture is the DDQN, and such combination is often referred to as Dueling Double DQN or D3QN.
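A possible PyTorch sketch of such a dueling head, using the mean-subtracted combination above, is shown below; the layer sizes and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DuelingHead(nn.Module):
    """Dueling head sketch: one stream estimates V(s), the other A(s, a),
    combined with the mean-subtracted formula above."""

    def __init__(self, in_dim, n_actions, hidden=128):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_actions))

    def forward(self, features):
        v = self.value(features)                       # (B, 1)
        a = self.advantage(features)                   # (B, n_actions)
        return v + a - a.mean(dim=1, keepdim=True)     # Q(s, a)
```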
### Multi-step Learning
The idea of multi-step learning, or originally known as n-step Bootstrapping [33], comes from the comparison between TD methods and other RL methods, such as the Monte Carlo (MC) methods. Whereas most TD methods bootstrap their estimations over every time step, MC methods do so
only at the end of each training episode. Therefore, a middle ground was proposed in which it is possible to bootstrap over a length of time in which significant state changes have occurred, effectively leading to faster learning. The truncated n-step return from a given state \(S_{t}\) is defined as
\[R_{t}^{n}=\sum_{k=0}^{n-1}\gamma_{t}^{k}R_{t+k+1}\]
And the multi-step variant of the DQN loss is defined as
\[(R_{t}^{n}+\gamma_{t}^{n}\max_{a^{\prime}}q_{\theta}(S_{t+n},a^{\prime})-q_{ \theta}(S_{t},A_{t}))^{2}\]
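For concreteness, the truncated n-step return can be accumulated as in the short sketch below, with the bootstrap term \(\gamma_{t}^{n}\max_{a^{\prime}}q_{\theta}(S_{t+n},a^{\prime})\) added separately by the caller; this is a generic helper, not code from the paper.

```python
def n_step_return(rewards, gamma=0.99):
    """Truncated n-step return R_t^(n) from the n rewards following S_t:
    sum_{k=0}^{n-1} gamma^k * R_{t+k+1}."""
    return sum((gamma ** k) * r for k, r in enumerate(rewards))
```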
### Distributional Reinforcement Learning
Bellemare et al. [3] proposed a method to model the full distribution of returns instead of only the expectation, which leads to better approximations and more stable learning. The returns' distribution is modelled using a discrete distribution parametrised by \(N\in\mathbb{N}^{+}\) and \(V_{MIN}\),\(V_{MAX}\in\mathbb{R}\), with probability masses placed on a discrete support \(z\), where \(z\) is a vector of \(N\) atoms, considered as the canonical returns, defined by
\[z^{i}=V_{MIN}+(i-1)\left(\frac{V_{MAX}-V_{MIN}}{N-1}\right)\]
for \(i\in 1,...,N\). With the probability mass of each atom
\[p_{\theta}^{i}(s,a)=\frac{e^{\theta_{i}(s,a)}}{\sum_{j}e^{\theta_{j}(s,a)}}\]
such that the approximating discrete distribution \(d\) at time \(t\) is given by
\[d_{t}=(z,p_{\theta}(s,a))\]
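A minimal sketch of this parametrisation is given below: a fixed support of \(N\) atoms and a softmax over the per-atom logits \(\theta_{i}(s,a)\), from which the expected value of each action can also be read off. The bounds and tensor shapes are illustrative assumptions.

```python
import torch

def categorical_distribution(logits, n_atoms=51, v_min=-10.0, v_max=10.0):
    """Discrete return distribution sketch; `logits` has shape
    (batch, n_actions, n_atoms)."""
    z = torch.linspace(v_min, v_max, n_atoms)     # support z^1, ..., z^N
    probs = torch.softmax(logits, dim=-1)         # p_theta^i(s, a)
    q_values = (probs * z).sum(dim=-1)            # E[Z(s, a)] per action
    return z, probs, q_values
```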
A variant of Bellman's equation is used to learn the probability masses. The Bellman operator \(T^{\pi}\) is defined to describe the contraction by \(\gamma\) and shift by the reward of the future estimation, to get the current value during the policy evaluation. The Bellman Equation
\[Q^{\pi}(s,a)=\mathbb{E}R(s,a)+\gamma\mathbb{E}_{P,\pi}Q^{\pi}(s^{\prime},a^{ \prime})\]
can be rewritten using the Bellman operator
\[T^{\pi}Q(s,a)=\mathbb{E}R(s,a)+\gamma\mathbb{E}_{P,\pi}Q(s^{\prime},a^{ \prime})\]
The Bellman operator \(T^{\pi}\) is further proved to converge to a unique return distribution by using a metric between cumulative distribution functions, known as the Wasserstein Metric. Denoting the return as \(Z\) and the return distribution as \(Z^{\pi}\), the convergence of \(Z\) is studied by applying the Bellman operator, as
\[T^{\pi}Z(s,a)=R(s,a)+\gamma P^{\pi}Z(s^{\prime},a^{\prime}).\]
However, when extending the idea to the Bellman optimality operator \(T\)
\[TQ(s,a)=\mathbb{E}R(s,a)+\gamma\mathbb{E}_{P}max_{a^{\prime}\in A}Q(s^{\prime },a^{\prime}),\]
it can only be proved that \(T\) converges to a set of optimal return distributions.
Furthermore, applying \(T\) to \(Z\) cannot be computationally done without applying the \(argmax\) function to the expectation of the future value.
\[T^{*}Z(s,a)=R(s,a)+\gamma Z(s^{\prime},\arg\max_{a^{\prime}\in A}E[Z(s^{\prime},a^{\prime})])\]
When applying the Bellman update \(TZ_{\theta}\) to the parametrisation \(Z_{\theta}\), the supports are almost always disjoint. To fix this, and considering an issue with the Wasserstein loss when sampling from transitions, the sample Bellman update \(\tilde{T}Z_{\theta}\) is projected onto the support of \(Z_{\theta}\), reducing the update to a multi-class classification.
### Noisy Networks
One of the key challenges of RL methods is maintaining a balance between exploration and exploitation. Traditional exploration heuristics rely on random perturbations of the agent's policy, such as \(\epsilon\)-greedy, probabilities, or intrinsic motivation terms added to the reward signal, to encourage new behaviours. However, these methods are not easily applied with neural networks or rely on a metric chosen by the experimenter. For that reason, Fortunato et al. [11] proposed NoisyNet, an approach where perturbations of a neural network's weights are used to drive exploration. The number of parameters in the linear layer of the neural network is doubled and allows for different learning rates at the state space. For a linear layer of a neural network
\[y=wx+b\]
the corresponding noisy linear layer is defined as
\[y=(\mu^{w}+\sigma^{w}\odot\epsilon^{w})x+\mu^{b}+\sigma^{b}\odot\epsilon^{b}\]
where the parameters \(\mu^{w},\sigma^{w},\mu^{b},\sigma^{b}\) are learnable and \(\epsilon^{w},\epsilon^{b}\) are noise random variables, sampled either as independent Gaussian noise or as factorised Gaussian noise.
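A possible PyTorch sketch of such a noisy linear layer, using the factorised Gaussian variant, is shown below; the initialisation constants follow values commonly used for factorised noise, and using the mean weights at evaluation time is a design choice of this sketch rather than a requirement.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Noisy linear layer with factorised Gaussian noise:
    y = (mu_w + sigma_w * eps_w) x + mu_b + sigma_b * eps_b."""

    def __init__(self, in_features, out_features, sigma_init=0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.mu_w = nn.Parameter(torch.empty(out_features, in_features))
        self.sigma_w = nn.Parameter(torch.empty(out_features, in_features))
        self.mu_b = nn.Parameter(torch.empty(out_features))
        self.sigma_b = nn.Parameter(torch.empty(out_features))
        self.register_buffer('eps_w', torch.zeros(out_features, in_features))
        self.register_buffer('eps_b', torch.zeros(out_features))
        bound = 1.0 / math.sqrt(in_features)            # standard factorised-noise init
        self.mu_w.data.uniform_(-bound, bound)
        self.mu_b.data.uniform_(-bound, bound)
        self.sigma_w.data.fill_(sigma_init / math.sqrt(in_features))
        self.sigma_b.data.fill_(sigma_init / math.sqrt(in_features))
        self.reset_noise()

    @staticmethod
    def _scaled_noise(size):
        # f(x) = sgn(x) * sqrt(|x|), applied to a standard Gaussian sample.
        x = torch.randn(size)
        return x.sign() * x.abs().sqrt()

    def reset_noise(self):
        """Sample new factorised noise: one vector per input, one per output."""
        eps_in = self._scaled_noise(self.in_features)
        eps_out = self._scaled_noise(self.out_features)
        self.eps_w.copy_(eps_out.unsqueeze(1) * eps_in.unsqueeze(0))
        self.eps_b.copy_(eps_out)

    def forward(self, x):
        if self.training:
            return F.linear(x, self.mu_w + self.sigma_w * self.eps_w,
                            self.mu_b + self.sigma_b * self.eps_b)
        return F.linear(x, self.mu_w, self.mu_b)        # mean weights at evaluation time
```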
### Rainbow
All previous improvements to the original DQN algorithm were made independently, as illustrated in Fig. 4. Hessel et al. [14] proposed that each extension addressed a distinct concern and that they could be combined to improve the performance of the DQN algorithm. The one-step distributional loss is replaced with a multi-step variant: in the distributional Bellman operator, the distribution at \(S_{t+n}\) is contracted by \(\gamma_{t}^{(n)}\) and shifted by the truncated n-step return \(R_{t}^{(n)}\) instead of the one-step reward. The target distribution is defined as
\[d_{t}^{(n)}=\left(R_{t}^{(n)}+\gamma_{t}^{(n)}z,\;p_{\bar{\theta}}(S_{t+n},a^{*}_{t+n})\right)\]
where the greedy action \(a^{*}_{t+n}\) is selected by the online network, as in double Q-learning, and evaluated by the target network \(\bar{\theta}\) when bootstrapping.
The resulting KL loss is:
\[D_{KL}(\phi_{z}d_{t}^{(n)}||d_{t})\]
where \(\Phi_{z}\) denotes the projection onto the support \(z\). This loss can also be used to compute the priority values for PER, as a more robust and efficient alternative to the absolute TD error. The neural networks follow the dueling network architecture but are adapted to output return distributions, and finally, all the linear layers are replaced with noisy linear layers.
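As a small illustration of this substitution, a hedged sketch of how per-transition KL losses could be turned into PER sampling probabilities is given below; the exponent value is taken from Table 2 under our reading of the symbols, and the function name is hypothetical.

```python
import numpy as np

def sampling_probabilities(kl_losses, alpha=0.6, eps=1e-6):
    """Turn per-transition KL losses into PER sampling probabilities,
    P(i) = p_i^alpha / sum_j p_j^alpha with priority p_i = KL_i + eps.

    Using the KL loss in place of |TD error| is the substitution described
    above; alpha = 0.6 is the prioritisation exponent listed in Table 2,
    under our interpretation of the symbols."""
    p = (np.asarray(kl_losses) + eps) ** alpha
    return p / p.sum()
```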
## 5 Deep Reinforcement Learning for Navigation
In mapless navigation systems, there is no available representation of the environment; the robot perceives the environment as it navigates and must be able to recognise objects, landmarks or any similar type of information that allows it to infer where its goal is located. Most of these systems use visual information, primarily the first-person-view image, and perform some reactive behaviour as they process the incoming data [12].
Optical Flow methods use a sequence of images to estimate the motion of objects and features. Velocities perceived are used for the robot's decision-making, always preferring to move in the direction of less change. This is also the main disadvantage of these methods [9]. Appearance-based methods store and memorise images of the environment and associate them with certain relative localisation to the goal, allowing the robot to perform the correct motion control. However, labelling the desired images and developing the appropriate criteria may be difficult and time-consuming [9]. Feature tracking-based methods rely on detecting features and motion from the elements in the environment and, based on that information, estimate the robot's trajectory and motion [5]. Object recognition is more symbolic and can detect features rather than memorising precise objects or positions. Deep learning approaches are very similar in the sense that neural networks are trained with many images to identify features of the objects in the environment [51].
All the aforementioned methods are limited to one task, except for the DL-based ones. All of them require labelled images that indicate the desired motion at a specific place or the landmark it represents, which can be very costly to produce. On the contrary, RL agents can be trained for different tasks, and simulated environments allow an agent to be trained safely and efficiently before being transferred to a real-life robot, reducing the cost of learning the task. Finally, technological advances allow the recreation of more realistic and complex scenarios and accelerate learning.
Choices of DRL algorithms in robotics include different variations of DQN, and policy search methods, such as Proximal Policy Optimization [35] (PPO), Asynchronous Advantage Actor-Critic [26] (A3C) and Deep Deterministic Policy Gradients [21] (DDPG). In the case of mobile robots, different tasks have been accomplished using DRL methods, the most common being obstacle avoidance and navigation. However, more complex tasks can be performed depending on the information
Figure 4: Rainbow DQN components. The combination of independent improvements resulted in a better performance than the baseline DQN.
provided to the agent. A summary of the related works can be seen in Table 1.
For the obstacle avoidance task, Lei Tai and Ming Liu [53] implemented a DQN agent trained to explore indoor environments by using depth images as the states and a CNN pre-trained with real-world samples. Linhai Xie et al. [48] combined a CNN trained to predict depth images with a D3QN agent to propose an approach that uses only monocular RGB vision as input. They also showed that the D3QN model outperformed a vanilla DQN model on both training speed and performance. Patrick Wenzel et al. [47] also used a NN to predict depth images based on RGB images and implemented three different agents to solve obstacle avoidance in circuit-like environments: a PPO agent with a discrete action set, a PPO agent with a continuous action set and a DQN agent. They concluded that the PPO agent with discrete actions outperformed the other two agents and that depth images yielded better results than RGB and grayscale images.
For the goal-oriented navigation task, Xiaogang Ruan et al. [32] implemented a D3QN agent that navigates autonomously by using depth images and the distance to the goal as its state. Changan Chen et al. [7] presented an LSTM network that models Human-Robot and Human-Human interactions, within the DRL framework, for navigation towards a goal in a crowded environment. Yuke Zhu et al. [54] trained an A3C agent in a self-developed physics engine, which could generalise across targets and scenes. Two RGB images were used for the state representation, one from the agent's perspective and another showing the target, and both were embedded by a CNN before being passed to the agent. Liulong Ma et al. [25] compared two DRL agents, DQN for a discrete action space and PPO for a continuous action space, on a mapless navigation task, using a Variational Autoencoder to encode RGB images and appending target-related information. The PPO model outperformed the DQN model in both performance and training time and also outperformed the benchmark in its environment. Reinis Cimurs et al. [8] proposed a DDPG agent that combined a stack of depth images with the polar coordinates between the robot and the goal as the state, and with a reward based on the robot's velocity. They performed successful experiments in simulated environments as well as real-world scenarios.
However, other works involve different navigation-related tasks. Pararth Shah et al. [36] combined a DQN agent with a Recurrent Neural Network to map natural language instructions and visual and depth inputs to actions. Wenhan Luo et al. [23] developed an A3C agent for a mobile robot, combined with a ConvLSTM network, that takes RGB frames as inputs and produces both camera control and motion control signals as outputs. Their agent could resume tracking after losing the target and was successfully transferred to real-world scenarios. Placed and Castellanos [29] developed a D3QN agent capable of performing active SLAM with less intensive computation by using laser measurements and designing the reward function based on a formulation of the active SLAM problem.
While most studies specialise in a task and propose a specific reward function and state representation to fulfil it, the work presented analyses the challenge involved in going from a simpler task to a more complex one, as well as the effects the reward function can have on the robot's behaviour and performance. Also, the popular D3QN algorithm is compared with a more recent variant of the DQN family of methods, the Rainbow algorithm.
For a more in-depth review of DRL algorithms and applications in navigation, the surveys of Zhu et al. [52] or Ye et al. [49] can be consulted. It is noteworthy, as also studied by Zhu et al. in [52], that more often than not DRL applications in navigation require lightweight navigation solutions to be a complete navigation system. As previously discussed, the most common approach to solve the navigation problem is by using a SLAM technique in a map-building-based robotic system.
In this work, two different approaches to incorporating a DRL agent in a navigation system are explored. The first is an obstacle avoidance agent that can explore an environment with different obstacles and navigate in circuit-like environments. The second is an agent capable of steering towards a goal when given reference information. The D3QN and Rainbow DQN algorithms are compared to evaluate the difference in results between a commonly used algorithm and its successor. Finally, different reward functions are implemented in each method to analyse the difference in results and in the actions the agents take.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Agent & Algorithm & State & Task \\ \hline
[53] & DQN & Depth Image & Obstacle Avoidance \\
[48] & D3QN with CNN & Predicted Depth Image from RGB & Obstacle Avoidance \\
[47] & PPO and DQN with GAN & Predicted Depth Image from RGB & Maze Navigation \\
[32] & D3QN & Depth Image and Distance to Goal & Goal Navigation \\
[54] & A3C & Perspective RGB Image and RGB Image of the Target & Goal Navigation \\
[7] & LSTM-RL & Position, Velocity and Radius of Agent and Humans & Goal Navigation in a Crowd \\
[25] & DQN and PPO with VAE & RGB Image, Polar Coordinates and Motion Information & Goal Navigation \\
[8] & DDPG & Depth Images and Polar Coordinates & Goal Navigation \\
[36] & DQN with RNN & Natural Language Instruction, Visual and Depth Data & Goal Navigation with Natural Language Directions \\
[29] & D3QN & Laser Measurements & Active SLAM \\
[23] & A3C with ConvLSTM & RGB Image & Object Following and Tracking \\
Proposed & D3QN \& Rainbow DQN & Depth Image & Obstacle Avoidance \\
 & & Depth Image and Polar Coordinates & Goal Navigation \\ \hline \hline \end{tabular}
\end{table}
Table 1: A non-extensive summary of previous works. There is a set of commonly used RL algorithms, but depending on the choice of state representation, different tasks can be trained.
## 6 Design of the DRL agent
This section details the design of the DRL approach. First, a description of the state representation, action space and reward function is given. Then, the architecture of the neural networks and the specifications of the DRL methods used are discussed. An intuition of how the implementation for the obstacle avoidance task fits into the agent-environment interaction loop of the RL framework is shown in Fig. 5.
### State Representation
The state representation must contain enough information about the environment so the agent can decide what action to take to maximise its return, using only the state provided at any given time step.
Depth images provide geometric information about the robot's surroundings in three dimensions and are represented in grayscale. In contrast, RGB images are more susceptible to lighting conditions and contain colour information that may be irrelevant. Stacked grayscale frames have been proven to be a good state representation in other DRL tasks, such as Atari games [28], because of the dense information they contain. For those reasons, the chosen state representation for the obstacle avoidance task consists of a stack of four successive depth images, one taken at each time step of the training process. The geometric information provided should be enough for the agent to determine when a collision is imminent and a change of behaviour is necessary.
However, more information is needed to determine the agent's relative position to its destination for the goal-oriented navigation task. To avoid the problem of the agent not recognising the difference between similar states, known as aliasing, the polar coordinates from the agent to the goal are appended to the state representation in the form of a distance and angle.
Figure 5: Example of the RL design for the obstacle avoidance task. The depth images are perceived in the simulated environment in Gazebo and reach the RL algorithm through the ROS framework.
### Action Space
Actions represent the agent's choices to interact with its environment and are constrained by its physical limitations and task. Actions in robotics include desired velocities, accelerations, or torques sent to a motor.
In the case of a mobile robot performing the task of obstacle avoidance, the noteworthy commands are the input linear and angular velocities. Because the environments are static, there is no action given for the robot to stay still, and it must always remain in motion. In the case of the discrete set of actions, two linear velocities were selected to allow the robot to either slow down while turning or speed up to reach its destination faster. Also, four angular velocities were chosen to let the robot rotate at different rates in each direction and one null-valued angular velocity to go straight.
Because of the specifications of the robot used in the simulations for training, the Turtlebot 2, which is further described in the section on the simulated environment, the specific values are the following: \(0.2\,m/s\) or \(0.4\,m/s\) for the linear velocity and \(\frac{\pi}{6}\,rad/s\), \(\frac{\pi}{12}\,rad/s\), \(0\,rad/s\), \(-\frac{\pi}{12}\,rad/s\) or \(-\frac{\pi}{6}\,rad/s\) for the angular velocity.
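For reference, the resulting discrete action set is the Cartesian product of the two linear and five angular velocities; the enumeration below is only an illustration of that set, not the exact data structure used in the implementation.

```python
import math

# Hypothetical enumeration of the 10 discrete actions as (linear, angular) pairs,
# built from the velocity values listed above.
LINEAR = [0.2, 0.4]                                         # m/s
ANGULAR = [math.pi / 6, math.pi / 12, 0.0,
           -math.pi / 12, -math.pi / 6]                     # rad/s

ACTIONS = [(v, w) for v in LINEAR for w in ANGULAR]
assert len(ACTIONS) == 10
```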
### Reward Function
The reward function reflects the agent's objective and is the core of the learning process; it grades how well the agent behaved at a given time step.
#### 6.3.1 Obstacle Avoidance
For an agent attempting to explore its environment while avoiding obstacles, either a penalty for crashing into an obstacle, a small reward at each time step, or a sparse reward for completing several steps without colliding may be enough to learn the task at hand. However, different designs may incentivise certain behaviours. One such constraint is to penalise the robot's angular velocity so that it prioritises moving straight and more steadily. For that reason, two different reward functions were tested.
The first reward function is a simple one that gives a small reward to an agent for each time step that it does not collide with an obstacle and gives a penalty two orders of magnitude higher on collision:
\[R=\left\{\begin{array}{ll}-10&\quad\text{on collision}\\ 0.1&\quad\text{at each time step}\end{array}\right. \tag{1}\]
The second reward function, referred to as the behaviour reward function, rewards the agent for its linear velocity and penalises its angular velocity:
\[R=\left\{\begin{array}{ll}-10&\quad\text{on collision}\\ v-|w|&\quad\text{at each time step}\end{array}\right. \tag{2}\]
where \(v\) is the robot's linear velocity and \(w\) its angular velocity. Combined with the previously chosen actions, the robot can earn a reward between \([-0.13,0.4]\) at each time step, with the penalty for colliding again being two orders of magnitude higher, giving it a higher priority when learning.
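A minimal sketch of these two reward functions, with hypothetical function names, might look as follows.

```python
def simple_reward(collided):
    """Simple reward (equation 1): small reward per step, large penalty on collision."""
    return -10.0 if collided else 0.1

def behaviour_reward(v, w, collided):
    """Behaviour reward (equation 2): reward linear speed, penalise turning."""
    return -10.0 if collided else v - abs(w)
```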
#### 6.3.2 Goal-Oriented Navigation
When the task is changed to a goal-oriented navigation, more information is needed for the agent to receive a reward signal that differentiates whether it is in a better position regarding the goal. For that reason, the chosen metrics were the distance to the goal \(d\) and the heading towards the goal \(\theta\), as the minimum amount of information needed to locate the position of the goal. Thus, the reward function is extended to account for the new information:
\[R=\left\{\begin{array}{ll}-10&\mbox{on collision}\\ (v-c|w|)cos(\theta)-v_{max}&\mbox{at each time step}\\ 10&\mbox{on arrival}\end{array}\right. \tag{3}\]
where \(\cos(\theta)\) determines whether the robot is facing the objective and makes the term negative when the agent is heading away from it, \(c\) is a constant factor that prevents the difference between the velocity terms from yielding a negative reward on its own, and \(v_{max}\) is the maximum linear velocity the robot can achieve. Combined with the previous reward function elements, \(v\) and \(w\), the agent avoids further penalty when moving straight towards the goal and is penalised more when moving away from it. Making the reward for reaching the goal an order of magnitude larger allows the agent to risk some reward along the way as long as it reaches it. This reward function is referred to as the negative reward function because the values it provides at each step lie in \([-0.8,0]\).
A positive version of the reward function, where there is no constant penalty based on the maximum linear velocity of the agent, was also used to evaluate which version has better results:
\[R=\left\{\begin{array}{ll}-10&\mbox{on collision}\\ (v-c|w|)cos(\theta)&\mbox{at each time step}\\ 10&\mbox{on arrival}\end{array}\right. \tag{4}\]
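The two goal-oriented variants can be sketched in the same way; the constant \(c\) is not specified in the text, so the default below is a placeholder, while \(v_{max}=0.4\) follows from the stated per-step range of \([-0.8,0]\) and the chosen linear velocities.

```python
import math

def navigation_reward(v, w, heading, collided, reached,
                      c=0.25, v_max=0.4, positive=False):
    """Goal-oriented rewards (equations 3 and 4). `heading` is the angle to the
    goal; `c` is a placeholder value (not given in the text), and `v_max` is the
    maximum linear velocity among the available actions."""
    if collided:
        return -10.0
    if reached:
        return 10.0
    step = (v - c * abs(w)) * math.cos(heading)
    # The negative variant (eq. 3) subtracts v_max at every step; the positive
    # variant (eq. 4) omits that constant penalty.
    return step if positive else step - v_max
```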
Different approaches could have been taken when designing the reward function for such a task, but the current design was chosen, and a sparse reward scheme was avoided altogether, in an attempt to help the agent generalise and perform better in different kinds of environments.
### Neural Network Architectures
A CNN architecture based on the work proposed by Wang et al. [45] is used for the D3QN agent to process the stack of depth images corresponding to the state and output the q-values of each action. The number of layers and the hyperparameters of each layer are the same as those of the network evaluated in that article. For the goal-oriented navigation task, the distance and angle towards the target are appended to the output of the flattening layer.
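As an illustration of this architecture, a hedged PyTorch sketch of a dueling Q-network over a stack of four depth images, with the polar coordinates appended after the flattening layer, is given below; the layer sizes are assumptions in the spirit of the Atari-style network of Wang et al. [45], not the precise configuration used in the thesis.

```python
import torch
import torch.nn as nn

class DuelingDepthQNet(nn.Module):
    """Illustrative dueling CNN for the D3QN agent: four stacked depth images are
    processed by convolutions, the flattened features are concatenated with the
    polar coordinates to the goal, and two streams estimate V(s) and A(s, a)."""

    def __init__(self, n_actions=10, goal_dims=2, hidden=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():   # infer the flattened size for an 80x64 depth image
            flat = self.conv(torch.zeros(1, 4, 64, 80)).shape[1]
        self.value = nn.Sequential(nn.Linear(flat + goal_dims, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(flat + goal_dims, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_actions))

    def forward(self, frames, goal):
        x = torch.cat([self.conv(frames), goal], dim=1)
        v, a = self.value(x), self.advantage(x)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)
```

In the Rainbow variant described next, the final linear layers would instead output one logit per atom and per action and would be replaced by noisy linear layers.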
For the Rainbow DQN agent, the last layers of the network architecture are modified, following the implementation of the C51 agent described by Bellemare et al. in [3], which uses 51 atoms to estimate the distribution of returns instead of their expected value.

Training a robotics RL agent in the real world requires a significant amount of time for the algorithm to converge and constant supervision, both to reset the agent to its initial state after reaching a terminal state and to avoid accidents. For that reason, the implementation proposed in this thesis is done in a simulator, which has benefits such as speeding up the training time, automatically resetting the whole environment after each episode, and allowing different initial configurations so the agent explores the entire environment better.
### Simulated Environment
The Robot Operating System (ROS) 6 was chosen as the robotics framework to run the experiments, as it provides many software libraries and tools used to build robot applications, as well as communication between the different software components needed to run or simulate a robotic system, such as sensor readings, control algorithms and task algorithms. The distribution of ROS used to run the experiments was Melodic Morenia.
Footnote 6: [http://wiki.ros.org/](http://wiki.ros.org/)
The simulated robot used for training is the Turtlebot2 7, an open-source robot commonly used in robotics research. It features an Asus Xtion PRO LIVE RGB-D camera and the Kobuki differential drive base, which has a variety of sensors, such as odometry, a gyroscope and a laser sensor. Its maximum translational velocity is 0.7 m/s, and its maximum rotational velocity before performance degradation is 110 deg/s. Being differential wheeled allows it to change direction without additional forward or backward motion. The laser sensor was used to detect collisions accurately at fixed distances, but its data were not included in the state representation, meaning that a bumper or another collision-detecting sensor could replace it.
Footnote 7: [https://www.turtlebot.com/turtlebot2/](https://www.turtlebot.com/turtlebot2/)
Gazebo 8 was used as the robotics simulator to model the environment and load the Turtlebot2 and its sensors for training the proposed reinforcement learning agent, with the simulation running up to ten times faster than real time. The Gazebo version used is 9.0.0. The open-source openai_ros ROS package, developed by The Construct 9, was used as the RL framework, providing communication between ROS, Gazebo and the RL scripts. It also handles the environment's set-up in Gazebo, provides states and rewards at each time step, and resets the environment at the end of each episode.
Footnote 8: [https://gazebosim.org/home](https://gazebosim.org/home)
Footnote 9: [https://www.theconstructsim.com/](https://www.theconstructsim.com/)
Finally, the reinforcement learning algorithms, training and evaluating scripts were implemented using the Python programming language, with the OpenCV computer vision library being used to preprocess the depth images. The Rainbow DQN and D3QN algorithms were based on the implementation of Dittert [10] and Arulkumaran [1].
### Training
The training was done in a simulated environment. The hyperparameter values were chosen based on the algorithms' original work. The learning rate, Adam optimiser, gamma, batch size and hidden layer size were the same as in the original DQN work of Mnih et al. [28]. The buffer size was lowered because of initial hardware limitations, and the number of random steps used to fill it was also proportionally decreased. The \(N\) step, \(\tau\), and minimum \(\epsilon\) values were chosen according to the Rainbow DQN proposed by Hessel et al. [14]. The D3QN agent requires the \(\epsilon\) hyperparameter for exploration, which starts with a value of 1 and is exponentially decayed until it reaches \(\epsilon_{min}\). Since the number of training episodes would be much smaller compared to other RL-related works, the \(\alpha\) value was slightly increased and the \(\omega\) value decreased to prioritise experiences earlier. A soft update of the target parameters with the value of \(\tau\), as described by Lillicrap et al. in [21], was chosen instead of a hard update. A summary of the hyperparameters used can be seen in Table 2.
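As a small illustration, the soft target-network update with \(\tau\) can be sketched as follows; this is a generic PyTorch-style helper, not the exact code used here.

```python
def soft_update(online_net, target_net, tau=0.001):
    """Polyak/soft update of the target network parameters:
    theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    for p_t, p_o in zip(target_net.parameters(), online_net.parameters()):
        p_t.data.copy_(tau * p_o.data + (1.0 - tau) * p_t.data)
```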
The depth images were resized, normalised and pre-processed before being passed to the agent as observations. The default size of the depth images used for training was \(80\times 64\) pixels, similar to the size of images used for training RL agents in Atari games since the DQN implementation in [28], but keeping the width and height ratio of the original image size. Also, at each step, the
depth image was stacked with the three previous ones, as described in the design section, while at the start of each episode, the initial frame was copied four times.
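A hedged sketch of this preprocessing and frame-stacking step is given below; the maximum sensor range used for normalisation and the exact handling of invalid readings are assumptions, since the text does not specify them.

```python
import cv2
import numpy as np
from collections import deque

FRAME_W, FRAME_H, STACK = 80, 64, 4            # default sizes from the text

def preprocess(depth_image, max_range=5.0):
    """Resize and normalise a raw depth image to [0, 1]; `max_range` is a
    hypothetical sensor range used only for the normalisation."""
    frame = cv2.resize(depth_image, (FRAME_W, FRAME_H))
    frame = np.nan_to_num(frame, nan=max_range)  # treat missing readings as far away
    return np.clip(frame / max_range, 0.0, 1.0).astype(np.float32)

class FrameStack:
    """Keeps the last four processed frames; at the start of an episode the
    initial frame is copied four times, as described above."""

    def __init__(self):
        self.frames = deque(maxlen=STACK)

    def reset(self, frame):
        self.frames.clear()
        self.frames.extend([preprocess(frame)] * STACK)
        return np.stack(list(self.frames))

    def step(self, frame):
        self.frames.append(preprocess(frame))
        return np.stack(list(self.frames))
```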
The experiments were performed on a computer equipped with an AMD Ryzen 5 3600 CPU, an NVIDIA RTX 3060 Ti GPU and 32 GB of RAM.
### Obstacle Avoidance
The obstacle avoidance agent was trained in a \(5m\) environment with different obstacles, as shown in Fig. 6. The reasoning behind its design was to expose the RL agent to different obstacle shapes to learn better how to avoid collisions. At the start of each episode, the agent's starting position was randomly initialised from 15 possibilities to accelerate the learning process and address the challenge of generalisation presented in [52]. Each training session lasted for 1500 episodes, and the episodes ended after 400 steps or when the agent crashed into an obstacle. For better accuracy, collisions were detected with the robot's laser sensor at a distance of 0.3 meters.
As shown in Table 3, six obstacle avoidance agents were trained, with their label referring to the algorithm, reward function and size of depth images used during their training.
### Navigation
The goal-oriented agent was trained in a slightly wider, \(6m\) environment with only primitive shapes as obstacles, which can be seen in Fig. 7. The reason for using more basic obstacles is for the agent to focus more on the path-planning aspect of the goal-oriented navigation task rather than on obstacle avoidance. At the start of each episode, the agent's starting position was randomly initialised from 5 possibilities and the goal position from a set of 6 cases. The maximum number of steps was slightly lowered to 350, but the total episode count was increased to 25,000. The collision detection and episode-ending conditions were almost identical to the previous task, with an added terminal state when the agent reached the goal. A goal was considered reached at a lenient distance of \(0.8m\) to speed up the learning process.
\begin{table}
\begin{tabular}{c c c} \hline \hline Hyperparameter & D3QN & Rainbow DQN \\ \hline Learning rate & 0.00025 & 0.00025 \\ Batch Size & 32 & 32 \\ Hidden Layer Size & 512 & 512 \\ \(\gamma\) & 0.99 & 0.99 \\ Buffer Size & 100000 & 100000 \\ Initial Random Steps & 20000 & 20000 \\ \(\tau\) & 0.001 & 0.001 \\ \(\epsilon_{min}\) & 0.01 & N/A \\ N step & 1 & 3 \\ \(\omega\) & 0.4 & 0.4 \\ \(\alpha\) & 0.6 & 0.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameter values. The D3QN agent requires the hyperparameter \(\epsilon\) for exploration, while Rainbow DQN uses the proposed NoisyNets for exploration.
Figure 6: The training environment for the obstacle avoidance task. Arrows indicate available random starting configurations.
Figure 7: The training environment for the navigation task. Arrows indicate possible starting configurations, and dots represent goal positions.
Three agents were trained, with different algorithm and reward function choices, as seen in Table 4.
### Evaluation
#### 6.9.1 Obstacle Avoidance
For the obstacle avoidance task, the models were subjected to two evaluations, one for their ability to evade different obstacles and another to test whether their training was enough to navigate a circuit-like environment without a goal.
The environment used to test obstacle avoidance competence is the same as for training but with different starting points that put the robot close to the obstacles from the beginning. Two points nearby were chosen as starting positions for each of the six types of obstacles, resulting in 12 initial configurations, as seen in Fig. 8. The evaluation had a duration of 600 episodes, with 100 steps each. The idea behind it is only to check whether the robot can avoid collisions with the specific types of obstacles it is trained with, as its capability to move forward while avoiding walls will be tested later.
The second test was performed in a simple circuit-like environment with four pre-defined starting points, shown in Fig. 9. A perfect performance was not expected, as the agent was trained in a different environment. However, the reasoning behind it is that RL agents sometimes optimise their behaviour in unintended ways. One such case for an obstacle avoidance task, as there is no reward based on a clear objective other than a penalty for colliding, would be if the agent moved around in
\begin{table}
\begin{tabular}{c c} \hline \hline Agent & Reward Function \\ \hline NegativeD3QN & 3 \\ NegativeRainbow & 3 \\ PositiveRainbow & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Agents trained for the goal-oriented navigation task. Their names indicate their algorithm and reward function used during training.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Reward Function & Size of Depth Image \\ \hline SimpleD3QN & 1 & \(80\times 64\) \\ SimpleRainbow & 1 & \(80\times 64\) \\ SimpleRainbowL & 1 & \(160\times 128\) \\ BehaviourD3QN & 2 & \(80\times 64\) \\ BehaviourRainbow & 2 & \(80\times 64\) \\ BehaviourRainbowL & 2 & \(160\times 128\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Agents trained for the obstacle avoidance task. Their names indicate their algorithm, reward function and the size of the depth images used during training.
circles. To test whether the agents can navigate a road bounded by walls where circular motion is impossible without colliding, a simple circuit-like scenario from the openai_ros package was adapted as an evaluation environment.
In both cases, the distance for considering a collision was slightly lowered to \(0.2m\) to evaluate the agent's reaction competence better.
#### 6.9.2 Navigation
For the navigation task, the models were evaluated in the same environment used for training, both with and without the same starting points. The assignment was more challenging, as the agents needed to avoid obstacles while moving closer to the goal; therefore, each agent was tested on its learning and adaptive capabilities. The agent was allowed to navigate a maximum of 250 steps to reach its destination and was evaluated for 1000 episodes. The number of goal positions was increased to 10, but the starting configurations were kept at 5. The adjustment of starting and goal positions is shown in Fig. 10.
The collision detection was turned off during the evaluation process so that the agent still had a chance to overcome the obstacles and fulfil the goal-reaching task.
## 7 Experimental Results
### Training
There are different metrics to consider when evaluating the training performance of an RL agent. The most important is the return, which indicates how well the agent performed its task. However, in the goal-oriented navigation task, the starting and goal positions are randomly chosen from a set at the start of each episode, meaning that the maximum return the agent can achieve per episode varies; therefore, this metric can be quite noisy. Nonetheless, it still shows the learning curve and is expected to increase over time as the agent optimises its behaviour. One task-independent metric used to describe the learning of an algorithm, also common in other ML applications, is the loss function, which is expected to decrease over time as the agent explores its environment and improves its estimations.
Figure 8: Obstacle avoidance evaluation starting positions. Two points near each obstacle were chosen as valid starting positions.
Figure 10: Goal-Oriented navigation evaluation environment. The starting and goal positions were shifted to test the agents’ adaptation capacity.
Figure 9: Circuit navigation evaluation environment for the obstacle avoidance agents. Although the obstacles are simpler, the lack of space prevents circular motion from being an optimal behaviour to avoid collisions.
For the TD methods, the loss indicates the mean squared error between the calculated \(q\) value and its target value. A task-dependent metric that can be compared for the navigation task is the percentage of times the agent reaches the goal. As for the obstacle avoidance task, the collision rate and the number of steps the agent managed to navigate before crashing can be measured.
Because the original plots are very noisy, mainly due to the initial random position at the start of each episode, the results presented were calculated using a moving average of one hundred steps.
Finally, the different metrics were measured in episodes, as the tasks relied on avoiding collisions or reaching the goal within a reasonable amount of time steps, and the agents were rewarded or punished accordingly. The only exception was the loss function, which was monitored at each time step to verify the learning process with each batch of samples used.
#### 7.1.1 Obstacle Avoidance
Six different agents were trained and compared for the obstacle avoidance task: for each of the two reward functions, a D3QN agent and two Rainbow agents trained with different depth image sizes.
Among the agents with the simple reward function, which corresponds to equation 1, SimpleRainbowL achieved slightly better results than SimpleRainbow by maintaining a higher return, a lower collision rate and more training steps, as seen in Fig. 11. The D3QN agent accumulated fewer training steps, meaning it crashed earlier in each episode. The use of a smaller depth image size allowed SimpleRainbow to seemingly reach peak performance at around 800 episodes, followed by SimpleRainbowL at 1000 episodes and SimpleD3QN at 1200 episodes, when their return was at its highest and their collision rate at its lowest.
Similar results were achieved by the agents with the behaviour reward, corresponding to equation 2, as demonstrated in Fig. 12. However, peak performance was reached after 1100 episodes, as indicated by the collision rate, since the return now also depends on the behaviour. BehaviourRainbowL had a lower collision rate and a higher return, meaning that it performed better both at the task and at adapting to the constraints on the velocities.
As seen in Fig. 13, the Rainbow agents outperformed the D3QN ones at avoiding collisions by doubling the number of steps navigated and having much lower crash rates. Also, using a larger image slightly improved the results, but at the cost of requiring more time to train. The loss and returns could not be compared, as the reward functions operated at different scales. Agents with the behaviour reward took longer to learn to avoid collisions, as they seemed to start optimising their behaviour first. Still, their performance increased sharply after some exploration, which can be seen in the drop of their collision rate in Fig. 13. Even so, as expected, the agents with the simple reward had lower collision rates, as their only objective was to avoid collisions. In contrast, the behaviour reward imposed a penalty on the other agents' choice of speed, which demanded more training time to improve their results.
All agents learnt at different rates, as seen in the decrease in their average loss.
#### 7.1.2 Navigation
For the navigation task, three different agents were trained: a D3QN agent and a Rainbow agent with the negative reward function, and a Rainbow agent with the positive one. In this case, the average returns cannot be compared directly, as the two reward functions operate on different scales, but they still show each agent's learning process. Also, the average number of steps was not used as a metric because, for most of the training, the agents collided quickly and the episode ended early while they learnt to reach the goal.
Figure 11: Training performance of the obstacle avoidance agents with simple reward. The best performance was achieved by SimpleRainbowL, the Rainbow DQN agent that used the simple reward function and larger depth image size.
Figure 12: Training performance of the obstacle avoidance agents with behaviour reward. The best performance was achieved by BehaviourRainbowL, the Rainbow DQN agent that used the behaviour reward function and larger depth image size.
As evidenced in Fig. 14, the Rainbow agents performed better than the D3QN agent by doubling the number of times they reached the goal. Additionally, NegativeRainbow, the agent with the negative-based reward function corresponding to equation 3, yielded better results by reaching the goal more often, as expected. The reason is that the additional constraint encourages the agent to reach the destination as fast as possible to stop the punishment incurred at each time step. Nonetheless, the positive reward-based agent, which follows equation 4, still managed to optimise its behaviour and reach the goal a fair number of times, as seen by its increasing return.
All the agents had more room to learn, as seen in their increasing returns and decreasing losses at the end of the training process.
### Evaluation
For the evaluation process, only the task-dependent metrics are compared, as there is no learning process involved, and the trained models are only used to select their best-valued action, given the current state of the environment. Only Rainbow agents were used for evaluation, as they drastically outperformed the D3QN agents during training and were expected to perform better even under different conditions.
#### 7.2.1 Obstacle Avoidance
For both tests, obstacle avoidance and circuit navigation, the agents were evaluated on their average crash rate and the average number of steps they could navigate without a collision. Also, the action selected at each time step was tracked to analyse the behaviour of each agent.
The results for the evaluation of the obstacle evasion task were similar to those obtained during training. SimpleRainbowL achieved the best results, as seen in Table 5, by having a lower collision rate and a higher number of steps without crashing. Using the simple reward function almost halved the average collision percentage compared with the behaviour reward function, and a larger depth image size also produced slightly better results. The average collision rates are higher than the final averages
Figure 13: Training performance of the obstacle avoidance agents. Better results were achieved by using the Rainbow DQN algorithm, the simple reward function and a larger depth image size.
Figure 14: Training performance of the goal-oriented navigation agents. NegativeRainbow performed the task better by achieving a higher rate of goals reached. Meanwhile, the loss and return evidenced the learning process of all agents.
seen during the training process in Fig. 13, as the difference in conditions influences the results.
However, the difference in performance can be related to the difference in each agent's chosen action distribution, seen in Fig. 15. The agents with the simple reward function had a roughly uniform distribution in their action selection, with a slight preference for evading in a particular direction. Meanwhile, the agents with the behaviour reward function prioritised the highest linear velocity and avoided turning altogether, preferring small angular velocities when it was necessary to avoid an obstacle. Given that, the difference in task performance is unsurprising, considering that one type of agent had to evade obstacles while going at full speed and barely turning.
In the case of the circuit navigation evaluation, using a larger depth image size proved to be more important than the reward function, as Table 6 shows that those agents collided less, independently of their reward function. Still, SimpleRainbowL achieved a lower collision rate and a higher number of steps without crashing, showing its better ability to adapt to a different environment. Nonetheless, the results were better than expected, with all agents able to navigate above the average number of steps and only having difficulties in the sharp turns of the circuit, which were absent from their training environment. In addition, the best agent, SimpleRainbowL, stayed below the halfway mark for the average number of collisions, as seen in Table 6, with its evident difficulty being the left turn at the centre of the scene, which can be reached from two out of the four starting points, corresponding to the right and bottom starting positions in Fig. 9.
The contrast in the chosen action distributions is also seen in this task, as evidenced by Fig. 16, with the behaviour reward function demanding less turning and more speed. There was an increase in the choice of turning right, which was caused by the circuit design.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Average Collision Percentage & Average Steps \\ \hline SimpleRainbow & 72.5\% & 109 \\ SimpleRainbowL & **48\%** & **150** \\ BehaviourRainbow & 82.25\% & 115 \\ BehaviourRainbowL & 58.25\% & 146 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Evaluation performance of the obstacle avoidance agents in the circuit navigation. SimpleRainbowL achieved the best results by colliding less, even in an environment with sharper turns.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Average Collision Percentage & Average Steps \\ \hline SimpleRainbow & 43.16\% & 69 \\ SimpleRainbowL & **41.83\%** & **71** \\ BehaviourRainbow & 74.83\% & 45 \\ BehaviourRainbowL & 70.5\% & 47 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Evaluation performance of the obstacle avoidance agents. SimpleRainbowL achieved the best results by colliding less and persisting for more time without crashing.
Figure 15: Distributions of the chosen actions by the obstacle avoidance agents during evaluation. The behaviour reward function restricted the choice of angular speeds and prioritised the maximum value of linear speed.
Figure 16: Distributions of the chosen actions by the obstacle avoidance agents during circuit navigation evaluation. The increase in the need to turn right further evidences the difference in behaviour and performance. The simple reward function allowed the agents to overcome the circuit’s sharp turns better.
#### 7.2.2 Navigation
The navigation agents were evaluated in the same scene as their training, and collision detection was turned off to better test their learnt ability to reach the goal. When evaluated under the same conditions as in training, as noticed in Table 7, using the negative-based reward achieves better results, almost beating the environment altogether. Nonetheless, the positive reward-based agent also achieved good results, reaching the goal around seventy per cent of the time. The lower average number of steps of NegativeRainbow reflects its speed at reaching its destination.
Although both agents were trained under the same restrictions for their choices of linear and angular velocities, there was a noticeable difference in the distribution of their chosen actions, evidenced in Fig. 17. NegativeRainbow preferred the highest linear speed and relied less on turning, displaying its better mastery of the task, while PositiveRainbow favoured a lower linear speed.
However, once the initial conditions are changed, there is a sharp decline in both agents' performance, as seen in Table 8, with them reaching the goal less than half as often as in the previous evaluation. It is also noteworthy that the agents took almost the maximum number of steps to reach the goal. Their uncertainty was also reflected in their actions, shown in Fig. 18, with both agents turning more often.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Average Goal Reached Percentage & Average Steps \\ \hline NegativeRainbow & **96.9\%** & **79** \\ PositiveRainbow & 67.9\% & 173 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Evaluation performance of the goal-oriented navigation agents under the same circumstances. NegativeRainbow almost beat the environment by achieving a near-perfect goal-reaching rate.
Figure 17: Distributions of the chosen actions by the goal-oriented navigation agents during evaluation. NegativeRainbow shows more confidence by choosing the higher linear speed, even though both agents are rewarded by its choice.
The lower performance led to the belief that either the state representation for the navigation task needed to be improved or the network was ignoring the polar coordinates. Furthermore, the agent seemed to learn to reach the goal by visually recognising the path in its training environment.
Nonetheless, some of NegativeRainbow's trajectories in its training environment, where it fulfilled its task almost perfectly, were compared with the path computed by the standard navigation stack developed for the Turtlebot2 in Fig. 19. The navigation stack uses Dijkstra's algorithm for the global path planner and DWA for local path planning. To compute the paths, the cost map first had to be generated manually using the GMapping package. The RL agent's trajectories only required loading the trained policy, but they were registered on the same map for clarity.
Even if the agent's capability to generalise its knowledge was lacking, it seemed to approach the optimal behaviour in its training environment, as its trajectories were straight and shorter than those produced by the path planner. NegativeRainbow preferred to navigate between the obstacles to reach the goal faster rather than planning a path around them.
## 8 Discussion
The best obstacle avoidance agent, SimpleRainbowL, reached a collision rate below twenty per cent during training, as seen in Fig. 11. It also achieved 41.83% under different conditions, as evidenced in Table 5, and 48% in a different environment, reported in Table 6. This hints that a simple reward function might be enough to fulfil a task if the state representation is adequate, in this case, the depth images.
Training with a larger image size yielded slightly better results but required more time to train for the same number of episodes. When the agents were evaluated in a different environment, the size of the images they used appeared to influence their results more, as seen in Table 6. More
Figure 19: Comparison between NegativeRainbow’s trajectories and the standard Turtlebot2 navigation stack, which uses Dijkstra’s algorithm and DWA. Arrows indicate the starting configurations, and the circle is the goal position.
experiments would be required to validate these assumptions or verify if the standard image size was too small.
The agents with the behaviour reward function took longer to lower their collision rates, as seen in Fig. 13, because they had to consider the constraints imposed by their reward function while optimising their policy. Nonetheless, rewarding the linear velocity and penalising the angular velocity achieved the expected result, as the agents preferred to move faster and turn less. This behaviour may not be ideal for the obstacle avoidance task, where the agent must prioritise avoiding collisions rather than moving fast. Still, it served as a proof of concept and as a basis for designing the reward function of the goal-oriented navigation task, where reaching the objective faster was preferred.
When switching to the goal-oriented navigation task, the distance and angle to the goal were added to the state representation so the agent could be rewarded for moving closer to the objective. Nonetheless, the agent also had to use the depth images to avoid obstacles. The best agent, NegativeRainbow, achieved a 96.9% rate of reaching the goal in its training environment and 35.5% under different conditions, seen in Tables 7 and 8 respectively. Its success during training seemed to be due to the use of depth images as part of the state representation, as the drop in performance during evaluation indicated that the agent could not consistently localise its goal. Furthermore, the analysis of the agent's trajectories seemed to imply that it almost reached the optimal behaviour in its training environment, although further study and measurements would be required to confirm it.
It was noteworthy that NegativeRainbow almost always reached the goal in its training environment while also avoiding obstacles, suggesting that using depth images might be enough to learn the task in a specific environment. However, the lack of solid generalisation to different conditions makes it impractical for use in different environments. The results imply that switching from a simpler task to a more complex one requires more consideration than adding the additional information required.
In the goal-oriented navigation task, the penalty at each time step improved the agent's performance, seen in Table 7, as the agent was urged to reach the destination as fast as possible. On the contrary, a reward at each time step might have instigated the agent to accumulate reward by navigating rather than by reaching the goal, as the agent still appeared to increase its average return during training, which can be noticed in Fig. 14.
In all cases, the Rainbow DQN algorithm achieved better results than the D3QN algorithm. The improved exploration mechanism provided by the noisy networks, the consideration of additional time steps in the computation of the n-step return, and the better estimates provided by modelling the return distribution aided the agents in accomplishing both tasks. Similar to the original paper, where Hessel et al. [14] demonstrated a significant improvement of Rainbow over previous variations of the DQN algorithm, the agents trained with the Rainbow DQN algorithm reached significantly more goals and collided less with obstacles than those trained with D3QN. This supports the idea of using variations of DRL algorithms with additional improvements to achieve better results rather than clinging to the most popular ones.
Finally, all agents trained seemed to learn their tasks with varying degrees of success, as their average reward kept increasing and the loss of their algorithm decreased during their training. Longer training sessions could increase the agents' performance even further.
## 9 Conclusions
This research project involved implementing a DRL approach for different robot navigation-related tasks. The proposed methods achieved a 41.83% collision rate for the obstacle avoidance task and a 96.9% target-reaching rate for the goal-oriented navigation task in their training environments. However, their lower performance during evaluation suggests that further work is required to achieve optimal behaviour.
The experimental work suggests that the improved exploration, more informed updates and better estimations of the Rainbow DQN allowed it to reach more targets and collide less during training than the D3QN agents. The results support the idea that, much like its comparison with the previous variations of the DQN method in their original domain of Atari games, Rainbow DQN might also perform better at navigation-related tasks. This could lead to improvements in existing works or as an idea to consider when designing a new DRL approach in the same field.
To perform the goal-oriented navigation task, the agent was provided additional information to measure how close it was to the goal compared to the design of the obstacle avoidance task. The trained agent seemed to succeed at the task in its training environment with a 96.9% goal-reaching rate but only achieved 35.5% under different conditions, seemingly learning the specific path to the goals during training. The results suggest that the transition from the obstacle avoidance task to the goal-oriented navigation task could not be accomplished with the parameters added for the agent's localisation and that further study should be performed about the state representation or balance of the weight of each data source.
Finally, a behaviour was induced in an obstacle avoidance agent by placing penalties based on its linear and angular velocities in the reward function, which led to the robot preferring to move faster and avoid turning. Still, it avoided fewer obstacles than using a simpler reward function with the same amount of training, suggesting that it required more time to learn its task. In the case of the goal-oriented navigation task, a penalty at each time step encouraged an agent to reach the target faster and more consistently than using a reward function that could grant positive values. Similar constraints could also be implemented for other navigation-related tasks, but a trade-off between training time and performance might still apply.
## References
* [1] K. Arulkumaran. Rainbow: Combining improvements in deep reinforcement learning. [https://github.com/Kaixhin/Rainbow](https://github.com/Kaixhin/Rainbow), 2017.
* [2] J. Aulinas, Y. Petillot, J. Salvi, and X. Llado. The slam problem: a survey. volume 184, pages 363-371, 01 2008.
* [3] M. G. Bellemare, W. Dabney, and R. Munos. A distributional perspective on reinforcement learning, 2017.
* [4] C. Berner, G. Brockman, B. Chan, V. Cheung, P. Debiak, C. Dennison, D. Farhi, Q. Fischer, S. Hashme, C. Hesse, R. Jozefowicz, S. Gray, C. Olsson, J. Pachocki, M. Petrov, H. P. de Oliveira Pinto, J. Raiman, T. Salimans, J. Schlatter, J. Schneider, S. Sidor, I. Sutskever, J. Tang, F. Wolski, and S. Zhang. Dota 2 with large scale deep reinforcement learning. _CoRR_, abs/1912.06680, 2019.
* [5] F. Bonin-Font, A. Ortiz, and G. Oliver. Visual navigation for mobile robots: A survey. _J. Intell. Robotics Syst._, 53(3):263-296, nov 2008.
* [6] Q. Cai, C. Cui, Y. Xiong, W. Wang, Z. Xie, and M. Zhang. A survey on deep reinforcement learning for data processing and analytics. _IEEE Transactions on Knowledge and Data Engineering_, pages 1-1, 2022.
* [7] C. Chen, Y. Liu, S. Kreiss, and A. Alahi. Crowd-robot interaction: Crowd-aware robot navigation with attention-based deep reinforcement learning, 2018.
* [8] R. Cimurs, J. H. Lee, and I. H. Suh. Goal-oriented obstacle avoidance with deep reinforcement learning in continuous action space. _Electronics_, 9(3):411, Feb 2020.
* 267, 03 2002.
* [10] S. Dittert. Dqn-atari-agents: Modularized pytorch implementation of several dqn agents, i.a. ddqn, dueling dqn, noisy dqn, c51, rainbow and drqn. [https://github.com/BY571/DQN-Atari-Agents](https://github.com/BY571/DQN-Atari-Agents), 2020.
* [11] M. Fortunato, M. G. Azar, B. Piot, J. Menick, I. Osband, A. Graves, V. Mnih, R. Munos, D. Hassabis, O. Pietquin, C. Blundell, and S. Legg. Noisy networks for exploration. _ArXiv_, abs/1706.10295, 2017.
* [12] M. Guzel. Autonomous vehicle navigation using vision and mapless strategies: A survey. _Advances in Mechanical Engineering_, 2013, 01 2013.
* [13] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. _CoRR_, abs/1512.03385, 2015.
* [14] M. Hessel, J. Modayil, H. van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver. Rainbow: Combining improvements in deep reinforcement learning, 2017.
* [15] H. Hexmoor. _Essential Principles for Autonomous Robotics_. Morgan & Claypool Publishers, 2013.
* [16] M. Hoy, A. S. Matveev, and A. V. Savkin. Algorithms for collision-free navigation of mobile robots in complex cluttered environments: a survey. _Robotica_, 33(3):463-497, 2015.
* [17] G. I., Y. Bengio, and A. Courville. _Deep Learning_. The MIT Press, 2016.
* [18] J. Kober and J. Peters. _Reinforcement Learning in Robotics: A Survey_, pages 579-610. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
* [19] A. Koubaa. _Robot Operating System (ROS): The Complete Reference (Volume 1)_. Springer, 2016.
* [20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In F. Pereira, C. Burges, L. Bottou, and K. Weinberger, editors, _Advances in Neural Information Processing Systems_, volume 25. Curran Associates, Inc., 2012.
* [21] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra. Continuous control with deep reinforcement learning, 2015.
* [22] L.-J. Lin. Self-improving reactive agents based on reinforcement learning, planning and teaching. _Mach. Learn._, 8(3-4):293-321, may 1992.
* [23] W. Luo, P. Sun, F. Zhong, W. Liu, T. Zhang, and Y. Wang. End-to-end active object tracking and its real-world deployment via reinforcement learning, 2018.
* [24] N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim. Applications of deep reinforcement learning in communications and networking: A survey. _IEEE Communications Surveys & Tutorials_, 21(4):3133-3174, 2019.
* [25] L. Ma, Y. Liu, and J. Chen. Using rgb image as visual input for mapless robot navigation, 2019.
* [26] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. P. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu. Asynchronous methods for deep reinforcement learning. 2016.
* [27] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing atari with deep reinforcement learning, 2013.
* [28] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. _Nature_, 518(7540):529-533, Feb. 2015.
* [29] J. Placed and J. Castellanos. A deep reinforcement learning approach for active slam. _Applied Sciences_, 10:8386, 11 2020.
* [30] S. R., N. I. R., and D. Scaramuzza. _Introduction to Autonomous Mobile Robots (2nd ed.)_. The MIT Press, 2011.
* [31] F. Rosenblatt. _The Perceptron, a Perceiving and Recognizing Automaton Project Para_. Report: Cornell Aeronautical Laboratory. Cornell Aeronautical Laboratory, 1957.
* [32] X. Ruan, D. Ren, X. Zhu, and J. Huang. Mobile robot navigation based on deep reinforcement learning. pages 6174-6178, 06 2019.
* [33] R. S. Sutton and A. G. Barto. _Reinforcement Learning: An Introduction (2nd ed.)_. The MIT Press, 2018.
* [34] T. Schaul, J. Quan, I. Antonoglou, and D. Silver. Prioritized experience replay, 2015.
* [35] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017.
* [36] P. Shah, M. Fiser, A. Faust, J. C. Kew, and D. Hakkani-Tur. Follownet: Robot navigation by following natural language directions with deep reinforcement learning. 2018.
* [37] A. Shitsukane, W. Cheriuyot, C. Otieno, and M. Mgala. A survey on obstacles avoidance mobile robot in static unknown environment. _International Journal of Computer (IJC)_, 03 2018.
* [38] A. Shitsukane, W. Cheriuyot, C. Otieno, and M. Mgala. A survey on obstacles avoidance mobile robot in static unknown environment. _International Journal of Computer (IJC)_, 03 2018.
* [39] D. Silver, A. Huang, C. Maddison, A. Guez, L. Sifre, G. Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis. Mastering the game of go with deep neural networks and tree search. _Nature_, 529:484-489, 01 2016.
* [40] D. Silver, T. Hubert, J. Schrittwieser, I. Antonoglou, M. Lai, A. Guez, M. Lanctot, L. Sifre, D. Kumaran, T. Graepel, T. Lillicrap, K. Simonyan, and D. Hassabis. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. _Science_, 362(6419):1140-1144, 2018.
* [41] S. Sun, R. Wang, and B. An. Reinforcement learning for quantitative trading. _CoRR_, abs/2109.13851, 2021.
* [42] J. R. Sanchez-Ibanez, C. J. Perez-del Pulgar, and A. Garcia-Cerezo. Path planning for autonomous mobile robots: A review. _Sensors_, 21(23):7898, Nov 2021.
* [43] V. Uc-Cetina, N. Navarro-Guerrero, A. Martin-Gonzalez, C. Weber, and S. Wermter. Survey on reinforcement learning for language processing. _CoRR_, abs/2104.05565, 2021.
* [44] H. van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double q-learning, 2015.
* [45] Z. Wang, T. Schaul, M. Hessel, H. van Hasselt, M. Lanctot, and N. de Freitas. Dueling network architectures for deep reinforcement learning, 2015.
* [46] C. J. C. H. Watkins and P. Dayan. Q-learning. _Machine Learning_, 8:279-292, 1992.
* [47] P. Wenzel, T. Schon, L. Leal-Taixe, and D. Cremers. Vision-based mobile robotics obstacle avoidance with deep reinforcement learning. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 14360-14366, 2021.
* [48] L. Xie, S. Wang, A. Markham, and N. Trigoni. Towards monocular vision based obstacle avoidance through deep reinforcement learning, 2017.
* [49] F. Ye, S. Zhang, P. Wang, and C. Chan. A survey of deep reinforcement learning algorithms for motion planning and control of autonomous vehicles. _CoRR_, abs/2105.14218, 2021.
* [50] C. Yu, J. Liu, S. Nemati, and G. Yin. Reinforcement learning in healthcare: A survey. _ACM Comput. Surv._, 55(1), nov 2021.
* [51] F. Zhu, Y. Zhu, V. C. Lee, X. Liang, and X. Chang. Deep learning for embodied vision navigation: A survey, 2021.
* [52] K. Zhu and T. Zhang. Deep reinforcement learning based mobile robot navigation: A review. _Tsinghua Science and Technology_, 26(5):674-691, 2021.
* [53] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning, 2016.
* [54] Y. Zhu, R. Mottaghi, E. Kolve, J. J. Lim, A. Gupta, L. Fei-Fei, and A. Farhadi. Target-driven visual navigation in indoor scenes using deep reinforcement learning, 2016.
* [55] M. Zohaib, M. Pasha, R. A. Riaz, N. Javaid, M. Ilahi, and R. Khan. Control strategies for mobile robot with obstacle avoidance. 3:1027-1036, 06 2013.
# Robot path planning using deep reinforcement learning
Miguel Quinones-Ramirez
Facultad de Matemáticas, Universidad Autónoma de Yucatán, Anillo Periférico Norte, Tablaje
Cat. 13615, Colonia Chuburná Hidalgo Inn
Mérida, Yucatán, Mexico
[email protected]
Jorge Rios-Martinez
Facultad de Matemáticas, Universidad Autónoma de Yucatán, Anillo Periférico Norte, Tablaje
Cat. 13615, Colonia Chuburná Hidalgo Inn
Mérida, Yucatán, Mexico
[email protected]
Victor Uc-Cetina
Facultad de Matemáticas, Universidad Autónoma de Yucatán, Anillo Periférico Norte, Tablaje
Cat. 13615, Colonia Chuburná Hidalgo Inn
Mérida, Yucatán, Mexico
[email protected]
[http://sites.google.com/view/victoruccetina/](http://sites.google.com/view/victoruccetina/)
###### Abstract
Autonomous navigation is challenging for mobile robots, especially in an unknown environment. Commonly, the robot requires multiple sensors to map the environment, locate itself, and make a plan to reach the target. However, reinforcement learning methods offer an alternative to map-free navigation tasks by learning the optimal actions to take. In this article, deep reinforcement learning agents are implemented using variants of the deep Q networks method, the D3QN and rainbow algorithms, for both the obstacle avoidance and the goal-oriented navigation task. The agents are trained and evaluated in a simulated environment. Furthermore, an analysis of the changes in the behaviour and performance of the agents caused by modifications in the reward function is conducted.
path planning; obstacle avoidance; deep reinforcement learning.
## 1 Introduction
Navigation competence is essential for mobile robots. To navigate autonomously, a robot must use its sensors' data to identify an optimal or suboptimal path to a target point while avoiding collisions. Generally, a map of the environment is constructed, and then a path planner algorithm is used to find a clear path. However, the task becomes daunting when dealing with sensor noise, tracking errors, and unpredictable surroundings. It also becomes challenging and time-consuming to update the obstacle map accurately, replan the navigation path and predict all possible situations the robot may encounter.
Alternatively, new methods that do not require maps to navigate have been proposed, such as the use of deep reinforcement learning (DRL), introduced by Mnih et al. in 2013 [27], which has shown the ability to solve complex tasks that require a lot of data processing by combining the reinforcement learning (RL) framework with the artificial neural networks from deep learning (DL). These methods have the advantages of being mapless, having a strong learning ability, lower sensor accuracy dependence, and requiring less human supervision and environment-dependent engineering. Contrary to other mapless navigation approaches, which require explicit programming of the robot's behaviour, DRL methods allow the robot to learn the optimal actions to take at each time step by associating them with observations of the environment and a reward signal. Furthermore, unlike pure deep learning methods, they do not require a dataset of labelled samples, which is severely lacking in robotics. Instead, the robot is trained by directly interacting with its environment in a trial-and-error manner. Even when training in the real world proves costly, DRL allows a robot to learn in a simulated environment safely and then transfer the knowledge to a real robot, which is possible because of the generalisation ability of DL models. DRL robotic applications often treat sensor data as a representation of the environment's state, the most commonly used being ranging data, monocular camera images, and depth camera data. Among the sensors used to collect the data, RGB-D cameras are one of the most cost-efficient, lightweight, and information-rich, which allows them to be used for a wide range of applications. As a state representation, RGB images are sensitive to lighting and colour changes, which may be irrelevant to the navigation task. Still, depth images provide geometrical information about the surroundings and are represented as grayscale images, which have been proven to achieve good results in DRL methods applied to different domains. The introduction of deep reinforcement Learning in 2013 by DeepMind [27] demonstrated its potential by training agents that achieved better performance than human experts on Atari games. Since then, notable achievements of DRL methods have been primarily on gaming applications, such as AlphaGo [39] winning against the Go champion, AlphaZero [40] beating the champion chess program, and OpenAI Five [4] defeating professional teams in the online game DOTA 2. However, RL approaches to solve real-world problems have been proposed in several domains, including healthcare [50], analytics [6], language processing [43], networking [24], finances [41] and robotics [18].
Deep reinforcement learning approaches in navigation aim to benefit from learnt skills to solve conventional navigation problems, such as lack of generalisation, the need for fine-tuning or the inability to react in real-time, for applications where mobile robots operate in complex environments. Some of these scenarios include outdoor environments with uneven terrain and noisier sensor readings, dynamic environments where fast reaction times are required, and human environments where collaboration and safety measures are necessary. Deep reinforcement-based applications for navigation have been developed for social robotics, service robotics, unmanned ground vehicles and self-driving cars, among others.
For the autonomous navigation problem, DRL applications are focused on four scenarios, as studied by Zhu et al. in [52], which include local obstacle avoidance, indoor navigation, multi-robot navigation and social navigation. The applications are usually limited to one of those specific capabilities and are developed by conducting specialised research and adding expert knowledge to favour the convergence of the DRL methods. For that reason, little research has been done on moving from a simpler to a more complicated task. Moreover, few studies analyse the impact of the reward function on the agent's behaviour, as its design is tailored to solve the specific task, and no further comparison is made. Furthermore, the review of Zhu et al. [52], the survey of DRL algorithms for autonomous vehicles of Ye et al. [49], as well as the related works reviewed, indicate that among the most commonly used DRL algorithms are the Deep Q Networks (DQN) [27], Double DQN (DDQN) [44], Dueling DDQN (D3QN), Asynchronous Advantage Actor-Critic [26], Proximal Policy Optimization [35] and Deep Deterministic Policy Gradients [21]. However, since their introduction, improvements have been proposed in each algorithm's family of RL methods that lead to new state-of-the-art performances in their benchmark domain, such as continuous control or Atari Games. This means that more modern DRL methods could also improve the results in autonomous navigation-related tasks. In the present research, those problems are studied by training and evaluating different DRL agents in obstacle avoidance and goal-oriented navigation tasks, which were designed considering the challenges presented in the previous reviews. As mentioned by Zhu et al. in [52], the term _mapless_ used to describe DRL-based navigation systems in this work refers to the use of lightweight localisation solutions, such as GPS and WiFi, to obtain the relative position of the goal point without a global map. Although the training environments were designed based on the conditions of an indoor navigation scenario, the goal-oriented navigation task is referred to as such due to its focus on reaching a goal rather than the complexity of the environment.
This article introduces a mapless deep reinforcement learning approach to solve the autonomous navigation problem in indoor and static simulated environments using depth images. It focuses on analysing the different data required to train agents for obstacle avoidance and goal-oriented navigation tasks, studying the effect on their behaviour and performance by modifying the reward signal and changing the algorithm used. The proposed approach is implemented in the open-source mobile robot Turtlebot2, by using the Robotic Operating System (ROS) as the robotics framework and Gazebo as the robotics and physics simulator. However, the DRL framework can be applied to different mobile robots and with other robotic simulators as long as it is possible for the robot to perform the designated actions and the necessary sensory data is available. An initial idea about how a
robot follows an RL approach in a navigation task is shown in Fig. 1.
## 2 Autonomous Navigation
Autonomous navigation is one of the biggest challenges for a mobile robot. A robot must succeed at four building blocks to navigate autonomously: perception, localisation, cognition, and motion control [30]. Perception requires taking measurements, using different sensors, and extracting meaningful information from those measurements. Localisation involves determining the robot's absolute position in space and relative position concerning its goal and the obstacles. Cognition includes decision-making and its execution to achieve the highest-order goals. Moreover, motion control modulates the robot's motor outputs to achieve the desired trajectory. For a mobile robot, the navigation competence is required for its cognition. Given partial knowledge about its environment and a goal position, navigation encompasses the capability of the robot to act based on its knowledge and sensor values to reach the goal as efficiently as possible [30]. However, obstacle avoidance and path planning competencies are also required for autonomous navigation. There may need to be more than a behaviour or reactive navigation [15] for a mobile robot to reach a distant goal. Likewise, a plan might only be accomplished if the robot can react to unforeseen events. For that reason, modern navigation methods combine both competencies, sensor data and a map, to create a plan, execute it and make adjustments during motion.
### Obstacle Avoidance
Obstacle avoidance requires controlling the robot's trajectory to prevent collisions by making decisions based on sensor readings [30]. Unlike path planning, it is reactive and considers only a few steps ahead when making decisions. One of the simplest obstacle avoidance algorithms is the Bug Algorithm, which follows the contour of each obstacle to circumnavigate it. The robot stops its movement around the obstacle when it finds a minimum distance point towards its destination or a slope
Figure 1: An intuition of the RL framework applied to a robot. Based on an observation of the environment, the robot is given the optimal action to take.
equal to its original one, meaning that it requires at least the robot's localisation. [15] As an obstacle avoidance approach with access to knowledge of its environment, the Bubble Band technique generates a subset of the free space around a robot that can be travelled without collision using a map and range information. A string of these so-called bubbles is later used to indicate the trajectory to the goal position [30]. For more robustness, the Vector Field Histogram (VFH) technique generates a 2D polar histogram of the environment around the robot based on its sensor readings. Then it converts it into a 1D polar histogram, where the x-axis represents the angle at which an obstacle was found and the y-axis the probability of it being there. Then, a path is chosen based on the obstacle density, the robot's alignment with the goal, and its steering angle [55]. Dynamic Window Approach (DWA) is a method that goes a step further by considering the robot's kinematics constraints to select an appropriate combination of linear and angular velocities that allows it to avoid running into obstacles. Given the current robot's speed, the local version of DWA selects a set of tuples of linear and angular velocities which can be reached within the next sample period, also known as the dynamic window. Then, the set is reduced to only those which allow it to stop before hitting an obstacle, given by an objective function, and selects the best tuple based on an objective function. The global version of DWA considers the distance to a goal in the objective function, allowing it to have a more long-termed view [30]. Fuzzy Logic Controllers are an alternative approach that uses ambiguous and noisy data to make decisions by selecting a proper action based on a set of rules that model a reasoning capability. They improve the performance of mobile robots in complex environments, but at the cost of the complexity that entails designing the set of heuristics [37]. For a more detailed explanation of obstacle avoidance methods, the work of Shitsukane et al. [38] can be consulted.
### Path Planning
Path planning is defined as the problem of finding a sequence of valid configurations to move from a starting position to a goal position and requires a model of the environment transformed into a discrete map. However, most mobile robots use differential-drive systems, which impose nonholonomic constraints on their configuration. Furthermore, when they are on the ground, their path planning is often considered to take place in a 2D representation of the environment [30]. For that reason, typical representations of the environment include grid maps, metric maps, and topological maps. Path planning is classified by the environment and the knowledge that the robot has about it. If the robot has complete knowledge about its environment, it is known as a global path planning problem, in which the planner has to compute an optimal path to the goal. In contrast, a local path planner uses sensor readings to constantly obtain information about the robot's surroundings and follow a path while avoiding obstacles. Local path planning is associated with obstacle avoidance, while global path planning includes graph-based and potential
field-based methods [16].
Graph Search methods rely on using a map that indicates the free and occupied space in the environment to build a graph and compute a solution. Then, graph search algorithms can be used to find a path, such as breadth-first search, depth-first search, Dijkstra's algorithm, or the A* algorithm. Among these, the A-star algorithm stands out for its consistency, speed, and ability to find the optimal solution at the cost of being computationally more expensive and requiring a heuristic function and path cost function, which may be difficult to define in some cases. Rapidly Exploring Random Trees (RRT) is also a fast alternative that does not require a heuristic function, and its lack of solution optimality was addressed by RTT*. Potential Field path planning methods define forces that either attract the mobile robot towards the goal or repel it from certain positions, such as obstacles. The environment is modelled based on the forces, and the robot is a point under its influence. As long as the robot can localise its position concerning the potential field, it can compute its following action based on the forces surrounding it. A more in-depth analysis of the path planning problem can be seen in the work of Sanchez-Ibanez et al. [42].
### Robot Navigation Systems
Autonomous navigation systems require a path-planning method, an obstacle-avoidance approach, and a localisation method to provide the necessary information for both. Sensor data may be used directly in some cases. However, without knowledge about its position relative to the goal, a mobile robot is limited to reactive behaviour, following a predetermined path, or chasing short-termed goals based on its sensor range [5, 15]. A broad classification of autonomous navigation techniques is whether they are used indoors or outdoors, as well as regarding their consideration of dynamic obstacles. Indoor environments have their working space clearly defined and the surface area physically delimited, and the boundaries are easily identifiable by the robot's path planning and obstacle avoidance algorithms. The limited space and predominance of flat surfaces favour map-based and map-building systems because the robustness and reliability outweigh the computing cost when the resources are available. On the other hand, outdoor navigation systems must deal with uneven terrain, noisier sensor readings due to environmental causes, and more uncertainty about the robot's whereabouts due to its unstructured environments. Navigation in dynamic environments is more complex, requiring not only estimating the position of static obstacles and boundaries but also constantly being on the lookout for movement or any other indication that an obstacle may be headed toward the robot's path. Dynamic navigation systems have a broader selection of applications but require fast updates. However, the inclusion of dynamic obstacles is beyond the scope of this work.
Indoor navigation techniques can also be categorised as map-based, map-building-based, or mapless, depending on the source of the goal-related information
they use. Map-based approaches must be provided with a representation of the environment built by a different system beforehand. Map-building techniques can compute the model of the environment themselves and use it subsequently as a source of information. Mapless methods rely on their sensors alone, primarily on visual data, to infer knowledge about their goals' position based on the features detected during motion [5]. RL enables a nature-inspired approach, in which robots learn the optimal behaviour to fulfil a task by interacting with their environment instead of being programmed explicitly. Combined with the advancements in the DL field, it allows them to extract meaningful features from their environment and decide which actions to take without an explicit rule. A DRL-based approach allows a robot to behave similarly to other mapless methods and train specific tasks that complement or improve existing navigation systems.
### Conventional Navigation
In most navigation problems, the robot does not have access to an accurate map of the environment, and the most popular approach to solve them is by using a map-building system. For that reason, navigation is also referred to as the combination of localisation, map-building, and path planning. In that case, the standard technique is to perform the three tasks simultaneously, known as Simultaneous Localization and Mapping (SLAM) [5]. Different algorithms have been proposed to solve the SLAM problem, with laser, sonar, and visual sensors being the most commonly used. The SLAM problem has been studied for many years and has become the industry standard technique to solve navigation problems due to its robustness and reliability, despite the cost of computing and updating a map.
Even the most popular robotics framework in industry and academia, the Robot Operating System (ROS, [http://wiki.ros.org/](http://wiki.ros.org/)), describes its default navigation system as a map-building system, which requires the computation of a map through odometry and sensor data, and the use of global and local path planners [19]. Commonly used algorithms in the navigation stack include GMapping, Adaptive Monte Carlo Localization, A* for the global path planner, and DWA for the local path planner and obstacle avoidance. RL offers a mapless approach for solving navigation tasks, a better generalisation capability combined with Deep Learning, and the ability to perform complex behaviours without engineering them. The RL framework allows for versatility and is not limited to using distances as observations and velocities as outputs but can be trained with different data depending on the task. Furthermore, its learning capability is not limited to pre-established rules. It learns to associate a given observation of the environment with the optimal action to fulfil the task as efficiently as possible. Also, the learning process is performed before the robot is put in motion, allowing simulators to be used as a safe training space. It also eases
the load on the robot during movement because it already knows what action to take in each scenario.
## 3 Reinforcement Learning
Reinforcement Learning (RL) is one of the three essential Machine Learning (ML) paradigms. RL aims to enable an agent to learn the optimal behaviour to accomplish a task by repeatedly interacting with its environment [33], differing from supervised learning and unsupervised learning, which rely on given data sets.
The main elements of an RL problem are the agent, its possible actions, the environment it belongs to, the state of the environment at any given time, the reward the agent receives from the environment, and a policy that defines the agent's behaviour. The agent is associated with the model that carries out the decision-making progress and learns; it does not refer to a physical entity. The actions are the set of decisions that the agent can take to interact with its environment. The environment generally refers to anything that the agent cannot arbitrarily change. At the same time, a state is the complete description of the environment at a given time. The reward signal is a numerical value that indicates how well the agent performed and is perceived by the environment on each time step. Finally, the policy is a rule used by the agent for its decision-making process, which maps the states perceived from the environment to actions to be taken when being in them.
### Markov Decision Processes
Markov decision processes (MDPs) are used to formally define the interactions between a learning agent and its environment and as the mathematical foundation of an RL problem. An MDP is a system described by the set of states \(S\), the set of actions \(A\), the reward function \(R:S\times A\times S\rightarrow\mathbb{R}\) and a transition probability function \(P:S\times R\times S\times A\rightarrow[0,1]\)[33]; and also obeys the Markov property
\[p(s^{\prime},r|s,a)=Pr\{S_{t}=s^{\prime},R_{t}=r|S_{t-1}=s,A_{t-1}=a\}\]
which establishes that future states only depend on the most recent state and action. MDPs are a formalization of sequential decision-making, where actions influence future states and rewards, and by using them, it is possible to predict all future rewards and states. When the agent has access to the transition probability function, also referred to as the model of the environment, it is possible to use model-based RL methods, which rely on state transitions and reward predictions to plan. However, in most cases, a ground-truth model of the environment is not available, and the agent must follow a model-free approach to learn purely from experience by associating states to actions through some computation.
### Returns and Episodes
At each time step \(t\), the agent observes the current state \(S_{t}=s\in S\) of the environment, proceeds to take an action \(A_{t}=a\in A\), and is provided with a reward
by the environment. Then, the environment transitions to a new state \(S_{t+1}=s^{\prime}\) and the cycle is repeated, as shown in Fig. 2. By looking for correlations between states, actions and rewards, the agent learns to perform its task efficiently [33].
The agent's goal is to maximize the cumulative reward it receives, also known as the return, which can be defined as the sum of the rewards at each time step:
\[G_{t}=R_{t+1}+R_{t+2}+R_{t+3}+...\]
To prevent an infinite amount of return, the concept of discounting is introduced, and the discounted return is defined as:
\[G_{t}=R_{t+1}+\gamma R_{t+2}+\gamma^{2}R_{t+3}+...=\sum_{k=0}^{\infty}\gamma^{k }R_{t+k+1}\]
where \(\gamma\) is the discount rate and determines the value of the future rewards, \(0\leq\gamma\leq 1\).
However, in many cases, the agent-environment interaction can be broken down into sub-sequences, called episodes, with a final time step \(T\). Each episode ends in a terminal state followed by a reset to a starting state.
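As a minimal illustration (not taken from the paper's implementation), the discounted return of a finite episode can be accumulated backwards over a plain list of rewards:

```python
def discounted_return(rewards, gamma=0.99):
    """Compute G_0 = R_1 + gamma*R_2 + gamma^2*R_3 + ... for a finite episode."""
    g = 0.0
    for r in reversed(rewards):   # accumulate from the last reward backwards
        g = r + gamma * g
    return g

# Example: rewards 1, 0, 2 with gamma = 0.9 give G_0 = 1 + 0.9*0 + 0.81*2 = 2.62
print(discounted_return([1.0, 0.0, 2.0], gamma=0.9))
```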
### Policies and Value Functions
A policy maps states to probabilities of selecting each possible action. When an agent follows a policy \(\pi\), then \(\pi(a|s)\) is the probability of performing the action \(a\) when at the state \(s\). The goal of an RL algorithm is to discover an optimal policy \(\pi^{*}\) that prioritizes the best action to take at each state, so as to maximize \(G\)[33]. For that reason, it is useful to know how valuable a state is.
A value function \(v_{\pi}(s)\) is defined as the expected return when starting in a state \(s\) and subsequently following a particular policy \(\pi\):
Figure 2: The agent-environment interaction. The agent observes the current state, selects an action, receives a reward and an observation of the new state.
\[v_{\pi}(s)=E_{\pi}[G_{t}|S_{t}=s]\]
Similarly, an action-value function \(q_{\pi}\) is defined as the expected return when starting from \(s\), taking the action \(a\), and thereafter following the policy \(\pi\):
\[q_{\pi}(s,a)=E_{\pi}[G_{t}|S_{t}=s,A_{t}=a]\]
A policy \(\pi\) can be compared to a different policy \(\pi^{\prime}\) given their expected returns
\[\pi\geq\pi^{\prime}\text{ if and only if }v_{\pi}(s)\geq v_{\pi^{\prime}}(s) \text{ for all }s\in S\]
The policy that is better than or equal to all others is considered the optimal policy \(\pi^{*}\) and is associated with an optimal state-value function \(v_{*}\) or an optimal action-value function \(q_{*}\), defined as
\[v_{*}(s)=\max_{\pi}v_{\pi}(s)\]
\[q_{*}(s,a)=\max_{\pi}q_{\pi}(s,a)\]
Both types of value functions follow a consistency condition, the Bellman Equation, which expresses the relationship between the value of a state and the value of its possible successor state. The Bellman optimality equation for \(v_{*}\) and \(q_{*}\) are
\[v_{*}(s)=\max_{a}E[R_{t+1}+\gamma v_{*}(S_{t+1})|S_{t}=s,A_{t}=a]\]
\[q_{*}(s,a)=E[R_{t+1}+\gamma\max_{a^{\prime}}q_{*}(S_{t+1},a^{\prime})|S_{t}=s,A_{t}=a]\]
Depending on the RL method, there are different approaches to reaching optimal behaviour. Policy-based or policy optimization methods directly approximate the optimal policy of the agent, while value-based methods learn to estimate it through the use of value functions or state-action functions.
RL methods can also be classified as off-policy or on-policy. Off-policy methods use a behaviour policy to select actions and explore the environment that is different from the target policy being learnt and improved, whereas in on-policy methods the target and behaviour policies are the same. Online methods, which update their parameters while observing a stream of experiences, can maintain the two policies and update them separately. Offline methods commonly optimise only the target policy and copy its parameters into the behaviour policy from time to time; by storing experiences in large buffers and reusing them at different points of the training, this is a less memory-consuming approach.
### Temporal-Difference Learning
Temporal-Difference Learning refers to a class of model-free, value-based methods, which update their estimate of the value function based on previous estimates without waiting for a final outcome, also known as bootstrapping. Given some experience following a policy \(\pi\), TD methods update their estimate \(V\) of \(v_{\pi}\) at each time step \(t+1\) by using the observed reward \(R_{t+1}\) and the estimate \(V(S_{t+1})\)[33]
\[V(S_{t})\gets V(S_{t})+\alpha[R_{t+1}+\gamma V(S_{t+1})-V(S_{t})]\]
The most basic type of TD method is called the one-step TD because the target for the TD update is calculated using the value and reward of only the next time step. The quantity in brackets in the one-step TD is also called the TD error because it measures the difference between the estimated value of \(S_{t}\) and the better estimate \(R_{t+1}+\gamma V(S_{t+1})\), available one step later. As long as the step-size parameter \(\alpha\) is sufficiently small, one-step TD converges deterministically to a single answer.
The advantages of TD methods over others are that they do not require a model of the environment and do not need to wait until the end of the episode to learn.
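A tabular sketch of the one-step TD update, assuming states are hashable keys of a dictionary of value estimates \(V\) (the function name and defaults are illustrative):

```python
def td0_update(V, s, r, s_next, alpha=0.1, gamma=0.99, done=False):
    """One-step TD(0): V(s) <- V(s) + alpha * [R + gamma * V(s') - V(s)]."""
    target = r + (0.0 if done else gamma * V.get(s_next, 0.0))
    td_error = target - V.get(s, 0.0)   # the TD error in brackets above
    V[s] = V.get(s, 0.0) + alpha * td_error
    return td_error
```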
### Q-Learning
Q-learning is an off-policy TD method and one of the most popular Reinforcement Learning algorithms. It is defined by the update to the action-value function:
\[Q(S_{t},A_{t})\gets Q(S_{t},A_{t})+\alpha[R_{t+1}+\gamma\max_{a}Q(S_{t+1},a)-Q(S_{t},A_{t})]\]
To approximate the optimal action-value function \(q_{*}\), the agent must visit every state-action pair and store and update its value, known as a Q-value, in a tabular manner [46].
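A minimal tabular Q-learning sketch with an \(\epsilon\)-greedy behaviour policy; the data structures and constants are illustrative assumptions, not the ones used later in the paper:

```python
import random
from collections import defaultdict

Q = defaultdict(float)   # tabular action-value estimates, keyed by (state, action)

def epsilon_greedy(s, actions, eps=0.1):
    """Behaviour policy: random action with probability eps, otherwise greedy."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])

def q_learning_update(s, a, r, s_next, actions, alpha=0.1, gamma=0.99, done=False):
    """Off-policy update using the greedy bootstrap max_a' Q(s', a')."""
    best_next = 0.0 if done else max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
```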
## 4 Deep Reinforcement Learning
The previously described framework may be used to apply an RL approach to a robotics problem. In the case of autonomous navigation, the robot can be seen as the agent, its linear and angular velocities as the actions, and the reward should incentivise the robot to avoid obstacles or move closer to its goal, as shown in Fig. 3. However, the challenge lies in defining an appropriate state that provides enough information for the robot to fulfil its task, especially for robots that operate in a three-dimensional space.
In 2013, Kober et al. published a survey [18] about the challenges and successes of Reinforcement Learning in Robotics, and one of the main challenges is the "Curse of Dimensionality". This holds, especially for robotics, where multiple sensor readings, degrees of freedom or images are needed to describe the robot's state space. However, in the same year, Google DeepMind proposed a novel algorithm, Deep Q
Networks (DQN) [27], by combining the traditional Q-learning method with a Neural Network, which vastly outperformed all previous methods at playing Atari games with RGB images as inputs. This work started the trend of combining RL methods with Neural Networks from the DL field, which became a subfield known as Deep Reinforcement Learning.
When designing an agent that uses depth images as states, the improved computational capabilities and robustness of the DRL are needed for the agent to be able to process the data and extract meaningful features that allow it to differentiate and evaluate each state.
### Neural Networks
Artificial Neural Networks, or simply Neural Networks (NNs), are computing models based on a collection of connected nodes known as neurons, used to approximate high-dimensional and non-linear functions. The neurons are aggregated into layers, where different transformations are performed using the weights that are adjusted for the network to learn. The neurons are inspired by the brain cells of the same name, and their design is based on the perceptron, introduced by Frank Rosenblatt in 1957 [31]. Each neuron's inputs are weighted, summed, and offset by a bias before being passed through an activation function that applies a non-linear transformation, which is the main reason why they perform well in different applications.
Each NN has an input layer, where data is introduced, an output layer, where a prediction is given, and many hidden layers in between, where the values are computed. The more hidden layers are used, the better the capability of the network to abstract meaningful information from the data to make better predictions. For that reason, the term _deep_ originates from using a larger amount of hidden layers,
Figure 3: An example of a robot described in the RL framework. The state is still to be defined.
which was possible due to the increase in available computing power and memory, contrary to the earlier _shallow_ networks.
The most basic type of neural network is a feedforward neural network [17], or multilayer perceptron, where each layer is composed of many neurons, and their output is connected to the input of the next layer. The layers of these types of NNs are known as feedforward, fully connected or linear layers due to their sequential nature and because all of the neurons are connected to the next layer. The number of neurons and the activation function for each layer can be modified, with the most commonly used being the ReLU, tanh, sigmoid and softmax functions.
A specialised type of NN for processing data that has a grid-like shape is known as the Convolutional Neural Network (CNN) [17], and its most popular use is for processing images. CNNs have layers that perform a convolution instead of a matrix multiplication, known as convolutional layers. The convolution requires sliding a kernel, a smaller array of parameters, along the input matrix of the layer and performing a dot product in small windows of features, reducing the output data size. The size of the kernel, the number of kernels, the amount of stride that the kernel slides, and whether the input features are padded to keep their size after the operation, among other features, can be tuned for each convolutional layer. The convolution operation allows extracting high-level features from images, such as edges and colour, which leads to better predictions; the popularity of this type of network increased thanks to the results of trained models such as AlexNet, presented in [20], and ResNet, proposed in [13].
### Deep Q-Networks
The Q-Learning algorithm's limitations to store and approximate the value of all state-pairs when the number of combinations is increased was addressed by Mnih et al. in [27]. They proposed an approach called Deep Q Networks that combined the Q-learning algorithm with Neural Networks.
The core idea was to approximate the Q-values using a Deep Neural Network (DNN) instead of storing them in a tabular manner. To that end, the value function was parametrised as \(Q(s,a;\theta_{i})\) by using the neural network's weights \(\theta_{i}\) at each iteration \(i\) [28]. The Q-learning update becomes the loss function to train the neural network. The loss is given by:
\[L(\theta_{i})=E_{(s,a,r,s^{\prime})\sim U(D)}\big[(r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime};\theta_{i})-Q(s,a;\theta_{i}))^{2}\big]\]
Where, at each time step \(t\), the agent's experiences \(e_{t}=(s_{t},a_{t},r_{t},s_{t+1})\) are stored in an experience replay \(D_{t}=e_{1},...,e_{t}\), and a mini-batch of experiences \((s,a,r,s^{\prime})\) is drawn uniformly at random, \(U(D)\), to perform the update.
This method outperformed most state-of-the-art methods at Atari games without prior knowledge and by using raw images and established the beginning of Deep Reinforcement Learning.
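A condensed PyTorch-style sketch of the loss above on a uniformly sampled mini-batch; `q_net`, `target_net` and the batch tensors are assumed to exist, and the use of a separate, periodically copied target network follows common DQN implementations rather than anything stated here:

```python
import torch
import torch.nn.functional as F

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """Squared TD loss on a mini-batch (s, a, r, s', done) drawn from the replay buffer."""
    s, a, r, s_next, done = batch                           # a: int64, done: {0., 1.}
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a; theta)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values       # max_a' Q(s', a')
        target = r + gamma * (1.0 - done) * q_next
    return F.mse_loss(q_sa, target)
```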
### Double DQN
One disadvantage of the Q-learning algorithm, as evidenced by van Hasselt, is the overestimation of action values due to a positive bias from using the maximum action value as an approximation for the maximum expected action value. A double estimator method was proposed to decouple the action selection process from the evaluation and eliminate the bias, resulting in an underestimation of action values. Furthermore, van Hasselt et al. [44] extended the idea for its use in parametric equations and the DQN algorithm, proposing the variant Deep reinforcement learning with Double Q-learning (Double DQN or DDQN) by using two Neural Networks with different sets of weights. The main neural network picks the best next action \(a^{\prime}\) among all the available, and then the target neural network evaluates the action to know its Q-value. While the main neural network's weights are updated normally, the target neural network is updated every so often with a copy of the main neural network's weights. The Bellman equation in this algorithm has the shape:
\[Q(s,a;\theta)=r+\gamma\,Q\big(s^{\prime},\arg\max_{a^{\prime}}Q(s^{\prime},a^{\prime};\theta);\theta^{\prime}\big)\]
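A sketch of that target in the same PyTorch-style pseudocode as before (tensor names are assumptions):

```python
import torch

def double_dqn_target(q_net, target_net, r, s_next, done, gamma=0.99):
    """Select the action with the online network, evaluate it with the target network."""
    with torch.no_grad():
        a_star = q_net(s_next).argmax(dim=1, keepdim=True)        # argmax_a' Q(s', a'; theta)
        q_eval = target_net(s_next).gather(1, a_star).squeeze(1)  # Q(s', a*; theta')
        return r + gamma * (1.0 - done) * q_eval
```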
### Prioritized Experience Replay
The Experience Replay, introduced by Lin [22], helped online RL methods to break the temporal correlations of the updates and to prevent the loss of rare experiences by mixing more and less recent experiences and allowing them to be used multiple times. However, experiences are sampled uniformly at random, without regard for each experience's value. The Prioritized Experience Replay (PER), proposed by Tom Schaul et al. [34], focuses on the effective use of the replay memory for learning by prioritising transitions which may be more valuable for the agent but rarely occur. The TD error \(\delta\) is used as a criterion to measure the importance of each transition by indicating how unexpected each transition is because it compares how far the value is from the next bootstrap estimate. However, purely choosing the experiences with the most TD error would lead to over-fitting. Therefore, a stochastic sampling method was proposed that interpolates greedy prioritisation and uniform random sampling.
So each transition \(i\) is given a priority value
\[p_{i}=|\delta_{i}|+\epsilon\]
where \(\epsilon\) is a small positive constant that prevents a transition from not being visited, such that \(p_{i}>0\). And the probability of sampling each transition \(i\) is given by
\[P(i)=\frac{p_{i}^{\alpha}}{\sum_{k}p_{k}^{\alpha}}\]
where the \(\alpha\) determines how much prioritization is used, with \(\alpha=0\) corresponding with the uniform case. And to prevent the bias toward high-priority samples introduced by the change of distribution in the stochastic updates, importance-sampling
(IS) weights are used
\[w_{i}=\left(\frac{1}{N}\frac{1}{P(i)}\right)^{\beta}\]
where \(N\) is the size of the replay buffer, and the Q-learning update is performed using \(w_{i}\delta_{i}\) instead of \(\delta_{i}\). The hyperparameter \(\beta\) controls how much the IS weights affect learning and is linearly annealed from an initial value \(\beta_{0}\), with \(0<\beta_{0}<1\), to 1.
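A sketch of proportional prioritised sampling with the importance-sampling correction; the normalisation by the maximum weight and the default \(\alpha\), \(\beta\) values mirror common implementations and are not taken from this paper:

```python
import numpy as np

def per_sample(priorities, batch_size, alpha=0.6, beta=0.4):
    """Sample indices with P(i) = p_i^alpha / sum_k p_k^alpha and return IS weights."""
    p = np.asarray(priorities, dtype=np.float64) ** alpha
    probs = p / p.sum()
    idx = np.random.choice(len(probs), size=batch_size, p=probs)
    weights = (1.0 / (len(probs) * probs[idx])) ** beta     # w_i = (1 / (N * P(i)))^beta
    weights /= weights.max()                                # scale weights for stability
    return idx, weights

def per_update(priorities, idx, td_errors, eps=1e-6):
    """After learning, set p_i = |delta_i| + eps for the sampled transitions."""
    for i, d in zip(idx, td_errors):
        priorities[i] = abs(d) + eps
```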
### Dueling Network
The Dueling Network architecture, proposed by Wang et al. [45], splits the Q-values between the value function V(s) and the advantage function A(s, a). The first one estimates the expected return from the state \(s\), while the second one estimates how much better one action is compared to the others.
The Q-value is defined by:
\[Q(s,a)=V(s)+A(s,a)\]
For that reason, the Dueling Network has two streams to separately estimate state values and the advantages for each action and combine them to output Q-values for each action. To prevent the Q-value equation from being unidentifiable, the advantage function estimator is centred by subtracting its mean over all actions:
\[Q(s,a)=V(s)+\Big(A(s,a)-\frac{1}{|A|}\sum_{a^{\prime}}A(s,a^{\prime})\Big)\]
Because the dueling architecture shares the same input-output interface, it can be combined with other Q network-based architectures. One of the algorithms which significantly improved when combined with a dueling architecture is the DDQN, and such combination is often referred to as Dueling Double DQN or D3QN.
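A sketch of the dueling head as a small PyTorch module operating on an already extracted feature vector (layer sizes are placeholders):

```python
import torch.nn as nn

class DuelingHead(nn.Module):
    """Splits a shared feature vector into state-value and advantage streams."""
    def __init__(self, feat_dim, n_actions, hidden=256):
        super().__init__()
        self.value = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                   nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                       nn.Linear(hidden, n_actions))

    def forward(self, features):
        v = self.value(features)                      # V(s), shape [B, 1]
        a = self.advantage(features)                  # A(s, a), shape [B, n_actions]
        return v + a - a.mean(dim=1, keepdim=True)    # Q(s, a) with mean-subtracted advantages
```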
### Multi-step Learning
The idea of multi-step learning, originally known as n-step bootstrapping [33], comes from the comparison between TD methods and other RL methods, such as Monte Carlo (MC) methods. Whereas one-step TD methods bootstrap their estimates at every time step, MC methods update only at the end of each training episode using the full return. Therefore, a middle ground was proposed in which it is possible to bootstrap over a span of time in which significant state changes have occurred, effectively leading to faster learning. The truncated n-step return from a given state \(S_{t}\) is defined as
\[R_{t}^{n}=\sum_{k=0}^{n-1}\gamma_{t}^{k}R_{t+k+1}\]
And the multi-step variant of the DQN loss is defined as
\[(R_{t}^{n}+\gamma_{t}^{n}\max_{a^{\prime}}q_{\theta}(S_{t+n},a^{\prime})-q_{ \theta}(S_{t},A_{t}))^{2}\]
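A small helper that computes the truncated n-step return and the discount \(\gamma^{n}\) needed for the bootstrap term (illustrative only):

```python
def n_step_return(rewards, gamma=0.99):
    """R_t^(n) over a window of n consecutive rewards R_{t+1}, ..., R_{t+n}."""
    g, discount = 0.0, 1.0
    for r in rewards:
        g += discount * r
        discount *= gamma
    return g, discount   # discount == gamma^n, used to weight the bootstrap at S_{t+n}
```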
### Distributional Reinforcement Learning
Bellemare et al. [3] proposed a method to model the full distribution of returns instead of only the expectation, which leads to better approximations and more stable learning. The returns' distribution is modelled using a discrete distribution parametrised by \(N\in\mathbb{N}^{+}\) and \(V_{MIN}\),\(V_{MAX}\in\mathbb{R}\), with probability masses placed on a discrete support \(z\), where \(z\) is a vector of \(N\) atoms, considered as the canonical returns, defined by
\[z^{i}=V_{MIN}+(i-1)\Big(\frac{V_{MAX}-V_{MIN}}{N-1}\Big)\]
for \(i\in 1,...,N\). With the probability mass of each atom
\[p_{\theta}^{i}(s,a)=\frac{e^{\theta_{i}(s,a)}}{\sum_{j}e^{\theta_{j}(s,a)}}\]
such that the approximating discrete distribution \(d\) at time \(t\) is given by
\[d_{t}=(z,p_{\theta}(s,a))\]
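A sketch of this parametrisation; the support bounds and number of atoms are placeholder values, not the ones used in the referenced work:

```python
import torch
import torch.nn.functional as F

def make_support(v_min=-10.0, v_max=10.0, n_atoms=51):
    """Fixed atoms z^i evenly spaced between V_MIN and V_MAX."""
    return torch.linspace(v_min, v_max, n_atoms)

def return_distribution(logits, support):
    """Softmax over the atom logits theta_i(s, a) gives the probability masses."""
    probs = F.softmax(logits, dim=-1)          # p_theta^i(s, a), shape [..., n_atoms]
    q_values = (probs * support).sum(dim=-1)   # expected return E[Z(s, a)]
    return probs, q_values
```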
A variant of Bellman's equation is used to learn the probability masses. The Bellman operator \(T^{\pi}\) is defined to describe the contraction by \(\gamma\) and shift by the reward of the future estimation, to get the current value during the policy evaluation. The Bellman Equation
\[Q^{\pi}(s,a)=\mathbb{E}R(s,a)+\gamma\mathbb{E}_{P,\pi}Q^{\pi}(s^{\prime},a^{ \prime})\]
can be rewritten using the Bellman operator
\[T^{\pi}Q(s,a)=\mathbb{E}R(s,a)+\gamma\mathbb{E}_{P,\pi}Q(s^{\prime},a^{\prime})\]
The Bellman operator \(T^{\pi}\) is further proved to converge to a unique return distribution by using a metric between cumulative distribution functions, known as the Wasserstein Metric. Denoting the return as \(Z\) and the return distribution as \(Z^{\pi}\), the convergence of \(Z\) is studied by applying the Bellman operator, as
\[T^{\pi}Z(s,a)=R(s,a)+\gamma P^{\pi}Z(s^{\prime},a^{\prime}).\]
However, when extending the idea to the Bellman optimality operator
\[TQ(s,a)=\mathbb{E}R(s,a)+\gamma\mathbb{E}_{P}max_{a^{\prime}\in A}Q(s^{\prime},a^{ \prime}),\]
it can only be proved that \(T\) converges to a set of optimal return distributions.
Furthermore, applying \(T\) to \(Z\) cannot be computationally done without applying the \(argmax\) function to the expectation of the future value.
\[T^{*}Z(s,a)=R(s,a)+\gamma Z\big(s^{\prime},\arg\max_{a^{\prime}\in A}E[Z(s^{\prime},a^{\prime})]\big)\]
When applying the Bellman update \(TZ_{\theta}\) to the parametrisation \(Z_{\theta}\), the supports are almost always disjointed. To fix this, and considering an issue with the Wasserstein loss when sampling from transitions, the sample Bellman update \(\bar{T}Z_{\theta}\) is projected onto the support of \(Z_{\theta}\), reducing the update to a multi-class classification.
### Noisy Networks
One of the key challenges of RL methods is maintaining a balance between exploration and exploitation. Traditional exploration heuristics rely on random perturbations of the agent's policy, such as \(\epsilon\)-greedy probabilities, or intrinsic motivation terms added to the reward signal, to encourage new behaviours. However, these methods are not easily applied with neural networks or rely on a metric chosen by the experimenter. For that reason, Fortunato et al. [11] proposed NoisyNet, an approach where perturbations of a neural network's weights are used to drive exploration. Each noisy linear layer roughly doubles the number of parameters of the corresponding linear layer, and the learnt noise scales allow the amount of exploration to vary across the state space. For a linear layer of a neural network
\[y=wx+b\]
the corresponding noisy linear layer is defined as
\[y=(\mu^{w}+\sigma^{w}\odot\epsilon^{w})x+\mu^{b}+\sigma^{b}\odot\epsilon^{b}\]
where the parameters \(\mu^{w},\sigma^{w},\mu^{b},\sigma^{b}\) are learnable and \(\epsilon^{w},\epsilon^{b}\) are noise random variables originating from either an Independent Gaussian noise or a Factorised Gaussian noise.
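A sketch of a noisy linear layer with factorised Gaussian noise; the initialisation constants follow common implementations and are assumptions here:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """y = (mu_w + sigma_w * eps_w) x + mu_b + sigma_b * eps_b."""
    def __init__(self, in_f, out_f, sigma0=0.5):
        super().__init__()
        bound = 1.0 / math.sqrt(in_f)
        self.mu_w = nn.Parameter(torch.empty(out_f, in_f).uniform_(-bound, bound))
        self.sigma_w = nn.Parameter(torch.full((out_f, in_f), sigma0 * bound))
        self.mu_b = nn.Parameter(torch.zeros(out_f))
        self.sigma_b = nn.Parameter(torch.full((out_f,), sigma0 * bound))
        self.in_f, self.out_f = in_f, out_f

    @staticmethod
    def _f(x):                       # f(x) = sgn(x) * sqrt(|x|), factorised noise transform
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        eps_in = self._f(torch.randn(self.in_f, device=x.device))
        eps_out = self._f(torch.randn(self.out_f, device=x.device))
        w = self.mu_w + self.sigma_w * torch.outer(eps_out, eps_in)   # epsilon^w
        b = self.mu_b + self.sigma_b * eps_out                        # epsilon^b
        return F.linear(x, w, b)
```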
### Rainbow
All previous improvements to the original DQN algorithm were made independently, as illustrated in Fig. 4. Hessel et al. [14] proposed that each extension addressed a distinct concern and that they could be combined to improve the performance of the DQN algorithm. The distributional loss is replaced with a multi-step variant. A shift by the truncated n-step discounted return is considered at the value \(S_{t+n}\) in the Bellman operator instead of the original shift by the return. The target distribution is defined as
\[d_{t}^{(n)}=\big(R_{t}^{(n)}+\gamma_{t}^{(n)}z,\,p_{\bar{\theta}}(S_{t+n},a^{*}_{t+n})\big)\]
where the greedy action \(a^{*}_{t+n}\) is selected by the online network to bootstrap, and the target network evaluates the use of said action.
The resulting KL loss is:
\[D_{KL}(\phi_{z}d_{t}^{(n)}||d_{t})\]
which can be used to compute the priority values of the PER as a more robust and efficient alternative to the TD error. The neural networks follow the dueling network architecture but are adapted for use with return distributions. And finally, all the linear layers are replaced with noisy linear layers.
## 5 Deep Reinforcement Learning for Navigation
In mapless navigation systems, no representation of the environment is available in advance; the robot perceives the environment as it navigates and must be able to recognise objects, landmarks or any similar cues that allow it to infer where its goal is located. Most of these systems use visual information, primarily the first-person-view image, and perform some reactive behaviour as they process the incoming data [12].
Optical Flow methods use a sequence of images to estimate the motion of objects and features. Velocities perceived are used for the robot's decision-making, always preferring to move in the direction of less change. This is also the main disadvantage of these methods [9]. Appearance-based methods store and memorise images of the environment and associate them with certain relative localisation to the goal, allowing the robot to perform the correct motion control. However, labelling the desired images and developing the appropriate criteria may be difficult
Figure 4: Rainbow DQN components. The combination of independent improvements resulted in a better performance than the baseline DQN.
and time-consuming [9]. Feature tracking-based methods rely on detecting features and motion from the elements in the environment and, based on that information, estimate the robot's trajectory and motion [5]. Object recognition is more symbolic and can detect features rather than memorising precise objects or positions. Deep learning approaches are very similar in the sense that neural networks are trained with many images to identify features of the objects in the environment [51].
All the aforementioned methods are limited to one task except the DL-based. All of them require the use of labelled images that indicate the desired motion at a specific place or the landmark it represents, which can be very costly to produce. On the contrary, RL agents can be trained for different tasks and allow simulated environments to safely and efficiently train an agent before transferring it to a real-life robot, reducing the computational load needed to learn the task. Finally, technological advances allow the recreation of more realistic and complex scenarios and accelerate learning.
Choices of DRL algorithms in robotics include different variations of DQN, and policy search methods, such as Proximal Policy Optimization [35] (PPO), Asynchronous Advantage Actor-Critic [26] (A3C) and Deep Deterministic Policy Gradients [21] (DDPG). In the case of mobile robots, different tasks have been accomplished using DRL methods, the most common being obstacle avoidance and navigation. However, more complex tasks can be performed depending on the information provided to the agent. A summary of the related works can be seen in Table 1.
For the obstacle avoidance task, Lei Tai and Ming Liu [53] implemented a DQN agent trained to explore indoor environments by using depth images as the states and a CNN pre-trained with real-world samples. Linhai Xie et al. [48] combined a CNN trained to predict depth images with a D3QN agent to propose an approach that uses only monocular RGB vision as input. They also showed that the D3QN model outperformed a vanilla DQN model on both training speed and performance. Patrick Wenzel et al. [47] also used a NN to predict depth images based on RGB images and implemented three different agents to solve obstacle avoidance in circuit-like environments: a PPO agent with a discrete action set, a PPO agent with a continuous action set and a DQN agent. They concluded that the PPO agent with discrete actions outperformed the other two agents and that depth images yielded better results than RGB and grayscale images.
For the goal-oriented navigation task, Xiaogang Ruan et al. [32] implemented a D3QN agent that successfully navigates autonomously by using depth images and the distance to the goal as a state. Changan Chen et al. [7] presented an LSTM network that models Human-Robot and Human-Human interactions, using the DRL framework, for navigation towards a goal in a crowded environment. Yuke Zhu et al. [54] trained an A3C agent in a self-developed physics engine, which could generalise across targets and scenes. Two RGB images were used for the state representation, one from the agent's perspective and another that shows the target, and were embedded by a CNN before being passed to the agent. Liulong Ma et al. [25] compared
two DRL agents, DQN for a discrete action space and PPO for a continuous action space, to perform a mapless navigation task by using a Variational Autoencoder to encode RGB images and appending them with target-related information. The PPO model outperformed the DQN model in both performance and training time and also achieved better results in its environment than the benchmark. Reinis Cimurs et al. [8] proposed a DDPG agent that combined a stack of depth images with the polar coordinates between the robot and the goal as the state and with a reward based on the robot's velocity. They performed successful experiments on simulated environments as well as real-world scenarios.
However, other works involve different navigation-related tasks, such as Pararth Shah et al. [36], which combined a DQN agent with a Recurrent Neural Network to map natural language instructions, and visual and depth inputs to actions. Wenhan Luo et al. [23] developed an A3C agent for a mobile robot, combined with a ConvLSTM NN, that takes RGB frames as inputs and produces both camera control and motion control signals as outputs. Their agent could resume tracking after losing the target and was successfully transferred to real-world scenarios. Placed and Castellanos [29] developed a D3QN agent capable of performing active SLAM with less intensive computation by using laser measurements and designing the reward function based on a formulation of the active SLAM problem.
While most studies specialise in a task and propose a specific reward function and state representation to fulfil it, the work presented analyses the challenge involved in going from a simpler task to a more complex one, as well as the effects the reward function can have on the robot's behaviour and performance. Also, the popular D3QN algorithm is compared with a more recent variant of the DQN family of methods, the Rainbow algorithm.
For a more in-depth review of DRL algorithms and applications in navigation, the surveys of Zhu et al. [52] or Ye et al. [49] can be consulted. It is noteworthy, as also studied by Zhu et al. in [52], that more often than not DRL applications in navigation require lightweight navigation solutions to be a complete navigation system. As previously discussed, the most common approach to solve the navigation problem is by using a SLAM technique in a map-building-based robotic system.
In this work, two different approaches to incorporating a DRL agent in a navigation system are explored. The first one is as an obstacle avoidance agent, which can explore an environment with different obstacles and navigate in circuit-like environments. The second is an agent capable of steering towards a goal when given reference information. The D3QN and Rainbow DQN algorithms are compared to evaluate the difference in results between an algorithm commonly used and its successor. And finally, different reward functions will be implemented in each method to analyse the difference in results and the actions the agents take.
\begin{table}
\begin{tabular}{l l l l} \hline \hline Agent & Algorithm & State & Task \\ \hline [53] & DQN & Depth Image & Obstacle Avoidance \\ [48] & D3QN with CNN & Predicted Depth Image from RGB & Obstacle Avoidance \\ [47] & PPO and DQN with GAN & Predicted Depth Image from RGB & Maze Navigation \\ [32] & D3QN & Depth Image and Distance to Goal & Goal Navigation \\ [54] & A3C & Perspective RGB Image and RGB Image from target & Goal Navigation \\ [7] & LSTM-RL & Position, Velocity and Radius of Agent and Humans & Goal Navigation in a Crowd \\ [25] & DQN and PPO with VAE & RGB Image, Polar Coordinates and Motion Information & Goal Navigation \\ [8] & DDPG & Depth Images and Polar Coordinates & Goal Navigation \\ [36] & DQN with RNN & Natural Language Instruction, Visual and Depth Data & Navigation with Natural Language Directions \\ [29] & D3QN & Laser Measurements & Active SLAM \\ [23] & A3C with ConvLSTM & RGB Image & Object Following and Tracking \\ Proposed Agents & D3QN \& Rainbow DQN & Depth Image & Obstacle Avoidance \\ & & Depth Image and Polar Coordinates & Goal Navigation \\ \hline \hline \end{tabular}
\end{table}
Table 1: A non-exhaustive summary of previous works. There is a set of commonly used RL algorithms, but depending on the choice of state representation, different tasks can be learned.
## 6 Design of the DRL agent
This section contains the details of the DRL approach representation. First, a description of the state representation, action space and reward function will be given. Then, the architecture of the neural networks and specifications of the DRL methods used will be discussed. An intuition of how the implementation for the obstacle avoidance task looks in the agent-environment interaction loop of the RL framework is shown in Fig. 5.
### State Representation
The state representation must contain enough information about the environment so the agent can decide what action to take to maximise its return, using only the state provided at any given time step.
Depth images provide geometric information about the robot's surroundings in three dimensions. In contrast, RGB images are more susceptible to lighting conditions and contain colour information which may be irrelevant. Furthermore, depth images are represented in grayscale. This type of image has been proven to be a good state representation in other DRL tasks, such as Atari games [28], mainly when used as a stack because of the dense information they contain. For those reasons, the chosen state representation for the obstacle avoidance task consists of a stack of four successive depth images, with one taken at each time step of the training process.
Figure 5: Example of the RL design for the obstacle avoidance task. The depth images are perceived in the simulated environment in Gazebo and reach the RL algorithm through the ROS framework.
The geometric information provided should be enough for the agent to determine when a collision is imminent, and a change of behaviour is necessary.
However, more information is needed to determine the agent's relative position to its destination for the goal-oriented navigation task. To avoid the problem of the agent not recognising the difference between similar states, known as aliasing, the polar coordinates from the agent to the goal are appended to the state representation in the form of a distance and angle.
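As an illustration of how such a state can be assembled in practice, the following Python sketch builds the image part of the state from the four most recent depth frames and appends the polar coordinates towards the goal. It is a minimal sketch rather than the exact implementation used here; the helper names (`DepthStack`, `make_state`), the frame shape and the angle-wrapping step are assumptions.

```python
from collections import deque
import numpy as np

class DepthStack:
    """Keeps the four most recent depth frames as the image part of the state."""
    def __init__(self, n_frames=4):
        self.frames = deque(maxlen=n_frames)

    def reset(self, first_frame):
        # At the start of an episode the initial frame is copied n_frames times.
        self.frames.clear()
        for _ in range(self.frames.maxlen):
            self.frames.append(first_frame)

    def push(self, frame):
        self.frames.append(frame)

    def as_array(self):
        # Shape: (n_frames, height, width), ready to feed a CNN.
        return np.stack(self.frames, axis=0)

def make_state(depth_stack, robot_xy, robot_yaw, goal_xy):
    """Image stack plus polar coordinates (distance, angle) to the goal."""
    dx, dy = goal_xy[0] - robot_xy[0], goal_xy[1] - robot_xy[1]
    distance = np.hypot(dx, dy)
    angle = np.arctan2(dy, dx) - robot_yaw          # heading error towards the goal
    angle = (angle + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
    return depth_stack.as_array(), np.array([distance, angle], dtype=np.float32)
```

For the obstacle avoidance task only the image stack is used; the polar coordinates are appended for the goal-oriented task.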
### Action Space
Actions represent the agent's choices to interact with its environment and are constrained by its physical limitations and task. Actions in robotics include desired velocities, accelerations, or torques sent to a motor.
In the case of a mobile robot performing the task of obstacle avoidance, the noteworthy commands are the input linear and angular velocities. Because the environments are static, there is no action given for the robot to stay still, and it must always remain in motion. In the case of the discrete set of actions, two linear velocities were selected to allow the robot to either slow down while turning or speed up to reach its destination faster. Also, four angular velocities were chosen to let the robot rotate at different rates in each direction and one null-valued angular velocity to go straight.
Because of the specifications of the robot used in the simulations for training, the Turtlebot 2, which is further discussed in the next chapter, the specific values are the following: \(0.2\,m/s\) or \(0.4\,m/s\) for the linear velocity and \(\frac{\pi}{6}\,rad/s\), \(\frac{\pi}{12}\,rad/s\), \(0\,rad/s\), \(-\frac{\pi}{12}\,rad/s\) or \(-\frac{\pi}{6}\,rad/s\) for the angular velocity.
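To make the action space concrete, the sketch below enumerates these velocities as a discrete action set. Whether every linear velocity is paired with every angular velocity in the original implementation is not stated, so the full cross product (ten actions) and the dictionary-style command are assumptions; the magnitude of the second negative angular velocity is reconstructed by symmetry with the positive ones.

```python
import math

LINEAR_VELOCITIES = [0.2, 0.4]                        # m/s
ANGULAR_VELOCITIES = [math.pi / 6, math.pi / 12, 0.0,
                      -math.pi / 12, -math.pi / 6]    # rad/s

# Discrete actions: every combination of a linear and an angular velocity (assumed).
ACTIONS = [(v, w) for v in LINEAR_VELOCITIES for w in ANGULAR_VELOCITIES]

def action_to_command(action_index):
    """Map a discrete action index to a velocity command for the robot base."""
    v, w = ACTIONS[action_index]
    # Under ROS these values would be filled into a Twist message.
    return {"linear_x": v, "angular_z": w}
```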
### Reward Function
The reward function reflects the agent's objective and is the core of the learning process; it grades how well the agent behaved at a given time step.
#### 6.3.1 Obstacle Avoidance
For an agent attempting to explore its environment while avoiding obstacles, either a penalty for crashing into an obstacle, a small reward at each time step or a sparse reward for completing several steps without colliding may be enough to learn the task at hand. However, different formulations may incentivise certain behaviours. One such constraint is to penalise the robot's angular velocity so that it prioritises moving straight and more steadily. For that reason, two different reward functions were tested.
The first reward function is a simple one that gives a small reward to an agent for each time step that it does not collide with an obstacle and gives a penalty two orders of magnitude higher on collision:
\[R=\left\{\begin{array}{ll}-10&\text{on collision}\\ 0.1&\text{at each time step}\end{array}\right. \tag{1}\]
The second reward function, referred to as the behaviour reward function, rewards the agent for its linear velocity and penalizes the angular velocity:
\[R=\left\{\begin{array}{ll}-10&\text{on collision}\\ v-|w|&\text{at each time step}\end{array}\right. \tag{2}\]
Where \(v\) is the linear velocity of the robot and \(w\) is the angular velocity. Combined with the previously chosen actions, the robot can earn a reward between \([-0.13,0.4]\) at each time step, with the penalty for colliding again being two orders of magnitude higher, giving it a higher priority when learning.
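Both obstacle-avoidance reward functions (Equations 1 and 2) reduce to a few lines of code; the following sketch is only illustrative and the function names are ours.

```python
def simple_reward(collided: bool) -> float:
    """Equation 1: a small reward for every step survived, a large penalty on collision."""
    return -10.0 if collided else 0.1


def behaviour_reward(collided: bool, v: float, w: float) -> float:
    """Equation 2: reward the linear velocity, penalise the angular velocity."""
    return -10.0 if collided else v - abs(w)
```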
#### 6.3.2 Goal-Oriented Navigation
When the task is changed to a goal-oriented navigation, more information is needed for the agent to receive a reward signal that differentiates whether it is in a better position regarding the goal. For that reason, the chosen metrics were the distance to the goal \(d\) and the heading towards the goal \(\theta\), as the minimum amount of information needed to locate the position of the goal. Thus, the reward function is extended to account for the new information:
\[R=\left\{\begin{array}{ll}-10&\text{on collision}\\ (v-c|w|)\cos(\theta)-v_{max}&\text{at each time step}\\ 10&\text{on arrival}\end{array}\right. \tag{3}\]
Where \(\cos(\theta)\) determines whether the robot faces the objective and yields a negative contribution when the agent strays away, \(c\) is a constant discount factor that prevents the difference between the velocity values from yielding a negative reward, and \(v_{max}\) is the maximum linear velocity the robot can achieve. Combined with the previous reward function elements, \(v\) and \(w\), the agent avoids further penalty when moving straight to the goal and receives more when moving away from it. By increasing the order of magnitude of the reward for reaching the goal, the agent can risk some reward as long as it reaches it. This reward function is referred to as the negative reward function because the values it provides at each step are between \([-0.8,0]\).
A positive version of the reward function, where there is no constant penalty based on the maximum linear velocity of the agent, was also used to evaluate which version has better results:
\[R=\left\{\begin{array}{ll}-10&\text{on collision}\\ (v-c|w|)\cos(\theta)&\text{at each time step}\\ 10&\text{on arrival}\end{array}\right. \tag{4}\]
Different approaches could have been taken when designing the reward function for such a task, but the current design was chosen, and a sparse reward system was avoided altogether, in an attempt to make the agents generalise and perform better in different kinds of environments.
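For reference, a minimal sketch of the two goal-oriented reward functions (Equations 3 and 4) follows; the value of the discount constant \(c\) is not reported, so the one below is an assumption, as are the function names.

```python
import math

V_MAX = 0.4   # maximum linear velocity of the robot (m/s)
C = 0.5       # discount on the angular term; the exact value used is not stated

def negative_reward(collided, reached, v, w, theta):
    """Equation 3: the per-step term is always <= 0, pushing the agent to finish quickly."""
    if collided:
        return -10.0
    if reached:
        return 10.0
    return (v - C * abs(w)) * math.cos(theta) - V_MAX

def positive_reward(collided, reached, v, w, theta):
    """Equation 4: same shaping without the constant penalty, so per-step values can be positive."""
    if collided:
        return -10.0
    if reached:
        return 10.0
    return (v - C * abs(w)) * math.cos(theta)
```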
### Neural Network Architectures
A CNN architecture based on the work proposed by Wang et al. [45] is used for the D3QN agent to process the stack of depth images corresponding to the state and to output the Q-values of each action. The number of layers and the hyperparameters of each layer are the same as in the NN evaluated in that article. For the goal-oriented navigation task, the distance and angle towards the target are appended to the output of the flattening layer.
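The sketch below illustrates the overall structure in PyTorch: a convolutional trunk over the stack of depth images, the polar coordinates concatenated after the flattening layer, and the dueling split into value and advantage streams used by D3QN. The convolutional layer sizes follow the common Atari-style layout and are assumptions rather than the exact configuration of Wang et al. [45].

```python
import torch
import torch.nn as nn

class DuelingCNN(nn.Module):
    """Dueling Q-network over a stack of depth images, with goal coordinates appended."""
    def __init__(self, n_actions, in_frames=4, image_hw=(64, 80), n_goal_features=2, hidden=512):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_frames, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            conv_out = self.conv(torch.zeros(1, in_frames, *image_hw)).shape[1]
        feat_dim = conv_out + n_goal_features  # polar coordinates appended after flattening
        self.value = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.advantage = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions))

    def forward(self, depth_stack, goal_features):
        x = self.conv(depth_stack)
        x = torch.cat([x, goal_features], dim=1)  # skipped for pure obstacle avoidance
        value = self.value(x)
        advantage = self.advantage(x)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return value + advantage - advantage.mean(dim=1, keepdim=True)
```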
For the Rainbow DQN agent, the last layers of the network architecture are modified, following the implementation of the C51 agent described by Bellemare et al. in [3], which uses 51 atoms to estimate the distribution of returns instead of their expected values.

Training a robotics RL agent in the real world requires a significant amount of time for the algorithm to converge, constant supervision to reset the agent to its initial state after reaching a terminal state, and care to avoid accidents. For that reason, the implementation proposed in this thesis is done in a simulator, which has benefits such as speeding up the training time, automatically resetting the whole environment after each episode and allowing different initial configurations so the agent can explore the entire environment better.
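Returning to the distributional head of the Rainbow agent described above: at decision time, the 51-atom output is collapsed back into scalar action values by taking the expectation over the fixed support. A minimal sketch follows; the support bounds are assumptions, since the value range used in this work is not reported.

```python
import torch

N_ATOMS = 51
V_MIN, V_MAX = -10.0, 10.0                        # support bounds are assumptions
support = torch.linspace(V_MIN, V_MAX, N_ATOMS)   # fixed atom locations z_i

def q_values_from_distribution(atom_probabilities):
    """atom_probabilities: tensor of shape (batch, n_actions, N_ATOMS), rows summing to 1.
    Returns the expected return Q(s, a) = sum_i z_i * p_i(s, a) for each action."""
    return (atom_probabilities * support).sum(dim=-1)

def greedy_action(atom_probabilities):
    return q_values_from_distribution(atom_probabilities).argmax(dim=-1)
```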
### Simulated Environment
The Robotic Operating System (ROS) was chosen as the robotics framework to run the experiments, as it provides many software libraries and tools used to build robot applications, as well as communication between the different software components needed to run or simulate a robotic system, such as sensor readings, control algorithms and task algorithms. The distribution of ROS used to run the experiments was Melodic Morenia.
Footnote 2: [http://wiki.ros.org/](http://wiki.ros.org/)
The simulated robot used for training is the Turtlebot 2, an open-source robot commonly used in robotics research. It features an Asus Xtion PRO LIVE as an RGB-D camera and the differential drive base Kobuki, which has a variety of sensors, such as odometry, a gyroscope and a laser sensor. Its maximum translational velocity is 0.7 m/s, and its maximum rotational velocity before performance degradation is 110 deg/s. Being a differential-wheeled robot allows it to change its direction without additional forward or backward motion. The laser sensor was used to detect collisions accurately at fixed distances. Still, its data were not considered in the state representation, meaning that a bumper or other collision-detecting sensor could replace it.
Gazebo was used as the robotics simulator to model the environment, load the Turtlebot 2 and its sensors to train the proposed reinforcement learning agent, and run the simulation up to ten times faster than real time. The Gazebo version used is 9.0.0. The open-source openai_ros ROS package, developed by The Construct, was used as the RL framework, which provides communication between ROS, Gazebo and the RL scripts. It also handles the environment's set-up in Gazebo, providing states and rewards at each time step and resetting the environment at the end of each episode.
Footnote 2: [https://gazebosim.org/home](https://gazebosim.org/home)
Footnote 3: [https://www.theconstructsim.com/](https://www.theconstructsim.com/)
Finally, the reinforcement learning algorithms, training and evaluating scripts were implemented using the Python programming language, with the OpenCV computer vision library being used to preprocess the depth images. The Rainbow DQN and D3QN algorithms were based on the implementation of Dittert [10] and Arulkumaran [1].
### Training
The training was done in a simulated environment. The hyperparameters' values were chosen based on the algorithms' original work. The learning rate, Adam optimiser, gamma, batch size and hidden layer size were the same as in the original DQN work of Mnih et al. [28]. The buffer size was lowered because of initial hardware limitations, and the number of random steps to fill it was also proportionally decreased. The \(N\) step, \(\tau\), and minimum \(\epsilon\) values were chosen according to the Rainbow DQN proposed by Hessel et al. [14]. The D3QN agent requires the \(\epsilon\) hyperparameter for exploration, which starts with a value of 1 and is exponentially decayed until it reaches \(\epsilon_{min}\). Since the number of training episodes would be much smaller compared to other RL-related works, the \(\alpha\) value was slightly increased and the \(\omega\) value decreased to prioritise experiences earlier. A soft update of parameters with the value of \(\tau\), as described by Lillicrap et al. in [21], was chosen instead of a hard update. A summary of the hyperparameters used can be seen in Table 2.
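For clarity, the soft update mentioned above amounts to Polyak averaging of the target network towards the online network; a minimal sketch, assuming PyTorch modules:

```python
def soft_update(online_net, target_net, tau=0.001):
    """Soft target update: target <- tau * online + (1 - tau) * target, applied after each learning step."""
    for target_param, online_param in zip(target_net.parameters(), online_net.parameters()):
        target_param.data.copy_(tau * online_param.data + (1.0 - tau) * target_param.data)
```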
The depth images were resized, normalised and pre-processed before being passed to the agent as observations. The default size of the depth images used for training was \(80\times 64\) pixels, similar to the size of images used for training RL agents in Atari games since the DQN implementation in [28], but keeping the width and height ratio of the original image size. Also, at each step, the depth image was stacked with the three previous ones, as described in the design section, while at the start of each episode, the initial frame was copied four times.
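A minimal sketch of this preprocessing step with OpenCV is shown below; the maximum depth range used for normalisation and the handling of invalid readings are assumptions, since these details are not reported.

```python
import cv2
import numpy as np

def preprocess_depth(depth_image, width=80, height=64, max_range=10.0):
    """Resize a raw depth frame and normalise it to [0, 1]."""
    depth = np.nan_to_num(depth_image, nan=max_range)        # missing readings treated as far away
    depth = np.clip(depth, 0.0, max_range) / max_range        # normalise by the assumed maximum range
    depth = cv2.resize(depth.astype(np.float32), (width, height), interpolation=cv2.INTER_AREA)
    return depth
```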
The experiments were performed on a computer equipped with an AMD Ryzen 5 3600 CPU, an NVIDIA RTX 3060 Ti GPU and 32 GB of RAM.
### Obstacle Avoidance
The obstacle avoidance agent was trained in a \(5\,m\) wide environment with different obstacles, as shown in Fig. 6. The reasoning behind its design was to expose the RL agent to different obstacle shapes so it could better learn how to avoid collisions. At the start of each episode, the agent's starting position was randomly initialised from 15 possibilities to accelerate the learning process and address the challenge of generalisation presented in [52]. Each training session lasted for 1500 episodes, and the episodes ended after 400 steps or when the agent crashed into an obstacle. For better accuracy, collisions were detected with the robot's laser sensor at a distance of 0.3 meters.
As shown in Table 3, six obstacle avoidance agents were trained, with their label referring to the algorithm, reward function and size of depth images used during their training.
### Navigation
The goal-oriented agent was trained in a slightly wider, \(6m\) environment with only primitive shapes as obstacles, which can be seen in Fig. 7. The reason behind using more basic obstacles in the environment is for the agent to focus more on the path-planning competence of the goal-oriented navigation task rather than the obstacle-avoiding one. At the start of each episode, the agent's starting position was randomly initialised from 5 different possibilities and the goal position from a set of 6 cases. The maximum number of steps was slightly lowered to 350, but the
\begin{table}
\begin{tabular}{c c c} \hline \hline Hyperparameter & D3QN & Rainbow DQN \\ \hline Learning rate & 0.00025 & 0.00025 \\ Batch Size & 32 & 32 \\ Hidden Layer Size & 512 & 512 \\ \(\gamma\) & 0.99 & 0.99 \\ Buffer Size & 100000 & 100000 \\ Initial Random Steps & 20000 & 20000 \\ \(\tau\) & 0.001 & 0.001 \\ \(\epsilon_{min}\) & 0.01 & N/A \\ N step & 1 & 3 \\ \(\omega\) & 0.4 & 0.4 \\ \(\alpha\) & 0.6 & 0.6 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hyperparameter values. The D3QN agent requires the hyperparameter \(\epsilon\) for exploration, while Rainbow DQN uses the proposed NoisyNets for exploration.
total episode count increased to 25,000. The collision detection and episode-ending conditions were almost identical to the previous task, with an added terminal state when the agent reached the goal. A goal was considered reached at a lenient distance of \(0.8\,m\) to speed up the learning process.
Three agents were trained, with different algorithm and reward function choices, as seen in Table 4.
Figure 6: The training environment for the obstacle avoidance task. Arrows indicate available random starting configurations.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Reward Function & Size of Depth Image \\ \hline SimpleD3QN & 1 & \(80\times 64\) \\ SimpleRainbow & 1 & \(80\times 64\) \\ SimpleRainbowL & 1 & \(160\times 128\) \\ BehaviourD3QN & 2 & \(80\times 64\) \\ BehaviourRainbow & 2 & \(80\times 64\) \\ BehaviourRainbowL & 2 & \(160\times 128\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Agents trained for the obstacle avoidance task. Their names indicate their algorithm, reward function and the size of the depth images used during training.
### Evaluation
#### 6.9.1 Obstacle Avoidance
For the obstacle avoidance task, the models were subjected to two evaluations, one for their ability to evade different obstacles and another to test whether their training was enough to navigate a circuit-like environment without a goal.
The environment used to test obstacle avoidance competence is the same for training but with different starting points that put the robot close to the obstacles from the beginning. Two points nearby were chosen as starting positions for each of the six types of obstacles, resulting in 12 initial configurations, as seen in Fig. 8. The evaluation had a duration of 600 episodes, with 100 steps each. The idea behind it is only to check whether the robot can avoid collisions with the specific types of obstacles it is trained with, as its capability to move forward while avoiding walls
\begin{table}
\begin{tabular}{c c} \hline \hline Agent & Reward Function \\ \hline NegativeD3QN & 3 \\ NegativeRainbow & 3 \\ PositiveRainbow & 4 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Agents trained for the goal-oriented navigation task. Their names indicate their algorithm and reward function used during training.
Figure 7: The training environment for the navigation task. Arrows indicate possible starting configurations, and dots represent goal positions.
will be tested later.
The second test was performed in a simple circuit-like environment with four pre-defined starting points, shown in Fig. 9. A perfect performance was not expected, as the agent was trained in a different environment. However, the reasoning behind it is that RL agents sometimes optimise their behaviour in unintended ways. One such case for an obstacle avoidance task, as there is no reward based on a clear objective other than a penalty for colliding, would be if the agent moved around in circles. To test whether the agents can navigate a road bounded by walls where circular motion is impossible without colliding, a simple circuit-like scenario from the openai_ros package was adapted as an evaluation environment.
In both cases, the distance for considering a collision was slightly lowered to \(0.2m\) to evaluate the agent's reaction competence better.
#### 6.9.2 Navigation
For the navigation task, the models were evaluated in the same environment used for training, with and without the same starting points. The assignment was more challenging, as the agents needed to avoid obstacles while moving closer to the goal. Therefore, each agent was tested on its learning and adaptive capabilities. The agent was allowed to navigate a maximum of 250 steps to reach its destination and was evaluated for 1000 episodes. The number of goal positions was increased to 10, but the starting configurations were kept to 5. The adjustment of starting and goal positions is shown in Fig. 10.
The collision detection was turned off during the evaluation process so that the agent still had a chance to overcome the obstacles and fulfil the goal-reaching task.
Figure 8: Obstacle avoidance evaluation starting positions. Two points near each obstacle were chosen as valid starting positions.
## 7 Experimental Results
### Training
There are different metrics to consider when evaluating the training performance of an RL agent. The most important is the return, which indicates how well the agent
Figure 10: Goal-Oriented navigation evaluation environment. The starting and goal positions were shifted to test the agents’ adaptation capacity.
Figure 9: Circuit navigation evaluation environment for the obstacle avoidance agents. Although the obstacles are simpler, the lack of space prevents circular motion from being an optimal behaviour to avoid collisions.
performed its task. However, in the goal-oriented navigation task, the starting and goal positions are randomly chosen from a set at the start of each episode, meaning that the maximum return the agent can achieve per episode varies; therefore, the metric can be quite noisy. Nonetheless, it still shows the learning curve and is expected to increase over time as the agent optimises its behaviour. One task-independent metric, also used in ML applications, to describe the learning of an algorithm is the loss function, which is expected to decrease over time as the agent explores its environment and improves its estimations. For the TD methods, the loss indicates the mean squared error between the calculated \(Q\)-value and its target value. A task-dependent metric that can be compared for the navigation task is the percentage of times the agent reaches the goal. As for the obstacle avoidance task, the rate of collisions and the number of steps the agent managed to navigate before crashing can be measured.
Because the original plots are very noisy, mainly due to the initial random position at the start of each episode, the results presented were calculated using a moving average of one hundred steps.
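The smoothing applied to the plots corresponds to a simple moving average; a minimal sketch:

```python
import numpy as np

def moving_average(values, window=100):
    """Smooth a noisy per-episode metric with a sliding window of the given size."""
    values = np.asarray(values, dtype=float)
    if len(values) < window:
        return values
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="valid")
```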
Finally, the different metrics were measured in episodes, as the tasks relied on avoiding collisions or reaching the goal within a reasonable amount of time steps, and the agents were rewarded or punished accordingly. The only exception was the loss function, which was monitored at each time step to verify the learning process with each batch of samples used.
#### 7.1.1 Obstacle Avoidance
Six different agents were trained and compared for the obstacle avoidance task: for each of the two reward functions, one D3QN agent and two Rainbow agents trained with different sizes of depth images.
Between the agents with the simple reward function, which corresponds to Equation 1, SimpleRainbowL achieved slightly better results than SimpleRainbow by maintaining a higher return, a lower collision rate and more training steps, as seen in Fig. 11. The D3QN agent was trained for fewer training steps, meaning it crashed earlier in each episode. The use of a smaller depth image size allowed SimpleRainbow to seemingly achieve peak performance at around 800 episodes, followed by SimpleRainbowL at 1000 episodes and SimpleD3QN at 1200 episodes, when their return was at its highest and their collision rate at its lowest.
Similar results were achieved by the agents with the behaviour reward, corresponding to Equation 2, as demonstrated in Fig. 12. However, peak performance was achieved after 1100 episodes, as indicated by the collision rate, since the return now depends on the behaviour. BehaviourRainbowL had a lower collision rate and a higher return, meaning that it performed better both at the task and at adapting to the constraints on the velocities.
As seen in Fig. 13, the Rainbow agents outperformed the D3QN ones at avoiding collisions by doubling the number of steps navigated and having much lower crash
rates. Also, using a larger image slightly improved the results, but at the cost of requiring more time to train. The loss and returns could not be compared, as the reward functions operated at different scales. Agents with the behaviour reward took longer to learn to avoid collisions, as they seemed to start optimising their behaviour first. Still, their performance increased sharply after some exploration, which can be seen in the drop of their collision rate in Fig. 13. Even so, as expected, the agents with the simple reward had lower collision rates, as their only objective was to avoid collisions. In contrast, the behaviour reward imposed a penalty on the other agents' choice of speed, which demanded more training time to improve their results.
All agents learnt at different rates, as seen in the decrease in their average loss.
Figure 11: Training performance of the obstacle avoidance agents with simple reward. The best performance was achieved by SimpleRainbowL, the Rainbow DQN agent that used the simple reward function and larger depth image size.
#### 7.1.2 Navigation
For the navigation task, three different agents were trained: a D3QN agent and a Rainbow agent with the negative reward function, and a Rainbow agent with the positive reward function. In this case, the average return cannot be compared, as both reward functions are on different scales, but it can still be seen as a trace of the agent's learning process. Also, the average number of steps was not used as a metric, as most of the time the agents collided quickly and the episode ended early while they learnt to reach the goal.
As evidenced in Fig. 14, Rainbow agents performed better than the D3QN agent by doubling the number of times they reached the goal. Additionally, NegativeRainbow, the agent with the negative-based reward function corresponding to Equation 3, yielded better results by reaching the goal more often, as expected. The reason is that the additional constraint encourages the agent to reach the destination as fast as possible to stop the punishment at each time step. Nonetheless,
Figure 12: Training performance of the obstacle avoidance agents with behaviour reward. The best performance was achieved by BehaviourRainbowL, the Rainbow DQN agent that used the behaviour reward function and larger depth image size.
the positive reward-based agent, which follows Equation 4, still managed to optimise its behaviour and reached the goal a fair number of times, as seen in its increasing return.
All the agents had more room to learn, as seen in their increasing returns and decreasing losses at the end of the training process.
### Evaluation
For the evaluation process, only the task-dependent metrics are compared, as there is no learning process involved, and the trained models are only used to select their best-valued action, given the current state of the environment. Only Rainbow agents were used for evaluation, as they drastically outperformed the D3QN agents during training and were expected to perform better even under different conditions.
#### 7.2.1 Obstacle Avoidance
For both tests, obstacle avoidance and circuit navigation, the agents were evaluated on their average crash rate and the average number of steps they could navigate without a collision. Also, the action selected at each time step was tracked to analyse the behaviour of each agent.
The results for the evaluation of the obstacle evasion task were similar to those obtained during training. SimpleRainbowL achieved the best results, as seen in Table 5, by having a lower collision rate and a higher number of steps without crashing. Using the simple reward function almost halved the average collision percentage compared with the behaviour reward function, and a larger depth image size also produced slightly better results. The average collision rates are higher than the final averages
Figure 13: Training performance of the obstacle avoidance agents. Better results were achieved by using the Rainbow DQN algorithm, the simple reward function and a larger depth image size.
seen during the training process in Fig. 13, as the difference in conditions influences the results.
However, the difference in performance can be related to the difference in each agent's distribution of chosen actions, seen in Fig. 15. The agents with the simple reward function had a uniform distribution in their action selection, with a slight preference for evading in a particular direction. Meanwhile, the agents with the behaviour reward function prioritised using the highest linear velocity and avoided turning altogether, preferring small angular velocities when it was necessary to avoid an obstacle. For that reason, the difference in task performance is unsurprising when considering that one type of agent had to evade while going at full speed and barely turning.
In the case of the circuit navigation evaluation, using a larger depth image size
Figure 14: Training performance of the goal-oriented navigation agents. NegativeRainbow performed the task better by achieving a higher rate of goals reached. Meanwhile, the loss and return evidenced the learning process of all agents.
proved to be more critical than the reward function: as seen in Table 6, those agents collided less, independently of their reward function. Still, SimpleRainbowL achieved the lowest collision rate and the highest number of steps without crashing, showing a better ability to adapt to a different environment. Nonetheless, the results were better than expected, with all agents being able to navigate above the average number of steps and only having difficulties in the sharp turns of the circuit, which were absent in their training environment. In addition, the best agent, SimpleRainbowL, stayed below the halfway mark for the average number of collisions, as seen in Table 6, with its evident difficulty being the left turn at the centre of the scene, which can be reached from two out of the four starting points, corresponding to the right and bottom starting positions in Fig. 9.
The contrast in the distribution of chosen actions is also seen in this task, as evidenced by Fig. 16, with the behaviour reward function demanding less turning and more speed. There was an increase in the choice of turning right, which was caused by the circuit design.
\begin{table}
\begin{tabular}{c c c} \hline Agent & Average Collision Percentage & Average Steps \\ \hline SimpleRainbow & 72.5\% & 109 \\ SimpleRainbowL & **48\%** & **150** \\ BehaviourRainbow & 82.25\% & 115 \\ BehaviourRainbowL & 58.25\% & 146 \\ \hline \end{tabular}
\end{table}
Table 6: Evaluation performance of the obstacle avoidance agents in the circuit navigation. SimpleRainbowL achieved the best results by colliding less, even in an environment with sharper turns.
\begin{table}
\begin{tabular}{c c c} \hline Agent & Average Collision Percentage & Average Steps \\ \hline SimpleRainbow & 43.16\% & 69 \\ SimpleRainbowL & **41.83\%** & **71** \\ BehaviourRainbow & 74.83\% & 45 \\ BehaviourRainbowL & 70.5\% & 47 \\ \hline \end{tabular}
\end{table}
Table 5: Evaluation performance of the obstacle avoidance agents. SimpleRainbowL achieved the best results by colliding less and persisting for more time without crashing.
#### 7.2.2 Navigation
The navigation agents were evaluated in the same scene as their training, and the collisions were turned off to better test their learnt ability to reach the goal. When evaluated under the same training conditions, as seen in Table 7, using the negative-based reward achieves better results, almost beating the environment altogether. Nonetheless, the positive reward-based agent also achieved good results, reaching the goal around seventy per cent of the time. The lower average number of steps of NegativeRainbow reflects its speed at reaching its destination.
Although both agents were trained under the same restrictions for their choices of linear and angular velocities, there was a noticeable difference in the distribution of their chosen actions, evidenced in Fig. 17. NegativeRainbow preferred the highest linear speed and relied less on turning, displaying its better mastery of the task,
Figure 15: Distributions of the chosen actions by the obstacle avoidance agents during evaluation. The behaviour reward function restricted the choice of angular speeds and prioritised the maximum value of linear speed.
while PositiveRainbow favoured a lower linear speed.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Average Goal Reached Percentage & Average Steps \\ \hline NegativeRainbow & **96.9\%** & **79** \\ PositiveRainbow & 67.9\% & 173 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Evaluation performance of the goal-oriented navigation agents under the same circumstances. NegativeRainbow almost beat the environment by achieving a near-perfect goal-reaching rate.
Figure 16: Distributions of the chosen actions by the obstacle avoidance agents during circuit navigation evaluation. The increase in the need to turn right further evidences the difference in behaviour and performance. The simple reward function allowed the agents to overcome the circuit’s sharp turns better.
However, once the initial conditions are changed, there is a sharp decline in both agents' performance, as seen in Table 8, with them reaching the goal less than half as often as in the previous evaluation. It is also noteworthy that the agents took almost the maximum number of steps to reach the goal. Their uncertainty was also reflected in their actions, shown in Fig. 18, with both agents turning more frequently.
The lower performance led to the belief that either the state representation for the navigation task needed to be better or the NN ignored the polar coordinates. Furthermore, the agent seemed to learn to reach the goal by visually recognising the path in its training environment.
Nonetheless, some of NegativeRainbow's trajectories in its training environment, where it fulfilled its task almost entirely, were compared with the path computed by the standard navigation stack developed for the Turtlebot 2 in Fig. 19. The navigation stack uses Dijkstra's algorithm for the global path planner and DWA for local path planning. To compute the paths, it was first required to manually
Figure 17: Distributions of the chosen actions by the goal-oriented navigation agents during evaluation. NegativeRainbow shows more confidence by choosing the higher linear speed, even though both agents are rewarded by its choice.
\begin{table}
\begin{tabular}{c c c} \hline \hline Agent & Average Goal Reached Percentage & Average Steps \\ \hline NegativeRainbow & **35.5\%** & **200** \\ PositiveRainbow & 22.6\% & 228 \\ \hline \hline \end{tabular}
\end{table}
Table 8: Evaluation performance of the goal-oriented navigation agents under different initial conditions. The agent’s average number of steps required to reach the goal almost reached the limit.
generate the cost map using the GMapping package. The RL agent's trajectories only required loading the trained policy but were registered on the same map for clarity.
Even if the agent's capability to generalize its knowledge was lacking, it seemed
Figure 19: Comparison between NegativeRainbow’s trajectories and the standard Turtlebot2 navigation stack, which uses Dijkstra’s algorithm and DWA. Arrows indicate the starting configurations, and the circle is the goal position.
Figure 18: Distributions of the chosen actions by the goal-oriented navigation agents during evaluation under different initial conditions. NegativeRainbow preferred to turn around, while PositiveRainbow kept its original distribution.
to approach the optimal behaviour in its training environment, as its trajectories were straight and shorter than those computed by the path planner. NegativeRainbow preferred to navigate between the obstacles to reach the goal faster rather than planning a path around them.
## 8 Discussion
The best obstacle avoidance agent, SimpleRainbowL, reached below the twenty per cent collision rate during training, as seen in Fig. 11. It also achieved 41.83% under different conditions, as evidenced in Table 5, and 48% in a different environment, reported in Table 6. It hinted that a simple reward function might be enough to fulfil a task if the state representation is adequate, in this case, the depth images.
Training with a larger image size yielded slightly better results but required more time to train for the same amount of episodes. And when the agents were evaluated in a different environment, the size of the images they used appeared to influence their results more, as seen in Table 6. More experiments would be required to validate these assumptions or verify if the standard image size was too small.
The agents with the behaviour reward function took longer to lower their collision rates, as seen in Fig. 13, because they had to consider the constraints imposed by their reward function when optimising their policy. Nonetheless, rewarding the value of the linear velocity and penalising the angular velocity achieved the expected result, as the agents preferred to move faster and turn less. This behaviour may not be ideal for the obstacle avoidance task, where the agent must prioritise avoiding collisions rather than moving fast. Still, it served as a proof of concept and a basis when designing the reward function of the goal-oriented navigation task where reaching the objective faster was preferred.
When switching to the goal-oriented navigation task, the distance and angle to the goal were added to the state representation to reward the agent for moving closer to the objective. Nonetheless, the agent also had to use depth images to avoid obstacles. The best agent, NegativeRainbow, achieved a 96.9% rate of reaching the goal in its training environment and 35.5% under different conditions, seen in Tables 7 and 8 respectively. Its success during training seemed to be due to the use of depth images as part of the state representation, as the drop in performance during evaluation indicated that the agent could not constantly localise its goal. Furthermore, the analysis of the agent's trajectories during motion seemed to imply that it almost reached the optimal behaviour in its training environment. However, further study and measurements would be required to confirm it.
It was noteworthy that NegativeRainbow almost always reached the goal in its training environment while also avoiding obstacles, suggesting that using depth images might be enough to learn the task in a specific environment. However, the lack of solid generalisation to different conditions makes it impractical for use in different environments. The results imply that switching from a simpler task to a more complex one requires more consideration than adding the additional
information required.
In the goal-oriented navigation task, the penalty at each time step improved the agent's performance, seen in Table 7, as the agent was urged to reach the destination as fast as possible. On the contrary, a reward at each time step might have instigated the agent to accumulate reward by navigating rather than by reaching the goal, as the agent still appeared to increase its average return during training, which can be noticed in Fig. 14.
In all cases, the Rainbow DQN algorithm achieved better results than the D3QN algorithm. The improved exploration mechanism provided by the noisy networks, the consideration of additional time steps in the computation of the n-step return, and the better estimates provided by modelling the distribution of returns aided the agents in accomplishing both tasks. Similar to the original paper, where Hessel et al. [14] demonstrated a significant improvement in the results of Rainbow over previous variations of the DQN algorithm, the agents trained with the Rainbow DQN algorithm reached significantly more goals and collided less with obstacles than those trained with D3QN. This supports the idea of using variations of DRL algorithms with additional improvements to achieve better results rather than clinging to the most popular ones.
Finally, all agents trained seemed to learn their tasks with varying degrees of success, as their average reward kept increasing and the loss of their algorithm decreased during their training. Longer training sessions could increase the agents' performance even further.
## 9 Conclusions
This research project involved implementing a DRL approach for different robot navigation-related tasks. The proposed methods achieved a 41.83% collision rate for the obstacle avoidance task and a 96.9% target-reaching rate for the goal-oriented navigation task in their training environments. However, their lower performance during evaluation suggests that further work is required to achieve optimal behaviour.
The experimental work suggests that the improved exploration, more informed updates and better estimations of the Rainbow DQN allowed it to reach more targets and collide less during training than the D3QN agents. The results support the idea that, much like its comparison with the previous variations of the DQN method in their original domain of Atari games, Rainbow DQN might also perform better at navigation-related tasks. This could lead to improvements in existing works or as an idea to consider when designing a new DRL approach in the same field.
To perform the goal-oriented navigation task, the agent was provided additional information to measure how close it was to the goal compared to the design of the obstacle avoidance task. The trained agent seemed to succeed at the task in its training environment with a 96.9% goal-reaching rate but only achieved 35.5%
under different conditions, seemingly learning the specific paths to the goals during training. The results suggest that the transition from the obstacle avoidance task to the goal-oriented navigation task could not be accomplished simply by adding the parameters for the agent's localisation, and that further study should be performed on the state representation or on balancing the weight of each data source.
Finally, a behaviour was induced in an obstacle avoidance agent by placing penalties based on its linear and angular velocities in the reward function, which led to the robot preferring to move faster and avoid turning. Still, it avoided fewer obstacles than using a simpler reward function with the same amount of training, suggesting that it required more time to learn its task. In the case of the goal-oriented navigation task, a penalty at each time step encouraged an agent to reach the target faster and more consistently than using a reward function that could grant positive values. Similar constraints could also be implemented for other navigation-related tasks, but a trade-off between training time and performance might still apply.
|
2306.12949 | On the use of the Gram matrix for multivariate functional principal
components analysis | Dimension reduction is crucial in functional data analysis (FDA). The key
tool to reduce the dimension of the data is functional principal component
analysis. Existing approaches for functional principal component analysis
usually involve the diagonalization of the covariance operator. With the
increasing size and complexity of functional datasets, estimating the
covariance operator has become more challenging. Therefore, there is a growing
need for efficient methodologies to estimate the eigencomponents. Using the
duality of the space of observations and the space of functional features, we
propose to use the inner-product between the curves to estimate the
eigenelements of multivariate and multidimensional functional datasets. The
relationship between the eigenelements of the covariance operator and those of
the inner-product matrix is established. We explore the application of these
methodologies in several FDA settings and provide general guidance on their
usability. | Steven Golovkine, Edward Gunning, Andrew J. Simpkin, Norma Bargary | 2023-06-22T15:09:41Z | http://arxiv.org/abs/2306.12949v2 | # On the use of the Gram matrix for multivariate functional principal components analysis
###### Abstract
Dimension reduction is crucial in functional data analysis (FDA). The key tool to reduce the dimension of the data is functional principal component analysis. Existing approaches for functional principal component analysis usually involve the diagonalization of the covariance operator. With the increasing size and complexity of functional datasets, estimating the covariance operator has become more challenging. Therefore, there is a growing need for efficient methodologies to estimate the eigencomponents. Using the duality of the space of observations and the space of functional features, we propose to use the inner-product between the curves to estimate the eigenelements of multivariate and multidimensional functional datasets. The relationship between the eigenelements of the covariance operator and those of the inner-product matrix is established. We explore the application of these methodologies in several FDA settings and provide general guidance on their usability.
_Keywords--_ Dimension Reduction; Functional Data Analysis; Functional Principal Components; Multivariate Functional Data
## 1 Introduction
Functional data analysis (FDA) is a statistical methodology for analyzing data that can be characterized as functions. These functions could represent measurements taken over time or space, such as temperature readings over a yearly period or spatial patterns of disease occurrence. The goal of FDA is to extract meaningful information from these functions and to model their behavior. See, e.g., Ramsay and Silverman (2005); Horvath and Kokoszka (2012); Wang et al. (2016); Kokoszka et al. (2017) for some references on FDA.
Functional principal component analysis (FPCA) is an extension of principal component analysis (PCA, a commonly used tool for dimension reduction in multivariate data) to functional data. FPCA was introduced by Karhunen (1947) and Loeve (1945) and developed by Dauxois et al. (1982). Since then, FPCA has become a prevalent tool in FDA due to its ability to convert infinite-dimensional functional data into finite-dimensional vectors of random scores. These scores are a countable sequence of uncorrelated random variables that can be truncated to a finite vector in practical applications. By applying multivariate data analysis tools to these random scores, FPCA can achieve the goal of dimension reduction while assuming mild assumptions about the underlying stochastic process. FPCA is usually used as a preprocessing step to feed, e.g., regression and classification models. Recently, FPCA has been extended to multivariate functional data, which are data that consist of multiple functions that are observed simultaneously. This extension is referred to as multivariate functional principal component analysis (MFPCA). As for FPCA, a key benefit of MFPCA is that it allows one to identify and visualize the main sources of variation in the multivariate functional data. This can be useful in different applications, such as identifying patterns of movements in sport biomechanics (Warmenhoven et al., 2019), analyzing changes in brain activity in neuroscience (Song and Kim, 2022), or comparing countries' competitiveness in economics (Krzysko et al., 2022).
In MFPCA, we seek to decompose the covariance structure of the multivariate functional data into a set of orthogonal basis functions, named the principal components, which capture the main
sources of variation in the data. There are multiple approaches to estimate the principal components of a multivariate functional dataset. Ramsay and Silverman (2005) combine the multivariate curves into one big curve and then perform a usual FPCA via an eigendecomposition of the covariance structure. This methodology can only be run for data that are defined on the same unidimensional domain, that exhibit similar amounts of variability and are measured in the same units. Jacques and Preda (2014) propose to represent each feature of the multivariate function separately using a basis function expansion. This results in a different set of coefficients for each univariate curve. The eigendecomposition is then run on the matrix of stacked coefficients. To consider the normalization issue of Ramsay and Silverman (2005), Jacques and Preda (2014) and Chiou et al. (2014) propose to normalize the data by the standard deviation of the curves at each of the sampling points. Happ and Greven (2018) extend the estimation of multivariate principal components to functional data defined on different dimensional domains. Their estimation procedure is based on carrying out FPCA on each univariate feature, and then using a weighted combination of the resulting principal components to obtain the multivariate eigencomponents. Finally, Berrendero et al. (2011) develop a different method to estimate the eigencomponents as they perform a principal components analysis for each sampling time point.
The key motivation of this paper is to investigate the duality between rows and columns of a data matrix to estimate the eigencomponents of a multivariate functional dataset. The duality between rows and columns of a data matrix is a fundamental concept in classical multivariate statistics (Escofier, 1979; Saporta, 1990). A data matrix typically represents a set of observations of multiple features, each row corresponds to an individual observation and each column corresponds to an individual feature. The duality between rows and columns refers to the fact that many statistical methodologies can be conducted either on the rows or the columns of the data matrix, and the results will be related to each other. For example, the principal components obtained from a PCA run on the rows of the data matrix are the same as the ones obtained from a PCA run on the columns of the matrix. The choice of method to use, based on criteria such as computational time or data storage needs, is thus left to the statistician. This concept has been widely studied in multivariate statistics (see, e.g., Pages (2014); Hardle and Simar (2019)). In the context of functional data, this principle has received limited attention despite being mentioned in the seminal paper of FDA (Ramsay, 1982). Ramsay and Silverman (2005) briefly commented on it in a concluding remark of Chapter 8, while Kneip and Utikal (2001) and Benko et al. (2009) utilized it to compute principal components for dense univariate functional data. Chen et al. (2017) also employ it to gain computational advantage when univariate functional data are sampled on a very dense grid. To the best of our knowledge, however, there is no available literature on its application to multivariate functional data that are observed on different dimensional domains. Our aim is therefore to investigate this duality for multivariate functional data observed on different dimensional domains and provide guidelines to statisticians on which method to use in different cases.
The remainder of the paper is organized as follows. In Section 2, we define multivariate functional data, with components that are observed on possibly different domains. In Section 3, we develop the duality between the observations' space and the functional components' space. The relationship between the eigencomponents of the covariance operator of the multivariate functional datasets and the eigencomponents of the inner-product matrix between the observations is derived in Section 4. Extensive simulations are given in Section 5. We also provide guidelines on which method to use with respect to different data characteristics. The paper concludes with a discussion and an outlook in Section 6.
## 2 Model
The structure of the data we consider, referred to as _multivariate functional data_, is similar to that presented in Happ and Greven (2018). The data consist of independent trajectories of a vector-valued stochastic process \(X=(X^{(1)},\ldots,X^{(P)})^{\top}\), \(P\geq 1\). (Here and in the following, for any matrix \(A\), \(A^{\top}\) denotes its transpose.) For each \(1\leq p\leq P\), let \(\mathcal{T}_{p}\) be a rectangle in some Euclidean space \(\mathbb{R}^{d_{p}}\) with \(d_{p}\geq 1\), e.g., \(\mathcal{T}_{p}=[0,1]^{d_{p}}\). Each coordinate \(X^{(p)}:\mathcal{T}_{p}\rightarrow\mathbb{R}\) is assumed to belong to \(\mathcal{L}^{2}\left(\mathcal{T}_{p}\right)\), the Hilbert space of square-integrable real-valued functions defined on \(\mathcal{T}_{p}\), having the usual inner product that we denote by \(\langle\cdot,\cdot\rangle\), and \(\left|\cdot\right|\) the associated norm. Thus \(X\) is a stochastic process indexed by \(\mathbf{t}=(t_{1},\ldots,t_{P})\) belonging to the \(P-\)fold Cartesian product \(\mathcal{T}\coloneqq\mathcal{T}_{1}\times\cdots\times\mathcal{T}_{P}\) and taking values in the \(P-\)fold Cartesian product space \(\mathcal{H}=\mathcal{L}^{2}\left(\mathcal{T}_{1}\right)\times\cdots\times\mathcal{L}^{2}\left(\mathcal{T}_{P}\right)\).
We consider the function \(\langle\!\langle\cdot,\cdot\rangle\!\rangle:\mathcal{H}\times\mathcal{H}\to \mathbb{R}\),
\[\langle\!\langle f,g\rangle\!\rangle\coloneqq\sum_{p=1}^{P}\Big{\langle}f^{(p)}, g^{(p)}\Big{\rangle}=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}f^{(p)}(t_{p})g^{(p)}(t_{p}) \mathrm{d}t_{p},\quad f,g\in\mathcal{H}.\]
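In practice, each feature is only observed on a finite sampling grid, so this inner product is approximated by numerical integration over each domain and summed across features. The following Python sketch, limited to one-dimensional domains and using the trapezoidal rule, is only an illustration of the definition and not part of the proposed methodology.

```python
import numpy as np

def univariate_inner_product(f_vals, g_vals, grid):
    """Approximate the L2 inner product of two curves sampled on a common grid."""
    return np.trapz(f_vals * g_vals, grid)

def multivariate_inner_product(f, g, grids):
    """Sum of the univariate inner products over the P features.
    f, g: lists of arrays (one per feature); grids: list of sampling grids."""
    return sum(univariate_inner_product(fp, gp, tp) for fp, gp, tp in zip(f, g, grids))
```

Features observed on higher-dimensional domains would require a multidimensional quadrature instead of the one-dimensional rule used here.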
\(\mathcal{H}\) is a Hilbert space with respect to the inner product \(\langle\!\langle\cdot,\cdot\rangle\!\rangle\) (Happ and Greven, 2018). We denote by \(|\!|\!|\cdot|\!|\!|\) the norm induced by \(\langle\!\langle\cdot,\cdot\rangle\!\rangle\). Let \(\mu:\mathcal{T}\to\mathcal{H}\) denote the mean function of the process \(X\), \(\mu(\mathbf{t})\coloneqq\mathbb{E}(X(\mathbf{t})),\,\mathbf{t}\in\mathcal{T}\). Let \(C\) denote the \(P\times P\) matrix-valued covariance function which, for \(\mathbf{s},\mathbf{t}\in\mathcal{T}\), is defined as
\[C(\mathbf{s},\mathbf{t})\coloneqq\mathbb{E}\left(\{X(\mathbf{s})-\mu(\mathbf{ s})\}\{X(\mathbf{t})-\mu(\mathbf{t})\}^{\top}\right),\quad\mathbf{s},\mathbf{t} \in\mathcal{T}.\]
More precisely, for \(1\leq p,q\leq P\), the \((p,q)\)th entry of the matrix \(C(\mathbf{s},\mathbf{t})\) is the covariance function between the \(p\)th and the \(q\)th features of the process \(X\):
\[C_{p,q}(s_{p},t_{q})\coloneqq\mathbb{E}\left(\{X^{(p)}(s_{p})-\mu^{(p)}(s_{p}) \}\{X^{(q)}(t_{q})-\mu^{(q)}(t_{q})\}\right),\quad s_{p}\in\mathcal{T}_{p},t_{ q}\in\mathcal{T}_{q}.\]
Let \(\Gamma:\mathcal{H}\to\mathcal{H}\) denote the covariance operator of \(X\), defined as an integral operator with kernel \(C\). That is, for \(f\in\mathcal{H}\) and \(\mathbf{t}\in\mathcal{T}\), the \(p\)th feature of \(\Gamma f(\mathbf{t})\) is given by
\[(\Gamma f)^{(p)}(t_{p})\coloneqq\langle\!\langle C_{p,\cdot}(t_{p},\cdot),f(\cdot)\rangle\!\rangle=\langle\!\langle C_{\cdot,p}(\cdot,t_{p}),f(\cdot)\rangle\!\rangle,\quad t_{p}\in\mathcal{T}_{p}.\]
Let us consider a set of \(N\) curves \(\mathcal{X}=\{X_{1},\ldots,X_{n},\ldots,X_{N}\}\) generated as a random sample of the \(P\)-dimensional stochastic process \(X\) with continuous trajectories. Unless otherwise stated, the data are assumed to be observed without error. The data can be viewed as a table with \(N\) rows and \(P\) columns where each entry is a curve, potentially on a multidimensional domain (see Figure 1). Each row of this matrix represents an observation; while each column represents a functional feature. At the intersection of row \(n\) and column \(p\), we thus have \(X_{n}^{(p)}\) which is the curve that concerns the (functional) feature \(p\) for the individual \(n\).
For \(n\in\{1,\ldots,N\}\), each observation \(n\) is attributed the weight \(\pi_{n}\) such that \(\sum_{n}\pi_{n}=1\), e.g., \(\pi_{n}=1/N\). For a given \(p\in\{1,\ldots,P\}\), the mean curve of the \(p\)th feature along the \(N\) observations is denoted by \(\mu^{(p)}\). This quantity can be computed as
\[\mu^{(p)}(t_{p})=\sum_{n=1}^{N}\pi_{n}X_{n}^{(p)}(t_{p}),\quad t_{p}\in \mathcal{T}_{p},\quad p\in\{1,\ldots,P\}.\]
The cross-covariance function of the \(p\)th and \(q\)th features along the \(N\) observations can be computed as, for \(p=1,\ldots,P\) and \(q=1,\ldots,P\),
\[C_{p,q}(s_{p},t_{q})=\sum_{n=1}^{N}\pi_{n}X_{n}^{(p)}(s_{p})X_{n}^{(q)}(t_{q})- \mu^{(p)}(s_{p})\mu^{(q)}(t_{q}),\quad s_{p}\in\mathcal{T}_{p},\quad t_{q}\in \mathcal{T}_{q}. \tag{1}\]
For the set \(\mathcal{X}\), the inner-product matrix, also called the Gram matrix, \(\mathbf{M}\) is defined as a matrix of size \(N\times N\) with entries
\[\mathbf{M}_{nn^{\prime}}=\sqrt{\pi_{n}\pi_{n^{\prime}}}\langle\!\langle X_{n}- \mu,X_{n^{\prime}}-\mu\rangle\!\rangle,\quad n,n^{\prime}=1,\ldots,N. \tag{2}\]
This matrix is symmetric, positive definite, and interpretable as a proximity matrix, each entry being the distance between the weighted observations.
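As an illustration, the entries of \(\mathbf{M}\) in (2) can be computed directly from discretised curves; the sketch below assumes one-dimensional domains, a common sampling grid per feature and uniform weights \(\pi_{n}=1/N\) by default.

```python
import numpy as np

def gram_matrix(curves, grids, weights=None):
    """curves: list of N observations, each a list of P arrays sampled on the grids.
    Returns the N x N matrix with entries sqrt(pi_n pi_m) <<X_n - mu, X_m - mu>>."""
    n_obs, n_feat = len(curves), len(grids)
    weights = np.full(n_obs, 1.0 / n_obs) if weights is None else np.asarray(weights)
    # Pointwise weighted mean curve for each feature.
    mu = [sum(w * obs[p] for w, obs in zip(weights, curves)) for p in range(n_feat)]
    centred = [[obs[p] - mu[p] for p in range(n_feat)] for obs in curves]
    M = np.empty((n_obs, n_obs))
    for i in range(n_obs):
        for j in range(i, n_obs):
            value = sum(np.trapz(centred[i][p] * centred[j][p], grids[p]) for p in range(n_feat))
            M[i, j] = M[j, i] = np.sqrt(weights[i] * weights[j]) * value
    return M
```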
Figure 1: Functional data matrix, adapted from Berrendero et al. (2011).
### Basis decomposition
In many practical situations, functional data are noisy and only observed at specific time points. To extract the underlying functional features of the data, smoothing and interpolation techniques are commonly employed. These techniques involve approximating the true underlying function generating the data by a finite-dimensional set of basis functions. Assume that for each feature \(p=1,\ldots,P\), there exists a set of basis of functions \(\Psi^{(p)}=\{\psi_{k}^{(p)}\}_{1\leq k\leq K_{p}}\) such that each feature of each curve \(n=1,\ldots,N\) can be expanded using the basis:
\[X_{n}^{(p)}(t_{p})=\sum_{k=1}^{K_{p}}c_{nk}^{(p)}\psi_{k}^{(p)}(t_{p}),\quad t_ {p}\in\mathcal{T}_{p}, \tag{3}\]
where \(\{c_{nk}^{(p)}\}_{1\leq k\leq K_{p}}\) is a set of coefficients for feature \(p\) of observation \(n\). We denote by \(\overline{c}_{k}^{(p)}=\sum_{n=1}^{N}\pi_{n}c_{nk}^{(p)}\) the mean coefficient of feature \(p\) corresponding to the \(k\)th basis function. The \(p\)th feature of the mean function can be then expanded in the same basis as:
\[\mu^{(p)}(t_{p})=\sum_{k=1}^{K_{p}}\overline{c}_{k}^{(p)}\psi_{k}^{(p)}(t_{p}),\quad t_{p}\in\mathcal{T}_{p}.\]
Similarly, the covariance function of the \(p\)th and \(q\)th features is given by:
\[C_{p,q}(s_{p},t_{q})=\sum_{k=1}^{K_{p}}\sum_{l=1}^{K_{q}}\left(\sum_{n=1}^{N} \pi_{n}c_{nk}^{(p)}c_{nl}^{(q)}-\overline{c}_{k}^{(p)}\overline{c}_{l}^{(q)} \right)\psi_{k}^{(p)}(s_{p})\psi_{l}^{(q)}(t_{q}),\quad s_{p}\in\mathcal{T}_{p},\quad t_{q}\in\mathcal{T}_{q}.\]
These formulas can be written in matrix form as follows. For \(\mathbf{t}\in\mathcal{T},\) we have that \(X(\mathbf{t})=\mathbf{C}\Psi(\mathbf{t})\) where \(X(\mathbf{t})\) is a \(N\times P\) matrix with entries \(X_{n}^{(p)}(t_{p}),\ t_{p}\in\mathcal{T}_{p},\ 1\leq p\leq P,\ 1\leq n\leq N,\)
\[\mathbf{C}=\left(\mathbf{C}^{(1)}\quad\cdots\quad\mathbf{C}^{(P)}\right), \quad\text{and}\quad\Psi(\mathbf{t})=\text{diag}\{\Psi^{(1)}(t_{1}),\ldots, \Psi^{(P)}(t_{P})\},\]
where
\[\mathbf{C}^{(p)}=\begin{pmatrix}c_{11}^{(p)}&\cdots&c_{1K_{p}}^{(p)}\\ \vdots&\ddots&\vdots\\ c_{N1}^{(p)}&\cdots&c_{NK_{p}}^{(p)}\end{pmatrix}\quad\text{and}\quad\Psi^{(p)} (t_{p})=\begin{pmatrix}\psi_{1}^{(p)}(t_{p})\\ \vdots\\ \psi_{K_{p}}^{(p)}(t_{p})\end{pmatrix}.\]
Using the basis expansion and denoting \(\Pi^{\top}=(\pi_{1},\ldots,\pi_{N})\), the mean and covariance functions are given by
\[\mu(\mathbf{t})=\Psi(\mathbf{t})^{\top}\mathbf{C}^{\top}\Pi\quad\text{and} \quad C(\mathbf{s},\mathbf{t})=\Psi(\mathbf{s})^{\top}\mathbf{C}^{\top}\left( \text{diag}\{\pi_{1},\ldots,\pi_{N}\}-\Pi\Pi^{\top}\right)\mathbf{C}\Psi( \mathbf{t}).\]
Finally, we denote by \(\mathbf{W}\) the matrix of inner products of the functions in the basis \(\Psi\). The matrix \(\mathbf{W}\) is a block-diagonal matrix such that \(\mathbf{W}=\text{diag}\{\mathbf{W}^{(1)},\ldots,\mathbf{W}^{(P)}\}\) where each entry is given by
\[\mathbf{W}_{k,l}^{(p)}=\left\langle\psi_{k}^{(p)},\psi_{l}^{(p)}\right\rangle,\quad 1\leq k,l\leq K_{p},\quad 1\leq p\leq P.\]
We remark that, if the basis \(\Psi\) is an orthonormal basis, the matrix \(\mathbf{W}\) is equal to the identity matrix of size \(\sum_{p=1}^{P}K_{p}\). Using the expansion of the data into the basis of functions \(\Psi\), the inner-product matrix \(\mathbf{M}\) is written
\[\mathbf{M}=\text{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\}\left(\text{I}_{N}-\mathbf{1}_{N}\Pi^{\top}\right)\mathbf{C}\mathbf{W}\mathbf{C}^{\top}\left(\text{I}_{N}-\Pi\mathbf{1}_{N}^{\top}\right)\text{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\}\]
where \(\text{I}_{N}\) is the identity matrix of size \(N\) and \(\mathbf{1}_{N}\) is a vector of \(1\) of length \(N\).
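Assuming the coefficient matrix \(\mathbf{C}\) and the block-diagonal matrix \(\mathbf{W}\) have already been computed, the matrix expression above can be evaluated directly, as in the following sketch (illustrative only; the variable names are ours):

```python
import numpy as np

def gram_from_coefficients(coef, basis_inner, weights=None):
    """Gram matrix M from basis coefficients C (N x sum K_p) and W (sum K_p x sum K_p)."""
    n_obs = coef.shape[0]
    pi = np.full(n_obs, 1.0 / n_obs) if weights is None else np.asarray(weights)
    d_sqrt = np.diag(np.sqrt(pi))
    centering = np.eye(n_obs) - np.outer(np.ones(n_obs), pi)  # I_N - 1_N Pi^T
    a = d_sqrt @ centering @ coef                              # weighted, centered coefficients
    return a @ basis_inner @ a.T
```

For an orthonormal basis, `basis_inner` is simply the identity matrix, as remarked above.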
## 3 On the geometry of multivariate functional data
### Duality diagram
The distinction between the space of rows of a matrix as a sample from a population and the space of columns as the fixed variables on which the observations were measured has been explained in Holmes (2008) and De la Cruz and Holmes (2011) for multivariate data analysis. We propose to
define a duality diagram in the context of multivariate functional data. Consider the data matrix defined by the set \(\mathcal{X}\). We define an operator \(L_{X}:\mathcal{H}\rightarrow\mathbb{R}^{N}\) by
\[L_{X}:f\mapsto\begin{pmatrix}\sqrt{\pi_{1}}\langle\!\langle X_{1}-\mu,f\rangle \!\rangle\\ \qquad\vdots\\ \sqrt{\pi_{N}}\langle\!\langle X_{N}-\mu,f\rangle\!\rangle\end{pmatrix}.\]
By linearity of the inner product \(\langle\!\langle\cdot,\cdot\rangle\!\rangle\) in each of its arguments, the operator \(L_{X}\) is linear. Define the operator \(L_{X}^{\star}:\mathbb{R}^{N}\rightarrow\mathcal{H}\) as
\[L_{X}^{\star}:u\mapsto\begin{pmatrix}\sum_{n=1}^{N}\sqrt{\pi_{n}}u_{n}\{X_{n}^ {(1)}(t_{1})-\mu^{(1)}(t_{1})\}\\ \qquad\qquad\qquad\qquad\qquad\qquad\vdots\\ \sum_{n=1}^{N}\sqrt{\pi_{n}}u_{n}\{X_{n}^{(P)}(t_{P})-\mu^{(P)}(t_{P})\} \end{pmatrix}.\]
Then \(L_{X}^{\star}\) is the adjoint operator of the linear operator \(L_{X}\) (see Appendix A for a proof). As an adjoint operator, \(L_{X}^{\star}\) is a linear operator. The operator \(\Gamma\) and the matrix \(\mathbf{M}\) define geometries in \(\mathcal{H}\) and \(\mathbb{R}^{N}\), respectively, through
\[\langle\!\langle f,g\rangle\!\rangle_{\Gamma}=\langle\!\langle f,\Gamma g \rangle\!\rangle,\quad f,g\in\mathcal{H},\quad\text{and}\quad\left(u,v\right)_ {\mathbf{M}}=u^{\top}\mathbf{M}v,\quad u,v\in\mathbb{R}^{N}.\]
We denote by \(\left|\!\left|\!\left|\cdot\right|\!\right|\!\right|_{\Gamma}\) and \((\!(\cdot)\!)_{\mathbf{M}}\) their associated norms. Using the definition of the adjoint operators \(L_{X}\) and \(L_{X}^{\star}\), we have that
\[\left(L_{X}(f),u\right)_{\mathbf{M}}=\langle\!\langle f,L_{X}^{\star}(u) \rangle\!\rangle_{\Gamma},\quad\text{for all}\quad f\in\mathcal{H},u\in \mathbb{R}^{N}.\]
These relationships can be expressed as a duality diagram, see Figure 2. The triplet \((X,\Gamma,\mathbf{M})\) defines a (multivariate) functional data analysis framework. One consequence of this transition between spaces is that the eigencomponents of \(\mathcal{X}\) can be estimated equivalently using either the covariance operator \(\Gamma\) or the Gram matrix \(\mathbf{M}\). The relationship between the eigencomponents of \(\Gamma\) and the eigencomponents of \(\mathbf{M}\) are derived in Section 4.
**Remark 1**.: _In general, in order to avoid confusion, inner-products and norms in function space, \(\mathcal{H}\) or \(\mathcal{L}^{2}\left(\mathcal{T}\right)\), will be referred to with angle brackets, \(\langle\cdot,\cdot\rangle\), while inner-products and norms in coordinate space, \(\mathbb{R}^{N}\), will be referred to with round brackets, \((\cdot,\cdot)\)._
**Remark 2**.: _We present here the duality diagram for the linear integral operator \(\Gamma\) with kernel \(C\). It is however possible to define duality diagrams for more general linear integral operators defined with a continuous symmetric positive definite function as kernel (see Gonzalez and Munoz (2010) and Wong and Zhang (2019) for discussions on possible integral operators to represent univariate functional data)._
### Cloud of individuals
Given an element \(f\in\mathcal{H}\), let \(\{f^{(p)}(t_{p}),\,t_{p}\in\mathcal{T}_{p},\,p=1,\ldots,P\}\) be the features set of the element. We identify this set as the point \(\mathrm{M}_{f}\) in the space \(\mathcal{H}\). The space \(\mathcal{H}\) is referred to as the observation space. The cloud of points that represents the set of observations \(\mathcal{X}\) in \(\mathcal{H}\) is denoted by \(\mathcal{C}_{P}\). Let \(\mathrm{G}_{\mu}\) be the centre of gravity of the cloud \(\mathcal{C}_{P}\). In the space \(\mathcal{H}\), its coordinates are given by \(\{\mu^{(p)}(t_{p}),\,t_{p}\in\mathcal{T}_{p},\,p=1,\ldots,P\}\). If the features are centered, the origin \(\mathrm{O}_{\mathcal{H}}\) of the axes in \(\mathcal{H}\) coincides with \(\mathrm{G}_{\mu}\).
Let \(f\) and \(g\) be two elements in \(\mathcal{H}\) and denote by \(\mathrm{M}_{f}\) and \(\mathrm{M}_{g}\) their associated points in \(\mathcal{C}_{P}\) (see Figure 3). The most natural distance between these observations is based on the usual inner product in \(\mathcal{H}\), \(\langle\!\langle\cdot,\cdot\rangle\!\rangle\), and is defined as
\[d^{2}(\mathrm{M}_{f},\mathrm{M}_{g})=\left|\!\left|\!\left|f-g\right|\!\right| \!\right|^{2}=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left\{f^{(p)}(t_{p})-g^{(p )}(t_{p})\right\}^{2}\mathrm{d}t_{p}.\]
This distance measures how different the observations are, and thus gives one characterization of the shape of the cloud \(\mathcal{C}_{P}\). Another description of this shape is to consider the distance between each observation and \(\mathrm{G}_{\mu}\), the center of the cloud. Let \(f\) be an element of \(\mathcal{H}\), associated to the point \(\mathrm{M}_{f}\), and \(\mu\) the element of \(\mathcal{H}\) related to \(\mathrm{G}_{\mu}\), the distance between \(f\) and \(\mu\) is given by
\[d^{2}(\mathrm{M}_{f},\mathrm{G}_{\mu})=\left|\!\left|\!\left|f-\mu\right|\! \right|\!\right|^{2}=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left\{f^{(p)}(t_{p} )-\mu^{(p)}(t_{p})\right\}^{2}\mathrm{d}t_{p}.\]
Given the set \(\mathcal{X}\), the total inertia of \(\mathcal{C}_{P}\), with respect to \(\mathrm{G}_{\mu}\) and the distance \(d\), is given by
\[\sum_{n=1}^{N}\pi_{n}d^{2}(\mathrm{M}_{n},\mathrm{G}_{\mu})=\frac{1}{2}\sum_{ i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}d^{2}(\mathrm{M}_{i},\mathrm{M}_{j})=\sum_{p=1 }^{P}\int_{\mathcal{T}_{p}}\mathrm{Var}\,X^{(p)}(t_{p})\mathrm{d}t_{p}. \tag{4}\]
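As an illustration (our own toy check, not part of the paper), the three expressions in (4) can be compared numerically on simulated curves with \(\pi_{n}=1/N\):

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, grid = 50, np.linspace(0.0, 1.0, 101)
# Two toy functional features observed on the same grid.
feats = [rng.normal(size=(n_obs, grid.size)).cumsum(axis=1) / 10 for _ in range(2)]

lhs = mid = rhs = 0.0
for x in feats:
    mu = x.mean(axis=0)
    lhs += np.mean(np.trapz((x - mu) ** 2, grid, axis=1))       # sum_n pi_n d^2(M_n, G_mu)
    diff = x[:, None, :] - x[None, :, :]
    mid += 0.5 * np.mean(np.trapz(diff ** 2, grid, axis=-1))    # (1/2) sum_ij pi_i pi_j d^2(M_i, M_j)
    rhs += np.trapz(x.var(axis=0), grid)                        # integral of Var X^(p)
print(lhs, mid, rhs)  # the three quantities agree up to quadrature error
```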
The duality diagram, however, allows us to define another suitable distance to characterize the shape of the cloud \(\mathcal{C}_{P}\). We thus define
\[d^{2}_{\Gamma}(\mathrm{M}_{f},\mathrm{M}_{g})=\left|\!\left|\!\left|f-g\right|\! \right|\!\right|_{\Gamma}^{2}.\]
The utilization of the distance measure \(d_{\Gamma}\), which accounts for the variability among all the features within the functional data, corresponds to a Mahalanobis-type distance framework for multivariate functional data (see Berrendero et al. (2020) and Martino et al. (2019)). Given the set \(\mathcal{X}\), the total inertia of \(\mathcal{C}_{P}\), with respect to \(\mathrm{G}_{\mu}\) and the distance \(d_{\Gamma}\), is given by
\[\sum_{n=1}^{N}\pi_{n}d^{2}_{\Gamma}(\mathrm{M}_{n},\mathrm{G}_{\mu})=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}d^{2}_{\Gamma}(\mathrm{M}_{i},\mathrm{M}_{j})=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p}. \tag{5}\]
The derivation of these equalities are given in Appendix A.
Figure 3: Left: Cloud of observations. Right: Projection of the points onto the elements of \(\mathcal{H}\). The observation \(f\) (resp. \(g\)) is identified by the point \(\mathrm{M}_{f}\) (resp. \(\mathrm{M}_{g}\)) in the cloud \(\mathcal{C}_{P}\). The point \(\mathrm{G}_{\mu}\) is the center of gravity of \(\mathcal{C}_{P}\) and the point \(\mathrm{O}_{\mathcal{H}}\) is the origin of the space \(\mathcal{H}\).
**Remark 3**.: _These results have the same interpretation as for multivariate scalar data. This is also the multivariate analogue of the relation between the variance and the sum of squared differences known for univariate functional data. If the features are reduced beforehand, such that \(\int_{\mathcal{T}_{p}}\mathrm{Var}\,X^{(p)}(t_{p})\mathrm{d}t_{p}=1\) for the distance \(d\) or \(\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p}=1\) for the distance \(d_{\Gamma}\), the total inertia of the cloud \(\mathcal{C}_{P}\) is equal to the number of components \(P\). We are, in general, not interested in the total inertia itself but in how this variance is spread among the features._
### Cloud of features
Given an element \(f\in\mathcal{H}\), let \(L_{X}(f)=\{\sqrt{\pi_{n}}\langle\!\langle X_{n}-\mu,f\rangle\!\rangle,\,n=1,\ldots,N\}\) be the set of projections of \(f\) onto the centered observations. We identify this set as the point \(\mathsf{M}_{f}\) in the space \(\mathbb{R}^{N}\). The space \(\mathbb{R}^{N}\) is referred to as the features' space. The cloud of points that represents the set of observations in \(\mathbb{R}^{N}\) is denoted by \(\mathcal{C}_{N}\). Let \(\mathsf{G}_{\mathsf{u}}\) be the centre of gravity of the cloud \(\mathcal{C}_{N}\). In the space \(\mathbb{R}^{N}\), its coordinates are given by \(L_{X}(\mu)=\{\sqrt{\pi_{n}}\langle\!\langle X_{n}-\mu,\mu\rangle\!\rangle,\,n=1,\ldots,N\}\). If the data are centered, the origin \(\mathsf{O}_{\mathsf{R}}\) of the axes in \(\mathbb{R}^{N}\) coincides with \(\mathsf{G}_{\mathsf{u}}\).
We consider the usual inner-product in \(\mathbb{R}^{N}\), such that for all \(u,v\in\mathbb{R}^{N},(u,v)=u^{\top}v\), associated with the norm \((\!(\cdot)\!)\). Let \(f\) and \(g\) be two elements in \(\mathcal{H}\) and denote by \(\mathsf{M}_{f}\) and \(\mathsf{M}_{g}\) their associated points in \(\mathcal{C}_{N}\) (see Figure 4). The distance between \(\mathsf{M}_{f}\) and \(\mathsf{M}_{g}\) is thus defined as
\[\mathsf{d}^{2}(\mathsf{M}_{f},\mathsf{M}_{g})=(\!(L_{X}(f)-L_{X}(g))\!)^{2}= \sum_{n=1}^{N}\pi_{n}\langle\!\langle X_{n}-\mu,f-g\rangle\!\rangle^{2}.\]
Similarly to the cloud of individuals, this distance characterizes the shape of the cloud \(\mathcal{C}_{N}\) and we also have access to this characterization through the distance with the center of gravity \(\mathsf{G}_{\mathsf{u}}\). Let \(f\) be an element of \(\mathcal{H}\), associated to the point \(\mathsf{M}_{f}\), and \(\mu\) the element of \(\mathcal{H}\) related to \(\mathsf{G}_{\mathsf{u}}\), the distance between \(\mathsf{M}_{f}\) and \(\mu\) is given by
\[\mathsf{d}^{2}(\mathsf{M}_{f},\mathsf{G}_{\mathsf{u}})=(\!(L_{X}(f)-L_{X}( \mu)\!)\!)^{2}=\sum_{n=1}^{N}\pi_{n}\langle\!\langle X_{n}-\mu,f-\mu\rangle\! \rangle^{2}.\]
Given the set \(\mathcal{X}\), the total inertia of \(\mathcal{C}_{N}\), with respect to \(\mathsf{G}_{\mathsf{u}}\) and the distance \(\mathsf{d}\), is given by
\[\sum_{n=1}^{N}\pi_{n}\mathsf{d}^{2}(\mathsf{M}_{n},\mathsf{G}_{\mathsf{u}})=\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\mathsf{d}^{2}(\mathsf{M}_{i},\mathsf{M}_{j})=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p}. \tag{6}\]
Using the distances induced by the duality diagram, the total inertia of the cloud \(\mathcal{C}_{N}\) is thus equal to the total inertia of the cloud \(\mathcal{C}_{P}\). This property highlights the duality between the spaces \(\mathcal{H}\) and \(\mathbb{R}^{N}\). To further emphasize this duality, the cosine of the angle \(\theta_{fg}\) formed by the two points \(\mathsf{M}_{f}\) and \(\mathsf{M}_{g}\) is equal to their correlation coefficient and can be written
\[\cos(\theta_{fg})=\frac{(L_{X}(f),L_{X}(g))}{(\!(L_{X}(f))\!)\!)}=\frac{ \langle\!\langle f,g\rangle\!\rangle_{\Gamma}}{\left|\!\left|\!\left|f\right|\! \right|\!\right|_{\Gamma}\left|\!\left|g\right|\!\right|_{\Gamma}}.\]
The derivation of these equalities are given in Appendix A.
Figure 4: Left: Cloud of features. Right: Projection of the points on the elements of \(\mathbb{R}^{N}\). The observation \(f\) (resp. \(g\)) is identified by the point \(\mathsf{M}_{f}\) (resp. \(\mathsf{M}_{g}\)) in the cloud \(\mathcal{C}_{N}\). The point \(\mathsf{G}_{\mathsf{u}}\) is the center of gravity of \(\mathcal{C}_{N}\) and the point \(\mathsf{O}_{\mathsf{R}}\) is the origin of the space \(\mathbb{R}^{N}\).
**Remark 4**.: _Although each axis of the space does not directly represent the features, but rather the projection of an element of \(\mathcal{H}\) onto the elements of the set \(\mathcal{X}\), we refer to this space as the features' space. We use this terminology to highlight the similarity between multivariate functional data analysis and traditional multivariate data analysis, as well as to emphasize the dimensionality of this space._
### On centering and reducing
For conducting an MFPCA, the features are usually assumed centred (Happ and Greven, 2018). Prothero et al. (2023) give a complete overview of centering in the context of FDA. Here, we comment on the geometric interpretation of centering in this context and compare it with the multivariate scalar case. We focus on the usual centering in FDA, namely \(X_{n}^{(p)}(t_{p})-\mu^{(p)}(t_{p}),\ t_{p}\in\mathcal{T}_{p}\) (referred to as _object centering_ in Prothero et al. (2023)). The geometric interpretation of object centering is the same whether we refer to the observation space \(\mathcal{H}\) or the features' space \(\mathbb{R}^{N}\). Within the space \(\mathcal{H}\) (resp. \(\mathbb{R}^{N}\)), centering is interpreted as translating the centre of gravity of the cloud, \(\mathrm{G}_{\mu}\) (resp. \(\mathsf{G}_{\mathsf{u}}\)), to the origin point \(\mathrm{O}_{\mathcal{H}}\) (resp. \(\mathsf{O}_{\mathsf{R}}\)) of the space \(\mathcal{H}\) (resp. \(\mathbb{R}^{N}\)). This transformation, being a translation, does not change the shape of the cloud \(\mathcal{C}_{P}\) (resp. \(\mathcal{C}_{N}\)). The interpretation is the same as for the centering of multivariate scalar data within their observation space.
Concerning the standardization of the data, there are two main proposals in the literature. Happ and Greven (2018) propose to weight each component \(p\) by
\[w_{p}=\left(\int_{\mathcal{T}_{p}}\mathrm{Var}\,X^{(p)}(t_{p})\mathrm{d}t_{p} \right)^{-1}.\]
This standardization is coherent with the derivation of the total inertia of the observation space using the usual distance in \(\mathcal{H}\). Chiou et al. (2014) propose to standardize each component \(p\) of the data using the function
\[w_{p}(t_{p})=\left(\mathrm{Var}\,X^{(p)}(t_{p})\right)^{-1/2},\quad t_{p}\in \mathcal{T}_{p}.\]
This corresponds to a standardization of the curves by the standard deviation of the component at each sampling point. The standard deviation curve is estimated as the square root of the diagonal of the covariance function estimates, obtained using a local linear smoother of the pooled data. For each functional feature \(p\), this standardization mimics the standardization used for principal components analysis if the number of (scalar) features is infinite. Considering the duality diagram and the total inertia of the clouds with respect to the distance \(d_{\Gamma}\), we propose to weight each component \(p\) by
\[w_{p}=\left(\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p}\right)^{-1}.\]
## 4 Multivariate functional principal components analysis

In practice, one works with a truncated version of the Karhunen-Loeve expansion (8), as the eigenvalues \(\lambda_{k}\), and hence the contribution of \(\mathfrak{c}_{k}\) to (8), become negligible as \(k\) goes to infinity. Let
\[X_{[K]}(\mathbf{t})=\mu(\mathbf{t})+\sum_{k=1}^{K}\mathfrak{c}_{k}\phi_{k}( \mathbf{t}),\quad\mathbf{t}\in\mathcal{T},\quad K\geq 1, \tag{9}\]
be the truncated Karhunen-Loeve expansion of the process \(X\) and
\[X_{[K_{p}]}^{(p)}(t_{p})=\mu^{(p)}(t_{p})+\sum_{k=1}^{K_{p}}\mathfrak{c}_{k}^{( p)}\varphi_{k}^{(p)}(t_{p}),\quad t_{p}\in\mathcal{T}_{p},\quad K_{p}\geq 1,\quad 1 \leq p\leq P, \tag{10}\]
be the truncated Karhunen-Loeve expansion of the \(p\)th feature of the process \(X\). For each \(p\), the set \(\{\varphi_{k}^{(p)}\}_{1\leq k\leq K_{p}}\) is a basis of univariate functions in \(\mathcal{L}^{2}\left(\mathcal{T}_{p}\right)\), whose elements are not the components of the multivariate functions \(\phi_{k}\).
### Diagonalization of the covariance operator
The estimation of the eigencomponents of the covariance \(\Gamma\) by its diagonalization is derived in Happ and Greven (2018) for a general class of multivariate functional data defined on different dimensional domains. They give a direct relationship between the truncated representation (10) of the single elements \(X^{(p)}\) and the truncated representation (9) of the multivariate functional data \(X\).
We recall here how to estimate the eigencomponents. Following Happ and Greven (2018, Prop. 5), the multivariate components for \(X\) are estimated by a weighted combination of the univariate components computed from each \(X^{(p)}\). First, we perform a univariate FPCA on each of the features of \(X\) separately. For a feature \(X^{(p)}\), the eigenfunctions and eigenvectors are computed as a matrix decomposition of the estimated covariance \(C_{p,p}\) from Equation (1). This results in a set of eigenfunctions \(\{\varphi_{k}^{(p)}\}_{1\leq k\leq K_{p}}\) associated with a set of eigenvalues \(\{\lambda_{k}^{(p)}\}_{1\leq k\leq K_{p}}\) for a given truncation integer \(K_{p}\). Then, the univariate scores for a realization \(X_{n}^{(p)}\) of \(X^{(p)}\) are given by \(\mathbf{c}_{nk}^{(p)}=\left\langle X_{n}^{(p)},\varphi_{k}^{(p)}\right\rangle,\ 1\leq k\leq K_{p}\). These scores may be estimated by numerical integration, for example. Considering \(K_{+}:=\sum_{p=1}^{P}K_{p}\), we then define the matrix \(\mathcal{Z}\in\mathbb{R}^{N\times K_{+}}\), where on each row we concatenate the scores obtained for the \(P\) features of the \(n\)th observation: \((\mathbf{c}_{n1}^{(1)},\ldots,\mathbf{c}_{nK_{1}}^{(1)},\ldots,\mathbf{c}_{n1}^{(P)},\ldots,\mathbf{c}_{nK_{P}}^{(P)})\). An estimation of the covariance of the matrix \(\mathcal{Z}\) is given by \(\mathbf{Z}=(N-1)^{-1}\mathcal{Z}^{\top}\mathcal{Z}\). An eigenanalysis of the matrix \(\mathbf{Z}\) is carried out to estimate the eigenvectors \(\mathbf{v}_{k}\) and eigenvalues \(\lambda_{k}\). Finally, the multivariate eigenfunctions are estimated as a linear combination of the univariate eigenfunctions using
\[\phi_{k}^{(p)}(t_{p})=\sum_{l=1}^{K_{p}}[\mathbf{v}_{k}]_{l}^{(p)}\varphi_{l}^{(p)} (t_{p}),\quad t_{p}\in\mathcal{T}_{p},\quad 1\leq k\leq K_{+},\quad 1\leq p\leq P,\]
where \([\mathbf{v}_{k}]_{l}^{(p)}\) denotes the \(l\)th entry of the \(p\)th block of the vector \(\mathbf{v}_{k}\). The multivariate scores are estimated as
\[\mathfrak{c}_{nk}=\mathcal{Z}_{n,\cdot}\,\mathbf{v}_{k},\quad 1\leq n\leq N,\quad 1\leq k\leq K_{+},\]
where \(\mathcal{Z}_{n,\cdot}\) denotes the \(n\)th row of the matrix \(\mathcal{Z}\).
We refer the reader to Happ and Greven (2018) for the derivation of the eigencomponents of the covariance operator if the curves are expanded in a general basis of functions.
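For features observed on one-dimensional grids, the procedure described above can be sketched as follows (a simplified illustration under our own naming, using plain eigendecompositions of discretized covariances in place of the smoothing and FCP-TPA steps used for more general domains):

```python
import numpy as np

def mfpca_covariance(features, grids, n_univariate=5):
    """MFPCA through univariate FPCAs and the covariance of the stacked scores."""
    n_obs = features[0].shape[0]
    uni_scores, uni_eigfuns = [], []

    # Step 1: univariate FPCA of each feature from its sample covariance.
    for x, t in zip(features, grids):
        w = np.gradient(t)                          # quadrature weights
        xc = x - x.mean(axis=0)
        cov = xc.T @ xc / (n_obs - 1)
        vals, vecs = np.linalg.eigh(np.sqrt(w)[:, None] * cov * np.sqrt(w)[None, :])
        order = np.argsort(vals)[::-1][:n_univariate]
        phi = vecs[:, order] / np.sqrt(w)[:, None]  # eigenfunctions on the grid
        uni_eigfuns.append(phi)
        uni_scores.append(xc @ (phi * w[:, None]))  # scores of the centered curves

    # Step 2: eigendecomposition of the covariance of the stacked scores.
    z = np.hstack(uni_scores)                       # N x K_+
    vals, vecs = np.linalg.eigh(z.T @ z / (n_obs - 1))
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]

    # Step 3: multivariate eigenfunctions as weighted combinations, and scores.
    blocks = np.cumsum([0] + [s.shape[1] for s in uni_scores])
    multi_eigfuns = [phi @ vecs[blocks[p]:blocks[p + 1]] for p, phi in enumerate(uni_eigfuns)]
    multi_scores = z @ vecs
    return vals, multi_eigfuns, multi_scores
```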
### Diagonalization of the inner product matrix
We can use the duality relation between row and column spaces of a data matrix to estimate the eigencomponents of the covariance operator. Consider the inner-product matrix \(\mathbf{M}\), with entries defined in (2) and assuming that all observations are equally weighted, i.e., for all \(n=1,\ldots,N\), \(\pi_{n}=1/N\). Let \(\{l_{k}\}_{1\leq k\leq N}\) such that \(l_{1}\geq\ldots\geq l_{N}\geq 0\) be the set of eigenvalues and \(\{v_{k}\}_{1\leq k\leq N}\) be the set of eigenvectors of the matrix \(\mathbf{M}\). The relationship between all nonzero eigenvalues of the covariance operator \(\Gamma\) and the eigenvalues of \(\mathbf{M}\) is given by
\[\lambda_{k}=l_{k},\quad k=1,2,\ldots,N,\]
while the relationship between the multivariate eigenfunctions of the covariance operator \(\Gamma\) and the orthonormal eigenvectors of \(\mathbf{M}\) is given by
\[\phi_{k}(\mathbf{t})=\frac{1}{\sqrt{Nl_{k}}}\sum_{n=1}^{N}[v_{k}]_{n}\left\{X_{ n}(\mathbf{t})-\mu(\mathbf{t})\right\},\quad\mathbf{t}\in\mathcal{T},\quad k=1,2, \ldots,N,\]
where \([v_{k}]_{n}\) is the \(n\)th entry of the vector \(v_{k}\). The scores are then computed as the inner-product between the multivariate curves and the multivariate eigenfunctions and are given by
\[\mathfrak{c}_{nk}=\sqrt{Nl_{k}}[v_{k}]_{n},\quad n=1,2,\ldots,N,\quad k=1,2, \ldots,N.\]
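A corresponding sketch of the Gram-matrix route, for equally weighted curves on one-dimensional grids, is given below (again illustrative and with our own names; the Gram matrix is approximated by the trapezoidal rule):

```python
import numpy as np

def mfpca_gram(features, grids, n_components=5):
    """MFPCA through the eigendecomposition of the Gram matrix (equal weights)."""
    n_obs = features[0].shape[0]
    centered = [x - x.mean(axis=0) for x in features]

    # Gram matrix with entries <<X_n - mu, X_n' - mu>> / N.
    gram = sum(np.trapz(xc[:, None, :] * xc[None, :, :], x=t, axis=-1)
               for xc, t in zip(centered, grids)) / n_obs

    l, v = np.linalg.eigh(gram)
    order = np.argsort(l)[::-1][:n_components]
    l, v = l[order], v[:, order]

    # Multivariate eigenfunctions and scores from the duality relations above.
    eigenfunctions = [np.einsum('nk,nm->km', v, xc) / np.sqrt(n_obs * l)[:, None]
                      for xc in centered]
    scores = np.sqrt(n_obs * l) * v
    return l, eigenfunctions, scores
```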
These results can be extended in a natural way if all the curves are expanded in a general basis of functions, as defined in Equation (3). The derivation of these equalities, as well as the derivation of the eigencomponents using the expansion of the curves in a general basis of functions and the Gram matrix, are given in Appendix B in a slightly more general framework where the observation weights are not equal.
### Computational complexity
We describe the time complexity for the computation of the MFPCA algorithm using the covariance operator and the inner product matrix. Considering the observation of \(N\) curves with \(P\) features, we assume that all observations of the feature \(p\) are sampled on a common grid of \(M_{p}\) points. For \(a\in\mathbb{N}\), let \(M^{a}=\sum_{p}M^{a}_{p}\). Let \(K\) be the number of multivariate eigenfunctions to estimate. For the estimation of the eigencomponents using the covariance operator, we have \(K\leq K_{+}\). While \(K\) has the same interpretation for both the eigendecomposition of the covariance operator and the eigendecomposition of the inner product matrix, in the latter case, it is not computed as the summation over the univariate elements, but rather as the number of components needed to achieve a certain amount of variance explained. Here, we also assume that the curves are perfectly observed, and thus no smoothing step is included in the expression of the time complexity. Note that the smoothing step will often have the same impact on complexity between the approaches.
To estimate the time complexity of an algorithm, we count the number of elementary operations performed, considering a fixed execution time for each. Worst-case time complexity is considered. We first give the time complexity for the estimation of the eigencomponents using the covariance operator by explaining the time complexity of each individual step (see Happ and Greven (2018) and Section 4.1). For each feature \(p\), the time complexity of the estimation of the covariance matrix is \(\mathcal{O}(NM_{p}^{2})\), of the eigendecomposition of the matrix is \(\mathcal{O}(M_{p}^{3})\) and of the univariate score is \(\mathcal{O}(NM_{p}K_{p})\). Therefore, the total time complexity is the sum over the \(p\) univariate time complexities. The covariance matrix \(\mathbf{Z}\) of the stacked univariate scores \(\mathcal{Z}\) is then computed with a time complexity of \(\mathcal{O}(NK_{+}^{2})\), because the dimension of the matrix \(\mathcal{Z}\) is \(N\times K_{+}\). The eigendecomposition of the matrix \(\mathbf{Z}\) has a time complexity of \(\mathcal{O}(K_{+}^{3})\). The final step is to compute the multivariate eigenfunctions and scores. For the estimation of the \(K\leq K_{+}\) multivariate eigenfunctions, the time complexity is \(\mathcal{O}(K\sum_{p}M_{p}K_{p})\) and for the estimation of the scores, the time complexity is \(\mathcal{O}(NK^{2})\). Gathering all the results, the final complexity of the estimation of the eigencomponents using the eigendecomposition of the covariance operator is
\[\mathcal{O}\left(NM^{2}+M^{3}+N\sum_{p=1}^{P}M_{p}K_{p}+NK_{+}^{2}+K_{+}^{3}+ K\sum_{p=1}^{P}M_{p}K_{p}+NK^{2}\right).\]
We now consider the time complexity of the estimation of the eigencomponents using the eigendecomposition of the inner product matrix (see Section 4.2). The inner product between two curves can be estimated in \(\mathcal{O}(M^{1})\). Since there are \(N^{2}\) terms in the matrix, the time complexity for the computation of the inner product matrix is then \(\mathcal{O}(N^{2}M^{1})\). The eigendecomposition of this matrix has a time complexity of \(\mathcal{O}(N^{3})\). For the multivariate eigenfunctions, the time complexity is \(\mathcal{O}(KNP)\) and is \(\mathcal{O}(KN)\) for the multivariate scores. Gathering all the results, the final complexity of the estimation of eigencomponents using the eigendecomposition of the inner product matrix is
\[\mathcal{O}\left(N^{2}M^{1}+N^{3}+KNP+KN\right).\]
The number of components \(K\) to estimate is usually small compared to the number of curves \(N\) or to the total number of sampling points \(M^{1}\). Both time complexities can then be reduced to \(\mathcal{O}(NM^{2}+M^{3})\) for the diagonalization of the covariance operator and to \(\mathcal{O}(N^{2}M^{1}+N^{3})\) using the Gram matrix. If the number of observations is large compared to the total number of sampling points, it thus seems preferable to use the covariance operator to estimate the eigencomponents, while if the total number of sampling points is large compared to the number of observations, the use of the Gram matrix seems better. Note that the number of features \(P\) does not have much impact on the computational complexity. These results are confirmed in the simulation (see Section 5.2).
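As a rough rule of thumb, the two reduced complexities can be compared directly from \(N\) and the numbers of sampling points; the small helper below (ours, purely indicative) implements this comparison:

```python
def cheaper_mfpca_route(n_obs, points_per_feature):
    """Return which diagonalization is predicted cheaper from the leading terms."""
    m1 = sum(points_per_feature)                  # M^1
    m2 = sum(m ** 2 for m in points_per_feature)  # M^2
    m3 = sum(m ** 3 for m in points_per_feature)  # M^3
    cost_covariance = n_obs * m2 + m3             # O(N M^2 + M^3)
    cost_gram = n_obs ** 2 * m1 + n_obs ** 3      # O(N^2 M^1 + N^3)
    return "covariance operator" if cost_covariance <= cost_gram else "Gram matrix"

# Example: 100 images of 50 x 50 pixels correspond to one feature with 2500 points.
print(cheaper_mfpca_route(100, [2500]))  # -> "Gram matrix"
```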
**Remark 5**.: _We can use the singular value decomposition (SVD) in both cases to make the algorithm faster, as it allows computing only the first \(K\) eigenfunctions. In practice, this might be important as the maximum number of non-zero eigenvalues is the minimum between the number of observations and the number of sampling points._
## 5 Empirical analysis
Using simulated data, we compare the estimation of the eigencomponents using the diagonalization of the covariance operator and the Gram matrix. The diagonalization of the covariance operator is performed using the methodology of Happ and Greven (2018). As this methodology is based on the expansion of each univariate feature into univariate principal components, we used univariate FPCA, if the curves are unidimensional, and the Functional Canonical Polyadic-Tensor Power Algorithm (FCP-TPA) for regularized tensor decomposition (Allen, 2013), if the curves are two-dimensional. We choose the FCP-TPA as it is used by Happ and Greven (2018) in their algorithm and implemented in their software (Happ-Kurz, 2020). Note that we could also use a two-dimensional basis expansion such as penalized tensor splines or discrete cosine transform, but we do not investigate these expansions here, as we do not want to prespecify a basis of functions.
The results of the simulation are compared using computation times (CT), the integrated squared error (ISE) risk function for the multivariate eigenfunctions, the log-absolute error (\(\log-\mathrm{AE}\)) risk function for the eigenvalues and the mean integrated squared error (MISE) risk function for the reconstructed data. Let \(\phi_{k}\) be the true eigenfunction and \(\widehat{\phi}_{k}\) the estimated eigenfunction defined on \(\mathcal{T}.\) We then define the ISE as
\[\mathrm{ISE}(\phi_{k},\widehat{\phi}_{k})=\left|\!\left|\!\left|\phi_{k}-\widehat{\phi}_{k}\right|\!\right|\!\right|^{2}=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left\{\phi_{k}^{(p)}(t_{p})-\widehat{\phi}_{k}^{(p)}(t_{p})\right\}^{2}\mathrm{d}t_{p},\quad k=1,\ldots,K. \tag{11}\]
Let \(\lambda=\{\lambda_{1},\ldots,\lambda_{K}\}\) be the set of true eigenvalues and \(\widehat{\lambda}=\{\widehat{\lambda}_{1},\ldots,\widehat{\lambda}_{K}\}\) be the set of estimated eigenvalues. We then define the \(\log-\mathrm{AE}\) as
\[\log-\mathrm{AE}(\lambda_{k},\widehat{\lambda}_{k})=\log(|\lambda_{k}- \widehat{\lambda}_{k}|),\quad k=1,\ldots,K. \tag{12}\]
Let \(\mathcal{X}\) be the set of true data and \(\widehat{\mathcal{X}}\) be the set of reconstructed data. We define the MISE of the reconstructed data as
\[\mathrm{MISE}(\mathcal{X},\widehat{\mathcal{X}})=\frac{1}{N}\sum_{n=1}^{N} \left|\!\left|\!\left|X_{n}-\widehat{X}_{n}\right|\!\right|\!\right|^{2}=\frac {1}{N}\sum_{n=1}^{N}\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\Big{\{}X_{n}^{(p)}(t _{p})-\widehat{X}_{n}^{(p)}(t_{p})\Big{\}}^{2}\,\mathrm{d}t_{p}. \tag{13}\]
Each integral is approximated by the trapezoidal rule with an equidistant grid. We let \(\widehat{\phi},\)\(\widehat{\lambda}\) and \(\widehat{\mathcal{X}}\) be the estimators obtained using the Gram matrix and \(\widetilde{\phi},\)\(\widetilde{\lambda}\) and \(\widetilde{\mathcal{X}}\) the estimators obtained using the covariance operator. For each simulation, we compute the ratios
\[\frac{\mathrm{ISE}(\phi_{k},\widehat{\phi}_{k})}{\mathrm{ISE}(\phi_{k},\widetilde{\phi}_{k})},\quad\frac{\log-\mathrm{AE}(\lambda_{k},\widehat{\lambda}_{k})}{\log-\mathrm{AE}(\lambda_{k},\widetilde{\lambda}_{k})},\quad k=1,\ldots,K,\quad\text{and}\quad\frac{\mathrm{MISE}(\mathcal{X},\widehat{\mathcal{X}})}{\mathrm{MISE}(\mathcal{X},\widetilde{\mathcal{X}})},\]
and compare them to 1.
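For features sampled on one-dimensional grids, these three criteria can be computed as in the following sketch (our own function names; the trapezoidal approximation matches the one described above):

```python
import numpy as np

def ise(phi_true, phi_hat, grids):
    """Integrated squared error (11) between two multivariate eigenfunctions."""
    return sum(np.trapz((f - g) ** 2, x=t)
               for f, g, t in zip(phi_true, phi_hat, grids))

def log_ae(lam_true, lam_hat):
    """Log-absolute error (12) between true and estimated eigenvalues."""
    return np.log(np.abs(np.asarray(lam_true) - np.asarray(lam_hat)))

def mise(x_true, x_hat, grids):
    """Mean integrated squared error (13) between true and reconstructed curves."""
    return np.mean(sum(np.trapz((xt - xh) ** 2, x=t, axis=-1)
                       for xt, xh, t in zip(x_true, x_hat, grids)))
```

In practice, the sign indeterminacy of estimated eigenfunctions is typically resolved, e.g. by flipping \(\widehat{\phi}_{k}\) whenever \(\langle\!\langle\phi_{k},\widehat{\phi}_{k}\rangle\!\rangle<0\), before computing the ISE.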
### Simulation experiments
We consider two simulation scenarios. One consists of multivariate functional data with univariate features defined on one-dimensional domains and the other consists of univariate functional data defined on a two-dimensional domain.
**Scenario 1.** The simulation setting is based on the simulation in Happ and Greven (2018). The data-generating process is based on a truncated version of the Karhunen-Loeve decomposition. First, we generate a large orthonormal basis \(\{\psi_{k}\}_{1\leq k\leq K}\) of \(\mathcal{L}^{2}\left(\mathcal{T}\right)\) on an interval \(\mathcal{T}=[0,T]\subset\mathbb{R}.\) We fix \(T_{1}=0\) and \(T_{P+1}=T\) and we generate \(P-1\) cutting points \(T_{2},\ldots,T_{P}\) uniformly in \(\mathcal{T}\) such that \(0=T_{1}<\cdots<T_{P}<T_{P+1}=T\). Let \(s_{1},\ldots,s_{P}\in\{-1,1\}\) be coefficients that randomly flip the eigenfunctions with probability 0.5. The univariate components of the eigenfunctions are then defined as
\[\phi_{k}^{(p)}(t_{p})=s_{p}\psi_{k}\big{|}_{\left[T_{p},T_{p+1}\right]}\left( \frac{t_{p}-T_{p}}{T_{p+1}-T_{p}}\right),\quad k=1,\ldots,K,\quad p=1,\ldots,P.\]
The notation \(\psi_{k}\big{|}_{[T_{p},T_{p+1}]}\) is the restriction of the function \(\psi_{k}\) to the set \([T_{p},T_{p+1}]\). The set of multivariate functions \(\{\phi_{k}\}_{1\leq k\leq K}\) is an orthonormal system in \(\mathcal{H}\coloneqq\mathcal{L}^{2}\left(\mathcal{T}_{1}\right)\times\dots\times\mathcal{L}^{2}\left(\mathcal{T}_{P}\right)\) with \(\mathcal{T}_{p}=[0,1]\). Each curve is then simulated using the truncated multivariate Karhunen-Loeve expansion (9):
\[X(\mathbf{t})=\sum_{k=1}^{K}\mathfrak{c}_{k}\phi_{k}(\mathbf{t}),\quad\mathbf{ t}\in\mathcal{T},\]
where the scores \(\mathfrak{c}_{k}\) are sampled as random normal variables with mean \(0\) and variance \(\lambda_{k}\). The eigenvalues \(\lambda_{k}\) are defined with an exponential decrease, \(\lambda_{k}=\exp(-(k+1)/2)\). We simulate, for each replication of the simulation, \(N=25,50,75\) and \(100\) observations. Similarly, each component is sampled on a regular grid of \(M=25,50,75\) and \(100\) sampling points. We compare the methods for \(P=2,10,20\) and \(50\) features and we set \(K=10\).
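A compact version of this data-generating process could read as follows (our own sketch: it uses a cosine system for the initial orthonormal basis, works on \(\mathcal{T}=[0,1]\), and interprets the restriction of \(\psi_{k}\) to \([T_{p},T_{p+1}]\) as its evaluation on that subinterval rescaled from \([0,1]\)):

```python
import numpy as np

def simulate_scenario1(n_obs=50, n_points=100, n_features=3, n_basis=10, seed=0):
    """Split-and-rescale construction of multivariate eigenfunctions, then KL expansion."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 1.0, n_points)

    # Cutting points 0 = T_1 < ... < T_{P+1} = 1 and random sign flips.
    cuts = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(size=n_features - 1))))
    signs = rng.choice([-1.0, 1.0], size=n_features)

    # Orthonormal basis psi_k on [0, 1]: here a cosine system.
    def psi(k, s):
        return np.ones_like(s) if k == 0 else np.sqrt(2.0) * np.cos(np.pi * k * s)

    # Univariate components of the multivariate eigenfunctions.
    phi = np.stack([[signs[p] * psi(k, cuts[p] + t * (cuts[p + 1] - cuts[p]))
                     for p in range(n_features)] for k in range(n_basis)])

    # Scores with exponentially decreasing variances and the KL expansion.
    lam = np.exp(-(np.arange(1, n_basis + 1) + 1) / 2)
    scores = rng.normal(scale=np.sqrt(lam), size=(n_obs, n_basis))
    data = [np.tensordot(scores, phi[:, p, :], axes=1) for p in range(n_features)]
    return t, data, lam, phi
```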
**Scenario 2.** The data-generating process is again based on a truncated version of the Karhunen-Loeve decomposition. First, we generate an orthonormal basis \(\{\phi_{k}\}_{1\leq k\leq K}\) of \(\mathcal{L}^{2}\left(\mathcal{T}\right)\) on the domain \(\mathcal{T}=[0,1]\times[0,1]\) as the tensor product of the first Fourier basis functions:
\[\phi_{k}(s,t)=\psi_{l}(s)\otimes\psi_{m}(t),\quad s,t\in[0,1],\quad k=1,\dots,K,\]
where \(\psi_{l}\) and \(\psi_{m}\) are elements of the Fourier basis. Each curve is then simulated using the truncated multivariate Karhunen-Loeve expansion (9):
\[X(s,t)=\sum_{k=1}^{K}\mathfrak{c}_{k}\phi_{k}(s,t),\quad s,t\in[0,1],\]
where the scores \(\mathfrak{c}_{k}\) are defined as in Scenario 1. We simulate, for each replication of the simulations, \(N=25,50,75\) and \(100\) observations. Similarly, each component is sampled on a regular grid of \(M=25\times 25,50\times 50,75\times 75\) and \(100\times 100\) sampling points. We set \(K=10\).
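A similar sketch for the two-dimensional setting is given below (ours; the pairing of Fourier indices \((l,m)\) for the \(K\) eigenfunctions is one possible choice):

```python
import numpy as np

def simulate_scenario2(n_obs=50, n_points=50, n_basis=10, seed=0):
    """Images on [0,1]^2 from a tensor-product Fourier basis and a truncated KL expansion."""
    rng = np.random.default_rng(seed)
    s = t = np.linspace(0.0, 1.0, n_points)

    def fourier(k, u):
        # 1, sqrt(2) sin(2 pi u), sqrt(2) cos(2 pi u), sqrt(2) sin(4 pi u), ...
        if k == 0:
            return np.ones_like(u)
        j = (k + 1) // 2
        trig = np.sin if k % 2 else np.cos
        return np.sqrt(2.0) * trig(2.0 * np.pi * j * u)

    # phi_k(s, t) = psi_l(s) psi_m(t) for K distinct pairs (l, m).
    pairs = [(k // 3, k % 3) for k in range(n_basis)]
    phi = np.stack([np.outer(fourier(l, s), fourier(m, t)) for l, m in pairs])

    lam = np.exp(-(np.arange(1, n_basis + 1) + 1) / 2)
    scores = rng.normal(scale=np.sqrt(lam), size=(n_obs, n_basis))
    images = np.tensordot(scores, phi, axes=1)   # shape (n_obs, n_points, n_points)
    return s, images, lam, phi
```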
### Simulation results
We compared MFPCA using the diagonalization of the covariance operator and using the diagonalization of the Gram matrix in terms of their CT, estimation of eigenvalues, estimation of eigenfunctions, and reconstruction of curves. We fix the number of retained components to be \(5\) for each simulation of both scenarios. Each experiment is repeated \(500\) times. The results are presented below.
**Computational time.** To compare the computational time of the diagonalization of the covariance operator and the diagonalization of the Gram matrix, we measured the time it took for each method to complete the MFPCA for each simulated dataset. Figure 5 shows the kernel density estimates of the ratio of CT for each method across all sample sizes, number of sampling points and number of features. For Scenario 1, we found that the diagonalization of the covariance operator has a shorter CT compared to the diagonalization of the Gram matrix for most combinations of sample sizes, number of functions and number of sampling points. It is faster to use the Gram matrix if the number of observations is low compared to the number of sampling points. Figure 6 shows the kernel density estimates of the ratio of CT for each method across all sample sizes and number of sampling points. For Scenario 2, we found that the diagonalization of the Gram matrix has a shorter CT compared to the diagonalization of the covariance operator across all sample sizes and number of sampling points. The shorter computational time of the diagonalization of the Gram matrix for Scenario 2 makes it a more efficient option for analyzing two and higher-dimensional functional datasets. It is however worth noting that the computational time can still vary depending on the specific implementation of each method, the computational resources available, and the complexity of the dataset (number of observations, number of sampling points, etc.).
**Eigenvalues estimation.** To compare the estimation of the eigenvalues between the diagonalization of the covariance operator and the diagonalization of the Gram matrix, we calculated the ratio of the \(\log-\)AE (12) between the estimated eigenvalues and the true eigenvalues for each simulated dataset and for the first five eigenvalues. Figure 7 shows the boxplots of the \(\log-\)AE for each method across all sample sizes, number of sampling points and number of features for Scenario 1. We found that the two methods behave similarly for all considered settings. Figure 8 shows the boxplots of the \(\log-\)AE for each method across all sample sizes and number of sampling points for Scenario 2. We found that the FCP-TPA gives slightly better estimation of the eigenvalues.
Figure 5: Ratio of computation time for Scenario 1 between the Gram matrix method and the covariance operator method. Each univariate component is defined on a one-dimensional domain. \(N\) is the number of observations, \(M\) is the number of sampling points per curve and \(P\) is the number of features.
Figure 6: Ratio of computation time for Scenario 2 between the Gram matrix method and the covariance operator method. \(N\) is the number of observations and \(M\times M\) is the number of sampling points per image.

Figure 8: Ratio of \(\log-\)AE for Scenario 2 between the Gram matrix method and the covariance operator method. \(N\) is the number of observations and \(M\times M\) is the number of sampling points per image.
Figure 7: Ratio of \(\log-\)AE for Scenario 1 between the Gram matrix method and the covariance operator method. Each univariate component is defined on a one-dimensional domain. \(N\) is the number of observations, \(M\) is the number of sampling points per curve and \(P\) is the number of features.
**Eigenfunctions estimation.** To compare the estimation of the eigenfunctions between the diagonalization of the covariance operator and the diagonalization of the Gram matrix, we calculated the ratio of the ISE (11) between the estimated eigenfunctions and the true eigenfunctions for each simulated dataset and for the first five eigenfunctions. Figure 9 shows the boxplots of the ISE for each method across all sample sizes, number of sampling points and number of features for Scenario 1. We found that the two methods behave similarly for all considered settings. For \(P=10,20\) and \(50\), the results are identical. Figure 10 shows the boxplots of the ISE for each method across all sample sizes and number of sampling points for Scenario 2. We found that the decomposition of the Gram matrix gives better estimation of the eigenfunctions compared to the FCP-TPA, especially when the number of observations increases.
**Curves reconstruction.** To compare the quality of the reconstruction of the curves between the diagonalization of the covariance operator and the diagonalization of the Gram matrix, we calculated the ratio of the MISE (13) between the reconstruction of the curves and the true curves for each simulated dataset. Figure 11 shows the boxplots of the MISE for each method across all sample sizes, number of sampling points and number of features for Scenario 1. We found that the diagonalization of the Gram matrix gives slightly better results for all considered settings. Figure 12 shows the boxplots of the MISE for each method across all sample sizes and number of sampling points for Scenario 2. Similarly to Scenario 1, we found that the decomposition of the Gram matrix gives better estimation of the true curves compared to the FCP-TPA, especially when the number of observations increases.
## 6 Discussion and conclusion
MFPCA is a fundamental statistical tool for the analysis of multivariate functional data, which enables us to capture the variability in observations defined by multiple curves. In this paper, we have described the duality between rows and columns of a data matrix within the context of multivariate functional data. We have proposed to use this duality to estimate the eigencomponents of the covariance operator in multivariate functional datasets. By comparing the results of the two methods, we provide the researcher with guidelines for determining the most appropriate method within a range of functional data frameworks. Overall, our simulations showed that the diagonalization of the covariance operator and the diagonalization of the Gram matrix gave similar results in terms of the estimation of the eigenvalues, the eigenfunctions and the reconstruction of the curves for one-dimensional multivariate functional data, while the diagonalization of the Gram matrix outperformed the FCP-TPA for higher-dimensional functional datasets. Regarding the computation time, the use of the covariance operator is faster in most cases for multivariate functional data defined on unidimensional domains. The only situation where the use of the Gram matrix is quicker is when \(M\gg N\). For data defined on higher-dimensional domains, diagonalization of the Gram matrix is faster. In conclusion, we recommend using the covariance operator for multivariate functional data with all features defined on one-dimensional domains (curves) and for a number of observations larger than or comparable to the number of sampling points, regardless of the number of features. If the data are defined on multi-dimensional domains (images) or the number of sampling points is much higher than the number of observations, we advise using the Gram matrix.

Figure 9: Ratio of ISE for Scenario 1 between the Gram matrix method and the covariance operator method. Each univariate component is defined on a one-dimensional domain. \(N\) is the number of observations, \(M\) is the number of sampling points per curve and \(P\) is the number of features.
Utilizing the Gram matrix enables the estimation of the number of components retained via the percentage of variance explained by the multivariate functional data, whereas the decomposition of the covariance operator necessitates the specification of the percentage of variance accounted for by each individual univariate feature. Specifying the percentage of variance explained for each feature does not guarantee that we recover the nominal percentage of variance explained for the multivariate data. Although we have not investigated the extent to which this might be important, the duality relation derived in this work provides a direct solution to the problem. Future work could investigate the settings in which univariate variance-explained cutoffs fail to retain the correct percentage of variance explained in multivariate functional data, and hence where the Gram matrix approach may be preferred.
In practice, observations of (multivariate) functional data are often subject to noise. As we recommend the use of the Gram matrix solely for densely sampled functional datasets, individual curve smoothing should suffice to approximate the Gram matrix in such cases. The estimation of the Gram matrix in the context of sparsely sampled functional data is however deemed irrelevant, given our findings that the utilization of the covariance operator for the estimation of the eigencomponents yields comparable results, while typically requiring less computational time.
The open-source implementation can be accessed at [https://github.com/StevenGolovkine/FDApy](https://github.com/StevenGolovkine/FDApy), while scripts to reproduce the simulations are at [https://github.com/FAST-ULxNUIG/geom_mfpca](https://github.com/FAST-ULxNUIG/geom_mfpca).
Figure 10: Ratio of ISE for Scenario 2 between the Gram matrix method and the covariance operator method. \(N\) is the number of observations and \(M\times M\) is the number of sampling points per image.
Figure 11: Ratio of MISE for Scenario 1 between the Gram matrix method and the covariance operator method. Each univariate component is defined on a one-dimensional domain. \(N\) is the number of observations, \(M\) is the number of sampling points per curve and \(P\) is the number of features.
Figure 12: Ratio of MISE for Scenario 2 between the Gram matrix method and the covariance operator method. \(N\) is the number of observations and \(M\times M\) is the number of sampling points per image.
## Appendix A Derivation of the equalities
Using the definition of adjoint operators, we must prove that
\[(L_{X}(f),u)_{\mathbf{M}}=\langle\!\langle f,L_{X}^{\star}(u)\rangle\!\rangle_{ \Gamma},\quad\text{for all}\quad f\in\mathcal{H},\quad u\in\mathbb{R}^{N}. \tag{14}\]
For all \(f\in\mathcal{H},u\in\mathbb{R}^{N}\), we have that
\[\begin{aligned}(L_{X}(f),u)_{\mathbf{M}}&=L_{X}(f)^{\top}\mathbf{M}u\\ &=\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\sqrt{\pi_{j}}u_{j}\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\langle\!\langle X_{i}-\mu,f\rangle\!\rangle,\\ \langle\!\langle f,L_{X}^{\star}(u)\rangle\!\rangle_{\Gamma}&=\langle\!\langle\Gamma f,L_{X}^{\star}(u)\rangle\!\rangle\\ &=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}(\Gamma f)^{(p)}(t_{p})\left\{\sum_{n=1}^{N}\sqrt{\pi_{n}}u_{n}\left(X_{n}^{(p)}(t_{p})-\mu^{(p)}(t_{p})\right)\right\}\mathrm{d}t_{p}\\ &=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left\{\sum_{q=1}^{P}\int_{\mathcal{T}_{q}}C_{p,q}(t_{p},s_{q})f^{(q)}(s_{q})\mathrm{d}s_{q}\right\}\left\{\sum_{j=1}^{N}\sqrt{\pi_{j}}u_{j}\left(X_{j}^{(p)}(t_{p})-\mu^{(p)}(t_{p})\right)\right\}\mathrm{d}t_{p}\\ &=\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\sqrt{\pi_{j}}u_{j}\left\{\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left(X_{i}^{(p)}(t_{p})-\mu^{(p)}(t_{p})\right)\left(X_{j}^{(p)}(t_{p})-\mu^{(p)}(t_{p})\right)\mathrm{d}t_{p}\right\}\\ &\qquad\qquad\times\left\{\sum_{q=1}^{P}\int_{\mathcal{T}_{q}}\left(X_{i}^{(q)}(s_{q})-\mu^{(q)}(s_{q})\right)f^{(q)}(s_{q})\mathrm{d}s_{q}\right\}\\ &=\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\sqrt{\pi_{j}}u_{j}\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\langle\!\langle X_{i}-\mu,f\rangle\!\rangle.\end{aligned}\]
So, the equality (14) is proved and we conclude that \(L_{X}^{\star}\) is the adjoint operator of \(L_{X}\).
Next, we derive the total inertia of the cloud \(\mathcal{C}_{P}\) using the distance \(d\). Recall that
\[\mathrm{Var}\{X^{(p)}(t_{p})\}=\sum_{n=1}^{N}\pi_{n}\{X_{n}^{(p)}(t_{p})\}^{2} -\{\mu^{(p)}(t_{p})\}^{2}\quad\text{where}\quad\mu^{(p)}(t_{p})=\sum_{n=1}^{N }\pi_{n}X_{n}^{(p)}(t_{p}),\quad t_{p}\in\mathcal{T}_{p}.\]
\[\sum_{n=1}^{N}\pi_{n}d^{2}(\mathrm{M}_{n},\mathrm{G}_{\mu}) =\sum_{n=1}^{N}\pi_{n}\sum_{p=1}^{P}\left\|X_{n}^{(p)}-\mu^{(p)} \right\|^{2}\] \[=\sum_{p=1}^{P}\left(\sum_{n=1}^{N}\pi_{n}\left\|X_{n}^{(p)} \right\|^{2}-\left\|\mu^{(p)}\right\|^{2}\right)\] \[=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\mathrm{Var}\,X^{(p)}(t_{p}) \mathrm{d}t_{p},\] \[\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}d^{2}(\mathrm{M}_{i}, \mathrm{M}_{j}) =\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\sum_{p=1}^{P}\left\|X_ {i}^{(p)}-X_{j}^{(p)}\right\|^{2}\] \[=\sum_{p=1}^{P}\left(2\sum_{i=1}^{N}\pi_{i}\left\|X_{i}^{(p)} \right\|^{2}-2\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\left\langle X_{i}^{(p )},X_{j}^{(p)}\right\rangle\right)\] \[=2\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\mathrm{Var}\,X^{(p)}(t_{p} )\mathrm{d}t_{p}.\]
The equalities in Equation (4) are shown.
We now derive the total inertia of the cloud \(\mathcal{C}_{P}\) using the distance \(d_{\Gamma}\). We have that
\[\begin{aligned}\sum_{n=1}^{N}\pi_{n}d_{\Gamma}^{2}(\mathrm{M}_{n},\mathrm{G}_{\mu})&=\sum_{n=1}^{N}\pi_{n}\langle\!\langle X_{n}-\mu,X_{n}-\mu\rangle\!\rangle_{\Gamma}\\ &=\sum_{n=1}^{N}\pi_{n}\langle\!\langle X_{n}-\mu,\Gamma(X_{n}-\mu)\rangle\!\rangle\\ &=\sum_{p=1}^{P}\sum_{q=1}^{P}\int_{\mathcal{T}_{p}}\int_{\mathcal{T}_{q}}C_{p,q}(t_{p},s_{q})C_{p,q}(t_{p},s_{q})\mathrm{d}s_{q}\mathrm{d}t_{p}\\ &=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p},\\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}d_{\Gamma}^{2}(\mathrm{M}_{i},\mathrm{M}_{j})&=\sum_{i=1}^{N}\pi_{i}\left|\!\left|\!\left|X_{i}\right|\!\right|\!\right|_{\Gamma}^{2}-\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\langle\!\langle X_{i},X_{j}\rangle\!\rangle_{\Gamma}\\ &=\sum_{i=1}^{N}\pi_{i}\left|\!\left|\!\left|X_{i}\right|\!\right|\!\right|_{\Gamma}^{2}-\left|\!\left|\!\left|\mu\right|\!\right|\!\right|_{\Gamma}^{2}\\ &=\sum_{n=1}^{N}\pi_{n}\langle\!\langle X_{n}-\mu,\Gamma(X_{n}-\mu)\rangle\!\rangle=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p}.\end{aligned}\]
The equalities in Equation (5) are shown.
Finally, we derive the inertia of the cloud \(\mathcal{C}_{N}\) using the distance \(\mathsf{d}\). We have that
\[\begin{aligned}\sum_{n=1}^{N}\pi_{n}\mathsf{d}^{2}(\mathsf{M}_{n},\mathsf{G}_{\mathsf{u}})&=\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\\ &=\sum_{p=1}^{P}\sum_{q=1}^{P}\int_{\mathcal{T}_{p}}\int_{\mathcal{T}_{q}}C_{p,q}(t_{p},s_{q})C_{p,q}(t_{p},s_{q})\mathrm{d}s_{q}\mathrm{d}t_{p}\\ &=\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p},\\ \sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\mathsf{d}^{2}(\mathsf{M}_{i},\mathsf{M}_{j})&=\sum_{i=1}^{N}\sum_{j=1}^{N}\sum_{n=1}^{N}\pi_{i}\pi_{j}\pi_{n}\langle\!\langle X_{n}-\mu,X_{i}-X_{j}\rangle\!\rangle\langle\!\langle X_{n}-\mu,X_{i}-X_{j}\rangle\!\rangle\\ &=2\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\pi_{j}\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\\ &=2\sum_{p=1}^{P}\int_{\mathcal{T}_{p}}\left|\!\left|\!\left|C_{p}(t_{p},\cdot)\right|\!\right|\!\right|^{2}\mathrm{d}t_{p}.\end{aligned}\]
The equalities in Equation (6) are shown.
## Appendix B Derivation of the eigencomponents
Using the Hilbert-Schmidt theorem, there exists a complete orthonormal basis of eigenvectors \(\{v_{k}\}_{1\leq k\leq N}\) of the inner-product matrix \(\mathbf{M}\) such that
\[\mathbf{M}v_{k}=l_{k}v_{k}. \tag{15}\]
Let \(X=(X_{1}-\mu,\ldots,X_{N}-\mu)^{\top}\) and denote \(\widetilde{X}=\mathrm{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\}X\), the matrix of weighted observations. Recall that, in the case of \(P\)-dimensional process, the realisations of the process \(X_{n},\ n=1,\cdots,N\) and \(\mu\) are vectors of functions of length \(P\), and thus \(X\) (and \(\widetilde{X}\)) is a matrix of functions of size \(N\times P\). By left multiplying Equation (15) by \(\widetilde{X}^{\top}\), we obtain
\[\widetilde{X}^{\top}\mathbf{M}v_{k}=l_{k}\widetilde{X}^{\top}v_{k}. \tag{16}\]
Expanding Equation (16), for each component \(p=1,\ldots,P\), we have,
\[\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\sqrt{\pi_{j}}[v_{k}]_{j}\left\{X_{i}^{(p)}( \cdot)-\mu^{(p)}(\cdot)\right\}\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\! \rangle=l_{k}\!\!\sum_{n=1}^{N}\!\!\sqrt{\pi_{n}}[v_{k}]_{n}\left\{X_{n}^{(p)} (\cdot)-\mu^{(p)}(\cdot)\right\}. \tag{17}\]
Here and in the following, we denote by \([a]_{p}\) the \(p\)th entry of the vector \(a\). Starting from the left side of Equation (17), we get
\[\begin{aligned}[\widetilde{X}^{\top}\mathbf{M}v_{k}]_{p}&=\sum_{i=1}^{N}\sum_{j=1}^{N}\pi_{i}\sqrt{\pi_{j}}[v_{k}]_{j}\left\{X_{i}^{(p)}(\cdot)-\mu^{(p)}(\cdot)\right\}\langle\!\langle X_{i}-\mu,X_{j}-\mu\rangle\!\rangle\\ &=\sum_{q=1}^{P}\int_{\mathcal{T}_{q}}\sum_{i=1}^{N}\pi_{i}\left\{X_{i}^{(p)}(\cdot)-\mu^{(p)}(\cdot)\right\}\left\{X_{i}^{(q)}(s_{q})-\mu^{(q)}(s_{q})\right\}\sum_{j=1}^{N}\sqrt{\pi_{j}}[v_{k}]_{j}\left\{X_{j}^{(q)}(s_{q})-\mu^{(q)}(s_{q})\right\}\mathrm{d}s_{q}\\ &=\sum_{q=1}^{P}\int_{\mathcal{T}_{q}}C_{p,q}(\cdot,s_{q})\sum_{j=1}^{N}\sqrt{\pi_{j}}[v_{k}]_{j}\left\{X_{j}^{(q)}(s_{q})-\mu^{(q)}(s_{q})\right\}\mathrm{d}s_{q}\\ &=\left[\Gamma\left(\sum_{j=1}^{N}\sqrt{\pi_{j}}[v_{k}]_{j}\left\{X_{j}-\mu\right\}\right)\right]^{(p)}(\cdot).\end{aligned} \tag{18}\]

Combining Equations (17) and (18), which hold for every component \(p\), we obtain

\[\Gamma\left(\sum_{n=1}^{N}\sqrt{\pi_{n}}[v_{k}]_{n}\left\{X_{n}-\mu\right\}\right)=l_{k}\sum_{n=1}^{N}\sqrt{\pi_{n}}[v_{k}]_{n}\left\{X_{n}-\mu\right\},\]

so that every nonzero eigenvalue \(l_{k}\) of \(\mathbf{M}\) is also an eigenvalue of \(\Gamma\), i.e. \(\lambda_{k}=l_{k}\). Moreover, since \(v_{k}\) has unit norm, \(\left|\!\left|\!\left|\sum_{n}\sqrt{\pi_{n}}[v_{k}]_{n}\{X_{n}-\mu\}\right|\!\right|\!\right|^{2}=v_{k}^{\top}\mathbf{M}v_{k}=l_{k}\), and the associated normalized eigenfunction is

\[\phi_{k}=\frac{1}{\sqrt{l_{k}}}\sum_{n=1}^{N}\sqrt{\pi_{n}}[v_{k}]_{n}\left\{X_{n}-\mu\right\},\]

with scores \(\mathfrak{c}_{nk}=\langle\!\langle X_{n}-\mu,\phi_{k}\rangle\!\rangle=\sqrt{l_{k}/\pi_{n}}\,[v_{k}]_{n}\). With equal weights \(\pi_{n}=1/N\), these expressions reduce to the ones given in Section 4.2.
Concerning the expansion of the data into the basis of functions \(\Psi\), we write
\[\mathbf{M}=\left(\operatorname{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\} \left(\mathbf{I}_{\!N}-\mathbf{1}_{\!N}\Pi^{\top}\right)\mathbf{C}W^{1/2}\right) \left(\operatorname{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\}\left(\mathbf{I }_{\!N}-\mathbf{1}_{\!N}\Pi^{\top}\right)\mathbf{C}W^{1/2}\right)^{\top}.\]
We note
\[\mathbf{A}=\operatorname{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\}\left( \mathbf{I}_{\!N}-\mathbf{1}_{\!N}\Pi^{\top}\right)\mathbf{C}W^{1/2},\]
such that \(\mathbf{M}=\mathbf{A}\mathbf{A}^{\top}\). We also assume that the eigenfunctions \(\phi_{1},\phi_{2},\ldots\) of the covariance operator \(\Gamma\) have a decomposition into the basis \(\Psi\):
\[\phi_{k}(\cdot)=\begin{pmatrix}\phi_{k}^{(1)}(\cdot)\\ \vdots\\ \phi_{k}^{(P)}(\cdot)\end{pmatrix}=\begin{pmatrix}\Psi^{(1)}(\cdot)^{\top}b_{1k}\\ \vdots\\ \Psi^{(P)}(\cdot)^{\top}b_{Pk}\end{pmatrix},\quad\text{where}\quad b_{pk}=\left(b_{pk1},\ldots,b_{pkK_{p}}\right)^{\top}.\]
We have, for \(p=1,\ldots,P\),
\[\left(\Gamma\phi_{k}\right)^{(p)}(\cdot) =\sum_{q=1}^{P}\int_{\mathcal{T}_{q}}C_{p,q}(\cdot,s_{q})\phi_{k}^ {(q)}(s_{q})\mathrm{d}s_{q}\] \[=\sum_{q=1}^{P}\int_{\mathcal{T}_{q}}\Psi(\cdot)^{(p)\top}\mathbf{ C}^{(p)\top}\left(\operatorname{diag}\{\pi_{1},\ldots,\pi_{N}\}-\Pi\Pi^{\top} \right)\mathbf{C}^{(q)}\Psi^{(q)}(s_{q})\Psi^{(q)}(s_{q})^{\top}b_{qk}\mathrm{ d}s_{q}\] \[=\Psi(\cdot)^{(p)\top}\mathbf{C}^{(p)\top}\left(\operatorname{ diag}\{\pi_{1},\ldots,\pi_{N}\}-\Pi\Pi^{\top}\right)\sum_{q=1}^{P}\mathbf{C}^{(q)} \int_{\mathcal{T}_{q}}\Psi^{(q)}(s_{q})\Psi(s_{q})^{(q)\top}\mathrm{d}s_{q}b_{qk}\] \[=\Psi(\cdot)^{(p)\top}\mathbf{C}^{(p)\top}\left(\operatorname{ diag}\{\pi_{1},\ldots,\pi_{N}\}-\Pi\Pi^{\top}\right)\sum_{q=1}^{P}\mathbf{C}^{(q)} \mathbf{W}^{(q)}b_{qk}.\]
Since this equation holds for all \(p=1,\ldots,P\), it can be rewritten in matrix form as
\[\Gamma\phi_{k}(\cdot)=\Psi(\cdot)^{\top}\mathbf{C}^{\top}\left(\operatorname{ diag}\{\pi_{1},\ldots,\pi_{N}\}-\Pi\Pi^{\top}\right)\mathbf{C}\mathbf{W}b_{k}.\]
From the eigenequation, we have that
\[\Gamma\phi_{k}(\cdot)=\lambda_{k}\phi_{k}(\cdot)\Longleftrightarrow\Psi( \cdot)^{\top}\mathbf{C}^{\top}\left(\operatorname{diag}\{\pi_{1},\ldots,\pi_{ N}\}-\Pi\Pi^{\top}\right)\mathbf{C}\mathbf{W}b_{k}=\lambda_{k}\Psi(\cdot)^{\top}b_{k}.\]
Since this equation must hold for all \(t_{p}\in\mathcal{T}_{p}\), it implies the equation
\[\mathbf{C}^{\top}\left(\operatorname{diag}\{\pi_{1},\ldots,\pi_{N}\}-\Pi\Pi^{ \top}\right)\mathbf{C}\mathbf{W}b_{k}=\lambda_{k}b_{k}. \tag{21}\]
As the eigenfunctions are assumed to be normalized, \(\left|\!\left|\!\left|\phi_{k}\right|\!\right|\!\right|^{2}=1\), and so \(b_{k}^{\top}\mathbf{W}b_{k}=1\). Let \(u_{k}=\mathbf{W}^{1/2}b_{k}\). Then, from Equation (21), we obtain
\[\mathbf{W}^{1/2}\mathbf{C}^{\top}\left(\operatorname{diag}\{\pi_{1},\ldots, \pi_{N}\}-\Pi\Pi^{\top}\right)\mathbf{C}\mathbf{W}^{1/2}u_{k}=\lambda_{k}u_{k} \Longleftrightarrow\mathbf{A}^{\top}\mathbf{A}u_{k}=N\lambda_{k}u_{k}. \tag{22}\]
From the eigendecomposition of the matrix \(\mathbf{M}\), we get
\[\mathbf{M}v_{k}=l_{k}v_{k}\Longleftrightarrow\mathbf{A}\mathbf{A}^{\top}v_{ k}=l_{k}v_{k}. \tag{23}\]
Equations (22) and (23) are the eigenequations of classical PCA, exhibiting the usual duality between \(X^{\top}X\) and \(XX^{\top}\). Following Pages (2014) and Hardle and Simar (2019), we find that, for \(1\leq k\leq K\),
\[\lambda_{k}=l_{k},\quad v_{k}=\frac{1}{\sqrt{l_{k}}}\mathbf{A}u_{k}\quad\text{ and}\quad u_{k}=\frac{1}{\sqrt{l_{k}}}\mathbf{A}^{\top}v_{k}.\]
Finally, to recover the coefficients of the eigenfunctions, we have, for \(1\leq k\leq K\),
\[b_{k}=\mathbf{W}^{-1/2}u_{k}=\frac{1}{\sqrt{l_{k}}}\mathbf{C}^{\top}\left(\mathbf{I}_{\!N}-\Pi\mathbf{1}_{N}^{\top}\right)\operatorname{diag}\{\sqrt{\pi_{1}},\ldots,\sqrt{\pi_{N}}\}v_{k}.\]
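For concreteness, the computation above can be carried out numerically once the matrices \(\mathbf{A}\) and \(\mathbf{W}\) have been assembled. The sketch below is only illustrative: it assumes \(\mathbf{A}\) and \(\mathbf{W}\) are available as plain NumPy arrays, it uses a symmetric square root for \(\mathbf{W}^{-1/2}\), and the function name `dual_fpca` is ours, not part of any package.

```python
import numpy as np

def dual_fpca(A, W, n_components):
    """Duality computation sketched above: eigendecompose M = A A^T, map the
    dual vectors v_k to u_k = A^T v_k / sqrt(l_k), then to b_k = W^{-1/2} u_k."""
    M = A @ A.T                                    # N x N matrix of Equation (23)
    l, V = np.linalg.eigh(M)                       # eigenvalues in ascending order
    idx = np.argsort(l)[::-1][:n_components]       # keep the leading components
    l, V = l[idx], V[:, idx]
    U = (A.T @ V) / np.sqrt(l)                     # u_k = A^T v_k / sqrt(l_k)
    w, Q = np.linalg.eigh(W)                       # symmetric inverse square root of W
    W_inv_sqrt = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T
    B = W_inv_sqrt @ U                             # columns are the coefficients b_k
    return l, V, B
```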
## Acknowledgment
S. Golovkine, A. J. Simpkin and N. Bargary are partially supported by Science Foundation Ireland under Grant No. 19/FFP/7002 and co-funded under the European Regional Development Fund. E. Gunning is supported in part by Science Foundation Ireland (Grant No. 18/CRT/6049) and co-funded under the European Regional Development Fund.
2304.04952 | Data-Efficient Image Quality Assessment with Attention-Panel Decoder | Blind Image Quality Assessment (BIQA) is a fundamental task in computer
vision, which however remains unresolved due to the complex distortion
conditions and diversified image contents. To confront this challenge, we in
this paper propose a novel BIQA pipeline based on the Transformer architecture,
which achieves an efficient quality-aware feature representation with much
fewer data. More specifically, we consider the traditional fine-tuning in BIQA
as an interpretation of the pre-trained model. In this way, we further
introduce a Transformer decoder to refine the perceptual information of the CLS
token from different perspectives. This enables our model to establish the
quality-aware feature manifold efficiently while attaining a strong
generalization capability. Meanwhile, inspired by the subjective evaluation
behaviors of human, we introduce a novel attention panel mechanism, which
improves the model performance and reduces the prediction uncertainty
simultaneously. The proposed BIQA method maintains a lightweight design with
only one layer of the decoder, yet extensive experiments on eight standard BIQA
datasets (both synthetic and authentic) demonstrate its superior performance to
the state-of-the-art BIQA methods, i.e., achieving the SRCC values of 0.875
(vs. 0.859 in LIVEC) and 0.980 (vs. 0.969 in LIVE). | Guanyi Qin, Runze Hu, Yutao Liu, Xiawu Zheng, Haotian Liu, Xiu Li, Yan Zhang | 2023-04-11T03:52:17Z | http://arxiv.org/abs/2304.04952v1 | # Data-Efficient Image Quality Assessment with Attention-Panel Decoder
###### Abstract
Blind Image Quality Assessment (BIQA) is a fundamental task in computer vision, which however remains unresolved due to the complex distortion conditions and diversified image contents. To confront this challenge, we in this paper propose a novel BIQA pipeline based on the Transformer architecture, which achieves an efficient quality-aware feature representation with much fewer data. More specifically, we consider the traditional fine-tuning in BIQA as an interpretation of the pre-trained model. In this way, we further introduce a Transformer decoder to refine the perceptual information of the CLS token from different perspectives. This enables our model to establish the quality-aware feature manifold efficiently while attaining a strong generalization capability. Meanwhile, inspired by the subjective evaluation behaviors of human, we introduce a novel attention panel mechanism, which improves the model performance and reduces the prediction uncertainty simultaneously. The proposed BIQA method maintains a lightweight design with only one layer of the decoder, yet extensive experiments on eight standard BIQA datasets (both synthetic and authentic) demonstrate its superior performance to the state-of-the-art BIQA methods, i.e., achieving the SRCC values of 0.875 (vs. 0.859 in LIVEC) and 0.980 (vs. 0.969 in LIVE). Checkpoints, logs and code will be available at [https://github.com/narthchin/DEIQT](https://github.com/narthchin/DEIQT).
## Introduction
The goal of Image Quality Assessment (IQA) approaches is to automatically evaluate the quality of images in accordance with human subjective judgement. With the rapid growth of computer vision applications, efficient and reliable IQA models have become increasingly important. They are essential for monitoring and improving the visual quality of content, and can also be adopted as testing criteria or optimization goals for benchmarking image processing algorithms. Based on the availability of a pristine reference image, IQA can typically be divided into full-reference IQA (FR-IQA) [22], reduced-reference IQA (RR-IQA) [20], and no-reference or blind IQA (BIQA) [17]. The applications of FR and RR IQA methods tend to be limited, since reference images are generally unavailable in real-world situations. In contrast, BIQA methods do not require such reference images and are thus more promising, yet more challenging.
Current state-of-the-art (SOTA) BIQA methods employ
Figure 1: Image on top: the performance of the proposed DEIQT varying the amount of training data on the LIVE dataset. SOTA results are obtained from TReS [17] using 80% of the data. Our method can achieve the SOTA performance with only 30% of the data. Images in the middle: the sample images. Images at bottom: quality attention maps from DEIQT. Our model can accurately capture the quality degradation areas of an image. Meanwhile, it ignores perceptual information that is related to image recognition yet less important for quality assessment, e.g., the white cars in the second image.
either convolutional neural networks (CNN) or Vision Transformer (ViT) based architectures [14], which perform an end-to-end optimization of feature engineering and quality regression, simultaneously. The training strategy of BIQA methods generally follows a straightforward pre-training and fine-tuning pipeline. In the pre-training stage, models are trained on a large-scale classification dataset, i.e., ImageNet [15]. Then, models are fine-tuned on a small-scale BIQA dataset. Nevertheless, the requirements of feature representations for these two stages are not consistent. The pre-training stage concentrates on the global semantic features that are highly related to the image content, whereas the fine-tuning stage needs to consider both global semantics and local details of an image [13]. Consequently, the process of fine-tuning still necessitates a substantial amount of data so as to successfully adapt the model awareness from the image content understanding to the image quality. However, due to the labor-intensive characteristics of image annotation, BIQA has high expectations for fitness on low data volumes. Thus, an efficient data-learning strategy, which is capable of constructing an accurate quality-aware feature manifold using a small quantity of data, is desired and has become a beneficial endeavor for computer vision tasks and industrial applications.
To this end, we propose a novel BIQA method that can efficiently characterize the image quality using much fewer data than existing BIQA methods. The proposed BIQA method is based on the Transformer encoder-decoder architecture, herein namely data-efficient image quality transformer (DEIQT). Specifically, we consider that learned features at the pre-training stage are highly related yet more abstract for the BIQA task. In other words, the fine-tuning from the classification task to the BIQA task can be regarded as an interpretation of feature abstractness. Based on this, the classification (CLS) token in the Transformer encoder is an abstract characterization of quality-aware features [16]. Thus, it may not effectively develop an optimal feature representation for the image quality during the process of fine-tuning. To address this issue, we introduce the Transformer decoder to further decode the CLS token, thereby effectively adapting the token for the BIQA task.
In particular, we make use of the self-attention and cross-attention operations in the decoder to realize an optimal feature representation for the image quality. The self-attention decodes the aggregated features in the CLS token. It can diminish the significance of those features that are less relevant to the image quality. The resulting outputs of self-attention are handled as the query to the decoder, which is therefore more sensitive to quality-aware image features. The cross-attention performs the interactions between the query and the extracted features from the encoder. It refines the decoder embeddings such that making the extracted features highly related to the image quality. The Transformer decoder brings in an efficient learning property for DEIQT. This not only allows the DEIQT to accurately characterize the image quality using significantly fewer data (Fig. 1), but also improves the model training efficiency (Fig. 5). Notably, one layer decoder is adequate to deliver a satisfactory performance for DEIQT (Table 5), which ensures a lightweight design of our model.
Furthermore, due to the considerable variation in the image contents and distortion conditions, existing BIQA methods generally suffer from a high prediction uncertainty. This hinders the model stability, leading to an inaccurate prediction. To address this issue, we design a novel attention-panel mechanism in the decoder. This mechanism is inspired by the subjective evaluation system, wherein an image is scored by a number of participants and the mean of scores (MOS) is considered the label of this image. During the subjective quality evaluation, opinions of humans on an image differ from person to person. The attention-panel mechanism mimics such human behaviors by randomly initializing and further learning the opinion of each "human" on the image quality. Specifically, it makes use of the cross-attention of decoder to evaluate the image quality from different perspectives and concludes the quality evaluation based on the results from all of these perspectives. The attention-panel mechanism can improve the model stability while introducing almost zero parameters (Table 4).
In summary, contributions of this paper are the following:
* We make the first attempt to develop a BIQA solution based on the complete Transformer encoder-decoder architecture. We employ the CLS token as inputs to the decoder, to enable the proposed DEIQT to extract comprehensive quality-aware features from an image while attaining a high learning efficiency. To the best of our knowledge, we are the first to leverage the Transformer decoder for the IQA task.
* Inspired by the human subjective evaluation, we introduce a novel attention-panel mechanism to further improve the performance of DEIQT while reducing the prediction uncertainty. Notably, the attention-panel mechanism introduces almost no parameters to the model.
* We verify DEIQT on 8 benchmark IQA datasets involving a wide range of image contents, distortion types and dataset size. DEIQT outperforms other competitors across all these datasets.
## Related Work
**CNN-based BIQA**. Benefiting from the powerful feature expression ability, CNN-based BIQA methods have gained a great deal of popularity recently [13, 14, 15, 16]. One of the mainstreams of the CNN-based method [13] is to integrate the feature learning and regression modeling into a general CNN framework so that developing an accurate and efficient quality representation. Modern CNN-based models [14] also put great efforts into other perspectives of the BIQA challenges, i.e., the limited size of IQA dataset and complex distortion conditions.
In summary, CNN-based methods demonstrate great potential for BIQA tasks, but further efforts are required. Specifically, CNN-based methods usually adopt the image patches [14, 15] as inputs or extract learned features from different layers of CNNs to form a multi-perceptual-scale representation, i.e., the shallow layer
for local details and the deeper layer for high-level semantics [14, 15]. The effectiveness of these strategies has been proved, but they introduce a non-negligible computational burden in training and inference [16, 17]. Furthermore, due to the inherent locality bias of CNNs, CNN-based BIQA methods are often constrained by an ineffective characterization of non-local features, notwithstanding the fact that the BIQA task depends on both local and non-local image information.
**Transformers in BIQA**. Transformers [12] that were first designed for the natural language processing have raised considerable research interests in the computer vision area. The Vision Transformer (ViT) [13] is one of the most representative works. It performs the classification task using a pure Transformer encoder architecture, and, with modern training strategies, ViT achieves a competing performance against the CNN-based methods [14]. Transformer also demonstrates great potential in dealing with the BIQA task thanks to its strong capability in modelling the non-local dependencies among perceptual features of the image. Currently, there are mainly two ways for using Transformer in BIQA: hybrid Transformer [15, 16] and pure ViT-based Transformer [17]. The former utilizes CNNs to extract the perceptual features as inputs to the Transformer encoder, whereas the latter directly sends image patches as inputs to the Transformer encoder.
The Transformer-based BIQA methods have achieved great performance. However, the Transformer in BIQA can be further exploited. Current Transformer-based BIQA methods only involve the Transformer encoder, yet their ability to accurately characterize the image quality is still restricted. The main reason is that the features extracted from the encoder are rather abstract in terms of the image quality, making it difficult to model the relations between these features and the quality score. Thus, additional efforts are needed to derive an optimal feature representation for the image quality.
## Data-Efficient Image Quality Transformer
### Overall Architecture
To further improve the learning efficiency and capacity of BIQA, we make the first attempt to develop a Transformer encoder-decoder BIQA framework, namely data-efficient image quality transformer (DEIQT). The overall architecture of the proposed DEIQT is illustrated in Fig. 2. Given an input image, we first obtain the CLS token through the outputs of the Transformer encoder, which acts as the multi-perceptual-level image representation. With the self-attention operation, the CLS token can capture local and non-local dependencies from patch embeddings, thereby preserving comprehensive information for the image quality. The CLS token is then integrated with the attention-panel embeddings via the element-wise addition. A multi-head self-attention block is applied to transform them into queries of the decoder. Each attention-panel embedding absorbs the information from the CLS token, where the cross-attention mechanism in the decoder allows each to learn quality-aware features of an image from a unique perspective. Following this, the transformer decoder outputs the quality embeddings consisting of quality-aware features of an image. Finally, the quality embeddings are sent to a multi-layer perceptron (MLP) head to make several predictions for the image quality. We can obtain one prediction from each embedding of the quality embeddings. The average of these predictions is treated as the final quality score of the image.
### Perceptual Feature Aggregation in Transformer Encoder
The self-attention of Transformer encoder aggregates local and non-local information from a sequence of input patches with a minimum inductive bias, which allows it to comprehensively characterize perceptual features of an image. We herein take the advantage of the self-attention to obtain an efficient perceptual representation for the image. Given an input image \(I\in\mathbb{R}^{C\times H\times W}\), we first reshape it into \(N\) patches as in \(t_{n}\in\mathbb{R}^{p^{2}\times C}\), where \(H\) and \(W\) are the height
Figure 2: Model overview of the proposed DEIQT
and width of the image, respectively. \(C\) is the number of channels and \(p\) indicates the patch size. The total number of patches is calculated as in \(N=\frac{HW}{p^{2}}\). Each patch is then transformed into a \(D\)-dimension embedding through a linear projection layer. A learnable embedding of CLS token \(\mathbf{T}_{\text{CLS}}\in\mathbb{R}^{1\times D}\) is prepended to the \(N\) patch embeddings yielding to a total number of \(N+1\) embeddings. An additional position embedding is also introduced into these \(N+1\) embeddings for preserving the positional information.
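Before turning to the attention blocks, the patch-embedding step just described can be sketched as follows. This is only an illustration: the projection matrix, CLS token and position embeddings are random placeholders standing in for learned parameters, and `patchify` is not part of the DEIQT code.

```python
import numpy as np

def patchify(image, p, D, rng=np.random.default_rng(0)):
    """Split a C x H x W image into N = HW/p^2 flattened patches, project each
    to a D-dimensional embedding, prepend a CLS token and add position embeddings."""
    C, H, W = image.shape
    N = (H // p) * (W // p)
    patches = (image.reshape(C, H // p, p, W // p, p)
                    .transpose(1, 3, 2, 4, 0)
                    .reshape(N, p * p * C))            # N x (p^2 C) patch matrix
    W_proj = 0.02 * rng.normal(size=(p * p * C, D))    # linear projection layer
    tokens = patches @ W_proj                          # N patch embeddings
    cls = 0.02 * rng.normal(size=(1, D))               # learnable CLS token T_CLS
    T = np.concatenate([cls, tokens], axis=0)          # (N+1) x D token sequence
    return T + 0.02 * rng.normal(size=(N + 1, D))      # add position embeddings
```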
Let \(\mathbf{T}=\{\mathbf{T}_{\text{CLS}},\mathbf{T}_{1},\dots,\mathbf{T}_{N}\}\in\mathbb{R}^{N+1 \times D}\) be the embedding sequence. \(\mathbf{T}\) is then fed to the multi-head self-attention (MHSA) block to perform the self-attention operation. The MHSA block contains \(h\) heads each with the dimension \(d=\frac{D}{h}\). \(\mathbf{T}\) is transformed into three groups of matrices as in the query \(\mathbf{Q}\), key \(\mathbf{K}\) and value \(\mathbf{V}\) using three different linear projection layers, where \(\mathbf{Q}=\{\mathbf{Q}_{1},...,\mathbf{Q}_{h}\}\in\mathbb{R}^{(N+1)\times D}\), \(\mathbf{K}=\{\mathbf{K}_{1},...,\mathbf{K}_{h}\}\in\mathbb{R}^{(N+1)\times D}\), and \(\mathbf{V}=\{\mathbf{V}_{1},...,\mathbf{V}_{h}\}\in\mathbb{R}^{(N+1)\times D}\) for \(\mathbf{Q}_{h},\mathbf{K}_{h},\mathbf{V}_{h}\in\mathbb{R}^{(N+1)\times d}\). The output of Transformer encoder \(\mathbf{Z}_{O}\) is formulated as :
\[\text{MHSA}\left(\mathbf{Q},\mathbf{K},\mathbf{V}\right)= \text{Cat}(\textit{Attention}(\mathbf{Q}_{1},\mathbf{K}_{1},\mathbf{V}_{1}),\dots, \tag{1}\] \[\textit{Attention}(\mathbf{Q}_{h},\mathbf{K}_{h},\mathbf{V}_{h}))\mathbf{\mathcal{W}}_{L}\] \[\mathbf{Z}_{M}= \text{MHSA}\left(\mathbf{Q},\mathbf{K},\mathbf{V}\right)+\mathbf{T}\] \[\mathbf{Z}_{O}= \text{MLP}\left(\text{Norm}(\mathbf{Z}_{M})\right)+\mathbf{Z}_{M},\]
where \(\mathbf{\mathcal{W}}_{L}\) refers to the weights of the linear projection layer, \(\textit{Attention}(\mathbf{Q}_{h},\mathbf{K}_{h},\mathbf{V}_{h})=\textit{softmax}\left( \frac{\mathbf{Q}_{h}\mathbf{K}_{h}{}^{T}}{\sqrt{d}}\right)\mathbf{V}_{h}\) and \(\text{Norm}(\cdot)\) indicates the layer normalization. \(\mathbf{Z}_{O}\) is denoted as in \(\mathbf{Z}_{O}=\{\mathbf{Z}_{O}\left[0\right],...,\mathbf{Z}_{O}\left[N\right]\}\in\mathbb{ R}^{(N+1)\times d}\).
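A compact sketch of the attention core of Eq. (1) is given below. It is illustrative only: the residual connection, LayerNorm and MLP steps of Eq. (1) are omitted, and the projection weights `Wq`, `Wk`, `Wv`, `Wl` are assumed to be given \(D\times D\) arrays rather than trained parameters.

```python
import numpy as np
from scipy.special import softmax

def mhsa(T, Wq, Wk, Wv, Wl, h):
    """Multi-head self-attention over an (N+1) x D token sequence T:
    the Cat(Attention(Q_i, K_i, V_i), ...) W_L part of Equation (1)."""
    D = T.shape[1]
    d = D // h                                        # per-head dimension
    Q, K, V = T @ Wq, T @ Wk, T @ Wv
    heads = []
    for i in range(h):
        q, k, v = (m[:, i * d:(i + 1) * d] for m in (Q, K, V))
        heads.append(softmax(q @ k.T / np.sqrt(d), axis=-1) @ v)
    return np.concatenate(heads, axis=1) @ Wl         # concatenate heads, then project
```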
### Quality-Aware Decoder
For the ViT-based BIQA methods, the learned CLS token \(\mathbf{Z}_{O}\left[0\right]\) is typically considered to contain aggregated perceptual information for the image quality. It will be sent to an MLP head to perform the regression task of quality prediction. However, as explained earlier, \(\mathbf{Z}_{O}\left[0\right]\) mainly relates to the abstractness of quality-aware features. It is difficult to directly utilize \(\mathbf{Z}_{O}\left[0\right]\) to attain an optimal representation for the image quality. To this end, we introduce a quality-aware decoder to further interpret the CLS token, such that making the extracted features more significant to the image quality.
Let \(\hat{\mathbf{T}}_{\text{CLS}}\in\mathbb{R}^{1\times D}\) be the CLS token obtained from the output of the encoder. \(\hat{\mathbf{T}}_{\text{CLS}}\) is first sent to an MHSA block to model the dependencies between each element and the remaining elements of the CLS token. The output of the MHSA is followed by a residual connection to generate the queries of the Transformer decoder, written as
\[\mathbf{Q}_{d}=\text{MHSA}\left(\text{Norm}\left(\hat{\mathbf{T}}_{\text{CLS}}\right) \right)+\hat{\mathbf{T}}_{\text{CLS}}. \tag{2}\]
The role of the MHSA block is to decode the CLS token so that the produced query is more sensitive to the quality-aware features. Following this, we utilize \(\hat{\mathbf{Z}}_{O}=\{\mathbf{Z}_{O}\left[1\right],...,\mathbf{Z}_{O}\left[N\right]\}\in\mathbb{R}^{N\times d}\) as the Key and Value of the decoder, denoted by \(\mathbf{K}_{d}=\mathbf{V}_{d}=\hat{\mathbf{Z}}_{O}\), where \(\mathbf{Z}_{O}\setminus\hat{\mathbf{Z}}_{O}=\{\hat{\mathbf{T}}_{\text{CLS}}\}\). Then, \(\mathbf{Q}_{d},\mathbf{K}_{d}\) and \(\mathbf{V}_{d}\) are sent to a multi-head cross-attention (MHCA) block to perform the cross-attention. During this process, we utilize \(\mathbf{Q}_{d}\) to re-interact with the features of the image patches preserved in the encoder outputs, thus making the attended features more significant to the image quality. The output of the cross-attention is written as
\[\mathbf{S}=\text{MLP}\left(\text{MHCA}(\text{Norm}(\mathbf{Q}_{d}),\mathbf{K}_{d},\mathbf{V}_{ d})+\mathbf{Q}_{d}\right), \tag{3}\]
where \(\mathbf{S}\) indicates the refined quality-aware features from the encoder outputs which is more comprehensive and accurate in defining the image quality. Finally, \(\mathbf{S}\) is fed to an MLP head to derive the final quality score, wherein we minimize the smooth \(l_{1}\) loss to train our network. The quality-aware decoder can significantly improve the learning capacity of the transformer-based BIQA model, and thus enhancing the model performance in terms of prediction accuracy, generalization capability and stability.
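To make Eqs. (2)-(3) concrete, a single-head, LayerNorm-free sketch of the quality-aware decoder is shown below. It is not the authors' implementation: `attend` and `quality_decoder` are illustrative names, and `mlp` stands in for the trained feed-forward block.

```python
import numpy as np
from scipy.special import softmax

def attend(Q, K, V):
    """Scaled dot-product attention with a single head."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1]), axis=-1) @ V

def quality_decoder(queries, patch_tokens, mlp):
    """Sketch of Eq. (2) (self-attention on the CLS-derived queries) followed
    by Eq. (3) (cross-attention against the encoder patch tokens)."""
    Qd = attend(queries, queries, queries) + queries           # Eq. (2)
    return mlp(attend(Qd, patch_tokens, patch_tokens) + Qd)    # Eq. (3)
```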
In Fig. 3, we demonstrate the effectiveness of the quality-aware decoder by investigating the gradients of the CLS token for models with and without the decoder. As observed, without the decoder, the gradients of the CLS token vary significantly throughout the training. This will substantially decrease the training efficiency, and even cause the model to fail to converge. Correspondingly, the designed decoder is capable of reducing such a large variation, thereby ensuring a high training efficiency. It is also worth mentioning that the designed Decoder is combined with the standard ViT encoder in a non-intrusive manner, which not only makes our model compatible with any variants of the ViT encoder but also enables us to directly utilize the weights of other pretrained encoders to increase the training efficiency of our model. More importantly, we empirically show that the designed Decoder with a depth of 1 can effectively achieve a satisfactory performance (Table 5). This significantly restricts the model size, making it more suitable for practical applications.
Figure 3: Probability distributions of CLS token Gradients varying the training steps. ViT-BIQA and DEIQT are models without and with the proposed decoder, respectively. By introducing the decoder, variations in the gradients decrease considerably faster than those without the decoder, indicating that the decoder can greatly improve training efficiency.
### Attention-Panel Mechanism
Images captured in the real-world generally involve various contents and complex distortion conditions, resulting in the BIQA models exhibiting a high prediction uncertainty. To mitigate this, we propose an attention-panel mechanism in the Transformer decoder. This mechanism is inspired by the human subjective evaluation, wherein an image is scored by a number of participants and the mean of scores (MOS) is considered the label of the image. During this evaluation process, the personal subjective opinion on an image differs from person to person. The proposed attention-panel mechanism imitates such a situation, in which each panel member represents a participant of the subject evaluation and judges the image quality from a different perspective. This way, the model can achieve a comprehensive evaluation of the image quality, thus reducing the prediction uncertainty [17].
Let \(L\) be the number of panel members. Prior to sending the CLS token to the decoder, we create the attention-panel embeddings as in \(\mathbf{J}=\{\mathbf{J}_{1},\dots,\mathbf{J}_{L}\}\in\mathbb{R}^{L\times D}\). \(\mathbf{J}\) is initialized with random numbers. Then, we expand the CLS token \(L\) times to form the matrix \(\hat{\mathbf{T}}=\{\hat{\mathbf{T}}_{\text{CLS}},\dots,\hat{\mathbf{T}}_{\text{CLS}}\}\in \mathbb{R}^{L\times D}\). The element-wise summation of \(\mathbf{J}\) and \(\hat{\mathbf{T}}\) is employed as the inputs to the quality-aware decoder. Therefore, the calculation of queries in Eq. 2 is reformulated as
\[\hat{\mathbf{Q}}_{d}=\text{MHSA}\left(\text{Norm}\left(\hat{\mathbf{T}}_{\text{CLS}}+ \mathbf{J}\right)\right)+\left(\hat{\mathbf{T}}_{\text{CLS}}+\mathbf{J}\right). \tag{4}\]
The operation of cross-attention is performed in Eq. 3 by replacing \(\mathbf{Q}_{d}\) with \(\hat{\mathbf{Q}}_{d}\). We obtain the quality embeddings \(\hat{\mathbf{S}}=\{\hat{\mathbf{S}}_{1},\dots,\hat{\mathbf{S}}_{L}\}\in\mathbb{R}^{L\times D}\). Finally, \(\hat{\mathbf{S}}\) is sent to the MLP to derive a vector of scores as in \(\mathbf{O}=\{O_{1},\dots,O_{L}\}\), which contains \(L\) scores corresponding to the \(L\) members. The mean of these \(L\) scores, \(\frac{1}{L}\sum_{l=1}^{L}O_{l}\), is treated as the final quality score.
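The panel mechanism itself involves little more than expanding the CLS token, adding the panel embeddings, and averaging the per-member scores. The sketch below assumes `decoder` and `mlp_head` are callables standing in for the trained quality-aware decoder and MLP head; the panel embeddings are randomly initialized here, whereas in the model they are learned.

```python
import numpy as np

def attention_panel_score(cls_tok, patch_tokens, decoder, mlp_head, L,
                          rng=np.random.default_rng(0)):
    """Expand the CLS token L times, add panel embeddings J (Eq. (4)),
    decode each row into a quality embedding, and average the L scores."""
    D = cls_tok.shape[-1]
    J = 0.02 * rng.normal(size=(L, D))               # attention-panel embeddings
    queries = np.repeat(cls_tok, L, axis=0) + J      # \hat{T}_CLS + J
    S_hat = decoder(queries, patch_tokens)           # L x D quality embeddings
    scores = np.asarray(mlp_head(S_hat)).ravel()     # one score per panel member
    return float(scores.mean())                      # final quality prediction
```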
With the attention-panel, DEIQT is capable of characterizing the image quality from different perspectives, thus attaining a comprehensive evaluation. To verify that, we adopt the cosine similarity metric to measure the similarity between the characterized perceptual features from any two panel members. Given an image, we obtain the quality embeddings from three trained DEIQT models with 6, 12 and 18 panel members, respectively. The cosine similarity between every two quality embeddings is computed. The results are reported in Fig. 4. As observed, the similarity between these panel members is extremely low. Accordingly, the quality-aware features described by each panel member are rather different.
## Experiments
### Benchmark Datasets and Evaluation Protocols
We evaluate the performance of the proposed DEIQT on 8 typical BIQA datasets, including 4 synthetic datasets of LIVE [23], CSIQ [12], TID2013 [2], KADID [17], and 4 authentic datasets of LIVEC [1], KONIQ [16], LIVEFB [20], SPAQ [18]. Specifically, for the authentic datasets, LIVEC consists of 1162 images captured by different photographers with a wide variety of mobile devices. SPAQ contains 11000 images collected by 66 smartphones. KonIQ-10k is composed of 10073 images selected from public multimedia resources. LIVEFB is the largest-scale authentic dataset (by far) that includes 39810 images. The synthetic datasets contain a small number of pristine images which are synthetically distorted by various distortion types, such as JPEG compression and Gaussian blurring. LIVE and CSIQ contain 799 and 866 synthetically distorted images with 5 and 6 distortion types, respectively. TID2013 and KADID consist of 3000 and 10125 synthetically distorted images involving 24 and 25 distortion types, respectively.
In our experiments, two commonly used criteria, Spearman's rank order correlation coefficient (SRCC) and Pearson's linear correlation coefficient (PLCC), are adopted to quantify the performance of DEIQT in terms of prediction monotonicity and prediction accuracy, respectively. Both SRCC and PLCC range from 0 to 1. A superior performance should result in the absolute values of SRCC and PLCC close to 1.
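For reference, both criteria can be computed directly with SciPy; the snippet below is a plain computation on two score vectors and does not reproduce any dataset-specific evaluation protocol of the paper.

```python
from scipy.stats import pearsonr, spearmanr

def iqa_metrics(predicted, mos):
    """SRCC (prediction monotonicity) and PLCC (prediction accuracy)
    between predicted quality scores and ground-truth MOS values."""
    srcc, _ = spearmanr(predicted, mos)
    plcc, _ = pearsonr(predicted, mos)
    return srcc, plcc
```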
### Implementation Details
For DEIQT, we followed the typical training strategy to randomly crop an input image into 10 image patches with a resolution of \(224\times 224\). Each image patch was then reshaped into a sequence of patches with the patch size \(p=16\), and the dimensions of input tokens \(D=384\). We created the Transformer encoder based on the ViT-S proposed in DeiT III [21]. The depth of the encoder was set to 12, and the number of heads \(h=6\). For the Decoder, the depth was set to 1 and the number of panel members \(L=6\).
The Encoder of DEIQT was pre-trained on the ImageNet-1K for 400 epochs using the Layer-wise Adaptive Moments optimizer [22] for Batch training with the batch size 2048. DEIQT was trained for 9 Epochs. The learning rate was set to \(2\times 10^{-4}\) with a decay factor of 10 ev
Figure 4: Cosine similarity between characterized perceptual features of panel members. The number of panel members in DEIQT is set to 6, 12, and 18, respectively. The extremely low similarity between two members suggests that each member judges the image quality from a very unique perspective.
ery 3 epochs. The batch size was determined depending on the size of the dataset, i.e., 16 and 128 for the LIVEC and KonIQ, respectively. For each dataset, 80% images were used for training and the remaining 20% images were utilized for testing. We repeated this process 10 times to mitigate the performance bias and the medians of SRCC and PLCC were reported.
### Overall Prediction Performance Comparison
Table 1 reports the comparison results between the proposed DEIQT and 13 state-of-the-art BIQA methods, which include both hand-crafted BIQA methods, such as DIIVINE [2], ILNIQE [13] and BRISQUE [14], and deep-learning-based methods, e.g., MUSIQ [15] and MetaIQA [16]. As observed across these eight datasets, DEIQT outperforms all other methods. Since images on these 8 datasets span a wide variety of image contents and distortion types, it is very challenging to consistently achieve the leading performance on all of them. Correspondingly, these observations confirm the effectiveness and superiority of DEIQT in characterizing the image quality.
### Generalization Capability Validation
We further evaluate the generalization capability of DEIQT through the cross-datasets validation methodology, where a BIQA model is trained on one dataset, and then tested on the other datasets without any fine-tuning or parameter adaption. The experimental results in terms of the medians of SRCC on five datasets are reported in Table 2. As observed, DEIQT achieves the best performance on four of five datasets and reaches the competing performance on the KonIQ dataset. These results manifest the strong generalization capability of DEIQT.
### Data-Efficient Learning Validation
One of the key properties of DEIQT is data-efficient learning, which allows our model to achieve a competing performance to state-of-the-art BIQA methods while requiring substantially less training data. Given the costly image annotation and model training, such a property is highly desired for BIQA methods. To further investigate this property, we conduct controlled experiments to train our model by varying the amount of training data from 20% to 60% with an interval of 20%. We repeat the experiment 10 times for each amount of training data and report the medians of SRCC. The testing data remains 20% images irrespective of the amount of the training data and is completely nonoverlapped with the training data throughout all experiments.
achieves the best performance on the synthetic datasets. For authentic datasets, DEIQT is capable of achieving the competing performance with 60% images, which is still much more efficient than other BIQA methods.
In addition to the required training data, we further evaluate the training efficiency of DEIQT which is also an important indicator for data-efficient learning. Fig. 5 illustrates the medians of SRCC as the number of epochs increases on the testing set of LIVEC and KonIQ. ViT-BIQA directly utilizes the extracted features of the CLS token to predict the image quality. As shown in Fig. 5, DEIQT converges significantly faster than other methods, where it reaches a satisfactory performance in only two epochs. As a comparison, ViT-BIQA exhibits a slow convergence rate, especially on the small-scale dataset LIVEC. These observations vividly demonstrate that DEIQT can efficiently implement the domain adaptation from the pre-training of classification tasks to the fine-tuning of IQA tasks, thereby greatly improving the training efficiency.
### Ablation Study
DEIQT is composed of three essential components, including the ViT encoder, quality-aware decoder, and the attention-panel mechanism. We conduct the ablation experiments to examine the individual contribution of each component. Table 4 shows the experimental results on the LIVEC and KonIQ datasets. The ViT in Table 4 refers to the DEIQT without the quality-aware decoder and the attention-panel. It is equivalent to the ViT-BIQA in Fig. 5. AP/6 indicates the attention-panel (AP) with 6 panel members. ViT + AP/6 skips the decoder and sends the inputs of the DEIQT decoder to an MLP head to make the prediction. Decoder(R\({}^{*}\)) and Decoder(CLS) mean that random numbers or CLS token are utilized as inputs of the decoder, respectively. The proposed DEIQT consists of ViT, Decoder(CLS) and AP/6.
From Table 4, we observe that both the quality-aware decoder and the attention-panel mechanism are highly effective in characterizing the image quality, and thus contributing to the overall performance of DEIQT. In particular, the proposed quality-aware decoder significantly improves the model performance in terms of accuracy and stability, whereas the attention-panel contributes less than the decoder. This is reasonable considering that the number of parameters introduced by the decoder is substantially higher than those introduced by the attention-panel. The operations involved in the decoder are also much more sophisticated. Nevertheless, the attention-panel allows our model to attain improved performance with negligible additional expense.
Finally, we carry out an experiment to investigate the effects of the decoder depth on DEIQT. The experimental results are listed in Table 5. As can be seen, DEIQT is insensitive to the depth of the decoder. When the number of decoder layers increases, the performance of DEIQT remains almost unchanged on these two datasets. Thus, we set the number of decoder layers to 1 to maintain a lightweight design for our model.
## Conclusion
In this paper, we present a data-efficient image quality transformer (DEIQT), which can accurately characterize the image quality using much less data. In particular, we regard the CLS token as the abstractness of quality-aware features and adapt it to the queries of the dedicatedly designed decoder. Then, we leverage the cross-attention mechanism to decouple the quality-aware features from the encoder outputs. Furthermore, inspired by the human behaviors of the subjective evaluation, we offer a novel attention-panel mechanism to mitigate the prediction uncertainty while introducing almost no additional parameters. Experiments on eight standard datasets demonstrate the superiority of DEIQT in terms of prediction accuracy, training efficiency, and generalization capability.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline & & \multicolumn{2}{c}{LIVEC} & \multicolumn{2}{c}{KonIQ} \\ \cline{3-5} Module & \#Params & PLCC & SRCC & PLCC & SRCC \\ \hline ViT & \multirow{2}{*}{22M} & 0.770 & 0.714 & 0.919 & 0.908 \\ std & & \(\pm\)0.045 & \(\pm\)0.039 & \(\pm\)0.011 & \(\pm\)0.011 \\ \hline ViT + AP/6 & \multirow{2}{*}{22M} & 0.782 & 0.720 & 0.924 & 0.913 \\ std & & \(\pm\)0.033 & \(\pm\)0.030 & \(\pm\)0.010 & \(\pm\)0.008 \\ \hline ViT + Decoder(R\({}^{*}\)) & \multirow{2}{*}{24M} & 0.871 & 0.842 & 0.927 & 0.916 \\ std & & \(\pm\)0.018 & \(\pm\)0.024 & \(\pm\)0.007 & \(\pm\)0.006 \\ \hline ViT + Decoder(CLS) & \multirow{2}{*}{24M} & 0.881 & 0.863 & 0.931 & 0.918 \\ std & & \(\pm\)0.018 & \(\pm\)0.019 & \(\pm\)0.005 & \(\pm\)0.007 \\ \hline DEIQT & \multirow{2}{*}{24M} & **0.894** & **0.875** & **0.934** & **0.921** \\ std & & \(\pm\)0.014 & \(\pm\)0.017 & \(\pm\)0.003 & \(\pm\)0.004 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablation experiments on LIVEC and KonIQ datasets. Bold entries indicate the best performance.
Figure 5: Median SRCC versus Epochs on the LIVEC and KonIQ testing datasets.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & \multicolumn{2}{c}{LIVEC} & \multicolumn{2}{c}{KonIQ} \\ \cline{3-5} Layer Nums & \#Params & PLCC & SRCC & PLCC & SRCC \\ \hline
1 & 24M & 0.894 & 0.875 & 0.934 & 0.921 \\
2 & 26M & 0.890 & 0.871 & 0.933 & 0.919 \\
4 & 31M & **0.895** & **0.877** & **0.936** & **0.922** \\
8 & 40M & **0.895** & 0.873 & 0.933 & 0.918 \\ \hline \hline \end{tabular}
\end{table}
Table 5: The effects of the layer numbers of the decoder on the DEIQT. Bold entries indicate the best results.
## Acknowledgments
This research was partly supported by the National Key R&D Program of China (Grant No. 2020AAA0108303), and in part by National Natural Science Foundation of China under Grant 62201538, Natural Science Foundation of Shandong Province under grant ZR2022QF006, and in part by China National Postdoctoral Program for Innovative Talents (BX20220392), China Postdoctoral Science Foundation (2022M711729)
|
2303.11446 | The Torus of Triangles | We prove the 2-torus $\mathbb T$, an abelian linear algebraic group, occurs
as a compactification of the moduli space of labeled, oriented, similarity
classes of triangles. A (possibly degenerate) triangle is {\it inscribable} if
it can be inscribed in a circle. We show $\mathbb T$ is a fine moduli space of
labeled, oriented, possibly-degenerate inscribable classes, and that the
(moduli) stack of (unlabeled, unoriented) possibly-degenerate inscribable
classes is the quotient $[\mathbb T/D_6]$. To illustrate how aptly $\mathbb T$
describes triangle classes we show the main triangle types form distinguished
algebraic structures: subgroups, cosets, and elements of small order, and use
the natural metric on $\mathbb T$ to compare them. | Eric Brussel, Madeleine E. Goertz | 2023-03-20T20:44:28Z | http://arxiv.org/abs/2303.11446v3 | # The torus of triangles
###### Abstract.
We show the moduli space of similarity classes of labeled, oriented triangles is a topological torus, and plot on this space the families of triangles corresponding to various trivial and nontrivial loops. The latter include families of right triangles, isosceles triangles, and so-called "degenerate" triangles.
This research was generously supported by the William and Linda Frost Fund in the Cal Poly College of Science and Mathematics.
## Introduction
The study of triangles in the plane, dating back to Euclid, is one of the oldest and thoroughly investigated subjects in all of mathematics. Straightedge-compass constructions, Apollonian problems, and the Euclidean geometry of the plane have been studied exhaustively. There is a list of thousands of geometrically defined triangle "centers", maintained in an online encyclopedia (see [12]). It is a fascinating, elemental subject.
A _moduli space_ is a space of points that represent members of a set of mathematical objects. It often comes with a geometry that is useful for studying the families of these objects as a whole, making statistical calculations, or just visualizing one family in the context of others. It is difficult to overstate the importance of this concept, which is ubiquitous in modern mathematics.
The problem of constructing a moduli space of triangles is frequently given as a first example, to demonstrate the properties and challenges of more complex moduli problems. The idea for this particular space goes back at least to Lewis Carroll, who in "Pillow Problem 58" asked for the probability that a randomly chosen triangle is obtuse (see [1]). This question has intriguingly different answers, depending on the probability distribution implied by the space - the meaning of "random".
We will not weigh in on the question of what is the "right" moduli space of triangles, which has already been well-discussed (see [10], [11], or [12]), but rather on the question of what happens near the boundary of a particular commonly drawn space of labeled triangles, called the _triangle of triangles_, see e.g. [11, Figure 2]. What if a continuous family of triangles - one parameterized by a continuum of real numbers - should _breach the border_ of this space, a border that does not consist of triangles at all, but rather of degenerate, "flattened" triangles? As the renowned number theorist Barry Mazur put it, it is often at the edge of the map where things become really interesting ([14, p.1]).
We find this question can be answered in a straightforward way if we replace the category of similarity classes of labeled triangles with the more structured category of labeled, oriented similarity classes, into which the triangle of triangles embeds as those with positive orientation. This set forms a topological torus, called, of course, the _torus of triangles_, see Figure A. In the torus of triangles, the (degenerate) border of the triangle of triangles becomes the boundary between positively and negatively oriented triangles, shaded in Figure A, and a continuous family that crosses the boundary _changes orientation_. The torus's nontrivial topology, exemplified by loops around the tubular body and the center hole that cannot be contracted to a point, leads us to consider "nontrivial" loops of triangles in Section 3 (Figure B).
This paper is meant to be a tour through some basic ideas of moduli space theory in pursuit of a way to understand families of triangles. Although the proof of our main theorem is elementary, we have not seen the result in the literature.
## 1. **The Triangles of Triangles**
A _triangle_ is a plane figure defined by three vertices and three straight edges connecting them. A _labeled triangle_ is a triangle together with a labeling of its vertices. We represent a labeled triangle by an ordered triple \((\alpha,\beta,\gamma)\) whose entries are the angles at the vertices \(A,B,C\). We then view \(A,B,C\) as coordinate functions that compute angles. A given similarity class of (scalene) triangle has six labeled representatives, corresponding to the \(3!=6\) possible orderings of \((\alpha,\beta,\gamma)\).
We parameterize the labeled triangles with the part of the plane \(A+B+C=\pi\) in the \((+++)\)-octant. This is a borderless triangle, denoted \(\mathsf{T}^{\circ}\), whose closure \(\mathsf{T}=\mathsf{T}^{\circ}\sqcup\partial\mathsf{T}\) is called the _triangle of triangles_. In Figure 1.1 the three altitudes represent isosceles triangles, and the darker inscribed triangular region represents the acute triangles. The centroid is the equilateral triangle. We refer
to it as a moduli space, though technically it fails the universal property, since the isosceles and equilateral points have nontrivial automorphisms.
Figure A. The Torus of Triangles, shaded positive and negative, with a trivial and nontrivial loop.
Figure B. Triangle Families Inscribed in a Circle
### Using the Triangle of Triangles to Compute Things
We like the representation of the space of triangles in Figure 1, because it seems natural and is easy to analyze geometrically. It is certainly not original, see e.g. [1, Figure 2]. With it we can compute things like dimension and relative measure. It leads us to expect the acute and obtuse locus to be two-dimensional, since those regions locally have two degrees of freedom. It also suggests the loci of isosceles and right triangles should be one-dimensional, which we know is true since one number is required to specify a right triangle or an isosceles triangle. Additionally, the triangle of triangles predicts
1. The ratio of obtuse to acute is \([3:1]\). This is because the triangle of acute triangles is the (red) inscribed triangle, one of four equilateral triangles that partition the triangle of triangles.
2. The ratio of obtuse-isosceles to acute-isosceles is \([1:1]\). This is because the altitudes represent isosceles triangles, and exactly half of each altitude lies in the triangle of acute triangles.
3. The ratio of isosceles to right triangles is \(\left[2\cdot\frac{\sqrt{3}}{2}:\frac{1}{2}\right]=2\sqrt{3}\).
Though this parameter space seems "natural" to us, there are other natural-seeming candidates, and, as mentioned above, they sometimes give different predictions for these numbers. The reason is that in each one there is an implicit definition for the term "random," and different constructions can differ in their implicit definitions.
## 2. **The Torus of Triangles**
### Oriented Labeled Triangles
Although we have just defined a triangle of triangles, we intend to replace it with something more general. As mentioned above, the question motivating this paper is, what happens when a continuous family of triangles in \(\mathsf{T}^{\circ}\) crosses the border \(\partial\mathsf{T}\)? The answer is that the triangles in the family change _orientation_.
An _orientation_ of a labeled triangle is an assignment of a direction in which to traverse the vertices. We say a triangle is _positively oriented_ if the vertices are traversed counterclockwise, and write \(\Delta ABC=(\alpha,\beta,\gamma)\); otherwise we say it is _negatively oriented_, and write \(-\Delta ABC=(-\alpha,-\beta,-\gamma)\), where \(0\leq\alpha,\beta,\gamma\leq\pi\) in both cases. Parameterize the negatively oriented elements with \(A+B+C=-\pi\) in the \((-\,-\,-)\)-octant. As for the positively oriented elements, this is a borderless triangle, which we denote by \(-\mathsf{T}^{\circ}\). We call the closure \(-\mathsf{T}=-\mathsf{T}^{\circ}\sqcup-\partial\mathsf{T}\) the _shadow triangle of triangles_.
The border points \(\partial\mathsf{T}\) and \(-\partial\mathsf{T}\) are not, of course, proper triangles, since they are triples with at least one zero entry; we say they are _degenerate_ triangles. Let \(\mathsf{LOTS}=\mathsf{T}\sqcup-\mathsf{T}\) denote the set of labeled, oriented triangles up to similarity, including the degenerate ones. The two sets are illustrated in Figure 2.
### Main Theorem
**Theorem 2.2.1**.: \(\mathsf{LOTS}\) _is naturally a topological torus. Explicitly,_
\[\mathsf{LOTS}\ =\ \frac{\mathsf{T}\sqcup-\mathsf{T}}{\sim}\]
_where the relation is on \(\partial\mathsf{T}\) and \(-\partial\mathsf{T}\), illustrated by arrows in Figure 2, and given explicitly by by_
\[(\alpha,\beta,0)\in\partial\mathsf{T}\ \sim\ (\alpha-\pi,\beta-\pi,0) \in-\partial\mathsf{T}\quad\text{when $\gamma=0$}\] \[(\alpha,0,\gamma)\in\partial\mathsf{T}\ \sim\ (\alpha-\pi,0,\gamma-\pi) \in-\partial\mathsf{T}\quad\text{when $\beta=0$}\] \[(0,\beta,\gamma)\in\partial\mathsf{T}\ \sim\ (0,\beta-\pi,\gamma-\pi) \in-\partial\mathsf{T}\quad\text{when $\alpha=0$}\]
Proof.: To prove it we induce a quotient on the disjoint union \(\mathsf{T}\sqcup-\mathsf{T}\) via a bijection with a transparently topologically connected alternative parameter space for labeled oriented triangles, one
that uses a triangle's _relative arguments_. For any three points \(A,B,C\) inscribed in a circle, let \(t_{A}=\arg(A)-\arg(C)\) and \(t_{B}=\arg(B)-\arg(C)\), the "relative arguments" of the points \(A,B,C\), where an argument is traversed counterclockwise from a fixed ray (see Figure 2.3).
Let \(\mathsf{R}\) be the \(t_{A}t_{B}\)-plane. It is clear that every pair of relative arguments defines a triangle, and that the triangle defined by a pair \((t_{A},t_{B})\) is positively oriented if and only if the representative
Figure 2.3. Relative Arguments
Figure 2.1.
Figure 2.2. \((\mathsf{T}\sqcup-\mathsf{T})/\!\sim\)
in the fundamental domain \(0<t_{A},t_{B}<2\pi\) satisfies \(t_{A}<t_{B}\). Thus \(\mathsf{R}\) is partitioned into triangles corresponding to positively and negatively oriented triangles, shown in yellow and gray in Figure 2.4.
It is also clear that points in \(\mathsf{R}\) determine the same triangle if and only if they are congruent (mod \(2\pi\)). Since every triangle has a set of relative arguments, the assignment of \(\triangle ABC\) to a point \((t_{A},t_{B})\) defines a surjective map
\[\triangle\ :\ \mathsf{R}\longrightarrow\mathsf{LOTS}\]
that induces a bijection \(\mathsf{R}/2\pi\stackrel{{\sim}}{{\longrightarrow}}\mathsf{LOTS}\), realizing \(\mathsf{LOTS}\) as the topological torus of Figure 2.5.
To compute \(\triangle\), which takes a pair of relative arguments to a triple of angles, either on \(\mathsf{T}\) or \(-\mathsf{T}\), we use Euclid III.20 [1, III Proposition 20], which states, _in a circle the angle at the center is double the angle at the circumference, when the angles have the same circumference as base._ Let \([t_{A}]\) and \([t_{B}]\) denote representatives of \(t_{A}(\text{mod }2\pi)\) and \(t_{B}(\text{mod }2\pi)\) in the interval \([0,2\pi]\), where the choice of \(0\) or \(2\pi\) will be clear from context. Applying Euclid III.20 to Figure 2.3 yields
\[\triangle(t_{A},t_{B})=\begin{cases}\left(\frac{1}{2}(2\pi-[t_{B}]),\,\frac{1} {2}[t_{A}],\,\frac{1}{2}([t_{B}]-[t_{A}])\right)\in\mathsf{T}&\text{if }0<[t_{A}]<[t_{B}]<2\pi\\ \left(-\frac{1}{2}[t_{B}],\,-\frac{1}{2}(2\pi-[t_{A}]),\,\frac{1}{2}([t_{B}]-[ t_{A}])\right)\in-\mathsf{T}&\text{if }0<[t_{B}]<[t_{A}]<2\pi\end{cases}\]
Thus \(\triangle\) maps the yellow regions to positively oriented triangles in \(\mathsf{T}^{\circ}\), and the gray regions to negatively oriented triangles, in \(-\mathsf{T}^{\circ}\). The triangle \(\triangle(t_{A},t_{B})\) is degenerate if \([t_{B}]-[t_{A}],[t_{A}]\), or \([t_{B}]\) is \(0\), in which cases two vertices coincide. In these cases the formula for \(\triangle(t_{A},t_{B})\) identifies
Figure 2.4. The Plane \(\mathsf{R}\)
Figure 2.5. \(\mathsf{R}/2\pi\) is a Torus
points on the borders as follows:
\[\begin{bmatrix}\left(\pi-\frac{1}{2}[t_{A}],\,\frac{1}{2}[t_{A}],\,0\right)\in \mathsf{T}\longleftrightarrow\left(-\frac{1}{2}[t_{A}],\,-\pi+\frac{1}{2}[t_{ A}],\,0\right)\in-\mathsf{T}&\text{if $[t_{B}]-[t_{A}]=0$}\\ \left(\pi-\frac{1}{2}[t_{B}],\,0,\,\frac{1}{2}[t_{B}]\right)\in\mathsf{T} \longleftrightarrow\left(-\frac{1}{2}[t_{B}],\,0,\,-\pi+\frac{1}{2}[t_{B}] \right)\in-\mathsf{T}&\text{if $t_{A}\equiv 0\equiv 2\pi$}\\ \left(0,\,\frac{1}{2}[t_{A}],\,\pi-\frac{1}{2}[t_{A}]\right)\in\mathsf{T} \longleftrightarrow\left(0,\,-\pi+\frac{1}{2}[t_{A}],\,-\frac{1}{2}[t_{A}] \right)\in-\mathsf{T}&\text{if $t_{B}\equiv 2\pi\equiv 0$}\end{bmatrix}\]
This matches the statement of the theorem, and completes the proof.
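For readers who want to experiment with the map \(\triangle\), the following sketch evaluates the formula from the proof numerically; the function name and the sample family are ours, and angles are returned in radians.

```python
import numpy as np

TWO_PI = 2 * np.pi

def triangle(t_A, t_B):
    """Angles (alpha, beta, gamma) of the labeled, oriented triangle with
    relative arguments (t_A, t_B); a zero entry signals a degenerate triangle."""
    a, b = t_A % TWO_PI, t_B % TWO_PI
    if a < b:                                         # positively oriented, lands in T
        return ((TWO_PI - b) / 2, a / 2, (b - a) / 2)
    return (-b / 2, -(TWO_PI - a) / 2, (b - a) / 2)   # negatively oriented, lands in -T

# A nontrivial loop: fix A and C (so t_A is constant) and let B travel once
# around the circle; the family changes orientation where t_B crosses t_A.
family = [triangle(np.pi / 3, t) for t in np.linspace(0.1, TWO_PI - 0.1, 12)]
```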
The torus of triangles is drawn in Figure 2.6. The yellow and gray regions represent the positively- and negatively-oriented triangles. The red curves at the boundaries of the positively- and negatively-oriented regions are degenerate triangles. The blue and black curves are the isosceles and right families, respectively. The triple intersection point of the blue isosceles curves in the yellow region is the positively-oriented equilateral triangle point; this point has a negatively-oriented counterpart in the gray region. The point shown on the front of the torus in Figure 2.6, where the three red and three blue curves intersect, corresponds to the identified four corners of the fundamental polygon.
**Remark 2.2.2**.: The linear map taking the \(t_{A}t_{B}\)-plane \(\mathsf{R}\) to the plane \(A+B+C=\pi\) on which we find \(\mathsf{T}\) is
\[\left(\pi,0,0\right)\,+\,\begin{bmatrix}\phantom{-}0&-\frac{1}{2}\\ \frac{1}{2}&0\\ -\frac{1}{2}&\frac{1}{2}\end{bmatrix}\]
with area distortion factor \(\frac{\sqrt{3}}{4}\). Since lengths are distorted nonuniformly, the fundamental domain \(\mathsf{R}/2\pi\) computes the same information about different types of triangles with respect to area, but not with respect to length, as the original moduli space \(\mathsf{T}\) in Subsection 1.1.
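The stated distortion factor can be checked directly from the \(3\times 2\) matrix above, as \(\sqrt{\det(J^{\top}J)}\):

```python
import numpy as np

J = np.array([[ 0.0, -0.5],
              [ 0.5,  0.0],
              [-0.5,  0.5]])                      # linear part of the map above
factor = np.sqrt(np.linalg.det(J.T @ J))          # area distortion of a 3x2 linear map
assert np.isclose(factor, np.sqrt(3) / 4)         # equals sqrt(3)/4, as claimed
```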
## 3. **Examples of Families**
A torus has a nontrivial fundamental group, and therefore contains "nontrivial" loops, loops not contractible to a point. The families of degenerate, isosceles, and right (oriented labeled) triangles all determine nontrivial loops.
### A Trivial Family
Several concentric trivial loops of triangles are shown in purple on the torus in Figure 3.1a, and in the \((t_{A},t_{B})\)-plane in Figure 3.1b. These families form circles in the plane, all contractible to a point. The actual (3-sided) triangles in one of these families are illustrated in Figure 3.2a.
Figure 2.6. The Torus of Triangles
### A Nontrivial Family
A nontrivial family of actual (3-sided) triangles is illustrated in Figures 3.3a and 3.1a. This family is generated by fixing vertices \(A\) and \(C\) and allowing \(B\) to wander around the circle. Notice how the warm-colored triangles are negatively-oriented and thus located in the gray triangle, while the cool-colored triangles are positively-oriented and located in the yellow triangle. The yellow triangles in Figure 3.3a are almost degenerate triangles, which is seen in Figure 3.3b by the yellow color of the gradient when it crosses the diagonal line, which represents degenerate triangles.
Figure 3.1. Trivial Families and a Nontrivial Family
Figure 3.2. The Triangles in a Trivial Family. Point \(\Delta ABC\) in Figure 3.2b represents \(\Delta ABC\) in Figure 3.2a.
## Acknowledgment
The authors would like to acknowledge the generous support of the Bill and Linda Frost Foundation, without which this paper would not have been possible.
|
2303.09209 | Recommending the optimal policy by learning to act from temporal data | Prescriptive Process Monitoring is a prominent problem in Process Mining,
which consists in identifying a set of actions to be recommended with the goal
of optimising a target measure of interest or Key Performance Indicator (KPI).
One challenge that makes this problem difficult is the need to provide
Prescriptive Process Monitoring techniques only based on temporally annotated
(process) execution data, stored in, so-called execution logs, due to the lack
of well crafted and human validated explicit models. In this paper we aim at
proposing an AI based approach that learns, by means of Reinforcement Learning
(RL), an optimal policy (almost) only from the observation of past executions
and recommends the best activities to carry on for optimizing a KPI of
interest. This is achieved first by learning a Markov Decision Process for the
specific KPIs from data, and then by using RL training to learn the optimal
policy. The approach is validated on real and synthetic datasets and compared
with off-policy Deep RL approaches. The ability of our approach to compare
with, and often overcome, Deep RL approaches provides a contribution towards
the exploitation of white box RL techniques in scenarios where only temporal
execution data are available. | Stefano Branchi, Andrei Buliga, Chiara Di Francescomarino, Chiara Ghidini, Francesca Meneghello, Massimiliano Ronzani | 2023-03-16T10:30:36Z | http://arxiv.org/abs/2303.09209v1 | # Recommending the optimal policy by learning to act from temporal data
###### Abstract
Prescriptive Process Monitoring is a prominent problem in Process Mining, which consists in identifying a set of actions to be recommended with the goal of optimising a target measure of interest or Key Performance Indicator (KPI). One challenge that makes this problem difficult is the need to provide Prescriptive Process Monitoring techniques only based on temporally annotated (process) execution data, stored in, so-called execution logs, due to the lack of well crafted and human validated explicit models. In this paper we aim at proposing an AI based approach that learns, by means of Reinforcement Learning (RL), an optimal policy (almost) only from the observation of past executions and recommends the best activities to carry on for optimizing a KPI of interest. This is achieved first by learning a Markov Decision Process for the specific KPIs from data, and then by using RL training to learn the optimal policy. The approach is validated on real and synthetic datasets and compared with off-policy Deep RL approaches. The ability of our approach to compare with, and often overcome, Deep RL approaches provides a contribution towards the exploitation of white box RL techniques in scenarios where only temporal execution data are available.
## 1 Introduction
Prescriptive Process Monitoring (PPM) is a prominent problem in Process Mining, which consists in identifying a set of actions or interventions to be recommended with the goal of optimising a target measure of interest or Key Performance Indicator (KPI). In its simplest formulations it contains methods for raising alarms or triggering interventions, to prevent or mitigate undesired outcomes, as well as for recommending the best resource allocation. Only recently, works have targeted the generation of recommendations of the next activity(ies) to optimize a certain KPI of interest [22, 23], such as, the cycle time of the process execution in a single-actor setting (or ignoring the actor dimension). Only one recent work [1] explicitly considers the process execution in the context of a multi-actor complex environment that depends upon exogenous factors, including how the other process actors behave. In this setting, identifying the best strategy to follow for a target actor is not straightforward and Artificial Intelligence (AI) techniques based on Reinforcement Learning (RL) have shown promising results.
Indeed, Reinforcement Learning is increasingly used as one of the state-of-the art solutions in several areas where strategic reasoning is needed in a multi-actor setting: from gaming [20] to dialogue generation [11], from smart city policy making [13], to planning [12], just to name a few. Two RL approaches are used in the literature: one relies entirely on data and exploits techniques of Deep RL; the other exploits explicit Markov Decision Processes (MDP), but assumes the existence of a model (the MDP) for "playing the game". The first approach has the advantage of avoiding the construction of, often complex, models, but has the drawback of producing black box decisions; while the second is more transparent but requires a model (the MDP) of the scenario at hand for the KPIs of interest. Unfortunately, the explicit knowledge required to build an MDP is often not available in the Process Mining domain. Moreover, it is a knowledge difficult to obtain by manually inspecting the process executions stored in the, so-called, _event logs_ (as done in [1]), or by using Process Discovery techniques [20], which are not tailored to discover meaningful MDP states for the considered KPIs. Thus, an important challenge to address in PPM is the ability to provide methods able to recommend next activity(ies) in multi-actor processes only exploiting the information contained in the temporally annotated (process) execution data, ideally learning an MDP in a white box manner, thus avoiding black box Deep RL approaches.
In this paper we aim at tackling this challenge by proposing an AI based approach that learns, by means of Reinforcement Learning, an optimal policy only from the observation of temporally annotated sequences of event data, i.e., the past process executions, and recommends the best activities to carry on for optimizing a KPI of interest in a multi-actor setting. This is achieved first by learning an explicit MDP for the specific KPIs from the event log (Sec. 3.1), and then by using RL training to learn the optimal policy (Sec. 3.2). One of the challenges we have to tackle is the construction of an MDP which must provide a sufficiently complete description of the system. This can be technically difficult because a _state space_ which describes all possible configurations of the system can become unbearably big, so an effective technique to avoid this must be put in place. We address this challenge by using clusterization to group together similar process executions, which -- intuitively -- represent similar configurations of the system and can be grouped into a single state, thus greatly reducing the state space dimension without affecting the quality of the result. To show the validity of the approach we evaluate it on real and synthetic datasets, (i) by adapting the estimation of the _state-action value function_ \(q\) -- via scaling -- to avoid problems related to overfitting, and (ii) by comparing it with off-policy deep Reinforcement Learning approaches, also taking into account the log dimension and the popularity of successful traces in the event log (Sec. 4 and 5).
The ability of our approach to compare with, and often overcome, Deep RL approaches shows how our technique contributes in a significant manner to learn explicit Markov Decision Processes directly from temporal execution data in an effective manner. Therefore, it opens the doors to the exploitation of white box RL techniques in scenarios where only temporally annotated data are available, which has the potential to be applied outside the Process Mining area.
## 2 Background
In this section we provide the background knowledge necessary to understand the rest of the paper.
### Event log
An _event log_\(\mathcal{L}\) consists of traces representing executions of a process (a.k.a. cases). A _trace_\(\sigma=\langle e_{1},e_{2},\ldots,e_{n}\rangle\) is an ordered finite sequence of events \(e_{i}\), each referring to the execution of an activity label \(\mathfrak{a}\in A\). The _event_\(e_{i}=(\mathfrak{a},t,p_{1},\ldots,p_{n})\) is, in turn, a complex structure comprising the activity label \(\mathfrak{a}=\text{Act}(e_{i})\), its timestamp \(t\), indicating the time in which the event has occurred, and possibly _data payloads_\(p_{1},\ldots,p_{n}\), consisting of attributes, such as, the resource(s) involved in the execution of an activity, or other data recorded during the event. A partial trace execution, or _prefix_, of a trace \(\sigma=\langle e_{1},e_{2},\ldots,e_{n}\rangle\) is a sequence \(\sigma_{k}\) containing the first \(k\) events of \(\sigma\), with \(k\leq n\). For instance, given a trace \(\sigma=\langle e_{1},e_{2},e_{3},e_{4}\rangle\), the prefix \(\sigma_{2}=\langle e_{1},e_{2}\rangle\).
We define the _frequency_ \(f_{\mathfrak{a}}(\sigma)\) of an activity label \(\mathfrak{a}\) in a trace \(\sigma\) as the number of times \(\mathfrak{a}\) occurs in \(\sigma\). We define the _position_ \(p_{\mathfrak{a}}(\sigma)\) of \(\mathfrak{a}\) in \(\sigma\) as the last position in which \(\mathfrak{a}\) occurs in \(\sigma\), with the first event having position 1. Both frequency and position are equal to zero if the corresponding activity is missing in the trace. For instance, given a trace \(\sigma=\langle(\mathfrak{a}_{1},t_{1}),(\mathfrak{a}_{2},t_{2}),(\mathfrak{a}_{1},t_{3})\rangle\), \(f_{\mathfrak{a}_{1}}(\sigma)=2\) and \(p_{\mathfrak{a}_{1}}(\sigma)=3\).
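To make these definitions concrete, the following minimal Python sketch (not taken from the authors' implementation; traces are assumed to be lists of `(activity, timestamp)` tuples) computes \(f_{\mathfrak{a}}(\sigma)\) and \(p_{\mathfrak{a}}(\sigma)\):

```python
def frequency(trace, activity):
    """f_a(sigma): number of occurrences of `activity` in the trace."""
    return sum(1 for event_activity, *_ in trace if event_activity == activity)

def position(trace, activity):
    """p_a(sigma): last 1-based position of `activity` in the trace, 0 if absent."""
    last = 0
    for i, (event_activity, *_) in enumerate(trace, start=1):
        if event_activity == activity:
            last = i
    return last

# Example from the text: sigma = <(a1, t1), (a2, t2), (a1, t3)>
sigma = [("a1", "t1"), ("a2", "t2"), ("a1", "t3")]
assert frequency(sigma, "a1") == 2 and position(sigma, "a1") == 3
```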
### Reinforcement Learning
Reinforcement Learning [10] is a form of machine learning where an agent learns to act in an environment by maximizing the total amount of reward received for its actions. At each time step \(t\), the agent chooses and performs an _action_ \(a\) in response to the observation of the _state_ of the environment \(s\). Performing action \(a\) causes, at the next time step \(t+1\), the environment to stochastically move to a new state \(s^{\prime}\), and gives the agent a _reward_ \(r_{t+1}=\mathcal{R}(s,a,s^{\prime})\) that indicates how well the agent has performed. The probability that, given the current state \(s\) and the action \(a\), the environment moves into the new state \(s^{\prime}\) is given by the state transition function \(\mathcal{P}(s,a,s^{\prime})\).
The learning problem is therefore described as a discrete-time MDP \(M\), which is defined by a tuple \(M=(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma)\), where:
* \(\mathcal{S}\) is the set of states.
* \(\mathcal{A}\) is the set of agent's actions.
* \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R }_{[0,1]}\) is the transition probability function. \(\mathcal{P}(s,a,s^{\prime})=Pr(s_{t+1}=s^{\prime}|s_{t}=s,a_{t}=a)\) is the probability of transition (at the time step \(t\)) from state \(s\) to state \(s^{\prime}\) under action \(a\in\mathcal{A}\).
* \(\mathcal{R}:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function. \(\mathcal{R}(s,a,s^{\prime})\) is the reward obtained by going from \(s\) to \(s^{\prime}\) with action \(a\).
* \(\gamma\in\mathbb{R}_{[0,1]}\) is the discount factor for the rewards. Value of \(\gamma<1\) models an agent that discounts future rewards.
An MDP satisfies the _Markov Property_, that is, given \(s_{t}\) and \(a_{t}\), the next state \(s_{t+1}\) is conditionally independent of all prior states and actions, and it only depends on the current state, i.e., \(Pr(s_{t+1}|s_{t},a_{t})=Pr(s_{t+1}|s_{0},\cdots,s_{t},a_{0},\cdots,a_{t})\).
A policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) is a mapping from each state \(s\in\mathcal{S}\) to an action \(a\in\mathcal{A}\), and the _cumulative reward_ is the (discounted) sum of the rewards obtained by the agent while acting at the various time points.
The _state-action value function_\(q^{\pi}\) is the expected discounted future reward obtained by taking action \(a\) in state \(s\) and then continuing to use the policy \(\pi\) thereafter
\[q^{\pi}(s,a)=\mathbb{E}_{\pi}\bigg{(}\sum_{k=0}^{\infty}\gamma^{k}r_{k+t+1} \bigg{|}s=s_{t},a=a_{t}\bigg{)} \tag{1}\]
where \(r_{t+1}=\mathcal{R}(s_{t},a_{t},s_{t+1})\) is the reward obtained at time \(t+1\) and \(\mathbb{E}\) means expectation value.
A policy is _optimal_, usually denoted as \(\pi^{*}\), if it maximizes the state-action value (1), that is if \(q^{*}(s,a)\coloneqq q^{\pi^{*}}(s,a)\geq q^{\pi}(s,a)\) for all \(\pi\), \(s\in\mathcal{S}\) and \(a\in\mathcal{A}\). Learning the optimal policy is the goal of RL.
In RL, _on-policy_ methods improve the current policy, while _off-policy_ methods improve a target policy different from the policy used for generating experiences.
Policy iteration is an on-policy method that iteratively evaluates (_evaluation phase_) and improves (_optimization phase_) an arbitrary policy \(\pi\) until convergence. In the evaluation phase, _Monte Carlo methods_ estimate the action-value function \(q^{\pi}(s,a)\) using simulations of episodes generated by alternating between exploiting the policy and exploring the action space. After each episode, the \(q^{\pi}\) value is updated for every visited state-action pair \((s,a)\) by estimating its value as the mean of the sampled returns that originated from \((s,a)\) over time. With enough time, an accurate estimate of \(q^{\pi}\) can be obtained. Policy improvement computes a greedy policy based on \(q^{\pi}\), where given a state \(s\), the new policy returns an action that maximizes \(q^{\pi}(s,\cdot)\).
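As a rough illustration of this evaluation/improvement loop (a sketch only, assuming episodes can be sampled from a simulator of the MDP; `sample_episode` and `actions` are hypothetical helpers, not part of the paper), an every-visit Monte Carlo control loop can be written as:

```python
import random
from collections import defaultdict

def mc_policy_iteration(sample_episode, actions, episodes=10_000, gamma=1.0, eps=0.1):
    """Every-visit Monte Carlo control with an epsilon-greedy behaviour policy.
    `sample_episode(act)` must run one episode following the callable `act`
    and return a list of (state, action, reward) triples."""
    q = defaultdict(float)        # estimated q(s, a)
    counts = defaultdict(int)     # visit counts for incremental averaging
    policy = {}                   # greedy policy being improved

    def act(s):
        if s not in policy or random.random() < eps:
            return random.choice(actions(s))          # explore
        return policy[s]                              # exploit

    for _ in range(episodes):
        episode = sample_episode(act)
        g = 0.0
        for s, a, r in reversed(episode):             # returns computed backwards
            g = r + gamma * g
            counts[(s, a)] += 1
            q[(s, a)] += (g - q[(s, a)]) / counts[(s, a)]         # running mean of returns
            policy[s] = max(actions(s), key=lambda b: q[(s, b)])  # greedy improvement
    return policy, q
```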
In an _off-policy_ setting, the agent collects experiences by interacting with the environment and stores them in a buffer. The actions are chosen from the behavioural policy and for
the policy improvement step, experiences are sampled from the buffer to learn the target policy.
_Q-Learning_ is an off-policy method that evaluates and improves a separate greedy policy while following the behavioural policy. It is a model-free value iteration algorithm that defines an action-value function \(Q\) by the following update formula:
\[Q(s,a)\gets Q(s,a)+\alpha\left[r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime})-Q(s,a)\right] \tag{2}\]
where \(\alpha\) is the learning rate parameter and \(r\) is the reward observed when moving from \(s\) to \(s^{\prime}\) under action \(a\). The action-value function \(Q\), defined by (2), directly approximates the optimal action-value function \(q^{\star}\), independently of the policy being followed in the updates. As opposed to on-policy methods, Q-Learning does not wait until the end of the episode to get the returns and update \(q^{\pi}\), but can do so in an incremental fashion at each timestep.
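In code, a single tabular update step corresponding to Eq. (2) can be sketched as follows (an illustration only; `Q` is assumed to be a plain dictionary keyed by state-action pairs and `next_actions` the set of actions available in \(s^{\prime}\)):

```python
def q_learning_update(Q, s, a, r, s_next, next_actions, alpha=0.1, gamma=0.9):
    """One Q-Learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((Q.get((s_next, b), 0.0) for b in next_actions), default=0.0)
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
```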
## 3 Learning to act from temporal data
In this section we describe the proposed RL solution, whose pipeline is reported in Figure 1. The pipeline takes as input an event log related to a multi-actor process, the definition of a relevant KPI to optimise, as well as the ownership of the activities in the log and builds via RL a (recommender) system that recommends the next best activity(ies) in order to optimise the desired KPI. It is composed of three phases: in the _Preprocessing phase_ an MDP is built starting from the three inputs of the pipeline (Sec. 3.1); in the _Reinforcement Learning phase_ the MDP is trained by the RL algorithm to learn the best policy, and, finally, in the _Runtime phase_, the policy is used to recommend the best next activity(ies) for an ongoing execution (Sec. 3.2).
### MDP construction from log data
The aim of the _Preprocessing phase_ is building an MDP starting from: (i) the event log; (ii) the list of activities carried out by the _agent_, and the ones by the _environment_; and (iii) the KPI to optimise.
At a high level, the following mapping between the MDP components (actions and states) and the information extracted from the event log can be defined as follows:
* actions: the activities that the agent can carry out;
* states: a comprehensive description of the system. In a fashion similar to (Branchi et al., 2022) it contains three components:
* last activity selected by the agent or the environment;
* the trace history, that is, the condensed information about the entire trace execution up to that point;
* information about the reward obtained up to that point.
Considering the definition of MDP action and state reported above, the steps to follow for the construction of the MDP are the following:
1. _Enriching the log_. In this step, based on the list of the agent's activities, each activity in the log is marked as an agent or as an environment (all the other activities) activity. Moreover, the KPI of interest is computed for each trace \(\sigma\). Its value can depend on the executed activities or on other attributes of the trace, and represents the reward \(r(\sigma)\) of the whole trace \(\sigma\). Note that we define \(r(\sigma_{k})=0\) for incomplete prefixes \(\sigma_{k}\), \(k<\text{len}(\sigma)\).
2. _Encoding and clustering using k-means._ In this step, each prefix in the log is encoded using three types of information: the frequency of the activities, the last position in which an activity has occurred, and the reward obtained up to that point. Namely, if the alphabet of all the activities is \(\{\mathfrak{a}_{1},\ldots,\mathfrak{a}_{n}\}\) then the encoding of a prefix \(\sigma_{k}\) is: \[\boldsymbol{v}_{\sigma_{k}}=\Big{(}\frac{f_{\mathfrak{a}_{1}}(\sigma_{k})}{f_ {\text{max}}},\ldots,\frac{f_{\mathfrak{a}_{n}}(\sigma_{k})}{f_{\text{max}}}, \frac{p_{\mathfrak{a}_{1}}(\sigma_{k})}{p_{\text{max}}},\ldots,\frac{p_{ \mathfrak{a}_{n}}(\sigma_{k})}{p_{\text{max}}},\tilde{r}(\sigma_{k})\Big{)}\] (3) where \(f_{\mathfrak{a}_{1}}\) and \(p_{\mathfrak{a}_{i}}\) are defined in Sect. 2. \(f_{\text{max}}\) and \(p_{\text{max}}\) are respectively the highest frequency and position for all activities and all prefixes in the log. Note that the latter can also be seen as the max length of all traces in the log \[f_{\text{max}}=\max_{\begin{subarray}{c}a\in A,\sigma\in\mathcal{L}\\ 0<k\leq\text{len}(\sigma)\end{subarray}}\big{(}f_{\mathfrak{a}}(\sigma_{k}) \big{)},\qquad p_{\text{max}}=\max_{\sigma\in\mathcal{L}}\big{(}\text{len}( \sigma)\big{)}\] (4) Finally, \(\tilde{r}(\sigma_{k})\) is the _prefix normalized reward_: it is equal to \(0\) for proper trace prefixes, while it is the reward of the trace normalized with respect to the rewards of all the traces for complete traces. In other terms: \[\tilde{r}(\sigma_{k})=\begin{cases}0&k<\text{len}(\sigma)\\ \frac{r(\sigma)-\min_{\sigma^{\prime}\in\mathcal{L}}r(\sigma^{\prime})}{\max_ {\sigma^{\prime}\in\mathcal{L}}r(\sigma^{\prime})-\min_{\sigma^{\prime}\in \mathcal{L}}r(\sigma^{\prime})}&k=\text{len}(\sigma)\end{cases}\] (5) The encoded prefixes are then used to train a k-means model (Lloyd, 1957) that will be able to assign a new prefix to the cluster containing the most similar prefixes.
3. _Constructing the MDP_. Once the k-means model has been trained, the MDP can be built. For each prefix \(\sigma_{k}\), a state \(s_{\sigma_{k}}\) is defined as the pair of the last performed activity \(\mathfrak{a}_{i_{k}}=\text{Act}(e_{k})\) and the cluster \(c_{\sigma_{k-1}}\) assigned to \(\sigma_{k-1}\) by the k-means model: \[s_{\sigma_{k}}=\big{(}\mathfrak{a}_{i_{k}},\,c_{\sigma_{k-1}}\big{)}.\] (6) This definition of state allows us to include, in the state, information about the history of the execution and its reward. Once the states are defined, the MDP is built by replaying the traces in the event log: we build a directed graph, where states correspond to nodes and edges correspond to actions moving from one node to another. Moreover, for each edge, the probability of reaching the target node (computed based on the number of traces in the event log that reach the corresponding state) and the value of the reward function are computed. Each edge is mapped to the tuple \((s,a,s^{\prime},\mathcal{P},\mathcal{R})\), where \(s\) is the state corresponding to the source node of the edge, \(a\) is the action labelling the edge, \(s^{\prime}\) is the state corresponding to the target node of the edge, and \(\mathcal{P}(s,a,s^{\prime})\) is computed as the percentage of the traces that reach the state \(s^{\prime}\) among all the traces that reach state \(s\) and execute action \(a\). The reward function \(\mathcal{R}(s,a,s^{\prime})\) is computed as the average of the rewards \(r(\sigma_{k})\) of those prefixes \(\sigma_{k}\) corresponding to the edge \((s,a,s^{\prime})\)1.
Figure 1: Pipeline of the proposed RL solution.
Footnote 1: A prefix \(\sigma_{k}\) corresponds to an edge \((s,a,s^{\prime})\) if \(s^{\prime}=s_{\sigma_{k}}\), \(a=\text{Act}(e_{l})\) and \(s=s_{\sigma_{l-1}}\), where \(e_{l}\) is the last event with \(l<k\) associated to an agent’s activity in \(\sigma_{k}\).
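A compact sketch of this preprocessing is given below; it is an illustration only, with scikit-learn's k-means as a stand-in for the clustering step, and it assumes that `training_prefixes` (a list of `(prefix, normalised_reward)` pairs), the activity alphabet `activities` and the maxima \(f_{\text{max}}\), \(p_{\text{max}}\) of Eq. (4) have already been extracted from the event log:

```python
import numpy as np
from sklearn.cluster import KMeans

def encode_prefix(prefix, activities, f_max, p_max, norm_reward=0.0):
    """Encoding of Eq. (3): normalised activity frequencies, normalised last
    positions, and the prefix normalised reward (0 for incomplete prefixes)."""
    freq = {a: 0 for a in activities}
    pos = {a: 0 for a in activities}
    for i, (activity, *_rest) in enumerate(prefix, start=1):
        freq[activity] += 1
        pos[activity] = i
    return np.array([freq[a] / f_max for a in activities]
                    + [pos[a] / p_max for a in activities]
                    + [norm_reward])

def fit_state_mapper(training_prefixes, activities, f_max, p_max, n_clusters=100):
    """Train k-means on the encoded prefixes and return a state-assignment function."""
    X = np.stack([encode_prefix(p, activities, f_max, p_max, r)
                  for p, r in training_prefixes])
    kmeans = KMeans(n_clusters=n_clusters, random_state=0).fit(X)

    def state_of(prefix):
        """State of Eq. (6): (last activity, cluster of the preceding prefix)."""
        cluster = int(kmeans.predict(
            encode_prefix(prefix[:-1], activities, f_max, p_max)[None, :])[0])
        return prefix[-1][0], cluster

    return state_of
```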
### RL and runtime phases
In the _Reinforcement Learning_ phase, the MDP obtained in the previous phase is trained by the RL algorithm to learn the optimal policy, as shown in Figure 1. We use Monte Carlo simulation: each simulation uses the MDP to generate an episode, or trace, whose final reward is eventually used to properly update the state-action value (1) and then the policy via policy iteration.
In (Branchi et al., 2022) the authors noted that the MDPs mined from event logs can have some issues. Namely, some action choices in the MDP almost certainly lead to a high final reward during training; however, these choices are not reliable because they appear in too few traces, so the apparently high reward is accidental and does not rest on a robust statistical correlation within the training data. This can be seen as overfitting on the training data used to construct the MDP.
To tackle this issue in (Branchi et al., 2022) a recalibration of the reward was performed for every transition with a multiplicative factor depending on the number of occurrences of the given transition in the log. In this way, very convenient but unreliable actions were discouraged during the training with respect to convenient actions more popular in the log. In this work, instead of scaling the reward of each transition, we scale the \(q\)-value function (1) as follows
\[\tilde{q}(s,a)=q(s,a)\,h(s,a) \tag{7}\]
where \(h:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}_{[0,1]}\) is a monotonically non-decreasing function of the number \(n(s,a)\) of occurrences of the state-action pair in the log. The trivial choice \(h^{0}(s,a)=1\) defines the standard \(q\)-value function. In this work we consider three non-trivial types for \(h\) and we test their effect on the policy learned:
* a linear function of the number of occurrences: \[h^{\text{lin}}(s,a)=\frac{n(s,a)-\min_{s^{\prime}\in\mathcal{S},a^{\prime}\in\mathcal{A}}n(s^{\prime},a^{\prime})}{\max_{s^{\prime}\in\mathcal{S},a^{\prime}\in\mathcal{A}}n(s^{\prime},a^{\prime})-\min_{s^{\prime}\in\mathcal{S},a^{\prime}\in\mathcal{A}}n(s^{\prime},a^{\prime})}\]
* a step function \(h^{\text{step}}_{n_{t}}\) which is equal to zero for \(n(s,a)\) smaller than or equal to a certain threshold \(n_{t}>0\) and equal to one for higher values of \(n(s,a)\), that is: \[h^{\text{step}}_{n_{t}}(s,a)=\begin{cases}0&n(s,a)\leq n_{t}\\ 1&n(s,a)>n_{t}\end{cases}\]
* a smooth function \[h^{\text{smooth}}_{\lambda}(s,a)=1-2\frac{e^{-n(s,a)/\lambda}}{1+e^{-n(s,a)/\lambda}}\] parametrized by a real value \(\lambda>0\). This choice for \(h\) actually interpolates between the previous two proposals: indeed \(h^{\text{smooth}}_{\lambda\to 0}\to h^{\text{step}}_{n_{t}=0}\) and \(h^{\text{smooth}}_{\lambda\rightarrow\infty}\to h^{\text{lin}}\).
For every choice of \(h\) we obtain a different policy \(\pi_{h}\) from the RL training.
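The three instantiations can be written compactly as follows (a minimal sketch; \(n_{\min}\) and \(n_{\max}\) denote the minimum and maximum occurrence counts over all state-action pairs, and the usage comment is illustrative only):

```python
import math

def h_lin(n, n_min, n_max):
    """Linear scaling: min-max normalised occurrence count."""
    return (n - n_min) / (n_max - n_min)

def h_step(n, n_t=50):
    """Step scaling: discard state-action pairs observed at most n_t times."""
    return 0.0 if n <= n_t else 1.0

def h_smooth(n, lam=50.0):
    """Smooth scaling, interpolating between the step (lam -> 0) and the
    linear (lam -> infinity) choices."""
    return 1.0 - 2.0 * math.exp(-n / lam) / (1.0 + math.exp(-n / lam))

# Scaled value of Eq. (7) for a state-action pair (s, a) observed n times:
# q_tilde = q[(s, a)] * h_smooth(n)
```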
Finally, in the _Runtime_ phase, as shown in Figure 1, the policy is used to recommend the best next activity(ies) for an ongoing process execution2.
Footnote 2: The recommender system takes the ongoing prefix, maps it to a state following the same steps as in Sect. 3.1 and uses the policy to recommend the best next activity to maximize the reward (KPI).
## 4 Evaluation setting
In the following we report the evaluation setting we designed in order to measure the performance of the policies \(\pi_{h}\) obtained by applying the proposed pipeline, and to compare them with a state-of-the-art baseline.
### Research questions
We aim at evaluating the capability of the proposed solution to conveniently recommend the next activities so as to optimise the relevant KPI targeted. To this aim, we assess the capability of the proposed solution: (i) when different scaling functions \(h\) are used; (ii) with respect to state-of-the-art approaches; (iii) when applied to datasets with different characteristics (e.g., size and success rate). More in detail, we are interested to answer the following research questions:
1. How do the different instantiations of the scaling function \(h\) perform in terms of policy capability to optimise the KPI of interest?
2. How does the proposed approach perform in terms of policy capability to optimise the KPI of interest with respect to state-of-the-art techniques directly learning from temporal data (Deep RL approaches)?
3. How does the proposed approach perform in terms of robustness of the results with datasets of different sizes and success rates with respect to Deep RL approaches?
**RQ1** aims at comparing the different instantiations of the \(h\) function presented and to evaluate whether they are able to recommend more effective policies than the ones generated using the standard function \(h^{0}=1\). The aim of **RQ2** is to compare the proposed pipeline with state-of-the-art RL approaches that directly learn from temporal data without requiring further domain knowledge (e.g., the MDP) as input and, in particular, with black-box methods as Deep RL techniques. Finally, **RQ3** aims at evaluating the impact of different characteristics of the training dataset (dataset size and percentage of successful traces, i.e., traces scoring well on the KPI of interest) on our solution and Deep RL approaches to return a policy that optimises the KPI of interest.
### Datasets description
To evaluate the performance of the proposed solution we use the real-world publicly-available BPI Challenge 2012 event log (BPIC2012) (van Dongen, 2012) and a set of four synthetic event logs inspired by the BPIC2012 event log.
The BPIC2012 event log describes the execution of a loan application process in a Dutch Financial Institute. Every process execution represents a single application to the bank by a customer for a personal loan. The application can be accepted or rejected by the bank, likewise the customer can withdraw the application at any time. If the bank decides to grant a loan to a customer it generates an offer and sends it to the customer, that in turn can decide to accept the offer or to refuse it. This log is a good example of a multi-actor process scenario, where we chose to take the perspective of the bank actor, so that the agent is the _bank_ and the environment is the _customer_. The two actors have different goals, the customer aims for a convenient loan offer, whereas the bank seeks reliable customers to whom to send loan offers. The overall goal of the bank is the acceptance of the offer by the customer. Indeed, if the customer declines the offer, the time and the resources spent by the bank in generating the offer are wasted. In this scenario we define the KPI that the bank aims at optimising as the profit deriving from the loan, excluding the costs spent in the process execution. The KPI of the bank is hence composed of two parts: (i) a positive part that is the profit made by the bank if the loan offer is accepted by the customer, namely the interest rate of the loan, which we arbitrarily set as the \(15\%\) of the amount requested in the loan application; and (ii) a negative part proportional to the working time spent in the process, which we set as a cost of 36 euros/h.
For the generation of the four synthetic logs we define a simulation model \(\mathcal{M}\) inspired by the BPIC2012 process. The simulation model consists of the BPMN model in Figure 2, extended with additional information for the simulation aspects3 (case inter-arrival time, activity durations, routing probabilities, resource allocation and utilization, etc.). The main differences between the four synthetic logs concern their sizes, that is the number of traces they contain, and the probability that a loan application is (pre)accepted by the bank, which, in turn, affects the probability that a loan offer is accepted by a customer and hence the possibility for the bank to obtain a high KPI. Table 1 shows the details of the different event logs (including the real one). \(\mathcal{L}^{s}_{rare}\) and \(\mathcal{L}^{b}_{rare}\) are the logs with a low success rate, that is with a low percentage of traces achieving the acceptance of the offer by the customer (activity accept in Figure 2), and hence with a low average KPI value. This is achieved by imposing a low probability (less than 50%) in the model to have a pre-accepted loan (gateway 1 in the model). After this first gateway, the probability of obtaining the loan depends on different factors: the amount required in the loan application, the number of offers created and calls made by the bank during the process. The differences between these four synthetic logs will be central for answering **RQ3**.
Footnote 3: The complete BPMN model and all the simulation parameters used to define \(\mathcal{M}\) are available in the evaluation repository.
### Baseline definition
With the aim of comparing the proposed pipeline with state-of-the-art approaches directly learning from temporal data, that is Deep RL approaches (**RQ2** and **RQ3**), we trained an offline Deep Q-Network (DQN). Within a complex environment, where there can be a large number of states, it is difficult to compute the action-value function \(Q(s,a)\) for every possible state-action pair and thus function approximation methods have to be used to approximate \(q^{*}\). Following the architecture presented in (Chiorrini et al., 2020), we employed a Long Short-Term Memory (LSTM) network to approximate \(q^{*}\). Since LSTM-based neural networks preserve the sequentiality of the input, when encoding the traces for the DQN method, we encoded the frequency at each timestep, foregoing the last seen position encoding used to construct the MDP in Section 3.
Q-learning updates at iteration \(i\) are done through the use of the mean squared error:
\[L_{i}(\theta_{i})=\mathbb{E}_{s,a,r,s^{\prime}}\left[r+\gamma\max_{a^{\prime }}Q(s^{\prime},a^{\prime},\theta_{i}^{-})-Q(s,a,\theta_{i})\right]^{2} \tag{8}\]
where \(\gamma\) represents the discount factor applying penalties to future rewards, \(\theta_{i}\) represents the parameters of the LSTM model at iteration \(i\) and \(\theta_{i}^{-}\) represents the parameters of the target network at iteration \(i\) used to approximate \(\max_{a^{\prime}}q^{*}(s,a)\). In the literature, DQNs are trained by randomly sampling batches of experiences from the replay buffer during training to avoid overfitting. For trace executions, by sampling a single transition from one timestep to the other, the sequentiality of individual traces would not be picked up by the LSTM model. To overcome this and avoid overfitting, we sampled traces to be trained on sequentially but randomized the order of the traces in the log to avoid spurious correlations that might be learned from the order in which the traces are fed into the model. Since some traces in the event log have long executions in terms of the activities performed, a proper window size was used when building the state input for the LSTM model to preserve long-term dependencies. Since some actions are not available at specific timestamps, action masking is utilised to constrain the possible action space based on the last activity in the trace at the current timestep. For our baseline we implemented a masking function that, during the \(Q(s,a)\) value update, restricts the \(\max_{a^{\prime}}\) function in (8) to those actions that have been seen in the event log. Since we cannot interact with the environment and perform exploration, this allows us to restrict the space of possible choices to the ones observed in the data.
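A rough PyTorch sketch of such a baseline is given below; it is not the exact network of (Chiorrini et al., 2020), and the tensor shapes, window handling and masking convention are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class LSTMQNet(nn.Module):
    """LSTM-based Q-network: maps a window of encoded prefix steps to one
    Q-value per activity (an illustrative stand-in, not the exact baseline)."""
    def __init__(self, n_features, n_actions, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):                  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # use the last timestep's hidden state

def dqn_loss(q_net, target_net, batch, gamma=0.9):
    """Mean squared TD error of Eq. (8), with the target max restricted to
    actions observed in the log (action masking)."""
    s, a, r, s_next, mask = batch          # mask: (batch, n_actions), 1 = allowed
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_net(s_next)
        q_next = q_next.masked_fill(mask == 0, float("-inf"))
        target = r + gamma * q_next.max(dim=1).values
    return nn.functional.mse_loss(q_sa, target)
```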
### Evaluation methods
We design two different evaluation approaches to answer the research questions in Section 4.1: a simulation evaluation and a test log analysis.
_Simulation evaluation._ A well-known problem in evaluating recommender systems for a real-world process scenario is the difficulty of testing the recommendations provided by the recommender system (Dumas, 2021). To circumvent this issue we leverage synthetic logs. Indeed, this allows us to
test the policies trained on the synthetic logs, by simulating a testing situation via the model used to generate the log.
The simulation evaluation uses a modified version of the simulation model \(\mathcal{M}\) employed to generate the synthetic dataset in Section 4.2. This modified model \(\mathcal{M}^{\prime}\) uses the policies obtained by our method, or by the DQN baseline, to recommend the next activity for the bank. Each step of the simulation is classified as a multiple decision point, i.e. either the agent (bank) or the environment (customer) can act, or as a single decision point, where only the agent can perform an activity.
For the multiple decision points, as _gateways_ 3, 5 and 6 in Figure 2, the environment has priority over the agent, so it can act (or not act) according to the probability distribution of the original model \(\mathcal{M}\)4. The policy requires as input the prefix \(\sigma_{k}\) of the trace that is being simulated, and returns the recommended next activity. The model \(\mathcal{M}^{\prime}\) checks if the recommended next activity is allowed, i.e. the activity satisfies the BPMN model, otherwise the trace simulation ends with an exception and the reward is the one accumulated until then. In addition, during the simulation, it may happen that a trace never reaches the end of the process because the policy keeps recommending activities that have negative effects on the environment response, that is, it reduces to zero the probability of the environment to respond. In this case, we stop the trace simulation as before.
Footnote 4: As described in Section 4.2 the probability distribution of the environment response depends on the ongoing prefix \(\sigma_{k}\).
For every synthetic log (\(\mathcal{L}^{s},\mathcal{L}^{b},\mathcal{L}^{s}_{rare},\mathcal{L}^{b}_{rare}\)) we generate a new model \(\mathcal{M}^{\prime}\) for each policy trained on it (\(\pi_{0},\pi_{\text{lin}},\pi_{\text{step}},\pi_{\text{smooth}},\pi_{\text{DQN}}\)). Then we use these models to simulate the behaviour of the environment in response to the agent's actions recommended by the policy for 5 000 traces and compute their average reward, which is the metric we use to evaluate and compare the techniques. Although this type of setting offers a way to perform an online evaluation of the recommender system, it can be used only for synthetic scenarios in which the model is available. However, the behaviour of synthetic logs can be less complex when compared to the behaviour of real-life logs. We hence also perform a test log analysis evaluation.
_Test log analysis._ We perform an analysis of the test event log, which is the same analysis considered in the work [2]. This analysis is important as it allows us to evaluate the performance on the real-world log, for which we do not have the exact process model. However, to test the trustworthiness of this type of evaluation, we also apply it to the synthetic logs, so that for these datasets the two types of evaluation (simulation and test log analysis) can be compared.
We perform two types of log analysis. In the first one we compare the average KPI value of the traces in the test log that follow the optimal policy with the average KPI value of all the traces in the log. In the second log analysis we focus on evaluating recommendations for ongoing executions, i.e., we measure how much following the policy can improve the outcome of cases where some activity has already been executed. To do this we consider, for each trace in the test event log, all its prefixes and separately analyse each of them, as a potential ongoing execution. For each prefix \(\sigma_{k}\) of a trace \(\sigma\) in the test event log we compare the ground truth value of the KPI of interest of the trace \(\sigma\) once completed, against an estimation of the value of the KPI obtained following the optimal policy from that execution point forward. The estimation is obtained by averaging the KPI values of all the traces \(\tau\) in the log that have the same prefix \(\tau_{k}=\sigma_{k}\) and follow the optimal policy from there on.
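The estimation used in the second analysis can be sketched as follows (an illustration only; `test_traces` is assumed to be a list of `(trace, kpi)` pairs and `follows_policy(trace, k)` a caller-supplied predicate checking that every agent move after position `k` matches the policy recommendation):

```python
def estimated_kpi(prefix, test_traces, follows_policy):
    """Average KPI of the test traces that share `prefix` and whose agent moves
    after the prefix follow the recommended policy; None if no such trace exists."""
    k = len(prefix)
    kpis = [kpi for trace, kpi in test_traces
            if trace[:k] == prefix and follows_policy(trace, k)]
    return sum(kpis) / len(kpis) if kpis else None
```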
### Training procedure
Each dataset described in Table 1 was split into two parts: \(80\%\) was used to train the agent of the RL model and 20% for testing the retrieved policy 5.
Footnote 5: For the test log analysis (Section 4.4) we excluded from the test event log all those traces that end without the presence of an actual decision point for the agent, since the outcome of these traces cannot be modified by the decisions of the agent.
Each log was preprocessed as described in Section 3.1. In the k-means clusterization step we selected 100 clusters for every log; this choice was made after performing a silhouette analysis on every dataset. We trained the RL agent in order to obtain 4 different policies, each corresponding to one of the instantiations of the scaling function described in Section 3.2: \(\pi_{0}\) to the trivial case \(h^{0}=1\), \(\pi_{\text{lin}}\) to the linear function \(h^{\text{lin}}\), \(\pi_{\text{step}}\) to the step function \(h^{\text{step}}_{n_{t}=50}\) and \(\pi_{\text{smooth}}\) to the smooth function \(h^{\text{smooth}}_{\lambda=50}\).6
Footnote 6: \(n_{t}=50\) and \(\lambda=50\) were selected after an explorative analysis available at the evaluation repository.
## 5 Evaluation results
In this section we show the results of the evaluation and answer the research questions of Section 4.1. All the code and material are available at [https://tinyurl.com/repOptimalPolicy](https://tinyurl.com/repOptimalPolicy).
To answer **RQ1** we compared the four policies obtained with our method for each dataset (\(\pi_{0},\pi_{\text{lin}},\pi_{\text{step}},\pi_{\text{smooth}}\)). We compared them using the two methods described in Sect. 4.4. Figure 3 reports the average rewards obtained for the synthetic logs in the simulation evaluation. It can be noticed that all the policies provide a much higher average reward than the standard average reward of the log \(\mathcal{L}\) generated by the model \(\mathcal{M}\). However, among the four policies there is no definite winner, and their performances are actually close. To refine this analysis we report in Table 2 the difference in terms of average reward between pairs of policies, by highlighting in bold the statistically relevant differences (p-value \(\leq 0.05\)). For example, only one statistically
\begin{table}
\begin{tabular}{l c c c c c c} \hline Dataset & Trace \# & Variant \# & Event \# & Avg. trace length & Applications pre-accepted & Offers accepted \\ \hline BPIC2012 & 13087 & 4366 & 262200 & 20 & 56\% & 17\% \\ \(\mathcal{L}^{s}\) & 2000 & 704 & 29077 & 29 & 60\% & 23\% \\ \(\mathcal{L}^{s}_{rare}\) & 2000 & 496 & 22165 & 22 & 40\% & 16\% \\ \(\mathcal{L}^{b}\) & 10000 & 3083 & 149997 & 29 & 60\% & 24\% \\ \(\mathcal{L}^{b}_{rare}\) & 10000 & 2070 & 109652 & 22 & 40\% & 16\% \\ \hline \end{tabular}
\end{table}
Table 1: Datasets Description.
relevant difference was found for the log \(\mathcal{L}^{b}_{rare}\) between \(\pi_{\text{step}}\) and \(\pi_{0}\).
Figure 4 shows the average rewards of the optimal traces obtained for the five logs (synthetic and real) in the test log analysis. In this case the results are more varied and differences can be observed when compared to the results in Figure 3. Indeed, there are a few cases in which the optimal traces with respect to a policy actually have an average reward lower than the average reward of the test log. However, these latter results have to be interpreted cautiously, given the small number of samples used to compute the average reward. Although the analyses carried out do not allow us to identify the best scaling function instantiation, we can state that, by applying a scaling function, we usually obtain better results and that \(\pi_{\text{smooth}}\) seems to have steady performance across all the logs (**RQ1**).
Focusing on **RQ2**, in Figure 3 we can notice that DQN usually performs worse than our method in the simulation evaluation. More precisely, looking at Table 3 we can see that only for \(\pi_{0}\) in \(\mathcal{L}^{b}_{rare}\) and \(\pi_{\text{step}}\) in \(\mathcal{L}^{b}\) the difference with DQN is not statistically relevant. Instead, \(\pi_{\text{lin}}\) and \(\pi_{\text{smooth}}\) perform better than the baseline in every log.
The result of the test log analysis is more difficult to interpret. In fact, in Figure 4, there are only two test logs (\(\mathcal{L}^{s}\) and \(\mathcal{L}^{b}\)) with traces following the DQN policy, and these traces have a very low average reward. However this does not directly imply that the baseline performs poorly, as can be seen from the simulation evaluation.
To understand better the case of the real-world log BPIC2012 we performed the second type of event log analysis, which evaluates recommendations on ongoing executions. The average gained reward for every prefix length is shown in Figure 5. Notice that only at prefix length \(11\) do we get an estimated gain for DQN, and this gain is negative. It can also be noticed that for longer prefixes, DQN results start improving and, for prefixes longer than \(20\), even outperform the proposed method. The negative results observed in the two analyses could be attributed to the absence of optimal traces suggested by the DQN agent in the test log. This issue is particularly evident in the second analysis, where for shorter prefixes, the trace execution consistently deviated from the recommended actions. We stress that this kind of evaluation is limited, as it cannot evaluate optimal traces which are not present in the test log. The analyses carried out show that the proposed pipeline allows us to significantly outperform (on synthetic logs) or, in the worst case, to be comparable with existing approaches (**RQ2**).
Finally, concerning **RQ3**, by looking at Figure 3 it is
\begin{table}
\begin{tabular}{l c c c c} \hline \(\pi_{1}\) vs \(\pi_{2}\) & \(\mathcal{L}^{s}\) & \(\mathcal{L}^{s}_{rare}\) & \(\mathcal{L}^{b}\) & \(\mathcal{L}^{b}_{rare}\) \\ \hline \(\pi_{\text{lin}}\) vs \(\pi_{\text{smooth}}\) & 29 & **-152** & -45 & 7 \\ \(\pi_{0}\) vs \(\pi_{\text{smooth}}\) & -35 & -23 & -28 & -19 \\ \(\pi_{0}\) vs \(\pi_{\text{lin}}\) & **-64** & **129** & 17 & -26 \\ \(\pi_{\text{step}}\) vs \(\pi_{\text{smooth}}\) & 10 & -85 & **-120** & 33 \\ \(\pi_{\text{step}}\) vs \(\pi_{\text{lin}}\) & -19 & **67** & **75** & 26 \\ \(\pi_{\text{step}}\) vs \(\pi_{0}\) & **45** & -62 & **92** & **52** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison between policies for different scaling functions \(h\). Each value is the difference of the average reward computed with the simulation evaluation. The boldface values are statistically relevant differences, i.e. with p-value \(\leq 0.05\).
Figure 4: Average reward of optimal traces in the logs. The \(\mathcal{L}\) column represents the average reward of the entire test log. The numbers above each column correspond to the number of traces used to compute the mean.
Figure 3: Simulation evaluation results for the synthetic logs. The column \(\mathcal{L}\) corresponds to the average reward of the synthetic log generated by the model \(\mathcal{M}\), without the application of any policy.
Figure 2: The BPMN model used by \(\mathcal{M}\) to generate the synthetic logs and by \(\mathcal{M}^{\prime}\) to perform the simulation evaluation.
clear that, with respect to the size of the log used in training, our method is more robust than the DQN. Indeed, the performance for \(\mathcal{L}^{b}\) and \(\mathcal{L}^{b}_{rare}\) are close to the one for \(\mathcal{L}^{s}\) and \(\mathcal{L}^{s}_{rare}\) respectively. On the contrary, the performance of DQN greatly decreases for logs of small size. This effect is even more evident in the case of the _rare_ logs, i.e. those with low success rate. Indeed in \(\mathcal{L}^{s}\), DQN has poor but positive performance, whereas in \(\mathcal{L}^{s}_{rare}\) its performance is entirely negative.
This suggests that our method is more robust than the offline DQN baseline with respect to changes in the log size and success rate (**RQ3**). The reason is probably two-fold: (i) deep learning algorithms require more data to provide good results -- especially many examples of successful traces; (ii) the offline trait of our DQN baseline training does not allow the agent to explore traces not present in the log, limiting its capability to act successfully in unseen states.
## 6 Related work
The state-of-the-art works related to this paper pertain to two fields: Prescriptive Process Monitoring and Reinforcement Learning. The section is hence structured by first presenting the PPM related work and then the RL state-of-the-art works applied to process mining problems.
Several PPM techniques have been recently proposed in the literature [12]. Focusing on the type of recommended interventions, we can classify existing work in three groups: (i) the ones that recommend interventions to prevent or mitigate an undesired outcome [13, 14, 15, 16, 17]; (ii) the ones that recommend a resource allocation [11, 12]; (iii) the ones that recommend next activities to optimize a given KPI [13, 14, 15].
The approach presented in this paper falls under this small third family of prescriptive process monitoring approaches. The work in [13] discusses how the most likely behavior does not guarantee to achieve the desired business goal. As a solution to this problem, they propose and evaluate a prescriptive business process monitoring technique that recommends next best actions to optimize a specific KPI, i.e., the time. The work in [14] presents a data-mining driven concept of recommendation-based business process optimization supporting adaptive and continuously optimized business processes. The work in [14] discusses Process-aware Recommender (PAR) systems, in which a prescriptive-analytics component, in case of executions with a negative outcome prediction, recommends the next activities that minimize the risk of completing the process execution with a negative outcome. Similarly to this approach, the work in [1] takes the perspective of one of the actors of the process and aims at optimizing a domain-specific KPI of interest for this actor by leveraging an RL approach. However, in that work, the authors heavily rely on a manual pre-processing of the event log in order to extract the background knowledge needed to manually build the MDP. Differently from the works above, we aim at taking a multi-actor perspective by building the RL pipeline automatically from the event log using only the KPI of interest and the activity ownership.
In the literature, only few RL approaches have been proposed for facing problems in Process Mining. Silvander proposes using Q-Learning with function approximation via a deep Q network (DQN) for the optimization of business processes [10]. He suggests defining a so called decay rate to reduce the amount of exploration over time. The work in [1] instead apply a DQN for Predictive Process Monitoring tasks, by learning to predict next activity and execution times through the means of RL rather than classical supervised learning solutions. Huang et al. employ RL for the dynamic optimization of resource allocation in business process executions [10]. Metzger et al. propose an alarm-based approach to prevent and mitigate an undesired outcome [14]. They use online RL to learn when to trigger proactive process adaptations based on the reliability of predictions. Although all these works use RL in the process mining field, none of them use it for recommending the next actions to perform in order to optimize a certain KPI of interest, as in this work. Finally, some works have applied RL and Inverse RL approaches to recommend the next actions on temporal data [15] or on data constrained by temporal constraints [14].
## 7 Conclusion
In this paper we provide an AI based approach that learns, by means of Reinforcement Learning, an optimal policy only
Figure 5: Estimated average gained reward following the policy from a given prefix length for the BPIC2012 test event log.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline \(\pi_{1}\) vs \(\pi_{2}\) & \(\mathcal{L}^{s}\) & \(\mathcal{L}^{s}_{rare}\) & \(\mathcal{L}^{b}\) & \(\mathcal{L}^{b}_{rare}\) \\ \hline \(\pi_{0}\) vs \(\pi_{\text{DQN}}\) & **336** & **633** & **71** & 35 \\ \(\pi_{\text{lin}}\) vs \(\pi_{\text{DQN}}\) & **400** & **504** & **54** & **61** \\ \(\pi_{\text{step}}\) vs \(\pi_{\text{DQN}}\) & **381** & **571** & -21 & **87** \\ \(\pi_{\text{smooth}}\) vs \(\pi_{\text{DQN}}\) & **371** & **656** & **99** & **54** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Comparison between our policies and the DQN baseline. Each value is the difference of the average reward computed with the simulation evaluation. The boldface values are statistically relevant differences, i.e. with p-value \(\leq 0.05\).
from the observation of temporally annotated sequences of event data and recommends the best activities to carry on for optimizing a Key Performance Indicator of interest in a multi-actor setting. This is achieved first by learning an explicit Markov Decision Process (MDP) for the specific KPIs from the event log using clusterization techniques, and then by using RL training to learn the optimal policy.
The evaluation shows that the usage of scaling for the state-action value functions \(q\) is useful to solve the problem of the over-fitting on the training data from which the MDP is constructed (**RQ1**); it also shows that our approach compares with, and often overcomes, off-policy deep RL approaches especially when taking into account the log dimension and the popularity of successful traces in the event log (**RQ2** and **RQ3**). This can open the doors to the exploitation of white box RL techniques in datasets where only temporal execution data are available, which can be applied to temporally annotated data outside the Process Mining area.
|
2308.13123 | Multiscale modeling of thermal properties in Polyurethane incorporated
with phase change materials composites: A case study | Polyurethane (PU) is an ideal thermal insulation material due to its
excellent thermal properties. The incorporation of Phase Change Materials
(PCMs) capsules into Polyurethane (PU) has been shown to be effective in
building envelopes. This design can significantly increase the stability of the
indoor thermal environment and reduce the fluctuation of indoor air
temperature. We develop a multiscale model of a PU-PCM foam composite and study
the thermal conductivity of this material. Later, the design of materials can
be optimized by obtaining thermal conductivity. We conduct a case study based
on the performance of this optimized material to fully consider the thermal
comfort of the occupants of a building envelope with the application of PU-PCMs
composites in a single room. At the same time, we also predict the energy
consumption of this case. All the outcomes show that this design is promising,
enabling the passive design of building energy and significantly improving
occupants' comfort. | Bokai Liu, Weizhuo Lu, Xiaoyue Hu, Chao Zhang, Cuixia Wang, Yilin Qu, Thomas Olofsson | 2023-08-25T00:29:56Z | http://arxiv.org/abs/2308.13123v1 | Multiscale modeling of thermal properties in Polyurethane incorporated with phase change materials composites: A case study
###### Abstract
Polyurethane (PU) is an ideal thermal insulation material due to its excellent thermal properties. The incorporation of Phase Change Materials (PCMs) capsules into Polyurethane (PU) has been shown to be effective in building envelopes. This design can significantly increase the stability of the indoor thermal environment and reduce the fluctuation of indoor air temperature. We develop a multiscale model of a PU-PCM foam composite and study the thermal conductivity of this material. Later, the design of materials can be optimized by obtaining thermal conductivity. We conduct a case study based on the performance of this optimized material to fully consider the thermal comfort of the occupants of a building envelope with the application of PU-PCMs composites in a single room. At the same time, we also predict the energy consumption of this case. All the outcomes show that this design is promising, enabling the passive design of building energy and significantly improving occupants' comfort.
Polyurethane (PU), Phase Change Materials (PCMs), Thermal properties, Multiscale modelling, Building energy.
## 1 Introduction
The rapid increase in world energy use has raised concerns about supply difficulties, depletion of energy resources, and serious environmental impacts such as ozone layer depletion, global warming, and climate change [1]. The International Energy Agency has collected data on trends in energy consumption and on how increasing greenhouse gas emissions will intensify global warming, leading to extreme weather conditions around the world. To mitigate these challenges, the European Union (EU) aims to reduce carbon footprints by 80% to 95% below 1990 levels by 2050 [2]. Enhancing the energy performance of the building sector has been highlighted as a roadmap towards a competitive low-carbon economy by 2050, as 36% of total CO2 emissions come from this sector, where HVAC systems account for 50% of total final energy consumption in the EU. Improving energy efficiency has become an increasingly popular research topic. The application of thermal energy storage, i.e., phase change materials (PCMs), in buildings has attracted many researchers in recent decades to improve building energy efficiency. Embedding PCMs in the building envelope/building materials will increase the latent heat storage capacity, thereby improving the indoor thermal environment and comfort [3]. Compared with other building materials used for thermal energy storage, PCMs have many advantages, such as high heat of fusion, high energy storage density, and constant phase change temperature.
However, PCMs also suffer from disadvantages, for instance, nonideal thermal conductivity and leakage during phase transitions. These disadvantages of PCM can be improved by preparing shape-stable polymer composites with PCM inclusions and embedding them in building components to improve the performance of building envelope. Microencapsulation of PCMs with polymeric shells stands out as one of the best
confinement options for this application since it meets the requirements mentioned above and, additionally, polymers are cheap and have low density and thermal conductivity. Polyurethane (PU) rigid foams, as a polymer matrix that can contain PCMs, have been widely used for thermal insulation as the ultimate energy savers [4]. The air trapped within the honeycomb-like structure provides the passive insulation characteristics of the foam, in addition to the polyurethane's heat absorption capacity. Their advantages include a very low thermal conductivity (between 0.02 and 0.05 W/mK), high mechanical and chemical stability at both high and low temperatures, and the ability to form sandwich structures with various facer materials. Compared with other insulating materials, PU-PCMs are highly competitive.
In recent years, many papers have studied the combination of PCMs in a PU matrix. Ana M. Borreguero et al. studied PU foams incorporating different percentages of microcapsules containing Rubitherm® RT27 [5]. Ming You et al. studied the thermal properties of microencapsulated n-octadecane (MicroPCMs) with a styrene (St)-divinylbenzene (DVB) co-polymer shell [6]. Ahmet Alper Aydin et al. showed the feasibility of PU-PCM composites by direct utilization of a PCM and its compatibility with the applied PU formulation recipe in terms of improved thermal characteristics [7]. Chunguang Yang et al. proved in their experimental study that the thermal energy storage capacity of PU-PCM foam is enhanced significantly [8].
Most of these studies have focused on the synthesis methods and the thermal energy storage capacity of the composites, but few studies have addressed the thermal evaluation of PU-PCM foams in use. Based on the above situation, this paper focuses on the multi-scale modeling of PU-PCM composites across different scales, and conducts a case study to assess the impact of this composite on the energy efficiency of the building envelope.
This article is organized as follows. In the next section, we describe the multi-scale model for PU-PCM as well as the case study, which is based on the application of PU-PCM in a building envelope. Subsequently, we discuss the results before concluding the manuscript in the last section.
## 2 Methods
In this section, we describe our hierarchical multi-scale modeling, which is inspired by the model presented in our previous research [9-14]. The models for PU-PCMs at different scales are based on finite element method representative volume elements (FEM-RVEs) and are shown in Fig. 1. For each scale, we first need to identify a suitable RVE size to subsequently upscale the material parameters to the next higher length scale. All relational effective parameters are summarized in Table 1. A case study is then used to demonstrate the application.
### Multi-scale modelling of PU-PCMs
In our study there are mainly two different length scales - the meso-scale and the macroscopic scale. The key point is to link the material behaviour and engineering applications. A continuum model is employed at the meso-scale, which consists of the equivalent inclusions embedded in the polymer matrix. Based on the micromechanics approach, the volume elements should be sufficiently large to fully describe the statistics of the phase arrangement for the material we considered, i.e., they should be Representative Volume Elements (RVEs). In practice, smaller volume elements must typically be used due to limitations in available computational cost. Hence, the proper size of this RVE should be well defined, keeping it as small as possible while remaining representative. In this implementation, a Python script is applied to generate the microstructure of the RVE based on an algorithm which has been described in detail elsewhere. It
\begin{table}
\begin{tabular}{c c c} \hline Effective & Effective & Parameters \\ scale & length & \\ \hline \multirow{4}{*}{Meso} & \multirow{4}{*}{\(\mu\)m} & Thermal conductivity of inclusion \\ & & Thermal conductivity of matrix \\ \cline{1-1} & & Interface resistance \\ \cline{1-1} & & Radius of inclusion \\ \cline{1-1} \cline{3-3} Macro & mm & Volume fraction \\ \hline \end{tabular}
\end{table}
Table 1: Relational effective parameters for each scale.
Figure 1: Multi-scale modelling scheme
avoids overlapping of the equivalent spheres and ensures the imposition of periodicity. The FEM package ABAQUS is used for all simulations. A typical discretization of the RVE with quadratic tetrahedral elements is illustrated in Fig. 2.
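A minimal stand-in for such a generation script (not the authors' code) is a random sequential placement of equal spheres with a periodic minimum-image overlap check; for high inclusion volume fractions a more sophisticated packing algorithm would be needed:

```python
import numpy as np

def generate_rve(n_spheres, radius, box=1.0, seed=0):
    """Place non-overlapping spheres of equal radius in a periodic cubic box."""
    rng = np.random.default_rng(seed)
    centres = []
    while len(centres) < n_spheres:
        candidate = rng.uniform(0.0, box, size=3)
        ok = True
        for other in centres:
            d = np.abs(candidate - other)
            d = np.minimum(d, box - d)          # minimum-image (periodic) distance
            if np.linalg.norm(d) < 2.0 * radius:
                ok = False
                break
        if ok:
            centres.append(candidate)
    return np.array(centres)
```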
The field equation for heat conduction is given by:
\[C_{f}\frac{\partial\theta}{\partial t}+\nabla\cdot\mathbf{q}-Q=0\]
where \(\theta\) is the temperature change, \(\mathbf{q}\) the heat flux, \(Q\) the body heat source, and \(C_{f}\) the heat capacity.
For quasi-steady problems, we have:
\[\nabla\cdot\mathbf{q}-Q=0\text{ in }\Omega\]
with natural boundary conditions:
\[\mathbf{q}\cdot\mathbf{n}=\bar{q}\text{ on }\Gamma_{q}\]
where \(\mathbf{n}\) is the unit normal and \(\bar{q}\) is the prescribed value of heat flux at \(\Gamma_{q}\). The weak form for heat conduction is given by: Find \(\theta\in\nu\) such that:
\[-\int_{\Omega}\mathbf{q}\cdot\nabla(\delta\theta)\,\mathrm{d}\Omega=-\int_{\Gamma_{q}}\bar{q}\,\delta\theta\,\mathrm{d}\Gamma+\int_{\Omega}Q\,\delta\theta\,\mathrm{d}\Omega\]
Two different heat fluxes are applied at the two ends of the RVE, as shown in Fig. 3. This generates a continuous heat flow through the entire RVE. The resulting temperature gradient, together with Fourier's law, is then used to compute the macroscopic thermal conductivity:
\[\mathbf{q}=-\mathbf{\kappa}\cdot\nabla\theta\]
with
\[\mathbf{\kappa}=\begin{bmatrix}\kappa_{xx}&0&0\\ 0&\kappa_{yy}&0\\ 0&0&\kappa_{zz}\end{bmatrix}\]
where \(\mathbf{\kappa}\) is the thermal conductivity tensor of the composite material. By applying the boundary conditions on the different RVE edges, the isotropy of the thermal conductivity, i.e. \(\kappa_{xx}=\kappa_{yy}=\kappa_{zz}\), can be confirmed. The output of the meso-scale model is the macroscopic thermal conductivity of the composite.
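As a sketch of the corresponding post-processing step, the homogenized conductivity in each direction can be recovered from the volume-averaged heat flux and temperature gradient of the RVE solution via Fourier's law; in practice these averages would be extracted from the ABAQUS field output. The numerical values below are placeholders, not results from the paper.

```python
import numpy as np

def effective_conductivity(q_avg, grad_T_avg):
    """Fourier's law q = -kappa * grad(T) applied to volume-averaged RVE
    fields gives the homogenized conductivity per loading direction."""
    return -q_avg / grad_T_avg

# Placeholder volume-averaged flux (W/m^2) and temperature gradient (K/m)
# for loadings along x, y and z.
q_avg = np.array([-120.0, -118.5, -121.3])
grad_T_avg = np.array([500.0, 495.0, 505.0])

kappa = effective_conductivity(q_avg, grad_T_avg)
print("kappa_xx, kappa_yy, kappa_zz =", kappa)
print("isotropy check (relative spread):",
      (kappa.max() - kappa.min()) / kappa.mean())
```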
At the macroscopic scale, a larger homogenized structure is considered to account for uncertainties. Cubic RVEs with different thermal properties, extracted from the meso-scale simulations, are randomly distributed over the macro-scale, see Fig. 3. Although it is in principle possible to use FEM at the macro-scale as well, we employ the rule of mixtures for computational efficiency, which is given by
\[\bar{x}=\frac{\sum_{i}\ X_{i}P_{i}}{\sum_{i}\ P_{i}}\]
where \(X_{i}\) is the thermal property of the \(i\)-th cubic RVE simulation and \(P_{i}\) is the corresponding weight.
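This weighted average can be computed directly; the short sketch below assumes the weights \(P_{i}\) are simply supplied per cubic RVE, and the listed conductivities are placeholders.

```python
import numpy as np

def rule_of_mixtures(X, P):
    """Weighted average of per-RVE properties X with weights P."""
    X, P = np.asarray(X, float), np.asarray(P, float)
    return (X * P).sum() / P.sum()

# Placeholder per-RVE conductivities (W/m.K) and equal weights.
print(rule_of_mixtures([0.23, 0.24, 0.25, 0.24], [1, 1, 1, 1]))
```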
### PU-PCMs application: Case study
We chose a single-family house of a type common in European regions. The house has two floors, and its layout is shown in Fig. 4, Fig. 5, and Fig. 6: a living room, the first and second bedrooms on the first floor, a stairwell, a bathroom, a storage room, and the third and fourth bedrooms on the second floor. The house is located in Umeå, Sweden (63°49′30″N 20°15′50″E), a typical city in northeast Sweden and the capital of Västerbotten County. Umeå has a subarctic climate (Dfc), bordering on a humid continental climate (Dfb), with short and fairly warm summers. Winters are lengthy and freezing but usually milder than in areas at the same latitude with a more continental climate. The average January temperature is about -7°C (19°F), and the average July temperature is about 16°C (61°F).
Based on the above location and climate information, the main consideration in this case study is the energy consumption for heating the house in the Umeå area. The application of PU-PCM can store and release energy through phase change while reducing the U-value of the house. In this scenario, we
Figure 2: A typical discretization of the RVE with quadratic tetrahedral elements
Figure 3: Heat flux in Representative Volume Element
choose to add an additional layer of PU-PCM to the interior walls and ceiling of the house. The engineering parameters of the PU-PCM are provided by the preceding multi-scale modeling. Our simulation uses the hourly temperature trend in Umeå in 2022 as the external temperature load. The change of the indoor temperature is mainly the response to the external temperature change and to the fuel heating.
## 3 Results and Discussion
In this section we present the engineering parameters of the PU-PCM composites and their performance in the case-study application.
Fig. 7 shows the result of the FEM-based RVE simulation. After applying a thermal load in the corresponding direction, we obtain the temperature distribution of the response. Using Fourier's law, the effective thermal conductivity of the PU-PCM composite can then be computed; the result is shown in Table 2.
Then we import the obtained engineering parameters of the PU-PCM as modeling variables into the single-family-house application. A physical simulation of the entire house is established through partial differential equations. We use Runge-Kutta methods to obtain the annual energy consumption for the whole year of 2022 and the hourly indoor temperature change throughout the year. The annual energy consumption results are shown in Table 3 and Table 4, comparing two models without and with the PU-PCM interlayer as part of the building envelope. From the results, it can be seen that PU-PCM provides significant energy savings for single-family houses in Umeå under 2022 weather conditions. Compared with the predicted data, the energy efficiency is improved by 2.6% in this case study, which means there will be a corresponding reduction in \(CO_{2}\) emissions at the same time.
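The whole-building simulation relies on the authors' PDE model and the Umeå weather data, neither of which is reproduced here; the sketch below is a minimal single-zone lumped-capacitance stand-in showing how a classical Runge-Kutta (RK4) step can propagate the indoor temperature response to the external temperature and a thermostat-controlled heating input. The envelope parameters (UA, C, heater power, setpoint) are illustrative assumptions, not values from the case study.

```python
import numpy as np

def indoor_temperature(T_out, UA=150.0, C=2.0e7, T_set=21.0, P_max=6000.0, dt=3600.0):
    """Hourly indoor temperature of a single-zone building model,
    dT/dt = (UA*(T_out - T) + P_heat) / C, integrated with classical RK4.
    A simple thermostat supplies heating power up to P_max below the setpoint."""
    def dTdt(T, T_o):
        P_heat = P_max if T < T_set else 0.0
        return (UA * (T_o - T) + P_heat) / C
    T = np.empty(len(T_out))
    T[0] = T_set
    for k in range(len(T_out) - 1):
        To = T_out[k]
        k1 = dTdt(T[k], To)
        k2 = dTdt(T[k] + 0.5 * dt * k1, To)
        k3 = dTdt(T[k] + 0.5 * dt * k2, To)
        k4 = dTdt(T[k] + dt * k3, To)
        T[k + 1] = T[k] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return T

# Synthetic outdoor temperature for one winter day (hourly values).
hours = np.arange(24)
T_out = -7.0 + 3.0 * np.sin(2 * np.pi * hours / 24)
print(indoor_temperature(T_out)[:6])
```

Integrating the delivered heating power over the year would then give the purchased energy compared in Tables 3 and 4.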
Figure 4: The application of PU-PCMs in building envelope
Figure 5: Building Exterior for Case Study
Figure 6: Architectural plans for the single-family house
Figure 7: The temperature distribution of PU-PCMs RVE
\begin{table}
\begin{tabular}{l l l l} \hline \hline Effective scale & Effective length & Parameter & Value \\ \hline Meso & \(\mu\)m & Thermal conductivity of inclusion & 0.56 W/m\(\cdot\)K \\ & & Thermal conductivity of matrix & 0.036 W/m\(\cdot\)K \\ & & Interface resistance & 35 MW \(m^{2}/K\) \\ \hline Macro & mm & Volume fraction & 20\% \\ & & Effective thermal conductivity & 0.24 W/m\(\cdot\)K \\ \hline \hline \end{tabular}
\end{table}
Table 2: Thermal conductivity of PU-PCMs
Table 3 and Table 4: Annual purchased energy (kWh) and energy use per floor area (kWh/m\({}^{2}\)) for the house without and with the PU-PCM interlayer.
Figure 9: Daily Energy usage with PU-PCMs in Building envelope
Figure 10: Annual thermal comfort hours in 1\({}^{st}\) bedroom
Figure 8: Daily Energy usage plot without PU-PCMs
Figure 11: Annual thermal comfort hours in 1\({}^{st}\) bedroom with PU-PCMs enhanced
8.48% improvement of the acceptable thermal comfort range; meanwhile, the unacceptable time decreased by 3.79%.
* 3. In areas where substantial summer cooling is not considered, such as Umeå, the application of this composite material can effectively extend the indoor thermal comfort period, especially during the hot summer period.
## 5 Acknowledgements
We gratefully acknowledge the support of the Kempe Foundation Sweden (Kempestiftelserna - Stiftelserna J.C. Kempes och Seth M. Kempes minne) and EU project H2020-AURORAL (Architecture for Unified Regional and Open digital ecosystems for Smart Communities and Rural Areas Large scale application).
We also gratefully acknowledge the support of The Arctic Centre's strategic funds for research trips and conference attendances.
The computations handling were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at High Performance Computing Center North (HPC2N) partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
|
2308.03330 | Expediting Neural Network Verification via Network Reduction | A wide range of verification methods have been proposed to verify the safety
properties of deep neural networks ensuring that the networks function
correctly in critical applications. However, many well-known verification tools
still struggle with complicated network architectures and large network sizes.
In this work, we propose a network reduction technique as a pre-processing
method prior to verification. The proposed method reduces neural networks via
eliminating stable ReLU neurons, and transforming them into a sequential neural
network consisting of ReLU and Affine layers which can be handled by the most
verification tools. We instantiate the reduction technique on the
state-of-the-art complete and incomplete verification tools, including
alpha-beta-crown, VeriNet and PRIMA. Our experiments on a large set of
benchmarks indicate that the proposed technique can significantly reduce neural
networks and speed up existing verification tools. Furthermore, the experiment
results also show that network reduction can improve the availability of
existing verification tools on many networks by reducing them into sequential
neural networks. | Yuyi Zhong, Ruiwei Wang, Siau-Cheng Khoo | 2023-08-07T06:23:24Z | http://arxiv.org/abs/2308.03330v2 | # Expediting Neural Network Verification via Network Reduction
###### Abstract
A wide range of verification methods have been proposed to verify the safety properties of deep neural networks ensuring that the networks function correctly in critical applications. However, many well-known verification tools still struggle with complicated network architectures and large network sizes. In this work, we propose a network reduction technique as a pre-processing method prior to verification. The proposed method reduces neural networks via eliminating stable ReLU neurons, and transforming them into a sequential neural network consisting of ReLU and Affine layers which can be handled by the most verification tools. We instantiate the reduction technique on the state-of-the-art complete and incomplete verification tools, including \(\alpha\beta\)-crown, VeriNet and PRIMA. Our experiments on a large set of benchmarks indicate that the proposed technique can significantly reduce neural networks and speed up existing verification tools. Furthermore, the experiment results also show that network reduction can improve the availability of existing verification tools on many networks by reducing them into sequential neural networks.
Neural Network Verification, Network Reduction, Pre-processing
## I Introduction
Deep neural networks have been widely applied in real-world applications. At the same time, it is indispensable to guarantee the safety properties of neural networks in critical scenarios. As neural networks are trained to be larger and deeper, researchers have deployed various techniques to speed up the verification process: for example, over-approximating the whole network behavior [1, 2, 3, 4], deploying GPU implementations [5, 6, 7], or merging neurons in the same layer in an over-approximate manner so as to reduce the number of neurons [8, 9].
This work aims to further accelerate the verification process by "pre-processing" the tested neural network with ReLU activation function and constructing a _reduced_ network with a _smaller_ number of neurons and connections. We propose the _network reduction_ technique, which returns a reduced network (named _REDNet_) that captures the _exact_ behavior of the original network rather than over-approximating it. Therefore verification over the reduced network is equivalent to the original verification problem yet requires less execution cost.
The REDNet could be instantiated on different verification techniques and is beneficial for complex verification processes such as branch-and-bound based (bab) or abstract refinement based methods. For example, branch-and-bound based methods [6, 7, 10, 11] generate a set of sub-problems to verify the original problem. Once deployed with REDNet before the branch-and-bound phase, all the sub-problems are built on the reduced network, thus achieving overall speed gain. For abstract refinement based methods, like [12, 13, 4], they collect and encode the network constraints and refine individual neuron bounds via LP (linear program) or MILP (mixed-integer linear program) solving. This refinement process could be applied to a large set of neurons, and the number of constraints in the network encoding can be significantly reduced with the deployment of REDNet.
We have implemented our proposed network reduction technique in a prototypical system named REDNet (the reduced network), which is available at [https://github.com/REDNet-verifier/IDNN](https://github.com/REDNet-verifier/IDNN). The experiments show that the number of ReLU neurons in the reduced network could be up to 95 times smaller than in the original network in the best case and 10.6 times smaller on average. We instantiated REDNet on the state-of-the-art complete verifiers \(\alpha,\beta\)-CROWN [14] and VeriNet [15] and the incomplete verifier PRIMA [12], and assessed the effectiveness of REDNet over a wide range of benchmarks. The results show that, with the deployment of REDNet, \(\alpha,\beta\)-CROWN could verify more properties given the same timeout and gain an average \(1.5\times\) speedup. Also, VeriNet with REDNet verifies 25.9% more properties than the original and can be \(1.6\times\) faster. REDNet also assists PRIMA to gain a \(1.99\times\) speedup and verify 60.6% more images on average. Lastly, REDNet is constructed with a simple network architecture, making it amenable for existing tools to handle network architectures that they could not support previously.
We summarize the contributions of our work as follows:
* We define stable ReLU neurons and deploy the state-of-the-art bounding method to detect such stable neurons.
* We propose a _network reduction_ process to _remove_ stable neurons and generate REDNet that contains a smaller number of neurons and connections, thereby boosting the efficiency of existing verification methods.
* We prove that the generated REDNet preserves the input-output equivalence of the original network.
* We instantiate the REDNet on several state-of-the-art
verification methods. The experiment results indicate that the same verification processes execute faster on the REDNet than on the original network; it can also respond accurately to tougher queries the verification of which were timeout when running on the original network.
* REDNet is constructed with a simple network architecture, so it can assist various tools in handling more networks that they previously failed to support.
* Lastly, we export REDNet as a fully-connected network in ONNX, an open format that is widely accepted by most verification tools.
## II Preliminaries
A _feedforward neural network_ is a function \(\mathcal{F}\) defined as a directed acyclic graph \((\mathcal{V},\mathcal{E})\) where every node \(L_{i}\) in \(\mathcal{V}\) represents a layer of \(|L_{i}|\) neurons and each arc \((L_{i},L_{j})\) in \(\mathcal{E}\) denotes that the outputs of the neurons in \(L_{i}\) are inputs of the neurons in \(L_{j}\). For each layer \(L_{i}\in\mathcal{V}\), \(in(L_{i})=\{L_{j}|(L_{j},L_{i})\in\mathcal{E}\}\) is the set of _preceding layers_ of \(L_{i}\) and \(out(L_{i})=\{L_{j}|(L_{i},L_{j})\in\mathcal{E}\}\) denotes the set of _succeeding layers_ of \(L_{i}\). If the set \(in(L_{i})\) of preceding layers is non-empty, then the layer \(L_{i}\) represents a computation operator, e.g. the ReLU and GEMM operators; otherwise \(L_{i}\) is an input layer of the neural network. In addition, \(L_{i}\) is an _output layer_ of the neural network if \(out(L_{i})\) is empty.
In this paper, we consider the ReLU-based neural network with one input layer and one output layer. Note that multiple input layers (and output layers) can be concatenated as one input layer (and one output layer). Then a neural network is considered as a _sequential neural network_ if \(|in(L_{i})|=1\) and \(|out(L_{i})|=1\) for all layers \(L_{i}\) in the neural network except the input and output layers.
We use a vector \(\vec{a}\) to denote the input of a neural network \(\mathcal{F}\), and the _input space_\(I\) of \(\mathcal{F}\) includes all possible inputs of \(\mathcal{F}\). For any input \(\vec{a}\in I\), \(L_{i}(\vec{a})\) is a vector denoting the outputs of neurons in a layer \(L_{i}\) given this input \(\vec{a}\). The output \(\mathcal{F}(\vec{a})\) of the neural network is the output of its output layer.
The _neural network verification problem_ is to verify that for all possible inputs \(\vec{a}\) from a given input space \(I\), the outputs \(\mathcal{F}(\vec{a})\) of a neural network \(\mathcal{F}\) must satisfy a specific condition. For example, the robustness verification problem is to verify that for all inputs within a specified input space, the neural network's output must satisfy that the value of the neuron corresponding to the ground truth label is greater than the values of the other neurons in the output layer.
## III Overview
In this section, we present a simple network \(\mathcal{F}\) and illustrate how to construct the reduced network via the deletion of stable neurons, where stable neurons refer to those ReLU neurons whose inputs are completely non-negative or non-positive.
The example is a fully-connected network with ReLU activation function \(y=\text{max}(0,x)\) as shown in Figure 1, where the connections are recorded in the linear constraints near each affine neuron. We set the input space \(I\) to be \([-1,1]\times[-1,1]\), and apply one of the state-of-the-art bound propagators CROWN [16] to propagate the input intervals to other neurons of the network. The deployment of CROWN returns us the concrete bounds for intermediate neurons, which are displayed next to the corresponding neurons in Figure 1.
From this initial computation, we observe that four ReLU neurons are _stable_: \(x_{8}\) is stably deactivated as its input \(x_{3}\leq 0\) yields \(x_{8}=0\); \(x_{9},x_{10},x_{11}\) are stably activated as their inputs are always greater or equal to zero, yielding \(x_{9}=x_{4},x_{10}=x_{5},x_{11}=x_{6}\). Given the observation, we could _remove_ those stable ReLU neurons together with their input neurons: we could directly eliminate neurons \(x_{3},x_{8}\); and delete \(x_{4},x_{5},x_{6},x_{9},x_{10},x_{11}\) while _connecting_\(y_{1},y_{2}\) directly to the preceding neurons of \(x_{4},x_{5},x_{6}\), which are \(x_{1},x_{2}\).
After removal, the connections are updated as in Figure 2. The new affine constraint of \(y_{1}\) is computed as follows:
\[y_{1} = x_{8}-x_{9}+x_{10}+x_{11}-x_{12}\] \[= 0-x_{4}+x_{5}+x_{6}-x_{12}\] \[= x_{1}-x_{2}+1-x_{12}\]
Similarly, the computation of \(y_{2}\) is updated as follows:
Figure 1: The example network with initial concrete bounds
Figure 2: The network connections after neuron removal
\[y_{2} = x_{8}+x_{9}+x_{10}+x_{11}+x_{12}\] \[= 0+x_{4}+x_{5}+x_{6}+x_{12}\] \[= 3x_{1}+x_{2}+7+x_{12}\]
The above computation only involves equality replacement; therefore, the two networks in Figure 2 and Figure 1 functions _the same_ given the specified input space \(I\). However, the network architecture has been modified, and the output neurons are now defined over its preceding layer together with the input layer. To preserve the network architecture without introducing new connections between layers, we _merge_ the stably activated neurons into _a smaller set_ of neurons instead of directly deleting them.
The final reduced network is shown below in Figure 3, where we transform the connection between \(y_{1},y_{2}\) and \(x_{1},x_{2}\) in Figure 2 into two merged neurons \(m_{1}=x_{1}-x_{2}+1;m_{2}=3x_{1}+x_{2}+7\). Since \(m_{2}\) is stably activated given the input space, we have \(m_{4}=m_{2}\), thus \(y_{2}=m_{4}+x_{12}\), which equals the definition of \(y_{2}\) in Figure 2. To further enforce \(m_{1}\) to remain stably activated, we increase the bias of \(m_{1}\) to 2, which leads to \(m_{3}=m_{1}\), thus \(y_{1}=m_{3}-x_{12}-1\). Therefore, the final reduced network in Figure 3 remains a fully-connected network, but the stably deactivated neuron \(x_{3}\) has been removed, and the original set of stably activated neurons \(x_{4},x_{5},x_{6}\) is merged into a smaller set of stably activated neurons \(m_{1},m_{2}\). Note that the connection between \(y_{1},y_{2}\) and \(m_{1},m_{2}\) is actually an identity matrix:
\[\begin{bmatrix}y_{1}\\ y_{2}\end{bmatrix}=\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\cdot\begin{bmatrix}m_{1}\\ m_{2}\end{bmatrix}+\begin{bmatrix}-1\\ 1\end{bmatrix}\cdot x_{12}+\begin{bmatrix}-1\\ 0\end{bmatrix} \tag{3}\]
Therefore, the number of merged neurons depends on the number of neurons in the succeeding layer (e.g., the output layer in this example). Generally speaking, the number of output neurons is significantly smaller than the number of intermediate neurons. We therefore conduct the reduction in a backward manner from the last hidden layer to the first hidden layer, and the experiments in section VI show that a major proportion of the neurons can be deleted, which boosts verification efficiency and therefore improves precision within a given execution timeout.
## IV Stable ReLU Neurons Reduction
### _Stable ReLU neurons_
Given a ReLU layer \(X\) in a neural network and an input \(\vec{a}\), the layer \(X\) has exactly one preceding layer \(Y\) (i.e. \(in(X)=\{Y\}\)) and the _pre-activation_ of the \(k^{th}\) ReLU neuron \(x\) in \(X\) is the output \(y(\vec{a})\) of the \(k^{th}\) neuron \(y\) in \(Y\). For simplicity, we use \(\hat{x}(\vec{a})=y(\vec{a})\) to denote the pre-activation of \(x\).
**Definition 1**.: A ReLU neuron \(x\) in a neural network is _deactivated_ w.r.t. (with respect to) an input space \(I\) if \(\hat{x}(\vec{a})\leq 0\) for all inputs \(\vec{a}\in I\), and \(x\) is _activated_ w.r.t. the input space \(I\) if \(\hat{x}(\vec{a})\geq 0\) for all \(\vec{a}\in I\). Then \(x\) is _stable_ w.r.t. \(I\) if \(x\) is deactivated or activated w.r.t. \(I\).
It is NP-complete to check whether the output of a neural network is greater than or equal to 0 [17]. In addition, we can add an additional ReLU layer behind the output layer of the neural network where the output of the original neural network becomes the pre-activation of ReLU neurons, therefore, it is straightforward that checking the stability of ReLU neurons w.r.t. an input space is NP-hard.
**Theorem 1**.: It is NP-hard to check whether a ReLU neuron is stable w.r.t. an input space \(I\).
Many methods have been proposed to compute the lower and upper bounds of the pre-activation of ReLU neurons. Usually, these methods use different constraints to tighten the pre-activation bounds, such as linear relaxations, split constraints, global cuts and output constraints. For detecting the stability of ReLU neurons w.r.t. an input space, we only consider the methods which employ the linear relaxations of ReLU neurons alone, as the other constraints may filter out inputs from the input space; e.g. a ReLU neuron is forced to be stably deactivated regardless of the input space if the propagator uses a split constraint enforcing that its pre-activation value is always non-positive. We enumerate various bound propagation methods in Table I. The methods using other constraints are not suitable for detecting the stability of intermediate neurons; for example, \(\beta\)-Crown employs split constraints.
In our experiments, we use the GPU-based bound propagation method CROWN to detect stable ReLU neurons, which leads to a reasonably significant reduction with a small time cost, as displayed in Table III.
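Only sound pre-activation bounds are needed for this classification. The sketch below uses plain interval bound propagation on a small fully-connected ReLU network to count provably deactivated and activated neurons; it is a simplified stand-in for the CROWN bounds used in the paper, and the random weights are placeholders.

```python
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate an interval vector [l, u] through x -> W x + b."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ l + Wn @ u + b, Wp @ u + Wn @ l + b

def count_stable(weights, biases, x, eps):
    """Count ReLU neurons whose pre-activation is provably non-positive
    (deactivated) or non-negative (activated) over the eps-box around x."""
    l, u = x - eps, x + eps
    deact = act = total = 0
    for i, (W, b) in enumerate(zip(weights, biases)):
        l, u = interval_affine(l, u, W, b)
        if i < len(weights) - 1:              # hidden layer followed by ReLU
            total += l.size
            deact += int((u <= 0).sum())
            act += int((l >= 0).sum())
            l, u = np.maximum(l, 0.0), np.maximum(u, 0.0)  # ReLU on intervals
    return deact, act, total

rng = np.random.default_rng(0)
dims = [784, 256, 256, 10]
weights = [rng.normal(0, 0.05, (dims[i + 1], dims[i])) for i in range(3)]
biases = [rng.normal(0, 0.05, dims[i + 1]) for i in range(3)]
x = rng.uniform(0, 1, 784)
print(count_stable(weights, biases, x, eps=0.01))
```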
### _ReLU layer reduction_
As illustrated in the example given in section III, after computation of concrete neuron bounds, we detect and handle those stable ReLU neurons in the following ways:
\begin{table}
\begin{tabular}{|c|c|} \hline Methods for Stability Detection & Other Methods \\ \hline \hline Interval [18], DeepZ/Symbolic [2, 19, 20] & \(\beta\)-Crown [21] \\ CROWN [16], FCrown [22], \(\alpha\)-Crown [23] & GCP-Crown [24] \\ DeepPoly [25], KPoly [26], PRIMA [12] & ARENA [13] \\ RefineZono [27], OptC2V [28] & DeepSRGR [4] \\ SDP-Relaxation [29, 30, 31] & \\ LP/Lagrangian Dual [32, 33, 34] & \\ \hline \end{tabular}
\end{table}
Table I: Bound propagation methods
Figure 3: The final network after reduction (REDNet), where the dashed connection means the coefficient equals to 0
* For a stably deactivated ReLU neuron whose input value is always non-positive, it is always evaluated as \(0\) and thereby will be directly deleted from the network as it has no effect on the actual computation;
* For the stably activated ReLU neurons (whose input values are always non-negative) in the same layer, we reconstruct this set of stably activated neurons into _a smaller_ set of stably activated neurons, as we reduced \(x_{4},x_{5},x_{6}\) into \(m_{1},m_{2}\) in section III.
**Reconstruction of Stably Activated Neurons.** Figure 2 illustrates that the deletion of stably activated neurons requires creating new connections between the preceding and succeeding neurons of the deleted neurons. We follow the convention that every intermediate ReLU layer directly connects to only one preceding layer and one succeeding layer, each of which conducts a linear computation (we defer the details of how to simplify a complicated network with multiple preceding/succeeding connections into such a simpler architecture to section V).
An example of a ReLU layer pending reduction is shown in Figure 4, where \(M_{1}\) indicates the linear connection between layers \(V\) and \(X\), and \(M_{2}\) indicates the connection between layers \(Y\) and \(Z\). Biases are recorded in \(B_{1}\) and \(B_{2}\) respectively. Suppose that the uppermost \(k\) neurons in layer \(Y\) are stably activated, and we delete them together with their inputs in layer \(X\) from Figure 4. After deletion, we need to generate a new connection between layers \(Z\) and \(V\). As stably activated ReLU neurons behave as identity functions, the new connection matrix between layers \(Z\) and \(V\) can be computed from the existing connection matrices \(M_{1}\) (size \(m\times q\)) and \(M_{2}\) (size \(n\times m\)). Assuming that \(M[0:k,:]\) indicates slicing the matrix to contain only the first \(k\) rows and \(M[:,0:k]\) means taking only the leftmost \(k\) columns of the matrix, we define a matrix \(M^{\prime}_{VZ}\) with size \(n\times q\) that is computed as:
\[M^{\prime}_{VZ}=M_{2}[:,0:k]\cdot M_{1}[0:k,:] \tag{4}\]
We consider this new connection \(M^{\prime}_{VZ}\) with size \(n\times q\) as:
\[M^{\prime}_{VZ}=M_{I}\cdot M^{\prime}_{VX}, \tag{5}\]
where \(M^{\prime}_{VX}\) _equals_ \(M^{\prime}_{VZ}\) and functions as the affine connection between layer \(V\) and the _newly constructed_ neurons in layer \(X\); \(M_{I}\) denotes an \(n\times n\) identity matrix and is the affine connection between layer \(Z\) and the _newly constructed_ neurons in layer \(Y\), as shown in Figure 5. Here, the additional weight matrix between layers \(V\) and \(Z\) is actually computed as \(M^{\prime}_{VZ}=M_{I}\cdot ReLU()\cdot M^{\prime}_{VX}\). For Equation 5 to hold, we need to make sure that the ReLU function between \(M^{\prime}_{VX}\) and \(M_{I}\) acts as an identity, which means the outputs of \(M^{\prime}_{VX}\) must be non-negative, i.e. the newly constructed ReLU neurons are stably activated. So we compute the concrete bounds of these outputs and add an additional bias \(B\) to enforce non-negativity, as we did for neuron \(m_{1}\) in section III. This additional bias will be canceled out at layer \(Z\) with a \(-B\) offset.
Note that we conduct this reduction in a backward manner from the last hidden layer (whose succeeding layer is the output layer that usually consists of a very small number of neurons, e.g. 10) to the first hidden layer. Therefore, upon reduction of layers \(X\) and \(Y\), layer \(Z\) has already been reduced and contains a small number \(n\) of neurons. In the end, the \(k\) stably activated neurons will be reduced into \(n\) stably activated neurons and we obtain a smaller-sized affine layer with \(m-k+n\) neurons, where \(k\) is usually much bigger than \(n\). Therefore, we are able to observe a significant size reduction as shown in Table III.
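A numpy sketch of this reconstruction is given below. It assumes, as in Figure 5, that the stably activated neurons correspond to the first \(k\) rows of \(M_{1}\), and it takes a sound lower-bounding routine (interval arithmetic in the simplest case) as a parameter to choose the shift \(B\); the helper names are illustrative.

```python
import numpy as np

def reduce_activated_block(M1, B1, M2, B2, k, lower_bound_fn):
    """Merge the first k (stably activated) ReLU neurons of a layer into
    n new stably activated neurons, following Figure 5.
    lower_bound_fn(W, b) must return a sound lower bound of W v + b over
    the input space of layer V (e.g. from interval arithmetic or CROWN)."""
    n = M2.shape[0]
    # New affine rows feeding the merged neurons.
    M_vx = M2[:, :k] @ M1[:k, :]                  # shape (n, q)
    B_prime = M2[:, :k] @ B1[:k]
    # Shift B so that the merged pre-activations stay non-negative.
    lb = lower_bound_fn(M_vx, np.zeros(n))
    B = np.maximum(-lb, 0.0)
    # Assemble the reduced affine layer X' and the succeeding layer Z'.
    M1_new = np.vstack([M_vx, M1[k:, :]])         # (n + m - k, q)
    B1_new = np.concatenate([B, B1[k:]])
    M2_new = np.hstack([np.eye(n), M2[:, k:]])    # (n, n + m - k)
    B2_new = B2 + B_prime - B
    return M1_new, B1_new, M2_new, B2_new
```

Applying this routine to each hidden layer, starting from the last one, reproduces the backward reduction order described above.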
**Lemma 1**.: The reduction process preserves the input-output equivalence of the original network. That is, for any input \(\vec{a}\in I\), \(\mathcal{F}(\vec{a})\equiv\mathcal{F}^{\prime}(\vec{a})\) where \(\mathcal{F}\) is the original network and \(\mathcal{F}^{\prime}\) is the reduced one.
Proof.: The reduction process operates on ReLU neurons that are stable w.r.t. the input space \(I\). Specifically, (i) Stably _deactivated_ ReLU neurons are always evaluated as \(0\) and can be deleted directly as they have no effect on the subsequent computation; (ii) Stably _activated_ ReLU neurons are reconstructed in a way that their functionality are preserved before (Figure 4) and after (Figure 5) reconstruction.
For any \(\vec{a}\in I\), \(V(\vec{a})\) is the output of Layer \(V\) and the output of Layer \(Z\) is computed as \(Z(\vec{a})=M_{2}\cdot ReLU(M_{1}\cdot V(\vec{a})+B_{1})+B_{2}\) in Figure 4. we decompose \(Z(\vec{a})-B_{2}\) as
\[M_{2}[:,0:k]\cdot ReLU(M_{1}[0:k,:]\cdot V(\vec{a})+B_{1}[0:k]) \tag{6}\] \[+M_{2}[:,k:m]\cdot ReLU(M_{1}[k:m,:]\cdot V(\vec{a})+B_{1}[k:m]) \tag{7}\]
Without loss of generality, we assume the uppermost \(k\) neurons in layer \(Y\) are stably activated. Formula 6 thus simplifies to \(M_{2}[:,0:k]\cdot M_{1}[0:k,:]\cdot V(\vec{a})+M_{2}[:,0:k]\cdot B_{1}[0:k]=M_ {I}\cdot M^{\prime}_{VX}\cdot V(\vec{a})+B^{\prime}\), where \(M^{\prime}_{VX}=M_{2}[:,0:k]\cdot M_{1}[0:k,:]\), \(B^{\prime}=M_{2}[:,0:k]\cdot B_{1}[0:k]\) and \(M_{I}\) is an identity matrix.
Figure 4: Layer \(X\) and \(Y\) with pending reduction, together with its preceding layer \(V\) and succeeding affine layer \(Z\).
Figure 5: The block after reduction of stably activated neurons. \(M_{1}[k:m,:]\) contains the last \(m-k\) rows of \(M_{1}\), while \(M_{2}[:,k:m]\) takes the rightmost \(m-k\) columns of \(M_{2}\). \(B^{\prime}\) is computed as \(M_{2}[:,0:k]\cdot B_{1}[0:k]\). The _newly constructed_ neurons are dashed and colored in blue.
Furthermore, we compute an additional bias \(B\) to ensure that \(M^{\prime}_{VX}\cdot V(\vec{a})+B\geq 0\) for all \(\vec{a}\in I\). Thus Formula 6 finally simplifies to:
\[M_{I}\cdot ReLU(M^{\prime}_{VX}\cdot V(\vec{a})+B)-B+B^{\prime} \tag{8}\]
Based on Formula 8, we obtain \(Z(\vec{a})=M_{I}\cdot ReLU(M^{\prime}_{VX}\cdot V(\vec{a})+B)+M_{2}[:,k:m]\cdot ReLU (M_{1}[k:m,:]\cdot V(\vec{a})+B_{1}[k:m])+B^{\prime}+B_{2}-B\), which equals to the computation conducted in Figure 5. Thus, the network preserves input-output equivalence after reduction.
## V Neural Network Simplification
In section IV, we describe how reduction is conducted on a sequential neural network, where each intermediate layer only connects to one preceding Linear layer and one succeeding Linear layer. In this paper, a Linear layer refers to a layer whose output is computed via linear computation. Nonetheless, there exist many complicated network architectures (e.g., residual networks) that are not sequential. In order to handle a wider range of neural networks, we propose a neural network simplification process to transform complex network architectures into simplified sequential neural networks and then conduct reduction on the simplified network.
We now introduce how to transform a complex ReLU-based neural network into a sequential neural network consisting of Linear and ReLU layers so that stable ReLU neurons can be reduced. Note that we only consider the neural network layers that can be encoded as Linear and ReLU layers; further discussion about this can be found in section VII.
The network simplification process involves two main steps (shown in Figure 6): (i) Encode various layers as SumLinear blocks and ReLU layers (we defer the definition of SumLinear block to subsection V-A); (ii) Transform SumLinear blocks into Linear layers. Here, Linear layers refer to layers that conduct linear computation.
### _Encode various layers into SumLinear blocks_
A _SumLinear block_ is a combination of a set of Linear layers and a Sum layer such that the Linear layers are preceding layers of the Sum layer, where the output of the Sum layer is the element-wise sum of its inputs. The output of the SumLinear block is equal to the element-wise sum of the outputs of the Linear layers, and the preceding layers of the SumLinear block include the preceding layers of all the Linear layers. Any Linear layer can be encoded as a SumLinear block by adding a Sum layer behind the Linear layer. A main difference between them is that the SumLinear block can have more than 1 preceding layer.
Many neural network layers can be directly transformed into SumLinear blocks, such as Conv, GeMM, Add, Sub, Concat, Reshape, Split, Squeeze, Unsqueeze, \(\cdots\), Flatten, MatMul, and BatchNormalization layers used in ONNX models.1 Note that the Linear layer only has one preceding layer, while the Add and Concat layers can have more than one preceding layer; hence, they cannot be directly encoded as a Linear layer (this motivates the introduction of SumLinear blocks).
Footnote 1: In general, Maxpooling can be encoded as Conv and ReLU layers with existing tools such as DNNV [35]. Note that \(max(x,y)=ReLU(x-y)+y\).
Figure 7 shows a SumLinear block encoding a Concat layer with 2 predecessors \(X\) and \(Y\). The biases of the two Linear layers are zero and the concatenation of their weights is an identity matrix. Thus, each neuron of the layers \(X,Y\) is mapped to a neuron of the Sum layer. Assume \(|X|=|Y|=1\). Their weights are represented by the matrices \(\begin{bmatrix}1\\ 0\end{bmatrix}\) and \(\begin{bmatrix}0\\ 1\end{bmatrix}\), which can be concatenated into an identity matrix \(\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\).
In the same spirit, an Add layer can also be encoded as a SumLinear block, as shown in Figure 8. Assuming that \(|X|\) and \(|Y|\) are each equal to 2, the weights of the two Linear layers are represented by identity matrices \(\begin{bmatrix}1&0\\ 0&1\end{bmatrix}\).
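The weights used in Figures 7 and 8 can be written down mechanically; the sketch below builds them for predecessors of arbitrary widths, with function names chosen for illustration only.

```python
import numpy as np

def concat_as_sumlinear(widths):
    """Concat of predecessors with the given widths: one Linear layer per
    predecessor, whose weights stack into an identity of the total width."""
    total, offset, layers = sum(widths), 0, []
    for w in widths:
        W = np.zeros((total, w))
        W[offset:offset + w, :] = np.eye(w)   # place the block on the diagonal
        layers.append((W, np.zeros(total)))   # (weight, bias) of one Linear layer
        offset += w
    return layers

def add_as_sumlinear(width, n_preds):
    """Add of n_preds predecessors of equal width: each Linear layer is the
    identity, and the Sum layer adds their outputs element-wise."""
    return [(np.eye(width), np.zeros(width)) for _ in range(n_preds)]

# |X| = |Y| = 1 reproduces the matrices [[1],[0]] and [[0],[1]] of Figure 7.
for W, _ in concat_as_sumlinear([1, 1]):
    print(W)
```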
Figure 8: Encode an Add layer into a SumLinear block.
Figure 6: The procedure of neural network reduction. The encoding session is described in subsection V-A; the transformation is discussed in subsection V-B; and the reduction part is explained in section IV.
Figure 7: Encode a Concat layer into a SumLinear block.
### _Transform SumLinear blocks into Linear Layers_
In this subsection, we show how to encode SumLinear blocks as Linear layers. First, we need to transform SumLinear blocks into normalized SumLinear blocks. Here, a SumLinear block \(L\) is _normalized_ if it does not have any Linear layer whose preceding layer is a SumLinear block (i.e. \(in(L)\) does not contain SumLinear blocks), and the Linear layers in \(L\) all have different preceding layers. For example, the SumLinear block given in Figure 7 is normalized if its preceding layers \(X\) and \(Y\) are not SumLinear blocks.
**SumLinear Block Normalization.** If a SumLinear block \(L^{\prime}\) includes a Linear layer \(L^{\prime}_{j}\) with a weight \(M^{\prime}_{j}\) and a bias \(B^{\prime}_{j}\) such that the preceding layer of \(L^{\prime}_{j}\) is another SumLinear block \(L^{\prime\prime}\) including \(k\) Linear layers \(L^{\prime\prime}_{1},\cdots L^{\prime\prime}_{k}\) with weights \(M^{\prime\prime}_{1},\cdots,M^{\prime\prime}_{k}\) and biases \(B^{\prime\prime}_{1},\cdots,B^{\prime\prime}_{k}\), then we can normalize \(L^{\prime}\) by replacing \(L^{\prime}_{j}\) with \(k\) new Linear layers \(L_{1},\cdots L_{k}\) where for any \(1\leq i\leq k\), the layer \(L_{i}\) has the same preceding layer as that of \(L^{\prime\prime}_{i}\), and the weight and bias of \(L_{i}\) are computed as:
\[M_{i} =M^{\prime}_{j}\cdot M^{\prime\prime}_{i} \tag{9}\] \[B_{i} =\begin{cases}B^{\prime}_{j}+M^{\prime}_{j}\cdot B^{\prime\prime }_{i}&\text{if }i=1\\ M^{\prime}_{j}\cdot B^{\prime\prime}_{i}&\text{otherwise}\end{cases} \tag{10}\]
During the normalization, if the succeeding layers of the block \(L^{\prime\prime}\) become empty, then \(L^{\prime\prime}\) is directly removed.
In addition, if two Linear layers \(L_{a},L_{b}\) in a SumLinear block have the same preceding layer, then in normalization, we can replace them by one new Linear layer \(L_{c}\) such that \(L_{c}\) has the same preceding layer as them and the weight (and bias) of \(L_{c}\) is the sum of the weights (and biases) of \(L_{a},L_{b}\).
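A compact sketch of the weight and bias composition in Equations 9 and 10, together with the merging of Linear layers that share a preceding layer, is shown below; representing each Linear layer as a (predecessor id, weight, bias) triple is an illustrative simplification of the actual data structures.

```python
import numpy as np

def normalize_through(Mj, Bj, inner_layers):
    """Push a Linear layer (Mj, Bj), whose predecessor is a SumLinear block
    made of inner_layers = [(pred_id, M_in, B_in), ...], through that block
    (Equations 9 and 10); the bias Bj is attached to the first new layer."""
    new_layers = []
    for i, (pred, M_in, B_in) in enumerate(inner_layers):
        M_new = Mj @ M_in
        B_new = Mj @ B_in + (Bj if i == 0 else 0.0)
        new_layers.append((pred, M_new, B_new))
    return new_layers

def merge_same_predecessor(layers):
    """Replace Linear layers of a SumLinear block that share a predecessor
    by a single layer whose weight and bias are their sums."""
    merged = {}
    for pred, M, B in layers:
        if pred in merged:
            M0, B0 = merged[pred]
            merged[pred] = (M0 + M, B0 + B)
        else:
            merged[pred] = (M, B)
    return [(p, M, B) for p, (M, B) in merged.items()]
```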
**Lemma 2**.: SumLinear block normalization does not change the functionality of a neural network.
Proof.: Let \(\vec{a}^{\prime\prime}_{i}\) be any input of a Linear layer \(L^{\prime\prime}_{i}\) in the block \(L^{\prime\prime}\) where \(L^{\prime\prime}\) is the preceding layer of \(L^{\prime}_{j}\). Thus, the input of \(L^{\prime}_{j}\) (called \(\vec{a}^{\prime}_{j}\)) equals to \(\sum_{i=1}^{k}(M^{\prime\prime}_{i}\cdot\vec{a}^{\prime\prime}_{i}+B^{\prime \prime}_{i})\). Then the output of \(L^{\prime}_{j}\) is \(B^{\prime}_{j}+\sum_{i=1}^{k}(M^{\prime}_{j}\cdot M^{\prime\prime}_{i}\cdot \vec{a}^{\prime\prime}_{i}+M^{\prime}_{j}\cdot B^{\prime\prime}_{i})\) which is equal to the sum of the outputs of the layers \(L_{1},\cdots L_{k}\). Therefore, replacing \(L^{\prime}_{j}\) with \(L_{1},\cdots L_{k}\) does not change the output of the SumLinear block \(L^{\prime}\).
If the succeeding layers of \(L^{\prime\prime}\) become empty, then removing \(L^{\prime\prime}\) does not affect the outputs of other layers and the network.
In addition, the sum of the outputs of the two linear layers \(L_{a},L_{b}\) in a SumLinear Block with the same preceding layer is equal to the output of the new layer \(L_{c}\), thus, the output of the block does not change after replacing \(L_{a},L_{b}\) with \(L_{c}\).
So SumLinear block normalization does not change the functionality of a neural network.
We next show how to encode normalized SumLinear blocks as Linear layers.
**Linear Layer Construction**. First, we say that a ReLU layer \(L_{i}\) is _blocked_ by a SumLinear block \(L\) if \(L\) is the only succeeding layer of \(L_{i}\). Then, we use \(\mathcal{R}_{L}\) to denote the set of ReLU layers blocked by the SumLinear block \(L\). Let \(\mathcal{P}_{L}\) include other preceding layers of \(L\) which are not in \(\mathcal{R}_{L}\). If \(L\) is normalized, then \(L\) and the set of ReLU layers in \(\mathcal{R}_{L}\) can be replaced by a Linear layer \(L^{l}\), a ReLU layer \(L^{r}\) and a new SumLinear block \(L^{s}\) such that
* the weight \(M^{l}\) (the bias \(B^{l}\)) of the Linear layer \(L^{l}\) is a concatenation (the sum) of the weights (the biases) of the Linear layers in \(L\); the preceding layer of \(L^{l}\) is \(L^{r}\), and \(L^{l}\) has the same succeeding layers as \(L\);
* the SumLinear block \(L^{s}\) encodes a concatenation of layers in \(\mathcal{P}_{L}\) and the preceding layers of layers in \(\mathcal{R}_{L}\);
* \(L^{s}\) is the preceding layer of \(L^{r}\).
Additionally, in order to make sure that the outputs of the layers in \(\mathcal{P}_{L}\) can pass through the ReLU layer \(L^{r}\), the neurons in \(L^{r}\) which connect to the layers in \(\mathcal{P}_{L}\) are enforced to be activated by adding an additional bias \(B\) to a Linear layer in \(L^{s}\) and subtracting \(M^{l}\cdot B\) from the bias of \(L^{l}\).
**Lemma 3**.: Linear layer construction does not change the functionality of a neural network.
Proof.: The pre-activation of \(L^{r}\) is the output of \(L^{s}\) that equals to \(B\) plus the concatenation of the outputs of layers in \(\mathcal{P}_{L}\) and the pre-activation of Layers in \(\mathcal{R}_{L}\). This ensures that the output of \(L^{r}\) equals to \(B\) plus the concatenation (call it \(\vec{a}\)) of the outputs of layers in \(\mathcal{P}_{L}\) and \(\mathcal{R}_{L}\). Next, the output of \(L^{l}\) equals to \(M^{l}\cdot(B+\vec{a})+B^{l}-M^{l}\cdot B=M^{l}\cdot\vec{a}+B^{l}\) which is equal to the output of original layer \(L\).
In addition, \(L\) is the only succeeding layer of layers in \(\mathcal{R}_{L}\), so replacing \(\mathcal{R}_{L}\), \(L\) with the layers \(L^{s}\), \(L^{r}\), \(L^{l}\) does not change the functionality of the neural network.
**Network simplification.** We use algorithm 1 to transform ReLU-based neural networks \((\mathcal{V},\mathcal{E})\) into a sequential neural network consisting of Linear and ReLU layers. At line 1, the function \(Initialization(\mathcal{V},\mathcal{E})\) encodes all layers in \(\mathcal{V}\) as SumLinear blocks and ReLU layers. Between line 3 and line 8, the algorithm repeatedly selects the last SumLinear block \(L\) in \(\mathcal{V}\) and reconstructs \(L\) into Linear layers, where a SumLinear block is _the last_ block if there is no path from it to another SumLinear block. \((\mathcal{V},\mathcal{E})\) only has 1 output layer, and the Linear and ReLU layers only have 1 preceding layer; thus, there is only one last SumLinear block.
```
Input: A neural network \((\mathcal{V},\mathcal{E})\)
Output: A sequential neural network
1  \(\mathcal{V},\mathcal{E}\gets Initialization(\mathcal{V},\mathcal{E})\);
2  while \((\mathcal{V},\mathcal{E})\) has SumLinear blocks do
3      Let \(L\) be the last SumLinear block in \((\mathcal{V},\mathcal{E})\);
4      \(\mathcal{V},\mathcal{E},L\gets Normalization(\mathcal{V},\mathcal{E},L)\);
5      if \(|in(L)|>1\) then
6          \(\mathcal{V},\mathcal{E}\gets LinearLayerConstruction(\mathcal{V},\mathcal{E},L)\);
7      else
8          \(\mathcal{V},\mathcal{E}\gets Linearization(\mathcal{V},\mathcal{E},L)\);
9  return \((\mathcal{V},\mathcal{E})\);
```
**Algorithm 1** Neural Network Simplification
At line 4, the function \(Normalization(\mathcal{V},\mathcal{E},L)\) is used to normalize the last SumLinear block \(L\). If the normalized \(L\) has more than one preceding layer (i.e. \(|in(L)|>1\)), then the function \(LinearLayerConstruction(\mathcal{V},\mathcal{E},L)\) is used to replace \(L\) with the layers \(L^{l},L^{r},L^{s}\) introduced in the Linear layer construction (at line 6); otherwise the function \(Linearization(\mathcal{V},\mathcal{E},L)\) is used to directly replace \(L\) with the only Linear layer included in \(L\) (at line 8).
In the rest of this subsection, we show that line 6 in algorithm 1 can only be visited at most \(|\mathcal{V}|\) times, thus, the algorithm can terminate and generate an equivalent sequential neural network consisting of Linear and ReLU layers.
**Lemma 4**.: Assume \((\mathcal{V},\mathcal{E})\) is a neural network consisting of Linear, ReLU layers and SumLinear blocks and \(L\) is the last SumLinear block in \(\mathcal{V}\). If \(|in(L)|>1\) and \(in(L)\) does not have SumLinear blocks and Linear layers, then \(\mathcal{R}_{L}\) is not empty.
Proof.: \((\mathcal{V},\mathcal{E})\) only has one output layer and all layers behind \(L\) have at most one preceding layer, thus, a path from any layer before \(L\) to the output layer must pass \(L\).
Let \(L_{i}\) be the last ReLU layer in \(in(L)\). If \(L_{i}\) had a succeeding layer \(L_{j}\) such that \(L_{j}\neq L\), then \(L\) would be in all paths from \(L_{j}\) to the output layer, and there would be a ReLU layer in \(in(L)\) included in a path from \(L_{j}\) to \(L\), which would mean that \(L_{i}\) is not the last layer in \(in(L)\), a contradiction. Hence, \(L_{i}\) is in \(\mathcal{R}_{L}\) and \(\mathcal{R}_{L}\neq\emptyset\).
Based on lemma 4, if \(|in(L)|>1\), then \(|\mathcal{R}_{L}|\geq 1\), thus, the number of ReLU layers before the last SumLinear block in \(\mathcal{V}\) is decreased after replacing \(L\) and the layers in \(\mathcal{R}_{L}\) with the layers \(L^{l},L^{r},L^{s}\) introduced in the Linear layer construction where \(L^{s}\) becomes the last SumLinear block in \(\mathcal{V}\). So we can get that algorithm 1 can terminate and generate a neural network consisting of Linear and ReLU layers.
**Theorem 2**.: Algorithm 1 can terminate and generate a neural network consisting of Linear and ReLU layers.
Proof.: From lemma 4, we know that line 6 in algorithm 1 reduces the number of ReLU layers before the last SumLinear block in \(\mathcal{V}\); therefore, it can only be visited at most \(|\mathcal{V}|\) times. Note that SumLinear block normalization (at line 4) does not affect ReLU layers and the layers behind \(L\).
Then line 8 can directly replace all SumLinear blocks having one preceding layer in \(\mathcal{V}\) with Linear layers. Therefore, algorithm 1 can terminate and return a neural network consisting of Linear and ReLU layers.
**Theorem 3**.: Our constructed REDNet is input-output equivalent to the original network given the input space \(I\).
Proof.: (Sketch.) Our reduction technique contains two steps: (i) network simplification presented in section V; (ii) stable ReLU neuron reduction described in section IV. Each step is designed deliberately to preserve input-output equivalence.
_Simplification equivalence._ Function \(Initialization(\mathcal{V},\mathcal{E})\) at line 1 in algorithm 1 encodes ONNX layers into a uniform network representation; such encoding preserves input-output equivalence. Then lemma 2 and lemma 3 show that line 4 and line 6 do not change network functionality. In addition, line 8, replacing a SumLinear Block with the only Linear layer in the block, also does not change network output. Therefore, algorithm 1 can construct a sequential neural network that has the same functionality as the original neural network.
_Reduction equivalence._ The proof is given in lemma 1.
### _Illustrative example of network simplification_
In this subsection, we use a simple network block to illustrate how to perform algorithm 1 on a non-sequential structure (Figure 9(a)) to get a sequential neural network consisting of Linear and ReLU layers (Figure 9(b)). In Figure 9, each rectangular node (including \(n_{1},n_{2},n_{3},n_{4}\)) represents a set of neurons whose values are derived from the preceding connected node(s) and the connections between them. Note that red-colored rectangular nodes are ReLU nodes that represent the output neurons of the ReLU layer; blue nodes are convolutional nodes; the black node is an Add layer. The connections between nodes are represented with directed edges, and the connected functions are displayed near the edges (e.g. conv1, ReLU). Symbol \(\oplus\) represents concatenation.
Firstly, we apply function \(Initialization(\mathcal{V},\mathcal{E})\) at line 1 to encode Figure 9(a) as SumLinear blocks and ReLU layers, where the weights and biases of each Linear layer are displayed above the layer. We name the two ReLU nodes \(n_{1},n_{3}\) as ReLU1, ReLU2 respectively.
Figure 10: Network in Figure 9(a) encoded with SumLinear blocks and ReLU layers
Figure 9: The simplification of a non-sequential block
Then we take the last SumLinear block from Figure 10 and normalize this block (Figure 11(a)) and obtain the normalized block as in Figure 11(b). The whole network is now updated as Figure 12.
At this step, we notice that ReLU layer ReLU2 is _blocked_ by the last SumLinear block, and ReLU1 is _not blocked_ as it has another path to a subsequent ReLU layer. Therefore, we perform the Linear layer construction at line 6 and obtain the network in Figure 13.
Lastly, we take out the last SumLinear block in Figure 13 and perform normalization to obtain Figure 14. In Figure 14, the last SumLinear block includes two Linear layers having the same preceding layer ReLU1, which requires further normalization.
The final architecture of the network is given in Figure 15, where we have \(\begin{bmatrix}\text{conv1-w}\\ \text{identity}\end{bmatrix}=\text{conv1-w}\oplus\text{identity}\). Now the original network has been simplified into a sequential one.
## VI Experiments
In this section, we present our experimental results of instantiating the network reduction technique on \(\alpha,\beta\)-CROWN [14], VeriNet [15] and PRIMA [12] to show that, given the _same_ verification problem, the _same_ verification algorithm runs _faster_ on the reduced network than on the original network, which gives us confidence in the ability of our method to enhance the efficiency of existing verification methods. Furthermore, the simple architecture of REDNet allows existing verification tools that only support a limited set of network benchmarks to handle more networks.
### _Experiment Setup_
The evaluation machine has two 2.40GHz Intel(R) Xeon(R) Silver 4210R CPUs with 384 GB of main memory and a NVIDIA RTX A5000 GPU.
**Evaluation Benchmarks.** The evaluation datasets include MNIST [36] and CIFAR10/CIFAR100 [37]. The MNIST dataset contains hand-written digits with 784 pixels each, while CIFAR10/CIFAR100 contain color images with 3072 pixels each. We chose fully-connected, convolutional and residual networks of various sizes from two well-known benchmarks: the academic ERAN system [38] and VNNCOMP 2021/2022 (International Verification of Neural Networks Competition) [39, 40]. The number of activation layers (#Layers), the number of ReLU neurons (#Neurons), and the trained defense of each network are listed in Table II, where a trained defense refers to a defense method against adversarial samples that improves robustness. Please note that "Mixed" means mixed training, which combines adversarial training and a certified defense training loss. This can lead to an excellent balance between model clean accuracy and robustness, and is beneficial for obtaining higher verified accuracy [41].
**Verification Properties.** We conduct robustness analysis, where we determine if the classification result of a neural
Figure 11: Normalization of the last SumLinear block
Figure 14: The network after the second normalization
Figure 12: The network after the first normalization
Figure 13: The network after the Linear layer construction. For simplicity in showing the weight matrices, we assume that ReLU2 and ReLU1 each have one neuron; “-biases/-weights” are abbreviated as “-b/-w” respectively.
Figure 15: The sequential network after the third normalization
network - given a set of slightly perturbed images derived from the original image (input specification) - remains the same as the ground truth label obtained from the original unperturbed image (output specification). The set of images is defined by a user-specified parameter \(\epsilon\), which perturbs each pixel \(p_{i}\) to take any value in the intensity interval \([p_{i}-\epsilon,p_{i}+\epsilon]\). Therefore, the input space \(I\) is \(\bigtimes_{i=1}^{n}[p_{i}-\epsilon,p_{i}+\epsilon]\). In our experiment, we acquire the verification properties from the provided vnnlib files [44] that record the input and output specifications, or via a self-specified \(\epsilon\). We aim to speed up the analysis process for those _properties that are tough to verify_; hence we _filter out_ falsified properties. We obtain around 30 properties for each tested network, as enumerated in Table II.
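For concreteness, one such robustness property can be assembled in a few lines; the sketch below builds the \(\epsilon\)-box input specification and a margin matrix \(C\) such that the property holds iff \(C\cdot\text{logits}>0\). The image tensor is a random placeholder, and the clipping to \([0,1]\) is an assumption about the valid pixel range.

```python
import numpy as np

def robustness_property(image, eps, true_label, n_classes=10):
    """Input spec: the eps-box around the image (clipped to the valid range).
    Output spec: matrix C whose rows encode (true-label logit) - (other logit),
    so the property holds iff every entry of C @ logits is positive."""
    lower = np.clip(image - eps, 0.0, 1.0)
    upper = np.clip(image + eps, 0.0, 1.0)
    C = -np.eye(n_classes)
    C[:, true_label] += 1.0
    C = np.delete(C, true_label, axis=0)   # drop the trivial (all-zero) row
    return lower, upper, C

lower, upper, C = robustness_property(np.random.rand(784), eps=0.02, true_label=3)
print(lower.shape, upper.shape, C.shape)
```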
### _Network reduction results_
Table III shows the size of the reduced networks, where the bound propagation methods CROWN and \(\alpha\)-CROWN are used to compute concrete bounds and detect stable neurons. Here we present the number of neurons in the original network and, for the two methods, the average size after reduction (column "AvgN") and the average reduction time (column "AvgT"). We encountered an out-of-memory problem when running \(\alpha\)-CROWN on network C\_100\_Large, thus we mark the result as "-".
The table shows that a significant number of neurons can be reduced within a reasonable time budget by leveraging the concrete bounds returned by CROWN. Therefore, we use CROWN as our bound propagator for the rest of the experiments. On average, the reduced networks are \(10.6\times\) smaller than the original networks.
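To illustrate how the concrete bounds translate into the reduction statistics above, the following sketch (our own toy helper, not the REDNet implementation) classifies ReLU neurons from their pre-activation bounds and counts how many must be kept.

```python
import numpy as np

def classify_neurons(lower, upper):
    """Classify ReLU neurons by their pre-activation bounds.

    lower >= 0: stably active (ReLU acts as the identity);
    upper <= 0: stably inactive (output is constantly zero);
    otherwise the neuron is unstable and has to be kept.
    """
    stably_active = lower >= 0
    stably_inactive = upper <= 0
    unstable = ~(stably_active | stably_inactive)
    return stably_active, stably_inactive, unstable

# Toy pre-activation bounds for one layer
lower = np.array([-1.0, 0.2, -3.0, -0.5])
upper = np.array([ 2.0, 1.5, -0.1,  0.4])
_, _, unstable = classify_neurons(lower, upper)
print(unstable.sum(), "unstable neurons kept out of", lower.size)
```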
Figure 16(a) shows the reduction-ratio distribution, where each dot \((\alpha,\beta)\) means that the reduction ratio is greater than \(\beta\) on \(\alpha\) percent of the properties. The reduction ratio can be up to 95 times in the best case and greater than 20 times on 10% of the properties. Figure 16(b) gives the size distribution of the reduced networks. Each dot \((\alpha,\beta)\) means that the reduced networks have at most \(\beta\) ReLU neurons on \(\alpha\) percent of the properties. We can see that on more than 94% of the properties, there are at most 8000 ReLU neurons in the reduced networks.
### _Instantiation on \(\alpha,\beta\)-Crown_
\(\alpha,\beta\)-CROWN is GPU-based and was the winning verifier in VNNCOMP 2021 [39] and VNNCOMP 2022 [40]; its methodology is based on a linear bound propagation framework combined with branch-and-bound. We first instantiate our technique on \(\alpha,\beta\)-CROWN, and we name the new system \(\alpha,\beta\)-CROWN-R. We set the verification timeout to 300 seconds; if the tool fails to terminate within the timeout, we deem the result inconclusive. The results are listed in Table IV, where we explicitly enumerate the number of timeout properties, the number of verified properties, and the average execution time of \(\alpha,\beta\)-CROWN (column \(\alpha,\beta\)-**CROWN-O**) and our instantiated system (column \(\alpha,\beta\)-**CROWN-R**) on the properties where both methods terminate within the timeout.2
Footnote 2: When the original method times out or fails to execute for all properties, e.g. C\_100\_Med in Table V and M\_ConvBig in Table VI, the average time is computed on the properties where our method can terminate within the timeout.
From the results, we observe that \(\alpha,\beta\)-CROWN-R can verify tough properties that failed to be verified within 300 seconds by \(\alpha,\beta\)-CROWN-O. This indicates that our reduction pre-processing does not only benefit easy verification problems but also helps verify more difficult properties within a decent time. In general, \(\alpha,\beta\)-CROWN-R verifies 11 more properties and boosts the efficiency of \(\alpha,\beta\)-CROWN-O with an average \(1.52\times\) speedup over all 12 networks. The average
| **Network** | **Type** | **#Layers** | **#Neurons** | **Defense** | **#Property** |
| --- | --- | --- | --- | --- | --- |
| M\_256x6 | fully-connected | 6 | 1,536 | None | 30 |
| M\_ConvMed | convolutional | 3 | 5,704 | None | 31 |
| M\_ConvBig | convolutional | 6 | 48,064 | DiffAI [42] | 29 |
| M\_SkipNet | residual | 6 | 71,650 | DiffAI | 31 |
| C\_8\_255Simp | convolutional | 3 | 16,634 | None | 30 |
| C\_WideNew | convolutional | 3 | 6,244 | None | 32 |
| C\_ConvBig | convolutional | 6 | 62,464 | PGD [43] | 37 |
| C\_Resnet4b | residual | 10 | 14,436 | None | 30 |
| C\_ResnetA | residual | 8 | 11,364 | None | 32 |
| C\_ResnetB | residual | 8 | 11,364 | None | 29 |
| C\_100\_Med | residual | 10 | 55,460 | Mixed | 24 |
| C\_100\_Large | residual | 10 | 286,820 | Mixed | 24 |

Table II: Detailed information of the experimental networks
Figure 16: Visualized results of the reduction with CROWN
speedup is computed across the networks, where the speedup for each network is calculated as the average reported time on the original network divided by that on the reduced network.
In addition, the performance of REDNet is affected by the network reduction ratio. For example, \(\alpha,\beta\)-CROWN-R achieves only an average 1.12\(\times\) speedup on M\_256\(\times\)6, whose reduction ratio is only 1.55, while it achieves an average 2.52\(\times\) speedup on C\_100\_Large, whose mean reduction ratio is 39.80.
### _Instantiation on PRIMA_
PRIMA [12] is one of the state-of-the-art incomplete verification tools. It introduces a new convex relaxation that considers multiple ReLUs jointly in order to capture the correlation between ReLU neurons in the same layer. Furthermore, PRIMA leverages LP- or MILP-solving to refine individual neuron bounds within a user-configured timeout. Note that PRIMA stores the connections between neurons in two ways: 1. dense expressions, which encode the fully-connected computation of a fully-connected layer; 2. sparse expressions, which keep only the non-zero coefficients and the indices of the preceding neurons whose coefficients are non-zero (e.g. for a convolutional layer). As some affine connections between layers in our reduced network contain many zero elements (since we introduce the identity matrix in the newly constructed connection), we elect to record them as sparse expressions in the instantiated PRIMA (abbreviated as PRIMA-R).
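The sketch below illustrates the two storage schemes with a toy encoding of our own (it does not reproduce PRIMA's internal data structures): a dense row stores every coefficient, while a sparse row keeps only the non-zero coefficients together with the indices of the preceding neurons they refer to.

```python
import numpy as np

def to_sparse(row, bias):
    """Sparse expression of one affine output neuron: keep only the
    non-zero coefficients and the indices of the preceding neurons."""
    idx = np.flatnonzero(row)
    return {"indices": idx, "coeffs": row[idx], "bias": bias}

def eval_sparse(expr, x):
    return float(expr["coeffs"] @ x[expr["indices"]] + expr["bias"])

# A mostly-zero row, as produced when identity blocks are stacked
row = np.zeros(1000)
row[[3, 17, 998]] = [0.5, -1.2, 2.0]
x = np.random.rand(1000)
expr = to_sparse(row, bias=0.1)
assert np.isclose(eval_sparse(expr, x), row @ x + 0.1)
```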
The comparison results are given in Table V; we set a 2000-second timeout for each verification query, as PRIMA runs on the CPU and usually takes a long time on deep networks. Note that PRIMA returns a segmentation fault for M\_SkipNet, so those results are marked as "-"; PRIMA times out on all properties of C\_100\_Med and C\_100\_Large, hence these are marked as "TO". Note that there are some cases where PRIMA-R runs slower than PRIMA-O, e.g., on network C\_ConvBig. This happens because PRIMA conducts refined verification by pruning the potential adversarial labels one by one within a certain timeout. Once an adversarial label fails to be pruned within the timeout, PRIMA returns unknown immediately without checking the rest of the adversarial labels. In PRIMA-R, we can prune those labels that previously timed out in PRIMA-O and thus continue the verification process, which may take more overall time. But accordingly, we gain a significant precision improvement; e.g. PRIMA-R verifies 9 more properties on C\_ConvBig.
On average, PRIMA-R achieves a \(1.99\times\) speedup over PRIMA-O and verifies 60.6% more images, which indicates the strength of REDNet in improving both efficiency and precision.
of a verification tool on an unsupported network is regarded as a timeout. Figure 17(b) then shows the execution-time distribution of \(\alpha,\beta\)-CROWN-R and \(\alpha,\beta\)-CROWN-O, where each position \((\alpha,\beta)\) denotes that the tool can verify \(\beta\) percent of the properties within \(\alpha\) seconds. For example, with a time limit of 10 seconds, \(\alpha,\beta\)-CROWN-R can verify 32.9% of the properties, while \(\alpha,\beta\)-CROWN-O verifies only 18.3%.
On most properties, the verification tools run faster on REDNet than on the original network. Despite its generality, REDNet may achieve only marginal gains on certain tools or benchmarks due to the following factors:
* The reduction ratio affects the subsequent verification acceleration. A less significant reduction ratio, plus the reduction cost, can lead to a marginal overall speedup. Figure 17(e) and Figure 17(f) depict the effect of the reduction ratio on the speedup gained. Although other factors also affect the final speedup, there is a general trend that a larger reduction ratio leads to better speedup, which explains the pronounced effectiveness on networks C\_100\_Med and C\_100\_Large.
* Different tools may use distinct bound propagation methods, which have different degrees of dependency on the network size. PRIMA deploys DeepPoly, whose time complexity depends on \(N^{3}\) where each layer has at most \(N\) neurons [45]; as such, a reduction in network size can lead to better performance. \(\alpha,\beta\)-CROWN, on the other hand, uses \(\beta\)-CROWN, which generates constraints of output neurons defined over preceding layers back to the input layer. Thus, the number of constraints does not vary, and the number of intermediate neurons only affects the number of variables that appear in each constraint; as such, deploying REDNet may reap a smaller speedup on \(\alpha,\beta\)-CROWN compared to PRIMA. VeriNet uses symbolic interval propagation to generate constraints of intermediate and output neurons defined over the input neurons. Thereby the number of intermediate neurons only affects the number of constraints, while the number of defined variables in each constraint is fixed by the input dimension. Hence, REDNet could be less effective on VeriNet than on PRIMA in general.
* Some layer types (e.g. Conv) may compute faster than fully-connected layers; since our method transforms these layers into fully-connected layers before performing network reduction, the efficiency gain may be less significant than with the original layers. On the other hand, it is worth noting that the use of fully-connected layers improves the applicability of existing tools.
* \(\alpha,\beta\)-CROWN and VeriNet are branch-and-bound based, and they generate sub-problems from their respective branching heuristics, which depend on the original network structure. REDNet changes the network structure, and hence the heuristics can generate different sub-problems. This may affect performance.
We conclude empirically that REDNet performs better (significant speedup or many more properties verified) on large networks, i.e. networks with more than 40k ReLU neurons. On the large networks, the average speedup of \(\alpha,\beta\)-
| **Neural Net** | VeriNet-O #Timeout | VeriNet-O #Verified | VeriNet-O Time(s) | VeriNet-R #Timeout | VeriNet-R #Verified | VeriNet-R Time(s) |
| --- | --- | --- | --- | --- | --- | --- |
| M\_256x6 | 27 | 3 | 52.94 | 27 | 3 | **47.78** |
| M\_ConvMed | - | - | - | 19 | **12** | **23.96** |
| M\_ConvBig | - | - | - | 23 | **6** | **48.81** |
| C\_WideKW | 3 | 29 | 27.12 | 2 | **30** | **22.74** |
| C\_8\_255Simp | 10 | 20 | 63.26 | 10 | 20 | **52.56** |
| C\_ResnetA | 25 | 7 | 129.49 | 25 | 7 | **83.35** |
| C\_ResnetB | 21 | 8 | 116.80 | 20 | **9** | **74.98** |
| C\_100\_Med | 10 | 14 | 69.71 | 9 | **15** | **21.15** |

Table VI: The experiment results of VeriNet, where VeriNet-O runs on the original network and VeriNet-R on the reduced network. The time is the average execution time over the properties where both methods terminate before the timeout. When VeriNet-O fails to execute, e.g. on M\_ConvBig, the average time is computed over the properties where our method terminates within the timeout.
Figure 17: Visualized comparison results.
CROWN-R is 1.94\(\times\), and the average speedup of VeriNet-R is 3.29\(\times\); and PRIMA-R verifies 42 properties while PRIMA-O only verifies 11 properties.
### _Support of other verifiers for the benchmarks_
As can be seen from subsection VI-D, PRIMA fails to analyze M\_SkipNet because it does not support its network architecture. However, with the introduction of REDNet, which is constructed as a fully-connected neural network, PRIMA is now able to verify M\_SkipNet. A similar improvement happens for VeriNet. Therefore, REDNet not only speeds up the verification process but also allows existing tools to handle network architectures that they did not support originally.
To further demonstrate that the reduced network adds support to existing verification tools, we select four tools - Debona [46], Venus [47], Nnenum [48], PeregriNN [49] - from VNNCOMP 2021/2022 that only support limited network architectures. We select one representative verification property for each of our tested networks to check whether the four designated tools can support the networks.
We present the results in Table VII, where we color the left half of the circle black to indicate that the original network is supported by the tool (and white otherwise); we likewise color the right half of the circle black if the reduced network is supported by the tool (and white otherwise). In general, black indicates that the network is _supported_ and white indicates that it is _not supported_. Note that Venus does not support networks whose output layer is a ReLU layer; therefore, it cannot be executed on either the original or the reduced network for M\_SkipNet. These results boost our confidence that REDNet not only accelerates verification but also _produces a simple neural network architecture that significantly expands the scope of neural networks which various tools can handle._
## VII Discussion
We now discuss the limitation of our work.
_Supported layer types._ As described in section V, our reduced neural network contains only Affine layers (e.g. GEMM layers) and ReLU layers; therefore we can only represent non-activation layers that perform linear computation. For example, an _Add_ layer that takes layers \(\alpha\) and \(\beta\) performs a linear computation, as the output is computed as \(\alpha+\beta\). A _Convolutional_ layer performs a linear computation as well, since it takes only one input layer and the other operands are constant weights and a bias. However, we cannot support a _Multiplication_ layer that takes layers \(\alpha\) and \(\beta\) and computes \(\alpha\times\beta\) as the output. For future work, we will explore the possibility of handling more non-linear computations.
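The distinction can be made concrete with a small check (our own illustration, with arbitrary random weights): a layer we can absorb must be an affine map of the network input, and while the sum of two affine maps is again affine, their elementwise product is quadratic in the input.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 3)), rng.normal(size=4)

def add_layer(x):    # Add of two affine layers: still affine in x
    return (W1 @ x + b1) + (W2 @ x + b2)

def mul_layer(x):    # Elementwise product: quadratic in x
    return (W1 @ x + b1) * (W2 @ x + b2)

x, y = rng.normal(size=3), rng.normal(size=3)
# For an affine f, f(x) + f(y) - f(0) == f(x + y)
is_affine = lambda f: np.allclose(f(x) + f(y) - f(np.zeros(3)), f(x + y))
print("Add is affine:", is_affine(add_layer))       # True
print("Multiply is affine:", is_affine(mul_layer))  # False for generic weights
```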
_Floating-point error._ As presented in Theorem 3, our reduction process preserves the input-output equivalence of the original network in the real-number domain. However, like many existing verification algorithms [12, 14, 24, 26] that use floating-point numbers when run on physical machines, our implementation involves floating-point computation and thus inevitably introduces floating-point error. The error can be mitigated by deploying a floating-point data type with higher precision in the implementation.
## VIII Related Work
Theoretically, verifying deep neural networks with ReLU functions is an NP-hard problem [17]. In particular, the complexity of the problem grows with the number of _nodes_ in the network. Therefore, out of concern for scalability, many works over-approximate the behavior of the network. This over-approximation can be achieved with abstract interpretation techniques [1, 25, 50], or by soundly approximating the network with _fewer nodes_ [8, 9, 51, 52].
In detail, abstract-interpretation-based methods over-approximate the functionality of each neuron with an abstract domain, such as box/interval [50], zonotope [1] or polyhedra [25]. These methods reason over the original neural network without changing the number of neurons in the tested network.
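For reference, the box (interval) domain mentioned above simply pushes elementwise lower/upper bounds through each layer; a minimal sketch for one affine layer followed by a ReLU is given below (standard interval arithmetic, not the implementation of any particular tool).

```python
import numpy as np

def affine_bounds(W, b, lb, ub):
    """Interval propagation through y = W x + b: positive weights take
    the matching input bound, negative weights swap lower and upper."""
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    new_lb = W_pos @ lb + W_neg @ ub + b
    new_ub = W_pos @ ub + W_neg @ lb + b
    return new_lb, new_ub

def relu_bounds(lb, ub):
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([0.0, -1.0])
lb, ub = affine_bounds(W, b, np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
lb, ub = relu_bounds(lb, ub)
print(lb, ub)   # bounds after one affine + ReLU step
```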
In contrast, the reduction methods in [8, 9, 51, 52] reduce the number of neurons in a way that over-approximates the original network's behavior. However, such over-approximation jeopardizes completeness when instantiated on complete methods. Our reduction method, on the contrary, captures the _exact_ behavior of the network without approximation error. Therefore REDNet can be instantiated on complete tools and even verify more properties given the same timeout. Furthermore, REDNet can handle various large networks, whereas the previous work [51] only evaluated one large-scale network (C\_ConvBig in our benchmark), which was reduced to 25% of its original size with a very small perturbation \(\epsilon=0.001\); we could reduce it to just 10% with \(\epsilon\approx 0.0078\) (properties from the VNN competition 2022). We remark that the smaller the perturbation, the more reduction we can gain. Other related tools [9, 52] were only evaluated on ACAS Xu networks with very small input dimensions and network sizes, making it challenging for us to make any meaningful comparison. Last but not least, the reduced networks designed in [8, 9] use intervals or
| **Networks (12)** | Venus | Debona | Nnenum | PeregriNN |
| --- | --- | --- | --- | --- |
| M\_256x6 | ● | ● | ● | ● |
| M\_ConvMed | ◐ | ◐ | ◐ | ◐ |
| M\_ConvBig | ◐ | ◐ | ◐ | ◐ |
| M\_SkipNet | ReLU-error | ◐ | ◐ | ◐ |
| C\_WideKW | ● | ◐ | ◐ | ◐ |
| C\_8\_255Simp | ◐ | ◐ | ◐ | ◐ |
| C\_ConvBig | ◐ | ◐ | ◐ | ◐ |
| C\_ResnetA | ◐ | ◐ | ◐ | ◐ |
| C\_ResnetB | ◐ | ◐ | ◐ | ◐ |
| C\_100\_Med | ◐ | ◐ | ◐ | ◐ |
| C\_100\_Large | ◐ | ◐ | ◐ | ◐ |

TABLE VII: The networks supported by existing verification tools. A fully black circle (●) indicates both the original and the reduced networks are supported; a right-half black circle (◐) indicates that the tool supports only the reduced network.
values in an abstract domain to represent connection weights. Such specialized connections require implementation support if instantiated on existing verification methods. In contrast, we export REDNet as a fully-connected network in ONNX, an open format that is widely accepted. This makes REDNet a versatile tool that can also be combined with existing reduction methods to reduce the network size even further, as they all apply to fully-connected networks.
REDNet can benefit various verification techniques, such as branch-and-bound based methods [53, 7, 10, 5]. The key insight of branch-and-bound is to divide the original verification problem \(P\) into subdomains/subproblems by splitting neuron intervals. For example, one can bisect an input neuron's interval so that the input space is curtailed in each subdomain, or split an unstable ReLU neuron \(y\) (whose input value can be both negative and positive) at the point 0, so that \(y\) is stably activated or stably deactivated in the resulting subdomains. Our _network reduction_ technique, once applied at the beginning of a branch-and-bound based method, helps generate easier subproblems based on a smaller network, thus accelerating the whole analysis process without sacrificing verification precision.
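A minimal sketch of the ReLU-splitting step is shown below (generic branch-and-bound bookkeeping of our own, not the code of any specific tool): an unstable neuron with pre-activation bounds l < 0 < u is split at 0, yielding one subproblem where it is forced inactive and one where it is forced active.

```python
def split_unstable_relu(problem, neuron, l, u):
    """Split one unstable ReLU neuron (l < 0 < u) at the point 0.

    Each subproblem copies the parent constraints and pins the neuron
    to a stable phase, so its ReLU becomes exactly linear there.
    """
    assert l < 0 < u, "only unstable neurons are split"
    inactive = {"bounds": {**problem["bounds"], neuron: (l, 0.0)},
                "phase": {**problem["phase"], neuron: "inactive"}}  # output = 0
    active = {"bounds": {**problem["bounds"], neuron: (0.0, u)},
              "phase": {**problem["phase"], neuron: "active"}}      # output = input
    return inactive, active

root = {"bounds": {"y3": (-0.7, 1.2)}, "phase": {}}
sub_a, sub_b = split_unstable_relu(root, "y3", -0.7, 1.2)
print(sub_a["phase"], sub_b["phase"])
```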
Furthermore, the reduced network can accelerate abstraction-refinement based processes like PRIMA [12], which encode the network constraints and refine individual neuron bounds. As REDNet contains fewer neurons and connections, the solving process involves a smaller set of constraints, which leads to an overall speedup.
## IX Conclusion
In this work, we propose the _neural network reduction_ technique, which constructs a reduced neural network with fewer neurons and connections while capturing the _same_ behavior as the original neural network under test. In particular, we provide formal definitions of stable ReLU neurons and deploy state-of-the-art bound propagation methods to detect such stable neurons and _remove_ them from the neural network in a way that preserves the network functionality. We conduct extensive experiments over various benchmarks and state-of-the-art verification tools. The results on a large set of neural networks indicate that our method can be instantiated on different verification methods, including \(\alpha,\beta\)-CROWN, VeriNet and PRIMA, to expedite the analysis process.
We believe that our method is an efficient pre-processing technique that returns a functionally equivalent reduced network on which the same verification algorithm runs faster, correspondingly enhancing the efficiency of existing verification methods so that they can answer tough verification queries within a decent time budget. Moreover, the simplified network architecture of REDNet empowers existing tools to handle a wider range of networks that they could not support previously.
## Acknowledgment
This research is supported by a Singapore Ministry of Education Academic Research Fund Tier 1 T1-251RES2103. The second author is supported by NUS grant T1 251RES2219.
|
2303.01655 | Atomtronic superconducting quantum interference device in synthetic
dimensions | We propose atomtronic counterpart of superconducting quantum interference
device (SQUID) in synthetic $2$-dimensional space. The system is composed of
Bose-Einstein condensate (BEC) in two neighboring optical wells which is
coupled to an external coherent light. Furthermore, availability of
controllable atomtronic flux qubit in the synthetic dimensions is demonstrated
with this system. Control parameter for the qubit is naturally provided by
artificial magnetic flux originated from the coherent atom-light coupling.
Comparing with traditional SQUID which requires at least $2$-dimensional
circuits, the synthetic dimensional SQUID can be realized only in
$1$-dimensional circuits. It should be a great advantage for the scalability
and integration feature of quantum logic gates. | Wenxi Lai, Yu-Quan Ma, Yi-Wen Wei | 2023-03-03T01:01:06Z | http://arxiv.org/abs/2303.01655v1 | # Atomtronic superconducting quantum interference device in synthetic dimensions
###### Abstract
We propose an atomtronic counterpart of the superconducting quantum interference device (SQUID) in a synthetic 2-dimensional space. The system is composed of a Bose-Einstein condensate (BEC) in two neighboring optical wells coupled to an external coherent light. Furthermore, the availability of a controllable atomtronic flux qubit in the synthetic dimensions is demonstrated with this system. The control parameter for the qubit is naturally provided by the artificial magnetic flux originating from the coherent atom-light coupling. Compared with the traditional SQUID, which requires at least 2-dimensional circuits, the synthetic-dimensional SQUID can be realized in purely 1-dimensional circuits. This should be a great advantage for the scalability and integration of quantum logic gates.
pacs: 03.75.Gg, 74.50.+r, 03.67.Lx, 42.50.Ct **Introduction**. Quantum logic gates based on circuit quantum electrodynamics (QED) have advantages in scalability and integration [1; 2; 3; 4]. However, they are easily affected by their environment due to the charge and spin degrees of freedom of electrons [5; 6; 7; 8]. In contrast, cold atoms in cavity QED exhibit quantum oscillations with long coherence lifetimes. For example, optical oscillations in alkaline-earth(-like) atoms enjoy lifetimes from about a microsecond to tens of seconds [9; 10]. The obvious drawback of cavity-QED-based quantum logic gates is their poor scalability. Although several schemes for improving scalability have been proposed, they are hard to realize and popularize in practice [11; 12; 13].
A balanced protocol may be available with atomtronics, in which atoms keep their strong coherence and, at the same time, scalability of atomic transistors becomes feasible due to the similarity between atomtronic and electronic circuits [14; 15; 16; 17]. Indeed, atomtronic circuits based on BEC Josephson junctions have become one of the important candidates for the development of quantum computation. In 1-dimensional optical potentials, BEC atoms exhibit quantum fluctuations of atom number and nonlocal phase differences, as studied both theoretically [18; 19; 20; 21] and experimentally [22; 23]. These Josephson-junction-like properties can be exploited in 2-dimensional optical potentials to realize an atomtronic SQUID. Such devices were proposed recently [24; 25; 26; 27; 28; 29] and implemented in subsequent experiments [30; 31; 32]. In some literature, such devices are directly called atomtronic quantum interference devices (AQUIDs).
Furthermore, atomtronic flux qubits and universal quantum logic gates have been demonstrated in toroidal potential wells [33], ring-shaped optical lattices [34; 35] and triangular optical lattices [36; 37]. In the ring-shaped BEC, one can follow Caldeira and Leggett's dissipative model to extract an effective Lagrangian for particular Josephson barriers [33; 34; 35]. The desired Hamiltonian for the flux qubit is then obtained from the effective Lagrangian. Couplings between two BEC rings are not hard to implement for two-qubit universal quantum gates, as introduced in Refs. [24; 26; 28; 33; 34; 35]. Large numbers of flux qubits are expected to be integrated in 2-dimensional optical lattices [36].
In this paper, we theoretically demonstrate an atomic SQUID and a flux qubit in the synthetic dimensions of a BEC system. In traditional SQUID circuits, at least a 2-dimensional space is necessary to form a closed ring, both in electronics [1; 2; 3; 4] and in atomtronics [24; 25; 26; 27; 28; 29; 30; 31; 32]. However, in the present synthetic-dimensional SQUID, one needs only 1-dimensional circuits, since the closed ring is created in the synthetic dimensions (Fig. 1 (a)). The artificial gauge field induced by the atom-light coupling [10; 38; 39; 40; 41; 42] introduces a magnetic flux into the synthetic-dimensional ring. It can be used to control the SQUID, and tunable qubits become available. Due to the lowered dimension in real space, such systems should greatly simplify the configuration of atomtronic flux qubits and improve the scalability of the devices.
**Atomtronic Josephson junction in position space**: In position space, tunneling of a cold-atom Bose-Einstein condensate (BEC) in a double-well system can be used to realize an atomic Josephson junction, which has been widely studied earlier [18; 21; 23]. For completeness, let us first review this kind of junction, considering that all atoms are in the internal (electronic) state \(s\). As shown in Fig. 1 (b), the typical Hamiltonian of this tunneling process is the Bose-Hubbard Hamiltonian \(\hat{H}_{s}\)[23; 24; 30; 32],
\[\hat{H}_{s}=\hat{H}_{Js}+\sum_{\alpha=0,1}\frac{U}{2}\hat{n}_{\alpha s}(\hat{ n}_{\alpha s}-1), \tag{1}\]
where the tunneling process is described by \(\hat{H}_{Js}=\sum_{\alpha=0,1}\mu_{\alpha}\hat{n}_{\alpha s}-\hbar J(\hat{a}_{0s}^ {\dagger}\hat{a}_{1s}+H.c.)\). Here \(\mu_{\alpha}\) represents the chemical potential of the corresponding optical well, and the atom number operators are defined as \(\hat{n}_{\alpha s}=\hat{a}_{\alpha s}^{\dagger}\hat{a}_{\alpha s}\). The atom-atom repulsion energy
due to occupying the same well is given by the quantity \(\frac{U}{2}\), and the rate of atom tunneling between the two wells is denoted by \(J\).
Based on the mean field theory of BEC [30; 35; 44], these annihilation operators could be described by the particle number and phase representation as,
\[\langle\hat{a}_{\alpha s}\rangle=\sqrt{n_{\alpha s}}e^{i\theta_{ \alpha s}}. \tag{2}\]
Then the Hamiltonian (1) becomes
\[H_{s}=\sum_{\alpha=0,1}(\mu_{\alpha}n_{\alpha s}+\frac{U}{2}n_{ \alpha s}^{2})-E_{Js}\cos(\theta_{1s}-\theta_{0s}), \tag{3}\]
where the Josephson energy is defined as \(E_{Js}=2\hbar J\sqrt{n_{0s}n_{1s}}\). This system has an effective local capacitance \(C_{\alpha}=\frac{q^{2}}{U}\), which can be deduced by comparing the capacitance energy \(\frac{Q_{\alpha s}^{2}}{2C_{\alpha}}\) with the repulsive energy in Eq.(3). At the same time, the charging energy is \(E_{C_{\alpha}}=\frac{q^{2}}{2C_{\alpha}}=\frac{U}{2}\). The effective charge is \(Q_{\alpha s}=qn_{\alpha s}\). Here, \(q\) is the unit charge, and for neutral atoms \(q=1\) with the meaning of atom number. One can use Hamilton's equations \(\frac{dn_{\alpha s}}{dt}=-\frac{1}{\hbar}\frac{\partial H_{s}}{\partial\theta _{\alpha s}}\) and \(\frac{d\theta_{\alpha s}}{dt}=\frac{1}{\hbar}\frac{\partial H_{s}}{\partial n _{\alpha s}}\) to reach the equations of motion,
\[\frac{d}{dt}n_{s}=-2J\sqrt{1-n_{s}^{2}}\sin(\theta_{s}) \tag{4}\]
\[\frac{d}{dt}\theta_{s}=\delta+\frac{U}{\hbar}N_{s}n_{s}+\frac{2Jn _{s}}{\sqrt{1-n_{s}^{2}}}\cos(\theta_{s}) \tag{5}\]
where \(\theta_{s}=\theta_{0s}-\theta_{1s}\), \(n_{s}=\frac{n_{0s}-n_{1s}}{n_{0s}+n_{1s}}\), \(N_{s}=n_{0s}+n_{1s}\) and \(\delta=\frac{\mu_{0}-\mu_{1}}{\hbar}\). The results can be found in earlier work [43; 18]. Furthermore, Eq.(4) gives rise to the current formula in Josephson junctions [31; 45],
\[I_{s}=I_{sc}\sin(\theta_{s}), \tag{6}\]
where the critical current of the junction is \(I_{sc}=q\frac{E_{Js}}{\hbar}\).
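As a numerical illustration of Eqs. (4)-(6), the sketch below integrates the two-mode equations of motion with SciPy; the parameter values are arbitrary examples chosen by us and are not taken from the references.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters only: frequencies in kHz, time in ms
J, U_hbar, N, delta = 0.1, 0.001, 200, 0.0   # U_hbar stands for U/hbar

def rhs(t, y):
    """Right-hand side of Eqs. (4)-(5) for (n_s, theta_s)."""
    n, theta = y
    root = np.sqrt(1.0 - n**2)
    dn = -2.0*J*root*np.sin(theta)
    dtheta = delta + U_hbar*N*n + 2.0*J*n/root*np.cos(theta)
    return [dn, dtheta]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.0], max_step=0.01)
n_t, theta_t = sol.y
I_over_Ic = np.sin(theta_t)      # Josephson current of Eq. (6), in units of I_sc
print("max |n_s| during the evolution:", np.abs(n_t).max())
```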
Figure 1: (Color on line) (a) Schematic illustration of BEC ring in synthetic dimensions. (b) Atomtronic Josephson junction in position space. (c) Atomtronic Josephson junction in atom internal states. (d) Atomtronic SQUID in synthetic dimensions.
**Atomtronic Josephson junction in atom internal states**: In the following, let us consider the Josephson junction originating from the coherent optical transition between two internal states (\(s=g\) and \(s=e\)) of the BEC atoms. Clock transitions of alkaline-earth(-like) atoms with long lifetimes can be used here. As a BEC gas, all atoms are assumed to be in the same mode, described by the annihilation operator \(\hat{a}_{\alpha s}\), which satisfies the relation \(\hat{a}_{\alpha s}|n_{\alpha s}\rangle=\sqrt{n_{\alpha s}}|n_{\alpha s}-1\rangle\). The model is shown in Fig. 1 (c). The optical coupling between the two internal levels of the BEC atoms under a coherent light with frequency \(\nu\) can be described by [10; 38; 39],
\[\hat{H}_{\alpha}=\sum_{s=g,e}\varepsilon_{s}\hat{n}_{\alpha s}-\frac{\hbar \Omega}{2}(\hat{a}_{\alpha g}^{\dagger}\hat{a}_{\alpha e}e^{i(\nu t-\alpha \phi)}+H.c.), \tag{7}\]
where \(\varepsilon_{s}\) denotes the ground (\(s=g\)) or excited (\(s=e\)) energy level of the atomic internal states. \(\alpha\) labels the position and \(\alpha\phi\) is the position-dependent phase generated in the coherent atom-light coupling. The Rabi frequency \(\Omega\) is taken as a real quantity, considering that its phase is tunable [46; 47].
With the free evolution Hamiltonian \(H_{\alpha 0}=\varepsilon_{g}\hat{n}_{\alpha g}+(\varepsilon_{g}+\hbar\nu)\hat{n }_{\alpha e}\), the Hamiltonian (7) is transformed into the form \(H_{\alpha}^{I}\) in interaction picture.
\[\hat{H}_{\alpha}^{I}=\hbar\Delta\hat{n}_{\alpha e}-\frac{\hbar \Omega}{2}(\hat{a}_{\alpha g}^{\dagger}\hat{a}_{\alpha e}e^{-i\alpha\phi}+H.c.), \tag{8}\]
Consequently, using the mean field approach in Eq.(2), \(H_{\alpha}^{I}\) could be written into the \(c\) number form,
\[H_{\alpha}^{I}=\hbar\Delta n_{\alpha e}-E_{\Omega\alpha}\cos( \theta_{\alpha g}-\theta_{\alpha e}+\alpha\phi), \tag{9}\]
where the detuning is \(\Delta=(\varepsilon_{e}-\varepsilon_{g})/\hbar-\nu\). We define the Josephson energy \(E_{\Omega\alpha}=\hbar\Omega\sqrt{n_{\alpha g}n_{\alpha e}}\) for the atomic transition between internal states under the optical coupling. The second term in Eq.(9) is the energy of the effective inductor. Since the model of the optical transition does not involve a repulsive energy like the atom tunneling in the above section, the corresponding local capacitance \(C_{s}\) should be infinitely large, \(C_{s}\rightarrow\infty\). Then, the effective charging energy is close to zero, \(E_{C_{s}}=\frac{q^{2}}{2C_{s}}\to 0\).
With Hamilton's equations \(\frac{dn_{\alpha s}}{dt}=-\frac{1}{\hbar}\frac{\partial H_{\alpha}^{I}}{\partial\theta_{\alpha s}}\), \(\frac{d\theta_{\alpha s}}{dt}=\frac{1}{\hbar}\frac{\partial H_{\alpha}^{I}}{\partial n_{\alpha s}}\), we obtain the equations of motion for the flux dynamics, analogous to the method above,
\[\frac{d}{dt}n_{\alpha}=-\Omega\sqrt{1-n_{\alpha}^{2}}\sin(\theta _{\alpha}+\alpha\phi) \tag{10}\]
\[\frac{d}{dt}\theta_{\alpha}=-\Delta+\frac{\Omega n_{\alpha}}{ \sqrt{1-n_{\alpha}^{2}}}\cos(\theta_{\alpha}+\alpha\phi) \tag{11}\]
where \(n_{\alpha}=\frac{n_{\alpha g}-n_{\alpha e}}{n_{\alpha g}+n_{\alpha e}}\) and \(\theta_{\alpha}=\theta_{\alpha g}-\theta_{\alpha e}\). Eq.(11) lacks the atom-atom repulsion energy term, which is the main difference compared with Eq.(5). The repulsion energy is related to the capacitance energy [24; 25]. Therefore, the optical transition of the atoms works simply as an effective inductor, as illustrated with the equivalent circuit in Fig. 1 (b). The current corresponding to the phase difference can be derived from Eq.(10),
\[I_{\alpha}=I_{\alpha c}\sin(\theta_{\alpha}+\alpha\phi), \tag{12}\]
where the critical current of the junction is \(I_{\alpha c}=q\frac{E_{\Omega\alpha}}{\hbar}\).
In Eq.(11), if one considers resonant coupling, \(\Delta=0\), and approximately half population inversion, \(n_{\alpha g}\approx n_{\alpha e}\), the phase \(\theta_{\alpha}\) is almost a constant. Recently, Josephson-junction effects between two spin components of BEC atoms have been explored experimentally with coherent optical couplings [48]. The fluctuations of the spin polarization and of the corresponding phase in these experiments originate from the spin-spin interaction and from the atom-light detuning, which is necessary for the coherent Raman transition.
**Atomtronic SQUID in synthetic dimensions**: When both the atom tunneling in the double well and the optical transition between the atomic internal states are considered at the same time, a closed loop in the synthetic dimensions is created, as illustrated in Fig. 1 (d). In this ring, there are four junctions altogether: two junctions in position space and the other two in the internal-transition space. The Hamiltonian of the ring in synthetic dimensions is given by
\[\hat{H}_{S}=\sum_{s=g,e}\hat{H}_{Js}+\sum_{\alpha=0,1}[\frac{U}{2} \hat{N}_{\alpha}(\hat{N}_{\alpha}-1)+\hat{H}_{\alpha}], \tag{13}\]
where, \(\hat{H}_{Js}\) is given in (1), \(\hat{H}_{\alpha}\) is given by (7) and \(\hat{N}_{\alpha}=\sum_{s=g,e}\hat{n}_{\alpha s}\).
Now, using the free evolution Hamiltonian \(\hat{H}_{0}=\sum_{\alpha,s}(\mu_{\alpha}+\varepsilon_{s})\hat{n}_{\alpha s}\), one could transform the Eq.(13) into its form of interaction picture,
\[\hat{H}=\sum_{s=g,e}\hat{H}_{Js}^{I}+\sum_{\alpha=0,1}[\frac{U}{2}\hat{N}_{ \alpha}(\hat{N}_{\alpha}-1)+\hat{H}_{\alpha}^{I}]. \tag{14}\]
where \(\hat{H}_{Js}^{I}=-\hbar J(\hat{a}_{0s}^{\dagger}\hat{a}_{1s}+H.c.)\). Resonant interaction, \(\hbar\nu=\varepsilon_{e}-\varepsilon_{g}\), and equal chemical potentials, \(\mu_{0}=\mu_{1}\), have been assumed here.
With the mean field approach in Eq.(2), the Hamiltonian (14) could be written in the c number form,
\[H=\sum_{\alpha=0,1}\frac{U}{2}N_{\alpha}^{2}-\sum_{s=g,e}E_{Js}\cos(\theta_{s })-\sum_{\alpha=0,1}E_{\Omega\alpha}\cos(\theta_{\alpha}+\alpha\phi). \tag{15}\]
where \(N_{\alpha}=n_{\alpha g}+n_{\alpha e}\). The linear term in the atom-atom repulsive energy has been neglected, since \(N_{\alpha}\gg 1\). The phase differences along the anti-clockwise direction have been defined as \(\theta_{g}=\theta_{0g}-\theta_{1g}\), \(\theta_{1}=\theta_{1g}-\theta_{1e}\), \(\theta_{e}=\theta_{1e}-\theta_{0e}\) and \(\theta_{0}=\theta_{0e}-\theta_{0g}\). The algebraic sum of the phase differences along the synthetic-dimensional ring, plus the optically induced external phase, satisfies the flux quantization condition
\[\theta_{g}+\theta_{e}+\theta_{0}+\theta_{1}+\phi=2\pi m, \tag{16}\]
where \(m\) is an integer.
In the following, we use Hamilton's equation \(\frac{\partial n_{\alpha s}}{\partial t}=-\frac{1}{\hbar}\frac{\partial H}{ \partial\theta_{\alpha s}}\) to derive the atomic current \(I\). First, Hamilton's equations give the equations of motion,
\[\frac{\partial n_{\alpha g}}{\partial t}=(-1)^{\alpha}[\frac{E_{\Omega\alpha} }{\hbar}\sin(\theta_{\alpha}+\alpha\phi)-\frac{E_{Jg}}{\hbar}\sin(\theta_{g})], \tag{17}\]
\[\frac{\partial n_{\alpha e}}{\partial t}=(-1)^{\alpha}[\frac{E_{Je}}{\hbar} \sin(\theta_{e})-\frac{E_{\Omega\alpha}}{\hbar}\sin(\theta_{\alpha}+\alpha \phi)]. \tag{18}\]
Considering the anti-clockwise direction as the positive current direction, we have the following relations for the currents between different islands: \(q\frac{\partial n_{\alpha g}}{\partial t}=(-1)^{\alpha}(I_{\alpha}-I_{g})\) and \(q\frac{\partial n_{\alpha e}}{\partial t}=(-1)^{\alpha}(I_{e}-I_{\alpha})\). The averaged formula \(I=\frac{1}{4}(I_{0}+I_{g}+I_{1}+I_{e})\) gives the total current in the ring,
\[I=\frac{1}{4}[\sum_{s=g,e}I_{sc}\sin(\theta_{s})+\sum_{\alpha=0,1}I_{\alpha c} \sin(\theta_{\alpha}+\alpha\phi)]. \tag{19}\]
where the critical currents are \(I_{sc}=\frac{qE_{Js}}{\hbar}\) and \(I_{\alpha c}=\frac{qE_{\Omega\alpha}}{\hbar}\).
Due to the flux quantization condition (16), the phase differences in the ring are not independent. The current formula can then be written as
\[I=\frac{1}{4}[I_{gc}\sin(\theta_{g})-I_{ec}\sin(\theta_{g}+\theta_{0}+\theta_{1}+ \phi)+I_{0c}\sin(\theta_{0})+I_{1c}\sin(\theta_{1}+\phi)]. \tag{20}\]
This current formula is in accord with the current that appears in a superconducting circuit with four Josephson junctions [50]. The atomtronic SQUID lives in the synthetic 3-dimensional coordinates \(\theta_{0}\), \(\theta_{1}\) and \(\theta_{g}\), with \(\phi\) as a parameter.
Atomic currents in the ring, tuned between the clockwise and anti-clockwise directions, are shown in Fig. 2 for different phase values. Here, the tunneling rate is taken as \(J=0.1\) kHz and the Rabi frequency is set to \(\Omega=0.5\) kHz, values involved in earlier experiments [10; 39]. In addition, the mean atom numbers in all islands are taken to be the same, \(n_{0g}=n_{0e}=n_{1g}=n_{1e}=100\).
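The current modulation of Fig. 2 can be reproduced schematically from Eq. (20); the sketch below (our own illustration, with \(q=1\) and frequencies in kHz) evaluates the total current for the parameters quoted above while scanning the optically induced phase \(\phi\).

```python
import numpy as np

# Parameters from the text: J = 0.1 kHz, Omega = 0.5 kHz, all n = 100
J, Omega, n = 0.1, 0.5, 100
I_gc = I_ec = 2.0*J*n        # q E_Js / hbar with n_0s = n_1s = n and q = 1
I_0c = I_1c = Omega*n        # q E_Omega_alpha / hbar

def ring_current(theta_g, theta_0, theta_1, phi):
    """Total current of Eq. (20), with theta_e eliminated through Eq. (16)."""
    return 0.25*(I_gc*np.sin(theta_g)
                 - I_ec*np.sin(theta_g + theta_0 + theta_1 + phi)
                 + I_0c*np.sin(theta_0)
                 + I_1c*np.sin(theta_1 + phi))

phi = np.linspace(0.0, 4*np.pi, 400)
I = ring_current(theta_g=0.3, theta_0=0.1, theta_1=0.2, phi=phi)
print("current range:", I.min(), I.max())   # sign change: clockwise vs anti-clockwise
```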
**Atomtronic flux qubit in synthetic dimensions**: The model for the Hamiltonian (15) is shown in Fig. 1 (d). During resonant coherent optical coupling, we have \(\Delta=0\) and \(n_{\alpha g}\approx n_{\alpha e}\). From Eq. (11) one can conclude that the corresponding phase fluctuation is negligibly small, namely, \(\dot{\theta}_{\alpha}=\dot{\theta}_{\alpha g}-\dot{\theta}_{\alpha e}\to 0\). The negligibly small phase change is reasonable, since the atom-atom repulsion energy is not involved when atoms transit between the two internal states \(g\) and \(e\). Therefore, we take the phase differences \(\theta_{0}\) and \(\theta_{1}\) as time-independent classical quantities. Comparing Eqs.(5) and (11), it is reasonable to assume that the phase fluctuation in the loop is mainly contributed by the atom tunneling between the two wells, namely, \(\dot{\theta}_{s}=\dot{\theta}_{0s}-\dot{\theta}_{1s}\neq 0\). Under this condition, Hamiltonian (15) reduces to
\[H=\sum_{\alpha=0,1}\frac{U}{2}N_{\alpha}^{2}-\sum_{s=g,e}E_{Js}\cos(\theta_{s }). \tag{21}\]
where the non-dynamical constant terms \(-E_{\Omega 0}\cos(\theta_{0})\) and \(-E_{\Omega 1}\cos(\theta_{1}+\phi)\) have been neglected, since the phases \(\theta_{0}\) and \(\theta_{1}\) are regarded as time-independent classical parameters.
The Hamiltonian (21) is expressed in terms of generalized coordinates and momenta, \(H=E(n_{\alpha s},\theta_{\alpha s})\). To describe the Hamiltonian in terms of velocities and coordinates, \(H=E(\dot{\theta}_{\alpha s},\theta_{\alpha s})\), one may find the corresponding velocities from Hamilton's equation, \(\dot{\theta}_{\alpha s}=\frac{1}{\hbar}\frac{\partial H}{\partial n_{\alpha s}}\)[51]. This gives the results
\[\frac{d\theta_{0s}}{dt}=\frac{U}{\hbar}(n_{0g}+n_{0e})-J\sqrt{\frac{n_{1s}}{ n_{0s}}}\cos(\theta_{s}), \tag{22}\]
\[\frac{d\theta_{1s}}{dt}=\frac{U}{\hbar}(n_{1g}+n_{1e})-J\sqrt{\frac{n_{0s}}{ n_{1s}}}\cos(\theta_{s}). \tag{23}\]
When the quantity \(\frac{U}{\hbar}\) is comparable to \(J\), and \(n_{0s}\), \(n_{1s}\gg 1\), the inequality \(\frac{U}{\hbar}(n_{\alpha g}+n_{\alpha e})\gg J\sqrt{\frac{n_{1s}}{n_{0s}}}\) (and likewise with \(0\leftrightarrow 1\)) is satisfied. In this regime, the second term in Eqs. (22) and (23) can be neglected. Then we obtain the relations \(\dot{\theta}_{0s}\approx\frac{U}{\hbar}(n_{0g}+n_{0e})\) and \(\dot{\theta}_{1s}\approx\frac{U}{\hbar}(n_{1g}+n_{1e})\) for \(s=g\), \(e\). These two relations allow us to write \((n_{\alpha g}+n_{\alpha e})\approx\frac{\hbar}{2U}(\dot{\theta}_{\alpha g}+ \dot{\theta}_{\alpha e})\) for \(\alpha=0,1\). One can insert \(N_{\alpha}\) into the Hamiltonian (21),
\[H=\sum_{\alpha=0,1}\frac{\hbar^{2}}{8U}(\dot{\theta}_{\alpha g}+\dot{\theta}_{ \alpha e})^{2}-\sum_{s=g,e}E_{Js}\cos(\theta_{s}). \tag{24}\]
Figure 3: (Color on line) (a) Potential energy versus phase \(\theta\). Here, \(U/\hbar=10J\). (b) Comparison between the original potential and quadratic order potential with respect to \(U/\hbar=10J\). (c) Frequency of the harmonic oscillator defined in Eq.(30).
Considering the phase differences \(\theta_{g}=\theta_{0g}-\theta_{1g}\) and \(\theta_{e}=\theta_{1e}-\theta_{0e}\), it is convenient to define \(\theta_{0g}=\Theta_{g}+\frac{\theta_{g}}{2}\), \(\theta_{1g}=\Theta_{g}-\frac{\theta_{g}}{2}\), \(\theta_{0e}=\Theta_{e}-\frac{\theta_{e}}{2}\) and \(\theta_{1e}=\Theta_{e}+\frac{\theta_{e}}{2}\), where \(\Theta_{g}\) and \(\Theta_{e}\) are time-independent mean quantities. This leads to the Hamiltonian in terms of the nonlocal phases,
\[H=\frac{\hbar^{2}}{16U}(\dot{\theta}_{g}-\dot{\theta}_{e})^{2}-\sum_{s=g,e}E_{Js }\cos(\theta_{s}). \tag{25}\]
This reflects that the synthetic-dimensional atomtronic SQUID is equivalent to a SQUID containing two Josephson junctions on a ring [52]. A schematic illustration of this configuration is shown at the bottom of Fig. 1 (d).
Because \(\theta_{0}\) and \(\theta_{1}\) are time-independent classical quantities, we absorb them into \(\phi\) and define \(\varphi=\theta_{0}+\theta_{1}+\phi\). In this case, the flux quantization condition (16) becomes \(\theta_{g}+\theta_{e}+\varphi=2m\pi\). This reveals that, apart from \(\varphi\), there is just one degree of freedom left in the system. Then, Eq. (25) reduces to the following simple form,
\[H=\frac{\hbar^{2}}{4U}\dot{\theta}_{g}^{2}-E_{Jg}\cos(\theta_{g})-E_{Je}\cos( \theta_{g}+\varphi). \tag{26}\]
Now, one may choose a new coordinate origin such that \(\theta_{g}=\theta-\frac{\varphi}{2}\). Additionally, considering \(E_{Jg}\approx E_{Je}\) for the resonant coherent coupling, one finds
\[H_{eff}=\frac{\hbar^{2}}{4U}\dot{\theta}^{2}-2E_{Jg}\cos(\frac{\varphi}{2}) \cos(\theta). \tag{27}\]
This is the Hamiltonian of a single qubit with tunable parameter \(\varphi\)[45]. It is valuable to point out that the definition of the phase difference \(\dot{\theta}=\dot{\theta}_{0g}-\dot{\theta}_{1g}\) and the relation \(\dot{\theta}_{\alpha g}\approx\frac{U}{\hbar}(n_{\alpha g}+n_{\alpha e})\) given above result in the relation between the generalized velocity and momentum, \(\dot{\theta}=\frac{U}{\hbar}Nn\). Here, \(N\) is the total atom number, \(N=N_{0}+N_{1}\), and \(n\) is the relative atom number difference between the two wells, \(n=(N_{0}-N_{1})/N\).
From Faraday's law, one knows that the effective voltage \(V\) should equal \(V=\frac{\hbar}{q}\dot{\theta}\). On the other hand, the effective capacitance \(C\) satisfies \(C=\frac{Q}{V}\), where \(Q=qn\) and \(n\) represents the atom number associated with the phase \(\theta\). Then, one finds the relation \(Q=\frac{C\hbar}{q}\dot{\theta}\). This allows us to obtain the energy of the effective capacitance, \(\frac{Q^{2}}{2C}=\frac{C\hbar^{2}}{2q^{2}}\dot{\theta}^{2}\). Comparing the capacitance energy \(\frac{Q^{2}}{2C}\) with the kinetic energy in Hamiltonian (27), one finds that the capacitance equals \(C=\frac{q^{2}}{2U}\). Finally, from the charging energy formula \(E_{C}=\frac{q^{2}}{2C}\), we have \(E_{C}=U\). When the charging energy satisfies \(E_{C}\gg E_{Jg}\), the phase fluctuation in the junction is large; on the contrary, when the charging energy satisfies \(E_{C}\ll E_{Jg}\), the phase fluctuation is small [49].
Fig. 3 (a) illustrates that the external control phase can arbitrarily change the depth of the potential \(U_{P}=-2E_{Jg}\cos(\frac{\varphi}{2})\cos(\theta)\). The anharmonicity of this potential allows us to access a uniquely addressable quantum two-level system for qubit design [45]. From the Taylor expansion of the term \(\cos(\theta)\),
\[H_{eff}=\frac{\hbar^{2}}{4U}\dot{\theta}^{2}-2E_{Jg}\cos(\frac{\varphi}{2})(1- \frac{\theta^{2}}{2}+\frac{\theta^{4}}{24}+...). \tag{28}\]
one can extract the quadratic order of the potential, \(U_{har}=-2E_{Jg}\cos(\frac{\varphi}{2})(1-\frac{\theta^{2}}{2})\). Fig. 3 (b) shows a comparison between the effective potential \(U_{P}\) and its quadratic part \(U_{har}\). For small \(\theta\), we have the approximation \(U_{P}\to U_{har}\). The frequency \(\omega\) of the harmonic oscillator \(H_{har}=\frac{\hbar^{2}}{4U}\dot{\theta}^{2}+U_{har}\) is readily given by
\[\omega=\frac{2}{\hbar}\sqrt{UE_{Jg}\cos(\frac{\varphi}{2})}. \tag{29}\]
Here, \(\varphi\) should take values \(2m\pi-\pi<\varphi<2m\pi+\pi\). Considering just the lowest two levels of the Hamiltonian \(H_{har}\), one can write an approximate two-level Hamiltonian,
\[\hat{H}_{Q}=\hbar\omega\sigma_{z}, \tag{30}\]
where \(\sigma_{z}\) is the Pauli matrix. Eq. (27) represents the free evolution Hamiltonian of the qubit system, whose energy splitting can be tuned by the system parameters as illustrated in Fig. 3 (c). All data in this figure belong to the regime \(E_{C}\ll E_{Jg}\). The condition \(E_{C}\gg E_{Jg}\) should occur when the total atom number is very small and the optical potentials are very narrow, so that the atom-atom repulsion becomes much more intense.
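The tunability of the level splitting shown in Fig. 3 (c) follows directly from Eq. (29); the short sketch below evaluates \(\omega\) as a function of the control phase \(\varphi\) for illustrative parameter values (with \(U/\hbar=10J\), as in Fig. 3 (a), and \(q=1\)).

```python
import numpy as np

# Illustrative values, frequencies in kHz
J, n0g, n1g = 0.1, 100, 100
U_hbar = 10.0*J                      # U / hbar
EJg_hbar = 2.0*J*np.sqrt(n0g*n1g)    # E_Jg / hbar, from E_Jg = 2 hbar J sqrt(n0g n1g)

def omega(phi):
    """Harmonic frequency of Eq. (29); real only while cos(phi/2) > 0."""
    return 2.0*np.sqrt(U_hbar*EJg_hbar*np.cos(phi/2.0))

phi = np.linspace(-0.9*np.pi, 0.9*np.pi, 181)
print("omega range (kHz):", omega(phi).min(), omega(phi).max())
```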
**Conclusions**: In conclusion, quantum tunneling of a BEC between two neighboring wells and optical transitions between the atomic internal states can form a SQUID in synthetic dimensions. Clockwise and anti-clockwise currents in the
synthetic-dimensional ring can be predicted. Optical transitions in the atom-light interaction work as effective inductors and exhibit the properties of Josephson junctions. In the optical Josephson junction, the atom number fluctuation is negligible during the coherent resonant coupling. This feature is favorable for simplifying the four-junction SQUID into a two-junction SQUID. Consequently, considering the flux quantization relation in the closed loop, one can achieve a 1-dimensional quantum phase fluctuation in the atomtronic SQUID. Therefore, the model may have great applications for qubit design in quantum computation. The qubit is controllable with the artificial magnetic flux generated by the coherent atom-light coupling.
###### Acknowledgements.
This work was supported by R & D Program of Beijing Municipal Education Commission (KM202011232017) and Qin Xin Talents Cultivation Program of BISTU under Grant No. QXTCP C201711.
_Data availability statement_: All data that support the findings of this study are included within the article.
|
2304.10927 | Spatiotemporally ordered patterns in a chain of coupled dissipative
kicked rotors | In this work we consider the dynamics of a chain of many coupled kicked
rotors with dissipation. We map a rich phase diagram with many dynamical
regimes. We focus mainly on a regime where the system shows period doubling,
and forms patterns that are persistent and depend on the stroboscopic time with
period double than that of the driving: The system shows a form of
spatiotemporal ordering analogous to quantum Floquet time crystals.
Spatiotemporally ordered patterns can be understood by means of a
linear-stability analysis that predicts an instability region that contains the
spatiotemporally ordered regime. The boundary of the instability region
coincides with the lower boundary of the spatiotemporally ordered regime, and
the most unstable mode has length scale double than the lattice spacing, a
feature that we observe in the spatiotemporally ordered patterns: Period
doubling occurs both in time and space. We propose an implementation of this
model in an array of SQUID Josephson junctions with a pulsed time-periodic
flux. | Angelo Russomanno | 2023-04-21T12:56:16Z | http://arxiv.org/abs/2304.10927v3 | # Patterns and spatio-temporal order in a chain of coupled dissipative kicked rotors
###### Abstract
In this work we consider the dynamics of a chain of many coupled kicked rotors with dissipation. We find a rich phase diagram with many dynamical regimes. Beyond the chaotic regime and that of trivial relaxation to a uniform state, there is a regime where the system spontaneously forms patterns that are persistent in time and stroboscopic-time independent. In another regime the system shows period doubling, and forms patterns that are persistent in time and depend on the stroboscopic time with period double that of the driving: In order to break the discrete time-translation symmetry, the system must break also the space translation symmetry, in a form of spatio-temporal ordering analogous to quantum Floquet time crystals. We find that the asymptotic onsite energy is finite and does not scale with the system size; This fact opens the possibility of implementing this model in an array of SQUID Josephson junctions with a pulsed time-periodic flux.
## I Introduction
In many-body dynamical systems out of equilibrium, ordered coherent patterns in space and time naturally appear, from convective Rayleigh-Bénard cells, to heart beats, to the Belousov-Zhabotinsky reaction, to synchronization [1; 2; 3]. In this context there are some universal properties. One of these is the period-doubling bifurcation cascade [4; 5]. In simple systems undergoing a periodic driving, like one-dimensional maps, one can see that, increasing a parameter, the system undergoes a sequence of period doublings. At each of these transitions, a response of the system appears with a period double that of the preceding regime. As a result, one has a response with period \(2^{k}\) times the driving, and as the parameter tends towards a finite value, \(k\to\infty\) and the system becomes chaotic.
The period-doubling cascade has been observed in Nature in many contexts, from the convection rolls in water and mercury [8; 9], to nonlinear electronic circuits [10; 11], neurons [12], and infinite-range dissipative quantum spin systems described by a mean-field theory in the thermodynamic limit [13]. All these systems have in common the fact that they are nonlinear, are described by few effective variables, and undergo a periodic driving.
When the periodic driving is applied to many-body systems, the situation changes. As argued in [14], in these systems the generic response is the period doubling (\(k=1\) in the scheme above). Responses at larger multiples of the period are possible, but are non-generic and fragile to noise and to breaking the symmetry of the system. In the presence of noise, any spatially ordered pattern with a period \(m\) times the driving with \(m>2\) is doomed to be destroyed by the growth of bubbles generated by the random perturbations. The only possible stable patterns are the ones with period doubling, where at each period the driving exchanges the interior and the exterior of the bubble, which therefore alternately grows and shrinks.
This result is of great importance for the recent research on discrete time crystals in classical noisy periodically-driven systems [15; 16; 17]. In these many-body systems a persistent response at a multiple of the driving period appears only in the thermodynamic limit [18], and in all the examples known at this time this response occurs as a period doubling, in agreement with the results of [14] described above.
Motivated by this framework of research on classical time crystals, we aim to understand how the period-doubling cascade of nonlinear systems with few degrees of freedom changes when a many-body context is considered. In order to do that, we consider an array of coupled kicked rotors with dissipation. Without dissipation, this model reduces to a slight generalization [19] of the Hamiltonian coupled kicked rotors, a standard model for studies in classical Hamiltonian chaos [20; 21; 22; 23; 24; 25]. Without the coupling, this model reduces to the single dissipative kicked rotor, also known as the Zaslavsky map [26; 27]. This model shows a peculiar strange attractor and a very interesting dynamics [26], and both the single- and the many-body cases are very easy to simulate numerically, even for quite large system sizes. For our purposes, this single-particle model shows a behavior very similar to two parallel period-doubling cascades at the onset of chaos, so we can study the case of many coupled rotors and see if the period-doubling cascade survives.
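For orientation, the sketch below iterates a common form of the single dissipative kicked rotor (the dissipative standard map); the precise model used in this work is defined in Sec. II and may differ in detail, so this is only an illustrative stand-in.

```python
import numpy as np

def dissipative_kicked_rotor(p0, theta0, K, gamma, n_kicks):
    """Iterate a common form of the dissipative kicked rotor:

        p_{t+1}     = gamma * p_t + K * sin(theta_t)
        theta_{t+1} = theta_t + p_{t+1}   (mod 2 pi)

    This is an assumed illustrative form, not necessarily the map of Sec. II.
    """
    p, theta = p0, theta0
    traj = np.empty((n_kicks, 2))
    for t in range(n_kicks):
        p = gamma*p + K*np.sin(theta)
        theta = (theta + p) % (2.0*np.pi)
        traj[t] = p, theta
    return traj

# Stroboscopic momentum after a transient; a period-doubled response would
# show up as an alternation between two values at even and odd kicks.
traj = dissipative_kicked_rotor(0.1, 0.5, K=4.0, gamma=0.3, n_kicks=2000)
p_late = traj[-20:, 0]
print(np.round(p_late[::2], 3), np.round(p_late[1::2], 3))
```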
We probe the system stroboscopically, that is to say at discrete times, integer multiples of the driving period, and we consider appropriate averages over random initial conditions. Using some "period-doubling order parameters" inspired by the literature on time crystals [28; 29], we see that the period-doubling cascade is washed away and the model can show essentially only period doubling in the regime of regular dynamics. In small parameter ranges there is a response at a period 4 times the driving, but it is many orders of magnitude smaller than the one at period 2. So, we find that the findings of [14] are essentially confirmed, with the small period-4 response probably due to the fact that this model has continuous onsite variables, while the cellular automata and the dissipative time crystals have discrete onsite variables.
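One minimal stroboscopic witness of period doubling is the weight of the response at half the driving frequency; the sketch below shows such a quantity for toy signals. The order parameters actually used in Sec. IV are inspired by the time-crystal literature and may be defined differently.

```python
import numpy as np

def period_doubling_weight(signal):
    """Weight of the response at half the driving frequency.

    For a stroboscopic sequence p(t), t = 0, 1, 2, ..., a period-2 response
    gives a large |sum_t (-1)^t p(t)| / T, while a period-1 response gives
    (nearly) zero.
    """
    t = np.arange(len(signal))
    return np.abs(np.sum((-1.0)**t * signal))/len(signal)

# Period-1 versus period-2 toy stroboscopic signals
p1 = np.full(1000, 0.7)
p2 = np.where(np.arange(1000) % 2 == 0, 0.7, -0.7)
print(period_doubling_weight(p1), period_doubling_weight(p2))  # ~0 vs ~0.7
```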
Beyond the period \(m\)-tupling behavior, this model
shows a very rich and complex behavior, which we summarize in the phase diagram shown in Fig. 1. Here we focus on the behavior of the momentum coordinates, which is clearer to interpret and is usually the focus of studies on chaos in kicked-rotor models (see for instance [20; 21; 30]). Anticipating a little bit, on the axes there are two parameters of the model, \(J\) and \(K\) (the model is discussed in detail in Sec. II), while another parameter, \(\gamma\), is kept fixed at \(\gamma=0.8\).
Let us first of all focus on the regime where period doubling occurs, which in the phase diagram we term "Spatio-temporal ordering". This name is due to the fact that, whenever period doubling occurs, the system spontaneously organizes in space, breaking the translation symmetry and giving rise to patterns. These patterns are stable and persistent in time, change with a period twice that of the driving, and so give rise to the period doubling. They are an example of an effect of nonlinear dynamics in spatially extended systems that is very common in Nature [3], and the precise form of the patterns depends on the initial state chosen.
We emphasize that our model is symmetric under space translations and discrete time translations, and in this regime both symmetries are broken. The finding that if the system breaks the time-translation symmetry, then it also breaks the space-translation symmetry is a phenomenon analogous to the "spatio-temporal order" of quantum Floquet time crystals. There the time-translation symmetry breaking occurs only if also an internal symmetry of the model is broken, and long-range order is present [28; 31; 32]. For that reason we use the same name for this phenomenon.
Nevertheless, although our finding is very similar to a time crystal, it is important to specify that we are not seeing a true time crystal, because we see a persistent period-doubling response already at finite sizes, while a true time crystal should break the space and time translation symmetries only in the thermodynamic limit. The point is that the effects we see in our model are a consequence of nonlinear dynamics (similar to the cases described in [3]), a mechanism physically different from the quantum-phase-transition-like behavior involved in quantum Floquet time crystals (see for instance the discussion in [29]).
There are other regimes in the phase diagram of Fig. 1. We have first of all the regime marked with the label "Pattern", where the system shows patterns that are persistent in time and fixed in the stroboscopic time, so there is no period doubling. In this regime the system breaks the space-translation symmetry, but not the discrete time-translation symmetry. Quite remarkably, the transition from stroboscopic-time-independent patterns to spatio-temporal ordering cannot be seen in the average amplitude of the patterns, which changes continuously across this threshold. If one looks instead at the typical length scale of the patterns, one can see a drop when moving from the time-independent patterning to the spatio-temporal ordering. For some reason, in the presence of period doubling, the patterns oscillate in a tighter way.
Let us consider some properties of the stroboscopic-time-independent patterning regime. We see that the time-independent patterning exists only for \(J\) smaller than a threshold (\(J<0.45\)); for larger values only patterns with period doubling exist. Inside the stroboscopic-time-independent patterning region we mark a small region surrounded by yellow lines. This is the "weak patterning" regime, where, in a very jagged and seemingly fractal way, points without patterning alternate with points showing patterns of very small amplitude.
In the region labeled as "Trivial", the system relaxes to a uniform and time-independent condition, where all the momenta vanish. In the chaotic regime, in contrast, nearby trajectories in phase space diverge exponentially from each other, and the dynamics consists of aperiodic oscillations in space and time. Here the largest Lyapunov exponent (LLE - the measure of the rate of exponential divergence) is positive. The transition from negative (regular dynamics) to positive (chaotic dynamics) LLE is always sharp, except along the segment marked in red, where the LLE stays near \(0\) and intermittently becomes slightly positive. Near the onset of chaos - in a range of parameters - the characteristic length scale of the patterns shows a peak, where it increases by an order of magnitude, and then drops back inside the chaotic regime (where patterns still exist but are aperiodic in time and irregular).
Many of the transitions between the different regimes described above (basically, everything but the transition from stroboscopic-time-independent patterning to spatio-temporal ordering) can be seen in the behavior of the kinetic energy per site, a quantity often considered in studies of kicked rotors [19; 20; 21; 30]. In contrast with cases of Hamiltonian chaos, where this quantity increases steadily and without bound [20; 21; 25; 30], here the kinetic energy per site reaches an asymptotic value that does not scale with the system size.
This is important information for experimental realizations. Indeed, we propose an experimental realization of this model with an array of SQUID Josephson junctions subjected to a time-periodic pulsed magnetic flux. The kinetic energy per site translates into the charging energy per site of the superconducting system, and the fact that it is bounded makes it possible to tune the parameters so that this energy per site stays below the superconducting gap. In this way the array of Josephson junctions can keep superconductivity for long times and be correctly described by our model.
The paper is organized as follows. In Sec. II we introduce the model we study. In Sec. III we study chaos by means of the largest Lyapunov exponent, and map the boundary line of the "Chaos" region in Fig. 1. In Sec. IV we study the period \(m\)-tupling in the single- and many-rotor cases, describing the random initial conditions and the averaging procedure we perform over randomness and noise; we define period-doubling order parameters that correctly witness the period-doubling cascade in the single-rotor model, and show that in the many-rotor model only period doubling survives. In Sec. V we discuss the patterns and their properties, show how to quantify their amplitude and characteristic length scale, and see how these properties change at the thresholds between the different regimes in Fig. 1 and how they appear in the kinetic energy per site. In Sec. VI we discuss how to realize our model in an array of SQUID Josephson junctions with a pulsed time-periodic magnetic flux. In Sec. VII we draw our conclusions.
## II Model
We add dissipation to the many-body generalization of the kicked rotor considered in [19], a slight generalization of the paradigmatic model of many-body Hamiltonian chaos theory studied in [20; 21; 22; 23; 24; 25], which can even be realized experimentally with an array of bosonic Josephson junctions [25; 33]. The purely Hamiltonian model is given by the kicked Hamiltonian
\[H(t)=\sum_{j=1}^{L}\left[\frac{p_{j}^{2}}{2}-\delta_{1}(t)\left(J\cos(\theta_ {j}-\theta_{j+1})+K\cos(\theta_{j})\right)\right]\,, \tag{1}\]
where \(\theta_{j}\), \(p_{j}\) are pairs of canonically conjugate variables, we assume periodic boundary conditions (\(\theta_{L+1}=\theta_{1}\)), and we have defined the periodic delta function \(\delta_{\tau}(t)\equiv\sum_{n\in\mathbb{N}}\delta(t-n\tau)\) as in [23]. The rotors form a 1-dimensional chain with short-range interactions, and we focus on the stroboscopic dynamics, probing the system at discrete times \(t_{n}=n^{-}\), where the superscript "-" means that we look at the system just before the \(n\)-th kick has been applied.
Writing the canonical equations of the dynamics
\[\dot{\theta}_{j}=\partial_{p_{j}}H,\quad\dot{p}_{j}=-\partial_{\theta_{j}}H,\]
and using a standard analysis (essentially the integration of the second equation across the \(\delta\) function - see for instance [20; 30; 34; 21]) we see that the dynamics from \(t_{n}\) to \(t_{n+1}\) is described by a discrete map
\[p_{j}^{(n+1)} =p_{j}^{(n)}-J\left[\sin(\theta_{j}^{(n)}-\theta_{j+1}^{(n)})+ \sin(\theta_{j}^{(n)}-\theta_{j-1}^{(n)})\right]\] \[-K\sin(\theta_{j}^{(n)})\] \[\theta_{j}^{(n+1)} =\theta_{j}^{(n)}+p_{j}^{(n+1)}\,, \tag{2}\]
for \(j=1,\,\ldots,\,L\), where we have written for simplicity \(p_{j}^{(n)}=p_{j}(t_{n})\), \(\theta_{j}^{(n)}=\theta_{j}(t_{n})\).
We add dissipation to this model, so that between one kick and the next the momenta are damped by a factor \(0<\gamma<1\). The resulting map is
\[p_{j}^{(n+1)} =\gamma p_{j}^{(n)}-J\left[\sin(\theta_{j}^{(n)}-\theta_{j+1}^{(n )})+\sin(\theta_{j}^{(n)}-\theta_{j-1}^{(n)})\right]\] \[-K\sin(\theta_{j}^{(n)})\] \[\theta_{j}^{(n+1)} =\theta_{j}^{(n)}+p_{j}^{(n+1)}\,, \tag{3}\]
for \(j=1,\,\ldots,\,L\). This is the many-body generalization of the well-known single kicked rotor with dissipation [27] - also known as the Zaslavsky map [26] - to which our model reduces for \(L=1\):
\[p^{(n+1)} =\gamma p^{(n)}-K\sin(\theta^{(n)})\] \[\theta^{(n+1)} =\theta^{(n)}+p^{(n+1)}\,. \tag{4}\]
Here we keep \(\gamma\) fixed at \(\gamma=0.8\) (other choices of \(\gamma\) would give a qualitatively similar phase diagram). Where not otherwise specified, we always consider appropriate averages over random initial conditions, taking in each realization all the \(p_{j}^{(0)}\) and the \(\theta_{j}^{(0)}\) from a uniform random distribution in the interval \([-1,1]\). Of this model we study the chaotic properties, the period \(m\)-tupling properties, and the patterning. For the first we use the largest Lyapunov exponent, which we consider in Sec. III; for the second we define an appropriate set of order parameters in Sec. IV; and we discuss the last one in Sec. V.
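As a concrete reference, the following minimal Python sketch iterates the dissipative map of Eq. (3) from a random initial condition of the kind described above; the chain length, parameter values and number of kicks below are illustrative choices only, not the ones used for the figures.

```python
import numpy as np

def kicked_chain_step(theta, p, J, K, gamma):
    """One stroboscopic step of the dissipative map, Eq. (3),
    with periodic boundary conditions."""
    kick = J * (np.sin(theta - np.roll(theta, -1))
                + np.sin(theta - np.roll(theta, 1))) + K * np.sin(theta)
    p_new = gamma * p - kick
    theta_new = theta + p_new
    return theta_new, p_new

# random initial condition, uniform in [-1, 1] for every theta_j and p_j
rng = np.random.default_rng(0)
L = 512                                  # illustrative chain length
theta = rng.uniform(-1.0, 1.0, L)
p = rng.uniform(-1.0, 1.0, L)
for _ in range(30000):                   # illustrative number of kicks
    theta, p = kicked_chain_step(theta, p, J=0.1, K=3.0, gamma=0.8)
```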
## III Chaos
To quantify chaos, that is, the exponential increase in time of the distance between two nearby trajectories, we use the largest Lyapunov exponent (LLE - see for instance [35; 36]). It is defined in the following way. Consider two dynamics \(\mathbf{X}^{(n)}=(p_{1}^{(n)},\,\ldots,\,p_{L}^{(n)},\,\theta_{1}^{(n)},\,\ldots,\,\theta_{L}^{(n)})\) and
Figure 1: Sketch of the dynamical phase diagram for \(\gamma=0.8\). (For other values of \(\gamma\) the situation is qualitatively similar.) We recognize the trivial regime (where relaxation to a uniform asymptotic condition occurs), the “Pattern” regime (where the system breaks space translation symmetry by generating persistent stroboscopic-time-independent patterns), the “Spatio-temporal ordered” regime (where the persistent patterns depend on the stroboscopic time with a period double the one of the driving, breaking thereby space and time-translation symmetry), and the chaotic regime, where the dynamics is aperiodic in space and time. Inside the yellow region there is the weak patterning regime (see text for a description).
\({\bf X}^{(n)}{}^{\prime}=\left(p_{1}^{(n)}{}^{\prime},\,\dots,\,p_{L}^{(n)}{}^{ \prime},\,\theta_{1}^{(n)}{}^{\prime},\,\dots,\,\theta_{L}^{(n)}{}^{\prime}\right)\) with different initial conditions \({\bf X}^{(0)}\), \({\bf X}^{(0)}{}^{\prime}\) such that \(||{\bf X}^{(0)}-{\bf X}^{(0)}{}^{\prime}||=\epsilon>0\), the LLE is defined as
\[{\rm LLE}=\lim_{\epsilon\to 0}\lim_{n\to\infty}\frac{1}{n}\log\left(\frac{||{\bf X }^{(n)}-{\bf X}^{(n)}{}^{\prime}||}{\epsilon}\right)\,, \tag{5}\]
where \(||\dots||\) is the quadratic norm. In a chaotic dynamics, this quantity evaluates the rate at which nearby trajectories exponentially separate from each other. So, when the dynamics is chaotic this quantity is positive; in the absence of chaos it is vanishing or negative. To numerically evaluate the LLE we use the method explained in [37].
The method goes as follows. One considers as above two initial conditions distant \(\epsilon\). After the first cycle one evaluates the distance \(d_{1}=||{\bf X}^{(1)}-{\bf X}^{(1)}{}^{\prime}||\). Then one redefines \({\bf X}^{(1)}{}^{\prime}\) as
\[{\bf X}^{(1)}{}^{\prime\prime}={\bf X}^{(1)}+\frac{\epsilon}{d_{1}}({\bf X}^{ (1)}{}^{\prime}-{\bf X}^{(1)})\,, \tag{6}\]
so that the distance between \({\bf X}^{(1)}\) and \({\bf X}^{(1)}{}^{\prime\prime}\) becomes \(\epsilon\) again. With these initial conditions one performs another stroboscopic-evolution step, getting some \({\bf X}^{(2)}\) and \({\bf X}^{(2)}{}^{\prime}\). So one gets another value of the distance \(d_{2}\), and performs a redefinition of \({\bf X}^{(2)}{}^{\prime}\) as in Eq. (6). This cycle is repeated many times, so that one gets a sequence of distances \(d_{1},\,d_{2},\,d_{3},\,\dots,\,d_{n}\), and the Lyapunov exponent is given by
\[{\rm LLE}=\frac{1}{\cal T}\sum_{k=1}^{\cal T}\log\frac{d_{k}}{\epsilon}\,, \tag{7}\]
where \({\cal T}\) is large enough and \(\epsilon\) small enough so that convergence has been attained.
We use precisely this formula to get the LLE. We choose \({\bf X}^{(0)}\) taking all the \(p_{j}^{(0)}\) and the \(\theta_{j}^{(0)}\) from a uniform random distribution in the interval \([-1,1]\). We take \({\bf X}^{(0)}{}^{\prime}\) equal to \({\bf X}^{(0)}\) everywhere but on the coordinate \(p_{1}^{(0)}{}^{\prime}\), which we take as \(p_{1}^{(0)}{}^{\prime}=p_{1}^{(0)}+\epsilon\). To make convergence faster, we average Eq. (7) over \(N_{\rm r}\) realizations of the random \({\bf X}^{(0)}\).
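A minimal Python sketch of this rescaling procedure applied to the map of Eq. (3) is the following; the system size, number of kicks and value of \(\epsilon\) are illustrative, and the average over the \(N_{\rm r}\) realizations is left out for brevity.

```python
import numpy as np

def largest_lyapunov(J, K, gamma=0.8, L=128, n_steps=20000, eps=1e-8, seed=0):
    """Benettin-style estimate of the LLE of the map Eq. (3), Eqs. (6)-(7):
    evolve two nearby chains, rescale their separation back to eps after
    every kick, and average the logarithmic stretching factors."""
    def step(theta, p):
        kick = J * (np.sin(theta - np.roll(theta, -1))
                    + np.sin(theta - np.roll(theta, 1))) + K * np.sin(theta)
        p_new = gamma * p - kick
        return theta + p_new, p_new

    rng = np.random.default_rng(seed)
    theta = rng.uniform(-1.0, 1.0, L)
    p = rng.uniform(-1.0, 1.0, L)
    theta2, p2 = theta.copy(), p.copy()
    p2[0] += eps                      # displace one momentum coordinate by eps
    log_sum = 0.0
    for _ in range(n_steps):
        theta, p = step(theta, p)
        theta2, p2 = step(theta2, p2)
        d = np.sqrt(np.sum((p - p2) ** 2 + (theta - theta2) ** 2))
        log_sum += np.log(d / eps)
        # rescale the perturbed trajectory back to distance eps, Eq. (6)
        theta2 = theta + (eps / d) * (theta2 - theta)
        p2 = p + (eps / d) * (p2 - p)
    return log_sum / n_steps
```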
Fixing \(J\), we find a quite sudden transition in \(K\) from regular behavior (\({\rm LLE}<0\)) to chaotic behavior (\({\rm LLE}>0\)), provided the system size is large enough. This allows us to map the boundary line of the chaotic region shown in Fig. 1. The only region where the mapping of this line is problematic is around \(J=0.04\). Here there is not a sharp transition from a negative LLE to a positive one, but a range (the segment in red in Fig. 2) where the Lyapunov exponent lies near \(0\) and often becomes slightly positive (see inset of Fig. 2). This value of \(J\) marks an abrupt change in the behavior of the boundary line of the chaotic region, which for \(J<0.04\) keeps a value similar to the single-particle case (\(J=0\)) and for \(J>0.04\) starts going down as a straight line (see Fig. 1).
## IV Period \(m\)-tupling
### Bifurcation diagram of the single rotor
In order to discuss period \(m\)-tupling, let us start with the single dissipative rotor model, Eq. (4). This model displays a period-doubling cascade similar to the standard one seen in the May map. Before discussing it more quantitatively, it is useful to show it by means of a bifurcation diagram. In the plots of Fig. 3 we put \(K\) on the horizontal coordinate and - for each value of \(K\) - we plot on the vertical coordinate the last \(10^{3}\) stroboscopic values of \(p^{(n)}\), for an evolution lasting \({\cal T}=10^{5}\) periods. If for that \(K\) the system relaxes to an asymptotic stroboscopic value, we see a single value on the vertical coordinate; if there is a period doubling we see two values; if there is period \(m\)-tupling for generic \(m\) we see \(m\) values.
What we see is a period-doubling cascade, that is, a sequence of values of \(K\) where pitchfork bifurcations occur and the number of points doubles. (See [4], [2], [5] for more details on pitchfork bifurcations and the period-doubling cascade.) So one moves from period doubling to period 4-tupling, to period 8-tupling, to period 16-tupling (all the powers of \(2\)). The period-doubling cascade ends at the onset of chaos, which here starts at \(K\simeq 5.978\), as one can see in the bifurcation plot as the appearance of a region with randomly scattered points (one can confirm this chaos threshold with the LLE analysis explained in Sec. III). This period-doubling cascade is similar to that of the logistic map, but there is more, because there are two parallel bifurcation cascades, as one can see in Fig. 3(b), and beyond that also some small ones near \(K=5.97\). Quite remarkably, for each value of \(K\), the system chooses just one of the bifurcation cascades, in an apparently random way.
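For reference, a minimal Python sketch of how the data for such a bifurcation diagram can be generated from the single-rotor map, Eq. (4), is the following (the range of \(K\) and the iteration counts are illustrative):

```python
import numpy as np

def single_rotor_tail(K, gamma=0.8, n_total=20000, n_keep=1000, seed=0):
    """Iterate the single dissipative rotor, Eq. (4), and return the last
    n_keep stroboscopic momenta (one column of the bifurcation diagram)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-1.0, 1.0)
    p = rng.uniform(-1.0, 1.0)
    tail = []
    for n in range(n_total):
        p = gamma * p - K * np.sin(theta)
        theta = theta + p
        if n >= n_total - n_keep:
            tail.append(p)
    return np.array(tail)

K_values = np.linspace(5.0, 6.0, 400)          # illustrative range of K
diagram = [single_rotor_tail(K) for K in K_values]
```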
### Definition of the period \(m\)-tupling order parameters
In order to better quantify these phenomena in a way that can be generalized to the many-body case, we take inspiration from the literature on discrete time crystals [28], and define the following period \(m\)-tupling onsite order parameters
\[\mathcal{O}_{n}^{m}(j)=\left[p_{j}^{(n)}\cos\left(\frac{2\pi n}{m}\right) \right]\,, \tag{8}\]
We then average them over time and sites, obtaining
\[\mathcal{O}^{m}=\frac{1}{n_{0}}\sum_{n=\mathcal{T}-n_{0}}^{\mathcal{T}}\frac{ 1}{L}\sum_{j=1}^{L}\mathcal{O}_{n}^{m}(j)\,, \tag{9}\]
where we choose \(n_{0}\) and \(\mathcal{T}\) so that the initial transient has died out and the sum over \(n\) has converged. Our order parameter is the average over random initial-state realizations [this average is represented by the symbol \(\overline{(\ldots)}\)] of the absolute value of this quantity
\[\mathcal{O}^{(m)}=\overline{|\mathcal{O}^{m}|}\,. \tag{10}\]
It is not difficult to convince oneself that - for \(m>1\) - if the \(p_{j}^{(n)}\) show a response with period \(m\), then the average over an infinite time of \(\mathcal{O}_{n}^{m}\) is nonvanishing. Notice that the system could show a linear superposition of responses with different periods, so a nonvanishing \(\mathcal{O}^{(m)}\) is a necessary condition for period \(m\)-tupling, but not a sufficient one. For instance, there can also be a response with period \(2m\) - thereby a period \(2m\)-tupling - and we would still find a nonvanishing \(\mathcal{O}^{(m)}\), as we are going to see below.
It is important to stress that these order parameters are useful only in the regular regime where \(\mathrm{LLE}<0\). Where there is chaos (\(\mathrm{LLE}>0\)) the dynamics of the \(p_{j}^{(n)}\) shows aperiodic oscillations, there is a response at all frequencies (see for instance [38]), and so all the \(\mathcal{O}^{(m)}\) are nonvanishing, providing no information on the existence of a period doubling and just trivially witnessing the chaos in the dynamics. So we will focus our period \(m\)-tupling analysis on the regime of regular dynamics.
We define also the quadratic average
\[\mathcal{O}^{(2,m)}=\overline{\left[\frac{1}{n_{0}}\sum_{n=\mathcal{T}-n_{0}}^ {\mathcal{T}}\frac{1}{L}\sum_{j=1}^{L}\mathcal{O}_{n}^{m}(j)\right]^{2}}\,, \tag{11}\]
and obtain the uncertainty on \(\mathcal{O}^{(m)}\) as
\[\delta\mathcal{O}^{(m)}=\frac{1}{\sqrt{N_{r}}}\sqrt{\mathcal{O}^{(2,m)}-( \mathcal{O}^{(m)})^{2}}\,, \tag{12}\]
where \(N_{r}\) is the total number of randomness/noise realizations. In all the numerical analyses that follow, we choose \(\mathcal{T}\) finite but large enough that the averages in Eqs. (10) and (11) have converged.
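A minimal Python sketch of how \(\mathcal{O}^{(m)}\) can be estimated along these lines is the following; all sizes, iteration counts and numbers of realizations are illustrative.

```python
import numpy as np

def order_parameter(m, J, K, gamma=0.8, L=128, n_total=40000, n_avg=1000,
                    n_real=20, seed=0):
    """Estimate O^(m) of Eqs. (8)-(10): time- and site-average of
    p_j^(n) cos(2*pi*n/m) over the last n_avg kicks, then average of its
    absolute value over random initial conditions."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_real):
        theta = rng.uniform(-1.0, 1.0, L)
        p = rng.uniform(-1.0, 1.0, L)
        acc = 0.0
        for n in range(1, n_total + 1):
            kick = J * (np.sin(theta - np.roll(theta, -1))
                        + np.sin(theta - np.roll(theta, 1))) + K * np.sin(theta)
            p = gamma * p - kick
            theta = theta + p
            if n > n_total - n_avg:
                acc += np.mean(p) * np.cos(2.0 * np.pi * n / m)
        samples.append(abs(acc / n_avg))
    return np.mean(samples)
```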
### Results of the order-parameter analysis
Let us start with the case of the single rotor. We show \(\mathcal{O}^{(m)}\) versus \(K\) for \(m=2\), \(4\), \(8\) in Fig. 4. We see that the behavior of the order parameters closely mirrors that of the bifurcation diagram in Fig. 3. \(\mathcal{O}^{(2)}\) is nonvanishing whenever there is at least period doubling [Fig. 3(a)], \(\mathcal{O}^{(4)}\) is nonvanishing whenever there is at least period \(4\)-tupling, and \(\mathcal{O}^{(8)}\) is nonvanishing whenever there is at least period \(8\)-tupling [Fig. 3(b)]. We also see that the response at \(m=4\) is two orders of magnitude smaller than the one at \(m=2\), and the one at \(m=8\) even smaller. This property closely mirrors the fact that at each bifurcation the outcoming branches are much nearer to each other than those at the previous bifurcation (see Fig. 3 and the self-similarity analysis in [6]).
Of course, the analysis is meaningful only when the dynamics is regular. After the onset of chaos (marked by the vertical line in Fig. 3) all the order parameters are nonvanishing and of the same order of magnitude, because the chaotic dynamics has contributions at all frequencies.
If we add the interactions, the period-doubling cascade is washed away, as shown in Fig. 5.
Figure 3: (Panel a) Bifurcation diagram. (Panel b) A magnification thereof near the upper branch just below the onset of chaos. Notice the two main parallel bifurcation cascades (for each value of \(K\) there are points on only one of the two).
Focusing on the regular-dynamics regime, we see that the period-doubling response is dominant. There are still some small regions with period 4-tupling, but the response at \(m=4\) is three orders of magnitude smaller than the one at \(m=2\), making it quite negligible. Again, in the chaotic regime there is a response at all frequencies due to chaos and we do not consider it. In the figure we choose a specific value of \(J\), but we have checked that this behavior is general.
So, we essentially find confirmation of the results of [14]: whenever the dynamics is regular, and it therefore makes sense to speak about period \(m\)-tupling, the dominant response is period doubling, with some negligible contribution at longer periods. The fact that this contribution exists, in contrast with the results of [14] for cellular automata, is probably due to the fact that our system has continuous local variables, in contrast with the discrete local variables of cellular automata and of the known classical dissipative time crystals [15; 16; 17].
We see in Fig. 5(a) that \(\mathcal{O}^{(2)}\) vanishes for \(K\) up to a threshold and then abruptly departs from \(0\). The threshold where this happens marks the line bounding from below the phase "Spatio-temporal ordering" in Fig. 1. The reason why we call the period-doubling regime "spatio-temporal ordering" appears in the next Section, where we show how the period doubling is intimately related to the appearance of patterns in the system: similar to time crystals [31], spontaneous ordering in time occurs together with spontaneous ordering in space.
## V Pattern formation
### Examples of persistent patterns
When the system is in the regime "Trivial" in Fig. 1, all the values of \(p_{j}\) relax to \(0\). Otherwise, in the regimes "Pattern" and "Spatio-temporal order", after a transient has died away, the system spontaneously forms patterns of \(p_{j}\) that are persistent in time. We find that different
Figure 4: Single rotor. (Panel a) Period-doubling order parameters versus \(K\). (Panel b) Period 4-tupling and period 8-tupling order parameters versus \(K\). The vertical line marks the onset of chaos, as found with the LLE analysis. Numerical parameters \(\mathcal{T}=4\cdot 10^{4}\), \(n_{0}=10^{3}\), \(N_{\mathrm{r}}=10^{3}\).
Figure 5: Many rotors with \(J=0.1\). (Panel a) Period-doubling order parameters versus \(K\) for different values of \(L\). (Panel b) Period 4-tupling order parameter versus \(K\) for different values of \(L\). The vertical line marks the onset of chaos, as found with the LLE analysis. Numerical parameters \(\mathcal{T}=4\cdot 10^{4}\), \(n_{0}=10^{3},N_{\mathrm{r}}=10^{3}\).
initial conditions give rise to different asymptotic patterns, a phenomenon common in nonlinear dynamics.
In the absence of period doubling (the regime "Pattern") the patterns are time independent. We show an example in Fig. 6(a). We initialize with one random initial condition and wait until the initial transient dies out. We see that the pattern is constant in the stroboscopic time, so it remains unchanged whatever the value of \(n\).
In the regime "Spatio-temporal order", instead, the patterns are associated with period doubling. In this regime, the persistence in time of the pattern means that it changes at each stroboscopic time and comes back to itself after two cycles. We show an example in Fig. 6(b). The period doubling appears in the fact that the pattern has a constant form for \(n\) even (\(n=3\cdot 10^{4}\), \(n=6\cdot 10^{4}\)) and a different, equally constant form for \(n\) odd (\(n=3\cdot 10^{4}+1\), \(n=6\cdot 10^{4}+1\)).
### Analysis of the pattern amplitude
In order to study the existence and the properties of the patterns, it is important to quantify them. For that purpose we introduce two quantities. The first one is the pattern amplitude. To evaluate it, we fix \(n\gg 1\), consider the variance of \(p_{j}^{(n)}\) over space, average it over random initial-state realizations, and take the square root, namely
\[\delta p_{n}=\left[\;\overline{\frac{1}{L}\sum_{j}(p_{j}^{(n)})^{2}-\left( \frac{1}{L}\sum_{j}p_{j}^{(n)}\right)^{2}}\;\right]^{1/2}\;. \tag{13}\]
This quantity is very important, because it marks the existence of the patterns (it vanishes in the trivial state). We show some examples of \(\delta p_{n}\) versus \(K\) in Fig. 7. In all these figures we consider two values of \(n\) (\(n=3\cdot 10^{4}\) and \(n=6\cdot 10^{4}\)) to show that \(\delta p_{n}\) has converged in time, and we consider \(L=512\), large enough that all finite-size effects have disappeared. Let us first focus our attention on the case \(J=0.1\) [Fig. 7(a)]. We see first of all that \(\delta p_{n}\) is independent of \(n\) for \(n\geq 3\cdot 10^{4}\). We see many features; let us discuss them moving from right to left.
At the onset of chaos (red vertical line on the extreme right) we notice that \(\delta p_{n}\) abruptly starts to increase, with a discontinuous derivative. In the chaotic regime, patterns depend on time in an aperiodic fashion, but the average over initial-state realizations provides a \(\delta p_{n}\) that does not depend on \(n\). At the onset of the period doubling (green vertical line, second from the right) we see that \(\delta p_{n}\) shows no special feature.
The period-doubling regime is fully contained in a region where there is patterning (\(\delta p_{n}>0\)) and - for this value of \(J\) - the disappearance of period doubling has no effect, neither on \(\delta p_{n}\) nor on its derivative. This is the case for \(J<0.45\); in contrast, for \(J\geq 0.45\) the threshold for patterning coincides with the one for period doubling [we can see an example of that also in the plot in Fig. 7(c)]. It is important to emphasize that period doubling always appears in association with the appearance of patterns; that is why we call the range of parameters where period doubling appears "spatio-temporally ordered".
Between the two yellow vertical lines in Fig. 7(a) lies the weak patterning regime. It is characterized by a jagged profile where very small values of \(\delta p_{n}\) (of order \(10^{-3}\)) alternate with vanishing values (and then no pattern at all). We show some magnifications of this jagged profile in Fig. 8. This regime disappears around \(J\simeq 0.14\), but also for larger \(J\) we can see a marked local minimum of \(\delta p_{n}\) for \(K\) just below the onset of period doubling [see Fig. 7(b)].
For \(J=0.5\) we are well inside the regime where the onsets of patterning and of period doubling coincide, and patterns in the regular regime always show a time dependence of period 2 (spatio-temporal order) [Fig. 7(c)]. At the transition to chaos one can see the same features as in the other two cases.
Figure 6: Examples of asymptotic patterns. (a) Example for the regime “Pattern” in Fig. 1. Here the pattern is persistent and independent of the stroboscopic time \(n\) (Numerical parameters \(J=0.1\), \(K=2.71\), \(L=512\), one given random initial condition). (b) Example for the regime “Spatio-temporal ordering” in Fig. 1. Here the pattern is persistent and has a periodicity double that of the driving, so the patterns for \(n\) even coincide with each other, and the same holds for the patterns for \(n\) odd (Numerical parameters \(J=0.1\), \(K=4.04\), \(L=512\), one given random initial condition. For clarity we have plotted only part of the pattern).
### Analysis of the characteristic length scale of the pattern
Another important property is the shape of the pattern. We can see in Fig. 6 that the patterns oscillate in space, and to these oscillations we can associate a wavelength, which provides the characteristic length scale of the pattern. We can estimate this length scale in the following way. Exploiting the fact that the number of space oscillations is independent of \(n\) (we can see some examples in Fig. 6), we choose \(n\gg 1\) and numerically perform the space Fourier transform of \(p_{j}^{(n)}\) as
\[f(\kappa)=\frac{1}{L}\sum_{j=1}^{L}\mathrm{e}^{i\kappa j}\,p_{j}^{(n)}\,, \tag{14}\]
where \(\kappa=2\pi\ell/L\), with \(\ell\in\{1,\,\ldots,\,L\}\) an integer. Then we evaluate the power spectrum \(|f(\kappa)|^{2}\) and choose the value \(\kappa_{\mathrm{max}}^{(n)}\) where this power spectrum is maximum. The corresponding wavelength is \(\lambda_{\mathrm{max}}^{(n)}=2\pi/\kappa_{\mathrm{max}}^{(n)}\). We perform the logarithmic average of this quantity over random realizations of the initial state and get an estimate of the characteristic length scale of the patterns for a given set of parameters
\[\lambda_{n}^{*}=\exp\left(\overline{\log\lambda_{\mathrm{max}}^{(n)}}\right)\,. \tag{15}\]
(We choose the logarithmic average because the distributions of the \(\lambda_{\mathrm{max}}\) are broad).
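A minimal Python sketch of how these two diagnostics can be extracted from a single asymptotic configuration \(p_{j}^{(n)}\) is the following; the averages over initial-state realizations in Eqs. (13) and (15) are omitted for brevity.

```python
import numpy as np

def pattern_amplitude(p):
    """Spatial standard deviation of the momenta: the single-realization
    content of Eq. (13)."""
    return np.sqrt(np.mean(p ** 2) - np.mean(p) ** 2)

def pattern_length_scale(p):
    """Dominant wavelength of the pattern, Eqs. (14)-(15), for a single
    realization: position of the maximum of the spatial power spectrum."""
    L = len(p)
    power = np.abs(np.fft.fft(p)) ** 2
    ell = 1 + np.argmax(power[1:L // 2 + 1])   # skip the uniform component
    return L / ell                             # lambda_max = 2*pi / kappa_max
```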
We plot some examples of \(\log_{10}\lambda_{n}^{*}\) versus \(K\), for \(L=512\) and different values of \(J\), in Fig. 9. For \(J=0.1\) [Fig. 9(a)] we see a sudden drop inside the weak-patterning regime (in part of which \(\lambda_{n}^{*}\) is not even defined), and at the onset of the spatio-temporally ordered regime (dark green vertical line). So, although the amplitude of the patterns was not able to detect the transition to spatio-temporal ordering, it can be clearly seen in the characteristic length scale of the pattern. At the onset
Figure 7: \(\delta p_{n}\) versus \(K\) for different values of \(J\) and of \(n\) (chosen such that convergence has been attained). The vertical lines mark the boundaries of the different regimes listed in Fig. 1: the red line bounds from below the chaotic regime, the green one the period-doubling (spatio-temporal ordering) regime, the purple one the patterning regime and between the yellow lines lies the weak patterning regime. The weak patterning regime exists only in panel (a), and in panel (c) the onset of patterning coincides with the onset of period doubling. Numerical parameters: \(N_{\mathrm{r}}=10^{3},\,L=512\); (a) \(J=0.1\), (b) \(J=0.2\); (c) \(J=0.5\).
Figure 8: Examples of \(\delta p_{n}\) versus \(K\) in the weak patterning regime. Numerical parameters: \(L=160,\,N_{\mathrm{r}}=10^{3}\).
of chaos (red vertical line) the characteristic length scale shows a huge peak, increasing by one order of magnitude and then suddenly dropping. In some sense, at this transition the range of the correlations of the system increases.
For \(J=0.2\) [Fig. 9(b)] the weak patterning regime has disappeared, but \(\log_{10}\lambda_{n}^{*}\) still shows a sudden drop at the onset of the spatio-temporally ordered regime and the peak at the onset of chaos.
For \(J=0.5\) [Fig. 9(c)] there are no stroboscopic-time-independent patterns, the threshold for period doubling coincides with that for patterning, and there is no peak of the characteristic length scale at the onset of chaos.
### Behavior of the kinetic energy
Some properties of the patterns can be read in the kinetic energy per site defined as
\[E(n)=\frac{1}{2L}\sum_{j}[p_{j}^{(n)}]^{2}\,. \tag{16}\]
We consider its average over initial-state realizations and time
\[E=\frac{1}{n_{0}}\sum_{n=\mathcal{T}-n_{0}}^{\mathcal{T}}\overline{E(n)}\,, \tag{17}\]
where \(\mathcal{T}\) and \(n_{0}\) are chosen so that convergence is reached. We see first of all that this quantity is finite, even in the chaotic regime (see Fig. 10). This is an important difference compared with the Hamiltonian case, where chaos leads to an unbounded increase of the energy in time. Moreover, we see in this quantity the same features that we see in \(\delta p\) (compare with Fig. 7). In particular, it vanishes in the trivial regime, and shows a discontinuity in the derivative and a sudden increase at the onset of chaos. Therefore, the kinetic energy can also be used to detect pattern formation.
In Fig. 10 we see also that the energy per site is bounded and does not scale with the system size (it is
Figure 10: Onsite kinetic energy averaged over time and initial-state realizations [Eq. (17)] versus \(K\), for different values of \(J\) and \(L\). Numerical parameters: \(N_{\mathrm{r}}=10^{3}\), \(n_{0}=10^{3}\), \(\mathcal{T}=6\cdot 10^{4}\).
Figure 9: Logarithm of the characteristic length scale of the patterns, \(\log_{10}\lambda_{n}^{*}\), versus \(K\) for \(L=512\) and different values of \(J\) [(a) \(J=0.1\), (b) \(J=0.2\), (c) \(J=0.5\)]. The vertical lines mark the boundaries of the regimes as in Fig. 7, and the errorbars are evaluated as \(\delta\log_{10}\lambda^{*}=\frac{1}{\sqrt{N_{\mathrm{r}}}}\left[\overline{(\log_{10}\lambda)^{2}}-\overline{(\log_{10}\lambda)}^{2}\right]^{1/2}\). \(N_{\mathrm{r}}=10^{3}\).
actually size-independent for the values of \(L\) we are considering). This is a key point to make possible an experimental realization with an array of Josephson junctions, as we are going to show in the next section.
## VI Proposal for experimental realization
The Hamiltonian in Eq. (1) can be realized by means of an array of Josephson junctions (see for instance [39; 40; 41] for details). We show it in Fig. 11. With the crosses we mark SQUIDs, that is, Josephson junctions whose energy can be tuned by applying a magnetic flux. So we have \(E_{J}=E_{J}(\Phi)=E_{J}^{(0)}\cos(2\pi\Phi/\Phi_{0})\) for the junctions in the upper row and \(E_{K}=E_{K}(\Phi)=E_{K}^{(0)}\cos(2\pi\Phi/\Phi_{0})\) for the junctions in the lower row, where \(\Phi\) is the magnetic flux and \(\Phi_{0}=hc/(2e)\) the Cooper-pair flux quantum. For the junctions in the upper row we neglect the capacitance; for each of those in the lower row the capacitance is \(C\), so that the corresponding charging energy is \(E_{C}=e^{2}/C\). To each junction of the lower row the corresponding capacitor is connected in parallel, and each of these parallel circuits is connected to the ground by means of a resistance \(R\).
On each site (the green balls) we have two canonically conjugate dynamical variables, the gauge-invariant superconducting phase \(\theta_{j}\) and the charge \(q_{j}\) (expressed in units of the electron charge \(e\)). If we assume that the superconducting pieces just above the resistances all have the same phase \(\varphi\), then, up to a shift of the \(\theta_{j}\), we can set this phase to \(0\) (\(\varphi=0\)). When there is no resistance, the circuit is described by the Hamiltonian
\[H=\sum_{j=1}^{L}\left[\frac{1}{2}E_{C}q_{j}^{2}-E_{J}(\Phi)\cos(\theta_{j}-\theta_{j+1})-E_{K}(\Phi)\cos(\theta_{j})\right]\,. \tag{18}\]
In the figure open boundary conditions are represented, but we can impose periodic boundary conditions by adding a junction \(E_{J}\) connecting the first and the last sites. We apply the following time-periodic protocol:
1. For a time \(T_{1}\) the flux \(\Phi\) is kept equal to \(\Phi=\Phi_{0}/4\) (all the SQUIDs have vanishing Josephson energy and they are closed);
2. For a time \(T_{2}\) the flux \(\Phi\) is kept equal to \(\Phi=0\) (all the SQUIDs have maximum Josephson energy and they are open);
Physically, this corresponds to a periodic protocol with period \(T_{1}+T_{2}\) where a kick lasting \(T_{2}\) is applied, because during the interval \(T_{1}\) the Josephson energies of the SQUIDs vanish and there is no Josephson energy term acting.
We consider the stroboscopic evolution, looking at what happens from just before one kick to just before the next, that's to say from time \(t_{n}=n(T_{1}+T_{2})+T_{1}\) to \(t_{n+1}=(n+1)(T_{1}+T_{2})+T_{1}\). The dynamics is provided by the canonical equations
\[\hbar\dot{q}_{j} =-\partial_{\theta_{j}}H\,,\] \[\hbar\dot{\theta}_{j} =\phantom{-}\partial_{q_{j}}H\,. \tag{19}\]
We can solve these equations and, provided the duration of the kick is short enough, \(T_{2}\ll\min\left(\frac{\hbar}{\sqrt{E_{J}^{(0)}E_{C}}},\,\frac{\hbar}{\sqrt{E_{K}^{(0)}E_{C}}}\right)\), we can approximate the evolution from time \(t_{n}=n(T_{1}+T_{2})+T_{1}\) to \(t_{n+1}=(n+1)(T_{1}+T_{2})+T_{1}\) as a discrete map
\[\hbar q_{j}(t_{n+1}) =\hbar q_{j}(t_{n})-T_{2}E_{J}^{(0)}(\sin(\theta_{j}-\theta_{j+1} )+\sin(\theta_{j}-\theta_{j-1}))\] \[-T_{2}E_{K}^{(0)}\sin(\theta_{j})\] \[\hbar\theta_{j}(t_{n+1}) =\hbar\theta_{j}(t_{n})+T_{2}E_{C}q_{j}(t_{n+1})\,. \tag{20}\]
Applying the change of variables \(\theta_{j}^{(n)}=\theta_{j}(t_{n})\), \(p_{j}^{(n)}=\frac{T_{2}E_{C}}{\hbar}q_{j}(t_{n})\), and defining \(J\equiv T_{2}^{2}\frac{E_{C}E_{J}^{(0)}}{\hbar^{2}}\), \(K\equiv T_{2}^{2}\frac{E_{C}E_{K}^{(0)}}{\hbar^{2}}\), we get back the Hamiltonian mapping Eq. (2).
In order to get the model with dissipation, we consider a nonvanishing resistance \(R\). Regarding the presence of the resistances, we emphasize that they are quite realistic, because they are naturally present, and the difficult thing is removing them rather than adding them. With the resistance, between one kick and the next the junctions are switched off and the charges are damped as in an \(RC\) circuit, so that the map Eq. (20) becomes
\[\hbar q_{j}(t_{n+1}) =\mathrm{e}^{-RCT_{1}}\left[\hbar q_{j}(t_{n})-T_{2}E_{J}^{(0)}( \sin(\theta_{j}-\theta_{j+1})\right.\] \[+\left.\sin(\theta_{j}-\theta_{j-1}))-T_{2}E_{K}^{(0)}\sin(\theta_ {j})\right]\] \[\hbar\theta_{j}(t_{n+1}) =\hbar\theta_{j}(t_{n})+E_{C}\left(\frac{1-\mathrm{e}^{-RCT_{1}}} {RC}\right)q_{j}(t_{n+1})\,, \tag{21}\]
provided that \(T_{2}\ll RC\). Applying the change of variables \(\theta_{j}^{(n)}=\theta_{j}(t_{n})\), \(p_{j}^{(n)}=\frac{E_{C}}{\hbar}\left(\frac{1-\mathrm{e}^{-RCT_{1}}}{RC}\right)q _{j}(t_{n})\), and
Figure 11: Josephson junction array for the experimental realization of the model. Here we show open boundary conditions. In order to get periodic boundary conditions, one more junction \(E_{J}\) is needed connecting the first and the last sites.
defining
\[\gamma \equiv\mathrm{e}^{-RCT_{1}}\] \[J \equiv T_{2}\frac{E_{C}E_{J}^{(0)}}{\hbar^{2}}\,\mathrm{e}^{-RCT_{1}} \left(\frac{1-\mathrm{e}^{-RCT_{1}}}{RC}\right)\] \[K \equiv T_{2}\frac{E_{C}E_{K}^{(0)}}{\hbar^{2}}\,\mathrm{e}^{-RCT_{1}} \left(\frac{1-\mathrm{e}^{-RCT_{1}}}{RC}\right) \tag{22}\]
we get back the dissipative mapping Eq. (3).
There is an important consideration about the charging energy per site, given by \(E_{q}(n)=\frac{E_{C}}{2L}\sum_{j=1}^{L}[q_{j}(t_{n})]^{2}\). This quantity is proportional to the kinetic energy per site discussed in Sec. V.4, and we have seen there that in the dissipative model it attains an asymptotic value of order 1. By appropriately tuning the parameters, it is possible to keep this asymptotic value smaller than the superconducting gap of the system. In this way, the model obeys the dynamics we have described here for long times. In contrast, in the case of Hamiltonian evolution, the energy at some point starts increasing in an unbounded way [20; 21; 25], eventually goes beyond the gap, and superconductivity is lost.
## VII Conclusion
In conclusion, we have studied a model of coupled kicked rotors with dissipation. In the case of a single rotor, the model reduces to the Zaslavsky map and shows a behavior closely similar to the period-doubling route to chaos, which is so widespread in Nature. (Actually, we see two parallel period-doubling cascades - see Fig. 3.)
We focus on the momenta probed at discrete stroboscopic times (at each period of the driving) and consider appropriate averages over many random initial conditions. Using some period \(m\)-tupling order parameters inspired by the time-crystal literature, we see that in the many-rotor case the period-doubling cascade disappears and one can essentially see only period doubling, with here and there much smaller contributions at a period four times the driving. So, we essentially find confirmation of the findings of Ref. [14], with the presence of the small period-4 contributions probably related to the fact that this model has continuous local variables, in contrast with the discrete ones of cellular automata and classical time crystals.
The dynamics of this model is very rich. First of all, we see that the period doubling always appears in association with the spontaneous formation of patterns, which are stable and persistent in time and depend on the stroboscopic time with a periodicity double that of the driving. The system therefore breaks at the same time the discrete time-translation symmetry and the space-translation symmetry, which is why we talk about "spatio-temporal ordering". A similar phenomenon occurs in quantum Floquet time crystals, but the physics is different: here it is an effect of classical nonlinear dynamics, there a sort of quantum phase transition. Indeed, here we see a persistent spatio-temporal ordering already at finite sizes, while time crystals are stable only in the thermodynamic limit.
There is also a phase where the system breaks only the space-translation symmetry and the patterns are stable and independent of the stroboscopic time. Inside this region of the phase diagram there is a smaller region that we call "Weak patterning", where points with no patterns alternate with points with small-amplitude patterns, in a jagged and apparently fractal way.
Beyond that, there is the trivial regime, where the model relaxes to a uniform condition with vanishing momenta, and the chaotic regime, where the largest Lyapunov exponent is positive, nearby trajectories in phase space diverge exponentially, and the dynamics is aperiodic and irregular in space and time.
The transitions between the different regimes can be seen in the properties of the patterns. For instance, the characteristic length of the pattern has a sudden drop when moving from stroboscopic-time-independent patterning to spatio-temporal ordering, and - in a range of parameters - shows a peak at the onset of chaos, where it increases by one order of magnitude.
Many transitions in the dynamics can also be seen in the behavior of the pattern amplitude, which nevertheless misses the threshold between stroboscopic-time-independent patterning and spatio-temporal ordering, because it changes in a continuous and regular way at this threshold. The same information given by the amplitude can be read from the behavior of the asymptotic average kinetic energy per site. This quantity is finite, even in the chaotic regime, in contrast with the unbounded increase of energy in the case of Hamiltonian chaos. Moreover, in our case, this quantity does not scale with the system size.
This is important information for an experimental realization. Indeed, we propose to realize this model in an array of SQUID Josephson junctions, and the fact that the system does not heat up above a threshold gives the opportunity to choose the parameters in such a way that the system stays superconducting and our model is a good description of it for long times.
Regarding future developments, one possibility is to better understand the properties of the patterns and study whether a description in terms of regions of staggered order separated by defects [as Fig. 6(b) seems to suggest] is possible. Another possibility is studying the patterns in different geometries (for instance in a 2-dimensional lattice) and seeing whether similar dynamical behaviors appear if one couples other systems showing the period-doubling cascade (for instance the nonlinear electric circuits of [10; 11]). One might also think of using the methods of [27] to quantize the model of coupled dissipative rotors and, of course, of realizing the experimental proposal of Sec. VI.
## Acknowledgments
I thank A. Delmonte, R. Fazio, D. Mukamel, G. Passarelli, S. Ruffo, and G. E. Santoro for interesting discussions, and V. Russomanno for having drawn my attention to Ref. [4] and to the problems of chaos and the period-doubling cascade. I acknowledge P. Lucignano for access to the qmat machine, where most of the numerics for this project was performed, and thank the ICTP for the warm hospitality received (under ERC Projectlol053159 - RAVE) during the completion of this work.
|
2307.09926 | Analytical Solution of the PDM-Coulombic Klein-Gordon Oscillator in
Kaluza-Klein Theory | In this contribution, the relativistic quantum motions of the
position-dependent mass (PDM) oscillator field with a scalar potential in the
context of the Kaluza-Klein theory is investigated. Through a purely analytical
analysis, the eigensolutions of this system have been obtained. The results
showed that the KGO is influenced not only by curvature, torsion and the
quantum flux associated with the extra dimension, but also by the possibility
of modifying the mass term of the Klein-Gordon equation. | A. Bouzenada, A. Boumali, O. Mustafa, RLL. Vetoria, M. Alraeei | 2023-07-19T12:03:43Z | http://arxiv.org/abs/2307.09926v1 | # Analytical Solution of the PDM-Coulombic Klein-Gordon Oscillator in Kaluza-Klein Theory
###### Abstract
In this contribution, the relativistic quantum motion of the position-dependent mass (PDM) oscillator field with a scalar potential in the context of the Kaluza-Klein theory is investigated. Through a purely analytical analysis, the eigensolutions of this system have been obtained. The results show that the KGO is influenced not only by the curvature, the torsion, and the quantum flux associated with the extra dimension, but also by the possibility of modifying the mass term of the Klein-Gordon equation.
Klein-Gordon oscillator, topological defects, Kaluza-Klein Theory, PDM, Coulomb-Type Potential. pacs: 03.65.Ge; 03.50.--z; 05.70.Ce ; 03.65.-w ; 04.62.+v; 04.40.--b; 04.20.Gz; 04.20.Jb; 04.20.--q; 03.65.Pm
## I Introduction
One of the fields of investigation in the context of quantum mechanics is that of gravitational effects on the relativistic quantum dynamics of particles with well-defined spin, for example spin-1/2 and spin-0 particles, described by the Dirac equation and the Klein-Gordon equation, respectively, both extended to incorporate the curvature effects intrinsic to the gravitational background [1].
In particular, spin-0 particles subject to gravitational effects have been heavily investigated in various types of space-time, for two reasons: the search for interpretations or extensions of the standard model of particle physics, mainly due to the advent of the Higgs boson, its experimental evidence, and its central role in the Standard Model [2], and the description of particles or quasi-particles in condensed matter systems through analogue models [3], such as phonons [4], two-dimensional systems [5] and quantum dynamics in graphene sheets [6; 7] with various defects generated by the cut-and-paste process [8; 9].
Spin-0 particles have been investigated, for example, in the cosmic string spacetime, subject to central potentials and external fields. We therefore have studies of particles with position-dependent mass (PDM) subject to scalar and electromagnetic Coulomb-type potentials, a linear potential, Landau quantization, the Aharonov-Bohm effect (ABE) for bound states [10], and interacting with the Klein-Gordon oscillator [11]. It is noteworthy that the presence of the cosmic string in the background is associated with the curvature of spacetime [12; 13].
Another ingredient considered in the background in order to extend the understanding of bosonic spin-0 quantum dynamics is the presence of torsion in the environment [14]. From a mathematical point of view, torsion in spacetime arises from the non-symmetry of the Christoffel symbols, thus producing a tensor field associated with torsion [8; 9; 14]. Torsion can be typified according to the conservation of the geometric symmetry produced after the "cut and paste" process [8; 14]. The best-known examples are time-type, spiral-type, and screw-type dislocations [9; 14]. In particular, from the point of view of condensed matter, the screw-like dislocation is associated with the Burgers vector [8; 14]. Spin-0 particles have been investigated in spacetimes with torsion, for example, subjected to the ABE for bound states [15], interacting with electromagnetic and scalar Coulomb-type potentials [16], and with a linear potential and the Klein-Gordon oscillator [17]. In addition, there
are studies of scalar particles immersed in the spacetime with a screw-type dislocation under effects of rotation [18; 19].
A quantum effect of the cosmic string and of the screw-like dislocation that can be observed in the bound-state solutions of quantum particle dynamics is an effect analogous to the ABE for bound states [20; 21]. That is, the quantum numbers associated with the angular momentum are modified by the background topology, generating a kind of "correction" in the eigenvalues of the angular momentum, which, in turn, provides an effective angular momentum [22; 23; 34; 35; 36].
In particular, the ABE for bound states has been investigated in a space-time with an extra dimension [24], known as the Kaluza-Klein theory (KKT) [25; 26], in which the unification of gravitation and electromagnetism is possible, but at the cost of inserting extra dimensions. This proposal is defended in the famous string theory [27]. Although KKT is much simpler than more modern theories of extra dimensions, it has been widely used for research on the relativistic quantum dynamics of fundamental particles [11; 27]. In the case of spin-0 particles, we have studies in systems with PDM [28], Landau quantization [29], the Klein-Gordon oscillator [30] and under effects of rotation [31].
Therefore, in this analysis, we intend to investigate the effects of curvature [12] and torsion [14] of space-time incremented with an extra dimension on an electrically charged scalar quantum oscillator subjected to a quantum flux analogous to the ABE. Our analysis will be purely analytical in obtaining the relativistic energy profiles, as well as in determining the axial eigenfunctions that describe the oscillations of the electrically charged scalar particle in this non-trivial spacetime.
The structure of this paper is as follows: in Sec. 2, we make a brief review of the quantum dynamics of a free particle in a KKT spacetime with curvature and torsion, in which we show that the general solution for the scalar particle is given by a Bessel function of the first kind whose order is redefined in terms of the parameters associated with the curvature and torsion of the KKT space-time; in Sec. 3, we investigate the effects of a central linear potential in the mass term of the quantum oscillator immersed in the KKT space-time, where we determine bound-state solutions; in Sec. 4 we continue with the insertion of central potentials in the mass term of the modified Klein-Gordon equation, where the potential considered is a Coulomb-type potential and we determine the relativistic energy profile of the quantum oscillator; in Sec. 5, we present our conclusions.
## II An overview of the Klein-Gordon oscillators in a cosmic string within Kaluza-Klein theory
The purpose of this section is to investigate the dynamics of the Klein-Gordon oscillator (KGO) in the framework of Kaluza-Klein theory geometry. It is widely known that the relativistic wave equations for a scalar particle in a Riemannian spacetime, described by the metric tensor \(g_{\mu\nu}\), can be obtained through a reformulation of the Klein-Gordon (KG) equation. This provides an opportunity to examine the behavior of the KGO in the presence of cosmic strings [37; 38].
\[\left(\Box+m^{2}-\xi R\right)\Psi(x,t)=0, \tag{1}\]
In this context, the symbol \(\Box\) denotes the Laplace-Beltrami operator, which is commonly defined as follows:
\[\Box=g^{\mu\nu}D_{\mu}D_{\nu}=\frac{1}{\sqrt{-g}}\partial_{\mu} \left(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\right), \tag{2}\]
In the provided equation, the parameter \(\xi\) corresponds to a dimensionless coupling constant with real values. The Ricci scalar curvature, denoted as \(R\), can be expressed as \(R=g^{\mu\nu}R_{\mu\nu}\), where \(R_{\mu\nu}\) denotes the Ricci curvature tensor. The inverse of the metric tensor is represented by \(g^{\mu\nu}\), and the determinant of the metric tensor is denoted as \(g\), specifically \(g=\det\left(g_{\mu\nu}\right)\).
Our subsequent aim is to investigate the quantum dynamics of spin-0 particles within the spacetime generated by a \((4+1)\)-dimensional space. In this context, we will explore the behavior and properties of these particles in this extended dimensional framework.
### Free Klein-Gordon equation in the background of a cosmic string in a Kaluza-Klein theory
In this section [39], we embark on an exploration of a fundamental topological defect that serves as the cornerstone of our research. Taking inspiration from the intriguing characteristics observed in edge dislocations present within crystalline solids, we extend the concept of such defects into the realm of gravity. Upon examining an edge dislocation, we discern its remarkable metamorphosis from a circular form into a spiral dislocation. The line element that elucidates the space-time
curvature incorporating this significant topological defect is expressed in units where \(\hbar=c=1\)[39; 40]. By establishing this connection, we bridge the principles governing crystal defects with the intricate structure of space-time.
\[ds^{2} =g_{\mu\nu}dx^{\mu}dx^{\nu}\] \[=dt^{2}-d\rho^{2}-\left(\alpha\rho\right)^{2}d\varphi^{2}-\left(dz+ Jd\varphi\right)^{2}-\left(dx+\frac{\Phi}{2\pi}d\varphi\right)^{2}, \tag{3}\]
In this context, \(\chi\) represents a constant associated with the distortion of the defect, and the parameter \(J=\frac{\left|\overrightarrow{b}\right|}{2\pi}\) is related to the Burgers vector \(\overrightarrow{b}\). The coordinate ranges are as follows: \(-\infty\leq t\leq+\infty\), \(\rho\geq 0\), \(0\leq\varphi\leq 2\pi\), \(-\infty\leq z,x\leq+\infty\), and \(\alpha\in[0,1[\). The angular parameter \(\alpha\) determines the angular deficit \(\delta\varphi=2\pi(1-\alpha)\), which is connected to the linear mass density \(\mu\) of the string through \(\alpha=1-4\mu\). It is important to note that this metric provides an exact solution of Einstein's field equations for \(0\leq\mu<1/4\). Furthermore, by setting \(\varphi^{\prime}=\alpha\varphi\), it represents a flat conical exterior space with an angular deficit \(\delta\varphi=8\pi\mu\).
When we consider the components of the metric tensor and its inverse, denoted as \((g_{\mu\nu})\) and \((g^{\mu\nu})\), respectively [39; 40], we find the following expressions:
\[g_{\mu\nu}=\left(\begin{array}{cccc}1&0&0&0&0\\ 0&-1&0&0&0\\ 0&0&-\left(\left(\alpha\rho\right)^{2}+\left(\frac{\Phi}{2\pi}\right)^{2}+J^{2 }\right)&-J&-\frac{\Phi}{2\pi}\\ 0&0&-J&-1&0\\ 0&0&-\frac{\Phi}{2\pi}&0&-1\end{array}\right), \tag{4}\]
and
\[g^{\mu\nu}=\left(\begin{array}{cccc}1&0&0&0&0\\ 0&-1&0&0&0\\ 0&0&-\frac{1}{\left(\alpha\rho\right)^{2}}&\frac{J}{\left(\alpha\rho\right)^{2 }}&\frac{\Phi}{2\pi\left(\alpha\rho\right)^{2}}\\ 0&0&\frac{J}{\left(\alpha\rho\right)^{2}}&-\left(1+\frac{J^{2}}{\left(\alpha \rho\right)^{2}}\right)&-\frac{\Phi J}{2\pi\left(\alpha\rho\right)^{2}}\\ 0&0&\frac{\Phi}{2\pi\left(\alpha\rho\right)^{2}}&-\frac{\Phi J}{2\pi\left( \alpha\rho\right)^{2}}&-\left(1+\frac{\Phi^{2}}{\left(2\pi\alpha\rho\right)^{2 }}\right)\end{array}\right) \tag{5}\]
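As a quick symbolic cross-check, the following minimal Python (sympy) sketch builds the covariant metric of Eq. (4) and inverts it; the output should reproduce the contravariant components of Eq. (5). The variable `flux` is merely a shorthand for \(\Phi/2\pi\), introduced here for readability.

```python
import sympy as sp

rho, alpha, J, Phi = sp.symbols('rho alpha J Phi', positive=True)
flux = Phi / (2 * sp.pi)            # shorthand for Phi/(2*pi)

# covariant metric g_{mu nu} of Eq. (4), coordinate order (t, rho, varphi, z, x)
g = sp.Matrix([
    [1, 0, 0, 0, 0],
    [0, -1, 0, 0, 0],
    [0, 0, -((alpha * rho) ** 2 + flux ** 2 + J ** 2), -J, -flux],
    [0, 0, -J, -1, 0],
    [0, 0, -flux, 0, -1],
])

g_inv = sp.simplify(g.inv())        # should reproduce the matrix of Eq. (5)
print(g_inv)
print(sp.factor(g.det()))           # determinant of g_{mu nu}
```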
Substituting the metric components of Eqs. (4) and (5) into Eq. (1) and adopting the ansatz
\[\psi(t,\rho,\varphi,z,x)=\psi(\rho)e^{-i(Et-l\varphi-Kz-\lambda x)}, \tag{6}\]
we obtain the radial equation
\[\left[\frac{d^{2}}{d\rho^{2}}+\frac{1}{\rho}\frac{d}{d\rho}-\frac{\zeta^{2}}{\rho^ {2}}+\kappa^{2}\right]\psi\left(\rho\right)=0, \tag{7}\]
where we have set
\[\zeta= \frac{\sqrt{\left(l-JK\right)^{2}+\frac{\Phi^{2}}{\pi^{2}}\left[ \frac{\lambda^{2}}{4}-\frac{l\lambda\pi}{\Phi}+\frac{JK\lambda\pi}{\Phi}\right]} }{\alpha} \tag{8}\] \[\kappa= \sqrt{E^{2}-m^{2}-K^{2}-\lambda^{2}}\]
The equation represented by Eq. (7) can be identified as a Bessel equation. Its general solution is defined as follows:
\[\psi\left(\rho\right)=A\,J_{\left|\zeta\right|}\left(\kappa\rho\right)+B\,Y_{ \left|\zeta\right|}\left(\kappa\rho\right), \tag{9}\]
where \(J_{\left|\zeta\right|}\left(\kappa\rho\right)\) and \(Y_{\left|\zeta\right|}\left(\kappa\rho\right)\) are the Bessel functions of order \(\left|\zeta\right|\) of the first and the second kind, respectively, and \(A\) and \(B\) are arbitrary constants. We notice that \(J_{\left|\zeta\right|}\left(\kappa\rho\right)\) is regular at the origin (it is nonzero there only for \(\zeta=0\)), whereas \(Y_{\left|\zeta\right|}\left(\kappa\rho\right)\) always diverges at the origin. For a regular solution we therefore set \(B=0\) and keep only \(J_{\left|\zeta\right|}\left(\kappa\rho\right)\). Hence, we write the solution to Eq. (7) as follows
\[\psi\left(\rho\right)=A\,J_{\left|\frac{\sqrt{\left(l-JK\right)^{2}+\frac{ \Phi^{2}}{\pi^{2}}\left[\frac{\lambda^{2}}{4}-\frac{l\lambda\pi}{\Phi}+\frac{JK \lambda\pi}{\Phi}\right]}}{\alpha}\right|}\,\,\left(\sqrt{E^{2}-m^{2}-K^{2}- \lambda^{2}}\,\rho\right), \tag{10}\]
We can now express the wavefunction of the spinless heavy KG particle in the space-time of a cosmic dislocation using this solution.
\[\psi\left(t,\rho,\varphi,z,x\right)=\left|\mathcal{N}_{1}\right|e^{-i\left(Et -l\varphi-Kz-\lambda x\right)}\,J_{\frac{\left|\sqrt{\left(l-JK\right)^{2}+ \frac{\Phi^{2}}{\pi^{2}}\left[\frac{\lambda^{2}}{4}-\frac{l\lambda\pi}{\Phi}+ \frac{JK\lambda\pi}{\Phi}\right]}\right|}{\alpha}}\,\,\left(\sqrt{E^{2}-m^{2} -K^{2}-\lambda^{2}}\,\rho\right), \tag{11}\]
The constant \(\left|\mathcal{N}_{1}\right|\) can be determined by applying the appropriate normalization condition of the Klein-Gordon equation. Fortunately, leaving the normalization constants undetermined in this manuscript does not affect the final results.
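For illustration, the radial solution of Eq. (10) can be evaluated numerically with standard Bessel-function routines; in the following minimal Python sketch all parameter values are purely illustrative (natural units) and are chosen only so that \(\kappa\) and \(\zeta\) are real.

```python
import numpy as np
from scipy.special import jv

def radial_mode(rho, E, m, K, lam, l, J, Phi, alpha):
    """Unnormalized radial profile of Eq. (10), J_|zeta|(kappa * rho)."""
    zeta = np.sqrt((l - J * K) ** 2
                   + (Phi ** 2 / np.pi ** 2) * (lam ** 2 / 4
                                                - l * lam * np.pi / Phi
                                                + J * K * lam * np.pi / Phi)) / alpha
    kappa = np.sqrt(E ** 2 - m ** 2 - K ** 2 - lam ** 2)
    return jv(abs(zeta), kappa * rho)

# purely illustrative parameter values
rho = np.linspace(0.0, 10.0, 200)
psi = radial_mode(rho, E=2.0, m=1.0, K=0.5, lam=0.3, l=1, J=0.2, Phi=1.0, alpha=0.8)
```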
### Klein-Gordon oscillator in the background of a cosmic string in a Kaluza-Klein theory
In order to proceed, it is necessary to perform the Klein-Gordon oscillator substitution in the momentum operator of Equation (1). A straightforward calculation along the lines of the previous subsection then leads to the following differential equation [39].
\[\left[\frac{d^{2}}{d\rho^{2}}+\frac{1}{\rho}\frac{d}{d\rho}-m^{2}\omega^{2}\rho^{2 }-\frac{\sigma^{2}}{\rho^{2}}+\delta\right]\psi\left(\rho\right)=0, \tag{12}\]
with
\[\begin{split}\sigma^{2}=&\left(\frac{\left(l-JK \right)^{2}+\frac{\Phi^{2}}{\pi^{2}}\left[\frac{\lambda^{2}}{4}-\frac{l\lambda \pi}{\Phi}+\frac{JK\lambda\pi}{\Phi}\right]}{\alpha}\right)^{2}\\ \delta-E^{2}=& 2m\omega-m^{2}-K^{2}-\lambda^{2}\end{split} \tag{13}\]
The Klein-Gordon oscillator (KGO) equation for a spin-0 particle within the (1+4) space-time of Kaluza-Klein theory is represented by Equation (12). In order to obtain the solution to this problem, we propose a radial coordinate transformation as a preliminary step.
\[\mathcal{S}=m\omega\rho^{2}, \tag{14}\]
Substituting this expression for \(\mathcal{S}\) into Eq. (12), we obtain
\[\left[\frac{d^{2}}{d\mathcal{S}^{2}}+\frac{1}{\mathcal{S}}\frac{d}{d \mathcal{S}}-\frac{\sigma^{2}}{4\mathcal{S}^{2}}+\frac{\delta}{4m\omega \mathcal{S}}-\frac{1}{4}\right]\psi\left(\mathcal{S}\right)=0. \tag{15}\]
By examining the asymptotic behavior of the wave function at both the origin and infinity, and aiming to find regular solutions, we can consider a solution of the following form:
\[\psi\left(\mathcal{S}\right)=\mathcal{S}^{\frac{\left|\sigma\right|}{2}}e^{- \frac{\mathcal{S}}{2}}F\left(\mathcal{S}\right), \tag{16}\]
As previously, we can plug this back into Eq. (15), and we get
\[\mathcal{S}\frac{d^{2}F\left(\mathcal{S}\right)}{d\mathcal{S}^{2}}+\left( \left|\sigma\right|+1-\mathcal{S}\right)\frac{dF\left(\mathcal{S}\right)}{d\mathcal{ S}}-\left(\frac{\left|\sigma\right|}{2}-\frac{\delta}{4m\omega}+\frac{1}{2} \right)F\left(\mathcal{S}\right)=0, \tag{17}\]
The equation at hand corresponds to the confluent hypergeometric equation [2], and its solutions are expressed in terms of a specific type of confluent hypergeometric function.
\[F\left(\mathcal{S}\right)=_{1}F_{1}\left(\frac{\left|\sigma\right|}{2}-\frac{\delta}{4m\omega}+\frac{1}{2},\left|\sigma\right|+1,\mathcal{S}\right), \tag{18}\]
It is important to mention that the solution given by Equation (18) must reduce to a polynomial of finite degree; otherwise, the confluent hypergeometric series diverges as \(\mathcal{S}\rightarrow\infty\). In order to have a finite
polynomial, the first argument of the confluent hypergeometric function (the factor appearing in the last term of Equation (17)) must be a non-positive integer. This condition implies that:
\[\frac{\left|\sigma\right|}{2}-\frac{\delta}{4m\omega}+\frac{1}{2}=-n_{r}\qquad,n _{r}=0,1,2....., \tag{19}\]
Utilizing this result along with the parameters given in Equation (13), we can derive the quantized energy spectrum of the Klein-Gordon oscillator (KGO) in the space-time of a cosmic dislocation. Consequently, we are able to determine the following:
\[E^{\pm}\left(n_{r}\right)=\pm\sqrt{4m\omega n_{r}+\frac{2m\omega}{\alpha}\sqrt{\left(l-JK\right)^{2}+\frac{\Phi^{2}}{\pi^{2}}\left[\frac{\lambda^{2}}{4}-\frac{l\pi\lambda}{\Phi}+\frac{JK\lambda\pi}{\Phi}\right]}+m^{2}+K^{2}+\lambda^{2}}, \tag{20}\]
It is evident that the energy is directly dependent on the angular deficit \(\alpha\). In other words, the presence of the wedge angle, which is caused by the topological defect (i.e., the cosmic string), influences the curvature of the space-time. As a result, it affects the relativistic dynamics of the scalar particle by introducing a gravitational field. The corresponding wave function is defined as follows:
\[\psi\left(\rho\right)=\left|\mathcal{N}_{2}\right|\left(m\omega\rho^{2}\right)^{\frac{\left|\sigma\right|}{2}}e^{-\frac{m\omega\rho^{2}}{2}}{}_{1}F_{1}\left(\frac{\left|\sigma\right|}{2}-\frac{\delta}{4m\omega}+\frac{1}{2},\left|\sigma\right|+1,m\omega\rho^{2}\right), \tag{21}\]
Subsequently, the general eigenfunctions are expressed as:
\[\psi\left(t,\rho,\varphi,z,x\right)=\left|\mathcal{N}_{2}\right|\left(m\omega\rho^{2}\right)^{\frac{\left|\sigma\right|}{2}}e^{-\frac{m\omega\rho^{2}}{2}}e^{-i\left(Et-l\varphi-Kz-\lambda x\right)}{}_{1}F_{1}\left(\frac{\left|\sigma\right|}{2}-\frac{\delta}{4m\omega}+\frac{1}{2},\left|\sigma\right|+1,m\omega\rho^{2}\right), \tag{22}\]
Here, \(\left|\mathcal{N}_{2}\right|\) represents the normalization constant, and the values of \(\left(\sigma,\delta\right)\) are provided in Equation (13).
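As a quick numerical illustration of the spectrum in Eq. (20), the sketch below evaluates a few levels, writing the square-root term as \(2m\omega\left|\sigma\right|\) with \(\sigma\) taken from Eq. (13); every parameter value is an assumption chosen purely for demonstration.

```python
import numpy as np

# Illustrative parameters (assumptions)
m, omega, K, lam = 1.0, 0.5, 0.4, 0.3
l, J, Phi, alpha = 1.0, 0.2, 0.8, 0.9

# Effective order |sigma| of Eq. (13)
sigma = np.sqrt((l - J*K)**2
                + (Phi**2/np.pi**2)*(lam**2/4 - l*lam*np.pi/Phi + J*K*lam*np.pi/Phi)) / alpha

def E_pm(n_r, sign=+1):
    """Energy levels of the KG oscillator, Eq. (20)."""
    E2 = 4*m*omega*n_r + 2*m*omega*abs(sigma) + m**2 + K**2 + lam**2
    return sign*np.sqrt(E2)

for n_r in range(4):
    print(n_r, round(E_pm(n_r), 4), round(E_pm(n_r, -1), 4))
```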
## III PDM of Klein-Gordon oscillator in the background of a cosmic string in a Kaluza-Klein theory
In this subsection, we shall use the non-minimal coupling form of the PDM-momentum operator
\[\mathbf{\hat{p}}\left(\mathbf{r}\right)=-i\left(\nabla-\frac{\nabla f\left( \mathbf{r}\right)}{4f\left(\mathbf{r}\right)}\right)\Longleftrightarrow p_{j }=-i\left(\partial_{j}-\frac{\partial_{j}f\left(\mathbf{r}\right)}{4f\left( \mathbf{r}\right)}\right), \tag{23}\]
introduced by Mustafa and Algadhi [41]. It has been shown that this PDM-momentum operator effectively yields what is known in the literature as Mustafa and Mazharimousavi's ordering
[42; 43] of the von Roos PDM-kinetic energy Schrodinger operator [44]. It has been used in the study of PDM Klein-Gordon (KG) particles in different spacetime backgrounds [45; 46; 47; 48; 49]. Under such non-minimal coupling form, our PDM operator reads
\[\tilde{p}_{\mu}\longrightarrow-i\partial_{\mu}+i\mathcal{F}_{\mu}\,;\,\, \mathcal{F}_{\mu}=\left(0,\mathcal{F}_{\rho},0,0\right),\,\,\mathcal{F}_{\rho} =\frac{f^{\prime}\left(\rho\right)}{4f\left(\rho\right)} \tag{24}\]
and consequently our PDM KG-equation reads
\[\frac{1}{\sqrt{-g}}\left(D_{\mu}+\mathcal{F}_{\mu}\right)\left[\sqrt{-g}g^{ \mu\nu}\left(D_{\nu}-\mathcal{F}_{\nu}\right)\Psi\right]=\left(m+S\left(\rho \right)\right)^{2}\Psi. \tag{25}\]
where \(D_{\mu}=\partial_{\mu}-ieA_{\mu}\) is the covariant derivative and \(S\left(\rho\right)\) is the Lorentz scalar radial potential. This would, with \(\Psi\) given in (11), imply that
\[\psi^{\prime\prime}\left(\rho\right)+\frac{1}{\rho}\psi^{\prime}\left(\rho \right)+\left[\mathcal{E}^{2}-\frac{\tilde{\zeta}^{2}}{\rho^{2}}-\mathcal{M} \left(\rho\right)-2mS\left(\rho\right)-S\left(\rho\right)^{2}\right]\psi\left( \rho\right)=0, \tag{26}\]
where \(\mathcal{E}^{2}=E^{2}-\left(K^{2}+\lambda^{2}+m^{2}\right)\), and
\[\tilde{\zeta}^{2}=\frac{1}{\alpha^{2}}\left\{\left(\left[\ell-eA_{\varphi} \right]-JK\right)^{2}+\lambda\,\bar{\phi}\left[\lambda\,\bar{\phi}-2\left( \ell-eA_{\varphi}\right)+2JK\right]\right\};\bar{\phi}=\frac{\phi}{2\pi}, \tag{27}\]
and
\[\mathcal{M}\left(\rho\right)=\mathcal{F}_{\rho}^{\prime}+\frac{\mathcal{F}_{ \rho}}{\rho}+\mathcal{F}_{\rho}^{2}. \tag{28}\]
The substitution of \(\psi\left(\rho\right)=U\left(\rho\right)/\sqrt{\rho}\) would imply
\[U^{\prime\prime}\left(\rho\right)+\left[\mathcal{E}^{2}-\frac{\left(\tilde{ \zeta}^{2}-1/4\right)}{\rho^{2}}-\mathcal{M}\left(\rho\right)-2mS\left(\rho \right)-S\left(\rho\right)^{2}\right]U\left(\rho\right)=0. \tag{29}\]
Eventually, with \(\mathcal{F}_{\rho}=\eta\rho\) (or, if you wish, \(f\left(\rho\right)=\exp\left(2\eta\rho^{2}\right)\)) in (24), we obtain
\[U^{\prime\prime}\left(\rho\right)+\left[\tilde{\mathcal{E}}^{2}-\frac{\left( \tilde{\zeta}^{2}-1/4\right)}{\rho^{2}}-\eta^{2}\rho^{2}-2mS\left(\rho\right)- S\left(\rho\right)^{2}\right]U\left(\rho\right)=0. \tag{30}\]
where \(\tilde{\mathcal{E}}^{2}=\mathcal{E}^{2}-2\eta\). At this point, one may observe that for \(\eta=m\omega\), \(A_{\varphi}=0\), and \(S\left(\rho\right)=0\), the case discussed for (12) is retrieved to obtain
\[\tilde{\mathcal{E}}^{2}=2\eta\left(2n_{r}+\left|\tilde{\zeta}\right|+1\right) \Leftrightarrow E=\pm\sqrt{2\eta\left(2n_{r}+\left|\tilde{\zeta}\right|+2 \right)+K^{2}+\lambda^{2}+m^{2}}, \tag{31}\]
and
\[U\left(\rho\right)\sim\rho^{|\tilde{\zeta}|+1/2}\exp\left(-\frac{\eta\rho^{2}}{2}\right)L_{n_{r}}^{|\tilde{\zeta}|}\left(\left|\eta\right|\rho^{2}\right). \tag{32}\]
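The reduction used above, namely that Eq. (28) gives \(\mathcal{M}\left(\rho\right)=2\eta+\eta^{2}\rho^{2}\) for \(\mathcal{F}_{\rho}=\eta\rho\), so that Eq. (29) collapses to Eq. (30) with \(\tilde{\mathcal{E}}^{2}=\mathcal{E}^{2}-2\eta\), can be checked symbolically; a minimal sketch with sympy:

```python
import sympy as sp

rho, eta = sp.symbols('rho eta', positive=True)
F = eta*rho                              # F_rho = eta*rho, i.e. f(rho) = exp(2*eta*rho**2) in Eq. (24)

# M(rho) = F' + F/rho + F**2, Eq. (28)
M = sp.diff(F, rho) + F/rho + F**2
print(sp.simplify(M))                    # -> eta**2*rho**2 + 2*eta
```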
### KG-oscillators plus linear confinement \(S\left(\rho\right)=A\rho\)
With a linear Lorentz scalar potential \(S\left(\rho\right)=A\rho\), equation (30) reads
\[U^{\prime\prime}\left(\rho\right)+\left[\tilde{\mathcal{E}}^{2}-\frac{\left( \tilde{\zeta}^{2}-1/4\right)}{\rho^{2}}-\tilde{\eta}^{2}\rho^{2}-2mA\rho\right] U\left(\rho\right)=0\;;\;\tilde{\eta}^{2}=\eta^{2}+A^{2}. \tag{33}\]
Hereby, we shall be interested in an exact solution for equation (33) for \(\tilde{\zeta}=1/2\Leftrightarrow 4\left(\ell-JK-\lambda\bar{\phi}\right)^{2}= \alpha^{2}\), so that equation (33) reads
\[\left\{\partial_{\rho}^{2}-\tilde{\eta}^{2}\rho^{2}-2mA\,\rho+\tilde{\mathcal{ E}}^{2}\right\}U\left(\rho\right)=0. \tag{34}\]
At this point, one should be aware that this equation resembles the radial equation of the spherically symmetric Schrodinger problem with an (in general irrational) angular momentum quantum number \(\tilde{\ell}=\tilde{\zeta}-1/2=0\) in the central attractive/repulsive core \(\tilde{\ell}(\tilde{\ell}+1)/\rho^{2}\). Let us use a radial wave function in the form of
\[U\left(\rho\right)=\exp\left(-\frac{|\tilde{\eta}|\rho^{2}}{2}-\frac{mA\rho} {|\tilde{\eta}|}\right)\,\left[B_{1}\left(\rho+\frac{mA}{\tilde{\eta}^{2}} \right)F(\rho)+B_{2}\,G(\rho)\right];\;\tilde{A}=\frac{mA}{\tilde{\eta}^{2}}. \tag{35}\]
Obviously, the exponential term in (35) represents the asymptotic behaviour \(U(\rho)\to 0\) as \(\rho\rightarrow\infty\). Whereas, the asymptotic behaviour of \(U(\rho)\) as \(\rho\to 0\) is either finite or zero, depending on the effective interaction potential at hand ( e.g., [50]). We shall instead let the general functions \(F(\rho)\) and \(G(\rho)\) have their say in the process. Hence, the substitution of \(U(\rho)\), (35), would in a straightforward manner result in
\[B_{1}\,\left[(\rho+\tilde{A})\,\partial_{\rho}^{2}-2|\tilde{\eta}|\left((\rho+\tilde{A})^{2}-\frac{1}{|\tilde{\eta}|}\right)\,\partial_{\rho}+[\tilde{\mathcal{E}}^{2}-3|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}]\right]F(\rho)\] \[+B_{2}\,\left[(\rho+\tilde{A})\,\partial_{\rho}^{2}-2|\tilde{\eta}|(\rho+\tilde{A})\,\partial_{\rho}+[\tilde{\mathcal{E}}^{2}-|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}]\right]G(\rho)=0. \tag{36}\]
This equation suggests that
\[\left[(\rho+\tilde{A})\,\partial_{\rho}^{2}-2|\tilde{\eta}|\left((\rho+\tilde {A})^{2}-\frac{1}{|\tilde{\eta}|}\right)\,\partial_{\rho}+[\tilde{\mathcal{E}} ^{2}-3|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}]\right]F(\rho)=0, \tag{37}\]
and
\[\left[(\rho+\tilde{A})\,\partial_{\rho}^{2}-2|\tilde{\eta}|(\rho+\tilde{A})\,\partial_{\rho}+[\tilde{\mathcal{E}}^{2}-|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}]\right]G(\rho)=0, \tag{38}\]
A change of variable \(y=|\tilde{\eta}|(\rho+\tilde{A})^{2}\) would then imply
\[\left[y\,\partial_{y}^{2}+(\frac{3}{2}-y)\,\partial_{y}+\frac{1}{4|\tilde{ \eta}|}[\tilde{\mathcal{E}}^{2}-3|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta }^{2}}]\right]\,F(y)=0, \tag{39}\]
and
\[\left[y\,\partial_{y}^{2}+\left(\frac{1}{2}-y\right)\partial_{y}+\frac{1}{4|\tilde {\eta}|}[\tilde{\cal E}^{2}-|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}} ]\right]\,G(y)=0. \tag{40}\]
One should be able to observe that both equations are in the form of Kummer's equation
\[[u\,\partial_{u}^{2}+\left(b-u\right)\partial_{u}-a]W(u)=0. \tag{41}\]
to immediately imply that
\[F(y)=\,_{1}F_{1}(-\frac{1}{4|\tilde{\eta}|}(\tilde{\cal E}^{2}-3|\tilde{\eta}|+ \frac{m^{2}A^{2}}{\tilde{\eta}^{2}}),\frac{3}{2},y), \tag{42}\]
and
\[G(y)=\,_{1}F_{1}(-\frac{1}{4|\tilde{\eta}|}(\tilde{\cal E}^{2}-|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}),\frac{1}{2},y), \tag{43}\]
Our general solution for (35) therefore reads
\[U\left(\rho\right)\ =\ \exp\left(-\frac{|\tilde{\eta}|\rho^{2}}{2}- \frac{mA\rho}{|\tilde{\eta}|}\right)\,\left[B_{1}\left(\rho+\frac{mA}{\tilde {\eta}^{2}}\right){}_{1}F_{1}(-\frac{1}{4|\tilde{\eta}|}(\tilde{\cal E}^{2}-3| \tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}),\frac{3}{2},y)\right.\] \[\left.+B_{2}\ {}_{1}F_{1}(-\frac{1}{4|\tilde{\eta}|}(\tilde{\cal E}^{2}-| \tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}),\frac{1}{2},y)\right] \tag{44}\]
Now we have to appeal to textbook exact solvability of a pure harmonic oscillator \(\tilde{\eta}^{2}\rho^{2}\), with \(A=0\), and enforce the condition that \(U\left(\rho\right)=0\) for \(\rho\to 0\). Consequently, \(B_{2}=0\) and our sought-after solution is obviously given, with \(y=|\tilde{\eta}|(\rho+\tilde{A})^{2}\), by
\[U\left(\rho\right)=B_{1}\left(\rho+\frac{mA}{\tilde{\eta}^{2}}\right)\,\exp \left(-\frac{|\tilde{\eta}|\rho^{2}}{2}-\frac{mA\rho}{|\tilde{\eta}|}\right) \ {}_{1}F_{1}(-\frac{1}{4|\tilde{\eta}|}(\tilde{\cal E}^{2}-3|\tilde{\eta}|+ \frac{m^{2}A^{2}}{\tilde{\eta}^{2}}),\frac{3}{2},y). \tag{45}\]
However, finiteness and square integrability of the quantum mechanical wave functions require that the confluent hypergeometric series be truncated into a polynomial of order \(n_{r}=0,1,2,\cdots\). Under this condition,
\[-n_{r}=-\frac{1}{4|\tilde{\eta}|}(\tilde{\cal E}^{2}-3|\tilde{\eta}|+\frac{m^{ 2}A^{2}}{\tilde{\eta}^{2}})\Longrightarrow\tilde{\cal E}^{2}={\cal E}^{2}-2 \left|\eta\right|=2|\tilde{\eta}|\left(2n_{r}+\frac{3}{2}\right)-\frac{m^{2}A ^{2}}{\tilde{\eta}^{2}}, \tag{46}\]
and
\[E=\pm\sqrt{2|\tilde{\eta}|\left(2n_{r}+\frac{3}{2}\right)+2\left|\eta\right|+ \left(K^{2}+\lambda^{2}+m^{2}\right)-\frac{m^{2}A^{2}}{\tilde{\eta}^{2}}}. \tag{47}\]
It should be noted that this result is in exact accord with that in (31) for \(\tilde{\zeta}=1/2\) and \(A=0\). Our solution in (45) suggests, beyond doubt, that a quantum particle moving in an oscillator-plus-linear
radial potential is allowed to reach the center at \(\rho=0\) with some amplitude (e.g., [50]). Hence, the requirement that \(U(\rho)\to 0\) as \(\rho\to 0\) is not a generally admissible condition in quantum mechanics; rather, finiteness and square integrability of the radial wave function is the only condition that is valid in general.
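The claimed agreement between Eq. (47) at \(A=0\) and Eq. (31) at \(\tilde{\zeta}=1/2\) can also be confirmed numerically; in the sketch below, all parameter values are arbitrary assumptions used only for the check.

```python
import numpy as np

# Illustrative parameters (assumptions)
m, K, lam, eta, n_r = 1.0, 0.4, 0.3, 0.5, 2

def E_eq47(A):
    """Spectrum of Eq. (47) for the oscillator plus linear confinement S(rho) = A*rho."""
    eta_t = np.sqrt(eta**2 + A**2)
    return np.sqrt(2*eta_t*(2*n_r + 1.5) + 2*abs(eta)
                   + K**2 + lam**2 + m**2 - m**2*A**2/eta_t**2)

def E_eq31(zeta=0.5):
    """Spectrum of Eq. (31) for the pure PDM KG oscillator."""
    return np.sqrt(2*eta*(2*n_r + abs(zeta) + 2) + K**2 + lam**2 + m**2)

print(E_eq47(A=0.0), E_eq31())   # the two values coincide when A = 0 and zeta = 1/2
print(E_eq47(A=0.3))             # the linear confinement shifts the level
```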
On the other hand, it could be interesting to know that equation (33) admits a solution in the form of biconfluent Heun functions
\[U\left(\rho\right)=C_{1}\,\rho^{|\tilde{\zeta}|+1/2}\exp\left(-\frac{|\tilde{ \eta}|\rho^{2}}{2}-\frac{mA\rho}{|\tilde{\eta}|}\right)H_{B}\left(\alpha^{ \prime},\beta^{\prime},\gamma^{\prime},\delta^{\prime},\sqrt{|\tilde{\eta}|} \rho\right), \tag{48}\]
where
\[\alpha^{\prime}=2\left|\tilde{\zeta}\right|,\ \beta^{\prime}=\frac{2mA}{|\tilde{\eta}|^{3/2}},\ \gamma^{\prime}=\frac{m^{2}A^{2}+\tilde{\cal E}^{2}\tilde{\eta}^{2}}{|\tilde{\eta}|^{3}},\ \delta^{\prime}=0. \tag{49}\]
For \(\tilde{\zeta}=1/2\), this results in
\[U\left(\rho\right)=C_{1}\,\rho\,\exp\left(-\frac{|\tilde{\eta}|\rho^{2}}{2}- \frac{mA\rho}{|\tilde{\eta}|}\right)H_{B}\left(1,\beta^{\prime},\gamma^{\prime },0,\sqrt{|\tilde{\eta}|}\rho\right). \tag{50}\]
A simple comparison between (45) and (50) would suggest, up to a constant, that
\[\left(\rho+\frac{mA}{\tilde{\eta}^{2}}\right)\ _{1}F_{1}(-\frac{1}{4|\tilde{ \eta}|}(\tilde{\cal E}^{2}-3|\tilde{\eta}|+\frac{m^{2}A^{2}}{\tilde{\eta}^{2}} ),\frac{3}{2},|\tilde{\eta}|(\rho+\tilde{A})^{2})=\rho\ H_{B}\left(1,\beta^{ \prime},\gamma^{\prime},0,\sqrt{|\tilde{\eta}|}\rho\right) \tag{51}\]
to imply
\[\left(x+\frac{\beta^{\prime}}{2}\right){}_{1}F_{1}\left(\frac{3-\gamma^{\prime }}{4},\ \frac{3}{2},(x+\frac{\beta^{\prime}}{2})^{2}\right)=x\,H_{B}(1,\beta^{\prime}, \gamma^{\prime},0,x);\ x=\sqrt{|\tilde{\eta}|}\rho. \tag{52}\]
Hence, to truncate the bi-confluent Heun series \(H_{B}\left(\alpha^{\prime},\beta^{\prime},\gamma^{\prime},\delta^{\prime},\sqrt{|\tilde{\eta}|}\rho\right)\) into a polynomial of degree \(n\geq 0\), one has to apply the condition that \(\gamma^{\prime}=2\left(n+1\right)+\alpha^{\prime}\Longrightarrow\gamma^{\prime}=2\left(n+3/2\right)\) for \(\alpha^{\prime}=1\). Whereas the truncation of the confluent hypergeometric series into a polynomial of order \(n_{r}=0,1,2,\cdots\) suggests that \(-4n_{r}=3-\gamma^{\prime}\Longrightarrow\gamma^{\prime}=2\left(2n_{r}+3/2\right)\). Consequently, the confluent hypergeometric and bi-confluent Heun polynomials are both of even powers and \(n=2n_{r}\geq 0\). Moreover, our result in (52) would, for \(\beta^{\prime}=0\), yield the correlation
\[{}_{1}F_{1}\left(\frac{3-\gamma^{\prime}}{4},\,\frac{3}{2},x^{2}\right)=\,H_{ B}(1,0,\gamma^{\prime},0,x), \tag{53}\]
which is, in fact, in exact accord with that reported by Ronveaux [51]
\[H_{B}(\alpha^{\prime},0,\gamma^{\prime},0,y)=\,_{1}F_{1}\left(\frac{1}{2}+ \frac{\alpha^{\prime}}{4}-\frac{\gamma^{\prime}}{4},1+\frac{\alpha^{\prime}}{ 2},y^{2}\right). \tag{54}\]
## IV Free KG equation in the background of a cosmic string in a Kaluza-Klein theory under Coulomb-type potential
In this section, we study the interaction between the KG equation and a Coulomb-type potential, \(S\left(\rho\right)=\frac{\kappa}{\rho}=\pm\frac{\left|\kappa\right|}{\rho}\), in KKT, in the space-time generated by a (4+1)-dimensional stationary cosmic string [52].
\[\left(\frac{d^{2}}{d\rho^{2}}+\frac{1}{\rho}\frac{d}{d\rho}-\frac{\sigma^{2}} {\rho^{2}}+\left(E-S\left(\rho\right)\right)^{2}-\xi\right)\psi\left(\rho \right)=0 \tag{55}\]
which, upon inserting \(S\left(\rho\right)=\kappa/\rho\), becomes
\[\left(\frac{d^{2}}{d\rho^{2}}+\frac{1}{\rho}\frac{d}{d\rho}-\frac{\sigma^{2}} {\rho^{2}}+\left(E-\frac{\kappa}{\rho}\right)^{2}-\xi\right)\psi\left(\rho \right)=0 \tag{56}\]
where
\[\xi=\left(m^{2}+K^{2}+\lambda^{2}\right)-\left(\frac{\Omega}{\alpha}\right)^{ 2}\left(\ell-JK-\frac{\Phi}{2\pi}\lambda\right)^{2} \tag{57}\]
By simplifying the last differential equation we find
\[\left(\frac{d^{2}}{d\rho^{2}}+\frac{1}{\rho}\frac{d}{d\rho}-\frac{\left( \sigma^{2}-\kappa^{2}\right)}{\rho^{2}}-\frac{2\kappa E}{\rho}+E^{2}-\xi \right)\psi\left(\rho\right)=0 \tag{58}\]
The solution to this differential equation is the following:
\[\psi\left(\rho\right)=\mathcal{N}\rho^{\alpha}\mathrm{WhittakerM}\left[\beta, \gamma,\vartheta\,\rho\right] \tag{59}\]
Substituting this ansatz into the differential equation and comparing terms, we find the following constants:
\[\begin{split}\alpha=-1/2\\ \beta=-\frac{\kappa E}{\sqrt{\xi-E^{2}}}\\ \gamma=\mathrm{i}\left(\sqrt{\kappa^{2}-\sigma^{2}}\right)\\ \vartheta=2\sqrt{\xi-E^{2}}\end{split} \tag{60}\]
The general solution of the last differential equation is given in the following form:
\[\psi\left(\rho\right)=\mathcal{N}\rho^{-\frac{1}{2}}\mathrm{WhittakerM}\left[- \frac{\kappa E}{\sqrt{\xi-E^{2}}},\mathrm{i}\left(\sqrt{\kappa^{2}-\sigma^{2 }}\right),2\sqrt{\xi-E^{2}}\,\rho\right] \tag{61}\]
The mathematical relationship between the WhittakerM function and the confluent hypergeometric function \({}_{1}F_{1}\) is given by the general form [52; 53]
\[\mathrm{WhittakerM}\left(\mu,\nu,z\right)=\mathrm{e}^{-\frac{z}{2}}z^{\frac{1 }{2}+\nu}\mathrm{hypergeom}\left(\left[\frac{1}{2}+\nu-\mu\right],\left[1+2 \nu\right],z\right) \tag{62}\]
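This relation can be spot-checked numerically, for instance with mpmath; the test values of \(\mu\), \(\nu\) and \(z\) below are arbitrary assumptions.

```python
import mpmath as mp

# Spot-check of Eq. (62): WhittakerM(mu, nu, z) vs. its 1F1 representation
mu, nu, z = mp.mpf('0.3'), mp.mpf('0.7'), mp.mpf('1.5')

lhs = mp.whitm(mu, nu, z)
rhs = mp.e**(-z/2) * z**(mp.mpf('0.5') + nu) * mp.hyp1f1(mp.mpf('0.5') + nu - mu, 1 + 2*nu, z)

print(lhs, rhs, mp.almosteq(lhs, rhs))   # agreement to working precision
```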
Using relation (62), the wave function of the free KG equation in this system is given as
\[\psi\left(\rho\right)=\mathcal{N}\left(\rho^{-\frac{1}{2}}\,\mathrm{e}^{-\left(\sqrt{\xi-E^{2}}\right)\,\rho}\left(2\left(\sqrt{\xi-E^{2}}\right)\,\rho\right)^{\frac{1}{2}+\mathrm{i}\left(\sqrt{\kappa^{2}-\sigma^{2}}\right)}\right)R\left(\rho\right) \tag{63}\]
and
\[R\left(\rho\right)={}_{1}F_{1}\left(\left[\frac{2\,\mathrm{i}\left(\sqrt{\kappa^{2}-\sigma^{2}}\right)\,\left(\sqrt{\xi-E^{2}}\right)+2E\kappa}{2\left(\sqrt{\xi-E^{2}}\right)}+\frac{1}{2}\right],\left[1+2\,\mathrm{i}\left(\sqrt{\kappa^{2}-\sigma^{2}}\right)\right],2\left(\sqrt{\xi-E^{2}}\right)\,\rho\right) \tag{64}\]
Imposing the quantization condition
\[\frac{2\,\mathrm{i}\left(\sqrt{\kappa^{2}-\sigma^{2}}\right)\,\left(\sqrt{\xi-E^{2}}\right)+2E\kappa}{2\left(\sqrt{\xi-E^{2}}\right)}+\frac{1}{2}=-n \tag{65}\]
and simplifying the resulting equation, we find the energy expression in the following form:
\[\begin{array}{c}E^{\pm}\left(n\right)=\pm\left(\frac{16\left(\sqrt{\xi^{2} \kappa^{2}(2n+1)^{2}(\kappa^{2}-\sigma^{2})}\right)}{16\left[n^{4}+2n^{3}+ \sigma^{4}+4\kappa^{4}\right]+\left[32\sigma^{2}+24\right]n^{2}+\left[32\sigma ^{2}+8\right]n+8\left[-8\kappa^{2}+1\right]\sigma^{2}+1}\right)\\ \pm\left(\frac{\xi\left[16\left(n^{4}+2n^{3}\right)+\left(-16\left[\kappa^{2}+2 \sigma^{2}\right]+24\right)n^{2}+\left(-16\left[\kappa^{2}+2\sigma^{2}\right] +8\right)n+16\sigma^{4}+8\left(-6\kappa^{2}+1\right)\sigma^{2}+4\left[8\kappa^ {4}-\kappa^{2}\right]+1\right]}{16\left[n^{4}+2n^{3}+\sigma^{4}+4\kappa^{4} \right]+\left[32\sigma^{2}+24\right]n^{2}+\left[32\sigma^{2}+8\right]n+8\left[ -8\kappa^{2}+1\right]\sigma^{2}+1}\right)\end{array} \tag{66}\]
This last equation gives the energy of the free Klein-Gordon equation as a function of the quantum number \(n\), \(E^{\pm}\left(n\right)=f(n)\). In contrast to the result of the first part, where the spectrum was expressed through the Bessel functions, the energy here depends explicitly on \(n\), which makes apparent the actual effect of the Coulomb-type potential.
## V Conclusion
We have investigated the KGO immersed in a non-trivial background characterized by the presence of curvature, torsion and an extra dimension, that is, a KKT spacetime with a cosmic string and a screw-like dislocation, associated with curvature and torsion, respectively. Furthermore, the extra dimension of the KKT provides a quantum flux, which influences the quantum dynamics of the KGO.
Through a purely analytical analysis, we show that the KGO is influenced not only by curvature, torsion and the quantum flux associated with the extra dimension, but also by the possibility of modifying the mass term of the Klein-Gordon equation.
The influence of curvature and torsion on the KGO can be visualized through a redefinition of the angular momentum eigenvalues, which are now dependent on the parameters associated with the cosmic string and the screw-like dislocation. Furthermore, the quantum flux coming from the extra dimension contributes to this quantum effect of "correcting" the angular momentum eigenvalues. These quantum effects can be seen as analogous to the Aharonov-Bohm effect (ABE).
The modification of the mass term of the Klein-Gordon equation drastically alters the energy profile of the KGO; that is, the central potentials inserted in the axial equation provide totally different relativistic energy profiles.
....................................................
|
2310.12077 | One-Shot Imitation Learning: A Pose Estimation Perspective | In this paper, we study imitation learning under the challenging setting of:
(1) only a single demonstration, (2) no further data collection, and (3) no
prior task or object knowledge. We show how, with these constraints, imitation
learning can be formulated as a combination of trajectory transfer and unseen
object pose estimation. To explore this idea, we provide an in-depth study on
how state-of-the-art unseen object pose estimators perform for one-shot
imitation learning on ten real-world tasks, and we take a deep dive into the
effects that camera calibration, pose estimation error, and spatial
generalisation have on task success rates. For videos, please visit
https://www.robot-learning.uk/pose-estimation-perspective. | Pietro Vitiello, Kamil Dreczkowski, Edward Johns | 2023-10-18T16:13:35Z | http://arxiv.org/abs/2310.12077v1 | # One-Shot Imitation Learning:
###### Abstract
In this paper, we study imitation learning under the challenging setting of: (1) only a single demonstration, (2) no further data collection, and (3) no prior task or object knowledge. We show how, with these constraints, imitation learning can be formulated as a combination of trajectory transfer and unseen object pose estimation. To explore this idea, we provide an in-depth study on how state-of-the-art unseen object pose estimators perform for one-shot imitation learning on ten real-world tasks, and we take a deep dive into the effects that camera calibration, pose estimation error, and spatial generalisation have on task success rates. For videos, please visit www.robot-learning.uk/pose-estimation-perspective.
**Keywords:** One-Shot Imitation Learning, Unseen Object Pose Estimation, Robot Manipulation
## 1 Introduction
Imitation Learning (IL) can be a convenient and intuitive approach for teaching a robot how to perform a task. However, many of today's methods for learning vision-based policies require tens to hundreds of demonstrations per task [1, 2, 3, 4, 5, 6]. Whilst combining with reinforcement learning [7, 8, 9] or pre-training on similar tasks [10, 11, 12] can help, in this paper we take a look at **one-shot imitation learning**, where we assume: (1) only a single demonstration, (2) no further data collection following the demonstration, and (3) no prior task or object knowledge.
Figure 1: We model **one-shot imitation learning** as **trajectory transfer**, where we use **unseen object pose estimation** to adapt an end-effector trajectory from a single demonstration, to a new scene where the object is in a novel pose. In this paper, we are going to study this formulation through a series of four investigations shown in the above boxes.
With only a single demonstration and no prior knowledge about the task or the object(s) the robot is interacting with, the optimal imitation is one where the robot and object(s) are aligned in the same way as during the demonstration. For example, imitating a "scoop the egg" task (see Figure 1) could be achieved by aligning the spatula and the egg with the same sequence of relative poses as was provided during the demonstration.
But without any prior knowledge about the object(s), such as 3D object models, the reasoning required by the robot now distils down to an **unseen object pose estimation** problem: the robot must infer the relative pose between its current observation of the object(s) and its observation during the demonstration, in order to perform this trajectory transfer [13] (see Figure 2). Unseen object pose estimation is already a challenging field within the computer vision community [14; 15; 16; 17], and these challenges are further compounded in a robotics setting.
From this standpoint, we are the first to study the utility of unseen object pose estimation for trajectory transfer in the context of one-shot IL, and we reveal new insights into the characteristics of such a formulation and how to mitigate its challenges. We begin our study by analysing how camera calibration and pose estimation errors affect the success rates of ten diverse real-world manipulation tasks, such as inserting a plug into a socket or placing a plate in a dishwasher rack.
Following this, we estimate the pose estimation errors of eight different unseen object pose estimators in simulation, including one based on NOPE [18], a state-of-the-art unseen object orientation estimation method, and one based on ASpanFormer [19], a state-of-the-art correspondence estimation method. We then benchmark trajectory transfer using these eight unseen object pose estimators against DOME [20], a state-of-the-art one-shot IL method, on the same ten real-world tasks as mentioned above. Our results not only show that the unseen object pose estimation formulation of one-shot IL is capable of outperforming DOME by \(22\%\) on average, but it is also applicable to a much wider range of tasks, including those for which a third-person perspective is necessary [21].
Finally, we evaluate the robustness of this formulation to changes in lighting conditions, and conclude our study by investigating how well it generalises spatially, as an object's pose differs to its pose during the demonstration.
## 2 Related Work
Whilst there are many methods that study imitation learning with multiple demonstrations per task [3; 22; 23], in this section, we set our paper within the context of existing one-shot IL methods.
**Trajectory Transfer**. Trajectory transfer refers to adapting a demonstrated trajectory to a new test scene. Previous work has considered how to warp a trajectory from the geometry during the demonstration to the geometry at test time [13], focusing on non-rigid registration for manipulating deformable objects. However, when relying on only a single demonstration, they displayed very local generalisation to changes in object poses, suggesting the need for multiple demonstrations in order to achieve greater spatial generalisation [13; 24; 25]. In contrast, we are the first to study unseen object pose estimation for trajectory transfer, which enables spatial generalisation from only a single demonstration.
**Methods that require further data collection**. Since one demonstration often does not provide sufficient information to satisfactorily learn a task, some methods rely on further data collection. For instance, Coarse-to-Fine approaches [26; 27] train a visual servoing policy by collecting data around the object in a self-supervised manner. On the other hand, FISH [9] fine-tunes a base policy learned with IL using reinforcement learning and interactions with the environment. While these approaches have their strengths, the additional environment interactions require time and sometimes human supervision. In contrast, modelling one-shot IL as unseen object pose estimation avoids the need for real-world data collection, hence enabling scalable learning.
**Methods that require prior knowledge**. Another way of compensating for the lack of demonstration data is to leverage prior task knowledge. For instance, many IL methods require access to object poses [28; 29; 30] or knowledge of the manipulated object categories [31], which is often impractical in everyday scenarios. Another approach that assumes prior knowledge for learning tasks from a single demonstration is meta-learning [10; 11; 12; 32; 33; 34; 35]. In this paradigm, a policy is pre-trained on a set of related tasks in order to infer actions for similar tasks from a single demonstration. However, the applicability of the learned policy is limited to tasks closely related to the meta-training dataset. Contrary to the meta-learning formulation of one-shot IL, we approach it as unseen object pose estimation, which assumes no prior knowledge and thus increases its generality.
**Methods which do not require further training or prior knowledge**. DOME [20] and FlowControl [36] are one-shot IL algorithms that assume no prior knowledge. The effectiveness of these methods hinges on their reliance on a wrist-mounted camera, limiting their applicability to tasks where hand-centric observability is sufficient [21] and in which hand-held objects do not occlude the wrist-mounted camera. In contrast, the one-shot IL formulation explored in this paper applies to a much wider spectrum of tasks, including those for which a third-person perspective is necessary [21], such as the dishwasher task considered in our experiments (Section 4).
## 3 One-shot Imitation Learning for Robotic Manipulation
In this work, we study one-shot IL under the challenging setting when there is: (1) only a single demonstration, (2) no further data collection, and (3) no prior task or object knowledge. This is an appealing setting to aim for, since it encourages the design of a general and efficient method. In this section, we explore this from the perspective of object pose estimation. First, we provide a formulation of IL. Then, we model one-shot IL for manipulation as a trajectory transfer problem. Finally, we introduce unseen object pose estimation, which underpins the trajectory transfer problem.
Trajectory transfer, illustrated in Figure 2, involves the robot adapting a demonstrated end-effector (EEF) trajectory for deployment. This is done by estimating the relative object pose using unseen object pose estimation from a pair of RGB-D images captured at the beginning of the demonstration and deployment phases, allowing for spatial generalisation of the demonstrated trajectory.
### Imitation Learning Formulation
We observe demonstrations as trajectories \(\mathbf{\tau}=(\{\mathbf{x}_{t}\}_{t=1}^{T},\mathbf{s})\), where \(\mathbf{x}\) represents the state of the system, \(T\) a finite horizon, and \(\mathbf{s}\) the context vector. The state \(\mathbf{x}\) encompasses various measurements relevant to the task and should include sufficient information for inferring optimal actions. In the context of robotic manipulation, the state could be the EEF and/or object(s) pose(s). Similarly, the context vector \(\mathbf{s}\) captures various information regarding the conditions the demonstration was recorded under, and can assume different forms, ranging from an image of the task space captured before the demonstration to a task identifier in a multi-task setting.
Given a dataset of demonstrated trajectories \(\mathcal{D}=\{\mathbf{\tau}_{i}\}_{i=1}^{N}\), IL aims to learn a policy \(\pi^{*}\) that satisfies the objective:
\[\pi^{*}=\arg\min D(q(\mathbf{x}),p(\mathbf{x})), \tag{1}\]
where \(q(\mathbf{x})\) and \(p(\mathbf{x})\) are the distributions of states induced by the demonstrator and policy respectively, and \(D(q,p)\) is a distance measure between \(q\) and \(p\).
Figure 2: **Overview of our formulation for one-shot IL**: (1) The robot receives a demonstration as an RGB-D image and an end-effector trajectory. (2) At deployment, the robot sees the object in a new pose and must adapt the demonstrated trajectory accordingly. (3) To do so, the robot uses unseen object pose estimation to estimate the object transformation between demonstration and deployment. (4) It then applies this transformation to the demonstrated trajectory. (5) Ultimately, this aligns the end-effector with the same object-centric poses as experienced during the demonstration.
### Modelling One-Shot Imitation Learning as Trajectory Transfer
In this section, we model one-shot IL as trajectory transfer (see Figure 2), which we define as the process at test time of moving the EEF, or a grasped object, to the same set of relative poses, with respect to a target object, that it had during the demonstration. Let \(R\) define the frame of the robot and \(E_{t}\) that of the EEF at time step \(t\). A homogeneous transformation matrix \(\mathbf{T}_{AB}\) represents frame \(B\) expressed in frame \(A\). During demonstrations, the robot receives instructions through teleoperation or kinesthetic teaching, which are defined as:
\[\mathbf{\tau}^{Demo}=\left(\mathbf{X}_{R}^{Demo},I^{Demo}\right), \tag{2}\]
and comprise an RGB-D image \(I^{Demo}\) of the task space captured before the demonstration using an external camera, and a sequence of EEF poses from the demonstration, \(\mathbf{X}_{R}^{Demo}=\{\mathbf{T}_{RE_{t}}^{Demo}\}_{t=1}^{T}\), expressed in the robot frame.
With only a single demonstration and no prior knowledge about the object the robot is interacting with, the optimal imitation can be considered to be where the robot and object(s) are aligned in the same way as during the demonstration, throughout the task. For example, imitating grasping a mug (see Figure 2), could be achieved by aligning the EEF in such a way that the relative pose between the EEF and mug are the same as during the demonstration. Note that this also holds for more complex trajectories beyond grasping, such as insertion or twisting manoeuvres. Moreover, if the task involves a grasped object, assuming that the latter is fixed and rigidly attached to the gripper, aligning the EEF will also align the grasped object.
Now, consider Equation 1 in the context of manipulation, where the optimal policy \(\pi^{*}\) should result in replicating the demonstrated task state, \(\mathbf{x}=\mathbf{T}_{OE}\), where \(O\) is the object frame, at every timestep during deployment. This demonstrated sequence of EEF poses, expressed in the object frame, is defined as \(\mathbf{X}_{O}^{Demo}=\{\mathbf{T}_{OE_{t}}^{Demo}\}_{t=1}^{T}\), where
\[\mathbf{T}_{OE_{t}}^{Demo}=\mathbf{T}_{OR}^{Demo}\mathbf{T}_{RE_{t}}^{Demo}=\left(\mathbf{T}_{ RO}^{Demo}\right)^{-1}\mathbf{T}_{RE_{t}}^{Demo}, \tag{3}\]
and \(\mathbf{T}_{RO}^{Demo}\) is the unknown object pose during the demonstration. During deployment, for optimal imitation, \(\pi^{*}\) should replicate \(\mathbf{X}_{O}^{Demo}\) given a novel unknown object pose \(\mathbf{T}_{RO}^{Test}\), i.e. we would like \(\mathbf{T}_{OE}^{Test}=\mathbf{T}_{OE}^{Demo}\) during every timestep of the interaction. Hence, the sequence of EEF poses (expressed in the robot frame) that aligns with the demonstration can be defined as \(\mathbf{X}_{R}^{Test}=\{\mathbf{T}_{RE_{t}}^{Test}\}_{t=1}^{T}\), where
\[\mathbf{T}_{RE_{t}}^{Test}=\mathbf{T}_{RO}^{Test}\mathbf{T}_{OE_{t}}^{Demo}. \tag{4}\]
Substituting Equation 3 into Equation 4 yields
\[\mathbf{T}_{RE_{t}}^{Test}=\mathbf{T}_{RO}^{Test}\mathbf{T}_{OR}^{Demo}\mathbf{T}_{RE_{t}}^{Demo}={}^{R}\mathbf{T}_{\delta}\mathbf{T}_{RE_{t}}^{Demo}, \tag{5}\]
where we define
\[\mathbf{T}_{\delta}=\mathbf{T}_{RO}^{Test}\mathbf{T}_{OR}^{Demo}, \tag{6}\]
which represents the transformation of the object between the demonstration and deployment scenes, where we use the superscript \(R\) to indicate that \({}^{R}\mathbf{T}_{\delta}\) is expressed in the robot frame \(R\).
This then leads to the crux of our investigation: the trajectory transfer problem, i.e. computing \(\mathbf{X}_{R}^{Test}\) from \(\mathbf{X}_{R}^{Demo}\), distils down to the problem of estimating the relative object pose between the demonstration and deployment scenes, given a single image from each. Once this pose is estimated, controlling the EEF to follow this trajectory can simply make use of inverse kinematics. And given that we assume no prior object knowledge, the challenge becomes one of one-shot unseen object pose estimation, an active field in the computer vision community [14; 15; 16; 17; 18].
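To make this step concrete, the sketch below applies Eqs. (5) and (6) with 4\(\times\)4 homogeneous transforms. The object poses and demonstration waypoints are hypothetical placeholders chosen only for illustration; in practice \({}^{R}\mathbf{T}_{\delta}\) comes from the unseen object pose estimator rather than from known object poses.

```python
import numpy as np

def transform(R=np.eye(3), t=np.zeros(3)):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical object poses in the robot frame at demo and test time (placeholders)
T_RO_demo = transform(rot_z(0.0), np.array([0.5, 0.0, 0.1]))
T_RO_test = transform(rot_z(0.6), np.array([0.6, 0.2, 0.1]))

# Eq. (6): relative object pose expressed in the robot frame
T_delta_R = T_RO_test @ np.linalg.inv(T_RO_demo)

# A few hypothetical demonstrated EEF waypoints, expressed in the robot frame
X_R_demo = [transform(rot_z(0.0), np.array([0.5, 0.0, 0.3 - 0.05*k])) for k in range(4)]

# Eq. (5): transferred trajectory for deployment
X_R_test = [T_delta_R @ T for T in X_R_demo]
print(X_R_test[0].round(3))
```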
### One-shot Unseen Object Pose Estimation for Trajectory Transfer
One-shot unseen object pose estimation is concerned with estimating the relative pose of a novel object visible in two images. Formally, let \(C\) denote the frame of reference of a camera, and consider one image \(I^{Demo}\), taken when a novel object was at a pose \(\mathbf{T}_{CO}^{Demo}\), and a second image \(I^{Test}\), taken when that same object was at a pose \(\mathbf{T}_{CO}^{Test}\). One-shot unseen object pose estimation aims to estimate the relative transformation between the two object poses, \({}^{C}\mathbf{T}_{\delta}\), that satisfies \(\mathbf{T}_{CO}^{Test}={}^{C}\mathbf{T}_{\delta}\mathbf{T}_{CO}^{Demo}\), where we use the superscript to indicate that \({}^{C}\mathbf{T}_{\delta}\) is expressed in the camera frame \(C\). Rearranging this equation yields
\[{}^{C}\mathbf{T}_{\delta}=\mathbf{T}_{CO}^{Test}\left(\mathbf{T}_{CO}^{Demo}\right)^{-1}= \mathbf{T}_{CO}^{Test}\mathbf{T}_{OC}^{Demo}. \tag{7}\]
Comparing Equations 6 and 7 reveals that \({}^{C}\mathbf{T}_{\delta}\) and \({}^{R}\mathbf{T}_{\delta}\) both represent the relative object pose, but are expressed in different frames of reference. In fact, after estimating \({}^{C}\mathbf{T}_{\delta}\) using one-shot unseen object pose estimation, we can find \({}^{R}\mathbf{T}_{\delta}\) from the following relationship derived in Appendix A:
\[{}^{R}\mathbf{T}_{\delta}=\mathbf{T}_{RC}{}^{C}\mathbf{T}_{\delta}\left(\mathbf{T}_{RC}\right) ^{-1}=\mathbf{T}_{RC}{}^{C}\mathbf{T}_{\delta}\mathbf{T}_{CR}, \tag{8}\]
where \(\mathbf{T}_{RC}\) is the pose of the camera in the robot frame. Hence, the trajectory transfer problem can now be solved by using one-shot unseen object pose estimation to calculate the value of \({}^{C}\mathbf{T}_{\delta}\).
Examining Equation 8 reveals that there are two potential sources of error that could degrade the accuracy of \({}^{R}\mathbf{T}_{\delta}\) and compromise performance during deployment. The first source of error is the error in extrinsic camera calibration \(\mathbf{T}_{RC}\), and the second is the error in unseen object pose estimation itself, \({}^{C}\mathbf{T}_{\delta}\), both of which we discuss and study in the following sections.
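A minimal sketch of the frame change in Equation 8, together with a crude illustration of how a small calibration error in \(\mathbf{T}_{RC}\) corrupts \({}^{R}\mathbf{T}_{\delta}\); the transforms and the 1 cm error below are assumptions used only for illustration.

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)

# Hypothetical extrinsics and camera-frame pose estimate (placeholders)
T_RC = np.eye(4);  T_RC[:3, 3] = [0.4, 0.0, 0.8]      # camera pose in the robot frame
C_T_delta = rot_z(0.4);  C_T_delta[:3, 3] = [0.05, -0.02, 0.0]

# Eq. (8): relative object pose re-expressed in the robot frame
R_T_delta = T_RC @ C_T_delta @ np.linalg.inv(T_RC)

# Effect of a 1 cm translation error in the calibration on the transferred pose
T_RC_err = T_RC.copy();  T_RC_err[:3, 3] += [0.01, 0.0, 0.0]
R_T_delta_err = T_RC_err @ C_T_delta @ np.linalg.inv(T_RC_err)
print(np.linalg.norm(R_T_delta[:3, 3] - R_T_delta_err[:3, 3]))  # induced position offset [m]
```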
## 4 Experiments
We now introduce ten representative everyday robotics tasks that span a broad range of complexities. As depicted in Figure 3, these tasks are: placing one bowl into another (**Bowls**), inserting a plug into a socket (**Plug**), grasping a mug by the handle (**Mug**), scooping an egg from a pan (**Egg**), inserting bread into a toaster (**Toaster**), inserting a plate into a specific slot in a dish rack (**Dishwasher**), inserting a cap into a bottle (**Cap**), pouring a marble from a kettle into a mug (**Tea**), grasping a can (**Can**), and placing a lid onto a pot (**Lid**).
We begin this section by studying the effect of calibration and pose estimation errors on the success rate for each of these tasks (Section 4.1). We then consider eight unseen object pose estimation methods and estimate their pose estimation errors in simulation (Section 4.2). And finally, we benchmark these pose estimation methods when used for trajectory transfer on the discussed real-world tasks (Section 4.3), we study the robustness to changes in lighting (Section 4.3) and distractors (Appendix G.2), and we examine the spatial generalisation capabilities of trajectory transfer (Section 4.4). For videos of our experiments, please visit our website.
### Sensitivity Analysis of Task Success Rates to Calibration and Pose Estimation Errors
Correlating task success rates with calibration and pose estimation errors in the real world is challenging. To establish a relationship between these errors and task success rates, we begin by providing a single demonstration via kinesthetic teaching from a last-inch setting. We then measure the correlation between the task success rate and the starting EEF position error prior to imitating the demonstration (see Appendix F.2). Finally, we map starting EEF position errors to either calibration or pose estimation errors using an empirically defined mapping (see Appendix B).
Specifically, for each considered task, we reset the object position, add a position error to the starting EEF pose, replay the demonstration, and note if the task execution is successful. This is repeated 10 times for each position noise magnitude, with noise magnitudes starting from \(0\) mm and increasing in \(2\) mm increments until the success rate is \(0\%\) over three consecutive noise magnitudes. This resulted in a total of approximately 1,500 real-world trajectories in order to establish the relationship between EEF position errors and task success rates for the considered tasks.
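A schematic of this noise sweep is sketched below. The replay routine and its 12 mm tolerance are placeholders standing in for the real robot execution; only the sweep logic (2 mm increments, 10 trials per magnitude, stopping after three consecutive all-failure magnitudes) mirrors the procedure described above.

```python
import numpy as np

def random_offset(magnitude):
    """Sample a 3D position offset of the given magnitude in a random direction (assumption: isotropic)."""
    v = np.random.normal(size=3)
    return magnitude * v / np.linalg.norm(v)

def replay_with_offset(offset):
    """Placeholder: reset the object, offset the starting EEF pose, replay the demo, report success.
    Stubbed here with a fictitious 12 mm tolerance purely for illustration."""
    return np.linalg.norm(offset) < 0.012

magnitude, step, trials = 0.0, 0.002, 10   # start at 0 mm, 2 mm increments, 10 trials each
consecutive_zero, rates = 0, []
while consecutive_zero < 3:
    successes = sum(replay_with_offset(random_offset(magnitude)) for _ in range(trials))
    rates.append((round(magnitude, 3), successes / trials))
    consecutive_zero = consecutive_zero + 1 if successes == 0 else 0
    magnitude += step
print(rates)
```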
Then, to empirically map calibration errors or pose estimation errors to these starting EEF position errors, only one potential source of error was considered at a time. For example, when mapping translation errors in calibration to starting EEF position errors, we assumed that rotation errors in calibration as well as rotation and translation errors in pose estimation are all zero, which isolates the effect of translation errors in calibration on task success rates.
Figure 3: The 10 real-world tasks we use for evaluation.
work. Secondly, given typical performance of pose estimation and calibration methods, rotation errors are more problematic than position errors. For example, considering that rotation errors \(<4^{\circ}\) are more probable than position errors \(>2\) cm with today's methods, we can see that a mere \(\sim\)\(4^{\circ}\) error in calibration or \(\sim\)\(2^{\circ}\) in pose estimation leads to an average success rate of \(\sim\)\(50\%\), whereas this same success rate would require a \(\sim\)\(7\) cm error in calibration or \(\sim\)\(2\) cm in pose estimation.
### Pose Estimation Errors of One-Shot Unseen Object Pose Estimation Methods
We now consider eight different unseen object pose estimation methods and evaluate their pose estimation errors in simulation. To this end, we generate a simulated dataset consisting of 1100 image pairs of 55 different objects from Google Scan Object [37], using Blender [38] (see Appendix D.2).
The considered methods are: 1) **ICP**: We use the Open3D [39] implementation of point-to-point ICP [40]. ICP is given a total of \(5\) seconds to make each estimate, giving it enough time to try \(50-150\) different initialisations. 2) **GMFlow**: We use GMFlow [41] to estimate correspondences between two RGB images and solve for the relative object pose \({}^{C}\mathbf{T}_{\delta}\) with Singular Value Decomposition (SVD) [42] using the depth image. 3) **DINO**: We use DINO [43] to extract descriptors for pixels in the two RGB images, and use the SciPy [44] implementation of the Hungarian algorithm to establish correspondences. Again, the relative pose \({}^{C}\mathbf{T}_{\delta}\) is obtained using SVD. 4) **ASpan.**: We use the pre-trained ASpanFormer [19] to establish correspondences between two RGB images and estimate \({}^{C}\mathbf{T}_{\delta}\) using SVD. 5) **ASpan. (FT)**: The ASpan. baseline, with model weights fine-tuned on a custom object-centric dataset generated in Blender using ShapeNet and Google Scan Objects [37]. 6) **NOPE**: We use the pre-trained NOPE [18] model to estimate the relative object rotation from two images, and a heuristic that centres two partial point clouds to predict the relative translation. 7) **Reg.**: We train an object-agnostic PointNet++ to regress relative object orientations around the world's vertical axis from two coloured point clouds, using data generated in simulation and domain randomisation. We solve for the relative translation using a heuristic that centres two partial point clouds. 8) **Class.**: This is equivalent to the Reg. baseline with the exception that PointNet++ is trained to classify the relative object orientation.
We also experimented with predicting the relative object translation from pairs of RGB-D images for the NOPE, Reg. and the Class. baselines. However, we found that a simple heuristic that centres partial point clouds for translation prediction had a similar performance, and thus used this during inference. We refer the reader to Appendix C for a more detailed description of all of these methods.
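Several of the baselines above recover the relative pose from matched 3D points with SVD. The sketch below is a generic Kabsch/Umeyama-style implementation of that step with a synthetic self-test; it is not the exact implementation used in the paper.

```python
import numpy as np

def rigid_transform_from_correspondences(P, Q):
    """Least-squares rigid transform mapping points P -> Q (Kabsch/Umeyama-style).
    P, Q: (N, 3) arrays of corresponding 3D points, e.g. back-projected matched pixels."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                                    # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    T = np.eye(4); T[:3, :3], T[:3, 3] = R, t
    return T

# Self-test with synthetic, noise-free correspondences
rng = np.random.default_rng(0)
P = rng.uniform(-0.1, 0.1, size=(50, 3))
theta = 0.5
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = P @ R_true.T + np.array([0.2, -0.05, 0.1])
print(rigid_transform_from_correspondences(P, Q).round(3))
```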
\begin{table}
\begin{tabular}{l c c} \hline & Translation & Rotation \\ & Error [cm] & Error [deg] \\ \hline Class. & \(\mathbf{5.9\pm 11.2}\) & \(\mathbf{4.3\pm 10.1}\) \\ ASpan. (FT) & \(6.0\pm 13.0\) & \(6.1\pm 13.4\) \\ Reg. & \(9.8\pm 15.3\) & \(9.4\pm 15.3\) \\ DINO & \(11.3\pm 18.2\) & \(11.6\pm 18.9\) \\ ASpan. & \(11.5\pm 17.4\) & \(11.7\pm 18.8\) \\ NOPE & \(18.8\pm 17.4\) & \(18.4\pm 17.2\) \\ ICP & \(14.3\pm 30.8\) & \(14.4\pm 32.4\) \\ GMFlow & \(28.4\pm 24.6\) & \(27.6\pm 24.5\) \\ \hline \end{tabular}
\end{table}
Table 1: Simulation pose estimation errors and standard deviations for eight different methods.
Figure 4: Correlation between error magnitudes in either calibration or pose estimation, and task success rates, assuming a distance of 80 cm between the camera and the task space.
The results for this experiment are shown in Table 1 (see Appendix E for an error definition and a discussion of these results). Although directly comparing these results to Figure 4 suggests that none of these baselines would be suitable for learning the considered tasks, in practice we found that translation and rotation errors in pose estimates are often coupled and partially cancel each other out, while Figure 4 only considers isolated errors. These observations are further reinforced by the strong performance we found with these baselines in our real-world experiments.
### Real-World Evaluation
We now investigate if the trajectory transfer formulation of one-shot IL can learn real-world, everyday tasks, of varying tolerances.
**Implementation Details** For trajectory transfer, we use a given unseen object pose estimator to estimate \({}^{C}\mathbf{T}_{\delta}\), and Equations 8 and 5 alongside inverse kinematics to align the robot with the first state of the demonstration. From this state, we align the full robot trajectory with the demonstrated trajectory, following Appendix F.2. In order to isolate the object of interest from the background, we segment it from the RGB-D image captured before the demonstration and deployment using a combination of OWL-ViT [45] and SAM [46]. Both segmented RGB-D images are subsequently downsampled to ensure compatibility with a given method. See Appendix F for further details.
**Experimental Procedure** We conduct experiments using a 7-DoF Sawyer robot operating at 30 Hz. The robot is equipped with a head-mounted camera capturing 640-by-480 RGB-D images. The task space is defined as a \(30\times 75\) cm region on a table in front of the robot, which is further divided into 10 quadrants measuring \(15\times 15\) cm each. During the demonstration phase, all objects are positioned in approximately the same location near the middle of the task space (see Figure 5), and a single demonstration is provided for each task via kinesthetic teaching from a last-inch setting. In the testing phase, the object is randomly placed within each quadrant with a random orientation difference of up to \(\pm 45^{\circ}\) relative to the demonstration orientation. We test each method on a single object pose within each of the quadrants, resulting in 10 evaluations per method.
**Results.** The results for this experiment are shown in Table 2, with tasks ordered by mean success rate across methods and methods ordered by mean success rate across tasks. These results also include a comparison against DOME [20], a state-of-the-art one-shot IL method. We observe that for DOME, the majority of failure cases are caused by its inaccurate segmentation of target objects. Its poor performance on the dishwasher task is attributed to the fact that the demonstration had to be started from further away, as DOME requires the object to be fully visible from a wrist-mounted camera. As a result, DOME was beaten on average by three of the baselines.
The Reg. and Class. baselines had the best performance on average, likely due to the fact that their training data was tailored to object manipulation (see Appendix D.1). ICP's performance was affected by the partial nature of the point clouds, which sometimes caused it to converge to local optimums. NOPE found itself out of its training distribution. Being trained on images with the object in the centre, NOPE can confuse a relative translation for a rotation when an object is displaced from the image centre. DINO uses semantic descriptors, which cause keypoints to be locally similar, translating into matches that are coarse and not precise. ASpanFormer was trained on images of entire scenes with many objects, hence expecting scenes rich with features. Therefore, predicting correspondences for a single segmented object causes this method to perform poorly. Meanwhile, we note that the fine-tuned ASpanFormer's performance drops significantly more with the sim-to-real gap than that of the Reg. and Class. methods. Lastly, GMFlow was found to poorly estimate rotations as the predicted flow tended to be smooth and consistent across pixels.
\begin{table}
\begin{tabular}{|l|c c c c c c c c c c|c|} \hline & Plug & Pot & Toaster & Dishwasher & Mug & Egg & Bottle & Tea & Bowls & Can & Mean \\ \hline TT (GMFlow) & 10 & 40 & 0 & 20 & 40 & 20 & 20 & 20 & 60 & 60 & 29 \\ TT (ASpanFormer (FT)) & 0 & 10 & 10 & 50 & 60 & 50 & 50 & 30 & 30 & 50 & 34 \\ TT (ASpanFormer) & 0 & 10 & 0 & 20 & 50 & 50 & 50 & 40 & 60 & 70 & 35 \\ TT (DINO) & 0 & 20 & 10 & 30 & 50 & 60 & 40 & 40 & 80 & 70 & 40 \\ TT (NOPE) & 0 & 10 & 0 & 50 & 0 & 70 & 90 & **100** & 70 & **100** & 49 \\ DOME & 0 & 10 & 80 & 0 & **100** & 70 & 40 & 90 & 70 & **100** & 56 \\ TT (ICP) & 10 & **70** & 80 & 40 & 60 & 80 & **100** & **100** & **100** & 74 \\ TT (Class) & **20** & 10 & **90** & **70** & **100** & **90** & **100** & **100** & 80 & **100** & 76 \\ TT (Reg.) & **20** & 30 & **90** & **70** & **100** & **90** & **100** & **100** & 80 & **100** & 78 \\ \hline Mean & 6.7 & 23.3 & 40 & 46.7 & 62.2 & 64.4 & 65.6 & 68.9 & 70 & 83.3 & \\ \hline \end{tabular}
\end{table}
Table 2: Real-world success rates (%), from ten trials for each combination of method and task. TT (Trajectory Transfer) is used to distinguish all the previously discussed baselines from DOME [20].
**Robustness to Changes in Lighting Conditions**. We now focus on trajectory transfer using regression, the best-performing method in our real-world experiments, and analyse its robustness to changes in lighting conditions. To this end, we rerun the real-world experiment for this method while additionally randomising the position, luminosity, and colour temperature of an external LED light source before each rollout. The results from this experiment indicate that trajectory transfer using regression remains strong, with an average decrease in performance of only \(8\%\) when the lighting conditions are randomised significantly between the demonstration and test scene. We attribute this strong performance to the fact that the dataset used to train this baseline randomises lighting conditions between the two input images as part of domain randomisation. For full details regarding this experiment and its results, we refer the reader to Appendix G.1.
### Spatial Generalisation
Another insight that emerged from the real-world experiments is the impact of the relative object pose between the demonstration and deployment on the average performance of trajectory transfer. When we aggregate the success rates across all baselines, tasks and poses within each of the quadrants, we notice a decline in the success rate of trajectory transfer as the object pose deviates from the demonstration pose. In Figure 5, we display a mug at the approximate location where all objects were placed during demonstrations (labelled as DEMO), as well as a mug at the centre of each of the quadrants. The opacity of the mugs located in the different quadrants is proportional to the average success rate for those quadrants, which is also displayed in white text. Note that whilst in this figure the orientation of the mug is fixed, experiments did randomise the orientations.
The cause of this behaviour lies in the camera perspective. Specifically, even when kept at a fixed orientation, simply changing the position of an object will result in changes to its visual appearance. Moreover, contrary to the effect of errors in camera calibration (see Appendix B.1), the changes in the visual appearance lessen as the camera is placed further away from the task space. These insights might seem intuitive, but for this same reason, they could be easily overlooked by researchers in the field. As a result, for optimal spatial generalisation, we recommend providing demonstrations at the centre of the task space, as this minimises the variations in the object appearance when the object's pose deviates from the demonstration pose.
## 5 Discussion and Limitations
By formulating one-shot IL using unseen object pose estimation, we are able to learn new tasks without prior knowledge, from a single demonstration and no further data collection. We demonstrate this from a theoretical perspective and show its potential when applied to real-world tasks.
One limitation of this method is that we do not address generalisation to intra-class instances. Using semantic visual correspondences [47] is a promising future direction here. Another limitation is the reliance on camera calibration. However, our analysis of calibration errors and real-world experiments do indicate good performance given typical calibration errors.
Although the proposed method has demonstrated to be very versatile in the types of tasks it can learn, in our setup it required a static scene. This is because the robot arm often occludes the task space given the head-mounted camera on our Sawyer robot, making it not possible to continuously estimate the object pose during deployment. However, this is a limitation of the hardware setup and not a fundamental limitation of the method. By optimising the camera placement for minimum occlusions, trajectory transfer could be deployed in a closed-loop and in dynamic scenes.
Finally, the current formulation is unsuitable for tasks that depend on the relative pose between two objects, where neither of them is rigidly attached to the EEF. For instance, pushing an object close to another cannot rely on the rigid transfer of the trajectory, because the latter needs to be adapted according to the relative pose of the two objects. However, such tasks are fundamentally ambiguous with only a single demonstration, and multiple demonstrations would be required.
Figure 5: The correlation between success rate and displacement from the place where the demonstration was given.
#### Acknowledgments
We would like to thank all reviewers for their thorough and insightful feedback, which had a significant impact on our paper.
|
2306.00395 | Traffic Road Congestion System using by the internet of vehicles (IoV) | Traffic problems have increased in modern life due to a huge number of
vehicles, big cities, and ignoring the traffic rules. Vehicular ad hoc network
(VANET) has improved the traffic system in previous some and plays a vital role
in the best traffic control system in big cities. But due to some limitations,
it is not enough to control some problems in specific conditions. Now a day
invention of new technologies of the Internet of Things (IoT) is used for
collaboratively and efficiently performing tasks. This technology was also
introduced in the transportation system which makes it an intelligent
transportation system (ITS), this is called the Internet of vehicles (IOV). We
will elaborate on traffic problems in the traditional system and elaborate on
the benefits, enhancements, and reasons to better IOV by Systematic Literature
Review (SLR). This technique will be implemented by targeting needed papers
through many search phrases. A systematic literature review is used for 121
articles between 2014 and 2023. The IoV technologies and tools are required to
create the IoV and resolve some traffic rules through SUMO (simulation of urban
mobility) which is used for the design and simulation the road traffic. We have
tried to contribute to the best model of the traffic control system. This paper
will analysis two vehicular congestion control models in term of select the
optimized and efficient model and elaborate on the reasons for efficiency by
searching the solution SLR based questions. Due to some efficient features, we
have suggested the IOV based on vehicular clouds. These efficient features make
this model the best and most effective than the traditional model which is a
great reason to enhance the network system. | Muhammad Shoaib Farooq, Sawera Kanwal | 2023-06-01T06:55:40Z | http://arxiv.org/abs/2306.00395v1 | # Traffic Road Congestion System using by the Internet of vehicles (IoV)
###### Abstract
Traffic problems have increased in modern life due to the huge number of vehicles, big cities, and the disregard of traffic rules. The vehicular ad hoc network (VANET) has improved the traffic system over previous years and plays a vital role in the best traffic control systems in big cities. But due to some limitations, it is not enough to control certain problems in specific conditions. Nowadays, newly invented Internet of Things (IoT) technologies are used to perform tasks collaboratively and efficiently. This technology was also introduced in the transportation system, making it an intelligent transportation system (ITS); this is called the Internet of Vehicles (IoV). We elaborate on the traffic problems of the traditional system and on the benefits, enhancements, and reasons why IoV is better, using a Systematic Literature Review (SLR). This technique is implemented by targeting the needed papers through many search phrases. The systematic literature review covers 121 articles between 2014 and 2023. The IoV technologies and tools required to create the IoV are identified, and some traffic rules are resolved through SUMO (Simulation of Urban Mobility), which is used for the design and simulation of road traffic. We have tried to contribute to the best model of the traffic control system. This paper analyzes two vehicular congestion control models in order to select the optimized and efficient model and elaborates on the reasons for its efficiency by answering the SLR-based questions. Due to some efficient features, we have suggested the IoV based on vehicular clouds. These efficient features make this model better and more effective than the traditional model, which is a great reason to enhance the network system.
## I Introduction
The use of VANET technology and different network architectures contributes to addressing traffic congestion in the transportation system [1]. In traditional VANET architectures, all nodes are equally important and can act as relay nodes, regardless of their location or role [2]. The routing protocol disseminates network topology information among immediate neighbors and gradually throughout the network, allowing routers to gain knowledge of the network [3]. To establish connectivity in on-demand routing protocols, routes to a node are discovered by flooding request messages [4]. Mobile ad hoc network development relies significantly on simulation as an essential tool, since simulation facilitates the study of network behaviors and characteristics under various conditions and parameters [5]. However, simulation is not a foolproof approach to ensuring a protocol's practical viability, as simulators rely on assumptions and simplified models that may not precisely represent real network operations.
The purpose of an ad hoc routing protocol is to address the issue of complex network structure changes due to fast node movement [6]. Another paper proposes an intelligent system for monitoring and managing roads that stands out from existing systems because it incorporates three related components, namely infrastructure, transport, and cloud, which interact with each other in a structurally integrated manner [7].
The research in [8] analyzed the behavioral and performance aspects of three routing protocols - proactive, reactive, and hybrid - in an urban environment. The study was conducted using the NS2 simulator and applied the TCP and UDP transport protocols with different car densities. According to [9], the routing table is the primary data structure that stores all the necessary information about routes, including destination address, sequence number, hop count, next hop, lifetime, precursor list, and route state. In emergency situations and harsh weather conditions, SAR data is useful for detecting changes in the environment, as highlighted in [10]. In [11], the authors investigate the features of the ad hoc routing protocols OLSR, AODV, and ZRP and analyze performance metrics such as packet delivery ratio, end-to-end delay, throughput, and jitter under increasing node density in the network.
In [12], the authors utilize the NS2 simulator to assess the performance of the AODV, DSR, OLSR, and DSDV protocols under various conditions. The Ad Hoc committee and the International Committee on Systematic Bacteriology acknowledge the Bacteriology Division of the International Union of Microbiological Societies and the International Council of Scientific Unions for their support, as mentioned in [13]. To improve the overall reliability of embedded sensor networks, [14] proposes a heterogeneous backup scheme that involves substituting one type of resource with another. According to [15], MANETs are characterized by high mobility, with new nodes continuously joining the network while existing ones move within or exit the network, and numerous ad hoc network protocols have been proposed for them. [16] discusses the difficulty of constructing intrusion detection systems for wireless sensor networks and mobile ad hoc networks; the paper presents an overview of current intrusion detection methods and identifies significant areas for future research. Another paper, [17], introduces the IVG (Inter-Vehicle Geocast) protocol, which employs ad hoc wireless networks to broadcast user alarm messages to all vehicles on a highway. The protocol aims to alert drivers of any potential hazards or obstacles, while also providing information on risk areas, driving direction, and vehicle positioning.
The paper introduces a system for real-time traffic monitoring and vehicle tracking in public or private transportation sectors [18]. It utilizes a social network service to provide individual users with traffic monitoring capabilities. A prototype model is developed and presented to showcase the system's functionality and performance evaluation.
In 2020, the focus of network architecture design was to achieve direct vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication using VANET technology and IoV [1]. However, the adoption of VANET has been limited due to insecure network service quality and device inconsistency. To address this, a smart logistics vehicle management scheme based on IoV was proposed, where smart vehicles are connected to multiple networks and share information with road-side units and other road users [19]. The uncontrolled flow of traffic in cities creates problems such as air pollution and accidents, which can negatively impact travel time and delay fraction [20]. Therefore, the accurate prediction of travel time is crucial in such situations.
Real-time traffic updates can be obtained using cameras, sensors, and AI-based predictive algorithms in traffic signal management systems [21]. Smart Transportation Systems (STS) use IoT technology and Big Data analytics to provide real-time traffic updates [22]. A sustainable and robust traffic management system can be achieved using Internet of Vehicles (IoV) technology, which is a structured system controlled by a central device [20]. The Intelligent Traffic Light Controller Using the Rooted System is an example of an IoV-based traffic control system [23]. The use of IoT technology in traffic control systems has improved traffic management from traditional to advanced systems [24]. We contribute an IoV system based on cloud computing, discussed in detail later. The central control system of IoV may require expensive communication devices, but with cloud computing a leader vehicle is selected that holds the computing and sharing resources for vehicular cloud-based computing, while other vehicles that offer storage, traffic sensing, speed measurement, on-board units, and other resources participate in the network [25]. A comparative study is presented to determine the important reasons for vehicular-cloud-based computing with the help of the SLR. We will discuss the architecture of the modern model of IoV based on cloud computing and all the challenges that may be faced by developers. We shall also point out some challenges that arise during the design of the IoV cloud-based computing architecture; these challenges are mostly related to security [24].
The main aim of this paper is to cover all the issues of the traditional traffic control system from the perspective of an SLR. The related papers are selected in a first phase, and irrelevant papers are then discarded so that the remaining target papers can be searched for the required answers.
The proposed system aims to address traffic-related problems using modern technology, such as IoV, MAC, DSRC, and SUMO. While VANET was a previous network model that aimed to solve traffic-related problems, it faced deficiencies in terms of security and coverage area [19]. A systematic literature review (SLR) was conducted to identify the best network for controlling traffic and to discuss the outcomes of using IoV. Machine learning algorithms have also found use in various domains, including intelligent transportation systems [26]. The proposed system will be automated and will use real-time data exchange between vehicles to visualize the city traffic on an OBU for drivers [1]. By using cloud-based computing to control traffic systems, this system aims to deal with the issue of heavy traffic in big cities. To our knowledge, there is no comparable study; the proposed automated IoV system will be beneficial for solving congestion issues.
The objective of this proposed work is to present a systematic literature review (SLR) of the IoV domain as a way to improve the traditional traffic system and VANET, and to discuss the outcomes in terms of the best network for controlling traffic. We aim to solve traffic problems using the modern IoV technology, two protocols, MAC (media access control) and DSRC (dedicated short-range communication), and the SUMO (Simulation of Urban Mobility) tool, which is used for animating the traffic on the OBU (On-Board Unit) in front of drivers to visualize the city traffic. The system mainly relies on real-time information exchange between vehicles. This modern system is automated, which will be beneficial for solving traffic-related problems through the SLR-based questions.
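To make the SUMO step concrete, the sketch below shows how a running simulation could be polled for congested road segments through TraCI, SUMO's Python control interface. It is a minimal sketch rather than the authors' implementation: the configuration file name "city.sumocfg", the number of steps, and the speed threshold are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): polling a SUMO run for
# congested edges via TraCI. "city.sumocfg" and the threshold are assumptions.
import traci

CONGESTION_SPEED = 2.0  # m/s; an occupied edge slower than this is flagged

def run(steps=600):
    # Launch SUMO headless; replace "sumo" with "sumo-gui" to watch the run.
    traci.start(["sumo", "-c", "city.sumocfg"])
    try:
        for _ in range(steps):
            traci.simulationStep()
            congested = [
                edge for edge in traci.edge.getIDList()
                if traci.edge.getLastStepVehicleNumber(edge) > 0
                and traci.edge.getLastStepMeanSpeed(edge) < CONGESTION_SPEED
            ]
            if congested:
                print(f"t={traci.simulation.getTime():.0f}s congested: {congested}")
    finally:
        traci.close()

if __name__ == "__main__":
    run()
```

Information extracted in this way is the kind of real-time input that an OBU display or a vehicular cloud could consume in the automated system described above.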
The structure of this paper follows [18]. Section 1 provides a brief introduction to the development of IoV and highlights the significance of collecting, summarizing, analyzing, and categorizing current research in this area. In Section 2, the background of IoV is presented, including a systematic literature review and a comparison to previous work. Section 3 describes the research methodology, including the research questions, inclusion/exclusion criteria, and search strings used to gather relevant studies in the IoV domain. The research results are presented in Section 4 through tables, graphs, and a proposed IoV model. Section 5 offers a summary of the findings via a research hierarchy and a designed IoV-based smart model. In Section 6, open issues and challenges in IoV are discussed from various perspectives, based on the selected papers. Finally, threats to research validity are presented in Section 7, while Section 8 concludes the article.
## II Related Work
The review found that the majority of the studies on the use of IoV for traffic road congestion management have focused on the use of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication. Crowding can also be described as an increase in the expenses of road users resulting from the disturbance of regular traffic flow [27]. Such systems still have some restrictions in terms of network security, reliability, and consistency. The integration of IoT in the IoV-based traffic control system is essential for the efficient management of city traffic [28]. However, security and privacy measures in IoV pose challenges that need to be addressed [29]. GPS and other location-based technologies can provide drivers with useful information, but VANET faces challenges similar to those of MANET, and its wireless nature makes it less secure than wired networks [30]. The dynamic nature of VANET poses challenges in designing routing protocols that can support fast-moving vehicle nodes [31, 32]. AODV-MEC, a novel approach for clustering AODV routing, utilizes edge computing techniques to overcome these challenges [33]. VCC-based traffic management systems have been reviewed and compared to VANET-based systems in terms of their effectiveness in managing road traffic [34]. As more vehicles become connected and transfer data through IoV and other infrastructure technologies, the need for an efficient traffic control system continues to grow [34].
Various standards and protocols have been developed for vehicular ad hoc networks (VANETs) and the Internet of Vehicles (IoV) to enable intelligent transport systems and remote vehicle control [24, 30, 35]. However, privacy and security concerns and data storage management need to be addressed in these networks [36]. V2V and V2I communication are used in VANETs, where vehicles communicate with other vehicles and with Road-Side Units (RSUs) [34]. A proposed framework called "vehicular cloud" aims to reduce the workload and power consumption of centralized cloud systems by utilizing computational resources from nearby vehicular clouds [37]. This framework operates in proximity to traffic lights and employs a mathematical model called MILP to analyze distributed task assignment compared to a single-task-assignment approach. A systematic review of blockchain applications in ITS and IoV can provide an overview of current research in this area [20]. The study proposes a framework that involves creating an obfuscation zone on a map, identifying the locations of users, and excluding location-based services that pertain to lakes [38].
This study introduces a new platform for vehicular data storage and processing using cloud computing and IoT technologies, which includes intelligent parking and vehicular data mining cloud services [39]. Other research focuses on developing affordable sensor systems for detecting and classifying vehicles [40], and real-time traffic control based on distributed computing platforms [41]. The Internet of Things (IoT) can be utilized for traffic management by connecting all traffic components through sensor devices [42]. Various algorithms and protocols have been proposed for improving traffic flow, such as cooperative adaptive cruise control, platooning, and traffic signal control, and vehicular ad hoc network (VANET) using DSRC communication technology has been considered for use in connected and autonomous vehicles [43]. Additionally, several studies have examined the impact of IoV on traffic congestion [27, 35, 43, 44]. These studies have shown that the use of IoV can lead to significant improvements in traffic flow and travel time. However, there are also concerns about the privacy and security of the data collected by the IoV system.
Cloud computing is one more trending, modern, and widely applicable technology in almost every field of life. It is basically used to access resources that may be too expensive for individuals, such as strong computing, sensing, and storage capacity, in order to build large and efficient systems [25]. The motivation for using cloud computing in IoV is that not every vehicle can have the powerful resources required to compute complex algorithms. Overall, IoV has the potential to enhance traffic road congestion management, as well as provide safety, security, confidence, and comfort to passengers and drivers. VCC aims to offer cost-effective services to drivers while reducing accidents and traffic congestion [24]. The underutilization of communication, storage, and computing resources in vehicles is a further motivating factor.
A meaningful combination of these resources will have a major impact on society [18]. However, additional research is needed to address the challenges associated with implementing IoV, such as privacy and security.
An IoT-based traffic management solution uses sensors such as IR and RFID to guide ambulance drivers and identify traffic violations [45]. A framework has been proposed to transform ordinary cities into smart cities by leveraging ICT and automating processes; the paper also considers a technique to measure traffic density for smooth vehicle movement.
The main challenge addressed in [46] is the adaptation of the Public Key Infrastructure (PKI) architecture, traditionally used in wired networks, for use in Vehicular Cloud Networks (VCN), and the provision of a security solution for VCNs. The study comprises three steps: firstly, conducting a network architecture study to identify the key components of the network; secondly, proposing a security solution architecture; and lastly, programming and validating the solution through simulation testing.
Smart traffic management systems utilizing queue detectors installed beneath the road surface can detect traffic congestion and transmit data to a central control unit [21].
Real-time traffic monitoring and vehicle tracking systems have been designed for public and private transportation sectors, leveraging social network services to provide individual users with traffic monitoring [18]. Various strategies and automated sensor systems have been proposed to analyze traffic density and alleviate congestion, with a review of different sensor systems evaluating their advantages and drawbacks [47]. An intelligent traffic monitoring system is proposed utilizing global unique EPC codes and RFID readers for all-weather operations [48]. VANET faces security issues, while IoV, which is dependent on IoT and cloud computing, offers a self-organized, infrastructure-less, and automated system for traffic control that can service larger areas [20; 49; 50]. The use of Internet of Vehicles (IoV) has been proposed as a solution to traffic congestion management, and a majority of research has focused on utilizing vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication to improve traffic flow. However, IoV-based vehicular communication faces security threats, categorized as Inter-vehicular attacks and intra-vehicle attacks. Inter-vehicular attacks involve malicious nodes that can attack communication to divert actual information, resulting in potential life and property loss. This type of attack is known as masquerading or impersonation attack. On the other hand, intra-vehicle attacks can target all participating components of a car by any criminal. If the sensors within a car are attacked for criminal purposes, it may result in compromising driver safety and vehicle integrity. [51] Meanwhile, significant technological advancements in the transportation sector have led to the implementation of computer vision in Intelligent Transportation Systems (ITS) to analyze video data sourced from CCTV cameras [52].
Another discussion shows that big data can play a key role in providing sound and valuable predictions, and it offers a comprehensive analysis of several methods, tools, and techniques for the use of big data in IoV [53]. The literature also suggests that the use of IoV can lead to significant improvements in traffic flow and travel time, but there are also concerns about the privacy and security of the data collected by the IoV system.
Table 1 lists the points used as references in this research and then covers the problems found in traditional vehicular ad hoc networks.
Our proposed study aims to compare and analyze the differences between the cloud-based IoV and traditional VANET models, highlighting their underlying architectures and communication mechanisms. The IoV model relies on cloud computing and communication technologies to enable real-time data exchange between vehicles, infrastructure, and the cloud, while VANET uses direct vehicle-to-vehicle communication. The study explores the advantages and limitations of both models and addresses existing problems in order to develop an optimized and efficient network model.
## III Research Methodology
A comprehensive procedure for collecting and reviewing the main articles [54] is provided by the SLR for all studies in the field under consideration. In this paper, we describe a design for an IoV system that is based on vehicular cloud computing (VCC). This is a suitable and methodical review process that involves the following steps: 1) define the work and objectives; 2) discuss the research questions; 3) plan the exploration of research papers; 4) analyze the articles for information; 5) sort the articles according to keywords; 6) extract the information from the articles.
### _Research Objectives_
The major aims and objectives of the proposed work are as follows.
To assess the potential for modern vehicular technology to address traffic problems and to determine its effectiveness as a means of traffic control.
Identify the types of problems commonly faced in traffic control without the use of modern vehicular technology.
Evaluate the impact of cloud computing on the Internet of Vehicles in controlling the traffic system.
To investigate the impact of traffic congestion on driving performance and determine the extent to which it affects the driver.
To identify the technological advancements that make the Internet of Vehicles superior to the traditional traffic system, and to assess how these improvements enhance the efficiency and effectiveness of the traffic system.
### _Research Questions_
Our study can only proceed efficiently if we frame our goals as research questions.
Furthermore, a complete search design, required in the review for the documentation and extraction of the most important articles, has been established. Table 2 lists the questions together with the motivation each one addresses, answered in light of the well-defined process adopted in [30, 54]. We have set all the research questions that fulfil our proposed work in Table 2, and we then collect data and analyze them against the proposed goals. Table 2 presents all the research questions as research aims and objectives: it points out the traditional network and its positive outcomes, the issues in this network, the technological improvements required for an effective model, and the architectural procedure for developing the enhanced network.
| Symbol | Research Question | Major Motivation |
| --- | --- | --- |
| RQ1 | What type of problems were faced during the usage of traditional vehicular ad hoc networks in the literature? | To explore all the problems faced by drivers using traditional traffic control systems, as discussed in the literature. |
| RQ2 | What type of traffic issues in big cities can be solved with IoV in the literature? | To gain knowledge about the issues that can be solved by the IoV architecture model discussed in the literature. |
| RQ3 | Which technological improvements make IoV better than VANET? | To identify the technological enhancements that make our IoV architecture better than the old traffic control system. |
| RQ4 | What are the research types to explore IoV for solving traffic problems? | To explore the journals that published the most research articles on enhancing the traffic control system. |
| RQ5 | Which steps should be taken to deal with the challenges of traffic congestion in a vast area? | To explain the procedure of the vehicular-cloud-computing-based IoV architecture and deal with its challenges. |

Table 2: Research Questions and Major Motivations
### Search strategy
The next step in the SLR is to search for the best, most relevant, and most recent articles related to our work. In this process, the search string technique is very important, because the needed papers are found easily with an efficient search string, whereas a poorly composed string compromises the later phases and makes the review inefficient. Composing the search terms carefully also captures the many perspectives related to the study.
### Search string
Table 3 lists all the search strings that we have used to collect the targeted articles. Search string selection is very important for finding the most suitable and targeted articles for the research work. By using numerous search strings and a keyword built around the Internet of Vehicles, we have verified that the keyword was effective and reasonable. We use the search string to find relevant papers for our research objectives. We should also use alternative keywords (synonyms) to cover a wide range of research areas; alternative keywords can be found in abstracts, indexes, and on the internet. By using the final selected keyword and its alternative keywords, we find the most relevant, recent, and validated articles. Through this process, a search string can be formulated for related articles by submitting it to the relevant online directories.
These are the technologies used in vehicular cloud networks for communication and transmission. The finalized search string has three fragments. As discussed earlier, VANET is also used nowadays to overcome traffic congestion problems, and it has obtained positive results in resolving them. The VANET model operates through vehicle-to-infrastructure (V2I), vehicle-to-vehicle (V2V), and vehicle-to-pedestrian (V2P) communication. It works like a MANET (mobile ad hoc network), which is infrastructure-less and has no central controlling device; VANET is a branch of MANET and follows its rules. But this model has some problems: it is not automated, because all nodes and vehicles in the VANET model communicate with each other only in real time and share information only with neighbors through the vehicle's roadside unit and On-Board Unit. The system is not able to share all the information related to traffic congestion across the whole city, and for these reasons not all problems can be resolved efficiently.
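As an illustration of how the three-fragment search strings of Tables 3 and 4 can be composed mechanically, the short Python sketch below joins OR-groups of keywords and their synonyms with AND. The particular grouping and terms are illustrative assumptions, not the exact strings submitted to the repositories.

```python
# Minimal sketch: composing a boolean search string from keyword groups,
# mirroring the three-fragment structure described above. The groups and
# terms are illustrative assumptions.
KEYWORD_GROUPS = {
    "network": ["Internet of Vehicles", "IoV", "VANET", "Connected Cars"],
    "enabling": ["IoT", "Intelligent Transportation System", "ITS"],
    "focus": ["traffic congestion", "vehicular cloud computing", "SUMO"],
}

def or_group(terms):
    # Quote each term and join the alternatives of one fragment with OR.
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

def build_search_string(groups):
    # AND the fragments together to obtain the final repository query.
    return " AND ".join(or_group(terms) for terms in groups.values())

if __name__ == "__main__":
    print(build_search_string(KEYWORD_GROUPS))
```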
### Literature Resources / Journal Repositories
Table 4 lists the publishers of the articles selected for the systematic literature review (SLR), together with the search strings used in each repository. The exploration was carried out to collect data using search terms of the form: ("IoV" OR "VANET" OR "Connected Cars") AND ("IoT" OR "Vulnerability" OR "ITS").
### Inclusion and Exclusion Criteria
The constraints defined for the inclusion criteria (IC) are:
**IC1)** the study was mainly conducted for an Internet of Vehicles cloud-based network;
**IC2)** the study addresses how an IoV cloud-based network enhances the traffic system.
| Repository | Search String |
| --- | --- |
| Science Direct | (Social internet of vehicles OR Social internet of things OR Intelligent transportation system AND Targeted, enabling IoV technologies) |
| SCIENCE | (Systematic Literature Review OR SLR AND Internet of Vehicles communication security OR IoV authenticity OR IoV integrity OR IoV confidentiality) |
| IEEE Xplore | (Blockchain technology OR Automotive communication AND Intelligent transportation system OR Internet of vehicles AND SLR OR systematic literature review) |
| IEEE Xplore | (Smart logistics management OR Smart logistic transportation AND Internet of vehicles OR IoV OR connected vehicles) |
| IEEE Xplore | (Computation-intensive graph jobs OR vehicular computationally intensive graph AND vehicular clouds AND IoV OR internet of vehicles) |
| ACM | (Survey AND Road traffic congestion OR vehicular congestion AND traffic measures OR transportation analysis OR road evaluation AND sustainable and resilient transportation) |
| ACM | (Vehicular cloud networking OR VCN OR Transportation cloud networking OR intelligent vehicular cloud computing AND architecture OR network design AND principles OR standards) |

Table 4: Publisher-wise search strings
| Term (Keyword) | Synonyms / Alternative Strings |
| --- | --- |
| Internet Based (IB) | Internet of Vehicles (IoV), Internet of Things (IoT), ITS |
| SUMO | Simulation of Urban Mobility animation tool (SUMO) |
| VANET, MANET | Vehicular ad hoc network and mobile ad hoc network |
| DSRC | Dedicated short-range communication; MAC (media access control) |

Table 3: Terms and keywords used in the search
The exclusion criteria were applied to all the remaining articles; a study involving a vehicular ad hoc network (VANET) that affects or elaborates on other extraneous processes was excluded, as was any study meeting one of the following:
* **EC1)** the study did not contain any expectations and only includes an IoV-enhancement cloud-based network;
* **EC2)** the study does not include any discussion of a VANET computational study;
* **EC3)** the study involved an IoV cloud-based vehicle system.
## 6 Selection of Relevant Papers
The first phase screens the studies based on titles and eliminates duplicates. Articles that were available but inappropriate for the selected domain were discarded; during this selection, abstracts were carefully studied, and articles describing the relevant computational area were included. Next, the articles were examined against the inclusion criteria. Lastly, the selected IoV-domain articles were passed on to the following evaluation level.
### Abstract-Based Keywording
In this phase, the screening process is implemented on Abstract base keywords. We have to analyze the abstract base reading and screen the relevant keywords from the abstract. After analyzing them we can decide on the current paper to target our objective work. The primary idea can easily get by analyzing the abstract base reading so it is very beneficial in targeting our papers. If keywords are found in this phase, the next work is to combine these keywords to search for relevant work.
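The abstract-based screening can be expressed as a simple keyword filter, as in the sketch below. The inclusion and exclusion term lists are illustrative stand-ins for the criteria IC1-IC2 and EC1-EC3; they are not the exact lists used in this review.

```python
# Minimal sketch: title/abstract keyword screening for the selection phases.
# The INCLUDE/EXCLUDE lists are illustrative assumptions.
from dataclasses import dataclass

INCLUDE = ["internet of vehicles", "iov", "vehicular cloud"]
EXCLUDE = ["agriculture"]  # example of an off-topic term to discard

@dataclass
class Article:
    title: str
    abstract: str

def passes_screening(article: Article) -> bool:
    text = (article.title + " " + article.abstract).lower()
    if any(term in text for term in EXCLUDE):
        return False
    return any(term in text for term in INCLUDE)

def screen(articles):
    # Keep only the articles whose title or abstract matches the criteria.
    return [a for a in articles if passes_screening(a)]

if __name__ == "__main__":
    demo = [Article("IoV congestion control", "A vehicular cloud approach ..."),
            Article("Crop yield survey", "An unrelated agriculture study ...")]
    print([a.title for a in screen(demo)])
```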
The selected articles have been divided into major repositories: IEEE, Science Direct, and ACM, together with related domains such as the Internet of Things (IoT) and the Social Internet of Vehicles (SIoV). Furthermore, for clarity, several recognized IoV and VANET studies from IEEE Xplore [1, 6, 7] are considered and grouped here alongside Science Direct.
### Assessment Criteria for Quality
The evaluation of the systematic quality of the targeted articles is an essential task of a systematic review. All of the given articles differ from each other in their design, purpose, and research technique, but the evaluation should follow the sequential assessment process suggested in [30]. The whole process should adhere to the assessment standard set for the evaluation of quality. Every step has its own standards to evaluate, and the stated standards for objectives, methodologies, ways of collecting and analyzing the data, and reporting of results should be followed [51, 55]. The survey was administered using articles written by various authors, as described below. Subsequently, distinct approaches were utilized to answer each questionnaire item. Internal and external quality standards may also be employed to assess the caliber of systematic reviews [55].
We conducted an online survey of a population able to respond to our questionnaire and then analyzed the responses, interpreting them in the light of the literature review for the evaluation of systematic quality, as given in Table 5.
## 7 Data Analysis
In this chapter, we cover the IoV (Internet of Vehicles) technologies and the configuration of the SUMO (Simulation of Urban Mobility) tool. These technologies aid in managing vehicular traffic and optimizing urban mobility: IoV refers to the integration of vehicles with the internet, allowing real-time communication between vehicles and infrastructure, while SUMO enables simulation-based testing and optimization of transportation systems. A detailed study is needed to answer the SLR questions. Table 6 shows the selection process of the targeted SLR related to our goal. It consists of four phases: the first phase selects the articles after inputting the selected strings into the article repositories, and the unrelated articles are then removed on the basis of an analysis of the most suitable SLR candidates. We first analyzed the review answers given by the surveyed population and then analyzed the SLR questions in order to achieve the objective of our work.
## 8 Search Results
There were 121 articles found during the primary stage of the article search. The initial search involves searching for all the articles using the specific keywords chosen for the search.
| Sr. | Assessment Question (Internal Scoring) | Expected Answer | Score |
| --- | --- | --- | --- |
| 1 | Can we control the traffic problems with modern vehicular technology? | Yes | - |
| 2 | Which type of problems are mostly faced without modern vehicular technology? | No | 13.8% |
| 3 | What is the effect of cloud computing based on the Internet of Vehicles for controlling the traffic system? | Heavy congestion | 32.1% |
| 4 | Does traffic congestion affect your performance during driving? | Yes | 67.9% |
| 5 | Which technological improvements make IoV better than the traditional traffic system of our country? | Maybe a cloud system | 14.3% |

Table 5: Questionnaire to assess the quality
These articles were retrieved from the online repositories mentioned in Table 6. After the four phases are applied, the targeted relevant articles are passed on for data extraction.
First of all, title-based selection is performed after the primary search; this title-based selection of articles is Phase 1 of the selection process, and all articles not included in this phase are not considered further. The next selection is done by analyzing the abstracts. Data extraction is performed after a detailed study of each paper. A final set of 17 papers is collected in the fourth phase; the detail for each repository is given in Table 6.
### Assessment and discussion of research questions
In this section, we evaluate and analyze the research questions and then assess the findings based on a review of the population analysis. We conducted an online survey using a Google form to collect responses and targeted specific audiences for valid responses. This process involved reviewing all relevant articles to analyze the research questions.
#### 1 What is the impact of cloud computing on traffic control systems, specifically with regard to the Internet of Vehicles?
It is most important to set the domain of the articles read in order to search the targeted data for the review. We selected articles for this purpose from January 2014 to June 2023 and chose 8 articles containing the targeted and needed data. We conducted an online survey based on the questions; the review graph produced by the selected population is given below, and we analyzed this section to draw findings from the reviews. The majority of responses pointed to better usage of resources (59.4%), with expensive cost at 21.9% and "do not know" at 15.6%, as shown in Figure 1. This validates the point that cloud-computing-based IoV controls the traffic due to better usage of resources.
#### 2 Which type of problems are mostly faced without modern vehicular technology?
The transportation industry may encounter significant problems in the absence of modern Internet of Vehicles (IoV) technology. Some of these issues include limited connectivity between vehicles, infrastructure, and other devices, resulting in ineffective communication and coordination. This can lead to traffic management inefficiencies, reduced safety, and longer travel times. For this question, the majority of responses pointed to an intelligent traffic control system (59.4%), while traffic resources managed by a cloud system received 34.4% of the responses; this validates the point that IoV is an intelligent traffic control system, as shown in Figure 2. In addition, the absence of real-time data can make it difficult to accurately predict traffic patterns, congestion, and other crucial information. This can impede route optimization, emergency planning, and traffic flow enhancement. Furthermore, without modern IoV technology, various safety features like advanced driver assistance systems (ADAS) and collision avoidance systems may be limited or unavailable, which can increase the risk of accidents and fatalities.
Increased Environmental Impact: Without IoV technology, vehicles may not be optimized for efficient driving, leading to higher fuel consumption and increased carbon emissions.
Inefficient Fleet Management: Without real-time data on the status of vehicles, it can be difficult to manage fleets efficiently. This can result in increased maintenance costs, longer downtime, and reduced productivity.
An IEEE "Institute of Electrical and Electronics Engineers." IEEE vehicular technology society section 1st March 2022. It has been contributing a lot of concepts terms of controlling the traffic system with modern vehicular technology. It has also found gaps between the current system and modern needs. It has also contributed to the challenge is in modern vehicular system. We have many articles found by different online repositories used also uses them in this work [49].
Some paper explores this novel architecture of (IoV) cognitive internet of vehicles, as well as research occasions in a
| Database/Repository | Primary Search | P-I | P-II | P-III | P-IV |
| --- | --- | --- | --- | --- | --- |
| IEEE Xplore | 50 | 26 | 11 | 10 | 5 |
| Science Direct | 25 | 12 | 8 | 5 | 3 |
| ACM | 26 | 8 | 7 | 5 | 4 |
| Springer Link | 50 | 20 | 16 | 8 | 5 |
| **Total** | **121** | | | | **17** |

Table 6: Publisher-based stage-wise selection process
Figure 1: Impact of the cloud system on IoV
Figure 2: Problems faced without IoV technology
We present an overview of IoV, including its evolution, related technologies, and architecture [43].
#### 3 Can We Control the Traffic Problems with Modern Vehicular Technology?
This study aims to reduce congestion and messaging overhead, and we can control traffic problems using some techniques, tools and modern technological concepts.
Intelligent Transportation Systems (ITS) technology can help manage and control traffic by providing real-time information to drivers, such as traffic conditions, alternate routes, and estimated travel times. ITS also includes systems that can automatically adjust traffic signals and manage traffic flow based on current conditions. Connected vehicles use wireless communication to share data between vehicles and infrastructure. This technology allows vehicles to communicate with each other and with traffic management systems, which can help reduce congestion and improve safety on the roads. Autonomous Vehicles: Autonomous vehicles have the potential to significantly reduce traffic problems by eliminating human error, reducing congestion, and improving fuel efficiency. Self-driving cars can communicate with each other and traffic management systems to optimize traffic flow and reduce traffic jams.
After creating an online survey form and sharing it with all concerned stakeholders, we collected reviews that were predominantly positive: 67.9% of respondents answered "Yes", 17.9% "No", and 14% "Maybe". This validates the point that many traffic problems can be controlled with IoV, as shown in Figure 3. It was found that the traditional system has limitations, such as the lack of real-time traffic sharing with other nodes and sharing only among neighboring vehicles, which is insufficient to control the traffic of an entire city. In contrast, our enhanced system is designed to address these issues by incorporating Internet of Vehicles (IoV) technology and integrating long-range communication nodes, along with using Vehicular Cloud Computing. This allows our system to have better long-range traffic control capabilities compared to the traditional system.
Reference [55] brings insight into the layered architecture of the Social Internet of Vehicles (SIoV) by demonstrating the role and architecture of each entity of the system along with the enabling technologies and protocols. Another main contribution of that article is to highlight the social relationships between the different objects of the system and the management of the dynamic nature of SIoV systems; it also analyzes the proposed SIoV architecture through separate use cases and argues for its feasibility [55]. The design of an IoV architecture poses numerous challenges such as reorganization, scalability, security, privacy, awareness, heterogeneity, and interoperability.
#### 4 Does Traffic Congestion Affect Your Performance During Driving?
Traffic congestion can affect a driver's performance while driving. It can increase stress levels, fatigue, and frustration, which can negatively impact a driver's ability to focus and react to changing driving conditions.
When a driver is stuck in traffic, they may become impatient and try to make risky maneuvers such as changing lanes frequently or aggressively accelerating, which can lead to accidents. Moreover, stop-and-go traffic can lead to more wear and tear on the vehicle and increase fuel consumption.
Therefore, it is important for drivers to remain calm and focused when driving in traffic congestion. Drivers should allow extra time for their journeys, follow traffic rules and signals, and avoid distractions such as texting or using a phone while driving. Additionally, it may be helpful to use alternative transportation methods such as carpooling or public transportation to reduce traffic congestion on the roads.
Traffic congestion can have an impact on your driving performance. When traffic is congested, it can lead to slower driving speeds, more frequent stops and starts, and increased driver frustration, all of which can negatively affect a driver's performance. When you are driving in heavy traffic, you have to constantly pay attention to the cars around you, adjust your speed, and change lanes frequently. This can be mentally and physically exhausting, which can affect your ability to focus and react quickly to changing road conditions. In addition, driving in heavy traffic can also be stressful and frustrating, which can impact your mood and mental state. This can lead to a decrease in your ability to make good decisions and respond appropriately to unexpected situations on the road. Furthermore, being stuck in traffic for long periods of time can lead to physical discomfort and fatigue, such as muscle tension, back pain, and eye strain. All of these factors combined can ultimately lead to a decrease in your overall driving performance. Therefore, it is important to take breaks and practice safe driving habits, such as maintaining a safe following distance, staying alert and focused, and avoiding distractions while driving, in order to reduce the negative effects of traffic congestion on your driving performance. Figure 4 clearly shows that many problems may be faced due to the absence of the modern Internet of Vehicles and can be handled by using a modern vehicular system: in the online survey, 86.2% of respondents answered "Yes" and 13.8% "No".
Figure 3: Traffic control system using IoV
## IV Discussions
In this section, we elaborate on the research questions. The aim of this work is to use the selected SLR to determine the best solution for dealing with road congestion issues in big cities. We have studied traditional traffic control systems, but there are many issues in those systems, and some countries do not apply any technological system for traffic control at all.
### RQ1. What Type of Problems Were Faced During the Usage of Traditional Vehicular Ad Hoc Networks?
Most countries still use traditional traffic control systems, and a traffic signal system alone is not enough to control all the related problems; over previous decades, research on traffic control technology has therefore grown. VANET (vehicular ad hoc network) is one of the most famous traffic control systems of the previous decades. VANET relies on MANET (mobile ad hoc network), which is an infrastructure-less and self-controlled communication network in which participating nodes communicate with each other without a central communicating device. VANET works in the same way as MANET, with all vehicles acting as mobile nodes. But many issues are also faced by drivers using the VANET system: all vehicles communicate only vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P), and all information is obtained through the on-board unit (OBU) in front of the drivers. As a result, VANETs have not been widely used and have not brought great commercial value for a long time [1].
### No Real Time Traffic Information
This system is not able to share real-time traffic information, which means it cannot convey the current situation at other places in the city.
### Limited Distance
This system has a limited geographical coverage, which is one of its disadvantages.
### Not Intelligent
The system is not an intelligent one, meaning that it cannot predict potential traffic congestion based on current parameters.
### Security Issues
VANETs are vulnerable to unauthorized access, where attackers gain access to the network without permission. This can lead to the interception of sensitive data and the unauthorized control of vehicles. A malicious node may attack other nodes, and other integrity-, authenticity-, and confidentiality-related issues may be found in this system. Since VANET evolved from MANET, the vulnerabilities posed by VANET are largely inherited from the MANET ad hoc architecture [51].
Figure 8 explains the detailed architecture of VANET. There are many stakeholders in this architecture that share information among different vehicles: the information may be vehicle-to-vehicle (V2V), vehicle-to-roadside-unit (RSU), or vehicle-to-pedestrian. But this architecture has many problems: there is no central control system and no IoT technology involved, so real-time and wide-range communication cannot be achieved; it supports only local-level communication without an intelligent system. A node shares its traffic information only with neighboring vehicles through the MANET.
### RQ2. Which Traffic Issues in Big Cities Can Be Solved with the Help of IoV?
There are many traffic-related issues that can be solved with the help of IoV. In this section, we elaborate on which of them can be solved by cloud-based IoV.
* IoV solves many issues that cannot be solved with VANET. If there is congestion at a specific road junction, the system must be able to inform all the vehicles heading towards the affected road so that they may follow an alternative route to reach their destination (a sketch of this notification step is given after this list).
Figure 4: Effect of traffic congestion on driving performance
Figure 8: VANET Architecture
This task can be handled completely by the IoV-based system.
* If anyone violates the speed limit, the system can inform the relevant team to control the violation.
* If a road accident occurs on a specific road, the system can inform the rescue teams.
* IoV provides an efficient time-management system for traffic control. A traditional traffic signal system works on fixed time slots for crossing the road, but the new system can manage this efficiently by detecting and sharing traffic information.
* It covers a large geographic area for the traffic control system. It supports not only node-to-node communication but also communication with everything participating in the network, as in IoT.
* It has fewer security and integrity issues thanks to good routing protocols controlled by the central communication system.
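The congestion-notification behaviour described in the first bullet can be prototyped inside a SUMO/TraCI loop, as sketched below: when an edge is detected as congested, every vehicle whose remaining route still crosses that edge is asked to re-route. The edge id and the threshold are illustrative assumptions, and a real IoV deployment would push the notification over V2I/V2V links rather than through the simulator.

```python
# Minimal sketch (illustrative, not the paper's implementation): notify and
# re-route vehicles approaching a congested edge inside a SUMO/TraCI loop.
import traci

def notify_and_reroute(congested_edge="junction_in", speed_threshold=2.0):
    occupied = traci.edge.getLastStepVehicleNumber(congested_edge) > 0
    slow = traci.edge.getLastStepMeanSpeed(congested_edge) < speed_threshold
    if not (occupied and slow):
        return 0  # the edge is flowing normally, nothing to broadcast
    rerouted = 0
    for veh in traci.vehicle.getIDList():
        route = traci.vehicle.getRoute(veh)
        idx = traci.vehicle.getRouteIndex(veh)
        # Only vehicles still heading towards the congested edge are notified.
        if congested_edge in route[idx + 1:]:
            traci.vehicle.rerouteTraveltime(veh)
            rerouted += 1
    return rerouted
```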
### RQ3. Which Technological Improvements Make IoV Better than VANET?
In this section, with the help of the related articles, we elaborate on the technological enhancements that make IoV better than the traditional VANET system. The improvements are given below.
* The IoV system is based on the Internet of Things, so it is an intelligent system. In IoT, all participating nodes work in a collaborative manner to perform tasks, and IoV is one application of IoT. The capabilities and development of cloud computing and AI technology enable vehicles to autonomously choose to access better-performing networks to ensure stable network connectivity [1]. A specific list of data-source directories is given in Table 7.
* In this system, all devices are compatible and work collectively according to the information needed for traffic control.
* SUMO (Simulation of Urban Mobility) makes this system efficient and fast. It is a tool for processing and generating traffic simulations from the raw data provided by the sensors in the vehicles and RSUs.
* IoV also uses GPS to share and obtain information from vehicular clouds, covering a vast area of the traffic network.
* The system also facilitates vehicles that lack computing, storage, and cloud resources: any vehicle with an on-board unit and this system installed may participate in and benefit from the network [10].
### RQ4. What Are the Research Types to Explore IoV for Solving Traffic Problems?
Various research types can be used to explore the use of IoV (Internet of Vehicles) for solving traffic problems. Experimental research involves conducting experiments and collecting data on the use of IoV technologies in real-world traffic scenarios; this can help to identify the effectiveness of various IoV solutions in reducing traffic congestion and improving overall traffic flow. Survey research involves collecting data from individuals and organizations through surveys to understand their perspectives on the use of IoV technologies for solving traffic problems; this can help to identify the challenges and opportunities associated with implementing IoV solutions in different contexts.
Case study research involves in-depth analysis of specific cases where IoV technologies have been used to solve traffic problems; this can help to identify best practices and lessons learned that can be applied in other contexts. Comparative research involves comparing the effectiveness of different IoV solutions in solving traffic problems; this can help to identify the most suitable solutions for specific contexts and provide guidance for decision-making.
### RQ5. What Is the Procedure to Create the Vehicular-Cloud-Based IoV Architecture, and What Are Its Challenges?
This section presents the complete procedure for designing the IoV architecture based on a vehicular cloud network and also addresses the challenges of this architecture.
### Create the Cloud
Resource allocation is the initial procedure of vehicular cloud computing. The main vehicular node is selected by the system and is used to manage the whole network. The head or leader node sends a resource allocation message to all participants of the network; this is called the RREQ message.
| Sr. No. | Data Source | Article / Link | Accessible |
| --- | --- | --- | --- |
| 1 | ACM | Systematic literature review on Internet-of-Vehicles communication security | Yes |
| 3 | IEEE Xplore | Blockchain Technology for Intelligent Transportation Systems: A Systematic Literature Review | Yes |

Table 7: List of data source directories
The participants of the network reply on the basis of the needed information and the scenario. After the RREQ message has been transferred to the whole network, the route reply messages come back to the head node, which is responsible for managing the network. The other nodes simply participate in the network in order to obtain the traffic-related information.
### Goal Assignment and Output Collection
When the cloud leader receives the resource reply messages (RREPs) from the other vehicles, the leader selects some of them as cloud members. The selection procedure is based on the cloud range and the correct operation of the resources that will run the application. Once the cloud members have been selected, the cloud leader assigns them to their respective tasks for collecting traffic-related data. Each cloud member then collects data from its environment and sends it to the cloud leader. If the processing capacity and data storage resources of the cloud leader are low, the cloud leader may assign the task to another cloud member that can process and store it. The cloud leader also maintains a table in which all information related to vehicles and resources is kept, and it updates this table according to the scenario.
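A minimal sketch of the leader-side logic described in the last two subsections is given below: the leader broadcasts an RREQ, collects RREP resource offers, admits the offers that fall within the cloud range, records them in a resource table, and assigns data-collection tasks to the members with the most spare capacity. The message fields, the units, and the greedy assignment rule are illustrative assumptions, not the exact protocol specification.

```python
# Minimal sketch (assumed message fields and units): vehicular cloud formation
# and task assignment on the leader side.
from dataclasses import dataclass, field

@dataclass
class Rrep:                    # resource reply returned by a participant
    vehicle_id: str
    distance_m: float          # distance from the cloud leader
    cpu: float                 # spare processing capacity (arbitrary units)
    storage_mb: float

@dataclass
class Vehicle:
    vehicle_id: str
    distance_m: float
    cpu: float
    storage_mb: float

    def reply_rrep(self) -> Rrep:
        # A participant answers the leader's RREQ with its spare resources.
        return Rrep(self.vehicle_id, self.distance_m, self.cpu, self.storage_mb)

@dataclass
class CloudLeader:
    range_m: float = 300.0
    resource_table: dict = field(default_factory=dict)  # vehicle_id -> Rrep
    assignments: dict = field(default_factory=dict)     # vehicle_id -> task

    def broadcast_rreq(self, vehicles):
        # Stand-in for the wireless RREQ broadcast: collect every RREP offer.
        return [v.reply_rrep() for v in vehicles]

    def form_cloud(self, rreps):
        # Admit only offers inside the cloud range and record their resources.
        for r in rreps:
            if r.distance_m <= self.range_m:
                self.resource_table[r.vehicle_id] = r
        return list(self.resource_table)

    def assign_tasks(self, tasks):
        # Greedy offload: give each task to the member with the most spare CPU.
        members = sorted(self.resource_table.values(),
                         key=lambda r: r.cpu, reverse=True)
        for task, member in zip(tasks, members):
            self.assignments[member.vehicle_id] = task
        return self.assignments
```

For example, a leader could call form_cloud(broadcast_rreq(nearby_vehicles)) and then assign_tasks with a list of sensing tasks to populate its resource table.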
### Sharing the Processed Resources Results
After processing and storing the resources, the leader vehicle is also responsible for sharing the results with all nodes of the network, following the concept of resources shared among all participating nodes. A vehicle that needs any information about the traffic sends a query, and the cloud leader returns the requested content to that node. This is the main benefit of vehicular cloud computing, because individual nodes do not possess all required resources, which are expensive. The main idea is that raw data are collected by the participating cloud nodes that are able to gather them, shared with the leader node for processing, and the resulting traffic information is then distributed among all participating vehicles in the network. Publishing research papers or articles in academic journals or other publications disseminates the results of data processing and analysis to a wider audience. Sharing processed resource results through open data platforms, such as data.gov or Kaggle, can enable others to access and analyze the data, facilitating collaboration and innovation. Sharing processed resource results through social media can increase visibility and engagement with stakeholders and the wider community. Overall, sharing the processed resource results is crucial to ensure that data-driven decisions are based on accurate, reliable, and transparent information. By using a variety of methods to communicate the results to different audiences, stakeholders can make informed decisions that have a positive impact on their organizations and communities.
### Maintenance of Vehicular Clouds
All vehicular nodes operate within the cloud range and eventually leave the network. When a cloud member wants to leave, the cloud leader searches for another node to take over its resource task. The leader again broadcasts a task-assignment message to the whole network, and the vehicular nodes that can take the task send a reply to the cloud leader. The cloud leader then allows the departing node to leave the network and updates the table with the new incoming member.
### Release the Cloud:
When the cloud leader moves out of the network range, or when the cloud resources are no longer needed, the leader releases the resources so that other vehicles can also use the vehicular cloud. For this purpose, the cloud leader sends a message to all participating vehicles and releases all cloud resources.
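The sketch below illustrates, under the same assumptions as before, how the maintenance and release steps described above could be organized: a leaving member's task is handed to a volunteering vehicle and the leader's table is updated, while releasing the cloud simply frees every entry. The class and method names are hypothetical.

```python
class VehicularCloud:
    """Toy model of the leader's bookkeeping for maintenance and release (names assumed)."""

    def __init__(self, leader_id):
        self.leader_id = leader_id
        self.table = {}                     # task -> vehicle_id currently responsible

    def handle_leave(self, vehicle_id, volunteers):
        """Maintenance: reassign every task owned by the leaving vehicle to a volunteer,
        then allow the node to leave and keep the table up to date."""
        for task, owner in self.table.items():
            if owner == vehicle_id:
                if not volunteers:
                    raise RuntimeError("no replacement found for task: " + task)
                self.table[task] = volunteers.pop(0)
        return vehicle_id + " allowed to leave the cloud"

    def release(self):
        """Release: leader out of range or cloud no longer needed, free all resources."""
        self.table.clear()
        return "cloud released, all resources freed"

cloud = VehicularCloud("leader-1")
cloud.table = {"junction-A traffic": "v1", "junction-B traffic": "v2"}
print(cloud.handle_leave("v1", volunteers=["v4"]))
print(cloud.table)
print(cloud.release())
```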
This systematic literature review focuses on the research that has been conducted on the use of IoV technology to address traffic congestion on roads. It covers IoV architecture and communication protocols, a taxonomy of traffic congestion detection and prediction techniques (shown in Figure 9), congestion avoidance and traffic routing strategies, IoV-based traffic management and control systems, and the evaluation and performance analysis of IoV-based traffic management systems.
### Comparison:
\begin{table}
\begin{tabular}{l l l} \hline \hline \multicolumn{1}{c}{S\#} & Traditional VANET & IOV based on Cloud computing \\ \hline
1 & Infrastructure-less & Infrastructure-based \\
2 & No real-time information sharing & Real-time information sharing \\
3 & Covers a smaller area & Covers a larger area \\ \hline \hline \end{tabular}
\end{table}
Table 8: Comparative analysis between VANET and IOV
Figure 9: Architecture of IOV based on vehicular clouds
By analyzing Table 8, we find several key differences between VANET and IOV, and the points listed there explain why the IOV architecture is preferable. VANET (Vehicular Ad Hoc Network) and IOV (Internet of Vehicles) are two distinct concepts, but both are closely tied to the advancement of transportation technology.
VANET is a wireless network that allows communication between vehicles and between vehicles and road infrastructure such as traffic lights, sensors, and cameras. The primary objective of VANET is to provide safety and traffic management on the road by exchanging information about accidents, traffic congestion, and weather conditions, among others. IOV, on the other hand, is a network of vehicles and other objects such as traffic lights, cameras, and sensors that are interconnected through the Internet; a comparative view of previous systems and the modern IoV-based vehicle system is given in Figure 9. The primary goal of IOV is to create an ecosystem of vehicles that can share information and provide a range of services, such as infotainment, vehicle diagnostics, and remote assistance.
## VI Future Work
### Contribution
We selected and read many articles to answer the research questions set for this SLR. Our main goal is to point out the beneficial strategies of modern vehicular control systems. IOV can be implemented in many ways, but we have selected vehicular cloud computing. The reason behind this strategy is that no extra central control system is required: a vehicle with leader capabilities creates the vehicular cloud, tasks are assigned to capable vehicles to collect traffic-related information, and the leader vehicle then processes and shares all information with the traffic across the city. In this network, IoT plays the main role, connecting the devices of all stakeholders to create an intelligent traffic control system. We formulated several questions and collected the answers from many articles; each answer points to comparatively better performance than the traditional approach. Finally, we also point out some challenges that may arise in IOV architectures, which must be addressed for better performance. The IOV (Internet of Vehicles) traffic control system has numerous benefits for drivers, pedestrians, passengers, and the city's overall infrastructure. However, it also poses some challenges that need to be addressed. One of the major issues is security threats, as well as communication failures or delayed communication due to the dynamic running protocols of moving vehicles. To overcome these challenges, further research should be conducted to improve the network. One potential solution could be to implement 5G network technology, which would enhance the system's capabilities. In addition, installing modern devices that can guide drivers to park their vehicles correctly and prevent them from wrong parking and over-speeding in sensitive areas could be helpful. By addressing these challenges, the IOV traffic control system can continue to provide significant benefits to the community.
### Challenges
This system may face many problems that should be resolved to achieve the best IOV architecture.
* There is a need to create and maintain algorithms and protocols that operate the vehicle network well. Vehicles are constantly moving and the distances between them keep changing, so the protocols must support networking in a dynamic fashion. [9]
* The system may face problems with intelligent routing and path planning.
* A huge amount of real-time data may be processed on a specific node of the network; if this processing slows down, the real-time communication system may be affected.
* All participants of the cloud computing system need protection from security threats. All components are vulnerable, including the central communication system and the vehicles, to problems such as wrong path information or errors in the SUMO simulation software running at the time, which may slow the system down.
* A denial-of-service (DoS) attack may target any vehicle, preventing that node from sharing traffic information with other vehicles or with the leader vehicle, which may slow down the network.
* Denial of service may also cause jamming of the shared communication medium, and there may be further message-level threats, such as messages being altered or refused. The security system therefore needs proper authentication and integrity control.
* Privacy issues may arise if attackers interpose themselves in vehicle-to-roadside-unit (RSU), vehicle-to-vehicle (V2V), or vehicle-to-pedestrian communication. Proper authentication is needed to control which information should be shared and which private vehicle information should remain undisclosed.
* Service delays may occur in situations where the traffic volume is very large. This challenge is mostly due to the continuously changing locations of vehicles, so the dynamic routing protocol may struggle in some situations. Location sharing relies mostly on GPS, and sharing the current location is very important for collision avoidance, so a proper dynamic protocol is essential here.
* It is important to manage the IOV information security system to deal with attackers and to close communication security holes such as traffic diversion, privacy violations, message tampering, and attempts to stop or bypass cryptography-based software.
## VII Conclusion
The systematic literature review indicates that extensive research has been conducted on the topic of the internet of vehicles (IoV) from 2014 to 2023. IoV has emerged as a potential solution to tackle traffic congestion on roads, as evidenced by multiple studies demonstrating its ability to enhance traffic flow, reduce congestion, and improve safety. By collecting data from vehicles, infrastructure, and sensors, IoV systems can provide real-time traffic information to drivers and traffic management authorities, which can be utilized to optimize traffic flow and alleviate congestion. The literature review highlights several approaches and techniques, including machine learning, data analytics, and artificial intelligence, that can be leveraged to develop IoV-based traffic congestion systems. These techniques can help to improve the accuracy of traffic predictions, which can be used to make better traffic management decisions. Overall, the literature review highlights the potential of IoV-based traffic congestion systems to address the problem of traffic congestion on roads. However, it is important to note that there are still several challenges that need to be addressed, such as data privacy, cybersecurity, and interoperability. Therefore, future research should focus on addressing these challenges to ensure the effective implementation of IoV-based traffic congestion systems. Our work conducted an SLR to identify the best solution for dealing with road congestion issues in big cities. We chose this technique to explore the problems of the traditional VANET system. The outcomes of the SLR are effective at exposing these issues, and we have also elaborated a new architecture by which most of the issues are solved. The system is based on IoV with vehicular cloud computing to control traffic-related problems. Through this SLR we found that this system solves many traffic issues, and we recommend applying this technology in all big cities. A few challenges remain in the new architecture, identified from the literature, and these need to be handled well. Overall, however, this system is very effective at controlling traffic-related issues. |
2302.12266 | SHAPER: Can You Hear the Shape of a Jet? | The identification of interesting substructures within jets is an important
tool for searching for new physics and probing the Standard Model at colliders.
Many of these substructure tools have previously been shown to take the form of
optimal transport problems, in particular the Energy Mover's Distance (EMD). In
this work, we show that the EMD is in fact the natural structure for comparing
collider events, which accounts for its recent success in understanding event
and jet substructure. We then present a Shape Hunting Algorithm using
Parameterized Energy Reconstruction (SHAPER), which is a general framework for
defining and computing shape-based observables. SHAPER generalizes N-jettiness
from point clusters to any extended, parametrizable shape. This is accomplished
by efficiently minimizing the EMD between events and parameterized manifolds of
energy flows representing idealized shapes, implemented using the
dual-potential Sinkhorn approximation of the Wasserstein metric. We show how
the geometric language of observables as manifolds can be used to define novel
observables with built-in infrared-and-collinear safety. We demonstrate the
efficacy of the SHAPER framework by performing empirical jet substructure
studies using several examples of new shape-based observables. | Demba Ba, Akshunna S. Dogra, Rikab Gambhir, Abiy Tasissa, Jesse Thaler | 2023-02-23T19:00:00Z | http://arxiv.org/abs/2302.12266v4 | # SHAPER: Can You Hear the Shape of a Jet?
###### Abstract
The identification of interesting substructures within jets is an important tool for searching for new physics and probing the Standard Model at colliders. Many of these substructure tools have previously been shown to take the form of optimal transport problems, in particular the Energy Mover's Distance (EMD). In this work, we show that the EMD is in fact _the_ natural structure for comparing collider events, which accounts for its recent success in understanding event and jet substructure. We then present a Shape Hunting Algorithm using Parameterized Energy Reconstruction (Shaper), which is a general framework for defining and computing shape-based observables. Shaper generalizes \(N\)-jettiness from point clusters to any extended, parametrizable shape. This is accomplished by efficiently minimizing the EMD between events and parameterized manifolds of energy flows representing idealized shapes, implemented using the dual-potential Sinkhorn approximation of the Wasserstein metric. We show how the geometric language of observables as manifolds can be used to define novel observables with built-in infrared-and-collinear safety. We demonstrate the efficacy of the Shaper framework by performing empirical jet substructure studies using several examples of new shape-based observables.
###### Contents
* 2 The Unreasonable Effectiveness of Wasserstein in Collider Physics
* 2.1 Event Shapes, Jet Shapes, and Geometric Shapes
* 2.2 Measure-ing the Energy Flow
* 2.3 Geometrizing IRC Safety
* 2.4 The Importance of Being Faithful
* 3 Hearing Shapes
* 3.1 Shape Observables and Shape Parameters
* 3.2 Composing Shapes
* 3.3 Examples of Novel Shapes
* 3.3.1 \(N\)-Ringiness
* 3.3.2 \(N\)-Diskiness
* 3.3.3 \(N\)-Ellipsiness
* 3.3.4... Plus Pointiness
* 3.3.5... Plus Pileup
* 3.3.6... And More!
* 4 The Shaper Framework
* 4.1 The Shaper Algorithm
* 4.2 The Dual Formulation of Wasserstein
* 4.3 Reviewing the Sinkhorn Divergence
* 4.4 Implementation Details
* 5 Empirical Studies with Jets
* 5.1 Dataset
* 5.2 Benchmarking Sinkhorn: Jet Isotropy
* 5.3 Benchmarking Optimization: \(N\)-Subjettiness
* 5.4 Hearing Gradients
* 5.5 Hearing Jets Ring (and Disk, and Ellipse)
* 5.6 Shapiness and Shape Parameters
* 5.7 Pileup-Mitigating Shapes
* 6 Conclusions and Outlook
* A Energy Flows as Measures
* B Constructing the Wasserstein Metric
* B.1 Shaping Up the Loss
* B.2 Why Wasserstein?
## 1 Introduction
Collisions at the Large Hadron Collider (LHC) produce events with hundreds of particles in the final state, which must be carefully analyzed to extract information about the underlying physics. In order to make sense of these high-dimensional data, increasingly sophisticated observables are required that are well understood at both the theoretical and experimental levels. Event shape [1; 2; 3; 4] and jet shape [5; 6] observables have played an important role in refining our understanding of the structure of high energy collisions, by relating hadronic final states to perturbatively accessible partonic degrees of freedom. Many shape observables, such as event thrust [1; 7; 8] and jet angularities [9; 10], have been computed to next-to-next-to-next-to leading log (N\({}^{3}\)LL) accuracy [11] and next-to-next-to leading log accuracy N\({}^{2}\)LL [12] in \(e^{+}e^{-}\) collisions, respectively. Shape observables have been extensively measured and used to search for new physics signatures [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24].
It was shown in Ref. [25] that many of these event shapes and jet shapes can be cast as optimal transport problems, using the Energy Mover's Distance (EMD). The EMD was introduced in Ref. [26] in order to provide a quantitative measure of the "distance" between two collider events, \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\). The EMD is based off the "earth mover's distance" from computer vision [27; 28; 29; 30; 31], which itself is a special case of the Wasserstein metric [32; 33]. The EMD has since seen many uses in collider physics applications, such as in building a metrized latent space of events [34; 35; 26; 36] and in event/jet tagging and classification [37; 38; 39]. The EMD has also been used to define a novel shape observable, the event isotropy [40; 41; 42], which probes how "uniform" an event \(\mathcal{E}\) looks by comparing it to the idealized isotropic event \(\mathcal{U}\).
In this paper, we seek to explain the effectiveness of the Wasserstein metric, by showing that it is the unique metric on collider events that both is continuous and respects the detector geometry _faithfully_. As shown in Ref. [26], continuity encodes the collider physics concept of infrared-and-collinear (IRC) safety. Geometric faithfulness is, to our knowledge, a new concept for the collider community, which allows statements to be made about spatial distributions of energy within events. After advocating for Wasserstein geometry, we then generalize the notion of event shapes and jet shapes, motivated by the EMD. Our framework - called Shaper - not only allows observables to be defined that probe _any_ geometric substructure of events and jets in an IRC-safe way, analogous to the event isotropy probing the uniform structure of events, but also allows those observables to be numerically estimated.
In particular, we:
1. **Motivate the Wasserstein Metric:** In Sec. 2, we show that the Wasserstein metric is _the_ natural structure for building shape-based observables for collider physics, justifying its success in Refs. [25; 26; 34; 35; 36; 37; 38; 39; 40; 41; 42] and beyond. By adopting a measure-theoretic language for energy flows, we show that the EMD is _not_ an ad-hoc structure to impose on the space of collider events, but rather the only structure that faithfully respects the detector geometry and continuity on the space of events. Further details of this argument are provided in Apps. A and B.
2. **Use Optimal Transport to Define Shapes:** In Sec. 3, we build off the work in Ref. [25], where it was shown that several well-known shape observables can be described in the form: \[\mathcal{O}_{\mathcal{M}}(\mathcal{E}) \equiv\min_{\mathcal{E}_{\theta}\in\mathcal{M}}\mathrm{EMD}^{( \beta,R)}(\mathcal{E},\mathcal{E}_{\theta}),\] (1) \[\theta_{\mathcal{M}}(\mathcal{E}) \equiv\operatorname*{argmin}_{\mathcal{E}_{\theta}\in\mathcal{M}} \mathrm{EMD}^{(\beta,R)}(\mathcal{E},\mathcal{E}_{\theta}),\] (2) where \(\mathcal{M}\) is a parameterized manifold of energy flows that define the shape, \(R\) sets a length scale for the shape, and \(\beta\) is a distance weighting exponent. Importantly, both the observable \(\mathcal{O}_{\mathcal{M}}\) and the optimal shape parameters \(\theta_{\mathcal{M}}\) can be separately extracted from the EMD. We extend this construction to define many new shape observables, by greatly expanding the class of manifolds \(\mathcal{M}\) considered, which can be constructed as explicit geometric shapes. We develop a prescription for defining new custom shape observables by parameterizing probability distributions, and even prescriptions for composing old shape observables together to form new ones.
3. **Introduce SHAPER:** In Sec. 4, we introduce Shaper, or **S**hape **H**unting **A**lgorithm using **P**arameterized **E**nergy **R**econstruction. This is a computational framework for defining shapes and evaluating Eqs. (1) and (2) on data. Shaper leverages the Sinkhorn approximation of the Wasserstein metric [43; 44; 45; 46], which enables fast numerical calculation and even gradient estimation with respect to entire events, enabling easy and efficient optimization. See NEEMo [47] for an alternative gradient-based Wasserstein estimator.
4. **Evaluate Empirical Examples:** To demonstrate the potential of Shaper, in Sec. 5 we define and evaluate several observables on a top and QCD jet benchmark dataset [48; 49]. These shape observables can be used to extract dynamic jet radii and jet energies, and even non-radially-symmetric structures, such as jet eccentricity. In particular, we show empirical examples of our new observables for jet substructure analysis and automated pileup removal.
Generalized shape observables defined using Shaper can be used to probe interesting collider signatures. For example, Shaper can be used to build specialized jet algorithms
with dynamic radii1 and even dynamic pileup mitigation. This can be viewed as a generalization of \(k\)-means type clustering algorithms, such as XCone[54]: rather than finding \(k\) points that best approximate an event, shape observables can be used to find the \(k\) geometric structures that best describe the event. This means that it is possible to design specialized jet algorithms that select for e.g. elliptical or non-isotropic jets, or that even probe the soft and collinear structure of jets separately. This can prove especially useful, for example, in boosted top or heavy vector boson decays that produce "fat jets" with multi-pronged substructure, which may not be described well by circular or isotropic patterns. We comment on further phenomenological studies in Sec. 6.
Footnote 1: See e.g. [50; 51; 52; 53] for other examples of dynamic jet radii.
## 2 The Unreasonable Effectiveness of Wasserstein in Collider Physics
In this section, we aim to answer the question, "If I had never heard of event or jet shapes before, how could I have come up with them myself?" Our discussion builds off the work of Refs. [25; 26], wherein the EMD was introduced as a new language for event and jet shape observables. Here, we show that the EMD is _the_ natural language for event and jet shapes. We do this by showing that the EMD is the unique metric between a geometric shape \(\mathcal{E}^{\prime}\) and an event \(\mathcal{E}\) that encodes IRC safety (through its topological features) and faithfully respects the geometry of the detector. This section is largely self-contained, and readers primarily interested in the construction of new shape observables can skip to Sec. 3.
We begin with a review of event shapes and jet shapes, noting that they all share a general form - they can all be written as a minimization of a universal loss function between the event and a parameterized set of "idealized" events, which can be interpreted as geometric shapes. We show that if the universal loss function is both IRC-safe and reduces to the ground metric on the detector geometry (that is, it _faithfully_ lifts the ground metric, without distorting extended objects), then the universal loss function must indeed be the Wasserstein metric. More details of this construction can be found in Apps. A and B.
### Event Shapes, Jet Shapes, and Geometric Shapes
We begin with a review of event shapes and jet shapes. _Event shapes_ are observables that probe the geometric distribution of energy in events. Many different event shapes, such as thrust [1; 7; 8], spherocity [55], broadening [56], and \(N\)-jettiness [54; 57], have been defined and extensively studied over the years in the context of \(e^{+}e^{-}\) collisions [3], with analogues studied in the context of \(pp\) collisions [58; 59]. We may also include jet algorithms, such as XCone[54] and sequential recombination algorithms (\(k_{T}\)[60; 61], Cambridge-Aachen [62; 63], and anti-\(k_{T}\)[64]), in this list. Similarly, _jet shapes_ probe the geometric distribution of energy within individual jets rather than the global event. Examples of commonly studied jet shapes
include the _integrated_ jet shape2[65, 66], angularities [9, 10], and \(N\)-subjettiness [68]. While there are significant theoretical complications when considering the difference between event and jet shapes, such as the introduction of non-global logarithms [69, 70], for our purposes, we will treat event shapes and jet shapes interchangeably.
Footnote 2: A note about nomenclature: The original “jet shape” refers to the observable \(\Psi(r/R)\), the radial jet energy fraction [65, 66]. However, the word has been hijacked by Ref. [67] to refer to observables consisting of weighted sums of particle momenta. We later justify the name “shapes” by showing how this corresponds to fitting actual geometric shapes to event data.
Following the definition in Ref. [67], an event shape (jet shape) is an IRC-safe weighted sum over the four-momenta of the particles in an event (jet). These observables probe the geometric distribution of energy in an event (jet), and typically depend on the _detector ground metric_, \(d(x,y)\), which defines distances between points on the detector. Expressions for common event and jet shapes can be found in Tables 1 and 2, respectively. We note that all of the above-listed event shapes can be written in the generic form:
\[\mathcal{O}(p_{1},...,p_{M})=\min_{\theta\in\mathcal{M}}F\left(\sum_{i=1}^{M}E _{i}\,\phi_{\theta}(x_{i})\right), \tag{1}\]
where \(F\) and \(\phi_{\theta}\) are generic functions, and the observable may involve a minimization (or maximization) over auxiliary parameters \(\theta\) living in some constrained manifold \(\mathcal{M}\). The choice of \(F\), \(\phi_{\theta}\), and \(\mathcal{M}\) define the event shape. Not every shape requires an optimization - for instance, while recoil-free jet angularities require an optimization over possible jet axes, it is also possible to define angularities with respect to a fixed axis [25]. In this case, the minimization may be written over the trivial manifold isomorphic to \(\mathcal{M}=\{0\}\). It is also common to divide by the total energy scale \(E_{\rm tot}\), or some other hard scale, as this reduces the sensitivity of the event (jet) shape on experimental jet energy scale uncertainties [59]. Unless otherwise stated, we will normalize our events such that \(E_{\rm tot}=1\) without loss of generality.
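As a concrete, purely illustrative instance of the generic form in Eq. (1), the sketch below evaluates \(N\)-jettiness by taking \(F\) to be the identity and \(\phi_{\theta}(x)=\min(R^{\beta},d(x,\hat{n}_{1})^{\beta},\ldots,d(x,\hat{n}_{N})^{\beta})\), minimizing over candidate axes numerically. This is not the Shaper implementation described in Sec. 4; it is a minimal NumPy/SciPy sketch, and the local optimizer, seed, and toy event are assumptions made here for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def njettiness(axes_flat, E, x, N, beta=2.0, R=0.8):
    # Generic form of Eq. (1): F = identity,
    # phi_theta(x) = min(R^beta, d(x, n_1)^beta, ..., d(x, n_N)^beta)
    axes = axes_flat.reshape(N, 2)
    d = np.linalg.norm(x[:, None, :] - axes[None, :, :], axis=-1)   # (particles, axes)
    phi = np.minimum(R**beta, np.min(d, axis=1)**beta)
    return np.sum(E * phi)

# toy event: two clusters of particles in the (rapidity, azimuth) plane, normalized energies
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal([0.4, 0.0], 0.1, size=(25, 2)),
                    rng.normal([-0.4, 0.1], 0.1, size=(25, 2))])
E = rng.exponential(size=50)
E /= E.sum()

N = 2
seed_axes = rng.normal(scale=0.3, size=(N, 2)).ravel()   # crude seed; real studies would use several restarts
result = minimize(njettiness, seed_axes, args=(E, x, N), method="Nelder-Mead")
print("2-jettiness (local minimum):", result.fun)
print("axes:", result.x.reshape(N, 2))
```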
We propose to write Eq. (1) in a universal form, such that the event shape is instead specified solely by the choice of \(\mathcal{M}\):
\[\mathcal{O}_{\mathcal{M}}(\mathcal{E})=\min_{\theta\in\mathcal{M}}\left[ \mathcal{L}(\mathcal{E};\theta)\right], \tag{2}\]
where \(\mathcal{L}\) is a universal loss function. All geometrical information about the event shape is then contained in the construction of \(\mathcal{M}\). To emphasize this, we will adopt the notation \(\mathcal{O}_{\mathcal{M}}\) for these observables, to remind us that the observable is defined through the choice of \(\mathcal{M}\).
The task is now to determine what universal \(\mathcal{L}\) reproduces all event and jet shape observables - we will argue in Sec. 2.4 that \(\mathcal{L}\) must be the Wasserstein metric. To begin, we may rewrite Eq. (2) in a more suggestive form. We note that for all of the event and jet shapes in Tables 1 and 2, there is always some optimal \(\mathcal{E}^{*}\), not necessarily unique, such that \(\mathcal{O}_{\mathcal{M}}(\mathcal{E}^{*})=0\). For example, the \(\mathcal{E}^{*}\) for thrust is a perfectly back-to-back event, the \(\mathcal{E}^{*}\) for \(N\)-subjettiness is an event with exactly \(N\) particles, and so on. Thus, it is convenient to
rewrite Eq. (2), such that the minimization is over a space of events \(\mathcal{E}_{\theta}\), and that \(\mathcal{L}=0\) is achieved when \(\mathcal{E}=\mathcal{E}_{\theta}\), where \(\theta\) parameterizes the space of all \(\mathcal{E}^{*}\)'s:
\[\mathcal{O}_{\mathcal{M}}(\mathcal{E})=\min_{\mathcal{E}_{\theta}\in\mathcal{M} }\left[\mathcal{L}(\mathcal{E};\mathcal{E}_{\theta})\right]. \tag{3}\]
Eq. (3) provides a nice geometric intuition for event and jet shapes. We can interpret \(\mathcal{O}_{\mathcal{M}}(\mathcal{E})\) as the answer to "How close, in event space, is my event to looking like an optimal \(\mathcal{E}^{*}\)?". Importantly, the \(\mathcal{E}^{*}\)'s do not have to be physically realized events - they can be _any_ radiation pattern measured on the detector wall, even continuous ones. For example, we can take \(\mathcal{E}^{*}\) to be events with a radiation pattern that look like the interior of a hexagon - then the event shape \(\mathcal{O}_{\mathcal{M}}(\mathcal{E})\) is a measure of how far \(\mathcal{E}\) is, in "event space", from an idealized hexagonal event. By taking our idealized events \(\mathcal{E}^{*}\) to have radiation patterns resembling literal geometric shapes, living in the parameterized manifold \(\mathcal{M}\), the observable \(\mathcal{O}_{\mathcal{M}}(\mathcal{E})\) can be used as a measure of how much \(\mathcal{E}\) "looks like" the shape of interest.
\begin{table}
\begin{tabular}{c c l} \hline \hline Event Shape & Description & Expression \\ \hline \hline Thrust [1; 7; 8] & How Pencil-Like? & \(t(\mathcal{E})=2\min_{\hat{n}}\left(\sum_{i}E_{i}(1-|\hat{n}_{i}\cdot\hat{n}|)\right)\) \\ Spherocity [55] & How Transverse-Planar? & \(s(\mathcal{E})=\min_{\hat{n}}\left(\sum_{i}E_{i}|\hat{n}_{i}\times\hat{n}|\right)^{2}\) \\ Broadening [56] & How 2-Pronged? & \(b(\mathcal{E})=\min_{\hat{n}_{1},\hat{n}_{2}}\left(\sum_{i}E_{i}\min(d_{i1},d_{i2})\right)\) \\ \(N\)-jettiness [54; 57] & How \(N\)-particle like? & \(\mathcal{T}_{N}^{(\beta)}(\mathcal{E})=\min_{\hat{n}_{1},...,\hat{n}_{N}}\left(\sum_{i}E_{i}\min(R^{\beta},d_{i1}^{\beta},...,d_{iN}^{\beta})\right)\) \\ Isotropy [40] & How Uniform? & \(\mathcal{I}^{(\beta)}(\mathcal{E})=\min_{\mathcal{U}\in\mathcal{M}}\left(\text{EMD}^{(\beta,R)}(\mathcal{E},\mathcal{U})\right)\) \\ \hline XCone[54] & Which \(N\)-particles? & \(\hat{n}_{i}(\mathcal{E})=\text{argmin}_{\hat{n}_{1},...,\hat{n}_{N}}\left(\sum_{i}E_{i}\min(R^{\beta},d_{i1}^{\beta},...,d_{iN}^{\beta})\right)\) \\ S. Recomb. [60; 61; 62; 63; 64] & Clustering History? & \(d_{ij}(\mathcal{E})=\min(E_{i}^{2p},E_{j}^{2p})\,\frac{d_{ij}^{2}}{R^{2}}\); \(d_{iB}(\mathcal{E})=E_{i}^{2p}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Common event shapes and jet algorithms studied in collider physics. Note that most of these observables take the general form of Eq. (1). Here, we do not necessarily normalize energies.
\begin{table}
\begin{tabular}{c c l} \hline \hline Jet Shape & Description & Expression \\ \hline \hline Angularities [9; 10] & Angular Moments? & \(\lambda_{\beta}(\mathcal{J})=\sum_{i}E_{i}d_{iJ}^{\beta}\) \\ &... Recoil Free? & \(\lambda_{\beta}(\mathcal{J})=\min_{\hat{n}}\left(\sum_{i}E_{i}d_{in}^{\beta}\right)\) \\ \(N\)-subjettiness [68] & How \(N\)-Particle Like? & \(\mathcal{T}_{N}^{(\beta)}(\mathcal{J})=\min_{\hat{n}_{1},...,\hat{n}_{N}} \left(\sum_{i}E_{i}\min(d_{i1}^{\beta},...,d_{iN}^{\beta})\right)\) \\ Int. Shape [65; 66] & Radial Energy CDF? & \(\psi_{\mathcal{J}}(r/R)=\left(\sum_{i}E_{i}\Theta(r-d_{iJ})\right)/\left(\sum_ {i}E_{i}\Theta(R-d_{iJ})\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Common jet shapes studied in collider physics. Note that most of these observables take the general form of Eq. (1). The notation \(d_{iJ}\) refers to the distance from particle \(i\) to the jet axis. Here, we do not necessarily normalize energies.
### Measure-ing the Energy Flow
In order to make progress in determining the universal loss function in Eq. (3), we must first understand the IRC-safe information available for us to use within the events \(\mathcal{E}\) and \(\mathcal{E}_{\theta}\). This information is represented by the _energy flow_ of the event. We first briefly review energy flows, before proposing a new definition of the energy flow as a measure theoretic quantity, which enables a useful language for discussing "idealized" events such as those discussed in Sec. 2.1.
The _energy flow_\(\mathcal{E}\) of an event is the distribution of energy within the event. At a very high level, in a collider experiment, one has a detector with geometry \(\mathcal{X}\) infinitely far away from the collision site - for instance, in _pp_ collisions such as those at the LHC, one uses a cylindrical detector \(\mathcal{X}=[y_{\rm min},y_{\rm max}]\times S^{1}\), where \(y\in[y_{\rm min},y_{\rm max}]\) is the rapidity and \(\phi\in S^{1}\) is the azimuthal angle. After a collision, particles hit the detector at a site with coordinate \(x_{i}\in\mathcal{X}\), where the energy \(E_{i}\) is recorded by a calorimeter. The energy flow \(\mathcal{E}\) for an event with \(M\) particles of energies \(E_{i}\) measured at locations \(x_{i}\) is given by:
\[\mathcal{E}(x)=\sum_{i=1}^{M}E_{i}\,\delta(x-x_{i}). \tag{4}\]
The energy flow quantifies the total amount of energy measured at position \(x\), which can be thought of as an idealized calorimeter cell. Assuming that the particles are massless, this is the complete _accessible_3 kinematic information about the event, which therefore allows us to consider an event and its energy flow interchangeably. In the context of hadron colliders, the transverse momentum \(p_{T}\) is often used in place of the energy \(E\). In this paper, however, we focus on energies to save on notational complexity, as the story is relatively unchanged when switching to \(p_{T}\).
Footnote 3: Here, we take _accessible_ to mean the calorimeter information after infinite time has passed, preserving no timing information. We implicitly assume that the calorimeter’s detector response is linear, so that two photons entering the same calorimeter cell cannot be distinguished from a single photon with their summed energy, though even if the response is nonlinear, one cannot distinguish how many photons entered the calorimeter cell from the total energy alone.
The energy flow operator is well-understood theoretically and in some cases, can even be computed analytically [71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83]. In terms of field-theoretic quantities, the energy flow is given as:
\[\mathcal{E}(x)=\lim_{r\to\infty}\int_{-\infty}^{\infty}dt\,n_{i}\,T^{0i}(t,rn_ {i}), \tag{5}\]
where \(n_{i}\) is the unit 3-vector corresponding to the detector coordinate \(x\). We assume for this work that the spectrum of energies is non-negative - that is, for all \(x\), \(\mathcal{E}(x)\geq 0\).
We now propose a natural generalization of the energy flow that captures its salient properties and is key to enabling our geometric analysis:
**Definition 1**.: _The energy flow \(\mathcal{E}\) of an event in a detector geometry \(\mathcal{X}\) is a (positive) **measure** over subsets \(X\subseteq\mathcal{X}\), such that \(\mathcal{E}(\mathcal{X})=E_{\rm tot}\), the total energy of the event._
In this new language, the energy flow \(\mathcal{E}(X)\) is the total energy measured in _any_ region \(X\subseteq\mathcal{X}\) of the detector, rather than just probing a localized point \(x\in\mathcal{X}\). The region \(X\) can be an extended set and does not need to be connected. Fig. 1 illustrates an example of this on a cylindrical collider. This generalized notion of energy flow \(\mathcal{E}(X)\) reduces to the usual energy flow \(\mathcal{E}(x)\), which we now refer to as the _energy flow density_, and can be written as:
\[\mathcal{E}(X)=\int_{X}dx\,\mathcal{E}(x). \tag{6}\]
A particle measured at \(x_{i}\) will contribute energy to \(\mathcal{E}(X)\) only if \(x_{i}\in X\), which can be seen by carrying out the integration over the \(\delta\)-functions in Eq. (4).4 Unlike Eq. (4), however, we do not restrict energy flows to just a finite sum of localized \(\delta\)-functions - they can be continuous, extended deposits of energy! In Sec. 3, we will see energy flows with continuous energy distributions are key to defining generalized shape observables. We will refer to energy flows whose densities can be represented by a finite sum of weighted \(\delta\)-functions (as in, for example, Eq. (4)) as _atomic measures_ or _atomic flows_. We will often write atomic measures as \(\mathcal{E}\sim\sum_{i}E_{i}\,\delta_{x_{i}}\) for notational simplicity.
Footnote 4: We will assume that energy flows can always be written as the integral of an associated energy flow density. Note that the energy flow density depends on the choice of coordinates \(x\) used on \(\mathcal{X}\).
Under Def. 1, energy flows inherit a very rich and natural mathematical structure. The most important operation for our purposes is the integral of a function \(\phi:\mathcal{X}\to\mathbb{R}\) against an energy flow \(\mathcal{E}\), which we denote \(\langle\mathcal{E},\phi\rangle\), defined as:
\[\langle\mathcal{E},\phi\rangle \equiv\int_{\mathcal{X}}dx\,\mathcal{E}(x)\,\phi(x) \tag{7}\] \[=\sum_{i}E_{i}\,\phi(x_{i})\text{ for atomic flows.} \tag{8}\]
Figure 1: An illustration of the energy flow \(\mathcal{E}(X)\), which is the total amount of radiation captured inside a subset \(X\) (in purple) of the total detector geometry \(\mathcal{X}\). The red dots represent particles that hit the detector wall, with their size proportional to their energy.
This operation can be thought of as the energy-weighted expectation value of the random variable \(\phi\) under the distribution \(\mathcal{E}\). A brief review of this, and other salient measure-theoretic concepts and definitions we call upon in this paper, is presented in App. A.
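For atomic flows, Def. 1 and the pairing \(\langle\mathcal{E},\phi\rangle\) reduce to simple weighted sums, as in the minimal sketch below; the particular region, test function, and toy event are assumptions made purely for illustration.

```python
import numpy as np

# atomic energy flow: energies E_i and detector positions x_i (normalized so E_tot = 1)
E = np.array([0.5, 0.3, 0.2])
x = np.array([[0.1, 0.2], [1.0, -0.4], [-0.8, 1.5]])

def energy_in_region(E, x, indicator):
    """E(X): total energy deposited in a region X, specified by an indicator function."""
    inside = np.array([indicator(xi) for xi in x])
    return E[inside].sum()

def pair(E, x, phi):
    """<E, phi>: energy-weighted expectation of a function phi on the detector, Eq. (8)."""
    return np.sum(E * np.array([phi(xi) for xi in x]))

# X = disk of radius 1 around the origin: only the first particle lies inside
print(energy_in_region(E, x, lambda p: np.linalg.norm(p) <= 1.0))   # -> 0.5
# an energy-weighted radial moment as an example of <E, phi>
print(pair(E, x, lambda p: np.linalg.norm(p)))
```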
### Geometrizing IRC Safety
Infrared and colinear (IRC) safety is an incredibly powerful constraint on the form of observables - it ensures not only that an observable is well-defined in perturbation theory, but also that the observable is robust to detector effects. Using the language developed in Sec. 2.2, IRC safety becomes a topological statement on the space of energy flows, which we may use to place constraints on the potential form of the universal loss function \(\mathcal{L}\) of Eq. (3).
An observable \(\mathcal{O}\) is IRC safe if it satisfies:5
Footnote 5: There are several different statements of IRC-safety with different limit structures, each with different pathologies. A brief discussion of this can be found in Sec. 2.1 of Ref. [25].
* **Infrared safety**: For any atomic event \(\mathcal{E}\), adding or removing an \(\epsilon\)-soft emission to \(\mathcal{E}\) leaves \(\mathcal{O}\) unchanged as \(\epsilon\to 0\).
* **Collinear safety**: For any atomic event \(\mathcal{E}\), splitting any particle into two particles at the same location with the same total energy leaves \(\mathcal{O}\) unchanged. Moreover, translating either particle by an \(\epsilon\)-small displacement leaves \(\mathcal{O}\) unchanged as \(\epsilon\to 0\).
Essentially, IRC safety means that observables should not change significantly if we change \(\mathcal{E}\) by slightly adjusting particle energies and positions. As with energy flows, we propose a generalization of IRC safety that captures all its salient features:
**Definition 2**.: _An observable \(\mathcal{O}\) is IRC safe if it is continuous with respect to the weak* topology on energy flows._
A function \(f\) on energy flows is continuous with respect to the weak* topology if, for any sequence of energy flows \(\mathcal{E}_{n}\) that converges to \(\mathcal{E}\), the sequence \(f(\mathcal{E}_{n})\) converges to \(f(\mathcal{E})\) (see App. A for more details). Note that this is actually a slightly _weaker_ constraint than the one considered in Ref. [25], which defines IRC safety through the metric topology induced by the EMD - the definition here does not require a metric on the space of events, or even a metric on the detector space, only a notion of continuity. In fact, there is a large class of metrics one can place on the space of events to metrize the weak* topology, not just the EMD.
An interesting consequence of this definition is that if an observable \(\mathcal{O}\) is IRC safe, then \(\mathcal{O}(\mathcal{E})\) for _any_ energy flow \(\mathcal{E}\) can be arbitrarily well-approximated by atomic energy flows. This implies, for example, that a continuous circle can be arbitrarily well approximated by a finite number of points arranged in a ring - this makes it possible not only to encode continuous distributions numerically, but also to make broad statements about the behavior of IRC-safe observables by considering their action only on simple atomic energy flows.
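A minimal sketch of this atomic-approximation idea is given below: a continuous uniform ring is represented by \(n\) equally spaced, equally weighted points, and the pairing \(\langle\mathcal{E}_{n},\phi\rangle\) against a smooth test function stabilizes as \(n\) grows, which is exactly the weak* convergence relevant for IRC-safe observables. The specific test function is an assumption chosen for illustration.

```python
import numpy as np

def sample_ring(center, radius, n_points):
    """Atomic approximation of a uniform ring: n_points equal-weight particles,
    equally spaced in angle."""
    angles = 2 * np.pi * np.arange(n_points) / n_points
    positions = center + radius * np.stack([np.cos(angles), np.sin(angles)], axis=-1)
    weights = np.full(n_points, 1.0 / n_points)
    return weights, positions

# pairing against a smooth (Lipschitz) test function stabilizes as n grows,
# illustrating weak* convergence of the atomic flows to the continuous ring
phi = lambda p: np.exp(-np.linalg.norm(p - np.array([0.7, 0.0])))
for n in (4, 16, 64, 256):
    w, pts = sample_ring(np.array([0.0, 0.0]), 1.0, n)
    print(n, np.sum(w * np.array([phi(p) for p in pts])))
```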
In order to be IRC safe, our universal loss function \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})\) must be continuous in both of its arguments. This is very restrictive, and immediately implies that \(\mathcal{L}\) cannot have any
terms that are discontinuous in either energy or distance, e.g. terms like \(E^{-1}\) or \(d(x,y)^{-1}\), or any term of the form \(\left\langle\mathcal{E},\phi\right\rangle\) for noncontinuous \(\phi\). Recalling the discussion in Sec. 2.1 that \(\mathcal{L}\) quantifies how close in the space of energy flows \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) are, it is convenient (though not strictly necessary) to use \(\mathcal{L}\) to _metrize_ the weak* topology - that is, if an observable \(\mathcal{O}\) is continuous with respect to the metric topology induced by \(\mathcal{L}\), then it is also continuous with respect to the weak* topology, and therefore is IRC safe. This allows the same universal loss function \(\mathcal{L}\) to be used both to define shape observables and to define IRC safety.6 This is convenient, since it captures the very intuitive notion that if two events geometrically look similar (that is, \(\mathcal{L}\) is small), then IRC-safe observables evaluated on them should also be the same.
Footnote 6: This is not required however – there are many different choices of \(\mathcal{L}\) that may be used to give the same definition of IRC safety, e.g. maximum mean discrepancies, which does not necessarily have to be the same function \(\mathcal{L}\) whose minimum defines shapes as in Eq. (3)
### The Importance of Being Faithful
In order to encode geometric information about energy distributions, the universal loss function \(\mathcal{L}\) of Eq. (3) must explicitly depend on the detector ground metric, \(d(x,y)\). While there are many metrics on the space of measures that encode geometric information while also being IRC-safe (as defined in Sec. 2.3), a natural choice is the family of Wasserstein metrics, which we denote \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})=\mathrm{EMD}^{(\beta,R)}( \mathcal{E},\mathcal{E}^{\prime})\) (for _Earth-Mover's_ or _Energy-Movers Distance_, which we will use synonymously). We show in this section that unlike other potential candidates for \(\mathcal{L}\), the Wasserstein metric will never warp distances between shapes - that is, the Wasserstein metric lifts the ground metric of the detector _faithfully_. A constructive proof of this can be found in App. B.
The EMD between two measures \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) is given by:
\[\mathrm{EMD}^{(\beta,R)}(\mathcal{E},\mathcal{E}^{\prime})=\min_ {\pi\in\mathcal{M}(\mathcal{X}\times\mathcal{X})}\left[\frac{1}{\beta R^{ \beta}}\left\langle\pi,d(x,y)^{\beta}\right\rangle\right]+|\Delta E_{\mathrm{ tot}}|,\] \[\pi(\mathcal{X},Y)\leq\mathcal{E}^{\prime}(Y),\quad\pi(X, \mathcal{X})\leq\mathcal{E}(X),\quad\pi(\mathcal{X},\mathcal{X})=\min(E_{ \mathrm{tot}},E^{\prime}_{\mathrm{tot}}), \tag{9}\]
where \(d(x,y)\) is the ground metric between points \(x\) and \(y\) on \(\mathcal{X}\), and \(\mathcal{M}(\mathcal{X}\times\mathcal{X})\) is the space of all positive measures on \(\mathcal{X}\times\mathcal{X}\). The parameter \(R>0\) sets a distance scale for the EMD, and sets the relative scale for the two terms in Eq. (9). The parameter \(\beta\geq 1\) sets the distance norm.7 Note that our definition of the EMD differs from Refs. [25; 26] by a factor of \(\beta\), which we do to match the conventions of the geomloss[46] package. The additional energy difference term, \(|\Delta E_{\mathrm{tot}}|\), contributes whenever the two energy flows do not have the same total energy.
Footnote 7: In order to satisfy the triangle inequality and be a true metric, the first term in the EMD must be raised to the \(1/\beta\) power, and either \(2R\) should exceed the largest value of \(d(x,y)\)[25; 26] or \(\Delta E_{\mathrm{tot}}\) must be guaranteed to be zero. In this paper, we will not need the triangle inequality, so we will not do this.
The Wasserstein metric is special in that it _faithfully_ lifts the ground metric, \(d\). To lift the ground metric means that the Wasserstein metric reduces to \(d(x,y)^{\beta}\) when evaluated on
two point measures \({\cal E}\sim\delta_{x}\) and \({\cal E}^{\prime}\sim\delta_{y}\) - that is, the Wasserstein metric preserves distances between points. Moreover, to do so _faithfully_ means that the Wasserstein metric preserves distances for entire extended shapes: if \({\cal E}\) is _any_ measure, and \({\cal E}^{\prime}\) is the same as \({\cal E}\) whose density is translated by a vector \(t\) (that is, \({\cal E}^{\prime}\) has corresponding density \({\cal E}(x-t)\)), then the metric between them is simply \(d(0,t)^{\beta}\).
To see this explicitly, we can compare to two other potential candidates for our universal loss function \({\cal L}\): the class of Maximum Mean Discrepancies (MMDs) [84] and Chamfer distances [85], which can respectively be written as:8
Footnote 8: We choose the MMD and Chamfer distance for comparison because, as shown in App. B, the most general loss that is symmetric, IRC safe, and lifts the ground metric (though not necessarily faithfully) has terms individually resembling the Wasserstein, MMD, and Chamfer distances.
\[{\rm MMD}^{(\beta)}({\cal E},{\cal E}^{\prime}) =-\frac{1}{2\beta}\int_{{\cal X}\times{\cal X}}dx\,dy\,d(x,y)^{ \beta}\,\left({\cal E}(x)-{\cal E}^{\prime}(x)\right)\left({\cal E}(y)-{\cal E }^{\prime}(y)\right), \tag{10}\] \[{\rm CD}^{(\beta)}({\cal E},{\cal E}^{\prime}) =\frac{1}{2\beta}\int_{{\cal X}}dx\,\min_{y\in{\rm Supp}({\cal E}^ {\prime})}[d(x,y)^{\beta}]\,{\cal E}(x)+\frac{1}{2\beta}\int_{{\cal X}}dy\, \min_{x\in{\rm Supp}({\cal E})}[d(x,y)^{\beta}]\,{\cal E}^{\prime}(y). \tag{11}\]
These candidate loss functions are IRC safe, translationally invariant, and even lift the ground metric, but importantly, they do not do so faithfully. To see this, we choose the following example energy flows:
\[{\cal E}(x)\sim\frac{1}{2}\delta_{0}+\frac{1}{2}\delta_{a},\quad{\cal E}^{ \prime}(x)\sim\frac{1}{2}\delta_{t}+\frac{1}{2}\delta_{a+t}, \tag{12}\]
where \(a\) and \(t\) are arbitrary vectors (where \({\cal E}\) consists of 2 points separated by a vector \(a\), and \({\cal E}^{\prime}\) is \({\cal E}\) translated by a vector \(t\)). A direct computation using Eqs. (9), (10), and (11) (with \(d(x,y)=|x-y|\)) yields:
\[{\rm EMD}^{(\beta,R=1)}({\cal E},{\cal E}^{\prime}) =\frac{1}{\beta}|t|^{\beta}, \tag{13}\] \[{\rm MMD}^{(\beta)}({\cal E},{\cal E}^{\prime}) =\frac{1}{2\beta}\left(|t|^{\beta}+\frac{1}{2}|t-a|^{\beta}+\frac{ 1}{2}|t+a|^{\beta}-|a|^{\beta}\right),\] (14) \[{\rm CD}^{(\beta)}({\cal E},{\cal E}^{\prime}) =\frac{1}{2\beta}\left(\min\left[|t|^{\beta},|t+a|^{\beta}\right]+ \min\left[|t|^{\beta},|t-a|^{\beta}\right]\right). \tag{15}\]
While Eqs. (14) and (15) do indeed reduce to \(\sim|t|^{\beta}\) when \(a\to 0\) (that is, when \({\cal E}\) reduces to a single point), in general the MMD and Chamfer distance effectively "distort" the shape.9 When \(\beta=1\), for instance, \({\cal E}^{\prime}\) appears slightly closer to \({\cal E}\), as measured using either MMD or the Chamfer distance, than its total displacement \(|t|\) - energy in the interior of an extended distribution gets effectively "screened" by the energy in the rest of the distribution! Not only
does this distortion ruin our ability to think of our observables as measuring the geometric distribution of energy in the detector, the screening effect also induces a practical issue, as it causes vanishing gradients when trying to optimize over \(\mathcal{L}\)[46] (in this case, by minimizing \(t\), for example). The Wasserstein metric does not suffer these problems, making it the natural choice for our universal loss function.
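The screening effect can be checked numerically for the configuration of Eq. (12). In the sketch below (with \(\beta=1\), \(R=1\), and the specific choice \(t=a\) made purely for illustration), the EMD reports the full displacement \(|t|\), while the MMD and Chamfer distances, computed directly from Eqs. (10) and (11) for atomic measures, report only half of it.

```python
import numpy as np
import ot

beta = 1.0
a = np.array([1.0, 0.0])            # separation of the two particles in E
t = a.copy()                        # translation vector defining E' (here |t| = 1)

x, w = np.array([[0.0, 0.0], a]), np.array([0.5, 0.5])   # E  ~ (1/2) delta_0 + (1/2) delta_a
y = x + t                                                # E' ~ E translated by t

d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
emd = ot.emd2(w, w, d**beta / beta)                      # Eq. (13): |t|^beta / beta = 1.0

u, s = np.concatenate([x, y]), np.concatenate([w, -w])   # signed measure E - E'
D = np.linalg.norm(u[:, None, :] - u[None, :, :], axis=-1)
mmd = -np.sum(np.outer(s, s) * D**beta) / (2 * beta)     # Eq. (14): 0.5

cd = (np.sum(w * np.min(d, axis=1)**beta)
      + np.sum(w * np.min(d, axis=0)**beta)) / (2 * beta)  # Eq. (15): 0.5

print(emd, mmd, cd)   # EMD sees the full displacement; MMD and Chamfer are "screened"
```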
## 3 Hearing Shapes
Having constructed the Wasserstein metric and EMD in Sec. 2 for event and jet shapes, we next generalize Eqs. (1) and (2), which were originally introduced in Ref. [25] as a common form for many well-known observables. We treat Eqs. (1) and (2) as definitions for _shape observables_\(\mathcal{O}_{\mathcal{M}}\) and _shape parameters_\(\theta_{\mathcal{M}}\), which together are the natural generalization of event and jet shapes. Moreover, we show how the manifold of energy flows \(\mathcal{M}\) can be chosen to construct new observables that probe specific geometric structures. We provide a prescription for building \(\mathcal{M}\), which defines the shape observable, as well as prescriptions for _composing_ shape observables together, allowing new shape observables to be defined from simpler ones in a geometrically intuitive way.10
Footnote 10: Like music, shapes are composed using the Shaper paradigm by splicing and overlaying together smaller shapes, after which it may be “heard” by evaluating on event data – a posthoc rationalization for the title of this paper.
This section proceeds as follows. First, we define generalized shape observables and shape parameters, and discuss their properties. Next, we discuss our prescription for shape composition, which can be used to define shape observables and parameters that probe complex geometric structure. Finally, we use our prescription to construct a large (but importantly, inexhaustive) suite of novel shape observables for jet substructure analysis to serve as an example of what can be done with this framework. Several examples of emperical studies using these new shape observables, evaluated using the Shaper framework defined in Sec. 4, can be found in Sec. 5.
### Shape Observables and Shape Parameters
We define a _shape observable_\(\mathcal{O}_{\mathcal{M}}\) as follows:
**Definition 3**.: _A shape observable \(\mathcal{O}_{\mathcal{M}}\), with associated shape parameters \(\theta_{\mathcal{M}}\), on an energy flow \(\mathcal{E}\) is any function of the form:_
\[\mathcal{O}(\mathcal{E}) \equiv\min_{\mathcal{E}_{\theta}\in\mathcal{M}}\mathrm{EMD}^{( \beta,R)}(\mathcal{E},\mathcal{E}_{\theta}), \tag{3}\] \[\theta(\mathcal{E}) \equiv\operatorname*{argmin}_{\mathcal{E}_{\theta}\in\mathcal{M}} \mathrm{EMD}^{(\beta,R)}(\mathcal{E},\mathcal{E}_{\theta}), \tag{4}\]
_where \(\mathcal{M}\) is a manifold of positive measures on the detector space \(\mathcal{X}\), and \(\mathrm{EMD}^{(\beta,R)}\) is the \(\beta\)-Wasserstein distance with length scale \(R\)._
Importantly, we return _both_ the minimum EMD value, \(\mathcal{O}(\mathcal{E})\), _and_ the parameters of the shape that produced the minimum EMD, \(\theta(\mathcal{E})\). For a manifold \(\mathcal{M}\) of generic parameterized shapes, we refer to the former as the "_shapiness_" of \(\mathcal{E}\), and to the latter as the associated "_shape parameters_" of \(\mathcal{E}\). For example, if \(\mathcal{M}\) is the manifold of \(N\)-(sub)jet events, then \(\mathcal{O}(\mathcal{E})\) is the \(N\)-(sub)jettiness of \(\mathcal{E}\), and \(\theta(\mathcal{E})\) are the (sub)jet parameters of \(\mathcal{E}\), highlighting that Eq. (3) is really a generalization of \(N\)-jettiness. Intuitively, \(\mathcal{O}(\mathcal{E})\) answers the question, "How much does my event look like a _shape_?", while \(\theta(\mathcal{E})\) answers the question, "Which _shape_ does my event look most like?".
For a fixed choice of \(\beta\) and scale \(R\) defining the EMD, shape observables are completely specified by the choice of the manifold of energy flows \(\mathcal{M}\). This choice specifies the class of shapes being considered. Practically speaking, this manifold can be defined by choosing a set of coordinates \(\theta\) on the manifold, that parameterize a set of constrained energy flows \(\mathcal{E}_{\theta}\). Since, as established in Sec. 2.2, energy flows are positive measures on the detector space, \(\mathcal{M}\) can be built as a (weighted) parameterized probability distribution \(p_{\theta}\), which is realized by a finite sampling procedure for weighted points on \(\mathcal{X}\).11
Footnote 11: Because energy flow densities depend on the choice of coordinates, the choice of \(p_{\theta}\) corresponding to the class of shapes prescribed by \(\mathcal{M}\) is not unique. For example, to sample a Gaussian distribution, one can sample uniformly weighted points with positions distributed as a Gaussian, or sample uniformly spaced points with Gaussian weights depending on their position, or some mixture of both.
In Ref. [25], the manifolds corresponding to several event and jet shapes were listed. However, the power of our framework is that _any_ manifold of parameterized energy flows defines a valid shape observable, and moreover, this observable directly probes intuitive geometric information. By defining, for example, \(\mathcal{M}\) to be the manifold of energy flows resembling uniform rings of energy, uniform disks of energy, or even uniform ellipses of energy, the corresponding observables directly quantify the ringiness, diskiness, and ellipsiness of events. To our knowledge, the event isotropy [40] is the first observable of this form with no known alternative formulation, as it quantifies the "uniforminess" of events. Examples of analyses using these custom observables can be found in Sec. 5.
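To preview how Eqs. (3) and (4) can be evaluated in practice, the sketch below fits a one-ring manifold to a toy event by sampling the ring with equally weighted points and minimizing the Sinkhorn divergence from the geomloss package [46] with gradient descent, returning both the "ringiness" and the optimal shape parameters. This is a minimal sketch of the idea, not the actual Shaper implementation of Sec. 4; the sampling density, optimizer, learning rate, and blur value are assumptions made here for illustration.

```python
import torch
from geomloss import SamplesLoss   # Sinkhorn approximation of the Wasserstein metric [46]

# toy "event": a noisy ring of 100 equal-weight particles
torch.manual_seed(0)
ang = 2 * torch.pi * torch.rand(100)
ev_x = torch.stack([0.7 * torch.cos(ang) + 0.2, 0.7 * torch.sin(ang) - 0.1], dim=-1)
ev_x = ev_x + 0.05 * torch.randn(100, 2)
ev_w = torch.full((100,), 1.0 / 100)

# manifold M: uniform rings with center c and radius r, sampled with K equal-weight points
K = 64
angles = 2 * torch.pi * torch.arange(K) / K
def ring(c, r):
    pts = c + r * torch.stack([torch.cos(angles), torch.sin(angles)], dim=-1)
    return torch.full((K,), 1.0 / K), pts

c = torch.zeros(2, requires_grad=True)       # shape parameters theta = (center, radius)
log_r = torch.zeros(1, requires_grad=True)   # radius kept positive via exp

sinkhorn = SamplesLoss(loss="sinkhorn", p=2, blur=0.05)
opt = torch.optim.Adam([c, log_r], lr=0.05)
for step in range(200):
    opt.zero_grad()
    w, pts = ring(c, torch.exp(log_r))
    loss = sinkhorn(ev_w, ev_x, w, pts)      # EMD-like distance from the event to the shape
    loss.backward()
    opt.step()

print("ringiness O(E):", float(loss))
print("theta(E): center =", c.detach().numpy(), ", radius =", float(torch.exp(log_r)))
```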
Below, we list some useful properties of all shape observables:
1. **Monotonicity**: If \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) are defined by manifolds \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), for a fixed choice of \(\beta\) and \(R\), then \(\mathcal{M}_{1}\supseteq\mathcal{M}_{2}\) implies that for all energy flows \(\mathcal{E}\), we have \(\mathcal{O}_{1}(\mathcal{E})\leq\mathcal{O}_{2}(\mathcal{E})\). This captures the monotonic nature of observables such as \(N\)-jettiness, which satisfy \(\tau_{N+1}\leq\tau_{N}\).
2. **Closure**: \(\mathcal{O}(\mathcal{E})=0\) if and only if \(\mathcal{E}\in\mathcal{M}\). It then follows that the shape parameters \(\theta(\mathcal{E})\) are such that \(\mathcal{E}_{\theta}=\mathcal{E}\). In particular, this implies that for \(\mathcal{M}_{\mathcal{E}}\), the space of _all_ energy flows, \(\mathcal{O}(\mathcal{E})=0\) for all events.
3. **Approximation Bounds**: For any \(L\)-Lipschitz function \(\phi\) on \(\mathcal{X}\), the "optimal shape" \(\mathcal{E}_{\theta}\) corresponding to \(\mathcal{E}\) satisfies \(\frac{1}{RL}|\left\langle\mathcal{E},\phi\right\rangle-\left\langle\mathcal{E }_{\theta},\phi\right\rangle|\leq\mathcal{O}(\mathcal{E})\)[26]. That is, the optimal
shape \(\mathcal{E}_{\theta}\) can be used to approximate additive Lipschitz observables on \(\mathcal{E}\), up to a known bounded error.
4. **Upper Bounds**: If \(\mathcal{X}\) is bounded by a maximum distance scale \(R_{\max}\), and both \(\mathcal{E}\) and all energy flows on \(\mathcal{M}\) satisfy \(E_{\rm tot}=1\), then \(\mathcal{O}(\mathcal{E})\) is bounded above by \(\big{(}\frac{R_{\max}}{R}\big{)}^{\beta}\).12 This can be seen by considering the extreme case where \(\mathcal{E}\) and \(\mathcal{E}_{\theta}\) are singleton points located \(R_{\max}\) away from each other. Any configuration other than this will yield a lower EMD. This makes it always possible to normalize \(\mathcal{O}(\mathcal{E})\in[0,1]\). Footnote 12: This is not necessarily the least upper bound, which depends specifically on the choice of \(\mathcal{M}\).
We call a manifold of energy flows \(\mathcal{M}\)_balanced_ if all of the energy flows are normalized (i.e. \(E_{\rm tot}=1\)), and unbalanced otherwise. Similarly, we call a shape observable \(\mathcal{O}_{\mathcal{M}}\) balanced if \(\mathcal{M}\) is balanced _and_ it is only evaluated on normalized energy flows. For a balanced observable, the choice of scale \(R\) constitutes only a change of units for the distance metric, and is unimportant.13 Moreover, in the limit \(R\to\infty\), the quantity \(R^{\beta}\mathcal{O}(\mathcal{E})\) is only finite if \(E_{\rm tot}=E_{\theta\rm tot}\)[25]. In this limit, the (arg)min can be written over the submanifold \(\mathcal{M}_{E}\subseteq\mathcal{M}\), which is the submanifold of events with the same total energy as \(\mathcal{E}\). Thus, the \(R\to\infty\) limit effectively forces unbalanced shape observables to be balanced. We will primarily consider balanced shape observables for the remainder of this paper, though there are many important unbalanced observables (e.g. \(N\)-jettiness) that one may consider.
Footnote 13: Though we will see in Sec. 4 that \(R\) matters again for our numerical approximations.
### Composing Shapes
Given two shape observables \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\), with the same choice of \(\beta\) and \(R\), we can define a new _composite observable_\(\mathcal{O}=\mathcal{O}_{1}\oplus\mathcal{O}_{2}\). As with all shape observables above, we define the composite shape observable \(\mathcal{O}\) by specifying the corresponding manifold \(\mathcal{M}\). We consider two possible scenarios to define \(\mathcal{M}\):
* **Case 1 (Balanced)**: If both \(\mathcal{O}_{1}\) and \(\mathcal{O}_{2}\) are balanced shape observables, then we define \(\mathcal{M}\) to be the manifold of all energy flows \(\mathcal{E}\) of the form \(\mathcal{E}=z_{1}\mathcal{E}_{1}+z_{2}\mathcal{E}_{2}\), where \(\mathcal{E}_{1}\in\mathcal{M}_{1}\) and \(\mathcal{E}_{2}\in\mathcal{M}_{2}\), and \((z_{1},z_{2})\) is a point in \(\Delta_{1}\), the 1-simplex.14 Footnote 14: The \(N\)-simplex, \(\Delta_{N}\), is the set of all points \(z_{1},...,z_{N+1}\geq 0\) such that \(\sum_{i=1}^{N+1}z_{i}=1\).
* **Case 2 (Totally Unbalanced)**: If the energy flows on \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\) have unconstrained \(E_{\rm tot}\)'s (that is, \(E_{\rm tot}\) is an independent free parameter for each manifold), then we define \(\mathcal{M}\) to be the manifold of all energy flows \(\mathcal{E}\) of the form \(\mathcal{E}=\mathcal{E}_{1}+\mathcal{E}_{2}\), where \(\mathcal{E}_{1}\in\mathcal{M}_{1}\) and \(\mathcal{E}_{2}\in\mathcal{M}_{2}\).
There exist other possible cases, such as if the manifolds only admit a small range of \(E_{\rm tot}\) values, but we will not consider them in this work.15 In Case 1, the 1-simplex \(\Delta_{1}\) introduces
2 additional (constrained) parameters to the shape, \(z_{1}\) and \(z_{2}\), that ensure that all energy flows are still normalized to 1, so that \(\mathcal{O}\) is balanced. Here, \(\mathcal{M}\) can be realized by generating points according to the parameterized sampling procedures for \(\mathcal{M}_{1}\) and \(\mathcal{M}_{2}\), and multiplying the energy weights of the generated particles by \(z_{1}\) and \(z_{2}\), respectively. The values of \(z_{1}\) and \(z_{2}\) control the relative contribution of each base shape to the composition. In Case 2, the total energy remains unconstrained, and the relative importance of each base shape to the composition is controlled by the ratio of the \(E_{\rm 1tot}\) and \(E_{\rm 2tot}\) parameters. Note that monotonicity implies that, for both Case 1 and Case 2, \(\mathcal{O}(\mathcal{E})\leq\mathcal{O}_{1}(\mathcal{E})\) and \(\mathcal{O}(\mathcal{E})\leq\mathcal{O}_{2}(\mathcal{E})\).
It is easy to extend these definitions to a composition of \(N\) observables, \(\mathcal{O}=\bigoplus_{i=1}^{N}\mathcal{O}_{i}\). For the balanced case, \(\mathcal{M}\) comprises energy flows of the form \(\mathcal{E}=\sum_{i=1}^{N}z_{i}\mathcal{E}_{i}\) for \(\mathcal{E}_{i}\in\mathcal{M}_{i}\) and \(z_{i}\in\Delta_{N-1}\). In the totally unbalanced case, \(\mathcal{M}\) comprises energy flows of the form \(\mathcal{E}=\sum_{i=1}^{N}\mathcal{E}_{i}\) for \(\mathcal{E}_{i}\in\mathcal{M}_{i}\). When the \(\mathcal{O}_{i}\) are all copies of the same shape observable \(\mathcal{O}\), we define the notation \(N\times\mathcal{O}\), and name the resulting composite shape observable the "\(N\)-shapiness".
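To make the composition prescription concrete, the following minimal PyTorch sketch combines the output of several parameterized samplers into a single composite energy flow. The sampler interface (each sampler returning normalized weights and positions) and the function names are our own illustrative conventions, not the Shaper API.

```python
import torch

def compose_balanced(samplers, z, n=100):
    """Sketch of the balanced N-fold composition: each base sampler returns
    normalized weights and positions, and its weights are rescaled by the
    simplex coordinates z_i so the combined flow is again normalized to 1."""
    weights, positions = [], []
    for sampler_i, z_i in zip(samplers, z):
        w_i, x_i = sampler_i(n)
        weights.append(z_i * w_i)
        positions.append(x_i)
    return torch.cat(weights), torch.cat(positions)

def compose_unbalanced(samplers, n=100):
    """Totally unbalanced composition: the base flows, each carrying its own
    free total energy, are simply added."""
    outs = [sampler_i(n) for sampler_i in samplers]
    return torch.cat([w for w, _ in outs]), torch.cat([x for _, x in outs])
```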
Given this language, we can understand several existing shape observables as composites of basic "shape" building blocks. For instance, we can define an observable, the (sub)pointiness \(\tau_{1}(\mathcal{E})\), corresponding to the manifold of weighted (normalized) \(\delta\)-function measures on \(\mathcal{X}\). Geometrically, this observable corresponds to fitting a single point to an event \(\mathcal{E}\), with a floating energy weight in the unbalanced case (normalized in the balanced case). We can then define the \(N\)-(sub)pointiness of \(\mathcal{E}\) to be \(\tau_{N}(\mathcal{E})=N\times\tau_{1}(\mathcal{E})\) - this is exactly the \(N\)-(sub)jettiness of \(\mathcal{E}\), with \(N\)-subjettiness corresponding to balanced composition, and \(N\)-jettiness corresponding to totally unbalanced composition.
Shape composition provides a novel avenue for understanding event and jet substructure. For example, the observable \(\tau_{N}\oplus\mathcal{I}\), where \(\mathcal{I}\) is the event isotropy, can be thought of as a "pileup-corrected" \(N\)-(sub)jettiness, where a uniform background is subtracted off. One can use this observable to probe the percentage of energy deposited in hard jets versus a uniform background due to pileup [86] and underlying event effects [87; 88]. In particular, the shape parameter \(\theta(\mathcal{E})=z_{1}(\mathcal{E})\) is an estimate of the percentage of energy in \(\mathcal{E}\) due to hard jets. In Sec. 5.7, we perform empirical studies of these types of shape observables.
### Examples of Novel Shapes
In this subsection, we list (some) potentially phenomenologically interesting shape observables, all of which are defined for the first time in this work, that can be constructed using the prescription outlined above. For all these observables, we consider the detector geometry to be a rectangular patch of a cylinder, \(\mathcal{X}=[y_{\rm min},y_{\rm max}]\times[\phi_{\rm min},\phi_{\rm max}]\), though one could consider the full cylinder as well. These observables are summarized in Table 3, and we show empirical examples of these observables in action in Sec. 5. We consider balanced observables with \(\beta=1\), leaving unbalanced observables for future work.
\begin{table}
\begin{tabular}{|c|c|c|} \hline \hline
**Sec.** & **Shape** & **Specification** \\ \hline \hline
3.3.1 & **Ringiness** \(\mathcal{O}_{R}\) & Manifold of rings: \(\mathcal{E}_{x_{0},R_{0}}(x)=\frac{1}{2\pi R_{0}}\) for \(|x-x_{0}|=R_{0}\); \(x_{0}\) = center, \(R_{0}\) = radius \\ \hline
3.3.2 & **Diskiness** \(\mathcal{O}_{D}\) & Manifold of disks: \(\mathcal{E}_{x_{0},R_{0}}(x)=\frac{1}{\pi R_{0}^{2}}\) for \(|x-x_{0}|\leq R_{0}\); \(x_{0}\) = center, \(R_{0}\) = radius \\ \hline
3.3.3 & **Ellipsiness** \(\mathcal{O}_{E}\) & Manifold of ellipses: \(\mathcal{E}_{x_{0},a,b,\varphi}(x)=\frac{1}{\pi ab}\) for \(x\in\mbox{Ellipse}_{x_{0},a,b,\varphi}\); \(x_{0}\) = center, \(a,b\) = semi-axes, \(\varphi\) = tilt \\ \hline
3.3.4 & **(Ellipse+Point)iness** & Composite shape \(\mathcal{O}_{E}\oplus\tau_{1}\), fixed to the same center \(x_{0}\) \\ \hline
3.3.5 & **N-(Ellipse+Point)iness + Pileup** & Composite shape \(N\times(\mathcal{O}_{E}\oplus\tau_{1})\oplus\mathcal{I}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Custom observables, defined using the Shaper prescription, designed to probe jet substructure at increasing levels of complexity. For each observable, the manifold parameterization is given, either explicitly, or as a composition of previously defined objects. Here, \(\tau_{1}\) is the one-pointiness (1-subjettiness), and \(\mathcal{I}\) is the event isotropy. More details on these types of observables, plus explicit construction of sampling functions, can be found in Sec. 3.3.
#### 3.3.1 \(N\)-Ringiness
We first consider a simple shape observable: _ringiness_, which probes how ring-like an event is. We begin by defining the manifold of all ring-like energy flows \(\mathcal{M}_{R}\), which consist of energy flows corresponding to energy flow densities of the form:
\[\mathcal{E}_{x_{0},R_{0}}(x)=\begin{cases}\frac{1}{2\pi R_{0}}&|x-x_{0}|=R_{0}\\ 0&\text{otherwise}\end{cases}, \tag{3.3}\]
where the parameters \(x_{0}\) and \(R_{0}\) correspond to the center and radius of a ring, respectively. To build a sampler, we use the so-called _reparameterization trick_[89]. Drawing \(N\) samples from the unit uniform distribution \(\phi\sim U(0,1)\), the distribution of points:
\[x=R_{0}\left(\cos(2\pi\phi),\sin(2\pi\phi)\right)+x_{0}, \tag{3.4}\]
where each particle has weight \(\frac{1}{N}\), is a realization of \(\mathcal{E}_{x_{0},R_{0}}(x)\).
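A minimal PyTorch sketch of this reparameterization-trick sampler is given below; the function name and return convention are ours. Because the random draw does not depend on the parameters, the sampled positions are differentiable in \(x_{0}\) and \(R_{0}\), which is what allows gradient descent over the manifold coordinates later on.

```python
import math
import torch

def sample_ring(x0, R0, n=100):
    """Sketch of the sampler in Eq. (3.4): n particles of weight 1/n placed
    uniformly on a ring of radius R0 centered at x0 (a length-2 tensor).
    The randomness (phi) is parameter-free, so the positions are
    differentiable in x0 and R0."""
    phi = 2 * math.pi * torch.rand(n)
    positions = R0 * torch.stack([torch.cos(phi), torch.sin(phi)], dim=-1) + x0
    weights = torch.full((n,), 1.0 / n)
    return weights, positions
```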
The corresponding observable, \(\mathcal{O}_{R}(\mathcal{E})\), is the ringiness of the event \(\mathcal{E}\). While most QCD jets are not expected to be ring-like, this observable can identify clumps of radiation scattered around a central point, as may be the case in a 3-pronged top quark decay. Additionally, observables that probe the boundary of a jet with an empty interior may prove useful in studies of the dead-cone effect [90; 91; 92] where collinear radiation is relatively suppressed.
Having defined ringiness, we can next define \(N\)-ringiness, which probes how much an event looks like \(N\) rings, each of arbitrary center, radius, and weight. This observable is defined as \(N\times\mathcal{O}_{R}\). Following the prescription outlined in Sec. 3.2, we can build \(N\) weighted rings, by separately sampling Eq. (3.4) for each ring's center and radius, and multiplying the weights by \(z_{i}\in\Delta_{N-1}\).
For numerical methods, one must set initial values for these parameters in an IRC-safe way. While in principle, the choice of initialization should make no difference, in practice, the presence of numerical effects and local minima makes the choice of initialization important. In our initialization scheme for \(N\)-rings, we perform \(k_{T}\) clustering to find \(N\) subjets. The location of the subjets is then taken to be the ring center, and the subjet energy is taken to be the ring energy. We choose to initialize the radius of each ring to zero, so that the \(N\)-ringiness is guaranteed to deviate from \(N\)-subjettiness only if it will make the event more ringlike, though it is also possible to initialize the radius to e.g. the distance of the \((N+1)\)-th jet in the clustering history.
#### 3.3.2 \(N\)-Diskiness
Next, we define _diskiness_ \(\mathcal{O}_{D}\), which measures how disk-like an event is. Similar to ringiness, we parameterize the manifold of energy flow densities:
\[\mathcal{E}_{x_{0},R_{0}}(x)=\begin{cases}\frac{1}{\pi R_{0}^{2}}&|x-x_{0}|\leq R_{0}\\ 0&\text{otherwise}\end{cases}, \tag{3.5}\]
where \(x_{0}\) and \(R_{0}\) are the center and radius of the disk.
To build a sampler, as with ringiness, we draw \(N\) samples from the unit uniform distribution \(\phi\sim U(0,1)\), and also \(N\) points \(r\sim U(0,1)\). Then, the distribution of points:
\[x=\sqrt{r}R_{0}\left(\cos(2\pi\phi),\sin(2\pi\phi)\right)+x_{0}, \tag{3.6}\]
where each particle has weight \(\frac{1}{N}\), is a realization of a uniform disk. The random variable \(r\) controls the radius of the point being sampled, and the square root is from a Jacobian factor to make the disk uniform.
Given the diskiness, we can easily compose the \(N\)-diskiness, \(N\times\mathcal{O}_{D}\). The \(N\)-diskiness is analogous to the \(N\)-jettiness/XCone jet algorithm, in that it returns the locations of conical clusters of particles. However, unlike XCone, the radius \(R_{0}\) is a learned, rather than fixed parameter, allowing for dynamic jet radii.16 To initialize the \(N\)-diskiness, the exact same procedure is used as described in Sec. 3.3.1 for \(N\)-ringiness. Note that there are many ways to modify the \(N\)-diskiness to produce similar observables - for example, one can replace the uniform disks with Gaussians to probe different radiation patterns.
Footnote 16: In place of fixing the radius \(R\), as in the XCone algorithm, one instead assumes that the jet energies are uniform across the disk, so “assumptions are conserved”, so to speak.
#### 3.3.3 \(N\)-Ellipsiness
Jets need not necessarily be circular! Indeed, many jet algorithms, such as the widely-used \(k_{T}\)[64] and Cambridge-Aachen [62; 63] algorithms, do not return circular jets. Motivated by this, we define a generalization of diskiness, the _ellipsiness_\(\mathcal{O}_{E}\) of a jet. The manifold of ellipses is given by energy flow densities of the form:
\[\mathcal{E}_{x_{0},a,b,\varphi}(\vec{x})=\begin{cases}\frac{1}{\pi ab}&\left( \frac{(x-x_{0})\cos\varphi+(y-y_{0})\sin\varphi}{a}\right)^{2}+\left(\frac{(y -y_{0})\cos\varphi-(x-x_{0})\sin\varphi}{b}\right)^{2}\leq 1\\ 0&\text{otherwise}\end{cases}, \tag{3.7}\]
\(x_{0}\) is the center of the ellipse, \(a\) and \(b\) are the semi-major and semi-minor axes,17 and \(\varphi\) is the tilt of the \(x\)-axis. Here, we have restored vector notation \(\vec{x}\) to indicate that the \(x\)-and \(y\)-axes are treated differently. There are many equivalent alternate parameterizations of the ellipse, including in terms of its focal length \(c=\sqrt{\max(a,b)^{2}-\min(a,b)^{2}}\) and eccentricity \(e=\sqrt{1-\frac{\min(a,b)}{\max(a,b)}}\). Note that for \(a=b\), the ellipse reduces to a disk, and the \(\varphi\) parameter becomes redundant.
Footnote 17: non-respectively; \(a\) corresponds to the \(x\)-axis and \(b\) to the \(y\)-axis, and we make no distinction here which is the major versus minor axis.
The sampling procedure for disks can be recycled for ellipses, with some small modifications. Given \(N\) sampled points \(\phi,r\sim U(0,1)\), the distribution:
\[x=U_{\varphi}\cdot\left(a\sqrt{r}\cos(2\pi\phi),b\sqrt{r}\sin(2\pi\phi)\right) ^{T}+x_{0}, \tag{3.8}\]
where \(U_{\varphi}\) is the \(2\times 2\) rotation matrix corresponding to the angle \(\varphi\) and each particle has weight \(\frac{1}{N}\), is a realization of a uniform ellipse. We can then easily compose the \(N\)-ellipsiness, which
can serve as a jet algorithm that finds non-circular jets. In particular, this shape observable allows for the eccentricity \(e\) of the clustered jets to be extracted, allowing one to quantify how far from circular each jet is. As with the \(N\)-ringiness and \(N\)-diskiness, the centers of the ellipses are chosen using the \(k_{T}\) clustering algorithm. Both \(a\) and \(b\) are initialized to be zero, so that deviations from either \(N\)-subjettiness or \(N\)-diskiness occur only if they make the event more elliptical.
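A minimal PyTorch sketch of this sampler is shown below (names are ours); setting \(a=b\) recovers the uniform-disk sampler of Eq. (3.6).

```python
import math
import torch

def sample_ellipse(x0, a, b, tilt, n=100):
    """Sketch of Eq. (3.8): sample a uniform unit disk via (sqrt(r), 2*pi*phi),
    stretch by the semi-axes (a, b), rotate by the tilt angle, and translate
    to the center x0. Each of the n particles carries weight 1/n."""
    r, phi = torch.rand(n), 2 * math.pi * torch.rand(n)
    pts = torch.stack([a * torch.sqrt(r) * torch.cos(phi),
                       b * torch.sqrt(r) * torch.sin(phi)], dim=-1)
    t = torch.as_tensor(tilt)
    c, s = torch.cos(t), torch.sin(t)
    U = torch.stack([torch.stack([c, -s]), torch.stack([s, c])])  # 2x2 rotation U_phi
    positions = pts @ U.T + x0
    weights = torch.full((n,), 1.0 / n)
    return weights, positions
```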
#### 3.3.4... Plus Pointiness
Energy is not uniformly distributed within a jet! Indeed, to leading order in perturbative QCD, much of a jet's radiation will be soft and/or collinear with respect to the emitting parton. We can probe this by composing together shapes that explicitly target soft and collinear radiation separately. To this end, we construct a set of new observables, the (_shape_+point)iness, for \(\textit{shape}\in\{\mathcal{O}_{R},\mathcal{O}_{D},\mathcal{O}_{E}\}\). This is defined using the shape composition prescription described in Sec. 3.2, as:
\[\mathcal{O}_{i}^{\tau}=\mathcal{O}_{i}\oplus\tau_{1}, \tag{3.9}\]
where \(\tau_{1}\) is the 1-pointiness (equivalently, the 1-subjettiness), and \(\mathcal{O}_{i}\) is any of \(\{\mathcal{O}_{R},\mathcal{O}_{D},\mathcal{O}_{E}\}\) previously defined. Importantly, we fix the location of the \(\delta\)-function in \(\tau_{1}\) to be \(x_{0}\), though one may consider letting the location of the \(\delta\)-function float to define a recoil-free variant.18 We can then extend this definition to compose the \(N\)-(shape+point)iness.
Footnote 18: In the elliptical case, one may consider attaching \(\delta\)-functions to one or both of the foci instead of the center. We leave the study of variants of these observables to future work.
When used as a jet algorithm, the \(N\)-(shape+point)iness provides a more physical picture of perturbative QCD than do the previously defined shapes. The base shapes, particularly disks and ellipses, capture wide-angle soft radiation, while the \(\delta\)-functions capture both hard and soft collinear radiation at the center of the (sub)jet. Moreover, within each shape-point pair, the floating parameters \(z_{1}\) and \(z_{2}\) tell us the fraction of radiation in the wide-angle and collinear sectors, which in principle can be calculated in and compared to perturbative QCD.
When initializing observables of this type, the initialization occurs as described in previous sections using the \(k_{T}\) algorithm with all radii set to zero. However, we choose to split the \(k_{T}\) energy equally between the shape and the \(\delta\)-function. Note that this is an IRC-safe choice, since at zero radius, the shape is indistinguishable from the \(\delta\)-function.
#### 3.3.5... Plus Pileup
In hadron-hadron collisions, there are many sources of contamination in jets, including underlying event contributions from proton remnants [87; 88], and pileup due to simultaneous hadron collisions [86]. We will collectively refer to these sources of contamination as _pileup_ for simplicity. Pileup contamination biases and smears the "true" value of observables reconstructed from final state particles, driving the need for mitigation techniques.
Pileup is approximately uniformly distributed in the rapidity-azimuth plane. This is exactly the shape probed by the event isotropy, \(\mathcal{I}\). Thus, in order to protect shape observables against pileup contamination, we can compose them with the event isotropy, which will soak up radiation uniform in the plane. This defines the _shapes\(+\)pileup_ observable:
\[\mathcal{O}_{i}^{\mathcal{I}}=\mathcal{O}_{i}\oplus\mathcal{I}, \tag{3.10}\]
where \(\mathcal{O}_{i}\) is any shape observable, including those previously defined. As a departure from Ref. [40], we realize the uniform event by randomly sampling in the plane, rather than defining a grid. We also primarily focus on the \(\beta=1\) event isotropy. Unlike mitigation techniques such as area subtraction [93; 94; 95] or jet grooming [96; 97], where an implicit assumption is made about the pileup energy density (either explicitly as an input \(\rho\), or implicitly through a soft scale \(z_{\rm cut}\)), the shape observable \(\mathcal{O}_{i}^{\mathcal{I}}\) makes no explicit energy scale assumptions. The uniform energy weight, \(z_{2}\), is optimized over, and so the observable "learns" its own pileup scale, which can then be extracted. We choose to initialize the pileup scale \(z_{2}\) to zero, though one could choose any value of \(z_{2}\) if they had a prior on the amount of pileup in events.
#### 3.3.6... And More!
This has _not_ been an exhaustive list - one can use any manifold \(\mathcal{M}\) of energy flows one can think of, with the only two limits being imagination and the ability to write down a sampling procedure. Other examples of shapes include polygons, hardcoded jet topologies (for example, two-pronged jets restricted to between \(\Delta R=R_{1}\) and \(R_{2}\) apart), Gaussian clusters, graph-based shapes, and so on. These observables can also be combined into more complex ones using shape composition. All of these can be constructed within the Shaper framework (more details in Sec. 4), and we encourage the community to use this prescription to develop their own observables.
## 4 The Shaper Framework
Calculating the Wasserstein metric in Eq. (2.9) is notoriously difficult; if both events have \(n\) particles, then the runtime needed by a brute force, generic Wasserstein solver can be as high as \(\mathcal{O}(n^{3}\log n)\)[98]. Generic solvers also make it difficult to extract the gradients of the metric with respect to one of the events, \(\nabla_{\mathcal{E}}\mathrm{EMD}(\mathcal{E},\mathcal{E}^{\prime})\), which are necessary for performing gradient descent over the space of events in \(\mathcal{M}\). Fortunately, by using the (de-biased) Sinkhorn divergence, which uses an \(\epsilon\)-regularization to approximate the Wasserstein metric, the total costs can be lowered all the way down to \(\mathcal{O}(n^{2}\log n)\)[99; 100; 101; 102; 103].
In this section, we introduce the **S**hape **H**unting **A**lgorithm for **P**arameterized **E**nergy **R**econstruction - or Shaper - to define and calculate shape observables. Shaper is a Pytorch-enabled [104] and parallelized computational framework for defining and composing shape observables and their corresponding energy flow manifolds, built using the geomloss[46] package. We start by outlining the Shaper algorithm. Then, we provide details on the Sinkhorn divergence, before ending this section with implementation details. For the rest
of this paper, we restrict ourselves to balanced observables, i.e. \(E_{\theta,\text{tot}}=E_{\text{tot}}=1\), leaving the unbalanced case for future work.
### The Shaper Algorithm
We now describe how to perform the minimization \((\text{arg})\text{min}_{\mathcal{E}_{\theta}^{\prime}\in\mathcal{M}}\left[ \text{EMD}(\mathcal{E},\mathcal{E}_{\theta}^{\prime})\right]\) using Shaper.19 The Shaper algorithm for estimating shape observables on an event is as follows:
Footnote 19: NEEMo [47] is another differentiable EMD estimator that works by parameterizing the space of Lipschitz-Kantorovich potentials.
1. **Define**: Following the prescription of Sec. 3.1, define a manifold \(\mathcal{M}\) and coordinates \(\theta\) parametrizing the manifold. Define the ground metric \(d(x,y)\), the exponent \(\beta\), and the radius \(R\). This fully defines the observable \(\mathcal{O}\). Build a sampling function \(p_{\theta}\) that uses the parameters \(\theta\) to transform some base distribution into a realization of the energy flows \(\mathcal{E}_{\theta}^{\prime}\in\mathcal{M}\). Finally, choose an approximation parameter \(\epsilon\ll 1\) and an annealing parameter \(\Delta\in(0,1)\).
2. **Initialize**: For each event \(\mathcal{E}\), choose initial parameters \(\theta\). This initialization should be done in an IRC-safe way.
3. **Compute the EMD**: Compute the de-biased Sinkhorn divergence, \(S_{\epsilon}(\mathcal{E},\mathcal{E}^{\prime})\), as defined in Sec. 4.3 below, as an estimate of the EMD. Save the corresponding de-biased Kantorovich potentials, \(F\) and \(G\).
4. **Gradient Update**: Perform the gradient update: \[\theta\leftarrow\theta-\alpha\left(\sum_{j=1}^{M}G(y_{j})\frac{\partial E_{j}^{\prime}}{\partial\theta}+\sum_{j=1}^{M}E_{j}^{\prime}\frac{\partial G(y_{j})}{\partial y_{j}}\frac{\partial y_{j}}{\partial\theta}\right),\] (4.1) where \(\alpha\) is a learning rate hyper-parameter. The first term is the dependence of the EMD on the particle energies due to \(\theta\), and the second is the dependence on the particle positions due to \(\theta\), both of which are implicit through the sampling function \(p_{\theta}\). This step can be replaced with any other gradient descent optimizer.
5. **Constrain**: If the manifold \(\mathcal{M}\) is nontrivial, impose any necessary constraints on the coordinates \(\theta\), such as wrapping angles between \(-\pi\) and \(\pi\), enforcing positivity, or a simplex projection.
6. **Converge**: Repeat Steps 3-5 until convergence. Return the final value of the EMD and the final \(\theta\) parameters. A minimal single-observable sketch of this loop is given after these steps.
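As a concrete, single-event illustration of Steps 2-6, the following sketch estimates the ringiness of Sec. 3.3.1 using PyTorch's Adam optimizer and the geomloss Sinkhorn backend described in Sec. 4.4. It is only a sketch of the procedure, not the Shaper implementation: the names are ours, the initialization is simplified to the energy-weighted centroid rather than \(k_{T}\) clustering, and no annealing schedule or early stopping is included.

```python
import math
import torch
from geomloss import SamplesLoss

# Assumed geomloss interface: SamplesLoss("sinkhorn", ...) acts on weighted point
# clouds (a, x, b, y); blur plays the role of epsilon, scaling that of Delta.
emd = SamplesLoss("sinkhorn", p=1, blur=1e-3, scaling=0.9, debias=True)

def ringiness(event_E, event_X, n_samples=100, n_epochs=200, lr=0.01):
    """Single-event sketch of Steps 2-6 for the ringiness observable.
    event_E are normalized energies, event_X the (y, phi) positions."""
    # Step 2: initialize at the energy-weighted centroid, with zero radius
    x0 = (event_E[:, None] * event_X).sum(0).detach().clone().requires_grad_(True)
    R0 = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([x0, R0], lr=lr)
    for _ in range(n_epochs):
        opt.zero_grad()
        phi = 2 * math.pi * torch.rand(n_samples)             # Step 3: realize E'_theta
        pts = R0 * torch.stack([torch.cos(phi), torch.sin(phi)], dim=-1) + x0
        w = torch.full((n_samples,), 1.0 / n_samples)
        loss = emd(event_E, event_X, w, pts)                  # de-biased Sinkhorn EMD
        loss.backward()                                       # Step 4: autograd gradients
        opt.step()
        with torch.no_grad():                                 # Step 5: constrain R0 >= 0
            R0.clamp_(min=0.0)
    return loss.item(), x0.detach(), R0.detach()              # Step 6: shapiness + parameters
```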
The Shaper framework contains modules to aid or automate each of these steps, which we describe further in Sec. 4.4.
### The Dual Formulation of Wasserstein
Observe that the EMD in Eq. (2.9) falls into a generic class of problems called _linear programs_. A linear program involves minimizing a function \(\mathcal{L}(x)=\langle c,x\rangle\) over vectors \(x\), where \(c\) is some cost function linear in \(x\). Furthermore, \(x\) satisfies some linear constraint of the form \(b=Ax\), and we additionally require \(x\geq 0\). In our case, \(x\) is the (flattened) transfer matrix \(\pi\), \(c\) is the (flattened) distance matrix \(d^{\beta}\), \(b=(\mathcal{E},\mathcal{E}^{\prime})\) are the energy flows, and \(A\) is a matrix enforcing the simplex constraints on \(\pi\).
The theory of linear programs is well-studied [105]. In particular, for every _primal_ linear program, there exists a _dual_ linear program, where the constraints and variables to be optimized switch roles, similar to the method of Lagrange multipliers. In the dual problem, one instead maximizes the function \(\mathcal{L}(y)=\langle b,y\rangle\), subject to \(A^{T}y\leq c\).20 For the Wasserstein metric, the dual formulation looks like:
Footnote 20: For problems showcasing strong duality, the existence of an optimal solution for the primal problem implies the existence of an optimal solution for the dual problem. The problems we consider in this work admit strong duality. See Ref. [106] for a mathematically rigorous discussion.
\[\text{EMD}(\mathcal{E},\mathcal{E}^{\prime})^{(\beta,R)}=\max_{f,g:\mathcal{X}\to\mathbb{R}}\left[\langle\mathcal{E},f\rangle+\left\langle\mathcal{E}^{\prime},g\right\rangle\right],\text{ such that }f(x)+g(y)\leq\frac{1}{\beta R^{\beta}}d(x,y)^{\beta}, \tag{4.2}\]
where \(f\) and \(g\) are known as the _dual potentials_ or _Kantorovich potentials_. This formulation of the EMD is known as the _Kantorovich-Rubinstein_ metric [106].
In this form, the EMD has several nice properties. First, the arguments \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) are explicit, rather than implicit in the form of constraints. This makes taking the gradient of the EMD with respect to either energy flow much easier, which is incredibly useful for performing optimizations over energy flows. Second, the optimization over an \(MN\)-dimensional object, \(\pi_{ij}\), is replaced by an optimization over the \((M+N)\)-dimensional object, \(f_{i}\) and \(g_{j}\), making the simplex constraint structure more apparent. It can be shown that the optimal choice of \(f\) and \(g\) actually saturates the bound in Eq. (4.2) [107]. Recalling that the ground metric satisfies \(d(x,x)=0\) for all \(x\), we can see that the optimal \(f,g\) pair satisfies \(f(x)=-g(x)\). This allows us to rewrite the constraint as:
\[|f(x)-f(y)|\leq\frac{1}{\beta R^{\beta}}d(x,y)^{\beta}. \tag{4.3}\]
That is, \(f\) is \(\beta\)-Hölder continuous. Note that for \(\beta=1\), this reduces to Lipschitz continuity on \(f\).
### Reviewing the Sinkhorn Divergence
The source of the difficulty in evaluating Eq. (4.2) is the highly nonconvex optimization. To alleviate this, we introduce a regulator [44; 45] to the dual Wasserstein metric:
\[\mathrm{OT}^{(\beta,R)}_{\epsilon}(\mathcal{E},\mathcal{E}^{\prime })=\max_{f,g:\mathcal{X}\rightarrow\mathbb{R}}\Bigg{[} \langle\mathcal{E},f\rangle+\big{\langle}\mathcal{E}^{\prime},g\rangle\] \[-\epsilon^{\beta}\log\bigg{\langle}\mathcal{E}(x)\otimes \mathcal{E}^{\prime}(y),e^{\left(\frac{1}{\epsilon^{\beta}}(f(x)+g(y)-\frac{1} {\beta R^{\beta}}d(x,y)^{\beta})\right)}\bigg{\rangle}\Bigg{]}, \tag{4.4}\]
where \(\epsilon\) is a regulation parameter. The quantity \(\mathrm{OT}^{(\beta,R)}_{\epsilon}(\mathcal{E},\mathcal{E}^{\prime})\) is known as the _Sinkhorn divergence_[43] between measures \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\). It reduces to the EMD as \(\epsilon\to 0\).21 Notably, for any \(\epsilon>0\), the optimization over \(f\) and \(g\) is fully convex [107], making the minimum significantly easier to evaluate.
Footnote 21: In the \(\epsilon\rightarrow\infty\) limit, we recover instead the maximum mean discrepancy (MMD) [84], another potential metric on collider events, which we have shown in Sec. 2.4 is not faithful.
Note that there are _no_ constraints on the functions \(f\) and \(g\) anymore. Instead, the maximum will only be achieved when \(f(x)+g(y)\) is within order \(\epsilon^{\beta}\) of \(\frac{1}{\beta R^{\beta}}d(x,y)^{\beta}\), a softer version of the original simplex constraint. We can view the parameter \(\epsilon\) as "blurring" the distance metric \(d(x,y)\), where \(\epsilon\) is a distance scale measured in units of \(R\).22
Footnote 22: Even though the \(R\) parameter is unimportant for calculating the exact Wasserstein metric for balanced observables, beyond defining a unit scale, its importance re-emerges when defining the blurring scale \(\epsilon\).
As an unconstrained, convex minimization problem, we can estimate the Sinkhorn divergence using simple gradient descent by taking derivatives of Eq. (4.4) with respect to \(f\) and \(g\). Given two atomic measures, \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\), with \(M\) and \(N\) particles respectively, and an approximation parameter \(\epsilon\), we estimate the Kantorovich potentials \(f(x_{i})\) and \(g(y_{j})\) that give us the Sinkhorn divergence using the following algorithm:
1. **Initialize**: Initialize \(f(x_{i})=0\) and \(g(y_{j})\) = 0.
2. **Gradient Update**: Update \(f\) and \(g\) simultaneously as follows: \[f(x_{i})\leftarrow-\epsilon^{\beta}\log\Biggl{(}\sum_{j=1}^{N} E^{\prime}_{j}e^{\left(\frac{1}{\epsilon^{\beta}}(g(y_{j})-\frac{1}{\beta R^{ \beta}}d(x_{i},y_{j})^{\beta})\right)}\Biggr{)},\] (4.5) \[g(y_{j})\leftarrow-\epsilon^{\beta}\log\Biggl{(}\sum_{i=1}^{M} E_{i}e^{\left(\frac{1}{\epsilon^{\beta}}(f(x_{i})-\frac{1}{\beta R^{\beta}}d(x_{i},y_{j} )^{\beta})\right)}\Biggr{)}.\] (4.6)
3. **Converge**: Repeat Step 2 until convergence. Return the Kantorovich potentials \(f\) and \(g\), and the Sinkhorn divergence Eq. (4.4) evaluated on these potentials. A minimal code sketch of these updates is given below.
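The following self-contained PyTorch sketch implements these simultaneous updates for two weighted point clouds, with a fixed number of iterations, no annealing, and eps standing for \(\epsilon^{\beta}\); it is our own illustration rather than the geomloss implementation used by Shaper.

```python
import torch

def sinkhorn_potentials(E, x, Ep, y, beta=1.0, R=1.0, eps=1e-3, n_iters=1000):
    """Minimal sketch of the simultaneous updates of Eqs. (4.5)-(4.6).
    E, Ep are normalized energy weights; x, y are (n, 2) position tensors.
    The annealing of Sec. 4.3 (which greatly reduces n_iters) is omitted."""
    C = torch.cdist(x, y) ** beta / (beta * R ** beta)   # d(x_i, y_j)^beta / (beta R^beta)
    f, g = torch.zeros(len(E)), torch.zeros(len(Ep))     # Step 1: initialize to zero
    logE, logEp = torch.log(E), torch.log(Ep)
    for _ in range(n_iters):                             # Step 2: simultaneous updates
        f_new = -eps * torch.logsumexp(logEp[None, :] + (g[None, :] - C) / eps, dim=1)
        g_new = -eps * torch.logsumexp(logE[:, None] + (f[:, None] - C) / eps, dim=0)
        f, g = f_new, g_new
    return f, g

# At the Sinkhorn fixed point, the log term in Eq. (4.4) vanishes for normalized
# flows, so the (biased) Sinkhorn divergence reduces to <E, f> + <E', g>:
# OT_eps = torch.dot(E, f) + torch.dot(Ep, g)
```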
This algorithm is known to converge in finite time [108; 43; 109]. The runtime of each iteration scales as approximately \(\mathcal{O}((M+N)^{2})\), and the algorithm converges in approximately
\(\frac{1}{\epsilon^{\beta}}\) iterations. This can be further improved to only \(\log\bigl{(}\frac{1}{\epsilon^{\beta}}\bigr{)}/\log\bigl{(}\frac{1}{\Delta}\bigr{)}\) iterations through the use of simulated annealing [110; 111], with a parameter \(\Delta\in(0,1)\). Beginning with a larger effective blurring radius \(\epsilon^{\prime}=2R\), after every iteration of the Sinkhorn algorithm, we decrease \(\epsilon^{\prime}\leftarrow\Delta\epsilon^{\prime}\), until finally reaching \(\epsilon^{\prime}=\epsilon\). Intuitively, we start with a large blurring scale \(R\), and slowly "zoom in" to a distance scale of \(\epsilon\) to refine the estimate of the Sinkhorn divergence.
However, the Sinkhorn divergence is biased, meaning that it is not generically the case that \(\text{OT}_{\epsilon}(\mathcal{E}^{\prime},\mathcal{E}^{\prime})=0\), which is an important property of the Wasserstein metric. We therefore use the de-biased Sinkhorn divergence, defined in Ref. [46]:
\[S_{\epsilon}(\mathcal{E},\mathcal{E}^{\prime})=\text{OT}_{\epsilon}(\mathcal{ E},\mathcal{E}^{\prime})-\frac{1}{2}\text{OT}_{\epsilon}(\mathcal{E},\mathcal{E})- \frac{1}{2}\text{OT}_{\epsilon}(\mathcal{E}^{\prime},\mathcal{E}^{\prime}). \tag{4.7}\]
The de-biased Sinkhorn divergence satisfies \(S_{\epsilon}(\mathcal{E},\mathcal{E})=0\) by construction. This can be easily realized algorithmically by simply substituting new de-biased Kantorovich potentials \(f\to F\) and \(g\to G\), where
\[F(x) =f(x)-\tilde{f}(x), \tag{4.8}\] \[G(y) =g(y)-\tilde{g}(y). \tag{4.9}\]
Here, the notation \(\tilde{f}(x)\) refers to the first Kantorovich potential corresponding to \(\text{OT}_{\epsilon}(\mathcal{E},\mathcal{E})\), and \(\tilde{g}(y)\) refers to the second Kantorovich potential corresponding to \(\text{OT}_{\epsilon}(\mathcal{E}^{\prime},\mathcal{E}^{\prime})\). For the rest of this paper, we refer to the de-biased Sinkhorn divergence as simply the Sinkhorn divergence wherever there is no chance for confusion.
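Building on the sinkhorn_potentials sketch above, the de-biasing of Eq. (4.7) can be illustrated as follows; at convergence and for normalized energy flows, the de-biased divergence reduces to \(\langle\mathcal{E},F\rangle+\langle\mathcal{E}^{\prime},G\rangle\).

```python
def debiased_sinkhorn(E, x, Ep, y, **kw):
    """Sketch of Eq. (4.7) using the sinkhorn_potentials sketch above: subtract
    the self-transport terms via the de-biased potentials F = f - f_tilde and
    G = g - g_tilde of Eqs. (4.8)-(4.9), so that S_eps(E, E) = 0."""
    f, g = sinkhorn_potentials(E, x, Ep, y, **kw)
    f_tilde, _ = sinkhorn_potentials(E, x, E, x, **kw)    # potentials of OT_eps(E, E)
    _, g_tilde = sinkhorn_potentials(Ep, y, Ep, y, **kw)  # potentials of OT_eps(E', E')
    F, G = f - f_tilde, g - g_tilde
    return torch.dot(E, F) + torch.dot(Ep, G), F, G
```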
In addition to returning the Sinkhorn divergence, this algorithm also returns the Kantorovich potentials, which allow us access to approximate gradients of the EMD, which can be used for shape parameter optimization. The gradients of the EMD with respect to the input measures can be read off of Eq. (4.4):
\[\nabla_{\mathcal{E}}\text{EMD}(\mathcal{E},\mathcal{E}^{\prime}) =F \Rightarrow \left\{\begin{array}{rcl}&\nabla_{E_{i}}\text{EMD}(\mathcal{E },\mathcal{E}^{\prime})=F(x_{i}),\\ &\nabla_{x_{i}}\text{EMD}(\mathcal{E},\mathcal{E}^{\prime})=E_{i}\nabla F(x_ {i}),\end{array}\right.\] \[\nabla_{\mathcal{E}^{\prime}}\text{EMD}(\mathcal{E},\mathcal{E}^ {\prime})=G \Rightarrow \left\{\begin{array}{rcl}&\nabla_{E^{\prime}_{j}}\text{EMD}( \mathcal{E},\mathcal{E}^{\prime})=G(y_{j}),\\ &\nabla_{y_{j}}\text{EMD}(\mathcal{E},\mathcal{E}^{\prime})=E^{\prime}_{j} \nabla G(y_{j}).\end{array}\right. \tag{4.10}\]
### Implementation Details
Before turning to our case study, we discuss the specific details of the Shaper implementation. Shaper uses the geomloss package as a backend for computing Sinkhorn divergences, as described in Sec. 4.3. By default, we use a relatively conservative annealing value of \(\Delta=0.9\), and choose \(\epsilon^{\beta}=10^{-3}\) for our estimates.23 Currently, the Shaper algorithm is implemented only for balanced observables, with either \(\beta=1\) or \(2\).
Footnote 23: See Secs. 5.2 and 5.3 for studies involving the choice of \(\epsilon\).
Once the EMD and Kantorovich potentials are estimated using the geomloss backend, the gradient updates in Step 4 are then handled using automatic differentiation and backpropagation in pytorch. One may select from a suite of common machine learning optimizers to perform the gradient update - by default, we use the Adam optimizer [112], with a learning rate of 0.01. Steps 3-5 can be easily parallelized over batches of hundreds of events at once. This is accomplished by treating the parameters \(\theta\) for each \(\mathcal{E}_{n}\) as completely independent. The Sinkhorn divergence can be computed on many events at once, and since the parameters are independent, we take advantage of highly parallelized pytorch operations to perform independent derivatives \(\nabla_{\theta_{n}}\) of the combined batch loss, \(\sum_{n}\text{EMD}(\mathcal{E}_{n},\mathcal{E}^{\prime}_{\theta_{n}})\), all at once.
When using Shaper, one must specify a maximum number of epochs, so that the program eventually halts even if convergence is not achieved. We set this number by default to be 500 epochs, though we observe that convergence happens far earlier than this. We define convergence through an early stopping procedure: if an event's EMD has not decreased in at least \(N_{\text{max}}\) epochs, stop early, and return the minimum EMD ever achieved during the training, and the parameters that achieved that minimum. When training on a large batch of events, we stop early when a certain fixed percentage (we choose 95% by default) have hit this condition - this is because there tends to be a handful of outlier events that take exceptionally long to converge. We choose \(N_{\text{max}}=25\) epochs by default. Both this, and the batch stopping percentage, are adjustable user parameters.
To facilitate Steps 1 and 2 of the Shaper algorithm, where the user input occurs, many common manifolds, such as sets of \(N\) points, hypercubes, simplices, and so on, are already pre-built. These pre-built "building-block" manifolds are listed in Table 4.24 From these, more complex manifolds can be easily constructed, such as the rings, disks, and ellipses described in Sec. 3.3. The \(\oplus\) and \(N\times\) operators make it very easy to define new composite observables from old ones and quickly build sophisticated shapes. Parameters can also be easily frozen for more customization. Furthermore, it is straightforward to define more building-block manifolds as needed within the framework. Each of the observables described in Sec. 3.3 is also pre-built into Shaper. When defining custom manifolds that use the \(N\)-points as a building block, the custom shape will automatically use the same IRC-safe \(k_{T}\)-clustering initialization scheme as the observables in Sec. 3.3, though it is possible to modify the initialization scheme as needed.
Footnote 24: Note that these manifolds are the _parameters_ from which shapes \(\mathcal{E}_{\theta}\) can be defined, not the shapes themselves. For example, a circular shape is defined by a point parameter (the center) and a positive real parameter (the radius), whereas the circle manifold \(S^{1}\) is used whenever an angle parameter is needed to define a shape.
Each manifold contains instructions on how to enforce parameter constraints (as needed for Step 5). For most manifolds, this usually involves clipping the values to be within a desired range, though for the simplex \(\Delta_{N}\), which occurs in almost every shape for which energy weights must be normalized, the enforcement is nontrivial. We use a simplex projection algorithm, inspired by the \(K\)-Deep-Simplex framework [113] and its nonlinear extension [114],
which solves the following projection problem [115]:
\[\min_{z_{i}}\left[\sum_{i}^{N+1}|z_{i}-y_{i}|^{2}\right]\text{ such that }\sum_{i}^{N+1}z_{i}=1,z_{i}\geq 0, \tag{4.11}\]
which finds the point \(z\in\Delta_{N}\) closest to a set of unnormalized values \(y_{i}\).
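As an illustration, the standard sorting-based Euclidean projection onto the probability simplex (e.g. Duchi et al.) solves exactly this problem; the sketch below is our own stand-in for the projection routine used in Shaper.

```python
import torch

def project_to_simplex(y):
    """Euclidean projection of a vector y onto the probability simplex,
    i.e. the minimizer of Eq. (4.11), via the standard sorting-based algorithm."""
    n = y.numel()
    u, _ = torch.sort(y, descending=True)
    css = torch.cumsum(u, dim=0) - 1.0
    ks = torch.arange(1, n + 1, dtype=y.dtype)
    rho = int((u - css / ks > 0).nonzero().max()) + 1   # largest index passing the test
    tau = css[rho - 1] / rho
    return torch.clamp(y - tau, min=0.0)

# Example: an unnormalized weight vector is pulled back onto Delta_2
z = project_to_simplex(torch.tensor([0.6, 0.9, -0.1]))
print(z, z.sum())   # components are >= 0 and sum to 1
```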
## 5 Empirical Studies with Jets
We now use the Shaper framework and custom shape observables in example collider physics analyses. We begin by benchmarking the Shaper algorithm, testing the performance of the Sinkhorn divergence and the optimization procedure by comparing the jet isotropy and \(N\)-subjettiness calculated using Shaper to other methods. Next, we calculate the ring, disk, and ellipse-based observables defined in Sec. 3.3 for a dataset of top and QCD jets, showing visualizations of each shape and analyzing the learned EMD's and parameters. Finally, we explore the potential to use shape observables for automatic pileup removal.
### Dataset
For our empirical studies, we use the top tagging benchmark of Refs. [48; 49], which is a dataset consisting of a top quark jet signal and a mixed light-quark/gluon jet background. These samples are generated in Pythia 8.2.15 [116] at 14 TeV, and then passed through Delphes 3.3.2 [117] to simulate the ATLAS detector. Jets are defined using the anti-\(k_{T}\) algorithm [64] in FastJet 3.1.3 [118] with \(R=0.8\). Only the leading jet in any event is considered, and we select jets satisfying \(p_{T,J}\in[475,525]\,\text{GeV}\) and \(|\eta_{J}|<2\). The signal and
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Manifold** & **Description** & **Constraints** \\ \hline \hline Trivial & The set \(\{0\}\) & None \\ \(N\)-Points, \(\mathcal{X}^{N}\) & Set of \(N\) points, \(x_{i}\), in \(\mathbb{R}^{2}\) & None \\ Positive Reals, \(\mathbb{R}^{N}_{\geq 0}\) & Set of \(N\) positive numbers, \(R_{i}\) & Clipped to \(R_{i}\geq 0\) \\ Hypercube, \(\Lambda^{N}\) & Set of \(N\) numbers, \(\lambda_{i}\), between 0 and 1 & Clipped to \(0\leq\lambda_{i}\leq 1\) \\ Circle / Torus, \(S^{N}\) & Set of \(N\) angles, \(\phi_{i}\) & Wrapped to \(-\pi\leq\phi_{i}<\pi\) \\ \(N\)-Simplex, \(\Delta_{N}\) & Set of \(N+1\) numbers, \(z_{i}\geq 0\), summed to 1 & Simplex projection \\ \hline \hline \end{tabular}
\end{table}
Table 4: The building-block manifolds implemented in Shaper, from which parameters can be defined and more complex manifolds can be built. For each manifold, a description is given, and constraint instructions are provided to project points onto the manifold. Note that we do not distinguish here between “positive” reals versus “non-negative” reals, since for continuous parameterizations there is no difference between including zero and getting arbitrarily close to zero.
background samples are generated using \(t\bar{t}\) and QCD dijet events respectively. For signal top jets, a top parton, plus its decay products, are required to be within \(\Delta R=0.8\) of the jet axis. All events are translated such that the jet axis is at (0, 0) on the rapidity-azimuth plane.
In this dataset, multiple parton interactions and pileup have not been included. To mock up the effects of pileup contamination in data, we add in pileup "by hand". To each event, we add in \(N\) particles randomly distributed in an \(R\times R=0.8\times 0.8\) square centered at the origin on the rapidity-azimuth plane, where \(N\) is Poisson-distributed with a mean of 75. Each particle is given an energy weight randomly sampled from a normal distribution with mean \(\frac{E_{\rm PU}}{N}\) and standard deviation \(\frac{E_{\rm fluct}}{N}\), where we take \(E_{\rm PU}\), which represents the total amount of pileup radiation, to be uniformly distributed between 50 and 250 GeV, and \(E_{\rm fluct}\), which represents per-particle fluctuations, to be 25 GeV.25 Many refinements of this simplistic mockup could be considered, but this suffices to show qualitative features of Shaper and the shape observables defined in Sec. 3.3. For the purposes of calculating shape observables, all jets are normalized such that \(E_{\rm tot}=1\), though we save the original total energy of each jet for the purpose of restoring units.
Footnote 25: A floor of 0 GeV is set to avoid negative energies.
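A minimal NumPy sketch of this mock-up procedure is given below; the function name and interface are ours, and the defaults follow the numbers quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_pileup(points, weights, R=0.8, mean_npu=75,
               e_pu_range=(50.0, 250.0), e_fluct=25.0):
    """Sketch of the pileup mock-up of Sec. 5.1: add N ~ Poisson(75) particles
    uniformly in an R x R square centered on the jet axis, with energies drawn
    from a normal distribution of mean E_PU/N and width E_fluct/N, floored at 0."""
    n = max(int(rng.poisson(mean_npu)), 1)               # guard against n = 0
    pu_points = rng.uniform(-R / 2, R / 2, size=(n, 2))
    e_pu = rng.uniform(*e_pu_range)                      # total pileup energy for this event
    pu_weights = np.clip(rng.normal(e_pu / n, e_fluct / n, size=n), 0.0, None)
    return (np.concatenate([points, pu_points]),
            np.concatenate([weights, pu_weights]))
```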
An example top jet is shown in Fig. 2, before and after pileup is added, to illustrate this procedure. This contamination procedure is performed for the benchmarking studies in Secs. 5.2 and 5.3 and the pileup studies in Sec. 5.7. For the jet substructure studies in Secs. 5.5 and 5.6, we do not add any pileup, and instead we require that the jets have an
Figure 2: Example of a top jet (signal) event in our dataset, (a) before and (b) after the pileup mock-up procedure described in Sec. 5.1. Each red point is a single particle, and the size of the point is proportional to the relative energy of the particle.
invariant mass \(m_{J}\in[145,205]\) GeV to more closely match the analysis conditions of Ref. [68].
### Benchmarking Sinkhorn: Jet Isotropy
We first use Shaper to compute the jet isotropy for the purposes of benchmarking the Sinkhorn divergence for runtime and accuracy. Jet isotropy is an ideal benchmark since the minimization is trivial; for balanced isotropy, the parameterized manifold consists only of a single event. Therefore, no gradient descent is necessary, and this is purely a test of the Sinkhorn approximation. This can be viewed as a proxy for the per-epoch runtime and accuracy of the Shaper algorithm.
We compute the \(n\times n\) jet isotropy, which is the isotropy given by computing the EMD to the uniform event:
\[\mathcal{U}_{n\times n}\sim\text{ particles in an $R\times R$ square, arranged in a uniform $n\times n$ grid}, \tag{5.1}\]
as defined in Ref. [40]. We do this using Shaper with many different values of \(\epsilon\), and compare to the same calculation done using the Python Optimal Transport (POT) [119] implementation of the EMD, which was used in Refs. [40; 41].
In Fig. 3, we show the results of a runtime vs. accuracy study. We compute the jet isotropy of 1000 top jets for several different values of \(n\), \(\epsilon\), and for \(\beta=1\) and \(2\). In Fig. 4, we show the learned \(16\times 16\) jet isotropies for \(\epsilon^{\beta}=10^{-3}\). For this experiment, we use a fixed (conservative) annealing parameter of \(\Delta=0.9\). Shaper allows for events to be computed in parallelized batches; we run the entire computation in a single batch on a NVIDIA A100, and report the total runtime of the entire batch.26
Footnote 26: In principle, the only limiting factor to how many events Shaper can process at once is the ability to fit the events on a single GPU.
We see from Fig. 3 that the accuracy of the Sinkhorn divergence, as an estimator for the Wasserstein metric, begins to saturate at \(\epsilon^{\beta}=10^{-3}\), and that there is no substantial gain from choosing a smaller \(\epsilon\). Picking \(\epsilon^{\beta}=10^{-3}\) ensures percent level accuracy for \(\beta=1\), and few-percent level accuracy for \(\beta=2\), which we can see visually in Fig. 4. Furthermore, for larger values of \(n\times n\), the accuracy is mostly independent of \(n\). Note that for \(n=16\), it takes under 1 second to process 1000 events - this implies it is possible to process millions of events on the order of an hour, with further speedups possible by choosing a more aggressive value for the annealing parameter \(\Delta\).
### Benchmarking Optimization: \(N\)-Subjettiness
We perform a second benchmark investigation using \(N\)-subjettiness. Unlike the jet isotropy, \(N\)-subjettiness requires a nontrivial minimization. This allows us to use it to estimate the fidelity of the Shaper algorithm's optimization step.
It is well known that the ratio \(\tau_{32}\equiv\tau_{3}/\tau_{2}\) is a good discriminant between top and QCD jets, as top jets tend to have 3 prongs more often than QCD jets, and thus have lower expected values of \(\tau_{32}\)[68]. We compute this ratio for several different values of \(\epsilon\) to see
if any discrimination power is lost (or gained) in the \(\epsilon\)-approximation. Within the Shaper framework, the \(N\)-subjettiness is given by the following manifold of parameterized events:
\[\mathcal{E}_{x_{i},z_{i}}(x)=\sum_{i=1}^{N}z_{i}\,\delta(x-x_{i}),\qquad x_{i}\in\mathcal{X},z_{i}\in\Delta_{N-1}. \tag{5.2}\]
As a baseline, we compute \(N\)-subjettiness using FastJet 3.4.0 with FJcontrib 1.050. The results of this study are shown in Fig. 5(a) as ROC curves, computed using an NVIDIA A100 GPU. We see that the Sinkhorn approximations have roughly the same discriminatory power as the baseline for \(\epsilon\sim 10^{-3}\). In Fig. 6, we show the distributions of \(\tau_{32}\) for both datasets computed with FastJet and Shaper with \(\epsilon=10^{-3}\), and see good agreement between the two methods. In order to gauge the impact of float precision on our estimates, we repeat the ROC curve calculation using only a CPU, which is shown in Fig. 5(b). For values of \(\epsilon\ll 10^{-3}\), we see that on the GPU architecture the performance actually begins to degrade, due to the accumulation of floating-point errors at the GPU's lower machine precision during the optimization, while it saturates on the CPU. Therefore, it is recommended to use \(\epsilon^{\beta}\sim 10^{-3}\), as this is the most stable compromise between fidelity and machine precision.
Figure 3: The (a) fidelity and (b) runtime of Shaper when computing the \(n\times n\) jet isotropy, for different values of \(\epsilon\), \(n\), and \(\beta\). The fidelity is defined as the ratio of the Sinkhorn divergence to the “true” Wasserstein metric, as computed using the POT library, across a batch of 1000 events. The runtime is the total time to evaluate the Sinkhorn divergences for the entire 1000 event batch, as computed using a NVIDIA A100. An annealing parameter of \(\Delta=0.9\) is used globally.
Figure 4: Distributions of the learned \(16\times 16\) jet isotropy, for \(\beta=1\) (red) and \(\beta=2\) (blue), as calculated using the POT library (filled) and Shaper (points) with \(\epsilon^{\beta}=10^{-3}\).
Figure 5: A ROC curve showing the performance of \(\tau_{32}=\tau_{3}/\tau_{2}\) as a discriminator between top (signal) and QCD (background) jets, for several different values of \(\epsilon\), as calculated using (a) an NVIDIA A100 and (b) only on CPU. A baseline curve, calculated using the \(N\)-subjettiness routines in FastJet 3.4.0, is shown in black.
### Hearing Gradients
Shaper can be used to not only estimate the shapiness of events, but also take derivatives of the shapiness with respect to the event. As discussed in Sec. 4.3, this is completely automatic, since the gradients with respect to the energy flow are given manifestly by the Kantorovich potentials, allowing us to see precisely how our EMD calculations depend on the energies and positions of particles in an event.
Reading off of Eq. (4.10), we obtain an expression for the gradient of the EMD with respect to the energy \(E_{i}\) of particle \(i\):
\[\nabla_{E_{i}}\mathrm{EMD}(\mathcal{E},\mathcal{E}^{\prime}_{\theta})=F(x_{i}). \tag{5.3}\]
If the gradient at particle \(i\) is negative, then increasing the energy of that particle will decrease the EMD, making the event more \(\mathcal{M}\)-like. By adding "ghost" particles to \(\mathcal{E}\), one can probe the energy dependence of the EMD at any point in \(\mathcal{X}\). Similarly, we can take the gradient of the EMD with respect to the position \(x_{i}\) of particle \(i\):
\[\nabla_{x_{i}}\mathrm{EMD}(\mathcal{E},\mathcal{E}^{\prime}_{\theta})=E_{i} \nabla_{x_{i}}F(x_{i}). \tag{5.4}\]
This gradient (times \(-1\)) tells us where to move the particle \(i\) to decrease the EMD. Both the energy and position gradients answer the question, "If I want to decrease the EMD (make my event look _more_ like my shape), what should I do to a particle at site \(x_{i}\)?" Moreover, because of the reparameterization invariance of the energy flow density, the energy and position gradients are both valid ways to change the EMD: one can either change the energy of
Figure 6: Distributions of the learned \(\tau_{32}\), for top jets (red) and QCD jets (blue), as calculated using FastJet 3.4.0 (filled) and Shaper (points) with \(\epsilon^{\beta}=10^{-3}\).
particles at the location \(x_{i}\), move the particles at \(x_{i}\) somewhere else, or some combination of both.
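In practice, these gradients can be read off directly from automatic differentiation through the Sinkhorn estimate of the EMD. The toy sketch below (our own, assuming the geomloss SamplesLoss interface of Sec. 4.4) differentiates the EMD between a small synthetic event and a fixed reference shape with respect to the event's energies and positions, recovering Eqs. (5.3) and (5.4).

```python
import torch
from geomloss import SamplesLoss

# Toy event: 5 particles with normalized energies, plus a fixed reference shape
# (here just a random point cloud standing in for a sampled ring/disk/ellipse).
E = torch.tensor([0.3, 0.25, 0.2, 0.15, 0.1], requires_grad=True)
X = torch.tensor([[0.0, 0.0], [0.1, 0.2], [-0.2, 0.1],
                  [0.3, -0.1], [-0.1, -0.3]], requires_grad=True)
Ep = torch.full((100,), 1.0 / 100)
Y = 0.4 * torch.randn(100, 2)

# Assumed geomloss interface: SamplesLoss("sinkhorn", ...) on weighted point clouds,
# differentiable in both the weights and the positions.
emd = SamplesLoss("sinkhorn", p=1, blur=1e-3, scaling=0.9, debias=True)
loss = emd(E, X, Ep, Y)
loss.backward()

print(E.grad)   # ~ F(x_i): the energy gradients of Eq. (5.3)
print(X.grad)   # ~ E_i * grad F(x_i): the position gradients of Eq. (5.4)
```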
In Figs. 7 and 8, the gradients from Eqs. (5.3) and (5.4) are plotted for an example top jet, for the 3-subjettiness and the \(16\times 16\), \(\beta=1\) jet isotropy, respectively. Using Fig. 7, we can see what parts of the event contribute to the 3-subjettiness - the three large clusters contribute negatively to the EMD (make the event look more like 3 subjets), while the rest of the event contributes positively to the EMD (makes the event deviate from 3 subjets). Similarly, we see from Fig. 8 that the overdensity of energy at the center of the event makes it less isotropic. In both figures, the vector quiver plot tells us which way particles should "flow" (or flow against) to change the shape.
There are many potential applications of calculating gradients of a shape with respect to an energy flow. In experimental contexts, for example, one can exploit the fact that the gradients are practically instantaneous to compute in order to perform simple Gaussian error propagation of detector-induced uncertainties in particle energies and positions [120; 121]. This avoids having to do expensive re-sampling and recalculation of the event shape. Phenomenologically, these gradients can be used to probe the sensitivity of observables to certain radiation patterns. For example, the sensitivity of an observable to pileup can be measured by taking
Figure 7: For an example top jet, (a) gradients of the 3-subjettiness with respect to the particle weights, \(\nabla_{E_{i}}\)EMD, and (b) gradients of the 3-subjettiness with respect to the particle positions, \(\nabla_{x_{i}}\)EMD, are shown. In (a), increasing the particle energy in red regions will increase the 3-subjettiness (look less like 3 subjets), and increasing particle energy in blue regions will decrease the 3-subjettiness (look more like 3 subjets). In (b), moving particles along or against the arrows will increase or decrease the 3-subjettiness, respectively, with the arrow's shading indicating the relative magnitude of the change.
derivatives with respect to the pileup scale [95], which can be numerically realized in Shaper.
### Hearing Jets Ring (and Disk, and Ellipse)
We next use Shaper to realize the custom shape observables defined in Sec. 3.3, starting with some visualizations of these shape observables on an example event.
The three base observables we consider are the ringiness, diskiness, and ellipsiness, as defined in Sec. 3.3. We calculate the shape observables \(N\times\mathcal{O}\) (the \(N\)-ringiness, the \(N\)-diskiness, and \(N\)-ellipsiness) and \(N\times\mathcal{O}^{\tau}\) (the \(N\)-(ring+point)iness, the \(N\)-(disk+point)iness, and \(N\)-(ellipse+point)iness), as defined in Sec. 3.3 with \(\beta=1\). We also consider, for comparison, the \(N\)-subjettiness, \(\tau_{N}\), as defined in Eq. (5.2). We use Shaper to evaluate all of these shape observables on a single top jet event from our dataset, restricted to \(m_{J}\in[145,205]\,\mathrm{GeV}\), though without any pileup contamination. Each extended shape is sampled with 100 points, with \(\epsilon=10^{-3}\) and \(\Delta=0.9\).
Geometric visualizations of each of the 21 event shapes, as evaluated on an example top jet, can be found in Figs. 9, 10, 11, and 12. From these visualizations, we note some interesting qualitative features of these shape observables. First, the point variants of each shape correspond more closely to clusters of energy. For example, while the \(N=3\) uniform rings (Fig. 10), disks (Fig. 11), or ellipses (Fig. 12) do not necessarily capture the regions of highest energy, the point variants of each shape align very well with the \(N\)-subjettinesses of Fig. 9. Correspondingly, the EMD's of the shapes are significantly reduced for the point variants - this suggests that this event in particular is not well modeled by uniform radiation profiles, but rather looks more like localized spikes with radiation clouds around them. Moreover, we can see that circular radiation clouds do not model the event as well as elliptical ones - this is
Figure 8: The same as Fig. 7, but for the \(16\times 16\) jet isotropy with \(\beta=1\).
reflected in the \(N\)-ellipsiness in Fig. 12, which learns extremely eccentric line-like structures in an attempt to best model the event, which results in lower EMD's than the corresponding \(N\)-diskinesses in Fig. 11. We also note that the \(N\)-subjettiness and the point shape variants all qualitatively find the same jet centers, suggesting that these shapes can be treated as perturbations to \(N\)-subjettiness.
### Shapiness and Shape Parameters
Continuing the discussion in Sec. 5.5, we now use Shaper to compute distributions of these shape observables on a large sample of top and QCD jets, restricted to \(m_{J}\in[145,205]\,\mathrm{GeV}\), though without any pileup contamination. In particular, we show the utility of both the _shapiness_\(\mathcal{O}(\mathcal{E})\) and the _shape parameters_\(\theta(\mathcal{E})\) in describing the geometry of jets.
For each histogram in this section, we calculate an AUC score, showing the efficacy of a cut on that observable as a top/QCD discriminant. Note that these AUCs are for cuts on a single feature - the discrimination power can in principle be improved by transforming these features and combining many features per jet.
As a representative sample of the "shapiness" observable, we plot the \(N\)-ellipsiness and \(N\)-(ellipse+point)iness of our top and QCD jet samples in Fig. 13. The EMD distributions for the ring and disk variants are qualitatively similar to the ellipse, and thus our discussion of these distributions carries over to them. We notice that the \(N\)-(ellipse+point)iness is not much lower than the corresponding \(N\)-ellipsiness, indicating that (at least in the absence of pileup) subjets can indeed be approximated as roughly uniform. In the test event visualization in Fig. 12, we can see qualitatively that the found ellipses have roughly the same center, comparing the \(N\)-ellipsiness and its corresponding point variant. We also note that the EMD decreases with \(N\), as expected.
In Fig. 14, we show the _ratio_ of the \(N=3\) shapiness to the \(N=2\) shapiness for each
Figure 9: The (a) 1-, (b) 2-, and (c) 3-subjettiness of an example top jet event. Subjets are represented by a purple “\(\times\)”, with size proportional to the subjet’s energy weight.
class of shape observable. For \(N\)-subjettiness, which we show for comparison, this is the classic observable \(\tau_{32}\), which is known to be a good top vs. QCD jet discriminant [68]. First, we observe that the uniform ring, disk, and ellipse observables, along with the point variants, each have an AUC of approximately 0.75, which is still considerably less than \(\tau_{32}\)'s AUC of 0.825. One should expect that, using just the EMD alone, a more complexly parameterized shape should have a lower AUC than a simpler shape. In the extreme case, where the parameterization is flexible enough to reproduce any event in the dataset, the EMD will always be zero and have no discriminatory power. However, this is not the end of the story - shape observables also include their learned parameters, and this information also contains multivariate discriminatory power. For the hypothetical infinitely flexible shape, the parameters contain the full event information even though the EMD is zero, and thus the combination of the shapiness and shape parameters together contains more information (and thus more discriminatory power) than just a simpler shape, which is not the case for the \(N\)-subjettiness alone. We leave a full multivariate analysis of the
Figure 10: Top row: the (a) 1-, (b) 2-, and (c) 3-ringiness of an example top jet event. Bottom row: the (d) 1-, (e) 2-, and (f) 3-(ring+point)iness of the same top jet event. The point is represented by a “\(\times\)”, with size proportional to its energy weight.
shape parameters for jet classification for potential future work.
We now move on to analyze the learned shape parameters themselves. In Fig. 15, the learned radius parameters of the \(N=1\) shapes are shown.27 Considering the 1-(disk+point)iness and 1-(ellipse+point)iness, which, as discussed earlier, we can think of as a "jet algorithm" looking for soft clusters of radius \(R\) with collinear radiation inside, we see that both learn an average radius of approximately \(R=0.4\), which is approximately half the radius of the original AK8 jet. One can think of the uniformity condition as imposing an effective radius cutoff. If one assumes that the energy density falls to zero with distance from the jet center, then there is some critical radius for which any larger uniform shape is an increasingly worse approximation, and in principle this critical radius can be computed in perturbative QCD.
Footnote 27: For ellipses, we define an _effective radius_, taken to be \(\sqrt{ab}\). This is the radius of a circle with the same area as the ellipse.
For the point shape variants of the disk and ellipse, we note that there is only a small
Figure 11: Top row: the (a) 1-, (b) 2-, and (c) 3-diskiness of an example top jet event. Bottom row: the (d) 1-, (e) 2-, and (f) 3-(disk+point)iness of the same top jet event. The point is represented by a “\(\times\)”, with size proportional to its energy weight.
Figure 12: Top row: the (a) 1-, (b) 2-, and (c) 3-ellipsiness of an example top jet event. Bottom row: the (d) 1-, (e) 2-, and (f) 3-(ellipse+point)iness of the same top jet event. The point is represented by a “\(\times\)”, with size proportional to its energy weight. The effective radius of the ellipses is given by the geometric mean of the two axes, \(\sqrt{ab}\).
difference between the top and QCD jets - that is to say, the soft component of a top jet is about as "wide" as the soft component of a QCD jet, as seen by disks and ellipses. However, as top jets tend to have their energy distributed across 3 prongs, rather than 1 prong as in QCD jets, we should expect clusters of radiation away from the central one for top jets. Since rings are thin, we should expect rings to be able to better capture these localized prongs than area-filling disks and ellipses, which can be visualized in Fig. 10. This is especially apparent in Fig. 15(d) for the 1-(ring+point)iness shape, where the learned radii for top jets are significantly larger than for QCD jets. Interestingly, we note a spike at \(R=0\) in all of these distributions. This means that Shaper has determined that the best shape for these events is in fact point-like, and the observable reduces to the \(N\)-subjettiness. This spike is reduced for the ellipse shapes, which implies that there are events better modeled by either an eccentric ellipse or a point than by a uniform disk.
As a last example of the information that shape observables and shape parameters can extract, we look at the learned eccentricity of ellipses. In Fig. 16, we show the _minimum eccentricity_ across the \(N\) ellipses for the 1-, 2-, and 3- ellipsiness and (ellipse+point)iness for
Figure 13: Distributions of the learned (a,d) 1-, (b,e) 2-, and (c,f) 3-ellipsiness (top row) and (ellipse+point)iness (bottom row) of the top (red) and QCD (blue) jet sample.
Figure 14: Top row: Distributions of the \(N=3\) to \(N=2\) ratio of the (a) \(N\)-subjettiness of the top (red) and QCD (blue) jet sample. Middle row: The same ratio, but for the \(N\)-(b) ringiness, (c) diskiness, and (d) ellipsiness. Bottom row: The same, but for the \(N\)- (e) (ring+point)iness, (f) (disk+point)iness, and (g) (ellipse+point)iness.
Figure 15: Top row: distributions of the learned radius parameter of the 1- (a) ringiness, (b) diskiness, and (c) ellipsiness of the top (red) and QCD (blue) jet sample. Bottom row: the radius parameter of the 1- (d) (ring+point)iness, (e) (disk+point)iness, and (f) (ellipse+point)iness of the same samples. For the ellipse, the effective radius is given by the geometric mean of the two axes.
our top and QCD jet datasets, where the eccentricity is given by \(\sqrt{1-\min(a,b)/\max(a,b)}\). We can immediately see that a value of \(\min(e)=0\) is rare - our jets are much better described by ellipses than by disks. With increasing \(N\), the distributions of \(\min(e)\) tend to shift leftwards for both the top and QCD jet samples. This indicates that the \(N\)-th ellipse, which probes more substructure than the \((N-1)\)-th ellipse, is often less eccentric.
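For reference, a minimal sketch of this eccentricity definition; the axis values below are hypothetical learned ellipse parameters for a single jet, not taken from the actual fits.

```python
import numpy as np

def eccentricity(a, b):
    """e = sqrt(1 - min(a, b) / max(a, b)), as used in the text."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.sqrt(1.0 - np.minimum(a, b) / np.maximum(a, b))

# Hypothetical learned (a_i, b_i) axes for the N = 3 ellipses of a single jet.
a_axes = np.array([0.45, 0.20, 0.12])
b_axes = np.array([0.30, 0.18, 0.05])

e = eccentricity(a_axes, b_axes)
print("eccentricities:", e)
print("min(e):", e.min(), "  effective radii sqrt(ab):", np.sqrt(a_axes * b_axes))
```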
The eccentricity distributions for top jets are slightly different from those of QCD jets. This is a nontrivial result: top jets and QCD jets have very different prong structures (as demonstrated in Fig. 14), so the angular distribution of radiation in the rapidity-azimuth plane is less circular for top jets than for QCD jets. Thus, jet eccentricity could be used as a discriminant.
The above plots show just a few examples of the information that can be extracted using generalized shape observables. It is important to emphasize there is much more information beyond what we have shown here available for analysis, such as the learned weights of the points versus uniform shapes, the radii of shapes beyond the leading jet, and so on. This is to
Figure 16: Distributions of the learned minimum eccentricities for (a,d) 1-, (b,e) 2-, and (c,f) 3-ellipsiness (top row) and (ellipse+point)iness (bottom row) of the top (red) and QCD (blue) jet sample.
say nothing of the multivariate information contained in the correlations between the EMDs and shape parameters, both within the same shape and between different shapes. We leave a full quantitative study of the information contained within shapes for potential future work.
### Pileup-Mitigating Shapes
Finally, we show a use case for shape observable composition by applying our custom shape observables to the task of pileup mitigation. We consider the top jet mass spectrum as an example, which is sharply peaked near \(m_{\rm top}^{2}\approx(175\,\mathrm{GeV})^{2}\). In our modified dataset, jets are contaminated with uniform pileup radiation, which significantly biases and smears the top mass peak. There exist many techniques to "groom" away extra contamination, such as area subtraction and soft drop [93; 94; 95; 96; 97], but these techniques often have external hyperparameters characterizing the contamination density (such as \(z_{\rm cut}\) in soft drop).
In order to mitigate this bias even when the contamination density is unknown, we propose to build shapes that _factorize_ the event into a uniform component, \(\mathcal{U}\), and a structure component, \(\mathcal{E}\). As described in Sec. 3.3.5, we accomplish this by building observables of the form:
\[\mathcal{O}_{i}^{\mathcal{I}}=\mathcal{O}_{i}\oplus\mathcal{I}, \tag{100}\]
where \(\mathcal{O}_{i}\) is any shape observable and \(\mathcal{I}\) is the jet isotropy.28 We can then use the energy flow associated to \(\mathcal{O}_{i}\) to calculate a "pileup-corrected" mass, discarding the energy flow associated to \(\mathcal{I}\). The weight \(z_{2}\) associated with \(\mathcal{I}\) is then an estimate of the fraction of event energy due to pileup - the contamination density is an extracted observable, _not_ an assumed hyperparameter!
Footnote 28: Unlike Refs. [40; 41], we sample the isotropy using random points on the plane, with resampling each epoch, rather than use a uniform grid, for better statistical coverage of the plane.
For this study, we consider \(\mathcal{O}_{i}^{\mathcal{I}}\), where \(\mathcal{O}_{i}\) is either the 3-subjettiness or the 3-(disk+point)-iness, motivated by the 3-prong nature of top jets. We also consider the uncorrected observables, \(\mathcal{O}_{i}\), without the uniform background for comparison. For each observable, the "shape-jet mass" \(m_{J}\) is calculated as the invariant mass of the sum of the (massless) four-vectors of the particles comprising the shape. This can be numerically approximated by sampling the disk with 200 particles. To calibrate the shape-jet masses, we calculate the shape-jet mass on top jets without the addition of pileup for each observable. In Fig. 17a, we plot the shape-jet mass corresponding to each observable as evaluated on top jets. Each observable slightly undershoots the top mass peak - the exact discrepancies are given in Table 5. These values are used to shift the means of each shape-jet mass curve to the right for the remainder of this study, such that the means give the truth mean top mass when evaluated on uncontaminated jets.
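To make the shape-jet mass concrete, here is a minimal sketch of one way it could be computed; the exact sampling and coordinate conventions are assumptions on our part, and the disk radius and total \(p_{T}\) below are made up rather than learned values.

```python
import numpy as np

def shape_jet_mass(pt, rap, phi):
    """Invariant mass of the sum of massless four-vectors built from (pT, y, phi)."""
    px, py = pt * np.cos(phi), pt * np.sin(phi)
    pz, E = pt * np.sinh(rap), pt * np.cosh(rap)
    m2 = E.sum() ** 2 - px.sum() ** 2 - py.sum() ** 2 - pz.sum() ** 2
    return np.sqrt(max(m2, 0.0))

# Approximate a uniform disk of radius 0.4 centred at (y, phi) = (0, 0) with 200 samples,
# carrying a made-up total pT of 500 GeV split evenly among the samples.
rng = np.random.default_rng(1)
r = 0.4 * np.sqrt(rng.uniform(size=200))
theta = rng.uniform(0.0, 2.0 * np.pi, size=200)
rap, phi = r * np.cos(theta), r * np.sin(theta)
pt = np.full(200, 500.0 / 200)

print(f"shape-jet mass: {shape_jet_mass(pt, rap, phi):.1f} GeV")
```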
In Fig. 17b, we show the result of an empirical study using our pileup-mitigating shapes on our pileup-contaminated top jet sample. We also plot, for comparison, the mass corresponding to the _uncorrected_ observables \(\mathcal{O}_{i}\), with no uniform background. These distributions are all calibrated using the values in Table 5. We see clearly that the two pileup-mitigating shapes
indeed remove a significant amount of bias due to pileup, and bring \(m_{\rm jet}\) much closer to \(m_{\rm top}\), suggesting that this is a viable strategy for pileup removal. Note, interestingly, that the distribution for the uncorrected 3-(disk+point)iness jet mass is similar to the uncorrected jet mass - this is because to best approximate the entire event, disks will grow large in an attempt to capture all of the pileup, since the disks themselves are also uniform radiation patterns.
\begin{table}
\begin{tabular}{|c|c|} \hline \hline
**Observable** & **Bias [GeV]** \\ \hline \hline
3-(Disk+Point)iness+Pileup & \(-18.7\) \\
3-Subjettiness+Pileup & \(-23.1\) \\
3-(Disk+Point)iness & \(-5.5\) \\
3-Subjettiness & \(-12.5\) \\ \hline \hline \end{tabular}
\end{table}
Table 5: The difference between the mean shape-jet mass and the mean of the truth top jet distribution in Fig. 17a, for each of the four observables under consideration. These are used to calibrate the calculated mass distributions for each shape.
Figure 17: The shape-jet mass for several different choices of shape observable, evaluated on (a) top jets without pileup, used to derive the calibration, and (b) top jets with pileup, calibrated using Table 5. Two different shape observables are used – the 3-jettiness in blue and 3-(disk+point)iness in red. For each shape observable, the ordinary version is plotted with dark dashed lines while the pileup-mitigating variant is plotted with bright solid lines.
In Fig. 18a, we show the distribution of extracted pileup energy fraction values, corresponding to the \(z_{2}\) shape parameter. We see that there is qualitative agreement between the extracted pileup energy fractions and the true values, though the shapes tend to overestimate the pileup density slightly. This is consistent with Fig. 17b, where the shapes slightly underestimate the top mass without the calibration. In Fig. 18b, we use \(z_{2}\) to compute the value of the extracted mass bias \(\Delta\hat{m}_{\rm PU}^{2}=m_{\rm jet}^{2}-\hat{m}_{\rm top}^{2}(z_{2})\), compared to the "true" mass bias obtained by comparing the jet mass before and after contamination. To approximate the resolution of our estimator, we take \(\hat{\sigma}^{2}=\text{Var}\left[\sqrt{\Delta\hat{m}_{\rm PU}^{2}}-\sqrt{\Delta m_{\rm PU}^{2}}\right]\) as the average Gaussian uncertainty. We can see that both pileup-mitigating observables estimate the pileup mass bias correctly on average, and that the 3-(disk+point)iness+pileup has a slightly better resolution than the 3-subjettiness+pileup, which is to be expected as the former has more freedom to better capture features of jets.
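A minimal sketch of this resolution proxy; the residual arrays below are synthetic stand-ins, not the actual extracted and true mass shifts.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic stand-ins for the per-jet pileup-induced mass shifts, in GeV:
true_shift = rng.uniform(40.0, 120.0, size=5000)                  # sqrt(Delta m_PU^2), truth
extracted_shift = true_shift + rng.normal(0.0, 12.0, size=5000)   # sqrt(Delta m-hat_PU^2), estimate

sigma_hat = np.std(extracted_shift - true_shift)   # standard deviation of the residuals
print(f"resolution proxy sigma-hat = {sigma_hat:.1f} GeV")
```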
Figure 18: (a) The learned pileup fractions \(z_{2}\) for the 3-jettiness in blue and 3-(disk+point)iness in red. The actual fraction of energy due to pileup is in black. (b) The mass corrections to the top mass due to pileup, for the 3-jettiness in blue and 3-(disk+point)iness in red. On the \(x\)-axis, the change in jet mass \((\Delta m_{\rm PU}^{2})^{1/2}\) before and after contamination is plotted. On the \(y\)-axis, the value of \((\Delta\hat{m}_{\rm PU}^{2})^{1/2}\) as extracted from the corrected top mass is plotted for a small test set. As a proxy for the resolution of the estimate, we take \(\hat{\sigma}\) to be the standard deviation of the residuals.
## 6 Conclusions and Outlook
In this work, we generalized the notion of event and jet shapes into _shape observables_, which are a wide class of observables that can probe the geometric structure of collider events. We introduced a natural measure-theoretic language for describing events as energy flows, which encodes properties such as IRC safety as inherent topological information. Using this construction, we showed that the Wasserstein metric arises naturally from the requirement of geometric faithfulness. This justifies, post hoc, its past use to define observables, IRC safety, and geometry on the space of events [25; 26; 39; 40].
We showcased how to define _arbitrary_ shape observables, which can be specified solely by parameterizing the manifold of shapes one wants to fit to. Importantly, we can extract both the "shapiness" value (how much the event looks like the shape) and the "shape parameters" (which shape is the best fit). This is a very intuitive picture - if one wants to ask how "ellipsy" a jet is, for example, all one needs to do is parameterize the space of ellipses to define the ellipsiness observable. The Shaper framework makes it easy to build new shape observables and evaluate them on events. It leverages the Sinkhorn approximation of the Wasserstein metric to enable fast, parallelizable, and differentiable \(\epsilon\)-approximations of shape observables.
With the Shaper framework in place, the natural question arises: _Which shapes should we study?_ We introduced a few examples of shapes motivated by jet substructure, including rings, disks, and ellipses. Using the Shaper prescription, it is easy to modify these shapes to probe additional jet structures, by including \(\delta\)-functions to capture collinear radiation or uniform backgrounds to capture pileup. Our empirical studies showed how these custom observables might serve as an effective and simple pileup mitigation strategy in the study of top jets, where the amount of pileup does not need to be assumed beforehand. Many further analyses are possible: for example, in heavy quark decays, collinear bremsstrahlung is suppressed (the so-called "dead-cone effect" [90; 91; 92]), leading to ring-like or annulus-like, rather than point-like, jets. The list of observables we have provided is certainly non-exhaustive, and we hope that the Shaper framework can be used by the community to explore a wide range of brand new observables.
While Shaper can be used to analyze _any_ shape observable, and thus has broad generality, it is not necessarily the fastest way to compute or approximate any _specific_ observable. For example, much faster, exact algorithms for computing \(N\)-jettiness and related observables are available in the FastJet[118] package, which make use of the fact that the double optimization in Eq. (10) can be simplified. Most of the new observables we have defined appear to be irreducible, though, making their analysis and theoretical treatment complicated. In principle, this class of observables is perturbatively calculable in QCD, at least numerically. We anticipate that these generalized shapes might provide a useful groundwork for further theoretical studies of QCD distributions.
## Code and Data
The code for the general-use Shaper framework can be found at [https://github.com/rikab/SHAPER](https://github.com/rikab/SHAPER), along with installation instructions. This directory also contains the analysis code for the top/QCD plots in Sec. 5. In this directory is also a tutorialized example notebook, showing how to use the Shaper framework.
Our dataset is a modified version of the top/QCD jet benchmark dataset described in Refs. [48; 49]. The original version of this dataset can be found at [https://desycloud.desy.de/index.php/s/llbX3zPLhazgPJ6](https://desycloud.desy.de/index.php/s/llbX3zPLhazgPJ6).
## Acknowledgments
Special thanks goes to Samuel Alipour-ford for helping to come up with the acronym Shaper, and for useful discussions about pileup. We thank Cari Cesarotti and Matthew LeBlanc for useful discussions about event isotropy, and Ouali Kitouni, Niklas Nolte, and Mike Williams for useful discussions on EMD estimation with Kantorovich potentials. Finally, we would like to thank Eugene Wigner of Ref. [122] for inspiring the title of Sec. 2, and Mark Kac of Ref. [123] for inspiring the title of this paper.
DB, ASD, RG, and JT are supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions). ASD's research was also funded by the President's PhD Scholarship at Imperial College London and supported by the EPSRC Centre for Doctoral Training in Mathematics of Random Systems: Analysis, Modelling and Simulation (EP/S023925/1). RG and JT are additionally supported by the U.S. DOE Office of High Energy Physics under grant number DE-SC0012567. AT's research is supported by NSF DMS 2208392.
## Appendix A Energy Flows as Measures
In this appendix, we review aspects of measure theory and topology related to energy flows, as introduced in Sec. 2.2. To begin, we define a _measure_:
**Definition 4**.: _Given a set \(\mathcal{X}\) and a \(\sigma\)-algebra on \(\mathcal{X}\) denoted \(\sigma(\mathcal{X})\), a (positive) **measure**\(\mathcal{E}\) over \(\sigma(\mathcal{X})\) is a function \(\mathcal{E}:\sigma(\mathcal{X})\to\mathbb{R}\) satisfying:_
* _Non-Negativity:_ _For all_ \(X\in\sigma(\mathcal{X})\)_, we have_ \(\mathcal{E}(X)\geq 0\)_._
* _Null Set:_ _For the empty set, we have_ \(\mathcal{E}(\emptyset)=0\)_._
* _Additivity:_ _For a (countable) collection of disjoint sets_ \(X_{i}\in\sigma(\mathcal{X})\)_, we have:_ \[\mathcal{E}\left(\bigcup_{i}X_{i}\right)=\sum_{i}\mathcal{E}(X_{i}).\] (100)
_If, additionally, we have \(\mathcal{E}(\mathcal{X})=1\), we say \(\mathcal{E}\) is a **probability measure**._
Associating energy flows with measures is very natural: an energy flow answers the question "How much energy did I detect in the calorimeters located within the subregion \(X\) of my detector \(\mathcal{X}\)?", and this answer satisfies all of the above properties. We refer to \(\mathcal{E}(X)\) as the energy flow, whereas we call the object
\[\mathcal{E}(x)=\sum_{i}E_{i}\delta(x-x_{i}) \tag{100}\]
as the _energy flow density_ or _measure density_. This language (while different from Ref. [25]) is natural, since integrating over Eq. (100) yields the energy flow:
\[\mathcal{E}(X)=\int_{X}dx\,\mathcal{E}(x). \tag{101}\]
Under this definition, the energy flow is independent of the choice of coordinates \(x\) used on \(\mathcal{X}\), unlike the energy flow density. In particular, the energy flow is completely invariant under exactly 0-energy emissions or exactly collinear splittings, and is also invariant under particle relabellings. Throughout this paper, we restrict ourselves to measures that can be written as the integral of a well-behaved associated density, which is the case for all physically realized energy flows. We refer to measures whose density is a finite sum of weighted \(\delta\)-functions as _atomic measures_.
Measures can be used to formalize what we mean by integration. We begin by defining \(\langle\mathcal{E},\phi\rangle\) as the integral of an integrable function \(\phi:\mathcal{X}\to\mathbb{R}\) against an energy flow \(\mathcal{E}\).29 Assuming coordinates \(x\) on \(\mathcal{X}\), we define this quantity as the ordinary integral over the associated energy flow density:30
Footnote 29: The integral of \(\phi\) over \(\mathcal{E}\) is often denoted \(\int d\mathcal{E}\,\phi\). When \(\mathcal{E}\) is the Lebesgue measure, uniform over \(\mathcal{X}\), this becomes the ordinary integral.
Footnote 30: We will be satisfied here defining integration in terms of the ordinary Lebesgue integration on real numbers, which is always possible if an associated density exists.
\[\langle\mathcal{E},\phi\rangle\equiv\int_{\mathcal{X}}dx\,\mathcal{E}(x)\, \phi(x). \tag{102}\]
In the special case where \(\mathcal{E}\) is a probability measure, this quantity is the expectation value \(\mathbb{E}_{X\sim\mathcal{E}}\left[\phi(X)\right]\) of the random variable \(\phi(X)\) sampled over \(\mathcal{E}\). The notation \(\langle\mathcal{E},\phi\rangle\) highlights the inner product structure of a measure \(\mathcal{E}\) "acting" on the space of functions on \(\mathcal{X}\). Notably, this inner product is bilinear - though when considering positive measures, one must be careful to not produce energy flows where the energy is anywhere negative, as these are nonphysical.
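As a concrete illustration, here is a minimal sketch of an atomic energy flow and the pairing \(\langle\mathcal{E},\phi\rangle\), which for an atomic measure reduces to the weighted sum \(\sum_{i}E_{i}\,\phi(x_{i})\). The weights, positions, and test functions below are made up, and the class is ours rather than part of the Shaper code.

```python
import numpy as np

class AtomicEnergyFlow:
    """E(x) = sum_i E_i delta(x - x_i), stored as weights and positions."""

    def __init__(self, energies, positions):
        self.E = np.asarray(energies, float)   # E_i >= 0
        self.x = np.asarray(positions, float)  # x_i in the ground space

    def measure(self, region):
        """E(X): total energy inside a region, given as an indicator function."""
        mask = np.array([bool(region(xi)) for xi in self.x])
        return float(self.E[mask].sum())

    def pair(self, phi):
        """<E, phi> = sum_i E_i phi(x_i)."""
        return float(np.sum(self.E * np.array([phi(xi) for xi in self.x])))

flow = AtomicEnergyFlow([0.5, 0.3, 0.2], [[0.0, 0.0], [0.4, 0.1], [1.0, 1.0]])
print(flow.measure(lambda x: np.linalg.norm(x) < 0.5))  # energy in a disk of radius 0.5
print(flow.pair(lambda x: np.linalg.norm(x) ** 2))      # expectation-like pairing with |x|^2
```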
Next, we discuss the important topological features of measures. First, we define _weak* convergence_ (also sometimes referred to as _convergence in law_) of a sequence of measures:
**Definition 5**.: _A sequence of measures \(\mathcal{E}_{n}\)**converges with respect to the weak* topology** to a measure \(\mathcal{E}\), which we denote \(\mathcal{E}_{n}\to\mathcal{E}\), if for any continuous test function \(\phi\) on \(\mathcal{X}\), the sequence of real numbers \(\langle\mathcal{E}_{n},\phi\rangle\) converges to the real number \(\langle\mathcal{E},\phi\rangle\)._
That is, we say a sequence of measures converges if every single expectation value converges. Associated to this definition of convergence is the _weak* topology_, which is the (weakest) topology for which the map \(\langle\cdot,\phi\rangle\) is considered continuous for all continuous \(\phi\). Note that this definition does _not_ make use of any metric on measures; it simply inherits the metric and topological structure of the real numbers. Building off of this, we can now define what it means for any function on energy flows to be continuous:
**Definition 6**.: _A function \(F\) on the space of positive measures is **continuous with respect to the weak* topology** if, for every convergent sequence of measures \(\mathcal{E}_{n}\to\mathcal{E}\), the sequence \(F(\mathcal{E}_{n})\) converges to \(F(\mathcal{E})\)._
Continuity is a very powerful tool for dealing with energy flows. First, it immediately implies that continuous measures can be arbitrarily well approximated by atomic measures with an increasing number of samples. Moreover, if a function \(F\) on measures is continuous with respect to the weak* topology, this further implies it is only necessary to specify how \(F\) acts on atomic measures, since the action on all other measures is fixed by weak* continuity.
Intuitively, weak continuity captures the idea that a continuous distribution is well approximated by a discrete one with "enough samples" - as the number of samples increases, we approach the continuous one. Note, however, that we have not yet defined what it means for two energy flows to be "close" to each other, so we cannot yet describe what happens to functions under small energy flow perturbations. This requires a metric on the space of energy flows that respects the weak* topology, which, as argued in Sec. 2 and further justified in App. B, can be taken to be the Wasserstein metric.
We finally introduce one last piece of notation. For any two energy flows \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\), the _joint energy flow_\(\mathcal{E}=\mathcal{E}_{1}\otimes\mathcal{E}_{2}\) is the energy flow satisfying for any sets \(X,Y\subseteq\mathcal{X}\):
\[\mathcal{E}(X,Y)=\int_{X\times Y}dx\,dy\,\mathcal{E}_{1}(x)\,\mathcal{E}_{2}( y). \tag{100}\]
The energy flows \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) are referred to as the _marginals_ of \(\mathcal{E}\). They can be written as \(\mathcal{E}_{1}(X)=\mathcal{E}(X,\mathcal{X})\) and \(\mathcal{E}_{2}(Y)=\mathcal{E}(\mathcal{X},Y)\), with densities \(\mathcal{E}_{1}(x)=\int_{\mathcal{X}}dy\,\mathcal{E}(x,y)\) and \(\mathcal{E}_{2}(y)=\int_{\mathcal{X}}dx\,\mathcal{E}(x,y)\) respectively.
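A minimal numerical sketch of the product construction and its marginals; the weights below are made up, and both flows are normalized so the marginals come out exactly, whereas in general each marginal is rescaled by the other flow's total energy.

```python
import numpy as np

E1 = np.array([0.5, 0.3, 0.2])   # atomic weights of the first marginal flow (unit total energy)
E2 = np.array([0.6, 0.4])        # atomic weights of the second marginal flow (unit total energy)

pi = np.outer(E1, E2)            # joint weights: density E(x, y) = E1(x) * E2(y)

print(np.allclose(pi.sum(axis=1), E1))   # integrating out y recovers E_1
print(np.allclose(pi.sum(axis=0), E2))   # integrating out x recovers E_2
```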
## Appendix B Constructing the Wasserstein Metric
In Sec. 2.1, we proposed to write event and jet shapes \(\mathcal{O}_{\mathcal{M}}\) using some universal loss function \(\mathcal{L}\) on energy flows, and in Sec. 2.3, we showed that IRC safety implies that \(\mathcal{L}\) must be weakly continuous. Then, in Sec. 2.4, we claimed that the condition of faithfully lifting the ground metric means that \(\mathcal{L}\) must be the Wasserstein metric, and showed examples of how similar metrics do not satisfy this property. In this appendix, we show how the Wasserstein metric arises constructively. First, in App. B.1, we briefly review the properties we use to construct \(\mathcal{L}\), before outlining the construction in App. B.2.
### Shaping Up the Loss
To start, we list the properties we would like the universal loss function \(\mathcal{L}\) to satisfy. First, we would like \(\mathcal{L}\) to be a proper metric on the space of energy flows, which implies the following usual properties of metrics:
1. **Finiteness**: We require that \(\mathcal{L}\) is finite, even when evaluated on atomic measures. This immediately rules out the KL divergence [124] and its variants, including log-likelihoods, used in Ref. [51] to define "fuzzy jet" observables. These functions are only finite on continuous measures with support almost everywhere.
2. **Positivity and Closure**: We require that \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})\geq 0\). Moreover, we require that \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})=0\) if and only if \(\mathcal{E}=\mathcal{E}^{\prime}\). This captures the notion of the "optimal shape" as discussed in Sec. 2.1.
3. **Symmetry**: We require that \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})=\mathcal{L}(\mathcal{E}^{ \prime},\mathcal{E})\). This is to say, if the energy flow \(\mathcal{E}\) looks like \(\mathcal{E}^{\prime}\), then the energy flow \(\mathcal{E}^{\prime}\) looks like \(\mathcal{E}\) as well.
We will not specifically require the triangle inequality, as we never use it throughout this work. Of course, the Wasserstein metric does indeed satisfy the triangle inequality when the appropriate powers of \(1/\beta\) are included, making it a proper metric.
Not only is \(\mathcal{L}\) a metric, but it must also be continuous with respect to the weak* topology to be IRC safe, as established in Sec. 2.3:
4. **Weak Continuity/IRC Safety**: We require that \(\mathcal{L}\) is weakly continuous in both of its arguments. This not only allows for continuous energy flows to be arbitrarily well approximated by atomic energy flows, but heavily constrains the energy- and position-dependence of \(\mathcal{L}\).
Finally, as discussed in Sec. 2.4, we demand that \(\mathcal{L}\)_faithfully lifts the ground metric_. This allows the space of events to inherit the geometry of the ground metric space with no distortions or warping:
5. **Faithfully Lifts the Ground Metric**: We require that if \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) are atomic measures with a single normalized particle at positions \(x\) and \(y\) respectively, then \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})\) is proportional to \(d(x,y)^{\beta}\), where \(\beta\) is some fixed positive power. Moreover, we require that \(\mathcal{L}\)_faithfully_ lifts the ground metric. For any measure \(\mathcal{E}\), we can "translate" the measure by \(t\) to \(\mathcal{E}_{t}\), defined by the density \(\mathcal{E}(x-t)\). We then require that \(\mathcal{L}(\mathcal{E},\mathcal{E}_{t})\) is proportional to \(d(0,t)^{\beta}\).
If the ground metric is translationally invariant, which is the case for Euclidean metrics, property 5 implies that \(\mathcal{L}\) must also be translationally invariant - if both \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) are translated by a vector \(t\), the metric distance between them is unchanged.
### Why Wasserstein?
Having defined the properties we would like our universal loss function to satisfy, we can now finally construct \(\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})\).
To begin, weak continuity allows us to drastically simplify the problem. Continuous energy flows may be approximated arbitrarily well by atomic energy flows, and thus it suffices to build our function only on atomic energy flows. This allows us to argue that \(\mathcal{L}\) can only depend on single powers of the distance \(d(x,y)^{\beta}\). In particular, lifting the ground metric (though not necessarily faithfully, yet) forbids terms of the form \(d(x,y)^{\beta}d(x^{\prime},y^{\prime})^{\beta}\) or other higher-order distance correlations, since in the single-particle case this would result in losses of the form \(d^{\beta_{1}}+d^{\beta_{2}}+...\) with several differing exponents. This implies the following form for \(\mathcal{L}\) when evaluated on uniform atomic flows \(\mathcal{E}\sim\sum_{i}\delta_{x_{i}}\) and \(\mathcal{E}^{\prime}\sim\sum_{j}\delta_{y_{j}}\):
\[\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime}) =\Pi\left(\sum_{i,j}\pi_{ij}\,d(x_{i},y_{j})^{\beta}\right)\] \[+K\left(\sum_{i_{1},i_{2}}k_{i_{1}i_{2}}\,d(x_{i_{1}},x_{i_{2}})^ {\beta}\right)+K\left(\sum_{j_{1},j_{2}}k_{j_{1}j_{2}}\,d(y_{j_{1}},y_{j_{2}}) ^{\beta}\right)\] \[+F\left(\sum_{i}f_{i}\,d(x_{i},g(x_{i}))^{\beta}\right)+F\left( \sum_{j}f_{j}\,d(\tilde{g}(y_{j}),y_{j})^{\beta}\right), \tag{101}\]
where \(\Pi\), \(K\), and \(F\) are universal functions, \(\pi\), \(k\), and \(f\) are coefficients that implicitly depend on the two measures, and \(g\) is a function from the domain of \(\mathcal{E}\) to the domain of \(\mathcal{E}^{\prime}\) (and vice-versa for \(\tilde{g}\)). Note that this form is symmetric under swapping \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\). Without loss of generality, \(k_{ii^{\prime}}\) can be chosen to be a symmetric traceless matrix.
Still considering the single particle case, lifting the ground metric also implies that the functions \(\Pi\) and \(F\) must be linear functions (though the single particle case provides no information about \(K\)), so as to reproduce the desired \(d(x,y)^{\beta}\) behavior. Moreover, to still maintain only terms proportional to \(d^{\beta}\) even in the multiparticle case, \(K\) must also be linear. This linearity means that we can combine the arguments of each of the \(\Pi\), \(K\), and \(F\) functions.
Recall that energy flows are additive objects, and the energy flows defined by the densities \(\mathcal{E}=(E_{1}+E_{2})\delta_{x}\) and \(\mathcal{E}=E_{1}\delta_{x}+E_{2}\delta_{x}\) must be identical. This implies that each of the terms in Eq. (101) must be individually linear in the energy weights of \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\), respectively.
This implies the following constraints, up to an overall proportionality factor:
\[\sum_{i}\pi_{ij}=E^{\prime}_{j}, \tag{114}\]
\[\sum_{j}\pi_{ij}=E_{i}, \tag{115}\]
\[\sum_{i_{1}}k_{i_{1}i_{2}}=E_{i_{2}},\qquad\sum_{i_{2}}k_{i_{1}i_{2}}=E_{i_{1}}, \tag{116}\]
\[\sum_{j_{1}}k_{j_{1}j_{2}}=E^{\prime}_{j_{2}},\qquad\sum_{j_{2}}k_{j_{1}j_{2}}=E^{\prime}_{j_{1}}, \tag{117}\]
\[f_{i}=E_{i},\qquad f_{j}=E^{\prime}_{j}. \tag{118}\]
Now, consider the two particle case, as in Sec. 2.4, and define the energy flows for vectors \(a\) and \(t\):
\[\mathcal{E}(x) \sim\frac{1}{2}\delta_{0}+\frac{1}{2}\delta_{a}, \tag{119}\] \[\mathcal{E}^{\prime}(x) \sim\frac{1}{2}\delta_{t}+\frac{1}{2}\delta_{a+t}. \tag{120}\]
When \(t\to 0\), the two energy flows are the same, and when \(a\to 0\), the energy flows reduce to the single particle case. Evaluating Eq. (113), we obtain:
\[\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})=\Pi\left(\pi_{1_{x}1_{y}}d(0,t)^{\beta}+\pi_{1_{x}2_{y}}d(0,a+t)^{\beta}+\pi_{2_{x}1_{y}}d(a,t)^{\beta}+\pi_{2_{x}2_{y}}d(a,a+t)^{\beta}\right)\] \[+K\left(2k_{1_{x}2_{x}}d(0,a)^{\beta}+2k_{1_{y}2_{y}}d(t,a+t)^{\beta}\right)\] \[+\frac{1}{2}F\left(d(0,g(0))^{\beta}+d(a,g(a))^{\beta}+d(\tilde{g}(t),t)^{\beta}+d(\tilde{g}(a+t),a+t)^{\beta}\right). \tag{121}\]
In the limit \(t\to 0\), this expression must evaluate to zero by closure, for any choice of \(a\). This is possible if the off-diagonal components of \(\pi\) and \(k\) either cancel or are individually zero. Note that the function \(g\) must select a particle \(y_{j}\) given \(x_{i}\) (and vice-versa for \(\tilde{g}\)). Importantly, however, it must do this in a label-independent way, since the labels \(i\) and \(j\) are completely arbitrary. The only way to do so is to select \(y_{j}\) based on its distance from \(x_{i}\) - in principle, we can choose \(g\) to select the furthest particle, or the closest particle, or even the particle closest to the average distance amongst all particles. However, the only choice consistent with closure, as can be seen in Eq. (121), is to choose the _closest_ particle \(y_{j}\) to the given \(x_{i}\). This allows us to write the third line of Eq. (113) as:
\[\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})=[...]+F\left(\sum_{i}E_{i}\min_ {y_{j}\in\text{Supp}(\mathcal{E}^{\prime})}d(x_{i},y_{j})^{\beta}+\sum_{j}E^{ \prime}_{j}\min_{x_{i}\in\text{Supp}(\mathcal{E})}d(x_{i},y_{j})^{\beta} \right). \tag{122}\]
Eq. (122) is exactly the form of a Chamfer distance from Eq. (11)! The Chamfer distance is the average closest distance from points in \(\mathcal{E}\) to \(\mathcal{E}^{\prime}\), plus the average closest distance from
points in \(\mathcal{E}^{\prime}\) to \(\mathcal{E}\). Following the explicit counter-example given in Eq. (15), the Chamfer distance does _not_ lift the ground metric _faithfully_, and therefore, \(F\) must be zero.
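For concreteness, here is a minimal sketch of the energy-weighted Chamfer distance of Eq. (122) evaluated on two small atomic flows; all energies and positions are made up.

```python
import numpy as np

def chamfer(E, X, Ep, Y, beta=1.0):
    """Energy-weighted Chamfer distance: nearest-neighbour d^beta in both directions."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** beta
    return float(np.sum(E * D.min(axis=1)) + np.sum(Ep * D.min(axis=0)))

E  = np.array([0.5, 0.5]); X = np.array([[0.0, 0.0], [1.0, 0.0]])
Ep = np.array([0.5, 0.5]); Y = np.array([[0.2, 0.0], [1.2, 0.0]])
print(chamfer(E, X, Ep, Y))
```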
Returning to Eq. (111), we now want to show that \(K=0\). Since \(k\) is traceless by assumption, the constraints Eqs. (110) and (111) imply that \(k_{1_{x}2_{x}}=E_{1}E_{2}\) and \(k_{1_{y}2_{y}}=E_{1}^{\prime}E_{2}^{\prime}\). Assuming that \(K\neq 0\), closure plus Eqs. (111) and (112) allows us to solve for \(\pi_{ij}\). From this, we deduce that \(\pi_{ij}\) reduces to \(E_{i}E_{j}^{\prime}\) as \(t\to 0\) (with proportionality constant \(\Pi=-2K\)). Furthermore, \(\pi_{ij}\) has to be constant in \(t\), since otherwise \(\mathcal{L}\) would depend on higher powers of the metric and therefore fail to lift the ground metric. Given this, Eq. (110) reduces to:
\[\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime})=K\left(\sum_{i_{1},i_{2}}E_{i_{1}}E_{i_{2}}d(x_{i_{1}},x_{i_{2}})^{\beta}+\sum_{j_{1},j_{2}}E_{ j_{1}}^{\prime}E_{j_{2}}^{\prime}d(y_{j_{1}},y_{j_{2}})^{\beta}-2\sum_{ij}E_{i}E_{ j}^{\prime}d(x_{i},y_{j})^{\beta}\right) \tag{112}\]
However, this implies that \(\mathcal{L}\) is exactly the maximum mean discrepancy (MMD) of Eq. (10)! The MMD can be thought of as the average potential energy of a system of springs with potential \(V(r)\sim r^{\beta}\) connecting particles between \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\), minus the self-energy of springs connecting particles within \(\mathcal{E}\) and within \(\mathcal{E}^{\prime}\). The explicit counter-example given in Eq. (14) shows that this cannot be faithful, _except in the special case of \(\beta=2\)_, which is not sufficient for our purposes.31 It follows that \(K\) must also be zero, and we have significantly constrained the form of our metric.
Footnote 31: Generically, physical systems with potentials \(V(r)\sim r^{\beta}\) experience screening, most notably electrostatic screening when \(\beta=-1\). However, screening does not occur for ideal springs, i.e. \(\beta=2\). Screening and (un)faithfulness are related – as \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) move relative to each other, the potential energy between them will not necessarily scale with \(r^{\beta}\) in extended systems due to screening.
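A minimal sketch of the bracketed MMD-like expression in Eq. (112), leaving out the overall universal constant \(K\); the inputs are made up, and the \(\beta=2\) evaluation illustrates the "ideal spring" case discussed in the footnote, where (for unit total energies) the result depends only on the energy-weighted means.

```python
import numpy as np

def mmd_bracket(E, X, Ep, Y, beta=1.0):
    """Two self-energy terms minus twice the cross term, with kernel d^beta."""
    def pair_d(A, B):
        return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1) ** beta
    self_x = np.einsum("i,j,ij->", E, E, pair_d(X, X))
    self_y = np.einsum("i,j,ij->", Ep, Ep, pair_d(Y, Y))
    cross = np.einsum("i,j,ij->", E, Ep, pair_d(X, Y))
    return float(self_x + self_y - 2.0 * cross)

E  = np.array([0.5, 0.5]); X = np.array([[0.0, 0.0], [1.0, 0.0]])
Ep = np.array([0.5, 0.5]); Y = np.array([[0.0, 0.5], [1.0, 0.5]])

# For beta = 2 and unit total energies this equals -2 |mean(X) - mean(Y)|^2,
# i.e. it only sees the weighted means - the screening/unfaithfulness point above.
print(mmd_bracket(E, X, Ep, Y, beta=2.0))
```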
It now remains only to find the form of \(\pi\). To do this, we now consider an energy flow \(\mathcal{E}\), consisting of \(N\) particles each with energy \(\frac{1}{N}\). By closure, we again must have \(\mathcal{L}(\mathcal{E},\mathcal{E})=0\), which occurs when \(\pi_{ij}\) is \(\frac{1}{N^{2}}\) times the \(N\times N\) identity matrix.
Next, consider \(\mathcal{E}^{\prime}\), an exact copy of \(\mathcal{E}\), except the particles have been re-indexed, sending the original particle \(x_{i}\) to \(x_{j=\sigma(i)}\). As measures are invariant under reordering, \(\mathcal{L}\) must still be zero, though \(\pi\) will be (proportional to) some \(N\times N\) permutation matrix. We can write this as:
\[\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime}) =\frac{1}{N}\sum_{i}d(x_{i},y_{\sigma(i)})^{\beta},\] \[=\sum_{ij}\pi_{ij}\,d(x_{i},y_{j})^{\beta}, \tag{113}\]
This will only be zero if \(\pi_{ij}\) is the _correct_ permutation matrix that undoes the index shuffling. If one instead guesses that \(\pi\) is a different permutation matrix, the point \(y_{\sigma(i)}\) will not lie on top of the point \(x_{i}\), leading to \(\mathcal{L}>0\).
Thus, even if we did not know how exactly the particle labels on \(\mathcal{E}^{\prime}\) were shuffled, all we would have to do to find the correct \(\pi\) is to search through all possible permutation matrices,
and take the one that gives the minimum answer:
\[\mathcal{L}(\mathcal{E},\mathcal{E}^{\prime}) =\min_{\sigma:[1,N]\to[1,N]}\frac{1}{N}\sum_{i}d(x_{i},y_{\sigma(i)})^{\beta}\] \[=\min_{\pi_{ij}\in S_{N}}\frac{1}{N^{2}}\sum_{ij}\pi_{ij}d(x_{i},y _{j})^{\beta}, \tag{111}\]
where \(S_{N}\) is the set of all \(N\times N\) permutation matrices. Moreover, the index-shuffling trick allows us to see that Eq. (111) faithfully lifts the ground metric. If \(\mathcal{E}^{\prime}\) is translated by a vector \(t\), then \(\pi\) should still be the identity matrix before shuffling so that our result is proportional to \(d(0,t)^{\beta}\). If we guess the wrong permutation matrix, while it may be possible for some distances to close (that is, \(d(x_{i},y_{\sigma(i)})\) to be less than \(d(0,t)\)), this will be made up for by other particles being further apart than \(d(0,t)\), and so \(\mathcal{L}\) will be larger. For values of \(\beta\geq 1\), this last fact follows from the triangle inequality. This can be seen explicitly in the two-particle case: If \(|t|\gg|a|\), it follows that:
\[d(0,t)^{\beta}\leq\frac{1}{2}d(0,t-a)^{\beta}+\frac{1}{2}d(0,t+a)^{\beta}, \tag{112}\]
for \(\beta\geq 1\). Therefore Eq. (111) is a faithful metric for \(\beta\geq 1\).
The problem in Eq. (111) is referred to as the _combinatorial Monge problem_, which is the precursor to the Wasserstein metric. We can think of Eq. (111) as finding the optimal map \(\pi\) to "transport" the points \(x\) to \(y\), minimizing the total distance\({}^{\beta}\) needed to travel. These arguments are enough to fully fix the form of \(\mathcal{L}\) - even if \(\mathcal{E}\) and \(\mathcal{E}^{\prime}\) are different energy flows, as long as they both have \(N\) particles with uniform weights, the minimization over permutation matrices is valid.
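The combinatorial Monge problem for two uniform \(N\)-particle flows can be solved exactly with a standard assignment solver; the following is a minimal sketch (not the Shaper implementation), with made-up points.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def monge(X, Y, beta=1.0):
    """Minimize (1/N) sum_i d(x_i, y_sigma(i))^beta over permutations sigma."""
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** beta
    rows, cols = linear_sum_assignment(D)   # Hungarian algorithm: optimal permutation
    return D[rows, cols].mean(), cols

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y = X + np.array([0.3, 0.1])                # a translated copy of X
cost, sigma = monge(X, Y)
print(cost, sigma)                          # cost = |t|^beta for a pure translation
```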
To finish our construction, we use the additivity of measures and the weak* topology to generalize Eq. (111) to any energy flow. Since measures are additive, any atomic energy flow with _rational_ energy weights can be thought of as particles of a base denomination weight stacked on top of each other - for example, the two-particle energy flow \(\mathcal{E}\sim 0.381\delta_{x_{1}}+0.619\delta_{x_{2}}\) is exactly equal to the energy flow given by 381 particles at site \(x_{1}\) and 619 particles at site \(x_{2}\), each with energy 0.001. One can then solve the combinatorial Monge problem using this uniform energy flow, and "collapse" the corresponding redundant subspace in \(\pi\) to produce the optimal transport map. In the case that energy weights are irrational, weak continuity ensures that these may be constructed as limits of rationally-weighted energy flows, so this is not an issue. Similarly, any continuous energy flow can be written this way.
Thus, the loss Eq. (111) is our universal loss function! Accounting for arbitrary weights and particle numbers, as discussed above, this becomes the well-known Wasserstein metric. The \(\beta\)-Wasserstein metric, or "Energy/Earth Mover's Distance" (EMD), is the metric on the space of measures that is positive, closed, metrizes the weak convergence, and faithfully lifts the ground metric. Repeating Eq. (9) for convenience, the EMD between two measures \(\mathcal{E}\)
and \({\cal E}^{\prime}\) is given by:
\[\text{EMD}^{(\beta,R)}({\cal E},{\cal E}^{\prime})=\min_{\pi\in{\cal M }({\cal X}\times{\cal X})}\left[\frac{1}{\beta R^{\beta}}\left\langle\pi,d(x,y)^ {\beta}\right\rangle\right]+|\Delta E_{\text{tot}}|,\] \[\pi({\cal X},Y)\leq{\cal E}^{\prime}(Y),\quad\pi(X,{\cal X})\leq{ \cal E}(X),\quad\pi({\cal X},{\cal X})=\min(E_{\text{tot}},E^{\prime}_{\text{ tot}}). \tag{111}\]
The parameter \(R>0\) sets a distance scale for the EMD. The additional energy difference term, \(|\Delta E_{\text{tot}}|=|E_{\text{tot}}-E^{\prime}_{\text{tot}}|\), contributes whenever the two energy flows do not have the same total energy. Our argument in this section does not fix this term uniquely, and in fact there are many different approaches to unbalanced metrics [39, 125, 126]. In the common case that both \({\cal E}\) and \({\cal E}^{\prime}\) are atomic measures with \(M\) and \(N\) particles with energies \(E_{i}\) and \(E^{\prime}_{j}\) respectively, the EMD takes the form:
\[\text{EMD}^{(\beta,R)}({\cal E},{\cal E}^{\prime})=\min_{\pi_{ij}\geq 0}\left[\frac{1}{\beta R^{\beta}}\sum_{i=1}^{M}\sum_{j=1}^{N}\pi_{ij}\,d_{ij}^{\beta}\right]+|\Delta E_{\text{tot}}|,\] \[\sum_{i=1}^{M}\pi_{ij}\leq E^{\prime}_{j},\quad\sum_{j=1}^{N}\pi_{ij}\leq E_{i},\quad\sum_{i=1}^{M}\sum_{j=1}^{N}\pi_{ij}=\min(E_{\text{tot}},E^{\prime}_{\text{tot}}). \tag{112}\]
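For small atomic flows this linear program can be solved exactly; the following is a minimal sketch using scipy's LP solver rather than the Sinkhorn approximation employed by Shaper, with made-up energies and positions.

```python
import numpy as np
from scipy.optimize import linprog

def emd(E, X, Ep, Y, beta=1.0, R=1.0):
    """Unbalanced EMD between two atomic energy flows, solved as the LP above."""
    M, N = len(E), len(Ep)
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1) ** beta  # d_ij^beta
    c = D.ravel() / (beta * R ** beta)

    # Inequality constraints: sum_j pi_ij <= E_i  and  sum_i pi_ij <= E'_j.
    A_ub = np.zeros((M + N, M * N))
    for i in range(M):
        A_ub[i, i * N:(i + 1) * N] = 1.0
    for j in range(N):
        A_ub[M + j, j::N] = 1.0
    b_ub = np.concatenate([E, Ep])

    # Equality constraint: total transported energy equals min(E_tot, E'_tot).
    A_eq = np.ones((1, M * N))
    b_eq = [min(E.sum(), Ep.sum())]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    return res.fun + abs(E.sum() - Ep.sum())

E  = np.array([0.5, 0.5]);       X = np.array([[0.0, 0.0], [1.0, 0.0]])
Ep = np.array([0.3, 0.3, 0.4]);  Y = np.array([[0.0, 0.1], [1.0, -0.1], [0.5, 0.5]])
print(f"EMD = {emd(E, X, Ep, Y):.4f}")
```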
|
2303.05433 | Generalised Spin$^r$ Structures on Homogeneous Spaces | Spinorial methods have proven to be a powerful tool to study geometric
properties of spin manifolds. Our aim is to continue the spinorial study of
manifolds that are not necessarily spin. We introduce and study the notion of
$G$-invariance of spin$^r$ structures on a manifold $M$ equipped with an action
of a Lie group $G$. For the case when $M$ is a homogeneous $G$-space, we prove
a classification result of these invariant structures in terms of the isotropy
representation. As an example, we study the invariant spin$^r$ structures for
all the homogeneous realisations of the spheres. | Diego Artacho, Marie-Amélie Lawn | 2023-03-09T17:25:13Z | http://arxiv.org/abs/2303.05433v3 | # Generalised Spin\({}^{r}\) Structures on Homogeneous Spaces
###### Abstract.
Spinorial methods have proven to be a powerful tool to study geometric properties of Spin manifolds. We aim to continue the spinorial study of manifolds that are not necessarily Spin. We introduce and study the notion of \(G\)-invariance of generalised Spin\({}^{r}\) structures on a manifold \(M\) equipped with an action of a Lie group \(G\). For the case when \(M\) is a homogeneous \(G\)-space, we prove a characterisation of the existence of these invariant structures in terms of a lift of the isotropy representation. As an application, we study the invariant generalised Spin\({}^{r}\) structures for all the homogeneous realisations of the spheres.
_Key words:_ Spin geometry, homogeneous spaces, holonomy, generalised Spin structures, Spin\({}^{c}\), Spin\({}^{h}\).
**MSC:** 53C27; 57R15; 53C30.
###### Contents
* 1 Introduction
* 2 Generalised Spin\({}^{r}\) structures
* 2.1 Spin\({}^{r}\) groups
* 2.2 Spin\({}^{r}\) structures on manifolds
* 3 Invariance: Spheres
## 1. Introduction
The field of Spin geometry has proven to be a fruitful area of study with numerous applications in various areas of mathematics. It has been applied to a wide range of problems, from the study of topology and differential equations to the theory of quantum mechanics. However, by their nature, spinorial considerations are limited to manifolds admitting a Spin structure. To address this limitation, considerable efforts have been made in recent decades to extend Spin geometry to non-Spin manifolds. Recent developments have led to the introduction of Spin\({}^{\mathbb{C}}\) and Spin\({}^{\mathbb{H}}\) structures, which come, respectively, from the complexification and quaternionification of the classical Spin groups [11, 24, 6, 13]. The idea of extending Spin geometry to non-Spin manifolds was introduced by Friedrich and Trautmann [12]. They started the theory of spinorial Lipschitz structures, which has been developed further in [19, 20, 21].
Every Spin manifold is \(\operatorname{Spin}^{\mathbb{C}}\), and every \(\operatorname{Spin}^{\mathbb{C}}\) manifold is in turn \(\operatorname{Spin}^{\mathbb{H}}\), but the converse is not true. For example, even-dimensional complex projective spaces \(\mathbb{CP}^{2n}\) are not \(\operatorname{Spin}\), but they are \(\operatorname{Spin}^{\mathbb{C}}\). Similarly, the Wu manifold \(W=\operatorname{SU}(3)/\operatorname{SO}(3)\) is not \(\operatorname{Spin}^{\mathbb{C}}\), but it is \(\operatorname{Spin}^{\mathbb{H}}\)[7]. Moreover, there are examples of manifolds which are not even \(\operatorname{Spin}^{\mathbb{H}}\), for instance \(W\times W\)[2].
It is a well-known fact that every almost-complex Riemannian manifold is \(\operatorname{Spin}^{\mathbb{C}}\), and every almost-quaternionic manifold is \(\operatorname{Spin}^{\mathbb{H}}\). Moreover, it is known that every oriented Riemannian manifold of dimension \(\leq 3\) is Spin. In 2005, Teichner and Vogt [26] completed the work started by Hirzebruch and Hopf in 1958 [17] to show that every oriented Riemannian manifold of dimension \(\leq 4\) has a \(\operatorname{Spin}^{\mathbb{C}}\) structure. These structures are crucial for Seiberg-Witten theory, which has become an essential tool in the study of smooth \(4\)-manifolds [23]. In 2021, Albanese and Milivojevic [2, 3] proved that every oriented Riemannian manifold of dimension \(\leq 5\) is \(\operatorname{Spin}^{\mathbb{H}}\). If, moreover, the manifold is closed, this is true for dimension \(\leq 7\). In [25], in analogy to Seiberg-Witten theory, the authors construct quaternionic monopoles, which are associated to \(\operatorname{Spin}^{\mathbb{H}}\) structures on \(4\)-manifolds.
Espinosa and Herrera [10] introduced, for each \(r\in\mathbb{N}\), the concept of \(\operatorname{Spin}^{r}\) structure, which further generalises this picture, the cases \(r=1,2,3\) corresponding, respectively, to \(\operatorname{Spin}\), \(\operatorname{Spin}^{\mathbb{C}}\) and \(\operatorname{Spin}^{\mathbb{H}}\). They call these structures _spinorially twisted_ Spin _structures_. Their associated spinor bundles have been studied in depth in [10, 15, 14]. Albanese and Milivojevic [2] call these structures _generalised \(\operatorname{Spin}^{k}\) structures_. An oriented Riemannian manifold \(M\) is \(\operatorname{Spin}^{r}\) if, and only if, it embeds into a Spin manifold with codimension \(r\). Hence, if \(r<s\), every \(\operatorname{Spin}^{r}\) manifold is \(\operatorname{Spin}^{s}\). They proved that there is no \(r\in\mathbb{N}\) such that every oriented Riemannian manifold is \(\operatorname{Spin}^{r}\). This result can be interpreted as ensuring that this chain of structures does not _stabilise_. As Lawson points out in [18], this is striking, as every manifold embeds into an orientable manifold with codimension \(1\).
In this paper, we deepen the study of these generalised structures. As in the classical Spin case, for a \(\operatorname{Spin}^{r}\) manifold there is, in general, no preferred \(\operatorname{Spin}^{r}\) structure to work with. In [10], the authors solve this problem by giving a canonical construction of a \(\operatorname{Spin}^{n}\) structure for every oriented Riemannian \(n\)-manifold in terms of its holonomy. We show that this structure is not induced by a \(\operatorname{Spin}^{s}\) structure for any \(s<n\) for a wide class of manifolds.
Homogeneous spaces are of great interest, because there is a simple way of looking at Spin structures provided they are invariant, namely as lifts of the isotropy representation to the Spin group [5, 4, 9]. However, not all homogeneous spaces are Spin, and, in fact, all the examples provided above are homogeneous. It is therefore natural to ask whether this isotropy-lifting criterion can be generalised to invariant \(\operatorname{Spin}^{r}\) structures. It turns out this is the case, and we prove it in Section 3.
Another feature of homogeneous spaces is that they provide a good ground for explicit spinorial computations. For instance, in [1], the authors explore the relationship between certain invariant spinors on homogeneous spheres and various \(G\)-structures.
For each \(n\in\mathbb{N}\), we give simple examples of \(n\)-dimensional oriented Riemannian homogeneous \(G\)-spaces \(M\) for which the minimal \(r\in\mathbb{N}\) such that \(M\) admits a \(G\)-invariant \(\operatorname{Spin}^{r}\) structure is exactly \(n\). This provides a \(G\)-invariant analogue of the fact that, for each \(r\in\mathbb{N}\), there exists an oriented Riemannian manifold which does not admit a \(\operatorname{Spin}^{r}\) structure [2].
We also study a family of examples. The compact and connected Lie groups that act transitively and effectively on spheres were classified by Montgomery and Samelson in [22]. For each of these homogeneous realisations of the spheres, we compute the minimal \(r\in\mathbb{N}\) such that each sphere
admits an invariant \(\mathrm{Spin}^{r}\) structure, refining the picture presented in [9], where only \(\mathrm{Spin}\) structures were considered. Finally, we show the beautiful relationship between existence of invariant \(\mathrm{Spin}^{r}\) structures on homogeneous spheres and lifts of the holonomy representation of an oriented Riemannian \(n\)-manifold to \(\mathrm{Spin}^{r}(n)\), via Berger's classification of holonomy groups.
## 2. Generalised \(\mathrm{Spin}^{r}\) structures
### \(\mathrm{Spin}^{r}\) groups
Denote by \(\mathrm{SO}(n)\) the special orthogonal group, and let \(\lambda_{n}\colon\,\mathrm{Spin}(n)\to\mathrm{SO}(n)\) be the standard two-sheeted covering. For \(r\in\mathbb{N}\), we define the group
\[\mathrm{Spin}^{r}(n)\coloneqq\left(\mathrm{Spin}(n)\times\mathrm{Spin}(r) \right)/\mathbb{Z}_{2}\,,\]
where \(\mathbb{Z}_{2}=\langle(-1,-1)\rangle\subseteq\mathrm{Spin}(n)\times\mathrm{ Spin}(r)\). Note that \(\mathrm{Spin}^{1}(n)=\mathrm{Spin}(n)\). Moreover, as \(\mathrm{Spin}(2)\cong\mathrm{U}(1)\) and \(\mathrm{Spin}(3)\cong\mathrm{Sp}(1)\), it is clear that \(\mathrm{Spin}^{2}(n)=\mathrm{Spin}^{\mathbb{C}}(n)\) and \(\mathrm{Spin}^{3}(n)=\mathrm{Spin}^{\mathbb{H}}(n)\). There are canonical Lie group homomorphisms
\[\begin{array}{cc}\lambda_{n}^{r}\colon&\mathrm{Spin}^{r}(n)\to\mathrm{SO}( n)\\ &\left[\mu,\nu\right]\mapsto\lambda_{n}(\mu)\end{array},\qquad\begin{array}{ cc}\xi_{n}^{r}\colon&\mathrm{Spin}^{r}(n)\to\mathrm{SO}(r)\\ &\left[\mu,\nu\right]\mapsto\lambda_{r}(\nu)\end{array}.\]
For \(r<s\), there is a natural inclusion \(\iota_{n}^{rs}\colon\,\mathrm{Spin}^{r}(n)\to\mathrm{Spin}^{s}(n)\), which is compatible with the projections to \(\mathrm{SO}(n)\) in the sense that \(\lambda_{n}^{s}\circ\iota_{n}^{rs}=\lambda_{n}^{r}\).
We need to make some topological considerations first, which we shall use later. Define the map
\[\varphi^{r,n}\colon\,\mathrm{Spin}^{r}(n)\to\mathrm{SO}(n)\times\mathrm{SO}(r)\,,\qquad\left[\mu,\nu\right]\mapsto\left(\lambda_{n}(\mu),\lambda_{r}(\nu)\right).\]
The following result is going to be crucial for our purposes:
**Proposition 2.1**.: Let \(n,r\in\mathbb{N}\). Let \(\varphi_{\sharp}^{r,n}\) be the map induced by \(\varphi^{r,n}\) at the level of fundamental groups. Then,
1. \(\varphi^{r,n}\) is a \(2\)-sheeted covering map;
2. \(\varphi_{\sharp}^{2,2}\left(\pi_{1}\left(\mathrm{Spin}^{2}(2)\right)\right)=\langle(1,\pm 1)\rangle\subseteq\mathbb{Z}\times\mathbb{Z}\cong\pi_{1}\left(\mathrm{SO}(2)\times\mathrm{SO}(2)\right)\);
3. if \(n\geq 3\), then \(\varphi_{\sharp}^{2,n}\left(\pi_{1}\left(\mathrm{Spin}^{2}(n)\right)\right)= \langle(1,1)\rangle\subseteq\mathbb{Z}_{2}\times\mathbb{Z}\);
4. if \(r,n\geq 3\), then \(\varphi_{\sharp}^{r,n}\left(\pi_{1}\left(\mathrm{Spin}^{r}(n)\right)\right)= \langle(1,1)\rangle\subseteq\mathbb{Z}_{2}\times\mathbb{Z}_{2}\).
Proof.: Point (1) follows from the fact that \(\lambda_{n},\lambda_{r}\) are covering maps. For the rest of the proof, consider the composition
\[\mathrm{Spin}(n)\times\mathrm{Spin}(r)\xrightarrow{p}\mathrm{Spin}^{r}(n) \xrightarrow{\varphi^{r,n}}\mathrm{SO}(n)\times\mathrm{SO}(r)\,,\]
where \(p\) is the obvious projection. Note that \(p\), \(\varphi^{r,n}\) and \(\varphi^{r,n}\circ p\) are covering maps. Suppose \(r=n=2\). Recall that \(\mathrm{Spin}(2)\cong\mathrm{SO}(2)\cong S^{1}\), and that the map \(\varphi^{2,2}\circ p\) is given by \((z,w)\mapsto(z^{2},w^{2})\). Consider the loop \(\alpha_{\pm}\colon[0,1]\to\mathrm{Spin}^{2}(2)\) defined by \(\alpha_{\pm}(t)=p\left(e^{i\pi t},e^{\pm i\pi t}\right)\). It is clear that the image of \(\alpha_{\pm}\) in the fundamental group of \(S^{1}\times S^{1}\), which is isomorphic to \(\mathbb{Z}\times\mathbb{Z}\), is \((1,\pm 1)\). Moreover, the lift of every loop based at \([1,1]\) in \(\mathrm{Spin}^{2}(2)\) along \(p\) starting at \((1,1)\) must end either at \((1,1)\) or at \((-1,-1)\). This proves (2). Now, suppose \(n\geq 3\). Let \(\beta\) be a loop in \(\mathrm{Spin}^{2}(n)\) based at \([1,1]\). Its lift along \(p\) starting at \((1,1)\) is a path with endpoint either \((1,1)\) or \((-1,-1)\). Hence, this lift is of the form \((\beta_{1},\beta_{2})\), where either both \(\beta_{i}\) are loops based at \(1\) or both are paths from \(1\) to \(-1\) in the corresponding factors of \(\mathrm{Spin}(n)\times\mathrm{Spin}(2)\). In the first case, the image of \(\beta\) in the fundamental group of \(\mathrm{SO}(n)\times\mathrm{SO}(2)\), which is isomorphic to \(\mathbb{Z}_{2}\times\mathbb{Z}\), is of the form \((0,2m)\), for some \(m\in\mathbb{Z}\). In the other case, the image of \(\beta\) is of the form \((1,2l+1)\), for some \(l\in\mathbb{Z}\). This proves (3). The proof of (4) is analogous.
### Spin\({}^{r}\) structures on manifolds
As in the classical \(\mathrm{Spin},\mathrm{Spin}^{\mathbb{C}}\) and \(\mathrm{Spin}^{\mathbb{H}}\) cases, one can define a family of structures on oriented Riemannian manifolds:
**Definition 2.2**.: Let \(M\) be an oriented Riemannian \(n\)-manifold with principal \(\mathrm{SO}(n)\)-bundle of positively oriented orthonormal frames \(P_{\mathrm{SO}}(M)\). A \(\mathrm{Spin}^{r}\) structure on \(M\) is a reduction of the structure group of \(P_{\mathrm{SO}}(M)\) along the Lie group homomorphism \(\lambda_{n}^{r}\colon\mathrm{Spin}^{r}(n)\to\mathrm{SO}(n)\). In particular, it is a principal \(\mathrm{Spin}^{r}(n)\)-bundle \(P_{\mathrm{Spin}^{r}}\) over \(M\) with a \(\mathrm{Spin}^{r}(n)\)-equivariant bundle homomorphism \(P_{\mathrm{Spin}^{r}}\to P_{\mathrm{SO}}(M)\), where \(\mathrm{Spin}^{r}(n)\) acts on \(P_{\mathrm{SO}}(M)\) via \(\lambda_{n}^{r}\). The associated principal \(\mathrm{SO}(r)\)-bundle to \(P_{\mathrm{Spin}^{r}}\) along \(\xi_{n}^{r}\) is called the auxiliary bundle of the \(\mathrm{Spin}^{r}\) structure, and it is denoted by \(P_{\mathrm{SO}(r)}\). Note that the natural map \(\theta\colon P_{\mathrm{Spin}^{r}}\to P_{\mathrm{SO}}(M)\tilde{\times}P_{ \mathrm{SO}(r)}\) is a two-sheeted \(\mathrm{Spin}^{r}\)-equivariant covering map, where \(\mathrm{Spin}^{r}(n)\) acts on the codomain via \(\varphi^{r,n}\). The rank-\(r\) vector bundle over \(M\) associated to the auxiliary bundle \(P_{\mathrm{SO}(r)}\) along the standard representation of \(\mathrm{SO}(r)\) is called the auxiliary vector bundle of the structure.
**Remark 2.3**.: For \(r<s\), a \(\mathrm{Spin}^{r}\) structure naturally induces a \(\mathrm{Spin}^{s}\) structure, namely via the associated bundle construction along \(\iota_{n}^{rs}\).
Given an oriented Riemannian manifold \(M\), it is therefore natural to ask which is the minimal \(r\in\mathbb{N}\) such that \(M\) has a \(\mathrm{Spin}^{r}\) structure.
**Definition 2.4**.: Let \(M\) be an oriented Riemannian manifold. The Spin type of \(M\), denoted by \(\Sigma(M)\), is the smallest \(r\in\mathbb{N}\) such that \(M\) admits a \(\mathrm{Spin}^{r}\) structure.
The Spin type measures the failure of a manifold to be Spin. For example, every Spin manifold has Spin type \(1\), and \(\Sigma\left(\mathbb{CP}^{2n}\right)=2\) for all \(n\geq 1\). Every oriented Riemannian manifold of dimension \(\leq 4\) has Spin type \(\leq 3\)[26]. In [2, 3], the authors show that every (compact) oriented Riemannian manifold of dimension \(\leq 5\) (\(\leq 7\)) has Spin type \(\leq 3\), and that, in every dimension \(\geq 8\), there is a
closed manifold with Spin type \(\geq 4\). For example, the product of the Wu manifold with itself is a \(10\)-dimensional manifold with Spin type \(\geq 4\) - see [2].
**Proposition 2.5** ([2], Proposition 3.2).: The following are equivalent for an orientable Riemannian manifold \(M\):
1. \(M\) is \(\operatorname{Spin}^{r}\),
2. there is an orientable rank-\(r\) real vector bundle \(E\to M\) such that \(TM\oplus E\) is Spin,
3. \(M\) immerses in a Spin manifold with codimension \(r\),
4. \(M\) embeds in a Spin manifold with codimension \(r\).
This lets us prove that Definition 2.4 is a good definition, in the sense that the Spin type is defined for all oriented Riemannian manifolds:
**Corollary 2.6**.: An oriented Riemannian \(n\)-manifold \(M\) has Spin type \(1\leq\Sigma(M)\leq n\), and so the Spin type is well-defined.
Proof.: By Whitney's embedding theorem, such a manifold can be smoothly embedded into \(\mathbb{R}^{2n}\), which is Spin. Now apply Proposition 2.5. Equivalently, one can take \(E=TM\) in point (2) of Proposition 2.5, and note that \(w_{1}(TM\oplus TM)=0\) and \(w_{2}(TM\oplus TM)=0\), as \(TM\) is oriented.
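For concreteness, the Whitney-formula expansion behind the last step reads (a short worked computation with \(\mathbb{Z}_{2}\) coefficients, spelling out what the proof uses):
\[w_{1}(TM\oplus TM)=2\,w_{1}(TM)=0,\qquad w_{2}(TM\oplus TM)=2\,w_{2}(TM)+w_{1}(TM)^{2}=w_{1}(TM)^{2}=0,\]
where the final equality uses \(w_{1}(TM)=0\), as \(TM\) is oriented, so that \(TM\oplus TM\) is indeed Spin.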
**Remark 2.7**.: Note that, if \(n>1\) and \(M\) is a closed oriented Riemannian \(n\)-manifold, one can improve the bound in Corollary 2.6 to \(1\leq\Sigma(M)\leq n-\alpha(n)\), where \(\alpha(n)\) is the number of ones in the binary expression of \(n\), by Cohen's theorem [8].
There is a different approach to this problem. Let \(M\) be an oriented Riemannian \(n\)-manifold, fix \(x\in M\), and let \(G\) be the holonomy group of \(M\) at \(x\). Let \(h\colon G\to\operatorname{SO}(n)\) be the holonomy representation at \(x\). The positively oriented orthonormal frame bundle of \(M\) admits a reduction of the structure group from \(\operatorname{SO}(n)\) to \(G\) via \(h\). Hence, if \(h\) lifts to \(\operatorname{Spin}^{r}(n)\), we obtain a \(\operatorname{Spin}^{r}\) structure on \(M\) via the associated bundle construction. For \(r=n\), this is always the case, giving a more constructive proof of Corollary 2.6.
**Proposition 2.8** (see [10], Proposition 3.3).: Let \(M\) be an oriented Riemannian \(n\)-manifold, let \(x\in M\), and let \(h\colon G\to\operatorname{SO}(n)\) be the holonomy representation at \(x\). Then, \(h\) lifts to \(\operatorname{Spin}^{n}(n)\), _i.e._, there exists a Lie group homomorphism \(\tilde{h}\colon G\to\operatorname{Spin}^{n}(n)\) such that the diagram
commutes, that is, \(\lambda_{n}^{n}\circ\tilde{h}=h\).
Proof.: The map \(h\times h\colon G\to\operatorname{SO}(n)\times\operatorname{SO}(n)\) always lifts to \(\operatorname{Spin}^{n}(n)\), by Proposition 2.1 and the fact that \(\varphi^{n,n}\) is a covering map.
This construction gives us a naturally preferred \(\operatorname{Spin}^{n}\) structure on any oriented Riemannian \(n\)-manifold. Note that its auxiliary vector bundle is the tangent bundle \(TM\) itself (compare with the proof of Corollary 2.6). It is natural to ask whether this \(\operatorname{Spin}^{n}\) structure on \(M^{n}\) is _proper_, in the sense that it might be induced by a \(\operatorname{Spin}^{r}\) structure on \(M\), for some \(r<n\). To answer this question, we first generalise a result by Moroianu (Lemma 2.1 in [24]), who focused on the case \(r=2\), \(s=1\):
**Lemma 2.9**.: Let \(M\) be an oriented Riemannian \(n\)-manifold admitting a \(\operatorname{Spin}^{r}\) structure, for some \(r\geq 1\). Let \(1\leq s\leq r\). Then, the \(\operatorname{Spin}^{r}\) structure is induced by a \(\operatorname{Spin}^{s}\) structure along \(\iota_{n}^{sr}\) if, and only if, the auxiliary \(\operatorname{SO}(r)\)-bundle of the \(\operatorname{Spin}^{r}\) structure admits a reduction of the structure group to \(\operatorname{SO}(s)\).
Proof.: The proof generalises the one in [24]. One implication is obvious. For the other one, let \(\Sigma\) be a reduced \(\operatorname{SO}(s)\)-subbundle of the auxiliary bundle \(P_{\operatorname{SO}(r)}\) of the \(\operatorname{Spin}^{r}\)-structure. Let \(P_{\operatorname{SO}}(M)\) be the positively-oriented orthonormal frame bundle of \(M\). Then, the preimage of \(P_{\operatorname{SO}}(M)\tilde{\times}\Sigma\) under the map \(\theta\colon P_{\operatorname{Spin}^{r}}\to P_{\operatorname{SO}}(M)\tilde{ \times}P_{\operatorname{SO}(r)}\) is a \(\operatorname{Spin}^{s}\) structure from which the original \(\operatorname{Spin}^{r}\) structure is induced.
A principal \(\operatorname{SO}(r)\)-bundle admits a reduction of the structure group to \(\operatorname{SO}(s)\) if, and only if, the associated rank-\(r\) vector bundle admits \(r-s\) global orthonormal sections. In the construction of Proposition 2.8, the auxiliary rank-\(n\) vector bundle is the tangent bundle \(TM\). Take, for instance, an even-dimensional sphere \(S^{2n}\). By the hairy ball theorem, we know that it does not admit a nowhere-vanishing vector field, so in particular the \(\operatorname{Spin}^{2n}\) structure constructed in Proposition 2.8 is not induced by any \(\operatorname{Spin}^{r}\) structure, for any \(r<2n\). This argument generalises to a wide class of manifolds, namely those closed oriented Riemannian manifolds with non-zero Euler characteristic class. Examples of these include \(S^{2n}\), \(\mathbb{CP}^{n}\), \(\mathbb{HP}^{n}\) and \(\mathbb{OP}^{2}\). For more homogeneous examples, see [27]. Note also that every odd-dimensional manifold admits a nowhere-vanishing vector field, and hence the \(\operatorname{Spin}^{n}\) structure given by the holonomy construction of Proposition 2.8 is induced by a \(\operatorname{Spin}^{n-1}\) structure.
**Corollary 2.10**.: Let \(M\) be an oriented Riemannian \(n\)-manifold. If \(r>n\), every \(\operatorname{Spin}^{r}\) structure on \(M\) is induced by a \(\operatorname{Spin}^{n}\) structure on \(M\) along the inclusion \(\iota_{n}^{n,r}\colon\operatorname{Spin}^{n}(n)\hookrightarrow\operatorname{ Spin}^{r}(n)\).
Proof.: Recall that, if \(r>n\), every rank-\(r\) vector bundle over an \(n\)-manifold has a nowhere-vanishing section [16]. Apply this inductively to the auxiliary vector bundle of a \(\operatorname{Spin}^{r}\) structure on \(M\).
Using Lemma 2.9, one can refine Corollary 2.6:
**Corollary 2.11**.: Let \(n>1\). An oriented Riemannian \(n\)-manifold \(M\) has Spin type \(1\leq\Sigma(M)\leq n-1\).
Proof.: If \(M\) is compact, this follows from Cohen's theorem, as in Remark 2.7. If \(M\) is non-compact, recall that every rank-\(n\) vector bundle over \(M\) has a nowhere-vanishing section [16]. Hence, by Lemma 2.9, every \(\operatorname{Spin}^{n}\) structure on \(M\) comes from a \(\operatorname{Spin}^{n-1}\) structure on \(M\).
## 3. Invariance: Spheres
We now turn to the case where a Lie group \(G\) acts smoothly on a manifold \(M\), and study the notion of \(G\)-invariance of these structures. Later, we will focus on the case where \(G\) acts transitively on \(M\), making it into a homogeneous space.
Let \(M\) be a smooth manifold and let \(F(M)\) be its frame bundle. Let \(f\colon M\to M\) be a diffeomorphism. Suppose \(M\) has a \(G\)-structure \(P_{G}\). We say that the \(G\)-structure is invariant by \(f\) if the induced diffeomorphism \(\overline{f}\) of the frame bundle lifts to a diffeomorphism of the \(G\)-structure, _i.e._, if there exists a diffeomorphism \(\tilde{f}\) of \(P_{G}\) such that the diagram
commutes, that is, such that \(\tilde{f}\) is the restriction of \(\overline{f}\) to \(P_{G}\subseteq F(M)\). For example, an orientation (resp. Riemannian metric, almost-complex structure) is invariant by \(f\) if, and only if, \(f\) is orientation-preserving (resp. an isometry, \(df\) is complex-linear with respect to the almost-complex structure).
**Definition 3.1**.: Let \(K\) be a Lie group acting smoothly on a smooth manifold \(M\) from the left. A \(G\)-structure on \(M\) is \(K\)-invariant if it is invariant by \(L_{k}\) for every \(k\in K\), where \(L_{k}\colon M\to M\) is the left action of \(k\) on \(M\).
Let \(G\) be a Lie group, \(H\subseteq G\) a closed subgroup and \((M=G/H,g)\) an \(n\)-dimensional oriented Riemannian homogeneous space (so \(g\) is a \(G\)-invariant Riemannian metric on \(M\), and \(G\) acts on \(M\) by orientation-preserving isometries). Then, \(G/H\) is reductive. Let \(\mathfrak{m}\) be a reductive (_i.e._\(\operatorname{Ad}_{H}\)-invariant) complement of the Lie algebra \(\mathfrak{h}\) of \(H\) in the Lie algebra \(\mathfrak{g}\) of \(G\). The positively oriented orthonormal frame bundle is
\[P_{SO}(M)\cong G\times_{\sigma}\operatorname{SO}(\mathfrak{m})\,,\]
where \(\sigma\colon H\to\operatorname{SO}(\mathfrak{m})\) is the isotropy representation [9, 1]. By fixing an orthonormal basis of \(\mathfrak{m}\), we can identify \(\operatorname{SO}(n)\cong\operatorname{SO}(\mathfrak{m})\) and \(\operatorname{Spin}^{r}\left(\mathfrak{m}\right)\cong\operatorname{Spin}^{r} (n)\).
Now, we generalise the characterisation in [9]:
**Theorem 3.2**.: Let \(G/H\) be an \(n\)-dimensional oriented Riemannian homogeneous space. For each \(r\in\mathbb{N}\), \(G/H\) has a \(G\)-invariant \(\operatorname{Spin}^{r}\) structure if, and only if, the isotropy representation \(\sigma\colon H\to\operatorname{SO}(n)\) lifts to \(\operatorname{Spin}^{r}(n)\), _i.e._, there exists a Lie group homomorphism \(\tilde{\sigma}\colon H\to\operatorname{Spin}^{r}(n)\) such that
the diagram commutes, that is, \(\lambda_{n}^{r}\circ\tilde{\sigma}=\sigma\).
Proof.: It is essentially the same as in Proposition 1.3 in [9]. For the sake of completeness, we reproduce it here. Assume first that the isotropy representation lifts to \(\operatorname{Spin}^{r}(n)\). Then, \(G\times_{\tilde{\sigma}}\operatorname{Spin}^{r}(n)\) clearly defines a \(G\)-invariant \(\operatorname{Spin}^{r}\) structure on \(G/H\).
Conversely, suppose that there exists a \(G\)-invariant \(\operatorname{Spin}^{r}\) structure \(P\to G/H\). Then, \(P\) admits a transitive action of \(G\times\operatorname{Spin}^{r}(n)\), where \(G\) acts via the equivariant structure - here we use \(G\)-invariance of \(P\) - and \(\operatorname{Spin}^{r}(n)\) acts by its right action on the fibres. Let \(x\in G/H\) be a point with isotropy \(H\) in \(G\). Let \(p\in P\) with projection \(x\). The first projection \(G\times\operatorname{Spin}^{r}(n)\to G\) induces an isomorphism from the isotropy \(H_{p}\) of \(p\) in \(G\times\operatorname{Spin}^{r}(n)\) to the isotropy \(H\) of \(x\) in \(G\). This is because \(\operatorname{Spin}^{r}(n)\) acts simply transitively on the fibres of \(P\to G/H\), so that for every element \(h\in H\) there exists a unique \(\mu\in\operatorname{Spin}^{r}(n)\) such that \((h,\mu)\in H_{p}\). The inverse isomorphism \(H\to H_{p}\subseteq G\times\operatorname{Spin}^{r}(n)\) gives, by projection to the second factor, a homomorphism \(H\to\operatorname{Spin}^{r}(n)\). It is clear that, when we project this further to \(\operatorname{SO}(n)\), we obtain the isotropy action on the frame bundle at \(x\).
In the classical Spin case \(r=1\), if such a structure exists, it is unique. However, for \(r>1\), one can have a priori more invariant \(\operatorname{Spin}^{r}\) structures, even if \(H\) is connected. This is because \(\lambda_{n}^{r}\) is not a covering map for \(r>1\).
We now introduce a \(G\)-invariant analogue of the Spin type of Definition 2.4:
**Definition 3.3**.: Let \(K\) be a Lie group acting smoothly by orientation-preserving isometries on an oriented Riemannian manifold \(M\). The \(K\)-invariant Spin type of \(M\), denoted by \(\Sigma(M,K)\), is the least \(r\in\mathbb{N}\) such that \(M\) admits a \(K\)-invariant \(\operatorname{Spin}^{r}\) structure.
For example, we know that the unique Spin structure of \(S^{n}\) is not \(G\)-invariant for \(G=\operatorname{SO}(n+1),\operatorname{U}(n+1)\)[9], _i.e._, \(\Sigma(S^{n},\operatorname{SO}(n+1)),\Sigma(S^{n},\operatorname{U}(n+1))>1\). But there might exist a \(G\)-invariant \(\operatorname{Spin}^{\mathbb{C}}\) structure. Let us consider an example, as a motivation for our next result.
Take the 2-dimensional sphere \(S^{2}\cong\operatorname{SO}(3)/\operatorname{SO}(2)\). We know [9] that the isotropy representation \(\sigma\colon\operatorname{SO}(2)\to\operatorname{SO}(2)\) is the identity, and that, for topological reasons, it does not lift to \(\operatorname{Spin}(2)\), so the unique Spin structure of \(S^{2}\) is not \(\operatorname{SO}(3)\)-invariant. However, we can build a lift of \(\sigma\) to \(\operatorname{Spin}^{\mathbb{C}}(2)\) quite easily. Take \(\gamma\colon\mathbb{R}\to\operatorname{Spin}(2)\) the curve defined by \(\gamma(t)=\cos(2t)+\sin(2t)e_{1}e_{2}\), which is a well-defined curve in \(\operatorname{Spin}(2)\) satisfying \(\gamma(0)=1\) and \(\gamma(\pi/2)=-1\). Define \(\tilde{\sigma}\colon\operatorname{SO}(2)\to\operatorname{Spin}^{\mathbb{C}}(2)\) by
\[\tilde{\sigma}\left(e^{it}\right)=\left[\gamma(t),\gamma(t)\right]\,.\]
It is easy to see that this is a well-defined Lie group homomorphism and that it is a lift of \(\sigma\). So \(S^{2}\) does not have an \(\operatorname{SO}(3)\)-invariant Spin structure, but it does have an \(\operatorname{SO}(3)\)-invariant \(\operatorname{Spin}^{\mathbb{C}}\) structure. Hence, \(S^{2}\) has \(\operatorname{SO}(3)\)-invariant Spin type 2, _i.e._, \(\Sigma(S^{2},\operatorname{SO}(3))=2\). We can mimic this to build an \(\operatorname{SO}(n+1)\)-invariant \(\operatorname{Spin}^{n}(n)\) structure on \(S^{n}\) as follows. Consider the diagram
\[\begin{CD}@. @. \operatorname{Spin}^{n}(n)\\ @. @. @VV{\varphi^{n,n}}V\\ \operatorname{SO}(n)@>{\sigma=\mathrm{id}}>>\operatorname{SO}(n)@>{\Delta}>>\operatorname{SO}(n)\times\operatorname{SO}(n)\end{CD}\]
where \(\Delta\) is the diagonal map. Then, by Proposition 2.1, \(\Delta\circ\sigma\) lifts to \(\operatorname{Spin}^{n}(n)\), yielding an \(\operatorname{SO}(n+1)\)-invariant \(\operatorname{Spin}^{n}\) structure on \(S^{n}\). Hence, for \(n\geq 2\), the \(\operatorname{SO}(n+1)\)-invariant Spin type of \(S^{n}\) satisfies \(2\leq\Sigma(S^{n},\operatorname{SO}(n+1))\leq n\). In fact, we have the following result:
**Theorem 3.4**.: The \(\operatorname{SO}(n+1)\)-invariant Spin type of \(S^{n}\) is \(n\), for \(n\neq 4\), and \(3\), for \(n=4\) (cfr. Proposition 3.9 in [2]). In other words,
\[\Sigma(S^{n},\operatorname{SO}(n+1))=\begin{cases}n,&n\neq 4\\ 3,&n=4\,.\end{cases}\]
Proof.: The case \(n=1\) is obvious, and the case \(n=2\) has been covered in the previous discussion. Suppose \(n\neq 1,2,4\). Let \(2\leq r<n\), and suppose \(S^{n}\) has an \(\operatorname{SO}(n+1)\)-invariant \(\operatorname{Spin}^{r}\) structure. Then, there exists a lift \(\tilde{\sigma}\colon\operatorname{SO}(n)\to\operatorname{Spin}^{r}(n)\) of the isotropy representation, _i.e._, a Lie group homomorphism satisfying
\[\lambda_{n}^{r}\circ\tilde{\sigma}=\sigma\,.\]
Now consider the Lie group homomorphism \(\operatorname{pr}_{2}\circ\varphi^{r,n}\circ\tilde{\sigma}\colon\operatorname {SO}(n)\to\operatorname{SO}(r)\). This induces a Lie algebra homomorphism \(\mathfrak{so}(n)\to\mathfrak{so}(r)\). The kernel of this map is an ideal of \(\mathfrak{so}(n)\), which is a simple Lie algebra (as \(n\geq 3\) and \(n\neq 4\)), and hence it is either the whole of \(\mathfrak{so}(n)\) or \(0\). As \(r<n\), it cannot be \(0\) for dimensional reasons. Hence, it has to be the whole of \(\mathfrak{so}(n)\). As \(\operatorname{SO}(n)\) is connected, the Lie exponential map generates the whole group, and hence the Lie group homomorphism has to be trivial. Now, let \(\alpha_{k}\) be a generator of the fundamental group of \(\operatorname{SO}(k)\). Then, \(\alpha_{n}\in\pi_{1}(\operatorname{SO}(n))\)
goes, on the one hand, to \((\alpha_{n},0)\in\pi_{1}(\operatorname{SO}(n)\times\operatorname{SO}(r))\) and, on the other hand, to \((\alpha_{n},s\alpha_{r})\), for some odd \(s\), by Proposition 2.1, which is a contradiction.
Now let us consider the case \(n=4\). In this case, the Lie algebra \(\mathfrak{so}(4)\cong\mathfrak{so}(3)\oplus\mathfrak{so}(3)\), so it is not simple, and the previous argument does not work. First, let us show that \(S^{4}\) does not admit an \(\operatorname{SO}(5)\)-invariant \(\operatorname{Spin}^{2}\) structure. Indeed, if such a structure existed, as before, we would have a Lie group homomorphism \(\operatorname{SO}(4)\to\operatorname{SO}(2)\), and hence a Lie algebra homomorphism \(\mathfrak{so}(3)\oplus\mathfrak{so}(3)\to\mathfrak{so}(2)\). The kernel of this map is an ideal of \(\mathfrak{so}(3)\oplus\mathfrak{so}(3)\), and so it is either \(0\), \(\mathfrak{so}(3)\) or the whole of \(\mathfrak{so}(3)\oplus\mathfrak{so}(3)\). It cannot be \(0\) for dimensional reasons. If the kernel is the whole Lie algebra, then the Lie group homomorphism has to be trivial, as before, and hence concluding that the isotropy cannot lift to \(\operatorname{Spin}^{2}(4)\), by Proposition 2.1. So, the only possibility is that the kernel is \(\mathfrak{so}(3)\), which again is impossible for dimensional reasons. Hence, the \(\operatorname{SO}(5)\)-invariant Spin type of \(S^{4}\) is either \(3\) or \(4\). We shall show now that it is \(3\).
We know that \(\operatorname{SO}(4)/\mathbb{Z}_{2}\cong\operatorname{SO}(3)\times\operatorname {SO}(3)\). Hence, we have a chain of Lie group homomorphisms:
\[\operatorname{SO}(4)\xrightarrow{\pi}\operatorname{SO}(4)/\mathbb{Z}_{2} \xrightarrow{\cong}\operatorname{SO}(3)\times\operatorname{SO}(3)\xrightarrow {\operatorname{pr}_{i}}\operatorname{SO}(3)\,.\]
At the level of fundamental groups, a generator of \(\pi_{1}\left(\operatorname{SO}(4)\right)\) goes to a generator of \(\pi_{1}\left(\operatorname{SO}(3)\right)\), for a suitable choice of \(i\in\{1,2\}\). Hence, the isotropy representation lifts to \(\operatorname{Spin}^{3}(4)\).
This is an instance of a more general result:
**Theorem 3.5**.: Let \(n\geq 3\) and let \(G/H\) be an \(n\)-dimensional oriented Riemannian homogeneous space. Then, there is either a canonical \(G\)-invariant Spin structure on \(G/H\) or a canonical \(G\)-invariant \(\operatorname{Spin}^{n}\) structure on \(G/H\). In particular, its \(G\)-invariant Spin type satisfies \(1\leq\Sigma(G/H,G)\leq n\).
Proof.: Let \(\sigma\colon H\to\operatorname{SO}(n)\) be its isotropy representation. The induced homomorphism at the level of fundamental groups is \(\sigma_{\sharp}\colon\pi_{1}(H)\to\pi_{1}\left(\operatorname{SO}(n)\right)\cong \mathbb{Z}_{2}\). If \(\sigma_{\sharp}\) is trivial, then the isotropy representation lifts to \(\operatorname{Spin}(n)\), and hence \(G/H\) has a (unique) \(G\)-invariant Spin structure. If \(\sigma_{\sharp}\) is not trivial, then consider \(\sigma\times\sigma\colon H\to\operatorname{SO}(n)\times\operatorname{SO}(n)\). Then, \((\sigma\times\sigma)_{\sharp}\left(\pi_{1}(H)\right)\subseteq\langle(1,1) \rangle\subseteq\mathbb{Z}_{2}\times\mathbb{Z}_{2}\cong\pi_{1}\left( \operatorname{SO}(n)\times\operatorname{SO}(n)\right)\), and \((\varphi^{n,n})_{\sharp}\left(\pi_{1}\left(\operatorname{Spin}^{n}(n)\right) \right)=\langle(1,1)\rangle\), by Proposition 2.1. Hence, the isotropy representation lifts to \(\operatorname{Spin}^{n}(n)\), yielding a \(G\)-invariant \(\operatorname{Spin}^{n}\) structure on \(G/H\).
Note that Theorem 3.4 shows that the bound in Theorem 3.5 is sharp (compare with Corollary 2.11). Moreover, recall that, if the isotropy \(H\) is connected and \(G/H\) has a \(G\)-invariant Spin structure, then such a structure is unique [9]. This has the advantage of singling out a _preferred_ Spin structure to work with. We observed that, in general, we cannot expect to have a unique \(G\)-invariant \(\operatorname{Spin}^{r}\) structure. Theorem 3.5 gives a \(G\)-invariant \(\operatorname{Spin}^{n}\) structure which is built in a very natural way using only the isotropy representation. So, in general, an \(n\)-dimensional oriented Riemannian homogeneous \(G\)-space comes equipped with a _natural_\(G\)-invariant \(\operatorname{Spin}^{n}\) structure.
Let us now continue with the study of homogeneous spheres. We know that the unique Spin structure of \(S^{2n+1}\) is not \(\operatorname{U}(n+1)\)-invariant [9]. So, the \(\operatorname{U}(n+1)\)-invariant Spin type of \(S^{2n+1}\) is not \(1\). However, we have the following:
**Theorem 3.6**.: The \(\mathrm{U}(n+1)\)-invariant Spin type of \(S^{2n+1}\) is \(2\):
\[\Sigma(S^{2n+1},\mathrm{U}(n+1))=2\,.\]
Proof.: The isotropy representation \(\sigma\colon\,\mathrm{U}(n)\to\mathrm{SO}(2n+1)\) is given by the natural inclusions \(\mathrm{U}(n)\subseteq\mathrm{SO}(2n)\subseteq\mathrm{SO}(2n+1)\). Consider the map \(\sigma\times\det\colon\,\mathrm{U}(n)\to\mathrm{SO}(2n+1)\times\mathrm{SO}(2)\), given by \((\sigma\times\det)(A)=(\sigma(A),\det(A))\). The image of a generator of the fundamental group of \(\mathrm{U}(n)\) is an element of the form \((\alpha_{2n+1},\alpha_{2})\), where \(\alpha_{s}\) is a generator of the fundamental group of \(\mathrm{SO}(s)\). Hence, by Proposition 2.1, this map lifts to \(\mathrm{Spin}^{2}(2n+1)\), yielding a \(\mathrm{U}(n+1)\)-invariant \(\mathrm{Spin}^{2}\) structure on \(S^{2n+1}\).
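One way to see the claim about generators explicitly: a generator of \(\pi_{1}(\mathrm{U}(n))\) is the loop \(t\mapsto\operatorname{diag}(e^{it},1,\dots,1)\), \(t\in[0,2\pi]\), and under \(\sigma\times\det\) it becomes a rotation loop in a fixed \(2\)-plane of \(\mathbb{R}^{2n+1}\) paired with the full loop of \(\mathrm{SO}(2)\), which represent \(\alpha_{2n+1}\) and \(\alpha_{2}\) respectively:
\[\operatorname{diag}(e^{it},1,\dots,1)\;\longmapsto\;\bigl(R(t)\oplus I_{2n-1},\,R(t)\bigr),\qquad R(t)=\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}.\]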
We know that the \(\mathrm{Sp}(n+1)\cdot\mathrm{U}(1)\)-invariant Spin type of \(S^{4n+3}\) is \(1\) for \(n\) odd, and that it is not \(1\) for \(n\) even [9].
**Theorem 3.7**.: The \(\mathrm{Sp}(2n+1)\cdot\mathrm{U}(1)\)-invariant Spin type of \(S^{8n+3}\) is \(2\). Hence,
\[\Sigma(S^{4n+3},\mathrm{Sp}(n+1)\cdot\mathrm{U}(1))=\begin{cases}1,&n\text{ odd}\\ 2&n\text{ even}\end{cases}.\]
Proof.: Take the description of the isotropy representation \(\sigma\colon\,\mathrm{Sp}(2n)\cdot\mathrm{U}(1)\to\mathrm{SO}(8n+3)\) given in [9]. Then, the Lie group homomorphism \(\mathrm{Sp}(2n)\cdot\mathrm{U}(1)\to\mathrm{U}(1)\) given by \([A,z]\mapsto z^{2}\) is well-defined, and, by looking at generators of the corresponding fundamental groups and applying Proposition 2.1, one concludes.
We also know that the \(\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1)\)-invariant Spin type of \(S^{4n+3}\) is \(1\) for \(n\) odd, and that it is not \(1\) for \(n\) even [9]. Now, we obtain:
**Theorem 3.8**.: The \(\mathrm{Sp}(2n+1)\cdot\mathrm{Sp}(1)\)-invariant Spin type of \(S^{8n+3}\) is \(3\). Hence,
\[\Sigma(S^{4n+3},\mathrm{Sp}(n+1)\cdot\mathrm{Sp}(1))=\begin{cases}1,&n\text{ odd}\\ 3&n\text{ even}\end{cases}.\]
Proof.: Take the description of the isotropy representation \(\sigma\colon\,\mathrm{Sp}(2n)\cdot\mathrm{Sp}(1)\to\mathrm{SO}(8n+3)\) in [9]. Then, the Lie group homomorphism \(\mathrm{Sp}(2n)\cdot\mathrm{Sp}(1)\cong\mathrm{Sp}(2n)\cdot\mathrm{Spin}(3)\to \mathrm{SO}(3)\) given by \([A,z]\mapsto\lambda_{3}(z)\) is well-defined, and by looking at generators of the corresponding fundamental groups one concludes that the \(\mathrm{Sp}(2n+1)\cdot\mathrm{Sp}(1)\)-invariant Spin type of \(S^{8n+3}\) is either \(2\) or \(3\). But it cannot be \(2\), because then we would have a non-trivial Lie group homomorphism \(\mathrm{SO}(3)\to\mathrm{SO}(2)\), which is impossible, as in the proof of Proposition 2.1.
We summarise our results in Table 1, which shows the \(G\)-invariant Spin type of the spheres for all connected Lie groups \(G\) acting transitively and effectively on them [22], thus completing the study of invariant generalised Spin structures on homogeneous spheres.
Table 1 has an interesting application, in analogy with Proposition 1.6 in [9]:
**Proposition 3.9**.: Let \(G\) be the holonomy group of a simply connected irreducible non-symmetric Riemannian manifold of dimension \(n+1\geq 3\). Let \(H\leq G\) be a subgroup such that \(S^{n}\cong G/H\), which exists, by Berger's classification. If there exists a lift of the holonomy representation \(h\colon G\to\mathrm{SO}(n+1)\) to \(\mathrm{Spin}^{r}(n+1)\), then \(S^{n}\) has a \(G\)-invariant \(\mathrm{Spin}^{r}\) structure. For \(r=1\), the converse is also true (Proposition 1.6 in [9]).
Proof.: Suppose that the holonomy representation \(h\) lifts to \(\operatorname{Spin}^{r}(n+1)\), _i.e._, that there exists a Lie group homomorphism \(\tilde{h}\colon G\to\operatorname{Spin}^{r}(n+1)\) such that \(\lambda_{n+1}^{r}\circ\tilde{h}=h\), _i.e._, such that the corresponding lifting diagram
commutes. At the level of fundamental groups, \(\left(\varphi^{r,n}\circ\tilde{h}\right)_{\tilde{\sharp}}(\pi_{1}(G))\subseteq \left\langle(1,1)\right\rangle\). Now, let \(\sigma\colon H\to\operatorname{SO}(n)\) be the isotropy representation of the corresponding homogeneous sphere. Let \(\iota\colon H\to G\) be the inclusion. Define \(\psi\colon G\to\operatorname{SO}(r)\) by \(\psi=\operatorname{pr}_{2}\circ\varphi^{r,n}\circ\tilde{h}\), and consider \(\sigma\times\left(\psi\circ\iota\right)\colon H\to\operatorname{SO}(n)\times \operatorname{SO}(r)\). We will see now that this lifts to \(\operatorname{Spin}^{r}(n)\), yielding a \(G\)-invariant \(\operatorname{Spin}^{r}\) structure on \(S^{n}=G/H\). Indeed, we have a commutative diagram
where the vertical arrows come from the long exact sequence of homotopy groups of the spheres \(G/H\) and \(\operatorname{SO}(n+1)/\operatorname{SO}(n)\) respectively, and hence they are isomorphisms. One concludes using Proposition 2.1.
| Space \(M\) | Group \(G\) | \(\Sigma(M,G)\) |
| --- | --- | --- |
| \(S^{n}\) | \(\operatorname{SO}(n+1)\) | \(n\) for \(n\neq 4\); \(3\) for \(n=4\) |
| \(S^{2n+1}\) | \(\operatorname{U}(n+1)\) | \(2\) |
| \(S^{2n+1}\) | \(\operatorname{SU}(n+1)\) | \(1\) |
| \(S^{4n+3}\) | \(\operatorname{Sp}(n+1)\) | \(1\) |
| \(S^{4n+3}\) | \(\operatorname{Sp}(n+1)\cdot\operatorname{U}(1)\) | \(1\) for \(n\) odd; \(2\) for \(n\) even |
| \(S^{4n+3}\) | \(\operatorname{Sp}(n+1)\cdot\operatorname{Sp}(1)\) | \(1\) for \(n\) odd; \(3\) for \(n\) even |
| \(S^{6}\) | \(G_{2}\) | \(1\) |
| \(S^{7}\) | \(\operatorname{Spin}(7)\) | \(1\) |
| \(S^{15}\) | \(\operatorname{Spin}(9)\) | \(1\) |

Table 1. \(G\)-invariant Spin type of homogeneous spheres.
This result puts the seemingly specific study of invariant generalised Spin structures on spheres in a much wider context. For example, as a consequence of Proposition 3.9 and Table 1, the holonomy representation of a simply connected irreducible non-symmetric \((8k+4)\)-dimensional quaternionic Kähler manifold (or more generally any \((8k+4)\)-dimensional manifold with holonomy containing \(\operatorname{Sp}(2k+1)\cdot\operatorname{Sp}(1)\)) does not lift to \(\operatorname{Spin}^{\mathbb{C}}(8k+4)\).
|
2308.11734 | Dynamic Compact Data Structure for Temporal Reachability with Unsorted
Contact Insertions | Temporal graphs represent interactions between entities over time. Deciding
whether entities can reach each other through temporal paths is useful for
various applications such as in communication networks and epidemiology.
Previous works have studied the scenario in which addition of new interactions
can happen at any point in time. A known strategy maintains, incrementally, a
Timed Transitive Closure by using a dynamic data structure composed of $O(n^2)$
binary search trees containing non-nested time intervals. However, space usage
for storing these trees grows rapidly as more interactions are inserted. In
this paper, we present a compact data structures that represent each tree as
two dynamic bit-vectors. In our experiments, we observed that our data
structure improves space usage while having similar time performance for
incremental updates when comparing with the previous strategy in temporally
dense temporal graphs. | Luiz Fernando Afra Brito, Marcelo Keese Albertini, Bruno Augusto Nassif Travençolo, Gonzalo Navarro | 2023-08-22T18:46:38Z | http://arxiv.org/abs/2308.11734v1 | # Dynamic Compact Data Structure for Temporal Reachability with Unsorted Contact Inser
###### Abstract
Temporal graphs represent interactions between entities over time. Deciding whether entities can reach each other through temporal paths is useful for various applications such as in communication networks and epidemiology. Previous works have studied the scenario in which addition of new interactions can happen at any point in time. A known strategy maintains, incrementally, a Timed Transitive Closure by using a dynamic data structure composed of \(O(n^{2})\) binary search trees containing non-nested time intervals. However, space usage for storing these trees grows rapidly as more interactions are inserted. In this paper, we present a compact data structures that represent each tree as two dynamic bit-vectors. In our experiments, we observed that our data structure improves space usage while having similar time performance for incremental updates when comparing with the previous strategy in temporally dense temporal graphs.
## 1 Introduction
Temporal graphs represent interactions between entities over time. These interactions often appear in the form of contacts at specific timestamps. Moreover, entities can also interact indirectly with each other by chaining several contacts over time. For example, in a communication network, devices that are physically connected can send new messages or propagate received ones; thus, by first sending a new message and, then, repeatedly propagating messages over time, remote entities can communicate indirectly. Time-respecting paths in temporal graphs are known as temporal paths, or simply journeys, and when a journey exists from one entity to another, we say that the first can reach the second.
In a computational environment, it is often useful to check whether entities can reach each other while using low space. Investigations on temporal reachability have been used, for instance, for characterizing mobile and social networks [19, 13], and for validating protocols and better understanding communication networks [5, 20]. Some other applications require the ability to reconstruct a concrete journey if one exists. Journey reconstruction has been used in applications such as finding and visualizing detailed trajectories in transportation networks [21, 10, 24], and
matching temporal patterns in temporal graph databases [14, 11]. In all these applications, low space usage is important because it allows the maintenance of larger temporal graphs in primary memory.
In [2, 20], the authors considered updating reachability information given a chronologically sorted sequence of contacts. In this problem, a standard Transitive Closure (TC) is maintained as new contacts arrive. In contrast, in [4, 22], the authors studied the problem in which sequences of contacts may be chronologically unsorted and queries may be intermixed with update operations. For instance, during scenarios of epidemics, outdated information containing interaction details among infected and non-infected individuals is reported in arbitrary order, and the dissemination process is continually queried in order to take appropriate measures against contamination [16, 23, 9, 18].
Of particular interest to us, the data structure proposed in [4] maintains a Timed Transitive Closure (TTC), a generalization of a TC that takes time into consideration. It maintains well-chosen sets of time intervals describing departure and arrival timestamps of journeys in order to support time-related queries and enable incremental updates on the data structure. The key idea is that each set associated with a pair of vertices contains only non-nested time intervals, which is sufficient to implement all the TTC operations. This previous data structure maintains only \(O(n^{2}\tau)\) intervals (as opposed to \(O(n^{2}\tau^{2})\)) using \(O(n^{2})\) dynamic Binary Search Trees (BSTs). Although the reduction in the number of intervals is interesting, the space needed to maintain \(O(n^{2})\) BSTs containing \(O(\tau)\) intervals each can still be prohibitive for large temporal graphs.
In this paper, we propose a dynamic compact data structure to represent TTCs incrementally while answering reachability queries. Our new data structure maintains each set of non-nested time intervals as two dynamic bit-vectors, one for departure and the other for arrival timestamps. Each dynamic bit-vector uses the same data layout introduced in [17], which resembles a B\({}^{+}\)-tree [1] with static bit-vectors as leaf nodes. In this work, we used a raw bit-vector representation on leaves that stores bits as a sequence of integer words. Our new algorithms follow the same time complexities as the previous strategy; however, in our experiments, the space needed to maintain our data structure is much smaller on temporally dense temporal graphs. Encoding [8] or packing [12] the distances between 1's on leaves may improve the efficiency on temporally very sparse temporal graphs.
### Organization of the document
This paper is organized as follows. In Section 2, we briefly review the Timed Transitive Closure, the data structure introduced in [4], and the dynamic bit-vector proposed by [17]. In Section 3, we describe our data structure along with the algorithms for each operation. In Section 4, we conduct some experiments comparing our data structure with the previous work [4]. Finally, Section 5 concludes with some remarks and open questions, such as the use of encoding or packing techniques for temporally very sparse temporal graphs.
## 2 Background
### Timed Transitive Closure
Following the formalism in [7], a temporal graph is represented by a tuple \(\mathcal{G}=(V,E,\mathcal{T},\rho,\zeta)\) where: \(V\) is a set of vertices; \(E\subseteq V\times V\) is a set of edges; \(\mathcal{T}\) is the time interval over which the temporal
graph exists (lifetime); \(\rho:E\times\mathcal{T}\rightarrow\{0,1\}\) is a function that expresses whether a given edge is present at a given time instant; and \(\zeta:E\times\mathcal{T}\rightarrow\mathbb{T}\) is a function that expresses the duration of an interaction for a given edge at a given time, where \(\mathbb{T}\) is the time domain. In this paper, we consider a setting where \(E\) is a set of directed edges, \(\mathbb{T}\) is discrete such that \(\mathcal{T}=[1,\tau]\subseteq\mathbb{T}\) is the lifetime containing \(\tau\) timestamps, and \(\zeta=\delta\), where \(\delta\) is any fixed positive integer. Additionally, we call \((u,v,t)\) a contact in \(\mathcal{G}\) if \(\rho((u,v),t)=1\).
Reachability in temporal graphs can be defined in terms of journeys. A journey from \(u\) to \(v\) in \(\mathcal{G}\) is a sequence of contacts \(\mathcal{J}=\langle c_{1},c_{2},\ldots,c_{k}\rangle\), whose sequence of underlying edges form a valid \((u,v)\)-path in the underlying graph \(G\) and, for each contact \(c_{i}=(u_{i},v_{i},t_{i})\), it holds that \(t_{i+1}\geq t_{i}+\delta\) for \(i\in[1,k-1]\). Throughout this article we use \(departure(\mathcal{J})=t_{1}\), and \(arrival(\mathcal{J})=t_{k}+\delta\). Thus, a vertex \(u\) can reach a vertex \(v\) within time interval \([t_{1},t_{2}]\) iff there exists a journey \(\mathcal{J}\) from \(u\) to \(v\) such that \(t_{1}\leq departure(\mathcal{J})\leq arrival(\mathcal{J})\leq t_{2}\).
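As a small, self-contained illustration of these definitions (our own toy code, not from the paper), the following checks the chaining and timing conditions of a journey with \(\delta=1\) and computes its departure and arrival:

```python
# Toy check (not from the paper): a journey is a sequence of contacts whose
# edges chain head-to-tail and whose timestamps satisfy t_{i+1} >= t_i + delta.
# (We only check these two conditions, not properties of the underlying path.)
def is_journey(contacts, delta=1):
    for (u1, v1, t1), (u2, v2, t2) in zip(contacts, contacts[1:]):
        if v1 != u2 or t2 < t1 + delta:
            return False
    return True

def departure(contacts):
    return contacts[0][2]          # timestamp of the first contact

def arrival(contacts, delta=1):
    return contacts[-1][2] + delta # last timestamp plus the duration delta

J = [("u", "v", 1), ("v", "w", 3)]
print(is_journey(J), departure(J), arrival(J))  # True 1 4
```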
In [4], the authors introduced the Timed Transitive Closure (TTC), a transitive closure that captures the reachability information of a temporal graph within all possible time intervals. Informally, the TTC of a temporal graph \(\mathcal{G}\) is a multigraph with time interval labels on edges. Each time interval expresses the \(departure(\mathcal{J})\) and \(arrival(\mathcal{J})\) timestamps of a journey \(\mathcal{J}\) in \(\mathcal{G}\) as its left and right boundaries, respectively. This additional information allows answering reachability queries parametrized by time intervals and also deciding if a new contact occurring anywhere in history can be composed with existing journeys. Furthermore, a TTC needs at most \(\tau\) edges (in the same direction) between two vertices instead of \(\tau^{2}\) to perform basic operations. The key idea is that each set of intervals from these edge labels can be reduced to a set containing only non-nested time intervals. For instance, in the contrived example shown in Figure 1, we can see that, even though the information of an existing journey in the temporal graph was discarded in the corresponding TTC, a reachability query that could be satisfied by a "larger" interval can also be satisfied by a "smaller" nested interval.
Their data structure encodes TTCs as \(n\times n\) matrices, in which every entry \((u,v)\) points to a self-balanced Binary Search Tree (BST) denoted by \(T(u,v)\). Each tree \(T(u,v)\) contain up to \(\tau\) intervals corresponding to the reduced edge labels from vertex \(u\) to vertex \(v\) in the TTC. As all these intervals are non-nested, one can use any of their boundaries (departure or arrival) as sorting key. This data structure supports the following operations: add_contact(_u, v, t_), which updates information based on a new contact \((u,v,t)\); can_reach(_u, v, t_1, t_2), which returns true if \(u\) can reach \(v\) within the interval \([t_{1},t_{2}]\); is_connected(_t\({}_{1}\), t\({}_{2}\)_), which returns true if \(\mathcal{G}\), restricted to the
Figure 1: On the left, a temporal graph \(\mathcal{G}\) on four vertices \(V=\{a,b,c,d\}\), where the presence times of edges are depicted by labels. For \(\delta=1\), this temporal graph has only two non-trivial journeys, _i.e._ journeys with more than one contact, namely \(\mathcal{J}_{1}=\langle(a,b,1),(b,d,4)\rangle\) and \(\mathcal{J}_{2}=\langle(a,b,2),(b,d,4)\rangle\). On the right, the corresponding Timed Transitive Closure (TTC). Note that only the interval \(\mathcal{I}_{2}=[2,4]\), regarding \(\mathcal{J}_{2}\), is depicted on the edge from \(a\) to \(d\) because the other possibility, \(\mathcal{I}_{1}=[1,4]\), regarding \(\mathcal{J}_{1}\), encloses \(I_{2}\). A query to check whether \(a\) reaches \(d\) within the time interval \(\mathcal{I}_{1}\) can also be satisfied by using \(\mathcal{I}_{2}\).
interval \([t_{1},t_{2}]\), is temporally connected, _i.e._, all vertices can reach each other within \([t_{1},t_{2}]\); and reconstruct_journey(\(u\), \(v\), \(t_{1}\), \(t_{2}\)), which returns a journey (if one exists) from \(u\) to \(v\) occurring within the interval \([t_{1},t_{2}]\). All these operations can be implemented using the following BST primitives, where \(T_{(u,v)}\) is a BST containing reachability information regarding journeys from \(u\) to \(v\): find_prev(\(T_{(u,v)}\), \(t\)), which retrieves from \(T_{(u,v)}\) the latest interval \([t^{-},t^{+}]\) such that \(t^{+}\leq t\), if any, and nil otherwise; find_next(\(T_{(u,v)}\), \(t\)), which retrieves from \(T_{(u,v)}\) the earliest interval \([t^{-},t^{+}]\) such that \(t^{-}\geq t\), if any, and nil otherwise; and insert(\(T_{(u,v)}\), \(t^{-}\), \(t^{+}\)), which inserts into \(T_{(u,v)}\) a new interval \(\mathcal{I}=[t^{-},t^{+}]\) if no other interval \(\mathcal{I}^{\prime}\) such that \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\) is already stored, while removing all intervals \(\mathcal{I}^{\prime\prime}\) such that \(\mathcal{I}\subseteq\mathcal{I}^{\prime\prime}\).
The algorithm for add_contact(\(u\), \(v\), \(t\)) manages the insertion of a new contact \((u,v,t)\) as follows. First, the interval \([t,t+\delta]\), corresponding to the trivial journey \(\mathcal{J}\) from \(u\) to \(v\) with \(departure(\mathcal{J})=t\) and \(arrival(\mathcal{J})=t+\delta\), is inserted in \(T_{(u,v)}\) using the insert primitive, which runs in time \(O(\log\tau+d)\) where \(d\) is the number of redundant intervals removed. Then, the core of the algorithm consists of computing the indirect consequences of this insertion for the other vertices. Their algorithm consists of enumerating these compositions with the help of the find_prev and find_next primitives, which runs in time \(O(\log\tau)\), and inserting them into the TTC using insert. As there can only be one new interval for each pair of vertices, the algorithm takes \(O(n^{2}\log\tau)\) amortized time.
The algorithm for can_reach(\(u\), \(v\), \(t_{1}\), \(t_{2}\)) consists of testing whether \(T_{(u,v)}\) contains at least one interval included in \([t_{1},t_{2}]\). The cost of this algorithm reduces essentially to calling find_next(\(T_{(u,v)}\), \(t_{1}\)) once, which takes \(O(\log\tau)\) time. The algorithm for is_connected(\(t_{1}\), \(t_{2}\)) simply calls can_reach(\(u\), \(v\), \(t_{1}\), \(t_{2}\)), for every pair of vertices; therefore, it takes \(O(n^{2}\log\tau)\) time. For reconstruct_journey(\(u\), \(v\), \(t_{1}\), \(t_{2}\)), an additional field successor must be included along every time interval indicating which vertex comes next in (at least one of) the journeys. The algorithm consists of unfolding intervals and successors, one pair at a time using the find_next primitive, until the completion of the resulting journey of length \(k\); therefore, it takes \(O(k\log\tau)\) time in total.
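The following sketch (ours, not the implementation of [4]) illustrates why a single find_next call suffices: since non-nested intervals sorted by departure are also sorted by arrival, it is enough to inspect the earliest interval departing at or after \(t_{1}\).

```python
import bisect

# Toy sketch (our code): T is a list of non-nested intervals (dep, arr),
# sorted by departure (and hence also by arrival).
def find_next(T, t):
    """Earliest interval [dep, arr] with dep >= t, or None."""
    i = bisect.bisect_left(T, (t, -1))
    return T[i] if i < len(T) else None

def can_reach(T, t1, t2):
    """True iff some interval of T is contained in [t1, t2]."""
    nxt = find_next(T, t1)
    return nxt is not None and nxt[1] <= t2

T_uv = [(2, 4), (3, 6)]          # non-nested intervals for a pair (u, v)
print(can_reach(T_uv, 1, 5))     # True  (the interval [2, 4] fits)
print(can_reach(T_uv, 3, 5))     # False (no interval inside [3, 5])
```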
### Dynamic bit-vectors
A bit-vector \(B\) is a data structure that holds a sequence of bits and provides the following operations: access(\(B\), \(i\)), which accesses the bit at position \(i\); rank\({}_{b}(B\), \(i)\), which counts the number of \(b\)'s until (and including) position \(i\); and select\({}_{b}(B,\ j)\), which finds the position of the \(j\)-th bit with value \(b\). It is a fundamental data structure to design more complex data structures such as compact sequence of integers, text, trees, and graphs [15, 6]. Usually, bit-vectors are static, meaning that we first construct the data structure from an already known sequence of bits in order to take advantage of these query operations.
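For intuition, the three queries have straightforward linear-time reference implementations over a plain list of bits (our own toy code, with 1-based positions as in the text):

```python
# Reference semantics (toy code, not a compact implementation).
def access(B, i):
    return B[i - 1]

def rank(B, b, i):
    return sum(1 for x in B[:i] if x == b)   # occurrences of b in positions 1..i

def select(B, b, j):
    count = 0
    for pos, x in enumerate(B, start=1):
        count += (x == b)
        if count == j:
            return pos                        # position of the j-th b
    return None

B = [0, 1, 1, 0, 1]
print(rank(B, 1, 3))    # 2: two 1's among positions 1..3
print(select(B, 1, 3))  # 5: the third 1 sits at position 5
```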
Additionally, a dynamic bit-vector allows changes on the underlying bits. Although many operations to update a dynamic bit-vector has been proposed, the following are the most commonly used: insert\({}_{b}(B\), \(i)\), which inserts a bit \(b\) at position \(i\); update\({}_{b}(B\), \(i)\), which writes the new bit \(b\) to position \(i\); and remove(\(B\), \(i\)), which removes the bit at position \(i\). Apart from these operations, there are others such as insert_word\({}_{w}(B\), \(i)\), which inserts a word \(w\) at position \(i\), and remove_word\({}_{n}(B\), \(i)\), which removes a word of \(n\) bits from position \(i\).
In [17], the authors proposed a dynamic data structure for bit-vectors with a layout similar to B\({}^{+}\)-trees [1]. Leaves wrap static bit-vectors of maximum length \(l\) and internal nodes contain at most \(m\) pointers to children along with the number of \(1\)'s and the total number of bits in each subtree. With the exception of the root node, static bit-vectors have a minimum length of \(\lceil l/2\rceil\) and
internal nodes have at least \(\lceil\nicefrac{{m}}{{2}}\rceil\) pointers to children. These parameters serve as rules to balance out tree nodes during insertion and removal of bits. Figure 2 illustrates the overall layout of this data structure.
Any static bit-vector representation can be used as leaves, the simplest one being arrays of words representing bits explicitly. In this case, the maximum length could be set to \(l=\Theta(|w|^{2})\), where \(|w|\) is the integer word size. Another possibility is to represent bit-vectors sparsely by computing the distances between consecutive 1's and then encoding them using an integer compressor such as Elias-Delta [8] or simply packing them using binary packing [12]. In this case, we can instead use as a parameter the maximum number of 1's encoded by static bit-vectors to balance out leaves.
Their data structure supports the main dynamic bit-vector operations as follows. An access(\(B\), \(i\)) operation is done by traversing the tree starting at the root node. In each node the algorithm searches from left to right for the branch that has the \(i\)-th bit and subtracts from \(i\) the number of bits in previous subtrees. After traversing to the corresponding child node, the new \(i\) is local to that subtree and the search continues until reaching the leaf containing the \(i\)-th bit. At a leaf node, the algorithm simply accesses and returns the \(i\)-th local bit in the corresponding static bit-vector. If bits in static bit-vectors are encoded, an additional decoding step is necessary.
The rank\({}_{b}(B,\,i)\) and select\({}_{b}(B,\,j)\) operations are similar to access(\(B\), \(i\)). For rank\({}_{b}(B,\,i)\), the algorithm also sums the number of 1's in previous subtrees when traversing the tree. At a leaf, it finally sums the number of 1's in the corresponding static bit-vector up to the \(i\)-th local bit using popcount operations, which counts the number of 1's in a word, and return the resulting value. For select\({}_{b}(B,\,j)\), the algorithm instead uses the number of 1's in each subtree to guide the search. Thus, when traversing down, it subtracts the number of 1's in previous subtrees from \(j\), and sums the total number of bits. At a leaf, it searches for the position of the \(j\)-th local set bit using clz or ctz operations, which counts, respectively, the number of leading and trailing zeros in a word; sums it, and returns the resulting value.
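The traversal logic can be sketched as follows (a simplified, unbalanced toy version written by us; the structure of [17] additionally keeps nodes balanced and uses popcount/clz/ctz on machine words at the leaves):

```python
# Toy sketch of the top-down traversal. Internal nodes cache, per child,
# the total number of bits and the number of 1's in its subtree.
class Leaf:
    def __init__(self, bits):
        self.bits = bits                      # list of 0/1
    def size(self):  return len(self.bits)
    def ones(self):  return sum(self.bits)
    def rank1(self, i):                       # 1's among positions 1..i
        return sum(self.bits[:i])
    def select1(self, j):                     # position of the j-th 1
        c = 0
        for pos, b in enumerate(self.bits, 1):
            c += b
            if c == j:
                return pos

class Internal:
    def __init__(self, children):
        self.children = children
        self.sizes = [c.size() for c in children]   # bits per subtree
        self.oness = [c.ones() for c in children]   # 1's per subtree
    def size(self):  return sum(self.sizes)
    def ones(self):  return sum(self.oness)
    def rank1(self, i):
        acc = 0
        for child, sz, on in zip(self.children, self.sizes, self.oness):
            if i <= sz:
                return acc + child.rank1(i)
            i -= sz                    # make i local to the next subtree
            acc += on                  # count all 1's of skipped subtrees
        return acc
    def select1(self, j):
        offset = 0
        for child, sz, on in zip(self.children, self.sizes, self.oness):
            if j <= on:
                return offset + child.select1(j)
            j -= on                    # skip the 1's of this subtree
            offset += sz               # remember how many bits it holds
        return None

root = Internal([Leaf([0, 1, 1, 0]), Leaf([1, 0, 0, 1])])
print(root.rank1(5))    # 3: three 1's among positions 1..5
print(root.select1(4))  # 8: the fourth 1 is the last bit
```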
The algorithm for insert\({}_{b}(B,\,i)\) first locates the leaf that contains the static bit-vector with the \(i\)-th bit. During this top-down traversal, it increments the total number of bits and the number of 1's, whether \(b=1\), in each internal node key associated with the child it descends. Then, it reconstructs the leaf while including the new bit \(b\). If the leaf becomes full, the algorithm splits its content into two bit-vectors and updates its parent accordingly while adding a new key and a pointer to the new leaf. After this step, the parent node can also become full and, in this case, it must also be split into two nodes. Therefore, the algorithm must traverse back, up to the root node, balancing any node that becomes full. If the root node becomes full, then it creates a new root
Figure 2: A dynamic bit-vector using the data structure introduced in [17]. Leaves wrap static bit-vectors and internal nodes contain pointers to children along with the number of 1’s and the total number of bits in each of them. The maximum number of pointers in each internal node \(m\) and the length of each static bit-vector \(n\) in this example is 4.
containing pointers to the split nodes along with the keys associated with both subtrees.
The algorithm for remove(\(B\), \(i\)) also has a top-down traversal to locate and reconstruct the appropriate leaf, and a bottom-up phase to rebalance tree nodes. However, internal node keys associated with the child it descends must be updated during the bottom-up phase since the \(i\)-bit is only known after reaching the corresponding leaf. Moreover, a node can become empty when it has less than half the maximum number of entries. In this case, first, the algorithm tries to share the content of siblings with the current node while updating parent keys. If sharing is not possible, it merges a sibling into the current node and updates its parent while removing the key and pointer previously related to the merged node. If the root node becomes empty, the algorithm removes the old root and makes its single child the new root.
The update\({}_{b}(B\), \(i)\) operation can be implemented by calling remove(\(B\), \(i\)) then insert\({}_{b}(B\), \(i)\), or by using a similar strategy with a single traversal.
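A minimal sketch of the leaf-splitting step used during insertion (our code; the maximum leaf length \(l\) is an assumed parameter): an overfull leaf is cut in half and the parent stores a (number of bits, number of 1's) key for each resulting child.

```python
# Toy illustration (our code) of splitting a full leaf during insertion.
L_MAX = 8   # assumed maximum leaf length l

def insert_bit(leaf_bits, i, b):
    """Insert bit b at 1-based position i; split if the leaf overflows."""
    leaf_bits = leaf_bits[:i - 1] + [b] + leaf_bits[i - 1:]
    if len(leaf_bits) <= L_MAX:
        return [leaf_bits]                     # no split needed
    mid = len(leaf_bits) // 2
    return [leaf_bits[:mid], leaf_bits[mid:]]  # split into two leaves

def parent_keys(leaves):
    """Keys the parent stores per child: (total bits, number of 1's)."""
    return [(len(bits), sum(bits)) for bits in leaves]

full = [1, 0, 1, 1, 0, 0, 1, 0]                # already at maximum length
leaves = insert_bit(full, 3, 1)                # overflow -> split
print(leaves)                # [[1, 0, 1, 1], [1, 0, 0, 1, 0]]
print(parent_keys(leaves))   # [(4, 3), (5, 2)]
```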
## 3 Dynamic compact data structure for temporal reachability
Our new data structure uses roughly the same strategy as in the previous work [4]. The main difference is the usage of a compact dynamic data structure to maintain a set of non-nested intervals instead of Binary Search Trees (BSTs). This compact representation provides all BST primitives in order to incrementally maintain Temporal Transitive Closures (TTCs) and answer reachability queries. In [4], the authors defined them as follows, where \(T_{(u,v)}\) represents a BST holding a set of non-nested intervals associated with the pair of vertices \((u,v)\). (1) find_next(\(T_{(u,v)},\,t\)) returns the earliest interval \([t^{-},t^{+}]\) in \(T_{(u,v)}\) such that \(t^{-}\geq t\), if any, and nil otherwise; (2) find_prev(\(T_{(u,v)},\,t\)) returns the latest interval \([t^{-},t^{+}]\) in \(T_{(u,v)}\) such that \(t^{+}\leq t\), if any, and nil otherwise; and (3) insert(\(T_{(u,v)},\,t^{-},\,t^{+}\)) inserts the interval \([t^{-},t^{+}]\) in \(T_{(u,v)}\) and performs some operations for maintaining the property that all intervals in \(T_{(u,v)}\) are minimal.
For our new compact data structure, we take advantage that every set of intervals only contains non-nested intervals, thus we do not need to consider other possible intervals. For instance, if there is an interval \(\mathcal{I}=[4,6]\) in a set, no other interval starting at timestamp 4 or ending at 6 is possible, otherwise, there would be some interval \(\mathcal{I}^{\prime}\) such that \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\) or \(\mathcal{I}\subseteq\mathcal{I}^{\prime}\). Therefore, we can represent each set of intervals as a pair of dynamic bit-vectors \(D\) and \(A\), one for departure and the other for arrival timestamps. Both bit-vectors must provide the following low-level operations: access(\(B\), \(i\)), rank\({}_{b}(B\), \(i)\), select\({}_{b}(B\), \(j)\), insert\({}_{b}(B,\,i)\), and update\({}_{b}(B,\,i)\).
By using these simple bit-vector operations, we first introduce algorithms for the primitives \(\textsc{find\_next}((D,A)_{(u,v)},\,t)\), \(\textsc{find\_prev}((D,A)_{(u,v)},\,t)\) and \(\textsc{insert}((D,A)_{(u,v)},\,t^{-},\,t^{+})\) that run, respectively, in time \(O(\log\tau)\), \(O(\log\tau)\) and \(O(d\log\tau)\), where \(d\) is the number of intervals removed during an interval insertion. Note that, now, these operations receive as first argument a pair containing two bit-vectors \(D\) and \(A\) associated with the pair of vertices \((u,v)\) instead of a BST \(T_{(u,v)}\). If the context is clear, we will simply use the notation \((D,A)\) instead of \((D,A)_{(u,v)}\).
Then, in order to improve the time complexity of insert(\((D,A)_{(u,v)},\,t^{-},\,t^{+}\)) to \(O(\log\tau+d)\), we propose a new bit-vector operation: unset_one_range(\(B\), \(j_{1}\), \(j_{2}\)), which clears all bits in the range [select\({}_{1}(B,\,j_{1}),\)select\({}_{1}(B,\,j_{2})\)].
### Compact representation of non-nested intervals
Each set of non-nested intervals is represented as a pair of dynamic bit-vectors \(D\) and \(A\), one storing departure timestamps and the other arrival timestamps. Given a set of non-nested intervals \(\mathcal{I}_{1},\mathcal{I}_{2},\ldots,\mathcal{I}_{k}\), where \(I_{i}=[d_{i},a_{i}]\), \(D\) contains 1's at every position \(d_{i}\), and \(A\) contains 1's at every position \(a_{i}\). Figure 3 depicts this representation.
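A small sketch of this encoding (our own code): setting bit \(d_{i}\) in \(D\) and bit \(a_{i}\) in \(A\) for each interval reproduces the bit-vectors of Figure 3.

```python
# Toy encoder (our code): one bit-vector for departures, one for arrivals.
def encode(intervals, tau):
    D = [0] * tau
    A = [0] * tau
    for dep, arr in intervals:        # intervals are assumed non-nested
        D[dep - 1] = 1                # 1-based timestamps
        A[arr - 1] = 1
    return D, A

D, A = encode([(1, 4), (3, 6)], tau=6)
print(D)  # [1, 0, 1, 0, 0, 0]  -> departures at 1 and 3
print(A)  # [0, 0, 0, 1, 0, 1]  -> arrivals   at 4 and 6
```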
### Query algorithms
Algorithms 1 and 2 answer the primitives \(\textsc{find\_prev}((D,A),\,t)\) and \(\textsc{find\_next}((D,A),\,t)\), respectively. In order to find a previous interval, at line 1, Algorithm 1 first counts in \(j\) how many 1's exist up to position \(t\) in \(A\). If \(j=0\), then there is no interval \(I=[t^{-},t^{+}]\) such that \(t^{+}\leq t\), therefore, it returns nil. Otherwise, at lines 4 and 5, the algorithm computes the positions of the \(j\)-th 1's in \(D\) and \(A\) to compose the resulting interval. In order to find a next interval, at line 1, Algorithm 2 first counts in \(j^{\prime}\) how many 1's exist up to time \(t-1\) in \(D\). If \(j^{\prime}=\textsc{rank}_{1}(D,\,len(D))\), then there is no interval \(I^{\prime}=[t^{\prime-},t^{\prime+}]\) such that \(t\leq t^{\prime-}\), therefore, it returns nil. Otherwise, at lines 4 and 5, the algorithm computes the positions of the \((j^{\prime}+1)\)-th 1's in \(D\) and \(A\) to compose the resulting interval.
```
1:\(j\leftarrow\textsc{rank}_{1}(A,\,t)\)
2:if\(j=0\)then
3:return nil
4:\(t^{-}\leftarrow\textsc{select}_{1}(D,\,j)\)
5:\(t^{+}\leftarrow\textsc{select}_{1}(A,\,j)\)
6:return\([t^{-},t^{+}]\)
```
**Algorithm 1**\(\textsc{find\_prev}((D,A),\,t)\)
```
1:\(j\leftarrow\textsc{rank}_{1}(D,\,t-1)\)
2:if\(j=\textsc{rank}_{1}(D,\,len(D))\)then
3:return nil
4:\(t^{-}\leftarrow\textsc{select}_{1}(D,\,j+1)\)
5:\(t^{+}\leftarrow\textsc{select}_{1}(A,\,j+1)\)
6:return\([t^{-},t^{+}]\)
```
**Algorithm 2**\(\textsc{find\_next}((D,A),\,t)\)
Figure 3: Representation of a set of non-nested intervals using two bit-vectors, one for departures and the other for arrival timestamps. In this example, a set containing the intervals \([1,4]\) and \([3,6]\) is represented by the first bit-vector containing 1's at positions 1 and 3, and the second bit-vector containing 1's at positions 4 and 6. Note that both bit-vectors must have the same number of 1's, otherwise, there would be an interval with missing values for departure or arrival.
As \(\textsc{rank}_{1}(B,\,i)\) and \(\textsc{select}_{1}(B,\,j)\) on dynamic bit-vectors have time complexity \(O(\log\tau)\) using the data structure proposed by [17], \(\textsc{find\_prev}((D,A),\,t)\) and \(\textsc{find\_next}((D,A),\,t)\) both have time complexity \(O(\log\tau)\).
#### 3.2.1 Interval insertion
Due to the property of non-containment of intervals, given a new interval \(\mathcal{I}=[t_{1},t_{2}]\), we must first ensure that there is no other interval \(\mathcal{I}^{\prime}\) in the data structure such that \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\); otherwise, \(\mathcal{I}\) is redundant and cannot be present in the set. Then, we must find and remove all intervals \(\mathcal{I}^{\prime\prime}\) in the data structure such that \(\mathcal{I}\subseteq\mathcal{I}^{\prime\prime}\). Finally, we insert \(\mathcal{I}\) by setting the \(t_{1}\)-th bit of bit-vector \(D\) and the \(t_{2}\)-th bit of \(A\). Figure 4 illustrates the process of inserting new intervals.
Algorithm 3 describes a simple process for the primitive \(\textsc{insert}((D,A),\,t_{1},\,t_{2})\) in order to insert a new interval \(\mathcal{I}=[t_{1},t_{2}]\) into a set of non-nested intervals encoded as two bit-vectors \(D\) and \(A\). At line 1, it computes how many 1's exist in \(D\) prior to position \(t_{1}\) by calling \(r_{d}=\textsc{rank}_{1}(D,\,t_{1}-1)\) and accesses the \(t_{1}\)-th bit in \(D\) by calling \(bit_{d}=\textsc{access}(D,\,t_{1})\). At line 2, it computes the same information with respect to the bit-vector \(A\) and timestamp \(t_{2}\) by calling \(r_{a}=\textsc{rank}_{1}(A,\,t_{2}-1)\) and \(bit_{a}=\textsc{access}(A,\,t_{2})\). We note that the operations \(\textsc{rank}_{1}(B,\,i)\) and \(\textsc{access}(B,\,i)\) can be processed in a single tree traversal using the dynamic bit-vector described in [17]. If \(r_{d}\) is less than \(r_{a}+bit_{a}\), then there are more intervals closing up to timestamp \(t_{2}\) than intervals opening before \(t_{1}\), therefore, there is some interval \(\mathcal{I}^{\prime}=[d^{\prime},a^{\prime}]\) such that \(t_{1}\leq d^{\prime}\leq a^{\prime}\leq t_{2}\), _i.e._, \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\). In this case, the algorithm stops; otherwise, it proceeds with the insertion. When proceeding, if \(r_{d}+bit_{d}\) is greater than \(r_{a}\), then there are more intervals opening up to \(t_{1}\) than intervals closing before \(t_{2}\), therefore, there are \(d=(r_{d}+bit_{d})-r_{a}\) intervals \(\mathcal{I}^{\prime\prime}_{i}=[d^{\prime\prime}_{i},a^{\prime\prime}_{i}]\), such that \(d^{\prime\prime}_{i}\leq t_{1}\leq t_{2}\leq a^{\prime\prime}_{i}\), _i.e._, \(\mathcal{I}\subseteq\mathcal{I}^{\prime\prime}_{i}\), that must be removed. From lines 5 to 9, the algorithm removes the \(d\) intervals that contain \(\mathcal{I}\) by iteratively unsetting their corresponding bits in \(D\) and \(A\). In order to unset the \(j\)-th 1 in a bit-vector \(B\), we first search for its position by calling \(i=\textsc{select}_{1}(B,\,j)\), then update \(B[i]=0\) by calling \(\textsc{update}_{0}(B,\,i)\). Thus, the algorithm calls \(\textsc{update}_{0}(D,\,\textsc{select}_{1}(D,\,r_{a}+1))\)
Figure 4: Sequence of insertions using our data structure based on bit-vectors \(D\) and \(A\). In (a), our data structure is empty, thus, the insertion of interval \(\mathcal{I}_{1}=[2,6]\) results in setting the position 2 in \(D\) and 6 in \(A\). Then, in (b), the new interval \(\mathcal{I}_{2}=[1,6]\) encloses \(\mathcal{I}_{1}\), therefore, the insertion is skipped. Next, in (c), no interval encloses or is enclosed by the new interval \(\mathcal{I}_{3}=[1,5]\), thus, it suffices to set the position 1 in \(D\) and 5 in \(A\). Finally, in (d), the new interval \(\mathcal{I}_{4}=[3,4]\) is enclosed by \(\mathcal{I}_{1}\) and \(\mathcal{I}_{3}\), thus both of them is removed by clearing the corresponding bits and then \(\mathcal{I}_{4}\) is inserted by setting the position 3 in \(D\) and 4 in \(A\).
and \(\textsc{update}_{0}(A,\,\textsc{select}_{1}(A,\,r_{a}+1))\)\(d\) times to remove the \(d\) intervals that closes after \(r_{a}\). Finally, at lines 10 and 11, the algorithm inserts \(\mathcal{I}\) by calling \(\textsc{update}_{1}(D,\,t_{1})\) and \(\textsc{update}_{1}(A,\,t_{2})\). Note that both bit-vectors can grow with new insertions, thus we need to assure that both bit-vectors are large enough to accommodate the new 1's. That is why the algorithm calls \(ensureCapacity\) before setting the corresponding bits. The \(ensureCapacity\) implementation may call \(\textsc{insert}_{0}(B,\,len(B))\) or \(\textsc{insert\_word}_{0}(B,\,len(B))\) until \(B\) has enough space. Moreover, \(\textsc{rank}_{1}(B,\,i)\) and \(\textsc{access}(B,\,i)\) operations can also receive positions that are larger than the actual length of \(B\). In such cases, these operations must instead return \(\textsc{rank}_{1}(B,\,len(B))\) and \(0\), respectively.
```
1:\(r_{d}\leftarrow\textsc{rank}_{1}(D,\,t_{1}-1)\); \(bit_{d}\leftarrow\textsc{access}(D,\,t_{1})\)
2:\(r_{a}\leftarrow\textsc{rank}_{1}(A,\,t_{2}-1)\); \(bit_{a}\leftarrow\textsc{access}(A,\,t_{2})\)
3:if\(r_{d}\geq r_{a}+bit_{a}\)then
4:if\(r_{d}+bit_{d}>r_{a}\)then
5:\(r_{d}^{+}\gets r_{d}+bit_{d}\)
6:while\(r_{d}^{+}>r_{a}\)do
7:\(\textsc{update}_{0}(D,\,\textsc{select}_{1}(D,\,r_{a}+1))\)
8:\(\textsc{update}_{0}(A,\,\textsc{select}_{1}(A,\,r_{a}+1))\)
9:\(r_{d}^{+}\gets r_{d}^{+}-1\)
10:\(ensureCapacity(D,t_{1})\); \(\textsc{update}_{1}(D,\,t_{1})\)
11:\(ensureCapacity(A,t_{2})\); \(\textsc{update}_{1}(A,\,t_{2})\)
```
**Algorithm 3**insert(\((D,A),\,t_{1},\,t_{2}\))
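For concreteness, the following is a minimal Python sketch of Algorithm 3. The `SimpleBitVector` class is a plain list-backed stand-in for the dynamic bit-vector of [17] (so every operation costs \(O(\tau)\) rather than \(O(\log\tau)\)); it and all other names below are our own illustration, not part of any existing library.

```python
class SimpleBitVector:
    """List-backed stand-in for the dynamic bit-vector of [17].

    Every operation here is O(tau); the class only makes the control flow of
    Algorithm 3 executable. Positions are 1-indexed, as in the text.
    """

    def __init__(self):
        self.bits = []

    def ensure_capacity(self, i):
        # pad with 0's so that position i exists
        self.bits.extend([0] * (i - len(self.bits)))

    def access(self, i):
        # positions beyond the current length read as 0
        return self.bits[i - 1] if i <= len(self.bits) else 0

    def rank1(self, i):
        # number of 1's among positions 1..i (i may exceed the length)
        return sum(self.bits[:max(0, min(i, len(self.bits)))])

    def select1(self, j):
        # position of the j-th 1
        seen = 0
        for pos, bit in enumerate(self.bits, start=1):
            seen += bit
            if seen == j:
                return pos
        raise ValueError("fewer than j ones")

    def update(self, i, bit):
        self.bits[i - 1] = bit


def insert(D, A, t1, t2):
    """Algorithm 3: insert the interval [t1, t2] into the set encoded by (D, A)."""
    rd, bit_d = D.rank1(t1 - 1), D.access(t1)
    ra, bit_a = A.rank1(t2 - 1), A.access(t2)
    if rd < ra + bit_a:          # some stored interval lies inside [t1, t2]: skip
        return
    d = (rd + bit_d) - ra        # number of stored intervals enclosing [t1, t2]
    for _ in range(d):
        D.update(D.select1(ra + 1), 0)
        A.update(A.select1(ra + 1), 0)
    D.ensure_capacity(t1); D.update(t1, 1)
    A.ensure_capacity(t2); A.update(t2, 1)


# Replaying Figure 4: insert [2,6], then [1,6] (skipped), [1,5], [3,4].
D, A = SimpleBitVector(), SimpleBitVector()
for t1, t2 in [(2, 6), (1, 6), (1, 5), (3, 4)]:
    insert(D, A, t1, t2)
print(D.bits, A.bits)  # only [3,4] survives: D has its 1 at position 3, A at position 4
```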
**Theorem 1**.: _The update operation has worst-case time complexity \(O(d\log\tau)\), where \(d\) is the number of intervals removed._
Proof.: All operations on dynamic bit-vectors have time complexity \(O(\log\tau)\) using the data structure proposed in [17]. As the maximum length of each bit-vector is \(\tau\), the cost of \(ensureCapacity\) is amortized to \(O(1)\) over a sequence of insertions. Therefore, the time complexity of \(\textsc{insert}((D,A),\,t_{1},\,t_{2})\) is \(O(d\log\tau)\), since in the worst case Algorithm 3 removes \(d\) intervals in lines 6 to 9 before inserting the new one at lines 10 and 11.
This simple strategy pays a multiplicative factor proportional to the number of removed intervals. In general, as more intervals in \([1,\tau]\) are inserted, the number \(d\) of intervals to be removed decreases, so, in the long run, the runtime of this naive solution is acceptable. However, when static bit-vectors are encoded sparsely as distances between consecutive 1's, the insertion needs to decode/encode leaves \(d\) times and the runtime degrades severely. In the next section, we propose a new operation for dynamic bit-vectors using sparse static bit-vectors as leaves, \(\textsc{unset\_one\_range}(B,\,j_{1},\,j_{2})\), to replace this iterative approach and improve the time complexity of \(\textsc{insert}((D,A),\,t_{1},\,t_{2})\) to \(O(\log\tau)\).
### New dynamic bit-vector operation to improve interval insertion
In this section, we propose a new operation \(\textsc{unset\_one\_range}(B,\,j_{1},\,j_{2})\) for dynamic bit-vector using sparse static bit-vectors as leaves to improve the time complexity of \(\textsc{insert}((D,A),\,t_{1},\,t_{2})\). This new operation clears all bits starting from the \(j_{1}\)-th 1 up to the \(j_{2}\)-th 1 in time \(O(\log\tau)\). Our algorithm for \(\textsc{unset\_one\_range}(B,\,j_{1},\,j_{2})\), based on the split/join strategy commonly used in
parallel programs [3], uses two internal functions split_at_jth_one(\(N\), \(j\)) and join(\(N_{1}\), \(N_{2}\)). The split_at_jth_one(\(N\), \(j\)) function takes a root node \(N\) representing a dynamic bit-vector \(B\) and splits its bits into two nodes \(N_{1}\) and \(N_{2}\) representing bit-vectors \(B_{1}\) and \(B_{2}\) containing, respectively, the bits in range \([1,\textsc{select}_{1}(B\), \(j)-1]\) and \([\textsc{select}_{1}(B,\ j),len(B)]\). The join(\(N_{1}\), \(N_{2}\)) function takes two root nodes \(N_{1}\) and \(N_{2}\), representing two bit-vectors \(B_{1}\) and \(B_{2}\) and constructs a new tree with root node \(N\) representing a bit-vector \(B\) containing all bits from \(B_{1}\) followed by all bits from \(B_{2}\). The resulting trees for both functions must preserve the balancing properties of dynamic bit-vectors [17].
Thus, given a dynamic bit-vector \(B\) represented as a tree with root \(N\), our algorithm for unset_one_range(\(B\), \(j_{1}\), \(j_{2}\)) is described as follows. First, the algorithm calls split_at_jth_one(\(N\), \(j_{1}\)) in order to split the bits in \(B\) into two nodes \(N_{left}\) and \(N_{tmp}\) representing two bit-vectors containing, respectively, the bits in range \([1,\textsc{select}_{1}(B,\ j_{1})-1]\) and in range \([\textsc{select}_{1}(B,\ j_{1}),len(B)]\). Then, it calls split_at_jth_one(\(N_{tmp}\), \(j_{2}-j_{1}\)) to split \(N_{tmp}\) further into two nodes \(N_{ones}\) and \(N_{right}\) containing, respectively the bits in range \([\textsc{select}_{1}(B,\ j_{1}),\textsc{select}_{1}(B,\ j_{2})-1]\), and \([\textsc{select}_{1}(B,\ j_{2}),len(B)]\). The tree with root node \(N_{ones}\) contains all 1's previously in the original dynamic bit-vector \(B\) that should be cleared. In the next step, the algorithm creates a new tree with root node \(N_{zeros}\) containing \(len(N_{ones})\) 0's to replace \(N_{ones}\). Finally, it calls join(join(\(N_{left},\ N_{zeros}\)), \(N_{right}\)) to join the trees with root nodes \(N_{left}\), \(N_{zeros}\), and \(N_{right}\) into a final tree representing the original bit-vector \(B\) with the corresponding 1's cleared.
Note that the tree with root \(N_{ones}\) is still in memory, thus it needs some sort of cleaning. The cost of immediately cleaning this tree would increase proportionally to the total number of nodes in \(N_{ones}\) tree. Instead, we keep \(N_{ones}\) in memory and reuse its children lazily in other operations that request node allocations so that the cost of cleaning is amortized. Moreover, even though we need to create a new bit-vector filled with zeros, this operation is performed in \(O(1)\) time with a sparse implementation since only information about 1's is encoded. We do not recommend using this strategy for a dense implementation, i.e, leaves represented as raw sequences of bits, since this last operation would run in time \(O(\tau)\).
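To make the split/join strategy concrete, the following Python sketch models a bit-vector sparsely as a pair (length, sorted positions of its 1's), with `split_at_jth_one` and `join` acting by slicing and concatenation. It is only a functional illustration of the operation (here taken to clear the \(j_{1}\)-th through \(j_{2}\)-th 1's, both included), not the balanced-tree implementation; all names are ours.

```python
# A bit-vector is modelled sparsely as (length, sorted list of positions of its 1's).

def split_at_jth_one(bv, j):
    """Split bv into the bits [1, select1(j) - 1] and [select1(j), len(bv)]."""
    length, ones = bv
    pos = ones[j - 1]                                    # select1(bv, j)
    left = (pos - 1, ones[:j - 1])
    right = (length - pos + 1, [p - pos + 1 for p in ones[j - 1:]])
    return left, right

def join(bv1, bv2):
    """Concatenate two sparsely encoded bit-vectors."""
    (len1, ones1), (len2, ones2) = bv1, bv2
    return (len1 + len2, ones1 + [p + len1 for p in ones2])

def unset_one_range(bv, j1, j2):
    """Clear the j1-th through j2-th 1's of bv (both endpoints included here)."""
    left, tmp = split_at_jth_one(bv, j1)
    if j2 - j1 + 1 < len(tmp[1]):
        mid, right = split_at_jth_one(tmp, j2 - j1 + 2)  # index shift: tmp starts at the j1-th 1
    else:
        mid, right = tmp, (0, [])
    zeros = (mid[0], [])             # same length, no 1's: O(1) in the sparse encoding
    return join(join(left, zeros), right)

# Clear the 2nd and 3rd 1's of 0110101 (1's at positions 2, 3, 5, 7):
print(unset_one_range((7, [2, 3, 5, 7]), 2, 3))          # -> (7, [2, 7])
```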
Next we describe join(\(N_{1}\), \(N_{2}\)) and split_at_jth_one(\(N\), \(j\)). The idea of join(\(N_{1}\), \(N_{2}\)) is to merge the root of the shorter tree into the appropriate node of the taller tree and then rebalance the resulting tree recursively.
Algorithm 4 details the join(\(N_{1}\), \(N_{2}\)) recursive function. If \(height(N_{1})=height(N_{2})\), at line 2, the algorithm merges the keys and pointers of \(N_{1}\) and \(N_{2}\) whenever possible, or distributes their content evenly and grows the resulting tree by one level. This is done by calling \(mergeOrGrow(N_{1},N_{2})\), which returns the root node of the resulting tree. If instead \(height(N_{1})>height(N_{2})\), at line 4, the algorithm first extracts the rightmost child from \(N_{1}\), by calling \(extractRightmostChild(N_{1})\), and then recurses passing the rightmost child instead. The next recursive call either performs a merge operation or grows the resulting subtree by one level; therefore, the output node \(R\) has height equal to \(height(N_{1})-1\) or \(height(N_{1})\), respectively. If the resulting tree grew, _i.e._, \(height(R)=height(N_{1})\), then, at line 6, the algorithm returns the result of \(mergeOrGrow(N_{1},R)\). Otherwise, if a merge operation was performed, _i.e._, \(height(R)=height(N_{1})-1\), then, at line 7, it inserts \(R\) into \(N_{1}\) as its new rightmost child, and returns \(N_{1}\). Finally, if \(height(N_{1})<height(N_{2})\), at line 10, the algorithm extracts the leftmost child from \(N_{2}\) by calling \(extractLeftmostChild(N_{2})\) and recurses passing the leftmost child instead. Similarly, the root \(R^{\prime}\) resulting from the next recursive call has height equal to \(height(N_{2})-1\) or \(height(N_{2})\). If \(height(R^{\prime})=height(N_{2})\), then, at line 12, the algorithm returns
the result of calling \(mergeOrGrow(R^{\prime},N_{2})\); otherwise, if \(height(R^{\prime})=height(N_{2})-1\), then, at line 13, it inserts \(R^{\prime}\) into \(N_{2}\) as its new leftmost child, and returns \(N_{2}\). Note that all subroutines must properly update the keys describing the length and the number of 1's of the bit-vector represented by the corresponding child subtree. For instance, a call to \(rightmost=extractRightmostChild(N)\) must decrement from the key associated with \(N\) the length and number of 1's in the bit-vector represented by \(rightmost\).
**Lemma 2**.: _The operation \(\textsc{join}(N_{1},\,N_{2})\) has time complexity \(O(|height(N_{1})-height(N_{2})|)\)._
Proof.: Algorithm 4 descends at most \(|height(N_{1})-height(N_{2})|\) levels starting from the root of the taller tree. At each level, in the worst case, it updates a node doing a constant amount of work proportional to the branching factor of the tree. Therefore, the cost of \(\textsc{join}(N_{1},\,N_{2})\) is \(O(|height(N_{1})-height(N_{2})|)\).
The idea of \(\textsc{split\_at\_jth\_one}(N,\,j)\) is to traverse \(N\) recursively, partitioning and joining its content along the way, until it reaches the node containing the \(j\)-th 1 at position \(\textsc{select}_{1}(B,\,j)\). During the forward traversal, it partitions the current subtree into two nodes \(N_{1}\) and \(N_{2}\), excluding the entry associated with the child to descend into. Then, during the backward traversal, it joins \(N_{1}\) and \(N_{2}\), respectively, with the left and right nodes resulting from the recursive call.
```
1:if\(N\) is leaf then
2:\((N_{1},N_{2})\gets partitionLeaf(N,j)\)
3:return\((N_{1},N_{2})\)
4:\((N_{1},child,N_{2})\gets partitionNode(N,j)\)
5:\((N^{\prime}_{1},N^{\prime}_{2})\leftarrow\textsc{split\_at\_jth\_one}(child,\,j-ones(N_{1}))\)
6:return\((\textsc{join}(N_{1},N^{\prime}_{1}),\textsc{join}(N^{\prime}_{2},N_{2}))\)
```
**Algorithm 5**split_at_jth_one\((N,\,j)\)
The details of this function are shown in Algorithm 5. From lines 1 to 3, the algorithm checks
whether the root is a leaf. If it is the case, it partitions the current bit-vector \(B_{1}\cdot b\cdot B_{2}\), where \(b\) is the \(j\)-th 1, and returns two nodes containing, respectively, \(B_{1}\) and \(b\cdot B_{2}\). Otherwise, from lines 4 to 6, the algorithm first finds the \(i\)-th child that contains the \(j\)-th 1 using a linear search and partitions the current node into three other nodes: \(N_{1}\), containing the partition with all keys and children in range \([1,i-1]\); \(child\), which is the child node associated with position \(i\); and \(N_{2}\), containing the partition with all keys and children in range \([i+1,\ldots]\). Then, at line 5, it recursively calls split_at_jth_one\((child\), \(j-ones(N_{1}))\) to retrieve the partial results \(N^{\prime}_{1}\) containing bits from \(child\) up to the \(j\)-th 1; and \(N^{\prime}_{2}\) containing bits from \(child\) starting at the \(j\)-th 1 and forward. Note that the next recursive call expects an input \(j\) that is local to the root node \(child\). Finally, at line 6 it joins \(N_{1}\) with \(N^{\prime}_{1}\) and \(N^{\prime}_{2}\) with \(N_{2}\), and returns the resulting trees.
**Lemma 3**.: _The operation split_at_jth_one\((N,\,j)\) has time complexity \(O(\log\tau)\)._
Proof.: Since join\((N_{1},\,N_{2})\) has cost \(O(|height(N_{1})-height(N_{2})|)\), and the sum of these height differences over all levels cannot exceed the height of the resulting tree, which contains \(n<\tau\) nodes, the time complexity of split_at_jth_one\((N,\,j)\) is \(O(\log\tau)\).
Furthermore, since join\((N_{1},\,N_{2})\) outputs a balanced tree when concatenating two already balanced trees, both trees resulting from the split_at_jth_one\((N,\,j)\) calls are also balanced.
**Lemma 4**.: _The operation unset_one_range\((B,\,j_{1},\,j_{2})\) has time complexity \(O(\log\tau)\) when \(B\) encodes leaves sparsely._
Proof.: The unset_one_range\((B,\,j_{1},\,j_{2})\) operation calls split_at_jth_one twice and join twice. It must also create a new tree containing select\({}_{1}(B,\,j_{2}-1)-\textsc{select}_{1}(B,\,j_{1})\) 0's to replace the subtree containing \(j_{2}-j_{1}\) 1's. If the leaves of \(B\) are represented sparsely, then the creation of a new tree filled with 0's costs \(O(1)\), since the resulting tree only has a root node, with its only key holding the required length (select\({}_{1}(B,\,j_{2}-1)\) - select\({}_{1}(B,\,j_{1})\)), and an empty leaf. Therefore, as the cost of split_at_jth_one\((N,\,j)\), \(O(\log\tau)\), dominates the cost of join\((N_{1},\,N_{2})\), the time complexity of unset_one_range\((B,\,j_{1},\,j_{2})\) is \(O(\log\tau)\).
**Theorem 5**.: _The primitive insert\(((D,A),\,t_{1},\,t_{2})\) has time complexity \(O(\log\tau)\) when \(D\) and \(A\) encode leaves sparsely._
Proof.: Following Theorem 1 and Lemma 4, the loop in Algorithm 3 that iteratively unsets \(d\) bits can be replaced by a single call to unset_one_range on each of \(D\) and \(A\). As the cost of Algorithm 3 is dominated by this loop, its time complexity reduces to \(O(\log\tau)\).
## 4 Experiments
In this section, we conduct experiments to analyze the wall-clock time performance and the space efficiency of the data structures when adding new information from synthetic datasets. In Section 4.1, we compare our compact data structure that maintains a set of non-nested intervals directly with an in-memory B\({}^{+}\)-tree implementation storing intervals as keys. For our compact data structure, we used dynamic bit-vectors [17] with leaves storing bits explicitly as arrays of 64-bit integer words. Internal nodes have a maximum number of pointers to children \(m=32\) and leaf nodes have static bit-vectors with maximum length \(l=4096\). For the B\({}^{+}\)-tree implementation we used \(m=32\) for all nodes. In Section 4.2, we compare the overall
Temporal Transitive Closure (TTC) data structure using our new compact data structure with the TTC using the B\({}^{+}\)-tree implementation for each pair of vertices. All code is available at [https://bitbucket.org/luizufu/zig-ttc/src/master/](https://bitbucket.org/luizufu/zig-ttc/src/master/).
### Comparison of data structures for sets of non-nested intervals
For this experiment, we created datasets containing all \(O(\tau^{2})\) possible intervals in \([1,\tau]\) for \(\tau\in[2^{3},2^{14}]\). Then, for each dataset, we executed 10 times a program that shuffles all intervals at random, and inserts them into the tested data structure while gathering the wall-clock time and memory space usage after every insertion.
Figure 5(a) shows the average wall-clock time to insert all intervals into both data structures as \(\tau\) increases. Figure 5(b) shows the cumulative wall-clock time to insert all intervals and the memory usage throughout the lifetime of a single program execution with \(\tau=2^{14}\). As shown in Figure 5(a), our new data structure slightly underperforms compared with the B\({}^{+}\)-tree implementation. However, as shown in Figure 5(b), the wall-clock time has a higher overhead at the beginning of the execution (first quartile) and, after that, the difference between both data structures remains almost constant. This overhead might be due to insertions of 0's at the end of the bit-vectors in order to make enough space to accommodate the rightmost interval inserted so far. We can also see in Figure 5(b) that the space usage of our new data structure is much smaller than that of the B\({}^{+}\)-tree implementation. It is worth noting that, if the set of intervals is very sparse, using sparse bit-vectors as leaves could decrease the space further, since most of the tree nodes would not need to be preallocated; however, the wall-clock time could increase, since leaves would need to be decoded/unpacked and encoded/packed at every operation.
Figure 5: Comparison of incremental data structures to represent a set of non-nested intervals. In (a), the overall average wall-clock time to insert all possible \(O(\tau^{2})\) intervals, randomly shuffled, into the data structures. In (b), the cumulative wall-clock time and the memory space usage to insert all possible \(O(\tau^{2})\) intervals, randomly shuffled, throughout a single execution. Note that the execution described in (b) is one of the 10 executions with \(\tau=2^{14}\) used to construct (a).
### Comparison of data structures for Time Transitive Closures
For this experiment, we created datasets containing all \(O(n^{2}\tau)\) possible contacts fixing the number of vertices \(n=32\) and the latency to traverse an edge \(\delta=1\) while varying \(\tau=[2^{3},2^{14}]\). Then, for each dataset, we executed 10 times a program that shuffles all contacts at random, and inserts them into the tested TTC data structure while gathering the wall-clock time and memory space usage after every insertion.
Figure 6(a) shows the average wall-clock time to insert all contacts into the TTCs using both data structures as \(\tau\) increases. Figure 6(b) shows the cumulative wall-clock time to insert all contacts and the memory usage throughout the lifetime of a single program execution with \(n=32\) and \(\tau=2^{14}\). As shown in Figure 6(a), the TTC version that uses our compact data structure in fact outperforms the TTC that uses the B\({}^{+}\)-tree implementation for large values of \(\tau\). In Figure 6(b), we can see that the time to insert a contact into the TTC using our new data structure is lower during almost the entire execution, and that the space usage follows the same pattern as in the previous experiment, which compared the data structures in isolation.
## 5 Conclusion and open questions
We presented in this paper an incremental compact data structure to represent a set of non-nested time intervals. This new data structure is composed of two dynamic bit-vectors and relies only on common operations on dynamic bit-vectors. Among the operations of our new data structure are: find_prev(\((A,D)\), \(t\)), which retrieves the previous interval \([t_{1},t_{2}]\) such that \(t_{2}\leq t\) in time
Figure 6: Comparison of Temporal Transitive Closures (TTCs) using incremental data structures to represent sets of non-nested intervals for each pair of vertices. In (a), the overall average wall-clock time to insert all possible \(O(n^{2}\tau)\) contacts randomly shuffled into data structures. In (b), the cumulative wall-clock time and the memory space usage to insert all possible \(O(n^{2}\tau)\) contacts randomly shuffled throughout a single execution. Note that the final wall-clock time of the execution described in (b) was one of the 10 executions with \(\tau=2^{14}\) used to construct (a).
\(O(\log\tau)\); \(\textsc{find\_next}((A,D),\,t)\), which retrieves the next interval \([t_{1},t_{2}]\) such that \(t_{1}\geq t\), also in time \(O(\log\tau)\); and \(\textsc{insert}((A,D),\,t_{1},\,t_{2})\), which inserts a new interval \(\mathcal{I}=[t_{1},t_{2}]\) if no other interval \(\mathcal{I}^{\prime}\) with \(\mathcal{I}^{\prime}\subseteq\mathcal{I}\) exists, while removing all intervals \(\mathcal{I}^{\prime\prime}\) such that \(\mathcal{I}\subseteq\mathcal{I}^{\prime\prime}\), in time \(O(d\log\tau)\), where \(d\) is the number of intervals removed. Moreover, we introduced a new operation \(\textsc{unset\_one\_range}(B,\,j_{1},\,j_{2})\) for dynamic bit-vectors that encode leaves sparsely, which we used to improve the time complexity of our insert algorithm to \(O(\log\tau)\).
Additionally, we used our new data structure to incrementally maintain Temporal Transitive Closures (TTCs) using much less space. We used the same strategy as described in [4]; however, instead of using Binary Search Trees (BSTs), we used our new compact data structure. The time complexities of our algorithms for the new data structure are the same as those for BSTs. However, as we showed in our experiments, using our new data structure greatly reduces the space usage for TTCs in several cases and, as the experiments suggest, the wall-clock time to insert new contacts also improves as \(\tau\) increases.
For future investigations, we conjecture that our compact data structure can be simplified further so that the content of both bit-vectors is merged into a single data structure. Our current insertion algorithm duplicates most operations in order to update both bit-vectors, and each of these operations traverses a tree-like data structure from top to bottom. With a single tree-like data structure, our insertion algorithm could halve the number of traversals and perhaps benefit from better spatial locality. In another direction, our algorithm for \(\textsc{insert}((A,D),\,t_{1},\,t_{2})\) only has time complexity \(O(\log\tau)\) when both \(A\) and \(D\) encode leaves sparsely. Perhaps a dynamic bit-vector data structure that holds a mix of leaves represented densely or sparsely can be employed to retain the \(O(\log\tau)\) complexity while improving the overall runtime of other operations. Lastly, we plan to evaluate our new compact data structure on larger datasets and in other scenarios, for instance on very sparse and on real temporal graphs.
Acknowledgements. This study was financed in part by Fundacao de Amparo a Pesquisa do Estado de Minas Gerais (FAPEMIG) and the Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 001* - under the "CAPES PrInt program" awarded to the Computer Science Post-graduate Program of the Federal University of Uberlandia.
|
2306.11298 | On the $Δ_a$ invariants in non-perturbative complex Chern-Simons
theory | Recently a set of $q$-series invariants, labelled by $\operatorname{Spin}^c$
structures, for weakly negative definite plumbed $3$-manifolds called the
$\widehat{Z}_a$ invariants were discovered by Gukov, Pei, Putrov and Vafa. The
leading rational powers of the $\widehat{Z}_a$ invariants are invariants
themselves denoted by $\Delta_a$. In this paper we further analyze the
structure of these $\Delta_a$ invariants. We outline some of the foundations of
the $\Delta_a$ invariants and provide answers to some questions in the
literature. We also provide a way to compute $\Delta_0$ for Brieskorn spheres. | Shimal Harichurn | 2023-06-20T05:24:15Z | http://arxiv.org/abs/2306.11298v1 | # On the \(\Delta_{a}\) invariants in non-perturbative complex Chern-Simons theory
###### Abstract
Recently a set of \(q\)-series invariants, labelled by \(\mathrm{Spin}^{c}\) structures, for weakly negative definite plumbed 3-manifolds called the \(\widehat{Z}_{a}\) invariants were discovered by Gukov, Pei, Putrov and Vafa. The leading rational powers of the \(\widehat{Z}_{a}\) invariants are invariants themselves denoted by \(\Delta_{a}\). In this paper we further analyze the structure of these \(\Delta_{a}\) invariants. We outline some of the foundations of the \(\Delta_{a}\) invariants and provide answers to some questions in the literature. We also provide a way to compute \(\Delta_{0}\) for Brieskorn spheres.
###### Contents
* 1 Introduction
* 2 The \(\Delta_{a}\) invariant
* 2.1 The definition of the \(\Delta_{a}\) invariant
* 2.2 A characterization of the \(\Delta_{a}\) invariant
* 2.3 \(\Delta_{a}\) and orientation reversal
* 3 \(\Delta_{0}\) for Brieskorn Spheres
* 3.1 A formula for \(\Delta_{0}\) for Brieskorn spheres
* 3.2 Computing \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for Brieskorn Spheres
* 4 \(\Delta_{0}\) and Homology cobordism
* 4.1 \(\Delta_{a}\) and \(\mathrm{Spin}^{c}\) homology cobordism
* 5 \(\Delta_{a}\) and correction terms, \(d\), to Heegaard Floer Homology
* 5.1 Heegard Floer Homology
* 5.2 Connections between \(\Delta_{a}\) and \(d\)
* 5.3 \(\Delta_{0}\) and integral homology spheres
* 5.4 Sharpness of the relation between \(\Delta_{a}\) and \(d\)
* A Comparing \(\Delta_{0}\) and \(d\) for some further examples
* B Some computations of \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for \(\Sigma(b_{1},b_{2},b_{3})\)
* B.1 Further computations for \(\Sigma(b_{1},b_{2},b_{3})\) homology cobordant to \(S^{3}\).
* C Negative and Positive Definite Matrices
## 1 Introduction
In [1] a new invariant of (weakly negative definite) plumbed 3-manifolds called \(\widehat{Z}_{a}\) invariants were discovered. These invariants are labelled by \(\operatorname{Spin}^{c}\) structures and for a given (weakly negative definite) plumbed 3-manifold \(Y:=Y(\Gamma)\) they take the form
\[\widehat{Z}_{a}(Y;q)=2^{-c}q^{\Delta_{a}}\left(c_{0}q^{0}+c_{1}q^{1}+\cdots\right) \tag{1}\]
where \(c\in\mathbb{N}\) and \(\Delta_{a}\in\mathbb{Q}\). Both the numbers \(c\) and \(\Delta_{a}\) turn out to be topological invariants. In this paper we will be focusing on the \(\Delta_{a}\) invariant. We will give a formal definition of these invariants later on, but, as equation 1 suggests, one can think of as the unique rational pre-factored power of the \(\widehat{Z}_{a}\) invariants. In Lemma2.5 we will formalize this notion further. For now, let's look at a couple of examples where these invariants pop up.
**Example 1.1**.: As recorded in [2, Equation 3.28], if \(Y=L(p,1)\) one has that \(\widehat{Z}_{0}(Y;q)=-2q^{\frac{p-3}{4}}\) and \(\widehat{Z}_{1}(Y;q)=q^{\frac{p-3}{4}}(2q^{\frac{1}{p}}).\) Thus \(\Delta_{0}(Y)=\frac{p-3}{4}\) and \(\Delta_{1}(Y)=\frac{p^{2}-3p+1}{4p}\).
**Example 1.2**.: By using the diffeomorphism between \(L(1,1)\) and \(S^{3}\) one obtains that \(\widehat{Z}_{0}(S^{3};q)=q^{-\frac{1}{2}}(2q-2)\) and hence \(\Delta_{0}(S^{3})=-\frac{1}{2}\).
In [2] the \(\Delta_{a}\) invariants were studied more intensely and in the process these invariants were related to the correction terms \(d\) to Heegaard Floer Homology. Moreover as stated in [2], the \(\Delta_{a}\) invariants under the _"3d Modularity Conjecture"_[3] are equal to the scaling dimension of associated log-VOA modules. The goal of this paper is to investigate the structure of these \(\Delta_{a}\) invariants and provide some answers to open questions in the literature regarding them. We list the main results in this paper below.
1. In Proposition3.3, we derive a formula for \(\Delta_{0}\) for Brieskorn spheres. In the process we obtain a formula for \(\widehat{Z}_{0}\) for Brieskorn spheres which is immediately of the form \(q^{\Delta_{0}}\mathbb{Z}[[q]]\). In Examples3.2 and 3.3 we compute \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for \(\Sigma(2,9,11)\) and \(\Sigma(3,7,8)\).
2. In Theorem4.3 we show that neither \(\widehat{Z}_{a}\) nor \(\Delta_{a}\) are homology cobordism or cobordism invariants by providing some counterexamples. This answers a question raised in [2]. We further conjecture, in Conjecture4.5, that \(\Delta_{a}\) is not a \(\operatorname{Spin}^{c}\) homology cobordism invariant and provide evidence for this.
3. In Proposition5.5 we prove that \(\Delta_{0}(Y)=\frac{1}{2}\mod 1\) if \(Y\) is a negative definite plumbed integral homology sphere.
* In Proposition 5.7, using an existing example from the literature, we show that if \(\Delta_{b}(Y)=\frac{1}{2}-d(Y,b)\mod x\) holds for all \(3\)-manifolds \(Y\), we must have \(x=1\). This answers a question that was raised in [2, Section 4].
* In Section B we compute the \(\widehat{Z}_{0}\) and \(\Delta_{0}\) invariants for various Brieskorn spheres which have not appeared thus far in the literature.
**Organization of the paper.** In Section 2 we begin with a quick recap of the \(\widehat{Z}_{a}\) invariants before defining the \(\Delta_{a}\) invariants and providing a characterization for them.
In Section 3 we will analyze \(\Delta_{0}\) and \(\widehat{Z}_{0}\) further for Brieskorn spheres \(Y:=\Sigma(b_{1},b_{2},b_{3})\) of the form \((b_{1},b_{2},b_{3})\neq(2,3,5)\). In particular we will derive a formula for finding \(\Delta_{0}(Y)\) and also put forward a proposition which puts \(\widehat{Z}_{0}(Y;q)\) immediately into the form \(q^{\Delta_{0}}P(q)\) for some \(P(q)\in\mathbb{Z}[[q]]\). To digest these results we also give a couple of examples, those being \(\Sigma(2,9,11)\) and \(\Sigma(3,7,8)\) for which we compute \(\Delta_{0}\) and \(\widehat{Z}_{0}\).
In Section 4 we will show that the \(\Delta_{a}\) invariants are not homology cobordism invariants by providing a counterexample. We will also discuss our conjecture that \(\Delta_{a}\) is not a \(\operatorname{Spin}^{c}\) homology cobordism invariant.
In Section 5 we will discuss the relation between \(\Delta_{a}\) and the correction terms to Heegaard Floer homology. In this section we will also show that \(\Delta_{0}(Y)=\frac{1}{2}\mod 1\) for negative definite plumbed \(Y\).
**Acknowledgements.** The author would like to thank Sergei Gukov for guidance throughout the course of this project and feedback on a draft of this paper. The author would further like to thank Eveliina Peltola for helpful comments during the author's Master's thesis, from which this paper is derived. The author would like to also thank Mrunmay Jagadale and Josef Svoboda for stimulating discussions. The author was supported by the FirstRand FNB 2020 Fund Education scholarship.
## 2 The \(\Delta_{a}\) invariant
We will begin with a quick recap of the \(\widehat{Z}_{a}\)-invariants. First let us recall the definitions of negative definite and weakly negative definite plumbed manifolds. The reader may find the definition of a plumbed manifold in [4, p. 18].
**Definition 2.1**.: A weighted tree \(\Gamma\), and its resulting plumbed manifold \(Y(\Gamma)\) is called
* _negative definite_ if the linking matrix \(M\) it produces is negative definite.
* _weakly negative definite_ if the linking matrix \(M\) it produces is invertible and \(M^{-1}\) is negative definite on the subgroup of \(\mathbb{Z}^{s}\) generated by the vertices of degree \(\geq 3\). By this we mean that if \(v_{1},\ldots,v_{s}\) are all the vertices of \(\Gamma\) and \(v_{1},\ldots,v_{k}\) where \(k<s\) are all the vertices of degree \(\geq 3\) then we require \(M^{-1}\) is negative definite on the subgroup \(\widetilde{\mathbb{Z}^{k}}:=\{(l_{v_{1}},\ldots,l_{v_{k}},0,\ldots,0)\in \mathbb{Z}^{s}\ |\ l_{v_{i}}\in\mathbb{Z}\text{ for }1\leq i\leq k\}\) that is for every \(\vec{\ell}\in\widetilde{\mathbb{Z}^{k}}\) we require that \((\vec{\ell},M^{-1}\vec{\ell})<0\).
The following is an adapted definition from [4, p. 21].
**Definition 2.2** (Principal value).: Let \(P(z)\) be a Laurent series which is holomorphic in some open set containing \(\{z\in\mathbb{C}\mid 1-c\leq|z|\leq 1+c\}\) except possibly at \(|z|=1\) (that is \(P(z)\) can possibly be singular for \(|z|=1\)). We define
\[\operatorname{v.p.}\int_{|z|=1}P(z)dz:=\frac{1}{2}\left[\int_{|z|=1-\epsilon} P(z)dz+\int_{|z|=1+\epsilon}P(z)dz\right]\]
where \(c>\epsilon>0\) is some real number.
_Remark_.: The reason why the above definition doesn't depend on \(\epsilon\) is by virtue of Cauchy's Theorem. Furthermore, if \(P(z)\) is not singular for \(|z|=1\) then one sees that \(\operatorname{v.p.}\int_{|z|=1}P(z)dz=\int_{|z|=1}P(z)dz\)
**Notation 2.1**.: If \(f(z_{0},\dots,z_{s})\) is a complex function, then we will write
\[\operatorname{v.p.}\int_{|z_{0}|=1}\cdots\int_{|z_{s}|=1}f(z_{0},\dots,z_{s}) dz_{s}\cdots dz_{0}\]
to mean
\[\operatorname{v.p.}\int_{|z_{0}|=1}\operatorname{v.p.}\int_{|z_{1}|=1}\cdots \operatorname{v.p.}\int_{|z_{s}|=1}f(z_{0},\dots,z_{s})dz_{s}\cdots dz_{0}.\]
Now let us turn to the definition of the \(\widehat{Z}_{a}\)-invariants. These invariants are constructed in [1, Appendix A]. The following definition has been taken with minor (but equivalent) modifications from [4, p. 21].
**Definition 2.3** (The \(\widehat{Z}_{a}\)-invariants).: Let \(\Gamma\) be a weighted tree with \(s\) vertices which produces a weakly negative definite plumbed manifold \(Y:=Y(\Gamma)\) with associated linking matrix \(M\) such that \(b_{1}(Y)=0\). Recall that \(\operatorname{Spin}^{c}(Y)\cong(2\mathbb{Z}^{s}+\vec{\delta})/2M\mathbb{Z}^{s}\) (where \(\vec{\delta}=(\delta_{v})_{v\in\operatorname{Vert}(\Gamma)}\in\mathbb{Z}^{s}\) denotes the vector comprised of the degrees of the vertices from \(\Gamma\)). For any \(a\in\operatorname{Spin}^{c}(Y)\) let \(\vec{a}\in 2\mathbb{Z}^{s}+\vec{\delta}\) be a representative of \(a\), then letting \(q\) be a formal variable, we define \(\widehat{Z}_{a}(Y;q)\) by
\[\widehat{Z}_{a}(Y;q) =(-1)^{\pi}q^{\frac{3\sigma-\operatorname{Tr}(M)}{4}}\] \[\times\operatorname{v.p.}\oint_{|z_{v_{1}}|=1}\cdots\oint_{|z_{v _{s}}|=1}\left[\prod_{i=1}^{s}\left(z_{v_{i}}-\frac{1}{z_{v_{i}}}\right)^{2- \deg(v_{i})}\right]\cdot\Theta_{a}^{-M}(\vec{z})\frac{dz_{v_{s}}}{2\pi iz_{v_{ s}}}\cdots\frac{dz_{v_{1}}}{2\pi iz_{v_{1}}}\]
where
\[\Theta_{a}^{-M}(\vec{z})=\sum_{\begin{subarray}{c}\vec{l}=(l_{v_{1}},\dots,l_ {v_{s}}),\\ \vec{l}\in 2M\mathbb{Z}^{s}+\vec{a}\end{subarray}}q^{-\frac{(\vec{l},M^{-1} \vec{l})}{4}}\prod_{i=1}^{s}z_{v_{i}}^{l_{v_{i}}},\]
where in the above we have that \(\operatorname{v.p.}\) denotes taking the principal part of the integral, \(\sigma\) is the signature of the matrix \(M\) (the number of positive eigenvalues of \(M\) minus the number of negative eigenvalues of \(M\)) and \(\pi\) is the number of positive eigenvalues of \(M\).
Let us now state a lemma which summarizes a few structural results on the \(\widehat{Z}_{a}\) invariants. The second of which formalizes a comment made in [4, p. 22].
**Lemma 2.1**.: _Let \(Y:=Y(\Gamma)\) be a weakly negative definite plumbed \(3\)-manifold with \(b_{1}(Y)=0\). Let \(\vec{a}\) be a representative of \(a\in\operatorname{Spin}^{c}(Y)\), then_
1. \(\widehat{Z}_{a}(Y;q)\) _can be written in the form_ \[\widehat{Z}_{a}(Y;q)=(-1)^{\pi}2^{-\eta}q^{\frac{3\sigma-\operatorname{Tr}(M) }{4}}\sum_{\vec{\ell}\in 2M\mathbb{Z}^{s}+\vec{a}}c_{\vec{\ell}}\cdot q^{-\frac{( \vec{\ell},M^{-1}\vec{\ell})}{4}},\] (2) _where_ \(\eta\in\mathbb{N}\cup\{0\}\) _and_ \(c_{\vec{\ell}}\in\mathbb{Z}\)_._
2. _If_ \(v_{i}\) _is a vertex of_ \(\Gamma\) _which has_ \(\deg(v_{i})\leq 2\)_, then the number of values of_ \(l_{v_{i}}\) _in the tuple_ \(\vec{\ell}=(l_{v_{1}},\dots,l_{v_{s}})\in 2M\mathbb{Z}^{s}+\vec{a}\) _that yields a non-zero contribution to the_ \(q\)_-series_ \(\widehat{Z}_{a}(Y;q)\) _is finite._
3. _Suppose that_ \(\widehat{Z}_{a}(Y;q)\) _is given in the form of equation_ 2_. Letting_ \(I=\{\vec{\ell}\in 2M\mathbb{Z}^{s}+\vec{a}\mid c_{\vec{\ell}}\neq 0\}\) _then we have that_ \(\max_{\vec{\ell}\in I}(\vec{l},M^{-1}\vec{l})\) _exists._
Proof.: (i) One applies the result (cf. [5, p. 98]) that \(\int_{|z|=1}z^{k}dz=0\) if \(k\neq-1\) and that it equals \(2\pi i\) if \(k=-1\) to the definition of the \(\widehat{Z}_{a}\) invariants, and the result follows.
(ii) The proof we outline builds upon a basic argument laid out in [4, p. 22]. Proving (ii) basically boils down to the fact that, if we let \(q\) be a formal variable, then an integral of the form
\[\oint_{|z|=1}\left(z-\frac{1}{z}\right)^{m}\sum_{l\in\mathbb{Z}}z^{l}q^{l} \frac{dz}{2\pi iz} \tag{3}\]
for \(0\leq m\leq 2\) is non-zero for only finitely many values of \(l\). One then applies this general fact to the integrals which occur in \(\widehat{Z}_{a}(Y;q)\) to complete the proof of the lemma.
(iii) The proof we provide fleshes out the details of the basic argument laid out in [4, p. 22]. Suppose without loss of generality that \(\Gamma\) has \(k<s\) vertices of degree \(\geq 3\) and if \(v_{1},\dots,v_{s}\) are the vertices of \(\Gamma\) then \(v_{1},\dots,v_{k}\) are the vertices of degree \(\geq 3\). Since \(Y\) is a weakly negative definite plumbed manifold we have that \(M^{-1}\) is negative definite on the subgroup \(\widetilde{\mathbb{Z}^{k}}:=\{(l_{v_{1}},\dots,l_{v_{k}},0,\dots,0)\in\mathbb{ Z}^{s}\mid l_{v_{i}}\in\mathbb{Z}\text{ for }1\leq i\leq k\}\) that is for every \(\vec{\ell}\in\widetilde{\mathbb{Z}^{k}}\) we have that \((\vec{\ell},M^{-1}\vec{\ell})<0\). In particular this shows that \(\max_{\vec{l}\in\widetilde{\mathbb{Z}^{k}}}(\vec{l},M^{-1}\vec{l})\) exists (via Corollary C.3) and hence that \(\max_{\vec{l}\in I\cap\widetilde{\mathbb{Z}^{k}}}(\vec{l},M^{-1}\vec{l})\) exists.
Now for any \(\vec{l}=(l_{v_{1}},\dots,l_{v_{k}},l_{v_{k+1}},\dots l_{v_{s}})\in I\subseteq 2M \mathbb{Z}^{s}+\vec{a}\) we see from part (ii) above that each of the \(l_{v_{k+1}},\dots,l_{v_{s}}\) can only take on finitely many different values from \(\mathbb{Z}\). Thus one can view the indexing set \(I\) as \(I=\widetilde{\mathbb{Z}^{k}}\cup F\) where \(F\) is some finite set. Thus we can conclude that \(\max_{\vec{l}\in I}(\vec{l},M^{-1}\vec{l})\) exists from the fact that \(\max_{\vec{l}\in I\cap\widetilde{\mathbb{Z}^{k}}}(\vec{l},M^{-1}\vec{l})\) exists.
In the case when \(Y:=Y(\Gamma)\) is negative definite the above proof of (iii) can be shortened as we show in the lemma below.
**Lemma 2.2**.: _Let \(Y:=Y(\Gamma)\) be a a negative definite plumbed manifold arising from a weighted tree \(\Gamma\) with linking matrix \(M\). Then for any \(a\in\operatorname{Spin}^{c}(Y)\), if \(a\in 2\mathbb{Z}^{s}+\vec{\delta}\) is a representative of \(a\), we have that \(\max_{\vec{l}\in 2M\mathbb{Z}^{s}+\vec{a}}(\vec{l},M^{-1}\vec{l})\) exists._
Proof.: Since \(Y\) is a negative definite plumbed manifold, we have that \(M\) is an \(s\times s\) integer valued negative definite matrix. Then note that Lemma C.1 (part e.) shows that \(M^{-1}\) is also an integer valued negative definite matrix and so Corollary C.3 then immediately shows that \(\max_{\vec{l}\in 2M\mathbb{Z}^{s}+\vec{a}}(\vec{l},M^{-1}\vec{l})\) exists.
### The definition of the \(\Delta_{a}\) invariant
**Definition 2.4** (The \(\Delta_{a}\)-invariant).: Let \(\Gamma\) be a weighted tree, with \(|\operatorname{Vert}(\Gamma)|=s\) which produces a weakly negative definite plumbed manifold \(Y:=Y(\Gamma)\) with linking matrix \(M\) and such that \(b_{1}(Y)=0\). Let \(a\in\operatorname{Spin}^{c}(Y)\), and \(\vec{a}\in 2\mathbb{Z}^{s}+\vec{\delta}\) be a representative of \(a\), suppose that \(\widehat{Z}_{a}(Y;q)\) is given in the form as per Lemma 2.1 part (i)
\[\widehat{Z}_{a}(Y;q)=(-1)^{\pi}2^{-\eta}q^{\frac{3\sigma-\operatorname{Tr}(M) }{4}}\sum_{\vec{\ell}\in 2M\mathbb{Z}^{s}+\vec{a}}c_{\vec{\ell}}\cdot q^{- \frac{(\vec{\ell},M^{-1}\vec{\ell})}{4}}\]
where \(\eta\in\mathbb{N}\cup\{0\}\) and \(c_{\vec{\ell}}\in\mathbb{Z}\). Let \(I=\{\vec{\ell}\in 2M\mathbb{Z}^{s}+\vec{a}\mid c_{\vec{\ell}}\neq 0\}\), then we define
\[\Delta_{a}=\frac{3\sigma-\operatorname{Tr}(M)}{4}-\max_{\vec{l}\in I}\frac{( \vec{l},M^{-1}\vec{l})}{4}\]
where \((\vec{l},M^{-1}\vec{l})\) means the inner product of \(\vec{l}\) with \(M^{-1}\vec{l}\).
_Remark_.: a.) We will sometimes write \(\Delta_{a}(Y)\) to show the explicit dependence on the manifold \(Y\), but we will for the most part simply write \(\Delta_{a}\) instead.
b.) The definition we provided above was produced in a slightly different manner in [1, Appendix A]. In [1, Appendix A] the indexing set \(I\) was not mentioned in the maximum \(\max_{\vec{l}\in I}\frac{(\vec{l},M^{-1}\vec{l})}{4}\) which occurs in the definition of \(\Delta_{a}\), though we believe that this indexing set was implicitly assumed, as is done in [2].
c.) Lemma 2.1 part (iii) ensures that the maximum used in the definition above exists, and hence that the definition of \(\Delta_{a}\) is well-defined.
The following lemma shows that the \(\Delta_{a}\)-invariants really are invariants of weakly negative definite plumbed manifolds.
**Lemma 2.3**.: _Let \(\Gamma_{0}\), \(\Gamma_{1}\) be two weighted trees, which produce weakly negative definite plumbed manifolds \(Y_{0}:=Y(\Gamma_{0})\) and \(Y_{1}:=Y(\Gamma_{1})\) with linking matrix \(M_{0}\) and \(M_{1}\) respectively. If \(Y_{0}\) is diffeomorphic to \(Y_{1}\) then we have that \(\Delta_{a}(Y_{0})=\Delta_{a}(Y_{1})\)_
Proof.: This follows immediately since we have that \(\widehat{Z}_{a}(Y_{0};q)=\widehat{Z}_{a}(Y_{1};q)\) because the fact that \(Y_{0}\) is diffeomorphic to \(Y_{1}\) implies that \(\Gamma_{0}\) and \(\Gamma_{1}\) are related by a sequence of Neumann moves and the \(\widehat{Z}_{a}\)-invariants are invariant under Neumann moves.
We expect to factorize \(\widehat{Z}_{a}(Y;q)\) into an algebraic object of the form \(2^{-\eta}q^{\Delta_{a}}P(q)\) where \(\Delta_{a}\) is some rational power, \(\eta\in\mathbb{N}\cup\{0\}\) and \(P(q)\in\mathbb{Z}[[q]]\). Let us now prove this. The result of the following proposition was stated in [4], but was not proven.
**Proposition 2.4**.: _Let \(\Gamma\) be a weighted tree, which produces a weakly negative definite plumbed manifold \(Y(\Gamma)\) with linking matrix \(M\). Then for any \(a\in\operatorname{Spin}^{c}(Y)\), we have that \(\widehat{Z}_{a}(Y;q)\in 2^{-\eta}q^{\Delta_{a}}\mathbb{Z}[[q]]\) where \(\eta\in\mathbb{N}\cup\{0\}\) and \(2^{-\eta}q^{\Delta_{a}}\mathbb{Z}[[q]]:=\left\{2^{-\eta}q^{\Delta_{a}}P(q)\mid P (q)\in\mathbb{Z}[[q]]\right\}.\)_
Proof.: From Lemma 2.1 part (i) we know that \(\widehat{Z}_{a}(Y;q)\) is given in the form
\[\widehat{Z}_{a}(Y;q)=(-1)^{\pi}2^{-\eta}q^{\frac{3\sigma-\operatorname{Tr}(M) }{4}}\sum_{\vec{\ell}\in I}c_{\vec{\ell}}\cdot q^{-\frac{(\vec{\ell},M^{-1}\vec {\ell})}{4}} \tag{4}\]
where \(I\subseteq 2M\mathbb{Z}^{s}+\vec{a}\) is an indexing set such that \(c_{\vec{\ell}}\neq 0\) for each \(\vec{\ell}\in I\), \(\vec{a}\in 2\mathbb{Z}^{s}+\vec{\delta}\) is a _fixed_ representative of \(a\) and \(\eta\in\mathbb{N}\cup\{0\}\). Since \(Y\) is a weakly negative definite plumbed manifold by Lemma 2.1 part (iii) we see that \(\gamma:=\max_{\vec{\ell}\in I}(\vec{\ell},M^{-1}\vec{\ell})\) exists. Thus we can factorize \(\widehat{Z}_{a}(Y;q)\) into the form
\[\widehat{Z}_{a}(Y;q)=(-1)^{\pi}2^{-\eta}q^{\frac{3\sigma-\operatorname{Tr}(M) -\gamma}{4}}\sum_{\vec{\ell}\in I}c_{\vec{\ell}}\cdot q^{\frac{\gamma-(\vec{ \ell},M^{-1}\vec{\ell})}{4}} \tag{5}\]
Now let \(\vec{\ell}_{0}\in I\) be such that \(\gamma=(\vec{\ell_{0}},M^{-1}\vec{\ell_{0}})\). Then to show that \(\widehat{Z}_{a}(Y;q)\in 2^{-\eta}q^{\Delta_{a}}\mathbb{Z}[[q]]\), where \(\Delta_{a}=\frac{3\sigma-\operatorname{Tr}(M)-\gamma}{4}\) it suffices to show that for any \(\vec{\ell}\in I\) that \(\gamma-(\vec{\ell},M^{-1}\vec{\ell})=(\vec{\ell_{0}},M^{-1}\vec{\ell_{0}})-( \vec{\ell},M^{-1}\vec{\ell})\in 4\mathbb{Z}_{+}\) where \(\mathbb{Z}_{+}=\{x\in\mathbb{Z}\mid x\geq 0\}\) because then it would follow that all the exponents of \(q\) on the right hand side of equation (5) are non-negative integers.
To that end let \(\vec{\ell}\in I\) be given arbitrarily. First notice that since \(\gamma:=\max_{\vec{\ell}\in I}(\vec{\ell},M^{-1}\vec{\ell})\) we have \(\gamma-(\vec{\ell},M^{-1}\vec{\ell})\geq 0\). Then secondly notice that we can express \(\vec{\ell}=\vec{\ell_{0}}+2Mv\) where \(v\in\mathbb{Z}^{s}\). The reason for this is that both \(\vec{\ell}\) and \(\vec{\ell_{0}}\) are elements of the set \(2M\mathbb{Z}^{s}+\vec{a}\) and since \(\vec{a}\) is fixed we have that \(\vec{\ell}-\vec{\ell_{0}}\in 2M\mathbb{Z}^{s}\), which implies that \(\vec{\ell}=\vec{\ell_{0}}+2Mv\). One can then compute (using the fact that \(M\) is a symmetric matrix and properties of the transpose) that
\[\left(\vec{\ell},M^{-1}\vec{\ell}\right)=\vec{\ell}^{T}M^{-1}\vec{\ell}=(\vec{ \ell_{0}}+2Mv)^{T}M^{-1}(\vec{\ell_{0}}+2Mv)=\gamma+4\vec{\ell_{0}}^{T}v+4(v,Mv).\]
Noting that \(\vec{\ell_{0}}^{T}v\in\mathbb{Z}\) and \((v,Mv)\in\mathbb{Z}\), we can thus see that \(\gamma-(\vec{\ell},M^{-1}\vec{\ell})=-4\vec{\ell_{0}}^{T}v-4(v,Mv)\in 4 \mathbb{Z}\). Combining this with the fact that \(\gamma-(\vec{\ell},M^{-1}\vec{\ell})\geq 0\) implies that \(\gamma-(\vec{\ell},M^{-1}\vec{\ell})\in 4\mathbb{Z}_{+}\). This completes the proof of the proposition.
### A characterization of the \(\Delta_{a}\) invariant
The following lemma provides a very useful way to simply 'read off' the \(\Delta_{a}\)-invariant from a given expression of \(\widehat{Z}_{a}(Y;q)\). A small part of the statement of the lemma below appeared in [2, Equation 1.2] in a different form.
**Lemma 2.5**.: _Let \(Y=Y(\Gamma)\) be a weakly negative definite plumbed manifold with linking matrix \(M\). Suppose that \(\widehat{Z}_{a}(Y;q)=q^{\delta}(c_{0}+c_{1}q^{n_{1}}+c_{2}q^{n_{2}}+\cdots)\) where \(\delta\in\mathbb{Q}\), \(c_{i}\in\mathbb{Q}\setminus\{0\}\) and \(n_{i}\in\mathbb{N}\cup\{0\}\) for each \(i\). Suppose moreover that \(n_{0}=0\) (so that \(c_{0}=c_{0}q^{n_{0}}\) in the above) and \(n_{i}\neq n_{j}\) for \(i\neq j\), then \(\delta=\Delta_{a}\)._
Proof.: From \(\widehat{Z}_{a}(Y;q)=q^{\delta}(c_{0}+c_{1}q^{n_{1}}+c_{2}q^{n_{2}}+\cdots)\), simply multiply the \(q^{\delta}\) factor out and use the fact that \(n_{0}=0\) to get
\[\widehat{Z}_{a}(Y;q)=c_{0}q^{n_{0}+\delta}+c_{1}q^{n_{1}+\delta}+c_{2}q^{n_{2} +\delta}+\cdots=\sum_{i\in I^{\prime}}c_{i}q^{n_{i}+\delta}. \tag{6}\]
where \(I^{\prime}\subseteq\mathbb{Z}\) is some indexing set. From Lemma 2.1 part (i) we see that we can also write
\[\widehat{Z}_{a}(Y;q)=\sum_{\vec{\ell}\in I}c_{\vec{\ell}^{\prime}}\cdot q^{ \frac{3\sigma-\operatorname{Tr}(M)-(\vec{\ell},M^{-1}\vec{\ell})}{4}} \tag{7}\]
where each of the powers of \(q\) are unique, \(c_{\vec{\ell}}\) are some choice of rational numbers and \(I\subseteq 2M\mathbb{Z}^{s}+\vec{a}\) is an indexing set such that each of the rational numbers \(c_{\vec{\ell}}\) are non-zero 1. Comparing equations (6) and (7) we see that we must have
Footnote 1: We can express \(\widehat{Z}_{a}(Y;q)\) in the form of equation 7 if we absorb the leading factors of \(2^{-\eta}\) in Lemma 2.1 part (i).
\[n_{i}+\delta=\frac{3\sigma-\operatorname{Tr}(M)-(\vec{\ell}_{i},M^{-1}\vec{ \ell}_{i})}{4} \tag{8}\]
for each \(i\geq 0\) and moreover we must have a bijection between the indexing sets \(I^{\prime}\) and \(I\). In particular equation 8 implies that
\[\delta=n_{0}+\delta=\frac{3\sigma-\operatorname{Tr}(M)-(\vec{\ell}_{0},M^{-1} \vec{\ell}_{0})}{4} \tag{9}\]
for some \(\vec{\ell}_{0}\in 2M\mathbb{Z}^{s}+\vec{a}\). Equation (9) then implies that
\[n_{i}=n_{i}+\delta-\delta=\frac{(\vec{\ell}_{0},M^{-1}\vec{\ell}_{0})-(\vec{ \ell}_{i},M^{-1}\vec{\ell}_{i})}{4}.\]
The assumptions that \(n_{0}=0\), that \(n_{i}\in\mathbb{N}\cup\{0\}\) for all \(i\geq 0\), and that \(n_{i}\neq n_{j}\) for \(i\neq j\) imply that \(n_{i}>0\) for each \(i\geq 1\). This then implies
that \(\frac{1}{4}\left((\vec{\ell}_{0},M^{-1}\vec{\ell}_{0})-(\vec{\ell}_{i},M^{-1}\vec{\ell}_{i})\right)>0\) and hence that \((\vec{\ell}_{0},M^{-1}\vec{\ell}_{0})>(\vec{\ell}_{i},M^{-1}\vec{\ell}_{i})\) for each \(i\geq 1\). Thus \((\vec{\ell}_{0},M^{-1}\vec{\ell}_{0})=\max_{\vec{l}\in I}(\vec{l},M^{-1}\vec{l})\) and hence we find that
\[\delta=\frac{3\sigma-\operatorname{Tr}(M)}{4}-\max_{\vec{l}\in I}\frac{(\vec{l },M^{-1}\vec{l})}{4}=\Delta_{a}\]
as desired.
_Remark_.: With the proof of this lemma, our earlier remark in the introduction that one can view \(\Delta_{a}\) as the unique rational pre-factored power of \(q\) in \(\widehat{Z}_{a}(Y;q)\) is justified.
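In computational terms, Lemma 2.5 says that \(\Delta_{a}\) is simply the smallest exponent of \(q\) carrying a non-zero coefficient. The small Python helper below (our own illustration, with truncated \(q\)-series stored as exponent-to-coefficient dictionaries of exact fractions) makes this explicit for the two examples from the introduction.

```python
from fractions import Fraction

def delta_from_qseries(zhat):
    """Read off Delta_a from a truncated q-series {exponent: coefficient},
    assuming the leading coefficient is non-zero as in Lemma 2.5."""
    return min(e for e, c in zhat.items() if c != 0)

# Z-hat_0(L(5,1); q) = -2 q^{(5-3)/4}, cf. Example 1.1:
print(delta_from_qseries({Fraction(5 - 3, 4): -2}))                  # 1/2

# Z-hat_0(S^3; q) = q^{-1/2}(2q - 2) = 2q^{1/2} - 2q^{-1/2}, cf. Example 1.2:
print(delta_from_qseries({Fraction(1, 2): 2, Fraction(-1, 2): -2}))  # -1/2
```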
### \(\Delta_{a}\) and orientation reversal
Suppose that \(Y:=Y(\Gamma)\) is a negative definite plumbed manifold. The orientation reversal of \(Y\), that being \(-Y\), is the plumbed manifold given by \(-Y:=Y(-\Gamma)\) where \(-\Gamma\) is the weighted tree consists of the same vertices and edges as \(\Gamma\) but with the negation of each weight (see [4, p. 9 and 18]). If \(M\) is the linking matrix of \(Y\), then we see that \(-M\) is the linking matrix of \(-Y\).
Suppose that we wanted to find \(\widehat{Z}_{a}(-Y;q)\) and in particular \(\Delta_{a}(-Y)\). Given the current definitions of both these objects we run into an issue, namely that the linking matrix \(-M\) of \(-Y\) will be positive definite rather than negative definite (this is because \(Y\) being a negative definite plumbed manifold implies that \(M\) is negative definite and hence that \(-M\) is positive definite). We can salvage a way to calculate \(\Delta_{a}(-Y)\), by means of the following definition.
**Definition 2.5**.: Let \(Y:=Y(\Gamma)\) be a negative definite plumbed manifold. If \(-Y\) is the orientation reversal of \(Y\), then we define \(\Delta_{a}(-Y):=-\Delta_{a}(Y)\).
This definition is justified by the fact that, if by Lemma 2.1 part (i) we express \(\widehat{Z}_{a}(Y;q)\) in the following form:
\[\widehat{Z}_{a}(Y;q)=(-1)^{\pi}2^{-\eta}q^{\frac{3\sigma-\operatorname{Tr}(M )}{4}}\sum_{\vec{\ell}\in I}c_{\vec{\ell}^{\,\cdot}}\,q^{-\frac{(\vec{\ell},M ^{-1}\vec{\ell})}{4}} \tag{10}\]
where \(I\subseteq 2M\mathbb{Z}^{s}+\vec{a}\) is some indexing set such that all the \(c_{\vec{\ell}}\) are non-zero, then we find that
\[-\Delta_{a}(Y)=-\left(\frac{3\sigma-\operatorname{Tr}(M)}{4}\right)+\max_{ \vec{l}\in I}\frac{(\vec{l},M^{-1}\vec{l})}{4}=\frac{3\sigma(-M)-\operatorname {Tr}(-M)}{4}-\min_{\vec{l}\in I}\frac{(\vec{l},-M^{-1}\vec{l})}{4}\]
where by \(\sigma(-M)\) we mean the signature of the matrix \(-M\). Note that \(\min_{\vec{l}\in I}\frac{(\vec{l},-M^{-1}\vec{l})}{4}\) exists since \(-M\) is positive definite (and thus so is \(-M^{-1}\)).
## 3 \(\Delta_{0}\) for Brieskorn Spheres
Given pairwise relatively prime integers \(0<b_{1}<b_{2}<b_{3}\) one can define the _Brieskorn sphere_\(\Sigma(b_{1},b_{2},b_{3}):=\{(x,y,z)\in\mathbb{C}^{3}\mid x^{b_{1}}+y^{b_{2}}+z^{b_{3}}=0\}\cap S^{5}.\) Brieskorn spheres are negative definite plumbed 3-manifolds which are also integral homology spheres. Hence for Brieskorn spheres there is a unique \(\mathrm{Spin}^{c}\) structure, which we label as \(0\), so there is only a single corresponding \(\widehat{Z}_{a}\)-invariant, namely \(\widehat{Z}_{0}(\Sigma(b_{1},b_{2},b_{3});q)\), and a single \(\Delta_{0}\)-invariant. The method for finding a plumbing description for \(\Sigma(b_{1},b_{2},b_{3})\) was mentioned in [4, p. 21]. Let us explain this method. The Brieskorn sphere \(\Sigma(b_{1},b_{2},b_{3})\) is the Seifert manifold \(M(b;\frac{a_{1}}{b_{1}},\frac{a_{2}}{b_{2}},\frac{a_{3}}{b_{3}})\) where \(b<0\) and \(a_{1},a_{2},a_{3}>0\) are _chosen_ to satisfy the equation
\[b_{1}b_{2}b_{3}\cdot b+b_{2}b_{3}a_{1}+b_{1}b_{3}a_{2}+b_{1}b_{2}a_{3}=-1. \tag{11}\]
The integers \(b,a_{1},a_{2},a_{3}\) have to be chosen because equation 11 above has more than a single solution (and thus there is also more than one plumbing description).
Then the plumbing graph \(\Gamma\) (see Figure 1) which produces \(\Sigma(b_{1},b_{2},b_{3})\) has a central vertex labelled by \(b\) and three "legs" whose vertices are given by the integers \(-k_{i_{1}},\ldots,-k_{i_{s_{i}}}\) for \(1\leq i\leq 3\) which come from the continued fraction decomposition of \(\frac{b_{i}}{a_{i}}\), i.e.
\[\frac{b_{i}}{a_{i}}=[k_{i_{1}},\ldots,k_{i_{s_{i}}}]=k_{i_{1}}-\frac{1}{k_{i_{ 2}}-\frac{1}{\cdots-\frac{1}{k_{i_{s_{i}}}}}}.\]
Let us give an example to illustrate this procedure.
Figure 1: Plumbing description of the Brieskorn manifold \(Y(\Gamma):=\Sigma(b_{1},b_{2},b_{3})=M(b;\frac{a_{1}}{b_{1}},\frac{a_{2}}{b_{2} },\frac{a_{3}}{b_{3}})\). We call the vertices which correspond to the weights \(-k_{i_{s_{i}}}\) for \(1\leq i\leq 3\), the _terminal vertices_ of the graph \(\Gamma\). Sometimes we also call it the ’terminal vertices of the plumbing description’ of the Brieskorn sphere \(\Sigma(b_{1},b_{2},b_{3})\).
**Example 3.1**.: Consider the Brieskorn sphere \(\Sigma(2,9,11)\). The integers \(b=-1\), \(a_{1}=1,a_{2}=2\) and \(a_{3}=3\) satisfy equation 11 with \(b_{1}=2,b_{2}=9\) and \(b_{3}=11\). One can then compute the continued fractions \(\frac{b_{1}}{a_{1}}=2=[2]\), \(\frac{b_{2}}{a_{2}}=\frac{9}{2}=[5,2]\) and \(\frac{b_{3}}{a_{3}}=\frac{11}{3}=[4,3]\) to produce a plumbing graph for \(\Sigma(2,9,11)\) of the shape shown in Figure 1, with central vertex of weight \(-1\) and legs with weights \(-2\); \(-5,-2\); and \(-4,-3\).
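The data entering Example 3.1 can be reproduced mechanically. The short Python sketch below (an illustration; the function name is ours) computes the continued fraction coefficients \([k_{i_{1}},\ldots,k_{i_{s_{i}}}]\) of \(b_{i}/a_{i}\) and checks equation 11 for \(\Sigma(2,9,11)\).

```python
def neg_cont_frac(b, a):
    """Coefficients [k_1, ..., k_s] with b/a = k_1 - 1/(k_2 - 1/( ... - 1/k_s))."""
    ks = []
    while a > 0:
        k = -(-b // a)            # ceiling of b/a
        ks.append(k)
        b, a = a, k * a - b
    return ks

b1, b2, b3 = 2, 9, 11
b, (a1, a2, a3) = -1, (1, 2, 3)

# equation 11:
assert b1*b2*b3*b + b2*b3*a1 + b1*b3*a2 + b1*b2*a3 == -1

print(neg_cont_frac(b1, a1), neg_cont_frac(b2, a2), neg_cont_frac(b3, a3))
# [2] [5, 2] [4, 3]
```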
**Notation 3.1**.: For the rest of this section let us fix the following notation: suppose we are given co-prime integers \(b_{i}\), \(1\leq i\leq 3\) which satisfy \(0<b_{1}<b_{2}<b_{3}\) and we are considering the Brieskorn sphere \(\Sigma(b_{1},b_{2},b_{3})\). We then define \(p:=b_{1}b_{2}b_{3}\) and
\[\alpha_{1} :=b_{1}b_{2}b_{3}-b_{1}b_{2}-b_{1}b_{3}-b_{2}b_{3},\] \[\alpha_{2} :=b_{1}b_{2}b_{3}+b_{1}b_{2}-b_{1}b_{3}-b_{2}b_{3},\] \[\alpha_{3} :=b_{1}b_{2}b_{3}-b_{1}b_{2}+b_{1}b_{3}-b_{2}b_{3},\] \[\alpha_{4} :=b_{1}b_{2}b_{3}+b_{1}b_{2}+b_{1}b_{3}-b_{2}b_{3}.\]
In [4, Proposition 4.8] a simplification of the general formula for \(\widehat{Z}_{0}(\Sigma(b_{1},b_{2},b_{3});q)\) was put forward which incorporated functions known as _false theta functions_.2 These functions are defined in [4, Equations 53 and 54] as
Footnote 2: As mentioned in [4], another derivation of the formula for \(\widehat{Z}_{0}(\Sigma(b_{1},b_{2},b_{3});q)\) was given by Chung in [6].
\[\widetilde{\Psi}_{p}^{(a)}(q):=\sum_{n=0}^{\infty}\psi_{2p}^{(a)}(n)q^{\frac{ n^{2}}{4p}}\in q^{\frac{a^{2}}{4p}}\mathbb{Z}[[q]] \tag{12}\]
where
\[\psi_{2p}^{(a)}(n)=\left\{\begin{array}{rl}1&\mbox{if $n=\quad a+m\cdot 2p$ for $m\in \mathbb{Z}$}\\ -1&\mbox{if $n=-a+m\cdot 2p$ for $m\in\mathbb{Z}$}\\ 0&\mbox{otherwise}\end{array}\right. \tag{13}\]
and, following the convention in [4, Equation 55], the notation \(\widetilde{\Psi}_{p}^{c_{1}(a_{1})+c_{2}(a_{2})+\cdots}(q)\) is used as a shorthand for \(c_{1}\widetilde{\Psi}_{p}^{(a_{1})}(q)+c_{2}\widetilde{\Psi}_{p}^{(a_{2})}(q)+\cdots\). We noticed however, that there was a sign error contained in the proof of [4, Proposition 4.8]. Correcting this sign error we state the simplified formula below.
**Theorem 3.1**.: _For \((b_{1},b_{2},b_{3})\neq(2,3,5)\), consider the Brieskorn sphere \(Y=\Sigma(b_{1},b_{2},b_{3})\) which has linking matrix \(M\), where the integers \(0<b_{1}<b_{2}<b_{3}\) are pairwise relatively prime. Then we have that_
\[\widehat{Z}_{0}(Y;q)=q^{\xi}\cdot\left(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{( \alpha_{1})-(\alpha_{2})-(\alpha_{3})+(\alpha_{4})}(q)\right) \tag{14}\]
_where \(\alpha_{i}\) for \(1\leq i\leq 4\) is defined as in Notation 3.1 and_
\[\xi=\frac{1}{4}\left(\sum_{i=1}^{3}h_{i}-3s-\operatorname{Tr}(M)-\frac{b_{2}b _{3}}{b_{1}}-\frac{b_{1}b_{3}}{b_{2}}-\frac{b_{1}b_{2}}{b_{3}}\right)\in \mathbb{Q}\]
_wherein \(h_{i}\) is the absolute value of the determinant of the linking matrix of the graph obtained by deleting a terminal vertex on the \(i\)-th leg from the plumbing description \(\Gamma\) which produces \(\Sigma(b_{1},b_{2},b_{3})\), and \(s\) is the number of vertices in \(\Gamma\). 3_
Footnote 3: Note that in [4], the notation \(\Delta\) was adopted instead of the notation \(\xi\) we use here. One should however not confuse the notation \(\Delta\) used in [4, Proposition 4.8] with the \(\Delta_{0}\) invariant for \(\Sigma(b_{1},b_{2},b_{3})\). From what we will show below one can see that \(\Delta\) and \(\Delta_{0}\) are related but not equal.
The sign error in [4, Proposition 4.8] occurs towards the end of its proof.
We do not consider \((b_{1},b_{2},b_{3})=(2,3,5)\) because in that case the form of \(\widehat{Z}_{0}(\Sigma(b_{1},b_{2},b_{3});q)\) which appears in equation (14) is not entirely correct, namely there is an extra term that one needs to include. One can see this extra term in the statement of [4, Proposition 4.8]. To simplify our proofs below and for brevity we exclude this case for the rest of the section.
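Since everything on the right-hand side of equation (14) is explicit, the \(q\)-series can be expanded directly from definitions (12) and (13). The following Python sketch (our own illustration; all function names are ours) collects the exponents \(n^{2}/4p\) with signs \(\pm 1\) for the combination \(\widetilde{\Psi}_{p}^{(\alpha_{1})-(\alpha_{2})-(\alpha_{3})+(\alpha_{4})}(q)\), truncated at a chosen order; the overall prefactor \(q^{\xi}\) is omitted.

```python
from fractions import Fraction

def false_theta(p, a, n_max):
    """Truncated expansion of Psi-tilde_p^{(a)}: {n^2/(4p): coefficient} for 0 <= n <= n_max."""
    series = {}
    for n in range(n_max + 1):
        coeff = 0
        if (n - a) % (2 * p) == 0:      # n = a + 2pm
            coeff += 1
        if (n + a) % (2 * p) == 0:      # n = -a + 2pm
            coeff -= 1
        if coeff:
            e = Fraction(n * n, 4 * p)
            series[e] = series.get(e, 0) + coeff
    return series

def linear_combination(p, signed_residues, n_max):
    total = {}
    for sign, a in signed_residues:
        for e, c in false_theta(p, a, n_max).items():
            total[e] = total.get(e, 0) + sign * c
    return {e: c for e, c in sorted(total.items()) if c}

b1, b2, b3 = 2, 9, 11
p = b1 * b2 * b3
alphas = [p - b1*b2 - b1*b3 - b2*b3, p + b1*b2 - b1*b3 - b2*b3,
          p - b1*b2 + b1*b3 - b2*b3, p + b1*b2 + b1*b3 - b2*b3]
series = linear_combination(p, zip([1, -1, -1, 1], alphas), n_max=600)
print(list(series.items())[:5])   # leading exponent is alpha_1^2/(4p) = 3481/792
```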
### A formula for \(\Delta_{0}\) for Brieskorn spheres
We first state a few useful results that will help us.
**Lemma 3.2**.: _Suppose we are given co-prime integers \(b_{i}\), \(1\leq i\leq 3\) which satisfy \(0<b_{1}<b_{2}<b_{3}\). Let \(p\) and \(\alpha_{i}\) for \(1\leq i\leq 4\) be given as in Notation 3.1. Then_
1. \(\min\{\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\}=\alpha_{1}\)_._
2. \(\frac{\alpha_{i}^{2}-\alpha_{1}^{2}}{4p}\in\mathbb{Z}\) _for_ \(1\leq i\leq 4\)_._
3. _Suppose further that_ \((b_{1},b_{2},b_{3})\neq(2,3,5)\)_, then_ \(\frac{1}{b_{1}}+\frac{1}{b_{2}}+\frac{1}{b_{3}}<1\) _and_ \(0<\alpha_{i}<2p\) _for_ \(1\leq i\leq 4\)
Proof.: (i) We have the following series of equalities
\[\alpha_{1}=\alpha_{2}-2b_{1}b_{2}=\alpha_{3}-2b_{1}b_{3}=\alpha_{4}-2b_{1}b_{2}-2b_{1}b_{3}.\]
Then the fact that \(b_{i}>0\) for \(1\leq i\leq 3\) implies that \(\alpha_{1}<\alpha_{j}\) for \(2\leq j\leq 4\) which completes the proof.
(ii) Note that to show that \(\frac{\alpha_{i}^{2}-\alpha_{1}^{2}}{4p}\in\mathbb{Z}\) for \(i=1,2,3,4\) it suffices to show that \(4p|(\alpha_{i}^{2}-\alpha_{1}^{2})\) for \(i=2,3,4\). Since \(\alpha_{i}^{2}-\alpha_{1}^{2}=(\alpha_{i}+\alpha_{1})(\alpha_{i}-\alpha_{1})\), if we can show that \(4p\) divides \((\alpha_{i}+\alpha_{1})(\alpha_{i}-\alpha_{1})\) for \(i=2,3,4\) then we are done. To that end observe that we have the following series of equalities:
\[(\alpha_{2}+\alpha_{1})(\alpha_{2}-\alpha_{1}) =(2b_{1}b_{2}b_{3}-2b_{1}b_{3}-2b_{2}b_{3})(2b_{1}b_{2})\] \[(\alpha_{3}+\alpha_{1})(\alpha_{3}-\alpha_{1}) =(2b_{1}b_{2}b_{3}-2b_{1}b_{2}-2b_{2}b_{3})(2b_{1}b_{3})\] \[(\alpha_{4}+\alpha_{1})(\alpha_{4}-\alpha_{1}) =(2b_{1}b_{2}b_{3}-2b_{2}b_{3})(2b_{1}b_{2}+2b_{1}b_{3})\]
In all the cases above one can just expand the right hand side and check directly that \(4p\) where \(p=b_{1}b_{2}b_{3}\) divides all the quantities above.
(iii) Proof omitted - follows from elementary number theory arguments.
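The statements of Lemma 3.2 are also easy to test numerically; the following brief Python check (our own illustration) verifies parts (i)–(iii) on a sample of admissible triples.

```python
from math import gcd
from itertools import combinations

def alphas(b1, b2, b3):
    p = b1 * b2 * b3
    return p, [p - b1*b2 - b1*b3 - b2*b3, p + b1*b2 - b1*b3 - b2*b3,
               p - b1*b2 + b1*b3 - b2*b3, p + b1*b2 + b1*b3 - b2*b3]

for b1, b2, b3 in combinations(range(2, 12), 3):
    if gcd(b1, b2) == gcd(b1, b3) == gcd(b2, b3) == 1 and (b1, b2, b3) != (2, 3, 5):
        p, a = alphas(b1, b2, b3)
        assert min(a) == a[0]                                   # part (i)
        assert all((x*x - a[0]*a[0]) % (4*p) == 0 for x in a)   # part (ii)
        assert all(0 < x < 2*p for x in a)                      # part (iii)
print("Lemma 3.2 holds on the sampled triples")
```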
**Proposition 3.3**.: _Suppose \((b_{1},b_{2},b_{3})\neq(2,3,5)\) where the integers \(0<b_{1}<b_{2}<b_{3}\) are pairwise relatively prime. Suppose the Brieskorn sphere \(Y=\Sigma(b_{1},b_{2},b_{3})\) has linking matrix \(M\) then we have that:_
\[\widehat{Z}_{0}(Y;q)=q^{\Delta_{0}}\cdot\left[q^{-\frac{\alpha_{1}^{2}}{4p}} \left(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{1})-(\alpha_{2})-(\alpha_{ 3})+(\alpha_{4})}(q)\right)\right]\]
_where_
\[\Delta_{0}=\xi+\frac{\alpha_{1}^{2}}{4p}\]
_wherein \(p=b_{1}b_{2}b_{3}\) and \(\xi=\frac{1}{4}\left(\sum_{i=1}^{3}h_{i}-3s-\operatorname{Tr}(M)-\frac{b_{2}b_{3}}{b_{1}}-\frac{b_{1}b_{3}}{b_{2}}-\frac{b_{1}b_{2}}{b_{3}}\right)\in\mathbb{Q}\) wherein \(h_{i}\) is the absolute value of the determinant of the linking matrix of the graph obtained by deleting a terminal vertex on the \(i\)-th leg from the plumbing description \(\Gamma\) of \(\Sigma(b_{1},b_{2},b_{3})\), and \(s\) is the number of vertices in \(\Gamma\)._
Proof.: Let \(Y=\Sigma(b_{1},b_{2},b_{3})\). Using the notation from Theorem 3.1 (and writing \(p:=b_{1}b_{2}b_{3}\), which we will use later in the proof) we know that
\[\widehat{Z}_{0}(Y;q)=q^{\xi}\cdot\left(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{( \alpha_{1})-(\alpha_{2})-(\alpha_{3})+(\alpha_{4})}(q)\right)\]
where \(\xi\) was defined in Theorem 3.1. Then using the fact that in equation 12 we defined that
\[\widetilde{\Psi}_{p}^{(a)}(q):=\sum_{n=0}^{\infty}\psi_{2p}^{(a)}(n)q^{\frac{n ^{2}}{4p}}\in q^{\frac{a^{2}}{4p}}\mathbb{Z}[[q]] \tag{15}\]
we see that \(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{i})}(q)=q^{\frac{\alpha_{i}^{2}}{4p}}P_ {i}(q)\) where \(P_{i}(q)\in\mathbb{Z}[[q]]\) for \(1\leq i\leq 4\). Thus, by making use of how we defined \(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{1})-(\alpha_{2})-(\alpha_{3})+( \alpha_{4})}(q)\) as a linear combination, we can factorize \(\widehat{Z}_{0}(Y;q)\) into the form:
\[\widehat{Z}_{0}(Y;q) =q^{\xi}\cdot\left(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{1})}(q)-\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{2})}(q)-\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{3})}(q)+\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{4})}(q)\right) \tag{16}\] \[=q^{\xi}\left(q^{\frac{\alpha_{1}^{2}}{4p}}P_{1}(q)-q^{\frac{\alpha_{2}^{2}}{4p}}P_{2}(q)-q^{\frac{\alpha_{3}^{2}}{4p}}P_{3}(q)+q^{\frac{\alpha_{4}^{2}}{4p}}P_{4}(q)\right) \tag{17}\] \[=q^{\xi+\frac{\alpha_{1}^{2}}{4p}}\left(P_{1}(q)-q^{\frac{\alpha_{2}^{2}-\alpha_{1}^{2}}{4p}}P_{2}(q)-q^{\frac{\alpha_{3}^{2}-\alpha_{1}^{2}}{4p}}P_{3}(q)+q^{\frac{\alpha_{4}^{2}-\alpha_{1}^{2}}{4p}}P_{4}(q)\right). \tag{18}\]
Note that Lemma 3.2 part (ii) implies that \(\frac{\alpha_{i}^{2}-\alpha_{1}^{2}}{4p}\in\mathbb{Z}\) for \(1\leq i\leq 4\). Moreover Lemma 3.2 part (i) implies that \(\frac{\alpha_{i}^{2}-\alpha_{1}^{2}}{4p}\in\mathbb{N}\cup\{0\}\) for \(1\leq i\leq 4\). Thus in particular we see that \(q^{\frac{\alpha_{i}^{2}-\alpha_{1}^{2}}{4p}}P_{i}(q)\in\mathbb{Z}[[q]]\) for \(1\leq i\leq 4\). Moreover since \(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{1})}(q)=q^{\frac{\alpha_{1}^{2}} {4p}}P_{1}(q)\) we then see by expanding the definition of \(\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{1})}(q)\) that
\[q^{\frac{\alpha_{1}^{2}}{4p}}P_{1}(q) =\widetilde{\Psi}_{b_{1}b_{2}b_{3}}^{(\alpha_{1})}(q)\] \[=q^{\frac{\alpha_{1}^{2}}{4p}}+\sum_{m=1}^{\infty}\left(q^{\frac{\alpha_{1}^{2}}{4p}+\alpha_{1}m+m^{2}p}-q^{\frac{\alpha_{1}^{2}}{4p}-\alpha_{1}m+m^{2}p}\right)\] \[=q^{\frac{\alpha_{1}^{2}}{4p}}\left(1+\sum_{m=1}^{\infty}\left(q^{\alpha_{1}m+m^{2}p}-q^{-\alpha_{1}m+m^{2}p}\right)\right)\]
which implies that \(P_{1}(q)=1+\sum_{m=1}^{\infty}\left(q^{\alpha_{1}m+m^{2}p}-q^{-\alpha_{1}m+m^{2}p}\right)\) (to see this simply multiply4 both sides of the above equation by \(q^{-\frac{\alpha_{1}^{2}}{4p}}\)) and importantly from this we deduce that the leading term of \(P_{1}(q)\) is \(1\). From this we can apply Lemma 2.5 to the form of \(\widehat{Z}_{0}(Y;q)\) obtained in equation 18 to see that
Footnote 4: This multiplication is well-defined in \(\mathbf{k}\), the Novikov field
\[\Delta_{0}(Y)=\xi+\frac{\alpha_{1}^{2}}{4p}.\]
### Computing \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for Brieskorn Spheres
Suppose a Brieskorn sphere \(\Sigma(b_{1},b_{2},b_{3})\) is given such that \((b_{1},b_{2},b_{3})\neq(2,3,5)\) and one wants to compute \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for it. Then one needs to do the following.
1. First find integers \(b<0\) and \(a_{1},a_{2},a_{3}>0\) which satisfy \(b_{1}b_{2}b_{3}\cdot b+b_{2}b_{3}a_{1}+b_{1}b_{3}a_{2}+b_{1}b_{2}a_{3}=-1\) and produce the plumbing graph \(\Gamma\) of \(\Sigma(b_{1},b_{2},b_{3})\).
2. Then delete the terminal vertices to produce new graphs \(\Gamma_{i}\) with linking matrices \(M_{i}\) for \(1\leq i\leq 3\) and compute \(h_{i}:=\det(M_{i})\).
3. Then compute \(\alpha_{i}\) (as in Notation 3.1) for \(1\leq i\leq 4\).
4. Finally, use all of this data as input to Proposition 3.3 to compute \(\widehat{Z}_{0}(\Sigma(b_{1},b_{2},b_{3}))\) and \(\Delta_{0}(\Sigma(b_{1},b_{2},b_{3}))\).
This process can be viewed as an algorithm and thus be coded into a program. Let's now consider an example to see how to use this proposition to compute the \(\widehat{Z}_{0}\) and \(\Delta_{0}\) invariants for Brieskorn spheres in practice.
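To illustrate, the following Python sketch implements steps (1)-(4) above for computing \(\Delta_{0}\) (it does not expand the \(q\)-series itself). It is only a proof of concept and not code from any of the cited references: all function and variable names are ours, the Seifert data \((b,a_{1},a_{2},a_{3})\) is found by brute force, the legs of \(\Gamma\) come from negative continued fraction expansions of \(b_{i}/a_{i}\), and the \(\alpha_{i}\) are taken to be \(\alpha_{1}=p-b_{1}b_{2}-b_{1}b_{3}-b_{2}b_{3}\), \(\alpha_{2}=\alpha_{1}+2b_{1}b_{2}\), \(\alpha_{3}=\alpha_{1}+2b_{1}b_{3}\), \(\alpha_{4}=\alpha_{1}+2b_{1}b_{2}+2b_{1}b_{3}\) (inferred from the worked examples below, since Notation 3.1 is not restated here).

```python
from fractions import Fraction

def neg_cont_frac(p, q):
    # Negative continued fraction expansion p/q = [x1, ..., xk],
    # i.e. p/q = x1 - 1/(x2 - 1/(... - 1/xk)).
    coeffs = []
    while q > 0:
        x = -(-p // q)                      # ceiling of p/q
        coeffs.append(x)
        p, q = q, x * q - p
    return coeffs

def det(rows):
    # Exact determinant of a square integer matrix (Gaussian elimination over Fractions).
    A = [[Fraction(v) for v in row] for row in rows]
    n, d = len(A), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return 0
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [A[r][c] - f * A[i][c] for c in range(n)]
    return int(d)

def delta0(b1, b2, b3):
    p = b1 * b2 * b3
    # Step 1: find b < 0 and a1, a2, a3 > 0 with b*p + b2*b3*a1 + b1*b3*a2 + b1*b2*a3 = -1.
    for a1 in range(1, b1):
        for a2 in range(1, b2):
            for a3 in range(1, b3):
                rest = b2 * b3 * a1 + b1 * b3 * a2 + b1 * b2 * a3
                if (-1 - rest) % p == 0:
                    b, a = (-1 - rest) // p, (a1, a2, a3)
    # Build the plumbing graph: central vertex of weight b, one leg per b_i/a_i.
    weights, edges, terminal = [b], [], []
    for bi, ai in zip((b1, b2, b3), a):
        prev = 0                            # first vertex of each leg attaches to the centre
        for x in neg_cont_frac(bi, ai):
            weights.append(-x)
            edges.append((prev, len(weights) - 1))
            prev = len(weights) - 1
        terminal.append(prev)               # terminal vertex of this leg
    s = len(weights)
    M = [[0] * s for _ in range(s)]
    for i, w in enumerate(weights):
        M[i][i] = w
    for i, j in edges:
        M[i][j] = M[j][i] = 1
    # Step 2: h_i = |det| of the linking matrix with the terminal vertex of leg i deleted.
    h = []
    for v in terminal:
        keep = [k for k in range(s) if k != v]
        h.append(abs(det([[M[r][c] for c in keep] for r in keep])))
    # Steps 3 and 4: alpha_1, xi, and Delta_0.
    alpha1 = p - b1 * b2 - b1 * b3 - b2 * b3
    xi = Fraction(sum(h) - 3 * s - sum(weights), 4) \
        - Fraction(b2 * b3, 4 * b1) - Fraction(b1 * b3, 4 * b2) - Fraction(b1 * b2, 4 * b3)
    return xi + Fraction(alpha1 ** 2, 4 * p)

print(delta0(2, 9, 11))   # expected 9/2  (Example 3.2)
print(delta0(3, 7, 8))    # expected 13/2 (Example 3.3)
```

Running the sketch on \((2,9,11)\) and \((3,7,8)\) returns \(9/2\) and \(13/2\), in agreement with the examples worked out below.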
**Example 3.2**.: Consider \(\Sigma(2,9,11)\). The plumbing description for this Brieskorn sphere was computed in Example 3.1 and is depicted below:
We first notice that \(s=|\operatorname{Vert}(\Gamma)|=6\). The linking matrix is given by:
\[M=\begin{bmatrix}v_{1}&v_{2}&v_{3}&v_{4}&v_{5}&v_{6}\\ -1&1&1&0&1&0\\ 1&-2&0&0&0&0\\ 1&0&-5&1&0&0\\ 0&0&1&-2&0&0\\ 1&0&0&0&-4&1\\ 0&0&0&0&1&-3\end{bmatrix}\]
Figure 3: Plumbing graph for \(\Sigma(2,9,11)\)

We find that \(\operatorname{Tr}(M)=-17\). If we delete the terminal vertex \(v_{2}\) on the first leg we obtain the linking matrix \(M_{1}\) for the corresponding plumbing graph \(\Gamma_{1}\). Deleting the terminal vertex \(v_{4}\) on the second leg yields the linking matrix \(M_{2}\) for the corresponding plumbing graph \(\Gamma_{2}\). Deleting the terminal vertex \(v_{6}\) on the third leg yields the linking matrix \(M_{3}\) for the corresponding plumbing graph \(\Gamma_{3}\).
The linking matrices \(M_{i}\) corresponding to \(\Gamma_{i}\) for \(1\leq i\leq 3\) are collected below:
\[M_{1}=\begin{bmatrix}v_{1}&v_{2}&v_{3}&v_{4}&v_{5}\\ -1&1&0&1&0\\ 1&-5&1&0&0\\ 0&1&-2&0&0\\ 1&0&0&-4&1\\ 0&0&0&1&-3\end{bmatrix}\quad M_{2}=\begin{bmatrix}v_{1}&v_{2}&v_{3}&v_{4}&v_{5}\\ -1&1&1&1&0\\ 1&-2&0&0&0\\ 1&0&-5&0&0\\ 1&0&0&-4&1\\ 0&0&0&1&-3\end{bmatrix}\quad M_{3}=\begin{bmatrix}-1&1&1&0&1\\ 1&-2&0&0&0\\ 1&0&-5&1&0\\ 0&0&1&-2&0\\ 1&0&0&0&-4\end{bmatrix}\]
Now one needs to compute \(h_{i}=|\det(M_{i})|\) for \(1\leq i\leq 3\). One finds that \(h_{1}=50\), \(h_{2}=3\) and \(h_{3}=2\). Letting \(b_{1}=2,b_{2}=9,b_{3}=11\) and recalling that \(p=b_{1}b_{2}b_{3}=198\) we further calculate that \(\alpha_{1}=59\), \(\alpha_{2}=95\), \(\alpha_{3}=103\), \(\alpha_{4}=139\) and moreover, \(\frac{\alpha_{2}^{2}-\alpha_{1}^{2}}{4p}=7\), \(\frac{\alpha_{3}^{2}-\alpha_{1}^{2}}{4p}=9\), \(\frac{\alpha_{4}^{2}-\alpha_{1}^{2}}{4p}=20\). Then by substituting in the values we just found in the formulas we derived for \(\Delta_{0}\) and \(\widehat{Z}_{0}\) for Brieskorn spheres in Proposition 3.3, we find that
\[\Delta_{0}(\Sigma(2,9,11))=\frac{9}{2}\]
and
\[\widehat{Z}_{0}(\Sigma(2,9,11);q)=q^{\frac{9}{2}}\cdot\left[q^{-\frac{59^{2}} {198}}\left(\widetilde{\Psi}_{198}^{(59)-(95)-(103)+(139)}(q)\right)\right].\]
**Example 3.3**.: Consider the Brieskorn sphere \(\Sigma(3,7,8)\). The integers \(b=-1\), \(a_{1}=1,a_{2}=2\) and \(a_{3}=3\) satisfy equation 11 with \(b_{1}=3,b_{2}=7\) and \(b_{3}=8\). One can then compute the continued fractions \(\frac{b_{1}}{a_{1}}=3=[3]\), \(\frac{b_{2}}{a_{2}}=\frac{7}{2}=[4,2]\) and \(\frac{b_{3}}{a_{3}}=\frac{8}{3}=[3,3]\) to produce the following plumbing graph for \(\Sigma(3,7,8)\).
Figure 4: Plumbing description of the graphs \(\Gamma_{i}\) for \(1\leq i\leq 3\).
We leave it as an exercise to the reader to use this plumbing description to find \(\Delta_{0}\) and \(\widehat{Z}_{0}\) for \(\Sigma(3,7,8)\) from Proposition 3.3. One finds that
\[\Delta_{0}(\Sigma(3,7,8))=\frac{13}{2}\]
and further that
\[\widehat{Z}_{0}(\Sigma(3,7,8);q)=q^{\frac{13}{2}}\cdot\left[q^{-\frac{67^{2}}{ 168}}\left(\widetilde{\Psi}_{168}^{(67)-(109)-(115)+(157)}(q)\right)\right].\]
## 4 \(\Delta_{0}\) and Homology cobordism
We begin with a brief review of homology cobordism, the interested reader may consult either [7], [8] or [9]. Let \(\Sigma_{0}\) and \(\Sigma_{1}\) be two oriented integral homology \(3\)-spheres. We call \(\Sigma_{0}\) and \(\Sigma_{1}\), _homology cobordant_ if there exists a smooth, compact, oriented \(4\)-manifold \(W\) with boundary \(\partial W=-\Sigma_{0}\sqcup\Sigma_{1}\) such that inclusions \(\iota_{0}:\Sigma_{0}\hookrightarrow W\) and \(\iota_{1}:\Sigma_{1}\hookrightarrow W\) induce isomorphisms \((\iota_{0})_{*}:H_{i}(\Sigma_{0})\to H_{i}(W)\) and \((\iota_{1})_{*}:H_{i}(\Sigma_{1})\to H_{i}(W)\) for all \(i\geq 0\). Homology cobordism is an equivalence relation on the class of all oriented integral homology \(3\)-spheres.
We define the _(\(3\)-dimensional) homology cobordism group_, \(\Theta^{3}\), to be the class of oriented integral smooth homology \(3\)-spheres modulo the equivalence relation of homology cobordism. An addition operation is given by \([M]+[N]:=[M\#N]\) wherein \([M],[N]\) denote the homology cobordism classes of \(M\) and \(N\). The identity of the group is \([S^{3}]\) and for every \([M]\) an additive inverse is given by \([-M]\). Accordingly, we say that an oriented integral smooth homology \(3\)-sphere is _homology cobordant to zero_ if it is homology cobordant to \(S^{3}\).
Figure 5: Plumbing graph for \(\Sigma(3,7,8)\)

A _homology cobordism invariant_ is a function \(f:\Theta^{3}\to X\) where \(X\) is any set. In particular this means that if \(M\) and \(N\) are two oriented homology \(3\)-spheres which are homology cobordant, then \(f(M)=f(N)\). Two examples of homology cobordism invariants are given by the Rokhlin invariant \(\mu:\Theta^{3}\to\mathbb{Z}/2\) and the correction terms to Heegaard Floer homology \(d:\Theta^{3}\to\mathbb{Z}\) which we'll see later. A question was raised in [2] which asked whether the \(\Delta_{0}\) invariants were homology cobordism invariants. In this section we will show that this is not the case.
In searching for a counterexample, we will make use of the following theorem which appears in a different (but equivalent) form in [10] and [11, Problem 4.2 - p. 183].
**Theorem 4.1**.: _The following families of Brieskorn spheres are all homology cobordant to \(S^{3}\)._
1. \(\Sigma(p,pq-1,pq+1)\) _for_ \(p\) _even and_ \(q\) _odd,_
2. \(\Sigma(p,pq+1,pq+2)\) _for_ \(p\) _odd and any_ \(q\)_._
Proof.: By [10] and [11, Problem 4.2 - p. 183] manifolds from the families (i) and (ii) bound smooth contractible 4-manifolds.
Let \(\Sigma\) denote any manifold from either of the families (i) or (ii) above and let \(W\) denote the smooth contractible \(4\)-manifold which \(\Sigma\) bounds. Then since \(W\) is a contractible \(4\)-manifold with non-empty boundary which is a homology \(3\)-sphere, \(W\) is compact and orientable.
It is known (see [12, Proposition 2.3.4]) that a smooth oriented integral homology 3-sphere \(M\) bounds a smooth compact orientable 4-manifold \(W\) with \(H_{i}(W)=0\) for all \(i\geq 1\), if and only if \(M\) is homology cobordant to \(S^{3}\).
From this above characterization, we thus conclude that \(\Sigma\) is homology cobordant to \(S^{3}\).
**Corollary 4.2**.: _The Brieskorn manifolds \(\Sigma(2,9,11)\) and \(\Sigma(3,7,8)\) are both homology cobordant to \(S^{3}\)._
Proof.: From Theorem 4.1, in (i) take \(p=2\) and \(q=5\) and in (ii) take \(p=3\) and \(q=2\).
**Theorem 4.3**.: \(\widehat{Z}_{a}\) _and \(\Delta_{a}\) are neither homology cobordism invariants, nor cobordism invariants._
Proof.: We saw in Corollary 4.2 that \(\Sigma(2,9,11)\) is homology cobordant to \(S^{3}\). Moreover since any homology cobordism between two manifolds is by definition a cobordism between them, \(\Sigma(2,9,11)\) is also cobordant to \(S^{3}\).
If in general \(\widehat{Z}_{a}\) and \(\Delta_{a}\) were homology cobordism invariants, then we would have in this particular case that \(\widehat{Z}_{0}(\Sigma(2,9,11);q)=\widehat{Z}_{0}(S^{3};q)\) and also that \(\Delta_{0}(\Sigma(2,9,11))=\Delta_{0}(S^{3})\). However in Example 3.2 and Example 1.2 respectively we computed that
\[\widehat{Z}_{0}(\Sigma(2,9,11);q) =q^{\frac{9}{2}}(1-q^{7}-q^{9}+q^{20}+q^{79}+\cdots)\] \[\widehat{Z}_{0}(S^{3};q) =q^{-\frac{1}{2}}(2q-2)\] \[\Delta_{0}(\Sigma(2,9,11)) =\frac{9}{2}\] \[\Delta_{0}(S^{3}) =-\frac{1}{2}.\]
Thus the \(\widehat{Z}_{a}\) and \(\Delta_{a}\) invariants are not invariants of homology cobordism. Similarly since \(\Sigma(2,9,11)\) is also cobordant to \(S^{3}\) neither the \(\widehat{Z}_{a}\) nor \(\Delta_{a}\) invariants are invariants of cobordism.
_Remark_.: In the above we can use other Brieskorn manifolds \(\Sigma(b_{1},b_{2},b_{3})\) which appeared in Theorem 4.1.
### \(\Delta_{a}\) and \(\operatorname{Spin}^{c}\) homology cobordism
**Definition 4.1**.: [4, Section 5.2] Suppose we have an oriented \(n\)-dimensional manifold \(Y\) with boundary \(\Sigma\). There exists a collar neighbourhood \(U\subseteq Y\) such that \(U\) is diffeomorphic to \(\Sigma\times[0,1]\). Now \(\Sigma\times[0,1]\) is homotopy equivalent to \(\Sigma\). Then recall that \(\operatorname{Spin}^{c}(\Sigma)\cong H^{2}(\Sigma;\mathbb{Z})\). We thus have a sequence of isomorphisms followed by a map induced by inclusion
\[\operatorname{Spin}^{c}(\Sigma)\cong_{\operatorname{aff}}H^{2}(\Sigma; \mathbb{Z})\cong H^{2}(\Sigma\times[0,1];\mathbb{Z})\cong H^{2}(U;\mathbb{Z}) \overset{i^{*}}{\leftarrow}H^{2}(Y;\mathbb{Z})\cong_{\operatorname{aff}} \operatorname{Spin}^{c}(Y).\]
Then we define a _relative \(\operatorname{Spin}^{c}\) structure on \(Y\)_ to be a choice of mapping a \(\operatorname{Spin}^{c}\) structure on \(Y\) to the element in \(\operatorname{Spin}^{c}(\Sigma)\cong H^{2}(\Sigma;\mathbb{Z})\) which corresponds to the first Chern class \(c_{1}\in H^{2}(\Sigma;\mathbb{Z})\) via the sequence of maps above. The set of all relative \(\operatorname{Spin}^{c}\) structures is denoted \(\operatorname{Spin}^{c}(Y,\partial Y)\).
**Claim 4.4**.: _If \(Y\) is homology cobordant to \(S^{3}\), then \(Y\) is \(\operatorname{Spin}^{c}\) cobordant to \(S^{3}\)_
Sketch.: We have that both \(Y\) and \(S^{3}\) are integral homology spheres and hence also rational homology spheres. Let \(W\) be a cobordism between \(Y\) and \(S^{3}\) such that \(H_{i}(Y;\mathbb{Z})\to H_{i}(W;\mathbb{Z})\) and \(H_{i}(S^{3};\mathbb{Z})\to H_{i}(W;\mathbb{Z})\) are isomorphisms for all \(i\). From the fact that \(Y\) and \(S^{3}\) are integral homology spheres we see that \(H_{i}(W;\mathbb{Q})=H_{i}(W;\mathbb{Z})\otimes\mathbb{Q}=0\) for \(i=1,2\).
Moreover both \(Y\) and \(S^{3}\) have only the trivial \(\operatorname{Spin}^{c}\) structure on them. Furthermore, since \(H_{i}(W;\mathbb{Z})\cong H_{i}(S^{3};\mathbb{Z})\), the Universal Coefficients Theorem shows that \(H^{2}(W;\mathbb{Z})=0\); this implies that \(W\) has only the trivial \(\operatorname{Spin}^{c}\) structure on it. By Definition 4.1 above we see that the trivial \(\operatorname{Spin}^{c}\) structure on \(W\) restricts to the trivial \(\operatorname{Spin}^{c}\) structures on \(Y\) and \(S^{3}\). Thus \(Y\) is \(\operatorname{Spin}^{c}\) cobordant to \(S^{3}\).
**Conjecture 4.5**.: \(\Delta_{a}\) _is not a \(\operatorname{Spin}^{c}\) homology cobordism invariant._
Sketch Proof.: Consider \(\Sigma(2,9,11)\). Note that \(\Sigma(2,9,11)\) is homology cobordant to \(S^{3}\) and if Claim 4.4 holds, then \(\Sigma(2,9,11)\) is \(\operatorname{Spin}^{c}\) cobordant to \(S^{3}\). The proof now follows in exactly the same manner as the proof of Theorem 4.3.
**Claim 4.6**.: _The homology cobordism group \(\Theta^{3}\) embeds into the \(\operatorname{Spin}^{c}\) homology cobordism group \(\theta^{c}\)_
Sketch Proof.: First note that if \(Y_{1}\) and \(Y_{2}\) are two integral homology spheres which are homology cobordant via a cobordism \(W\), in the sense that \(H_{i}(Y_{j};\mathbb{Z})\to H_{i}(W;\mathbb{Z})\) are isomorphisms for all \(i\) and \(j=1,2\), then since for any space \(X\) we have \(H_{n}(X;\mathbb{Q})=H_{n}(X;\mathbb{Z})\otimes\mathbb{Q}\) (see e.g. [13]) we see that \(H_{i}(Y_{j};\mathbb{Q})\to H_{i}(W;\mathbb{Q})\) are isomorphisms for all
\(i\) and \(j=1,2\), and further, since integral homology spheres are also rational homology spheres, that \(H_{i}(W;\mathbb{Q})=0\) for \(i=1,2\). We can also compute that \(H^{2}(W;\mathbb{Z})\cong\operatorname{Hom}(H_{2}(W;\mathbb{Z});\mathbb{Z})\oplus\operatorname{Ext}(H_{1}(W;\mathbb{Z});\mathbb{Z})=\operatorname{Hom}(0;\mathbb{Z})\oplus\operatorname{Ext}(0;\mathbb{Z})=0\), since \(H_{i}(W;\mathbb{Z})\cong H_{i}(Y_{j};\mathbb{Z})\) for \(j=1,2\) and \(H_{i}(Y_{j};\mathbb{Z})\cong H_{i}(S^{3};\mathbb{Z})\) because \(Y_{1},Y_{2}\) are both integral homology spheres. This implies that \(W\) has only the trivial \(\operatorname{Spin}^{c}\) structure on it. By Definition 4.1 above we see that the trivial \(\operatorname{Spin}^{c}\) structure on \(W\) restricts to the trivial \(\operatorname{Spin}^{c}\) structures on \(Y_{1}\) and \(Y_{2}\). Thus \(Y_{1}\) is \(\operatorname{Spin}^{c}\) cobordant to \(Y_{2}\).
_Remark_.: In Claim 4.4, Conjecture 4.5 and Claim 4.6, the only thing that is preventing us from turning all three into propositions is that we are somewhat uncertain as to whether Definition 4.1 agrees with conventions.
**Claim 4.7**.: _Suppose that \((Y_{1},t_{1})\) and \((Y_{2},t_{2})\) are two rational homology three-spheres which are \(\operatorname{Spin}^{c}\) cobordant via a cobordism \(W\). Then \(H_{i}(W;\mathbb{Q})=0\) for \(i=1,2\) if and only if \(H_{i}(Y_{1};\mathbb{Q})\to H_{i}(W;\mathbb{Q})\) and \(H_{i}(Y_{2};\mathbb{Q})\to H_{i}(W;\mathbb{Q})\) are isomorphisms for all \(i\)._
Sketch Proof.: The reverse direction (\(\Longleftarrow\)) is immediate since \(Y_{1}\) and \(Y_{2}\) are rational homology three-spheres. For the forward direction (\(\implies\)), looking at the long exact sequences of the pairs \((Y_{1},W)\) and \((Y_{2},W)\) should show that \(H_{i}(Y_{j};\mathbb{Q})\to H_{i}(W;\mathbb{Q})\) is an isomorphism for all \(i\) and \(j=1,2\), making use of the fact that \(Y_{1}\) and \(Y_{2}\) are rational homology three-spheres.
## 5 \(\Delta_{a}\) and correction terms, \(d\), to Heegaard Floer Homology
We introduce Heegaard Floer homology closely following [14] and [9].
### Heegaard Floer Homology
Let \(Y\) be a closed orientable \(3\)-manifold with Heegaard decomposition \(Y=U_{0}\cup_{\Sigma}U_{1}\). From the Heegaard decomposition we can produce a Heegaard diagram \((\Sigma_{g},\alpha_{1},\ldots,\alpha_{g},\beta_{1},\ldots,\beta_{g})\) which is sometimes written as \((\Sigma_{g},\alpha,\beta)\). We then define \(\operatorname{Sym}(\Sigma_{g})=\left(\prod_{i=1}^{g}\Sigma_{g}\right)/\mathfrak{S}_{g}\) where \(\mathfrak{S}_{g}\) is the symmetric group on \(g\) letters which acts on \((\prod_{i=1}^{g}\Sigma_{g})\) in the following way: \((\sigma,(x_{1},\ldots,x_{g}))\mapsto(x_{\sigma(1)},\ldots,x_{\sigma(g)})\) for \(\sigma\in\mathfrak{S}_{g}\) and \((x_{1},\ldots,x_{g})\in\prod_{i=1}^{g}\Sigma_{g}\) (cf. [14, Section 4]). As stated in [14, Section 4.1], the "attaching circles" \(\alpha_{i}\) and \(\beta_{i}\) produce tori
\[\mathbb{T}_{\alpha}=\alpha_{1}\times\cdots\times\alpha_{g}\text{ and }\mathbb{T}_{ \beta}=\beta_{1}\times\cdots\times\beta_{g}\]
which are subsets of \(\operatorname{Sym}(\Sigma_{g})\). A certain map \(s_{z}:\mathbb{T}_{\alpha}\cap\mathbb{T}_{\beta}\to\operatorname{Spin}^{c}(Y)\) is then defined (cf. [14, Section 6.2]). Making use of this map \(s_{z}\) we can define groups:
1. \(CF^{\infty}(\alpha,\beta,a)\) to be the free abelian group generated by the pairs \([x,i]\) where the \(x\in\mathbb{T}_{\alpha}\cap\mathbb{T}_{\beta}\) with \(s_{z}(x)=a\) and \(i\in\mathbb{Z}\).
2. \(CF^{-}(\alpha,\beta,a)\) to be the free abelian group generated by the pairs \([x,i]\) where the \(x\in\mathbb{T}_{\alpha}\cap\mathbb{T}_{\beta}\) with \(s_{z}(x)=a\) and \(i<0\) where \(i\) is an integer.
3. \(CF^{+}(\alpha,\beta,a):=CF^{\infty}(\alpha,\beta,a)/CF^{-}(\alpha,\beta,a).\)
One can assign a (relative) grading to each of these groups, which then allows us to define chain complexes from them. From the resulting chain complexes we get homology groups \(HF^{-}(Y,a),HF^{+}(Y,a)\) and \(HF^{\infty}(Y,a)\) as is usually done in algebraic topology.
According to [15, Theorem 7.1], if \(Y\) is additionally a rational homology sphere, we have that \(HF^{-}(Y,a),HF^{+}(Y,a),HF^{\infty}(Y,a)\) are \(\mathbb{Q}\)-graded. This means, for example in the case of \(HF^{+}(Y,a)\), that \(HF^{+}(Y,a)=\bigoplus_{\omega\in\mathbb{Q}}HF^{+}_{\omega}(Y,a).\) We have a similar direct sum decomposition for \(HF^{\infty}(Y,a)\).
Furthermore for each \(\omega\in\mathbb{Q}\) there is a family of homomorphisms \(HF^{+}_{\omega}(Y,a)\to HF^{\infty}_{\omega}(Y,a)\) which come from a long exact sequence (see [16, Section 2, eq. 3])
\[\cdots HF^{-}(Y,a)\to HF^{+}(Y,a)\to HF^{\infty}(Y,a)\to\cdots.\]
The following is a slightly expanded version of [16, Definition 4.1].
**Definition 5.1**.: Let \(Y\) be an oriented rational homology \(3\)-sphere and \(a\in\operatorname{Spin}^{c}(Y)\). We define a rational number \(d(Y,a)\in\mathbb{Q}\), called the _correction term_, to be the minimal \(\omega\in\mathbb{Q}\) such that an element \(x\) in the image of the homomorphism \(HF^{+}_{\omega}(Y,a)\to HF^{\infty}_{\omega}(Y,a)\) is non-torsion, i.e. \(nx\neq 0\) for every nonzero integer \(n\).
_Remark_.: In the case when \(Y\) is an oriented integral homology \(3\)-sphere, there is a unique \(\operatorname{Spin}^{c}\) structure on \(Y\) and we simply write \(d(Y)\) instead of \(d(Y,a)\).
**Theorem 5.1** (Properties of correction terms).:
1. _The correction terms give group homomorphisms:_ 1. \(d:\theta^{c}\to\mathbb{Q}\) _where_ \(\theta^{c}\) _is the_ \(\operatorname{Spin}^{c}\) _homology cobordism group (__[_16_, Theorem 1.2]__)_ 2. \(d:\Theta^{3}\to\mathbb{Z}\) _where_ \(\Theta^{3}\) _is the homology cobordism group (__[_9_, p. 9]__)_
2. _Let_ \((Y,a)\) _be a rational homology_ \(3\)_-sphere with_ \(\operatorname{Spin}^{c}\) _structure_ \(a\)_. Then it follows that_ \(d(-Y,a)=-d(Y,a)\) _where_ \(-Y\) _is the orientation reversal of_ \(Y\)_. (__[_16_, Proposition 4.2]__)_
3. _Let_ \((Y,a)\) _be a rational homology_ \(3\)_-sphere with_ \(\operatorname{Spin}^{c}\) _structure_ \(a\)_. Then it follows that_ \(d(Y,a)=d(Y,\overline{a})\) _where_ \(\overline{a}\) _is the conjugation of_ \(a\)_. (__[_16_, Proposition 4.2]__)_
The following is a corollary to part (i) of Theorem 5.1 above.
**Corollary 5.2**.: _The correction term for \(S^{3}\) is zero, i.e. \(d(S^{3})=0\)_
Proof.: The homology cobordism class of \(S^{3}\) is the identity of the group \(\Theta^{3}\). Thus it must be mapped to \(0\in\mathbb{Z}\) since \(d\) is a group homomorphism.
### Connections between \(\Delta_{a}\) and \(d\)
Notice that for an oriented rational homology \(3\)-sphere \(Y\) with \(\operatorname{Spin}^{c}\) structure \(a\), we have the following connections, outlined in [2], between \(\Delta_{a}(Y)\) and \(d(Y,a)\):
1. \(\Delta_{a}(Y)\) and \(d(Y,a)\) are both labelled by \(\operatorname{Spin}^{c}\) structures on \(Y\).
2. \(\Delta_{a}(Y)\) and \(d(Y,a)\) are both negated under orientation reversal of \(Y\).
3. \(\Delta_{a}(Y)\) and \(d(Y,a)\) both remain unchanged under conjugation of \(\operatorname{Spin}^{c}\) structures.
Therefore it is natural to ask the question: _"Just how closely related are \(\Delta_{a}(Y)\) and \(d(Y,a)\)?"_. In particular \(d(Y,a)\) is a homology cobordism invariant, so a further question one could ask is: _"Is \(\Delta_{a}(Y)\) also a homology cobordism invariant"?_ To partially answer these questions, the following proposition was proven in [2].
**Proposition 5.3**.: _If \(Y=Y(\Gamma)\) is a negative definite plumbed manifold then \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod 1\)._
Proof.: For the proof of this fact, we refer the reader to [2, Equations 3.56-3.59]
_Remark_.: The result that \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod 1\) could equivalently be rewritten as \(\Delta_{a}(Y)=n+\frac{1}{2}-d(Y,a)\) for some \(n\in\mathbb{Z}\).
**Corollary 5.4**.: _The function \(f:\{\text{class of negative definite plumbed manifolds}\}\to\mathbb{Q}\) defined by \(f(Y)=\Delta_{a}(Y)\mod 1\) is a homology cobordism invariant_
Proof.: Suppose \(Y_{1}\) is homology cobordant to \(Y_{2}\). Then \(\Delta_{a}(Y_{1})=\frac{1}{2}-d(Y_{1},a)+n\) and \(\Delta_{a}(Y_{2})=\frac{1}{2}-d(Y_{2},a)+m\) for some \(n,m\in\mathbb{Z}\) by Proposition 5.3. Since the correction term \(d\) is a homology cobordism invariant we have that \(d(Y_{1},a)=d(Y_{2},a)\). Thus \(\Delta_{a}(Y_{2})=\frac{1}{2}-d(Y_{1},a)+m\). From this we obtain that \(\Delta_{a}(Y_{1})=\Delta_{a}(Y_{2})+n-m\). This implies that \(\Delta_{a}(Y_{1})=\Delta_{a}(Y_{2})\mod 1\) and hence that \(f(Y_{1})=f(Y_{2})\).
Let us now take a look at an example to see this phenomenon.
**Example 5.1**.: 5 We know from Corollary 4.2 that \(\Sigma(2,9,11)\) and \(\Sigma(3,7,8)\) are homology cobordant to \(S^{3}\). We also saw in Examples 1.2, 3.2 and 3.3 that \(\Delta_{0}(S^{3})=\frac{1}{2}\), \(\Delta_{0}(\Sigma(2,9,11))=\frac{9}{2}=\frac{1}{2}+4\) and \(\Delta_{0}(\Sigma(3,7,8))=\frac{13}{2}=\frac{1}{2}+6\). Then notice that \(\Delta_{0}(S^{3})=\Delta_{0}(\Sigma(2,9,11))=\Delta_{0}(\Sigma(3,7,8))\mod 1\) as expected.
Footnote 5: Part of this example was inspired by discussion with Mrunmay Jagadale.
### \(\Delta_{0}\) and integral homology spheres
**Proposition 5.5**.: _Let \(Y:=Y(\Gamma)\) be a negative definite plumbed \(3\)-manifold which is an integral homology sphere, then \(\Delta_{0}(Y)=\frac{1}{2}\mod 1\)_
Proof.: By Proposition 5.3, \(\Delta_{0}(Y)=\frac{1}{2}-d(Y)+n\) for some \(n\in\mathbb{Z}\). Since \(d:\Theta^{3}\to\mathbb{Z}\) is a homomorphism we have that \(d(Y)\in\mathbb{Z}\), which completes the proof.
### Sharpness of the relation between \(\Delta_{a}\) and \(d\)
Suppose now that Proposition 5.3 instead stated that \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) where \(x\) is some integer. Intuitively, establishing such a relation \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) for a larger value of \(x\) is desirable, since it gives a stronger relation between \(\Delta_{a}(Y)\) and \(d(Y,a)\).
It was stated in [2, p. 25, 26] that the best hope would be to find such a relation \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) for \(x=2\). This conclusion was based on the examples computed within [2]. Let us now provide an independent proof of this.
**Lemma 5.6**.: _Suppose that the relation \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) holds for all negative definite plumbed manifolds \(Y=Y(\Gamma)\). Then \(x\leq 2\)._
Proof.: The relation that \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) can be equivalently restated as
\[\Delta_{a}(Y)=nx+\frac{1}{2}-d(Y,a) \tag{19}\]
for some \(n\in\mathbb{Z}\). Now we have that \(\Delta_{a}(\Sigma(2,9,11))=\frac{9}{2}=\frac{1}{2}+4\) and \(\Delta_{a}(\Sigma(3,7,8))=\frac{13}{2}=\frac{1}{2}+6\), and \(d(\Sigma(2,9,11))=d(\Sigma(3,7,8))=d(S^{3})=0\) since \(\Sigma(2,9,11)\) and \(\Sigma(3,7,8)\) are both homology cobordant to \(S^{3}\). By equation (19) we therefore expect that \(4=nx\) and \(6=mx\) for some \(n,m\in\mathbb{Z}\), so \(x\) is a common divisor of \(4\) and \(6\). The only positive common divisors are \(1\) and \(2\), hence \(x\in\{1,2\}\), which completes the proof.
A question was raised in [2, Section 4] as to whether it would be possible to improve the relation \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod 1\) to \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod 2\). This question was asked generally, for \(3\)-manifolds \(Y\) which are not necessarily plumbed. We will show below that this cannot be the case and that, in fact, an example within the paper [2] already answers this question.
**Proposition 5.7**.: _If \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) holds for all \(3\)-manifolds \(Y\) and all \(a\in\mathrm{Spin}^{c}(Y)\), then \(x=1\)._
Proof.: The relation that \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\mod x\) can be equivalently restated as
\[\Delta_{a}(Y)=nx+\frac{1}{2}-d(Y,a) \tag{20}\]
for some \(n\in\mathbb{Z}\). If there exists some manifold \(Y\) and \(a\in\mathrm{Spin}^{c}(Y)\) such that \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\pm 1\), then the only possible solutions for \(n\) and \(x\) in equation (20) are \(n,x\in\{-1,1\}\). Thus to prove this result it suffices to find a \(3\)-manifold \(Y\) such that \(\Delta_{a}(Y)=\frac{1}{2}-d(Y,a)\pm 1\), since this will force \(x\) to be equal to \(1\). To that end, in [2] it was computed that for \(Y:=S^{3}_{-1/2}(\mathbf{4_{1}})\) we have \(\Delta_{0}(Y)=-\frac{1}{2}\) and \(d(Y)=0\). Thus we have that \(\Delta_{0}(Y)=\frac{1}{2}-d(Y)\mod 1\iff\Delta_{0}(Y)=\frac{1}{2}-d(Y)+n\iff-\frac{1}{2}=\frac{1}{2}+n\iff n=-1\), and this proves the proposition.
## Appendix A Comparing \(\Delta_{0}\) and \(d\) for some further examples
Let us look at some further examples with which we can compare \(\Delta_{0}\) and the correction terms \(d\). For many classes of \(3\)-manifolds there exist techniques which aid the computation of both Heegaard Floer homology and the correction terms. Here is one such example that we shall use in the next section. The paper [17] gives us the following computational tool.
**Proposition A.1**.: _For \(\frac{1}{1}\)-Dehn surgery on the torus knot \(T_{p,p+1}\) embedded in \(S^{3}\), that is for \(S^{3}_{+1}(T_{p,p+1})=-\Sigma(p,p+1,p(p+1)-1)\) we have._
Proof.: A proof can be found in [17].
Using Proposition A.1 we will compute the correction term, \(d\), for various manifolds of the form \(-\Sigma(p,p+1,p(p+1)-1)\) and compare the values of \(d\) obtained to \(\Delta_{0}\) for these manifolds. These examples are new in the sense that they have not been explicitly produced in this form before, but they do not say anything new about Proposition 5.3.
There is a subtlety that the reader should be aware of, however: as stated in [4], \(S^{3}_{+1}(T_{p,p+1})=-\Sigma(p,p+1,p(p+1)-1)\) is not a negative definite plumbed manifold, whereas \(\Sigma(p,p+1,p(p+1)-1)\) is a negative definite plumbed manifold. Thus we will compare \(\Delta_{0}\) and \(d\) for manifolds of the form \(\Sigma(p,p+1,p(p+1)-1)\). To compute \(d(\Sigma(p,p+1,p(p+1)-1))\) we simply use the fact that \(d(\Sigma(p,p+1,p(p+1)-1))=-d(-\Sigma(p,p+1,p(p+1)-1))\).
We produce the following examples of Brieskorn spheres \(\Sigma(p,p+1,p(p+1)-1)\) for which we can compare \(\Delta_{0}\) to \(d\).
| \(\Sigma(b_{1},b_{2},b_{3})\) | \(\widehat{Z}_{0}(\Sigma(b_{1},b_{2},b_{3});q)\) | \(\Delta_{0}\) | \(d\) |
| --- | --- | --- | --- |
| \(\Sigma(3,4,11)\) | \(q^{1/2}\left(1-q^{5}-q^{19}-q^{29}+\cdots\right)\) | \(\frac{1}{2}\) | \(-2\) |
| \(\Sigma(4,5,19)\) | \(q^{37/2}\cdot\left(1-q^{11}-q^{53}-q^{71}+q^{72}+q^{92}+\cdots\right)\) | \(\frac{37}{2}\) | \(-6\) |
| \(\Sigma(5,6,29)\) | \(q^{141/2}\cdot\left(1-q^{19}-q^{111}-q^{139}+q^{140}+\cdots\right)\) | \(\frac{141}{2}\) | \(-6\) |
| \(\Sigma(6,7,41)\) | \(q^{361/2}\cdot\left(1-q^{29}-q^{199}-q^{239}+q^{240}+\cdots\right)\) | \(\frac{361}{2}\) | \(-12\) |

Table 1: Comparisons of \(\Delta_{0}\) and \(d\) for various \(\Sigma(p,p+1,p(p+1)-1)\)
## Appendix B Some computations of \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for \(\Sigma(b_{1},b_{2},b_{3})\)
### Further computations for \(\Sigma(b_{1},b_{2},b_{3})\) homology cobordant to \(S^{3}\).
| \(\Sigma(p,pq-1,pq+1)\) | \(\widehat{Z}_{0}(\Sigma(p,pq-1,pq+1);q)\) | \(\Delta_{0}\) |
| --- | --- | --- |
| \(\Sigma(2,13,15)\) | \(q^{25/2}\cdot\left(1-q^{11}-q^{13}+q^{28}-q^{167}+q^{204}+\cdots\right)\) | \(\frac{25}{2}\) |
| \(\Sigma(2,21,23)\) | \(q^{81/2}\left(1-q^{19}-q^{21}+q^{44}+\cdots\right)\) | \(\frac{81}{2}\) |
| \(\Sigma(2,81,83)\) | \(q^{1521/2}\cdot\left(1-q^{79}-q^{81}+q^{164}-q^{6559}+q^{6800}+\cdots\right)\) | \(\frac{1521}{2}\) |
| \(\Sigma(4,11,13)\) | \(q^{97/2}\cdot\left(1-q^{29}-q^{35}+q^{72}-q^{119}+q^{170}+q^{180}+\cdots\right)\) | \(\frac{97}{2}\) |
| \(\Sigma(4,59,61)\) | \(q^{3697/2}\cdot\left(1-q^{173}-q^{179}+q^{360}-q^{3479}+q^{3770}+\cdots\right)\) | \(\frac{3697}{2}\) |
| \(\Sigma(6,17,19)\) | \(q^{505/2}\cdot\left(1-q^{79}-q^{89}+q^{180}-q^{287}+q^{400}+\cdots\right)\) | \(\frac{505}{2}\) |
| \(\Sigma(6,41,43)\) | \(q^{3265/2}\cdot\left(1-q^{199}-q^{209}+q^{420}+q^{1960}-q^{1679}+\cdots\right)\) | \(\frac{3265}{2}\) |
| \(\Sigma(8,23,25)\) | \(q^{1441/2}\cdot\left(1-q^{153}-q^{167}+q^{336}-q^{527}+q^{726}+\cdots\right)\) | \(\frac{1441}{2}\) |
| \(\Sigma(8,87,89)\) | \(q^{22497/2}\cdot\left(1-q^{601}-q^{615}-q^{7567}+q^{8342}+q^{8360}+\cdots\right)\) | \(\frac{22497}{2}\) |

Table 3: Further computations of \(\widehat{Z}_{0}\) and \(\Delta_{0}\) for various \(\Sigma(p,pq-1,pq+1)\)
## Appendix C Negative and Positive Definite Matrices
We recall the following definition from [18].
**Definition C.1**.: Let \(A\in M_{n\times n}(\mathbb{R})\) be an \(n\times n\) matrix with real entries. We say that \(A\) is
1. _negative definite_ if for all \(x\in\mathbb{R}^{n}\setminus\{0\}\) we have that \(x^{T}Ax=(x,Ax)<0\).
2. _positive definite_ if for all \(x\in\mathbb{R}^{n}\setminus\{0\}\) we have that \(x^{T}Ax>0\).
The following lemma can be reconstructed from the arguments and statements given in [18], together with the fact, also stated in [18], that \(A\) is negative definite if and only if \(-A\) is positive definite.
**Lemma C.1** (Properties of negative definite matrices).: _Negative definite matrices possess the following properties:_
1. _If_ \(A=(a_{ij})\) _is a negative definite_ \(n\times n\) _matrix with real entries, then the diagonal entries of the matrix,_ \(a_{ii}\) _for_ \(1\leq i\leq n\)_, are all negative, that is_ \(a_{ii}<0\) _for_ \(1\leq i\leq n\)_._
2. _Let_ \(A\) _be a negative definite_ \(n\times n\) _symmetric matrix, then_ \(A\) _is invertible._
3. _Let_ \(A\) _be a negative definite_ \(n\times n\) _matrix. Then all of the eigenvalues of_ \(A\) _are negative._
4. _If_ \(A\) _is a negative definite_ \(n\times n\) _symmetric matrix with real entries, then_ \(A^{-1}\) _is also a negative definite_ \(n\times n\) _symmetric matrix with real entries._
5. _If_ \(A=(a_{ij})\) _is a negative definite_ \(n\times n\) _symmetric matrix with integral entries, that is_ \(a_{ij}\in\mathbb{Z}\) _for all_ \(i,j\)_, and_ \(\det(A)=\pm 1\)_, then_ \(A^{-1}\) _is also a negative definite_ \(n\times n\) _symmetric matrix with integral entries._
An important, standard fact about negative definite matrices that we will use is the following (which can be reconstructed from material in [19, Chapter XV, Section 2- Quadratic Maps]):
**Proposition C.2**.: _Let \(A\in M_{n\times n}(\mathbb{R})\) be a symmetric negative definite \(n\times n\) matrix with real entries. Then the quadratic form \(q:\mathbb{R}^{n}\rightarrow\mathbb{R}\) defined by \(q(x)=x^{T}Ax\) has a global maximum._
We include the standard proof below for the benefit of the reader. One can reconstruct parts of the proof below from material in [19, Chapter XV, Section 2- Quadratic Maps]. Parts of the proof below were inspired by [20, Notes for Lecture 9, page 1].
Proof.: By standard linear algebra, the matrix \(A\) can be diagonalized into the form \(A=P^{-1}BP\) for some invertible \(n\times n\) matrix \(P\) such that \(B\) is a diagonal matrix containing all
the eigenvalues of \(A\), those being \(\lambda_{i}\) for \(1\leq i\leq n\), on the diagonal and zeros elsewhere. That is
\[B=\begin{pmatrix}\lambda_{1}&&\\ &\ddots&\\ &&\lambda_{n}\end{pmatrix}.\]
Then \(A\) and \(B\) are _similar matrices_ (see [21, Definition on Page 241]) and so represent the same linear map \(L:\mathbb{R}^{n}\to\mathbb{R}^{n}\) (see [21, Theorem 31.1]). Moreover, since \(A\) is symmetric, the matrix \(P\) may be chosen to be orthogonal, so that \(A=P^{T}BP\). Writing \(y=Px=(y_{1},\ldots,y_{n})\), one sees that \(q(x)=x^{T}Ax=y^{T}By=\sum_{i=1}^{n}\lambda_{i}y_{i}^{2}\). Since \(A\) is negative definite, by Lemma C.1 part (c.) above, we have that \(\lambda_{i}<0\) for each \(1\leq i\leq n\). Thus \(q(x)\leq 0\) for all \(x\), with equality exactly when \(y=\mathbf{0}\), that is, when \(x=\mathbf{0}\), so \(q\) has a global maximum at \(x=(0,\ldots,0)=\mathbf{0}\in\mathbb{R}^{n}\).
**Corollary C.3**.: _Let \(A\in M_{n\times n}(\mathbb{Z})\) be a negative definite \(n\times n\) matrix with integral entries. Let \(V\subseteq\mathbb{Z}^{n}\) be some set, then the function \(q:V\to\mathbb{Z}\) defined by \(q(x)=x^{T}Ax\) for all \(x\in V\) has a global maximum._
Proof.: One simply views \(A\) as an element of \(M_{n\times n}(\mathbb{R})\) and applies the above proposition: the global maximum of the quadratic form over \(\mathbb{R}^{n}\) bounds the set \(W=\{x^{T}Ax\in\mathbb{Z}\mid x\in V\subseteq\mathbb{Z}^{n}\}\) from above, so the supremum of \(W\) exists. Since \(\mathbb{Z}^{n}\) is a discrete subset of \(\mathbb{R}^{n}\), so is \(V\). This implies that \(W\) is a discrete subset of \(\mathbb{R}\). Since the supremum of \(W\) exists and \(W\) is discrete, \(W\) must contain its supremum. Thus, letting \(y\in V\) be such that \(y^{T}Ay=\sup W\), the function \(q\) attains its global maximum at \(y\).
There is a corresponding result and corollary for positive definite matrices.
**Proposition C.4**.: _Let \(A\in M_{n\times n}(\mathbb{R})\) be a positive definite \(n\times n\) matrix with real entries. Then the quadratic form \(q:\mathbb{R}^{n}\to\mathbb{R}\) defined by \(q(x)=x^{T}Ax\) has a global minimum._
**Corollary C.5**.: _Let \(A\in M_{n\times n}(\mathbb{Z})\) be a positive definite \(n\times n\) matrix with integral entries. Let \(V\subseteq\mathbb{Z}^{n}\) be some set, then the function \(q:V\to\mathbb{Z}\) defined by \(q(x)=x^{T}Ax\) for all \(x\in V\) has a global minimum._
|
2303.03724 | Learning Bipedal Walking for Humanoids with Current Feedback | Recent advances in deep reinforcement learning (RL) based techniques combined
with training in simulation have offered a new approach to developing robust
controllers for legged robots. However, the application of such approaches to
real hardware has largely been limited to quadrupedal robots with direct-drive
actuators and light-weight bipedal robots with low gear-ratio transmission
systems. Application to real, life-sized humanoid robots has been less common
arguably due to a large sim2real gap. In this paper, we present an approach for
effectively overcoming the sim2real gap issue for humanoid robots arising from
inaccurate torque-tracking at the actuator level. Our key idea is to utilize
the current feedback from the actuators on the real robot, after training the
policy in a simulation environment artificially degraded with poor
torque-tracking. Our approach successfully trains a unified, end-to-end policy
in simulation that can be deployed on a real HRP-5P humanoid robot to achieve
bipedal locomotion. Through ablations, we also show that a feedforward policy
architecture combined with targeted dynamics randomization is sufficient for
zero-shot sim2real success, thus eliminating the need for computationally
expensive, memory-based network architectures. Finally, we validate the
robustness of the proposed RL policy by comparing its performance against a
conventional model-based controller for walking on uneven terrain with the real
robot. | Rohan Pratap Singh, Zhaoming Xie, Pierre Gergondet, Fumio Kanehiro | 2023-03-07T08:16:46Z | http://arxiv.org/abs/2303.03724v2 | # Learning Bipedal Walking for Humanoids with Current Feedback
###### Abstract
Recent advances in deep reinforcement learning (RL) based techniques combined with training in simulation have offered a new approach to developing control policies for legged robots. However, the application of such approaches to real hardware has largely been limited to quadrupedal robots with direct-drive actuators and light-weight bipedal robots with low gear-ratio transmission systems. Application to life-sized humanoid robots has been elusive due to the large _sim-to-real_ gap arising from their large size, heavier limbs, and high gear-ratio transmission systems.
In this paper, we present an approach for effectively overcoming the _sim-to-real_ gap issue for humanoid robots arising from inaccurate torque tracking at the actuator level. Our key idea is to utilize the current feedback from the motors on the real robot, after training the policy in a simulation environment artificially degraded with poor torque tracking. Our approach successfully trains an end-to-end policy in simulation that can be deployed on a real HRP-5P humanoid robot for bipedal locomotion on challenging terrain. We also perform robustness tests on the RL policy and compare its performance against a conventional model-based controller for walking on uneven terrain.
## I Introduction
As conventional model-based approaches for humanoid locomotion continue to improve, such as those based on preview control [1] or model predictive control (MPC) [2], their robustness against unexpected disturbances and inaccurate modeling is still an elusive research goal. On the other hand, rapid advancements in RL-based control methods for legged locomotion have shown outstanding performance in unstructured, and uncontrolled environments for quadrupedal robots [3, 4, 5, 6] and even bipedal robots [7, 8, 9]. It would be appealing to apply similar methods to develop walking controllers for larger and heavier humanoid robots too.
Training a capable policy using deep RL is data-intensive and can be damaging to the hardware. Physics simulation environments offer a safe way to collect a large amount of data, so policies are typically trained in simulation and then transferred to the real system. However, the simulated environment can fail to capture the richness of real-world dynamics. This gives rise to the “reality gap”, more commonly known as the _sim2real_ gap. The _sim2real_ gap can cause the performance of a policy trained in simulation to drop drastically when deployed on the real hardware. In the case of life-sized humanoid robots such as the HRP-series humanoids, this gap can have a more critical effect on the robot's stability during walking, compared to the quadrupedal robots or lightweight bipedal robots that are used in most of the recent works. Furthermore, these robots have heavy limbs to support a heavy upper body, which requires a high gear-ratio transmission system with a large output torque range. Consequently, the actuators used have relatively large armature (also known as rotor inertia) and poorly backdrivable joints. A high gear ratio also induces other hard-to-model but non-trivial effects such as joint friction and back-EMF. This makes the _sim2real_ gap for a robot such as HRP-5P (Figure 1) much larger than for the lighter and more backdrivable bipedal systems like Cassie and Digit. Hence, the suitability or relevance of recent _sim2real_ successes in the literature for this robot is a matter of investigation.
In this paper, we develop a system to train bipedal walking controllers in simulation and deploy them on a HRP-5P humanoid robot. HRP-5P is a high-power, electrically-actuated, 53 degrees of freedom (DoF) humanoid robot weighing over
Fig. 1: **HRP-5P humanoid robot** trained to perform bipedal locomotion via model-free reinforcement learning in MuJoCo (top); RL policy transferred to the real robot (bottom). We make use of the feedback from measured actuator current to account for the poor torque tracking on the real system.
100kg, with a height of \(182cm\)[10]. Our key insight is that the _sim2real_ gap is mainly a result of poor torque tracking due to the effect of back-EMF. We simulate the back-EMF effect during training and develop a policy that incorporates current feedback from the motors. The resulting policy learns to actively use the current feedback signal and compensate for the inaccurate torque-tracking within the motor drivers. We then successfully deploy this policy on a real HRP-5P robot. Notably, our experiments show that it is possible to bridge the "reality gap" without relying on memory-based policy architectures or resorting to unreasonably wide randomization of dynamics parameters.
To the best of our knowledge, this is the **first demonstration of an end-to-end policy for controlling a life-size humanoid robot** to achieve dynamic stability. We demonstrate locomotion over uneven and compliant surfaces. The policy runs onboard the robot and can receive user commands via a joystick. We also compare the qualitative performance of the RL policy to an open-source model-based walking controller over the same surfaces.
## II Related Work
### _Reinforcement Learning and Sim-to-Real for Legged Robots._
Reinforcement Learning has become a powerful approach for synthesizing controllers for legged robots. Control policies are typically trained in simulation and then transferred to the hardware, i.e., sim-to-real. A large number of such works focus on quadrupedal robots, e.g., ANYmal [4], Laikago [11], A1 [5], Jueying [12] and Mini-Cheetah [3]. There are also successes in applying the same approach for bipedal robots, e.g., on the Cassie [7], [13], [14], Digit [15], [16] and NimbRo-OP2X [17]. DeepWalk [17] demonstrates a single learned policy for a real humanoid robot that can achieve omnidirectional walking on a flat floor. However, it is unclear whether the robot can achieve a dynamically stable walk. The robot platform also appears to have relatively large feet which may help the robot to avoid falls even under a fragile RL policy.
Important ingredients for successful sim-to-real include (1) careful system identification, e.g., a learned actuator model is incorporated in the simulator to account for hard-to-model actuator dynamics [4], [18], (2) dynamics randomization [19], where simulation parameters such as mass and inertia are randomized to improve robustness of the policy, and (3) domain adaptation, where the policy learns to adapt based on the history of observations and actions, e.g., [6], [5], [13]. While successful on several legged robot platforms, we are yet to see such approaches being successfully applied to life-size (e.g., more than 170cm) humanoid robots with heavy limbs. This is not due to a lack of attempts: prior work has explored how to apply reinforcement learning to the NASA Valkyrie robot, e.g., [20], but so far no _sim2real_ transfer has been demonstrated. In this work, we demonstrate the first _sim2real_ success on such a platform, with a focus on how the choice of input to the policy and accurate modeling of the system play important roles.
Again, implementing learned policies on the hardware of a bulky humanoid robot, such as Valkyrie and the HRP-series robots, presents significantly greater challenges than doing so on lighter bipedal robots, such as Cassie. Safety risks are also amplified in the case of a bigger robot with powerful actuators.
### _Control for Humanoid robots._
Conventional model-based approaches for humanoid bipedal locomotion consist of local feedback controllers to track Zero-Moment-Point (ZMP) or Center-of-Mass (CoM) trajectories precomputed in an offline process. Stabilization control through the use of the divergent component of motion (DCM) has been extensively studied in prior works [21], [22] and applied to real robots. It relies on the linear inverted pendulum model [23] of bipedal walking. Other recent works such as [1], on the other hand, do not rely on biped-specific dynamics and instead perform online generation of the centroidal trajectory based on preview control for impressive multi-contact motion. The method uses a preview control scheme to generate a centroidal trajectory and a stabilization scheme to correct errors in tracking the trajectory. Since the trajectory is generated online, the robot can react robustly to environmental disturbances.
Generally, the output of such controllers is in the form of desired ZMP, CoM, and/or contact wrenches, which are then fed to a quadratic programming (QP) solver to compute desired joint positions. The desired joint positions are then tracked by a local proportional-derivative (PD) controller for stiff position control. This is in sharp contrast to RL approaches mentioned above that use low PD gains to achieve compliant tracking.
## III Background
The motor control system on the HRP-5P robot (and generally, on most robot transmission systems) is shown in Figure 2. The block structure consists of a PD controller which computes the desired torque command given the reference position from a higher-level controller and the measured position from the joint encoder. The desired torque command, or equivalently the current command (assuming a proportional relationship between torque and current for a brushed-DC motor), is then sent to a proportional-integral (PI) controller. The PI controller tries to track the current command given the measured current from the current sensor in the motor. The output of the PI controller is fed to the motor power amplifier, which in turn drives the motor.
The key observation here is that the PI controller is unable to precisely track the torque commands desired by the higher-level controller or policy, often leading to significant torque tracking errors. We suspect that such tracking errors are caused largely by the effect of the back electro-motive force (EMF). When the motor rotates, the back-EMF creates a counter-voltage that opposes the applied voltage, reducing the current flowing through the armature and leading to tracking errors. This makes the real system vastly different from simulation environments where the desired torque command is applied _exactly_ to the actuator without
errors. Other factors such as the battery voltage, resistance of the transmission cables, changes in load, or poorly tuned gains of the PI controller may also contribute to poor current tracking.
## IV Approach
In this section, we detail each component involved in training the RL policy. The training is performed in the MuJoCo simulator [24]. In particular, we describe how we overcome the poor torque-tracking on the real robot by simulating Back-EMF effect and using current feedback for the policy. Since the current and torque on the robot's actuators are assumed to be proportional, we use the terms interchangeably through the paper.
### _Observations and actions._
Similar to prior work in simulation [25], the input to our policy consists of the robot state, the external state, and a clock signal. The robot state includes the joint positions and joint velocities of each actuated joint (6 in each leg), roll and pitch orientation and angular velocity of the root (pelvis). And, more relevant to the contributions of this work, we include the motor-level torque signal for each actuator in the robot state. In simulation, this signal is equal to the actual torque applied to the actuators in the previous timestep. On the real robot, it needs to be derived from the raw current measurements, as explained in V-B. The external state vector comprises of a \(3D\) one-hot encoding to denote the walking mode -- \([0,0,1]\) for standing and \([0,1,0]\) for stepping in-place and \([1,0,0]\) for walking forward. It also includes a \(1D\) scalar which acts as a reference value depending on the mode: If the active mode is _Stepping_, the reference value denotes the turning speed; for _Walking_ it denotes forward walking speed; and is ignored for the _Standing_ mode.
The policy also observes a clock signal that depends on a cyclic phase variable \(\phi\). This variable is also used to define a periodic reward term in our reward function to generate walking behaviors. As in [25], we do a bijective projection of \(\phi\) to a \(2D\) unit cycle:
\[\text{Clock}=\left\{\sin\left(\frac{2\pi\phi}{L}\right),\cos\left(\frac{2\pi \phi}{L}\right)\right\}, \tag{1}\]
where \(L\) is the cycle period. \(\phi\) is incremented by \(1\) at each control timestep and reset to \(0\) after every \(L\) timesteps. Clock is then used as input to the policy.
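As a small illustration (not the authors' code; the variable names and the value of the cycle period are our own placeholders), the phase update and the clock features of Eq. (1) could be computed as follows:

```python
import numpy as np

CYCLE_LEN = 40   # L: gait cycle period in control timesteps (illustrative value)

def clock_features(phase):
    # Bijective projection of the cyclic phase variable onto the 2D unit circle (Eq. 1).
    angle = 2.0 * np.pi * phase / CYCLE_LEN
    return np.array([np.sin(angle), np.cos(angle)])

phase = 0
for _ in range(3 * CYCLE_LEN):              # a few gait cycles
    obs_clock = clock_features(phase)       # appended to the policy observation
    phase = (phase + 1) % CYCLE_LEN         # increment each step, reset after L steps
```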
The policy outputs desired positions of the actuated joints in the robot's legs. These predictions from the network are added to fixed motor offsets corresponding to the robot's half-sitting posture. The desired positions are tracked using a low-gain PD controller, which computes the desired joint torque as follows:
\[\tau_{pd}=K_{p}(q_{des}-q)+K_{d}(0-\dot{q}), \tag{2}\]
where \(K_{p}\) and \(K_{d}\) denote the proportional and derivative gain factors respectively. \(q_{des}\) is the policy prediction summed with the fixed motor offsets. \(q\) and \(\dot{q}\) denote the current joint position and velocity.
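A minimal sketch of this low-gain PD law is shown below; the gain values and the offset vector are placeholders of ours, not the values used on HRP-5P:

```python
import numpy as np

N_JOINTS = 12                                   # 6 actuated joints per leg
KP = np.full(N_JOINTS, 200.0)                   # placeholder proportional gains
KD = np.full(N_JOINTS, 10.0)                    # placeholder derivative gains
Q_HALF_SITTING = np.zeros(N_JOINTS)             # fixed motor offsets (placeholder values)

def pd_torque(action, q, q_dot):
    # Desired joint torque from Eq. (2); the joint-velocity target is zero.
    q_des = Q_HALF_SITTING + action             # policy output added to fixed offsets
    return KP * (q_des - q) + KD * (0.0 - q_dot)
```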
### _Reward function._
Our reward design ensures that a reference motion is not needed. Instead, we rely on several hand-crafted reward terms that promote the desired robot behavior in 3 modes: standing in place, stepping in-place (including turning) and walking forward at a reference speed. This requires the robot to develop a periodic bipedal gait, follow the mode and reference velocity commands, and maintain a fixed height. Further, we introduce terms to develop a more realistic motion that allows _sim2real_ transfer in a safe manner.
**Bipedal Walking.** We introduce reward terms for promoting a symmetrical bipedal walking gait characterized by a periodic motion of the legs, alternating between double-support (DS) phases, and the single-support (SS) phases. Depending on the phase variable \(\phi\) and the desired mode (_standing_ or _walking_), the rewards for feet ground reaction forces (GRF) and body speeds are computed.
For example, when \(\phi\) lies in the first single-support region of the gait cycle (the left foot is in the swing phase and the right foot is in the support phase), larger values of GRF on right foot lead to positive reward. Simultaneously, higher speeds for the left foot are incentivized but penalized for the right foot.
The definition of the bipedal walking terms -- ground reaction forces at the feet \(r_{\text{grf}}\) and the feet body speeds \(r_{\text{spd}}\) -- are adopted exactly from [25]:
\[r_{\text{grf}} =I_{left}^{grf}(\phi)\cdot F_{left}+I_{right}^{grf}(\phi)\cdot F_ {right} \tag{3}\] \[r_{\text{spd}} =I_{left}^{spd}(\phi)\cdot S_{left}+I_{right}^{spd}(\phi)\cdot S_ {right} \tag{4}\]
where \(F_{left}\) and \(F_{right}\) denote the GRF and \(S_{left}\) and \(S_{right}\) denote the body speeds on the left and right foot respectively. We refer the reader to [25] for a detailed explaination of the "phase indicator" functions \(I_{s}^{grf}\) and \(I_{s}^{spd}\) for modulating the reward coefficients for ground reaction forces and feet speeds.
For _standing_, the DS phase is expanded to span the entire gait cycle, and the policy is rewarded to maximize ground reaction forces on both feet while minimizing the feet speeds.
Fig. 2: **Block diagram of the motor control system in HRP-5P. The torque computed by the PD controller is tracked by a PI current controller, albeit, with significant tracking errors. These tracking errors form a crucial component of the _sim2real_ gap.**
**Root Velocity, Orientation and Height.** The root linear velocity reward term penalizes the deviation of the global speed \({}^{x}v_{root}\) of the robot's root link in the \(x\)-direction from its reference value.
\[r_{\mathrm{rv}}=\exp(-10\cdot\|^{x}v_{root}-{}^{x}\hat{v}_{root}\|^{2}) \tag{5}\]
The root yaw velocity term encourages the angular velocity of the root \(\omega_{root}\) to be close to the desired velocity \(\hat{\omega}_{root}\).
\[r_{\mathrm{av}}=\exp(-10\cdot\|\omega_{root}-\hat{\omega}_{root}\|^{2}) \tag{6}\]
During training, the active mode is randomly selected between _Standing_, _Stepping_, and _Walking_ at the start of an episode. Depending on the active mode, the scalar input for the reference value is sampled from a uniform distribution, i.e., \({}^{x}\hat{v}_{root}\) from a range of \([0,0.4]\mathrm{m\,s^{-1}}\) if the mode is _Walking_ and \(\hat{\omega}_{root}\) from a range of \([-0.5,0.5]\mathrm{rad}/\sec\) if the mode is _Stepping_.
We also reward the policy to maintain the root height \(h_{root}\) at a desired value \(\hat{h}_{root}=0.79\,\mathrm{m}\):
\[r_{\mathrm{height}}=\exp(-40\cdot(h_{root}-\hat{h}_{root})^{2}). \tag{7}\]
**Safe and realistic motion.** In addition to the above terms, we also try to create a motion that remains close to the nominal posture of the robot to avoid unnecessary sways. This is critical for safe deployment on the real robot, which has a wide range of motion and significantly strong actuators.
To encourage the robot to maintain an upright posture, we use a reward term to minimize the distance between the floor projection of the head position \({}^{x,y}\mathbf{p}_{head}\) and the root position \({}^{x,y}\mathbf{p}_{root}\). This prevents the robot from developing a leaned-back behavior and excessively swaying the upper body:
\[r_{\mathrm{upper}}=\exp(-10\cdot\|^{x,y}\mathbf{p}_{head}-{}^{x,y}\mathbf{p}_ {root}\|^{2}). \tag{8}\]
We use a term to penalize the distance of the current joint positions \(\mathbf{q}\) from the nominal "half-sitting posture", \(\mathbf{q}_{\mathbf{nominal}}\):
\[r_{\mathrm{posture}}=\exp(-\|\mathbf{q}-\mathbf{q}_{\mathbf{nominal}}\|^{2}). \tag{9}\]
We also place a penalty on joint velocities \(\mathbf{\dot{q}}\) above \(50\%\) of the maximum joint velocity \(\mathbf{\dot{q}}_{\mathbf{lim}}\).
\[r_{\mathrm{jv}}=\exp\left(-5\times 10^{-6}\sum_{\mathbf{\dot{q}}>k\cdot \mathbf{\dot{q}}_{\mathbf{lim}}}\|\mathbf{\dot{q}}\|^{2}\right). \tag{10}\]
The full reward function is given by:
\[r=w_{1}r_{\mathrm{grf}}+w_{2}r_{\mathrm{spd}}+w_{3}r_{\mathrm{ rv}}+\ldots\\ w_{4}r_{\mathrm{av}}+w_{5}r_{\mathrm{height}}+w_{6}r_{ \mathrm{upper}}+\ldots\\ w_{7}r_{\mathrm{posture}}+w_{8}r_{\mathrm{jv}}, \tag{11}\]
where the weights \(w_{1}\), \(w_{2}\), \(w_{3}\), \(w_{4}\), \(w_{5}\), \(w_{6}\), \(w_{7}\), \(w_{8}\) are set to 0.225, 0.225, 0.100, 0.100, 0.050, 0.100, 0.100, 0.100, respectively.
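As a rough illustration (not the actual training code), the exponential tracking terms of Eqs. (5)-(10) and the weighted sum of Eq. (11) could be assembled as follows; the observation/target dictionary interface is a hypothetical placeholder, while the scales, the 0.79 m height target, the 50% velocity threshold, and the weights are the values quoted above.

```python
import numpy as np

# Weights of Eq. (11): grf, spd, root velocity, yaw velocity, height, upper body, posture, joint velocity
W = dict(grf=0.225, spd=0.225, rv=0.100, av=0.100,
         height=0.050, upper=0.100, posture=0.100, jv=0.100)

def tracking_term(value, target, scale):
    """Generic exp(-scale * ||value - target||^2) reward used by Eqs. (5)-(9)."""
    diff = np.asarray(value, dtype=float) - np.asarray(target, dtype=float)
    return float(np.exp(-scale * np.sum(diff ** 2)))

def total_reward(obs, targets, r_grf, r_spd):
    """Combine the gait terms (r_grf, r_spd, computed elsewhere) with Eqs. (5)-(10)."""
    r = {"grf": r_grf, "spd": r_spd}
    r["rv"] = tracking_term(obs["root_vel_x"], targets["root_vel_x"], 10.0)      # Eq. (5)
    r["av"] = tracking_term(obs["root_yaw_vel"], targets["root_yaw_vel"], 10.0)  # Eq. (6)
    r["height"] = tracking_term(obs["root_height"], 0.79, 40.0)                  # Eq. (7)
    r["upper"] = tracking_term(obs["head_xy"], obs["root_xy"], 10.0)             # Eq. (8)
    r["posture"] = tracking_term(obs["q"], obs["q_nominal"], 1.0)                # Eq. (9)
    # Eq. (10): penalize joint velocities exceeding 50% of the joint velocity limit
    qd, qd_lim = np.asarray(obs["qd"]), np.asarray(obs["qd_lim"])
    fast = np.abs(qd) > 0.5 * qd_lim
    r["jv"] = float(np.exp(-5e-6 * np.sum(qd[fast] ** 2)))
    return sum(W[k] * r[k] for k in W)                                           # Eq. (11)
```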
### _Dynamics Randomization._
Since policies trained in simulation interact with an imperfect representation of the world, they are prone to overfitting and show brittleness on the hardware. A common approach to overcome this is to randomize various robot model and environment parameters, such as mass, inertia, motor strength, latency, and ground friction [13, 5].
In our work, we carefully select the variables that need to be randomized for a better transfer. Firstly, we can expect the mass and the position of the center of mass (CoM) of each link to differ between the real system and the simulation, due to the distribution of electronics and mechanical parts within a link. We randomize the mass of each link by \(5\%\) and the CoM positions by \(5\,\mathrm{cm}\) at the start of each episode during training.
Secondly, prior work shows that there exists a significant amount of friction between the motor and the load [26] in geared transmission systems. This frictional torque is difficult to identify or even model in simulation. MuJoCo allows the simulation of static friction and viscous friction. Hence, we randomize the static friction magnitude in the uniform range of \((2,8)\,\mathrm{N\,m}\) and the viscous friction coefficient in the uniform range of \((0.5,5)\,\mathrm{N\,m/(rad\,s^{-1})}\), based on the coarse identification performed in [26].
Besides the mass, CoM positions, and joint friction, we do not randomize any other robot dynamics parameters during training.
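A minimal sketch of how this per-episode randomization could be applied to a MuJoCo model is given below; the field names (body_mass, body_ipos, dof_frictionloss, dof_damping) are those of the MuJoCo Python bindings, the ±5 cm CoM offset is our reading of "randomize the CoM positions by 5 cm", and the reset hook itself is hypothetical.

```python
import numpy as np

def randomize_dynamics(model, nominal, rng):
    """Perturb link masses, CoM positions, and joint friction at the start of an episode.

    `model` is a mujoco.MjModel and `nominal` holds copies of the unperturbed arrays.
    Ranges follow the text: +/-5% mass, up to 5 cm CoM offset, static friction in (2, 8) N m,
    viscous friction in (0.5, 5) N m/(rad/s).
    """
    model.body_mass[:] = nominal["body_mass"] * rng.uniform(0.95, 1.05, size=model.nbody)
    model.body_ipos[:] = nominal["body_ipos"] + rng.uniform(-0.05, 0.05, size=(model.nbody, 3))
    # In practice one would index only the actuated joints rather than every DoF.
    model.dof_frictionloss[:] = rng.uniform(2.0, 8.0, size=model.nv)
    model.dof_damping[:] = rng.uniform(0.5, 5.0, size=model.nv)
```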
### _Terrain Randomization._
In order to enable the real robot to walk robustly over uneven terrain, we expose the policy to randomized terrains during training. In MuJoCo, terrains are represented using height fields, i.e., 2D matrices of elevation data. As generating new height fields online may slow down training, we instead generate one random height field at the start of training and then randomize its position relative to the flat floor in each episode. The flat ground plane and the height map are simulated simultaneously, so that the floor resembles a terrain with obstacles of varying heights scattered randomly. In this way, the robot is exposed to unevenness of up to \(3.5\,\mathrm{cm}\) in height. Terrain randomization in this work is applied uniformly throughout training; it could also be introduced gradually according to a curriculum to promote smoother learning.
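A sketch of one way to realize this in MuJoCo, assuming a single height-field asset whose elevation data lives in MjModel.hfield_data and whose z-extent is set to 3.5 cm in the model XML; the bump density and the placement range are illustrative choices, not values from the text.

```python
import numpy as np

def init_height_field(model, rng, bump_density=0.1):
    """Fill the height-field asset with random bumps once, at the start of training.

    MuJoCo stores elevations as values in [0, 1] that are scaled by the asset's z-extent,
    assumed here to be 0.035 m, so the resulting unevenness is at most 3.5 cm.
    """
    nrow, ncol = int(model.hfield_nrow[0]), int(model.hfield_ncol[0])
    bumps = rng.uniform(0.0, 1.0, size=(nrow, ncol)) * (rng.random((nrow, ncol)) < bump_density)
    model.hfield_data[:] = bumps.ravel()

def reset_terrain(model, hfield_geom_id, rng):
    """Each episode, shift the height field relative to the flat floor (illustrative range)."""
    model.geom_pos[hfield_geom_id, :2] = rng.uniform(-2.0, 2.0, size=2)
```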
### _Simulating Back-EMF._
In order to simulate the phenomenon of poor torque tracking (observed on the real system), during the training phase, we introduce a modification to the applied torque for each joint at each simulation timestep. The modification is implemented by injecting a counter-torque that scales with the joint velocity, specifically, by using the following equation:
\[\tau_{applied}=\tau_{pd}-k_{bemf}\,\dot{q}, \tag{12}\]
where \(\tau_{pd}\) denotes the torque at the output of the PD controller, and \(\dot{q}\) represents the joint velocity. The damping coefficient \(k_{bemf}\) is unknown for the real system. During training, we randomize \(k_{bemf}\) to simulate different tracking behaviors. The coefficient for each joint is resampled uniformly within \([5,40]\) every \(100\,\mathrm{ms}\).
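A sketch of how Eq. (12) could be applied inside the simulation loop; only the \([5,40]\) sampling range and the 100 ms resampling interval come from the text, and the class interface is illustrative.

```python
import numpy as np

class BackEmfModel:
    """Inject a velocity-dependent counter-torque to mimic poor torque tracking (Eq. 12)."""

    def __init__(self, n_joints, rng, resample_interval_s=0.1):
        self.rng, self.n_joints = rng, n_joints
        self.resample_interval_s = resample_interval_s
        self.k_bemf = rng.uniform(5.0, 40.0, size=n_joints)
        self._elapsed = 0.0

    def applied_torque(self, tau_pd, qvel, dt):
        # Resample the (unknown) per-joint damping coefficients every 100 ms
        self._elapsed += dt
        if self._elapsed >= self.resample_interval_s:
            self.k_bemf = self.rng.uniform(5.0, 40.0, size=self.n_joints)
            self._elapsed = 0.0
        return tau_pd - self.k_bemf * qvel  # tau_applied = tau_pd - k_bemf * qdot
```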
## V Experiments
### _RL Policy_
**Training Details.** As in [25], both the actor and the critic are represented by MLP architectures that parameterize the policy and the value function in PPO. Both MLP networks have 2 hidden layers of size 256 each and use _ReLU_ activations. Each episode rollout spans a maximum of 400 control timesteps (equivalent to \(10\,\mathrm{s}\) of simulated time), and may reset earlier if a terminal condition is met. Each training batch holds 64 such rollouts. The learning rate was set to 0.0001. We use the _LOSS_ method [8], which adds an auxiliary loss term (in addition to the original PPO loss term) to enforce symmetry. Training the policy takes around 12 hours to collect a total of 120 million samples for learning all modes, on an AMD Ryzen Threadripper PRO 5975WX CPU with 32 cores.
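A minimal PyTorch sketch consistent with the network sizes above (two hidden layers of 256 units with ReLU); the observation and action dimensions, the Gaussian policy head, and the initial log-std are placeholders, and the PPO update, rollout collection, and the symmetry loss of [8] are omitted.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=(256, 256)):
    layers, last = [], in_dim
    for h in hidden:
        layers += [nn.Linear(last, h), nn.ReLU()]
        last = h
    layers.append(nn.Linear(last, out_dim))
    return nn.Sequential(*layers)

class ActorCritic(nn.Module):
    """Actor outputs desired joint positions (mean of a Gaussian); critic outputs a state value."""

    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.actor = mlp(obs_dim, act_dim)
        self.critic = mlp(obs_dim, 1)
        self.log_std = nn.Parameter(-1.0 * torch.ones(act_dim))  # illustrative initial std

    def forward(self, obs):
        dist = torch.distributions.Normal(self.actor(obs), self.log_std.exp())
        value = self.critic(obs).squeeze(-1)
        return dist, value

model = ActorCritic(obs_dim=60, act_dim=12)                # dimensions are placeholders
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # learning rate quoted above
```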
### _Implementation on Real Robot._
We propose to include the actual applied torque in the observation vector of our RL policy. As mentioned previously, the applied torque on the real robot is extracted from the measurements of the current sensors in the motor drivers. The measured current is multiplied by the torque constant and the gear ratio corresponding to the joint and fed to the policy. It is important to note that the applied torque here refers to the torque applied at the level of the actuator. The torque applied to the load (i.e., the robot link) cannot be measured on the HRP-5P robot due to the absence of joint torque sensors. The difference between the actuator-level torque and the joint-level torque can in fact be quite significant due to the presence of static friction and viscous friction.
The policy is executed on the control PC of the robot (specifications: Intel NUC5i7RYH i7-5557U CPU with 2 cores, Ubuntu 18.04 LTS PREEMPT-RT kernel), and is implemented as an _mc-rtc_1 controller in C++. The inference is done at \(40\,\mathrm{Hz}\) with the PD controller running at \(1000\,\mathrm{Hz}\). Policy inference is quite fast, taking only around \(0.2\,\mathrm{ms}\). The overall runtime of the controller is around \(1\,\mathrm{ms}\).
Footnote 1: [https://jrl-umi3218.github.io/mc_rtc/index.html](https://jrl-umi3218.github.io/mc_rtc/index.html)
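The two-rate structure described above (policy at 40 Hz on top of a 1 kHz PD loop, with the actuator-level torque reconstructed from the measured current) can be summarized by the following sketch; the robot interface methods are hypothetical, and the actual controller is written in C++ within _mc-rtc_.

```python
import numpy as np

POLICY_HZ, PD_HZ = 40, 1000
DECIMATION = PD_HZ // POLICY_HZ   # 25 PD steps per policy step

def actuator_torque_from_current(current, torque_constant, gear_ratio):
    """Actuator-level torque estimate fed back to the policy (no joint torque sensors)."""
    return current * torque_constant * gear_ratio

def control_step(policy, robot, kp, kd):
    tau_meas = actuator_torque_from_current(robot.motor_currents(),
                                            robot.torque_constants(),
                                            robot.gear_ratios())
    obs = np.concatenate([robot.proprioception(), tau_meas])
    q_des = policy(obs)                      # desired joint positions from the RL policy
    for _ in range(DECIMATION):              # inner 1 kHz PD loop
        tau = kp * (q_des - robot.q()) - kd * robot.qd()
        robot.apply_torque(tau)
        robot.wait_for_next_tick()
```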
### _Sim-to-sim Validation._
While we use MuJoCo as the training environment, we perform thorough evaluations also in the Choreonoid simulator before real robot deployment. Choreonoid is traditionally a more popular choice for simulating humanoid robot controllers [27]. We use the _mc-rtc_ control framework for executing the policy onboard the control PC of the robot.
This allows us to evaluate the same controller code transparently in 3 different environments: (1) in MuJoCo using the _mc-mujoco_ interface [28], (2) in Choreonoid, and (3) on the real robot, the latter two using the _mc-openrtm_ interface for communicating with the OpenRTM middleware [29] used in the HRP robots and the Choreonoid simulator.
We found the contact modelling in Choreonoid to be more stable than the contact modelling for height fields in MuJoCo. Hence, Choreonoid forms an important part of our pipeline for evaluating policies on uneven terrain before real robot deployment. Nevertheless, besides some small differences, we did not observe any major discrepancies in the policies' behavior between MuJoCo and Choreonoid during evaluation.
Fig. 4: **Real experiment logs for torque tracking on the right hip roll joint** for the baseline policy (top) and the policy trained with our proposed approach (bottom). We observe that the torque tracking becomes unstable for the baseline policy (circled), when the robot is turning and an audible noise can be heard from the joint. Such effects are not observed with the proposed approach, and the robot behaviour looks much more stable.
Fig. 3: **Sim-to-sim validation.** Simulating HRP-5P in Choreonoid (left) and MuJoCo (right) using the mc-openrtm and mc-mujoco interfaces respectively.
### _Ablation study._
We perform ablation on the two main ingredients proposed in this work for _sim2real_ success - \((1)\) training with simulated poor torque-tracking and \((2)\) providing torque feedback to the policy.
Although real robot experiments are expensive and testing of policies that are prone to failure can be dangerous, developing a test environment in simulation is not a credible alternative. We found that policies that succeed in simulation even in very challenging circumstances (like degraded torque tracking, uneven terrain, external perturbations), can still behave undesirably when deployed on the real robot; indicative of a large and critical reality gap. Hence, it is important to study the behaviour of the policies on the real system. We train 3 different policies to analyze the impact of the proposed approach:
1. **Policy A** forms the baseline policy. It is trained without simulated poor torque-tracking and without observing the current feedback from the actuators. When deployed on the real robot, this policy gives the worst performance. The robot is prone to self-collision between the feet when the swing leg lands on the ground. This points to the difficulty faced by the policy in controlling the real robot's leg swing motion. This is because the policy is trained with perfect tracking in the simulation environment but is exposed to degraded tracking on the real robot. Further, we attempt to train another policy with an additional termination condition on the feet distance (\(d<0.2\,\mathrm{m}\)) to promote a wider stance. In this case, self-collision is prevented on the real robot but we can observe that the torque-tracking becomes unstable in some regions of the motion with an audible noise heard from the joints (see Figure 4).
2. **Policy B** is trained with poor torque-tracking but without torque feedback. We replace the inputs corresponding to the torque-feedback with the \(\mathbf{0}\) vector during training and evaluation, while keeping all other parameters the same. This policy appears robust in the simulation environment. However, when deployed on the real robot, we again observe the self-collision between the feet. The speed of the swing leg is also considerably higher, meaning that the policy finds it harder to compensate for the changing discrepancy between command and applied torque. From this observation, we conclude that providing the torque feedback (from the measured current) is vital for the policy to adapt to the degraded torque tracking environment.
3. **Policy C** is trained using the proposed approach of simulating poor torque-tracking plus providing feedback from current measurements to the policy. This policy appears significantly more stable on the real robot. Self-collision is not observed and there is no audible noise during any part of the motion. The robot could successfully walk up to several meters, including turning, stepping in place, and standing. The robot could also walk over uneven terrain consisting of rigid and soft obstacles up to \(2\,\mathrm{cm}\) high.
We further analyze the real robot experiment logs corresponding to the 3 policies in Figure 5 for torque-tracking on the "RKP" (right knee pitch) joint. The "RKP" joint is chosen because the tracking errors are more noticeable for this joint, and because the knee joint is expected to have a more consequential impact on the walking behavior (compared to, say, the hip yaw joint). We observed self-collision between the feet in the case of Policy A and Policy B, while Policy C can perform stable walking and handle uneven terrain well. The tracking of the "RKP" joint for A and C looks somewhat similar; however, in the case of Policy C, the feedback allows the policy to react to the tracking error in the previous timestep and achieve better control of the swing motion.
Fig. 5: **Torque-tracking performance on the RKP joint for 3 policies.** Policy A is trained without the back-EMF effect in sim and without feedback. Policy B is trained with back-EMF but without feedback. Policy C is trained with back-EMF and with the torque (current) feedback. Policies A and B both lead to self-collision between the feet (circled in red) while C is able to perform stable walking.
An important observation leading to the developments in this work is that even though the output of the RL policy is in the form of "desired joint positions", the robot can interact with the environment only by applying torques on its links. Since there is a mismatch between the PD torque exerted by the policy and the actual torque applied at the motor on the real system, the policy needs to be trained to account for this discrepancy for improved _sim2real_ transfer.
For B, the tracking is observed to be much worse. The tracking of the "RCR" (right hip roll) joint for Policy A (retrained for a wider feet distance) and Policy C is also shown in Figure 4.
Notably, providing the torque feedback will not eliminate tracking errors, because it is difficult for the policy to anticipate future errors. We believe that incorporating a history of observations in the observation space would also be beneficial.
### _Comparison to model-based controller._
We compare the robustness of our policy for locomotion on uneven terrain to an existing model-based controller for humanoid locomotion. For the test, we use the open-source _BaselineWalkingController_, which provides an implementation of walking control based on linear inverted pendulum mode (LIPM). This method combines online CoM trajectory generation based on preview control, ZMP feedback based on the divergent component of motion (DCM) of LIPM, and foot contact force control based on the damping control law [30, 23].
Our test environment consists of a stack of padded carpet tiles, each of thickness \(0.6\,\mathrm{cm}\), placed on a flat floor. The robot starts some distance ahead of the stack and needs to go across while stepping on top of the stack. Since the tiles are made from a soft material, the obstacle forms a compliant support surface, which is more challenging from a balance perspective. The _BaselineWalkingController_ could succeed on a stack of 3 tiles but failed on a stack of 4 tiles (nearly \(2.4\,\mathrm{cm}\) in height): the robot loses balance and falls when the support leg is on the stacked carpets. On the other hand, the RL policy trained with our approach could succeed on a stack of 5 tiles (\(3\,\mathrm{cm}\) high) in 2 of 2 trials. The policy also succeeds in making several partial contacts on the obstacle, where the foot is placed partially on the stack. (The tests are shown in the supplementary video.)
While there exist newer model-based approaches for humanoid locomotion that may provide greater robustness [1, 2], our test still provides valuable insights into the robustness of model-free RL policies compared with LIPM-based bipedal locomotion controllers. The critical factors responsible for the higher robustness of controllers based on deep RL are subject to debate, but we believe the low PD gains, the feedback nature of the policy, and the absence of strict constraints on the feet trajectories may play an important role.
## VI Conclusion
In this work, we developed a system to train control policies for the life-sized humanoid robot HRP-5P. We identified that the main _sim2real_ gap for these types of large robots arises from the poor torque tracking of the motor control systems due to high gear ratios. We simulated back-EMF during training and provided the applied-torque feedback to the policy to combat the _sim2real_ gap. Policies were trained in simulation and directly transferred to the hardware.
Our experiments show that providing the current feedback is a key ingredient for reliable _sim2real_ transfer. Without the proposed feedback signal, the policy is prone to failure in controlling the leg swing motion, often causing self-collision between the legs. We could not achieve _sim2real_ success without simulating poor torque tracking during training. For robots with joint-level torque sensors, we believe our proposed approach can yield better performance by accounting for the frictional torque in the joints.
An important aspect of our findings was that we did not need to perform tedious manual tuning of the reward function or randomizing dynamics variables to wide, unreasonable ranges (an often omitted part from the literature). It points to the potential effectiveness of an accurate robot model for training as well as careful identification of key factors responsible in overcoming the _reality gap_.
We compared the RL policy to a conventional model-based approach for bipedal locomotion on the real humanoid platform and obtained encouraging results. The RL policy could handle obstacles over \(3\,\mathrm{cm}\) high, while the robot lost balance and fell with the model-based controller for obstacles over \(2\,\mathrm{cm}\). We release the source code for RL training and evaluation in MuJoCo for reproducibility 2. The model-based controller is also public 3.
Footnote 2: [https://github.com/rohanspingh/LearningHumanoidWalking](https://github.com/rohanspingh/LearningHumanoidWalking)
Footnote 3: [https://github.com/isri-aist/BaselineWalkingController](https://github.com/isri-aist/BaselineWalkingController)
In the future, we plan to expand the framework for developing a policy for _backwards_ locomotion and tackle even more challenging terrain. We also hope to identify and overcome other factors inhibiting better _sim2real_ transfer.
## Acknowledgements
The authors thank all members of JRL for providing their support in conducting robot experiments that were done during the production of this work. We are especially grateful to Hiroshi Kaminaga, Mitsuharu Morisawa, and Mehdi Benallegue for the many insightful discussions. This work was partially supported by JST SPRING Fellowship Program, Grant Number JPMJSP2124.
|
2310.14360 | Is ChatGPT a game changer for geocoding -- a benchmark for geocoding
address parsing techniques | The remarkable success of GPT models across various tasks, including toponymy
recognition motivates us to assess the performance of the GPT-3 model in the
geocoding address parsing task. To ensure that the evaluation more accurately
mirrors performance in real-world scenarios with diverse user input qualities
and resolve the pressing need for a 'gold standard' evaluation dataset for
geocoding systems, we introduce a benchmark dataset of low-quality address
descriptions synthesized based on human input patterns mining from actual input
logs of a geocoding system in production. This dataset has 21 different input
errors and variations; contains over 239,000 address records that are uniquely
selected from streets across all U.S. 50 states and D.C.; and consists of three
subsets to be used as training, validation, and testing sets. Building on this,
we train and gauge the performance of the GPT-3 model in extracting address
components, contrasting its performance with transformer-based and LSTM-based
models. The evaluation results indicate that Bidirectional LSTM-CRF model has
achieved the best performance over these transformer-based models and GPT-3
model. Transformer-based models demonstrate very comparable results compared to
the Bidirectional LSTM-CRF model. The GPT-3 model, though trailing in
performance, showcases potential in the address parsing task with few-shot
examples, exhibiting room for improvement with additional fine-tuning. We open
source the code and data of this presented benchmark so that researchers can
utilize it for future model development or extend it to evaluate similar tasks,
such as document geocoding. | Zhengcong Yin, Diya Li, Daniel W. Goldberg | 2023-10-22T17:03:56Z | http://arxiv.org/abs/2310.14360v4 | # Is ChatGPT a game changer for geocoding - a benchmark for geocoding address parsing techniques*
###### Abstract.
The remarkable success of GPT models across various tasks, including toponymy recognition motivates us to assess the performance of the GPT-3 model in the geocoding address parsing task. To ensure that the evaluation more accurately mirrors performance in real-world scenarios with diverse user input qualities and resolve the pressing need for a 'gold standard' evaluation dataset for geocoding systems, we introduce a benchmark dataset of low-quality address descriptions synthesized based on human input patterns mining from actual input logs of a geocoding system in production. This dataset has 21 different input errors and variations; contains over 239,000 address records that are uniquely selected from streets across all U.S. 50 states and D.C.; and consists of three subsets to be used as training, validation, and testing sets. Building on this, we train and gauge the performance of the GPT-3 model in extracting address components, contrasting its performance with transformer-based and LSTM-based models. The evaluation results indicate that Bidirectional LSTM-CRF model has achieved the best performance over these transformer-based models and GPT-3 model. Transformer-based models demonstrate very comparable results compared to the Bidirectional LSTM-CRF model. The GPT-3 model, though trailing in performance, showcases potential in the address parsing task with few-shot examples, exhibiting room for improvement with additional fine-tuning. We open source the code and data of this presented benchmark1 so that researchers can utilize it for future model development or extend it to evaluate similar tasks, such as document geocoding.
In this work, we synthesize low-quality address descriptions based on human input patterns mined from real geocoding system logs, and we evaluate address parsing performance using the GPT-3 model, comparing it with transformer-based and recurrent neural network-based address parsing methods.
The contributions of this work can be summarized as follows.
* A benchmark dataset that contains diverse address descriptions (e.g., highway and grid style) covering all U.S. states and 21 input errors and variations is generated by mining real geocoding system logs. A data processing pipeline is developed to analyze input errors and variations occurring in different address components of real user input, and to then inject the harvested errors and variations following these identified patterns to synthesize low-quality geocoding input. To the best of our knowledge, this is the first publicly released annotated low-quality geocoding input dataset for U.S. addresses with such magnitude of coverage and error/variation.
* Address parsers built upon five different models (i.e., the GPT-3 model, transformer-based models, and an LSTM-based model) are evaluated on synthesized low-quality address input with different errors to reflect their performance when facing various input qualities in real scenarios. These evaluation results can provide insights into the potential capabilities of each model, especially the GPT-3 model, for further fine-tuning or enhancement.
* The proposed benchmark, encompassing benchmark datasets and address parsing methods, is available as open source and can be accessed on Github2. Researchers could use the benchmark dataset for other geospatial text processing tasks or use the evaluation results as baselines for future development and experimental comparisons. This proposed framework can be extended to synthesize language- or country-specific low-quality input to evaluate address parsing or geocoding systems in different countries. Footnote 2: [https://github.com/zhengcongyin/Geocoding-Address-Parsing-Benchmark](https://github.com/zhengcongyin/Geocoding-Address-Parsing-Benchmark)
The remainder of this paper is organized as follows. Section 2 summarizes recent work on address parsing techniques in geocoding and Name Entity Recognition in other domains. Section 3 describes the design details of the proposed benchmark, including the approach to synthesize the low-quality geocoding input, evaluated address parsing techniques, and evaluation metrics. In Section 4, we illustrate the evaluation outcomes and discuss the results. We conclude this paper with potential avenues for future work in Section 5.
## 2. Related Work
Geocoding address parsing is a domain-specific Named Entity Recognition (NER) task that has received extensive research attention. In previous research endeavors, the primary effort has centered on algorithms designed to improve address parsing capabilities. In the initial stage, these parsing algorithms were predominantly built on rule-based and statistical methodologies. Rule-based approaches usually leverage the format of the local address schema and its hierarchy to determine the sequence of labels for a given address input (Beng et al., 2017). Typically, a trie or tree-based data structure is used to mimic the hierarchy of address systems, while string matching (i.e., forward/backward string methods), beam search, heuristic search strategies, and finite-state machines are used to explore the possible label sequences for addresses. Given that rule-based methods heavily rely on address system rules and lexicons to recognize certain address components (e.g., road types), the variation of user input in terms of quality and descriptions could easily result in the "Out Of Vocabulary" issue. Statistical address parsing, in contrast, represents a learning and tagging process: an annotated corpus is required for training, and sequence tagging algorithms make decisions for each label. Two popular models, Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs), have been used to build address parsers (Hinton et al., 2015; Chen et al., 2016) and achieved SOTA performance at that time. To augment the coverage of the state transition matrix for variations, (Hinton et al., 2015) enhanced the training data to contain intentionally manipulated addresses. Later, hybrid address parsers that combine the rule-based and statistical approaches (Hinton et al., 2015) showed better parsing performance. In recent years, research has shifted towards using neural networks and LLMs as the foundational framework for building address parsers (Hinton et al., 2015; Chen et al., 2016; Chen et al., 2017; Chen et al., 2018; Chen et al., 2019), given their proven success in NER tasks across various domains (Chen et al., 2017). Another avenue of research related to address parsing involves reducing the need for annotated data (Chen et al., 2018) or predicting noisy tokens in geocoding queries (Xu et al., 2018).
Given that address descriptions and formats differ among countries (Xu et al., 2018), the aforementioned studies use input address descriptions from the address systems specific to their study areas, including the U.S. (Chen et al., 2018), China (Xu et al., 2018), Japan (Xu et al., 2018), and India (Xu et al., 2018). However, the lack of a standardized evaluation dataset for each individual address system complicates the direct comparison of the experimental results of different studies targeting the same country. Our work extends the existing works by presenting a unified evaluation framework, including a benchmark dataset, evaluation procedures, and evaluation metrics created specifically to assess geocoding address parsing. The benchmark dataset, which accounts for the heterogeneity in address formats and encompasses a wide range of input errors/variations, is publicly released to facilitate future research investigations.
## 3. Benchmark Designs
This section describes how the benchmark datasets are generated, the selection of the evaluated models, and the metrics used to assess address parsing performance.
### Benchmark Dataset
Figure 2 depicts the workflow to generate the benchmark dataset, namely, the low-quality geocoding input dataset. This workflow contains three major steps: (1) extracting the ground-truth dataset; (2) building an _address component error injector_ that can generate common geocoding input errors and variations; (3) synthesizing the low-quality geocoding input.
Figure 1. USPS standard address components of a postal address description
#### 3.1.1. Ground-truth data
The ground-truth data is generated by extracting from reference datasets, as reference datasets are the single source of truth for geocoding systems to perform the retrieval processing to derive final outputs. We extracted address descriptions from the Navteq 2016 address point reference datasets 3 used by the Texas A&M geocoding platform4, because this dataset has been utilized by other studies (Narayan et al., 2017; Wang et al., 2018), and every address description in this reference dataset has already been segmented and aligned with a USPS standard address component label shown in Figure 1. To ensure the diversity of address descriptions across the U.S., we first get the unique combinations of address components except for the house number (i.e., street name, predirectional, postdirectional, city name, and postal code) from each U.S. state and the District of Columbia, meaning that we obtained one address description from every street across all 50 U.S. states and D.C. to form a unique address description dataset. Then, we further split this unique address dataset into three smaller datasets designated for the training, validation, and testing procedures in this benchmark. The testing dataset is generated by extracting one address description from every pair of state and postal code in the U.S., resulting in a dataset of 30,622 addresses. To obtain the training and validation datasets, we first exclude the testing dataset from the unique address collection; we then randomly select up to 9 addresses from every unique combination of city, state, and postal code from the unique address collection and put the first two addresses and the last address into the training and validation datasets, respectively, when applicable. In the end, the training and validation datasets have 148,173 and 60,522 addresses, respectively, and all address descriptions in the training, validation, and test datasets are mutually exclusive.
Footnote 3: [https://www.hber.com/en/navteq](https://www.hber.com/en/navteq)
Footnote 4: [https://geoservices.tamu.edu/](https://geoservices.tamu.edu/)
#### 3.1.2. Address component error injector
To synthesize low-quality geocoding input, we build an _address component error injector_ to randomly generate errors and variations based on human input patterns. To capture such patterns for geocoding input, we first extract three months of geocoding transactions from the Texas A&M geocoding platform5 and only keep those inputs that cannot lead to full matching scores (i.e., the reference data can only partially match the input). In total, we obtained roughly 30 million input queries. Next, we iterate over each input and compare it to its corresponding reference data to detect input errors and variations. Since user input and matched reference data have already been segmented based on address components by the geocoding platform to seek a match, input errors can be found by aligning user input address descriptions with their corresponding descriptions in the address reference datasets. For example, if the city name is missing in the input compared to the corresponding reference data, the error of _omission_ is detected. While iterating over the historical user input data, we collect sets of mismatched input samples per address component and then further distill them to get cases of commonly used abbreviations and common substitutions for each address component. In total, we identify 21 errors and variations on different address components, listed in Table 1.
Footnote 5: [https://geoservices.tamu.edu/](https://geoservices.tamu.edu/)
Lastly, we create the logic to generate these identified errors and variations by aligning and comparing the segmented user input and the segmented reference data. Addition or omission errors can be generated by reversing the process of how these errors/variations are detected. For instance, a directional addition error can be identified by comparing the user input and the reference data; such an error can be synthesized by adding a directional to an address record in the reference data. For typographic errors, we employ the same mechanism as the Freely Extensible Biomedical Record Linkage (FEBRL) package (Brock et al., 2015) to randomly swap, delete, insert, or replace a character. We quantify the degree of a typographic error by edit distance and set the probabilities of typographic errors with edit distance 1 or 2 to be the same. As for the error/variation of abbreviation and substitution, we leverage the collected common cases to reproduce the error/variation, for example, replacing _Los Angeles_ with _LA_ for a city name input.
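A minimal sketch of the FEBRL-style character-level corruption just described; the operation set (swap, delete, insert, replace) follows the text, while the alphabet and the interface are illustrative.

```python
import random
import string

def inject_typo(token, edit_distance=1, rng=random):
    """Apply `edit_distance` random character edits (swap/delete/insert/replace) to a token."""
    if not token:
        return token
    chars = list(token)
    for _ in range(edit_distance):
        op = rng.choice(["swap", "delete", "insert", "replace"])
        i = rng.randrange(len(chars))
        if op == "swap" and len(chars) > 1:
            j = min(i + 1, len(chars) - 1)
            chars[i], chars[j] = chars[j], chars[i]
        elif op == "delete" and len(chars) > 1:
            del chars[i]
        elif op == "insert":
            chars.insert(i, rng.choice(string.ascii_uppercase))
        else:  # replace
            chars[i] = rng.choice(string.ascii_uppercase)
    return "".join(chars)

# e.g. inject_typo("MAIN", 1) may yield "MIAN" (swap), "MAN" (delete), or a similar corruption
```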
#### 3.1.3. Low-quality geocoding input for benchmarks
The last step is synthesizing the low-quality geocoding input used as the benchmark dataset to assess address parsing techniques. We apply the address component error injector to the training, validation, and test ground-truth datasets obtained in Section 3.1.1. Specifically, we set the probability of injecting errors/variations into an address record in these three split datasets to 0.5, and the ratio of injecting one or two errors/variations is 7:3. Every address component has the same chance of being manipulated to contain an error/variation. It is worth noting that we only inject errors that are applicable to an address record. For example, if an address is an ordinal number street such as "5th Avenue", it is applicable to have the error of ordinal number suffix omission to become "5 Avenue". We intentionally reduce the postal code digits mismatched error to prevent it from dominating the synthesized errors. To this end, the training, validation, and test datasets all contain address records with zero, one, or two errors/variations, as summarized in Table 2. The distribution of each error/variation for each dataset is summarized in Figure 3. To label these three datasets for training and evaluation, we employ the IOB (Inside-Outside-Beginning) tagging scheme to assign the corresponding label to each chunk segmented by white space. For example, the city name _Los Angeles_ would receive two labels: _B-CITY_ and _I-CITY_.
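The IOB labelling step can be summarized by the following small sketch, in which each whitespace-separated token of a segmented component receives a B- tag for the first token and I- tags afterwards; the component keys are illustrative.

```python
def to_iob(components):
    """components: ordered (label, text) pairs, e.g. [("NUMBER", "1600"), ("CITY", "Los Angeles")]."""
    tokens, tags = [], []
    for label, text in components:
        for i, tok in enumerate(text.split()):
            tokens.append(tok)
            tags.append(("B-" if i == 0 else "I-") + label)
    return tokens, tags

# to_iob([("CITY", "Los Angeles")]) -> (["Los", "Angeles"], ["B-CITY", "I-CITY"])
```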
### Baseline models
The following section provides an overview of the baseline models utilized to build address parsers in our experiments, each representing significant strides in the field of NLP.
Figure 2. Benchmark dataset processing workflow
**Bidirectional LSTM-CRF**[16]: The Bidirectional LSTM-CRF model combines the strengths of both Bidirectional Long Short-Term Memory (Bi-LSTM) and Conditional Random Fields (CRF) for sequence labeling tasks. Bi-LSTM, a type of Recurrent Neural Network (RNN), is capable of capturing the context from both directions of a sequence and hence is widely used for NLP tasks. CRF, on the other hand, is a statistical modeling method often used for structured prediction. In the context of NLP, CRFs are used to predict the most likely labels for a sequence of words. The Bidirectional LSTM-CRF model leverages the Bi-LSTM to extract complex features from input sequences, and then use the CRF to predict the optimal labeling sequence, considering both the input sequence and the correlation of labels, resulting in state-of-the-art performance on various sequence labeling tasks.
**BERT**[5]: Bidirectional Encoder Representations from Transformers (BERT), developed by Google, revolutionized the NLP landscape by introducing a novel pre-training objective known as Masked Language Model (MLM). This objective allows BERT to understand the context of a word by considering both its preceding and following words, a significant departure from previous models that only captured unidirectional contexts. Pre-trained on a substantial corpus of unlabelled text, including the entirety of Wikipedia and the Book Corpus [6; 41], BERT has shown remarkable performance across a variety of NLP tasks. We choose to use the standard _bert-base-uncased_ model which contains 110M parameters.
**roBERTa**[25]: roBERTa, a variant of BERT introduced by Facebook, further refines the pre-training process. It eliminates the next-sentence pretraining objective, modifies several key hyper-parameters, and leverages larger mini-batches and learning rates. Additionally, roBERTa is trained on an augmented version of the BookCorpus dataset [6; 41], leading to improved performance over BERT in several benchmark tasks. We choose to use the standard _roberta-base_ model which contains 125M parameters.
**DistilBERT**[32]: As a distilled variant of BERT, DistilBERT represents an effort to optimize the balance between model performance and resource efficiency. DistilBERT is 60% smaller in size, six times faster, yet retains 95% of BERT's performance. This is achieved through a process known as distillation, where a smaller model (the student) is trained to mimic the behavior of a larger model (the teacher). We choose to use the standard _distilbert-base-uncased_ model which contains 67M parameters.
**GPT-3**[1]: GPT-3, also known as ChatGPT, the successor to GPT-2 and also developed by OpenAI, is an autoregressive language model with a staggering 175 billion machine learning parameters.
\begin{table}
\begin{tabular}{c l l l} \hline \hline
**Address component** & **Error/Variation** & **Example** & \\ \hline House number & Omission & **1600** Main St \(\rightarrow\) Main St \\ Pre-/Post-directional & Omission & **East** Main St \(\rightarrow\) Main St \\ & Pre/Post-Direction swap & E Main St NW \(\rightarrow\)**NW** Main St **E** \\ Street base name & Typo (edit distance 1) & Main St \(\rightarrow\)**Man** St \\ & Typo (edit distance 2) & Main St \(\rightarrow\)**Mian** St \\ & Number suffix omission & 5th Ave \(\rightarrow\)**5** Ave \\ & Spanish prefix omission & **La** Brea Ave \(\rightarrow\) Brea Ave \\ & Space omission & Memory Hill \(\rightarrow\)**Memorhythill** \\ & Space addition & Reachcliff \(\rightarrow\)**Reach Cliff** \\ & Partial abbreviation & Warm Mountain \(\rightarrow\) Warm **Mtn** \\ Road type & Omission & Main **St**\(\rightarrow\) Main \\ & Valid road type substitution & Main St \(\rightarrow\) Main **Ave** \\ & Invalid road type substitution & Main St \(\rightarrow\) Main **St St** \\ City & Omission & **Houston**, TX 77845 \(\rightarrow\) TX 77845 \\ & Typo (edit distance 1) & Austin \(\rightarrow\)**Austiun** \\ & Typo (edit distance 2) & Lurerenre \(\rightarrow\)**Lurve** \\ & Direction addition & Houston \(\rightarrow\)**South** Houston \\ & Direction omission & **North** Little Rock \(\rightarrow\) Little Rock \\ & First character abbreviation & Los Angeles \(\rightarrow\)**LA** \\ & Space addition & Redlands \(\rightarrow\)**Red Lands** \\ State & Omission & Houston, **TX** 77001 \(\rightarrow\) Houston, 77001 \\ Postal code & Omission & Houston, TX **77001**\(\rightarrow\) Houston, TX \\ & Any digits mismatched & 77845 \(\rightarrow\)**77843** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Geocoding input errors and variations
\begin{table}
\begin{tabular}{c c c c} \hline \hline Subject & Total & No error/variation & One error/variation & Two error/variation \\ \hline Training & 148,173 & 74,086 & 51,898 & 22,189 \\ Validation & 60,522 & 30,230 & 21,247 & 9,045 \\ Test & 30,622 & 15,286 & 10,736 & 4,600 \\ \hline \hline \end{tabular}
\end{table}
Table 2. Frequency of address records with different quality in the benchmark dataset
GPT-3's size and complexity enable it to excel in tasks involving the generation of long, coherent text passages. In addition to this, GPT-3 exhibits remarkable proficiency in translating between languages, answering questions, summarizing text, and more, making it one of the most versatile language models to date.
### Evaluation metrics
Given that the task of geocoding address parsing is to segment the input address description and assign a corresponding address component label to each segment based on the USPS address standard, we quantify the parsing performance by the standard NER evaluation metrics, namely the precision, the recall, and the F1 score (i.e., the harmonic mean of precision and recall) of every annotated label. Such a measurement indicates a parsing model's capability to recognize all address components correctly. Since the output of geocoding address parsing is used to build a query string to retrieve and rank matched candidates, it is possible that not all address components would be used to build queries, and some address components are more important than others, depending on how the matching component of a geocoding system is built. To this end, we further calculate a score (denoted the _parsing score_) based on the weight of each address component used by the Texas A&M geocoding platform6, using Equation 1 as follows.
Footnote 6: [https://geoservices.tamu.edu/](https://geoservices.tamu.edu/)
Footnote 7: [https://geoservices.tamu.edu/](https://geoservices.tamu.edu/)
Footnote 8: [https://github.com/comprolls/Promptify](https://github.com/comprolls/Promptify)
Footnote 9: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)
\[\text{Parsing Score}=\sum W_{address}\times F1_{address} \tag{1}\]
where \(W_{address}\) and \(F1_{address}\) represent the weight and F1 score of every address component, respectively. The weight of each address component (shown in Table 3) is obtained from the Texas A&M geocoding platform7, given its performance in (Cheng et al., 2018; Chen et al., 2018).
Footnote 7: [https://huggingface.co/docs/transformers/index](https://huggingface.co/docs/transformers/index)
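Equation (1) with the component weights of Table 3 amounts to the following short computation; the per-component F1 scores are assumed to be computed elsewhere (e.g., with a standard NER evaluation tool).

```python
# Component weights from Table 3
WEIGHTS = {
    "house number": 20, "predirectional": 7, "street base name": 45, "road type": 10,
    "postdirectional": 4, "city": 17, "state": 1, "zip code": 45,
}

def parsing_score(f1_per_component):
    """Eq. (1): weighted sum of per-component F1 scores."""
    return sum(WEIGHTS[c] * f1_per_component.get(c, 0.0) for c in WEIGHTS)
```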
## 4. Experiment results and discussion
### Model Implementation
Since the GPT-3 model generates output via a user prompt, we conducted the NER task via the Promptify library 8. This library sends a structured input to LLMs, which is equivalent to asking a properly structured question that helps the GPT-3 model understand the task better. The API version we used is _gpt-3.5-turbo_9. We supplied three examples to the GPT-3 model to help it understand the expectations for the output, as we found the output under the zero-shot scenario to be suboptimal. The three examples listed below are randomly selected from the training dataset, containing a predirectional, a postdirectional, and no directional, respectively.
Footnote 8: [https://huggingface.co/docs/transformers/index](https://huggingface.co/docs/transformers/index)
1. 467 W BROOKWOOD CIR OZAR AL 36360
2. 27195 DORY RD W SALVO NC 27972
3. 118 LUKE HICKS RD HAZL GREEN AL 35750
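One plausible way to assemble such a few-shot prompt for _gpt-3.5-turbo_ is sketched below; the exact template produced by Promptify is not reproduced here, and both the demonstration address and its labelling are made up rather than taken from the benchmark.

```python
FEW_SHOT = [
    # (raw address, expected labelling) -- this demonstration pair is illustrative only
    ("123 N EXAMPLE ST SOMETOWN TX 77001",
     {"house number": "123", "predirectional": "N", "street base name": "EXAMPLE",
      "road type": "ST", "city": "SOMETOWN", "state": "TX", "zip code": "77001"}),
]

def build_messages(address):
    system = ("Extract the USPS address components (house number, predirectional, street base "
              "name, road type, postdirectional, city, state, zip code) and return them as JSON.")
    messages = [{"role": "system", "content": system}]
    for raw, parsed in FEW_SHOT:
        messages.append({"role": "user", "content": raw})
        messages.append({"role": "assistant", "content": str(parsed)})
    messages.append({"role": "user", "content": address})
    return messages  # to be passed to the gpt-3.5-turbo chat completion endpoint
```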
The three transformer-based models, along with the Bidirectional LSTM-CRF model, were implemented using the PyTorch framework. The transformer-based models were built using the Hugging Face library 10. We trained these models using the Adam optimizer (Kingma and Ba, 2014), a popular choice for training deep learning models due to its efficiency and low memory requirements.
Figure 3. The distribution of synthesized geocoding input errors/variations in training (a), validation (b), and test (c) datasets
\begin{table}
\begin{tabular}{l c} \hline \hline Address Component & Weight \\ \hline House number & 20 \\ Predirectional & 7 \\ Street base name & 45 \\ Road type & 10 \\ Postdirectional & 4 \\ City & 17 \\ State & 1 \\ Zip code & 45 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Weight of each address component
The initial learning rate was set to 0.00002, with a linear learning rate schedule. The Adam optimizer was configured with the beta1 and beta2 parameters set to 0.9 and 0.999, respectively. The dropout is set to 0.5, as we observed that the default dropout can easily lead to over-fitting in the initial stage of this experiment. The batch size is set to 30. The Bidirectional LSTM-CRF model was implemented based on an open-source implementation11. Specifically, we employed GloVe.6B.100d 12 for the word embeddings fed into the neural network, the stochastic gradient descent optimizer with a learning rate of 0.1, a hidden size of 200 for the LSTM cells, and a batch size of 10. We added an IOB label constraint on the transition parameters to enforce valid transitions.
Footnote 11: [https://github.com/allanyl/pytorch_neural_crf](https://github.com/allanyl/pytorch_neural_crf)
Footnote 12: [https://nlp.stanford.edu/projects/glove](https://nlp.stanford.edu/projects/glove)
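A sketch of the corresponding Hugging Face token-classification setup with the hyperparameters listed above (it mirrors, rather than reproduces, the actual training scripts); dataset loading, tokenization with label alignment, and the data collator are omitted, and the label list is abbreviated.

```python
from transformers import (AutoModelForTokenClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

labels = ["O", "B-NUMBER", "I-NUMBER", "B-STREET", "I-STREET", "B-CITY", "I-CITY"]  # abbreviated

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels), hidden_dropout_prob=0.5)

args = TrainingArguments(
    output_dir="address-parser",
    learning_rate=2e-5,                 # 0.00002, as above
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    per_device_train_batch_size=30,
    num_train_epochs=25,
    evaluation_strategy="epoch",        # evaluate on the validation set after each epoch
)

# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=val_ds)
# trainer.train()
```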
### Experiment Settings
This experiment aims to compare the performance of the different baseline models on the task of geocoding address parsing. To have a fair comparison, we utilized the same datasets, run-time environment, and training/evaluation procedures to ensure any differences in performance could be attributed to the models' architecture and capabilities rather than external factors. Among these five baseline models, four (i.e., the Bidirectional LSTM-CRF model and the three transformer-based models) require a training process, whereas the GPT-3 model does not require training, as we directly leveraged the gpt-3.5-turbo API to conduct NER inference for address parsing. Thus, we first trained and evaluated these four baselines using the training and validation datasets to obtain their trained models; we then applied these trained models, along with the GPT-3 model, to the test dataset to compare their performance. We set the number of training epochs to 25, as the preliminary experiment indicated the evaluation loss was less than 0.001. Each training model was evaluated on the validation dataset at the end of each epoch, and its evaluation loss was recorded. This allowed us to monitor the models' learning progress and adjust the training parameters if necessary. All training and evaluation processes were conducted on Google Colaboratory with the Tesla V100 GPU.
### Results and discussion
Figure 4 presents the trajectories of training and validation loss for the baseline models throughout the entirety of the experimental processes. The roBERTa model's validation loss is initially high but decreases rapidly as training progresses. In contrast, the other models exhibit a steady validation loss throughout the entire process. Most models reach convergence around the 20-epoch mark. Notably, the DistilBERT model stands out for converging faster than the other models. Having trained these four models, we then tested them alongside the GPT-3 model using the same test dataset detailed in Section 3.1. The evaluation results of the five evaluated baseline models are presented in Table 4, illustrating their effectiveness in recognizing and extracting individual address elements and their overall performance.
Across all address components, the Bidirectional LSTM-CRF model consistently demonstrates superior or comparable performance to the other models. For instance, in identifying the house number, this model achieved the highest F1 score of 0.99977, marginally surpassing the performance of roBERTa (0.99976) and BERT (0.99963). Its superiority is also evident in parsing the state and postal code components, where it yielded an F1 score of 0.99993 and a perfect score of 1.00000, respectively. The BERT model exhibits robust performance across all tasks, with its performance closely trailing that of the Bidirectional LSTM-CRF model. It performed particularly well in identifying the house number and postal code, with F1 scores of 0.99963 and 1.00000, respectively. Notably, the roBERTa model, while generally performing well, exhibited a slight drop in performance when parsing the postdirectional component, with an F1 score of 0.94003; this is significantly lower than the scores achieved by the other models for this task. On the other hand, the DistilBERT model's performance was consistently high across all tasks, with its lowest F1 score being 0.96771 for the postdirectional component. Its performance was particularly strong in parsing the house number and postal code, achieving F1 scores of 0.99970 and 1.00000, respectively. The GPT-3 model, however, displayed a relatively lower performance compared to the other models. While it performed reasonably well in parsing the house number, state, and postal code with F1 scores of 0.98810, 0.97505, and 0.97851, respectively, it struggled significantly with the postdirectional component, achieving an F1 score of 0.42917, which is markedly lower than the scores of the other models. The Bidirectional LSTM-CRF model also has the highest Parsing Score, indicating that it not only performs well in parsing each address component but also excels in parsing the components that carry the most weight in the geocoding process.
The Bidirectional LSTM-CRF model consistently outperforms or matches the other models across all address components. This superior performance could be attributed to the inherent strengths of this model: the Bidirectional LSTM-CRF model combines the advantages of both the bidirectional LSTM and conditional random fields, which allows the model to capture context from both past and future input while the CRF makes the most of the sentence-level tag information, making it a powerful model for sequence labeling tasks such as NER. The BERT model and its variants, while performing robustly across all tasks, fall slightly behind the Bidirectional LSTM-CRF model in terms of performance. This could be due to the fact that, while BERT is a powerful model, it is pre-trained on masked language modeling and next-sentence prediction tasks, which may not be perfectly aligned with the NER task in address parsing. On the other hand, since these pre-trained models have been trained on large amounts of data, their performance could potentially be improved with hyperparameter tuning to optimize them for the specific task of address parsing. This could involve adjusting parameters like the learning rate and batch size, adding or removing layers, or changing the number of hidden units, among other things. As a generative model, GPT-3 demonstrates a lower performance compared to the others. However, its strong performance in identifying house numbers and postal codes suggests that it is still a valuable tool for these tasks. One of the potential reasons for the lower overall performance is that GPT-3 generates output based on the context provided by the prompt; therefore, the way the prompt is framed can significantly affect the model's performance. The other reason could be that the selected few-shot learning examples used by the GPT-3 model were completely error-free. It would be interesting to compare the impact of different few-shot learning examples on the GPT-3 model's
performance. It's worth noting that the hyperparameters of evaluated models come from common settings used by other studies, as the main scope of this paper is to provide a solid foundation to facilitate future model evaluations. Fine-tuning hyperparameters for each model to find out their best performance can be one direction of future work.
## 5. Conclusion and Future Work
In this work, we introduce a benchmark consisting of benchmark datasets and evaluation metrics to assess the performance of the GPT-3 model in geocoding address parsing and compare with three transformer-based models and one LSTM-based model. We create a benchmark dataset capturing 21 input errors/variations observed in real user input logs, and this dataset also contains the unique address formatting across the U.S. (i.e., 50 states and D.C). This helps to address the demand for a 'gold standard' evaluation dataset in geocoding and further guarantees that evaluation results closely reflect their performance in real-world scenarios. Our findings reveal that the Bidirectional LSTM-CRF model slightly outperforms the transformer-based models. Though the GPT-3 model's performance lags behind the other evaluated models, it shows encouraging results in address parsing using few-shot examples, suggesting room for improvement with additional fine-tuning. We aim this work to serve as a solid baseline for future development and experimental comparisons in similar geographic information retrieval-related tasks.
Future work includes (1) enhancing the evaluation benchmark dataset by capturing more input errors/variations, (2) fine-tuning the models in an attempt to achieve SOTA performance on the given dataset and comparing them to traditional models (e.g., CRF and HMM models), and (3) extending this benchmark to evaluate address parsing or geocoding systems in other countries, given the heterogeneity of languages and address systems across countries.
|
2303.14258 | Optimal Measures for Multivariate Geometric Potentials | We study measures and point configurations optimizing energies based on
multivariate potentials. The emphasis is put on potentials defined by geometric
characteristics of sets of points, which serve as multi-input generalizations
of the well-known Riesz potentials for pairwise interaction. One of such
potentials is volume squared of the simplex with vertices at the $k \ge 3$
given points: we show that the arising energy is maximized by balanced
isotropic measures, in contrast to the classical two-input energy. These
results are used to obtain interesting geometric optimality properties of the
regular simplex. As the main machinery, we adapt the semidefinite programming
method to this context and establish relevant versions of the $k$-point bounds. | Dmitriy Bilyk, Damir Ferizović, Alexey Glazyrin, Ryan W. Matzke, Josiah Park, Oleksandr Vlasiuk | 2023-03-24T19:52:57Z | http://arxiv.org/abs/2303.14258v1 | # Optimal measures for multivariate geometric potentials
###### Abstract.
We study measures and point configurations optimizing energies based on multivariate potentials. The emphasis is put on potentials defined by geometric characteristics of sets of points, which serve as multi-input generalizations of the well-known Riesz potentials for pairwise interaction. One of such potentials is volume squared of the simplex with vertices at the \(k\geq 3\) given points: we show that the arising energy is maximized by balanced isotropic measures, in contrast to the classical two-input energy. These results are used to obtain interesting geometric optimality properties of the regular simplex. As the main machinery, we adapt the semidefinite programming method to this context and establish relevant versions of the \(k\)-point bounds.
Key words and phrases: Potential energy minimization, optimal measures, random polytopes, spherical codes, tight frames, isotropic measures.
## 1. Introduction
The _weak_-strong (strong)
the uniform distribution \(\sigma\) optimal?
The case \(s=2\) appears to be more manageable than others, since, as mentioned above, both \(V^{2}\) and \(A^{2}\) can be expressed as polynomials. In fact, it has already been shown by Cahill and Casazza [1] (see also Theorem 5.1 below) that \(I_{V^{2}}\) is maximized by isotropic measures on the sphere (see (2.2) for the definition), a class which includes \(\sigma\). Based on this result, we show that \(I_{V^{s}}\) is maximized by the discrete measure uniformly distributed over the vertices of an orthonormal basis when \(s>2\) (Corollary 5.2). Other main results of the present paper concerning multivariate geometric potentials include:
* The maximizers of the energy \(I_{A^{2}}(\mu)\) on the sphere \(\mathbb{S}^{d-1}\) are exactly the balanced isotropic measures (which includes the uniform surface measure \(\sigma\), see Section 2 for the relevant definitions). This is proved in Theorem 7.1 in full generality (for all \(d\geq 2\) and \(3\leq k\leq d+1\)), but different proofs of partial cases are also given in Theorem 4.3 (the case of \(k=3\) inputs, i.e. area squared of a triangle) and Theorem 5.3 (\(k=d+1\) inputs in dimension \(d\geq 2\), i.e. a full-dimensional simplex; this theorem also applies to measures on \(\mathbb{R}^{d}\) with unit second moment).
* When \(k=d+1\) and \(s>2\), the energy \(I_{A^{s}}(\mu)\) is maximized by the discrete measure uniformly distributed over the vertices of a regular \(d\)-dimensional simplex (Corollary 5.4).
* For \(0<s\leq 2\), the discrete energies \(E_{V^{s}}\) and \(E_{A^{s}}\) with \(N=d+1\) points are maximized by the vertices of a regular \(d\)-dimensional simplex, see Corollary 3.2. As a corollary, a regular \(d\)-dimensional simplex maximizes the sum of volumes of \(j\)-dimensional faces (\(1\leq j\leq d\)) among all simplices of a given circumradius (Corollary 3.3).
For more precise technical statements of these results the reader is directed to the theorems and corollaries referenced above.
The case \(s=2\) is also special due to the fact that in the classical two-input setting this is exactly the phase transition for the Riesz energy on the sphere, which is maximized uniquely by the uniform surface measure \(\sigma\) for \(0<s<2\) and by discrete measures for \(s>2\)[2] (see also Proposition 2.9 below). Some of our main results suggest that similar behavior persists in the multivariate case, although the case \(0<s<2\) (including the very natural \(s=1\)) remains out of reach. We conjecture that the uniform surface measure \(\sigma\) maximizes both \(I_{A^{s}}(\mu)\) and \(I_{V^{s}}(\mu)\) when \(0<s<2\).
The main machinery for our optimization results is a variant of the semidefinite programming method. We adapt the method developed by Bachoc and Vallentin for finding three-point packing bounds for spherical codes [4]. Three-point bounds were also applied to energy optimization for pair potentials in [4] and for multivariate \(p\)-frame energy in [10]. The approach of Bachoc and Vallentin later was generalized by Musin [11] who established the \(k\)-point version of packing bounds. This method is actively utilized for solving packing/energy problems (see, e.g. [12]) but its applicability is typically limited due to complexity of actual semidefinite programs. Our paper seems to be the first one where general \(k\)-point bounds are explicitly used for all positive integer \(k\).
The paper is organized as follows. Section 2 describes the relevant background, definitions, notation, and covers the two-input case of the energies. Section 3 presents the applications of our main results to some geometric optimality properties of the regular simplex. In Section 4 we discuss the semidefinite programming approach of [4] and demonstrate how it leads to optimization results for 3-input energies with geometric kernels. Section 5 shows how the known results about \(I_{V^{2}}\)[1] can be used to obtain partial results for \(I_{A^{2}}\), as well as the discreteness of maximizers for \(I_{V^{s}}\) and \(I_{A^{s}}\) with \(s>2\). In Section 6 we provide a self-contained description of \(k\)-point semidefinite bounds for the sphere and give a general construction of \(k\)-positive definite multivariate functions based on these bounds. Finally, in the main result of Section 7, we use multivariate functions from Section 6 to prove that the energy \(I_{A^{2}}\) based on the squared volume of a simplex is maximized by balanced isotropic measures on the sphere. In the Appendix (Section 9) we give an explicit expression for the potential \(A^{2}\).
## 2. Background and notation
The notation in this paper generally follows [3]. Most of the optimization problems, with a few exceptions, will be formulated for measures or finite configurations of points on the unit Euclidean sphere \(\mathbb{S}^{d-1}\). Often the potentials will be invariant under the action changing an argument to its opposite. Essentially, this means that the underlying space is the real projective space \(\mathbb{RP}^{d-1}\), but we will still formulate our results in terms of the unit sphere.
In what follows, the domain \(\Omega\) is either the sphere \(\mathbb{S}^{d-1}\) or the Euclidean space \(\mathbb{R}^{d}\). Assume \(k\in\mathbb{N}\setminus\{1\}\) is the number of inputs and the kernel \(K:\Omega^{k}\to\mathbb{R}\) is continuous. We denote by \(\mathcal{M}(\Omega)\) the set of finite signed Borel measures on \(\Omega\), and by \(\mathcal{P}(\Omega)\) the set of Borel probability measures on \(\Omega\). If \(\Omega=\mathbb{R}^{d}\), we define \(\mathcal{P}^{*}(\mathbb{R}^{d})\) to be the set of Borel probability measures \(\mu\) on \(\mathbb{R}^{d}\) satisfying
\[\int_{\mathbb{R}^{d}}\|x\|^{2}d\mu(x)=1. \tag{2.1}\]
Observe that, by a slight abuse of notation, \(\mathcal{P}(\mathbb{S}^{d-1})\subset\mathcal{P}^{*}(\mathbb{R}^{d})\).
Let \(\omega_{N}=\{z_{1},z_{2},\ldots,z_{N}\}\) be an \(N\)-point configuration (multiset) in \(\Omega\), for \(N\geq k\). Then the discrete \(K\)-energy of \(\omega_{N}\) is defined to be
\[E_{K}(\omega_{N}):=\frac{1}{N^{k}}\sum_{j_{1}=1}^{N}\cdots\sum_{j_{k}=1}^{N}K (z_{j_{1}},\ldots,z_{j_{k}}).\]
Similarly, we define the energy integral for measures on \(\Omega\): for \(\mu\in\mathcal{M}(\Omega)\),
\[I_{K}(\mu)=\int_{\Omega}\cdots\int_{\Omega}K(x_{1},\ldots,x_{k})\,d\mu(x_{1}) \cdots d\mu(x_{k}),\]
when absolutely convergent, as will be the case in all of the contexts considered below. In the present paper we shall be interested in finding probability measures (\(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\) or \(\mu\in\mathcal{P}^{*}(\mathbb{R}^{d})\)) which optimize (in most cases, maximize) the energy integrals \(I_{K}\).
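As a concrete illustration of these definitions (a small numerical sketch, not needed for any of the arguments below; the helper names and parameter choices are ad hoc), the following snippet evaluates the discrete \(k\)-input energy \(E_{K}\) for the squared-volume kernel \(K=V^{2}\), i.e. the determinant of the Gram matrix of the \(k\) inputs, on a random spherical configuration.

```python
import numpy as np
from itertools import product

def V2(*points):
    # squared k-dimensional volume of the parallelepiped spanned by the points:
    # the determinant of their Gram matrix
    X = np.stack(points)
    return np.linalg.det(X @ X.T)

def discrete_energy(K, points, k):
    # E_K(omega_N) = N^{-k} * sum of K over all k-tuples (repetitions allowed)
    N = len(points)
    return sum(K(*[points[i] for i in idx])
               for idx in product(range(N), repeat=k)) / N**k

rng = np.random.default_rng(0)
d, N, k = 3, 6, 3
omega = rng.standard_normal((N, d))
omega /= np.linalg.norm(omega, axis=1, keepdims=True)   # project onto S^{d-1}
print(discrete_energy(V2, list(omega), k))
```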
### Isotropic measures and frame energy
The _\(p\)-frame potential_ is defined as \(|\langle x,y\rangle|^{p}\). The notion of the \(2\)-frame potential, or simply _frame potential_, was introduced by Benedetto and Fickus [1], and later generalized to \(p\in(0,\infty)\) by Ehler and Okoudjou [1]. Minimization of the frame energy is well understood: the following lemma is usually stated for \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\), see e.g. Theorem 4.10 in [1], but the extension to \(\mathcal{P}^{*}(\mathbb{R}^{d})\) is straightforward (see also Remark 1 below).
**Lemma 2.1**.: _For any \(\mu\in\mathcal{P}^{*}(\mathbb{R}^{d})\), and hence also any \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\),_
\[\int_{\Omega}\int_{\Omega}\langle x,y\rangle^{2}\,d\mu(x)d\mu(y)\geq\frac{1}{d}.\]
It is easy to see that the equality in the estimate above is achieved precisely for the measures which satisfy
\[\int_{\Omega}xx^{T}\,d\mu(x)=\frac{1}{d}I_{d}, \tag{2.2}\]
where \(I_{d}\) is the \(d\times d\) identity matrix. It will be convenient for us to use this condition in the following form: for any \(y\in\mathbb{S}^{d-1}\),
\[\int_{\Omega}\langle x,y\rangle^{2}\,d\mu(x)=\frac{1}{d}. \tag{2.3}\]
Measures which satisfy (2.2) or, equivalently, (2.3), are called _isotropic_. We note that \(\operatorname{Tr}(xx^{T})=\|x\|^{2}\) and (2.2) implies \(\int_{\Omega}\|x\|^{2}d\mu(x)=1\) so, as a matter of fact, all isotropic measures on \(\mathbb{R}^{d}\) automatically belong to \(\mathcal{P}^{*}(\mathbb{R}^{d})\).
The discrete version of Lemma 2.1 states that for \(N\geq d\) and \(\{x_{1},\ldots,x_{N}\}\subset\mathbb{S}^{d-1}\),
\[\sum_{i=1}^{N}\sum_{j=1}^{N}\langle x_{i},x_{j}\rangle^{2}\geq\frac{N^{2}}{d}.\]
Discrete sets for which this bound is sharp are known as _unit norm tight frames_, which explains the term _frame energy_. The lower bound for the discrete frame energy is a special case of bounds by Welch [W] and Sidelnikov [Si].
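The sharpness of this bound is easy to confirm numerically; the short sketch below (ours, with ad hoc helper names) computes the frame energy \(\sum_{i,j}\langle x_{i},x_{j}\rangle^{2}\) for an orthonormal basis (\(N=d\)) and for the vertices of a regular simplex (\(N=d+1\)), both of which are unit norm tight frames.

```python
import numpy as np

def frame_energy(X):
    # X: N x d matrix of unit vectors; returns sum_{i,j} <x_i, x_j>^2
    G = X @ X.T
    return np.sum(G**2)

d = 4
# Orthonormal basis: N = d points, frame energy equals N^2/d = d.
E_basis = frame_energy(np.eye(d))

# Regular simplex inscribed in S^{d-1}: N = d+1 points with <v_i, v_j> = -1/d for i != j.
# Center the standard basis of R^{d+1} and renormalize; the frame energy depends
# only on the Gram matrix, so explicit d-dimensional coordinates are not needed.
V = np.eye(d + 1) - np.full((d + 1, d + 1), 1.0 / (d + 1))
V /= np.linalg.norm(V, axis=1, keepdims=True)
E_simplex = frame_energy(V)

print(E_basis, d**2 / d)            # both equal d
print(E_simplex, (d + 1)**2 / d)    # both equal (d+1)^2/d
```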
There is a natural projection \(\pi:\mathcal{P}^{*}(\mathbb{R}^{d})\to\mathcal{P}(\mathbb{S}^{d-1})\) that maps isotropic measures in \(\mathbb{R}^{d}\) onto isotropic measures in \(\mathbb{S}^{d-1}\). First, we define the projection \(\pi_{0}:\mathbb{R}^{d}\setminus\{0\}\to\mathbb{S}^{d-1}\) by \(\pi_{0}(x)=x/\|x\|\). Now for any \(\mu\in\mathcal{P}^{*}(\mathbb{R}^{d})\), we define \(\mu^{*}=\pi(\mu)\) as the pushforward measure \((\pi_{0})_{\#}\|x\|^{2}\,d\mu(x)\), that is: for any Borel subset \(B\) of \(\mathbb{S}^{d-1}\), we set
\[\mu^{*}(B)=\int_{\pi_{0}^{-1}(B)}\|x\|^{2}\,d\mu(x).\]
Clearly, \(\mu^{*}\) is a Borel probability measure on \(\mathbb{S}^{d-1}\). Checking (2.3), we can also see that for an isotropic \(\mu\), \(\pi(\mu)\) is isotropic too.
**Remark 1**.: For potentials \(K\) that are homogeneous of degree \(2\) in each variable, the energy \(I_{K}(\mu)\) is invariant under the projection \(\pi\). The kernel \(V^{2}\) is such a function, since it is the determinant of the Gram matrix of \(\{x_{1},\dots,x_{N}\}\). This property is also satisfied by the frame potential \(K(x,y)=\langle x,y\rangle^{2}\). In such cases, it is sufficient to find optimizers for probability measures on the sphere in order to solve an optimization problem in \(\mathcal{P}^{*}(\mathbb{R}^{d})\).
We call a measure \(\mu\)_balanced_ if \(\int_{\Omega}x\,d\mu(x)=0\), i.e. the center of mass is at the origin. Balanced isotropic measures can be used to construct isotropic measures in higher dimensions, as will be seen in the proof of Theorem 5.3.
### Linear programming and positive definite kernels
The linear programming method, developed for the spherical case in [DGS], appeared to be successful in finding optimizing measures and point configurations as well as in giving lower bounds for two-point interaction energies (see, e.g., [BGMPV, CK, Y]). Here we briefly describe how it works. In Sections 4 and 6, we explain in more detail how the method is extended to semidefinite bounds for \(k\)-point energies.
A symmetric kernel \(K:(\mathbb{S}^{d-1})^{2}\to\mathbb{R}\) is called _positive definite_ if for every \(\nu\in\mathcal{M}(\mathbb{S}^{d-1})\), the energy integral satisfies \(I_{K}(\nu)\geq 0\). A classical theorem of Schoenberg described positive definite kernels via Gegenbauer polynomials [Sc]. The Gegenbauer polynomials \(P_{m}^{d}\), \(m\geq 0\), form an orthogonal basis on \([-1,1]\) with respect to the measure \((1-t^{2})^{\frac{d-3}{2}}dt\). Here, \(P_{m}^{d}\) is normalized so that \(P_{m}^{d}(1)=1\). All continuous functions on \([-1,1]\) can be expanded like so:
\[f(t)=\sum_{m=0}^{\infty}\hat{f}_{m}P_{m}^{d}(t), \tag{2.4}\]
where the sum converges uniformly and absolutely if \(K(x,y)=f(\langle x,y\rangle)\) is positive definite on \(\mathbb{S}^{d-1}\) (due to Mercer's Theorem). Rotationally-invariant positive definite kernels on the sphere are exactly characterized by the positivity of their Gegenbauer coefficients.
**Theorem 2.2** (Schoenberg [Sc]).: _The kernel \(K(x,y)=f(\langle x,y\rangle)\) is positive definite on \(\mathbb{S}^{d-1}\) if and only if all coefficients \(\hat{f}_{m}\) of the Gegenbauer expansion (2.4) are non-negative._
More background on Gegenbauer polynomials, energy, and positive definite kernels on the sphere can be found in [AH, BHS].
If one can bound a given function \(f\) from below by a positive definite (modulo a constant) function \(h\), usually a polynomial, then the linear programming bounds on the energy of \(f\) are then essentially consequences of the inequalities
\[\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}P_{m}^{d}(\langle x,y\rangle)\,d \mu(x)d\mu(y)\geq 0.\]
For example, \(P_{2}^{d}(t)=\frac{dt^{2}-1}{d-1}\) and the inequality above immediately implies the lower bound in Lemma 2.1.
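As a quick numerical sanity check of the normalization used here (an informal sketch; the quadrature grid is ad hoc), one can verify that \(P_{1}^{d}(t)=t\) and \(P_{2}^{d}(t)=\frac{dt^{2}-1}{d-1}\) are orthogonal to each other and to constants with respect to the weight \((1-t^{2})^{\frac{d-3}{2}}\), and satisfy \(P_{m}^{d}(1)=1\).

```python
import numpy as np

d = 5
t = np.linspace(-1.0, 1.0, 20001)
w = (1.0 - t**2) ** ((d - 3) / 2.0)          # Gegenbauer weight on [-1, 1]
dt = t[1] - t[0]

P0 = np.ones_like(t)
P1 = t
P2 = (d * t**2 - 1.0) / (d - 1.0)

def inner(f, g):
    # crude numerical <f, g> with respect to the weight w (Riemann sum)
    return np.sum(f * g * w) * dt

print(inner(P0, P1), inner(P0, P2), inner(P1, P2))   # all approximately 0
print(P1[-1], P2[-1])                                 # both equal 1
```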
### \(k\)-positive definite kernels
As an extension of the notion of positive definite kernels to the multivariate case, we define _\(k\)-positive definite kernels_. Let \(K:(\mathbb{S}^{d-1})^{k}\to\mathbb{R}\) be continuous and symmetric in the first two variables. We define the _potential function of \(K\) for fixed \(z_{3},\ldots,z_{k}\)_ as
\[U_{K}^{z_{3},\ldots,z_{k}}(x,y):=K(x,y,z_{3},\ldots,z_{k}). \tag{2.5}\]
We call \(K\)\(k\)-positive definite if for any \(z_{3},\ldots,z_{k}\in\mathbb{S}^{d-1}\), the potential function \(U_{K}^{z_{3},\ldots,z_{k}}(x,y)\) is positive definite as a function of \(x\) and \(y\). For kernels symmetric in all variables, this definition is the same as the one given in [3]. A kernel \(Y\) is \(k\)-negative definite if \(-Y\) is \(k\)-positive definite. In Section 6 we provide a self-contained construction of large classes of \(k\)-positive definite kernels for \(\mathbb{S}^{d-1}\). Here we collect some general results about positive definiteness and energy minimization for multivariate kernels.
**Lemma 2.3**.: _Suppose that \(K_{1},K_{2},\ldots\) are \(k\)-positive definite. Then \(K_{1}+K_{2}\) and \(K_{1}K_{2}\) are \(k\)-positive definite. If the sequence of \(K_{j}\)'s converges (uniformly in the first two variables and pointwise in the others) to a kernel \(K\), then \(K\) is also \(k\)-positive definite._
This result follows immediately from the same results for two-input kernels. Similarly, we have the following:
**Lemma 2.4**.: _Suppose that \(K_{1},K_{2},\ldots\) are kernels such that each \(I_{K_{j}}\) is minimized by some probability measure \(\mu\). Then \(I_{K_{1}+K_{2}}\) is also. If the sequence of \(K_{j}\)'s converges (uniformly in the first two variables and pointwise in the others) to a kernel \(K\), then \(I_{K}\) is also minimized by \(\mu\)._
As in the two-input case, multiplication does not generally preserve the minimizers of energies.
**Proposition 2.5**.: _Suppose that \(Y\) is a \(k\)-positive definite kernel on \(\mathbb{S}^{d-1}\) and \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\) with \(I_{Y}(\mu)=0\). Then \(\mu\) is a minimizer of \(I_{Y}\)._
Proof.: Let \(\nu\in\mathcal{P}(\mathbb{S}^{d-1})\). Then, since \(Y\) is \(k\)-positive definite
\[I_{Y}(\nu)\geq\min_{z_{3},\ldots,z_{k}\in\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d -1}}\int_{\mathbb{S}^{d-1}}Y(x,y,z_{3},\ldots,z_{n})\,d\nu(x)d\nu(y)\geq 0=I_{Y}( \mu).\]
We can create multivariate kernels from kernels with fewer inputs in a natural way that preserves minimizers of the energy.
**Lemma 2.6**.: _For some kernel \(Y:(\mathbb{S}^{d-1})^{k}\to\mathbb{R}\) and \(n>k\), let_
\[K(x_{1},x_{2},\ldots,x_{n})=\frac{1}{|S|}\sum_{\pi\in S}Y(x_{1},x_{2},x_{\pi(3 )},\ldots,x_{\pi(k)}), \tag{2.6}\]
_where \(S\) is a nonempty set of permutations of the set \(\{3,\ldots,n\}\). Then \(I_{K}\) is minimized by \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\) if and only if \(I_{Y}\) is as well. In addition, if \(Y\) is \(k\)-positive definite, then \(K\) is \(n\)-positive definite._
Note that if \(S\) is the set of all such permutations, then \(K\) is symmetric in the last \(n-2\) variables.
Proof.: For any \(\nu\in\mathcal{M}(\mathbb{S}^{d-1})\), we see that
\[I_{K}(\nu)=I_{Y}(\nu),\]
meaning their minimizers must be the same, and for any \(z_{3},\ldots,z_{n}\in\mathbb{S}^{d-1}\),
\[\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}K(x,y,z_{3},\ldots,z_{n})\,d\nu(x)d\nu(y)=\frac{1}{|S|}\sum_{\pi\in S}\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}Y(x,y,z_{\pi(3)},\ldots,z_{\pi(k)})\,d\nu(x)\,d\nu(y),\]
which is non-negative if \(Y\) is \(k\)-positive definite.
**Proposition 2.7**.: _For some kernel \(Y:(\mathbb{S}^{d-1})^{k}\to\mathbb{R}\) let \(K\) be defined by_
\[K(x_{1},\ldots,x_{k})=\frac{1}{k!}\sum_{\pi}Y(x_{\pi(1)},\ldots,x_{\pi(k)}),\]
_where \(\pi\) varies over all permutations of \(\{1,\ldots,k\}\). Then \(K\) is a symmetric kernel, and \(I_{K}\) is minimized by \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\) if and only if \(I_{Y}\) is as well._
The proof is identical to that of Lemma 2.6. We note that, unlike in Lemma 2.6, \(k\)-positive definiteness of \(Y\) does not imply that \(K\) is also \(k\)-positive definite. In fact, in the three-input case, \(-V^{2}\) and \(-A^{2}\) in this paper are examples of symmetric kernels that are not \(3\)-positive definite [1, Propositions 6.9 and 6.10] but are the symmetrizations of \(3\)-positive definite kernels (modulo a constant), see (4.3) and (4.5).
We finally remark that the discussion of this section generalizes to arbitrary compact metric spaces in place of the sphere \(\mathbb{S}^{d-1}\).
### Two-input volumes
Here, we address the two-input versions of \(V^{2}\) and \(A^{2}\) on the sphere.
**Proposition 2.8**.: _Let \(k=2\). On the sphere \(\mathbb{S}^{d-1}\), \(\sigma\) is a maximizer of the two-input energies \(I_{A^{s}}\) and \(I_{V^{s}}\) for \(0<s<2\). Moreover, in the case of \(A^{s}\), \(\sigma\) is the unique maximizer._
Proof.: It is well known, see e.g. [1, Proposition 2.3], that \(\sigma\) is a minimizer of \(I_{K}\), where \(K(x,y)=f(\langle x,y\rangle)\), if and only if for the Gegenbauer expansion \(f(t)=\sum_{m=0}^{\infty}\hat{f}_{m}P^{d}_{m}(t)\), \(\hat{f}_{m}\geq 0\) for all \(m\geq 1\) (which, according to Theorem 2.2, is equivalent to the fact that \(f\) is positive definite on \(\mathbb{S}^{d-1}\) modulo an additive constant). Moreover, \(\sigma\) is the unique minimizer if \(\hat{f}_{m}>0\) for all \(m\geq 1\).
We note that it is sufficient to use a weaker condition based on Maclaurin expansions of \(f\). Assume \(f(t)=\sum_{m=0}^{\infty}f_{m}^{*}t^{m}\) for \(t\in[-1,1]\), with the series converging uniformly and absolutely. Each function \(t^{m}\) is positive definite on \(\mathbb{S}^{d-1}\) by Schur's product theorem, and, by Theorem 2.2, it can be represented as a non-negative combination of Gegenbauer polynomials with, in particular, a positive coefficient for \(P^{d}_{m}(t)\). This means that, whenever all \(f_{m}^{*}\) are non-negative (positive) for \(m\geq 1\), then all Gegenbauer coefficients \(\hat{f}_{m}\) for \(m\geq 1\) are also non-negative (positive). We need to show that \(\sigma\) is a maximizer, so it is sufficient to check that all coefficients, starting from \(m=1\), of the Maclaurin expansions of \(V^{s}\) and \(A^{s}\) are nonpositive.
Indeed,
\[V^{s}(x,y)=(V^{2})^{s/2}=\big{(}1-\langle x,y\rangle^{2}\big{)}^{s/2}=\sum_{m =0}^{\infty}(-1)^{m}\binom{s/2}{m}\langle x,y\rangle^{2m}.\]
Similarly,
\[A^{s}(x,y)=(A^{2})^{s/2}=(2-2\langle x,y\rangle)^{s/2}=2^{s/2}(1-\langle x,y \rangle)^{s/2}=2^{s/2}\sum_{m=0}^{\infty}(-1)^{m}\binom{s/2}{m}\langle x,y \rangle^{m}.\]
In both cases, \((-1)^{m}\binom{s/2}{m}\) is negative for all \(m\geq 1\) so \(\sigma\) is a maximizer for \(V^{s}\) and the unique maximizer for \(A^{s}\).
**Remark 2**.: Since \(V\) is invariant under central symmetry, it would be natural to consider it as a potential on the projective space \(\mathbb{RP}^{d-1}\). Under this setup the uniform distribution over \(\mathbb{RP}^{d-1}\) is the unique maximizer of \(I_{V^{s}}\).
Since \(A(x,y)=\|x-y\|\), the statements about \(A^{s}\) in Proposition 2.8 can be viewed as a special case of a more general result of Bjorck [1]. Below we collect his results specialized to the sphere.
**Proposition 2.9** (Bjorck [1]).: _Let \(k=2\), i.e. \(A(x,y)=\|x-y\|\). For the two-input energy \(I_{A^{s}}\) on the sphere \(\mathbb{S}^{d-1}\),_
* _if_ \(0<s<2\)_, then_ \(\sigma\) _is the unique maximizer of_ \(I_{A^{s}}\)_;_
* _if_ \(s=2\)_, then_ \(\mu\) _is a maximizer of_ \(I_{A^{s}}\) _if and only if_ \(\mu\) _is balanced;_
* _if_ \(s>2\)_, then the maximizers of_ \(I_{A^{s}}\) _are exactly measures of the form_ \(\frac{1}{2}(\delta_{p}+\delta_{-p})\)_, for some_ \(p\in\mathbb{S}^{d-1}\)_._
A similar proposition about the minimizers over \(\mathcal{P}(\mathbb{S}^{d-1})\) can be formulated for powers of \(V\) in the two-input case.
**Proposition 2.10**.: _Let \(k=2\), i.e. \(V(x,y)=\big{(}1-\langle x,y\rangle^{2}\big{)}^{1/2}\). For the two-input energy \(I_{V^{s}}\) on the sphere \(\mathbb{S}^{d-1}\),_
* _if_ \(0<s<2\)_, then_ \(\sigma\) _is a maximizer of_ \(I_{V^{s}}\)_;_
* _if_ \(s=2\)_, then_ \(\mu\) _is a maximizer of_ \(I_{V^{s}}\) _if and only if_ \(\mu\) _is isotropic;_
* _if_ \(s>2\)_, then the only maximizers (up to central symmetry and rotation) of_ \(I_{V^{s}}\) _are uniform measures on the elements of an orthonormal basis of_ \(\mathbb{R}^{d}\)_, i.e. measures of the form_ \(\frac{1}{d}\sum_{i=1}^{d}\delta_{e_{i}}\)_, where_ \(\{e_{i}\}_{i=1}^{d}\) _is an orthonormal basis of_ \(\mathbb{R}^{d}\)_._
The case \(0<s<2\) is covered in Proposition 2.8 above. The phase transition case \(s=2\) follows from the case of equality in Lemma 2.1. The case \(s>2\) can be easily handled by the linear programming method, but we give the proof of a more general statement for all \(2\leq k\leq d\) in Corollary 5.2. Exposition on the logarithmic and singular energies (\(s<0\)) can be found in [BHS] (and the references therein) for \(A^{s}\) and [CHS] for \(V^{s}\).
### Comparison of two-input and multi-input energies
The multi-input, i.e. \(k\geq 3\), generalizations of Propositions 2.9 and 2.10, which are naturally more complicated, require different methods and form the main purpose of this paper. As stated in the introduction, we believe that the uniform measure \(\sigma\) still maximizes both \(I_{A^{s}}\) and \(I_{V^{s}}\) in the range \(0<s<2\) for \(k\geq 3\), but this remains a conjecture.
When \(s=2\) and \(k\geq 3\), maximizers of \(I_{V^{2}}\) are, as in Proposition 2.10, exactly the isotropic measures on \(\mathbb{S}^{d-1}\)[CC] (see Theorem 5.1). However, we shall show (see Theorem 7.1, as well as Theorems 4.3 and 5.3) that the maximizers of \(I_{A^{2}}\) for \(3\leq k\leq d+1\) are exactly _balanced isotropic measures_ (and not just balanced as in Proposition 2.9 for \(k=2\)).
The case \(s>2\) of Proposition 2.10 for \(I_{V^{s}}\) still holds for all \(2\leq k\leq d\) (Corollary 5.2). However, we are only able to prove an analogue of this case for \(A^{s}\) when \(k=d+1\) (Corollary 5.4): the uniform measure on the vertices of a regular simplex replaces the two poles as the unique (up to rotations) maximizer of \(I_{A^{s}}\) for \(s>2\). We conjecture that maximizers of \(I_{A^{s}}\) with \(s>2\) are discrete for all \(3\leq k\leq d+1\), but their exact structure remains elusive (see end of Section 7).
This discussion shows that in the multi-input case \(k\geq 3\) the behavior of \(A^{s}\) is significantly more complicated than that of \(V^{s}\), which is evidenced already by the fact that the polynomial representation of \(A^{2}\) (Lemma 9.1) is more involved than that of \(V^{2}\).
## 3. Discrete Energies and Optimality of the Regular Simplex
Before presenting the study of maximizers of continuous energies with kernels \(V^{s}\) and \(A^{s}\), we discuss their discrete analogues with \(N=d+1\) points, and find that the regular simplex is a maximizer for \(0<s<2\). Consequently, we discover a new geometrically optimal property of the regular simplex. These statements use the results from Sections 5 and 7 about continuous \(k\)-point energies as a tool. We chose to open with these discrete results since, in our opinion, they yield particularly elegant applications of the theory. We start with a general statement:
**Theorem 3.1**.: _Let \(2\leq k\leq d+1\) and \(B:(\mathbb{S}^{d-1})^{k}\to[0,\infty)\) be a polynomial kernel of degree at most two in each variable, such that \(\sigma\) maximizes \(I_{B}\) and whenever \(x_{i}=x_{j}\) for some \(i\neq j\), then \(B(x_{1},x_{2},\ldots,x_{k})=0\)._
_Let \(f:[0,\infty)\to\mathbb{R}\) be concave, increasing, and such that \(f(0)=0\), and define the kernel \(K(x_{1},\ldots,x_{k})=f(B(x_{1},\ldots,x_{k}))\). If \(N=d+1\), then the set of vertices of a regular \((N-1)\)-simplex inscribed in \(\mathbb{S}^{d-1}\) maximizes the discrete energy \(E_{K}(\omega_{N})\) over all \(N\)-point configurations on the sphere._
_Moreover, if \(f\) is strictly concave and strictly increasing, then the vertices of regular \((N-1)\)-simplices are the only maximizers of the energy (if \(B\) doesn't contain terms which are linear in some of the variables, the uniqueness is up to changing any individual vertex \(x\) to its opposite \(-x\))._
Proof.: Let \(\omega_{N}=\{z_{1},\ldots,z_{N}\}\) be an arbitrary point configuration on \(\mathbb{S}^{d-1}\). Since \(B\) is zero if two of its inputs are the same, we can restrict the sum to \(k\)-tuples with distinct entries. Combining this with the
fact that \(f\) is increasing and concave, using Jensen's inequality, we have
\[E_{K}(\omega_{N}):=\frac{1}{N^{k}}\sum_{z_{1},\ldots,z_{k}\in\omega_{N}}f(B(z_{1},\ldots,z_{k}))\] \[\leq\frac{N(N-1)\cdots(N-k+1)}{N^{k}}f\left(\sum_{\begin{subarray}{c}z_{j_{1}},\ldots,z_{j_{k}}\in\omega_{N}\\ j_{1},\ldots,j_{k}\text{ distinct}\end{subarray}}\frac{B(z_{j_{1}},\ldots,z_{j_{k}})}{N(N-1)\cdots(N-k+1)}\right)\] \[=\frac{N(N-1)\cdots(N-k+1)}{N^{k}}f\left(\frac{N^{k}E_{B}(\omega_{N})}{N(N-1)\cdots(N-k+1)}\right)\] \[\leq\frac{N(N-1)\cdots(N-k+1)}{N^{k}}f\left(\frac{N^{k}I_{B}(\sigma)}{N(N-1)\cdots(N-k+1)}\right).\]
The first inequality becomes an equality if
\[B(y_{1},\ldots,y_{k})=\frac{N^{k}E_{B}(\omega_{N})}{N(N-1)\cdots(N-k+1)}\]
for all distinct \(y_{1},\ldots,y_{k}\in\omega_{N}\), while the second becomes an equality if the point configuration is a spherical 2-design, in particular, if \(\omega_{N}\) is a regular simplex. The case of uniqueness is similar.
This generalizes some known results for \(B(x,y)=\|x-y\|^{2}\)[13, 14]. Note that this proof also extends to provide an upper bound of the energy \(E_{f\circ B}(\omega_{N})\) for every \(N\geq k\), and that this upper bound is achieved whenever \(B(z_{j_{1}},...,z_{j_{k}})\) is constant for every \(k\)-tuple of distinct points, meaning that one may find additional optimizers of the energy. For instance, for \(B=V^{2}\) and \(N=d\), any orthonormal basis would be a maximizer (though this would not work for \(B=A^{2}\), since an orthonormal basis is not balanced). An upper bound of this form was given for \(E_{V}(\omega_{N})\) in [14, Corollary 5.2].
We will show in subsequent sections that \(\sigma\) maximizes the continuous energies with kernels \(V^{2}\) and \(A^{2}\) (Theorems 7.1 and 5.1), both of which are polynomials of degree two. Hence Theorem 3.1 applies, immediately yielding the following corollary:
**Corollary 3.2**.: _Assume that either \(K(x_{1},\ldots,x_{k})=V(x_{1},\ldots,x_{k})^{s}\) with \(2\leq k\leq d\), or \(K(x_{1},\ldots,x_{k})=A(x_{1},\ldots,x_{k})^{s}\) with \(2\leq k\leq d+1\)._
_Let \(0<s\leq 2\). For \(N=d+1\) points, the discrete \(k\)-input energy \(E_{K}\) on the sphere \(\mathbb{S}^{d-1}\) is uniquely (up to rotations, and up to central symmetry in the case of \(V^{2}\)) maximized by the vertices of a regular simplex in \(\mathbb{S}^{d-1}\)._
Proof.: For \(0<s<2\), the function \(f(t)=t^{s/2}\) is strictly concave and strictly increasing, so the Theorem immediately applies. The uniqueness in the case \(s=2\) needs a separate discussion. By Theorems 7.1 and 5.1, \(I_{V^{2}}\) is maximized by isotropic measures on \(\mathbb{S}^{d-1}\), and \(I_{A^{2}}\) - by balanced isotropic measures. In the discrete case, isotropic measures on \(\mathbb{S}^{d-1}\) are exactly unit norm tight frames. The only tight frames on \(\mathbb{S}^{d-1}\) with \(N=d+1\) elements (up to central symmetry and rotations) are the vertices of a regular simplex [15, Theorem 2.6].
Taking \(K=A^{s}\) in Corollary 3.2, and setting \(j=k-1\), we obtain an interesting geometric result:
**Corollary 3.3**.: _Let \(1\leq j\leq d\), \(0<s\leq 2\), \(S\) be a \(d\)-simplex inscribed in \(\mathbb{S}^{d-1}\), \(\mathcal{F}_{j}\) the set of \(j\)-dimensional faces of \(S\), and \(\operatorname{Vol}_{j}(C)\) the \(j\)-dimensional volume of a set \(C\). Then_
\[\sum_{F\in\mathcal{F}_{j}}\operatorname{Vol}_{j}(F)^{s}, \tag{3.1}\]
_achieves its maximum if and only if \(S\) is a regular simplex._
In the case \(s=1\), this generalizes the known results for \(j=1\), i.e. the sum of distances between vertices [13], \(j=d-1\), i.e. the surface area [14], and \(j=d\), i.e. the volume [13, 15, 15]. We also note that
(3.1) is a special case of the \(T\)-functional, which has received a fair amount of study, mostly in Stochastic Geometry (see e.g. [A, GKT, HoL, HMR, KMTT, KTT]).
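As a quick numerical illustration of Corollary 3.3 (our own sketch, not part of the proofs; the parameters are chosen for concreteness), the snippet below compares the sum of areas of the \(2\)-dimensional faces (\(d=3\), \(j=2\), \(s=1\)) for a regular tetrahedron inscribed in \(\mathbb{S}^{2}\) and for a random inscribed simplex.

```python
import numpy as np
from itertools import combinations

def triangle_area(p, q, r):
    # area of the triangle with vertices p, q, r in R^3
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def face_sum(points, s=1.0):
    # sum of (2-dimensional face volume)^s over all triangular faces
    return sum(triangle_area(*[points[i] for i in idx]) ** s
               for idx in combinations(range(len(points)), 3))

# regular tetrahedron inscribed in S^2
reg = np.array([[1.0, 1.0, 1.0], [1.0, -1.0, -1.0],
                [-1.0, 1.0, -1.0], [-1.0, -1.0, 1.0]]) / np.sqrt(3.0)

rng = np.random.default_rng(1)
rand = rng.standard_normal((4, 3))
rand /= np.linalg.norm(rand, axis=1, keepdims=True)

# the regular tetrahedron should give the larger value (almost surely strictly larger)
print(face_sum(reg), face_sum(rand))
```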
**Remark 3**.: We note that by adjusting the definition of the discrete energy \(E_{K}\) to only include summands where all inputs are distinct, we can study lower-semicontinuous kernels \(K:\left(\mathbb{S}^{d-1}\right)^{k}\to(-\infty,\infty]\). In this case, if we define \(f\) in the statement of Theorem 3.1 as a decreasing, convex function \(f:(0,\infty)\to\mathbb{R}\), with \(f(0)=\lim_{x\to 0^{+}}f(x)\), an identical proof shows that the vertices of a regular simplex minimize \(E_{f\circ B}\), as a generalization of [CK, Theorem 1.2]. In particular, as an extension of Corollary 3.2, this shows that regular simplices are optimal for \(-\log(A)\) and \(-\log(V)\), as well as \(A^{s}\) and \(V^{s}\) for \(s<0\).
## 4. Semidefinite Programming and Three-input Volumes
In this section we recall the basics of semidefinite programming and apply it to the maximization of integral functionals with three-input kernels. It will be shown that isotropic probability measures on \(\mathbb{S}^{d-1}\) maximize \(I_{V^{2}}\), where \(V\) is the \(3\)-volume, among all probability measures, while for the \(2\)-area energy integral \(I_{A^{2}}\), maximizers are isotropic and balanced probability measures. In Sections 5.1 and 7, these results are generalized to larger numbers of inputs and to measures on \(\mathbb{R}^{d}\).
For brevity, we denote \(u=\langle y,z\rangle\), \(v=\langle x,z\rangle\), \(t=\langle x,y\rangle\). We also take \(\sigma\), as before, to be the uniform probability measure on the sphere \(\mathbb{S}^{d-1}\). In [BV], Bachoc and Vallentin produced a class of infinite matrices and associated polynomials of the form
\[(Y^{d}_{m})_{i+1,j+1}(x,y,z):=Y^{d}_{m,i,j}(x,y,z):=P^{d+2m}_{i}(u)P^{d+2m}_{j }(v)Q^{d}_{m}(u,v,t), \tag{4.1}\]
where \(m,i,j\in\mathbb{N}_{0}\), \(P^{h}_{m}\) is the normalized Gegenbauer polynomial of degree \(m\) on \(\mathbb{S}^{h-1}\) and
\[Q^{d}_{m}(u,v,t)=((1-u^{2})(1-v^{2}))^{\frac{m}{2}}P^{d-1}_{m}\left(\frac{t-uv }{\sqrt{(1-u^{2})(1-v^{2})}}\right). \tag{4.2}\]
**Remark 4**.: Polynomials \(Y^{d}_{m,i,j}\) in [BV] were defined with certain coefficients which we omit here for the sake of simplicity.
Here we provide the upper left \(3\times 3\), \(2\times 2\), and \(1\times 1\) submatrices of infinite matrices \(Y^{d}_{0}\), \(Y^{d}_{1}\), and \(Y^{d}_{2}\), respectively, which is all that we need for the rest of this section:
\[\begin{pmatrix}1&v&\frac{dv^{2}-1}{d-1}\\ u&uv&u\frac{dv^{2}-1}{d-1}\\ \frac{du^{2}-1}{d-1}&\frac{du^{2}-1}{d-1}v&\frac{du^{2}-1}{d-1}\frac{dv^{2}-1}{d-1}\end{pmatrix}\]
\[\begin{pmatrix}t-uv&u(t-uv)\\ v(t-uv)&uv(t-uv)\end{pmatrix},\left(\frac{(d-1)(t-uv)^{2}-(1-u^{2})(1-v^{2})}{ d-2}\right).\]
By letting \(\pi\) run through the group of all permutation of the variables \(x\), \(y\), and \(z\), and averaging, they defined the following symmetric matrices and associated polynomials
\[(S^{d}_{m})_{i+1,j+1}(x,y,z):=S^{d}_{m,i,j}(x,y,z):=\frac{1}{6}\sum_{\pi}Y^{d} _{m,i,j}(\pi(x),\pi(y),\pi(z)).\]
These polynomials and matrices have a variety of nice properties:
1. For any \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\) and \(e\in\mathbb{S}^{d-1}\), \[\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}Y^{d}_{m}(x,y,e)\,d\mu(x)d\mu(y)\] and \[S^{d}_{m}(\mu):=\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}S^{d}_{m}(x,y,z)\,d\mu(x)d\mu(y)d\mu(z)\] are positive semidefinite, i.e. all principal minors (formed by finite submatrices) are nonnegative.
2. For \((m,i,j)\neq(0,0,0)\), \(I_{S^{d}_{m,i,j}}(\sigma)=0\), and for all \(e\in\mathbb{S}^{d-1}\), \(I_{Y^{d}_{m,i,j}}(\sigma,\sigma,\delta_{e})=0\).
3. For \(m\geq 1\) and \(e\in\mathbb{S}^{d-1}\), \(I_{S^{d}_{m,i,j}}(\delta_{e})=I_{Y^{d}_{m,i,j}}(\delta_{e})=0\).
We note that the paper [4] was only concerned with finite point sets. However, the results naturally extend to the continuous setting, and (1) is simply the extension of Corollary 3.5 in [4], while (2) follows from the construction of \(Y_{m,i,j}\)'s from spherical harmonics (see Theorem 3.1 and the preceding text, as well as equation (11), in [4]). Finally, (3) follows from the fact that \(Q_{m}^{d}(1,1,1)=0\).
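Property (2) can also be checked empirically; the following Monte Carlo sketch (ours, with an ad hoc sample size) estimates \(I_{S^{d}_{1,0,0}}(\sigma)\), the triple \(\sigma\)-integral of the symmetrization of \(t-uv\), and finds it consistent with zero.

```python
import numpy as np

rng = np.random.default_rng(2)
d, M = 4, 200000

def sample_sphere(n):
    x = rng.standard_normal((n, d))
    return x / np.linalg.norm(x, axis=1, keepdims=True)

x, y, z = sample_sphere(M), sample_sphere(M), sample_sphere(M)
u = np.sum(y * z, axis=1)   # <y, z>
v = np.sum(x * z, axis=1)   # <x, z>
t = np.sum(x * y, axis=1)   # <x, y>

# symmetrization of Y^d_{1,0,0} = t - uv over permutations of (x, y, z)
S_100 = ((t - u * v) + (u - v * t) + (v - u * t)) / 3.0
print(S_100.mean())   # approximately 0, consistent with I_{S^d_{1,0,0}}(sigma) = 0
```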
Now consider an infinite, symmetric, positive semidefinite matrix \(A\) with finitely many nonzero entries. Then for any \(m\geq 1\) and \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\), \(\operatorname{Tr}(S_{m}^{d}(\mu)A)\geq 0\), with equality if \(\mu=\sigma\). (Indeed, observe that, for two positive semidefinite matrices \(B=(b_{ij})\), \(C=(c_{ij})\), Schur's theorem implies that the Hadamard product \(B\circ C=(b_{ij}c_{ij})\) is positive semidefinite, which leads to the inequality \(\operatorname{Tr}(BC)=\sum_{i,j}b_{ij}c_{ij}\geq 0\).)
Likewise, let \(A_{0}\) be an infinite, symmetric, positive semidefinite matrix with finitely many nonzero entries and such that all entries in the first row and first column are zeros. Then for any probability measure \(\mu\), \(\operatorname{Tr}(S_{0}^{d}(\mu)A_{0})\geq 0\), with equality if \(\mu=\sigma\). In this case, we require zeros in the first row and column due to the fact that \(S_{0,0,0}^{d}\) is a constant, so we would not get equality for \(\sigma\) in the above inequality. This gives us the following:
**Theorem 4.1**.: _Let \(n\in\mathbb{N}_{0}\). For each \(m\leq n\), let \(A_{m}\) be an infinite, symmetric, positive semidefinite matrix with finitely many nonzero entries, with the additional requirement that \(A_{0}\) has only zeros in its first row and first column. Let_
\[K(x,y,z)=\sum_{m=0}^{n}\operatorname{Tr}(S_{m}^{d}(x,y,z)\,A_{m}).\]
_Then \(\sigma\) is a minimizer of \(I_{K}\) over probability measures on the sphere \(\mathbb{S}^{d-1}\)._
Naturally, adding a constant to \(K\) does not change this statement, and multiplying by \(-1\) turns it into a maximization result. Observe also that, when \(A\) is a diagonal matrix, \(\operatorname{Tr}(S_{m}^{d}A)\) is simply a positive linear combination of the diagonal elements of \(S_{m}^{d}\). Theorem 4.1 is often applied in this way (see the proofs of Theorems 4.2 and 4.3 below), in close analogy to Theorem 2.2.
### Volume of a parallelepiped
Maximizing the sum of distances between points on a space (or the corresponding distance integrals) is a very natural optimization problem for two-input kernels, and one which has garnered a fair amount of attention (see, e.g. [1, 2, 3, 4, 5, 6, 7, 8, 9]), and, as mentioned in the introduction, higher dimensional analogues, such as area and volume, yield natural extensions for kernels with more inputs. In this section we discuss such questions for \(k=3\) inputs, focusing on volume squared and area squared, as these produce polynomials which are easier to work with.
We first consider the kernel
\[K(x,y,z)=V^{2}(x,y,z)=\det\begin{pmatrix}1&u&v\\ u&1&t\\ v&t&1\end{pmatrix}=1-u^{2}-v^{2}-t^{2}+2uvt\]
on the sphere \(\mathbb{S}^{d-1}\), with \(d>2\), and where \(V(x,y,z)\) is the volume of the parallelepiped formed by the vectors \(x\), \(y\), and \(z\). As mentioned in [1], \(-V^{2}\) is not \(3\)-positive definite (modulo a constant) but as we show here, \(\sigma\) is a minimizer of \(I_{-V^{2}}\), i.e. a maximizer of \(I_{V^{2}}\).
Indeed, we see that
\[V^{2}(x,y,z)=\frac{(d-1)(d-2)}{d^{2}}-\frac{(d-1)(d-2)}{d^{2}}S_{0,2,2}^{d}- \frac{4(d-2)}{d}S_{1,1,1}^{d}-\frac{(3d-4)(d-2)}{d(d-1)}S_{2,0,0}^{d}, \tag{4.3}\]
so Theorem 4.1 tells us that \(\sigma\) is a maximizer. Moreover, since \(V^{2}\) is a polynomial of degree two in every variable, and has no linear terms, any isotropic measure on the sphere is also a maximizer, and in fact this classifies all maximizers.
**Theorem 4.2**.: _Isotropic probability measures on the sphere maximize \(I_{V^{2}}\) over \(\mathcal{P}(\mathbb{S}^{d-1})\)._
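The determinant identity defining \(V^{2}(x,y,z)\) above is easy to check numerically; the following short sketch (ours) compares the Gram determinant with the polynomial \(1-u^{2}-v^{2}-t^{2}+2uvt\) for random unit vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
x, y, z = rng.standard_normal((3, 5))
for w in (x, y, z):
    w /= np.linalg.norm(w)                       # unit vectors in R^5

u, v, t = y @ z, x @ z, x @ y                    # u = <y,z>, v = <x,z>, t = <x,y>
G = np.array([x, y, z]) @ np.array([x, y, z]).T  # Gram matrix of (x, y, z)
print(np.linalg.det(G), 1 - u**2 - v**2 - t**2 + 2*u*v*t)   # agree up to rounding
```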
### Area of a triangle
Using the same method as in Theorem 4.2, we can show that \(\sigma\) is a maximizer of \(I_{A^{2}}\), where \(A(x,y,z)\) is the area of a triangle, since
\[A^{2}(x,y,z)=\frac{1}{4}\Big{(}3\frac{d-1}{d}-3\frac{d-2}{d-1}S_{2,0,0}^{d}-6S_ {1,1,1}^{d}-6S_{1,0,0}^{d}-3\frac{d-1}{d}S_{0,2,2}^{d}\Big{)}. \tag{4.4}\]
However, we can also prove this with a slightly different method; the resulting statement, Theorem 4.3 below, is a special case of Theorem 7.1, a more general result that we will prove by means of \(k\)-point bounds.
**Theorem 4.3**.: _Suppose \(d\geq 2\), and let \(A^{2}(x,y,z)\) be the square of the area of the triangle with vertices at \(x\), \(y\), \(z\in\mathbb{S}^{d-1}\). Then the uniform surface measure \(\sigma\) maximizes \(I_{A^{2}}(\mu)\) over \(\mathcal{P}(\mathbb{S}^{d-1})\). Moreover, any balanced, isotropic measure \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\) maximizes \(I_{A^{2}}\)._
Proof.: Using Heron's formula, we express \(A^{2}(x,y,z)\) via the scalar products of \(x,y,z\):
\[A^{2}(x,y,z)=\frac{3}{4}-\frac{1}{2}(u+v+t)+\frac{1}{2}(uv+vt+tu)-\frac{1}{4}(u ^{2}+v^{2}+t^{2})=\frac{3}{4}-\frac{3}{2}S_{1,0,0}^{d}-\frac{1}{4}(u^{2}+v^{2} +t^{2}). \tag{4.5}\]
Note that \(\sigma\) minimizes the energies of both \(S_{1,0,0}^{d}\) and \(u^{2}+v^{2}+t^{2}\) by Theorem 4.1 and Lemma 2.1, respectively. Therefore, for all \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\),
\[I_{A^{2}}(\mu)\leq\frac{3}{4}-\frac{1}{4}\cdot\frac{3}{d}=\frac{3(d-1)}{4d}=I_ {A^{2}}(\sigma).\]
More generally, maximizers of \(I_{A^{2}}\) must be isotropic measures in order to achieve the sharp bound from Lemma 2.1 and must be balanced so that \(S_{1,0,0}^{d}\) vanishes on them.
An alternative proof of this result is also given in [1, Theorem 6.7]. We also would like to remark that since
\[3\frac{d-2}{d-1}S_{2,0,0}^{d}+6S_{1,1,1}^{d}+3\frac{d-1}{d}S_{0,2,2}^{d}+\frac {3}{d}=u^{2}+v^{2}+t^{2},\]
which follows from [1, Proposition 3.6], Lemma 2.1 follows from Theorem 4.1, which demonstrates an instance of obtaining \(2\)-point bounds from \(3\)-point bounds.
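The polynomial identity (4.5) for \(A^{2}\) can likewise be verified numerically; the sketch below (ours) computes the squared triangle area directly from the edge vectors and compares it with the right-hand side of (4.5).

```python
import numpy as np

rng = np.random.default_rng(4)
x, y, z = rng.standard_normal((3, 6))
for w in (x, y, z):
    w /= np.linalg.norm(w)                     # unit vectors in R^6

a, b = y - x, z - x
area_sq = 0.25 * ((a @ a) * (b @ b) - (a @ b)**2)   # squared area of the triangle (x, y, z)

u, v, t = y @ z, x @ z, x @ y
rhs = 0.75 - 0.5*(u + v + t) + 0.5*(u*v + v*t + t*u) - 0.25*(u**2 + v**2 + t**2)
print(area_sq, rhs)                            # agree up to rounding
```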
## 5. Maximizing \(k\)-volumes
This section collects results on maximization of volume integral functionals over probability measures with unit second moment in \(\mathbb{R}^{d}\), denoted as before by \(\mathcal{P}^{*}(\mathbb{R}^{d})\), for \(k\geq 3\) inputs. In some cases we will further restrict the supports of such measures to the unit sphere, thereby optimizing over \(\mathcal{P}(\mathbb{S}^{d-1})\). As in the rest of the paper, we are interested in powers of two kernels: the \(k\)-dimensional Euclidean volume of the parallelepiped \(V(x_{1},\ldots,x_{k})\), and the \((k-1)\)-dimensional volume of the simplex \(A(x_{1},\ldots,x_{k})\).
### Maximizing the powers of \(V\)
As in the previous section, we start with \(V^{2}\), i.e. the squared \(k\)-dimensional volume of a parallelepiped spanned by the vectors \(x_{1},\ldots,x_{k}\), equal to the determinant of the Gram matrix of the set of vectors \(\{x_{1},\ldots,x_{k}\}\subset\mathbb{R}^{d}\). Alternatively, \(\frac{1}{(k!)^{2}}V^{2}(x_{1},\ldots,x_{k})\) can be seen as the square of the Euclidean volume of the simplex with vertices \(0,x_{1},\ldots,x_{k}\). The following theorem (in a slightly different form) can be found in the literature:
**Theorem 5.1**.: _Let \(d\geq 3\) and \(3\leq k\leq d\). The set of maximizing measures of \(I_{V^{2}}\) in \(\mathcal{P}^{*}(\mathbb{R}^{d})\) is the set of isotropic measures on \(\mathbb{R}^{d}\). The value of the maximum is \(\frac{k!}{d^{k}}{d\choose k}\)._
_As a corollary, isotropic measures on \(\mathbb{S}^{d-1}\) (which include the uniform surface measure \(\sigma\)) are exactly the maximizers of \(I_{V^{2}}\) over \(\mathcal{P}(\mathbb{S}^{d-1})\)._
This theorem was proved by Rankin [1] for \(k=d\) and by Cahill and Casazza [1] in the general case. In both papers, the statements are for finite spherical sets but the proofs work for measures in \(\mathcal{P}^{*}(\mathbb{R}^{d})\) with only minor adjustments. We also note that due to Remark 1, it is sufficient to prove the result for the spherical case only, since \(V^{2}\) is homogeneous of degree two. The equality case of Theorem 5.1 was also treated in a more general context in [10, P].
Theorem 5.1 allows one to characterize the minimizers of \(I_{V^{s}}\) with \(s>2\) as well.
**Corollary 5.2**.: _For \(s>2\), the energy \(I_{V^{s}}\) on \(\mathbb{S}^{d-1}\) is uniquely (up to rotations and central symmetry) maximized by the uniform measure on an orthonormal basis._
Proof.: For \(s>2\), \(V^{2}(x_{1},\ldots,x_{k})\geq V^{s}(x_{1},\ldots,x_{k})\) for all \(x_{1},\ldots,x_{k}\in\mathbb{S}^{d-1}\), with equality exactly when \(x_{1},\ldots,x_{k}\) is an orthonormal set (so the volume is \(1\)) or when \(x_{1},\ldots,x_{k}\) are linearly dependent (so the volume is \(0\)). Thus, for all \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\),
\[\frac{k!}{d^{k}}\binom{d}{k}\geq I_{V^{2}}(\mu)\geq I_{V^{s}}(\mu). \tag{5.1}\]
The first inequality becomes an equality if \(\mu\) is isotropic, and the second inequality becomes equality if any \(x_{1},\ldots,x_{k}\in\operatorname{supp}(\mu)\) are either orthonormal or linearly dependent. Since the support of an isotropic measure must be full-dimensional, both of these conditions occur simultaneously if and only if \(\mu(\{-e_{j},e_{j}\})=\frac{1}{d}\) for \(j=1,\ldots,d\) for some orthonormal basis \(e_{1},\ldots,e_{d}\).
It is easy to see that the uniform distribution on an orthonormal basis is not a maximizer of \(V^{s}\) for \(0<s<2\).
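This can be seen numerically as well; in the sketch below (ours, with \(k=3\), \(d=4\) and an ad hoc Monte Carlo sample size), the exact energy of the uniform measure on an orthonormal basis is compared with an estimate of \(I_{V^{s}}(\sigma)\): the basis measure wins for \(s=4\) but not for \(s=1\).

```python
import numpy as np
from itertools import product

d, k, M = 4, 3, 20000
rng = np.random.default_rng(5)

def energy_basis(s):
    # exact I_{V^s} for the uniform measure on an orthonormal basis of R^d
    E = np.eye(d)
    vols = [np.sqrt(max(np.linalg.det(E[list(idx)] @ E[list(idx)].T), 0.0))
            for idx in product(range(d), repeat=k)]
    return sum(v**s for v in vols) / d**k

def energy_sigma(s):
    # Monte Carlo estimate of I_{V^s}(sigma) for k inputs
    pts = rng.standard_normal((M, k, d))
    pts /= np.linalg.norm(pts, axis=2, keepdims=True)
    V2 = np.clip(np.linalg.det(pts @ pts.transpose(0, 2, 1)), 0.0, None)
    return np.mean(V2 ** (s / 2.0))

for s in (1.0, 4.0):
    print(s, energy_basis(s), energy_sigma(s))
```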
### Maximizing the powers of \(A\)
We now turn to the powers of \(A(x_{1},\ldots,x_{k})\), the \((k-1)\)-dimensional volume of the simplex with vertices \(x_{1},\ldots,x_{k}\), and again start by considering the kernel \(A^{2}\). The result of Theorem 5.1 for \(V^{2}\) can be used to obtain a similar statement for the measures in \(\mathcal{P}^{*}(\mathbb{R}^{d})\) maximizing \(I_{A^{2}}\), in the case of \(k=d+1\) inputs, i.e. when the simplex is full-dimensional. The main idea is to embed \(\mathbb{R}^{d}\) into \(\mathbb{R}^{d+1}\), treat the value of \(A^{2}(x_{1},\ldots,x_{d+1})\) as the value of \(V^{2}(y_{1},\ldots,y_{d+1})\) for suitable \(y_{1},\ldots,y_{d+1}\), and then use Theorem 5.1.
We shall defer the calculation of the maximal value of \(I_{A^{2}}\) until Theorem 7.1, which also gives an alternative proof of the characterization of maximizers on the sphere \(\mathbb{S}^{d-1}\), moreover, addressing the case of _any_ number of inputs \(3\leq k\leq d+1\), rather than just \(k=d+1\) as in the theorem below. However, the result of this section, Theorem 5.3, applies to measures on \(\mathbb{R}^{d}\), while Theorem 7.1 is restricted to the sphere.
**Theorem 5.3**.: _For \(d\geq 2\) and \(k=d+1\), maximizers of \(I_{A^{2}}\) in \(\mathcal{P}^{*}(\mathbb{R}^{d})\) are the balanced, isotropic probability measures on \(\mathbb{R}^{d}\)._
Proof.: In this proof, we say that a measure \(\mu\) on \(\mathbb{R}^{d}\) is \(d\)-isotropic if equation (2.2) holds. Given a unit basis vector \(e_{d+1}\in\mathbb{R}^{d+1}\), we identify \(\mathbb{R}^{d}\) with the hyperplane in \(\mathbb{R}^{d+1}\), orthogonal to \(e_{d+1}\) and passing through the origin.
To reduce the value of \(I_{A^{2}}\) to that of \(I_{V^{2}}\), given a measure \(\mu\) on \(\mathbb{R}^{d}\), denote its pushforward to \(\mathbb{R}^{d+1}\) by \(\hat{\mu}\):
\[\hat{\mu}:=\psi_{\#}\mu, \tag{5.2}\]
where the map \(\psi:\mathbb{R}^{d}\to\mathbb{R}^{d+1}\) is
\[\psi(x):=\sqrt{\frac{d}{d+1}}x+\frac{1}{\sqrt{d+1}}e_{d+1}.\]
It is understood here that \(x\in\mathbb{R}^{d}\subset\mathbb{R}^{d+1}\), so the addition in the right-hand side is performed in \(\mathbb{R}^{d+1}\).
Recall that a \((d+1)\)-dimensional simplex with base of \(d\)-dimensional volume \(S\) and height \(h\) has \((d+1)\)-dimensional volume \(Sh/(d+1)\). This gives, with \(V^{2}=V^{2}(x_{1},\ldots,x_{d+1})\) the square of the \((d+1)\)-dimensional volume of the parallelepiped spanned by its inputs,
\[I_{V^{2}}(\hat{\mu})=(d!)^{2}\frac{d^{d}}{(d+1)^{d+1}}I_{A^{2}}(\mu), \tag{5.3}\]
where we account for the fact that \(V\) includes a factor of \((d+1)!\), whereas \(A\) does not.
By Theorem 5.1, the functional \(I_{V^{2}}\) on the left-hand side of equation (5.3) is maximized over \(\mathcal{P}^{*}(\mathbb{R}^{d+1})\) exactly when \(\hat{\mu}\) is isotropic. To finish the proof, it remains to observe that the pushforward \(\hat{\mu}=\psi_{\#}\mu\) is \((d+1)\)-isotropic if and only if \(\mu\) is \(d\)-isotropic and balanced. Indeed, writing \(x=(x^{(1)},\ldots,x^{(d+1)})=(y,x^{(d+1)})\), we have for \(1\leq i\leq j\leq d+1\)
\[\int_{\mathbb{R}^{d+1}}x^{(i)}x^{(j)}\,d\hat{\mu}(x)=\begin{cases}\frac{d}{d+1 }\int_{\mathbb{R}^{d}}y^{(i)}y^{(j)}\,d\mu(y)&j\leq d,\\ \frac{\sqrt{d}}{d+1}\int_{\mathbb{R}^{d}}y^{(i)}\,d\mu(y)&i\leq d,\ j=d+1,\\ \frac{1}{d+1}&i=j=d+1.\end{cases} \tag{5.4}\]
By (2.2), \((d+1)\)-isotropy of \(\hat{\mu}\) means the integral in the left-hand side of (5.4) is equal to \(\delta_{ij}/(d+1)\), \(1\leq i\leq j\leq d+1\). In particular, the first integral in the right-hand side is then equal to \(\delta_{ij}/d\), which is precisely the condition for \(d\)-isotropy of \(\mu\), and the integrals of \(y^{(i)}\) are all equal to zero, implying \(\mu\) is balanced. The converse implication follows along the same lines.
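The scaling relation (5.3) comes from a pointwise identity between \(A^{2}(x_{1},\ldots,x_{d+1})\) and \(V^{2}(\psi(x_{1}),\ldots,\psi(x_{d+1}))\), which can be checked numerically; the sketch below (ours, for \(d=3\)) also confirms that \(\psi\) maps \(\mathbb{S}^{d-1}\) into \(\mathbb{S}^{d}\).

```python
import numpy as np
from math import factorial

d = 3
rng = np.random.default_rng(6)
X = rng.standard_normal((d + 1, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)       # d+1 points on S^{d-1}

def psi(x):
    # the embedding of R^d into R^{d+1} used in the proof above
    return np.append(np.sqrt(d / (d + 1)) * x, 1.0 / np.sqrt(d + 1))

Y = np.array([psi(x) for x in X])
print(np.linalg.norm(Y, axis=1))                    # all entries equal 1

E = X[1:] - X[0]
A2 = np.linalg.det(E @ E.T) / factorial(d) ** 2     # squared d-volume of the simplex on X
V2 = np.linalg.det(Y @ Y.T)                         # squared (d+1)-volume of the parallelepiped on psi(X)
print(V2, factorial(d) ** 2 * d**d / (d + 1) ** (d + 1) * A2)   # agree up to rounding
```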
We can use the same methods as in Corollary 5.2 to find the maximizers of \(I_{A^{s}}\), for larger powers, on the sphere \(\mathbb{S}^{d-1}\).
**Corollary 5.4**.: _Let \(s>2\) and \(A(x_{1},\ldots,x_{d+1})\) be the \(d\)-dimensional volume of a simplex with vertices \(x_{1},\ldots,x_{d+1}\in\mathbb{S}^{d-1}\). Then \(I_{A^{s}}\) is uniquely (up to rotations) maximized by the uniform distribution on the vertices of a regular \(d\)-simplex._
Proof.: We know that \(A(x_{1},\ldots,x_{d+1})\) is maximized exactly when \(x_{1},\ldots,x_{d+1}\) are the vertices of a regular simplex (see, e.g. [1, 2, 11], see also the case \(j=d\) of Corollary 3.3). Let \(\alpha\) be that maximum volume. We see that for \(s>2\),
\[A^{2}(x_{1},\ldots,x_{d+1})\geq\alpha^{2-s}A^{s}(x_{1},\ldots,x_{d+1})\]
for all \(x_{1},\ldots,x_{d+1}\in\mathbb{S}^{d-1}\), with equality exactly when \(A(x_{1},\ldots,x_{d+1})\) is \(0\) or \(\alpha\).
We know that, for all \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\)
\[\frac{d+1}{d!\,d^{d}}\geq I_{A^{2}}(\mu)\geq\alpha^{2-s}I_{A^{s}}(\mu). \tag{5.5}\]
The first inequality becomes an equality when \(\mu\) is balanced and isotropic, and the second becomes an equality when \(A(x_{1},\ldots,x_{d+1})\) is \(0\) or \(\alpha\) for all \(x_{1},\ldots,x_{d+1}\in\operatorname{supp}(\mu)\). These both occur exactly when \(\mu\) is the uniform distribution on the vertices of a regular \(d\)-simplex.
For \(0<s<2\), the uniform distribution on the vertices of a regular simplex is not a maximizer of \(I_{A^{s}}\). It is also not clear which measures maximize \(I_{A^{s}}\) for \(s>2\) and general \(k<d+1\). We conjecture that the maximizers are again discrete in this case (see the discussion at the end of Section 7).
## 6. Kernels for \(k\)-point bounds
In this section, we explain how to construct a class of continuous (in certain cases, polynomial) kernels which are \(k\)-positive definite and whose energy is minimized by \(\sigma\). These kernels are a generalization of those developed by Bachoc and Vallentin in [3], and similar to the kernels given in [1] and [4], all of which were used for obtaining \(k\)-point semidefinite programming bounds. We provide a slight alteration to these kernels, so that the inputs are no longer restricted to being linearly independent, or constrained to some proper subset of the sphere.
Consider the points \(\{x_{1},\ldots,x_{k}\}\subset\mathbb{S}^{d-1}\) with \(k\leq d+1\). Suppose that \(x_{3},\ldots,x_{k}\) are linearly independent and \(x_{1},x_{2}\not\in X=\operatorname{span}\{x_{3},\ldots,x_{k}\}\), and denote the orthogonal projections of \(x_{1}\) and \(x_{2}\) onto \(X^{\perp}\) as \(y_{1}\) and \(y_{2}\), respectively. Then the normalized vectors \(\frac{y_{1}}{\|y_{1}\|}\) and \(\frac{y_{2}}{\|y_{2}\|}\) belong to the unit sphere in the \((d-k+2)\)-dimensional space \(X^{\perp}\). If \(k\leq d\), then on this unit sphere, the kernel given by \(P_{l}^{d-k+2}(\langle x,y\rangle)\) is positive definite, suggesting that we may be able to build a \(k\)-positive definite kernel from
\[P_{l}^{d-k+2}\Big(\Big\langle\frac{y_{1}}{\|y_{1}\|},\frac{y_{2}}{\|y_{2}\|}\Big\rangle\Big). \tag{6.1}\]
If \(k=d+1\), then \(\frac{y_{1}}{\|y_{1}\|},\frac{y_{2}}{\|y_{2}\|}\in\mathbb{S}^{0}=\{-1,1\}\), and we see that \(1\) and \(\frac{y_{1}}{\|y_{1}\|}\frac{y_{2}}{\|y_{2}\|}\) make a basis for positive definite functions. Of course, for \(l>0\), \(P_{l}^{d-k+2}\Big{(}\Big{\langle}\frac{y_{1}}{\|y_{1}\|},\frac{y_{2}}{\|y_{2} \|}\Big{\rangle}\Big{)}\) is not well defined, as a function of \(x_{1},\ldots,x_{k}\), if \(x_{1}\) or \(x_{2}\) is in \(X\), and may not be continuous whenever the dimension of \(X\) changes. We can modify (6.1) to account for these issues, arriving at the following polynomial kernel. In what follows, we denote \(W\) to be the Gram matrix of \(x_{3},\ldots,x_{k}\), and we set \(P_{0}^{1}=1\) and \(P_{1}^{1}(t)=t\) (these are the only cases when \(P_{j}^{1}\) are defined).
**Theorem 6.1**.: _With the notation above, for any \(l\in\mathbb{N}_{0}\), the function \(Q_{k,l}^{d}:\left(\mathbb{S}^{d-1}\right)^{k}\to\mathbb{R}\) defined by_
\[Q_{k,l}^{d}(x_{1},\ldots,x_{k})=\det(W)^{l}\,\|y_{1}\|^{l}\|y_{2}\|^{l}\,P_{l}^{d-k+2}\Big(\Big\langle\frac{y_{1}}{\|y_{1}\|},\frac{y_{2}}{\|y_{2}\|}\Big\rangle\Big) \tag{6.2}\]
_is a rotationally-invariant \(k\)-positive definite polynomial kernel and \(I_{Q^{d}_{k,l}}\) is minimized by \(\sigma\)._
We note that these kernels are symmetric in the last \(k-2\) variables as well as in the first two variables.
Proof.: Note that \(Q^{d}_{k,0}=1\), so our claim holds in these cases. Now, assume that \(l\in\mathbb{N}\).
Rotational-invariance follows immediately from the rotational-invariance of \(W\), \(\|y_{1}\|\), \(\|y_{2}\|\) and \(\langle y_{1},y_{2}\rangle\).
In what follows, we denote \(u_{i,j}=\langle x_{i},x_{j}\rangle\) for all \(i\) and \(j\), and for \(h\in\{1,2\}\), \(w_{h}=(u_{h,3},\ldots,u_{h,k})^{T}\), \(y_{h}^{\perp}=x_{h}-y_{h}\), and \(z_{h}=\frac{y_{h}}{\|y_{h}\|}\).
We first must show that our kernel \(Q^{d}_{k,l}\) is well-defined. Write \(y_{1}^{\perp}=\sum_{j=3}^{k}\alpha_{j}x_{j}\) and \(y_{2}^{\perp}=\sum_{j=3}^{k}\beta_{j}x_{j}\), and denote \(\alpha=(\alpha_{3},\ldots,\alpha_{k})^{T}\) and \(\beta=(\beta_{3},\ldots,\beta_{k})^{T}\). Since for \(3\leq j\leq k\),
\[u_{1,j}=\langle x_{1},x_{j}\rangle=\langle y_{1}^{\perp},x_{j}\rangle=\sum_{i =3}^{k}\alpha_{i}u_{i,j}\quad\text{ and }\quad u_{2,j}=\langle x_{2},x_{j}\rangle=\langle y_{2}^{\perp},x_{j} \rangle=\sum_{i=3}^{k}\beta_{i}u_{i,j}\]
we conclude that \(w_{1}=W\alpha\) and \(w_{2}=W\beta\).
Now assume that \(x_{3},\ldots,x_{k}\) are linearly independent. Consequently, \(\alpha=W^{-1}w_{1}\) and \(\beta=W^{-1}w_{2}\), so
\[\langle y_{1}^{\perp},y_{2}^{\perp}\rangle=\alpha^{T}W\beta=w_{1}^{T}W^{-1}WW^ {-1}w_{2}=w_{1}^{T}W^{-1}w_{2}, \tag{6.3}\]
and similarly
\[\langle y_{1}^{\perp},y_{1}^{\perp}\rangle=w_{1}^{T}W^{-1}w_{1}\text{ and }\langle y_{2}^{\perp},y_{2}^{\perp}\rangle=w_{2}^{T}W^{-1}w_{2}. \tag{6.4}\]
We then see that
\[\langle y_{1},y_{2}\rangle=\langle x_{1},x_{2}\rangle-\langle y_{1}^{\perp}, y_{2}^{\perp}\rangle=u_{1,2}-w_{1}^{T}W^{-1}w_{2}, \tag{6.5}\]
\[\|y_{1}\|^{2}=1-w_{1}^{T}W^{-1}w_{1}\text{ and }\|y_{2}\|^{2}=1-w_{2}^{T}W^{-1}w_{2}. \tag{6.6}\]
Thus, if \(x_{1},x_{2}\not\in X\), we can rewrite (6.2) as
\[Q^{d}_{k,l}(x_{1},\ldots,x_{k})=\det(W)^{l}\Big{(}(1-w_{1}^{T}W^ {-1}w_{1})(1-w_{2}^{T}W^{-1}w_{2})\Big{)}^{l/2} \tag{6.7}\] \[\times P^{d-k+2}_{l}\left(\frac{u_{1,2}-w_{1}^{T}W^{-1}w_{2}}{ \sqrt{1-w_{1}^{T}W^{-1}w_{1}}\sqrt{1-w_{2}^{T}W^{-1}w_{2}}}\right).\]
Letting \(P^{d-k+2}_{l}(t)=\sum_{m=0}^{\lfloor\frac{l}{2}\rfloor}a_{l-2m}t^{l-2m}\), we see that
\[Q^{d}_{k,l}(x_{1},\ldots,x_{k})=\sum_{m=0}^{\lfloor\frac{l}{2} \rfloor}a_{l-2m}\Big{(} \det(W)u_{1,2}-w_{1}^{T}\operatorname{adj}(W)w_{2}\Big{)}^{l-2m} \Big{(}\det(W)-w_{1}^{T}\operatorname{adj}(W)w_{1}\Big{)}^{m} \tag{6.8}\] \[\times\Big{(}\det(W)-w_{2}^{T}\operatorname{adj}(W)w_{2}\Big{)}^{ m},\]
where \(\operatorname{adj}(W)\) is the adjugate matrix of \(W\). This is a polynomial of the inner products of \(x_{1},\ldots,x_{k}\), and so is well defined for all \(x_{1},\ldots,x_{k}\in\mathbb{S}^{d-1}\).
In addition, by rewriting (6.2) as
\[Q^{d}_{k,l}(x_{1},\ldots,x_{k})=\det(W)^{l}\sum_{m=0}^{\lfloor\frac{l}{2}\rfloor}a_{l-2m}\langle y_{1},y_{2}\rangle^{l-2m}\|y_{1}\|^{2m}\|y_{2}\|^{2m}, \tag{6.9}\]
for \(k\leq d\) and
\[Q^{d}_{d+1,1}(x_{1},\ldots,x_{k})=\det(W)\langle y_{1},y_{2}\rangle, \tag{6.10}\]
we see that \(Q^{d}_{k,l}\) is zero if \(x_{3},\ldots,x_{k}\) are linearly dependent.
If \(k=d+1\) and \(l=1\), (6.10) shows us that for any fixed \(x_{3},\ldots,x_{d+1}\in\mathbb{S}^{d-1}\) and \(\mu\in\mathcal{M}(\mathbb{S}^{d-1})\),
\[I_{U_{Q^{d}_{d+1,1}}^{x_{3},\ldots,x_{d+1}}}(\mu)=\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}\det(W)\,y_{1}y_{2}\,d\mu(x_{1})d\mu(x_{2})=\det(W)\Big(\int_{\mathbb{S}^{d-1}}y_{1}\,d\mu(x_{1})\Big)^{2}\geq 0, \tag{6.11}\]
so \(Q^{d}_{k,1}\) is \(k\)-positive definite. Note that since \(W\) is the Gram matrix of \(x_{3},\ldots,x_{k}\), its determinant is nonnegative. We now want to show that this energy is zero when \(\mu=\sigma\). We first note that this occurs
if \(x_{3},\dots,x_{k}\) are linearly dependent, so let us assume that \(x_{3},\dots,x_{k}\) are linearly independent, and set \(f(y_{1}^{\perp},y_{1})=f(x_{1})=y_{1}\). Denoting the unit ball in \(\mathbb{R}^{n}\) as \(\mathbb{B}^{n}\), we have, by Lemma A.5.4 of [DX], that
\[\int_{\mathbb{S}^{d-1}}f(x_{1})\,d\sigma(x_{1}) =\int_{\mathbb{B}^{d-1}}\frac{f(y_{1}^{\perp},\sqrt{1-\|y_{1}^{ \perp}\|^{2}})+f(y_{1}^{\perp},-\sqrt{1-\|y_{1}^{\perp}\|^{2}})}{\sqrt{1-\|y_ {1}^{\perp}\|^{2}}}dy_{1}^{\perp}\] \[=\int_{\mathbb{B}^{d-1}}\frac{\sqrt{1-\|y_{1}^{\perp}\|^{2}}+(- \sqrt{1-\|y_{1}^{\perp}\|^{2}})}{\sqrt{1-\|y_{1}^{\perp}\|^{2}}}dy_{1}^{\perp} =0.\]
It now follows from (6.11) that \(I_{Q_{d+1,1}^{d}}(\sigma)=0\), so \(\sigma\) minimizes \(I_{Q_{d+1,1}^{d}}\).
When \(k\leq d\), we need a bit more machinery. Let \(Y_{1},\dots,Y_{\dim(\mathcal{H}_{l}^{d-k+1})}\) be an orthonormal basis of \(\mathcal{H}_{l}^{d-k+1}\), the space of spherical harmonics of degree \(l\) on \(\mathbb{S}^{d-k+1}\). Then the addition formula (see [DX, Theorem 1.2.6]) tells us that
\[Q_{k,l}^{d}(x_{1},\dots,x_{k})=\det(W)^{l}\|y_{1}\|^{l}\|y_{2}\|^{l}\frac{1}{ \dim(\mathcal{H}_{l}^{d-k+1})}\sum_{j=1}^{\dim(\mathcal{H}_{l}^{d-k+1})}Y_{j} (z_{1})Y_{j}(z_{2}). \tag{6.12}\]
Thus for any fixed \(x_{3},\dots,x_{k}\) and \(\mu\in\mathcal{M}(\mathbb{S}^{d-1})\),
\[I_{U_{Q_{k,l}^{d}}^{x_{3},\dots,x_{k}}}(\mu) =\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}Q_{k,l}^{d}(x_{1}, x_{2},\dots,x_{k})\,d\mu(x_{1})d\mu(x_{2})\] \[=\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}\det(W)^{l}\|y_{1} \|^{l}\|y_{2}\|^{l}\sum_{j=1}^{\dim(\mathcal{H}_{l}^{d-k+1})}Y_{j}(z_{1})Y_{j }(z_{2})\,d\mu(x_{1})d\mu(x_{2})\] \[=\det(W)^{l}\sum_{j=1}^{\dim(\mathcal{H}_{l}^{d-k+1})}\Bigg{(} \int_{\mathbb{S}^{d-1}}Y_{j}(z_{1})\|y_{1}\|^{l}d\mu(x_{1})\Bigg{)}^{2}\geq 0.\]
Note that since \(W\) is the Gram matrix of \(x_{3},\dots,x_{k}\), its determinant is nonnegative. Thus, \(Q_{k,l}^{d}\) is indeed \(k\)-positive definite, so for any \(\mu\in\mathcal{P}(\mathbb{S}^{d-1})\), \(I_{Q_{k,l}^{d}}(\mu)\geq 0\).
We now show that \(\sigma\) minimizes the energy \(I_{Q_{k,l}^{d}}\). For any fixed \(x_{3},\dots,x_{k}\in\mathbb{S}^{d-1}\), we see that
\[I_{U_{Q_{k,l}^{d}}^{x_{3},\dots,x_{k}}}(\sigma)=\det(W)^{l}\sum_{j=1}^{\dim( \mathcal{H}_{l}^{d-k+1})}\Bigg{(}\int_{\mathbb{S}^{d-1}}Y_{j}(z_{1})\|y_{1} \|^{l}d\sigma(x_{1})\Bigg{)}^{2}.\]
If \(x_{3},\dots,x_{k}\) are linearly dependent, we know this is zero. Assume that \(x_{3},\dots,x_{k}\) are linearly independent, and for \(1\leq j\leq\dim(\mathcal{H}_{l}^{d-k+1})\), let
\[f_{j}(x_{1})=f_{j}(y_{1}^{\perp},y_{1})=Y_{j}(z_{1})\|y_{1}\|^{l}.\]
By Lemma A.5.4 of [DX], we have that
\[\int_{\mathbb{S}^{d-1}}f_{j}(x_{1})d\sigma(x_{1}) =\int_{\mathbb{B}^{k-2}}(1-\|y_{1}^{\perp}\|^{2})^{\frac{d-k}{2}} \left[\int_{\mathbb{S}^{d-k+1}}f_{j}(y_{1}^{\perp},\sqrt{1-\|y_{1}^{\perp}\|^ {2}}\xi)d\sigma(\xi)\right]dy_{1}^{\perp}\] \[=\int_{\mathbb{B}^{k-2}}(1-\|y_{1}^{\perp}\|^{2})^{\frac{d-k}{2}} \left[\int_{\mathbb{S}^{d-k+1}}Y_{j}(\xi)(1-\|y_{1}^{\perp}\|^{2})^{\frac{l}{2} }d\sigma(\xi)\right]dy_{1}^{\perp}\] \[=0.\]
Thus, for any fixed \(x_{3},\dots,x_{k}\in\mathbb{S}^{d-1}\),
\[\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}Q_{k,l}^{d}(x_{1},x_{2},\dots,x_{ k})d\sigma(x_{1})d\sigma(x_{2})=0,\]
meaning that
\[I_{Q_{k,l}^{d}}(\sigma)=0,\]
so \(\sigma\) is indeed a minimizer of \(I_{Q^{d}_{k,l}}\).
We note here that, for \(l\geq 1\), \(Q^{d}_{k,l}\) is zero if \(x_{3},\dots,x_{k}\) are linearly dependent or if \(x_{1}\) or \(x_{2}\) lies in \(X\).
For \(k=3\), these kernels are essentially (4.2), introduced by Bachoc and Vallentin [4]. In this instance, note that \(\det(W)=1\) is constant. The general case was covered by Musin [11] who used the kernels to formulate general SDP bounds for spherical codes and, with some additional machinery, generalized the result of Schoenberg [12] to characterize all positive definite kernels invariant under the stabilizer of \(X\). However, in that setting, it was assumed that \(x_{3},\dots,x_{k}\) were fixed and linearly independent, so no factor such as \(\det(W)^{l}\) was included and the functions were only really functions of two variables. Recently, similar kernels with \(k\geq 4\) were used for finding new bounds for sizes of equiangular sets of lines in [13], where kernels were constructed in a way that assumed that the distance set of the last \(k-2\) inputs had finitely many values, making them multivariate functions, but not allowing the last \(k-2\) inputs to take arbitrary values on the sphere. The authors of [13] even discuss the difficulty of such a task. Our inclusion of \(\det(W)\) as a factor of \(Q\) allows us to address this issue, though this alone would not allow us to construct functions which are not constant when \(x_{3},\dots,x_{k}\) are linearly dependent, or more complicated positive definite functions, such as semidefinite combinations of the functions (4.1). We discuss how to construct such functions later in this section.
For the main result of Section 7, it is sufficient to use the case \(l=1\) so we formulate the relevant statement as a separate lemma.
**Lemma 6.2**.: _For any set of fixed vectors \(x_{3},\dots,x_{k}\in\mathbb{S}^{d-1}\), the kernel_
\[Q^{d}_{k,1}(x_{1},\dots,x_{k})=\det(W)\langle y_{1},y_{2}\rangle=\det(W)u_{1,2 }-w_{1}^{T}\operatorname{adj}(W)w_{2}\]
_is \(k\)-positive definite, and \(I_{Q^{d}_{k,1}}\) is minimized by \(\sigma\)._
For small values of \(k\), these kernels take the form:
\[Q^{d}_{3,1} =u_{1,2}-u_{1,3}u_{2,3},\] \[Q^{d}_{4,1} =u_{1,2}-u_{1,2}u_{3,4}^{2}-u_{1,3}u_{2,3}-u_{1,4}u_{2,4}+u_{1,3}u _{2,4}u_{3,4}+u_{1,4}u_{2,3}u_{3,4}.\]
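Indeed, for \(k=3\) we have \(W=(1)\), \(\operatorname{adj}(W)=(1)\), \(w_{1}=(u_{1,3})\), \(w_{2}=(u_{2,3})\), so the formula of Lemma 6.2 reads \(u_{1,2}-u_{1,3}u_{2,3}\); for \(k=4\),
\[W=\begin{pmatrix}1&u_{3,4}\\ u_{3,4}&1\end{pmatrix},\qquad\operatorname{adj}(W)=\begin{pmatrix}1&-u_{3,4}\\ -u_{3,4}&1\end{pmatrix},\qquad w_{1}=\begin{pmatrix}u_{1,3}\\ u_{1,4}\end{pmatrix},\qquad w_{2}=\begin{pmatrix}u_{2,3}\\ u_{2,4}\end{pmatrix},\]
and expanding \(\det(W)u_{1,2}-w_{1}^{T}\operatorname{adj}(W)w_{2}\) gives exactly the expression for \(Q^{d}_{4,1}\) above.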
We can use these kernels \(Q^{d}_{k,l}\) to construct various other kernels which are \(k\)-positive definite and whose energies are minimized by \(\sigma\). Similar objects were studied in [14].
**Corollary 6.3**.: _Let \(G:(\mathbb{S}^{d-1})^{k-1}\to\mathbb{R}\) be a continuous function such that, for \(\eta_{1},\eta_{2},\dots,\eta_{k-1}\in\mathbb{S}^{d-1}\), \(G(\eta_{1},\dots,\eta_{k-1})\) depends only on the inner products \(\langle\eta_{i},\eta_{j}\rangle\), \(1\leq i<j\leq k-1\). Then the kernel_
\[T(x_{1},x_{2},\dots,x_{k})=G(x_{1},x_{3},\dots,x_{k})G(x_{2},x_{3},\dots,x_{k })Q^{d}_{k,l}(x_{1},x_{2},\dots,x_{k}) \tag{6.13}\]
_is rotationally-invariant and \(k\)-positive definite. If \(l\geq 1\), \(T\) satisfies_
\[\inf_{\mu\in\mathcal{P}(\mathbb{S}^{d-1})}I_{T}(\mu)=I_{T}(\sigma)=0. \tag{6.14}\]
From the way we defined \(T\), we can see that \(T\) is indeed continuous and symmetric in the first two variables.
Proof.: We will use the same notation as in the proof of Theorem 6.1. We see immediately that the rotational-invariance of \(T\) follows from the rotational-invariance of \(Q^{d}_{k,l}\) and the inner products \(\langle x_{i},x_{j}\rangle\). We also have that for fixed \(x_{3},\dots,x_{k}\), \(G(x_{i},x_{3},\dots,x_{k})\) depends only on \(y_{i}^{\perp}\), the orthogonal projection of \(x_{i}\) onto \(X\).
For \(k\leq d\), that \(T\) is \(k\)-positive definite can be seen by the fact that for fixed \(x_{3},\dots,x_{k}\) and \(\mu\in\mathcal{M}(\mathbb{S}^{d-1})\), (6.12) gives us
\[I_{U_{T}^{x_{3},\dots,x_{k}}}(\mu)=\det(W)^{l}\sum_{j=1}^{\dim(\mathcal{H}_{l}^{d-k+1})}\left(\int_{\mathbb{S}^{d-1}}Y_{j}(z_{1})\|y_{1}\|^{l}G(x_{1},x_{3},\dots,x_{k})\,d\mu(x_{1})\right)^{2}\geq 0.\]
If \(x_{3},\ldots,x_{k}\) are linearly dependent, then \(T=0\), so assume that \(x_{3},\ldots,x_{k}\) are linearly independent, and for \(1\leq j\leq\dim(\mathcal{H}_{l}^{d-k+1})\), let
\[f_{j}(x_{1})=f_{j}(y_{1}^{\perp},y_{1})=Y_{j}(z_{1})\|y_{1}\|^{l}G(x_{1},x_{3}, \ldots x_{k}).\]
By Lemma A.5.4 of [DX], and since \(G\) does not depend on \(y_{1}\), we have
\[\int_{\mathbb{S}^{d-1}}f_{j}(x_{1})d\sigma(x_{1}) =\int_{\mathbb{B}^{k-2}}(1-\|y_{1}^{\perp}\|^{2})^{\frac{d-k}{2}}\left[\int_{\mathbb{S}^{d-k+1}}f_{j}(y_{1}^{\perp},\sqrt{1-\|y_{1}^{\perp}\|^{2}}\xi)d\sigma(\xi)\right]dy_{1}^{\perp}\] \[=\int_{\mathbb{B}^{k-2}}(1-\|y_{1}^{\perp}\|^{2})^{\frac{d-k}{2}}(1-\|y_{1}^{\perp}\|^{2})^{\frac{l}{2}}G(x_{1},x_{3},\ldots,x_{k})\left[\int_{\mathbb{S}^{d-k+1}}Y_{j}(\xi)d\sigma(\xi)\right]dy_{1}^{\perp}\] \[=0.\]
Thus, for any fixed \(x_{3},\ldots,x_{k}\in\mathbb{S}^{d-1}\),
\[\int_{\mathbb{S}^{d-1}}\int_{\mathbb{S}^{d-1}}T(x_{1},x_{2},\ldots,x_{k})d \sigma(x_{1})d\sigma(x_{2})=0,\]
meaning that
\[I_{T}(\sigma)=0,\]
so \(\sigma\) is indeed a minimizer of \(I_{T}\).
The case of \(k=d+1\) is similar.
**Lemma 6.4**.: _Let \(G:(\mathbb{S}^{d-1})^{k-1}\to\mathbb{R}\) be continuous, depend only on the inner products of its inputs, and satisfy \(\int_{\mathbb{S}^{d-1}}G(\eta_{1},\ldots,\eta_{k-1})d\sigma(\eta_{1})=0\). Then the kernel_
\[H(x_{1},x_{2},\ldots,x_{k})=G(x_{1},x_{3},\ldots,x_{k})G(x_{2},x_{3},\ldots,x_ {k}) \tag{6.15}\]
_is rotationally-invariant, \(k\)-positive definite, and satisfies_
\[\inf_{\mu\in\mathcal{P}(\mathbb{S}^{d-1})}I_{H}(\mu)=I_{H}(\sigma)=0. \tag{6.16}\]
The formulation of \(T\) and \(H\) in the corollary and lemma, together with the fact that the sum of \(k\)-positive definite kernels minimized by \(\sigma\) is a \(k\)-positive definite kernel minimized by \(\sigma\), allows us to now recover Theorem 4.1. In [4], the authors created matrices \(Y_{I}^{d}\) of polynomials and then took the trace of the product of a positive semidefinite matrix and a \(Y_{I}^{d}\). When \(l=0\), this would lead to a sum of kernels of the form (6.15), and for \(l>0\) this would lead to a sum of kernels of the form (6.13).
By combining Lemmas 2.3, 2.4, 2.6, and 6.4 with Corollary 6.3, we can now construct a wide range of rotationally-invariant \(k\)-positive definite kernels whose energies are minimized by \(\sigma\) from the kernels \(Q_{n,l}^{d}\) for \(n<k\). In particular, we can construct kernels which are not constant when \(x_{3},\ldots,x_{k}\) are linearly dependent, unlike \(Q_{k,l}^{d}\).
## 7. Maximizing the integral of \(A^{2}\) on the sphere
We now turn to the last main results of the paper. As an analogue of the result by Cahill and Casazza (Theorem 5.1), we solve the optimization problem for \(A^{2}\), the square of the \((k-1)\)-dimensional volume of the simplex, for an arbitrary number of inputs \(3\leq k\leq d+1\). We have already proved some partial cases of the theorem below: Theorem 4.3 (for the case \(k=3\) and \(d\geq 2\), i.e. the area of the triangle) and Theorem 5.3 (for full-dimensional simplices, i.e. \(k=d+1\geq 3\)). We would like to point out that the latter theorem applies to measures on \(\mathbb{R}^{d}\). The following theorem, while restricted to the sphere, covers the whole range \(3\leq k\leq d+1\).
**Theorem 7.1**.: _Let \(d\geq 2\) and \(3\leq k\leq d+1\). Let \(A(x_{1},\ldots,x_{k})\) be the \((k-1)\)-dimensional Euclidean volume of a simplex with vertices \(x_{1},\ldots,x_{k}\in\mathbb{S}^{d-1}\). Then the set of maximizing measures of \(I_{A^{2}}\) in \(\mathcal{P}(\mathbb{S}^{d-1})\) is the set of balanced isotropic measures on \(\mathbb{S}^{d-1}\). In particular, the uniform surface measure \(\sigma\) maximizes \(I_{A^{2}}\). The value of the maximum is \(\frac{k}{(k-1)!d^{k-1}}\binom{d}{k-1}\)._
Proof.: Let \(U\) be the Gram matrix of vectors \(\{x_{1},\ldots,x_{k}\}\subset\mathbb{S}^{d-1}\) with entries \(u_{i,j}\), i.e. \(\langle x_{i},x_{j}\rangle=u_{i,j}\) for \(1\leq i,j\leq k\). For \(I,J\subseteq\{1,\ldots,k\}\), we denote by \(U_{I,J}\) the submatrix of \(U\) obtained by deleting rows with numbers from \(I\) and columns with numbers from \(J\). By Lemma 9.1, whose proof is postponed to the Appendix,
\[((k-1)!)^{2}A^{2}=-\det\begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^{T}&0\end{pmatrix}.\]
We expand the determinant by choosing, for each \(i,j\in\{1,\ldots,k\}\), the element in the last column and \(i^{th}\) row together with the element in the \(j^{th}\) column and last row. We treat the cases \(i=j\) and \(i\neq j\) separately.
\[((k-1)!)^{2}A^{2} =-\sum_{i=1}^{k}(-1)^{k+1+i+k+i}\det(U_{\{i\},\{i\}})-\sum_{i\neq j }(-1)^{k+1+i+k+j}\det(U_{\{i\},\{j\}})\] \[=\sum_{i=1}^{k}\det(U_{\{i\},\{i\}})+\sum_{i\neq j}(-1)^{i+j}\det (U_{\{i\},\{j\}})\]
For each \(i\in\{1,\ldots,k\}\), \(\det(U_{\{i\},\{i\}})\) is the \((k-1)\)-point kernel \(V^{2}(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{k})\) from Theorem 5.1. Consequently, Theorem 5.1 implies that the energy integral for the kernel defined by the first sum is not greater than \(k\frac{(k-1)!}{d^{k-1}}\binom{d}{k-1}\).
It is now sufficient to show that the contribution of the second sum is nonpositive. Let us fix \(i,j\in\{1,\ldots,k\}\), with \(i\neq j\), and denote \(U_{\{i,j\},\{i,j\}}\) by \(U^{\prime}\). We expand \(\det(U_{\{i\},\{j\}})\) by row \(j\) of \(U\) and column \(i\) of \(U\) taking an element \(u_{j,m}\), \(m\neq j\), from the row and \(u_{n,i}\), \(n\neq i\), from the column, respectively.
If \(m=i\) and \(n=j\), then we take \(u_{j,i}\) both for the row and the column expansion. The contribution of this case to \(\det(U_{\{i\},\{j\}})\) is then \((-1)^{i+j-1}u_{j,i}\det(U^{\prime})\).
Let us now consider the case where \(m\neq i\) and \(n\neq j\). Without loss of generality, let us assume that \(i<j\) (the case of \(i>j\) is similar). Let \(n^{\prime}\) be the position of row \(n\) of \(U\) after rows \(i\) and \(j\) are deleted, i.e. \(n^{\prime}=n\) if \(n<i\), \(n^{\prime}=n-1\) if \(i<n<j\), and \(n^{\prime}=n-2\) if \(j<n\). Similarly we define \(m^{\prime}=m\) if \(m<i\), \(m^{\prime}=m-1\) if \(i<m<j\), and \(m^{\prime}=m-2\) if \(j<m\). This guarantees that \(U_{\{i,j,n\},\{i,j,m\}}=U^{\prime}_{\{n^{\prime}\},\{m^{\prime}\}}\). A careful examination of the signs shows that the contribution of this expansion in the sum is then
\[(-1)^{i+n^{\prime}+j+m^{\prime}}u_{n,i}u_{j,m}\det(U^{\prime}_{\{n^{\prime}\},\{m^{\prime}\}}) =(-1)^{p}u_{n,i}u_{j,m}\det(U^{\prime}_{\{n^{\prime}\},\{m^{ \prime}\}})\] \[=(-1)^{p}u_{n,i}u_{j,m}\det(U_{\{i,j,n\},\{i,j,m\}})\]
where
\[p=i+n+j+m+\frac{\operatorname{sgn}(n-i)+\operatorname{sgn}(n-j)+\operatorname {sgn}(m-i)+\operatorname{sgn}(m-j)}{2}-2.\]
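For instance, with \(k=4\), \(i=1\), \(j=2\), \(n=3\), \(m=4\), we have \(n^{\prime}=1\), \(m^{\prime}=2\), and \(p=1+3+2+4+\frac{1+1+1+1}{2}-2=10\), which has the same parity as \(i+n^{\prime}+j+m^{\prime}=6\).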
Overall, we have
\[(-1)^{i+j}\det(U_{\{i\},\{j\}}) =(-1)^{2i+2j-1}u_{j,i}\det(U^{\prime})+\sum_{\begin{subarray}{c}1\leq m,n\leq k\\ m,n\notin\{i,j\}\end{subarray}}(-1)^{2i+2j+m^{\prime}+n^{\prime}}u_{n,i}u_{j,m}\det(U^{\prime}_{\{n^{\prime}\},\{m^{\prime}\}})\] \[=-u_{j,i}\det(U^{\prime})+\sum_{\begin{subarray}{c}1\leq m,n\leq k\\ m,n\notin\{i,j\}\end{subarray}}(-1)^{m^{\prime}+n^{\prime}}u_{n,i}u_{j,m}\det(U^{\prime}_{\{n^{\prime}\},\{m^{\prime}\}})\] \[=-\left(u_{j,i}\det(U^{\prime})-u_{i}^{\prime T}\operatorname{adj}(U^{\prime})u^{\prime}_{j}\right)\] \[=-Q^{d}_{k,1}(x_{i},x_{j},x_{l_{1}},\ldots,x_{l_{k-2}}),\]
where \(u^{\prime}_{i}=(u_{1,i},\ldots,u_{k,i})^{T}\) with the first index running through all \(n\neq i,j\), \(u^{\prime}_{j}=(u_{j,1},\ldots,u_{j,k})^{T}\) with the second index running through all \(m\neq i,j\), and \(\{l_{1},\ldots,l_{k-2}\}=\{1,\ldots,k\}\setminus\{i,j\}\). For the last identity above, see Lemma 6.2. From Theorem 6.1 (or Lemma 6.2 specifically for this case), we know that
\(-Q^{d}_{k,1}(x_{i},x_{j},x_{l_{1}},\ldots,x_{l_{k-2}})\) is \(k\)-negative definite, so its contribution to \(I_{A^{2}}\), and therefore the contribution of
\[K(x_{1},\dots,x_{k})=\sum_{i\neq j}(-1)^{i+j}\det(U_{\{i\},\{j\}})=-\frac{\binom{ k}{2}}{k!}\sum_{\pi}Q_{k,1}^{d}(x_{\pi(1)},\dots,x_{\pi(k)}),\]
to \(I_{A^{2}}\) is nonpositive, so \(I_{A^{2}}\leq\frac{k}{(k-1)!d^{k-1}}\binom{d}{k-1}\).
It remains to determine which measures maximize \(I_{A^{2}}\). Due to Theorem 5.1, any maximizing measure \(\mu\) must be isotropic. To find the measures for which the second part vanishes, we return to Lemma 6.2. The necessary and sufficient condition for the \(Q_{k,1}^{d}\)-energy to vanish is the following: for any linearly independent \(x_{3},\dots,x_{k}\) from the support of \(\mu\), with \(X\) the linear space generated by them, the projection of \(\mu\) onto \(X^{\perp}\) must be balanced. In other words, the center of mass of \(\mu\) must belong to \(X\). An isotropic measure must be full-dimensional, so there must exist \(d\) linearly independent vectors in \(\operatorname{supp}(\mu)\). The intersection of all linear spaces generated by any \(k-2\) of these \(d\) vectors is only the origin, so the center of mass of \(\mu\) must be at the origin. Clearly, balanced isotropic measures attain this maximum.
**Remark 5**.: In the last part of the proof we could also have shown that \(\sigma\) is a maximizer, then noted that the potential is a polynomial of degree at most two in any of its variables, with some parts being of degree one, meaning that balanced isotropic measures are the maximizers, since they yield the same value of energy as \(\sigma\) (this is a direct analogy to spherical 2-designs).
We note that in the case \(k=2\), \(A(x,y)\) is simply the Euclidean distance between \(x\) and \(y\). If we were to split \(\|x-y\|^{2}\) into a linear part and a "volume" part as in the proof, then the volume is simply the distance from the origin to a point on the sphere, which is always 1. Thus, in that particular case, only the linear part matters, so the maximizers of \(I_{A^{2}}\) on the sphere are exactly the balanced measures, as shown in [2]. The case of \(k=3\) was handled by Theorem 4.3, where in this case \(Q_{k,1}^{d}=Y_{1,0,0}\) and the \((k-1)\)-volume squared function is \(V^{2}(x,y)=1-\langle x,y\rangle^{2}\), as discussed in Section 2.4.
Finally, we note that despite having this result for \(A^{2}\) on the sphere, Corollary 5.4 does not hold if \(A\) has \(k<d+1\) inputs. As \(s\to\infty\), we should expect the maximizer of \(I_{A^{s}}\) to be supported on some set such that \(A\) takes only its minimum (zero) and maximum values. This will be the set of vertices of a regular \((k-1)\)-simplex on some \((k-2)\)-dimensional subsphere. Such a measure is not isotropic on \(\mathbb{S}^{d-1}\) for \(k<d+1\), and so we can also not use the same proof method to determine maximizers. We do, however, conjecture that the maximizers of \(I_{A^{s}}\) are discrete when \(s>2\), for all \(3\leq k\leq d+1\).
## 8. Acknowledgments
We would like to thank Danylo Radchenko and David de Laat for fruitful discussions and useful suggestions. All of the authors express gratitude to ICERM for hospitality and support during the Collaborate@ICERM program in 2021. D. Bilyk has been supported by the NSF grant DMS-2054606 and Simons Collaboration Grant 712810. D. Ferizovic thankfully acknowledges support by the Methusalem grant of the Flemish Government. A. Glazyrin was supported by the NSF grant DMS-2054536. R.W. Matzke was supported by the Doctoral Dissertation Fellowship of the University of Minnesota, the Austrian Science Fund FWF project F5503 part of the Special Research Program (SFB) "Quasi-Monte Carlo Methods: Theory and Applications", and NSF Postdoctoral Fellowship Grant 2202877. O. Vlasiuk was supported by an AMS-Simons Travel Grant.
## 9. Appendix: Expressing \(A^{2}\) through Gram determinants
Let \(U\) be the Gram matrix of vectors \(\{x_{1},\dots,x_{k}\}\subset\mathbb{S}^{d-1}\) with entries \(u_{i,j}\), i.e. \(\langle x_{i},x_{j}\rangle=u_{i,j}\) for \(1\leq i,j\leq k\). The following lemma provides a linear-algebraic description of \(A^{2}\).
**Lemma 9.1**.: \[A^{2}(x_{1},\dots,x_{k})=-\frac{1}{((k-1)!\,)^{2}}\det\begin{pmatrix}U& \mathbf{1}\\ \mathbf{1}^{T}&0\end{pmatrix},\]
_where \(\mathbf{1}\) is the column vector of \(k\) ones._
Proof.: \(A^{2}\) can be found from the Gram matrix of the vectors \(x_{2}-x_{1},\ldots,x_{k}-x_{1}\).
\[A^{2}(x_{1},\ldots,x_{k}) =\frac{1}{((k-1)!)^{2}}\det\begin{pmatrix}\langle x_{2}-x_{1},x_{2} -x_{1}\rangle&\ldots&\langle x_{2}-x_{1},x_{k}-x_{1}\rangle\\ \vdots&\ddots&\vdots\\ \langle x_{k}-x_{1},x_{2}-x_{1}\rangle&\ldots&\langle x_{k}-x_{1},x_{k}-x_{1} \rangle\end{pmatrix}\] \[=\frac{1}{((k-1)!)^{2}}\det\begin{pmatrix}2-2u_{1,2}&\ldots&1+u_{ 2,k}-u_{1,2}-u_{1,k}\\ \vdots&\ddots&\vdots\\ 1+u_{k,2}-u_{1,k}-u_{1,2}&\ldots&2-2u_{1,k}\end{pmatrix}\] \[=\frac{-1}{((k-1)!)^{2}}\det\begin{pmatrix}0&0&\ldots&0&1\\ 0&2-2u_{1,2}&\ldots&1+u_{2,k}-u_{1,2}-u_{1,k}&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&1+u_{k,2}-u_{1,k}-u_{1,2}&\ldots&2-2u_{1,k}&0\\ 1&0&\ldots&0&0\end{pmatrix}\] \[=\frac{-1}{((k-1)!)^{2}}\det\begin{pmatrix}0&u_{1,2}&\ldots&u_{1, k}&1\\ u_{1,2}&2-2u_{1,2}&\ldots&1+u_{2,k}-u_{1,2}-u_{1,k}&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ u_{1,k}&1+u_{k,2}-u_{1,k}-u_{1,2}&\ldots&2-2u_{1,k}&0\\ 1&0&\ldots&0&0\end{pmatrix}.\]
Note that in the third equality, we created a \((k+1)\times(k+1)\) matrix whose determinant is the negative of the determinant of our original matrix, since the only nonzero entries in the added rows and columns are the ones in the upper right and lower left corners. This also means that inserting the \(u_{1,j}\)'s into the first row and column does not affect the determinant.
Now we add the first row and column to all rows and columns except for the last ones.
\[A^{2}=-\frac{1}{((k-1)!)^{2}}\det\begin{pmatrix}0&u_{1,2}&\ldots&u_{1,k}&1\\ u_{1,2}&2&\ldots&1+u_{2,k}&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ u_{1,k}&1+u_{k,2}&\ldots&2&1\\ 1&1&\ldots&1&0\end{pmatrix}.\]
We subtract the last column from all columns except for the first one, then add the bottom row to the top row, and see that
\[A^{2} =-\frac{1}{((k-1)!)^{2}}\det\begin{pmatrix}0&u_{1,2}-1&\ldots&u_{1,k}-1&1\\ u_{1,2}&1&\ldots&u_{2,k}&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ u_{1,k}&u_{k,2}&\ldots&1&1\\ 1&1&\ldots&1&0\end{pmatrix}\] \[=-\frac{1}{((k-1)!)^{2}}\det\begin{pmatrix}1&u_{1,2}&\ldots&u_{1,k}&1\\ u_{1,2}&1&\ldots&u_{2,k}&1\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ u_{1,k}&u_{k,2}&\ldots&1&1\\ 1&1&\ldots&1&0\end{pmatrix}=-\frac{1}{((k-1)!)^{2}}\det\begin{pmatrix}U&\mathbf{1}\\ \mathbf{1}^{T}&0\end{pmatrix}.\]
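As a quick check, for \(k=2\) the lemma gives
\[A^{2}(x_{1},x_{2})=-\det\begin{pmatrix}1&u_{1,2}&1\\ u_{1,2}&1&1\\ 1&1&0\end{pmatrix}=2-2u_{1,2}=\|x_{1}-x_{2}\|^{2},\]
the squared distance between the two points, in agreement with the discussion of the case \(k=2\) in Section 7.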
|
2309.01168 | Noise Measurement of a Wind Turbine using Thick Blades with Blunt
Trailing Edge | The noise generated by wind turbines can potentially cause significant harm
to the ecological environment and the living conditions of residents.
Therefore, a proper assessment of wind turbine noise is crucial. The IEC
61400-11 standard provides standardized guidelines for measuring turbine noise,
facilitating the comparison of noise characteristics among different wind
turbine models. This work aims to conduct a comprehensive noise measurement of
a 100kW wind turbine using thick blades with blunt trailing edge, which differs
from the typical turbines studied previously. The work takes into account the
unique design and dynamic characteristics of small-scale wind turbines and
adjusts the measurement accordingly, and deviations from the IEC standards
will be explicitly addressed. | Weicheng Xue, Bing Yang | 2023-09-03T13:16:10Z | http://arxiv.org/abs/2309.01168v1 | # Noise Measurement of a Wind Turbine using Thick Blades with Blunt Trailing Edge
###### Abstract
The noise generated by wind turbines can potentially cause significant harm to the ecological environment and the living conditions of residents. Therefore, a proper assessment of wind turbine noise is crucial. The IEC 61400-11 standard provides standardized guidelines for measuring turbine noise, facilitating the comparison of noise characteristics among different wind turbine models. This work aims to conduct a comprehensive noise measurement of a 100kW wind turbine using thick blades with blunt trailing edge, which differs from the typical turbines studied previously. The work takes into account the unique design and dynamic characteristics of small-scale wind turbines and adjusts the measurement accordingly, and deviations from the IEC standards will be explicitly addressed.
keywords: Noise measurement, wind turbine, Sound power spectrum, IEC 61400-11, blunt trailing edge
## 1 Introduction
The noise generated by wind turbines potentially has a significant impact on local residents' quality of life [1]. Therefore, a rational assessment of wind turbine noise is necessary. The IEC 61400-11 standard provides standardized guidelines for noise measurement of wind turbines [2], enabling the comparison of noise characteristics among different wind turbine models. Previous studies conducted by King et al. [3] measured four 2 MW wind turbines in Ireland continuously for 10 weeks, following the Irish regulations for measurement and calculation. They determined the critical wind speed for wind turbine operation, which is influenced by the turbine model,
background noise level, and measurement location. Furthermore, the critical wind speed differs between daytime and nighttime. Migliore et al. [4] conducted a systematic noise assessment of eight small-scale wind turbines with power ranging from 400W to 100kW at NREL. They pointed out that the acoustic characteristics of wind turbines vary significantly even at the same wind speed due to climatic factors, and the variability of background noise severely affects the accuracy of noise assessments. Martens et al. [5] presented five extensive measurement campaigns acquiring and analyzing the meteorological, acoustic and turbine-specific data at different locations in northern Germany, which lasted in total over 13 months. In order to obtain reliable measurements of the wind turbine noise, Gallo et al. [6] presented a method to estimate the immission and the residual noise components measured near a wind farm, which allows the evaluation of the noise impact produced by operational wind farms.
The distribution of wind resources in China [7] differs from that of Western countries [8; 9], and it is necessary to develop wind turbine noise measurement standards suitable for the local conditions, especially when considering that the area of China is very large and the distribution of wind resources is very complicated. However, the IEC 61400-11 standard can still serve as a general method for noise measurement, facilitating the comparison and communication among different wind turbine noise assessments. Bo et al. [10] developed an IEC 61400-11 program for wind turbine noise measurement using the LabVIEW platform and summarized the advantages of real-time measurements using a virtual platform.
To develop wind energy resources in low-wind-speed and typhoon-prone areas, the Institute of Engineering Thermophysics of the Chinese Academy of Sciences designed a 100kW small wind turbine in Zhangbei County, Hebei Province, with autonomously designed thick blades and blunt trailing edges. This work focuses on the standardized noise measurement of this small-scale wind turbine with thick, blunt-trailing-edge blades [11; 12; 13; 14]. The noise characteristics of such a turbine are different from those of turbines using thin blades with sharp trailing edges. It should be noted that, considering the different designs and dynamic characteristics of small-scale wind turbines compared to large-scale wind turbines, the noise measurement standards have been appropriately adjusted for this small-scale wind turbine. The procedures in which the measurements do not fully comply with the IEC standards will be explicitly addressed.
## 2 Characteristics of the Wind Turbine
### Blade Characteristics
The blades of the 100kW wind turbine were designed by the Institute of Engineering Thermophysics of the Chinese Academy of Sciences, with the remaining components completed by Shenyang Huarun Wind Power Co., Ltd. The blade design was based on the relevant parameters provided by Huarun for the 100kW wind turbine and was carried out with the blade design code HARP_opt published by NREL [15]. The blade profile at the maximum chord length adopts the blunt trailing edge airfoil CAS-W2-450 [16], while the blade tip uses the NACA0018 airfoil [17], and all other airfoils adopt the DU series [18].
### Main Technical Parameters
The wind turbine is a horizontal-axis machine with a downwind rotor configuration. The hub height is 26.2 m, and the horizontal distance from the rotor center to the tower axis is 0.9 m. The rotor radius is 10.092 m, and the turbine adopts pitch control and variable-speed operation. The tower is a cylindrical structure.
## 3 Natural Environment
The wind turbine is located in a large wind farm in Zhangbei County, Zhangjiakou City. To the south of the turbine there is a large photovoltaic power generation test base, while to the north there is a road several hundred meters away. Three abandoned small wind turbines, continuously idling, are located to the east, and the west side is mainly open shrubbery and grassland. The surface characteristics mainly consist of shrubs and grasslands. To the east, there is a secondary road connected to the main road, which may cause some sound reflections. The three abandoned small wind turbines to the east may introduce interference into the noise measurements, and when wind speeds are high they may adversely affect the measurement accuracy.
## 4 Test Instruments
1. A microphone with a baseboard and windscreen that meets the requirements of IEC 61400-11, as shown in Fig. 1(a).
2. NI digital acquisition system, with the PXIe-4496 data acquisition card offering a single-channel synchronous sampling frequency of 204.8kS/s, meeting the sampling frequency requirements for noise measurement in this experiment, as shown in Fig. 1(b).
3. Labview data acquisition and processing program, including time domain data sampling, FFT transform to the frequency domain, sound pressure level calculation modules, etc., shown in Fig. 2.
4. Other instruments, such as a monitor, laser rangefinder, and data cables.
## 5 Data Processing Procedure
The purpose of data processing is to obtain 1/3-octave sound power spectrum and overall power spectrum using statistical methods. There are two averaging methods used in this analysis: arithmetic averaging for non-acoustic data and energy averaging for acoustic data.
For the uncertainty calculation of the 1/3-octave sound power spectrum and overall power spectrum, this work follows the IEC 61400-11 standard and divides it into Type A uncertainty and Type B uncertainty. Type A uncertainty is obtained from statistical methods using multiple repeated measurements, while Type B uncertainty is derived from similar situations based on available information and experience. Finally, the Type A and Type B uncertainties are combined to obtain the combined standard uncertainty. It should be noted that the uncertainty of the acoustic measurement instrument chain cannot be accurately obtained due to the lack of instrument calibration certificates. In this work, it is assumed to be 0.2 dB. The values for other uncertainties are shown in Table 1 and Table 2.
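For instance, if the Type B components of Table 2 are treated as independent and combined in quadrature (the usual treatment of independent components), their joint contribution is
\[u_{B}=\sqrt{0.2^{2}+0.2^{2}+0.3^{2}+0^{2}+0.1^{2}+0^{2}+0.46^{2}}\approx 0.63\ \mathrm{dB};\]
the full combination used later also incorporates the Type A term and the covariance with wind speed.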
Figure 2: The Labview program for data acquisition and processing
Noise and wind speed are measured for 10 seconds each time, and the data points are classified and averaged according to wind speed bins, resulting in:
1. Average wind speed;
2. Average A-weighted 1/3-octave spectrum;
3. Corresponding standard uncertainty.
For each 1/3-octave band, the background noise spectrum and total noise spectrum at the bin center can be obtained by linear interpolation of the average values of adjacent bins. At each wind speed bin center, the total noise spectrum is corrected using the 1/3-octave background noise spectrum at the same wind speed bin center, resulting in the 1/3-octave spectrum of the wind turbine generator. If the difference between the sum of the 1/3-octave band spectrum of the total noise and the sum of the 1/3-octave band spectrum of the background noise falls within the range of 3 to 6 dB, this result is marked with an asterisk (*) in the report. If this difference is less than or equal to 3 dB, the result corresponding to that wind speed bin is not recorded.
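As an illustration of the acoustic operations described above, the following Python sketch energy-averages the 10-second levels within a wind speed bin and applies the background correction with the 3 dB and 6 dB acceptance rules to a pair of total and background levels; it is only a sketch of the procedure (the actual processing was implemented in the Labview program shown in Fig. 2), and the function and variable names are ours.

```python
import numpy as np

def energy_average(levels_db):
    """Energy-average a set of A-weighted levels in dB (used for acoustic data)."""
    levels_db = np.asarray(levels_db, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels_db / 10.0)))

def correct_for_background(total_db, background_db):
    """Subtract background-noise energy from the total-noise level at a bin center.

    Returns (corrected level in dB, flag): flag is '*' when the total-minus-
    background difference lies between 3 and 6 dB, '' when it exceeds 6 dB,
    and the result is discarded (None) when the difference is <= 3 dB.
    """
    diff = total_db - background_db
    if diff <= 3.0:
        return None, None   # not recorded, as described above
    corrected = 10.0 * np.log10(10.0 ** (total_db / 10.0)
                                - 10.0 ** (background_db / 10.0))
    return corrected, ("*" if diff < 6.0 else "")
```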
The details of the data processing procedure are further described in the flowchart shown in Fig. 3.
\begin{table}
\begin{tabular}{c c} \hline Uncertainty component & Standard uncertainty, dB \\ \hline Validation & 0.2 \\ Instrument & 0.2 \\ Board & 0.3 \\ Wind screen insertion loss & 0 \\ Distance and direction & 0.1 \\ Air absorption & 0 \\ Weather conditions & 0.46 \\ \hline \end{tabular}
\end{table}
Table 2: Type B uncertainty component relevant for apparent sound power spectra
\begin{table}
\begin{tabular}{c c} \hline Uncertainty component & Standard uncertainty, m/s \\ \hline Wind speed & 0.28 \\ \hline \end{tabular}
\end{table}
Table 1: Type B wind speed uncertainty component relevant for apparent sound power spectra
Figure 3: Flowchart showing the data processing procedure
## 6 Sound Power Level
It is important to select reasonable experimental data (discarding outliers), and make sure that noise and wind speed are classified into wind speed bins and averaged over a period of 10 seconds. The apparent sound power level and its uncertainty are calculated based on the reference wind speed at hub height, as shown in Table 3. Alternatively, based on the experimental conditions in this paper, a reasonable roughness length \(z_{0}\) = 0.05 m can be chosen, and the data processing can be carried out based on the wind speed at a reference height of 10 m using the logarithmic wind profile formula. Unless otherwise specified, the intermediate and final results analyzed and explained in this paper are based on the wind speed at hub height and will not be reiterated.
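For reference, assuming the standard neutral logarithmic profile \(V(z)\propto\ln(z/z_{0})\) (the specific form of the formula is not spelled out in the text), the hub-height and 10 m reference wind speeds are related by
\[V_{\mathrm{hub}}=V_{10}\,\frac{\ln(z_{\mathrm{hub}}/z_{0})}{\ln(10\,\mathrm{m}/z_{0})}=V_{10}\,\frac{\ln(26.2/0.05)}{\ln(10/0.05)}\approx 1.18\,V_{10}\]
for the hub height of 26.2 m and \(z_{0}=0.05\) m.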
From Fig. 4, it can be observed that the differences between the sound pressure level of the total noise and that of the background noise for the wind speed bins of 6 m/s, 7 m/s, 8 m/s, and 9 m/s are all greater than 6 dB. For the 10 m/s bin, the difference falls within the range of 3 to 6 dB, indicated by an asterisk in Table 3. Within this range, the apparent sound power level shows an increasing trend with increasing wind speed, and the rate of increase becomes faster. However, when the wind speed reaches 10 m/s, the apparent sound power level suddenly decreases. Analyzing the data for the 10 m/s bin marked with asterisks, it can be observed that the difference between the total noise and the background noise decreases, and the more rapidly growing background noise has a negative impact on the results. Additionally, from the 6 m/s bin to the 7 m/s bin, the apparent sound power level decreases slightly, by an amount much smaller than the uncertainty. This may be related to changes in wind direction during the measurements and a slight increase in background noise.
\begin{table}
\begin{tabular}{c c c} \hline Hub height wind speed, m/s & Apparent SPL, dB & Uncertainty, dB \\ \hline
6 & 96.65 & 1.58 \\
7 & 96.47 & 1.30 \\
8 & 97.02 & 1.02 \\
9 & 99.16 & 0.94 \\
10* & 98.40* & 1.74* \\ \hline \end{tabular}
\end{table}
Table 3: Apparent sound power level (SPL)
Based on the analysis above, we examine the relationship between the A-weighted sound pressure level of the total noise and the wind speed in the experiment, as shown in Fig. 4. The fitted curve in Fig. 4 also indicates that there is a large difference between the total noise and the background noise when the wind speed is low. As the wind speed increases toward the 10 m/s bin, the increase in the total noise slows down and even exhibits a downward trend, and there is indeed a significant reduction in the difference between the total noise and the background noise. The reason for this might be the lack of a secondary windscreen, or increased variability in wind speed and wind direction under high wind speed conditions, leading to slightly unstable test conditions for the wind turbine's dynamic response.
According to the IEC 61400-11 wind turbine noise measurement standard, both total noise and background noise should be processed in a similar way. Fig. 5 shows scatter plots of the A-weighted sound pressure spectrum in the 1/3-octave band of the total noise and background noise. Each wind speed bin contains 10 valid measurements. In general, as the wind speed increases, both the total noise and background noise increase, but this trend is not always consistent across all wind speed bins and frequency bands.
Figure 4: Scatter plot of total noise and background noise (A-weighted)
The A-weighted sound pressure spectra for the total noise and background noise from the 10 measurements are averaged over 1/3-octave frequency bands. The total noise and background noise at the center of each wind speed bin are obtained through linear interpolation of the average values of adjacent bins, as shown in Fig. 6. Statistically, within the same frequency band, higher wind speeds correspond to higher total noise and background noise levels. Additionally, small peaks can be observed around f = 60Hz and f = 315Hz.
Figure 5: Scattered data of A-weighted sound pressure spectra for the total noise and background noise
Fig. 7 shows the standard uncertainties of the sound pressure spectrum for the wind speed bin centers of total noise and background noise. The calculation of these uncertainties involves combining Type A and Type B uncertainties, incorporating covariance with wind speed, and substituting them into the formula for the standard uncertainty of the sound pressure spectrum at the bin center obtained through segmented linear interpolation. From Fig. 7, it can be observed that the standard uncertainties of the sound pressure spectra for both the total noise and background noise are larger at higher or lower wind speeds. The uncertainties for the 8 m/s and 9 m/s wind speed bins are the smallest. This indicates that relatively large measurement errors occur at lower and higher wind speeds.
Figure 6: A-weighted sound pressure spectra in the 1/3-octave band for the total noise and background noise
Fig. 8 shows the A-weighted corrected sound pressure spectrum and its uncertainties in the 1/3-octave band, obtained by correcting the total noise based on the corresponding background noise. Similar to the previous analysis, small peaks can be observed around f = 60 Hz and f = 315 Hz. Unlike the previous analysis, at lower frequencies the corrected sound pressure level for higher wind speeds is higher than that for lower wind speeds. However, as the frequency increases, the highest corrected sound pressure level does not necessarily correspond to the highest wind speed. In terms of uncertainties, the highest uncertainties occur at both lower and higher wind speeds, while the uncertainties are smaller for wind speeds of 8 m/s and 9 m/s. Fig. 9 shows the apparent sound power spectrum in the 1/3-octave band, which exhibits similar characteristics to Fig. 8(a) and will not be analyzed further.
Figure 7: Standard uncertainty of the sound pressure spectra of total noise and background noise
## 7 Conclusions
This study conducted noise measurements on a small-scale 100 kW wind turbine with large thickness and blunt trailing edge blades, following the IEC 61400-11 wind turbine noise measurement method. The characteristics
Figure 8: Corrected A-weighted sound pressure spectrum and its uncertainty in the 1/3-octave band of total noise
Figure 9: Apparent sound power spectrum in the 1/3-octave band |
2301.02243 | Machine Fault Classification using Hamiltonian Neural Networks | A new approach is introduced to classify faults in rotating machinery based
on the total energy signature estimated from sensor measurements. The overall
goal is to go beyond using black-box models and incorporate additional physical
constraints that govern the behavior of mechanical systems. Observational data
is used to train Hamiltonian neural networks that describe the conserved energy
of the system for normal and various abnormal regimes. The estimated total
energy function, in the form of the weights of the Hamiltonian neural network,
serves as the new feature vector to discriminate between the faults using
off-the-shelf classification models. The experimental results are obtained
using the MaFaulDa database, where the proposed model yields a promising area
under the curve (AUC) of $0.78$ for the binary classification (normal vs
abnormal) and $0.84$ for the multi-class problem (normal, and $5$ different
abnormal regimes). | Jeremy Shen, Jawad Chowdhury, Sourav Banerjee, Gabriel Terejanu | 2023-01-04T19:07:01Z | http://arxiv.org/abs/2301.02243v1 | # Machine Fault Classification using Hamiltonian Neural Networks
###### Abstract
A new approach is introduced to classify faults in rotating machinery based on the total energy signature estimated from sensor measurements. The overall goal is to go beyond using black-box models and incorporate additional physical constraints that govern the behavior of mechanical systems. Observational data is used to train Hamiltonian neural networks that describe the conserved energy of the system for normal and various abnormal regimes. The estimated total energy function, in the form of the weights of the Hamiltonian neural network, serves as the new feature vector to discriminate between the faults using off-the-shelf classification models. The experimental results are obtained using the MaFaulDa database, where the proposed model yields a promising area under the curve (AUC) of 0.78 for the binary classification (normal vs abnormal) and 0.84 for the multi-class problem (normal, and 5 different abnormal regimes).
Physics-Informed Neural Networks, Supervised Learning, Energy Conservation, Dynamical Systems
## 1 Introduction
A cost-effective method for ensuring component reliability is to enhance the current schedule-based maintenance approach with deterministic component health and usage data to inform selective and targeted maintenance activities. Condition monitoring and fault diagnosis systems are required to guard against unexpected failures in safety-critical and production applications. Early fault detection can reduce unplanned failures, which will in turn reduce life cycle costs and increase readiness and mission assurances. Irrespective of different machinery, manufacturing tools like CNC machines, heavy equipment, aircraft, helicopters, space vehicles, car engines, and machines generate vibrations. The analysis of these vibration data is the key to detecting machinery degradation before the equipment or the structure fails. Machine faults usually leave key indications of its internal signature through the changes in modal parameters. For example, they may change the natural frequency of the system, generate unique damping characteristics, degradation in stiffness, generation of acoustic frequencies, etc. The defects and faults in the system may also generate a different form of energy transduction from mechanical to electrical or to electromagnetic energy, which leaves unique signatures. The statistical features of vibration signals in the time, frequency, and time-frequency domains each have different strengths for detecting fault patterns, which has been thoroughly studied [21, 22, 23]. Various approaches have been proposed to extract features from these vibration signals using time-domain and frequency-domain analysis [18], Fourier and wavelet transform [11, 10], and manifold learning [15]. It is also shown that integration and hybridization of feature extraction algorithms can yield synergies that combine strengths and eliminate weaknesses [21, 16]. Most of the work done in this area is based on data collected from vibration sensors [21, 22, 20], which are cheap and enable non-intrusive deployments. However, they generate huge datasets. As these data sets are very big in nature finding the above-mentioned unique features, and their respective paraxial contributions are extremely challenging. Hence, recently several feature extraction-driven machine learning algorithms are deployed to solve this challenge [1, 23, 24, 25].
One of the challenges with building and deploying machine learning models to support decision-making is achieving a level of generalization that allows us
to learn on one part of the data distribution and predict on another. This challenge is amplified when learning using data from physical systems, as machine learning models such as neural networks (NN) capture an approximation of the underlying physical laws. Recently, new approaches have emerged under the umbrella of physics-informed neural networks (PINN) [10] to train NN that not only fit the observational data but also respect the underlying physics. This work leverages the Hamiltonian neural network (HNN) [11] to learn the Hamiltonian equations of energy-conserving dynamical systems from noisy data.
HNNs are used to characterize the total energy of rotating machinery, which is found in a wide range of applications such as power turbines, helicopters, and CNC machines, to name a few. In Ref. [12], the authors proposed a similarity-based model to calculate the similarity score of a signal with respect to a set of prototype signals that characterize a target operating condition. These similarity-score features are used in conjunction with time and spectral domain features to classify the behavior of the system using off-the-shelf classification models, such as random forests.
The main contribution of this work is to be intentional with respect to the underlying physics of the rotating machinery when generating discriminatory features. Namely, the conservation of energy is used as an inductive bias in the development and training of the HNN. While these mechanical systems are dissipative in nature, we assume that for short periods of time, the energy of the system is conserved due to the energy injected by the motor. The features derived by our approach are in the form of the weights of the HNN, which characterize the total energy of the system. In other words, we attempt to identify the operating regime based on the energy function. As with the previous approaches, these physics-informed features are then used to train off-the-shelf classifiers, such as logistic regressions and random forests, to predict the condition of the mechanical system. The experiments are performed on the Machinery Fault Database (MaFaulDa)1 from the Federal University of Rio de Janeiro. The proposed system yields a promising area under the curve (AUC) of 0.78 for the binary classification (normal vs abnormal) and 0.84 for the multi-class problem (normal, and 5 different abnormal regimes).
Footnote 1: [http://www02.smt.ufrj.br/~offshore/mfs/page_01.html](http://www02.smt.ufrj.br/~offshore/mfs/page_01.html)
This paper is structured as follows: Section 2 introduces the background on the HNN and the MaFaulDa dataset. Section 3 presents our proposed approach to derive physics-informed features to classify operating conditions. Section 4 shows the empirical evaluations and Section 5 summarizes our findings.
## 2 Background
### Hamiltonian Neural Networks
The Hamiltonian equations of motion, Eq. 1, describe the mechanical system in terms of canonical coordinates, position \(\mathbf{q}\) and momentum \(\mathbf{p}\), and the Hamiltonian of the system \(\mathcal{H}\).
\[\frac{d\mathbf{q}}{dt}=\frac{\partial\mathcal{H}}{\partial\mathbf{p}},\quad \frac{d\mathbf{p}}{dt}=-\frac{\partial\mathcal{H}}{\partial\mathbf{q}} \tag{1}\]
Instead of using neural networks to directly learn the Hamiltonian vector field \(\left(\frac{\partial\mathcal{H}}{\partial\mathbf{p}},-\frac{\partial\mathcal{ H}}{\partial\mathbf{q}}\right)\), the approach used by Hamiltonian neural networks is to learn a parametric function in the form of a neural network for the Hamiltonian itself [11]. This distinction accounts for learning the exact quantity of interest and it allows us to also easily obtain the vector field by taking the derivative with respect to the canonical coordinates via automatic differentiation. Given the training data, the parameters of the HNN are learned by minimizing the following loss function, Eq. 2.
\[\mathcal{L}=\left\|\frac{\partial\mathcal{H}}{\partial\mathbf{p}}-\frac{d \mathbf{q}}{dt}\right\|_{2}+\left\|\frac{\partial\mathcal{H}}{\partial\mathbf{ q}}+\frac{d\mathbf{p}}{dt}\right\|_{2} \tag{2}\]
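As an illustration of Eqs. 1 and 2, the following PyTorch sketch parametrizes \(\mathcal{H}_{\theta}(q,p)\) with a small multilayer perceptron and evaluates a squared-error variant of the loss; the layer sizes and activation are illustrative assumptions and are not taken from the original HNN configuration.

```python
import torch
import torch.nn as nn

class HNN(nn.Module):
    """MLP that parametrizes a scalar Hamiltonian H_theta(q, p)."""
    def __init__(self, dim=2, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):            # x = (q, p), shape (batch, 2)
        return self.net(x)           # H_theta(q, p), shape (batch, 1)

def hnn_loss(model, x, dxdt):
    """Match the symplectic gradient of H_theta to the observed (dq/dt, dp/dt)."""
    x = x.clone().requires_grad_(True)
    H = model(x)
    dH = torch.autograd.grad(H.sum(), x, create_graph=True)[0]  # (dH/dq, dH/dp)
    dq_pred, dp_pred = dH[:, 1], -dH[:, 0]                      # Hamilton's equations
    return ((dq_pred - dxdt[:, 0]) ** 2 + (dp_pred - dxdt[:, 1]) ** 2).mean()
```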
### Machinery Fault Database (MaFaulDa)
A comprehensive set of machine faults and vibration data was needed for the development and testing of the Hamiltonian-based feature extraction and classification of different operating states with damage/defects. The Machinery Fault Database (MaFaulDa) consists of a comprehensive set of vibration data from a SpectraQuest Alignment-Balance-Vibration System, which includes multiple types of faults, see Fig. 1. The equipment has two shaft-supporting bearings, a rotor, and a motor. Accelerometers are attached to the bearings to measure the vibration in the radial, axial, and tangential directions of each bearing. In addition, measurements from a tachometer (for measuring system rotation frequency) and a microphone (for capturing sound during system operation) are also included in the database. The database includes 10 different operating states and a total of 1951 sets of vibration data: (1) normal operation, (2) rotor imbalance, (3) underhang bearing fault: outer track, (4) underhang bearing fault: rolling
elements, (5) underhang bearing fault: inner track, (6) overhang bearing fault: outer track, (7) overhang bearing fault: rolling elements, (8) overhang bearing fault: inner track, (9) horizontal shaft misalignment, (10) vertical shaft misalignment.
**Normal Operation**. There are 49 sets of data from the system operating under normal conditions without any fault, each with a fixed rotating speed within the range from 737 rpm to 3686 rpm with steps of approximately 60 rpm.
**Rotor Imbalance**. To simulate different degrees of imbalanced operation, distinct loads of (6, 10, 15, 20, 25, 30, 35) g were coupled to the rotor. The database includes a total of 333 different imbalance-operation scenarios with combinations of loads and rotation frequencies.
**Bearing Faults**. As one of the most complex elements of the machine, the rolling bearings are the most susceptible elements to fault occurrence. Three defective bearings, each one with a distinct defective element (outer track, rolling elements, and inner track), were placed one at a time in each of the bearings. The three masses of (6, 10, 20) g were also added to the rotor to induce a combination of rotor imbalance and bearing faults with various rotation frequencies. There is a total of 558 underhang bearing fault scenarios and 513 overhang bearing fault scenarios.
**Horizontal Shaft Misalignment**. Horizontal shaft misalignment faults were induced by shifting the motor shaft horizontally of (0.5, 1.0, 1.5, 2.0) mm. The database includes a total of 197 different scenarios with combinations of horizontal shaft misalignment and rotation frequencies.
**Vertical Shaft Misalignment**. Vertical shaft misalignment faults were induced by shifting the motor shaft vertically of (0.51, 0.63, 1.27, 1.4, 1.78, 1.9) mm. The database includes a total of 301 different scenarios with combinations of vertical shaft misalignment and rotation frequencies.
## 3 Methodology
The approach proposed to identify the operating state of the rotating machinery is to learn the total energy of the system from vibration data using HNN and use the parameters of the Hamiltonian as discriminating features. The intuition is that the total energy signature is different under various faults. The main assumption that we make is that the energy of the system is conserved for short periods of time thanks to the energy injected by the motor, which allows us to use the HNN [1]. The overall model architecture is shown in Fig. 2.
The first step is to develop a set of generalized coordinates from the raw vibration data using an autoencoder trained only on the data from normal conditions. The encoder NN is then used to generate a low dimensional representation (2D in our case) from the 8 vibration measurements taken in any operating regime. This approach in developing arbitrary coordinates has been proposed in the original HNN paper [1].
Using the newly developed coordinates, the second step is to train an HNN for each sequence of data generated at 50 kHz sampling rate during 5 s. The parameters \(\theta\) of the Hamiltonian \(\mathcal{H}_{\theta}\) fully characterize the energy function of the operating state and they can be used to train a classifier.
The parameterization of the Hamiltonian is high-dimensional (\(41,200\) weights in our case) as it depends on the number of layers and hidden neurons per layer chosen in HNN. As a result, we have chosen to reduce its dimension using principal component analysis (PCA) before training a classifier as a random forest in the last modeling step.
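One plausible way to implement this final step with scikit-learn is sketched below; the file names, the number of retained principal components, and the forest size are illustrative placeholders, since they are not specified in the text.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical inputs: one flattened HNN weight vector (length 41,200) per
# 5-second recording, together with its operating-state label.
W = np.load("hnn_weights.npy")   # shape (n_recordings, 41200)
y = np.load("labels.npy")        # shape (n_recordings,)

clf = make_pipeline(PCA(n_components=50),
                    RandomForestClassifier(n_estimators=200, random_state=0))
clf.fit(W, y)
```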
## 4 Numerical Results
The proposed fault classification system has been used with the MaFaulDa dataset and a \(70:30\) split into train and test data. Given the imbalance of collected data, namely 49 datasets recorded for normally operating motors vs. 1800+ datasets recorded for all faulty operating motors, we have used the synthetic minority over-sampling technique (SMOTE) [1] to create synthetic data points for the minority class. The PyCaret2 framework was used to develop the classifiers and preprocess the HNN features.
Footnote 2: [https://pycaret.org](https://pycaret.org)
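For illustration, the SMOTE resampling step can be reproduced with the imbalanced-learn package (the variable names below are placeholders; in this work the resampling and preprocessing were handled through PyCaret):

```python
from imblearn.over_sampling import SMOTE

# X_train: PCA-reduced HNN features, y_train: operating-state labels (placeholders)
X_resampled, y_resampled = SMOTE(random_state=0).fit_resample(X_train, y_train)
```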
Two different tasks are considered. The first is the binary classification where we are discriminating between normal and abnormal conditions using a random forest, and the second is the multi-class problem where we are discriminating using a logistic regression between the classes listed in Table 1, where class 0 is the normal regime.
The receiver operating characteristic (ROC) curves are provided for both tasks in Figs. 3 and 4 respectively. The macro-averaged AUC calculated on the test data is 0.78 for the binary classification and 0.84 for the multi-class problem and the F1 score is 0.96 and 0.51 respectively, which demonstrates the viability of physics-informed features from HNN to
capture the state of the system. These classification problems are imbalanced due to the skewed distribution of examples across the classes and, as a result, we have chosen not to report the accuracy as it was reported in prior work [10, 11]. We note however that Ref. [10] reports an F1-score of 0.99 on a 10-fold cross-validation exercise, which is higher than the 0.96 on our binary classification, and that our multi-class F1-score is lower due to the aggregation of bearing fault classes.
Table 1 shows the AUC for pairwise classification between each unique defective operating condition and the normal condition. Fig. 6 shows the phase spaces of 10 different operating conditions (1 normal and 9 faulty). Interestingly, among all the pairwise comparisons, the model finds the discrimination between normal and horizontal-misalignment regimes rather challenging, which we plan to further explore in future studies. We do expect that the faults introduced generate slight changes in the phase portraits of various regimes, see Fig. 6. However, we find qualitatively that the phase portrait of overhang/ball-fault is significantly different than the rest, which suggests that the sub-classes of overhang, namely ball-fault, cage-fault, and outer-race should be treated as classes on their own.
**Discussion on the effect of rotation frequency on the Hamiltonian**. Fig. 5 shows the Hamiltonian of normally operating motors operating at different speeds. Interestingly, even though this has not been enforced, the general structure of the Hamiltonian vector field remains largely the same across various speeds, while the magnitude of the Hamiltonian increases at higher speeds as expected. It can be concluded in this case that the vector field is dependent on the operating condition, and the magnitude is dependent on the operating speed.
**Discussion on HNN on Dissipative Systems**. The HNN has the ability to learn the total energy of a number of systems [1], including an ideal mass-spring system. Although the HNN is designed to conserve energy, it is interesting to consider what the HNN learns from dissipative systems. We believe that the methodology is more broadly applicable and it applies also when this assumption does not hold.
We have used a mass-spring-damper to experiment with the behavior of HNN for dissipative systems. This is a non-conservative system and the Hamiltonian formulated by the HNN is not a conventional solution to the mass-spring-damper system, the conserved quantity is not the total energy, and the generalized coordinates are not position and momentum defined by classical mechanics. Nevertheless, a qualitative analysis of the trajectories shows that the HNN creates unique solutions for each value of the damping ratio, see Fig. 7. While we are unable to use conventional physics to understand the results of the HNN on the mass-spring-damper system, it is evident that the results can be used to discriminate between the different systems.
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline Class & **normal vs X** & **AUC** & **F1-score** \\ \hline
1 & horizontal-misalign. & 0.59 & 0.80 \\ \hline
2 & imbalance & 0.92 & 0.95 \\ \hline
3 & overhang & 0.85 & 0.85 \\ \hline
4 & underhang & 0.80 & 0.88 \\ \hline
5 & vertical-misalign. & 0.91 & 0.92 \\ \hline \end{tabular}
\end{table}
Table 1: Pairwise classification - results on testing set
Figure 1: SpectraQuest System: Alignment-Balance-Vibration
## 5 Conclusions
A novel predictive model is introduced to discriminate between normal and abnormal operating regimes of rotating machinery. The model is based on the total energy signature of the system learned using a Hamiltonian Neural Network. The performance measures obtained from the experimental data suggest that the proposed physics-informed features are an excellent candidate for machine fault classification.
Figure 4: Results on test set - multi-class problem
Figure 3: Results on test set - binary classification
Figure 2: The proposed fault classification model
## Acknowledgement
Research was sponsored by the National Institute of Food and Agriculture under Grant Number 2017-67017-26167 and by the Army Research Office under Grant Number W911NF-22-1-0035. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office, the National Institute of Food and Agriculture, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
Figure 5: The effect of rotation frequency on the Hamiltonian
Figure 6: Phase portraits of various operating conditions
Figure 7: HNN results on variable damping ratios |
2303.13537 | Toxicity and Cultural Entrenchment in Peer-Production Communities:
Toward a Handbook on Intelligent System Design | Toxicity and abuse are common in online peer-production communities. The
social structure of peer-production communities that aim to produce accurate
and trustworthy information require some conflict and gate-keeping to spur
content production and curation. However, conflict and gate-keeping often
devolve into hierarchical power structures which punish newcomers and lock out
marginalized groups through entrenched cultural norms. Community administrators
often focus on content quality, rather than consideration for all user safety,
to promote community growth and survival. Once toxic cultural norms dominate a
peer-production community, it is very difficult for community administrators to
stop these behaviors from undermining inclusive peer-production. We propose
developing a "handbook of intelligent system design" that attempts to frame
design protocols to better read user-community culture and accurately
distinguish toxic negative interactions from beneficial conflict. | Chris Blakely, Andrew Vargo | 2023-03-15T09:55:57Z | http://arxiv.org/abs/2303.13537v1 | Toxicity and Cultural Entrenchment in Peer-Production Communities: Toward a Handbook on Intelligent System Design
###### Abstract.
Toxicity and abuse are common in online peer-production communities. The social structure of peer-production communities that aim to produce accurate and trustworthy information require some conflict and gate-keeping to spur content production and curation. However, conflict and gate-keeping often devolve into hierarchical power structures which punish newcomers and lock out marginalized groups through entrenched cultural norms. Community administrators often focus on content quality, rather than consideration for all user safety, to promote community growth and survival. Once toxic cultural norms dominate a peer-production community, it is very difficult for community administrators to stop these behaviors from undermining inclusive peer-production. We propose developing a "handbook of intelligent system design" that attempts to frame design protocols to better read user-community culture and accurately distinguish toxic negative interactions from beneficial conflict.
toxicity, peer-production communities, community culture, online interaction
## 1. Introduction: Social Conflict and Toxic Conflict
Conflict1 is often conflated with toxicity and can sometimes lead to abusive user behavior, but not all conflict or negative interactions are or will become toxic (Bakely et al., 2016; Bakely et al., 2016). Social conflict and tribalism develop for various reasons irrespective of user perceptions of harassment (Bakely et al., 2016; Bakely et al., 2016). Since some form of conflict in peer-production communities is necessary to stimulate discussion, some tension is important to the community culture. There is trouble however, when the community becomes so entrenched as to not allow newcomers to join the community. This can then lead to a situation in which marginalized groups are effectively banned from full participation in the system. The community may try to mitigate this behavior with campaigns and new rules, but this will often fail (Bakely et al., 2016).
Footnote 1: This manuscript was submitted to The CHI 23 Workshop on Combating Toxicity, Harassment, and Abuse in Online Social Spaces, April 23–28, 2023, Hamburg, Germany
We focus on peer-production communities, which we define as communities where users interact with each other to produce valuable content for a specific goal or interest, because these communities play a major role in online socialization and professional development. In these communities, conflicts occur because of 1. debate/discussion, 2. user correction, or 3. failure to follow community standards (Bakely et al., 2016). In this position paper, we assume that the peer-production system seeks to be fully inclusive.
## 2. Cultural Design and Community-Building: Opportunities and Limitations
Cultural entrenchment is inherent to the growth of large, complex communities. Addressing toxic conflicts early on in peer-production communities can be hindered by different factors. Administrators, moderators and bystanders of peer-production communities may under-prioritize addressing conflicts that can normalize toxicity because of a lack of urgency (Bakely et al., 2016) or a need for greater context. This means that administrators and moderators may be too late to stop toxic
norms from becoming culturally entrenched. However, we believe that it is possible to prevent this from occurring in the first place through informed design.
We propose developing a "Handbook for Intelligent System Design" that gathers effective strategies from past community case studies across different domains, in addition to findings from cultural evolution [6, 11], psychology [4, 5], sociology [2, 7, 12], and human-computer interaction [1, 3, 8]. We suggest the following framework (see Fig. 1) as a foundation for evaluating cultural norms and interactions in peer-production communities. Limitations include domain issues for different communities, differing cultural views from users on what constitutes bullying or harassment, optimal modes for delivering changes and the impact of varying degrees of anonymity. Opportunities include the large number of communities with problems, tools to document and map when and how problems occur, means to observe mitigation attempts and measure how communities feel about themselves.
|
2302.04362 | Disentangling Learning Representations with Density Estimation | Disentangled learning representations have promising utility in many
applications, but they currently suffer from serious reliability issues. We
present Gaussian Channel Autoencoder (GCAE), a method which achieves reliable
disentanglement via flexible density estimation of the latent space. GCAE
avoids the curse of dimensionality of density estimation by disentangling
subsets of its latent space with the Dual Total Correlation (DTC) metric,
thereby representing its high-dimensional latent joint distribution as a
collection of many low-dimensional conditional distributions. In our
experiments, GCAE achieves highly competitive and reliable disentanglement
scores compared with state-of-the-art baselines. | Eric Yeats, Frank Liu, Hai Li | 2023-02-08T22:37:33Z | http://arxiv.org/abs/2302.04362v1 | # Disentangling Learning Representations
###### Abstract
Disentangled learning representations have promising utility in many applications, but they currently suffer from serious reliability issues. We present Gaussian Channel Autoencoder (GCAE), a method which achieves reliable disentanglement via flexible density estimation of the latent space. GCAE avoids the curse of dimensionality of density estimation by disentangling subsets of its latent space with the Dual Total Correlation (DTC) metric, thereby representing its high-dimensional latent joint distribution as a collection of many low-dimensional conditional distributions. In our experiments, GCAE achieves highly competitive and reliable disentanglement scores compared with state-of-the-art baselines.
## 1 Introduction
The notion of disentangled learning representations was introduced by Bengio et al. (2013) - it is meant to be a robust approach to feature learning when trying to learn more about a distribution of data \(X\) or when downstream tasks for learned features are unknown. Since then, disentangled learning representations have been proven to be extremely useful in the applications of natural language processing Jain et al. (2018), content and style separation John et al. (2018), drug discovery Polykovskiy et al. (2018); Du et al. (2020), fairness Sarhan et al. (2020), and more.
Density estimation of learned representations is an important ingredient to competitive disentanglement methods. Bengio et al. (2013) state that representations \(\mathbf{z}\sim Z\) which are disentangled should maintain as much information of the input as possible while having components which are mutually _invariant_ to one another. Mutual invariance motivates seeking representations of \(Z\) which have _independent_ components extracted from the data, necessitating some notion of \(p_{Z}(\mathbf{z})\).
Leading unsupervised disentanglement methods, namely \(\beta\)-VAE Higgins et al. (2016), FactorVAE Kim and Mnih (2018), and \(\beta\)-TCVAE Chen et al. (2018) all learn \(p_{Z}(\mathbf{z})\) via the same variational Bayesian framework Kingma and Welling (2013), but they approach making \(p_{Z}(\mathbf{z})\) independent with different angles. \(\beta\)-VAE indirectly promotes independence in \(p_{Z}(\mathbf{z})\) via enforcing low \(D_{\mathrm{KL}}\) between the representation and a factorized Gaussian prior, \(\beta\)-TCVAE encourages representations to have low Total Correlation (TC) via an ELBO decomposition and importance weighted sampling technique, and FactorVAE reduces TC with help from a monolithic neural network estimate. Other well-known unsupervised methods are Annealed \(\beta\)-VAE Burgess et al. (2018), which imposes careful relaxation of the information bottleneck through the VAE \(D_{\mathrm{KL}}\) term during training, and DIP-VAE I & II Kumar et al. (2017), which directly regularize the covariance of the learned representation. For a more in-depth description of related work, please see Appendix D.
While these VAE-based disentanglement methods have been the most successful in the field, Locatello et al. (2019) point out serious reliability issues shared by all. In particular, increasing disentanglement pressure during training doesn't tend to lead to more independent representations, there currently aren't good unsupervised indicators of disentanglement, and no method consistently dominates the others across all datasets. Locatello et al. (2019) stress the need to find the right inductive biases in order for unsupervised disentanglement to truly deliver.
We seek to make disentanglement more reliable and high-performing by incorporating new inductive biases into our proposed method, Gaussian Channel Autoencoder (GCAE). We shall explain them in
more detail in the following sections, but to summarize: GCAE avoids the challenge of representing high-dimensional \(p_{Z}(\mathbf{z})\) via disentanglement with Dual Total Correlation (rather than TC) and the DTC criterion is augmented with a scale-dependent latent variable arbitration mechanism. This work makes the following contributions:
* Analysis of the TC and DTC metrics with regard to the curse of dimensionality which motivates use of DTC and a new feature-stabilizing arbitration mechanism
* GCAE, a new form of noisy autoencoder (AE) inspired by the Gaussian Channel problem, which permits application of flexible density estimation methods in the latent space
* Experiments1 which demonstrate competitive performance of GCAE against leading disentanglement baselines on multiple datasets using existing metrics
Footnote 1: Code available at [https://github.com/ericyeats/gcae-disentanglement](https://github.com/ericyeats/gcae-disentanglement)
## 2 Background and Initial Findings
To estimate \(p_{Z}(\mathbf{z})\), we introduce a discriminator-based method which applies the density-ratio trick and the Radon-Nikodym theorem to estimate density of samples from an unknown distribution. We shall demonstrate in this section the curse of dimensionality in density estimation and the necessity for representing \(p_{Z}(\mathbf{z})\) as a collection of conditional distributions.
The optimal discriminator neural network introduced by Goodfellow et al. (2014a) satisfies:
\[\operatorname*{arg\,max}_{D}\mathbb{E}_{\mathbf{x}_{r}\sim X_{real}}\left[\log D(\mathbf{x}_{r})\right]+\mathbb{E}_{\mathbf{x}_{f}\sim X_{fake}}\left[\log\left(1-D(\mathbf{x}_{f})\right)\right]\triangleq D^{*}(\mathbf{x})=\frac{p_{real}(\mathbf{x})}{p_{real}(\mathbf{x})+p_{fake}(\mathbf{x})}\]
where \(D(\mathbf{x})\) is a discriminator network trained to differentiate between "real" samples \(\mathbf{x}_{r}\) and "fake" samples \(\mathbf{x}_{f}\). Given the optimal discriminator \(D^{*}(\mathbf{x})\), the density-ratio trick can be applied to yield \(\frac{p_{real}(\mathbf{x})}{p_{fake}(\mathbf{x})}=\frac{D^{*}(\mathbf{x})}{1- D^{*}(\mathbf{x})}\). Furthermore, the discriminator can be supplied conditioning variables to represent a ratio of conditional distributions Goodfellow et al. (2014b); Makhzani et al. (2015).
Consider the case where the "real" samples come from an _unknown_ distribution \(\mathbf{z}\sim Z\) and the "fake" samples come from a _known_ distribution \(\mathbf{u}\sim U\). Permitted that both \(p_{Z}(\mathbf{z})\) and \(p_{U}(\mathbf{u})\) are finite and \(p_{U}(\mathbf{u})\) is nonzero on the sample space of \(p_{Z}(\mathbf{z})\), the optimal discriminator can be used to retrieve the unknown density \(p_{Z}(\mathbf{z})=\frac{D^{*}(\mathbf{z})}{1-D^{*}(\mathbf{z})}p_{U}(\mathbf{ z})\). In our case where \(\mathbf{u}\) is a uniformly distributed variable, this "transfer" of density through the optimal discriminator can be seen as an application of the Radon-Nikodym derivative of \(p_{Z}(\mathbf{z})\) with reference to the Lebesgue measure. Throughout the rest of this work, we employ discriminators with uniform noise and the density-ratio trick in this way to recover unknown distributions.
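As a concrete illustration of this recipe, the short sketch below (our own minimal example, not code from the paper) trains a small PyTorch discriminator to separate samples of an "unknown" 2-D distribution from uniform reference samples on \((-4,4)^{2}\), and then recovers the unknown density as \(p_{Z}(\mathbf{z})\approx\frac{D(\mathbf{z})}{1-D(\mathbf{z})}\,p_{U}(\mathbf{z})\). The network size, optimizer settings, and sample counts are illustrative choices only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, n = 2, 4096
z_real = torch.randn(n, dim)                 # samples from the "unknown" p_Z (a 2-D Gaussian here)
u_fake = 8.0 * torch.rand(n, dim) - 4.0      # reference samples from Unif(-4, 4)^dim
p_u = (1.0 / 8.0) ** dim                     # known uniform density on (-4, 4)^dim

disc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                     nn.Linear(64, 64), nn.ReLU(),
                     nn.Linear(64, 1))       # outputs the logit of D(x) = P(x is "real")
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for _ in range(2000):                        # standard binary cross-entropy discriminator training
    opt.zero_grad()
    logits = torch.cat([disc(z_real), disc(u_fake)])
    labels = torch.cat([torch.ones(n, 1), torch.zeros(n, 1)])
    bce(logits, labels).backward()
    opt.step()

with torch.no_grad():                        # density-ratio trick: p_Z(z) ~ D/(1-D) * p_U(z)
    d = torch.sigmoid(disc(z_real)).clamp(max=1 - 1e-6)
    p_z_est = d / (1.0 - d) * p_u
print(p_z_est[:5].squeeze())
```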
This technique can be employed to recover the probability density of an \(m\)-dimensional isotropic Gaussian distribution. While it works well in low dimensions (\(m\leq 8\)), the method inevitably fails as \(m\) increases. Figure 1(a) depicts several experiments of increasing \(m\) in which the KL-divergence between the true and estimated distributions is plotted against training iteration. When the number of data samples is finite and the dimension \(m\) exceeds a certain threshold, the probability of there being any uniform samples in the neighborhood of the Gaussian samples swiftly approaches zero, causing the density-ratio trick to fail.
This is a well-known phenomenon called the _curse of dimensionality_ of density estimation. In essence, as the dimensionality of a joint distribution increases, concentrated joint data quickly become isolated within an extremely large space. The limit \(m\leq 8\) is consistent with the limits of other methods such as kernel density estimation (Parzen-Rosenblatt window).
Fortunately, the same limitation does not apply to conditional distributions of many jointly distributed variables. Figure 1(b) depicts an experiment similar to the first in which \(m-1\) variables are independent Gaussian distributed, but the last variable \(\mathbf{z}_{m}\) follows the distribution \(\mathbf{z}_{m}\sim\mathcal{N}(\mu=(m-1)^{-\frac{1}{2}}\sum_{i=1}^{m-1}\mathbf{z}_{i},\ \sigma^{2}=\frac{1}{m})\) (i.e., the last variable is Gaussian distributed with its mean determined by the observations of the other variables). The marginal distribution of each component is
Gaussian, just like the previous example. While it takes more iterations to bring the KL-divergence between the true and estimated conditional distribution to zero, it is not limited by the curse of dimensionality. Hence, we assert that conditional distributions can capture complex relationships between subsets of many jointly distributed variables while avoiding the curse of dimensionality.
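For concreteness, the jointly dependent data used in this second experiment can be generated as in the sketch below; the dimension and sample count are arbitrary illustrative values rather than the paper's settings.

```python
import torch

torch.manual_seed(0)
m, n = 16, 10000
z_rest = torch.randn(n, m - 1)                            # z_1, ..., z_{m-1}: independent N(0, 1)
mu_m = z_rest.sum(dim=1, keepdim=True) / (m - 1) ** 0.5   # conditional mean of the last coordinate
z_m = mu_m + (1.0 / m) ** 0.5 * torch.randn(n, 1)         # z_m | z_{\m} ~ N(mu_m, 1/m)
z = torch.cat([z_rest, z_m], dim=1)                       # strongly dependent joint, Gaussian marginals
print(z.shape, float(z[:, -1].mean()), float(z[:, -1].std()))
```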
## 3 Methodology
### Analysis of Dual Total Correlation
Recent works encourage disentanglement of the latent space by penalizing the Total Correlation (TC), either indirectly Higgins et al. (2016); Kumar et al. (2017) or explicitly Kim and Mnih (2018); Chen et al. (2018). TC is a metric of multivariate statistical independence that is non-negative and zero if and only if all elements of \(\mathbf{z}\) are independent.
\[\text{TC}(Z)=\mathbb{E}_{\mathbf{z}}\log\frac{p_{Z}(\mathbf{z})}{\prod_{i}p_{ Z_{i}}(\mathbf{z}_{i})}=\sum_{i}h(Z_{i})-h(Z)\]
Locatello et al. (2019) evaluate many TC-based methods and conclude that minimizing their measures of TC during training often does not lead to VAE \(\mu\) (used for representation) with low TC. We note that computing \(\text{TC}(Z)\) requires knowledge of the joint distribution \(p_{Z}(\mathbf{z})\), which can be very challenging to model in high dimensions. We hypothesize that the need for a model of \(p_{Z}(\mathbf{z})\) is what leads to the observed reliability issues of these TC-based methods.
Consider another metric for multivariate statistical independence, Dual Total Correlation (DTC). Like TC, DTC is non-negative and zero if and only if all elements of \(\mathbf{z}\) are independent.
\[\text{DTC}(Z)=\mathbb{E}_{\mathbf{z}}\log\frac{\prod_{i}p_{Z_{i}}(\mathbf{z}_{i}|\mathbf{z}_{\backslash i})}{p_{Z}(\mathbf{z})}=h(Z)-\sum_{i}h(Z_{i}|Z_{\backslash i})\]
We use \(\mathbf{z}_{\backslash i}\) to denote all elements of \(\mathbf{z}\) except the \(i\)-th element. At first glance, it appears that \(\text{DTC}(\mathbf{z})\) also requires knowledge of the joint density \(p(\mathbf{z})\). However, observe an equivalent form of DTC manipulated for the \(i\)-th variable:
\[\text{DTC}(Z)=h(Z)-h(Z_{i}|Z_{\backslash i})-\sum_{j\neq i}h(Z_{j}|Z_{ \backslash j})=h(Z_{\backslash i})-\sum_{j\neq i}h(Z_{j}|Z_{\backslash j}). \tag{1}\]
Figure 1: Empirical KL divergence between the true and estimated distributions as training iteration and distribution dimensionality increase. Training parameters are kept the same between both experiments. We employ Monte-Carlo estimators of KL divergence, leading to transient negative values when KL is near zero.
Here, the \(i\)-th variable only contributes to DTC through each set of conditioning variables \(\mathbf{z}_{\backslash j}\). Hence, when computing the derivative \(\partial\operatorname{\text{DTC}}(Z)/\partial\mathbf{z}_{i}\), no representation of \(p_{Z}(\mathbf{z})\) is required: only the conditional entropies \(h(Z_{j}|Z_{\backslash j})\) are necessary. The curse of dimensionality can therefore be avoided by performing gradient descent on the DTC metric, which makes it more attractive for disentanglement than TC. However, while one only needs the conditional entropies to compute gradients for DTC, the conditional entropies alone do not measure how close \(\mathbf{z}\) is to having independent elements. To overcome this, we define the summed information loss \(\mathcal{L}_{\Sigma I}\):
\[\mathcal{L}_{\Sigma I}(Z)\triangleq\sum_{i}I(Z_{i};Z_{\backslash i})=\left[ \sum_{i}h(Z_{i})-h(Z_{i}|Z_{\backslash i})\right]+h(Z)-h(Z)=\operatorname{ \text{TC}}(Z)+\operatorname{\text{DTC}}(Z). \tag{2}\]
If gradients of each \(I(Z_{i};Z_{\backslash i})\) are taken only with respect to \(\mathbf{z}_{\backslash i}\), then the gradients are equal to \(\frac{\partial\operatorname{\text{DTC}}(Z)}{\partial\mathbf{z}}\), avoiding use of any derivatives of estimates of \(p_{Z}(\mathbf{z})\). Furthermore, minimizing one metric is equivalent to minimizing the other: \(\operatorname{\text{DTC}}(Z)=0\Leftrightarrow\operatorname{\text{TC}}(Z)=0 \Leftrightarrow\mathcal{L}_{\Sigma I}=0\). In our experiments, we estimate \(h(Z_{i})\) with batch estimates \(\mathbb{E}_{\mathbf{z}_{\backslash i}}p_{Z_{i}}(\mathbf{z}_{i}|\mathbf{z}_{ \backslash i})\), requiring no further hyperparameters. Details on the information functional implementation are available in Appendix A.1.
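The sketch below illustrates one way to turn this observation into a Monte-Carlo loss. It assumes a callable `cond_logp` that returns \(\log\hat{p}(\mathbf{z}_{i}|\mathbf{z}_{\backslash i})\) from the \(i\)-th conditional discriminator via the density-ratio trick (the discriminators themselves are not shown), estimates \(h(Z_{i})\) with the batch average of conditional densities as described above, and detaches \(\mathbf{z}_{i}\) so that gradients flow only through \(\mathbf{z}_{\backslash i}\). The quadratic-in-batch pairwise averaging and the exact detach placement are our reading of the text, not the authors' implementation.

```python
import math
import torch

def sum_info_loss(z, cond_logp):
    """Monte-Carlo sketch of L_{Sigma I} = sum_i I(Z_i; Z_{\\i}).

    z:         (batch, m) noisy latent samples.
    cond_logp: callable (z_i, z_rest, i) -> (batch,) estimate of log p(z_i | z_{\\i}),
               e.g. backed by the i-th conditional discriminator (assumed to exist).
    """
    batch, m = z.shape
    loss = z.new_zeros(())
    for i in range(m):
        z_i = z[:, i:i + 1].detach()                        # block gradients through z_i itself
        z_rest = torch.cat([z[:, :i], z[:, i + 1:]], dim=1)
        logp_cond = cond_logp(z_i, z_rest, i)               # log p(z_i | z_{\i}) per sample
        # Batch estimate of the marginal: p(z_i) ~ mean_j p(z_i | z_rest^(j)).
        zi_all = z_i.repeat_interleave(batch, dim=0)
        zrest_all = z_rest.detach().repeat(batch, 1)
        logp_all = cond_logp(zi_all, zrest_all, i).view(batch, batch)
        logp_marg = torch.logsumexp(logp_all, dim=1) - math.log(batch)
        loss = loss + (logp_cond - logp_marg).mean()        # I(Z_i; Z_{\i}) = h(Z_i) - h(Z_i | Z_{\i})
    return loss
```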
### Excess Entropy Power Loss
We found it very helpful to "stabilize" disentangled features by attaching a feature-scale dependent term to each \(I(Z_{i};Z_{\backslash i})\). The entropy power of a latent variable \(\mathbf{z}_{i}\) is non-negative and grows analogously with the variance of \(\mathbf{z}_{i}\). Hence, we define the Excess Entropy Power loss:
\[\mathcal{L}_{\text{EEP}}(Z)\triangleq\frac{1}{2\pi e}\sum_{i}\left[I(Z_{i};Z_ {\backslash i})\cdot e^{2h(Z_{i})}\right], \tag{3}\]
which weighs each component of the \(\mathcal{L}_{\Sigma I}\) loss with the marginal entropy power of each \(i\)-th latent variable. Partial derivatives are taken with respect to the \(\mathbf{z}_{\backslash i}\) subset only, so the marginal entropy power only weighs each component. While \(\nabla_{\phi}\mathcal{L}_{\text{EEP}}\neq\nabla_{\phi}\mathcal{L}_{\Sigma I}\) in most situations (\(\phi\) is the set of encoder parameters), this inductive bias has been extremely helpful in consistently yielding high disentanglement scores. An ablation study with \(\mathcal{L}_{\text{EEP}}\) can be found in Appendix C. The name "Excess Entropy Power" is inspired by DTC's alternative name, excess entropy.
### Gaussian Channel Autoencoder
We propose Gaussian Channel Autoencoder (GCAE), composed of a coupled encoder \(\phi:X\to Z_{\phi}\) and decoder \(\psi:Z\to\hat{X}\), which extracts a representation of the data \(\mathbf{x}\in\mathbb{R}^{n}\) in the latent space \(\mathbf{z}\in\mathbb{R}^{m}\). We assume \(m\ll n\), as is typical with autoencoder models. The output of the encoder has a bounded activation function, restricting \(\mathbf{z}_{\phi}\in(-3,3)^{m}\) in our experiments. The latent space is subjected to Gaussian noise of the form \(\mathbf{z}=\mathbf{z}_{\phi}+\nu_{\sigma}\), where each \(\nu_{\sigma}\sim\mathcal{N}(0,\sigma^{2}I)\) and \(\sigma\) is a controllable hyperparameter. The Gaussian noise has the effect of "smoothing" the latent space, ensuring that \(p_{Z}(\mathbf{z})\) is continuous and finite, and it guarantees the existence of the Radon-Nikodym derivative. Our reference noise for all experiments is \(\mathbf{u}\sim\text{Unif}(-4,4)\). The loss function for training GCAE is:
\[\mathcal{L}_{\text{GCAE}}=\mathbb{E}_{\mathbf{x}_{\nu_{\sigma}}}\left[\frac{1 }{n}\left\|\hat{\mathbf{x}}-\mathbf{x}\right\|_{2}^{2}\right]+\lambda\, \mathcal{L}_{\text{EEP}}(Z), \tag{4}\]
where \(\lambda\) is a hyperparameter to control the strength of regularization, and \(\nu_{\sigma}\) is the Gaussian noise injected in the latent space with the scale hyperparameter \(\sigma\). The two terms have the following intuitions: the mean squared error (MSE) of reconstructions ensures \(\mathbf{z}\) captures information of the input while \(\mathcal{L}_{\text{EEP}}\) encourages representations to be mutually independent.
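Putting the pieces together, a single GCAE training step might look like the sketch below. The encoder, decoder, and the two estimator callables (standing in for the discriminator-based estimates of \(I(Z_{i};Z_{\backslash i})\) and \(h(Z_{i})\), e.g. the summed-information sketch above) are placeholders, and the hyperparameter values are illustrative rather than the paper's.

```python
import math
import torch
import torch.nn.functional as F

def gcae_step(encoder, decoder, x, opt, mutual_info, marginal_entropy,
              lam=0.2, sigma=0.2):
    """One GCAE training step (sketch of Eq. (4)).

    encoder(x) is assumed to return z_phi bounded in (-3, 3)^m (e.g. 3 * tanh);
    mutual_info(z)      -> (m,) estimates of I(Z_i; Z_{\\i});
    marginal_entropy(z) -> (m,) estimates of h(Z_i).
    Both estimator callables are assumptions standing in for the discriminator-based estimators.
    """
    opt.zero_grad()
    z_phi = encoder(x)
    z = z_phi + sigma * torch.randn_like(z_phi)      # Gaussian channel in the latent space
    x_hat = decoder(z)
    mse = F.mse_loss(x_hat, x)                       # reconstruction term
    mi = mutual_info(z)                              # per-element I(Z_i; Z_{\i})
    h = marginal_entropy(z)                          # per-element h(Z_i)
    eep = (mi * torch.exp(2.0 * h)).sum() / (2.0 * math.pi * math.e)   # Eq. (3) weighting
    loss = mse + lam * eep
    loss.backward()
    opt.step()
    return float(loss)
```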
## 4 Experiments
We evaluate the performance of GCAE against the leading unsupervised disentanglement baselines \(\beta\)-VAE Higgins et al. (2016), FactorVAE Kim and Mnih (2018), \(\beta\)-TCVAE Chen et al. (2018), and DIP-VAE-II Kumar et al. (2017). We measure disentanglement using four popular supervised disentanglement metrics: Mutual Information Gap (MIG) Chen et al. (2018), Factor Score Kim and Mnih (2018), DCI Disentanglement Eastwood and Williams (2018), and Separated Attribute Predictability (SAP) Kumar et al. (2017). The four metrics cover the three major types of disentanglement metrics identified by Carbonneau et al. (2020) in order to provide a complete comparison of the quantitative disentanglement capabilities of the latest methods.
We consider two datasets which cover different data modalities. The **Beamsynthesis** dataset Yeats et al. (2022) is a collection of \(360\) timeseries data from a linear particle accelerator beamforming simulation. The waveforms are \(1000\) values long and are made of two independent data generating factors: _duty cycle_ (continuous) and _frequency_ (categorical). The **dSprites** dataset Matthey et al. (2017) is a collection of \(737280\) synthetic images of simple white shapes on a black background. Each \(64\times 64\) pixel image consists of a single shape generated from the following independent factors: _shape_ (categorical), _scale_ (continuous), _orientation_ (continuous), _x-position_ (continuous), and _y-position_ (continuous).
All experiments are run using the PyTorch framework Paszke et al. (2019) using 4 NVIDIA Tesla V100 GPUs, and all methods are trained with the same number of iterations. Hyperparameters such as network architecture and optimizer are held constant across all models in each experiment (with the exception of the dual latent parameters required by VAE models). Latent space dimension is fixed at \(m=10\) for all experiments, unless otherwise noted. More details are in Appendix B.
In general, increasing \(\lambda\) and \(\sigma\) led to lower \(\mathcal{L}_{\Sigma I}\) but higher MSE at the end of training. Figure 3(a) depicts this relationship for Beamsynthesis and dSprites. Increasing \(\sigma\) shifts the final loss values towards increased independence (according to \(\mathcal{L}_{\Sigma I}\)) but slightly worse reconstruction error. This is consistent with the well-known Gaussian channel: as the relative noise level increases, the information capacity of a power-constrained channel decreases. The tightly grouped samples in the lower right of the plot correspond with \(\lambda=0\), and incorporating any \(\lambda>0\) leads to a decrease in \(\mathcal{L}_{\Sigma I}\) and an increase in MSE. As \(\lambda\) is increased further, the MSE increases slightly as the average \(\mathcal{L}_{\Sigma I}\) decreases.
Figure 3(b) plots the relationship between the final \(\mathcal{L}_{\Sigma I}\) values and MIG evaluation scores for both Beamsynthesis and dSprites. Our experiments depict a moderate negative relationship with correlation coefficient \(-0.823\). These results suggest that \(\mathcal{L}_{\Sigma I}\) is a promising unsupervised indicator of successful disentanglement, which is helpful if one does not have access to the ground truth data factors.
Figure 2: Depiction of the proposed method, GCAE. Gaussian noise with variance \(\sigma^{2}\) is added to the latent space, smoothing the representations for gradient-based disentanglement with \(\mathcal{L}_{\text{EEP}}\). Discriminators use the density-ratio trick to represent the conditional distributions of each latent element given observations of all other elements, capturing complex dependencies between subsets of the variables whilst avoiding the curse of dimensionality.
In this experiment, we plot the disentanglement scores (average and standard deviation) of GCAE as the latent space noise level \(\sigma\) and disentanglement strength \(\lambda\) vary on Beamsynthesis and dSprites. In each figure, each dark line plots the average disentanglement score while the shaded area fills one standard deviation of reported scores around the average.
Figure 4(a) depicts the disentanglement scores of GCAE on the Beamsynthesis dataset. All \(\sigma\) levels exhibit relatively low scores when \(\lambda\) is set to zero (with the exception of FactorScore). In this situation, the model is well-fit to the data, but the representation is highly redundant and entangled, causing the "gap" or "separatedness" (in SAP) for each factor to be low. However, whenever \(\lambda>0\) the disentanglement performance increases significantly, especially for MIG, DCI Disentanglement, and SAP with \(\lambda\in[0.1,0.2]\). There is a clear preference for higher noise levels, as \(\sigma=0.1\) generally has higher variance and lower disentanglement scores. FactorScore starts out very high on Beamsynthesis because there are just two factors of variation, making the task easy.
Figure 4: Effect of \(\lambda\) and \(\sigma\) on different disentanglement metrics. \(\lambda\) is varied in the \(x\)-axis. Starting from the top left of each subfigure and moving clockwise within each subfigure, we report MIG, FactorScore, SAP, and DCI Disentanglement. Noise levels \(\sigma=\{0.2,0.3\}\) are preferable for reliable disentanglement performance. **KEY:** Dark lines - average scores. Shaded areas - one standard deviation.
Figure 3: Scatter plots of \(\log(\mathcal{L}_{\Sigma I})\) vs MSE and MIG, respectively, as \(\sigma\) is increased.
Figure 4(b) depicts the disentanglement scores of GCAE on the dSprites dataset. Similar to the previous experiment with Beamsynthesis, no disentanglement pressure leads to relatively low scores on all considered metrics (\(\sim 0.03\) MIG, \(\sim 0.47\) FactorScore, \(\sim 0.03\) DCI, \(\sim 0.08\) SAP), but introducing \(\lambda>0\) significantly boosts performance to \(\sim 0.35\) MIG, \(\sim 0.6\) FactorScore, \(\sim 0.37\) SAP, and \(\sim 0.45\) DCI (for \(\sigma=\{0.2,0.3\}\)). Here, there is a clear preference for larger \(\sigma\); \(\sigma=\{0.2,0.3\}\) reliably leads to high scores with little variance.
### Comparison of GCAE with Leading Disentanglement Methods
We incorporate experiments with leading VAE-based baselines and compare them with GCAE \(\sigma=0.2\). Each solid line represents the average disentanglement scores for each method and the shaded areas represent one standard deviation around the mean.
Figure 5 depicts the distributional performance of all considered methods and metrics on Beamsynthesis. When no disentanglement pressure is applied, disentanglement scores for all methods are relatively low. When disentanglement pressure is applied (\(\lambda,\beta>0\)), the scores of all methods increase. GCAE scores highest or second-highest on each metric, with low relative variance over a large range of \(\lambda\). \(\beta\)-TCVAE consistently scores second-highest on average, with moderate variance. FactorVAE and \(\beta\)-VAE tend to perform relatively similarly, but the performance of \(\beta\)-VAE appears highly sensitive to hyperparameter selection. DIP-VAE-II performs the worst on average.
Figure 6 shows a similar experiment for dSprites. Applying disentanglement pressure significantly increases disentanglement scores, and GCAE performs very well with relatively little variance when \(\lambda\in[0.1,0.5]\). \(\beta\)-VAE achieves high top scores with extremely little variance but only for a very narrow range of \(\beta\). \(\beta\)-TCVAE scores very high on average for a wide range of \(\beta\) but with large variance in scores. FactorVAE consistently scores highest on FactorScore and it is competitive on SAP. DIP-VAE-II tends to underperform compared to the other methods.
Figure 5: Disentanglement metric comparison of GCAE with VAE baselines on Beamsynthesis. GCAE \(\lambda\) is plotted on the lower axis, and VAE-based method regularization strength \(\beta\) is plotted on the upper axis. **KEY:** Dark lines - average scores. Shaded areas - one standard deviation.
### Disentanglement Performance as Z Dimensionality Increases
We report the disentanglement performance of GCAE and FactorVAE on the dSprites dataset as \(m\) is increased. FactorVAE Kim & Mnih (2018) is the closest TC-based method: it uses a single monolithic discriminator and the density-ratio trick to explicitly approximate \(\text{TC}(Z)\). Computing \(\text{TC}(Z)\) requires knowledge of the joint density \(p_{Z}(\mathbf{z})\), which is challenging to compute as \(m\) increases.
Figure 7 depicts an experiment comparing GCAE and FactorVAE when \(m=20\). The results for \(m=10\) are included for comparison. The average disentanglement scores for GCAE \(m=10\) and \(m=20\) are very close, indicating that its performance is robust in \(m\). This is not the case for FactorVAE - it performs worse on all metrics when \(m\) increases. Interestingly, FactorVAE \(m=20\) seems to recover its performance on most metrics with higher \(\beta\) than is beneficial for FactorVAE \(m=10\). Despite this, the difference suggests that FactorVAE is not robust to changes in \(m\).
## 5 Discussion
Overall, the results indicate that GCAE is a highly competitive disentanglement method. It achieves the highest average disentanglement scores on the Beamsynthesis and dSprites datasets, and it has relatively low variance in its scores when \(\sigma=\{0.2,0.3\}\), indicating it is reliable. The hyperparameters are highly transferable, as \(\lambda\in[0.1,0.5]\) works well on multiple datasets and metrics, and the performance does not change with \(m\), contrary to the TC-based method FactorVAE. GCAE also used the same data preprocessing (mean and standard deviation normalization) across the two datasets. We also find that \(\mathcal{L}_{\Sigma I}\) is a promising indicator of disentanglement performance.
While GCAE performs well, it has several limitations. In contrast to the VAE optimization process which is very robust Kingma & Welling (2013), the optimization of \(m\) discriminators is sensitive to choices of learning rate and optimizer. Training \(m\) discriminators requires a lot of computation, and the quality of the learned representation depends heavily on the quality of the conditional densities stored in the discriminators. Increasing the latent space noise \(\sigma\) seems to make learning more robust and generally leads to improved disentanglement outcomes, but it limits the corresponding information capacity of the latent space.
Figure 6: Disentanglement metric comparison of GCAE with VAE baselines on dSprites. GCAE \(\lambda\) is plotted on the lower axis, and VAE-based method regularization strength \(\beta\) is plotted on the upper axis. **KEY:** Dark lines - mean scores. Shaded areas - one standard deviation.
## 6 Conclusion
We have presented Gaussian Channel Autoencoder (GCAE), a new disentanglement method which employs Gaussian noise and flexible density estimation in the latent space to achieve reliable, high-performing disentanglement scores. GCAE avoids the curse of dimensionality of density estimation by minimizing the Dual Total Correlation (DTC) metric with a weighted information functional to capture disentangled data generating factors. The method is shown to consistently outcompete existing SOTA baselines on many popular disentanglement metrics on Beamsynthesis and dSprites.
## Acknowledgements
This research is supported by grants from U.S. Army Research W911NF2220025 and U.S. Air Force Research Lab FA8750-21-1-1015. We would like to thank Cameron Darwin for our helpful conversations regarding this work.
This research is supported, in part, by the U.S. Department of Energy, through the Office of Advanced Scientific Computing Research's "Data-Driven Decision Control for Complex Systems (Dnc2S)" project.
This research used resources of the Experimental Computing Laboratory (ExCL) at ORNL. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan ([http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan)).
Figure 7: Comparison of GCAE with FactorVAE on dSprites as \(m\) increases. \(\lambda\) is plotted below, and \(\beta\) is plotted above. **KEY:** Dark lines - mean scores. Shaded areas - one standard deviation. |
2308.11052 | Beyond Discriminative Regions: Saliency Maps as Alternatives to CAMs for
Weakly Supervised Semantic Segmentation | In recent years, several Weakly Supervised Semantic Segmentation (WS3)
methods have been proposed that use class activation maps (CAMs) generated by a
classifier to produce pseudo-ground truths for training segmentation models.
While CAMs are good at highlighting discriminative regions (DR) of an image,
they are known to disregard regions of the object that do not contribute to the
classifier's prediction, termed non-discriminative regions (NDR). In contrast,
attribution methods such as saliency maps provide an alternative approach for
assigning a score to every pixel based on its contribution to the
classification prediction. This paper provides a comprehensive comparison
between saliencies and CAMs for WS3. Our study includes multiple perspectives
on understanding their similarities and dissimilarities. Moreover, we provide
new evaluation metrics that perform a comprehensive assessment of WS3
performance of alternative methods w.r.t. CAMs. We demonstrate the
effectiveness of saliencies in addressing the limitation of CAMs through our
empirical studies on benchmark datasets. Furthermore, we propose random
cropping as a stochastic aggregation technique that improves the performance of
saliency, making it a strong alternative to CAM for WS3. | M. Maruf, Arka Daw, Amartya Dutta, Jie Bu, Anuj Karpatne | 2023-08-21T21:30:48Z | http://arxiv.org/abs/2308.11052v1 | Beyond Discriminative Regions: Saliency Maps as Alternatives to CAMs for Weakly Supervised Semantic Segmentation
###### Abstract
In recent years, several Weakly Supervised Semantic Segmentation (WS3) methods have been proposed that use class activation maps (CAMs) generated by a classifier to produce pseudo-ground truths for training segmentation models. While CAMs are good at highlighting discriminative regions (DR) of an image, they are known to disregard regions of the object that do not contribute to the classifier's prediction, termed non-discriminative regions (NDR). In contrast, attribution methods such as saliency maps provide an alternative approach for assigning a score to every pixel based on its contribution to the classification prediction. This paper provides a comprehensive comparison between saliencies and CAMs for WS3. Our study includes multiple perspectives on understanding their similarities and dissimilarities. Moreover, we provide new evaluation metrics that perform a comprehensive assessment of WS3 performance of alternative methods w.r.t. CAMs. We demonstrate the effectiveness of saliencies in addressing the limitation of CAMs through our empirical studies on benchmark datasets. Furthermore, we propose random cropping as a stochastic aggregation technique that improves the performance of saliency, making it a strong alternative to CAM for WS3.
## 1 Introduction
The goal in weakly supervised semantic segmentation (WS3) is to train segmentation models with coarse-scale supervision and without using pixel-level annotations. In recent years, several WS3 methods have been proposed that use image-level class labels to generate pseudo-ground truths for training segmentation models. Many of these methods employ localization methods such as Class Activation Maps (CAMs) Zhou et al. (2016); Selvaraju et al. (2016); Chattopadhay et al. (2018), generated from a pre-trained classifier, to guide the segmentation process.
CAMs are **activation maps** generated by the last convolutional neural network (ConvNet) layer of the classification model, which is integrated with the class-specific weights of the final fully-connected layer to produce a score for every pixel. While Class Activation Maps (CAM) are good at highlighting discriminative regions (DRs) of an image (i.e., regions that contribute significantly to the classifier's decision), CAMs are also known to ignore regions of the target object class that do not contribute to the classifier's prediction, termed non-discriminative regions (NDRs). In particular, it has been shown that the activation maps in the final convolution layer only contain information relevant for classification, a phenomenon called _information bottleneck_Lee et al. (2021). As a result, CAMs are biased towards mostly finding DR while missing the NDR of the target object, which is equally important for the purpose of segmentation. A number of WS3 solutions thus require further processing of the CAM outputs to recover NDR for high segmentation accuracy Lee et al. (2021, 2021, 2018); Li et al. (2018); Hou et al. (2018); Kolesnikov & Lampert (2016); Araslanov & Roth (2020).
In contrast to activation maps, **attribution maps** provide an alternative approach for assigning a score to every pixel based on its contribution to the final neural network prediction. The most commonly used attribution map is the gradient-based Saliency Maps Simonyan et al. (2013). The basic idea of saliency is to calculate the gradient of the target class score with respect to every pixel in the input image. Attribution maps are fundamentally distinct from activation maps obtained from the last layer of ConvNet models. However, despite the frequent use of attribution maps for neural network interpretability, their use in WS3 as an alternative to CAMs has largely been unexplored.
With the advancement of vision transformers achieving state-of-the-art (SOTA) performance on many computer vision tasks Han et al. (2022), extending CAMs to work with non-ConvNet-based classifiers is a non-trivial exercise. In contrast, gradient-based Saliency maps can be applied to any classifier with differentiable layers, rendering them as a universal solution for WS3 tasks. Moreover, Saliency maps inherently provide a solution to the deficiencies of CAM-based approaches as explored in this work. Although the limitations of CAMs have been well-known in the WS3 research community and all SOTA methods in WS3 provide solutions to mitigate the deficiencies of CAMs, they lack in providing deeper insights on how saliencies can be used as an alternative to CAM for WS3.
Our goal in this paper is to provide a comprehensive study of the comparison between CAMs and Saliencies for WS3. It is important to mention that our goal is not to achieve SOTA performance for WS3, but rather to provide novel insights into the potential of saliencies and their variations in addressing the limitations of CAMs. Our contributions are outlined below:
* We offer multiple perspectives to understand the similarities and differences between CAMs and Saliencies. Section 3 delves into these perspectives, serving as a "bridge" in the analysis of CAMs and saliencies.
* We provide new evaluation metrics to measure WS3 performance, which are specifically designed to complement existing metrics such as mIoU in quantifying the deficiencies of CAMs and evaluating the effectiveness of alternate techniques w.r.t. CAMs. The proposed evaluation metrics are detailed in Section 4.
* We demonstrate the effectiveness of saliencies in addressing the limitation of CAM through our empirical studies on the PASCAL VOC, COCO, and MNIST datasets, as detailed in Section 5.
* We identify the limitations of saliency maps for WS3 and propose different variations of stochastic aggregation methods to fix these limitations. Specifically, we propose a random cropping approach for stochastic aggregation that disintegrates the spatial structure of input images as compared to injecting spatially invariant noise. While random cropping is a common data augmentation technique, its application as a stochastic aggregation method in this work is novel. Additional insights regarding stochastic aggregation of saliencies are presented in Sections 6 and 7.
## 2 Fundamental Concepts and Definitions
### Class Activation Maps
The Class Activation Maps (CAMs) are based on convolutional neural networks with a global average pooling (GAP) layer applied before the final layer. Formally, let the classifier be parameterized by \(\theta=\{\theta_{f},\mathbf{w}\}\), where \(f(.;\theta_{f})\) is the feature extractor network prior to the GAP layer and \(\mathbf{w}\) is the set of weights of the final classification layer. The CAM of the \(c\)-th class for an image \(\mathbf{I}\) can be obtained as follows:
\[\text{CAM}_{c}(\mathbf{I};\theta)=\frac{\mathbf{w}_{c}^{\mathrm{T}}\mathbf{A}} {\max\mathbf{w}_{c}^{\mathrm{T}}\mathbf{A}} \tag{1}\]
where \(\mathbf{A}=f(\mathbf{I};\theta_{f})\) is the activation map, \(\mathbf{w}_{c}\in\mathbf{w}\) is \(c\)-th class weight, and \(\max(.)\) is the maximum value over all pixels in \(\mathbf{I}\) for normalization.
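For reference, the following sketch computes Eq. (1) from a pre-GAP feature map and the final-layer weights. The bilinear upsampling to input resolution is common practice rather than part of the definition, and clamping negative scores to zero (a frequent CAM variant) is left as a comment; the function and argument names are our own.

```python
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, cls, out_size):
    """CAM of class `cls` following Eq. (1).

    features:  (1, K, h, w) activation map A from the last conv layer (pre-GAP).
    fc_weight: (C, K) weights of the final fully-connected layer.
    out_size:  (H, W) of the input image, for upsampling the coarse map.
    """
    w_c = fc_weight[cls].view(1, -1, 1, 1)                     # (1, K, 1, 1)
    cam = (w_c * features).sum(dim=1, keepdim=True)            # w_c^T A at every spatial location
    # cam = F.relu(cam)  # optional: many implementations clamp negative evidence first
    cam = cam / (cam.max() + 1e-8)                             # normalize by the per-image maximum
    cam = F.interpolate(cam, size=out_size, mode="bilinear", align_corners=False)
    return cam[0, 0]                                           # (H, W) CAM scores
```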
#### 2.1.1 Limitations of CAMs
CAMs produce coarse-scale localizations of objects because the activation maps of the final convolutional layer have significantly lower resolution compared to the input image. Additionally, the
final activation maps show high values for only a subset of regions of the target object that are discriminative for the classification task, while disregarding regions that do not impact the accuracy of classification. Thus, CAMs in their raw form without supplementary post-processing, are unsuitable for training segmentation models.
#### 2.1.2 Discriminative and Non-Discriminative Regions
_Discriminative regions (DRs)_ are those regions of the ground-truth object that are crucial for the classification model to predict the class label of the image accurately. In contrast, _non-discriminative regions (NDRs)_ are those regions of the ground-truth that are still important for segmenting the object but do not significantly impact the model's accuracy upon removal. We formally define DR and NDR based on the CAM outputs as follows:
**Definition 2.1** (DR and NDR).: The discriminative region (DR) and non-discriminative region (NDR) for the \(c\)-th class of an image \(\mathbf{I}\) can be defined for every pixel \((i,j)\) belonging to the \(c\)-th class ground-truth segmentation \(\mathcal{S}^{c}_{GT}\) as follows:
\[\text{DR}_{c}(i,j) =\mathbb{I}(\text{CAM}_{c}(i,j)\geq\tau_{cam}) \tag{2}\] \[\text{NDR}_{c}(i,j) =\mathbb{I}(\text{CAM}_{c}(i,j)<\tau_{cam}) \tag{3}\]
where \(\tau_{cam}\) represents a threshold applied to the CAM to obtain the segmentation of the object class and \(\mathbb{I}(.)\) is the indicator function. While the optimal threshold may differ for each image, we adopted the common practice of using a global threshold \((\tau_{cam}=0.25)\) for defining DR and NDR throughout this paper. Note that DRs and NDRs are a partitioning of the ground-truth mask \(\mathcal{S}^{c}_{GT}\) based on CAM scores.
### Saliency Maps
Saliency maps are attribution maps that assign a score to every image pixel representing its contribution to the final classifier prediction. They are frequently employed as a tool to enhance model interpretability. Formally, the saliency map (SM) of the \(c\)-th class for image \(\mathbf{I}\) can be defined as:
\[\text{SM}_{c}(\mathbf{I},\theta)=\Big{|}\frac{\partial S_{c}}{\partial\mathbf{ I}}\Big{|}=\Big{|}\mathbf{w}_{c}^{\intercal}\frac{\partial\text{GAP}( \mathbf{A})}{\partial\mathbf{I}}\Big{|} \tag{4}\]
where \(S_{c}=\mathbf{w}_{c}^{\intercal}\text{GAP}(\mathbf{A})+b_{c}\) is the score for the \(c\)-th class, and \(b_{c}\in\mathbf{w}\) is the bias term. For a multi-channel image, saliency maps are computed by taking a maximum of the gradient values across the channels.
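In code, the saliency map of Eq. (4) reduces to a single backward pass. The sketch below works for any differentiable classifier and takes the channel-wise maximum of absolute gradients, as described above; the function name and tensor shapes are illustrative assumptions.

```python
import torch

def saliency_map(model, image, cls):
    """Gradient saliency |dS_c / dI| for class `cls` (Eq. (4)).

    model: any differentiable classifier returning per-class scores (logits);
    image: (1, 3, H, W) input tensor.
    """
    image = image.clone().requires_grad_(True)
    score = model(image)[0, cls]                    # class score S_c
    grad = torch.autograd.grad(score, image)[0]     # dS_c / dI, shape (1, 3, H, W)
    return grad.abs().amax(dim=1)[0]                # (H, W): max |gradient| over colour channels
```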
**Definition 2.2** (Hsr and Lsr).: The high saliency region (HSR) and low saliency region (LSR) for the \(c\)-th class of an image \(\mathbf{I}\) can be defined for every pixel \((i,j)\) belonging to the \(c\)-th class ground-truth segmentation \(\mathcal{S}^{c}_{GT}\) using a threshold \(\tau_{sm}\) specific to saliency maps as follows:
\[\text{HSR}_{c}(i,j) =\mathbb{I}(\text{SM}_{c}(i,j)\geq\tau_{sm}) \tag{5}\] \[\text{LSR}_{c}(i,j) =\mathbb{I}(\text{SM}_{c}(i,j)<\tau_{sm}) \tag{6}\]
Just like DRs and NDRs, the HSRs and LSRs are an alternate partitioning of \(\mathcal{S}^{c}_{GT}\) based on SM score.
## 3 Comparing CAMs and Saliency Maps
### A Visual Comparison Using Hyperplanes
While CAMs and saliency maps differ in many respects, they also exhibit several similarities. We offer a novel viewpoint of comparing CAMs and SMs from the lens of CAM and SM hyperplanes. First, we define two \(k\)-dimensional Hilbert spaces (where \(k\) is the number of channels in the activation map): \(\mathcal{A}\) for the activations of images and \(\mathcal{A}^{\prime}\) for the gradients of the GAP layer w.r.t. the image. Formally, for an arbitrary image \(\mathbf{I}\), let the activation at any pixel \(\mathbf{A}_{(i,j)}\in\mathcal{A}\), and the gradient of the GAP layer \(\frac{\partial\text{GAP}(\mathbf{A})}{\partial\mathbf{I}}\big{|}_{(i,j)}\in \mathcal{A}^{\prime}\).
**Definition 3.1** (\(c\)-th class CAM hyperplane).: For every image \(\mathbf{I}\), let \(\mathcal{H}^{c}_{cam}\) be the following hyperplane in \(\mathcal{A}\):
\[\mathcal{H}^{c}_{cam}\ :\ \frac{\mathbf{w}_{c}^{\mathsf{T}}}{Z}\mathbf{a}- \tau_{cam}=0 \tag{7}\]
where \(\tau_{cam}\) is the CAM threshold, \(\mathbf{w}_{c}\in\mathbf{w}\) is the weight for the \(c\)-th class, and \(Z=\max\mathbf{w}_{c}^{\mathsf{T}}\mathbf{A}\) is a normalization factor depending on \(\mathbf{I}\). Note that \(Z\) changes for every image and is equivalent to having a variable intercept term for the CAM hyperplane but with a fixed slope \(\mathbf{w}_{c}\) for every image.
_Remark 3.2_.: If a point \(\mathbf{a}\in\mathcal{A}\) corresponding to a ground-truth pixel lies above \(\mathcal{H}^{c}_{cam}\), i.e., \(\mathbf{w}_{c}^{\mathsf{T}}\mathbf{a}/Z-\tau_{cam}\geq 0\), then the pixel belongs to DR; otherwise, it belongs to NDR.
See Appendix for proof. This remark states that any arbitrary pixel \((i,j)\in\mathcal{S}^{c}_{GT}\) will belong to the DR or NDR depending on which side of the CAM hyperplane it lies. In other words, as long as \(\mathbf{w}_{\mathbf{c}}\) and \(\tau_{cam}\) are fixed, the DR and NDR of the \(c\)-th class for any image \(\mathbf{I}\) are separated by its CAM hyperplane \(\mathcal{H}^{c}_{cam}\).
**Definition 3.3** (\(c\)-th class SM parallel-hyperplane).: Let \(\mathcal{H}^{c}_{sm}\) be the following set of two parallel hyperplanes in \(\mathcal{A}^{\prime}\):
\[\mathcal{H}^{c}_{sm}\ :\ |\mathbf{w}_{c}^{\mathsf{T}}\mathbf{a}^{\prime}|- \tau_{sm}=0 \tag{8}\]
where \(\tau_{sm}\) is the saliency map threshold and \(\mathbf{a}^{\prime}\in\mathcal{A}^{\prime}\) is the gradient of the GAP layer w.r.t. image at any arbitrary pixel.
_Remark 3.4_.: If a point \(\mathbf{a}^{\prime}\) corresponding to a ground-truth pixel lies on the outer sides of \(\mathcal{H}^{c}_{sm}\), i.e., \(|\mathbf{w}_{c}^{\mathsf{T}}\mathbf{a}^{\prime}|-\tau_{sm}\geq 0\), then the point belongs to HSR; otherwise, it belongs to LSR.
See appendix for proof. Similar to the DR/NDR for CAMs, the HSR/LSR are separated by SM parallel-hyperplanes. Furthermore, the slope of both CAM and SM hyperplanes are the same: \(\mathbf{w}_{\mathbf{c}}\). However, the important distinction is that for CAMs, the DR/NDR depends on the values of the activation map \(\mathbf{A}_{(i,j)}\), while for SMs, the HSR/LSR depends on the gradient \(\frac{\partial\mathbf{GAP}(\mathbf{A})}{\partial\mathbf{I}}|_{(i,j)}\). A ground-truth pixel may thus belong to DR or NDR and HSR or LSR depending on the value of its activations and gradient of GAP layer, respectively.
Figure 1: A visual comparison of CAMs and saliency maps (SMs) for a representative image from the VOC12 dataset.
In Figure 1, we visually compare CAMs and SMs for a representative image from the VOC12 dataset. From this comparison, we observe that the CAM (see Figure 1(b)) predominantly highlights the DR of the bird class, such as its head, a crucial feature for classification. As a result, NDRs such as the bird's body are sparingly covered by the CAM. In contrast, the saliency map (see Figure 1(c)) for the same image covers most regions of the target bird class, albeit with some noisy representation of the background class too. To provide a comprehensive visualization of how HSRs in saliency maps can potentially recover NDRs, we present a scatterplot in Figure 1(c) comparing the signed distances of each pixel \((i,j)\in\mathcal{S}_{GT}^{bird}\) from the CAM and SM hyperplanes, namely, \(\mathcal{H}_{cam}^{bird}\) and \(\mathcal{H}_{sm}^{bird}\). Notably, the HSRs successfully recover a substantial portion of DRs, labeled as HSR-DR (blue). A minor segment of the DRs (\(2.62\%\) of GT) is missed by SMs, termed LSR-DR (yellow). Nonetheless, SMs are proficient in recovering \(55.32\%\) of the GT regions originally classified as NDR, labeled as HSR-NDR (maroon). Yet, both SMs and CAMs fall short in capturing the LSR-NDR region, which constitutes \(15.99\%\) of the GT (green). The color-coded segmentation map for these four distinct regions is presented in Figure 1(d), thereby showing the potential of saliency maps in addressing the limitations of CAMs in recovering NDRs.
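The four regions discussed above can be reproduced with a few boolean operations, as in the sketch below. The CAM threshold of 0.25 follows the paper's global choice, while the saliency threshold is an illustrative placeholder, and the function name is our own.

```python
import torch

def four_region_partition(cam, sal, gt_mask, tau_cam=0.25, tau_sm=0.1):
    """Split ground-truth pixels of one class into HSR/LSR x DR/NDR regions.

    cam, sal: (H, W) CAM and saliency scores in [0, 1];
    gt_mask:  (H, W) boolean ground-truth mask of the class.
    Returns boolean masks plus each region's fraction of the ground truth.
    """
    dr = gt_mask & (cam >= tau_cam)          # Definition 2.1
    ndr = gt_mask & (cam < tau_cam)
    hsr = sal >= tau_sm                      # Definition 2.2
    regions = {
        "HSR-DR": dr & hsr,                  # found by both CAM and saliency
        "HSR-NDR": ndr & hsr,                # missed by CAM, recovered by saliency
        "LSR-DR": dr & ~hsr,                 # found by CAM only
        "LSR-NDR": ndr & ~hsr,               # missed by both
    }
    total = gt_mask.sum().clamp(min=1).float()
    fractions = {k: float(v.sum()) / float(total) for k, v in regions.items()}
    return regions, fractions
```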
### Perspective from Contribution Windows
Next, we present another novel viewpoint of comparing saliencies and CAMs from the perspective of _contribution windows_--a concept innate to the architecture of convolutional neural networks (ConvNets). Note that the tendency of CAMs to only focus on DRs can be understood using the _information bottleneck_ principle proposed in Lee et al. (2021)--every layer of a neural network filters or "funnels in" information about inputs and as a result only task-specific information is retained at the outputs. While this information bottleneck exists in the forward propagation of ConvNets, the reverse phenomenon happens during backpropagation when information "funnels out" from the activation maps to the input image. This phenomenon can be described using the contribution window of an input pixel on the activation maps, defined as follows.
**Definition 3.5** (Contribution Window).: Let's consider a ConvNet with \(N\) layers, where every layer \(l\) performs a 2D convolution using an \(F\times F\) kernel denoted as \(\mathbf{K}_{l}\), to compute activation \(\mathbf{A}_{l}=\text{Conv2D}(\mathbf{A}_{l-1},\mathbf{K}_{l})\). The contribution window at layer \(l\) of a pixel in the input image can then be defined as the region in \(\mathbf{A}_{l}\) that affects (or contributes to) the gradients of \(\mathbf{A}_{l}\) w.r.t. the input pixel.
This concept is illustrated in Figure 2, where the contribution window is highlighted in yellow at every layer for an example yellow pixel at layer 0. The contribution window can be viewed as the reverse concept of "receptive fields" defined for the forward pass of ConvNets. Indeed, since the gradient of the forward convolution \(\mathbf{K}_{l}\) is also a convolution with a rotated kernel Kafuna (2016), the receptive field of the backward convolution during gradient computation becomes the concept of contribution window. We can show that all activations at layer \(l\) in the contribution window of an input pixel can affect its gradient.
Now, let us consider pixels that have \(0\) activations across all channels in the final layer shown in grey in Figure 2. By design, such _non-activated pixels_ will register \(0\) CAM scores. We want to analyze if it is possible for a non-activated pixel (yellow) to show non-zero gradients (and thus saliencies) in the input image. Assuming we use activation functions \(f(z)\) that are \(0\) when \(z\leq 0\), we can show that this depends on whether the contribution window of the pixel contains any _activated pixel_ with non-zero activations at the final layer, shown in red. In fact, we can show that if the contribution window size of a non-activated pixel is smaller than its distance from an activated pixel, it will have 0 gradients. However, this is practically not likely as the contribution window size generally grows linearly with the depth of ConvNets. An exception is when we use \(1\times 1\) kernels. Through empirical evidence provided in section 5.1, we can establish that as the contribution window expands (achieved by increasing the \(F\times F\) kernel size), saliencies can progressively encompass more NDRs, thus directly addressing the limitations of CAMs.
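The linear growth mentioned here mirrors standard receptive-field arithmetic: for a stack of stride-1, zero-padded \(F\times F\) convolutions (as in the network of Section 5.1), the window widens by \(F-1\) pixels per layer. The toy calculation below is our own back-of-the-envelope check under that assumption.

```python
def contribution_window_side(num_layers, kernel_size):
    """Side length of the contribution window after stacked stride-1 F x F convolutions."""
    side = 1
    for _ in range(num_layers):
        side += kernel_size - 1          # each layer widens the window by F - 1 pixels
    return side

# A 5-layer stack: 1x1 kernels never widen the window (matching the CAM-like case),
# while 3x3 and 5x5 kernels give 11- and 21-pixel-wide windows, respectively.
print([contribution_window_side(5, k) for k in (1, 3, 5)])
```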
## 4 Experimental Setup & Evaluation Metrics
### Experimental Setup:
Following the common practice in WS3, in this paper, we compared different approaches quantitatively and qualitatively by conducting experiments on MNIST, PASCAL VOC '12, and MS COCO '14 datasets. We also utilized two types of classification models based on ResNet50 architecture: i) "model-org", which is simply fine-tuned on the corresponding dataset, and ii) "model-pert", which is fine-tuned with additional noise perturbation.
### Evaluation Metrics:
To assess the quality of the segmentation maps, _mean intersection over union (mIoU)_ is a widely used metric in WS3 literature. mIoU measures the ratio of correct prediction (intersection) over the union of predictions and ground truths, averaged across all classes, including background class. Notably, mIoU provides an unbiased estimate of the segmentation performance; however, it fails to provide insights about the coverage of NDRs and DRs. Given the limitation of CAMs not being able to identify NDRs, it becomes crucial to measure how effective alternative WS3 techniques (e.g., saliencies) are at addressing the deficiencies of CAMs. This warrants the need for novel evaluation metrics focusing on the DRs and NDRs.
In this paper, we introduce the following three novel evaluation metrics: NDR-Recall, DR-Recall, and Foreground Precision (FG-Prec). **DR-Recall** is the ratio of correct DR prediction over the ground-truth DR and can be formally defined as: \(\text{DR-Recall}=|\text{TP}(P,DR_{GT})|/(|\text{TP}(P,DR_{GT})|+|\text{FN}(P,DR_{ GT})|)\), where \(P\) denotes the segmentation prediction, \(DR_{GT}\) denotes the ground-truth DR area, and \(|\text{TP}|\) and \(|\text{FN}|\) denote the count of true positives and false negatives over the DR region. As mentioned in Section 2.1, we define ground truth DR (\(DR_{GT}\)) and NDR (\(NDR_{GT}\)) by employing a global threshold (\(\tau_{cam}=0.25\)) on the CAM prediction and then taking its overlap with the ground-truth segmentation mask. In a similar manner, we compute **NDR-Recall** for a given segmentation prediction (\(P\)) and the corresponding ground-truth NDR region (\(NDR_{GT}\)). Apart from these two metrics, we also compute the **Foreground-Precision** of different target-classes as an additional metric, which can be defined as the ratio of correct foreground prediction over the total foreground prediction. Note that our proposed metrics are defined to analyze the deficiencies of CAM and hence, are biased only if we are evaluating CAMs just by themselves (e.g., CAMs would show low NDR Recall value by definition). However, these metrics are unbiased if the goal is to measure how well alternative WS3 techniques (e.g., saliencies) fix the shortcomings of CAMs.
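A direct per-class implementation of these three metrics is sketched below; the global threshold \(\tau_{cam}=0.25\) follows the definition of \(DR_{GT}\) and \(NDR_{GT}\) above, and the function and argument names are our own illustrative choices.

```python
import torch

def ws3_region_metrics(pred, gt_mask, cam, tau_cam=0.25):
    """DR-Recall, NDR-Recall and FG-Precision for one class (sketch).

    pred:    (H, W) boolean predicted foreground mask for the class;
    gt_mask: (H, W) boolean ground-truth mask;
    cam:     (H, W) CAM scores in [0, 1], used to split the ground truth into DR/NDR.
    """
    dr_gt = gt_mask & (cam >= tau_cam)               # ground-truth DR
    ndr_gt = gt_mask & (cam < tau_cam)               # ground-truth NDR
    eps = 1e-8
    dr_recall = float((pred & dr_gt).sum()) / (float(dr_gt.sum()) + eps)
    ndr_recall = float((pred & ndr_gt).sum()) / (float(ndr_gt.sum()) + eps)
    fg_prec = float((pred & gt_mask).sum()) / (float(pred.sum()) + eps)
    return dr_recall, ndr_recall, fg_prec
```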
## 5 Quantitative Comparison: CAM/Saliency
### Effect of Contribution Window
To empirically demonstrate the effect of contribution window on the recovery of NDRs, we utilize a 5-layer ConvNet architecture where each layer employs an \(F\times F\) kernel, followed by ReLU activation. We apply sufficient zero padding to ensure that the spatial dimension of the activations in each layer is equal to that of the input image. Different models with varying kernel sizes were then trained on the MNIST Segmentation dataset.
The results for CAM and Saliency, in terms of mIoU and NDR-Recall, are presented in Figure 3. The \(F\times F\) kernel size correlates with the size of the contribution window for the backpropagated gradients. Notably, when the contribution window is \(1\times 1\), the performance of CAMs and Salien
Figure 2: A schematic of “contribution window” demonstrating how the gradients at layer \(l-1\) is affected by the gradients from the contribution window of layer \(l\).
cies is quite comparable. However, differences in performance become more prominent (larger red and blue shaded regions) as the contribution window size increases. With an expanding contribution window, saliencies are capable of recovering more pixels that have high gradients and low (\(\approx 0\)) activations, effectively capturing a larger proportion of NDR. This, in turn, leads to a gradual increase in NDR-Recall until saturation is achieved. Further discussion of this experiment can be found in the Appendix.
### Comparing NDR Recovery
Table 1 presents quantitative evaluation of CAMs and saliencies on the PASCAL VOC dataset using different methods for background resolve (see Appendix for details). We compare the best-segmented map produced by each method by varying the global threshold of \(\tau_{cam}\) and \(\tau_{sm}\) from \(0.01\) to \(0.50\) and selecting the segmented map with the highest mIoU. The "basic background resolve" row of Table 1 shows that saliency map outperforms CAM in finding non-discriminative regions, as indicated by its higher NDR-Recall score. However, CAM outperforms the saliency maps in terms of mIoU, FG-precision, and DR-Recall, likely due to the noisy and scattered nature of saliency maps. This motivates further exploration of opportunities to improve the quality of saliency maps.
### Improving Saliencies with Simple Post-processing
We first explore if simple post-processing methods such as **kernel smoothing background resolve** and **Superpixel-based background resolve** can improve SM performance. Kernel Smoothing smooths the gradients of the saliencies by applying a Gaussian kernel, while superpixel-based smoothing assigns a label to each superpixel, which effectively mitigates the noisiness and scatteredness that may be present in saliency maps. See Appendix for details of these post-processing approaches. Table 1 presents their results as 'Smooth' and 'Superpixel' background Resolve. Both approaches outperform basic background resolve results in terms of mIoU, FG-Precision, DR-Recall, and NDR-Recall. Superpixel-based saliency maps demonstrate significant improvement over CAM in terms of mIoU and NDR-Recall; however, CAM outperforms all saliency methods in finding discriminative regions, as indicated by its higher DR-Recall score. It is worth mentioning that superpixel-based background resolve is not scalable for larger datasets. To this end, we need to explore saliencies where the smoothing can be integrated inherently without additional computational overheads.
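A minimal sketch of the two post-processing variants is given below, assuming a Gaussian filter for kernel smoothing and SLIC superpixels for the superpixel-based variant; the specific libraries, `sigma` and `n_segments` values are our illustrative choices rather than the paper's settings.

```python
# Hedged sketch of the two post-processing variants: Gaussian kernel smoothing
# of a saliency map, and superpixel averaging where every pixel in a SLIC
# superpixel receives the mean saliency of that superpixel.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.segmentation import slic

def smooth_saliency(sal, sigma=4.0):
    return gaussian_filter(sal, sigma=sigma)

def superpixel_saliency(sal, image, n_segments=200):
    segments = slic(image, n_segments=n_segments)   # integer label per pixel
    out = np.zeros_like(sal)
    for label in np.unique(segments):
        mask = segments == label
        out[mask] = sal[mask].mean()                # constant per superpixel
    return out
```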
| Method | B/G Resolve | mIoU | FG-Prec | DR-Recall | NDR-Recall |
| --- | --- | --- | --- | --- | --- |
| CAM | Basic | 43.7 | 56.1 | **93.8** | 43.7 |
| Saliency | Basic | 37.7 | 45.9 | 75.4 | 55.6 |
| Saliency | Smooth | 44.0 | 52.2 | 84.3 | 60.0 |
| Saliency | Superpixel | **49.0** | **60.0** | 80.9 | **61.8** |

Table 1: Quantitative comparison of CAM and Saliency on VOC dataset in terms of mIoU, Foreground Precision, and DR-/NDR-Recall.
Figure 3: Effect of Contribution Window on NDR-Recall and mIoU for MNIST Dataset.
## 6 Stochastic Aggregation of Saliencies
To reduce the noisiness of saliencies, Smilkov et al. (2017) proposed a stochastic aggregation-based method for saliency maps, named **SmoothGrad**, where Gaussian noise is added to the input image for smoothing saliencies. In this paper, we explored another variation of input noise perturbation, namely **BinaryMask**, where we multiply the image by a binary mask instead of adding Gaussian noise to the input image. The amount of perturbation for SmoothGrad is controlled by standard deviation of Gaussian noise, whereas for BinaryMask, the probability of each pixel in the mask being 1 controls the perturbation magnitude. See Appendix for additional details on these methods. "_Model-pert-binary_" and "_Model-pert-gaussian_" are the two finetuned classifiers augmented by binary and Gaussian noise, respectively.
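The following hedged sketch illustrates both perturbation schemes in PyTorch: SmoothGrad adds Gaussian noise controlled by `sigma`, while BinaryMask multiplies the input by a Bernoulli mask whose keep probability controls the perturbation. The gradient-of-logit saliency, the channel reduction and all default values are assumptions for illustration.

```python
# Illustrative sketch of stochastic saliency aggregation with input noise.
import torch

def aggregated_saliency(model, image, target, mode="smoothgrad",
                        n=25, sigma=0.15, keep_prob=0.5):
    """image: CxHxW tensor in [0, 1]; returns an HxW aggregated saliency map."""
    grads = torch.zeros_like(image)
    for _ in range(n):
        if mode == "smoothgrad":                    # additive Gaussian noise
            noisy = image + sigma * torch.randn_like(image)
        else:                                       # "binarymask": Bernoulli mask
            noisy = image * torch.bernoulli(torch.full_like(image, keep_prob))
        noisy = noisy.detach().requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target]
        score.backward()
        grads += noisy.grad.abs()
    return (grads / n).max(dim=0).values            # channel-wise max -> HxW map
```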
### Smoothing Saliencies by Injecting Noise
Table 2 compares results of saliency with different stochastic aggregation methods like SmoothGrad and BinaryMask. The change in performance from the basic or vanilla saliencies (without stochastic aggregation) is shown in parentheses; a positive percentage denotes improvement and a negative percentage denotes degradation. Saliencies from the classification models perturbed with similar noise (_model-pert-gaussian_ for **SmoothGrad** and _model-pert-binary_ for **BinaryMask**) perform better than the saliencies generated by the original model. According to Bishop (1995), adding noise during training is a common regularization technique that results in denoising. The additive effect of adding noise during training and inferring with noise yields the best saliency map.
Although adding noise may make the saliency maps smoother, with increasing noise the saliency maps may become unstable, and the mIoU performance may gradually drop with excessive noise. A detailed analysis of the sensitivity of our experiments to noise is provided in the Appendix. Also note that the classification model needs to be fine-tuned with similar noise for these stochastic perturbation techniques to produce smoothed saliencies. This additional fine-tuning can be an expensive process, which further motivates us to explore alternate aggregation methods that do not involve additional fine-tuning steps.
## 7 Stochastic Aggregation Through Cropping
Random cropping is commonly used as a data augmentation technique to increase the variety of training data by cropping random regions of the input image to a specific size. One of the advantages of random cropping is that it generates input samples that follow the input data distribution, since all the crops are basically part of the input image. In this section, we utilize random cropping as a stochastic aggregation technique to improve the performance of saliencies.

| Model | Method | BG-Res | mIoU | FG-Prec | DR-Rec | NDR-Rec |
| --- | --- | --- | --- | --- | --- | --- |
| org | SmoothGrad | Basic | 38.6 (+0.9) | 47.1 (+1.2) | 82.0 (+6.6) | 51.7 (-3.9) |
| org | SmoothGrad | Smooth | 37.5 (-6.5) | 47.1 (-5.1) | 79.2 (-5.1) | 48.3 (-11.7) |
| org | SmoothGrad | Superpix | 41.0 (-8.0) | 52.2 (-7.8) | 77.0 (-3.9) | 52.1 (-9.7) |
| pert-gauss | SmoothGrad | Basic | 45.3 (+7.6) | 54.9 (+9.0) | 87.4 (+12.0) | 55.9 (+0.3) |
| pert-gauss | SmoothGrad | Smooth | 44.8 (+0.8) | 54.1 (+1.9) | **87.5 (+32.5)** | 56.8 (-3.2) |
| pert-gauss | SmoothGrad | Superpix | **48.1 (-0.9)** | **57.4 (-2.6)** | 86.4 (+5.5) | 62.9 (+1.1) |
| org | BinaryMask | Basic | 41.2 (+3.5) | 51.3 (+5.4) | 79.9 (+4.5) | 53.6 (-2.0) |
| org | BinaryMask | Smooth | 43.4 (-0.6) | 53.5 (+1.3) | 84.7 (+0.4) | 53.9 (-6.1) |
| org | BinaryMask | Superpix | 47.3 (-1.7) | 57.0 (-3.0) | 84.8 (+3.9) | 62.0 (+0.2) |
| pert-binary | BinaryMask | Basic | 42.4 (+4.7) | 52.9 (+7.0) | 78.7 (+3.3) | 55.8 (+0.2) |
| pert-binary | BinaryMask | Smooth | 44.9 (+0.9) | 54.8 (+2.6) | 84.8 (+0.5) | 57.2 (-2.8) |
| pert-binary | BinaryMask | Superpix | **48.9 (-0.1)** | 56.8 (-3.2) | 86.2 (+5.3) | **68.0 (+6.2)** |

Table 2: Quantitative comparison of SmoothGrad and BinaryMask in terms of mIoU, FG-Precision, DR-/NDR-Recall for different fine-tuned models on VOC dataset. The difference between the aggregated saliency performance and the vanilla saliency performance is shown in parentheses. A positive value denotes an increase in performance, whereas a negative value denotes a decrease in performance for aggregated saliencies.
### Disintegrating the Spatial Structure of Images using Random Cropping
Random cropping can also be viewed as a perturbation technique where the individual crops disintegrate the spatial structure of the input image. We treat random cropping as a spatial perturbation and generate a saliency map by stochastically aggregating the saliency maps of the individual cropped images. We define this spatial perturbation-based aggregation as follows: \(\hat{\text{SM}}_{c}(\mathbf{I})=\frac{1}{n}\sum_{i=1}^{n}w_{i}\text{SM}_{c}( \tilde{\mathbf{I}}_{i})\), where \(\tilde{\mathbf{I}}_{i}=f_{pert}(\mathbf{I})\), \(\mathbf{I}\) corresponds to the input image, \(\tilde{\mathbf{I}}_{i}\) denotes the individual crops, and \(f_{pert}(.)\) denotes the spatial perturbation function, which is random cropping for this experiment. \(\text{SM}_{c}(.)\) is the (basic) saliency map and \(\hat{\text{SM}}_{c}\) corresponds to the final aggregated saliency, and \(w_{i}\) denotes the weight of each of the individual crop saliencies. For our experiments, we choose \(w_{i}=\sigma(S_{c}(\tilde{\mathbf{I}}_{i}))\), where \(S_{c}(\tilde{\mathbf{I}}_{i})\) is the classification score of the crop \(\tilde{\mathbf{I}}_{i}\), and \(\sigma(.)\) is the sigmoid activation function.
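A possible implementation of this crop-based aggregation is sketched below. We assume each crop's saliency map is pasted back at its original location before the weighted average is taken; `saliency_fn`, the crop size and the number of crops are hypothetical placeholders.

```python
# Sketch of the crop-based stochastic aggregation defined above: accumulate the
# saliency of each random crop at its location, weighted by sigmoid(class score).
import torch

def crop_aggregated_saliency(image, saliency_fn, crop=96, n=32):
    _, H, W = image.shape
    agg = torch.zeros(H, W)
    for _ in range(n):
        top = torch.randint(0, H - crop + 1, (1,)).item()
        left = torch.randint(0, W - crop + 1, (1,)).item()
        patch = image[:, top:top + crop, left:left + crop]
        sal, score = saliency_fn(patch)     # crop saliency map + class logit
        w = torch.sigmoid(score)            # weight w_i = sigmoid(S_c)
        agg[top:top + crop, left:left + crop] += w * sal
    return agg / n
```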
First row of Table 3 shows the performance of random cropping as a stochastic aggregation method, where we can see that it performs better than saliencies in terms of mIoU, FG-Precision, and DR-/NDR- Recall for all the background resolve approaches (difference in performance of random crop and saliencies are provided in parentheses). We can achieve as high as 50.4 mIoU using random crop-based aggregated saliencies with superpixel-based background resolve. Notably, random cropping-based aggregated saliencies employ the "_Model-org_" classifier to compute the saliencies, showing that random cropping does not require the classifier to be finetuned on additional perturbations to perform well.
### Can we do better than random cropping?
Next, we explore different variations of random cropping and patching techniques that break the spatial structure of input images. Random patching is an erasure-based method similar to the idea of the cutout method DeVries & Taylor (2017). The discriminative variations of random cropping (Disc-Crop) and patching (Disc-Patch) take the real values of CAM to complement the probability of selecting a crop or patch. See Appendix for details. Table 3 shows the results of these alternate methods. Random cropping and its discriminative variation (Disc-Crop) perform significantly better than the (basic) saliency method. However, the patch-based methods do not show comparable performance in terms of mIoU, FG-Precision, and DR-/NDR-Recall. One possible reason is that we used the original "_Model-org_" classifier, which is not augmented with the patch-wise perturbations. Therefore, patching creates unnatural artifacts during inference, and the classifier fails to attribute the individual samples correctly. The discriminative versions of cropping and patching did not significantly outperform the random versions.
| Method | BG-Res | mIoU | FG-Precision | DR-Recall | NDR-Recall |
| --- | --- | --- | --- | --- | --- |
| Random Crop | Basic | 44.6 (+6.9) | 53.6 (+7.7) | 84.2 (+8.8) | 59.4 (+3.8) |
| Random Crop | Smooth | 46.2 (+2.2) | 56.6 (+4.4) | **84.4 (+0.1)** | 57.5 (-2.5) |
| Random Crop | Superpix | 50.4 (+1.4) | **61.7 (+1.7)** | 82.6 (+1.7) | **61.7 (+0.1)** |
| Random Patch | Basic | 35.6 (-2.1) | 43.9 (-2.0) | 71.5 (-3.9) | 57.8 (+2.2) |
| Random Patch | Smooth | 37.7 (-6.3) | 45.4 (-6.8) | 77.6 (-6.7) | 59.9 (-0.1) |
| Random Patch | Superpix | 39.3 (-9.7) | 47.7 (-12.3) | 76.9 (-4.0) | 61.6 (-0.2) |
| Disc-Patch | Basic | 35.4 (-2.3) | 32.6 (-13.3) | 74.7 (-0.7) | 58.3 (+2.7) |
| Disc-Patch | Smooth | 38.6 (-5.4) | 45.8 (-6.4) | 78.8 (-5.5) | 61.7 (+1.7) |
| Disc-Patch | Superpix | 40.7 (-8.3) | 51.8 (-8.2) | 72.2 (-8.7) | 57.0 (-4.8) |
| Disc-Crop | Basic | 45.1 (+7.4) | 54.0 (+8.1) | 76.5 (+1.1) | 55.5 (-0.1) |
| Disc-Crop | Smooth | 46.3 (+2.3) | 56.5 (+4.3) | 74.7 (-9.6) | 53.4 (-6.6) |
| Disc-Crop | Superpix | **50.6 (+1.6)** | 61.6 (+1.6) | 73.9 (-7.0) | 57.9 (-3.9) |

Table 3: Comparison of Random Crop, Discriminative Crop, Random Patch, and Discriminative Patch in terms of mIoU, FG-Precision, DR-/NDR-Recall on VOC12. The difference between the aggregated and saliency performance is shown in parentheses.
## 8 Related Works
Current techniques for WS3 utilize CAMs as the foundation to produce segmentation maps. These methods can be broadly categorized into three types: (1) Modifying model architecture, (2) Iterative update-based methods, and (3) Modifying Loss functions.
First, several methods that modify the model architecture for WS3 have been developed to overcome the well-known limitations of CAM Kolesnikov and Lampert (2016); Araslanov and Roth (2020); Lee et al. (2021a). For example, a global weighted rank (GWR) pooling layer was proposed in Kolesnikov and Lampert (2016) that neither underestimates the object size like global max pooling (GMP) nor overestimates it using GAP. Normalized global weighted pooling (nGWP) was also proposed in Araslanov and Roth (2020) to replace the GAP layer, which helps to recover small segments, thus improving the mask precision. Another method FickleNet Lee et al. (2019) introduced stochastic aggregations in feature maps to produce the localization maps. However, changing the architecture can be difficult and restricts the types of models that are compatible with these methods.
The second set of methods aims to improve the seed performance of CAMs through iterative updates, such as erasure-based methods Li et al. (2018); Hou et al. (2018); Choe et al. (2020); Wei et al. (2017) and adversarial optimizations Lee et al. (2021b); Wei et al. (2017). Specifically, erasure-based methods suggest erasing the most discriminative regions to unveil the non-discriminative regions, thus addressing some of the limitations of CAMs. On the other hand, AdvCAM Lee et al. (2021b) proposed an anti-adversarial optimization technique to exploit the boundary information with pixel-level affinity for capturing more regions of the target objects. One primary limitation of such methods is that the termination condition is not well-defined and often heuristically chosen.
Finally, a third set of WS3 methods focus on modifying the loss function to improve the object coverage of CAMs. Specifically, the RIB Lee et al. (2021a) demonstrates that an information bottleneck occurs in later layers as only the task-relevant information is passed to the output. As a result, CAMs which are computed at the last layer, have sparse coverage of the target object. A new loss function was proposed that encourages the transmission of information from non-discriminative regions for classification, thus improving the quality of localization maps.
Several prior works have utilized saliency maps for WS3, as documented in Kolesnikov and Lampert (2016); Shimoda and Yanai (2016); Sun and Li (2019); Zeng et al. (2019). These studies primarily concentrate on enhancing segmentation map accuracy through post-processing techniques. However, their focus differs from our work on exploring the inherent potential of saliencies in overcoming the limitations associated with CAM-based approaches. Although these existing works contribute valuably to the field, they do not directly address the specific research questions that our study delves into - specifically, the comprehensive analysis of saliencies' effectiveness with respect to CAMs.
CAMs and Saliencies have also been extensively examined in the realm of explainability research, which is focused on providing explanations of model outputs that can potentially satisfy regulatory requirements Goodman and Flaxman (2017), help practitioners debug their models Casillas et al. (2013); Cadamuro et al. (2016), and identify unintended bias in the model Lakkaraju et al. (2017); Wang and Rudin (2015). Approaches based on activation maps fall under the CAM-based methods category Zhou et al. (2016); Selvaraju et al. (2016); Chattopadhay et al. (2018); Wang et al. (2020). Conversely, techniques relying on attribution maps belong to the saliency-like methods group Simonyan et al. (2013); Shrikumar et al. (2016); Springenberg et al. (2014); Zeiler and Fergus (2014); Smilkov et al. (2017); Sundararajan et al. (2017).
| Method | B/G Resolve | mIoU | FG-Prec | DR-Recall | NDR-Recall |
| --- | --- | --- | --- | --- | --- |
| CAM | Basic | **28.82** | **41.16** | **83.59** | 31.46 |
| Saliency | Basic | 22.22 | 28.26 | 65.46 | 48.78 |
| Saliency | Smooth | 25.46 | 31.94 | 73.02 | **52.65** |
| Random-Crop | Basic | 21.13 | 27.6 | 62.87 | 46.38 |
| Random-Crop | Smooth | 26.58 | 33.83 | 72.09 | 52.22 |

Table 4: Quantitative comparison of CAM and Saliency on COCO dataset in terms of mIoU, Foreground Precision, and DR-/NDR-Recall.
## 9 Discussion and Future Directions
Table 4 quantitatively evaluates the performance of competing methods on the MS COCO 2014 dataset. We compare the best-segmented map generated by each method by varying the global threshold across the range of \(0.01\) to \(0.50\). The segmented map with the highest mIoU value is selected for comparison. The Table illustrates that both saliency and random crop saliency outperform CAM in terms of NDR-Recall. This signifies that saliency-based approaches exhibit better recovery of the NDR region compared to CAM. However, CAM surpasses saliencies in terms of mIoU, FG-Precision, and DR-Recall. The smooth saliencies show comparable performance to CAM, which indicates the potential for improvement in the performance of saliencies by reducing its noisiness, especially when dealing with challenging datasets like the COCO dataset.
In conclusion, our paper proposes three novel evaluation metrics for WS3, namely NDR-Recall, DR-Recall, and FG-Precision, which can be used to assess the performance of alternative WS3 models in fixing the deficiencies of CAMs. We also revisit the potential of the use of saliency maps for WS3, which has been largely overlooked in the past, and demonstrate that simple post-processing steps, stochastic aggregation methods, and random cropping-based aggregation can significantly improve the quality of segmentation masks.
Although our work lays the foundation for future research in saliency maps for WS3, it is important to clarify that we are not the first to use saliencies for WS3, nor are we claiming state-of-the-art (SOTA) performance using stochastic aggregation methods when applied over saliencies. Instead, our focus is on presenting novel insights into the strengths and weaknesses of saliencies w.r.t. CAMs from multiple perspectives, and showing how simple modifications to saliencies can effectively address the limitations inherent in CAMs.
As newer techniques based on Vision Transformers Xie et al. (2022); Li et al. (2023) and Foundation models such as Segment-Anything Chen et al. (2023) are developed in the WS3 community to deliver SOTA performance, we anticipate future research to comprehensively understand their strengths and weaknesses building upon the metrics and analyses presented in our paper. Furthermore, while current post-processing methods in WS3 like CRF, PSA, and IRN are designed specifically to complement the limitations of CAM-based methods, we anticipate that researchers will build upon our findings to develop more advanced post-processing techniques for gradient-based WS3 methods.
|
2310.04173 | A physics-informed generative model for passive radio-frequency sensing | Electromagnetic (EM) body models predict the impact of human presence and
motions on the Radio-Frequency (RF) stray radiation received by wireless
devices nearby. These wireless devices may be co-located members of a Wireless
Local Area Network (WLAN) or even cellular devices connected with a Wide Area
Network (WAN). Despite their accuracy, EM models are time-consuming methods
which prevent their adoption in strict real-time computational imaging problems
and Bayesian estimation, such as passive localization, RF tomography, and
holography. Physics-informed Generative Neural Network (GNN) models have
recently attracted a lot of attention thanks to their potential to reproduce a
process by incorporating relevant physical laws and constraints. Thus, GNNs can
be used to simulate/reconstruct missing samples, or learn physics-informed data
distributions. The paper discusses a Variational Auto-Encoder (VAE) technique
and its adaptations to incorporate a relevant EM body diffraction method with
applications to passive RF sensing and localization/tracking. The proposed
EM-informed generative model is verified against classical diffraction-based EM
body tools and validated on real RF measurements. Applications are also
introduced and discussed. | Stefano Savazzi, Federica Fieramosca, Sanaz Kianoush, Vittorio Rampa, Michele D'amico | 2023-10-06T11:43:22Z | http://arxiv.org/abs/2310.04173v1 | # A physics-informed generative model for passive radio-frequency sensing
###### Abstract
Electromagnetic (EM) body models predict the impact of human presence and motions on the Radio-Frequency (RF) stray radiation received by wireless devices nearby. These wireless devices may be co-located members of a Wireless Local Area Network (WLAN) or even cellular devices connected with a Wide Area Network (WAN). Despite their accuracy, EM models are time-consuming methods which prevent their adoption in strict real-time computational imaging problems and Bayesian estimation, such as passive localization, RF tomography, and holography. Physics-informed Generative Neural Network (GNN) models have recently attracted a lot of attention thanks to their potential to reproduce a process by incorporating relevant physical laws and constraints. Thus, GNNs can be used to simulate/reconstruct missing samples, or learn physics-informed data distributions. The paper discusses a Variational Auto-Encoder (VAE) technique and its adaptations to incorporate a relevant EM body diffraction method with applications to passive RF sensing and localization/tracking. The proposed EM-informed generative model is verified against classical diffraction-based EM body tools and validated on real RF measurements. Applications are also introduced and discussed.
EM body models, generative models, variational auto-encoders, generative adversarial networks, radio tomography, integrated sensing and communication, localization.
## I Introduction
Passive radio sensing employs stray ambient radio signals from Radio Frequency (RF) devices to detect, locate, and track people that do not need to wear any electronic device, namely device-free [1] - [4]. In line with the _Communication while Sensing_ paradigm [4], these methods provide seamless detection capabilities, while performing radio communications. In fact, radio signals encode a view of all moving/fixed objects traversed during the signal propagation, and several data analytic methods, such as Bayesian [5] and machine learning approaches [6], can be usefully employed to decode this information, typically by large-scale processing of radio signals exchanged by the wireless devices.
Almost all emerging approaches proposed for solving the radio sensing problem require approximate knowledge of a physics-informed (prior) model to evaluate the effects of human subjects on radio propagation. The perturbative effects on RF signals induced by the presence or movements of human bodies can be interpreted using Electro-Magnetic (EM) propagation theory considerations [7]. These methods have paved the way to several physical and statistical models for passive radio sensing, which exploit full wave approaches, ray tracing, moving point scattering [8], and diffraction theory [9, 10, 11, 12]. The body-induced perturbations that impair the radio channel can thus be collected, measured, and processed using physics-informed models to estimate target location and tracking information. A general-purpose EM tool for the prediction of body-induced effects on propagation is still under evaluation [13]. While simplified or approximated EM models such as path-loss methods [14] are too simplistic to capture the complexity of the EM environment, current EM models are too complicated or time-consuming to be of practical use for real-time sensing scenarios [15], although they are usable for off-line applications, e.g. during pre-deployment assessment [16].
Physics-informed generative modelling [17] is an emerging field in several application contexts ranging from Bayesian estimation, computational imaging and inverse problems [18, 19]. Generative models are typically useful for reproducing the physics of a given phenomenon by generating observations drawn from any prior distribution which reflects the complex underlying physics [20] of the environment under
Fig. 1: From top to bottom: the generative approach; the EM model link geometry including the 2D sheet-like obstacle and the RX/TX antennas. |
2305.09839 | Spontaneous stochasticity in the presence of intermittency | Spontaneous stochasticity is a modern paradigm for turbulent transport at
infinite Reynolds numbers. It suggests that tracer particles advected by rough
turbulent flows and subject to additional thermal noise, remain
non-deterministic in the limit where the random input, namely the thermal
noise, vanishes. Here, we investigate the fate of spontaneous stochasticity in
the presence of spatial intermittency, with multifractal scaling of the
lognormal type, as usually encountered in turbulence studies. In principle,
multifractality enhances the underlying roughness, and should also favor the
spontaneous stochasticity. This letter exhibits a case with a less intuitive
interplay between spontaneous stochasticity and spatial intermittency. We
specifically address Lagrangian transport in unidimensional multifractal random
flows, obtained by decorating rough Markovian monofractal Gaussian fields with
frozen-in-time Gaussian multiplicative chaos. Combining systematic Monte-Carlo
simulations and formal stochastic calculations, we evidence a transition
between spontaneously stochastic and deterministic behaviors when increasing
the level of intermittency. While its key ingredient in the Gaussian setting,
roughness here surprisingly conspires against the spontaneous stochasticity of
trajectories. | André Luís Peixoto Considera, Simon Thalabard | 2023-05-16T22:42:34Z | http://arxiv.org/abs/2305.09839v2 | # Spontaneous stochasticity in the presence of intermittency
###### Abstract
Spontaneous stochasticity is a modern paradigm for turbulent transport at infinite Reynolds numbers. It suggests that tracer particles advected by rough turbulent flows and subject to additional thermal noise, remain non-deterministic in the limit where the random input, namely the thermal noise, vanishes. Here, we investigate the fate of spontaneous stochasticity in the presence of spatial intermittency, with multifractal scaling of the lognormal type, as usually encountered in turbulence studies. In principle, multifractality enhances the underlying roughness, and should also favor the spontaneous stochasticity. This letter exhibits a case with a less intuitive interplay between spontaneous stochasticity and spatial intermittency. We specifically address Lagrangian transport in unidimensional multifractal random flows, obtained by decorating rough Markovian monofractal Gaussian fields with frozen-in-time Gaussian multiplicative chaos. Combining systematic Monte-Carlo simulations and formal stochastic calculations, we evidence a transition between spontaneously stochastic and deterministic behaviors when increasing the level of intermittency. While its key ingredient in the Gaussian setting, roughness here surprisingly conspires against the spontaneous stochasticity of trajectories.
Introduction.When transported by a sufficiently turbulent flow, puffs of fluid particles are known to undergo a phase of algebraic inflation \(R\sim t^{3/2}\), independent from their initial size and now known as Richardson diffusion [1; 2; 3; 4; 5; 6; 7; 8; 9; 10]. Beyond the law in itself, Richardson's seminal contribution is the intuition that turbulent transport requires some probabilistic modeling: The modern interpretation uses the phenomenon of spontaneous stochasticity [11; 12; 13; 14; 15; 16], which involves tracers as fluid particles advected by the fluid and subject to additional thermal noise of amplitude \(\kappa\)[17]: In the vanishing viscosity limit, the multi-scale nature of turbulent flows amplifies thermal noise in such a drastic fashion, that initially coinciding particles may separate in finite time although their dynamics formally solve the same initial value problem [18; 19; 20], hereby suggesting intrinsic nature for the underlying randomness.
To date, the scenario of spontaneous stochasticity for Lagrangian separation is fully substantiated within the theory of Kraichnan flows. Kraichnan flows are minimal random ersatzes of homogeneous isotropic turbulent fields [17; 19; 21; 22; 23; 24]; they are defined as white-in-time Gaussian random fields, whose spatial statistics are centered and prescribed by two-point correlation functions with algebraic decay of the kind
\[C_{\eta}^{(\xi)}(r)=1-|r|^{\xi}\text{ for }\eta\leq|r|\ll 1, \tag{1}\]
and vanishing at large scales \(\gg 1\). \(\eta\) is a scale under which the flow is smooth, analogous to so-called Kolmogorov scale: The scales \(\eta\leq|r|\ll 1\) define the so-called inertial range in turbulence theory. The Hurst parameter \(\xi\in]0;2[\)prescribes the roughness of the field, through inertial-range scaling \(\left\langle\left(v(x+r)-v(x)\right)^{2}\right\rangle\sim r^{\xi}\). In the limit \(\eta\to 0\), this means that the lesser \(\xi\), the rougher \(v\). In this stochastic setting, spontaneous stochasticity essentially means that some random time accounting for the large-scale \(O(1)\) dispersion of a puff of tracers with initial size \(O(\eta)\) has probability \(1\) to be finite in limits where \(\eta,\kappa\) jointly vanish. The limit describes puffs initially coalescing to a point in prescribed (quenched) space-time velocity realizations [4; 20]. For instance, explicitly considering the relative separation \(\mathbf{R}(t,\mathbf{r}_{0}):=\mathbf{X}_{2}(t,\mathbf{x}_{0}+\mathbf{r}_{0} )-\mathbf{X}_{1}(t,\mathbf{x}_{0})\) between two tracers initiated at \(\mathbf{x}_{0},\mathbf{x}_{0}+\mathbf{r}_{0}\), a natural separation time is
\[\tau_{1}(\eta,\kappa):=\inf_{\|\mathbf{r}_{0}\|=\eta}\left\{t:\|\mathbf{R}(t,\mathbf{r}_{0})\|\geq 1\right\}. \tag{2}\]
from which we interpret spontaneous stochasticity as the property
\[\mathbb{P}\left[\tau_{1}<\infty\right]\to 1\text{ as }\eta,\kappa\to 0. \tag{3}\]
Even at this essential level, the presence or the absence of spontaneous stochasticity in Kraichnan flows depends on a subtle interplay between four parameters: roughness, compressibility, space-dimension, reflection rules for colliding trajectories. To highlight the effect of roughness, we focus on the unidimensional space, hence prescribing unit compressibility, with a thermal noise \(\kappa=\eta\) ensuring that colliding trajectories reflect upon collision in the limit \(\eta\to 0\). The only relevant parameter is then the roughness exponent \(\xi\): Spontaneously stochastic property (3) holds if and only if \(\xi<1\). For \(\xi\geq 1\), particles wind up sticking together hence producing apparent deterministic behavior [17; 25; 26]: In short, Kraichnan flows suggest the mantra "The rougher, the more spontaneously stochastic". In this letter, we show that this mantra cannot be repeated in the presence of multifractality, a feature which we later also refer to as spatial intermittency.
_1D multifractal Kraichnan flows._ We propose a multifractal unidimensional generalization of the Kraichnan model, which prescribes the motion of \(N\) tracer particles (\(X_{i},\,i=1,\dots,N\)) in terms of the advection-diffusion
\[dX_{i}=u_{\eta}^{\xi,\gamma}\,(X_{i}(t),dt)+\sqrt{2\kappa}\;B_{i}(dt), \tag{4}\]
where the \(B_{i}\)'s are independent Brownian motions and we set \(\kappa=\eta\) for the thermal noise amplitude: This scaling ensures that close-by tracers separated by at most \(\eta\) diffuse away from each other. The smoothing scale \(\eta\) will ultimately be taken to \(0\). The velocity \(u_{\eta}^{\xi,\gamma}\) models turbulent advection in a rough multifractal field, prescribed by the Hurst exponent \(\xi\in]0;2[\) and the intermittency parameter \(\gamma\). We use the 1D Markovian version of the spatio-temporal fields constructed by Chevillard & Reneuve [27]
\[\begin{split}& u_{\eta}^{\xi,\gamma}(x,dt)=\frac{1}{Z_{\eta}} \int_{\mathbb{R}}L_{\eta}^{(\xi)}(x-y)\,e^{\gamma Y(y)}W_{1}(dy,dt),\\ &\text{for}\;Y(y):=\int_{\mathbb{R}}L_{\eta}^{(0)}(y-z)W_{2}(dz),\;\;Z:=e^{\gamma^{2}\mathbb{E}\left(Y^{2}\right)},\end{split} \tag{5}\]
in terms of the mutually independent (1+1) dimensional Wiener process \(W_{1}\) and Brownian motion \(W_{2}\), also independent from the \(B_{i}\)'s. The kernels \(L_{\eta}^{(\xi)}\) prescribe the Hurst exponent of the velocity field when \(\gamma=0\); They are here defined as convolution square roots of the correlation function
\[C^{(\xi)}(r)=\left\{\begin{array}{c}\left(1-r^{\xi}\right)\mathbf{1}_{r<1} \text{ for }\xi>0\\ \left(\log\frac{1}{r}\right)\mathbf{1}_{r<1}\text{ for }\xi=0\end{array}.\right. \tag{6}\]
The subscript \(\eta\) denotes a regularization over the small-scale \(\eta\), in practice most easily defined using Fourier transforms [26]. Please note that the expressions (6) indeed represent correlation functions for \(0<\xi\leq 1\)[28], and we therefore restrict our analysis to this range. With this choice, Eq. (1) is then exactly and not just asymptotically satisfied. Spatial intermittency is modeled by the term \(M_{\eta}^{(\gamma)}=e^{\gamma Y}W_{1}(dy,\cdot)\), namely the exponentiation of a regularized fractional Gaussian field \(Y\) with vanishing Hurst exponent. This non-trivial operation requires to be suitably normalized by the term \(Z=e^{\gamma^{2}\mathbb{E}(Y^{2})}\sim\eta^{-\gamma^{2}}\). The mathematical expectation \(\mathbb{E}\) denotes an average over the random environment Y. When \(\gamma<\sqrt{2}/2\simeq 0.707\), the limit \(\eta\to 0\) then produces a well-defined and non-trivial multifractal random distribution called _Gaussian multiplicative chaos_[29; 30; 31; 32] (later referred to as GMC). The multifractality prescribes the power-law scaling \(S_{p}(\ell):=\langle|u(x+r)-u(x)|^{p}\rangle\sim C_{p}|r|^{\zeta(p)}\) in the inertial range \(\eta\leq r\ll 1\), with quadratic variation of the structure function exponents as
\[\zeta(p)=p\left(\xi/2+\gamma^{2}\right)-\gamma^{2}p^{2}/2; \tag{7}\]
This is a signature of log-normal multifractality - see Fig. 1 for a numerical illustration using Monte-Carlo averaging with \(\eta=2^{-20}\), and the power law extending over almost 5 decades.
Here, the originality of the field (5) comes from its temporal dependence. The Gaussian component is Markovian, and the random environment (5) is analogous to a Kraichnan flow when we set the intermittency parameter \(\gamma=0\). The GMC component is random but frozen-in-time: This feature will allow the spatial intermittency to play out in (4) even at the level of two-particle dynamics.
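To make the construction more tangible, here is a rough NumPy sketch (not the authors' code) of a single frozen-in-time spatial snapshot of the field (5) on a periodic grid. The kernels are realized as spectral square roots of the correlations (6), small negative spectral values produced by discretization are clipped, and the smoothing scale is taken as the grid spacing; for \(\gamma=0\) this reduces to a standard circulant synthesis of a Gaussian field with a periodized version of correlation (6), while for \(\gamma\neq 0\) it only mimics (5) up to discretization factors.

```python
# Rough sketch of one spatial snapshot of the multifractal velocity (5),
# built by FFT-based (circulant) synthesis on a periodic grid of N points.
import numpy as np

def snapshot(N=2**14, xi=2/3, gamma=0.2, L=4*np.pi, seed=0):
    rng = np.random.default_rng(seed)
    r = np.minimum(np.arange(N), N - np.arange(N)) * (L / N)   # periodic distance
    eta = L / N                                                # smoothing scale

    def gaussian_field(corr, noise):
        spec = np.clip(np.fft.fft(corr).real, 0.0, None)       # clip tiny negatives
        return np.fft.ifft(np.sqrt(spec) * np.fft.fft(noise)).real

    # log-correlated field Y and its normalized exponential (the GMC weight)
    c0 = np.where(r < 1.0, np.log(1.0 / np.maximum(r, eta)), 0.0)
    Y = gaussian_field(c0, rng.standard_normal(N))
    chaos = np.exp(gamma * Y) / np.exp(gamma**2 * c0[0])        # Z = exp(gamma^2 E[Y^2])

    # kernel L^(xi) applied to the chaos-decorated white noise
    cxi = np.where(r < 1.0, 1.0 - r**xi, 0.0)
    return gaussian_field(cxi, chaos * rng.standard_normal(N))
```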
_Two wrong intuitive assumptions on multi-scaling & spontaneous stochasticity._ Eq. (7) prescribes \(\zeta(2)=\xi\), meaning that for the multifractal fields (5), the two-point correlation (1) is prescribed by the Hurst exponent, independently of the intermittency parameter \(\gamma\). Still, our multifractal fields are rougher than their mono-fractal counterpart. This is seen qualitatively from the numerical realizations of Fig. 1(c): increasing \(\gamma\) also increases the spikiness of the signal. More quantitatively, the spatial roughness \(\xi_{K}\) of a single realization of the random field \(u^{\xi,\gamma}\) is tied to its scaling exponents \(\zeta(p)\) through the classical Kolmogorov continuity theorem [33; 34] as
\[\xi_{K}=2\sup_{p}\frac{\zeta(p)-1}{p}\leq\xi \tag{8}\]
For \(\gamma=0\), the mono-fractal behavior \(\zeta(p)=p\xi/2\) holds for arbitrarily large \(p\)'s, and as such the exponent \(\xi_{K}\) identifies with the Hurst exponent \(\xi\). For \(\gamma\neq 0\), though, this value is reached at \(p_{K}=\sqrt{2}/\gamma\), which prescribes \(\xi_{K}<\xi\) - see Fig. 1(d). On the one hand, because of this enhanced roughness, one could in principle expect that tracers advected in the intermittent fields (4) are more likely to exhibit non-deterministic behavior than in Kraichnan flows, following the mantra "The rougher, the more spontaneously stochastic". On the other hand, two-particle separations in the monofractal Kraichnan flows depend only on \(\zeta(2)=\xi\) [26]: one may as well expect that, at least at the level of two-particle separation, one should see no effect of intermittency. We now present formal and numerical results to argue that neither of these intuitive assumptions is in fact correct.
Figure 1: (a) Multifractal prediction given by Eq. (7) for \(\gamma=0.2\). Inset shows the correlation function with the power-law decay (1) for \(\xi=\zeta(2)\). (b) Compensated structure functions \(r^{-\xi}\,\langle|\delta u|^{p}\rangle^{2/p}\propto r^{2\zeta(p)/p-\xi}\) for integer orders \(1\leq p\leq 6\) (from bottom to top). Points indicate Monte-Carlo averaging over 512 samples using \(N=2^{20}\) points and smoothing scale \(\eta=4\pi/N\). The black shaded lines indicate the multifractal prediction. (c) Random realizations of the spatial part of (5) for \(\xi=2/3\) and various \(\gamma\). (d) Effective roughness \(\xi_{K}\) as a function of \(\gamma\) for various \(\xi\). Inset illustrates Definition (8).
_From random fields to random potentials._ We focus on the dynamics of pair separations, obtained by considering \(N=2\) in Eq. (4). Similar to the Gaussian case [35; 36], tracers advected by Eq. (4) can be interpreted as particles interacting through a random pairwise potential, whose dynamics are prescribed by the stochastic differential equation (SDE)
\[dX_{i}=\frac{1}{Z_{\eta}}\sum_{j=1,2}L_{ij}\left(\left|X_{1}-X_{2}\right|\right)e^{\gamma Y(X_{j})}W_{j}(dt)+\sqrt{2\kappa}\,B_{i}(dt),\qquad\text{with }L(r):=\left(\begin{array}{cc}1&0\\ 1-r^{\xi}&\sqrt{1-(1-r^{\xi})^{2}}\end{array}\right), \tag{9}\]
The matrix \(L\) is a discrete analogue of the kernels \(L_{\eta}^{(\xi)}\) featured in Eq. (5), except for the regularizing scale \(\eta\). It corresponds to an explicit Cholesky decomposition of the correlation matrix \(C_{2}:=C(|X_{i}-X_{j}|)_{i,j=1,2}\), such that \(LL^{T}=C_{2}\). As an SDE version of the original dynamics (4), Eq. (9) comes with two advantages. _(i)_ At a numerical level, it allows for Monte-Carlo sampling of trajectories without the need to generate the fields of Eq. (5) at each time-step, similar to the Gaussian setting [35; 36]. _(ii)_ At a formal level, separation-time statistics can be obtained by means of stochastic calculus and potential theory for Markov processes, in other words Feynman-Kac-like formulas. The word _formal_ is advisory, as the frozen-in-time GMC entering the dynamics could require cautious mathematical handling [37; 31; 38], but this goes well beyond the scope of the present letter.
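As an illustration of advantage _(i)_, the sketch below performs an Euler-Maruyama integration of Eq. (9) for a single pair and returns the time at which the separation first reaches the integral scale, capped at a maximal time. The frozen environment \(Y\) is passed in as a callable (e.g. an interpolated snapshot), the normalization is taken as \(Z=\eta^{-\gamma^{2}}\) following the scaling quoted above, and the reflection rule at scale \(\eta\) is a simple re-separation about the midpoint; the time step and defaults are illustrative only.

```python
# Hedged sketch of the SDE-based Monte-Carlo estimate of the separation time.
import numpy as np

def separation_time(Y, xi=2/3, gamma=0.2, eta=1e-3, dt=1e-4, t_max=64.0, seed=0):
    rng = np.random.default_rng(seed)
    kappa, Z = eta, eta**(-gamma**2)
    x1, x2, t = 0.0, eta, 0.0                      # initial separation eta
    while t < t_max:
        r = abs(x1 - x2)
        if r >= 1.0:
            return t                               # pair has separated
        c = max(1.0 - r**xi, 0.0)                  # two-point correlation (1)
        L = np.array([[1.0, 0.0],
                      [c, np.sqrt(max(1.0 - c**2, 0.0))]])   # Cholesky factor of C_2
        dW = rng.standard_normal(2) * np.sqrt(dt)
        dB = rng.standard_normal(2) * np.sqrt(2.0 * kappa * dt)
        amp = np.exp(gamma * np.array([Y(x1), Y(x2)])) / Z
        x1 += L[0] @ (amp * dW) + dB[0]
        x2 += L[1] @ (amp * dW) + dB[1]
        if abs(x1 - x2) < eta:                     # crude reflecting boundary
            mid = 0.5 * (x1 + x2)
            x1, x2 = mid - 0.5 * eta, mid + 0.5 * eta
        t += dt
    return np.inf                                  # no separation before t_max
```

With `Y = lambda x: 0.0` and `gamma = 0.0`, the routine reduces to the monofractal Kraichnan case.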
_The paradoxical interplay between intermittency and spontaneous stochasticity._ Stochastic calculus suggests that, for a prescribed realization of the GMC, the pair-separation time \(T_{1}^{Y}(r)\) from scale \(r\) to scale \(1\) formally solves the boundary-value problem
\[\begin{split}&\mathcal{L}_{2}^{Y}T_{1}^{Y}=-1,\\ &\text{with }T_{1}^{Y}(1)=0\text{ and }\partial_{r}T_{1}^{Y}(\eta)=0,\end{split} \tag{$\gamma$-BVP}\]
involving the GMC-dependent operator
\[\mathcal{L}_{2}^{Y}(X_{1},r):=\frac{e^{2\gamma Y(X_{1})}}{2Z_{\eta}^{2}}\left(r^{\xi}+e^{2\gamma\Delta Y}\left(2-r^{\xi}\right)\right)r^{\xi}\partial_{rr}, \tag{10}\]
which features the increment \(\Delta Y:=Y\left(X_{1}+r\right)-Y\left(X_{1}\right)\). For \(\gamma\neq 0\), (\(\gamma\)-BVP) features a non-trivial coupling between the pair-separation time and the underlying GMC. Because of this coupling, (\(\gamma\)-BVP) is not closed and one cannot a priori solve it explicitly for \(T_{1}^{Y}\). Setting \(\gamma=0\) retrieves the Gaussian setting and provides a statistical decoupling [17; 26], which makes Eq. (\(\gamma\)-BVP)-(10) solvable. For \(\gamma\neq 0\), we define the annealed separation-time
\[\tau_{1}:=\mathbb{E}\left(T_{1}^{Y}\right), \tag{11}\]
where we recall that the expectation \(\mathbb{E}\left(\cdot\right)\) denotes an average over the GMC random environment. A non-trivial decoupling is then obtained under the mean-field Ansatz
\[\mathbb{E}\left(e^{\gamma\Delta Y}T_{1}^{Y}\right)=\mathbb{E}\left(e^{\gamma \Delta Y}\right)\tau_{1}. \tag{12}\]
For \(\eta\ll 1\), (\(\gamma\)-BVP) under the Ansatz (12) formally becomes
\[\begin{split}&\mathcal{L}_{2}^{*}\tau_{1}=-1\quad\text{with }\tau_{1}(1)=0\text{ and }\partial_{r}\tau_{1}(0)=0,\\ &\text{for }\mathcal{L}_{2}^{*}(r):=\left(1-\frac{r^{\xi}}{2} \right)r^{\xi+4\gamma^{2}}\partial_{rr}.\end{split} \tag{13}\]
We refer the reader to Supplemental Material [26] for details on derivations of (10)-(13). As for now, we observe that the term \((1-r^{\xi}/2)\) is bounded and of order \(O(1)\). Hence, the separation times behave as if the multifractal random flows were Gaussian, yet with effective _driving_ Hurst parameter
\[\xi_{\gamma}:=\xi+4\gamma^{2}>\xi>\xi_{K}. \tag{14}\]
This Gaussian flow is smoother than the flow at \(\gamma=0\)! This calculation evidences a highly paradoxical interplay between multifractality, roughness and spontaneous stochasticity: Increasing \(\gamma\) makes the flow rougher in terms of the effective roughness \(\xi_{K}\) deduced from Kolmogorov theorem, but makes the flow smoother in terms of the spontaneous stochasticity of tracers, driven by \(\xi_{\gamma}\) given above. A practical consequence of Eq. (14) is the presence of a phase transition driven by \(\gamma\), and characterized by \(\xi_{\gamma}=1\). This prescribes the mean-field critical curve
\[\gamma_{c}=\frac{1}{2}\sqrt{1-\xi}: \tag{15}\]
For \(0<\xi<1\), tracers are spontaneously stochastic when \(\gamma<\gamma_{c}\) and deterministic when \(\gamma\geq\gamma_{c}\). For \(\xi=1\), the Gaussian case is deterministic, and the critical \(\gamma_{c}\) vanishes, as it should. For \(\xi=0\), this prescribes \(\gamma_{c}=1/2\), less than the maximum value \(\sqrt{2}/2\) allowed for the GMC. The critical value \(\gamma_{c}\) can therefore in principle be reached for any Hurst exponent \(\xi\in\,]0,1]\).
_Numerics._ To illustrate the rationale of the prediction (14) and of the mean-field Ansatz (12), we now report results of Monte-Carlo sampling of pair trajectories, obtained from two different methods, in which we vary the levels of roughness \(\xi\), intermittency \(\gamma\) and the regularization scale \(\eta\). The first method is field-based. It uses direct integration of the dynamics (4) with the standard Euler-Maruyama method, and requires generating a new spatial realization of the field (5) at each timestep. Tracers are then advected by smoothly interpolating the velocity field at their current positions. The second method is SDE-based. It uses the representation (9) in terms of interacting particles and only requires generating a single field, namely the frozen GMC, per pair of trajectories. In the SDE setting, in order to enhance numerical stability, we add a callback function ensuring exact reflecting boundary conditions for particles reaching \(\eta\).
Beyond the physical parameters \(\xi,\gamma,\eta\), both methods require to set values for the field resolutions \(N\), the number of trajectory realizations, the timesteps \(dt\) - Table 1 lists the essential numerical parameters. To ensure a finite-time completion of the numerical algorithms, we use a maximal time \(T_{max}\) over which the numerics are stopped: Our Monte-Carlo sampling therefore does not measure \(\tau_{1}(\eta)\) but rather the estimate \(\tilde{\tau}_{1}=\left\langle T_{1}^{Y}\wedge T_{max}\right\rangle\).
Fig. 2 and Fig. 3 summarize our numerical observations, and those prove compatible with the mean-field predictions. As seen from Fig. 2, we monitor two types of behaviors when prescribing \(0<\xi<1\). Setting for instance \(\xi=2/3\) as in Panels (a,b), we observe that for small \(\gamma\), the estimates \(\tilde{\tau}_{1}\) converge to finite value, independent from \(T_{max}\). Upon decreasing \(\eta\), it is found that this value is compatible with the mean-field prediction \(\tau_{1}^{mf}(\xi_{\gamma})=\left(2(1-\xi_{\gamma})(1-\xi_{\gamma}/2)\right)^ {-1}\) involving the Kraichnan flow estimate with the driving Hurst exponent as input [26]. This evidences the spontaneous stochastic nature of separations for small \(\gamma\). For large \(\gamma\), the apparent convergence of \(\tilde{\tau}_{1}\) when decreasing \(\eta\) is a numerical artifact, as the limiting value grows with \(T_{max}\). This signals deterministic behavior, with the particles not separating in the limit \(\eta\to 0\). As such, for all the values of \(\xi\) considered in this work, our numerics reflect the presence of a phase transition at finite value of \(\gamma\). This is in agreement with our mean-field argument, and substantiates our claim that intermittency here favors deterministic behavior. The onset of deterministic behavior when increasing \(\gamma\) is found to be similar when using either field-based or SDE-based numerics. Focusing on the cases \(\xi=1/3\) and \(\xi=2/3\), we find good compatibility with the mean-field prediction in the latter case and observe deviations in the former case - see Panel (c). As seen from Panel (d), the mean-field prediction accurately captures the transition between deterministic and non-deterministic behaviors for small values of the effective roughness, corresponding to the larger values of \(\xi\). The agreement seems to deteriorate for smaller \(\xi\). This discrepancy suggests that the mean-field approach becomes inaccurate in the latter regime, but one cannot rule out a defect of the numerics : As \(\xi\to 0^{+}\), the paths become very rough, and the Euler-Maruyama scheme may become unfit even when combined with very fine timestepping.
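For reference, the mean-field quantities used in this comparison can be evaluated with a few lines. The formulas below simply restate Eqs. (14)-(15) and the quoted expression for \(\tau_{1}^{mf}\), which is only meaningful in the spontaneously stochastic regime \(\xi_{\gamma}<1\); the helper names are ours.

```python
# Small numerical companion to the mean-field predictions of the text.
import numpy as np

def xi_driving(xi, gamma):
    return xi + 4.0 * gamma**2                     # Eq. (14)

def gamma_critical(xi):
    return 0.5 * np.sqrt(1.0 - xi)                 # Eq. (15)

def tau1_mean_field(xi, gamma):
    xg = xi_driving(xi, gamma)
    if xg >= 1.0:
        return np.inf                              # deterministic regime
    return 1.0 / (2.0 * (1.0 - xg) * (1.0 - xg / 2.0))

# Example: xi = 2/3 gives gamma_c ~ 0.29, so gamma = 0.2 remains in the
# spontaneously stochastic regime, with a finite tau1_mean_field(2/3, 0.2).
```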
Fig. 3 shows the effect of non-Gaussianity playing out at fixed value of the driving Hurst exponent \(\xi_{\gamma}=2/3\) in the spontaneous stochastic regime. As seen from Panels (a) and (b), the behaviors between Gaussian and non-Gaussian settings are qualitatively different, although by construction, both share the same mean-field average separation time. In the Gaussian case, the GMC \(\propto e^{\gamma Y}\) is unity, and the behavior of pairs is statistically independent from their absolute positions. In the non
| \(\xi\) | \(\gamma\) | \(N\) | \(\eta\) | \(dt\) | \(T_{max}\) | Realizations |
| --- | --- | --- | --- | --- | --- | --- |
| 2/9 to 1 | 0 to 0.6 | \(2^{7}\) to \(2^{14}\) | \(4\pi/N\) | \(10^{-4}\) | 8 to 64 | \(10^{5}\) |

Table 1: Numerical parameters for Fig. 2 and Fig. 3 for both methods. For SDE-based numerics, realizations are independent, while for the field-based numerics we use minibatches of 100 trajectories to reach \(10^{5}\) samples.
Figure 2: (a) Convergence with \(\eta\) of the separation time at \(\xi=2/3\) for field-based numerics, using maximal simulation times \(T_{max}=8(\triangle),16(\Box),32(\circ),64(\circ)\), for \(\gamma=0\) (orange) and \(\gamma=0.6\) (green). Dashed line indicates the Kraichnan flow value. (b) Same but for SDE-based numerics. (c) \(\tau_{1}\) against \(\gamma\) at \(\xi=1/3\) (red) and \(2/3\) (blue) for field-based (\(\circ\)) and SDE-based (\(\times\)) numerics, using the smallest \(\eta=10^{-3}\) and largest \(T_{max}=64\). (d) Colormap for the average separation time \(\tau_{1}\) against the roughness and intermittency parameters \(\xi,\gamma\) from SDE-based numerics. Data are normalized by \(T_{max}=64\), and interpolated from that measured at the white dots. Red (blue) rendering indicate values close to 0 (1) suggestive of spontaneously stochastic (deterministic) behavior.
Gaussian case, the GMC is non-trivial and we observe a dependence on the local values of the magnitude \(\gamma Y\). This is compatible with the fact that Eq. (10) ruling the pair separations is not closed, unless one further averages over the GMC realizations. This behavior reflects in the PDF of exit-times. At large times, the Gaussian setting exhibits the exponential decay \(\propto e^{-t/\tau_{1}}\) predicted by the Kraichnan flow theory [26]. The non-Gaussian case deviates from the exponential behavior; it exhibits fat tails, likely reflecting particles trapped in quiet "valleys" of the frozen-in-time GMC. Understanding the details of this slow decay requires tools more refined than the present mean-field approach and is left for future studies.
Concluding remarks.We have proposed a non-trivial extension of the Kraichnan flow theory towards a multifractal setting, that we obtained by decorating the original Markovian Gaussian flows with a frozen-in-time Gaussian multiplicative chaos. Multifractality makes the flow rougher in terms of the Kolmogorov roughness \(\xi_{K}\), but the spontaneous stochasticity of two-particle separation maps to that of a smoother Gaussian environment, with Hurst exponent \(\xi_{\gamma}>\xi>\xi_{K}\). This paradoxical effect is all the less intuitive, as the second-order structure functions of our parametric family of fields are characterized by constant \(\zeta(2)=\xi\) independent of the level of the intermittency. This is an example of a "smoother ride over a rougher sea" at play in scalar transport [39; 21], and possibly connected to the mathematical theory of regularization by noise [40]. The availability of a SDE-based interpretation suggests a new playground to build a fundamental understanding of transport in multifractal environments. This includes tackling higher-dimensions, revisiting scalar intermittency and connections with anomalous dissipation [41; 42], or more generally addressing irreversibility [43; 44] and universality of transport. On the latter aspect, let us here point out that the specific value of the driving roughness \(\xi_{\gamma}\) is clearly model-dependent. For example, using white-in-time versions of the GMC (not shown here), both the mean-field approach and the numerics yield \(\xi_{\gamma}=\xi\). The dependence upon time-correlation is reminiscent of that observed in the Gaussian case [20]. However, one could expect that the feature \(\xi_{\gamma}\geq\zeta(2)\) is a robust feature of pair-dispersion in multifractal environments, and this could prove crucial in assessing transport in genuine turbulent environments. Roughness and intermittency levels measured in 3D homogeneous isotropic turbulence corresponds to \(\xi\simeq 0.69\) and \(\gamma\simeq 0.15\)[45; 46; 32]. In our model, this choice of parameters results in driving Hurst parameter \(\xi_{\gamma}\simeq 0.77\). Transposing this exponent to a deterministic setting would yield Richardson diffusion \(R\sim t^{1.62}\), noticeably different from \(t^{3/2}\). Naturally, while they extend the Kraichnan flows to a more realistic setting, our unidimensional multifractal environments remain caricatures of genuine turbulent fields, and lack essential ingredients such as skewness, temporal correlations, incompressiblity, etc. At a more mathematical level, our analysis suggests that the problem of transport in multifractal random flows, be them Eq. (5) or variations thereof, might prove solvable. Here, the use of a frozen-in-time GMC as a random environment is strongly reminiscent of the parabolic Anderson model used in condensed matter physics [47] and the Liouville Brownian motion entering the construction of field theories in the context of 2D quantum gravity [37; 38]. Exploiting those analogies may provide a path towards a rigorous treatment of transport in multifractal turbulent-like random environments, and quantitative modeling of turbulent transport in terms of random fields.
###### Acknowledgements.
We thank A. Barlet, A. Cheminet, B. Dubrulle & A. Mailybaev for continuing discussions. ST acknowledges support from the French-Brazilian network in Mathematics for visits at Impa in Southern summers 2022 & 2023, where this work was initiated, and thanks L. Chevillard for essential insights on multifractal random fields.
Figure 3: (a) Three random realizations of initially coalesced pair trajectories until their separation times \(T_{1}\), for driving Hurst parameter \(\xi_{\gamma}=2/3\) with \(\xi=2/3,\gamma=0\). (b) Same but with \(\xi=1/3,\gamma=\sqrt{3}/6\). Color indicates the magnitude of the underlying GMC, whose profile is represented vertically on the negative axis. (c) PDF of separation-times obtained from SDE-based numerics for \(\xi=2/3,\gamma=0\). Inset uses log-scale for the \(y\)-axis. (d) Same but for \(\xi=1/3,\gamma=\sqrt{3}/6\). |
2304.01194 | Burstormer: Burst Image Restoration and Enhancement Transformer | On a shutter press, modern handheld cameras capture multiple images in rapid
succession and merge them to generate a single image. However, individual
frames in a burst are misaligned due to inevitable motions and contain multiple
degradations. The challenge is to properly align the successive image shots and
merge their complimentary information to achieve high-quality outputs. Towards
this direction, we propose Burstormer: a novel transformer-based architecture
for burst image restoration and enhancement. In comparison to existing works,
our approach exploits multi-scale local and non-local features to achieve
improved alignment and feature fusion. Our key idea is to enable inter-frame
communication in the burst neighborhoods for information aggregation and
progressive fusion while modeling the burst-wide context. However, the input
burst frames need to be properly aligned before fusing their information.
Therefore, we propose an enhanced deformable alignment module for aligning
burst features with regards to the reference frame. Unlike existing methods,
the proposed alignment module not only aligns burst features but also exchanges
feature information and maintains focused communication with the reference
frame through the proposed reference-based feature enrichment mechanism, which
facilitates handling complex motions. After multi-level alignment and
enrichment, we re-emphasize on inter-frame communication within burst using a
cyclic burst sampling module. Finally, the inter-frame information is
aggregated using the proposed burst feature fusion module followed by
progressive upsampling. Our Burstormer outperforms state-of-the-art methods on
burst super-resolution, burst denoising and burst low-light enhancement. Our
codes and pretrained models are available at https://
github.com/akshaydudhane16/Burstormer | Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Shahbaz Khan, Ming-Hsuan Yang | 2023-04-03T17:58:44Z | http://arxiv.org/abs/2304.01194v1 | # Burstormer: Burst Image Restoration and Enhancement Transformer
###### Abstract
On a shutter press, modern handheld cameras capture multiple images in rapid succession and merge them to generate a single image. However, individual frames in a burst are misaligned due to inevitable motions and contain multiple degradations. The challenge is to properly align the successive image shots and merge their complimentary information to achieve high-quality outputs.
Towards this direction, we propose Burstormer: a novel transformer-based architecture for burst image restoration and enhancement. In comparison to existing works, our approach exploits multi-scale local and non-local features to achieve improved alignment and feature fusion. Our key idea is to enable inter-frame communication in the burst neighborhoods for information aggregation and progressive fusion while modeling the burst-wide context. However, the input burst frames need to be properly aligned before fusing their information. Therefore, we propose an enhanced deformable alignment module for aligning burst features with regards to the reference frame.
Unlike existing methods, the proposed alignment module not only aligns burst features but also exchanges feature information and maintains focused communication with the reference frame through the proposed reference-based feature enrichment mechanism, which facilitates handling complex motions. After multi-level alignment and enrichment, we re-emphasize on inter-frame communication within burst using a cyclic burst sampling module. Finally, the inter-frame information is aggregated using the proposed burst feature fusion module followed by progressive upsampling. Our Burstormer outperforms state-of-the-art methods on burst super-resolution, burst denoising and burst low-light enhancement. Our codes and pre-trained models are available at [https://github.com/akshaydudhane16/Burstormer](https://github.com/akshaydudhane16/Burstormer).
## 1 Introduction
In recent years, smartphone industry has witnessed a rampant growth on account of the fueling demand of smartphones in day-to-day life. While the image quality of smartphone cameras is rapidly improving, there are several barriers that hinder in attaining DSLR-like images. For instance, the physical space available in handheld devices restricts manufacturers from employing high-quality bulky camera modules. Most smartphone cameras use small-sized lens, aperture, and sensor, thereby generating images with limited spatial resolution, low dynamic range, and often with noise and color distortions especially in low-light conditions. These problems have shifted the focus nowadays in developing computational photography (software) solutions for mitigating the hardware limitations and to approach the image quality of DSLRs.
One emerging approach to achieve high-quality results from a smartphone camera is to take advantage of burst shots consisting of multiple captures of the same scene. The burst image processing approaches aim to recover the high-quality image by merging the complementary information in multiple frames. Recent works [3, 4, 9] have validated the potential of burst processing techniques in reconstructing rich details that cannot be recovered from a single image. However, these computationally expensive approaches are often unable to effectively deal with the inherent sub-pixel shifts among multiple frames arising due to camera and/or object movements. This sub-pixel misalignment often causes blurring and ghosting artifacts in the final image. To tackle alignment issues, existing methods employ complex explicit feature alignment [3] and deformable con
Figure 1: Burst super-resolution results (Tab. 1) vs. efficiency (GFLOPs). Burstormer advances the state of the art, while being compute-efficient and light-weight.
However, these approaches target only local features at a single level, while the use of global information together with multi-scale features has not been extensively explored. Additionally, while aggregating multi-frame features, existing approaches either employ a late fusion strategy [3, 4] or a rigid fusion mechanism (in terms of the number of frames) [9]. The former limits flexible inter-frame communication, while the latter limits adaptive multi-frame processing.
In this work, we propose Burstormer for burst image processing, which incorporates multi-level local-global burst feature alignment and adaptive burst feature aggregation. In contrast to previous works [3, 4] that employ bulky pre-trained networks for explicit feature alignment, we present a novel enhanced deformable alignment (EDA) module that handles misalignment issues implicitly. Overall, the EDA module reduces noise, extracts both local and non-local features with transformer-based attention, and performs multi-scale burst feature alignment and feature enrichment, which is not the case for the recent BIPNet [9].
Unlike existing approaches [3, 4, 9], which allow only a single interaction with the reference frame during the alignment process, we add a new reference-based feature enrichment (RBFE) mechanism in EDA to allow a more extensive interaction with the reference frame. This helps in effectively aligning and refining burst features even in complex misalignment cases where simple alignment approaches would not suffice. In the image reconstruction stage we progressively perform feature consolidation and upsampling, while having access to the multi-frame feature information at all times. This is achieved with our no-reference feature enrichment (NRFE) module. NRFE initially generates burst neighborhoods with the proposed cyclic burst sampling (CBS) mechanism, which are then aggregated with our burst feature fusion (BFF) unit. Unlike the existing pseudo bursts [9], the proposed burst neighborhood mechanism is flexible and enables inter-frame communication with significantly less computational cost.
The key highlights of our work are outlined below:
* Our Burstormer is a novel Transformer-based design for burst-image restoration and enhancement that leverages multi-scale local and non-local features for improved alignment and feature fusion. Its flexible design allows processing bursts of variable sizes.
* We propose an enhanced deformable alignment module which is based on a multi-scale hierarchical design to effectively denoise and align burst features. Apart from aligning burst features, it also refines and consolidates the complementary burst features with the proposed reference-based feature enrichment module.
* We propose a no-reference feature enrichment module to progressively aggregate and upsample the burst features with less computational overhead. To enable inter-frame interactions, it generates burst neighborhoods through the proposed cyclic burst sampling mechanism, followed by burst feature fusion.
Our Burstormer sets a new state of the art on several real and synthetic benchmark datasets for the tasks of burst super-resolution, burst low-light enhancement, and burst denoising. Compared to existing approaches, Burstormer is more accurate, lighter-weight and faster; see Fig. 1. Further, we provide detailed ablation studies to demonstrate the effectiveness of our design choices.
## 2 Related Work
**Multi-Frame Super-Resolution.** Unlike single image super-resolution, multi-frame super-resolution (MFSR) approaches are required to additionally deal with the sub-pixel misalignments among burst frames caused by camera and object motions. While being computationally efficient, the pioneering MFSR algorithm [37] processes burst frames in the frequency domain, often producing images with noticeable artifacts. To obtain better super-resolved results, other methods operate in the spatial domain [10, 17], exploit image priors [33], use iterative back-projection [31], or a maximum a posteriori framework [1]. However, all these approaches assume that the image formation model and the motion among input frames can be estimated reliably. Subsequent works addressed this constraint with the joint estimation of the unknown parameters [11, 15]. To deal with noise and complex motion, the MFSR algorithm of [40] employs non-parametric kernel regression and a locally adaptive detail enhancement model.
The DBSR algorithm [3] addresses the MFSR problem by applying explicit feature alignment and attention-based fusion mechanisms. However, its image warping technique and explicit motion estimation may struggle with scenes containing fast-moving objects. EBSR [25] builds on prior PCD alignment techniques [39] by aligning enhanced features specifically for the burst SR task. In addition, BSRT [24] employs a combination of optical flow and deformable convolution for feature alignment and utilizes a Swin Transformer [21] for feature extraction. More recently, BIPNet [9] was introduced to process noisy raw bursts using implicit feature alignment and pseudo-burst generation. Building on BIPNet, AFCNet [29] incorporates the existing Restormer [43] to improve feature extraction for burst SR tasks. Despite effective inter-frame communication, these approaches are rigid in the number of burst frames used during alignment and fusion.
**Multi-Frame Denoising.** Aside from the aforementioned MFSR approaches, several multi-frame methods have been developed to perform denoising [7, 14, 26, 27]. The algorithm of [36] leverages visually similar image blocks within and across frames to obtain denoised results. Other works
[7, 27] extend the state-of-the-art single image denoising technique BM3D [7] to videos. The method of [22] yields favorable denoising results by employing a novel homography flow alignment technique with a consistent pixel compositing operator. In the work of [12], the authors extend a single-image denoising network to the multi-frame task via a recurrent deep convolutional neural network. The kernel prediction network [30] generates per-pixel kernels for fusing multiple frames. RViDeNet [42] uses deformable convolutions to perform explicit frame alignment in order to provide improved denoising results. The re-parametrization approach of MFIR [4] learns the image formation model in deep feature space for multi-frame denoising. BIPNet [9] presents a novel pseudo-burst feature fusion approach to perform denoising on burst frames.
**Multi-Frame Low-light Image Enhancement.** In low-light conditions, smartphone cameras often yield noisy and color-distorted images due to their small aperture and sensor pixel cavities. [6] collect a multi-frame dataset for low-light image enhancement, and present a data-driven approach to learn the camera imaging pipeline in order to map under-exposed RAW images directly to well-lit sRGB images. The quality of the output image is further improved with the perceptual loss presented by [45]. The works of [28] and [47], respectively, use a residual learning framework and a recurrent convolutional network to obtain enhanced images from multiple degraded low-light input frames. The two-stage approach of [18] employs one subnet for explicitly denoising multiple frames, followed by a second subnet that produces the enhanced image. Along with super-resolution and denoising, BIPNet [9] is also capable of performing multi-frame low-light image enhancement. Unlike the existing multi-frame approaches, our Burstormer aligns burst features at multiple scales and enables flexible inter-frame communication without much computational overhead. It also incorporates progressive feature merging to obtain high-quality images.
## 3 Proposed Burst Image Processing Pipeline
Burst sequences are usually acquired with handheld devices. The spatial and color misalignments among burst frames are unavoidable due to hand tremor and camera/object motions. These issues negatively affect the overall performance of burst processing approaches. In this work, our goal is to effectively _align_ and progressively _merge_ the desired information from multiple degraded frames to reconstruct a high-quality composite image. To this end, we propose Burstormer, a novel unified model for multi-frame processing where different modules jointly operate to perform feature denoising, alignment, fusion, and upsampling tasks. Here, we describe our method for the task of burst super-resolution; nevertheless, it is applicable to other burst restoration tasks such as burst denoising and burst enhancement (see experiments, Sec. 4).
**Overall Pipeline.** Fig. 2 shows the overall pipeline of the proposed Burstormer. _First,_ the RAW input burst is passed through the proposed enhanced deformable alignment (EDA) module, which extracts noise-free deep features that are aligned and refined with respect to the reference frame features. _Second,_ an image reconstruction module is employed that takes as input the burst of aligned features and progressively merges them using the proposed no-reference feature enrichment (NRFE) module. To obtain the super-resolved image, the upsampling operation is applied immediately after each NRFE module in the reconstruction stage. Next, we describe each stage of our approach.
### Enhanced Deformable Alignment
In burst processing, effective alignment of mismatched frames is essential, as any error arising at this stage will propagate to later stages, subsequently making the reconstruction task difficult. Existing methods perform image alignment either explicitly [3, 4] or implicitly [9]. While these techniques are suitable for correcting mild pixel displacements among frames, they might not adequately handle fast-moving objects. In Burstormer, we propose enhanced deformable alignment (EDA), which employs a multi-scale design as shown in Fig. 2(a). Since sub-pixel shifts among frames are naturally reduced at low spatial resolution, using the multi-level hierarchical architecture provides more robust alignment. Therefore, EDA starts feature alignment from the lowest level (3\({}^{rd}\) in this paper) and progressively passes offsets to the upper, high-resolution levels to help with the alignment process. Furthermore, at each level, the aligned features are passed through the proposed reference-based feature enrichment (RBFE) module to fix remaining misalignment issues in burst frames by interacting with the reference frame again. EDA has two key components: **(1)** Feature alignment, and **(2)** Reference-based feature enrichment.
**Feature alignment.** Burst images are often contaminated with random noise that impedes finding dense correspondences among frames. Therefore, before performing the alignment operation, we extract noise-free burst features using the burst feature attention (BFA) module, which is built upon the existing transformer block [43]. Unlike other approaches [3, 4, 9], the BFA module encodes local and non-local context using the MDTA block [43] and controls feature transformation through the GDFN block [43]. Furthermore, unlike existing attention techniques [21, 34, 38], the BFA module is efficient enough to be applied to high-resolution images. The denoised features from BFA are then passed on for alignment. Figure 2(b) shows the feature alignment (FA) module, which utilizes a modulated deformable convolution [48] to align the features of each burst frame to those of the reference frame.
Let \(\left\{\mathbf{g}^{b}:b\in[1,\ldots,B]\right\}\in\mathbb{R}^{B\times f\times H\times W}\) denote the burst features obtained from the BFA module, where \(B\) denotes the number of burst frames, \(f\) is the number of feature channels, and \(H\times W\) is the spatial size. We align the features of the current frame \(\mathbf{g}^{b}\) with those of the reference frame* \(\mathbf{g}^{b_{r}}\). The feature alignment module processes \(\mathbf{g}^{b}\) and \(\mathbf{g}^{b_{r}}\) via an offset convolution layer and outputs the offset \(\Delta n\) and modulation scalar \(\Delta a\) values for \(\mathbf{g}^{b}\). In Fig. 2(a), for simplicity, only the offset \(\Delta n\) is shown. The aligned features \(\mathbf{\bar{g}}^{b}\) are computed as:
Footnote *: We consider the first burst image to be the reference frame.
\[\mathbf{\bar{g}}^{b}=W_{\text{def}}\left(\mathbf{g}^{b},\;\left\{\Delta n,\;\Delta a \right\}\right),\left\{\Delta n,\Delta a\right\}=W_{\text{off}}\left(\mathbf{g}^{b },\;\mathbf{g}^{b_{r}}\right), \tag{1}\]
where \(W_{\text{def}}(\cdot)\) and \(W_{\text{off}}(\cdot)\) represent the deformable and offset convolutions, respectively. Specifically, every position \(n\) on the aligned feature map \(\mathbf{\bar{g}}^{b}\) is calculated as:
\[\mathbf{\bar{g}}^{b}_{n}=\sum_{i=1}^{K}W_{n_{i}}^{d}\;\mathbf{g}^{b}_{(n+n_{i}+\Delta n_{i})}\cdot\Delta a_{n_{i}}, \tag{2}\]
where \(K=9\), \(\Delta a_{n_{i}}\) lies in the range \([0,1]\) for each sampling position, and \(n_{i}\in\{(-1,-1),(-1,0),...,(1,1)\}\) is a regular \(3{\times}3\) kernel grid.
Figure 2: Overall pipeline of the proposed Burstormer for burst image processing. Burstormer takes as input a RAW burst of degraded images and outputs a clean high-quality sRGB image. It has two main parts: enhanced deformable alignment (EDA) and image reconstruction. EDA, labeled as (a), is a multi-scale hierarchical module that, at each level, first extracts noise-free local and non-local features with the burst feature attention (BFA), performs feature alignment (b), and finally refines and consolidates features through an additional interaction with the base frame via (c) the proposed reference-based feature enrichment (RBFE) module. RBFE further employs (d) the burst feature fusion (BFF) unit for merging features using the back-projection and squeeze-excitation mechanisms. The aligned burst of features is then passed to the image reconstruction stage (e). Here, (f) the adaptive burst pooling module transforms the input burst size (B frames) to a constant 8 frames through an average pooling operator. Finally, (g) the no-reference feature enrichment (NRFE) module progressively aggregates and upsamples the burst features to generate the final HR image.
The convolution is performed on the non-uniform positions \((n_{i}+\Delta n_{i})\), which may be fractional owing to the learned offsets \(\Delta n_{i}\). To handle the fractional positions, this operation is implemented with bilinear interpolation.
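As a concrete illustration of Eqs. (1)-(2), the sketch below aligns one frame's features to the reference frame using torchvision's modulated deformable convolution. The layer sizes and module structure are assumptions for illustration only, not the authors' released implementation, and the BFA denoising step is omitted.

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FeatureAlignmentSketch(nn.Module):
    """Illustrative alignment of one burst frame's features to the reference frame."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.k2 = kernel_size * kernel_size
        # Offset convolution W_off: predicts 2*K offsets and K modulation scalars
        # per spatial location from the concatenated current/reference features.
        self.offset_conv = nn.Conv2d(2 * channels, 3 * self.k2, 3, padding=1)
        # Weights of the modulated deformable convolution W_def.
        self.weight = nn.Parameter(0.01 * torch.randn(channels, channels, kernel_size, kernel_size))

    def forward(self, g_b: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
        out = self.offset_conv(torch.cat([g_b, g_ref], dim=1))
        offset, mask = out[:, : 2 * self.k2], torch.sigmoid(out[:, 2 * self.k2:])
        # Samples g_b at positions n + n_i + Δn_i, weighted by Δa, as in Eq. (2).
        return deform_conv2d(g_b, offset, self.weight, padding=1, mask=mask)

# Usage: align frame features g_b to reference features g_ref (both (1, C, H, W)).
g_b, g_ref = torch.randn(1, 48, 32, 32), torch.randn(1, 48, 32, 32)
aligned = FeatureAlignmentSketch(48)(g_b, g_ref)  # -> (1, 48, 32, 32)
```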
**Reference-Based Feature Enrichment.** In the presence of complex pixel displacements among frames, simple alignment techniques [3, 4, 9] may not be able to align burst features completely. Thus, to fix the remaining minor misalignment issues, we propose the reference-based feature enrichment (RBFE) module, shown in Fig. 2(c). RBFE enables additional interaction of the aligned frame features \(\mathbf{\bar{g}}^{b}\) with the reference frame features \(\mathbf{g}^{b_{r}}\) to generate consolidated and refined representations. This interactive feature merging is performed via our burst feature fusion (BFF) unit, as illustrated in Fig. 2(d). The BFF mechanism is built upon the principles of feature back-projection [13] and squeeze-excitation techniques [16]. Given the concatenated feature maps of the current frame and the reference frame \([\mathbf{\bar{g}}^{b},\mathbf{g}^{b_{r}}]\in\mathbb{R}^{1\times 2f\times H\times W}\), BFF applies BFA to generate representations \(\mathbf{g_{a}^{b}}\) encoding the local and non-local context. Overall, BFF yields fused features \(\mathbf{g_{f}^{b}}\in\mathbb{R}^{1\times f\times H\times W}\):
\[\mathbf{g_{f}^{b}}=\mathbf{g_{s}^{b}}+W\left(\mathbf{g_{a}^{b}}-\mathbf{g_{e}^{b}}\right), \tag{3}\]
where \(\mathbf{g_{s}^{b}}{=}W_{s}\mathbf{g_{a}^{b}}\) represents the squeezed features and \(\mathbf{g_{e}^{b}}{=}W_{e}W_{s}\mathbf{g_{a}^{b}}\) the expanded features. \(W_{s}\) and \(W_{e}\) denote simple convolutions that squeeze and expand the feature channels. The squeezed features \(\mathbf{g_{s}^{b}}\) possess the complementary properties of the multiple input features, while \(\mathbf{g_{e}^{b}}\) is used to compute a high-frequency residual against the attentive features \(\mathbf{g_{a}^{b}}\). The aggregation of this high-frequency residual with the squeezed features \(\mathbf{g_{s}^{b}}\) helps to learn the feature fusion process implicitly and provides the capability to extract high-frequency complementary information from multiple inputs. While illustrated for fusing the features of two frames in Fig. 2(d), the proposed BFF can be flexibly adapted to any number of inputs.
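The fusion rule in Eq. (3) can be sketched in a few lines of PyTorch. In this minimal sketch the BFA attention step is replaced by a plain convolution for brevity, and all layer choices are assumptions rather than the released implementation.

```python
import torch
import torch.nn as nn

class BurstFeatureFusionSketch(nn.Module):
    """Minimal sketch of the BFF fusion rule of Eq. (3) for two feature maps."""
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)  # stand-in for BFA
        self.squeeze = nn.Conv2d(2 * channels, channels, 1)              # W_s
        self.expand = nn.Conv2d(channels, 2 * channels, 1)               # W_e
        self.residual = nn.Conv2d(2 * channels, channels, 1)             # W in Eq. (3)

    def forward(self, g_aligned: torch.Tensor, g_ref: torch.Tensor) -> torch.Tensor:
        g_a = self.attn(torch.cat([g_aligned, g_ref], dim=1))  # attentive features g_a
        g_s = self.squeeze(g_a)                                 # squeezed features g_s
        g_e = self.expand(g_s)                                  # expanded features g_e
        return g_s + self.residual(g_a - g_e)                   # fused features, Eq. (3)

fused = BurstFeatureFusionSketch(48)(torch.randn(1, 48, 32, 32), torch.randn(1, 48, 32, 32))
```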
### Image Reconstruction
Figure 2(e) illustrates the overall image reconstruction stage. To operate on bursts of arbitrary sizes, we propose an adaptive burst feature pooling (ABFP) mechanism that returns features of a constant burst size. As shown in Fig. 2(f), the burst features (\(B*f\)) are concatenated along the channel dimension, followed by a 1D average pooling operation which adaptively pools them to (\(8*f\)) channels as required. Next, the pooled burst feature maps pass through the no-reference feature enrichment (NRFE) module, shown in Fig. 2(g). The key idea of the proposed NRFE module is to pair immediately neighboring frames along the feature dimension and fuse them using the BFF module. However, doing this would limit the inter-frame communication to successive frames only. Therefore, we propose cyclic burst sampling (CBS), which gathers the neighboring frames in a zigzag manner (referred to here as burst neighborhoods) such that the reference frame can interact with the last frame as well, via intermediate frames; see Fig. 2(h). This cyclic scheme of sampling the burst frames helps long-range communication without increasing the computational overhead, unlike the existing pseudo-burst technique [9]. Next, the sampled neighborhood features are combined along the burst dimension and processed with BFF to integrate the useful information available in the multiple frames of the burst sequence.
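A rough sketch of the pooling and sampling steps is given below. The exact cyclic pairing used by CBS is not fully specified above, so the wrap-around pairing shown here is only one plausible reading; the pooling layout is likewise an assumption for illustration.

```python
import torch
import torch.nn.functional as F

def adaptive_burst_pooling(burst: torch.Tensor, target_frames: int = 8) -> torch.Tensor:
    """Pool a (B, C, H, W) burst of features to a fixed number of frames by
    concatenating the B*C channels and adaptively average-pooling them down
    (or up) to target_frames*C channels (the ABFP idea)."""
    B, C, H, W = burst.shape
    x = burst.reshape(1, B * C, H * W).permute(0, 2, 1)   # (1, H*W, B*C)
    x = F.adaptive_avg_pool1d(x, target_frames * C)       # pool the burst-channel axis
    return x.permute(0, 2, 1).reshape(target_frames, C, H, W)

def cyclic_burst_sampling(burst: torch.Tensor) -> torch.Tensor:
    """One plausible reading of CBS: pair each frame with its cyclic neighbour,
    so the last frame wraps back to the reference frame. The resulting
    (B, 2C, H, W) pairs can then be fused by a BFF unit."""
    neighbour = torch.roll(burst, shifts=-1, dims=0)
    return torch.cat([burst, neighbour], dim=1)

burst = torch.randn(6, 48, 32, 32)          # a 6-frame burst of features
pooled = adaptive_burst_pooling(burst)      # -> (8, 48, 32, 32)
pairs = cyclic_burst_sampling(pooled)       # -> (8, 96, 32, 32)
```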
To upscale the burst features, we adapt pixel-shuffle [32] such that the information available in the burst frames is shuffled to increase the spatial resolution. This helps reduce the compute cost and the overall number of network parameters.
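For reference, a pixel-shuffle upsampler of the kind alluded to above simply folds channels into spatial resolution; the channel counts below are illustrative assumptions rather than the model's actual configuration.

```python
import torch
import torch.nn as nn

# A 2x pixel-shuffle upsampling step: a convolution expands the channels by 4x,
# and PixelShuffle rearranges them into a feature map of twice the resolution.
features = torch.randn(1, 48, 32, 32)      # e.g. fused burst features
upsample = nn.Sequential(nn.Conv2d(48, 48 * 4, 3, padding=1), nn.PixelShuffle(2))
print(upsample(features).shape)            # torch.Size([1, 48, 64, 64])
```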
## 4 Experiments and Analysis
We evaluate the performance of the proposed Burstormer on three different burst image processing tasks: **(a)** super-resolution (on synthetic and real burst images), **(b)** low-light image enhancement, and **(c)** denoising (on grayscale and color data). Additional visual results, ablation experiments, and more details on the network and training settings are provided in the supplementary material.
**Implementation Details.** We train separate models for different tasks in an end-to-end manner without pre-training any module. We pack the input mosaicked RAW burst into 4-channel RGGB format. All burst frames are handled with shared Burstormer modules (FA, RBFE, BFF, NRFE) for better parameter efficiency. The following training settings are common to all tasks, whereas task-specific experimental details are provided in their corresponding sections. The EDA module of Burstormer is a 3-level encoder-decoder, where each level employs 1 FA (containing a single deformable conv. layer) and 1 RBFE module. The BFF unit in both RBFE and NRFE consists of 1 BFA module. Each BFA module consists of 1 multi-dconv head transposed attention (MDTA) and 1 gated-Dconv feed-forward network (GDFN) [43]. In the image reconstruction stage, we use 2 NRFE modules. We train models with the \(L_{1}\) loss and the Adam optimizer, with an initial learning rate of \(10^{-4}\) that is gradually reduced to \(10^{-6}\) with the cosine annealing scheduler [23], on four RTX6000 GPUs. Random horizontal and vertical image flipping is used for data augmentation.
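The stated optimization settings translate directly into a few lines of PyTorch; the skeleton below is a hedged sketch in which `model`, `train_loader` and `num_epochs` are placeholders for the task-specific objects described above, not part of the released training code.

```python
import torch

def train(model, train_loader, num_epochs, device="cuda"):
    """Skeleton matching the stated settings: L1 loss, Adam at 1e-4,
    cosine-annealed to 1e-6 over training."""
    criterion = torch.nn.L1Loss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=num_epochs, eta_min=1e-6)
    model.to(device).train()
    for _ in range(num_epochs):
        for burst, target in train_loader:
            optimizer.zero_grad()
            loss = criterion(model(burst.to(device)), target.to(device))
            loss.backward()
            optimizer.step()
        scheduler.step()
```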
### Burst Super-resolution
We evaluate the proposed Burstormer on synthetic as well as on real-world datasets [2, 3] for the SR scale factor \({\times}4\). For comparisons, we consider several burst SR approaches such as DBSR [3], LKR [19], HighResNet [8], MFIR [4] and BIPNet [9].
**Datasets.** **(1)** The SyntheticBurst dataset [3] contains 46,839 RAW burst sequences for training and 300 for validation. Each sequence consists of 14 LR RAW images (with spatial resolution of 48\(\times\)48 pixels) that are synthetically generated from a single sRGB image as follows. The given sRGB image is first transformed to RAW space with the inverse camera pipeline [5]. Next, random rotations and translations are applied to this RAW image to generate the HR burst sequence. The HR burst is finally converted to an LR RAW burst sequence by applying downsampling, Bayer mosaicking, sampling and random noise addition operations.
**(2)** BurstSR dataset [3] has 200 RAW burst sequences, each containing 14 images. The LR images of these sequences are captured with a smartphone camera, whereas their corresponding HR (ground-truth) images are taken with a DSLR camera. From 200 full-resolution sequences, the original authors extract 5,405 patches of size 80\(\times\)80 for training and 882 patches for validation.
**SR results on synthetic dataset.** We train Burstormer with batch size 4 for 300 epochs on the SyntheticBurst dataset [3]. Table 1 shows that our approach significantly advances the state of the art. Compared to BIPNet [9], our Burstormer yields a performance gain of 0.9 dB, while having 47\(\%\) fewer parameters and 80\(\%\) fewer FLOPs, and running 2\(\times\) faster. Fig. 3 shows that Burstormer-restored images are visually superior, with enhanced structural and textural details compared to competing methods. Specifically, the reproductions of DBSR [3], LKR [19], and MFIR [4] contain blotchy textures and color artifacts.
**SR results on real dataset.** In the BurstSR dataset [3], the LR and HR bursts are slightly misaligned as they are captured with different cameras. We address this by training Burstormer using an aligned \(L_{1}\) loss and evaluating with aligned PSNR/SSIM, as in prior works [3, 4, 9]. Instead of training from scratch, we fine-tune the model pre-trained on the SyntheticBurst dataset for 100 epochs on the BurstSR dataset. Table 1 shows that our Burstormer performs favorably, providing a PSNR gain of 0.33 dB over BIPNet [9]. We present visual comparisons in Fig. 4. Burstormer-generated images exhibit higher detail, sharpness, and visual accuracy.
**Ablation experiments.** To study the impact of different modules of the proposed architecture on the final performance, we train several ablation models on the SyntheticBurst dataset [3] for 100 epochs. Results are provided in Fig. 5. In the baseline model, we use Resblocks [20] for feature extraction, simple concatenation-based fusion, and the pixel-shuffle operation for upsampling. It can be seen that inclusion of the proposed RBFE in the feature alignment stage leads to a substantial PSNR boost of 1.02 dB. This performance gain is further increased by 1.49 dB when we add the proposed burst fusion (NRFE) and upsampling modules. Overall, when all our modules are deployed, we achieve a 5.67 dB improvement over the baseline. Further, Table 2 shows that replacing the proposed alignment and fusion methods with other existing techniques causes significant performance drops of 0.43 dB and 0.34 dB, respectively. Specifically, our Burstormer leads to a 0.79 dB boost when compared with the existing multi-level PCD alignment [39]. The proposed RBFE module, with its local and non-local feature extraction ability, is a key difference between the existing PCD and our enhanced deformable alignment. Further, we observe a 0.34 dB drop in PSNR when we replace the proposed NRFE (fusion module) with the existing compute-intensive PBFF [9]. The ablation experiments show that, while being compute efficient,
| **Methods** | PSNR \(\uparrow\) (SyntheticBurst) | SSIM \(\uparrow\) (SyntheticBurst) | Time (ms) | PSNR \(\uparrow\) (BurstSR) | SSIM \(\uparrow\) (BurstSR) |
| --- | --- | --- | --- | --- | --- |
| Single Image | 36.17 | 0.91 | 40.0 | 46.29 | 0.982 |
| HighRes-net [8] | 37.45 | 0.92 | 46.3 | 46.64 | 0.980 |
| DBSR [3] | 40.76 | 0.96 | 431 | 48.05 | 0.984 |
| LKR [19] | 41.45 | 0.95 | - | - | - |
| MFIR [4] | 41.56 | 0.96 | 420 | 48.33 | 0.985 |
| BIPNet [9] | 41.93 | 0.96 | 130 | 48.49 | 0.985 |
| AFCNet [29] | 42.21 | 0.96 | 140 | 48.63 | 0.986 |
| **Burstormer (Ours)** | **42.83** | **0.97** | **55.0** | **48.82** | **0.986** |
Table 1: **Burst super-resolution results** on synthetic and real datasets [3] for factor 4\(\times\).
Figure 4: **Burst super-resolution** (\(\times\)4) results on BurstSR dataset [3]. Our results recover better visual details.
Figure 3: **Burst super-resolution** (\(\times\)4) results on SyntheticBurst dataset [3]. The SR images by our Burstormer retain more texture and structural content than the other approaches.
our modules outperform the existing alternatives without any compromise in performance.
### Burst Low-Light Image Enhancement
We test the performance of our Burstormer on the Sony subset from the SID dataset, as in other existing works [18, 44, 47, 9, 1]. In addition to \(L_{1}\) loss, we use the perceptual loss [46] for network optimization.
**Dataset.** SID [6] contains input RAW burst sequences captured with short camera exposure in extremely low ambient light, and their corresponding well-exposed sRGB ground-truth images. The dataset consists of 161 burst sequences for training, 36 for validation, and 93 for testing. We crop 6,500 patches of size \(256\times 256\), with burst size varying from 4 to 8, and train the network for 200 epochs. Since the input RAW burst is mosaicked, we use a single \(2\times\) upsampler in our Burstormer to obtain the final image.
**Enhancement results.** The image quality scores for competing approaches are summarized in Table 3. Our Burstormer achieves PSNR gains of 0.47 dB over the previous best method BIPNet [9] and 3.54 dB over the second best algorithm LEED [18]. Figure 6 shows enhanced images produced by different approaches. Our Burstormer yields images with more faithful color and structural content than the other compared approaches.
### Burst Denoising
This section presents the results of burst denoising on grayscale data [30] as well as on color data [41]. As there is no need to upscale the burst features, we replace the upsampler in Burstormer with a simple convolution to generate the output image.
**Datasets.** Following the experimental protocols of [30] and [41], we prepare training datasets for grayscale denoising and color denoising, respectively. We train separate denoising models for 300 epochs on 20K synthetic burst patches. Each burst contains 8 frames of 128\(\times\)128 spatial resolution. Testing is performed on 73 grayscale bursts and 100 color bursts. Both of these test sets contain 4 variants with different noise gains (1,2,4,8), corresponding to noise parameters \((\log(\sigma_{r}),\log(\sigma_{s}))\rightarrow\) (-2.2,-2.6), (-1.8,-2.2), (-1.4,-1.8), and (-1.1,-1.5), respectively.
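The noise in such synthetic benchmarks is typically generated with a signal-dependent (shot plus read) Gaussian model, in which the per-pixel variance is \(\sigma_{r}^{2}+\sigma_{s}\,x\) for clean intensity \(x\). The sketch below is an assumed illustration of that model, not the benchmark's own generation code, and it takes \(\sigma_{r}\) and \(\sigma_{s}\) directly rather than their logarithms.

```python
import numpy as np

def add_shot_read_noise(clean, sigma_r, sigma_s, rng=None):
    """Add heteroscedastic Gaussian noise with variance sigma_r^2 + sigma_s * x."""
    rng = np.random.default_rng() if rng is None else rng
    variance = sigma_r ** 2 + sigma_s * np.clip(clean, 0.0, None)
    return clean + rng.normal(size=clean.shape) * np.sqrt(variance)

# Example: a synthetic 8-frame noisy burst generated from one clean frame.
clean = np.random.rand(128, 128)
noisy_burst = [add_shot_read_noise(clean, sigma_r=0.01, sigma_s=0.005) for _ in range(8)]
```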
**Denoising results.** We compare against various existing methods such as KPN [30], MKPN, BPN [41], MFIR [4], and BIPNet [9]. Since the proposed Burstormer is trained without any extra data or supervision, we consider results of the MFIR [4] variant that uses a custom optical flow sub-network (without pre-training it on extra data). Table 4
| **Task** | **Methods** | **PSNR \(\uparrow\)** |
| --- | --- | --- |
| **Alignment** | Explicit [3] | 39.84 |
| | TDAN [35] | 40.58 |
| | PCD [39] | 41.26 |
| | EBFA [9] | 41.62 |
| | **Burstormer (Ours)** | **42.05** |
| **Burst Fusion** | Addition | 40.20 |
| | Concat | 40.65 |
| | DBSR [3] | 41.08 |
| | PBFF [9] | 41.71 |
| | **Burstormer (Ours)** | **42.05** |
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Methods** & **PSNR \(\uparrow\)** & **SSIM \(\uparrow\)** & **LPIPS \(\downarrow\)** \\ \hline Chen _et al._[6] & 29.38 & 0.892 & 0.484 \\ Maharjan _et al._[28] & 29.57 & 0.891 & 0.484 \\ Zamir _et al._[45] & 29.13 & 0.881 & 0.462 \\ Zhao _et al._[47] & 29.49 & 0.895 & 0.455 \\ LEED [18] & 29.80 & 0.891 & 0.306 \\ BIPNet [9] & 32.87 & 0.936 & 0.305 \\ \hline
**Ours** & **33.34** & **0.941** & **0.285** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Burst low-light image enhancement** evaluation on the SID dataset [6]. Burstormer performs well across three metrics.
Figure 5: **Ablation experiments** for Burstormer contributions. PSNR is reported on SyntheticBurst dataset [3] for \(4\times\) SR. All our major components contribute significantly. As given in Table, our Burstormer achieves 5.67 dB gain over the baseline approach.
Figure 6: **Burst low-light image enhancement** comparisons on the Sony subset of SID dataset [6]. Our Burstormer retains color and structural details faithfully relative to the ground-truth.
reports grayscale denoising results where our Burstormer consistently performs well. When averaged across all noise levels, our method provides 0.75 dB PSNR improvement over the state-of-the-art BIPNet [9]+. Table 5 shows that the performance trend of Burstormer is similar on color denoising as well. For instance, on high noise level bursts (Gain \(\propto\) 8), Burstormer provides PSNR boost of 0.57 dB over BIPNet [9]. Visual comparisons in Fig. 7 show that Burstormer's denoised outputs are relatively cleaner, sharper and preserve subtle textures. Additional qualitative results are provided in supplementary material.
Footnote †: We use BIPNet results from the official Github repository.
## 5 Conclusion
We present a transformer-based framework for burst image processing. The proposed Burstormer is capable of generating a single high-quality image from a given burst of noisy images with pixel misalignments among them. Burstormer employs a multi-scale hierarchical module, EDA, that, at each scale, first generates denoised features encoding local and non-local context, and then aligns each burst frame with the reference frame. To fix any remaining minor alignment issues, we incorporate a reference-based feature enrichment (RBFE) module in EDA that enables additional interaction of the features of each frame with the base frame features. Overall, EDA improves model robustness by yielding a burst of features that are well denoised, aligned, consolidated and refined. In the image reconstruction stage, we repeatedly apply the no-reference feature enrichment (NRFE) and upsampling modules in tandem until the final image is obtained. NRFE progressively and adaptively fuses each pair of frame features obtained with the proposed cyclic burst sampling. Experiments performed on three representative burst processing tasks (super-resolution, denoising, and low-light image enhancement) demonstrate that our Burstormer provides state-of-the-art results and generalizes well compared to recent burst processing approaches.
| Methods | Gain \(\propto\) 1 | Gain \(\propto\) 2 | Gain \(\propto\) 4 | Gain \(\propto\) 8 | **Average** |
| --- | --- | --- | --- | --- | --- |
| KPN [30] | 36.47 | 33.93 | 31.19 | 27.97 | 32.39 |
| BPN [41] | 38.18 | 35.42 | 32.54 | 29.45 | 33.90 |
| MFIR [4] | 39.10 | 36.14 | 32.89 | 28.98 | 34.28 |
| BIPNet [9]† | 38.53 | 35.94 | 33.08 | 29.89 | 34.36 |
| **Burstormer (Ours)** | **39.49** | **36.70** | **33.71** | **30.55** | **35.11** |
Table 4: **Grayscale burst denoising** on the dataset by [30]. PSNR is reported.
| Methods | Gain \(\propto\) 1 | Gain \(\propto\) 2 | Gain \(\propto\) 4 | Gain \(\propto\) 8 | **Average** |
| --- | --- | --- | --- | --- | --- |
| KPN [30] | 38.86 | 35.97 | 32.79 | 30.01 | 34.41 |
| BPN [41] | 40.16 | 37.08 | 33.81 | 31.19 | 35.56 |
| BIPNet [9]† | 40.58 | 38.13 | 35.30 | 32.87 | 36.72 |
| MFIR [4] | **41.90** | 38.85 | 35.48 | 32.29 | 37.13 |
| **Burstormer (Ours)** | 41.70 | **39.15** | **36.09** | **33.44** | **37.59** |
Table 5: **Color burst denoising** on the dataset by [41]. PSNR is reported.
Figure 7: **Burst denoising** results on burst images from the grayscale [30] and color [41] datasets. Our Burstormer produces sharper and cleaner results than the other competing approaches. More examples are provided in the supplementary material.
2310.12711 | Modelling multivariate extremes through angular-radial decomposition of
the density function | We present a new framework for modelling multivariate extremes, based on an
angular-radial representation of the probability density function. Under this
representation, the problem of modelling multivariate extremes is transformed
to that of modelling an angular density and the tail of the radial variable,
conditional on angle. Motivated by univariate theory, we assume that the tail
of the conditional radial distribution converges to a generalised Pareto (GP)
distribution. To simplify inference, we also assume that the angular density is
continuous and finite and the GP parameter functions are continuous with angle.
We refer to the resulting model as the semi-parametric angular-radial (SPAR)
model for multivariate extremes. We consider the effect of the choice of polar
coordinate system and introduce generalised concepts of angular-radial
coordinate systems and generalised scalar angles in two dimensions. We show
that under certain conditions, the choice of polar coordinate system does not
affect the validity of the SPAR assumptions. However, some choices of
coordinate system lead to simpler representations. In contrast, we show that
the choice of margin does affect whether the model assumptions are satisfied.
In particular, the use of Laplace margins results in a form of the density
function for which the SPAR assumptions are satisfied for many common families
of copula, with various dependence classes. We show that the SPAR model
provides a more versatile framework for characterising multivariate extremes
than provided by existing approaches, and that several commonly-used approaches
are special cases of the SPAR model. | Ed Mackay, Philip Jonathan | 2023-10-19T13:04:28Z | http://arxiv.org/abs/2310.12711v2 | # Modelling multivariate extremes through angular-radial decomposition of the density function
###### Abstract
We present a new framework for modelling multivariate extremes, based on an angular-radial representation of the probability density function. Under this representation, the problem of modelling multivariate extremes is transformed to that of modelling an angular density and the tail of the radial variable, conditional on angle. Motivated by univariate theory, we assume that the tail of the conditional radial distribution converges to a generalised Pareto (GP) distribution. To simplify inference, we also assume that the angular density is continuous and finite and the GP parameter functions are continuous with angle. We refer to the resulting model as the semi-parametric angular-radial (SPAR) model for multivariate extremes. We consider the effect of the choice of polar coordinate system and introduce generalised concepts of angular-radial coordinate systems and generalised scalar angles in two dimensions. We show that under certain conditions, the choice of polar coordinate system does not affect the validity of the SPAR assumptions. However, some choices of coordinate system lead to simpler representations. In contrast, we show that the choice of margin does affect whether the model assumptions are satisfied. In particular, the use of Laplace margins results in a form of the density function for which the SPAR assumptions are satisfied for many common families of copula, with various dependence classes. We show that the SPAR model provides a more versatile framework for characterising multivariate extremes than provided by existing approaches, and that several commonly-used approaches are special cases of the SPAR model. Moreover, the SPAR framework provides a means of characterising all 'extreme regions' of a joint distribution using a single inference. Applications in which this is useful are discussed.
## 1 Introduction
### Motivation
This work proposes a methodology which addresses two problems in multivariate extremes: (1) characterising extremes of random vectors in multiple orthants of \(\mathbb{R}^{d}\) simultaneously; and (2) doing this for distributions with arbitrary dependence class. To motivate the first of these problems, consider the following problems in offshore engineering. Structures in the ocean must be designed to withstand the largest forces from waves and winds that are expected in their lifetime. To do this, we need a model for the joint distributions of environmental variables which affect loads on the structure. Wave-induced forces are dependent on both the height and period of the waves. The largest structural response will also depend on the resonant period of the structure. Figure 1(a) shows response curves for two structures as a function of wave period. These could represent a wide range of responses, such as vessel motions, bending moments in a structure, or tensions in a mooring line. Responses are assumed to increase with wave height at a given period. Figure 1(b) shows 25 years of hourly observations of wave height and wave period for a location off the east coast of the US. The ellipses indicate regions of the variable space in which large structural loads may occur for different structures. For a structure with a resonant period around 5 seconds, the largest responses to wave loading may occur in the region indicated by the red ellipse. In this region neither variable is extreme, although the value of wave height, conditional on wave period, can be viewed as extreme. In the blue region, large floating structures or vessels with resonant periods around 10 seconds may experience large responses. This region includes the largest observed wave heights, but not the largest or smallest observed values of wave period.
A similar challenge arises in the design of offshore wind turbines. Figure 1(c) shows a typical structural response curve as a function of wind speed, which could represent bending moments in the turbine tower or blades. Due to the way the turbine is controlled, the response to wind loading is non-monotonic. Wind turbines have a rated wind speed (usually around 10-15 m/s), at which they reach their maximum power output. For wind speeds above the rated speed, the turbines are controlled in a way that maintains constant power output with increased wind speed, but loads on the structure are reduced. To reduce loading in high wind speeds, the turbines are shut down at a certain wind speed, known as the cut-out speed, which results in a large reduction in the loading. The loads then increase with wind speed, due to passive drag loads on the structure. Loading on the structure also increases with wave height. This leads to two regions of the wind-wave variable space in which large loads are experienced. Figure 1(d) shows a scatter plot of hourly values of wind speed and wave height over a 50-year period for a location in the North Sea. The ellipses indicate regions in which a turbine may experience large loads. In the red ellipse, neither variable is extreme, but the value of wave height is extreme conditional on the wind speed. In the blue ellipse both variables are extreme.
In both example problems, the joint distribution of the variables could be characterised for the regions indicated by the ellipses in Figures 1(b) and (d) using existing methods. In the red ellipses, a non-stationary univariate extreme value model could be used, e.g. [1, 2, 3, 4]. For the blue ellipse in Figure 1(b), the conditional approach of Heffernan and Tawn [5] could be applied. In the blue ellipse in Figure 1(d), one of the wide range of approaches in multivariate extreme value theory could be used, since both variables are extreme. However, ensuring consistency between the fitted models in the overlapping regions is difficult, and it would be useful to characterise the joint distribution in all regions on the 'outside' of the sample in a single inference. The 'outside' of the sample is more formally defined as regions in which the radius of an observation is large relative to some origin within the body of the sample, where 'large' is quantified locally, relative to other nearby observations. This motivates a transformation to some polar coordinate system and modelling of the distribution of large radii conditional on angle.
Two possible transformations of the datasets shown in Figure 1 are shown in Figure 2. For the wave height-period data, 'standard' polar coordinates are used, where radii are defined in terms of the \(L^{2}\) norm. For the wind-wave data, the observations have been transformed to Laplace margins before defining polar coordinates, and radii are defined in terms of the \(L^{1}\) norm. After the coordinate transformation, the problem of modelling the joint distribution of extreme regions of the original sample is converted to a problem of modelling univariate extremes conditional on angle. We will show that this view of multivariate extremes is useful not only for the problems described here, but also for a much wider range of problems, including 'standard' problems in multivariate extremes, where the interest lies only in the region where all variables are large.
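As an illustration of the second transformation, the sketch below maps a bivariate sample to Laplace margins by an empirical rank transform and then to \(L^{1}\) angular-radial coordinates. This is only one possible implementation, written for illustration; it is not the code accompanying the paper.

```python
import numpy as np
from scipy.stats import rankdata

def to_laplace_margins(x):
    """Empirical rank transform of a sample to standard Laplace margins."""
    u = rankdata(x) / (len(x) + 1.0)   # approximate uniform scores in (0, 1)
    return np.where(u < 0.5, np.log(2.0 * u), -np.log(2.0 * (1.0 - u)))

def polar_l1(x, y):
    """Angular-radial decomposition based on the L1 norm: R = |x| + |y| and
    W = (x, y) / R, which lies on the unit L1 circle."""
    r = np.abs(x) + np.abs(y)
    return r, np.stack([x / r, y / r], axis=-1)

# Example: decompose a bivariate sample into radii and angles on Laplace margins.
xy = np.random.standard_normal((1000, 2))
xl, yl = to_laplace_margins(xy[:, 0]), to_laplace_margins(xy[:, 1])
radius, angle = polar_l1(xl, yl)
```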
The coordinate transformations illustrated in Figure 2 are only two possibilities. This raises various questions such as: How should radii and angles be defined? Are inferences made using different polar coordinate systems equivalent? Should the margins be transformed to a standard scale before the angular-radial transformation is applied? Where should the origin of the polar coordinate system be defined? We will put aside these questions for now, and return to them later on.
Next, consider the second problem mentioned above, of modelling the joint extremes of a distribution with an arbitrary dependence class. Suppose that random vector \(\mathbf{X}\in\mathbb{R}^{d}\) has joint distribution function \(F\), and continuous univariate marginal distribution functions \(F_{1},...,F_{d}\), with corresponding joint and marginal survivor functions \(\bar{F}\) and \(\bar{F}_{1},...\bar{F}_{d}\). The corresponding copula \(C:[0,1]^{d}\rightarrow[0,1]\) and survival copula \(\hat{C}:[0,1]^{d}\rightarrow[0,1]\) are defined as
\[C(u_{1},...,u_{d}) =F(F_{1}^{-1}(u_{1}),...,F_{d}^{-1}(u_{d})),\] \[\hat{C}(u_{1},...,u_{d}) =\bar{F}(\bar{F}_{1}^{-1}(u_{1}),...,\bar{F}_{d}^{-1}(u_{d})),\]
where \(F_{j}^{-1}\) and \(\bar{F}_{j}^{-1}\) are the inverse functions of \(F_{j}\) and \(\bar{F}_{j}\), \(j=1,...,d\) (see e.g. [6]). The strength of dependence in the lower and upper tails of \(F\) can be quantified in terms of the lower and upper tail orders, \(\kappa_{L}\) and \(\kappa_{U}\), defined as [7]
\[\kappa_{L} =\lim_{t\to 0^{+}}\frac{\log(C(t,...,t))}{\log(t)}, \tag{1}\] \[\kappa_{U} =\lim_{t\to 0^{+}}\frac{\log(\hat{C}(t,...,t))}{\log(t)},\]
when these limits exist. Since \(C(t,...,t)\leq t\) we have \(\kappa_{L}\geq 1\), and similarly \(\kappa_{U}\geq 1\). When \(\kappa_{L}=1\) or \(\kappa_{U}=1\), the dependence in the lower and upper tail can be quantified in terms of the upper and lower tail dependence coefficients,
\(\chi_{U},\chi_{L}\in[0,1]\), defined as [6]
\[\chi_{L}=\lim_{t\to 0^{+}}\frac{C(t,...,t)}{t}, \tag{2}\] \[\chi_{U}=\lim_{t\to 0^{+}}\frac{\hat{C}(t,...,t)}{t}.\]
When \(\kappa_{U}=1\) and \(\chi_{U}>0\), the components of \(\mathbf{X}\) are said to be _asymptotically dependent_ (AD) in the upper tail, and when \(\kappa_{U}>1\), or \(\kappa_{U}=1\) and \(\chi_{U}=0\), they are said to be _asymptotically independent_ (AI). Dependence coefficients for joint tails in other regions can be defined analogously, as discussed in Section 4.
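For intuition, the limit in Eq. (2) can be approximated empirically at a finite threshold. The snippet below is a simple illustrative estimator of \(\chi_{U}\) based on ranks; it is not taken from the paper and omits the uncertainty quantification a serious analysis would require.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_chi_upper(data, q=0.95):
    """Finite-level estimate of chi_U: P(all margins exceed their q-quantile) / (1 - q).
    The tail coefficient in Eq. (2) corresponds to the limit q -> 1."""
    n, d = data.shape
    u = np.column_stack([rankdata(data[:, j]) / (n + 1.0) for j in range(d)])
    joint_exceedance = np.mean(np.all(u > q, axis=1))
    return joint_exceedance / (1.0 - q)

# Example: independent Gaussian components (chi_U = 0), so the estimate is
# small and decreases towards zero as q -> 1.
sample = np.random.standard_normal((5000, 2))
print(empirical_chi_upper(sample, q=0.95))
```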
Figure 1: Example datasets and structural responses for the motivating problems. Ellipses on plots (b) and (d) indicate regions of the variable space where offshore structures may experience large responses. Regions in the red ellipses do not contain the largest values of either variable, whereas regions in the blue ellipses are extreme in at least one variable. The aim of the methodology proposed in this work is to characterise all 'extreme regions' of a joint distribution using a single inference.
Classical multivariate extreme value theory addresses the case where \(\kappa_{U}=1\) and \(\chi_{U}>0\), and has been widely studied - see e.g. [8, 9, 10] for reviews. In practice, we do not know the tail order a priori, so it is useful to adopt a modelling framework in which \(\kappa_{L}\) or \(\kappa_{U}\) are estimated as part of the inference. Ledford and Tawn [11, 12] proposed a method to characterise multivariate extremes for distributions with arbitrary values of \(\kappa_{U}\), in the region where all variables are large. However, when \(\kappa_{U}>1\), this may not be the region of most interest, since extremes of all variables may not occur simultaneously. An alternative approach was proposed by Heffernan and Tawn [5], to estimate the joint distribution of variables conditional on at least one variable being large. However, inferences made using different conditioning variables are not necessarily consistent [13]. Wadsworth _et al._[14] proposed a similar model to the one described in this work, which is capable of representing joint extremes of distributions with a range of dependence classes. The approach in [14] involves a transformation to polar coordinates, and modelling the tail of the radial variable using a generalised Pareto (GP) distribution. However, the key difference to the approach proposed here is that Wadsworth _et al._ assume that the angular and radial variables are asymptotically independent (AI) at large radii,
Figure 2: Examples of two possible transformations from Cartesian to polar coordinates, using data from the previous figure. After the polar coordinate transformation, the problem of modelling multivariate extremes is transformed to a problem of modelling univariate extremes conditional on angle.
and the margins and polar coordinate system are chosen so that the assumption of AI angular and radial variables is a reasonable approximation. We will show that if margins and coordinates exist which satisfy the assumption of AI angular and radial variables, then the coordinates are, in general, non-standard and would be difficult to estimate in practice. This restricts the range of distributions that can be modelled using the approach proposed in [14]. In contrast, we will show that the approach proposed here removes the need to select specific coordinate systems, simplifying the modelling approach, and making it applicable to a much wider range of cases.
To define our approach formally, we work with the probability density function, rather than the distribution function. The transformation of the density from Cartesian to polar coordinates is relatively simple, whereas it is less straightforward to express the Cartesian joint distribution function in terms of the polar joint distribution function. This is because the density is a local quantity, whereas the distribution function is an integral quantity over a region. The formal definition of our model and its underpinning assumptions are presented in the next subsection.
### Definition of the SPAR model
Consider a random vector \(\mathbf{X}\in\mathbb{R}^{d}\), with joint density function \(f_{\mathbf{X}}\). Assume there is a bijective map, \(\mathcal{T}(\mathbf{X})=(R,\mathbf{W})\), from Cartesian coordinates to some polar coordinate system, where \(R\in[0,\infty)\) is a radial variable and \(\mathbf{W}\in\mathcal{S}\) is an angular variable. The set \(\mathcal{S}\subset\mathbb{R}^{d}\) is a hypersurface, with the property that for each \(\mathbf{w}\in\mathcal{S}\), the ray \(\{r\mathbf{w}:r>0\}\) intersects \(\mathcal{S}\) at a single point. Polar coordinate systems will be discussed in detail in Section 2. For now, we assume that a map to such a coordinate system exists. The obvious example is 'standard' polar coordinates, where \(\mathcal{S}\) is the unit hypersphere in \(\mathbb{R}^{d}\). Unless otherwise stated, in this work we will assume that probability density functions of random vectors and variables exist and are integrable.
Returning to the definition of our model, suppose that \((R,\mathbf{W})\) has joint density function \(f_{R,\mathbf{W}}\), and \(\mathbf{W}\) has density \(f_{\mathbf{W}}\). The angular-radial joint density function can be written in conditional form,
\[f_{R,\mathbf{W}}(r,\mathbf{w})=f_{\mathbf{W}}(\mathbf{w})f_{R|\mathbf{W}}(r| \mathbf{w}),\]
where \(f_{R|\mathbf{W}}(r|\mathbf{w})\) is the density of \(R\) conditional on \(\mathbf{W}=\mathbf{w}\). The problem of modelling bivariate extremes then becomes a problem of modelling the angular density \(f_{\mathbf{W}}(\mathbf{w})\) and the tail of the conditional density \(f_{R|\mathbf{W}}(r|\mathbf{w})\). It is important to note that the joint distribution function does not admit an analogous decomposition in general. This is why we begin by considering the joint density function, rather than joint distribution function.
Define the conditional radial survivor function \(\bar{F}_{R|\mathbf{W}}(r|\mathbf{w})=\int_{r}^{\infty}f_{R|\mathbf{W}}(s| \mathbf{w})\,ds\). For a given angle, \(\bar{F}_{R|\mathbf{W}}(r|\mathbf{w})\) is univariate. In univariate extremes, most methods for statistical inference begin with the assumption that the distribution is in the domain of attraction of an extreme value distribution. This suggests a possible assumption about the asymptotic behaviour of the tail of \(\bar{F}_{R|\mathbf{W}}(r|\mathbf{w})\), namely:
1. **(A1)** There exist functions \(\sigma:(0,\infty)\times\mathcal{S}\to(0,\infty)\) and \(\xi:\mathcal{S}\to\mathbb{R}\) such that for each \(\mathbf{w}\in\mathcal{S}\) and all \(r>0\), \[\lim_{\mu\to r_{F}(\mathbf{w})}\sup_{r\in[0,r_{F}(\mathbf{w})-\mu]}\left|\frac{\bar{F}_{R|\mathbf{W}}(\mu+r|\mathbf{w})}{\bar{F}_{R|\mathbf{W}}(\mu|\mathbf{w})}-\bar{F}_{GP}(r;\xi(\mathbf{w}),\sigma(\mu,\mathbf{w}))\right|=0,\] (3) where \(r_{F}(\mathbf{w})\) is the upper endpoint of \(R|(\mathbf{W}=\mathbf{w})\).
\(\bar{F}_{GP}(r;\xi,\sigma)\) is the GP survivor function with shape parameter \(\xi\in\mathbb{R}\) and scale parameter \(\sigma>0\), given by
\[\bar{F}_{GP}(r;\xi,\sigma)=\begin{cases}\left(1+\xi\frac{r}{\sigma}\right)^{- \frac{1}{\xi}},&\xi\neq 0,\\ \exp\left(-\frac{r}{\sigma}\right),&\xi=0.\end{cases}\]
The support of \(F_{GP}\), the corresponding cumulative distribution function (cdf), is \(0\leq r<r_{F}\), where the upper end point is \(r_{F}=\infty\) for \(\xi\geq 0\) and \(r_{F}=-\sigma/\xi\) for \(\xi<0\). Assumption (A1) is equivalent to the assumption that \(F_{R|\mathbf{W}}(r|\mathbf{w})\) is in the domain of attraction of an extreme value distribution with shape parameter \(\xi(\mathbf{w})\)[15, 16].
In the univariate peaks-over-threshold (POT) method, assumption (A1) is used to motivate the approximation of the tail of a distribution above some high threshold by a GP distribution. In the multivariate context, assumption (A1) suggests a parametric model for the tail of the conditional radial distribution, but does not lead to a parametric form for the distribution of the angular variable. This motivates the so-called semi-parametric angular-radial (SPAR) model, introduced in [17]. A similar approach was also recently proposed in [18]. The SPAR model assumes that the
tail of \(f_{R|\mathbf{W}}(r|\mathbf{w})\) above some threshold \(\mu(\mathbf{w})\) can be represented by a GP density, with parameters conditional on \(\mathbf{w}\). Let \(\zeta(\mathbf{w})\coloneqq\Pr(R>\mu(\mathbf{w})\mid\mathbf{W}=\mathbf{w})\). Then the SPAR model for the joint angular-radial density can be written:
\[f_{R,\mathbf{W}}(r,\mathbf{w})=\zeta(\mathbf{w})\,f_{\mathbf{W}}(\mathbf{w})\, f_{GP}(r-\mu(\mathbf{w});\xi(\mathbf{w}),\sigma(\mathbf{w})),\quad r>\mu( \mathbf{w}), \tag{4}\]
where \(f_{GP}\) is the GP density function. For the purposes of inference, it is useful to make two further assumptions:
1. **(A2)** There exist parameter functions \(\sigma(\mu,\mathbf{w})\) and \(\xi(\mathbf{w})\) that satisfy (A1) and are continuous, and \(\sigma(\mu,\mathbf{w})\) is finite for finite \(\mu\);
2. **(A3)** The angular density \(f_{\mathbf{W}}(\mathbf{w})\) is continuous and finite for all \(\mathbf{w}\).
Assumptions (A2) and (A3) are intended to simplify the inference procedure. Under these assumptions, the inference can be viewed as a non-stationary peaks-over-threshold analysis, for which there are many examples in the literature, e.g. [1, 2, 3, 4]. As there are many potential methods for inference under the SPAR model assumptions, we defer an investigation of estimation methods to future work. Instead, the focus of the present article is to consider conditions under which assumptions (A1)-(A3) are valid. The intention is that these theoretical considerations will inform an approach to inference that reframes multivariate extremes as a natural extension of univariate extremes with angular dependence.
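To make this reframing concrete, the sketch below evaluates the SPAR density (4) for user-supplied parameter functions, using the generalised Pareto density from SciPy. The parameter functions here are arbitrary placeholders standing in for the output of whatever fitting procedure is adopted; this is not the code accompanying the paper.

```python
import numpy as np
from scipy.stats import genpareto

def spar_density(r, w, f_W, mu, xi, sigma, zeta):
    """Evaluate the SPAR joint density of Eq. (4) at radius r and angle w, given
    the angular density f_W(w), threshold mu(w), GP parameters xi(w) and sigma(w),
    and threshold exceedance probability zeta(w)."""
    excess = r - mu(w)
    if excess <= 0:
        raise ValueError("Eq. (4) applies only for r > mu(w).")
    return zeta(w) * f_W(w) * genpareto.pdf(excess, c=xi(w), scale=sigma(w))

# Toy example: uniform angular density on the unit circle and angle-independent
# GP parameters (placeholders for fitted, angle-dependent functions).
value = spar_density(
    r=3.0, w=np.pi / 4,
    f_W=lambda w: 1.0 / (2.0 * np.pi),
    mu=lambda w: 2.0, xi=lambda w: 0.1, sigma=lambda w: 1.0, zeta=lambda w: 0.1)
print(value)
```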
We will consider cases where assumptions (A1)-(A3) hold either for all \(\mathbf{w}\in\mathcal{S}\), or for all \(\mathbf{w}\in\mathcal{S}^{+}\), where \(\mathcal{S}^{+}=\mathcal{S}\cap[0,\infty)^{d}\) is the restriction of \(\mathcal{S}\) to the (closed) non-negative orthant in \(\mathbb{R}^{d}\). When these assumptions hold, we will say that \(\mathbf{X}\) has a SPAR representation under map \(\mathcal{T}\) for \(\mathbf{w}\in\mathcal{S}\) or \(\mathbf{w}\in\mathcal{S}^{+}\), respectively. Cases of other orthants in \(\mathbb{R}^{d}\) can be treated analogously, by multiplying components of \(\mathbf{X}\) by \(-1\), so that the variable range of interest lies in the non-negative orthant.
### Outline of the paper
The definition of the SPAR model requires a map from Cartesian to some angular-radial coordinates (or polar coordinates for short). As mentioned in Section 1.1, there are various ways in which polar coordinates can be defined. This is discussed in detail in Section 2. We use more general types of polar coordinate system than those commonly used for multivariate extremes; these generalised systems are defined in terms of gauge functions of star-shaped sets. The more commonly-used polar coordinates, defined in terms of norms, are special cases of these generalised coordinate systems. Using these generalised polar coordinates is necessary for addressing the question of whether coordinate systems can be defined in which the angular and radial variables are AI. As many of the examples in this work are two-dimensional, it is helpful to introduce a generalised concept of scalar angles in two dimensions, discussed in Section 2.2. We show that using these generalised scalar angles leads to useful simplifications in some cases.
The effect of the choice of polar coordinate system is discussed in Section 3. We consider which coordinate systems lead to a simple transformation of the probability density function from Cartesian to generalised polar coordinates. We also consider the effect of changing between different polar coordinate systems on whether the SPAR assumptions are satisfied. Theorem 3.3 addresses the question of when it is possible to define coordinate systems in which the angular and radial components are AI. We show that when this is possible, the polar coordinate system for which this is true is not necessarily a 'standard' system, and may be difficult to estimate in practice.
In order to make general statements about the impact of the choice of margins on whether a given copula has a SPAR representation, it is necessary to make some general assumptions about the asymptotic behaviour of copulas. Section 4 presents some generalised definitions of previous asymptotic models for copulas and copula densities, as well as introducing some new definitions. We derive some links between these models, which are useful for understanding the limitations of angular-radial representations on various margins.
In Section 5, we consider the effect of the choice of margins on whether the SPAR assumptions are met, and present some conditions on the copula which are sufficient for it to have a SPAR representation on certain margins. We show that using Laplace margins leads to forms of the density function in which the SPAR assumptions are satisfied for many commonly-used copulas, with various dependence classes. In contrast, the use of heavy-tailed or short-tailed margins imposes more restrictions on the types of copula that have SPAR representations. In Section 5.2.2 we discuss the concept of limit sets of a random sample, introduced by Davis _et al._[19]. Various existing representations for multivariate extremes can be related to the limit set, when it exists [20]. We show that SPAR models on Laplace margins have a simple link to the corresponding limit set for the distribution, and provide a rigorous means of estimating the limit set, when it exists.
Finally, a discussion and conclusions are presented in Section 6. We return to some of the questions posed in Section 1.1, regarding inference for the SPAR model, and discuss how the theoretical results derived in this work can inform the method used for inference. Proofs of results stated in the text are provided in Appendix A. MATLAB code for calculating the numerical examples in this work is provided at [https://github.com/edmackay/SPAR](https://github.com/edmackay/SPAR).
## 2 Generalised angular-radial coordinate systems
In this section we define the generalised polar coordinate systems used throughout this work. Section 2.1 defines polar coordinates in \(\mathbb{R}^{d}\) in terms of gauge functions of star-shaped sets. This type of coordinate system has been studied by Richter [21], and we summarise the necessary information here. Then, Section 2.2 considers generalised scalar metrics of angle in \(\mathbb{R}^{2}\). As far as we are aware, this type of generalised angle has not been defined previously, so these are considered in some detail.
### 2.1 Polar coordinate systems for star-shaped sets
We begin by presenting some preliminary definitions which can be found in reference texts e.g. [22]. In the following discussion, we will associate various objects with norms and star-shaped sets. To refer to an arbitrary norm or set, we will use the notation \(\|\cdot\|_{*}\) or \(S_{*}\), replacing the asterisk with an appropriate label for definiteness as necessary. Throughout the article, we assume that the number of dimensions, \(d\), is a natural number.
**Definition 1** (Star-shaped set).: A star-shaped set, \(G_{*}\subset\mathbb{R}^{d}\), is a set with the property that \(\mathbf{x}\in G_{*}\) implies \(t\mathbf{x}\in G_{*}\) for \(0\leq t\leq 1\).
**Definition 2** (Gauge function).: Let \(\mathcal{S}_{*}\subset\mathbb{R}^{d}\) be a surface which is the boundary to a compact star-shaped set containing the origin and assume \(\mathbf{0}\notin\mathcal{S}_{*}\). The gauge function of \(\mathcal{S}_{*}\), also referred to as the Minkowski functional, \(\mathcal{R}_{*}:\mathbb{R}^{d}\to[0,\infty)\) is defined as \(\mathcal{R}_{*}(\mathbf{x})=\inf\{r\in[0,\infty):\mathbf{x}\in r\mathcal{S}_{ *}\}\), where \(r\mathcal{S}_{*}=\{(rw_{1},...,rw_{d}):(w_{1},...,w_{d})\in\mathcal{S}_{*}\}\).
We use the notation \(\mathcal{R}_{*}\) for the gauge function to connote that for each point \(\mathbf{x}\in\mathbb{R}^{d}\), a gauge function defines a kind of radius of \(\mathbf{x}\) from the origin. Informally, the gauge function defines how much we need to 'inflate' or 'contract' the surface, so that the point \(\mathbf{x}\) lies on the scaled surface. Next, we consider norms and their unit spheres.
**Definition 3** (Norm).: A norm on \(\mathbb{R}^{d}\) is a function \(\|\cdot\|_{*}:\mathbb{R}^{d}\to[0,\infty)\), for which the following properties hold for all \(\mathbf{x},\mathbf{y}\in\mathbb{R}^{d}\) and \(a\in\mathbb{R}\):
1. Subadditivity: \(\|\mathbf{x}+\mathbf{y}\|_{*}\leq\|\mathbf{x}\|_{*}+\|\mathbf{y}\|_{*}\).
2. Absolute homogeneity: \(\|a\mathbf{x}\|_{*}=|a|\|\mathbf{x}\|_{*}\).
3. Positive definiteness: \(\|\mathbf{x}\|_{*}=0\) if and only if \(\mathbf{x}=\mathbf{0}\).
**Definition 4** (Unit sphere of a norm).: The unit sphere with respect to the norm \(\|\cdot\|_{*}\) is the set of points \(\mathcal{U}_{*}=\{\mathbf{w}\in\mathbb{R}^{d}:\|\mathbf{w}\|_{*}=1\}\). In the case \(d=2\), \(\mathcal{U}_{*}\) is referred to as the unit circle.
The subadditivity property of norms implies that the unit sphere with respect to any norm must be convex, and the absolute homogeneity property implies the unit sphere is centrally symmetric, i.e. \(\mathbf{u}\in\mathcal{U}_{*}\) implies \(-\mathbf{u}\in\mathcal{U}_{*}\). The unit sphere defines the boundary to a star-shaped set \(G_{*}=\{\mathbf{w}\in\mathbb{R}^{d}:\|\mathbf{w}\|_{*}\leq 1\}\). The fact that \(G_{*}\) is star-shaped can be seen from the absolute homogeneity property of norms: \(\mathbf{x}\in G_{*}\) implies that for \(0\leq t\leq 1\) we have \(\|t\mathbf{x}\|_{*}\leq\|\mathbf{x}\|_{*}\leq 1\), and hence \(t\mathbf{x}\in G_{*}\). An equivalent way to define a norm would be in terms of the gauge function of its unit sphere or circle, i.e. \(\|\mathbf{x}\|_{*}=\inf\{r\in[0,\infty):\mathbf{x}\in r\mathcal{U}_{*}\}\). Writing in this way, it is clear that norms are a special type of gauge function. In general, gauge functions differ from norms in that they do not necessarily satisfy subadditivity or absolute homogeneity. However, gauge functions are positive definite and positively homogeneous, with \(\mathcal{R}_{*}(a\mathbf{x})=a\mathcal{R}_{*}(\mathbf{x})\) for \(a>0\). Moreover, the boundaries to star-shaped sets do not necessarily have the properties of the unit sphere for a norm. In particular, they need not be convex or centrally symmetric. Norms are usually given as explicit functions of the input variables, making their calculation straightforward. In contrast, evaluating the gauge function for an arbitrary surface \(\mathcal{S}_{*}\) is less practical from a computational perspective, although clearly achievable in principle. We will see in Section 3 that defining polar coordinates in terms of this more general notion of gauge functions leads to simple forms of SPAR representations in which the angular and radial variables are independent in some cases.
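As a purely illustrative numerical aside, a gauge function can be evaluated whenever the Euclidean distance from the origin to the boundary \(\mathcal{S}_{*}\) can be computed in any given direction: by positive homogeneity, \(\mathcal{R}_{*}(\mathbf{x})=\|\mathbf{x}\|_{2}/\rho(\mathbf{x}/\|\mathbf{x}\|_{2})\), where \(\rho(\mathbf{u})\) denotes that boundary distance. The following Python sketch (our own code and naming, not taken from the MATLAB code accompanying this work) checks this against the closed-form expression \(\|\mathbf{x}\|_{p}\) with \(p<1\), which is a gauge function but not a norm.

```python
import numpy as np

def gauge_from_boundary_radius(x, rho):
    """Evaluate the gauge R_*(x) of a star-shaped set whose boundary is
    described by a 'boundary radius' function rho(u): the Euclidean distance
    from the origin to the boundary in unit direction u. By positive
    homogeneity, R_*(x) = ||x||_2 / rho(x / ||x||_2)."""
    x = np.asarray(x, dtype=float)
    r2 = np.linalg.norm(x)
    if r2 == 0.0:
        return 0.0
    return r2 / rho(x / r2)

# Example boundary: the L^p 'unit circle' for p = 0.5 (a gauge, not a norm).
p = 0.5
def rho_lp(u):
    # t*u lies on the L^p unit sphere when t * ||u||_p = 1.
    return 1.0 / np.sum(np.abs(u) ** p) ** (1.0 / p)

x = np.array([0.3, 0.4])
print(gauge_from_boundary_radius(x, rho_lp))   # numerical gauge value
print(np.sum(np.abs(x) ** p) ** (1.0 / p))     # closed-form ||x||_p; identical
```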
The norm most commonly used for defining angular-radial coordinates is the \(L^{p}\) norm, defined below.
**Definition 5** (\(L^{p}\) norm).: For a vector \(\mathbf{x}=(x_{1},\cdots,x_{d})\in\mathbb{R}^{d}\) and real number \(p\geq 1\), the \(L^{p}\) norm is defined as
\[\|\mathbf{x}\|_{p}=\left(\sum_{j=1}^{d}|x_{j}|^{p}\right)^{1/p}\,. \tag{5}\]
In the case \(p=\infty\), the \(L^{\infty}\) norm is defined as \(\|\mathbf{x}\|_{\infty}=\max\{|x_{1}|,\cdots,|x_{d}|\}\). The \(L^{p}\) norms for \(p=1,2,\infty\) are sometimes referred to as the sum norm, Euclidean norm and max norm, respectively.
In the case \(p\in(0,1)\), (5) does not define a norm, since it is not subadditive. However, \(\|\mathbf{x}\|_{p}\) for \(p\in(0,1)\) does define a gauge function. This is discussed further in Example 1 below. Next, we come to the definition of generalised polar coordinates, defined in terms of the gauge function of a star-shaped set. Consider a vector \(\mathbf{x}\in\mathbb{R}^{d}\setminus\{\mathbf{0}\}\). In the multivariate extremes literature, it is common to define angular-radial coordinates as
\[r=\|\mathbf{x}\|_{rad},\quad\mathbf{w}=\frac{\mathbf{x}}{\|\mathbf{x}\|_{ang}},\]
where \(\|\cdot\|_{rad}\) and \(\|\cdot\|_{ang}\) are two arbitrary norms, usually \(L^{p}\) norms (see e.g. [8, p258]). This can be generalised, by replacing the norms with gauge functions. To ensure that we can calculate the Jacobian of the transformation from Cartesian to polar coordinates, we require that the gauge functions used to define radii and angles correspond to star-shaped sets with continuous and piecewise-smooth boundaries. This will be discussed further in Section 3.
**Definition 6** (Polar coordinates in \(\mathbb{R}^{d}\)).: Let \(\mathbf{x}\in\mathbb{R}^{d}\setminus\{0\}\) and let \(\mathcal{R}_{rad}\), \(\mathcal{R}_{ang}\) be arbitrary gauge functions, corresponding to continuous and piecewise-smooth surfaces \(\mathcal{S}_{rad}\) and \(\mathcal{S}_{ang}\). Define the bijective map \(\mathcal{T}:\mathbb{R}^{d}\setminus\{0\}\to(0,\infty)\times\mathcal{S}_{ang}\) as
\[\mathcal{T}(\mathbf{x})=\left(\mathcal{R}_{rad}(\mathbf{x}),\frac{\mathbf{x} }{\mathcal{R}_{ang}(\mathbf{x})}\right).\]
The values \((r,\mathbf{w})=\mathcal{T}(\mathbf{x})\) are the polar coordinates of \(\mathbf{x}\) with respect to gauge functions \(\mathcal{R}_{rad}\) and \(\mathcal{R}_{ang}\). The inverse map \(\mathcal{T}^{-1}:(0,\infty)\times\mathcal{S}_{ang}\to\mathbb{R}^{d}\setminus \{0\}\) from polar to Cartesian coordinates is given by
\[\mathcal{T}^{-1}(r,\mathbf{w})=r\frac{\mathbf{w}}{\mathcal{R}_{rad}(\mathbf{ w})}.\]
Since \(\mathcal{T}\) is dependent on the two gauge functions \(\mathcal{R}_{rad}\), \(\mathcal{R}_{ang}\), we could include this information and write \(\mathcal{T}(\mathbf{x};\mathcal{R}_{rad},\mathcal{R}_{ang})\). However, as the gauge functions should be clear from the context, we omit this information to avoid overly-cumbersome notation.
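For readers who prefer code to notation, the following short Python sketch (our own illustration; the function and variable names are ours) implements the maps \(\mathcal{T}\) and \(\mathcal{T}^{-1}\) of Definition 6 for a pair of gauge functions and checks that they are mutually inverse.

```python
import numpy as np

def to_polar(x, R_rad, R_ang):
    """Map T: x -> (r, w) with r = R_rad(x) and w = x / R_ang(x)."""
    x = np.asarray(x, dtype=float)
    return R_rad(x), x / R_ang(x)

def from_polar(r, w, R_rad):
    """Inverse map T^{-1}: (r, w) -> r * w / R_rad(w)."""
    w = np.asarray(w, dtype=float)
    return r * w / R_rad(w)

# Example with L^1 radial gauge and L^2 angular gauge.
L1 = lambda x: np.sum(np.abs(x))
L2 = lambda x: np.linalg.norm(x)

x = np.array([1.0, -2.0, 0.5])
r, w = to_polar(x, L1, L2)
x_back = from_polar(r, w, L1)
print(np.allclose(x, x_back))   # True: T^{-1}(T(x)) = x
```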
### 2.2 Generalised angles in \(\mathbb{R}^{2}\)
For the polar coordinate systems defined above, although the angular coordinate \(\mathbf{w}\in\mathcal{S}_{ang}\) is \(d\)-dimensional, it only has \(d-1\) degrees of freedom, due to the constraint that \(\mathcal{R}_{ang}(\mathbf{w})=1\). For \(d=2\), the angular variable \(\mathbf{w}\) has only one degree of freedom, but is defined in terms of two coordinates, \(\mathbf{w}=(w_{1},w_{2})\). For bivariate cases, it is useful to define a single variable to specify the angle. In many works on bivariate extremes, it is only the upper-right quadrant of the plane that is of interest. In this case, if there is a one-to-one relation between \(w_{1}\) and \(w_{2}\) in this quadrant, then the variable \(w_{1}\) can be used as a surrogate for angle. However, if we wish to uniquely specify coordinates in all quadrants of the plane, then this definition of angle is ambiguous. For example, if angles are defined in terms of the \(L^{1}\) norm, then for \((x,y)\in\mathbb{R}^{2}\setminus\{0\}\) we have \(w_{1}=x/(|x|+|y|)\), which contains no information about the sign of \(y\).
Below, we will define a generalised scalar angle of a point, defined in terms of a gauge function. To motivate this, consider the definition of Euclidean angles, defined in terms of the \(L^{2}\) norm. We have \((x,y)/\|(x,y)\|_{2}=(\cos(\theta),\sin(\theta))\), where \(\theta=\text{atan2}(x,y)\), and atan2 is the four-quadrant inverse tan function. In this case, the coordinates \((r,\theta)\in(0,\infty)\times(-\pi,\pi]\) uniquely specify a point in the punctured plane \(\mathbb{R}^{2}\setminus\{0\}\). The Euclidean angle of \((x,y)\) can be defined as the distance around the circumference of the \(L^{2}\) unit circle from \((1,0)\) to \((x,y)/\|(x,y)\|_{2}\), measured counter-clockwise, with negative values corresponding to clockwise distances. This definition can be generalised to arbitrary gauge functions as follows. To ensure that the angle is well-defined, we require that the boundary to the star-shaped set is continuous and piecewise-smooth.
**Definition 7** (Arc length functions).: Let \(\mathcal{R}_{*}\) be a gauge function on \(\mathbb{R}^{2}\) for a continuous and piecewise-smooth boundary \(\mathcal{S}_{*}\). Denote the total length of \(\mathcal{S}_{*}\) as \(\mathcal{C}_{*}\) (in the case where \(\mathcal{R}_{*}\) is a norm, this is the circumference of the unit circle). Define the arc length function \(\ell_{*}:\mathcal{S}_{*}\to[0,\mathcal{C}_{*})\), to be the distance along \(\mathcal{S}_{*}\), from \((1,0)/\mathcal{R}_{*}(1,0)\) to \(\mathbf{w}\in\mathcal{S}_{*}\), measured anti-clockwise (see Figure 3). Define a set-valued function \(\mathfrak{L}_{*}:\mathcal{S}_{*}\to\{\{d+n\mathcal{C}_{*}:n\in\mathbb{Z}\}:d\in [0,\mathcal{C}_{*})\}\) as
\[\mathfrak{L}_{*}(\mathbf{w})=\{\ell_{*}(\mathbf{w})+n\mathcal{C}_{*}:n\in \mathbb{Z}\}.\]
The branches of \(\mathfrak{L}_{*}\) for each value of \(n\in\mathbb{Z}\) correspond to the number of loops around the unit circle, with \(n<0\) corresponding to distances measured clockwise from the x-axis. We denote the restriction of the image of \(\mathfrak{L}_{*}\) to a particular branch with values in the interval \(I\), as \(\mathfrak{L}_{*}^{I}\), where \(I\) is a half open interval, either \((a,a+\mathcal{C}_{*}]\) or \([a,a+\mathcal{C}_{*})\) for some \(a\in\mathbb{R}\).
The arc length function can be evaluated using integration. By the assumption that \(\mathcal{S}_{*}\) is piecewise-smooth, for \((u,v)\in\mathcal{S}_{*}\) we can divide it into segments where either \(dv/du\) or \(du/dv\) (or both) is continuous and bounded. For segments where \(dv/du\) is continuous and bounded, the arc length, \(s\), between \(u=a\) and \(u=b\), can be calculated as
\[s=\int_{a}^{b}\left(1+\left(\frac{dv}{du}\right)^{2}\right)^{1/2}du. \tag{6}\]
In segments where \(dv/du\) becomes infinite, we switch \(u\) and \(v\) in (6). As the circumference \(\mathcal{C}_{*}\) is dependent on \(\mathcal{S}_{*}\), we define pseudo-angles in terms of a normalised distance around the boundary.
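Before stating that definition, note that the arc-length computation in (6) is easy to carry out numerically. The sketch below (our own Python code, with the \(L^{p}\) circle for \(p\geq 1\) as a convenient test case) integrates over the segment \(v\in[0,2^{-1/p}]\), where \(|du/dv|\leq 1\), and uses the symmetry of the \(L^{p}\) unit circle to recover the full circumference \(\mathcal{C}_{p}\).

```python
import numpy as np
from scipy.integrate import quad

def quarter_circumference_lp(p):
    """Arc length of {(u, v): u, v >= 0, u^p + v^p = 1} for p >= 1. On this
    segment u = (1 - v^p)^{1/p}, so du/dv = -v^{p-1} (1 - v^p)^{1/p - 1}.
    Integrate from v = 0 to the diagonal point v = 2^{-1/p}, where the
    gradient stays bounded, and double by symmetry about the line u = v."""
    def integrand(v):
        dudv = v ** (p - 1.0) * (1.0 - v ** p) ** (1.0 / p - 1.0)
        return np.sqrt(1.0 + dudv ** 2)
    s, _ = quad(integrand, 0.0, 2.0 ** (-1.0 / p))
    return 2.0 * s

for p in (1.0, 1.5, 2.0, 3.0):
    C_p = 4.0 * quarter_circumference_lp(p)   # four symmetric quadrants
    print(p, C_p)
# Sanity checks: C_1 = 4*sqrt(2) ~ 5.657 and C_2 = 2*pi ~ 6.283.
```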
**Definition 8** (Pseudo-angles).: Let \(\mathcal{R}_{*}\) be a gauge function on \(\mathbb{R}^{2}\) for piecewise-smooth boundary \(\mathcal{S}_{*}\). A set-valued pseudo-angle function \(\mathcal{A}_{*}:\mathcal{S}_{*}\to\{\{q+4n:n\in\mathbb{Z}\}:q\in[0,4)\}\), is defined as
\[\mathcal{A}_{*}(\mathbf{w})\coloneqq\frac{4}{\mathcal{C}_{*}}\mathfrak{L}_{*}(\mathbf{w}).\]
The restriction of the image of \(\mathcal{A}_{*}\) to a particular branch with values in the interval \(I\), is denoted \(\mathcal{A}_{*}^{I}\), where \(I\) is a half-open interval, either \((a,a+4]\) or \([a,a+4)\) for some \(a\in\mathbb{R}\).
The motivation for the inclusion of the factor \(4\) in the definition of pseudo-angles is as follows. By definition, the set of angles \(\{0+4n:n\in\mathbb{Z}\}\) corresponds to the direction of the positive x-axis for any gauge function, \(\mathcal{R}_{*}\). When \(\mathcal{R}_{*}\) is a norm, from the central symmetry property of the unit circle, the set of angles \(\{2+4n:n\in\mathbb{Z}\}\) corresponds to the direction of the negative x-axis. Suppose that \(\mathcal{S}_{*}\) is symmetric about the x- and y-axes (in the case of unit circles for norms, the absolute homogeneity property implies that if the unit circle is symmetric in one axis then it is symmetric in both axes). In this case, the arc length in each quadrant of the plane is equal. This symmetry property holds for \(L^{p}\) norms. Under this symmetry, the sets of angles \(\{1+4n:n\in\mathbb{Z}\}\) and \(\{3+4n:n\in\mathbb{Z}\}\) correspond to the directions of the positive and negative y-axes, respectively. In these cases, the pseudo-angle, \(q=\mathcal{A}_{*}^{[0,4)}(\mathbf{x}/\mathcal{R}_{*}(\mathbf{x}))\), can be interpreted as indicating the quadrant of the plane containing \(\mathbf{x}\), i.e. \(q\in(0,1)\) indicates \(\mathbf{x}\) lies in the first quadrant, \(q\in(1,2)\) indicates \(\mathbf{x}\) lies in the second quadrant, etc., motivating the inclusion of the factor \(4\) in the definition of pseudo-angle. The pseudo-angle could have been defined alternatively using the normalising factor \(2\pi\), so that the definition coincides with the standard Euclidean definition for the \(L^{2}\) norm. However, we have opted to normalise using the factor \(4\), firstly for the interpretation in terms of quadrants of the plane and secondly to emphasise the difference between Euclidean angles and pseudo-angles.1
Footnote 1: The definition of pseudo-angle is given in terms of a relation between a vector and a point on the x-axis. More generally, we could define a pseudo-angle between any two vectors in the same way. However, apart from the case of the Euclidean norm (and norms which are scalar multiples of this), the pseudo-angle between two vectors is not invariant to rotation.
To define polar coordinates on \(\mathbb{R}^{2}\), we consider the case where the pseudo-angle and gauge functions are defined in terms of two arbitrary gauge functions. This can simplify the transformation of density functions from Cartesian to polar coordinates, discussed in the next section. When specifying polar coordinates, we will assume \(I=(-2,2]\). This follows the convention for the angular variable \(\theta\) used in standard polar coordinates, where it is usual to assume \(\theta\in(-\pi,\pi]\).

Figure 3: Illustration of the definition of the gauge function \(\mathcal{R}_{*}\) and pseudo-angle, \(q\), with respect to boundary \(\mathcal{S}_{*}\). The circumference of the boundary is \(\mathcal{C}_{*}\). Pseudo-trigonometric functions \(\sin_{*}(q)\) and \(\cos_{*}(q)\) relate the pseudo-angle to the corresponding x- and y-coordinates on the unit circle.
**Definition 9** (Polar coordinates on \(\mathbb{R}^{2}\)).: Let \(\mathbf{x}=(x,y)\in\mathbb{R}^{2}\setminus\{0\}\) and let \(\mathcal{R}_{rad}\), \(\mathcal{R}_{ang}\) be arbitrary gauge functions, with \(\mathcal{R}_{ang}\) corresponding to the continuous and piecewise-smooth boundary \(\mathcal{S}_{ang}\). Define the bijective map \(\tilde{\mathcal{T}}:\mathbb{R}^{2}\setminus\{0\}\to(0,\infty)\times(-2,2]\) as
\[\tilde{\mathcal{T}}(\mathbf{x})=\left(\mathcal{R}_{rad}(\mathbf{x}), \mathcal{A}_{ang}^{(-2,2]}\left(\frac{\mathbf{x}}{\mathcal{R}_{ang}(\mathbf{x })}\right)\right).\]
The values \((r,q)=\tilde{\mathcal{T}}(x,y)\) are the polar coordinates of \((x,y)\) with respect to gauge functions \(\mathcal{R}_{rad}\) and \(\mathcal{R}_{ang}\). The notation \(\tilde{\mathcal{T}}\) is used for the two-dimensional map from Cartesian to polar coordinates, to emphasise that it differs from the map \(\mathcal{T}\), which uses vector angles.
The trigonometric functions sine and cosine provide a short-hand for the inverse map from standard polar coordinates to Cartesian coordinates, i.e. if \((r,\theta)\) are the standard polar coordinates of \((x,y)\) then \((x,y)=r(\cos(\theta),\sin(\theta))\). If gauge function \(\mathcal{R}_{*}\) is used to define both radii and angles, then it is useful to define analogous pseudo-trigonometric functions \(\cos_{*}(q)\) and \(\sin_{*}(q)\) such that \(\tilde{\mathcal{T}}^{-1}(r,q)=r(\cos_{*}(q),\sin_{*}(q))\). In general, if different gauge functions are used to specify radii and angles, then the inverse map will be defined as
\[\tilde{\mathcal{T}}^{-1}(r,q)=r\frac{(\cos_{ang}(q),\sin_{ang}(q))}{\mathcal{ R}_{rad}(\cos_{ang}(q),\sin_{ang}(q))}.\]
**Definition 10** (Pseudo-trigonometric functions).: Let \(\mathcal{A}_{*}^{I}\) be a pseudo-angle function for the boundary \(\mathcal{S}_{*}\). Note that \(\mathcal{A}_{*}^{I}\) is a bijection and denote the inverse map as \((\mathcal{A}_{*}^{I})^{-1}:I\to\mathcal{S}_{*}\). Denote the extent of \(\mathcal{S}_{*}\) in the x- and y-directions as:
\[u_{min} =\inf\{u:(u,v)\in\mathcal{S}_{*},v\in\mathbb{R}\},\] \[u_{max} =\sup\{u:(u,v)\in\mathcal{S}_{*},v\in\mathbb{R}\},\] \[v_{min} =\inf\{v:(u,v)\in\mathcal{S}_{*},u\in\mathbb{R}\},\] \[v_{max} =\sup\{v:(u,v)\in\mathcal{S}_{*},u\in\mathbb{R}\}.\]
The pseudo-trigonometric functions \(\cos_{*}:\mathbb{R}\to[u_{min},u_{max}]\) and \(\sin_{*}:\mathbb{R}\to[v_{min},v_{max}]\), are defined in terms of the inverse map as
\[(\cos_{*}(q),\sin_{*}(q))=(\mathcal{A}_{*}^{I})^{-1}(q). \tag{7}\]
Although the domain on the RHS of (7) is restricted to interval \(I\), the definition extends to \(q\in\mathbb{R}\), by noting that \(I\) can be any half-open interval in \(\mathbb{R}\) with length \(4\).
The pseudo-trigonometric functions relate the pseudo-angle to the corresponding x- and y-positions on the boundary \(\mathcal{S}_{*}\) (see Figure 3), in a way that is directly analogous to standard trigonometric functions. Richter [23] also considered the definition of functions to relate the Euclidean angle \(\theta\) with the corresponding point on the \(L^{p}\) unit circle. The key difference with the functions defined here is that our pseudo-trigonometric functions take pseudo-angles as an input, rather than Euclidean angles. As far as we are aware, our definition of pseudo-angle has not been presented previously.
**Example 1** (Pseudo-angles for \(L^{p}\) norms on \(\mathbb{R}^{2}\)).: In this example we will consider the pseudo-angles and pseudo-trigonometric functions for the \(L^{p}\) norm. As noted above, when \(p\in(0,1)\), \(\|\cdot\|_{p}\) is not a norm, but is a gauge function. We will include these cases for completeness. Examples of the unit circles for \(L^{p}\) norms are shown in Figure 4. Let \((u,v)\) be points on the \(L^{p}\) unit circle, so that \(u=\pm(1-|v|^{p})^{1/p}\). The absolute value of the gradient is given by
\[\left|\frac{du}{dv}\right|=\left(\frac{|v|^{p}}{1-|v|^{p}}\right)^{1-\frac{ 1}{p}}.\]
This can be substituted into (6) to calculate the arc length \(\ell_{p}(u,v)\) numerically (see Definition 7). Note that the point \(u=v=2^{-1/p}\) corresponds to \(1/8\) of the circumference of the unit circle, \(\mathcal{C}_{p}\), measured from the positive x- or y-axes. Therefore, we have \(\mathcal{C}_{p}=8\,\ell_{p}(2^{-1/p},2^{-1/p})\), so we need only calculate pseudo-angles for \(u,v\in[0,2^{-1/p}]\), and the remaining values can be found by the symmetry of the \(L^{p}\) unit circle. The pseudo-trigonometric functions \(\cos_{p}(q)\)
and \(\sin_{p}(q)\) are shown in Figure 4 for various values of \(p\). For the special cases \(p=1\) and \(2\) we can write explicit expressions for the pseudo-trigonometric and angle functions. For \(p=1\) we have
\[\cos_{1}(q)=1-|q|,\qquad q\in[-2,2],\] \[\mathcal{A}_{1}^{(-2,2]}(u,v)=\varepsilon(v)(1-u),\qquad(u,v)\in\mathcal{U}_{1},\]
where \(\varepsilon\) is the generalised sign function, with \(\varepsilon(v)=1\) for \(v\geq 0\) and \(\varepsilon(v)=-1\) otherwise.ii The domain of definition of \(\cos_{1}\) can be extended to \(q\in\mathbb{R}\) using the periodic property. We can then define \(\sin_{1}(q)=\cos_{1}(q-1)\) for \(q\in\mathbb{R}\). For \(p=2\), we have \(\cos_{2}(q)=\cos(\pi q/2)\), \(\sin_{2}(q)=\sin(\pi q/2)\) for \(q\in\mathbb{R}\) and \(\mathcal{A}_{2}^{(-2,2]}(u,v)=(2/\pi)\text{atan}2(u,v)\) for \((u,v)\in\mathcal{U}_{2}\). \(\blacksquare\)
Footnote ii: The difference is that \(\text{sgn}(0)=0\) whereas \(\varepsilon(0)=1\).
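To make Example 1 concrete, the following Python sketch (our own code; the function names are ours) implements the closed-form \(p=1\) expressions and verifies the round trip from Cartesian coordinates to \(L^{1}\) polar coordinates \((r,q)\) and back via \((x,y)=r(\cos_{1}(q),\sin_{1}(q))\).

```python
import numpy as np

def cos1(q):
    """cos_1(q) = 1 - |q| on [-2, 2], extended 4-periodically."""
    q = np.mod(np.asarray(q, dtype=float) + 2.0, 4.0) - 2.0
    return 1.0 - np.abs(q)

def sin1(q):
    return cos1(np.asarray(q, dtype=float) - 1.0)

def angle1(x, y):
    """L^1 pseudo-angle A_1^{(-2,2]} of a point (x, y) != (0, 0)."""
    u = x / (np.abs(x) + np.abs(y))
    eps = np.where(y >= 0.0, 1.0, -1.0)        # generalised sign of y
    return eps * (1.0 - u)

# Round trip: (x, y) -> (r, q) -> (x, y), with r the L^1 radius.
rng = np.random.default_rng(1)
xy = rng.normal(size=(5, 2))
r = np.abs(xy).sum(axis=1)
q = angle1(xy[:, 0], xy[:, 1])
xy_back = r[:, None] * np.column_stack([cos1(q), sin1(q)])
print(np.allclose(xy, xy_back))   # True
```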
## 3 Effects of the choice of coordinate system
The SPAR model requires a transformation of density functions from Cartesian to polar coordinates. In this section we consider the effect of the choice of polar coordinate system on SPAR representations for a given joint density. To illustrate the general principles, we start by considering bivariate cases in Section 3.1, with polar coordinates specified in terms of scalar pseudo-angles. We consider the Jacobian of the transformation from Cartesian to generalised polar coordinates, with focus on pseudo-angles defined in terms of \(L^{p}\) norms. We then consider the effect of changing the gauge functions used to define radii and angles, and how this affects whether the SPAR assumptions are satisfied. Finally, these results are used to address the question of when coordinate systems can be defined in which angular and radial variables are asymptotically independent. In Section 3.2 we go on to consider the general case of density functions in \(\mathbb{R}^{d}\), and show that similar results apply. The results in this section are essentially quite simple, as they are all based on calculating Jacobians of various transformations of coordinate systems. However, as the coordinate systems involved are non-standard in some cases, we illustrate the results using simple examples.
### 3.1 Bivariate density functions
Suppose that random vector \((X,Y)\) has probability density function \(f_{X,Y}(x,y)\). Define the random vector \((R,Q)=\tilde{\mathcal{T}}(X,Y)\) for some gauge functions \(\mathcal{R}_{rad}\) and \(\mathcal{R}_{ang}\) (see Definition 9). Then \((R,Q)\) has density function
\[f_{R,Q}(r,q)=f_{X,Y}\left(r\frac{(\cos_{ang}(q),\sin_{ang}(q))}{\mathcal{R}_{ rad}(\cos_{ang}(q),\sin_{ang}(q))}\right)|\det(\mathbf{J}(r,q))|, \tag{8}\]
where \(\mathbf{J}\) is the Jacobian matrix of \(\tilde{\mathcal{T}}^{-1}(r,q)\), so that
\[|\det(\mathbf{J}(r,q))|=\frac{r}{(\mathcal{R}_{rad}(\cos_{ang}(q),\sin_{ang}( q)))^{2}}\left|\cos_{ang}(q)\sin_{ang}^{\prime}(q)-\cos_{ang}^{\prime}(q)\sin_{ang}(q) \right|, \tag{9}\]
where the prime indicates differentiation with respect to the argument. As the surface \(\mathcal{S}_{ang}\), used to define the pseudo-angle, is only required to be piecewise smooth, the functions \(\cos_{ang}\) and \(\sin_{ang}\) may not be differentiable everywhere, but we can divide \(\mathcal{S}_{ang}\) into a finite number of disjoint segments where the derivatives exist. Finding explicit expressions for the pseudo-trigonometric functions and their derivatives is not always easy. The following proposition considers the case for \(L^{p}\) norms. [Proofs of results stated in the text are provided in Appendix A.]

Figure 4: Left: Unit circles for the \(L^{p}\) norm for various values of \(p\). When \(p<1\), \(\|\cdot\|_{p}\) is not a norm, but is a gauge function. Right: Corresponding pseudo-trigonometric functions.
**Proposition 3.1**.: _Let \(\cos_{p}\) and \(\sin_{p}\) be pseudo-trigonometric functions for the \(L^{p}\) norm. Then for \(q\in\mathbb{R}\),_
\[J_{p}(q)\coloneqq\left|\cos_{p}(q)\sin_{p}^{\prime}(q)-\cos_{p}^{\prime}(q) \sin_{p}(q)\right|=\frac{\mathcal{C}_{p}}{4}\left(\cos_{p}^{2(p-1)}(q)+\sin_{p }^{2(p-1)}(q)\right)^{-1/2}. \tag{10}\]
_When \(p=1\), 2 or \(\infty\), \(J_{p}(q)\) is a constant and we have_
\[J_{1}(q)=1,\qquad J_{2}(q)=\frac{\pi}{2},\qquad J_{\infty}(q)=2,\qquad q\in \mathbb{R}.\]
_For \(p\in(1,2)\cup(2,\infty)\), \(J_{p}(q)\) is not constant._
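The constant Jacobian factors for \(p=1\) and \(p=2\) are easy to verify numerically using the explicit pseudo-trigonometric functions from Example 1. The following sketch (our own illustrative Python, not code from the paper) approximates \(\cos_{p}(q)\sin_{p}^{\prime}(q)-\cos_{p}^{\prime}(q)\sin_{p}(q)\) by central differences.

```python
import numpy as np

def jacobian_factor(cos_fn, sin_fn, q, h=1e-6):
    """|cos_p(q) sin_p'(q) - cos_p'(q) sin_p(q)| via central differences."""
    dcos = (cos_fn(q + h) - cos_fn(q - h)) / (2.0 * h)
    dsin = (sin_fn(q + h) - sin_fn(q - h)) / (2.0 * h)
    return np.abs(cos_fn(q) * dsin - dcos * sin_fn(q))

# p = 2: cos_2(q) = cos(pi q / 2), sin_2(q) = sin(pi q / 2).
cos2 = lambda q: np.cos(np.pi * q / 2.0)
sin2 = lambda q: np.sin(np.pi * q / 2.0)

# p = 1: cos_1(q) = 1 - |q| on [-2, 2], sin_1(q) = cos_1(q - 1).
wrap = lambda q: np.mod(q + 2.0, 4.0) - 2.0
cos1 = lambda q: 1.0 - np.abs(wrap(q))
sin1 = lambda q: cos1(q - 1.0)

q = np.array([-1.6, -0.7, 0.3, 0.9, 1.8])   # avoid kinks of cos_1 / sin_1
print(jacobian_factor(cos1, sin1, q))       # ~1 everywhere (J_1 = 1)
print(jacobian_factor(cos2, sin2, q))       # ~pi/2 everywhere (J_2 = pi/2)
```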
Proposition 3.1 shows that using pseudo-angles defined in terms of the \(L^{p}\) norm for \(p=1,2,\infty\) results in a simple transformation of density functions from Cartesian to polar coordinates. For other values of \(p\), the Jacobian of the transformation is non-trivial to calculate, so we will restrict our attention to the cases \(p=1,2,\infty\). However, since the \(L^{\infty}\) unit circle is a scaled and rotated copy of the \(L^{1}\) unit circle (see Figure 4), pseudo-angles defined in terms of the \(L^{\infty}\) norm correspond to a phase-lagged version of pseudo-angles defined in terms of the \(L^{1}\) norm, so we will not consider these pseudo-angles further. For the case of the \(L^{2}\) norm, it is more natural to use the usual Euclidean angle. As above, suppose that random vector \((X,Y)\) has probability density function \(f_{X,Y}(x,y)\), and define the radial variable \(R=\mathcal{R}_{*}(X,Y)\), for some gauge function \(\mathcal{R}_{*}\), and angular variables \(\Theta=\text{atan}2(X,Y)\) and \(Q_{1}=\mathcal{A}_{1}^{(-2,2]}((X,Y)/\|(X,Y)\|_{1})\). Then we have
\[f_{R,Q_{1}}(r,q) =\frac{r}{(\mathcal{R}_{*}(\cos_{1}(q),\sin_{1}(q)))^{2}}f_{X,Y} \left(r\frac{(\cos_{1}(q),\sin_{1}(q))}{\mathcal{R}_{*}(\cos_{1}(q),\sin_{1}( q))}\right), \tag{11}\] \[f_{R,\Theta}(r,\theta) =\frac{r}{(\mathcal{R}_{*}(\cos(\theta),\sin(\theta)))^{2}}f_{X,Y }\left(r\frac{(\cos(\theta),\sin(\theta))}{\mathcal{R}_{*}(\cos(\theta),\sin( \theta))}\right). \tag{12}\]
The variable \(\Theta\) is used here instead of \(Q_{2}=\mathcal{A}_{2}^{(-2,2]}((X,Y)/\|(X,Y)\|_{2})\), for which the joint density of \((R,Q_{2})\) differs by a factor of \(\pi/2\) from the joint density of \((R,\Theta)\). In both cases, the calculation of the angular-radial density from the Cartesian density is straightforward. For the remainder of this work, we will work with either Euclidean angles or \(L^{1}\) pseudo-angles. When working in \(L^{1}\) coordinates, we will denote the angular variable \(Q\), dropping the subscript. The next example illustrates that the most convenient coordinate system to use depends on the application.
**Example 2** (Independence on normal and Laplace margins).: Suppose that \((X,Y)\) is a random vector on \(\mathbb{R}^{2}\) and define angular-radial random vectors
\[(R_{1},Q) =\left(\|(X,Y)\|_{1},\mathcal{A}_{1}^{(-2,2]}\left(\frac{(X,Y)}{ \|(X,Y)\|_{1}}\right)\right),\] \[(R_{2},\Theta) =\left(\|(X,Y)\|_{2},\text{atan}2(X,Y)\right).\]
First, suppose that \(X\) and \(Y\) are independent standard normal variables. The corresponding joint density functions are
\[f_{X,Y}(x,y) =\frac{1}{2\pi}\exp\left(-\tfrac{1}{2}(x^{2}+y^{2})\right),\] \[f_{R_{1},Q}(r,q) =\frac{r}{2\pi}\exp\left(-\frac{r^{2}}{2}(\cos_{1}^{2}(q)+\sin_{1} ^{2}(q))\right),\] \[f_{R_{2},\Theta}(r,\theta) =\frac{r}{2\pi}\exp\left(-\frac{r^{2}}{2}\right).\]
So \(f_{R_{1},Q}\) has angular dependence, whereas \(f_{R_{2},\Theta}\) is independent of \(\Theta\). Now suppose that \(X\) and \(Y\) are independent standard Laplace variables. In this case, the corresponding joint densities are
\[f_{X,Y}(x,y) =\tfrac{1}{4}\exp(-(|x|+|y|)),\] \[f_{R_{1},Q}(r,q) =\frac{r}{4}\exp(-r),\] \[f_{R_{2},\Theta}(r,\theta) =\frac{r}{4}\exp(-r(|\cos(\theta)|+|\sin(\theta)|)).\]
In this case \(f_{R_{2},\Theta}\) has angular dependence, whereas \(f_{R_{1},Q}\) does not. \(\blacksquare\)
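A direct numerical check of Example 2 is straightforward: using (11) with \(\mathcal{R}_{*}=\|\cdot\|_{1}\) (so that \(\mathcal{R}_{*}(\cos_{1}(q),\sin_{1}(q))=1\)), the \(L^{1}\) angular-radial density is \(r\,f_{X,Y}(r\cos_{1}(q),r\sin_{1}(q))\). The sketch below (our own Python code) shows that this is constant in \(q\) for independent Laplace margins, but not for independent normal margins.

```python
import numpy as np

wrap = lambda q: np.mod(q + 2.0, 4.0) - 2.0
cos1 = lambda q: 1.0 - np.abs(wrap(q))
sin1 = lambda q: cos1(q - 1.0)

def f_laplace(x, y):
    return 0.25 * np.exp(-(np.abs(x) + np.abs(y)))

def f_normal(x, y):
    return np.exp(-0.5 * (x ** 2 + y ** 2)) / (2.0 * np.pi)

r = 3.0
q = np.linspace(-1.9, 1.9, 5)
print(r * f_laplace(r * cos1(q), r * sin1(q)))  # constant in q: (r/4) e^{-r}
print(r * f_normal(r * cos1(q), r * sin1(q)))   # varies with q
```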
In the cases considered in Example 2, the joint densities in Cartesian coordinates are given in terms of a norm of the coordinates, so using that particular norm to define the angular-radial coordinates is a natural choice, and leads to a simple representation. The next example illustrates this for the case of elliptical distributions.
**Example 3** (SPAR models for elliptical distributions).: Suppose that random vector \((X,Y)\) follows an elliptical distribution, and that both \(X\) and \(Y\) have zero median, unit variance and Pearson correlation coefficient \(\rho\in(-1,1)\). Then the joint density function can be written \(f_{X,Y}(x,y)=f_{0}(\|(x,y)\|_{e,\rho}^{2})\), where \(f_{0}\) is a generator function, and \(\|\cdot\|_{e,\rho}\) is the elliptical norm given by (see e.g. [24])
\[\|(x,y)\|_{e,\rho}=\left(\frac{x^{2}+y^{2}-2\rho xy}{1-\rho^{2}}\right)^{1/2}.\]
For example, the bivariate normal distribution is given by setting \(f_{0}(z)=\gamma\exp(-z/2)\), where \(\gamma=1/(2\pi\sqrt{1-\rho^{2}})\). Similarly, the bivariate t distribution with \(\nu\) degrees of freedom, is given by setting \(f_{0}(z)=\gamma\left(1+z/\nu\right)^{-1-\frac{\nu}{2}}\). Define the random variables \(R_{e}=\|(X,Y)\|_{e,\rho}\), \(\Theta=\text{atan}2(X,Y)\). From (12), the random vector \((R_{e},\Theta)\) has joint density
\[f_{R_{e},\Theta}(r,\theta)=\frac{r}{\|(\cos(\theta),\sin(\theta))\|_{e,\rho}^{ 2}}f_{X,Y}\left(r\frac{(\cos(\theta),\sin(\theta))}{\|(\cos(\theta),\sin( \theta))\|_{e,\rho}}\right)=\alpha^{2}(\theta)rf_{0}(r^{2}),\]
where
\[\alpha(\theta)=\|(\cos(\theta),\sin(\theta))\|_{e,\rho}^{-1}=\left(\frac{1- \rho^{2}}{1-\rho\sin(2\theta)}\right)^{1/2}.\]
Under this transformation, the density factorises into independent radial and angular components, where the angular component is invariant to the type of elliptical distribution. This choice of radial variable was proposed by Wadsworth _et al._[14] in their Example 3. Provided that \(rf_{0}(r^{2})\) is in the domain of attraction of an extreme value distribution and \(\int_{0}^{\infty}rf_{0}(r^{2})dr\) is finite, \((X,Y)\) has a SPAR representation for \(\theta\in(-\pi,\pi]\).
If the generator function has antiderivative \(F_{0}\) (i.e. \(F_{0}^{\prime}=f_{0}\)), then \(\partial F_{0}(r^{2})/\partial r=2rf_{0}(r^{2})\). We can then write explicit expressions for the angular density and conditional radial distribution in terms of \(F_{0}\),
\[f_{\Theta}(\theta)=\int_{0}^{\infty}f_{R,\Theta}(r,\theta)\,dr=\alpha^{2}(\theta)\int_{0}^{\infty}rf_{0}(r^{2})\,dr=\frac{\alpha^{2}(\theta)}{2}\left[F_{0}(r^{2})\right]_{r=0}^{r=\infty},\] \[\bar{F}_{R|\Theta}(r|\theta)=\frac{\int_{r}^{\infty}f_{R,\Theta}(s,\theta)\,ds}{f_{\Theta}(\theta)}=\frac{[F_{0}(s^{2})]_{s=r}^{s=\infty}}{[F_{0}(s^{2})]_{s=0}^{s=\infty}}.\]
For a bivariate normal distribution, \(F_{0}(z)=-2\gamma\exp(-z/2)\), and we have
\[f_{\Theta}(\theta) =\frac{\sqrt{1-\rho^{2}}}{2\pi(1-\rho\sin(2\theta))},\] \[\bar{F}_{R|\Theta}(r|\theta) =\exp\left(-\frac{r^{2}}{2}\right).\]
So the angular density is continuous and finite, satisfying assumption (A3). In this case, the conditional radial distribution is Rayleigh, with unit scale parameter. This satisfies assumptions (A1) and (A2) with \(\xi(\theta)=0\) and \(\sigma(\mu,\theta)=1/\mu\). However, the convergence to the asymptotic form is slow. This is directly analogous to the univariate case, in the sense that a normal distribution is in the domain of attraction of a Gumbel distribution, but converges slowly. For a bivariate t distribution, \(F_{0}(z)=-2\gamma\left(1+z/\nu\right)^{-\frac{\nu}{2}}\), and the angular density is identical to that for the bivariate normal distribution. The survivor function of the conditional radial distribution is
\[\bar{F}_{R|\Theta}(r|\theta)=\left(1+\frac{r^{2}}{\nu}\right)^{-\nu/2}.\]
The quantile \(\mu\) at exceedance probability \(\zeta\) is therefore \(\mu^{2}=\nu(\zeta^{-2/\nu}-1)\). Moreover, for all \(r>0\) we have
\[\frac{\bar{F}_{R|\Theta}(\mu+r|\theta)}{\bar{F}_{R|\Theta}(\mu|\theta)}=\left( \frac{1+\frac{(\mu+r)^{2}}{\nu}}{1+\frac{\mu^{2}}{\nu}}\right)^{-\nu/2}\sim \left(1+\frac{r}{\mu}\right)^{-\nu}=\bar{F}_{GP}(r;1/\nu,\mu/\nu),\quad\mu \rightarrow\infty. \tag{13}\]
[Throughout this paper we use the notation \(f(x)\sim g(x)\) as \(x\to\infty\) to mean \(\lim_{x\to\infty}f(x)/g(x)=1\).] Hence assumptions (A1) and (A2) are satisfied with \(\xi(\theta)=1/\nu\) and \(\sigma(\mu,\theta)=\mu/\nu\). An example of a SPAR approximation to a bivariate t distribution with \(\nu=2\) and \(\rho=0.6\) is shown in Figure 5. In this example, the threshold exceedance probability has been set at \(\zeta=0.05\). The isodensity contours of the SPAR approximation closely follow those of the true distribution. The angular density is also shown. As it is smoothly-varying and finite, it is readily amenable to non-parametric estimation.
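The two key claims in this example are easily checked numerically: the angular density integrates to one, and the conditional radial survivor function is closely approximated by the GP tail in (13) once \(\zeta\) is small. The following sketch (our own Python code, using scipy's generalised Pareto survivor function; it is not the implementation used to produce Figure 5) illustrates this for \(\rho=0.6\) and \(\nu=2\).

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import genpareto

rho, nu = 0.6, 2.0

# Angular density f_Theta integrates to one over (-pi, pi].
f_theta = lambda t: np.sqrt(1.0 - rho ** 2) / (2.0 * np.pi * (1.0 - rho * np.sin(2.0 * t)))
print(quad(f_theta, -np.pi, np.pi)[0])            # ~1

# Conditional radial survivor function and its GP approximation (13).
surv = lambda r: (1.0 + r ** 2 / nu) ** (-nu / 2.0)
zeta = 0.05
mu = np.sqrt(nu * (zeta ** (-2.0 / nu) - 1.0))    # quantile at exceedance prob zeta
r = np.linspace(0.0, 20.0, 5)
exact = surv(mu + r) / surv(mu)
gp = genpareto.sf(r, c=1.0 / nu, scale=mu / nu)   # xi = 1/nu, sigma = mu/nu
print(np.max(np.abs(exact - gp)))                 # small; shrinks as zeta -> 0
```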
Example 3 illustrates that for elliptical distributions, a judicious choice of coordinate system can lead to SPAR representations in which the angular and radial components are independent. However, the coordinate system used here is non-standard and may be difficult to estimate in practice. Example 2 showed that changing the angular-radial coordinate system leads to a change in the angular dependence. In the remainder of the section, we consider the effect of changing the gauge functions used to define angular and radial variables.
**Lemma 3.2** (Change of radial variable).: _For random vector \((X,Y)\) define radial variables \(R_{a}=\mathcal{R}_{a}(X,Y)\) and \(R_{b}=\mathcal{R}_{b}(X,Y)\), for some gauge functions \(\mathcal{R}_{a}\) and \(\mathcal{R}_{b}\). Define the angular variable \(Q=\mathcal{A}_{*}^{(-2,2]}((X,Y)/\mathcal{R}_{*}(X,Y))\), for arbitrary gauge function \(\mathcal{R}_{*}\). Suppose that \((R_{a},Q)\) has joint density \(f_{R_{a},Q}\), then \((R_{b},Q)\) has joint density \(f_{R_{b},Q}\), with_
\[f_{R_{b},Q}(r_{b},q)=\frac{\mathcal{R}_{a}(\mathbf{w})}{\mathcal{R}_{b}( \mathbf{w})}f_{R_{a},Q}\left(r_{b}\frac{\mathcal{R}_{a}(\mathbf{w})}{\mathcal{ R}_{b}(\mathbf{w})},q\right),\]
_where \(\mathbf{w}=(\cos_{*}(q),\sin_{*}(q))\)._
When the star-shaped sets used to define the gauge functions have a continuous boundary, then the Jacobian of the transformation, \(\mathcal{R}_{a}(\mathbf{w})/\mathcal{R}_{b}(\mathbf{w})\), is also continuous. In this case, if \((R_{a},Q)\) has a SPAR representation, then \((R_{b},Q)\) will also have a SPAR representation. This is stated in the following corollary.
**Corollary 3.2.1**.: _In addition to the assumptions of Lemma 3.2, suppose that gauge functions \(\mathcal{R}_{a}\) and \(\mathcal{R}_{b}\) are defined in terms of star-shaped sets with continuous boundaries. Suppose that \(f_{R_{a},Q}\) satisfies assumptions (A1)-(A3) for \(q\in I\subseteq(-2,2]\) with angular density \(f_{Q}\) and GP parameter functions \(\xi(q)\), \(\sigma_{a}(\mu,q)\). Then \(f_{R_{b},Q}\) satisfies SPAR assumptions (A1)-(A3) for \(q\in I\subseteq(-2,2]\) with the same angular density and GP parameter functions \(\xi(q)\), \(\sigma_{b}(\mu,q)\). Define the conditional quantile functions \(\mu_{a}(\zeta,q)=\bar{F}_{R_{a}|Q}^{-1}(\zeta|q)\) and \(\mu_{b}(\zeta,q)=\bar{F}_{R_{b}|Q}^{-1}(\zeta|q)\) for \(\zeta\in[0,1]\). Then the conditional quantile functions and GP scale functions are related by_
\[\mu_{b}(\zeta,q) =\frac{\mathcal{R}_{b}(\mathbf{w})}{\mathcal{R}_{a}(\mathbf{w})} \mu_{a}(\zeta,q)\] \[\sigma_{b}(\mu_{b}(\zeta,q),q) =\frac{\mathcal{R}_{b}(\mathbf{w})}{\mathcal{R}_{a}(\mathbf{w})} \sigma_{a}\left(\mu_{a}(\zeta,q),q\right).\]
These results can be applied to the case of elliptical distributions, to give SPAR representations in terms of standard polar coordinates.
Figure 5: SPAR representation for bivariate t distribution with \(\rho=0.6\) and \(\nu=2\). Left: Angular density function. Right: Isodensity contours of joint pdf at equal logarithmic increments from \(10^{-1}\) to \(10^{-7}\). Coloured lines are exact values, and dashed lines are density from SPAR model with threshold exceedance probability \(\zeta=0.05\).
**Example 4** (Elliptical distributions in standard polar coordinates).: Continuing from Example 3, if we work in standard polar coordinates and define \(R=\|(X,Y)\|_{2}\), then from Lemma 3.2 we have
\[f_{R,\Theta}(r,\theta)=\frac{\|(\cos(\theta),\sin(\theta))\|_{e,\rho}}{\|(\cos(\theta),\sin(\theta))\|_{2}}f_{R_{e},\Theta}\left(r\frac{\|(\cos(\theta),\sin(\theta))\|_{e,\rho}}{\|(\cos(\theta),\sin(\theta))\|_{2}},\theta\right)=rf_{0}\left(\left(\frac{r}{\alpha(\theta)}\right)^{2}\right).\]
This agrees with the density obtained using (12) directly. As there is no change in the angular variable from the previous example, the angular density is unchanged in this coordinate system. From Corollary 3.2.1, whenever the generator \(f_{0}\) is such that the SPAR assumptions are satisfied in the elliptic coordinate system (see Example 3), the SPAR assumptions will also be satisfied in the standard polar coordinate system. In these cases, the conditional radial quantiles and GP scale parameter function will be scaled by \(\|(\cos(\theta),\sin(\theta))\|_{e,\rho}^{-1}=\alpha(\theta)\), when expressed in standard polar coordinates.
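As a quick numerical confirmation of this example (our own Python sketch; the generator and helper names are ours), the expression \(rf_{0}((r/\alpha(\theta))^{2})\) obtained via Lemma 3.2 can be compared with the direct transformation (12) of the bivariate normal density.

```python
import numpy as np

rho = 0.6
gamma = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho ** 2))
f0 = lambda z: gamma * np.exp(-z / 2.0)                       # normal generator
alpha = lambda t: np.sqrt((1.0 - rho ** 2) / (1.0 - rho * np.sin(2.0 * t)))

def f_xy(x, y):                                               # bivariate normal pdf
    return gamma * np.exp(-(x ** 2 + y ** 2 - 2.0 * rho * x * y) / (2.0 * (1.0 - rho ** 2)))

r, theta = 2.5, np.linspace(-np.pi, np.pi, 7)
via_lemma = r * f0((r / alpha(theta)) ** 2)                   # Example 4 expression
via_direct = r * f_xy(r * np.cos(theta), r * np.sin(theta))   # formula (12)
print(np.allclose(via_lemma, via_direct))                     # True
```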
Examples 2 and 3 showed that, in some cases, coordinates could be chosen for which angular and radial components are independent. The question of when the SPAR assumptions can be satisfied with a GP approximation to the tail of the radial variable that is independent of angle can be addressed using Corollary 3.2.1. This is stated in the following theorem.
**Theorem 3.3** (Coordinate systems for asymptotically independent angular and radial components).: _Suppose that \((X,Y)\) is a random vector on \(\mathbb{R}^{2}\). Then the following statements are equivalent._
1. _There exists a map_ \(\tilde{\mathcal{T}}^{*}\) _to a polar coordinate system in which_ \((S,Q)=\tilde{\mathcal{T}}^{*}(X,Y)\) _satisfies SPAR assumptions (A1)-(A3) for all_ \(q\in I\subseteq(-2,2]\)_, with_ \(I\) _an interval, and GP parameter functions that are independent of angle._
2. _There exists a map_ \(\tilde{\mathcal{T}}\) _to a polar coordinate system with the same angle function as_ \(\tilde{\mathcal{T}}^{*}\)_, for which the variables_ \((R,Q)=\tilde{\mathcal{T}}(X,Y)\) _satisfy SPAR assumptions (A1)-(A3) for all_ \(q\in I\)_, with constant GP shape parameter and scale parameter function_ \(\sigma(\mu,q)\sim\alpha(q)h(\mu)\) _as_ \(\mu\to r_{F}(q)\)_, where_ \(r_{F}(q)\) _is the upper end point of_ \(R|Q\)_, for some continuous functions_ \(\alpha(q)>0\) _and_ \(h(\mu)>0\)_._
_When (ii) is satisfied, the map \(\tilde{\mathcal{T}}^{*}\) can be defined in terms of a radial gauge function \(\mathcal{R}_{\alpha}(x,y)=r/\alpha(q)\), where \((r,q)=\tilde{\mathcal{T}}(x,y)\)._
Referring back to Example 4, we can see immediately that for elliptical distributions \(\alpha(\theta)=\|(\cos(\theta),\sin(\theta))\|_{e,\rho}^{-1}\), and hence \(\mathcal{R}_{\alpha}(x,y)=\|(x,y)\|_{e,\rho}\). Example 3 showed that using this gauge function to define the radial variable does indeed lead to independent angular and radial variables. Theorem 3.3 provides necessary and sufficient conditions to satisfy the model assumptions of Wadsworth _et al._[14]. However, the coordinate systems required to satisfy the model assumptions do not always exist, and can be non-standard when they do exist. This will be illustrated in Sections 5.2-5.4, which present examples of the functions \(\sigma(\mu,q)\) on various margins. The requirement for AI angular and radial variables in the modelling approach proposed in [14] limits the types of distribution which can be represented using this approach. In contrast, the SPAR approach relaxes this restriction, enabling a representation for a wider range of distributions.
To conclude this section, we consider the effect of changing the angular variable.
**Lemma 3.4** (Change of angular variable).: _For random vector \((X,Y)\) define the radial variable \(R=\mathcal{R}_{*}(X,Y)\), for some gauge function \(\mathcal{R}_{*}\), and angular variables_
\[Q_{a}=\mathcal{A}_{a}^{(-2,2]}\left(\frac{(X,Y)}{\mathcal{R}_{a}(X,Y)}\right), \qquad Q_{b}=\mathcal{A}_{b}^{(-2,2]}\left(\frac{(X,Y)}{\mathcal{R}_{b}(X,Y)} \right),\]
_for gauge functions \(\mathcal{R}_{a}\) and \(\mathcal{R}_{b}\). Suppose that \((R,Q_{a})\) has joint density \(f_{R,Q_{a}}\); then \((R,Q_{b})\) has joint density \(f_{R,Q_{b}}\), given by_
\[f_{R,Q_{b}}(r,q)=\frac{dq_{a,b}(q)}{dq}f_{R,Q_{a}}\left(r,q_{a,b}(q)\right), \qquad q_{a,b}(q)=\mathcal{A}_{a}^{(-2,2]}\left(\frac{(\cos_{b}(q),\sin_{b}(q)) }{\mathcal{R}_{a}(\cos_{b}(q),\sin_{b}(q))}\right).\]
_In the case that \(\mathcal{R}_{a}=\|\cdot\|_{1}\) and \(\mathcal{R}_{b}=\|\cdot\|_{2}\) we have_
\[q_{1,2}(q) =\varepsilon(\sin_{2}(q))\left(1-\frac{\cos_{2}(q)}{\|(\cos_{2}(q ),\sin_{2}(q))\|_{1}}\right), \frac{dq_{1,2}(q)}{dq} =\frac{\pi}{2}\|(\cos_{2}(q),\sin_{2}(q))\|_{1}^{-2}\] \[q_{2,1}(q) =\frac{2}{\pi}\mathrm{atan2}(\cos_{1}(q),\sin_{1}(q)), \frac{dq_{2,1}(q)}{dq} =\frac{2}{\pi}\|(\cos_{1}(q),\sin_{1}(q))\|_{2}^{-2},\]
_where \(\varepsilon\) is the generalised sign function, defined in Example 1._
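The explicit maps \(q_{1,2}\) and \(q_{2,1}\) and their Jacobians can be checked numerically, as in the sketch below (our own Python code; note that the paper writes \(\text{atan2}(x,y)\) for the angle of the point \((x,y)\), whereas numpy's `arctan2` takes the \(y\)-coordinate first).

```python
import numpy as np

wrap = lambda q: np.mod(q + 2.0, 4.0) - 2.0
cos1 = lambda q: 1.0 - np.abs(wrap(q))
sin1 = lambda q: cos1(q - 1.0)
cos2 = lambda q: np.cos(np.pi * q / 2.0)
sin2 = lambda q: np.sin(np.pi * q / 2.0)
eps = lambda v: np.where(v >= 0.0, 1.0, -1.0)

q12 = lambda q: eps(sin2(q)) * (1.0 - cos2(q) / (np.abs(cos2(q)) + np.abs(sin2(q))))
q21 = lambda q: (2.0 / np.pi) * np.arctan2(sin1(q), cos1(q))

q = np.array([-1.6, -0.7, 0.3, 0.9, 1.8])   # stay away from axis directions
h = 1e-6

# Derivative of q_{1,2} vs the stated Jacobian (pi/2) ||(cos_2, sin_2)||_1^{-2}.
num = (q12(q + h) - q12(q - h)) / (2.0 * h)
stated = (np.pi / 2.0) / (np.abs(cos2(q)) + np.abs(sin2(q))) ** 2
print(np.allclose(num, stated))

# Derivative of q_{2,1} vs (2/pi) ||(cos_1, sin_1)||_2^{-2}.
num = (q21(q + h) - q21(q - h)) / (2.0 * h)
stated = (2.0 / np.pi) / (cos1(q) ** 2 + sin1(q) ** 2)
print(np.allclose(num, stated))

# The two maps are inverse to each other.
print(np.allclose(q12(q21(q)), q))
```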
Lemma 3.4 considers the Jacobians involved when changing between two coordinate systems defined in terms of pseudo-angles. Since Euclidean angles differ from \(L^{2}\) pseudo-angles by a factor of \(\pi/2\), if we change between coordinates defined in terms of \(L^{1}\) pseudo-angles and Euclidean angles, then the factors of \(\pi/2\) in the Jacobians in Lemma 3.4 vanish. The Jacobians of the transformation between Euclidean and \(L^{1}\) angles are continuous. However, although the pseudo-angle is a continuous function of the Cartesian coordinates, the Jacobian \(dq_{a,b}(q)/dq\) is not guaranteed to be continuous. This is illustrated in the following example.
**Example 5**.: Suppose that \(\mathcal{S}_{a}\) is the curve illustrated in Figure 6, and define corresponding gauge and angle functions \(\mathcal{R}_{a}\) and \(\mathcal{A}_{a}\). The circumference of \(\mathcal{S}_{a}\) is \(\mathcal{C}_{a}=2(1+\sqrt{2})\). For \((w_{1},w_{2})\in\mathcal{S}_{a}\cap[0,1]^{2}\) we have
\[\mathcal{A}_{a}^{(-2,2]}(w_{1},w_{2})=\frac{1}{1+\sqrt{2}}\begin{cases}2w_{2},&w_{2}<1/2,\\ 1+\sqrt{2}(2w_{2}-1),&w_{2}\geq 1/2.\end{cases}\]
The \(L^{1}\) angle corresponding to \(\mathcal{S}_{a}\) angle \(q\) is therefore
\[q_{1,a}(q)=\mathcal{A}_{1}\left(\frac{(\cos_{a}(q),\sin_{a}(q))}{\cos_{a}(q)+ \sin_{a}(q)}\right)=\begin{cases}\frac{(1+\sqrt{2})q}{1+(1+\sqrt{2})q},&q\in[ 0,(1+\sqrt{2})^{-1}],\\ \frac{(1+\sqrt{2})q-1+\sqrt{2}}{2\sqrt{2}},&q\in((1+\sqrt{2})^{-1},1].\end{cases}\]
Consider the case of independent standard exponential variables, \((X,Y)\). In \(L^{1}\) polar coordinates \((R_{1},Q_{1})=(X+Y,\,Y/(X+Y))\), the joint density is \(f_{R_{1},Q_{1}}(r,q)=r\exp(-r)\) for \((r,q)\in[0,\infty)\times[0,1]\), and the angular density is \(f_{Q_{1}}(q)=1\) for \(q\in[0,1]\). If we define \(Q_{a}=\mathcal{A}_{a}^{(-2,2]}((X,Y)/\mathcal{R}_{a}(X,Y))\) then from Lemma 3.4, the joint density of \((R_{1},Q_{a})\) is \(f_{R_{1},Q_{a}}(r,q)=(dq_{1,a}(q)/dq)r\exp(-r)\) for \((r,q)\in[0,\infty)\times[0,1]\), and the corresponding angular density is \(f_{Q_{a}}(q)=(dq_{1,a}(q)/dq)\) for \(q\in[0,1]\). Since \(dq_{1,a}(q)/dq\) is discontinuous in \(q\), so is \(f_{Q_{a}}(q)\).
Therefore, to ensure that a change of angle does not affect whether the SPAR assumptions are satisfied, we need to specify that the Jacobian of the transformation is also continuous. This is stated in the following corollary to Lemma 3.4.
**Corollary 3.4.1**.: _Under the assumptions of Lemma 3.4, suppose that the joint density \(f_{R,Q_{a}}\) satisfies SPAR assumptions (A1)-(A3) for all \(q\in[s,t]\subseteq(-2,2]\), with parameter functions \(\xi_{a}(q)\) and \(\sigma_{a}(\mu,q)\) and angular density \(f_{Q_{a}}(q)\). If the Jacobian \(J(q)=dq_{a,b}(q)/dq\) is continuous for \(q\in[q_{b,a}(s),q_{b,a}(t)]\), then the joint density \(f_{R,Q_{b}}\) satisfies assumptions (A1)-(A3) for \(q\in[q_{b,a}(s),q_{b,a}(t)]\subseteq(-2,2]\) with parameter functions \(\xi_{b}(q)=\xi_{a}(q_{a,b}(q))\), \(\sigma_{b}(\mu,q)=\sigma_{a}(\mu,q_{a,b}(q))\) and angular density_
\[f_{Q_{b}}(q)=\frac{dq_{a,b}(q)}{dq}f_{Q_{a}}\left(q_{a,b}(q)\right).\]
Figure 6: Left: Curve \(\mathcal{S}_{a}\) used to define angular variable \(Q_{a}\), and unit circle for \(L^{1}\) norm, \(\mathcal{U}_{1}\), used to define angular variable \(Q_{1}\). Right: Angular density of independent standard exponential variables in terms of angular variables \(Q_{1}\) and \(Q_{a}\). Note that \(f_{Q_{1}}(q)\) is continuous, whereas \(f_{Q_{a}}(q)\) is not.
### 3.2 Multivariate density functions
For the general multivariate case, we start by considering the situation where both radii and angles are defined in terms of the same gauge function, \(\mathcal{R}_{*}\), with corresponding piecewise-smooth surface \(\mathcal{S}_{*}\subset\mathbb{R}^{d}\). As will become apparent below, piecewise-smoothness is required so that partial gradients of the surface can be calculated. Suppose that \(\mathbf{w}=(w_{1},...,w_{d})\in\mathcal{S}_{*}\), and that \(\mathcal{S}_{*}\) can be partitioned into a finite number of pairwise disjoint segments \(\mathcal{S}_{*,j}\), such that \(\mathcal{S}_{*}=\bigcup_{j}\mathcal{S}_{*,j}\), and in each segment \(\mathcal{S}_{*,j}\) is smooth and \(w_{d}\) is uniquely determined by the values \(w_{1},...,w_{d-1}\). In this case, we can define a function \(g_{j}\) for each segment such that \(w_{d}=g_{j}(w_{1},...,w_{d-1})\). For example, consider the case of the \(L^{2}\) unit circle in \(\mathbb{R}^{2}\). In this case, we would partition the circle in segments in the upper and lower halves of the plane. Then for \((w_{1},w_{2})\in\mathcal{U}_{2}\), we can write \(w_{2}=\sqrt{1-w_{1}^{2}}\) in the upper half of the plane and \(w_{2}=-\sqrt{1-w_{1}^{2}}\) in the lower half. Obviously, the general case will be more complicated, since the surface \(\mathcal{S}_{*}\) will not necessarily have any symmetries. For \(\mathbf{w}\in\mathcal{S}_{*,j}\), the absolute value of the Jacobian of the transformation from generalised polar to Cartesian coordinates can then be written [21, Lemma 1]
\[|\text{det}(J(r,\mathbf{w}))|=r^{d-1}\left|g_{j}(w_{1},...,w_{d-1})-\sum_{i=1 }^{d-1}w_{i}\frac{\partial g_{j}(w_{1},...,w_{d-1})}{\partial w_{i}}\right|.\]
In the case where both radii and angles are defined in terms of the \(L^{1}\) norm, we can partition the unit sphere \(\mathcal{U}_{1}\) into the sections falling in each orthant of \(\mathbb{R}^{d}\). In each orthant \(g_{j}(w_{1},...,w_{d-1})=\pm\left(1-\sum_{i=1}^{d-1}|w_{i}|\right)\), where the sign is dependent on the orthant. It is then straightforward to confirm that \(|\text{det}(J(r,\mathbf{w}))|=r^{d-1}\) in any orthant. Therefore, for random vector \(\mathbf{X}\in\mathbb{R}^{d}\), with density \(f_{\mathbf{X}}\), if we define \(R=\|\mathbf{X}\|_{1}\) and \(\mathbf{W}=\mathbf{X}/R\), then \((R,\mathbf{W})\) has joint density
\[f_{R,\mathbf{W}}(r,\mathbf{w})=r^{d-1}f_{\mathbf{X}}(r\mathbf{w}). \tag{14}\]
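As a simple check of (14) (our own illustration, not an example from the paper), take \(\mathbf{X}\) with \(d\) independent standard Laplace components, so that \(f_{\mathbf{X}}(\mathbf{x})=2^{-d}\exp(-\|\mathbf{x}\|_{1})\) and \(f_{R,\mathbf{W}}(r,\mathbf{w})=r^{d-1}2^{-d}\exp(-r)\); integrating out the angular components then gives \(R=\|\mathbf{X}\|_{1}\sim\text{Gamma}(d,1)\), which the Python sketch below verifies by simulation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n = 3, 100_000
x = rng.laplace(size=(n, d))      # d independent standard Laplace components
r = np.abs(x).sum(axis=1)         # L^1 radius R = ||X||_1

# Compare the sample of R with the Gamma(d, 1) distribution.
ks = stats.kstest(r, stats.gamma(a=d).cdf)
print(ks.statistic)               # small, consistent with R ~ Gamma(d, 1)
```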
We can then consider how changing the gauge function used for the radius affects the joint density. Suppose that \(R_{*}=\mathcal{R}_{*}(\mathbf{X})\) for some gauge function \(\mathcal{R}_{*}\), then we have
\[\mathbf{X}=R\mathbf{W}=\frac{R_{*}}{\mathcal{R}_{*}(\mathbf{W})}\mathbf{W}.\]
Hence if \((R,\mathbf{W})\) has joint density \(f_{R,\mathbf{W}}(r,\mathbf{w})\) then \((R_{*},\mathbf{W})\) has joint density
\[f_{R_{*},\mathbf{W}}(r,\mathbf{w})=\frac{1}{\mathcal{R}_{*}(\mathbf{w})}f_{R, \mathbf{W}}\left(\frac{r}{\mathcal{R}_{*}(\mathbf{w})},\mathbf{w}\right).\]
An analogous result to Theorem 3.3 can then be derived for the multivariate case, showing that in certain cases we can define generalised polar coordinate systems in which \(R\) and \(\mathbf{W}\) are AI, but that these coordinate systems are dependent on the distribution. Since a change of coordinate system does not affect whether the SPAR assumptions are satisfied, we shall restrict our attention to the case where both radial and angular variables are defined in terms of the \(L^{1}\) norm, so that the angular-radial density has the simple form given in (14).
Working in \(L^{1}\) polar coordinates also has the advantage that we can switch easily between vector and scalar angles in \(\mathbb{R}^{2}\), since for \((w_{1},w_{2})=(\cos_{1}(q),\sin_{1}(q))\) we have \(|dw_{1}/dq|=1\). Therefore, joint angular-radial densities defined in terms of scalar angular variable \(Q\) have unit Jacobian when transforming from those defined in terms of vector angular variable \(\mathbf{W}\). That is, \(f_{R,Q}(r,q)=f_{R,\mathbf{W}}(r,(\cos_{1}(q),\sin_{1}(q)))\). Similarly, for the angular density we have \(f_{Q}(q)=f_{\mathbf{W}}(\cos_{1}(q),\sin_{1}(q))\). The ability to switch between vector and scalar angles does not apply for standard polar coordinates, where for \((w_{1},w_{2})=(\cos(\theta),\sin(\theta))\) we have \(|dw_{1}/d\theta|=|\sin(\theta)|\).
## 4 Angular-radial models for copulas
The joint extremal behaviour of random variables is determined by the asymptotic behaviour of their copula. In order to make general statements about the effect of the choice of margins, it is necessary to start by introducing general assumptions about the asymptotic behaviour of copulas and copula densities. In this section, we consider various angular-radial models for the asymptotic behaviour of copulas and copula densities, including generalisations of the models introduced by Hua and Joe [7] and Wadsworth and Tawn [25]. Although our primary interest in this work is in the density, it is useful to also consider the behaviour of the copula, to highlight some links between various properties of the density and the tail order (1), which is defined in terms of the copula. We will also show that the asymptotic models for the copula considered here are limited in terms of the range of distributions they can distinguish between,
whereas a certain type of asymptotic model for the density can distinguish between a wider range of distributions with various dependence classes.
The models we consider here can be viewed as various types of assumptions about regular variation of the copula and copula density. We start by defining regularly-varying functions and listing some properties (see e.g. [26]).
**Definition 11** (Regularly-varying function).: A measurable function \(f:[0,\infty)\to[0,\infty)\) is regularly-varying at \(r_{0}\) with index \(\alpha\in\mathbb{R}\), denoted \(f\in RV_{\alpha}(r_{0})\), if for any \(x>0\),
\[\lim_{r\to r_{0}}\frac{f(rx)}{f(r)}=x^{\alpha}. \tag{15}\]
A regularly-varying function with index \(\alpha=0\) is said to be slowly-varying at \(r_{0}\). A regularly-varying function with index \(\alpha=\pm\infty\) is said to be rapidly-varying at \(r_{0}\). Any \(f\in RV_{\alpha}(r_{0})\) can be written as \(f(r)=\mathcal{L}(r)r^{\alpha}\), where \(\mathcal{L}\) is slowly-varying at \(r_{0}\). Also, note that \(f(r)\in RV_{\alpha}(\infty)\) if and only if \(f(1/r)\in RV_{\alpha}(0^{+})\).
Previous asymptotic models for copulas and copula densities have considered behaviour of \(C(\mathbf{u})\) and \(c(\mathbf{u})\) for \(\mathbf{u}\to\mathbf{0}_{d}=(0,...,0)\) or \(\mathbf{u}\to\mathbf{1}_{d}=(1,...,1)\), which governs the behaviour of a random vector when all components are small or all components are large. For our purposes, it is useful to generalise this to any corner of the copula. To do this, we start by introducing some notation.
**Definition 12**.: Suppose \(\mathbf{u}_{0}=(u_{0,1},\ldots,u_{0,d})\) where \(u_{0,j}\in\{0,1\}\), \(j=1,...,d\). For \(d\)-dimensional copula \(C\) with corresponding density \(c\) define
\[c_{\mathbf{u}_{0}}(u_{1},\ldots,u_{d})=c(u_{0,1}+(-1)^{u_{0,1}}u _{1},\ldots,u_{0,d}+(-1)^{u_{0,d}}u_{d}),\] \[C_{\mathbf{u}_{0}}(u_{1},\ldots,u_{d})=\int\limits_{0}^{u_{1}} \cdots\int\limits_{0}^{u_{d}}c_{\mathbf{u}_{0}}(s_{1},\ldots,s_{d})\,ds_{1} \cdots ds_{d}.\]
Note that \(C_{\mathbf{u}_{0}}\) and \(c_{\mathbf{u}_{0}}\) are also a copula and copula density, and that, by definition, \(c_{\mathbf{0}_{d}}\equiv c\) and \(C_{\mathbf{0}_{d}}\equiv C\). In words, \(C_{\mathbf{u}_{0}}\) gives the value of copula \(C\) in coordinates relative to the corner \(\mathbf{u}_{0}\), and \(c_{\mathbf{u}_{0}}\) is the corresponding density function. We refer to \(C_{\mathbf{u}_{0}}\) and \(c_{\mathbf{u}_{0}}\) as the copula and copula density of \(C\) with respect to corner \(\mathbf{u}_{0}\).
An equivalent way to define \(C_{\mathbf{u}_{0}}\) and \(c_{\mathbf{u}_{0}}\) is in terms of 'reflections' [27]. Suppose that the random vector with uniform margins \((U_{1},...,U_{d})\) has copula \(C\). For \(j=1,...,d\), define \(\bar{U}_{j}=U_{j}\) when \(u_{0,j}=0\) and \(\bar{U}_{j}=1-U_{j}\) when \(u_{0,j}=1\). Then \((\bar{U}_{1},...,\bar{U}_{d})\) has copula \(C_{\mathbf{u}_{0}}\) and corresponding density \(c_{\mathbf{u}_{0}}\) (when it exists). For bivariate copula \(C\) we have
\[C_{(0,0)}(u,v)=C(u,v)\] \[C_{(1,0)}(u,v)=v-C(1-u,v)\] \[C_{(0,1)}(u,v)=u-C(u,1-v)\] \[C_{(1,1)}(u,v)=u+v-1+C(1-u,1-v).\]
Similar but more complex relations can be derived in higher dimensions.
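The bivariate formulas above are straightforward to implement. The following Python sketch (our own code, using a Clayton copula with \(\theta=2\) as a convenient test case and the standard conditional-simulation formula for that copula) compares the closed-form reflections \(C_{\mathbf{u}_{0}}\) with Monte Carlo estimates obtained by reflecting simulated uniforms.

```python
import numpy as np

theta = 2.0
C = lambda u, v: (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)   # Clayton copula

def C_corner(u, v, corner):
    """Copula of C with respect to corner u0, per the bivariate formulas above."""
    if corner == (0, 0):
        return C(u, v)
    if corner == (1, 0):
        return v - C(1.0 - u, v)
    if corner == (0, 1):
        return u - C(u, 1.0 - v)
    return u + v - 1.0 + C(1.0 - u, 1.0 - v)

# Sample from the Clayton copula by the conditional (inverse Rosenblatt) method.
rng = np.random.default_rng(2)
n = 200_000
U = rng.uniform(size=n)
W = rng.uniform(size=n)
V = (U ** -theta * (W ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)

u0, v0 = 0.3, 0.6
for corner in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    Ubar = 1.0 - U if corner[0] == 1 else U     # reflect components as required
    Vbar = 1.0 - V if corner[1] == 1 else V
    mc = np.mean((Ubar <= u0) & (Vbar <= v0))
    print(corner, round(mc, 3), round(float(C_corner(u0, v0, corner)), 3))
```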
Using this notation, the asymptotic models for copulas and copula densities discussed below, can be stated as models for \(C_{\mathbf{u}_{0}}(\mathbf{u})\) and \(c_{\mathbf{u}_{0}}(\mathbf{u})\) as \(\mathbf{u}\to\mathbf{0}_{d}\) along different paths. Section 4.1 considers models where \(\mathbf{0}_{d}\) is approached at the same rate in each variable, and Section 4.2 considers models where \(\mathbf{0}_{d}\) is approached at a different rate in each variable. These two types of models can be viewed as angular-radial models for the behaviour of a copula or copula density, using different polar coordinate systems. However, in contrast to Sections 2 and 3, the polar coordinate systems are nonlinear, in a sense that will be discussed in Section 4.3. This view of the asymptotic models for copulas is instructive for understanding the relations between the models and the limitations of what can be described by each model. In Section 5, we discuss how the limitations of the models can be related to whether a copula has a SPAR representation on a given margin. Finally, in Section 4.4, we introduce a more general angular-radial description of copula densities, and discuss how it is related to the models defined in Sections 4.1 and 4.2. Examples of the various models discussed in this section are presented in Appendix B. As with other sections, proofs are presented in Appendix A.
### 4.1 Constant scaling order
The models defined by Hua and Joe [7], based on the concepts introduced in [11, 12], assume that the corner of the copula is approached at a constant rate in each variable. In the original presentation, Hua and Joe [7], considered
corners \(\mathbf{u}_{0}=\mathbf{0}_{d}=(0,...,0)\) and \(\mathbf{u}_{0}=\mathbf{1}_{d}=(1,...,1)\), and referred to the indices of regular variation in these corners as the lower and upper tail orders, respectively. The definition below generalises this to any corner of the copula.
**Definition 13** (Tail order).: Suppose that \(C\) is a \(d\)-dimensional copula and for corner \(\mathbf{u}_{0}\) there exists some \(\kappa_{\mathbf{u}_{0}}>0\) and \(\mathcal{L}_{\mathbf{u}_{0}}(t)\in RV_{0}(0^{+})\), such that
\[C_{\mathbf{u}_{0}}(t\mathbf{1}_{d})\sim\mathcal{L}_{\mathbf{u}_{0}}(t)t^{ \kappa_{\mathbf{u}_{0}}},\quad t\to 0^{+}. \tag{16}\]
Then \(\kappa_{\mathbf{u}_{0}}\) is the tail order of \(C\) in corner \(\mathbf{u}_{0}\).
Clearly, \(\mathcal{L}_{\mathbf{u}_{0}}\) and \(\kappa_{\mathbf{u}_{0}}\) are also dependent on the copula \(C\), so we could write \(\mathcal{L}_{\mathbf{u}_{0}}(t;C)\) and \(\kappa_{\mathbf{u}_{0}}(C)\) to make this explicit. However, the copula in question should be clear from the context, so we omit this information to avoid overly-cluttered notation. This convention is also applied to other coefficients and functions of the copula, defined below. Definition 13 states that if copula \(C\) is regularly-varying in corner \(\mathbf{u}_{0}\), when approached on the diagonal, then the index of regular variation is referred to as the tail order. Proposition 0.8(i) in [28] states that if \(f\in RV_{\alpha}(0^{+})\) then \(\lim_{x\to 0^{+}}\log(f(x))/\log(x)=\alpha\). Therefore, if (16) is satisfied,
\[\lim_{t\to 0^{+}}\frac{\log(C_{\mathbf{u}_{0}}(t\mathbf{1}_{d}))}{\log(t)}= \kappa_{\mathbf{u}_{0}},\]
and the definition of tail order given above is consistent with that presented in (1). A copula may not be regularly-varying in all corners, in which case the tail order is undefined. When the tail order is defined in a given corner, since \(C_{\mathbf{u}_{0}}(t,...,t)\leq t\), we have \(\kappa_{\mathbf{u}_{0}}\geq 1\). Define \(\Upsilon_{\mathbf{u}_{0}}=\lim_{t\to 0^{+}}\mathcal{L}_{\mathbf{u}_{0}}(t)\). Joe [6] makes the following classification of tail behaviour:
* _Strong tail dependence:_ \(\kappa_{\mathbf{u}_{0}}=1\) and \(\Upsilon_{\mathbf{u}_{0}}>0\). This corresponds to the usual definition of asymptotic dependence and in this case \(\Upsilon_{\mathbf{u}_{0}}=\chi_{\mathbf{u}_{0}}\), the tail dependence coefficient, defined in (2).iii
* _Intermediate tail dependence_: \(1<\kappa_{\mathbf{u}_{0}}<d\) or \(\kappa_{\mathbf{u}_{0}}=1\) and \(\Upsilon_{\mathbf{u}_{0}}=0\).
* _Tail orthant independence_: \(\kappa_{\mathbf{u}_{0}}=d\) and \(\Upsilon_{\mathbf{u}_{0}}\in(0,\infty)\).
* _Negative dependence_: \(\kappa_{\mathbf{u}_{0}}>d\).
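As a small numerical illustration of Definition 13 and the classification above (not from the paper; Python with numpy assumed), the sketch below estimates the lower tail order of the Frank copula, using its standard closed form, which is consistent with the density given in Example 7 below. Example 7 shows that the Frank copula is asymptotically independent in every quadrant, so the estimated slope of \(\log C(t,t)\) against \(\log t\) should approach \(\kappa=d=2\); for a copula with strong tail dependence the same slope would approach 1.

```python
# Illustrative sketch (not from the paper): estimating a tail order numerically.
import numpy as np

def C_frank(u, v, alpha=2.0):
    """Standard closed form of the bivariate Frank copula."""
    return -np.log1p(np.expm1(-alpha * u) * np.expm1(-alpha * v)
                     / np.expm1(-alpha)) / alpha

# Local slope of log C(t, t) against log t for decreasing t estimates kappa.
ts = np.array([1e-3, 1e-5, 1e-7, 1e-9])
logC = np.log(C_frank(ts, ts))
slopes = np.diff(logC) / np.diff(np.log(ts))
print(slopes)   # all close to 2: tail orthant independence (kappa = d = 2)
```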
The tail order describes the behaviour of the copula as the corner is approached along the diagonal. The tail order function generalises this to other linear paths toward the corner.
**Definition 14** (Tail order function and ARL model).: Suppose that \(C\) is a \(d\)-dimensional copula and has tail order \(\kappa_{\mathbf{u}_{0}}\) in corner \(\mathbf{u}_{0}\) with corresponding slowly-varying function \(\mathcal{L}_{\mathbf{u}_{0}}(t)\). Suppose also that there exists a function \(B_{\mathbf{u}_{0}}:[0,\infty)^{d}\to[0,\infty)\) such that
\[C_{\mathbf{u}_{0}}(t\mathbf{z})\sim B_{\mathbf{u}_{0}}(\mathbf{z})\mathcal{L} _{\mathbf{u}_{0}}(t)t^{\kappa_{\mathbf{u}_{0}}},\qquad t\to 0^{+}. \tag{17}\]
Then \(B_{\mathbf{u}_{0}}\) is the tail order function of \(C\) in corner \(\mathbf{u}_{0}\). We refer to the asymptotic assumption (17) as the Angular-Radial model for the copula in Linear coordinates (ARL model).
Hua and Joe [7] derived various properties of tail order functions. In particular, they showed that \(B_{\mathbf{u}_{0}}(\mathbf{z})\) is homogeneous of order \(\kappa_{\mathbf{u}_{0}}\), i.e. \(B_{\mathbf{u}_{0}}(t\mathbf{z})=t^{\kappa_{\mathbf{u}_{0}}}B_{\mathbf{u}_{0}}(\mathbf{z})\). Moreover, Joe _et al._ [29] showed that, for strong tail dependence, the function \(\Upsilon_{\mathbf{u}_{0}}B_{\mathbf{u}_{0}}(\mathbf{z})\) can be related to the stable tail dependence function (e.g. [8, p257]) for the corresponding lower extreme value limit of \(C_{\mathbf{u}_{0}}\). An analogous model for the copula density can also be defined [7, 30, 31].
**Definition 15** (Tail density function and ARL density model).: Suppose that \(C\) is a \(d\)-dimensional copula with density function \(c\), and that \(C\) has tail order \(\kappa_{\mathbf{u}_{0}}\) in corner \(\mathbf{u}_{0}\), with corresponding slowly-varying function \(\mathcal{L}_{\mathbf{u}_{0}}(t)\). Suppose also that there exists a function \(b_{\mathbf{u}_{0}}:[0,\infty)^{d}\to[0,\infty)\) such that
\[c_{\mathbf{u}_{0}}(t\mathbf{z})\sim b_{\mathbf{u}_{0}}(\mathbf{z})\mathcal{L} _{\mathbf{u}_{0}}(t)t^{\kappa_{\mathbf{u}_{0}}-d},\qquad t\to 0^{+}. \tag{18}\]
Then \(b_{\mathbf{u}_{0}}\) is the tail density function of \(c\) in corner \(\mathbf{u}_{0}\). We refer to (18) as the ARL model for the copula density.
Under some mild constraints, the tail density function can be derived from the tail order function. Suppose that copula \(C\) has a tail order function in corner \(\mathbf{u}_{0}\), and the partial derivatives \(\partial C_{\mathbf{u}_{0}}/\partial u_{j}\) are ultimately monotone as
\(u_{j}\to 0^{+}\). In this case, the operations of taking limits and partial differentiation can be interchanged. Note that for \(u_{j}=tz_{j}\), \(\partial/\partial u_{j}=t^{-1}\,\partial/\partial z_{j}\). Therefore, for \(t\to 0^{+}\), we can write
\[c_{\mathbf{u}_{0}}(t\mathbf{z})=\frac{\partial^{d}}{\partial u_{1}\cdots \partial u_{d}}C_{\mathbf{u}_{0}}(t\mathbf{z})\sim\mathcal{L}_{\mathbf{u}_{0}} (t)\,t^{\kappa_{\mathbf{u}_{0}}-d}\frac{\partial^{d}}{\partial z_{1}\cdots \partial z_{d}}B_{\mathbf{u}_{0}}(\mathbf{z})=\mathcal{L}_{\mathbf{u}_{0}}(t) \,t^{\kappa_{\mathbf{u}_{0}}-d}b_{\mathbf{u}_{0}}(\mathbf{z}). \tag{19}\]
When the tail density function exists it is homogeneous of order \(\kappa_{\mathbf{u}_{0}}-d\). For the case \(\kappa_{\mathbf{u}_{0}}<d\) (i.e. intermediate or strong tail dependence) the copula density becomes infinite in corner \(\mathbf{u}_{0}\). Many families of copulas and copula densities have asymptotic forms (17) and (18); examples are presented in [7, 30, 31], and further examples are given in Appendix B. However, it is important to note that the ARL models for the copula and copula density assume that \(t\to 0^{+}\) with \(\mathbf{z}=(z_{1},...,z_{d})\) fixed. In some cases, the limit does not apply when \(z_{j}\to 0^{+}\), for some \(j=1,...,d\). In such cases the ARL models do not describe the behaviour close to the margins. This is discussed further in Section 4.3 and the implications for SPAR models on heavy-tailed margins are discussed in Section 5.3.
### Different scaling orders
An alternative assumption about multivariate tail probabilities was presented by Wadsworth and Tawn [25], who considered the joint tail region for variables with standard Pareto or exponential margins. This assumption can be reformulated in terms of an assumption about the asymptotic form of the copula in the corners. The Wadsworth-Tawn model considers the behaviour of the copula as the corner is approached at different rates in each variable.
**Definition 16** (Copula exponent function).: Suppose that \(C\) is a \(d\)-dimensional copula and there exists a function \(\Lambda_{\mathbf{u}_{0}}:[0,\infty)^{d}\to[0,\infty)\) such that
\[C_{\mathbf{u}_{0}}(t^{\mathbf{z}})\sim\mathcal{L}_{\mathbf{u}_{0}}(t;\mathbf{ z})t^{\Lambda_{\mathbf{u}_{0}}(\mathbf{z})},\quad t\to 0^{+}, \tag{20}\]
where \(\mathbf{z}=(z_{1},...,z_{d})\), \(t^{\mathbf{z}}=(t^{z_{1}},...,t^{z_{d}})\), and \(\mathcal{L}_{\mathbf{u}_{0}}(t;\mathbf{z})\) is slowly-varying in \(t\) at \(0^{+}\). Then \(\Lambda_{\mathbf{u}_{0}}\) is the exponent function of copula \(C\) in corner \(\mathbf{u}_{0}\). We refer to assumption (20) as the Angular-Radial model for the copula in Exponential coordinates (ARE model).
Wadsworth and Tawn [25] refer to \(\Lambda_{\mathbf{u}_{0}}\) as the angular dependence function. However, as we are considering various types of angular-radial model, we use the term 'copula exponent function' to avoid confusion. Wadsworth and Tawn [25] showed that \(\Lambda_{\mathbf{u}_{0}}\) has properties analogous to those of the stable tail dependence function for EV copulas. In particular, \(\Lambda_{\mathbf{u}_{0}}\) is homogeneous of order 1, and by comparing (16) and (20) we see that when both these representations are valid, \(\Lambda_{\mathbf{u}_{0}}(\mathbf{1}_{d})=\kappa_{\mathbf{u}_{0}}\). Moreover, when \(\kappa_{\mathbf{u}_{0}}=1\), we have \(\Lambda_{\mathbf{u}_{0}}(z_{1},...,z_{d})=\max(z_{1},...,z_{d})\).12 We can also introduce an analogous definition for the density function.
Footnote 12: This can be seen as follows. First note that since \(C_{\mathbf{u}_{0}}(u_{1},...,u_{d})\leq\min(u_{1},...,u_{d})\), we have \(\Lambda_{\mathbf{u}_{0}}(z_{1},...,z_{d})\geq\max(z_{1},...,z_{d})\). Secondly, since \(C_{\mathbf{u}_{0}}(t^{az_{1}},t^{z_{2}},...,t^{z_{d}})\leq C_{\mathbf{u}_{0}}(t^{z_{1}},t^{z_{2}},...,t^{z_{d}})\) for \(a>1\) and \(t\in(0,1)\), the exponent function \(\Lambda_{\mathbf{u}_{0}}\) is non-decreasing in the first argument, and similarly for the other arguments. Combining these observations gives the result.
**Definition 17** (Copula density exponent function).: Suppose that \(c\) is a \(d\)-dimensional copula density and there exists a function \(\lambda_{\mathbf{u}_{0}}:(0,\infty)^{d}\to[0,\infty)\) such that
\[c_{\mathbf{u}_{0}}(t^{\mathbf{z}})\sim\mathcal{M}_{\mathbf{u}_{0}}(t;\mathbf{ z})t^{\lambda_{\mathbf{u}_{0}}(\mathbf{z})-S_{\mathbf{z}}},\quad t\to 0^{+}, \tag{21}\]
where \(S_{\mathbf{z}}=\sum_{j=1}^{d}z_{j}\) and \(\mathcal{M}_{\mathbf{u}_{0}}(t;\mathbf{z})\) is slowly-varying in \(t\) at \(0^{+}\). Then \(\lambda_{\mathbf{u}_{0}}\) is the exponent function of copula density \(c\) in corner \(\mathbf{u}_{0}\). We refer to assumption (21) as the Angular-Radial model for the copula density in Exponential coordinates (ARE model for the density). The reason for including the term \(S_{\mathbf{z}}\) in the index will be discussed further below.
As with the copula exponent function, \(\lambda_{\mathbf{u}_{0}}\) is also homogeneous of order 1. However, in contrast to the copula exponent function, the copula density exponent function is only defined for \(\mathbf{z}=(z_{1},...,z_{d})\) such that \(z_{j}>0\) for \(j=1,...,d\). In the case that \(z_{j}=0\), the path \(t^{\mathbf{z}}\) does not terminate in corner \(\mathbf{u}_{0}\) as \(t\to 0^{+}\), and hence the variation of the density along this path is not representative of the behaviour in corner \(\mathbf{u}_{0}\). The relation between \(\Lambda_{\mathbf{u}_{0}}\) and \(\lambda_{\mathbf{u}_{0}}\) is more complicated than that between \(B_{\mathbf{u}_{0}}\) and \(b_{\mathbf{u}_{0}}\), but can be inferred from Proposition 3.3 in [20].13 We can also relate \(\lambda_{\mathbf{u}_{0}}\) to the tail order. Comparing (18) and (21) we see that when both these representations are valid, we have \(\lambda_{\mathbf{u}_{0}}(\mathbf{1}_{d})=\kappa_{\mathbf{u}_{0}}\).
Footnote 13: Specifically, on exponential margins for \(\mathbf{z}\in\mathcal{L}_{1}^{\prime}\), \(f_{\mathbf{X}}(r\mathbf{z})=\exp(-r)\,c_{\mathbf{1}_{d}}(\exp(-r\mathbf{z}))\sim\mathcal{M}_{\mathbf{1}_{d}}(\exp(-r);\mathbf{z})\exp(-r\lambda_{\mathbf{1}_{d}}(\mathbf{z}))\). Proposition 2.6(i) in [10] shows that \(\lim_{r\to\infty}[-\log(f_{\mathbf{X}}(r\mathbf{z}))/r]=\lambda_{\mathbf{1}_{d}}(\mathbf{z})\). Proposition 2.2 in [20] shows that this is a sufficient condition for \(f_{\mathbf{X}}\) to have a limit set with gauge function \(\lambda_{\mathbf{1}_{d}}(\mathbf{z})\). The assumptions of Proposition 3.3 in [20] then apply, and provide a relation between \(\Lambda_{\mathbf{1}_{d}}(\mathbf{z})\) and \(\lambda_{\mathbf{1}_{d}}(\mathbf{z})\).
The lower tails of EV copulas represent canonical cases of copula exponent functions, where \(\mathcal{L}_{\mathbf{0}_{d}}(t,\mathbf{z})=1\) and \(\mathcal{M}_{\mathbf{0}_{d}}(t,\mathbf{z})\) is a function of \(\mathbf{z}\) only. Moreover, for EV copulas \(\Lambda_{\mathbf{0}_{d}}(\mathbf{z})=\lambda_{\mathbf{0}_{d}}(\mathbf{z})=A(\mathbf{z})\), where \(A(\mathbf{z})\) is the stable tail dependence function (see Appendix B.1). This is, in part, the motivation for including the term \(S_{\mathbf{z}}\) in the definition of the copula density exponent function. The other motivation is related to expressions for the density on exponential margins, discussed in Section 4.4.
### Relations between models for constant and variable scaling orders
The ARL and ARE models for the copula and copula density were presented in terms of assumptions about the scaling order of each variable. However, we can also view them as describing the asymptotic behaviour of the copula in different angular-radial coordinate systems. To illustrate the points of interest, it will suffice to consider two-dimensional cases. The ARL models consider the variation of copula and copula density in terms of \(L^{1}\) polar coordinates \((u,v)=(t(1-z),tz)\).vi Similarly, the ARE models describe the variation of the copula and copula density in terms of coordinates \((u,v)=(r^{1-w},r^{w})\), for \(r\to 0^{+}\) and \(w\in(0,1)\). In this case the coordinates \((r,w)\) define a nonlinear polar coordinate system, as shown in Figure 7.
Footnote vi: Strictly, the definition was presented for \(\mathbf{u}=t\mathbf{z}\) and \(\mathbf{z}\in[0,\infty)^{d}\), but the homogeneity property of the tail order and tail density functions allows us to restrict the domain of \(\mathbf{z}\) to lie on the unit simplex in the non-negative orthant.
When considering the behaviour of the copula density for \((u,v)\to(0,0)\), it is useful to use a log-log scale for visualisations. Consider a two-dimensional case with Cartesian coordinates \((u,v)=(t(1-z),tz)\). A line of constant angle \(z\) has equation \(v=uz/(1-z)\). When viewed on a log-log scale, the equation becomes \(\log(v)=\log(u)+\log(z/(1-z))\), which has unit gradient, and the value of \(z\) determines the y-intercept (see Figure 7). In contrast, for Cartesian coordinates \((u,v)=(r^{1-w},r^{w})\), a line of constant \(w\) is given by \(v=u^{w/(1-w)}\), whereas on the log-log scale the line has equation \(\log(v)=(w/(1-w))\log(u)\). So in this case, lines of constant \(w\) correspond to linear rays on the log-log scale, with gradient \(w/(1-w)\) (see Figure 7). Or to put it another way, \((\log(r),w)=(\log(uv),\log(v)/\log(uv))\), so that \((\log(r),w)\) are the \(L^{1}\) polar coordinates of \((\log(u),\log(v))\).
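The relations between the two coordinate systems can be checked directly; the following snippet (illustrative only, not from the paper; Python with numpy assumed) converts a point \((u,v)\) into both systems and verifies that \((\log(r),w)\) are the \(L^{1}\) polar coordinates of \((\log(u),\log(v))\).

```python
# Small sketch (not from the paper): the two polar coordinate systems above.
import numpy as np

def to_linear_polar(u, v):
    """L1 polar coordinates: (u, v) = (t (1 - z), t z)."""
    t = u + v
    return t, v / t

def to_exponential_polar(u, v):
    """Exponential coordinates: (u, v) = (r^(1 - w), r^w), for u, v in (0, 1)."""
    r = u * v
    return r, np.log(v) / np.log(r)

u, v = 0.01, 0.002
t, z = to_linear_polar(u, v)
r, w = to_exponential_polar(u, v)
assert np.allclose([t * (1 - z), t * z], [u, v])
assert np.allclose([r ** (1 - w), r ** w], [u, v])
# (log r, w) are the L1 polar coordinates of (log u, log v):
assert np.isclose(np.log(r), np.log(u) + np.log(v))
assert np.isclose(w, np.log(v) / (np.log(u) + np.log(v)))
```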
For the polar coordinate system used in the ARL models, on the log-log scale, all rays with fixed \(z\) are parallel to the ray \(w=1/2\) of the coordinate system used in the ARE model. Informally speaking, any information about how the copula or copula density varies with \(z\in(0,1)\) at small values of \(t\), which is contained in the ARL model, is collapsed onto the ray \(w=1/2\) in the ARE model. Conversely, the information about the angular variation of the density described by the exponent functions for the copula and copula density must relate to the behaviour of the tail order and tail density functions as \(z\to 0^{+}\) or \(z\to 1^{-}\). This is made precise in the following propositions.
**Proposition 4.1** (Tail functions in terms of exponent functions).: _Suppose that bivariate copula \(C\) and corresponding density \(c\) satisfy the ARE model assumptions in corner \(\mathbf{u}_{0}\). Suppose also that \(\Lambda_{\mathbf{u}_{0}}(1-w,w)\) and \(\lambda_{\mathbf{u}_{0}}(1-w,w)\) both have a Taylor series about \(w=1/2\) with \(\beta\coloneqq[\frac{d}{dw}\Lambda_{\mathbf{u}_{0}}(1-w,w)]_{w=1/2}=[\frac{d}{dw}\lambda_{\mathbf{u}_{0}}(1-w,w)]_{w=1/2}\). Suppose that \(C\) and \(c\) also satisfy the ARL model assumptions in corner \(\mathbf{u}_{0}\). Then the tail order and tail density functions are given by_
\[B_{\mathbf{u}_{0}}(z_{1},z_{2}) =\gamma_{1}z_{1}^{\frac{1}{2}(\kappa_{\mathbf{u}_{0}}-\beta)}z_{2 }^{\frac{1}{2}(\kappa_{\mathbf{u}_{0}}+\beta)},\] \[b_{\mathbf{u}_{0}}(z_{1},z_{2}) =\gamma_{2}z_{1}^{\frac{1}{2}(\kappa_{\mathbf{u}_{0}}-\beta)-1} z_{2}^{\frac{1}{2}(\kappa_{\mathbf{u}_{0}}+\beta)-1},\]
_for \((z_{1},z_{2})\in(0,\infty)^{2}\) and \(\kappa_{\mathbf{u}_{0}}=\Lambda_{\mathbf{u}_{0}}(1,1)=\lambda_{\mathbf{u}_{0} }(1,1)\) and constants \(\gamma_{1},\gamma_{2}>0\). When \(\kappa_{\mathbf{u}_{0}}<2\) and \(\beta=0\), the ARL model assumptions are not valid for \(z_{1}\to 0^{+}\) or \(z_{2}\to 0^{+}\)._
When the assumptions of Proposition 4.1 are satisfied, the tail order and tail density functions are defined completely by properties of the exponent functions at \(w=1/2\). It was noted in Section 4.2 that when \(\kappa_{\mathbf{u}_{0}}=1\), we have \(\Lambda_{\mathbf{u}_{0}}(z_{1},z_{2})=\max(z_{1},z_{2})\), so \(\Lambda_{\mathbf{u}_{0}}\) is not differentiable along the ray \(z_{1}=z_{2}\). Moreover, it will be shown in Section 5.2.2, that when \(\kappa_{\mathbf{u}_{0}}=1\), the function \(\lambda_{\mathbf{u}_{0}}(1-w,w)\) is not differentiable at \(w=1/2\). So Proposition 4.1 does not apply in the case \(\kappa_{\mathbf{u}_{0}}=1\). In cases of intermediate tail dependence (i.e. \(\kappa_{\mathbf{u}_{0}}\in(1,2)\)), Proposition 4.1 tells us that the ARL models cannot be valid for \(z_{1}\to 0^{+}\) or \(z_{2}\to 0^{+}\) whenever the exponent functions are symmetric (and hence \(\beta=0\)). In Section 5.3, we will see that this implies that the SPAR representation of these types of copula on heavy-tailed margins is only valid in the region where both variables are large.
A trivial example where the assumptions of Proposition 4.1 are satisfied is the independence copula, where \(\Lambda_{\mathbf{u}_{0}}(\mathbf{z})=\lambda_{\mathbf{u}_{0}}(\mathbf{z})=1\) and \(B_{\mathbf{u}_{0}}(z_{1},z_{2})=z_{1}z_{2}\), \(b_{\mathbf{u}_{0}}(\mathbf{z})=1\). Non-trivial examples where the proposition applies are the Gaussian copula and the lower tail of EV copulas (see Appendix B). Figure 8(a) shows contours of \(C(u,v)\) for the lower tail of an EV copula with symmetric logistic dependence, together with contours of the ARL model (17). The ARL model
is a good approximation close to the line \(u=v\), but deviates from the true contours away from this region. For EV copulas, the ARE model is exact for the lower tail (see Appendix B.1). Informally, when \(\Lambda_{\mathbf{u}_{0}}\) is differentiable along the ray \(u=v\), the level sets of \(C\) asymptote to straight lines in the region of the ray \(u=v\). Due to the coordinate system used in the ARL model, the tail order function approximates this section of the copula exponent function, resulting in the differences from the true values in other regions.
In contrast, contours of the upper corner \(C_{(1,1)}\) are shown in Figure 8(b), together with the corresponding ARE model. In this case, since \(\kappa_{(1,1)}=1\), the copula exponent function is \(\Lambda_{(1,1)}(z,1-z)=\max(z,1-z)\). On the log-log scale of the axes, the convergence to the asymptotic model is evident. However, it is clear that the ARE model does not capture the small (on this scale) portion of the contour that is curved, close to the line \(u=v\). This information about the variation of the copula is described by the ARL model, but is lost in the description provided by the ARE model. This is made precise in the converse result to Proposition 4.1, stated below. Since we know that the ARE model for the copula always takes the same form when \(\kappa_{\mathbf{u}_{0}}=1\), and that for intermediate tail dependence ARL models have the limitations described in Proposition 4.1, we focus on the relation between the ARL and ARE models for the copula density here.
**Proposition 4.2** (Copula density exponent function in terms of tail density function).: _Suppose that bivariate copula density \(c\) satisfies ARL model assumptions in corner \(\mathbf{u}_{0}\), and that \(c_{\mathbf{u}_{0}}(t(1-z),tz)\sim b_{\mathbf{u}_{0}}(1-z,z)\mathcal{L}_{ \mathbf{u}_{0}}(t)\,t^{\kappa_{\mathbf{u}_{0}}-2}\) for \(t\to 0^{+}\) and \(z\in[0,1]\). Suppose also that \(b_{\mathbf{u}_{0}}(1-z,z)\in RV_{\beta_{1}}(0^{+})\) and \(b_{\mathbf{u}_{0}}(1-z,z)\in RV_{\beta_{2}}(1^{-})\) for some \(\beta_{1},\beta_{2}\in(0,\infty)\). Then \(c\) also satisfies the ARE model assumptions in corner \(\mathbf{u}_{0}\), with copula density exponent function given by_
\[\lambda_{\mathbf{u}_{0}}(1-w,w)=\frac{\kappa_{\mathbf{u}_{0}}}{2}+\begin{cases} \left(2(1+\beta_{2})-\kappa_{\mathbf{u}_{0}}\right)\left|w-\frac{1}{2}\right|,&w\in(0,1/2],\\ \left(2(1+\beta_{1})-\kappa_{\mathbf{u}_{0}}\right)\left|w-\frac{1}{2}\right|,&w\in(1/2,1).\end{cases}\]
Figure 7: Polar coordinate grids used for asymptotic models for copulas. Top: ARL models / constant scaling order. Bottom: ARE models / different scaling orders. Red lines: constant radius. Black lines: constant angle.
The form of \(\lambda_{\mathbf{u}_{0}}\) given in Proposition 4.2 consists of two linear segments, with \(\lambda_{\mathbf{u}_{0}}(1^{-},0^{+})=1+\beta_{2}\), \(\lambda_{\mathbf{u}_{0}}(\frac{1}{2},\frac{1}{2})=\kappa_{\mathbf{u}_{0}}/2\) and \(\lambda_{\mathbf{u}_{0}}(0^{+},1^{-})=1+\beta_{1}\). When the assumptions of the proposition hold, \(\lambda_{\mathbf{u}_{0}}\) only contains information about the tail order \(\kappa_{\mathbf{u}_{0}}\) and the indices of regular variation for the tails of \(b_{\mathbf{u}_{0}}(1-z,z)\). Information about the variation of \(b_{\mathbf{u}_{0}}(1-z,z)\) for values of \(z\) away from \(0\) and \(1\) is lost. The assumptions of Proposition 4.2 hold trivially for the case of the independence copula. Non-trivial examples with \(\kappa_{\mathbf{u}_{0}}=1\) where the assumptions of Proposition 4.2 hold with \(\beta_{1}=\beta_{2}\), include the upper tail of the bivariate EV copula with symmetric logistic dependence (see Appendix B) and any copulas that are in the domain of attraction of this (this includes all Archimedean copulas with strong tail dependence [32]), and all corners of the t-copula (see Appendix B.3). Asymmetric examples with \(\beta_{1}\neq\beta_{2}\) include the upper tails of other EV copulas, with stable tail dependence function given by the Dirichlet model [33] or the polynomial model [34]. The assumptions of the proposition do not hold for EV copulas with the Husler-Reiss dependence model [35], as the function \(b_{(1,1)}\) is rapidly-varying in the tails, i.e. regularly-varying with index \(+\infty\) (see Appendix B.1 and Example 11 below).
In summary, ARL models for the copula and copula density do not apply close to the margins in cases of intermediate tail dependence. Copulas with strong tail dependence all have the same form of ARE model, making this form less useful for describing multivariate extremes. However, the ARE model for the copula density is more versatile in the types of distribution it can describe, over a range of dependence classes. In Section 5.2 we will see that the ARE model for the copula density is closely linked to SPAR representations of distributions on Laplace margins, and their corresponding limit sets.
### Generalised angular-radial dependence functions
As noted in Section 4.2, the copula exponent function was originally defined in terms of angular-radial representations of a joint distribution function on exponential margins. Similarly, as discussed below, the tail order and tail density functions are closely related to asymptotic models for the angular-radial behaviour of the survivor and density functions for variables on heavy-tailed margins. We can generalise this idea and define functions to represent the variation of the copula density for angular-radial coordinates defined on arbitrary margins.
**Definition 18**.: Consider a random vector \(\mathbf{X}_{*}\in\mathbb{R}^{d}\) with copula density \(c\) and common marginal distribution \(F_{*}\) and marginal density \(f_{*}\). For \((r,\mathbf{w})\in\{(t,\mathbf{z})\in[0,\infty)\times\mathcal{U}_{1}:t\mathbf{ z}\in\operatorname{supp}(\mathbf{X}_{*})\}\) define
\[\delta_{*}(r,\mathbf{w};c) =c(F_{*}(rw_{1}),\cdots,F_{*}(rw_{d})),\] \[m_{*}(r,\mathbf{w}) =\prod_{i=1}^{d}f_{*}(rw_{i}).\]
We refer to \(\delta_{*}\) as the angular-radial representation of the copula density (abbreviated to _AR copula function_) on margins \(F_{*}\), and \(m_{*}\) as the marginal product function. The joint density function of \(\mathbf{X}_{*}\) can then be expressed as \(f_{\mathbf{X}_{*}}(r\mathbf{w})=m_{*}(r,\mathbf{w})\delta_{*}(r,\mathbf{w};c)\).
Figure 8: Illustration of limitations of ARL and ARE models for lower and upper tails of EV copula, \(C\), with symmetric logistic dependence and parameter \(\alpha=2\). For copulas with intermediate tail dependence in a given corner, meeting the assumptions of Proposition 4.1 with symmetric exponent function, contours of the ARL model appear as straight lines on a log-log scale, as shown in panel (a). For copulas with strong tail dependence in a given corner, the ARE model always has the form shown in panel (b).
In the notation defined above we have written \(\delta_{*}(r,\mathbf{w};c)\) to emphasise that \(\delta_{*}\) is dependent on the copula density, whereas \(m_{*}\) is not. As with other functions and coefficients of the copula, we will omit this information when \(c\) is clear from the context, and write \(\delta_{*}(r,\mathbf{w})\). In the case of the independence copula, \(c(\mathbf{u})\equiv 1\), we have \(\delta_{*}(r,\mathbf{w})=1\) for any choice of margin, and \(f_{\mathbf{X}_{*}}(r\mathbf{w})=m_{*}(r,\mathbf{w})\). For arbitrary copula density \(c\), the function \(\delta_{*}(r,\mathbf{w};c)\) therefore describes how the joint density of independent random variables with common margins \(F_{*}\) is modified to obtain the density of dependent random variables with copula density \(c\).
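To make Definition 18 concrete, the sketch below (illustrative only, not from the paper; Python with numpy/scipy assumed) implements \(\delta_{*}\) and \(m_{*}\) for an arbitrary copula density and common margin, and checks the factorisation \(f_{\mathbf{X}_{*}}(r\mathbf{w})=m_{*}(r,\mathbf{w})\delta_{*}(r,\mathbf{w};c)\) in a case where the joint density is known independently: the Gaussian copula with standard normal margins, which reconstructs the bivariate normal density.

```python
# Minimal sketch (not from the paper) of the AR copula function and marginal product.
import numpy as np
from scipy.stats import norm, multivariate_normal

rho = 0.6

def c_gauss(u, v):
    """Standard closed form of the bivariate Gaussian copula density."""
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp(-(rho**2 * (x**2 + y**2) - 2 * rho * x * y)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def delta_star(r, w, margin, c):
    """AR copula function delta_*(r, w; c) on common margin F_*."""
    return c(*margin.cdf(r * np.asarray(w)))

def m_star(r, w, margin):
    """Marginal product function m_*(r, w)."""
    return np.prod(margin.pdf(r * np.asarray(w)))

r, w = 3.0, (0.25, -0.75)                      # the point x = r * w = (0.75, -2.25)
f_rec = m_star(r, w, norm) * delta_star(r, w, norm, c_gauss)
f_ref = multivariate_normal(mean=[0, 0], cov=[[1, rho], [rho, 1]]).pdf([0.75, -2.25])
assert np.isclose(f_rec, f_ref)
```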
Using this notation, we can express the asymptotic form of the AR copula function on exponential margins in terms of the copula density exponent function. If we define \(r=-\log(t)\), then we can write
\[\delta_{E}(r,\mathbf{w})=c_{\mathbf{1}_{d}}(\exp(-r\mathbf{w}))\sim\mathcal{M}_{\mathbf{1}_{d}}(\exp(-r);\mathbf{w})\exp(-r(\lambda_{\mathbf{1}_{d}}(\mathbf{w})-1)),\quad r\rightarrow\infty,\ \mathbf{w}\in\mathcal{U}_{1}\cap(0,1]^{d}.\]
Similarly, the copula density exponent function can also be related to the asymptotic form of AR copula function on Laplace margins.
**Proposition 4.3**.: _Suppose that \(d\)-dimensional copula density \(c\) satisfies the ARE model assumptions in corner \(\mathbf{u}_{0}=(u_{0,1},...,u_{0,d})\), with copula density exponent function \(\lambda_{\mathbf{u}_{0}}(\mathbf{z})\) that is continuous with bounded partial derivatives everywhere apart from the ray \(\mathbf{z}=\mathbf{1}_{d}\). Then for all \(\mathbf{w}\in\{(w_{1},...,w_{d})\in\mathcal{U}_{1}:(-1)^{1+u_{0,j}}w_{j}>0,\,j=1,...,d\}\) there exists a function \(\mathcal{M}_{\mathbf{u}_{0}}(t,\mathbf{w})\) that is slowly-varying in \(t\) at \(0^{+}\), such that_
\[\delta_{L}(r,\mathbf{w})\sim\mathcal{M}_{\mathbf{u}_{0}}(\exp(-r),\mathbf{w}) \exp(-r(\lambda_{\mathbf{u}_{0}}(\mathbf{w})-1)),\quad r\rightarrow\infty. \tag{22}\]
In the case that \(\mathbf{w}=(w_{1},...,w_{d})\) has \(w_{j}=0\) for one or more \(j\in\{1,...,d\}\), the path through the copula, \(\mathbf{u}=F_{L}(r\mathbf{w})\), does not terminate in a corner as \(r\rightarrow\infty\), so the copula density exponent functions do not provide information about the asymptotic behaviour of \(\delta_{L}(r,\mathbf{w})\) for these values of \(\mathbf{w}\). However, in these cases, for the examples considered in Appendix B, it was found that we have \(\delta_{L}(r,\mathbf{w})\sim\mathcal{M}_{\mathbf{u}_{0}}(\exp(-r),\mathbf{w}) \exp(-r(\lim_{\mathbf{z}\rightarrow\mathbf{w}}\lambda_{\mathbf{u}_{0}}( \mathbf{z})-1))\), for \(r\rightarrow\infty\). In Section 5.2, we will consider more general examples of asymptotic forms for \(\delta_{L}(r,\mathbf{w})\) than the form given in Proposition 4.3. However, in the examples presented in Section 5.2.3, we will show that this form is applicable for many families of copulas.
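The exponential decay rate in (22) can be checked numerically for the Gaussian copula on Laplace margins (illustrative sketch, not from the paper; Python with numpy/scipy assumed). Since the Gaussian copula has tail order \(2/(1+\rho)\) in the upper right corner (see Section 5.1 and Appendix B.2), and \(\lambda_{\mathbf{u}_{0}}(\mathbf{1}_{d})=\kappa_{\mathbf{u}_{0}}\) with \(\lambda_{\mathbf{u}_{0}}\) homogeneous of order 1, we expect \(\lambda_{(1,1)}(1/2,1/2)=1/(1+\rho)\) at the diagonal angle.

```python
# Numerical check (not from the paper) of the decay rate in (22) at w = (1/2, 1/2).
import numpy as np
from scipy.stats import norm

rho = 0.5

def c_gauss(u, v):
    """Standard closed form of the bivariate Gaussian copula density."""
    x, y = norm.ppf(u), norm.ppf(v)
    return np.exp(-(rho**2 * (x**2 + y**2) - 2 * rho * x * y)
                  / (2 * (1 - rho**2))) / np.sqrt(1 - rho**2)

def delta_L(r, w1, w2):
    """delta_L(r, w) for w in the first quadrant, using c(u, v) = c(1-u, 1-v)."""
    return c_gauss(0.5 * np.exp(-r * w1), 0.5 * np.exp(-r * w2))

r1, r2 = 30.0, 60.0
slope = (np.log(delta_L(r2, 0.5, 0.5)) - np.log(delta_L(r1, 0.5, 0.5))) / (r2 - r1)
print(1.0 - slope)      # roughly 1 / (1 + rho) = 0.667 (slowly varying terms remain)
```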
We can also relate the ARL model for the copula density to the asymptotic form of the AR copula function on GP margins with positive shape parameter.
**Proposition 4.4**.: _Suppose that \(d\)-dimensional copula density \(c\) satisfies the ARL model assumptions in the upper tail, i.e. for \(\mathbf{z}\in(0,\infty)^{d}\) we have \(c_{\mathbf{1}_{d}}(t\mathbf{z})\sim b_{\mathbf{1}_{d}}(\mathbf{z})\mathcal{L}_{\mathbf{1}_{d}}(t)t^{\kappa_{\mathbf{1}_{d}}-d}\) as \(t\to 0^{+}\), for some \(\mathcal{L}_{\mathbf{1}_{d}}\in RV_{0}(0^{+})\) and \(b_{\mathbf{1}_{d}}(\mathbf{z})>0\). Then for GP margins with shape parameter \(\xi_{m}>0\) and \(\mathbf{w}\in\mathcal{U}_{1}\cap(0,1]^{d}\), we have_
\[\delta_{GP}(r,\mathbf{w})\sim s_{\mathbf{w}}^{\kappa_{\mathbf{1}_{d}}-d}b_{\mathbf{1}_{d}}\left(s_{\mathbf{w}}^{-1}\mathbf{w}^{-1/\xi_{m}}\right)\mathcal{L}_{\mathbf{1}_{d}}\left(r^{-1/\xi_{m}}\right)r^{\frac{d-\kappa_{\mathbf{1}_{d}}}{\xi_{m}}},\quad r\rightarrow\infty, \tag{23}\]
_where \(s_{\mathbf{w}}=\xi_{m}^{-1/\xi_{m}}\sum_{j=1}^{d}w_{j}^{-1/\xi_{m}}\)._
Propositions 4.3 and 4.4 relate a description of the marginal-scale angular-radial behaviour back to a description of the copula. In Section 5, we will see how this can be used to make some general statements about how the choice of margin affects whether a given copula has a SPAR representation.
## 5 Effects of the choice of margin
In this section we consider the effect of the choice of margins on whether a given copula has a SPAR representation, i.e. whether assumptions (A1)-(A3) are satisfied. Assumptions (A1) and (A2) are related to the tail of the conditional radial distribution, whereas assumption (A3) is related to the angular density, which is an integral over the entire conditional radial distribution. We start by discussing assumption (A3) in Section 5.1, and consider under what choice of margins the angular density is finite and continuous for the types of copula introduced in Sections 4.1 and 4.2. To examine whether assumption (A3) is satisfied, we need to specify the entire marginal distribution, rather than just the properties of the tail. We show that, in some cases, for two choices of marginal distribution functions, \(F\), \(G\), which are asymptotically equivalent in the tail (i.e. \(\bar{F}(x)\sim\bar{G}(x)\) as \(x\rightarrow\infty\)), using margin \(F\) satisfies assumption (A3), whereas using margin \(G\) does not. We show that using Laplace margins leads to forms of the joint density where (A3)
is satisfied for a wide range of copulas, whereas for other choices of margins the angular density is not finite in some cases. In particular, using two-sided Laplace margins has distinct advantages over using one-sided exponential margins.
In the remainder of the section, we consider particular choices of margin in more detail, in relation to whether assumption (A2) is satisfied (i.e. whether the convergence to a GP tail in assumption (A1) holds with parameter functions that are finite and continuous with angle). Section 5.2 considers the case of Laplace margins. We show that this leads to forms of the joint density which satisfy (A2) for a large family of copulas, including many commonly-used models. In Section 5.2.2 we demonstrate that there is a useful relation between SPAR representations on Laplace margins and the corresponding limit set, when they exist. This provides links between SPAR and other representations for multivariate extremes, which can be related to properties of the limit set [20]. The link between SPAR representations and limit sets also provides a means of estimating limit sets. We also show that some copulas which do not have limit sets on Laplace margins do have SPAR representations.
Section 5.3 considers SPAR representations on heavy-tailed margins. We show that copulas with asymptotic dependence in the upper tail have a convenient representation on heavy-tailed margins. However, there are copulas with asymptotic dependence in the upper tail for which the SPAR representation on heavy-tailed margins is only valid in the region where all variables are large, whereas the SPAR representation on Laplace margins is valid in all 'extreme regions'. Moreover, we show that for a large class of copulas which are asymptotically independent in the upper tail, the SPAR representations on heavy-tailed margins are only valid in the region where all variables are large. We also demonstrate that SPAR representations on GP margins with \(\xi_{m}=1\) are equivalent to the representations proposed by Coles and Tawn [33] for the case of asymptotic dependence, and the representation of Ledford and Tawn [11] for the case of asymptotic independence.
Finally, Section 5.4 considers SPAR representations on short-tailed margins. We show that, in general, there are significant restrictions on the types of copula which have SPAR representations on these margins. However, there are certain types of copula, which do not have support on the whole of \((0,1)^{d}\), that do have SPAR representations on short-tailed margins. We discuss the implications this has for modelling real datasets, such as those used in the motivating examples in Section 1.1.
Given the results of Section 3, without loss of generality, we can work in \(L^{1}\) polar coordinates, and for random vector \(\mathbf{X}\in\mathbb{R}^{d}\) we define \(R=\|\mathbf{X}\|_{1}\) and \(\mathbf{W}=\mathbf{X}/R\). When \(d=2\) we define \(Q=\mathcal{A}_{1}^{(-2,2]}(\mathbf{W})\). As noted at the end of Section 3, this allows us to switch between the use of vector and scalar angles in \(\mathbb{R}^{2}\), with unit Jacobian. As with other sections, proofs of results stated in the text are provided in Appendix A.
### Angular density
To understand how the choice of margins affects assumption (A3), we need to consider the behaviour of the copula density along rays of constant angle, defined on various margins. Due to the wide range of possibilities for choices of margin, we restrict our interest to two families of distributions. For cases where we are only interested in extremes in the non-negative orthant of \(\mathbb{R}^{d}\), a natural choice is GP margins with unit scale and shape parameter \(\xi_{m}\) (we denote the shape parameter of the margins as \(\xi_{m}\), to distinguish it from the shape parameter of the tail of the conditional radial distribution). The three canonical cases are \(\xi_{m}=-1\) (uniform margins), \(\xi_{m}=0\) (exponential margins) and \(\xi_{m}=1\) (asymptotically equivalent to Frechet or Pareto margins). When the interest is in extremes in all orthants of \(\mathbb{R}^{d}\), it is beneficial to use symmetric margins. Define the symmetric GP (SGP) density function to be \(f_{SGP}(x;\xi_{m})=\frac{1}{2}f_{GP}(|x|;\xi_{m},1)\) for \(x\in(-r_{F},r_{F})\), where \(f_{GP}\) is the usual GP density function, and \(r_{F}=\infty\) for \(\xi_{m}\geq 0\) and \(r_{F}=-1/\xi_{m}\) for \(\xi_{m}<0\). The three cases above now correspond to a uniform distribution on \([-1,1]\) when \(\xi_{m}=-1\), the standard Laplace distribution when \(\xi_{m}=0\), and a _'double Pareto'_ distribution when \(\xi_{m}=1\).
Suppose that \((X_{GP},Y_{GP})\), \((X_{SGP},Y_{SGP})\) and \((X_{P},Y_{P})\) are random vectors with copula density \(c\), and GP, SGP and Pareto margins, respectively. Even though GP margins with \(\xi_{m}=1\) are asymptotically equivalent to standard Pareto margins, the angular density can differ in important ways, described below. Define corresponding \(L^{1}\) polar variables \(R_{*}=\|(X_{*},Y_{*})\|_{1}\), \(Q_{*}=\mathcal{A}_{1}^{(-2,2]}((X_{*},Y_{*})/R_{*})\), where \(*\) denotes the respective marginal distribution. Figure 9 shows the paths through the copula corresponding to rays of constant \(Q_{*}\). For GP margins, the rays all emanate from \((u,v)=(0,0)\) on the copula scale, whereas for SGP margins, the rays emanate from \((u,v)=(\frac{1}{2},\frac{1}{2})\). The case of Pareto margins forms an interesting contrast to uniform margins. The cdf of the standard Pareto distribution is \(F_{P}(x)=1-x^{-1}\), \(x\geq 1\). For \(q\in(0,1)\) and \(r\geq\max(q^{-1},(1-q)^{-1})\) the angular-radial representation of the copula density on Pareto margins is
\[\delta_{P}(r,(1-q,q))=c_{(1,1)}((r(1-q))^{-1},(rq)^{-1})=c_{(1,1)}(t(1-w),tw),\]
where \(t=(rq(1-q))^{-1}\) and \(w=1-q\). That is, rays \(Q_{P}=q\) correspond to straight lines on the copula scale, emanating from the upper right corner.
From Figure 9, it is evident that on symmetric margins the paths only encounter one corner, whereas on one-sided margins the paths can asymptote close to the lower left / upper right corner as \(q\to 0^{+}\) or \(1^{-}\). In these cases, if the copula density tends to infinity in the lower right or upper left corners (this corresponds to \(\kappa_{(0,1)}<2\) or \(\kappa_{(1,0)}<2\)), then in order to consider whether the angular density is finite, we need to consider how the joint density \(f_{R_{*},Q_{*}}(r,q)\)
behaves when \(\mathbf{u}=(F_{*}(r(1-q)),F_{*}(rq))\) is close to these corners. Figure 10 shows contours of scaled copula density \(c_{s}(u,v)=c(u,v)/(1+c(u,v))\) for various copulas. The scaling is used so that \(c_{s}(u,v)\in[0,1]\), with \(c_{s}(u,v)\to 1^{-}\) as \(c(u,v)\to\infty\). For the Gaussian copula with \(\rho\in(-1,1)\) the tail order is \(2/(1+\rho)\) in the upper right and lower left corners, and \(2/(1-\rho)\) in the upper left and lower right corners (see Appendix B.2 for details). So when \(\rho<0\), the copula density is infinite in the lower right and upper left corners. For the t-copula, there is strong tail dependence in all corners, and hence the density is infinite at each corner (see Appendix B.3). For 2-dimensional EV copulas there is strong tail dependence in the upper right corner and intermediate tail dependence in the lower left corner (see Appendix B.1).
Figure 9: Paths through the copula corresponding to rays of constant angle on different margins.
Figure 10: Contours of scaled copula density \(c_{s}(u,v)=c(u,v)/(1+c(u,v))\) for various two-dimensional copulas, with contour levels from \(0.1\) (dark blue) to \(0.9\) (yellow) at steps of \(0.1\). Note that \(c_{s}(u,v)\to 1^{-}\) as \(c(u,v)\to\infty\).
The following propositions consider the angular density \(f_{Q_{*}}(q)\) for various choices of marginal distribution, when the copula density has either ARL or ARE form in the corners. Not all copulas have these forms, but they are applicable for many widely-used families of copula. In particular, the cases considered highlight that copulas for which \(f_{Q_{*}}(q)\) is finite on one choice of margin may not be finite for other choices of margin.
**Proposition 5.1** (Angular densities on different margins).: _Suppose that two-dimensional copula density \(c\) has ARL form in each corner, and is continuous and finite away from the corners. Suppose that angular and radial variables on various margins are defined as above. Then the following results hold for the angular densities. **GP margins:**_
1. _If_ \(-\kappa_{(1,1)}<\xi_{m}\) _then_ \(f_{Q_{GP}}(q)\) _is finite for_ \(q\in(0,1)\)_._
2. _Suppose there is strong tail dependence in the lower right corner._
    1. _If_ \(\xi_{m}\geq 0\) _then_ \(f_{Q_{GP}}(q)\to\infty\) _for_ \(q\to 0^{+}\)_._
    2. _If_ \(\xi_{m}<0\) _and there exist_ \(\beta_{0},\beta_{1}>0\) _such that_ \(b_{(1,0)}(1-z,z)\in RV_{\beta_{0}}(0^{+})\) _and_ \(b_{(1,0)}(1-z,z)\in RV_{\beta_{1}}(1^{-})\)_, then_ \(f_{Q_{GP}}(q)\) _is finite for_ \(q\to 0^{+}\)_._
_SGP margins:_
1. _If_ \(\xi_{m}<1\) _then_ \(f_{Q_{SGP}}(q)\) _is finite for_ \(q\in\mathbb{R}\)_._
2. _If_ \(\xi_{m}\geq 1\) _and_ \(c\) _is positive and finite on the edges, then_ \(f_{Q_{SGP}}(q)\) _is infinite for_ \(q\in\mathbb{Z}\)_._
3. _Suppose that_ \(c(1-t,1/2)\in RV_{\alpha}(0^{+})\) _for some_ \(\alpha>0\)_, then_ \(f_{Q_{SGP}}(0)\) _is finite for_ \(\xi_{m}<1+\alpha\)_, with similar constraints for other values of_ \(q\in\mathbb{Z}\)_._
_Pareto margins:_
1. _If there is strong tail dependence in the lower left corner, then_ \(f_{Q_{P}}(1/2)=\infty\)_._
Proposition 5.1 shows that there is some advantage to using two-sided SGP margins rather than one-sided GP margins, even when interest is only in the upper right quadrant of the plane, since \(f_{Q_{SGP}}(q)\) is finite for all \(q\in\mathbb{R}\) when \(\xi_{m}<1\), whereas \(f_{Q_{GP}}(q)\) can tend to infinity for \(q\to 0^{+}\) or \(1^{-}\). This is because the rays of constant \(Q_{SGP}\) pass close to at most one corner of the copula, whereas rays of constant \(Q_{GP}\) pass close to the lower right and upper left corners when \(q\to 0^{+}\) or \(1^{-}\). Proposition 5.1 considers the case of strong tail dependence in the lower right corner. However, this is not a necessary condition for \(f_{Q_{GP}}(q)\to\infty\) for \(q\to 0^{+}\). For example, for the case of the independence copula and GP margins with \(\xi_{m}=1\), we can calculate directly \(f_{Q_{GP}}(q)=\left(\log(1+z)-\log(1-z)-2z\right)z^{-3}\), where \(z=1-2q\), and hence \(f_{Q_{GP}}(q)\to\infty\) as \(q\to 0^{+}\) or \(q\to 1^{-}\).
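The closed form quoted above for the independence copula on GP margins with \(\xi_{m}=1\) is easy to confirm numerically (illustrative sketch, not from the paper; Python with numpy/scipy assumed), by integrating the joint density \(f_{R_{GP},Q_{GP}}(r,q)=r\,f_{GP}(r(1-q))f_{GP}(rq)\) over \(r\), with \(f_{GP}(x)=(1+x)^{-2}\).

```python
# Quick numerical confirmation (not from the paper) of the closed form above.
import numpy as np
from scipy.integrate import quad

def f_gp(x):
    """GP density with unit scale and shape xi_m = 1."""
    return (1.0 + x) ** -2

def f_Q_numeric(q):
    integrand = lambda r: r * f_gp(r * (1 - q)) * f_gp(r * q)
    return quad(integrand, 0, np.inf)[0]

def f_Q_closed(q):
    z = 1.0 - 2.0 * q
    if z == 0.0:
        return 2.0 / 3.0                       # limit as q -> 1/2
    return (np.log(1 + z) - np.log(1 - z) - 2 * z) / z**3

for q in [0.05, 0.2, 0.5, 0.9]:
    print(q, f_Q_numeric(q), f_Q_closed(q))    # the two columns agree
# the common value diverges (logarithmically) as q -> 0+ or q -> 1-
```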
When one-sided margins are used, Proposition 5.1 shows that GP margins have some advantage over Pareto margins, in that \(f_{Q_{P}}(1/2)=\infty\) when there is strong lower tail dependence, whereas \(f_{Q_{GP}}(q)\) is finite for \(q\in(0,1)\). However, note that when \(\xi_{m}=1\), \((X_{GP},Y_{GP})=(X_{P}-1,Y_{P}-1)\). So, defining angular-radial variables on GP margins with the origin at \((X_{GP},Y_{GP})=(0,0)\) is equivalent to defining angular-radial variables on Pareto margins with origin at \((X_{P},Y_{P})=(1,1)\). However, when plotting the joint density on a log-log scale, it is more convenient to work with Pareto margins, since the lower end point of the margins is \(1=10^{0}\), rather than \(0=10^{-\infty}\) for GP margins. Therefore, in Section 5.3, we will work with Pareto rather than GP margins, and define the origin of the polar coordinates at \((1,1)\).
In many cases, copulas have both ARL and ARE asymptotic forms in the corners. However, as discussed in Section 4.3, it is not always possible to calculate one form in terms of the other. It is therefore useful to consider the behaviour of the angular density when the copula has ARE form in the corners. Since Proposition 5.1 established that Laplace margins have some advantages over other margins in two-dimensional cases, we only consider Laplace margins here. We also extend to the multivariate case.
**Proposition 5.2** (Angular density on Laplace margins).: _Suppose that \(d\)-dimensional copula density \(c\) satisfies the ARE model assumptions in each corner, and is continuous and finite away from the corners. Then on Laplace margins, \(f_{\mathbf{W}}(\mathbf{w})\) is finite for \(\mathbf{w}\in\mathcal{U}_{1}=\{\mathbf{x}\in\mathbb{R}^{d}:\|\mathbf{x}\|_{1}=1\}\)._
So, on Laplace margins, under the assumptions of Proposition 5.2, the angular density is finite for all angles. SPAR assumption (A3) also requires the angular density to be continuous. To complete our consideration of assumption (A3), we make the following observations about the continuity of \(f_{\mathbf{W}}\).
**Proposition 5.3** (Continuity of angular density).: _Suppose \(f_{R,\mathbf{W}}\) is continuous and bounded for all \((r,\mathbf{w})\) in the domain, and that \(f_{\mathbf{W}}(\mathbf{w})\) is finite for all \(\mathbf{w}\) in the domain. Then \(f_{\mathbf{W}}(\mathbf{w})\) is also continuous and hence satisfies SPAR assumption (A3)._
Note that Proposition 5.3 applies whenever \(f_{R,\mathbf{W}}\) is continuous and bounded, not just for the case of Laplace margins. The next lemma shows that the assumptions of Proposition 5.2 are consistent with those of Proposition 5.3, and hence also imply continuity of angular density.
**Lemma 5.4**.: _Under the assumptions of Proposition 5.2, \(f_{R,\mathbf{W}}\) is continuous and bounded for all \((r,\mathbf{w})\in[0,\infty)\times\mathcal{U}_{1}\), and hence \(f_{\mathbf{W}}\) satisfies SPAR assumption (A3)._
### SPAR models on Laplace margins
The previous section showed that under certain conditions, the angular density on Laplace margins satisfies SPAR assumption (A3). In this section we consider assumptions (A1) and (A2), regarding the tail of the conditional radial distribution, for the case of Laplace margins. The standard Laplace density function is \(f_{L}(x)=\frac{1}{2}\exp(-|x|)\) for \(x\in\mathbb{R}\). For \(\mathbf{w}=(w_{1},...,w_{d})\in\mathcal{U}_{1}\) and \(r\in[0,\infty)\), the marginal product function is
\[m_{L}(r,\mathbf{w})=2^{-d}\exp\left(-r\sum_{i=1}^{d}|w_{i}|\right)=2^{-d}\exp( -r).\]
In the examples below, we will show that many commonly-used copulas have asymptotic form (22). In this case, for \((r,\mathbf{w})\in[0,\infty)\times\mathcal{U}_{1}\), the angular-radial joint density has asymptotic form
\[f_{R,\mathbf{W}}(r,\mathbf{w})=r^{d-1}m_{L}(r,\mathbf{w})\delta_{L}(r, \mathbf{w})\sim 2^{-d}r^{d-1}\mathcal{M}(\exp(-r),\mathbf{w})\exp(-r\lambda( \mathbf{w})),\quad r\to\infty, \tag{24}\]
for some function \(\mathcal{M}(t,\mathbf{w})>0\), which is slowly-varying in \(t\) at \(0^{+}\), and \(\lambda(\mathbf{w})>0\), which is continuous in \(\mathbf{w}\). From Propositions 5.2 and 5.3, we know that this asymptotic form together with the copula being continuous and bounded away from the corners is sufficient for the angular density to satisfy assumption (A3). The next proposition gives sufficient conditions for assumptions (A1)-(A2) to be satisfied.
**Proposition 5.5**.: _Suppose that angular-radial density \(f_{R,\mathbf{W}}\) has asymptotic form (24), and is continuous and bounded, and that \(f_{R,\mathbf{W}}(r,\mathbf{w})\) is ultimately monotone in \(r\) for each \(\mathbf{w}\in\mathcal{U}_{1}\). Then joint density \(f_{R,\mathbf{W}}\) satisfies SPAR assumptions (A1) and (A2) for all \(\mathbf{w}\in\mathcal{U}_{1}\), with GP shape parameter \(\xi(\mathbf{w})=0\) and scale parameter \(\sigma(\mu,\mathbf{w})=1/\lambda(\mathbf{w})\)._
Returning briefly to the discussion of coordinate systems, angular-radial densities of the form given in Proposition 5.5 also satisfy the assumptions of Theorem 3.3(ii), with \(\sigma(\mu,\mathbf{w})=h(\mu)\alpha(\mathbf{w})\), where \(h(\mu)=1\) and \(\alpha(\mathbf{w})=1/\lambda(\mathbf{w})\). We can therefore define alternative angular-radial coordinates so that the angular and radial variables are AI. However, as we will show in the examples in Section 5.2.3, the general forms of \(\lambda(\mathbf{w})\) do not result in standard coordinate systems. The following example considers the case for the independence copula on Laplace margins, where angular and radial variables are independent using \(L^{1}\) polar coordinates.
**Example 6** (Independence on Laplace margins).: The independence copula \(c(\mathbf{u})=1\) for \(\mathbf{u}\in[0,1]^{d}\), has asymptotic form (22) on Laplace margins with \(\mathcal{M}(r,\mathbf{w})=1\) and \(\lambda(\mathbf{w})=1\), for \((r,\mathbf{w})\in[0,\infty)\times\mathcal{U}_{1}\). The angular-radial joint density is \(f_{R,\mathbf{W}}(r,\mathbf{w})=2^{-d}r^{d-1}\exp(-r)\). The assumptions of Proposition 5.5 are therefore satisfied, so \(c\) has a SPAR representation on Laplace margins. In this case, the angular density is constant, \(f_{\mathbf{W}}(\mathbf{w})=2^{-d}(d-1)!\). \(\blacksquare\)
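A quick simulation consistent with Example 6 (illustrative only, not from the paper; Python with numpy/scipy assumed): the product form of \(f_{R,\mathbf{W}}\) implies that for \(d\) independent standard Laplace components, the radius \(R=\|\mathbf{X}\|_{1}\) follows a Gamma\((d,1)\) distribution, independently of the angle.

```python
# Quick simulation (not from the paper) consistent with Example 6.
import numpy as np
from scipy.stats import kstest

rng = np.random.default_rng(1)
d, n = 2, 100_000
X = rng.laplace(size=(n, d))           # independent standard Laplace components
R = np.abs(X).sum(axis=1)              # L1 radius
print(kstest(R, "gamma", args=(d,)))   # tiny KS statistic: consistent with Gamma(d, 1)
```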
#### 5.2.1 Alternative approximation for Laplace margins
In the SPAR model (4), we assume that the tail of the conditional radial density \(f_{R|\mathbf{W}}(r|\mathbf{w})\) can be approximated by a GP distribution. This is a direct analogue to the univariate case. In the multivariate setting, we have
\[f_{R|\mathbf{W}}(r|\mathbf{w})=\frac{r^{d-1}m_{*}(r,\mathbf{w})\delta_{*}(r,\mathbf{w})}{f_{\mathbf{W}}(\mathbf{w})}.\]
In the multivariate case, the assumption about the form of the tail could either be applied to \(f_{R|\mathbf{W}}(r|\mathbf{w})\) itself, or it could be applied to the tail of the angular-radial dependence function \(\delta_{*}(r,\mathbf{w})\). In the examples below, we will show that for many families of copulas, the angular-radial dependence function on Laplace margins takes a specific asymptotic form, where the slowly-varying function in (22) is given by \(\mathcal{M}(\exp(-r),\mathbf{w})=g(\mathbf{w})r^{\beta(\mathbf{w})}\), for bounded functions \(g(\mathbf{w})>0\) and \(\beta(\mathbf{w})\). That is, we have
\[\delta_{L}(r,\mathbf{w})\sim g(\mathbf{w})r^{\beta(\mathbf{w})}\exp(-r(\lambda(\mathbf{w})-1)),\quad r\to\infty. \tag{25}\]
The independence copula is a trivial example, with \(g(\mathbf{w})=1\) and \(\beta(\mathbf{w})=0\). Similar forms were also considered by Wadsworth and Campbell [36] for exponential margins.
If \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25) with \(\beta(\mathbf{w})=0\), then \(f_{R|\mathbf{W}}(r|\mathbf{w})\sim\tilde{g}(\mathbf{w})r^{d-1}\exp(-r\lambda(\mathbf{w}))\) for \(r\to\infty\), where \(\tilde{g}(\mathbf{w})=2^{-d}(g(\mathbf{w})/f_{\mathbf{W}}(\mathbf{w}))\). In this case it is more accurate to apply the assumption of a GP-type (exponential) tail to \(\delta_{L}(r,\mathbf{w})\), rather than to the conditional radial density. This motivates an alternative POT assumption for the conditional radial density on Laplace margins, that
\[f_{R|\mathbf{W}}(r|\mathbf{w})\sim\frac{r^{d-1}}{A(\mathbf{w})}\,\exp\left(-\frac{r}{\sigma(\mathbf{w})}\right),\quad r\to\infty, \tag{26}\]
where \(\sigma(\mathbf{w})=1/\lambda(\mathbf{w})\) and \(A(\mathbf{w})\) is a normalisation constant. The RHS of (26) is a gamma density function with shape \(d\) and scale \(\sigma(\mathbf{w})\), and hence \(A(\mathbf{w})=(d-1)!\left(\sigma(\mathbf{w})\right)^{d}\). The difference between the assumption that the tail of \(f_{R|\mathbf{W}}\) is exponential and the assumption that it is gamma becomes larger as the number of dimensions \(d\) increases. Therefore, in many cases assumption (26) will be a more accurate basis for SPAR models on Laplace margins than model (4), where the tail of \(f_{R|\mathbf{W}}\) is approximated with a GP distribution. If assumption (26) is assumed to hold above some threshold \(\mu(\mathbf{w})\) with corresponding exceedance probability \(\zeta(\mathbf{w})\), then a modified version of model (4) for Laplace margins can be written as
\[f_{R,\mathbf{W}}(r,\mathbf{w})=\frac{\zeta(\mathbf{w})f_{\mathbf{W}}(\mathbf{ w})}{A(\mathbf{w})}r^{d-1}\,\exp\left(-\frac{r}{\sigma(\mathbf{w})}\right), \quad r\geq\mu(\mathbf{w}), \tag{27}\]
where the normalisation constant becomes
\[A(\mathbf{w})=\int_{\mu(\mathbf{w})}^{\infty}r^{d-1}\,\exp\left(-\frac{r}{ \sigma(\mathbf{w})}\right)dr=(\sigma(\mathbf{w}))^{d}\Gamma\left(d,\frac{\mu( \mathbf{w})}{\sigma(\mathbf{w})}\right),\]
and \(\Gamma(\cdot,\cdot)\) is the upper incomplete gamma function. In the case \(d=2\) we have
\[A(\mathbf{w})=\sigma(\mathbf{w})(\sigma(\mathbf{w})+\mu(\mathbf{w}))\exp \left(-\frac{\mu(\mathbf{w})}{\sigma(\mathbf{w})}\right).\]
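A minimal sketch of evaluating the modified SPAR density (27) in two dimensions is given below (not from the paper; Python with numpy assumed). The angular density, scale, threshold and exceedance-probability functions are user-supplied placeholders here rather than fitted quantities; the placeholder values correspond to a uniform angular density \(f_{Q}(q)=1/4\) with constant scale, threshold and exceedance probability.

```python
# Minimal sketch (not from the paper) of the modified SPAR density (27) for d = 2,
# using the closed-form normalisation A(w) = sigma (sigma + mu) exp(-mu / sigma).
import numpy as np

def spar_density_2d(r, q, f_Q, sigma, mu, zeta):
    """Joint density f_{R,Q}(r, q) of model (27) for r >= mu(q), and 0 below."""
    s, m = sigma(q), mu(q)
    A = s * (s + m) * np.exp(-m / s)
    return np.where(r >= m, zeta(q) * f_Q(q) / A * r * np.exp(-r / s), 0.0)

# Placeholder inputs (assumptions for illustration only, not fitted values).
value = spar_density_2d(r=6.0, q=0.3,
                        f_Q=lambda q: 0.25, sigma=lambda q: 1.0,
                        mu=lambda q: 4.0, zeta=lambda q: 0.1)
print(value)
```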
#### 5.2.2 Relation to limit sets
It is useful to introduce the concept of limit sets and show how these are related to SPAR models on Laplace margins.
**Definition 19** (Limit set).: Let \(\mathbf{X}_{1},\ldots,\mathbf{X}_{n}\) be a sequence of independent and identically distributed random vectors in \(\mathbb{R}^{d}\). Suppose that for some non-random compact set \(G\subset\mathbb{R}^{d}\), there is a sequence of real constants \(r_{n}>0\), with \(r_{n}\to\infty\) as \(n\to\infty\), such that as \(n\to\infty\),
\[G_{n}=\{\mathbf{X}_{1}/r_{n},\cdots,\mathbf{X}_{n}/r_{n}\}\overset{P}{\to}G,\]
with respect to the Hausdorff metric, where \(\overset{P}{\to}\) denotes convergence in probability. Then \(G\) is referred to as the limit set of the scaled sample cloud [19].
Kinoshita and Resnick [37] showed that when the limit set exists, it is compact and star-shaped. The limit set can be used to describe various extremal dependence properties of \(\mathbf{X}_{j}\)[38, 39]. Moreover, Nolde and Wadsworth [20] showed how various dependence coefficients and representations for multivariate extremes could be related to properties of the limit set, including those of Ledford and Tawn [11, 12], Heffernan and Tawn [5], and Wadsworth and Tawn [25]. We will not repeat the details here, but note that SPAR representations can also be related to these methods, through the link to the limit set, described below.
Limit sets can be viewed as a multivariate analogue to the scaling of univariate maxima. Consider a variable \(X_{E}\) which follows an exponential distribution with scale parameter \(\sigma\), with cdf \(F_{X_{E}}(x)=1-\exp(-x/\sigma)\), \(x\geq 0\). Define
\(M_{n}\) to be the maximum of \(n\) independent variables with cdf \(F_{X_{E}}\). As \(n\to\infty\), the distribution of \(M_{n}\) converges to a Gumbel distribution with scale \(\sigma\) and location \(\sigma\log(n)\), i.e. \([F_{X_{E}}(x)]^{n}\sim\exp(-\exp(-(x-\sigma\log(n))/\sigma))\), as \(n\to\infty\). Hence the distribution of scaled maxima \(M_{n}/\log(n)\) converges to a Gumbel distribution with scale \(\sigma/\log(n)\) and location \(\sigma\). Since \(\sigma/\log(n)\to 0\) as \(n\to\infty\), the limit distribution is degenerate and we have \(\lim_{n\to\infty}\Pr(|M_{n}/\log(n)-\sigma|>\varepsilon)=0\) for all \(\varepsilon>0\), and hence \(M_{n}/\log(n)\) converges in probability to \(\sigma\). In the multivariate setting, the boundary of the limit set can be viewed as the radius of scaled maxima from the conditional radial distribution.vii The following proposition, which is an adaptation of Proposition 2.2 in [20], makes this explicit.
Footnote vii: In the multivariate context, the observations are distributed over angle. For a small \(\epsilon\)-neighbourhood around a particular angle \(\mathbf{w}\), the mean number of observations falling within this neighbourhood is asymptotically \(m=n\epsilon V_{d}f_{\mathbf{W}}(\mathbf{w})\) as \(\epsilon\to 0^{+}\), where \(V_{d}\) is the volume of the \(d\)-dimensional \(L^{1}\) sphere. However, if we set \(\epsilon=1/\log(n)\) then \(\log(m)/\log(n)\to 1\) as \(n\to\infty\), and we obtain a similar convergence of scaled maxima at any particular angle.
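Before stating the multivariate result, the univariate scaling argument above is easily checked by simulation (illustrative only, not from the paper; Python with numpy assumed).

```python
# Quick simulation (not from the paper): M_n / log(n) concentrates around sigma.
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
for n in [10**3, 10**5, 10**7]:
    M_n = rng.exponential(scale=sigma, size=n).max()
    print(n, M_n / np.log(n))   # close to sigma = 2, with fluctuations of order 1/log(n)
```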
**Proposition 5.6**.: _Suppose that the random vector \(\mathbf{X}\in\mathbb{R}^{d}\) has standard Laplace margins and has joint density function \(f_{\mathbf{X}}\) that satisfies_
\[f_{\mathbf{X}}(r\mathbf{x})\sim\mathcal{M}(\exp(-r),\mathbf{x})\exp\left(- \frac{r}{\sigma(\mathbf{x})}\right),\quad r\to\infty, \tag{28}\]
_for \(\mathbf{x}\in\mathbb{R}^{d}\backslash\{\mathbf{0}\}\), where \(\mathcal{M}(t,\mathbf{x})\) is slowly-varying in \(t\) at \(0^{+}\) and the function \(\sigma\) is positive, continuous and homogeneous of order \(-1\). Define \(G_{n}=\{\mathbf{X}_{1}/\log(n),\ldots,\mathbf{X}_{n}/\log(n)\}\) to be a sequence of scaled random samples from \(f_{\mathbf{X}}\). Then the following results hold._
1. _The scaled sample cloud,_ \(G_{n}\)_, has a limit set,_ \(G\)_, with boundary_ \(\partial G=\{\mathbf{w}\,\sigma(\mathbf{w}):\mathbf{w}\in\mathcal{U}_{1}\}\)_._
2. _The limit set is bounded by the unit box,_ \(G\subseteq\{\mathbf{x}\in\mathbb{R}^{d}:\|\mathbf{x}\|_{\infty}\leq 1\}\)_, and this bound is reached in each dimension, i.e. for_ \(j=1,...,d\)_,_ \[\max\{x_{j}:(x_{1},...,x_{d})\in G\}=1,\] \[\min\{x_{j}:(x_{1},...,x_{d})\in G\}=-1.\]
Clearly, both SPAR model (4) and modified SPAR model (27), which have constant shape parameter \(\xi(\mathbf{w})=0\) and scale function \(\sigma(\mathbf{w})\), satisfy the assumptions of Proposition 5.6. Therefore, the SPAR framework provides a rigorous means of estimating limit sets when they exist, since the GP scale parameter is equal to the radius of the limit set. A similar approach was proposed by Simpson and Tawn [40], where a large quantile of the conditional radial distribution was used as a proxy for the limit set, estimated by fitting a GP distribution to the tail of the conditional radial distribution (see also [41]). However, part (i) of Proposition 5.6 shows explicitly that the \(L^{1}\) radius of points on the boundary of the limit set is described by the scale function of the SPAR model, since \(\|\mathbf{w}\sigma(\mathbf{w})\|_{1}=\sigma(\mathbf{w})\) for \(\mathbf{w}\in\mathcal{U}_{1}\). Part (ii) provides bounds on the scale function in these cases, namely that for \(\mathbf{w}=(w_{1},\ldots,w_{d})\in\mathcal{U}_{1}\) we have \(0<|w_{j}|\sigma(\mathbf{w})\leq 1\) for \(j=1,\ldots,d\) and that the upper bound is achieved for some \(|w_{j}|\geq 1/d\). The upper and lower bounds in part (ii) are simply a consequence of the choice of margins, for which scaled maxima and minima must converge to one, as described above.
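The following rough sketch (not from the paper; Python with numpy/scipy assumed) illustrates the idea of the preceding paragraph: simulate from a Gaussian copula, transform to Laplace margins, take a narrow angular bin, and estimate the exponential (GP with \(\xi=0\)) scale of the radial exceedances. The resulting \(\hat{\sigma}(\mathbf{w})\) estimates the \(L^{1}\) radius of the limit-set boundary at that angle; at the diagonal it should be near \(1/\lambda(1/2,1/2)=1+\rho\), although the estimate sits somewhat above this at finite thresholds, for the reasons discussed in Section 5.2.1.

```python
# Rough sketch (not from the paper): limit-set radius from the SPAR scale.
import numpy as np
from scipy.stats import norm, laplace

rng = np.random.default_rng(7)
rho, n = 0.5, 500_000
Z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
X = laplace.ppf(norm.cdf(Z))                   # Gaussian copula on Laplace margins

R = np.abs(X).sum(axis=1)                      # L1 radius
W = X / R[:, None]                             # point on the L1 unit circle
in_bin = (np.abs(W[:, 0] - 0.5) < 0.05) & (W[:, 1] > 0)   # bin around w = (1/2, 1/2)
r = R[in_bin]
u = np.quantile(r, 0.99)                       # high angle-wise threshold
sigma_hat = np.mean(r[r > u] - u)              # exponential-scale (GP, xi = 0) estimate
print(sigma_hat)                               # in the region of 1 + rho = 1.5
```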
In Section 4.3 it was mentioned that for two-dimensional copula densities which have both an ARL and ARE representation, when \(\kappa_{(u_{0},v_{0})}=1\) the function \(\lambda_{(u_{0},v_{0})}(1-w,w)\) is not differentiable at \(w=1/2\). The reason for this is now evident. Namely, when \(\kappa_{(u_{0},v_{0})}=1\) we have \(\lambda_{(u_{0},v_{0})}(1/2,1/2)=1/2\) and hence the \(L^{1}\) radius of the limit set is \(1/\lambda_{(u_{0},v_{0})}(1/2,1/2)=2\) in the direction \(((-1)^{u_{0}+1},(-1)^{v_{0}+1})\). Since the limit set is bounded by the unit box, and the upper bound is achieved in the corner \(((-1)^{u_{0}+1},(-1)^{v_{0}+1})\), the function \(\lambda_{(u_{0},v_{0})}(1-w,w)\) must have a cusp at \(w=1/2\), and hence is not differentiable at this point.
Again returning to the question of coordinate systems discussed in Theorem 3.3, we can see that when a joint density on Laplace margins has asymptotic form (28), the coordinate system required to obtain AI angular and radial variables can be defined in terms of the gauge function of the limit set.
Finally, we note that there are copulas for which the limit set on Laplace margins is degenerate in certain regions, i.e. the boundary has zero radius at certain angles. Examples of this type are EV copulas with Husler-Reiss dependence model and the bivariate exponential copula, discussed in Examples 11 and 12 below. We will show that in these cases, the copula still has a SPAR representation, despite the limit set being degenerate.
#### 5.2.3 Bivariate examples
In the bivariate examples below, we will use the notation \(\mathbf{w}=(w_{1},w_{2})=(\cos_{1}(q),\sin_{1}(q))\) and \(\sigma(q)=1/\lambda(\mathbf{w})\) for \(q\in(-2,2]\). We will demonstrate that SPAR models on Laplace margins can represent a range of tail dependence levels.
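To make the bivariate examples below easier to reproduce, the following sketch sets out one concrete parameterisation of the \(L^{1}\) angular coordinates and the Laplace margins used in this section. This is an illustrative assumption: the formal definitions of \(\cos_{1}\), \(\sin_{1}\) and the angle map are given earlier in the paper (Section 2), and the only properties relied on here are \(\cos_{1}(q)=1-q\), \(\sin_{1}(q)=q\) on \([0,1]\) and \(|\cos_{1}(q)|+|\sin_{1}(q)|=1\) for all \(q\in(-2,2]\). All function names are choices made for these sketches, not part of the paper.

```python
import numpy as np

def cos1(q):
    """L1 'cosine': first component of the unit-L1 vector at pseudo-angle q in (-2, 2]."""
    return 1.0 - abs(q)

def sin1(q):
    """L1 'sine': second component of the unit-L1 vector at pseudo-angle q in (-2, 2]."""
    return q if abs(q) <= 1.0 else np.sign(q) * (2.0 - abs(q))

def to_polar(x, y):
    """Map Cartesian (x, y) != (0, 0) to (r, q) with r = |x| + |y| and (x, y) = r*(cos1(q), sin1(q))."""
    r = abs(x) + abs(y)
    w1, w2 = x / r, y / r
    if w1 >= 0:
        q = w2
    elif w2 == 0:
        q = 2.0
    else:
        q = np.sign(w2) * (2.0 - abs(w2))
    return r, q

def laplace_pdf(x):
    """Standard Laplace density."""
    return 0.5 * np.exp(-abs(x))

def laplace_cdf(x):
    """Standard Laplace distribution function."""
    return 0.5 * np.exp(x) if x < 0 else 1.0 - 0.5 * np.exp(-x)

# Round-trip check of the coordinate convention.
for x, y in [(1.2, 0.3), (-0.7, 0.4), (-0.2, -1.5), (0.6, -0.9)]:
    r, q = to_polar(x, y)
    assert np.isclose(r * cos1(q), x) and np.isclose(r * sin1(q), y)
```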
From the discussion in Section 4.2, when \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25) and also has an ARL representation, the tail orders in each quadrant are given by \(\kappa_{(u_{0},v_{0})}=\lambda((-1)^{u_{0}+1},(-1)^{v_{0}+1})\). Throughout this section we use the modified SPAR model (27) to approximate the joint density. In the examples below, the angular density and thresholds have been calculated using numerical integration in most cases. Example 3 considered SPAR models for bivariate Gaussian and t distributions on their own respective margins. In examples 9 and 10, we revisit these examples and show that Gaussian and t copulas also have SPAR representations on Laplace margins. Extreme value copulas have interesting properties so we consider this family of copulas in some detail. Expressions for the copulas and copula densities presented here can be found in reference works, e.g. [6].
**Example 7** (Frank copula).: The Frank copula density is given by
\[c(u,v)=-\alpha h(1)\frac{1+h(u+v)}{(h(u)h(v)+h(1))^{2}},\]
for \((u,v)\in[0,1]^{2}\) and \(\alpha\in\mathbb{R}\setminus\{0\}\), where \(h(z)=e^{-\alpha z}-1\). It is straightforward to show that \(c(u,v)\in(0,\infty)\) for \((u,v)\in[0,1]^{2}\), and hence \(\delta_{L}(r,\mathbf{w})\) is asymptotically a function of \(\mathbf{w}\) only, and therefore of form (25) with \(\beta(\mathbf{w})=0\), \(\lambda(\mathbf{w})=1\). This corresponds to asymptotic independence in each quadrant. The angular density for various values of \(\alpha\) is shown in Figure 11 (top row). A comparison of the joint density from the SPAR model (dashed black lines) with the true joint density (solid coloured lines) is shown for a case with \(\alpha=10\) and threshold exceedance probability \(\zeta=10^{-4}\). In this case, the modified SPAR model (27) is asymptotically exact. However, a low threshold exceedance probability is required to give good agreement between the SPAR density and true density, due to slow convergence of \(\delta_{L}(r,\mathbf{w})\) to the limit in the 2nd and 4th quadrants of the plane. Any copula with continuous density which is positive along the edges, has a SPAR representation on Laplace margins, determined solely by the angular density \(f_{Q}(q)\). Other examples include Plackett, Ali-Mikhail-Haq, and Farlie-Gumbel-Morgenstern copulas. Note that for these cases the limit sets are all the same, but the SPAR models differ, due to the different angular densities.
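As an illustration of how the angular densities and thresholds in this section can be obtained by numerical integration, the following sketch computes \(f_{Q}(q)\) for the Frank copula on Laplace margins, using the copula density formula above and the change-of-variables factor \(r\) for the two-dimensional \(L^{1}\) polar transformation. The coordinate helpers repeat the illustrative convention from the previous sketch so that the code is self-contained; only the Frank density itself is taken from the text.

```python
import numpy as np
from scipy.integrate import quad

alpha = 10.0  # Frank dependence parameter, chosen for illustration

def h(z):
    return np.exp(-alpha * z) - 1.0

def frank_density(u, v):
    # Frank copula density, as given in Example 7.
    return -alpha * h(1.0) * (1.0 + h(u + v)) / (h(u) * h(v) + h(1.0)) ** 2

def cos1(q): return 1.0 - abs(q)
def sin1(q): return q if abs(q) <= 1.0 else np.sign(q) * (2.0 - abs(q))
def lap_pdf(x): return 0.5 * np.exp(-abs(x))
def lap_cdf(x): return 0.5 * np.exp(x) if x < 0 else 1.0 - 0.5 * np.exp(-x)

def joint_density_laplace(x, y):
    return frank_density(lap_cdf(x), lap_cdf(y)) * lap_pdf(x) * lap_pdf(y)

def f_Q(q):
    # Angular density: integrate the angular-radial density r * f_{X,Y} over the radius.
    integrand = lambda r: r * joint_density_laplace(r * cos1(q), r * sin1(q))
    return quad(integrand, 0.0, np.inf, limit=200)[0]

qs = np.linspace(-1.99, 2.0, 81)
vals = np.array([f_Q(q) for q in qs])
print("approximate normalisation:", float(vals.sum() * (qs[1] - qs[0])))  # should be close to 1
```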
**Example 8** (Joe copula).: The Joe copula density is given by
\[c(1-u,1-v)=(uv)^{\alpha-1}z^{\tfrac{1}{\alpha}-2}\left(z+\alpha-1\right),\]
where \(z=u^{\alpha}+v^{\alpha}-(uv)^{\alpha}\), \((u,v)\in[0,1]^{2}\) and \(\alpha\geq 1\). The Joe copula with \(\alpha=1\) is equal to the independence copula. The dependence function has asymptotic form (25) with \(\beta(\mathbf{w})=0\) and for \(\alpha>1\) we have
\[\lambda(\mathbf{w})=\begin{cases}1,&q\in(-2,-1]\\ 1+(\alpha-1)|w_{1}|,&q\in(-1,0]\\ 1-\alpha+(2\alpha-1)\max(|w_{1}|,|w_{2}|),&q\in(0,1]\\ 1+(\alpha-1)|w_{2}|&q\in(1,2].\end{cases}\]
The angular density, scale parameter and corresponding limit sets are shown in Figure 11 for various values of \(\alpha\). When \(\alpha>1\) the distribution is asymptotically dependent in the 1st quadrant and negatively dependent in the 2nd and 4th quadrants. The distribution is asymptotically quadrant independent in the 3rd quadrant for all values of \(\alpha\). A comparison of the joint density with the SPAR approximation is shown in Figure 11 for \(\alpha=3\) and threshold exceedance probability \(\zeta=0.05\). The agreement is good in all quadrants.
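The piecewise form of \(\lambda(\mathbf{w})\) above translates directly into the GP scale function \(\sigma(q)=1/\lambda(\mathbf{w})\) and the limit-set boundary points \(\mathbf{w}\,\sigma(\mathbf{w})\). A minimal sketch, assuming the \(q\)-parameterisation of the earlier sketches and using the relation \(\kappa_{(u_{0},v_{0})}=\lambda((-1)^{u_{0}+1},(-1)^{v_{0}+1})\) quoted at the start of this subsection together with the homogeneity of \(\lambda\):

```python
import numpy as np

alpha = 3.0  # Joe dependence parameter (alpha > 1), chosen for illustration

def cos1(q): return 1.0 - abs(q)
def sin1(q): return q if abs(q) <= 1.0 else np.sign(q) * (2.0 - abs(q))

def lam_joe(q):
    # Piecewise scale exponent lambda(w) for the Joe copula on Laplace margins (Example 8).
    w1, w2 = cos1(q), sin1(q)
    if -2.0 < q <= -1.0:
        return 1.0
    if -1.0 < q <= 0.0:
        return 1.0 + (alpha - 1.0) * abs(w1)
    if 0.0 < q <= 1.0:
        return 1.0 - alpha + (2.0 * alpha - 1.0) * max(abs(w1), abs(w2))
    return 1.0 + (alpha - 1.0) * abs(w2)

# GP scale / limit-set radius at the diagonal directions, and implied quadrant tail orders.
for q in (0.5, 1.5, -0.5, -1.5):
    sigma = 1.0 / lam_joe(q)
    print(f"q={q:+.1f}: sigma={sigma:.3f}, "
          f"limit-set point=({cos1(q) * sigma:+.2f}, {sin1(q) * sigma:+.2f}), "
          f"implied quadrant tail order kappa={2.0 * lam_joe(q):.2f}")
```

The output reproduces the behaviour described above: \(\kappa=1\) in the first quadrant (asymptotic dependence, with the limit set touching the box corner \((1,1)\)), \(\kappa=\alpha+1\) in the second and fourth quadrants (negative dependence), and \(\kappa=2\) in the third quadrant (quadrant independence).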
**Example 9** (Gaussian copula).: The Gaussian copula density, with Pearson correlation \(\rho\in(-1,1)\), is given by
\[c(u,v)=\frac{1}{\sqrt{1-\rho^{2}}}\exp\left(-\frac{\rho^{2}(x^{2}+y^{2})-2 \rho xy}{2(1-\rho^{2})}\right),\]
for \((u,v)\in[0,1]^{2}\), where \(x=\Phi^{-1}(u)\) and \(y=\Phi^{-1}(v)\), and \(\Phi^{-1}\) is the inverse of the standard normal cdf. \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25) with
\[\lambda(\mathbf{w})=\frac{1-2\rho\operatorname{sgn}(w_{1}w_{2})\sqrt{|w_{1}w_ {2}|}}{1-\rho^{2}},\]
and \(\beta(\mathbf{w})=(\lambda(\mathbf{w})-1)/2\) (see Appendix B.2). The modified SPAR model for the Gaussian copula on Laplace margins is therefore not asymptotically exact. However, an approximation using \(\beta(\mathbf{w})=0\) leads to a reasonably good agreement, as shown in Figure 11. In the example shown, \(\rho=0.6\) and \(\zeta=0.01\). The angular density, scale parameter and corresponding limit sets are also shown in Figure 11 for various values of \(\rho\).
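A quick numerical illustration of the scale function above: evaluating \(\lambda(\mathbf{w})\) at the diagonal directions, and using the quadrant tail-order relation noted earlier, recovers the familiar Gaussian coefficients \(2/(1+\rho)\) for the same-sign quadrants and \(2/(1-\rho)\) for the opposite-sign quadrants. This is a sketch that simply evaluates the stated formula; the parameter values are arbitrary.

```python
import numpy as np

def lam_gauss(w1, w2, rho):
    # Scale exponent lambda(w) for the Gaussian copula on Laplace margins (Example 9).
    return (1.0 - 2.0 * rho * np.sign(w1 * w2) * np.sqrt(abs(w1 * w2))) / (1.0 - rho ** 2)

for rho in (0.3, 0.6, 0.9):
    kappa_same = 2.0 * lam_gauss(0.5, 0.5, rho)    # both components large (or both small)
    kappa_opp = 2.0 * lam_gauss(0.5, -0.5, rho)    # one component large, the other small
    print(f"rho={rho}: kappa_same={kappa_same:.4f} (expect {2 / (1 + rho):.4f}), "
          f"kappa_opposite={kappa_opp:.4f} (expect {2 / (1 - rho):.4f})")
```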
**Example 10** (t copula).: The t copula density, with Pearson correlation \(\rho\in(-1,1)\) and \(\nu>0\) degrees of freedom is given by
\[c(u,v)=\left(\frac{\Gamma(\nu/2)}{\Gamma((\nu+1)/2)}\right)^{2}\,\frac{\nu}{2 \sqrt{1-\rho^{2}}}\,\left(1+\frac{x^{2}+y^{2}-2\rho xy}{\nu(1-\rho^{2})}\right)^ {-\nu/2-1}\,\left(\left(1+\frac{x^{2}}{\nu}\right)\left(1+\frac{y^{2}}{\nu} \right)\right)^{(\nu+1)/2},\]
for \((u,v)\in[0,1]^{2}\), where \(x=F_{t}^{-1}(u;\nu)\), \(y=F_{t}^{-1}(v;\nu)\) and \(F_{t}^{-1}(\cdot;\nu)\) is the inverse cdf of the univariate t distribution on \(\nu\) degrees of freedom. \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25) with \(\beta(\mathbf{w})=0\) and (see Appendix B.3)
\[\lambda(\mathbf{w})=\max(|w_{1}|,|w_{2}|)+\frac{1}{\nu}||w_{1}|-|w_{2}||.\]
In this case, the SPAR model is asymptotically exact. The scale parameter and corresponding limit sets are independent of the correlation coefficient \(\rho\). However, the angular density is a function of both \(\nu\) and \(\rho\). The example shown in Figure 11 for the joint density has \(\rho=0.6\), \(\nu=2\) and threshold exceedance probability \(\zeta=0.05\), the same case as considered in Example 3 and shown in Figure 5. On Laplace margins, the agreement between the SPAR model and the true density is slightly better in the first and third quadrants, than in the 2nd and 4th.
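To illustrate the statement that the scale parameter and limit set of the t copula do not depend on \(\rho\), the sketch below evaluates \(\sigma(q)=1/\lambda(\mathbf{w})\) from the formula above at a few angles, assuming the \(q\)-parameterisation used in the earlier sketches. At the diagonal directions \(\lambda=1/2\), so the limit set reaches the corners of the unit box (tail order one in every quadrant, consistent with the asymptotic dependence of the t copula), while along the axes the radius is \(\nu/(\nu+1)\).

```python
def cos1(q): return 1.0 - abs(q)
def sin1(q): return q if abs(q) <= 1.0 else (1 if q > 0 else -1) * (2.0 - abs(q))

def lam_t(q, nu):
    # Scale exponent lambda(w) for the t copula on Laplace margins (Example 10).
    w1, w2 = cos1(q), sin1(q)
    return max(abs(w1), abs(w2)) + abs(abs(w1) - abs(w2)) / nu

nu = 2.0
for q in (0.0, 0.25, 0.5, 1.0, -0.5, 1.5):
    sigma = 1.0 / lam_t(q, nu)
    print(f"q={q:+.2f}: sigma={sigma:.3f}, "
          f"limit-set point=({cos1(q) * sigma:+.3f}, {sin1(q) * sigma:+.3f})")
# q=0.5 gives sigma=2 and the point (1, 1): the limit set touches the box corner.
# q=0 gives sigma=nu/(nu+1): along the axes the boundary lies strictly inside the box.
```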
**Example 11** (Extreme value copulas).: As mentioned above, EV copulas have ARE form in the lower tail and hence from Propositions 4.3 and 5.5, they satisfy the SPAR model assumptions for the negative orthant of \(\mathbb{R}^{d}\). In other orthants, it is useful to consider some specific examples to illustrate various possibilities. Appendix B.1 considers two-dimensional cases where the stable tail dependence function follows either the symmetric logistic, asymmetric logistic, or Husler-Reiss form. Here, we summarise results that are of general interest and refer to Appendix B.1 for details.
For the case of symmetric logistic dependence with parameter \(\alpha>1\), we find that \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25), with \(\beta(\mathbf{w})=0\) for \(q\in(-2,-1)\cup[0,1]\) and \(\beta(\mathbf{w})=1-\alpha\) for \(q\in[-1,0)\cup(1,2]\). The angular density, GP scale parameter and limit sets are shown in Figure 12 for various values of \(\alpha\). A comparison of the SPAR approximation with the true density is also shown, for the case \(\alpha=2\) and \(\zeta=0.05\). The agreement is good in the first and third quadrants, but slightly worse in the second and fourth due to the non-zero value of \(\beta\) for this angular range.
For the case of the asymmetric logistic model [42], we find that \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25), with \(\beta(\mathbf{w})=0\) for all \(q\). The limit set is symmetric about the line \(q=1/2\). However, at finite levels, the corresponding isodensity contours are not symmetric about \(q=1/2\), as illustrated in Figure 12, for a case with \(\alpha=5\), \(\gamma_{1}=0.9\) and \(\gamma_{2}=0.1\). To improve the agreement between the SPAR approximation and true joint density at finite levels, we can define the angular-radial coordinate system relative to a different origin. A finite shift in the origin does not affect the asymptotic behaviour of \(\delta_{L}(r,\mathbf{w})\), and hence the GP scale and shape parameters for the SPAR approximation, but can improve the rate of convergence of the true density to the SPAR approximation. The SPAR approximation for a case with parameters \(\alpha=5\), \(\gamma_{1}=0.9\) and \(\gamma_{2}=0.1\) is shown in Figure 12, with the origin placed at \((x_{0},y_{0})=(-0.859,-3.3)\). The rationale for this choice of origin is discussed in Appendix B.1, where it is shown to improve the speed of convergence of \(\delta_{L}(r,\mathbf{w})\) to its asymptotic form. In the first quadrant, the threshold has been set as \(\mu(q)=\max\{10,|x_{0}/\cos_{1}(q)|,|y_{0}/\sin_{1}(q)|\}\), to ensure good agreement between the asymptotic SPAR approximation and the true density. The values \(|x_{0}/\cos_{1}(q)|\) and \(|y_{0}/\sin_{1}(q)|\) correspond to the positive x- and y-axes of the original Cartesian coordinate system, and the minimum value of \(10\) has been chosen somewhat arbitrarily through trial and error so that the asymptotic approximation is reasonable. In the third quadrant, the density is strictly decreasing for all \(q\in[-2,-1]\) and the threshold is set at a constant value \(\mu(q)=10\). Similarly, we set \(\mu(q)=\max\{10,|y_{0}/\sin_{1}(q)|\}\) in the second quadrant, and \(\mu(q)=\max\{10,|x_{0}/\cos_{1}(q)|\}\) in the fourth quadrant. The corresponding threshold exceedance probabilities are then calculated using numerical integration. In the regions where the SPAR model is defined, the agreement with the true density contours is good, including the region along the higher-density 'finger' in the first quadrant.
Finally, for the case of the Husler-Reiss dependence model [35], with parameter \(\alpha>0\), we find that \(\delta_{L}(r,\mathbf{w})\) does not have asymptotic form (25) or a continuous limit set. Instead, we find that \(\delta_{L}(r,\mathbf{w})\propto\exp\left(-\frac{1}{2}\left[\left(\frac{ \alpha}{2}(w_{1}-w_{2})r\right)^{2}-r\right]\right)\) for \(q\in(0,1)\). However, it is shown in Appendix B.1 that the SPAR assumptions (A1)-(A3) are nevertheless satisfied for this copula. This illustrates that the conditions of Proposition 5.5 are sufficient but not necessary for a copula to have a SPAR representation on Laplace margins. Moreover, this example shows that the SPAR approach provides a more general framework for representing multivariate extremes than using limit sets alone.
In summary, we find that the lower tail of EV copulas have a 'natural' representation on Laplace margins, but that the case for other orthants is more complicated. In Section 5.3 it will be shown that the upper tails of EV copulas have a natural representation on heavy-tailed margins, but that there are severe limitations on the types of copula that have SPAR representations on these margins.
**Example 12** (Bivariate exponential copula).: This example illustrates that although the tail order is infinite in the lower left corner, and the limit set is degenerate in this quadrant, the bivariate exponential copula does have a SPAR representation on Laplace margins. The bivariate exponential copula with parameter \(\alpha\in[0,1]\) is given by
\[C(u,v)=uv\exp(-\alpha\log(u)\log(v)),\]
for \((u,v)\in[0,1]^{2}\), with corresponding density function
\[c(u,v)=\exp(-\alpha\log(u)\log(v))\left(\alpha^{2}\log(u)\log(v)-\alpha\log( uv)-\alpha+1\right).\]
Figure 11: SPAR models for Frank (top row), Joe (2nd row), Gaussian (3rd row) and t (bottom row) copulas on Laplace margins. Left column: angular density for various values of dependence parameters \(\alpha\), \(\rho\) and \(\nu\). Second column: GP scale parameter. Third column: Limit sets. Fourth column: Isodensity contours for joint density (coloured lines) and SPAR approximation (black dashed lines). Contours are at density levels \(10^{-2}\), \(10^{-4}\), \(10^{-6}\), etc.
The case \(\alpha=0\) corresponds to the independence copula. For \(\alpha\in(0,1]\) and \(x>0\), in the lower left corner we have
\[\lim_{t\to 0^{+}}\frac{C(x(t,t))}{C(t,t)}=\lim_{t\to 0^{+}}\frac{(xt)^{2- \alpha\log(xt)}}{t^{2-\alpha\log(t)}}=\begin{cases}0,&x<1,\\ 1,&x=1,\\ \infty,&x>1.\end{cases}\]
Hence the lower tail order is \(\kappa_{(0,0)}=\infty\). For \(\alpha\in(0,1)\), the angular-radial representation of the copula density has asymptotic form
\[\delta_{L}(r,\mathbf{w})\sim\begin{cases}2^{-\alpha\log(2)}\alpha r\left[ \alpha w_{1}w_{2}r+\alpha\log(2)+1\right]\exp(-\alpha[r^{2}w_{1}w_{2}+r\log(2 )]),&q\in[-2,-1],\\ \alpha|w_{2}|r,&q\in(-1,0),\\ 1-\alpha+\alpha\log(2),&q=0,1,\\ 1-\alpha,&q\in(0,1),\\ \alpha|w_{1}|r,&q\in(1,2).\end{cases}\]
This does not have asymptotic form (25) in the lower left quadrant. However, assumptions (A1) and (A2) are satisfied, taking \(\xi(q)=0\) and \(\sigma(\mu,q)=1\) for \(q\in[-1,2]\) and \(\sigma(\mu,q)=(1+2\alpha w_{1}w_{2}\mu)^{-1}\) for \(q\in(-2,-1)\). When \(\alpha=1\), the asymptotic form of \(\delta_{L}(r,\mathbf{w})\) is the same for \(q\in(-2,0]\cup[1,2]\), whereas for \(q\in(0,1)\) we have \(\delta_{L}(r,\mathbf{w})\sim g(\mathbf{w})\exp(-r\min(w_{1},w_{2}))\), where \(g(\mathbf{w})=1\) for \(q=\frac{1}{2}\) and \(g(\mathbf{w})=\frac{1}{2}\) otherwise. For any value of \(\alpha\in[0,1]\), the angular density is continuous and finite and hence satisfies assumption (A3). \(\blacksquare\)
### SPAR models on heavy-tailed margins
When considering joint extremes on heavy-tailed margins, it is common to use the framework of multivariate regular variation (MRV) [28] and hidden regular variation (HRV) [43]. However, Li and Wu [30] showed that an MRV distribution can be constructed from a copula with upper tail order \(\kappa_{\mathbf{1}_{d}}=1\), coupled with heavy-tailed margins (i.e. margins in the domain of attraction of an extreme value distribution with positive shape parameter). Similarly, Li and Hua [31] showed that an HRV distribution can be constructed from a copula with upper tail order \(\kappa_{\mathbf{1}_{d}}\in(1,d)\), coupled with heavy-tailed margins. Therefore, for consistency with previous sections, we continue to use the framework of tail order functions and tail density functions and the ARL model.
Figure 12: As previous figure, but for extreme value copulas with symmetric logistic dependence (top row) and asymmetric logistic dependence (bottom row) on Laplace margins. For EV asymmetric logistic copula, red dot indicates origin of angular-radial coordinates, red line indicates SPAR threshold.
Proposition 4.4 showed that the AR copula function on GP margins can be derived in terms of the ARL representation of the copula density in the upper corner, when this exists. Since the relation in Proposition 4.4 is valid for any \(\xi_{m}>0\), for simplicity we will assume \(\xi_{m}=1\) in the following. Moreover, as discussed in Section 5.1, the use of Pareto margins with the polar origin defined at \(\mathbf{x}=\mathbf{1}_{d}\) is equivalent to using GP margins with the polar origin defined at \(\mathbf{x}=\mathbf{0}_{d}\), but Pareto margins are more convenient for visualisations on a log-log scale. Therefore, throughout this section, we will work with Pareto margins with the polar origin defined at \(\mathbf{x}=\mathbf{1}_{d}\).
**Proposition 5.7**.: _Suppose that \(\mathbf{X}\in[1,\infty)^{d}\) is a random vector with standard Pareto margins and copula density \(c\). Suppose also that \(c\) satisfies the ARL model assumptions in the upper and lower tails and is continuous and finite away from the corners. Define \(R=\|\mathbf{X}-\mathbf{1}_{d}\|_{1}\) and \(\mathbf{W}=(\mathbf{X}-\mathbf{1}_{d})/R\). Then_
\[f_{R,\mathbf{W}}(r,\mathbf{w})\sim\tilde{b}(\mathbf{w})\mathcal{L}_{\mathbf{1 }_{d}}\left(r^{-1}\right)r^{-1-\kappa_{\mathbf{1}_{d}}},\quad r\to\infty,\]
_where \(\tilde{b}(\mathbf{w})=\left[\prod_{j=1}^{d}w_{j}^{-2}\right]s_{\mathbf{w}}^{ \kappa_{\mathbf{1}_{d}}-d}b_{\mathbf{1}_{d}}\left((s_{\mathbf{w}}\mathbf{w})^ {-1}\right)\) and \(s_{\mathbf{w}}=\sum_{j=1}^{d}w_{j}^{-1}\). Moreover, the joint density \(f_{R,\mathbf{W}}(r,\mathbf{w})\) satisfies assumptions (A1)-(A3) for \(\mathbf{w}\in\mathcal{U}_{1}\cap(0,1]^{d}\), with \(\xi(\mathbf{w})=1/\kappa_{\mathbf{1}_{d}}\) and \(\sigma(\mu,\mathbf{w})=\mu/\kappa_{\mathbf{1}_{d}}\)._
The expression for \(f_{R,\mathbf{W}}\) on Pareto margins, given in Proposition 5.7 is equivalent to that obtained from the Ledford-Tawn model [11, 12] in Cartesian coordinates. This is a direct consequence of the formulation of the ARL model for the copula, discussed by Hua and Joe [7]. Proposition 5.7 therefore gives a relation between the Ledford-Tawn model and SPAR models on Pareto margins. However, Proposition 5.7 only tells us that the SPAR assumptions are satisfied in the joint exceedance region, where all variables are large, since we have assumed that \(\mathbf{w}=(w_{1},...,w_{d})\in\mathcal{U}_{1}\cap(0,1]^{d}\). In the remainder of the section, we will examine the circumstances in which the SPAR assumptions are satisfied for \(w_{j}\to 0^{+}\) for one or more \(j\in\{1,...,d\}\).
Before we consider this further, there is another important point to note regarding the asymptotic GP scale parameters. Proposition 5.7 gives a scaling function \(\sigma(\mu,\mathbf{w})\) for the asymptotic GP distribution that is independent of \(\mathbf{w}\). However, it is important to note that whilst the conditional radial tail distribution is asymptotically equivalent to \(\bar{F}_{GP}(r;1/\kappa_{\mathbf{1}_{d}},\mu/\kappa_{\mathbf{1}_{d}})\), this is not necessarily the best approximation at finite levels. Consider the case of a GP distribution, with scale \(\sigma_{0}\) and shape \(\xi>0\). In this case we have
\[\frac{\bar{F}_{GP}(\mu+r;\xi,\sigma_{0})}{\bar{F}_{GP}(\mu;\xi, \sigma_{0})}=\frac{\left(1+\xi\frac{\mu+r}{\sigma_{0}}\right)^{-1/\xi}}{\left( 1+\xi\frac{\mu}{\sigma_{0}}\right)^{-1/\xi}}=\left(1+\xi\frac{r}{\sigma_{0}+ \xi\mu}\right)^{-1/\xi}=\bar{F}_{GP}(r;\xi,\sigma_{0}+\xi\mu).\]
That is, the tail distribution of a GP distribution at threshold level \(\mu\) is also a GP distribution, with the same shape parameter and scale parameter \(\sigma_{0}+\xi\mu\). This is the well-known threshold stability property of the GP distribution. However, \(\bar{F}_{GP}(r;\xi,\sigma_{0}+\xi\mu)\sim\bar{F}_{GP}(r;\xi,\xi\mu)\) as \(\mu\to\infty\). So, the Pickands-Balkema-de Haan theorem would be satisfied taking \(\sigma(\mu)=\mu\xi\), whereas a more accurate approximation to the tail (an equality in this case) is obtained by taking \(\sigma(\mu)=\sigma_{0}+\xi\mu\). When \(\sigma_{0}\) is large relative to \(\xi\mu\), neglecting this term can lead to large errors in the GP approximation at finite levels. The next example illustrates that in some bivariate cases \(\sigma_{0}\to\infty\) as \(\mathbf{w}\to(0,1)\).
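The threshold-stability identity used above is easy to verify numerically. The sketch below checks, for an arbitrary choice of parameters, that the conditional tail of a GP distribution above a threshold \(\mu\) is again GP with the same shape and scale \(\sigma_{0}+\xi\mu\), and that dropping \(\sigma_{0}\) changes the finite-level approximation even though the two scales agree asymptotically.

```python
from scipy.stats import genpareto

xi, sigma0, mu = 0.5, 5.0, 2.0  # illustrative values with sigma0 large relative to xi*mu

for r in (0.5, 1.0, 5.0, 20.0):
    conditional = genpareto.sf(mu + r, c=xi, scale=sigma0) / genpareto.sf(mu, c=xi, scale=sigma0)
    exact_gp = genpareto.sf(r, c=xi, scale=sigma0 + xi * mu)  # threshold-stable scale
    asym_gp = genpareto.sf(r, c=xi, scale=xi * mu)            # scale from the asymptotic argument only
    print(f"r={r:5.1f}: conditional={conditional:.5f}, GP(sigma0+xi*mu)={exact_gp:.5f}, "
          f"GP(xi*mu)={asym_gp:.5f}")
# The first two columns agree to machine precision; the third differs noticeably at finite levels.
```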
**Example 13** (EV copulas on Pareto margins).: For EV copulas we have \(c_{\mathbf{1}_{d}}(t\mathbf{w})\sim t^{1-d}\left|A^{(1,\ldots,1)}(\mathbf{w})\right|\), where \(A\) is the stable tail dependence function (see Appendix B.1). In the two-dimensional case, for random vector \((X,Y)\in[1,\infty)^{2}\), we define \(R=(X-1)+(Y-1)\) and \(Q=(Y-1)/R\). Recalling that in two-dimensions \(f_{R,\mathbf{W}}(r,(1-q,q))=f_{R,Q}(r,q)\) (see discussion in Section 3.2), from Proposition 5.7 we have
\[f_{R,Q}(r,q)\sim\tilde{b}(q)r^{-2},\quad r\to\infty, \tag{29}\]
where \(\tilde{b}(q)=-(q(1-q))^{-1}A^{(1,1)}(q,1-q)\). Note that \(f_{GP}(r;1,\sigma)\sim\sigma r^{-2}\) for \(r\to\infty\). Hence, for the conditional radial density we have
\[f_{R|Q}(r|q)\sim f_{GP}\left(r;1,\sigma_{0}(q)\right),\quad r\to\infty,\]
where \(\sigma_{0}(q)=\tilde{b}(q)/f_{Q}(q)\). Hence, from the discussion above a more accurate approximation to the tail of the radial distribution at threshold \(\mu\), than given in Proposition 5.7, is a GP distribution with shape \(\xi(q)=1\) and scale \(\sigma(\mu,q)=\mu+\sigma_{0}(q)\). To investigate the behaviour of \(\sigma_{0}(q)\) for \(q\to 0^{+}\), we revisit the three models for the stable tail dependence function which were considered in Example 11 (see Appendix B.1 for details).
For the case of symmetric logistic dependence with parameter \(\alpha\geq 1\), we find that \(f_{Q}(q)\to 0^{+}\) as \(q\to 0^{+}\) for all \(\alpha>1\). When \(\alpha=1\), the copula is equal to the independence copula and hence \(f_{Q}(q)\to\infty\) as \(q\to 0^{+}\). When \(\alpha>1\), \(\tilde{b}(q)=(\alpha-1)(q(1-q))^{\alpha-2}(q^{\alpha}+(1-q)^{\alpha})^{\frac{ 1}{\alpha}-2}\) and hence
\[\tilde{b}(q)\to\begin{cases}\infty,&\alpha<2,\\ 1,&\alpha=2,\\ 0,&\alpha>2,\end{cases}\qquad q\to 0^{+}.\]
Therefore \(\sigma_{0}(q)\to\infty\) as \(q\to 0^{+}\) when \(\alpha\leq 2\). So, in this case the assumption of a finite scale parameter function (A2) does not hold for \(q\to 0^{+}\). When \(\alpha>2\), numerical calculations indicate that \(\sigma_{0}(q)\to 0^{+}\) as \(q\to 0^{+}\), so for these cases the copula appears to have a valid SPAR representation for all \(q\in[0,1]\). Examples of the angular density and GP scale parameter as a function of \(q\) are shown in Figure 13 for various values of \(\alpha\). Contour plots of the joint densities \(f_{R,Q}\) and \(f_{X,Y}\) are also shown for the case \(\alpha=3\) together with the SPAR approximations. For the Cartesian density, the agreement is good both in the joint exceedance region and close to the margins.
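The limiting behaviour of \(\tilde{b}(q)\) quoted above is easy to inspect numerically. The sketch below evaluates \(\tilde{b}(q)=(\alpha-1)(q(1-q))^{\alpha-2}(q^{\alpha}+(1-q)^{\alpha})^{\frac{1}{\alpha}-2}\) close to \(q=0\) for values of \(\alpha\) below, at and above \(2\), reproducing the three regimes stated in the text. The behaviour of \(\sigma_{0}(q)=\tilde{b}(q)/f_{Q}(q)\) additionally involves the angular density, which is not reproduced here.

```python
def b_tilde(q, alpha):
    # Coefficient of the r^{-2} tail of f_{R,Q} for the EV symmetric logistic copula on Pareto margins.
    return ((alpha - 1.0) * (q * (1.0 - q)) ** (alpha - 2.0)
            * (q ** alpha + (1.0 - q) ** alpha) ** (1.0 / alpha - 2.0))

for alpha in (1.5, 2.0, 3.0):
    vals = [b_tilde(q, alpha) for q in (1e-2, 1e-4, 1e-6)]
    print(f"alpha={alpha}: b_tilde(q) for q=1e-2, 1e-4, 1e-6 ->", [f"{v:.4g}" for v in vals])
# alpha=1.5: values blow up; alpha=2: values approach 1; alpha=3: values shrink towards 0,
# matching the stated limits as q -> 0+.
```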
For the case of asymmetric logistic dependence, the angular density goes to infinity as \(q\to 0^{+}\) or \(q\to 1^{-}\) for any \(\alpha\geq 1\) and \(\gamma_{1},\gamma_{2}\in(0,1)\). Hence the SPAR model assumptions are only valid in the joint exceedance region \(0<q<1\). Examples of the joint densities \(f_{R,Q}\) and \(f_{X,Y}\) are shown in Figure 13 for the case \(\alpha=5\), \(\gamma_{1}=0.1\), \(\gamma_{2}=0.9\) (the same case considered on Laplace margins), together with the SPAR approximations. For the Cartesian density, the location of the 'finger' is captured by the SPAR approximation, without the need for a judicious choice of origin, required on Laplace margins. The reason for this is that the asymmetric behaviour of the joint density is contained in the partial derivative of the stable tail dependence function \(A^{(1,1)}\), and hence this information is also represented in \(\sigma_{0}(q)\). In contrast, \(\delta_{L}(r,\mathbf{w})\) is symmetric, so the asymmetric behaviour at finite levels must be accounted for using a shift of origin on Laplace margins. However, on Pareto margins, the density close to the axes is not captured by the SPAR approximation. In the angular-radial space, the SPAR approximation appears much better, since for any fixed \(q\in(0,1)\) the SPAR approximation converges to the true density as \(r\to\infty\).
Finally, for the Husler-Reiss dependence model, we find that \(f_{Q}(q)\to 0^{+}\) and \(\sigma_{0}(q)\to 0^{+}\) for \(q\to 0^{+}\) or \(1^{-}\) for any \(\alpha>0\) – see Figure 13. So the SPAR assumptions are valid close to the margins as well. An example of the SPAR approximation to the joint density for the case \(\alpha=1\) is shown in Figure 13. The SPAR approximation is in good agreement with the true density both in the joint exceedance region and close to the margins.
These examples illustrate that while SPAR models provide a good approximation to the upper tail of EV copulas on Pareto margins in the joint exceedance region, they do not necessarily provide a good approximation close to the axes, since either the angular density or scale parameter can become unbounded for \(q\to 0^{+}\) or \(1^{-}\). The angular-radial representation of the joint density in (29), is the same as that derived by Coles and Tawn [33]. So from a probabilistic standpoint, the SPAR model for EV copulas on Pareto margins is equivalent to the Coles-Tawn method. However, from a statistical perspective the SPAR approach is less parsimonious in this application, since it requires estimation of both the angular density and GP scale parameter, whereas \(\tilde{b}(q)\) is estimated directly by fitting parametric models in the Coles-Tawn method.
Proposition 4.1 showed that in certain cases with \(\kappa_{\mathbf{1}_{d}}>1\), the ARL representation for the copula was only valid in a limited region. The next proposition makes use of this result together with Proposition 5.7, to show that for these cases, SPAR models on Pareto margins are only valid in the joint exceedance region.
**Proposition 5.8**.: _Suppose that random vector \((X,Y)\in[1,\infty)^{2}\) has standard Pareto margins and copula density \(c\), and define \(R=(X-1)+(Y-1)\) and \(Q=(Y-1)/R\). Suppose also that \(c\) satisfies the assumptions of Proposition 4.1 in the upper right corner, with \(\beta=0\) and \(\kappa_{(1,1)}>1\). Then for \(q\in(0,1)\) the angular-radial joint density is asymptotically_
\[f_{R,Q}(r,q)\sim(q(1-q))^{-1-\frac{\kappa_{(1,1)}}{2}}\mathcal{L}(r)r^{-1- \kappa_{(1,1)}},\qquad r\to\infty.\]
_for some \(\mathcal{L}\in RV_{0}(\infty)\)._
Under the assumptions of Proposition 5.8, the angular variation of the density is dependent only on the tail order. For fixed \(r\), the approximation for \(f_{R,Q}(r,q)\) given in Proposition 5.8 tends to infinity when \(q\to 0^{+}\) or \(1^{-}\), and hence the SPAR representation is only valid in the joint exceedance region. Examples where the assumptions of Proposition 5.8 are satisfied include the case where \(c(u,v)=c_{EV}(1-u,1-v)\) where \(c_{EV}\) is an extreme value copula density, with symmetric stable tail dependence function, such as the symmetric logistic model or Husler-Reiss model (see Appendix B.1), and when \(c\) is the Gaussian copula (see Appendix B.2). Figure 14 illustrates these examples, and shows contours of the true joint density together with the SPAR approximations, both in the angular-radial space and Cartesian
space. It is evident that the asymptotic expression for the angular-radial joint density given in Proposition 5.8 is valid for any fixed \(q\in(0,1)\) as \(r\to\infty\). However, in the Cartesian space with a log-log scale used for the margins, the approximation is only valid in the region where both variables are large, and the approximations tend to infinity on the margins.
In summary, copulas with strong tail dependence have a natural SPAR representation on heavy-tailed margins in the joint exceedance region. However, this representation is not always valid close to the margins. Moreover, for many copulas without strong tail dependence, the SPAR representation on heavy-tailed margins is only valid in the joint exceedance region. In general, heavy-tailed margins are less useful than Laplace margins for SPAR representations.
Finally, although this may be obvious to some readers, it is worth mentioning that joint densities on heavy-tailed margins do not have limit sets. Consider the case of GP margins with shape \(\xi_{m}>0\) and scale \(\sigma=1\). The distribution of \(M_{n}\), the maximum of \(n\) samples from the marginal distribution, converges to a generalised extreme value (GEV) distribution with shape \(\xi_{m}\), scale \(n^{\xi_{m}}\) and location \((n^{\xi_{m}}-1)/\xi_{m}\), as \(n\to\infty\). Therefore, the distribution of scaled maxima \(M_{n}/n^{\xi_{m}}\) converges to a GEV distribution with shape \(\xi_{m}\), unit scale and location \(1/\xi_{m}\). This limit is non-degenerate, and therefore the scaled sample has no upper bound. Since the scaled margins do not converge to a finite interval, scaled samples from the joint distribution do not converge onto a finite set.
Figure 13: SPAR models for EV copulas on standard Pareto margins, with polar origin at \((x,y)=(1,1)\), for symmetric logistic dependence (top row), asymmetric logistic dependence (middle row) and Hüsler-Reiss dependence model (bottom row). First and second columns show angular density and GP scale parameter for various dependence parameter values, \(\alpha\). Third column shows logarithmically-spaced contours of the true angular-radial joint density (coloured lines) and SPAR approximation (black dashed lines). Right hand column shows logarithmically-spaced contours of the true Cartesian joint density (coloured lines) and SPAR approximation (black dashed lines).
### SPAR models on short-tailed margins
To illustrate some features of SPAR models on short-tailed margins (i.e. margins in the domain of attraction of an extreme value distribution with negative shape parameter), it is useful to start by considering the case of the independence copula. Throughout this section, we will consider two-dimensional examples and assume that \((X,Y)\in\mathcal{D}\subset\mathbb{R}^{2}\) has GP margins with unit scale parameter and shape parameter \(\xi_{m}<0\). Angular and radial variables are defined as \(R=X+Y\) and \(Q=Y/R\). The upper end point of \(X\) and \(Y\) is \(-1/\xi_{m}\). The upper end point of \(R\) is dependent on \(Q\) and the copula of \((X,Y)\), and will be denoted \(r_{F}(q)\) at \(Q=q\).
**Example 14** (Independence on GP margins with negative shape).: Suppose that \((X,Y)\) and \((R,Q)\) are random vectors defined as above, and that \(X\) and \(Y\) are independent. The upper end point of \(R\) at \(Q=q\) is given by \(r_{F}(q)=-(\xi_{m}\max(q,1-q))^{-1}\). The angular-radial joint density is given by \(f_{R,Q}(r,q)=r\,m_{GP}(r,(1-q,q))\), where \(m_{GP}\) is the marginal product function, given below. If we substitute \(r=r_{F}(q)-s\), then for \(r\in[0,r_{F}(q)]\) and \(q\in[0,1]\) we have
\[m_{GP}(r,(1-q,q)) =f_{GP}(r(1-q);\xi_{m},1)f_{GP}(rq;\xi_{m},1)\] \[=\left[(1+\xi_{m}r(1-q))(1+\xi_{m}rq)\right]^{-1-\frac{1}{\xi_{m} }}\] \[=\begin{cases}\left[\left(1-\frac{\min(q,1-q)}{\max(q,1-q)}- \frac{s}{r_{F}(q)}\right)\left(\frac{s}{r_{F}(q)}\right)\right]^{-1-\frac{1}{ \xi_{m}}},&q\in[0,1/2)\cup(1/2,1]\\ \left(\frac{s}{r_{F}(q)}\right)^{-2-\frac{2}{\xi_{m}}},&q=1/2.\end{cases}\]
The angular density, \(f_{Q}(q)\), is finite for \(q\in[0,1]\) for all \(\xi_{m}<0\). Moreover, assumption (A1) is satisfied, with \(\xi(q)=\xi_{m}\) for \(q\in[0,1/2)\cup(1/2,1]\) and \(\xi(1/2)=\xi_{m}/(\xi_{m}+2)\). Hence, unless \(\xi_{m}=-1\), there is a discontinuity in the radial shape parameter and (A2) is not satisfied. In the case that \(\xi_{m}=-1\), we have \(m_{GP}(r,(1-q,q))=1\) and \(f_{R,Q}(r,q)=r\). So although assumptions (A1)-(A3) are satisfied, approximating the tail of \(f_{R|Q}(r|q)\) with a GP distribution with \(\xi=-1\) (i.e. a uniform distribution) is poor at finite levels, since \(f_{R|Q}(r|q)\) is linearly increasing in \(r\).
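A quick numerical check of the shape parameters quoted in Example 14: the sketch below estimates the local power-law exponent \(\beta\) of \(f_{R|Q}(r_{F}(q)-s|q)\propto s^{\beta}\) as \(s\to 0^{+}\), and converts it to a GP shape via \(\xi=-1/(1+\beta)\), the reparameterisation described after (30) below. Away from \(q=1/2\) this returns \(\xi_{m}\); at \(q=1/2\) it returns \(\xi_{m}/(\xi_{m}+2)\). The numerical values (marginal shape, probe angles, step size) are arbitrary choices for illustration.

```python
import numpy as np

xi_m = -0.5  # marginal GP shape (negative)

def m_gp(r, q):
    # Marginal product function on GP(xi_m, 1) margins for the independence copula (Example 14).
    return ((1.0 + xi_m * r * (1.0 - q)) * (1.0 + xi_m * r * q)) ** (-1.0 - 1.0 / xi_m)

def r_F(q):
    return -1.0 / (xi_m * max(q, 1.0 - q))

def implied_shape(q, s=1e-4):
    # Estimate beta from the log-log slope of (r_F - s) * m_gp near the endpoint, then map to xi.
    f1 = (r_F(q) - s) * m_gp(r_F(q) - s, q)
    f2 = (r_F(q) - s / 2) * m_gp(r_F(q) - s / 2, q)
    beta = np.log(f1 / f2) / np.log(2.0)
    return -1.0 / (1.0 + beta)

for q in (0.2, 0.4, 0.5):
    print(f"q={q}: implied GP shape ~ {implied_shape(q):+.4f}")
print("xi_m =", xi_m, ", xi_m/(xi_m+2) =", xi_m / (xi_m + 2.0))
```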
Example 14 shows that there is a discontinuity in the domain of attraction of the marginal product function at \(q=1/2\). This arises because for \(q<1/2\) the variable \(Y\) does not reach its upper end point as \(r\to r_{F}(q)\) and hence the density \(f_{GP}(rq;\xi_{m},1)\) converges to a non-zero value as \(r\to r_{F}(q)\). Similarly, for \(q>1/2\) the variable \(X\) does not reach its upper end point as \(r\to r_{F}(q)\) (see Figure 9). So for assumption (A2) to be satisfied on GP margins with
Figure 14: As previous figure, but for lower tail of EV copula with symmetric logistic dependence with \(\alpha=3\) (top) and Gaussian copula with \(\rho=0.6\) (bottom).
negative shape, either the copula must compensate for this discontinuity, or the support of the copula must be such that \(\delta_{GP}(r,(1-q,q))=0\) for all \(r\) greater than some value strictly less than \(-(\xi_{m}\max(q,1-q))^{-1}\), so that \(m_{GP}(r,(1-q,q))\) is non-zero everywhere within the support of \(f_{R,Q}\), and the domain of attraction of \(f_{R|Q}(r|q)\) is controlled by the behaviour of \(\delta_{GP}(r,(1-q,q))\) at the edge of its support.
For the copulas considered so far in this work, \(c(u,v)>0\) for \((u,v)\in(0,1)^{2}\), so these copulas result in tail behaviour of \(f_{R|Q}\) that is governed by both \(m_{GP}\) and \(\delta_{GP}\). Archimedean copulas with non-strict generators (see e.g. [44]) provide an example where the Hausdorff distance between the support of \(c\) and \([0,1]^{2}\) can be greater than zero. Two examples where the SPAR assumptions are satisfied on short-tailed margins are given below. In both examples, we will establish the parameters of the asymptotic GP distribution by reparameterising the density as follows. The GP density \(f_{GP}(r;\xi,\sigma)\) with \(\xi<0\) has upper end point \(r_{F}=-\sigma/\xi\). If we substitute \(r=r_{F}-s\), then
\[f_{GP}(r_{F}-s;\xi,\sigma)=\sigma^{\tfrac{1}{\xi}}{\left(-\xi s\right)}^{-1- \tfrac{1}{\xi}},\quad s\in[0,r_{F}]. \tag{30}\]
Now suppose that the conditional radial density has asymptotic form
\[f_{R|Q}(r_{F}(q)-s|q)\sim g(q)s^{\beta},\quad s\to 0^{+},\]
for some continuous function \(g(q)>0\) and \(\beta>-1\), where \(r_{F}(q)\) is the upper end point of \(R\) conditional on \(Q=q\). Equating this with the GP parameters shows that \(f_{R|Q}(r|q)\sim f_{GP}(r;\xi,\sigma(q))\) as \(r\to r_{F}(q)=-\sigma(q)/\xi\), with \(\xi=-1/(1+\beta)\) and \(\sigma(q)=(g(q))^{\xi}(-\xi)^{1+\xi}\).
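The reparameterisation just described can be checked directly against the GP density: with \(\xi=-1/(1+\beta)\) and \(\sigma=g^{\xi}(-\xi)^{1+\xi}\), equation (30) gives \(f_{GP}(r_{F}-s;\xi,\sigma)=g\,s^{\beta}\) exactly. A short sketch using scipy's generalised Pareto density, with \(g\) and \(\beta\) chosen arbitrarily for illustration:

```python
from scipy.stats import genpareto

g, beta = 2.0, 1.5                     # assumed behaviour f(r_F - s) ~ g * s**beta near the endpoint
xi = -1.0 / (1.0 + beta)               # implied GP shape
sigma = g ** xi * (-xi) ** (1.0 + xi)  # implied GP scale
r_F = -sigma / xi                      # GP upper end point

for s in (1e-3, 1e-2, 0.05):
    gp_val = genpareto.pdf(r_F - s, c=xi, scale=sigma)
    print(f"s={s}: f_GP(r_F - s) = {gp_val:.6g}, g*s**beta = {g * s ** beta:.6g}")
# The two columns agree up to floating point, confirming the stated parameter mapping.
```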
**Example 15** (Clayton copula).: The Clayton copula with parameter \(\alpha\in[-1,\infty)\setminus\{0\}\) is given by \(C(u,v)=z^{-1/\alpha}\), where \(z=\max(u^{-\alpha}+v^{-\alpha}-1,0)\) and \((u,v)\in[0,1]^{2}\). When \(\alpha<0\), we have \(C(t,t)=0\) for \(t\in[0,2^{1/\alpha}]\). When \(\alpha>0\), we have \(C(t,t)>0\) for \(t>0\). An example contour plot of the copula is shown in Figure 15 for the case \(\alpha=-0.2\). In the remainder of this example, we assume that \(\alpha\in[-1,0)\). The corresponding density is
\[c(u,v)=\begin{cases}(1+\alpha)(uv)^{-\alpha-1}z^{-\tfrac{1}{\alpha}-2},&(u,v)\in\{(a,b)\in[0,1]^{2}:a^{-\alpha}+b^{-\alpha}\geq 1\},\\ 0,&\text{otherwise}.\end{cases}\]
Since the support of \(c\) is limited in the lower left corner, we work with the density of the survival copula instead, \(c_{(1,1)}(u,v)=c(1-u,1-v)\). On GP margins with \(\xi_{m}<0\), the upper end point of \(\delta_{GP}(r,(1-q,q);c_{(1,1)})\) occurs when \(z=0\), or equivalently \((1+\xi_{m}r(1-q))^{\alpha/\xi_{m}}+(1+\xi_{m}rq)^{\alpha/\xi_{m}}=1\). Setting \(\xi_{m}=\alpha\) leads to a simple solution, with the upper end point of \(\delta_{GP}\) given by \(r_{F}(q)=-1/\alpha\) for \(q\in[0,1]\). Substituting \(r=r_{F}(q)-s\), we find that the angular-radial joint density is independent of \(q\) and given by
\[f_{R,Q}(r,q) =(1+\alpha)r(1+\alpha r)^{-\tfrac{1}{\alpha}-2}\] \[=-(1+\alpha)\left(\frac{1}{\alpha}+s\right)(-\alpha s)^{-\tfrac{ 1}{\alpha}-2}.\] \[\sim-\left(\frac{1}{\alpha}+1\right)(-\alpha s)^{-\tfrac{1}{ \alpha}-2},\quad s\to 0^{+}\] \[=f_{GP}(r_{F}-s;\xi,\sigma),\]
where \(\xi=\alpha/(1+\alpha)\) and \(\sigma=(-\xi)^{-\xi}(1-\xi)^{\xi+1}\). Since \(f_{R,Q}(r,q)\) is independent of \(q\), \(f_{Q}(q)=1\) and \(f_{R,Q}(r,q)=f_{R|Q}(r|q)\). Hence SPAR model assumptions (A1)-(A3) are satisfied. The Cartesian joint density \(f_{X,Y}(x,y)\) for the case \(\alpha=-0.2\) is shown in Figure 15. In this case, the corresponding SPAR approximation is not shown, as the contours are closely spaced. However, the discussion above shows that it is asymptotically exact.
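The closed form derived in this example can be cross-checked by assembling the angular-radial density directly from the survival-copula density and the GP marginal densities. The sketch below does this for \(\alpha=-0.2\) at a few points \((r,q)\) and compares with \((1+\alpha)r(1+\alpha r)^{-1/\alpha-2}\), confirming in passing that the value does not depend on \(q\). The construction follows the text; the standard Clayton density coefficient \(1+\alpha\) is used, and the evaluation points are arbitrary.

```python
from scipy.stats import genpareto

alpha = -0.2          # Clayton parameter in [-1, 0)
xi_m = alpha          # GP margins chosen with shape equal to alpha, as in the example

def clayton_density(u, v):
    # Clayton copula density (within its support, where u**(-alpha) + v**(-alpha) >= 1).
    z = u ** (-alpha) + v ** (-alpha) - 1.0
    return (1.0 + alpha) * (u * v) ** (-alpha - 1.0) * z ** (-1.0 / alpha - 2.0)

def f_RQ(r, q):
    # Angular-radial density of the survival-Clayton copula on GP(xi_m, 1) margins.
    x, y = r * (1.0 - q), r * q
    u, v = genpareto.sf(x, c=xi_m), genpareto.sf(y, c=xi_m)  # survival-copula arguments
    return r * genpareto.pdf(x, c=xi_m) * genpareto.pdf(y, c=xi_m) * clayton_density(u, v)

def closed_form(r):
    return (1.0 + alpha) * r * (1.0 + alpha * r) ** (-1.0 / alpha - 2.0)

for r, q in [(3.0, 0.3), (3.0, 0.7), (4.5, 0.5)]:
    print(f"r={r}, q={q}: numerical={f_RQ(r, q):.6f}, closed form={closed_form(r):.6f}")
```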
**Example 16** (Nelsen copula 4.2.15).: Table 4.2 in Nelsen [44] presents a list of one-parameter families of Archimedean copulas. The copula labelled 4.2.15, with parameter \(\alpha\geq 1\) is given by \(C(u,v)=z^{\alpha}\) for \((u,v)\in[0,1]^{2}\), where
\[z=\max\left(1-\|(x,y)\|_{\alpha},0\right),\]
and \(x=1-u^{1/\alpha}\), \(y=1-v^{1/\alpha}\), and \(\|\cdot\|_{\alpha}\) is the \(L^{\alpha}\) norm. We have \(C(t,t)=0\) for \(t\leq(1-2^{-1/\alpha})^{\alpha}\). An example contour plot of the copula is shown in Figure 15 for the case \(\alpha=3\). The corresponding density is
\[c(u,v)=\begin{cases}(1-\tfrac{1}{\alpha})\left(uv\right)^{\tfrac{1}{\alpha}-1} (xy)^{\alpha-1}\left(x^{\alpha}+y^{\alpha}\right)^{\tfrac{1}{\alpha}-2}z^{ \alpha-2},&(u,v)\in\{(a,b)\in[0,1]^{2}:\|(1-a^{1/\alpha},1-b^{1/\alpha})\|_{ \alpha}\leq 1\}\\ 0,&\text{otherwise}.\end{cases}\]
As in the previous example, we will work with the density of the survival copula, \(c_{(1,1)}(u,v)=c(1-u,1-v)\). Using GP margins with \(\xi_{m}=-1/\alpha\), the upper end point of \(\delta_{GP}(r,(1-q,q);c_{(1,1)})\) occurs when \(z=0\). Solving for \(r\) shows that the upper end point is given by \(r_{F}(q)=\alpha/\|(1-q,q)\|_{\alpha}\). Making the substitution \(r=r_{F}(q)-s\), we find that the angular-radial joint density is given by
\[f_{R,Q}(r_{F}(q)-s,q)=\alpha^{2-\alpha}(\alpha-1)(q(1-q))^{\alpha-1}((1-q)^{ \alpha}+q^{\alpha})^{-\tfrac{1}{\alpha}-1}s^{\alpha-2}.\]
Integrating this gives the angular density and conditional radial density as
\[f_{Q}(q) =\alpha(q(1-q))^{\alpha-1}((1-q)^{\alpha}+q^{\alpha})^{-2},\] \[f_{R|Q}(r_{F}(q)-s|q) =\alpha^{1-\alpha}(\alpha-1)((1-q)^{\alpha}+q^{\alpha})^{1- \tfrac{1}{\alpha}}s^{\alpha-2}.\]
The conditional radial density \(f_{R|Q}\) is exactly of the form (30) and hence is a GP density, with parameters \(\xi=-1/(\alpha-1)\) and \(\sigma(q)=(g(q))^{\xi}(-\xi)^{1+\xi}\), with \(g(q)=\alpha^{1-\alpha}(\alpha-1)((1-q)^{\alpha}+q^{\alpha})^{1-\tfrac{1}{ \alpha}}\). The Cartesian joint density \(f_{X,Y}(x,y)\) for the case \(\alpha=3\) is shown in Figure 15, together with the angular density for various values of \(\alpha\). Since the SPAR representation is exact everywhere in the domain, this is not shown.
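As a sanity check on the expressions derived in this example, the angular density above should integrate to one over \(q\in[0,1]\), since the chosen GP margins place all mass in the positive quadrant. A short numerical confirmation, for a few arbitrary parameter values:

```python
from scipy.integrate import quad

def f_Q(q, alpha):
    # Angular density for the survival copula of Nelsen 4.2.15 on GP(-1/alpha, 1) margins (Example 16).
    return alpha * (q * (1.0 - q)) ** (alpha - 1.0) * ((1.0 - q) ** alpha + q ** alpha) ** (-2.0)

for alpha in (2.0, 3.0, 5.0):
    total, _ = quad(f_Q, 0.0, 1.0, args=(alpha,))
    print(f"alpha={alpha}: integral of f_Q over [0, 1] = {total:.6f}")
# Each integral equals 1 to numerical precision, consistent with f_Q being a valid angular density.
```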
The examples presented above are for cases where the angular-radial density assumes a simple form for a judicious choice of shape parameter for the margins. Some Archimedean copulas with non-strict generators have infinite density along the boundary of the level set \(C(u,v)=0\) (see e.g. Table 4.2 in [44]). In these cases, the SPAR assumptions are not satisfied on GP margins.
The use of short-tailed margins and copulas whose support does not cover the whole of \((0,1)^{2}\) has natural applications to some environmental variables, such as winds, waves, temperatures, etc., discussed in Section 1.1. In many
Figure 15: Examples of Archimedean copulas on GP margins with negative shape parameter, for Clayton copula with \(\alpha=-0.2\) (top row) and Nelsen copula 4.2.15 with \(\alpha=3\) (bottom row). Left column: contours of the copula levels from \(0\) to \(0.9\) at increments of \(0.1\). Thick black line indicates level set \(C(u,v)=0\). Middle column: contours of joint density \(f_{X,Y}\) for corresponding survival copulas on GP margins, with contour levels at logarithmic increments. Thick black line indicates level set \(f_{X,Y}(x,y)=0\). Right column: angular density for various copula parameter \(\alpha\).
cases it is reasonable to assume that an environmental variable has an upper limit due to some physical constraints. For example, the maximum possible water surface wave height at a given location is limited by the water depth. Even if physical constraints are not fully understood, in many cases it is reasonable to expect that values of environmental variables cannot go to infinity. Moreover, some combinations of variables may also be ruled out by physical constraints. Sticking with the example of waves, combinations of wave heights, \(H\), and wavelengths, \(L\), that are physically possible are constrained by the wave steepness, defined as the ratio \(H/L\).vii When the steepness exceeds a certain value, the wave breaks, limiting the height as a function of wavelength. Therefore, the range of possible values of the random vector \((H,L)\) is bounded by a line of the form \(H=sL\), for some limiting steepness \(s\). The analogous behaviour of the density of the Clayton copula on GP margins with negative shape, illustrated in Figure 15, is clearly evident, with the joint density bounded by the line \(x+y=-1/\alpha\). However, when modelling environmental variables which are assumed to have bounded support, it may be more appropriate not to use prescribed margins, and instead estimate the SPAR model on suitably scaled and centred data. This is discussed further in the following section.
Footnote viii: In deep water, the wavelength is proportional to the square of the wave period – see Figure 2(a).
In summary, Example 14 shows that the independence copula does not have a SPAR representation on GP margins with negative shape parameter, and illustrates that there are constraints on the types of copula that have SPAR representations on these margins. However, Examples 15 and 16 show that some copulas whose support does not cover the whole of \((0,1)^{2}\) do have SPAR representation on short-tailed margins.
## 6 Discussion and conclusions
This work has presented a new and general framework for modelling multivariate extremes, based on an angular-radial representation of the density function. The framework assumes a transformation of variables from Cartesian to polar coordinates. In Section 3 it was shown that the choice of generalised polar coordinates, introduced in Section 2, does not affect whether the SPAR model assumptions are satisfied, but that certain coordinate systems lead to simpler representations in which the angular and radial variables are asymptotically independent. However, the coordinate systems for which this is true depends on both the copula and margins. So, in general, it seems more appropriate to work with polar coordinates defined in terms of the \(L^{1}\) norm, which are straightforward to work with in higher dimensions, and result in a simple transformation between Cartesian and polar density functions.
For the theoretical examples considered in this work, the SPAR model assumption (A1), that the conditional radial distribution converges to a GP distribution was always satisfied. However, the more restrictive assumptions (A2) and (A3), regarding continuity and finite values for the angular density and GP parameter functions, were not always satisfied. For a given copula, the choice of margin affects whether the SPAR model assumptions are satisfied. It was shown that the use of Laplace margins leads to density functions where the SPAR model assumptions are satisfied for many commonly-used copulas. Using exponential margins leads to similar asymptotic representations in the non-negative orthant of \(\mathbb{R}^{d}\). However, in certain cases the angular density can become unbounded on exponential margins, whereas it remains finite on Laplace margins. So there is some advantage to using Laplace margins over exponential margins, even when the interest is only in extremes in the non-negative orthant of \(\mathbb{R}^{d}\).
SPAR models on Laplace margins also have a useful link to the limit set of the scaled sample cloud, when it exists. The scale parameter function of a SPAR model on Laplace margins describes the radius of the boundary of the limit set as a function of angle. However, it is important to note that for some copulas, the limit set is degenerate in some regions (i.e. the boundary has zero radius from the origin at some angles). For the examples considered where this was the case, the joint densities still have SPAR representations on Laplace margins. While limit sets are of interest from a theoretical point of view, estimation of the density in extreme regions (or the corresponding probability of observations falling within an extreme set) is usually of greater practical importance. Characterising the density in extreme regions of variable space is the main aim of the SPAR approach.
SPAR models on Laplace margins can represent a wide range of copulas. However, for copulas with strong dependence in the upper tail, it was shown that the joint exceedance region in the upper tail has a more natural representation on Pareto margins. From a probabilistic perspective, SPAR representations for these cases are equivalent to the representation derived by Coles and Tawn [33]. Similarly, for copulas with \(\kappa_{\mathbf{1}_{d}}>1\), the SPAR representation on Pareto margins is equivalent to the Ledford-Tawn model [11, 12]. However, in both cases \(\kappa_{\mathbf{1}_{d}}=1\) and \(\kappa_{\mathbf{1}_{d}}>1\), there are copulas for which the SPAR representation on Pareto margins is only valid in the region where all variables are large, and is not valid when one or more variables are small. In contrast, in the examples presented, the use of Laplace margins for such copulas leads to SPAR representations which are valid for all angles.
With regard to how the SPAR framework relates to existing approaches, we can summarise that the models of Coles and Tawn [33], Ledford and Tawn [11] and Wadsworth _et al._[14] are all special cases of SPAR representations, and
that SPAR models can represent extremal properties of a wider range of distributions than these models. Moreover, the angular-radial decomposition of the joint density function, into an angular density and conditional radial density, provides a simple intuitive motivation for our model. Viewed in this way, multivariate extremes can be seen as a simple extension to univariate extremes, with angular dependence. It is important to note that the joint distribution function does not admit an analogous decomposition in general, which highlights why it is important to begin by considering the joint density function, rather than joint distribution function.
The focus of the present work has been on probabilistic aspects of SPAR representations, with particular consideration of the effects of the choice of coordinate system and the choice of margins. The motivation is that these theoretical considerations will inform an approach to inference, which will be developed in further work. The general conclusion from the present work is that working on Laplace margins in \(L^{1}\) polar coordinates provides a variable space in which the SPAR model assumptions are satisfied for a wide range of copulas, with various tail dependence properties. As mentioned in the introduction, the SPAR model can be viewed as a type of non-stationary peaks-over-threshold (POT) model, with angle as covariate. There are many examples of inference for non-stationary POT in the literature, e.g. [1, 2, 3, 4]. In the examples considered in this work, it was found that when the SPAR assumptions are satisfied, the angular densities are smooth functions of angle, which are readily amenable to non-parametric estimation. However, although the GP scale functions are continuous (assumption A2), they were not always smooth functions of angle. This will have implications for their estimation using semi-parametric means, which often assume smoothly-varying functions. For these cases, a piecewise-linear model (e.g. [4]) may be more appropriate.
For a SPAR model on Laplace margins with constant shape parameter \(\xi(\mathbf{w})=0\), the tail order in corner \(\mathbf{u}_{0}=(u_{0,1},...,u_{0,d})\) is linked to the GP scale parameter by \(\sigma(\mathbf{w}_{0})=2/\kappa_{\mathbf{u}_{0}}\), where \(2\mathbf{w}_{0}=\big{(}(-1)^{u_{0,1}+1},...,(-1)^{u_{0,d}+1}\big{)}\). However, the relations between the SPAR model parameters and tail dependence coefficients have not been considered in the present work. The general dependence properties of SPAR models can be derived from the marginal and joint exceedance probabilities, which can be calculated from integrals of the SPAR density over angle. Whilst this is simple in principle, the calculations involved are lengthy, and require consideration of numerous separate cases. These considerations are therefore deferred to future work.
Section 5 considered the effect of the margins on whether the SPAR assumptions are satisfied. However, as discussed in Section 1.1, conducting the inference on prescribed margins is not a requirement of the SPAR approach. When applying the model to observational data, generally the marginal distribution is not known. In this case, working on prescribed margins requires first estimating the distribution for each margin and then applying a transformation. This will introduce further uncertainty to the estimated joint distribution. An alternative approach would be to apply the SPAR model to the observations directly, after a suitable normalisation for scale and location, and estimate both the extremal properties of the margins and copula in a single inference. This type of approach may be applicable when it is reasonable to assume that all variables have finite upper and lower bounds. As discussed in Section 5.4, this is a reasonable assumption for most environmental applications. This type of approach will be considered in future work, where the inference for SPAR models is discussed in detail.
A further consideration for the application of the SPAR model to real datasets, is where to define the origin of the polar coordinate system. When working on prescribed margins, the natural choice is to define the polar origin to coincide with the Cartesian origin. For distributions which have support in all of \(\mathbb{R}^{d}\), a finite shift in the origin has no effect on the asymptotic properties of the model, but can result in a faster convergence to the asymptotic form, as discussed in Example 11. However, on short-tailed margins, the choice of origin can have an impact on the SPAR model parameters, or even whether the model assumptions are satisfied. For instance, suppose that random vector \(\mathbf{X}\in\mathbb{R}^{d}\) has bounded support, \(\text{supp}(\mathbf{X})=G\subset\mathbb{R}^{d}\). The bounded set \(G\) can be viewed as analogous to a limit set for random vectors with unbounded support. Clearly, a minimum requirement for the SPAR representation when \(G\) is bounded, is that \(G\) is star-shaped with respect to the origin. The choice of origin for cases where random vector \(\mathbf{X}\in\mathbb{R}^{d}\) has bounded support will be discussed in detail in future work.
## Acknowledgement
This work was funded by the EPSRC, United Kingdom Supergen Offshore Renewable Energy Hub [EP/S000747/1] Flexible Fund project "Improved Models for Multivariate Metocean Extremes (IMEX)". The data shown in Figures 1 and 2 was provided as part of the Benchmarking Exercise for Environmental Contours [45] and is available from [https://github.com/ec-benchmark-organizers](https://github.com/ec-benchmark-organizers). The generic response curves shown in Figures 1(a) and 1(c) were reproduced from [46] and [47], respectively. We would like to thank Callum Murphy-Barltrop for useful comments on a draft of this paper.
Proofs
### Proof of Proposition 3.1
Define \(u=\cos_{p}(q)\) and \(v=\sin_{p}(q)\). Then we can write
\[\cos_{p}^{\prime}(q) =\frac{du}{dq},\] \[\sin_{p}^{\prime}(q) =\frac{dv}{dq}=\frac{du}{dq}\frac{dv}{du}.\]
Differentiating (6), the rate of change of pseudo-angle with \(u\) is
\[\left|\frac{dq}{du}\right|=\frac{4}{\mathcal{C}_{p}}\left(1+\left(\frac{dv}{ du}\right)^{2}\right)^{1/2},\]
where \(\mathcal{C}_{p}\) is the circumference of the \(L^{p}\) unit circle (see Definition 8). Substituting these expressions into (10) gives
\[J_{p}(q)=\frac{\mathcal{C}_{p}}{4}\left|u\frac{dv}{du}-v\right|\left(1+\left( \frac{dv}{du}\right)^{2}\right)^{-1/2}. \tag{31}\]
To simplify the presentation, we just consider the case for the first quadrant. Results for other quadrants follow by symmetry. For \(L^{p}\) norms, in the first quadrant we have \(v=(1-u^{p})^{1/p}\). Therefore
\[\frac{dv}{du}=-u^{p-1}\left(1-u^{p}\right)^{\frac{1}{p}-1}=-\left(\frac{u}{v} \right)^{p-1}.\]
Substituting this gives
\[\left|u\frac{dv}{du}-v\right|=\left|\frac{u^{p}}{v^{p-1}}+v\right|=v^{1-p} \left|u^{p}+v^{p}\right|=v^{1-p}.\]
Then, using (31) we have
\[J_{p}(q)=\frac{\mathcal{C}_{p}}{4}v^{1-p}\left(1+\left|\frac{u}{v}\right|^{2p-2}\right)^{-1/2}=\frac{\mathcal{C}_{p}}{4}\left(u^{2(p-1)}+v^{2(p-1)}\right)^{-1/2}.\]
Substituting \(u=\cos_{p}(q)\), \(v=\sin_{p}(q)\) gives the first part of the result. The specific forms of the Jacobian for \(p=1\), \(2\) and \(\infty\) are straightforward to verify and follow directly from Example 1.
For the last part of the result, we again restrict attention to the first quadrant. We write \(J_{p}\) as a function of \(u\), substituting \(v=(1-u^{p})^{1/p}\), and consider which values of \(p\) lead to zero derivative. Since \(J_{p}>0\), it is equivalent to locate the stationary points of \((4J_{p}/\mathcal{C}_{p})^{-2}=u^{2(p-1)}+v^{2(p-1)}\), for which

\[\frac{d}{du}\left[u^{2(p-1)}+v^{2(p-1)}\right]=2(p-1)\left[u^{2p-3}-u^{p-1}(1-u^{p})^{1-2/p}\right].\]

For \(p\in(1,2)\cup(2,\infty)\), this derivative vanishes only when \(u^{p-2}=v^{p-2}\), which occurs only when \(u=v\); this completes the proof.
### Proof of Lemma 3.2
For a vector \(\mathbf{x}\in\mathbb{R}^{2}\setminus\{\mathbf{0}\}\), define \(r_{a}=\mathcal{R}_{a}(x,y)\), \(r_{b}=\mathcal{R}_{b}(x,y)\) and \(q=\mathcal{A}_{*}^{(-2,2]}((x,y)/\mathcal{R}_{*}(x,y))\). Denote \(\mathbf{w}=(\cos_{*}(q),\sin_{*}(q))\). Then we can write
\[\mathbf{x}=r_{a}\frac{\mathbf{w}}{\mathcal{R}_{a}(\mathbf{w})}=r_{b}\frac{ \mathbf{w}}{\mathcal{R}_{b}(\mathbf{w})}.\]
Hence
\[r_{a}=r_{b}\frac{\mathcal{R}_{a}(\mathbf{w})}{\mathcal{R}_{b}(\mathbf{w})}. \tag{32}\]
Therefore \(dr_{b}/dr_{a}=\mathcal{R}_{b}(\mathbf{w})/\mathcal{R}_{a}(\mathbf{w})\), and the result follows by substituting the Jacobian.
### Proof of Corollary 3.2.1
Denote the upper end points of \(R_{a}|(Q=q)\) and \(R_{b}|(Q=q)\) as \(r_{a,F}(q)\) and \(r_{b,F}(q)\) respectively, and denote \(\mathbf{w}=(\cos_{*}(q),\sin_{*}(q))\). From Lemma 3.2 we can use the change of variables (32) and write

\[\int_{0}^{r_{b,F}(q)}f_{R_{b},Q}(r_{b},q)dr_{b}=\frac{\mathcal{R}_{a}(\mathbf{w})}{\mathcal{R}_{b}(\mathbf{w})}\int_{0}^{r_{b,F}(q)}f_{R_{a},Q}\left(r_{b}\frac{\mathcal{R}_{a}(\mathbf{w})}{\mathcal{R}_{b}(\mathbf{w})},q\right)dr_{b}=\int_{0}^{r_{a,F}(q)}f_{R_{a},Q}\left(r_{a},q\right)dr_{a}=f_{Q}(q).\]
So the angular density \(f_{Q}(q)\) remains unchanged by the change of gauge function and satisfies (A3) from the assumptions of the corollary. The relation between the conditional quantile functions \(\mu_{a}\) and \(\mu_{b}\) follows by noting that for given \(\zeta\) and \(q\), these must both refer to the same point in Cartesian space, so must be related by (32). Similarly, the conditional upper end points of the conditional radial distributions, \(r_{a,F}(q)\) and \(r_{b,F}(q)\) must also be related by (32).
To demonstrate \(f_{R_{b},Q}\) satisfies (A1) we have (writing \(\mu_{a}\) and \(\mu_{b}\) for brevity, instead of \(\mu_{a}(\zeta,q)\), \(\mu_{b}(\zeta,q)\))
\[\lim_{\zeta\to 1}\sup_{r_{b}\in[0,r_{b,F}(q)-\mu_{b}]}\left|\frac{\bar{F}_{R_{b}|Q}(\mu_{b}+r_{b}|q)}{\bar{F}_{R_{b}|Q}(\mu_{b}|q)}-\bar{F}_{GP}(r_{b};\xi(q),\sigma_{b}(\mu_{b},q))\right|\]
\[=\lim_{\zeta\to 1}\sup_{r_{b}\in[0,r_{b,F}(q)-\mu_{b}]}\left|\frac{\int_{\mu_{b}+r_{b}}^{r_{b,F}(q)}f_{R_{b}|Q}(s|q)ds}{\int_{\mu_{b}}^{r_{b,F}(q)}f_{R_{b}|Q}(s|q)ds}-\left(1+\xi(q)\frac{r_{b}}{\sigma_{b}(\mu_{b},q)}\right)^{-1/\xi(q)}\right|\]
\[=\lim_{\zeta\to 1}\sup_{r_{a}\in[0,r_{a,F}(q)-\mu_{a}]}\left|\frac{\int_{\mu_{a}+r_{a}}^{r_{a,F}(q)}f_{R_{a}|Q}(s|q)ds}{\int_{\mu_{a}}^{r_{a,F}(q)}f_{R_{a}|Q}(s|q)ds}-\left(1+\xi(q)\frac{r_{a}\,\mathcal{R}_{b}(\mathbf{w})}{\sigma_{b}(\mu_{b},q)\,\mathcal{R}_{a}(\mathbf{w})}\right)^{-1/\xi(q)}\right|\]
\[=\lim_{\zeta\to 1}\sup_{r_{a}\in[0,r_{a,F}(q)-\mu_{a}]}\left|\frac{\bar{F}_{R_{a}|Q}(\mu_{a}+r_{a}|q)}{\bar{F}_{R_{a}|Q}(\mu_{a}|q)}-\left(1+\xi(q)\frac{r_{a}}{\sigma_{a}(\mu_{a},q)}\right)^{-1/\xi(q)}\right|=0.\]
Here we have tacitly assumed that \(\xi(q)\neq 0\). For values of \(q\) where \(\xi(q)=0\), the proof is identical to that above, substituting the appropriate form of the GP survivor function. Regarding assumption (A2), about the finiteness and continuity of the GP parameter functions, we note that the ratio \(\mathcal{R}_{a}(\mathbf{w})/\mathcal{R}_{b}(\mathbf{w})\) is finite and continuous, since it was stipulated that polar coordinate systems are defined in terms of gauge functions for continuous surfaces (see Definition 9). Hence if \(\sigma_{a}\) satisfies (A2), so does \(\sigma_{b}\).
### Proof of Theorem 3.3
Applying Corollary 3.2.1 to \(f_{S,Q}\) shows that (i) implies (ii). For the second part, first note that \(\mathcal{R}_{\alpha}\) must define a gauge function, since the set \(\{\mathbf{x}\in\mathbb{R}^{2}:\mathbf{x}\leq\sigma(q)(\cos_{*}(q),\sin_{*}(q) ),q\in I\}\) is star-shaped and has continuous boundary. Then, applying Corollary 3.2.1 to \(f_{R,Q}\) with \(S=\mathcal{R}_{\alpha}(R)\) shows that (ii) implies (i).
### Proof of Lemma 3.4
The first part of the lemma is simply a statement of the general rule for transformation of coordinate systems for probability density functions. Note that the Jacobian is always positive, since any function \(q_{a,b}(q)\) must be monotonically increasing. The particular forms of \(q_{1,2}\) and \(q_{2,1}\) follow immediately from Example (1). For the Jacobians we just need to consider the case \(q\in[0,1]\). The Jacobians in other quadrants follow by symmetry. For \(q\in[0,1]\) we can write
\[q_{1,2}(q)=\frac{\sin(\pi q/2)}{\cos(\pi q/2)+\sin(\pi q/2)}.\]
Differentiating this expression gives the first part of the result. For the second part, for \(q\in[0,1]\) we can write \(q_{2,1}=(2/\pi)\mathrm{atan}(q/(1-q))\), so that
\[\frac{dq_{2,1}(q)}{dq}=\frac{2}{\pi}(2q^{2}-2q+1)^{-1}=\frac{2}{\pi}\left( \cos_{1}^{2}(q)+\sin_{1}^{2}(q)\right)^{-1}.\]
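As a sanity check outside the proof, the Jacobian formula above can be verified numerically. The following short Python sketch (numpy assumed; the grid of angles and the step size are arbitrary choices) compares a central finite difference of \(q_{2,1}\) with the stated closed form.

```python
import numpy as np

def q21(q):
    # q_{2,1}(q) = (2/pi) * atan(q / (1 - q)) for q in [0, 1)
    return (2.0 / np.pi) * np.arctan(q / (1.0 - q))

def jacobian_closed_form(q):
    # Stated derivative: (2/pi) * (2 q^2 - 2 q + 1)^{-1}
    return (2.0 / np.pi) / (2.0 * q**2 - 2.0 * q + 1.0)

q = np.linspace(0.05, 0.95, 19)
h = 1e-6
numeric = (q21(q + h) - q21(q - h)) / (2.0 * h)  # central finite difference
assert np.allclose(numeric, jacobian_closed_form(q), rtol=1e-5)
print("max abs error:", np.max(np.abs(numeric - jacobian_closed_form(q))))
```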
### Proof of Corollary 3.4.1
From Lemma 3.4, for \(q\in[q_{b,a}(s),q_{b,a}(t)]\) we have
\[f_{Q_{b}}(q)=\int_{0}^{r_{F}(q)}f_{R,Q_{b}}(r,q)dr=\left|\frac{dq_{a,b}(q)}{dq }\right|\int_{0}^{r_{F}(q)}f_{R,Q_{a}}(r,q_{a,b}(q))dr=\left|\frac{dq_{a,b}(q) }{dq}\right|f_{Q_{a}}\left(q_{a,b}(q)\right),\]
where \(r_{F}(q)\) is the upper end point of \(R|Q_{b}\). Since \(f_{Q_{a}}\) is continuous and finite for \(q\in[s,t]\) and hence satisfies assumption (A3), \(f_{Q_{b}}(q)\) also satisfies assumption (A3). For the conditional radial density, we have
\[f_{R|Q_{b}}(r|q)=\frac{f_{R,Q_{b}}(r,q)}{f_{Q_{b}}(q)}=\frac{f_{R,Q_{a}}(r,q_{a,b}(q))}{f_{Q_{a}}(q_{a,b}(q))}=f_{R|Q_{a}}(r|q_{a,b}(q)).\]
Therefore, the conditional radial density \(f_{R|Q_{b}}(r|q)\) satisfies the SPAR assumptions (A1) and (A2) for the mapped angular range \(q\in[q_{b,a}(s),q_{b,a}(t)]\).
### Proof of Proposition 4.1
We begin by considering the relation between the ARE and ARL models for the copula. The relation between the models for the copula density is derived in the same way. Under the assumptions of the proposition, for \(\mathbf{w}=(1-w,w)\) and \(w\in(0,1)\) we have
\[C_{\mathbf{u}_{0}}(r^{1-w},r^{w})\sim\mathcal{L}_{\mathbf{u}_{0}}(r,\mathbf{w })r^{\Lambda_{\mathbf{u}_{0}}(\mathbf{w})},\quad r\to 0^{+},\]
where \(\mathcal{L}_{\mathbf{u}_{0}}\) is slowly-varying in \(r\) at \(0^{+}\). We wish to derive a representation of the form (17), for \(\mathbf{z}=(1-z,z)\), \(z\in(0,1)\), given by
\[C_{\mathbf{u}_{0}}(t\mathbf{z})\sim B_{\mathbf{u}_{0}}(\mathbf{z})\tilde{ \mathcal{L}}_{\mathbf{u}_{0}}(t)t^{\kappa_{\mathbf{u}_{0}}},\quad t\to 0^{+},\]
where \(\tilde{\mathcal{L}}_{\mathbf{u}_{0}}\in RV_{0}(0^{+})\). Equating the variables of each model, \(r^{1-w}=t(1-z)\), \(r^{w}=tz\), and solving for \(r\) and \(w\) gives
\[r =t^{2}z(1-z),\] \[w =\frac{\log(tz)}{\log(t^{2}z(1-z))}=\frac{1}{2}+\frac{\log(z)- \log(1-z)}{2\log(t^{2}z(1-z))}.\]
For fixed \(z\), \(w\to 1/2\) for \(t\to 0^{+}\). Hence \(\mathcal{L}_{\mathbf{u}_{0}}(r,\mathbf{w})\sim\tilde{\mathcal{L}}_{\mathbf{u} _{0}}(t)\) as \(t\to 0^{+}\) for some \(\tilde{\mathcal{L}}_{\mathbf{u}_{0}}\in RV_{0}(0^{+})\). From the assumptions of the proposition, we can expand \(\Lambda_{\mathbf{u}_{0}}(1-z,z)\) as a Taylor series about \(z=1/2\):
\[\Lambda_{\mathbf{u}_{0}}(1-z,z)=\Lambda_{\mathbf{u}_{0}}(\tfrac{1}{2},\tfrac{ 1}{2})+\beta(z-\tfrac{1}{2})+O(z-\tfrac{1}{2})^{2},\]
where \(\beta=[\frac{d}{dz}\Lambda_{\mathbf{u}_{0}}(1-z,z)]_{z=1/2}\). From the discussion in Section 4.2, we have \(\Lambda_{\mathbf{u}_{0}}(\tfrac{1}{2},\tfrac{1}{2})=\tfrac{1}{2}\kappa_{ \mathbf{u}_{0}}\). Therefore, we have
\[r^{\Lambda_{\mathbf{u}_{0}}(\mathbf{w})} =\exp\left(\log(t^{2}z(1-z))\left[\frac{1}{2}\kappa_{\mathbf{u}_{ 0}}+\beta\frac{\log(z)-\log(1-z)}{2\log(t^{2}z(1-z))}+O(\log(t))^{-2}\right]\right)\] \[=(t^{2}z(1-z))^{\frac{\kappa_{\mathbf{u}_{0}}}{2}}\left(\frac{z}{ 1-z}\right)^{\frac{\beta}{2}}O(1).\]
This gives the required form of \(B_{\mathbf{u}_{0}}(\mathbf{z})\), stated in the proposition. The required form of \(b_{\mathbf{u}_{0}}(\mathbf{z})\) is obtained in the same way from the model for the copula density. For the final claim, note that when \(\kappa_{\mathbf{u}_{0}}<2\) and \(\beta=0\), the tail density function tends to infinity for \(z\to 0^{+}\) and \(z\to 1^{-}\). Since \(t^{\kappa_{\mathbf{u}_{0}}-2}\to\infty\) as \(t\to 0^{+}\), if the representation (18) were valid for \(t\to 0^{+}\) and \(z\to 0^{+}\) (or \(z\to 1^{-}\)) simultaneously, then this would imply that the copula density is unbounded over a finite region, which is a contradiction, since the integral of the density over \((u,v)\in[0,1]^{2}\) must equal one.
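The change of variables between the coordinate pairs \((t,z)\) and \((r,w)\) used in this proof can be checked directly. The sketch below (outside the proof; numpy assumed, with arbitrary randomly drawn values of \(t\) and \(z\)) confirms that \(r=t^{2}z(1-z)\) and the stated expression for \(w\) reproduce \(r^{1-w}=t(1-z)\) and \(r^{w}=tz\) exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
t = 10.0 ** rng.uniform(-8, -1, size=1000)   # t -> 0+
z = rng.uniform(0.05, 0.95, size=1000)

r = t**2 * z * (1.0 - z)
w = 0.5 + (np.log(z) - np.log(1.0 - z)) / (2.0 * np.log(t**2 * z * (1.0 - z)))

# The pair (r, w) reproduces the original copula-scale coordinates exactly.
assert np.allclose(r**(1.0 - w), t * (1.0 - z), rtol=1e-8, atol=0.0)
assert np.allclose(r**w, t * z, rtol=1e-8, atol=0.0)
print("coordinate identities verified")
```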
### Proof of Proposition 4.2
This proof follows a similar reasoning to that for Proposition 4.1. From the assumptions of the proposition we have \(c_{\mathbf{u}_{0}}(t(1-z),tz)\sim b_{\mathbf{u}_{0}}(1-z,z)\mathcal{L}_{ \mathbf{u}_{0}}(t)\,t^{\kappa_{\mathbf{u}_{0}}-2}\) for \(t\to 0^{+}\) and \(z\in[0,1]\), and also \(c_{\mathbf{u}_{0}}(r^{1-w},r^{w})\sim\mathcal{M}_{\mathbf{u}_{0}}(r;w)\,r^{ \lambda_{\mathbf{u}_{0}}(\mathbf{w})-1}\) for \(r\to 0^{+}\) and \(w\in(0,1)\). Equating the variables of each model and solving for \(t\) and \(z\) gives
\[t =r^{1-w}+r^{w}\sim r^{\min(w,1-w)}\times\begin{cases}1,&w\in(0,1/2)\cup(1/2,1),\\ 2,&w=1/2,\end{cases}\qquad r\to 0^{+},\] \[z =\frac{r^{w}}{r^{1-w}+r^{w}}\sim\begin{cases}1-r^{1-2w},&w<1/2,\\ 1/2,&w=1/2,\\ r^{2w-1},&w>1/2,\end{cases}\qquad r\to 0^{+}.\]
When \(w=1/2\), we have
\[c_{\mathbf{u}_{0}}(r^{1/2},r^{1/2})=c_{\mathbf{u}_{0}}(t/2,t/2)\sim b_{\mathbf{u}_{0}}(1/2,1/2)\mathcal{L}_{\mathbf{u}_{0}}(2r^{1/2})\,(2r^{1/2})^{\kappa_{\mathbf{u}_{0}}-2}.\]
Since \(b_{\mathbf{u}_{0}}(1-z,z)\in RV_{\beta_{1}}(0^{+})\), for \(w>1/2\) and \(r\to 0^{+}\) we have \(b_{\mathbf{u}_{0}}(1-z,z)=\mathcal{M}_{0}(r,w)r^{\beta_{1}(2w-1)}\) for some function \(\mathcal{M}_{0}\) that is slowly-varying in \(r\) at \(0^{+}\). Note that \(\mathcal{L}_{\mathbf{u}_{0}}(t)\) is also slowly-varying in \(r\) at \(0^{+}\). Putting these together, for \(w>1/2\) we have
\[c_{\mathbf{u}_{0}}(r^{1-w},r^{w})=c_{\mathbf{u}_{0}}(t(1-z),tz)\sim\mathcal{M}_{\mathbf{u}_{0}}(r;w)r^{\beta_{1}(2w-1)+(1-w)(\kappa_{\mathbf{u}_{0}}-2)},\quad r\to 0^{+},\]
for some function \(\mathcal{M}_{\mathbf{u}_{0}}(r;w)\) that is slowly-varying in \(r\) at \(0^{+}\). Similarly, for \(w<1/2\), we have
\[c_{\mathbf{u}_{0}}(r^{1-w},r^{w})=c_{\mathbf{u}_{0}}(t(1-z),tz)\sim\mathcal{M}_{\mathbf{u}_{0}}(r;w)r^{\beta_{2}(1-2w)+w(\kappa_{\mathbf{u}_{0}}-2)},\quad r\to 0^{+}.\]
Combining these results gives the desired form of \(\lambda_{\mathbf{u}_{0}}\).
### Proof of Proposition 4.3
For \(\mathbf{w}\in\{(w_{1},...,w_{d})\in\mathcal{U}_{1}:(-1)^{1+u_{d,j}}w_{j}>0,\,j= 1,...,d\}\), we can write
\[\delta_{L}(r,\mathbf{w})=c_{\mathbf{u}_{0}}(\tfrac{1}{2}\exp(-rw_{1}),..., \tfrac{1}{2}\exp(-rw_{d}))=c_{\mathbf{u}_{0}}(\exp(-tz_{1}),...,\exp(-tz_{d})).\]
Equating marginal-scale and copula-scale coordinates, for \(j=1,...,d\), gives
\[\frac{1}{2}\exp(-rw_{j})=\exp(-tz_{j})\iff\log(2)+rw_{j}=tz_{j}.\]
To solve for \(t\) and \(\mathbf{z}\), we assume \(\mathbf{z}\in\mathcal{U}_{1}\) (this is always possible by rescaling), which gives
\[t =\sum_{j=1}^{d}tz_{j}=d\log(2)+r,\] \[z_{j} =\frac{\log(2)+rw_{j}}{d\log(2)+r}=w_{j}+O(r^{-1}).\]
When \(|w_{j}|=1/d\) for \(j=1,...d\), we have \(z_{j}=w_{j}\) for \(j=1,...d\), and
\[\delta_{L}(r,\mathbf{w}) \sim\mathcal{M}_{\mathbf{u}_{0}}(\exp(-t),\mathbf{z})\exp(-t( \lambda_{\mathbf{u}_{0}}(\mathbf{z})-1)),\qquad r\to\infty\] \[=\mathcal{M}_{\mathbf{u}_{0}}(2^{-d}\exp(-r),\mathbf{w})\exp(-(d \log(2)+r)(\lambda_{\mathbf{u}_{0}}(\mathbf{w})-1))\] \[=\mathcal{M}_{\mathbf{u}_{0}}(2^{-d}\exp(-r),\mathbf{w})2^{-d( \lambda_{\mathbf{u}_{0}}(\mathbf{w})-1)}\exp(-r(\lambda_{\mathbf{u}_{0}}( \mathbf{w})-1)).\]
For other values of \(\mathbf{w}\in\{(w_{1},...,w_{d})\in\mathcal{U}_{1}:(-1)^{1+u_{d,j}}w_{j}>0,\,j= 1,...,d\}\), since \(\lambda_{\mathbf{u}_{0}}\) is continuous and has bounded partial derivatives everywhere apart from the ray \(\mathbf{z}=\mathbf{1}_{d}\), we have \(\lambda_{\mathbf{u}_{0}}(\mathbf{z})=\lambda_{\mathbf{u}_{0}}(\mathbf{w}+O(r^{ -1}))=\lambda_{\mathbf{u}_{0}}(\mathbf{w})+O(r^{-1})\). Therefore, for \(r\to\infty\) we can write
\[\delta_{L}(r,\mathbf{w}) \sim\mathcal{M}_{\mathbf{u}_{0}}(\exp(-t),\mathbf{z})\exp(-t( \lambda_{\mathbf{u}_{0}}(\mathbf{z})-1))\] \[=\mathcal{M}_{\mathbf{u}_{0}}(2^{-d}\exp(-r),\mathbf{w}+O(r^{-1} ))\exp(-(d\log(2)+r)(\lambda_{\mathbf{u}_{0}}(\mathbf{w})-1+O(r^{-1})))\] \[=\mathcal{M}_{\mathbf{u}_{0}}(2^{-d}\exp(-r),\mathbf{w}+O(r^{-1} ))O(1)\exp(-r(\lambda_{\mathbf{u}_{0}}(\mathbf{w})-1)).\]
The terms outside the exponential function are slowly-varying in \(\exp(r)\) at \(\infty\), which completes the proof.
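As an illustration outside the proof, the coordinate relations \(t=d\log(2)+r\) and \(z_{j}=w_{j}+O(r^{-1})\) can be checked numerically. The sketch below (numpy assumed) uses \(d=3\) and an arbitrary \(\mathbf{w}\) with positive components summing to one, the simplest sign configuration.

```python
import numpy as np

d = 3
w = np.array([0.2, 0.3, 0.5])          # a point on the L1 unit simplex
for r in [10.0, 100.0, 1000.0]:
    tz = np.log(2.0) + r * w           # from (1/2) exp(-r w_j) = exp(-t z_j)
    t = tz.sum()                       # equals d log(2) + r when sum(w_j) = 1
    z = tz / t
    assert np.isclose(t, d * np.log(2.0) + r)
    # z_j - w_j should decay like 1/r, so the scaled error stays O(1)
    print(r, np.max(np.abs(z - w)) * r)
```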
### Proof of Proposition 4.4
For this case we write
\[\delta_{GP}(r,\mathbf{w})=c_{\mathbf{1}_{d}}((1+\xi_{m}rw_{1})^{-1/\xi_{m}},...,(1+\xi_{m}rw_{d})^{-1/\xi_{m}})=c_{\mathbf{1}_{d}}(t\mathbf{z}).\]
We can relate the copula-scale angular-radial coordinates \((t,\mathbf{z})\) with the marginal-scale angular-radial coordinates \((r,\mathbf{w})\) as follows. If we assume \(\mathbf{z}\in\mathcal{U}_{1}\) and denote \(s_{\mathbf{w}}=\xi_{m}^{-1/\xi_{m}}\sum_{j=1}^{d}w_{j}^{-1/\xi_{m}}\) then
\[t =\sum_{j=1}^{d}tz_{j}=\sum_{j=1}^{d}(1+\xi_{m}rw_{j})^{-1/\xi_{m}} \sim r^{-1/\xi_{m}}s_{\mathbf{w}},\quad r\to\infty\] \[z_{j} =t^{-1}(1+\xi_{m}rw_{j})^{-1/\xi_{m}}\sim w_{j}^{-1/\xi_{m}}s_{ \mathbf{w}}^{-1},\quad r\to\infty.\]
Therefore \(\mathbf{z}\) is asymptotically constant along rays of constant \(\mathbf{w}\). Substituting this into the assumed asymptotic form of the copula gives
\[\delta_{GP}(r,\mathbf{w})\sim s_{\mathbf{w}}^{\kappa_{\mathbf{1}}-d}b_{ \mathbf{1}_{d}}\left(s_{\mathbf{w}}^{-1}\mathbf{w}^{-1/\xi_{m}}\right) \mathcal{L}_{\mathbf{1}_{d}}\left(r^{-1/\xi_{m}}\right)r^{\frac{d-\kappa_{ \mathbf{1}_{d}}}{\xi_{m}}},\quad r\to\infty.\]
Above we have used the fact that since \(\mathcal{L}_{\mathbf{1}_{d}}\) is slowly-varying at \(0^{+}\), \(\mathcal{L}_{\mathbf{1}_{d}}\left(r^{-1/\xi_{m}}s_{\mathbf{w}}\right)\sim \mathcal{L}_{\mathbf{1}_{d}}\left(r^{-1/\xi_{m}}\right)\) as \(r\to\infty\).
### Proof of Proposition 5.1
To simplify the presentation, we use the notation \(\mathbf{w}=(\cos_{1}(q),\sin_{1}(q))\) and \(\mathbf{u}=(F_{*}(r\cos_{1}(q)),F_{*}(r\sin_{1}(q)))\) throughout this proof. The general strategy for proving these results is as follows. We have
\[f_{Q_{*}}(q)=\int_{0}^{r_{F}(q)}f_{R_{*},Q_{*}}(r,q)dr,\]
where \(r_{F}(q)\) is the upper end point of \(R_{*}|(Q_{*}=q)\). We need to examine the behaviour of the integrand \(f_{R_{*},Q_{*}}(r,q)\) for \(r\in[0,r_{F}(q)]\). First, we write the joint angular-radial density as
\[f_{R_{*},Q_{*}}(r,q) =r\,m_{*}(r,\mathbf{w})c(\mathbf{u})\] \[=r\,m_{*}(r,\mathbf{w})\delta_{*}(r,\mathbf{w}).\]
To examine the behaviour of \(f_{R_{*},Q_{*}}\) for \(\mathbf{u}\) close to corner \((u_{0},v_{0})\), we write \(\delta_{*}(r,\mathbf{w})=c_{(u_{0},v_{0})}(t(1-z),tz)\), where
\[t(1-z) =u_{0}+(-1)^{u_{0}}F_{*}(r\cos_{1}(q)),\] \[tz =v_{0}+(-1)^{v_{0}}F_{*}(r\sin_{1}(q)).\]
We then find asymptotic relations between copula-scale polar coordinates \((t,z)\) and marginal-scale polar coordinates \((r,q)\), so that the assumed asymptotic ARL form (18) can be applied. To check whether the integral is finite, we use the assumed regular variation property, together with Karamata's theorem (see e.g. [28]) which states that
1. if \(\alpha>-1\) then \(g\in RV_{\alpha}(0^{+})\) implies \(G(x)=\int_{0}^{x}g(t)dt\in RV_{\alpha+1}(0^{+})\) and is finite;
2. if \(\alpha<-1\) then \(g\in RV_{\alpha}(\infty)\) implies \(G(x)=\int_{x}^{\infty}g(t)dt\in RV_{\alpha+1}(\infty)\) and is finite.
(a) For GP margins, we have \(q\in[0,1]\) and hence \(\mathbf{w}=(1-q,q)\). The rays originate in the lower left corner of the copula. Write
\[f_{R_{GP},Q_{GP}}(r,q)=r\left[(1+\xi_{m}r(1-q))(1+\xi_{m}rq)\right]^{-1-\frac{ 1}{\xi_{m}}}c_{(0,0)}(t(1-z),tz),\]
where
\[t(1-z) =1-\left(1+\xi_{m}r(1-q)\right)^{-\frac{1}{\xi_{m}}}\] \[tz =1-\left(1+\xi_{m}rq\right)^{-\frac{1}{\xi_{m}}}.\]
Solving for \(t\) and \(z\) and expanding as Taylor series about \(r=0\) gives \(t\sim r\) and \(z\sim q\) for \(r\to 0^{+}\). Therefore, we have
\[f_{R_{GP},Q_{GP}}(r,q)\sim b_{(0,0)}(1-q,q)\mathcal{L}_{(0,0)}(r)\,r^{\kappa_{ (0,0)}-1},\quad r\to 0^{+}.\]
Hence the angular-radial density is bounded as \(r\to 0^{+}\), since either \(\kappa_{(0,0)}=1\) and \(\lim_{r\to 0^{+}}\mathcal{L}_{(0,0)}(r)=\Upsilon_{(0,0)}\leq 1\), or \(\kappa_{(0,0)}>1\).
For the behaviour as \(r\to\infty\), we need to consider three cases, \(\xi_{m}<0\), \(\xi_{m}=0\) and \(\xi_{m}>0\). When \(\xi_{m}<0\), we have \(r\leq r_{F}(q)=\min\left(-(\xi_{m}(1-q))^{-1},-(\xi_{m}q)^{-1}\right)\). When \(q<1/2\) the path terminates on the right hand edge of the copula and when \(q>1/2\) the path terminates on the upper edge. As the copula density is bounded away from the corners, \(f_{R_{GP},Q_{GP}}\) is also bounded for \(r\in[0,r_{F}(q)]\). Therefore \(f_{Q_{GP}}(q)\) is also finite, since the integral is over a finite range. For \(q=1/2\), substitute \(r=r_{F}(q)-s=-2/\xi_{m}-s\), so that
\[f_{R_{GP},Q_{GP}}(r,1/2)=\left(-\frac{2}{\xi_{m}}-s\right)\,(-\xi_{m}s/2)^{-2- \frac{2}{\xi_{m}}}c_{(1,1)}(t/2,t/2),\]
where \(t=2(-\xi_{m}s/2)^{-\frac{1}{\xi_{m}}}\). For \(s\to 0^{+}\) we have
\[f_{R_{GP},Q_{GP}}(r,1/2) \sim-\frac{2}{\xi_{m}}\left(-\xi_{m}s/2\right)^{-2-\frac{2}{\xi_ {m}}}b_{(1,1)}(1/2)\mathcal{L}_{(1,1)}(t)t^{\kappa_{(1,1)}-2}\] \[\propto\mathcal{L}_{(1,1)}\left(s^{-\frac{1}{\xi_{m}}}\right)s^{ -2-\frac{\kappa_{(1,1)}}{\xi_{m}}}.\]
In the second line we have used the fact that since \(\mathcal{L}_{(1,1)}\) is slowly-varying at \(0^{+}\), for all \(a>0\), \(\mathcal{L}_{(1,1)}(at)\sim\mathcal{L}_{(1,1)}(t)\) as \(t\to 0^{+}\). So \(f_{R_{GP},Q_{GP}}(r,1/2)\) is in \(RV_{\alpha}(0^{+})\), with \(\alpha=-2-(\kappa_{(1,1)}/\xi_{m})\). Applying Karamata's theorem, the integral is finite when \(\alpha>-1\), that is, when \(\xi_{m}>-\kappa_{(1,1)}\).
When \(\xi_{m}>0\) we have
\[f_{R_{GP},Q_{GP}}(r,q)=r\left[(1+\xi_{m}r(1-q))(1+\xi_{m}rq)\right]^{-1-\frac{ 1}{\xi_{m}}}c_{(1,1)}(t(1-z),tz),\]
where
\[t(1-z) =(1+\xi_{m}r(1-q))^{-\frac{1}{\xi_{m}}}\] \[tz =(1+\xi_{m}rq)^{-\frac{1}{\xi_{m}}}.\]
Solving for \(t\) and \(z\) gives for \(r\to\infty\)
\[t \sim r^{-\frac{1}{\xi_{m}}}\left[(\xi_{m}(1-q))^{-\frac{1}{\xi_{m }}}+(\xi_{m}q)^{-\frac{1}{\xi_{m}}}\right],\] \[z \sim\frac{q^{-\frac{1}{\xi_{m}}}}{(1-q)^{-\frac{1}{\xi_{m}}}+q^{- \frac{1}{\xi_{m}}}}.\]
Therefore, rays of constant \(q\) asymptote to rays of constant \(z\) on the copula scale. So for the joint density, we get
\[f_{R_{GP},Q_{GP}}(r,q)\sim\left(\xi_{m}^{2}q(1-q)\right)^{-1-\frac{1}{\xi_{m }}}b_{(1,1)}(z)\left[(\xi_{m}(1-q))^{-\frac{1}{\xi_{m}}}+(\xi_{m}q)^{-\frac{ 1}{\xi_{m}}}\right]^{\kappa_{(1,1)}-2}\mathcal{L}_{(1,1)}(t)r^{-1-\frac{ \kappa_{(1,1)}}{\xi_{m}}}.\]
This is in \(RV_{\alpha}(\infty)\) with \(\alpha=-1-(\kappa_{(1,1)}/\xi_{m})<-1\). Hence, by Karamata's theorem, this integral is finite for any \(\xi_{m}>0\).
Finally, in the case that \(\xi_{m}=0\), we have
\[f_{R_{GP},Q_{GP}}(r,q)=r\exp(-r)c_{(1,1)}(t(1-z),tz),\]
where \(t(1-z)=\exp(-r(1-q))\) and \(tz=\exp(-rq)\), so that
\[t =\exp(-r(1-q))+\exp(-rq),\] \[z =\frac{\exp(-rq)}{\exp(-r(1-q))+\exp(-rq)}.\]
Note that for \(q>1/2\), \(z\sim\exp(-r(2q-1))\to 0\) as \(r\to\infty\), and similarly \(1-z\sim\exp(-r(1-2q))\to 0\) for \(q<1/2\). In some cases, the assumed asymptotic form of the copula is not valid for \(t,z\to 0^{+}\) simultaneously. So in these cases, we note that \(tc_{(1,1)}(t(1-z),tz)\) is bounded for \(t\to 0^{+}\) for all \(z\in[0,1]\). (If \(tc_{(1,1)}(t(1-z),tz)\) were not bounded for \(z\to 0^{+}\), then this would contradict the fact that \(\int_{0}^{1}c_{(1,1)}(t,0)dt=1\).)
Since \(tc_{(1,1)}(t(1-z),tz)\) is bounded, there exists \(\beta\in(0,\infty)\) and \(\tau>0\) such that \(tc_{(1,1)}(t(1-z),tz)<\beta\) for \(t<\tau\). Then for \(t<\tau\)
\[f_{R_{GP},Q_{GP}}(r,q) \leq r\exp(-r)\frac{\beta}{t}\] \[=r\exp(-r)\frac{\beta}{\exp(-r(1-q))+\exp(-rq)}\] \[\sim\beta r\begin{cases}\exp(-r(1-\min(q,1-q))),&q\in(0,1/2)\cup(1/2,1),\\ \frac{1}{2}\exp(-r/2),&q=1/2.\end{cases}\]
So by the dominated convergence theorem, \(f_{Q_{GP}}(q)\) is finite for \(\xi_{m}=0\), which completes the proof for part (a).
(b) For GP margins, as \(q\to 0^{+}\) the paths \(\mathbf{u}\) through the copula pass closer to the lower right corner of the copula (see Figure 9). To examine the behaviour of \(f_{R_{GP},Q_{GP}}\) for \(\mathbf{u}\) close to corner \((1,0)\), we need to determine the range of \(r\) for which \(\mathbf{u}=(F_{GP}(r(1-q)),F_{GP}(rq))\to(1^{-},0^{+})\) as \(q\to 0^{+}\). For part (i), we treat the cases \(\xi_{m}=0\) and \(\xi_{m}>0\) separately. For the case of exponential margins we have
\[f_{R_{E},Q_{E}}(r,q)=r\exp(-r)c_{(1,0)}(t(1-z),tz),\]
where \(t(1-z)=\exp(-r(1-q))\) and \(tz=1-\exp(-rq)\). Solving for \(t\) and \(z\) gives
\[t =\exp(-r(1-q))+1-\exp(-rq),\] \[z =\frac{1-\exp(-rq)}{\exp(-r(1-q))+1-\exp(-rq)}.\]
Consider the behaviour of the joint density in the range \(r\in J=[-\frac{1}{2}\log(q),-2\log(q)]\). For \(q\to 0^{+}\), the lower limit of \(J\) tends to infinity, but \(rq\to 0\) for all \(r\in J\). Therefore, for \(r\in J\) and \(q\to 0^{+}\) we have
\[t \sim\exp(-r)+rq,\] \[z \sim\frac{rq}{\exp(-r)+rq}.\]
For \(r\in J\) and \(q\to 0^{+}\), when \(rq=\exp(-r)\), we have \(z\to 1/2\). The solution of \(rq=\exp(-r)\) is \(r=W_{0}(1/q)\), where \(W_{0}\) is the zeroth branch of the Lambert W function (see e.g. [48, SS4.13]). Note that \(W_{0}(1/q)\sim-\log(q)-\log(-\log(q))\) as \(q\to 0^{+}\). Define \(s=r-W_{0}(1/q)\), then for \(r\in J\) and \(q\to 0^{+}\) we have
\[t \sim\exp(-s-W_{0}(1/q))+(s+W_{0}(1/q))q\] \[=(s+W_{0}(1/q))q(e^{-s}+1)\] \[\sim W_{0}(1/q)q(e^{-s}+1)\] \[z \sim(e^{-s}+1)^{-1}.\]
Substituting this into the assumed form of the copula gives for \(q\to 0^{+}\) and \(r\in J\),
\[c_{(1,0)}(t(1-z),tz) \sim\Upsilon_{(1,0)}b_{(1,0)}\left(\frac{e^{-s}}{e^{-s}+1},\frac {1}{e^{-s}+1}\right)\left[W_{0}(1/q)q(e^{-s}+1)\right]^{-1},\]
and
\[f_{R_{E},Q_{E}}(r,q) =(W_{0}(1/q)+s)W_{0}(1/q)q\exp(-s)c_{(1,0)}(t(1-z),tz)\] \[\sim\Upsilon_{(1,0)}\frac{\exp(-s)}{\exp(-s)+1}b_{(1,0)}\left( \frac{e^{-s}}{e^{-s}+1},\frac{1}{e^{-s}+1}\right)W_{0}(1/q),\]
Therefore, \(f_{R_{E},Q_{E}}\) is unbounded for \(r\in J\) as \(q\to 0^{+}\), and hence \(f_{Q_{E}}(q)\to\infty\) as \(q\to 0^{+}\).
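The Lambert W step above is easy to verify numerically. The sketch below (outside the proof; scipy assumed, with arbitrary values of \(q\)) confirms that \(r=W_{0}(1/q)\) solves \(rq=\exp(-r)\) and compares \(W_{0}(1/q)\) with the leading-order expansion \(-\log(q)-\log(-\log(q))\).

```python
import numpy as np
from scipy.special import lambertw

for q in [1e-3, 1e-6, 1e-9]:
    r = np.real(lambertw(1.0 / q))            # principal branch W_0
    assert np.isclose(r * q, np.exp(-r))      # r solves r q = exp(-r)
    approx = -np.log(q) - np.log(-np.log(q))  # W_0(1/q) ~ -log q - log(-log q)
    # the absolute gap reflects higher-order terms and shrinks relative to W_0(1/q)
    print(q, r, approx, r - approx)
```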
For the case \(\xi_{m}>0\) we take a similar approach to above and consider the behaviour of the joint density for \(r\) in the interval \(J=[-\frac{1}{2}\log(q),-2\log(q)]\). In this case we have
\[f_{R_{GP},Q_{GP}}(r,q) =r(1+\xi_{m}r(1-q))^{-1-\frac{1}{\xi_{m}}}(1+\xi_{m}rq)^{-1-\frac{ 1}{\xi_{m}}}c_{(1,0)}(t(1-z),tz),\]
where \(t(1-z)=(1+\xi_{m}r(1-q))^{-\frac{1}{\xi_{m}}}\) and \(tz=1-(1+\xi_{m}rq)^{-\frac{1}{\xi_{m}}}\). For \(r\in J\) and \(q\to 0^{+}\) we have
\[t(1-z) \sim(\xi_{m}r)^{-\frac{1}{\xi_{m}}},\] \[tz \sim rq.\]
Solving for \(t\) and \(z\), gives for \(r\in J\) and \(q\to 0^{+}\),
\[t \sim(\xi_{m}r)^{-\frac{1}{\xi_{m}}}+rq,\] \[z \sim\frac{rq}{(\xi_{m}r)^{-\frac{1}{\xi_{m}}}+rq}.\]
Note that \(z\to 1/2\) when \((\xi_{m}r)^{-\frac{1}{\xi_{m}}}=rq\) for \(r\in J\) and \(q\to 0^{+}\). If we substitute \(r=sq^{-1}(q/\xi_{m})^{\frac{1}{\xi_{m}+1}}\) then for \(r\in J\) and \(q\to 0^{+}\),
\[t \sim\left(\frac{q}{\xi_{m}}\right)^{\frac{1}{\xi_{m}+1}}\left[s^ {-\frac{1}{\xi_{m}}}+s\right],\] \[z \sim\frac{s}{s^{-\frac{1}{\xi_{m}}}+s}.\]
Substituting this for the joint density, and recalling that for \(r\in J\) and \(q\to 0^{+}\) we have \(r\to\infty\) and \(rq\to 0^{+}\), so
\[f_{R_{GP},Q_{GP}}(r,q) =r(1+\xi_{m}r(1-q))^{-1-\frac{1}{\xi_{m}}}(1+\xi_{m}rq)^{-1-\frac{ 1}{\xi_{m}}}c_{(1,0)}(t(1-z),tz)\] \[\sim\xi_{m}^{-1-\frac{1}{\xi_{m}}}r^{-\frac{1}{\xi_{m}}}c_{(1,0) }(t(1-z),tz)\] \[\propto q^{\frac{1}{\xi_{m}+1}}s^{-\frac{1}{\xi_{m}}}c_{(1,0)}(t(1 -z),tz)\] \[\sim q^{\frac{1}{\xi_{m}+1}}s^{-\frac{1}{\xi_{m}}}\Upsilon_{(1,0) }b_{(1,0)}(1-z,z)t^{-1}\] \[\propto b_{(1,0)}(1-z,z)\frac{s^{-\frac{1}{\xi_{m}}}}{s^{-\frac{1 }{\xi_{m}}}+s}.\]
Therefore, for \(r\in J\) and \(q\to 0^{+}\) the joint density tends to a fixed form in terms of \(s\). However, since \(dr/ds\propto q^{-\frac{\xi_{m}}{\xi_{m}+1}}\), the integral of the joint density over \(r\) diverges as \(q\to 0^{+}\), which completes the proof for part (i).
For (ii) we proceed in the same way as for part (i), but using a different change of variables. As with part (a), we substitute \(r=r_{F}(q)-s=a(1-s)/(1-q)\), where \(a=-1/\xi_{m}\), to get \(t(1-z)=s^{a}\) and \(tz=a(1-s)q+O(q^{2})\). Therefore, for \(q\to 0^{+}\) we have
\[t \sim s^{a}+aq(1-s),\] \[z \sim\frac{aq(1-s)}{s^{a}+aq(1-s)}.\]
As before, we are interested in the behaviour of the joint density around \(z=1/2\), which occurs when \(aq=s^{a}/(1-s)\). The value \(s^{a}/(1-s)\) is monotonically increasing in \(s\) for \(s\in[0,1)\), so when \(q\) is small, the solution of \(aq=s^{a}/(1-s)\) is approximately \(s=(aq)^{1/a}\). Therefore, we substitute \(s=m(aq)^{\frac{1}{a}}\), with \(m\in[0,(aq)^{-\frac{1}{a}}]\). As we are interested in the behaviour of the joint density for \(\mathbf{u}\) close to \((1,0)\), we restrict our interest to \(m\in J=[0,-\log(q)]\), so that the upper end point of this interval tends to infinity as \(q\to 0^{+}\), but \(mq^{b}\to 0^{+}\) for any \(b>0\) and \(m\in J\). So for \(q\to 0^{+}\) and
\(m\in J\) we have \(t\sim aq\,[m^{a}+1]\) and \(z\sim(m^{a}+1)^{-1}\). We now write the joint density as
\[f_{R_{GP},Q_{GP}}(r,q) =r(1+\xi_{m}r(1-q))^{-1-\frac{1}{\xi_{m}}}(1+\xi_{m}rq)^{-1-\frac{1 }{\xi_{m}}}c_{(1,0)}(t(1-z),tz)\] \[=a\frac{1-s}{1-q}\left[\left(1-q\frac{1-s}{1-q}\right)s\right]^{a -1}c_{(1,0)}(t(1-z),tz)\] \[\sim as^{a-1}c_{(1,0)}(t(1-z),tz),\qquad q\to 0^{+},m\in J\] \[=am^{a-1}(aq)^{\frac{a-1}{a}}c_{(1,0)}(t(1-z),tz)\] \[\sim am^{a-1}(aq)^{\frac{a-1}{a}}\Upsilon_{(1,0)}b_{(1,0)}(1-z,z )t^{-1}\] \[\sim a(aq)^{-\frac{1}{a}}\Upsilon_{(1,0)}b_{(1,0)}(1-z,z)\frac{m ^{a-1}}{1+m^{a}}.\]
Noting that \(ds/dm=(aq)^{\frac{1}{a}}\), the integral of \(f_{R_{GP},Q_{GP}}(r,q)\) over \(r\), expressed in terms of \(m\), converges to a fixed form for \(q\to 0^{+}\) and \(m\in J\). For \(m\to 0^{+}\), we have \(z\sim 1-m^{a}\), and under the assumptions above,
\[f_{R_{GP},Q_{GP}}(r,q)\sim a(aq)^{-\frac{1}{a}}\Upsilon_{(1,0)}\mathcal{M}_{ 1}(1-m^{a})m^{a(\beta_{1}+1)-1},\]
for some \(\mathcal{M}_{1}(z)\in RV_{0}(1^{-})\). The integral \(\int_{0}^{m_{0}}f_{R_{GP},Q_{GP}}(r,q)dm\) for small \(m_{0}\) has an integrand that is independent of \(q\) and is regularly-varying at \(0^{+}\) with index \(\alpha=a(\beta_{1}+1)-1>-1\). Hence by Karamata's theorem, this part of the integral is finite. For the upper limit of the interval \(J\), \(m=-\log(q)\) and \(z\sim m^{-a}\) as \(q\to 0^{+}\). So we can write
\[f_{R_{GP},Q_{GP}}(r,q)\sim a(aq)^{-\frac{1}{a}}\Upsilon_{(1,0)}\mathcal{M}_{ 0}(m^{-a})m^{a\beta_{0}-1},\]
for some \(\mathcal{M}_{0}(z)\in RV_{0}(0^{+})\). As above, Karamata's theorem implies that the integral is finite as \(q\to 0^{+}\) and \(m\to\infty\). For the remaining part of the integral, we note that \(c\) is finite away from the corners, so \(f_{R_{GP},Q_{GP}}(r,q)\) is also finite and the range of the integral is also finite. Hence \(f_{Q_{GP}}(q)\) is finite as \(q\to 0^{+}\).
(c) For symmetric margins, the rays of constant angle on the copula scale emanate from the centre of the copula \(\mathbf{u}=(0.5,0.5)\) and pass close to at most one corner of the copula - see Figure 9. Since the copula is assumed to be continuous and bounded away from the corners, we need only consider the behaviour of the joint density for \(\mathbf{u}\) close to a corner of the copula. The same steps used in part (a) then show that the angular density is finite when \(q\notin\mathbb{Z}\), in which case the ray terminates in a corner. For the cases where \(q\in\mathbb{Z}\), the ray terminates on the edge of the copula. From the assumed form of the copula, there is a \(\beta>0\) such that \(c(\mathbf{u})\leq\beta\) for all values of \(\mathbf{u}\) along the path, and hence \(f_{R_{SGP},Q_{SGP}}\leq\beta\,r\,m_{SGP}(r,\mathbf{w})\). When \(\xi_{m}<0\), the range of the integral is finite and the joint density \(f_{R_{SGP},Q_{SGP}}\) is finite, so the integral is finite. When \(\xi_{m}=0\), we have \(f_{R_{SGP},Q_{SGP}}\leq\frac{1}{4}\beta\,r\exp(-r)\), which has finite integral over \(r\in[0,\infty)\). Finally, when \(\xi_{m}>0\) we have \(f_{Q_{SGP}}\leq\frac{1}{4}\beta\int_{0}^{\infty}r(1+\xi_{m}r)^{-1-1/\xi_{m}}dr\). This integral is finite when \(\xi_{m}<1\), which completes this part of the proof.
(d) For simplicity, consider the case \(q=0\) - other cases are treated identically. In this case, the path terminates at \(\mathbf{u}=(1,1/2)\). Since \(c\) is positive and finite on the edges, \(c(1,1/2)=\beta\) for some \(\beta\in(0,\infty)\). Since \(c\) is continuous, for any \(\epsilon\in(0,\beta)\) there exists \(\Delta>0\) such that \(c(1-\Delta,1/2)\geq\beta-\epsilon\). Therefore, for \(\frac{1}{2}(1+\xi_{m}r)^{-1/\xi_{m}}<\Delta\), we have \(f_{R_{SGP},Q_{SGP}}(r,0)\geq\frac{1}{4}(\beta-\epsilon)r(1+\xi_{m}r)^{-1-1/\xi_{m}}\). Define \(r_{0}=((2\Delta)^{-\xi_{m}}-1)/\xi_{m}\). Then
\[f_{Q_{SGP}}(0)>\int_{r_{0}}^{\infty}f_{R_{SGP},Q_{SGP}}(r,0)dr\geq\frac{1}{4}( \beta-\epsilon)\int_{r_{0}}^{\infty}r(1+\xi_{m}r)^{-1-1/\xi_{m}}dr.\]
The integral on the RHS is infinite, so since this inequality holds for any \(\epsilon>0\), \(f_{Q_{SGP}}(0)\) is also infinite.
(e) For this case, we equate the copula-scale coordinate with the marginal-scale coordinate, to give \(t=(1+\xi_{m}r)^{-1/\xi_{m}}\sim(\xi_{m}r)^{-1/\xi_{m}}\) for \(r\to\infty\). Then, since \(c(1-t,1/2)\in RV_{\alpha}(0^{+})\), we have \(c(1-t,1/2)=\mathcal{L}(t)t^{\alpha}\) as \(t\to 0^{+}\), for some function \(\mathcal{L}\in RV_{0}(0^{+})\). Therefore
\[f_{R_{SGP},Q_{SGP}}(r,0) =\frac{1}{4}r(1+\xi_{m}r)^{-1-\frac{1}{\xi_{m}}}\mathcal{L}\left( r^{-\frac{1}{\xi_{m}}}\right)(\xi_{m}r)^{-\frac{\alpha}{\xi_{m}}}\] \[\sim\frac{1}{4}\xi_{m}^{-1-\frac{\alpha+1}{\xi_{m}}}\mathcal{L} \left(r^{-\frac{1}{\xi_{m}}}\right)r^{-\frac{\alpha+1}{\xi_{m}}},\quad r\to\infty,\]
Hence \(f_{R_{SGP},Q_{SGP}}(r,0)\in RV_{\beta}(\infty)\) with \(\beta=-(\alpha+1)/\xi_{m}\), and so by Karamata's theorem, \(f_{Q_{SGP}}(0)\) is finite when \(\xi_{m}<\alpha+1\).
(f) For this case the joint density for \(q=1/2\) is
\[f_{R_{P},Q_{P}}(r,1/2)=r(r/2)^{-4}c(t/2,t/2),\quad r\geq 2\]
where \(t/2=1-(2/r)\). Substituting \(r=2+s\) for \(s\geq 0\), we have \(t\sim s\) as \(s\to 0^{+}\), and hence
\[f_{R_{P},Q_{P}}(2+s,1/2) =(2+s)((2+s)/2)^{-4}c(t/2,t/2)\] \[\sim 2\Upsilon_{(0,0)}b_{(0,0)}(1/2)s^{-1},\quad s\to 0^{+}.\]
The integral of this function from \(0\) to any \(s_{0}>0\) is infinite, and hence \(f_{Q_{P}}(1/2)\) is also infinite.
### Proof of Proposition 5.2
We have two cases to consider, depending on whether the path \(\mathbf{u}=(F_{L}(rw_{1}),...,F_{L}(rw_{d}))\) terminates in a corner of the copula or not, as \(r\to\infty\). First, assume that \(w_{j}\neq 0\) for \(j=1,...,d\), so that the path \(\mathbf{u}\) terminates in the corner \(\mathbf{u}_{0}=(H(w_{1}),...,H(w_{d}))\) as \(r\to\infty\), where \(H\) is the Heaviside step function, \(H(x)=1\) for \(x\geq 0\) and \(H(x)=0\) for \(x<0\). By Proposition 4.3, there exists a function \(\mathcal{M}_{\mathbf{u_{0}}}(t,\mathbf{w})\) that is slowly-varying in \(t\) at \(0^{+}\), such that
\[c_{\mathbf{u_{0}}}(\exp(-t\mathbf{z}))\sim\mathcal{M}_{\mathbf{u_{0}}}(\exp(-r ),\mathbf{w})\exp(-r(\lambda_{\mathbf{u_{0}}}(\mathbf{w})-1)),\quad r\to\infty.\]
Therefore, we can write
\[f_{R,\mathbf{W}}(r,\mathbf{w}) =2^{-d}r^{d-1}\exp(-r)c_{\mathbf{u_{0}}}(\exp(-t\mathbf{z})),\] \[\sim 2^{-d}r^{d-1}\mathcal{M}_{\mathbf{u_{0}}}(\exp(-r),\mathbf{w })\exp(-r\lambda_{\mathbf{u_{0}}}(\mathbf{w})),\quad r\to\infty\] \[=2^{-d}(\log(m))^{d-1}\mathcal{M}_{\mathbf{u_{0}}}(1/m,\mathbf{ w})m^{-\lambda_{\mathbf{u_{0}}}(\mathbf{w})},\]
where \(r=\log(m)\). Noting that \(dr/dm=m^{-1}\), the integrand for the angular density is regularly-varying at infinity with index less than \(-1\), and hence the integral is finite by Karamata's theorem.
In the second case, assume that some component of \(\mathbf{w}=(w_{1},...,w_{d})\) is zero, then the path \(\mathbf{u}\) does not terminate at a corner of the copula. Therefore the copula density is bounded everywhere along the path and hence \(f_{R,\mathbf{W}}(r,\mathbf{w})\leq\beta r^{d-1}\exp(-r)\) for some \(\beta>0\). Hence the angular density is finite in this case as well.
### Proof of Proposition 5.3
Suppose that \(f_{R,\mathbf{W}}(r,\mathbf{w})\) is defined on \([0,\infty)\times\Sigma\), where \(\Sigma\subseteq\mathcal{U}_{1}\). By definition, for any \(r_{1}\in(0,\infty)\) we have \(f_{\mathbf{W}}(\mathbf{w})=\int_{0}^{r_{1}}f_{R,\mathbf{W}}(r,\mathbf{w})dr +\int_{r_{1}}^{\infty}f_{R,\mathbf{W}}(r,\mathbf{w})dr\). Since \(f_{\mathbf{W}}(\mathbf{w})\) is finite and \(f_{R,\mathbf{W}}\) is bounded (no point masses), \(\int_{r_{1}}^{\infty}f_{R,\mathbf{W}}(r,\mathbf{w})dr\to 0\) as \(r_{1}\to\infty\). Therefore, for all \(\mathbf{w}\in\Sigma\) and any \(\varepsilon_{1}>0\) there exists \(r_{1}\) such that \(\int_{r_{1}}^{\infty}f_{R,\mathbf{W}}(r,\mathbf{w})dr<\varepsilon_{1}\). Since \(f_{R,\mathbf{W}}\) is continuous and finite for \(r\in[0,r_{1}]\), for any \(\mathbf{w}\in\Sigma\) and \(\varepsilon_{2}>0\) there exists a \(\Delta>0\) and neighborhood \(N_{\Delta}=\{\mathbf{z}\in\Sigma:\|\mathbf{w}-\mathbf{z}\|_{1}<\Delta\}\) such that \(|f_{R,\mathbf{W}}(r,\mathbf{w})-f_{R,\mathbf{W}}(r,\mathbf{z})|<\varepsilon_{2}\) for all \((r,\mathbf{z})\in[0,r_{1}]\times N_{\Delta}\). Now choose \(\varepsilon_{1}\) and \(r_{1}\) such that \(\int_{r_{1}}^{\infty}f_{R,\mathbf{W}}(r,\mathbf{z})dr<\varepsilon_{1}\) for all \(\mathbf{z}\in N_{\Delta}\). Then we have
\[|f_{\mathbf{W}}(\mathbf{w})-f_{\mathbf{W}}(\mathbf{z})| =\left|\int_{0}^{r_{1}}f_{R,\mathbf{W}}(r,\mathbf{w})dr-\int_{0} ^{r_{1}}f_{R,\mathbf{W}}(r,\mathbf{z})dr+\int_{r_{1}}^{\infty}f_{R,\mathbf{W}} (r,\mathbf{w})dr-\int_{r_{1}}^{\infty}f_{R,\mathbf{W}}(r,\mathbf{z})dr\right|\] \[<\left|\int_{0}^{r_{1}}f_{R,\mathbf{W}}(r,\mathbf{w})dr-\int_{0} ^{r_{1}}f_{R,\mathbf{W}}(r,\mathbf{z})dr\right|+\varepsilon_{1}\] \[<r_{1}\varepsilon_{2}+\varepsilon_{1}\]
Note that \(r_{1}\to\infty\) as \(\varepsilon_{1}\to 0^{+}\). However, for any given \(\varepsilon_{1}\) and \(r_{1}\), we can set \(\varepsilon_{2}=r_{1}^{-2}\), so that \(r_{1}\varepsilon_{2}\to 0^{+}\) as \(\varepsilon_{1}\to 0^{+}\). Hence \(f_{\mathbf{W}}(\mathbf{w})\) is continuous for all \(\mathbf{w}\in\Sigma\). If the upper end point of \(R\) is finite, then the continuity follows in the same way.
### Proof of Lemma 5.4
The angular-radial joint density is given by \(f_{R,\mathbf{W}}(r,\mathbf{w})=2^{-d}r^{d-1}\exp(-r)c(\mathbf{u})\), where \(\mathbf{u}=(F_{L}(rw_{1}),...,F_{L}(rw_{d}))\). For any \(r\in[0,\infty)\), we have \(\mathbf{u}\in(0,1)^{d}\). Hence \(c(\mathbf{u})\) is continuous and therefore \(f_{R,\mathbf{W}}(r,\mathbf{w})\) is also continuous. The bounds on \(f_{R,\mathbf{W}}(r,\mathbf{w})\) can be established in the same way as in the proof of Proposition 5.2. For the case that the path \(\mathbf{u}\) terminates at an edge of the copula, the copula density is assumed to be bounded everywhere along the path. In the case that \(\mathbf{u}\) terminates at a corner of the copula, the assumed asymptotic ARE form (21) multiplied by the marginal product function \(m_{L}(r,\mathbf{w})=2^{-d}\exp(-r)\), ensures that the angular-radial joint density is bounded as well. Hence \(f_{R,\mathbf{W}}(r,\mathbf{w})\) is bounded everywhere and hence satisfies the assumptions of Proposition 5.3.
### Proof of Proposition 5.5
From the assumptions of the proposition, the angular density is finite at all angles. So for \(r\to\infty\), the conditional radial density has asymptotic form
\[f_{R|\mathbf{W}}(r|\mathbf{w})\sim\frac{2^{-d}r^{d-1}}{f_{\mathbf{W}}(\mathbf{w })}\mathcal{M}(\exp(-r),\mathbf{w})\exp(-r\lambda(\mathbf{w}))\coloneqq\mathcal{ M}^{*}(\exp(-r),\mathbf{w})\exp(-r\lambda(\mathbf{w})).\]
Note that \(\mathcal{M}^{*}(t,\mathbf{w})\) is also slowly-varying in \(t\) at \(0^{+}\). Since \(f_{R|\mathbf{W}}(r|\mathbf{w})\) is ultimately monotone, from the monotone convergence theorem for \(r\to\infty\) we have
\[\bar{F}_{R|\mathbf{W}}(r|\mathbf{w})=\int_{r}^{\infty}f_{R|\mathbf{W}}(s| \mathbf{w})ds\sim\int_{r}^{\infty}\mathcal{M}^{*}(\exp(-s),\mathbf{w})\exp(-s \lambda(\mathbf{w}))ds=\int_{\exp(r)}^{\infty}\mathcal{M}^{*}(1/t,\mathbf{w})t ^{-\lambda(\mathbf{w})-1}dt,\]
where we have made the substitution \(t=\exp(s)\) in the last step. The integrand on the RHS is regularly-varying in \(t\) with index \(-\lambda(\mathbf{w})-1<-1\) and hence by Karamata's theorem, the integral is finite and regularly-varying with index \(-\lambda(\mathbf{w})\). Therefore, there exists a function \(\mathcal{L}(t,\mathbf{w})\) that is slowly-varying in \(t\) at \(0^{+}\), such that \(\bar{F}_{R|\mathbf{W}}(r|\mathbf{w})=\mathcal{L}(\exp(-r),\mathbf{w})\exp(-r \lambda(\mathbf{w}))\). So for all \(r>0\) we have
\[\frac{\bar{F}_{R|\mathbf{W}}(\mu+r|\mathbf{w})}{\bar{F}_{R|\mathbf{W}}(\mu| \mathbf{w})}=\frac{\mathcal{L}(\exp(-(\mu+r)),\mathbf{w})}{\mathcal{L}(\exp(- \mu),\mathbf{w})}\frac{\exp(-(\mu+r)\lambda(\mathbf{w}))}{\exp(-\mu\lambda( \mathbf{w}))}\sim\exp(-r\lambda(\mathbf{w})),\quad\mu\to\infty.\]
Therefore, assumptions (A1) and (A2) are satisfied taking \(\xi(\mathbf{w})=0\) and \(\sigma(\mu,\mathbf{w})=1/\lambda(\mathbf{w})\).
### Proof of Proposition 5.7
To demonstrate that the angular density is finite, we need to consider the behaviour of the joint density in the lower and upper tails. Taking a similar approach to the proof of Proposition 5.1, in the lower tail we have
\[f_{R,\mathbf{W}}(r,\mathbf{w})\sim b_{\mathbf{0}_{d}}\left(\mathbf{w}\right) \mathcal{L}_{\mathbf{0}_{d}}\left(r\right)r^{\kappa_{\mathbf{0}_{d}}-1},\quad r \to 0^{+}.\]
Hence \(f_{R,\mathbf{W}}(r,\mathbf{w})\) is bounded for \(r\to 0^{+}\). Moreover, since the copula density is bounded away from the corners, \(f_{R,\mathbf{W}}(r,\mathbf{w})\) is bounded for any finite value of \(r\). For the upper tail, we note that when the polar origin is defined at \(\mathbf{x}=\mathbf{1}_{d}\) on Pareto margins and \(\xi_{m}=1\), \(\delta_{P}(r,\mathbf{w})=\delta_{GP}(r,\mathbf{w})\). Hence from Proposition 4.4, for \(\mathbf{w}\in\mathcal{U}_{1}\cap(0,1]^{d}\) we have,
\[\delta_{P}(r,\mathbf{w})\sim s_{\mathbf{w}}^{\kappa_{\mathbf{1}_{d}}-d}b_{ \mathbf{1}_{d}}\left(\left(s_{\mathbf{w}}\mathbf{w}\right)^{-1}\right) \mathcal{L}_{\mathbf{1}_{d}}\left(r^{-1}\right)r^{d-\kappa_{\mathbf{1}_{d}}}, \quad r\to\infty.\]
For the marginal product function we have
\[m_{P}(r,\mathbf{w})=\prod_{j=1}^{d}(1+rw_{j})^{-2}\sim r^{-2d}\prod_{j=1}^{d}w _{j}^{-2},\quad r\to\infty.\]
Combining this we get
\[f_{R,\mathbf{W}}(r,\mathbf{w})\sim\left[\prod_{j=1}^{d}w_{j}^{-2}\right]s_{ \mathbf{w}}^{\kappa_{\mathbf{1}_{d}}-d}b_{\mathbf{1}_{d}}\left(\left(s_{ \mathbf{w}}\mathbf{w}\right)^{-1}\right)\mathcal{L}_{\mathbf{1}_{d}}\left(r^ {-1}\right)r^{-1-\kappa_{\mathbf{1}_{d}}},\quad r\to\infty.\]
So \(f_{R,\mathbf{W}}\in RV_{-1-\kappa_{\mathbf{1}_{d}}}(\infty)\). Hence by Karamata's theorem, \(f_{\mathbf{W}}(\mathbf{w})\) is finite and \(\bar{F}_{R|\mathbf{W}}\in RV_{-\kappa_{\mathbf{1}_{d}}}(\infty)\). Thus from the regular variation property
\[\frac{\bar{F}_{R|\mathbf{W}}(\mu+r|\mathbf{w})}{\bar{F}_{R|\mathbf{W}}(\mu|\mathbf{w})}\sim\left(1+\frac{r}{\mu}\right)^{-\kappa_{\mathbf{1}_{d}}}=\bar{F}_{GP}\left(r;\frac{1}{\kappa_{\mathbf{1}_{d}}},\frac{\mu}{\kappa_{\mathbf{1}_{d}}}\right),\quad\mu\to\infty.\]
Therefore, assumptions (A1) and (A2) are satisfied, taking \(\xi(\mathbf{w})=1/\kappa_{\mathbf{1}_{d}}\) and \(\sigma(\mu,\mathbf{w})=\mu/\kappa_{\mathbf{1}_{d}}\).
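The final step combines regular variation of \(\bar{F}_{R|\mathbf{W}}\) with the GP reparameterisation \(\xi=1/\kappa_{\mathbf{1}_{d}}\) and \(\sigma=\mu/\kappa_{\mathbf{1}_{d}}\). The sketch below (outside the proof; numpy assumed, and the survivor function with an explicit slowly-varying factor is an arbitrary illustration) shows the survivor ratio approaching the stated GP form as \(\mu\) grows.

```python
import numpy as np

kappa = 2.5   # plays the role of kappa_{1_d}; arbitrary illustrative value

def survivor(r):
    # A survivor function that is regularly varying at infinity with index -kappa,
    # with an explicit slowly-varying factor log(r).
    return r ** (-kappa) * np.log(r)

r = np.linspace(0.0, 10.0, 6)
for mu in [1e2, 1e4, 1e6]:
    ratio = survivor(mu + r) / survivor(mu)
    # GP survivor with xi = 1/kappa and sigma = mu/kappa reduces to (1 + r/mu)^{-kappa}
    gp = (1.0 + (1.0 / kappa) * r / (mu / kappa)) ** (-kappa)
    print(mu, np.max(np.abs(ratio - gp)))   # discrepancy shrinks as mu grows
```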
### Proof of Proposition 5.8
From the discussion at the end of Section 3.2, we know that for two-dimensional angular-radial densities in \(L^{1}\) polar coordinates, we have \(f_{R,\mathbf{W}}(r,(1-q,q))=f_{R,Q}(r,q)\) for \(q\in[0,1]\) and \(r\in[\max(q^{-1},(1-q)^{-1}),\infty)\). Using Proposition 5.7 we have
\[f_{R,Q}(r,q)\sim(q(1-q))^{-\kappa_{\mathbf{1}_{d}}}b_{\mathbf{1}_{d}}\left(1- q,q\right)\mathcal{L}_{\mathbf{1}_{d}}\left(r^{-1}\right)r^{-1-\kappa_{\mathbf{1}_{d}}}, \quad r\to\infty.\]
From Proposition 4.1 we have \(b_{\mathbf{1}_{d}}(1-q,q)=\gamma(q(1-q))^{\frac{\kappa_{\mathbf{1}_{d}}}{2}-1}\) for some \(\gamma>0\). Substituting this into the equation above completes the proof.
## Examples of angular-radial models for copulas
In this appendix we present some examples of the various angular-radial models for copulas introduced in Section 4. Where applicable, we also confirm the relations between the various models defined in Propositions 4.1 and 4.2.
### Extreme value copulas
Extreme value (EV) copulas can be written in the form
\[C(\mathbf{u})=\exp(-A(\mathbf{x})),\]
where \(\mathbf{x}=(x_{1},...,x_{d})=(-\log(u_{1}),...,-\log(u_{d}))\), and \(A:[0,\infty)^{d}\rightarrow[0,\infty)\) is a convex homogeneous function of order one, known as the stable tail dependence function [8], that satisfies \(\max(x_{1},...x_{d})\leq A(x_{1},...,x_{d})\leq x_{1}+\cdots+x_{d}\). The lower and upper bounds in the inequality correspond to the cases of complete dependence and independence respectively. In general we will rule out these cases and assume that the inequality is strict. The homogeneity property of \(A\) suggests an angular-radial representation. Define \(\tau=\|\mathbf{x}\|_{1}\) and \(\boldsymbol{\psi}=\mathbf{x}/\tau\). The variables \(\tau\) and \(\boldsymbol{\psi}\) are referred to as the Pickands coordinates [49], and are the \(L^{1}\) polar coordinates of \(\mathbf{x}\). EV copulas can then be expressed as
\[C(\mathbf{u})=\exp(-\tau A(\boldsymbol{\psi})).\]
The general expression for the density involves many products of partial derivatives of \(A\). However, the following general form is useful for asymptotic calculations in the upper and lower tail. Progressively taking partial derivatives shows that
\[\frac{\partial}{\partial u_{1}}C(\mathbf{u}) =-\frac{1}{u_{1}}\frac{\partial}{\partial x_{1}}\exp(-A(\mathbf{ x}))=\frac{1}{u_{1}}\exp(-A(\mathbf{x}))A^{(1,0,0,\ldots)}(\mathbf{x})\] \[\frac{\partial^{2}}{\partial u_{1}\partial u_{2}}C(\mathbf{u}) =-\frac{1}{u_{1}u_{2}}\frac{\partial}{\partial x_{2}}\left[\exp (-A(\mathbf{x}))A^{(1,0,0,\ldots)}(\mathbf{x})\right]\] \[=\frac{1}{u_{1}u_{2}}\exp(-A(\mathbf{x}))\left[A^{(1,0,0,\ldots)} (\mathbf{x})A^{(0,1,0,\ldots)}(\mathbf{x})-A^{(1,1,0,\ldots)}(\mathbf{x})\right]\] \[\vdots\] \[\frac{\partial^{d}}{\partial u_{1}\cdots\partial u_{d}}C( \mathbf{u}) =\left(\prod_{j=1}^{d}u_{j}^{-1}\right)\exp(-A(\mathbf{x}))\left[ \left(\prod_{j=1}^{d}A_{j}(\mathbf{x})\right)+...+(-1)^{d-1}A^{(1,\ldots,1)}( \mathbf{x})\right],\]
where \(A_{j}=\partial A/\partial x_{j}\) and the notation \(A^{(n_{1},...,n_{d})}\) denotes partial differentiation \(n_{j}\) times in the \(j^{\text{th}}\) variable. The omitted terms on the final line involve products of partial derivatives of \(A\) of mixed order. Note that \(n^{\text{th}}\)-order partial derivatives of \(A\) are homogeneous of order \(1-n\), and that \(\prod_{j=1}^{d}u_{j}^{-1}=\exp(\tau)\). We can therefore write
\[c(\mathbf{u})=\exp(-\tau(A(\boldsymbol{\psi})-1))\left[\left(\prod_{j=1}^{d}A_ {j}(\boldsymbol{\psi})\right)+\tau^{-1}[...]+\tau^{-2}[...]+...+(-\tau)^{1-d}A ^{(1,\ldots,1)}(\boldsymbol{\psi})\right], \tag{33}\]
where the omitted terms involve partial derivatives of \(A\) of orders between \(2\) and \(d-1\), evaluated at \(\boldsymbol{\psi}\).
#### B.1.1 ARL model
**Lower tail:** In the lower tail we have \(C(t,...,t)=t^{A(\mathbf{1}_{d})}\). Hence EV copulas have lower tail order \(\kappa_{\mathbf{0}_{d}}=A(\mathbf{1}_{d})\) with \(\mathcal{L}_{\mathbf{0}_{d}}(t)\equiv 1\). Hua and Joe [7] noted that in the lower tail
\[C(t\mathbf{w}) =\exp\left(\log(t)A\left(1+\frac{\log(w_{1})}{\log(t)},...,1+\frac{\log(w_{d})}{\log(t)}\right)\right)\] \[\sim\exp\left(\log(t)\left[A(\mathbf{1}_{d})+A_{1}(\mathbf{1}_{d})\frac{\log(w_{1})}{\log(t)}+\cdots+A_{d}(\mathbf{1}_{d})\frac{\log(w_{d})}{\log(t)}\right]\right),\quad t\to 0^{+}\] \[=t^{A(\mathbf{1}_{d})}\prod_{j=1}^{d}w_{j}^{A_{j}(\mathbf{1}_{d})}.\]
Hence EV copulas have lower tail order function \(B_{\mathbf{0}_{d}}(\mathbf{w})=\prod_{j=1}^{d}w_{j}^{A_{j}(\mathbf{1}_{d})}\). However, note that the asymptotic equivalence above assumes that \(\log(w_{j})/\log(t)\to 0^{+}\) as \(t\to 0^{+}\) for \(j=1,...,d\), so this form of the copula does not hold if \(w_{j}\) decreases at a similar rate to \(t\). This is discussed further in Section 5.3. Using (19) and differentiating the expression for \(C(t\mathbf{w})\), we obtain for the density
\[c(t\mathbf{w})\sim\left[\prod_{j=1}^{d}A_{j}(\mathbf{1}_{d})\right]t^{A( \mathbf{1}_{d})-d}\prod_{j=1}^{d}w_{j}^{A_{j}(\mathbf{1}_{d})-1}. \tag{34}\]
Alternatively, we can work directly with the density, using the same expansion as above to obtain
\[\exp(-\tau(A(\boldsymbol{\psi})-1))\sim t^{A(\mathbf{1}_{d})-d}\prod_{j=1}^{d }w_{j}^{A_{j}(\mathbf{1}_{d})-1}.\]
For the terms in the square brackets in (33), we note that in the lower tail
\[\tau=-\log\left(t^{d}\prod_{j=1}^{d}w_{j}\right)\to\infty,\quad t \to 0^{+}\] \[\boldsymbol{\psi}\to\left(\tfrac{1}{d},...,\tfrac{1}{d}\right), \quad t\to 0^{+}.\]
Therefore, the terms in the square brackets in (33) which are proportional to \(\tau^{-1}\), \(\tau^{-2}\),..., \(\tau^{1-d}\), tend to zero. The remaining terms are all homogeneous of order zero, so \(\prod_{j=1}^{d}A_{j}(\mathbf{1}_{d})=\prod_{j=1}^{d}A_{j}\left(\tfrac{1}{d},...,\tfrac{1}{d}\right)\), which agrees with the asymptotic expression obtained from the tail order function.
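These lower-tail expressions can be illustrated numerically for a concrete EV family. The sketch below (outside the derivation; numpy assumed) uses the symmetric logistic model, for which \(A(\mathbf{1}_{2})=2^{1/\alpha}\) and \(A_{1}(\mathbf{1}_{2})=A_{2}(\mathbf{1}_{2})=2^{1/\alpha-1}\), and checks that \(C(t,t)=t^{A(\mathbf{1}_{2})}\) holds exactly and that \(C(t\mathbf{w})/[t^{A(\mathbf{1}_{2})}B_{\mathbf{0}_{2}}(\mathbf{w})]\to 1\) as \(t\to 0^{+}\).

```python
import numpy as np

alpha = 2.0  # logistic dependence parameter (alpha >= 1); arbitrary illustration

def C(u, v):
    # EV copula with symmetric logistic stable tail dependence function
    return np.exp(-((-np.log(u)) ** alpha + (-np.log(v)) ** alpha) ** (1.0 / alpha))

A_at_ones = 2.0 ** (1.0 / alpha)         # A(1,1): the lower tail order
Aj_at_ones = 2.0 ** (1.0 / alpha - 1.0)  # A_1(1,1) = A_2(1,1) by symmetry

# Diagonal: C(t, t) = t^{A(1,1)} holds exactly for EV copulas.
t = 1e-4
assert np.isclose(C(t, t), t ** A_at_ones)

# Off-diagonal: C(t w1, t w2) ~ t^{A(1,1)} w1^{A_1(1,1)} w2^{A_2(1,1)} as t -> 0+.
w1, w2 = 0.3, 0.7
for t in [1e-2, 1e-4, 1e-8]:
    ratio = C(t * w1, t * w2) / (t ** A_at_ones * w1 ** Aj_at_ones * w2 ** Aj_at_ones)
    print(t, ratio)   # should approach 1
```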
**Upper tail:** In the upper tail, for \(\mathbf{u}=\mathbf{1}_{d}-t\mathbf{w}\) and \(\mathbf{w}\in\mathcal{U}_{1}\cap[0,1]^{d}\), for \(t\to 0^{+}\) we have
\[\tau =-\sum_{j=1}^{d}\log\left(1-tw_{j}\right)=\sum_{j=1}^{d}(tw_{j}+O( t^{2}))=t+O(t^{2}),\] \[\boldsymbol{\psi} =\tau^{-1}(-\log(1-tw_{1}),...,-\log(1-tw_{d}))=\mathbf{w}+O(t).\]
The bounds on \(A\) together with the convexity property imply that \(A(\mathbf{w}+O(t))=A(\mathbf{w})+O(t)\). Hence
\[C(\mathbf{1}_{d}-t\mathbf{w})=\exp(-tA(\mathbf{w})+O(t^{2}))=1-tA(\mathbf{w}) +O(t^{2}). \tag{35}\]
The general relation between asymptotic forms of \(C(\mathbf{1}_{d}-t\mathbf{w})\) and \(C_{\mathbf{1}_{d}}(t\mathbf{w})\) is discussed in [29]. For the two-dimensional case, for any copula we have \(C_{(1,1)}(t(1-w),tw)=t-1+C(1-t(1-w),1-tw)\), and hence for EV copulas \(C_{(1,1)}(t\mathbf{w})\sim t(1-A(\mathbf{w}))\) for \(t\to 0^{+}\). For the copula density, since the partial derivatives of \(C\) are ultimately monotone, we can differentiate (35) to obtain
\[c_{\mathbf{1}_{d}}(t\mathbf{w})\sim t^{1-d}\left|A^{(1,...,1)}(\mathbf{w}) \right|.\]
Hence EV copulas have upper tail order \(\kappa_{\mathbf{1}_{d}}=1\), although the expression above does not give separate values for \(\Upsilon_{\mathbf{1}_{d}}\) and \(b_{\mathbf{1}_{d}}(\mathbf{w})\). As with the lower tail order function, this asymptotic form can be obtained directly from the density. Since \(\tau=t+O(t^{2})\), \(\exp(-\tau(A(\boldsymbol{\psi})-1))\to 1\) as \(t\to 0^{+}\). Similarly, the dominant terms in the square brackets in (33) are those proportional to \(\tau^{1-d}\), giving the same asymptotic expression obtained by differentiating (35).
#### B.1.2 ARE model
**Lower tail:** In the lower tail we have \(u_{j}=\exp(-rw_{j})\) and hence for \(\mathbf{w}\in\mathcal{U}_{1}\cap[0,1]^{d}\) we have \(\tau=r\) and \(\boldsymbol{\psi}=\mathbf{w}\). Therefore
\[C(\exp(-r\mathbf{w}))=\exp(-rA(\mathbf{w})),\]
and hence \(\Lambda_{\mathbf{0}_{d}}(\mathbf{w})=A(\mathbf{w})\), and \(\mathcal{L}_{\mathbf{0}_{d}}(t,\mathbf{w})\equiv 1\). For the density, we note that the terms in the square brackets of (33) that are proportional to \(\tau^{-1}\), \(\tau^{-2}\), etc., all tend to zero for \(r\to\infty\). Therefore
\[c(\exp(-r\mathbf{w}))\sim\left(\prod_{j=1}^{d}A_{j}(\mathbf{w})\right)\exp(-r (A(\mathbf{w})-1)),\quad r\to\infty. \tag{36}\]
Hence we have \(\lambda_{\mathbf{0}_{d}}(\mathbf{w})=A(\mathbf{w})\), and \(\mathcal{M}_{\mathbf{0}_{d}}(t,\mathbf{w})=\prod_{j=1}^{d}A_{j}(\mathbf{w})\).
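The ARE forms above can be checked for the symmetric logistic model. In the sketch below (outside the derivation; numpy assumed) the copula-level identity \(C(\exp(-r\mathbf{w}))=\exp(-rA(\mathbf{w}))\) is exact by homogeneity, and the density form (36) is checked using a closed-form logistic copula density obtained by specialising (33) to \(d=2\); this closed form is itself cross-checked against a finite difference of \(C\) within the sketch.

```python
import numpy as np

alpha = 2.5  # logistic dependence parameter (alpha >= 1); arbitrary illustration

def C(u, v):
    # EV copula with symmetric logistic dependence
    return np.exp(-((-np.log(u))**alpha + (-np.log(v))**alpha)**(1.0/alpha))

def c(u, v):
    # Copula density obtained by specialising (33) to d = 2 for the logistic model
    x, y = -np.log(u), -np.log(v)
    S = x**alpha + y**alpha
    return C(u, v) / (u*v) * (x*y)**(alpha - 1) * S**(1.0/alpha - 2) * (S**(1.0/alpha) + alpha - 1)

# Cross-check the closed-form density against a mixed finite difference of C.
h, u0, v0 = 1e-5, 0.4, 0.6
fd = (C(u0+h, v0+h) - C(u0+h, v0-h) - C(u0-h, v0+h) + C(u0-h, v0-h)) / (4*h*h)
assert np.isclose(fd, c(u0, v0), rtol=1e-5)

w1, w2 = 0.3, 0.7                           # a point on the L1 unit simplex
z = w1**alpha + w2**alpha
A = z**(1.0/alpha)                          # A(w)
A1 = w1**(alpha - 1) * z**(1.0/alpha - 1)   # A_1(w)
A2 = w2**(alpha - 1) * z**(1.0/alpha - 1)   # A_2(w)

# Copula-level ARE relation: exact, by homogeneity of A.
for r in [1.0, 5.0, 20.0]:
    assert np.isclose(C(np.exp(-r*w1), np.exp(-r*w2)), np.exp(-r*A))

# Density-level ARE relation (36): the ratio should approach 1 as r grows.
for r in [5.0, 20.0, 80.0]:
    ratio = c(np.exp(-r*w1), np.exp(-r*w2)) / (A1 * A2 * np.exp(-r*(A - 1)))
    print(r, ratio)
```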
The copula and copula density exponent functions satisfy the assumptions of Proposition 4.1. We can confirm that the expressions for the tail order and tail density functions derived in the previous subsection are in agreement with the forms given in Proposition 4.1. As noted above, \(\kappa_{(0,0)}=A(1,1)\). Also note that, \(\beta\coloneqq[\frac{d}{dw}\Lambda_{(0,0)}(1-w,w)]_{w=1/2}=A_{2}(\frac{1}{2}, \frac{1}{2})-A_{1}(\frac{1}{2},\frac{1}{2})=A_{2}(1,1)-A_{1}(1,1)\), where the final equality is obtained by using the fact that since \(A\) is homogeneous of order one, its first partial derivatives are homogeneous of order zero. Next note that by Euler's theorem on homogeneous functions \(A(1,1)=A_{1}(1,1)+A_{2}(1,1)\). Substituting these expressions into the form of the tail order and tail density functions given in Proposition 4.1, shows that this agrees with the expressions derived above.
**Upper tail:** Since EV copulas have an upper tail order of \(1\), \(\Lambda_{\mathbf{1}_{d}}(\mathbf{w})=\max(w_{1},...,w_{d})\). The case for the density exponent function is more complicated. Examples of \(\delta_{L}(r,\mathbf{w})\) (which is related to \(\lambda_{\mathbf{1}_{d}}(\mathbf{w})\) by Proposition 4.3) for two-dimensional cases are considered in the following subsection.
#### B.1.3 AR copula function for Laplace margins
For the two-dimensional case we write
\[c(u,v)=T_{1}(T_{2}+T_{3}), \tag{37}\]
where
\[T_{1} =\exp(-\tau(A(\mathbf{\psi})-1)),\] \[T_{2} =A^{(1,0)}(\mathbf{\psi})A^{(0,1)}(\mathbf{\psi}),\] \[T_{3} =-\tau^{-1}A^{(1,1)}(\mathbf{\psi}).\]
To understand the asymptotic behaviour of the copula density, we consider the asymptotic behaviour of the Pickands coordinates. On Laplace margins with \((u,v)=(F_{L}(rw_{1}),F_{L}(rw_{2}))\), \(\mathbf{w}=(w_{1},w_{2})=(\cos_{1}(q),\sin_{1}(q))\), and \(\boldsymbol{\psi}=(\psi,1-\psi)\), we have
\[\tau=\begin{cases}\log(4)+r&q\in[-2,-1]\\ \log(2)+r|w_{2}|+O(\exp(-r|w_{1}|))&q\in(-1,0]\\ \frac{1}{2}\exp(-r|w_{2}|)+O(\exp(-r\min(2|w_{2}|,|w_{1}|)))&q\in(0,1/2)\\ \exp(-r/2)+O(\exp(-r))&q=1/2\\ \frac{1}{2}\exp(-r|w_{1}|)+O(\exp(-r\min(2|w_{1}|,|w_{2}|)))&q\in(1/2,1)\\ \log(2)+r|w_{1}|+O(\exp(-r|w_{2}|))&q\in[1,2)\end{cases}\] \[\psi=\begin{cases}|w_{2}|+O(r^{-1})&q\in[-2,-1]\\ 1-O((r|w_{2}|)^{-1}\exp(-r|w_{1}|))&q\in(-1,0)\\ 1-O(\exp(-r(|w_{1}|-|w_{2}|)))&q\in[0,1/2)\\ 1/2&q=1/2\\ O(\exp(-r(|w_{2}|-|w_{1}|)))&q\in(1/2,1]\\ O((r|w_{1}|)^{-1}\exp(-r|w_{2}|))&q\in(1,2)\end{cases}\]
The variation of \((\tau,\psi)\) with \((r,q)\) is illustrated in Figure 16. From the preceding subsection and Proposition 4.3, we know the asymptotic behaviour of \(\delta_{L}(r,\mathbf{w})\) in the third quadrant. In the second and fourth quadrants, \(\psi\) is exponentially close to zero or one, so since \(A(0,1)=A(1,0)=1\) for all stable tail dependence functions, \(T_{1}\to 1\) as \(r\to\infty\). Similarly, in the first quadrant \(\tau\to 0\), and hence \(T_{1}\to 1\) as \(r\to\infty\) in this quadrant as well. Therefore, in the first, second and fourth quadrants, the asymptotic behaviour of \(\delta_{L}(r,\mathbf{w})\) is determined by the terms \(T_{2}\) and \(T_{3}\). To illustrate various possibilities for how \(T_{2}\) and \(T_{3}\) behave in these quadrants, and hence the asymptotic behaviour of \(\delta_{L}(r,\mathbf{w})\), we consider three examples for the stable tail dependence function \(A\): the symmetric and asymmetric logistic models, and the Husler-Reiss model.
**Symmetric logistic:** The symmetric logistic model with parameter \(\alpha\geq 1\) is given by
\[A(\mathbf{\psi})=z^{\frac{1}{\alpha}},\]
with \(z=\psi^{\alpha}+(1-\psi)^{\alpha}\). The case \(\alpha=1\) corresponds to the independence copula. When \(\alpha>1\), we have
\[T_{2} =z^{\frac{2}{\alpha}-2}(\psi(1-\psi))^{\alpha-1}\] \[T_{3} =(\alpha-1)z^{-\frac{1}{\alpha}}\tau^{-1}T_{2}.\]
After substituting the expressions for \(\tau\) and \(\psi\) above we find that \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25) with
\[\beta(\mathbf{w}) =\begin{cases}0,&q\in(-2,-1)\cup[0,1],\\ 1-\alpha,&q\in[-1,0)\cup(1,2],\end{cases}\] \[\lambda(\mathbf{w}) =\begin{cases}A(\mathbf{w}),&q\in(-2,-1],\\ 1+(\alpha-1)|w_{1}|,&q\in(-1,0],\\ 1-\alpha+(2\alpha-1)\max(|w_{1}|,|w_{2}|),&q\in(0,1]\\ 1+(\alpha-1)|w_{2}|&q\in(1,2].\end{cases}\]
Note that in this case \(A^{(1,1)}(\psi,1-\psi)=(1-\alpha)z^{\frac{1}{\alpha}-2}(\psi(1-\psi))^{\alpha-1}\), so as \(\psi\to 0^{+}\) we have \(A^{(1,1)}(\psi,1-\psi)\sim(1-\alpha)\psi^{\alpha-1}\). Therefore, the assumptions of Proposition 4.2 are satisfied, taking \(\beta_{1}=\beta_{2}=\alpha-1\). It can then be verified that Proposition 4.2 in combination with Proposition 4.3 gives the same form of \(\lambda(\mathbf{w})\) obtained above for the range \(q\in(0,1)\).
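As a numerical sanity check of the expression for \(A^{(1,1)}\) (outside the derivation; numpy assumed, with arbitrary parameter values), the sketch below compares a mixed central finite difference of the symmetric logistic \(A\) with the closed form quoted above.

```python
import numpy as np

def A(x, y, alpha):
    # Symmetric logistic stable tail dependence function
    return (x**alpha + y**alpha)**(1.0/alpha)

def A11_closed(psi, alpha):
    # Quoted mixed partial on the simplex: (1-alpha) z^{1/alpha-2} (psi(1-psi))^{alpha-1}
    z = psi**alpha + (1 - psi)**alpha
    return (1 - alpha) * z**(1.0/alpha - 2) * (psi*(1 - psi))**(alpha - 1)

h = 1e-5
for alpha in [1.5, 3.0]:
    for psi in [0.2, 0.5, 0.8]:
        x, y = psi, 1 - psi
        mixed = (A(x+h, y+h, alpha) - A(x+h, y-h, alpha)
                 - A(x-h, y+h, alpha) + A(x-h, y-h, alpha)) / (4*h*h)
        assert np.isclose(mixed, A11_closed(psi, alpha), rtol=1e-4)
print("A^{(1,1)} closed form verified for the symmetric logistic model")
```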
**Asymmetric logistic:** The asymmetric logistic model proposed by Tawn [42] is given by
\[A(\boldsymbol{\psi})=(1-\gamma_{1})\psi+(1-\gamma_{2})(1-\psi)+z^{\frac{1}{ \alpha}},\]
where \(z=(\gamma_{1}\psi)^{\alpha}+(\gamma_{2}(1-\psi))^{\alpha}\) and \(\alpha\geq 1\) and \(\gamma_{1},\gamma_{2}\in[0,1]\). When \(\gamma_{1}=\gamma_{2}=1\) this reduces to the symmetric logistic model. Independence occurs when \(\alpha=1\) or \(\gamma_{1}=0\) or \(\gamma_{2}=0\). In the case that \(\gamma_{1},\gamma_{2}\in(0,1)\), we have
\[T_{2} =\left(1-\gamma_{1}+\gamma_{1}^{\alpha}\psi^{\alpha-1}z^{\frac{1}{ \alpha}-1}\right)\left(1-\gamma_{2}+\gamma_{2}^{\alpha}(1-\psi)^{\alpha-1}z^{ \frac{1}{\alpha}-1}\right)\] \[T_{3} =(\alpha-1)(\gamma_{1}\gamma_{2})^{\alpha}(\psi(1-\psi))^{\alpha -1}\tau^{-1}z^{\frac{1}{\alpha}-2}.\]
For \(q\in[-1,1/2)\), \(\psi\to 1\) as \(r\to\infty\) and hence \(T_{2}\to 1-\gamma_{2}\) as \(r\to\infty\). Similarly, for \(q\in(1/2,2]\), \(T_{2}\to 1-\gamma_{1}\) as \(r\to\infty\). For \(q\) in the second and fourth quadrants, \(\tau\to\infty\) as \(r\to\infty\), hence \(T_{3}\to 0\). In the first quadrant for \(r\to\infty\) we have
\[T_{3}\sim 2(\alpha-1)\exp(r[1-\alpha+(2\alpha-1)\min(w_{1},w_{2})])\times \begin{cases}\gamma_{1}^{\alpha}\gamma_{2}^{1-\alpha},&q<1/2,\\ \gamma_{2}^{\alpha}\gamma_{1}^{1-\alpha},&q>1/2.\end{cases}\]
The terms in the exponential function are positive when \(\min(w_{1},w_{2})>(\alpha-1)/(2\alpha-1)\). Overall, we find that \(\delta_{L}(r,\mathbf{w})\) has asymptotic form (25) with \(\beta(\mathbf{w})=0\) for all \(q\) and
\[\lambda(\mathbf{w})=\begin{cases}A(\mathbf{w}),&q\in(-2,-1),\\ 1&q\in\left[-1,\frac{\alpha-1}{2\alpha-1}\right],\\ 1-\alpha+(2\alpha-1)\max(|w_{1}|,|w_{2}|),&q\in\left(\frac{\alpha-1}{2\alpha- 1},\frac{\alpha}{2\alpha-1}\right)\\ 1&q\in\left[\frac{\alpha}{2\alpha-1},2\right].\end{cases}\]
Figure 16: Pickands coordinates \((\tau,\psi)\) for two-dimensional EV copulas as a function of \(L^{1}\) polar coordinates \((r,q)\) on Laplace margins.
Note that although \(A^{(1,1)}\) has the same form as for the symmetric logistic model, the assumptions of Proposition 4.2 are not satisfied, since \(T_{3}\to 0^{+}\) for \(q\to 0^{+}\) and \(q\to 1^{-}\), but \(T_{2}\to 1-\gamma_{1}\) or \(1-\gamma_{2}\), respectively. Hence the ARL model is not valid for \(q\to 0^{+}\) or \(q\to 1^{-}\).
In the first quadrant \(\lambda(\mathbf{w})\) is symmetric about \(q=1/2\) and is not dependent on the values of \(\gamma_{1}\) and \(\gamma_{2}\). However, at finite levels, the corresponding isodensity contours are not symmetric about \(q=1/2\), as illustrated in Figure 12, for a case with \(\alpha=5\), \(\gamma_{1}=0.9\) and \(\gamma_{2}=0.1\). To improve the agreement between the SPAR approximation and true joint density at finite levels, we can define the angular-radial coordinate system relative to a different origin. Note that a finite shift in the origin does not affect the asymptotic behaviour of \(\delta_{L}(r,\mathbf{w})\), and hence the values of \(\beta(\mathbf{w})\) and \(\lambda(\mathbf{w})\) remain unchanged.
Define new angular-radial variables relative to origin \((x_{0},y_{0})\), \(r^{*}=(x-x_{0})+(y-y_{0})\) and \(q^{*}=(y-y_{0})/r^{*}\), for \(x>x_{0}\) and \(y>y_{0}\). Then, in terms of the original angular-radial coordinates,
\[r =r^{*}+x_{0}+y_{0},\] \[q =\frac{r^{*}q^{*}+y_{0}}{r}.\]
We can find \((x_{0},y_{0})\) such that \(T_{2}\sim T_{3}\) as \(r\to\infty\) at constant angle \(q^{*}\). This gives two sets of equations
\[1-\gamma_{1} =2(\alpha-1)\gamma_{1}^{\alpha}\gamma_{2}^{1-\alpha}\exp(r[1- \alpha+(2\alpha-1)q]),\quad q\in(0,1/2)\] \[1-\gamma_{2} =2(\alpha-1)\gamma_{2}^{\alpha}\gamma_{1}^{1-\alpha}\exp(r[ \alpha+(1-2\alpha)q]),\quad q\in(1/2,1).\]
Substituting the new coordinates and rearranging gives
\[\beta_{1} =r^{*}(1-\alpha+(2\alpha-1)q^{*})+(1-\alpha)x_{0}+\alpha y_{0}, \quad q\in(0,1/2)\] \[\beta_{2} =r^{*}(\alpha+(1-2\alpha)q^{*})+\alpha x_{0}+(1-\alpha)y_{0}, \quad q\in(1/2,1),\]
where
\[\beta_{1} =\log\left(\frac{1-\gamma_{1}}{2(\alpha-1)\gamma_{2}}\left(\frac {\gamma_{2}}{\gamma_{1}}\right)^{\alpha}\right),\] \[\beta_{2} =\log\left(\frac{1-\gamma_{2}}{2(\alpha-1)\gamma_{1}}\left(\frac {\gamma_{1}}{\gamma_{2}}\right)^{\alpha}\right).\]
For these equations to hold approximately for \(r^{*}\to\infty\), the terms multiplying \(r^{*}\) must be zero. Then solving for \((x_{0},y_{0})\) gives
\[x_{0}=\frac{(\alpha-1)\beta_{1}+\alpha\beta_{2}}{2\alpha-1},\quad y_{0}=\frac {\alpha\beta_{1}+(\alpha-1)\beta_{2}}{2\alpha-1}.\]
The effect of the choice of origin on the speed of convergence of \(\delta_{L}(r,\mathbf{w})\) to its asymptotic form is illustrated in Figure 17 for the case \(\alpha=5\), \(\gamma_{1}=0.9\) and \(\gamma_{2}=0.1\). The plots show that placing the origin at \((0,0)\) results in slow convergence, whereas placing the origin so that \(T_{2}\sim T_{3}\) at constant angle in the first quadrant improves the rate of convergence.
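The origin shift can be reproduced numerically. The sketch below (numpy assumed) solves the \(2\times 2\) linear system for \((x_{0},y_{0})\) directly and confirms agreement with the closed-form expressions above, using the same illustrative parameter values \(\alpha=5\), \(\gamma_{1}=0.9\) and \(\gamma_{2}=0.1\).

```python
import numpy as np

alpha, g1, g2 = 5.0, 0.9, 0.1   # parameters used in the example above

b1 = np.log((1 - g1) / (2 * (alpha - 1) * g2) * (g2 / g1) ** alpha)
b2 = np.log((1 - g2) / (2 * (alpha - 1) * g1) * (g1 / g2) ** alpha)

# Linear system: b1 = (1-alpha) x0 + alpha y0,  b2 = alpha x0 + (1-alpha) y0
M = np.array([[1 - alpha, alpha], [alpha, 1 - alpha]])
x0, y0 = np.linalg.solve(M, np.array([b1, b2]))

# Closed-form solution quoted in the text
x0_cf = ((alpha - 1) * b1 + alpha * b2) / (2 * alpha - 1)
y0_cf = (alpha * b1 + (alpha - 1) * b2) / (2 * alpha - 1)
assert np.allclose([x0, y0], [x0_cf, y0_cf])
print("origin shift:", x0, y0)
```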
**Husler-Reiss:** The Husler-Reiss dependence model [35], with parameter \(\alpha>0\), is given by
\[A(\boldsymbol{\psi})=\psi z_{1}+(1-\psi)z_{2},\]
where
\[z_{1} =\Phi\left(\frac{1}{\alpha}+\frac{\alpha}{2}\log\left(\frac{\psi }{1-\psi}\right)\right),\] \[z_{2} =\Phi\left(\frac{1}{\alpha}-\frac{\alpha}{2}\log\left(\frac{\psi }{1-\psi}\right)\right),\]
and \(\Phi\) is the standard normal distribution function. In this case we have
\[T_{2} =z_{1}z_{2},\] \[T_{3} =\frac{\alpha}{2\tau\psi}\,\phi\left(\frac{1}{\alpha}-\frac{ \alpha}{2}\log\left(\frac{\psi}{1-\psi}\right)\right),\]
where \(\phi\) is the standard normal density function. In this case we have
\[\left|A^{(1,1)}(\psi,1-\psi)\right| =\frac{\alpha}{2\psi}\,\phi\left(\frac{1}{\alpha}-\frac{\alpha}{2}\log\left(\frac{\psi}{1-\psi}\right)\right)\] \[=\phi\left(\frac{1}{\alpha}\right)\frac{\alpha}{2\psi}\,\exp\left(\frac{1}{2}\log\left(\frac{\psi}{1-\psi}\right)-\frac{\alpha^{2}}{8}\log^{2}\left(\frac{\psi}{1-\psi}\right)\right)\] \[=\phi\left(\frac{1}{\alpha}\right)\frac{\alpha}{2\psi}\left(\frac{\psi}{1-\psi}\right)^{\frac{1}{2}-\frac{\alpha^{2}}{8}\log\left(\frac{\psi}{1-\psi}\right)}.\]
This is rapidly-varying for \(\psi\to 0^{+}\) and \(\psi\to 1^{-}\) with index \(+\infty\). The assumptions of Proposition 4.2 are therefore not satisfied, so we cannot obtain the copula density exponent function from the tail density function. Instead, we can calculate the AR copula function directly.
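As a numerical check of the Husler-Reiss expression for \(\left|A^{(1,1)}\right|\) (outside the main derivation; scipy assumed, with an arbitrary value of \(\alpha\)), the sketch below compares a mixed central finite difference of \(A\) with the closed form above.

```python
import numpy as np
from scipy.stats import norm

alpha = 1.2

def A(x, y):
    # Husler-Reiss stable tail dependence function
    return (x * norm.cdf(1/alpha + (alpha/2) * np.log(x/y))
            + y * norm.cdf(1/alpha + (alpha/2) * np.log(y/x)))

def A11_closed(psi):
    # |A^{(1,1)}(psi, 1-psi)| = (alpha/(2 psi)) * phi(1/alpha - (alpha/2) log(psi/(1-psi)))
    return alpha / (2 * psi) * norm.pdf(1/alpha - (alpha/2) * np.log(psi / (1 - psi)))

h = 1e-5
for psi in [0.2, 0.5, 0.7]:
    x, y = psi, 1 - psi
    # central mixed finite difference for d^2 A / dx dy
    mixed = (A(x+h, y+h) - A(x+h, y-h) - A(x-h, y+h) + A(x-h, y-h)) / (4 * h * h)
    assert np.isclose(abs(mixed), A11_closed(psi), rtol=1e-4)
print("Husler-Reiss A^{(1,1)} verified numerically")
```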
To calculate asymptotic expressions for \(T_{2}\) we utilise the asymptotic Mills ratio for the standard normal distribution. Combining this with the other terms, we obtain the following results
\[\delta_{L}(r,\mathbf{w})\sim\begin{cases}A_{1}\exp\left(-\frac{1}{2}\left[\left(\frac{\alpha}{2}r\right)^{2}+\left(\frac{\alpha^{2}}{2}\log(\log(4))-1\right)r\right]\right),&q=0,1\\ A_{2}\exp\left(-\frac{1}{2}\left[\left(\frac{\alpha}{2}(w_{1}-w_{2})r\right)^{2}-r\right]\right),&q\in(0,1)\\ A_{3}\sqrt{\frac{2}{r}}\exp\left(-\frac{1}{2}\left[\left(\frac{\alpha}{2}r|w_{2}|\right)^{2}+\left(\frac{\alpha^{2}}{2}\log(2r|w_{1}|)-1\right)r|w_{2}|+\left(\frac{\alpha}{2}\log(2r|w_{1}|)\right)^{2}\right]\right),&q\in(1,2)\\ A_{4}\left(\log(r)\right)^{-1}r^{\frac{1}{4}\alpha^{2}\log\left(\log(2)\right)+\frac{1}{2}}\,\exp\left(-\frac{1}{2}\left(\frac{1}{2}\alpha\log(r)\right)^{2}\right),&q=2,3\\ A_{5}\exp(-r(A(\mathbf{w})-1)),&q\in(2,3),\end{cases}\]
Figure 17: Effect of choice of origin for polar coordinates on speed of convergence of angular-radial dependence function to asymptotic form, for EV copula with asymmetric logistic dependence with \(\alpha=5\), \(\gamma_{1}=0.9\) and \(\gamma_{2}=0.1\). Left plot has polar origin at \((x_{0},y_{0})=(0,0)\), right plot has origin selected such that \(T_{2}=T_{3}\) at fixed angle – see (37).
where the terms \(A_{j}\) are constants dependent on \(q\) only, given by
\[A_{1} =\alpha\phi\left(\frac{1}{\alpha}\right)(\log(4))^{-\frac{1}{2}- \frac{\alpha^{2}}{8}\log(\log(4))}\] \[A_{2} =\alpha\phi\left(\frac{1}{\alpha}\right)\] \[A_{3} =\phi\left(\frac{1}{\alpha}\right)2^{-\frac{\alpha^{2}}{4}\frac{ |w_{1}|}{|w_{2}|}}\left[\frac{2}{\alpha}\frac{\sqrt{|w_{1}|}}{|w_{2}|}+\frac{ \alpha}{2}\frac{1}{\sqrt{|w_{1}|}}\right]\] \[A_{4} =\frac{4}{\alpha}\phi\left(\frac{1}{\alpha}\right)(\log(2))^{- \frac{1}{2}-\frac{\alpha^{2}}{8}\log(\log(2))}\] \[A_{5} =4^{1-A(\mathbf{w})}\Phi\left(\frac{1}{\alpha}+\frac{\alpha}{2} \log\left(\frac{|w_{1}|}{|w_{2}|}\right)\right)\Phi\left(\frac{1}{\alpha}- \frac{\alpha}{2}\log\left(\frac{|w_{1}|}{|w_{2}|}\right)\right).\]
Clearly, this does not have asymptotic form (25) for all \(q\). Moreover, outside the third quadrant, the limit set is the line segment \(\{(x,x),x\in[0,1]\}\). However, after multiplying by \(m_{L}(r,\mathbf{w})=\frac{1}{4}\exp(-r)\), the terms multiplying the leading order contributions within the exponential are continuous in \(q\), and hence assumptions (A1) and (A2) are satisfied.
### Gaussian copula
For the Gaussian copula we focus on models for the copula density, since the copula does not admit a simple closed form expression. The bivariate Gaussian copula density with Pearson correlation coefficient \(\rho\in(-1,1)\) is given by
\[c(u,v) =\frac{1}{\sqrt{1-\rho^{2}}}T_{1}T_{2},\] \[T_{1} =\exp(-\tfrac{1}{2}\rho\beta(x^{2}+y^{2})),\] \[T_{2} =\exp(\beta xy),\]
where \(x=\Phi^{-1}(u)\), \(y=\Phi^{-1}(v)\), \(\Phi\) is the standard normal cdf and \(\beta=\rho/(1-\rho^{2})\). From the symmetry of the copula, we need only consider the asymptotic behaviour of the copula in the lower left corner. For \(u\to 0^{+}\), \(x\to-\infty\), and utilising the asymptotic Mills' ratio we have \(\Phi(x)\sim-\phi(x)/x\), where \(\phi\) is the standard normal pdf. Solving for \(x\) gives
\[x^{2}\sim W_{0}\left(\frac{1}{2\pi u^{2}}\right)\sim L_{u}- \log(L_{u}),\quad u\to 0^{+},\]
where \(W_{0}\) is the zeroth branch of the Lambert W function, and \(L_{u}=-\log(2\pi u^{2})\). Substituting this approximation shows that
\[T_{1}\sim\left(2\pi uv\sqrt{L_{u}L_{v}}\right)^{\rho\beta},\quad u,v\to 0^{+}. \tag{38}\]
For the product terms, for \(u,v\to 0^{+}\) we have
\[xy \sim\sqrt{(L_{u}-\log(L_{u}))(L_{v}-\log(L_{v}))}\] \[=\sqrt{L_{u}L_{v}}\sqrt{1-\frac{\log(L_{u})}{L_{u}}-\frac{\log(L_ {v})}{L_{v}}+\frac{\log(L_{u})\log(L_{v})}{L_{u}L_{v}}}\] \[\sim\sqrt{L_{u}L_{v}}\left[1-\frac{\log(L_{u})}{2L_{u}}-\frac{ \log(L_{v})}{2L_{v}}+\frac{\log(L_{u})\log(L_{v})}{2L_{u}L_{v}}\right]\] \[\sim\sqrt{L_{u}L_{v}}-\frac{1}{2}\left[\sqrt{\frac{L_{v}}{L_{u}} }\log(L_{u})+\sqrt{\frac{L_{u}}{L_{v}}}\log(L_{v})\right]. \tag{39}\]
These expressions can be used to derive the tail density function and copula density exponent functions, as discussed below.
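Before specialising to the ARL and ARE models, the Lambert-W quantile approximation used above can be checked numerically against the exact Gaussian quantile (scipy assumed); this is only a sanity check of the stated asymptotics, not code from the paper.

```python
# Check x^2 ~ W_0(1/(2*pi*u^2)) ~ L_u - log(L_u) for x = Phi^{-1}(u), u -> 0+.
import numpy as np
from scipy.special import lambertw
from scipy.stats import norm

for u in [1e-3, 1e-6, 1e-9]:
    exact = norm.ppf(u) ** 2
    approx_w = lambertw(1.0 / (2.0 * np.pi * u ** 2)).real
    L_u = -np.log(2.0 * np.pi * u ** 2)
    print(u, exact, approx_w, L_u - np.log(L_u))
```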
#### b.2.1 ARL model
In the lower left corner we have \((u,v)=(t(1-w),tw)\), so that
\[L_{u}L_{v}=4\left(\log^{2}(\sqrt{2\pi}t)+\log(\sqrt{2\pi}t)\log(w(1-w))+\log(w) \log(1-w)\right).\]
The terms which dominate depend on the relative sizes of \(t\) and \(w\). For fixed \(w\in(0,1)\) and \(t\to 0^{+}\), we have
\[\sqrt{L_{u}L_{v}} =-2\log(\sqrt{2\pi}t)\left(1+\frac{\log(w(1-w))}{\log(\sqrt{2\pi} t)}+\frac{\log(w)\log(1-w)}{\log^{2}(\sqrt{2\pi}t)}\right)^{1/2}\] \[\sim-2\log(\sqrt{2\pi}t)\left(1+\frac{\log(w(1-w))}{2\log(\sqrt{2 \pi}t)}+\frac{\log(w)\log(1-w)}{2\log^{2}(\sqrt{2\pi}t)}\right)\] \[\sim-\log(2\pi t^{2}w(1-w)).\]
However, for \(w\to 0^{+}\) with \(t\) fixed, we have
\[\sqrt{L_{u}L_{v}}\sim 2\left(\log\left(\sqrt{2\pi}t\right)\log(w)\right)^{1/2}.\]
The other terms in (39) also behave differently depending on the relative sizes of \(w\) and \(t\). For \(t\to 0^{+}\) with \(w\) fixed we have \(L_{u}/L_{v}\to 1\). In contrast, when \(w\ll t\), \(L_{u}/L_{v}\to 0\). Gathering all the terms together, for fixed \(w\in(0,1)\) and \(t\to 0^{+}\) we get
\[c(t(1-w),tw)\sim\frac{1}{1-\rho^{2}}\left(-4\pi t^{2}w(1-w)\log(t)\right)^{- \frac{\rho}{1+\rho}}.\]
Comparing this with (18), we see that \(\kappa_{(0,0)}=2/(1+\rho)\). A similar analysis for the other corners shows that \(\kappa_{(1,1)}=2/(1+\rho)\) and \(\kappa_{(1,0)}=\kappa_{(0,1)}=2/(1-\rho)\).
For small \(t\) and \(w\to 0^{+}\) with \(w\ll t\), combining the terms above gives
\[c(t(1-w),tw)\sim\frac{1}{1-\rho^{2}}\left(2\sqrt{2}\pi t^{2}w(\log(2\pi t^{2} )\log(w))^{\frac{1}{4}}\right)^{\frac{\rho^{2}}{1-\rho^{2}}}\exp\left(\frac{2 \rho}{1-\rho^{2}}\sqrt{\log(2\pi t^{2})\log(w)}\right).\]
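The tail order obtained above can be checked numerically: for fixed \(w\) the leading power of \(t\) in the expansion is \(t^{-2\rho/(1+\rho)}\) (equivalently \(\kappa_{(0,0)}-2\)), up to a slowly varying logarithmic factor, so a log–log slope estimate of the copula density along the diagonal should be close to this value. The sketch below (scipy assumed) transcribes the density given at the start of this section and is illustrative only.

```python
# Rough numerical estimate of the lower-tail behaviour of the Gaussian copula density.
import numpy as np
from scipy.stats import norm

def gaussian_copula_density(u, v, rho):
    x, y = norm.ppf(u), norm.ppf(v)
    beta = rho / (1.0 - rho ** 2)
    return np.exp(-0.5 * rho * beta * (x ** 2 + y ** 2) + beta * x * y) / np.sqrt(1.0 - rho ** 2)

rho = 0.5
t = np.logspace(-4, -10, 7)
c = gaussian_copula_density(t / 2.0, t / 2.0, rho)         # diagonal, w = 1/2
slope = np.polyfit(np.log(t), np.log(c), 1)[0]
print(slope, -2.0 * rho / (1.0 + rho))                     # slope is close to -2*rho/(1+rho)
```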
#### b.2.2 ARE model
For the lower left corner we have \((u,v)=(\exp(-r(1-w)),\exp(-rw))\), so that \(L_{u}=2r(1-w)-\log(2\pi)\) and \(L_{v}=2rw-\log(2\pi)\), and \(x^{2}\sim 2r(1-w)-\log(2\pi L_{u})\), \(y^{2}\sim 2rw-\log(2\pi L_{v})\) as \(r\to\infty\). Substituting this back shows that
\[T_{1}\sim(4\pi r\sqrt{w(1-w)})^{2\alpha}\exp(-2\alpha r),\quad r\to\infty.\]
Similarly, from (39) we have
\[xy\sim 2r\sqrt{w(1-w)}-\frac{1}{2}\sqrt{\frac{w}{1-w}}\log(2\pi L_{u})-\frac{1} {2}\sqrt{\frac{1-w}{w}}\log(2\pi L_{v}),\quad r\to\infty.\]
Therefore
\[T_{2}\sim\exp(2\beta r\sqrt{w(1-w)})(4\pi r(1-w))^{-\frac{\beta}{2}\sqrt{ \frac{w}{1-w}}}(4\pi rw)^{-\frac{\beta}{2}\sqrt{\frac{1-w}{w}}},\quad r\to\infty.\]
Combining these terms gives
\[c(\exp(-r\mathbf{w}))\sim\mathcal{M}_{(0,0)}(\exp(-r),\mathbf{w})\exp(-r( \lambda_{(0,0)}(\mathbf{w})-1)),\quad r\to\infty,\]
where
\[\mathcal{M}_{(0,0)}(\exp(-r),(1-w,w))) =\frac{1}{1-\rho^{2}}(4\pi r)^{a_{1}}w^{a_{2}}(1-w)^{a_{3}},\] \[\lambda_{(0,0)}(1-w,w) =\frac{1-2\rho\sqrt{w(1-w)}}{1-\rho^{2}},\]
\[a_{1} =\frac{\rho}{2-2\rho^{2}}\left[2\rho-\frac{1}{\sqrt{w(1-w)}}\right],\] \[a_{2} =\frac{\rho}{2-2\rho^{2}}\left[\rho-\sqrt{\frac{1-w}{w}}\right],\] \[a_{3} =\frac{\rho}{2-2\rho^{2}}\left[\rho-\sqrt{\frac{w}{1-w}}\right].\]
Note that this asymptotic form is only valid for \(w\in(0,1)\), since it assumes that both \(u\) and \(v\) go to zero as \(r\to\infty\).
The copula density exponent function satisfies the assumptions of Proposition 4.1 with \(\frac{d}{dw}\lambda_{(0,0)}(w,1-w)=0\). Recalling that \(\kappa_{(0,0)}=2/(1+\rho)\), we see that the form of \(b_{(0,0)}\) obtained from Proposition 4.1 agrees with that obtained in the preceding subsection.
#### b.2.3 AR copula function for Laplace margins
From Proposition 4.3, we just need to check that \(\lambda(-1)=\lim_{w\to 1^{-}}\lambda_{(0,0)}(w)\). On Laplace margins, for \(q=-1\) we have \(u=\frac{1}{2}\exp(-r)\) and \(v=1/2\). In this case \(y=0\) and \(L_{u}=2r-\log(\pi/2)\), so that \(x^{2}\sim 2r-\log(\pi L_{u}/2)\). Therefore
\[\delta_{L}(r,(0,-1),c) \sim\frac{1}{\sqrt{1-\rho^{2}}}\exp(-\tfrac{1}{2}\rho\beta(2r-\log(\pi L_{u}/2)))\] \[\sim\frac{1}{\sqrt{1-\rho^{2}}}(\pi r)^{\tfrac{1}{2}\rho\beta}\exp\left(-r\frac{\rho^{2}}{1-\rho^{2}}\right).\]
Hence, \(\lambda(-1)=\lim_{w\to 1^{-}}\lambda_{(0,0)}(w)\).
### T copula
As with the Gaussian copula, the t-copula does not admit a simple closed form expression. We therefore focus on the copula density here. The bivariate t copula density function with parameters \(\rho\in(-1,1)\), \(\nu>0\) is given by
\[c(u,v)=\frac{g^{2}\nu}{2\sqrt{1-\rho^{2}}}\left(1+\frac{x^{2}+y^{2}-2\rho xy }{\nu(1-\rho^{2})}\right)^{-\frac{\nu}{2}-1}\left(\left(1+\frac{x^{2}}{\nu} \right)\left(1+\frac{y^{2}}{\nu}\right)\right)^{\frac{\nu+1}{2}},\]
where \(g=\Gamma(\nu/2)/\Gamma((\nu+1)/2)\), \(x=F_{t}^{-1}(u,\nu)\), \(y=F_{t}^{-1}(v,\nu)\) and \(F_{t}^{-1}(\cdot,\nu)\) is the inverse cdf of the univariate t distribution on \(\nu\) degrees of freedom. The quantiles of the univariate t distribution asymptote to [50]
\[F_{t}^{-1}(p,\nu) \sim-\sqrt{\nu}(pg\nu\sqrt{\pi})^{-1/\nu},\quad p\to 0,\] \[F_{t}^{-1}(p,\nu) \sim\sqrt{\nu}((1-p)g\nu\sqrt{\pi})^{-1/\nu},\quad p\to 1.\]
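These quantile asymptotics can be compared directly with a numerical quantile function (scipy assumed); the degrees of freedom and probability levels below are arbitrary examples.

```python
# Compare the quoted asymptotic form of the t quantile function with scipy's t.ppf.
import numpy as np
from scipy.stats import t as t_dist
from scipy.special import gamma

nu = 3.0
g = gamma(nu / 2.0) / gamma((nu + 1.0) / 2.0)
for p in [1e-3, 1e-6, 1e-9]:
    exact = t_dist.ppf(p, nu)
    approx = -np.sqrt(nu) * (p * g * nu * np.sqrt(np.pi)) ** (-1.0 / nu)
    print(p, exact, approx)
```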
#### b.3.1 ARL model
Substituting these asymptotic approximations shows that
\[c(t\mathbf{w})\sim\Upsilon_{(0,0)}b_{(0,0)}(\mathbf{w})t^{-1},\]
where
\[\Upsilon_{(0,0)}b_{(0,0)}(\mathbf{w})=\frac{g}{2\sqrt{\pi}}(1-\rho^{2})^{ \frac{\nu+1}{2}}(w_{1}w_{2})^{\frac{1}{\nu}}\left(w_{1}^{\frac{2}{\nu}}+w_{2} ^{\frac{2}{\nu}}-2\rho(w_{1}w_{2})^{\frac{1}{\nu}}\right)^{-1-\frac{\nu}{2}}.\]
So the copula density has asymptotic form (18) with \(\kappa_{(0,0)}=1\). The same tail order is obtained in the other corners, whereas in the lower right and upper left corners the tail density function is obtained with the sign of \(\rho\) changed. Separating \(\Upsilon_{(0,0)}\) and \(b_{(0,0)}\) requires consideration of the copula, which does not have a simple closed form. However, the tail dependence coefficient can be expressed in terms of the cdf of the univariate t-distribution (see e.g. [51, 52, 6, 53]).
#### b.3.2 ARE model
A similar analysis shows that the copula density also has ARE form with
\[\lambda_{(0,0)}(1-w,w) =\left(1+\frac{2}{\nu}\right)\max(w,1-w)-\frac{1}{\nu}\] \[\mathcal{M}_{(0,0)}(t,w) =\frac{g}{2\sqrt{\pi}}(1-\rho^{2})^{\frac{\nu+1}{2}}\times\begin{cases} (2-2\rho)^{-1-\frac{\nu}{2}},&w=1/2\\ 1,&w\in[0,1/2)\cup(1/2,1].\end{cases}\]
As with the tail order function, the same functions are obtained in the other corners, but with the sign of \(\rho\) changed in the lower right and upper left corners.
Since the t-copula satisfies the assumptions of Proposition 4.2, we can confirm that the copula density exponent function obtained above is the same as that given in Proposition 4.2. The tail density function \(b_{(0,0)}(1-z,z)\) has regularly-varying tails with order \(1/\nu\) for \(z\to 0^{+}\) and \(z\to 1^{-}\). Also note that \(|w-\frac{1}{2}|=\max(w,1-w)-\frac{1}{2}\). Substituting these into the expression given in Proposition 4.2 gives the desired result.
#### b.3.3 AR copula function for Laplace margins
As before, we just need to check the behaviour for \(\mathbf{w}=(1,0)\). In this case, we have \(y=0\) and \(u=1-\frac{1}{2}\exp(-r)\), so that \(x\sim\sqrt{\nu}(\frac{1}{2}\exp(-r)g\nu\sqrt{\pi})^{-1/\nu}\). Substituting this shows that
\[\delta_{L}(r,(1,0),c)\sim\frac{g^{2}\nu}{2}\,\left(\frac{g\nu\sqrt{\pi}}{2} \right)^{1/\nu}\,\left(1-\rho^{2}\right)^{(\nu+1)/2}\,\exp(-r/\nu).\]
Hence \(\lambda(0)=\lim_{w\to 0^{+}}\lambda_{(1,1)}(w)\).
|
2308.08503 | Principle of Terahertz Time-Domain Spectroscopy | In this work, the detection of terahertz (THz) pulse via different schemes of
THz time-domain spectroscopy (THz-TDS), including electro-optic sampling (EOS)
and photoconductive sampling (PCS), has been reviewed by insisting on its
principles and characteristics. Furthermore, four developments of THz-TDS, i.e.
THz air-breakdown/biased coherent detection (THz-ABCD), THz-radiation-enhanced
emission of fluorescence (THz-REEF), single-shot THz-TDS, and THz asynchronous
optical sampling (THz-ASOPS) have also been introduced. | Jiayu Zhao | 2023-08-09T03:10:35Z | http://arxiv.org/abs/2308.08503v1 | # Principle of Terahertz Time-Domain Spectroscopy
###### Abstract
In this work, the detection of terahertz (THz) pulses via different schemes of THz time-domain spectroscopy (THz-TDS), including electro-optic sampling (EOS) and photoconductive sampling (PCS), has been reviewed with emphasis on its principles and characteristics. Furthermore, four developments of THz-TDS, i.e. THz air-breakdown/biased coherent detection (THz-ABCD), THz-radiation-enhanced emission of fluorescence (THz-REEF), single-shot THz-TDS, and THz asynchronous optical sampling (THz-ASOPS), have also been introduced.
Terahertz time-domain spectroscopy, principle, electro-optic sampling, photoconductive sampling, air-breakdown/biased coherent detection, THz-radiation-enhanced emission of fluorescence, single-shot, asynchronous optical sampling.
## 1 Introduction
Terahertz (THz) waves, commonly defined as covering 0.1 THz to 10 THz, lie in one of the least-explored spectral regions of the electromagnetic (EM) spectrum (between microwave and far infrared), mainly due to the lack of efficient time-domain detection techniques[1]. However, since THz time-domain spectroscopy (THz-TDS) was intensively studied in the 1990s[2], THz science and technology has become a vibrant research field receiving increasing attention from all over the world in the past three decades[3, 4, 5, 6, 7, 8, 9, 10].
Triggered by its cutting-edge applications, THz-TDS has now been broadly applied in diverse fields, such as bio-medicine, material identification, non-destructive evaluation and nonlinear spectroscopy[3, 4, 5, 6, 7, 8, 9, 10]. THz-TDS is becoming increasingly important and popular because of its broad-bandwidth detection and its ability to simultaneously extract both THz amplitude and phase information, which affords a powerful technique for investigating the rich physical and chemical processes in the THz range[11, 12].
A THz-TDS setup is, in principle, a coherent (single-cycle) THz emission and detection system. The time-resolved measurement of the THz wave is achieved by splitting laser pulses into a pump beam to excite pulsed THz radiation and a probe beam to temporally sample the THz pulse (by mixing the THz and probe pulses in a detector). The THz temporal waveform can thus be obtained by varying the relative time delay between the THz wave (pump beam) and the probe beam. The detected THz signal is in the form of an electric field, and its Fourier transformation gives rise to both amplitude and phase information over a wide THz spectral range.
In this work, we have presented the principles of THz-TDS, as well as an overview of six representative experiment schemes of THz-TDS. The basic principles and experimental techniques of THz-TDS have been described in Section 2.1, showing two most widely used systems, i.e. electro-optic sampling (EOS[13, 14, 15], in Section 2.2) and photoconductive sampling (PCS[16], in Section 2.3). The relevant materials and devices, including EO crystals, PC substrate materials, and optical and electronic devices, have also been discussed. In Section 2.4, the characteristics of THz-TDS setup have been addressed, with emphasis on its performances in terms of sensitivity, signal-to-noise ratio (SNR) and dynamic range (DR). In the same section, THz-TDS and the traditional Fourier transform infrared spectroscopy (FTIR)[17] have been compared, as well. In addition to the standard EOS and PCS based THz-TDS, four developments on THz-TDS, namely, THz air-breakdown/biased coherent detection (THz-ABCD)[18], THz-radiation-enhanced emission of fluorescence (THz-REEF)[19, 20], single-shot THz-TDS[21, 22], and THz asynchronous optical sampling (THz-ASOPS)[23, 24, 25, 26] have been briefly introduced in Section 3.
## 2 THz time-domain spectroscopy (THz-TDS)
The pioneering studies of materials or gases with THz pulses were initiated in the late 1980s[27, 28], and this advanced experimental technique afterwards came to be described as the well-known THz-TDS. Recently, THz-TDS has been widely used to investigate THz applications in the fields of physics, chemistry, biology and medicine[29]. The main advantage of THz-TDS lies in its simultaneous measurement of both the amplitude and
phase of the THz signals transmitted, reflected or scattered by the samples, with broadband response and good performance of signal-to-noise ratio (SNR).
### Basic principle
A conventional THz-TDS system is built around a femtosecond laser system, from which the same laser pulse is used for both generation and detection of the THz wave. As shown in Fig. 1, a femtosecond laser pulse is split into two paths. One is the pump beam and the other is the probe.
The pump pulse is used to generate the THz wave by means of a laser plasma filament (optical rectification in a nonlinear crystal or a photoconductive antenna, etc., can also be used), which is created at the focus by focusing the pump beam with a converging lens. The THz pulse exiting from the filament is first collimated by an off-axis parabolic mirror, and then focused by another identical parabolic mirror onto the 'THz detector' (described in detail in Sections 2.2 and 2.3).
The probe beam is combined with the THz pulse by a pellicle beam splitter to perform the THz-TDS measurement. The basic principle of THz-TDS is to slowly sample a fast THz transient with a probe pulse by coherent detection in the time domain, as depicted in Fig. 2.
The temporal duration of the probe pulse (also the integration time of the detector) is typically much shorter than that of the THz pulse, and hence can be employed as a small temporal 'gate' to stepwise sample the THz waveform. The time-domain sampling is performed when the THz and probe pulses arrive at the detector with a variable delay \(t_{i}\), and generate a corresponding nonlinear response to be detected. This time-variable sampling can be achieved by scanning \(t_{i}\), i.e. by guiding either the pump or the probe beam traveling along a path with an adjustable length, which is generally caused by a spatial movement of a mechanical delay line (with an appropriate step length and range), changing the relative arrival time between the probe and the THz pulses at the detector.
By repeating the measurement of the THz field at the detector for a set of different delays, the full THz waveform \(E_{\text{THz}}(t)\) can be sampled in its entirety. Fig. 3 provides an example of a measured THz waveform. It can be seen that the THz pulse has a single-cycle character. The corresponding amplitude/power (or phase) of each spectral component of the complex THz spectral field can be generated by a subsequent numerical Fourier transformation of the recorded THz temporal data, as shown in the inset of Fig. 3.
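To make the sampling picture concrete, the following toy simulation (numpy assumed) builds a single-cycle transient, gates it with a much shorter probe pulse at a series of delays as in Fig. 2, and Fourier-transforms the sampled trace as in Fig. 3; it is a schematic illustration, not a model of any particular instrument.

```python
import numpy as np

dt = 10e-15                           # 10 fs grid
t = np.arange(-5e-12, 5e-12, dt)      # 10 ps simulation window

# Toy single-cycle THz transient (derivative of a Gaussian envelope)
tau = 0.3e-12
E_thz = -np.gradient(np.exp(-(t / tau) ** 2), dt)
E_thz /= np.abs(E_thz).max()

# Stepwise sampling: overlap of the field with a short (50 fs) probe gate
# centred at each delay, mimicking the scheme of Fig. 2
gate_w = 50e-15
step = 50e-15
delays = np.arange(-4e-12, 4e-12, step)
trace = np.array([np.sum(E_thz * np.exp(-((t - d) / gate_w) ** 2)) for d in delays])
trace /= np.abs(trace).max()

# Fourier transform of the sampled waveform gives the amplitude spectrum (Fig. 3)
freq = np.fft.rfftfreq(len(delays), d=step)
spectrum = np.abs(np.fft.rfft(trace))
print("peak frequency (Hz):", freq[np.argmax(spectrum[1:]) + 1])
```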
### Electro-optic sampling (EOS)
The EOS technique is commonly used for the phase-sensitive detection of EM radiation, due to its simple experimental configuration and broad detection bandwidth. In 1995–1996, THz detection by free-space EOS was realized thanks to the significant efforts of Nahata _et al.[13]_, Jepsen _et al.[14]_ and Wu _et al.[15]_. In a THz-TDS system, the EOS setup is a widely used 'THz detector', which can detect fast THz transients in a nonlinear EO crystal (such as ZnTe) based on its birefringence (EO effect) induced by the incident THz radiation.
#### 2.2.1 Principle
EOS[30][31], the most common detection method used in the THz community, is realized through an optical rectification process inside a nonlinear EO crystal, such as ZnTe or GaP. The use of EO crystal for EOS typically
Figure 1: Schematic diagram of a representative THz-TDS system. A Teflon plate, which has high transmission for THz pulse, is put after the plasma filament to block the residual pumping laser. PBS is short for polarization beam splitter. The other optical and electronic devices are described in detail in the main text.
Figure 3: A representative THz pulse detected by THz-TDS and its Fourier transformation spectrum (inset).
Figure 2: Schematic of the sampling of a THz temporal waveform with probe pulses via nonlinear response and variable time delay.
employs the EO (or Pockels) effect. When the THz beam interacts with the probe beam inside the EO crystal, the THz electric field induces an instantaneous birefringence in the crystal (Figs. 4 and 5).
This birefringence (difference in the refractive indices along each EO crystal axis) leads to a change in the polarization state (polarization rotation, \(\propto E_{\text{THz}}\)) of the collinearly propagating probe pulse, which can be detected after the crystal using a combination of polarization optics (\(\lambda\)/4 plate \(+\) Wollaston prism/polarization beam splitter) and a pair of photo detectors (or a single balanced detector). The use of such a 'balanced' detection scheme improves the sensitivity by cancelling most of the laser fluctuations and achieves a better SNR. As for the time-resolved detection of the THz pulse \(E_{\text{THz}}\)(\(t\)), the transient birefringence is sampled with the probe pulse at different time delays.
#### 2.2.2 EO crystal
The performance of EOS is limited by the dielectric properties of the EO crystal. Since LiTaO\({}_{3}\) crystal was first demonstrated as the EOS element in 1995 by Wu and Zhang[15], a variety of EO materials without inversion symmetry, including semiconductors, inorganic and organic crystals, have been tested as 'detector' for applications in THz-TDS[32, 33]. Among such crystals, ZnTe and GaP are most commonly used for EOS because of their large EO coefficients and good transparency at the wavelength of both the incident THz radiation and probe beam. Apart from ZnTe and GaP, LiTaO\({}_{3}\), GaAs and DAST are also excellent candidates for EOS of the THz pulse. Properties of these EO crystals have been compared in Table 1, including EO coefficient, group refractive index and surface orientation.
An important limitation for broadband detection of THz waves by EOS is the EO crystal's optical phonon absorption band, within which the crystal is insensitive to the incident THz wave due to the phonon resonances. Compared with ZnTe crystal, GaP has a higher-frequency optical phonon absorption band[32] and thus can be used to achieve broader-bandwidth detection of THz radiation. However, the EO coefficient of GaP crystal is only one fourth that of ZnTe[32].
Another crucial factor is the appropriate phase matching between the THz and probe pulses, which propagate on the same axis inside the EO crystal. Phase matching is achieved when the phase velocity of THz wave is equal to the group velocity of probe pulse (or the velocity of probe envelope)[34]. For an EO crystal with dispersion at optical frequencies, the phase difference \(\Delta\Gamma\) of the probe beam is proportional to the crystal thickness \(d\) as described by[34]:
\[\Delta\Gamma=\frac{2\pi}{\lambda}\,dn_{\text{g}}^{3}r_{41}E_{THz}\,, \tag{1}\]
where \(n_{\text{g}}\) is the group refractive index at the wavelength \(\lambda\) of the probe beam, and \(r_{41}\) is the EO coefficient of the EO crystal.
Due to the difference in the respective dispersions, the phase mismatch accumulates as the two waves propagate through the crystal. Therefore, a thinner EO crystal normally detects higher THz frequencies than a thicker one, because of the phase mismatch (shorter coherence length) and also the stronger optical phonon absorption in the latter. If possible, the EO crystal thickness should be estimated before applying EOS, by taking into account the phase-matching condition for the target THz frequency range, as well as the probe central wavelength.
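As an order-of-magnitude illustration of Eq. (1), the sketch below uses the ZnTe entries of Table 1; the probe wavelength, crystal thickness and THz field strength are assumed example values, not numbers from the text.

```python
# Rough estimate of the EO phase retardation of Eq. (1) for a ZnTe sensor.
import numpy as np

lam   = 800e-9        # assumed probe wavelength (m)
d     = 1e-3          # assumed crystal thickness (m)
n_g   = 3.224         # group refractive index of ZnTe (Table 1)
r41   = 4.04e-12      # EO coefficient of ZnTe (m/V, Table 1)
E_thz = 1e5           # assumed THz field, 1 kV/cm expressed in V/m

delta_gamma = 2 * np.pi / lam * d * n_g ** 3 * r41 * E_thz
print("phase retardation (rad):", delta_gamma)   # ~0.1 rad for these numbers
```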
\begin{table}
\begin{tabular}{c c c c} \hline EO & EO coefficient & Group refractive & Surface \\ crystal & (pm/V) at (\(\mu\)m) & index at (\(\mu\)m) & orientation \\ \hline ZnTe & \(r_{41}=4.04\) at 0.633 & 3.224 at 0.835 & 110 \\ GaP & \(r_{41}=0.97\) at 0.633 & 3.556 at 0.835 & 110 \\ LiTaO\({}_{3}\) & \(r_{33}=30.5\) at 0.633 & — & — \\ & \(r_{31}=8.40\) at 0.633 & & — \\ GaAs & \(r_{41}=1.43\) at 1.15 & 3.61 at 0.886 & — \\ DAST & \(r_{11}=160\) at 0.820 & — & — \\ \hline \end{tabular}
\end{table}
Table 1: Properties of typical EO crystals[32, 33]
Figure 4: Schematic diagram of a representative EOS setup.
Figure 5: The horizontal polarization of the probe pulse (a) is rotated via EO effect (b) when THz pulse co-propagates inside the EO crystal, which will result in a final non-zero output of the balanced detector.
#### 2.2.3 Optical and electronic devices
To develop a THz-TDS, one requires both optical and electronic elements. This section discusses some aspects of these devices.
_Off-axis parabolic mirror_
Off-axis parabolic mirrors are used to collimate and focus the THz radiation from source to detector, or to focus the THz wave to a diffraction-limited spot on a potentially small sample. Their advantages are (i) large area, (ii) high reflectivity (\(>95\,\%\) over the entire THz range with a protected gold coating), and (iii) no echo pulses.
_Chopper and lock-in amplifier_
During EOS, the detected THz signal is integrated over a large number of pulses with a lock-in amplifier. Accordingly, a mechanical (optical) chopper is applied to periodically modulate the pumping laser pulses (or the bias on the photoconductive emitter). The function of a lock-in amplifier is as follows: when the THz signal is chopped, the lock-in amplifier detects the fraction of the modulated THz signal with the same frequency, canceling out most other frequencies and DC parts. Therefore, weak THz signals on a high background noise level can be detected.
_Optical delay line_
The optical delay line enables scanning of the time delay. The optical delay line used is normally a combination of two mirrors orthogonally mounted on a stepper-motor-driven stage with high mechanical stability and good positioning accuracy, so that small changes of the time difference \(\Delta t\) can be resolved.
On the other hand, the maximum scanning range (time window \(T_{\text{window}}\)) is of importance for spectral resolution \(\Delta v\) (the inverse of \(T_{\text{window}}\)) of the THz-TDS system, given by the following mathematical relation:
\[\Delta v=\frac{1}{T_{window}}=\frac{1}{N\cdot\Delta t}\,, \tag{2}\]
where \(N\) is the number of data points recorded in the time domain. As an example, THz-TDS systems typically employ a 3-cm-long delay line, leading to \(T_{\text{window}}=200\) ps and \(\Delta v=5\) GHz. This is the reason for scanning a long time delay and recording the THz signal over a large \(T_{\text{window}}\) during EOS in order to increase the spectral resolution. However, a longer delay line requires perfect alignment of the laser beams, as well as accurate control of the laser power stability, which need to be taken into account.
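The trade-off between delay-line travel, time window and spectral resolution can be summarised by the small calculation below (the 3 cm travel reproduces the 200 ps / 5 GHz example above; the stage step size is an assumed value).

```python
# Time window and spectral resolution from the delay-line travel, following Eq. (2).
c = 2.998e8                      # speed of light (m/s)
travel = 0.03                    # stage travel (m); the optical path change is doubled
step = 10e-6                     # assumed 10 um stage step

T_window = 2 * travel / c        # ~2e-10 s = 200 ps
dt = 2 * step / c                # time step per stage step
N = T_window / dt                # number of samples
dv = 1.0 / T_window              # spectral resolution, ~5 GHz
print(T_window, dt, N, dv)
```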
Until very recently, the EOS technique has been the only method for ultra-broadband detection of THz radiation. Compared with EOS, although the photoconductive antenna (PCA) was a key device in initiating the recent development of THz science and technology, the photoconductive sampling (PCS) method has become somewhat old-fashioned. However, PCS is still widely adopted in THz labs due to its simple and integrated experimental configuration.
### Photoconductive sampling (PCS)
The first report of THz-TDS for the detection of THz radiation was by Auston _et al.[16]_ in 1984 using a PCA, which is the earliest THz pulse detector, typically containing a pair of micro-fabricated metal electrodes with a small gap on a semiconductor substrate[35]. A sensitive electronic current amplifier is used to connect the two electrodes[36]. When a PCA-based PCS setup is used to detect THz radiation, the basic detection principle and some related characteristics are as described below.
#### 2.3.1 Principle
During PCS, a PCA acts as the THz detector, as shown in Fig. 6, which is triggered by a time-delayed probe laser pulse to generate photo-created carriers in the electrode gap. This time delay is adjusted thanks to an optical delay line, as described in Section 2.2.3.
The incident instantaneous THz field coupled by the metallic dipole antenna provides a local bias field between the two electrodes and accelerates the mobile carriers, creating a transient photocurrent, which is then measured by a current meter (Fig. 7). In order to optimize the THz detection of the PCS scheme, the polarization directions of both the THz and probe pulses need to be perpendicular to the edge of the electrodes[37], as displayed in Fig. 6. In this way, the measured THz pulse retains its single-cycle character[37] with better performance in terms of detectable bandwidth and signal-to-noise ratio (SNR).
Figure 6: Schematic diagram of a representative PCS setup.
Figure 7: Between two electrodes, the THz electric field accelerates the carriers and creates transient photocurrent, which can be detected by a sensitive current meter.
The measured photocurrent \(J(t)\) induced by the incident THz radiation at a time delay \(t\) is proportional to the product of the incident THz electric field \(E_{\textrm{THz}}(t)\) and the number density of the existing photo-created carriers \(N(t)\) [38]:
\[J(t)=e\mu\int_{-\infty}^{\infty}E_{\textrm{THz}}(t^{\prime})\cdot N(t^{\prime}-t)\,dt^{\prime}\,, \tag{3}\]
where \(e\) and \(\mu\) are the elementary electric charge and electron mobility, respectively. Note that, for simplicity, \(N(t)\) here actually includes the convolution of the probe pulse intensity and the carrier density.
By controlling the time delay between the THz and probe pulses, the temporal change of \(J(t)\), which is proportional to the incident THz electric field, can be sampled with a current meter. Compared with the balanced detection manner of EOS, the THz electric field to be sampled is directly converted to a photocurrent in the PCA, which means that the PCA is as robust to optical noise as the EOS technique.
From Eq. (3) one can also see that, when the PCA is considered as a sampling detector, the temporal increase and decrease of \(N(t)\) should be as short as possible (best if \(N(t)\) were a \(\delta\)-function), so that \(J(t)\) would exactly reflect \(E_{\textrm{THz}}(t)\). In reality, \(N(t)\) will be restricted by several factors, such as probe pulse width and carrier lifetime of the PCA substrate material.
Clearly, the probe pulse duration must be much shorter than the photo-excited carrier lifetime, and the latter must be also significantly shorter compared to the THz pulse period, in order that the PCA response to the THz field is almost instantaneous, and the measured THz field has sufficient time resolution. Otherwise, the PCA will qualitatively work as an integrating detector according to Eq. (3). This is one of the reasons to explore the substrate materials with short carrier lifetime.
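A toy numerical illustration of Eq. (3) (numpy assumed) makes this point explicit: with a delta-like carrier response the normalized photocurrent follows the field, whereas a long carrier lifetime turns the PCA into an integrating detector. It is a schematic sketch, not a model of a real antenna.

```python
import numpy as np

dt = 10e-15
t = np.arange(-3e-12, 3e-12, dt)
E_thz = -np.gradient(np.exp(-(t / 0.3e-12) ** 2), dt)   # toy single-cycle field
E_thz /= np.abs(E_thz).max()

def photocurrent(lifetime):
    """Eq. (3): J(t_d) proportional to integral of E(t') N(t' - t_d) dt'."""
    J = np.empty_like(t)
    for i, t_d in enumerate(t):
        lag = t - t_d
        N = np.where(lag >= 0.0, np.exp(-lag / lifetime), 0.0)   # carrier density
        J[i] = np.sum(E_thz * N) * dt
    return J / np.abs(J).max()

for lifetime in (20e-15, 5e-12):       # "delta-like" vs long-lived carriers
    J = photocurrent(lifetime)
    print(f"lifetime {lifetime*1e15:.0f} fs: max |J - E_THz| = {np.abs(J - E_thz).max():.2f}")
```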
#### 2.3.2 Substrate material
Performance of a PCA depends mainly on the substrate material, whose carrier lifetime essentially limits the bandwidth of the detected THz emission. Especially, materials with a short carrier lifetime are usually selected as substrate in order to increase the response speed of PCA and achieve broadband detection.
With the motivation to shorten the carrier lifetime, low-temperature-grown GaAs (LT-GaAs) and radiation-damaged silicon-on-sapphire (RD-SOS) have been well developed[39][42]. The typical carrier lifetime of LT-GaAs is around 0.6 ps[39][40]. Ultra-broadband THz detection with frequency components over a few tens of THz has been demonstrated in previous reports[41][42] with an LT-GaAs-based PCA.
#### 2.3.3 Frequency response
Apart from the carrier lifetime of the substrate material, the probe pulse width is another crucial factor influencing the frequency response of a PCA. When working as a PCS detector, a PCA operates efficiently in the low THz frequency regime, generally below 5 THz. The frequency response of the PCA in this low THz band is almost independent of the probe pulse width. On the other hand, the decay of the high-frequency components of the THz spectra critically depends on the pulse width of the probe beam. As the pulse width of the probe beam is broadened, the high-frequency THz band rapidly decreases (even faster than the EOS spectrum).
The significant influence of the probe pulse width on the PCA frequency response was confirmed in Ref. [41], which demonstrated that the detectable bandwidth of THz radiation could be extended up to 60 THz with a PCA sampled by a probe pulse of only 15 fs duration. In order to obtain broadband detection of THz waves, the pulse duration of the probe beam should be as short as possible for both PCS and EOS. Compared with EOS, the frequency response of the PCA depends mainly on the probe pulse width, which is advantageous for spectral analysis.
### Characteristics
In this section, we will investigate some characteristics that affect the overall performances of the THz-TDS system.
#### 2.4.1 Sensitivity
One of the most attractive features of THz-TDS is the ability to coherently detect both the THz field amplitude and phase[43], which results in a far higher sensitivity compared to intensity detection (e.g. with a thermal detector or conventional photodiode). This is because the measured signal \(S_{\textrm{THz}}\) is proportional to the THz field amplitude, i.e. \(S_{\textrm{THz}}\propto E_{\textrm{THz}}\propto\surd I_{\textrm{THz}}\) as opposed to the intensity signal \(S_{\textrm{THz}^{*}}\propto I_{\textrm{THz}}\) in case of an intensity detector. Therefore, if the THz intensity/power is reduced by a factor of 100, the coherently detected signal only decreases by a factor of 10.
#### 2.4.2 Signal-to-noise ratio (SNR)
The SNRs of the THz waveforms measured by the EOS and PCS techniques are of a similar order. For the same THz electric field, the estimated SNR of the EOS method is about four times larger than that obtained when PCS is applied[44].
Recalling that SNR is defined by:
\[SNR(\omega)=\frac{S(\omega)}{\sigma(\omega)}\,, \tag{4}\]
where \(S(\omega)\) is the THz amplitude at angular frequency \(\omega\) and \(\sigma(\omega)\) is its standard deviation. The design of the PCA can affect \(\sigma(\omega)\) in complicated ways, through the THz-field collection efficiency of the metallic antenna, the electrical properties of the semiconductor substrate (dark resistivity, carrier mobility and lifetime) and the excitation conditions (photo-carrier density), which generally leads to a worse SNR for PCS than for EOS. Normally, one has to select a PCA with a fast response to increase the SNR of a PCS setup.
#### 2.4.3 Dynamic range (DR)
If \(\sigma(\omega)\) in Eq. (4) is independent of the THz signal, then \(\sigma(\omega)\) equals \(S_{\text{min}}(\omega)\), the minimum measurable THz signal at \(\omega\). The SNR formula then reduces to
\[DR=\frac{S(\omega)}{S_{\text{min}}(\omega)}\,, \tag{5}\]
which is the definition of DR.
Since THz-TDS detects the THz electric field rather than the THz intensity, the measurement outcomes typically have a greater DR than those of conventional techniques. As an example, consider an optically dense medium with a very small power transmission coefficient of \(10^{-8}\): a sensitive THz-TDS system can still characterize this medium by measuring a THz field transmission coefficient of \(10^{-4}\).
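The field-versus-intensity scaling behind both the sensitivity and the dynamic-range arguments amounts to a square root, as the small calculation below illustrates.

```python
# Field vs intensity scaling used in Sections 2.4.1 and 2.4.3.
import math

power_transmission = 1e-8
field_transmission = math.sqrt(power_transmission)
print(field_transmission)            # 1e-4

power_reduction = 100.0
print(math.sqrt(power_reduction))    # the coherently detected (field) signal drops by only 10x
```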
#### 2.4.4 Comparison with Fourier transform infrared spectroscopy (FTIR)
Far infrared spectral properties of materials have long been studied for more than 100 years[7]. Until the 1980s, the most powerful tool for spectroscopic analysis in the far-IR regime with wavelength between 20 and 500 \(\upmu\)m was FTIR, which is based on a black body source (thermal emitter), a Michelson interferometer and a bolometer[17]. By contrast, THz-TDS was just proposed about 30 years ago. However, THz-TDS has become more and more useful in the past three decades.
Compared with FTIR, it has been recognized that the most important advantage of THz-TDS is the possibility of measuring not only the amplitude of the THz signal, but also its phase. This allows determining the complex refractive index of a sample, which can avoid the uncertainty caused by the Kramers-Kronig analysis.
On the other hand, based on THz-TDS, the THz signal is observed in the form of a time trace with sub-picosecond time resolution, and the coherent nature of THz detector dramatically reduces its minimum detectable power. Thus THz-TDS allows one to perform time-of-flight and time-resolved measurements with better sensitivity than FTIR.
In terms of bandwidth, FTIR is of course better than THz-TDS as it can be used from the visible to the far infrared domain. In general, THz-TDS has a better SNR than FTIR below 3 THz, whereas FTIR is preferable above 5 THz[45]. Therefore, before being widely replaced by THz-TDS, the traditional FTIR remains a mature and popular approach, which employs reliable, well-established and easy-to-use apparatuses.
## 3 Progress on THz-TDS
In this section, four developments of THz-TDS have been briefly introduced, including THz air-breakdown/biased coherent detection (THz-ABCD), THz-radiation-enhanced emission of fluorescence (THz-REEF), single-shot THz-TDS, and THz asynchronous optical sampling (THz-ASOPS).
### THz-ABCD
Detection of THz wave with laser-induced air plasma as not only THz wave emitter but also THz detector was initially reported in 2006[18]. The basic mechanism for using gaseous media to sense THz waves primarily lies in the THz-induced second harmonic generation through a third-order nonlinear process[18][46][47], as shown in Fig. 8.
The interaction between \(\omega\) pulse (probe) and THz wave in laser-induced air plasma results in \(2\omega\) pulse, which can be described in the following equation:
\[E_{2\omega}\propto\chi^{(3)}E_{\omega}E_{\omega}E_{\text{THz}}\,, \tag{6}\]
where \(E_{2\omega}\), \(E_{\omega}\), and \(E_{\text{THz}}\) are the electric field amplitudes of the \(2\omega\), \(\omega\), and THz waves, respectively, and \(\chi^{(3)}\) is the third-order susceptibility of air. Since \(E_{2\omega}\propto E_{\text{THz}}\), the intensity of the measured second harmonic is proportional to the intensity of the THz wave, i.e., \(I_{2\omega}\propto I_{\text{THz}}\). This technique has been named THz air-breakdown coherent detection (THz-ABCD).
In order to further reduce the probe pulse energy required for THz detection of this all-air based THz-TDS method, an external AC bias across the probe-induced plasma volume has been applied. Correspondingly, THz-ABCD changes its meaning from "THz air-breakdown coherent detection" to "THz air-biased coherent detection".
Since air is the only medium used for THz detection, THz-ABCD has unique features, such as the absence of phonon absorption and of significant phase-matching issues over the THz
Figure 8: Schematic diagram of a THz-ABCD setup. A filter is put after the plasma filament to block the residual probe laser. The detector is used to measure the time-resolved second harmonic signal induced by mixing the probe and the THz pulses.
band, which exist in EO crystals. Therefore, only the probe pulse duration limits the detection bandwidth of THz-ABCD, which is much broader than that of EOS. The useful spectral range of a typical THz-ABCD system continuously covers from 0.2 to over 30 THz[48].
### THz-REEF
The interaction between the THz wave and the femtosecond-laser-induced air plasma is of great importance to THz pulse detection, since it not only can result in second harmonic generation (as described in the previous section on THz-ABCD), but also can significantly enhance the fluorescence emission from the plasma, which has been demonstrated as an omnidirectional THz-TDS method[19][20], named THz-REEF. In the THz-REEF experiment, as shown in Fig. 9, a single-cycle THz pulse is collinearly focused to the focal region (air plasma) of the laser beam, and the fluorescence emission is detected by a combination of a monochromator and a photomultiplier tube (PMT).
Inside the plasma, the irradiation of the THz pulse leads to an increase of the plasma temperature through electron acceleration in the THz field, and thus the total kinetic energy of the electrons is increased. On the other hand, inside the plasma there exist many high-lying states of atoms and molecules formed via absorption of multiple laser photons[49]. Compared to the ground states, they are more easily ionized by inelastic collisions with the surrounding energetic electrons. Therefore, the energy transferred from the THz wave to the electrons increases the rate of electron-impact ionization of the highly excited particles, and the larger ion population leads to more fluorescence emission. As a result, the fluorescence emission is enhanced by the THz field.
In this manner, the THz waveform information is encoded into the change of the fluorescence at different time delays between the THz and laser pulses. The enhanced fluorescence \(\Delta I_{\mathrm{FL}}\) can be approximated as follows:
\[\Delta I_{\mathrm{FL}}\propto\int E_{\mathrm{THz}}^{2}(t)dt\, \tag{7}\]
which relates the THz intensity to the derivative of the enhanced fluorescence with respect to the time delay. Thus, the time-dependent fluorescence contains the information of the time-resolved THz intensity. Due to the high transparency and omnidirectional emission pattern of the (ultraviolet, UV) fluorescence, THz-REEF can be used to measure THz pulses at remote distances, e.g. 10 m[20], with minimal water vapor absorption and unlimited directionality for THz signal collection.
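A schematic numerical sketch of this relation is given below. It assumes a delay-resolved reading of Eq. (7), i.e. that the enhancement at probe delay \(t_{d}\) integrates the THz intensity arriving after the plasma is formed (this delay dependence is an assumption of the sketch, not stated explicitly in Eq. (7)); differentiating the simulated enhancement with respect to delay then recovers the time-resolved THz intensity.

```python
import numpy as np

dt = 10e-15
t = np.arange(-3e-12, 3e-12, dt)
E_thz = -np.gradient(np.exp(-(t / 0.3e-12) ** 2), dt)
E_thz /= np.abs(E_thz).max()
I_thz = E_thz ** 2

# Assumed delay-resolved form: enhancement at delay t_d collects the THz
# intensity arriving after t_d
dI_fl = np.array([np.sum(I_thz[t >= t_d]) * dt for t_d in t])

# Differentiating the enhancement with respect to delay recovers -I_THz(t_d)
recovered = -np.gradient(dI_fl, dt)
err = np.abs(recovered - I_thz).max() / I_thz.max()
print(f"max relative reconstruction error: {err:.3f}")   # limited by the finite time step
```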
### Single-shot THz-TDS
The conventional THz-TDS uses a mechanical delay line to vary the optical path between the pump and probe pulses[50][51], and the THz electric field is repetitively sampled for each sequential time delay. Therefore, the data acquisition rate of this temporal scanning is very limited, which clearly cannot meet the requirement of real-time THz-TDS measurements.
To increase the acquisition rate, one possible method is based on single-shot THz-TDS[21][22], whose most important feature is the capability of measuring a full THz temporal waveform with only one chirped probe pulse (without using a mechanical delay line). This significantly speeds up the acquisition rate.
As schematically illustrated in Fig. 10, a representative single-shot THz-TDS system is similar to the aforementioned EOS setup, except for the use of a chirped probe pulse and an optical spectrum analyzer (OSA). As for the probe pulse, it has been linearly frequency chirped (a linear ramp of frequency versus time) and time stretched to tens of picoseconds. Then, the chirped probe pulse and the THz pulse co-propagate in the EO crystal.
Inside the EO crystal, one can see from Fig. 11 (right) that the information of the THz temporal waveform is linearly encoded onto the different spectral components of the chirped probe pulse, and finally decoded by dispersing the
Fig. 10: Schematic diagram of a representative single-shot THz-TDS setup. The polarization of different wavelength components of the chirped probe pulse is rotated by different portions of THz electric field through the Pockels effect inside the EO crystal. More precisely, the degree and direction of the probe polarization rotation is proportional to the THz field strength and polarity. After the optical analyzer, this polarization modulation is converted to the amplitude variation on the optical spectrum analyzer (OSA).
Fig. 9: Schematic diagram of a representative THz-REEF setup.
probe beam in an OSA (Fig. 10).
The THz temporal waveform can be extracted by performing a subtraction between the probe spectra with and without THz pulse modulation (as background). The resulting differential spectral distribution reconstructs both amplitude and phase of the THz pulse.
### THz-ASOPS
A significant disadvantage of the conventional THz-TDS system is its low acquisition rate for THz transients with high spectral resolution. For example, the detection of a 1-ns-long THz temporal waveform (corresponding to a spectral resolution of 1 GHz) requires a mechanical delay-line travel of 15 cm (a \(2\times 15\) cm change of the optical path), and such stages are usually limited in scanning speed. Therefore, the total data acquisition time can be in the range of a few (or even tens of) minutes.
This tradeoff between the THz spectral resolution and data acquisition rate can be avoided by asynchronous optical sampling (ASOPS)[23], which is a THz-TDS technique introduced in the late 1980s without using a mechanical translation stage.
As shown in Fig. 12, THz-ASOPS[24, 25] is based on two separate femtosecond laser sources with constant repetition frequencies of \((f+\Delta f)\) and \(f\), respectively, close to each other due to the slight offset frequency \(\Delta f\). One laser serves as the pump to generate the THz radiation, and the other is the probe used for EOS of the THz waveform.
In such a scheme, the time delay between pulse pairs from the two lasers is repetitively scanned (linearly ramped) between zero and the inverse laser repetition rate (i.e. 1/\(f\)) at a scanning rate given by \(\Delta f\). That is, the available time window \(T_{\text{window}}\) is 1/\(f\) (thus the spectral resolution is \(f\), according to Eq. (2)), whereas the data acquisition rate equals \(\Delta f\).
Generally, a laser repetition rate of \(f=1\) GHz and a \(\Delta f\) in the kHz frequency range[26] (e.g. 10 kHz) are favorable for ASOPS, leading to a detected THz spectral resolution as high as 1 GHz with a data acquisition time of only 0.1 ms. This performance of ASOPS is impossible for traditional THz-TDS systems with mechanical delay stages.
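The timing arithmetic behind these numbers is summarised below; the equivalent time step follows from the delay slip \(1/f-1/(f+\Delta f)\) between successive pulse pairs.

```python
# ASOPS timing arithmetic for the example above: f = 1 GHz, delta_f = 10 kHz.
f = 1e9          # laser repetition rate (Hz)
df = 1e4         # repetition-rate offset (Hz)

T_window = 1.0 / f                 # available time window: 1 ns
resolution = f                     # spectral resolution: 1 GHz (Eq. (2))
acq_time = 1.0 / df                # time for one full scan: 0.1 ms
dt_eff = df / (f * (f + df))       # equivalent time step between pulse pairs, ~10 fs
print(T_window, resolution, acq_time, dt_eff)
```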
## 4 Conclusion
In summary, in the present article, we have reviewed the basic principles and experimental techniques of THz-TDS. Six representative schemes (EOS, PCS, THz-ABCD, THz-REEF, single-shot THz-TDS and THz-ASOPS) have been discussed, in terms of principles, experimental performances, characteristics, and relevant materials and devices. In addition, a brief comparison between THz-TDS and FTIR has also been made.
## Acknowledgment
Authors acknowledge the support of???.
|
2305.07814 | Cloud-RAIN: Point Cloud Analysis with Reflectional Invariance | The networks for point cloud tasks are expected to be invariant when the
point clouds are affinely transformed such as rotation and reflection. So far,
relative to the rotational invariance that has been attracting major research
attention in the past years, the reflection invariance is little addressed.
Notwithstanding, reflection symmetry can find itself in very common and
important scenarios, e.g., static reflection symmetry of structured streets,
dynamic reflection symmetry from bidirectional motion of moving objects (such
as pedestrians), and left- and right-hand traffic practices in different
countries. To the best of our knowledge, unfortunately, no reflection-invariant
network has been reported in point cloud analysis till now. To fill this gap,
we propose a framework by using quadratic neurons and PCA canonical
representation, referred to as Cloud-RAIN, to endow point \underline{Cloud}
models with \underline{R}eflection\underline{A}l \underline{IN}variance. We
prove a theorem to explain why Cloud-RAIN can enjoy reflection symmetry.
Furthermore, extensive experiments also corroborate the reflection property of
the proposed Cloud-RAIN and show that Cloud-RAIN is superior to data
augmentation. Our code is available at
https://github.com/YimingCuiCuiCui/Cloud-RAIN. | Yiming Cui, Lecheng Ruan, Hang-Cheng Dong, Qiang Li, Zhongming Wu, Tieyong Zeng, Feng-Lei Fan | 2023-05-13T01:57:39Z | http://arxiv.org/abs/2305.07814v1 | # Cloud-RAIN: Point Cloud Analysis with Reflectional Invariance
###### Abstract
The networks for point cloud tasks are expected to be invariant when the point clouds are affinely transformed such as rotation and reflection. So far, relative to the rotational invariance that has been attracting major research attention in the past years, the reflection invariance is little addressed. Notwithstanding, reflection symmetry can find itself in very common and important scenarios, _e.g._, static reflection symmetry of structured streets, dynamic reflection symmetry from bidirectional motion of moving objects (such as pedestrians), and left- and right-hand traffic practices in different countries. To the best of our knowledge, unfortunately, no reflection-invariant network has been reported in point cloud analysis till now. To fill this gap, we propose a framework by using quadratic neurons and PCA canonical representation, referred to as Cloud-RAIN, to endow point Cloud models with ReflectionAI Invariance. We prove a theorem to explain why Cloud-RAIN can enjoy reflection symmetry. Furthermore, extensive experiments also corroborate the reflection property of the proposed Cloud-RAIN and show that Cloud-RAIN is superior to data augmentation. Our code is available at [https://github.com/YimingCuiCuiCui/Cloud-RAIN](https://github.com/YimingCuiCuiCui/Cloud-RAIN).
## I Introduction
Recently, point clouds have been intensively studied due to their broad applications as a basic representation of 3D models in autonomous driving and robotics. Deep learning techniques are introduced into this field for either direct analysis [1, 2, 3, 4] or indirect analysis represented by PointNet [5] and its successors [6, 7]. As a point cloud is an unordered set, previous direct analysis models focus on the invariance to sample permutation. However, in addition to the invariance to point permutation, the networks are also supposed to infer consistently when the point clouds are affinely transformed such as translation, rotation, and reflection. While the bias caused by translation can be counteracted easily by centralization, past years have witnessed a plethora of works [8, 9, 10, 11] proposed to address the invariance to rotation and achieved the encouraging success, _e.g.,_ rotation data augmentation, spherical-related convolutions, explicit local rotation-invariant features, and canonical poses.
However, relative to the rotational symmetry, the reflection symmetry is little addressed. Notwithstanding, reflection symmetry can find itself in very common and important scenarios. Often, reflection symmetry appears due to the symmetric city planning. Figure 1(a) shows one such example in autonomous driving, where the driver's front view exhibits reflection symmetry in well-structured traffic scenes; Also, reflection symmetry is obtained by the direction of movement. Figure 1(b) manifests that pedestrians crossing the street to the left and to the right are symmetric in reflection. A devastating car accident may happen if a self-driving car only identifies the pedestrians walking along one direction and fails the opposite; In addition, reflectional symmetry can be given rise by the left- and right-hand traffic practices in different countries, as Figure 1(c) shows. Furthermore, we argue that reflection symmetry is more common than rotation symmetry in some important scenarios such as driving, where the environment around an autonomous vehicle is unlikely to rotate severely but likely to exhibit reflection symmetry. Therefore, reflectional invariance should be examined dedicatedly in a point cloud model.
However, to the best of our knowledge, no reflection-invariant network has been claimed in point cloud analysis till now. To deal with this issue, here we shift our attention to the quadratic neurons and networks. Recently, motivated by introducing neuronal diversity in deep learning, the so-called quadratic neuron [12] was designed by using a simplified quadratic function to replace the inner product in the
Fig. 1: In self-driving scenarios, front view of cars often exhibits reflection symmetry, which could include (a) static reflection symmetry of structured streets, (b) dynamic reflection symmetry from bidirectional motion of moving objects (such as pedestrians), and (c) left- and right-hand traffic practices in different countries (such as UK and US).
conventional neuron. Hereafter, we call a network made of quadratic neurons a quadratic network and a network made of conventional neurons a conventional network. _Why can quadratic networks address reflectional symmetry?_ Here, we prove that a quadratic network can approximate a reflectionally symmetric function well compared to conventional networks. The idea is that due to the invariance to the sign flipping of the input, the power term in the quadratic neuron can directly deal with the reflection of the input with respect to axis-aligned planes. Next, we combine quadratic networks with PCA to build a point Cloud method with ReflectionAI Invariance, referred to as Cloud-RAIN. While quadratic neurons deal with the reflectional symmetry due to the sign flipping, the PCA canonical transform derives canonical poses of a point cloud, which can transform the problem of reflection across arbitrary planes into a problem of sign flipping. Thus, Cloud-RAIN can tackle a general arbitrary-plane reflectional invariance problem. Our contributions are threefold:
* We identify reflection invariance as an essential yet overlooked concern in point cloud analysis.
* We prove a theorem showing that a quadratic network is powerful in approximating a reflectionally-invariant function. Then, we propose Cloud-RAIN that combines quadratic neurons and the PCA transform for point cloud applications.
* Empirically, Cloud-RAIN is invariant to reflection and much better than reflection data augmentation.
## II Related Works
**Deep Learning-based Point Cloud Analysis.** Currently, deep learning methods for point cloud analysis can be divided into two classes: indirect and direct. Indirect methods transform point clouds either to multi-view images or volumetric data followed by a 2D CNN or a 3D CNN, including VoxelNet [1], SPLATNet [13], and PCNN [4]. In contrast, direct methods represented by PointNet [5], PointNet++ [6], RSCNN [14], PointWeb [15], KPConv [16], PointConv [17], and so on [18, 19, 20, 21]. Direct methods are able to capture extrinsic features like 3D corners, edges, and local shape parts. Concurrently, graph-based methods build a graph by considering each point in a point cloud as a vertex and generating directed edges based on the vertices' neighborships. Then, graph networks perform feature learning in spatial or spectral domains. Representative graph models include DGCNN [22], LDGCNN [23], DeepGCNs[24], PointGCN [25], GAC-Net [26], and so on. Recently, Transformers [27] were more and more introduced in the point cloud tasks [28, 29, 30, 31]. One common concern regarding deep models is that these models are vulnerable to symmetric transforms when data augmentation is not applied.
**Rotation Symmetry.** Intensive investigations have been conducted on the rotation-invariant property. These studies are roughly divided into three categories: i) rotation-invariant operations [32, 33, 34, 9, 10], which designs rotation-invariant convolutions or attentions in a network to achieve invariance; ii) explicit local rotation-invariant features [36, 37, 38, 11, 39, 38], which handcrafts rotationally invariant features based on the intrinsic geometric relations of point clouds such as relative distances and angles; and iii) canonical poses [39, 40, 41], which transforms point cloud data to their PCA-based canonical poses that preserve shape information to achieve rotational invariance. We think that reflection symmetry is more important than rotation symmetry in some important applications such as driving, where buildings, views, and pedestrians around a normally-moving car are unlikely to encounter severe rotation but very likely to exhibit reflection symmetry. However, unfortunately, endowing a point cloud model with the reflection-invariant property was little studied.
**Polynomial Neural Networks.** The story of polynomial neurons begins with the Group Method of Data Handling (GMDH [42]), which gradually learns higher-order terms as a feature extractor. Furthermore, the so-called higher-order unit was proposed in [43, 44] by adding a nonlinear activation \(\sigma(\cdot)\) after GMDH: \(y=\sigma(Y(\mathbf{x}_{1},\cdots,\mathbf{x}_{n}))\). To balance the power of high-order units and parametric efficiency, Milenkoiv _et al._[45] only utilized linear and quadratic terms and proposed to use the annealing technique to find optimal parameters. Recently, higher-order, particularly quadratic units were intensively studied [46, 47, 48, 12] from the perspective of neuronal diversity. The neurons in [49, 50, 47, 51] are a special case of [52]. [47] excludes the power term, and [50] only has the power term, which renders an incomplete quadratic representation. This work intends to make the first attempt to introduce quadratic neurons into point cloud analysis to realize reflection invariance.
## III Power of Quadratic Networks in Reflectional Invariance
Here, we illustrate why quadratic neurons are suitable for point cloud tasks requiring reflection invariance. We provide a theorem that a quadratic network is powerful in approximating a symmetric function, e.g., using far fewer parameters than a conventional one. This suggests that for tasks requiring a model to be reflectionally symmetric, quadratic networks are favored over conventional ones, which makes the employment of quadratic networks in reflectional invariance well justified. The core idea of this theorem is that the power term in a quadratic neuron is straightforwardly reflection-invariant, while it is costly for a conventional network to learn power terms.
**Quadratic neurons**. Given an input vector \(\mathbf{f}=(f_{1},f_{2},\cdots,f_{n})\), a conventional neuron computes an inner product plus a bias \(b\) followed by a nonlinear activation \(\sigma\):
\[\sigma(\mathbf{f}^{\top}\mathbf{w}+b), \tag{1}\]
while a quadratic neuron computes two inner products and one power term and then feeds them into a nonlinear activation function. Mathematically, its inner-working is
\[\sigma\big{(}(\mathbf{f}^{\top}\mathbf{w}_{1}+b_{1})(\mathbf{f}^{\top}\mathbf{ w}_{2}+b_{2})+(\mathbf{f}\circ\mathbf{f})^{\top}\mathbf{w}_{3}+b_{3}\big{)}, \tag{2}\]
where \(\circ\) denotes the Hadamard product, and \(\mathbf{w}_{1}\), \(\mathbf{w}_{2}\), and \(\mathbf{w}_{3}\) are parameter vectors. Compared to the designs of quadratic neurons in [46, 53] with complexity \(\mathcal{O}(n^{2})\), Eq. 2 is much more compact with complexity \(\mathcal{O}(3n)\), scaling linearly with the number of features \(n\).
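To make Eq. (2) concrete, a minimal NumPy sketch is given below; the tanh activation and the random toy inputs are illustrative assumptions rather than part of the original design. It also checks numerically that the power term \((\mathbf{f}\circ\mathbf{f})^{\top}\mathbf{w}_{3}\) is unchanged when the signs of an arbitrary subset of input coordinates are flipped.

```python
import numpy as np

def quadratic_neuron(f, w1, b1, w2, b2, w3, b3, sigma=np.tanh):
    """Quadratic neuron of Eq. (2): two inner products plus a power term."""
    inner = (f @ w1 + b1) * (f @ w2 + b2)   # product of two linear responses
    power = (f * f) @ w3 + b3               # Hadamard power term, even in f
    return sigma(inner + power)

rng = np.random.default_rng(0)
n = 8
f = rng.normal(size=n)
w1, w2, w3 = rng.normal(size=(3, n))
b1, b2, b3 = rng.normal(size=3)
y = quadratic_neuron(f, w1, b1, w2, b2, w3, b3)

# The power term alone is unchanged when any subset of coordinates flips sign.
flip = rng.choice([-1.0, 1.0], size=n)
power_orig = (f * f) @ w3 + b3
power_flip = ((flip * f) ** 2) @ w3 + b3
print(np.isclose(power_orig, power_flip))   # True
```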
**Definition 1** (reflectionally invariant function).: _We call a function \(h(\mathbf{x})=h(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{d}):\big{(} \mathbb{R}^{N}\big{)}^{d}\rightarrow\mathbb{R}\) reflectionally invariant, if for any \(\pi\in\{-1,1\}^{d}\), \(h(\mathbf{x})\) satisfies_
\[h(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{d})=h(\pi_{1}\mathbf{x}_{1},\pi_{2}\mathbf{x}_{2},\cdots,\pi_{d}\mathbf{x}_{d}), \tag{3}\]
_where \(N\) is the number of samples, \(d\) is the number of features, \(\mathbf{x}_{i}\in\mathbb{R}^{N}\), and \(\prod_{i=1}^{d}\pi_{i}=-1\)._
When \(\prod_{i=1}^{d}\pi_{i}=-1\), \(\pi_{1}\mathbf{x}_{1},\pi_{2}\mathbf{x}_{2},\cdots,\pi_{d}\mathbf{x}_{d}\) is a reflection of \(\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{d}\); when \(\prod_{i=1}^{d}\pi_{i}=1\), it is a rotation. Maintaining rotation invariance over samples is also important for a model, but rotation invariance has been studied extensively in previous literature and is therefore not the focus of this paper.
**Proposition 1** (Proposition 2 in [54]).: _Given \(M>0\) and \(\epsilon\in(0,1)\), there is a ReLU network with two inputs that implements a function \(\widetilde{\times}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) so that a) for any inputs \(x,y\), if \(|x|\leq M\) and \(|y|\leq M\), then \(|\widetilde{\times}(x,y)-xy|\leq\epsilon\); b) if \(x=0\) or \(y=0\), then \(\widetilde{\times}(x,y)=0\); c) the number of computation units in this ReLU network is \(O(\ln(1/\epsilon))\)._
**Proposition 2** (Lemma 1 in [55]).: _There exists a one-hidden-layer quadratic ReLU network that can implement a mapping \(\widetilde{\times}:\mathbb{R}^{2}\rightarrow\mathbb{R}\), satisfying that i) \(\widetilde{\times}(x,y)=xy\) ; ii) \(\widetilde{\times}(x,y)\) has 2 quadratic neurons and accordingly 4 parameters._
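As a sanity check of Proposition 2, the sketch below realizes exact multiplication with two quadratic neurons of the form of Eq. (2) via the polarization identity \(xy=((x+y)/2)^{2}-((x-y)/2)^{2}\); this is our own illustrative construction and is not necessarily the parameter-minimal one used in [55].

```python
import numpy as np

def quad_neuron(f, w1, b1, w2, b2, w3, b3):
    # Quadratic neuron of Eq. (2) with a ReLU activation.
    return np.maximum((f @ w1 + b1) * (f @ w2 + b2) + (f * f) @ w3 + b3, 0.0)

def quad_product(x, y):
    """xy = ((x+y)/2)^2 - ((x-y)/2)^2; each square is one quadratic neuron.
    ReLU is transparent here because squares are non-negative."""
    f = np.array([x, y])
    plus  = quad_neuron(f, np.array([0.5, 0.5]), 0.0, np.array([0.5, 0.5]), 0.0,
                        np.zeros(2), 0.0)
    minus = quad_neuron(f, np.array([0.5, -0.5]), 0.0, np.array([0.5, -0.5]), 0.0,
                        np.zeros(2), 0.0)
    return plus - minus

print(quad_product(3.7, -2.1), 3.7 * -2.1)   # both -7.77 up to float error
```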
**Theorem 1**.: _Let \(f:\Omega\rightarrow\mathbb{R}\) be a continuously differentiable and totally symmetric function, where \(\Omega\) is a compact subset of \(\left(\mathbb{R}^{N}\right)^{d}\). Let \(\delta>0\); then there exists \(\tilde{\mathbf{F}}:\left(\mathbb{R}^{N}\right)^{d}\rightarrow\mathbb{R}\), such that for any \(X\in\Omega\), we have_
\[\sup_{X}\left|f(X)-\tilde{\mathbf{F}}(X)\right|\leq\delta, \tag{4}\]
_where \(\tilde{\mathbf{F}}(X)=\sum_{l=1}^{L}c_{l}\left(\widetilde{\times}_{i=1}^{N}\widetilde{\times}_{\alpha=1}^{d}X_{i,\alpha}^{\phi_{i,\alpha}(l)}\right)\), \(L\) is a function of \(\delta\), \(c_{l}\) is a coefficient for each \(l\), and \(\phi_{i,\alpha}(l)\) is a power function depending on \(i\) and \(\alpha\). To represent \(\tilde{\mathbf{F}}\) in the sense of the max norm, a conventional network needs \(NdLO(\ln(1/\delta))\) parameters, while a quadratic network only needs \(4NdL\) parameters._
**The sketch of proof:** We first show a polynomial approximation scheme to the symmetric function \(f\), which is \(F(\mathbf{x})\). Then, we construct the quadratic and the conventional networks to represent \(F(\mathbf{x})\), respectively, with a comparison of the approximation error and the number of parameters.
Proof.: **Step 1**. We first consider the case of \(N=1\). According to the Stone-Weierstrass theorem, a polynomial can approximate any continuous function defined on a closed interval. Similarly, based on the fundamental theorem of symmetric polynomials [56], \(f\) can be naturally approximated by a polynomial of elementary symmetric polynomials made of power terms. Mathematically, for any symmetric polynomial \(P\), we have \(P(X)=Q\left(e_{1}(X),\ldots,e_{d}(X)\right)\), where \(Q\) is some polynomial, and \(e_{k}\) is as the following:
\[e_{k}(X)=\sum_{1\leq j_{1}<j_{2}<\cdots<j_{k}\leq d}x_{j_{1}}^{2}x_{j_{2}}^{2} \cdots x_{j_{k}}^{2}. \tag{5}\]
To extend the result to the case of \(N>1\), the monomial takes the form
\[F(X)=\prod_{i=1}^{d}\prod_{\alpha=1}^{N}X_{i,\alpha}^{\phi_{i,\alpha}}, \tag{6}\]
where \(\phi_{i,\alpha}\) is an even integer, which ensures that this monomial is a reflectionally invariant function. \(\phi_{i,\alpha}\) is determined by the target function and the index \(\alpha,i\). The approximation using a symmetric polynomial can be a linear combination of \(L\) symmetrized monomials.
\[\mathbf{F}(X)=\sum_{l=1}^{L}c_{l}F^{(l)}(X)=\sum_{l=1}^{L}c_{l}\left(\prod_{i=1}^{d}\prod_{\alpha=1}^{N}X_{i,\alpha}^{\phi_{i,\alpha}(l)}\right), \tag{7}\]
where \(L\) is a function of \(\delta\), such that
\[\sup_{X}\left|f(X)-\mathbf{F}(X)\right|\leq\delta/2. \tag{8}\]
**Step 2**. Now, let us use a conventional and a quadratic neural network to approximate \(\mathbf{F}(X)\).
Approximation by a conventional network: We can approximate this product by a conventional neural network with the help of Proposition 1. Let \(\widetilde{\times}\) be the approximate multiplication. Applying \(\widetilde{\times}\) in a compositional manner,
\[\sup_{X}\left|\widetilde{\times}_{i=1}^{N}\widetilde{\times}_{\alpha=1}^{d}X_{i,\alpha}^{\phi_{i,\alpha}}-\prod_{i=1}^{N}\prod_{\alpha=1}^{d}X_{i,\alpha}^{\phi_{i,\alpha}}\right|\leq\delta/2. \tag{9}\]
Let \(\tilde{\mathbf{F}}=\sum_{l=1}^{L}c_{l}\left(\widetilde{\times}_{i=1}^{N} \widetilde{\times}_{\alpha=1}^{d}X_{i,\alpha}^{\phi_{i,\alpha}(l)}\right)\), it can also approximate \(f(X)\) based on the triangle inequality:
\[\sup_{X}\left|f(X)-\tilde{\mathbf{F}}(X)\right|\leq\delta. \tag{10}\]
The number of parameters used to construct \(\tilde{\mathbf{F}}\) is \(NdLO(\ln(1/\delta))\).
Approximation by a quadratic network: A quadratic neuron can represent a product function exactly, without incurring the extra approximation error of a conventional network. To express \(\tilde{\mathbf{F}}=\sum_{l=1}^{L}c_{l}\left(\widetilde{\times}_{i=1}^{N}\widetilde{\times}_{\alpha=1}^{d}X_{i,\alpha}^{\phi_{i,\alpha}(l)}\right)\), the number of parameters used in a quadratic network is \(4NdL\).
**Remark 1**.: The above constructive theorem tells us that a conventional network struggles to learn reflectionally invariant functions, _i.e._, it requires many more parameters, whereas this is much easier for quadratic networks because quadratic neurons can readily represent a power term and a multiplication operation. Our main theorem relies heavily on [57], but our highlight is to permute the dimensions, while their result permutes samples. Moreover, we reimplement this scheme with quadratic neurons to show their superiority in representing reflectional symmetry. One may ask whether other constructions exist. Indeed, one can also partition the domain into a lattice with a small grid size to approximate \(f(X)\) instead of using the linear combination. Even so, the quadratic network retains its advantage because the power term in the quadratic neuron is inherently suited to reflectional invariance.
## IV Cloud-RAIN
Inspired by the above encouraging theoretical result, here we propose a point Cloud method that enjoys ReflectionAl INvariance (Cloud-RAIN), whose spotlight is the incorporation of quadratic neurons. In addition, we equip Cloud-RAIN with the PCA canonical transform, because the quadratic neuron alone can only handle reflection across axis-aligned planes. When combined with the PCA canonical transform, quadratic neurons can deal with reflection with respect to arbitrary planes.
### _Quadratic Neurons_
Given a set of points \(\mathbf{X}\in\mathbb{R}^{N\times C}\), where \(N\) is the number of points and \(C\) is the number of channels per point. Usually, \(C=3+c\), where the first three channels denote the \((x,y,z)\) position in geometric space and \(c\) is the number of additional features (_e.g._, \(c=3\) for points with color only; \(c=6\) for those with color and normals). Existing point cloud analysis methods usually focus on how to engineer \(\mathbf{w}\) or \(\mathbf{f}\) to fulfill the needs and characteristics of the problem. For example, RS-CNN [14] restricts \(\mathbf{w}\) to learn the relations between \(p_{i}\) and \(p_{j}\) in geometric space and extends a regular rigid CNN to the irregular configuration. DGCNN [22] proposes EdgeConv, which operates on the edges between the central point and its neighbors and treats \(\mathbf{f}\) as the edge features. Point Transformer [28] uses a vector self-attention operator to aggregate local features and the subtraction relation to generate \(\mathbf{w}\). Despite these arts, they all use conventional neurons which integrate \(\mathbf{w}\) and \(\mathbf{f}\) in a linear fashion, as Eq. (1) shows.
Here, the proposed Cloud-RAIN replaces the simple linear interaction between \(\mathbf{f}\) and \(\mathbf{w}\) with a quadratic one by using quadratic neurons, as sketched below. Theorem 1 shows that quadratic neurons are powerful in learning a reflectionally invariant function.
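A minimal PyTorch sketch of such a drop-in quadratic aggregation layer is shown below; the module name `QuadraticLinear` and the initialization scheme are our own illustrative assumptions and do not reproduce the authors' released implementation.

```python
import torch
import torch.nn as nn

class QuadraticLinear(nn.Module):
    """Drop-in replacement for nn.Linear implementing Eq. (2):
    (W1 f + b1) * (W2 f + b2) + W3 (f ∘ f) + b3, applied feature-wise."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.lin1 = nn.Linear(in_features, out_features)
        self.lin2 = nn.Linear(in_features, out_features)
        self.lin3 = nn.Linear(in_features, out_features)
        # Start close to a conventional neuron: second branch ≈ 1, power branch ≈ 0.
        nn.init.zeros_(self.lin2.weight); nn.init.ones_(self.lin2.bias)
        nn.init.zeros_(self.lin3.weight); nn.init.zeros_(self.lin3.bias)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        return self.lin1(f) * self.lin2(f) + self.lin3(f * f)

# Usage: swap a per-point layer of a backbone, e.g. nn.Linear(6, 64) -> QuadraticLinear(6, 64).
x = torch.randn(32, 4096, 6)          # batch of point clouds with 6 channels
layer = QuadraticLinear(6, 64)
print(layer(x).shape)                  # torch.Size([32, 4096, 64])
```

Initializing the second branch near one and the power branch near zero makes the layer behave like a conventional linear layer at the start of training, which tends to stabilize optimization.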
### _PCA Canonical Transform_
Note that \((\mathbf{x}_{1},\mathbf{x}_{2},\cdots,\mathbf{x}_{d})\rightarrow(\pi_{1}\mathbf{x}_{1},\pi_{2}\mathbf{x}_{2},\cdots,\pi_{d}\mathbf{x}_{d})\) is a special case of reflection symmetry in which the planes of symmetry align with the axes. Here, we introduce the PCA canonical transform to empower the quadratic aggregation to deal with reflection across an arbitrary plane. We first show how the PCA transform is performed on the point cloud to obtain its canonical representation. Without loss of generality, let the point cloud be \(\mathbf{X}\in\mathbb{R}^{N\times 3}\), \(\mathbf{X}_{i}\in\mathbb{R}^{3\times 1}\) be the \(i\)-th point of \(\mathbf{X}\), and \(\overline{\mathbf{X}}=(\sum_{i=1}^{N}\mathbf{X}_{i})/N\in\mathbb{R}^{3\times 1}\) be the geometric center of \(\mathbf{X}\). We perform the PCA decomposition of \(\mathbf{X}\) as follows:
\[\frac{\sum_{i=1}^{N}\left(\mathbf{X}_{i}-\overline{\mathbf{X}}\right)\left( \mathbf{X}_{i}-\overline{\mathbf{X}}\right)^{T}}{N}=\mathbf{V}\mathbf{\Lambda} \mathbf{V}^{\top}, \tag{11}\]
where \(\mathbf{V}\in\mathbb{R}^{3\times 3}\) is the matrix composed of eigenvectors \((\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3})\) and \(\mathbf{\Lambda}=\operatorname{diag}\left(\lambda_{1},\lambda_{2},\lambda_{3}\right)\) contains the corresponding eigenvalues. By multiplying the point cloud with the eigenvector matrix, we obtain the canonical representation of the point cloud \(\mathbf{X}\) as \(\mathbf{X}_{\text{can}}=\mathbf{X}\mathbf{V}\).
The reflection-invariant property of \(\mathbf{X}_{\text{can}}\) is shown as follows. Applying a reflection matrix \(\mathbf{F}\in\mathbb{R}^{3\times 3}\) to the point cloud \(\mathbf{X}\) yields the reflected point cloud \(\mathbf{X}\mathbf{F}^{\top}\) (_i.e._, each point becomes \(\mathbf{F}\mathbf{X}_{i}\)). Performing the PCA decomposition of Eq. (11) on the reflected points gives
\[\frac{\sum\left(\mathbf{F}\mathbf{X}_{i}-\mathbf{F}\overline{ \mathbf{X}}\right)\left(\mathbf{F}\mathbf{X}_{i}-\mathbf{F}\overline{\mathbf{X }}\right)^{\top}}{N} \tag{12}\] \[= \mathbf{F}\left(\frac{\sum\left(\mathbf{X}_{i}-\overline{\mathbf{ X}}\right)\left(\mathbf{X}_{i}-\overline{\mathbf{X}}\right)^{\top}}{N}\right) \mathbf{F}^{\top}\] \[= (\mathbf{F}\mathbf{V})\mathbf{\Lambda}(\mathbf{F}\mathbf{V})^{ \top},\]
where \(\mathbf{F}\mathbf{V}\) is the eigenvector matrix of the reflected point cloud \(\mathbf{X}\mathbf{F}^{\top}\). Therefore, the canonical representation of \(\mathbf{X}\mathbf{F}^{\top}\) is \(\left(\mathbf{X}\mathbf{F}^{\top}\right)_{\text{can}}=\mathbf{X}\mathbf{F}^{\top}\cdot\mathbf{F}\mathbf{V}=\mathbf{X}\mathbf{V}\), since \(\mathbf{F}^{\top}\mathbf{F}=\mathbf{I}\) for a reflection matrix. Thus, the effect of the reflection is counteracted in \(\mathbf{X}_{\text{can}}\).
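The derivation above can be verified numerically; the following NumPy sketch, in which the reflection plane is chosen at random (an illustrative assumption), confirms that the canonical representations of a cloud and its reflection agree up to per-axis sign flips, which is exactly the sign ambiguity discussed next.

```python
import numpy as np

def canonicalize(X):
    """PCA canonical transform of Eq. (11): X_can = X V."""
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / X.shape[0]
    _, V = np.linalg.eigh(cov)          # columns are eigenvectors
    return X @ V

rng = np.random.default_rng(1)
X = rng.normal(size=(2048, 3)) * np.array([3.0, 2.0, 1.0])   # anisotropic toy cloud

# Householder reflection across a random plane through the origin.
n = rng.normal(size=3); n /= np.linalg.norm(n)
F = np.eye(3) - 2.0 * np.outer(n, n)

A = canonicalize(X)
B = canonicalize(X @ F.T)
# Identical up to a sign flip of each column (the PCA sign ambiguity).
signs = np.sign((A * B).sum(axis=0))
print(np.allclose(A, B * signs))        # True
```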
Although employing the canonical representation realizes reflection invariance with respect to arbitrary planes, it brings up the issue of sign ambiguity. Given an eigenvector \(\mathbf{v}\), both \(+\mathbf{v}\) and \(-\mathbf{v}\) satisfy the PCA decomposition, and we do not know which of the two is selected. Because \(\mathbf{V}=[\mathbf{v}_{1},\mathbf{v}_{2},\mathbf{v}_{3}]\), there are eight possible combinations (\([\pm\mathbf{v}_{1},\pm\mathbf{v}_{2},\pm\mathbf{v}_{3}]\)) that cause sign ambiguity in the canonical representation \(\mathbf{X}_{\text{can}}\). This is an intrinsic problem of using the canonical representation. However, quadratic aggregation can overcome the sign ambiguity thanks to its power term, per our earlier analyses.
**Remark 2** (Theoretical guarantee). The canonical representation alone can deal with reflection across an arbitrary plane but suffers from sign ambiguity, while quadratic aggregation alone is invariant to sign flipping but only works for reflection across axis-aligned planes. It is the combination of the canonical representation and quadratic aggregation that realizes invariance to reflection across arbitrary planes. The advantage of this combination is that it carries a theoretical guarantee of reflection invariance, which is confirmed by our experiments below.
## V Experiments
To validate the reflection invariance of the proposed Cloud-RAIN, we conduct extensive experiments on point cloud semantic segmentation with multiple recent networks as a backbone. Other supplementary results such as model complexity analysis are put into the supplementary materials.
### _Experimental Setups_
We mainly use S3DIS [61] and ScanNet [62], two mainstream point cloud semantic segmentation benchmarks, to evaluate the proposed method. We adopt the widely-used mean accuracy (mAcc) and mean IOU (mIOU) as evaluation metrics. For S3DIS, following the protocol of previous work [5, 6, 22, 59, 63, 64], we train for 100 epochs on 4 Tesla V100 GPUs with a batch size of 32. Following standard practice, the raw input points are first grid-sampled to generate \(4,096\) points. Unless otherwise specified, we use scale and jitter as data augmentation. For ScanNet, we train for 100 epochs with the weight decay and batch size set to 0.1 and 32,
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Dataset & \multicolumn{2}{c|}{S3DIS} & \multicolumn{2}{c}{ScanNet} \\ \hline Methods & mAcc & mIOU & mAcc & mIOU \\ \hline PointNet[5] & 53.1 & 45.0 & 44.3 & 41.9 \\ PointNet[5]+Ours & 56.5 & 48.1 & 45.6 & 43.7 \\ PointNet++[6] & 62.0 & 53.9 & 53.4 & 54.4 \\ PointNet++[6]+Ours & 64.4 & 56.0 & 57.3 & 57.6 \\ DGCNN[22] & 57.0 & 49.3 & 48.0 & 46.7 \\ DGCNN[22]+Ours & 61.9 & 50.6 & 51.0 & 49.8 \\ LDGCNN[23] & 59.1 & 49.9 & 50.6 & 49.9 \\ LDGCNN[23]+Ours & 61.4 & 50.6 & 51.6 & 51.8 \\ GSNet[58] & 52.7 & 44.5 & 43.7 & 45.2 \\ GSNet[58]+Ours & 54.5 & 46.2 & 44.9 & 46.4 \\ ShellNet[59] & 58.3 & 49.5 & 49.7 & 49.0 \\ ShellNet[59]+Ours & 61.2 & 50.4 & 50.8 & 51.7 \\ PointMLP[20] & 80.3 & 70.1 & 68.9 & 69.3 \\ PointMLP[20] + Ours & 81.5 & 70.9 & 70.1 & 70.5 \\ PointMixer[60] & 80.5 & 70.7 & 69.7 & 70.6 \\ PointMixer[60] + Ours & 81.7 & 71.8 & 70.9 & 71.8 \\ \hline \end{tabular}
\end{table} TABLE I: Experiments on S3DIS and ScanNet.
respectively. The number of input points is set to \(8,192\) by sampling. Except for random jitter, the data augmentation for ScanNet is the same as that for S3DIS. We re-implement the original models for a fair comparison and only replace the linear aggregation with the quadratic one, without changing any other element of the models.
### _Segmentation Results_
We report the basic segmentation results in Table I. All compared models are either classic models or recently published in flagship venues. The spotlight is that our proposed Cloud-RAIN universally boosts the existing models' performance by a reasonably large margin. In detail, integrated with Cloud-RAIN, the current models' mAcc and mIOU scores are improved by at least \(1.8\%\) and \(0.7\%\) on the S3DIS benchmark, respectively. For a simple model like PointNet, Cloud-RAIN boosts mAcc and mIOU by \(3.4\%\) and \(3.1\%\). On the ScanNet benchmark, the improvement is not on par with that on S3DIS; however, our method still escalates the mAcc and mIOU scores by at least \(1.0\%\) relative to the existing methods. We think that if normal features on the ScanNet benchmark were considered, the improvement could be more significant. Although reflection symmetry is the primary goal of this manuscript, we note that Cloud-RAIN is also an effective drop-in replacement to escalate the performance of the original models.
### _Reflection Invariance Analysis_
Now, we provide detailed experiments to analyze the reflection-invariance properties of our proposed Cloud-RAIN. In the first part, we concentrate on reflections across axis-aligned planes that are very common in realistic scenarios such as symmetric street views. No PCA transform needs to be used because quadratic aggregation alone can deal with such reflection operations. In the second part, reflection across arbitrary planes is tested.
**Reflection invariance (axis-aligned planes).** To analyze the effect of data reflection, we select PointNet, DGCNN, PointMLP, and PointMixer as examples and conduct experiments on the S3DIS benchmark. In the experiments, we flip the point clouds along the \(x\), \(y\), \(z\), and \(xyz\) axes, respectively, and test the segmentation performance of the well-trained models with the correspondingly reflected inputs. During training, reflection augmentation is not applied, either for the original models or for those integrated with our proposed method. The results are listed in Table II.
As seen in Table II, our proposed Cloud-RAIN maintains its performance on reflected inputs thanks to the power terms. This is mathematically guaranteed because power terms return the same value for flipped inputs. In contrast, the existing methods cannot withstand input reflection, as evidenced by the dramatic performance drops. The drop is especially devastating when the reflection is applied to the \(z\) axis. This might be because the dynamic range of positions along the \(z\) axis is far more extensive than that along the \(x\) and \(y\) axes, so flipping the \(z\) axis is more destructive to the model.
Figure 2 shows semantic segmentation results under different reflection settings. For better visualization, we view the results from the same angle regardless of the reflection. An interesting finding from Figure 2 is that when the point clouds are reflected along the \(z\) axis, the original model struggles to distinguish the ceiling and floor categories: many ceiling points are misclassified as floor, indicating that the model has learned a reflectionally asymmetric mapping.
**Reflection invariance (arbitrary planes).** We conduct experiments with PointNet, DGCNN, PointMLP, and PointMixer on the S3DIS benchmark to study reflection across arbitrary planes. We randomly generate reflection planes, across which we flip the point clouds. To contrast with the effectiveness of the proposed Cloud-RAIN, we apply data augmentation during the training process, where the training data are flipped along
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{\(x\) axis} & \multicolumn{2}{c|}{\(y\) axis} & \multicolumn{2}{c|}{\(z\) axis} & \multicolumn{2}{c}{\(xyz\) axis} \\ \cline{2-9} & \(\Delta\)mAcc & \(\Delta\)mIOU & \(\Delta\)mAcc & \(\Delta\)mIOU & \(\Delta\)mAcc & \(\Delta\)mIOU & \(\Delta\)mAcc & \(\Delta\)mIOU \\ \hline PointNet [5] & \(\downarrow 3.5\%\) & \(\downarrow 4.1\%\) & \(\downarrow 2.6\%\) & \(\downarrow 3.1\%\) & \(\downarrow 45.0\%\) & \(\downarrow 43.8\%\) & \(\downarrow 45.8\%\) & \(\downarrow 44.2\%\) \\ PointNet [5] + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline DGCNN [22] & \(\downarrow 3.8\%\) & \(\downarrow 4.4\%\) & \(\downarrow 3.5\%\) & \(\downarrow 3.6\%\) & \(\downarrow 44.3\%\) & \(\downarrow 49.2\%\) & \(\downarrow 45.3\%\) & \(\downarrow 49.3\%\) \\ DGCNN [22] + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline PointMLP [20] & \(\downarrow 4.3\%\) & \(\downarrow 5.1\%\) & \(\downarrow 3.9\%\) & \(\downarrow 4.1\%\) & \(\downarrow 64.7\%\) & \(\downarrow 68.9\%\) & \(\downarrow 66.1\%\) & \(\downarrow 69.2\%\) \\ PointMLP [20] + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline PointMixer [60] & \(\downarrow 4.9\%\) & \(\downarrow 5.7\%\) & \(\downarrow 4.4\%\) & \(\downarrow 4.9\%\) & \(\downarrow 65.1\%\) & \(\downarrow 69.4\%\) & \(\downarrow 66.5\%\) & \(\downarrow 69.9\%\) \\ PointMixer [60] + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE II: Performance drops on different reflection operations.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline Model & \(\Delta\)mAcc & \(\Delta\)mIOU \\ \hline PointNet & \(\downarrow 46.3\%(\pm 0.52\%)\) & \(\downarrow 42.7(\pm 0.32\%)\) \\ PointNet + Aug & \(\downarrow 2.6\%(\pm 0.46\%)\) & \(\downarrow 2.4\%(\pm 0.28\%)\) \\ PointNet + PCA & \(\downarrow 1.7\%(\pm 0.38\%)\) & \(\downarrow 1.9\%(\pm 0.32\%)\) \\ PointNet + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline DGCNN & \(\downarrow 45.9\%(\pm 0.22\%)\) & \(\downarrow 40.4\%(\pm 0.18\%)\) \\ DGCNN + Aug & \(\downarrow 8.0\%(\pm 0.28\%)\) & \(\downarrow 8.5\%(\pm 0.32\%)\) \\ DGCNN + PCA & \(\downarrow 6.8\%(\pm 0.22\%)\) & \(\downarrow 7.7\%(\pm 0.26\%)\) \\ DGCNN + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline PointMLP & \(\downarrow 66.9\%(\pm 0.22\%)\) & \(\downarrow 69.8\%(\pm 0.16\%)\) \\ PointMLP + Aug & \(\downarrow 10.2\%(\pm 0.32\%)\) & \(\downarrow 11.3\%(\pm 0.38\%)\) \\ PointMLP + PCA & \(\downarrow 7.4\%(\pm 0.20\%)\) & \(\downarrow 8.6\%(\pm 0.22\%)\) \\ PointMLP + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline PointMixer & \(\downarrow 67.1\%(\pm 0.38\%)\) & \(\downarrow 70.0\%(\pm 0.42\%)\) \\ PointMixer + Aug & \(\downarrow 11.0\%(\pm 0.30\%)\) & \(\downarrow 11.9\%(\pm 0.34\%)\) \\ PointMixer + PCA & \(\downarrow 8.6\%(\pm 0.26\%)\) & \(\downarrow 9.4\%(\pm 0.28\%)\) \\ PointMixer + Ours & \(\downarrow 0\%\) & \(\downarrow 0\%\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: Performance of existing models drops dramatically for reflection across arbitrary planes.
randomly generated planes. In the inference stage, we generate a plane randomly and flip the point clouds across it. For reliability, we run \(5\) experiments and report the mean and standard deviation. We also train the original models aided by the PCA transform. All comparative results are listed in Table III.
From Table III, we notice that without reflection data augmentation, all models suffer a severe performance loss in both mAcc and mIOU. Even when the models are trained with randomly generated reflection augmentation, PointNet is still subjected to at least a \(2.0\%\) drop on both evaluation metrics, while DGCNN's mAcc and mIOU scores drop by at least \(8.0\%\). What is favorable is that the performance of our models is not compromised at all by randomly generated reflection planes, which agrees with our theoretical analysis that Cloud-RAIN carries a theoretical guarantee of reflection invariance. Furthermore, even aided by the PCA transform, the existing models still suffer performance drops, which means they cannot overcome the sign ambiguity introduced by PCA. This confirms that quadratic aggregation is necessary for reflectional symmetry over arbitrary planes.
We visualize and compare a representative example from S3DIS under an arbitrary-plane reflection in Figure 3. We sample \(4,096\) points from the point clouds of each room and reflect the sampled points across the same randomly generated plane. We then merge the segmentation results back into the original room. From Figure 3, we notice that when the test data are reflected across an arbitrary plane, the models trained without data augmentation cannot even accurately distinguish categories such as walls and ceilings. The model trained with reflection data augmentation is not severely affected by the arbitrary reflection at inference time, yet it still exhibits structural distortion in distinguishing boundaries compared with our proposed method, as highlighted in the red rectangle.
## VI Conclusion
In this manuscript, we have identified reflection invariance, alongside rotation invariance, as a practical concern in point cloud applications. We have then derived a theorem suggesting that quadratic neurons can learn a reflectionally symmetric function much more easily than conventional neurons. Next, we have introduced quadratic neurons and the PCA canonical transform to prototype a reflectionally invariant model. Lastly, extensive comparison experiments show that embedding quadratic aggregation into existing models not only strengthens reflection invariance, as our theoretical analyses illustrate, but also improves segmentation performance. In the future, more effort can be invested in exploring polynomial aggregation to ultimately unify rotational and reflectional invariance.
|
2302.12500 | Preparing random state for quantum financing with quantum walks | In recent years, there has been an emerging trend of combining two
innovations in computer science and physics to achieve better computation
capability. Exploring the potential of quantum computation to achieve highly
efficient performance in various tasks is a vital development in engineering
and a valuable question in sciences, as it has a significant potential to
provide exponential speedups for technologically complex problems that are
specifically advantageous to quantum computers. However, one key issue in
unleashing this potential is constructing an efficient approach to load
classical data into quantum states that can be executed by quantum computers or
quantum simulators on classical hardware. Therefore, the split-step quantum
walks (SSQW) algorithm was proposed to address this limitation. We facilitate
SSQW to design parameterized quantum circuits (PQC) that can generate
probability distributions and optimize the parameters to achieve the desired
distribution using a variational solver. A practical example of implementing
SSQW using Qiskit has been released as open-source software. Showing its
potential as a promising method for generating desired probability amplitude
distributions highlights the potential application of SSQW in option pricing
through quantum simulation. | Yen-Jui Chang, Wei-Ting Wang, Hao-Yuan Chen, Shih-Wei Liao, Ching-Ray Chang | 2023-02-24T08:01:35Z | http://arxiv.org/abs/2302.12500v2 | # Preparing random state for quantum financing with quantum walks
###### Abstract
In recent years, there has been an emerging trend of combining two innovations in computer science and physics to achieve better computation capability. Exploring the potential of quantum computation to achieve highly efficient performance in various tasks is a vital development in engineering and a valuable question in sciences, as it has a significant potential to provide exponential speedups for technologically complex problems that are specifically advantageous to quantum computers. However, one key issue in unleashing this potential is constructing an efficient approach to load classical data into quantum states that can be executed by quantum computers or quantum simulators on classical hardware. Therefore, the split-step quantum walks (SSQW) algorithm was proposed to address this limitation. We facilitate SSQW to design parameterized quantum circuits (PQC) that can generate probability distributions and optimize the parameters to achieve the desired distribution using a variational solver. A practical example of implementing SSQW using Qiskit has been released as open-source software. Showing its potential as a promising method for generating desired probability amplitude distributions highlights the potential application of SSQW in option pricing through quantum simulation.
Ching-Ray Chang: [email protected]
## 1 Introduction
In quantum mechanics, particles have a wave-like nature, meaning their position and momentum cannot be known simultaneously with arbitrary precision. This uncertainty is a consequence of the wave-like nature of quantum particles and is known as Heisenberg's uncertainty principle [1]. The wave-like nature also allows particles to exist in a superposition of states, where a particle occupies multiple states simultaneously, and it leads to quantum interference, where the probability of a particle being in a certain state is determined by the sum over all possible paths it could take to reach that state. It further gives rise to quantum entanglement, where the properties of two or more particles are correlated in such a way that the state of one particle cannot be described independently of the others; this produces correlations that cannot be explained by classical physics and are often considered random or unpredictable. In short, the wave-like nature of quantum particles is closely related to the randomness of quantum dynamics: it gives rise to the uncertainty principle, wave-particle duality, quantum interference, superposition of states, and entanglement, all of which contribute to the unpredictability and randomness of quantum systems.
Recently, the randomness of quantum systems has been demonstrated [2] by showing that the distribution of generated states is close to Haar-random. Haar-random unitaries [3, 4] are particularly important in the field of quantum computing, as they can be used
to generate arbitrary quantum states, implement quantum gates, and perform quantum walks, which enables applications of quantum computation in a much wider context. Quantum computers utilize quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data. Unlike classical computers, which encode data into bits that are either 0 or 1, quantum computers use quantum bits (qubits) that, owing to the unique properties of quantum mechanics, can exist in multiple states simultaneously and be strongly correlated with one another. This makes it possible for a quantum computer to perform certain operations, such as factorization [5] and simulation [6, 7], much more quickly than a classical computer. Achieving quantum advantage is considered an important goal in the field of quantum computing. The three steps of realizing quantum advantage, namely loading the quantum state, quantum algorithm computation, and result measurement, are shown in Fig. 1.
This gives quantum computers the potential to solve some challenging problems considerably faster than classical computers using the most efficient known quantum algorithms [8; 9; 10; 11; 12; 13], provided that the data can be efficiently loaded into a quantum state. Recently, there has been an increased focus on utilizing quantum computers to generate specific probability distributions from the quantum nature of randomness. The ability to generate probability distributions in a quantum circuit has many conceivable applications. One area where this has been used is finance [14; 15; 16], where probability amplitudes are assigned to a quantum system's basis states. With \(n\) qubits, a system has \(2^{n}\) basis states. The generator function is a probability amplitude generator, which can be tuned to achieve the desired probability amplitude distribution. The generator must assign amplitudes to these states in a way that closely resembles a targeted probability distribution, creating a quantum superposition state
\[\ket{\psi}=\sum_{i}\sqrt{p_{i}}\ket{i} \tag{1}\]
where \(\ket{i}\) are orthonormal quantum states. Efficient methods for generating probability distributions in quantum circuits are an important study area. Additionally, generating probability distributions is also useful in creating initial states for Hamiltonian simulation, simulating a wave function's time evolution.
An ineffective and slow data loading method could diminish the potential impact of quantum computing [17]. Several methods for preparing a generic quantum state have been developed [18; 19; 20]. Quantum generative adversarial networks (qGAN) have been demonstrated to load distributions [19]. The qGAN utilizes a combination of a quantum generator and a classical discriminator to learn the probability distribution of classical training data. The quantum generator, a parametrized quantum channel, is trained to convert an \(n\)-qubit input state \(\ket{\psi}\) into an \(n\)-qubit output state. One of the earliest schemes for generating probability distributions was proposed in [18]: it generates a superposition of quantum states by using an ancilla register that performs a controlled rotation of angle \(\theta_{i}\):
\[\sqrt{p_{i}}\ket{i}\rightarrow\sqrt{p_{i}}\ket{i}\otimes(\cos\theta_{i}\ket{0}+\sin\theta_{i}\ket{1}) \tag{2}\]
Recently, a method using variational solvers to fix the rotation parameters of the gates has been proposed to generate symmetrical and asymmetrical probability distributions [20]. The authors track the trajectories of the individual quantum states to understand the effect of an ancilla register controlling the rotation.
In this paper, we present a method, based on Eq. (2), for loading probability distributions with SSQW. The structure of the paper is as follows. First, we review the theoretical foundations of quantum walks and explain the role quantum walks play in loading probability distributions. Then, we introduce the methodology of SSQW and apply the SSQW-based loading method to different test cases, including normal, log-normal, and real stock price distributions, with a quantum simulator accessible via
Figure 1: The three generic steps to realize the quantum advantage.
the IBM Q Experience. We then demonstrate how the SSQW-based loading method facilitates a quantum benefit in financial derivative pricing. Finally, we provide conclusions and discuss open questions and potential additional applications of the scheme.
## 2 Quantum Walk
Quantum walks (QW) [21, 22, 23, 24, 25, 26, 27, 28], the quantum counterparts of classical random walks, are used as a foundation for building models of controlled quantum simulation. By tuning the parameters and the coin operators of a QW, the walker can simulate several quantum mechanical phenomena. The formalism of quantum walks is broadly classified into the discrete-time quantum walk (DTQW) and the continuous-time quantum walk (CTQW). Both approaches have unique features that make them suitable for performing quantum computing tasks. Here we focus only on the one-dimensional DTQW. A classical walk can be described using just a position Hilbert space, while a DTQW requires an additional coin Hilbert space to fully express its dynamics. This coin space represents the internal state of the walker and is necessary to capture the controlled dynamics of the walker. The Hilbert space of a quantum walk is defined as follows.
\[\mathcal{H}=\mathcal{H}_{c}\otimes\mathcal{H}_{p} \tag{3}\]
where \(\mathcal{H}_{c}\) is the coin Hilbert space and \(\mathcal{H}_{p}\) is the position Hilbert space. The coin Hilbert space for one-dimensional DTQW has the basis states \(\{\ |\uparrow\rangle=\begin{pmatrix}1\\ 0\end{pmatrix},\ |\downarrow\rangle=\begin{pmatrix}0\\ 1\end{pmatrix}\}\) and the position Hilbert space is defined by the basis states \(\ |x\rangle\) where \(x\in Z\). The probability amplitude of the quantum state at position \(x\) can be represented by
\[|\Psi(x,t)\rangle=\begin{pmatrix}\Psi^{\uparrow}(x,t)\\ \Psi^{\downarrow}(x,t)\end{pmatrix} \tag{4}\]
which describes the state of the DTQW with two internal degrees of freedom \(\{\ |\uparrow\rangle,\ |\downarrow\rangle\}\). In a discrete-time quantum walk (DTQW), the system's evolution is governed by two unitary operators: the coin operator and the shift operator. The shift operator moves the walker into a superposition of position states, while the coin operator acts on the coin Hilbert space and determines the amplitudes in position space. The coin Hilbert space represents the internal state of the walker and plays a crucial role in determining the overall dynamics of the system. A universal coin operator is defined as
\[\hat{C}(\theta,\phi,\lambda)=\begin{pmatrix}\cos(\frac{\theta}{2})&-e^{i \lambda}\sin(\frac{\theta}{2})\\ e^{i\phi}\sin(\frac{\theta}{2})&e^{i(\lambda+\phi)}\cos(\frac{\theta}{2}) \end{pmatrix} \tag{5}\]
where \(\theta\), \(\phi\), and \(\lambda\) are three independent parameters, making this the most general form of a unitary coin operator. Therefore, accurately estimating the coin parameters is essential for effectively using quantum walks as a quantum simulation tool and for further research on modeling realistic dynamics. Finding such patterns in complex data can be challenging, but an algorithm that automates the learning process can solve this problem.
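For reference, Eq. (5) can be written as a small NumPy helper; this is an illustrative sketch rather than the authors' released Qiskit code.

```python
import numpy as np

def coin(theta, phi, lam):
    """Universal coin operator C(theta, phi, lam) of Eq. (5)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -np.exp(1j * lam) * s],
                     [np.exp(1j * phi) * s, np.exp(1j * (lam + phi)) * c]])

C = coin(np.pi / 2, 0.0, 0.0)                  # a balanced 50/50 coin
print(np.allclose(C.conj().T @ C, np.eye(2)))  # unitarity check: True
```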
The shift operator is an essential part of the DTQW. A unitary operator moves the quantum walker in a superposition of position states. The shift operation is defined as,
\[\hat{S}=|\downarrow\rangle\langle\downarrow|\otimes\sum_{x}|x-1\rangle\langle x |+|\uparrow\rangle\langle\uparrow|\otimes\sum_{x}|x+1\rangle\langle x| \tag{6}\]
In other words, the shift operator moves the particle one step to the right if the internal state is \(|\uparrow\rangle\) and one step to the left if it is \(|\downarrow\rangle\). The initial state of the system is taken to be localized at the origin, with the internal coin state in a general superposition, and is defined as
\[|\Psi_{0}\rangle=(\alpha|\uparrow\rangle+\beta|\downarrow\rangle)\otimes|x=0\rangle \tag{7}\]
At each time step, the shift operator is applied to the position state after the coin operator is applied to the internal state of the particle. Together, these two operators form the evolution operator of the DTQW, which describes the overall dynamics of the system. This process is repeated several times, and the system's final state is a superposition of position states, with the amplitudes determined by the coin operator.
\[\Psi(x,t)=[\hat{S}(\hat{I}\otimes\hat{C})]^{t}|\Psi_{0}\rangle=\hat{W}^{t}| \Psi_{0}\rangle \tag{8}\]
where \(\hat{I}=\sum_{x}|x\rangle\langle x|\) is the identity on the position space. The probability distributions in the position space of a walker performing a DTQW, shown in Fig. (2),
are not similar to the probability distributions encountered in everyday markets. In a marketplace, prices are typically determined by the interaction of buyers and sellers: at a particular price point, buyers decide whether to purchase while sellers decide whether to offer their product, and the final price is formed as a result of this process.
Inspired by the dynamics of a free market, the split-step quantum walk (SSQW) [29, 30] is introduced as a way to simulate such probability distributions in the quantum realm. The SSQW is a quantum walk that divides the evolution of the quantum system into two half-steps, one to the right and one to the left. The evolution operator \(\hat{W}\) is composed of two half-steps,
\[\hat{W}=\hat{S}_{-}\hat{C}_{\theta_{2}}\hat{S}_{+}\hat{C}_{\theta_{1}} \tag{9}\]
where \(\hat{C}_{\theta_{1}}\) and \(\hat{C}_{\theta_{2}}\) are universal coin operators of the form of Eq. (5), and the shift operators \(\hat{S}_{\pm}\) are defined as
\[\hat{S}_{+} =\sum_{x}[|\uparrow\rangle\langle\uparrow|\otimes|x+1\rangle \langle x|+|\downarrow\rangle\langle\downarrow|\otimes|x\rangle\langle x|] \tag{10}\] \[\hat{S}_{-} =\sum_{x}[|\uparrow\rangle\langle\uparrow|\otimes|x\rangle \langle x|+|\downarrow\rangle\langle\downarrow|\otimes|x-1\rangle\langle x|]\]
The coin \(\hat{C}_{\theta_{1}}\) sets the probability that the walker moves right or stays in place (the buyer), while \(\hat{C}_{\theta_{2}}\) sets the probability that the walker moves left or stays in place (the seller). Accordingly, \(\hat{S}_{+}\) moves the walker to the right when the coin state is \(|\uparrow\rangle\) and keeps it in place when the state is \(|\downarrow\rangle\), whereas \(\hat{S}_{-}\) moves the walker to the left when the coin state is \(|\downarrow\rangle\) and keeps it in place when the state is \(|\uparrow\rangle\). The SSQW can also be regarded as a way of sampling Haar-random unitaries [2], i.e., a collection of unitary operators that can approximate all unitary operators on a given space with high accuracy.
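The single-step SSQW operator of Eqs. (9)-(10) can be assembled explicitly on a finite position grid; in the NumPy sketch below, the grid size, the periodic boundary, and the Hadamard stand-in coins are illustrative assumptions.

```python
import numpy as np

def ssqw_step(P, C1, C2):
    """One SSQW step W = S_- (C2 ⊗ I) S_+ (C1 ⊗ I) on P positions (periodic boundary)."""
    I = np.eye(P)
    up, dn = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # |↑><↑| and |↓><↓|
    right = np.roll(I, 1, axis=0)                       # sum_x |x+1><x|
    left  = np.roll(I, -1, axis=0)                      # sum_x |x-1><x|
    S_plus  = np.kron(up, right) + np.kron(dn, I)       # Eq. (10)
    S_minus = np.kron(up, I) + np.kron(dn, left)        # Eq. (10)
    return S_minus @ np.kron(C2, I) @ S_plus @ np.kron(C1, I)

P = 16
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)    # stand-in coin; any C(θ, φ, λ) works
W = ssqw_step(P, H, H)
print(np.allclose(W.conj().T @ W, np.eye(2 * P)))       # the step operator is unitary: True
```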
## 3 Methods
In this section, we introduce the scheme for loading a probability distribution, together with a practical approach that finds patterns in complex data and maps them onto an SSQW dynamical system. Also, an
Figure 3: The scheme of (a)DTQW and (b)SSQW
Figure 2: Demonstrating probability distributions in position space using a Discrete Time Quantum Walk (DTQW) with Z, X, and H gates as coin operators and various initial states (a) Initial state \(=|0\rangle\) (b) Initial state \(=\frac{1}{\sqrt{2}}(|0\rangle+i|1\rangle)\)(c) Initial state \(=|1\rangle\)
SSQW implementation and an evaluation of its results are provided. The architecture for loading probabilistic data into a quantum state is shown in Fig. 4. There are two domains of computation in this scheme. First, the quantum computation domain consists of five qubits, one for the coin space and the other four for the position space. Second, a classical optimizer using the Constrained Optimization BY Linear Approximation (COBYLA) algorithm is employed to converge the trained results to the targeted distribution. The classical computation drives the COBYLA optimizer with a mean-square error (MSE) loss function to better approach the targeted distribution. Since MSE has better fitting performance than the L2 loss function, this work primarily sets MSE as the loss function of choice.
The coin space of an SSQW, which performs a controlled motion of the walker on the position space, plays a role analogous to the ancilla qubit performing a controlled rotation in Eq. (2). The goal is to optimize the coin parameters of an SSQW so that the position space attains the targeted distribution, after which only the position space needs to be measured. We accomplish this by using parameterized quantum circuits (PQC) to load the distribution. The steps of this process are as follows:
* Begin with a classical targeted data set \(p=\{p_{0},\dots,p_{2^{N}-1}\}\in R\) sampled from a targeted probability distribution.
* Implement the SSQW with an ancilla qubit representing the coin space and \(N\) qubits representing the \(2^{N}\) grid points of the position space.
* Apply the \(\hat{W}\) operator to the quantum circuit and repeat for \(t\) steps.
* Measure the state amplitudes of the position space and compute the trained distribution.
* Update the coin parameters using the classical optimizer with the mean square error(MSE) \[MSE=\frac{1}{2^{N}}\sum_{i=0}^{2^{N}-1}(p_{i}^{targeted}-p_{i}^{trained})^{2}\]
* Iterate \(k\) times until the trained distribution converges to the targeted one (see the sketch after this list)
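A minimal end-to-end sketch of these steps follows, using a matrix simulation of the SSQW together with SciPy's COBYLA optimizer instead of the authors' Qiskit circuits; it reuses the `coin` and `ssqw_step` helpers from the sketches in Section 2, and the grid size, step count, initial angles, and Gaussian target are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Reuses coin() and ssqw_step() from the sketches in Section 2.
N_QUBITS, STEPS = 4, 7
P = 2 ** N_QUBITS                                   # 16 position grid points

grid = np.arange(P)
target = norm.pdf(grid, loc=7.5, scale=2.0)
target /= target.sum()                              # targeted discrete distribution

def position_distribution(angles):
    """Evolve the SSQW for STEPS steps and return the position-space PMF."""
    W = ssqw_step(P, coin(*angles[:3]), coin(*angles[3:]))
    psi = np.zeros(2 * P, dtype=complex)
    psi[0] = 1.0                                    # |coin = ↑> ⊗ |x = 0>
    for _ in range(STEPS):
        psi = W @ psi
    probs = np.abs(psi) ** 2
    return probs[:P] + probs[P:]                    # marginalize over the coin

def mse(angles):
    return float(np.mean((position_distribution(angles) - target) ** 2))

result = minimize(mse, x0=np.full(6, 0.5), method="COBYLA", options={"maxiter": 800})
print(result.fun, position_distribution(result.x))
```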
## 4 Results
Moreover, a comprehensive simulation to evaluate the performance of the SSQW-based data loading scheme was conducted using a standard application programming interface developed by the authors, covering various distributions, including normal, log-normal, and actual daily stock return distributions. We demonstrate that this approach effectively exploits the potential of quantum computation in a financial application, as shown by its use in financial derivative pricing.
### Performances of various distributions
Probability theory [31, 32], the mathematical study of randomness, is built upon the concept of probability distributions and the random variables they describe. This section presents examples of two common probability distributions and three actual daily stock return distributions over different periods. The normal distribution, also known as the Gaussian distribution or bell curve, is a typical probability distribution defined by its mean (\(\mu\)) and standard deviation (\(\sigma\)). It is often used in statistics and other fields to describe real-world phenomena such as IQ scores, height, and weight.
Figure 4: The scheme of loading probability distribution
The log-normal distribution is the probability distribution of a random variable whose logarithm is normally distributed. One of its main properties is that the values are skewed to the right, meaning that the tail of the distribution lies on the right side, representing large values. It also has thicker tails than a normal distribution, meaning that extreme events are more likely; this is why it is useful for modeling variables that fluctuate widely, such as stock prices. The log-normal distribution can also be used to price options by assuming that the underlying asset's price follows a geometric Brownian motion, which means that the logarithm of the asset's price follows a normal distribution. This assumption is the basis of the Black-Scholes model, a widely used method for pricing options.
The methodology begins by generating targeted distributions, constructed by drawing 100,000 samples truncated to \((0,15)\) with NumPy. A 5-qubit quantum circuit is created, with 1 qubit as the coin space and 4 qubits as the position space, and the initial state is set to \(\Psi_{0}=|0\rangle^{\otimes 5}\). The \(\hat{W}\) operator is applied for 7 steps, and the state amplitudes of the position space are measured. These measurements are used to compute the trained distribution. The coin parameters are then updated by a classical optimizer with the mean squared error as the objective function. This process is repeated 800 times to converge to the targeted distribution; the whole procedure takes only about 20 seconds. In Fig. 5, we show the resulting fits to the normal and log-normal distributions.
Our experience shows that the initial coin parameters significantly affect the SSQW-based data loading scheme for the normal distribution. However, exploiting the symmetry of the normal distribution allows for a simplified implementation and improved results, taking only seconds, because the number of coin parameters is reduced from 6 to just 2. On the other hand, the log-normal distribution is more difficult to implement, as it is skewed to the right and obtaining good results is challenging.
A daily return distribution is a statistical representation of the daily returns of a financial asset, such as a stock or a commodity. The daily re
Figure 5: The performances of (a)Normal distribution and (b)log-normal distribution
turn is the percentage change in the asset's value from one day to the next. The distribution shows how frequently different return values occur and can be used to assess an investment's risk and potential returns. The distribution is typically presented in a histogram, with the x-axis representing the return percentage and the y-axis representing the probability of occurrences. The process of obtaining a daily return distribution starts by collecting real stock data daily from Yahoo Finance over a specified time period and truncates to \((0,15)\) Then we implement the SSQW-based data loading scheme to obtain these results, a more realistic probability distribution of the market shown in Fig 6. The distributions are similar to the normal distribution with slight skewness. This skewness is important in determining which distribution is appropriate for investment decision-making.
Fig. 6 displays the daily return distributions of Apple, Microsoft, and JP Morgan over two periods: January 2nd to March 2nd, 2022 (two months), and March 2nd to June 2nd, 2022 (three months). The first column shows the results for the first period and the second column those for the second. The line chart represents the actual daily stock return distribution, while the histogram displays the distribution obtained through the SSQW-based data loading methodology. These distributions are similar to the normal distribution with minimal skewness. The results generated by the SSQW method are highly accurate, with a calculation time of approximately 10 seconds. For financial applications, simulation calculations that are closer to real-time and offer increased accuracy are advantageous.
### Application: European call option price
The use of quantum computing for pricing problems is still in its infancy. More recent works focus on quantum algorithms for amplitude estimation [10] and Monte Carlo methods for the pricing of financial derivatives [16, 19, 33, 34, 35, 36, 37, 38]. In the following, we demonstrate that the SSQW-based data loading scheme enables the exploitation of a potential quantum advantage in finance, such as European call option pricing.
The Black-Scholes (BS) model [39], widely used in the financial industry to value options and other financial derivatives, determines the theoretical value of an option from parameters such as the underlying asset's price (\(S\)), the strike price (\(K\)), the time to expiration (\(T\)), the risk-free interest rate (\(r\)), and the volatility (\(vol\)). The BS model calculates the option value by assuming that the underlying asset's price follows a geometric Brownian motion, a continuous-time stochastic process; under this assumption, the model can compute the probability distribution of the underlying asset's price at expiration. A geometric Brownian motion is characterized by a constant drift (\(\alpha\)) and a constant volatility (\(\sigma\)), and it has been proven that the natural logarithm of the price of an asset following a geometric Brownian motion is normally distributed, with mean determined by the drift and standard deviation by the volatility. This is why the BS model is often associated with the log-normal distribution. The option parameters can therefore be used to calculate the log-normal distribution of the un
Figure 6: Exploring Stock Performance Over Different Time Horizons: Daily Returns for Apple, Microsoft, and JP Morgan
derlying asset's price at expiration. The steps for estimating the mean and variance of this log-normal distribution, given a non-dividend-paying asset with initial spot price (\(S_{0}\)), daily risk-free interest rate (\(r\)), expected return (\(\mu\)), volatility (\(vol\)), and time to expiration (\(T\)), are as follows:
* Collect historical data on the underlying asset's price, such as daily prices.
* Next, calculate the mean and variance of the logarithm of the asset's prices.
* Use the mean and variance of the logarithm of the asset's prices to estimate the volatility (\(\sigma\)) and drift (\(\alpha\)) of the underlying asset's price. The stock price is log-normally distributed with parameters \[\sigma =vol\cdot\sqrt{T}\] (11) \[\alpha =\ln S_{0}+(\mu-r-\frac{\sigma^{2}}{2})\cdot T\]
* Finally, using the mean and standard deviation of the log-normal distribution, calculate the distribution of the underlying asset's price at expiration and estimate the expected payoff \(E[\max\{S_{T}-K,0\}]\), where \(S_{T}\) is the price at maturity (see the sketch after this list).
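A small NumPy sketch of this payoff-expectation step is given below; the discretized price grid and the log-normal parameters are illustrative assumptions, and the same function applies unchanged to the SSQW-trained distribution once it is available.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical log-normal parameters for the maturity price S_T (see Eq. (11)).
sigma_T, alpha_T, K = 0.4, np.log(2.0), 2.0

grid = np.linspace(0.0, 15.0, 16)                 # 4-qubit position grid
pmf = lognorm.pdf(grid, s=sigma_T, scale=np.exp(alpha_T))
pmf /= pmf.sum()                                  # discretized maturity-price distribution

def expected_call_payoff(prices, probs, strike):
    """E[max(S_T - K, 0)] for a discrete maturity-price distribution."""
    return float(np.sum(np.maximum(prices - strike, 0.0) * probs))

print(expected_call_payoff(grid, pmf, K))         # swap `pmf` for the SSQW-trained PMF
```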
We compare the trained probability distribution with the targeted probability distribution for the parameters \(S_{0}=2\), \(K=2\), \(\sigma=0.4\), \(r=0.05\), and \(T=40\) by plotting them together, as shown in Fig. 7. The trained probability distribution can now be used to evaluate the expectation value of the option's payoff function. The analytically calculated expected payoff is \(5.5342\) when using the targeted distribution and \(6.9951\) when using the trained distribution. The difference arises because the trained probability distribution is skewed to the right, leading to a larger expected value than the result obtained from the targeted log-normal distribution. A more accurate trained probability distribution would therefore yield a more faithful estimate.
## 5 Discussion
In this study, we have conducted a preliminary theoretical analysis and a practical implementation to demonstrate the effectiveness of our approach in addressing the probabilistic data loading problem on quantum computers. Our method loads data efficiently while maintaining the accuracy of the probability distribution fit, with an average computing time of around 20 seconds, showing great promise for future applications and allowing the modeling of complex probability distributions. By adjusting the parameters of the coin operations, it is possible to accurately model the probability distribution of the quantum walker's behavior. The SSQW method has the potential to be integrated into larger quantum algorithms, enabling the development of more powerful quantum machine learning and quantum artificial intelligence applications.
Beyond that, several directions are worth exploring to further improve this method, such as extending the implementation to more distributions and increasing its accuracy. As we have demonstrated, the statistical moments of the probability distribution play a crucial role in the SSQW-based data loading scheme. For example, the normal distribution can be described with only two parameters, the first moment (\(\mu\)) and the second moment (\(\sigma^{2}\)). From a machine learning point of view, however, the SSQW-based
Figure 7: Demonstration of European call pricing
data loading scheme uses six parameters to describe the probability distribution and can easily simulate the normal distribution, which has only two. However, for probability distributions with more parameters, the SSQW-based data loading scheme may not be able to simulate them accurately. Further research could therefore redesign the SSQW scheme to widen its range of applicability.
Furthermore, there is significant potential for enhancing the accuracy of the fit and the effectiveness of the search for initial parameter configurations. The quality of the optimized parameters is influenced by the initial coin operator, which in turn affects the accuracy of the method. Combining this method with hardware acceleration and machine learning algorithms could yield promising improvements in efficiency. Our objective is to refine and further develop this method by thoroughly examining its intricacies. In conclusion, describing a classical probability distribution as a quantum walk provides a route to simulating and predicting future behaviors: the quantum walk can be converted into a Hamiltonian [40, 41] and used to establish a quantum simulation system.
Integrating quantum computing and machine learning technologies has immense potential to push beyond the limitations of current machine learning solutions. An efficient loading scheme for probability distributions, like the SSQW-based data loading scheme, has the potential to be a key component in the development of quantum machine learning and the advancement of quantum artificial intelligence (QAI). A novel strategy for building SSQWs would help future researchers incorporate this solution into future quantum machine learning developments, and could allow a deeper understanding of complex phenomena through the quantum walk picture.
## 6 Acknowledgments
We thank the IBM Quantum Hub at NTU for providing computational resources and access for conducting the experiments on real quantum devices. We acknowledge support from the National Science and Technology Council, Taiwan, under Grant MOST111-2119-M-033-001, through the research project Applications of quantum computing in optimization and finances.
|
2303.05355 | Banach's theorem in higher order reverse mathematics | In this paper, methods of second order and higher order reverse mathematics
are applied to versions of a theorem of Banach that extends the
Schroeder-Bernstein theorem. Some additional results address statements in
higher order arithmetic formalizing the uncountability of the power set of the
natural numbers. In general, the formalizations of higher order principles here
have a Skolemized form asserting the existence of functionals that solve
problems uniformly. This facilitates proofs of reversals in axiom systems with
restricted choice. | Jeffry L. Hirst, Carl Mummert | 2023-03-09T16:01:42Z | http://arxiv.org/abs/2303.05355v2 | # Banach's theorem in higher order reverse mathematics
###### Abstract
In this paper, methods of second order and higher order reverse mathematics are applied to versions of a theorem of Banach that extends the Schroeder-Bernstein theorem. Some additional results address statements in higher order arithmetic formalizing the uncountability of the power set of the natural numbers. In general, the formalizations of higher order principles here have a Skolemized form asserting the existence of functionals that solve problems uniformly. This facilitates proofs of reversals in axiom systems with restricted choice.
## 1 Introduction
The Schroeder-Bernstein theorem is perhaps the best known result about cardinality. In full generality, it states that if \(A\) and \(B\) are sets, there is an injection \(f\colon A\to B\), and there is an injection \(g\colon B\to A\), then there is a bijection from \(A\) to \(B\). Unfortunately, this theorem is not ideal for reverse mathematics analysis. If we add the assumption that \(A,B\subseteq\mathbb{N}\), the result is computationally trivial: whenever \(A,B\subseteq\mathbb{N}\) have the same cardinality, there is an \((A\oplus B)\)-computable bijection between them.
In higher order reverse mathematics, we might consider the case where \(A,B\subseteq 2^{\mathbb{N}}\) or \(A,B\subseteq\mathbb{N}^{\mathbb{N}}\). In this setting, the Schroeder-Bernstein theorem is no longer trivial. However, because the theorem does not postulate any relationship between the bijection being constructed and the original two injections, obtaining reversals presents a challenge.
Our focus is a classical theorem of Banach [1] from 1924 that is better suited to reverse mathematical analysis. Banach argued that this theorem captures the essence of proofs of the Schroeder-Bernstein theorem, such as the well known proof by Julius Konig.
**Theorem 1.1** (Banach).: If \(A\) and \(B\) are sets, \(f\colon A\to B\) is an injection, and \(g\colon B\to A\) is an injection, there are decompositions \(A=A_{1}\cup A_{2}\) and \(B=B_{1}\cup B_{2}\) such that \(A_{1}\cap A_{2}=\emptyset\), \(B_{1}\cap B_{2}=\emptyset\), \(f(A_{1})=B_{1}\), and \(g(B_{2})=A_{2}\).
Restating this in terms of the existence of a bijection gives a corollary that strengthens the Schroeder-Bernstein theorem, which we will also call Banach's Theorem.
**Corollary 1.2**.: If \(f\) is an injection from a set \(A\) to a set \(B\), and \(g\) is an injection from \(B\) to \(A\), there is a bijection \(h\colon A\to B\) such that, whenever \(h(a)=b\), either \(f(a)=b\) or \(g(b)=a\).
A brief history of Banach's Theorem and the Schroeder-Bernstein theorem is given by Remmel [12, Introduction]. An analysis of Banach's Theorem for subsets of \(\mathbb{N}\), using subsystems of second order arithmetic, appears in Hirst's thesis [6, §3.2] and a related article [7]. That development uses symmetric marriage theorems to prove the following second order arithmetic results.
**Theorem 1.3** ([7, Theorem 4.1]).: \(\mathsf{RCA}_{0}\) proves the following are equivalent:
1. \(\mathsf{ACA}_{0}\).
2. (Countable Banach's Theorem) Let \(f\colon\mathbb{N}\to\mathbb{N}\) and \(g\colon\mathbb{N}\to\mathbb{N}\) be injections. Then there is a bijection \(h\colon\mathbb{N}\to\mathbb{N}\) such that for all \(m\) and \(n\), \(h(n)=m\) implies either \(f(n)=m\) or \(g(m)=n\).
**Theorem 1.4** ([7, Theorem 4.2]).: \(\mathsf{RCA}_{0}\) proves the following are equivalent:
1. \(\mathsf{WKL}_{0}\).
2. (Bounded Countable Banach's Theorem) Let \(f\colon\mathbb{N}\to\mathbb{N}\) and \(g\colon\mathbb{N}\to\mathbb{N}\) be injections such that the ranges of \(f\) and \(g\) exist. Then there is a bijection \(h\colon\mathbb{N}\to\mathbb{N}\) such that for all \(m\) and \(n\), \(h(n)=m\) implies either \(f(n)=m\) or \(g(m)=n\).
In this paper, we use methods from higher order reverse mathematics to study the _uniformity_ of results like these. We are interested not only in the existence of the bijection \(h\), but also whether there is a functional that can produce \(h\) uniformly from \(f\) and \(g\). This question of uniformity is purely higher order, and cannot be expressed directly in second order reverse mathematics. To study this uniformity, we examine Skolemized versions of theorems. For example, instead of examining Banach's Theorem in a form such as
\[(\forall f,g,A,B)(\exists h)\,\Phi(f,g,A,B,h),\]
we consider the form
\[(\exists H)(\forall f,g,A,B)\,\Phi(f,g,A,B,H(f,g,A,B)).\]
Both versions of a theorem are of interest, of course, and the latter always follows from the former if we assume sufficient choice principles. We are interested in the Skolemized forms because they represent a particular kind of uniformity, and we typically do not assume enough choice to derive them directly from the un-Skolemized form. As discussed in Section 4, this is a different kind of uniformity than Weihrauch reducibility.
Section 2 begins with a survey of reverse mathematics results on countability. Sections 3 and 4 present a number of supporting lemmas to prepare for the analysis of Banach's theorem. Section 5 examines Theorems 1.3 and 1.4 from the viewpoint of Skolemized uniformity. Section 6 extends the study of Banach's Theorem to subsets of \(2^{\mathbb{N}}\) and, more generally, subsets of compact metric spaces.
### Formal theories
This work relies on several well studied systems of second order arithmetic and higher order arithmetic. Simpson [14] and Dzhafarov and Mummert [3] provide thorough references for reverse mathematics. Kohlenbach [9] provides a reference for higher order reverse mathematics. We follow Kohlenbach's definitions of higher order systems throughout this paper, noting any exceptions explicitly.
For the purposes of higher order reverse mathematics, we assume that our systems use the function based language of higher order arithmetic, rather than the set based language. Accordingly, \(2^{\mathbb{N}}\) is used throughout this paper to denote the set of all functions from \(\mathbb{N}\) to \(\{0,1\}\).
Many of our results will use fragments of the quantifier-free choice scheme. For types \(\rho\) and \(\tau\), we have the scheme
\[\mathsf{QF\mbox{-}AC}^{\rho,\tau}\colon(\forall x^{\rho})(\exists y^{\tau})A( x,y)\to(\exists Y^{\rho\to\tau})(\forall x^{\rho})A(x,Y(x)),\]
where \(A\) is a quantifier free formula. Here \(A\) can have parameters of arbitrary type.
The system \(\mathsf{RCA}_{0}^{\omega}=\mathsf{E\mbox{-}PRA}^{\omega}+\mathsf{QF\mbox{-}AC} ^{1,0}\) is a fragment of higher order arithmetic. It is axiomatized by a set of basic axioms along with induction for \(\Sigma_{1}^{0}\) formulas and the choice scheme \(\mathsf{QF\mbox{-}AC}^{1,0}\). The syntax has term-forming operations for \(\lambda\) abstraction and primitive recursion.
The system \(\mathsf{RCA}_{0}^{2}\) is a second order fragment of \(\mathsf{RCA}_{0}^{\omega}\), with only types \(0\) and \(1\) for elements of \(\mathbb{N}\) and functions \(\mathbb{N}\to\mathbb{N}\), respectively. Formally, we have \(\mathsf{RCA}_{0}^{2}=\mathsf{E\mbox{-}PRA}^{\omega}+\mathsf{QF\mbox{-}AC}^{0,0}\). This system is equivalent to the set based system \(\mathsf{RCA}_{0}\) presented by Simpson [14], and we will henceforth denote \(\mathsf{RCA}_{0}^{2}\) by \(\mathsf{RCA}_{0}\) when no confusion is likely.
A sequence \(\langle f_{n}:n\in\mathbb{N}\rangle\) is viewed as a map \(f\colon\mathbb{N}\times\mathbb{N}\to\mathbb{N}\), so that \(f_{n}(m)=f(\langle n,m\rangle)\), where \(\langle\cdot,\cdot\rangle\) is a suitable pairing function.
Emulating Kohlenbach [9], we use parentheses around the name of a functional to denote the principle stating that the functional exists. For example, the principle \((\exists^{2})\) asserts the existence of the functional \(\exists^{2}\), defined below.
There are several ways to extend the comprehension axioms of second order arithmetic to the higher order setting. One particular functional (set) existence axiom for higher order arithmetic is \((\exists^{2})\), defined by
\[(\exists^{2})\colon(\exists\varphi^{1\to 0})(\forall f)(\varphi(f)=0\leftrightarrow( \exists n)[f(n)=0]).\]
The functional \(\varphi^{1\to 0}\) from this principle is itself called \(\exists^{2}\). The system \(\mathsf{ACA}_{0}^{\omega}\equiv\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\) implies the arithmetical comprehension scheme. Kohlenbach [9] showed that \(\mathsf{ACA}_{0}^{\omega}\) is conservative over \(\mathsf{ACA}_{0}\) for sentences in the language \(L_{2}\).
Other functional existence principles correspond to \(\mathsf{ACA}_{0}\). Kohlenbach [9] presents two such functionals. One, \(\mu_{0}\), selects a zero of a function if such a zero exists:
\[(\mu_{0})\colon(\exists\mu_{0}^{2})(\forall f^{1})[(\exists n^{0})(f(n)=_{0}0) \to f(\mu_{0}(f))=0].\]
Another returns the least zero of a function, in the fashion of Feferman [4, §2.3.3]:
\[(\mu)\colon(\exists\mu^{2})(\forall f^{1})((\exists n^{0})(f(n)=_{0}0) \to[f(\mu(f))=0\wedge(\forall t<\mu(f))(f(t)\neq 0)]).\]
**Proposition 1.5**.: The following are pairwise equivalent over \(\mathsf{RCA}_{0}^{\omega}\colon(\exists^{2})\), \((\mu_{0})\), and \((\mu)\).
Proof.: Kohlenbach [9, Proposition 3.9] proves the equivalence of \((\exists^{2})\) and \((\mu_{0})\). Because any \(\mu\) satisfying \((\mu)\) also satisfies \((\mu_{0})\), it suffices to show that \(\mathsf{RCA}_{0}^{\omega}\) proves that \((\mu_{0})\) implies \((\mu)\). Given a functional \(\mu_{0}\) as in the definition of \((\mu_{0})\), \(\mu(f)\) is the least \(t\leq\mu_{0}(f)\) such that \(f(t)=0\). This functional is primitive recursive in \(\mu_{0}\) and thus exists by \(\mathsf{RCA}_{0}^{\omega}\) and \((\mu_{0})\).
## 2 Countability
One motivation for this research is the question: how difficult is it to prove \(2^{\mathbb{N}}\) is uncountable? As usual, being uncountable simply means not being countable. There are many ways to express the principle that \(2^{\mathbb{N}}\) is countable, with the following three being particularly natural:
* \(\mathsf{C}_{\mathrm{enum}}\): there is a sequence \(\langle f_{n}:n\in\mathbb{N}\rangle\) such that for all \(g\in 2^{\mathbb{N}}\) there is an \(n\in\mathbb{N}\) with \(g=f_{n}\).
* \(\mathsf{C}_{\mathrm{inj}}\): there is a functional \(\Phi^{1\to 0}\) that is an injection from \(2^{\mathbb{N}}\) to \(\mathbb{N}\).
* \(\mathsf{C}_{\mathrm{bij}}\): there is a functional \(\Phi^{1\to 0}\) that is a bijection from \(2^{\mathbb{N}}\) to \(\mathbb{N}\).
The principles \(\mathsf{C}_{\mathrm{inj}}\) and \(\mathsf{C}_{\mathrm{bij}}\) cannot be stated in the language of second order arithmetic, but they can be stated in \(\mathsf{RCA}_{0}^{\omega}\). When we say we assume \(\mathsf{C}_{\mathrm{inj}}\) or \(\mathsf{C}_{\mathrm{bij}}\), this means we assume the existence of a functional with the property stated. Similarly, if we assume \(\neg\mathsf{C}_{\mathrm{inj}}\) or \(\neg\mathsf{C}_{\mathrm{bij}}\), this means we assume no functional has the property stated.
In the context of set theory there is little reason to distinguish between \(\mathsf{C}_{\mathrm{inj}}\) and \(\mathsf{C}_{\mathrm{bij}}\), because of the comprehension principles available. As discussed below, there are key distinctions between these principles in the context of theories of arithmetic with restricted comprehension principles.
Of course, \(\mathsf{C}_{\mathrm{enum}}\), \(\mathsf{C}_{\mathrm{bij}}\), and \(\mathsf{C}_{\mathrm{inj}}\) are classically false. There are two key questions: which systems are "strong enough" to disprove these false principles, and which are "weak enough" to be consistent with one or more of the principles. As is well known, Cantor's diagonalization proof allows us to disprove \(\mathsf{C}_{\mathrm{enum}}\) in very weak systems (compare Theorem II.4.9 of Simpson [14] showing \(\mathbb{R}\) is uncountable in \(\mathsf{RCA}_{0}\)).
**Proposition 2.1**.: \(\mathsf{RCA}_{0}^{2}\) proves \(\neg\mathsf{C}_{\mathrm{enum}}\).
Proof.: Given \(\langle f_{n}:n\in\mathbb{N}\rangle\) witnessing \(\mathsf{C}_{\mathrm{enum}}\), \(\mathsf{RCA}_{0}^{2}\) can construct the function \(g\) defined by \(g(m)=1-f_{m}(m)\). Then \(g\in 2^{\mathbb{N}}\), but \(g\) cannot be \(f_{k}\) for any \(k\in\mathbb{N}\).
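The diagonal function in this proof is explicitly computable from the enumeration; the short sketch below (a finite illustration only, using a hypothetical list of sequences) makes the construction concrete.

```python
# Illustrative finite version of the diagonal construction in Proposition 2.1.
def diagonal(fs):
    """Given a list fs of 0-1 sequences (fs[m][m] must exist), return g with
    g(m) = 1 - fs[m][m], so g differs from each fs[k] at position k."""
    return [1 - fs[m][m] for m in range(len(fs))]

fs = [[0, 0, 0, 0], [1, 1, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]]
g = diagonal(fs)
assert all(g[k] != fs[k][k] for k in range(len(fs)))
print(g)  # [1, 0, 1, 1]
```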
The principles \(\mathsf{C}_{\mathrm{inj}}\) and \(\mathsf{C}_{\mathrm{bij}}\) have much more interesting behavior. Normann and Sanders [10] provide a detailed analysis of the negations of these principles, which they name NIN and NBI, respectively. (They formulate NIN and NBI for \(\mathbb{R}\) but the results hold equally for \(2^{\mathbb{N}}\).) Their Theorem 3.2 shows that the true principle NIN is not provable in the system \(\mathsf{Z}_{2}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\) (which includes \(\Pi_{\infty}^{1}\) comprehension with parameters of type 1), and hence this system is consistent with \(\mathsf{C}_{\mathrm{inj}}\)[10, Theorem 3.26]. They also show that NIN is provable in \(\mathsf{Z}_{2}^{\Omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\), which includes the functional \(\exists^{3}\) in addition to \(\Pi_{\infty}^{1}\) comprehension. In the remainder of this section, we discuss some aspects of their results related to \(\mathsf{C}_{\mathrm{inj}}\) and \(\mathsf{C}_{\mathrm{bij}}\).
A key issue in analyzing \(\mathsf{C}_{\mathrm{inj}}\) is that the range of an injection from \(\mathbb{N}^{\mathbb{N}}\) to \(\mathbb{N}\) may be hard to form with weak comprehension axioms. We will see that a similar issue arises in the study of Banach's theorem, as well, where the existence of the range of a functional becomes a key question. By contrast, it is relatively easy to disprove \(\mathsf{C}_{\mathrm{bij}}\)[10, Theorem 3.28].
**Proposition 2.2** (\(\mathsf{RCA}_{0}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\)).: There is no injection \(\Phi\colon 2^{\mathbb{N}}\to\mathbb{N}\) for which the characteristic function for the range exists. In particular, \(\mathsf{C}_{\mathrm{bij}}\) is disprovable in \(\mathsf{RCA}_{0}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\).
Proof.: We will work in \(\mathsf{RCA}_{0}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\) and assume there is an injection \(\Phi\) from \(2^{\mathbb{N}}\) to \(\mathbb{N}\) with range \(D=\{n:(\exists g)[\Phi(g)=n]\}\) given by a characteristic function. We will prove the principle \(\mathsf{C}_{\mathrm{enum}}\) by constructing a kind of left inverse of \(\Phi\), which will be a (possibly noninjective) enumeration of \(2^{\mathbb{N}}\). Because \(\mathsf{RCA}_{0}\) proves \(\neg\mathsf{C}_{\mathrm{enum}}\), this gives a contradiction.
By assumption, for each \(n\in D\) there is a \(g\in 2^{\mathbb{N}}\) with \(\Phi(g)=n\). Therefore, by \(\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\), we may form a function \(f\) so that \((\forall n)[n\in D\to\Phi(f_{n})=n]\). Then \(\langle f_{n}:n\in\mathbb{N}\rangle\) is an enumeration of \(2^{\mathbb{N}}\): for any \(g\in 2^{\mathbb{N}}\) we have \(\Phi(f_{\Phi(g)})=\Phi(g)\), so \(f_{\Phi(g)}=g\) by the injectivity of \(\Phi\). Hence \(\mathsf{C}_{\mathrm{enum}}\) holds, a contradiction.
We now explain how this proposition implies certain higher order formulations of the Schroeder-Bernstein theorem are nontrivial. Suppose, in the context of set theory, we wanted to try to use the Schroeder-Bernstein theorem to show \(2^{\mathbb{N}}\) is countable. Because there is a trivial injection from \(\mathbb{N}\) to \(2^{\mathbb{N}}\), the other assumption in the Schroeder-Bernstein theorem is the existence of an injection from \(2^{\mathbb{N}}\) to \(\mathbb{N}\), that is, \(\mathsf{C}_{\mathrm{inj}}\). The conclusion is the existence of a bijection, that is, \(\mathsf{C}_{\mathrm{bij}}\). We can thus view the implication \(\mathsf{C}_{\mathrm{inj}}\to\mathsf{C}_{\mathrm{bij}}\) as a specific formal instance of the Schroeder-Bernstein theorem. (Normann and Sanders [11] study a different formulation of the Schroeder-Bernstein theorem, which they call \(\mathsf{CBN}\).)
**Corollary 2.3**.: The implication \(\mathsf{C}_{\mathrm{inj}}\to\mathsf{C}_{\mathrm{bij}}\) is not provable in \(\mathsf{Z}_{2}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\).
Proof.: \(\mathsf{Z}_{2}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\) is consistent with \(\mathsf{C}_{\mathrm{inj}}\) but not \(\mathsf{C}_{\mathrm{bij}}\).
Proposition 2.2 can also be used to obtain an upper bound on the strength required to disprove \(\mathsf{C}_{\mathrm{inj}}\). Normann and Sanders prove a version of the following result using the principle (\(\exists^{3}\)) as a formalization of \(\Sigma^{1}_{1}\) comprehension with functional parameters.
**Corollary 2.4** (see Normann and Sanders [10, Theorem 3.1]).: \(\mathsf{C}_{\mathrm{inj}}\) is disprovable from \(\mathsf{RCA}_{0}^{\omega}+\mathsf{QF}\text{-}\mathsf{AC}^{0,1}\) along with \(\Sigma^{1}_{1}\) comprehension with parameters of type 2.
Proof.: Assume \(\Phi\) is a functional witnessing \(\mathsf{C}_{\mathrm{inj}}\). Applying \(\Sigma^{1}_{1}\) comprehension with parameter \(\Phi\), we can construct the range of \(\Phi\). We then obtain a contradiction from Proposition 2.2.
A final point of interest is that the classically false principle \(\mathsf{C}_{\mathrm{inj}}\), although consistent with \(\mathsf{RCA}_{0}^{\omega}\), has nontrivial set existence strength. Normann and Sanders discuss the contrapositive of the following proposition in the guise of a "trick" related to excluded middle [10, §3].
**Proposition 2.5**.: \(\mathsf{C}_{\mathrm{inj}}\) implies (\(\exists^{2}\)) over \(\mathsf{RCA}_{0}^{\omega}\).
Proof.: If \(\Phi\) is an injection from \(2^{\mathbb{N}}\) to \(\mathbb{N}\) then \(\Phi\) is discontinuous at every point (here we identify elements of \(\mathbb{N}\) with constant functions from \(\mathbb{N}\) to \(\mathbb{N}\)). The existence of a discontinuous functional implies (\(\exists^{2}\)) by Proposition 3.7 of Kohlenbach [9].
Thus, for example, there is no model of \(\mathsf{RCA}_{0}^{\omega}\) in which \(\mathsf{C}_{\mathrm{inj}}\) holds and every element of \(2^{\mathbb{N}}\) is computable.
## 3 Bounding calculations of type 1 functions
This section contains several technical lemmas related to the range of a function \(f\colon\mathbb{N}\to\mathbb{N}\). Each function of this type has a number of auxiliary functions
related to its range. The most obvious is the characteristic function for the range. We write \(\rho(f,g)\) as shorthand for the formula asserting that \(g\) is the characteristic function for the range of \(f\). More formally,
\[\rho(f,g)\text{ is }(\forall n)[(\exists m)(f(m)=n)\leftrightarrow g(n)>0].\]
A bounding function can also be used to compute the range of \(f\). We write \(\beta(f,g)\) for the formula asserting that \(g\) is such a bounding function. Formally,
\[\beta(f,g)\text{ is }(\forall n)[(\exists m)(f(m)=n)\leftrightarrow(\exists t \leq g(n))(f(t)=n)].\]
The results below address the problem of converting between the characteristic function for the range of a function and a bounding function for the range, and the amount of uniformity present in the conversion. In the second order setting, principles asserting the existence of characteristic functions and the existence of bounding functions are interchangeable, as shown by the following two results.
**Proposition 3.1** (\(\mathsf{RCA}_{0}\)).: For all \(f\colon\mathbb{N}\to\mathbb{N}\), \((\exists g)\,\rho(f,g)\leftrightarrow(\exists h)\,\beta(f,h)\).
Proof.: Working in \(\mathsf{RCA}_{0}\), suppose \(g\) is a characteristic function for the range of a function \(f\). Then \(h(n)=(\mu\,t)(g(n)=0\lor f(t)=n)\) is the desired bounding function and exists by recursive comprehension.
Now suppose that \(h\) is a bounding function for the range of \(f\). The characteristic function \(g\colon\mathbb{N}\to\mathbb{N}\) can be defined by the formula
\[g(n)=\begin{cases}1,&\text{if }(\exists t\leq h(n))(f(t)=n),\\ 0,&\text{otherwise},\end{cases}\]
and hence \(g\) exists by recursive comprehension.
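Both conversions in this proof are effective relative to the given data. The sketch below is an informal illustration on a concrete pair of functions (with a search cutoff added for safety, since the search for \(h(n)\) is only guaranteed to terminate when \(g\) really is the characteristic function of the range); it is not part of the formal development.

```python
# Illustrative sketch of the two conversions in Proposition 3.1, evaluated
# pointwise on a concrete pair of functions.

def bound_from_char(f, g, n, cutoff=10_000):
    """h(n): least t with f(t) = n when g reports n in the range, else 0.
    The unbounded search from the proof is cut off here for safety."""
    if g(n) == 0:
        return 0
    for t in range(cutoff):
        if f(t) == n:
            return t
    raise RuntimeError("g misreported membership (or the cutoff is too small)")

def char_from_bound(f, h, n):
    """g(n) = 1 exactly when some t <= h(n) has f(t) = n."""
    return 1 if any(f(t) == n for t in range(h(n) + 1)) else 0

f = lambda t: 2 * t                         # the range of f is the even numbers
g = lambda n: 1 if n % 2 == 0 else 0        # characteristic function of the range
h = lambda n: bound_from_char(f, g, n)      # derived bounding function
print([char_from_bound(f, h, n) for n in range(6)])   # [1, 0, 1, 0, 1, 0]
```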
The relationship between characteristic and bounding functions is uniform in the sense that \(\mathsf{RCA}_{0}\) proves the sequential extension of the previous result.
**Proposition 3.2** (\(\mathsf{RCA}_{0}\)).: For every sequence \(\langle f_{i}\rangle_{i\in\mathbb{N}}\) of functions from \(\mathbb{N}\) to \(\mathbb{N}\), we have
\[(\exists\langle g_{i}\rangle_{i\in\mathbb{N}})(\forall n)\,\rho(f_{n},g_{n}) \leftrightarrow(\exists\langle h_{i}\rangle_{i\in\mathbb{N}})(\forall n)\, \beta(f_{n},h_{n}).\]
Proof.: We will write \(\langle f_{i}\rangle_{i\in\mathbb{N}}\) as a function of two variables, so \(f(i,n)=f_{i}(n)\). Adapting the proof of the preceding result, write
\[h(i,n)=(\mu\,t)(g(i,n)=0\lor f(i,t)=n)\]
and
\[g(i,n)=\begin{cases}1,&\text{if }(\exists t\leq h(i,n))(f(i,t)=n),\\ 0,&\text{otherwise},\end{cases}\]
to translate the sequences of auxiliary functions in \(\mathsf{RCA}_{0}\).
Third order arithmetic can formalize "translating functionals" of type 2 to convert between characteristic functions and bounding functions for ranges. Principles asserting the existence of the translating functionals provide additional examples of Skolemized uniformity, distinct from the sequential uniformity often considered in second order settings.
As shown below, the existence of a translating functional from bounding functions to characteristic functions can be proved in \(\mathsf{RCA}_{0}^{\omega}\). However, the reverse translation functional requires stronger assumptions. Thus the interchangeability of the two sorts of auxiliary functions witnessed in the second order setting by the previous two propositions does not extend to Skolemized functional formulations in third order arithmetic.
**Proposition 3.3** (\(\mathsf{RCA}_{0}^{\omega}\)).: There is a functional \(T_{\beta\to\rho}\) of type \(1\to 1\) that translates bounding functions into characteristic functions for ranges. That is,
\[(\exists T_{\beta\to\rho})(\forall f)(\forall g)[\beta(f,g)\to\rho(f,T_{\beta \to\rho}(f,g))].\]
Proof.: Working in \(\mathsf{RCA}_{0}^{\omega}\), by \(\mathsf{QF}\)-\(\mathsf{AC}^{1,0}\) there is a functional \(Y\) from \(\mathbb{N}^{\mathbb{N}}\times\mathbb{N}^{\mathbb{N}}\times\mathbb{N}\) to \(\{0,1\}\) such that \(Y(f,g,n)=1\) if and only if \((\exists t\leq g(n))[f(t)=n]\). Note that the defining formula is quantifier free because the bounded quantifier can be rewritten using a primitive recursive functional. The desired functional \(T_{\beta\to\rho}(f,g)\) is then
\[T_{\beta\to\rho}(f,g)=\lambda n.Y(f,g,n).\qed\]
**Proposition 3.4** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. There is a functional \(T_{\rho\to\beta}\) of type \(1\to 1\) that translates characteristic functions for ranges into bounding functions. That is: \[(\forall f)(\forall g)[\rho(f,g)\to\beta(f,T_{\rho\to\beta}(f,g))].\]
Proof.: To prove that (1) implies (2), assume \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\). The base system \(\mathsf{RCA}_{0}^{\omega}\) suffices to prove the existence of the functional \(\chi\) which takes the (type 1 code for the) pair \((f,n)\) and maps it to the function \(f_{n}\colon\mathbb{N}\to 2\) satisfying \(f_{n}(t)=0\leftrightarrow f(t)=n\). By Proposition 3.9 of Kohlenbach [9], \((\exists^{2})\) implies the existence of Feferman's \(\mu\) functional satisfying the formula
\[(\forall f)[(\exists x)(f(x)=0)\to f(\mu(f))=0].\]
The functional defined by \(T_{\rho\to\beta}(f,g)=\lambda n.\mu(\chi(f,n))\) yields the desired bounding function.
In the proof of the preceding implication, the principle \((\exists^{2})\) is sufficiently strong that we can discard the given characteristic function and calculate the bounding function directly from \(f\) and \(\exists^{2}\). We now show we can calculate \(\exists^{2}\) from any translation functional \(T_{\rho\to\beta}\) in the reverse direction.
To prove that (2) implies (1), suppose \(T_{\rho\to\beta}\) is a translation functional as described in (2). We will show that this functional is not \(\varepsilon\)-\(\delta\) continuous in the sense of Definition 3.5 of Kohlenbach [9].
We can view inputs \(f\) and \(g\) as a single sequence \(\langle f(0),g(0),f(1),g(1)\dots\rangle\) and use the usual Baire space topology. The functional \(T_{\rho\to\beta}\) is defined for every input of two type 1 arguments, including inputs \(f\) and \(g\) for which \(g\) is not a characteristic function for the range of \(f\). For example, let \(f_{1}\colon\mathbb{N}\to\mathbb{N}\) satisfy \(f_{1}(n)=1\) for all \(n\). Let \(g_{2}\colon\mathbb{N}\to\mathbb{N}\) satisfy \(g_{2}(0)=g_{2}(1)=1\) and \(g_{2}(n)=0\) otherwise. Then \(g_{2}\) is not a correct characteristic function for \(f_{1}\). However, \(T_{\rho\to\beta}(f_{1},g_{2})=h\) for some totally defined type 1 function \(h\), and \(h(0)=b\) for some value \(b\).
Suppose by way of contradiction that \(T_{\rho\to\beta}\) is \(\varepsilon\)-\(\delta\) continuous. Then for every pair \((f,g)\) in some neighborhood \(N\) of \((f_{1},g_{2})\) we must have \(T_{\rho\to\beta}(f,g)(0)=b\). Let \(f_{2}\colon\mathbb{N}\to 2\) be a function that is 1 for every \(t\leq b\), outputs a sufficient number of ones so that \((f_{2},g_{2})\) is in the neighborhood \(N\), and is eventually constantly zero. Then \(g_{2}\) is a correct characteristic function for \(f_{2}\), so \(\rho(f_{2},g_{2})\) holds. However, \(T_{\rho\to\beta}(f_{2},g_{2})(0)=b\). Thus \(T_{\rho\to\beta}(f_{2},g_{2})\) is not a bounding function for \(f_{2}\) because \((\exists t)[f_{2}(t)=0]\) but \((\forall t\leq T_{\rho\to\beta}(f_{2},g_{2})(0))[f_{2}(t)\neq 0]\). Thus \(\beta(f_{2},T_{\rho\to\beta}(f_{2},g_{2}))\) fails. This contradicts the implication given in item (2) of the proposition. Thus \(T_{\rho\to\beta}\) must not be \(\varepsilon\)-\(\delta\) continuous. By Proposition 3.7 of Kohlenbach [9], \((\exists^{2})\) follows.
Let \(R\) be the Weihrauch problem taking a type 1 function as an input and yielding output consisting of the characteristic function of the range of the input. Let \(B\) be the Weihrauch problem that outputs bounding functions as described above. Ideas from the proof of Proposition 3.1 can be adapted to show that \(R\) and \(B\) are weakly Weihrauch equivalent, and strongly Weihrauch incomparable. Summarizing, analyses based on sequential second order statements, Skolemized higher order statements, and Weihrauch reducibility yield different results. This indicates that there are three distinct notions of uniformity considered here.
## 4 Realizers for omniscience principles
The principle \((\exists^{2})\) is closely related to a certain formulation of the limited principle of omniscience. The Weihrauch problem \(\mathsf{LPO}\) asks for a realizer that determines whether an infinite sequence of natural numbers contains a zero. Indeed, the definition of \((\exists^{2})\) could be rewritten as
\[(\exists^{2})\colon(\exists R_{\mathsf{LPO}})(\forall f^{1})[R_{\mathsf{LPO}}(f )=0\leftrightarrow(\exists n)(f(n)=0)]\]
to emphasize that \((\exists^{2})\) asserts the existence of a realizer for this problem.
The Weihrauch problem \(\mathsf{LLPO}\), related to the lesser limited principle of omniscience, asks for a realizer to identify a parity (even or odd) on which a sequence of numbers is zero, assuming either that all even positions are zero
or all odd positions are zero. We will use a principle asserting the existence of a realizer for \(\mathsf{LLPO}\):
\[(\mathsf{LLPO})\colon(\exists R_{\mathsf{LLPO}}\leq 1)(\forall f^{1})( [(\forall n)(f(2n)=0)\vee(\forall n)(f(2n+1)=0)]\\ \to(\forall n)[f(2n+R_{\mathsf{LLPO}}(f))=0]).\]
Often it is more convenient to work with an equivalent form that asks for the parity of the first location where a sequence is zero, if there is such a location:
\[(\mathsf{LLPOmin})\colon(\exists R_{\mathsf{LLPOmin}})(\forall f ^{1})(\forall n)\\ [f(n)=0\to R_{\mathsf{LLPOmin}}(f)\equiv_{\mathrm{mod}\;2}(\mu \,t\leq n)(f(t)=0)].\]
For example, suppose \(f=\langle 1,0,1,0,0\dots\rangle\) denotes the infinite sequence consisting of \(1,0,1\) followed by all zeros. Then \(R_{\mathsf{LPO}}(f)=0\) because the sequence contains a \(0\); \(R_{\mathsf{LLPO}}(f)=1\) because \(f(2n+1)=0\) for all \(n\); and \(R_{\mathsf{LLPOmin}}(f)=1\) because the first zero occurs in position \(1\), which is odd. For the sequence \(g=\langle 1,1,0,0,\dots\rangle\), \(R_{\mathsf{LPO}}(g)=0\); \(R_{\mathsf{LLPOmin}}(g)=0\) because the first zero occurs at position \(2\), which is even; and the value of \(R_{\mathsf{LLPO}}(g)\) is not determined by its defining formula.
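For eventually constant sequences such as these, the relevant realizer values can be read off from a sufficiently long finite prefix. The sketch below is an informal illustration of \(R_{\mathsf{LPO}}\) and \(R_{\mathsf{LLPOmin}}\) on finite prefixes (reliable only when the sequence is constant beyond the prefix), following the conventions of the definitions above.

```python
# Illustrative evaluation of two of the realizers on finite prefixes:
# R_LPO(f) = 0 exactly when f has a zero, and R_LLPOmin(f) is the parity
# of the first zero when one exists.

def r_lpo(prefix):
    """0 if a zero appears in the prefix, 1 otherwise."""
    return 0 if 0 in prefix else 1

def r_llpo_min(prefix):
    """Parity of the first zero in the prefix; value immaterial otherwise."""
    for t, value in enumerate(prefix):
        if value == 0:
            return t % 2
    return 0  # unconstrained case

f = [1, 0, 1] + [0] * 20          # 1, 0, 1 followed by zeros
g = [1, 1] + [0] * 20             # 1, 1 followed by zeros
print(r_lpo(f), r_llpo_min(f))    # 0 1
print(r_lpo(g), r_llpo_min(g))    # 0 0
```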
One motivation of \(\mathsf{LLPOmin}\) is that its value is determined for every sequence that includes a zero. The next proposition shows that \(\mathsf{LLPO}\) and \(\mathsf{LLPOmin}\) are equivalent for our purposes. For Weihrauch problems \(\mathsf{P}\) and \(\mathsf{Q}\) expressible in the language of \(\mathsf{RCA}_{0}^{\omega}\), we say that \(\mathsf{RCA}_{0}^{\omega}\) proves \(\mathsf{P}\leq_{\mathrm{sW}}\mathsf{Q}\) if \(\mathsf{RCA}_{0}^{\omega}\) proves there are functionals \(\varphi,\psi\colon\mathbb{N}^{\mathbb{N}}\to\mathbb{N}^{\mathbb{N}}\) such that, for every realizer \(R_{\mathsf{Q}}\) of \(\mathsf{Q}\), the functional \(\psi\circ R_{\mathsf{Q}}\circ\varphi\) is a realizer of \(\mathsf{P}\).
**Proposition 4.1**.: \(\mathsf{RCA}_{0}^{\omega}\) proves that the problems \(\mathsf{LLPO}\) and \(\mathsf{LLPOmin}\) are strongly Weihrauch equivalent, and that the principles \((\mathsf{LLPO})\) and \((\mathsf{LLPOmin})\) are equivalent.
Proof.: First, assume \(R\) is a realizer for \(\mathsf{LLPOmin}\). Define a preprocessing function \(h\colon\mathbb{N}\to\mathbb{N}\) such that \(h(n)=0\) if \(n\neq 0\) and \(h(0)=1\). Define a postprocessing function \(w(n)=1-(n\,\mathrm{mod}\,2)\). Then \(S\), defined by \(S(g)=w(R(h\circ g))\), is a realizer for \(\mathsf{LLPO}\).
To see this, assume \(g\) is an instance of \(\mathsf{LLPO}\). If \(g\) is identically zero, then whichever value in \(\{0,1\}\) is produced by \(S\) is acceptable. If \(g\) is not identically zero, then \(h\circ g\) is zero on exactly the inputs where \(g\) is nonzero. Thus \(R(h\circ g)\) is the parity of the first location where \(g\) is nonzero, and \(w\circ R(h\circ g)\) is the parity for which \(g\) is always zero. This shows \(\mathsf{LLPO}\leq_{\mathrm{sW}}\mathsf{LLPOmin}\).
Conversely, suppose \(S\) is a realizer for \(\mathsf{LLPO}\). Define a preprocessing function \(J\colon\mathbb{N}^{\mathbb{N}}\to\mathbb{N}^{\mathbb{N}}\) such that, for \(f\in\mathbb{N}^{\mathbb{N}}\),
\[J(f)(n)=\begin{cases}1,&\text{if }(\exists m<n)(f(m)=0),\\ f(n),&\text{otherwise}.\end{cases}\]
Thus \(J(f)\) and \(f\) agree through the first zero of \(f\), but afterwards \(J(f)\) takes only the value \(1\). Hence there is at most one input \(k\) for which \(h(J(f))(k)\) is
nonzero, and if there is such a \(k\) then it is the least input for which \(f(k)=0\). This means that \(h(J(f))\) is in the domain of \(\mathsf{LLPO}\), and \(S(h(J(f)))\) produces the parity on which \(h(J(f))\) vanishes, which is the opposite of the parity of \(k\). Thus \(w\circ S\circ(h\circ J)\), with \(w\) as defined above, is a realizer for \(\mathsf{LLPOmin}\). We have shown \(\mathsf{LLPOmin}\leq_{\mathrm{sW}}\mathsf{LLPO}\).
The preprocessing and postprocessing functionals in this argument can all be formed in \(\mathsf{RCA}_{0}^{\omega}\), which can verify the correctness of the argument. None of the postprocessing functions require access to the original instance of a problem. Hence \(\mathsf{RCA}_{0}^{\omega}\) proves that \(\mathsf{LLPO}\) and \(\mathsf{LLPOmin}\) are strongly Weihrauch equivalent.
Concatenation of the preprocessing and postprocessing functionals with any \(\mathsf{LLPO}\) realizer yields an \(\mathsf{LLPO}\mathsf{min}\) realizer, so the principle (\(\mathsf{LLPO}\)) implies the principle (\(\mathsf{LLPO}\mathsf{min}\)). The converse follows in a similar fashion.
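The preprocessing and postprocessing maps used in this proof are simple pointwise transformations. The sketch below spells them out on finite prefixes; it is an informal illustration only, not a formalization.

```python
# Illustrative sketch of the transformations used in Proposition 4.1,
# acting on finite prefixes of the input sequences.

def h(n):
    """Pointwise preprocessing: swap zero and nonzero values."""
    return 1 if n == 0 else 0

def w(n):
    """Postprocessing: flip the parity bit."""
    return 1 - (n % 2)

def J(prefix):
    """J(f) agrees with f up to and including its first zero,
    and is constantly 1 afterwards."""
    out, seen_zero = [], False
    for value in prefix:
        out.append(1 if seen_zero else value)
        seen_zero = seen_zero or value == 0
    return out

# LLPO from LLPOmin: apply an LLPOmin realizer to [h(x) for x in g], then w.
g = [0, 4, 0, 9, 0, 1]            # an LLPO instance: zero on every even position
print([h(x) for x in g])          # [1, 0, 1, 0, 1, 0]: zero exactly where g is nonzero

# LLPOmin from LLPO: apply an LLPO realizer to [h(x) for x in J(f)], then w.
f = [3, 5, 0, 7, 0, 2]
print([h(x) for x in J(f)])       # [0, 0, 1, 0, 0, 0]: a single 1 at the first zero of f
```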
Results of Weihrauch analysis include \(\mathsf{LLPO}<_{W}\mathsf{LPO}\) and the parallelized form \(\widehat{\mathsf{LLPO}}<_{W}\widehat{\mathsf{LPO}}\). See Weihrauch [15, §4] and Brattka and Gherardi [2, Theorem 7.13] for proofs. Consequently, the following result may initially be surprising. The underlying difference is that Weihrauch reducibility requires a single reduction that works for all realizers; the argument below breaks into cases depending on the behavior of the realizer.
**Proposition 4.2** (\(\mathsf{RCA}_{0}^{\omega}\)).: (\(\mathsf{LLPO}\)) implies \((\exists^{2})\).
Proof.: Working in \(\mathsf{RCA}_{0}^{\omega}\), by Proposition 4.1, it is sufficient to assume the existence of \(R=R_{\mathsf{LLPO}\mathsf{min}}\) as provided by (\(\mathsf{LLPO}\mathsf{min}\)), and prove \((\exists^{2})\) holds.
Let \(f=\langle 1,1,1\dots\rangle\) be the infinite sequence of ones. Our goal is to show that \(R\) is sequentially discontinuous at \(f\). We will construct a sequence \(\langle g_{n}\rangle\) such that \(\lim_{n\to\infty}g_{n}=f\) and for each \(n\), \(R(g_{n})\) disagrees with \(R(f)\). In particular, if \(R(f)=1\), we want \(R(g_{n})=0\) for all \(n\), so we define \(g_{n}\) as a sequence of \(2+2n\) ones followed by all zeros. On the other hand, if \(R(f)=0\), we want \(R(g_{n})=1\) for all \(n\), so we define \(g_{n}\) as a sequence of \(1+2n\) ones followed by all zeros. Summarizing, for each \(n\) and \(m\) we have
\[g_{n}(m)=\begin{cases}1,&\text{if }m<1+R(f)+2n,\\ 0,&\text{otherwise}.\end{cases}\]
Note that \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of the sequence \(\langle g_{n}\rangle\), that \(\lim_{n\to\infty}g_{n}=f\), and that for all \(n\), \(R(g_{n})\neq R(f)\). Thus \(R\) is sequentially discontinuous and \((\exists^{2})\) follows by Proposition 3.7 of Kohlenbach [9] (see Proposition 6.1).
The proof of Kohlenbach's proposition is based on the proof of Lemma 1 of Grilliot [5]. We append that argument here to give a direct derivation of \((\exists^{2})\) from (\(\mathsf{LLPO}\mathsf{min}\)). Let the function \(f\) and the sequence \(\langle g_{n}\rangle\) be defined as above. \(\mathsf{RCA}_{0}^{\omega}\) suffices to prove the existence of the operator \(J\colon\mathbb{N}^{\mathbb{N}}\to 2^{\mathbb{N}}\) defined for \(h\colon\mathbb{N}\to\mathbb{N}\) and \(j\in\mathbb{N}\) by
\[J(h)(j)=\begin{cases}1,&\text{if }\ (\forall x\leq j)[h(x)\neq 0],\\ g_{i}(j),&\text{if }\ i\leq j\wedge i=(\mu t)[h(t)=0].\end{cases}\]
Note that \(J(h)=f\) if \((\forall x)[h(x)\neq 0]\). On the other hand, if \(i\) is the least value for which \(h(i)=0\), then \(J(h)=g_{i}\). Consequently, for all \(h\colon\mathbb{N}\to\mathbb{N}\), \(R(J(h))=R(f)\) if and only if \((\forall x)[h(x)\neq 0]\). Thus \(R_{\mathsf{LPO}}(h)=1-|R(J(h))-R(f)|\), so the existence of \(R_{\mathsf{LPO}}\) follows by \(\mathsf{RCA}_{0}^{\omega}\).
**Theorem 4.3** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. (\(\mathsf{LLPO}\)).
Proof.: To show that (1) implies (2), as noted in Proposition 1.5, Proposition 3.9 of Kohlenbach [9] shows that the principle \((\exists^{2})\) proves the existence of Feferman's \(\mu\) functional which satisfies:
\[(\forall f)[(\exists x)(f(x)=0)\to[f(\mu(f))=0\wedge(\forall t<\mu(f))(f(t) \neq 0)]]\]
The remainder function \(\mathsf{rm}(n,2)\) yielding the remainder of dividing \(n\) by \(2\) is primitive recursive. Thus \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of the composition functional \(\mathsf{rm}(\mu(f),2)\), which satisfies the definition of (\(\mathsf{LLPOmin}\)); the principle (\(\mathsf{LLPO}\)) then follows by Proposition 4.1.
The converse was proved as Proposition 4.2 above.
For an alternative proof of the forward implication of Theorem 4.3, we can use a formalized Weihrauch reducibility result. The next two results illustrate this process. The following proposition converts formal Weihrauch reducibility to proofs of implications of Skolemized functional existence principles.
**Proposition 4.4**.: If \(\mathsf{P}\) and \(\mathsf{Q}\) are problems and (\(\mathsf{P}\)) and (\(\mathsf{Q}\)) are the associated Skolemized functional existence principles, then \(\mathsf{RCA}_{0}^{\omega}\) proves
\[P\leq_{\mathsf{W}}\mathsf{Q}\to((\mathsf{Q})\to(\mathsf{P})).\]
Proof.: Working in \(\mathsf{RCA}_{0}^{\omega}\), suppose \(P\leq_{\mathsf{W}}\mathsf{Q}\) and (\(\mathsf{Q}\)) hold. Let \(\varphi\) and \(\psi\) be the functionals witnessing \(P\leq_{\mathsf{W}}\mathsf{Q}\) and let \(R_{\mathsf{Q}}\) witness (\(\mathsf{Q}\)). \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of the composition \(\psi(R_{\mathsf{Q}}(\varphi(x)),x)\), which can be directly shown to realize the principle (\(\mathsf{P}\)).
Next, we prove the formalized Weihrauch reducibility result corresponding to the forward implication of Theorem 4.3.
**Proposition 4.5**.: \(\mathsf{RCA}_{0}^{\omega}\) proves \(\mathsf{LLOO}\leq_{\mathrm{sW}}\mathsf{LPO}\).
Proof.: We again work with \(\mathsf{LLPOmin}\) in place of \(\mathsf{LLPO}\). Let \(\varphi\colon\mathbb{N}^{\mathbb{N}}\to 2^{\mathbb{N}}\) be the preprocessing functional defined by:
\[\varphi(h)(n)=\begin{cases}1,&\text{if $(\forall t\leq n)(h(t)>0)$,}\\ 1,&\text{if $(\exists t\leq n)(h(t)=0)$ and the least such $t$ is odd,}\\ 0,&\text{if $(\exists t\leq n)(h(t)=0)$ and the least such $t$ is even.}\end{cases}\]
Note that \(\varphi(h)\) is constantly \(1\) unless the first zero of \(h\) occurs at an even position, in which case \(\varphi(h)\) is \(0\) from that position on. Define the postprocessing functional \(\psi(h,n)=n\). If \(R_{\mathsf{LPO}}\) is a realizer for \(\mathsf{LPO}\), then \(\psi(h,R_{\mathsf{LPO}}(\varphi(h)))\) is a realizer for \(\mathsf{LLPOmin}\). Because \(\psi\) makes no use of \(h\), this shows that \(\mathsf{LLPOmin}\leq_{\mathrm{sW}}\mathsf{LPO}\). Applying Proposition 4.1, we see that \(\mathsf{LLPO}\leq_{\mathrm{sW}}\mathsf{LPO}\).
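As an informal illustration (on finite prefixes only, and not part of the formal argument), the preprocessing functional \(\varphi\) can be sketched as follows.

```python
def phi(prefix):
    """Preprocessing from Proposition 4.5 on a finite prefix: the output is 0
    at position n exactly when the first zero of h occurs at an even position
    <= n, and 1 otherwise."""
    out, first_zero = [], None
    for n, value in enumerate(prefix):
        if first_zero is None and value == 0:
            first_zero = n
        out.append(0 if (first_zero is not None and first_zero % 2 == 0) else 1)
    return out

print(phi([2, 7, 0, 5]))   # [1, 1, 0, 0]: first zero of h at the even position 2
print(phi([2, 0, 3, 0]))   # [1, 1, 1, 1]: first zero of h at the odd position 1
```

Applying a \(\{0,1\}\)-valued \(\mathsf{LPO}\) realizer to \(\varphi(h)\) then returns \(0\) in the first case and \(1\) in the second, matching the parity of the first zero of \(h\).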
As mentioned before, the forward implication of Theorem 4.3 follows immediately from Proposition 4.5 and Proposition 4.4. The next result uses Theorem 4.3 to give a short proof of one direction of Proposition 3.4 of Kohlenbach [8], showing that a uniform version of weak Konig's lemma is equivalent to \((\exists^{2})\). This equivalence is also included in Proposition 3.9 of Kohlenbach [9]. This equivalence will be helpful in the analysis of Banach's Theorem for \(\mathbb{N}\) in the next section. (For a discussion of uniform WWKL see Theorem 3.2 of Sakamoto and Yamazaki [13].)
**Proposition 4.6** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. (\(\mathsf{WKL}\)) There is a functional \(\mathsf{WKL}\colon\mathbb{N}^{\mathbb{N}}\to 2^{\mathbb{N}}\) such that if \(T\) is a code for an infinite tree in \(2^{\mathbb{N}}\), then \(\mathsf{WKL}(T)\) is an infinite path in \(T\).
Proof.: As noted in the proof of Proposition 3.4 of Kohlenbach [8], the proof that (1) implies (2) follows from the fact that, given the functional \(\exists^{2}\), primitive recursion can define a functional which selects an infinite branch of an infinite binary tree. For a short proof of the converse, it suffices to show that (\(\mathsf{WKL}\)) implies (\(\mathsf{LLPOmin}\)). Consider an instance \(f\colon\mathbb{N}\to 2\) for \(\mathsf{LLPOmin}\). Let \(\langle 1\rangle_{n}\) denote the sequence of \(n\) ones. Define the \(0\)-\(1\) tree \(T_{f}\) by:
* Only sequences of the form \(0^{\smallfrown}\langle 1\rangle_{n}\) and \(1^{\smallfrown}\langle 1\rangle_{n}\) are in \(T_{f}\),
* \(0^{\smallfrown}\langle 1\rangle_{n}\in T_{f}\) if and only if either \((\forall t\leq n)[f(t)\neq 0]\) or \((\mu t\leq n)[f(t)=0]\) is even, and
* \(1^{\smallfrown}\langle 1\rangle_{n}\in T_{f}\) if and only if either \((\forall t\leq n)[f(t)\neq 0]\) or \((\mu t\leq n)[f(t)=0]\) is odd.
\(\mathsf{RCA}_{0}^{\omega}\) can prove the existence of a functional \(\varphi\) that maps each \(f\) to \(T_{f}\). For each \(f\), the first element of \(\mathsf{WKL}(\varphi(f))\) is an \(\mathsf{LLPOmin}\) solution for \(f\).
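To make the construction concrete, the sketch below (an informal finite illustration, not part of the formal proof) tests membership of strings of the form \(b^{\smallfrown}\langle 1\rangle_{n}\) in \(T_{f}\), using a finite prefix of \(f\).

```python
def in_tree(sigma, f):
    """Membership in T_f (Proposition 4.6) for sigma = [b] + [1]*n with b in
    {0,1}; f is a finite prefix of the LLPOmin instance, assumed long enough
    that f[t] is defined for all t <= n."""
    if not sigma:
        return True
    b, tail = sigma[0], sigma[1:]
    if b not in (0, 1) or any(x != 1 for x in tail):
        return False
    n = len(tail)
    zeros = [t for t in range(n + 1) if f[t] == 0]
    if not zeros:
        return True                      # (forall t <= n) f(t) != 0
    return zeros[0] % 2 == b             # parity of the least zero matches b

f = [1, 1, 1, 0, 1, 0]                   # first zero at the odd position 3
print([in_tree([0] + [1] * n, f) for n in range(5)])  # [True, True, True, False, False]
print([in_tree([1] + [1] * n, f) for n in range(5)])  # [True, True, True, True, True]
```

Only the branch beginning with \(1\) survives, so the first element of any infinite path is the parity of the least zero of \(f\).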
## 5 Banach's Theorem on \(\mathbb{N}\)
This section reformulates Theorems 1.3 and 1.4 as higher order functional existence statements. In particular, Theorem 5.8 shows that, in the Skolemized higher order setting, the bounded version is equivalent to the unbounded version. This collapse mimics that of the uniform principle (\(\mathsf{WKL}\)). Our discussion begins with the formulation of the bounded principle and its proof from (\(\mathsf{WKL}\)).
**Definition 5.1**.: A bounded Banach functional \(\mathsf{bB}_{\mathbb{N}}\) on \(\mathbb{N}\) is defined as follows. For injective functions \(f_{0}\colon\mathbb{N}\to\mathbb{N}\) and \(f_{1}\colon\mathbb{N}\to\mathbb{N}\) with bounding functions \(b_{0}\) and \(b_{1}\), \(\mathsf{bB}_{\mathbb{N}}(f_{0},f_{1},b_{0},b_{1})\) is a bijective function \(h\colon\mathbb{N}\to\mathbb{N}\) such that for all \(m,n\in\mathbb{N}\), \(h(m)=n\) implies \(f_{0}(m)=n\) or \(f_{1}(n)=m\). As usual, the parenthesized expression \((\mathsf{bB}_{\mathbb{N}})\) denotes the principle asserting the existence of a bounded Banach functional for \(\mathbb{N}\).
**Proposition 5.2** (\(\mathsf{RCA}_{0}^{\omega}\)).: (WKL) implies \((\mathsf{bB}_{\mathbb{N}})\).
Proof.: We work in \(\mathsf{RCA}_{0}^{\omega}\). For bounded injections \(\vec{f}=\langle f_{0},f_{1},b_{0},b_{1}\rangle\), we will describe the computation of a related tree \(T_{\vec{f}}\) so that any infinite path through \(T_{\vec{f}}\) defines a bijection \(h\) satisfying Banach's theorem. If \(p\) is an infinite path through \(T_{\vec{f}}\), the bijection \(h\) will be defined by:
\[h(n)=\begin{cases}f_{0}(n),&\text{if }p(n)=0,\\ (\mu\,t\leq b_{1}(n))(f_{1}(t)=n),&\text{if }p(n)=1.\end{cases}\]
A finite sequence \(\sigma\in 2^{<\mathbb{N}}\) is included in \(T_{\vec{f}}\) if it satisfies the following four conditions, (i)-(iv), each ensuring an aspect of the back-and-forth construction of the bijection \(h\).
First, if there is an \(m<\operatorname{length}(\sigma)\) which is not in the range of \(f_{1}\), we ensure that \(h(m)=f_{0}(m)\).
(i) If \(m<\operatorname{length}(\sigma)\) and \((\forall t\leq b_{1}(m))[f_{1}(t)\neq m]\) then \(\sigma(m)=0\).
Next, if \(m\) is \(f_{1}(t)\) for some \(t\) and \(t\) is not in the range of \(f_{0}\), we set \(h(m)=t\) in the following fashion.
(ii) If \(m<\operatorname{length}(\sigma)\), there is a \(t\leq b_{1}(m)\) such that \(f_{1}(t)=m\), and \((\forall s\leq b_{0}(t))[f_{0}(s)\neq t]\), then \(\sigma(m)=1\).
The next clause ensures that \(h\) is injective.
(iii) If \(m,n<\operatorname{length}(\sigma)\), \(\sigma(m)=0\), and \(\sigma(n)=1\), then \(f_{1}(f_{0}(m))\neq n\).
This final clause ensures that \(h\) is surjective.
(iv) If \(m,n<\operatorname{length}(\sigma)\), \(\sigma(m)=0\), and \(\sigma(n)=1\), then \(f_{1}(f_{0}(n))\neq m\).
The sequences satisfying the clauses are closed under initial segments, so \(T_{\vec{f}}\) is a tree. The second order proof of the bounded Banach theorem in \(\mathsf{WKL}_{0}\) (Theorem 1.4) shows that \(T_{\vec{f}}\) is infinite. The construction of \(T_{\vec{f}}\) terminates for arbitrary choices of \(\vec{f}\), even if \(f_{0}\) and \(f_{1}\) are not injections or if \(b_{0}\) or \(b_{1}\) gives incorrect bounding information. Thus \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of a functional mapping \(\vec{f}\) to \(T_{\vec{f}}\). Whenever \(T_{\vec{f}}\) is infinite, \(\mathsf{WKL}(T_{\vec{f}})\) yields an infinite path. Concatenating these functionals with the one computing the bijection \(h\) as described at the beginning of the proof yields the desired Banach functional.
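As an informal illustration of the clauses (not part of the formal development), the sketch below implements the membership test for \(T_{\vec{f}}\) and the extraction of \(h\) from a path prefix, and tries them on the shift injections \(f_{0}(n)=f_{1}(n)=n+1\) with the bounding functions \(b_{0}(n)=b_{1}(n)=n\).

```python
# Illustrative finite sketch of the tree from Proposition 5.2.

def in_T(sigma, f0, f1, b0, b1):
    """Check clauses (i)-(iv) for a finite 0-1 string sigma."""
    L = len(sigma)
    for m in range(L):
        witnesses = [t for t in range(b1(m) + 1) if f1(t) == m]
        if not witnesses and sigma[m] != 0:
            return False                                      # clause (i)
        if witnesses:
            t = witnesses[0]                                  # unique if f1 is injective
            if all(f0(s) != t for s in range(b0(t) + 1)) and sigma[m] != 1:
                return False                                  # clause (ii)
    for m in range(L):
        for n in range(L):
            if sigma[m] == 0 and sigma[n] == 1:
                if f1(f0(m)) == n:
                    return False                              # clause (iii)
                if f1(f0(n)) == m:
                    return False                              # clause (iv)
    return True

def h_from_path(p, f0, f1, b1):
    """Read off h from a path prefix p: h(n) = f0(n) when p(n) = 0, and the
    f1-preimage of n when p(n) = 1 (assumes the prefix respects clause (i))."""
    return [f0(n) if bit == 0 else next(t for t in range(b1(n) + 1) if f1(t) == n)
            for n, bit in enumerate(p)]

f0 = f1 = (lambda n: n + 1)                  # both injections shift by one
b0 = b1 = (lambda n: n)                      # correct bounding functions
print(in_T([0, 1, 0, 1], f0, f1, b0, b1))    # True
print(in_T([0, 0], f0, f1, b0, b1))          # False: clause (ii) forces sigma(1) = 1
print(h_from_path([0, 1, 0, 1], f0, f1, b1)) # [1, 0, 3, 2], a bijection on {0, ..., 3}
```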
The preceding proposition differs from the second order analog (Theorem 1.4) in the formulation of the bounding functions. The original second order version was formulated with characteristic functions for the ranges of the injections. However, in the calculation of \(T_{\vec{f}}\), the use of an incorrect characteristic function could result in an unbounded nonterminating search, causing the functional mapping \(\vec{f}\) to \(T_{\vec{f}}\) to be undefined on some inputs. This difficulty could be circumvented by using the fact that the uniform principle (WKL) implies \((\exists^{2})\), but the argument presented here uses a single application of (WKL).
Our proof of the unbounded version of Banach's Theorem for \(\mathbb{N}\) from \((\exists^{2})\) uses a proposition relating \((\exists^{2})\) to the existence of bounding functions.
**Definition 5.3**.: The functional \(\mathsf{b}\colon\mathbb{N}^{\mathbb{N}}\to\mathbb{N}^{\mathbb{N}}\) maps any function \(f\colon\mathbb{N}\to\mathbb{N}\) to a bounding function \(\mathsf{b}(f)\) for \(f\). In the notation of Section 3, for all \(f\) we have \(\beta(f,\mathsf{b}(f))\). As usual, the parenthesized expression (\(\mathsf{b}\)) denotes the principle asserting the existence of a bounding functional.
**Lemma 5.4** (\(\mathsf{RCA}_{0}^{\omega}\)).: \((\exists^{2})\) implies (\(\mathsf{b}\)).
Proof.: \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of a functional \(\mathsf{z}\colon\mathbb{N}^{\mathbb{N}}\times\mathbb{N}\to 2^{\mathbb{N}}\) where \(\mathsf{z}(f,n)=g\) satisfies \(g(m)=0\) if and only if \(f(m)=n\). By Proposition 1.5, \((\exists^{2})\) proves the existence of Kohlenbach's \(\mu_{0}\). By composition and \(\lambda\) abstraction, there is a functional \(\mathsf{b}\) mapping \(f\) to the bounding function \(\lambda n.\mu_{0}(\mathsf{z}(f,n))\).
The next definition formulates an unbounded form of Banach's theorem on \(\mathbb{N}\). Using the principle (\(\mathsf{b}\)), the unbounded form can be derived from the bounded form.
**Definition 5.5**.: A Banach functional on \(\mathbb{N}\), denoted \(\mathsf{B}_{\mathbb{N}}\), is defined as follows. For injective functions \(f_{0}\colon\mathbb{N}\to\mathbb{N}\) and \(f_{1}\colon\mathbb{N}\to\mathbb{N}\), \(\mathsf{B}_{\mathbb{N}}(f_{0},f_{1})\) is a bijective function \(h\colon\mathbb{N}\to\mathbb{N}\) such that for all \(m,n\in\mathbb{N}\), \(h(m)=n\) implies \(f_{0}(m)=n\) or \(f_{1}(n)=m\). As usual, the parenthesized expression (\(\mathsf{B}_{\mathbb{N}}\)) denotes the principle asserting the existence of a Banach functional for \(\mathbb{N}\).
**Proposition 5.6** (\(\mathsf{RCA}_{0}^{\omega}\)).: \((\exists^{2})\) implies (\(\mathsf{B}_{\mathbb{N}}\)).
Proof.: Working in \(\mathsf{RCA}_{0}^{\omega}\), assume \((\exists^{2})\). By Proposition 4.6, we have (WKL), so by Proposition 5.2, we have (\(\mathsf{bB}_{\mathbb{N}}\)). By Lemma 5.4, we have the functional \(\mathsf{b}\) mapping functions to associated bounding functions. The composition functional \(\mathsf{bB}(f_{0},f_{1},\mathsf{b}(f_{0}),\mathsf{b}(f_{1}))\) satisfies (\(\mathsf{B}_{\mathbb{N}}\)).
The next proposition essentially shows that the principle \((\exists^{2})\) can be deduced from the restricted form of Banach's theorem for \(\mathbb{N}\).
**Proposition 5.7** (\(\mathsf{RCA}_{0}^{\omega}\)).: (\(\mathsf{bB}_{\mathbb{N}}\)) implies (\(\mathsf{LLPO}\)).
Proof.: Assume \(\mathsf{RCA}_{0}^{\omega}\) and \((\mathsf{bB}_{\mathbb{N}})\). Our goal is to prove the existence of the functional \(\mathsf{LLPOmin}\). Let \(g\colon\mathbb{N}\to\mathbb{N}\) and define bounded injections \(f_{0}\) and \(f_{1}\) as follows.
Figure 1 illustrates the construction of \(f_{0}\) and \(f_{1}\) for three choices of \(g\). In general, for even inputs like \(n=2m\), let \(f_{0}(n)=n+2\) and \(f_{1}(n)=n\). For \(n=1\), let \(f_{0}(1)=0\) and \(f_{1}(1)=1\). The appearance of \(0\) in the range of \(g\) affects the definitions of \(f_{0}\) and \(f_{1}\) on other odd values. Suppose that \(n=2m+3\). If \((\forall t\leq n-2)[g(t)\neq 0]\), then let \(f_{0}(n)=n-2\) and \(f_{1}(n)=n\). If \((\exists t\leq n-2)[g(t)=0]\), write \(s=(\mu t)[g(t)=0]\). If \(s\) is even, then let
\[f_{0}(n)=\begin{cases}n-2,&\text{if $s=n-3$},\\ n,&\text{if $s<n-3$},\end{cases}\]
and let \(f_{1}(n)=n+2\). If \(s\) is odd, then let
\[f_{0}(n)=\begin{cases}n-2,&\text{if $s=n-2$},\\ n,&\text{if $s<n-2$},\end{cases}\]
and
\[f_{1}(n)=\begin{cases}n,&\text{if $s=n-2$},\\ n+2,&\text{if $s<n-2$}.\end{cases}\]
In the figure, \(f_{0}\) is represented by solid arrows and \(f_{1}\) by dashed arrows. Extending the chains to the left and right, each number has an exiting arrow, so both \(f_{0}\) and \(f_{1}\) are total. No number has two entering arrows, so \(f_{0}\) and \(f_{1}\) are injective.
Figure 1(a) corresponds to the situation when \(0\) does not appear in the range of \(g\). Any bijection \(h\) satisfying Banach's theorem must either consist of all the (inverses of the) dashed arrows or all the solid arrows. In this situation, \(h(1)\) may be \(0\) or \(1\).
Figure 1(b) corresponds to the case when \(g(2)=0\) is the first zero in the range of \(g\). In this case, \(5\) is not in the range of \(f_{1}\), so \(h(5)=f_{0}(5)=3\). The only bijection satisfying Banach's theorem consists of solid arrows to the left of \(5\), so \(h(1)=0\).
Figure 1(c) is for the case when \(g(3)=0\) is the first zero in the range of \(g\). Here \(5\) is not in the range of \(f_{0}\), so the only way for \(5\) to appear in the range of the bijection \(h\) is \(h(5)=f_{1}^{-1}(5)=5\). The only bijection satisfying Banach's theorem consists of (inverses of the) dashed arrows to the left of \(5\), and so \(h(1)=1\). If \(0\) first appears in the range of \(g\) at an even value, the figure for \(f_{0}\) and \(f_{1}\) will be a shifted version of the second figure. Odd values yield a shifted version of the third figure.
Because \(f_{0}(n)\) is never less than \(n-2\), \(b_{0}(n)=n+2\) is a bounding function for \(f_{0}\). Similarly, \(f_{1}(n)\) is never less than \(n\), so \(b_{1}(n)=n\) is a bounding function for \(f_{1}\). Routine verifications show that for any choice of \(g\), \(f_{0}\) and \(f_{1}\) will be injections bounded by \(b_{0}\) and \(b_{1}\). Suppose that \(h\) is any bijection satisfying Banach's theorem for \(f_{0}\), \(f_{1}\), \(b_{0}\), and \(b_{1}\). If the first \(0\) in the range
of \(g\) occurs at an even value, then \(h(1)=0\). If it occurs at an odd value, then \(h(1)=1\). If \(0\) is not in the range of \(g\), then \(h(1)\) may be either \(0\) or \(1\). \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of the functional mapping \(g\) to the bounded injections \(\vec{f}_{g}=\langle f_{0},f_{1},b_{0},b_{1}\rangle\) as defined above. The functional \(\mathsf{bB}_{\mathbb{N}}(\vec{f}_{g})\) yields the bijection \(h\) for \(\vec{f}_{g}\). Consequently, the functional mapping \(g\) to \(\mathsf{bB}_{\mathbb{N}}(\vec{f}_{g})(1)\) (which equals \(h(1)\)) is a realizer for \(\mathsf{LLPOmin}\).
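The case analysis defining \(f_{0}\) and \(f_{1}\) is computable from a finite prefix of \(g\). The sketch below is an informal illustration only (it needs \(g(t)\) to be defined for all \(t\leq n-2\)); its output can be checked against the discussion of Figure 1(b).

```python
# Illustrative sketch of the injections f0 and f1 constructed from g in the
# proof of Proposition 5.7, on a finite prefix of g.

def first_zero(g, bound):
    """Least t <= bound with g(t) = 0, or None if there is no such t."""
    return next((t for t in range(bound + 1) if g[t] == 0), None)

def f0_f1(n, g):
    """Return the pair (f0(n), f1(n))."""
    if n % 2 == 0:
        return n + 2, n
    if n == 1:
        return 0, 1
    s = first_zero(g, n - 2)
    if s is None:
        return n - 2, n
    if s % 2 == 0:                       # first zero of g at an even position
        return (n - 2 if s == n - 3 else n), n + 2
    if s == n - 2:                       # first zero of g at an odd position
        return n - 2, n
    return n, n + 2

g = [1, 1, 0, 1, 1, 1]                   # first zero of g at the even position 2
print([f0_f1(n, g) for n in range(8)])
# f0(5) = 3 and f0(1) = 0, matching the discussion of Figure 1(b).
```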
Concatenating the preceding arguments yields the desired equivalence theorem and concludes the section.
**Theorem 5.8** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. \((\mathsf{B}_{\mathbb{N}})\).
3. \((\mathsf{bB}_{\mathbb{N}})\).
Proof.: Proposition 5.6 shows that (1) implies (2). Because \((\mathsf{bB}_{\mathbb{N}})\) is a restriction of \((\mathsf{B}_{\mathbb{N}})\), (2) implies (3) is immediate. Proposition 5.7 and Theorem 4.3 show that (3) implies (1).
## 6 Banach's Theorem on compact spaces
Our next goal is to analyze the strength of Banach's theorem restricted to uniformly continuous functions on complete separable metric spaces. We formalize complete separable metric spaces in the manner of Simpson [14, II.5].
Figure 1: Construction for Proposition 5.7. (a): \(f_{0}\) (solid) and \(f_{1}\) (dashed) when \(0\) is not in the range of \(g\). (b): \(f_{0}\) (solid) and \(f_{1}\) (dashed) when \(g(2)=0\). (c): \(f_{0}\) (solid) and \(f_{1}\) (dashed) when \(g(3)=0\).
The space \(\hat{A}\) is the collection of rapidly converging sequences of elements of an underlying (countable) set \(A\). The metric is a function \(d\colon A\times A\to\mathbb{R}\), extended to \(\hat{A}\) by defining \(d(\langle a_{i}\rangle_{i\in\mathbb{N}},\langle a^{\prime}_{i}\rangle_{i\in \mathbb{N}})=\langle d(a_{i},a^{\prime}_{i})\rangle_{i\in\mathbb{N}}\). As in Definition III.2.3 of Simpson [14], a space is compact if there is an infinite sequence of finite sequences of points of \(\hat{A}\) of the form \(\langle\langle x_{ij}:i\leq n_{j}\rangle:j\in\mathbb{N}\rangle\), such that for all \(z\in\hat{A}\) and \(j\in\mathbb{N}\), there is an \(i\leq n_{j}\) such that \(d(x_{ij},z)<2^{-j}\).
Uniform continuity can be witnessed by a modulus of uniform continuity as formalized in Definition IV.2.1 of Simpson [14]. The function \(h\colon\mathbb{N}\to\mathbb{N}\) is a modulus of uniform continuity for \(f\) if for all \(k\), \(|x-y|<2^{-h(k)}\) implies \(|f(x)-f(y)|<2^{-k}\). If \(h_{f}\) is a modulus of uniform continuity for \(f\) and \(h_{g}\) is a modulus of uniform continuity for \(g\), then \(h\) defined by \(h(n)=\max\{h_{f}(n),h_{g}(n)\}\) is a modulus of uniform continuity for \(f\) and \(g\). Consequently, a joint modulus can be used to simplify some statements.
Kohlenbach [9] defines two equivalent forms of continuity for functionals of type \(1\to 1\). First, \(C^{1\to 1}\) is everywhere sequentially continuous if ([9, Definition 3.3]):
\[(\forall g^{1})(\forall\langle g_{n}\rangle)[\lim_{n\to\infty}g_{n}=g\to\lim_ {n\to\infty}C(g_{n})=C(g)].\]
Second, \(C^{1\to 1}\) is everywhere \(\varepsilon\)-\(\delta\) continuous if ([9, Definition 3.5]):
\[(\forall g^{1})(\forall k)(\exists n)(\forall h^{1})[d(g,h)<2^{-n}\to d(C(g), C(h))<2^{-k}].\]
This second definition is similar to familiar textbook definitions of continuity for total functions. The use of \(n\) and \(k\) reduces the type of the quantifiers corresponding to \(\delta\) and \(\varepsilon\). Proposition 3.6 of Kohlenbach [9] proves in \(\mathsf{RCA}^{\omega}_{0}\) that \(C\) is sequentially continuous if and only if \(C\) is \(\varepsilon\)-\(\delta\) continuous.
The following portion of Proposition 3.7 of Kohlenbach [9] is very useful in proving reversals.
**Proposition 6.1** ([9, Proposition 3.7]).: The following are equivalent over \(\mathsf{RCA}^{\omega}_{0}\):
1. \((\exists^{2})\).
2. There is a functional which is not everywhere sequentially continuous.
3. There is a functional which is not everywhere \(\varepsilon\)-\(\delta\) continuous.
For uniformly continuous functionals on compact complete separable metric spaces, it is possible to find ranges using only \((\exists^{2})\). Indeed, the next two lemmas show that for Cantor space the existence of such a range functional is equivalent to \((\exists^{2})\), a higher order analog of Lemma III.1.3 of Simpson [14].
**Lemma 6.2** (\(\mathsf{RCA}^{\omega}_{0}+(\exists^{2})\)).: Suppose \(X\) is a compact complete separable metric space. There is a functional \(R\) such that if \(f\colon X\to X\) is a function with modulus of uniform continuity \(h\), \(R(f,h)\) is the characteristic function of the range of \(f\). That is, for all \(y\in X\), \(R(f,h)(y)\in\{0,1\}\) and \(R(f,h)(y)=1\) if and only if \((\exists x\in X)[f(x)=y]\).
Proof.: Working in \(\mathsf{RCA}_{0}^{\omega}\), let \(X\) be as hypothesized, with the compactness of \(X\) witnessed by \(\langle\langle x_{ij}:i\leq n_{j}\rangle:j\in\mathbb{N}\rangle\). Consider a function \(f\colon X\to X\) with modulus of uniform continuity \(h\). Informally, a value \(y\in X\) is in the range of \(f\) if and only if for every \(m\) there is an \(x\) with \(d(f(x),y)<2^{-m}\). By uniform continuity and compactness, such an \(x\) exists if and only if there is an \(i\leq n_{h(m)}\) such that \(d(f(x_{ih(m)}),y)<2^{-m}\). In \(\mathsf{RCA}_{0}^{\omega}\), for \(f\colon X\to X\) and \(h\colon\mathbb{N}\to\mathbb{N}\), we may define
\[K(f,h,y,m)=\begin{cases}1,&\text{if }(\exists i\leq n_{h(m)})[d(f(x_{ih(m)}),y )<2^{-m}]\\ 0,&\text{otherwise.}\end{cases}\]
Viewing \(K\) as a function in \(m\) with parameters \(f\), \(h\), and \(y\), using \((\exists^{2})\) we may define \(R(f,h)(y)=\varphi(K(f,h,y,m))\). Informally, by the definition of \(\varphi\), \(R(f,h)(y)=1\) if and only if for all \(m\) there is an \(x\) with \(d(f(x),y)<2^{-m}\). Note that the termination of the calculation of \(R(f,h)\) does not depend on the continuity of \(f\) or the correctness of \(h\).
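To make the construction concrete, here is a rough Python sketch for the special case of Cantor space, with the padding map (insert a \(0\) after every entry) as \(f\) and \(h(k)=k\) as its modulus. The choice of witness net, the reading of \(d(x,y)<2^{-m}\) as agreement on entries \(0,\dots,m\), and the replacement of the genuine \((\exists^{2})\) oracle \(\varphi\) by a finite-depth check are all simplifying assumptions of the sketch, so it only approximates the functional \(R\) of the lemma.

```python
from itertools import product

DEPTH = 6   # finite stand-in for phi: only K(f, h, y, m) for m < DEPTH is inspected

def agree(x, y, upto):
    """Here d(x, y) < 2**-upto is read as: x and y agree on entries 0..upto."""
    return all(x(i) == y(i) for i in range(upto + 1))

def point(bits):
    """The element of Cantor space given by a finite 0/1 tuple followed by zeros."""
    return lambda i: bits[i] if i < len(bits) else 0

def witnesses(level):
    """A finite 2**-level net: all binary strings of length level + 1, padded with zeros."""
    return [point(bits) for bits in product((0, 1), repeat=level + 1)]

def pad(x):
    """The padding map P (modulus h(k) = k): interleave a 0 after every entry of x."""
    return lambda i: x(i // 2) if i % 2 == 0 else 0

def K(f, h, y, m):
    """1 iff some level-h(m) witness x satisfies d(f(x), y) < 2**-m."""
    return 1 if any(agree(f(x), y, m) for x in witnesses(h(m))) else 0

def R_approx(f, h, y):
    """Finite-depth version of R(f, h)(y) = phi applied to m -> K(f, h, y, m)."""
    return 1 if all(K(f, h, y, m) == 1 for m in range(DEPTH)) else 0

h = lambda k: k
ones = lambda i: 1
print(R_approx(pad, h, pad(ones)))  # 1: (1,0,1,0,...) is (apparently) in the range of P
print(R_approx(pad, h, ones))       # 0: (1,1,1,...) has a 1 in an odd slot, so it is not
```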
To complete our proof, we must verify in \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\) our informal claim that for each continuous function \(f\) with modulus of uniform continuity \(h\), \(R(f,h)\) is the characteristic function for the range of \(f\). First, if \(R(f,h)(y)=1\), then there is a sequence \(\langle x^{\prime}_{i_{m}}\rangle\) such that for every \(m\), \(d(f(x^{\prime}_{i_{m}}),y)<2^{-m}\). The principle \((\exists^{2})\) implies \(\mathsf{ACA}_{0}\) which implies the Bolzano-Weierstrass theorem (see Theorem III.2.7 of Simpson [14]), so we can thin \(\langle x^{\prime}_{i_{m}}\rangle\) to a sequence converging to some \(x\in X\). By sequential continuity of \(f\), we have \(f(x)=y\).
Second, if \(R(f,h)(y)=0\), then for some natural number \(m\), we must have \((\forall i\leq n_{h(m)})[d(f(x_{ih(m)}),y)\geq 2^{-m}]\). Suppose by way of contradiction that \(f(x)=y\) for some \(x\in X\). Choose \(i\leq n_{h(m)}\) such that \(d(x,x_{ih(m)})<2^{-h(m)}\). Because \(f\) is uniformly continuous, \(d(f(x_{ih(m)}),f(x))<2^{-m}\). Concatenating inequalities, we have
\[2^{-m}\leq d(f(x_{ih(m)}),y)=d(f(x_{ih(m)}),f(x))<2^{-m},\]
a contradiction. Thus, \(R(f,h)(y)=0\) implies \((\forall x\in X)[f(x)\neq y]\), completing the proof.
**Lemma 6.3** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. If \(X\) is a compact complete separable metric space, then there is a functional \(R\) such that if \(f\colon X\to X\) is a function with modulus of uniform continuity \(h\), \(R(f,h)\) is the characteristic function of the range of \(f\).
3. There is a functional \(R\) such that if \(f\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}\) is a function with modulus of uniform continuity \(h\), \(R(f,h)\) is the characteristic function of the range of \(f\).
Proof.: We will work in \(\mathsf{RCA}_{0}^{\omega}\). By Lemma 6.2, item (1) implies item (2). Item (3) is a special case of item (2), so we need only show that item (3) implies item (1).
In \(\mathsf{RCA}^{\omega}_{0}\), we can prove the existence of the function that maps an arbitrary function \(f\colon\mathbb{N}\to\mathbb{N}\) to an element of Cantor space \(f^{\prime}\colon\mathbb{N}\to 2\) so that, for all \(n\), \(f^{\prime}(n)=1\) if and only if \(f(n)>0\). In terms of the function from the definition of \((\exists^{2})\), \(\mathsf{R_{LPO}}(f)=\mathsf{R_{LPO}}(f^{\prime})\). \(\mathsf{RCA}^{\omega}_{0}\) can also prove the existence of the transformation \(S\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}\) such that for all \(f\colon\mathbb{N}\to 2\), \(S(f)(n)=0\) if \((\forall m\leq n)[f(m)\neq 0]\) and \(S(f)(n)=1\) otherwise. Let \(\mathcal{C}\) denote the set of functions from \(2^{\mathbb{N}}\) to \(2^{\mathbb{N}}\). \(\mathsf{RCA}^{\omega}_{0}\) proves the existence of the function \(T\colon 2^{\mathbb{N}}\to\mathcal{C}\) which maps each \(f\in 2^{\mathbb{N}}\) to the constant function in \(\mathcal{C}\) that takes the value \(S(f)\). For each \(f\), because \(T(f)\) is a constant function, the constant \(0\) function on \(\mathbb{N}\), denoted by \(z(n)\equiv 0\), is a modulus of uniform continuity for \(T(f)\). (Any function could be used as a modulus.) For any \(f\in\mathbb{N}^{\mathbb{N}}\), \(z\) is in the range of \(T(f^{\prime})\) if and only if \(0\) is not in the range of \(f^{\prime}\), and this occurs if and only if \(\mathsf{R_{LPO}}(f)=1\). Using the functional from item (3), we have \(\mathsf{R_{LPO}}(f)=R(T(f^{\prime}),z)(z)\), so item (3) implies \((\exists^{2})\).
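The coding used in this reversal can be sketched informally as follows. Points of Cantor space are modelled by Python functions, only finite prefixes are ever inspected, and the genuine range functional of item (3) is not implemented, so the final comparison is a prefix-level illustration only.

```python
def to_binary(f):
    """f' from f: f'(n) = 1 iff f(n) > 0."""
    return lambda n: 1 if f(n) > 0 else 0

def S(x):
    """S(x)(n) = 0 if x(m) != 0 for all m <= n, and 1 otherwise."""
    return lambda n: 0 if all(x(m) != 0 for m in range(n + 1)) else 1

def T(x):
    """T(x) is the constant function on Cantor space with value S(x)."""
    return lambda _y: S(x)

z = lambda n: 0                      # the constant-zero point (also usable as a modulus)
prefix = lambda x, k=8: [x(n) for n in range(k)]

f1 = lambda n: n + 1                 # 0 never occurs in the range of f1
f2 = lambda n: max(0, 3 - n)         # f2(3) = 0

for f in (f1, f2):
    image = T(to_binary(f))(z)       # the only value taken by the constant function T(f')
    print(prefix(image), "matches the zero prefix:", prefix(image) == prefix(z))
# z lies in the range of T(f') exactly when S(f') = z, i.e. when 0 never occurs in f;
# this is the content of R_LPO(f) = R(T(f'), z)(z) in the proof above.
```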
Our proof of Banach's theorem in compact metric spaces requires a functional that can calculate the inverse of a given function. The next two lemmas show that \((\exists^{2})\) is sufficient and also necessary for this task.
**Lemma 6.4** (\(\mathsf{RCA}^{\omega}_{0}+(\exists^{2})\)).: Suppose \(X\) is a compact complete separable metric space. There is a function \(I\) such that if \(f\colon X\to X\) is a function with modulus of uniform continuity \(h\), then \(I(f,h)\) is a function that selects elements from the pre-image of \(f\). That is, for all \(y\in X\), if there is a \(p\) such that \(f(p)=y\), then \(f(I(f,h)(y))=y\). In particular, if \(f\) is injective, then the restriction of \(I(f,h)\) to the range of \(f\) is the inverse of \(f\).
Proof.: Suppose that the compactness of \(X\) is witnessed by the sequence of finite sequences \(\langle\langle x_{ij}:i\leq n_{j}\rangle:j\in\mathbb{N}\rangle\). Thus for all \(z\in X\), there is an \(x_{ij}\) in \(\langle x_{ij}:i\leq n_{j}\rangle\) such that \(d(x_{ij},z)<2^{-j}\). Given a function \(f\) with a modulus of uniform continuity \(h\), for each \(y\) we will calculate a rapidly converging sequence \(p=\langle p_{m}:m\in\mathbb{N}\rangle\) such that if \(y\) is in the range of \(f\) then \(f(p)=y\). We will argue that this calculation is sufficiently uniform that the desired function \(I\) can be found using \(\mathsf{RCA}^{\omega}_{0}+(\exists^{2})\).
Fix \(f\colon X\to X\) with modulus of uniform continuity \(h\), so that if \(d(t_{1},t_{2})<2^{-h(k)}\) then \(d(f(t_{1}),f(t_{2}))<2^{-k}\). Increasing \(h\) if necessary, we may assume that \(h(k)\geq k+3\) for all \(k\). Using the witness points for compactness, if \(y\) is in the range of \(f\), then for all \(j\) we have \((\exists k\leq n_{h(j)})[d(f(x_{kh(j)}),y)<2^{-j}]\).
Given \(f\), \(h\), and \(y\) as above, we can define the desired \(p=\langle p_{m}:m\in\mathbb{N}\rangle\). If \(y\) is not in the range of \(f\), let \(p=y\). If \(y\) is in the range of \(f\) construct \(\langle p_{m}:m\in\mathbb{N}\rangle\) as follows. Let \(p_{m}=x_{ih(m)}\) where \(i\leq n_{h(m)}\) is the least integer such that:
1. \(d(f(x_{ih(m)}),y)<2^{-m}\),
2. \((\forall j>m)(\exists k\leq n_{h(j)})[d(f(x_{kh(j)}),y)<2^{-j}\wedge d(x_{kh(j)},x_{ih(m)})<2^{-m-2}]\), and
3. if \(m>0\), then \(d(p_{m-1},x_{ih(m)})\leq 2^{-m}\).
The third clause ensures that \(p=\langle p_{m}:m\in\mathbb{N}\rangle\) is a rapidly converging Cauchy sequence. Because \(f\) has a modulus of uniform continuity it is \(\varepsilon\)-\(\delta\) continuous, so by Proposition 3.6 of Kohlenbach [9] it is sequentially continuous; the first clause then shows that \(f(p)=y\). Informally, the second clause guarantees that each \(p_{m}\) is sufficiently close to a pre-image of \(y\) that the construction can continue. We verify this next.
To initialize the construction, we must find \(p_{0}\). Suppose \(f(t_{0})=y\). Because \(h(0)\geq 0+3\) we can fix an \(i\leq n_{h(0)}\) with \(d(x_{ih(0)},t_{0})<2^{-3}\). Because \(h\) is a modulus of uniform continuity, \(d(f(x_{ih(0)}),y)=d(f(x_{ih(0)}),f(t_{0}))<2^{-0}\), so clause (1) is satisfied. For any \(j>0\), there is a \(k\leq n_{h(j)}\) such that \(d(x_{kh(j)},t_{0})<2^{-h(j)}<2^{-3}\), and so \(d(f(x_{kh(j)}),y)=d(f(x_{kh(j)}),f(t_{0}))<2^{-j}\). For such a \(j\) and \(k\),
\[d(x_{kh(j)},x_{ih(0)})\leq d(x_{kh(j)},t_{0})+d(x_{ih(0)},t_{0})<2^{-3}+2^{-3}= 2^{-2},\]
so clause (2) is also satisfied. The third clause is vacuously true. We have shown that for some \(i\), \(x_{ih(0)}\) satisfies all three clauses. Let \(i_{0}\) be the least such \(i\), and set \(p_{0}=x_{i_{0}h(0)}\).
Suppose \(p_{m-1}\) has been chosen satisfying all three clauses. By clause (2) for \(p_{m-1}\), we can find a sequence of points \(\langle t_{m_{j}}:j\in\mathbb{N}\rangle\) such that for every \(j\), \(d(f(t_{m_{j}}),y)<2^{-j}\) and \(d(t_{m_{j}},p_{m-1})<2^{-(m-1)-2}=2^{-m-1}\). In Theorem III.2.7, Simpson [14] proves the generalization of the Bolzano-Weierstrass theorem for compact metric spaces in \(\mathsf{ACA}_{0}\), so it is also a theorem of \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\). Consequently, there is a subsequence of \(\langle t_{m_{j}}:j\in\mathbb{N}\rangle\) converging to a value \(t_{m}\) with \(f(t_{m})=y\) and \(d(t_{m},p_{m-1})\leq 2^{-m-1}\). Choose \(i\leq n_{h(m)}\) so that \(d(x_{ih(m)},t_{m})<2^{-h(m)}<2^{-m-3}\). Clause (1) holds for \(x_{ih(m)}\) because \(h\) is a modulus of uniform continuity and \(f(t_{m})=y\). For any \(j>m\), there is a \(k\leq n_{h(j)}\) such that \(d(x_{kh(j)},t_{m})<2^{-h(j)}\leq 2^{-j-3}\) and so \(d(f(x_{kh(j)}),y)<2^{-j}\). For such a \(j\) and \(k\),
\[d(x_{kh(j)},x_{ih(m)})\leq d(x_{kh(j)},t_{m}) +d(x_{ih(m)},t_{m})\] \[<2^{-j-3}+2^{-m-3}<2^{-m-2},\]
so clause (2) holds for \(x_{ih(m)}\). Finally,
\[d(p_{m-1},x_{ih(m)})<d(p_{m-1},t_{m})+d(x_{ih(m)},t_{m})<2^{-m-1}+2^{-m-3}<2^{ -m}.\]
We have shown that all three clauses hold for some choice of \(i\), so let \(i_{m}\) be the least such \(i\) and set \(p_{m}=x_{i_{m}h(m)}\). This concludes the argument that our construction never halts, yielding the desired pre-image \(p=\langle p_{m}:m\in\mathbb{N}\rangle\).
It remains to show that \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\) suffices to prove the existence of the function \(I\) from the statement of the lemma. Suppose we are given \(f\) with modulus \(h\) and a value \(y\) from the metric space. By Lemma 6.2, \(R(f,h)\) is the characteristic function for the range of \(f\). If \(y\) is not in the range of \(f\), output \(y\). Otherwise, begin constructing \(p\), searching for an \(x_{ih(m)}\) satisfying
clauses (1), (2), and (3) above. By \((\exists^{2})\), we may use a realizer for \(\mathsf{LPO}\) to check if clause (2) holds. As argued above, when \(y\) is in the range, this process calculates the desired pre-image of \(y\). Summarizing, \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\) proves the existence of the function mapping \(f\), \(h\), and \(y\) to the desired value. Applying \(\lambda\)-abstraction yields \(I(f,h)\).
The use of \((\exists^{2})\) in the previous lemma is necessary, as shown by the following reversal.
**Lemma 6.5** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. If \(X\) is a compact complete separable metric space, then there is a function \(I\) such that if \(f\colon X\to X\) is a function with modulus of uniform continuity \(h\), then \(I(f,h)\) is a function that selects elements from the pre-image of \(f\).
3. There is a function \(I\) such that if \(f\colon 2^{\mathbb{N}}\to 2^{\mathbb{N}}\) is a function with modulus of uniform continuity \(h\), then \(I(f,h)\) is a function that selects elements from the pre-image of \(f\).
Proof.: We will work in \(\mathsf{RCA}_{0}^{\omega}\). By Lemma 6.4, item (1) implies item (2). Item (3) is a special case of item (2), so we need only show that item (3) implies item (1). By Proposition 4.2, it suffices to show that (3) implies (\(\mathsf{LLPO}\)).
Given an input \(w\colon\mathbb{N}\to 2\) for \(\mathsf{LLPOmin}\), we will show how to construct a function \(f\) with modulus of uniform continuity \(h\) such that information about the pre-image of \(f\) as provided by \(I(f,h)\) in item (3) can be used to calculate \(\mathsf{LLPOmin}\) for \(w\). In particular, we will control the pre-image of the constant \(0\) function, denoted \(\vec{0}\in 2^{\mathbb{N}}\). If the first \(t\) where \(w(t)=0\) is even, we require that \(\vec{0}\in f^{-1}(\vec{0})\) and that every element of \(f^{-1}(\vec{0})\) has first entry \(0\). If the first \(t\) such that \(w(t)=0\) is odd, we require that \(\vec{1}\in f^{-1}(\vec{0})\) and that every element of \(f^{-1}(\vec{0})\) has first entry \(1\). If \(0\) is not in the range of \(w\), \(f^{-1}(\vec{0})\) will be the set \(\{\vec{0},\vec{1}\}\).
Now we can specify the behavior of \(f\). Let \(x\colon\mathbb{N}\to 2\) be an element of \(2^{\mathbb{N}}\). Evaluating \(f\) at \(x\) yields a function \(f(x)\), which also maps \(\mathbb{N}\) into \(2\) and is defined as follows.
1. \(f(x)(0)=0\).
2. For \(n>0\), \(f(x)(n)\) is defined by two cases: 1. if \(n-1\) is not the least \(t\) such that \(w(t)=0\) then \[f(x)(n)=\begin{cases}x(n),&\text{if $x(0)=0$},\\ 1-x(n),&\text{if $x(0)=1$}.\end{cases}\] 2. if \(n-1\) is the least \(t\) such that \(w(t)=0\) then \[f(x)(n)=\begin{cases}x(0),&\text{if $n-1$ is even},\\ 1-x(0),&\text{if $n-1$ is odd}.\end{cases}\]
Routine arguments verify that the pre-image of \(f\) satisfies the requirements listed above. Also, if the sequences \(x\) and \(y\) agree in the first \(n\) values, the sequences \(f(x)\) and \(f(y)\) also agree in the first \(n\) values. Thus, the function \(h(n)=n\) is a modulus of uniform continuity for \(f\). The construction of \(f\) from \(w\) is sufficiently uniform that \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of a function \(g\) mapping each \(w\in 2^{\mathbb{N}}\) to its associated function \(f\).
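A finite-prefix Python sketch of this reduction is given below. It enumerates prefixes of length six, so it only illustrates (rather than proves) the behaviour of the pre-images; the prefix length and the sample inputs \(w\) are arbitrary choices.

```python
from itertools import product

def least_zero(w):
    """Position of the first 0 in the prefix w, or None if there is none."""
    return next((t for t, v in enumerate(w) if v == 0), None)

def f_of(w, x):
    """The length-len(x) prefix of f(x) determined by the prefixes w and x."""
    t0 = least_zero(w)
    out = [0]
    for n in range(1, len(x)):
        if t0 is not None and n - 1 == t0:
            out.append(x[0] if t0 % 2 == 0 else 1 - x[0])
        else:
            out.append(x[n] if x[0] == 0 else 1 - x[n])
    return out

def first_entries_of_preimages_of_zero(w, length=6):
    zero = [0] * length
    return {x[0] for x in product((0, 1), repeat=length) if f_of(w, list(x)) == zero}

print(first_entries_of_preimages_of_zero([1, 1, 0, 1, 1, 1]))  # first 0 at t = 2 (even): {0}
print(first_entries_of_preimages_of_zero([1, 0, 1, 1, 1, 1]))  # first 0 at t = 1 (odd):  {1}
print(first_entries_of_preimages_of_zero([1, 1, 1, 1, 1, 1]))  # no 0 in the prefix:      {0, 1}
# The first entry of any pre-image of the zero sequence records the parity of the first 0 in w,
# which is what the argument below extracts from I(g(w), h).
```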
Let \(I\) be the function described in item (3) of the statement of the lemma, and let \(h(n)=n\) be the identity function on \(\mathbb{N}\). Then for any \(w\in 2^{\mathbb{N}}\), \(I(g(w),h)(\vec{0})\) is an element of \(2^{\mathbb{N}}\) whose first entry is \(0\) if the first \(0\) in the range of \(w\) occurs at an even value, and whose first entry is \(1\) if the first \(0\) occurs at an odd value. The sequences coding elements of \(2^{\mathbb{N}}\) output by \(I\) are rapidly converging sequences of finite approximations, and by the definition of the metric on \(2^{\mathbb{N}}\), the first entry of (for example) the third finite approximation agrees with the first entry of the limit. Thus \(I(g(w),h)(\vec{0})\) uniformly calculates \(\mathsf{LLPOmin}\) for \(w\).
The previous results allow us to formulate and analyze some restrictions of Banach's theorem. For compact complete separable metric spaces, a functional form of Banach's theorem restricted to uniformly continuous functions is equivalent to the functional existence principle (\(\exists^{2}\)). Note that if \(f\) and \(g\) have moduli of uniform continuity \(h_{f}\) and \(h_{g}\), then \(h\) defined by \(h(n)=\max\{h_{f}(n),h_{g}(n)\}\) is a modulus of uniform continuity for both \(f\) and \(g\). As a notational convenience, we will use common moduli of uniform continuity for pairs of functions.
**Definition 6.6**.: For a complete separable metric space \(X\), a Banach functional \(B_{X}\) is defined as follows. For injective functions \(f\colon X\to X\) and \(g\colon X\to X\) with a common modulus of uniform continuity \(h\), \(B_{X}(f,g,h)\) is a bijective function \(H\colon X\to X\) such that for all \(x\in X\), \(H(x)=f(x)\) or \(g(H(x))=x\). The parenthesized expression (\(\mathsf{B}_{X}\)) denotes the principle asserting the existence of a Banach functional for \(X\).
Following our previous pattern, the next result proves a version of Banach's theorem for compact metric spaces using (\(\exists^{2}\)). The reversals and a summary appear in a second result.
**Lemma 6.7** (\(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\)).: If \(X\) is a compact metric space, then (\(\mathsf{B}_{X}\)).
Proof.: Assume \(\mathsf{RCA}_{0}^{\omega}\) and (\(\exists^{2}\)). Suppose \(X\) is a complete separable metric space and that \(\langle\langle x_{ij}:i\leq n_{j}\rangle:j\in\mathbb{N}\rangle\) witnesses that \(X\) is compact. Let \(f\) and \(g\) be injections of \(X\) into \(X\) with a common modulus of uniform continuity \(h\). Apply Lemma 6.2 to find the range functionals \(R(f,h)\) and \(R(g,h)\). Apply Lemma 6.4 to find pre-image selectors \(I(f,h)\) and \(I(g,h)\). Because \(f\) and \(g\) are injections, the restrictions of these functions to the ranges of \(f\) and \(g\) are inverse functions. Consequently, we will use the shorthand notation \(f^{-1}\) and \(g^{-1}\). Note that the pre-image selectors \(f^{-1}\) and \(g^{-1}\) are defined for all inputs from \(X\), and that if (for example) \(y\) is in the range of \(f\), then \(f(f^{-1}(y))=y\).
Our goal is to construct the bijection \(H\) in the statement of \((\mathsf{B}_{X})\). This is achieved by a back-and-forth construction, alternately iterating applications of \(g^{-1}\) and \(f^{-1}\), and basing the value of \(H\) on the terminating condition of this process.
First we construct a functional that alternately applies \(g^{-1}\) and \(f^{-1}\). Using primitive recursion, define \(S(x,n)\) by \(S(x,0)=x\) and for \(n\geq 0\), \(S(x,2n+1)=g^{-1}(S(x,2n))\) and \(S(x,2n+2)=f^{-1}(S(x,2n+1))\). Calculating a few values yields \(S(x,0)=x\), \(S(x,1)=g^{-1}(x)\), \(S(x,2)=f^{-1}(g^{-1}(x))\), and \(S(x,3)=g^{-1}(f^{-1}(g^{-1}(x)))\). As noted above, \(f^{-1}\) and \(g^{-1}\) are total, so \(S(x,n)\) is defined for all \(x\) and all \(n\).
A traditional back-and-forth construction using partial inverse functions might halt if, for example, \(x\) was not in the range of \(g\), or if \(g^{-1}(x)\) was not in the range of \(f\), and so on. Define the function \(P\colon X\times\mathbb{N}\to\{0,1\}\) by \(P(x,0)=1\) and for \(n\geq 0\), \(P(x,2n+1)=R(g,h)(S(x,2n))\) and \(P(x,2n+2)=R(f,h)(S(x,2n+1))\). Consider the initial stages of the back-and-forth process displayed in the following table.
\begin{tabular}{c|c|c|c|c|c} \(n\)=stage & 0 & 1 & 2 & 3 & \(\ldots\) \\ \hline \(S(x,n)\) & \(x\) & \(g^{-1}(x)\) & \(f^{-1}(g^{-1}(x))\) & \(g^{-1}(f^{-1}(g^{-1}(x)))\) & \(\ldots\) \\ \end{tabular}

If \(x\) is not in the range of \(g\) then \(P(x,1)=0\). If \(g^{-1}(x)\) is not in the range of \(f\), then \(P(x,2)=0\). In general, the least \(n\) with \(P(x,n)=0\) will be the stage where the traditional back-and-forth process based on partial inverse functions will halt. If the back-and-forth process does not halt, then \(P(x,n)=1\) for all \(n\). Writing \(P_{x}(n)\) for \(P(x,n)\), we may view \(P_{x}(n)\) as a function from \(\mathbb{N}\) into \(\{0,1\}\). Apply \(\varphi\) as provided by \((\exists^{2})\), and we have \(\varphi(P_{x}(n))=0\) if the back-and-forth process halts and \(\varphi(P_{x}(n))=1\) if the process never terminates.
By \((\exists^{2})\) and Theorem 4.3, we may also use \(R_{\mathsf{LLPOmin}}\), a realizer for \(\mathsf{LLPOmin}\). Define the functional \(T(x)\) by
\[T(x)=\begin{cases}1,&\text{if $\varphi(P_{x}(n))=1$},\\ R_{\mathsf{LLPOmin}}(P_{x}),&\text{if $\varphi(P_{x}(n))=0$}.\end{cases}\]
Finally, define the bijection \(H(x)\) by
\[H(x)=\begin{cases}f(x),&\text{if $T(x)=1$, and}\\ g^{-1}(x),&\text{if $T(x)=0$}.\end{cases}\]
One can argue that \(H\) is the desired bijection by the usual arguments. Briefly, consider the following diagram representing images and pre-images of an element \(x\) from \(X\).
[Diagram omitted: a bipartite chain joining two copies of \(X\), with vertices \(\ldots,\;f^{-1}(g^{-1}(x)),\;x,\;f(x),\;g(f(x)),\;f(g(f(x))),\;\ldots\) linked alternately by \(f\) and \(g\).]
Each element of the lower copy of \(X\) appears in at least one bipartite subgraph of the sort pictured. Also, for each \(y\) in the upper copy of \(X\), we know \(y=g^{-1}(g(y))\), so each element in the upper copy of \(X\) appears in at least one bipartite subgraph. Because \(f\) and \(g\) are injective, each element appears in exactly one bipartite subgraph. The choice of the values of \(H(x)\) ensures that if the bipartite graph terminates on the left, the leftmost vertex is either in the lower copy of \(X\) and in the domain of \(H\), or in the upper copy of \(X\) and in the range of \(H\). Thus \(H\) is a bijection of \(X\) onto itself, satisfying the requirements of \((\mathsf{B}_{X})\).
This section concludes with proofs of two reversals for instances of the previous lemma, summarizing the results for Banach's theorem on compact metric spaces in the following theorem.
**Theorem 6.8** (\(\mathsf{RCA}_{0}^{\omega}\)).: The following are equivalent:
1. \((\exists^{2})\).
2. If \(X\) is a compact metric space, then \((\mathsf{B}_{X})\).
3. \((\mathsf{B}_{[0,1]})\).
4. \((\mathsf{B}_{2^{\mathbb{N}}})\).
Proof.: The previous lemma proves that item (1) implies item (2). Item (3) and item (4) are special cases of item (2), so we can complete the proof by reversing (3) and (4) to (1). For the first reversal, suppose \(B_{[0,1]}(f,g,h)\) is the Banach functional for \([0,1]\). Consider the injections \(f\) and \(g\) defined by \(f(x)=g(x)=x/2\). Each \(x\) in \([0,1]\) is represented by a rapidly converging sequence of rationals, and dividing each element of the sequence by \(2\) yields a rapidly converging sequence representing \(x/2\). Thus \(\mathsf{RCA}_{0}^{\omega}\) proves that \(f\) and \(g\) are defined and total. The identity function \(h(k)=k\) is a modulus of uniform continuity for \(f\) and \(g\). Suppose \(H=B_{[0,1]}(f,g,h)\) is the bijection satisfying Banach's theorem for \(f\) and \(g\). Consider \(x=\frac{1}{2}\) and the sequence \(x_{n}=\frac{1}{2}+\frac{1}{2^{n}}\). For each \(n\), \(x_{n}\) is not in the range of \(g\), so \(H(x_{n})=f(x_{n})=\frac{x_{n}}{2}=\frac{1}{4}+\frac{1}{2^{n+1}}\). Thus, \(\lim_{n\to\infty}H(x_{n})=\frac{1}{4}\). The functional \(H\) is bijective, so \(1\) is in the range of \(H\). Fix \(x\) with \(H(x)=1\). By the Banach theorem, \(H(x)=f(x)\) or \(H(x)=g^{-1}(x)\). Because \(1\) is not in the range of \(f\), \(H(x)=g^{-1}(x)\). Thus \(1=g^{-1}(x)\), so \(x=\frac{1}{2}\) and \(H(\frac{1}{2})=1\). Summarizing,
\[H(\lim_{n\to\infty}x_{n})=H(\frac{1}{2})=1\neq\frac{1}{4}=\lim_{n\to\infty}H( x_{n}).\]
Thus \(H\) is not sequentially continuous at \(x=\frac{1}{2}\), and \((\exists^{2})\) follows by Proposition 6.1.
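To see the construction of Lemma 6.7 and this discontinuity in action, the following Python sketch runs the \(S\)/\(P\)/\(T\)/\(H\) recipe for the injections \(f(x)=g(x)=x/2\) just discussed. Because range membership and the inverses are directly decidable here, no \((\exists^{2})\) oracle is needed; the treatment of \(x=0\) (where the backward chain never halts) and the reading of the \(\mathsf{LLPOmin}\) realizer as the parity of the first halting stage are assumptions of this toy implementation.

```python
from fractions import Fraction as Q

def f(x):                 # f = g = halving; range(f) = range(g) = [0, 1/2]
    return x / 2

def in_range(x):
    return x <= Q(1, 2)

def inv(x):               # the (partial) inverse of f and of g, used as f^{-1} and g^{-1}
    return 2 * x

def T(x):
    """1 if the first stage n with P(x, n) = 0 is odd, 0 if it is even;
    for x = 0 the chain never halts and we return 1, mirroring phi(P_x) = 1."""
    if x == 0:
        return 1
    value, stage = x, 0
    while True:
        stage += 1                       # odd stages: is S(x, stage-1) in the range of g?
        if not in_range(value):
            return 1 if stage % 2 == 1 else 0
        value = inv(value)               # apply g^{-1}
        stage += 1                       # even stages: is the new value in the range of f?
        if not in_range(value):
            return 1 if stage % 2 == 1 else 0
        value = inv(value)               # apply f^{-1}

def H(x):
    return f(x) if T(x) == 1 else inv(x)   # inv(x) plays the role of g^{-1}(x)

xs = [Q(1, 2) + Q(1, 2) ** n for n in range(2, 8)]
print([str(H(x)) for x in xs])   # values tending to 1/4 as x_n -> 1/2 ...
print(H(Q(1, 2)))                # ... while H(1/2) = 1, the discontinuity exploited above
```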
For the final reversal, suppose \(B_{2^{\mathbb{N}}}(f,g,h)\) is the Banach functional for Cantor space. Consider the padding function \(P(x)\) that adds a zero after each
entry in a binary input string. Formally, \(P(x)(n)=x(m)\) if \(n=2m\), and \(P(x)(n)=0\) otherwise. For example,
\[P(\langle 1,0,1,1\dots\rangle)=\langle 1,0,0,0,1,0,1,0\dots\rangle.\]
\(\mathsf{RCA}_{0}^{\omega}\) proves that \(P(x)\) is defined and total, and that the identity function \(h(k)=k\) is a modulus of uniform continuity for \(P(x)\). Let \(f(x)\) and \(g(x)\) both be \(P(x)\). Let \(H=B_{2^{\mathbb{N}}}(f,g,h)\) be the bijection satisfying Banach's theorem for \(f\) and \(g\). For each \(n\), let \(\sigma_{n}\) consist of \(n\) copies of the string \(10\), followed by \(11\), followed by zeros. The double \(1\) ensures that \(\sigma_{n}\) is not in the range of \(g(x)=P(x)\). Thus, for each \(n\), \(H(\sigma_{n})=f(\sigma_{n})=P(\sigma_{n})\), which consists of \(n\) copies of the string \(1000\) followed by \(1010\), followed by zeros. Thus \(\lim_{n\to\infty}H(\sigma_{n})\) is the string \(1000\) repeated infinitely. On the other hand, \(\lim_{n\to\infty}\sigma_{n}\) is \(\langle 1,0,1,0\dots\rangle\). The string \(\langle 1,1,1\dots\rangle\) is not in the range of \(f(x)\). Because \(H\) is bijective, there is an \(x\) with \(H(x)=\langle 1,1,1,\dots\rangle\); by the Banach condition, either \(H(x)=f(x)\), which is impossible, or \(g(H(x))=x\), so \(x=g(\langle 1,1,1\dots\rangle)\) and \(H(g(\langle 1,1,1\dots\rangle))=\langle 1,1,1,\dots\rangle\). Because \(g(\langle 1,1,1\dots\rangle)=\langle 1,0,1,0\dots\rangle\), we have \(H(\lim_{n\to\infty}\sigma_{n})=H(\langle 1,0,1,0\dots\rangle)=\langle 1,1,1 \dots\rangle\). Thus \(H(\lim_{n\to\infty}\sigma_{n})\neq\lim_{n\to\infty}H(\sigma_{n})\), so \(H\) is not sequentially continuous at \(x=\langle 1,0,1,0\dots\rangle\). The principle \((\exists^{2})\) follows by Proposition 6.1, completing the reversal and the proof of the theorem.
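The string manipulations in this last reversal are easy to check mechanically; the short sketch below computes prefixes of \(\sigma_{n}\) and \(P(\sigma_{n})\) (the prefix lengths are arbitrary choices).

```python
def pad(bits):
    """P on finite prefixes: insert a 0 after every entry."""
    out = []
    for b in bits:
        out.extend([b, 0])
    return out

def sigma(n, length=12):
    """n copies of 10, then 11, then zeros, truncated to `length` entries."""
    s = [1, 0] * n + [1, 1]
    return (s + [0] * length)[:length]

for n in range(3):
    print(n, sigma(n), "->", pad(sigma(n))[:16])
# Each sigma_n has a 1 in an odd slot (the second 1 of the block 11), so it avoids the range
# of P, while P(sigma_n) is n copies of 1000 followed by 1010 and zeros, matching the limits
# lim sigma_n = 1010... and lim P(sigma_n) = 1000... used in the proof.
```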
We note that the functional \(R\) in Lemma 6.2, the functional \(I\) in Lemma 6.5, and the functional \(B\) in Theorem 6.8 are constructed uniformly in a code for the space \(X\). Hence these functionals could, in principle, be defined with \(X\) as a parameter. This is another layer of uniformity in the constructions, although noting the parameter explicitly complicates the notation.
## 7 Moduli of uniform continuity
This section introduces a function that computes moduli of uniform continuity. As shown below, the strength of the existence of the function lies below \((\exists^{2})\), allowing us to streamline the definition of Banach functionals and Theorem 6.8.
**Definition 7.1**.: Suppose \(X\) is a compact complete separable metric space and \(Y\) is a complete separable metric space. The principle (\(\mathsf{M}\)) asserts the existence of a function \(M\) such that if \(f\colon X\to Y\) is continuous, then \(M(f)\) is a modulus of uniform continuity for \(f\).
Near the end of his article, Kohlenbach [9] presents a functional form of the fan theorem, denoted by (\(\mathsf{MUC}\)). He notes that (\(\mathsf{M}\)) is a consequence of (\(\mathsf{MUC}\)), \(\mathsf{MUC}\) is conservative over \(\mathsf{WKL}_{0}\) for second order sentences, and (\(\mathsf{MUC}\)) is inconsistent with \((\exists^{2})\). Because (\(\mathsf{MUC}\)) proves (\(\mathsf{M}\)), (\(\mathsf{M}\)) is also conservative over \(\mathsf{WKL}_{0}\) for second order sentences. The next lemma shows that unlike (\(\mathsf{MUC}\)), the principle (\(\mathsf{M}\)) is a consequence of \((\exists^{2})\).
**Lemma 7.2** (\(\mathsf{RCA}_{0}^{\omega}\)).: \((\exists^{2})\) implies (\(\mathsf{M}\)).
Proof.: Let \(X\) be a compact complete separable metric space with compactness witnessed by the sequence of sequences \(\langle\langle x_{ij}:i\leq n_{j}\rangle:j\in\mathbb{N}\rangle\). Let \(Y\) be a complete separable metric space. We will use \(d\) to denote the metric in both spaces. For \(f\colon X\to Y\) we can define a prospective value of a modulus of uniform continuity for \(f\) at \(m\) by setting \((M(f))(m)\) equal to the least \(n\) such that:
\[(\forall x_{ij})(\forall x_{i^{\prime}j^{\prime}})[d(x_{ij},x_{i^{\prime}j^{ \prime}})<2^{-n}\to d(f(x_{ij}),f(x_{i^{\prime}j^{\prime}}))<2^{-m-1}] \tag{1}\]
Informally, \(M(f)\) is a function from \(\mathbb{N}\) to \(\mathbb{N}\) that resembles a modulus of uniform continuity on the compactness witnesses for \(X\). First we will show that \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\) suffices to prove the existence of the function \(M\). Then we will verify that if \(f\) is continuous, then \(M(f)\) is a modulus of uniform continuity for \(f\).
Working in \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\), let \(X\) and \(Y\) be as above, and suppose \(f\colon X\to Y\). Recalling the reverse mathematical formalization of inequalities in the reals, the formulas \(d(x_{ij},x_{i^{\prime}j^{\prime}})<2^{-n}\) and \(d(f(x_{ij}),f(x_{i^{\prime}j^{\prime}}))>2^{-m-1}\) are \(\Sigma_{1}^{0}\). Thus \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of a function \(a(f,m,n,t)\) which is \(0\) if \(t\) codes a witness that there are \(x_{ij}\) and \(x_{i^{\prime}j^{\prime}}\) such that \(d(x_{ij},x_{i^{\prime}j^{\prime}})<2^{-n}\) and \(d(f(x_{ij}),f(x_{i^{\prime}j^{\prime}}))>2^{-m-1}\), and is \(1\) otherwise. Note that formula (1) holds if \(a(f,m,n,t)\) is \(1\) for all \(t\), and fails if there is a \(t\) such that \(a(f,m,n,t)\) is \(0\). As noted in section 4, \((\exists^{2})\) implies the existence of the function \(R_{\mathsf{LPO}}\). The \(\lambda\) notation \(\lambda t.a(f,m,n,t)\) denotes the function that maps each \(t\in\mathbb{N}\) to the value \(a(f,m,n,t)\). Applying \(\lambda\) abstraction (which is a consequence of \(\mathsf{RCA}_{0}^{\omega}\)[9]) and \((\exists^{2})\), we can prove the existence of the function \(b(f,m,n)=R_{\mathsf{LPO}}(\lambda t.a(f,m,n,t))\). Note that for all \(f\), \(m\), and \(n\), \(b(f,m,n)=1\) if formula (1) holds and \(b(f,m,n)=0\) otherwise. By Proposition 1.5, \((\exists^{2})\) proves the existence of Feferman's \(\mu\), so by \((\exists^{2})\) and an additional application of \(\lambda\) abstraction, we can prove the existence of the function \(c(f,m)=\mu(1-\lambda n.b(f,m,n))\). Note that for each \(f\) and \(m\), if there is an \(n\) such that formula (1) holds, then \(c(f,m)\) is the least such \(n\). If there is no such \(n\), for example if \(f\) is discontinuous, then \(c(f,m)\) still yields some value, but no useful information. By \(\lambda\) abstraction, \(\mathsf{RCA}_{0}^{\omega}+(\exists^{2})\) proves the existence of \(M(f)=\lambda m.c(f,m)\). For every \(f\colon X\to Y\), \(M(f)\) yields a function from \(\mathbb{N}\) to \(\mathbb{N}\).
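As an informal check of this search, the sketch below computes the value defined by formula (1) for the square function on \([0,1]\), taking the dyadic rationals as the compactness witnesses. The dense set is truncated at a fixed level and the search is bounded, standing in for Feferman's \(\mu\) and \(R_{\mathsf{LPO}}\); these truncations, the choice of space, and the test function are assumptions of the sketch.

```python
from fractions import Fraction as Q

J = 7                                                # truncation level for the witness points
GRID = [Q(i, 2 ** J) for i in range(2 ** J + 1)]     # the dyadic nets up to level J, merged

def formula_one_holds(f, m, n):
    """Formula (1), restricted to the truncated witness set."""
    for x in GRID:
        for y in GRID:
            if abs(x - y) < Q(1, 2 ** n) and not abs(f(x) - f(y)) < Q(1, 2 ** (m + 1)):
                return False
    return True

def M_value(f, m, n_bound=12):
    """Bounded search for the least n making formula (1) hold (stand-in for mu)."""
    for n in range(n_bound + 1):
        if formula_one_holds(f, m, n):
            return n
    return None

square = lambda x: x * x      # |x^2 - y^2| <= 2|x - y| on [0,1]
for m in range(4):
    print(m, M_value(square, m))   # prints n = m + 2: one extra halving absorbs the factor 2
```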
It remains to show that if \(f\) is continuous then \(M(f)\) is a modulus of uniform continuity for \(f\). Fix a continuous \(f\colon X\to Y\) and \(m\in\mathbb{N}\). Let \(n=M(f)(m)\). Suppose that \(u,v\in X\) satisfy \(d(u,v)<2^{-n}\). Choose \(\delta<2^{-n}-d(u,v)\). Because \(f\) is continuous and \(\langle\langle x_{ij}:i\leq n_{j}\rangle:j\in\mathbb{N}\rangle\) is dense in \(X\), we can find an \(x_{ij}\) such that \(d(x_{ij},u)<\delta/2\) and \(d(f(x_{ij}),f(u))<2^{-m-2}\). Similarly, find \(x_{i^{\prime}j^{\prime}}\) such that \(d(x_{i^{\prime}j^{\prime}},v)<\delta/2\) and \(d(f(x_{i^{\prime}j^{\prime}}),f(v))<2^{-m-2}\). By the triangle inequality,
\[d(x_{ij},x_{i^{\prime}j^{\prime}})\leq d(x_{ij},u)+d(u,v)+d(v,x_{i^{\prime}j^{ \prime}})<\delta/2+d(u,v)+\delta/2<2^{-n}.\]
Because \(d(x_{ij},x_{i^{\prime}j^{\prime}})<2^{-n}\), and because \((M(f))(m)=n\), formula (1) holds, so
\(d(f(x_{ij}),f(x_{i^{\prime}j^{\prime}}))<2^{-m-1}\). By the triangle inequality,
\[d(f(u),f(v)) <d(f(u),f(x_{ij}))+d(f(x_{ij}),f(x_{i^{\prime}j^{\prime}}))+d(f(x_{i^{\prime}j^{\prime}}),f(v))\] \[<2^{-m-2}+2^{-m-1}+2^{-m-2}=2^{-m}.\]
Summarizing, when \(f\) is continuous and \(M(f)(m)=n\), if \(d(u,v)<2^{-n}\) then \(d(f(u),f(v))<2^{-m}\). Thus \(M(f)\) is a modulus of uniform continuity for \(f\).
The principle (M) allows us to reformulate Theorem 6.8, stripping all reference to moduli of uniform continuity.
**Theorem 7.3** (\(\mathsf{RCA}_{0}^{\omega}\)).: The principle \((\exists^{2})\) is equivalent to the statement that for every compact complete separable metric space \(X\), there is a function \(B^{\prime}_{X}\) that maps each pair of injections from \(X\) to \(X\) to a bijection satisfying Banach's theorem.
Proof.: Assuming \((\exists^{2})\), by Lemma 7.2 we may use the function \(M\) to calculate moduli of uniform continuity for \(f\) and \(g\). The pointwise maximum function \(\max(M(f),M(g))\) is a joint modulus of uniform continuity for \(f\) and \(g\). If \(B_{X}(f,g,h)\) is the function provided by Theorem 6.8 part (2), then the function defined by \(B^{\prime}_{X}(f,g)=B_{X}(f,g,\max(M(f),M(g)))\) is the desired Banach function. The converse is immediate from Theorem 6.8.
Because (M) is a consequence of (MUC), and (MUC) is consistent but inconsistent with \((\exists^{2})\), the principle (M) does not imply \((\exists^{2})\). That is, the converse of Lemma 7.2 is not true. The next two results show that, as with (MUC), the second order theorems of (M) are exactly those of \(\mathsf{WKL}_{0}\). As part of that proof, the next lemma allows us to change representations of functions, with the eventual goal of applying a traditional reverse mathematics result to show that (M) implies \(\mathsf{WKL}_{0}\).
**Lemma 7.4** (\(\mathsf{RCA}_{0}^{\omega}\)).: Suppose \(X\) and \(Y\) are complete separable metric spaces. Suppose that \(\Phi\) is a code for a totally defined continuous function as described in Definition II.6.1 of Simpson [14]. Then there is a function \(f\colon X\to Y\) such that for all \(n\), \(a\), \(r\), \(b\), and \(s\), if \((n,a,r,b,s)\in\Phi\) then \(d(f(a),b)\leq s\).
Proof.: Working in \(\mathsf{RCA}_{0}^{\omega}\), suppose \(X\), \(Y\), and \(\Phi\) are as above. Fix \(x\in X\). Because \(x\) is in the domain of the function defined by \(\Phi\), for each \(m\) we can find \((n,a,r,b,s)\in\Phi\) (occurring first in some fixed enumeration of quintuples) such that \(d(x,a)<r\) and \(s<2^{-m-1}\). Set \(f(x)(m)=b\). The sequence \(\langle f(x)(m):m\in\mathbb{N}\rangle\) is a rapidly converging sequence of elements of \(Y\) converging to the desired value of \(f(x)\). \(\mathsf{RCA}_{0}^{\omega}\) proves the existence of \(f\).
We now verify the last sentence of the lemma. Suppose \((n,a,r,b,s)\in\Phi\). Let \(\varepsilon>0\) and choose \(m\) so that \(2^{-m-1}<\min\{\varepsilon/2,s\}\). Let \((n^{\prime},a^{\prime},r^{\prime},b^{\prime},s^{\prime})\in\Phi\) be the quintuple witnessing the value for \(f(a)(m)\). Then \(d(a,a^{\prime})<r^{\prime}\) and \(s^{\prime}<2^{-m-1}<\varepsilon/2\). Let \(r_{0}=\min\{r,r^{\prime}-d(a,a^{\prime})\}\). Then the ball \(B(a,r_{0})\) is a subset of \(B(a,r)\), and is also a subset of \(B(a^{\prime},r^{\prime})\). Applying property (2) of Simpson's Definition II.6.1, we have \((a,r_{0})\Phi(b,s)\) and \((a,r_{0})\Phi(b^{\prime},s^{\prime})\). By
property (1) of Simpson's definition, \(d(b,b^{\prime})\leq s+s^{\prime}<s+\varepsilon/2\). By the choice of \(m\), \(d(b^{\prime},f(a))\leq 2^{-m}<\varepsilon/2\). By the triangle inequality \(d(f(a),b)<s+\varepsilon\). Because \(\varepsilon\) was an arbitrary positive value, \(d(f(a),b)\leq s\).
The preceding lemma allows us to completely characterize the second order theory of (M).
**Proposition 7.5**.: The second order theorems of \(\mathsf{RCA}_{0}^{\omega}+(\mathsf{M})\) are exactly the same as those of \(\mathsf{WKL}_{0}\).
Proof.: As noted before, (M) is a consequence of Kohlenbach's (MUC), which is conservative over \(\mathsf{WKL}_{0}\) for second order sentences, and so any second order theorem provable using (M) is provable in \(\mathsf{WKL}_{0}\). It remains to show that (M) implies \(\mathsf{WKL}_{0}\). By Theorem IV.2.3 of Simpson [14], it suffices to show that if \(f\) is a continuous function (coded by \(\Phi\)) on \([0,1]\), then \(f\) is uniformly continuous. Suppose \(\Phi\) codes a continuous function on \([0,1]\). By Lemma 7.4, \(\mathsf{RCA}_{0}^{\omega}\) proves that there is a function \(f\colon[0,1]\to\mathbb{R}\) matching the values of the coded function. Applying (M), the function \(M(f)\) is a modulus of uniform continuity for \(f\), and so also for the function coded by \(\Phi\). Thus \(\Phi\) codes a uniformly continuous function on \([0,1]\).
We conclude by pointing out the potential and limitations of this section. The principle (M) can be viewed as a higher order analogue of \(\mathsf{WKL}_{0}\) in much the same fashion that (\(\exists^{2}\)) is a higher order analogue of \(\mathsf{ACA}_{0}\). A number of Skolemized forms of statements equivalent to \(\mathsf{WKL}_{0}\) may be equivalent to (M) over \(\mathsf{RCA}_{0}^{\omega}\). (But not all, as witnessed by Kohlenbach's \(\mathsf{UWKL}\). See Proposition 4.6.) However, (M) may not be the only reasonable candidate for a \(\mathsf{WKL}_{0}\) analogue. For example, reformulating (M) as a function on second order continuous function codes yields an alternative principle (M\({}_{\mathsf{c}}\)). It seems likely that Proposition 7.5 holds for (M\({}_{\mathsf{c}}\)), but it is possible that neither (M) nor (M\({}_{\mathsf{c}}\)) proves the other over \(\mathsf{RCA}_{0}^{\omega}\).
|
2310.02502 | Entanglement-Assisted Quantum Chiral Spectroscopy | The most important problem of spectroscopic chiral analysis is that the inherently
weak chiral signals are easily overwhelmed by the environment noises. Enormous
efforts have been spent to overcome this problem by enhancing the symmetry break
in the light-molecule interactions or reducing the environment noises. Here, we
propose an alternative way to solve this problem by using frequency-entangled
photons as probe signals and detecting them in coincidence, i.e., using quantum
chiral spectroscopy. For this purpose, we develop the theory of
entanglement-assisted quantum chiral spectroscopy. Our results show that the
signals of left- and right-handed molecules in the quantum spectrum are always
distinguishable by suitably configuring the entangled probe photons. In
contrast, the classical spectrum of the two enantiomers becomes
indistinguishable when the symmetry break in the interactions is overwhelmed by
the environment noises. This offers our quantum chiral spectroscopy a great
advantage over all classical chiral spectroscopy. Our work opens up an exciting
area of exploring the profound advantages of quantum spectroscopy in chiral
analysis. | Chong Ye, Yifan Sun, Xiangdong Zhang | 2023-10-04T00:27:14Z | http://arxiv.org/abs/2310.02502v1 | # Entanglement-Assisted Quantum Chiral Spectroscopy
###### Abstract
The most important problem of spectroscopic chiral analysis is that the inherently weak chiral signals are easily overwhelmed by the environment noises. Enormous efforts have been spent to overcome this problem by enhancing the symmetry break in the light-molecule interactions or reducing the environment noises. Here, we propose an alternative way to solve this problem by using frequency-entangled photons as probe signals and detecting them in coincidence, i.e., using quantum chiral spectroscopy. For this purpose, we develop the theory of entanglement-assisted quantum chiral spectroscopy. Our results show that the signals of left- and right-handed molecules in the quantum spectrum are always distinguishable by suitably configuring the entangled probe photons. In contrast, the classical spectrum of the two enantiomers becomes indistinguishable when the symmetry break in the interactions is overwhelmed by the environment noises. This offers our quantum chiral spectroscopy a great advantage over all classical chiral spectroscopy. Our work opens up an exciting area of exploring the profound advantages of quantum spectroscopy in chiral analysis.
## Introduction
Chiral molecules cannot be superimposed onto their mirror images by rotations and translations. Such stereo-isomers are called enantiomers. As with our hands, the two enantiomers can be termed left-handed and right-handed molecules, respectively. The enantiomers of opposite chirality (handedness) play dramatically different roles in chemistry, biotechnologies, and pharmaceutics [1]. Because the two enantiomers share almost all of their physical properties, identifying them is challenging and of great importance to the above and other relevant research areas [2, 3].
In the past years, spectroscopic methods of chiral analysis have been developed to distinguish molecular chirality. The main idea of these methods is to induce a symmetry break in the interactions between light and the two enantiomers so that the chiral difference can be mapped to the optical degrees of freedom and detected spectroscopically. So far, well-established spectroscopic methods of chiral analysis include (vibrational [4] and photoelectron [5]) circular dichroism [6], optical rotary dispersion [7], Raman optical activity [8], and the recent three-wave mixing spectroscopy [9].
However, the molecular size is typically smaller than the wavelengths of the applied electromagnetic fields, leading to inherently weak chiral signals. Such weak chiral signals are easily overwhelmed by the environment noises, which affect the chiral analysis both in the process of light-molecule interactions and in the detection process. In the first process, the symmetry break in the interactions is suppressed by the environment noises, making it difficult to map the chiral difference to optical degrees of freedom. In the second process, the environment noises make it hard to detect the chiral signals. In order to solve this problem, enormous efforts have been made to reduce the environment noises by using buffer-gas cooling [10, 11] and direct laser cooling [12, 13]. Alternatively, advancements toward enhancing the symmetry break in light-molecule interactions have been widely explored, such as using plasmonic nanostructures [14], chiral surfaces [15], and twisted light with optical orbital angular momentum [16].
In this Letter, we propose a quantum chiral spectroscopy to solve such a problem by using a more ingenious detection scheme. To this end, we develop the theory of using frequency-entangled photons as probe signals and detecting them in coincidence. Owing to the nonclassical bandwidth of the frequency-entangled photons [17, 18, 19], the chiral difference can be enhanced in the coincidence signals. Our results show that the signals of left- and right-handed molecules in the coincidence spectrum are always distinguishable by suitably choosing the frequency-entangled two-photon pairs. In contrast, the classical spectra of left- and right-handed molecules become indistinguishable in the strong dissipation region, where the symmetry break in the interactions is overwhelmed by the environment noises. This offers our quantum chiral spectroscopy a great advantage over all classical chiral spectroscopies.
## Model
Our theoretical model is shown in Fig. 1 (a), where a chiral molecule couples with three narrow-band classical control fields and the signal photon of an incident broad-band entangled two-photon pair through the electric dipole interactions.
Figure 1: (a) Proposed setup for chiral analysis: Three classical fields are offered to control the chiral molecule. \(E_{s}\) and \(E_{l}\) are frequency-entangled. While \(E_{s}\) interacts with the chiral molecule, \(E_{l}\) propagates freely. The two photons are detected in coincidence. (b) The level scheme of our working system without regard to its chirality. The molecular chirality is reflected in the coupling strengths \(\Omega_{ij}\).
A detector detects the transmitted signal photon. The idle photon will not interact with the chiral molecule and serves as a reference for the two-photon counting. We use a four-level model with a cyclic three-level configuration as shown in Fig. 1 (b) to describe our working system. The three excited states \(|1\rangle\), \(|2\rangle\), and \(|3\rangle\) are coupled to each other in a cyclic three-level configuration through the three classical control fields. The ground state \(|0\rangle\) is coupled to the excited state \(|1\rangle\) through the signal photon.
The energies of molecular states \(|j\rangle\) are \(\hbar\omega_{j}\). The frequencies of the three narrow-band control fields are \(v_{ij}\) (\(i>j=1,2,3\)). The corresponding coupling strengths are \(\Omega_{ij}\). For simplicity, the system is set to work under the three-photon resonance condition \(v_{31}=v_{21}+v_{32}\). For the broad-band signal and idle photons in our setup, we assume the paraxial and long wave approximations such that they can be simplified to be one-dimensional [17, 19].
In the rotating-wave approximation, the Hamiltonian of the four-level model at \(r=0\) without regard to the molecular chirality is (\(\hbar=1\))
\[H=\sum_{j=0}^{3}\omega_{j}\sigma_{jj}+\sum_{i>j=1}^{3}(\Omega_{ij}\sigma_{ij}e ^{-\mathrm{i}v_{ij}t}+\mathrm{H.c.})+\sum_{k_{s}}[\omega_{s}a^{\dagger}_{k_{s} }a_{k_{s}}+(g_{k_{s}}a_{k_{s}}\sigma_{10}+\mathrm{H.c.})]+\sum_{k_{l}}\omega_ {l}a^{\dagger}_{k_{l}}a_{k_{l}}. \tag{1}\]
Here, we have defined \(\sigma_{ij}\equiv|i\rangle\langle j|\) with \(i,j=0,1,2,3\). \(a^{\dagger}_{k_{s}}\) and \(a^{\dagger}_{k_{l}}\) are the creation operators for the signal and idle photons with energies \(\hbar\omega_{s}\) and \(\hbar\omega_{l}\). The coupling strength associated with the signal photon is \(g_{k_{s}}=\mu\sqrt{\omega_{s}/(2\varepsilon_{0}L)}\). Here, \(\mu\), \(L\), and \(\varepsilon_{0}\) are the transition electric dipole, the quantization volume, and the vacuum permittivity, respectively.
The chirality of our working system can survive in the electric dipole approximation. It is reflected by the three electric dipole transition moments of the cyclic three-level configuration. While their magnitudes are the same for the two enantiomers, the sign of their product is distinct for each enantiomer [20, 21]. This property plays the essential role in chiral analysis [9, 22, 23] and a more ambitious issue named enantio-purification [24, 25, 26, 27, 28, 29, 30]. In our four-level
model, this property is revealed by the sign of the coupling strengths \(\Omega_{ij}\). Without loss of generality, we can choose
\[\Omega_{31}^{R}=-\Omega_{31}^{L},\ \ \Omega_{21}^{R}=\Omega_{21}^{L},\ \ \Omega_{32}^{R}=\Omega_{32}^{L}. \tag{2}\]
Here, the subscripts "\(Q=L,R\)" are used to denote the molecular chirality.
## Frequency-resolved chiral coincidence spectrum
In our setup, the chiral signals are collected by two-photon counting and given by coincidence probabilities in the standard Glauber's approach [31]. We are interested in the coincidence probabilities of detecting a single signal photon at \(r_{s}\) and a single idle photon at \(r_{l}\). We assume that the sensitive functions of the two detectors are \(\mathcal{S}\delta(\bar{\omega}_{s}-\omega_{s})\) and \(\mathcal{S}\delta(\bar{\omega}_{l}-\omega_{l})\) with sensitive frequencies \(\bar{\omega}_{s}\) and \(\bar{\omega}_{l}\). Then, by fixing the sensitive frequency of one detector and scanning that of the other, we can obtain the frequency-resolved spectrum of the coincidence probability, i.e., the frequency-resolved coincidence spectrum.
In the standard Glauber's approach [31], the coincidence probabilities can be obtained with the help of the system's evolution operators, which can be calculated by using the following procedure (see more details in Section 1 of the Supplement).
Figure 2: Classical (a) vs Quantum (b) chiral spectroscopy of left-handed (solid line) and right-handed (dashed line) molecules in strong dissipation region. In quantum chiral spectroscopy, we give six cases (distinguished by colors) corresponding to different \(\bar{\omega}_{l}\). The spectrum of the two enantiomers have different spectral-line shapes or signs. (c) The working regimes (colored regimes) of our entanglement-assisted quantum chiral spectroscopy in the \((T_{0}-\bar{\omega}_{l})\) plane. Each regime in (c) corresponds to one of the six cases in (b).
We treat the photon-molecule coupling \(\sum_{k_{s}}(g_{k_{s}}a_{k_{s}}\sigma_{10}+\text{H.c.})\) as a perturbation. The remaining terms can be diagonalized in the interaction picture as \(\mathcal{H}^{(0)}=\sum_{i=1}^{3}\lambda_{i}^{Q}|\lambda_{i}^{Q}\rangle\langle \lambda_{i}^{Q}|+\sum_{k_{s}}\Delta_{k_{s}}a_{k_{s}}^{\dagger}a_{k_{s}}+\sum_{k _{l}}\omega_{l}a_{k_{l}}^{\dagger}a_{k_{l}}\) with \(\Delta_{k_{s}}\equiv\omega_{s}-(\omega_{1}-\omega_{0})\). Using Dyson series, we obtain the evolution operators of the system and the coincidence spectrum. The coincidence spectrum is composed of two parts. One is the background, which describes the frequency-resolved coincidence probabilities without the chiral molecule. The other is the frequency-resolved transmission spectrum [18], which is the change of the frequency-resolved coincidence probabilities due to the appearance of the chiral molecule.
Since \(\lambda_{i}^{Q}\) are chirality-dependent, the transmission spectra are different for the two enantiomers. Specifically, for the system initially prepared in
\[|\Psi(0)\rangle=|0\rangle\otimes\sum_{k_{s},k_{l}}\psi(k_{s},k_{l})a_{k_{s}}^{ \dagger}a_{k_{l}}^{\dagger}|vac\rangle \tag{3}\]
with the vacuum state of the photon field \(|vac\rangle\), the transmission spectrum are (see more details in Section 2 of Supplement)
\[P_{c}^{Q}(\bar{\omega}_{s},\bar{\omega}_{l})=-\,\frac{\mathcal{S}^{2}L^{3}}{8 \pi^{4}}\varepsilon_{\bar{k}_{l}}^{2}\varepsilon_{\bar{k}_{s}}^{2}\sum_{i=1}^{ 3}\sum_{k_{s}^{\prime\prime}}|\eta_{1\lambda_{i}^{Q}}|^{2}\Re\left[\frac{g_{ \bar{k}_{s}}^{*}g_{k_{s}^{\prime\prime}}\psi^{*}(\bar{k}_{s},\bar{k}_{l})\psi (k_{s}^{\prime\prime},\bar{k}_{l})}{(\lambda_{i}^{Q}-\Delta_{\bar{k}_{s}}+ \text{i}\Gamma)(\lambda_{i}^{Q}-\Delta_{k_{s}^{\prime\prime}}+\text{i}\Gamma)}\right] \tag{4}\]
with \(\varepsilon_{k_{s}}=\sqrt{\omega_{s}/(2\varepsilon_{0}L)}\), \(\varepsilon_{k_{l}}=\sqrt{\omega_{l}/(2\varepsilon_{0}L)}\), \(\bar{k}_{s}=\bar{\omega}_{s}/c\), \(\bar{k}_{l}=\bar{\omega}_{l}/c\), and \(\eta_{1\lambda_{i}^{Q}}=\langle 1|\lambda_{i}^{Q}\rangle\). \(c\) is the speed of light in vacuum. The environment noises will affect the process of the light-molecule interactions, which give rise to the dissipation of molecules. In order to describe the effect of dissipation, we have introduced \(\Gamma\) in Eq. (4).
## Entanglement-enhanced chiral analysis
In general, the chirality dependency in the transmission spectrum is independent of whether the two photons are entangled or not. However, in the classical chiral spectroscopy, where
two uncorrelated photons are used, the spectral broadening caused by dissipation will reduce the chirality dependency and eventually make the transmission spectra of the two enantiomers indistinguishable. In contrast, in quantum chiral spectroscopy, where frequency-entangled photons are used as probe signals, the signal can be filtered and thus the chirality dependency in the transmission spectrum can be enhanced.
In order to demonstrate such an advantage of quantum chiral spectroscopy, we set the chiral molecule in the strong dissipation region by choosing the coupling strengths for the two enantiomers as \(\Omega_{31}^{R}=-\Omega_{31}^{L}=0.1\Gamma\), \(\Omega_{21}^{R}=\Omega_{21}^{L}=0.1\Gamma\), and \(\Omega_{32}^{R}=\Omega_{32}^{L}=0.1\Gamma\). The three classical control fields are coupled with their corresponding transitions on resonance with \(\Delta_{21}=\Delta_{31}=0\). Under these parameters, the classical transmission spectra of the two enantiomers are indistinguishable as shown in Fig. 2(a). For calculations, we have used uncorrelated two-photon pairs with center frequencies \(\omega_{sc}\) and \(\omega_{lc}\) by choosing \(\psi(k_{s},k_{l})\propto\exp\{-[(\omega_{s}-\omega_{sc})^{2}+(\omega_{l}-\omega_{lc})^{2}]/(2\sigma^{2})\}\) with width \(\sigma=\Gamma\).
For the case of frequency entangled two-photon pairs, we use [18]
\[\psi(k_{s},k_{l})\propto A_{p}(k_{s},k_{l})\exp{(-\kappa^{2})}. \tag{5}\]
Such two-photon pairs can be generated in nonlinear crystals due to the downconversion process [19]. The envelope \(A_{p}(k_{s},k_{l})\) is given by the pumped pulse. Here, we use a Gaussian envelope with \(A_{p}(k_{s},k_{l})=\exp{[-(\omega_{l}+\omega_{s}-\omega_{p})^{2}/(2\sigma_{p}^{ 2})]}\) with the center frequency \(\omega_{p}\) and the width \(\sigma_{p}\). The phase-mismatching function \(\exp{(-\kappa^{2})}\) appears due to the propagation of the photons in the crystals. For entangled two-photon pairs generated in type-II nonlinear crystals, we approximately have [17, 18, 19]
\[\kappa=(\omega_{s}-\omega_{sc})\frac{T_{s}}{2}+(\omega_{l}-\omega_{lc})\frac{ T_{l}}{2} \tag{6}\]
with the center frequencies of the two photons \(\omega_{sc}\) and \(\omega_{lc}\). \(T_{s}\) and \(T_{l}\) are the maximal time delays of the two photons propagating in the crystals with respect to the pumped pulse.
In Fig. 2(b), we show the transmission spectra of the two enantiomers in the strong dissipation region by using an entangled two-photon pair with \(\sigma_{p}=\Gamma\), \(T_{s}=2.4\times 10^{1}\Gamma^{-1}\), and \(T_{l}=2.5\times 10^{1}\Gamma^{-1}\). By tuning \(\bar{\omega}_{l}\), the transmission spectra of the two enantiomers with respect to \(\bar{\omega}_{s}\) can have different spectral-line shapes or signs. Here, we have illustrated six cases distinguished by color. Comparing these results with those in Fig. 2(a), we can conclude that chiral analysis using frequency-entangled two-photon pairs has an overwhelming advantage over that using uncorrelated ones in the strong dissipation region. The advantage is due to the frequency entanglement. In this sense, we have demonstrated an entanglement-assisted quantum chiral spectroscopy in the strong dissipation region.
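The following Python sketch gives a rough numerical reading of Eq. (4) for the parameters quoted above. It is not the calculation of the Supplement: the \(3\times 3\) rotating-frame control Hamiltonian used to generate the dressed energies \(\lambda_{i}^{Q}\) and overlaps \(\eta_{1\lambda_{i}^{Q}}\), the neglect of the slowly varying prefactors \(\varepsilon_{k}\) and \(g_{k}\), the identification \(\omega_{sc}+\omega_{lc}=\omega_{p}\), and the replacement of the sum over \(k_{s}^{\prime\prime}\) by an integral over a detuning grid are all assumptions made here, so only the qualitative left/right contrast of the line shapes is meaningful.

```python
import numpy as np

G = 1.0                                     # dissipation rate Gamma (sets the unit of energy)
O21, O32, O31 = 0.1 * G, 0.1 * G, 0.1 * G   # control-field coupling strengths

def dressed(chirality):
    """Assumed rotating-frame control Hamiltonian on (|1>, |2>, |3>) at resonance;
    the sign of Omega_31 encodes the chirality as in Eq. (2)."""
    s = -1.0 if chirality == "L" else 1.0
    Hc = np.array([[0.0, O21, s * O31],
                   [O21, 0.0, O32],
                   [s * O31, O32, 0.0]])
    lam, vec = np.linalg.eigh(Hc)
    return lam, np.abs(vec[0, :]) ** 2       # eigenvalues and |<1|lambda_i>|^2

def psi(ds, dl, sigma_p=G, Ts=24.0 / G, Tl=25.0 / G):
    """Entangled amplitude of Eqs. (5)-(6), written in detunings from the centre frequencies."""
    return np.exp(-((ds + dl) ** 2) / (2 * sigma_p ** 2)) * np.exp(-((ds * Ts / 2 + dl * Tl / 2) ** 2))

def transmission(chirality, dl, ds_grid):
    """Line shape of Eq. (4) versus the detected signal detuning, up to an overall constant."""
    lam, weights = dressed(chirality)
    d_det = ds_grid[:, None]                 # detected detuning  Delta_{bar k_s}
    d_sum = ds_grid[None, :]                 # integrated detuning Delta_{k_s''}
    kernel = np.zeros((ds_grid.size, ds_grid.size))
    for l, w in zip(lam, weights):
        kernel += w * np.real(1.0 / ((l - d_det + 1j * G) * (l - d_sum + 1j * G)))
    integrand = psi(d_det, dl) * psi(d_sum, dl) * kernel
    return -integrand.sum(axis=1) * (ds_grid[1] - ds_grid[0])

ds = np.linspace(-4 * G, 4 * G, 801)
for dl in (-0.5 * G, 0.0, 0.5 * G):          # a few idle-photon detunings, as in Fig. 2(b)
    PL, PR = transmission("L", dl, ds), transmission("R", dl, ds)
    contrast = np.max(np.abs(PL - PR)) / np.max(np.abs(PL))
    print(f"idle detuning {dl:+.2f} Gamma: relative L/R contrast {contrast:.3f}")
```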
In addition to \(\bar{\omega}_{l}\), \(T_{s}\) and \(T_{l}\) are two additional tunable parameters in quantum spectroscopy compared with classical spectroscopy. It is natural to explore the working regimes of our entanglement-assisted quantum chiral spectroscopy with respect to \(\bar{\omega}_{l}\), \(T_{s}\), and \(T_{l}\). For simplicity, we have used \(T_{s}=2.4T_{0}\) and \(T_{l}=2.5T_{0}\). The results are shown in Fig. 2(c). Our entanglement-assisted quantum chiral spectroscopy works in the colored regimes of the \((T_{0}-\bar{\omega}_{l})\) plane. Each regime in Fig. 2(c) corresponds to one of the six cases in Fig. 2(b) and is marked by the same color. These results indicate that the entanglement-assisted quantum chiral spectroscopy is robust against variations of the entangled two-photon pairs and the detector.
## Physical role of correlation
The frequency correlation between the two photons plays an important role in our entanglement-assisted quantum chiral spectroscopy. In order to illustrate this explicitly, we consider a photon pair with a zero-bandwidth negative energy correlation, given by [19]
\[\psi(k_{s},k_{l})=\phi_{s}(k_{s})\delta(\omega_{l}+\omega_{s}-\omega_{p}). \tag{7}\]
The delta function \(\delta(\omega_{l}+\omega_{s}-\omega_{p})\) indicates the energy conservation of the two photons. Moreover, we choose \(\Delta_{12}=\Delta_{13}\gg\{|\Omega_{12}|,|\Omega_{13}|\}\). In this case, to the second order, the dressed state \(|\lambda_{1}^{Q}\rangle\simeq|1\rangle\) [24], i.e., \(\eta_{1\lambda_{2}^{Q}}\simeq 0\) and \(\eta_{1\lambda_{3}^{Q}}\simeq 0\). The transmission spectrum can be greatly simplified as
\[P_{c}^{Q}(\bar{\omega}_{s},\bar{\omega}_{l})= -\frac{{\cal S}^{2}L^{4}}{16\pi^{4}}\varepsilon_{\bar{k}_{l}}^{2}\varepsilon_{\bar{k}_{s}}^{2}\delta(\bar{\omega}_{l}+\bar{\omega}_{s}-\omega_{p})|\eta_{1\lambda_{1}^{Q}}|^{2}\Re\left[\frac{g_{\bar{k}_{s}}g_{\bar{k}_{pl}}\phi_{s}^{*}(\bar{k}_{s})\phi_{s}(\bar{k}_{pl})}{(\lambda_{1}^{Q}-\Delta_{\bar{k}_{s}}+\mathrm{i}\Gamma)(\lambda_{1}^{Q}-\Delta_{\bar{k}_{pl}}+\mathrm{i}\Gamma)}\right] \tag{8}\]
with \(\bar{k}_{pl}\equiv k_{p}-\bar{k}_{l}\). Due to the delta function in \(\psi(k_{s}^{\prime\prime},\bar{k}_{l})\), only the term of \(k_{s}^{\prime\prime}=k_{p}-\bar{k}_{l}\) with \(k_{p}=\omega_{p}/c\) is nonzero and will contribute to the summation over \(k_{s}^{\prime\prime}\) in Eq. (4), i.e., the correlation filters our signals.
The entanglement-assisted quantum chiral spectroscopy is also robust against the variation of dissipation. For a fixed \(\bar{\omega}_{l}\), \(P_{c}\) is nonzero at \(\bar{\omega}_{s}=\omega_{p}-\bar{\omega}_{l}\), i.e., \(\bar{k}_{s}=\bar{k}_{pl}\). The sign of \(P_{c}^{Q}\) is determined by the sign of \([(\lambda_{1}^{Q}-\Delta_{\bar{k}_{pl}})^{2}-\Gamma^{2}]\). Since \(\lambda_{1}^{L}\neq\lambda_{1}^{R}\) [32], \(P_{c}^{Q}\) has different signs for the two enantiomers when \(\Delta_{\bar{k}_{pl}}=\omega_{p}-\bar{\omega}_{l}-(\omega_{1}-\omega_{0})\) is in the range between \((\Gamma-\lambda_{1}^{L})\) and \((\Gamma-\lambda_{1}^{R})\). Since the photon pair with a zero-bandwidth negative energy correlation can be approximately achieved by adjusting the width of the pumped pulse and the delay times [19], the transmission spectra of the two enantiomers are always distinguishable in the quantum chiral spectroscopy by suitably choosing the entangled photons.
## Discussions and Conclusions
In summary, we have developed the theory of entanglement-assisted quantum chiral spectroscopy to detect the molecular chirality. It offers an alternative way of suppressing the effect of environment noises on chiral analysis. Using a theoretical model, we have demonstrated that, owing to the nonclassical bandwidth of the frequency-entangled photons, the chiral difference can be enhanced in the coincidence signals. It can be applied to distinguish the molecular chirality in the strong dissipation region, where the classical chiral spectroscopy based on uncorrelated light loses efficiency due to the dissipation-induced spectral broadening.
As an experimentally accessible approach, frequency-entangled photons can be produced by spontaneous parametric down conversion. The apparatuses for generating the required entangled photons in our discussions are actually available. For example, when the transition \(|0\rangle\leftrightarrow|1\rangle\) is between vibrational levels, the signal photon is in the optical domain, which has been broadly investigated [33]. Also, the cyclic three-level configuration can be realized in the optical domain [25], which means the whole procedure is workable in such a frequency domain.
Quantum spectroscopy using frequency-entangled photons as probe signals is gaining attention due to the unique features of frequency-entangled photons, such as quantum fluctuations [34], linear scaling behaviors [35], and nonclassical bandwidth [17, 18, 19]. We introduce it to chiral analysis for the first time and demonstrate the advantage of using frequency-entangled photons as probe signals due to their nonclassical bandwidth. Beyond this, quantum spectroscopy has been demonstrated to achieve technical benefits, such as probing an enormous range of wavelengths using visible photons [36, 37], offering the possibility of cancelling uncorrelated background optical noises and detector noises [38], and achieving a provable quantum advantage over all schemes using classical sources [39]. Our work opens up an exciting area of exploring the profound advantages of quantum spectroscopy in chiral analysis.
The authors thank the support by National Natural Science Foundation of China (No. 91850205), National Science Foundation for Young Scientists of China (No.11904022), and Beijing Institute of Technology Research Fund Program for Young Scholars.
Derivation of frequency-resolved chiral coincidence signals. |
2307.08386 | Tabular Machine Learning Methods for Predicting Gas Turbine Emissions | Predicting emissions for gas turbines is critical for monitoring harmful
pollutants being released into the atmosphere. In this study, we evaluate the
performance of machine learning models for predicting emissions for gas
turbines. We compare an existing predictive emissions model, a first
principles-based Chemical Kinetics model, against two machine learning models
we developed based on SAINT and XGBoost, to demonstrate improved predictive
performance of nitrogen oxides (NOx) and carbon monoxide (CO) using machine
learning techniques. Our analysis utilises a Siemens Energy gas turbine test
bed tabular dataset to train and validate the machine learning models.
Additionally, we explore the trade-off between incorporating more features to
enhance the model complexity, and the resulting presence of increased missing
values in the dataset. | Rebecca Potts, Rick Hackney, Georgios Leontidis | 2023-07-17T10:50:09Z | http://arxiv.org/abs/2307.08386v1 | # Tabular Machine Learning Methods for Predicting Gas Turbine Emissions
###### Abstract
Predicting emissions for gas turbines is critical for monitoring harmful pollutants being released into the atmosphere. In this study, we evaluate the performance of machine learning models for predicting emissions for gas turbines. We compare an existing predictive emissions model, a first principles-based Chemical Kinetics model [1], against two machine learning models we developed based on SAINT [2] and XGBoost [3], to demonstrate improved predictive performance of nitrogen oxides (NOx) and carbon monoxide (CO) using machine learning techniques. Our analysis utilises a Siemens Energy gas turbine test bed tabular dataset to train and validate the machine learning models. Additionally, we explore the trade-off between incorporating more features to enhance the model complexity, and the resulting presence of increased missing values in the dataset.
keywords: gas turbines, machine learning, tabular data, transformers, PEMS, emissions
## 1 Introduction
Gas turbines are widely employed in power generation and mechanical drive applications, but their use is associated with the production of harmful emissions, including nitrogen oxides (NOx) and carbon monoxide (CO), which pose environmental and health risks. Regulations have been implemented to limit emissions and require monitoring.
To monitor emissions from gas turbines, a Continuous Emissions Monitoring System (CEMS) is commonly employed, which involves sampling gases and analysing their composition to quantify emissions. While CEMS can accurately measure emissions in real time, it imposes a high cost on the process owner, including daily maintenance to avoid drift. As a result, CEMS may not always be properly maintained, leading to inaccurate or unreliable measurements.
Predictive emissions monitoring system (PEMS) models provide an alternative method of monitoring emissions that is cost-effective and requires minimal maintenance compared to CEMS, while not requiring the large physical space needed for CEMS gas analysis. PEMS is
trained on historical data using process parameters such as temperatures and pressures, and uses real-time data to generate estimations for emissions.
To develop a PEMS model, it is necessary to validate the model's predictive accuracy using data with associated emissions values [4]. In our experiments, we used test bed tabular data consisting of tests conducted over a wide range of operating conditions to train our models. Gradient-boosted decision trees (GBDTs) such as XGBoost [3] and LightGBM [5] have demonstrated excellent performance in the tabular domain, and are widely regarded as the standard solution for structured data problems.
Previous studies comparing neural networks (NNs) and GBDTs for tabular regression have generally found that GBDTs match or outperform NN-based models, particularly when evaluated on datasets not documented in their original papers [6], while some NN-based methods are beginning to outperform GBDTs, such as SAINT [2].
We compare the predictive performance of an industry used Chemical Kinetics PEMS model [1], serving as the baseline, against two machine learning approaches: SAINT and XGBoost, to determine how improvements can be made in emissions prediction for gas turbines.
We observe that, on average, XGBoost outperforms both the original Chemical Kinetics model and the deep learning-based SAINT model for predicting both NOx and CO emissions on test bed data for gas turbines.
## 2 Background
### Gradient-Boosted Decision Trees
Gradient-boosted decision trees (GBDTs) are popular machine learning algorithms that combine the power of decision trees with the boosting technique, where multiple weak learners are combined in an ensemble to create highly accurate and robust models. GBDTs build decision trees iteratively, correcting errors of the previous trees in each iteration. Gradient boosting is used to combine the predictions of all the decision trees, with each tree's contribution weighted according to its accuracy. The final prediction is made by aggregating the predictions of all the decision trees.
XGBoost, or eXtreme Gradient Boosting [3], is a widely-used implementation of GBDTs, used for both classification and regression tasks. XGBoost is designed to be fast, scalable, and highly performant, making it well-suited for large-scale machine learning applications.
Figure 1: XGBoost initialisation, training, and prediction process.
One of the key features of XGBoost is its use of regularisation functions to prevent overfitting and improve the generalisation of the model. XGBoost also uses a tree pruning algorithm to remove nodes with low feature importance to reduce the complexity of the model and improve accuracy.
XGBoost has been highly successful for tabular data analysis, and deep learning researchers have been striving to surpass its performance.
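To make the boosting-on-residuals idea above concrete, the following minimal Python sketch (ours, for illustration only; the tree depth, learning rate and number of rounds are placeholders, not the settings used in this study) fits each new tree to the residuals of the current ensemble and sums the damped tree outputs at prediction time.

```python
# Minimal gradient boosting for squared loss: each tree is fitted to the
# residuals of the current ensemble, and the final prediction is the initial
# guess plus a damped sum of the tree outputs.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def boost_fit(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    """Return the initial prediction and the list of fitted trees."""
    f0 = float(np.mean(y))                 # start from the mean target value
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        residuals = y - pred               # negative gradient of squared loss
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, residuals)
        pred = pred + learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def boost_predict(f0, trees, X, learning_rate=0.1):
    return f0 + learning_rate * sum(tree.predict(X) for tree in trees)
```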
### Attention and Transformers
Transformers, originating from Vaswani et al. [7], are a type of deep learning architecture originally developed for natural language processing tasks and have been adapted for use in the tabular domain. These models use self-attention to compute the importance of each feature within the context of the entire dataset, enabling them to learn complex, non-linear relationships between features. This contrasts with GBDTs, where all features are treated equally and relationships between them are not considered. Attention mechanisms are capable of highlighting relevant features and patterns in the dataset that are the most informative for making accurate predictions.
Multi-head self-attention is a type of attention mechanism used in Transformers. A weight is assigned to each input token based on its relevance to the output, allowing selective focus on different parts of the input data.
The attention mechanism is applied multiple times in parallel, with each attention head attending to a different subspace of the input representation, allowing the model to capture different aspects of the input data and learn more complex, non-linear relationships between the inputs. The outputs of the multiple attention heads are then concatenated and passed
Figure 2: Multi-head attention from [7], where h is the number of heads, Q, K, and V are the query, key and value vectors.
through a linear layer to produce the final output. This is depicted in Figure 2, where the scaled dot-product attention is:
\[Attention(Q,K,V)=softmax(\frac{QK^{T}}{\sqrt{d_{k}}})V \tag{1}\]
In Figure 2 and Equation 1, Q, K, and V are the query, key and value vectors used to compute attention weights between each element of the input sequence. \(d_{k}\) is the dimension of the key vectors.
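For reference, Equation 1 and the multi-head mechanism of Figure 2 can be sketched in a few lines of NumPy; the random projection matrices below stand in for learned weights and the dimensions are illustrative, not those of any model in this paper.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention of Equation 1."""
    d_k = K.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

def multi_head(X, W_q, W_k, W_v, W_o, h):
    """Project, split into h heads, attend per head, concatenate, project back."""
    n, d_model = X.shape
    d_head = d_model // h
    Q, K, V = X @ W_q, X @ W_k, X @ W_v                    # (n, d_model) each

    def split(M):                                          # -> (h, n, d_head)
        return M.reshape(n, h, d_head).transpose(1, 0, 2)

    heads = attention(split(Q), split(K), split(V))        # (h, n, d_head)
    concat = heads.transpose(1, 0, 2).reshape(n, d_model)  # (n, d_model)
    return concat @ W_o

# Toy usage with random weights: 6 input tokens, d_model = 8, 2 heads.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
W_q, W_k, W_v, W_o = (rng.normal(size=(8, 8)) for _ in range(4))
out = multi_head(X, W_q, W_k, W_v, W_o, h=2)               # shape (6, 8)
```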
SAINT [2], the Self-Attention and Intersample Attention Transformer, is a deep learning model designed to make predictions based on tabular data. SAINT utilises attention to highlight specific features or patterns within the dataset that are most relevant for making accurate predictions, helping models better understand complex relationships within the data and make more accurate predictions.
In their experiments, they find that SAINT, on average, outperforms all other methods on supervised and semi-supervised tasks for regression, including GBDT-based methods, on a variety of datasets.
#### 2.2.1 Chemical Kinetics
Siemens Energy developed a Chemical Kinetics PEMS model [1] through mapping emissions via a 1D reactor element code 'GENE-AC' computational fluid dynamics model of their SGT-400 combustor and converting this to a parametric PEMS model. This is a first principles method that uses factors such as pilot/main fuel split, inlet air temperature and inlet air pressure to calculate the predicted emissions.
## 3 Related Works
### Gas Turbine Emissions Prediction
#### 3.1.1 First Principles
Predictive emissions monitoring systems (PEMS) for gas turbines have been developed since 1973 [8] in which an analytical model was developed using thermodynamics to predict NOx emissions. Rudolf et al. [9] developed a mathematical model which takes into account performance deterioration due to engine ageing. They combined different datasets, such as validation measurements and long-term operational data, to provide more meaningful emission trends. Lipperheide et al. [10] also incorporate aging of the gas turbines into their analytical model which is capable of accurately predicting NOx emissions for power in the range 60-100%. Siemens Energy developed a Chemical Kinetics model [1] to accurately predict CO and NOx emissions for their SGT-400 gas turbine. They used a 1D reactor model to find the sensitivity of the emissions to the different input parameters as a basis for the PEMS algorithm. Bainier et al. [11] monitor their analytical PEMS over two years and find a continuous good level of accuracy, noting that training is required to fully upkeep the system.
#### 3.1.2 Machine Learning
A number of machine learning (ML) methods have been used to predict emissions for gas turbines and have been found to be more flexible for prediction than first principles
methods. Cuccu et al. [12] compared twelve machine learning methods including linear regression, kernel based methods and feed-forward artificial neural networks with different backpropagation methods. They used k-fold cross-validation to select the optimal method-specific parameters and found that improved resilient backpropagation (iRPROP) achieved the best performance, and note that thorough pre-processing is required to produce such results. Kaya et al. [13] compared three decision fusion schemes on a novel gas turbine dataset, highlighting the importance of certain features within the dataset for prediction. Si et al. [14] also used k-fold validation to determine the optimal hyperparameters for their neural-network based models. Rezazadeh et al. [15] proposed a k-nearest-neighbour algorithm to predict NOx emissions.
Azzam et al. [16] utilised evolutionary artificial neural networks and support vector machines to model NOx emissions from gas turbines, finding that use of their genetic algorithm results in a high enough accuracy to offset the computational cost compared to the cheaper support vector machines. Kochueva et al. [17] develop a model based on symbolic regression and a genetic algorithm with a fuzzy classification model to determine "standard" or "extreme" emissions levels to further improve their prediction model. Botros et al. [18; 19; 20] developed a predictive emissions model based on neural networks with an accuracy of \(\pm 10\) parts per million.
Guo et al. [21] developed a NOx prediction model based on attention mechanisms, LSTM, and LightGBM. The attention mechanisms were introduced into the LSTM model to deal with the sequence length limitation LSTM faces. They eliminate noise through singular spectrum analysis and then use LightGBM to select the dependent feature. This processed data is then used as input to the LSTM and the attention mechanism is used to enhance the historical learning ability of information. They add feature attention and temporal attention to the LSTM model to improve prediction by allowing different emphasis by allocating different weights.
### Tabular Prediction
#### 3.2.1 Tree-Based
Gradient-boosted decision trees (GBDTs) have emerged as the dominant approach for tabular prediction, with deep learning methods only beginning to outperform them in some cases. Notably, XGBoost [3] often achieves state-of-the-art performance in regression problems. Other GBDTs such as LightGBM [5] and CatBoost [22] have shown success in tabular prediction.
Deep learning faces challenges when dealing with tabular data, such as low-quality training data, the lack of spatial correlation between variables, dependency on preprocessing, and the impact of single features [23]. Shwartz et al. [6] conclude that deep models were weaker than XGBoost, and that deep models only outperformed XGBoost alone when used as an ensemble with XGBoost. They also highlight the challenges in optimizing deep models compared to XGBoost. Grinsztajn et al. [24] find that tree-based models are state-of-the-art on medium sized data (10,000 samples), especially when taking into account computational cost, due to the specific features of tabular data, such as uninformative features, non rotationally-invariant data, and irregular patterns in the target function. Kadra et al. [25] argue that well-regularized plain MLPs significantly outperform more specialized neural network architectures, even outperforming XGBoost.
#### 3.2.2 Attention and Transformers
Attention- and transformer-based methods have shown promise in recent years for tabular prediction. Ye et al. [26] provide an overview of attention-based approaches for tabular data, highlighting the benefits of attention in tabular models. SAINT [2] introduced intersample attention, which allows rows to attend to each other, as well as using the standard self-attention mechanism, leading to improved performance over GBDTs on a number of benchmark tasks. TabNet [27] is an interpretable model that uses sequential attention to select features to reason from at each step. FT-Transformer [28] is a simple adaptation of the Transformer architecture that has outperformed other deep learning solutions on most tasks. However, GBDTs still outperform it on some tasks. TabTransformer [29] transforms categorical features into robust contextual embeddings using transformer layers, but it does not affect continuous variables. Kossen et al. [30] took the entire dataset as input and used self-attention to reason about relationships between datapoints. ExcelFormer [31] alternated between two attention modules to manipulate feature interactions and feature representation updates and manages to convincingly outperform GBDTs.
Despite the promising results of these attention- and transformer-based methods, deep learning models have generally been weaker than GBDTs on datasets that were not originally used in their respective papers [6]. Proper pre-processing, pre-training [32] and embedding [33] can enable deep learning tabular models to perform significantly better, reducing the gap between deep learning and GBDT models.
## 4 Methodology
### Data
The data is test bed data from the Siemens SGT400 gas turbines. This is tabular data consisting of a number of different gas turbines tested over a wide range of operating conditions. In total, there are 37,204 rows of data with 183 features including process parameters such as temperatures and pressures, and the target emission variables NOx and CO. All data is numerical values.
### Pre-Processing
From the test bed dataset, two comparison sub-datasets were used: "Full" and "Cropped". The Cropped dataset had a significant number of filters pre-applied to the data by Siemens Energy for the Chemical Kinetics model, while the Full dataset had no filters applied. Standard pre-processing was applied to both sets of data, including removing rows with missing data, removing negatives from the emissions data, and removing liquid fuel data. Features with a significant number of missing rows were also removed. For the Full dataset, any features with more than 18,100 missing values were removed. Similarly, for the Cropped dataset, features with more than 3,000 missing values were removed. These threshold values were chosen to be greater than the maximum number of missing values found in the emission columns.
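A pandas sketch of these filtering steps is given below; the column names (`fuel_type`, `NOx`, `CO`) are placeholders for the actual dataset schema, and the missing-value threshold is passed in per dataset (18,100 for Full, 3,000 for Cropped).

```python
import pandas as pd

def preprocess(df: pd.DataFrame, max_missing: int) -> pd.DataFrame:
    """Apply the filtering steps described above to the raw test bed table."""
    # Drop features whose number of missing values exceeds the chosen threshold.
    df = df.loc[:, df.isna().sum() <= max_missing]
    # Remove liquid fuel rows (the flag column name is a placeholder).
    if "fuel_type" in df.columns:
        df = df[df["fuel_type"] != "liquid"]
    # Remove rows with negative emissions readings (target names are placeholders).
    df = df[(df["NOx"] >= 0) & (df["CO"] >= 0)]
    # Finally, drop any remaining rows containing missing values.
    return df.dropna()
```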
Table 1 provides an overview of both sub-datasets and the number of rows and features in each. Due to the prior pre-processing removing proportionally more missing values through the original filters, the Cropped dataset ends with more rows of data compared to the Full dataset, at the cost of reducing the number of features. When removing the same features
from the Cropped dataset as the Full dataset only 2044 rows remain so this was not chosen to be used for modelling. Further feature details can be found in Table 1.
The dataset is collected from 0% to 126% load, and pre-processing reduces this to 24% to 126%. We utilise this full range for our comparisons.
Figure 3 depicts the spread of the data for the target emissions, NOx and CO, for both sub-datasets. CO has many more outliers compared to NOx, with some particularly far from the median.
### Models
We compare a transformer-based model, SAINT [2], and the GBDT XGBoost [3], against an existing PEMS model used by Siemens Energy, a first principles-based Chemical Kinetics model [1].
#### 4.3.1 Saint
SAINT accepts a sequence of feature embeddings as input and produces contextual representations with the same dimensionality.
\begin{table}
\begin{tabular}{c|c|c} \hline \hline
**Action** & **Full** & **Cropped** \\ \hline Start & 37204 rows, 183 features & 9873 rows, 183 features \\ Remove low data features & Removes 9 features & Removes 95 features \\ Remove liquid fuel data & Removes 5752 rows & No change \\ Remove negative emissions & Removes 16977 rows & Removes 744 rows \\ Remove all missing values & Removes 8615 rows & Removes 2700 rows \\ End & 5860 rows, 174 features & 6429 rows, 88 features \\ \hline \hline \end{tabular}
\end{table}
Table 1: Pre-processing process for the Full and Cropped datasets showing number of rows in each dataset.
Figure 3: NOx and CO data spread for Full and Cropped datasets on a logarithmic scale.
Features are projected into a combined dense vector space and passed as tokens into a transformer encoder. A single fully-connected layer with a ReLU activation is used for each continuous feature's embedding.
SAINT alternates self-attention and intersample attention mechanisms to enable the model to attend to information over both rows and columns. The self-attention attends to individual features within each data sample, and intersample attention relates each row to other rows in the input, allowing all features from different samples to communicate with each other.
Similar to the original transformer [7], there are \(L\) identical layers, each containing one self-attention and one intersample attention transformer block. The self-attention block is identical to the encoder from [7], consisting of a multi-head self-attention layer with 8 heads, and two fully-connected feed-forward layers with a GELU non-linearity. A skip connection and layer normalisation are applied to each layer. The self-attention layer is replaced by an intersample attention layer for the intersample attention block.
Figure 4: Proposed method based on SAINT [2]. The features, \([f_{1},...,f_{n}]\), are the process parameters from sensors within the gas turbine tests, where \(n\) is the number of features, 11. Each \(x_{i}\) is one row of data including one of each feature, where \(b\) is the batch size, 32. A [CLS] token with a learned embedding is appended to each data sample. This batch of inputs is passed through an embedding layer, consisting of a linear layer, a ReLU non-linearity, followed by a linear layer, prior to being processed by the SAINT model \(L\) times, where \(L\) is 3. Only representations corresponding to the [CLS] token are selected for an MLP to be applied to. MSE loss is done on predictions during training. For our experiments, \(b\) is the batch size (32), \(n\) is the number of features (7). \(L_{1}\) is the first linear layer, with 1 input feature and 100 output features, \(L_{2}\) is the second linear layer, with 100 input features and 1 output feature. The embedding layer is performed for each feature.
For the intersample attention layer, the embeddings of each feature are concatenated for each row, and attention is computed over samples rather than features, allowing communication between samples.
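The reshaping behind intersample attention can be sketched as follows in PyTorch (this is an illustration, not the authors' implementation; the embedding size of 32 is arbitrary, while the batch size of 32, 7 features and 8 heads follow the description above).

```python
import torch
import torch.nn as nn

b, n, d = 32, 7, 32          # rows in the batch, features per row, embedding size

self_attn = nn.MultiheadAttention(embed_dim=d, num_heads=8, batch_first=True)
inter_attn = nn.MultiheadAttention(embed_dim=n * d, num_heads=8, batch_first=True)

x = torch.randn(b, n, d)     # feature embeddings for one batch of rows

# Self-attention block: the features within each row attend to each other.
row_ctx, _ = self_attn(x, x, x)                  # (b, n, d)

# Intersample attention block: concatenate the feature embeddings of each row
# and treat the batch itself as the sequence, so rows attend to other rows.
rows = x.reshape(1, b, n * d)                    # (1, b, n*d)
sample_ctx, _ = inter_attn(rows, rows, rows)     # (1, b, n*d)
sample_ctx = sample_ctx.reshape(b, n, d)
```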
We use SAINT in a fully supervised multivariate regression setting, which was not originally reported on in the paper. The code we based our experiments on can be found at1. We used the AdamW optimiser with a learning rate of 0.0001.
Footnote 1: [https://github.com/somepago/saint](https://github.com/somepago/saint)
#### 4.3.2 XGBoost
XGBoost reduces overfitting through regularisation and pruning, using a distributed gradient boosting algorithm to optimise the model's objective function to make it more scalable and efficient, and automatically handles missing values.
Decision trees are constructed in a greedy manner as a weak learner. At each iteration, XGBoost evaluates the performance of the current ensemble and adds a new tree to the ensemble that minimises the loss function through gradient descent. Each successive tree implemented compensates for residual errors in the previous tree.
#### 4.3.3 Chemical Kinetics
We compare our work to an updated Chemical Kinetics model, based on [1], using the same sets of test data for comparisons. The predictions for the Chemical Kinetics model are essentially part of the original dataset, with the number of features and rows of each sub-dataset, described in Section 4.2, not affecting the raw predictions but eliminating the varying rows depending on missing values due to features in the dataset.
Figure 5: Box plots for MAE results for NOx for each model on a logarithmic scale.
### Metrics and Evaluation
The metrics used to evaluate the models in this work are the mean absolute error (MAE) and root mean squared error (RMSE).
MAE is expressed as follows:
\[MAE=\frac{1}{n}\sum_{i=1}^{n}|y_{i}-\hat{y}_{i}| \tag{2}\]
RMSE is expressed as follows:
\[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\hat{y}_{i})^{2}} \tag{3}\]
We used randomised cross-validation to evaluate the performance of the machine learning models, SAINT and XGBoost, whereby the data was randomly sub-sampled 10 times to obtain unbiased estimates of the models' performance on new, unseen data on which they were re-trained and tested on. We report the average and standard deviation of the MAE and RMSE for each sub-dataset, providing an insight into the models' consistency and variation in performance. The Chemical Kinetics model is also compared on these test sets to provide relative benchmark for the performance of the models. The CO and NOx emissions targets are individually trained for to achieve specialised models for each target.
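The evaluation loop can be sketched as follows, shown here for the XGBoost model; the hyperparameters are illustrative placeholders rather than the tuned values used in this work.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error
from xgboost import XGBRegressor

def evaluate(X, y, n_repeats=10, test_size=0.2):
    """Average MAE and RMSE (Eqs. 2-3) over repeated random train/test splits."""
    maes, rmses = [], []
    for seed in range(n_repeats):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=test_size, random_state=seed)
        model = XGBRegressor(n_estimators=500, learning_rate=0.05)  # illustrative
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        maes.append(mean_absolute_error(y_te, pred))
        rmses.append(np.sqrt(mean_squared_error(y_te, pred)))
    return (np.mean(maes), np.std(maes)), (np.mean(rmses), np.std(rmses))
```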
### Impact of Number of Features
To assess the influence of the number of features compared to the number of rows of data on prediction performance, we further split each dataset where each subset contained a decreasing number of features, leading to fewer rows of missing data, allowing an examination into the effect of removing less important features on the availability of data points for training. Feature removal followed the order of decreasing feature importance according to XGBoost, where the importance is calculated by XGBoost based on how often each feature is used to make key decisions across all trees in the ensemble. The order of importance for each feature can be found in Table 1.
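A sketch of this ranking step using the scikit-learn interface of XGBoost is shown below; `importance_type="weight"` counts how often a feature is used for splits, matching the description above, and the helper names are ours.

```python
import pandas as pd
from xgboost import XGBRegressor

def rank_features(X: pd.DataFrame, y) -> list:
    """Feature names ordered from most to least important according to XGBoost."""
    model = XGBRegressor(importance_type="weight").fit(X, y)
    ranked = sorted(zip(X.columns, model.feature_importances_),
                    key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in ranked]

def keep_most_important(X: pd.DataFrame, y, keep: int) -> pd.DataFrame:
    """Keep the `keep` most important features, e.g. 174, 130, 87 or 45."""
    return X[rank_features(X, y)[:keep]]
```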
## 5 Results and Discussion
Table 2 describes the average MAE and RMSE obtained from the 10 sub-samples of the dataset with the varying number of features. XGBoost has on average the lowest MAE for each emission and number of features, while SAINT has a lower RMSE on average.
All models, especially the Chemical Kinetics model, have significant errors when predicting CO. Further analysis of these results indicated that these large errors were primarily driven by a small number of data points with extremely anomalous MAE values. Figure 6 illustrates these outliers, with the logarithmic scale emphasizing the limited number of data points responsible for the higher mean MAE. Despite the presence of outliers, the median MAE values for each model were not excessively high, with the majority of data points exhibiting more accurate predictions for CO.
Figure 8 demonstrates that the majority of predictions generated by all models fall within a reasonable range for accurate CO emission prediction for gas turbines. While overall performance may be affected by the presence of outliers, the models do exhibit good predictive capabilities for CO and NOx emissions.
Figures 7 and 8 show the normalised predictions compared to the real values for NOx and CO. For Figure 8, predictions above 1000 ppm were removed from view as these were extremely anomalous and prevented the main results from being seen clearly. For both emissions, the Chemical Kinetics model has more spread compared to SAINT and XGBoost. For CO especially, XGBoost predictions are closer to the identity line compared to SAINT.
Figure 9 displays the relationship between the MAE values and the number of features in
\begin{table}
\begin{tabular}{c|c|c c|c c|c c} \hline \hline
\multicolumn{2}{c|}{Methods} & \multicolumn{2}{c|}{SAINT} & \multicolumn{2}{c|}{XGBoost} & \multicolumn{2}{c}{Chemical Kinetics} \\ \hline
\multicolumn{2}{c|}{Metric} & MAE & RMSE & MAE & RMSE & MAE & RMSE \\ \hline
\multirow{4}{*}{NOx (Full)} & 174 & 0.91 \(\pm\)0.11 & **2.82 \(\pm\)2.45** & **0.62 \(\pm\)0.14** & 4.08 \(\pm\)3.09 & 4.46 \(\pm\)0.15 & 6.59 \(\pm\)1.43 \\
 & 130 & 0.89 \(\pm\)0.21 & **2.92 \(\pm\)2.02** & **0.74 \(\pm\)0.18** & 4.48 \(\pm\)3.65 & 4.09 \(\pm\)0.10 & 6.14 \(\pm\)1.14 \\
 & 87 & 1.72 \(\pm\)0.70 & **3.83 \(\pm\)1.62** & **0.76 \(\pm\)0.12** & 4.04 \(\pm\)2.62 & 4.09 \(\pm\)0.10 & 6.14 \(\pm\)1.14 \\
 & 45 & 1.14 \(\pm\)0.38 & **2.96 \(\pm\)1.64** & **0.74 \(\pm\)0.08** & 3.00 \(\pm\)1.99 & 3.68 \(\pm\)0.12 & 5.55 \(\pm\)0.94 \\ \hline
\multirow{2}{*}{NOx (Cropped)} & 88 & 0.54 \(\pm\)0.08 & **0.92 \(\pm\)0.1** & **0.47 \(\pm\)0.02** & 0.95 \(\pm\)0.17 & 2.67 \(\pm\)0.06 & 3.84 \(\pm\)0.33 \\
 & 45 & 0.56 \(\pm\)0.07 & 0.94 \(\pm\)0.07 & **0.44 \(\pm\)0.02** & **0.92 \(\pm\)0.16** & 2.67 \(\pm\)0.06 & 3.84 \(\pm\)0.33 \\ \hline
\multirow{4}{*}{CO (Full)} & 174 & 11.37 \(\pm\)6.61 & **117.61 \(\pm\)191.07** & **5.05 \(\pm\)6.45** & 117.83 \(\pm\)197.50 & 2.49E+6 \(\pm\)7.54E+5 & 3.79E+7 \(\pm\)7.35E+6 \\
 & 130 & 10.58 \(\pm\)5.84 & **164.20 \(\pm\)225.07** & **7.41 \(\pm\)8.09** & 220.53 \(\pm\)260.67 & 1.47E+6 \(\pm\)5.98E+5 & 2.85E+7 \(\pm\)7.37E+6 \\
 & 87 & 14.31 \(\pm\)6.33 & **152.70 \(\pm\)225.24** & **7.68 \(\pm\)10.80** & 214.44 \(\pm\)317.08 & 1.50E+6 \(\pm\)5.98E+5 & 2.85E+7 \(\pm\)7.37E+6 \\
 & 45 & 24.97 \(\pm\)30.58 & 292.55 \(\pm\)236.71 & **6.04 \(\pm\)6.30** & **219.92 \(\pm\)262.52** & 1.38E+6 \(\pm\)8.93E+5 & 2.64E+7 \(\pm\)1.28E+7 \\ \hline
\multirow{2}{*}{CO (Cropped)} & 88 & 2.46 \(\pm\)0.72 & 20.02 \(\pm\)10.14 & **0.59 \(\pm\)0.31** & **9.13 \(\pm\)8.15** & 5.97E+5 \(\pm\)3.32E+5 & 1.80E+7 \(\pm\)9.34E+6 \\
 & 45 & 2.73 \(\pm\)2.30 & 20.01 \(\pm\)10.15 & **0.63 \(\pm\)0.37** & **10.50 \(\pm\)9.31** & 5.96E+5 \(\pm\)3.32E+5 & 1.80E+7 \(\pm\)9.34E+6 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: Average MAE and RMSE (\(\pm\) one standard deviation) over the 10 random sub-samples, for NOx and CO on the Full and Cropped datasets with varying numbers of features; the best MAE and RMSE in each row are shown in bold.
the analysis, highlighting the potential impact of feature removal and its effect on prediction performance. Despite having 2415 more rows of training data, with the exception of SAINT's CO prediction, the MAE is not significantly affected by the change in number of features and rows.
In our evaluation, XGBoost provided the best prediction accuracy for both NOx and CO, with both machine learning methods outperforming the original Chemical Kinetics model. Prediction for NOx is significantly more accurate than CO prediction for all models. This can be attributed to the wider spread of data points and greater presence of influential outliers in the CO real values, as evident in Figure 3. The abundance of outliers in the CO dataset made it inherently more challenging to predict accurately. The filters used for the Cropped dataset particularly improved the RMSE of the machine learning models as it removed some outlier inputs in the dataset.
## 6 Conclusion and Future Work
XGBoost remains the best model for tabular prediction for this gas turbine dataset for both NOx and CO, but the attention-based model, SAINT, is catching up in terms of performance. Both machine learning models outperformed the first-principles-based Chemical Kinetics model, indicating that machine learning continues to show a promising future for gas turbine emissions prediction.
Furthermore, to fully utilise the years of operational gas turbine data that is available but unlabelled, a future step to improve gas turbine emissions prediction will be to include
Figure 8: Normalised real vs. predicted values for CO for each model within one standard deviation for the Full dataset with all features. Extreme anomalous real and predicted values above 1000 were also removed, removing 14 data points.
Figure 7: Normalised real vs. predicted values for NOx for each model within one standard deviation.
self-supervised learning into the training process. Despite XGBoost displaying the best performance here, attention-based methods such as SAINT will be easier to combine with self-supervised learning by performing a pretext task such as masking to predict masked sections of the operational data to learn representations of the data, which can then be used in a downstream task using SAINT to create predictions.
## Acknowledgements
The work presented here received funding from EPSRC (EP/W522089/1) and Siemens Energy Industrial Turbomachinery Ltd. as part of the iCASE EPSRC PhD studentship "Predictive Emission Monitoring Systems for Gas Turbines".
|
2304.11387 | Counting base phi representations | In a base phi representation a natural number is written as a sum of powers
of the golden mean $\varphi$. There are many ways to do this. How many? Even if
the number of powers of $\varphi$ is finite, then any number has infinitely
many base phi representations. By not allowing an expansion to end with the
digits 0,1,1, the number of expansions becomes finite, a solution proposed by
Ron Knott. Our first result is a recursion to compute this number of
expansions. This recursion is closely related to the recursion given by Neville
Robbins to compute the number of Fibonacci representations of a number, also
known as Fibonacci partitions. We propose another way to obtain finitely many
expansions, which we call the natural base phi expansions. We prove that these
are closely connected to the Fibonacci partitions. | Michel Dekking, Ad van Loon | 2023-04-22T12:45:14Z | http://arxiv.org/abs/2304.11387v1 | ###### Abstract
In a base phi representation a natural number is written as a sum of powers of the golden mean \(\varphi\). There are many ways to do this. How many? Even if the number of powers of \(\varphi\) is finite, then any number has infinitely many base phi representations. By not allowing an expansion to end with the digits 0,1,1, the number of expansions becomes finite, a solution proposed by Ron Knott. Our first result is a recursion to compute this number of expansions. This recursion is closely related to the recursion given by Neville Robbins to compute the number of Fibonacci representations of a number, also known as Fibonacci partitions. We propose another way to obtain finitely many expansions, which we call the natural base phi expansions. We prove that these are closely connected to the Fibonacci partitions.
Keywords: Base phi; Lucas numbers; Fibonacci numbers; Fibonacci partitions
**Counting base phi representations**
**Michel Dekking and Ad van Loon**
_Addresses M. Dekking: CWI, Amsterdam and Delft University of Technology, Faculty EEMCS._
_Email addresses: [email protected], [email protected], [email protected]_
**April 22, 2023**
## 1 Introduction
A natural number \(N\) is written in base phi if \(N\) has the form
\[N=\sum_{i=-\infty}^{\infty}a_{i}\varphi^{i},\]
where \(a_{i}=0\) or \(a_{i}=1\), and where \(\varphi:=(1+\sqrt{5})/2\) is the golden mean.
There are infinitely many ways to do this. When the number of powers of \(\varphi\) is finite we write these representations (also called expansions) as
\[\alpha(N)=a_{L}a_{L-1}\ldots a_{1}a_{0}\cdot a_{-1}a_{-2}\ldots a_{R+1}a_{R}.\]
Infinitely many expansions can be generated in a rather trivial way from expansions with just a few powers of \(\varphi\) using the replacement \(100\to 011\) at the end of the expansion.
So we use Knott's truncation rule from [11]:
\[a_{R+2}a_{R+1}a_{R}\neq 011. \tag{1}\]
Let \(\mbox{Tot}^{\kappa}(N)\) be the number of base phi expansions of the number \(N\) satisfying Equation (1):
\[\mbox{Tot}^{\kappa}=\ 0,1,1,2,3,3,5,5,5,8,8,8,5,10,13,12,12,13,10,7,15,18,21,16,20,20,16,21,18,15,7,17,\ldots\]
(In OEIS ([16]): A289749, Number of ways not ending in 011 to write \(n\) in base phi.)
In 1957 George Bergman ([1]) proposed restrictions on the digits \(a_{i}\) which entail that the representation becomes unique and finite. This is generally accepted as _the_ representation of the natural numbers in base phi. A natural number \(N\) is written in the Bergman representation if \(N\) has the form
\[N=\sum_{i=-\infty}^{\infty}d_{i}\varphi^{i},\]
with digits \(d_{i}=0\) or \(d_{i}=1\), and where \(d_{i+1}d_{i}=11\) is not allowed. We write these representations as
\[\beta(N)=d_{L}d_{L-1}\ldots d_{1}d_{0}\cdot d_{-1}d_{-2}\ldots d_{R+1}d_{R}.\]
A natural number \(N\) is written in base Fibonacci if \(N\) has the form
\[N=\sum_{i=2}^{\infty}c_{i}F_{i},\]
where \(c_{i}=0\) or \(c_{i}=1\), and \((F_{i})_{i\geq 0}=0,1,1,2,3,\ldots\) are the Fibonacci numbers.
Let \(\operatorname{Tot}^{\mathrm{FIB}}(N)\) be the total number of Fibonacci expansions of the number \(N\). Then
\[\operatorname{Tot}^{\mathrm{FIB}}=\,1,1,1,2,1,2,2,1,3,2,2,3,1,3,3,2,4,2,3,3,1,4,3,3,5,\ldots\]
(In OEIS ([16]): A000119, Number of representations of \(n\) as a sum of distinct Fibonacci numbers.)
This sequence has received a lot of attention, see e.g., the papers [9], [8], [4], [5], [17], [2], [14], and [3].
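For small \(N\) the value of \(\operatorname{Tot}^{\mathrm{FIB}}(N)\) can be checked by brute force. The following Python sketch (ours, for illustration) enumerates sums of distinct Fibonacci numbers \(F_{i}\), \(i\geq 2\).

```python
def tot_fib(n: int) -> int:
    """Number of representations of n as a sum of distinct Fibonacci numbers
    F_i with i >= 2 (so the value 1 is available only once)."""
    fibs = []
    a, b = 1, 2                         # F_2, F_3
    while a <= n:
        fibs.append(a)
        a, b = b, a + b

    def count(remaining, idx):
        if remaining == 0:
            return 1
        if remaining < 0 or idx == len(fibs):
            return 0
        # either use fibs[idx] in the sum or skip it
        return count(remaining - fibs[idx], idx + 1) + count(remaining, idx + 1)

    return count(n, 0)

# For example tot_fib(8) == 3, matching 8 = 8 = 5 + 3 = 5 + 2 + 1.
```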
In 1952 the paper [12] proposed restrictions on the digits \(c_{i}\) which entail that the representation becomes unique. This is known as the Zeckendorf expansion of the natural numbers after the paper [15].
A natural number \(N\) is written in the Zeckendorf representation if \(N\) has the form
\[N=\sum_{i=2}^{\infty}e_{i}F_{i},\]
with digits \(e_{i}=0\) or \(e_{i}=1\), and where \(e_{i+1}e_{i}=11\) is not allowed.
The Fibonacci representation and the base phi representation are closely related. We make a list.
\begin{tabular}{|c|c|c|} \hline Property & Fibonacci & Base phi \\ \hline & \(F_{n}:\quad n\geq 2\) & \(\varphi^{n}:\quad n\) integer \\ \hline Fundamental recursion & \(F_{n+1}=F_{n}+F_{n-1}\) & \(\varphi^{n+1}=\varphi^{n}+\varphi^{n-1}\) \\ Golden mean flip & \(100\to 011\) & \(100\to 011\) \\ \hline Unique expansion & Zeckendorf & Bergman \\ Condition on the digits & no 11 & no 11 \\ \hline Fundamental intervals & \([F_{n},F_{n+1}-1]\) & \([L_{2n},\,L_{2n+1}]\), \([L_{2n+1}+1,\,L_{2n+2}-1]\) \\ Examples \(F_{5}=5,\,L_{4}=7\) & \([5,7]=[2\,2\,1]\) & \([7,11]=[5\,8\,8\,8\,5]\) \\ Examples \(F_{6}=8,\,L_{5}=11\) & \([8,12]=[3\,2\,2\,3\,1]\) & \([12,17]=[10\,13\,12\,12\,13\,10]\) \\ \hline \end{tabular}
Here the \(L_{n}\) are the Lucas numbers defined by \(L_{0}=2,L_{1}=1\) and \(L_{n+1}=L_{n}+L_{n-1}\) for \(n\geq 1\).
The intervals \(\Lambda_{2n}=[L_{2n},\,L_{2n+1}]\), \(\Lambda_{2n+1}=[L_{2n+1}+1,\,L_{2n+2}-1]\) are called the even and odd Lucas intervals.
Replacing the digits \(100\) in an expansion by \(011\) will be called a _golden mean flip_. Our Theorem 2.1 shows that any base phi expansion can be obtained from the Bergman expansion by a finite number of such golden mean flips. There is a special case which needs attention, which we illustrate with an example. Let \(N=4\). Then \(\beta(4)=101\cdot 01\). Applying the golden mean flip at the right gives the expansion \(101\cdot 0011\), which is not an allowed expansion. However, if we apply a second golden mean flip we can obtain \(100\cdot 1111\), which _is_ an allowed expansion. We call this operation a _double golden mean flip_.
In Section 2 we determine a formula for \(\operatorname{Tot}^{\kappa}(N)\). In Section 3 we give simple formula's for \(N=F_{n}\), and for \(N=L_{n}\). In Section 4 we introduce a new way to count expansions, by defining _natural expansions_, and give a formula for \(\operatorname{Tot}^{\nu}(N)\), the number of natural base phi expansions of \(N\). We moreover show that \((\operatorname{Tot}^{\nu}(N))\) is a subsequence of the sequence of total numbers of Fibonacci representations. Section 5 gives important information on the different behaviour of phi expansions on the odd and the even Lucas intervals.
A recursive formula for the number of Knot expansions
In this section we determine a formula for \(\mbox{\rm Tot}^{\kappa}(N)\) for each natural number \(N\).
Because of the fundamental recursion \(\varphi^{n+1}=\varphi^{n}+\varphi^{n-1}\), one can transform a base phi expansion \(\alpha(N)=a_{L}a_{L-1}\ldots a_{1}a_{0}\cdot a_{-1}a_{-2}\ldots a_{R+1}a_{R}\) of \(N\) with \(a_{i+1}a_{i}a_{i-1}=100\) to another base phi expansion of \(N\), by the map
\[T_{i}:a_{i+1}a_{i}a_{i-1}\rightarrow[a_{i+1}-1][a_{i}+1][a_{i-1}+1],\]
where \(R-1\leq i\leq L-1\). This is a more detailed definition of the golden mean flip.
In the definition we put of course \(a_{R-1}=a_{R-2}=0\).
The map \(T_{i}\) has an inverse denoted \(U_{i}\) for \(R-1\leq i\leq L\) given by
\[U_{i}:a_{i+1}a_{i}a_{i-1}\rightarrow[a_{i+1}+1][a_{i}-1][a_{i-1}-1],\]
as soon as \(a_{i+1}a_{i}a_{i-1}=011\). We call this map the _reverse golden mean flip_.
Example: \(U_{2}(110\cdot 01)=1000\cdot 01\).
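As an illustration (ours, not part of the original text), the maps \(T_{i}\) and \(U_{i}\) can be implemented on expansions stored as dictionaries mapping exponents to digits; the snippet checks the example above and that the represented value is preserved.

```python
PHI = (1 + 5 ** 0.5) / 2

def value(digits):
    """Numerical value of an expansion given as {exponent: digit}."""
    return sum(d * PHI ** i for i, d in digits.items())

def T(digits, i):
    """Golden mean flip T_i: 100 -> 011 at positions i+1, i, i-1."""
    d = dict(digits)
    if d.get(i + 1, 0) == 1 and d.get(i, 0) == 0 and d.get(i - 1, 0) == 0:
        d[i + 1], d[i], d[i - 1] = 0, 1, 1
        return d
    raise ValueError("T_%d does not apply" % i)

def U(digits, i):
    """Reverse golden mean flip U_i: 011 -> 100 at positions i+1, i, i-1."""
    d = dict(digits)
    if d.get(i + 1, 0) == 0 and d.get(i, 0) == 1 and d.get(i - 1, 0) == 1:
        d[i + 1], d[i], d[i - 1] = 1, 0, 0
        return d
    raise ValueError("U_%d does not apply" % i)

# The example above: U_2 maps 110.01 to 1000.01 and preserves the value.
w = {2: 1, 1: 1, -2: 1}                        # 110.01
assert U(w, 2) == {3: 1, 2: 0, 1: 0, -2: 1}    # 1000.01
assert abs(value(w) - value(U(w, 2))) < 1e-9
```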
**Theorem 2.1**: _Any finite base phi expansion \(\alpha(N)\) with digits 0 and 1 of a natural number \(N\) can be obtained from the Bergman expansion \(\beta(N)\) of \(N\) by a finite number of applications of the golden mean flip._
_Proof:_ We prove this by showing that any expansion of \(N\) will be mapped to its Bergman expansion by a finite number of applications of the reverse golden mean flip. Let \(\alpha(N)=a_{L}a_{L-1}\ldots a_{1}a_{0}\cdot a_{-1}a_{-2}\ldots a_{R+1}a_{R}\) be an expansion of \(N\) with digits 0 and 1. When 11 does not occur in \(\alpha(N)\), then \(\alpha(N)=\beta(N)\), and there is nothing to do. Otherwise, let \(m:=\max\{i:a_{i}a_{i-1}=11\}\). First, suppose \(m\leq L-2\). Then by the definition of \(m\), we have \(a_{m+1}=0\). So, writing \(i=m\), for the two possibilities \(a_{i+2}=0\) and \(a_{i+2}=1\)
\[U_{i}(\ldots 0a_{i+1}a_{i}a_{i-1}\ldots)=U_{i}(0011)=0100,\quad\mbox{and }U_{i}(\ldots 1a_{i+1}a_{i}a_{i-1}\ldots)=U_{i}(1011)=1100.\]
Note that in the first case the total number of 11 occurring in the expansion of \(N\) has decreased by 1, and in the second case it remained constant. However, in the second case the \(m\) of \(U_{i}(\alpha(N))\) has increased by 2. If we keep iterating the reverse golden mean flip on the left most occurrence of 11, then either 0011 will occur, or if not, then \(\alpha(N)=1101\ldots\). This is the case \(m=L\), where there _is_ a decrease in the number of 11, since \(U_{L}(1101\ldots)=10001\ldots\). Conclusion: in all cases the number of 11 will decrease by at least 1 after a finite number of applications of the reverse golden mean flip. So after a finite number of applications of the reverse golden mean flip we reach an expansion with no occurrences of 11. By definition, this is the Bergman expansion.
The case \(m=L\) has already been considered above, the case \(m=L-1\) corresponds to \(\alpha(N)=011\ldots\), where an application of the reverse golden mean flip leads also to a decrease in the number of 11. \(\Box\)
**Example 2.2**: Let \(N=5\), with \(\beta(5)=1000\cdot 1001\):
\[10\underline{1}\cdot 1111\rightarrow\underline{1}10\cdot 0111\to 1000 \cdot 0\underline{1}11\to 1000\cdot 1001,\quad\mbox{with maps }U_{0},U_{2},U_{-2}.\]
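The reduction used in the proof of Theorem 2.1 is easy to automate. The sketch below (our illustration) repeatedly applies the reverse golden mean flip at the leftmost occurrence of 11 and reproduces Example 2.2.

```python
def to_bergman(digits):
    """Reduce an expansion {exponent: digit} to the Bergman expansion by
    repeatedly applying the reverse golden mean flip at the leftmost 11."""
    d = dict(digits)
    while True:
        ones = sorted((i for i, v in d.items() if v == 1), reverse=True)
        m = next((i for i in ones if d.get(i - 1, 0) == 1), None)
        if m is None:                        # no 11 left: this is beta(N)
            return {i: v for i, v in d.items() if v == 1}
        # by maximality of m the digit at position m+1 is 0, so U_m applies
        d[m + 1], d[m], d[m - 1] = 1, 0, 0

# Example 2.2: 101.1111 reduces to the Bergman expansion 1000.1001 of N = 5.
start = {2: 1, 0: 1, -1: 1, -2: 1, -3: 1, -4: 1}     # 101.1111
assert to_bergman(start) == {3: 1, -1: 1, -4: 1}     # 1000.1001
```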
Our proof for \(\mbox{\rm Tot}^{\kappa}\) resembles the work of Neville Robbins [17] on Fibonacci representations, but we have to incorporate the double golden mean flip defined in the Introduction. It then appears that the two recursions for Fibonacci representations and golden mean (Knott) representations are the same, but that there is a difference in the initial conditions.
The emphasis will be on the manipulation of 0-1-words, not on numbers.
Let \(\beta(N)=d_{L}d_{L-1}\ldots d_{1}d_{0}\cdot d_{-1}d_{-2}\ldots d_{R+1}d_{R}\). By removing the radix point, we obtain a 0-1-word \(B(N):=d_{L}d_{L-1}\ldots d_{1}d_{0}d_{-1}d_{-2}\ldots d_{R+1}d_{R}\). Let us denote \(r(B(N)):=\mbox{\rm Tot}^{\kappa}(N)\).
More generally, \(r(w)\) is the number of words satisfying the Knott condition that can be obtained from a word \(w\) by golden mean flips. Note that in general the representations that we obtain are not representations of a natural number for any choice of the radix point. An example is \(w=100001\), which represents \(\varphi^{5}+1\). Nevertheless, these words represent numbers \(a+b\varphi\) with non-negative natural numbers \(a\) and \(b\) in the ring \(\mathbb{Z}(\varphi)\).
For example \(w=100001\) represents \(5\varphi+4\). This is the justification for continuing with the terminology of representations.
Here are two basic examples.
\[r(10^{s})=\frac{1}{2}s+1\quad s\,{\rm even} \tag{2}\]
\[r(10^{s})=\frac{1}{2}(s+1)\quad s\,\,{\rm odd} \tag{3}\]
This follows easily by making golden mean flips from left to right.
Suppose the Bergman representation \(\beta(N)\) of a number \(N\) contains \(n+1\) ones. Then we can write for some numbers \(s_{1},s_{2},\ldots,s_{n}\)
\[B(N)=10^{s_{n}}\ldots 10^{s_{2}}\,10^{s_{1}}\,1.\]
We start with the case \(n=2\), so
\[B(N)=10^{s_{2}}10^{s_{1}}\,1.\]
Let us call \(I_{2}:=10^{s_{2}}\) the _initial segment_ of \(B(N)\), and \(T_{1}:=10^{s_{1}}\,1\) the _terminal segment_ of \(B(N)\).
We want to deduce \(r(B(N))=r(I_{2}T_{1})\) from the number of representations \(r(I_{2})\) and \(r(T_{1})\). There are two cases to consider.
* Arbitrary combinations of representations of \(I_{2}\) and \(T_{1}\).
* Arbitrary combinations of representations of \(I_{2}\) and \(T_{1}\)_plus_ an 'overlap' combination.
Type 1 typically occurs if \(s_{2}\) is even. For example for the case \(s_{2}=4\), we have the three representations 10000, 01100, 01011. Note that in general these representations always end in 00 or 11.
So for Type 1 one has simply
\[r(B(N))=r(I_{2}T_{1})=r(I_{2})r(T_{1}). \tag{4}\]
But for \(s_{2}\) odd, for example \(s_{2}=5\), 100000, 011000, 010110 are the three representations of \(I_{2}\). Note that in general these representations always end in 00 or 10.
So if a representation \(0v\) of \(T_{1}\) starts with a 0, then the representation \(010110\,0v\) generates an 'overlap' representation \(010101\,1v\) via the golden mean flip.
Obviously it is true in general that an \(I_{2}\) word with \(s_{2}\) odd will have exactly one representation that ends in 10. Also important: there is no representation that ends in 01. Therefore, if \(r^{(i)}(T_{1})\) denotes the number of representations of \(T_{1}\) starting with \(i\) for \(i=0,1\), then we obtain for Type 2:
\[r(B(N))=r(I_{2}T_{1})=r(I_{2})r(T_{1})+r^{(0)}(T_{1}). \tag{5}\]
It thus follows from Equation (5), the trivial equation \(r^{(0)}(T_{1})+r^{(1)}(T_{1})=r(T_{1})\), and the fact that the segment \(T_{1}=10^{s_{1}}\,1\) has just one representation that starts with a 1, that
\[r(B(N))=r(T_{1})[r(I_{2})+1]-r^{(1)}(T_{1})=r(T_{1})[r(I_{2})+1]-1. \tag{6}\]
We continue with the case \(n=3\), so
\[B(N)=10^{s_{3}}10^{s_{2}}10^{s_{1}}\,1.\]
Now \(I_{3}:=10^{s_{3}}\) is the _initial segment_, and \(T_{2}:=10^{s_{2}}10^{s_{1}}\,1\) the _terminal segment_.
As before there are two cases to consider to compute \(r(B(N))=r(I_{3}T_{2})\).
* Arbitrary combinations of representations of \(I_{3}\) and \(T_{2}\).
* Arbitrary combinations of representations of \(I_{3}\) and \(T_{2}\)_plus_ an 'overlap' combination.
For Type 1 one has simply
\[r(B(N))=r(I_{3}T_{2})=r(I_{3})r(T_{2}). \tag{7}\]
For Type 2 one has :
\[r(B(N))=r(I_{3}T_{2})=r(I_{3})r(T_{2})+r^{(0)}(T_{2}). \tag{8}\]
Next, we split \(T_{2}=I_{2}T_{1}\), where \(I_{2}=:10^{s_{2}}\). Then we have, since \(I_{2}\) has just one representation that starts with a \(1\), that \(r^{(1)}(T_{2})=r(T_{1})\). It thus follows from Equation(8) and \(r^{(0)}(T_{2})+r^{(1)}(T_{2})=r(T_{2})\) that
\[r(B(N))=r(I_{3})r(T_{2})+r(T_{2})-r^{(1)}(T_{2})=r(T_{2})[r(I_{3})+1]-r^{(1)}(T _{2})=r(T_{2})[r(I_{3})+1]-r(T_{1}). \tag{9}\]
In the same way one proves that for any \(k=1,\ldots n-1\) the following formula holds for \(s_{k+1}\) odd, where \(T_{k+1}\) is split as \(T_{k+1}=I_{k+1}T_{k}\).
\[r(T_{k+1})=r(T_{k})[r(I_{k+1})+1]-r(T_{k-1}). \tag{10}\]
Defining \(r_{n}:=r(B(N))\), \(r_{k}:=r(T_{k})\) for \(k=1,\ldots,n-1\) and \(r_{0}=1\) (cf. Equation (6)), we have obtained a recursion that computes \(r(B(N))\).
**Theorem 2.3**: _For a natural number \(N\) let the Bergman expansion of \(N\) have \(n+1\) digits \(1\). Suppose \(\beta(N)=10^{s_{n}}\ldots 10^{s_{1}}\,1\). Let \(\mbox{\rm Tot}^{\kappa}(N)=r_{n}\) be the number of Knott representations of \(N\). Define the initial conditions: \(r_{0}=1\) and \(r_{1}=\frac{1}{2}s_{1}+1\) if \(s_{1}\) is even, \(r_{1}=\frac{1}{2}(s_{1}\!+\!1)+1\) if \(s_{1}\) is odd. Then for \(n\geq 2\):_
\[r_{n}=\begin{cases}\left[\frac{1}{2}s_{n}+1\right]r_{n-1}&\mbox{if $s_{n}$ is even}\\ \left[\frac{1}{2}(s_{n}+1)+1\right]r_{n-1}-r_{n-2}&\mbox{if $s_{n}$ is odd}\end{cases}\]
The initial condition for \(r_{1}\) is different from the Fibonacci case: if \(s_{1}\) is odd, then the base phi expansion has an extra representation that is generated by the "double golden mean flip" (see Section 5).
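The recursion of Theorem 2.3 translates directly into code. The Python sketch below (ours; the string encoding of \(B(N)\) is an implementation choice) reproduces the three Knott expansions of \(N=4\) and is consistent with the results of the next section.

```python
def tot_knott(bergman: str) -> int:
    """Number of Knott representations, computed from B(N) via Theorem 2.3.

    `bergman` is the Bergman expansion written as a 0-1 string without the
    radix point, e.g. '10101' for N = 4 (beta(4) = 101.01).
    """
    # zero-run lengths between consecutive 1s, read off as s_n, ..., s_1
    runs = [len(r) for r in bergman.split('1')[1:-1]]
    s = runs[::-1]                                   # s_1, ..., s_n
    if not s:                                        # a single digit 1 (N = 1)
        return 1
    r_prev = 1                                                    # r_0
    r = s[0] // 2 + 1 if s[0] % 2 == 0 else (s[0] + 1) // 2 + 1   # r_1
    for sk in s[1:]:
        if sk % 2 == 0:
            r_prev, r = r, (sk // 2 + 1) * r
        else:
            r_prev, r = r, ((sk + 1) // 2 + 1) * r - r_prev
    return r

assert tot_knott('10101') == 3       # N = 4: the three expansions proposed by Knott
assert tot_knott('100010001') == 8   # N = F_6 = 8, consistent with Theorem 3.2
assert tot_knott('100000001') == 5   # N = L_4 = 7, consistent with Theorem 3.3
```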
## 3 Expansions of the Fibonacci numbers and the Lucas numbers
Let \((F_{n})=0,1,1,2,3,5,\ldots\) be the Fibonacci numbers. We will determine the number of Knott representations of these numbers. Then we first have to find a formula for the Bergman expansions of the Fibonacci numbers. Recall that \(L(N)\) is the left most position of a \(1\) in \(\beta(N)\), and that \(B(N)\) is \(\beta(N)\) without the radix point in the expansion. In the following proposition and its proof the simple \(1\)-to-\(1\) correspondence between the pair \((L(N),B(N))\) and the Bergman expansion \(\beta(N)\) plays an essential role.
**Proposition 3.1**: _a) For \(n\geq 3\) one has \(L(F_{n})=n-1\). b) If \(n\geq 3\) is odd, then \(B(F_{n})=(1000)^{p}1001\), with \(p=(n-3)/2\). c) If \(n\geq 4\) is even, then \(B(F_{n})=(1000)^{p}10001\), with \(p=(n-4)/2\)._
_Proof:_ This will, of course, be proved by induction. It is simple to check that \(\beta(F_{3})=\beta(2)=10\!\cdot\!01,\beta(F_{4})=\beta(3)=100\!\cdot\!01, \beta(F_{5})=\beta(5)=1000\!\cdot\!1001\). So the statements hold for \(n=3,4,5\). We start the induction at \(n=6\). Since \(F_{6}=F_{4}+F_{5}\), we have
\[\beta(F_{4}) = 100\!\cdot\!01\] \[\beta(F_{5}) = 1000\!\cdot\!1001\] \[\beta(F_{4})+\beta(F_{5}) = 1100\!\cdot\!1101\] \[\beta(F_{4})+\beta(F_{5}) = 10001\!\cdot\!0001.\]
Here we applied the reverse golden mean flip twice in the last step. Since the last expansion does not have any \(11\), we must have \(\beta(F_{6})=10001\!\cdot\!0001\), and \(L(F_{6})=5\). Next we show what happens at \(n=7\).
\[\beta(F_{5}) = 1000\!\cdot\!1001\] \[\beta(F_{6}) = 10001\!\cdot\!0001\] \[\beta(F_{5})+\beta(F_{6}) = 11001\!\cdot\!1002\] \[\beta(F_{5})+\beta(F_{6}) = 100010\!\cdot\!001001.\]
Here we applied the reverse golden mean flip twice, and a shifted version of \(\beta(2)=10\!\cdot\!01\) in the last step. Since the last expansion does not have any \(11\), we must have \(\beta(F_{7})=100010\!\cdot\!001001\), and \(L(F_{7})=6\).
These addition schemes clearly generalize to \(\beta(F_{n-2})+\beta(F_{n-1})\) with \(n-2\) even, respectively odd, finishing the induction proof.
**Theorem 3.2**: _For all \(n\geq 1\) one has \({\rm Tot}^{\kappa}(F_{n})=F_{n}\)._
_Proof:_ It is easily checked that the proposition holds for \(n=1\) and \(n=2\). So let \(n\geq 3\). According to Proposition 3.1, the number of ones in \(\beta(F_{n})\) is \(p+2\), with \(p+2=(n+1)/2\) if \(n\) is odd, and \(p+2=n/2\) if \(n\) is even. Also, \(\beta(F_{n})=10^{s_{p+1}}\ldots 10^{s_{k}}\ldots 10^{s_{1}}\,1\), with \(s_{k}=3\) for \(k=2,\ldots,p+1\), and \(s_{1}=2\) for \(n\) odd, \(s_{1}=3\) for \(n\) even.
We apply Theorem 2.3 to the Bergman representation of \(F_{n}\). This yields that \({\rm Tot}^{\kappa}(F_{n})=r_{p+1}\), where the \(r_{k}\) satisfy
\[r_{p+1}=3r_{p}-r_{p-1}.\]
Here the initial conditions are \(r_{0}=1\), \(r_{1}=s_{1}/2+1=2\) for \(n\) odd, and \(r_{1}=(s_{1}+1)/2+1=3\) for \(n\) even. Amusingly, the same recurrence relation holds for the subsequences of even and odd Fibonacci numbers:
\[F_{n+1}=F_{n}+F_{n-1}=2F_{n-1}+F_{n-2}=3F_{n-1}-F_{n-1}+F_{n-2}=3F_{n-1}-F_{n- 3}. \tag{11}\]
(I) Suppose \(n=2m+1\) is odd. Then \(p=m-1\), so \({\rm Tot}^{\kappa}(F_{2m+1})=r_{m}\).
We claim that \(r_{m}=F_{2m+1}\) for all \(m\geq 0\).
For \(m=0\), we have \(r_{0}=1=F_{1}\), and for \(m=1\) we have \(r_{1}=2=F_{3}\).
For \(m\geq 2\),
\[r_{m}=3r_{m-1}-r_{m-2}=3F_{2m-1}-F_{2m-3}=F_{2m+1},\]
by the induction hypothesis and Equation (11).
(II) Suppose \(n=2m+2\) is even. Then \(p=m-1\), so \({\rm Tot}^{\kappa}(F_{2m+2})=r_{m}\).
We claim that \(r_{m}=F_{2m+2}\) for all \(m\geq 0\).
For \(m=0\), we have \(r_{0}=1=F_{2}\), and for \(m=1\) we have \(r_{1}=3=F_{4}\).
For \(m\geq 2\),
\[r_{m}=3r_{m-1}-r_{m-2}=3F_{2m}-F_{2m-2}=F_{2m+2},\]
by the induction hypothesis and Equation (11).
Combining (I) and (II) yields the conclusion: \({\rm Tot}^{\kappa}(F_{n})=F_{n}\) for all \(n\geq 1\). \(\Box\)
At the Fibonacci numbers the total number of expansions is very large, but here we show that it is very small at the Lucas numbers \((L_{n})\).
**Theorem 3.3**: _For all \(n\geq 1\) one has \({\rm Tot}^{\kappa}(L_{2n})={\rm Tot}^{\kappa}(L_{2n+1})=2n+1\)._
_Proof:_ The Lucas numbers have simple representations: \(\beta(L_{2n})=10^{2n}\cdot 0^{2n-1}1,\ \beta(L_{2n+1})=1(01)^{n}\cdot(01)^{n}\). This can be shown using the golden mean flip as in Theorem 2.1.
So the representation of \(L_{2n}\) has only two ones. It follows therefore from Theorem 2.3 that \({\rm Tot}^{\kappa}(L_{2n})=r_{1}=(s_{1}+1)/2+1=2n+1\), since \(s_{1}=4n-1\) is odd.
The representation of \(L_{2n+1}\) has \(2n+1\) ones, so there are \(2n\) blocks \(10^{s_{k}}\), each with \(s_{k}=1\), which is odd. It follows therefore from Theorem 2.3 that \({\rm Tot}^{\kappa}(L_{2n+1})=r_{2n}\), where \(r_{k}=2r_{k-1}-r_{k-2}\) with \(r_{0}=1\) and \(r_{1}=2\). Induction then gives \(r_{k}=k+1\), so \(r_{2n}=2n+1\). \(\Box\)
## 4 Natural base phi expansions
A consequence of the application of the double golden mean flip is that the length of the negative part of the Knott expansions may take two different values.
To obtain what we will call the _natural_ expansions, let us delete all expansions that have a length of the negative part that is not equal to the length of the negative part of the Bergman expansion.
For example, in the case \(N=4\) Knott proposes the three expansions \(101\cdot 01\), \(100\cdot 1111\) and \(11\cdot 1111\). However, there is only one natural expansion: the Bergman expansion \(101\cdot 01\).
Let \({\rm Tot}^{\nu}(N)\) denote the number of natural base phi expansions. Then we have the following
\[({\rm Tot}^{\nu}(N))=1,1,2,2,1,5,5,4,5,4,3,1,10,13,12,12,13,10,6,11,12,\ldots\]
\[\mbox{instead of}\]
\[({\rm Tot}^{\kappa}(N))=1,1,2,3,3,5,5,5,8,8,8,5,10,13,12,12,13,10,7,15,18,\ldots\]
The number of natural base phi expansions can be determined in a way that is very similar to the Knott expansion case.
**Theorem 4.1**: _For a natural number \(N\) let the Bergman expansion of \(N\) have \(n+1\) digits \(1\). Suppose \(\beta(N)=10^{s_{n}}\ldots 10^{s_{1}}1\). Let \(\operatorname{Tot}^{\nu}(N)=r_{n}\) be the number of natural base phi representations of \(N\). Define the initial conditions: \(r_{0}=1\) and \(r_{1}=\frac{1}{2}s_{1}+1\) if \(s_{1}\) is even, \(r_{1}=\frac{1}{2}(s_{1}\!+\!1)\) if \(s_{1}\) is odd. Then for \(n\geq 2\):_
\[r_{n}=\begin{cases}\left[\frac{1}{2}s_{n}+1\right]r_{n-1}&\text{if $s_{n}$ is even}\\ \left[\frac{1}{2}(s_{n}+1)+1\right]r_{n-1}-r_{n-2}&\text{if $s_{n}$ is odd}\end{cases}\]
_Proof:_ This follows directly from Theorem 2.3 and its proof. The only difference between the process of generating all Knott expansions and all natural expansions is the double golden mean flip, which is performed in the Knott expansions at the segment \(10^{s_{1}}1\), and only when \(s_{1}\) is odd. So \(\operatorname{Tot}^{\nu}(N)=r_{n}\) satisfies the same recursion as \(\operatorname{Tot}^{\kappa}(N)\) in Theorem 2.3, except that \(r_{1}=\frac{1}{2}(s_{1}\!+\!1)+1\) has to be replaced by \(r_{1}=\frac{1}{2}(s_{1}\!+\!1)\) in the case that \(s_{1}\) is odd. \(\Box\)
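Only the initial condition changes compared with the sketch given after Theorem 2.3; for completeness, a self-contained version (again ours, for illustration):

```python
def tot_natural(bergman: str) -> int:
    """Number of natural base phi representations of N via Theorem 4.1,
    with B(N) given as a 0-1 string without the radix point."""
    s = [len(r) for r in bergman.split('1')[1:-1]][::-1]   # s_1, ..., s_n
    if not s:
        return 1
    # the only difference with the Knott case is the initial condition:
    # no extra representation from the double golden mean flip when s_1 is odd
    r_prev = 1
    r = s[0] // 2 + 1 if s[0] % 2 == 0 else (s[0] + 1) // 2
    for sk in s[1:]:
        if sk % 2 == 0:
            r_prev, r = r, (sk // 2 + 1) * r
        else:
            r_prev, r = r, ((sk + 1) // 2 + 1) * r - r_prev
    return r

assert tot_natural('10101') == 1   # N = 4: only the Bergman expansion is natural
assert tot_natural('10001') == 2   # N = F_4 = 3: equals F_3, cf. Theorem 4.3
```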
We will determine the total number of natural expansions of the Fibonacci numbers. First we present a lemma that emphasizes the inter-connection between the Fibonacci and the Lucas numbers. Recall the even and odd Lucas intervals \(\Lambda_{2n}=[L_{2n},\,L_{2n+1}]\), \(\Lambda_{2n+1}=[L_{2n+1}+1,\,L_{2n+2}-1]\) (cf. [6]).
**Lemma 4.2**: _For all \(n=1,2,\ldots\) one has \(F_{2n+2}\in\Lambda_{2n},\,F_{2n+3}\in\Lambda_{2n+1}\)._
_Proof:_ By induction. For \(n=1\) we have \(F_{4}=3\in\Lambda_{2}=[3,4]\), and \(F_{5}=5\in\Lambda_{3}=[5,6]\).
For \(n=2\) we have \(F_{6}=8\in\Lambda_{4}=[7,11]\), and \(F_{7}=13\in\Lambda_{5}=[12,17]\).
Suppose the statement of the lemma has been proved for \(F_{2n+1}\) and \(F_{2n+2}\). So we know
\[F_{2n+1}\in[L_{2n-1}+1,\,L_{2n}-1] =\Lambda_{2n-1}\] \[F_{2n+2}\in[L_{2n},\,L_{2n+1}] =\Lambda_{2n}.\]
Adding the numbers in these two equations vertically, we obtain
\[F_{2n+3}\in[L_{2n+1}+1,\,L_{2n+2}-1] =\Lambda_{2n+1}.\]
We can then write
\[F_{2n+2}\in[L_{2n},\,L_{2n+1}] =\Lambda_{2n}\] \[F_{2n+3}\in[L_{2n+1}+1,\,L_{2n+2}-1] =\Lambda_{2n+1}.\]
This time, adding gives
\[F_{2n+4}\in[L_{2n+2}+1,\,L_{2n+3}-1].\]
Since \(F_{2n+4}\neq L_{2n+2}\), this implies that \(F_{2n+4}\in\Lambda_{2n+2}\). \(\Box\)
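The lemma is also easy to confirm numerically; the sketch below is an editorial addition and uses the standard conventions \(F_{1}=F_{2}=1\) and \(L_{1}=1,L_{2}=3\).

```python
def fib(n):      # F_1 = F_2 = 1
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def lucas(n):    # L_1 = 1, L_2 = 3
    a, b = 1, 3
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(1, 15):
    assert lucas(2 * n) <= fib(2 * n + 2) <= lucas(2 * n + 1)              # F_{2n+2} in Lambda_{2n}
    assert lucas(2 * n + 1) + 1 <= fib(2 * n + 3) <= lucas(2 * n + 2) - 1  # F_{2n+3} in Lambda_{2n+1}
print("Lemma 4.2 verified for n = 1..14")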
**Theorem 4.3**: _For all \(n=0,1,2,\ldots\) one has \(\operatorname{Tot}^{\nu}(F_{2n+2})=F_{2n+1}\) and \(\operatorname{Tot}^{\nu}(F_{2n+3})=F_{2n+3}\)._
_Proof:_ We use the result from Proposition 5.1, which gives that for all \(N\) from \(\Lambda_{2n+1}\) if \(\beta(N)=...10^{s_{1}}1\), then \(s_{1}\) is even. So for all \(N\) from \(\Lambda_{2n+1}\) we have that the total number of natural expansions is equal to the total number of Knott expansions. In particular we obtain from Lemma 4.2, using Theorem 3.2, that
\[\operatorname{Tot}^{\nu}(F_{2n+3})=\operatorname{Tot}^{\kappa}(F_{2n+3})=F_{ 2n+3}.\]
From Proposition 3.1, part c) we have that \(B(F_{2n+2})=(1000)^{p}10001\) with \(p=(2n+2-4)/2=n-1\). Therefore \(r_{n}\) satisfies the recurrence relation \(r_{n}=3r_{n-1}-r_{n-2}\), with \(r_{1}=\frac{1}{2}(3+1)=2=F_{3}\). This is the recurrence relation for the Fibonacci numbers with odd indices, cf. Equation (11). Therefore \(r_{n}=F_{2n+1}\). \(\Box\)
There is a direct connection between the total number of natural expansions and the total number of Fibonacci expansions.
**Theorem 4.4**: _For every \(N>3\) let \(\beta(N)=d_{L(N)}\ldots d_{R(N)}\) be the Bergman expansion of \(N\). Then_
\[{\rm Tot}^{\nu}(N)={\rm Tot}^{\rm FIB}(F_{-R(N)+2}\,N).\]
_Proof:_ Suppose that \(\beta(N)=d_{L}\ldots d_{R}\), so \(N=\sum_{i=R}^{L}d_{i}\varphi^{i}\).
Multiply by \(\varphi^{-R+2}\):
\[\varphi^{-R+2}N=\sum_{i=R}^{L}d_{i}\varphi^{i-R+2}=\sum_{j=2}^{L-R+2}d_{j+R-2} \varphi^{j}=\sum_{j=2}^{L-R+2}e_{j}\varphi^{j}\]
where we substituted \(j=i-R+2\), and defined \(e_{j}:=d_{j+R-2}\).
Next we use the well known equation \(\varphi^{j}=F_{j}\varphi+F_{j-1}\):
\[[F_{-R+2}\varphi+F_{-R+1}]N=\sum_{j=2}^{L-R+2}e_{j}[F_{j}\varphi+F_{j-1}].\]
This implies that
\[F_{-R+2}N=\sum_{j=2}^{L-R+2}e_{j}F_{j}.\]
We conclude that the number \(F_{-R+2}N\) has a Zeckendorf expansion given by the sum on the right side.
But the manipulations above can be made for any 0-1-word of length \(L-R+1\), so the golden mean flips of \(d_{L}\ldots d_{R}\) are in 1-to-1 correspondence with golden mean flips of \(e_{2}\ldots e_{L-R+2}\). This implies that \({\rm Tot}^{\nu}(N)={\rm Tot}^{\rm FIB}(F_{-R(N)+2}N)\). \(\Box\)
**Example 1** The Bergman expansion of 4 is \(101\cdot 01\), and \(F_{4}=3\). So \({\rm Tot}^{\nu}(4)={\rm Tot}^{\rm FIB}(12)=1\).
**Example 2** The Bergman expansion of 14 is \(100100\cdot 001001\), and \(F_{8}=21\). So \({\rm Tot}^{\nu}(14)={\rm Tot}^{\rm FIB}(294)=12\).
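Both examples can be confirmed by brute force, counting the representations of a number as a sum of distinct Fibonacci numbers \(1,2,3,5,\ldots\); the sketch below is an editorial addition and the helper names are arbitrary.

```python
from functools import lru_cache

def fibs_up_to(m):
    """Distinct Fibonacci numbers 1, 2, 3, 5, ... not exceeding m."""
    out, a, b = [], 1, 2
    while a <= m:
        out.append(a)
        a, b = b, a + b
    return out

def tot_fib(m):
    """Number of ways to write m as a sum of distinct Fibonacci numbers."""
    fs = fibs_up_to(m)

    @lru_cache(maxsize=None)
    def count(rest, i):
        if rest == 0:
            return 1
        if rest < 0 or i < 0:
            return 0
        return count(rest, i - 1) + count(rest - fs[i], i - 1)

    return count(m, len(fs) - 1)

print(tot_fib(12))    # Example 1: Tot^FIB(12)  = 1
print(tot_fib(294))   # Example 2: Tot^FIB(294) = 12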
**Example 3** Consider the Lucas numbers. From \(L_{2n}=\varphi^{2n}+\varphi^{-2n}\), and \(L_{2n+1}=L_{2n}+L_{2n-1}\):
\(\beta(L_{2n})=10^{2n}\cdot 0^{2n-1}1,\quad\beta(L_{2n+1})=1(01)^{n}\cdot(01)^{n}.\)
We read off: \(R(L_{2n})=-2n,R(L_{2n+1})=-2n\).
It is also clear that \({\rm Tot}^{\nu}(L_{2n})=2n\), and \({\rm Tot}^{\nu}(L_{2n+1})=1\).
So Theorem 4.4 gives the total number of Fibonacci representations of \(F_{2n+2}L_{2n}\) and \(F_{2n+2}L_{2n+1}\):
\({\rm Tot}^{\rm FIB}(F_{2n+2}L_{2n})=2n\), \({\rm Tot}^{\rm FIB}(F_{2n+2}L_{2n+1})=1\) for all \(n\geq 1\).
We find in [16] (from Miklos Kristof, Mar 19 2007):
Let \(L(n)={\rm A000032}(n)\) denote the Lucas numbers. Then for \(a\geq b\) and odd \(b\), \(F(a+b)-F(a-b)=F(a)L(b)\). So \(F_{2n+2}L_{2n+1}=F_{4n+3}-F_{1}=F_{4n+3}-1\). But \({\rm Tot}^{\rm FIB}(F_{n}-1)=1\) is a well-known formula.
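The quoted identity is easy to verify numerically; the following editorial sketch checks \(F(a+b)-F(a-b)=F(a)L(b)\) for odd \(b<a\) and the resulting formula \(F_{2n+2}L_{2n+1}=F_{4n+3}-1\).

```python
def fib(n):      # F_1 = F_2 = 1
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def lucas(n):    # L_1 = 1, L_2 = 3
    a, b = 1, 3
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for a in range(3, 20):
    for b in range(1, a, 2):                 # odd b < a
        assert fib(a + b) - fib(a - b) == fib(a) * lucas(b)

for n in range(1, 12):
    assert fib(2 * n + 2) * lucas(2 * n + 1) == fib(4 * n + 3) - 1
print("F_{2n+2} L_{2n+1} = F_{4n+3} - 1 verified")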
**Remark 4.5**: An alternative proof of Theorem 4.3 can be given with Theorem 4.4.
From Proposition 5.1 we know that a number \(N\) with \(\beta(N)=d_{L}\ldots d_{R}\) in \(\Lambda_{2n}\) has \(-R(N)=2n\). According to Lemma 4.2: \(F_{2n+2}\in\Lambda_{2n}\). So Theorem 4.4 leads to
\[{\rm Tot}^{\nu}(F_{2n+2})={\rm Tot}^{\rm FIB}(F_{2n+2}\,F_{2n+2})={\rm Tot}^{ \rm FIB}(F_{2n+2}^{2}).\]
To finish the alternative proof, one needs to know that \({\rm Tot}^{\rm FIB}(F_{2n}^{2})=F_{2n-1}\) for all \(n\geq 1\). This can be proved similarly to the proof of the main result of Stockmeyer's paper [14]. The extra trick is to jointly prove the formula \({\rm Tot}^{\rm FIB}(F_{2n}^{2})=F_{2n-1}\) together with the formula \({\rm Tot}^{\rm FIB}(F_{2n+1}^{2}-2)=F_{2n}\).
The main tool of the proof is Klarner's result from [9]: \({\rm Tot}^{\rm FIB}(n)={\rm Tot}^{\rm FIB}(n-F_{m})+{\rm Tot}^{\rm FIB}(F_{m+ 1}-n-2)\), which holds for \(m\geq 4\) and \(F_{m}\leq n<F_{m+1}-1\).
The proof by induction applies Klarner's identity with \(n=F_{m}^{2}\), respectively with \(n=F_{m}^{2}-2\).
Here the identities (5): \(F_{m}^{2}=F_{2m-2}+F_{m-2}^{2}\) and (6): \(F_{m}^{2}=F_{2m-1}-F_{m-1}^{2}\) from [14] make the induction step work.
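Identities (5) and (6), and the formula \({\rm Tot}^{\rm FIB}(F_{2n}^{2})=F_{2n-1}\) for small \(n\), can be confirmed numerically; the sketch below is an editorial addition using a simple subset-sum count of Fibonacci representations.

```python
def fib(n):      # F_1 = F_2 = 1
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for m in range(3, 20):
    assert fib(m) ** 2 == fib(2 * m - 2) + fib(m - 2) ** 2    # identity (5)
    assert fib(m) ** 2 == fib(2 * m - 1) - fib(m - 1) ** 2    # identity (6)

def tot_fib(m):
    """Count representations of m as a sum of distinct Fibonacci numbers."""
    fs, a, b = [], 1, 2
    while a <= m:
        fs.append(a)
        a, b = b, a + b
    ways = {0: 1}
    for f in fs:
        new = dict(ways)
        for s, c in ways.items():
            if s + f <= m:
                new[s + f] = new.get(s + f, 0) + c
        ways = new
    return ways.get(m, 0)

for n in range(1, 6):
    assert tot_fib(fib(2 * n) ** 2) == fib(2 * n - 1)
print("identities (5), (6) and Tot^FIB(F_{2n}^2) = F_{2n-1} verified")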
## 5 Comparing Knott expansions and natural expansions
It is not hard to see that the double golden mean shift--in general combined with more golden mean shifts--can be applied if and only if the expansion ends in \(10^{s}1\), where \(s\) is odd. So the difference between the Knott expansions and the natural expansions is made more explicit by part a) of the following result.
**Proposition 5.1**: **a)** _A number \(N\) is in \(\Lambda_{2n}\) if and only if \(\beta(N)=...10^{s}1\), where \(s\) is odd, and \(N\) is in \(\Lambda_{2n+1}\) if and only if \(\beta(N)=...10^{s}1\), where \(s\) is even._
**b)** _Let \(\beta(N)=d_{L(N)}\ldots d_{R(N)}\). A number \(N\) in \(\Lambda_{2n}\) has \(-R(N)=2n\), a number \(N\) in \(\Lambda_{2n+1}\) has \(-R(N)=2n+2\)._
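Before turning to the proof, both parts can be checked numerically for small \(N\); the sketch below is an editorial addition that computes Bergman expansions by the greedy algorithm in floating point, which is adequate for the small numbers involved.

```python
PHI = (1 + 5 ** 0.5) / 2
EPS = 1e-9

def bergman_exponents(N):
    """Exponents of the Bergman (base phi) expansion of a positive integer N,
    obtained greedily; floating point suffices for small N."""
    exps, x, k = [], float(N), 0
    while PHI ** (k + 1) <= x + EPS:
        k += 1
    while x > EPS:
        if PHI ** k <= x + EPS:
            exps.append(k)
            x -= PHI ** k
        k -= 1
    return exps

def lucas(n):    # L_1 = 1, L_2 = 3
    a, b = 1, 3
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for n in range(1, 6):
    for N in range(lucas(2 * n), lucas(2 * n + 1) + 1):          # Lambda_{2n}
        e = sorted(bergman_exponents(N))
        s = e[1] - e[0] - 1                                      # expansion ends in 1 0^s 1
        assert s % 2 == 1 and -e[0] == 2 * n
    for N in range(lucas(2 * n + 1) + 1, lucas(2 * n + 2)):      # Lambda_{2n+1}
        e = sorted(bergman_exponents(N))
        s = e[1] - e[0] - 1
        assert s % 2 == 0 and -e[0] == 2 * n + 2
print("Proposition 5.1 verified on Lambda_2 ... Lambda_11")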
Proposition 5.1 will be proved by induction. Thus we need recursions to let the proof work. These are given in the paper [7], from which we repeat the following.
To obtain recursive relations, the interval \(\Lambda_{2n+1}=[L_{2n+1}+1,L_{2n+2}-1]\) has to be divided into three subintervals. These three intervals are
\[I_{n}:= [L_{2n+1}+1,\,L_{2n+1}+L_{2n-2}-1],\] \[J_{n}:= [L_{2n+1}+L_{2n-2},\,L_{2n+1}+L_{2n-1}],\] \[K_{n}:= [L_{2n+1}+L_{2n-1}+1,\,L_{2n+2}-1].\]
It will be convenient to extend the monoid of words of \(0\)'s and \(1\)'s to the corresponding free group. So, for example, \((01)^{-1}0001=1^{-1}001\).
**Theorem 5.2**: **[Recursive structure theorem, [7]]**
**I** _For all \(n\geq 1\) and \(k=0,\ldots,L_{2n-1}\) one has \(\beta(L_{2n}+k)=\beta(L_{2n})+\beta(k)=10\ldots 0\,\beta(k)\,0\ldots 01\)._
**II** _For all \(n\geq 2\) and \(k=1,\ldots,L_{2n-2}-1\)_
\[I_{n}: \beta(L_{2n+1}+k)=1000(10)^{-1}\beta(L_{2n-1}+k)(01)^{-1}1001,\] \[K_{n}: \beta(L_{2n+1}+L_{2n-1}+k)=1010(10)^{-1}\beta(L_{2n-1}+k)(01)^{-1 }0001.\]
_Moreover, for all \(n\geq 2\) and \(k=0,\ldots,L_{2n-3}\)_
\[J_{n}: \beta(L_{2n+1}+L_{2n-2}+k)=10010(10)^{-1}\beta(L_{2n-2}+k)(01)^{-1 }001001.\]
_Proof of Proposition 5.1:_ To start the induction, we note that
\[\Lambda_{2}=[3,4]; \beta(3)=100\cdot 01,\;\beta(4)=101\cdot 01,\] \[\Lambda_{3}=[5,6]; \beta(5)=1000\cdot 1001,\;\beta(6)=1010\cdot 0001.\]
For the even intervals we have that \(\beta(L_{2n})=10^{2n}\cdot 0^{2n-1}1\), so the expansion of the first element ends indeed in \(10^{s}1\), where \(s\) is odd. Note also that \(-R(L_{2n})=2n\), and this property will hold for all \(L_{2n}+k\), \(k=0,\ldots,L_{2n-1}\) since the sum \(\beta(L_{2n})+\beta(k)\) in **I** does not change the length of the negative part. Moreover, since the length of the negative part of each \(\beta(k)\) in the sum \(\beta(L_{2n})+\beta(k)\) is even (by the induction hypothesis for part **b)**), the expansion must end in \(10^{s}1\) with \(s\) odd, simply because the difference of two even numbers is even.
For the odd intervals we have to consider the three cases from **II**.
For \(I_{n}\): we know that \(\beta(L_{2n-1}+k)\) ends in \(01\), so \(\beta(L_{2n+1}+k)\) ends in \(1001\). For part **b)**: the length of the negative part is increased by \(2\).
For \(K_{n}\): \(L_{2n-1}+k\) is from an odd interval, so the expansion ends in \(10^{2t}1\) for some \(t>0\). But then the expansion of \(L_{2n+1}+L_{2n-1}+k\) ends in \(10^{2t}1\,(01)^{-1}0001=10^{2t-1}0001=10^{2t+2}1\). For part **b)**: the length of the negative part is increased by \(2\).
For \(J_{n}\): obviously \(\beta(L_{2n+1}+L_{2n-2}+k)\) ends in \(1001\). For part **b)**: the length of the negative part is \(2n-2+4=2n+2\). |
2303.15146 | Hirano inverse of anti-triangular matrix over Banach Algebras | In this paper we investigate Hirano invertibility of anti-triangular matrix
over a Banach algebra. Let $a\in {\mathcal A}^H, b\in {\mathcal A}^{sD}.$ If
$b^Da=0, bab^{\pi}=0,$ we prove that $\begin{pmatrix} a&1\\ b&0
\end{pmatrix}\in M_2(\mathcal A)^H.$ Moreover, we considered Hirano
invertibility of anti-triangular matrices under commutative-like conditions.
These provide new kind of operator matrices with tripotent and nilpotent
decompositions. | Haibo Gou, Huanyin Chen | 2023-03-27T12:32:41Z | http://arxiv.org/abs/2303.15146v2 | # Hirano inverse of anti-triangular matrix over Banach algebras
###### Abstract.
In this paper we investigate Hirano invertibility of anti-triangular matrices over a Banach algebra. Let \(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD}.\) If \(b^{D}a=0,bab^{\pi}=0,\) we prove that \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\) Moreover, we consider Hirano invertibility of anti-triangular matrices under commutative-like conditions. These provide a new kind of operator matrix with tripotent and nilpotent decompositions.
Key words and phrases:Hirano inverse, tripotent, nilpotent, anti-triangular matrix, Banach algebra 2020 Mathematics Subject Classification: 15A09, 47A08, 16U99
## 1. Introduction
Hirano invertibility was introduced by Chen and Abdolyousefi (see [10]). An element \(a\in\mathcal{A}\) has a Hirano inverse if there exists \(x\in\mathcal{A}\) such that
\[ax=xa,x=xax,a^{2}-ax\in\mathcal{A}\]
is nilpotent. Such \(x,\) if it exists, is unique and will be denoted by \(a^{H}.\)
Recall that an element \(a\in\mathcal{A}\) has a strongly Drazin inverse if there exists \(x\in\mathcal{A}\) such that
\[ax=xa,x=xax,a-ax\in\mathcal{A}\]
is nilpotent. Such \(x,\) if it exists, is unique and will be denoted by \(a^{sD}.\) Evidently, \(a\in\mathcal{A}^{sD}\) if and only if \(a-a^{2}\in\mathcal{A}\) is nilpotent, if and only if \(a\) is the sum of an idempotent and a nilpotent that commute (see [8, Lemma 2.2]). In this paper, we focus on the Hirano invertibility of anti-triangular matrices over a Banach algebra.
These provide a new kind of operator matrix which is the sum of a tripotent and a nilpotent.
In Section 2, we present several elementary results on the Hirano inverse, which will be used repeatedly in the sequel.
In Section 3, we study the Hirano invertibility of anti-triangular operator matrices of the form \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\) when \(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD},b^{D}a=0\) and \(bab^{\pi}=0\).
In Section 4, we consider some commutative-like conditions on anti-triangular matrices to study their Hirano invertibility. Let \(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD}.\) If \(b^{D}a=0,bab^{\pi}=abb^{\pi},\) we prove that \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)
In Section 5, several examples are given to illustrate our results.
Throughout the paper, \(\mathcal{A}\) denotes a complex Banach algebra with an identity. \(\mathcal{A}^{H}\) represents the set of all Hirano invertible elements in \(\mathcal{A}.\)\(\mathcal{A}^{sD}\) is the set of all elements which are strongly Drazin invertible. \(N(\mathcal{A})\) is the set of nilpotent elements in \(\mathcal{A}\) and \(U(\mathcal{A})\) stands for the set of all invertible elements in \(\mathcal{A}.\)
## 2. Key lemmas
The aim of this section is to present some elementary results on the Hirano inverse which will be used in the sequel.
**Proposition 2.1**.: _Let \(\mathcal{A}\) be a Banach algebra. Then the following are equivalent:_
1. \(a\in\mathcal{A}^{H}.\)__
2. \(a-a^{3}\in\mathcal{A}\) _is nilpotent._
3. \(a\) _is the sum of a tripotent and a nilpotent that commute._
Proof.: See [7, Theorem 3.1] and [11, Corollary 2.8].
This gives (1)\(\Leftrightarrow\)(2)\(\Leftrightarrow\)(3).
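As a concrete illustration of characterizations (2) and (3) (an editorial sketch with arbitrarily chosen matrices), one can take a tripotent plus a commuting nilpotent and confirm numerically that \(a-a^{3}\) is nilpotent.

```python
import numpy as np

# t is tripotent (t^3 = t); q is nilpotent and commutes with t because it is
# supported inside the eigenspace of t for the eigenvalue 1.
t = np.diag([1.0, 1.0, -1.0, 0.0])
q = np.zeros((4, 4))
q[0, 1] = 1.0

assert np.allclose(t @ t @ t, t)
assert np.allclose(q @ q, 0) and np.allclose(t @ q, q @ t)

a = t + q
d = a - np.linalg.matrix_power(a, 3)
print(np.allclose(np.linalg.matrix_power(d, 4), 0))   # True: a - a^3 is nilpotent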
**Lemma 2.2**.: _If \(a,b\in\mathcal{A}^{H},\) then \(\begin{pmatrix}a&c\\ 0&b\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: Since \(a,b\in\mathcal{A}^{H},\) we have \((a-a^{3})^{m}=0\) and \((b-b^{3})^{n}=0\) for some \(m,n\in\mathbb{N}.\) According to Proposition 2.1, we only need to
prove that
\[\begin{pmatrix}a&c\\ 0&b\end{pmatrix}-\begin{pmatrix}a&c\\ 0&b\end{pmatrix}^{3}\in N(\mathcal{A}).\] \[\begin{pmatrix}a&c\\ 0&b\end{pmatrix}-\begin{pmatrix}a&c\\ 0&b\end{pmatrix}^{3}=\begin{pmatrix}a-a^{3}&c-(a^{2}c+acb+cb^{2})\\ 0&b-b^{3}\end{pmatrix}.\]
Let \(A=a-a^{3},B=c-(a^{2}c+acb+cb^{2}),D=b-b^{3},X=\begin{pmatrix}A&B\\ 0&D\end{pmatrix}.\)
We directly compute that
\[X^{n+m} = \begin{pmatrix}A^{(n+m)}&\sum\limits_{i=0}^{(n+m)-1}A^{(n+m)-1-i }BD^{i}\\ 0&D^{(n+m)}\end{pmatrix}\] \[= \begin{pmatrix}0&\sum\limits_{i=0}^{(n+m)-1}A^{(n+m)-1-i}BD^{i}\\ 0&0\end{pmatrix}.\]
Then we have \(X^{2(n+m)}=0,\) which infers that
\[\begin{pmatrix}a&c\\ 0&b\end{pmatrix}-\begin{pmatrix}a&c\\ 0&b\end{pmatrix}^{3}\in N(\mathcal{A}).\]
Therefore, by Proposition 2.1, \(\begin{pmatrix}a&c\\ 0&b\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)
**Lemma 2.3**.: _Let \(A\in M_{m\times n}(\mathcal{A}),B\in M_{n\times m}(\mathcal{A}).\) If \(AB\in M_{m}(\mathcal{A})^{H},\) then \(BA\in M_{n}(\mathcal{A})^{H}.\)_
Proof.: We may assume that \(m\geq n.\) Then \(AB=\begin{pmatrix}A&0\end{pmatrix}\begin{pmatrix}B\\ 0\end{pmatrix}.\)
In light of [10], we have \(\begin{pmatrix}B\\ 0\end{pmatrix}\begin{pmatrix}A&0\end{pmatrix}\) has Hirano inverse. That is
\(\begin{pmatrix}BA&0\\ 0&0\end{pmatrix}\) has Hirano inverse. In view of Proposition 2.1, we have
\(\Big[\begin{pmatrix}BA&0\\ 0&0\end{pmatrix}-\begin{pmatrix}BA&0\\ 0&0\end{pmatrix}^{3}\Big]^{m}=0.\) Therefore, \([BA-(BA)^{3}]^{m}=0.\) Accordingly, \(BA\in M_{n}(\mathcal{A})^{H},\) as asserted.
**Lemma 2.4**.: _Let \(a,b\in\mathcal{A}^{H}\). If \(ab=0,\) then \(a+b\in\mathcal{A}^{H}.\)_
Proof.: Let \(A=\begin{pmatrix}1\\ a\end{pmatrix},B=\begin{pmatrix}b&1\end{pmatrix}\), then \(a+b=BA,AB=\begin{pmatrix}b&1\\ 0&a\end{pmatrix}.\) According to Lemma 2.3, \((BA)^{H}=B[(AB)^{H}]^{2}A.\) By using of Lemma 2.2 and Lemma 2.3, \(AB\in M_{2}(\mathcal{A})^{H}.\) Therefore \(BA=a+b\in\mathcal{A}^{H}.\)
**Theorem 2.5**.: _Let \(a,b\in\mathcal{A}^{H}.\) If \(aba=0\) and \(ab^{2}=0,\) then \(a+b\in\mathcal{A}^{H}.\)_
Proof.: Let \(a+b=\begin{pmatrix}1&b\end{pmatrix}\begin{pmatrix}a\\ 1\end{pmatrix}.\) Then \(a+b\in\mathcal{A}^{H}\Leftrightarrow\begin{pmatrix}a\\ 1\end{pmatrix}\begin{pmatrix}1&b\end{pmatrix}=\begin{pmatrix}a&ab\\ 1&b\end{pmatrix}\in\mathcal{A}^{H}\) according to Lemma 2.3. We deduce that
\[\begin{pmatrix}a&ab\\ 1&b\end{pmatrix}=\begin{pmatrix}a&0\\ 0&0\end{pmatrix}+\begin{pmatrix}0&0\\ 0&b\end{pmatrix}+\begin{pmatrix}0&ab\\ 1&0\end{pmatrix}.\]
Since \(a,b\in\mathcal{A}^{H},\) then \(\begin{pmatrix}a&0\\ 0&0\end{pmatrix}\) and \(\begin{pmatrix}0&0\\ 0&b\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)
\[\begin{pmatrix}0&ab\\ 1&0\end{pmatrix}-\begin{pmatrix}0&ab\\ 1&0\end{pmatrix}^{3}=\begin{pmatrix}0&ab\\ 1-ab&0\end{pmatrix},\]
\[\begin{pmatrix}0&ab\\ 1-ab&0\end{pmatrix}^{4}=0.\]
According to Proposition 2.1, \(\begin{pmatrix}0&ab\\ 1&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}\). It follows that \(\begin{pmatrix}a&ab\\ 1&b\end{pmatrix}\in M_{2}(\mathcal{A})^{H}\), and hence \(a+b\in\mathcal{A}^{H}\), as required.
## 3. Anti-triangular operator matrices
The purpose of this section is to investigate the Hirano invertibility of anti-triangular operator matrices. We now derive these results.
**Theorem 3.1**.: _Let \(x=\begin{pmatrix}a&1\\ b&0\end{pmatrix},\)\(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD}.\) If \(b^{D}a=0,bab^{\pi}=0\), then \(x\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: Let \(e=\begin{pmatrix}bb^{D}&0\\ 0&1\end{pmatrix},\) then \(e^{2}=e.\) Set \(bb^{D}=b^{e}.\) Then we have the Peirce decomposition of \(x\) related to the idempotent \(e.\)
\[x=\begin{pmatrix}exe&ex(1-e)\\ (1-e)xe&(1-e)x(1-e)\end{pmatrix}_{e}=\begin{pmatrix}\alpha&\beta\\ \gamma&\delta\end{pmatrix}_{e}=\alpha+\beta+\gamma+\delta,\]
We compute that
\[\alpha = \begin{pmatrix}bb^{D}&0\\ 0&1\end{pmatrix}\begin{pmatrix}a&1\\ b&0\end{pmatrix}\begin{pmatrix}bb^{D}&0\\ 0&1\end{pmatrix}\] \[= \begin{pmatrix}bb^{D}abb^{D}&bb^{D}\\ bbb^{D}&0\end{pmatrix}=\begin{pmatrix}bb^{D}a&bb^{D}\\ b^{2}b^{D}&0\end{pmatrix}\] \[= \begin{pmatrix}0&b^{e}\\ bb^{e}&0\end{pmatrix}_{e},\]
\[\beta=\begin{pmatrix}0&0\\ bb^{\pi}&0\end{pmatrix}_{e},\gamma=\begin{pmatrix}b^{\pi}ab^{e}&b^{\pi}\\ 0&0\end{pmatrix}_{e},\delta=\begin{pmatrix}ab^{\pi}&0\\ 0&0\end{pmatrix}_{e},\]
where \(\alpha,\beta,\gamma,\delta\in\mathcal{A}.\) We compute that
\[\alpha^{3}=\begin{pmatrix}0&b^{2}b^{D}\\ b^{3}b^{D}&0\end{pmatrix},\]
\[\alpha-\alpha^{3}=\begin{pmatrix}0&(b-b^{2})b^{D}\\ (b-b^{2})bb^{D}&0\end{pmatrix}.\]
By hypothesis, \(b\) has a strongly Drazin inverse, and so \(b-b^{2}\in\mathcal{A}\) is nilpotent. Thus \(\alpha\) is Hirano invertible by Proposition 2.1. Note that \((\beta+\gamma)^{2}=\begin{pmatrix}bb^{\pi}&0\\ bb^{\pi}a&bb^{\pi}\end{pmatrix}\) is nilpotent, and so \(\beta+\gamma\) is nilpotent too. Thus, \((\beta+\gamma)^{H}=0.\) Obviously, we have
\[ab^{\pi}-(ab^{\pi})^{3}=ab^{\pi}-ab^{\pi}ab^{\pi}ab^{\pi}=(a-a^{3})b^{\pi},\]
and then
\[[ab^{\pi}-(ab^{\pi})^{3}]^{m}=[(a-a^{3})b^{\pi}]^{m}=\cdots=(a-a^{3})^{m}b^{\pi}\]
for any \(m\in\mathbb{N}\). This implies that \(ab^{\pi}\in\mathcal{A}^{H}.\) Thus \(\delta\) is Hirano invertible. Since \(\beta\delta=0\) and \(\gamma\delta=0,\) we have \((\beta+\gamma)\delta=0.\) According to Lemma
2.4, \(\beta+\gamma+\delta\in\mathcal{A}^{H}\).
\[\beta+\gamma+\delta=x-\alpha=\begin{pmatrix}a&1\\ b&0\end{pmatrix}-\alpha=\begin{pmatrix}b^{\pi}a&b^{\pi}\\ bb^{\pi}&0\end{pmatrix},\]
and we have \(\alpha(\beta+\gamma+\delta)=0.\) According to Lemma 2.4, \(\alpha+\beta+\gamma+\delta\in\mathcal{A}^{H}\). Therefore, \(x\in M_{2}(\mathcal{A})^{H}.\)
**Corollary 3.2**.: _Let \(x=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\)\(a,d\in\mathcal{A}^{H},bc\in\mathcal{A}^{sD}.\) If \((bc)^{D}a=0,bca(bc)^{\pi}=0,bdc=0\) and \(bd^{2}=0\), then \(x\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: Write \(x=y+z\), where \(y=\begin{pmatrix}0&0\\ 0&d\end{pmatrix}\) and \(z=\begin{pmatrix}a&b\\ c&0\end{pmatrix}.\) Obviously, \(y\) is Hirano invertible and \(y^{H}=\begin{pmatrix}0&0\\ 0&d^{H}\end{pmatrix}.\) To prove that \(z\) is Hirano invertible, we write \(z=pq,\) where \(p=\begin{pmatrix}a&1\\ c&0\end{pmatrix}\) and \(q=\begin{pmatrix}1&0\\ 0&b\end{pmatrix}.\) Applying Theorem 3.1, we deduce that \(qp=\begin{pmatrix}a&1\\ bc&0\end{pmatrix}\) is Hirano invertible. According to Lemma 2.3, \(z=pq\) is Hirano invertible and \(z^{H}=p[(qp)^{H}]^{2}q.\) Since \(bdc=0\) and \(bd^{2}=0,\) we see that \(zyz=0,zy^{2}=0.\) By using of Theorem 2.5, we have \(x=y+z\in M_{2}(\mathcal{A})^{H}.\)
**Corollary 3.3**.: _Let \(x=\begin{pmatrix}a&b\\ c&d\end{pmatrix},\)\(a,d\in\mathcal{A}^{H},bc\in\mathcal{A}^{sD}.\) If \((bc)^{D}a=0,a(bc)^{\pi}=0\) and \(bd=0,\) then \(x\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: This is an immediate application of Corollary 3.2.
**Lemma 3.4**.: _Let \(C\in\mathbb{C}^{n\times n}.\) Then \(C\in(\mathbb{C}^{n\times n})^{H}\) if and only if every eigenvalue of \(C\) lies in \(\{-1,0,1\}.\)_
Proof.: If \(C\in\mathbb{C}^{n\times n}\) is Hirano invertible, then there exists \(P,\) such that
\[P^{-1}CP=\begin{pmatrix}J_{1}&0&\cdots&0\\ 0&J_{2}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\cdots&J_{n}\end{pmatrix},\]
\[M=\begin{pmatrix}A&I\\ B&0\end{pmatrix}=\begin{pmatrix}1&1&1&0\\ 0&0&0&1\\ 0&1&0&0\\ 0&1&0&0\end{pmatrix}.\]
\[|\lambda I-M|=\begin{vmatrix}\lambda-1&-1&-1&0\\ 0&\lambda&0&-1\\ 0&-1&\lambda&0\\ 0&-1&0&\lambda\end{vmatrix}=\lambda(\lambda+1)(\lambda-1)^{2},\]
_so \(\lambda=0,1,-1.\) By Lemma 3.4, we have_
\[M=\begin{pmatrix}A&I\\ B&0\end{pmatrix}\in(\mathbb{C}^{4\times 4})^{H}.\]
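The eigenvalue criterion can be confirmed numerically for the matrix \(M\) above; the snippet below is an editorial addition.

```python
import numpy as np

M = np.array([[1, 1, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)
print(np.round(np.linalg.eigvals(M), 6))   # up to numerical noise: 0, 1, 1, -1
D = M - np.linalg.matrix_power(M, 3)
print(np.allclose(np.linalg.matrix_power(D, 4), 0))   # True: M - M^3 is nilpotent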
## 4. Commutative-like conditions
In this section we consider the Hirano invertibility of anti-triangular matrices under certain commutative-like conditions.
**Theorem 4.1**.: _Let \(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD}.\) If \(b^{D}a=0,abb^{\pi}=bab^{\pi},\) then \(x=\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: Since \(b\) is strongly Drazin invertible, it has a Drazin inverse. Then \(b\) can be written as \(b=b_{0}+b_{00},\) where \(b^{s}={b_{0}}^{s}\oplus 0,b^{\pi}=0\oplus 1\) and \(b_{0}\) is invertible. Since \(b^{D}a=0,\)\(a\) has the form \(a=\begin{pmatrix}a_{0}&0\\ a_{1}&a_{00}\end{pmatrix}\) and \(a_{00}\in\mathcal{A}^{H}.\) From
\[\begin{pmatrix}a&1\\ b&0\end{pmatrix}=\begin{pmatrix}a_{0}&0&1&0\\ a_{1}&a_{00}&0&1\\ b_{0}&0&0&0\\ 0&b_{00}&0&0\end{pmatrix}\hookrightarrow_{S}\begin{pmatrix}a_{0}&1&0&0\\ b_{0}&0&0&0\\ a_{1}&0&a_{00}&1\\ 0&0&b_{00}&0\end{pmatrix},\]
where \(S=E_{11}+E_{32}+E_{23}+E_{44}\) and \(E_{ij}\) denotes the \(4\times 4\) matrix with \((i,j)\) entry equal to \(1\) and all other entries zero.
One easily checks that \(a_{0}=bb^{D}abb^{D}=bb^{D}a=0,b_{0}=b^{2}b^{D}\in\mathcal{A}^{sD}\). Moreover, we have
\[b_{0}a_{0}b_{0}^{\pi}=b^{2}b^{D}abb^{D}[1-b^{2}b^{D}(b^{D})]=0.\]
In view of Theorem 3.1, we see that \(\begin{pmatrix}a_{0}&1\\ b_{0}&0\end{pmatrix}=\begin{pmatrix}0&1\\ b_{0}&0\end{pmatrix}\) is Hirano invertible.
Then we know that \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\) is Hirano invertible if and only if \(\begin{pmatrix}a_{00}&1\\ b_{00}&0\end{pmatrix}\) is Hirano invertible. From \(abb^{\pi}=bab^{\pi}\) we get \(a_{00}b_{00}=b_{00}a_{00}.\) So \(a_{00}\) and \(b_{00}\) can be written as
\[a_{00}=a_{3}\oplus a_{4},b_{00}=b_{3}\oplus b_{4},\]
where \(a_{3}\) is invertible, \(a_{4},b_{3},b_{4}\in N(\mathcal{A})\) and \(a_{i}b_{i}=b_{i}a_{i}(i=3,4).\) So
\[\begin{pmatrix}a_{00}&1\\ b_{00}&0\end{pmatrix}=\begin{pmatrix}a_{3}&0&1&0\\ 0&a_{4}&0&1\\ b_{3}&0&0&0\\ 0&b_{4}&0&0\end{pmatrix}\hookrightarrow_{S}\begin{pmatrix}a_{3}&1&0&0\\ b_{3}&0&0&0\\ 0&0&a_{4}&1\\ 0&0&b_{4}&0\end{pmatrix}.\]
Hence \(\begin{pmatrix}a_{00}&1\\ b_{00}&0\end{pmatrix}\) is Hirano invertible if and only if \(\begin{pmatrix}a_{i}&1\\ b_{i}&0\end{pmatrix},i=3,4\) are Hirano invertible. Since \(a_{4},b_{4}\in\mathcal{A}\) are nilpotent and \(a_{4}b_{4}=b_{4}a_{4}\), it follows by [2, Lemma 1] that \(\begin{pmatrix}a_{4}&1\\ b_{4}&0\end{pmatrix}\) is nilpotent.
Since \(a_{3}\) is invertible and \(b_{3}\in\mathcal{A}\) is nilpotent with \(a_{3}b_{3}=b_{3}a_{3}\), there exists \(x_{3}\in N(\mathcal{A})\) such that \(a_{3}x_{3}+x_{3}^{2}=b_{3}\) and \(x_{3}b_{3}\in\mathcal{A},a_{3}x_{3}=x_{3}a_{3},a_{3}+x_{3}\) is invertible. Then \(\begin{pmatrix}a_{3}&1\\ b_{3}&0\end{pmatrix}\hookrightarrow_{S_{0}}\begin{pmatrix}a_{3}+x_{3}&1\\ 0&-x_{3}\end{pmatrix},\) where \(S_{0}=\begin{pmatrix}1&0\\ -x_{3}&1\end{pmatrix}.\) We now claim that \(\begin{pmatrix}a_{3}+x_{3}&1\\ 0&-x_{3}\end{pmatrix}\) is Hirano invertible.
Since \(a_{00}=\begin{pmatrix}a_{3}&0\\ 0&a_{4}\end{pmatrix}\in\mathcal{A}^{H}\) and \(a_{4}\) is nilpotent, we get \(a_{3}\in\mathcal{A}^{H}.\) According to Proposition 2.1, \(a_{3}\) is the sum of a tripotent and a nilpotent. Additionally, \(x_{3}\) is nilpotent, thus \(a_{3}+x_{3}\) can also be written as the sum of a tripotent and a nilpotent. Therefore, \(a_{3}+x_{3}\in\mathcal{A}^{H}.\)
Then, \(\begin{pmatrix}a_{3}&1\\ b_{3}&0\end{pmatrix}\in\mathcal{A}^{H}.\) We complete the proof.
**Corollary 4.2**.: _Let \(a\in\mathcal{A}^{H},bc\in\mathcal{A}^{sD}.\) If \(acb=cba,(cb)^{D}a=0,\) then \(\begin{pmatrix}a&c\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: We easily verify \(\begin{pmatrix}a&c\\ b&0\end{pmatrix}=\begin{pmatrix}a&1\\ b&0\end{pmatrix}\begin{pmatrix}1&0\\ 0&c\end{pmatrix}\) and \(\begin{pmatrix}1&0\\ 0&c\end{pmatrix}\begin{pmatrix}a&1\\ b&0\end{pmatrix}=\begin{pmatrix}a&1\\ cb&0\end{pmatrix}.\) In light of Lemma 2.3, \(\begin{pmatrix}a&c\\ b&0\end{pmatrix}\) is Hirano invertible if and only if \(\begin{pmatrix}a&1\\ cb&0\end{pmatrix}\) is Hirano invertible. Since \(a\in\mathcal{A}^{H}\), \(cb\in\mathcal{A}^{sD}\) and \(acb=cba,\) applying Theorem 4.1 to \(\begin{pmatrix}a&1\\ cb&0\end{pmatrix}\) gives \(\begin{pmatrix}a&1\\ cb&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\) Accordingly, \(\begin{pmatrix}a&c\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)
**Corollary 4.3**.: _Let \(a,b\in\mathcal{A}^{sD}.\) If \(b^{D}a=0,ab=ba,\) then \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: This is obvious by Corollary 4.2.
**Theorem 4.4**.: _Let \(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD}.\) If \(b=ba,\) then \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)_
Proof.: Write \(b=\begin{pmatrix}b_{1}&0\\ 0&b_{2}\end{pmatrix},\) where \(b_{1}\) is invertible and \({b_{2}}^{t}=0\) for some \(t\in\mathbb{N}.\) Since \(b=ba,\) we have \(a=\begin{pmatrix}bb^{D}&0\\ a_{3}&a_{2}\end{pmatrix},\) and \(b_{2}=b_{2}a_{2}.\) Then
\[\begin{pmatrix}a&1\\ b&0\end{pmatrix}=\begin{pmatrix}bb^{D}&0&1&0\\ a_{3}&a_{2}&0&1\\ b_{1}&0&0&0\\ 0&b_{2}&0&0\end{pmatrix}\hookrightarrow_{P}\begin{pmatrix}bb^{D}&1&0&0\\ b_{1}&0&0&0\\ a_{3}&0&a_{2}&1\\ 0&0&b_{2}&0\end{pmatrix}.\]
Notice that from \(b_{2}=b_{2}a_{2},\) we can obtain \(b_{2}=\begin{pmatrix}b_{21}&0\\ b_{23}&0\end{pmatrix},\) and \(b_{21}=b_{21}a_{21}.\) Then
\[\begin{pmatrix}a_{2}&1\\ b_{2}&0\end{pmatrix}=\begin{pmatrix}a_{21}&0&1&0\\ 0&a_{22}&0&1\\ b_{21}&0&0&0\\ b_{23}&0&0&0\end{pmatrix}\hookrightarrow\begin{pmatrix}a_{21}&1&0&0\\ b_{21}&0&0&0\\ 0&0&a_{22}&1\\ b_{23}&0&0&0\end{pmatrix}.\]
Because \({a_{22}}^{n}=0\) for some \(n\in\mathbb{N}\), the matrix \(\begin{pmatrix}a_{22}&1\\ 0&0\end{pmatrix}\) is nilpotent as well. Therefore, \(\begin{pmatrix}a_{2}&1\\ b_{2}&0\end{pmatrix}\) is Hirano invertible if and only if \(\begin{pmatrix}a_{21}&1\\ b_{21}&0\end{pmatrix}\) is Hirano invertible. Since \({b_{2}}^{t}=0\Rightarrow{b_{21}}^{t}=0,\)\(b_{21}=b_{21}a_{21}\) and \(a_{21}\) is invertible, there exists a nilpotent operator \(x_{21}\) such that \(x_{21}^{2}+a_{21}x_{21}=b_{21}\) and \(a_{21}+x_{21}\) is invertible. Thus,
\[\begin{pmatrix}a_{21}&1\\ b_{21}&0\end{pmatrix}\sim\begin{pmatrix}a_{21}+x_{21}&1\\ 0&-x_{21}\end{pmatrix}.\]
According to Theorem 4.1, \(\begin{pmatrix}a_{21}+x_{21}&1\\ 0&-x_{21}\end{pmatrix}\) is Hirano invertible, and so is \(\begin{pmatrix}a_{21}&1\\ b_{21}&0\end{pmatrix}.\) Thus, \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)\(\square\)
**Corollary 4.5**.: _Let \(a\in\mathcal{A}^{H},b\in\mathcal{A}^{sD}.\) If \(b=ab,\) then \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)_
_Proof._ Write \(b=\begin{pmatrix}b_{1}&0\\ 0&b_{2}\end{pmatrix},\) where \(b_{1}\) is invertible and \({b_{2}}^{t}=0\) for some \(t\in\mathbb{N}.\) Since \(b=ab,\) we have \(a=\begin{pmatrix}bb^{D}&a_{3}\\ 0&a_{2}\end{pmatrix},\) and \(b_{2}=a_{2}b_{2}.\) Then
\[\begin{pmatrix}a&1\\ b&0\end{pmatrix}=\begin{pmatrix}bb^{D}&a_{3}&1&0\\ 0&a_{2}&0&1\\ b_{1}&0&0&0\\ 0&b_{2}&0&0\end{pmatrix}\hookrightarrow_{P}\begin{pmatrix}bb^{D}&1&a_{3}&0\\ b_{1}&0&0&0\\ 0&0&a_{2}&1\\ 0&0&b_{2}&0\end{pmatrix}.\]
Notice that from \(b_{2}=a_{2}b_{2},\) we can obtain \(b_{2}=\begin{pmatrix}b_{21}&b_{22}\\ 0&0\end{pmatrix},\) and \(b_{21}=a_{21}b_{21}.\) Then
\[\begin{pmatrix}a_{2}&1\\ b_{2}&0\end{pmatrix}=\begin{pmatrix}a_{21}&0&1&0\\ 0&a_{22}&0&1\\ b_{21}&b_{22}&0&0\\ 0&0&0&0\end{pmatrix}\hookrightarrow\begin{pmatrix}a_{21}&1&0&0\\ b_{21}&0&b_{22}&0\\ 0&0&a_{22}&1\\ 0&0&0&0\end{pmatrix}.\]
Because \({a_{22}}^{n}=0\) for some \(n\in\mathbb{N}\), the matrix \(\begin{pmatrix}a_{22}&1\\ 0&0\end{pmatrix}\) is nilpotent as well. Therefore, \(\begin{pmatrix}a_{2}&1\\ b_{2}&0\end{pmatrix}\) is Hirano invertible if and only if \(\begin{pmatrix}a_{21}&1\\ b_{21}&0\end{pmatrix}\) is Hirano invertible. Since \({b_{2}}^{t}=0\Rightarrow{b_{21}}^{t}=0,\)\(b_{21}=a_{21}b_{21}\) and \(a_{21}\) is invertible, there exists a nilpotent operator \(x_{21}\) such that \(x_{21}^{2}+a_{21}x_{21}=b_{21}\) and \(a_{21}+x_{21}\) is invertible. Thus,
\[\begin{pmatrix}a_{21}&1\\ b_{21}&0\end{pmatrix}\sim\begin{pmatrix}a_{21}+x_{21}&1\\ 0&-x_{21}\end{pmatrix}.\]
According to Theorem 4.4, \(\begin{pmatrix}a_{21}+x_{21}&1\\ 0&-x_{21}\end{pmatrix}\) is Hirano invertible, and so is \(\begin{pmatrix}a_{21}&1\\ b_{21}&0\end{pmatrix}.\) Thus, \(\begin{pmatrix}a&1\\ b&0\end{pmatrix}\in M_{2}(\mathcal{A})^{H}.\)
## 5. Numerical examples
In this section, three specific examples are given to illustrate the main results.
**Example 5.1**.: _Let_ \(A=\begin{pmatrix}0&0&1\\ 1&0&1\\ 1&0&0\end{pmatrix}.\) _Then \(A\in M_{3}(\mathcal{A})^{H}.\)_
Proof.: \(A=\begin{pmatrix}0&0&1\\ 1&0&1\\ 1&0&0\end{pmatrix}=\begin{pmatrix}0&0&1\\ 1&0&0\\ 1&0&0\end{pmatrix}+\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&0&0\end{pmatrix}=B+C,\) where \(B^{3}=B\) and \(C\in N(\mathcal{A}).\) Moreover, a direct computation gives \(A^{3}=A,\) so \(A-A^{3}=0\) is nilpotent. According to Proposition 2.1, \(A\in M_{3}(\mathcal{A})^{H}.\)
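A direct numerical check of the computation \(A^{3}=A\) (an editorial addition):

```python
import numpy as np

A = np.array([[0, 0, 1],
              [1, 0, 1],
              [1, 0, 0]])
print(np.array_equal(np.linalg.matrix_power(A, 3), A))   # True: A^3 = A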
**Example 5.2**.: _Let \(A=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\in\mathcal{A}^{H},B=\begin{pmatrix}1&0\\ 1&0\end{pmatrix}\in\mathcal{A}^{sD}.\) Then \(x=\begin{pmatrix}A&I\\ B&0\end{pmatrix}\in M_{4}(\mathcal{A})^{H}.\)_
Proof.: We have \(B^{D}A=0\) and \(BAB^{\pi}=0.\) Using Theorem 3.1, we obtain \(x=\begin{pmatrix}A&I\\ B&0\end{pmatrix}=\begin{pmatrix}0&0&1&0\\ 0&1&0&1\\ 1&0&0&0\\ 1&0&0&0\end{pmatrix}\in M_{4}(\mathcal{A})^{H}.\)
**Example 5.3**.: _Let \(A=\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\in\mathcal{A}^{H},B=\begin{pmatrix}1&0\\ -1&0\end{pmatrix}\in\mathcal{A}^{sD}.\) Then \(x=\begin{pmatrix}A&I\\ B&0\end{pmatrix}\in M_{4}(\mathcal{A})^{H}.\)_
Proof.: We have \(B^{D}A=0\) and \(BAB^{\pi}=ABB^{\pi}.\) Using Theorem 4.1, we obtain
\[x=\begin{pmatrix}A&I\\ B&0\end{pmatrix}=\begin{pmatrix}0&0&1&0\\ 0&1&0&1\\ 1&0&0&0\\ -1&0&0&0\end{pmatrix}\in M_{4}(\mathcal{A})^{H}.\qed\]
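The matrices of Examples 5.2 and 5.3 can also be checked directly against the criterion of Proposition 2.1; the helper below is an editorial sketch with arbitrary names.

```python
import numpy as np

def hirano_check(x):
    """Check Proposition 2.1: x - x^3 is nilpotent (exact integer arithmetic)."""
    x = np.asarray(x)
    d = x - np.linalg.matrix_power(x, 3)
    return np.array_equal(np.linalg.matrix_power(d, x.shape[0]), np.zeros_like(d))

x52 = np.array([[0, 0, 1, 0], [0, 1, 0, 1], [1, 0, 0, 0], [1, 0, 0, 0]])    # Example 5.2
x53 = np.array([[0, 0, 1, 0], [0, 1, 0, 1], [1, 0, 0, 0], [-1, 0, 0, 0]])   # Example 5.3
print(hirano_check(x52), hirano_check(x53))   # True True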
|
2309.00276 | Superexchange coupling of donor qubits in silicon | Atomic engineering in a solid-state material has the potential to
functionalize the host with novel phenomena. STM-based lithographic techniques
have enabled the placement of individual phosphorus atoms at selective lattice
sites of silicon with atomic precision. Here, we show that by placing four
phosphorus donors spaced 10-15 nm apart from their neighbours in a linear
chain, it is possible to realize coherent spin coupling between the end dopants
of the chain, analogous to the superexchange interaction in magnetic materials.
Since phosphorus atoms are a promising building block of a silicon quantum
computer, this enables spin coupling between their bound electrons beyond
nearest neighbours, allowing the qubits to be spaced out by 30-45 nm. The added
flexibility in architecture brought about by this long-range coupling not only
reduces gate densities but can also reduce correlated noise between qubits from
local noise sources that are detrimental to error correction codes. We base our
calculations on a full configuration interaction technique in the atomistic
tight-binding basis, solving the 4-electron problem exactly, over a domain of a
million silicon atoms. Our calculations show that superexchange can be tuned
electrically through gate voltages where it is less sensitive to charge noise
and donor placement errors. | Mushita M. Munia, Serajum Monir, Edyta N. Osika, Michelle Y. Simmons, Rajib Rahman | 2023-09-01T06:12:40Z | http://arxiv.org/abs/2309.00276v1 | # Superexchange coupling of donor qubits in silicon
###### Abstract
Atomic engineering in a solid-state material has the potential to functionalize the host with novel phenomena. STM-based lithographic techniques have enabled the placement of individual phosphorus atoms at selective lattice sites of silicon with atomic precision. Here, we show that by placing four phosphorus donors spaced 10-15 nm apart from their neighbours in a linear chain, it is possible to realize coherent spin coupling between the end dopants of the chain, analogous to the superexchange interaction in magnetic materials. Since phosphorus atoms are a promising building block of a silicon quantum computer, this enables spin coupling between their bound electrons beyond nearest neighbours, allowing the qubits to be spaced out by 30-45 nm. The added flexibility in architecture brought about by this long-range coupling not only reduces gate densities but can also reduce correlated noise between qubits from local noise sources that are detrimental to error correction codes. We base our calculations on a full configuration interaction technique in the atomistic tight-binding basis, solving the 4-electron problem exactly, over a domain of a million silicon atoms. Our calculations show that superexchange can be tuned electrically through gate voltages where it is less sensitive to charge noise and donor placement errors.
superexchange, silicon qubits, NEMO, FCI, donors, STM
## I Introduction
Donor qubits in silicon are promising candidates for encoding quantum information in the solid state due to their long coherence times [1; 2; 3] and their technological link to the silicon platform of the electronics industry. Experimental advancements in the past decades have enabled the precision placement of phosphorus donors in silicon [4; 5; 6; 7; 8]. The platform of phosphorus donor based quantum computing has been bolstered by key milestone achievements over the last decade, including single-shot spin-readout [9], the realization of single electron and nuclear spin qubits [10; 11], and more recently, two-qubit SWAP gates [12] and three-qubit donor quantum processor with universal logic operation [13]. Exchange coupling between the electronic spins of donors remains a key mechanism for fast coupling of two qubits [12; 14]. The exchange interaction depends on the overlap between the electronic wavefunctions and ultimately limits the separation of donor qubits to about 10-15 nm in silicon devices. Long-range coupling schemes through resonators and cavities have recently been explored in donor qubits [15; 16; 17], however, these typically require additional fabrication and integration steps adding complexity to the overall manufacturing process. Spacing out the qubits is beneficial from an architectural point of view in fault-tolerant quantum computing [18] as correlations between the qubits due to local noise sources can be minimised. An increase in separation also relaxes stringent gate density requirements and offers more independent electrostatic control of the qubits by reducing their capacitive cross-talk. For STM-patterned donors with phosphorus doped in-plane gates, the density is already low [19], so this technique is particularly appealing.
In this work, we study long-range exchange coupling between the end spins of four single donor (1P) quantum dots in a linear chain. With each donor containing a single electron, a superexchange coupling is found to emerge between the donors at the end of the chain. This third nearest neighbour interaction enables the qubits to be separated by 30-45 nm. Using atomistic full configuration interaction calculations, we study the eigenvalues and eigenvectors of four electron spins across four 1P atoms in silicon. We investigate the regime of superexchange where the distant qubits can be coherently manipulated and provide guidelines on the appropriate donor placement to achieve this. We also investigate the role of the conduction band valleys on superexchange for donor separation along different crystallographic directions. We simulate the system using realistic electrostatic potentials produced by the surrounding in-plane STM-patterned gates where we demonstrate tunability of superexchange with gate voltages, a crucial requirement for the realization of electrically-controlled singlet-triplet oscillations. Finally, we comment on the sensitivity of superexchange to charge noise and donor placement errors, as well as the role of nuclear spins in singlet-triplet oscillations induced by superexchange.
Recent experiments and theoretical calculations have shown indirect coupling of distant electrons in quantum dots via a central empty mediator [20; 21; 22; 23; 24], multi-electron quantum dot [21; 24; 25; 26; 27; 28; 29] and linear chain of singly-occupied quantum dots [20; 30; 31]. However, to date, comprehensive studies have not been performed on long-range indirect coupling for donor qubits in silicon. Compared to electrostatically defined quantum dots, donor quantum dots in silicon typically have atomic-scale properties with wavefunction length scales an order of magnitude smaller and large orbital-valley energy splittings [32]. A single phosphorus donor can bind at most one or two electrons with
the nucleus having a net 1/2 spin. Early papers on single donor qubits showed how the exact position and axis of separation of donor qubits can significantly affect their direct exchange, including such effects as exchange oscillations due to valley interference [33; 34]. More recent work has shown that these effects can be mitigated by separating the donors along specific crystallographic directions, by using multi-donor quantum dots [35] or by applying strain or placing the donors close to the surface [33]. Highly tunable nearest-neighbour exchange has also been predicted [35] and demonstrated [12] in asymmetric donor quantum dots, showing that multi-electron molecular physics can be tuned by gate voltages. However, the atomistic character of these donors and donor dot systems needs to be accounted for when considering indirect couplings like superexchange since the phenomenon emerges from individual nearest-neighbour exchange couplings.
## II Methods
The exact calculation of a multi-electron system is challenging because of the complicated and numerically intensive electron-electron interaction term in the Schrodinger equation [36]. The accuracy relies both on the quality of the basis states and the multi-electron approximation. In semiconductor materials such as GaAs, eigenstates from effective Hubbard Hamiltonian [20; 25; 37] or simple Fock-Darwin states are often used as basis states [21]. These are however unsuitable for silicon due to the multi-valley states. The effective mass approximation has also been used for calculating single electron basis states for donors in silicon [38; 39]. However, the theory does not provide a complete description of the band structure and can be inaccurate at higher energy levels. On the contrary, the full-band atomistic tight-binding model has the potential to capture all intricate nuances of the wavefunction of phosphorus donors in silicon hence its use in this work.
The most common approximation for multi-electron calculations is the Hartree-Fock method which neglects the electron-electron correlations to minimize the complexity of the calculations. Configuration interaction (CI), on the other hand, enhances the accuracy in the treatment of many-body interactions but can be computationally intensive. This is particularly true for a multi-valley material like silicon where only a few electron calculations are present in the literature [33; 40]. In this paper, we have performed a state-of-the-art investigation combining atomistic basis states with a full configuration interaction method to calculate the energy levels of a four-electron donor system without
Figure 1: **Schematic representation of a simulation using four single phosphorus donors each with an electron.****(a)** Four phosphorus donors placed in the silicon crystal (grey) where the electrons localized in the two middle donors (blue) are mediators (\(M_{1}\) and \(M_{2}\)) for the electron spins localized in the end donors (\(Q_{1}\) and \(Q_{2}\)) (red). The separation of the blue mediator spins is \(r_{M}\) and their exchange coupling is \(j_{M}\). The first qubit \(Q_{1}\) is separated from the first mediator \(M_{1}\) by \(r_{1}\) (\(\sim\) 10 nm) and the second qubit \(Q_{2}\) is separated from the second mediator \(M_{2}\) by \(r_{2}\) (\(\sim\) 10 nm). The corresponding exchange coupling between them is \(j_{1}\) and \(j_{2}\) respectively. The total separation of the qubits \(Q_{1}\) and \(Q_{2}\) is \(R\) (\(\sim\) 30 nm). The probability density of the lowest single electron valley-orbital **(b)** bonding and **(c)** anti-bonding state in a logarithmic scale calculated using an atomistic tight-binding method.
compromising accuracy.
### Atomistic Full Configuration Interaction
Full configuration interaction (FCI) is a method to obtain the exact numerical solution of a many-body Schrodinger equation, limited by the number and quality of single electron basis states. The single electron basis states used here are calculated using a 10-band \(sp^{3}d^{5}s^{*}\) atomistic tight-binding (TB) method in NEMO3D [41; 42]. This approach uses a localized atomic orbital-based method with nearest-neighbour interactions. The TB parameters are optimized to reproduce the bulk silicon band structure [43]. The phosphorus donors are represented using a Coulomb potential with a central cell correction at the donor site which can successfully determine the experimentally measured energy spectra of donors in silicon [44; 45].
The schematic representation of the simulations is illustrated in Figure 1(a). There are four phosphorus donors placed in a chain, each with an electron. The end electron spins, \(Q_{1}\) and \(Q_{2}\) are the qubits separated by R. The middle spins, \(M_{1}\) and \(M_{2}\) work as mediators with corresponding donors being separated by \(r_{M}\). The separation between \(Q_{1}\) and \(M_{1}\) is \(r_{1}\) and the separation between \(M_{2}\) and \(Q_{2}\) is \(r_{2}\). The simulation domains used in the calculations entail \(\sim 1.14\) million atoms which account for \(\sim 60\) nm of silicon in the direction of separation and \(\sim 20\) nm in the other two directions. The single electron basis states, solved from a parallel Block Lanczos eigensolver, are used to calculate the anti-symmetric Slater determinants of the multi-electron problem. The lowest two single electron valley-orbital states of the system are shown in Figure 1(b) and (c). Here we see bonding and anti-bonding state formation in the middle two donors \(M_{1}\) and \(M_{2}\) of a four-donor chain. The rest of the single electron molecular basis states are shown in Figure 8 in the Supplementary Information where we see the next two valley-orbital states are mainly localized in the outer donor dots.
The four-electron wavefunction is a superposition of various symmetry-permitted configurations of the Slater determinants. All possible integrals between the Slater Determinants with pairwise electronic interaction operators are computed to capture Coulomb, exchange, and higher-order correlations. The four electron Hamiltonian constructed from the Slater determinants is solved using the block Krylov-Schur algorithm within the Trilinos framework [46]. The number of single electron basis states in FCI calculations is increased until the eigenvalues of the four-electron system converge within a chosen tolerance - see Supplementary Information. On average, 56 single-electron basis states are sufficient to reach convergence. The eigenvalue of the ground state is separated from the triply degenerate excited states by the indirect exchange coupling. We place the donor atoms in different separations and orientations in our simulations and calculate this exchange coupling.
### Effective spin Hamiltonian
To analyze and interpret our FCI results, we compare them with an effective Hamiltonian model [30]. We can represent the four-spin system with a spin Hamiltonian such as -
\[H_{eff}=\frac{j_{1}}{4}\sigma_{1}.\sigma_{2}+\frac{j_{M}}{4}\sigma_{2}.\sigma_ {3}+\frac{j_{2}}{4}\sigma_{3}.\sigma_{4} \tag{1}\]
where \(\sigma_{i}\) is a Pauli matrix corresponding to an electron spin located on \(i^{th}\) donor in the chain. \(j_{M}\) is the exchange coupling between the middle two spins, \(M_{1}\) and \(M_{2}\). \(j_{1}\) is the exchange coupling between \(Q_{1}\) and \(M_{1}\) and \(j_{2}\) is the exchange coupling between \(Q_{2}\) and \(M_{2}\) - see Figure 1(a) for the schematic of the system.
We only consider the subspace with spin-zero states (\(S_{z}=0\)) to construct the Hamiltonian as we are interested in the singlet-triplet oscillations in the qubits \(Q_{1}\) and \(Q_{2}\). When \(j_{1},j_{2}\ll j_{M}\), we can isolate the low energy states of the four-electron spin Hamiltonian using a Schrieffer-Wolff transformation. In that case, the effective Hamiltonian in a Heisenberg exchange form, \(H_{SW}\)[30] now becomes -
\[\begin{split} H_{SW}&=\frac{J_{SW}}{4}\sigma_{1}. \sigma_{4}\\ J_{SW}&=\frac{j_{1}j_{2}}{2j_{M}}[1+\frac{3(j_{1}+ j_{2})}{4j_{M}}]\end{split} \tag{2}\]
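As a minimal numerical sketch of Eqs. (1)-(2) (an editorial addition, not the FCI code used in this work), the \(16\times 16\) spin Hamiltonian can be diagonalized exactly and the gap between its two lowest eigenvalues compared with \(J_{SW}\); energies are in units of \(j_{M}\) and all function names are illustrative.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def bond(i, j, n=4):
    """(1/4) sigma_i . sigma_j on an n-spin chain (sites numbered from 0)."""
    return sum(kron_chain([s if k in (i, j) else I2 for k in range(n)])
               for s in (sx, sy, sz)) / 4.0

def delta_e_and_jsw(j1, j2, jM):
    H = j1 * bond(0, 1) + jM * bond(1, 2) + j2 * bond(2, 3)        # Eq. (1)
    E = np.linalg.eigvalsh(H)
    dE = E[1] - E[0]                                               # gap of the two lowest states
    Jsw = j1 * j2 / (2 * jM) * (1 + 3 * (j1 + j2) / (4 * jM))      # Eq. (2)
    return dE, Jsw

for ratio in (0.05, 0.1, 0.3, 1.0):
    dE, Jsw = delta_e_and_jsw(ratio, ratio, 1.0)
    print(f"j_1,2/j_M = {ratio:4.2f}:  Delta E = {dE:.5f},  J_SW = {Jsw:.5f}")
```

For small \(j_{1,2}/j_{M}\) the two numbers agree closely, and they diverge as the ratio approaches 1, consistent with Figure 3(b).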
In the regime where \(j_{1},j_{2}\ll j_{M}\), the lowest energy eigenstates are characterized by the singlet state formed within the two middle dots and singlet or triplet states formed within the outer dots [30]. The energy difference, \(\Delta E\) between
the lowest long-distance singlet and triplet states, i.e.
\[\begin{split} S^{l}\approx 1/\sqrt{2}(\ket{\uparrow S\downarrow}- \ket{\downarrow S\uparrow})\\ T_{0}^{l}\approx 1/\sqrt{2}(\ket{\uparrow S\downarrow}+\ket{ \downarrow S\uparrow})\end{split} \tag{3}\]
is what we call superexchange (\(\Delta E\)) in this paper - see Figure 3(a) for the energy level diagram of the system. For \(j_{1},j_{2}\ll j_{M}\) the middle-dot singlet manifold is well separated from all the middle-dot triplet states, which is desirable for coherent coupling of the outer spins only, without any interference from the middle-dot spins.
## III Results and discussions
### Equal nearest-neighbour separation of donors
We first analyze the equidistant case, where the separation between each neighbouring pair of donors is equal, \(r_{1}=r_{2}=r_{M}=R/3\) - see Figure 1. Here we have approximately equal values of exchange coupling between each pair of donors, i.e. \(j_{1}\approx j_{2}\approx j_{M}\). In Figure 2 (with red dots), we show the values of the indirect exchange coupling for these equispaced donors (4P chain) with constant but increasing separation between the dots along the [100] (left) and [110] directions (right) in the silicon crystal, calculated using FCI. For comparison, we also plot (with blue dots) the direct exchange values for two donors, i.e. 1P-1P system, separated by R, as previously calculated in [35]. Comparing the fitted dashed lines for the 4P chain and 1P-1P cases, we can see that the presence of the mediator donors dramatically increases the exchange coupling between the two outer spins. The indirect exchange coupling for a donor chain with \(R\) of about 25 nm reaches \(\sim 10^{2}\) GHz (\(10^{-1}\) meV), while the direct exchange for the same separation, without the presence of mediators, would fall below \(10^{-4}\) GHz (\(10^{-7}\) meV) (extrapolated from the dataset in [35]).
Comparing the slopes of these two plots, we also see that the direct exchange decays much faster than the indirect exchange, for both the [100] and [110] crystal directions. For the [100] case in Figure 2(a), the slope of the fitted dashed line for the direct exchange as a function of qubit separation is \(\sim-0.38\)/nm whereas the slope for the
Figure 2: **Comparison of indirect and direct exchange coupling of 1P donors in silicon calculated using atomistic FCI (a)** Comparison of indirect exchange (4P chain) of the end-spins (red dots) with nearest-neighbour exchange coupling (1P-1P) (blue dots, replicated from [35]) along the [100] direction. Dashed lines provide linear fits to both data sets. Here we see that the indirect exchange is higher than the nearest-neighbour exchange, with the exponential dependence with separation being less steep for indirect coupling. **(b)** Same as (a) but along the [110] direction. Here we see a similar trend in terms of the comparison between direct and indirect exchange as for the [100] direction. However, we also observe oscillations in the exchange coupling along this crystalline orientation due to valley quantum interference.
indirect exchange is \(\sim-0.12\)/nm. Similarly, in the [110] case in Figure 2(b), the slope of the fitted line for the direct exchange is \(\sim-0.36\)/nm whereas the slope for the indirect exchange is \(\sim-0.11\)/nm. In general, the indirect exchange decays almost three times more slowly with donor separation than the direct exchange. This distance dependence allows the qubits to be widely separated in the device, reducing correlated noise between them while maintaining strong spin coupling. Since the nearest neighbour exchange coupling has an exponential dependence on the donor separation, one might expect superexchange to change as an exponential function of \(r_{1}+r_{2}-r_{M}\) - see Equation 2. By contrast, the direct exchange between the end donors changes as an exponential function of the full qubit separation \(R=r_{1}+r_{2}+r_{M}\), which results in the faster decay of the direct exchange compared with the indirect one.
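A rough way to appreciate the difference in decay rates (an editorial sketch using only the fitted [100] slopes quoted above):

```python
slope_direct, slope_indirect = -0.38, -0.12   # fitted slopes (log10 of exchange per nm)
extra_nm = 10
print(f"over an extra {extra_nm} nm: direct exchange drops by ~{10 ** (-slope_direct * extra_nm):,.0f}x, "
      f"indirect exchange by ~{10 ** (-slope_indirect * extra_nm):.0f}x")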
We observe no oscillations in the exchange coupling when the donors are separated in the [100] direction whilst we see oscillations in the exchange energy for separation in the [110] orientation, a signature of inter-valley quantum interference arising from the band structure of silicon [35, 33, 47].
Figure 3: **Estimate of exchange coupling strength from the spin Hamiltonian.****(a)** Energies of 4-electron eigenstates calculated with the effective Hamiltonian \(H_{eff}\) as a function of \(j_{1,2}/j_{M}\). The two lowest states (spanned by \(\ket{\uparrow S\downarrow}\) and \(\ket{\downarrow S\uparrow}\)) are well separated from the higher states when \(j_{1,2}/j_{M}\) is smaller. The separation of these two energy levels is the superexchange \(\Delta E\). The labels on the right refer to the dominant contribution of the corresponding energy states when \(j_{1,2}/j_{M}\sim 0\). **(b)** A comparison of the exchange coupling, \(\Delta E\) calculated from the effective spin and the Schrieffer-Wolff Hamiltonian shows that up to approximately \(j_{1,2}/j_{M}=0.3\), the superexchange from \(H_{eff}\) and \(H_{SW}\) is the same since the Schrieffer-Wolff Hamiltonian is a valid approximation of the spin Hamiltonian. Beyond this regime, their behaviour diverges significantly because the Schrieffer-Wolff transformation no longer holds, indicating the development of an admixture in the singlet-like state of the middle two spins. **(c)** Contribution of the inner-singlet and inner-triplet basis states to the ground state of \(H_{eff}\) as a function of \(j_{1,2}/j_{M}\). Here we see that, as \(j_{1,2}/j_{M}\) increases, the singlet contribution from the inner mediator spins decreases, and the triplet contribution increases. At \(j_{1,2}/j_{M}=1\), the ground state has equal contributions from singlet and triplet mediator spin states. The contribution of the inner singlet to the ground state is more than 90% for \(j_{1,2}/j_{M}<0.3\) (shaded region).
### Different regimes of superexchange from effective spin Hamiltonian
To obtain superexchange and coherent control between the electron spin qubits \(Q_{1}\) and \(Q_{2}\) at the ends of the chain, it is essential that the middle spins \(M_{1}\) and \(M_{2}\) form a singlet-like state. Otherwise, there will be additional oscillations originating from the middle spins, impeding coherent manipulation of the qubits. Thus we are looking for an operational regime where the ground state and the first excited state are well separated from the higher energy states; in this regime the indirect coupling can be termed superexchange. In Figure 3(a) we plot the energies of all the eigenstates of the effective Hamiltonian \(H_{eff}\) as a function of \(j_{1,2}/j_{M}\) varying from 0 to 1. When the outer spins are weakly coupled with the middle spins (\(j_{1,2}/j_{M}\sim 0\)), we see two clear branches of energy states separated by \(d\), but the separation between the branches decreases as \(j_{1,2}/j_{M}\) approaches 1.
In Figure 3(b), we plot the energy difference between the two lowest states, as a function of \(j_{1,2}/j_{M}\) as calculated from \(H_{eff}\) and \(H_{SW}\). These two lowest states are the long-distance singlet and triplet states, namely \(S^{l}\) and \(T_{0}^{l}\). Here we see that, in the regime where \(j_{1,2}/j_{M}\sim 0\), the solutions from the effective spin and the Schrieffer-Wolff Hamiltonian are the same since the assumption of the transformation is valid here. As \(j_{1,2}/j_{M}\) increases, the two solutions start to diverge suggesting the Schrieffer-Wolff transformation no longer holds. The lowest two energy states (spanned by \(\ket{\uparrow S\downarrow}\) and \(\ket{\downarrow S\uparrow}\)) are no longer well separated from the excited states and there are now triplet-like admixtures in the singlet-like state of the middle two spins.
In Figure 3(c), we can see the contributions in the ground state \(S^{l}\) of the inner-dot singlet states (i.e. \(\ket{\uparrow S\downarrow}\) and \(\ket{\downarrow S\uparrow}\)) and inner-dot triplet states (all the remaining basis states). When \(j_{1,2}/j_{M}=1\) (i.e. the equidistant case where all donors are equally separated), the inner-dot singlet and triplet contributions in \(S^{l}\) are both approximately 50% and we no longer have a well-defined two-level system of \(S^{l}\) and \(T_{0}^{l}\), but now have contributions from the four higher states shown in Figure 3(a). As \(j_{1,2}/j_{M}\) decreases, the contribution of the inner-dot singlet exceeds 90% for \(j_{1,2}\lesssim 0.3j_{M}\) (shaded region). This regime can be considered as a threshold for coherent operation of the qubits. However, we note that the threshold we mention here (\(j_{1,2}/j_{M}\sim 0.3\)) is not the upper limit of the coherent regime. The ratio should ideally be very close to 0.
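The inner-dot singlet weight discussed here can be reproduced from \(H_{eff}\) alone; the sketch below (an editorial addition, with illustrative names) projects the exact ground state onto the singlet of the two middle spins, cf. Figure 3(c).

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def bond(i, j, n=4):
    return sum(kron_chain([s if k in (i, j) else I2 for k in range(n)])
               for s in (sx, sy, sz)) / 4.0

# Projector onto the singlet of the two middle spins (sites 1 and 2)
singlet = np.zeros(4, dtype=complex)
singlet[1], singlet[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)
P_mid = np.kron(np.kron(I2, np.outer(singlet, singlet.conj())), I2)

for ratio in (0.05, 0.3, 1.0):
    H = ratio * bond(0, 1) + 1.0 * bond(1, 2) + ratio * bond(2, 3)
    _, V = np.linalg.eigh(H)
    gs = V[:, 0]                                   # non-degenerate singlet ground state
    w = float(np.real(gs.conj() @ P_mid @ gs))
    print(f"j_1,2/j_M = {ratio}: inner-dot singlet weight of the ground state = {w:.2f}")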
Figure 4: **Exchange coupling as a function of different middle donor separations, \(r_{M}\).** The exchange coupling \(\Delta E\) is calculated using atomistic FCI for different separations of the middle donors \(r_{M}\) in the (a) [100] and (b) [110] direction while keeping the total outer qubit separation fixed (R = 29.3274 nm for [100] and R = 27.648 nm for [110]). The system is symmetric (\(r_{1}=r_{2}\)) to the end donors. The results show that the dependence of exchange energy changes on either side of the equidistant separation (dashed grey line) for all donors. The dashed line indicates where \(r_{1}=r_{2}=r_{M}\) and the left region of this line where \(r_{1}=r_{2}>r_{M}\) is where the two middle donors form a singlet state and the exchange coupling between the end donors is termed superexchange. We compare our FCI results with the effective spin Hamiltonian, \(H_{eff}\) of Eq. 1 with \(j_{1,2}\) and \(j_{M}\) extracted from [35]. For both cases, the results show reasonable agreement with each other. We attribute any differences mainly to the fact that the total confinement potential of 4 donors is deeper than that of 2 donors which results in a slightly higher exchange coupling \(\Delta E\) in our atomistic FCI calculations.
### Modulating superexchange by changing the middle donor separation
To explore the coherent-control regime of the 4-donor chain using our FCI calculations, we switch from the equidistant case discussed in Figure 2 to a chain with varied \(r_{1,2}/r_{M}\) ratios, see Figure 4. Here, we present values of the indirect exchange coupling for chains oriented along the [100] and [110] direction in 4 (a), (b) respectively, as a function of middle donor separation while keeping the outer donor separation \(R\) constant. The lower middle donor separation corresponds to a higher exchange coupling \(j_{M}\) and lower \(j_{1}\) and \(j_{2}\) couplings, moving the system closer to the regime well described by the Schrieffer-Wolff approximation. In general, we see from Figure 4 that the exchange coupling exponentially decreases as we decrease the middle donor separation. The grey dashed line represents the separation where \(r_{1}=r_{2}=r_{M}\). The exchange coupling shows two different dependencies on either side of this point. The first is where the distance between the middle donors is smaller than the values of \(r_{1}\) or \(r_{2}\) and the second is where it is large. For donors separated along the [110] direction, we see clear oscillations in \(\Delta E\) as a function of \(r_{M}\).
As we have discussed in the previous section the equidistant donor chain is not suitable for coherent manipulation of distant spins due to the admixture of states. The right side of the grey dashed line where \(r_{1}=r_{2}\leq r_{M}\) is therefore not valid for operating qubits. To specify the limits of these different regimes, we use the values of direct exchange from the 1P-1P results in Ref. [35] to find \(r_{1,2}\) and \(r_{M}\) and the corresponding threshold. Decreasing the separation of the middle dots gives rise to a value of the superexchange that decreases exponentially while increasing the inner singlet contributions to the ground state - see Table 1 in the supplementary information. So there is a trade-off between the strength of coupling and operation fidelity. However, we see even small changes in the middle donor separation, of the order of \(\sim\) 2 nm, can increase the inner singlet contributions to the ground state from \(\sim\) 49% to \(\sim\) 98% while keeping the value of superexchange still high enough (\(\sim\) 100 MHz or \(\sim\) \(1\times 10^{-3}\) meV) for use in realistic devices.
### Electrical control of superexchange
To address the tunability of superexchange, we have applied an electric field along the direction of the donor chain in simulations where all donors are separated by 9.7758 nm (\(r_{1}=r_{2}=r_{M}\) = 9.7758 nm). For a realistic applied electric field of 2 MV/m, we observe an exchange coupling of 18.09 GHz (\(74.83\,\mu\)eV) compared to 11.01 GHz (\(45.556\,\mu\)eV) under no electric field. The electric field detunes the qubits and increases the virtual tunnelling between them, which results in a slightly increased (by less than a factor of 2) superexchange. However, when the donors are all charge neutral with one electron on each phosphorus atom, it is difficult to reach the (2,1,1,0) charge regime from a (1,1,1,1) regime without applying a very large bias. A similar challenge has been observed in [35], where applying an electric field of 2 MV/m modulated the exchange coupling by a factor of 5. In the case of superexchange, the sensitivity of the coupling to an applied electric field is even lower; hence, the distant qubits are less susceptible to charge noise caused by electric field fluctuations than the nearest-neighbour ones. To explore different levels of control over the superexchange, we apply a J-gate bias to a system with phosphorus-doped silicon top gates, based on the original Kane architecture [14]. We place gates 30 nm above the donor plane to control the potential barrier between donor atoms, and hence the nearest-neighbour exchange, and observe their effect on the superexchange. Similar three-dimensional tuning of the potential barrier between in-plane phosphorus-doped gates has recently been demonstrated experimentally [48] in precision-engineered STM tunnel junctions where the gates were degenerately phosphorus-doped silicon layers. The schematic of the system is illustrated in Figure 5(a). The device consists of five quasi-metallic gates (source S, drain D, and three top gates - \(G_{1}\), \(G_{2}\) and \(G_{M}\)). \(G_{1}\) and \(G_{2}\) are used to tune \(j_{1}\) and \(j_{2}\), and \(G_{M}\) is used to tune \(j_{M}\). The total electrostatic potential profile of the device in the x-z plane is shown in Figure 5(b), where a cut is taken at the donor location. Here we see that, in the donor plane, the top gates create a parabolic potential profile. This total electrostatic potential is added to the tight-binding Hamiltonian when solving for the energy levels of the donor quantum dots. In Figure 5(c-h), 1D cuts are taken at the donor locations shown for different \(V_{M}\) and \(V_{1}\)(=\(V_{2}\)), i.e. for different voltages applied at gates \(G_{M}\) and \(G_{1}\)(\(G_{2}\)), respectively. Here S and D are grounded, i.e. \(V_{S}=V_{D}=0\). We note that the mere presence of the electrostatic leads around the donors also modifies the total electrostatic potential of the device, even without an applied voltage. This is due to the band structure mismatch between the leads and the surrounding material, and the band-bending caused by the leads [49]. We calculate the total electrostatic potential profile of the device using a multi-scale modelling technique that combines the atomistic calculation of the band structure of the gates with a non-linear Poisson solution of the entire device [50].
#### iii.2.1 Applying voltage to the middle top gate, \(G_{M}\)
Since the superexchange \(\Delta E\) is a function of the nearest-neighbour exchanges (i.e. \(j_{1}\), \(j_{2}\) and \(j_{M}\)), it is necessary to understand how these exchange couplings change under an applied bias. When we apply a positive voltage to the middle gate \(G_{M}\), the potential barrier between the mediators \(M_{1}\) and \(M_{2}\) decreases, so \(j_{M}\) increases - see Figure 5(c-e) for increasing values of \(V_{M}\). However, from the dashed line in Figure 5(e), we see that the bias applied to \(G_{M}\) also detunes the qubits \(Q_{1}\) and \(Q_{2}\) with respect to \(M_{1}\) and \(M_{2}\), so \(j_{1}\) and \(j_{2}\) increase as well. As we can see from Equation 2, the superexchange strongly depends on the interplay between \(j_{M}\) and \(j_{1},j_{2}\). In our case, the superexchange increases overall with \(V_{M}\) since the effect of the detuning between the qubits and the mediator donors is stronger than that of lowering the middle barrier - see the potential profile in Figure 5(e). We observe this effect on the value of \(\Delta E\) in Figure 5(i), which shows the superexchange as a function of \(V_{M}\) for \(V_{1}=V_{2}=0\) (the case corresponding to 5(c-e)). By changing the middle gate potential from 0 to 0.5 V, we modulate the magnitude of the superexchange by a factor of 10. The increase of the superexchange is due to a relatively larger increase of \(j_{1},j_{2}\) than of \(j_{M}\), resulting in a smaller singlet contribution from the mediator dots, as discussed in the previous section. In our case, the singlet contribution in the middle dot drops from 92.2% (\(V_{M}=0\)) to 67% (\(V_{M}=0.5\) V).
#### iii.2.2 Applying voltage to all three top gates, \(G_{1}\), \(G_{2}\) and \(G_{M}\)
We can minimize the detuning of the qubits \(Q_{1}\) and \(Q_{2}\) by applying a negative voltage to the left and right gates. The superexchange then decreases back - here from 65.42 \(\mu\)eV to 6 \(\mu\)eV when changing \(V_{1}=V_{2}\) from 0 to -0.5 V for \(V_{M}\) = 0.5 V - see Figure 5(j).
Figure 5: **Modulating superexchange using voltage biases on phosphorus-doped top gates above the donor plane.** **(a)** Schematic representation of the simulation domain. The qubit donor atoms (\(Q_{1}\) and \(Q_{2}\)) and the mediator donor atoms (\(M_{1}\) and \(M_{2}\)) are placed in-plane between the source and drain (S, D) electrodes. Three phosphorus-doped top gates (\(G_{1}\), \(G_{2}\), and \(G_{M}\)) are placed 30 nm above the donor plane. **(b)** The total potential of the device in the x-z plane (cut taken at the donor location). Here the voltages of gates \(G_{1}\) and \(G_{2}\) are \(V_{1}\) = \(V_{2}\) = \(-0.1V\) whilst the middle gate \(G_{M}\) has a voltage \(V_{M}\) = 0.5\(V\) and S and D are grounded. The red and blue dots represent the locations of the donors and are not to scale. 1D potentials with cuts taken at the donor location when \(V_{1}\) = \(V_{2}\) = 0V and **(c)**\(V_{M}\) = 0V, **(d)**\(V_{M}\) = 0.3V and **(e)**\(V_{M}\) = 0.5V, respectively, as we deplete the barrier between the donors. The dashed lines represent the potential from the gates and the solid lines represent the individual donor potentials combined with the electrostatic potential from the gates. As the voltage of the top gate increases, the curvature of the potential also increases. Same as (c-e) when \(V_{M}=0.5V\) and **(f)**\(V_{1}\) = \(V_{2}\) = -0.1V, **(g)**\(V_{1}\) = \(V_{2}\) = -0.3V and **(h)**\(V_{1}\) = \(V_{2}\) = -0.5V, respectively. **(i)** Superexchange as a function of \(V_{M}\) when \(V_{1}\) = \(V_{2}\) = 0V. We see that the magnitude of the superexchange increases as a positive bias is applied to \(G_{M}\). The increased curvature of the applied potential at \(V_{M}\) = 0.5\(V\) (comparing (c) and (e)) gives rise to the modulation of the magnitude of the superexchange by a factor of 10. **(j)** Superexchange as a function of \(V_{1}\) = \(V_{2}\) when \(V_{M}\) = 0.5\(V\). Here we see that the superexchange decreases as a more negative bias is applied, as a result of the decreased potential curvature (comparing (f) and (h)).
The positive bias applied to \(G_{M}\) increases \(j_{M}\) and the negative bias applied to \(G_{1,2}\) decreases \(j_{1,2}\), so the \(j_{1,2}/j_{M}\) ratio becomes smaller. The potential profile at the donor location for \(V_{M}\) = 0.5 V and \(V_{1}\) = \(V_{2}\) = -0.1 V, -0.3 V and -0.5 V is shown in Figure 5(f-h). If we look at the inner singlet contribution to the ground state, we see that it increases up to 94.05% when \(V_{1}\) = \(V_{2}\) = -0.5 V. Tuning the superexchange with three gates might be useful for increasing the fidelity of the operation, although the accessible range of superexchange is similar to that obtained with a single top gate. With these simulations, we have shown that it is possible to manipulate the superexchange in a 4P chain by purely electrostatic means. Applying voltages through top gates provides better tunability of the superexchange than a tilt field along the donor chain. We have shown that it is possible to modulate the superexchange by a factor of 10 with modest gate potentials of 0.5 V. We note that even higher tunability might be desired to turn the coupling on and off for qubit operations. Such high tunability of the exchange coupling can also be achieved by depleting the middle donors. Without electrons in the middle donors, the qubits are coupled by direct exchange, which is negligibly small at these separations. On the other hand, loading electrons onto the middle donors introduces a high value of the superexchange. This mechanism can be crucial for the realization of singlet-triplet oscillations in distant qubits.
### Effect of asymmetric separation of the donors
Current STM lithography techniques allow us to place donor atoms in silicon with the precision of a single lattice constant [51; 52; 53; 54; 55]. As a consequence, we can investigate how small shifts in the exact P atom locations within the four-donor chain affect the attainable superexchange. For that purpose, we analyze a chain oriented along the [100] direction with constant middle dot separation \(r_{M}\) = 5.431 nm but with variable positions of the outer donors, described by \(r_{1}\) and \(r_{2}\) where \(r_{1},r_{2}\geq\) 7 nm. We have deliberately chosen a smaller value of \(r_{M}\) so that \(r_{1},r_{2}>r_{M}\). We do not consider nearest-neighbour separations of less than 7 nm (qubit separation \(R\) \(<\) 20 nm) since we are interested in the long-distance qubit coupling regime and want to exceed the distance achievable by direct exchange.
The results are shown in Figure 6. Here we can see that the superexchange \(\Delta E\) is maximum (11.53 GHz) for the smallest values of \(r_{1}\) and \(r_{2}\) (\(r_{1}\) = \(r_{2}\) = 7.6 nm). This result arises both from minimizing the total chain separation \(R\) and from maximizing the \(r_{M}/r_{1,2}\) ratio - as observed in Figure 4. The value of \(\Delta E\) decreases exponentially if either of the outer donors is moved outwards, i.e. if either \(r_{1}\) or \(r_{2}\) increases. However, at the same time, we can see that \(\Delta E\) does not change if both of the outer donors are shifted simultaneously in the same way - see the dashed lines where \(r_{1}+r_{2}\) is constant. This result is consistent with the Schrieffer-Wolff approximation for the superexchange in Eq. 2, where the dominant contribution to \(\Delta E\) comes from \(j_{1}j_{2}/2j_{M}\). While \(j_{1}\) and \(j_{2}\) change approximately exponentially with distance, their product does not change when \(r_{1}+r_{2}\) is kept constant.
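The invariance under a simultaneous shift can be illustrated with a short numerical sketch. The script below assumes a purely exponential distance dependence of the nearest-neighbour exchange with placeholder values for the prefactor and decay length (neither is taken from our FCI data), and it ignores the exchange oscillations relevant for the [110] direction; it only demonstrates that the leading Schrieffer-Wolff term \(j_{1}j_{2}/2j_{M}\) is unchanged when \(r_{1}+r_{2}\) is fixed.

```python
import numpy as np

# Toy model: assume a simple exponential distance dependence of the
# nearest-neighbour exchange, j(r) = j0 * exp(-r / ell).  j0 and ell are
# placeholder values, not fitted to the FCI data.
j0, ell = 1.0e3, 1.8   # ell in nm (illustrative decay length)

def j(r_nm):
    return j0 * np.exp(-r_nm / ell)

def superexchange(r1, r2, r_M):
    """Leading Schrieffer-Wolff term, Delta_E ~ j1*j2 / (2*j_M)."""
    return j(r1) * j(r2) / (2.0 * j(r_M))

r_M = 5.431          # nm, fixed middle-donor separation as in Figure 6
total = 16.0         # nm, fixed value of r1 + r2

for r1 in (7.0, 7.5, 8.0, 8.5):
    r2 = total - r1
    print(f"r1={r1:4.1f} nm  r2={r2:4.1f} nm  DeltaE={superexchange(r1, r2, r_M):.3e}")
# All printed values coincide: j(r1)*j(r2) depends only on r1 + r2 for an
# exponential j(r), so the leading superexchange term is unchanged.
```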
Figure 6: **Effect of donor placement on the magnitude of superexchange calculated using atomistic FCI along the [100] direction.** Keeping the separation of the middle donors fixed (\(r_{M}\) = 5.431\(nm\)), we change \(r_{1,2}\) to account for any deviation in donor placement. The superexchange \(\Delta E\) is maximum when \(r_{1}\) = \(r_{2}\). We also see that \(\Delta E\) remains constant when \(r_{1}+r_{2}\) is constant – see dashed lines.
Thus we can conclude that small shifts in the location of individual donors within the [100]-oriented chain can result in large changes in the total \(\Delta E\). However, interestingly, a simultaneous shift of both outer (or both middle) donors in the same direction has no impact on the superexchange (indicated by the black dashed lines). The situation is completely different for the [110] crystalline direction due to the oscillations of the two-donor exchange with distance, which break the smooth \(j_{1}j_{2}/2j_{M}\) dependence for varied \(r_{1},r_{2}\) when \(r_{1}+r_{2}\) is constant. Here \(\Delta E\) will change in an oscillatory manner if one or both middle donors are shifted.
### Impact of donor nuclear spins on the superexchange
Finally, we comment on the presence of nuclear spins in the donor system and how they can also impact the superexchange and coherent electron spin manipulation. The nuclear spins of phosphorus donor atoms couple with their electron spins through the hyperfine interaction. From the perspective of the electron spin, this can be treated as a small additional magnetic field dependent on the nuclear spin polarization. In the case of an electron singlet-triplet spin qubit, localized within the middle double donor dot, this hyperfine interaction can create a magnetic field gradient that mixes singlet and triplet states [17; 56]. For a 1P-1P system, the gradient is \(\sim\) a few millitesla, which gives rise to a Zeeman energy difference \(\Delta E_{z}\sim 5\times 10^{-4}\) meV between the donors when the two nuclear spins are oriented antiparallel (\(\Uparrow\Downarrow\)), and \(\Delta E_{z}\sim 0\) when the two nuclear spins are aligned in the same direction (\(\Uparrow\Uparrow\)).
To create well-defined eigenstates as well as singlet and triplet states, the exchange coupling \(\Delta E\) needs to dominate over \(\Delta E_{z}\). We can consider the 4-electron system discussed in this paper essentially as two pairs of singlet-triplet qubits - one defined between the two inner dots and one between the two outer dots. To avoid mixing of the singlet and triplet states, it is desirable for both \(j_{M}\) and \(\Delta E\) to be significantly greater than \(\Delta E_{z}\). From Figure 2 this is satisfied when the middle donor separation \(r_{M}\) is smaller than 10-15 nm, which includes most of the results presented in this work. The value of the superexchange \(\Delta E\) ultimately depends on both \(r_{M}\) and \(R\). From Figure 4 we can see that it is possible to design 4P configurations which belong to the regime where \(j_{1},j_{2}/j_{M}<0.3\) and at the same time satisfy \(\Delta E>\Delta E_{z}\). These considerations are however no longer relevant if we deterministically initialize the nuclear spins to all-parallel before any operation using local nuclear magnetic resonance (NMR) [10]. In this case, there is no magnetic field gradient and thus no limitations on the minimum values of \(j_{M}\) and \(\Delta E\).
## IV Conclusion
In this work, we have presented results for strong non-nearest-neighbour exchange coupling over a long distance (20-35 nm), with a possible extension of up to 45 nm, which has not been explored in donor quantum dots before. Using an atomistic full configuration interaction technique, we have calculated the eigenvalues of a 4-electron system with high-quality atomistic basis states and compared them with an effective spin Hamiltonian to assess the reliability of our computationally intensive calculations. We have shown that by placing the mediator donors one atomic position inwards compared to the symmetric chain, it is possible to enable coherent control of qubits separated along the [100] and [110] directions. Similarly to previous works on direct exchange, we observe a monotonic dependence of the superexchange on distance for donors separated along [100] and an oscillatory dependence along [110]. Importantly, we demonstrate that the superexchange can be modulated by an order of magnitude by purely electrostatic means using realistic experimental gate voltages. These results pave the way for the realization of fast singlet-triplet control for distant qubits. The calculations support the experimental realization of long-distance coupling of donor qubits in silicon.
|
2308.03456 | Bayesian inference from gravitational waves in fast-rotating,
core-collapse supernovae | Core-collapse supernovae (CCSNe) are prime candidates for gravitational-wave
detectors. The analysis of their complex waveforms can potentially provide
information on the physical processes operating during the collapse of the iron
cores of massive stars. In this work we analyze the early-bounce rapidly
rotating CCSN signals reported in the waveform catalog of Richers et al 2017,
which comprises over 1800 axisymmetric simulations extending up to about 10~ms
of post-bounce evolution. It was previously established that for a large range
of progenitors, the amplitude of the bounce signal, $\Delta h$, is proportional
to the ratio of rotational-kinetic energy to potential energy, T/|W|, and the
peak frequency, $f_{\rm peak}$, is proportional to the square root of the
central rest-mass density. In this work, we exploit these relations to suggest
that it could be possible to use such waveforms to infer protoneutron star
properties from a future gravitational wave observation, if the distance and
inclination are well known. Our approach relies on the ability to describe a
subset of the waveforms in the early post-bounce phase in a simple form
depending only on two parameters, $\Delta h$ and $f_{\rm peak}$. We use this
template to perform a Bayesian inference analysis of waveform injections in
Gaussian colored noise for a network of three gravitational wave detectors
formed by Advanced LIGO and Advanced Virgo. We show that, for a galactic event,
it is possible to recover the peak frequency and amplitude with an accuracy
better than 10% for about 80% and 60% of the signals, respectively, given known
distance and inclination angle. However, inference on waveforms from outside
the Richers catalog is not reliable, indicating a need for carefully verified
waveforms of the first 10 ms after bounce of rapidly rotating supernovae of
different progenitors with agreement between different codes. | Carlos Pastor-Marcos, Pablo Cerdá-Durán, Daniel Walker, Alejandro Torres-Forné, Ernazar Abdikamalov, Sherwood Richers, José Antonio Font | 2023-08-07T10:19:16Z | http://arxiv.org/abs/2308.03456v2 | # Bayesian inference from gravitational waves in fast-rotating, core-collapse supernovae
###### Abstract
Core-collapse supernovae (CCSNe) are prime candidates for gravitational-wave detectors. The analysis of their complex waveforms can potentially provide information on the physical processes operating during the collapse of the iron cores of massive stars. In this work we analyze the early-bounce rapidly rotating CCSN signals reported in the waveform catalog of Richers et al 2017. This catalog comprises over 1800 axisymmetric simulations extending up to about 10 ms of post-bounce evolution. It was previously established that for a large range of progenitors, the amplitude of the bounce signal, \(D\cdot\Delta h\), is proportional to the ratio of rotational-kinetic energy to potential energy, \(T/|W|\), and the peak frequency, \(f_{\rm peak}\), is proportional to the square root of the central rest-mass density, \(\sqrt{\rho_{\rm e}}\). In this work, we exploit these relations to suggest that it could be possible to use such waveforms to infer protoneutron star properties from a future gravitational wave observation, but only if the distance and inclination are well known and the rotation rate is sufficiently low. Our approach relies on the ability to describe a subset of the waveforms in the early post-bounce phase in a simple form - a master waveform template - depending only on two parameters, \(D\cdot\Delta h\) and \(f_{\rm peak}\). We use this template to perform a Bayesian inference analysis of waveform injections in Gaussian colored noise for a network of three gravitational wave detectors formed by Advanced LIGO and Advanced Virgo. We show that it is possible to recover the injected parameters, peak frequency and amplitude, with an accuracy better than 10% for more than 50% of the detectable signals (given known distance and inclination angle). However, inference on waveforms from outside the Richers catalog is not reliable, indicating a need for carefully verified waveforms of the first 10 ms after bounce of rapidly rotating supernovae of different progenitors with agreement between different codes.
## I Introduction
Core-collapse supernova (CCSN) explosions are prime astrophysical sources of transient gravitational waves (GWs; see e.g. [1; 2] and references therein). If the explosions occur sufficiently close (i.e. inside the galaxy or within the local system of satellite galaxies) GWs from CCSNe could be within reach [3; 4; 5; 6; 7; 8] of the current network of ground-based observatories Advanced LIGO[9], Advanced Virgo[10], and KAGRA[11]. The most recent optically-targeted search for GW transients associated with CCSNe within a source distance of 30 Mpc using data from the third observing run of Advanced LIGO and Advanced Virgo has been reported in [6]. No significant GW candidate was announced. The rate of galactic CCSNe is 1-3 per century (see [4; 12] and references therein) and about 1-10% of all events may have significant rotation (see discussion in [13]). The mechanism by which rapidly rotating progenitor cores are formed is also poorly understood, since magnetic and turbulent stresses can drive the star toward slower rigid rotation (e.g., [14; 15]). Despite their low intrinsic event rate, CCSNe hold great scientific interest as the analysis of their complex waveforms can potentially provide valuable information about the underlying physical processes operating during the gravitational collapse of the iron cores of massive stars.
At the onset of collapse the iron core, where no nuclear burning is taking place, is very close to axisymmetry. If the core is rotating sufficiently quickly, the dominant deviation from spherical symmetry below the silicon burning shell is produced by its rotation, which imprints an oblate shape into the core, contributing to its mass quadrupole. The collapse produces an acceleration of the infalling material and, therefore, a non-zero second time derivative of the mass quadrupole, inducing GW emission. The peak GW amplitude is reached right before bounce, at which time the nuclear equation of state (EOS) stiffens. This sharply halts the collapse and produces a small re-expansion and a series of oscillations of the newly formed proto-neutron star (PNS), which are visible in the GW signal for a few milliseconds. The maximum GW strain is related to the degree of oblateness of the core at bounce, which is in turn related to the rotation rate; hence, for non-rotating progenitors the GW
emission at bounce is zero. Numerical simulations have shown that the frequency of the bounce oscillations is in the range 100-1000 Hz [16; 17]. These oscillations are the result of the non-linear excitation of an axisymmetric \(f\)-mode of the PNS whose frequency depends on the local sound speed in this region [18]. Beyond \(\sim 10\,\)ms after bounce, the growth of hydrodynamical instabilities breaks axisymmetry (see e.g. [19]) and the GW signal, with a typical duration of \(\sim 1\) s, becomes stochastic. In this work we focus our attention on the bounce GW signal.
Since the seminal numerical study of [20], the bounce signal has been extensively studied in a number of works [16; 17; 18; 21; 22; 23; 24]. Richers et al. [17] performed over 1800 2D (axisymmetric) numerical simulations of CCSNe that include the collapse phase and up to about 10 ms of the post-bounce evolution. For such short post-bounce timescales, axisymmetry is a valid approximation because all possible non-axisymmetric instabilities (such as convection and the standing accretion shock instability) develop on longer timescales. The limiting factor is probably the occurrence of prompt convection, which for typical values of the Brunt-Vaisala frequency (\(\sim 100\) ms) may develop on timescales of \(\gtrsim 10\) ms. Furthermore, the post-bounce timescale covered in [17] is much shorter than the characteristic timescale on which the PNS cools due to diffusing neutrinos (\(\gtrsim 100\) ms), which makes the incorporation of neutrino transport unnecessary for accurate modelling, although the equation of state is still sensitive to the deleptonization of infalling matter by neutrino emission. Regarding GW emission, the axisymmetry of the system implies that the GW signal is completely polarized. The study of [17] has shown that, for a \(12M_{\odot}\) progenitor and a large range of EOS, the amplitude of the GW signal at bounce is proportional to the ratio of rotational-kinetic energy to gravitational potential energy, \(T/|W|\), and the peak frequency is proportional to the square root of the central rest mass density, \(\sqrt{\rho_{\rm c}}\). These relations break down for sufficiently large rotation rates, \(T/|W|>0.06\), in which case centrifugal forces have an impact on the dynamics of the collapse and the bounce. Such high rotation rates are typically predicted only for progenitors of long gamma-ray bursts in low-metallicity environments (see e.g. [25]). The relative rarity of extremely energetic supernovae (e.g., [26]) suggests that much more common are progenitors in which only a part of the rotation is retained by the core, leading to values of \(T/|W|\) well below \(0.06\) (see e.g. [14]). Therefore, for the most common rotating CCSNe these two phenomenological laws could allow us to infer some PNS properties from a future GW observation, which is the main aim of this investigation.
Efforts to infer the rotational properties of newborn PNS from the GW signal have been made in previous works. In [27] the authors tried to estimate the distribution of angular momentum in the PNS from the sign of the second peak in the bounce signal. A matched-filtering analysis method to infer the total angular momentum of the core with \(20-30\%\) accuracy using CCSN bounce signals injected in Gaussian detector noise was developed by [23]. The main caveats of this work are the choice of source orientation (optimal), the use of simulation waveforms as templates (instead of parametric templates) and the inability of the method to estimate errors associated with the measurements. Additionally, several works [28; 29; 30; 31] have developed data analysis pipelines based on principal component analysis (PCA) of the signal, capable of performing Bayesian model selection. In particular, for sufficiently close sources, these investigations were able to distinguish between neutrino-driven and magnetorotational explosions, allowing the presence of rotation in CCSNe to be assessed. Similar classification approaches and outcomes, but based on machine-learning techniques, have been developed by [32; 33]. For a limited set of simulations, [34] were able to infer details on the progenitor core structure by combining information from the bounce and post-bounce GW signal. [35] performed a PCA decomposition of the simulation catalog of [17] to determine the dominant features of the waveforms and create a map between the measured properties of the waveforms and the physical properties of the progenitor star (i.e. \(T/|W|\) and peak frequency). Their analysis shows that this is possible for galactic CCSNe with current GW detectors and up to 50 kpc with 3rd generation detectors. As in the case of [23], they use optimally oriented sources for their analysis, which, as we show in this work, limits the ability of the results to generalize to observations with unknown orientation. Finally, an alternative signature of rapid rotation is the presence of circular polarization in the post-bounce signal [36], which could be detectable within 5 kpc by current detectors [37].
The starting assumption of our work is the multi-messenger detection of a nearby CCSN. The first indication of the occurrence of one such event would be provided by the simultaneous observation of neutrinos and GWs. Neutrino detectors such as SuperK [38] and IceCube [39] are able to detect MeV neutrinos from CCSNe at distances of about 100 kpc. In the case of an event, the network of neutrino detectors would emit an alert by means of the Supernova Early Warning System (SNEWS [40; 41]). These alerts provide estimates of the time of bounce within 10 s windows. However, a detailed analysis of the data from neutrino detectors should allow the time of bounce to be estimated within 10 ms [42; 43]. On the other hand, online GW burst searches such as Coherent Wave Burst (CWB [44]), used in the current LIGO-Virgo-KAGRA network of advanced GW detectors, are capable of detecting GWs from nearby CCSNe for the case of neutrino-driven explosions and up to \(\sim 100\) kpc for extremely fast-rotating progenitors [5]. In the case of a successful GW detection, it should be possible to obtain an accurate measurement of the time of bounce to within a few ms. Later on, on a timescale of minutes to days after bounce, the electromagnetic signature of the supernova would allow an accurate determination of the sky location and of the distance to the source if linked
to a known presupernova star. Neutrino detections can provide complementary direction and distance estimates (e.g. [41; 45]).
For the purpose of this paper, we will assume that a GW detection of a CCSN has been made and that an accurate enough measurement of the time of bounce, sky location and distance was possible. Under these circumstances, our goal is to develop a data analysis framework to infer the properties of the characteristic bounce signal, signature of the presence of rotation, as a first step towards inferring the rotation rate and mean density of the PNS. For this purpose, in Section II we develop a parametric waveform template based on the numerical CCSN simulations of [17]. In section III we describe the additional numerical waveforms used for testing. In Section IV we introduce the Bayesian method that we use to perform the parameter estimation of the signal. The results of this analysis for the case of a three-detector network with Gaussian colored noise are shown in Section V. Finally, our conclusions are presented in Section VI.
## II Bounce signal characterization
The work of [17] shows that for a large range of rotation rates (\(T/|W|<0.06\)), the two most relevant parameters characterizing the waveform at bounce are the peak amplitude (normalized by the distance to the source), \(D\cdot\Delta h\), and the peak frequency, \(f_{\rm peak}\). The peak amplitude is measured as the difference between the highest and lowest points of the bounce signal strain, assuming optimal orientation. Correspondingly, the peak frequency is the dominant frequency of the signal measured in the first 6 ms following the bounce. We show in this section that it is possible to describe the waveforms in the early post-bounce phase in a simple form - a master waveform template - depending only on these two parameters: \(D\cdot\Delta h\) and \(f_{\rm peak}\).
### Waveform selection and renormalisation
We employ data from the 1824 numerical-relativity simulations carried out in [17]. (Note that this catalog is publicly available in [46].) These simulations were performed using the CoCoNuT code [21] in general relativity (XCFC approximation [47]) and a leakage scheme [22] for the neutrino transport. All simulations were performed with the same \(12M_{\odot}\) progenitor star, but with different initial rotational profiles and rates, and using 18 different equations of state (EOS) (see [17] for additional details). The simulations focused on the bounce signal; therefore, they were run only up to 50 ms after bounce.
For each simulation, the catalog provides the distance-scaled strain for an optimally oriented source, \(D\cdot h_{+}^{\rm opt}\), as a function of time after core bounce \(t-t_{\rm b}\), the peak frequency of the post-bounce GW oscillations \(f_{\rm peak}\), the ratio of rotational kinetic energy to gravitational potential energy of the inner core at bounce \(T/|W|\), and the maximum initial pre-collapse rotation rate \(\Omega_{0}\). The minimum and maximum values of the first and second GW strain peaks allow us to obtain the peak amplitude, \(D\cdot\Delta h\). Since the system is axisymmetric, the strain for different source orientations can be computed simply as
\[h_{+} = h_{+}^{\rm opt}\,\sin^{2}\theta\,,\] \[h_{\times} = 0\,, \tag{1}\]
where \(\theta\) is the inclination angle between the rotational axis of the core and the observer's line of sight. Note that in the coordinate system of the source there is no dependence on the azimuthal angle because of axisymmetry. Note also that the strain in the coordinate system of the observer depends on an additional polarization angle \(\psi\) measuring the orientation of the projection of the rotation axis on the sky with respect to the celestial meridian.
Some of the 1824 simulations performed in [17] do not undergo core collapse within the simulation time as a consequence of the initial values of the parameters describing the progenitors. For those models, the rotational kinetic energy is large enough to yield sufficient centrifugal support at the onset of collapse to prevent them from collapsing. Those waveforms are listed in Table 3 of [17] and they have been excluded from our analysis. In addition, [17] observed that the waveforms of extremely fast-rotating cores show no specific trend with \(D\cdot\Delta h\) and \(f_{\rm peak}\). However, if rotation is sufficiently slow (\(T/|W|<0.06\)), \(D\cdot\Delta h\) increases linearly with \(T/|W|\) (see Fig. 6 of [17]), due to the quadrupolar deformation of the core induced by rotation. Taking this observation into account, we restrict the waveforms in our analysis to those for which \(0.00<T/|W|<0.06\). We note
Figure 1: Waveform amplitude, \(D\cdot\Delta h\), vs peak frequency, \(f_{\rm peak}\), for all models in the waveform catalog of [17]. Red (blue) markers indicate models included (excluded) in our analysis. The two kind of models occupy regions of the parameter space that are mostly disjoint.
that within this limit there is no particular dependence of \(f_{\rm peak}\) on \(T/|W|\). As discussed in Section I, most CCSN progenitors are expected to be in this range, and only those capable of producing long gamma-ray bursts may be above the upper limit. However, even with this limitation it could be possible to devise algorithms able to determine whether a particular observation is in this linear regime or not, based purely on observations. Fig. 1 shows the values of \(f_{\rm peak}\) and \(D\cdot\Delta h\) for all the models in [17]. Models with \(T/|W|<0.06\) (used in our analysis) occupy a region of the parameter space mostly disjoint from that of models with \(T/|W|>0.06\) (excluded from our analysis). Therefore, measuring \(f_{\rm peak}\) and \(D\cdot\Delta h\) could help determine whether a signal is in the linear regime or not. We refer to this selection of waveforms as the Richers et al. catalog hereafter.
Each of the Richers et al. catalog waveforms has been extracted from a particular simulation. The waveform morphology at bounce, while similar, is still somewhat diverse (see left panel of Fig. 2 for a few examples). The work of [17] suggests that the main parameters describing the waveforms are \(D\cdot\Delta h\) and \(f_{\rm peak}\). Hence, it seems natural to normalize all the waveforms by these two parameters to suppress as much as possible all implicit dependencies of each individual waveform. To do this, we first align the time of bounce for all waveforms and renormalize the value of the strain with \(D\cdot\Delta h\). Due to the linear dependence of \(D\cdot\Delta h\) on \(T/|W|\), the result of this rescaling is the effective elimination of the dependence of each waveform on rotation. Additionally, we rescale the time by multiplying by \(f_{\rm peak}\). Given the approximately linear dependence of \(f_{\rm peak}\) on \(\sqrt{\rho_{\rm c}}\) (\(\rho_{\rm c}\) being the mean central density within the 0.2 ms after bounce), this rescaling effectively removes the dependence of the waveform on \(\rho_{\rm c}\).
Let us denote by \(h_{n}(t)\) the strain of the \(n\)-th waveform in the catalog at optimal orientation and at a distance of 10 kpc, with \(n=1,...,N_{T}\) and \(N_{T}\) the total number of waveforms. Its normalized version is thus defined as:
\[H_{n}(\hat{t})=\frac{D\cdot h_{n}((t-t_{\rm b})\cdot f_{\rm peak}^{n})}{D\cdot \Delta h_{n}},\qquad\hat{t}\equiv(t-t_{\rm b})\cdot f_{\rm peak}^{n}, \tag{2}\]
where \(D\)-\(\Delta h_{n}\) and \(f_{\rm peak}^{n}\) denote the corresponding values for the \(n\)-th waveform. Note that we keep the \(D\) explicitly in front of the strain quantities to have combinations of quantities that are source properties independent of the distance. The right panel of Fig. 2 shows the normalized waveforms, \(H_{n}(\hat{t})\), resulting from rescaling the waveforms in the left panel. Once the dominant dependencies of the waveform are eliminated, the shape of the main peak and the first few oscillations match closely. At late times, differences become larger due to the onset of hydrodynamical instabilities in the PNS (e.g. convection) that have a stochastic behaviour. Nevertheless, overall the overlap is acceptable for such a simple two-parameter normalization.
The most significant distinction between different normalized waveforms is the presence of spurious, high-frequency oscillations in some of them, in particular for models with slow rotation rates. The source of these oscillations is numerical noise in the calculation of the GW signal using the quadrupole formula. In the limit of decreasing rotation, \(\Omega_{0}\to 0\), the object becomes spherically symmetric so the strain should vanish. However, computationally this implies perfect numerical cancellation of the integrals over the source necessary to evaluate the quadrupole formula, which inevitably leads to numerical noise. This noise, which is always present, is more evident
Figure 2: Strain (left panel) and normalized strain (right panel) for five illustrative examples. These waveforms have been selected from simulations that are as different as possible from each other to display the diversity of waveforms. The right panel shows the universality of the shape of the normalized waveform. The simulation model names have the form AXwY_Z, where X indicates the degree of differential rotation, Y the value of \(\Omega_{0}\) in rad/s, and Z the EOS used (see [17] for more details).
in signals with lower amplitudes because the strain is comparable to the noise. The normalization factor \(1/(D\cdot\Delta h)\) amplifies the numerical noise in the case of slowly rotating progenitors. By visual inspection we have checked that this numerical noise ceases to be relevant for cases with \(\Omega_{0}\geq 3\) rad/s. Therefore, all waveforms not fulfilling this requirement have also been excluded from our analysis.
### Master Template
After the model selection discussed in the previous section we are left with a total of \(N_{T}=402\) different waveforms whose normalized strain, \(H_{n}(\hat{t})\), depends weakly on the particular simulation. We use the average value of all \(H_{n}(\hat{t})\) to create a master function \(H_{\rm T}(\hat{t})\) that can be utilized for constructing waveform templates through the straightforward procedure of rescaling by \(D\cdot\Delta h\) and \(f_{\rm peak}\) (in addition to the dependence on \(D\) and \(\theta\)).
The first necessary step is thus the averaging of all waveforms. The time coordinate of the signals in the waveform catalog is different depending on the simulation and after the rescaling this becomes more evident in the normalized signals. Therefore, before doing any averaging it is necessary to perform an interpolation to a common grid of normalized times, \(\hat{t}\). We create this common grid within the interval \(\hat{t}\in[-4,6]\), since for \(\hat{t}<-4\) there are no events of interest and for \(\hat{t}>6\) the divergence of the waveforms is too severe. We use a cubic interpolation with 1000 points in the interval.
For all waveforms we compute the average and standard deviation of \(H_{n}\) for each value of \(\hat{t}\), obtaining a triplet \(\hat{t}-\bar{H}-\Sigma\) that defines the master waveform:
\[H_{T}(\hat{t}) \equiv\frac{1}{N_{T}}\sum_{n=1}^{N_{T}}H_{n}(\hat{t})\,, \tag{3}\] \[\Sigma^{2}(\hat{t}) \equiv\frac{1}{N_{T}}\sum_{n=1}^{N_{T}}|H_{n}-H_{T}|^{2}\,. \tag{4}\]
The standard deviation \(\Sigma\) accounts for the error made when performing inference over the master template. However, all our estimations have been done without considering this error. The master waveform template is shown in Fig. 3 together with the \(2\Sigma\) interval corresponding to a \(\sim 95\%\) confidence level.
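A minimal sketch of the averaging procedure is given below. It assumes that each normalized waveform covers the interval \(\hat{t}\in[-4,6]\) and uses a standard cubic interpolation; it is meant only to illustrate Eqs. (3)-(4), not to reproduce the exact implementation used in this work.

```python
import numpy as np
from scipy.interpolate import interp1d

def build_master_template(waveforms, n_points=1000, t_hat_min=-4.0, t_hat_max=6.0):
    """Average normalized waveforms on a common grid, as in Eqs. (3)-(4).

    waveforms : list of (t_hat, H) pairs, one per normalized catalog waveform,
                each assumed to cover [t_hat_min, t_hat_max].
    Returns the common grid, the master template H_T and the spread Sigma.
    """
    t_grid = np.linspace(t_hat_min, t_hat_max, n_points)
    H_all = np.empty((len(waveforms), n_points))
    for i, (t_hat, H) in enumerate(waveforms):
        H_all[i] = interp1d(t_hat, H, kind="cubic")(t_grid)
    H_T = H_all.mean(axis=0)                              # Eq. (3)
    Sigma = np.sqrt(np.mean((H_all - H_T) ** 2, axis=0))  # Eq. (4)
    return t_grid, H_T, Sigma
```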
The master template is expected to apply to a wide range of parameters and to allow us to construct a parameterized template, suitable for performing Bayesian inference. Our phenomenological waveforms depend on \(D\cdot\Delta h\), \(f_{\rm peak}\), \(D\) and \(\theta\), and can be expressed as:
\[h_{+}(t)=H_{\rm T}(t\cdot f_{\rm peak})\,\frac{D\cdot\Delta h\cdot\sin^{2} \theta}{D},\ \ \ \ h_{\times}(t)=0. \tag{5}\]
We note that to keep explicit the dependence on the combination \(D\cdot\Delta h\) we have to divide by \(D\). Additionally, for a fixed distance and \(f_{\rm peak}\), the strain amplitude is proportional to the combination \(D\cdot\Delta h\cdot\sin^{2}\theta\). In the next section we discuss the importance of this combination as a parameter.
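For reference, a template of the form of Eq. (5) can be evaluated from the tabulated master function as in the short sketch below (the function and argument names are illustrative).

```python
import numpy as np
from scipy.interpolate import interp1d

def template_h_plus(t, f_peak, D_delta_h_sin2, distance_cm, t_grid, H_T):
    """Plus-polarization template of Eq. (5); h_x is identically zero.

    t               : times relative to bounce [s]
    f_peak          : peak frequency [Hz]
    D_delta_h_sin2  : the combination D*Delta_h*sin^2(theta) [cm]
    distance_cm     : source distance [cm]
    t_grid, H_T     : master template, e.g. from build_master_template()
    """
    master = interp1d(t_grid, H_T, bounds_error=False, fill_value=0.0)
    return master(np.asarray(t) * f_peak) * D_delta_h_sin2 / distance_cm
```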
## III Core Collapse Waveforms
In addition to the waveforms from [17] we use in this work a series of GW templates from other authors that serve as a test base for our Bayesian inference procedure.
Figure 3: Master template. Left panel shows \(H_{\rm T}(\hat{t})\) for the master template (red line) together with the 402 waveforms used for its construction. Right panel shows the same master template with the 2-\(\sigma\) (\(\sim 95\%\)) confidence interval computed using \(\Sigma(\hat{t})\) (dashed lines).
The waveforms considered correspond to some models from [48; 49; 50; 51; 52; 53; 54] and their details are reported in Table 1. This selection is not meant to be exhaustive with respect to all existing numerical simulations of fast-rotating cores and serves only as a testbed of our approach. However, note that the minimum value of \(\Omega_{0}\) that we admit from the Richers catalog is \(\Omega_{0}=3\,\)rad/s, but the maximum value for the CC waveforms is \(\Omega_{0}=2\,\)rad/s. Although none of these supernovae rotate extremely quickly, and we expect the template to be suitable throughout, future multidimensional simulations with more rapid rotation would benefit this analysis.
The simulations from [48] were performed using the FORNAX code [55] with multigroup M1 neutrino transport. [49; 50; 52; 54] used the AENUS-ALCAR code [56] with multigroup M1 neutrino transport. [53] uses the FLASH code with IDSA neutrino transport [57; 58]. [51] uses the CoCoNuT code [21] and the fast multigroup neutrino transport method [59]. All the codes, except CoCoNuT, use pseudo-relativistic gravity [60]. Gravity in CoCoNuT is general relativistic, using the XCFC approximation [47]. Gravitational waves were extracted using the quadrupole formula in all cases [61; 62].
Among the models there are full three-dimensional simulations (3D) and axisymmetric simulations (2D). In contrast to the simulations of [17], these models include the post-bounce evolution for several hundred milliseconds, showing additional features in the gravitational wave signal such as g-mode excitation and, in some cases, the standing accretion shock instability. However, these occur on timescales longer than the window used in the analysis of the bounce signal and have little consequence for the present study. The values of \(f_{\rm peak}\) and \(D\cdot\Delta h\) are estimated from the waveforms following the same procedure outlined in [17].
## IV Methodology
### Bayesian inference
The inference of parameters of our CCSN waveforms is achieved through Bayesian analysis, which is briefly reviewed next.
Let \(H\) be an uncertain hypothesis (the presence of a bounce signal that can be modelled by our master template), \(\Theta\) the set of source parameters that we want to constrain or estimate and \(d\) the observational data. Bayes' theorem states that (see e.g. [63; 64])
\[p(\Theta|d,H)=\frac{p(\Theta|H)\,p(d|\Theta,H)}{p(d|H)}, \tag{6}\]
where \(p(\Theta|d,H)\) is the probability distribution of the parameters \(\Theta\) given both the data and the hypothesis
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline Source & Model & Dim. & EOS & \(M_{\rm ZAMS}\) [\(M_{\odot}\)] & \(\Omega_{0}\) [rad/s] & \(D\cdot\Delta h\) [cm] & \(f_{\rm peak}\) [Hz] \\ \hline Morozova et al. (2018) [48] & M13\_SFHo\_rotating & 2D & SFHo & 13.0 & 0.2 & 3.81 & 538.0 \\ \hline Bugli et al. (2020) [49] & l1\_r2M & 2D & LS220 & 35.0 & 2.0 & 49.9 & 821.3 \\ & l2\_r2M & 2D & LS220 & 35.0 & 2.0 & 48.3 & 800.1 \\ & l3\_r2M & 2D & LS220 & 35.0 & 2.0 & 46.7 & 803.1 \\ & l4\_r2M & 2D & LS220 & 35.0 & 2.0 & 45.7 & 802.2 \\ & hydro\_2d & 2D & LS220 & 35.0 & 2.0 & 42.5 & 819.0 \\ & hydro\_3d & 3D & LS220 & 35.0 & 2.0 & 163.8 & 894.6 \\ \hline Obergaulinger \& Aloy (2020) [50] & 35OC-RO & 2D & LS220 & 35.0 & 2.0 & 71.0 & 734.9 \\ & 35OC-RO2 & 2D & LS220 & 35.0 & 2.0 & 42.6 & 816.5 \\ & 35OC-RP2 & 2D & LS220 & 35.0 & 2.0 & 42.6 & 640.7 \\ & 35OC-RP3 & 2D & LS220 & 35.0 & 2.0 & 40.1 & 713.6 \\ & 35OC-RP4 & 2D & LS220 & 35.0 & 2.0 & 44.5 & 709.7 \\ & 35OC-Rs & 2D & LS220 & 35.0 & 2.0 & 54.3 & 797.4 \\ & 35OC-RRw & 2D & LS220 & 35.0 & 4.0 & 95.0 & 717.6 \\ \hline Powell et al. (2020) [51] & m39 & 3D & LS220 & 39.0 & 0.54 & 142.0 & 739.1 \\ \hline Obergaulinger \& Aloy (2021) [52] & O & 3D & LS220 & 35.0 & 2.0 & 200.4 & 809.1 \\ & W & 3D & LS220 & 35.0 & 2.0 & 237.1 & 734.6 \\ \hline Pan et al. (2021) [53] & s40\_FR & 3D & LS220 & 40.0 & 1.0 & 212.6 & 897.5 \\ \hline Obergaulinger et al. (2022) [54] & A08 & 2D & SFHo & 8.0 & 0.2 & 2.7 & 966.9 \\ & A13 & 2D & SFHo & 13.0 & 0.2 & 4.1 & 655.3 \\ & A20 & 2D & SFHo & 20.0 & 0.2 & 5.0 & 816.7 \\ & A39 & 2D & SFHo & 39.0 & 0.2 & 7.3 & 964.3 \\ \hline \hline \end{tabular}
\end{table}
Table 1: List of CCSN simulations used for testing the Bayesian inference algorithm. \(M_{\rm ZAMS}\) refers to the zero-age main-sequence mass of the progenitor star.
(posterior distribution), \(p(\Theta|H)\) gives the expectation of the parameters given the hypothesis (prior distribution), \(p(d|H)\) is the expectation of the observed data given the hypothesis (evidence), and \(p(d|\Theta,H)\) is the probability distribution of observing the data, given the parameters and hypothesis (the likelihood, \(\mathcal{L}\)).
We use the Bayesian inference library Bilby [65], which is an open-source, MIT-licensed Python code developed by the LVK Collaboration, that provides the parameter estimation infrastructure that we need to analyze our data. We model the observed signal at each detector, \(s(t)\), as a linear combination of the actual GW signal, \(h(t)\), and the detector's noise, \(n(t)\); i.e. \(s(t)=h(t)+n(t)\). Based on this assumption, we use a Gaussian likelihood of the form [64]
\[\log\mathcal{L}=-\frac{2}{T}\sum_{\text{IFOs}}\sum_{i=1}^{N/2}\frac{|\hat{s}_{ i}-\hat{h}_{i}|^{2}}{S_{n\,i}}, \tag{7}\]
\(T\) being the duration of the signal, \(N\) the number of samples, \(\hat{s}_{i}\) the Fourier transform of the signal, \(\hat{h}_{i}\) the Fourier transform of the template and \(S_{ni}\) the power spectral density (PSD) of the detector's noise. The summation sign _IFOs_ means a sum over all GW interferometers in the network. The Fourier transforms are computed from the discrete Fourier transform as
\[\hat{h}_{k}=\sum_{j=1}^{N}h(t_{j})e^{-\frac{2\pi i}{N}(j-1)(k-1)}\Delta t\quad ;\quad k=1,...,N. \tag{8}\]
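The following sketch shows one possible NumPy implementation of the likelihood of Eq. (7), approximating the Fourier transforms of Eq. (8) by an FFT multiplied by the sampling time. The variable names and the exact handling of the frequency bins are our own choices and may differ from the Bilby internals actually used in the analysis.

```python
import numpy as np

def whittle_log_likelihood(strains, templates, psds, dt):
    """Gaussian log-likelihood of Eq. (7), summed over interferometers.

    strains, templates, psds : lists (one entry per interferometer) of the
        time-domain data, the time-domain template, and the one-sided PSD
        evaluated on the rfft frequency grid, respectively.
    dt : sampling time [s]
    """
    log_l = 0.0
    for s, h, S_n in zip(strains, templates, psds):
        N = len(s)
        T = N * dt
        s_f = np.fft.rfft(s) * dt      # discrete approximation of Eq. (8)
        h_f = np.fft.rfft(h) * dt
        # positive-frequency bins only, as in the sum over i = 1..N/2
        log_l += -2.0 / T * np.sum(np.abs(s_f[1:N // 2] - h_f[1:N // 2]) ** 2
                                   / S_n[1:N // 2])
    return log_l
```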
We consider the waveform template model described in Section II (see Eq. (5)), which, in principle, depends on four intrinsic parameters: \(\{D\cdot\Delta h,f_{\text{peak}},D,\theta\}\). In a general situation, the actual GW strain observed in a detector depends on the source location and on the polarization of the waves, which is taken into account by defining the so-called antenna response, \(F(\alpha,\delta,\psi)\), such that
\[h=F_{+}(\alpha,\delta,\psi)h_{+}+F_{\times}(\alpha,\delta,\psi)h_{\times}. \tag{9}\]
Here, \(\alpha\) and \(\delta\) are the right ascension and declination of the source, respectively, and \(\psi\) is the polarization angle, which describes the orientation of the projection of the progenitor's rotation axis onto the plane of the sky (see discussion in Section II.1). Bilby combines the source location and orientation with detector characteristics to calculate \(F_{+}\) and \(F_{\times}\), and therefore determine each detector's observed waveform.
For our analysis we will consider that the sky location (\(\alpha\) and \(\delta\)), the distance \(D\) and the time of bounce \(t_{\text{b}}\) are known to arbitrary accuracy (see discussion in Section I). Hence, they are kept fixed to the values of the injection. The waveform amplitude in Eq. (5) is proportional to the combination \(D\cdot\Delta h\cdot\sin^{2}\theta\), so the parameters \(D\cdot\Delta h\) and \(\theta\) are degenerate. Therefore, we use the combination \(D\cdot\Delta h\cdot\sin^{2}\theta\) for our parameter estimation. We discuss in Section VI possible ways of breaking this degeneracy. This means that we will perform our Bayesian inference for three unknown parameters, \(\Theta=\{D\cdot\Delta h\cdot\sin^{2}\theta,f_{\text{peak}},\psi\}\). In practice this is done by performing inference for \(D\cdot\Delta h\) at fixed \(\theta\) (different for each injection) and then computing \(D\cdot\Delta h\cdot\sin^{2}\theta\) itself. For \(D\cdot\Delta h\) we consider a uniform prior on its logarithm in the interval \([0.01,1000]\) (recall that \(D\cdot\Delta h\) is intrinsic to the source and does not depend on the distance). For \(f_{\text{peak}}\) we use a uniform prior in the range \([300,1500]\) Hz, while for \(\psi\) we use a uniform prior in the range \([0,\pi]\).
All waveforms of our catalog are injected in simulated, zero-mean, coloured Gaussian noise using the power spectral densities (PSD) of Advanced LIGO and Advanced Virgo [66, 67]. We use the three detector network formed by Advanced Virgo and the two Advanced LIGO detectors. The signals are injected in 1 s long segments with a sampling rate of 8192 Hz. Since the signals are shorter than 1 s, they are padded with zeros to fill the length. We use Bilby's dynesty sampler with 2000 live points.
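A minimal sketch of the corresponding Bilby setup is shown below. The parameter labels are our own, the prior classes follow Bilby's standard interface, and the construction of the custom likelihood implementing Eq. (7) for the three-detector network is omitted; the sampler call is therefore left commented out.

```python
import numpy as np
import bilby

# Priors described in the text: log-uniform in D*Delta_h, uniform in
# f_peak and psi.  The parameter names are our own labels.
priors = bilby.core.prior.PriorDict()
priors["D_delta_h"] = bilby.core.prior.LogUniform(0.01, 1000.0, name="D_delta_h")
priors["f_peak"] = bilby.core.prior.Uniform(300.0, 1500.0, name="f_peak")
priors["psi"] = bilby.core.prior.Uniform(0.0, np.pi, name="psi")

# `likelihood` is assumed to be a bilby.Likelihood subclass evaluating
# Eq. (7) for the three-detector network; its construction is not shown.
# result = bilby.run_sampler(likelihood=likelihood, priors=priors,
#                            sampler="dynesty", nlive=2000,
#                            outdir="outdir", label="bounce_injection")
```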
To measure the quality of our inferred posteriors we compute the Bayes factor, \(\mathcal{B}\), defined as
\[\log\mathcal{B} = \log P(d|H)-\log P(d|H_{0}) \tag{10}\] \[= -\frac{2}{T}\sum_{\text{IFOs}}\sum_{i=1}^{N/2}\left(\frac{|\hat{ s}_{i}-\hat{h}_{i}|^{2}-|\hat{s}_{i}|^{2}}{S_{n\,i}}\right)\,,\]
where \(P(d|H)\) is the evidence of the signal and \(P(d|H_{0})\) is the evidence of the noise, defined as the expectation of the observed data given the hypothesis that there is no signal (null hypothesis, \(H_{0}\)), i.e. that there is only noise. A value of \(\log\mathcal{B}<0\) implies that the observed data is more consistent with being noise than a signal; however, \(\log\mathcal{B}>0\) does not immediately imply that there is a matching signal, but rather just that the signal model describes the data better than Gaussian noise alone. In the next section we discuss the significance of the values of \(\log\mathcal{B}\) in terms of the matched-filter signal-to-noise ratio (SNR) of the detector network, which is defined as [68]
\[\text{network SNR}=\sqrt{\sum_{\text{IFOs}}\text{SNR}^{2}}. \tag{11}\]
SNR here is the matched filter SNR of each detector defined as
\[\text{SNR}=\sqrt{\frac{4}{T}\sum_{i}^{N/2}\frac{|\hat{h}_{i}|^{2}}{S_{ni}}}, \tag{12}\]
where the Fourier transforms are computed as in Eq. (8), with \(\Delta t\) being the sampling time of the signal.
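For completeness, Eqs. (11)-(12) can be evaluated with a few lines of NumPy, as in the sketch below (again assuming the frequency-domain conventions of Eq. (8); variable names are illustrative).

```python
import numpy as np

def matched_filter_snr(template, psd, dt):
    """Single-detector SNR of Eq. (12) for a time-domain template."""
    N = len(template)
    T = N * dt
    h_f = np.fft.rfft(template) * dt   # Fourier transform as in Eq. (8)
    return np.sqrt(4.0 / T * np.sum(np.abs(h_f[1:N // 2]) ** 2 / psd[1:N // 2]))

def network_snr(templates, psds, dt):
    """Quadrature sum over the interferometers, Eq. (11)."""
    return np.sqrt(sum(matched_filter_snr(h, S, dt) ** 2
                       for h, S in zip(templates, psds)))
```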
### Error estimation
If the Bayesian analysis is carried out with injections of waveforms generated with the master template, there will always be a set of parameters of the template that exactly matches the injected waveform. In those cases
the error in the inferred values is caused by the detector noise alone and can be estimated from the posterior distributions. However, when the analysis is done with waveforms different from the master template, there is always a mismatch between the waveform and the template, even in the absence of noise. In this case, the error in the inferred values is not only limited by the noise of the detectors but there is, in addition, an intrinsic source of error associated with this mismatch. This source of error is connected to the modeling of the waveform, or, from a different point of view, to the existence of unmodelled degrees of freedom that are not considered in our simple master template waveform. This is the case when we consider waveforms from the Richers et al. catalog and those from Sec. III.
In order to compute the full error for our inferred values, we differentiate between the intrinsic parameters of the master template, \(\Theta_{\rm int,MT}=\{f_{\rm peak,MT},(D\!\cdot\!\Delta h\cdot\sin^{2}\theta )_{\rm MT}\}\), and the real values of the waveform, \(\Theta_{\rm int,real}=\{f_{\rm peak,real},(D\!\cdot\!\Delta h\cdot\sin^{2} \theta)_{\rm real}\}\) (i.e., those of the signal actually measured in the detector). There will be error in the inferred MT parameters due to detector noise, and additional error in the inferred real parameters due to the spread between the actual model waveforms and the template waveform. The posterior probability for the inference of the real parameters is
\[p(\Theta_{\rm real}|d,H)=p(\Theta_{\rm int,real}|\Theta_{\rm int,MT},H^{\prime} )p(\Theta_{\rm MT}|d,H) \tag{13}\]
where \(\Theta_{\rm real}\) and \(\Theta_{\rm MT}\) are the full sets of parameters including the corresponding intrinsic parameters. \(p(\Theta_{\rm int,real}|\Theta_{\rm int,MT},H^{\prime})\) is the probability of a waveform having intrinsic parameters \(\Theta_{\rm int,real}\) given a match to a template with parameters \(\Theta_{\rm int,MT}\) and a model \(H^{\prime}\) for the relation between \(\Theta_{\rm int,real}\) and \(\Theta_{\rm int,MT}\) (naturally, we use as model (\(H^{\prime}\)) that \(\Theta_{\rm int,real}=\Theta_{\rm int,MT}\).). This is precisely the term including the information about the error associated with the master template itself.
\(p(\Theta_{\rm int,real}|\Theta_{\rm int,MT},H^{\prime})\) can be estimated using Bayes' theorem as a function of the likelihood \(p(\Theta_{\rm int,MT}|\Theta_{\rm int,real})\). Given that we have no a priori information about the prior probability \(p(\Theta_{\rm int,real})\), we use a uniform distribution and hence \(p(\Theta_{\rm int,real}|\Theta_{\rm int,MT},H^{\prime})\propto p(\Theta_{\rm int,MT}|\Theta_{\rm int,real},H^{\prime})\). To compute the likelihood of the MT parameters given the real ones, \(p(\Theta_{\rm int,MT}|\Theta_{\rm int,real},H^{\prime})\), we follow the procedure in Appendix A. We perform this analysis using only the 402 waveforms of [17], not including the 12 signals from Table 1. The result is that this model is consistent with the data and that the likelihood (and hence the posterior distribution) can be modelled as a normal distribution with a mean given by the model \(H^{\prime}\) and with standard deviations \(\sigma_{f}=0.024\cdot f_{\rm peak}\) and \(\sigma_{\Delta}=0.065\cdot D\!\cdot\!\Delta h\cdot\sin^{2}\theta\) for \(f_{\rm peak}\) and \(D\!\cdot\!\Delta h\cdot\sin^{2}\theta\), respectively.
In practice, this means that we can run our inference algorithm using the master templates, as described in Section IV, and then add the contribution of the additional error to the posterior distribution. The posterior distribution obtained with Bilby is a collection of points in the parameter space (samples) that can be used to estimate the posterior probability at any point. In order to propagate the error, we generate a cloud of posterior values for each actual sample of the posterior distribution. Specifically, for each sample of the original posterior distribution we draw \(N_{\rm spawn}\) additional random values, using the values of the original sample as the mean and the standard deviations given above. The collection of values forms a blurred posterior distribution that includes an estimate of the errors of the master template. We have tried values of \(N_{\rm spawn}=1,10,50\), but all produce very similar posterior distributions. This is because the number of samples in the original distribution was already sufficient, and increasing it does not improve the distribution. Therefore, in this work we use \(N_{\rm spawn}=1\) so as not to increase the computational cost of the analysis.
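The blurring step can be summarized by the following sketch, which draws \(N_{\rm spawn}\) Gaussian-perturbed values around each posterior sample using the relative standard deviations quoted above (the function is a simplified stand-in for the actual post-processing code).

```python
import numpy as np

rng = np.random.default_rng(0)

def blur_posterior(f_peak_samples, amp_samples, n_spawn=1,
                   sigma_f_rel=0.024, sigma_amp_rel=0.065):
    """Add the master-template modelling error to posterior samples.

    Each original sample is used as the mean of a normal distribution with
    the relative standard deviations sigma_f_rel and sigma_amp_rel, and
    n_spawn new samples are drawn around it.
    """
    f = np.repeat(np.asarray(f_peak_samples), n_spawn)
    a = np.repeat(np.asarray(amp_samples), n_spawn)
    f_blur = rng.normal(loc=f, scale=sigma_f_rel * f)
    a_blur = rng.normal(loc=a, scale=sigma_amp_rel * a)
    return f_blur, a_blur
```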
We could use the same arguments as above to try to estimate directly the physical parameters of the system (and their errors) using the relations suggested by [17] between \(\rho_{\rm c}\) and \(f_{\rm peak}\), and between \(T/|W|\) and \(D\!\cdot\!\Delta h\). In this case we want to infer a set of physical parameters \(\Theta_{\rm int,phys}\), so the procedure is identical to the one described above but using \(\Theta_{\rm int,phys}\) instead of \(\Theta_{\rm int,real}\). The detailed analysis can be found in Appendix A. In this case we use as a model \(H^{\prime}\) the result of the linear fits of the parameters \(\rho_{c}\) and \(T/|W|\sin^{2}\theta\) as a function of the master template parameters \(f_{\rm peak}\) and \(D\!\cdot\!\Delta h\cdot\sin^{2}\theta\), respectively, which results in:
\[\rho_{\rm c}=\left(7.3\times\frac{f_{\rm peak}}{1000\rm Hz}-1.67 \right)\times 10^{14}{\rm g\,cm^{-3}}, \tag{14}\] \[T/|W|\sin^{2}\theta=(1.1\times D\!\cdot\!\Delta h\cdot\sin^{2} \theta+17)\times 10^{-4}. \tag{15}\]
Since the amplitude of the waveform depends on the inclination angle \(\theta\), we can put constraints only on the combination \(T/|W|\sin^{2}\theta\), but not on \(T/|W|\). The likelihood, \(p(\Theta_{\rm int,phys}|\Theta_{\rm int,MT})\) can be modelled as a normal distribution with mean given by the values above and standard deviation \(\sigma_{\rho_{\rm c}}=0.07\cdot\rho_{\rm c}\) and \(\sigma_{T/W}=0.08\cdot T/|W|\) for \(\rho_{\rm c}\) and \(T/|W|\), respectively.
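The following sketch illustrates how blurred posterior samples could be mapped to the physical parameters through Eqs. (14)-(15), adding the residual scatter of the fits as an extra Gaussian component; the function and variable names are illustrative assumptions.

```python
import numpy as np

def to_physical(f_peak, amp, rng=None):
    """Map samples of f_peak [Hz] and amp = D*Delta_h*sin^2(theta) [cm] to
    rho_c [g/cm^3] and T/|W| sin^2(theta) via Eqs. (14)-(15), adding the
    residual scatter of the fits (7% and 8%, respectively)."""
    rng = np.random.default_rng() if rng is None else rng
    rho_c = (7.3 * np.asarray(f_peak) / 1000.0 - 1.67) * 1e14
    t_w = (1.1 * np.asarray(amp) + 17.0) * 1e-4
    rho_c = rng.normal(rho_c, 0.07 * np.abs(rho_c))
    t_w = rng.normal(t_w, 0.08 * np.abs(t_w))
    return rho_c, t_w
```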
### Waveform injection
In order to test our parameter estimation method systematically, we perform a series of injections under different conditions with random values of the parameters. Each of the series consists of 1000 injections in randomly generated Gaussian noise of a 3-detector network corresponding to the two Advanced LIGO detectors and Advanced Virgo at design sensitivity [67]. Injections are performed at random locations in the sky (constant probability per solid angle), random luminosity distance, \(D\), in the range \(0.1-1000\) kpc, random inclination angle, \(\theta\), of the rotation axis (constant probability per solid angle) and random polarization angle, \(\psi\). Signals from 3D simulations (see Section III) depend on an
additional azimuthal angle with respect to the rotation axis, \(\phi\), describing the orientation of the source. In those cases we use a random angle in the range \([0,2\pi]\). This angle cannot be inferred from our analysis as the templates do not contain this dependence. Since the bounce dynamics is mainly axisymmetric, \(\phi\) only produces a significant effect on the waveforms after bounce, but we vary it nonetheless. For each of the injections we perform the Bayesian inference analysis described in Sections IV.1 and IV.2 for the parameters \(f_{\rm peak}\), \(D\cdot\Delta h\cdot\sin^{2}\theta\) and \(\psi\), assuming that the sky location, distance and time of bounce are known.
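A minimal sketch of how such random extrinsic parameters could be drawn is given below. The text does not specify the distribution used for the luminosity distance, so the log-uniform draw between 0.1 and 1000 kpc is an assumption, as are all function and variable names.

```python
import numpy as np

def draw_injection_parameters(n, rng=None):
    """Draw n random sets of extrinsic parameters for the injections.
    Sky position and inclination are uniform per solid angle; the distance
    distribution is assumed log-uniform in 0.1-1000 kpc (not stated in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    return dict(
        ra=rng.uniform(0.0, 2.0 * np.pi, n),
        dec=np.arcsin(rng.uniform(-1.0, 1.0, n)),
        theta=np.arccos(rng.uniform(-1.0, 1.0, n)),   # inclination of the rotation axis
        psi=rng.uniform(0.0, np.pi, n),               # polarization angle
        phi=rng.uniform(0.0, 2.0 * np.pi, n),         # azimuth (3D models only)
        distance=10.0 ** rng.uniform(np.log10(0.1), np.log10(1000.0), n),  # kpc
    )
```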
We carry out four main series of injections depending on the type of waveforms selected:
* _Noise_: Null injections with zero amplitude that serve as a reference for cases in which there is only noise and no signal.
* _Signal MT_: Waveforms using the master template to generate signals with random values of \(f_{\rm peak}\) in the range \([600,1000]\) Hz and \(D\cdot\Delta h\) in the range \([0,700]\) cm.
* _Signal Richers_: Waveforms chosen randomly among the 402 waveforms selected from the catalog of [17] using the procedure of Section II.
* _Signal CC:_ Waveforms chosen randomly among the 12 core collapse signals of Table 1 as discussed in Section III.
## V Results
### Significance of the inferred values
Before discussing each of the different injection cases in detail, we first consider on more general grounds the quality of the inferred values by computing the Bayes factor, \(\mathcal{B}\). Fig. 4 shows the dependence of \(\log_{10}\mathcal{B}\) for the different series of injections as a function of the distance to the source. For signal injections, the maximum of \(\log_{10}\mathcal{B}\) approximately scales with \(1/D^{2}\) (dashed grey line). This upper limit corresponds to injections with optimal inclination and sky location, as well as the highest amplitude \(D\cdot\Delta h\). Below this line we find injections with non-optimal configurations and lower amplitudes. Note that even for small distances some signals can have very low values of the Bayes factor. Since the injected values of \(f_{\rm peak}\) are close to the frequencies of maximum sensitivity of the detectors, the dependence of the sensitivity on frequency is fairly flat and does not produce any observable systematic influence on the value of \(\log_{10}\mathcal{B}\) for the range of frequencies considered.
To try to interpret the value of the Bayes factor, and in particular to understand which signals would be detectable and which would be hidden in the detector noise, we perform a series of injections with pure noise. For those injections we obtain values of \(\log_{10}\mathcal{B}\) close to zero. There is a small dependence on the distance to the source, which appears because the distance (which is considered to be known in the analysis) affects the priors. For large distances, the expected amplitude of the signal is small, therefore the analysis cannot conclude whether there is an unobserved signal in the noise or if it is instead pure noise, which results in a \(\log_{10}\mathcal{B}\approx 0\). For small distances the model predicts a high signal amplitude for most orientations. Therefore, in the latter case, there is a tendency to obtain \(\log_{10}\mathcal{B}<0\), i.e. to reject the presence of a signal given that there is only noise. To model the effect of the detector noise we fit \(\log_{10}\mathcal{B}\) as a function of \(\log D\) with a second-order polynomial (violet line in Fig. 4). Next we perform a linear fit of the quadratic difference between this noise model and the values for different injections; this linear fit gives us an estimate of the variance of \(\log_{10}\mathcal{B}\) as a function of \(\log D\). Since the distribution is clearly non-Gaussian, instead of using this variance directly, we rescale it by a constant value to
Figure 4: Logarithm of the Bayes factor as a function of the distance of the injected signal. Both signal (blue and green markers) and noise (red dots) injections are plotted. The violet solid line (only visible in the bottom panel since it has negative values) and violet area display the mean value for noise injections and the 99% confidence interval (see main text for details on the noise model). The grey dashed line in the top plot shows the \(1/D^{2}\) dependence. The lower panel shows a detailed view of the region with low values of the Bayes factor in linear scale.
estimate the 0.5th and 99.5th percentiles. The resulting 99% confidence interval (i.e. the region interval within which a signal is consistent with noise) is displayed as a violet area in Fig. 4. In those cases the posteriors obtained from the Bayesian analysis are uninformative and essentially follow the priors, except for \(D\cdot\Delta h\cdot\sin^{2}\theta\), which tends to be close to zero.
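A compact sketch of this noise model is given below; the rescaling factor applied to reach the 0.5th and 99.5th percentiles is not quoted in the text, so the `scale` argument is a placeholder, and the function names are illustrative.

```python
import numpy as np

def noise_band_model(logD, logB, scale=3.0):
    """Quadratic mean model of log10(B) vs log10(D) for noise injections, plus a
    linear model of the squared residuals; `scale` stands in for the (unquoted)
    rescaling used to approximate the 0.5th/99.5th percentiles."""
    mean_coef = np.polyfit(logD, logB, deg=2)
    resid2 = (logB - np.polyval(mean_coef, logD)) ** 2
    var_coef = np.polyfit(logD, resid2, deg=1)

    def band(logD_grid):
        mu = np.polyval(mean_coef, logD_grid)
        sig = np.sqrt(np.clip(np.polyval(var_coef, logD_grid), 0.0, None))
        return mu - scale * sig, mu, mu + scale * sig

    return band
```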
In general, any signal with \(\log_{10}\mathcal{B}>0.5\) will fulfill the criterion of being above the noise. However, this does not imply that those signals are detectable or that one could extract meaningful information. We also note that this threshold could increase significantly if we were to use real detector noise, in which non-stationary noise (glitches) is present. Although a proper analysis of the detectability of this kind of signal is beyond the scope of this paper, we can make an estimate based on the SNR. Previous studies on the performance of the cWB pipeline [4; 5] for CCSN waveforms (in particular for waveforms similar to those considered here) have shown that typically more than 50% of the events with SNR\(>20\) are detectable when real detector noise is considered [5]. However, in this work we place the threshold on \(\log_{10}\mathcal{B}\) since it offers a clearer separation between noise and true signal in our tests. In Fig. 5 we show the dependence of the Bayes factor on the network SNR. The maximum value of \(\log_{10}\mathcal{B}\) is proportional to the square of the network SNR (shown as a dashed grey line). All injections resulting in \(\log_{10}\mathcal{B}>100\) have an SNR\(>20\). Therefore, for practical purposes, we will use this threshold on the value of the Bayes factor as a conservative estimate of the detectability of a signal based only on observed data (i.e., assuming high-SNR, low-\(\mathcal{B}\) signals to be undetectable).
If it were possible to detect CCSN signals using (optimal) matched-filtering techniques, the detection threshold would be lowered to SNR\(>8\), which is satisfied for all signals with \(\log_{10}\mathcal{B}>16\). Table 2 shows the fraction of injections that pass the various thresholds considered for the different series of injections performed. These fractions can be interpreted as an approximate comparison of the detectability of the different classes of waveforms. However, the fraction of detectable waveforms depends on the distribution of inherent amplitudes of the generated waveforms, which is not uniform across the datasets. The Richers dataset has a slightly higher fraction of detectable events than the MT dataset because it skews more towards high-amplitude events, even though the injected MT signals are generated from the same template as is used in the inference procedure. However, the mismatch with the CC waveforms is large enough to significantly decrease the number of detectable events.
### Master template signal injections
In the case of master template injections, the templates used for the inference and for the injections are obviously the same. This means that, for the appropriate parameters, the template is a perfect match for the waveform injected. As a result, the accuracy of the inferred values will improve as the SNR of the injected signal increases. Figure 6 shows an illustrative example of the posterior probability distribution obtained for a case with a relatively high SNR of 27.5. We are able to recover the parameters of the injected signal within the 1-\(\sigma\) confidence interval with a relatively low error.
Figure 7 shows the median values and errors of the inferred posteriors depending on the injected values for the three quantities of the analysis. For signals with high significance (\(\log_{10}\mathcal{B}>100\), red dots) the relative difference between the inferred median and injected values is almost always smaller than 2.3%, 14% and 7% for \(f_{\rm peak}\), \(D\cdot\Delta h\cdot\sin^{2}\theta\) and \(\psi\), respectively. Only in one case does the inferred value of \(D\cdot\Delta h\cdot\sin^{2}\theta\) have a larger relative error (\(\sim 40\%\), see the outlier red dot in the middle panel of Fig. 7), albeit within the 3-\(\sigma\) limit, which is expected for 1000 injections. The errors in the median values are
\begin{table}
\begin{tabular}{c c c c} Injection & \(\log_{10}\mathcal{B}>0.5\) & \(\log_{10}\mathcal{B}>16\) & \(\log_{10}\mathcal{B}>100\) \\ series & (SNR\(>2\)) & (SNR\(>8\)) & (SNR\(>20\)) \\ \hline MT & 0.64 & 0.53 & 0.44 \\ Richers & 0.67 & 0.57 & 0.48 \\ CC & 0.51 & 0.41 & 0.32 \\ noise & 0 & 0 & 0 \\ \end{tabular}
\end{table}
Table 2: Fraction of injections of the different series with a logarithm of the Bayes factor above a certain threshold. Threshold SNR values are estimated for the MT signals, though Richers and CC injections have SNR values below this threshold.
Figure 5: Logarithm of the Bayes factor as a function of the network SNR for different types of injections. The dashed grey line shows a quadratic dependence with the network SNR. The violet area corresponds to values of \(\log_{10}\mathcal{B}<0.5\), which approximately corresponds to the 99% confidence interval of the noise injections.
mostly confined within the 2-\(\sigma\) confidence interval.
To evaluate the quality of the posterior distributions we show in Fig. 8 a P-P plot. For this plot we performed an ad hoc set of 1000 injections with the same distribution of the parameters \(D\!\cdot\!\Delta h\cdot\sin^{2}\theta\), \(f_{\rm peak}\) and \(\psi\) as the priors used for the Bayesian analysis. This plot compares the observed cumulative probability distribution of the deviation of the inferred values from the injected ones (vertical axis) with the expected cumulative distribution obtained when comparing the posterior distribution of each injection with the corresponding injected value. A straight line (equal probability) indicates that the posterior properly describes the results obtained, i.e. it is a good estimator of the error associated with the inference process. Deviations from this equal-probability line would indicate an excess or scarcity of events at a given confidence level. The results obtained for master template signals show that the posterior closely follows the equal-probability line for all quantities, and therefore the analysis can be used with high confidence in its results.
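The quantity plotted in such a P-P plot can be computed as in the sketch below, which assumes one 1-D posterior sample array per injection; function and variable names are illustrative.

```python
import numpy as np

def pp_curve(posterior_samples, true_values):
    """Return expected and observed cumulative fractions for a P-P plot of one
    parameter.  posterior_samples: list of 1-D sample arrays (one per injection);
    true_values: the corresponding injected values."""
    # quantile of each posterior at which the injected value sits
    levels = np.array([np.mean(s < t) for s, t in zip(posterior_samples, true_values)])
    observed = np.sort(levels)
    expected = np.arange(1, len(levels) + 1) / len(levels)
    return expected, observed   # a well-calibrated posterior gives observed ~ expected
```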
### Richers et al. signal injections
Next we discuss our results when injecting waveforms from the Richers et al. catalog [17]. In this case we have to post-process the posteriors given by the Bayesian inference algorithm to take into account the intrinsic error of the master templates (see Section IV.2). Fig. 9 shows an illustrative example of the posterior probability of a loud signal with a SNR of 57. The plot shows both the posterior probability obtained directly with the Bayesian inference process (blue) and after post-processing the posteriors to incorporate the error of the master template (black). Since the SNR is high, the predicted error associated with the detector noise (blue) is relatively low and the master template error has a significant contribution to the final posteriors of \(f_{\rm peak}\) and \(D\!\cdot\!\Delta h\cdot\sin^{2}\theta\) (black). Note that \(\psi\) is unaffected. For this particular case, the injected value falls at the edge of the 3-\(\sigma\) confidence interval before the inclusion of the template errors but within 1-\(\sigma\) after its inclusion. At sufficiently high SNR the contribution of noise diminishes and the final posteriors are essentially dominated by the master template errors (the transition seems to happen at \({\rm SNR}\sim 20\) in this work). Correspondingly, at sufficiently low SNR the noise becomes dominant to the point that the posterior is insensitive to the template error contribution.
Fig. 10 depicts the inferred values depending on the injected values and their errors. The observed behaviour for all three parameters is similar to the case of master template injections discussed before. However, we now note a larger dispersion of the inferred values due to the intrinsic error of the templates. Of the five outliers visible in \(f_{\rm peak}\) (top panel, red circles), three correspond to injections of the same waveform (A634w3.00_SFHo_ecapture_0.1, according to the naming convention in [17]), which is a model with artificially decreased electron capture rates. The Fourier transform of this waveform presents a secondary peak at about the frequency that is recovered by our analysis. This hints that, for some models, secondary modes may be excited at bounce and would need to be included in the model for more accurate results.
The errors shown in Fig. 10 lie mostly in the 2-\(\sigma\) interval, although the number of injections outside this interval is larger than for the master template case. This could be an effect of the assumption that the error due to the master template follows a normal distribution (see Section IV.2). It could be corrected by using a more accurate description of that distribution, which should be the subject of future studies.
### Core collapse signal injections
We turn now to consider signal injections with the waveforms from Table 1, i.e. waveforms which were not employed to build the master template. The inferred values and errors are displayed in Fig. 11. In this case, there are clear mismatches between the inferred and the true values. The worst results are obtained for \(f_{\rm peak}\), for which the inferred value is systematically underestimated. This is an extreme case of the pathology discussed in the previous section, which there affected only
Figure 6: Corner plot of the posterior probability distribution for a master template injection. The network SNR of the signal is 27.5 and \(\log_{10}\mathcal{B}=157\). Contours show 1-\(\sigma\), 2-\(\sigma\) and 3-\(\sigma\) levels. Dashed blue lines show the 1-\(\sigma\) confidence intervals, and orange lines correspond to the injected values. The correlation between \(\psi\) and \(D\Delta h\cdot\sin^{2}\theta\) is a result of the particular sky location and detector orientation.
a few waveforms: The post-bounce evolution of the CCSN signals from Table 1 is more complex than that of the waveforms from [17] and cannot be modelled as faithfully by our master template waveform. In particular, the Fourier transform of the strain near bounce presents multiple peaks, whereas the Richers waveforms generally only show one. The presence of additional peaks has been discussed in [18], and may be related to overtones of the fundamental \(f\)-mode, to non-linear couplings or to the presence of \(g\)-modes. This confuses the algorithm in two ways: i) it is difficult to evaluate what the nominal injected value of \(f_{\rm peak}\) is; and ii) the Bayesian algorithm could be fitting the wrong oscillation mode. This is a strong indication that further work is needed to converge on a realistic bounce signal and/or that our master template needs improvement, possibly by adding secondary frequencies.
Regarding \(D\cdot\Delta h\cdot\sin^{2}\theta\), the results seem to follow the right behaviour for values of \(D\cdot\Delta h\cdot\sin^{2}\theta<250\). Above that point, however, the inferred value is systematically underestimated. The results for \(\psi\) are better but in many cases there are still large deviations. Both effects could be attributed to the strong mismatch in \(f_{\rm peak}\), but the presence of additional unaccounted modeling dependencies on these two parameters cannot be dismissed. This highlights the potential for future investigations to systematically examine these dependencies, while also considering possible improvements to the waveform models.
### Dependence on SNR and distance
We next discuss the dependence of the results on the network SNR and on the distance to the source. In so doing we only consider the master template and the Richers et al. waveforms, leaving out of the analysis the additional CCSN waveforms of Table 1 because of the poor match between these waveforms and the template. Figure 12 shows both the relative error of the inferred values as a function of the network SNR (left panels) and the fractions of injections below a certain relative threshold as a function of the distance to the source (right panels).
For master template injections the relative error (green
Figure 8: P-P plot for master template injections: Observed cumulative probability distribution of the deviation of the inferred values from the injected values vs the expected cumulative distribution when comparing the posterior distribution of each injection with the corresponding injected value. For this analysis we consider a set of injections in which the parameter distribution of the injections matches the priors. The dashed black line indicates equal probability.
Figure 7: Bayesian inference for master template injections. _Upper panels:_ Median of the inferred posteriors for \(f_{\rm peak}\) (left panel), \(D\cdot\Delta h\cdot\sin^{2}\theta\) (middle panel) and \(\psi\) (right panel) as a function of the true value of the injected waveform. Colors indicate the logarithm of the Bayes factor. Values of \(\log_{10}\mathcal{B}>100\) are indicated in red and \(\log_{10}\mathcal{B}<0.5\) in grey. The blue solid line on the diagonal of all plots corresponds to equal true and inferred values. _Lower panels:_ error in the inferred values in terms of standard deviations. Horizontal dashed lines indicate the 2-\(\sigma\) interval.
symbols in left panels) decreases with increasing SNR, as expected. The green lines in the right panels show the fraction of injections at a given distance that are recovered within a given error (regardless of the Bayes factor). For a Galactic event (\(D\lesssim 10\) kpc), both \(f_{\rm peak}\) and \(D\cdot\Delta h\cdot\sin^{2}\theta\) could be recovered in most cases with less than 10% error. For the case of \(f_{\rm peak}\) this accuracy could be maintained for half of the events up to \(\sim 100\) kpc. For signals with low significance (low value of \(\log_{10}\mathcal{B}\)), the analysis recovers systematically very low values of \(D\cdot\Delta h\cdot\sin^{2}\theta\). This is because the observation is compatible with noise, i.e. with an observation of a zero-amplitude waveform.
For injections from the Richers et al. catalog, the relative error (blue symbols in left panels) decreases with the network SNR in the regime in which it is dominated by the detector noise (network SNR\(<20\), vertical dashed line). However, for values of the network SNR associated with an unambiguous detection (network SNR\(>20\)) the relative error remains fairly constant and is dominated by the error in the template. This limits the accuracy of the analysis to a few percent for \(f_{\rm peak}\) and to \(\sim 10\%\) for \(D\cdot\Delta h\cdot\sin^{2}\theta\). For a Galactic event, it is still possible to infer \(f_{\rm peak}\) with a relative error smaller than 10% (see the green curves in the top right panel of Fig. 12). However, the accuracy of the recovery of \(D\cdot\Delta h\cdot\sin^{2}\theta\) degrades, with about 25% of the events showing a relative error larger than 10%. Given that \(\psi\) is mostly unaffected by the master template error, its relative error keeps decreasing at larger SNR, saturating only at network SNR values larger than \(\sim 100\), consistent with the discussion in Section V.3.
### Waveform reconstruction
Once the parameters of a signal have been determined, the injected signal can be reconstructed by generating a master-template waveform from the median values of the posterior distribution. Fig. 13 shows an example of waveform reconstruction from the Richers et al. catalog (same example as in Fig. 9). We have estimated the error of the waveform in two ways: First, using the standard deviation of the master template (see Section II) we have evaluated the 2-\(\sigma\) confidence interval (purple region). This estimation only considers the master template error and works well for this particular case with a high network SNR of 57. In general, however, it underestimates the error for cases dominated by detector noise errors (lower SNR). Second, we plot a selection of master template waveforms (cyan curves) generated using 100 random samples from the posterior distribution. This procedure could be used to evaluate confidence intervals in a more accurate way, incorporating both template and detector noise errors. In both cases the injected waveform lies within the error interval.
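The sample-based band could be built as in the sketch below; `waveform_fn` stands in for the master-template generator of Section II (not reproduced here), and the percentile levels and names are illustrative assumptions.

```python
import numpy as np

def reconstruction_band(times, samples, waveform_fn, n_draws=100, rng=None):
    """Generate one master-template waveform per random posterior draw and return
    the draws together with an approximate 2-sigma band.  `samples` is an (N, 2)
    array of blurred posterior samples (f_peak, D*Delta_h*sin^2(theta));
    `waveform_fn(times, f_peak, amp)` is a placeholder for the template generator."""
    if rng is None:
        rng = np.random.default_rng()
    idx = rng.choice(len(samples), size=n_draws, replace=False)
    draws = np.array([waveform_fn(times, f, a) for f, a in samples[idx]])
    lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
    return draws, lo, hi
```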
### Inference of physical parameters
Finally, we use the procedure outlined in Section IV.2 to estimate the physical parameters \(\rho_{\rm c}\) and \(T/|W|\sin^{2}\theta\) from the waveform. This estimation is exclusively carried out for the waveforms in the Richers et al. catalog. The master template waveforms do not come from numerical simulations and so they do not have associated physical parameters. There is only partial information about the CC waveforms publicly available, so while we could still perform the analysis, we would not be able to compare with the injected values. Furthermore, we need bounce signals consistent with the template in order to reliably infer the physical parameters \(\rho_{\rm c}\) and \(T/|W|\sin^{2}\theta\), which is not the case for these models as shown in Section III.
Fig. 14 shows the inferred values versus the injected ones. Regarding \(\rho_{\rm c}\), the inferred values have a large scatter around the true values. The combined effect of the errors coming from the detector noise, the master template errors and the dispersion in the \(\rho_{\rm c}\) vs \(f_{\rm peak}\) relation spreads the inferred values over the range \(3-4\times 10^{14}\) g/cm\({}^{3}\), regardless of the injected value. Moreover, the fact that the range of possible values for \(\rho_{\rm c}\) is relatively narrow does not help, as it raises the degree of accuracy necessary to obtain a meaningful physical result.
Figure 9: Corner plot of the posterior probability distribution for an example injection of the Richers et al. catalog. The network SNR of the signal is 57 and \(\log_{10}\mathcal{B}=463\). Blue contours show the posterior distribution computed by the Bayesian analysis and black contours, the corresponding distribution after incorporating the error in the template. Contours show 1-\(\sigma\), 2-\(\sigma\) and 3-\(\sigma\) levels. Dashed black lines display the 1-\(\sigma\) confidence intervals and orange lines correspond to the injected values.
The results are significantly better for \(T/|W|\sin^{2}\theta\). The dispersion is larger than in the case of \(D\cdot\Delta h\cdot\sin^{2}\theta\) due to the imperfect mapping between waveform amplitude and rotation rate, but its value can be recovered with \(\sim 25\%\) accuracy in most cases. Notice that the value that we infer includes the dependence on \(\theta\), which is not a physical parameter of the system. If one wishes to remove the dependence on \(\theta\) without any previous knowledge of its value, the results automatically become lower limits, since \(\sin^{2}\theta\leq 1\) implies \(T/|W|\geq T/|W|\sin^{2}\theta\). Note that a low Bayes factor event (grey dots) is equivalent to a non-detection. In that case, the inferred values are usually zero, meaning that the only information that we can extract from a non-detection is that \(T/|W|\geq 0\), i.e. it is completely uninformative. In Section VI we discuss possible ways of determining \(\theta\) independently to break the existing degeneracy.
## VI Conclusions and outlook
In this work we have proposed a procedure to infer proto-neutron star properties from future gravitational-wave observations. We have focused on the early-time (bounce) GW signal from the collapse of stellar cores for fast-rotating progenitors. Despite the complex and stochastic character of CCSN signals, the bounce part of the signal is rather regular and depends in a simple way on two parameters: the bounce amplitude, \(D\cdot\Delta h\), and the peak frequency, \(f_{\rm peak}\). The main interest of these two quantities is that their values correlate with the physical properties of the PNS and, in particular, with the ratio of rotational-kinetic energy to potential energy, \(T/|W|\), and the central density, \(\rho_{\rm c}\), at bounce [17]. The main goal of this investigation has been to provide estimates for these two parameters directly from the bounce GW signal.
Figure 11: Bayesian inference of \(f_{\rm peak}\), \(D\cdot\Delta h\cdot\sin^{2}\theta\) and \(\psi\) as a function of the true value of the injected waveform, when using injections from Table 1. The description of the plots is as in Fig. 7
Figure 10: Bayesian inference of \(f_{\rm peak}\), \(D\cdot\Delta h\cdot\sin^{2}\theta\) and \(\psi\) as a function of the true value of the injected waveform, when using injections from the Richers et al. catalog. See Fig. 7 for details.
We have developed a simple parametric waveform template (master template) to model the bounce. This template has been constructed using a carefully selected set of 402 models from the Richers et al. waveform catalog [17], which globally comprises over 1800 axisymmetric simulations extending up to about 10 ms of post-bounce evolution. Using the master template, we have performed Bayesian inference on signals injected in Gaussian colored noise of a three-detector network of advanced GW interferometers (Advanced LIGO and Advanced Virgo). We have performed this analysis making use of the Bayesian inference library Bilby [65]. We have been able to
Figure 12: Left panels: Relative error in the inference of \(f_{\rm peak}\) (top), \(D\cdot\Delta h\cdot\sin^{2}\theta\) (middle), and \(\psi\) (bottom) as a function of the network SNR for both, master template injections (green symbols) and injections from the Richers et al. catalog [17] (blue symbols). Dashed vertical lines mark the approximate detection threshold of SNR\(=20\). Right panels: Fraction of injections at a given distance with a relative error smaller than a certain threshold (10%, 1% and 0.1%) for the three parameters studied.
propagate the errors due to the master template inaccuracies into the final posterior distributions, generating error estimates for the inferred values that include Gaussian detector noise and model uncertainties.
Our procedure has been tested with four data sets: i) null injections (pure noise), ii) master template injections, iii) bounce signals from [17] and iv) other CCSN waveforms from the literature. Injected template waveforms are recovered with better accuracy than injected waveforms from the training set, which in turn are recovered better than injected waveforms from other simulations. While master template injections are only affected by detector noise error, more realistic signals are affected by the fact that the master template itself is an imperfect representation of simulation waveforms. Using the bounce signals of [17], we have been able to recover the injected parameters, namely \(f_{\rm peak}\), \(D\cdot\Delta h\cdot\sin^{2}\theta\), and \(\psi\), with an accuracy better than 10% for more than 50% of the detectable events. While for waveforms outside of the training catalog the accuracy and confidence of the results worsen, it is still possible to obtain valid information for sufficiently loud signals. Inferences of the physical parameters of the PNS, \(T/|W|\) and \(\rho_{\rm c}\), have similarly low accuracies for the same reason. Since modeling uncertainty plays such a large role in the otherwise very regular signals, there is a significant opportunity for future work to address these modeling uncertainties with relatively short-duration simulations.
We have identified the main limitations of our procedure that presently prevent us from reaching a higher accuracy for the inferred parameters. Addressing the following issues will be the subject of future investigations:
* We model the errors due to the master template inaccuracies with a simple normal distribution. This may be responsible for an excess of injections outside the 2-\(\sigma\) confidence interval. The solution is relatively simple and would consist of using the complete distribution that we obtain from the analysis of the errors.
* In some of the CC bounce waveforms (i.e. not in the catalog of [17]) we have identified multiple oscillating frequencies in the bounce signal that cannot be modeled by a single peak frequency. It is unclear whether the origin of these other peaks is modeling deficiencies or differing choices of initial conditions. Therefore, it may be necessary to incorporate additional degrees of freedom in the templates (e.g. a secondary frequency) and converge on unified waveforms produced by multiple codes. A more detailed analysis of the bounce signal under more realistic conditions (e.g. treatment of deleptonization, heavy nuclei, progenitor profiles, fully dynamical spacetime) and different numerical methods would be necessary to implement this improvement.
* All the waveforms in the Richers et al. catalog employed to create the master template make use of the same \(12M_{\odot}\) progenitor star. Using waveforms from a wider variety of progenitors (e.g. [69]) could allow for the creation of master templates that better approximate realistic CCSN waveforms from any core collapse with significant rotation. Additionally, we would obtain improved error estimates and better estimators of the physical properties.
One of the main caveats of our analysis is the degeneracy of the measurement of \(D\cdot\Delta h\) (or \(T/|W|\)) with the value of the inclination angle \(\theta\). In order to break this degeneracy we would need an independent measurement or constraint on the value of \(\theta\). Some possibilities are:
* It has been shown [36] that the Stokes parameters of the gravitational wave signal of the post-bounce evolution carry information about the PNS rotation if non-axisymmetric deformations are present. These deformations could appear systematically in the form of bar-mode or spiral instabilities for sufficiently high rotation rates [70; 71; 72; 53; 73; 74]. In this case it should be possible to constrain the inclination angle by measuring the ratio of the different Stokes parameters.
* Long-term observations of the supernova remnant could allow us to measure the velocity field of the ejecta and their asymmetries. If those asymmetries are significant and related to the presence of strong rotation (e.g. bipolar flows), the inclination angle could be estimated.
* Very rapidly rotating progenitors are a possible source of long GRBs. In those cases the collimated jet producing the GRB is expected to be aligned with the angular momentum of the system.
Figure 13: Reconstructed (blue) and injected (orange) signal for the same case shown in Fig. 9. Purple region is the 2-\(\sigma\) confidence interval estimated with the error of the master templates. Cyan curves correspond to waveforms generated with 100 random samples of the posterior distribution (including template errors).
Therefore its observation could constrain the inclination angle.
While our results provide a possible way to infer PNS properties from GW observations of CCSNe, they should be taken with care as there are a number of simplifying assumptions in our model that could have an impact on the inferred waveforms and physical parameters should they be relaxed. Perhaps the most important simplification has been the use of zero-mean, coloured Gaussian noise into which the injections have been made. An immediate extension of the present work would be to account for actual (non-Gaussian) detector noise. In addition, we should explore the impact on the analysis of the number of detectors in the network (e.g. with only the two aLIGO detectors it may not be possible to recover the polarization angle) or the use of next-generation detectors such as the Einstein Telescope [76], Cosmic Explorer [77] or NEMO [78]. Results incorporating those elements will be presented elsewhere.
## VII Acknowledgements
We thank Christopher Berry, Sylvia Biscoveanu, Marie-Anne Bizouard, Jade Powell, and Marek Szczepanczyk for their useful comments and suggestions. This research has been supported by the Spanish Agencia Estatal de Investigacion (grant PID2021-125485NB-C21 funded by MCIN/AEI/10.13039/501100011033 and ERDF A way of making Europe). Further support is provided by the Generalitat Valenciana (Prometeo program for excellent research groups grant CIPROM/2022/49 and Astrophysics and High Energy Physics program grant ASFAE/2022/003 funded by MCIN and the European Union NextGenerationEU (PRTRTR-C17.1)), by the EU's Horizon 2020 research and innovation (RISE) programme H2020-MSCA-RISE-2017 (FunFiCO-77740), and by the European Horizon Europe staff exchange (SE) programme HORIZON-MSCA-2021-SE-01 (NewFunFiCO-101086251). PCD acknowledges support from the _Ramon y Cajal_ funding (RYC-2015-19074). SR was supported by a NSF Astronomy & Astrophysics Postdoctoral Fellowship under Grant No. 2001760. EA was supported by RK MES grant No. AP13067834 and NU Faculty Development Grant No. 11022021FD2912. We thank the YITP for hospitality and support during the 2019 long-term workshop _Multi-Messenger Astrophysics in the Gravitational Wave Era_ during which the idea for this project was discussed.
## Appendix A Computation of the errors associated with the master template
As described in Section IV.2, we need to relate the values inferred for the master template, \(\Theta_{\rm int,MT}\), with the real values, \(\Theta_{\rm int,real}\), by means of the probability \(p(\Theta_{\rm int,MT}|\Theta_{\rm int,real},H^{\prime})\). In order to compute this probability, we first find the master template parameters, \(\Theta_{\rm int,MT}\), that best match each waveform from the Richers et al. catalog, i.e. those parameters that maximize the waveform likelihood given by
\[\log\mathcal{L}_{\rm waveform}=-\frac{\sum_{i=1}^{N/2}|\hat{h}_{i}^{\rm MT }-\hat{h}_{i}^{\rm Ric}|^{2}}{\sum_{i=1}^{N/2}|\hat{h}_{i}^{\rm Ric}|^{2}}. \tag{10}\]
Second, we consider that the resulting maximized parameters for all the waveforms in the catalogue constitute samples of the distribution that we want to measure, \(p(\Theta_{\rm int,MT}|\Theta_{\rm int,real},H^{\prime})\).
Figure 14: Bayesian inference of \(\rho_{c}\) and \(T/|W|\sin^{2}\theta\) as a function of the true value of the injected waveform, when using injections from the Richers et al. catalog [17]. See Fig. 7 for details.
Note that the denominator of Eq. (10) is just a normalization constant and does not affect the computation of the maximum. The estimation has been performed using \(\theta=0\), \(\psi=0\), \(D=10\) kpc; however, the analysis does not depend on the arbitrary choice of these extrinsic parameters. We compute the maximum likelihood using the Nelder-Mead simplex algorithm [79] implemented in the SciPy Python library [80].
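A schematic version of this maximization is shown below; `waveform_fn` is a placeholder for the master-template generator and the starting values are arbitrary, so the sketch only illustrates the structure of the fit.

```python
import numpy as np
from scipy.optimize import minimize

def best_fit_mt_parameters(h_catalog, times, waveform_fn, x0=(700.0, 100.0)):
    """Maximize the waveform likelihood of Eq. (10) over the master-template
    parameters (f_peak, amp) for one catalogue strain h_catalog sampled on `times`."""
    h_ref = np.fft.rfft(h_catalog)
    norm = np.sum(np.abs(h_ref) ** 2)

    def neg_log_like(params):
        f_peak, amp = params
        h_mt = np.fft.rfft(waveform_fn(times, f_peak, amp))
        return np.sum(np.abs(h_mt - h_ref) ** 2) / norm

    res = minimize(neg_log_like, x0=np.asarray(x0), method="Nelder-Mead")
    return res.x, -res.fun   # best-fit (f_peak, amp) and the maximized log-likelihood
```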
Fig. 15 shows the relation between the value of \(\Theta_{\rm int,MT}\) computed as the maximum likelihood for each waveform and the real values in the template. We have performed linear regressions for both parameters \(f_{\rm peak}\) and \(D\cdot\Delta h\cdot\sin^{2}\theta\), and in both cases they result in good fits (\(R\) values of 0.88 and 0.98, respectively) with a slope compatible with unity. We test two alternatives as a model for \(H^{\prime}\): one case in which we assume that the slope is one (identity model, with \(\Theta_{\rm int,real}=\Theta_{\rm int,MT}\)) and a second model in which we use the result of the fit (fit model). The relative errors with respect to the models can be found in the lower panels of Fig. 15. In
Figure 16: Upper panels show the values of the maximum likelihood of \(\Theta_{\rm int,MT}\) (MT value) for a template with parameters \(\Theta_{\rm int,phys}\) (physical value), for the parameters \(\rho_{\rm c}\) (left panel) and \(T/|W|\sin^{2}\theta\) (right panel). The blue line shows the result of a linear regression. Lower panels show the corresponding relative errors with respect to the fit. Dashed lines show the 2-\(\sigma\) confidence interval for the fit. The visible groups of points correspond to models with the same initial rotation rate but different rotation law.
Figure 15: Upper panels show the parameters \(\Theta_{\rm int,real}\) for each template (real value) as a function of the value of the maximum likelihood of \(\Theta_{\rm int,MT}\) (MT value) for this template, for the parameters \(f_{\rm peak}\) (left panel) and \(D\cdot\Delta h\cdot\sin^{2}\theta\) (right panel). Dashed lines indicate equal value of the MT and real values (identity model) and the blue line, the result of a linear regression (fit model). Lower panels show the corresponding relative errors considering the identity and fit models. Dashed lines show the 2-\(\sigma\) confidence interval for the fit model.
both cases errors are very similar, so we will consider the identity model hereafter for simplicity. The distribution of relative errors for both variables can be modelled as a normal distribution with zero mean and standard deviations \(0.024\) and \(0.065\) for \(f_{\text{peak}}\) and \(D\cdot\Delta h\cdot\sin^{2}\theta\), respectively. Dashed lines in lower panels show the 2-\(\sigma\) confidence interval.
Note that for the case of \(D\cdot\Delta h\cdot\sin^{2}\theta\), there is an outlier at 4.7 sigmas from the mean. The corresponding waveform has a high-frequency feature close to the maximum amplitude, which could be related to a numerical artifact. In any case, removing this outlier has very little impact on the results of the analysis so we have kept it for simplicity.
To compute the errors associated with the estimation of the physical parameters \(\Theta_{\text{int,phys}}\) from the template parameters \(\Theta_{\text{int,MT}}\), we assume, following [46], a linear dependence of \(\rho_{\text{c}}\) and \(T/|W|\sin^{2}\theta\) on \(f_{\text{peak}}\) and \(D\cdot\Delta h\cdot\sin^{2}\theta\), respectively. We compute the parameters of the linear function by performing fits to the data, as shown in Fig. 16. The results of the fits, which serve as model \(H^{\prime}\), are:
\[\rho_{\text{c}}=\left(7.3\times\frac{f_{\text{peak}}}{1000\text{ Hz}}-1.67\right)\times 10^{14}\text{g}\,\text{cm}^{-3}, \tag{20}\] \[T/|W|\sin^{2}\theta=(1.1\times D\cdot\Delta h\cdot\sin^{2}\theta +17)\times 10^{-4}, \tag{21}\]
with \(R\) values \(0.71\) and \(0.97\), respectively. The relative error with respect to this model (lower panels in Fig. 16) can be modelled as normal distributions with standard deviations \(0.07\) and \(0.08\) for \(\rho_{\text{c}}\) and \(T/|W|\sin^{2}\theta\), respectively.
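These calibrations amount to a simple linear regression plus the relative scatter of its residuals, as in the sketch below; the names are illustrative, and the inputs are assumed to be the maximum-likelihood MT values and the corresponding physical values of the catalogue.

```python
import numpy as np
from scipy.stats import linregress

def calibrate(mt_values, phys_values):
    """Linear fit of a physical parameter against the recovered master-template
    parameter; returns the fit coefficients, the R value and the relative scatter
    used as the Gaussian width of p(Theta_int,phys | Theta_int,MT)."""
    mt = np.asarray(mt_values, dtype=float)
    ph = np.asarray(phys_values, dtype=float)
    fit = linregress(mt, ph)
    model = fit.slope * mt + fit.intercept
    rel_err = (ph - model) / model
    return fit.slope, fit.intercept, fit.rvalue, np.std(rel_err)
```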
A closer inspection of the model for \(T/|W|\sin^{2}\theta\) reveals that there are 4 outliers outside the 4-\(\sigma\) confidence interval. The numerical simulations corresponding to these four outliers were performed using an artificially increased electron capture rate (a factor of 10) that was used by [17] to test the influence of this parameter on the waveform. [17] showed that the electron capture rate treatment can change the slope of the relation between \(T/|W|\) and the peak amplitude (see their Fig. 16), therefore modifying the slope of Eq. (21). However, they showed that the effect of the EOS on the slope is much smaller (see their Fig. 6). In this sense, it is reasonable to think that using models with much more tightly constrained electron capture rates would lead to a better model for \(T/|W|\sin^{2}\theta\) with no outliers present.
Regarding the model for \(\rho_{\text{c}}\), the low value of \(R\) is related to the large scatter in the data. This is a clear indication that the relation in this case may depend on other parameters. In particular, [17] pointed out the influence of the EOS on the slope of Eq. (20) (see their Fig. 8, bottom panel).
For the current work we will consider the data as they are, without further restrictions. This will lead to more conservative results and larger errors than in the case of restricting the analysis to a particular EOS and a specific model for electron capture rates.
|
2306.04695 | ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image
Diffusion Models | The ability to understand visual concepts and replicate and compose these
concepts from images is a central goal for computer vision. Recent advances in
text-to-image (T2I) models have led to high definition and realistic image
quality generation by learning from large databases of images and their
descriptions. However, the evaluation of T2I models has focused on photorealism
and limited qualitative measures of visual understanding. To quantify the
ability of T2I models in learning and synthesizing novel visual concepts
(a.k.a. personalized T2I), we introduce ConceptBed, a large-scale dataset that
consists of 284 unique visual concepts, and 33K composite text prompts. Along
with the dataset, we propose an evaluation metric, Concept Confidence Deviation
(CCD), that uses the confidence of oracle concept classifiers to measure the
alignment between concepts generated by T2I generators and concepts contained
in target images. We evaluate visual concepts that are either objects,
attributes, or styles, and also evaluate four dimensions of compositionality:
counting, attributes, relations, and actions. Our human study shows that CCD is
highly correlated with human understanding of concepts. Our results point to a
trade-off between learning the concepts and preserving the compositionality
which existing approaches struggle to overcome. The data, code, and interactive
demo is available at: https://conceptbed.github.io/ | Maitreya Patel, Tejas Gokhale, Chitta Baral, Yezhou Yang | 2023-06-07T18:00:38Z | http://arxiv.org/abs/2306.04695v2 | # ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image Diffusion Models
###### Abstract
The ability to understand visual concepts and replicate and compose these concepts from images is a central goal for computer vision. Recent advances in text-to-image (T2I) models have led to high definition and realistic image quality generation by learning from large databases of images and their descriptions. However, the evaluation of T2I models has focused on photorealism and limited qualitative measures of visual understanding. To quantify the ability of T2I models in learning and synthesizing novel visual concepts, we introduce **ConceptBed**, a large-scale dataset that consists of 284 unique visual concepts, 5K unique concept compositions, and 33K composite text prompts. Along with the dataset, we propose an evaluation metric, Concept Confidence Deviation (CCD), that uses the confidence of oracle concept classifiers to measure the alignment between concepts generated by T2I generators and concepts contained in ground truth images. We evaluate visual concepts that are either objects, attributes, or styles, and also evaluate four dimensions of compositionality: counting, attributes, relations, and actions. Our human study shows that CCD is highly correlated with human understanding of concepts. Our results point to a trade-off between learning the concepts and preserving the compositionality, which existing approaches struggle to overcome.
## 1 Introduction
Humans reason about the visual world by aggregating entities that they see into "visual concepts": both cats and elephants are animals, and both palms and pines are trees. When we describe images in natural language we often make use of such concepts to describe things that we see. Although this type of visual concept learning is well-defined in human psychology [27], it remains elusive with respect to the development of data-driven techniques capable of learning and reasoning from images and their natural language descriptions.
Text-to-Image (T2I) generative models are trained to translate natural language phrases into images that correspond to that input. High-quality T2I models, therefore, serve as a link between human-level concepts (expressed in natural language) and their visual representations and are one way to reproduce visual concepts. On the other hand, this has also sparked interest in visual concept learning through the procedure of _"image inversion"_ - to translate one or many images corresponding to a visual concept into a latent representation of that visual concept. While early works primarily explored image inversion using Generative Adversarial Networks (GANs) [42], methods such as Textual Inversion [10] and Dreambooth [35] combine image inversion with T2I - this has led to an effective way to quickly learn concepts from a few images and reproduce them in novel combinations and compositions with other concepts, attributes, styles, etc. These methods aim to learn concepts with minimal reference images by fine-tuning pre-trained text-conditioned diffusion models (as shown in
Figure 1). Therefore this paradigm of T2I and image inversion is a powerful new way of learning and reproducing concepts.
Within this paradigm of concept learning via image inversion, two primary evaluation criteria have emerged: concept alignment, which assesses the generated image's correspondence to target concept images, and compositional reasoning, which evaluates whether the generated images maintain compositionality. Previous works on T2I concept learning have evaluated only a small number of concepts and unique compositions, rendering their findings non-generalizable to a wide range of potential concepts. Furthermore, the established evaluation metrics such as DINO cosine similarity [35] (for measuring concept alignment), and CLIPScore [16] (for evaluating compositionality), have encountered challenges in accurately capturing human preferences. Consequently, there is a growing need for automated evaluations.
Therefore, we propose the **ConceptBed** benchmark: a comprehensive evaluation strategy aligned with human preferences and accompanied by a concept dataset. The **ConceptBed** dataset comprises 284 distinct concepts and over 33,000 composite text prompts, which can be further extended using the provided automatic realistic dataset creation logic. The dataset focuses on four diverse concept learning evaluation tasks: learning styles, learning objects, learning attributes, and compositional reasoning. To gain a deeper understanding of previous methodologies, we incorporate four composition categories - Action, Attribution, Counting, and Relations.
Armed with the dataset, we show how it can be used to evaluate concept learners. To that end, we present a novel concept deviation-aware evaluation framework that exhibits a strong correlation with human preferences. This framework, combined with the **ConceptBed** dataset, offers an alternative to existing evaluation strategies, facilitating more effective large-scale evaluations that are aligned with human evaluations. For each of the four tasks within **ConceptBed**, we train a supervised classifier (referred to as an oracle) to detect the respective concepts. Subsequently, the confidence scores outputted by these oracles are utilized to calculate the concept deviations of the generated concept images in relation to the reference target ground truth images using the proposed CCD metric. This approach enables us to assess concept and compositional alignment accurately.
We conduct extensive experiments on four recently proposed concept learning methodologies. In total, we fine-tune approximately 1100 models (one model per concept) and generate over 200,000 concept-specific images. Our results reveal a trade-off between concept alignment and compositional reasoning, wherein methods excelling at concept alignment tend to fall short in preserving compositions and vice versa. This suggests that previous concept learning approaches are either highly overfitted or severely underfitted. Furthermore, our experiments demonstrate that utilizing a pre-trained CLIP [32] textual encoder aids in maintaining compositionality, but it lacks the flexibility required to learn complex concepts, such as sketch.
In summary, we make the following **key contributions:**
* We introduce **ConceptBed**, a comprehensive benchmark for grounded quantitative evaluations of text-conditioned concept learners. It comprises 284 unique and diverse concepts and over 33,000 composite text prompts across four composition categories.
* Our key contribution is the **Concept** Confidence **Deviation** (CCD) evaluation metric, which measures the learners' ability to preserve concepts and compositions. On average, we demonstrate a strong negative correlation of \(-0.89\) (Pearson's correlation) between CCD and human preferences.
* Through extensive experiments with 1,100+ models, we identify shortcomings in prior works and suggest future research directions. **ConceptBed** sets a standard for evaluating text-conditioned concept learners.
Figure 1: The outline of visual concept learning via image inversion. Here, V\({}^{*}\) denotes the target concept. First, image inversion is employed to learn a mapping from concept images to the latent space. Then, this learned concept V\({}^{*}\) is used in text prompts to generate images with novel compositions.
## 2 Concept Learning Preliminaries
In this section, we establish the formal definition of concept learning (Section 2.1). Later, we delve into the concept learning under the paradigm of text-conditioned diffusion models (Section 2.2).
### Definition of Concept Learning
**Definition 1**: _A concept (c) is characterized as a set of entities clustered together, sharing commonly associated properties (\(P_{c}\))._
The concept represents an abstract notion, which can be identified by the recurring properties observed among different entities. For instance, in an image collection featuring various animals such as _dogs_, cats, and more, the concept of animal emerges. Similarly, images of distinct golden retrievers and _bisons_ exemplify the concept of _dog_.
**Definition 2**: _A Concept Learner (\(CL\)) refers to a model or modeling strategy capable of acquiring the concept (c) and effectively reproducing and associating it with other concepts._
An ideal concept learner should exhibit two crucial properties: 1) **Reproduce Concepts:** It should be able to replicate the learned concepts faithfully. 2) **Compositional Reasoning:** The concept learner must be adept at associating the acquired concepts with prior knowledge, thereby retaining both the concept itself and its composition.
To evaluate the set of concepts (\(\mathcal{C}\)), we assume the existence of a relationship (\(\mathcal{R}\)) between two concepts (\(a\), \(b\)) such that \(a\in\mathcal{C}\). This compositional relationship can be denoted by the predicate \(\mathcal{R}(a,b)\), representing the composition of the two concepts. For example, when \(\mathcal{R}(a,b)\implies\) with(bird, two legs) and the composition can be expressed as "A bird with two legs." We further assume that the combination of any \(\mathcal{R}\) with \(a\) and \(b\) is realistic (e.g., we do not evaluate a bird with five legs). Note that two concepts can also be combined with each other, denoted as \(\mathcal{R}_{1}(a,b)\) and \(\mathcal{R}_{2}(a,c)\). For instance, "A bird with two legs and eating the fish" combines the concepts of a bird having two legs and eating a fish.
### Concept Learning Under the Text-to-Image Paradigm
Prior studies on concept learning have focused on text-conditioned diffusion models, such as Textual Inversion, DreamBooth, and Custom Diffusion. These models operate within the text-to-image (T2I) paradigm, where a text prompt (\(y\)) serves as input to generate the corresponding image (\(x_{gen}\))
Figure 2: A summary of the ConceptBed dataset for large-scale grounded evaluations of concept learners. The collection of concepts is categorized into three classes: (1) Domain, (2) Objects, and (3) Attributes. ConceptBed has 284 unique concepts and four compositional categories. Here, V* is a learned concept.
representing the given prompt \(y\). A popular approach within T2I is the Latent Diffusion Model (LDM), which incorporates two key modules: 1) Textual Encoder (\(C_{\theta}\)): This module generates embeddings corresponding to the input text prompt; 2) Generator (\(\mathcal{E}_{\phi}\)): The generator iteratively estimates the noise in the noisy latent \(z_{t}\) at timestep \(t\), conditioned on the text.
Since T2I models solely consider text input, it becomes necessary to represent the target concept \(c\) in terms of text tokens. These tokens can subsequently be employed to generate images associated with concept \(c\). Therefore, the concept learning task is approached as an image inversion problem, aiming to map the target concept back to the text-embedding space. Let \(\text{V}^{*}\) denote the text tokens corresponding to the learned concept \(c\). Once the optimal mapping from \(\text{V}^{*}\) to the target concept is determined, we can generate concept-specific images using the LDM by providing \(\text{V}^{*}\) in the text prompt. Suppose we are provided with \(m\) images (\(X_{1:m}\)) of the target concept \(c\). Now, in order to learn the text tokens \(\text{V}^{*}\) corresponding to the concept \(c\) from the set of images \(X_{1:m}\), the textual inversion technique aims to optimize \(\text{V}^{*}\) by reconstructing \(X_{1:m}\) using the objective function of the LDM with frozen parameters \(\theta\) and \(\phi\):
\[V^{*}=\underset{v}{\operatorname{argmin}}\mathbb{E}_{z\sim\mathcal{E}(x), \epsilon\sim\mathcal{N}(0,1),t,y,x\in X_{1:m}}\Big{[}||\epsilon-\epsilon_{ \phi}(z_{t},t,x,C_{\theta}(y))||_{2}^{2}\Big{]}, \tag{1}\]
In the case of DreamBooth, instead of finding the optimal \(\text{V}^{*}\), the method optimizes the model parameters \(\phi\) of the noise estimator. This optimization process enables the model to learn the mapping between a randomly initialized \(\text{V}^{*}\) and the target concept \(c\). Once \(\phi^{*}\) is obtained, it can be used to generate images related to the target concept whenever the \(\text{V}^{*}\) token is provided as an input text token. This optimization strategy follows the same LDM objective function2:
Footnote 2: DreamBooth uses an additional regularizer term to maintain compositionality.
\[\mathcal{L}=\mathbb{E}_{z\sim\mathcal{E}(x),\epsilon\sim\mathcal{N}(0,1),t,y,x\in X_{1:m}}\Big{[}||\epsilon-\epsilon_{\phi}(z_{t},t,x,C_{\theta}(V^{*}))||_{2}^{2}\Big{]}, \tag{2}\]
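To make these objectives concrete, the following schematic sketch shows one optimization step in the spirit of Eq. (1); the module interfaces (`vae_encode`, `text_encode`, `unet`) and the noise schedule are placeholders rather than a specific library API, and for DreamBooth (Eq. (2)) the same loop would instead update the noise-estimator parameters.

```python
import torch
import torch.nn.functional as F

def inversion_step(x, vae_encode, text_encode, unet, alphas_cumprod, optimizer):
    """One gradient step of the LDM reconstruction objective used for concept
    inversion; only the parameters handed to `optimizer` (e.g. the embedding of
    the placeholder token V*) are updated, all other modules stay frozen."""
    with torch.no_grad():
        z = vae_encode(x)                                  # latent of a concept image
    t = torch.randint(0, alphas_cumprod.shape[0], (z.shape[0],), device=z.device)
    eps = torch.randn_like(z)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    z_t = a_bar.sqrt() * z + (1.0 - a_bar).sqrt() * eps    # forward diffusion to step t
    cond = text_encode("a photo of V*")                    # prompt containing V*
    loss = F.mse_loss(unet(z_t, t, cond), eps)             # denoising (noise-prediction) loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```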
Once the mapping from \(\text{V}^{*}\) to concept \(c\) is learned, we can generate concept \(c\) images. However, in order to evaluate these generated images, it is essential to verify whether they align with the learned concepts while maintaining compositionality. Suppose we have a set of reference concept images \(X_{1:m}\) that depict a bird eating, a bird standing, a bird with two legs, and so on. In this case, the common concept across these images is bird, represented as \(c\in\mathcal{C}\). Once we determine that the target \(c\) is a bird or a specific species of bird, it becomes straightforward to verify the generated images. To assess the quality of the generated images, we can utilize a concept dataset \(\mathcal{D}=\{(x,c)\}_{1}^{n}\) to train an oracle that can be used as a concept identifier/verifier.
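As an illustration, such an oracle could be a standard supervised classifier trained on \((x,c)\) pairs whose softmax confidence is later queried on generated images; the backbone, optimizer and hyperparameters below are illustrative assumptions, not the choices made in this work.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def train_oracle(loader, num_concepts, epochs=5, lr=1e-3, device="cpu"):
    """Train a concept classifier on a labelled dataset D = {(x, c)}; `loader`
    is assumed to yield (image_batch, concept_id_batch) pairs."""
    model = resnet18(num_classes=num_concepts).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, c in loader:
            x, c = x.to(device), c.to(device)
            loss = ce(model(x), c)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def concept_confidence(model, x, concept_id):
    """Oracle confidence that the images in batch x depict the target concept."""
    return torch.softmax(model(x), dim=-1)[:, concept_id]
```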
## 3 ConceptBed
To improve the reliability of evaluations, we propose a novel evaluation framework called ConceptBed, designed to accurately estimate Concept and Composition Alignment by quantifying deviations in the generated images. In this section, we first introduce the ConceptBed dataset (Section 3.1) and next, we put forward the concept deviation-aware evaluation framework (Section 3.2). Please refer to the appendix for insights on the proposed dataset and evaluation framework.
### ConceptBed: Dataset
We introduce the ConceptBed dataset, a comprehensive collection of concepts to enhance quantitative evaluations of concept learning. Our dataset incorporates popular existing datasets such as ImageNet [8], PACS [24], CUB [41], and Visual Genome [21], enabling us to create a labeled dataset that improves evaluation accuracy. Figure 2 provides an overview of the ConceptBed dataset.
**Learning Styles.** To learn the various styles, we utilize the PACS dataset, which encompasses four distinct domains: Art Painting, Cartoon, Photo, and Sketch. Each style contains images corresponding to seven entities. In this task, the concept learner aims to use examples from one style as a reference and generate style-specific images for all seven entities.
**Learning Objects.** Extracting object-level concepts is accomplished through the utilization of the ImageNet dataset. It comprises 1000 low-level concepts from the WordNet [9] hierarchy. However,
due to the presence of noise in ImageNet images and the lack of relevance to daily life for many concepts, we employ an automated filtering pipeline to ensure the usefulness and quality of the reference concept images. The pipeline involves extracting a list of low-level concepts and their parent concepts from ImageNet, followed by extracting text phrases from Visual Genome containing the concept as a subject in the caption. If an insufficient number of such captions exists (fewer than 10 in Visual Genome) or they cannot be found, the concept is discarded. This filtering process results in 80 concepts (e.g., brambling, squirrel monkey, dutch oven). To obtain high-quality images for concept learners, an object detector is deployed to identify the fraction of pixels belonging to a given concept, ensuring that the realism requirement for compositions is met. Finally, we select the top 100 images for which the concept's pixel-area fraction is largest and greater than 0.4, as sketched below.
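The following sketch summarizes that filtering logic. The helpers (`vg_captions_with_subject`, `images_for_concept`, `concept_pixel_fraction`) are hypothetical placeholders for the actual ImageNet, Visual Genome, and detector tooling; only the thresholds (10 captions, 0.4 pixel fraction, top 100 images) come from the text.

```python
# Hypothetical sketch of the concept-filtering pipeline described above.
from typing import Callable, Dict, List, Tuple

def filter_concepts(
    imagenet_concepts: List[str],
    vg_captions_with_subject: Callable[[str], List[str]],
    images_for_concept: Callable[[str], List[str]],
    concept_pixel_fraction: Callable[[str, str], float],
    min_captions: int = 10,
    min_area: float = 0.4,
    top_k: int = 100,
) -> Dict[str, Tuple[List[str], List[str]]]:
    kept = {}
    for concept in imagenet_concepts:
        captions = vg_captions_with_subject(concept)   # VG captions with the concept as subject
        if len(captions) < min_captions:               # discard sparsely described concepts
            continue
        scored = [(concept_pixel_fraction(concept, img), img)
                  for img in images_for_concept(concept)]
        scored = [(area, img) for area, img in scored if area > min_area]  # detector threshold
        scored.sort(reverse=True)                      # largest concept pixel area first
        kept[concept] = (captions, [img for _, img in scored[:top_k]])
    return kept
```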
**Learning Attributes.** Since ImageNet dataset images are not labeled based on the attributes present in the image, it is necessary to rely on datasets that provide attribute-level grounded labels. Therefore, we employ the CUB dataset, which offers attribute-level labels (such as orange wing, blue forehead, etc.), enabling the ConceptBed to perform evaluations and measure the attribute-level accuracy of concept learners.
**Compositional Reasoning.** In addition to learning new concepts, it is crucial to maintain prior knowledge and associate the acquired concepts with it. To conduct comprehensive evaluations, we use Visual Genome to extract captions in which the concept appears as the subject of the sentence. These captions are categorized into four composition categories (actions, attributes, counting, and relation) through few-shot classification using GPT3 [2]. This categorization allows us to measure the performance of the baselines on each category, providing an in-depth understanding of the varying difficulty levels of different compositions.
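The categorization step can be approximated with a few-shot prompt such as the one below. Here `llm_complete` is a hypothetical stand-in for the actual GPT3 completion call, and the example captions in the prompt are illustrative rather than the ones used to build the dataset.

```python
# Hypothetical sketch of the few-shot composition categorization step.
CATEGORIES = {"action", "attribute", "counting", "relation"}

FEW_SHOT = """Classify the caption into one of: action, attribute, counting, relation.
Caption: a dog running on the beach -> action
Caption: a red car parked outside -> attribute
Caption: three birds on a wire -> counting
Caption: a cup on top of the table -> relation
Caption: {caption} ->"""

def categorize(caption: str, llm_complete) -> str:
    """llm_complete: callable mapping a prompt string to the model's completion text."""
    answer = llm_complete(FEW_SHOT.format(caption=caption)).strip().lower()
    return answer if answer in CATEGORIES else "unknown"
```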
**Dataset Statistics:** The ConceptBed dataset consists of 284 unique concepts, comprising 80 concepts from ImageNet, 200 concepts from CUB, and 4 concepts from PACS. The dataset encompasses over 5000 unique composite prompts distributed across the 80 ImageNet concepts, with each composite prompt having up to two associations. Specifically, we have 3028, 2891, 1267, and 203 prompts contributing to the attribute, relation, action, and counting categories, respectively. In total, the dataset contains approximately 33,000 composite prompts (counting the total number of prompts over the entire set of concepts) for the evaluation of all processed concepts from ImageNet.
We would like to emphasize that our dataset curation pipeline can be readily extended to more extensive datasets, such as OpenImages-v7 [23] and LAION-5B [37]. However, it is important to note that this extension would significantly increase the resource requirements. With the introduction of this dataset, our primary objective is to provide a standardized and benchmarked evaluation framework for concept learners, enhancing research in the field.
### ConceptBed: Evaluation Framework
In this section, we present our Concept Confidence Deviation (CCD) metric, which measures the alignment of generated images with respect to a reference concept. Furthermore, in Section 3.2.2, we outline the necessary evaluation settings to accurately assess the performance of concept learners across various tasks.
#### 3.2.1 Concept Confidence Deviation (CCD)
For simplicity, let us consider a pre-trained text-conditioned diffusion model, denoted as \(g(\cdot)\), which can be further fine-tuned on a specific concept \(c\) such that \(c\in\mathcal{C}_{\textbf{ConceptBed}}\). We assume the availability of concept-specific training images from the ConceptBed dataset, denoted as \(\mathcal{D}^{real}_{c}\in\mathcal{D}^{test}_{\textbf{ConceptBed}}\). The concept learner \(g(\cdot)\) fine-tuned on concept \(c\) using \(\mathcal{D}^{real}_{c}\) is referred to as \(g_{c}(\cdot)\). To begin, we generate a collection of \(N\) images, each representing the learned concept, denoted as \(\mathcal{D}^{gen}_{c}=\{x^{gen}_{c}=g_{c}(p^{i}_{c},rs^{i})\mid\forall i\in[0,N]\}\), where \(p^{i}_{c}\) denotes the concept-specific prompt and \(rs\) represents the random seed.
Figure 3: The outline of ConceptBed evaluation framework.
The existing quantitative evaluation strategies can be formulated as: 1) Concept Alignment: \(score=DINO(\mathcal{D}_{c}^{gen},\mathcal{D}_{c}^{real})\), and 2) Compositional Reasoning: \(score=CLIPScore(\mathcal{D}_{c}^{gen},p_{c})\). Here, DINO measures the pair-wise cosine similarities between generated and target real concept images, while CLIPScore measures the image-text alignment of generated images with respect to the input text prompt.
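For reference, the two baseline scores can be sketched as below; `dino_embed`, `clip_embed_image`, and `clip_embed_text` are hypothetical feature extractors standing in for the actual DINO and CLIP backbones, and the usual CLIPScore rescaling constant is omitted.

```python
# Hedged sketch of the two prior metrics: pairwise DINO cosine similarity and a
# CLIPScore-style image-text similarity (without the usual rescaling factor).
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def dino_score(gen_imgs, real_imgs, dino_embed) -> float:
    gen_feats = [dino_embed(x) for x in gen_imgs]
    real_feats = [dino_embed(x) for x in real_imgs]
    return float(np.mean([cosine(g, r) for g in gen_feats for r in real_feats]))

def clip_score(gen_imgs, prompt, clip_embed_image, clip_embed_text) -> float:
    t = clip_embed_text(prompt)
    return float(np.mean([max(0.0, cosine(clip_embed_image(x), t)) for x in gen_imgs]))
```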
However, these methods fail to capture the inherent concept deviations within the generated images, rendering them ineffective in measuring performance accurately. To address this limitation, the next step involves training an oracle classifier, denoted as \(f(\cdot)\), specifically for the concept detection task using the **ConceptBed** training dataset, \(\mathcal{D}_{\textbf{ConceptBed}}^{train}\). By comparing the oracle's output probability for the concept label \(y_{c}\) on each generated image \(x_{gen}\) with its output probabilities on the real target images \(x_{real}\), we can estimate the deviations associated with the generated images. The Concept Confidence Deviation (CCD) evaluation metric is defined as follows:
\[\texttt{CCD}=-\mathbb{E}\bigg{[}\mathbb{E}_{gen}\big{[}f(y_{c}|x_{gen})- \mathbb{E}_{x_{real}}f(y_{c}|x_{real})\big{]}\bigg{]}, \tag{3}\]
CCD first calculates the mean target probability on the ground truth images and then measures the difference in probability of the generated images. A CCD value of \(0\) indicates that the generated images closely follow the distribution of the ground truth concept images. A positive CCD value suggests that the generated images deviate from the original distribution, while a negative CCD value implies that the generated images adhere to the exact distribution as the training concept images.
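Eq. (3) reduces to a few lines of NumPy. In the sketch below, `oracle_prob` is a hypothetical callable returning the oracle's probability \(f(y_{c}\mid x)\) for the target concept label.

```python
# Sketch of Concept Confidence Deviation (Eq. 3): mean oracle confidence on real
# concept images minus mean oracle confidence on generated ones.
import numpy as np

def ccd(gen_images, real_images, oracle_prob) -> float:
    p_gen = np.array([oracle_prob(x) for x in gen_images])    # f(y_c | x_gen)
    p_real = np.array([oracle_prob(x) for x in real_images])  # f(y_c | x_real)
    return float(p_real.mean() - p_gen.mean())  # = -E[ f(y_c|x_gen) - E[f(y_c|x_real)] ]
```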
#### 3.2.2 Task Specific Evaluation Settings
To efficiently leverage the **ConceptBed** evaluation pipeline, we trained separate oracles on the corresponding **ConceptBed** datasets. Two different types of evaluations are conducted, each with its respective set of oracles: 1) Concept alignment, measured by concept classifiers, and 2) Compositional reasoning, measured by a Visual Question Answering model.
**Concept Alignment:** Concept alignment evaluation was performed on all tasks, including the generated concept images with different composite text prompts. To evaluate the style, a ResNet18 [13] model is trained to distinguish the images between four style concepts. To evaluate the objects, a ConvNeXt [25] model is fine-tuned on 80 classes from the **ConceptBed** using the ImageNet training subset. The Concept Embedding Model (CEM) [43] was trained on CUB to detect the concepts and respective attributes. Images corresponding to the concepts were generated for each task by following the prompts: "A photo of V\({}^{*}\)" for objects and "A photo of a <_entity-name>_ in the style of V\({}^{*}\)" for styles. Here, <_entity-name>_ belongs to the seven classes from the PACS dataset. The remaining task, composition, utilized the same pre-trained ConvNeXt model for concept-alignment, as **ConceptBed** compositions are for 80 ImageNet concepts.
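As an illustration, training the style oracle amounts to standard supervised fine-tuning. The sketch below assumes a hypothetical `style_loader` yielding (image, style label) batches from the ConceptBed training split; the optimizer settings are ours rather than the ones actually used.

```python
# Hedged sketch of training one concept-alignment oracle: a ResNet18 classifier
# over the four PACS style classes.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def train_style_oracle(style_loader, num_styles: int = 4, epochs: int = 5):
    model = resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, num_styles)  # 4-way style head
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)
    ce = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in style_loader:
            loss = ce(model(images), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```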
**Compositional Reasoning:** To measure the image-text alignment with respect to the input prompts, the learned concept token was mapped back to the corresponding ground truth label (i.e., dogs, cats, etc.), turning \(\mathcal{R}(\text{V}^{*},b)\) into \(\mathcal{R}(a,b)\). The image-text similarity was then measured. Unlike previous works, CLIP was not used due to its inability to capture compositions [38]. Instead, following [6], boolean questions are generated for each prompt, which can be answered by a pre-trained VQA model such as ViLT [18]. As ViLT is essentially a classifier, the CCD can be calculated with respect to the confidence of the model associated with a "yes" answer.
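The same deviation idea carries over directly: the sketch below treats the VQA model's probability of answering "yes" as the confidence entering the CCD formula, with `vqa_yes_prob` as a hypothetical wrapper around a pre-trained VQA model such as ViLT.

```python
# Sketch of the VQA-based composition check: plug the probability of a "yes"
# answer into the CCD formula from Eq. (3).
import numpy as np

def composition_ccd(gen_pairs, real_pairs, vqa_yes_prob) -> float:
    # each pair is (image, boolean_question) derived from the composite prompt
    p_gen = np.array([vqa_yes_prob(img, q) for img, q in gen_pairs])
    p_real = np.array([vqa_yes_prob(img, q) for img, q in real_pairs])
    return float(p_real.mean() - p_gen.mean())
```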
## 4 Experiments & Results
In this section, we extensively study text-conditioned diffusion models' ability to learn new concepts while preserving compositionality. To achieve that, we benchmark four state-of-the-art concept learning methodologies. In Section 4.1, we explain the setup of the experiments, and in Section 4.2 we report the evaluation results using **ConceptBed** evaluation framework: Concept Alignment and Compositional Reasoning. At last, we compare our findings with Human Evaluations. Please refer to the appendix for an in-depth discussion about the experimental setup, results, and human evaluations.
### Experimental Setup
In this work, we study four text-conditioned diffusion modeling-based concept learning strategies: 1) Textual Inversion (LDM) [10], 2) Textual Inversion (SD), 3) DreamBooth [35], and 4) Custom Diffusion [22]. We generate \(N=100\) images per concept to measure concept alignment, and \(N=1\) image per prompt for the ImageNet concepts over about \(33,000\) composite text prompts. For a total of 284 concepts, we train all four baselines, which leads to \(1,100\)+ concept-specific fine-tuned models and a total of \(240,000\)+ generated images for evaluation.
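These totals are consistent with a quick back-of-the-envelope check, assuming every baseline is fine-tuned on every concept and generates both evaluation sets:

```python
# Rough consistency check of the quoted scale (assumptions: 4 baselines x 284
# concepts, 100 alignment images per concept, and 1 image per composite prompt).
baselines, concepts = 4, 284
models = baselines * concepts                   # 1136   -> "1,100+" fine-tuned models
alignment_images = baselines * concepts * 100   # 113,600 concept-alignment images
composition_images = baselines * 33_000         # 132,000 composite-prompt images
print(models, alignment_images + composition_images)  # 1136 245600 -> "240,000+" images
```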
### Evaluation Results
**Concept Alignment.** Table 1 shows the overall performance of the baselines in terms of CCD, where a lower score indicates better performance. First, we can observe that CCD for _concept alignment_ is low for the original images, suggesting that the oracle is certain about its predictions. Second, it can be inferred that Custom Diffusion performs poorly, while Textual Inversion (SD) outperforms the other methodologies except in the case of learning styles. We attribute this behavior to differences in textual encoders. LDM trains a BERT-style textual encoder from scratch while SD uses pre-trained CLIP to condition the diffusion model. CLIP contains vast image-text knowledge
Figure 4: We present qualitative examples showcasing the effectiveness of concept learners on the **ConceptBed** dataset. The leftmost column displays four instances of ground truth target concept images. Subsequent columns exhibit target concept-specific images generated by all four baseline methods. The first row demonstrates an example of the style concept (sketch). The second row illustrates the bird concept. Furthermore, the last two rows exhibit generated images based on distinct composite text prompts, leveraging the same learned concept from the second row.
leading to better performance on learning objects but less flexibility to learn different styles as a concept. Surprisingly, if we compare the _concept alignment_ performance with and without composite prompts, we observe that the performance further drops significantly for all baseline methodologies when composite prompts are used. This shows that existing concept learning methodologies find it difficult to maintain the concepts whenever the additional context is provided via composite prompts.
**Compositional Reasoning.** Previously, we discussed concept alignment on composite prompts. Table 2 summarizes the evaluations on composition tasks. Here, we observe the complete opposite trend in results. Custom Diffusion outperforms the other approaches across the composition categories. This result shows the trade-off between learning concepts and at the same time maintaining compositionality in recent concept learning methodologies. Moreover, it can be observed that neither the CLIP score nor VQA accuracy is a reliable metric to evaluate the compositions.
**Sensitivity.** Figure 5 illustrates the distribution of concept confidence deviations for the concept-specific images generated by the different methods from Table 1. Here, we can observe that the distribution for the original ground truth images is tightly concentrated, while for the generated images it is much higher and more spread out.
**Qualitative Results.** Figure 4 provides qualitative examples of concept learning. It can be inferred that Textual Inversion (LDM) learns the sketch concept very well (first row), while DreamBooth and Custom Diffusion struggle to learn it. All of the baselines perform comparatively well in reproducing the learned concept (second row). Interestingly, in the case of compositions, DreamBooth and Custom Diffusion perform well at the cost of losing concept alignment (last two rows). At the same time, the textual inversion approaches cannot reproduce the compositions (e.g., Two V\({}^{*}\)) but they maintain concept alignment. Overall, these qualitative examples align with our quantitative results and strengthen our evaluation framework.
**Human Evaluations.** We perform Human Evaluations using Amazon Mechanical Turk for both types of evaluations: 1) Concept Alignment - to measure the alignment between generated images and ground truth reference images on DomainPACS and ObjectImageNet, and 2) Compositional Reasoning - to measure the image-text alignment. In the case of Concept Alignment, we ask human annotators to rate the likelihood of the target image the same as three reference images. While for Compositional Reasoning we simply ask the annotators to rate the likelihood alignment of the image and the corresponding caption. Table 3 summarizes the performance of prior and proposed (CCD) quantitative
\begin{table}
\begin{tabular}{l c c|c c|c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Concepts**} & \multicolumn{2}{c}{**Fine-grained\({}_{\text{CUB}}\)**} & \multirow{2}{*}{**Composition**} \\ \cline{2-3} \cline{4-5} & \multicolumn{1}{c}{**DomainPACS**} & \multicolumn{1}{c}{**ObjectImageNet**} & \multicolumn{1}{c}{**Object-level**} & \multicolumn{1}{c}{**Attribute-level**} & \\ \hline Textual Inversion (LDM) & **0.0549** & 0.1196 & 0.2252 & 0.1244 & 0.1868 \\ Textual Inversion (SD) & 0.2853 & **0.0662** & **0.0903** & **0.0350** & **0.1163** \\ DreamBooth & **0.6849** & 0.0880 & 0.1055 & 0.0532 & 0.3551 \\ Custom Diffusion & 0.6319 & **0.2309** & **0.3969** & **0.1877** & 0.4882 \\ \hline Original & 0.0000 & 0.0000 & 0.000 & 0.000 & 0.000 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Concept Alignment Evaluation Results. The table shows the performance of concept learners on the CCD (\(\downarrow\)) metric. The table includes results in three parts: Concepts (DomainPACS and ObjectImageNet), Fine-grainedCUB (Object-level and Attribute-level), and Composition. The best- and worst-performing models in each column are highlighted in bold.**
Figure 5: The sensitivity distribution of concept learners on CCD.
metrics with respect to the Human Evaluations. It can be inferred that the CCD is strongly correlated with human preferences and outperforms the prior evaluation metrics by a large amount. This further strengthens the importance of the introduced **ConceptBed** framework.
## 5 Related Work
**Concept Learning.** Concept learning encompasses various problem statements and approaches, depending on the perspective adopted. _Concept Bottleneck Models (CBMs)_[20] and _Concept Embedding Models (CEMs)_[43] treat object attributes as concepts and propose classification strategies to identify these concepts. _Neuro Symbolic Concept Learner (NS-CL)_[26] aims to learn visual concepts by associating them with language semantics, enabling the model to perform visual question answering. _CompPly_[4] and _CRIPP-VQA_[31] propose the challenge of learning concepts that are visually concealed or not easily discernible. _Image Inversion Style Concept Learning_, as exemplified by Xia _et al._[42], takes a different approach. Its objective is to invert a given concept image back into the latent space of a pre-trained model. By faithfully reconstructing the concept from the inverted code using the generator, this method explores the latent representations of concepts.
**Text-to-Image Generative Models.** With advances in vector quantization [40] and diffusion modeling [34], text-to-image generation has improved its performance. Notable works such as DALL-E [33] train transformer models on quantized latent spaces. The current state-of-the-art diffusion-based text-to-image models, such as GLIDE [29], LDM [34], and Imagen [36], have surpassed prior approaches (such as StackGAN [44], StackGAN++ [45], TReCS [19], and DALL-E [33]) and achieved superior performance. Although diffusion-based generative models are state-of-the-art methods, they can be prone to perturbations in input space. Hence, measuring this sensitivity to perturbations is essential.
**Text-to-Image Concept Learning.** Text-conditioned diffusion models, such as LDM and Imagen, have demonstrated their potential for learning new concepts with only a few reference images. Textual Inversion [10] proposes learning the embedding corresponding to the placeholder (V\({}^{*}\)) through optimization. DreamBooth [35] suggests optimizing the UNet parameters instead of optimizing the placeholder embedding. Custom Diffusion [22] combines both approaches by optimizing the placeholder embedding and the Key/Value weights of the cross-attention layers of the UNet model.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{3}{c}{**Relation**} & \multicolumn{3}{c}{**Action**} & \multicolumn{3}{c}{**Attribute**} & \multicolumn{3}{c}{**Counting**} \\ & **CLIP** & **Acc.** & **CCD** & **CLIP** & **Acc.** & **CCD** & **CLIP** & **Acc.** & **CCD** & **CLIP** & **Acc.** & **CCD** \\ \hline Textual Inversion (LDM) & 0.6587 & 80.47\% & 0.2085 & 0.6518 & 83.1\% & 0.2120 & 0.6680 & 81.55\% & 0.1329 & 0.6538 & 76.67\% & 0.1312 \\ Textual Inversion (SD) & 0.6294 & 81.01\% & 0.1729 & 0.6274 & 83.71\% & 0.1885 & 0.6362 & 82.13\% & 0.1089 & 0.6299 & 77.08\% & 0.1041 \\ DreamBooth & 0.7051 & 81.24\% & 0.0545 & 0.6992 & 83.87\% & 0.0480 & 0.6580 & 82.16\% & 0.0347 & 0.6934 & 74.78\% & 0.0047 \\ Custom Diffusion & **0.7056** & **84.63\%** & **0.0460** & **0.7056** & **87.97\%** & **0.0320** & **0.6943** & **84.99\%** & **0.0168** & **0.6905** & **80.05\%** & **0.031** \\ \hline Stable Diffusion & 0.7222 & 85.41\% & 0.0412 & 0.7183 & 88.65\% & 0.0213 & 0.7056 & 84.79\% & 0.0192 & 0.7095 & 80.07\% & 0.0263 \\ Original & 0.6626 & 87.45\% & 0.0000 & 0.6831 & 89.80\% & 0.0000 & 0.6306 & 85.79\% & 0.0000 & 0.6553 & 78.31\% & 0.0000 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Compositional Reasoning Evaluation Results.** The table shows the performance of the prior works for Composition Alignment. CLIP (\(\uparrow\)) is the traditional image-text alignment metric. Accuracy (\(\uparrow\)) is the accuracy of the ViLT VQA classifier on generated boolean questions. And CCD (\(\downarrow\)) is the composition deviations reported from the ViLT model with respect to its performance on original images. The best-performing model is indicated by **bold** numbers, while the performance that is higher than the original data is reported with underline.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline \multirow{2}{*}{**Models**} & \multicolumn{3}{c}{**DomainPACS**} & \multicolumn{3}{c}{**ObjectImageNet**} & \multicolumn{3}{c}{**Compositional Reasoning**} \\ \cline{2-10} & **DINO** (\(\uparrow\)) & **CCD** (\(\downarrow\)) & **Human Score** (\(\uparrow\)) & **DINO** (\(\uparrow\)) & **CCD** (\(\downarrow\)) & **Human Score** (\(\uparrow\)) & **CLIP** (\(\uparrow\)) & **CCD** (\(\downarrow\)) & **Human Score** (\(\uparrow\)) \\ \hline Textual Inversion (LDM) & 0.5073 & 0.0549 & 4.028 & 0.4708 & 0.1196 & 4.069 & 0.6611 & 0.1711 & 2.851 \\ Textual Inversion (SD) & 0.4104 & 0.2853 & 0.484 & 0.4457 & 0.0662 & 4.159 & 0.6309 & 0.1436 & 3.694 \\ DreamBooth & 0.3925 & 0.6849 & 3.083 & 0.4525 & 0.0880 & 4.075 & 0.6919 & 0.0360 & 3.556 \\ Custom Diffusion & 0.3956 & 0.6319 & 3.164 & 0.4450 & 0.2309 & 3.803 & 0.6968 & 0.0204 & 4.178 \\ \hline Correlation (_w.r.t._ Human) & 0.6557 & **-0.9345** & 1.0 & 0.2782 & **-0.9686** & 1.0 & 0.3486 & **-0.7488** & 1.0 \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Human Evaluations.** Comparison of prior quantitative metrics and CCD metric with Human evaluations. DINO [3] based pairwise cosine similarity is the prior evaluation metric [35]. CLIP (CLIPScore) [16] is the traditional reference-free image-text similarity metric. CCD is our presented concept deviation-aware evaluation metric. Here, DomainPACS and ObjectImageNet evaluations are for concept alignment and Compositional reasoning is for image-text similarity. The correlation between CCD and the human evaluations is negative as a lower score on CCD and a higher score on human evaluations mean better performance. |
2303.01133 | Acceptability of classical groups in non-zero characteristic | A group $G$ is called to be acceptable (due to M. Larsen) if for any finite
group $H$, two element-conjugate homomorphisms are globally conjugate. We
answer the acceptability question for general linear, special linear, unitary,
symplectic and all orthogonal groups over an algebraically closed field of
non-zero characteristics. | Saikat Panja | 2023-03-02T10:27:53Z | http://arxiv.org/abs/2303.01133v1 | # Acceptability of classical groups
###### Abstract.
A group \(G\) is said to be _acceptable_ (following M. Larsen) if for any finite group \(H\), any two element-conjugate homomorphisms \(H\longrightarrow G\) are globally conjugate. We answer the acceptability question for the general linear, special linear, unitary, symplectic and all orthogonal groups over an algebraically closed field of non-zero characteristic.
Key words and phrases:acceptable groups, classical groups, element-conjugacy, global conjugacy 2020 Mathematics Subject Classification: 20C33, 20C99 The author (Panja) is supported by PDF-M fellowship from HRI.
## 1. Introduction
We start with a basic result, coming from an application of the representation theory of finite groups over complex numbers. Let \(G\) be a finite group. Consider two homomorphisms \(\phi_{1},\ \phi_{2}:G\longrightarrow\mathrm{GL}(n,\mathbb{C})\). Suppose that for all \(g\in G\) there exists \(h(g)\in\mathrm{GL}(n,\mathbb{C})\), such that \(\phi_{1}(g)=h(g)^{-1}\phi_{2}(g)h(g)\). In this case, these two homomorphisms will be called _element-conjugate homomorphisms_. Note that here \(h:G\longrightarrow\mathrm{GL}(n,\mathbb{C})\) is a function. If there exists a choice of constant function \(h\), we say that \(\phi_{1}\) and \(\phi_{2}\) are _globally conjugate_. Now, note that if the above-mentioned maps \(\phi_{1},\ \phi_{2}\) are element-conjugate, then they are globally conjugate. Indeed if they are element-wise conjugate, their corresponding character values are the same on each of the conjugacy classes of \(G\). Hence their characters are the same. Thus the two homomorphisms are globally conjugate. Now suppose we replace \(\mathrm{GL}(n,\mathbb{C})\) by \(\mathrm{SL}(n,\mathbb{C})\) or \(\mathrm{SO}(n,\mathbb{C})\). Then it is natural to ask whether all element-conjugate homomorphisms are globally conjugate. At this point, we define a group \(\Gamma\) to be _acceptable_ (or _strongly acceptable_) if for all finite groups (resp. all groups) \(G\), two element-conjugate homomorphisms are globally conjugate.
The general question of the relation between element-conjugacy and global conjugacy arises in many contexts such as algebra, number theory, and geometry. For example, the multiplicity-one question in the theory of automorphic forms (see [2]). A motivating question, related to multiplicity-one questions in the theory of automorphic forms, is when a compact group can be the common covering space of a pair of non-isometric isospectral manifolds (See [17]). Motivated by this M. Larsen has dealt with many of the compact or complex simple Lie groups, viewed as real groups, after showing that the
question reduces to the semisimple case in [9]. In a subsequent paper [10] he proved that a connected, simply connected, compact Lie group \(G\) is acceptable if and only if it has no direct factors of type \(B_{n}\ (n\geq 4),D_{n}\ (n\geq 4),\ E_{n},\ \text{or}\ F_{4}\). This carries over to the complex groups as well. Moreover, the group of type \(G_{2}\) is the only exceptional group (simply connected or not) which is acceptable. In the year 2016, Y. Fang, G. Han and B. Sun proved that the following two statements are equivalent:
1. For all connected compact Lie groups \(H\) and all continuous homomorphisms \(\phi,\psi:H\longrightarrow G\), if \(\phi(h)\) and \(\psi(h)\) are conjugate in \(G\) for all \(h\in H\), then \(\phi\) and \(\psi\) are globally conjugate.
2. The Lie algebra of \(G\) contains no simple ideal of type \(D_{n}(n\geq 4),\ E_{6},\ E_{7},\ \text{or}\ E_{8}\).
These results have strong applications in many parts of mathematics. For example how to determine whether a given representation is distinguished or not in terms of the Langlands parameters (see [1]). Another question being that do the multiplicities of the trivial representation of \(H\) in finite-dimensional irreducible representations of \(G\) determine the conjugacy class of \(H\), where \(H\) is a closed subgroup of a complex reductive group \(G\) (see [22]).
Consider a prime \(p\) and take the target group to be one of the general linear, special linear, unitary, symplectic or orthogonal groups over the algebraically closed field \(\overline{\mathbb{F}_{q}}\) of characteristic \(p\). We prove that none of these groups is acceptable. We are further looking into the exceptional groups, which will be carried out in future work.
### Methodology
To work with the above-mentioned groups, we make use of their conjugacy classes. This has been adapted from the work of Green ([5]) and Macdonald ([11]) for the general linear groups and from the work of Wall ([21]) in the cases of unitary, symplectic and orthogonal groups. We will briefly mention these results in due course. The idea is to work with unipotent classes, for which we need to know their class representatives (see [3]); the work [4] of Gonshaw, Liebeck, and O'Brien has been very useful for this. After understanding the conjugacy classes, we construct particular homomorphisms from particular groups to the target groups. In most cases, we use the group \(\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p\mathbb{Z}\) as the source group. We now present the organization of the paper.
### Organization of the paper
We start with some of the general results in Section 2. This section contains two very important results, viz. Lemma 2.1 and Lemma 2.2 which have been the lifeline for most of the proofs. In Section 3 we briefly mention the groups and their conjugacy classes. Thereafter Section 4, Section 5 and Section 6 are devoted to the main proof of the results concerning general and special linear groups, unitary groups, symplectic and orthogonal groups, respectively. Our main results are
Theorem 4.3, Theorem 5.3, Theorem 6.4, Theorem 6.7, Theorem 6.10. We finish the paper by addressing a few future questions in Section 7.
### Acknowledgment
The author would like to thank Professor Anupam Singh from IISER Pune for helpful discussions. A part of this work was finished during a visit to IIT Jodhpur in December 2022; the author takes this opportunity to thank the institute for its hospitality during the stay.
## 2. General results
**Lemma 2.1** (Proposition 1.1, [9]).: _Let \(G=G_{1}\times G_{2}\) for two groups \(G_{1},\ G_{2}\). Then \(G\) is acceptable if and only if \(G_{1}\) and \(G_{2}\) are acceptable._
Proof.: Let \(G_{1}\) and \(G_{2}\) be two acceptable groups. Assume \(\phi_{1},\phi_{2}:H\longrightarrow G_{1}\times G_{2}\) be two homomorphisms which are element-conjugate. Then the maps \(\pi_{1}\circ\phi_{1},\pi_{1}\circ\phi_{2}:H\longrightarrow G_{1}\) are element conjugate, hence globally conjugate, via say \(g_{1}\). Similarly there exists \(g_{2}\in G_{2}\), via which \(\pi_{2}\circ\phi_{1},\pi_{2}\circ\phi_{2}:H\longrightarrow G_{2}\) are globally conjugate. Hence \(\phi_{1},\phi_{2}\) are globally conjugate.
On the other hand considering the monomorphisms \(i_{1}:G_{1}\longrightarrow G_{1}\times G_{2}\) and \(i_{2}:G_{2}\longrightarrow G_{1}\times G_{2}\), we get that if \(G_{1}\times G_{2}\) is acceptable, then so are \(G_{1}\) and \(G_{2}\).
**Lemma 2.2**.: _Let \(G\) be an acceptable group. Then for all \(a\in G\) the subgroup \(\mathscr{C}_{G}(a)\), the centralizer of \(a\) is acceptable._
Proof.: Let \(\varphi_{1},\varphi_{2}:H\longrightarrow\mathscr{C}_{G}(a)\) be two element conjugate homomorphisms. Construct the following homomorphisms
\[\widetilde{\varphi}_{i}:H\times\langle a\rangle\longrightarrow G,\ \text{defined as}\ \widetilde{\varphi}_{i}(h,a^{m})=\varphi_{i}(h)a^{m}. \tag{2.1}\]
Then the homomorphisms are element-conjugate and hence globally conjugate since \(G\) is acceptable. Thus there exists \(g\in G\) such that
\[g\varphi_{1}(h)a^{m}g^{-1}=\varphi_{2}(h)a^{m}\ \text{for all}\ m\in\mathbb{Z},h\in H.\]
On plugging \(h=e_{H}\) and \(m=1\), we get that \(g\in\mathscr{C}_{G}(a)\). Hence we get the result by putting \(m=0\).
**Lemma 2.3** (See Lemma 2.3 of [23] for Lie group set-up).: _Let \(G\) be an acceptable group and \(A\) be an abelian subgroup of \(G\). Then \(\mathscr{C}_{G}(A)\) is an acceptable subgroup._
Proof.: Let \(\varphi_{1},\varphi_{2}:H\longrightarrow\mathscr{C}_{G}(A)\) be two element conjugate homomorphisms. Construct the following homomorphisms
\[\widetilde{\varphi}_{i}:H\times A\longrightarrow G,\ \text{defined as}\ \widetilde{\varphi}_{i}(h,a)=\varphi_{i}(h)a. \tag{2.2}\]
Then the homomorphisms are element-conjugate and hence globally conjugate since \(G\) is acceptable. Thus there exists \(g\in G\) such that
\[g\varphi_{1}(h)ag^{-1}=\varphi_{2}(h)a\text{ for all }a\in A,h\in H.\]
Assuming \(h=e_{H}\), we get that \(g\in\mathscr{C}_{G}(A)\). Then taking \(a=e_{A}\), we get that \(\varphi_{1}\) and \(\varphi_{2}\) are globally conjugate via \(g\).
## 3. Conjugacy classes in the groups
In this section, we will discuss the classical groups in brief and present the description of their conjugacy classes. This will be based on the works [5], [11] and [21].
### General linear group
The general linear group \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\) is defined to be the group of all invertible \(n\times n\) matrices over \(\overline{\mathbb{F}_{q}}\). The conjugacy classes of this group were first determined in [5] and [11] in the case of finite fields. Let \(\Phi\) denote the set of all non-constant, monic, irreducible polynomials \(f(x)\) with coefficients in \(\overline{\mathbb{F}_{q}}\) which are not equal to the polynomial \(x\). Let \(\Lambda\) denote the set of all partitions \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{k})\) with \(\lambda_{i}\geq\lambda_{i+1}\geq 0\), where each \(\lambda_{i}\) is an integer. The conjugacy classes of \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\) are in one-to-one correspondence with the set of all functions \(\alpha:\Phi\longrightarrow\Lambda\) which take the value the empty partition on all but finitely many polynomials in \(\Phi\), and satisfy
\[\sum_{f\in\Phi}|\alpha(f)|\deg f=n.\]
Hence the conjugacy classes of \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\) are determined by combinatorial data consisting of tuples of the form \((f,\lambda_{f})\) where \(f\in\Phi\). We note that two matrices in \(\operatorname{GL}(n,q)\) are conjugate if and only if they are conjugate in \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\), which we will use in Section 7.
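To illustrate this parametrization in the algebraically closed case: every \(f\in\Phi\) is linear, say \(f=x-a\) with \(a\in\overline{\mathbb{F}_{q}}^{*}\), so a class is specified by finitely many eigenvalues \(a_{1},\ldots,a_{k}\) together with the partitions \(\alpha(x-a_{i})\) recording the sizes of the corresponding Jordan blocks. For instance, in \(\operatorname{GL}(3,\overline{\mathbb{F}_{q}})\) the datum \(\alpha(x-a)=(2,1)\) (with \(\alpha(f)\) empty for all other \(f\)) corresponds to the class of \(\begin{pmatrix}a&1&0\\ 0&a&0\\ 0&0&a\end{pmatrix}\).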
### Unitary group
For the field \(\overline{\mathbb{F}_{q}}\), consider an involution \(\sigma\) of \(\overline{\mathbb{F}_{q}}\). This induces an automorphism on the ring \(\overline{\mathbb{F}_{q}}[x]\) and the group \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\). The _unitary_ group is the group of all matrices in \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\) which satisfy \({}^{t}\overline{A}JA=J\), where \(J\) is given by \(\begin{pmatrix}&&&1\\ &&1&\\ &\iddots&&\\ 1&&&\end{pmatrix}\) (the matrix of a hermitian form), and this group will be denoted by \(\operatorname{U}(n,\overline{\mathbb{F}_{q}})\).
**Definition 3.1**.: The _twisted dual_ of a monic degree \(r\) polynomial \(f(x)\in\overline{\mathbb{F}_{q}}[x]\) satisfying \(f(0)\neq 0\), is the polynomial given by \(\widetilde{f}(x)=\overline{f(0)}^{-1}x^{r}\overline{f}(x^{-1})\). The polynomial \(f\) will be called \(\sim\)_-symmetric_ if \(f=\widetilde{f}\). A monic polynomial \(f(x)\in\overline{\mathbb{F}_{q}}[x]\), will be called to be \(\sim\)**-irreducible** if and only if it does not have any proper \(\sim\)-symmetric factor. We will denote the set of all \(\sim\)-irreducible monic polynomials by \(\widetilde{\Phi}\)
We know that elements \(g,h\in\mathrm{U}(n,\overline{\mathbb{F}_{q}})\) are conjugate in \(\mathrm{U}(n,q)\) if and only if they are conjugate in \(\mathrm{GL}(n,\overline{\mathbb{F}_{q}})\) (see [3]) and the conjugacy classes in \(\mathrm{U}(n,\overline{\mathbb{F}_{q}})\) are parametrized by sequences \(\{(f,\lambda)|f\in\widetilde{\Phi},\lambda\in\Lambda\}\) is the set of irreducible polynomials. The class representative for the conjugacy classes can be found in [20].
### Symplectic group and Orthogonal group
For a vector space \(V\) of dimension \(2n\) over \(\overline{\mathbb{F}_{q}}\), consider the unique (up to similarity) non-degenerate alternating bilinear form on \(V\) given by
\[\left\langle(x_{i})_{i=1}^{2n},(y_{j})_{j=1}^{2n}\right\rangle=\sum_{j=1}^{n}x _{j}y_{2n+1-j}-\sum_{i=0}^{n-1}x_{2n-i}y_{i+1}.\]
The symplectic group is the subgroup of \(\mathrm{GL}(2n,\overline{\mathbb{F}_{q}})\) consisting of elements preserving this alternating form on \(V\). By fixing an appropriate basis, the matrix of the form can be chosen to be \(J=\begin{pmatrix}0&\Lambda_{n}\\ -\Lambda_{n}&0\end{pmatrix}\) where \(\Lambda_{n}=\begin{pmatrix}0&0&\cdots&0&1\\ 0&0&\cdots&1&0\\ \vdots&\vdots&\cdots&\vdots&\vdots\\ 1&0&\cdots&0&0\end{pmatrix}\) and
\[\mathrm{Sp}(2n,\overline{\mathbb{F}_{q}})=\{A\in\mathrm{GL}(2n,\overline{ \mathbb{F}_{q}})\mid{}^{t}\!AJA=J\}.\]
Now consider \(V\) to be an \(m\)-dimensional vector space over the field \(\overline{\mathbb{F}_{q}}\). Then there is a unique (up to similarity) non-degenerate quadratic form on \(V\). The orthogonal group consists of the elements of \(\mathrm{GL}(m,\overline{\mathbb{F}_{q}})\) which preserve a non-degenerate quadratic form \(Q\). With respect to an appropriate basis, we will fix the matrices of the symmetric bilinear forms (associated with the quadratic forms) as follows:
\[J_{o}=\begin{pmatrix}0&0&\Lambda_{n}\\ 0&\alpha&0\\ \Lambda_{n}&0&0\end{pmatrix},J_{e}=\begin{pmatrix}0&\Lambda_{n}\\ \Lambda_{n}&0\end{pmatrix}\]
where \(\alpha\in\mathbb{F}_{q}^{\times}\), and \(\Lambda_{l}=\begin{pmatrix}0&0&\cdots&0&1\\ 0&0&\cdots&1&0\\ \vdots&\vdots&\cdots&\vdots&\vdots\\ 1&0&\cdots&0&0\end{pmatrix}\), an \(l\times l\) matrix. Then, the orthogonal group in matrix form is
\[\mathrm{O}(2m,\overline{\mathbb{F}_{q}})=\{A\in\mathrm{GL}(2m,\overline{ \mathbb{F}_{q}})\mid{}^{t}\!AJ_{e}A=J_{e}\},\]
\[\mathrm{O}(2m+1,\overline{\mathbb{F}_{q}})=\{A\in\mathrm{GL}(2m+1,\overline{\mathbb{F}_{q}})\mid{}^{t}\!AJ_{o}A=J_{o}\}.\]
To describe the conjugacy classes of finite symplectic and orthogonal groups, we will need the concept of _self-reciprocal polynomials, symplectic and orthogonal partitions_, which are defined below. For examples of the same, the readers are suggested to have a look at [3].
**Definition 3.2**.: The _dual_ of a monic degree \(r\) polynomial \(f(x)\in k[x]\) satisfying \(f(0)\neq 0\), is the polynomial given by \(f^{*}(x)=f(0)^{-1}x^{r}f(x^{-1})\). The polynomial \(f\) will be called _self reciprocal_ if \(f=f^{*}\). A monic polynomial \(f(x)\in\mathbb{F}_{q}[x]\), will be called to be \(*-\)**irreducible** if and only if it does not have any proper self-reciprocal factor.
**Definition 3.3**.: A **symplectic partition** is a partition of a number \(k\), such that the odd parts have even multiplicity. The set of all symplectic partitions will be denoted as \(\mathcal{D}^{\prime}_{\mathrm{Sp}}\).
**Definition 3.4**.: An **orthogonal partition** is a partition of a number \(k\), such that all even parts have even multiplicity. The set of all orthogonal partitions will be denoted as \(\mathcal{D}^{\prime}_{\mathrm{O}}\).
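For example, \((3,3,2)\) is a symplectic partition of \(8\) (its only odd part, \(3\), occurs twice), whereas \((3,2,2,1)\) is not, since the odd parts \(3\) and \(1\) each occur once. On the other hand, \((3,2,2,1)\) is an orthogonal partition of \(8\), while \((3,2,1)\) is not, since the even part \(2\) occurs only once.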
It can be shown that the characteristic polynomial of a symplectic or orthogonal matrix is self-reciprocal. We follow J. Milnor's terminology [12] to distinguish between the \(*\)-irreducible factors of the characteristic polynomials. We call a \(*\)-irreducible polynomial \(f\) to be
(1) Type 1 if \(f=f^{*}\) and \(f\) is irreducible polynomial of even degree;
(2) Type 2 if \(f=gg^{*}\) and \(g\) is irreducible polynomial satisfying \(g\neq g^{*}\);
(3) Type 3 if \(f(x)=x\pm 1\).
According to [21] the conjugacy classes of \(\mathrm{Sp}(2n,\overline{\mathbb{F}_{q}})\) are parameterized by the functions \(\lambda:\Phi\to\mathcal{P}^{2n}\cup\mathcal{D}^{2n}_{\mathrm{Sp}}\), where \(\Phi\) denotes the set of all monic, non-constant, irreducible polynomials, \(\mathcal{P}^{2n}\) is the set of all partitions of \(1\leq k\leq 2n\) and \(\mathcal{D}^{2n}_{\mathrm{Sp}}\) is the set of all symplectic partitions of \(1\leq k\leq 2n\). Such a \(\lambda\) represents a conjugacy class of \(\mathrm{Sp}(2n,\overline{\mathbb{F}_{q}})\) if and only if
1. \(\lambda(x)=0\),
2. \(\lambda_{\varphi^{*}}=\lambda_{\varphi}\),
3. \(\lambda_{\varphi}\in\mathcal{D}^{n}_{\mathrm{Sp}}\) iff \(\varphi=x\pm 1\),
4. \(\sum_{\varphi}|\lambda_{\varphi}|\mathrm{deg}(\varphi)=2n\).
Also from [21], a similar statement holds for the groups \(\mathrm{O}(n,\overline{\mathbb{F}_{q}})\). The conjugacy classes of \(\mathrm{O}(n,\overline{\mathbb{F}_{q}})\) are parameterized by the functions \(\lambda:\Phi\to\mathcal{P}^{n}\cup\mathcal{D}^{n}_{\mathrm{O}}\), where \(\Phi\) denotes the set of all monic, non-constant, irreducible polynomials, \(\mathcal{P}^{n}\) is the set of all partitions of \(1\leq k\leq n\) and \(\mathcal{D}^{n}_{\mathrm{O}}\) is the set of all orthogonal partitions of \(1\leq k\leq n\). Such a \(\lambda\) represents a conjugacy class of \(\mathrm{O}(n,\overline{\mathbb{F}_{q}})\) if and only if
1. \(\lambda(x)=0\),
2. \(\lambda_{\varphi^{*}}=\lambda_{\varphi}\),
3. \(\lambda_{\varphi}\in\mathcal{D}^{n}_{\mathrm{O}}\) iff \(\varphi=x\pm 1\),
4. \(\sum_{\varphi}|\lambda_{\varphi}|\mathrm{deg}(\varphi)=n\).
Class representatives corresponding to the given data can be found in [19], [18], [4], and we will mention them whenever needed.
## 4. General and special linear groups
**Lemma 4.1**.: _For a field \(\overline{\mathbb{F}_{q}}\), if \(\operatorname{GL}(n+1,\overline{\mathbb{F}_{q}})\) is acceptable, then so is \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\)._
Proof.: We are going to use Lemma 2.2. Take the following element
\[a=\begin{pmatrix}I_{n}&0\\ 0&-1\end{pmatrix}\in\operatorname{GL}(n+1,\overline{\mathbb{F}_{q}}).\]
Then we know that
\[\mathscr{C}_{\operatorname{GL}(n+1,\overline{\mathbb{F}_{q}})}(a)= \left\{\begin{pmatrix}A&0\\ 0&\alpha\end{pmatrix}\middle|_{\begin{subarray}{c}A\in\operatorname{GL}(n, \overline{\mathbb{F}_{q}})\\ \alpha\in\overline{\mathbb{F}_{q}}^{*}\end{subarray}}\right\}\] \[\cong \operatorname{GL}(n,\overline{\mathbb{F}_{q}})\times\overline{ \mathbb{F}_{q}}^{*}.\]
Hence using Lemma 2.1, we conclude that \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\) is acceptable.
**Proposition 4.2**.: _For \(q\geq 5\) the group \(\operatorname{GL}(2,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: Let \(a\neq b\) be two nonzero elements of \(\overline{\mathbb{F}_{q}}\) of order \(p\), where \(q=p^{f}\). Further, assume that \(a^{2}\neq b^{2}\). Note that such a choice exists since \(x^{2}=1\) has less than or equal to two solutions in \(\overline{\mathbb{F}_{q}}\). Consider the following two homomorphisms
\[\varphi_{1},\varphi_{2}:\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p \mathbb{Z}\longrightarrow\operatorname{GL}(2,\overline{\mathbb{F}_{q}})\] \[\text{defined as }\varphi_{1}(1,0)=\begin{pmatrix}1&a\\ 0&1\end{pmatrix},\varphi_{1}(0,1)=\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\] \[\varphi_{2}(1,0)=\begin{pmatrix}1&b\\ 0&1\end{pmatrix},\varphi_{2}(0,1)=\begin{pmatrix}1&a\\ 0&1\end{pmatrix}.\]
Then we know that these two homomorphisms are element-conjugate. We now show that these two homomorphisms are not globally conjugate. On the contrary, assume that they are. So there is a matrix \(\begin{pmatrix}p&q\\ r&s\end{pmatrix}\in\operatorname{GL}(2,\overline{\mathbb{F}_{q}})\) acting as a global conjugator. Hence we have the following equations:
\[\begin{pmatrix}p&q\\ r&s\end{pmatrix}\begin{pmatrix}1&a\\ 0&1\end{pmatrix} =\begin{pmatrix}1&b\\ 0&1\end{pmatrix}\begin{pmatrix}p&q\\ r&s\end{pmatrix}\] \[\text{and }\begin{pmatrix}p&q\\ r&s\end{pmatrix}\begin{pmatrix}1&b\\ 0&1\end{pmatrix} =\begin{pmatrix}1&a\\ 0&1\end{pmatrix}\begin{pmatrix}p&q\\ r&s\end{pmatrix}.\]
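Writing out both sides and comparing entries, these two equations amount to
\[br=0,\qquad pa+q=q+bs,\qquad ra+s=s,\qquad ar=0,\qquad pb+q=q+as,\qquad rb+s=s,\]
so that \(r=0\) (as \(a,b\neq 0\)), while \(pa=bs\) and \(pb=as\).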
Multiplying the relation \(pa=bs\) by \(b\) and the relation \(pb=as\) by \(a\) gives \(abp=b^{2}s=a^{2}s\), so \(s(a^{2}-b^{2})=0\); since \(a^{2}\neq b^{2}\), we get \(s=0\) and then \(p=0\). Together with \(r=0\), this shows that no invertible matrix can play the role of a global conjugator. Hence the conclusion follows.
**Theorem 4.3**.: _Let \(p\) be a prime and \(q=p^{f}\). For \(q>5\) the group \(\mathrm{GL}(n,\overline{\mathbb{F}_{q}})\) is not acceptable for all \(n\in\mathbb{N}\)._
Proof.: Follows from Proposition 4.2 and Lemma 4.1.
**Corollary 4.4**.: _The group \(\mathrm{SL}(2n+1,\overline{\mathbb{F}_{q}})\) is not acceptable for all \(n\in\mathbb{N}\)._
Proof.: We are going to use Lemma 2.2. Assume if possible \(\mathrm{SL}(2n+1,\overline{\mathbb{F}_{q}})\) to be acceptable. Choose
\[a=\begin{pmatrix}-I_{2n}&0\\ 0&1\end{pmatrix}\in\mathrm{SL}(2n+1,\overline{\mathbb{F}_{q}}).\]
Then we get that
\[\mathscr{C}_{\mathrm{SL}(2n+1,\overline{\mathbb{F}_{q}})}(a)=\left\{ \begin{pmatrix}A&0\\ 0&\alpha\end{pmatrix}\middle|\begin{subarray}{c}A\in\mathrm{GL}(2n,\overline{\mathbb{F}_{q}})\\ \alpha\in\overline{\mathbb{F}_{q}}^{*}\\ \det A\cdot\alpha=1\end{subarray}\right\}\] \[\cong \left\{\begin{pmatrix}A&0\\ 0&(\det A)^{-1}\end{pmatrix}\middle|A\in\mathrm{GL}(2n,\overline{\mathbb{F}_{q}})\right\}\] \[\cong \mathrm{GL}(2n,\overline{\mathbb{F}_{q}}).\]
Hence the result follows from Lemma 2.2 and Theorem 4.3.
**Proposition 4.5**.: _Let the power map \(\psi_{n}:\overline{\mathbb{F}_{q}}\longrightarrow\overline{\mathbb{F}_{q}}\) defined as \(\psi_{n}(x)=x^{n}\) be a surjection. Then \(\mathrm{SL}(n,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: On the contrary, assume that \(\mathrm{SL}(n,\overline{\mathbb{F}_{q}})\) is acceptable. We will show that this implies that \(\mathrm{GL}(n,\overline{\mathbb{F}_{q}})\) is acceptable. Note that the map \(\psi_{n}\) is a bijection. Let \(\varphi_{1},\varphi_{2}:G\longrightarrow\mathrm{GL}(n,\overline{\mathbb{F}_{q}})\) be two element conjugate homomorphisms. Consider the following two maps
\[\widetilde{\varphi_{1}},\widetilde{\varphi_{2}}:G\longrightarrow \mathrm{SL}(n,\overline{\mathbb{F}_{q}})\] \[\text{defined as }\widetilde{\varphi_{i}}(x)=\psi_{n}^{-1}(\det \varphi_{i}(x))^{-1}\,\varphi_{i}(x).\]
Note that \(\widetilde{\varphi_{1}}\) and \(\widetilde{\varphi_{2}}\) are element conjugate homomorphisms. Hence by assumption, they are globally conjugate. Since \(\psi_{n}\) is a bijection and conjugate matrices have the same determinant, we get that \(\varphi_{1}\) and \(\varphi_{2}\) are globally conjugate. This contradicts Theorem 4.3 and finishes the proof.
**Corollary 4.6**.: _Let \((q-1,n)=1\). Then \(\mathrm{SL}(n,\overline{\mathbb{F}_{q}})\) is not acceptable._
**Proposition 4.7**.: _For \(n\geq 4\), the group \(\mathrm{SL}(n,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: Fix \(\lambda\in\overline{\mathbb{F}_{q}}\) such that \(\lambda^{2}\neq 1\). Fix the element \(a=\begin{pmatrix}\lambda&&\\ &\dfrac{1}{\lambda}&\\ &&I\end{pmatrix}\in\operatorname{SL}(n,\overline{\mathbb{F}_{q}})\). Then we have that
\[\mathscr{C}_{\operatorname{SL}(n,\overline{\mathbb{F}_{q}})}(a)= \left\{\begin{pmatrix}\alpha&&\\ &\beta&\\ &&A\end{pmatrix}\middle|\begin{subarray}{c}\alpha,\beta\in\overline{\mathbb{F}_{q}}^{*},\ A\in\operatorname{GL}(n-2,\overline{\mathbb{F}_{q}})\\ \alpha\beta\det A=1\end{subarray}\right\}\] \[\cong \overline{\mathbb{F}_{q}}^{*}\times\operatorname{GL}(n-2,\overline{\mathbb{F}_{q}}).\]
Since \(\operatorname{GL}(n-2,\overline{\mathbb{F}_{q}})\) is not acceptable, using Lemma 2.1, we get that \(\operatorname{SL}(n,\overline{\mathbb{F}_{q}})\) is not acceptable.
## 5. Unitary groups
**Lemma 5.1**.: _For a field \(\overline{\mathbb{F}_{q}}\), if \(\operatorname{U}(n+1,\overline{\mathbb{F}_{q}})\) is acceptable, then so is \(\operatorname{U}(n,\overline{\mathbb{F}_{q}})\)._
Proof.: We work with the hermitian form, which is given by the matrix
\[\begin{pmatrix}1&&\\ &1&&\\ &&\ddots&\\ &&&1\end{pmatrix}.\]
Consider the element \(a=\begin{pmatrix}I_{n}&\\ &-1\end{pmatrix}\). Then by Lemma 2.2, we get that the \(\mathscr{C}_{\operatorname{U}(n+1,\overline{\mathbb{F}_{q}})}(a)\) is acceptable, if \(\operatorname{U}(n+1,\overline{\mathbb{F}_{q}})\) is. Now we have that
\[\mathscr{C}_{\operatorname{U}(n+1,\overline{\mathbb{F}_{q}})}(a) =\left\{\begin{pmatrix}A&\\ &\alpha\end{pmatrix}\middle|\begin{subarray}{c}A\in\operatorname{U}(n,\overline{\mathbb{F}_{q}})\\ \alpha\sigma(\alpha)=1\end{subarray}\right\}\] \[=\operatorname{U}(n,\overline{\mathbb{F}_{q}})\times\{\alpha\in\overline{\mathbb{F}_{q}}^{*}\mid\alpha\sigma(\alpha)=1\}.\]
Hence the result follows from Lemma 2.1.
**Proposition 5.2**.: _The group \(\operatorname{U}(4,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: We will work with the hermitian form given by the matrix \(\begin{pmatrix}&&&1\\ &&1\\ &\iddots&\\ 1&&&\end{pmatrix}\). Firstly fix \(1\neq a\in\overline{\mathbb{F}_{q}}^{*}\). Then consider the following two matrices
\[X_{1}=\begin{pmatrix}1&1&&\\ &1&&\\ &&1&\\ &&-1&1\end{pmatrix}\text{ and }X_{a}=\begin{pmatrix}1&a&&\\ &1&&\\ &&1&\\ &&-a&1\end{pmatrix}.\]
Then we know that these two matrices commute and are conjugate to each other. Consider the following two homomorphisms
\[\varphi_{1},\varphi_{2}:\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p \mathbb{Z}\longrightarrow\operatorname{U}(4,\overline{\mathbb{F}_{q}})\] \[\text{defined as }\varphi_{1}((1,0))=X_{1},\varphi_{1}((0,1))=X_{a},\] \[\varphi_{2}((1,0))=X_{a},\varphi_{2}((0,1))=X_{1}.\]
Then we know that these two homomorphisms are element conjugate but not globally conjugate. Hence \(\operatorname{U}(4,\overline{\mathbb{F}_{q}})\) is not acceptable.
**Theorem 5.3**.: _For \(n\geq 4\), the group \(\operatorname{U}(n,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: Follows directly from Lemma 5.1 and Proposition 5.2.
## 6. Symplectic and Orthogonal groups
**Proposition 6.1**.: _For a field \(\overline{\mathbb{F}_{q}}\) if \(\operatorname{Sp}(2m+2,\overline{\mathbb{F}_{q}})\) is an acceptable group, then so is \(\operatorname{Sp}(2m,\overline{\mathbb{F}_{q}})\)._
Proof.: Fix an element \(\lambda\in\overline{\mathbb{F}_{q}}^{*}\) such that \(\lambda^{2}\neq 1\). Then fix the element
\[a=\begin{pmatrix}\lambda&&&&\\ &\dfrac{1}{\lambda}&&&&\\ &&1&&\\ &&-1&&\\ &&\ddots&&\\ &&&&1\\ &&&&-1\end{pmatrix}\in\operatorname{Sp}(2m+2,\overline{\mathbb{F}_{q}}).\]
Then we know that
\[\mathscr{C}_{\mathrm{Sp}(2m+2,\overline{\mathbb{F}_{q}})}(a) =\mathscr{C}_{\mathrm{GL}(2m+2,\overline{\mathbb{F}_{q}})}(a)\cap \mathrm{Sp}(2m+2,\overline{\mathbb{F}_{q}})\] \[=\left\{\begin{pmatrix}\alpha&&\\ &\beta&\\ &&A\end{pmatrix}\middle|\begin{subarray}{c}\alpha,\beta\in\overline{\mathbb{F}_{q}}^{*}\\ A\in\mathrm{GL}(2m,\overline{\mathbb{F}_{q}})\end{subarray}\right\}\cap\mathrm{Sp}(2m+2,\overline{\mathbb{F}_{q}})\] \[\cong\left\{\begin{pmatrix}\alpha&\\ &\frac{1}{\alpha}\end{pmatrix}\middle|\alpha\in\overline{\mathbb{F}_{q}}^{*}\right\}\times\mathrm{Sp}(2m,\overline{\mathbb{F}_{q}}).\]
Hence the result follows from Lemma 2.1.
**Proposition 6.2**.: _The group \(\mathrm{Sp}(2,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: Fix \(a\in(\overline{\mathbb{F}_{q}}^{*})^{2}\setminus\{1\}\). It is easy to see that
\[\begin{pmatrix}1&1\\ &1\end{pmatrix},\begin{pmatrix}1&a\\ &1\end{pmatrix}\in\mathrm{Sp}(2,\overline{\mathbb{F}_{q}})\]
are conjugate to each other by an element of \(\mathrm{Sp}(2,\overline{\mathbb{F}_{q}})\). Consider the following two homomorphisms
\[\varphi_{1},\varphi_{2}:\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p \mathbb{Z}\longrightarrow\mathrm{Sp}(2,\overline{\mathbb{F}_{q}})\] \[\text{given by }\varphi_{1}\left((1,0)\right) =\begin{pmatrix}1&1\\ &1\end{pmatrix},\varphi_{1}\left((0,1)\right)=\begin{pmatrix}1&a\\ &1\end{pmatrix},\] \[\varphi_{2}\left((1,0)\right) =\begin{pmatrix}1&a\\ &1\end{pmatrix},\varphi_{2}\left((0,1)\right)=\begin{pmatrix}1&1\\ &1\end{pmatrix}.\]
Then using the same argument as Proposition 4.2 we get that these two homomorphisms are not globally-conjugate. Hence the result follows.
**Corollary 6.3**.: _The group \(\mathrm{SL}(2,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: Since \(\mathrm{SL}(2,\overline{\mathbb{F}_{q}})\cong\mathrm{Sp}(2,\overline{\mathbb{ F}_{q}})\) (see [6]), the result follows immediately.
**Theorem 6.4**.: _The groups \(\mathrm{Sp}(2n,\overline{\mathbb{F}_{q}})\) are not acceptable._
Proof.: Follows easily from Proposition 6.1 and Proposition 6.2.
**Proposition 6.5**.: _The acceptability of \(\mathrm{O}(2m+1,\overline{\mathbb{F}_{q}})\) implies the acceptability of \(\mathrm{O}(2m-1,\overline{\mathbb{F}_{q}})\)._
Proof.: Fix the element \(a=\begin{pmatrix}-I_{2}&&\\ &I_{2m-1}\end{pmatrix}\in\operatorname{O}(2m+1,\overline{\mathbb{F}_{q}})\). Then by Lemma 2.2 we get that \(\mathscr{C}_{\operatorname{O}(2m+1,\overline{\mathbb{F}_{q}})}(a)\) is acceptable. Now we have that
\[\mathscr{C}_{\operatorname{O}(2m+1,\overline{\mathbb{F}_{q}})}(a) =\left\{\begin{pmatrix}A&\\ &B\end{pmatrix}\middle|\begin{subarray}{c}A\in\operatorname{GL}(2,\overline{\mathbb{F}_{q}})\\ B\in\operatorname{GL}(2m-1,\overline{\mathbb{F}_{q}})\end{subarray}\right\}\cap\operatorname{O}(2m+1,\overline{\mathbb{F}_{q}})\] \[\cong\left\{A\in\operatorname{GL}(2,\overline{\mathbb{F}_{q}})\mid AA^{t}=I\right\}\times\operatorname{O}(2m-1,\overline{\mathbb{F}_{q}}).\]
Hence the result follows from Lemma 2.1.
**Proposition 6.6**.: _The group \(\operatorname{O}(5,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: We will work with the non-degenerate quadratic form of the form
\[\begin{pmatrix}0&0&I_{2}\\ 0&1&0\\ I_{2}&0&0\end{pmatrix},\]
where \(I_{2}\) is the identity matrix of size \(2\times 2\). Consider the matrices
\[\alpha_{1}=\begin{pmatrix}1&1\\ &1\end{pmatrix}\text{ and }\alpha_{a}=\begin{pmatrix}1&a\\ &1\end{pmatrix},\]
where \(a\neq 1\). Consider the matrices
\[X_{1}=\begin{pmatrix}\alpha_{1}&&\\ &1&\\ &&{}^{t}\alpha_{1}^{-1}\end{pmatrix}\text{ and }X_{a}=\begin{pmatrix}\alpha_{a}&&\\ &1&\\ &&{}^{t}\alpha_{a}^{-1}\end{pmatrix}.\]
Then \(X_{1}\cdot X_{a}=X_{a}\cdot X_{1}\) and both are elements of order \(p\) and are conjugate in \(\operatorname{O}(5,\overline{\mathbb{F}_{q}})\). Consider the following two homomorphisms:
\[\varphi_{1},\varphi_{2}:\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p \mathbb{Z}\longrightarrow\operatorname{O}(5,\overline{\mathbb{F}_{q}})\] \[\text{ given by }\varphi_{1}((1,0)) =X_{1},\varphi_{1}((0,1))=X_{a}\] \[\varphi_{2}((1,0)) =X_{a},\varphi_{2}((0,1))=X_{1}.\]
Then although these two homomorphisms are element-conjugate, they are not globally conjugate, using the same argument as of Proposition 4.2.
**Theorem 6.7**.: _For \(m\geq 2\) none of the groups \(\operatorname{O}(2m+1,\overline{\mathbb{F}_{q}})\) are acceptable._
Proof.: Follows from Proposition 6.6 and Proposition 6.5.
**Proposition 6.8**.: _The acceptability of the group \(\operatorname{O}(2m+2,\overline{\mathbb{F}_{q}})\) implies the acceptability of \(\operatorname{O}(2m,\overline{\mathbb{F}_{q}})\)._
Proof.: We will work with the quadratic form
\[\begin{pmatrix}0&1&&&\\ 1&0&&&\\ &&\ddots&&\\ &&&0&1\\ &&&1&0\end{pmatrix}.\]
Consider the element \(a=\begin{pmatrix}I_{2m}&\\ &-I_{2}\end{pmatrix}\in\operatorname{O}(2m+2,\overline{\mathbb{F}_{q}})\). Then by Lemma 2.2, we have that \(\mathscr{C}_{\operatorname{O}(2m+2,\overline{\mathbb{F}_{q}})}(a)\) is acceptable. Note that
\[\mathscr{C}_{\operatorname{O}(2m+2,\overline{\mathbb{F}_{q}})}(a) =\left\{\begin{pmatrix}A&\\ &B\end{pmatrix}\middle|\begin{subarray}{c}A\in\operatorname{GL}(2m,\overline{\mathbb{F}_{q}})\\ B\in\operatorname{GL}(2,\overline{\mathbb{F}_{q}})\end{subarray}\right\}\cap\operatorname{O}(2m+2,\overline{\mathbb{F}_{q}})\] \[=\operatorname{O}(2m,\overline{\mathbb{F}_{q}})\times\operatorname{O}(2,\overline{\mathbb{F}_{q}}).\]
Hence the result follows from Lemma 2.1.
**Proposition 6.9**.: _For \(q\geq 5\), the group \(\operatorname{O}(4,\overline{\mathbb{F}_{q}})\) is not acceptable._
Proof.: In this proof we will work with the quadratic form \(\begin{pmatrix}0&I_{2}\\ I_{2}&0\end{pmatrix}\). The proof is similar to that of Proposition 6.6, but we will replicate it with proper modification for completeness. Consider the matrices
\[\alpha_{1}=\begin{pmatrix}1&1\\ &1\end{pmatrix}\text{ and }\alpha_{a}=\begin{pmatrix}1&a\\ &1\end{pmatrix},\]
where \(a\neq 1\). Consider the matrices
\[X_{1}=\begin{pmatrix}\alpha_{1}&\\ &{}^{t}\alpha_{1}^{-1}\end{pmatrix}\text{ and }X_{a}=\begin{pmatrix}\alpha_{a}&\\ &{}^{t}\alpha_{a}^{-1}\end{pmatrix}.\]
Then \(X_{1}\cdot X_{a}=X_{a}\cdot X_{1}\) and both are elements of order \(p\) and are conjugate in \(\operatorname{O}(4,\overline{\mathbb{F}_{q}})\). Now consider the following two homomorphisms:
\[\varphi_{1},\varphi_{2}:\mathbb{Z}/p\mathbb{Z}\oplus\mathbb{Z}/p \mathbb{Z}\longrightarrow\operatorname{O}(4,\overline{\mathbb{F}_{q}})\] \[\text{given by }\varphi_{1}((1,0)) =X_{1},\varphi_{1}((0,1))=X_{a}\] \[\varphi_{2}((1,0)) =X_{a},\varphi_{2}((0,1))=X_{1}.\]
Then although these two homomorphisms are element-conjugate, they are not globally conjugate, using the same argument as of Proposition 4.2.
**Theorem 6.10**.: _For \(m\geq 2\), the groups \(\operatorname{O}(2m,\overline{\mathbb{F}_{q}})\) are not acceptable._
Proof.: This is an immediate consequence of Proposition 6.8 and Proposition 6.9.
## 7. Concluding remarks and future questions
### Finite groups of Lie type
Consider the finite general linear group \(\operatorname{GL}(n,q)\), where \(q\) is a power of a prime. Following the proof of Theorem 4.3, we easily see that these groups are not acceptable for all \(n\) and \(q\). The same result holds for \(\operatorname{U}(n,q)\) as well. For finite fields, in the case of the symplectic and the (three different) orthogonal groups, the unipotent conjugacy classes break up further into several conjugacy classes. Hence the method adopted here is not applicable to deciding the acceptability of these groups. At this point we believe that these groups are unacceptable as well.
### Exceptional cases
We have seen that the groups discussed in this article are all unacceptable. In contrast, the analogues of these groups (defined over \(\mathbb{C}\) or \(\mathbb{R}\)) are acceptable (see [9, pp. 254] for a precise table). In the same paper M. Larsen proved that \(F_{4}(\mathbb{C})\) and its compact form are unacceptable (see [9, Proposition 3.11]). Hence it will be interesting to see which exceptional groups (defined over \(\overline{\mathbb{F}_{p}}\)) are acceptable.
### Application to invariant theory
Invariant theory studies the ring of those regular functions on an affine variety \(X\) which are constant on the orbits of an action of a linear algebraic group \(G\) on \(X\), where the action is given by a morphism \(G\times X\longrightarrow X\). Consider a field \(\mathbb{F}\) and a subgroup \(G\) of \(\operatorname{GL}(V)\cong\operatorname{GL}(n,\mathbb{F})\). Take an action of \(G\) on a variety \(X\). The algebra \(\mathbb{F}[X]\) of regular functions on \(X\) can be equipped with a \(G\)-module structure by defining \(g\cdot f(v)=f(g^{-1}\cdot v)\) for all \(g\in G\), \(f\in\mathbb{F}[X]\) and \(v\in X\). The ring of invariants
\[\mathbb{F}[X]^{G}=\{f\in\mathbb{F}[X]|g\cdot f=f\text{ for all }g\in G\}\]
is finitely generated, due to classical results of Noether (see [13], [14]). A consequence of the seminal works [7] and [8] of the French mathematician Vincent Lafforgue is the following: when \(\mathbb{F}=\mathbb{C}\) and \(G\) is a connected linear reductive group, if \(\mathbb{C}[G^{n}]^{G}\) (where the action is diagonal conjugation) is generated by \(1\)-argument invariants for all \(n\geq 1\), then \(G\) is acceptable. The Italian mathematician Claudio Procesi proved that this is true for the classical groups (general linear, symplectic, and orthogonal groups) over \(\mathbb{C}\) in his works [15] and [16]. Given that we have proved the unacceptability of \(\operatorname{GL}(n,\overline{\mathbb{F}_{q}})\), \(\operatorname{SL}(n,\overline{\mathbb{F}_{q}})\), \(\operatorname{U}(n,\overline{\mathbb{F}_{q}})\), \(\operatorname{Sp}(2n,\overline{\mathbb{F}_{q}})\) and \(\operatorname{O}^{\epsilon}(n,\overline{\mathbb{F}_{q}})\), we would like to finish by posing the following question: does there exist \(n\geq 1\) such that \(\overline{\mathbb{F}_{q}}[G^{n}]^{G}\) is not generated by \(1\)-argument invariants when \(G\) is one of the groups discussed in this paper?
|
2310.06764 | OmniLingo: Listening- and speaking-based language learning | In this demo paper we present OmniLingo, an architecture for distributing
data for listening- and speaking-based language learning applications and a
demonstration client built using the architecture. The architecture is based on
the Interplanetary Filesystem (IPFS) and puts at the forefront user sovereignty
over data. | Francis M. Tyers, Nicholas Howell | 2023-10-10T16:40:00Z | http://arxiv.org/abs/2310.06764v1 | # OmniLingo: Listening- and speaking-based language learning
###### Abstract
In this demo paper we present OmniLingo, an architecture for distributing data for listening- and speaking-based language learning applications and a demonstration client built using the architecture. The architecture is based on the Interplanetary Filesystem (IPFS) and puts at the forefront user sovereignty over data.
## 1 Introduction
Language learning apps can give a fun, convenient way to learn a new language. Like many uses of the internet, though, there is an opportunity for dark patterns to sneak in: user data hoarding, targeted presentation, and majority-cultural filter bubbles can have negative social impacts, and proprietary or closed-source software and always-connected centralised backends can create unstable infrastructure and restrict user freedom.
In this paper we present OmniLingo -- a free/open-source project to build language learning protocols, software, and infrastructure that avoids these problems, prioritising language communities and user sovereignty over their own data. OmniLingo is designed to be scaleable, both in terms of infrastructure -- it should not require large server farms to run, and in terms of language coverage -- it should not require a large amount of additional effort per language added.
The remainder of the paper is structured as follows: Section 2 gives an overview of the architecture behind OmniLingo; Section 3 describes the language data that is used; Section 4 describes a demonstration user experience given the platform; and Section 7 highlights some future directions for development.
## 2 Architecture
OmniLingo language data is stored on IPFS1 in a hierarchy of JSON2 and MP3 files. The _root index_ of a language data store is a JSON dictionary mapping ISO-639 language codes to _language indices_ and _language metadata_ (see Figure 1).
Footnote 1: InterPlanetary File System (Benet, 2014)
Footnote 2: JavaScript Object Notation
### Ipfs
The InterPlanetary File System (IPFS) is a peer-to-peer content-addressed filesystem protocol and network (Benet, 2014). File content stored on IPFS is identified by a _content-ID_ (CID), a hash generated by an extensible hashing algorithm designed for content-addressable networking. Since the address is generated from the content, modifications to files result in new CIDs.
IPFS network members can generate the CID for a file and then advertise it; other peers will be able to download directly from the hoster or indirectly through other peers on the network.
There are two major implementations of IPFS available, the canonical implementation in Go and a newer partial implementation in JavaScript.
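As a concrete illustration of content addressing, the following hypothetical snippet (not from the paper) adds a file to a local kubo (go-ipfs) daemon and reads it back by CID, assuming the default RPC port 5001 and gateway port 8080:

```python
# Hypothetical sketch: content addressing with a local IPFS daemon.
# Assumes a kubo daemon exposing its RPC API on 5001 and its gateway on 8080.
import requests

API = "http://127.0.0.1:5001/api/v0"
GATEWAY = "http://127.0.0.1:8080/ipfs"

# The CID is derived from the bytes themselves: adding the same content twice
# yields the same address.
resp = requests.post(f"{API}/add", files={"file": ("clip.mp3", b"example audio bytes")})
cid = resp.json()["Hash"]

# Any peer that has (or can find) the content can serve it under this CID.
clip = requests.get(f"{GATEWAY}/{cid}").content
assert clip == b"example audio bytes"
```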
### OmniLingo data structures
The language metadata structure (Figure 2) consists of a “display name” for the language and a set of character rewrite rules to make typing easier. Language indices (Figure 3) are JSON lists of clip structures consisting of an MP3 audio sample and difficulty metadata, used to generate an appropriate exercise for the learner's level. Each clip structure contains a reference to the sentence (including licence metadata, see Figure 4), as well as to additional sentence metadata to inform exercise generation (Figure 5).
Root indexes are encouraged to be published to IPNS, so that clients can receive updates.
```
{
  "or": {
    "cids": [ "Qm..." ],
    "meta": "Qm..."
  },
  "pa-IN": {
    "cids": [ "Qm..." ],
    "meta": "Qm..."
  }
}
```
Figure 1: An OmniLingo root index. The root index is a JSON dictionary mapping ISO-639 language codes to language entries. Each language entry contains a list of references to language indices (see Figure 3) and a reference to a language metadata structure (Figure 2). References are IPFS content-ID multihashes. IPFS CIDs start with Qm, and have been elided for space.
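A minimal client-side sketch of walking these structures (hypothetical code, not the reference client; it assumes a local IPFS gateway and the JSON layouts of Figures 1-3) might look as follows:

```python
# Hypothetical sketch: resolve an OmniLingo root index into per-language clip lists
# through a local IPFS gateway. Field names ("cids", "meta") follow Figure 1.
import requests

GATEWAY = "http://127.0.0.1:8080/ipfs"

def fetch_json(cid):
    return requests.get(f"{GATEWAY}/{cid}").json()

def load_language(root_cid, lang):
    root = fetch_json(root_cid)              # Figure 1: language code -> entry
    entry = root[lang]
    meta = fetch_json(entry["meta"])         # Figure 2: display name, keymap
    clips = []
    for index_cid in entry["cids"]:
        clips.extend(fetch_json(index_cid))  # Figure 3: list of clip structures
    return meta, clips

# meta, clips = load_language("Qm...", "br")  # the root CID would be resolved via IPNS
```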
## 3 Language data
Language data for the project comes primarily from Common Voice [1], a project run by the Mozilla Foundation that collects voice data for training speech recognition systems. For a given language, interested parties provide sentences and participants in the project read them out. This data is then recorded and released to the public under a Creative Commons CC-0 licence every three months. The data is provided in a tab-separated format for transcripts and metadata and MP3 files for the audio data; as such, supporting a language that is not in Common Voice is a simple matter of following the same format.
As of writing there are over 100 languages represented in Common Voice, and over 24,210 hours of recordings. The dataset downloads include both the audio files in MP3 format and the transcripts for each file, along with a certain amount of demographic information such as gender, age range and accent or variant.
By default OmniLingo extracts 10,000 clips for each language, grouping them into 10 buckets by a given difficulty metric. The current metric used is characters per second, that is, the number of
Figure 4: An OmniLingo sentence structure, consisting of a JSON dictionary of original sentence transcript, licence data, and ISO-639 language code.
Figure 5: An OmniLingo sentence metadata structure, consisting of a reference to the sentence structure (see Figure 4), tokenisation of the sentence and matched token tags. Token tags are currently either “X” (no tag) or “PUNCT”. The reference is an IPFS content-ID multihash, and has been elided for space, as have the tokens and tags.
Figure 3: An OmniLingo language index is a JSON list of clip structures. Each clip structure contains some basic metadata about the clip (duration, characters-per-second) and references to the clip MP3, a sentence structure (Figure 4), and a sentence metadata structure (Figure 5). References are IPFS content-ID multihashes, and have been elided for space.
Figure 2: An OmniLingo language metadata structure. Currently language metadata consists of display names and reverse-keymaps.
characters in the transcript divided by the number of seconds of audio. The motivation behind this metric is that slower speakers should be easier to understand. However, working on improved difficulty metrics is an ongoing area of research (see §7 for discussion).
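The metric and the bucketing step can be sketched as follows (illustrative code only, not the project's implementation; the field names "sentence" and "duration" are assumptions):

```python
# Illustrative sketch: characters-per-second difficulty and 10-way bucketing.
def cps(transcript, duration_seconds):
    return len(transcript) / duration_seconds

def bucket(clips, n_buckets=10):
    # Lower characters per second = slower speech = assumed easier, so the first
    # bucket holds the easiest clips. The last bucket absorbs any remainder.
    ranked = sorted(clips, key=lambda c: cps(c["sentence"], c["duration"]))
    size = max(1, len(ranked) // n_buckets)
    return [ranked[i * size:(i + 1) * size] if i < n_buckets - 1 else ranked[i * size:]
            for i in range(n_buckets)]
```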
Transcripts are processed by a separate library _commonvoice-utils_,3 which provides tokenisation for the languages in Common Voice and rudimentary tagging of word tokens (those which can be substituted by a gap) versus punctuation.
Footnote 3: [https://github.com/ftyers/commonvoice-utils](https://github.com/ftyers/commonvoice-utils)
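A naive stand-in for this step (an illustration only, not the commonvoice-utils API) that produces the token/tag pairs stored in the sentence metadata could be:

```python
# Illustrative sketch: split a transcript into tokens and tag each one as "PUNCT"
# or "X", mirroring the two tags used in the sentence metadata (Figure 5).
import re

def tokenise(transcript):
    return re.findall(r"\w+(?:['’-]\w+)*|[^\w\s]", transcript)

def tag(tokens):
    return ["PUNCT" if re.fullmatch(r"[^\w\s]+", t) else "X" for t in tokens]

tokens = tokenise("Gouzout a rit ar pezh zo c'hoarvezet gantañ?")
print(list(zip(tokens, tag(tokens))))
```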
## 4 Tasks and progression
Currently the main task type in OmniLingo's demonstration interface is a gap-filling task. A learner is presented with a sentence (see Figure 6(a)) where one of the words is replaced by a gap. The learner is invited to listen to the audio by clicking on the play button and then to fill in the gap according to what she heard. Once the audio has finished playing, a timer starts to keep track of how long it takes her to fill in the gap. She may re-listen to the audio file as many times as she wishes.
If the learner fills in a gap incorrectly, she is given the correct answer (Figure 6(b)) and may listen to the clip again or move on to the next clip. If she answers correctly (Figure 6(c)), the correct answer is highlighted and again she may move onto the next clip.
Tasks are presented in groups of five, which constitute a level. A learner _passes_ a level when she answers all five tasks in less time than the clip takes to play. Her score is how long it took her to answer subtracted from the total length of the audio at that level, meaning that faster response times result in higher scores.
At any point the learner may discard (or deactivate) a clip by clicking on the discard button, a new random clip from the same bucket is then added to the current group. The user may also choose to skip a particular clip by clicking on the [>] button.
Each time a clip is presented the gap is generated randomly from the word tokens. This means that if a user listens to a clip and cannot fill in the gap, the next time the sentence is presented the gap will likely be in a different place.
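The gap selection and scoring rules described above amount to a few lines of logic; a hypothetical sketch (not the actual client code) is:

```python
# Illustrative sketch of the exercise logic: pick a random word token as the gap,
# then score a level of clips from the answer times.
import random

def choose_gap(tags):
    # only word tokens (tag "X") may be blanked out; a new call may pick a new position
    word_positions = [i for i, t in enumerate(tags) if t != "PUNCT"]
    return random.choice(word_positions)

def level_result(clip_durations, answer_times):
    # pass: every answer arrived in less time than its clip takes to play;
    # score: total audio length minus total answer time, so faster answers score higher
    passed = all(t < d for t, d in zip(answer_times, clip_durations))
    score = sum(d - t for d, t in zip(clip_durations, answer_times))
    return passed, score
```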
The interface indicates which level the learner is currently in \(L\): to the bottom left, what her current
Figure 6: A demonstration interface for OmniLingo showing the gap-filling task in Breton. The sentence _Gouzout a rit ar pezh zo c’hoarvezet gantañ?_ ‘Do you know who arrived with him?’ has a gap for the word _zo_ ‘is’. In Figure 6(b) the learner has made a mistake and written _eo_ (another form of ‘is’) and this is corrected to _zo_.
score is _S_: to the bottom left and how many clips are remaining at this level _R_: to the bottom right.
## 5 Pronunciation feedback
Modern speech recognition architectures allow for the design of pronunciation assistants, analysing input speech and comparing it per-phone against the speech the model was trained on (Moses et al., 2020). We have implemented a pronunciation assistant based on models trained from the speech data we used in our OmniLingo indices.
The reference implementation is based on the Coqui STT platform,4 but it could be adapted to use any browser-based speech recognition system, for example _Whisper_.5
Footnote 4: [https://github.com/coqui-ai/STT/](https://github.com/coqui-ai/STT/)
Footnote 5: [https://github.com/pluja/whisper](https://github.com/pluja/whisper)
The speech recognition models are stored on IPFS and are indexed through a list of models included in the language metadata structure.
When OmniLingo is loaded, the speech recognition model (in this case 45M) is downloaded and stored in localStorage. The speech recorded by the user is stored in the browser and transcoded using the WebAudio API.
The system works as follows: The learner is presented with a sentence, they are asked to record themselves saying the sentence, which they can do any number of times using the record button. When they have finished, they press the get feedback button.
The recording they have made is run through the speech recognition system. The output of this system is a hypothesis transcript. This hypothesis is then aligned against the reference transcript using the Needleman-Wunsch algorithm (Needleman and Wunsch, 1970). The alignment produced contains gaps where the hypothesis does not match the transcript.
In the interface, gaps are filled with the characters from the corresponding position in the transcript and coloured red to indicate that the learner should work on their pronunciation for this part of the sentence. The red is brighter the longer the length of the gap. Thus the gap _-ale-_ in _talentos_ ‘talents’ appears brighter than the gap _-r-_ in _mostra_ ‘show’.
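The alignment step itself is the textbook dynamic-programming algorithm; the following character-level sketch (an illustration with made-up strings, not the client's exact implementation) produces the gapped alignment used for highlighting:

```python
# Illustrative character-level Needleman-Wunsch alignment. Gaps in the hypothesis
# (marked with a middot) indicate transcript spans to highlight for the learner.
def needleman_wunsch(ref, hyp, match=1, mismatch=-1, gap=-1):
    n, m = len(ref), len(hyp)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if ref[i - 1] == hyp[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    aligned_ref, aligned_hyp = [], []
    i, j = n, m
    while i > 0 or j > 0:
        sub = match if i > 0 and j > 0 and ref[i - 1] == hyp[j - 1] else mismatch
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + sub:
            aligned_ref.append(ref[i - 1]); aligned_hyp.append(hyp[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            aligned_ref.append(ref[i - 1]); aligned_hyp.append("·"); i -= 1
        else:
            aligned_ref.append("·"); aligned_hyp.append(hyp[j - 1]); j -= 1
    return "".join(reversed(aligned_ref)), "".join(reversed(aligned_hyp))

ref, hyp = "ela se classificou", "ela se cassifico"   # made-up transcript / ASR output
a_ref, a_hyp = needleman_wunsch(ref, hyp)
to_highlight = [c for c, h in zip(a_ref, a_hyp) if h == "·"]  # characters to colour red
```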
## 6 Voice contribution
We have also implemented a voice contribution system allowing users to generate and publish their own Omnilingo root indices. As with any
    {
      "format": "coqui",
      "licence": "AGPL-3.0",
      "model": "Qm...",
      "src": "https://example.com/models/",
      "type": "acoustic"
    }

Figure 7: An example of the metadata structure for speech recognition models. In this case the format is Coqui STT, the type is acoustic model (for speech recognition) and the model is found at CID “model”.

Figure 8: A list of language models is included in the language metadata structure. For reference see Figure 2.

Table 1: Example of alignment (Alig) between the transcript (Tr) and the ASR hypothesis (Hyp), where a gap is indicated with a middot (·). The sentence is in Portuguese and means “She qualified for the talent show.”
Figure 9: Example of the pronunciation feedback interface, with original sentence (above) and feedback shown (below). The buttons are record, get feedback and skip.
networked system, collecting and preserving data from our users can be done only with their consent. Managing that consent within the context of a decentralised filesystem comes with its own special challenges, and we have designed what we think is as good a privacy- and consent-respecting system as we could.
As opposed to most current systems for data collection via crowd sourcing, in Omnilingo, contributors own their own data and can define their own terms and conditions for its use.
### Omnilingo privacy structures
Our contribution privacy initiative brings with it a handful of new structures. These are introduced bottom-up; read this section backwards if you prefer a top-down introduction.
#### 6.1.1 Omnilingo session keys
An Omnilingo session key is a JSON Web Key6; our implementation uses the SubtleCrypto WebAPI7 to generate and encode these keys. Currently we recommend only 256-bit AES-GCM keys, and our Web client supports only this configuration.
Footnote 6: [https://datatracker.ietf.org/doc/html/rfc7517](https://datatracker.ietf.org/doc/html/rfc7517)
Footnote 7: [https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto](https://developer.mozilla.org/en-US/docs/Web/API/SubtleCrypto)
Omnilingo session keys form the unit of "consent": for a given session key, users may have contributed several samples. If a user wishes to revoke their consent for a sample, they signal this by unpublishing the session key, thus revoking consent for all samples contributed with that key.
For a more positive user experience, we recommend the user-facing interface reference session keys by the pgpfone wordlist8 encoding of their fingerprint.
Footnote 8: [https://web.archive.org/web/20100326141145/http://web.mit.edu/network/pgpfone/manual/index.html](https://web.archive.org/web/20100326141145/http://web.mit.edu/network/pgpfone/manual/index.html)
#### 6.1.2 Omnilingo encrypted object
An Omnilingo encrypted object is an object which has been encrypted by an Omnilingo session key; the structure is:
We wrap in encrypted objects the MP3 of the contribution as well as the list of Omnilingo clip structures.
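One way a client might realise these two structures with the Python cryptography package is sketched below (illustrative only; the field names of the encrypted object and the fingerprint scheme are assumptions, since the exact layouts are not reproduced here):

```python
# Illustrative sketch: generate a 256-bit AES-GCM session key, export it JWK-style,
# and wrap an MP3 as an encrypted object. Field names and the fingerprint are assumptions.
import base64, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# 6.1.1: a session key, its JWK export and a fingerprint (here a SHA-256 of the key bytes)
key = AESGCM.generate_key(bit_length=256)
jwk = {"kty": "oct", "alg": "A256GCM", "k": b64url(key)}
fingerprint = hashlib.sha256(key).hexdigest()

# 6.1.2: encrypt a contribution (e.g. the MP3 bytes) under the session key
def encrypt_object(payload, key):
    nonce = os.urandom(12)  # 96-bit nonce, as recommended for AES-GCM
    return {"nonce": b64url(nonce),
            "ciphertext": b64url(AESGCM(key).encrypt(nonce, payload, None))}

encrypted_clip = encrypt_object(b"...mp3 bytes...", key)
```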
Encrypted clip:
#### 6.1.3 Omnilingo root with encrypted language index
An Omnilingo root with encrypted language index is similar to the classic Omnilingo root index: a JSON dictionary with language codes as keys, but with Omnilingo encrypted language indices as the values.
An example:
#### 6.1.4 Omnilingo encrypted root
An Omnilingo encrypted root is a JSON dictionary; the keys are fingerprints of Omnilingo session keys, and each value is the CID of an Omnilingo root with language indices encrypted with the corresponding session key.
Encrypted roots can optionally contain some of the referenced session keys, allowing decryption. In Figure 14, the key ea6bbc9b... is included.
#### 6.1.5 Omnilingo identity
An Omnilingo identity is an IPNS key (colloquially referred to as a k5). Published to this k5 is an encrypted root, containing the session keys for which the user (the one controlling the private part of the k5) currently grants consent. The Omnilingo client has been updated to
Figure 11: An OmniLingo language index with encrypted clip MP3.
Figure 12: An OmniLingo root index with encrypted language index.
Figure 10: An OmniLingo session key structure.
accept Omnilingo identities, fetching and decrypting the contained encrypted indices.
In Figure 14 the material encrypted by session key ea6b0c9b2 can be used with the controlling user's consent, whereas the material encrypted by session key dab24db6 can no longer be used, as the user has unpublished the key.
### Data flows
There are two new data flows introduced with this system: contributing data, and retrieving contributed data.
#### 6.2.1 Contribution
A contributor client will be drawing sentences from a (presumably classic) Omnilingo language index, and contributing new clips. They start by generating an Omnilingo identity (k5) and a session key. The session key is stored locally.
When the user makes their first contribution (an MP3 recording of them reading a sentence), a new Omnilingo encrypted root index is published to their k5:
As the user makes more contributions, the encrypted clip list grows in length, updating the encrypted language index and encrypted root index, each time republished to the k5, all under the same session key:
At some point, the user decides to "roll" their session key, creating a new session. (A client might decide to do this automatically, e.g. each time it is opened, or each time the language is switched.) A new session key is generated, and everything propagates up to the user identity (k5):
At some later time, the user decides to revoke consent to use the material recorded under key1; the JSON Web Key encoded copy of key1 is removed, and only fpr(key1) remains published under their identity:

Figure 13: An Omnilingo encrypted root. In this example, no decryption keys are provided.

Figure 14: An Omnilingo encrypted root with components encrypted with different keys. One key is provided for decryption, while the other is unavailable.
Consumers who have stored key1 will retain access to this data, just as they would if they had stored the decrypted copies; however, use of it would constitute a violation of the user's consent.
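For completeness, publishing an updated encrypted root index to the identity can be done through the daemon's IPNS endpoints; the following sketch (hypothetical, assuming a local kubo daemon, and with parameter names that may vary between versions) shows the idea:

```python
# Illustrative sketch: create an identity key and (re)publish an encrypted root index to it.
import requests

API = "http://127.0.0.1:5001/api/v0"

def make_identity(name="omnilingo-identity"):
    r = requests.post(f"{API}/key/gen", params={"arg": name, "type": "ed25519"})
    return r.json()["Id"]  # the IPNS name (the "k5...") that clients will resolve

def publish(root_cid, key_name="omnilingo-identity"):
    # Republishing under the same key changes what the identity resolves to,
    # which is how updated (or revoked) roots reach consumers.
    r = requests.post(f"{API}/name/publish",
                      params={"arg": f"/ipfs/{root_cid}", "key": key_name})
    return r.json()
```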
#### 6.2.2 Consumption
Omnilingo consumers now have two types of root indices to deal with: classic root indices and encrypted root indices. An encrypted root index may be detected by the presence of the keys field; iterating over this dictionary then gives the consumer a list of fingerprints to look up in the encrypted root index, as well as the key needed to decode the resulting encrypted language index.
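In Python-like pseudocode (a sketch under the same assumptions as the contribution examples above, not the client's actual code), the consumer-side branching looks like this:

```python
# Illustrative sketch: handle both classic and encrypted root indices.
import base64, json
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def b64url_decode(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def decrypt_object(blob, key):
    return AESGCM(key).decrypt(b64url_decode(blob["nonce"]),
                               b64url_decode(blob["ciphertext"]), None)

def load_root(root, fetch_json):
    """root: parsed JSON of a root index; fetch_json resolves a CID to parsed JSON."""
    if "keys" not in root:
        return root                              # classic root index (Figure 1)
    clips_by_language = {}
    for fpr, jwk in root["keys"].items():        # only sessions whose key is still published
        key = b64url_decode(jwk["k"])
        session_root = fetch_json(root[fpr])     # root with encrypted language indices
        for lang, encrypted_index in session_root.items():
            index = json.loads(decrypt_object(encrypted_index, key))
            clips_by_language.setdefault(lang, []).extend(index)
    return clips_by_language
```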
## 7 Future work
We have presented our prototype web client and a proof-of-concept terminal-based Python client. We would be happy to see native implementations for major device platforms: graphical desktop operating systems as well as smart phones and tablets. We leave as an open question what sorts of extensions can be made to smart watches, speakers, and VR systems.
There is a wide field of possible exercises that could be generated based on OmniLingo data. We imagine that other sources of data could be integrated, e.g. picture-word association from Wikidata (possibly combined with Wiktionary), to build more comprehensive language-learning applications. Distributing these over the IPFS network should be straightforward.
Current client software stores user progress data locally, but modern proprietary language-learning ecosystems allow multiple client programs to sync, storing (or at least transferring) user progress data through the providers' servers. We would like to implement a similar user experience while respecting our users' privacy and fitting organically into OmniLingo's philosophy of decentralisation. We are in the design phase of implementing such a system.
Multiple OmniLingo root indices can be merged to allow for multiple sources of language data to be used. Using the IPFS publish-subscribe notification protocol, OmniLingo root indices could be advertised and automatically collected and merged by clients. Designing a protocol for such advertising (including how to resolve the omnipresent question of trust, as well as domain-specific questions) remains a future project.
|
2301.01992 | Monotonicity of the period and positive periodic solutions of a
quasilinear equation | We investigate the monotonicity of the minimal period of the periodic
solutions of some quasilinear differential equations involving the $p$-Laplace
operator. The monotonicity is obtained as a function of a Hamiltonian energy in
two cases. We first extend to the case $p\ge2$ classical results for $p=2$ due
to Chow and Wang, and Chicone. Then we consider a differential equation
associated with a fundamental interpolation inequality in Sobolev spaces. In
that case, we generalize to $p\ge2$ a recent result due to Benguria, Depassier
and~Loss when $p=2$. | Jean Dolbeault, Marta García-Huidobro, Raúl Manásevich | 2023-01-05T10:18:05Z | http://arxiv.org/abs/2301.01992v4 | # Monotonicity of the period and positive periodic solutions of a quasilinear equation
###### Abstract.
We investigate the monotonicity of the minimal period of the periodic solutions of some quasilinear differential equations and extend results for \(p=2\) due to Chow and Wang, and to Chicone, to the case of the \(p\)-Laplace operator. Our main result is the monotonicity of the period for positive solutions of a nonlinear Euler-Lagrange equation for a minimization problem related with a fundamental interpolation inequality. In particular we generalize to \(p\) greater than \(2\) recent results of Benguria, Depassier and Loss.
Key words and phrases:Hamiltonian systems, quasilinear elliptic equations, \(p\)-Laplace operator, periodic solutions, period, energy levels 2020 Mathematics Subject Classification: Primary: 34C25, 35J92. Secondary: 34L30, 34C23 \({}^{*}\) Corresponding author: Jean Dolbeault
The energy \(E=\frac{1}{p^{\prime}}\,|w^{\prime}|^{p}+\mathcal{V}(w)\) is conserved if \(w\) solves (1) and we are interested in the positive periodic solutions with energy less than \(E_{*}:=\mathcal{V}(0)\) which are enclosed by the homoclinic orbit attached to \((w,w^{\prime})=(0,0)\). We further assume that \(\mathcal{V}\) is such that these solutions are uniquely determined, up to translations, by the energy level \(E\), with minimal period \(T(E)\).
The purpose of this paper is to study under which conditions \(T\) is an increasing function of \(E\) in the range \(0\leq E\leq E_{*}\) where \(E_{*}\) is the energy level of the homoclinic orbit. Furthermore we will consider the asymptotic behaviour of \(T(E)\) as \(E\to 0_{+}\) and as \(E\to(E_{*})_{-}\). Surprisingly enough, the cases \(p=2\) and \(p>2\) differ as \(E\to 0_{+}\).
Our first result is an extension to \(p>2\) of a result of Chow and Wang [8, Theorem 2.1].
**Theorem 1**.: _Let \(p>2\) and assume that \(\mathcal{V}\) is a \(C^{2}\) function on \(\mathbb{R}^{+}\) such that \(\mathcal{V}(A)=0=\mathcal{V}^{\prime}(A)\) and \(\mathcal{V}^{\prime\prime}>0\) on \((0,B)\) with \(B:=\min\{w>A\,:\,\mathcal{V}(w)\geq\mathcal{V}(0)\}\). If \(w\mapsto|\mathcal{V}^{\prime}(w)|^{2}-p^{\prime}\,\mathcal{V}(w)\,\mathcal{V }^{\prime\prime}(w)\) is a positive function, then \(E\mapsto T(E)\) is increasing on \((0,E_{*})\)._
Notice that \(w\mapsto|\mathcal{V}^{\prime}(w)|^{2}-p^{\prime}\,\mathcal{V}(w)\,\mathcal{V}^ {\prime\prime}(w)\) is a positive function if and only if \(w\mapsto\mathcal{V}(w)\,|\mathcal{V}^{\prime}(w)|^{-p^{\prime}}\) is a monotone increasing function.
Our second result is also an extension to \(p>2\) of the monotonicity result in [7, Theorem A] under _Chicone's condition_, which is also a growth condition, but of higher order in the derivatives.
**Theorem 2**.: _Let \(p>2\) and assume that \(\mathcal{V}\) is a \(C^{3}\) function on \(\mathbb{R}^{+}\) such that \(\mathcal{V}(A)=0=\mathcal{V}^{\prime}(A)\) and let \(B:=\min\{w>A\,:\,\mathcal{V}(w)\geq\mathcal{V}(0)\}\). If \(\mathcal{V}/(\mathcal{V}^{\prime})^{2}\) is a convex function, then \(E\mapsto T(E)\) is increasing on \((0,E_{*})\)._
A central motivation for this paper arises from the study of the minimization problem
\[\mu(\lambda):=\inf_{f\in\mathrm{W}^{1,p}(\mathbb{S}^{1})\setminus\{0\}}\frac{ \|f^{\prime}\|^{2}_{\mathrm{L}^{p}(\mathbb{S}^{1})}+\lambda\,\|f\|^{2}_{ \mathrm{L}^{p}(\mathbb{S}^{1})}}{\|f\|^{2}_{\mathrm{L}^{q}(\mathbb{S}^{1})}} \tag{2}\]
where \(q>p\) is an arbitrary exponent and \(\mathbb{S}^{1}\) is the unit circle. The problem can also be seen as the search for the optimal constant in the interpolation inequality
\[\|f^{\prime}\|^{2}_{\mathrm{L}^{p}(\mathbb{S}^{1})}+\lambda\,\|f\|^{2}_{ \mathrm{L}^{p}(\mathbb{S}^{1})}\geq\mu(\lambda)\,\|f\|^{2}_{\mathrm{L}^{q}( \mathbb{S}^{1})}\quad\forall\,f\in\mathrm{W}^{1,p}(\mathbb{S}^{1})\,.\]
Testing the inequality with constant functions shows that \(\mu(\lambda)\leq\bar{\mu}(\lambda):=\lambda\,|\mathbb{S}^{1}|^{\frac{2}{p}-\frac{2}{q}}\). If \(p=2\), it is well known from the _carré du champ_ method [2, 3] that equality holds if and only if \(\lambda\leq d/(q-2)\). If \(\lambda>d/(q-2)\), we have \(\mu(\lambda)<\bar{\mu}(\lambda)\) and optimal functions are non-constant, so that _symmetry breaking_ occurs. The minimization problem with \(p>2\) was studied in [18]. There is an optimal function for (2) and the corresponding Euler-Lagrange equation turns out to be the nonlinear differential equation with nonlocal terms given by
\[-\,\|f^{\prime}\|^{2-p}_{\mathrm{L}^{p}(\mathbb{S}^{1})}\left(\phi_{p}(f^{ \prime})\right)^{\prime}+\lambda\,\|f\|^{2-p}_{\mathrm{L}^{p}(\mathbb{S}^{1})} \,\phi_{p}(f)=\mu(\lambda)\,\|f\|^{2-q}_{\mathrm{L}^{q}(\mathbb{S}^{1})}\,\phi _{q}(f)\,, \tag{3}\]
where we look for positive solutions on \(W^{1,p}(\mathbb{S}^{1})\setminus\{0\}\) or equivalently positive \(2\pi\)-periodic solutions on \(\mathbb{R}\). So far, we do not know the precise value of \(\lambda\) for which there is symmetry breaking but according to [18]_rigidity_ holds if \(0<\lambda<\lambda_{1}\) for some explicit \(\lambda_{1}>0\), where rigidity means that any positive solution of (3) is a constant. In that range, we have \(\mu(\lambda)=\bar{\mu}(\lambda)\). On the contrary, one can prove that _symmetry breaking_ occurs if \(\lambda>\lambda_{2}\) for some \(\lambda_{2}>\lambda_{1}\), so that \(\mu(\lambda)<\bar{\mu}(\lambda)\) and (3) admits non-constant positive solutions for any \(\lambda>\lambda_{2}\). Using homogeneity, scalings and a suitable change of variables, the study of (3) is reduced in [18] to the study of positive periodic solutions on \(\mathbb{R}\) of
\[\left(\phi_{p}(w^{\prime})\right)^{\prime}+\phi_{q}(w)-\phi_{p}(w)=0\,.\] ( \[\star\] )
In this equation, there are no non-local terms but the minimal period of periodic solutions is no more given. Equation (\(\star\)) enters in the framework of (1) with \(A=1\) and potential
\[\mathcal{V}(w)=\tfrac{1}{q}\,|w|^{q}-\tfrac{1}{p}\,|w|^{p}-\left(\tfrac{1}{q}- \tfrac{1}{p}\right), \tag{4}\]
so that \(E_{*}=1/p-1/q\). Positive periodic solutions exist only if the energy level satisfies the condition \(E<E_{*}\). Again, let \(T(E)\) be the minimal period of such a solution. Theorems 1 and 2 do not apply easily and we shall prove directly the following result, which is the main contribution of this paper.
**Theorem 3**.: _Let \(p\) and \(q\) be two exponents such that \(2<p<q\) and consider the positive periodic solutions of (\(\star\)). Then the map \(E\mapsto T(E)\) is increasing on \((0,E_{*})\) with \(\lim_{E\to 0_{+}}T(E)=0\) and \(\lim_{E\to(E_{*})_{-}}T(E)=+\infty\)._
The study of (3) is motivated by rigidity and symmetry breaking results associated with interpolation inequalities on the unit sphere \(\mathbb{S}^{d}\) in one and higher dimensions, that is, \(d\geq 1\). If \(p=2\), a precise description of the threshold value of \(\lambda\) is known in the framework of Markov processes if \(q\) is not too large (see [3] for an overview with historical references that go back to [2]) and from [5, 11, 14, 15, 16, 17, 13] using _entropy methods_ applied to nonlinear elliptic and parabolic equations; also see [12] for an overview and extensions to various related variational problems.
Almost nothing is known beyond [18] if \(p>2\), even for \(d=1\). Our results are a contribution to a better understanding of the fundamental properties of the solutions of (1) in the simplest of the cases when \(p>2\). Without the assumption that \(\mathcal{V}^{\prime}(A)=0\) in Theorems 1 and 2 (which is also satisfied in Theorem 3), it is easy to give similar results so that \(E\mapsto T(E)\) is decreasing, but in the phase plane the solutions are no longer described by orbits enclosed by a homoclinic orbit. Some comments on this issue can be found in Section 2.
In dimension \(d=1\), the bifurcation problem (3) degenerates in the limit case \(p=2\), for which \(\lambda_{1}=\lambda_{2}=1/(q-2)\) according to [2]. We refer to [4, Section 1] for an introduction to the minimization problem (2) with \(p=2\), the issue of the branches and the monotonicity of the period problem. Proving that _symmetry breaking_ occurs if and only if \(\lambda>1/(q-2)\) can be reduced to a proof of the _monotonicity of the minimal period_ using Chicone's criterion [7, Theorem A]. The study of bifurcation problems using the
period function goes back to [23] in case of equations with cubic non-linearities and was later extended to various classes of Hamiltonian systems in [22, 21, 10, 9, 19].
If \(p^{\prime}=p/(p-1)\) is the Holder conjugate of the exponent \(p\) and
\[\mathcal{H}(u,v):=\mathcal{V}(u)+\tfrac{1}{p^{\prime}}\left|v\right|^{p^{ \prime}},\]
Equation (1) can be rewritten as the Hamiltonian system of equations
\[u^{\prime}=\frac{\partial\mathcal{H}}{\partial v}=\phi_{p^{\prime}}(v)\quad \text{and}\quad v^{\prime}=-\,\frac{\partial\mathcal{H}}{\partial u}=-\, \mathcal{V}^{\prime}(u)\]
with \(w=u\) and \(w^{\prime}=\phi_{p^{\prime}}(v)\). Although this Hamiltonian structure may superficially look similar to the conditions of [22, Theorem 1], we have a definitely different set of assumptions. In [21], a much larger set of Hamiltonian systems is considered but again our assumptions differ, for instance for the simple reason that the function \(\phi_{p^{\prime}}\) is not of class \(C^{2}\). Further references on the period function can be found in [24]. There are various other extensions of Chicone's result [7], see for instance [6]. Also notice that there is a computation in [6, Section 4] which turns out to be equivalent to an argument used in the proof of our Theorem 4 (see below in Section 2), although it is stated neither in that form nor as in Theorem 1. The Hamiltonian version of the method has interesting applications to Lotka-Volterra systems.
The monotonicity of the minimal period as a function of the energy level is a question of interest by itself and particularly in the model case of the potential \(\mathcal{V}\) as in (4), even in the case \(p=2\). We quote from [4] that: "It is somewhat surprising that, despite its ubiquity, the monotonicity of the period function for [this problem] in full generality was only established recently." In [20], Miyamoto and Yagasaki proved the monotonicity of the period function for \(p=2\) and for \(q\) an integer. In [24], Yagasaki generalized the result to all values of \(q>2\). Both papers, [20, 24], rely on Chicone's criterion which is difficult to apply to non-integer values of \(q\). The purpose of Benguria, Depassier and Loss in [4] was to give a simplified proof of the monotonicity of the period of the positive solutions of \(w^{\prime\prime}+w^{q-1}-w=0\) (corresponding to \(p=2\) in our notations).
We point out that in many situations in the paper we will consider the equation
\[\big{(}\phi_{p}(w^{\prime})\big{)}^{\prime}+V^{\prime}(w)=0 \tag{5}\]
where \(V\) is a potential of class \(C^{2}\) defined on \(\mathbb{R}\) such that
_There are \(a\), \(b\in\mathbb{R}\) with \(a<0<b\) such that \(V(a)=V(b)=E_{*}>0\), \(V^{\prime}(a)=V(0)=V^{\prime}(0)=0\), \(V^{\prime\prime}(0)\neq 0\), and \(0<V(w)<E_{*}\) for all \(w\in(a,0)\cup(0,b)\)._
The potential \(V(w)\) achieves its minimum on \((a,b)\) at \(w=0\). The relationship of \(V\) with \(\mathcal{V}\) is given by \(V(w)=\mathcal{V}(w+A)\), \(a=-\,A\) and \(b=B-A\). The origin \(w=0\), \(w^{\prime}=0\) is a stationary point of (5) giving rise to a _center_ surrounded by closed periodic orbits with minimal period \(T(E)\), such that these periodic orbits are enclosed by a homoclinic orbit attached to \((a,0)\).
This paper is organized as follows. Section 2 is devoted to the proof of the \(p\)-Laplacian version of results which are classical when \(p=2\) and are summarized in Theorems 1 and 2. We are not aware of such statements in the existing literature but they are natural extensions of the case \(p=2\) and might already be known, so we do not claim any deep originality. The result of Theorem 3 is by far more difficult. In Section 3 we start with problem (1) by making a change of variables and obtain an expression for the minimal period following Chicone's ideas. We also prove some properties of the minimal period when the energy goes to zero and when it goes to the homoclinic level \(E_{*}\). In Section 4 we prove the monotonicity of the minimal period extending, in particular, the results of [4] for \(p=2\) to the more general case of the one-dimensional \(p\)-Laplacian operator \(w\mapsto\left(\phi_{p}(w^{\prime})\right)^{\prime}\), with \(p>2\). Our main result (Theorem 3) is proved in Section 5; the proof is highly non-trivial.
## 2. A \(p\)-Laplacian version of some classical results
This section is devoted to the proof of Theorems 1 and 2. We also provide a slightly more detailed statement of Theorem 1.
We begin by extending [8, Theorem 2.1] by Chow and Wang to the \(p\)-Laplacian situation when \(p\geq 2\).
We recall that \(p^{\prime}=p/(p-1)\) denotes the Holder conjugate of \(p\). Equation (5) has a first integral given by
\[\tfrac{1}{p^{\prime}}\,|w^{\prime}|^{p}+V(w)=E \tag{6}\]
for any energy level \(E\in(0,E_{*})\) and the minimal period is given in terms of the energy by
\[T(E)=\frac{2}{p^{\prime 1/p}}\int_{w_{1}(E)}^{w_{2}(E)}\frac{dw}{\left(E-V(w )\right)^{1/p}} \tag{7}\]
where \(w_{i}(E)\), \(i=1,\,2\), are two roots of \(V(w)=E\) such that
\[a<w_{1}(E)<0<w_{2}(E)<b\quad\text{and}\quad V(w)<E\quad\forall\,w\in\left(w_{1 }(E),w_{2}(E)\right).\]
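As an illustrative numerical check (not part of the paper), one can evaluate (7) directly for the model potential (4), here with the sample exponents \(p=3\) and \(q=5\), and observe that \(T\) increases with \(E\); the sketch below assumes SciPy and relies on quad coping with the weak endpoint singularity \((E-V)^{-1/p}\):

```python
# Illustrative numerical evaluation of T(E) from formula (7) for the model
# potential V(w) = |w+1|^q/q - |w+1|^p/p - (1/q - 1/p), i.e. A = 1, a = -1, b = B - 1.
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

p, q = 3.0, 5.0                    # sample exponents with 2 < p < q
pp = p / (p - 1.0)                 # p' = p/(p-1)
Estar = 1.0 / p - 1.0 / q          # homoclinic energy level E_*
B = (q / p) ** (1.0 / (q - p))     # V(B - 1) = E_*

def V(w):
    return abs(w + 1.0) ** q / q - abs(w + 1.0) ** p / p - (1.0 / q - 1.0 / p)

def period(E):
    w1 = brentq(lambda w: V(w) - E, -1.0 + 1e-12, -1e-12)    # turning point in (a, 0)
    w2 = brentq(lambda w: V(w) - E, 1e-12, B - 1.0 - 1e-12)  # turning point in (0, b)
    integral, _ = quad(lambda w: (E - V(w)) ** (-1.0 / p), w1, w2, limit=200)
    return 2.0 / pp ** (1.0 / p) * integral

energies = np.linspace(0.05, 0.95, 10) * Estar
periods = [period(E) for E in energies]
print(np.all(np.diff(periods) > 0))   # expected: True, i.e. T(E) is increasing
```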
At this point, let us notice that the map \(E\mapsto T(E)\) is a continuous function if we assume that \(w\,V^{\prime}(w)>0\) for any \(w\in(a,0)\cup(0,b)\), but this is not the case if \(V\) admits a local minimum other than \(w=0\) in the interval \((a,b)\). Let us define
\[\gamma(w,E):=p^{\prime}\left(E-V(w)\right),\quad R(w):=V^{\prime}(w)^{2}-p^{ \prime}\,V(w)\,V^{\prime\prime}(w)\]
and notice that
\[\frac{\partial\gamma}{\partial w}=-\,p^{\prime}\,V^{\prime}(w)\quad\text{and} \quad\frac{\partial\gamma}{\partial E}=p^{\prime}\,.\]
The following result is a detailed version of Theorem 1.
**Theorem 4**.: _Let \(p\geq 2\) and consider Equation (5) where we assume that \(V\) satisfies (H1). With the above notations, for any \(E\in(0,E_{*})\), it holds that_
\[\frac{dT}{dE}(E)=\frac{2}{p^{\prime}\,E}\int_{w_{1}(E)}^{w_{2}(E)}\frac{R(w)}{ \gamma(w,E)^{1/p}\,V^{\prime}(w)^{2}}\,dw \tag{8}\]
_if the integral in the right-hand side is finite. Thus if \(R\) is positive on \((a,0)\cup(0,b)\), then the minimal period is increasing._
Notice that from Assumption (H1), we know that \(V(a)=E_{*}>0\) and \(V^{\prime}(a)=0\) so that \(\lim_{w\to a_{+}}V(w)\,|V^{\prime}(w)|^{-p^{\prime}}=+\infty\) and
\[\left(\frac{V}{|V^{\prime}|^{p^{\prime}}}\right)^{\prime}=\frac{R\,V^{\prime} }{|V^{\prime}|^{p^{\prime}+2}}\]
which is incompatible with \(R\) being a negative valued function in a neighbourhood of \(w=a_{+}\). If we remove the assumption that \(V^{\prime}(a)=0\), then it makes sense to assume that \(R\) is a negative function on \((a,0)\cup(0,b)\). In this case, the minimal period is decreasing.
Proof.: The proof relies on the same strategy as for [8, Theorem 2.1]. We skip some details and emphasize only the changes needed to cover the case \(p>2\). Let us set
\[I(E):=\int_{w_{1}(E)}^{w_{2}(E)}\,\gamma(w,E)^{1/p^{\prime}}\,dw\quad\text{and }\quad J(E):=\int_{w_{1}(E)}^{w_{2}(E)}\big{(}\gamma(w,E)-p^{\prime}\,E\big{)} \,\gamma(w,E)^{1/p^{\prime}}\,dw\,.\]
By differentiating with respect to \(E\), we obtain
\[\frac{dI}{dE}(E)=\int_{w_{1}(E)}^{w_{2}(E)}\frac{dw}{\gamma(w,E)^{1/p}}=\frac {1}{2}\,T(E)\quad\text{and}\quad\frac{dJ}{dE}(E)=\int_{w_{1}(E)}^{w_{2}(E)} \frac{\gamma(w,E)-p^{\prime}E}{\gamma(w,E)^{1/p}}\,dw\,,\]
which implies that
\[\frac{dJ}{dE}(E)=I(E)-p^{\prime}\,E\,\frac{dI}{dE}(E)\,. \tag{9}\]
Differentiating once more with respect to \(E\), we get
\[\frac{d^{2}J}{dE^{2}}(E)=(1-p^{\prime})\,\frac{dI}{dE}(E)-p^{\prime}\,E\, \frac{d^{2}I}{dE^{2}}(E)\,. \tag{10}\]
On the other hand, by integrating by parts in
\[\int_{w_{1}}^{w_{2}}\,\gamma^{\frac{p^{\prime}+1}{p^{\prime}}}\,\frac{V^{ \prime 2}-V\,V^{\prime\prime}}{V^{\prime 2}}\,dw=\int_{w_{1}}^{w_{2}}\, \gamma^{\frac{p^{\prime}+1}{p^{\prime}}}\left(\frac{V}{V^{\prime}}\right)^{ \prime}\,dw=-\,\frac{p^{\prime}+1}{p^{\prime}}\int_{w_{1}}^{w_{2}}\,\gamma^{ \frac{1}{p^{\prime}}}\,\frac{V}{V^{\prime}}\,\frac{\partial\gamma}{\partial w }\,dw\,,\]
we obtain
\[J(E)=-\,\frac{p^{\prime}}{p^{\prime}+1}\int_{w_{1}(E)}^{w_{2}(E)}\,\gamma(w,E )^{\frac{p^{\prime}+1}{p^{\prime}}}\,\frac{V^{\prime}(w)^{2}-V(w)\,V^{\prime \prime}(w)}{V^{\prime}(w)^{2}}\,dw\]
by definition of \(J\) and \(\gamma\). See [8] for further details in the case \(p=2\). By differentiating twice this expression of \(J(E)\) with respect to \(E\), we obtain
\[\frac{d^{2}J}{dE^{2}}(E)=-\,p^{\prime}\int_{w_{1}(E)}^{w_{2}(E)}\frac{V^{\prime }(w)^{2}-V(w)\,V^{\prime\prime}(w)}{\gamma(w,E)^{1/p}\,V^{\prime}(w)^{2}}\,dw\,.\]
Since \(T(E)=2\frac{dI}{dE}(E)\), we learn from (10) that
\[\frac{p^{\prime}\,E}{2}\,\frac{dT}{dE}(E)= \,p^{\prime}\,E\,\frac{d^{2}I}{dE^{2}}(E)\] \[= \,(1-p^{\prime})\,\frac{dI}{dE}(E)-\frac{d^{2}J}{dE^{2}}(E)\] \[= \,(1-p^{\prime})\int_{w_{1}(E)}^{w_{2}(E)}\,\frac{dw}{\gamma(w,E )^{1/p}}+p^{\prime}\int_{w_{1}(E)}^{w_{2}(E)}\frac{V^{\prime}(w)^{2}-V(w)\,V^{ \prime\prime}(w)}{\gamma(w,E)^{1/p}\,V^{\prime}(w)^{2}}\,dw\] \[= \,\int_{w_{1}(E)}^{w_{2}(E)}\frac{R(w)}{\gamma(w,E)^{1/p}\,V^{ \prime}(w)^{2}}\,dw\,.\]
This concludes the proof of (8).
Proof of Theorem 2.: Let us consider again Equation (5) with a potential \(V\) which satisfies (H1). We adapt the proof of [7, Theorem A] to the case \(p>2\). Let us consider the function
\[\mathsf{h}(w):=\frac{w}{|w|}\,\sqrt{V(w)} \tag{11}\]
for any \(w\in(a,0)\cup(0,b)\) and extend it by \(\mathsf{h}(0)=0\) at \(w=0\). With the notations of (7), we have \(\mathsf{h}\big{(}w_{1}(E)\big{)}=-\,\sqrt{E}\), \(\mathsf{h}\big{(}w_{2}(E)\big{)}=+\,\sqrt{E}\) and we can reparametrize the interval \(\big{(}w_{1}(E),w_{2}(E)\big{)}\) with some \(\theta\in(-\pi/2,\pi/2)\) such that
\[\sqrt{E}\,\sin\theta=\mathsf{h}(w)\,. \tag{12}\]
With this change of variables, the minimal period can be written as
\[T(E)=2\,\frac{E^{\frac{1}{2}-\frac{1}{p}}}{p^{\prime\frac{1}{p}}}\int_{-\frac {\pi}{2}}^{\frac{\pi}{2}}\frac{\cos\theta^{1-\frac{2}{p}}}{\big{(}\mathsf{h}^ {\prime}\circ\mathsf{h}^{-1}\big{)}\big{(}\sqrt{E}\,\sin\theta\big{)}}\,d \theta\,. \tag{13}\]
Its derivative with respect to \(E\) is given by
\[\frac{dT}{dE}(E)=\big{(}\tfrac{1}{2}-\tfrac{1}{p}\big{)}\,\frac{T(E)}{E}-(p^{ \prime}\,E)^{-\frac{1}{p}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{\mathsf{h }^{\prime\prime}(w)}{\mathsf{h}^{\prime}(w)^{3}}\,\cos\theta^{1-\frac{2}{p}} \,\sin\theta\,d\theta\]
where we use the short-hand notation \(w=\mathsf{h}^{-1}\big{(}\sqrt{E}\,\sin\theta\big{)}\). After an integration by parts, this expression becomes
\[\frac{dT}{dE}(E)=\big{(}\tfrac{1}{2}-\tfrac{1}{p}\big{)}\,\frac{T(E)}{E}+ \tfrac{1}{2}\,(p^{\prime})^{\frac{1}{p^{\prime}}}\,E^{\frac{1}{2}-\frac{1}{p}} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{3\,\mathsf{h}^{\prime\prime}(w)^{2} -\mathsf{h}^{\prime}(w)\,\mathsf{h}^{\prime\prime\prime}(w)}{\mathsf{h}^{ \prime}(w)^{5}}\,\cos\theta^{3-\frac{2}{p}}\,d\theta\]
and one can show that
\[3\,(\mathsf{h}^{\prime\prime})^{2}-\mathsf{h}^{\prime}\,\mathsf{h}^{\prime \prime\prime}=\frac{|V^{\prime}|^{4}}{8\,V^{2}}\left(\frac{V}{|V^{\prime}|^{2} }\right)^{\prime\prime}\]
is positive if and only if \(V/(V^{\prime})^{2}\) is a convex function. This completes the proof of Theorem 2.
## 3. Asymptotic results
As in Section 2, let \(V(w)=\mathcal{V}(w+A)\) and recall that (5) has a first integral given by (6) where \(E\geq 0\) is the energy level. In this short section, we shall assume that (H1) holds with \(a=-\,A\), define
\[\omega:=\sqrt{V^{\prime\prime}(0)}=\sqrt{\mathcal{V}^{\prime\prime}(A)}>0 \tag{14}\]
and make the additional hypothesis
\[\liminf_{w\to 0_{+}}\frac{|V^{\prime}(w+a)|}{w^{p-1}}=\liminf_{w\to 0_{+}} \frac{|\mathcal{V}^{\prime}(w)|}{w^{p-1}}>0\,.\] (H2)
This assumption is satisfied in case of (4) as soon as \(q>p>2\) and in that case \(\omega=\sqrt{\mathcal{V}^{\prime\prime}(1)}=\sqrt{q-p}\), but the following result holds for a much larger class of potentials.
**Lemma 5**.: _Let \(p>1\). If \(V\) is a potential such that (H1) holds, then we have_
\[T(E)\sim\frac{2\,\sqrt{2\,\pi}\,\Gamma\big{(}1-\frac{1}{p}\big{)}}{p^{\prime 1 /p}\,\omega\,\Gamma\big{(}\frac{3}{2}-\frac{1}{p}\big{)}}\,E^{\frac{1}{2}- \frac{1}{p}}\quad\text{as}\quad E\to 0_{+}\]
_with \(\omega\) defined by (14). As a consequence, we obtain_
\[\lim_{E\to 0_{+}}T(E)=0\quad\text{if}\quad p>2\,,\qquad\lim_{E\to 0_{+}}T(E)=\frac{2\,\pi}{\omega}\quad\text{if}\quad p=2\,,\qquad\lim_{E\to 0_{+}}T(E)=+\infty\quad\text{if}\quad p\in(1,2)\,.\]
_Additionally, if (H2) holds, then for any \(p>1\) we have \(\lim_{E\to(E_{*})_{-}}T(E)=+\infty\)._
Proof.: In a neighbourhood of \(w=0\), we can write \(V(w)\sim\frac{1}{2}\,\omega^{2}\,w^{2}\), use (7) and the change of variables \(w=\sqrt{2\,E}\,y/\omega\) to obtain
\[T(E)\sim\frac{2\,\sqrt{2}}{p^{\prime 1/p}\,\omega}\,E^{\frac{1}{2}-\frac{1}{p}} \int_{-1}^{1}\frac{dy}{\big{(}1-y^{2}\big{)}^{1/p}}\quad\text{as}\quad E\to 0 _{+}\,.\]
We obtain the expression of the integral using the formulae [1, 6.2.1 & 6.2.2] for the Euler Beta function.
Now let us consider the limit as \(E\to(E_{*})_{-}\). We learn from (H2) that
\[E_{*}-v(w)\geq\frac{\ell}{p}\,(w-a)^{p}\]
for some \(\ell>0\) if \(w-a>0\) is taken small enough. We deduce from (7) that \(T(E)\) diverges as \(E\to(E_{*})_{-}\)
## 4. The monotonicity of the minimal period
Applying the formulae of Section 2 to study the monotonicity of the minimal period for the periodic solutions of (\(\star\)) with \(p>2\) leads to very complicated expressions for our problem with potential (4). For that reason, it is convenient to introduce a new change of variables as follows. Let \(A=-\,a>0\) and define
\[h(y):=\mathsf{h}(w)=\frac{w}{|w|}\,\sqrt{V(w)}\quad\text{with}\quad y=(w+A)^{p} \tag{15}\]
for any \(w\in(a,0)\cup(0,b)\) and extend it by \(h(0)=0\) at \(w=0\). Here \(\mathsf{h}\) is defined as in Section 2 (proof of Theorem 2, Eq. (12)) while \(h\) is such that
\[h(y)=\mathsf{h}\left(y^{\frac{1}{p}}-A\right)\quad\forall\,y\in(0,B)\]
with \(B=(A+b)^{p}\). We have that
\[\mathsf{h}^{\prime}(w)=p\,y^{\frac{1}{p^{\prime}}}\,h^{\prime}(y)\,.\]
Let us make the simplifying assumption
\[w\,V^{\prime}(w)>0\quad\forall\,w\in(a,0)\cup(0,b)\,.\] (H3)
Under this assumption, \(w_{i}(E)\), \(i=1\), \(2\), are the two roots in \((a,b)\) of \(V(w)=E\), as in Theorem 4, \(V(w)=E\) admits no other root in \((a,b)\) for any \(E\in(0,E_{*})\) and the map \(E\mapsto T(E)\) is continuous. Also notice that
\[h^{\prime}(y)>0\quad\forall\,y\in\big{(}y_{1}(E),A^{p}\big{)}\cup\big{(}A^{p},y_{2}(E)\big{)}\]
where \(y_{1}(E):=\big{(}A-|w_{1}(E)|\big{)}^{p}\) and \(y_{2}(E):=\big{(}A+w_{2}(E)\big{)}^{p}\).
By the above definition of \(h\) and (13), the minimal period can now be computed as
\[T(E)=\mathsf{c}_{p}\,E^{\frac{1}{2}-\frac{1}{p}}\int_{-\frac{\pi}{2}}^{\frac{ \pi}{2}}\frac{\cos\theta^{1-\frac{2}{p}}}{y^{\frac{1}{p^{\prime}}}\,h^{\prime}( y)}\,d\theta\quad\text{with}\quad\mathsf{c}_{p}:=\frac{2}{p\,p^{\prime\frac{1}{p}}} \tag{16}\]
using the _change of variables_\(y\mapsto\theta\in(-\pi/2,\pi/2)\) such that
\[\sqrt{E}\,\sin\theta=h(y)=\frac{y-A}{|y-A|}\,\sqrt{V\big{(}y^{1/p}-A\big{)}}\,. \tag{17}\]
Let us define
\[J:=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{\cos\theta^{1-\frac{2}{p}}}{y^{ \frac{1}{p^{\prime}}}\,h^{\prime}(y)}\,d\theta\]
and notice that \(J\) is a function of \(E\) as a consequence of the change of variables (17): \(y=y(E,\theta)\) is such that
\[\frac{\partial y}{\partial E}=\frac{\sin\theta}{2\,\sqrt{E}\,h^{\prime}(y)}\,.\]
By differentiating \(T(E)\) in (16) with respect to \(E\), we find that
\[\frac{T^{\prime}(E)}{T(E)}=\frac{p-2}{2\,p}\,\frac{1}{E}+\frac{J^{\prime}(E)}{J( E)}\]
where
\[J^{\prime}(E)=-\,\frac{1}{2\,\sqrt{E}}\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}}\,K( y)\cos\theta^{1-\frac{2}{p}}\,\sin\theta\,d\theta\,,\]
\(y\) is given by (17) and
\[K(y):=-\,\frac{1}{h^{\prime}(y)}\,\frac{d}{dy}\left(\frac{1}{y^{\frac{1}{p^{ \prime}}}\,h^{\prime}(y)}\right)=\frac{y^{2}\,h^{\prime\prime}(y)+\frac{1}{p^ {\prime}}\,y\,h^{\prime}(y)}{y^{2+\frac{1}{p^{\prime}}}\left(h^{\prime}(y) \right)^{3}}\,. \tag{18}\]
With \(p>2\), \(E\mapsto T(E)\) is increasing if \(J^{\prime}(E)>0\). Here is a sufficient condition on \(h\), which is in fact an assumption on \(V\).
**Lemma 6**.: _Assume that (H1) and (H3) hold. With the above notations, if the function \(K\) is decreasing on \([A,B]\), then \(J^{\prime}>0\) on \((0,E_{*})\) and the minimal period \(T(E)\) is a monotone increasing function of \(E\)._
Proof.: With \(y(E,\theta)\) defined by (17), the result is a consequence of
\[J^{\prime}(E)=-\,\frac{1}{2\,\sqrt{E}}\int_{0}^{\frac{\pi}{2}}\,\left(K\big{(} y(E,\theta)\big{)}-K\big{(}y(E,-\theta)\big{)}\right)\cos\theta^{1-\frac{2}{p}}\, \sin\theta\,d\theta\]
and \(y(E,-\theta)<y(E,\theta)\) if \(\theta\in(0,\pi/2)\).
We deduce from Lemma 6 a sufficient condition on \(h\) to obtain that the minimal period is monotone increasing.
**Corollary 7**.: _Assume that (H1) and (H3) hold. If \(h\) and and \(1/h^{\prime 2}\) are convex functions, then the minimal period \(T(E)\) is a monotone increasing function of \(E\in(0,E_{*})\)._
Proof.: By convexity of \(1/h^{\prime 2}\), we have that
\[0<\frac{1}{2}\frac{d^{2}}{dy^{2}}\left(\frac{1}{h^{\prime 2}}\right)=-\,\frac{d }{dy}\left(\frac{h^{\prime\prime}}{h^{\prime 3}}\right)\]
and \(\frac{h^{\prime\prime}}{h^{\prime 3}}\) is a decreasing function. Next, from (18) written as
\[K(y)=\frac{1}{y^{\frac{1}{p^{\prime}}}}\,\frac{h^{\prime\prime}(y)}{\big{(}h^{ \prime}(y)\big{)}^{3}}+\frac{1}{p^{\prime}}\,\frac{1}{y^{2-\frac{1}{p}}}\, \frac{1}{\big{(}h^{\prime}(y)\big{)}^{2}}\quad\text{if}\quad y>0\,, \tag{19}\]
we observe that all the factors on the right-hand side of this expression are positive decreasing functions, implying that \(K\) is a decreasing function on \([A,B]\).
## 5. Proof of the main result
By applying Lemma 6 and Corollary 7, we prove Theorem 3. The main difficulty is to establish that \(K\) is monotone decreasing if \(1<m<2\), which is done in Section 5.3.
### Notations
Let us consider (1) with \(\mathcal{V}\) given by (4) and \(q>p\geq 2\), hence \(\mathcal{V}^{\prime}(w)=\phi_{q}(w)-\phi_{p}(w)\), and (1) is reduced to
\[\big{(}\phi_{p}(w^{\prime})\big{)}^{\prime}+\phi_{q}(w)-\phi_{p}(w)=0\,.\] ( \[\star\] )
In particular \(w=1\) is a trivial solution of this equation. All conditions of Section 1 for \(\mathcal{V}\) are satisfied, \(\mathcal{V}\) (resp. \(V\)) reaches a minimum at \(w=1\) (resp. \(w=0\)) and
\[E_{*}=\tfrac{q-p}{p\,q}=\mathcal{V}(B)=\mathcal{V}(0)\quad\text{where}\quad B:= \left(\tfrac{q}{p}\right)^{\frac{1}{q-p}}. \tag{20}\]
In the discussion, we shall consider the three cases: \(m=2\), \(m>2\) and \(1<m<2\), where
\[m=\frac{q}{p}\,.\]
We have that \(V(w)=\mathcal{V}(w+A)\) with \(A=1\), _i.e._,
\[V(w)=\tfrac{1}{q}\,|w+1|^{q}-\tfrac{1}{p}\,|w+1|^{p}-\left(\tfrac{1}{q}-\tfrac {1}{p}\right),\]
With the definitions of (15), we find that
\[q\,V(w)=y^{m}-m\,y-(1-m)\quad\text{with}\quad w=y^{\frac{1}{p}}-1\]
and the _change of variables_\(y\mapsto\theta\in(-\pi/2,\pi/2)\) defined by (17) amounts to
\[\sqrt{E}\,\sin\theta=h(y)=\frac{y-1}{|y-1|}\,\sqrt{\tfrac{1}{q}\left(y^{m}-m\, y+m-1\right)}\,.\]
It is convenient to define
\[\gamma_{m}:=m^{\frac{1}{m-1}}=B^{p}=\left(\tfrac{q}{p}\right)^{\frac{p}{q-p}}\quad\text{and}\quad W(y):=y^{m}-m\,y+m-1\quad\forall\,y\in[0,\gamma_{m}]\,.\]
With these notations, we have
\[0=W(1)<W(y)<W(0)=W(\gamma_{m})=m-1\quad\forall\,y\in(0,1)\cup(1,\gamma_{m})\,.\]
As a special case, note that \(W(y)=(y-1)^{2}\) and \(h(y)=(y-1)/\sqrt{q}\) if \(m=2\). In that case, the result of Theorem 3 is straightforward.
**Lemma 8**.: _If \(m=2\) and \(\mathcal{V}\) given by (4), the minimal period \(T(E)\) is a monotone increasing function of \(E\in(0,E_{*})\) with \(E_{*}=\tfrac{1}{2\,p}\)._
Proof.: The function \(K\) defined by (18) is explicitly given by \(K(y)=\tfrac{q}{p^{\prime}}\,y^{\frac{1}{p}-2}\), hence monotone decreasing, and Lemma 6 applies.
### The case \(m>2\)
We obtain the following result.
**Lemma 9**.: _If \(m>2\), \(h\) and \((h^{\prime})^{-2}\) are convex._
Proof.: Let \(z=y^{m-1}\). With \(0\leq y\leq\gamma_{m}\), \(W(y):=y^{m}-m\,y+m-1\) and the function \(h\) given by \(h(y)=\tfrac{y-1}{|y-1|}\,\sqrt{W(y)/q}\), we find that \(h^{\prime\prime}(y)\) has the sign of \(\tfrac{y-1}{|y-1|}\,\big{(}2\,W(y)\,W^{\prime\prime}(y)-W^{\prime}(y)^{2}\big{)}\), where the second factor is given by
\[F(y):=-\,m^{2}+2\,m\,(m-1)^{2}\,y^{m-2}-2\,m^{2}\,(m-2)\,y^{m-1}+m\,(m-2)\,y^{2m-2}\,.\]
\(\bullet\) If \(y\geq 1\), then \(y^{m-2}\geq 1\), \(-\,m^{2}+2\,m\,(m-1)^{2}\,y^{m-2}\geq m\,(m-2)\,(2\,m-1)\) and
\[F(y)\geq m\,(m-2)\,(z-1)\,(z+1-2\,m)\geq 0\,.\]
\(\bullet\) If \(y\leq 1\), then \(y^{m-2}\leq 1\), \(y^{2m-2}\leq 1\), \(m\,(m-2)\,y^{2m-2}-m^{2}\leq-\,2\,m\,y^{2m-2}\) and
\[-F(y)\geq 2\,m\,(z-1)\,\big{(}z+(m-1)^{2}\big{)}\geq 0\,.\]
In both cases, we conclude that \(h^{\prime\prime}\geq 0\).
The function \(\left(\left(h^{\prime}\right)^{-2}\right)^{\prime\prime}\) has the sign of
\[G(y):=2\,(m-1)\,(m-2)-m\,(2\,m-1)\,y\\ +2\,(m-1)\,(2\,m-1)\,y^{m-1}-2\,(m-2)\,(2\,m-1)\,y^{m}+(m-2)\,y^{2 m-1}\,.\]
Since \(G(1)=G^{\prime}(1)=0\) and
\[G^{\prime\prime}(y)=2\,(m-1)\,(m-2)\,(2\,m-1)\,W(y)\geq 0\,,\]
we conclude that \(G\geq 0\) and \(\left(\left(h^{\prime}\right)^{-2}\right)^{\prime\prime}\geq 0\).
This proves Theorem 3 as a consequence of Corollary 7 and Lemma 9 if \(m>2\).
### The case \(1<m<2\)
We cannot apply Corollary 7 and we have to directly rely on Lemma 6. We recall that \(m=q/p\). Let us start by computing \(K^{\prime}\).
**Lemma 10**.: _The function \(y\mapsto-\,K^{\prime}(y)\) has the sign of \(p^{2}\,y^{2}\,f(a,m,y,z)\) where \(z=y^{m-1}\), the parameters \((a,m)\) are admissible in the sense that_
\[a=\frac{1}{p}\in\left(0,\tfrac{1}{2}\right),\quad m=\frac{q}{p}\in(1,2)\,,\]
_and_
\[f(a,m,y,z)= \,-\,3\,m\,y\,(z-1)^{2}\,(m\,z-1)\] \[\qquad+2\,(m-1-m\,y+y\,z)\,\big{(}2+(1-6\,m+m^{2})\,z+2\,m^{2}\,z ^{2}\big{)}\] \[\,+a\,\Big{(}3\,m\,y\,(z-1)^{3}-6\,(z-1)\,(m\,z-1)\,(m-1-m\,y+y\, z)\Big{)}\] \[\,+a^{2}\,\Big{(}2\,(z-1)^{2}\,(m-1-m\,y+y\,z)\Big{)}\,.\]
Proof.: We set \(y=x^{p}\) so that \(x=y^{1/p}\) and \(\frac{dx}{dy}=\frac{1}{p}\,y^{-1/p^{\prime}}\). Let
\[\Phi(x):=W(y)=x^{mp}-m\,x^{p}+m-1\quad\forall\,x\in\left[0,\gamma_{m}^{1/p} \right],\]
where \(W\) and \(h\) are as in Section 5.1, so that
\[\left|\sqrt{q}\,h^{\prime}(y)\right|^{2}=\left|\frac{\Phi^{\prime}(x)}{2\,p\, y^{1/p^{\prime}}\,\sqrt{\Phi(x)}}\right|_{|x=y^{1/p}}^{2}\,,\]
that is, \(4\,m\,p^{3}\left|y^{1/p^{\prime}}\,h^{\prime}(y)\right|^{2}=(\Phi^{\prime}(x))^{2} /\Phi(x)\) and \(K\) as in (18) can be rewritten as
\[K(y)=-\,\frac{1}{2}\,y^{1/p^{\prime}}\,\frac{d}{dy}\left(\frac{1}{y^{2/p^{\prime }}\,h^{\prime}(y)^{2}}\right)=-\,2\,m\,p^{3}\,\frac{d}{dx}\left(\frac{\Phi(x)} {|\Phi^{\prime}(x)|^{2}}\right).\]
Hence \(-\,K^{\prime}\) has the sign of \(\frac{d^{2}}{dx^{2}}\,(\Phi(x)\,|\Phi^{\prime}(x)|^{-2})\), _i.e._, of \(6\,\Phi\,|\Phi^{\prime\prime}|^{2}-2\,\Phi\,\Phi^{\prime}\,\Phi^{\prime\prime \prime}-3\,|\Phi^{\prime}|^{2}\,\Phi^{\prime\prime}\) and the detailed computation shows that
\[\frac{x^{4}}{q^{2}}\,|\Phi^{\prime}(x)|^{4}\,\frac{d^{2}}{dx^{2}}\left(\frac{ \Phi(x)}{|\Phi^{\prime}(x)|^{2}}\right)=p^{2}\,y^{2}\,f(a,m,y,z),\]
ending the proof of the lemma.
**Lemma 11**.: _With \(\mathcal{V}\) given by (4) and \(2<p<q<2\,p\), \(K\) defined by (18) is monotone decreasing._
Proof.: Keeping the notations of Lemma 10, our goal is to prove that \(y\mapsto f\,(a,m,y,y^{m-1})\) is nonnegative for any \(y\in(0,\gamma_{m})\) whenever the parameters \((a,m)\) are _admissible_.
Let us start by considering its value at some remarkable points.
\(\bullet\) At \((y,z)=(0,0)\), we have \(f(a,m,0,0)=2\,(1-a)\,(2-a)\,(m-1)>0\).
\(\bullet\) At \((y,z)=(1,1)\), we have \(f(a,m,1,1)=0\) but a Taylor expansion shows that
\[f\left(a,m,y,y^{m-1}\right)=\frac{1}{12}\,(m-1)^{3}\,c_{m,a}\,(y-1)^{4}+O\left( (y-1)^{5}\right)\quad\text{as}\quad y\to 1 \tag{21}\]
for any \(a\in(0,1/2)\), where
\[c_{m,a}=12\,m\,a\,(a-m-1)+m\left(2\,m^{2}+7\,m+2\right)\geq c_{m,1/2}=m\,(m+1 )\,(2\,m-1)>0\,.\]
This proves that \(y\mapsto f\,(a,m,y,y^{m-1})\) is positive for any \(y\in(1-\varepsilon,1)\cup(1,1+\varepsilon)\) for some \(\varepsilon=\varepsilon(a,m)>0\) whenever the parameters \((a,m)\) are _admissible_.
\(\bullet\) At \((y,z)=(\gamma_{m},m)\), we have
\[f(a,m,\gamma_{m},m)=(m-1)^{3}\,c_{m,a}\,,\quad c_{m,a}=2\,a^{2}-3\,a\,(2\,m+2- m\,\gamma_{m})+2\left(2\,m^{2}+5\,m+2\right).\]
Using \(\inf_{m\in(1,2)}\frac{3}{4}\,(2\,m+2-m\,\gamma_{m})=\lim_{m\to 1_{+}} \frac{3}{4}\,(2\,m+2-m\,\gamma_{m})=3\,(1-e/4)>1/2\), we have
\[c_{m,a}>c_{m,1/2}=(4-3\,\gamma_{m})\,m^{2}+\left(7-\frac{3}{2} \,\gamma_{m}\right)m+\frac{3}{2}\\ >\lim_{m\to 1_{+}}\left((4-3\,\gamma_{m})\,m^{2}+\left(7-\frac{3}{2} \,\gamma_{m}\right)m+\frac{3}{2}\right)=\frac{1}{2}\,(25-9\,e)>0\,.\]
In the limit as \(m\to 2\), we have \(y=z\) and
\[f(a,2,y,z)=2\,(1-a)\,(2-a)\,(z-1)^{4}\,. \tag{22}\]
Hence \(f\,(a,2,y,y^{m-1})\) is positive unless \(y=1\). We are now going to take a given \(a\in(0,1/2)\) and consider \(m\in(1,2)\) as a parameter. Let us prove that for some \(m_{*}\in(1,2)\), we have \(f\,(a,m,y,y^{m-1})\geq 0\) for any \((m,y)\) such that \(m_{*}<m<2\) and \(0\leq y\leq\gamma_{m}\). We assume by contradiction that there are two sequences \((m_{k})_{k\in\mathbb{N}}\) and \((y_{k})_{k\in\mathbb{N}}\) such that \(1<m_{k}<2\) for any \(k\in\mathbb{N}\), \(\lim_{k\to+\infty}m_{k}=2\), \(0\leq y_{k}\leq\gamma_{m_{k}}\) and \(f\big{(}a,m_{k},y_{k},y_{k}^{m_{k}-1}\big{)}<0\)
for any \(k\in\mathbb{N}\). Up to the extraction of a subsequence, \((y_{k})_{k\in\mathbb{N}}\) converges to some limit \(y_{\infty}\in[0,2]\) and by continuity of \(f\) we know that \(f\left(a,2,y_{\infty},y_{\infty}\right)\leq 0\): the only possibility is \(y_{\infty}=1\) by (22). Since \(f\big{(}a,m_{k},y_{k},y_{k}^{m_{k}-1}\big{)}<0=f(a,m_{k},1,1)\), we learn that \(y_{k}\neq 1\). Since \(\lim_{k\to+\infty}y_{k}=1\), this contradicts (21) or, to be precise, \(|y_{k}-1|\geq\varepsilon(a,m_{k})\), as the reader is invited to check that \(\liminf_{k\to+\infty}\varepsilon(a,m_{k})>0\) because \(f\) is a smooth function of all of its arguments. If we redefine
\[m_{*}(a):=\inf\left\{m\in(1,2)\,:\,f\left(a,m,y,y^{m-1}\right)\geq 0\,\, \forall\,y\in[0,\gamma_{m}]\right\},\]
then we know that for any \(a\in(0,1/2)\), we have \(m_{*}(a)<2\).
We want to prove that \(m_{*}(a)=1\). Again, let us argue by contradiction: if \(m_{*}(a)>1\), then there are two sequences \((m_{k})_{k\in\mathbb{N}}\) and \((y_{k})_{k\in\mathbb{N}}\) such that \(1<m_{k}<m_{*}(a)\) for any \(k\in\mathbb{N}\), \(\lim_{k\to+\infty}m_{k}=m_{*}(a)\), \(0\leq y_{k}\leq\gamma_{m_{k}}\) and \(f\big{(}a,m_{k},y_{k},y_{k}^{m_{k}-1}\big{)}<0\) for any \(k\in\mathbb{N}\). Up to the extraction of a subsequence, \((y_{k})_{k\in\mathbb{N}}\) converges to some limit \(y_{\infty}\in[0,2]\) and, by continuity of \(f\), we know that \(f\left(a,m_{*}(a),y_{\infty},y_{\infty}^{m_{*}(a)-1}\right)\leq 0\). For the same reasons as above, \(y_{\infty}=0\), \(y_{\infty}=1\) and \(y_{\infty}=\gamma_{m_{*}(a)}\) are excluded. Altogether, we have proved that for
\[m=m_{*}(a)\,,\]
we have \(f\left(a,m,y_{\infty},y_{\infty}^{m-1}\right)=0\) for some \(y_{\infty}\in(0,1)\cup(1,\gamma_{m})\) and we also have that \(f\left(a,m,y,y^{m-1}\right)\geq 0\) for any \(y\in(0,1)\cup(1,\gamma_{m})\), so that \(y_{\infty}\) is a local minimizer of \(y\mapsto f\left(a,m,y,y^{m-1}\right)\). As a consequence, we have shown that for \(m=m_{*}(a)>1\) and \(y=y_{\infty}\neq 1\), we have
\[f\left(a,m,y,y^{m-1}\right)=0\quad\text{and}\quad\frac{\partial}{\partial y}f \left(a,m,y,y^{m-1}\right)=0\,. \tag{23}\]
As we shall see below, this contradicts Lemma 12. Hence \(y\mapsto f\left(a,m,y,y^{m-1}\right)\) takes nonnegative values for any _admissible_ parameters \((a,m)\) with \(1<m<2\). By Lemma 10, \(K^{\prime}(y)\leq 0\), thus completing the proof.
We still have to prove that (23) has no solution \(y\in(0,1)\cup(1,\gamma_{m})\). Since
\[y\,\frac{\partial}{\partial y}f\left(a,m,y,y^{m-1}\right)=y\,\frac{\partial f }{\partial y}(a,m,y,z)+\left(m-1\right)z\,\frac{\partial f}{\partial z}(a,m, y,z)\,,\]
we can relax the condition \(z=y^{m-1}\) and prove the slightly more general result.
**Lemma 12**.: _With the notations of Lemma 10, assume that \(m>1\), \(y\in(0,\gamma_{m}]\) and \(z\in(0,m]\). For any admissible parameters \((a,m)\), if_
\[f(a,m,y,z)=0\,, \tag{24a}\] \[y\,\frac{\partial f}{\partial y}(a,m,y,z)+\left(m-1\right)z\, \frac{\partial f}{\partial z}(a,m,y,z)=0\,. \tag{24b}\]
_then \(z=1\)._
Proof.: Solving the system (24a)-(24b) is an _elimination problem_ because the function \(f\), as defined in Lemma 10, is a polynomial in the variables \(a\), \(y\) and \(z\). Since (24a) is a first order equation in \(y\), we can eliminate this variable and find that
\[y=\frac{\mathsf{n}(a,m,z)}{\mathsf{d}(a,m,z)}\]
with
\[\mathsf{n}(a,m,z):=2\,(m-1)\,\big{(}a^{2}\,(z-1)^{2}-3\,a\,(z-1)\, (m\,z-1)\] \[+2\,m^{2}\,z^{2}+\big{(}m^{2}-6\,m+1\big{)}\,z+2\big{)}\,,\] \[\mathsf{d}(a,m,z):=m\,\big{(}2\,a^{2}\,(z-1)^{2}+3\,a\,(z+1)^{2} \,(z-1)+9\,z^{2}+8\,z+1\big{)}\] \[-2\,z\,\big{(}a^{2}\,(z-1)^{2}+3\,a\,(z-1)+z+2\big{)}\] \[-m^{2}\,z\,\big{(}6\,a\,(z-1)+z^{2}+8\,z+9\big{)}+2\,m^{3}\,z\,(2 \,z+1)\,.\]
After replacing, solving (24b) under the condition \(z\neq 1\) is reduced to a second order equation in \(z\), whose discriminant is
\[\delta(a,m):=-\,3\,(a-1)^{2}\,(m-1)^{2}\,(a-m)^{2}\,\big{(}5\,a^{2}-10\,a\,(m+ 1)-3\,m^{2}+14\,m-3\big{)}\.\]
Since \(5\,a^{2}-10\,a\,(m+1)-3\,m^{2}+14\,m-3\) takes only positive values for _admissible_\((a,m)\), there are no other roots than \(z=1\). This is the desired contradiction, which completes the proof thanks to Lemma 6.
Proof of Theorem 3.: It is a straightforward consequence of Lemmas 6 and 11.
**Acknowledgment:** J.D. was partially supported by the Project EFI (ANR-17-CE40-0030) of the French National Research Agency (ANR). M.G-H. was supported by Fondecyt grants 1210241 and 1190102 from ANID-Chile, and R.M. by Centro de Modelamiento Matematico (CMM) BASAL fund FB210005 for center of excellence from ANID-Chile.
|
2301.08008 | Improving Machine Translation with Phrase Pair Injection and Corpus
Filtering | In this paper, we show that the combination of Phrase Pair Injection and
Corpus Filtering boosts the performance of Neural Machine Translation (NMT)
systems. We extract parallel phrases and sentences from the pseudo-parallel
corpus and augment it with the parallel corpus to train the NMT models. With
the proposed approach, we observe an improvement in the Machine Translation
(MT) system for 3 low-resource language pairs, Hindi-Marathi, English-Marathi,
and English-Pashto, and 6 translation directions by up to 2.7 BLEU points, on
the FLORES test data. These BLEU score improvements are over the models trained
using the whole pseudo-parallel corpus augmented with the parallel corpus. | Akshay Batheja, Pushpak Bhattacharyya | 2023-01-19T11:27:56Z | http://arxiv.org/abs/2301.08008v1 | # Improving Machine Translation with Phrase Pair Injection and Corpus Filtering
###### Abstract
In this paper, we show that the combination of Phrase Pair Injection and Corpus Filtering boosts the performance of Neural Machine Translation (NMT) systems. We extract parallel phrases and sentences from the pseudo-parallel corpus and augment it with the parallel corpus to train the NMT models. With the proposed approach, we observe an improvement in the Machine Translation (MT) system for 3 low-resource language pairs, Hindi-Marathi, English-Marathi, and English-Pashto, and 6 translation directions by up to 2.7 BLEU points, on the FLORES test data. These BLEU score improvements are over the models trained using the whole pseudo-parallel corpus augmented with the parallel corpus.
## 1 Introduction
Deep Neural Architectures are _data hungry_. It is believed that the robustness of an NMT model depends on the size of the training corpus. However, not all language pairs have a substantial amount of parallel data. The primary data resource for low-resource languages is the web. However, the web-crawled data contains a lot of noise that degrades the performance of NMT systems. Hence, the quality of the training data is as important as its quantity. We aim to improve the quality of Machine Translation for Hindi-Marathi, English-Marathi, and English-Pashto language pairs by using the LaBSE-based (Feng et al., 2020) corpus filtering along with the Phrase Pair Injection. We use Phrase Pair Injection to increase the size and LaBSE-based corpus filtering to improve the quality of the parallel data extracted from the pseudo-parallel corpus. We observe that using these two techniques together makes the optimum use of the pseudo-parallel corpus.
The contributions of this work are as follows:
* We extract good quality parallel sentences and phrases from the pseudo-parallel corpus of huge size. To the best of our knowledge, this is the first method that combines Phrase Pair Injection with neural-based Corpus Filtering and extracts both good quality parallel sentences and phrases from the pseudo-parallel corpus.
* We show that the extracted parallel sentences and phrases significantly improve the performance of the MT systems.
## 2 Related Work
Neural Networks have become very popular with increased computational power in recent times. The Transformer model introduced by Vaswani et al. (2017) gave significant improvements in the quality of translation as compared to the previous approaches (Bahdanau et al., 2015; Sutskever et al., 2014). Transformers allow parallelization, which enables the model to train faster and get better performances.
There are several prior studies to improve the quality and size of the parallel corpus. A set of heuristic rules were proposed by Lu et al. (2020) to remove low-quality sentence pairs from the noisy parallel corpus. Another popular approach is to compute cosine similarities between the source and target sentence embeddings. Feng et al. (2020) proposed the LaBSE model, which is a multilingual sentence embedding model trained on 109 languages, including some Indic languages. Pourmostafa Roshan Sharami et al. (2021) proposed a data selection pipeline that selects the In-Domain data from the Generic Domain based on its similarity with the other In-Domain data. This helps the MT model to perform well in domain-specific cases.
Sen et al. (2021) augmented the raw phrase table with the parallel corpus. The raw phrase table contains very noisy and repetitive phrase pairs. We observe that augmenting the whole phrase table
with parallel corpus does not show much improvement in the performance of the MT system. In contrast, we first extract the longest unique phrase pairs from the phrase table and then further filter them using LaBSE filtering to extract good quality phrase pairs. Then we augment these good quality phrase pairs with the LaBSE filtered parallel sentences. This helps improve the performance of MT systems significantly.
## 3 Approaches
We first discuss a method to extract good-quality parallel sentences from the pseudo-parallel corpus. Then we discuss a method to extract good quality phrase pairs from the pseudo-parallel corpus.
### LaBSE based Filtering
In this approach, we aim to extract good-quality parallel sentences from the pseudo-parallel corpus. The Language-Agnostic BERT Sentence Embedding (LaBSE) model (Feng et al., 2020) is a multilingual embedding model that supports 109 languages, including some Indic languages. We generate sentence embeddings for the source and target sides of the pseudo-parallel corpora using the LaBSE model. Then, we compute the cosine similarity between the source and target sentence embeddings. After that, we extract good-quality parallel sentences whose similarity score exceeds a threshold value. To choose the threshold, we calculate the average similarity score on a small dataset from the PM-India (PMI) corpus (Haddow and Kirefu, 2020), which consists of high-quality sentence pairs and therefore helps us decide upon the threshold value.
Footnote 1: [https://huggingface.co/sentence-transformers/LaBSE](https://huggingface.co/sentence-transformers/LaBSE)
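As a concrete illustration, the sketch below shows how such a filter might be implemented with the sentence-transformers release of LaBSE (the checkpoint referenced in the footnote above); the file names, batch size, and the 0.8 cut-off are illustrative assumptions rather than the exact values used in the experiments.

```python
# Minimal sketch of LaBSE-based corpus filtering; file names and threshold are illustrative.
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("sentence-transformers/LaBSE")

src_sents = [line.strip() for line in open("pseudo.src", encoding="utf-8")]
tgt_sents = [line.strip() for line in open("pseudo.tgt", encoding="utf-8")]

# L2-normalised embeddings, so the dot product equals cosine similarity.
src_emb = model.encode(src_sents, batch_size=256, normalize_embeddings=True)
tgt_emb = model.encode(tgt_sents, batch_size=256, normalize_embeddings=True)
scores = np.einsum("ij,ij->i", src_emb, tgt_emb)

THRESHOLD = 0.8  # e.g. tuned against a clean corpus such as PMI
kept = [(s, t) for s, t, c in zip(src_sents, tgt_sents, scores) if c >= THRESHOLD]
print(f"kept {len(kept)} of {len(src_sents)} sentence pairs")
```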
### Phrase Pair Injection (PPI) with LaBSE Filtering
In this approach, we aim to extract good-quality parallel phrases from a noisy pseudo-parallel corpus. We first train a PBSMT model on the noisy pseudo-parallel corpus using the Moses decoder, followed by tuning using MERT. Then, we extract phrase pairs from the generated phrase table based on the weighted average of their translational and lexical probabilities listed in the phrase table. As each sentence pair leads to multiple, repetitive phrase pairs in the phrase table, we keep only the longest unique phrase pairs among those extracted. Finally, we perform LaBSE-based filtering on these longest unique phrase pairs to remove the poor-quality ones.
Footnote 2: [http://www2.statmt.org/moses/?n=Development.GetStarted](http://www2.statmt.org/moses/?n=Development.GetStarted)
We augment these good-quality phrase pairs to the LaBSE filtered parallel sentences. This approach allows us to use the pseudo-parallel corpora at its full potential because we are extracting good quality parallel sentences as well as phrase pairs.
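The sketch below illustrates one plausible implementation of this selection step. It assumes the standard Moses phrase-table layout (`src ||| tgt ||| probability scores ...`), takes an unweighted mean of the first four scores as the selection criterion, reads "longest unique" as keeping the longest target phrase per source phrase, and uses an illustrative 0.8 cut-off; these are simplifying assumptions, not the exact procedure of the paper.

```python
# Sketch of selecting phrase pairs from a Moses-style phrase table; the scoring
# and the "longest unique" interpretation are simplifying assumptions.
def load_phrase_pairs(path, prob_threshold=0.8):
    pairs = []
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            fields = [f.strip() for f in line.split("|||")]
            src, tgt, scores = fields[0], fields[1], fields[2].split()
            # Average of the translational and lexical probability scores.
            avg = sum(float(s) for s in scores[:4]) / 4.0
            if avg >= prob_threshold:
                pairs.append((src, tgt))
    return pairs


def longest_unique(pairs):
    # For every source phrase keep only its longest target phrase, removing the
    # many short, repetitive sub-phrases that phrase extraction produces.
    best = {}
    for src, tgt in pairs:
        if src not in best or len(tgt) > len(best[src]):
            best[src] = tgt
    return list(best.items())


phrases = longest_unique(load_phrase_pairs("model/phrase-table"))
print(f"selected {len(phrases)} longest unique phrase pairs")
```

The selected phrase pairs can then be passed through the same LaBSE filter shown earlier before being added to the parallel corpus.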
## 4 Experimental Setup
In this section, we discuss the setup of the various experiments that we performed. We use Byte Pair Encoding (Sennrich et al., 2016) as a segmentation technique to split words into subwords, with 16000 merge operations for all experiments. We use the _OpenNMT-py_ library (Klein et al., 2017) to train the Transformer-based NMT models. We also train the PBSMT systems using Moses (Koehn et al., 2007). Then, we perform tuning using Minimum Error Rate Training (MERT) to find the optimal weights that maximize the translation performance, using a tune set of the first 2000 parallel sentences from the pseudo-parallel corpus. Finally, we extract the phrase pairs from the generated phrase table, based on the weighted average of their translational and lexical probabilities listed in the phrase table. Further training details can be found in Appendix A.

| Corpus Name | Language Pair | Sentence Pairs |
| --- | --- | --- |
| Parallel Corpus | Hindi-Marathi | 604K |
| Parallel Corpus | English-Marathi | 248K |
| Parallel Corpus | English-Pashto | 123K |
| Pseudo-Parallel Corpus | Hindi-Marathi | 1.98M |
| Pseudo-Parallel Corpus | English-Marathi | 3.28M |
| Pseudo-Parallel Corpus | English-Pashto | 1.02M |

Table 1: Dataset Statistics of Parallel and Pseudo-Parallel Corpus

Figure 1: Phrase Pair Injection with LaBSE Filtering Pipeline
### Dataset Preparation
We train the Hindi-Marathi, English-Marathi, and English-Pashto NMT models using their respective Parallel and Pseudo-Parallel Corpus. The Parallel corpus for English-Marathi and Hindi-Marathi language pairs consists of the Indian Languages Corpora Initiative (ILCI) phase 1 corpus (Jha, 2010), BIBLE corpus (Christos Christodouloupoulos, 2015), Press Information Bureau corpus (PIB), and PM-India corpus (PMI) (Haddow and Kirefu, 2020). The Hindi-Marathi Parallel corpus also consists of Tatoeba challenge dataset (Tiedemann, 2020). In the Hindi-Marathi Parallel corpus, except for ILCI, the parallel data was synthetically generated by translating the English sentences of the English-Marathi corpus to Hindi using Google Translation API. The Pseudo-Parallel corpus consists of Samanantar Corpus (Ramesh et al., 2021). For English-Pashto language pair, we use the parallel and pseudo-parallel corpus provided by the WMT20 shared task on Parallel Corpus Filtering and Alignment (Koehn et al., 2020). The detailed corpus statistics are mentioned in Table 1.
For evaluation, we use the test set introduced in WAT 2021 MultiIndicMT: An Indic Language Multilingual Task and FLORES 101. The test set from WAT 2021 contains 2,390 sentences and is a part of the PMI corpus. The FLORES 101 test set contains 1012 sentences.
### Baseline
We train the baseline models (Hindi-Marathi, English-Marathi and English-Pashto) directly on their respective parallel corpus.
### No Filtering
In No Filtering model, we train the NMT models on the whole pseudo-parallel corpus augmented with parallel corpus.
### Baseline + PPI
In this model, we first train a PBSMT model on the pseudo-parallel Corpus, followed by tuning using MERT. Then, we extract longest unique phrase pairs from the Phrase Table using a threshold Probability value of 0.95 for Hindi-Marathi and 0.8 for English-Marathi. Finally, we augment the parallel corpus with the extracted phrases pairs to train the NMT models.
### Baseline + LaBSE
In this model, we use the LaBSE filtering technique with a threshold of **0.9**, **0.8** and **0.8** to extract 350K, 2.6M, and 166K good-quality parallel sentences from the Hindi-Marathi, English-Marathi and English-Pashto pseudo-parallel corpora, respectively. Then, we augment the parallel corpus with the LaBSE filtered parallel sentences and train the respective NMT models.
| Technique | # Sentence Pairs | Hindi→Marathi WAT 2021 (BLEU / chrF) | Hindi→Marathi FLORES 101 (BLEU / chrF) | Marathi→Hindi WAT 2021 (BLEU / chrF) | Marathi→Hindi FLORES 101 (BLEU / chrF) |
| --- | --- | --- | --- | --- | --- |
| **Baseline+LaBSE+PPI-LaBSE** | 1.75M | **17.6** / **51.0** | 10.0 / 42.5 | **25.4** / 51.5 | 16.2 / 42.5 |
| **Baseline+LaBSE+PPI** | 2.55M | 17.4 / 51.0 | **10.2** / **43.0** | 25.3 / **51.6** | **16.5** / **42.5** |
| **Baseline+PPI-LaBSE** | 1.4M | 15.5 / 49.8 | 9.5 / 42.0 | 23.4 / 50.2 | 15.4 / 41.6 |
| **Baseline+LaBSE** | 960K | 17.5 / 50.3 | 9.4 / 41.9 | 25.1 / 51.2 | 15.9 / 41.6 |
| **Baseline+PPI** | 2.2M | 16.1 / 49.9 | 9.2 / 41.4 | 23.1 / 49.9 | 15.1 / 40.8 |
| **No Filtering** | 2.56M | 16.8 / 49.9 | 9.1 / 39.5 | 24.0 / 49.9 | 14.8 / 39.8 |
| **Baseline** | 604K | 15.2 / 48.0 | 7.6 / 38.8 | 22.1 / 48.1 | 14.4 / 39.7 |

Table 2: BLEU and chrF scores of Hindi-Marathi NMT models.

| Technique | # Sentence Pairs | English→Marathi FLORES 101 (BLEU / chrF) | Marathi→English FLORES 101 (BLEU / chrF) |
| --- | --- | --- | --- |
| **Baseline + LaBSE + PPI-LaBSE** | 3.24M | 9.8 / 40.3 | **17.0** / **44.6** |
| **Baseline + LaBSE + PPI** | 4.09M | **9.9** / **40.5** | 16.2 / 43.4 |
| **Baseline + PPI-LaBSE** | 640K | 6.2 / 33.8 | 12.7 / 40.3 |
| **Baseline + LaBSE** | 2.85M | 8.8 / 39.8 | 16.7 / 44.0 |
| **Baseline + PPI** | 1.49M | 6.6 / 35.0 | 12.7 / 40.3 |
| **No Filtering** | 3.53M | 8.8 / 37.8 | 15.9 / 43.3 |
| **Baseline** | 248K | 5.1 / 32.4 | 10.2 / 38.0 |

Table 3: BLEU and chrF scores of English-Marathi NMT models
### Baseline + LaBSE + PPI
In this model, we combine the two techniques mentioned in the section 4.4 and 4.5. We first extract longest unique phrase pairs from pseudo-parallel corpus. Then, we extract good quality parallel sentences from pseudo-parallel corpus using LaBSE filtering. Finally, we augment the parallel corpus with the extracted longest unique phrase pairs and LaBSE filtered parallel sentences to train the NMT models.
### Baseline + PPI-LaBSE
In this model, we first extract the longest unique phrase pairs using the technique mentioned in section 4.4. Then we apply LaBSE filtering on these extracted phrase pairs using a threshold value of 0.9. Finally, we augment the parallel corpus with the LaBSE filtered phrase pairs to train the NMT models.
### Baseline + LaBSE + PPI-LaBSE
In this model, We combine the techniques mentioned in the section 4.5 and 4.7. We first extract LaBSE filtered parallel sentences from pseudo-parallel corpus. Then, we extract LaBSE filtered longest unique phrase pairs from the pseudo-parallel corpus. Finally, we augment the parallel corpus with the LaBSE filtered parallel sentences and LaBSE filtered phrase pairs.
## 5 Results and Analysis
We evaluate our NMT models using the BLEU (Papineni et al., 2002) and chrF score metrics. We use the sacrebleu (Post, 2018) Python library to calculate the BLEU and chrF scores. The results of all the experiments are summarized in Tables 2, 3 and 4.
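For reference, a minimal sketch of how these corpus-level scores can be computed with the sacrebleu Python API is shown below; the file names are illustrative placeholders, and detokenised hypothesis/reference files are assumed.

```python
# Sketch of computing corpus-level BLEU and chrF with sacrebleu; file names are placeholders.
import sacrebleu

with open("flores101.hyp", encoding="utf-8") as fh:
    hypotheses = [line.strip() for line in fh]
with open("flores101.ref", encoding="utf-8") as fh:
    references = [[line.strip() for line in fh]]  # a single reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)
chrf = sacrebleu.corpus_chrf(hypotheses, references)
print(f"BLEU = {bleu.score:.1f}  chrF = {chrf.score:.1f}")
```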
We can observe from the results that the models trained using the extracted phrase pairs and parallel sentences outperform the models trained using the whole pseudo-parallel corpus when augmented with the parallel corpus. The results of the Hindi-Marathi, English-Marathi and English-Pashto MT models show that **Baseline+LaBSE+PPI-LaBSE** and **Baseline+LaBSE+PPI** perform best amongst others on the **WAT 2021** and **FLORES 101** test data, respectively.
In Hindi-Marathi MT, the best models improve the performance by **0.8** and **1.1** BLEU score points in _Hindi\(\rightarrow\)Marathi_ and **1.4** and **1.7** BLEU score points in _Marathi\(\rightarrow\)Hindi_ over the **No Filtering** model on FLORES 101 and WAT2021 test data.
In English-Marathi MT, the best models improve the performance by **1.0** and **1.1** BLEU score points in _English\(\rightarrow\)Marathi_ and _Marathi\(\rightarrow\)English_ over the **No Filtering** model, on the FLORES 101 test data.
In English-Pashto MT, the best models improve the performance by **2.7** and **1.0** BLEU score points over the **No Filtering** model, on the FLORES 101 test data.
We observe that even though the corpus size of the best models is smaller than the pseudo-parallel corpus, it still performs better as it has a higher proportion of good quality parallel sentences and phrase pairs.
## 6 Conclusion and Future Work
In this work, we show that extracting parallel sentences and phrase pairs from a pseudo-parallel corpus helps the NMT models improve its performance significantly when augmented with a parallel corpus. We show that LaBSE filtering assists the Phrase Pair Injection to extract the parallel data, which has good quality and quantity.
In the future, we plan to use the proposed corpus filtering techniques for other language pairs. This will provide us with a general overview of how these filtering techniques perform on multiple languages. We also anticipate improvements in the results by trying different threshold values in the LaBSE filtering and Phrase Pair Injection.
| Technique | # Sentence Pairs | English→Pashto FLORES 101 (BLEU / chrF) | Pashto→English FLORES 101 (BLEU / chrF) |
| --- | --- | --- | --- |
| **Baseline + LaBSE + PPI-LaBSE** | 342K | 8.6 / 31 | 10.0 / 36.2 |
| **Baseline + LaBSE + PPI** | 790K | **8.7** / **31** | **10.4** / **37.1** |
| **Baseline + PPI-LaBSE** | 176K | 0.9 / 13.1 | 1.1 / 17.3 |
| **Baseline + LaBSE** | 290K | 8.0 / 30.5 | 9.8 / 36.4 |
| **Baseline + PPI** | 624K | 0.7 / 12.3 | 0.8 / 13.9 |
| **No Filtering** | 1.14M | 6.0 / 23.2 | 9.4 / 34.4 |
| **Baseline** | 124K | 0.2 / 8.7 | 0.4 / 15.9 |

Table 4: BLEU and chrF scores of English-Pashto NMT models
## 7 Acknowledgements
We would like to thank the anonymous reviewers for their insightful feedback. We also express our gratitude towards Shivam Mhaskar, Sourabh Degohare, Jyotsana Khatri, and other members of the Machine Translation group at CFILT, IIT Bombay, for their interesting and insightful comments.
## 8 Limitations
The size of extracted phrase pairs from the pseudo-parallel corpus is huge, and computing sentence embeddings using LaBSE for such a huge corpus is computationally expensive. We use various threshold values in LaBSE Filtering and Phrase Pair Injection. We need to try multiple combinations of these threshold values to obtain good results.
|
2302.11009 | Challenges in the Design and Implementation of IoT Testbeds in
Smart-Cities: A Systematic Review | Advancements in wireless communication and the increased accessibility to
low-cost sensing and data processing IoT technologies have increased the
research and development of urban monitoring systems. Most smart city research
projects rely on deploying proprietary IoT testbeds for indoor and outdoor data
collection. Such testbeds typically rely on a three-tier architecture composed
of the Endpoint, the Edge, and the Cloud. Managing the system's operation
whilst considering the security and privacy challenges that emerge, such as
data privacy controls, network security, and security updates on the devices,
is challenging. This work presents a systematic study of the challenges of
developing, deploying and managing urban monitoring testbeds, as experienced in
a series of urban monitoring research projects, followed by an analysis of the
relevant literature. By identifying the challenges in the various projects and
organising them under the V-model development lifecycle levels, we provide a
reference guide for future projects. Understanding the challenges early on will
facilitate current and future smart-cities IoT research projects to reduce
implementation time and deliver secure and resilient testbeds. | Vijay Kumar, Sam Gunner, Theodoros Spyridopoulos, Antonis Vafeas, James Pope, Poonam Yadav, George Oikonomou, Theo Tryfonas | 2023-02-21T21:22:57Z | http://arxiv.org/abs/2302.11009v1 | # Challenges in the Design and Implementation of IoT Testbeds in Smart-Cities: A Systematic Review
###### Abstract
Advancements in wireless communication and the increased accessibility to low-cost sensing and data processing IoT technologies have increased the research and development of urban monitoring systems. Most smart city research projects rely on deploying proprietary IoT testbeds for indoor and outdoor data collection. Such testbeds typically rely on a three-tier architecture composed of the Endpoint, the Edge, and the Cloud. Managing the system's operation whilst considering the security and privacy challenges that emerge, such as data privacy controls, network security, and security updates on the devices, is challenging. This work presents a systematic study of the challenges of developing, deploying and managing urban monitoring testbeds, as experienced in a series of urban monitoring research projects, followed by an analysis of the relevant literature. By identifying the challenges in the various projects and organising them under the V-model development lifecycle levels, we provide a reference guide for future projects. Understanding the challenges early on will facilitate current and future smart-cities IoT research projects to reduce implementation time and deliver secure and resilient testbeds.
Internet of Things, Smart Cities, Urban monitoring architecture
## I Introduction
Urbanization has raised the demand for natural resources in cities, while increased pollution has also increased environmental impacts. As cities grow, the logistics to ensure the provision of essential services becomes more challenging for city councils [1, 2]. To improve city management and allow the development of relevant services, councils monitor city parameters such as air quality, road traffic, pedestrian movement, electricity usage, etc. Emerging technologies such as Internet of Things (IoT) provide the ability to understand the physical environment with more granular data and allow citizens and city councils to make better decisions. The opportunities offered by IoT implementations are numerous. For example, monitoring Volatile Organic Compounds (VOCS) and equivalent CO2 (eCO2) in households can help residents with long-term lung conditions (such as asthma) identify poor air quality and act accordingly.
City councils, in association with researchers and universities, participate in multiple research projects such as SPHERE [3, 4], REPLICATE [5] and Twinergy [6] that aim to improve energy use, mobility, human well-being and productivity, reduce energy footprint, and increase resilience [7] and sustainability of the city [8].
The Array of Things (AoT) team [8] has conducted various workshops with multi-disciplinary academics and citizen communities to understand how IoT technology comprising sensors, cameras, and computation capabilities can help modern cities. They concluded that scientific instruments (endpoint/edge IoT devices) deployed in an urban environment to provide spatial and temporal sensor data for analysis could benefit residents and city councils. Their emerging IoT platform ultimately forms an urban-scale instrument for research and development [8], simultaneously testing new sensors, communication, and computing devices.
Edge devices deployed in public spaces can also be used to test and support new technologies, such as Vehicle-to-Infrastructure (V2I) communication in Cooperative Intelligent Transportation Systems (C-ITS) and Augmented Reality (AR) to display city information.
However, the development and management of urban monitoring systems pose many challenges. The collection of citizen data can lead to privacy violations if it is not properly managed [9]. The complexity of such systems and the integration of heterogeneous and, in many cases, proprietary technologies further increase the data management problem and can also result in security issues that may ultimately disrupt services [10, 11, 12].
This paper aims to systematically identify the challenges in developing urban monitoring IoT testbeds, based on the authors' experience in relevant Urban Observatory (UO) and smart city projects and on an analysis of the relevant literature. These projects include: Harbourside water quality monitoring [13, 14], Clifton Suspension Bridge structural health monitoring [15], e-bike monitoring [16], residential damp detection [17] and Smart Citizen Kit (SCK) deployment in the Cotham Hill Pedestrianisation Programme, as well as others [5, 6, 18]. Table 1 lists the different research projects in which the authors were involved and provided their experiences. Table 2 lists similar smart city projects that the authors referred to in order to understand the challenges reported in the research publications. We hope this work will benefit the design and implementation of future
smart cities research projects and IoT testbeds, reducing the implementation time.
The rest of the paper is structured as follows: § II provides background on smart-city research projects, testbeds and monitoring architecture. § III briefly reviews the related literature. § IV presents the methodology followed to understand the challenges. § V presents the challenges faced by the research projects, mapped to the V-model. § VI concludes the work.
## II Background
### _Smart Cities Research Projects_
Multiple organisations collaborate with the city council to make a city smart and work on research projects to improve citizens' lives and city council services. Smart city research projects can target different areas, for example, collecting environmental data to monitor air, noise and water pollution, residential dampness, energy consumption, or the structural health of buildings and bridges. Table 1 and Table 2 provide a list of innovative city projects, the data they collect, their architecture, and the size of the deployment. Multiple smart city research projects deploy testbeds to collect urban or citizen health data for different analyses. Major implementations occur in public places or citizens' homes. In public places, there have been multiple projects such as Smart Santander [38], UMBRELLA [39], and AoT [8], whereas projects such as SPHERE [3, 4] and REPLICATE [5] have deployed devices in citizens' homes. Smart Santander deployed multiple IEEE 802.15.4 devices, General Packet Radio Service (GPRS) modules, and joint Radio-Frequency Identification (RFID) tag/Quick Response (QR) code labels at both static locations (streetlights, facades, bus stops) and mobile vehicles (buses, taxis) for different smart city use cases. Similarly, UMBRELLA deployed multiple edge nodes mounted on lamp posts containing wireless radio nodes and sensors, providing a real-world platform to test wireless algorithms and smart city sensing (temperature, air quality, and noise). The AoT project deployed edge nodes in Chicago to collect real-time data on the city's environment, infrastructure, and activity for research and public use. SPHERE deployed a multi-modal platform of non-medical home sensors to serve as a prototype for future residential healthcare systems. REPLICATE deployed edge devices to deliver energy efficiency, mobility, and Information and Communications Technology (ICT) solutions in city districts. Twinergy has installed house batteries and smart plugs in people's houses to improve their self-consumption of locally generated renewable energy and monitor their uptake of energy demand-side management.
### _A Brief About Testbeds_
Testbeds play an essential role in experimental research by allowing researchers to perform experiments, deploy multiple devices, set up realistic environments, and collect sensor data and insights [40, 41, 42]. The testbeds are made up of endpoints (sensors that sense the physical parameter), edge gateways (collect and process data from endpoints) and cloud infrastructure (collect and process data from endpoints/edge). Managing such an infrastructure is challenging [26]. The challenges include the security and management of multiple devices, data security and privacy, user privacy controls, visualisation, multitenancy of applications, hardware malfunctions, programming bugs, software incompatibilities, network resilience, and plain misunderstanding of concepts [26, 43]. Furthermore, each research project implements the testbed differently based on the project team's requirements, usability, budget, time, and technical skillset.
A testbed should enable researchers to **i.** deploy and connect multiple devices at the edge and endpoint tiers safely and securely, **ii.** deploy applications on the cloud and edge devices that collect and process data from the endpoints (sensors) and send it securely to the edge/cloud, **iii.** manage the devices for accounting and administration purposes, **iv.** provide data visualisation and insights to end users [44] and **v.** be adaptable to fulfil other requirements. The authors categorised testbeds into three different categories: **i.** distributed large-scale cloud-resource testbeds providing researchers with access to bare metal and control over computing, storage, and networking resources, e.g., Chameleon [45], GENI [46], GRID5000 [47], FED4FIRE/FED4FIRE+ [48], FIT-Cloud [49], Emulab [50], PlanetLab [51], PRAGMA [52], DETER [53], NOR-NET Core [54], SAVI [55]; **ii.** distributed large-scale endpoint Wireless Sensor Network (WSN) testbeds that provide access to the WSN nodes to conduct network experiments, e.g., FIT IoT-Lab [56], SmartSantander [38], City of Things [57], UMBRELLA [39]; **iii.** data-collecting research testbeds that collect data from citizens' houses or public spaces, e.g., SPHERE [3, 4], REPLICATE [5], Twinergy [6], 3E Houses [58], SONYC [28], AoT [8], Scallop4SC [59], Padova [29].
### _A General Three Tier Architecture_
Testbeds can have different architectures based on the project requirements, such as endpoint-cloud, endpoint-edge, and endpoint-edge-cloud. In endpoint-cloud, devices at the endpoint tier communicate directly with the cloud tier; in endpoint-edge, the endpoint sends the data to the edge, and the cloud tier does not exist. In endpoint-edge-cloud, endpoints connect directly to edge devices, and devices at the edge tier connect to the cloud tier. Endpoint-edge-cloud is a standard architecture used by different projects such as SPHERE [3, 4], REPLICATE [5], Clifton Suspension Bridge [15], AoT [8] and others [30, 60], and is also mentioned in relevant literature [43, 61].

Figure 1: Typical three-tier architecture for a smart city
Fig. 1 presents a typical architecture of a data collection testbed consisting of cloud, edge, and endpoint tiers. We provide a brief introduction about each tier below:
_Endpoint Tier:_ The endpoint tier consists of resource-constrained, battery-powered embedded devices with low-power wireless communications capability. The devices are generally inconspicuous and have a small nominal form factor for deployment in space-constrained environments [62]. They can sense different environmental parameters such as barometric pressure, temperature, humidity, light, motion (with an accelerometer, gyroscope, or compass) and presence (using an infrared sensor to detect the human body's heat). In addition, a reed relay or switches can sense the opening/closing of a window/door. Endpoints generate monitoring data and send it to a collection point at the edge/cloud tier for processing and analysis. The endpoints can be connected to the edge/cloud tier by different technologies such as **i.** an IEEE 802.15.4 network (in a mesh or star topology) created and controlled by an edge tier device, **ii.** low-power wide-area network (LPWAN) network technologies (Sigfox, LoRaWAN, NB-IoT, Wi-Fi, Bluetooth Low Energy (BLE)), **iii.** directly connected to the edge device using a Universal Serial Bus (USB), Inter-Integrated Circuit (I2C), Serial Peripheral Interface (SPI), Universal Asynchronous Receiver/Transmitter (UART).
_Edge Tier:_ The edge tier can consist of a single-board computer (SBC) (Raspberry Pi (RPi), Jetson Nano (JN), Grapeboard, Intel NUC) installed in a citizen's home, in public spaces (street lamps, bus stops, city council vehicles) or in private buildings [63]. The edge tier collects the data sent by the endpoints and either processes it or sends it in a raw format to the cloud tier [64] for further analysis. Processing data at the edge reduces payload size and communication bandwidth, shortens latency, and simplifies data formatting and aggregation for the cloud [65]. The edge device can also run different applications, such as urban environment monitoring and counting people/vehicles, and is often designed to be application-agnostic. It provides end users with a sensing/processing element at the network edge that can service novel applications. Edge devices are typically connected to the cloud tier using higher-bandwidth and more reliable communications technologies, such as 4G/5G, Wi-Fi, and fibre. According to the project requirements, the edge device can contain multiple radios onboard, such as IEEE 802.11ac on 2.4/5 GHz, DASH7 on 433/868 MHz, BLE, IEEE 802.15.4, IEEE 802.15.4g, and LoRa [32]. Edge devices (based on their location) can also be used in infrastructure mode (endpoints connecting to the edge tier) or ad hoc peer-to-peer mode (edge devices connecting to each other using radios).
_Cloud Tier:_ The cloud tier consists of multiple servers, hosting all the applications and services required to manage the devices at the edge and endpoint tiers and the applications necessary to achieve the project objectives [64]. The servers run multiple components in virtual machines (VMs) or containers. The cloud tier can be hosted privately (OpenStack, VMWare) or on commercial cloud services (Azure, AWS). It contains the application logic and services required to operate and manage the testbed platform. The cloud tier should provide different services to the edge tier, such as credential management, data storage, provisioning of devices, networking, time synchronisation, secure remote software updating, configuring, and maintaining access to the edge and endpoint devices. It should also provide a secure communication channel to devices and services in the edge and endpoint tiers.
| Project | Size (Where) | Data collected | Architecture |
| --- | --- | --- | --- |
| SPHERE | 100 homes (1 EN; multiple EP/home) | Environmental | EP (802.15.4) → EN (4G) → CS |
| UMBRELLA | 200 EN (street lamps) with on-board EP | Environmental, camera | EP (I2C/SPI) → EN (Fibre/WiFi) → CS |
| Cotham Hill Pedestrianisation | 10 EP (in 8 homes) | Noise and air pollution | EP (WiFi) → CS |
| Residential dampness | 1 home (1 EN with on-board EP) | Temperature, humidity | EP (Analog) → EN |
| Clifton Suspension Bridge | 1 EN, 2 EP | Structural health monitoring data | EP (802.15.4) → EN (4G) → CS |
| Water quality monitoring | 3 sites (1 device with 7 sensors) | Water quality | EP (Serial to WiFi) → CS |
| SYNERGIA | 3 EN, 15 EP (office) | Environmental | EP (802.15.4/LoRa) → EN (LAN) → CS |
| REPLICATE (Energy) | Smart appliances (151 homes) | Energy consumption | EP (LAN) → EN (LAN) → CS |
| REPLICATE (eBike) | EN (12 e-bikes) | Battery level, motor power | EP (CAN) → EN (LoRa/WiFi) → CS |
| Bristol AoT | 3 EN with on-board EP | Environmental, camera | EP (I2C/SPI) → EN (4G) → CS |
| Twinergy | 12 homes | Energy consumption data | EP (WiFi) → CS |
| Car/Valve | 40 homes (4 EN; 1 EP/home) | RSSI and accelerometer data | EP (Bluetooth) → EN (4G/WiFi) → CS |

Table 1: Research projects in which the authors participated. CS: Cloud Server, EN: Edge Node (Gateway), EP: Endpoint (IoT Node). CS may contain all or a subset of open-source components (Kafka, K3S, MQTT, InfluxDB, Grafana). EN may consist of an SBC (RPi or Intel NUC).
| Project | Size (Where) | Data collected | Architecture |
| --- | --- | --- | --- |
| AoT [8] | 13 EN (street lamps) with on-board EP | Environmental, camera | EP (I2C/SPI) → EN (4G) → CS |
| ω-Agriculture [19] | EN (lab deployment) | Light, temperature, soil pH and humidity | EP (Analog) → EN |
| Living Lab [20] | 150 EN, 80 EP (120 locations) | Air quality, microclimate, batteries | EP (RPL) → EN (2G) → CS |
| Connected Vehicle Testbed [21] | 3 fixed EN (FEN), 2 mobile EN (MEN) | Vehicle position data | |
| Wireless Environmental Sensors [22] | 1 EN, 2 EP (lab deployment) | Environment | |
| Solar-powered WSN [23] | 2 EN (daily deployment) | Temperature, RSSI, battery level | EP (WSN) → EN (cellular) → CS |
| Community Elderly Care [24] | EN, EP (70 elderly homes) | Motion, door contact | EP (Z-wave) → EN (cellular) → CS |
| IEEE 802.15.4 Connectivity Traces | 350 EP (office environment) | RSSI, PDR | EP (802.15.4) → CS |
| LOFAR-agro [62] | 100 EP, 3 EN (real-world deployment) | Temperature, humidity | EP (WSN) → EN (WiFi) → CS |
| 3E Houses [27] | 100 homes (EP; 1 EN/home) | Energy consumption data | EP (Z-wave) → EN (WiFi) → CS |
| New York noise sensor network [28] | 585, 108 EP | | |
| Padova Smart City [29] | 1 EN, 8 EP | Temperature, humidity, benzene | EP (802.15.4) → EN (WiFi) → CS |
| Flash Flood Monitoring [30] | 3 sites of IoT devices deployed, EN | Water levels | EP (USB) → EN (cellular) → CS |
| Smart Smart Smart [31] | 500 EN, 700 EP | Environmental | EP (802.15.4) → EN (WiFi) → CS |
| City of Things [32] | 32 locations (1 node/location, multi-radio EN) | Air quality, traffic monitoring, parking | EP (WSN) → EN (multi-radio) → CS |
| SAMDate [33] | 5 EN, 12 EP | Environmental | EP (WSN) → EN (WiFi) → CS |
| SensorScope [34] | ~6 EN, each serving ~100 EP | Environmental | EP (WSN) → EN (GPRS) → CS |
| EqPi [35] | ~18 locations (2 EP, 1 EN/location) | Environmental | EP (WSN/WiFi) → EN (WiFi) → CS |
| Parking System [36] | 2 EP, 8 EN | Parking, light sensor | EP (LoRa) → EN (LoRa) → CS |
| Residential Sensing [34] | ≈20 homes, ≈1200 EP | Temperature, light, door | EP (Z-wave) → EN (WiFi) → CS |
| Water consumption [37] | 30 homes, 1 EN, ~4 EP | Water consumption | EP (433MHz) → EN (WiFi) → CS |

Table 2: Research projects referred to by the authors, based on the details provided in the papers. CS: Cloud Server, EN: Edge Node, EP: Endpoint (IoT Node). CS may contain all or a subset of open-source components (Kafka, K3S, MQTT, InfluxDB, Grafana). EN may consist of an SBC (RPi or Intel NUC).
With the advancements in core networks, part of the functionality is distributed from the cloud tier to multiple geographical locations towards the edge network. In this case, the main point of distinction becomes Radio Access Network (RAN) and how the endpoint tier is connected to the cloud tier. Depending on the wireless and wired transmission network, some core network features, computation, and offloading can occur on the edge tier. Therefore, it is vital to address the challenges of edge-to-cloud connectivity and the architectural decisions that each testbed has chosen.
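To make the edge-to-cloud hand-off concrete, the sketch below shows one way an edge gateway could forward endpoint readings to a cloud-tier MQTT broker over TLS; the broker address, topic layout, credentials and payload fields are illustrative assumptions rather than the configuration of any of the testbeds above.

```python
# Minimal sketch (paho-mqtt 1.x constructor; 2.x additionally takes a CallbackAPIVersion)
# of an edge gateway forwarding endpoint readings to a cloud-tier MQTT broker.
# Broker address, topic layout, credentials and payload fields are illustrative.
import json
import time

import paho.mqtt.client as mqtt

CLOUD_BROKER = "mqtt.example-cloud.org"        # hypothetical cloud-tier broker
TOPIC = "testbed/site-01/edge-07/environment"  # hypothetical topic layout


def read_endpoint() -> dict:
    """Placeholder for a reading received from an endpoint (802.15.4, BLE, serial, ...)."""
    return {"sensor_id": "ep-42", "temperature_c": 21.4,
            "humidity_pct": 48.0, "timestamp": time.time()}


client = mqtt.Client(client_id="edge-07")
client.tls_set()                                   # encrypt edge-to-cloud traffic
client.username_pw_set("edge-07", "device-credential")
client.connect(CLOUD_BROKER, 8883)
client.loop_start()

try:
    while True:
        reading = read_endpoint()
        client.publish(TOPIC, json.dumps(reading), qos=1)
        time.sleep(60)                             # one reading per minute
finally:
    client.loop_stop()
    client.disconnect()
```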
## III Literature Review
Santana et al. [66] surveyed multiple smart cities projects in the area of cyber-physical systems, IoT, Big Data and cloud computing. They provided challenges and open research problems in developing next-generation, robust software platforms for smart cities. They included privacy (data owners, data usage), data management (storing, processing a large amount of data and trusting it), heterogeneity (different devices, data sources), energy management (energy consumption failure), communication (network reliability), scalability (increase in the number of users, devices, data), security (safe from cyber-terrorism and cyber-vandalism), lack of testbeds, city models (effective and efficient city model), platform maintenance (mange devices).
There have been multiple lessons-learned papers regarding the deployment of battery-powered devices in the endpoint tier communicating over IEEE 802.15.4 [25, 26, 33, 67, 68], devices deployed in public spaces [8, 69], citizens' houses [24, 67, 70, 71, 72], testbeds [73, 74], and others [20, 21, 22, 23, 75, 76, 77, 78, 79, 80]. However, the lessons learnt come from projects with specific requirements (such as "challenges in flash flood monitoring"), do not include human engagement (deployment in citizens' houses or public spaces), or involve very small-scale deployments. They do not cover every stage or all the challenges faced during a smart-city research project. Our work focuses on the three-tier architecture (cloud, edge and endpoints) usually used in smart-city research projects. It covers the end-to-end perspective and the challenges faced during a research project, from requirement analysis, system design, implementation, and integration testing to the final deployment stage, keeping security and scalability in mind. It is more expansive and covers a larger scope, providing learnings from deploying multiple smart-city research projects.
## IV Methodology
The methodology the authors used to understand the challenges faced in smart-city research projects is based on interviews and a literature review. To collect the necessary data for our research, we conducted semi-structured interviews with the system architects and the deployment teams of European research projects on IoT platforms and testbeds for urban monitoring [5, 6, 13, 15, 17, 18]. Table I lists the different research projects the authors were involved in and provides their experiences. The interviews focused on the challenges the participants faced during the development, deployment and management of the IoT platforms and testbeds. The interviewer asked simple open-ended questions in a free-flowing approach, and the conversation was continued based on the answers. The questions asked were about the challenges faced, such as "_What are the challenges faced during the projects_", "_How did we provision the devices_", "_What was the architecture of the project_", "_How did the devices communicate with each other_", "_How did we manage the storage, credentials_", "_Any challenges faced in the implementation, deployment_", "_What could have made your (system architect) life better_", and "_any unexpected challenges_". The authors captured additional challenges based on their reflections on their experiences as members of smart-city projects.
Moreover, we thoroughly reviewed the relevant literature on infrastructure deployment. Table II provides a summary of the different projects that the authors referred to in order to understand the challenges faced in projects deploying IoT infrastructure.
To facilitate the exploitation of our work by future projects, we categorized the identified challenges based on the stage of the project lifecycle in which they appear. Almost all engineering projects follow a similar development lifecycle, from "requirement analysis" and "system design" to "integration and testing" and final project delivery. In our work, we identify the challenges in the various projects and organize them under the V-model's levels to formalize the development process and provide a reference guide for future projects.
### _The V-model_
The V-model is one of several project life cycle models developed over time. Project life-cycle models try to visualise and map the different stages of a technology development project. They are an essential tool for the engineer and provide a standard conceptual framework of reference [81].
The V-model [82] is based heavily on 'the waterfall model' [83] that preceded it, but extends it by projecting the project cycle into a three-dimensional space. Fig. 2 shows the first two dimensions of this space, x and y, representing 'time' (or project maturity) and 'Design Detail', respectively. The 'Design Detail' axis has high-level design at the top and low-level (or detailed) design decisions at the bottom. The central elements of the model (referred to as the core of the Vee) are shown in blue. The specific phraseology used in these elements varies depending on the particular application. However, the general theme is always the same: as the project works down
the left arm of the V, high-level system designs are converted into more detailed system designs.
One failure of the waterfall model was its embargo on any detailed design work before official approval of high-level design decisions [84]. The V-Model removes this restriction, allowing the detailed technical enquiry to inform higher-level decisions. This 'off-core' work can be seen in the white boxes below the core of V. The work in the off-core varies depending on the design stage, but it aims to derisk the decisions currently being made. Important off-core work [84] is the identification of 'critical issues'. Capability demonstrations are also important, demonstrating that technology can perform the desired functionality before it is written into a specification.
The final dimension of the V-Model, the z-axis pushing into the page, represents the different system design elements at that level of system decomposition. For example, architecture will consist of many modules, and each must be designed, so the V-model fans out to represent this, one branch for each module. Below the V, the z-axis represents the different and competing design options that must be evaluated before a selection can be made. The workflow moves down the left-hand side of V until the bottom is reached, which means that design decisions are completed and can now be implemented. This means that each piece of hardware can be built and each software package written.
Integrating these different components is necessary to form the final functioning system. It is performed by moving back on the right-hand side of the V. Each module is tested against the design from which it was created and then integrated with other modules to deliver more sophisticated functionality. That functionality is then tested against the higher-level design. Not only is'verification' carried out (confirming that the module has been built according to its design specification), but 'validation' is also performed to ensure that the design captures the system's requirements.
## V Challenges
This section discusses the challenges captured. We use the V-model to classify the challenges and map them to the various phases of a research project. Fig. 3 summarises the challenges faced during the different stages of a research project, assigned to the V-model phases. Challenges arise in multiple phases of a smart city research project: understanding project requirements (requirement analysis), designing how to fulfil those requirements (system design), setting up the defined infrastructure (implementation), ensuring that the different infrastructure components work together (integration), testing in the laboratory and in an initial small-scale deployment (operational testing), and finally deployment in the real world with its operational challenges.
### _Requirements Analysis_
The requirement analysis stage helps to understand the application and data requirements, collaboration dependency, and project use cases.
_Application/Data requirements:_ Data is at the heart of urban monitoring research projects. Depending on the need, it can be collected from multiple sensors deployed in citizens' houses, streetlamps, or bus stops. The nature of data required to meet project objectives and expected results affects, in general, all aspects of the project, from the technology to be used to the security implications of the privacy achieved [85]. For example, in the SPHERE project, researchers created bespoke wearable devices with multiple components, many of which (e.g. second acceleration sensor, gyroscope, non-volatile flash memory, LED, button) were never used in real deployment [67]. During the REPLICATE and Twinergy project, it was found that it is essential to engage with the stakeholders of the project (e.g., the city council and citizens) at the beginning of the project, clarify their expectations, understand their needs, and translate them into requirements for data collection, processing, storage, sharing, and visualisation [63].
Once the type of data is clarified, it is essential to consider the relevant General Data Protection Regulation (GDPR) [86] implications. In the UK, the Data Protection Act 2018 implements the European GDPR. The Act introduces the terms "data controller" and "data processor" and clarifies the responsibilities around personal data collection, processing, and storage. These considerations will influence the system's design (e.g. employ mechanisms to ensure secure data collection, data anonymisation, or data destruction) and final deployment (e.g. deployment only after citizens' consent) in the subsequent development process steps. For example, the SPHERE project stored raw sensor data related to health in an external Linux Unified Key Setup (LUKS) encrypted solid-state drive (SSD) [87].
Great care must also be taken to ensure that the collected non-personal data cannot be used to infer information about individuals. For example, environmental/energy data can reveal citizens' behaviour and habits when not handled appropriately. Depending on the entities involved in the project, different actors may be interested in ensuring compliance with GDPR. Universities undergo an ethical approval process that involves a rigorous analysis of relevant implications and solutions. City councils may require a privacy impact assessment that describes the data the project aims to collect, potential privacy issues, and the related impact.
Figure 2: A graphical representation of the V-Model (modified from [84])
In addition to legal implications around data collection, special care must be taken to clarify, understand, and comply with contractual agreements (e.g. data-sharing agreements) among the project's partners. The partnership agreement should detail the data each partner aims to collect, share, or process and the purpose of this activity, including potentially generated intellectual property and monetisation. This information should also be considered when considering the project's GDPR implications.
Another data-related requirement that must be addressed in the early stages of a project is the need to integrate the data collected by the platform into other existing city data platforms (such as Bristol Open Data [88] and London Datastore [89]). Capturing integration requirements with external systems early on ensures the use of appropriate technologies and the timely delivery of the project.
Stakeholders must agree on the data requirements to ensure that the system's development follows user needs.
_Collaboration dependencies:_ Urban monitoring research projects often involve multiple partners (e.g. universities, city councils, industries) and require collaboration between different departments within and across partners (e.g. IT support, estates team). For example, project servers are usually behind the university or company firewall. The opening of ports on the firewall can take a considerable amount of time, ranging from weeks to months. The process may require multiple approvals from different entities and involves cyber security risk assessments to understand the various threats to the system and identify possible mitigation techniques. In projects with multiple collaborators, it is essential to consider these interactions and dependencies and address them during the requirements analysis period of the project.
_Clear use cases:_ Once the requirement collection has been completed, the project team must develop use cases that address the requirements [63]. Below, we provide a few examples of use cases in urban monitoring projects:
* **Use case for deployment of sensors in Citizen Home (Indoor)**: Sensors provide details about indoor pollution and help citizens take action, such as opening windows for cross-ventilation.
* **Use case for deployment of sensors in a commercial building**: Assuming that a corporate building consists of multiple floors/rooms, the building management team can consist of a Heating, Ventilation, and Air Conditioning (HVAC) team, estates teams, admin team, fire safety, and different companies occupying the offices/floor. Data can be sent to different teams depending on their requirements. For example, temperature data to the HVAC team to ensure the optimal temperature in rooms/offices; battery data to the estate's teams to ensure that the sensor batteries are replaced on time; air quality data and occupancy data to respective companies on respective floors.
* **Use case for deployment of sensors in public spaces**: When sensors are installed inside citizen homes, outdoor data (vehicle traffic, pedestrian traffic, light levels, weather, atmospheric conditions) can be compared with indoor data and provide context [90]. The sensor data can validate and train the various micro-climate weather models. Citizens can also use noise and air pollution data to decide on the suitability of buying a house in the neighbourhood.
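To make the commercial-building use case concrete, the sketch below routes each reading to team-specific topics. It is a minimal illustration under assumed names: the topic scheme, team names, and the printed output are hypothetical, and a real deployment would publish over MQTT or HTTP rather than print.

```python
# Minimal sketch: route sensor readings to team-specific topics (hypothetical names).
from typing import Dict, List

# Assumed mapping from measurement type to the teams that need it.
ROUTING: Dict[str, List[str]] = {
    "temperature": ["hvac"],
    "battery": ["estates"],
    "air_quality": ["tenant"],
    "occupancy": ["tenant"],
}

def topics_for(reading: dict) -> List[str]:
    """Return the topics a reading should be published on, e.g. 'building/3/hvac/temperature'."""
    teams = ROUTING.get(reading["type"], [])
    return [f"building/{reading['floor']}/{team}/{reading['type']}" for team in teams]

if __name__ == "__main__":
    reading = {"type": "temperature", "floor": 3, "value": 21.5}
    for topic in topics_for(reading):
        # A real deployment would publish over MQTT/HTTP; here we just print.
        print(topic, reading["value"])
```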
### _System Design_
The V-model system design phase provides a system overview; details of the different hardware, software, network protocols, applications, and logical components in the three-tier architecture mentioned in § II; and the interfaces between them. It allows system architects to define testbed requirements from the perspectives of resources, security, resilience, data, and technology. System design decisions must be based on project requirements, and the requirements can always be referred back to in order to understand and justify the design, as specified in the V-model. For simplicity, the architecture and module design are merged into the system design.

Fig. 3: Summary of challenges in smart-cities research projects
_End-to-End Security:_ Securing a testbed from end-to-end (endpoint, edge, cloud tier) is challenging. It includes the security of all devices at each tier and the communication between them, including physical and data security. Endpoint-Edge-Cloud or End-to-End testbeds should be secure by design and provide fundamental security blocks such as confidentiality, integrity, availability, and non-repudiation [76, 91]. Confidentiality requires data protection from unauthorised people; integrity requires protecting data from being altered; availability requires ensuring that authorised users can access data when needed; non-repudiation requires an assurance that authentic communication requests cannot be denied. A chain of trust is established by validating each process and component of hardware/software from the base up to the final system, including the design, manufacture, and supply chain. A dependency graph (chain of trust) can be created by examining the components and services: each trusted layer establishes trust in the next by validating it and providing the core trusted functions on which it depends. Any security weakness at a lower level compromises the security of the higher levels that depend on it, resulting in an untrusted base that undermines trust in the whole system. The roots of trust for a system are where these chains of trust originate: typically the hardware or hosting environment, the Operating System (OS), and any applications, libraries, and compilers. When building a system in secure environments, the roots may extend to the factories and supply chains for the hardware, the software design processes for the libraries, the location of manufacture, and the supply chain and delivery. Additionally, the data collected by the testbed can be sensitive, such as patient health and environmental data. Processing sensitive data using data analysis and machine-learning techniques [92] makes the testbed a target for cyber-criminals [93] and adversarial machine-learning attacks [94].
_Threat modelling - "Identify threats, threat actors and determine risk acceptance":_ The security of the testbed and the data collected is important. In projects that collect sensitive data, it is essential to understand the various threat actors, attacker models, and risks involved [95] that could compromise the security of the collected data or the testbed [96]. Creating a threat model is a crucial and challenging part of a research project and should be performed at the beginning of the project. It helps identify threats, attacks, vulnerabilities, and countermeasures that may affect the testbed infrastructure and its components. It can be performed in five main steps: defining security requirements, creating an infrastructure testbed diagram, identifying threats, mitigating threats, and validating that threats have been mitigated [97, 98, 99]. Threat modelling enables testbed administrators and architects to communicate about the security design of the testbed, analyse those designs for potential security issues, and manage mitigations for security issues.
An example of architectural considerations of threat models for our three-tier approach is presented in Fig. 4. Some key security questions arise, particularly regarding the edge and endpoint interaction. Serial Line Internet Protocol (SLIP) bridges with the coordinator endpoint node require a multi-role endpoint node, which requires separate firmware and networking behaviour for each node. The SLIP radio is the same hardware as other endpoint nodes, but its firmware needs to be developed hand in hand with the edge device networking implementation to maximise security. The computing resources of the endpoint are minimal; therefore, communication with external devices must be tested with radio connectivity in full operation.
Fig. 4 also presents the concept of an edge network. Many challenges arise from the inability of the edge network to extend beyond a single computer (i.e. tunnel interface on a single RPi SBC). In this case, it is difficult to distinguish between the edge and the cloud and their interfaces. A strong firewall must be implemented and the network separation between Local Area Network (LAN) and Wide Area Network (WAN) must be enforced at the edge site.
_Computation, hardware and physical security requirements:_ Based on the use cases and the required functionality, it is essential to determine the computation capabilities (memory, storage, Central Processing Unit (CPU)) at each tier [9]. For example, cloud tier servers have high resources such as memory (8+ GB RAM), CPU power (multiple cores), and network connectivity (Internet speeds of 50+ Mbps). Edge devices are SBCs and have fewer resources (1-8 GB RAM, single or dual-core CPU) than the cloud tier. On the other hand, endpoints are typically low in power consumption, memory (128 KB-2 MB of programmable flash and 20-512 KB of volatile RAM), and processing power (Arm Cortex-M Microcontrollers (MCU)) [25].
Furthermore, devices at each tier should provide hardware security features such as a cryptoprocessor (Trusted Platform Module (TPM)) and a hardware-based root of trust that allows secure boot, secure firmware, secure credential storage, and an encrypted file system. Secure boot prevents the loading of unauthorised software onto the device during the boot process; secure firmware ensures that only authorised code (signed images) from the manufacturer is booted. Secure boot and firmware update capabilities ensure that the device does not run unauthorised or malicious code. A crypto-processor with a random number generator enables cryptographic functions such as encryption, decryption, and key generation. However, generating random numbers in constrained embedded systems is a significant challenge due to the lack of resources and entropy. Modern endpoints provide a way to protect integrity through physically write-protected non-volatile memory with a mechanical switch: the end-user can switch to write-enable the memory for a firmware update and then write-protect the device once the update is complete.

Figure 4: Local and remote threat models originate from the bottom up and from the top down, respectively
Furthermore, the physical security of the devices is essential, as they can contain confidential data such as Personally Identifiable Information (PII), log-in credentials and network information. An attacker who can gain physical access to devices can compromise and steal confidential information. Cloud tier servers hosted inside a secure perimeter (company offices) are physically more secure than the devices at the edge and endpoint tier deployed in the field (citizens' houses, bridges, streetlamps, or roadside). A determined attacker can reach the physically insecure edge and endpoint device and compromise its security. For example, an attacker having physical access to the edge device that contains an SBC (e.g. RPi) can easily remove the Secure Digital (SD) card and read its contents containing confidential information such as passwords and data. To provide another example, the AoT node (deployed on out-of-reach streetlamps) exposes a serial cable wrapped in a protected rubber cover connected to the UART of the SBC. It can provide access to the device enabling the node's root access and allowing access to the filesystem and possibly confidential data. During the AoT project, it was found that it is essential to place edge and endpoint devices outside public reach (where possible) and protect them with spikes, locked cabinets, and tamper-proof casing.
However, once attackers have physical access to an edge or endpoint device, they can physically manipulate it to compromise it. Edge and endpoint tier devices have a large attack surface area, including exposed copper vias and unused connectors such as serial/Joint Test Action Group (JTAG) headers used for debugging. An attacker can extract confidential data and embedded firmware code from the device by physically probing signals on the exposed interfaces. Most endpoint devices carry a sticker detailing the hardware components, which can provide additional information to attackers. Devices with adequate physical and hardware security make it difficult for attackers to compromise them.
_Resilience (network, device, thermal, power and testbed)_: Edge and endpoint nodes deployed in citizen houses or public spaces connect to the Internet and cloud via home broadband, Fibre, 4G, or Wi-Fi. The average downtime of broadband per year ranges from 25.4 to 168.9 hours in the UK [100]. Suppose the edge and endpoint device sends the endpoint data directly to the cloud tier without storing it locally. In this case, data will be lost due to lack of network connectivity [20, 35, 70]. Furthermore, applications also suffer from latency problems [79] depending on the quality of the network. It is essential to have network resilience (multi-network such as Wi-Fi, 4G, LPWAN) built into the device to handle network loss and latency issues.
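A simple way to avoid losing data during uplink outages is to buffer readings locally and flush them once connectivity returns. The sketch below uses a local SQLite queue; the `upload` function is a hypothetical stand-in for whatever cloud API a given project uses.

```python
# Minimal store-and-forward sketch: queue readings in SQLite, flush when the uplink works.
import json
import sqlite3

db = sqlite3.connect("buffer.db")
db.execute("CREATE TABLE IF NOT EXISTS queue (id INTEGER PRIMARY KEY, payload TEXT)")

def upload(payload: str) -> bool:
    """Hypothetical uplink call; return True on success (e.g. an HTTPS POST to the cloud tier)."""
    return False  # pretend the network is currently down

def enqueue(reading: dict) -> None:
    db.execute("INSERT INTO queue (payload) VALUES (?)", (json.dumps(reading),))
    db.commit()

def flush() -> None:
    # Send the oldest readings first; stop at the first failure and retry later.
    rows = db.execute("SELECT id, payload FROM queue ORDER BY id").fetchall()
    for row_id, payload in rows:
        if not upload(payload):
            break
        db.execute("DELETE FROM queue WHERE id = ?", (row_id,))
    db.commit()

enqueue({"sensor": "pm25", "value": 7.1})
flush()  # nothing is lost while the connection is down
```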
Furthermore, there can be scenarios where the edge node becomes unresponsive, does not connect to cloud services, and cannot be accessed using Secure Shell (SSH). In such cases, it is good practice to build resilience into edge devices. For example, AoT [8] implemented a waggle manager to monitor the health of the SBC (temperature, current draw, digital heartbeat) and the enclosure's internal temperature and humidity. It supports changing the boot medium from SD card to Embedded MultiMediaCard (eMMC) and allows a hard and soft reset of onboard sensors. Rebooting the device often solves most problems [101]. For such failures, a mechanism to reboot the device remotely is required. For example, if the edge device has multi-network connectivity (LoRaWAN, Sigfox, NB-IoT) and is not responding, the cloud tier can use LPWAN to send a downlink packet destined for that device, instructing it to reboot the system. NFC or magnetic devices can be used to cold-reboot the device without opening the enclosure (helpful for cold-rebooting publicly deployed devices) [31]. If the devices are powered by Power-over-Ethernet (PoE), the ability to remotely power-cycle the device over PoE is preferable. The edge device should also be able to operate independently if cloud tier services are unavailable due to network issues [8].
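A minimal form of self-recovery is a connectivity watchdog that reboots the device only after a prolonged outage, once softer measures have failed. The sketch below assumes a Linux-based edge device; the cloud host name, check interval, and failure threshold are placeholders.

```python
# Minimal connectivity-watchdog sketch: reboot the edge device after repeated failed checks.
# The cloud host, thresholds, and reboot command are placeholders to adapt per deployment.
import subprocess
import time

CLOUD_HOST = "cloud.example.org"   # hypothetical cloud-tier endpoint
MAX_FAILURES = 30                  # e.g. 30 checks x 60 s = 30 minutes without connectivity

def reachable(host: str) -> bool:
    # One ICMP echo with a short timeout; ping is available on Linux-based edge devices.
    return subprocess.call(["ping", "-c", "1", "-W", "5", host],
                           stdout=subprocess.DEVNULL) == 0

failures = 0
while True:
    failures = 0 if reachable(CLOUD_HOST) else failures + 1
    if failures >= MAX_FAILURES:
        # Last resort once softer recovery (e.g. restarting the network service) has failed.
        subprocess.call(["sudo", "reboot"])
    time.sleep(60)
```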
Edge and endpoint devices generally run 24 \(\times\) 7 and are usually deployed on citizens' premises or streetlamps. Suppose that a processor-intensive application runs on the endpoint or edge and the amount of processing power used on the device is not regulated. In that case, the device can be damaged due to overheating. For example, in SPHERE houses, the Kinect camera that captures the activities in the kitchen runs 24 hours a day, processing the data. The camera becomes quite hot, reducing the device's lifespan. Edge and endpoint devices should be able to self-regulate their temperature by performing CPU throttling. For example, the RPi performs CPU throttling when the device temperature reaches 60-80 °C [102].
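Beyond the firmware's built-in throttling, an application can regulate its own workload based on the SoC temperature. The sketch below assumes a Linux SBC exposing the standard thermal sysfs file; the threshold and the workload are placeholders.

```python
# Minimal thermal self-regulation sketch for a Linux SBC (e.g. RPi):
# pause a processing loop when the SoC temperature exceeds a threshold.
import time

THERMAL_FILE = "/sys/class/thermal/thermal_zone0/temp"  # standard Linux sysfs path
MAX_TEMP_C = 70.0                                        # placeholder threshold

def soc_temperature() -> float:
    with open(THERMAL_FILE) as f:
        return int(f.read().strip()) / 1000.0  # reported in millidegrees Celsius

def process_frame() -> None:
    pass  # placeholder for the real workload (e.g. image processing)

while True:
    if soc_temperature() > MAX_TEMP_C:
        time.sleep(30)   # back off and let the device cool down
        continue
    process_frame()
```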
Another challenge is to provide electrical power to devices at the edge and endpoint tiers. Edge tier devices are usually powered by mains or battery and must be safe from an electrical safety perspective. For example, AoT is powered and installed on the streetlamp with a 110/230 V mains supply. An electrical hazard can occur should the device fall from the streetlamp or should the transformer inside the device malfunction. The edge tier devices deployed on the streetlamp can be powered by PoE to reduce electrical risks. Running on battery limits the device's capabilities. Battery lifetimes typically range from a few hours to a few days. For example, SCK kits provide a USB rechargeable battery that lasts for at least a day, depending on the sensing interval and how often the sensor data is sent (e.g. every 30 seconds or 1 minute). Additionally, solar panels can add resilience to the powering of devices.
Additionally, a testbed can contain development, staging, and production environments. A testbed environment can be compromised by an attacker, creating a cyber-security incident, often due to default credentials or misconfiguration [74]. Once the testbed is compromised, it is essential to understand the affected components, as the attacker might have installed difficult-to-detect rootkits. It is prudent to recreate
the entire testbed environment from scratch automatically. If done manually, the entire activity (setting up the VMs, configuring the applications, and ensuring that the end-to-end system is working) can take up to a week or more. To quickly recreate the testbed environment, it is essential to have version control [9], continuous integration, delivery, and automation.
_Authentication, Authorisation, Certificate Authority (CA) and secure storage of secrets:_ Testbeds consist of multiple devices and numerous applications on the cloud or at the edge for data storage, analysis and visualisation, and have multiple users/administrators accessing those applications and devices. Devices and applications should have proper authentication and authorisation, allowing trusted users to access services [103]. Authentication requires digital certificates or credentials to validate the identity of devices and users. Authorisation requires that only trusted nodes and users be able to gain network access to the testbed. As the testbed also hosts different services (such as web servers, WebSockets, and authentication servers), it is essential to have a CA in the testbed that can be used to create public-private keys and sign certificates. Different users and devices can trust the CA to secure data transmission. Further, the testbed will need to protect stored cryptographic material. The encryption keys (public/private and symmetric) and credentials are usually hardcoded in the code or stored in files. Rather than hard-coding credentials or storing them in unprotected files, they must be stored securely using a hardware security module or a key-management solution.
_Exposed services and security updates on the endpoint, edge, and cloud:_ Devices in each tier run multiple services (e.g. SSH, web servers) and are often insecure with weak authentication mechanisms. These mechanisms include using default passwords, running a vulnerable version, using old encryption methods, and misconfigured applications [74, 28]. The services exposed on the cloud, edge, and endpoint devices depend entirely on the project's requirements. Additionally, the greater the number of services, the greater the attack surface area for the attackers and the possibility of compromise. For example, the cloud can expose port 1194 (user datagram protocol (UDP)) and transmission control protocol (TCP) port 443 to provide Virtual Private Network (VPN) connectivity. The Grafana server (data visualisation) exposes port 3000. An edge node might expose port 1883 to allow communication with endpoint devices using Message Queuing Telemetry Transport (MQTT). The endpoints can also run a Web server [104]. As endpoints are resource-constrained, there is a possibility that they might be running a vulnerable version of the web server software.
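A lightweight way to keep the exposed surface under control is to periodically compare the ports that actually answer on a device against the set the project expects. In the sketch below, the host address, the expected set, and the candidate list are placeholders, and the check should only be run against infrastructure the project manages.

```python
# Minimal exposed-service audit sketch: report which TCP ports answer on a device.
import socket

EXPECTED = {22, 1883}                 # e.g. SSH and MQTT on an edge device
CANDIDATES = [22, 80, 443, 1194, 1883, 3000, 8080]

def open_ports(host: str, ports) -> set:
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1.0)
            if s.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.add(port)
    return found

exposed = open_ports("192.0.2.10", CANDIDATES)    # documentation-range address
print("unexpected open ports:", sorted(exposed - EXPECTED))
```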
There have been instances where attackers have compromised insecure services running at the cloud/edge tier. For example, an attacker compromised a cloud server providing authentication (Keycloak instance) running with default credentials and used the server for crypto-mining [74]. Alternatively, an internal attacker can connect to the insecure MQTT service running on the edge device and subscribe to the topics to collect the published data. Furthermore, a vulnerable application deployed on the cloud/edge poses a security risk.
However, such services and systems must be made secure by default. It is essential to ensure that there are no default passwords and that the OS, applications and firmware are configured securely and kept up to date. If the infrastructure contains many devices kept remotely (citizens' houses, streetlamps), upgrading software/firmware is often challenging. Software updates should have rollback functionality, so the device will return to its previous state even if the update process goes wrong. Upgrading software is comparatively easier than upgrading firmware. A poor firmware update mechanism can leave the device unusable when an update fails.
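One common way to obtain rollback for application software on an edge device is an A/B (current/previous) release layout with an atomic symlink switch. The sketch below uses assumed paths, service name, and health check; it illustrates the idea rather than describing any project's actual update mechanism.

```python
# Minimal A/B update-with-rollback sketch: install a new release alongside the old one
# and switch a symlink atomically; roll back if the post-update health check fails.
# Paths, the service name, and the health check are assumptions for illustration only.
import os
import subprocess

RELEASES = "/opt/app/releases"
CURRENT = "/opt/app/current"          # symlink the service runs from

def switch(release: str) -> None:
    tmp = CURRENT + ".new"
    os.symlink(os.path.join(RELEASES, release), tmp)
    os.replace(tmp, CURRENT)          # rename is atomic on POSIX filesystems
    subprocess.call(["sudo", "systemctl", "restart", "app.service"])

def healthy() -> bool:
    return subprocess.call(["systemctl", "is-active", "--quiet", "app.service"]) == 0

def update(new: str, old: str) -> None:
    switch(new)
    if not healthy():
        switch(old)                   # roll back to the previous known-good release
```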
For endpoints, it is recommended to have Over-the-Air (OTA) functionality to allow remote upgrade and configuration for long-term deployments in urban environments [68, 23, 33]. The inability to upgrade or configure the firmware remotely means that the code/firmware must be perfect and thoroughly tested, and no new requirements can be applied. For example, the Cotham Hill Pedestrianisation Programme wanted to measure noise pollution due to pedestrianisation. However, the deployed SCK kits took sensor readings at 60-second intervals (by default) and did not capture noise pollution correctly due to the 60-second gap. The only way to reduce the reading interval was to revisit the citizens' houses and change the settings, which disturbed the citizens. Remote management of the technology will minimise disruption for the participants.
_Data storage, reduction, access, integration and visualisation:_ Research projects require data storage, analysis, and visualisation. Data must be encrypted in transit and at rest at all tiers. Research projects often go through data protection and research ethics processes that define data collection and usage. The data owner's responsibility is to ensure data validity, quality, secure storage, access and maintenance, replication, processing, backup, and deletion policy. Having clear information and policies helps to ensure user privacy [105]. Policies should include what participant data will be acquired, where it will be stored, and how long it will be stored. User data should be deleted once the duration of data consent is over. However, research projects are often managed by postdocs and PhD students (staff joining and leaving), and it becomes challenging to ensure data deletion. For example, in university-managed research projects, access to the data is usually restricted to university premises (IT services managed machines) and provided via jump host machines with different credentials, which might require hopping through multiple networks. The difficulty in accessing the data makes data analysis challenging, resulting in researchers copying and processing the data locally, which may break user privacy and data agreements.
Further, sensitive data can attract attackers. It is ideal to identify potentially sensitive information in the collected data at the endpoint/edge tier and eliminate or limit its collection [75, 76]. Data reduction and compression methods, such as sending preprocessed data to the edge/cloud tier rather than raw data [22], can also help reduce data bandwidth and power consumption. For example, an edge tier device that measures the number of cars parked using image recognition should send only the count rather than the images to the cloud [75]. Another example would be when an endpoint only transmits the reading to the edge device when a significant change is
detected to improve the energy efficiency of battery-powered endpoints [87]. Data compression and reduction should maintain the initial data requirements required for the research project's objective.
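The send-on-change idea can be captured in a few lines as a deadband filter. The threshold below is an illustrative assumption and must be chosen so that the reduced stream still meets the project's data requirements.

```python
# Minimal send-on-change (deadband) sketch: transmit a reading only when it differs
# from the last transmitted value by more than a threshold. The 0.5 °C deadband is
# an illustrative assumption, not a recommendation.
class DeadbandFilter:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.last_sent = None

    def should_send(self, value: float) -> bool:
        if self.last_sent is None or abs(value - self.last_sent) > self.threshold:
            self.last_sent = value
            return True
        return False

f = DeadbandFilter(threshold=0.5)
for temp in [20.0, 20.1, 20.2, 21.0, 21.1]:
    if f.should_send(temp):
        print("transmit", temp)   # transmits 20.0 and 21.0 only
```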
It is a good practice to store all raw data for historical and future reference [72]. As users frequently access the collected data of the last few days, it is a good practice to separate current and historical data for better application performance [24]. For example, the 3E houses project executed SQL queries on the recorded sensor data. Over time, the query response time grew from < 1 s to > 8 s, resulting in an unresponsive display and leaving citizens less engaged [27].
From a data integration perspective, the platform should be able to integrate data streams from multiple heterogeneous data sources [106, 107, 108, 109]. Using similar data formats will allow better data interoperability [79, 85, 110]. Further, the testbed should provide an open Application Programming Interface (API) for the end-users/developers to access the data and build applications on top of that [21, 69, 79, 87, 111]. Furthermore, data transfer from the endpoint to the edge to the cloud should be reliable with minimal data loss [30, 35]. During the AoT and Cotham Hill Pedestrianisation project, it was found that providing flexible data query capabilities for users (such as extracting specific periods or a subset of measurements/nodes) is essential. Such capabilities allow the user to monitor conditions over a particular period, such as an ongoing event (e.g. a festival, severe storm, or emergency), and stream data to specific stakeholders (city-council/car-parking and others). Data should also be visualised for stakeholders using different methods (maps, line/bar charts, dashboards and others) [103].
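To illustrate the flexible-query capability, the sketch below exposes a minimal time-range endpoint with Flask. The route, parameters, and in-memory store are illustrative assumptions rather than the API of any platform cited above.

```python
# Minimal open-API sketch (Flask): query readings for a node over a time window.
from flask import Flask, jsonify, request

app = Flask(__name__)
READINGS = [  # stand-in for a real time-series database
    {"node": "n1", "ts": 1700000000, "pm25": 8.2},
    {"node": "n1", "ts": 1700000600, "pm25": 9.1},
]

@app.route("/api/v1/readings/<node>")
def readings(node):
    start = int(request.args.get("start", 0))
    end = int(request.args.get("end", 2**31))
    rows = [r for r in READINGS if r["node"] == node and start <= r["ts"] <= end]
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=5000)   # e.g. GET /api/v1/readings/n1?start=1700000000&end=1700001000
```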
_Technology compatibility, Device naming conventions and Time synchronisation:_ The testbed comprises multiple components, including hardware, software and OS, to support various services such as data storage, analysis, visualisation, authentication, and authorisation. In addition, there could be different hardware platforms such as amd64, armhf (32-bit), and arm64 architecture CPUs, graphics processing units (GPUs), and trusted execution environments (TEEs). It is vital to support standard libraries, packages (for researchers to deploy their applications on the device), and control interfaces (USB, I2C, SPI, serial) to add new hardware modules with standard network technologies (Wi-Fi, wired, Bluetooth) [8, 19]. Creating an interoperability matrix that captures the different versions of software and the OS is important. For example, Debian 11 switched to cgroup v2, which broke some applications (docker monitor) [112].
The platform can contain hundreds of thousands of endpoint and edge devices. It is essential to have a good naming convention for devices at each tier to uniquely identify both the devices and the data they generate [23, 69]. Also, all devices at each tier (cloud, edge and endpoint) must be time-synchronised for data integrity and audit log purposes [68, 71].
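A consistent naming scheme is easy to enforce in code. The sketch below derives a unique, parseable identifier per device; the field choice (project, site, tier, serial) is one possible convention, not one drawn from any of the projects above.

```python
# Minimal device-naming sketch: derive a unique, parseable identifier per device.
import re

PATTERN = re.compile(r"^[a-z0-9]+-[a-z0-9]+-(cloud|edge|endpoint)-\d{4}$")

def device_id(project: str, site: str, tier: str, serial: int) -> str:
    name = f"{project}-{site}-{tier}-{serial:04d}".lower()
    if not PATTERN.match(name):
        raise ValueError(f"invalid device id: {name}")
    return name

print(device_id("aqm", "cotham", "endpoint", 17))   # -> aqm-cotham-endpoint-0017
```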
Requirement analysis helps to understand the research project's aims and objectives. System design helps to understand how the set of requirements can be achieved. Once a higher-level system design is defined, the testbed architect can start implementing the testbed architecture, functional model [113], and how devices at the endpoint, edge, and cloud tier will be managed, provisioned, and communicate with each other [21].
### _Implementation_
The implementation phase brings challenges such as provisioning devices, ensuring secure network connectivity, credential management, application deployment, compatibility between different hardware architectures (armhf, arm64, amd64), and hardware and software accounting and monitoring. The challenges of the integration phase include ensuring that the platform is scalable, modular, extensible, adaptive, and reproducible, and that it supports heterogeneous devices, proprietary software, and different standards.
_Provisioning the cloud, edge and endpoint devices:_ Provisioning the cloud tier requires the installation and configuration of VMs on an on-premises hypervisor (Hyper-V, Proxmox, OpenStack) or with cloud hosting providers (AWS, Azure). The number of VMs depends on the services required to support the edge and endpoint tier and usually ranges from one to ten. Installing and configuring a VM is a tedious task and requires installing the OS and applications, configuring them securely, and configuring hardware allocation (e.g. RAM, CPUs, GPU passthrough). Most research projects currently provision the servers manually or using a bash script. The bash script installs the necessary packages and configures them securely. Alternatively, machine images can be packaged to support different hypervisor environments without requiring changes to the provisioning scripts and source code; examples of such platform-independent virtual-machine image creation tools are Yocto and Packer.
Provisioning edge tier devices (Intel NUC or SBC) involves installing an OS on the SD card/Hard disk drive (HDD)/eMMC, with configured software packages, and ensuring stable and secure connectivity to the cloud tier. The number of edge devices depends on the sample size of the case study, such as the number of houses or streetlamps, and can range from one to hundreds. One way to provision edge devices is to create a base kernel image containing the installed OS and applications and flash it to the edge devices. Adding the Linux kernel headers in the base image is essential because future application installations might require building a kernel module (e.g. WireGuard). Otherwise, the base image needs to be created and flashed again. For any further changes, the administrator logs in to the device using the SSH/serial console and configures it according to the requirements. Creating a base image and flashing it on multiple edge devices comes with security and administration challenges. The security challenge is that the credentials and other settings, such as the Wi-Fi SSID and hostname, will be the same on all the edge devices until changed. If one of the edge devices is compromised and the attacker obtains the credentials, they can compromise all the edge devices by performing lateral movement. The administration challenge is to log into the machine and make changes after flashing the base image. For example, deploying the edge device in the citizen's home could require changing parameters such as house number identification, Wi-Fi credentials, and IP address settings. Additionally, suppose
that the device is deployed on citizens' premises during pandemic outbreaks. In that case, minimising the time spent configuring the device is essential.
Endpoint tier devices are usually resource-constrained devices, such as SCK [114], Luftdaten [115], SensorTag [116], and Smart Plugs [117]. Endpoints are usually connected to the smart home platform or the edge device. The provisioning of endpoint devices depends on the capabilities of the device and the communication medium between the endpoint, edge, and cloud. It mainly includes configurations such as setting up the connectivity (using Wi-Fi/ZigBee/802.15.4), the MQTT server address to publish sensor data, and the time at the endpoint using Network Time Protocol (NTP). Moreover, standards such as Lightweight Machine to Machine (LWM2M) [118] have been developed to manage endpoints securely and in a standardised manner. LWM2M provides device management capabilities (remote provisioning of security credentials, firmware updates, and connectivity management) and service establishment capabilities (sensor readings, remote actuation, and endpoint device configuration). Various papers [67, 68, 33, 26] have provided lessons learnt from deploying battery-powered devices in the endpoint tier communicating over IEEE 802.15.4.
Endpoints could also be configured dynamically or bootstrapped by the device on the edge/cloud tier by providing configurations such as which endpoints are allowed to join the network, the encryption keys to encrypt the data, and the network address/port number of destination, and other settings. Additionally, communication between the endpoint and the edge must be encrypted. For example, if the endpoint connects to the edge via 802.15.4, the edge device requires a border router to communicate. If the endpoint connects to the edge via Wi-Fi, Wi-Fi encryption (WPA2) encrypts the data over the air. For example, the SPHERE [71] project deployed multiple endpoints connected using 802.15.4 in around 100 houses in Bristol and used one hard-coded encryption key per house to encrypt data over the air. They used media access control (MAC) address filtering to prevent external devices from joining the IEEE 802.15.4 Time Slotted Channel Hopping (TSCH) network.
_Endpoint-Edge-Cloud Connectivity_: From the communication perspective between devices at each tier, it is essential to use encrypted protocols for communication from endpoint to edge to cloud tier [67, 71]. Secure transmission protects against packet sniffing, man-in-the-middle attacks, replay attacks [119], and unauthorised attempts to communicate with the node.
The servers that host the cloud tier must provide services to edge tier devices and expose them on IP addresses and ports. Services could include Hypertext Transfer Protocol (HTTP), HTTPS, WebSockets, Lightweight Directory Access Protocol (LDAP), VPN, and others, and may require different ports exposed to the Internet. Testbed administrators prefer to reduce the number of ports exposed to the Internet to reduce the attack surface area, which is better from a security perspective. An example of a WSN implementation providing the connectivity points between the three tiers is presented in Fig. 5. Both sensor LPWAN nodes and cloud-addressable Uniform Resource Locator (URL)/IP services can be considered endpoints. The challenge for the edge device is to distinguish between the two directions of communication. Routing tables for packet forwarding between LAN and WAN, as well as the SLIP bridge, create complexity and are challenging to design, implement, and secure.
**Edge to Cloud Connectivity:** There are three ways to expose services hosted on the cloud tier. Firstly, by opening the ports on the cloud tier firewall. However, opening multiple ports on the firewall increases the attacker's surface area and is not preferred [120]. Second, connect the device through Demilitarized Zone (DMZ) to the cloud using a VPN [28]. However, in the case of a cyber-incident where an attacker compromises one edge tier device, they can explore and enumerate the internal network for vulnerabilities (depending on routing configuration and if the network is flat at the data-link layer). The third is to use a Software Defined Perimeter (SDP) that runs a client on the device using the authentication process. SDP defines a policy to determine who gets access to what resources and distributes access to internal applications based on a user's identity. It makes the application infrastructure invisible to the Internet, evades network-based attacks (DDoS [119], ransomware, malware, server scanning) and reduces the security risk. However, enterprise organisations often use SDP, which might be overkill for a research testbed. Furthermore, if the devices at the edge and cloud tier are in the same network connected over ethernet or Wi-Fi for demonstration purposes, edge and cloud tier devices will be in a trusted private network; VPN or firewall might not be required.
The typical way to connect edge devices to the cloud network is through a VPN. For example, if there are 50 edge devices in different houses or streetlamps, it is good from a security perspective to generate 50 unique credentials. However, more manual/scripted effort is required to create credentials and provision them to nodes. For example, the REPLICATE project used OpenVPN to provide secure connectivity and issued certificates through a CA. The administrator generated 150 credentials and stored them on a USB stick with 150 folders, one for each house. The deployment team (DT) was responsible for visiting a particular home and installing and provisioning the edge and endpoint devices. They executed a bash script on the edge tier device that installs the certificate for that house and provides secure connectivity to the cloud tier.

Figure 5: Connectivity points between the three tiers for a WSN use case
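The per-house credential workflow above lends itself to scripting so that every device receives unique material. The sketch below generates one random token per house with the standard `secrets` module and writes one folder per house; in practice each house would receive a CA-signed certificate/key pair rather than a bare token.

```python
# Minimal per-device credential sketch: one folder and one unique secret per house.
# A simplified illustration of "one unique credential per device", not the actual
# OpenVPN/CA workflow used in any project described here.
import pathlib
import secrets

OUT = pathlib.Path("credentials")

def generate(house_ids):
    for house in house_ids:
        folder = OUT / f"house-{house:03d}"
        folder.mkdir(parents=True, exist_ok=True)
        # 256-bit random token, hex-encoded; never reuse the same secret across devices.
        (folder / "token").write_text(secrets.token_hex(32))

generate(range(1, 151))   # e.g. 150 houses, one credential folder each
```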
**Endpoint to Edge Connectivity:** Endpoints are usually connected to the edge/cloud using mesh networks and LPWAN technologies. The choice of network technology depends on connectivity requirements such as range, bandwidth, power, interoperability, security, and reliability [85].
However, there are challenges when multiple endpoint devices communicate over various channels in an urban environment. An urban environment can have numerous networks such as cellular, LPWAN, mesh, and others. In a real-world deployment, connectivity between multiple devices in the vicinity of each other depends on external interference, frequency-selective multipath fading, and dynamics in the environment. The dynamics of the environment can include the number of people, the movement of people, the Wi-Fi traffic, the rooms, the layout, and the type of building materials used [36, 87]. A house deployment might initially function until further technology is deployed into a neighbouring house, causing disruptions due to radio interference. External interference can occur when a different technology or a deployment of the same technology operates within the same radio range (IEEE 802.11 Wi-Fi interferes with IEEE 802.15.4 at 2.4 GHz) [25, 121, 122]. Furthermore, in an 802.15.4 network, the mobility and activity of an endpoint can affect the throughput and data on the mesh infrastructure.
Fig. 6 presents the packet delivery ratio (PDR) calculated from packet sequence reconstruction for individual receivers in a home environment. The strength of the received signal and the packet loss patterns show the effect of mobility between rooms in the residential environment and the effect on PDR. The PDR is affected by the increased bandwidth requirements on the forwarding gateways when many packets are generated locally that require forwarding. In Fig. 6, four tags that require a fixed uplink bandwidth generated enough packets to saturate the uplink capacity allocated to the mesh network. In particular, gateway 8 is sharing uplink bandwidth with gateway 5, which is visible from the alignment of the two principal component analysis (PCA) components of PDR (g8pdr and g5pdr). In other words, gateway 8 uses gateway 5 in a mesh network topology to forward its traffic in the network. Since the available bandwidth is limited, there is a lot of packet loss in the data originating from gateway 8, making the PCA component **g8** the least significant in the overall entropy. The PDR, network usage, and packet loss have a dynamic nature in a dynamic environment [26]. For example, SPHERE has deployed a mix of network technologies such as 802.15.4 400 MHz, BLE channels 37, 38, 39, and 16 channels of 802.15.4, 5GHz Wi-Fi, and a router with an Ethernet interface. BLE packets were generated on the advertisement channels 37, 38, and 39 with an interval defined by the BLE 4.2 standard at about every 200 ms. The specification allows only a fixed interval with increments of 0.625 ms with a random delay of 0 ms to 10 ms. These packets are scanned from receivers that scan on one of the three channels at any particular time and rotate across those channels many times every second. Those packets are encapsulated in CoAP messages, which are forwarded to the 802.15.4e gateway from these intermediate receivers using a fixed uplink time-slotted schedule. The gateway uses a bridge to bring CoAP messages to a compute host using Contiki-NG [123]. Link quality is an important metric when connecting endpoint devices to the edge/cloud. When the security of the communication channel depends on the Radio-frequency (RF) channel, if an attacker gets physical access to the device or sniffs the network, they can learn the procedure for joining the network, such as the exchange of network keys. In particular, in IEEE 802.15.4, in the minimal implementation, the pattern of connecting a node to a network uses a fixed channel [124]. Information for the particular network in its formation [104] can be inferred by sniffing those 10 ms timeslots where routing is established [125].
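The PDR itself reduces to comparing the distinct sequence numbers received against the span of sequence numbers expected. The sketch below shows this calculation for a single gateway, ignoring sequence-number wrap-around and the forwarding effects discussed above.

```python
# Minimal PDR sketch: delivery ratio from received sequence numbers for one gateway,
# ignoring sequence-number wrap-around for brevity.
def packet_delivery_ratio(seqs: list[int]) -> float:
    received = len(set(seqs))                     # drop duplicates from retransmissions
    expected = max(seqs) - min(seqs) + 1          # span of sequence numbers seen
    return received / expected

print(packet_delivery_ratio([1, 2, 3, 5, 6, 9]))  # 6 received out of 9 expected ≈ 0.67
```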
**Credential Management**: After provisioning, the edge and cloud devices must be maintained and accessed occasionally. One of the ways to access the device is by SSH using authentication mechanisms or credentials such as a username, password, or digital certificates [28, 126]. The device can authenticate the user by storing the credential on the device or authenticating through a central server and storing it locally for a specific time. Using passwords is not recommended, as it allows the attacker to brute-force the username and password. Furthermore, when the password is sent to the device for authentication, it can be compromised by man-in-the-middle (MITM) attacks [120]. One preferred way of providing access is to store the administrator's public SSH keys1[127] in each of the devices. However, storing public SSH keys on the device is risky as if one of the private SSH keys is compromised, access to all edge devices may be compromised. In addition to using SSH, administrators also use remote management tools such as TeamViewer/AnyDesk to update scripts or perform functionality that requires Graphical User Interface (GUI). However, recently attackers compromised Florida City's water supply using remote access software (TeamViewer), which allowed staff to share screens and troubleshoot IT issues [128] by exploring systems from the Shodan search engine and outdated passwords.
Footnote 1: SSH has public and private keys, the public key is stored on the device, and the private key is kept with the user requiring device access.
Figure 6: Mobility of BLE tags in a house, the association of the PDR and signal strength for eight listening gateways
_Application deployment and compatibility on different systems:_ Research projects involve multiple researchers developing different applications (Python/R programs) [103] that need to be deployed on the edge device with different architectures (arm64, amd64, armhf). Researchers need to access edge device hardware (sensors, cameras, GPU) for edge processing and cloud resources for data analysis. Initially, developers work on sample data and develop applications that work fine on their machines. However, applications must be deployed on the edge and in the cloud to access real-world data. Deploying custom applications often requires installing library dependencies (e.g. pandas, scikit-learn) and may require administrative privileges, often resulting in the application not working correctly on the edge/cloud platforms.
The above results in scenarios where developers say, "It works on my machine!", leading to numerous meetings and debugging sessions to determine the root cause of the problem. Python and Linux distributions have many inter-component dependencies embedded into them. It is crucial to monitor those interdependencies and evaluate any security updates against them. Tools are being explored in the literature to evaluate those dependencies [129, 130] and provide early warning when changes lead to incompatibilities.
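One lightweight guard against such drift is to compare the package versions installed on the edge/cloud device against a pinned manifest before deployment. The sketch below uses the standard-library `importlib.metadata`; the pinned versions are placeholders.

```python
# Minimal dependency-drift check: compare installed package versions against a pinned
# manifest before deploying an application to the edge/cloud. Pinned versions below
# are placeholders.
from importlib.metadata import PackageNotFoundError, version

PINNED = {"pandas": "1.5.3", "scikit-learn": "1.2.2"}

def check(pinned: dict) -> list[str]:
    problems = []
    for package, wanted in pinned.items():
        try:
            installed = version(package)
        except PackageNotFoundError:
            problems.append(f"{package}: not installed")
            continue
        if installed != wanted:
            problems.append(f"{package}: {installed} installed, {wanted} pinned")
    return problems

for line in check(PINNED):
    print(line)
```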
Additionally, the project must always store the collected data on designated machines to comply with data protection laws and user privacy. Many applications need access to a graphics card or more memory to process the data. This requires moving the data to a more computationally capable machine, which becomes challenging due to data management guidelines. Combined with these guidelines, application incompatibility often results in either no or delayed application execution on the whole dataset. The application code also needs to be consistently deployed on devices; one way it is maintained is by using a remote git repository cloned on the device and remotely updated as a batch process [35].
_Accounting and Monitoring:_ The testbed can contain tens, hundreds, or thousands of devices on the cloud, edge and endpoint tiers. It is crucial to maintain an inventory of the devices at each tier, with their hardware and software details (make and model, OS versions, installed applications, and their versions) [72]. The OS and application versions can be used to actively monitor the National Vulnerability Database (NVD) to detect vulnerabilities and patch the system proactively. Additionally, audit logs with synchronised timestamps should be enabled and collected on a central server to support forensic investigation during cyber-security incidents. Also, it is essential to maintain details of who (i.e., which user) has logged into which machine and performed what activities for auditing purposes. However, this capability can depend on the remote management software's licence (free version/enterprise edition).
The infrastructure deployed for data collection requires that all hardware/software be working as expected and usable by researchers [127]. In addition, all endpoints must be connected to the edge, which should be connected to the cloud tier. If not, any loss of network connectivity can result in data loss. The monitoring infrastructure is essential to ensure this [31, 34, 71, 87, 131]. Monitoring includes detecting whether devices are reachable and sending regular data. Monitoring also includes checking infrastructure components (such as web servers, adequate disk space, and system overload) [132]. The monitoring infrastructure should include an effective alert mechanism (email, slack, text messages). For the endpoints deployed over 802.15.4, it is good to have statistics about energy (battery), the network (number of data/control packets, acknowledged packets), the neighbourhood (list of neighbour nodes and the link quality), and per-channel performance.

Even with careful provisioning and monitoring in place, there
is a probability that a system component will not work as expected due to hardware or software failure [24]. Debugging and finding the misbehaving piece takes considerable time and is challenging [9, 23, 77, 137]. It requires detailed logs of different system components with timestamps, understanding what triggered the logs, and ensuring that the devices generate log messages representing various failures.
Therefore, performing regular automated integration and end-to-end testing is essential to prevent such failures [73]. Additionally, components and their functionality must be well defined and have robustness and resilience built in, saving time for system administrators [71, 87, 85]. It also helps minimise the number and duration of visits to the citizen's residence to repair the system [35]. The maintainability of the infrastructure and the consistency of the interfaces between all different components [40] (such as commercial off-the-shelf (COTS) of hardware/software) can help with the resilience of the infrastructure.
_Scalability, modularity and extensibility:_ Research projects require the deployment of endpoints in multiple locations. The testbed platform is easy to manage when small and consists of only a house/streetlamp in one place. However, running a scalable trial that is supposed to scale to 100-200 houses/location becomes challenging. The system must be able to scale to tens to thousands and tens of thousands of homes/streetlamps in a reliable manner [24, 30, 43, 87]. In addition, software and hardware development occurs rapidly and can quickly become obsolete. The hardware and software components of the test bed must be designed with modularity and extensibility in mind to adapt to ever-evolving technology [24]. Hardware and software at the cloud/edge tier can be modular and extensible (for example, replacing the SBC at the edge with a newer, more powerful SBC) [21]. However, modularity, extensibility, and future-proofing at the endpoint tier is challenging because it is difficult to predict the exact requirements of future deployments and the electronics market progresses quickly. As a rule of thumb, testbed designers should follow the Keep it simple, stupid (KISS) principle [85].
_Heterogeneous devices, proprietary software, and different standards:_ Projects can have different devices on edge and endpoints generating various types of data and formats [30, 34, 75, 106, 113, 138]. For example, edge tier devices can have SBCs (GrapeBoard, RPi, Coral boards, Intel NUC). Endpoint tier devices can include different devices such as the Nordic Semiconductor nRF5340-DK2, Texas Instruments Launchpad (LAUNCHXL-CC2650/CC1310/CC1350), and TI CC2650 SensorTag. The testbed requires the devices to be securely configured and connected to the network. In addition, the endpoints used to collect data can run open-source or proprietary software [30, 34, 113]. In the case of proprietary software, it may not provide an open-source script to retrieve the sensor data and may only offer a GUI to download the data or allow it to be sent only to the endpoint manufacturer's website. In such cases, the administrator must figure out how to extract the data from the proprietary device or the manufacturer's website. Some proprietary technology may not be designed or evaluated for cybersecurity purposes. In addition, it is always difficult to evaluate and secure different network connectivity (802.15.4, BLE) in IP networks.
As there may be different devices from different vendors on the testbed, they can use various standards and formats (sending data over MQTT, HTTP, WebSocket, or a proprietary protocol), resulting in a lack of interoperability between sensors [78, 79, 113, 139, 32, 77]. It is vital to use widely adopted open standards, and ideally the same standard and format, to help reduce learning times for research personnel [21, 71].
_Ruggedization:_ Ruggedisation is essential when deploying devices in citizen houses or outside on streetlamps. For example, any edge device installed indoors/outdoors requires specific Ingress Protection Ratings (IPR) and electrical testing [68]. It must be packaged in a form that can be securely mounted [8, 31] and still easily open if a battery or component change is required. IPR define levels of sealing effectiveness of the electrical enclosure sealing against foreign body intrusion (i.e., dust) and moisture. From the electrical safety perspective, it is crucial to have a Conformite Europeenne (CE) rating (for EU/UK) or country-specific certification rating on the endpoint and edge device. The certification mark ensures that the manufacturer has verified that the products have met country-specific safety, health, or environmental requirements. For example, Bristol Urban Observatory (BUO) had difficulty installing AoT nodes in streetlamps and on the university campus because the nodes did not have CE ratings (the electrical safety certification of the USA is different from the UK). Additionally, when designing enclosures for devices that contain sensors (such as air quality), it is essential that the airflow is optimal and allows the proper functioning of the sensors on board. The enclosure should protect the electronics from moisture and insects [8]. It might be a good idea to place the sensors in a Stevenson radiation shield2 separate from the sealed waterproof electronic enclosure. Furthermore, it is recommended to identify a suitable enclosure first (accepted and visually aesthetics) and then fit the edge and endpoint device in it with minimal modification. Designing a custom casing is often challenging and more expensive than modifying a readily available casing [67]. During the Cotham Hill Pedestrianisation project, it was found that designing a 3D-printed enclosure, models, printing it, and post-processing the 3D print (cleaning up the support materials) is challenging and time-consuming.
Footnote 2: shield instruments against precipitation and direct heat radiation from outside sources while still allowing air to circulate freely around them
_Testbed adaptiveness and replicability:_ The testbed must adapt to the project requirements or to community demand, for example, changes in hardware requirements (such as a powerful graphics card, more RAM, hard disk space, or low-power processors) or in human-interaction interfaces (ways to visualise/process data). Supporting as many users as possible depends on two factors: the cost of users and experiments, and adapting the testbed to the needs of different communities [74, 140]. Also, the testbed should be reproducible using open-source software and automation, allowing other administrators to implement it using applicable documentation (e.g. wikis) and other supporting materials.
### _Operational Testing_
The next step is to develop a prototype testbed in a laboratory and a small-scale real-world environment before large-scale deployment in the wild [20, 21].
_Time resource allocation_: Time as a resource available to the testbed can be interpreted as CPU processing time at both the edge and the endpoint. Furthermore, it can be associated with radio utilisation time at the coordinating endpoint connected to the edge or to other edge nodes. The available time is governed by the data rate, which is related to the sensor sampling frequency and resolution. Monitoring tools enable observations such as CPU time use and radio usage, which is essential when scaling the testbed. To give some real-world perspective, a byte of data, when transmitted, is serialised into eight bits of 0s and 1s and sent over a medium such as wires or radio. Communication protocols are responsible for encoding/decoding the bytes and bit streams and depend on the medium's capacity in bits per second. This creates an interesting trade-off between radio use and environmental monitoring. Almost all analogue-to-digital converters support Layer 2 access control, allowing many sensors to be connected to inexpensive System on Chip (SoC) micro-controllers. This reduces the cost of the Printed Circuit Board (PCB) design by reducing the number of wire traces and the complexity. The same applies to the radios, where the MAC layer controls access to the radio medium. In both cases, consideration of time allocation applies.
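To make this time-allocation trade-off concrete, the following back-of-the-envelope sketch (plain Python; the link bitrate, frame overhead, sampling rates, and payload sizes are illustrative assumptions rather than measured values from any of the testbeds discussed) estimates how much radio airtime a small set of sensors would consume per hour.

```python
# Rough airtime estimate: every transmitted byte is serialised into 8 bits
# plus some per-frame protocol overhead, and the radio link has a fixed
# capacity in bits per second.  All numbers below are illustrative.
LINK_BITRATE_BPS = 250_000      # e.g. an 802.15.4-class link (assumed)
FRAME_OVERHEAD_BYTES = 20       # headers, addressing, CRC (assumed)

sensors = [
    # (name, samples per hour, payload bytes per sample) -- assumed values
    ("air_quality", 60, 48),
    ("temperature", 360, 8),
    ("sound_level", 3600, 16),
]

total_airtime_s = 0.0
for name, samples_per_hour, payload in sensors:
    bits_per_frame = 8 * (payload + FRAME_OVERHEAD_BYTES)
    airtime_s = samples_per_hour * bits_per_frame / LINK_BITRATE_BPS
    total_airtime_s += airtime_s
    print(f"{name:12s}: {airtime_s:7.3f} s of airtime per hour")

duty_cycle = total_airtime_s / 3600
print(f"total       : {total_airtime_s:7.3f} s/hour "
      f"({duty_cycle:.2%} duty cycle on the shared radio)")
```

Even such a crude estimate makes it easy to see when an increase in sampling frequency or payload size would start to saturate the shared radio at the coordinating endpoint.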
_Lab deployment_: The testbed will contain multiple heterogeneous devices at each tier. Each device would have different interfaces, components, applications, and services running. It is essential to ensure that the system is working as a whole [141] and securely sending the data from the endpoint to the cloud with analysis and visualisation satisfying project requirements. The platform must be deployed in a laboratory environment before being deployed on a large scale. It helps to face the challenges early on and test any new software/application internally on the testbed rather than pushing it directly into production.
Assigning a provisioning budget is essential for setting up a lab testbed, buying various spare devices and components, and conducting deployment site visits. Depending on the budget, the project scope, and the number of researchers working, it might be good to have more than one lab testbed (dev1, dev2). Multiple lab testbeds help keep work in progress even if one testbed has broken down because of a misconfiguration or a software/hardware failure. Additionally, the laboratory testbed must be set up and running as early as possible in the project to test the different devices, components, software updates and applications, so that the final real-world deployment is completed on time. Although not always possible, the testbed should be as close as possible to real environmental conditions. For example, the Living Lab project first deployed electrochemical air quality sensors using laboratory-based wall sockets; however, electromagnetic interference from the power supply caused interference in the sensors, affecting the readings when deployed in the field [20].
_Small scale real-world deployment_: Research projects often require the installation of sensors in an environment/infrastructure owned by a different party. Before a large-scale deployment, it is therefore important to carry out a small-scale deployment to understand real-world challenges and build confidence with the infrastructure owners. Devices may behave differently depending on external factors (power supply, network infrastructure, and physical environment) [9, 20]. The small-scale deployment could include one citizen house, streetlamp or vehicle. Deploying scientific infrastructure on others' infrastructure (a bridge owned by a trust, streetlamps owned by the council, a citizen's house rented or owned by tenants) requires a partnership with the respective owner [8]. There could be two separate bodies governing the infrastructure: first, the management team (MT) (board of directors, members of the C-suite) and, second, the operations team (OT) (the people managing/implementing the infrastructure). We refer to the research team (the team that deploys the infrastructure) as the DT for brevity. Fig. 7 shows the different teams and their relationships.
During multiple projects involving device deployments, it was found that it is essential to gain the MT's trust (such as citizens and the city council) and inform them about the benefits of deploying the monitoring infrastructure. They will require assurance that the DT takes their work seriously and that installing the monitoring infrastructure will not disrupt their infrastructure working in any way.
Fig. 7: Different teams involved in smart city research projects and their relationships

Once the MT is on board, the DT must work with the OT. The OT may be performing essential jobs such as keeping the city running, keeping a bridge operating, or running an electric bicycle platform. The OT of each organisation has its own key performance indicators (KPIs), processes, and structures. The challenge for DT personnel is to fit into that culture without causing problems. The DT should provide details (make, models, operation, safety, security) of the monitoring infrastructure to gain the OT's confidence and trust. The DT should experiment with the OT infrastructure without disrupting it or becoming a burden. They need to explain and set realistic expectations about the research project, what they will be doing, and how. Furthermore, the relationship between the DT and the OT should be sufficiently positive that the research team can fit into the practice of the infrastructure operations team and that the OT is happy to work with the DT.
Finally, the DT should behave safely, securely, and carefully while working with the OT. The DT must be aware of health and safety concerns [20] and respect other people's time. For example, installing sensors on other infrastructures is often cancelled for non-technical reasons (e.g. violating health and safety requirements). Installing the sensors on an initial site (first house, streetlamp) will build up the DT's confidence and relationship with the OT/MT team.
### _Implementation/Deployment (in the real world)_
Data-gathering research infrastructure can be deployed at citizens' houses, private buildings (offices), and public places (streetlamps, council vehicles). All have a different set of challenges. First, we cover the challenges faced in the deployment in citizen homes and public spaces. In addition to the deployment team (DT), we denote the community team interacting with citizens as CT. CT is often responsible for interacting with citizens and informing them about the project research objectives and results. They are the bridge between citizens and the DT.
From the perspective of citizen participation, privacy and transparency, it is also a good idea to display the data the device collects and how it is used by providing documentation near the device [20, 142]. It is also important to mention to whom the device belongs and where to contact for more information [68].
_Deployment in citizen houses_: Challenges faced by the CT can be divided into **i.** finding a way to interact with citizens **ii.** encouraging and involving them to participate in the research project **iii.** providing adequate information to citizens **iv.** maintaining regular contact with citizens.
**Finding potentially motivated citizens:** Recruitment and engagement of (potentially motivated) citizens is challenging, requires proper planning and often plenty of time. It is more manageable in areas with community cohesion or a coordinating body to promote the project [27]. Recruitment works best using various methods, from brochures and social media to door-knocking and face-to-face visits [143]. While interacting with citizens during the REPLICATE and Twinergy projects, it was found that it is essential to consider literacy rates within the pilot area and to publish information/leaflets in the local language [143] for non-native English citizens. Also, over the years, the CT often gets to know citizens from previous engagements who would be happy to participate. Local events are a good way to attract interest: the CT organises small events or runs a booth with information during open markets. Before engagement, it is essential to check whether there is a specific research project requirement, such as deploying devices in the houses of citizens with diabetes or Parkinson's disease, citizens with solar PV, or citizens in a good socio-economic situation [6]. In such cases, the CT interacts with different community groups through local community centres and social media applications, such as Facebook and Nextdoor [144]. Additionally, pandemic events such as COVID-19 make it difficult for the CT to interact with citizens.
After identifying the recruitment method to build citizen interest, it is essential to consider the larger picture and connect people to these concepts. The CT also uses creativity and art to get that message across. Involving the physical and kinaesthetic side of citizens often helps people become more involved, engaged, and excited about the research project. For example, the Knowle West Media Centre (KWMC) CT installed a booth with a workshop of craft activities to engage citizens during an open market. Once citizens are engaged and enjoying the craft activities, the CT asks for details about where they live and introduces the research project objectives. Additionally, citizens often drop out of the research study for multiple reasons, such as ill health, changes in circumstances, moving house, and occasional frustration with the technology/process [27]. Therefore, it is always good to have more participants than the project requires and to keep a few citizens in reserve.
**Citizen encouragement:** The second challenge of CT is to get citizens excited about the project. It often comes to a fundamentally simple proposition: why they (citizens) would get involved and what is in it for them. Citizen participation becomes more complicated if the project requires a power supply or Wi-Fi (which costs money to citizens). When expenses are covered, there will still be a disruption in citizen life due to the installation of devices in houses [35]. In many cases, incentives (free Wi-Fi access, free tablets, shopping vouchers, or the opportunity to win a smartphone) will not convince citizens to participate. It is essential to think carefully about how citizens can be recruited and maintain interest among them [143]. For many people, simply getting involved is a barrier. For example, Twinergy [6] requires that citizens have solar PV connected to their homes. However, citizens who have solar PV will be early adopters and tech-savvy, so they may not be interested in the project. Citizen onboarding to the research project is challenging and can involve different efforts depending on citizens' eagerness and benefits.
**Respecting citizen time and preferences:** Deploying the endpoints in a home involves connecting up the sensors (using Wi-Fi, LPWAN or mesh networks). It can take a considerable amount of time, depending on the number of endpoints to configure or connect, finding and deciding on a suitable place to keep the device, talking to the participants, and answering their questions [35]. Technology that is easy to install with little or no cabling is preferred. Radio transmission devices are preferred as citizens do not want additional cables in their staircases and dwellings [143]. During the Twinergy project, one participant decided not to install the technology because it would spoil their minimalist decor.
In the case of Wi-Fi connectivity, the DT needs the credentials (SSID and password) and can collect them through phone calls, online forms, or in person. However, managing the Wi-Fi credentials remotely often leads to issues such as participants being uncomfortable entering their password into a document, participants not knowing their Wi-Fi credentials, and mistakes made during communication (such as mistaking O for 0 (zero)). An incorrect Wi-Fi credential is only detected when the deployment takes place. In this case, the endpoints must be returned to the DT and loaded with the correct network name and password, or a visit to the participant's house is required to correct the credentials [35].
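A lightweight pre-deployment check can catch some of these transcription problems before an endpoint is flashed and shipped. The sketch below is a minimal example in plain Python; it relies only on the WPA2 rule that a passphrase has 8-63 printable ASCII characters and that an SSID is at most 32 bytes, while the list of easily confused characters is an assumption chosen for illustration.

```python
# Characters commonly mis-transcribed over the phone or on paper forms.
AMBIGUOUS = {"O": "0", "0": "O", "l": "1", "1": "l", "I": "1"}

def check_wifi_credentials(ssid: str, passphrase: str) -> list[str]:
    """Return a list of warnings about likely transcription mistakes."""
    warnings = []
    if not (1 <= len(ssid.encode()) <= 32):
        warnings.append("SSID must be 1-32 bytes long")
    if not (8 <= len(passphrase) <= 63):
        warnings.append("WPA2 passphrase must be 8-63 characters long")
    if not passphrase.isascii() or not passphrase.isprintable():
        warnings.append("passphrase contains non-printable or non-ASCII characters")
    hits = sorted({c for c in passphrase if c in AMBIGUOUS})
    if hits:
        warnings.append(
            "passphrase contains easily confused characters: "
            + ", ".join(f"'{c}' vs '{AMBIGUOUS[c]}'" for c in hits)
        )
    return warnings

if __name__ == "__main__":
    # Hypothetical credentials submitted through an online form.
    for w in check_wifi_credentials("HomeNet-24G", "pa55w0rdO1"):
        print("warning:", w)
```

Flagging ambiguous characters at collection time gives the CT a chance to re-confirm the credentials with the participant before a wasted home visit.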
Furthermore, the endpoint devices must remain placed throughout the deployment period without damaging the participant's house (delicate surfaces such as precious antique wood and wallpaper) [34, 72]. It is advised to anticipate objects and environmental conditions that can affect installation. This includes moisture, the quality of surface finishes, the typical movement of the object, and the methods of interaction of inhabitants with the object [31, 72]. Often, the citizen, pet, or robot vacuum cleaner accidentally or unknowingly disconnects the power supply to the devices, causing a failure, resulting in loss of connectivity and data [34]. Therefore, it is essential to identify the location of the device deployment at home carefully. The DT must respect the citizen's house and time [26]. The longer the DT takes at a citizen's home, the more inconvenient it is for the citizen and their regular routine [35]. Home visits of citizens for deployment and maintenance purposes must be highly optimised and efficient with preparation done beforehand [34].
Expecting user participation at all times is futile; expecting users to accurately record their activities for labelling data (such as who cooked dinner at what time) is challenging, as it requires citizens to remember and observe their lives [34].
**Device looks and deployment surroundings:** User comfort, acceptance, and the aesthetics of deployed devices are paramount for a successful deployment (especially for wearable endpoints or visible devices) [67, 85, 87]. Citizens usually prefer the devices to look aesthetically pleasing or to be hidden away. When there are deployments in a citizen's home, there must be no light emitted by devices deployed in bedrooms, as it can disturb users' sleep or affect user behaviour [67, 72]. Furthermore, LEDs also consume a good amount of energy [68]. It would be good for the endpoints to have the ability to turn the LEDs on/off, so that they can be on during debugging and off during real deployments [19]. For example, the SCK deployed in a Cotham Hill citizen's house emits red light in case of setup issues; a senior resident was concerned and asked whether it was safe to operate and posed no fire hazard. In addition, it is essential to ensure that the device does not make any noise that can affect the lives of citizens [34].
It is also essential to note the device deployment conditions or the surrounding location to understand the sensor readings [72]. For example, a temperature reading in an area with direct sunlight will vary from a temperature reading in the shade [8]. To provide another example, anomalies in the SCK noise sensor readings in the Cotham-Hill deployment were observed because of the direct sunlight on the SCK kit kept near the window. Direct sunlight leads to device heating and can affect sensor readings [22]. In public deployments, context is also essential (near an intersection, highway, garbage can, and recycling centres). It is critical to understand how local environmental conditions (indoor/outdoor/sunshine/rain/snow) will affect the deployments [68].
**Citizens switching home broadband provider:** The device installed in the house often connects to the Internet through the ethernet port of the broadband router or via Wi-Fi (which requires the broadband Wi-Fi credentials) [37, 120]. For example, in REPLICATE, the endpoint connects to the edge device using ethernet, which forwards and routes all traffic over a VPN to the smart city platform. Most edge devices are SBCs with one ethernet port and a Wi-Fi adapter. Therefore, when the ethernet port is occupied, the device must connect to the Internet via Wi-Fi.
Citizens often change their broadband provider, anywhere from every few months to once a year, leading to a change of Wi-Fi credentials (SSID and password) and a loss of Internet connectivity and data. The DT does not have any mechanism to update the Wi-Fi passphrase remotely, but can request the household owners to change the Wi-Fi password (including the SSID) back to what it was before, so that the device can reconnect to the Wi-Fi network. The other option is to plug the edge device into a monitor, attach a keyboard/mouse, provide credentials to the household owner and ask them to run the script that changes the Wi-Fi password. However, most homeowners are not tech-savvy, making such changes difficult. In addition, many citizens are unfamiliar with the technology introduced into their homes. For example, citizens might not have experience using a tablet or may have problems accessing their information via the Internet [27].
_Deployment in private buildings and public spaces_: The deployment of any devices on the city's infrastructure (buses, garbage trucks, streetlamps) requires the willingness and collaboration of the city council [31]. Similarly, deploying devices on private buildings requires the building management team's approval. During the Clifton Suspension Bridge project, it was found that it is essential to ensure that any deployed device does not hinder the functioning of the city infrastructure or private building. The power source for the deployed device must be planned (such as streetlamp power or car batteries when deployed on buses/trucks, mains powered, battery powered) [31]. It takes time and effort to secure permissions from the relevant infrastructure owners to deploy devices. Therefore, it is essential to identify the locations with the most significant impact, so that the deployed edge/endpoint provides the most value to the stakeholders of the research project [23, 35]. Suppose the device is deployed on a streetlamp and contains a downward-facing camera. In that case, it might be a good idea to mount the device at a higher position to protect it from vandalism or theft [23, 30]. This also allows an extensive view from the camera, capturing images of the entire intersection/park.
For a successful public deployment of infrastructure, policies, agreements, processes, public engagement, and interactions are necessary.
**Public engagement:** Public engagement is essential for the success of the research project. It brings city residents closer to the project and makes them active participants. It helps citizens without technology experience to discuss and learn the use of data and technology. This broader citizenry can explore and develop solutions to urban issues by proposing ideas for how collected data can be used. Community centres or community outreach help to publicise the project. There must be a named person to whom participants can go with any questions [143]. Face-to-face meetings help people identify and assign a named person to a project. Throughout the project, excellent and responsive personal support from a friendly and accessible coordinator (in the form of a building manager, a housing association contact, or even a community leader) can increase engagement. Any research project aiming to impact citizens' lives or affect behaviour change must build a relationship with participants and a deep understanding of their contexts and motivations to increase engagement and participation levels. Users must feel involved in each stage of project development and see that their participation is valued and that their input can have a real impact [143]. In addition, periodic reinforcement of the message and encouragement by contact between the neighbours and the central coordinator helps keep the motivation and the participants interested [27]. It is vital to provide ongoing support through visits, calls, and workshops, especially for those who find technology difficult or have literacy problems. Creating a relationship with participants based on trust and responsibility for communicating bad and good news [143] helps the researcher and the citizen.
Also, there is a possibility that the research projects engage with people from underserved or disadvantaged socio-economic or minority ethnic backgrounds. It is crucial not to lump them into one group. The CT must treat everyone equally and ensure that communication with the citizens is appropriate and accessible, and no one should be offended.
Furthermore, the amount of information must be provided in an easily digestible fashion (short video, infographics, a mechanism with which citizens can engage and interact) to get comfortable with the idea and not overwhelm them. The research project results depend heavily on the interaction and feedback of the participants. Hence, it is essential to ensure that easy-to-understand and straightforward messages are used to communicate with citizens (communication is key) [27]. For example, SPHERE created a 3 min animated video [145] to provide information to the citizens. Being active on social media, such as Twitter, responding to media requests for interviews, and publishing detailed information about the research project on the website/pamphlets/leaflets helps improve public perception and participation [90].
In the case of deployment in citizen houses, once citizens are on board and have signed the consent forms (ensures commitment and guarantees confidentiality), and the DT has installed the devices in their house, it is still essential to maintain regular contact with the citizens to ensure devices are working and they can use the technology and data provided for their benefit. Another minor challenge for CT is managing the signed consent forms provided to citizens for participation. Encouraging all participants to return completed questionnaires is always challenging and must be considered for any citizen attitude/behavioural analysis [27].
**Transparency:** Deploying any public infrastructure requires transparency, privacy protection, and system security. The public usually suspects publicly deployed devices based on fears about surveillance and data collected by the node [20]. It is essential to develop and provide privacy and governance policies to show the project's commitment to transparency and privacy. The privacy policy should provide what data are being collected, processed, used, destroyed, or made available to city residents. Additionally, allowing open comments from the citizens and community on the policy drafts help gain citizen confidence. DT/CT can arrange community meetings for citizens to ask questions about the draft policies. It is essential to resolve all the comments and questions publicly, consider citizen feedback for policy revision, and include a report of the public engagement process. The public/government cybersecurity centre can assess the deployed system security and privacy practices to ensure system security and gain public trust [90].
In the case of deployment at home, citizens will have questions about the different endpoints, the frequencies used, the data collected, and how the data will be used [37] and stored. At the same time, the DT requests information from the CT on the house floor plan to design/customise the sensors according to the requirements [34, 35]. This situation can often land the CT in a dilemma, as projects often decide which sensors will be deployed and which data will be collected only late in the project. Furthermore, the CT cannot tell the citizens about the sensors until the project's data and requirements are well defined. Citizens can only decide whether they want to participate in the project once they have clarity on what is collected, which means that the CT cannot provide house details to the DT. Therefore, it is better to perform a requirement analysis (Section V-A) earlier in the project to understand data collection and be transparent with citizens.
### _Operational Challenges_
Research projects also have operational challenges, which are problems that arise and can render a project less efficient.
_Skills shortage:_ A significant challenge is the shortage of people with the appropriate skill set to act as system architects in urban monitoring research projects. Research projects (a collaboration between universities, industry, and city councils) are often for 1-5 years. The people who develop and manage the urban monitoring platform are research associates and doctoral students, who mainly cover only part of the required skill set. Furthermore, students who maintain the project often work part-time due to semesters and other courses, leading to staffing problems [63]. Experience and knowledge in system administration, cloud infrastructure, networking, DevOps, and cybersecurity are required [131, 146, 147].
_Different expectations and goals:_ Research projects can have multiple partners and collaborations. Each partner can have a different set of expertise, business models, expectations and their own project agenda on how it benefits them [146].
There may be cases where collaboration priorities differ, which can create challenges in communication and work completion. Teamwork is essential for project success [9, 146]. Furthermore, research members can have other KPIs on which their managers judge their performance; if delivery of the research project is not one of them, this can affect the researcher's commitment to the project. There will always be project members who are hard-working, some who are average, and some who cause trouble; it is always good to identify the right person for the right work.
_Clear, concise communication:_ Research projects often include multiple meetings to discuss various objectives and goals of the project. It is crucial to have clearly defined agendas and final takeaways. Also, it is a good practice to invite only a few key people or technical leads to the meeting for clear and concise communication. In addition, face-to-face meetings are preferred over online discussions, especially brainstorming sessions. Things become delayed if the parties involved do not communicate clearly and concisely.
_Risk Management_: The research project should also have risk management that considers different risks to the project schedule. Risks could include COVID-19 affecting people, datasets not being available for analysis, delays in setting up the testbeds, delays in deploying devices in public spaces, and related safety issues (electrical hazards, devices falling from streetlamps), among others. Furthermore, it should include backup plans for critical personnel in case someone gets sick or leaves the project/company. Finally, suppose that the deployed devices are expected to keep working after the end of the research phase. In that case, it is essential to have a handover-takeover (HOTO) process (including hiring and transferring skills) to continue a successful project. Often, the platform and devices require some human intervention to operate [20].
_Infrastructure availability:_ There will inevitably be situations outside the control of the research team, for example, an infrastructure or global Internet outage, or installed devices being affected by the weather [20]. As another example, there is little the DT can do if the cloud tier is hosted on city-council infrastructure and an outage occurs while their main administrator is on leave. Case in point, the Internet recently suffered a significant outage of approximately one hour [148], leaving multiple cloud services unavailable.
Devices required for deployment must be purchased early. Importing devices from another country and connecting them to the home network is expensive and challenging. A significant amount of time is lost in the shipment of devices across continents, exacerbated by having to work in multiple timezones [30].
_Partnerships_: It is essential to have the support and partnership of the city council [80]. The city council officials can act as a catalyst for informing and organising discussions with other city departments (electricity board, hospitals, recycling). These other departments can update the city council about the project and ask for their input on deployment locations or how the project can support a particular department in solving its challenges. The research project, depending on its objective, can support the vision of the city plan (usually published year-by-year, such as the Bristol city plan [149], Belfast Agenda [150], Chicago Technology plan [151]) in terms of how the research project and the deployment of the public infrastructure can allow the city to use technology and data for engagement, innovation, inclusion, and opportunity.
In addition, it is essential to engage and win the confidence of city departments and employees by involving them in the project. For example, suppose that the infrastructure will be installed on city streetlamps. In that case, it is important to bring prototype units to the electrical department and seek their input on electrical safety and the mounting procedure, effectively gaining their confidence and working as a team toward a common goal.
_Logistics_: The DT should be aware of the design of the nodes, the installation procedures, the node deployment locations, and other information. In addition, they should have ownership and power to make decisions on the fly, such as moving a node to a different street corner due to a blocked view during installation. Interactions and conversations can lead to collaborations and understanding of how research data collected by public deployment can be used and integrated into existing city data platforms (such as Bristol Open Data [88], London Datastore [89]).
Furthermore, DT can create communication channels such as surveys and forms to collect the location of the node deployment, the type of data, and the problems to be solved from the project stakeholders, city departments, communities, research groups, and residents [90].
## VI Conclusion
The continued growth of wireless technologies has resulted in significant research into urban monitoring via data-gathering IoT testbeds. These research testbeds follow a typical three-tier architecture, and many designs and implementation challenges remain, including data privacy controls, network security, and device updates. We extracted these challenges and associated lessons learned by considering several real-world IoT testbed projects. We analyzed the projects in the context of the V-model development life cycle phases. We presented the project challenges and lessons learned organized by requirements analysis, system design, implementation, testing and deployment phases. We believe this will assist other urban monitoring researchers in planning future testbeds. We hope this research will prove valuable and reduce these projects' design and implementation costs.
## Acknowledgment
This work has been supported in part by EPSRC through grant no EP/P016782/1 (UKCRIC Urban Observatories) and an Industrial CASE award sponsored by BT.
|
2308.04339 | Spectral multiplicity functions of adjacency operators of graphs and
cospectral infinite graphs | The adjacency operator of a graph has a spectrum and a class of scalar-valued
spectral measures which have been systematically analyzed; it also has a
spectral multiplicity function which has been less studied. The first purpose
of this article is to review some examples of infinite graphs for which the
spectral multiplicity function of the adjacency operator has been determined.
The second purpose of this article is to show explicit examples of infinite
connected graphs which are cospectral, i.e., which have unitarily equivalent
adjacency operators, and explicit examples of infinite connected graphs which
are uniquely determined by their spectrum. | Pierre de la Harpe | 2023-08-08T15:36:14Z | http://arxiv.org/abs/2308.04339v2 | # Spectral multiplicity functions of adjacency operators of graphs and cospectral infinite graphs
###### Abstract.
The adjacency operator of a graph has a spectrum and a class of scalar-valued spectral measures which have been systematically analyzed; it also has a spectral multiplicity function which has been less studied. The first purpose of this article is to review a small number of examples of infinite graphs \(G=(V,E)\) for which the spectral multiplicity function of the adjacency operator \(A_{G}\) of \(G\) has been determined. The second purpose of this article is to show explicit examples of infinite connected graphs which are cospectral, i.e., which have unitarily equivalent adjacency operators.
Key words and phrases: Spectral graph theory, adjacency operators, spectral measure, spectral multiplicity function, unitarily equivalent operators, cospectral graphs, Jacobi matrix.
* a **scalar-valued spectral measure**\(\mu_{G}\) which is a finite Borel measure on \(\Sigma(A_{G})\), well-defined up to equivalence, sometimes viewed as a measure on \(\mathbf{R}\) with closed support \(\Sigma(A_{G})\),
* the **spectral multiplicity function**\(\mathfrak{m}_{G}\), which is a measurable function from \(\Sigma(A_{G})\) to \(\{1,2,\ldots,\infty\}\), well defined up to equality \(\mu_{G}\)-almost everywhere.
For finite graphs, spectra of adjacency operators and multiplicities of eigenvalues have been studied intensively; scalar-valued spectral measures are not an issue, because any measure on the spectrum which charges every eigenvalue is a scalar-valued spectral measure. For infinite graphs, spectra of adjacency operators have attracted a lot of attention; in contrast, spectral measures have attracted a bit less, and spectral multiplicity functions even less (though some authors have obtained precise computations of multiplicities for classes of graphs, for example for sparse trees [1]).
The first purpose of this article is to review a small number of examples known to us of infinite graphs \(G=(V,E)\) for which the spectral multiplicity function of \(A_{G}\) has been determined. In Section 2, we review various kinds of multiplication operators, the Hahn-Hellinger multiplicity theorem, and the definition of the spectral multiplicity function for a bounded self-adjoint operator. The short Section 3 states some of the standard questions on the spectral multiplicity functions of adjacency operators of graphs and mentions some facts known in the case of finite graphs.
Section 4 shows that adjacency operators have uniform multiplicity one for the infinite ray (Proposition 4.1), uniform multiplicity two for the infinite line (Proposition 4.2), and infinite uniform multiplicity for a lattice \(L_{d}\) of dimension \(d\geq 2\) (Proposition 4.7). Section 5 is a study of spherically symmetric rooted trees. For the particular case of regular trees, we need in Section 6 to recall results on operators defined by infinite Jacobi matrices. It is shown that the infinite regular rooted tree \(T_{d}^{\mathrm{root}}\) of branching degree \(d\geq 2\) and the infinite regular tree \(T_{d}\) of degree \(d\geq 3\) have infinite uniform multiplicity (Propositions 5.7 and 6.3). All this is well-known; however, since it leads to the first explicit mention I know of cospectral infinite connected graphs, it seems worthwhile to expose the complete argument in one place.
Two graphs \(G\) and \(G^{\prime}\) of bounded degree are **cospectral** if their adjacency operators have equal spectra, equivalent scalar-valued spectral measures, and spectral multiplicity functions which are equal almost everywhere; equivalently if their adjacency operators \(A_{G}\) and \(A_{G^{\prime}}\) are unitarily equivalent. (When \(G\) and \(G^{\prime}\) are finite, this boils down to the usual definition.) Examples of cospectral graphs date back to the very first papers in spectral graph theory. They include a pair of graphs on 5 vertices, a pair of connected graphs on 6 vertices, a pair of trees on 8 vertices (already in [12]), and pairs of regular connected graphs on 10 vertices; for these and much more, see [10] and [11]. The second
purpose of this article is to provide examples of cospectral infinite connected graphs. It is shown that, for any integer \(d\geq 2\), the graphs \(L_{d}\), \(T_{d^{2}}^{\rm root}\) and \(T_{d^{2}+1}\) are cospectral (Corollaries 5.8 and 6.4); note that \(L_{d}\) and \(T_{d^{2}+1}\) are Cayley graphs. Multiplets of cospectral spherically symmetric trees are shown in Example 5.9. The final Section 7 is a very short account of an uncountable family of cospectral Schreier graphs, from [GrNP-22].
I am grateful to Jonathan Breuer, Slava Grigorchuk, Chris Godsil, and Tatiana Nagnibeda, for useful indications and remarks during the writing of this paper.
## Reminder of spectral measures and the Hahn-Hellinger multiplicity theorem on bounded self-adjoint operators
In this section, we state the Hahn-Hellinger multiplicity theorem, which shows that a bounded self-adjoint operator on a separable Hilbert space is characterized up to unitary equivalence by its three fundamental invariants, the spectrum, the class of a scalar-valued spectral measure, and the spectral multiplicity function. The theorem is due to E. Hellinger in 1907 and H. Hahn in 1912; precise references can be found in [DuSc-63, Section X.6, Page 928]. Most of this carries over to bounded operators which are normal (rather than self-adjoint) and to unbounded self-adjoint operators.
All Hilbert spaces which appear in this paper are complex. The scalar product of two vectors \(\xi,\eta\) in a Hilbert space \(\mathcal{H}\) is denoted by \(\langle\xi\mid\eta\rangle\); it is linear in \(\xi\) and antilinear in \(\eta\). We use the following notation: \(\mathbf{N}=\{0,1,2,\ldots,\}\), \(\overline{\mathbf{N}}=\{0,1,2,\ldots,\infty\}\), \(\mathbf{N}^{*}=\{1,2,\ldots\}\), and \(\overline{\mathbf{N}^{*}}=\{1,2,\ldots,\infty\}\).
### Spectrum and spectral measures
Let \(\mathcal{H}\) be a Hilbert space and \(\mathcal{L}(\mathcal{H})\) the algebra of bounded linear operators on \(\mathcal{H}\). Let \(X\in\mathcal{L}(\mathcal{H})\). The **spectrum** of \(X\) is the set \(\Sigma(X)\) of \(\lambda\in\mathbf{C}\) such that \(\lambda-X\) is not invertible in \(\mathcal{L}(\mathcal{H})\). It is a compact subset of \(\mathbf{C}\), and a non-empty one unless \(\mathcal{H}=\{0\}\). Assume from now on that \(X\) is **self-adjoint**, so that \(\Sigma(X)\) is a compact subset of \(\mathbf{R}\). Denote by \(\mathcal{B}_{\Sigma(X)}\) the \(\sigma\)-algebra of Borel subsets of \(\Sigma(X)\). By the spectral theorem, there exists a **spectral projection-valued measure**\(E_{X}:\mathcal{B}_{\Sigma(X)}\to\operatorname{Proj}(\mathcal{H})\) such that \(X=\int_{\Sigma(X)}tdE_{X}(t)\). A vector \(\xi\in\mathcal{H}\) defines a **local spectral measure at \(\xi\)** on \(\Sigma(X)\), denoted by \(\mu_{\xi}\), defined by \(\mu_{\xi}(B)=\langle E_{X}(B)\xi\mid\xi\rangle\) for all \(B\in\mathcal{B}_{\Sigma(X)}\). A vector \(\xi\) is **dominant** for \(X\) if \(\mu_{\eta}\) is absolutely continuous with respect to \(\mu_{\xi}\) for all \(\eta\in\mathcal{H}\). ("Dominant vector" is the terminology of [Sim4-15, Page 306]; the terminology of [BoSm-20, Page 446] is "vector of maximal type", and that of [Dixm-69] is "separating vector" for the W\({}^{*}\)-algebra generated by \(X\).) A **scalar valued spectral measure for \(X\)** is a measure on \(\Sigma(X)\) which is equivalent to a measure \(\mu_{\xi}\), where \(\xi\) is a dominant vector for \(X\). Two scalar-valued spectral measures for \(X\) are equivalent, i.e., are absolutely continuous one with respect to the other. A vector \(\xi\in\mathcal{H}\) is **cyclic** for \(X\) if the closed linear span of \(\{X^{n}\xi\}_{n\in\mathbf{N}}\) is the whole of \(\mathcal{H}\).
We denote by \(\mathcal{B}(\Sigma(X))\) the algebra of bounded Borel-measurable functions on \(\Sigma(X)\). For \(f\) in this algebra, the operator \(f(X)\) is defined by Borel functional calculus.
**Proposition 2.1** (**existence and characterizations of dominant vectors for self-adjoint operators**).: _Let \(X\) be a bounded self-adjoint operator on a separable Hilbert space \(\mathcal{H}\). Let \(\mathcal{B}(\Sigma(X))\) and \(E_{X}\) be as above._
1. _There exist dominant vectors for_ \(X\)_. More precisely, for any_ \(\eta\in\mathcal{H}\)_, there exists a dominant vector_ \(\xi\) _for_ \(X\) _such that_ \(\eta\) _is in the closed linear span of_ \(\{X^{n}\xi\}_{n\in\mathbf{N}}\)_._
2. _A vector_ \(\xi\in\mathcal{H}\) _is dominant for_ \(X\) _if and only if, for any_ \(f\in\mathcal{B}(\Sigma(X))\)_, the equality_ \(f(X)\xi=0\) _implies_ \(f(X)=0\)_._
3. _A vector_ \(\xi\in\mathcal{H}\) _is dominant for_ \(X\) _if and only if, for any Borel subset_ \(B\) _of_ \(\Sigma(X)\)_, the equality_ \(\mu_{\xi}(B)=0\) _is equivalent to the equality_ \(E_{X}(B)=0\)_._
4. _Cyclic vectors for_ \(X\) _are dominant vectors for_ \(X\)_._
5. _If_ \(X\) _has at least one cyclic vector, dominant vectors for_ \(X\) _are cyclic vectors for_ \(X\)_._
_Suppose that \(\mathcal{H}\) has an orthonormal basis \((\varepsilon_{j})_{j\geq 1}\); let \(\mu_{j}\) denote the local spectral measure at \(\varepsilon_{j}\)._
1. _If_ \(\xi\in\mathcal{H}\) _is such that the local spectral measure_ \(\mu_{\xi}\) _dominates_ \(\mu_{j}\) _for all_ \(j\geq 1\)_, then_ \(\xi\) _is a dominant vector._
_References for the proof._ For (1) and (2), see the references cited above for dominant vectors.
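Although the article is concerned with infinite graphs, local spectral measures and dominant vectors can be illustrated numerically in finite dimensions, where \(\mu_{\xi}=\sum_{i}|\langle\xi\mid v_{i}\rangle|^{2}\,\delta_{\lambda_{i}}\) for an orthonormal eigenbasis \((v_{i})\) with eigenvalues \((\lambda_{i})\). The following sketch (NumPy; the choice of the path graph on four vertices and of the two test vectors is only an illustration and plays no role in the text) computes these weights and checks, in the case of simple spectrum, which test vectors are dominant.

```python
import numpy as np

# Adjacency matrix of the path graph on 4 vertices (a small symmetric
# example with simple spectrum, chosen only for illustration).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)

def local_spectral_measure(xi):
    """Weights mu_xi({lambda_i}) = |<xi | v_i>|^2 for each eigenvalue."""
    xi = np.asarray(xi, dtype=float)
    return np.abs(eigvecs.T @ xi) ** 2

for xi in ([1, 0, 0, 0], [1, 0, 0, -1]):
    weights = local_spectral_measure(xi)
    # With simple spectrum, xi is dominant (equivalently cyclic) exactly
    # when mu_xi charges every eigenvalue.
    dominant = np.all(weights > 1e-12)
    print("xi =", xi)
    for lam, w in zip(eigvals, weights):
        print(f"   weight at eigenvalue {lam:+.4f}: {w:.4f}")
    print("   dominant:", dominant)
```

For this matrix the first test vector charges every eigenvalue and is therefore dominant, while the second one is orthogonal to some eigenvectors and is not.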
For our analysis of lattice graphs \(L_{d}\) in Section 4, we will need the following facts on local spectral measures of some operators defined on tensor products. Let \(\mathcal{H}_{1},\mathcal{H}_{2}\) be two separable Hilbert spaces. For \(j\in\{1,2\}\), let \(X_{j}\) be a bounded self-adjoint operator on \(\mathcal{H}_{j}\); choose a vector \(\xi_{j}\) in \(\mathcal{H}_{j}\), and let \(\mu_{j}\) be the local spectral measure of \(X_{j}\) at \(\xi_{j}\); we view \(\mu_{j}\) as a finite measure on \(\mathbf{R}\) with closed support contained in \(\Sigma(X_{j})\). Let \(\operatorname{id}_{j}\) denote the identity operator on \(\mathcal{H}_{j}\). Let \(\mathcal{H}\) be the Hilbert space tensor product \(\mathcal{H}_{1}\otimes\mathcal{H}_{2}\) and let \(X\in\mathcal{L}(\mathcal{H})\) be the operator \(X_{1}\otimes\operatorname{id}_{2}+\operatorname{id}_{1}\otimes X_{2}\). It is well-known that the operator \(X\) is bounded, self-adjoint, of norm \(\|X\|=\|X_{1}\|+\|X_{2}\|\), and of spectrum
\[\Sigma(X)=\{t\in\mathbf{R}:t=r+s\,\text{ for some }\,r\in\Sigma(X_{1})\, \text{ and }\,s\in\Sigma(X_{2})\}\]
(see [1, 2]). Let \(\xi=\xi_{1}\otimes\xi_{2}\in\mathcal{H}\) and let \(\mu\) be the local spectral measure of \(X\) at \(\xi\).
**Proposition 2.3**.: _Let \(X_{1}\in\mathcal{L}(\mathcal{H}_{1})\), \(X_{2}\in\mathcal{L}(\mathcal{H}_{2})\), \(\xi_{1}\), \(\xi_{2}\), \(\mu_{1}\), \(\mu_{2}\), \(X=X_{1}\otimes\operatorname{id}_{2}+\operatorname{id}_{1}\otimes X_{2}\in \mathcal{L}(\mathcal{H}_{1}\otimes\mathcal{H}_{2})\), \(\xi\) and \(\mu\) be as above._
_Then \(\mu\) is the convolution product \(\mu_{1}\ast\mu_{2}\)._
Proof.: Recall that the convolution of two finite measures \(\nu_{1},\nu_{2}\) on \(\mathbf{R}\) is the direct image of the measure \(\nu_{1}\otimes\nu_{2}\) on \(\mathbf{R}^{2}\) by the map \(\mathbf{R}^{2}\to\mathbf{R},\,\,\,(r,s)\mapsto r+s\). For a complex-valued continuous function \(f\) on \(\mathbf{R}\) which tends to zero at infinity, we have
\[\int_{\mathbf{R}}f(t)d(\nu_{1}\ast\nu_{2})(t)=\int_{\mathbf{R}}\int_{\mathbf{R }}f(r+s)d\nu_{1}(r)d\nu_{2}(s).\]
See [1, Chapter 7].
For the next computation, observe that the operators \(X_{1}\otimes\operatorname{id}_{2}\) and \(\operatorname{id}_{1}\otimes X_{2}\) commute, and that \(\left(X_{1}\otimes\operatorname{id}_{2}\right)^{j}\left(\operatorname{id}_{1 }\otimes X_{2}\right)^{k}=X_{1}^{j}\otimes X_{2}^{k}\) for all \(j,k\geq 0\). For all \(n\in\mathbf{N}\), we have
\[\int_{\Sigma(X)}t^{n}d\mu(t) =\langle(X_{1}\otimes\operatorname{id}_{2}+\operatorname{id}_{ 1}\otimes X_{2})^{n}(\xi_{1}\otimes\xi_{2})\,|\,\xi_{1}\otimes\xi_{2}\rangle\] \[=\left\langle\sum_{j=0}^{n}\binom{n}{j}(X_{1}^{j}\otimes X_{2}^{ n-j})(\xi_{1}\otimes\xi_{2})\,\,\left|\,\,\xi_{1}\otimes\xi_{2}\right\rangle\right.\] \[=\sum_{j=0}^{n}\binom{n}{j}\langle X_{1}^{j}\xi_{1}\,|\,\xi_{1} \rangle\langle X_{2}^{n-j}\xi_{2}\,|\,\xi_{2}\rangle,\]
hence
\[\int_{\Sigma(X)}t^{n}d\mu(t) =\sum_{j=0}^{n}\binom{n}{j}\int_{\Sigma(X_{1})}r^{j}d\mu_{1}(r)\int_{\Sigma(X_{2})}s^{n-j}d\mu_{2}(s)\] \[=\int_{\Sigma(X_{1})}\int_{\Sigma(X_{2})}\left(\sum_{j=0}^{n}\binom{n}{j}r^{j}s^{n-j}\right)d\mu_{1}(r)d\mu_{2}(s)\] \[=\int_{\Sigma(X_{1})}\int_{\Sigma(X_{2})}(r+s)^{n}d\mu_{1}(r)d\mu_{2}(s)\] \[=\int_{\Sigma(X)}t^{n}d(\mu_{1}*\mu_{2})(t).\]
This shows that the moments of \(\mu\) are the same as the moments of \(\mu_{1}*\mu_{2}\). Since \(\mu\) and \(\mu_{1}*\mu_{2}\) are measures with compact support, it follows that \(\mu=\mu_{1}*\mu_{2}\).
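Proposition 2.3 can be sanity-checked numerically in finite dimensions: for two small symmetric matrices, the moments \(\langle X^{n}\xi\mid\xi\rangle\) of the local spectral measure at \(\xi=\xi_{1}\otimes\xi_{2}\) can be compared with the moments of \(\mu_{1}*\mu_{2}\) obtained through the binomial expansion used in the proof. The sketch below (NumPy; the random matrices, the vectors, and the number of moments are arbitrary illustrative choices) performs this comparison.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(0)

def random_symmetric(n):
    B = rng.standard_normal((n, n))
    return (B + B.T) / 2

X1, X2 = random_symmetric(3), random_symmetric(4)
xi1 = rng.standard_normal(3)
xi2 = rng.standard_normal(4)

# X = X1 (x) id + id (x) X2 on the tensor product, xi = xi1 (x) xi2.
X = np.kron(X1, np.eye(4)) + np.kron(np.eye(3), X2)
xi = np.kron(xi1, xi2)

def moments(A, v, N):
    """n-th moments <A^n v | v> of the local spectral measure at v."""
    out, w = [], v.copy()
    for _ in range(N + 1):
        out.append(float(w @ v))
        w = A @ w
    return out

N = 6
m  = moments(X, xi, N)
m1 = moments(X1, xi1, N)
m2 = moments(X2, xi2, N)

# Moments of mu1 * mu2, via the binomial expansion of (r+s)^n.
m_conv = [sum(comb(n, j) * m1[j] * m2[n - j] for j in range(n + 1))
          for n in range(N + 1)]

print(np.allclose(m, m_conv))   # expected: True
```

Since measures with compact support are determined by their moments, agreement of the two lists of moments is exactly what the proof above establishes.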
**Remark 2.4**.: For each positive integer \(d\), there is a similar fact which holds for operators of the form
\[X=X_{1}\otimes\operatorname{id}_{2}\otimes\cdots\otimes\operatorname{id}_{d} +\operatorname{id}_{1}\otimes X_{2}\otimes\cdots\otimes\operatorname{id}_{d} +\cdots+\operatorname{id}_{1}\otimes\operatorname{id}_{2}\otimes\cdots\otimes X _{d}\]
and local spectral measures of the form \(\mu=\mu_{1}*\mu_{2}*\cdots*\mu_{d}\).
### Multiplication operators
The simplest multiplication operators are the operators \(M\) on the Hilbert space \(L^{2}([a,b],\lambda)\), where \([a,b]\) is a non-empty compact interval of the real line, \(\lambda\) is the Lebesgue measure, and \(f:[a,b]\to\mathbf{C}\) is an essentially bounded measurable function; they are defined by \((M\xi)(t)=f(t)\xi(t)\) for all \(\xi\in L^{2}([a,b],\lambda)\) and \(t\in[a,b]\). The purpose of this subsection is to describe the generalization of these operators which is relevant for the Hahn-Hellinger theorem.
Let \(\Sigma\) be a non-empty metrizable compact space (later a non-empty compact subset of the real line). Let \(\mathcal{B}_{\Sigma}\) be the \(\sigma\)-algebra of Borel subsets of \(\Sigma\), and \(\mu\) a finite positive measure on \((\Sigma,\mathcal{B}_{\Sigma})\). Let \(\mathfrak{m}:\Sigma\to\overline{\mathbf{N}^{*}}\) be a measurable function. We consider the Hilbert space \(\ell^{2}(\mathbf{N}^{*})\) and, for \(n\in\overline{\mathbf{N}^{*}}\), the subspace \(\mathcal{K}_{n}\) of sequences \((z_{j})_{j\geq 1}\) in \(\ell^{2}(\mathbf{N}^{*})\) such that \(z_{j}=0\) for all \(j\geq n+1\) (thus \(\dim\mathcal{K}_{n}=n\) and \(\mathcal{K}_{\infty}=\ell^{2}(\mathbf{N}^{*})\)). Let \(L^{2}(\Sigma,\mu,\mathfrak{m})\) be the Hilbert space of square summable measurable functions \(\xi:\Sigma\to\ell^{2}(\mathbf{N}^{*})\) such that \(\xi(t)\in\mathcal{K}_{\mathfrak{m}(t)}\) for all \(t\in\Sigma\). The scalar product of two vectors \(\xi,\xi^{\prime}\) in this space is given by
\[\langle\xi\,|\,\xi^{\prime}\rangle_{L^{2}(\Sigma,\mu,\mathfrak{m})}=\int_{ \Sigma}\langle\xi(t)\,|\,\xi^{\prime}(t)\rangle_{\mathcal{K}_{\mathfrak{m}(t) }}d\mu(t).\]
The Hilbert space \(L^{2}(\Sigma,\mu,\mathfrak{m})\) is separable.
In more sophisticated terms, \(L^{2}(\Sigma,\mu,\mathfrak{m})\) is the Hilbert space of square summable vector fields of a \(\mu\)-measurable field of Hilbert spaces \((\mathcal{H}_{t})_{t\in\Sigma}\), where \(\mathcal{H}_{t}=\mathcal{K}_{\mathfrak{m}(t)}\) for all \(t\in\Sigma\). The space \(L^{2}(\Sigma,\mu,\mathfrak{m})\) can also be seen as a Hilbert direct sum \(\bigoplus_{n\in\overline{\mathbf{N}^{*}}}L^{2}(\Sigma_{n},\mu_{n},\mathcal{K}_{n})\), where \(\Sigma_{n}=\mathfrak{m}^{-1}(n)\), the measure \(\mu_{n}\) on \(\Sigma_{n}\) is defined by
\(\mu_{n}(B)=\mu(B)\) for all Borel sets \(B\) contained in \(\Sigma_{n}\), and \(L^{2}(\Sigma_{n},\mu_{n},\mathcal{K}_{n})\) is the Hilbert space of square-summable \(\mathcal{K}_{n}\)-valued functions on \((\Sigma_{n},\mu_{n})\). Note that \(\Sigma_{n}=\emptyset\) when \(\mathfrak{m}(t)\neq n\) for all \(t\in\Sigma\), and more generally that \(\mu_{n}=0\) and \(L^{2}(\Sigma_{n},\mu_{n},\mathcal{K}_{n})=\{0\}\) when \(\mathfrak{m}(t)\neq n\) for \(\mu\)-almost all \(t\in\Sigma\). Note also that the \(\mu_{n}\)'s can be seen as measures on \((\Sigma,\mathcal{B}_{\Sigma})\) which are pairwise singular with each other.
Let \(f:\Sigma\to\mathbf{C}\) be a measurable function. The **essential supremum** of \(f\) is the infimum \(\|f\|_{\infty}\) of the numbers \(c\geq 0\) such that \(\mu\left(\left\{t\in\Sigma:|f(t)|>c\right\}\right)=0\); the function \(f\) is **essentially bounded** if \(\|f\|_{\infty}<\infty\). The **essential range** of \(f\) is the set \(R_{f}\) of complex numbers \(\lambda\) such that \(\mu(\left\{t\in\Sigma\mid|f(t)-\lambda|<\varepsilon\right\})>0\) for all \(\varepsilon>0\); we have \(\|f\|_{\infty}=\sup\{|z|:z\in R_{f}\}\). In other words, \(R_{f}\) is the closed support of the measure \(f_{*}(\mu)\) on \(\mathbf{C}\), the push forward of \(\mu\) by \(f\), and therefore \(R_{f}\) is a closed subset of \(\mathbf{C}\), indeed a compact subset of \(\mathbf{C}\) since \(f\) is essentially bounded. Below, \(\|f\|_{\infty}\) and \(R_{f}\) will be the norm and the spectrum of a multiplication operator.
For \(z\in\mathbf{C}\) and \(\varepsilon>0\), let \(D_{\varepsilon}(z)\) denote the closed disc \(\{w\in\mathbf{C}:|w-z|\leq\varepsilon\}\). Note that \(\mu(f^{-1}(D_{\varepsilon}(z)))>0\) for all \(\varepsilon>0\) when \(z\in R_{f}\). For each \(z\in R_{f}\), the **essential pre-image**\(f_{\mu}^{-1}(z)\) is defined as the set of those \(t\in\Sigma\) for which, for every neighborhood \(V\) of \(t\) in \(\Sigma\), we have
\[\liminf_{\varepsilon\to 0}\frac{\mu\left(V\cap f^{-1}(D_{\varepsilon}(z)) \right)}{\mu\left(f^{-1}(D_{\varepsilon}(z))\right)}>0.\]
For \(z\in\mathbf{C}\smallsetminus R_{f}\), set \(f_{\mu}^{-1}(z)=\emptyset\). When \(f\) is continuous, \(f_{\mu}^{-1}(z)\) is contained in \(f^{-1}(z)\) [AbKr-73, Theorem 6]; equality need not hold [AbKr-73, Pages 853-854]. Below, the cardinalities of the essential pre-images of \(f\) will be the values of the spectral multiplicity function of a multiplication operator.
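As a simple illustration of these definitions, consider \(\Sigma=[-1,1]\) with \(\mu\) the Lebesgue measure and \(f(t)=t^{2}\). Then

\[R_{f}=[0,1],\qquad f_{\mu}^{-1}(z)=\{-\sqrt{z},\sqrt{z}\}\ \text{ for }\ 0<z\leq 1,\qquad f_{\mu}^{-1}(0)=\{0\}.\]

Indeed, for \(0<z\leq 1\) and \(\varepsilon\) small, the set \(f^{-1}(D_{\varepsilon}(z))\) consists of two short intervals of comparable measure around \(-\sqrt{z}\) and \(\sqrt{z}\), so that the liminf above is strictly positive exactly at these two points; the essential pre-image of \(\mu\)-almost every point of the essential range thus has two elements, in line with the role of essential pre-images announced above.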
Let \(f,f^{\prime}:\Sigma\to\mathbf{C}\) be two measurable functions which are equal \(\mu\)-almost everywhere; then the norms \(\|f\|_{\infty}\), \(\|f^{\prime}\|_{\infty}\) are equal, \(f,f^{\prime}\) have the same essential range, and \(f,f^{\prime}\) have the same essential pre-images. From now on, we consider such functions as being equal, and write (abusively) "function" for "equivalence class of functions modulo equality \(\mu\)-almost everywhere". The space \(L^{\infty}(\Sigma,\mu)\) of essentially bounded complex-valued functions on \((\Sigma,\mu)\) is a Banach space for the norm \(\|\cdot\|_{\infty}\). It is the dual of \(L^{1}(\Sigma,\mu)\), hence it can be considered with both its norm topology and its w\({}^{*}\)-topology (see for example [Doug-72, Theorem 1.45]).
Suppose that \(\Sigma\) is a nonempty compact subset of the line. Denote by \(\mathcal{C}(\Sigma)\) the algebra of continuous functions on \(\Sigma\), with the sup-norm, and by \(\mathcal{P}(\Sigma)\) the subalgebra of functions which are restrictions to \(\Sigma\) of polynomial functions on \(\mathbf{R}\). Then \(\mathcal{P}(\Sigma)\) is dense in \(\mathcal{C}(\Sigma)\), by the Stone-Weierstrass theorem, and the natural image of \(\mathcal{C}(\Sigma)\) in \(L^{\infty}(\Sigma,\mu)\) is w\({}^{*}\)-dense, see [Doug-72, Corollary 4.53]. It follows that \(\mathcal{P}(\Sigma)\) is w\({}^{*}\)-dense in \(L^{\infty}(\Sigma,\mu)\).
**Example 2.5**.: Let \(\Sigma\) be a non-empty metrizable compact space, and \(\mu\) a finite positive measure on this space. Let \(\mathfrak{m}:\Sigma\to\overline{\mathbf{N}^{*}}\) be a measurable function;
let the Hilbert space \(L^{2}(\Sigma,\mu,\mathfrak{m})\) be defined as above. Let \(f\in L^{\infty}(\Sigma,\mu)\); the **multiplication operator**\(M_{\Sigma,\mu,f,\mathfrak{m}}\) is defined on \(L^{2}(\Sigma,\mu,\mathfrak{m})\) by
\[(M_{\Sigma,\mu,f,\mathfrak{m}}\xi)(t)=f(t)\xi(t)\quad\text{for all}\,\,\,\xi\in L ^{2}(\Sigma,\mu,\mathfrak{m})\,\text{ and}\,\,\,t\in\Sigma.\]
When \(\mathfrak{m}\) is the constant function of value \(1\), we write \(M_{\Sigma,\mu,f}\) instead of \(M_{\Sigma,\mu,f,\mathfrak{m}}\). When \(\Sigma\) is a non-empty compact subspace of \(\mathbf{R}\) and when \(f\) is given by the inclusion \(\Sigma\subset\mathbf{R}\), we write \(M_{\Sigma,\mu,\mathfrak{m}}\) instead of \(M_{\Sigma,\mu,f,\mathfrak{m}}\).
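In the special case where \(\Sigma\subset\mathbf{R}\) is a finite set, \(\mu\) charges each of its points, \(f\) is the inclusion \(\Sigma\subset\mathbf{R}\), and \(\mathfrak{m}=\mathbf{1}_{\Sigma}\), the multiplication operator is simply a diagonal matrix, and some of the claims of Proposition 2.6 below can be checked directly. The following sketch (NumPy; the atoms, the weights, and the test vectors are arbitrary illustrative choices, not taken from the text) verifies numerically that the constant function \(1\) is cyclic when the atoms are distinct, while a function vanishing on an atom of positive measure is not.

```python
import numpy as np

# Finite model: Sigma = {t_1,...,t_n} with positive weights mu({t_i}) = w_i,
# and M = multiplication by t on L^2(Sigma, mu), i.e. a diagonal matrix.
atoms   = np.array([-1.5, -0.3, 0.2, 0.9, 2.0])    # pairwise distinct
weights = np.array([0.1, 0.3, 0.2, 0.25, 0.15])    # all > 0
M = np.diag(atoms)

def is_cyclic(xi, tol=1e-10):
    """xi is cyclic for M iff span{xi, M xi, M^2 xi, ...} is everything."""
    krylov = np.column_stack([np.linalg.matrix_power(M, k) @ xi
                              for k in range(len(atoms))])
    return np.linalg.matrix_rank(krylov, tol=tol) == len(atoms)

one = np.ones(len(atoms))                     # the constant function 1
xi0 = np.array([1.0, 1.0, 0.0, 1.0, 1.0])     # vanishes on an atom of positive weight

print("constant function 1 cyclic:", is_cyclic(one))   # expected: True
print("vanishing somewhere cyclic:", is_cyclic(xi0))   # expected: False

# The spectrum of M is the closed support of mu, here the set of atoms.
print("spectrum:", np.linalg.eigvalsh(M))
```

This is of course only a finite toy model; the point of the proposition is that the same statements hold for an arbitrary finite positive measure \(\mu\) on a compact set.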
**Proposition 2.6**.: _Let \(\Sigma\), \(\mu\), \(\mathfrak{m}:\Sigma\to\overline{\mathbf{N}^{*}}\), \(L^{2}(\Sigma,\mu,\mathfrak{m})\), \(f\in L^{\infty}(\Sigma,\mu)\), and \(M_{\Sigma,\mu,f,\mathfrak{m}}\) be as in Example 2.5. Let \(\Sigma_{\mu}\) denote the closed support of \(\mu\). Suppose now that \(f\) is a real-valued function._
1. \(M_{\Sigma,\mu,f,\mathfrak{m}}\) _is a bounded self-adjoint operator with norm_ \(\|M_{\Sigma,\mu,f,\mathfrak{m}}\|=\|f\|_{\infty}\)_._
2. _The spectrum of_ \(M_{\Sigma,\mu,f,\mathfrak{m}}\) _is the essential range_ \(R_{f}\) _of_ \(f\)_._
3. _The spectral measure_ \(E_{M_{\Sigma,\mu,f,\mathfrak{m}}}\) _is given by_ \(E_{M_{\Sigma,\mu,f,\mathfrak{m}}}(B)=M_{\Sigma,\mu,\chi_{f^{-1}(B)},\mathfrak{m}}\) _for any Borel subset_ \(B\) _of_ \(R_{f}\)_, where_ \(\chi_{f^{-1}(B)}\) _stands for the characteristic function of the inverse image of_ \(B\) _by_ \(f\)_._
4. _Any measure on_ \(\Sigma\) _equivalent to_ \(\mu\) _is a scalar-valued spectral measure for_ \(M_{\Sigma,\mu,f,\mathfrak{m}}\)_._
_(For the spectral multiplicity function of \(M_{\Sigma,\mu,f,\mathfrak{m}}\), see Proposition 2.11.)_
_Suppose that, in particular, \(\Sigma\subset\mathbf{R}\) and that \(f\) is given by the inclusion \(\Sigma\subset\mathbf{R}\); then:_
1. \(\|M_{\Sigma,\mu,\mathfrak{m}}\|=\sup\{|t|:t\in\Sigma_{\mu}\}\)_,_
2. _the spectrum of_ \(M_{\Sigma,\mu,\mathfrak{m}}\) _is_ \(\Sigma_{\mu}\)_,_
3. \(E_{M_{\Sigma,\mu,\mathfrak{m}}}(B)=M_{\Sigma,\mu,\chi_{B},\mathfrak{m}}\) _for any Borel subset_ \(B\) _of_ \(\Sigma_{\mu}\)_,_
4. \(\mu\) _is a scalar-valued spectral measure for_ \(M_{\Sigma,\mu,\mathfrak{m}}\)_._
_Suppose moreover that \(\mathfrak{m}=\mathbf{1}_{\Sigma}\) is the constant function of value \(1\), so that the operator \(M=M_{\Sigma,\mu,\mathbf{1}_{\Sigma}}\) acts on \(L^{2}(\Sigma,\mu)\)._
1. _A vector_ \(\xi\in L^{2}(\Sigma,\mu)\) _is cyclic for_ \(M\) _if and only if it is dominant for_ \(M\)_, if and only if_ \[\mu(\{t\in\Sigma:\xi(t)=0\})=0.\]
2. _In particular, the function on_ \(\Sigma\) _of constant value_ \(1\) _is a cyclic vector for_ \(M\)_._
_On the proof._ Let \(\mathfrak{m}_{\mu}\) denote the restriction of the function \(\mathfrak{m}\) to \(\Sigma_{\mu}\). The spaces \(L^{2}(\Sigma,\mu,\mathfrak{m})\) and \(L^{2}(\Sigma_{\mu},\mu,\mathfrak{m}_{\mu})\) are canonically isomorphic, and \(M\) can be seen as an operator on \(L^{2}(\Sigma_{\mu},\mu,\mathfrak{m}_{\mu})\). It follows that we can assume without loss of generality that \(\Sigma_{\mu}=\Sigma\), namely that the closed support of \(\mu\) is the whole of \(\Sigma\).
The arguments to prove Claims (1) to (4) are standard; see for example Sections 4.20 to 4.28 in [Doug-72], or any of [AbKr-73, Abra-78, Krie-86].
Let \(\xi\in L^{2}(\Sigma,\mu)\). Suppose first that the condition \(\mu(\{t\in\Sigma:\xi(t)=0\})=0\) of (5) is satisfied. We show that \(\xi\) is cyclic for \(M\); for this, we consider \(\eta\in L^{2}(\Sigma,\mu)\) orthogonal to \(M^{n}\xi\) for all \(n\in\mathbf{N}\), and we have to show that \(\eta=0\). Note that
the product \(\xi\overline{\eta}\) is in \(L^{1}(\Sigma,\mu)\), whose dual is \(L^{\infty}(\Sigma,\mu)\), because \(\xi\) and \(\eta\) are in \(L^{2}(\Sigma,\mu)\). Since \(\langle M^{n}\xi\,|\,\eta\rangle=\int_{\Sigma}t^{n}\xi(t)\overline{\eta(t)}\,d\mu(t)=0\) for all \(n\in\mathbf{N}\), we have
\[\int_{\Sigma}f(t)\xi(t)\overline{\eta(t)}d\mu(t)=0\]
for all \(f\in\mathcal{P}(\Sigma)\), and therefore also for all \(f\in L^{\infty}(\Sigma,\mu)\) because \(\mathcal{P}(\Sigma)\) is \(\mathrm{w}^{*}\)-dense in \(L^{\infty}(\Sigma,\mu)\). This implies that \(\xi\overline{\eta}=0\) in \(L^{1}(\Sigma,\mu)\), hence that \(\xi(t)\eta(t)=0\) for \(\mu\)-almost all \(t\in\Sigma\), hence by hypothesis on \(\xi\) that \(\eta(t)=0\) for \(\mu\)-almost all \(t\in\Sigma\), hence that \(\eta=0\).
As a consequence, Claim (6) is proved.
Suppose now that \(\xi\in L^{2}(\Sigma,\mu)\) is such that \(\mu(\{t\in\Sigma:\xi(t)=0\})>0\). Define a Borel function \(\chi:\Sigma\to\mathbf{C}\) by \(\chi(t)=1\) when \(t\) is such that \(\xi(t)=0\) and \(\chi(t)=0\) otherwise. Then \(\chi(M)\neq 0\), because it is the operator of multiplication by the characteristic function of a set of positive measure, while \(\chi(M)\xi=\chi\xi=0\). It follows that \(\xi\) is not dominant for \(M\).
We have shown that \(\xi\) is dominant if and only if \(\mu(\{t\in\Sigma:\xi(t)=0\})=0\). Since cyclic vectors are dominant by (4) of Proposition 2.1, and dominant vectors for \(M\) are cyclic by (5) of Proposition 2.1 and by (6), this ends the proof of (5).
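The following small numerical sketch (Python, hypothetical data) illustrates Claim (5) in the purely atomic case: for multiplication by \(t\) on a finite set of distinct atoms of positive mass, a vector is cyclic exactly when none of its coordinates vanishes. It is a toy check, not a proof.

```python
import numpy as np

# Multiplication by t on a finite set of distinct atoms (hypothetical values).
t = np.array([-2.0, -0.3, 1.0, 1.7])
M = np.diag(t)

def is_cyclic(xi, M):
    # xi is cyclic iff the Krylov vectors xi, M xi, ..., M^{n-1} xi span the space.
    n = M.shape[0]
    krylov = np.column_stack([np.linalg.matrix_power(M, k) @ xi for k in range(n)])
    return np.linalg.matrix_rank(krylov) == n

print(is_cyclic(np.array([1.0, 1.0, 1.0, 1.0]), M))   # True: no coordinate vanishes
print(is_cyclic(np.array([1.0, 0.0, 1.0, 1.0]), M))   # False: vanishes on an atom of positive mass
```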
### The Hahn-Hellinger multiplicity theorem, and spectral multiplicity functions
The following Theorem 2.7 is the keystone of Hahn-Hellinger theory.
Recall that an operator \(X_{1}\) on a Hilbert space \(\mathcal{H}_{1}\) and an operator \(X_{2}\) on a Hilbert space \(\mathcal{H}_{2}\) are **unitarily equivalent** if there exists a unitary operator (= a surjective isometry) \(U:\mathcal{H}_{1}\to\mathcal{H}_{2}\) such that \(X_{2}=UX_{1}U^{*}\).
**Theorem 2.7**.: _Any self-adjoint operator \(X\) on a separable Hilbert space \(\mathcal{H}\) is unitarily equivalent to the operator \(M_{\Sigma,\mu,\mathfrak{m}}\) defined as in Example 2.5 for the spectrum \(\Sigma=\Sigma(X)\) of \(X\), a scalar-valued spectral measure \(\mu\) for \(X\), and a measurable function \(\mathfrak{m}:\Sigma\to\{1,2,\ldots,\infty\}\)._
_Moreover, if \(\mu^{\prime}\) is a measure on \(\Sigma\) and \(\mathfrak{m}^{\prime}:\Sigma\to\{1,2,\ldots,\infty\}\) a measurable function, then \(X\) is unitarily equivalent to the multiplication operator \(M_{\Sigma,\mu^{\prime},\mathfrak{m}^{\prime}}\) if and only if the measures \(\mu,\mu^{\prime}\) are equivalent, and the functions \(\mathfrak{m},\mathfrak{m}^{\prime}\) are equal \(\mu\)-almost everywhere._
For a sample of other formulations of the theorem and for proofs, see [14, Theorem X.5.10], [15, Chap. II, SS 6], [16, Section 2.2], [17, 18], [19, Theorem 10.16 and Theorem 10.20], [20, Section 5.4], and
[21, Theorem 10.4.6].
**Corollary 2.8** (**reformulation of part of Theorem 2.7**).: _Let \(A_{1},A_{2}\) be two self-adjoint operators on Hilbert spaces \(\mathcal{H}_{1},\mathcal{H}_{2}\). Suppose that \(A_{1}\) and \(A_{2}\) have same spectrum, equivalent scalar-valued spectral measures, and spectral multiplicity functions which are equal almost everywhere._
_Then \(A_{1}\) and \(A_{2}\) are unitarily equivalent._
Let \(\mathcal{H}\), \(X\), \(\Sigma\), \(\mu\) and \(\mathfrak{m}\) be as in the previous theorem. The function \(\mathfrak{m}\) is the **spectral multiplicity function** of \(X\). The operator \(X\) is of **finite multiplicity** if there exists a finite constant \(N\) such that \(\mathfrak{m}(t)\leq N\) for \(\mu\)-almost all \(t\in\Sigma\). The operator \(X\) is **multiplicity-free**, or simple, if \(\mathfrak{m}(t)=1\) for \(\mu\)-almost all \(t\in\Sigma\), equivalently if it is unitarily equivalent to the operator of multiplication by \(t\) on the Hilbert space \(L^{2}(\Sigma,\mu)\), where \(\mu\) is a scalar-valued spectral measure on the spectrum \(\Sigma\) of \(X\). The operator \(X\) is of **uniform multiplicity**\(n\in\overline{\mathbf{N}^{*}}\) if \(\mathfrak{m}(t)=n\) for \(\mu\)-almost all \(t\in\Sigma(X)\), equivalently if \(X\) is unitarily equivalent to a direct sum \(X_{1}\oplus\cdots\oplus X_{n}\) of pairwise unitarily equivalent multiplicity-free self-adjoint operators \(X_{1},\ldots,X_{n}\).
**Proposition 2.9**.: _For a self-adjoint operator \(X\) on a separable Hilbert space \(\mathcal{H}\), the following properties are equivalent:_
1. \(X\) _is multiplicity-free,_
2. \(X\) _has a cyclic vector._
_Reference for a proof, and comments._ Suppose that \(X\) satisfies Condition (i). Let \(M_{\Sigma,\mu,\mathfrak{m}}=M_{\Sigma,\mu,\mathbf{1}_{\Sigma}}\) be as in Theorem 2.7. Then \(M_{\Sigma,\mu,\mathbf{1}_{\Sigma}}\) has a cyclic vector by (6) of Proposition 2.6, hence \(X\) has a cyclic vector.
The converse implication (ii) \(\Rightarrow\) (i) can be seen as one form of the spectral theorem; we refer to [21, Theorem 5.1.7].
For a reader who would feel uncomfortable with the fact that the definition of the spectral multiplicity function \(\mathfrak{m}\) of \(X\) depends on the unitary equivalence of Theorem 2.7, we quote now a "general principle" from [13], which provides an equivalent definition of \(\mathfrak{m}\). For \(\xi,\xi^{\prime}\in\mathcal{H}\), denote by \(\mu_{\xi,\xi^{\prime}}\) the complex measure defined on the spectrum of \(X\) by \(\mu_{\xi,\xi^{\prime}}(B)=\langle E_{X}(B)\xi\,|\,\xi^{\prime}\rangle\) for all \(B\in\mathcal{B}_{\Sigma(X)}\). Let \(\nu\) be a scalar-valued spectral measure for \(X\). Then \(\mu_{\xi,\xi^{\prime}}\) is absolutely continuous with respect to \(\nu\) and the Radon-Nikodym derivative \(\frac{d\mu_{\xi,\xi^{\prime}}}{d\nu}(t)\) is well defined for \(\nu\)-almost all \(t\in\Sigma(X)\). Let \(\mathcal{C}\) be a countable subset of \(\mathcal{H}\) of which the closed linear span is \(\mathcal{H}\).
**Proposition 2.10**.: _Let \(X\) be a self-adjoint operator on \(\mathcal{H}\) and let \(\mathfrak{m}\) be its spectral multiplicity function. Then, for \(\nu\)-almost all \(t\in\Sigma(X)\), the value \(\mathfrak{m}(t)\) is the supremum of the natural numbers \(k\) for which there exist \(\xi_{1},\ldots,\xi_{k}\) in \(\mathcal{C}\) such that_
\[\det\left(\Big{(}\frac{d\mu_{\xi_{i},\xi_{j}}}{d\nu}(t)\Big{)}_{i,j=1}^{k} \right)\neq 0.\]
The next proposition is a complement to Proposition 2.6 for the spectral multiplicity function, in a slightly simpler case (\(\mathfrak{m}=\mathbf{1}_{\Sigma}\)). We use it below in the proof of Proposition 4.7. For the proof, we refer to [1, Theorem 5].
**Proposition 2.11**.: _Let \(\Sigma\) be a non-empty metrizable compact space and \(\mu\) a finite positive measure on \(\Sigma\); assume that the closed support of \(\mu\) is the whole of \(\Sigma\). Let \(f\) be a continuous real-valued function on \(\Sigma\), viewed as \(f\in L^{\infty}(\Sigma,\mu)\). Let
\(M=M_{\Sigma,\mu,f}\) be the multiplication operator by \(f\) on \(L^{2}(\Sigma,\mu)\), as in Proposition 2.6; recall from Proposition 2.6 that the spectrum of \(M\) is the essential range \(R_{f}\)._
_Then the spectral multiplicity function \(\mathfrak{m}\) for \(M_{\Sigma,\mu,f}\) satisfies_
\[\mathfrak{m}(t)=\sharp\left(f_{\mu}^{-1}(t)\right)\]
_for \(\mu\)-almost all \(t\in\Sigma(M)=R_{f}\)._
## 3. **On spectral multiplicity of adjacency operators of graphs**
Let \(G=(V,E)\) be a countable graph of bounded degree and let \(A_{G}\in\mathcal{L}(\ell^{2}(V))\) be its adjacency operator. When \(G\) is infinite and connected, very little seems to be known about the spectral multiplicity function \(\mathfrak{m}_{G}\) of \(A_{G}\). For example:
**Questions 3.1**.: What are the infinite connected graphs \(G\) for which \(A_{G}\) is of finite uniform multiplicity? or of finite multiplicity? or multiplicity-free?
These questions are equally open in the particular case of Cayley graphs of finitely generated groups.
For adjacency operators of finite graphs, the multiplicity of eigenvalues has long received attention. As a sample of what is known, let us quote two results on graphs all of whose eigenvalues are simple. The prototypes of such graphs are the finite paths, whose eigenvalues are well known and all simple. The tables of [13] list, among connected graphs with at most 5 vertices and among trees with at most 10 vertices, all those whose eigenvalues are all simple.
If \(G\) is a finite graph such that all eigenvalues of \(A_{G}\) are simple, then every automorphism of \(G\) has order at most 2; more precisely, the automorphism group of \(G\) is an elementary abelian 2-group ([14], see also [1, Corollary 1.6.1]).
Let \(G=(V,E)\) be a finite graph with \(n=|V|\) vertices and \(A_{G}\) its adjacency matrix. We denote by \(\mathbf{1}_{V}\) the vector in \(\ell^{2}(V)\) defined by \(\mathbf{1}_{V}(v)=1\) for all \(v\in V\). Say \(G\) is **controllable** if \(\mathbf{1}_{V}\) is a cyclic vector for \(A_{G}\). It is conjectured in [15] and proved in [16] that _almost all finite graphs are controllable, and therefore multiplicity-free._ This is made precise as follows. Consider a positive integer \(n\) and a probability \(p\in\left]0,1\right[\). Let \(\mathcal{G}(n,p)\) be the set of all graphs with vertex set \(\left\{1,2,\ldots,n\right\}\) having \(\left\lfloor p\binom{n}{2}\right\rfloor\) edges. Let \(\mathcal{MFG}(n,p)\) be the subset of \(\mathcal{G}(n,p)\) of multiplicity-free graphs. We denote by \(\sharp S\) the cardinality of a set \(S\).
**Theorem 3.2** (**Godsil, O'Rourke-Touri**).: _With the notation above,_
\[\lim_{n\to\infty}\sharp\mathcal{MFG}(n,p)/\sharp\mathcal{G}(n,p)=1.\]
What can be said about infinite graphs determined by their spectra, namely infinite graphs of bounded degree \(G\) such that any graph of bounded degree with adjacency operator unitarily equivalent to the adjacency operator of \(G\) is isomorphic to \(G\)? For this question on finite graphs, see [13].
## 4. **The infinite ray, the infinite line, and the lattices**
Let again \(G=(V,E)\) be a graph, \((\delta_{v})_{v\in V}\) the standard orthonormal basis of the Hilbert space \(\ell^{2}(V)\), and \(A_{G}\) its adjacency operator on \(\ell^{2}(V)\). A vertex \(v\in V\) is **dominant** if the vector \(\delta_{v}\) is dominant for \(A_{G}\), and \(v\) is **cyclic** if the vector \(\delta_{v}\) is cyclic for \(A_{G}\). The **vertex spectral measure at \(v\in V\)** is the local spectral measure at \(\delta_{v}\) on the spectrum \(\Sigma(A_{G})\) of \(A_{G}\).
The **infinite ray** is the graph \(R\) with vertex set \(\mathbf{N}=\{0,1,2,3,\ldots\}\) and edge set \(E=\{\{j,j+1\}:j\in\mathbf{N}\}\). The adjacency operator \(A_{R}\) of \(R\) is defined by
\[(A_{R}\xi)(u)=\xi(u-1)+\xi(u+1)\quad\text{ for all }\,\xi\in\ell^{2}(\mathbf{N })\,\text{ and }\,u\in\mathbf{N},\]
where \(\xi(-1)\) should be read as \(0\). With respect to the standard basis \(\left(\delta_{n}\right)_{n\in\mathbf{N}}\) of the Hilbert space \(\ell^{2}(\mathbf{N})\) the adjacency operator \(A_{R}\) is the **free Jacobi matrix**\(J\):
\[A_{R}=J=\begin{pmatrix}0&1&0&0&0&\cdots\\ 1&0&1&0&0&\cdots\\ 0&1&0&1&0&\cdots\\ 0&0&1&0&1&\cdots\\ 0&0&0&1&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}.\]
The following proposition collects standard results on \(J\).
**Proposition 4.1**.: _Let \(R\) be the infinite ray and let \(A_{R}=J\) be its adjacency operator._
1. _The norm of_ \(A_{R}\) _is_ \(2\)_._
2. _The spectrum of_ \(A_{R}\) _is_ \([-2,2]\)_._
3. _The vertex spectral measure of_ \(A_{R}\) _at_ \(0\) _is given by_ \(d\mu(t)=\frac{1}{2\pi}\sqrt{4-t^{2}}dt\) _for_ \(t\in[-2,2]\)_; it is a scalar-valued spectral measure for_ \(A_{R}\)_._
4. _The vertex_ \(0\) _is cyclic in the graph_ \(R\) _and the operator_ \(A_{R}\) _is multiplicity-free._
Proof.: The strategy of the proof is to view \(J\) as the matrix of an operator of multiplication by \(t\) on a Hilbert space of functions on \([-2,2]\) with respect to an appropriate basis of orthogonal polynomials. For some background on orthogonal polynomials and their relations with Jacobi matrices, see [Sim4-15, Section 4.1].
Consider the sequence \(\left(P_{n}\right)_{n=0}^{\infty}\) of functions defined on the interval \([-2,2]\) of the real line by
\[P_{n}(2\cos\theta)=\frac{\sin((n+1)\theta)}{\sin(\theta)}.\]
Note that \(P_{0}(t)=1\), \(P_{1}(t)=t\), \(P_{2}(t)=t^{2}-1\), for all \(t\in[-2,2]\). Define \(P_{-1}\) to be the zero function. From the trigonometric formula
\[\sin((n-1)\theta)+\sin((n+1)\theta)=2\cos(\theta)\sin(n\theta),\]
it follows that
\[tP_{n-1}(t)=P_{n-2}(t)+P_{n}(t)\quad\text{for all}\,\,\,n\geq 1. \tag{4.1}\]
This implies, by induction on \(n\), that \(P_{n}\) is a polynomial, of the form \(P_{n}(t)=t^{n}+(\text{lower order terms})\) for all \(n\geq 0\).
(The \(P_{n}\)'s are Chebychev polynomials, up to a scale change. More precisely, if \(U_{n}(t)\) denotes the Chebychev polynomial of the second kind of degree \(n\), defined by \(U_{n}(\cos\theta)=\sin((n+1)\theta)/\sin(\theta)\), then \(P_{n}(t)=U_{n}(t/2)\).)
Define a probability measure \(\mu\) on \([-2,2]\) by
\[d\mu(t)=\frac{1}{2\pi}\sqrt{4-t^{2}}\:dt\quad\text{ for}\,\,\,t\in[-2,2].\]
Let \(m,n\geq 0\); using the change of variables \(t=2\cos(\theta)\), we compute
\[\int_{-2}^{2}P_{m}(t)P_{n}(t)d\mu(t)=\frac{1}{2\pi}\int_{-2}^{2}P_{m}(t)\sqrt{4-t^{2}}\:P_{n}(t)\sqrt{4-t^{2}}\:\frac{dt}{\sqrt{4-t^{2}}}\] \[\qquad=\frac{1}{2\pi}\int_{0}^{\pi}P_{m}(2\cos(\theta))2\sin(\theta)\:P_{n}(2\cos(\theta))2\sin(\theta)\:d\theta\] \[\qquad=\frac{2}{\pi}\int_{0}^{\pi}\sin((m+1)\theta)\:\sin((n+1)\theta)\:d\theta\] \[\qquad=\frac{1}{\pi}\int_{0}^{\pi}\Big{[}\cos\big{(}(m+1)\theta-(n+1)\theta\big{)}-\cos\big{(}(m+1)\theta+(n+1)\theta\big{)}\Big{]}d\theta\] \[\qquad=\begin{cases}0&\text{if }m\neq n,\\ 1&\text{if }m=n.\end{cases}\]
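Before continuing, here is a quick numerical double check of this orthonormality computation (a Python sketch with an arbitrary discretization; it is not part of the proof).

```python
import numpy as np

# Build P_0,...,P_5 by the recursion (4.1), P_n = t P_{n-1} - P_{n-2}, and compute
# their Gram matrix for d mu(t) = (1/2pi) sqrt(4 - t^2) dt on [-2,2] by a Riemann sum.
t = np.linspace(-2, 2, 1_000_001)
dt = t[1] - t[0]
w = np.sqrt(4 - t**2) / (2 * np.pi)
P = [np.ones_like(t), t.copy()]
for n in range(2, 6):
    P.append(t * P[n - 1] - P[n - 2])
G = np.array([[np.sum(P[m] * P[n] * w) * dt for n in range(6)] for m in range(6)])
print(np.allclose(G, np.eye(6), atol=1e-3))   # True: the Gram matrix is (numerically) the identity
```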
It follows that \((P_{n})_{n\geq 0}\) is an orthonormal basis of \(L^{2}([-2,2],\mu)\). If \(M_{\mu}\) denotes the operator of multiplication by \(t\) on this space, we have by (4.1) above
\[M_{\mu}P_{n}=P_{n-1}+P_{n+1}\quad\text{ for all}\,\,\,n\geq 0, \tag{4.2}\]
where \(P_{-1}\) should be read as \(0\).
This shows that \(J\) is the matrix of \(M_{\mu}\) with respect to the basis \((P_{n})_{n\geq 0}\). The claims of Proposition 4.1 follow therefore from the corresponding facts of Proposition 2.6.
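As a complement, the moments of the measure \(\mu\) above are the Catalan numbers, and \(\langle J^{2m}\delta_{0}\,|\,\delta_{0}\rangle\) counts the closed walks of length \(2m\) at the vertex \(0\) of the ray; these are standard facts, not used in the text, but they give an easy numerical cross-check of Proposition 4.1 (3). The following Python sketch (with an arbitrary truncation size) compares the two computations.

```python
import numpy as np
from math import comb

N = 64                                            # truncation size, large enough for m <= 5
J = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)   # truncated free Jacobi matrix

t = np.linspace(-2, 2, 2_000_001)
dt = t[1] - t[0]
for m in range(6):
    walks = int(round(np.linalg.matrix_power(J, 2 * m)[0, 0]))          # closed walks at vertex 0
    moment = np.sum(t**(2 * m) * np.sqrt(4 - t**2) / (2 * np.pi)) * dt  # moment of the measure mu
    catalan = comb(2 * m, m) // (m + 1)
    print(m, walks, round(moment, 3), catalan)    # the three values agree
```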
The **line** is the graph \(L\) with vertex set \(\mathbf{Z}=\{\ldots,-1,0,1,\ldots\}\) and edge set \(E=\{\{j,j+1\}:j\in\mathbf{Z}\}\). The line can be seen as the Cayley graph of the infinite cyclic group \(\mathbf{Z}\) generated by \(\{1,-1\}\). The adjacency operator \(A_{L}\) of \(L\) is defined by
\[(A_{L}\xi)(u)=\xi(u-1)+\xi(u+1)\quad\text{ for all}\,\,\,\xi\in\ell^{2}( \mathbf{Z})\,\,\,\text{and}\,\,\,u\in\mathbf{Z}.\]
The following proposition can be viewed as an exercise in Fourier series.
**Proposition 4.2**.: _Let \(L\) be the infinite line and let \(A_{L}\) be its adjacency operator._
1. _The norm of_ \(A_{L}\) _is_ \(2\)_._
2. _The spectrum of_ \(A_{L}\) _is_ \([-2,2]\)_._
3. _For all_ \(j\in\mathbf{Z}\)_, the vertex spectral measure_ \(\mu_{j}\) _of_ \(A_{L}\) _at_ \(j\) _is given by its density with respect to the Lebesgue measure:_ \[d\mu_{j}(t)=\frac{1}{\pi\sqrt{4-t^{2}}}dt\quad\text{for }\,t\in[-2,2];\] _it is a scalar-valued spectral measure for_ \(A_{L}\)_._
4. _The operator_ \(A_{L}\) _is of uniform multiplicity_ \(2\)_._
The proof of (1) to (3) is after Lemma 4.3 and the proof of (4) is after Example 4.5. For a different proof of (2), using a multiplication operator in the Hardy space \(H^{2}\), see [14, Lemma 12].
**Lemma 4.3**.: _The adjacency operator \(A_{L}\) of the line \(L\) is unitarily equivalent to the operator \(M_{[0,2\pi],\lambda,2\cos}\) of multiplication by the function \(2\cos\) on the Hilbert space \(L^{2}([0,2\pi],\lambda)\), where \(\lambda\) stands for the Lebesgue measure on the interval._
Proof.: The Fourier transform
\[U:\ell^{2}(\mathbf{Z})\to L^{2}([0,2\pi],\lambda),\quad\,(U\xi)(t)=\sum_{n\in \mathbf{Z}}\xi(n)e^{int}\]
is a bijection (the normalized map \((2\pi)^{-1/2}U\) is a surjective isometry) with inverse
\[U^{-1}:\,L^{2}([0,2\pi],\lambda)\to\ell^{2}(\mathbf{Z}),\quad\,(U^{-1}\eta)(n )=\frac{1}{2\pi}\int_{0}^{2\pi}\eta(t)e^{-int}dt.\]
For any \(\eta\in L^{2}([0,2\pi],\lambda)\), we have
\[\left(UA_{L}U^{-1}\eta\right)(t) =\sum_{n\in\mathbf{Z}}\left(A_{L}U^{-1}\eta\right)(n)e^{int}\] \[=\sum_{n\in\mathbf{Z}}\Big{(}\left(U^{-1}\eta\right)(n-1)e^{int} +\left(U^{-1}\eta\right)(n+1)e^{int}\Big{)}\] \[=\Big{(}U(U^{-1}\eta)\big{)}(t)e^{it}+\big{(}U(U^{-1}\eta)\big{)} (t)e^{-it}=2\cos(t)\eta(t)\]
for all \(t\in[0,2\pi]\), so that
\[UA_{L}U^{-1}=M_{[0,2\pi],\lambda,2\cos}\]
as was to be proved. Since conjugation by \(U\) coincides with conjugation by the unitary operator \((2\pi)^{-1/2}U\), the lemma follows.
Proof of (1) to (3) of Proposition 4.2.: For (1) and (2), use Lemma 4.3: the norm of \(A_{L}\) is the norm of \(M_{[0,2\pi],\lambda,2\cos}\), which is \(\sup_{0\leq t\leq 2\pi}|2\cos(t)|=2\), and the spectrum of \(A_{L}\) is the spectrum of \(M_{[0,2\pi],\lambda,2\cos}\), which is the essential range of the function \(2\cos\), namely \([-2,2]\).
(3) Let \(j\in\mathbf{Z}\), viewed as a vertex of \(L\). The vertex spectral measure \(\mu_{j}\) at \(j\) is defined by
\[\int_{[-2,2]}f(t)d\mu_{j}(t)=\langle f(A_{L})\delta_{j}\,|\,\delta_{j}\rangle\]
for all continuous function \(f\) on the spectrum of \(A_{G}\). For \(n\in\mathbf{N}\), its \(n^{\text{th}}\) moment is
\[\int_{[-2,2]}t^{n}d\mu_{j}(t)=\langle(A_{L})^{n}\delta_{j}\,|\,\delta_{j}\rangle.\]
This number is also the number of closed walks of length \(n\) from \(j\) to \(j\) in the graph \(L\). When \(n\) is odd, this number is clearly \(0\). When \(n=2m\) is even, each such walk has \(m\) left steps and \(m\) right steps, so that this number is the binomial coefficient \(\binom{2m}{m}\).
The moments of the measure \(\frac{dt}{\pi\sqrt{4-t^{2}}}\) on \([-2,2]\) are also easy to compute. Moments of odd order vanish, because \(\int_{-2}^{2}\frac{f(t)}{\pi\sqrt{4-t^{2}}}dt=0\) when \(f\) is an odd function, in particular when \(f(t)=t^{2m+1}\) for some \(m\in\mathbf{N}\). For moments of even order \(2m\), using again the change of variables \(t=2\cos\theta\), we have
\[\int_{-2}^{2}\frac{t^{2m}}{\pi\sqrt{4-t^{2}}}\,dt =\frac{1}{\pi}\int_{0}^{\pi}\frac{(2\cos\theta)^{2m}}{2\sin\theta }2\sin\theta\;d\theta=\frac{1}{\pi}\int_{0}^{\pi}\left(e^{i\theta}+e^{-i\theta }\right)^{2m}\;d\theta\] \[=\frac{1}{\pi}\sum_{k=0}^{2m}\binom{2m}{k}\int_{0}^{\pi}e^{i2(m-k )\theta}\;d\theta=\binom{2m}{m},\]
because all but one term (\(k=m\)) vanish in the sum over \(k\).
These computations show that the measure \(\mu_{j}\) and the measure \(\frac{dt}{\pi\sqrt{4-t^{2}}}\) on \([-2,2]\) have the same moments, hence they are equal. In particular \(\mu_{j}\) is independent of \(j\). It follows from Proposition 2.1 (6) that this measure is a scalar-valued spectral measure for \(A_{L}\).
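A numerical counterpart of this moment computation (a Python sketch with an arbitrary truncation of the line; not part of the proof): closed walks of length \(2m\) at a central vertex of a long path match the central binomial coefficients.

```python
import numpy as np
from math import comb

N = 15                                            # keep the center at distance N from the ends
A = np.diag(np.ones(2 * N), 1) + np.diag(np.ones(2 * N), -1)   # path on 2N+1 vertices
center = N
for m in range(6):
    walks = int(round(np.linalg.matrix_power(A, 2 * m)[center, center]))
    print(m, walks, comb(2 * m, m))               # the columns agree: walks of length <= 2N do not reach the ends
```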
Claim (4) follows from Proposition 2.11; we prefer, however, to establish it by a more elementary argument, given just after Example 4.5.
**Lemma 4.4** (**unitarily equivalent pairs of multiplication operators**).: _Let \([a_{1},b_{1}]\), \([a_{2},b_{2}]\) be two intervals of the real line, with \(-\infty<a_{1}<b_{1}<\infty\) and \(-\infty<a_{2}<b_{2}<\infty\). We consider the Hilbert spaces \(L^{2}([a_{1},b_{1}],\lambda)\) and \(L^{2}([a_{2},b_{2}],\lambda)\), where \(\lambda\) is the Lebesgue measure. Let_
\[\varphi_{2}:[a_{2},b_{2}]\stackrel{{\approx}}{{\longrightarrow}} [a_{1},b_{1}]\]
_be a continuous injective function mapping \([a_{2},b_{2}]\) onto \([a_{1},b_{1}]\), of class \(\mathcal{C}^{1}\) on \(]a_{2},b_{2}[\) and such that \(|\varphi_{2}^{\prime}(t)|>0\) for all \(t\in\,]a_{2},b_{2}[\). Define an operator \(M_{1}\) on \(L^{2}([a_{1},b_{1}],\lambda)\) by_
\[(M_{1}\xi_{1})(t)=t\xi_{1}(t)\quad\text{ for all }\,\xi_{1}\in L^{2}([a_{1},b_{1 }])\,\text{ and }\,t\in[a_{1},b_{1}]\]
_and an operator \(M_{2}\) on \(L^{2}([a_{2},b_{2}],\lambda)\) by_
\[(M_{2}\xi_{2})(t)=\varphi_{2}(t)\xi_{2}(t)\quad\text{ for all }\,\xi_{2}\in L^{2}([a_{2},b_{2}])\,\text{ and }\,t\in[a_{2},b_{2}].\]
_Then \(M_{1}\) and \(M_{2}\) are unitarily equivalent._
Proof.: Let \(U:L^{2}([a_{1},b_{1}],\lambda)\to L^{2}([a_{2},b_{2}],\lambda)\) be the operator defined by
\[(U\xi_{1})(t)=\sqrt{|\varphi_{2}^{\prime}(t)|}\,\xi_{1}(\varphi_{2}(t))\quad \text{ for all }\xi_{1}\in L^{2}([a_{1},b_{1}],\lambda)\,\text{ and }\,t\in[a_{2},b_{2}].\]
Then \(U\) is unitary. Indeed, for \(\xi_{1}\in L^{2}([a_{1},b_{1}],\lambda)\) and \(\xi_{2}\in L^{2}([a_{2},b_{2}],\lambda)\), we have
\[\|U\xi_{1}\|^{2}=\int_{a_{2}}^{b_{2}}|(U\xi_{1})(t)|^{2}dt=\int_{a_{2}}^{b_{2} }|\xi_{1}(\varphi_{2}(t))|^{2}|\varphi_{2}^{\prime}(t)|dt=\int_{a_{1}}^{b_{1} }|\xi_{1}(s)|^{2}ds=\|\xi_{1}\|^{2}.\]
Similarly \(\|U^{-1}\xi_{2}\|^{2}=\|\xi_{2}\|^{2}\).
For \(\xi_{1}\in L^{2}([a_{1},b_{1}],\lambda)\), we have
\[(M_{2}U\xi_{1})(t) =\varphi_{2}(t)\left(\sqrt{|\varphi_{2}^{\prime}(t)|}\,\xi_{1} \big{(}\varphi_{2}(t)\big{)}\right)=\sqrt{|\varphi_{2}^{\prime}(t)|}\Big{(} \varphi_{2}(t)\xi_{1}(\varphi_{2}(t))\Big{)}\] \[=\sqrt{|\varphi_{2}^{\prime}(t)|}\;(M_{1}\xi_{1})(\varphi_{2}(t) )=(UM_{1}\xi_{1})(t).\]
It follows that \(M_{2}U=UM_{1}\), and this ends the proof.
**Example 4.5**.: We particularize the pairs of unitarily equivalent multiplication operators of Lemma 4.4 to the following two cases.
(1) Let \([a_{1},b_{1}]=[a_{2},b_{2}]=[0,1]\) and \(\varphi_{2}(t)=t^{\alpha}\) for some \(\alpha\in\mathbf{R},\alpha>0\). The operator \(M_{1}\) of multiplication by \(t\) and the operator \(M_{\alpha}\) of multiplication by \(t^{\alpha}\) on \(L^{2}([0,1],\lambda)\) are unitarily equivalent. The unitary operator \(U\) on \(L^{2}([0,1],\lambda)\) is given by \((U\xi)(t)=\sqrt{\alpha t^{\alpha-1}}\xi(t^{\alpha})\), and \(M_{\alpha}U=UM_{1}\).
(2) Let \([a_{1},b_{1}]=[-2,2]\), \([a_{2},b_{2}]=[0,\pi]\), and \(\varphi_{2}(t)=2\cos(t)\). The operator \(M_{1}\) of multiplication by \(t\) on \(L^{2}([-2,2],\lambda)\) and the operator \(M_{2\cos}\) of multiplication by \(2\cos(t)\) on \(L^{2}([0,\pi],\lambda)\) are unitarily equivalent. Similarly, the operator \(M_{1}\) on \(L^{2}([-2,2],\lambda)\) and the operator of multiplication by \(2\cos(t)\) on \(L^{2}([\pi,2\pi],\lambda)\) are unitarily equivalent.
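A quick numerical check of Example 4.5 (1) (a Python sketch with an arbitrary test function and an arbitrary value of \(\alpha\); it only illustrates the change of variables of Lemma 4.4).

```python
import numpy as np

alpha = 2.5                                        # hypothetical exponent
xi = lambda s: np.cos(3 * s) + s**2                # hypothetical test function
t = np.linspace(1e-8, 1, 1_000_001)
dt = t[1] - t[0]

norm_xi = np.sum(xi(t)**2) * dt                    # ||xi||^2 in L^2([0,1])
norm_Uxi = np.sum(alpha * t**(alpha - 1) * xi(t**alpha)**2) * dt   # ||U xi||^2 with (U xi)(t) = sqrt(alpha t^(alpha-1)) xi(t^alpha)
print(np.isclose(norm_xi, norm_Uxi, atol=1e-4))    # True: U preserves the norm
```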
Proof of (4) of Proposition 4.2.: We view the operator \(M_{[0,2\pi],\lambda,2\cos}\) of Lemma 4.3 as the direct sum of two operators: the operator \(M_{[0,\pi],\lambda,2\cos}\) of multiplication by \(2\cos\) on \(L^{2}([0,\pi],\lambda)\) and the operator \(M_{[\pi,2\pi],\lambda,2\cos}\) of multiplication by \(2\cos\) on \(L^{2}([\pi,2\pi],\lambda)\). By Example 4.5, each of these two operators is unitarily equivalent to the operator \(M_{1}\) of multiplication by \(t\) on \(L^{2}([-2,2],\lambda)\). It follows that \(M_{[0,2\pi],\lambda,2\cos}\), and therefore also the adjacency operator \(A_{L}\) of the line, are unitarily equivalent to the operator of multiplication by \(t\) on the space \(L^{2}([-2,2],\lambda;\mathbf{C}^{2})\), so that \(A_{L}\) is of uniform multiplicity \(2\).
**Remark 4.6**.: In view of Corollary 2.8, Propositions 4.1 and 4.2 show that the adjacency operator \(A_{L}\) of the line is unitarily equivalent to the direct sum of two copies of the adjacency operator \(A_{R}\) of the infinite ray.
Let now \(d\) be an integer, \(d\geq 1\). Let \(\{e_{1},\ldots,e_{d}\}\) be the canonical basis of the free abelian group \(\mathbf{Z}^{d}\). The **lattice**\(L_{d}\) is the graph with vertex set \(\mathbf{Z}^{d}\) and edge set
\[E=\{\{u,v\}:u\in\mathbf{Z}^{d},\;v=u+e_{j}\,\text{ for some }\,j\in\{1,\ldots,d \}\}.\]
In other words, \(L_{d}\) is the Cayley graph of the group \({\bf Z}^{d}\) with respect to the generating set \(\{\pm e_{1},\ldots,\pm e_{d}\}\). The adjacency operator \(A_{d}\) of \(L_{d}\) is given by
\[(A_{d}\xi)(u)=\sum_{j=1}^{d}\xi(u-e_{j})+\xi(u+e_{j})\quad\text{ for all }\,\xi\in\ell^{2}({\bf Z}^{d})\,\text{ and }\,u\in{\bf Z}^{d}.\]
When \(d=1\), the lattice \(L_{1}\) is the infinite line \(L\) of Proposition 4.2; now we denote by \(\mu_{1}\) the vertex spectral measure of the line, given by \(d\mu_{1}(t)=\frac{1}{\pi\sqrt{4-t^{2}}}\,dt\) for all \(t\in[-2,2]\).
**Proposition 4.7**.: _Let \(d\geq 2\). Let \(L_{d}\) be the lattice graph of dimension \(d\) and let \(A_{d}\) be its adjacency operator._
1. _The norm of_ \(A_{d}\) _is_ \(2d\)_._
2. _The spectrum of_ \(A_{d}\) _is_ \([-2d,2d]\)_._
3. _The vertex spectral measure_ \(\mu_{d}\) _of a vertex_ \(v\) _in_ \(L_{d}\) _is independent of_ \(v\)_; it is the convolution of_ \(d\) _copies of the spectral measure_ \(\mu_{1}\) _of Proposition_ 4.2_. It is a scalar-valued spectral measure for_ \(A_{d}\) _and it is equivalent to the Lebesgue measure on_ \([-2d,2d]\)_._
4. _The operator_ \(A_{d}\) _has uniform infinite multiplicity._
For the proof of the proposition above, we begin as for Proposition 4.2, with minor modifications. Much of what follows holds for \(d\geq 1\), rather than for \(d\geq 2\) only. Proposition 2.11 is used for the only slightly delicate point, which is our proof of (4).
**Lemma 4.8**.: _Let \(\lambda\) denote the Lebesgue measure on \([0,2\pi]^{d}\). The Fourier transform_
\[U:\ell^{2}({\bf Z}^{d})\to L^{2}([0,2\pi]^{d},\lambda),\quad\,(U\xi)(t)=\sum_{ u\in{\bf Z}^{d}}\xi(u)e^{i\langle u|t\rangle}\]
_(where \(\langle u\mid t\rangle=\sum_{j=1}^{d}u_{j}t_{j}\)) is a bijection (the normalized map \((2\pi)^{-d/2}U\) is a surjective isometry) with inverse_
\[U^{-1}:L^{2}([0,2\pi]^{d},\lambda)\to\ell^{2}({\bf Z}^{d}),\quad\,(U^{-1}\eta )(u)=\frac{1}{(2\pi)^{d}}\int_{[0,2\pi]^{d}}\eta(t)e^{-i\langle u|t\rangle}dt.\]
_We have_
\[UA_{d}U^{-1}=M_{[0,2\pi]^{d},\lambda,2\sum\cos}\]
_where \(2\sum\cos\) is the function \([0,2\pi]^{d}\to{\bf R},\,\,\,(t_{j})_{1\leq j\leq d}\mapsto 2\sum_{j=1}^{d} \cos(t_{j})\)._
Proof.: For any \(\eta\in L^{2}([0,2\pi]^{d},\lambda)\), we have
\[\left(UA_{d}U^{-1}\eta\right)(t)=\sum_{n\in\mathbf{Z}^{d}}\left(A_{d}U^{-1}\eta\right)(n)e^{i\langle n|t\rangle}\] \[\quad=\sum_{n\in\mathbf{Z}^{d}}\sum_{j=1}^{d}\Big{(}\left(U^{-1}\eta\right)(n-e_{j})e^{i\langle n|t\rangle}+\left(U^{-1}\eta\right)(n+e_{j})e^{i\langle n|t\rangle}\Big{)}\] \[\quad=\sum_{j=1}^{d}\Big{(}\sum_{k\in\mathbf{Z}^{d}}\left(U^{-1}\eta\right)(k)e^{i\langle k|t\rangle}\Big{)}e^{it_{j}}\ +\ \sum_{j=1}^{d}\Big{(}\sum_{k\in\mathbf{Z}^{d}}\left(U^{-1}\eta\right)(k)e^{i\langle k|t\rangle}\Big{)}e^{-it_{j}}\] \[\quad=\sum_{j=1}^{d}U(U^{-1}\eta)(t)e^{it_{j}}+\sum_{j=1}^{d}U(U^{-1}\eta)(t)e^{-it_{j}}=\left(2\sum_{j=1}^{d}\cos(t_{j})\right)\eta(t),\]
so that \(UA_{d}U^{-1}\) is the operator of multiplication by the function \(2\sum\cos\) on the Hilbert space \(L^{2}([0,2\pi]^{d},\lambda)\).
Proof of Proposition 4.7.: By Proposition 2.6, the operator \(M_{[0,2\pi]^{d},\lambda,2\sum\cos}\) of multiplication by the function \(2\sum_{j=1}^{d}\cos(t_{j})\) on \(L^{2}([0,2\pi]^{d},\lambda)\) has norm \(2d\) and spectrum \([-2d,2d]\). Claims (1) and (2) follow from Lemma 4.8.
Observe that there is a natural isomorphism
\[\ell^{2}(\mathbf{Z})\otimes\ell^{2}(\mathbf{Z})\otimes\cdots\otimes\ell^{2}( \mathbf{Z})\to\ell^{2}(\mathbf{Z}^{d})\]
by which we can identify the operators
\[A_{1}\otimes\operatorname{id}\otimes\cdots\otimes\operatorname{id}+\cdots+ \operatorname{id}\otimes\cdots\otimes\operatorname{id}\otimes A_{1}\quad \text{ and }\quad A_{d}.\]
By Proposition 2.3, the vertex spectral measure of \(A_{d}\) at a vertex of \(L_{d}\) is the convolution of \(d\) copies of the vertex spectral measure of \(A_{1}\) at a vertex of \(L_{1}\). It follows from Proposition 2.1 (6) that the vertex spectral measure of \(A_{d}\) at a vertex of \(L_{d}\) is a scalar-valued spectral measure for \(A_{d}\). This proves the first part of Claim (3).
By Proposition 4.2, the vertex spectral measure at a vertex of the line \(L_{1}\) is \(d\mu_{1}(t)=f(t)dt\), where \(f(t)=\frac{1}{\pi\sqrt{4-t^{2}}}\) if \(-2<t<2\) and \(f(t)=0\) otherwise, and where \(dt\) stands for the Lebesgue measure. The vertex spectral measure at a vertex of the lattice \(L_{d}\), which is the convolution power \(\mu_{d}\doteqdot\mu_{1}*\mu_{1}*\cdots*\mu_{1}\) (\(d\) factors), is consequently of the form \(f_{d}(t)dt\), where \(f_{d}\) is a probability density with \(f_{d}(t)>0\) for almost all \(t\in\,]-2d,2d[\) and \(f_{d}(t)=0\) for all \(t\) such that \(|t|\geq 2d\). (Note that \(f_{d}\) need not be continuous: for \(d=2\) it has a logarithmic singularity at \(0\).) In particular, this measure \(\mu_{d}\) is equivalent to the Lebesgue measure on the interval \([-2d,2d]\). This ends the proof of Claim (3).
By Proposition 2.11, the operator \(M_{[0,2\pi]^{d},\lambda,2\sum\cos}\) has uniform infinite spectral multiplicity: since \(d\geq 2\), for almost every \(s\in\,]-2d,2d[\) the level set \(\{t\in[0,2\pi]^{d}:2\sum_{j=1}^{d}\cos(t_{j})=s\}\) is a compact hypersurface, each point of which belongs to the essential pre-image of \(s\), so that this essential pre-image is infinite. By Lemma 4.8, Claim (4) follows.
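For \(d=2\), Claim (3) can also be checked numerically: the moments of \(\mu_{2}=\mu_{1}*\mu_{1}\) are the numbers of closed walks in \(L_{2}\), and these factor as \(\binom{2m}{m}^{2}\) (a classical identity). The following Python sketch (with an arbitrary finite box replacing \(\mathbf{Z}^{2}\)) is an illustration only.

```python
import numpy as np
from math import comb

N = 13                                            # truncate Z^2 to a (2N+1) x (2N+1) box
P = np.diag(np.ones(2 * N), 1) + np.diag(np.ones(2 * N), -1)   # path on 2N+1 vertices
I = np.eye(2 * N + 1)
A2 = np.kron(P, I) + np.kron(I, P)                # adjacency operator of the truncated lattice L_2
center = N * (2 * N + 1) + N                      # index of the vertex (0,0)
for m in range(5):
    walks = int(round(np.linalg.matrix_power(A2, 2 * m)[center, center]))
    print(m, walks, comb(2 * m, m) ** 2)          # the columns agree: walks of length <= 2N do not reach the boundary
```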
## 5. **Spherically symmetric infinite rooted trees**
Let \(T=(V,E)\) be a spherically symmetric rooted tree, of bounded degree and without leaves, and let \(A\) be its adjacency operator. The main technical result of this section is Proposition 5.6, showing an orthogonal decomposition of \(\ell^{2}(V)\) into subspaces invariant by \(A\), on each of which \(A\) acts as an infinite Jacobi matrix. This is standard; it has been used for trees, as here, and in other contexts; see [RoRu-95], [AlFr-00, Lemma 1], [Solo-04, Theorem 3.2], [Breu-07, Theorem 2.4].
Let \(T=(V,E)\) be a tree. Choose a root \(v_{0}\in V\). For \(v\in V\), denote by \(|v|\) the distance from \(v\) to \(v_{0}\). For an integer \(r\geq 0\), let \(S_{r}=\{v\in V\mid|v|=r\}\) be the sphere in \(V\) of radius \(r\) around \(v_{0}\). For \(v\in V\), denote by \(N_{v}^{+}\) the set of neighboring vertices of \(v\) at distance \(|v|+1\) from \(v_{0}\). For \(v\in V\) different from \(v_{0}\), denote by \(v_{-}\) the neighboring vertex of \(v\) at distance \(|v|-1\) from \(v_{0}\); note that, for \(v\neq v_{0}\), the set of neighbors of \(v\) is \(\{v_{-}\}\cup N_{v}^{+}\), and therefore the degree of \(v\) is \(\deg(v)=1+|N_{v}^{+}|\). The set of neighbors of \(v_{0}\) is \(N_{v_{0}}^{+}=S_{1}\).
The infinite rooted tree \(T\) is **spherically symmetric** if, for every \(r\geq 0\), every vertex in \(S_{r}\) has exactly \(d_{r}\geq 1\) adjacent vertices in \(S_{r+1}\), for some sequence \((d_{r})_{r\geq 0}\) of positive integers, the sequence of **branching degrees** of \(T\). From now on, we consider an infinite spherically symmetric rooted tree \(T\) of bounded degree, with sequence of branching degrees such that
\[d_{r}\geq 2\,\,\,\text{for all}\,\,\,r\geq 0\quad\text{and}\quad\sup_{r}d_{r}<\infty. \tag{5.1}\]
For \(r\geq 0\), we identify \(\ell^{2}(S_{r})\) to the subspace of \(\ell^{2}(V)\) of functions which vanish on \(V\smallsetminus S_{r}\). We set \(\ell^{2}(S_{-1})=\{0\}\).
Define an operator \(H\) on \(\ell^{2}(V)\) by
\[(H\xi)(v)=\xi(v_{-})\,\,\,\text{if}\,\,\,|v|\geq 1\quad\text{and}\quad(H\xi) (v_{0})=0,\]
and let \(A_{T}\) denote the adjacency operator of \(T\).
**Proposition 5.1**.: _Let \(T=(V,E)\) be a spherically symmetric infinite rooted tree with root \(v_{0}\in V\), and with sequence of branching degrees \((d_{r})_{r\geq 0}\) such that Condition (5.1) holds. Let \(H\) be as above._
1. _The operator_ \(H\) _is bounded on_ \(\ell^{2}(V)\) _of norm_ \(\sqrt{\max_{r\geq 0}d_{r}}\)_, and is injective._
2. _The adjoint_ \(H^{*}\) _of_ \(H\) _is given by_ (5.2) \[(H^{*}\xi)(v)=\sum_{w\in N_{v}^{+}}\xi(w)\,\,\,\text{for all}\,\,\,v\in V,\] _and we have_ \[A_{T}=H+H^{*}.\]
3. _For all_ \(r\geq 0\)_:_
 * _the restriction_ \(\frac{1}{\sqrt{d_{r}}}H\big{|}_{\ell^{2}(S_{r})}\) _is an isometry from_ \(\ell^{2}(S_{r})\) _into_ \(\ell^{2}(S_{r+1})\)_,_
 * \(\frac{1}{d_{r}}H^{*}H\big{|}_{\ell^{2}(S_{r})}=\operatorname{id}_{\ell^{2}(S_{r})}\)_,_
 * \(H^{*}(\ell^{2}(S_{r}))=\ell^{2}(S_{r-1})\) _and_ \(HH^{*}(\ell^{2}(S_{r}))\subset\ell^{2}(S_{r})\)_._
4. _Let_ \(r\geq 0\) _and_ \(k\geq 0\)_. If_ \(\xi\) _and_ \(\eta\) _in_ \(\ell^{2}(S_{r})\) _are orthogonal, then_ \(H^{k}\xi\) _and_ \(H^{k}\eta\) _in_ \(\ell^{2}(S_{r+k})\) _are also orthogonal._
Proof.: (1) Let \(\xi\in\ell^{2}(V)\). We have
\[\|H\xi\|^{2} =\sum_{v\in V}|(H\xi)(v)|^{2}=\sum_{v\in V,v\neq v_{0}}|\xi(v_{-}) |^{2}=\sum_{w\in V}d_{|w|}\,|\xi(w)|^{2}\] \[\leq\big{(}\max_{r\geq 0}d_{r}\big{)}\sum_{w\in V}|\xi(w)|^{2}= \big{(}\max_{r\geq 0}d_{r}\big{)}\|\xi\|^{2},\]
hence \(\|H\|\leq\sqrt{\max_{r\geq 0}d_{r}}\). For the equality, see the end of (3) below.
If \(H\xi=0\), i.e., if \(\xi(v_{-})=0\) for all \(v\in V\smallsetminus\{v_{0}\}\), then \(\xi=0\), hence \(H\) is injective.
(2) We use temporarily Formula (5.2) as a definition of \(H^{*}\). Then \(H^{*}\) is bounded; indeed, using the Cauchy-Schwarz inequality, we have for all \(\xi\in\ell^{2}(V)\)
\[\sum_{v\in V}|(H^{*}\xi)(v)|^{2} =\sum_{v\in V}\Big{|}\sum_{w\in N_{v}^{+}}\xi(w)\Big{|}^{2}\leq \sum_{v\in V}d_{|v|}\sum_{w\in N_{v}^{+}}|\xi(w)|^{2}\] \[=\sum_{w\in V,w\neq v_{0}}d_{|w|-1}|\xi(w)|^{2}\leq\big{(}\max_{r \geq 0}d_{r}\big{)}\|\xi\|^{2}.\]
And \(H^{*}\) is the adjoint of \(H\) because, for \(\xi,\eta\in\ell^{2}(V)\), we have
\[\langle H^{*}\xi\mid\eta\rangle =\sum_{v\in V}(H^{*}\xi)(v)\overline{\eta(v)}=\sum_{v\in V}\sum_{ w\in N_{v}^{+}}\xi(w)\overline{\eta(v)}\] \[=\sum_{w\neq v_{0}}\xi(w)\overline{\eta(w_{-})}=\sum_{w\neq v_{0 }}\xi(w)\overline{(H\eta)(w)}=\langle\xi\mid H\eta\rangle.\]
The equality \(A_{T}=H+H^{*}\) is now obvious.
(3) Let \(\xi\in\ell^{2}(S_{r})\). It is obvious that \(H\xi\in\ell^{2}(S_{r+1})\). Moreover, the computation of the proof of (1) continues as
\[\|H\xi\|^{2}=\sum_{w\in V}d_{|w|}\,|\xi(w)|^{2}=d_{r}\sum_{w\in V}|\xi(w)|^{2} =d_{r}\|\xi\|^{2},\]
hence \(\frac{1}{\sqrt{d_{r}}}H\big{|}_{\ell^{2}(S_{r})}\) is an isometry from \(\ell^{2}(S_{r})\) into \(\ell^{2}(S_{r+1})\). We have also
\[(H^{*}H\xi)(v)=\sum_{w\in N_{v}^{+}}(H\xi)(w)=d_{r}\xi(v)\quad\text{ for all }\,v\in V,\]
hence
\[\frac{1}{d_{r}}H^{*}H\big{|}_{\ell^{2}(S_{r})}=\operatorname{id}_{\ell^{2}(S_ {r})}. \tag{5.3}\]
It follows that \(H^{*}\) maps \(\ell^{2}(S_{r+1})\) onto \(\ell^{2}(S_{r})\), and also that \(\|H\|\geq\sqrt{d_{r}}\).
It follows now that \(\|H\|=\sqrt{\max_{r\geq 0}d_{r}}\).
(4) For \(\xi\) and \(\eta\) orthogonal in \(\ell^{2}(S_{r})\) we have, using Equality (5.3),
\[\langle H\xi\mid H\eta\rangle=\langle H^{*}H\xi\mid\eta\rangle=d_{r}\langle\xi \mid\eta\rangle=0,\]
so that \(H\xi\) and \(H\eta\) are orthogonal in \(\ell^{2}(S_{r+1})\). For \(k\geq 0\), the same argument repeated \(k\) times shows that \(H^{k}\xi\) and \(H^{k}\eta\) are orthogonal.
Set
\[\mathcal{U}_{0,0}=\ell^{2}(S_{0})\quad\text{ and }\quad\mathcal{U}_{0,r}=H^{r}( \mathcal{U}_{0,0})\quad\text{ for each integer }\,r\geq 0.\]
Note that \(\mathcal{U}_{0,r}\) is the one-dimensional subspace of \(\ell^{2}(V)\) of functions on \(V\) which vanish outside \(S_{r}\) and which are constant on \(S_{r}\). Set
\[\mathcal{V}_{0}=\bigoplus_{r=0}^{\infty}\mathcal{U}_{0,r},\]
which is the subspace of \(\ell^{2}(V)\) of functions which are constant on each sphere.
We define now subspaces \(\mathcal{U}_{n,r}\) and \(\mathcal{V}_{n}\) for \(n\geq 1\) and \(r\geq n\), by induction on \(n\). Let \(n\geq 1\); assume that \(\mathcal{U}_{m,q}\) has already been defined when \(0\leq m<n\) and \(q\geq m\). Define
\[\mathcal{U}_{n,n} =\text{ orthogonal complement of }\mathcal{U}_{0,n}\oplus\mathcal{U}_{1,n}\oplus\cdots \oplus\mathcal{U}_{n-1,n}\text{ in }\ell^{2}(S_{n}),\] \[\mathcal{U}_{n,r} =H^{r-n}(\mathcal{U}_{n,n})\,\text{ in }\,\ell^{2}(S_{r})\, \text{ for all }\,r\geq n,\] \[\mathcal{V}_{n} =\bigoplus_{r=n}^{\infty}\mathcal{U}_{n,r}.\]
**Proposition 5.2**.: _Let the notation be as above. There are orthogonal direct sum decompositions_
\[\ell^{2}(V)=\bigoplus_{n=0}^{\infty}\mathcal{V}_{n}=\bigoplus_{n=0}^{\infty} \bigoplus_{r=n}^{\infty}\mathcal{U}_{n,r}.\]
_For each \(n\geq 0\), the subspace \(\mathcal{V}_{n}\) of \(\ell^{2}(V)\) is invariant by \(A_{T}\), \(H\) and \(H^{*}\)._
Proof.: We continue to follow [1].
We first check that the direct sums are orthogonal. Let \(n_{1},r,s\) be nonnegative integers such that \(r\neq s\) and \(0\leq n_{1}\leq\min\{r,s\}\). The spaces \(\mathcal{U}_{n_{1},r}\) and \(\mathcal{U}_{n_{1},s}\) are orthogonal, because they are respectively subspaces of \(\ell^{2}(S_{r})\) and \(\ell^{2}(S_{s})\) which are orthogonal. It follows that \(\mathcal{V}_{n_{1}}=\bigoplus_{r=n_{1}}^{\infty}\mathcal{U}_{n_{1},r}\) is an orthogonal sum. Let moreover \(n_{2}\) be an integer such that \(n_{2}>n_{1}\). The spaces \(\mathcal{U}_{n_{1},n_{2}}\) and \(\mathcal{U}_{n_{2},n_{2}}\) are orthogonal by definition of \(\mathcal{U}_{n_{2},n_{2}}\). By (4) of Proposition 5.1, the spaces \(\mathcal{U}_{n_{1},r}=H^{r-n_{2}}(\mathcal{U}_{n_{1},n_{2}})\) and \(\mathcal{U}_{n_{2},r}=H^{r-n_{2}}(\mathcal{U}_{n_{2},n_{2}})\) are orthogonal whenever \(r\geq n_{2}\). It follows that \(\mathcal{V}_{n_{1}}=\bigoplus_{r=n_{1}}^{\infty}\mathcal{U}_{n_{1},r}\) and \(\mathcal{V}_{n_{2}}=\bigoplus_{r=n_{2}}^{\infty}\mathcal{U}_{n_{2},r}\) are orthogonal, and therefore that \(\ell^{2}(V)=\bigoplus_{n=0}^{\infty}\mathcal{V}_{n}\) is an orthogonal sum.
By definition, each \(\mathcal{V}_{n}\) is invariant by \(H\). It remains to show that each \(\mathcal{V}_{n}\) is also invariant by \(H^{*}\), i.e., that \(H^{*}(\mathcal{U}_{n,r})\subset\mathcal{V}_{n}\) for all \(r\geq n\).
Let \(\xi\in\mathcal{U}_{n,r}\) for some \(n\) and \(r\) such that \(0\leq n\leq r\); we distinguish three cases.
Assume first that \(r>n\). There exists \(\eta\in\mathcal{U}_{n,n}\) such that \(\xi=H^{r-n}\eta\). Then \(H^{*}\xi=(H^{*}H)(H^{r-n-1}\eta)=d_{r-1}H^{r-n-1}\eta\) by (3) of Proposition 5.1, hence \(H^{*}\xi\in\mathcal{U}_{n,r-1}\subset\mathcal{V}_{n}\).
Assume now that \(r=n\geq 1\). Then \(H^{*}\xi\in\ell^{2}(S_{n-1})\). We claim that \(H^{*}\xi=0\). Indeed, choose \(\ell\in\{0,1,\ldots,n-1\}\) and \(\zeta\in\mathcal{U}_{\ell,n-1}\). Then \(H\zeta\in\mathcal{U}_{\ell,n}\) and \(\xi\in\mathcal{U}_{n,n}\) are orthogonal (because \(\ell<n\)), so that \(\langle H^{*}\xi\mid\zeta\rangle=\langle\xi\mid H\zeta\rangle=0\); hence \(H^{*}\xi\) is orthogonal to \(\mathcal{U}_{\ell,n-1}\) for each \(\ell\leq n-1\), i.e., \(H^{*}\xi\) is orthogonal to \(\ell^{2}(S_{n-1})\), i.e., \(H^{*}\xi=0\).
Assume finally that \(r=n=0\); then \(H^{*}\xi=0\). This shows that \(H^{*}\xi\in\mathcal{V}_{n}\) in all cases.
The next proposition is now straightforward:
**Proposition 5.3**.: _With the notation as above, we have_
1. \(\dim\ell^{2}(S_{n})=|S_{n}|=\prod_{q=0}^{n-1}d_{q}\) _for all_ \(n\geq 0\)_,_
2. \(\dim\mathcal{U}_{n,r}=\Big{(}\prod_{q=0}^{n-2}d_{q}\Big{)}(d_{n-1}-1)\) _for all_ \(n\geq 2\) _and_ \(r\geq n\)_,_ _and_ \(\dim\mathcal{U}_{1,r}=d_{0}-1\) _for all_ \(r\geq 1\)_, and_ \(\dim\mathcal{U}_{0,r}=1\) _for all_ \(r\geq 0\)_,_
3. \(\dim\mathcal{V}_{n}=\infty\) _for all_ \(n\geq 0\)_._
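As a consistency check (not needed in the sequel), the dimensions in (1) and (2) add up correctly: since \(\dim\mathcal{U}_{m,n}=\dim\mathcal{U}_{m,m}\) for \(m\leq n\), summing over the blocks of \(\ell^{2}(S_{n})=\bigoplus_{m=0}^{n}\mathcal{U}_{m,n}\) gives, for \(n\geq 1\), a telescoping sum:
\[1+(d_{0}-1)+\sum_{k=2}^{n}\Big{(}\prod_{q=0}^{k-2}d_{q}\Big{)}(d_{k-1}-1)=1+\sum_{k=1}^{n}\Big{(}\prod_{q=0}^{k-1}d_{q}-\prod_{q=0}^{k-2}d_{q}\Big{)}=\prod_{q=0}^{n-1}d_{q}=\dim\ell^{2}(S_{n}),\]
with the convention that an empty product equals \(1\).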
Let \(n\geq 0\). Denote by \(\ell^{2}(\mathbf{N},\mathcal{U}_{n,n})\) the Hilbert space of sequences \((\xi_{j})_{j\geq 0}\) of vectors in \(\mathcal{U}_{n,n}\) such that \(\sum_{j=0}^{\infty}\|\xi_{j}\|^{2}<\infty\). For all \(j\geq 0\), by (3) of Proposition 5.1 and by definition of \(\mathcal{U}_{n,n+j}\), the operator
\[\frac{1}{\sqrt{\prod_{q=n}^{n+j-1}d_{q}}}\,H^{j}:\mathcal{U}_{n,n}\to \mathcal{U}_{n,n+j}\]
is a surjective isometry.
Let \(\xi\in\mathcal{V}_{n}\). For all \(j\geq 0\), there exist unique \(\xi_{n+j}\in\mathcal{U}_{n,n+j}\), and therefore unique \(\chi_{n+j}\in\mathcal{U}_{n,n}\), such that
\[\xi=\big{(}\xi_{n+j}\big{)}_{j\geq 0}=\left(\frac{1}{\sqrt{\prod_{q=n}^{n+j-1}d_ {q}}}\,H^{j}\chi_{n+j}\right)_{j\geq 0}. \tag{5.4}\]
We have shown:
**Proposition 5.4**.: _Let the notation be as above. For any \(n\geq 0\), the operator_
\[W_{n}:\mathcal{V}_{n}\to\ell^{2}(\mathbf{N},\mathcal{U}_{n,n})\quad\text{ defined by}\quad W_{n}\big{(}(\xi_{n+j})_{j\geq 0}\big{)}=(\chi_{n+j})_{j\geq 0}\]
_is a surjective isometry, and \(W_{n}^{*}\big{(}(\chi_{n+j})_{j\geq 0}\big{)}=(\xi_{n+j})_{j\geq 0}\)._
Let \(n\geq 0\). We define the **weighted shift**\(S_{\mathcal{U},n}\) on \(\ell^{2}(\mathbf{N},\mathcal{U}_{n,n})\) by
\[S_{\mathcal{U},n}(\chi_{n},\chi_{n+1},\chi_{n+2},\chi_{n+3},\ldots)=(0,\sqrt{d _{n}}\chi_{n},\sqrt{d_{n+1}}\chi_{n+1},\sqrt{d_{n+2}}\chi_{n+2},\ldots).\]
The operator \(S_{\mathcal{U},n}\) is the direct sum of \(\dim(\mathcal{U}_{n,n})\) copies of the standard weighted shift \(S_{n}\) defined on the usual sequence space \(\ell^{2}(\mathbf{N})\) by
\[S_{n}(\lambda_{0},\lambda_{1},\lambda_{2},\lambda_{3},\ldots)=(0,\sqrt{d_{n}} \lambda_{0},\sqrt{d_{n+1}}\lambda_{1},\sqrt{d_{n+2}}\lambda_{2},\ldots).\]
**Proposition 5.5**.: _With the notation as above, we have for all \(n\geq 0\)_
\[W_{n}HW_{n}^{*}=S_{\mathcal{U},n}\quad\text{and}\quad W_{n}H^{*}W_{n}^{*}=S_{ \mathcal{U},n}^{*}.\]
Proof.: Let \(\big{(}\chi_{n+j}\big{)}_{j\geq 0}\in\ell^{2}(\mathbf{N},\mathcal{U}_{n,n})\). The vector \(W_{n}^{*}\big{(}(\chi_{n+j})_{j\geq 0}\big{)}\) is the vector \(\xi\) of (5.4), with components \(\xi_{n+j}=\frac{1}{\sqrt{\prod_{q=n}^{n+j-1}d_{q}}}\,H^{j}\chi_{n+j}\in\mathcal{U}_{n,n+j}\). Applying \(H\) shifts the levels: the component of \(H\xi\) in \(\mathcal{U}_{n,n}\) is \(0\), and its component in \(\mathcal{U}_{n,n+k}\), for \(k\geq 1\), is
\[H\xi_{n+k-1}=\frac{1}{\sqrt{\prod_{q=n}^{n+k-2}d_{q}}}\,H^{k}\chi_{n+k-1}=\frac{1}{\sqrt{\prod_{q=n}^{n+k-1}d_{q}}}\,H^{k}\big{(}\sqrt{d_{n+k-1}}\,\chi_{n+k-1}\big{)}.\]
By (5.4), this means precisely that
\[HW_{n}^{*}\big{(}(\chi_{n+j})_{j\geq 0}\big{)}=W_{n}^{*}\big{(}0,\sqrt{d_{n}}\chi_{n},\sqrt{d_{n+1}}\chi_{n+1},\sqrt{d_{n+2}}\chi_{n+2},\ldots\big{)}=W_{n}^{*}S_{\mathcal{U},n}\big{(}(\chi_{n+j})_{j\geq 0}\big{)},\]
hence \(W_{n}HW_{n}^{*}=S_{\mathcal{U},n}\). The equality \(W_{n}H^{*}W_{n}^{*}=S_{\mathcal{U},n}^{*}\) follows by taking adjoints.
For \(n\geq 0\), we denote by
\[\delta_{*,n}\quad\text{ the sequence }\quad(\sqrt{d_{n}},\sqrt{d_{n+1}},\ldots, \sqrt{d_{n+j}},\ldots)\]
and we consider the infinite Jacobi matrix
\[J_{\delta_{*,n}}=\begin{pmatrix}0&\sqrt{d_{n}}&0&0&\cdots\\ \sqrt{d_{n}}&0&\sqrt{d_{n+1}}&0&\cdots\\ 0&\sqrt{d_{n+1}}&0&\sqrt{d_{n+2}}&\cdots\\ 0&0&\sqrt{d_{n+2}}&0&\cdots\\ \cdots&\cdots&\cdots&\cdots&\ddots\end{pmatrix} \tag{5.5}\]
If we identify the operators \(S_{n}\) and \(S_{n}^{*}\) with their matrices with respect to the standard basis \((\delta_{j})_{j\in\mathbf{N}}\) of \(\ell^{2}(\mathbf{N})\), we have
\[J_{\delta_{*,n}}=S_{n}+S_{n}^{*}.\]
Here is a reformulation of part of the previous propositions.
**Proposition 5.6**.: _Let \(T=(V,E)\) be an infinite spherically symmetric tree with root \(v_{0}\) and with sequence of branching degrees \((d_{r})_{r\geq 0}\) such that \(d_{r}\geq 2\) for all \(r\geq 0\) and \(\sup_{r}d_{r}<\infty\)._
_The adjacency operator \(A_{T}\) of \(T\) is unitarily equivalent to a direct sum_
\(\bigoplus_{n=0}^{\infty}m_{n}J_{\delta_{*,n}}\), where the multiplicities \(m_{n}\) are given by_
\[m_{n} =\dim\mathcal{U}_{n,n}=\Big{(}\prod_{q=0}^{n-2}d_{q}\Big{)}(d_{n-1} -1)\quad\text{ for }\,n\geq 2\] \[m_{1} =\dim\mathcal{U}_{1,1}=d_{0}-1\] \[m_{0} =\dim\mathcal{U}_{0,0}=1\]
_and where the \(J_{\delta_{*,n}}\)'s are the Jacobi matrices of (5.5)._
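For readers who wish to experiment, the finite-depth analogue of Proposition 5.6 can be checked numerically: for the spherically symmetric tree of depth \(L\) with branching degrees \(d_{0},\ldots,d_{L-1}\), the eigenvalues of the adjacency matrix coincide with the union of the eigenvalues of the truncated Jacobi matrices, counted with the multiplicities \(m_{n}\). The following Python sketch (with a hypothetical choice of branching degrees) is an illustration only, not part of the proof.

```python
import numpy as np

d = [3, 2, 2]                                     # branching degrees d_0, d_1, d_2 (hypothetical)
L = len(d)

# Build the tree level by level and its adjacency matrix.
levels, edges, next_id = [[0]], [], 1
for r in range(L):
    new_level = []
    for v in levels[r]:
        for _ in range(d[r]):
            edges.append((v, next_id)); new_level.append(next_id); next_id += 1
    levels.append(new_level)
A = np.zeros((next_id, next_id))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

def jacobi(n):                                    # truncated J_{delta_*,n}, of size L + 1 - n
    size = L + 1 - n
    J = np.zeros((size, size))
    for i, dq in enumerate(d[n:]):
        J[i, i + 1] = J[i + 1, i] = np.sqrt(dq)
    return J

# Multiplicities m_0 = 1, m_1 = d_0 - 1, m_n = (d_0 ... d_{n-2})(d_{n-1} - 1) as in Proposition 5.6.
mults = [1] + [int(np.prod(d[:n - 1])) * (d[n - 1] - 1) for n in range(1, L + 1)]
eigs_tree = np.sort(np.linalg.eigvalsh(A))
eigs_sum = np.sort(np.concatenate([np.repeat(np.linalg.eigvalsh(jacobi(n)), mults[n])
                                   for n in range(L + 1)]))
print(np.allclose(eigs_tree, eigs_sum))           # True
```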
As a first particular case, consider an integer \(d\geq 2\), the constant sequence \((d,d,d,\ldots)\), and the **regular rooted tree \(T_{d}^{\rm root}=(V,E)\) of branching degree \(d\)**; the relevant Jacobi matrix is the multiple \(\sqrt{d}J\) of the free Jacobi matrix \(J\) of Section 4. By Proposition 4.1 for the spectral invariants of \(J\) and by Proposition 2.2 for those of \(\sqrt{d}J\), we obtain:
1. the norm of \(\sqrt{d}J\) is \(2\sqrt{d}\),
2. the spectrum of \(\sqrt{d}J\) is \([-2\sqrt{d},2\sqrt{d}]\),
3. the local spectral measure of \(\sqrt{d}J\) at \(\delta_{0}\) is \(d\mu(t)=\frac{1}{2\pi d}\sqrt{4d-t^{2}}dt\) for \(t\in[-2\sqrt{d},2\sqrt{d}]\) (where \(dt\) stands for the Lebesgue measure),
4. the vector \(\delta_{0}\) is cyclic for \(\sqrt{d}J\) and the operator \(\sqrt{d}J\) is multiplicity-free.
By Proposition 5.6, the adjacency operator of \(T_{d}^{\rm root}\) is the direct sum of infinitely many copies of \(\sqrt{d}J\), and we obtain the following:
**Proposition 5.7**.: _Let \(d\geq 2\) and let \(T_{d}^{\rm root}=(V,E)\) be the regular rooted tree of branching degree \(d\). Let \(A_{d}^{\rm root}\) denote the adjacency operator of \(T_{d}^{\rm root}\)._
1. _The norm of_ \(A_{d}^{\rm root}\) _is_ \(2\sqrt{d}\)_._
2. _The spectrum of_ \(A_{d}^{\rm root}\) _is_ \([-2\sqrt{d},2\sqrt{d}]\)_._
3. _The vertex spectral measure at the root is given by_ \(d\mu(t)=\frac{1}{2\pi d}\sqrt{4d-t^{2}}dt\) _for_ \(t\) _in_ \(\Sigma(A_{d}^{\rm root})\)_; it is a scalar-valued spectral measure for_ \(A_{d}^{\rm root}\)_._
4. \(A_{d}^{\rm root}\) _has uniform infinite multiplicity._
Recall from the introduction that two graphs \(G,G^{\prime}\) of bounded degree are **cospectral** if their adjacency operators have equal spectra, equivalent scalar-valued spectral measures, and spectral multiplicity functions which are equal almost everywhere.
**Corollary 5.8**.: _For any integer \(d\geq 2\), the lattice graph \(L_{d}\) and the regular rooted tree \(T_{d^{2}}^{\rm root}\) are cospectral._
Proof.: This is an immediate consequence of Corollary 2.8 and of Propositions 4.7 and 5.7.
Note that the measure \(\mu_{d}\) of Proposition 4.7 for \(L_{d}\) and the measure \(\mu\) of Proposition 5.7 for \(T_{d^{2}}^{\rm root}\) are not equal, but they are both equivalent to the Lebesgue measure on \([-2d,2d]\), and this is enough to apply Corollary 2.8.
**Example 5.9**.: Consider an integer \(p\geq 2\) and a sequence of integers \(d_{*}=(d_{r})_{r\geq 0}\) such that \(d_{r}\geq 2\) and \(d_{p+r}=d_{r}\) for all \(r\geq 0\). For \(s\in\{0,1,\ldots,p-1\}\), let \(T_{s}\) be the spherically symmetric rooted tree with sequence of branching degrees \(d_{*,s}=(d_{s},d_{s+1},d_{s+2},\ldots)\). When \(p\) is the smallest period of the sequence \(d_{*}\), the trees \(T_{0},\ldots,T_{p-1}\) are pairwise non-isomorphic.
It follows from Proposition 5.6 that the \(p\) trees \(T_{0},\ldots,T_{p-1}\) are cospectral: by periodicity, each adjacency operator \(A_{T_{s}}\) is unitarily equivalent to a direct sum in which every one of the \(p\) Jacobi matrices determined by the shifted sequences \(d_{*,0},\ldots,d_{*,p-1}\) occurs with infinite multiplicity, so that the operators \(A_{T_{0}},\ldots,A_{T_{p-1}}\) are even pairwise unitarily equivalent.
## 6. **Regular trees**
Let \(d\) be an integer, \(d\geq 3\); let \(T_{d}=(V,E)\) be the **regular tree of degree \(d\)**. Choose one vertex \(v_{0}\in V\) to be the root of \(T_{d}\). Then \(T_{d}\) is the spherically symmetric rooted tree with sequence of branching degrees \((d,d-1,d-1,d-1,\ldots)\), whose terms are all \(d-1\) except the initial one, which is \(d\). The initial Jacobi matrix which appears in Proposition 5.6 is
\[J_{\sqrt{d},\sqrt{d-1}^{\infty}}=\begin{pmatrix}0&\sqrt{d}&0&0&0&\cdots\\ \sqrt{d}&0&\sqrt{d-1}&0&0&\cdots\\ 0&\sqrt{d-1}&0&\sqrt{d-1}&0&\cdots\\ 0&0&\sqrt{d-1}&0&\sqrt{d-1}&\cdots\\ 0&0&0&\sqrt{d-1}&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}, \tag{6.1}\]
viewed as an operator on \(\ell^{2}(\mathbf{N})\). For any positive real number \(a\), set
\[J_{a}=\begin{pmatrix}0&a&0&0&\cdots\\ a&0&1&0&\cdots\\ 0&1&0&1&\cdots\\ 0&0&1&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}, \tag{6.2}\]
Then \(J_{\sqrt{d},\sqrt{d-1}^{\infty}}=\sqrt{d-1}J_{a}\) for \(a=\sqrt{d}/\sqrt{d-1}\); observe that this \(a\) satisfies \(1<a\leq\sqrt{3/2}\), since \(d\geq 3\). All other Jacobi matrices which appear in Proposition 5.6 are equal to \(\sqrt{d-1}J_{1}\); note that \(J_{1}\) (or sometimes \(\frac{1}{2}J_{1}\)) is called the **free Jacobi matrix**. For Proposition 6.3 below, we will need to know properties of the scalar-valued spectral measures defined by these matrices. This is straightforward and very standard for \(J_{1}\), as already shown in Proposition 4.1, but we did not find a simple ad hoc argument for \(J_{\sqrt{d}/\sqrt{d-1}}\), and we rather quote the following
**Proposition 6.1**.: _Consider a real number \(a\) such that \(0<a\leq\sqrt{2}\) and the matrix \(J_{a}\) of (6.2), viewed as a self-adjoint operator acting on \(\ell^{2}(\mathbf{N})\), with its canonical orthonormal basis \((\delta_{n})_{n\geq 0}\)._
1. _The norm of_ \(J_{a}\) _is_ \(2\)_._
2. _The spectrum of_ \(J_{a}\) _is the interval_ \([-2,2]\)_._
3. _The vector_ \(\delta_{0}\) _is cyclic for the operator_ \(J_{a}\)_._
4. _The local spectral measure of_ \(J_{a}\) _at_ \(\delta_{0}\) _is equivalent to the Lebesgue measure on_ \([-2,2]\)_, and it is a scalar-valued spectral measure._
Proof for (1) to (3) and reference for (4).: The spectrum \(\Sigma(X)\) of a bounded self-adjoint operator \(X\) is the union of the essential spectrum \(\Sigma_{\operatorname{ess}}(X)\) and a discrete set of points in \(\mathbf{R}\smallsetminus\Sigma_{\operatorname{ess}}(X)\) which are eigenvalues of finite multiplicity, so that \(\Sigma_{\operatorname{ess}}(J_{1})=\Sigma(J_{1})=[-2,2]\) by Proposition 4.1. If \(K\) is a compact self-adjoint operator on the same space as \(X\), it is a theorem of Weyl that \(\Sigma_{\operatorname{ess}}(X+K)=\Sigma_{\operatorname{ess}}(X)\), so that \(\Sigma_{\operatorname{ess}}(J_{a})=[-2,2]\); this holds for all \(a\geq 0\). The eigenvalue equation \(J_{a}\xi=\lambda\xi\) for \(\xi=(\xi_{n})_{n\geq 0}\in\ell^{2}(\mathbf{N})\) gives rise to a difference equation of second order with constant coefficients, and a routine computation shows that this equation has no solution in \(\ell^{2}(\mathbf{N})\) when \(0<a^{2}\leq 2\) (details for example in [BrHN, Lemma 4.6]); it follows that \(\Sigma(J_{a})=\Sigma_{\operatorname{ess}}(J_{a})=[-2,2]\). This completes the proof of Claims (1) and (2). It is straightforward to check Claim (3).
Claim (4) is more delicate to prove, and we quote here a particular case of the result of [MaNe-83] (particular because we impose diagonal coefficient \(b_{n}=0\) here, and because we exclude eigenvalues):
_Let \((a_{n})_{n\geq 0}\) be a sequence of positive real numbers such that \(\lim_{n\to\infty}a_{n}=1\) and \(\sum_{n=1}^{\infty}|a_{n+1}-a_{n}|<\infty\). Consider the sequence of orthonormal polynomials \((p_{n})_{n\geq 0}\) defined by the recurrence formula_
\[tp_{n}(t)=a_{n}p_{n+1}(t)+a_{n-1}p_{n-1}(t)\quad\text{for}\quad n\geq 0\]
_(with \(a_{-1}=0\), \(p_{-1}=0\), \(p_{0}\) constant) and the normalisation \(p_{n}(t)=\gamma_{n}t^{n}+\) lower order terms, \(\gamma_{n}>0\). Consider the operator \(J\) defined on the Hilbert space \(\ell^{2}(\mathbf{N})\) with its canonical basis \((\delta_{n})_{n\in\mathbf{N}}\) by the Jacobi matrix_
\[\begin{pmatrix}0&a_{0}&0&0&\cdots\\ a_{0}&0&a_{1}&0&\cdots\\ 0&a_{1}&0&a_{2}&\cdots\\ 0&0&a_{2}&0&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}, \tag{6.3}\]
_and assume that this operator does not have any eigenvalue. Let \(\mu\) be the local spectral measure of \(J\) at \(\delta_{0}\), defined by \(\int_{\Sigma(J)}f(t)d\mu(t)=\langle f(J)\delta_{0}\mid\delta_{0}\rangle\) for any function \(f\) continuous on the spectrum \(\Sigma(J)\) of \(J\)._
_Then \(\Sigma(J)=[-2,2]\) and \(\mu=\rho\lambda\) for a function \(\rho\) which is continuous positive on \(]-2,2[\) and zero outside \([-2,2]\). In particular, \(\mu\) is equivalent to \(\lambda\) on \([-2,2]\)._
Claim (4) follows. Rather than relying on [MaNe-83], we could alternatively quote [Yafa-17, Theorem III.11], which provides an explicit formula for the local spectral measure of \(J_{a}\) at the vector \(\delta_{0}\), or quote results related to that of [MaNe-83], such as [DoNe-86, Theorem 3] or [Yafa-22, Theorem 8.18].
By Corollary 2.8, an immediate consequence of the previous proposition is the following corollary, which we found surprising:
**Corollary 6.2**.: _Any two Jacobi matrices of the family \((J_{a})_{0<a\leq\sqrt{2}}\) are unitarily equivalent._
In contrast, for \(a>\sqrt{2}\), the operator \(J_{a}\) has two simple eigenvalues \(\pm\frac{a^{2}}{\sqrt{a^{2}-1}}\), and therefore is not unitarily equivalent to \(J_{1}\).
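Numerically, the dichotomy at \(a=\sqrt{2}\) is easy to observe on large truncations of \(J_{a}\) (a Python sketch with an arbitrary truncation size; an illustration only): the top eigenvalue stays near \(2\) for \(a\leq\sqrt{2}\) and jumps to \(a^{2}/\sqrt{a^{2}-1}\) for \(a>\sqrt{2}\).

```python
import numpy as np

N = 2000                                          # truncation size (hypothetical)

def top_eigenvalue(a):
    off = np.ones(N - 1)
    off[0] = a                                    # first off-diagonal entry of J_a
    J = np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(J)[-1]

for a in [0.5, 1.0, 1.4, 1.5, 2.0]:
    predicted = 2.0 if a <= np.sqrt(2) else a**2 / np.sqrt(a**2 - 1)
    print(a, round(top_eigenvalue(a), 3), round(predicted, 3))   # the last two columns agree
```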
By a change of scale, Proposition 6.1 provides the invariants of the operator of (6.1):
1. The norm of \(J_{\sqrt{d},\sqrt{d-1}^{\infty}}\) is \(2\sqrt{d-1}\).
2. The spectrum of \(J_{\sqrt{d},\sqrt{d-1}^{\infty}}\) is the interval \([-2\sqrt{d-1},2\sqrt{d-1}]\).
3. The vector \(\delta_{0}\) is cyclic for the operator \(J_{\sqrt{d},\sqrt{d-1}^{\infty}}\).
4. The local spectral measure of \(J_{\sqrt{d},\sqrt{d-1}^{\infty}}\) at \(\delta_{0}\) is equivalent to the Lebesgue measure on \([-2\sqrt{d-1},2\sqrt{d-1}]\), and it is a scalar-valued spectral measure.
By Proposition 5.6, the adjacency operator \(A_{d}\) of \(T_{d}\) is the direct sum of one copy of \(J_{\sqrt{d},\sqrt{d-1}^{\infty}}\) and infinitely many copies of \(\sqrt{d-1}J_{1}\), hence we obtain the following:
**Proposition 6.3**.: _Let \(d\geq 3\) and let \(T_{d}=(V,E)\) be the regular tree of degree \(d\). Let \(A_{d}\) be the adjacency operator of \(T_{d}\)._
1. _The norm of_ \(A_{d}\) _is_ \(2\sqrt{d-1}\)_._
2. _The spectrum of_ \(A_{d}\) _is_ \([-2\sqrt{d-1},2\sqrt{d-1}]\)_._
3. _The vertex spectral measure at any vertex is equivalent to the Lebesgue measure on_ \([-2\sqrt{d-1},2\sqrt{d-1}]\)_; it is a scalar-valued spectral measure for_ \(A_{d}\)_._
4. \(A_{d}\) _has uniform infinite multiplicity._
**Corollary 6.4**.: _For any integer \(d\geq 3\), the lattice graph \(L_{d}\) and the regular tree \(T_{d^{2}+1}\) are cospectral._
Remark: the vertex spectral measures of \(T_{d}\) and \(T_{d}^{\rm root}\) which appear here are equivalent to the Lebesgue measure on the appropriate interval. This is in sharp contrast with large families of spherically symmetric rooted trees, for which the vertex spectral measures have no absolutely continuous part [11], [12, 13, 14, 15, 16, 17, 18, 19].
## 7. **An uncountable family of cospectral graphs from [11]**
The article [11] provides examples of uncountable families of pairwise non-isomorphic cospectral Schreier graphs. They are defined in terms of certain groups of automorphisms of infinite regular rooted trees called spinal groups, and of the actions of these groups on the boundaries of the trees. We restrict here to the particular case of the Fabrykowski-Gupta group, which is the simplest of the spinal groups acting on rooted trees of branching degree \(\geq 3\), and we briefly describe one of these families as follows.
Consider the regular rooted tree \(T=T_{3}^{\text{root}}\) of branching degree \(3\), its boundary \(\partial T\) which is the Cantor space \(\{0,1,2\}^{\mathbf{N}}\) of infinite sequences of \(0,1\) and \(2\)'s, and the Bernoulli measure \(\nu\) on \(\partial T\) which is a probability measure invariant by the automorphism group of \(T\). The Fabrykowski-Gupta group \(G\) is the group of automorphisms of \(T\) generated by the symmetric set \(S=\{a,a^{-1},b,b^{-1}\}\), where \(a\) is the cyclic permutation of the three main branches of \(T\) just below the root, and where \(b\) is the automorphism usually defined recursively by \(b=(a,1,b)\), see for example [11, Subsection 8.2].
For \(\xi\in\partial T\), let \(\operatorname{Stab}_{\xi}(G)\) denote the stabilizer \(\{g\in G\mid g\xi=\xi\}\) and let \(\operatorname{Sc}_{\xi}=\operatorname{Sc}(G,\operatorname{Stab}_{\xi}(G),S)\) be the **Schreier graph** of the indicated triple, with vertex set the orbit \(G\xi\); it is a \(4\)-regular graph. This graph has loops and multiple edges, but its adjacency operator \(A_{\xi}\) acting on \(\ell^{2}(G\xi)\) can be naturally defined, and the considerations of Sections 2 and 3 hold without change.
It is known that there exists a measurable subset \(\mathcal{W}\) of \(\partial T\) of full measure, i.e., \(\nu(\mathcal{W})=1\), such that for \(\xi\in\mathcal{W}\) the adjacency operator \(A_{\xi}\) has the following properties:
* The closure of the set of eigenvalues of \(A_{\xi}\), which is the spectrum of \(A_{\xi}\), is the union of a Cantor subset of \(\mathbf{R}\) of Lebesgue measure zero and of countably many points accumulating on this Cantor set; see [1, Theorem 3.6 & Corollary 4.13] and [10, Theorem 1.5].
* \(A_{\xi}\) has a pure point spectrum, more precisely there exists an orthonormal basis of \(\ell^{2}(G\xi)\) of eigenvectors of \(A_{\xi}\), moreover each eigenvector in this basis is a function of finite support on \(G\xi\)[10, Theorem 1.8].
* The set of these eigenvalues and their multiplicities, which are all infinite, do not depend on \(\xi\)[10, Section 5].
Moreover, for \(\xi\in\mathcal{W}\), the set of \(\xi^{\prime}\in\mathcal{W}\) for which \(\operatorname{Sc}_{\xi^{\prime}}\) is isomorphic to \(\operatorname{Sc}_{\xi}\) has \(\nu\)-measure \(0\)[11, Corollary 7.13].
In particular, there are uncountably many graphs \(\operatorname{Sc}_{\xi}\) which are cospectral and pairwise non-isomorphic.
|
2310.09457 | UCM-Net: A Lightweight and Efficient Solution for Skin Lesion
Segmentation using MLP and CNN | Skin cancer poses a significant public health challenge, necessitating
efficient diagnostic tools. We introduce UCM-Net, a novel skin lesion
segmentation model combining Multi-Layer Perceptrons (MLP) and Convolutional
Neural Networks (CNN). This lightweight, efficient architecture, deviating from
traditional UNet designs, dramatically reduces computational demands, making it
ideal for mobile health applications. Evaluated on PH2, ISIC 2017, and ISIC
2018 datasets, UCM-Net demonstrates robust performance with fewer than 50KB
parameters and requires less than 0.05 giga floating-point operations (GFLOPs).
Moreover, its minimal memory requirement is just 1.19MB in a CPU environment,
positioning it as a potential benchmark for efficiency in skin lesion
segmentation, suitable for deployment in resource-constrained settings. In
order to facilitate accessibility and further research in the field, the
UCM-Net source code is https://github.com/chunyuyuan/UCM-Net. | Chunyu Yuan, Dongfang Zhao, Sos S. Agaian | 2023-10-14T00:32:11Z | http://arxiv.org/abs/2310.09457v4 | # UCM-Net: A Lightweight and Efficient Solution for Skin Lesion Segmentation using MLP and CNN
###### Abstract
Skin cancer is a significant public health problem, and computer-aided diagnosis can help to prevent and treat it. A crucial step for computer-aided diagnosis is accurately segmenting skin lesions in images, which allows for lesion detection, classification, and analysis. However, this task is challenging due to the diverse characteristics of lesions, such as appearance, shape, size, color, texture, and location, as well as image quality issues like noise, artifacts, and occlusions. Deep learning models have recently been applied to skin lesion segmentation, but they have high parameter counts and computational demands, making them unsuitable for mobile health applications. To address this challenge, we propose UCM-Net, a novel, efficient, and lightweight solution that integrates Multi-Layer Perceptrons (MLP) and Convolutional Neural Networks (CNN). Unlike conventional UNet architectures, our UCMNet-Block reduces parameter overhead and enhances UCM-Net's learning capabilities, leading to robust segmentation performance. We validate UCM-Net's competitiveness through extensive experiments on the ISIC2017 and ISIC2018 datasets. Remarkably, UCM-Net has less than 50KB parameters and less than 0.05 giga floating-point operations (GFLOPs), setting a new possible standard for efficiency in skin lesion segmentation. The source code will be publicly available.
Medical image segmentation Light-weight model Mobile health
## 1 Introduction
Skin cancer poses a significant global health concern and stands as one of the leading cancer types worldwide. Skin cancer can be broadly categorized into two types: melanoma and non-melanoma. While melanoma accounts for only 1% of cases, it is responsible for the majority of deaths due to its aggressive nature. In 2022, it was estimated that melanoma would account for approximately 7,650 deaths in the United States, affecting 5,080 men and 2,570 women [1, 2]. In addition, it is estimated that the United States will have 97,610 new cases of melanoma in 2023. Current statistics suggest that one in five Americans will develop skin cancer at some point in their lives, underscoring the gravity of this issue. Over the past few decades, skin cancer has emerged as a substantial public health problem, resulting in annual expenses of approximately $ 8.1 billion in the United States alone [3].
Skin cancer [4] is a prevalent and potentially life-threatening disease affecting millions worldwide. Among the various types of skin cancer, malignant melanoma is known for its rapid progression and high mortality rate if not detected and treated early. Early and accurate diagnosis is, therefore, critical to improving patient outcomes. Medical imaging [5], particularly dermatoscopy and dermoscopy, is crucial in diagnosing skin cancer. Dermatologists and healthcare
professionals rely on these imaging techniques to examine and analyze skin lesions for signs of malignancy. However, the manual interpretation of such images with a naked eye is a time-consuming and error-prone process, heavily reliant on the expertise of the examining physician [6; 7].
To address these challenges and improve the accuracy and efficiency of skin cancer diagnosis, computer-aided tools and artificial intelligence (AI) have been leveraged in recent years[8; 9; 10]. Skin cancer segmentation, a fundamental step in the diagnostic process, involves delineating the boundaries of skin lesions within medical images. This task is essential for quantifying lesion characteristics, monitoring changes over time, and aiding in the decision-making process for treatment. Segmenting skin lesions from images faces several key challenges [11]: unclear boundaries where the lesion blends into surrounding skin; illumination variations that alter lesion appearance; artifacts like hair and bubbles that obscure lesion boundaries; variability in lesion size and shape; different imaging conditions and resolutions; age-related skin changes affecting texture; complex backgrounds that hinder segmentation; and differences in skin color due to race and climate. Figure 1 presents some representative samples of complex skin lesion.
Overcoming these difficulties is crucial for accurate segmentation to enable early diagnosis and treatment of malignant melanoma. Recently, a groundbreaking transformation in skin cancer segmentation has been driven by the development of advanced deep-learning algorithms [12; 13; 14; 15; 16]. These AI-driven approaches have exhibited remarkable capabilities in automating the segmentation of skin lesions, significantly reducing the burden on healthcare professionals and potentially improving diagnostic accuracy. In addition, the rapid advancements in AI techniques and the widespread adoption of smart devices, such as point-of-care ultrasound (POCUS) devices or smartphones [17; 18; 19], have brought about transformative changes in the healthcare industry [20]. Figure 2 briefly presents the overall diagnosis process for skin cancer detection with portable devices and AI techniques.
Patients now have greater access to medical information, remote monitoring, and personalized care, leading to increased satisfaction with their healthcare experiences. However, amidst these advancements, there are still challenges that need to be addressed. One such challenge is the accurate and efficient segmentation of skin lesions for diagnostic purposes on computation-limited hardware and devices. Most AI-based medical methods are developed with deep learning [21]. The major deep-learning methods incur expensive computational overhead and a large number of learnable parameters to achieve good prediction results. It is therefore a challenge to embed these methods into hardware-limited
Figure 1: Complex skin lesion samples
Figure 2: AI diagnosis process of skin cancer detection
devices [22; 23]. In this study, we introduce UCM-Net, a lightweight and robust approach for skin lesion segmentation. UCM-Net leverages a novel hybrid module that combines Convolutional Neural Networks (CNN) and Multi-Layer Perceptrons (MLP) to enhance feature learning while reducing parameters. Utilizing group loss functions, our method surpasses existing machine learning-based techniques in skin lesion segmentation.
Key contributions of UCM-Net include:
1. **Hybrid** Module: We introduce the UCM-Net Block, a hybrid structure combining CNN and MLP with superior feature-learning capabilities and reduced computation and parameters.
2. **Efficient Segmentation**: UCM-Net is developed based on UCM-Net Blocks and the base model U-Net, offering a highly efficient method for skin lesion segmentation. It is the first model with less than **50** KB parameters and less than **0.05** giga floating-point operations (GFLOPs). UCM-Net is **1177** times faster and has **622** times fewer parameters than U-Net. Compared to the state-of-the-art EGE-UNet, UCM-Net reduces parameter and computation costs by **1.06x** and **1.56x**.
3. **Improved Segmentation**: UCM-Net's segmentation performance is evaluated using mean Intersection over Union (mIoU) and mean Dice similarity score (mDice). On the Isic2017 and Isic2018 datasets, UCM-Net enhances the baseline U-Net model by an average of **3.03%** in mIoU and **1.72%** in mDice. Notably, UCM-Net outperforms the state-of-the-art EGE-UNet on ISIC 2017 and ISIC 2018 datasets, with respective mean IoU scores of **81.43% (UCM-Net)** vs 80.95% and **80.71% (UCM-Net)** vs 80.00%.
## 2 Related works
**AI Method Categories and Applications** AI-driven approaches for biomedical images can be broadly classified as supervised, semi-supervised, and unsupervised learning methods [24; 25; 26]. Supervised learning builds predictive capability from labeled image data. The labeled image data can be previous patients' diagnosed results, such as computed tomography (CT) scans accompanied by clinicians' analyses. With labeled data, AI-driven solutions can be developed and evaluated against ground-truth results. Supervised learning solutions are widely applied to disease classification, tumor location detection, and tumor segmentation [27]. In contrast, unsupervised learning is a discovery process that mines unlabeled data to capture hidden information. Unsupervised learning solutions derive insights directly from unlabeled medical data, without inadequate or biased human supervision, and can be used for information compression, dimensionality reduction, super-resolution of medical images, and the detection and analysis of sequence data such as protein, DNA, and RNA [28]. In recent years, semi-supervised
Figure 3: This figure shows the visualization of comparative experimental results on the ISIC2017 dataset. The X-axis represents mDice score (higher is better), while Y-axis represents mIoU (higher is better). The color depth represents the number of parameters (blue is better).
learning has become popular; it utilizes a large amount of unlabeled data in conjunction with a limited amount of labeled data to train higher-performing models. Semi-supervised learning solutions can also be applied to disease classification and medical segmentation [29, 30].
**Supervised Methods of Segmentation** As the technique evolves and develops, the solution of AI for medical image segmentation is from purely applying a convolution neural network(CNN) such as U-Net and Att-UNet [31] to a hybrid structure method like TransFuse [32] and SANet [33]. U-Net is a nearest CNN solution on biomedical image segmentation, which replaces pooling operators by upsampling operators. Att-UNet is developed on the top of U-Net adding attention structures. TransFuse is a novel approach that combines Transformers and CNNs with late fusion for medical image segmentation, achieving a strong performance while maintaining high efficiency, with potential applications in various medical-related tasks. SANet [34], the Shallow Attention Network, addresses challenges in polyp segmentation by mitigating color inconsistencies, preserving small polyps through shallow attention modules, and balancing pixel distributions, achieving remarkable performance improvements. Swin-UNet is a UNet-like pure Transformer for medical image segmentation that proposed shifted windows as the encoder to extract context features, and a transformer-based decoder with patch expanding layer performs the up-sampling operation to restore the spatial resolution of the feature maps. MedT [35] is also a transformer-based network architecture, a gated axial-attention model that introduces an additional control mechanism in the self-attention module. ConvUNeXt [36], an efficient model inspired by ConvNeXts [37] and based on the classic UNet architecture, achieving excellent medical image segmentation results with a significantly reduced parameter count while incorporating features such as large convolution kernels, depth-wise separable convolution, residual connections, and a lightweight attention mechanism. UNeXt [38] is introduced as an efficient Convolutional multilayer perceptron (MLP) based network that reduces parameters and computational complexity while achieving superior segmentation performance through tokenized MLP blocks and channel shifting, making it suitable for point-of-care applications. MALUNet [39] and its extended version EGE-UNet [40] develop new attention modules to significantly reduce parameters and computational complexity while achieving powerful skin lesion segmentation performance, making it highly suitable for resource-constrained clinical environments.
Figure 4: UCM-Net Structure
## 3 UCM-Net
**Network Design** Figure 4 provides a comprehensive view of the structural framework of UCM-Net, an advanced architecture that showcases a distinctive U-Shape design. Our design is developed from U-Net. UCM-Net includes a down-sampling encoder and an up-sampling decoder, resulting in a high-powered network for skin lesion segmentation. The entirety of the network encompasses six stages of encoder-decoder units, each equipped with channel capacities of {8, 16, 24, 32, 48, 64}. Within each stage, we leverage a convolutional block alongside our novel UCMNet block, facilitating the extraction and acquisition of essential features. In the convolutional block, we opt for a kernel size of 1, a choice that serves to further curtail the parameter count. Our innovative UCMNet introduces a hybrid structure module, wherein an amalgamation of a Multi-Layer Perceptron (MLP) linear component and Convolutional Neural Network (CNN) is employed, bolstered by the inclusion of skip connections. This strategic amalgamation fortifies the network's prowess in feature acquisition and learning capabilities.
**Convolution Block** In our design, we use two different convolution blocks. The difference between conv block 1 and conv block 2 lies in the kernel size. A smaller kernel size not only reduces the number of learnable weights, but also reduces the computational cost [41] (a quick parameter-count comparison is shown below). We set conv block 2's kernel size to 1 \(\times\) 1, and we keep conv block 1's kernel size at 3 \(\times\) 3, since shrinking the very first convolution kernel degrades the feature entropy extracted from the input.
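As a rough illustration of that saving (the channel counts below are arbitrary examples, not the exact UCM-Net stage widths), a 1 \(\times\) 1 convolution carries roughly an order of magnitude fewer parameters than a 3 \(\times\) 3 convolution with the same channels:

```python
import torch.nn as nn

conv3x3 = nn.Conv2d(8, 16, kernel_size=3, padding=1)
conv1x1 = nn.Conv2d(8, 16, kernel_size=1)
print(sum(p.numel() for p in conv3x3.parameters()))  # 1168 (= 16*8*3*3 weights + 16 biases)
print(sum(p.numel() for p in conv1x1.parameters()))  # 144  (= 16*8 weights + 16 biases)
```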
```
# Input:  X, the feature map with shape [Batch (B), Channel (C), Height (H), Width (W)]
# Output: Out, the feature map with shape [B, H*W (N), C]
# Operators: Conv = 2D convolution, LN = LayerNorm, BN = BatchNorm,
#            Linear = linear transformation, Leaky = LeakyReLU

# UCM-Net block processing pipeline
B, C, H, W = X.shape
X = X.flatten(2).transpose(1, 2)         # transform feature from [B, C, H, W] to [B, H*W, C]
X1 = copy(X)                             # copy feature for the later residual addition
X = Linear(LN(X))
B, N, C = X.shape
X = X.transpose(1, 2).view(B, C, H, W)   # transform feature from [B, H*W, C] to [B, C, H, W]
X = Conv(LN(X))
X = Conv(Leaky(X))
X = X.flatten(2).transpose(1, 2)         # transform feature from [B, C, H, W] to [B, H*W, C]
X = Linear(BN(X))
X = X.transpose(1, 2).view(B, C, H, W)   # transform feature from [B, H*W, C] to [B, C, H, W]
X = Conv(LN(X))
X = X.flatten(2).transpose(1, 2)         # transform feature from [B, C, H, W] to [B, H*W, C]
Out = X + X1                             # output with residual addition
```
**UCM-Net Block** The pseudocode listing above presents our defined sequence of operations, i.e., how we combine CNN and MLP (linear transformation) operations for feature learning. The input to a CNN operation is a four-dimensional tensor containing batch, channel, height, and width dimensions, whereas the input to an MLP operation is a three-dimensional tensor containing batch, token (flattened spatial), and channel dimensions. As the pseudocode shows, we therefore apply several feature transformations so that the CNN and MLP operations in the different layers each receive the layout they expect.
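For readers who want something runnable, the following is a minimal PyTorch sketch of such a hybrid block. It follows the spirit of the pseudocode rather than the authors' released code: the normalization choices (GroupNorm standing in for a spatial LayerNorm, BatchNorm1d applied on the [B, C, N] layout) and the use of 1 \(\times\) 1 convolutions are our assumptions.

```python
import torch
import torch.nn as nn


class UCMNetBlock(nn.Module):
    """Illustrative hybrid CNN + MLP block with a residual connection."""

    def __init__(self, channels: int):
        super().__init__()
        # Token (MLP) operators act on [B, N, C] with N = H * W.
        self.ln_tokens = nn.LayerNorm(channels)
        self.linear1 = nn.Linear(channels, channels)
        self.bn_tokens = nn.BatchNorm1d(channels)        # applied on the [B, C, N] layout
        self.linear2 = nn.Linear(channels, channels)
        # Spatial (CNN) operators act on [B, C, H, W].
        self.norm1 = nn.GroupNorm(1, channels)           # stand-in for a spatial LayerNorm
        self.norm2 = nn.GroupNorm(1, channels)
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=1)
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.LeakyReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)            # [B, N, C]
        residual = tokens                                # kept for the residual addition
        tokens = self.linear1(self.ln_tokens(tokens))    # MLP on tokens
        spatial = tokens.transpose(1, 2).reshape(b, c, h, w)
        spatial = self.conv1(self.norm1(spatial))        # CNN on the spatial map
        spatial = self.conv2(self.act(spatial))
        tokens = self.bn_tokens(spatial.flatten(2))      # [B, C, N]
        tokens = self.linear2(tokens.transpose(1, 2))    # back to [B, N, C]
        spatial = tokens.transpose(1, 2).reshape(b, c, h, w)
        spatial = self.conv3(self.norm2(spatial))
        return spatial.flatten(2).transpose(1, 2) + residual   # [B, N, C]


# Example: an 8-channel block on a 32x32 feature map.
block = UCMNetBlock(channels=8)
out = block(torch.randn(2, 8, 32, 32))
print(out.shape)  # torch.Size([2, 1024, 8])
```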
**Loss functions** In our solution, we adopt the group loss function from EGE-UNet [40]. This loss compares the scaled mask predictions from the different stages with the ground-truth masks. Equations 1 and 2 present the stage loss at each stage layer and the output loss at the output layer, both of which are calculated from binary cross-entropy (BCE) and Dice loss (Dice) components.
\[Loss_{Stage}=\text{Bce}(StagePred,Target)+\text{Dice}(StagePred,Target) \tag{1}\]
\[Loss_{Output}=\text{Bce}(OutputPred,Target)+\text{Dice}(OutputPred,Target) \tag{2}\]
Equation 3 aggregates the losses over the different stages, and Equation 4 presents the group loss, which combines the stage losses with the output loss. \(\lambda_{i}\) is the weight of the \(i\)-th stage; in this paper, we set \(\lambda_{i}\) to 0.1, 0.2, 0.3, 0.4 and 0.5 for the five stages, as shown in Figure 4.
\[Loss_{\text{Stages}}=\sum_{i=1}^{5}\lambda_{i}\times Loss_{\text{Stage}_{i}} \tag{3}\]
\[GroupLoss=Loss_{\text{Output}}+Loss_{\text{Stages}} \tag{4}\]
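The following is a minimal PyTorch sketch of Equations (1)-(4), not the authors' released implementation; it assumes each stage prediction is a single-channel logit map that is resized to the ground-truth resolution before scoring, and the function names are ours.

```python
import torch
import torch.nn.functional as F


def bce_dice(pred_logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """BCE + Dice loss for one prediction map (Eqs. 1-2)."""
    bce = F.binary_cross_entropy_with_logits(pred_logits, target)
    prob = torch.sigmoid(pred_logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = 1.0 - (2.0 * inter + eps) / (denom + eps)
    return bce + dice.mean()


def group_loss(stage_preds, output_pred, target, weights=(0.1, 0.2, 0.3, 0.4, 0.5)):
    """Group loss (Eq. 4): output-layer loss plus lambda-weighted stage losses (Eq. 3)."""
    total = bce_dice(output_pred, target)
    for lam, pred in zip(weights, stage_preds):
        pred = F.interpolate(pred, size=target.shape[-2:], mode="bilinear", align_corners=False)
        total = total + lam * bce_dice(pred, target)
    return total
```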
## 4 Experiments and Results
### Experiments Setting
**Datasets** To evaluate the efficiency and performance of our model against other published models, we select two public skin lesion segmentation datasets from the International Skin Imaging Collaboration, namely ISIC2017 [42; 43] and ISIC2018 [44; 45]. The ISIC2017 dataset comprises 2150 dermoscopy images, and ISIC2018 includes 2694 images. We note that earlier studies [40; 39] have already presented a dataset version with a pre-established train-test partition, maintaining a 7:3 ratio. In our experimental setup, we opted to utilize this previously published dataset version.
**Implementation Details** Our UCM-Net is implemented with the PyTorch [46] framework. All experiments are conducted on an instance node at Lambda [47] that has a single NVIDIA RTX A6000 GPU (24 GB), 14vCPUs, 46 GiB RAM and 512 GiB SSD. The images are normalized and resized to 256\(\times\)256. Simple data augmentations are applied, including horizontal flipping, vertical flipping, and random rotation. We noticed that the prior studies [40; 39] applied initial image processing with the calculated mean and standard deviation (std) values of the whole train and test datasets separately. While this approach can potentially enhance their models' training and testing performance, the outcomes are notably influenced by the computed mean and std values. Additionally, if the test dataset's context information is unknown, this operation can render the trained model less practical. In our experiment, we don't calculate the mean and std values based on the train and test datasets. In addition, TransFuse-S and SwinNet require pre-trained models with a specified input image size in their encoding stage. To enable fair benchmark testing, we set the image input sizes for TransFuse-S [32; 48] and SwinNet [33; 49] to 192\(\times\)256 and 224\(\times\)224, respectively. For the optimizer, we select AdamW [50] initialized with a learning rate of 0.001 and a weight decay of 0.01. The CosineAnnealingLR [51] is utilized as the scheduler with a maximum number of iterations of 50 and a minimum learning rate of 1e-5. A total of 300 epochs are trained with a training batch size of 8 and a testing batch size of 1.
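The optimizer and scheduler settings above correspond roughly to the following PyTorch setup; the model here is only a placeholder standing in for UCM-Net, and the training-pass body is omitted.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 1, kernel_size=1)  # placeholder model standing in for UCM-Net
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50, eta_min=1e-5)

for epoch in range(300):        # 300 epochs, train batch size 8, test batch size 1
    ...                         # one pass over the resized (256x256), augmented images
    scheduler.step()
```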
**Evaluate Metrics** To assess the predictive performance of our methods, we employ mean Intersection over Union (mIoU) and mean Dice similarity score (mDice) as evaluation metrics. It's worth noting that previous studies [40; 39] and [38] have employed distinct calculation methods for mIoU and mDice. To comprehensively compare the performance predictions, our experiments include the presentation of mIoU, mDice, mIoU*, and mDice* results. These results are calculated using the following equations:
\[\text{IoU}=\frac{\text{intersection}}{\text{union}} \tag{5}\]
\[\text{Dice}=\frac{2\times\text{intersection}}{\text{sum of pixels in prediction}+\text{sum of pixels in ground truth}} \tag{6}\]
where intersection represents the number of pixels that are common between the predicted output and the ground truth, and union represents the total number of pixels in both the predicted output and the ground truth.
\[\text{mIoU}=\frac{1}{N}\sum_{i=1}^{N}\text{IoU}_{i} \tag{7}\]
\[\text{mDice}=\frac{1}{N}\sum_{i=1}^{N}\text{Dice}_{i} \tag{8}\]
where N is the number of images, IoU\({}_{i}\) represents the IoU score for image i and Dice\({}_{i}\) represents the Dice score for image i.
\[\text{IoU*}=\frac{\text{TP}}{\text{TP}+\text{FP}+\text{FN}} \tag{9}\]
\[\text{Dice*}=\frac{2\times\text{TP}}{2\times\text{TP}+\text{FP}+\text{FN}} \tag{10}\]
where TP represents the number of true positive pixels, FP represents the number of false positive pixels and FN represents the number of false negative pixels.
\[\text{mIoU*}=\frac{\text{TP}_{sum}}{\text{TP}_{sum}+\text{FP}_{sum}+\text{ FN}_{sum}} \tag{11}\]
\[\text{mDice*}=\frac{2\times\text{TP}_{sum}}{2\times\text{TP}_{sum}+\text{FP}_{ sum}+\text{FN}_{sum}} \tag{12}\]
where TP\({}_{sum}\) represents the total number of true positive pixels for images, FP\({}_{sum}\) represents the total number of false positive pixels for images and FN\({}_{sum}\) represents the total number of false negative pixels for images.
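To make the distinction concrete, here is a small NumPy sketch that computes both the per-image averages (Equations 7-8) and the pixel-aggregated variants (Equations 11-12); the helper names are ours, and the predictions and ground truths are assumed to be boolean mask arrays of the same shape.

```python
import numpy as np


def iou_dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """IoU and Dice for one pair of boolean masks (Eqs. 5-6)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps), 2 * inter / (pred.sum() + gt.sum() + eps)


def segmentation_scores(preds, gts, eps: float = 1e-7):
    """Per-image mIoU/mDice (Eqs. 7-8) versus aggregated mIoU*/mDice* (Eqs. 11-12)."""
    ious, dices = zip(*(iou_dice(p, g, eps) for p, g in zip(preds, gts)))
    tp = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    fp = sum(np.logical_and(p, ~g).sum() for p, g in zip(preds, gts))
    fn = sum(np.logical_and(~p, g).sum() for p, g in zip(preds, gts))
    miou_star = tp / (tp + fp + fn + eps)
    mdice_star = 2 * tp / (2 * tp + fp + fn + eps)
    return float(np.mean(ious)), float(np.mean(dices)), float(miou_star), float(mdice_star)
```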
In our benchmark experiments, we evaluate our method's performance and compare the results with those of other published models. To ensure a fair comparison, we perform three sets of experiments for each method and subsequently present the mean and std of the prediction outcomes across each dataset.
### Performance Comparisons
Table 1 comprehensively evaluates the performance of our UCM-Net, a novel skin lesion segmentation model, compared to well-established models, using the widely recognized ISIC2017 and ISIC2018 datasets. Introduced in 2023, UCM-Net is a robust and highly competitive solution in this domain. One of the key takeaways from the table is UCM-Net's ability to outperform EGE-UNet, which had previously held the title of the state-of-the-art model for skin lesion segmentation. Our model achieves superior results across various prediction metrics, emphasizing its advancement in the field and its potential to redefine the standard for accurate skin lesion delineation. Moreover, UCM-Net's performance is notably competitive even when compared to SwinNet, a model that relies on pre-trained models during training. Table 2 complements this assessment by comparing computational aspects and the number of parameters for various segmentation models. Remarkably, UCM-Net, operating with the same number of channels {8,16,24,32,48,64} and image size as EGE-UNet, boasts fewer parameters and lower GFLOPs. Additionally, even when compared to TransFuse-S and SwinNet, which operate with smaller image sizes, UCM-Net demonstrates far fewer parameters and a much lower computational cost.
Figure 5: Visual comparison of segmentation performance on sample images
In Figure 5, we present a visual exhibition of all the models' segmentation outputs. This figure directly compares our segmentation results, those produced by other methods, and the ground truth, all displayed side by side using representative sample images. Notably, our segmentation results demonstrate a remarkable level of accuracy, closely resembling the ground truth annotations. Tables 1-2 and Figure 5 collectively underscore UCM-Net's exceptional performance and efficiency in skin lesion segmentation, affirming its potential to make a substantial impact in advancing early skin cancer diagnosis and treatment.
In the ablation experiments summarized in Table 3, UCM-Net, depicted in Figure 4 and Figure 6(C), with 49,932 parameters and 0.0465 GFLOPs, outperforms the U-Net variant with 248,531 parameters and the baseline U-Net with 31,037,633 parameters in terms of both mean Intersection over Union (mIoU) and mean Dice Similarity Coefficient (mDice) metrics.
Furthermore, when incorporating the Group Loss into the UCM-Net architecture, denoted as "UCM-Net + Group Loss", the model's performance continued to excel. This enhancement resulted in a higher mIoU of 80.63% and a mDice of 87.64%, demonstrating the effectiveness of the proposed Group Loss in further improving segmentation accuracy. Figures 7-8 show that "UCM-Net + Group Loss" always presents the high scores of mIoU and mDice with the training epoch increase.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Models & Year & Image Size (H x W) & Params\(\downarrow\) & GFLOPs\(\downarrow\) \\ \hline U-Net [52; 53] & 2015 & 256 x 256 & 31,037,633 & 54.7378 \\ Att-UNet [54; 31] & 2018 & 256 x 256 & 34,878,573 & 66.6318 \\ TransFuse-S [32; 48] & 2021 & 192 x 256 & 26,248,725 & 8.6462 \\ SANet [34; 55] & 2021 & 256 x 256 & 23,899,497 & 5.9983 \\ SwinNet [33; 49] & 2021 & 224 x 224 & 20,076,204 & 5.5635 \\ MedT [35; 56] & 2021 & 256 x 256 & 1,564,202 & 2.4061 \\ ConvUNeXt [36; 57] & 2022 & 256 x 256 & 3,505,697 & 7.2537 \\ UNeXt-S [38; 58] & 2022 & 256 x 256 & 253,561 & 0.1038 \\ MALUNet [39; 59] & 2022 & 256 x 256 & 177,943 & 0.0830 \\ EGE-UNet [40; 60] & 2023 & 256 x 256 & 53,374 & 0.0721 \\
**UCM-Net(ours)** & 2023 & 256 x 256 & **49,932** & **0.0465** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparative performance results on models’ computations and the number of parameters.
Figure 6: Stage Block Structures in ablation experiments.
A: U-Net and U-Net Variant. B: U-Net Variant 1. C: UCM-Net
\begin{table}
\begin{tabular}{l|c|c c|c c} \hline \hline Models & Structure Reference & Params\(\downarrow\) & GFLOPs\(\downarrow\) & mIoU(\%)\(\uparrow\) & mDice(\%)\(\uparrow\) \\ \hline U-Net(baseline) & Figure 6 (A) & 31,037,633 & 54.7378 & 78.34 & 86.23 \\ U-Net Variant & Figure 6 (A) & 248,531 & 0.5715 & 78.48 & 86.22 \\ U-Net Variant 1 & Figure 6 (B) & 148,157 & 0.3700 & 73.89 & 82.36 \\ UCM-Net & Figure 6 (C) & 49,932 & 0.0465 & 79.76 & 86.94 \\ UCM-Net + Group Loss & Figure 6 (C) & 49,932 & 0.0465 & 80.63 & 87.64 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation experiments’ results on the ISIC2017 dataset
Figure 8: Dice vs Epoch results of ablation experiments
Figure 7: IoU vs Epoch results of ablation experiments
The above findings from the ablation experiments underscore the significance of architectural innovations such as the UCM-Net block in achieving superior semantic segmentation performance, even with fewer parameters and lower computational complexity than the U-Net baseline.
## 5 Conclusion
This paper introduces UCM-Net, a novel, lightweight, and highly efficient solution. UCM-Net combines MLP and CNN, providing robust feature learning capabilities while maintaining a minimal parameter count and reduced computational demand. We applied this innovative approach to the challenging task of skin lesion segmentation, conducting comprehensive experiments with a range of evaluation metrics to showcase its effectiveness and efficiency. The results of our extensive experiments unequivocally demonstrate UCM-Net's superior performance compared to the state-of-the-art EGE-UNet. Remarkably, UCM-Net is the first model with fewer than 50KB parameters that consumes less than 0.05 GFLOPs for skin lesion segmentation. Looking forward to future research endeavors, we aim to expand the application of UCM-Net to other critical medical image tasks, advance the field, and explore how this efficient architecture can contribute to a broader spectrum of healthcare applications, potentially revolutionizing how we utilize deep learning for medical image analysis.
|
2302.08558 | Generative Adversarial Networks for Malware Detection: a Survey | Since their proposal in the 2014 paper by Ian Goodfellow, there has been an
explosion of research into the area of Generative Adversarial Networks. While
they have been utilised in many fields, the realm of malware research is a
problem space in which GANs have taken root. From balancing datasets to
creating unseen examples in rare classes, GAN models offer extensive
opportunities for application. This paper surveys the current research and
literature for the use of Generative Adversarial Networks in the malware
problem space. This is done with the hope that the reader may be able to gain
an overall understanding as to what the Generative Adversarial model provides
for this field, and for what areas within malware research it is best utilised.
It covers the current related surveys, the different categories of GAN, and
gives the outcomes of recent research into optimising GANs for different
topics, as well as future directions for exploration. | Aeryn Dunmore, Julian Jang-Jaccard, Fariza Sabrina, Jin Kwak | 2023-02-16T20:07:19Z | http://arxiv.org/abs/2302.08558v2 | # Generative Adversarial Networks for Malware Detection: a Survey
###### Abstract
Since their proposal in the 2014 paper by Ian Goodfellow [1], there has been an explosion of research into the area of Generative Adversarial Networks. While they have been utilised in many fields, the realm of malware research is a problem space in which GANs have taken root. From balancing datasets to creating unseen examples in rare classes, GAN models offer extensive opportunities for application. This paper surveys the current research and literature for the use of Generative Adversarial Networks in the malware problem space. This is done with the hope that the reader may be able to gain an overall understanding as to what the Generative Adversarial model provides for this field, and for what areas within malware research it is best utilised. It covers the current related surveys, the different categories of GAN, and gives the outcomes of recent research into optimising GANs for different topics, as well as future directions for exploration.
Generative Adversarial Networks (GAN); machine learning; current research survey; threat detection; malware; data augmentation; adversarial examples
## 1 Introduction
Generative Adversarial Networks, or GANs, are a type of deep learning neural network model based on the Game Theory premise of a zero-sum game[1]. These networks have become popular in many fields as a Machine Learning (ML) model which has great success at synthesising large samples of dataset classes based on learning classes and features from an existing dataset. They are particularly good
at synthesising images [2], making them both popular in computer vision tasks, and excellent at generating malware 'images' for training systems to detect malicious files and applications. They also offer a chance to augment the rarest classes in a dataset [3]. While they have received attention from many disciplines and research topics, the research into GANs for synthesising images of different malware classes is very promising, and as such, is where we have chosen to focus our survey.
The research in this study concerns of the state of the art in GANs and Malware, and we have found a space for an up-to-date examination of where this research discipline is and where it appears to be headed. As is explained in Section 3, on related works, while there are surveys or studies which are _similar_ in aspects of their research, to our knowledge there is no updated survey on this topic. This is of distinct importance because of how much growth we have seen in this area of research in recent years. We have done our utmost to present a balanced examination, with both breadth and depth, which can be of use to both researchers new to the area, and those wanting an update on the current problem space. We have also attempted to approach this survey in a way that makes it accessible for the machine learning or cybersecurity researcher both.
The rest of the paper is structured as follows: Section 2 describes the structure of GAN models, and how they are built and trained, as well as defining malware for the purposes of this paper. Section 3 gives a relatively brief breakdown on the recent works with the most similarities to our survey, and explains how we have developed something different in content and structure. Section 4 clarifies the different methods by which researchers measure the performance of their respective GAN models. Section 5 explains the datasets used in the experiments and research we have surveyed and what types of data they contain as well as their origins. Section 6 delves into the different types of Generative Adversarial Network - both the most commonly implemented models and the innovations that have come from recent researchers developing new ways in which to use the model. Section 7 will explain the types of uses GAN models are being used in malware research, along with the specific areas within the area to which GAN research is contributing. Section 8 contains our in-depth discussion of how GANs are functioning within malware research and what this means for researchers both in cybersecurity and machine learning. Finally, Section 9 discusses the opportunities for future research,
and then Section 10 goes on to the conclusions we believe can be drawn from the survey of papers within this discipline.
## 2 Terms, Definitions, and Explanations
A Generative Adversarial Network, at its base, is a machine learning algorithm built out of two separate deep learning networks which work together, competing to win a zero-sum game. One network takes in noise and then attempts to create samples with the right characteristics to have them seem like real or 'genuine' samples. The second network takes as input both real and generated samples, and then classifies them as either real or fake samples. The back-propagation that occurs then is the backbone of the model. If the Discriminator network is right, information is sent back to the Generator network, so that it will adjust its weights and probability distributions to improve the quality of its forgeries. If the Discriminator instead gets it wrong, that information is sent to the Discriminator to make the necessary adjustments. The games end when the Discriminator has the accuracy of a coin flip - when the forgeries are all but impossible to separate from the genuine samples.
As discussed above, the Generator creates data that is meant to look as real as possible. The Discriminator has only one job - determine if the data provided to it is generated data (created by the Generator) or if it is in fact genuine data. The Generator is considered to have "won" when the Discriminator has a success rate of 50%. Meaning that the Generator is so good at producing almost real data, that the Discriminator is left with the same accuracy as tossing a coin. The GAN is considered an unsupervised machine learning method, and it was developed in 2013 to help model the behaviour of wildlife[4]. GANs are an alternative generative model to Variational Autoencoders (VAE), which can also be used to create new samples from a given dataset. In many of the papers we surveyed, VAE were used as a point of comparison/control in the experiments for improved GAN models.
### How does a GAN model work?
Having given a simplistic overview, we now explain the architecture of the GAN model for machine learning in detail. The architecture is innovative for the way it processes and creates both information and datasets. In a world where we need exceptionally large datasets to train the machine learning algorithms that are now slipping into so much of our technology - and therefore into our lives - the ability
to _create_ new data for training purposes is invaluable. This is, of course, provided the data is, or is at least almost, authentic. Creating exceptional forgeries, in the way that GANs do, is therefore a reason in and of itself to employ them in most problem domains. Cybersecurity especially has need of as many samples as possible of the different types of malware, in order to be able to train defensive technologies, to detect when a file or action is malicious.
### A Standard GAN Model Structure
The standard/vanilla Goodfellow GAN design is simple but ingenious. It relies on the use of two main deep learning neural nets, with back-propagation and feedback. The Generator and Discriminator play a "two-player minimax game" [1] using the value function found in Equation 9. This equation is taken from Goodfellow's original 2014 paper[1] introducing Generative Adversarial Networks[1].
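For convenience, the value function of this two-player minimax game, as introduced in [1], reads

\[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{x\sim p_{\rm data}(x)}\big[\log D(x)\big]+\mathbb{E}_{z\sim p_{z}(z)}\big[\log\big(1-D(G(z))\big)\big],\]

where \(p_{\rm data}\) is the distribution of genuine samples and \(p_{z}\) is the noise prior fed to the Generator.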
Figure 1, from [5], shows the structure of a standard model GAN, and one of its derivatives, Least Squares GAN (LSGAN) discussed later in Section 6.8.
#### 2.2.1 Generator
The Generator portion of the network is arguably the more complex. The Generator Network within the GAN model starts with a random seed or noise as input, and produces an output which starts understandably far from the goal. However, as the Discriminator Network feeds information through back-propagation to the Generator, it slowly achieves convergence on the target samples. With each iteration, each epoch, the Generator synthesises data that is more and more realistic. Convergence
Figure 1: A standard GAN, SGAN, or LSGAN model, reproduced based on diagrams from Wang et al., 2017 [5]
and training of the Generator Network is finished when the Discriminator cannot tell if the synthesised data is real or manufactured with any more than 50% accuracy - effectively becoming as useful in classification as a coin toss. The advantages to being able to do this are many, and this generation of data is what makes the GAN model so popular in the world of machine learning. The ability to extract and analyse important features of each dataset class are a highly sought after trait in machine learning research.
#### 2.2.2 Discriminator
The Discriminator's job essentially comes down to a binary option. Is the data real, or is it synthesised? This part of the network, like most deep neural networks, is about feedback - in this case, the back-propagation of the result to the Generator Network. When the Discriminator guesses incorrectly, that alters the internal weights of the system as the information back-propagates to both Generator and Discriminator. The Discriminator is also playing to lose, as the target is to have the Discriminator as effective at distinguishing real data from generated data as a simple coin toss. At that point in the game, the Generator has graduated to building data for use in other scenarios. The data provided is considered so close in feature-space that it may as well be treated like the real deal. The Discriminator is the part of the dual network that recognises when the model has been sufficiently trained to produce high-quality synthesised data. It is, also, no longer required once the Generator is producing the samples at that level of 'perfection'. It does not need to be used outside of the training of the Generator, as that is its singular purpose.
#### 2.2.3 Types of feedback loops
Essential to every GAN implementation is the type of feedback response network it utilises. Being able to feed the results of the tests run by the Discriminator back to the Generator is key in the development of an effective Generator. With every wrong choice, the Generator gets to adjust the weights that little bit closer to where they want to be. Likewise, the Discriminator choosing correctly means that the Generator needs to update its neural weights differently, to change the emphasis on particular neurons, and to change the outputs of the network for a more favourable outcome. The back-propagation of the network allows both systems to carefully adjust the
weights and probabilities of their internal deep learning neural networks. Because of the feedback loop needed by GAN architecture, it is a back-propagation network. There are networks that change the types of feedback loops and what is presented to cause changes in the algorithm. The function to alter the weights is essential to the training period, and is as such customised in different models and applications such that the network can reach optimal function as efficiently as possible. It is important to remember that this adjustment over time occurs in a black-box. The results and inputs are what we as researchers can control, while the actual operations can only be modelled and estimated.
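As a concrete illustration of this feedback loop, the following is a minimal PyTorch sketch of alternating discriminator/generator updates; the network sizes, optimiser settings, and the synthetic "real" data are placeholders of our own choosing, not taken from any surveyed paper.

```python
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 2, 64
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(batch, data_dim) * 0.5 + 3.0     # placeholder "real" samples
    fake = G(torch.randn(batch, latent_dim))            # Generator forgeries from noise

    # Discriminator update: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(batch, 1)) + bce(D(fake.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: the Discriminator's verdict back-propagates through G,
    # nudging its weights so the forgeries look more like the real samples.
    g_loss = bce(D(fake), torch.ones(batch, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```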
### Malware
Malware, in the general definition for the purposes of this paper, is code written to cause malicious behaviour. Trojans and ransomware are both examples of malware. In its purest form, malware is programming intended to in some way break or disrupt the regular operation of an operating system [6]. One of the most well-known malware attacks is WannaCry's ransomware in 2017, infecting computers across the globe with hackers locking users out of their PCs. As ransomware has evolved, it has become a billion dollar business [7]. This is simply another reason to use every means at our disposal to create a new generation of detection systems, such that individuals (who may be mostly unaware of how best to protect themselves) need not attempt to protect their devices on their own. This paper aims to show the ways in which GAN models can and are being used in order to help create these new systems.
## 3 Related Work
The areas in which the use of GAN is both possible and implemented is exhaustive. The classification, generalisation, and feature extraction abilities of the GAN models make them useful in too many fields to reasonably keep track of, but there are many surveys that have tried to enumerate all the ways in which GAN models helped in their fields. These include steganography [8], the cracking of cryptographic methods [9], e-commerce [10], and cross-lingual methods for detecting hackers [11]. There are however, some areas in which GAN models are more common than others. We have summarised the types of GAN models examined in the related works in Table 1.
In Berman et al., 2019[12], the authors take the chance to explore the ways in which deep learning methods have been integrated into different realms of cybersecurity. Our paper is similar, though it focuses specifically on the work done using Generative Adversarial Networks for implementation in the research space of malware, both generation and detection. Berman et al. [12] is intended to familiarise the reader with research into machine learning for cybersecurity. The authors are careful to emphasize the difference between the deep versus shallow neural network machine learning models. This survey is far greater in depth than spread - the types of uses for GAN models in the survey are a small number primarily about identifying attacks, but they are all examined in great detail over the course of many papers in each area. Their survey is indeed intended to familiarise the reader first with machine learning, neural networks, and deep learning, before moving on to Generative Adversarial Networks and then to their applications in cybersecurity. In this, the paper does achieve its aims. However, the lack of variety in the tasks in which GAN models can be used in cybersecurity is a note on which there is certainly room for more breadth. This survey is very careful to ensure the reader understands the metrics, and the results for each of the different papers included in the survey are clearly outlined. The metrics are important and effective at communicating why researchers should further investigate GAN methods for different tasks.
In Liu, et al., 2022, [13], the authors undertake a thorough, extensive, and careful examination of the state of the art in Adversarial Machine Learning - of which GANs are one type. The stated purpose of this survey is to examine the weaknesses of Machine Learning Intrusion Detection Systems, and look at ways AML models are being implemented to assist in defending them. The over-arching theme, of course, is through the creation of adversarial examples such that Machine Learning approaches can train on unseen data, and learn to spot the necessary contextual and semantic relationships that can signal malicious code.
Like our paper, it includes a table of related and similar works and their main features. The paper leans heavily on the uses for machine learning to protect mobile networks from attacks. Interestingly, this paper separates into two distinct parts with regards to the surveying of papers, coded as the Offensive Perspective, and the Defensive Perspective.
The focus on Adversarial Machine Learning, as opposed to Generative Adversarial Networks, for network intrusion detection systems mean that, while this paper is very in-depth and covers a variety of machine learning models including GANs, our paper fits within the gap between AML for IDS and GANs for malware research.
The survey, Navidan et al., 2021 [14], covers familiar ground in the research surveyed, but in exceptional depth. The authors are quick to note that, as is evident by the content of these surveys, while GAN for cybersecurity is a relatively new field, it is already being extensively researched. However, this paper has a very small related works section, with only two other surveys mentioned briefly.
The survey covers some interesting areas in which GANs are currently being utilised. In the paper [15], the survey authors note the creation of an ingenious GAN model for morphing traffic flow data, called FlowGAN. The purpose of this model is to train it as to what benign or normal traffic flow patterns look like. Then it morphs the traffic data that needs to pass by undetected, into something with the right features and patterns to be labelled as benign/normal traffic. Such a model would be of interest in regards to malware research as a method to defend against malicious files and applications being disguised as normal and benign traffic.
In Future Work, the authors of the survey note that improving ways to avoid image translations are of great significance. This would ease the ability of researchers when finding or developing a new method to use with a different type of data, making it not so cost-expensive with regards to overhead.
We have found a space within these existing surveys to fill gaps with regards to how Generative Adversarial networks are used in areas of malware research for cybersecurity. We have done so with a focus on creating an overview that will be of use to those individuals in need of a primer on GANs and their potential applications in research into malware. We have attempted to balance depth with breadth of content, and to point readers to other papers that may give them further information on the use of GAN models in these research areas. While our paper might be considered of a parallel topic to [16], the latter paper is much more compact, and as well as dealing more broadly in cybersecurity as a whole, it involves a detailed case study which leaves it little room to discuss the topics in the depth which we
have attempted in this survey. Overall, we believe that we have contributed valuable commentary on the state of the art in GAN models for malware research.
## 4 Measuring Performance
In most papers related to GAN schemes, there are expected metrics for evaluating machine learning systems like this one [17]. The most popular are listed here, as are the ways these show the performance of the GAN.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Topics & Berman et al., 2019[12] & Liu, et al., 2022[13] & Navidan et al., 2021[14] \\ \hline Malware & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline Adversarial Examples & & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline Data Augmentation & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline Network Data & \(\mathbf{\times}\) & & \(\mathbf{\times}\) \\ \hline Reinforcement Learning & & \(\mathbf{\times}\) & \\ \hline Unseen Examples & & \(\mathbf{\times}\) & \\ \hline Offensive/Attacker Models & \(\mathbf{\times}\) & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline Defensive/Defender Models & & \(\mathbf{\times}\) & \\ \hline Social Network Analysis & \(\mathbf{\times}\) & & \\ \hline Android Malware & & & \(\mathbf{\times}\) \\ \hline Financial Fraud Detection & & & \(\mathbf{\times}\) \\ \hline Image Enhancement & \(\mathbf{\times}\) & & \\ \hline Domain Generation Algorithms & \(\mathbf{\times}\) & & \\ \hline Botnet Detection & \(\mathbf{\times}\) & & \\ \hline Drive-By Download Attacks & \(\mathbf{\times}\) & & \\ \hline Password Attacks & & & \(\mathbf{\times}\) \\ \hline Mobile Network Attacks & & & \(\mathbf{\times}\) \\ \hline Internet of Things Attacks & & & \(\mathbf{\times}\) \\ \hline \hline \end{tabular}
**GAN MODELS DISCUSSED IN SURVEY**
\begin{tabular}{|c|c|c|} \hline VanillaGAN & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline CGAN & & \(\mathbf{\times}\) \\ \hline DCGAN & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline WGAN & & \(\mathbf{\times}\) & \(\mathbf{\times}\) \\ \hline BiGAN & & & \(\mathbf{\times}\) \\ \hline CycleGAN & & & \(\mathbf{\times}\) \\ \hline AC-GAN & & & \(\mathbf{\times}\) \\ \hline MalGAN & & & \\ \hline ISGAN & & & \(\mathbf{\times}\) \\ \hline InfoGAN & & & \(\mathbf{\times}\) \\ \hline FlowGAN & & & \(\mathbf{\times}\) \\ \hline \end{tabular}
\end{table}
Table 1: Types of GAN Research in Related Works
### Evaluation Metrics
The TP, FP, TN, FN scores are often tabulated as a Confusion Matrix to show the performance of the ML algorithm. An example Confusion Matrix can be found in Table 2.
#### 4.1.1 True Positive
The _True Positive/TP_ is the number of correctly predicted positive results, or the total number of correctly classified benign samples.
#### 4.1.2 False Positive
The _False Positive/FP_ is the number of incorrectly predicted positive results, or the total number of incorrectly classified benign samples.
#### 4.1.3 True Negative
The _True Negative/TN_ is the number of correctly predicted negative results, or the total number of correctly classified malicious samples.
#### 4.1.4 False Negative
The _False Negative/FN_ is the number of incorrectly predicted negative results, or the total number of incorrectly classified malicious samples.
#### 4.1.5 Accuracy
The _accuracy_ is the average of correct predictions - of both positive and negative varieties - when classified. Thus, it is the correct predictions divided by the total predictions, or:
\[acc=\frac{TP+TN}{TP+TN+FP+FN} \tag{1}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline & & \multicolumn{2}{c}{Predicted Classification} \\ \hline & & Benign & Malicious \\ \hline Actual Classification & Benign & TP & FP \\ & Malicious & FN & TN \\ \hline \hline \end{tabular}
\end{table}
Table 2: Metrics in Machine Learning Classifiers
This is the assumed metric in papers or articles which talk only about averages and score.
#### 4.1.6 Precision
Also known as _Positive Predictive Value_ or PPV, this is the samples that were classed correctly as benign over all samples that have been classified as benign.
\[p=\frac{TP}{TP+FP} \tag{2}\]
#### 4.1.7 Recall
The recall, also known as the _true positive rate or sensitivity_, is the ratio of correctly classified benign samples to the total number of samples that are actually benign.

\[r=\frac{TP}{TP+FN} \tag{3}\]
#### 4.1.8 F1-Score
This is the _Harmonic Mean_ of the **precision** and the **recall** values. A _harmonic mean_ is one of three types of Pythagorean averages. It is heavily influenced by the lowest of the values, when applied to real numbers, meaning it holds an important place in checking the minority classes' accuracy.
\[F_{1}=2\left(\frac{pr}{p+r}\right) \tag{4}\]
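As a compact reference, the following Python helper computes Equations 1-4 from raw confusion-matrix counts; the function name and the example counts at the end are ours, chosen only for illustration.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Accuracy, precision, recall and F1-score from confusion-matrix counts (Eqs. 1-4)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}


print(classification_metrics(tp=90, fp=10, tn=85, fn=15))
```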
#### 4.1.9 Inception Score
When \(g\) is the Generator, \(d\) is the Discriminator, and there are two finite, label sets, \(\Omega_{X}\) and \(\Omega_{Y}\). As such, \(p_{g}\) is a distribution over \(\Omega_{X}\). \(p_{Y}:\Omega_{x}\to M(\Omega_{Y})\) is the discriminator function and \(M(\Omega_{Y})\) is the set of all possible probability distributions over the set \(\Omega_{Y}\). Any image can be \(x\), while \(y\) is any label. Thus, writing \(p_{d}(y|x)\) is calculating the probability that the image \(x\), has the label \(y\) - as calculated by the Discriminator Network. The below shows the equation for calculating the Inception Score over all probability distributions, \(\Omega_{X}\), and \(\Omega_{Y}\)[18].
\[IS(g,d)=\exp\Big(\mathbb{E}_{x\sim p_{g}}\big[D_{KL}\big(p_{d}(\cdot\mid x)\parallel\int p_{d}(\cdot\mid x)\,p_{g}(x)\,dx\big)\big]\Big) \tag{5}\]
There is a pre-trained network which measures the Inception Score, and the higher the score of the model, the higher the quality of the images produced [19]. The
Inception Score and Network were introduced in 2016 for Convolutional Neural Networks by Szegedy et al. [20]. It was originally developed to remove human subjectivity in computer vision research.
#### 4.1.10 Mode Score
The Mode Score is meant to be an improved version of the Inception Score. It still measures the quality and diversity of images, but it also takes into account the prior distribution of the labels [21].
\[MS=\exp\Big(\mathbb{E}_{x}\big[\mathrm{KL}(p(y\mid x)\parallel p(y^{train}))\big]-\mathrm{KL}(p(y)\parallel p(y^{train}))\Big) \tag{6}\]
#### 4.1.11 Frechet Inception Distance
There is another equation derived from the Inception Score. The Frechet Inception Distance (FID) and the Inception Score (IS) together can be used as an attempt to solve overfitting[22]. The FID is shown below in Equation 7. The purpose of the FID is to examine the distance between groups. It was also developed for the specific task of image processing in machine learning [20]. Frechet Inception Distance for any two probability distributions, \(\mu\) and \(\nu\), over the set of real numbers, \(\mathbb{R}^{n}\), is calculated as follows:
\[d_{F}(\mu,\nu):=\left(\underset{\gamma\in\Gamma(\mu,\nu)}{\inf}\int_{\mathbb{R}^{n}\times\mathbb{R}^{n}}\parallel x-y\parallel^{2}\,d\gamma(x,y)\right)^{1/2} \tag{7}\]
Here \(\Gamma(\mu,\nu)\) is the set of couplings of \(\mu\) and \(\nu\), so that \(d_{F}\) is the 2-Wasserstein distance over \(\mathbb{R}^{n}\)[21]. There is a second calculation for the FID score, but it works only over two multi-dimensional Gaussian distributions, \(\mathcal{N}(\mu,\Sigma)\) - symbolised below as \(r\) - and \(\mathcal{N}(\mu^{\prime},\Sigma^{\prime})\) - shown below as \(g\).
\[FID(r,g)=\parallel\mu_{r}-\mu_{g}\parallel_{2}^{2}+Tr\big(\Sigma_{r}+\Sigma_{g}-2(\Sigma_{r}\Sigma_{g})^{1/2}\big) \tag{8}\]
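The Gaussian form of Equation 8 is straightforward to compute once the two means and covariances have been estimated from feature vectors. The sketch below is ours (the function name and the random example statistics are placeholders) and assumes SciPy is available for the matrix square root.

```python
import numpy as np
from scipy.linalg import sqrtm


def fid(mu_r: np.ndarray, sigma_r: np.ndarray, mu_g: np.ndarray, sigma_g: np.ndarray) -> float:
    """Frechet Inception Distance between N(mu_r, sigma_r) and N(mu_g, sigma_g) (Eq. 8)."""
    diff = mu_r - mu_g
    covmean = sqrtm(sigma_r @ sigma_g)
    if np.iscomplexobj(covmean):       # discard tiny imaginary parts from numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))


# Example with feature statistics estimated from two sample sets (random placeholders here).
rng = np.random.default_rng(0)
real, gen = rng.normal(size=(500, 8)), rng.normal(loc=0.5, size=(500, 8))
print(fid(real.mean(0), np.cov(real, rowvar=False), gen.mean(0), np.cov(gen, rowvar=False)))
```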
## 5 Dataset
The different datasets on which the Machine Learning algorithms are trained have a significant effect on how they read the given data, what their biases or preconceived ideas may be, and how they are trained to recognise different integral features. In
this section we have attempted to cover the main datasets used in the papers we have surveyed.
### DGA Archive
The DGA Archive is a set of domains, of 43 families, classes, or variants, with more than 20 million domains as of 2015[22]. These domains are from models in Domain Generating Algorithms which create domains for Control & Command centres for botnets. The database of malicious botnet C&C domains allows for machine learning classifiers to be trained on how to detect domain name malware. This data is extremely important in creating new machine learning methods for identifying botnet C&C centres (as in [23]). The compilation of this information into such a large and comprehensive database is an important research tool. The DGA Archive dataset is also used to create adversarial machine learning models, such as MaldomDetector[24], which undertake the generation of malicious domain names itself, and allows researchers to test defensive machine learning algorithms on an adversary.
### VirusTotal
This repository of both anti-virus software and a database of files, both benign and malicious, is known as VirusTotal. It can scan a given file using 70 antivirus systems as well as checking with URL and domain blacklisting programs[25]. Each uploaded file - as well as resulting in a report stating the findings and results labelling it either benign or malicious and how these results were arrived at - is also kept and added to the overall database of VirusTotal files. The service is free for research and non-commercial use, and licences can be purchased for commercial users or those needing a large sample set of data[26]. In addition to scanning, users can request a subset of the database for use in training and testing their own algorithms. This is a often used service in machine learning research, such as [27], because of the depth and breadth of malware covered by the VirusTotal dataset. It is also useful because of the constant updating the servers get as users upload their own programs and files to scan. This is an unusual dataset in that regard, where other datasets covered in this section are static and set, while VirusTotal is continually changing and updating. VirusTotal contains files, programs, Android applications[28], applications for Windows, Mac, Linux, iOS, and so on. This is
another point in favour of the database - the type of data available for training and testing is extensive and covers a lot of ground, where other datasets discussed only cover one type of information.
### Contagio
Contagio is a publicly available dataset of malware, specifically samples of Android malware and benign applications[29]. The dataset was updated periodically between 2011 and 2018, and can be found for open access online at a range of places[30]. The fact that this dataset is focused on Android malware makes it extremely useful, as, overall, openly available databases of mobile malware are not as prevalent as those for desktop malware or traffic flow data. As of 2021, Contagio contained 11,960 malicious and 16,800 benign samples of Android software[29], with a total size of approximately 9GB[31]. This dataset can be accessed in .zip format for researchers and white-hat activities.
### Drebin
Drebin is a repository for Android malware, similar to Contagio. Drebin contains 123,453 applications and 5,560 malicious APKs for Android, in a variety of malware families, totalling about 6GB in size[31]. It was collected from 2010 to 2012. It was originally proposed as part of a paper in which Drebin - a new algorithm - was proposed to catch malware on Android smartphones. As part of this, a database of 5,560 malicious Android APKs were collected, which now make up the Drebin dataset [32]. It is important, therefore, to differentiate between the Drebin static-analysis detection software, and the Drebin dataset. Both were organised around the use of eight main feature sets for analysis [32]. These sets are as follows:
* Hardware components
* Requested permissions
* App components
* Filtered intents
* Restricted API calls
* Used permissions
* Suspicious API calls
* Network addresses
### Comodo
The Comodo Database[33], maintained by Comodo Antivirus, contains sample files of malware. As part of a program to encourage research, Comodo partnered with universities and launched Comodemia[34]. This gave access to Comodo's tools to researchers internationally, for research purposes only. The Comodo malware database primarily contains files classified as unknown malware - totalling 147,103 instances. The other categories are Trojan viruses (462 instances) and Unwanted Applications (13 instances). It is a clearly unbalanced dataset. However, like the other datasets, it is used to train and test different machine learning classifiers, as in [35].
### VirusShare
The VirusShare database[36] is a large online, open-source repository for malware. A user account is required for access, but anyone with an account can access and download the live viruses in the database. The database is found at VirusShare.com, and is maintained by Corvus Forensics, though anyone can submit files to be added to the dataset. Some researchers, like [37], in which 14,616 unique examples were taken from the VirusShare database, have taken portions of the VirusShare database and melded them with other datasets in order to balance and augment datasets as necessary. In [38], the VirusShare dataset was augmented with malware obtained from Kaggle, in order to test the author's proposed malware detection scheme, and was used as a benchmark when the new model was run against VirusTotal's (see Section 5.2) antivirus detection program.
### Microsoft Malware Classification Challenge (2015)
The Microsoft Malware Classification Challenge [39] made available an open-source database of malware examples for Windows. The challenge was part of a general push to get coders creating their own deep learning methods for malware classification [40]. It can be found primarily on Kaggle[ii]. It was an open competition on the site, for teams to come together and develop their own solutions to the challenge. It was completed by 377 teams during the open competition time between April 13 - 18, 2015. The dataset contains more than 20,000 malware examples, and
according to the authors of the challenge [41], as of 2018 it had been cited in over 50 research papers. It is now a widely used dataset for machine learning research, as in [42], [43], [44].
### MNIST
The MNIST dataset, or Modified NIST dataset, a collection of images of handwritten characters, was introduced in 1998 by LeCun et al[45], for the primary purpose of computer vision and recognition tasks in machine learning. It contains characters which are clear representations, as well as those which have been perturbed to examine the extent to which a computer vision algorithm can recognise deformed figures. Many models dealing with the challenges of computer vision in machine learning have utilised this dataset. For example, the InfoGAN model, discussed in Section 6.11, was trained and tested on the MNIST dataset, [46], before moving on to 3D renderings. Another study, surveying the effectiveness of different models of GANs, used the MNIST dataset to benchmark the performance of each model [47]. Since its inception, new, updated versions of MNIST have been proposed. One such dataset is EMNIST (Extended MNIST), which takes the dataset from digits only into handwritten alphanumeric characters [48], with a total of 814,255 samples in all classes combined.
## 6 Types of GAN models
The papers we have surveyed have used a range of variants of the traditional GAN. As a reference and refresher, we have included this section, in which we cover the different types of GANs we will be discussing, and the points of difference in each. We also wanted to clearly illustrate the issues inherent in the standard GAN model, so that the variations which are developed specifically for overcoming them are understood.
### Vanilla GAN
The traditional, or Vanilla, Generative Adversarial Network is the original proposed model from Goodfellow et al's 2014 paper [1]. The authors proposed the GAN model as an alternative to Variational Autoencoders for adversarial machine learning. This original version of the Generative Adversarial Network is a deep learning model, and was based on adversarial nets as a framework, with back-propagation. This
model uses a two-player minmax game to adjust the weights, as per Figure [1]. An important distinction is that while the discriminator has access to both real and generated data, the generator has _no access to either_, and so has to rely on the value functions and the back-propagation to change the weights and take the model closer to producing realistic generated output [47]. The generator and discriminator are both able to be non-linear mapping functions [49]. The GAN model came through the adversarial nets framework, a way of dealing with weights without Markov chains, and instead using back-propagation[49]. For an in depth overview of how the Goodfellow GAN operates, please refer to Section 2.1.
**Inherent Problems in the Goodfellow GAN Architecture**
#### 6.1.1 Mode Collapse Problem
The complexities of the MinMax game that are essential to the standard/Vanilla GAN result in an optimisation problem. This is solved in the standard version using the gradient descent-ascent (GDA) method [50]. However, this can lead to serious errors in convergence resulting in failure of the GAN, including a problem known as _mode collapse[51]_. Combating this problem is one of the reasons there are so many variations of GAN models - many are developed to help avoid the convergence problems in optimisation of the minmax function as much as possible.
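To make the convergence issue concrete, the toy sketch below applies plain gradient descent-ascent to the bilinear game \(f(x,y)=xy\), whose unique equilibrium is \((0,0)\). This is not the GAN objective itself, only a minimal illustration of how simultaneous GDA updates can spiral away from an equilibrium rather than settle on it.

```python
# Toy illustration: simultaneous gradient descent-ascent on f(x, y) = x * y.
# The minmax equilibrium is (0, 0), but the iterates spiral outward.
x, y, lr = 1.0, 1.0, 0.2
for step in range(200):
    grad_x, grad_y = y, x                       # df/dx and df/dy
    x, y = x - lr * grad_x, y + lr * grad_y     # descent on x, ascent on y
print(x, y)   # far from (0, 0): the norm has grown instead of shrinking
```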
#### 6.1.2 Catastrophic Forgetting
Catastrophic Forgetting (CF) occurs when "knowledge of previously learned tasks is abruptly destroyed by the learning of the current task" ([51]). CF can prevent proper convergence in the model, and stop it from finding the local optimum needed for the task it is set. Remembering the location and features of the real samples used to train the generator is essential - when the generator loses these samples, catastrophic forgetting occurs as the new generated samples are not created with the real samples as a guide[51].
It is important to note that mode collapse and catastrophic forgetting are interlinked - each makes the other worse in situations where both problems arise. The equation describing the optimal Discriminator in Goodfellow's GAN model
is shown in Equation 9. The training criteria for a given discriminator, \(D\), and a generator, \(G\), are shown in Equation 10.
\[D_{G}^{*}(x)=\frac{p_{data}(x)}{p_{data}(x)+p_{g}(x)} \tag{9}\]
\[V(G,D)=\int_{x}p_{data}(x)\log(D(x))\,dx+\int_{z}p_{z}(z)\log(1-D(G(z)))\,dz \tag{10}\]
\[V(G,D)=\int_{x}\left[p_{data}(x)\log(D(x))+p_{g}(x)\log(1-D(x))\right]dx \tag{11}\]
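As a quick numerical sanity check of Equation 9, the snippet below fixes assumed values of \(p_{data}(x)\) and \(p_{g}(x)\) at a single point \(x\) and sweeps the pointwise integrand of Equation 11 over possible discriminator outputs; the maximiser it finds coincides with \(D_{G}^{*}(x)\).

```python
import numpy as np

# Pointwise integrand of V(G, D): f(D) = p_data*log(D) + p_g*log(1 - D).
p_data, p_g = 0.7, 0.2          # assumed densities at one fixed point x
d_grid = np.linspace(1e-4, 1 - 1e-4, 10_000)
f = p_data * np.log(d_grid) + p_g * np.log(1 - d_grid)

d_best = d_grid[np.argmax(f)]           # empirical maximiser
d_star = p_data / (p_data + p_g)        # D*_G(x) from Equation 9
print(d_best, d_star)                   # both approximately 0.778
```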
### Conditional GAN
The Conditional GAN (CGAN), proposed in [49], modifies the original vanilla GAN. In the original model, the generative process could not be controlled or conditioned; it was entirely unsupervised. CGANs allow the generation process to be controlled and directed, meaning that the model can be steered towards a focus on a particular class or feature. The original paper proposing a CGAN model utilised the MNIST dataset (see Section 5.8) to test its capabilities. The change in control occurs when the focus is put on some element \(y\), which can be a class, value, feature, and so on, and this \(y\) is fed into both the generator and discriminator as an additional layer of input. In the original proposal, the generator is fed not only the focus \(y\), but also a noise function, \(p_{z}(z)\)[49]. In a subsequent study, a CGAN for facial recognition was proposed [52], which used sampled random noise for \(p_{z}(z)\) and a random sampling for \(y\) taken from the training dataset, utilising a Parzen window, \(p_{y}(y)\).
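A minimal sketch of this conditioning idea is given below (in PyTorch, with hypothetical layer sizes). The label is one-hot encoded and concatenated to the generator's noise input and to the discriminator's image input, which is the essence of the CGAN construction rather than a faithful reproduction of the original architecture.

```python
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    """Toy conditional generator: the label y is one-hot encoded and
    concatenated to the noise vector z before being mapped to an image."""
    def __init__(self, z_dim=100, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y):
        y_onehot = nn.functional.one_hot(y, self.n_classes).float()
        return self.net(torch.cat([z, y_onehot], dim=1))

class CondDiscriminator(nn.Module):
    """The discriminator receives the same label encoding with the image."""
    def __init__(self, n_classes=10, img_dim=28 * 28):
        super().__init__()
        self.n_classes = n_classes
        self.net = nn.Sequential(
            nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, x, y):
        y_onehot = nn.functional.one_hot(y, self.n_classes).float()
        return self.net(torch.cat([x, y_onehot], dim=1))

z = torch.randn(16, 100)
y = torch.randint(0, 10, (16,))
fake = CondGenerator()(z, y)
score = CondDiscriminator()(fake, y)
```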
### Deep Convolutional GAN
The Deep Convolutional Generative Adversarial Network, or DCGAN, was proposed in a 2015 paper titled "Unsupervised representation learning with deep convolutional generative adversarial networks" [53]. The model for DCGAN was based in research around convolutional neural networks, and how they might offer
opportunities for growth in other machine learning models. The paper was focused on the generation of pseudo-natural images, as GAN models are highly effective in image generation tasks. While most deep learning algorithms are black-box methods, it is possible through careful tuning to examine the underlying functions of a Convolutional Neural Network (CNN) model. The authors made use of several changes to traditional CNN architecture, drawn from the following papers (a minimal generator following these guidelines is sketched after the list):
* _Striving for simplicity: The all convolutional net[54]_
* _Inceptionism: Going deeper into neural networks[55]_
* _Batch normalization: Accelerating deep network training by reducing internal covariate shift[56]_
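The sketch below is a minimal DCGAN-style generator in PyTorch. The layer sizes and output resolution are illustrative assumptions rather than the configuration used in [53], but the structure follows the usual guidelines: strided transposed convolutions instead of pooling, batch normalisation in the generator, ReLU activations, and a Tanh output.

```python
import torch
import torch.nn as nn

# Minimal DCGAN-style generator mapping a 100-d latent vector to a 32x32 image.
generator = nn.Sequential(
    nn.ConvTranspose2d(100, 256, kernel_size=4, stride=1, padding=0),  # 1x1 -> 4x4
    nn.BatchNorm2d(256), nn.ReLU(),
    nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),  # 4x4 -> 8x8
    nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),   # 8x8 -> 16x16
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 1, kernel_size=4, stride=2, padding=1),     # 16x16 -> 32x32
    nn.Tanh(),
)

z = torch.randn(8, 100, 1, 1)     # batch of 8 latent vectors
print(generator(z).shape)         # torch.Size([8, 1, 32, 32])
```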
### cDCGAN
The conditional Deep Convolutional Generative Adversarial Network, or cDCGAN, takes the properties of both the CGAN (see Section 6.2) and DCGAN (see Section 6.3) models. It was proposed as part of a paper focusing on handwritten Arabic characters [57], a significantly more complicated task than identifying English alphanumeric characters. Arabic characters are distinctly different in that Arabic lettering can have similar characters to the extent that they "are only distinguishable by dots"[57]. The database utilised was the AHDBase/MADBase[58], containing 70,000 digits, chosen because it was the database that best matched the MNIST database of numeric handwritten digits (see Section 5.8). The discriminator in the cDCGAN model is a deep CNN. The LeakyReLU activation function is used in this model, and the generator produces images at 32x32 pixels.
### Bi-directional GAN
The Bi-directional Generative Adversarial Network (BiGAN) was proposed in a 2017 paper called "Adversarial feature learning"[59]. Similarly to MGAN (see Section 6.15) this is a three party model, consisting of an encoder, a generator, and a discriminator. The role of the encoder is to map data \(x\) to a latent space representation \(y\). Donahue et al specify that the encoder is taught to invert the generator, even though the modules do not interact with one another or directly process the other module's outputs.
The BiGAN model is meant to excel at tasks that involve semantic data and
representation. It is also an entirely unsupervised model in machine learning. Interestingly, BiGAN was brought into the spotlight in Bioinformatics in a 2021 paper titled "BiGAN: LncRNA-disease association prediction based on bidirectional generative adversarial network"[60], in which the BiGAN model proved highly effective. When compared against the three gold-standard algorithms for detecting the "associations of lncRNA-disease pairs"[60], BiGAN achieved the highest scores, including 93.1% for the AUC. That was several percentage points higher than the standard methods. BiGAN models are now found in many different fields, including research into malware.
### MalFox
The creation of MalFox, a GAN model for creating attacks and new malware [61], gave an important and powerful tool to those testing or attacking existing systems. MalFox is an amalgamation of parser, generator, and discriminator layers, which takes Windows Portable Executable files, or PEs, as input and outputs executables of the same type. This makes it a more practical tool than the more common adversarial example generators, which often take an image created by a feature extraction process and do not produce functioning malware in pre-approved file types. Since its inception, MalFox has undergone more than one transformation, but even the original version shows the power of GAN-based schemes for attack purposes. The initial experiments were run against pre-trained classifiers - Decision Tree; Random Forest; Logistic Regression; Support Vector Machine; Multi-Layer Perceptron; Vote; Long Short-Term Memory; Bi-directional LSTM; LSTM Average; Bi-LSTM Average; LSTM Attention; and BiLSTM Attention. The evasion rate - the percentage of times the program was classified as benign by the systems - achieved by MalFox was at least 99% across the board. This is a stunning display of the power of these schemes. Furthermore, when tested against the open-access giant VirusShare, the detection rate was only 29.7% on average, and the evasion rate on the same was averaged as 56.2%. MalFox and the experiments done with it show exactly how powerful these schemes can be.
### MalGAN
In 2017, the authors of [62] created a GAN which generates black-box adversarial examples for attacks via Windows binaries. This scheme, called MalGAN, is
now widespread, with multiple different variants and branches of development. The use of binaries, and portable executable files, is one of the ways this is so successful at showing the potential of GAN-based attacks. The authors of the original MalGAN were able to get the detection rate down to almost zero. This clearly demonstrates the danger that is posed by GAN attack systems. Since its inception, MalGAN has been modified and improved, under the auspices of creating the best database for the training of robust detection schemes. [63] proposed using an LSGAN model to address weaknesses like mode collapse. The model they propose still uses the MalGAN scheme, after the use of the LSGAN method, which involves implementing a Least Square function. Their purpose was making MalGAN more robust and avoiding the potential fallout from limiter problems or mode collapse. The authors are still focused on creating adversarial examples, though they focus more on poisoning attacks as well as the traditional 'perturbation' attacks, which change only a small portion of the code while still retaining the capabilities or functionality of the original sample. They are not the only researchers to build a version of MalGAN with Least Square functions to increase the robustness of the function. [64] also suggested the use of a Least Square function in order to minimize mode collapse. The authors of this paper also changed up the activation functions, and added LeakyReLU to the mix. They achieved an 85% success rate over seven different ML classifiers.
### Least Square GAN
Mao et al propose a GAN variant called the Least Square GAN (LSGAN) [65]. Named for its key innovation, the model replaces the discriminator's log loss with a least-squares loss; minimising this objective corresponds to minimising the Pearson \(\chi^{2}\) divergence, a member of the \(f\)-divergence family. The least-squares loss also penalises generated samples that are classified correctly but still lie far from the real data, which improves the gradients passed to the generator and thus its training. The objective functions of the LSGAN model, from [65], are presented in Equations 14 and 15, where \(b\) and \(a\) are the target labels for real and fake data respectively, and \(c\) is the value that the generator wants the discriminator to assign to generated data.
\[\underset{D}{\min}V_{LSGAN}(D)=\frac{1}{2}\mathbb{E}_{x\sim p_{data}(x)}\left[(D(x)-b)^{2}\right]+\frac{1}{2}\mathbb{E}_{z\sim p_{z}(z)}\left[(D(G(z))-a)^{2}\right] \tag{14}\]
\[\underset{G}{\min}V_{LSGAN}(G)=\frac{1}{2}\mathbb{E}_{z\sim p_{z}(z)}\left[(D(G(z))-c)^{2}\right] \tag{15}\]
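A minimal sketch of these objectives is shown below. The labels \(a=0\), \(b=1\), \(c=1\) are a common choice rather than the only one, and the random tensors stand in for real discriminator outputs.

```python
import torch

def lsgan_d_loss(d_real, d_fake, a=0.0, b=1.0):
    """Discriminator objective: push D(x) towards the 'real' label b and
    D(G(z)) towards the 'fake' label a, using squared error."""
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def lsgan_g_loss(d_fake, c=1.0):
    """Generator objective: push D(G(z)) towards the value c that the
    generator wants the discriminator to assign to generated data."""
    return 0.5 * ((d_fake - c) ** 2).mean()

d_real = torch.rand(32)   # stand-ins for discriminator outputs on real data
d_fake = torch.rand(32)   # ... and on generated data
print(lsgan_d_loss(d_real, d_fake), lsgan_g_loss(d_fake))
```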
### ACGAN
The auxiliary classifier GAN (ACGAN) was proposed in 2016 by Odena et al [66]. The ACGAN variant was proposed, at the time, for use in image generation, but has since moved into other subjects, as many GAN models do due to their easily transferable nature. The variant distinguishes itself by the use of an auxiliary decoder network _within_ the discriminator. As a result, the algorithm can:
* Give as output the label of the class for training samples.
* Output a subset of the set of latent variables used to generate the samples.
According to Navidan et al. [14], the strongest point of difference between the ACGAN model and the CGAN variant is that in order to determine class labels, the CGAN relies on the conditioning of the generator. In contrast, the ACGAN predicts class labels due to the auxiliary decoder network. The way the ACGAN predicts class labels can be found in Equation 16.
\[L_{\mathcal{S}}=E_{x\sim p_{data}(x)}\left[\log D(x)\right]+E_{z\sim p_{z}(z)}\left[\log(1-D(G(z)))\right] \tag{16}\]
\[L_{\mathcal{C}}=E_{x\sim p_{data}(x)}\left[\log Q(c\mid x)\right]+E_{z\sim p_{z}(z)}\left[\log Q(c\mid G(z))\right] \tag{17}\]
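The sketch below shows one way to compute these two losses given a discriminator that outputs a real/fake probability together with auxiliary class logits; the tensor shapes, the random stand-in values, and the way the losses are combined are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def acgan_losses(d_real, d_fake, q_real_logits, q_fake_logits, labels):
    """Source loss L_S and class loss L_C for an ACGAN-style discriminator.
    d_*: probabilities that samples are real; q_*_logits: class logits."""
    eps = 1e-8
    l_s = torch.log(d_real + eps).mean() + torch.log(1 - d_fake + eps).mean()
    # cross_entropy returns the mean negative log-likelihood, so its negation
    # approximates E[log Q(c | x)] + E[log Q(c | G(z))].
    l_c = -(F.cross_entropy(q_real_logits, labels)
            + F.cross_entropy(q_fake_logits, labels))
    # Typically D is trained to maximise L_S + L_C, G to maximise L_C - L_S.
    return l_s, l_c

d_real, d_fake = torch.rand(16), torch.rand(16)
q_real, q_fake = torch.randn(16, 10), torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
print(acgan_losses(d_real, d_fake, q_real, q_fake, labels))
```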
### ISGAN
The Identity-Sensitive Generative Adversarial Network was proposed for, and focused on, _face photo-sketch synthesis_[67]. This is the process by which a photo of a face is turned into a sketch through machine learning. The goal of creating the ISGAN model was to create a formal image translation task that addresses the problem of turning a photo into a pseudo-hand-drawn sketch. This area does have a security and police angle - on occasion, when given a poor quality image of the face of a suspect, turning it into a sketch can help to identify features that may not be as prominent in the original photo, especially when machine learning is involved and the ISGAN can understand and augment the original image. ISGAN is not the only GAN model that has been applied to this task, but, when the benchmark tests were run against the current state-of-the-art methods, ISGAN was either on par with or above them in score[67].
### InfoGAN
The Information Maximising Generative Adversarial Network, or InfoGAN, model was first proposed by Chen et al. in 2016 [46]. The authors noted the ability of the InfoGAN model to disentangle the underlying representations of handwritten characters. The model was tested and trained using the MNIST dataset (see Section 5.8). It was also utilised on 3-dimensional images of faces, and on pictures of house street numbers. In performance, the InfoGAN model adds 'negligible' complexity to the vanilla GAN (6.1) model. The training itself was based on the training done for a DCGAN (6.3), rather than a vanilla GAN.
### fvGAN
In [68], the authors develop a method by which they can utilise a GAN to build malicious code into PDF files in such a way as to evade detection even by schemes dedicated to the detection of PDF malware. The proposed method, feature vector GAN or fvGAN, took into account the fact that features in PDF files are highly interconnected and interdependent, meaning one cannot simply change the features to the ones required. First they had to pull out the features that were most essential: using _mimicus_, an invention of their own design, they were able to extract 135-dimensional feature vectors from the files. Once the fvGAN had been trained on the Contagio and Surrogate datasets (from the original PDFRate study[69]) of both malicious and benign PDFs, they used it to create PDFs with content injection attacks to great effect.
### CycleGAN
The CycleGAN model is foremost an image translation mechanism [70]. Cycling an 'unpaired' image from one domain to another is its main purpose. Proposed in 2017 by Zhu et al [71], CycleGANs have become a reliable tool in image processing, and have assisted researchers in many domains. With regards to security, it is important to note that CycleGAN models can be used for biometrics - particularly facial recognition. More recently, a CycleGAN variant was proposed for video-to-video translations - Mocycle-GAN [72]. This raises the possibility of using this type of GAN to build facial recognition into CCTV software.
### ProGAN
The Proximity Generative Adversarial Network, or ProGAN [73], is meant to preserve the proximities of instances and samples that are reduced in dimensionality. The original proximity in the space prior to dimensionality reduction must be preserved, and thus ProGan was created. The proximity of nodes in this subject is classed as first-order, second-order, and so on. If a node is connected to another node with an edge, it is considered first-order. These relationships are preserved through network embedding.
### MGAN
Mixture Generative Adversarial Networks (MGAN) was proposed focusing on overcoming the mode collapse problem in vanilla GAN models (see Section 6.1.1). This problem is a serious risk for standard GAN models. MGAN seeks to address that issue by using multiple generators to create generated output based on the real data given to the discriminator [74]. The generators are trained simultaneously, rather than sequentially, and the resulting distributions can be _mixed_ to achieve a realistic distribution. The ultimate goal is to create a three-party minmax game, rather than the traditional two party game with vanilla GAN. The parties involved in a MGAN minmax game are: the discriminator, the classifier, and the set of generators. The different generators are meant to work harmoniously, and the authors point out the importance of having the different generators specialise in different data modes.
## 7 Areas of Use
There are many areas in which Generative Adversarial Networks are of use, and these include a selection of cybersecurity related topics. A broad overview of the types of use each different model of GAN is used in is shown in the table in Section 6.15.
### Classification and Images
One of the first, and most enduring, tasks for which GANs are used is image classification[75]. GANs are so effective at comparing and creating images with the necessary similarities and contextual elements - that is, at producing many images that belong to clear classes - that even non-image tasks are often translated into images[76] for the ease this provides when creating augmented datasets using GAN schemes. The implications for their use in malware are self-evident. The ability to generate new samples of malware families, in order to train machine-learning-based detection schemes on what the _features_ of a family of malware are, is a leap forward in terms of defensive technology. For example, in [77] the authors use deep learning GAN models to generate unseen malware examples and train schemes against the signatures of these new malware images. While the authors did not achieve a large increase in classification accuracy, the principle of their work has been examined by others as well. One study, [78], found it could increase the classification accuracy of malware samples by 6% through the generation of synthetic examples of malware for training purposes. In [79], the authors implemented a GAN for the classification of greyscale images that were created by transforming malware files with feature extraction; this task is well suited to GAN schemes because GANs excel at image classification, and in this study the classifier's performance improved by approximately 6%. In [80], the authors again translate malware files into greyscale images in order to use GAN models on them for classification. GAN systems are excellent at picking out images with significant similarity, which in this case means images that belong to the same malware families. Using the Microsoft Malware Classification Challenge [81], the system is run against AE-SVM [82], tDCGAN [83], Strand [84], and MCSCasm [85], and it classifies the malware family the images belong to better than these state-of-the-art classifiers, with the lowest error rate.
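One common form of this malware-to-image translation is to interpret the raw bytes of a sample as pixel intensities. A minimal sketch is shown below; the file name and image width are placeholders rather than values used by any particular paper surveyed here.

```python
import numpy as np

def malware_to_grayscale(path, width=256):
    """Read a binary file and reshape its bytes into a 2-D greyscale image,
    a common pre-processing step before feeding malware samples to a GAN
    or CNN classifier. Pixel values are the raw byte values 0-255."""
    data = np.fromfile(path, dtype=np.uint8)
    height = int(np.ceil(len(data) / width))
    padded = np.zeros(height * width, dtype=np.uint8)   # zero-pad the tail
    padded[:len(data)] = data
    return padded.reshape(height, width)

# img = malware_to_grayscale("sample.exe")   # hypothetical file name
```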
### Data Augmentation, Rare Classes, and Balancing a Dataset
Machine learning models suffer from a desperate need for training data. The amount of data needed to train complex systems to the point at which they know how to deal with the data coming into their systems is enormous. And, unless they are an unsupervised learning model, that data requires logging and labelling. This task is an enormous undertaking, and one that requires human input, hours upon hours of sitting at computers and labelling each piece of data that will be sent to the model for training and verification processes. GAN is an effective tool to potentially solve some of the issues which arise out of a need for data augmentation[86]. Instead of an individual creating hashes of new malware as it arrives, GANs can be used to generate and generalise synthetic malware examples for machine learning models. Rather than learning through the hashed values of malware files, the GAN can produce images of malware files and rare families for training purposes. The importance of feature extraction for learning in malware detection is of particular weight in this scenario. This is shown clearly in [78], in which they augment their malware datasets with synthetic samples created by GAN models. There are other problems with the data necessary for machine learning models too.
Augmenting an existing dataset so that the different types of data - the different classes - can be trained to the highest levels of accuracy is highly important in classifying data. It is a major use of GAN schemes (see [87, 88, 89, 90, 91, 92, 93, 94]). Taking a dataset that is perhaps too small, or whose classes are too imbalanced for robust learning, and supplementing it is an area in which GAN models shine. As an example, [95] focuses on using GAN models to create new samples of different classes of malware in order to balance a dataset on which to train machine learning models.
In many existing datasets, especially those related to cybersecurity, there exists a significant imbalance in the classes of data within a dataset. There are many sets where the class ratios are significantly imbalanced. As seen in [96], a dataset with very unbalanced classes can be made more even across types using a GAN to solve the rarity of certain classes. The involved datasets were malware designed as Android APKs, which the authors translated into greyscale images. By supplementing these datasets of Android malware with GAN methods, the authors were able to achieve increases of 5-20% in the F1-score. This shows the power of
GAN models in augmenting rare data.
### Zero Day Malware
The accurate and speedy classification of malware files and malicious code in computer systems is an expansive task, and one that has remained an enduring problem in the cybersecurity domain[83]. This is a task that is getting bigger and more pressing, not losing significance. The ease with which malicious code can be edited once antivirus software has been updated to detect that particular type of malware means that the creators of this malware have the edge in the battle between attackers and defenders. The sophisticated attacks using polymorphic malware (see Section 8.2), which morphs and changes itself in order to escape detection, make the task of identifying the malicious code even more difficult[84]. Traditional antivirus software relies on the use of hash codes. Once the authors of the antivirus software have identified a piece of malicious code, it is hashed into a value string that specifically identifies that piece of code. The antivirus definitions are updated, and the program knows how to recognize that code, should it come into contact with the computer on which the defense software is installed. This does, however, rely on the idea that this malware has been identified before it turns up on the target system. Because the slightest change in the malicious code causes cascading changes in the hash code, all a malware developer needs to do to escape the antivirus system is move a portion of code to a different place within the program. This retains functionality, while changing the hash code of the program enough that it will no longer be picked up by traditional antivirus schemes. Until the antivirus developers find and identify the new variant of the malware, the traditional scheme will not notice the altered malware and it will be able to operate undetected. GAN has become a tool for building new adversarial examples that the defender can train with to prevent this outcome[97]. Similarly, in [98], the authors address this problem using GAN-based adversarial examples to train their blockchain method of intrusion detection. Their LSTM-CGAN model for generating these examples allowed the resulting classifiers to jump several percentage points in accuracy when the classifier was trained on the AEs as opposed to when it was trained on the unenhanced dataset.
One of the bigger stumbling blocks in malware detection is the unseen or 'zero-day'
attacks[99]. Because of the types of data that a GAN model can create, generating new adversarial examples and zero-day malware is a possible way to train a security model on unseen data. In [100], the authors use a GAN model they name TrafficGAN to create new malicious traffic patterns for zero-day attacks, partially by including noise in the data as a substitute for some unseen traffic. In [101], a standard GAN variant with a LeakyReLU activation function is implemented to train IDS models on unseen malware. While their contribution on the whole is an increase of only 1% accuracy overall, the idea of using the data generated by a GAN to increase the robustness of a model on data it hasn't seen in training sets is a useful and practical one. In fact, the authors of [62] use GAN modelling to create black-box attack methods. Their chosen model in particular, MalGAN (see Section 6.7), was created entirely towards the view that GAN schemes could create functional and unseen examples of malware to attack machine learning systems. The attacks they conducted on non-neural-network systems managed to achieve a True Positive Rate of zero, while the neural-network-based models (RF, LR, DT, SVM, MLP, VOTE), which had achieved a TPR of 92% minimum prior to the attack tests, achieved a TPR of 0.19% maximum when the generated attack data was integrated into the testing set. This study shows how vulnerable IDS or malware detection systems are to the data and attacks generated by a malicious GAN. On the other hand, when the authors of [102] generated their own zero-day attacks and added them into the training datasets for their IDS, they were able to achieve a success rate of 84% when classifying unseen malware. Utilising the Microsoft Malware Classification Challenge (2015) dataset [81], they outperformed 14 other state-of-the-art systems, and achieved a 98% success rate when classifying the zero-day attacks. This shows that using GAN to create unseen examples and zero-day attacks can be a powerful force for creating truly robust classifiers and detectors.
### Detection Evasion
Given the success at generating unseen examples in order to train more robust systems, the use of GAN methods to create malicious code which evades detection is a logical step. Building GAN schemes which can generate malicious files that evade current detection schemes is necessary for the training of the next generation of malware detection schemes. In [103], the authors create a scheme using GAN
to evade detection from a broad range of machine learning models - Multi-layer Perceptron, Decision Tree, Logistical Regression, Support Vector Machine, Random Forest - and found that feature extraction was the key in making their scheme achieve the minimal True Positive Rate (TPR). As they increased the selection of features in both attacker and defender, the TPR rose. While the authors take the steps of adding benign features into the malicious examples they create, they achieve a slightly less impressive TPR than the previously mentioned scheme - the lowest TPR is under 11%. Unlike the previous model, the authors utilise n-gram feature selection, a method borrowed primarily from Natural Language Schemes, but which removes the step of translating the malware into a different form - such as the popular method of making malware 'images' - and allows the model to work on the raw data itself. The creation of the MalFox GAN scheme in [61] shows the ability of GAN schemes to create black-box attacks, in which the attacker knows nothing about the actual structure of the system they are attacking. In a black box attack, the attack must be generalised enough that it can be employed successfully against any defender. In MalFox, which was trained specifically with Obfusmal, Stealmal, and Hollowmal as techniques to perturb the data for attacking, when used against the online malware repository Virus Share [36], the detection rate was minimised to 56.2%. In [104], the authors use a standard GAN model against a Deep Neural Network defence system in order to create Windows malware that is perturbed just enough to evade detection and retain its original purpose and function. The authors achieved this using raw byte sequences and training the GAN on existing byte sequences of malware. They achieved evasion rates of more than 50%.
### Applied Attack GAN Models
The flipside of using GAN to generate unseen malware and use it to train antivirus and IDS models, is the use of GAN to create highly effective, theoretical attacks on existing systems. The possibilities offered by GANs for creating new 'adversarial examples' to use against detection systems, mean that there is significant opportunity to build new malware with the same functionality, but in a way that is fast and avoids detection with high rates of success. As we discussed in Section 6, the creation of models like MalGAN and MalFox (see Section 6.7 and 6.6), are clear demonstrations of the attack potential of GAN models in generating
malware. In [105], the authors managed to build a model which evaded firewalls in order to attack Android systems with a success rate of 95%. This is a troubling note for security against new malware creation techniques. The use of these systems to _train_ the new generation of malware detection schemes should therefore be a high priority for security researchers and developers.
## 8 Discussion
As popular as GAN research is, there are almost as many types of proposed models for GAN as there are papers discussing GANs. There are some which can be easily seen and categorised as types of the same genus, and then there are those rare new examples, which offer a fresh method for implementing GANs within a research or work setting. Surveying as many papers proposing solutions to difficulties in particular sectors results in surveying almost as many problems in others. We have gathered some particular points of interest and problems which may offer future lines of research or simply act to temper future models with an eye towards creation of new malware detection schemes.
### Malware and the Sandbox
One issue with identifying malware based on features and contextual relationships is that it may require the malware be run before it can be accurately identified. This type of analysis is _dynamic_ analysis, whereas detection systems which use the file without running it are performing _static_ analysis. The majority of the papers we have assessed in this study propose static analysis [106]. However, this then causes concern for the implementation of these methods - can they run successfully in real-time, stopping the malware from running even before it has been classified? This puts a level of vulnerability into the scheme. What damage will be done while the machine identifies the currently executing program as malware? A mitigation for this type of problem is to run the programs in a _sandbox_ before classification. This means they cannot affect the performance of the system or exploit any vulnerabilities prior to identification[107]. Another suggestion is that the program is not run until the binaries have been extracted and used to try to identify the malware family to which it belongs. This problem is not specific to Deep Learning or Machine Learning based detectors - it has been of concern for many years, and has often been addressed using the sandbox option. However, this simply led to the
creation of "environmental detection" in malware[108], allowing the code to detect when it was being run in a sandbox and when it was in the real system environment.
### Polymorphism, Evolution and the Dangers of Malware
Malware developers have taken stock of the current state of research in the area and used it to their advantage. New types of polymorphic malware, which twist and turn themselves into code-based pretzels in order to avoid detection, are able to fool signature-based detection systems [106]. GAN schemes have the ability to perturb the code of existing malware and use these unseen examples to train new models. This ability has the potential to defend against polymorphic malware. One method, discussed in [106], is to use malware _behaviour_ to teach new detection systems. Of course, this has innate risk, because the malware has to be executed in order to complete the behaviour analysis. In such a case, the idea would be to run the malware sample in a 'sandbox', as discussed in Section 8.1. Another potential problem is fighting the _method by which the malware is spread_. Social engineering hacking, sending malicious files to email, or using watering hole attacks[iii] to download the malware onto the target computer [110] are all tasks that are reliant on the ways in which the user interacts with their devices. Ideally, a new and improved machine learning based detection scheme would protect users even when they clicked on email links or went to download a PDF from a new site. This makes training effective and accurate classifiers a priority in cybersecurity, and one of the preeminent areas of malware research.
### Optimisation and Nature Inspired Computing
In all machine learning models based on Neural Networks, there is a need to optimise the model. The number of layers, the nodes in each layer, the activation function, the weights - all of these make the model significantly better or worse at its task, and for the most part, they are altered and improved through trial and error. One group of researchers, however, has taken this optimisation problem and made it the focus of their research into GAN models [111]. Using Genetic Algorithms, they run through the different options for optimisation automatically, with the non-dominated sorting genetic algorithm (NSGA-II) finding the best values for the GAN model. The true positive score of the optimised GAN was more than 98% on the MNIST dataset, introduced in [112], which is made up of images of handwritten digits. The same format for optimisation was employed when the model ran on the malware dataset taken from the Vision Institute [113], and achieved a true positive rate of 97.87%, a score several points higher than the GAN run on the same dataset without the optimisation of the NSGA-II algorithm. This suggests there is a road to take for optimising the layout and structure of each of the GAN models, and that it is possible another Nature-Inspired Computing area may be how that is achieved. As such, it is an interesting direction for malware researchers to explore.
### Data Format and Translation
In many of the examples profiled in this paper, the data, be it traffic flow, machine language, API calls, or malware files, is often translated into different types of images. This makes the job of the GAN model easier because they work so well with image classification tasks, but it also means that there is additional complexity in the algorithms due to the necessity of translating the data into the correct format. There are papers, however, such as [95], [114], [62], [61], [115], or [68] which use the raw data, without translating it to a type more palatable to a GAN scheme. This is important because of the overhead of these different models. When investigating to find an efficient model for use in a given domain, ensuring that there is little to no additional or unnecessary overhead seems a significant consideration.
One thing common amongst almost all papers discussed is the need to change the form in which the data is used. The pre-processing, referred to in [116] as _data triage_, is a costly, high-overhead requirement in both time and processing power. There are those that have managed to avoid translating the data overmuch - such as the research presented in [100] - but most translate data into images or carefully sectioned bytes of data. This is an area of concern while real-time application of a GAN-based IDS or firewall is a goal. To further the idea of a real-time GAN scheme, the methods of data pre-processing need to be carefully examined, as they currently impose unacceptably large overheads for real-world application. Because malware files are generally simple and popular to translate into images, the task is not as arduous as it might be in other areas of interest.
### Ethics and Responsibility in Malware Research
The ethical questions posed in [117] offer an interesting path for future research - there is a general lack of discussion in GAN research papers about the ramifications of possible misappropriation or misclassification of the work contained within. This is an important step to consider in all academic research, and the ethical implications of things like Deepfake[118] or AdvandMal[64] are likely to cause extreme and often unintended effects. Whether by GAN or VAE, the type of research in papers which create new models for malware synthesis is an excellent example of why researchers need to be careful in balancing the quest for knowledge and the security/ethical risks of their research. Knowing that this type of attack is possible, and what results it can achieve, is important for those who develop defensive mechanisms for this type of attack, but it is also useful to black-hat individuals and malicious operators. The amount of detail given about the model created, how it operates, and how it was trained is where that balance of ethics and information comes into play. These things must be looked at carefully and applied with consideration. The current situation of neural network applications demands that we take stock of what has been built, how the dataset has been labelled, how the features have been established, and how/where the neural network is going to be deployed. While this work is starting to take place in the field - see [119],[120] - there is much still to cover, and researchers like those in [121] are offering frameworks for incorporating ethics into the fabric of machine learning research. Therefore, like many research topics, it has a flipside, and both sides need serious ethical review and consideration to ensure that the research benefits those who are most vulnerable.
## 9 Future Research Directions
There are still many avenues for potential research. The method employed by the authors in [103], which avoids the popular step of translating the dataset into sequences or images and instead works on the data directly using n-gram feature extraction, is certainly worthy of future research for more applications, particularly given the incredibly low detection rate it achieved. The use of Genetic Algorithms to optimise the performance metrics of GAN models [111] is another avenue which could prove very fruitful. The development of real-time, dynamic analysis and detection is a challenge researchers are still only beginning to scratch the surface of [106], and requires
further research into the types of secure environments in which this analysis can safely take place. The realm of malware research contains so many possible avenues for research when it comes to GAN algorithms, and this has been illustrated in this paper to the best of our ability.
## 10 Conclusion
This paper has presented a wide range of research in the current malware research space, using different types of Generative Adversarial Networks. Our aim has been to explain not only what Generative Adversarial Networks are and how they are trained and assessed, but also to give an effective grounding in their applications within the malware research community, both in the current literature and in potential future research. To that end, we have iterated through an explanation of GANs' basic functions; the work done in other surveys in related areas, particularly to demonstrate that there has yet to be an in-depth survey paper on the uses of GANs in cybersecurity and malware; the different metrics used for evaluation; an in-depth review of the datasets currently favoured in the problem space; a list of the different GAN models discussed throughout the survey; and a detailed treatment of the areas of use that are currently most popular, so as to give the reader the full picture; and we have presented potential future avenues for research in the area. We hope that this survey of malware research through the lens of Generative Adversarial Networks, and the ways in which they can be employed, has given the reader an idea of where to start with their own research in this area, or an updated view of the state of the art. The field of GANs for malware research is only getting started; there is much to do, and many questions to be answered.
## References
* [1] S. Abadi, M. Abadi, M. |
2301.11559 | Enabling Multi-threading in Heterogeneous Quantum-Classical Programming
Models | In this paper, we address some of the key limitations to realizing a generic
heterogeneous parallel programming model for quantum-classical heterogeneous
platforms. We discuss our experience in enabling user-level multi-threading in
QCOR as well as challenges that need to be addressed for programming future
quantum-classical systems. Specifically, we discuss our design and
implementation of introducing C++-based parallel constructs to enable 1)
parallel execution of a quantum kernel with std::thread and 2) asynchronous
execution with std::async. To do so, we provide a detailed overview of the
current implementation of the QCOR programming model and runtime, and discuss
how we add 1) thread-safety to some of its user-facing API routines, and 2)
increase parallelism in QCOR by removing data races that inhibit
multi-threading so as to better utilize available computing resources. We also
present preliminary performance results with the Quantum++ back end on a
single-node Ryzen9 3900X machine that has 12 physical cores (24 hardware
threads) with 128GB of RAM. The results show that running two Bell kernels with
12 threads per kernel in parallel outperforms running the kernels one after the
other each with 24 threads (1.63x improvement). In addition, we observe the
same trend when running two Shor's algorithm kernels in parallel (1.22x faster
than executing the kernels one after the other). Furthermore, the parallel
version is better in terms of strong scalability. We believe that our design,
implementation, and results will open up an opportunity not only for 1)
enabling quicker prototyping of parallel/asynchrony-aware quantum-classical
algorithms on quantum circuit simulators in the short-term, but also for 2)
realizing a generic heterogeneous parallel programming model for
quantum-classical heterogeneous platforms in the long-term. | Akihiro Hayashi, Austin Adams, Jeffrey Young, Alexander McCaskey, Eugene Dumitrescu, Vivek Sarkar, Thomas M. Conte | 2023-01-27T06:48:37Z | http://arxiv.org/abs/2301.11559v2 | # Enabling Multi-threading in Heterogeneous Quantum-Classical Programming Models
###### Abstract
While quantum computers enable significant performance improvements for certain classes of applications, building a well-defined programming model has been a pressing issue. In this paper, we address some of the key limitations to realizing a generic heterogeneous parallel programming model for quantum-classical heterogeneous platforms. We discuss our experience in enabling user-level multi-threading in QCOR [1] as well as challenges that need to be addressed for programming future quantum-classical systems.
Specifically, we discuss our design and implementation of introducing C++-based parallel constructs to enable 1) parallel execution of a quantum kernel with std::thread and 2) asynchronous execution with std::async. To do so, we provide a detailed overview of the current implementation of the QCOR programming model and runtime, and discuss how we add 1) thread-safety to some of its user-facing API routines, and 2) increase parallelism in QCOR by removing data races that inhibit multi-threading so as to better utilize available computing resources.
We also present preliminary performance results with the Quantum++ [2] back end on a single-node Ryzen9 3900X machine that has 12 physical cores (24 hardware threads) with 128GB of RAM. The results show that running two Bell kernels with 12 threads per kernel in parallel outperforms running the kernels one after the other each with 24 threads (1.63\(\times\) improvement). In addition, we observe the same trend when running two Shor's algorithm kernels in parallel (1.22\(\times\) faster than executing the kernels one after the other). Furthermore, the parallel version is better in terms of strong scalability.
We believe that our design, implementation, and results will open up an opportunity not only for 1) enabling quicker prototyping of parallel-aware quantum-classical algorithms on quantum circuit simulators in the short-term, but also for 2) realizing a generic parallel programming model for quantum-classical heterogeneous platforms in the long-term.
Quantum-Classical Programming Models, Parallel Programming Models, QCOR, Heterogeneous Computing
## I Introduction
Quantum computing is a rapidly evolving field that leverages the laws of quantum mechanics for computation. Since near-term quantum computers are susceptible to significant levels of noise, a hybrid combination of classical computers and quantum computers, namely _quantum-classical_ computers, is explored to mitigate noise while achieving orders-of-magnitude performance improvements for certain classes of applications. Such a hybrid combination can be viewed as one realization of heterogeneous computing where different types of processing elements, including special purpose accelerators, simultaneously and asynchronously work together.
QCOR [1] is a programming system to realize such a heterogeneous quantum-classical model. It is based on the C++ programming language and a compiler that is built on top of XACC [4]. As shown in Figure 1, QCOR's target machine is a heterogeneous system where multiple CPUs (cores) are connected with quantum devices and other accelerators such as GPUs and FPGAs.
To program quantum devices in QCOR, the user writes a _quantum kernel_ (i.e., a function that will be executed on a quantum device) in quantum computing domain-specific languages (DSLs), such as XACC's XASM or IBM's Open-QASM [5]. Similar to other GPU-based heterogeneous programming models such as CUDA [6], SYCL [7], and OpenCL [8], QCOR allows the user to write quantum kernels and CPU control code in the same program. This single-source programming model greatly facilitates quantum-classical programming.
However, one open research question for QCOR and other quantum DSLs is how to provide well-defined, user-level multi-threading support. Specifically, as the machine model in Figure 1 implies, it is possible that multiple CPU cores might simultaneously utilize one or more quantum devices. Currently, there is no user-facing API-level support for multi-threading in quantum-classical programming models like QCOR and DSLs like OpenQASM, although it is typical to internally use multi-threading for accelerating quantum circuit simulations [9, 2, 10, 11].
Fig. 1: QCOR Machine Model [3]
In this paper, we explore the possibility of enabling user-level multi-threading in QCOR, which enables coarser grain parallelism in quantum-classical programming models. We believe this is an important step towards realizing an end-to-end heterogeneous programming system that can work on general heterogeneous platforms that include quantum computers. This work makes the following key contributions:
* Design and implementation of multi-threading support for a heterogeneous quantum-classical programming model.
* Discussion of scenarios and use cases where user-level multi-threading is beneficial for near-term quantum systems.
* A demonstration which shows that running two quantum kernels in parallel using \(N/2\)-threads for each kernel outperforms running the kernel one-by-one using \(N\)-threads, by factors of 1.22\(\times\) to 1.63\(\times\) for the evaluated kernels.
## II Motivation
This section highlights our motivation for enabling user-level multi-threading in quantum-classical computing by discussing potential parallelism in quantum-classical programs.
Let us use Shor's algorithm as a motivating example. In Algorithm 1, Shor is a quantum-classical task that invokes the period-finding quantum kernel (ShorKernel) to estimate exponent \(r\). Notice that Shor can be called multiple times until one or more (non-)trivial divisors are found or the entire search space is explored.
```
Require: \(N\): A natural number to be factorized.
Ensure: A non-trivial divisor(s) of \(N\).
1: procedure Main(N)
2:   repeat
3:     \(a \gets random(1,N)\);  \(\triangleright\) \(1<a<N\)
4:     \(K \gets gcd(a,N)\);
5:     if \(K==1\) then
6:       Shor(N, a);
7:     else
8:       return \(K\)
9:   until a divisor(s) is found or all explored
10: procedure Shor(N, a)
11:   foreach \(s=1,...,nShots\) do
12:     \(r_{s} \leftarrow \textsc{ShorKernel}(N,a)\)
13:   \(r \gets r_{1},...,r_{s}\)  \(\triangleright\) Estimate \(r\) from the measurements
14:   if \(r \bmod 2 \equiv 1\) or \(a^{r/2} \bmod N \equiv -1\) then
15:     return \(\phi\);
16:   else
17:     return \(gcd(a^{r/2}\pm 1,N)\);
```
**Algorithm 1** Shor's Algorithm (Pseudocode)
From the perspective of parallel processing, one possibility of parallelizing this algorithm is to run multiple instances of Shor in parallel. Furthermore, since it can require multiple shots to find \(r\), it would be also possible to further parallelize the shot loop in Shor (Line 11). Finally, if the ShorKernel is executed on a simulator, there is a massive amount of parallelism as in [9, 2, 10, 11]. Algorithm 2 is a pseudo-parallel version of Algorithm 1. As in the X10 language [12], **async** represents parallel task creation and execution and **foreach** represents parallel loop creation and execution.
Figure 2 graphically illustrates the potential parallelism in Shor's algorithm across these three levels. Based on what we discussed for Algorithm 2 and observe in Figure 2, we identify the following multiple levels of parallelism in quantum-classical programs:
**Task level parallelism:** multiple independent classical tasks that can include quantum kernels are executed in parallel.
**Shot level parallelism:** multiple independent shots are executed in parallel.
**Inner simulator level parallelism:** quantum simulators, including state vector and tensor network simulators such as [9, 2, 10, 11], are typically parallelized using OpenMP, CUDA, and the Eigen library to utilize a massive amount of parallelism on CPUs and/or GPUs.
It is worth noting that the actual amount of available parallelism depends not only on algorithms but also on the simulated or physical quantum back ends that are targeted. One example would be when a user executes their program on a current-day single QPU system in which there would be limited parallelism due to the lack of additional physical hardware. However, in most cases, we believe that allowing the user to specify all available parallelism for a quantum-classical task will greatly enhance the performance and expressiveness of quantum-classical programs because there are plenty of computing resources (CPUs, GPUs, and FPGAs) that can accelerate the development of quantum-classical algorithms even on conventional systems.
Thus, we believe that enabling user-level multi-threading in quantum-classical programming models will 1) accelerate the development of a quantum-classical algorithm, and 2) facilitate porting an existing heterogeneous algorithm to a quantum-classical one. It is also worth noting that the goal of this work is not optimizing and fine-tuning quantum-classical parallel programs for a specific target system. Instead, we look to motivate and introduce concrete parallel programming constructs (std::thread and std::async) for quantum-classical programming models.
## III QCOR
QCOR is a C++-based high-level quantum-classical programming model. One of the key features of QCOR is that the user can write both quantum and classical kernels and functions in the same code. This feature is not only analogous to existing heterogeneous programming models such as CUDA, OpenCL, and SYCL, but it also provides a new programming model for heterogeneous quantum-classical computing programs that realize hybrid quantum-classical workflows. As shown in the machine model in Figure 1, in theory, the user is free to leverage different kinds of processors (e.g., CPUs, GPUs, FPGAs, Quantum Devices) that could all be enabled through a QCOR-style programming model.
Listing 1 shows an example of a QCOR program that executes the Bell kernel. First, on Line 13, the qalloc API is called to allocate 2 qubits. Then, the kernel written in XASM is invoked on Line 15. Notice that the kernel is defined on Lines 3-10.
Fig. 2: Multi-level parallelism in a quantum-classical program (Shor’s algorithm).
After the kernel is invoked, the measurement results can be inspected by printing the content of the quantum register as shown on Line 17. An example output of the QCOR program can be found in Listing 2.
```
 1 using namespace std;
 2 // the bell kernel
 3 __qpu__ void bell(qreg q) {
 4   using qcor::xasm;
 5   H(q[0]);
 6   CX(q[0], q[1]);
 7   for (int i = 0; i < q.size(); i++) {
 8     Measure(q[i]);
 9   }
10 }
11 void foo() {
12   // Create a qubit register of size 2
13   auto q = qalloc(2);
14   // Run the quantum kernel
15   bell(q);
16   // dump the results
17   q.print();
18 }
19 int main(int argc, char **argv) {
20   thread t0(foo); thread t1(foo);
21   // Other classical/quantum work
22   ...
23   t0.join(); t1.join();
24 }
```
Listing 4: Simultaneously Launching two Bell kernels (std::thread)
## IV Design
### _Multi-threading Design Overview_
Since QCOR is primarily written in C++, we look to enable user-level multi-threading in QCOR in a way that is acceptable to both QCOR and C++ programmers. For QCOR programmers, our goal is to minimize modifications to the code required for enabling multi-threading. For C++ programmers, our goal is to provide a threading interface that is natural to use. To that end, we leverage C++'s standard threading constructs (std::thread and std::async). However, in terms of general applicability, our discussions should apply to other parallel programming systems for C++, such as OpenMP [13], Kokkos [14], and RAJA [15].
Our current focus is on enabling coarse-grain parallelism to exploit the full capability of a CPU-QPU system. In one scenario, the user would like a one-to-one relation between a CPU and a QPU to simultaneously perform \(N\) independent tasks, where \(N\) is the number of CPU-QPU pairs. Another scenario might be a one-to-many/many-to-one relation between CPU(s) and QPU(s). It is worth noting that the QPU part is not necessarily a hardware QPU device. Since QCOR offers different backends, the QPU part can be a quantum circuit simulation on either a local machine or a cloud service and can also incorporate coarser tasks such as VQE.
### _User-Facing API_
#### IV-B1 std::thread
Listing 4 shows an example where two threads simultaneously run the Bell kernel using std::thread. The main function creates two threads (t0 and t1), each of which executes the foo function. The foo function first allocates 2 qubits using qalloc, then invokes the kernel written in XASM (Lines 3-10), and finally retrieves the results. This approach enables the user to overlap other work on the main thread with the two threads. Also, the main function can wait on each thread by calling join().
#### IV-B2 std::async
Another example (Listing 5) is asynchronous execution where the main function asynchronously launches the foo() function with async. Similar to the thread example, the user may want to overlap other work with the launched task. However, one interesting difference is that async returns a future object, which helps the user to check the status of the asynchronously launched task and take further action depending on the return value of the task (get()).
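Listing 5 itself is not reproduced here; the following is a minimal sketch of what such an asynchronous launch can look like, reusing the foo() function from Listing 4 (the structure is illustrative and may differ from the exact Listing 5 code).

```
#include <future>

// foo() is the same host function as in Listing 4.
int main(int argc, char **argv) {
  // Launch foo asynchronously; std::async returns a std::future.
  auto f0 = std::async(std::launch::async, foo);
  auto f1 = std::async(std::launch::async, foo);
  // ... other classical/quantum work on the main thread ...
  // get() blocks until the corresponding task has completed
  // (foo returns void here, so get() is used purely for synchronization).
  f0.get();
  f1.get();
  return 0;
}
```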
### _Enabling Thread Safety_
Thread safety is usually attributed to a function/routine that can be safely invoked by multiple threads simultaneously. It is very common that thread safety is guaranteed in conventional heterogeneous programming models such as CUDA, OpenCL, and SYCL. For example, the SYCL specification [16] describes this in the following manner: "_SYCL guarantees that all the member functions and special member functions of the SYCL classes described are thread safe_."
It is worth noting that enabling thread safety does not necessarily mean improving performance because it essentially prevents multiple threads from simultaneously accessing shared data. In this work, our first priority is to enable thread safety for QCOR's user-facing API. For portions where parallelization is important, we explore the possibility of increasing parallelism in Section V.
```
1 std::mutex qalloc_mutex;  // mutex object in the global scope
2 qreg qalloc(const int size) {
3   // critical section protecting the shared buffer map
4   std::lock_guard<std::mutex> lock(qalloc_mutex);
5   ...  // non-thread-safe call (e.g., allocated_buffers.insert())
6 }
```
Listing 6: Making qalloc() thread-safe with Mutex Lock
```
1 namespace xacc {
2 namespace internal_compiler {
3 // global variable
4 std::shared_ptr<Accelerator> qpu = nullptr;
5 ...
6 }}
```
Listing 7: How a QPU instance is declared and created
## V Implementation

This section discusses how we enable user-level multi-threading in QCOR and XACC.
Since the QCOR and XACC systems include over 200K lines of code written in modern C++, we focus on discussing a few common cases that can possibly inhibit user-level multi-threading. Essentially, these cases are focused on identifying potential sources of data races when multi-threading is added.
### _Identifying sources of data races_
#### V-A1 Global Variables
Global variables are the most common source of data races because these variables can be accessed simultaneously by multiple threads. The following is a global std::map object (allocated_buffers) that is used to implement qalloc().
```
1 namespace xacc {
3 map<string, shared_ptr<AcceleratorBuffer>>
4     allocated_buffers{};
```
Because qalloc() internally invokes map's insert(), which is not thread-safe, concurrent invocations of qalloc() can be problematic.
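As a generic illustration of the hazard (plain C++ containers, not the actual XACC code), two threads inserting into the same unsynchronized map constitute a data race:

```
#include <map>
#include <string>
#include <thread>

std::map<std::string, int> allocated;  // shared, unsynchronized

void allocate(const std::string &name) {
  // Data race: std::map::insert must not be called concurrently on the
  // same map object without external locking.
  allocated.insert({name, 2});
}

int main() {
  std::thread t0(allocate, "q0");
  std::thread t1(allocate, "q1");
  t0.join();
  t1.join();
}
```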
#### V-A2 Services
QCOR depends on different software components provided by QCOR itself and XACC. Typically, xacc::getService<T>(...) is used to obtain a shared pointer to a specific service, namely T in this example. For services that do not derive from xacc::Cloneable, xacc::getService<T>(...) always returns a pointer to the same instance, which can be another source of a data race. In the following example, a pointer to the qpp accelerator (the Quantum++ [2] software simulator in QCOR/XACC that is used to run the Bell kernel in Listing 4 and Listing 5) is stored into acc.
```
1 shared_ptr<Accelerator> acc;  // a local variable
2 acc = xacc::getService<Accelerator>("qpp", ...);
```
Because Accelerator is not Cloneable, getService<Accelerator>(...) always returns the same qpp instance. This can cause a data collision since multiple threads can simultaneously register their gates to the same accelerator and can thus end up simulating an erroneous circuit.
### _Implementation Details_
In general, we pursue the following two approaches to remove data races that inhibit multi-threading in QCOR and XACC: 1) enabling thread safety and 2) increasing parallelism. The former goal is achieved by adding safety to multi-threaded execution with mutex locks. The latter approach explores the possibility of leveraging multi-threading to accelerate user programs.
#### V-B1 Enabling thread-safety
For enabling thread-safety, we leverage std::mutex or std::recursive_mutex to enable mutual exclusions. For example, Listing 6 shows qalloc(), which has a non-thread-safe call in Line 5. We first create a mutex object in the global scope, and then the object is used to create a critical section with std::lock_guard.
#### V-B2 Increasing Parallelism
For increasing parallelism, we use a quantum accelerator object (qpu) as a motivating example. In the original implementation, as shown in Listing 7, the qpu object is declared as a global variable and is initialized by calling xacc::getAccelerator(), which internally calls xacc::getService<Accelerator>(). Thus, this example includes the two data race scenarios discussed above in Section V-A.
We remove the data races by i) making Accelerator cloneable, so that a different instance is created every time xacc::getAccelerator() is called, and ii) providing a map from the current thread ID to the corresponding accelerator object; the component that maintains this map is called QPUManager.
Listing 8 shows a brief overview of QPUManager. QPUManager is implemented by using the singleton pattern and contains the setter and getter functions. The setter function takes the return variable of xacc::getAccelerator() and registers the accelerator instance along with a current thread id to the map. Similarly, the getter function returns a qpu instance that corresponds to a current thread.
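Listing 8 is not reproduced here; the following sketch conveys the idea of a thread-id-keyed singleton as described above. The class layout and member names are illustrative (only the xacc::Accelerator type is assumed to be available from the XACC headers) and do not represent the exact QCOR implementation.

```
#include <map>
#include <memory>
#include <mutex>
#include <thread>

class QPUManager {
public:
  static QPUManager &getInstance() {
    static QPUManager instance;  // singleton
    return instance;
  }
  // Setter: register the accelerator created for the calling thread.
  void setQPU(std::shared_ptr<xacc::Accelerator> qpu) {
    std::lock_guard<std::mutex> lock(m_mutex);
    m_qpus[std::this_thread::get_id()] = std::move(qpu);
  }
  // Getter: return the accelerator that belongs to the calling thread.
  std::shared_ptr<xacc::Accelerator> getQPU() {
    std::lock_guard<std::mutex> lock(m_mutex);
    return m_qpus[std::this_thread::get_id()];
  }
private:
  QPUManager() = default;
  std::mutex m_mutex;
  std::map<std::thread::id, std::shared_ptr<xacc::Accelerator>> m_qpus;
};
```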
### _Current Implementation Status_
We have implemented these changes to enable thread-safety for QCOR and have created a pull request against the QCOR [17], QCOR_SPEC [18], and XACC [19] repositories.
For increasing multi-threaded parallelism, we have confirmed that the examples (Listing 4 and Listing 5) and Shor's kernel work in a parallel fashion, and we plan to create another pull request to share that functionality.
One small limitation of our implementation is that the user needs to manually call the quantum::initialize() API at the beginning of each thread so the runtime can register its thread ID with the QPUManager. In the future, we plan to create a compiler pass that automatically inserts this API call. Alternatively, we could provide qcor::thread and qcor::async wrappers for the original C++ constructs that internally call this initialization function.
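A possible shape for such a wrapper (purely hypothetical, not part of the current QCOR API) is a thin adapter that performs the per-thread initialization before invoking the user's function; the initialization call is shown without arguments, mirroring the text above, and the real API may differ.

```
#include <thread>

namespace qcor {
// Hypothetical wrapper: behaves like std::thread, but calls the per-thread
// initialization described above before the user function runs.
template <typename F, typename... Args>
std::thread thread(F f, Args... args) {
  return std::thread([f, args...]() {
    quantum::initialize();  // register this thread with the QPUManager
    f(args...);
  });
}
}  // namespace qcor
```

Usage would then mirror Listing 4, e.g. qcor::thread t0(foo); ... t0.join();.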
## VI Preliminary Performance Evaluation
This section presents the results of an empirical evaluation of our extended QCOR programming model and runtime implementation on a single-node platform to demonstrate its performance benefits.
**Purposes:** The goal of our evaluation is two-fold:
1. to demonstrate that our extended QCOR programming model and runtime system with C++ threading model enables parallel quantum kernel execution.
2. to demonstrate that enabling parallel quantum kernel execution is beneficial in terms of performance.
**Platform:** We present the performance results on a single-node AMD server, which consists of a 12-core, 24-thread Ryzen9 3900X CPU running at 3.8GHz with 128GB of DRAM.
**Quantum Kernels:**
We use the following quantum kernels written in XASM:
1. **Bell Kernel:** The 2-qubit Bell kernel shown in Listing 4. The number of shots is 1024.
2. **Shor's Kernel:** The period-finding quantum kernel, which is based on [20]. The number of shots is 10.
**Experimental variants:** For kernel simulations, we use the QppAccelerator backend in QCOR, which uses the Quantum++ library [2].
We compare the following two variants in terms of performance:
1. **One-by-One (baseline, conventional):** Run the first kernel with \(N\)-threads and then run the second kernel with \(N\)-threads.
2. **Parallel:** Run the two kernels in parallel, each of which uses \(N/2\)-threads.
Note that each kernel is executed on multiple physical cores/threads even in the baseline version because Quantum++ uses OpenMP [13]. For both variants, we appropriately set the OMP_NUM_THREADS parameter to specify the number of threads per kernel. However, tuning this parameter for the best performance is beyond the scope of this paper. Instead, our goal is to study scenarios where running multiple quantum kernels simultaneously could lead to performance benefits. Finally, note that shot-level parallelism is not exploited in these versions.
### _Impact of Parallel Kernel Execution_
Figure 3 and Figure 4 show relative performance improvements over the baseline execution (one-by-one execution with 12 threads). In one-by-one execution, increasing the number of threads from 12 to 24 does not improve performance. In contrast, parallel execution of the two kernels enables further performance improvements, i.e., 1.63\(\times\) for the Bell kernel and 1.22\(\times\) for Shor's kernel. Based on an analysis of this kernel using AMD \(\mu\)Prof, we observe that increasing the number of threads increases L1 data cache-related performance counter numbers such as L1_DC_MISSES. L1 misses get significantly worse, particularly when increasing the number of threads from 12 to 24, which is why the parallel version with 12 threads per task is faster than the one-by-one version.
### _Strong Scalability Study_
Figure 5 shows strong scalability of two Shor's kernels with the one-by-one and the parallel approaches. The numbers are relative performance improvements over the single-threaded one-by-one execution. While both approaches show good scalability, the parallel version always outperforms the baseline.
Fig. 4: Shor’s Kernel: Shor(N=15, a=2) and Shor(N=15, a=7) from Algorithm 1
Fig. 3: Bell Kernel
## VII Discussion
As shown in Section VI, we demonstrated a scenario where running multiple kernels simultaneously is beneficial. The goal of this section is to summarize different application scenarios that we believe are good candidates for user-level multi-threading:
**Shor's algorithm**: As discussed in Section II, when factorizing \(N\) using Shor's algorithm we can create \(p\) parallel tasks, each with a random number \(a_{p}\) s.t. \(1<a_{p}<N\) and \(gcd(a_{p},N)=1\), and each task invokes Shor's kernel to estimate \(r_{p}\) and checks in parallel whether \(r_{p}\) is even and \(a^{r_{p}}\mod N\equiv 1\). Algorithm 2 summarizes the parallel algorithm, and Figure 4 shows that running two Shor's kernels in parallel outperforms one-by-one execution. We anticipate that the performance improvement will be more significant if CPUs with more cores and GPUs are used for simulating Shor's circuit.
**VQE**: VQE [21] optimizes a (Hamiltonian \(H\)) cost function over a parameterized manifold of quantum states \(|\psi(\vec{\theta})\rangle=U(\vec{\theta})|\psi_{0}\rangle\) as \(\underset{\vec{\theta}}{\min}\langle\psi(\vec{\theta})|H|\psi(\vec{\theta})\rangle\). For QMA-hard Hamiltonians, \(\text{dim}(\vec{\theta})\) is large but for many interesting models in physical sciences \(\text{dim}(\vec{\theta})\) may scale (sub-)polynomially, in which case the optimization problem at hand may still be quite challenging. The pleasantly parallel nature of the optimization process can be utilized with multiple asynchronous quantum kernel instances minimizing over \(\vec{\theta}\)-space.
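To illustrate this pattern, a sketch is given below; it assumes a user-supplied evaluate_energy(theta) that executes the parameterized kernel for a given parameter vector and returns the measured energy. The function name and types are placeholders, not QCOR API calls, and the batch is assumed to be non-empty.

```
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Assumed user routine: runs |psi(theta)> on the backend and returns
// the estimated energy <psi(theta)|H|psi(theta)>.
double evaluate_energy(const std::vector<double> &theta);

// Evaluate a (non-empty) batch of candidate parameter vectors concurrently;
// the classical optimizer can then keep the best point and propose a new batch.
std::size_t argmin_energy(const std::vector<std::vector<double>> &candidates) {
  std::vector<std::future<double>> futures;
  for (const auto &theta : candidates)
    futures.push_back(
        std::async(std::launch::async, evaluate_energy, std::cref(theta)));
  std::size_t best = 0;
  double best_e = futures[0].get();
  for (std::size_t i = 1; i < futures.size(); ++i) {
    double e = futures[i].get();
    if (e < best_e) { best_e = e; best = i; }
  }
  return best;
}
```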
**Asynchronous Quantum JIT Compilation**: Shi et al. [22] discusses a scenario where a GPU is used to compile and optimize quantum circuits, which can take several hours. With user-level multi-threading enabled, it is possible to avoid blocking computing resources by asynchronously offloading a compilation task onto a GPU and launching the compiled kernel on a QPU only when it is ready.
**Parallel Quantum-Classical Workflow**: As generalizations of different parallel execution scenarios discussed above, one can write an entire workflow in which different tasks run on different processing units including CPUs, QPUs, GPUs, and FPGAs.
## VIII Related Work
While domain-specific languages (DSLs) for quantum computing significantly facilitate the development of quantum algorithms, many DSLs only focus on the kernel part and do not provide a system-wide programming model. We believe that such a system-wide programming model will become more important in quantum-classical computing because exploiting classical parallelism such as thread-level parallelism can improve end-to-end performance as discussed in Section VI. Here, we briefly discuss existing programming models from the viewpoint of classical parallelism on non-quantum devices.
Qiskit [23] has been one of the most popular programming frameworks for quantum computing. However, it is not appropriate to directly map Qiskit programs to quantum-classical systems unless there is an AOT/JIT-level smart compiler that is aware of the underlying parallel hardware, because the Global Interpreter Lock (GIL) may hinder Python-level multi-threaded execution.
Q# is a programming language designed to express hybrid quantum-classical algorithms [24]. Currently, there is no way to express the concept of threads in the Q# language itself [25], nor in the Q# standard library [26]. Additionally, QIR (Quantum Intermediate Representation), a hybrid quantum-classical IR based on LLVM IR that is generated by the Q# compiler front-end, does not explicitly guarantee thread-safety for any runtime functions [27]. Indeed, the reference QIR runtime [28] may exhibit data races if used in multi-threaded code. It is worth noting that QParallel [29] allows the user to explicitly express parallelism in the quantum kernel part, not in the classical part.
Other newer platforms for hybrid quantum-classical computing have been proposed like NVIDIA's QODA [30], which is designed for the simulation of quantum circuits with GPUs and QPUs. It is unclear what multi-threaded support model QODA uses as it is a proprietary product.
## IX Conclusions and Future Work
This paper explores the possibility of enabling user-level multi-threading in QCOR. We made enhancements to QCOR to support C++-based parallel and asynchronous execution of quantum kernels by 1) adding thread safety to QCOR API routines, and 2) increasing parallelism by removing data races that inhibit multi-threading.

Fig. 5: Scalability of the one-by-one and the parallel approaches: two Shor(N=7, a=2) from Algorithm 1
Our preliminary results with the Bell and Shor's algorithm kernels show that enabling user-level multi-threading gives us performance improvements over the conventional baseline version in which each kernel is still executed by multiple threads, but is executed one-by-one.
We believe this multi-threading design for heterogeneous quantum-classical programming models will open up an opportunity for rapidly prototyping and developing quantum-classical programs on conventional systems in the short-term. At the same time, we envision that this initial design would be a good starting point for longer-term explorations of heterogeneous programming systems for future quantum-classical systems.
In future work, we plan to run other quantum-classical tasks, such as VQE, with additional quantum simulation and physical back ends and also use different back ends to demonstrate where user-level multi-threading is most beneficial.
## Acknowledgement
We acknowledge DOE ASCR funding under the Quantum Computing Application Teams program, FWP number ERKJ347. We also acknowledge support for this work from NSF planning grant #2016666, "Enabling Quantum Computer Science and Engineering".
|
2304.03783 | New parametrization of the dark-energy equation of state with a single
parameter | We propose a novel dark-energy equation-of-state parametrization, with a
single parameter $\eta$ that quantifies the deviation from $\Lambda$CDM
cosmology. We first confront the scenario with various datasets, from Hubble
function (OHD), Pantheon, baryon acoustic oscillations (BAO), and their joint
observations, and we show that $\eta$ has a preference for a non-zero value,
namely a deviation from $\Lambda$CDM cosmology is favored, although the zero
value is marginally inside the 1$\sigma$ confidence level. However, we find
that the present Hubble function value acquires a higher value, namely $ H_0=
66.624^{+0.011}_{-0.013}~Km~ s^{-1} Mpc^{-1} $, which implies that the $H_0$
tension can be partially alleviated. Additionally, we perform a cosmographic
analysis, showing that the universe transits from deceleration to acceleration
in the recent cosmological past, nevertheless, in the future, it will not
result in a de Sitter phase, since it exhibits a second transition from
acceleration to deceleration. Finally, we perform the Statefinder analysis. The
scenario behaves similarly to the $ \Lambda$CDM paradigm at high redshifts,
while the deviation becomes significant at late and recent times and especially
in the future. | J. K. Singh, Preeti Singh, Emmanuel N. Saridakis, Shynaray Myrzakul, Harshna Balhara | 2023-04-07T12:06:41Z | http://arxiv.org/abs/2304.03783v3 | # New parametrization of the dark-energy equation of state with a single parameter
###### Abstract
We propose a novel dark-energy equation-of-state parametrization, with a single parameter \(\eta\) that quantifies the deviation from \(\Lambda\)CDM cosmology. We first confront the scenario with various datasets, from Hubble function (OHD), Supernovae Type Ia (SNIa), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB) observations, and we show that \(\eta\) has a preference to a non-zero value, namely a deviation from \(\Lambda\)CDM cosmology is favoured, although the zero value is marginally inside the \(1\sigma\) confidence level. However, we find that the present Hubble function value acquires a higher value, namely \(H_{0}=70.339^{+1.789}_{-1.761}\), which implies that the \(H_{0}\) tension can be partially alleviated. Additionally, we perform a cosmographic analysis, showing that the Universe transits from deceleration to acceleration in the recent cosmological past, nevertheless in the future it will not result to a de Sitter phase, since it exhibits a second transition from acceleration to deceleration. Finally, we perform statefinder and Om diagnostic analyses. The scenario behaves similarly to \(\Lambda\)CDM paradigm at high redshifts, while the deviation becomes significant at late and recent times and especially in the future.
pacs: 98.80.-k, 95.36.+x, 98.80.Es
## I Introduction
According to accumulating observations, the universe entered a period of accelerating expansion in the recent cosmological past. The first general class of explanations introduces new, exotic sectors in the universe content, collected under the umbrella term dark energy [1; 2], within the framework of general relativity. The second general class extends the underlying gravitational theory, in which case the cause of the acceleration is of gravitational origin [3; 4; 5; 6; 7].
Nevertheless, at the phenomenological level, both approaches can be quantified through the (effective) dark-energy equation-of-state parameter \(w_{x}\). Thus, introducing various parametrizations of \(w_{x}\) allows us to describe the universe evolution and confront it with observational datasets, in order to reveal the dark-energy features required for agreement. In particular, starting from the simple cosmological constant, a large number of parametrizations have been introduced in the literature, involving one parameter [8; 9] or two parameters, such as the Chevallier-Polarski-Linder (CPL) parametrization [10; 11], the Linear parametrization [12; 13; 14], the Logarithmic parametrization [15], the Jassal-Bagla-Padmanabhan (JBP) parametrization [16], the Barboza-Alcaniz (BA) parametrization [17], the \(H_{0}\) problem at low redshift [18; 19], etc. [20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47]. Additionally, note that one can impose the parametrization at the deceleration-parameter level [48; 49; 50], at the equation-of-state (EoS) parameter level [51], or even at the Hubble-parameter level [52; 53; 54; 55].
In the present manuscript we propose a novel dark-energy equation-of-state parametrization, with a single parameter \(\eta\) that quantifies the deviation from \(\Lambda\)CDM cosmology. Additionally, under this scenario, dark energy behaves like a cosmological constant at high redshifts, while the deviation becomes significant at low and recent redshifts, and especially in the future. Finally, for \(\eta=0\) we recover \(\Lambda\)CDM cosmology completely. As we will see, apart from being capable of fitting the data, the new parametrization can partially alleviate the \(H_{0}\) tension too, since it leads to an \(H_{0}\) value in between the Planck one and the one from direct measurements [56; 57].
The article is organized as follows. In Section II we present the novel dark-energy parametrization. Then in Section III we perform a detailed confrontation with observations, namely with Hubble function (OHD), Supernovae Type Ia (SNIa), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB) data. In Section IV we perform a cosmographic analysis and we apply the Statefinder and Om diagnostics. Finally, Section V is devoted to the conclusions. Lastly, the details of the various datasets and the corresponding fitting procedure are given in the Appendix.
## II New single-parameter equation-of-state parametrization
In this section we first briefly review the basic equations of any cosmological scenario, and then we introduce the new parametrization for the dark-energy equation of state, with just a single parameter. We consider the usual homogeneous and isotropic Friedmann-Robertson-Walker (FRW) metric
\[\mathrm{d}s^{2}=-\mathrm{d}t^{2}+a^{2}(t)\left[\frac{\mathrm{d}r^{2}}{1-Kr^{2}} +r^{2}\left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)\right], \tag{1}\]
with \(a(t)\) the scale factor and \(K\) the spatial-curvature parameter, (\(K=0,-1,+1\) for spatially flat, open and closed universe, respectively). Furthermore, we consider that the universe is filled with baryonic and dark matter, radiation as well as the effective dark-energy fluid. Hence, the Friedmann equations that determine the background evolution of the Universe are
\[H^{2}+\frac{K}{a^{2}} =\frac{8\pi G}{3}\rho_{tot}, \tag{2}\] \[2\dot{H}+3H^{2}+\frac{K}{a^{2}} =-8\pi G\,p_{tot}, \tag{3}\]
where \(G\) is the gravitational constant and \(H=\dot{a}/a\) the Hubble function, with dots marking time derivatives. The total energy density and pressure are thus given as \(\rho_{tot}=\rho_{r}+\rho_{b}+\rho_{c}+\rho_{x}\) and \(p_{tot}=p_{r}+p_{b}+p_{c}+p_{x}\), where the subscripts \(r,\ b,\ c,\ x\) stand respectively for radiation, baryon, cold dark matter and dark energy. As usual, and without loss of generality, we focus on the spatially flat case and therefore in the following we impose \(K=0\). Finally, assuming that the various sectors do not interact mutually we deduce that they are separately conserved, following the conservation equations
\[\dot{\rho}_{i}+3H(1+w_{i})\rho_{i}=0, \tag{4}\]
where \(i\in\{r,b,c,x\}\). In the above expression we have introduced the equation-of-state parameter of each fluid as \(p_{i}\equiv w_{i}\rho_{i}\).
We proceed by providing for completeness the evolution equations of the Universe at the perturbation level. In synchronous gauge, the perturbed metric reads as
\[ds^{2}=a^{2}(\eta)\left[-d\eta^{2}+(\delta_{ij}+h_{ij})dx^{i}dx^{j}\right], \tag{5}\]
with \(\eta\) the conformal time, and where \(\delta_{ij}\) and \(h_{ij}\) denote the unperturbed and the perturbed metric parts (with \(h=h_{j}^{j}\) the trace). Perturbing additionally the universe fluids and transforming to the Fourier space we finally extract [58; 59; 60]:
\[\delta_{i}^{\prime} =-(1+w_{i})\,\left(\theta_{i}+\frac{h^{\prime}}{2}\right)-3 \mathcal{H}\left(\frac{\delta p_{i}}{\delta\rho_{i}}-w_{i}\right)\delta_{i}- 9\mathcal{H}^{2}\left(\frac{\delta p_{i}}{\delta\rho_{i}}-c_{a,i}^{2}\right)( 1+w_{i})\frac{\theta_{i}}{k^{2}}, \tag{6}\] \[\theta_{i}^{\prime} =-\mathcal{H}\left(1-3\frac{\delta p_{i}}{\delta\rho_{i}} \right)\theta_{i}+\frac{\delta p_{i}/\delta\rho_{i}}{1+w_{i}}\,k^{2}\,\delta_ {i}-k^{2}\sigma_{i}, \tag{7}\]
with primes denoting conformal-time derivative and with \(\mathcal{H}=a^{\prime}/a\) the conformal Hubble function, and where \(k\) is the mode wavenumber. Moreover, \(\delta_{i}=\delta\rho_{i}/\rho_{i}\) stands for the overdensity of the \(i\)-th fluid, \(\theta_{i}\equiv ik^{j}v_{j}\) marks the divergence of the \(i\)-th fluid velocity, and \(\sigma_{i}\) is the corresponding anisotropic stress. Lastly, \(c_{a,i}^{2}=\dot{p}_{i}/\dot{\rho}_{i}\) is the adiabatic sound speed given as \(c_{a,i}^{2}=w_{i}-\frac{w_{i}^{\prime}}{3\mathcal{H}(1+w_{i})}\).
Let us now introduce the new dark-energy parametrization. As usual, knowing the equation of state of a fluid allows us to extract its time-evolution by solving the conservation equation (4). For radiation, we have \(w_{r}=1/3\) and thus we obtain \(\rho_{r}=\rho_{r_{0}}\,a^{-4}\) (setting the scale factor at present to 1), while for baryonic and dark matter we have \(w_{b}=w_{c}=0\), which leads to \(\rho_{b}=\rho_{b_{0}}\,a^{-3}\) and \(\rho_{c}=\rho_{c_{0}}\,a^{-3}\), where \(\rho_{i_{0}}\) stands for the present density value of the \(i\)-th fluid. Concerning the equation-of-state parameter of the dark-energy sector, since it is unknown, as we mentioned in the Introduction one can consider various parametrizations. Focusing on the barotropic-fluid sub-class, we consider that it is a function of time only, or equivalently of the scale factor \(a\), i.e. \(w_{x}(a)\). Hence, the solution of the dark-energy conservation equation leads to
\[\rho_{x}(a)=\rho_{x_{0}}\,a^{-3}\exp\left[-3\int_{1}^{a}\frac{w_{x}\left(a^{ \prime}\right)}{a^{\prime}}\,da^{\prime}\right]. \tag{8}\]
In this work we consider that the energy density of the dark-energy sector evolves as
\[\rho_{x}(a)=ce^{-\eta a}\tan(a^{\eta}), \tag{9}\]
where \(\eta\) is the single parameter, and \(c=\rho_{x_{0}}\frac{e^{\eta}}{\tan 1}\). Inserting this into (8) we easily extract the dark-energy equation-of-state parameter as
\[w_{x}(a)=-1+\frac{\eta a}{3}+\frac{\frac{\eta}{a^{\eta}}}{\sqrt{\left(1+\frac{1 }{a^{2\eta}}\right)\arctan\left(\frac{1}{a}\right)^{\eta}}}. \tag{10}\]
Hence, introducing for convenience the redshift \(z\) as the independent variable (where \(a^{-1}=1+z\)) the above relation becomes
\[w_{x}(z)=-1+\frac{\eta}{3(1+z)}+\frac{\eta(1+z)^{\eta}}{\sqrt{[1+(1+z)^{2\eta} ]\arctan(1+z)^{\eta}}}. \tag{11}\]
Relation (11) is the novel parametrization of the dark-energy equation-of-state parameter that we propose in this work. In the case \(\eta=0\) we recover \(\Lambda\)CDM concordance model, where \(w_{x}=-1\) and \(\rho_{x}=\rho_{x_{0}}=const\). However, in the general case the parameter \(\eta\) quantifies the deviation form \(\Lambda\)CDM scenario.
Inserting the above parametrization in the first Friedmann equation (2) we obtain
\[H=H_{0}\sqrt{\left[(\Omega_{b_{0}}+\Omega_{c_{0}})(1+z)^{3}+\Omega_{r_{0}}(1+ z)^{4}+\Omega_{x_{0}}e^{\frac{\eta z}{1+z}}\frac{\arctan(1+z)^{\eta}}{\arctan 1} \right]}, \tag{12}\]
with \(H_{0}\) the present value of the Hubble parameter, and where we have introduced the present values of the density parameters \(\Omega_{i_{0}}\equiv\frac{8\pi G}{3H^{2}}\rho_{i_{0}}\) (hence the present value of the total matter density parameter is \(\Omega_{m_{0}}\equiv\Omega_{b_{0}}+\Omega_{c_{0}}\)). This expression allows us to investigate the cosmological evolution in detail, and confront it with observational datasets.
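As a quick consistency check (reading \(\arctan(1+z)^{\eta}\) as \(\arctan[(1+z)^{\eta}]\), and using \(\arctan 1=\pi/4\)), the limit \(\eta\to 0\) of (12) recovers the \(\Lambda\)CDM expansion history,

\[H\big|_{\eta=0}=H_{0}\sqrt{\Omega_{m_{0}}(1+z)^{3}+\Omega_{r_{0}}(1+z)^{4}+\Omega_{x_{0}}\,\frac{\arctan 1}{\arctan 1}}=H_{0}\sqrt{\Omega_{m_{0}}(1+z)^{3}+\Omega_{r_{0}}(1+z)^{4}+\Omega_{x_{0}}},\]

since \(e^{\eta z/(1+z)}\to 1\) when \(\eta\to 0\); this makes explicit the statement that \(\eta=0\) corresponds to \(\Lambda\)CDM.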
Lastly, from parametrization (11) and the corresponding Hubble function (12), we can straightforwardly calculate various quantities. In particular, the deceleration parameter \(q=-1-\dot{H}H^{-2}\) is given by
\[q(z)=-1+\frac{\left[3(1+z)^{4}\Omega_{m_{0}}+\frac{4\eta\Omega_{x_{0}}(1+z)^{ 2}e^{\frac{\eta z}{1+z}}\arctan(1+z)^{\eta-1}}{\pi(z^{2}+2z+2)}+\frac{4\eta \Omega_{x_{0}}e^{\frac{\eta z}{1+z}}\arctan(1+z)^{\eta}}{\pi}\right]}{2(1+z) \Big{[}(1+z)^{3}\Omega_{m_{0}}+\frac{4\Omega_{x_{0}}e^{\frac{\eta z}{1+z}} \arctan(1+z)^{\eta}}{\pi}\Big{]}}, \tag{13}\]
while the higher-order cosmographic parameters [61] read as
\[j =-q+2q(1+q)+(1+z)\frac{dq}{dz}, \tag{14}\] \[s =j-3j(1+q)-(1+z)\frac{dj}{dz},\] (15) \[l =s-4s(1+q)-(1+z)\frac{ds}{dz},\] (16) \[m =l-5l(1+q)-(1+z)\frac{dl}{dz}. \tag{17}\]
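Under the same reading, and neglecting radiation, the \(\eta\to 0\) limit of the deceleration parameter (13) reduces to the familiar \(\Lambda\)CDM expression,

\[q\big|_{\eta=0}=-1+\frac{3(1+z)^{3}\Omega_{m_{0}}}{2\left[(1+z)^{3}\Omega_{m_{0}}+\Omega_{x_{0}}\right]},\qquad q\big|_{\eta=0,\,z=0}=-1+\frac{3}{2}\Omega_{m_{0}},\]

which for \(\Omega_{m_{0}}\simeq 0.27\) gives \(q_{0}\simeq-0.60\), close to the best-fit values of Table 2, as expected for the small best-fit \(\eta\).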
Similarly, for the matter and dark-energy density parameters we obtain
\[\Omega_{m}(z)=\frac{1}{1+\frac{4\Omega_{x_{0}}e^{\frac{\eta z}{1+z}}\arctan(1+z)^{\eta}}{\pi\Omega_{m_{0}}(1+z)^{3}}}, \tag{18}\]
and
\[\Omega_{x}(z)=\frac{1}{1+\frac{\pi\,\Omega_{m_{0}}e^{\frac{-\eta z}{1+z}}[(1+z)^{3}\arctan(1+z)^{-\eta}]}{4\,\Omega_{x_{0}}}}. \tag{19}\]
## III Observational constraints
In the previous section we proposed a new parametrization for the dark-energy equation of state, given by (11), which has a single parameter, namely \(\eta\). In this section, we perform a detailed confrontation with various datasets, focusing on the bounds of \(\eta\). In particular, we will use data from: (i) Hubble function observations (OHD) with 77 data
points [62], (ii) Supernovae Type Ia (SNIa) observations from Union 2.1 compilation dataset [63] (iii) baryon acoustic oscillations (BAO), and (iv) cosmic microwave background (CMB). The details of the datasets and the corresponding methodology, are given in the Appendix.
Let us now present the constraints we obtain after applying the above formalism and datasets to the Friedmann equations at hand, focusing on the new model parameter \(\eta\). In Fig. 1 we present the likelihood contours in the \(\eta\)-\(H_{0}\) plane with \(1\sigma\), \(2\sigma\) and \(3\sigma\) confidence levels, around the best-fit values. Additionally, in Table 1 we summarize the obtained results, and we provide the corresponding \(\chi^{2}\). Finally, in Table 2 we summarize other cosmological parameters, such as the density parameters, the deceleration parameter, the equation-of-state parameters, and the transition redshift.
As we observe, the new model parameter \(\eta\) that quantifies the deviation from \(\Lambda\)CDM cosmology has a preference for a non-zero value, although zero is marginally inside the \(1\sigma\) confidence level. Concerning \(H_{0}\), we obtain a higher value compared to the \(\Lambda\)CDM scenario, although a bit lower than the direct measurements [56], which implies that the new dark-energy parametrization at hand can partially alleviate the \(H_{0}\) tension. This is an additional result of the present work.
## IV Cosmographic analysis and statefinder diagnostic
In this section, for completeness, we perform the cosmographic analysis of the cosmological scenario with the new dark-energy parametrization, and we apply the Statefinder diagnostic. For simplicity we neglect the radiation sector.
Figure 1: _The likelihood contours in the \(\eta\)-\(H_{0}\) plane, with \(1\sigma\), \(2\sigma\) and \(3\sigma\) confidence levels, for \(H(z)\) data (upper left), \(H(z)+SNIa\) data (upper right), for \(H(z)+SNIa+BAO\) data (lower left), and for the joint analysis of \(H(z)+SNIa+BAO+BAO/CMB\) data (lower right). The black dots represent the best-fit values._
Let us start with the deceleration parameter given in (13). Using the best-fit values of the model parameters given in Tables 1-2, we plot \(q(z)\) in Fig. 2. As we can see, we obtain the transition from deceleration to acceleration at the transition redshift \(z_{tr}\) in agreement with observations. However, it is interesting to note that the novel parametrization at hand will lead to a second transition in the future, at redshift \(z_{tr_{2}}\), from acceleration to deceleration (at around \(z_{tr_{2}}\approx-0.9\)).
We proceed to the examination of the other cosmographic parameters given in (14)-(17). In particular, we use the best-fit values of the model parameters given in Tables 1-2, and in Fig. 3 we present their evolution. Additionally, in Table 3 we summarize their values at present. Since in \(\Lambda\)CDM paradigm the value of the jerk parameter is equal to unity (\(j=1\)), the deviation from \(j=1\) quantifies the deviation of a dark-energy scenario from the concordance model. Again we find that the new proposed dark-energy parametrization behaves similarly to \(\Lambda\)CDM scenario at high redshifts, while the deviation becomes more significant at late and recent times, and especially in the future.
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Dataset & \(q\) & \(z_{tr}\) & \(w_{x_{0}}\) & \(\Omega_{m_{0}}\) & \(\Omega_{x_{0}}\) \\ \hline \(H(z)\) (77 points data) & \(-0.513^{+0.105}_{-0.112}\) & \(\simeq 0.927\) & \(-0.732^{+0.200}_{-0.008}\) & \(0.2757^{+0.0079}_{-0.0081}\) & \(0.7243^{+0.0081}_{-0.0079}\) \\ \(H(z)\) + \(SNIa\) & \(-0.591^{+0.067}_{-0.072}\) & \(\simeq 0.927\) & \(-0.874^{+0.125}_{-0.133}\) & \(0.2702^{+0.0048}_{-0.0051}\) & \(0.7298^{+0.0051}_{-0.0048}\) \\ \(H(z)\) + \(SNIa\) + \(BAO\) & \(-0.595^{+0.067}_{-0.069}\) & \(\simeq 0.927\) & \(-0.879^{+0.125}_{-0.126}\) & \(0.2670^{+0.0049}_{-0.0049}\) & \(0.7300^{+0.0049}_{-0.0049}\) \\ \(H(z)\) + \(SNIa\) + \(BAO/CMB\) & \(-0.590^{+0.066}_{-0.068}\) & \(\simeq 0.927\) & \(-0.871^{+0.126}_{-0.126}\) & \(0.2703^{+0.0048}_{-0.0049}\) & \(0.7297^{+0.0049}_{-0.0048}\) \\ \(H(z)\) + \(SNIa\) + \(BAO\) + \(BAO/CMB\) & \(-0.593^{+0.066}_{-0.069}\) & \(\simeq 0.927\) & \(-0.876^{+0.124}_{-0.126}\) & \(0.2701^{+0.0048}_{-0.0048}\) & \(0.7299^{+0.0048}_{-0.0048}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Summary of the constraints on the deceleration parameter \(q\), the transition redshift \(z_{tr}\), the current value of the dark-energy equation-of-state parameter \(w_{x_{0}}\), the total matter density parameter at present \(\Omega_{m_{0}}\equiv\Omega_{b_{0}}+\Omega_{c_{0}}\), and the dark-energy density parameter at present \(\Omega_{x_{0}}\).
\begin{table}
\begin{tabular}{l c c c} \hline \hline Dataset & \(\chi^{2}_{min}\) & \(H_{0}\) (km/s/Mpc) & \(\eta\) \\ \hline \(H(z)\) (77 points data) & 52.436 & \(70.057^{+1.777}_{-1.753}\) & \(0.217^{+0.387}_{-0.359}\) \\ \(H(z)\) + \(SNIa\) & 615.852 & \(70.334^{+1.784}_{-1.766}\) & \(0.102^{+0.233}_{-0.230}\) \\ \(H(z)\) + \(SNIa\)+ \(BAO\) & 631.801 & \(70.345^{+1.775}_{-1.775}\) & \(0.098^{+0.236}_{-0.230}\) \\ \(H(z)\) + \(SNIa\) + \(BAO\)/\(CMB\) & 620.722 & \(70.329^{+1.769}_{-1.771}\) & \(0.105^{+0.233}_{-0.229}\) \\ \(H(z)\) + \(SNIa\) + \(BAO\) + \(BAO\)/\(CMB\) & 636.675 & \(70.339^{+1.789}_{-1.761}\) & \(0.100^{+0.231}_{-0.228}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the observational constraints on the model parameter \(\eta\) and on the current Hubble function \(H_{0}\), from various datasets, alongside the corresponding \(\chi^{2}_{min}\).
Figure 2: _The evolution of the deceleration parameter \(q\) in terms of the redshift \(z\), using the best-fit values of the model parameters given in Tables 1-2. The red dots mark the present values._
Finally, the same features can be obtained from the evolution of the snap \(s\), lerk \(l\) and \(m\) parameters.
Let us now come to the statefinder diagnostic, which is based on higher derivatives of the scale factor [64; 65; 66]. In particular, one introduces a pair of geometrical parameters \(\{r,s^{*}\}\) in order to examine the dynamics of different dark-energy models [67; 68]. The pair of parameters \(\{r,s^{*}\}\) are defined as:
\[r=\frac{\dddot{a}}{aH^{3}},\ \ s^{*}=\frac{r-1}{3(q-\frac{1}{2})}, \tag{20}\]
with \(q\neq\frac{1}{2}\). For our parametrization (8), the expression for \(r\) is found to be
\[r=\frac{r_{1}+r_{2}+r_{3}}{(1+z)^{2}[2+z(2+z)]^{2}\tan^{-1}(1+z)^{2}\left[\pi(1+z)^{3}\Omega_{m_{0}}+4e^{\frac{\eta z}{1+z}}\Omega_{x_{0}}\tan^{-1}(1+z)^{\eta}\right]}, \tag{21}\]
Figure 3: _The evolution of the cosmographic parameters \(j,s,l,m\) given in (14)-(17), in terms of the redshift \(z\), using the best-fit values of the model parameters given in Tables 1-2._
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Dataset & \(j\) & \(s\) & \(l\) & \(m\) & \(r\) & \(s^{*}\) \\ \hline \(H(z)\) (77 points data) & \(0.524^{+0.341}_{-0.269}\) & \(-0.878^{+0.563}_{-0.210}\) & \(1.831^{+0.086}_{-0.686}\) & \(-8.417^{+1.368}_{-1.150}\) & \(0.549^{+0.341}_{-0.269}\) & \(0.148^{+0.124}_{-0.123}\) \\ \(H(z)\) + \(SNIa\) & \(0.762^{+0.238}_{-0.200}\) & \(-0.509^{+0.489}_{-0.309}\) & \(1.811^{+0.483}_{-0.041}\) & \(-7.919^{+1.703}_{-0.653}\) & \(0.775^{+0.238}_{-0.200}\) & \(0.069^{+0.074}_{-0.076}\) \\ \(H(z)\) + \(SNIa\) + \(BAO\) & \(0.772^{+0.227}_{-0.201}\) & \(-0.491^{+0.468}_{-0.315}\) & \(1.823^{+0.470}_{-0.024}\) & \(-7.882^{+1.644}_{-0.674}\) & \(0.784^{+0.227}_{-0.201}\) & \(0.066^{+0.074}_{-0.073}\) \\ \(H(z)\) + \(SNIa\) + \(BAO/CMB\) & \(0.770^{+0.225}_{-0.198}\) & \(-0.518^{+0.457}_{-0.301}\) & \(1.806^{+0.431}_{-0.048}\) & \(-7.937^{+1.544}_{-0.639}\) & \(0.770^{+0.225}_{-0.199}\) & \(0.070^{+0.073}_{-0.074}\) \\ \(H(z)\) + \(SNIa\) + \(BAO\) + \(BAO/CMB\) & \(0.767^{+0.226}_{-0.109}\) & \(-0.501^{+0.462}_{-0.310}\) & \(1.816^{+0.454}_{-0.032}\) & \(-7.902^{+1.600}_{-0.659}\) & \(0.779^{+0.225}_{-0.199}\) & \(0.067^{+0.073}_{-0.073}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Summary of the constraints on the present values of the cosmographic parameters, namely jerk \(j\), snap \(s\), lerk \(l\) and \(m\) parameters, as well as on the present values of the statefinder diagnostic parameters \(r\) and \(s^{*}\).
where
\[r_{1} = \pi(1+z)^{5}[2+z(2+z)]^{2}\Omega_{m_{0}}\tan^{-1}(1+z)^{2}+2e^{\frac{\eta z}{1+z}}(1+z)^{4}(\eta-1)\eta\Omega_{x_{0}}\tan^{-1}(1+z)^{\eta},\] \[r_{2} = 4e^{\frac{\eta z}{1+z}}\eta\left\{2\eta-3+z[2\eta-7+z(\eta-2z-6)]\right\}\Omega_{x_{0}}\tan^{-1}(1+z)^{1+\eta},\] \[r_{3} = 2e^{\frac{\eta z}{1+z}}[2+z(2+z)]^{2}[2(1+z)^{2}-4(1+z)\eta+\eta^{2}]\Omega_{x_{0}}\tan^{-1}(1+z)^{2+\eta}. \tag{22}\]
Finally, the expression for \(s^{*}\) is obtained using (13) and (21).
In Fig. 4 we present trajectories for the different observational datasets in the \(q-r\) plane. As we can see, all trajectories start in the decelerating zone and enter the accelerating zone, behaving close to \(\Lambda\)CDM at present time, while in the far future they converge to the CDM model without a cosmological constant (namely the \(SCDM\) model) without reaching the de Sitter (\(dS\)) phase. The present values of the statefinder parameters \(\{r,s^{*}\}\) are also given in Table 3.
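For orientation (these are standard reference values, not specific to the present model), the fixed points mentioned above correspond in the \(q-r\) plane to

\[\Lambda\text{CDM}:\ r=1\ \ (\text{with }s^{*}=0),\qquad SCDM:\ (q,r)=(\tfrac{1}{2},1),\qquad dS:\ (q,r)=(-1,1),\]

so the departure of \(r\) from unity at late times in Table 3 directly quantifies the deviation from the concordance behaviour.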
Finally, we close this section with the examination of the diagnostic tool known as Om diagnostic [69, 70], which is employed to distinguish among dark-energy models by analyzing the behaviour of different trajectories of \(Om(z)\), given as
\[Om(z)=\frac{-1+\left[\frac{H(z)}{H_{0}}\right]^{2}}{-1+(1+z)^{3}}, \tag{23}\]
which in our case, using the Hubble evolution (12) becomes
\[Om(z)=\frac{-1+(1+z)^{3}\Omega_{m_{0}}+\frac{4e^{\frac{\eta z}{1+z}}\Omega_{x_{0}}\tan^{-1}(1+z)^{\eta}}{\pi}}{-1+(1+z)^{3}}. \tag{24}\]
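In particular, for \(\eta=0\) (with the same reading of the \(\arctan\) factor as above) the expression (24) reduces to a constant, which is the well-known \(\Lambda\)CDM signature of the Om diagnostic:

\[Om(z)\big|_{\eta=0}=\frac{-1+(1+z)^{3}\Omega_{m_{0}}+\Omega_{x_{0}}}{-1+(1+z)^{3}}=\frac{\Omega_{m_{0}}\left[(1+z)^{3}-1\right]}{(1+z)^{3}-1}=\Omega_{m_{0}},\]

using \(\Omega_{m_{0}}+\Omega_{x_{0}}=1\); any redshift dependence of \(Om(z)\) therefore signals the deviation induced by \(\eta\).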
As we can see, the new parametrization at hand behaves as \(\Lambda\)CDM at high redshifts, while the deviation appears at low redshifts and at present time.
## V Conclusions
In this work we proposed a novel dark-energy equation-of-state parametrization, with a single parameter \(\eta\) that quantifies the deviation from \(\Lambda\)CDM cosmology. Firstly, we confronted the scenario with various datasets, from Hubble function (OHD), Supernovae Type Ia (SNIa), baryon acoustic oscillations (BAO), and cosmic microwave background (CMB) observations, and we presented the corresponding likelihood contours. As we saw, the new model parameter \(\eta\) has a preference for a non-zero value, namely a deviation from \(\Lambda\)CDM cosmology is favoured, although the zero value is marginally inside the \(1\sigma\) confidence level. However, interestingly enough, we find that \(H_{0}\) acquires a higher value compared to the \(\Lambda\)CDM scenario, which implies that the new dark-energy parametrization at hand can partially alleviate the \(H_{0}\) tension.

Figure 4: _Statefinder diagnostic trajectories in the \(q-r\) plane, using the best-fit values of the model parameters given in Tables 1-2. The red dots mark the present values._
Additionally, we performed a cosmographic analysis, examining the cosmographic parameters, namely the deceleration \(q\), jerk \(j\), snap \(s\), lerk \(l\) and \(m\) parameters. As we showed, in the scenario at hand the Universe transits from deceleration to acceleration in the recent cosmological past; however, in the future it will not result in a de Sitter phase, since a second transition (at \(z_{tr_{2}}\approx-0.9\)) will lead from acceleration to deceleration. Additionally, we found that the scenario behaves similarly to the \(\Lambda\)CDM paradigm at high redshifts, while the deviation becomes significant at late and recent times (and thus the \(H_{0}\) tension is alleviated), and especially in the future.
Finally, we performed a statefinder diagnostic analysis, and we also applied the Om diagnostic. As we saw, all trajectories start in the decelerating zone and enter the accelerating zone, behaving close to \(\Lambda\)CDM at present time, while in the far future they converge to deceleration without reaching the de Sitter phase.
In summary, the new parametrization of the dark-energy equation of state with a single parameter is efficient in describing the data, and it can partially alleviate the \(H_{0}\) tension. Hence, it would be worthwhile to proceed to more detailed investigations, such as examining it at the perturbation level, in particular in relation to the \(\sigma_{8}\) tension. Such an analysis will be performed in a separate project.
## Acknowledgements
J. K. Singh wishes to thank M. Sami and S. G. Ghosh, for fruitful discussions.
## Appendix A Observational data
In this Appendix we present the observational datasets we use in our analysis, and we provide the relevant methodology and the corresponding \(\chi^{2}\).
### \(H(z)\) data
In the case of observational Hubble data (OHD) the corresponding \(\chi^{2}\) of the maximum likelihood analysis is given by
\[\chi^{2}_{OHD}(\eta,H_{0})=\sum_{i=1}^{77}\left[\frac{H_{th}(\eta,H_{0},z_{i})-H_{obs}(z_{i})}{\sigma_{H(z_{i})}}\right]^{2}, \tag{10}\]
where \(H(z_{i})\) is evaluated at redshift \(z_{i}\), while \(H_{th}\) and \(H_{obs}\) represent the theoretical and observed value, and \(\sigma_{H(z_{i})}\) is the standard deviation. The detailed \(H(z)\) data, namely the 77 points, are given in Table 4 below.
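As an implementation-level illustration, the \(\chi^{2}_{OHD}\) of the expression above can be evaluated with a few lines of code; the sketch below takes the model Hubble function as a callable (for instance (12) at fixed \(\eta\) and \(H_{0}\)) together with placeholder data arrays, and is not the actual analysis code used in this work.

```
#include <cstddef>
#include <functional>
#include <vector>

// Chi-square of the OHD expression: Hth is any model H(z) callable (e.g.
// Eq. (12) evaluated at fixed eta and H0); z, Hobs, sigma hold the data table.
double chi2_ohd(const std::function<double(double)> &Hth,
                const std::vector<double> &z,
                const std::vector<double> &Hobs,
                const std::vector<double> &sigma) {
  double chi2 = 0.0;
  for (std::size_t i = 0; i < z.size(); ++i) {
    const double d = (Hth(z[i]) - Hobs[i]) / sigma[i];
    chi2 += d * d;
  }
  return chi2;
}
```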
### Supernova Type Ia (SNIa) data
We use the latest Union 2.1 compilation dataset. In this case, we have
\[\chi^{2}_{SNIa}(\eta,H_{0})=\sum_{i=1}^{580}\left[\frac{\mu_{th}(\eta,H_{0},z_ {i})-\mu_{obs}(z_{i})}{\sigma_{\mu(z_{i})}}\right]^{2}, \tag{11}\]
where \(\mu_{th}\) and \(\mu_{obs}\) are the theoretical and observed distance modulus, and \(\sigma_{\mu(z_{i})}\) the standard deviation. The distance modulus \(\mu(z)\) is defined as
\[\mu(z)=m-M=5LogD_{l}(z)+\mu_{0}, \tag{12}\]
where \(m\) and \(M\) denote the apparent and absolute magnitudes. Additionally, the luminosity distance \(D_{l}(z)\) for flat Universe and the nuisance parameter \(\mu_{0}\) are given by
\[D_{l}(z)=(1+z)H_{0}\int_{0}^{z}\frac{1}{H(z^{*})}dz^{*}, \tag{13}\]
\begin{table}
\begin{tabular}{c c c c} \hline \hline \(z\) & \(H(z)\) (\(km/s/Mpc\)) & Method & Reference \\ \hline
\hline \hline \end{tabular}
\end{table}
Table 4: The 77 Hubble Parameter Data from \(H(z)\) measurements used in the present analysis in units of \(\rm km\,s^{-1}Mpc^{-1}\). Method (a) corresponds to Cosmic chronometric method, method (b) to \(BAO\) signal in galaxy distribution, and method (c) to \(BAO\) signal in \(Ly\alpha\) forest distribution alone, or cross-correlated with \(QSOs\).
and
\[\mu_{0}=5Log\Big{(}\frac{H_{0}^{-1}}{1Mpc}\Big{)}+25, \tag{10}\]
respectively.
### Baryon acoustic oscillations (BAO)
Concerning Baryon Acoustic Oscillations (BAO) we use the data from Sloan Digital Sky Survey (\(SDSS\)) [85], \(6dF\) Galaxy survey (\(6dFGS\)) [86], \(BOSSCMASS\)[87] and three parallel measurements from \(WiggleZ\) survey [88]. In BAO observations the distance redshift ratio \(d_{z}\) is
\[d_{z}=\frac{r_{s}(z^{*})}{D_{v}(z)}, \tag{11}\]
where \(z^{*}=1090\) is the redshift at the time of photon decoupling [89], and \(r_{s}(z^{*})\) is the corresponding comoving sound horizon [90]. The dilation scale defined by Eisenstein et al. [91] is
\[D_{v}(z)=\left[(1+z)^{2}\frac{d_{A}^{2}(z)z}{H(z)}\right]^{\frac{1}{3}}, \tag{12}\]
where \(d_{A}(z)\) is the angular diameter distance, which essentially is a geometric mean of two transverse and one radial direction. The value of \(\chi^{2}_{BAO}\) is given by [92]
\[\chi^{2}_{BAO}=A^{T}C^{-1}A, \tag{10}\]
where
\[A=\left[\begin{array}{c}\frac{d_{A}(z^{*})}{D_{v}(0.106)}-30.84 \\ \frac{d_{A}(z^{*})}{D_{v}(0.357)}-6.72\\ \frac{d_{A}(z^{*})}{D_{v}(0.44)}-8.41\\ \frac{d_{A}(z^{*})}{D_{v}(0.6)}-6.66\\ \frac{d_{A}(z^{*})}{D_{v}(0.73)}-5.43\end{array}\right],\]
and the inverse covariance matrix \(C^{-1}\) is
\[C^{-1}=\left[\begin{array}{cccc}0.52552&-0.03548&-0.07733&-0.00167&-0.00532& -0.00590\\ -0.03548&24.97066&-1.25461&-0.02704&-0.08633&-0.09579\\ -0.07733&-1.25461&82.92948&-0.05895&-0.18819&-0.20881\\ -0.00167&-0.02704&-0.05895&2.91150&-2.98873&1.43206\\ -0.00532&-0.08633&-0.18819&-2.98873&15.96834&-7.70636\\ -0.00590&-0.09579&-0.20881&1.43206&-7.70636&15.28135\end{array}\right],\]
approaching the correlation coefficients available in [92; 53; 93].
### Baryon acoustic oscillations and Cosmic microwave background
We use the combined \(BAO/CMB\) constraints which were derived by Giostri et al. [92], and we define the comoving sound horizon at the decoupling as
\[r_{s}(z^{*})=\frac{c}{\sqrt{3}}\int_{0}^{(1+z^{*})^{-1}}\frac{1}{ a^{2}H(a)\sqrt{1+3(\Omega_{b0}/\Omega_{\gamma 0})a}}da, \tag{11}\]
where \(\Omega_{b0}\) and \(\Omega_{\gamma 0}\) are the density parameters of baryon and photon at present, while \(z_{d}\approx 1020\) is the redshift at the drag epoch. Rather than including priors on \(\Omega_{b}h^{2}\) to break the degeneracy between \(\Omega_{b}h^{2}\) and the distance constraint, we can fit the ratio \(\frac{r_{s}(z_{d})}{D_{v}(z_{eff})}\), where \(r_{s}(z_{d})\) is the sound horizon at the baryon drag epoch \(z_{d}\). In principle, it will be less efficient if the fit is driven by the shape of the correlation function. During the fit we calculate \(r_{s}(z_{d})\) using the fitting formula of [94]. Finally, we define the acoustic scale \(l_{A}\) as
\[l_{A}=\pi\frac{d_{A}(z^{*})}{r_{s}(z^{*})}, \tag{12}\]
where
\[d_{A}(z^{*})=\int_{0}^{z^{*}}\frac{1}{H(z^{{}^{\prime}})}dz^{{}^{ \prime}} \tag{13}\]
is the comoving angular diameter distance, as well as the dilation scale as [95]
\[D_{v}(z)=\left[\frac{d_{A}^{2}(z)z}{H(z)}\right]^{\frac{1}{3}}. \tag{14}\]
Lastly, the value of \(\chi^{2}_{BAO/CMB}\) is written as [92]
\[\chi^{2}_{BAO/CMB}=B^{T}C^{-1}B, \tag{15}\]
where \(B\)
\[B=\left[\begin{array}{c}\frac{d_{A}(z^{*})}{D_{v}(0.106)}-30.95\\ \frac{d_{A}(z^{*})}{D_{v}(0.20)}-17.55\\ \frac{d_{A}(z^{*})}{D_{v}(0.35)}-10.11\\ \frac{d_{A}(z^{*})}{D_{v}(0.44)}-8.44\\ \frac{d_{A}(z^{*})}{D_{v}(0.60)}-6.69\\ \frac{d_{A}(z^{*})}{D_{v}(0.73)}-5.45\end{array}\right],\]
and the inverse covariance matrix \(C^{-1}\) is
\[C^{-1}=\left[\begin{array}{cccc}0.484350&-0.101383&-0.164945&-0.0305703&-0.09 7874&-0.106738\\ -0.101383&3.288200&-2.454970&-0.0787898&-0.252254&-0.275100\\ -0.164945&-2.454970&9.559160&-0.128187&-0.4104040&-0.447574\\ -0.0305703&-0.0787898&-0.128187&2.787280&-2.75632&1.164370\\ -0.097874&-0.252254&-0.410404&-2.756320&14.924500&-7.324410\\ -0.106738&-0.275100&-0.447574&1.164370&-7.324410&14.502200\end{array}\right],\]
approaching the correlation coefficients for the pair of measurements \(r_{s}/D_{v}\) at \(z=(0.2,0.35)\), \(z=(0.44,0.6)\) and \(z=(0.6,0.73)\)[96; 97] (the \(6dF\,Galaxy\,Survey\)[86] and the \(Wiggle\,Z\) team [96] have measured this quantity at \(z=0.106\), \(z=0.44\), \(z=0.60\) and \(z=0.73\), while Percival et al. [97] measured it at \(z=0.20\) and \(z=0.35\)).
### Joint analysis
In the case where some of the above datasets are used simultaneously, the corresponding \(\chi^{2}\) arises from the sum of the separate ones. In particular, we will use the combinations:
\[\chi^{2}_{HS}=\chi^{2}_{OHD}+\chi^{2}_{SNIa}, \tag{14}\]
\[\chi^{2}_{HSB}=\chi^{2}_{OHD}+\chi^{2}_{SNIa}+\chi^{2}_{BAO}, \tag{15}\]
\[\chi^{2}_{HSC}=\chi^{2}_{OHD}+\chi^{2}_{SNIa}+\chi^{2}_{BAO/CMB}, \tag{16}\]
and
\[\chi^{2}_{HSBC}=\chi^{2}_{OHD}+\chi^{2}_{SNIa}+\chi^{2}_{BAO}+\chi^{2}_{BAO/CMB}. \tag{17}\]
|
2304.12856 | Retinal Vessel Segmentation via a Multi-resolution Contextual Network
and Adversarial Learning | Timely and affordable computer-aided diagnosis of retinal diseases is pivotal
in precluding blindness. Accurate retinal vessel segmentation plays an
important role in disease progression and diagnosis of such vision-threatening
diseases. To this end, we propose a Multi-resolution Contextual Network
(MRC-Net) that addresses these issues by extracting multi-scale features to
learn contextual dependencies between semantically different features and using
bi-directional recurrent learning to model former-latter and latter-former
dependencies. Another key idea is training in adversarial settings for
foreground segmentation improvement through optimization of the region-based
scores. This novel strategy boosts the performance of the segmentation network
in terms of the Dice score (and correspondingly Jaccard index) while keeping
the number of trainable parameters comparatively low. We have evaluated our
method on three benchmark datasets, including DRIVE, STARE, and CHASE,
demonstrating its superior performance as compared with competitive approaches
elsewhere in the literature. | Tariq M. Khan, Syed S. Naqvi, Antonio Robles-Kelly, Imran Razzak | 2023-04-25T14:27:34Z | http://arxiv.org/abs/2304.12856v1 | # Retinal Vessel Segmentation via a Multi-resolution Contextual Network and Adversarial Learning
###### Abstract
Timely and affordable computer-aided diagnosis of retinal diseases is pivotal in precluding blindness. Accurate retinal vessel segmentation plays an important role in disease progression and diagnosis of such vision-threatening diseases. To this end, we propose a Multi-resolution Contextual Network (MRC-Net) that addresses these issues by extracting multi-scale features to learn contextual dependencies between semantically different features and using bi-directional recurrent learning to model former-latter and latter-former dependencies. Another key idea is training in adversarial settings for foreground segmentation improvement through optimization of the region-based scores. This novel strategy boosts the performance of the segmentation network in terms of the Dice score (and correspondingly Jaccard index) while keeping the number of trainable parameters comparatively low. We have evaluated our method on three benchmark datasets, including DRIVE, STARE, and CHASE, demonstrating its superior performance as compared with competitive approaches elsewhere in the literature.
keywords: Retinal vessel Segmentation, encoder-decoder, contextual network, adversarial learning, diabetic retinopathy. +
Footnote †: journal: Neural Networks
## 1 Introduction
Diabetic retinopathy (DR) [1] primarily affects the working-age population around the world [2; 3] and is the principal cause of blindness. Recent investigations reveal that a substantial number of patients face vision deterioration due to delayed follow-ups, referrals, and treatment [4; 5]. Computer-aided screening and diagnosis of such diseases is promising: it can augment deficient screening resources and help clinicians use their time effectively [6]. The role of retinal vascular morphology in the detection and progression of DR has been established by various studies [7; 8]. These studies suggest that measurements of vessel structure can be predictive of disease progression and can be utilized to determine disease severity [7].
The segmentation of the retinal vessels is an important and challenging step in the construction of an automated diagnosis system. The key signs of DR including hemorrhages and microaneurysms normally occur in the vessel surroundings, thus urging the need for robust vessel segmentation. Moreover, the topology of the vascular tree can be estimated by using convincing retinal vessel segmentation [9]. Physiological characteristics of the retinal vasculature (including shape, length, branching and diameter) have other important applications as well, such as identification, fundus image registration and categorization of retinal arteries and veins [9; 10].
Important challenges of vessel segmentation are intensity inhomogeneity, contrast issues and vessel characteristics variations. The presence of exudates, lesions, and hemorrhages can further complicate the task at hand. Many studies for automatic segmentation of vessels by means of computer vision with either supervised or unsupervised algorithms [11; 12; 13; 14; 15; 16; 17; 18] have been reported. Studies using deep learning architectures have, in particular, been found to be more effective than alternatives [19; 20; 21].
In the literature, numerous DL-based approaches with promising results have been proposed for retinal vessel segmentation [1; 11; 22; 23; 14]. For instance, U-Net [24] and its variants have proven to be useful for various medical image segmentation tasks. Gu _et al._[25] presented a vessel segmentation network that preserved spatial information by capturing high-level contextual information. In another approach, Yan _et al._[26] introduced a joint loss including both a pixel-wise and a segmentation-level cost in U-Net for improved segmentation. DEU-Net [27] incorporated a feature fusion module to capture semantic information. In [27], a context path with a multi-scale convolution block is used. DeepVessel [28] employed a multi-scale convolutional neural network (CNN) at multiple layers to learn a rich hierarchical representation. Moreover, a conditional random field was employed to capture pixel interactions.
Despite the efficacy of encoder-decoder network variants, these methods generally struggle to retain or restore feature information between the encoder and the decoder. This can generally be attributed to their feature extraction strategy at the encoder and feature fusion policy at the decoder. There have been attempts to preserve spatial information [25; 27] through effective feature extraction and contextual information. Moreover, standalone adversarial learning has been explored before in the literature [29] with a vanilla U-shaped generator. However, a systematic association of these design stages and their contribution to overall performance improvement still needs more exploration.
In this work, we introduce a Multi-resolution Contextual Network (MRC-Net) that extracts multi-scale features to preserve spatial information and to learn contextual dependencies between semantically different features. The computational requirements are kept in check through adversarial learning-based performance enhancement. To this end, to tackle the challenge of the adversarial nature of smartphone-based retinal image data, we incorporate a bi-directional recurrent block that captures former-to-latter and latter-to-former dependencies. By adding this block, the proposed method effectively captures both forward and backward contextual dependencies between semantically varying features. Multi-scale features are extracted to capture vessel information at all scales, ensuring special attention is paid to tiny vessels of degraded quality. To further boost the performance of the segmentation
network against the noisy nature of the data, while simultaneously preventing it from exploding in terms of trainable parameters, the training of the network is performed in an adversarial end-to-end setting. The general architecture of the proposed MRC-Net is designed to be shallow by carefully setting the number of encoder-decoder blocks according to the representational requirements of the features.
The contributions of this study are as follows:
* Multi-resolution feature extraction preserves rich information on vessels occurring at varying scales. Loss of information during the merging of information from multiple scales is prevented through residual learning, resulting in improved performance.
* Robust context-based feature fusion is introduced by capturing space-time relationships between encoder and decoder features that aid the network to recuperate information that may be lost during the convolution process.
* Adversarial learning is employed to improve the segmentation performance by explicit optimization of the Dice score of the output foreground maps.
* The proposed approach demonstrated competitive performance with only 0.9M parameters.
* A systematic assessment of the various design choices at network stages including feature extraction, feature fusion, loss function, and training strategy is presented for improved retinal vessel segmentation.
## 2 Multi-resolution Contextual Network (MRC-Net)
In this work, we propose an adversarial learning-based network with multi-scale contextual features for robust vessel segmentation. The architectural details, the loss function, and the training strategy are presented in detail in the following sections.
### Overall Architecture
The proposed architecture is a generative adversarial network (GAN) with generator and discriminator networks as shown in Figure 1 (a) and (b), respectively. During training, the generator network generates a map with probabilities
\begin{table}
\begin{tabular}{l c c} \hline \hline Network & \# of Parameters (M) & Model Size (MB) \\ \hline MobileNet-V3-small [30] & 2.5 & 11.0 \\ ERFNet [31] & 2.06 & 8.0 \\ MultiRes UNet [32] & 7.2 & – \\ VessNet [33] & 9.3 & 36.6 \\ MRC-Net & 0.9 & 3.85 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Computational requirements of the proposed MRC-Net with current state methods.
assigned to retinal vessels from an original fundus image. These maps range from 0 to 1 indicating the likelihood of a particular pixel belonging to a vessel. Given a fundus image and its corresponding vessel image, the job of the discriminator is to determine whether the vessel image corresponds to a hand-labelled vessel map or the generator output. The two networks compete in an adversarial manner during training to get better at their respective tasks. During inference, the trained generator network takes fundus images as input and generates retinal probability maps. The key idea behind the application of the GAN in this work is to make the generator more robust to the adversarial nature of the vessel segmentation problem without making it deeper and, hence, more computationally expensive. The generator network obtained through this setting is more robust as compared with those networks yielded via stand-alone training.
The proposed generator is an encoder-decoder fully convolutional network as shown in Figure 1 (a). The key ideas of the generator network are the following:
1. To achieve multi-scale feature extraction through the multi-resolution feature extraction module at the encoder
Figure 1: a) Block level diagram of the generator network; b) Illustration of the discriminator network.
as described in Section 2.2.
2. To achieve robust context-based feature fusion at the decoder stage as described in Section 2.3. Contextual learning has been employed in the past for improving performance [25; 27], however, in contrast, we devise contextual learning for robust feature fusion by exploring the time dependencies between features at different network stages. In turn, the network latently recovers the information lost due to the convolution process. Moreover, the proposed context-aware fusion is systematically evaluated with other key design stages including multi-resolution feature extraction and adversarial training.
Note that the two multi-resolution modules encompass all important scales thus capturing the required variations including the thickest and the thinnest vessel structures. As opposed to residual learning for bridging the semantic gap between encoder and decoder features, the multi-scale encoder features are fused with the decoder ones through respective bi-directional recurrent blocks.
The architecture of the discriminator network is presented in Figure 1 (b). In the discriminator network, the features are extracted from the tensor by nine successive convolutions followed by batch normalization and activation. The feature maps are downsampled by max-pooling and strided convolution operations with a \(2\times 2\) filter size. The dimensionality-reduced feature maps are passed through global average pooling followed by dense prediction. The output of the network is either of the two classes, i.e., a human-annotated vessel map or a machine-generated segmentation.
During adversarial training, we specifically optimize the Dice loss. Moreover, during multiple rounds of adversarial learning our objective is to maximize the Dice score and indirectly the Jaccard index. Specifically, we select the training round that maximizes the Dice or F1 score as
\[r^{*}=\arg\;\max_{r}\,F_{1}\left(\mathrm{GAN}(r)\right), \tag{1}\]
where \(r\) indexes the training rounds and \(F_{1}\) denotes the Dice/F1 score of the generator after round \(r\). Our choice of optimizing the region-based scores (Dice score, Jaccard index) originates from prior works that demonstrate the superiority of region-based measures as compared to AUC and AP measures for the evaluation of foreground maps [34].
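As an illustration of Eq. (1), the sketch below scores the generator on a validation split after each adversarial training round and keeps the weights that maximize the Dice/F1 score; `train_one_round` and `evaluate_dice` are placeholders for the actual training and validation routines rather than functions from our implementation.
```python
import numpy as np

def dice_score(pred, truth, eps=1e-7):
    """Dice / F1 score for binarised maps (values in {0, 1})."""
    pred = (np.asarray(pred) > 0.5).astype(float)
    truth = (np.asarray(truth) > 0.5).astype(float)
    inter = np.sum(pred * truth)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(truth) + eps)

def select_best_round(train_one_round, evaluate_dice, n_rounds):
    """Implements Eq. (1): keep the generator from the training round that
    maximises the validation Dice/F1 score.

    train_one_round(r) -> generator weights after round r   (placeholder)
    evaluate_dice(w)   -> mean validation Dice for weights w (placeholder)
    """
    best_dice, best_weights = -np.inf, None
    for r in range(n_rounds):
        weights = train_one_round(r)
        score = evaluate_dice(weights)
        if score > best_dice:
            best_dice, best_weights = score, weights
    return best_weights, best_dice
```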
### Multi-resolution Feature Extraction Module
The architectural detail of the multi-resolution feature extraction module is depicted in Figure 2. It can be observed that we employed 3\(\times\)3, 5\(\times\)5, and 7\(\times\)7 receptive fields to extract multi-scale contextual information. During our experiments, we found that these three receptive field sizes capture the wide variation of vessels in terms of scale, and going narrower or deeper does not affect the performance. The shortcut connection is introduced to ensure that residual information lost during the filtering process is preserved and fed to the later stages and to also ease the training process. The proposed multi-resolution feature extraction approach derives the idea of factorizing convolutions from the well-known work of Szegedy and colleagues [35]. This promotes parameter reduction as a result of weight sharing.
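A possible Keras realisation of such a block is sketched below: the 3\(\times\)3, 5\(\times\)5 and 7\(\times\)7 branches run in parallel, their outputs are concatenated, and a 1\(\times\)1 shortcut carries the residual information. The filter counts, the 1\(\times\)1 projection and the normalisation placement are illustrative assumptions rather than the exact published configuration; the larger kernels could equally be factorised into stacked 3\(\times\)3 convolutions in the spirit of [35].
```python
from tensorflow.keras import layers

def multi_resolution_block(x, filters=32):
    """Parallel 3x3, 5x5 and 7x7 convolutions capture vessels at different
    scales; their concatenation is merged with a 1x1 shortcut so that
    residual information lost during filtering is preserved."""
    b3 = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(filters, 5, padding="same", activation="relu")(x)
    b7 = layers.Conv2D(filters, 7, padding="same", activation="relu")(x)
    merged = layers.Concatenate()([b3, b5, b7])
    merged = layers.Conv2D(3 * filters, 1, padding="same")(merged)
    shortcut = layers.Conv2D(3 * filters, 1, padding="same")(x)  # match channels
    out = layers.Add()([merged, shortcut])
    out = layers.BatchNormalization()(out)
    return layers.Activation("relu")(out)
```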
### Robust Feature Fusion
Recall that an inherent problem in the design of encoder-decoder architectures is the semantic gap between the encoder and decoder features. There are a number of options to consider in this regard, including residual learning [35], dense learning [36], and recurrent learning [37]. Here we have opted to employ bi-directional recurrent learning to learn the dependencies between semantically lagging encoder and leading decoder features. The motivation for this resides in our architecture, which lends itself naturally to fusing encoder and decoder features. Thus, here we employ a two-dimensional bi-directional long-short-term memory convolutional layer at each encoder-decoder stage of the proposed generator network.
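The fusion step can be sketched as follows, assuming the encoder and decoder feature maps have already been brought to the same spatial size and channel count; the filter count and merge mode are illustrative choices rather than the exact published settings.
```python
import tensorflow as tf
from tensorflow.keras import layers

def bi_convlstm_fusion(enc_feat, dec_feat, filters=32):
    """Stack the (semantically lagging) encoder map and the (leading) decoder
    map along a pseudo-time axis and pass them through a bidirectional
    ConvLSTM, so that former-to-latter and latter-to-former dependencies are
    both modelled. Both inputs must share the same spatial size and channels."""
    seq = layers.Lambda(lambda t: tf.stack(t, axis=1))([enc_feat, dec_feat])
    fused = layers.Bidirectional(
        layers.ConvLSTM2D(filters, kernel_size=3, padding="same",
                          return_sequences=False),
        merge_mode="concat")(seq)
    return fused  # shape: (batch, H, W, 2 * filters)
```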
### Loss Function
Let \(G\) be the generator that performs a mapping of an image \(f\) to a vessel map \(v\), i.e., \(G:f\to v\). The discriminator \(D:(f,v)\to\{0,1\}\) maps the image and vessel map to a binary value as a classification task, where, following the convention of Eq. (2), 1 corresponds to a human-annotated \(v\) and 0 to a machine-generated one. Then the objective of the GAN can be treated as a minimax formulation with respect to the generator and the discriminator, given as:
\[\begin{split} L_{\mathrm{GAN}}(G,D)=&\mathbb{E}_{f, v\sim p_{d}(f,v)}[\mathrm{log}D(f,v)]+\\ &\mathbb{E}_{f\sim p_{d}(f)}\left[\mathrm{log}(1-D(f,G(f))) \right]\end{split} \tag{2}\]
For the segmentation task, the simplest objective is to penalize the distance between the ground truth \(y\) and the predicted output \(v\), i.e. the binary cross entropy
\[\begin{split} L_{\mathrm{SEG}_{1}}(G)=\mathbb{E}_{f,y\sim p_{d}(f,y)}-y&\,\mathrm{log}(G(f))-\\ &(1-y)\,\mathrm{log}(1-G(f))\end{split} \tag{3}\]
In order to evaluate the similarity between the predicted and ground truth maps, we formulate two more loss functions based on the employed similarity metric. The second loss is a sum of the binary cross entropy and the Dice loss, given as
\[L_{\mathrm{SEG}_{2}}(G)=L_{\mathrm{SEG}_{1}}(G)+L_{\mathrm{Dice}}, \tag{4}\]
where \(L_{\mathrm{Dice}}=1-\frac{2|y\bigcap v|}{|y|+|v|}\) is computed between the ground truth \(y\) and the prediction \(v\). By incorporating the Intersection over Union (IoU), or Jaccard, loss \(L_{\mathrm{Jaccard}}=1-\left(\frac{|y\bigcap v|}{|y|+|v|-|y\bigcap v|}\right)\), our third loss is formulated.
\[L_{\mathrm{SEG}}(G)=L_{\mathrm{SEG}_{2}}(G)+L_{\mathrm{Jaccard}} \tag{5}\]
Finally, the generator is trained against a composite objective that combines the adversarial and segmentation terms,
\[G^{*}=\arg\;\min_{G}\left[\max_{D}L_{\mathrm{GAN}}(G,D)+\beta L_{\mathrm{SEG}}(G)\right], \tag{6}\]
where \(\beta\) is a parameter that balances the two diverse objectives.
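A minimal TensorFlow/Keras-style sketch of these objectives is given below. It uses soft (probabilistic) relaxations of the Dice and Jaccard overlaps so that Eqs. (4)-(6) remain differentiable, and it only shows the generator side of the adversarial term; the exact reductions and weighting details of our implementation may differ.
```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-7):
    inter = tf.reduce_sum(y_true * y_pred)
    return 1.0 - (2.0 * inter + eps) / (
        tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) + eps)

def jaccard_loss(y_true, y_pred, eps=1e-7):
    inter = tf.reduce_sum(y_true * y_pred)
    union = tf.reduce_sum(y_true) + tf.reduce_sum(y_pred) - inter
    return 1.0 - (inter + eps) / (union + eps)

def seg_loss(y_true, y_pred):
    """L_SEG of Eq. (5): binary cross entropy + Dice + Jaccard terms,
    using soft (probabilistic) overlaps so the loss stays differentiable."""
    bce = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
    return bce + dice_loss(y_true, y_pred) + jaccard_loss(y_true, y_pred)

def generator_objective(disc_on_fake, y_true, y_pred, beta=10.0):
    """Generator side of Eq. (6): fool the discriminator plus beta * L_SEG
    (beta = 10 as in the experiments reported below)."""
    adv = tf.reduce_mean(tf.keras.losses.binary_crossentropy(
        tf.ones_like(disc_on_fake), disc_on_fake))
    return adv + beta * seg_loss(y_true, y_pred)
```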
### Datasets
The proposed network is evaluated using CHASE [38]1, DRIVE [39]2 and STARE [40]3 databases. The DRIVE dataset came from a screening program for diabetic retinopathy in the Netherlands. It includes a total of 40 colour images, 20 for training and 20 for testing with an image size of 584\(\times\)565 pixels. Only seven images show signs of mild early diabetic retinopathy. Manual annotations, generated by experts, are used as the ground truth. The STARE dataset consists of 20 colour retinal images with a FOV of \(35^{\circ}\) and a resolution of 700\(\times\)605 pixels. There are pathologies in ten of the twenty images. Two manual annotations are available as the gold standard. The annotation from the first expert is employed in this work.
Footnote 1: The dataset can found at [https://blogs.kingston.ac.uk/retinal/chasedb1/](https://blogs.kingston.ac.uk/retinal/chasedb1/)
Footnote 2: The dataset is widely available at [https://drive.grand-challenge.org](https://drive.grand-challenge.org)
Footnote 3: More information regarding the STARE project can be found at [https://cecas.clemson.edu/~ahoover/stare/](https://cecas.clemson.edu/~ahoover/stare/)
The CHASE dataset includes 28 colour images, where each image is captured with a \(30^{\circ}\) FOV centred at the optic disc and a resolution of 999\(\times\)960 pixels. As a ground truth, two different expert annotations are available. For our experiments, we use the first expert's segmentation ground truth. There are no separate training or testing sets in the CHASE dataset. The first 20 images were used for training, and the last eight images were used for testing.
Figure 2: Architectural overview of the multi-resolution feature extraction module.
## 3 Results and Discussion
In this section, we compare the performance of our method on widely available datasets with current state benchmark methods.
### Experimental Setup
The tests were performed on a computer system running Ubuntu 20.04.1 LTS and equipped with an Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz processor, 260GB of RAM, and a GeForce GTX1080TI GPU. For our implementation, the Keras 2.2.4 library was employed. The ADAM optimizer was used and the learning rate was initialized to \(2\times 10^{-5}\) with an exponential decay rate of 0.9. The images were resized to 640 pixels as a pre-processing step for all datasets. Z-score normalization and augmentation techniques, including contrast enhancement, random flipping, and random rotation between 1 and 360 degrees, were applied. The total number of training rounds \(r\) for the GAN was set to \(10^{5}\). The balancing parameter \(\beta\) in \(L_{\mathrm{COMP}}\) was fixed to 10 in our experiments so as to place more
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline
**Method** & **Acc** & **F1** & **Bacc** & **J** & **E** & **AUC** \\ \hline MRC-Net+Dice & 0.9693 & 0.8257 & 0.9071 & 0.7036 & 0.2964 & 0.9742 \\ MRC-Net+IoU & 0.9700 & 0.8285 & 0.9062 & 0.7077 & 0.2923 & 0.9761 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Effect of loss functions on segmentation performance.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline
**Method** & **Acc** & **F1** & **Bacc** & **J** & **E** & **AUC** \\ \hline MRC-Net without Multi-resolution+BConvLSTM & 0.9678 & 0.8160 & 0.8960 & 0.6895 & 0.3105 & 0.9747 \\ MRC-Net without Multi-resolution & 0.9680 & 0.8162 & 0.9015 & 0.6900 & 0.3100 & 0.9787 \\ MRC-Net without BConvLSTM & 0.9696 & 0.8204 & 0.9019 & 0.6958 & 0.3042 & **0.9847** \\ MRC-Net & **0.9698** & **0.8270** & **0.9044** & **0.7055** & **0.2945** & 0.9812 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Contribution of multi-resolution contextual feature extraction and bi-directional recurrent feature fusion.
\begin{table}
\begin{tabular}{l l c c c c c c} \hline \hline
**Dataset** & **Method** & **Acc** & **F1** & **Bacc** & **J** & **E** & **AUC** \\ \hline \multirow{3}{*}{DRIVE} & MRC-Net without Adversarial Training & 0.9701 & 0.8076 & 0.8966 & 0.7026 & 0.2974 & 0.9857 \\ & MRC-Net & 0.9698 & 0.8270 & 0.9044 & 0.7055 & 0.2945 & 0.9812 \\ \hline \multirow{3}{*}{STARE} & MRC-Net without Adversarial Training & 0.9738 & 0.8208 & 0.9000 & 0.7000 & 0.3000 & 0.9780 \\ & MRC-Net & 0.9747 & 0.8286 & 0.9032 & 0.7105 & 0.2895 & 0.9684 \\ \hline \multirow{3}{*}{CHASE} & MRC-Net without Adversarial Training & 0.9781 & 0.8269 & 0.9147 & 0.7067 & 0.2933 & 0.9852 \\ & MRC-Net & 0.9779 & 0.8548 & 0.9186 & 0.7466 & 0.2534 & 0.9850 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ablation study of the adversarial training strategy.
emphasis on the segmentation task. In \(L_{\text{SEG}_{2}}(G)\), the weight \(\gamma\) balancing the cross entropy and the Dice loss was set to 0.5. The total number of trainable parameters of the proposed generator network for an input size of 640\(\times\)640\(\times\)3 is \(\sim\) 0.9M.
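The pre-processing and augmentation pipeline can be summarised by the following sketch. The percentile-based contrast stretch is only a stand-in for the contrast enhancement actually used, and the interpolation orders are illustrative choices.
```python
import numpy as np
from scipy.ndimage import rotate
from skimage.transform import resize

def preprocess(image, size=640):
    """Resize to size x size, apply a simple percentile contrast stretch
    (stand-in for the contrast enhancement used here), then z-score normalise."""
    img = resize(image, (size, size), preserve_range=True, anti_aliasing=True)
    p_lo, p_hi = np.percentile(img, [2, 98])
    img = np.clip((img - p_lo) / (p_hi - p_lo + 1e-7), 0.0, 1.0)
    return (img - img.mean()) / (img.std() + 1e-7)

def augment(image, mask, rng=None):
    """Random horizontal flip and a random rotation between 1 and 360 degrees,
    applied identically to the fundus image and its vessel ground truth."""
    rng = rng or np.random.default_rng()
    if rng.random() < 0.5:
        image, mask = image[:, ::-1].copy(), mask[:, ::-1].copy()
    angle = rng.uniform(1.0, 360.0)
    image = rotate(image, angle, reshape=False, order=1, mode="reflect")
    mask = rotate(mask, angle, reshape=False, order=0, mode="reflect")
    return image, (mask > 0.5).astype(np.float32)
```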
### Evaluation Criteria
The segmentation maps for vessels consist of binary images, where each pixel is classified as either part of a vessel or the background. These maps are created using manually labelled ground truth data by expert ophthalmologists, who categorize each pixel in the image as either vascular or nonvascular. When evaluating the accuracy of vessel segmentation methods, there are four possible outcomes for each image: true positive (TP) when vascular pixels are correctly identified as such, true negative (TN) when nonvascular pixels are correctly classified, false positive (FP) when nonvascular pixels are wrongly identified as vascular, and false negative (FN) when vascular pixels are mistakenly categorized as nonvascular. Therefore, to assess the effectiveness of different approaches in the literature, five commonly used parameters, including Sensitivity, Specificity, Accuracy, F1, and AUC, are utilized for comparison.
The AUC, Matthews correlation coefficient (Mathews) [41], Jaccard index (J), overlapping error (E) and balanced accuracy (Bacc) are also considered as objective measures of the segmentation performance.
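All of these measures derive from the pixel-wise confusion counts, as in the sketch below (consistent with the tables reported later, the overlapping error is taken as \(E=1-J\); AUC is omitted since it requires the probabilistic output maps rather than binarised ones).
```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise metrics from binary prediction and ground-truth maps."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = float(np.sum(pred & truth))
    tn = float(np.sum(~pred & ~truth))
    fp = float(np.sum(pred & ~truth))
    fn = float(np.sum(~pred & truth))
    se = tp / (tp + fn)                       # sensitivity (recall)
    sp = tn / (tn + fp)                       # specificity
    acc = (tp + tn) / (tp + tn + fp + fn)     # accuracy
    f1 = 2.0 * tp / (2.0 * tp + fp + fn)      # F1 (equals the Dice score)
    j = tp / (tp + fp + fn)                   # Jaccard index
    mcc = (tp * tn - fp * fn) / np.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))  # Matthews coefficient
    return {"Se": se, "Sp": sp, "Acc": acc, "F1": f1,
            "Bacc": 0.5 * (se + sp), "J": j, "E": 1.0 - j, "Mathews": mcc}
```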
Figure 3: Segmentation results delivered by our MRC-Net method on representative test images, i.e. image numbers 1, 2, 4, 16, and 19 from the DRIVE dataset. From left to right, we show the input image, ground truth, and the results yielded by BCDUNet, MultiResUNet, SegNet, U-Net++, and our network. In the figure, false positives are shown in red, whereas blue pixels depict false negatives.
### Ablation Study
To investigate the contribution of the proposed multi-scale contextual feature extraction and the bi-directional recurrent feature fusion scheme to the overall performance of MRC-Net, we performed multiple experiments with architectural modifications. For the first experiment, we replaced the multi-resolution block with a simple convolutional layer and concatenated semantically different features directly. The resulting architecture was termed MRC-Net without Multi-resolution+BConvLSTM. The second architectural modification was termed MRC-Net without Multi-resolution, which included the bi-directional recurrent block to minimize the semantic gap between features, whereas the multi-resolution block was replaced by a convolutional layer. The third modification included the multi-resolution block while the bi-directional recurrent block was removed. Table 2 presents the results of the aforementioned experiments. It can be observed that the multi-resolution contextual feature block resulted in incremental improvement, whereas the robust feature fusion through a bi-directional recurrent block obtained more substantial improvements. The combined representational power of both modules yielded performance improvements of 1.31%, 0.94%, and 2.32% in terms of F1, Bacc, and J, respectively.
The segmentation accuracy of the generator network in our proposed method was enhanced by incorporating adversarial training. Table 3 displays a comparison of the segmentation performance achieved by the proposed method, with and without the application of adversarial training. The results demonstrate an improvement of 2.4%, 1%, and 3.37% in terms of F1 measure for DRIVE, STARE, and CHASE datasets in support of the adversarial training. Improvements of 1.5% and 5.65% in terms of J on STARE and CHASE databases were also obtained. Additionally,
Figure 4: Segmentation results delivered by our MRC-Net method on representative test images, i.e. image numbers 2, 3, 4, 5, and 7, from the STARE dataset. From left to right, we show the input image, ground truth, and the results yielded by BCDUNet, MultiResUNet, SegNet, U-Net++, and our network.
consistent improvements in Bacc and E clearly demonstrate the effectiveness of the proposed end-to-end adversarial learning strategy.
Table 4 presents the contribution of the segmentation losses to the proposed MRC-Net method when applied to the DRIVE dataset. In the table, we have denoted MRC-Net+Dice as the option where our MRC-Net is implemented solely with the Dice loss. The MRC-Net+IoU denotes the case where the loss employs only the Intersection over Union (IoU). From the table, it can be observed that by introducing the segmentation-aware loss functions based on the IoU, the segmentation performance in terms of F1 and Jaccard index (J) is improved. It is also noticeable that the introduction of the Dice to the loss improves the balanced accuracy (Bacc). Although the IoU loss resulted in improved performance on the DRIVE dataset, the Dice loss exhibited better overall generalization ability on all the datasets in our experiments.
It can be observed from Tables 2, 3 and 4 that our method is designed through systematic evaluation of its stages including feature extraction, feature fusion, loss function, and adversarial framework. The methodical improvements are evident from the ablation with adversarial learning contributing most to the performance improvement. From the ablation results, we note that the proposed method focuses more on boosting the overall F1 and Jaccard index (J)
Figure 5: Segmentation results delivered by our MRC-Net method on representative test images, i.e. image numbers 1, 2, 3, 5, and 7, from the CHASE dataset. From left to right, we show the input image, ground truth, and the results yielded by SegNet, VessSeg, and our network.
scores while compromising on other measures. This can be attributed to the proposed adversarial framework settings and reinforces our design choice as discussed in Section 2.1. The contributions of the proposed feature extraction and fusion schemes to the overall performance are evident from Table 2.
### Comparison and Experiments
Our method and other state-of-the-art techniques were evaluated on three different datasets, namely STARE, DRIVE, and CHASE_DB1. We first present a comprehensive comparison of various supervised and unsupervised methods. Subsequently, we compare our approach with U-Net [44] and SegNet [49] based methods. Our aim is to provide the reader with a detailed and self-contained comparison of our approach, while also demonstrating its performance against popular deep networks that are commonly used as benchmarks in the field. To ensure a fair comparison with the alternatives, here we have conducted all experiments using the code from the authors where available. Note that there is no single framework to evaluate the methods shown here and, hence, we have followed the tuning and parameter settings specified by the authors and used the same standard training and testing splits for all datasets across all the alternatives in our experiments.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{**Unsupervised methods**} \\ \hline
**Methods** & **Year** & **Se** & **Sp** & **Acc** & **AUC** \\ \hline
[13] & 2018 & 0.790 & 0.965 & 0.951 & - \\
[12] & 2019 & 0.7980 & 0.9732 & 0.9561 & - \\
[12] & 2019 & 0.7860 & 0.9725 & 0.9583 & - \\
[2] & 2019 & 0.8011 & 0.9694 & 0.9545 & - \\ \hline \hline \multicolumn{6}{c}{**Supervised methods**} \\ \hline
**Methods** & **Year** & **Se** & **Sp** & **Acc** & **AUC** \\ \hline
[42] & 2016 & 0.7726 & 0.9844 & 0.9628 & 0.9879 \\
[43] FC & 2017 & 0.7680 & 0.9738 & - & - \\
[43] UP & 2017 & 0.7692 & 0.9675 & - & - \\
[26] & 2018 & 0.7581 & 0.9846 & 0.9612 & 0.9801 \\
[44] U-Net & 2018 & 0.8270 & 0.9842 & 0.9690 & 0.9898 \\
[45] Deeplab v3++ & 2018 & 0.8320 & 0.9760 & 0.9650 & 0.9735 \\
[19] Strided U-Net & 2019 & 0.8010 & 0.9690 & 0.9610 & 0.9450 \\
[46] DEU-Net & 2019 & 0.8074 & 0.9821 & 0.9661 & 0.9812 \\
[47] Vessel-Net & 2019 & 0.8132 & 0.9814 & 0.9661 & 0.9860 \\
[33] Vessel & 2019 & 0.8526 & 0.9791 & 0.9697 & **0.9883** \\
[48] HAnet & 2020 & 0.8239 & 0.9813 & 0.9670 & 0.9871 \\
[32] MultiRes UNet & 2020 & 0.7126 & **0.9908** & 0.9703 & 0.9444 \\
[1] RCED-Net & 2020 & **0.8397** & 0.9792 & 0.9659 & 0.9810 \\ MRC-Net & 2020 & 0.8190 & 0.9874 & **0.9747** & 0.9706 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Performance of the proposed method in comparison to state-of-the-art approaches on the STARE database.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{**Unsupervised methods**} \\ \hline
**Methods** & **Year** & **Se** & **Sp** & **Acc** & **AUC** \\ \hline
[50] & 2015 & 0.7615 & 0.9575 & 0.9467 & 0.9623 \\
[51] & 2015 & 0.7585 & 0.9587 & 0.9387 & 0.9487 \\
[52] & 2016 & 0.7626 & 0.9661 & 0.9452 & 0.9606 \\ \hline \hline \multicolumn{6}{c}{**Supervised methods**} \\ \hline
**Methods** & **Year** & **Se** & **Sp** & **Acc** & **AUC** \\ \hline
[42] & 2016 & 0.7507 & 0.9793 & 0.9581 & 0.9716 \\
[43] FC & 2017 & 0.7277 & 0.9712 & - & - \\
[53] DUNet & 2019 & 0.7595 & 0.9878 & 0.9641 & 0.9832 \\
[33] VessNet & 2019 & 0.8206 & 0.9800 & 0.9726 & 0.9800 \\
[26] & 2018 & 0.7633 & 0.9809 & 0.9610 & 0.9781 \\
[54] D-Net & 2019 & 0.7839 & **0.9894** & 0.9721 & 0.9866 \\
[55] VGN & 2019 & **0.9463** & 0.9364 & 0.9373 & - \\
[48] HAnet & 2020 & 0.8186 & 0.9844 & 0.9673 & **0.9881** \\
[1] RCED-Net & 2020 & 0.8440 & 0.9810 & 0.9722 & 0.9830 \\ MRC-Net & 2020 & 0.8485 & 0.9887 & **0.9779** & 0.9857 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Quantitative results of our network in comparison to other approaches on the CHASE_DB1 dataset.
We begin by presenting Tables 7, 6, and 5, which provide quantitative results for our network, MRC-Net, and a number of alternative methods. As can be observed from the tables, our network consistently outperforms existing methods in terms of accuracy. Additionally, for the CHASE dataset, the proposed network's sensitivity, specificity, and AUC are also highly competitive. It consistently ranks among the top models evaluated, and it is important to mention that there is no discernible pattern among the other techniques regarding which achieves the best specificity, AUC, or sensitivity.
We now proceed to show a detailed comparison of our network with the results yielded by BCDUNet [56], MultiResUNet [32], SegNet [49] and U-Net++ [36]. These are based upon U-Net [44] and SegNet [49] and have shown recently to deliver state-of-the-art performance. We commence by showing qualitative results before focusing our attention on a more quantitative analysis. Figure 3 illustrates a visual comparison between our MRC-Net approach and other methods on the DRIVE database. It can be observed clearly from images 2 and 16 that MRC-Net delivers much fewer false positives on tiny vessels as compared with the state-of-the-art methods. Further, U-Net-based variants struggle on the boundary in image 4. SegNet seems to generate false tiny vessels in the majority of the images. The BCDUNet method tends to miss vessel information, which is robustly captured by the proposed MRC-Net method, while simultaneously suppressing false vessel information.
In Figure 4, a visual comparison of the STARE dataset is shown. It can be seen that the alternative methods generate a greater number of false positives, particularly around retinal boundaries, optic nerves, and small vessels, as evidenced by images 2 and 3. Conversely, MRC-Net exhibits greater robustness to these artifacts in these images. Similar results are observed when our method is applied to the CHASE dataset, as shown in Figure 5.
The quantitative comparison of the proposed MRC-Net method with the benchmark methods on the DRIVE, STARE, and CHASE databases is presented in Tables 8, 9 and 10, respectively. The quantitative results support the visual findings presented earlier. Our MRC-Net method outperforms the alternatives in terms of most measures on the STARE dataset. On the CHASE and DRIVE datasets, it also consistently outperforms all the other methods under consideration in all metrics except the specificity.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multicolumn{6}{c}{**Unsupervised methods**} \\ \hline
**Methods** & **Year** & **Se** & **Sp** & **Acc** & **AUC** \\ \hline
[51] & 2015 & 0.7655 & 0.9704 & 0.9442 & 0.9614 \\
[50] & 2015 & 0.7395 & 0.9782 & 0.9494 & 0.9672 \\
[57] & 2015 & 0.7246 & 0.979 & 0.9403 & N.A \\
[52] & 2016 & 0.7743 & 0.9725 & 0.9476 & 0.9636 \\ \hline \hline \multicolumn{6}{c}{**Supervised methods**} \\ \hline
**Methods** & **Year** & **Se** & **Sp** & **Acc** & **AUC** \\ \hline
[58] & 2014 & 0.7252 & 0.9798 & 0.9474 & 0.9648 \\
[42] & 2016 & 0.7569 & 0.9816 & 0.9527 & 0.9738 \\
[43] FC & 2016 & 0.7893 & 0.9792 & N.A & 0.9507 \\
[43] UP & 2017 & 0.7076 & 0.9870 & N.A & 0.9474 \\
[59] & 2017 & 0.7691 & 0.9801 & 0.9533 & 0.9744 \\
[26] & 2018 & 0.7653 & 0.9818 & 0.9542 & 0.9752 \\
[46] DEU-Net & 2019 & 0.7940 & 0.9816 & 0.9567 & 0.9772 \\
[47] Vessel-Net & 2019 & 0.8038 & 0.9802 & 0.9578 & 0.9821 \\
[33] Vessel & 2019 & 0.8022 & 0.9810 & 0.9655 & 0.9820 \\
[48] HAnet & 2020 & 0.7991 & 0.9813 & 0.9581 & 0.9823 \\
[60] & 2020 & 0.8038 & 0.9837 & 0.9578 & **0.9846** \\
[61] & 2020 & 0.6994 & 0.9811 & 0.945 & N.A \\
[32] MultiRes UNet & 2020 & 0.7928 & **0.9845** & 0.9677 & 0.9781 \\
[1] RCED-Net & 2020 & **0.8252** & 0.9787 & 0.9649 & 0.9780 \\ MRC-Net & 2020 & 0.8250 & 0.9837 & **0.9698** & 0.9825 \\ \hline \hline \end{tabular}
\end{table}
Table 7: The table presents a comprehensive performance evaluation of our network and several alternative methods on the DRIVE database.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
**Methods** & **Se** & **Sp** & **Acc** & **F1** & **Mathews** & **Dice** & **Bacc** & **J** & **E** & **AUC** \\ \hline BCDUNet [56] & 0.8192 & 0.9833 & 0.9689 & 0.8217 & 0.8048 & 0.8217 & 0.9013 & 0.6977 & 0.3023 & 0.9775 \\ MultiResUNet [32] & 0.79 & **0.9848** & 0.9678 & 0.8108 & 0.7936 & 0.8108 & 0.8874 & 0.6822 & 0.3178 & 0.9784 \\ SegNet [49] & 0.8246 & 0.9804 & 0.9667 & 0.8127 & 0.7946 & 0.8127 & 0.9025 & 0.6847 & 0.3153 & 0.9752 \\ U-Net++ [36] & 0.8116 & 0.9823 & 0.9673 & 0.8126 & 0.7948 & 0.8126 & 0.8969 & 0.6851 & 0.3149 & 0.9815 \\ MRC-Net & **0.8250** & 0.9837 & **0.9698** & **0.8270** & **0.8106** & **0.8270** & **0.9044** & **0.7055** & **0.2945** & **0.9825** \\ \hline \hline \end{tabular}
\end{table}
Table 8: Performance comparison of our network with benchmark methods on the DRIVE database.
Note that our approach, in contrast with the alternatives, employs adversarial training and uses convolutional LSTM layers. Also note that, in the results shown in the previous sections, U-Net [44] and SegNet [49] consistently rank among the best of the alternative methods on the publicly available datasets under consideration. This is somewhat expected, as supervised methods have an inherent advantage, whereas unsupervised methods would be expected to be among the worst performers. That being said, we decided to include the unsupervised methods here for the sake of completeness of the results.
In addition, recent studies have proposed modifications to the SegNet and UNet architectures to achieve state-of-the-art performance in retinal vessel segmentation, as reported in [56; 36]. However, to attain state-of-the-art performance, these methods employ a larger number of trainable parameters, typically around ten times more than our network. Furthermore, in our experiments, we focused on standard datasets and commonly used performance metrics to facilitate better comparison with other published results in the literature. As far as we know, U-Net [44] and SegNet [49] remain widely used benchmarks, and as shown in our results, our method is still highly competitive against these benchmarks.
## 4 Conclusions
In this paper, we introduce a novel contextual network for the accurate identification of retinal vessels that considers the limitations of computing resources for deploying the network on devices with restricted resources such as smartphones. Our proposed network is an end-to-end trainable architecture that utilizes a multi-resolution feature extraction block to capture vessel information at different scales and preserve residual information with fine-grained detail. Furthermore, we present a feature fusion strategy that robustly combines encoder and decoder features without
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
**Methods** & **Se** & **Sp** & **Acc** & **F1** & **Mathews** & **Dice** & **Bacc** & **J** & **E** & **AUC** \\ \hline SegNet [49] & 0.8111 & 0.9829 & 0.9698 & 0.8047 & 0.7884 & 0.8047 & 0.897 & 0.6734 & 0.3266 & 0.9805 \\ vesselSeg & 0.738 & 0.9763 & 0.958 & 0.7297 & 0.7072 & 0.7297 & 0.8572 & 0.5748 & 0.4252 & 0.913 \\ MRC-Net & **0.8485** & **0.9887** & **0.9779** & **0.8548** & **0.8430** & **0.8548** & **0.9186** & **0.7466** & **0.2534** & **0.9857** \\ \hline \hline \end{tabular}
\end{table}
Table 10: Performance comparison of the proposed MRC-NET with benchmark alternatives on the CHASE dataset.
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline
**Methods** & **Se** & **Sp** & **Acc** & **F1** & **Mathews** & **Dice** & **Bacc** & **J** & **E** & **AUC** \\ \hline BCDUNet [56] & 0.6778 & 0.989 & 0.9659 & 0.7401 & 0.7295 & 0.7401 & 0.8334 & 0.5981 & 0.4019 & 0.88 \\ MultiResUNet [32] & 0.6989 & **0.9912** & 0.9694 & 0.7532 & 0.7524 & 0.7532 & 0.845 & 0.626 & 0.374 & 0.9452 \\ U-Net++ [36] & 0.8152 & 0.9858 & 0.973 & 0.8187 & 0.8041 & 0.8187 & 0.9005 & 0.6946 & 0.3054 & **0.9832** \\ MRC-Net & **0.8190** & 0.9874 & **0.9747** & **0.8286** & **0.8150** & **0.8286** & **0.9032** & **0.7105** & **0.2895** & 0.9706 \\ \hline \hline \end{tabular}
\end{table}
Table 9: Performance comparison of our network with benchmark methods on the STARE database.
compromising important vessel information. Notably, our segmentation network has a relatively low parameter count of approximately 0.9 million, which is lower than several state-of-the-art approaches in the literature. We have presented results on three widely available datasets and compared them against several alternatives. In our experiments, our method was quite competitive on the DRIVE, STARE, and CHASE datasets. The proposed approach investigated bi-directional recurrent feature fusion and region-based compound losses for robust vessel segmentation. In future, we aim to investigate attention-incorporated feature fusion with joint optimization of boundary- and distribution-based losses for extending this approach to medical image segmentation.
|
2301.06025 | Evolution and feedback of AGN Jets of different Cosmic-ray Composition | Jet feedback from active galactic nuclei (AGN) is one of the most promising
mechanisms for suppressing cooling flows in cool-core clusters. However, the
composition of AGN jets and bubbles remains uncertain; they could be thermally
dominated, or dominated by cosmic-ray proton (CRp), cosmic-ray electron (CRe),
or magnetic energy. In this work, we investigate the evolution and feedback
effects of CRp and CRe dominated jets by conducting 3D magnetohydrodynamic
simulations of AGN jet-inflated bubbles in the intracluster medium using the
FLASH code. We present the evolution of their energies, dynamics and heating,
and model their expected cavity-power versus radio-luminosity relation ($P_{\rm
cav}-L_R$). We find that bubbles inflated by CRe dominated jets follow a very
similar dynamical evolution to CRp dominated bubbles even though CRe within
bubbles suffer significantly stronger synchrotron and inverse-Compton cooling.
This is because, as CRe lose their energy, the jet-inflated bubbles quickly
become thermally dominated within $\sim 30$ Myr. Their total energy stops
decreasing with CR energy and evolves similarly to CRp dominated bubbles. The
ability of CRe and CRp dominated bubbles to heat the intracluster medium is
also comparable; the cold gas formed via local thermal instabilities is well
suppressed in both cases. The CRp and CRe bubbles follow different evolutionary
trajectories on the $P_{\rm cav}-L_R$ plane, but the values are broadly
consistent with observed ranges for FRI sources. We also discuss observational
techniques that have potential for constraining the composition of AGN jets and
bubbles. | Yen-Hsing Lin, H. -Y. Karen Yang, Ellis R. Owen | 2023-01-15T06:56:10Z | http://arxiv.org/abs/2301.06025v1 | # Evolution and feedback of AGN Jets of different Cosmic-ray Composition
###### Abstract
Jet feedback from active galactic nuclei (AGN) is one of the most promising mechanisms for suppressing cooling flows in cool-core clusters. However, the composition of AGN jets and bubbles remains uncertain; they could be thermally dominated, or dominated by cosmic-ray proton (CRp), cosmic-ray electron (CRe), or magnetic energy. In this work, we investigate the evolution and feedback effects of CRp and CRe dominated jets by conducting 3D magnetohydrodynamic simulations of AGN jet-inflated bubbles in the intracluster medium using the FLASH code. We present the evolution of their energies, dynamics and heating, and model their expected cavity-power versus radio-luminosity relation (\(P_{\rm cav}-L_{R}\)). We find that bubbles inflated by CRe dominated jets follow a very similar dynamical evolution to CRp dominated bubbles even though CRe within bubbles suffer significantly stronger synchrotron and inverse-Compton cooling. This is because, as CRe lose their energy, the jet-inflated bubbles quickly become thermally dominated within \(\sim 30\) Myr. Their total energy stops decreasing with CR energy and evolves similarly to CRp dominated bubbles. The ability of CRe and CRp dominated bubbles to heat the intracluster medium is also comparable; the cold gas formed via local thermal instabilities is well suppressed in both cases. The CRp and CRe bubbles follow different evolutionary trajectories on the \(P_{\rm cav}-L_{R}\) plane, but the values are broadly consistent with observed ranges for FRI sources. We also discuss observational techniques that have potential for constraining the composition of AGN jets and bubbles.
keywords: galaxies: active - galaxies: evolution - galaxies: jets - methods: numerical - galaxies: clusters: intracluster medium
## 1 Introduction
The immense energy output from active galactic nuclei (AGN) is believed to have a great impact on galaxy evolution across cosmic time. Specifically, relativistic jets greatly alter the evolution of galaxy clusters in the Universe by heating the intracluster medium (ICM) and preventing the runaway cooling of cool-core (CC) clusters via self-regulated feedback cycles (McNamara and Nulsen, 2012; Blandford et al., 2019). The detailed interaction between AGN jets and the ICM is likely to be dependent on the energy content of AGN jet-inflated bubbles (e.g., Yang et al., 2019); however, the composition of AGN jets and bubbles remains poorly constrained observationally.
When jets travel beyond kpc scales and punch into the ICM, they create low density "bubbles" that rise upward in the cluster potential through buoyancy. Due to their low density, these bubbles form low surface brightness regions in the X-ray emitting ICM, often called _X-ray cavities_. By contrast, the relativistic particles inside the bubbles produce radio synchrotron emission when they interact with magnetic fields, forming the _radio lobes1_. With high spatial resolution X-ray and radio imaging, one can measure the pressure of ICM (denoted as \(P_{\rm ext}\) in this work) as well as the pressure provided by the synchrotron emitting electrons in the bubbles (denoted as \(P_{\rm int}\)). Interestingly, observations found that there are two populations of AGN bubbles (Dunn and Fabian, 2004; De Young, 2006; Birzan et al., 2008; Croston et al., 2018). One population presents similar external and internal pressures (\(P_{\rm int}\sim P_{\rm ext}\)), meaning that the pressure from the radio-emitting cosmic-ray electrons (CRe) is sufficient to support the bubbles. On the other hand, other AGN bubbles show substantially smaller pressure from CRe than the external ICM pressure (\(P_{\rm int}\ll P_{\rm ext}\)), suggesting that the dominant energy content inside these bubbles may come from other sources, such as magnetic fields, ultra-hot thermal plasma or cosmic-ray protons (CRp).
Footnote 1: Following convention in the context of AGN feedback in galaxy clusters, the term “bubbles” in this paper refers to “X-ray cavities,” regardless of whether they contain active or inactive radio sources.
Theoretically, comprehensive studies of kinetic-energy dominated jets have been performed using hydrodynamic (HD) simulations (e.g. Gaspari et al., 2012; Barai et al., 2014; Reynolds et al., 2015; Hillel and Soker, 2016; Yang and Reynolds, 2016a,b; Bari et al., 2016; Li et al., 2017; Fabian et al., 2017; Bambic et al., 2018). More recently, the properties of CRp dominated jets have also seen growing attention in the literature (e.g. Mathews and Brighenti, 2008; Guo and Mathews, 2011; Ruszkowski et al., 2017; Ehlert et al., 2018; Yang et al., 2019; Su et al., 2021; Beckmann et al., 2022). These works have demonstrated that differences in the energy content of AGN bubbles could lead to very different heating impacts and dynamical evolution of the ICM. For instance, it is shown that CRp dominated bubbles tend to be more oblate and buoyant, leading to more efficient up
lift of the ICM and suppressed radiative cooling (Guo and Oh, 2008; Mathews and Brighenti, 2008; Yang et al., 2019). Direct mixing between the ICM and ultra-hot thermal plasma could be the dominant heating mechanism in kinetic-energy dominated jets. For CRp dominated jets, the CRs could instead heat the ICM via Coulomb, hadronic collisions and streaming heating (e.g. Ruszkowski et al., 2017; Yang et al., 2019). As a result, self-regulated AGN feedback simulations including kinetic-energy dominated jets tend to produce more quasi-steady AGN activities (e.g., Yang and Reynolds, 2016), while simulations including CRp dominated jets tend to exhibit more episodic AGN activities (Ruszkowski et al., 2017). Therefore, understanding the composition of AGN jets/bubbles and their impact on the dynamics and heating of the ICM is crucial for predicting the behavior of feeding onto the central AGN and the evolution of galaxy clusters under the influence of AGN feedback.
While the feedback effects of kinetic-energy and CRp dominated jets are better understood, those of CRe dominated jets have not yet been investigated in the literature. The existence of CRe dominated bubbles can be inferred from observations of AGN bubbles that are relatively close to pressure balance without the need for non-radiating CRs (\(P_{\rm int}\sim P_{\rm ext}\)) (Dunn and Fabian, 2004; Croston et al., 2018). For the Fanaroff-Riley type I (FR I; Fanaroff and Riley, 1974) sources often found in galaxy clusters, the CRe within the bubbles/lobes can be supplied by freshly injected CRs at the shocks/flaring points when the jets travel to kpc scales (see e.g. Section 3.1.3 in Blandford et al., 2019). However, the long-term evolution of these CRe bubbles and their impact on AGN feedback remains an open question.
It is therefore our aim to investigate the differences between CRp and CRe dominated jets and their subsequent evolution and feedback impact on the ICM. There are several reasons why one may expect them to show very different feedback behaviors. Firstly, compared to CRp, CRe suffer stronger synchrotron and inverse-Compton (IC) losses within magnetic fields and cosmic microwave background (CMB) radiation, respectively. Secondly, CRp can heat the ICM via hadronic and Coulomb collisions, while CRe cannot undergo hadronic heating and their Coulomb heating effect is comparatively negligible.
Therefore, one might expect to find an interesting case of failed AGN feedback, where CRe dominated bubbles eventually deflate, and fail to provide sufficient heating to the ICM. To this end, we perform three-dimensional (3D) magnetohydrodynamic (MHD) simulations including relevant CR physics to investigate the energy, dynamics and heating of CRe bubbles and compare their evolution with their CRp counterparts.
The paper is organized as follows. In Section 2, we describe the governing equations, assumptions, initial conditions, and the detailed modeling for CRp and CRe. In Section 3, we present the overall evolution (Section 3.1) of the bubbles, the evolution of different energy components in the bubbles (Section 3.2), and heating/cooling profiles and cold gas formation (Section 3.3). We compare the radio-luminosity versus cavity-power relations predicted by our simulations with observed data in Section 3.4. In Section 4, we discuss the implications of our results to the current understanding of jet composition, followed by the conclusions in Section 5.
## 2 Methods
We perform 3D MHD simulations of AGN jet-inflated bubbles in an idealized Perseus-like cluster using the FLASH code (Fryxell et al., 2000; Dubey et al., 2008). We focus on two cases: CRp dominated jets and CRe dominated jets (hereafter CRp and CRe jets, respectively). The main difference between the two is the cooling of CRs and the amount of heating they provide to the thermal gas. We also investigate the effects of CR streaming by performing two additional simulations (called CRpS and CReS, respectively). Detailed physics included in each simulation is summarised in Table 1.
### Cosmic-ray physics
We treat CRs as a second fluid that follows the MHD equations (see e.g. Yang et al., 2012, 2019, for more detailed descriptions):
\[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\mathbf{v})=0 \tag{1}\]
\[\frac{\partial\rho\mathbf{v}}{\partial t}+\nabla\cdot\left(\rho\mathbf{v}\mathbf{v}- \frac{\mathbf{B}\mathbf{B}}{4\pi}\right)+\nabla p_{\rm tot}=\rho\mathbf{g} \tag{2}\]
\[\frac{\partial\mathbf{B}}{\partial t}-\nabla\times(\mathbf{v}\times\mathbf{B})=0 \tag{3}\]
\[\frac{\partial e}{\partial t}+\nabla\cdot\left[(e+p_{\rm tot})\mathbf{v}-\frac{\mathbf{B}(\mathbf{B}\cdot\mathbf{v})}{4\pi}\right]\\ =\rho\mathbf{v}\cdot\mathbf{g}+\nabla\cdot(\mathbf{\kappa}\cdot\nabla e_{\rm cr})+\mathcal{H}_{\rm cr}-n_{e}^{2}\Lambda(T) \tag{4}\]
\[\frac{\partial e_{\rm cr}}{\partial t}+\nabla\cdot(e_{\rm cr}\mathbf{v})=-p_{\rm cr }\nabla\cdot\mathbf{v}+\nabla\cdot(\mathbf{\kappa}\cdot\nabla e_{\rm cr})+\mathcal{C }_{\rm cr} \tag{5}\]
where \(\rho\) and \(v\) are the density and velocity of gas, respectively, \(\mathbf{g}\) is the gravitational field, \(\mathbf{\kappa}\) is the CR diffusion tensor, \(e_{\rm cr}\) is the CR energy density, and \(e\) is the total energy density, consisting of kinetic, thermal, CR and magnetic energy (\(e=0.5\rho v^{2}+e_{\rm th}+e_{\rm cr}+B^{2}/8\pi\)). The total pressure is \(p_{\rm tot}=(\gamma-1)e_{\rm th}+(\gamma_{\rm cr}-1)e_{\rm cr}+B^{2}/8\pi\), in which we adopt the adiabatic index \(\gamma=5/3\) for the thermal gas and \(\gamma_{\rm cr}=4/3\) for relativistic CRs. \(\mathcal{H}_{\rm cr}\) is the net contribution of CR related processes to the change of total energy density, \(\mathcal{C}_{\rm cr}\) is the CR cooling rate due to the combined effect of Coulomb losses, hadronic processes, streaming, IC scattering and synchrotron losses. \(n_{e}\) is the electron number density, and \(\Lambda(T)\) is the radiative cooling function.
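For reference, these closure relations translate directly into code; the short sketch below (CGS units, illustrative scalar inputs) evaluates the total energy density and total pressure from the kinetic, thermal, CR and magnetic components.
```python
import math

GAMMA, GAMMA_CR = 5.0 / 3.0, 4.0 / 3.0   # adiabatic indices adopted above

def total_energy_density(rho, v, e_th, e_cr, B):
    """e = 0.5 rho v^2 + e_th + e_cr + B^2/(8 pi)   [erg cm^-3]"""
    return 0.5 * rho * v**2 + e_th + e_cr + B**2 / (8.0 * math.pi)

def total_pressure(e_th, e_cr, B):
    """p_tot = (gamma - 1) e_th + (gamma_cr - 1) e_cr + B^2/(8 pi)"""
    return (GAMMA - 1.0) * e_th + (GAMMA_CR - 1.0) * e_cr + B**2 / (8.0 * math.pi)
```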
In the above CR-MHD formalism (Zweibel, 2013, 2017), to the first order, CRs advect with the thermal gas since they are well scattered by small-scale structures in the magnetic field. Assuming the CRs are primarily scattered by waves as a part of a background turbulent magnetic field and that the turbulence is isotropic, there is no net energy transfer from the CRs to the gas. This picture, called the _extrinsic turbulence_ picture of CR transport, is adopted in many early studies of CR simulations (e.g. Guo and Oh, 2008; Mathews and Brighenti, 2008). In this picture, one can approximate CR transport as a spatial diffusion process, with a diffusion coefficient of \(\kappa\sim 3\times 10^{28}\rm\,cm^{2}\,s^{-1}\), which
\begin{table}
\begin{tabular}{c c c c} \hline \hline Simulation name & Streaming & IC + sync. cooling & Coulomb and hadronic cooling \\ \hline CRp & No & No & Yes \\ CRe & No & Yes & No \\ CRpS & Yes & No & Yes \\ CReS & Yes & Yes & No \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary of the 4 main simulations we performed and the physics adopted in each of them.
is typical for an ICM environment (Yang et al., 2019). This model (identical to the CRdh simulation in Yang et al., 2019) is adopted in the CRp and CRe simulations as listed in Table 1.
For the CRpS and CReS simulations, we adopt the _self-confinement_ picture of CR transport. In this picture, CRs are assumed to be scattered by their self-excited Alfven waves via the streaming instability (Kulsrud and Pearce, 1969; Wentzel, 1974; Zweibel, 2013). Direct simulations of streaming have been performed in previous works (e.g. Ruszkowski et al., 2017; Jiang and Oh, 2018; Thomas and Pfrommer, 2019) but they are more numerically challenging. Nevertheless, under the assumption that CRs are well scattered by small-scale magnetic field structures that are unresolved, one can also approximate CR transport due to streaming as spatial diffusion, and one can show that the CR diffusion coefficient is comparable to that in the _extrinsic turbulence_ picture (see Yang et al., 2019, for more detailed discussion). For simplicity, this approximation is applied in our CRpS and CReS simulations. In addition to the CR transport term, streaming can also transfer energy from the CRs to Alfven waves and subsequently heat up the thermal gas. This corresponds to the CR cooling term in the CR energy density equation, \(C_{\rm cr,s}=\mathbf{v}_{A}\cdot\nabla p_{\rm cr}\), where \(\mathbf{v}_{A}\) is the Alfven velocity. Note that streaming transfers energy from the CRs to the thermal gas, and therefore the total energy density would be unchanged during the process (i.e., the contribution to \(\mathcal{H}_{\rm cr}\) due to streaming is zero).
Following Yoast-Hull et al. (2013); Ruszkowski et al. (2017), the energy loss rates of CRp due to Coulomb and hadronic processes can be written as:
\[C_{\rm CRp,c}=-4.93\times 10^{-19}\frac{n-4}{n-3}\frac{e_{\rm cr}\rho}{E_{\rm min,GeV}}\frac{\rho}{\mu_{\rm e}m_{\rm p}}\,{\rm erg\,cm^{-3}\ s^{-1}}, \tag{6}\]
and
\[C_{\rm CRp,h}=-8.56\times 10^{-19}\frac{n-4}{n-3}\frac{e_{\rm cr}\rho}{E_{\rm min,GeV}}\frac{\rho}{\mu_{\rm p}m_{\rm p}}\,{\rm erg\,cm^{-3}\ s^{-1}}, \tag{7}\]
where \(n\) is the slope of the power-law distribution function of CRp in momentum space, \(E_{\rm min,GeV}\) is the minimum energy of CRp in units of GeV, and \(\mu_{p}\) and \(\mu_{e}\) are the mean molecular weights per proton and electron, respectively.
In hadronic processes, secondary electrons receive around \(1/6\) of the inelastic energy, which is transferred to the gas as heat (Mannheim and Schlickeiser, 1994; Guo and Oh, 2008; see also Owen et al. 2018, which considered the efficiency of this heating in different conditions); the remainder is emitted and lost to gamma rays or neutrinos via pion production processes. Since the fraction deposited as heat remains in the system while the rest escapes, the net change of the total energy density is \(\mathcal{H}_{\rm CRp}=(5/6)\,C_{\rm CRp,h}\).
Following Yoast-Hull et al. (2013); Miniati et al. (2001), the cooling rate of CRe due to synchrotron emission and IC scattering can be written as
\[C_{\rm CRe,IC+Syn}=\frac{2-p}{3-p}\beta e_{\rm cr}\left[\left(\frac{E_{\rm max }}{E_{\rm min}}\right)^{2-p}-1\right]\left(E_{\rm max}-E_{\rm min}\right), \tag{8}\]
where it is assumed that the energy spectrum of CRe follows a power-law distribution, and \(\beta=4\sigma_{\rm T}\left(U_{\rm B}+U_{\rm r}\right)/(3m_{\rm e}^{2}c^{3})\), in which \(\sigma_{\rm T}\) is the Thomson scattering cross section, \(U_{\rm B}=4\times 10^{-14}(B/\mu\rm G)^{2}\) erg cm\({}^{-3}\) is the magnetic energy density (\(B\) is the magnetic field strength), and \(U_{\rm r}=4.2\times 10^{-13}(1+z)^{4}\) erg cm\({}^{-3}\) is the energy density of the CMB. We set \(z=0\) for all our simulations since the observed cavity systems of interest are primarily at low redshift. \(p=2.5\) is the slope of the CRe distribution function (\(n(E)\propto E^{-p}\)), and \(E_{\rm max}\) and \(E_{\rm min}\) are the maximum and minimum energy of the CRs, respectively.
In general, \(C_{\rm CRe,IC+Syn}\) at any given simulation time step depends on the local CRe spectrum (\(E_{\rm max}\) and \(E_{\rm min}\)) and the magnetic field strength. While the magnetic field is self-consistently modeled in our MHD simulations, the values of \(E_{\rm max}\) and \(E_{\rm min}\) for the CRe need to be assumed. Strictly, one would need to follow the evolution of the CRe spectrum on-the-fly during the simulations in order to compute the CRe energy losses self-consistently (e.g. Yang and Ruszkowski, 2017). However, the system we are modeling is relatively simple, and the CRe spectral evolution is dominated by synchrotron and IC cooling after the initial jet injection phase. As such, we adopt a simpler approach to model \(E_{\rm max}\) and \(E_{\rm min}\). Specifically, we assume that the energy spectrum of CRe starts with a power-law distribution with \(E_{\rm max,0}=100\) GeV and \(E_{\rm min,0}=1\) GeV from \(t=0\) Myr up to \(t=10\) Myr (the inflation period). Then, we evolve \(E_{\rm max}\) and \(E_{\rm min}\) according to
\[E=\frac{E_{0}}{1+\beta(t-10\,{\rm Myr})E_{0}}, \tag{9}\]
(Kardashev, 1962), where \(\beta\) is the same as defined after Eq. 8, and \(t\) is the simulation time. We offset the start time of the spectral evolution to 10 Myr because, during the injection phase, there can be a complex mix of processes (e.g., mixing between newly injected CRe and existing CRe, adiabatic expansion, and possibly in situ acceleration) that are not well constrained. These uncertainties can be effectively captured by using different values of \(E_{\rm max,0}\) and \(E_{\rm min,0}\) (see Appendix B). Equation 9 is only valid for a constant magnetic field strength. Therefore, for consistency, we adopt a field strength of 1 \(\mu\)G to model the evolution of \(E_{\rm max}\) and \(E_{\rm min}\), instead of using the magnetic fields from our simulations. We verified this approach to be reasonable by comparing results using simulated magnetic field strengths and constant field strengths. We found these are not significantly different since IC cooling is much stronger than synchrotron cooling in our setup.
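The following sketch evaluates Eq. (9) with the constant 1 \(\mu\)G field and the CMB energy density at \(z=0\) adopted above; the numerical constants are standard CGS values and the function names are ours.
```python
SIGMA_T = 6.6524e-25      # Thomson cross-section [cm^2]
M_E     = 9.1094e-28      # electron mass [g]
C_LIGHT = 2.9979e10       # speed of light [cm s^-1]
ERG_PER_GEV = 1.6022e-3
SEC_PER_MYR = 3.156e13

def beta_cool(B_muG=1.0, z=0.0):
    """beta = 4 sigma_T (U_B + U_r) / (3 m_e^2 c^3), in erg^-1 s^-1."""
    U_B = 4.0e-14 * B_muG**2            # magnetic energy density [erg cm^-3]
    U_r = 4.2e-13 * (1.0 + z)**4        # CMB energy density [erg cm^-3]
    return 4.0 * SIGMA_T * (U_B + U_r) / (3.0 * M_E**2 * C_LIGHT**3)

def cutoff_energy_GeV(E0_GeV, t_Myr, B_muG=1.0):
    """Eq. (9): E(t) = E0 / [1 + beta (t - 10 Myr) E0], applied for t >= 10 Myr."""
    E0 = E0_GeV * ERG_PER_GEV
    dt = max(t_Myr - 10.0, 0.0) * SEC_PER_MYR
    return E0 / (1.0 + beta_cool(B_muG) * dt * E0) / ERG_PER_GEV

# e.g. the 100 GeV upper cutoff in a 1 muG field after 100 Myr:
# print(cutoff_energy_GeV(100.0, 100.0))
```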
### Simulation setup
We perform the simulations in a box of 500 kpc on each side. The simulation domain is refined adaptively with up to 8 levels of refinement (corresponding to a maximum resolution of 500 pc), according to a refinement criterion based on steep temperature gradients. The total simulation time is 100 Myr. The initial gas profile of the cluster is set using empirical fits to the Perseus cluster, one of the most studied CC clusters in the local Universe, in hydrostatic equilibrium within a static Navarro-Frenk-White (NFW, Navarro et al., 1996) gravitational potential. Radiative cooling of the gas is calculated using tables from Sutherland and Dopita (1993), adopting \(1/3\) solar metallicity. The initial cooling time of the simulated Perseus-like cluster is around 250 Myr. Although this radiative cooling time is longer than the simulation time of 100 Myr, we note that it is still essential to include radiative cooling, because it affects the amount of cold gas formed via local thermal instabilities (e.g., McCourt et al., 2011). A reflective boundary condition is chosen for the simulations, so that the total energy within the simulation domain after the initial jet injection phase would be conserved in the absence of cooling, or otherwise lost only due to radiative cooling as in our current study2. We generate a tangled ICM magnetic field by conducting a 3D inverse Fourier transform of a magnetic power spectrum with a coherence length of 50 kpc (see e.g. Yang and Ruszkowski, 2017, for detailed descriptions). We then normalize the field strength to the local thermal pressure so that the plasma beta is constant, \(\beta=P_{\rm th}/P_{\rm B}=100\) (Carilli and Taylor, 2002).
Footnote 2: Our conclusions are insensitive to the choice of the boundary condition since the region of interest is much smaller than the simulation box size.
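A minimal sketch of this kind of field initialization is shown below. It is not the actual setup routine used in the simulations: the spectral shape, grid size, and placeholder thermal pressure are assumptions for illustration, but the two key steps (a divergence-free Gaussian random field built in Fourier space, then rescaled to a constant plasma beta) are the ones described above.

```python
import numpy as np

def tangled_b_field(n=64, box_kpc=500.0, l_coh_kpc=50.0, beta=100.0, p_th=None, seed=42):
    """Sketch: divergence-free random field with a chosen coherence scale, rescaled so that
    the local plasma beta = p_th / (B^2 / 8 pi) is constant."""
    rng = np.random.default_rng(seed)
    k1d = np.fft.fftfreq(n, d=box_kpc / n)                 # wavenumbers [1/kpc]
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                      # avoid division by zero at k = 0

    # Random Fourier amplitudes shaped by an (assumed) spectrum peaking near 1/l_coh
    amp = k2 * np.exp(-k2 * l_coh_kpc**2)
    b_k = np.stack([amp * (rng.normal(size=k2.shape) + 1j * rng.normal(size=k2.shape))
                    for _ in range(3)])

    # Project out the compressive part so that k . B_k = 0, i.e. div B = 0
    k_vec = np.stack([kx, ky, kz])
    b_k -= k_vec * np.sum(k_vec * b_k, axis=0) / k2

    b = np.real(np.fft.ifftn(b_k, axes=(1, 2, 3)))         # real-space field (arbitrary units)

    # Rescale each cell so that B^2 / 8 pi = p_th / beta
    if p_th is None:
        p_th = np.full(k2.shape, 1.0e-10)                  # placeholder thermal pressure [erg/cm^3]
    b *= np.sqrt(8.0 * np.pi * p_th / beta / np.sum(b**2, axis=0))
    return b

B = tangled_b_field()
print(B.shape, np.sqrt((B**2).sum(axis=0)).mean())         # mean field strength [G]
```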
Following the method of AGN jet injection in Yang et al. (2019),
bipolar AGN jets are injected from a cylinder with radius of 2 kpc and height of 4 kpc at the center of the simulation box. The power of each jet is \(\dot{E}_{\rm ej}=5\times 10^{45}\) erg s\({}^{-1}\) and the duration of their activity is 10 Myr. These parameter choices are representative of those obtained from previous self-regulated feedback simulations using a similar setup (Yang and Reynolds, 2016). They are also consistent with recent estimated ranges of kinetic luminosities and duty cycles of a large sample of radio galaxies (Shabala et al., 2020; Hardcastle et al., 2019). The rate of mass injection can be written as \(\dot{M}_{\rm ej}=2(1-f_{\rm cr})\dot{E}_{\rm ej}/v_{\rm ej}^{2}\), where \(v_{\rm ej}=0.01c\) is the bulk velocity of the jet, which is chosen to represent jets which have gone through significant deceleration on kpc scales (Laing et al., 2006), and \(f_{\rm cr}=e_{\rm cr}/e=0.9\) is the fraction of CR energy in the jets. The remaining 10 per cent of the injected energy is in kinetic form; no magnetic or thermal energy is explicitly injected.
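As a quick consistency check (not part of the injection module itself), the quoted jet parameters imply the following mass loading through the stated relation \(\dot{M}_{\rm ej}=2(1-f_{\rm cr})\dot{E}_{\rm ej}/v_{\rm ej}^{2}\):

```python
# Mass injection rate implied by the stated Edot, v_jet and f_cr
E_DOT = 5e45            # erg/s
C = 2.998e10            # cm/s
V_JET = 0.01 * C        # cm/s
F_CR = 0.9              # CR fraction of the injected energy
M_SUN = 1.989e33        # g
YR = 3.156e7            # s

m_dot = 2.0 * (1.0 - F_CR) * E_DOT / V_JET**2          # g/s
print(f"Mdot_ej ~ {m_dot / M_SUN * YR:.1f} Msun/yr")
```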
## 3 Results
In this section, we describe the important features in our simulations. In Section 3.1, we describe the general evolution of the bubbles. In Section 3.2, we describe the evolution of different energy components in the bubbles. In Section 3.3, we describe the radial CR heating profile in the cluster and the evolution of cold (\(T\leq 5\times 10^{5}\) K) gas formation. Finally, in Section 3.4, we compare the predicted synchrotron luminosity (\(L_{\rm R}\)) and the cavity power (\(P_{\rm cav}\)) with observations.
### Bubble evolution
Fig. 1 shows the evolution of two representative simulations, CRpS and CReS. In their early stages, the ram pressure of the jets inflates low-density bubbles and creates a series of shock waves. After the jets are turned off at \(t=10\) Myr, the rapid inflation of the bubbles stops and they then rise due to buoyancy. Next, hydrodynamic instabilities (e.g. Rayleigh-Taylor and Kelvin-Helmholtz) start to deform and disrupt the bubbles due to the large shear velocity and density contrast between the bubbles and the ambient medium. Eventually, the bubbles break into several small, irregular bubbles and mix with the ambient thermal gas. Contrary to our expectation, the evolution of the CRe/CReS bubbles is very similar to that of the CRp/CRpS bubbles. The dynamical evolution of the bubbles seems to be unaffected by the cooling differences between the two types of CRs. We discuss this result further in Section 4.
In Fig. 2, we show (from left to right) slices of temperature, CR energy density, total-to-thermal pressure ratio \(\beta_{\rm th}\), total-to-CR pressure ratio \(\beta_{\rm CR}\), and the projected thermal Bremsstrahlung emissivity of the CRpS and CReS simulations at \(t=60\) Myr. At this epoch, the CR energy density within the CRpS bubbles is still around \(10^{-9}\) erg/cm\({}^{3}\), while that of the CReS bubbles is more than 3 times weaker due to the strong synchrotron plus IC cooling of CRe. Correspondingly, CRpS bubbles have \(\beta_{\rm th}\) around 2 to 3 and \(\beta_{\rm CR}\) around 1, which means that the pressure and dynamics inside the bubbles are dominated by CR pressure. On the other hand, the CReS bubbles are dominated by thermal pressure with \(\beta_{\rm th}\sim 1\). The gas temperatures of the two simulations are similar: both CRpS and CReS bubbles show a peak temperature around \(2\times 10^{8}\) K at the edges of the bubbles. Finally, we use the projected thermal Bremsstrahlung emissivity for typical ICM conditions calculated using
\[\epsilon_{\rm ff}=3\times 10^{-27}T^{1/2}n_{e}^{2}~{}{\rm erg~{}cm^{-3}s^{-1}} \tag{10}\]
(Sarazin, 1986) as a proxy to emulate the expected X-ray emission maps from our simulations, and find that the X-ray cavities have a similar morphology in the two cases.
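A minimal sketch of how such an X-ray proxy map can be built from gridded data is given below; the array names and toy values are placeholders rather than the actual analysis script, and Eq. 10 is simply applied cell by cell and summed along the line of sight.

```python
import numpy as np

def xray_proxy_map(T, n_e, dl_cm):
    """Project the free-free emissivity eps_ff = 3e-27 sqrt(T) n_e^2 (Eq. 10)
    along the z-axis of a uniform grid with cell size dl_cm [cm]."""
    eps_ff = 3e-27 * np.sqrt(T) * n_e**2          # erg cm^-3 s^-1
    return eps_ff.sum(axis=2) * dl_cm             # erg cm^-2 s^-1 (surface-brightness proxy)

# Toy example: isothermal 1e8 K slab with n_e = 0.01 cm^-3 and 500 pc cells
T = np.full((64, 64, 64), 1e8)
n_e = np.full((64, 64, 64), 1e-2)
print(xray_proxy_map(T, n_e, dl_cm=500 * 3.086e18).max())
```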
### Energy evolution
Fig. 3 shows the evolution of different energy components inside the bubbles. We define bubbles according to the criterion \(t_{\rm cool}\geq 3\) Gyr, where \(t_{\rm cool}\) is the cooling timescale of the gas by Bremsstrahlung, given by
\[t_{\rm cool}\sim 4.4~{}\left(\frac{n_{e}}{10^{-2}\,{\rm cm^{-3}}}\right)^{-1} \left(\frac{T}{10^{8}\,{\rm K}}\right)^{1/2}~{}{\rm Gyr}~{}, \tag{11}\]
where \(n_{e}\sim n/2\sim\rho/m_{p}\). In Appendix A, we show that the choice of the \(t_{\rm cool}\) threshold does not affect our main conclusions.
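In practice this selection reduces to a simple mask over the gridded density and temperature; a sketch (with assumed array names, not the analysis code used for the figures) is:

```python
import numpy as np

def bubble_mask(n_e, T, t_cool_threshold_gyr=3.0):
    """Boolean mask of 'bubble' cells, using the Bremsstrahlung cooling time of Eq. 11."""
    t_cool_gyr = 4.4 * (n_e / 1e-2) ** -1 * (T / 1e8) ** 0.5
    return t_cool_gyr >= t_cool_threshold_gyr

# Toy example: hot, tenuous cells qualify as bubble material
n_e = np.array([1e-2, 1e-3, 5e-2])
T = np.array([1e8, 2e8, 5e7])
print(bubble_mask(n_e, T))   # [ True  True False ]
```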
For the first 10 Myr, the total energy inside the bubbles rises due to the energy injection of the jets. During this period, all the bubbles are CR-energy dominant in all four simulations; the kinetic energy and thermal energy are sub-dominant. After the injection ends, the four scenarios start to differ due to their underlying physics. In the CRp simulation, the bubbles remain CR-energy dominant all the way from 10 to 100 Myr, only slightly decreasing due to the gradual expansion of the bubbles and CR energy losses via the hadronic processes. For the CRpS case, the CR energy decreases faster than the CRp case because of the additional energy transferred from the CRs to the gas via streaming. The thermal energy becomes the dominant energy component for the CRpS bubbles at around 75 Myr. By contrast, CR energy is depleted much faster in the CRe and CReS cases due to the strong synchrotron and IC cooling of CRe. The thermal and CR energies cross at around 20 to 40 Myr. In all four cases, kinetic energy remains a sub-dominant component.
### Cold gas evolution and CR heating
In Fig. 4, we show the evolution of radially averaged profiles of the total CR heating rates (including hadronic, Coulomb and streaming heating) over-plotted with the radiative cooling profile of the cluster at \(t=0\). In the CRe simulation, we do not consider CR heating. This is because hadronic heating would not operate, while Coulomb heating would be very inefficient for the conditions of the ICM (see e.g. Fig. 4 of Owen et al., 2018). In the CRp simulation, the CR heating profile first decreases and then subsequently increases. This behavior originates from the rapid drop of density due to bubble inflation at the cluster center in the early epoch. After the bubbles form and detach, the CRs gradually diffuse out and interact with the ambient ICM. This increases the hadronic and Coulomb heating rates. For the CRpS case, the CR heating profiles are somewhat different from those of the CRp case because streaming heating dominates over hadronic and Coulomb heating (Ruszkowski et al., 2017). Overall, the heating rates in the CRpS case are greater than in the case without streaming, and the heated region extends to larger radii due to the additional streaming transport. The CReS bubbles behave similarly to the CRpS bubbles, but the heating is weaker. This is because cooling removes most of the CRe energy.
In all four simulations, the CR heating rates are much smaller than the radiative cooling rate. Nonetheless, as shown in Fig. 5, cold (\(T\leq 5\times 10^{5}\) K) gas formed during the jet-injection phase via thermal instabilities triggered by rapid adiabatic cooling is well suppressed in all cases. Cold gas only forms in the first 12 Myr in the simulations due to the strong adiabatic cooling at early stages and is subsequently heated through direct mixing between the hot bubbles and the ambient gas. This suggests that even with streaming heating included, direct mixing is still the dominant heating mechanism
(Hillel and Soker, 2016; Yang and Reynolds, 2016). This conclusion is further supported by the CRe case, in which there is no CR heating, and the heating can only come from direct mixing. Overall, streaming only helps to suppress the cold (\(T\leq 5\times 10^{5}\) K) gas mass by \(\sim 20\) per cent.
### Observable properties in radio
In Fig. 6 and Fig. 7, we calculate the observable properties of our simulated bubbles and compare them with observed cavities in Croston et al. (2018). This includes cavity samples from nearby CC clusters compiled by Birzan et al. (2008), which are mostly FRI sources, and a sample of other radio galaxies, taken from Cavagnolo et al. (2010); O'Sullivan et al. (2011); Ineson et al. (2017). We compute the cavity power using \(P_{\rm cav}=(E_{\rm CR}+E_{\rm th}+E_{\rm k}+PV)/t\), where \(P\) is the total pressure inside the bubbles, \(V\) is the volume of the bubbles, and \(t\) is the simulation time, which is the age of the bubbles. This is compared with the observed cavity power, which is a proxy for the true jet power. Note that this definition is not strictly the same as that used by observers.3 However, we argue that as observations advance, cavity power estimates should eventually approach our definition of \(P_{\rm cav}\). Therefore, our approach can still serve as a useful reference for future observations.
Figure 1: Density slices of the CRpS (top) and CReS (bottom) simulations at \(t=20\), \(40\), \(60\), \(80\), \(100\) Myr. The physical scale of the plotted region is \(100\) kpc by \(100\) kpc.
Figure 2: Different fields in the CRpS (top) and CReS (bottom) simulations at \(t=60\) Myr, including temperature, CR energy density, \(\beta_{\rm th}=P_{\rm tot}/P_{\rm th}\), \(\beta_{\rm CR}=P_{\rm tot}/P_{\rm CR}\), and projected free-free emissivity. The plotted region is \(100\) kpc by \(100\) kpc.
For the CRe/CReS simulations, the synchrotron luminosity is calculated using a pitch-angle averaged synchrotron emissivity of power-law CRe
\[\epsilon_{\rm s}(\nu)=\frac{e_{\rm cr}(2-p)}{E_{\rm max}^{2-p}-E_{\rm min}^{2-p }}(m_{e}c^{2})^{1-p}\frac{3\sigma_{\rm T}c_{\rm B}}{16\pi\sqrt{\pi}\nu_{L}} \left(\frac{\nu}{\nu_{L}}\right)^{-\frac{p-1}{2}} \tag{12}\] \[\times 3^{\frac{p}{2}}\left(\frac{2.25}{p^{2.2}+0.105}\right)\ {\rm erg \ cm^{-3}\ s^{-1}},\]
(Ghisellini, 2013), where \(\nu_{L}\) is the Larmor frequency. To obtain the total radio luminosity, we integrate over the whole simulation domain. For the CRp/CRpS cases, we consider that the synchrotron emission is dominated by the secondary particles created via hadronic collisions (mainly electrons and positrons from pion-decays). We model their spectrum following a steady-state approximation, where the secondary electron injection rate is balanced by their cooling (see Owen & Yang, 2022 for details). We adopt a simplified analytic approximation for the production of secondary electrons, as detailed in Appendix C.
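For reference, a direct numerical evaluation of Eq. 12 can be sketched as follows. We read the factor \(c_{\rm B}\) in Eq. 12 as \(c\,U_{\rm B}\) (the speed of light times the magnetic energy density); this interpretation, the cgs constants, and the toy input values are our assumptions rather than part of the original pipeline.

```python
import numpy as np

SIGMA_T = 6.652e-25     # cm^2
M_E = 9.109e-28         # g
C = 2.998e10            # cm/s
E_CHARGE = 4.803e-10    # esu
MEC2 = M_E * C**2       # erg

def eps_syn(nu_hz, e_cr, B_gauss, p=2.5, E_min=1.602e-3, E_max=1.602e-1):
    """Pitch-angle-averaged power-law synchrotron emissivity (Eq. 12).
    E_min/E_max default to the initial 1 and 100 GeV cutoffs, in erg."""
    U_B = B_gauss**2 / (8.0 * np.pi)                          # magnetic energy density
    nu_L = E_CHARGE * B_gauss / (2.0 * np.pi * M_E * C)       # Larmor frequency [Hz]
    norm = e_cr * (2.0 - p) / (E_max**(2.0 - p) - E_min**(2.0 - p))
    return (norm * MEC2**(1.0 - p)
            * 3.0 * SIGMA_T * C * U_B / (16.0 * np.pi * np.sqrt(np.pi) * nu_L)
            * (nu_hz / nu_L) ** (-(p - 1.0) / 2.0)
            * 3.0**(p / 2.0) * 2.25 / (p**2.2 + 0.105))

# Toy numbers: e_cr = 1e-10 erg/cm^3, B = 1 muG, nu = 151 MHz
print(eps_syn(151e6, 1e-10, 1e-6))
```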
From Fig. 6, we observe the following features:
1. In general, CRe/CReS bubbles produce stronger synchrotron radiation than CRp/CRpS bubbles. This is because synchrotron emission from primary CRe is very efficient, whereas the production of secondary particles by the hadronic process occurs on relatively longer timescales.
2. For simulations with CR streaming, the bubbles tend to produce slightly weaker radio emission. That is due to lower CR energy densities within the bubbles, resulting from streaming transferring energy from the CRs to the thermal gas.
3. CRe/CReS bubbles reach their highest radio luminosity at early times (\(<10\) Myr). After the end of the injection phase, at 10 Myr, \(E_{\rm max}\) and \(E_{\rm min}\) quickly drop according to Eq. 9, such that the radio emissivities (Eq. 12) also gradually decrease as the bubbles age.
4. The radio luminosity of CRp/CRpS bubbles follows a different evolutionary path to that of the CRe/CReS bubbles. Since synchrotron emission in the CRp/CRpS cases comes from secondary particles created by the hadronic process, the evolution of the synchrotron luminosity is coupled with the hadronic heating rate. As described in Section 3.3, rapid bubble expansion at early times dramatically reduces the gas density and the CR energy density near the cluster center, leading to a decrease in bubble synchrotron emission during the first 15 Myr. As the CRs diffuse out of the bubbles and interact with the ambient ICM at later times, the synchrotron emission rises with the corresponding increase of the hadronic heating rates (see Fig. 4).
5. The predicted radio luminosities for both the CRe and CRp bubbles are broadly consistent with the observed ranges in the sample presented by Birzan et al. (2008) (cf. Fig. 6).
Figure 3: The evolution of different energy components inside the bubbles defined by \(t_{\rm cool}\geq 3\) Gyr. The gray solid lines represent the accumulated energy injected by jets. The red lines represent CRp/CRpS, and the blue lines represent CRe/CReS. The solid, dashed, dotted, and dotted-dashed lines represent the total energy (\(E_{\rm tot}\)), CR energy (\(E_{\rm CR}\)), thermal energy (\(E_{\rm th}\)), and kinetic energy (\(E_{\rm k}\)) within the bubbles, respectively. While the CRp/CRpS bubbles are dominated by CR energy for most of their evolution, the CRe/CReS bubbles suffer more significant synchrotron and IC cooling and quickly become thermally dominated after \(t\sim 30\) Myr.
The observational sample of bubbles from Birzan et al. (2008) also provides information about their composition. This allows a more direct comparison to be made between our simulations and their data on the \(P_{\rm cav}-L_{\rm R}\) plane. This is shown in Fig. 7, where the \(x\)-axis shows the total radio luminosity integrated from 10 to 10000 MHz. In Birzan et al. (2008), the composition of the bubbles was measured and presented in terms of \(1+k\) values, where \(k\equiv E_{p}/E_{e}\) is the energy ratio between non-radiating CRp and the radio-emitting CRe, i.e., \(E_{\rm tot}=E_{\rm B}+(1+k)E_{e}\). For bubbles that are not dominated by magnetic field energy, the \(1+k\) value thus has the same physical meaning as the \(P_{\rm ext}/P_{\rm int}\) value. In order to compare with the simulated CRp and CRe bubbles, we divide the sample in Birzan et al. (2008) into two groups: one with \(1+k>100\) (bubbles supported by non-radiating particles; shown in red crosses), and the other with \(1+k<100\) (bubbles supported by radiating particles; shown in blue crosses). Note that Figs. 6 and 7 differ only in the definition of the radio luminosity, and hence the trajectories of the simulated data points on the two plots are essentially identical.
Fig. 7 shows that overall there is a good agreement between the simulated and observed data in terms of their locations on the \(P_{\rm cav}-L_{\rm R}\) plane. There appears to be a tentative trend that the observed CRp bubbles have a larger slope on the \(L_{\rm R}-P_{\rm cav}\) plane than the CRe bubbles. From the comparison with our simulated data points, the different slopes could potentially be explained by the different evolutionary tracks of the CRp and CRe bubbles. However, with the current sample size, we were not able to draw a robust conclusion from this result. Future observational results to be obtained by the next-generation X-ray and radio facilities (e.g. _Athena_ and the Square Kilometer Array, respectively) will allow fainter and more distant X-ray cavity systems to be explored. It would be interesting to consider in future work how bubbles with different \(P_{\rm{int}}/P_{\rm{ext}}\) values would populate this diagram and provide insights into the evolution of bubbles of different compositions.
Figure 4: The radially averaged profiles of the total CR heating rates (color-coded solid lines) and the initial radiative cooling profile (dashed lines) of the four simulations at \(t=\) 2, 20, 40, 60, 80 and 100 Myr.
## 4 Discussion
We find that the morphology, evolution, and ICM heating effects of CRp and CRe bubbles are very similar (see Section 3). This result is somewhat counter-intuitive, given that CRe provide no hadronic and negligible Coulomb heating in a bubble environment and undergo strong synchrotron and IC cooling, while CRp heat primarily through hadronic processes and do not experience significant synchrotron or IC cooling. Our findings can be understood as follows.
As shown in Fig. 3, even though the CR energy within CRe bubbles drops rapidly, their total energy does not follow the same trend at later times. This is because the CRe bubbles become thermally-dominated by \(t\sim 20\) Myr. After they become thermal bubbles, their total energies stop decreasing with the CR energy. Since the overall dynamical evolution of the bubbles depends on the total energy, the CRp and CRe bubbles follow similar dynamical evolution over long timescales. Therefore, the subsequent X-ray morphology of the CRp and CRe bubbles are also similar. Moreover, although synchrotron and IC cooling efficiently removes energy from the CRe, the bubbles can still heat up the ambient ICM efficiently via direct mixing (Yang and Reynolds, 2016) because at later times the bubbles are thermally dominated. The amount of radio emission produced by CRp and CRe bubbles is also comparable - in both cases the radio emission is within the observed ranges (see Fig. 6). We therefore conclude that the evolution and feedback effects of AGN bubbles inflated by CRe dominated and CRp dominated jets are similar under the same initial conditions, and for the same initial jet power. It is difficult to determine the composition of an AGN bubble using its X-ray morphology or integrated radio luminosity alone.4 It is even more difficult to infer the intrinsic composition of the jets by observing the bubbles they inflated because bubbles inflated by CRp and CRe jets would all become \(P_{\rm ext}/P_{\rm int}\gg 1\) on longer timescales. Therefore, other means are required to distinguish their compositions.
Footnote 4: The radio morphology of CRp and CRe bubbles could have systematic differences. As mentioned in Section 3.4, synchrotron emission from CRp bubbles comes from the secondary particles that originate in hadronic collisions. Therefore, CRp bubbles tend to brighten at the bubble edges (\(\propto e_{\rm cr}\omega^{2}\); cf. Eq. 7). However, predicting realistic radio morphologies involves additional information (in particular, realistic CR spectral evolution, a more complete treatment of CR propagation and the coherence length of the initial tangled magnetic field). These details will be investigated in future, dedicated work.
One potential way to constrain the composition of AGN bubbles is by using the thermal Sunyaev-Zeldovich (SZ) effect (Sunyaev and Zeldovich, 1972; Birkinshaw, 1999). Because the thermal SZ effect directly traces the integrated thermal pressure along a line of sight, CR dominated bubbles would exhibit suppression in the SZ fluxes compared to the surrounding ICM, causing "SZ cavities" (Pfrommer et al., 2005; Ehlert et al., 2018; Yang et al., 2019). Since our simulations show that bubbles created by CRp jets could remain CRp dominated for 70 Myr or more (Fig. 3), and that bubbles created by CRe jets would become thermally dominated after \(\sim 20\) Myr, the SZ effect could then be used to distinguish these two cases at later times of the bubble evolution.5
Footnote 5: Note, however, that the SZ effect would not be able to tell apart whether thermally dominated bubbles are created by CRe dominated jets or kinetic-energy dominated jets.
One could potentially infer the composition of AGN jets/bubbles by tracking the evolution of the \(P_{\rm ext}/P_{\rm int}\) value at early stages as the bubbles rise. One of the predictions from our simulations is that the \(P_{\rm ext}/P_{\rm int}\) value for CRe bubbles keeps increasing as bubbles rise and lose CRe energy along the way. On the other hand, CRp bubbles have high \(P_{\rm ext}/P_{\rm int}\) values throughout their evolution.6 Therefore, by measuring the \(P_{\rm ext}/P_{\rm int}\) values for a sample of AGN bubbles as a function of distance from the cluster center, one may be able to determine whether such an evolution exists and infer the intrinsic jet compositions. In fact, in the observed sample of AGN bubbles in the Perseus and Centaurus clusters (Dunn et al., 2005), it is found that their \(k\) values (\(k\equiv E_{p}/E_{e}\)) increase with the distance from the cluster center, implying that older and more distant bubbles tend to have greater pressure support from non-radiating particles. Particularly, the \(k\) values for the bubbles in the Centaurus cluster increase from \(\sim 1\) for the inner bubbles to \(\sim 100\) for the outer bubbles (see their Fig. 8). Although different episodes of AGN jets do not necessarily have the same intrinsic composition, according to our simulations this observed trend would be more consistent with the evolution of AGN bubbles inflated by CRe dominated jets. Finally, we note that the observed trend of increasing \(P_{\rm ext}/P_{\rm int}\) or \(k\) values as a function of distance from cluster center suggests that there is no significant re-acceleration of CRe within the AGN bubbles (Dunn et al., 2005). This conclusion is also supported by our simulations, which showed that the observed trends could be explained without invoking CR re-acceleration mechanisms.
Footnote 6: Since we define bubbles in the simulations using a cooling-time threshold, the \(P_{\rm ext}/P_{\rm int}\) values are not affected by the mixing/contamination of the ambient thermal gas.
More recently, Vazza et al. (2021, 2022) have performed detailed analyses of the transport and energetics of relativistic electrons using cosmological simulations. They found that even when both shock (Fermi I) and turbulence (Fermi II) re-acceleration mechanisms are considered, the energy density of CRe would not exceed \(\sim 10\) per cent of the local thermal gas energy (see e.g. Figs. 12 and 13 in Vazza et al., 2022). Their results therefore also support the view that re-accelerated CRe represent a sub-dominant component in the dynamics of AGN bubbles. If this is the case, tracing the relation between \(P_{\rm ext}/P_{\rm int}\) and the distance of the bubbles from the cluster center within the same cluster can in principle provide important clues on the intrinsic composition of AGN jets.
Figure 5: The evolution of cold-gas mass in the simulations. CRp/CRpS simulations tend to create roughly 20% more cold gas. Including streaming heating can help suppress cold-gas formation by around 20%. Values after 50 Myr are zero in all simulations.
## 5 Conclusions
Relativistic jets from SMBHs are one of the most important yet complicated mechanisms that could affect galaxy evolution and provide energetic feedback to the ICM in CC clusters. However, the feedback effects to the ICM by AGN jets/bubbles with different energy compositions remain poorly understood. Observational constraints of cluster radio bubbles suggest that there could be two populations of AGN bubbles: one dominated by radiating particles (i.e., CRe), the other dominated by non-radiating particles (e.g., CRp). In order to understand the evolution of AGN bubbles and their influence on the AGN feedback mechanisms in these two scenarios, we performed four 3D MHD simulations of CRp-dominated jets versus CRe-dominated jets, with and without CR streaming (Table 1). We investigated their differences in terms of the dynamical evolution, the amount of heating provided to the ICM, and their observable properties in the X-ray and radio bands. We summarize the key results as follows.
1. Despite the stronger synchrotron and IC cooling of CRe, the long-term evolution of bubbles inflated by CRe jets is very similar to that of bubbles inflated by CRp jets (Fig. 1). This is because, although the energy of CRe within the bubbles is quickly lost due to synchrotron and IC cooling, thermal energy takes over and becomes the dominant energy component within \(\sim 20\) Myr (Fig. 3). Afterwards, the total bubble energy stops decreasing with the rapidly declining CR energy, and hence the bubbles have a dynamical evolution similar to that of the CRp dominated bubbles.
2. All four simulations in our study show a very similar amount of cold gas formed via local thermal instabilities (Fig. 5), suggesting that the ability of CRp and CRe bubbles to heat the ICM is similar. In addition, the CR heating rates in all four simulations are much weaker than the radiative cooling rate (except for the CRp cases at the earliest times), even though all simulations (including a case with no heating from CRs, i.e., the CRe simulation in Table 1) show suppressed formation of cold gas at a similar level. These results suggest that, in addition to heating from CRs (Coulomb, hadronic, and streaming), other heating mechanisms such as direct mixing still play an important role.
Figure 6: \(P_{\rm cav}-L_{\rm R}\) diagram of our simulated bubbles over-plotted with observational data from Croston et al. (2018). The data points obtained from our simulations are plotted from \(t=5\) to 95 Myr with a time interval of 10 Myr (where lighter colors represent earlier simulation times). We mark the observational data from Birzan et al. (2008) using dark gray crosses, while other data (from Cavagnolo et al. 2010; O'Sullivan et al. 2011; Ineson et al. 2017) are plotted with light gray crosses. The dashed horizontal line represents the injected power of the simulated AGN jets (\(5\times 10^{45}\) erg s\({}^{-1}\)).
3. With CR streaming, the bubbles can provide stronger heating to the ICM and the heating can extend to larger radii than cases without streaming (Fig. 4). The heating rates of CRe bubbles are in general smaller than the CRp bubbles because of the lower CR energy density due to cooling. For both the CRp and CRe cases, CR streaming could help to reduce the amount of cold gas within the simulations by \(\sim 20\%\).
4. We computed the predicted radio luminosity for the CRp and CRe bubbles and investigated their evolution on the \(P_{\rm cav}-L_{\rm 151~{}MHz}\) plane. We find that the CRp and CRe bubbles evolve differently because of their different emission mechanisms. For CRe bubbles, their synchrotron emission decreases with time due to the energy losses of CRe. For CRp bubbles, their synchrotron emission comes from secondary electrons produced by hadronic interaction processes. This is suppressed at early times, when rapid expansion of the bubbles reduces the gas density close to the cluster center. Despite the difference in their evolutionary trajectories, the predicted radio luminosity for both the CRp and CRe bubbles in our simulations is broadly consistent with the observed FRI sample from Birzan et al. (2008) (Fig. 6).
Overall, we find that AGN bubbles inflated by CRe dominated and CRp dominated jets behave very similarly in terms of their dynamical evolution, X-ray morphology, their ability to heat the ICM and suppress cold-gas formation, as well as their radio luminosity. Our results suggest that it may be difficult to determine the composition of an AGN bubble using its X-ray morphology or integrated radio luminosity alone, and inferring the intrinsic jet composition from these bubbles may be even more challenging. Other observational techniques (e.g., the SZ effect) would be needed to help constrain their composition. Our simulations predict that, due to the cooling of CRe, the \(P_{\rm ext}/P_{\rm int}\) values (or, equivalently, the ratio between non-radiating and radiating particles \(k\equiv E_{p}/E_{e}\) in the literature) for CRe bubbles would increase as the bubbles rise toward larger radii. In contrast, the CRp bubbles would have high \(P_{\rm ext}/P_{\rm int}\) values throughout their evolution. Therefore, measuring the \(P_{\rm ext}/P_{\rm int}\) values as a function of distance from cluster centers for bubbles within the same cluster could potentially provide additional constraints on the composition of AGN jets/bubbles. Interestingly, the cavities in the Centaurus cluster (Dunn et al., 2005) show increasing \(k\) values with distance from the cluster center, suggesting that these bubbles could be produced by CRe dominated jets. In addition, the fact that both the simulated and observed bubbles show rising \(k\) values with distance suggests that re-acceleration of CRe is subdominant within AGN bubbles.
## Acknowledgements
YHL and HYKY acknowledge support from National Science and Technology Council (NSTC) of Taiwan (109-2112-M-007-037-MY3). HYKY acknowledges support from Yushan Scholar Program of the Ministry of Education (MoE) of Taiwan. ERO is an overseas researcher under the Postdoctoral Fellowship of Japan Society for the Promotion of Science (JSPS), supported by JSPS KAKENHI Grant Number JP22F2327, and also acknowledges support from the Center for Informatics and Computation in Astronomy (CICA) at National Tsing Hua University (NTHU) through a grant from the MoE of Taiwan. This work used high-performance computing facilities operated by CICA at NTHU. FLASH was developed largely by the DOE-supported ASC/Alliances Center for Astrophysical Thermonuclear Flashes at the University of Chicago. Data analysis presented in this paper was conducted with the publicly available yt visualization software (Turk et al., 2011). We are grateful to the yt development team and community for their support. This research has made use of NASA's Astrophysics Data Systems. We thank the anonymous referee for their insightful comments and suggestions that led to improvements in our manuscript.
## Data Availability
The data underlying this article will be shared on reasonable request to the corresponding authors.
|
2307.14977 | Accelerated Particle Detectors with Modified Dispersion Relations | There is increasing interest in discrete or "pixelated" spacetime models as a
foundation for a satisfactory theory of quantum gravity. If spacetime possesses
a cellular structure, there should be observable consequences: for example, the
vacuum becomes a dispersive medium. Of obvious interest are the implications
for the thermodynamic properties of quantum black holes. As a first step to
investigating that topic, we present here a calculation of the response of a
uniformly accelerating particle detector in the (modified) quantum vacuum of a
background pixelated spacetime, which is well known to mimic some features of
the Hawking effect. To investigate the detector response we use the standard
DeWitt treatment, with a two-point function modified to incorporate the
dispersion. We use dispersion relations taken from the so-called doubly special
relativity (DSR) and Ho\v{r}ava-Lifshitz gravity. We find that the correction
terms retain the Planckian nature of particle detection, but only for
propagation faster than the speed of light, a possibility that arises in this
treatment because the dispersion relations violate Lorentz invariance. A fully
Lorentz-invariant theory requires additional features; however, we believe the
thermal response will be preserved in the more elaborate treatment. | Paul C. W. Davies, Philip Tee | 2023-07-27T16:14:53Z | http://arxiv.org/abs/2307.14977v1 | # Accelerated Particle Detectors with Modified Dispersion Relations.
###### Abstract
There is increasing interest in discrete or 'pixelated' spacetime models as a foundation for a satisfactory theory of quantum gravity. If spacetime possesses a cellular structure, there should be observable consequences: for example, the vacuum becomes a dispersive medium. Of obvious interest are the implications for the thermodynamic properties of quantum black holes. As a first step to investigating that topic, we present here a calculation of the response of a uniformly accelerating particle detector in the (modified) quantum vacuum of a background pixelated spacetime, which is well known to mimic some features of the Hawking effect. To investigate the detector response we use the standard DeWitt treatment, with a two-point function modified to incorporate the dispersion. We use dispersion relations taken from the so-called doubly special relativity (DSR) and Horava-Lifshitz gravity. We find that the correction terms retain the Planckian nature of particle detection, but only for propagation faster than the speed of light, a possibility that arises in this treatment because the dispersion relations violate Lorentz invariance. A fully Lorentz-invariant theory requires additional features; however, we believe the thermal response will be preserved in the more elaborate treatment.
## I Introduction and Background
Much of theoretical physics is formulated on the assumption that spacetime is continuous and maps to the real numbers. While this is an obvious idealization, convenient for calculation, it rarely presents difficulties. However, in quantum field theory, spacetime continuity leads to divergences that must be evaded by renormalization - an ad hoc procedure. Worse still, many formulations of quantum gravity are non-renormalizable.
What happens if spacetime continuity is discarded? There is a rich history of models in which spacetime emerges in the macroscopic limit from some sort of discrete or pixelated substructure [1; 2; 3; 4; 5; 6; 7]. Quantum gravity defines a fundamental length scale, the Planck length \(l_{p}\equiv\sqrt{\frac{\hbar G}{c^{3}}}=1.616\times 10^{-35}\ m\), which provides a natural measure of the 'pixel' size, and a natural cut-off energy for any incipient ultra-violet divergence. Although 'pixelating' spacetime, i.e. replacing real numbers by a countable infinity, is quite probably a placeholder for some more nuanced micro-structure, or pre-geometry [8], it is interesting to explore the impact of this simple modification to determine whether there are any observable consequences. One of the most basic spacetime structures, and the most intensively studied, is the black hole. How might spacetime discreteness affect its properties, such as Hawking radiation? As a first step to addressing this question, we here investigate the response of an accelerating particle detector, the so-called Davies-Fulling-Unruh effect, known to mimic Hawking radiation, in a pixelated spacetime background.
Although the existence of a fixed fundamental length scale violates Lorentz invariance, Minkowski space may be retained if momentum space acquires non-zero curvature, a construction known as Doubly Special Relativity (DSR) [1; 9; 10]. DSR may be generalized to curved spacetime by replacing locally Minkowski space with locally de Sitter space [11].
The effect of introducing curvature in momentum space is to render the vacuum a dispersive medium in which different frequency light waves propagate at different speeds. This can be modelled by adding successively higher powers of the three-momentum \(p\) to the standard energy-momentum dispersion relations,
\[E^{2}=p^{2}+m^{2}+\frac{\kappa_{1}}{M_{p}}p^{3}+\frac{\kappa_{2}}{M_{p}^{2}}p ^{4}+\ldots, \tag{1}\]
where the dimensionless kappa coefficients are arbitrary at this stage. The dispersion relation Eq. (1) on its own puts the theory in violation of Lorentz invariance, a defect that may, however, be remedied in a more elaborate treatment that involves changing the momentum measure [12; 13]. We defer the complete DSR treatment in this paper and restrict our analysis to the effect of the dispersion relation alone on the response of a particle detector.
If the vacuum really is dispersive, then there should be observable effects. Indeed, it is known that the \(p^{3}\) term produces a correction to the black hole entropy which is usually discarded by setting \(\kappa_{1}\) to zero [14], a practice to which we will adhere in this paper, focusing instead on the consequences of non-zero \(p^{4}\) and \(p^{6}\) terms (the latter arising in Horava-Lifshitz theories of gravity (HLG) [15]). Modifications to the dispersion relations in turn alter the Green's functions of any quantized field theory, which we have previously explored in the context of the effect of these changes on the bending of light by massive bodies [16]. The same Green's functions can be adapted for use in calculating the response of a DeWitt-Unruh particle
detector. The principle results of this calculation are presented in Section IV, with the details covered in the appendices.
A similar calculation was performed by Rinaldi [17], but crucially only the case of positive \(\kappa_{2}\) was considered, and higher order terms in \(p\) were neglected. The approach taken there was to treat the higher order terms in the dispersion relationship perturbatively, whereas our calculation is more direct. For positive \(\kappa\) we obtain a similar result to Rinaldi, but our method is more general and able to accommodate negative as well as positive correction terms.
The layout of the paper is as follows. We start with an overview of the effect of the extra terms on the propagation of waves through the vacuum in Section II, and establish that \(\kappa\) controls whether propagation is sub- or superluminal. In Section III we present the modified Hadamard functions that we use in Section IV to compute the corrections to the detector response functions. We then briefly consider the effect of higher order terms in the dispersion relations in Section V, before concluding with a discussion of the impact of these results on other semi-classical phenomena in Section VI.
## II Modified dispersion relation and refractive index
We restrict the discussion here to the modified dispersion relation,
\[E^{2}=p^{2}+\kappa\eta^{2}p^{4}+m^{2}, \tag{2}\]
where \(\eta\) is a characteristic length scale related to the pixelation scale, and \(\kappa\) controls the sign of the correction. In the massless limit with units \(\hbar=1,E=\omega\), and \(p=k\) we note that a scalar field \(\phi(t,x)\) obeys the modified free space wave equation,
\[\frac{\partial^{2}\phi}{\partial t^{2}}-\frac{\partial^{2}\phi}{\partial x^{2 }}+\kappa\eta^{2}\frac{\partial^{4}\phi}{\partial x^{4}}=0, \tag{3}\]
which has a set of solutions \(\phi(t,x)=e^{\pm(\omega t-k\cdot x)},e^{\pm i(\omega t-k\cdot x)}\), shown to be complete by computation of the Wronskian. The key result is that, although Eq. (3) has wavelike solutions, the vacuum itself is dispersive and possesses a refractive index \(n(k)\) given by,
\[n(k)=\frac{\sqrt{1+\kappa\eta^{2}k^{2}}}{1+2\kappa\eta^{2}k^{2}}, \tag{4}\]
a result that extends to spin 1 (photons) and (linearized) spin 2 (gravitons) too, and broadly follows the analysis by Myers et al [18]. The fact that the speed of wave propagation depends on the frequency introduces some peculiar features. Note that for \(\kappa>0\) high frequency radiation propagates faster than low frequency radiation, and all waves propagate at a speed \(>1\), i.e. faster than the speed of light in the unmodified case. For the case \(\kappa\leq 0\) the propagation speed is \(\leq 1\) and slows as \(k\) increases, albeit slowly. There is a singularity at \(k=\frac{1}{\eta\sqrt{2|\kappa|}}\), set by the pixelation length, which might naturally be associated with the Planck length. However, for the purposes of this paper, we shall sidestep the choice of length scale, as the key issue from the above discussion is the sign rather than the value of \(\kappa\).
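A short numerical illustration of Eq. (4) is given below (our own sketch; we set \(\eta=1\) so that \(k\) is measured in units of the inverse pixelation length):

```python
import numpy as np

def n_index(k, kappa, eta=1.0):
    """Refractive index of Eq. (4): n = sqrt(1 + kappa eta^2 k^2) / (1 + 2 kappa eta^2 k^2)."""
    x = kappa * (eta * k) ** 2
    return np.sqrt(1.0 + x) / (1.0 + 2.0 * x)

k = np.array([0.0, 0.1, 0.3, 0.5])
print("kappa = +1:", n_index(k, +1.0))   # n < 1: propagation faster than light
print("kappa = -1:", n_index(k, -1.0))   # n > 1, diverging as k -> 1/(eta*sqrt(2))
```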
There is the obvious problem of reconciling superluminal propagation with relativistic causality. This is of course expected since the dispersion relations from DSR and HLG explicitly violate Lorentz invariance. Locally Lorentz invariant discrete spacetimes theories can be constructed by introducing curvature in momentum space [19]. We think our principal result will remain true in the more general case.
## III Green's functions in position space
In the massive case, Eq. (2) leads to a modified propagator in n-dimensional momentum space, and associated Feynman diagram [16],
\[G(p)=\frac{i}{E^{2}-p^{2}-\kappa\eta^{2}p^{4}-m^{2}+i\epsilon}. \tag{5}\]

The corresponding position-space two-point function is obtained from

\[D(t,r)=\int\frac{\mathrm{d}^{n-1}p}{(2\pi)^{n-1}}\,\frac{1}{2E_{p}}\,e^{-i(E_{p}t-\mathbf{p}\cdot\mathbf{r})}. \tag{6}\]

Evaluating this integral (see Appendix A) in the massless limit, \(3+1\) case, we find for time-like separation,
\[D^{+}(t,r)=\frac{-\theta(\sigma^{2})}{4\pi^{2}(\sigma^{2}-i\epsilon)}\left\{1- \frac{\kappa\eta^{2}}{(\sigma^{2}-i\epsilon)}\right\}, \tag{7}\]
and for the (retarded) space-like case,
\[D^{-}(t,r)=\frac{\theta(-\sigma^{2})}{4\pi^{2}(-\sigma^{2}+i\epsilon)}\left\{1 +\frac{\kappa\eta^{2}}{(-\sigma^{2}+i\epsilon)}\right\}. \tag{8}\]
Here we denote by \(\sigma^{2}=t^{2}-r^{2}\) the invariant interval, with \(\theta(x)\) being the usual Heaviside step function. In the case of \(\eta=0\) we recover the standard result [21].
## IV Accelerated Detectors
Following §3.3 of [20], we can use the correction to the propagator to investigate the response of an accelerated particle in pixelated spacetime by leveraging the modifications to the dispersion relations from DSR. We consider a particle detector undergoing uniform acceleration, with constant inverse proper acceleration \(\alpha\), executing a hyperbolic path in Minkowski space parameterized by proper time \(\tau\) according to,
\[x=y=0,\ z=(t^{2}+\alpha^{2})^{\frac{1}{2}},\ \alpha=\text{const, such that},\] \[z=\alpha\cosh\frac{\tau}{\alpha},\ t=\alpha\sinh\frac{\tau}{ \alpha},\]
The standard DeWitt-Unruh detector [22; 23; 24] leads to the transition probability per unit proper time,
\[c^{2}\sum_{E}\left|\left\langle E\right|m(0)\left|E_{0}\right\rangle\right|^{ 2}\int\limits_{-\infty}^{\infty}\text{d}(\Delta\tau)\ e^{i(E-E_{0})\Delta\tau}D ^{+}(\Delta\tau), \tag{9}\]
and the detector response function,
\[\mathcal{F}(E)=\int\limits_{-\infty}^{\infty}\text{d}(\Delta\tau)\ e^{i(E-E_{0 })\Delta\tau}D^{+}(\Delta\tau). \tag{10}\]
The response function is modified by the correction term in the propagator (see Appendix B) to yield,
\[\mathcal{F}(E)=\left\{\left(1+\frac{\kappa\eta^{2}}{6\alpha^{2}}\right)\frac {E-E_{0}}{e^{2\pi(E-E_{0})\alpha}-1}+\left(\frac{\kappa\eta^{2}}{6}\right) \frac{(E-E_{0})^{3}}{e^{2\pi(E-E_{0})\alpha}-1}\right\} \tag{11}\]
which retains a thermal character. We also show (Appendix B) that the modified Green's function remains periodic in imaginary time with period \(2\pi\alpha\), as in the standard (unmodified) treatment.
Although the modification to the detector response looks innocuous at first sight, it contains some troublesome features. If \(\kappa<0\) (subluminal propagation) the transition probability becomes negative for \(\left|\kappa\right|>\frac{6\alpha^{2}}{\eta^{2}}\). This may point to a breakdown of unitarity or some other pathology with the theory, but only in cases of extreme acceleration (very small \(\alpha\)). Our results would still be valid over a wide range of accelerations. Alternatively, one might simply rule out values of \(\kappa<0\), corresponding to subluminal propagation, as unphysical.
In the case of \(\kappa>0\) (superluminal propagation), the detector response function has no worrying negative terms that would produce a pathological transition probability. In some sense this is consistent with the fact that the calculation is not Lorentz invariant, and so by definition places no restriction on propagation velocities. This was essentially the case considered by Rinaldi [17], and we need to perform the full Lorentz invariant calculation to investigate whether this behavior survives.
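The sign behavior described above can be checked directly from Eq. (11); the following sketch (in units with \(\hbar=c=1\) and with \(\eta\) and \(E-E_{0}\) set to unity, which is our own normalization for illustration) evaluates the modified response for both signs of \(\kappa\):

```python
import numpy as np

def response(dE, alpha, kappa, eta):
    """Modified detector response function of Eq. (11)."""
    planck = dE / np.expm1(2.0 * np.pi * dE * alpha)
    return (1.0 + kappa * eta**2 / (6.0 * alpha**2)) * planck \
           + (kappa * eta**2 / 6.0) * dE**2 * planck

dE, eta = 1.0, 1.0
for alpha in (1.0, 0.3):
    print(f"alpha = {alpha}: kappa=+1 -> {response(dE, alpha, +1, eta):+.4e}, "
          f"kappa=-1 -> {response(dE, alpha, -1, eta):+.4e}")
# For kappa = -1 the leading coefficient 1 + kappa*eta^2/(6*alpha^2) changes sign at
# |kappa| = 6*alpha^2/eta^2, so the response turns negative at sufficiently small alpha.
```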
## V Effect of Higher Terms on the Result
We now briefly consider higher order terms in the dispersion relations. It was remarked in Section I that HLG can introduce a \(p^{6}\) term into the dispersion relations. The central idea of HLG is an anisotropic scaling of space and time, achieved by introducing a scaling parameter \(b\) and critical exponent \(z\), such that \(\mathbf{x}\to b\,\mathbf{x}\) and \(t\to b^{z}t\). As a quantized theory this becomes power-counting renormalizable at a value of \(z=3\)[15], and we focus on the modifications this makes to the dispersion relations. The anisotropic scaling introduces new diffeomorphisms, which in turn introduce additional terms to the gravitational action, in particular a potential term that alters the dispersion relations. In the high-energy (UV) limit these modifications suggest the following form of the dispersion relations,
\[E^{2}=p^{2}+m^{2}+\kappa\eta^{2}p^{4}-\kappa\eta^{4}p^{6}. \tag{12}\]
The additional \(p^{6}\) term is explicitly set to have the opposite sign to the \(p^{4}\) term, as required to ensure that the sign of \(\kappa\) dictates whether propagation is super or subluminal [19; 25].
We can formulate an associated equation of motion to the dispersion relationship Eq. (12) to determine the refractive index of the vacuum. This modifies the equation
of motion Eq. (3) to
\[\frac{\partial^{2}\phi}{\partial t^{2}}-\frac{\partial^{2}\phi}{\partial x^{2}}+ \kappa\eta^{2}\frac{\partial^{4}\phi}{\partial x^{4}}-\kappa\eta^{4}\frac{ \partial^{6}\phi}{\partial x^{6}}=0, \tag{13}\]
with the corresponding momentum-dependent refractive index,
\[n(k)=\frac{\sqrt{1+\kappa\eta^{2}k^{2}+\kappa\eta^{4}k^{4}}}{1+2\kappa\eta^{2 }k^{2}+3\kappa\eta^{4}k^{4}}. \tag{14}\]
As before, for \(\kappa>0\) propagation is superluminal, and \(\kappa<0\) restores subluminal propagation.
The impact of this new dispersion relation on the detector response function is straightforward to calculate as detailed in Appendix C and we obtain the following modification to the detector response function,
\[\mathcal{F}(E)=\left\{\left(1+\frac{\kappa\eta^{2}}{6\alpha^{2}}-\frac{3 \kappa^{\prime}\eta^{4}}{5\alpha^{4}}\right)\frac{E-E_{0}}{e^{2\pi(E-E_{0}) \alpha}-1}+\left(\frac{\eta^{2}[\kappa+12\kappa^{\prime}\eta^{2}]}{6}\right) \frac{(E-E_{0})^{3}}{e^{2\pi(E-E_{0})\alpha}-1}+\left(\frac{\kappa^{\prime} \eta^{4}}{10}\right)\frac{(E-E_{0})^{5}}{e^{2\pi(E-E_{0})\alpha}-1}\right\}. \tag{15}\]
The calculation introduces a new sign term \(\kappa^{\prime}=\kappa\left(\frac{3}{4}\kappa+1\right)\), but for \(\kappa=\pm 1\) it has the same polarity as the original term (i.e. for \(\kappa=\pm 1,\kappa^{\prime}=\pm 1\)).
The first thing to note is that the additional contributions arising from the \(p^{6}\) term retain the Planck factor, and so preserve the thermal nature of the response. However, as in Eq. (11), we have problematic features that arise in the case of subluminal propagation where \(\kappa,\kappa^{\prime}<0\). The new terms in the detector function pick up a coefficient of \(\eta^{4}\) from the dispersion relation, and are therefore much smaller in magnitude than the \(p^{4}\) terms. The only hope for restoring positive definiteness would arise from the \(\frac{3\kappa^{\prime}\eta^{4}}{5\alpha^{4}}\) coefficient of the first term overwhelming the \(\frac{\kappa\eta^{2}}{6\alpha^{2}}\) from the original calculation. If we assume \(\eta\sim\frac{1}{M_{p}}\), this would only be the case for very small accelerations of the order of a few \(\mathrm{m\,s^{-2}}\), or \(\alpha\sim c\). At this point the absence of a factor of \(\alpha\) in the second Planck term would cause that term to dominate the sign of \(\mathcal{F}(E)\) and give it an overall negative value. The converse argument for superluminal propagation (\(\kappa,\kappa^{\prime}>1\)) confirms that the detector response function remains positive across the full range of \(\alpha\) in this case.
We can therefore conclude that the detector response function remains potentially problematic for subluminal propagation, and the possibility exists that the transition probability will then also be negative. On dimensional grounds, for higher powers of \(p\) in the dispersion relation, the corrections will acquire increasing powers of \(\eta\) and follow the same pattern for their contribution to the response function. One would expect that similar arguments would apply to those corrections, and the detector response function will cease to be positive definite in the case of subluminal propagation. We conclude that any route to cancellation of the \(p^{4}\) term from higher order terms in dispersion relations appears shut off.
## VI Conclusion and discussion
The principal result of this paper is the discovery of severe pathologies in the calculation of the Davies-Fulling-Unruh effect in the case of discrete spacetime theories, unless we are willing to accept superluminal propagation. Although we use the modified dispersion relations derived from DSR, a fully Lorentz-invariant treatment that accommodates a fundamental length, requires additional features in the theory that we shall address in a future paper. The close relationship between the Davies-Fulling-Unruh effect, systems of accelerating mirrors and Hawking radiation makes this result even more interesting. Adding higher order terms in the dispersion relations does not provide a remedy to the issue. In fact due to the relative size of contribution at higher powers of \(p\) and the appearance of powers of inverse acceleration, the likelihood of cancellation from higher order momentum terms in the dispersion relationship appears low. One is led to the conclusion that the coarse-grained structure of spacetime may significantly modify, or even suppress, thermal vacuum effects for certain parameter ranges. It is well recognized that introducing a simple cut-off in trans-Planckian modes occasioned by the existence of a fundamental length is problematic for some derivations of the Hawking effect [26; 27]. Further investigation of how vacuum dispersion affects the calculation of the Bogoliubov transformation in the black hole radiance calculation may clarify these issues.
In particular, our principal conclusion, that pathologies in the detector response function arise when considering subluminal propagation, may not survive the re-imposition of Lorentz invariance. The question of whether this is indeed the case is the subject of ongoing investigation.
## Appendix A Computation of position space Green's function in 3+1 spacetime
### First method of computation
We start with the massless propagator as defined in Eq. (6), which we convert to \(n\)-dimensional spherical polar momentum space to obtain,
\[D(t,r)=\frac{2(\pi^{\frac{n-2}{2}})}{(2\pi)^{n-1}\Gamma(\frac{n-2}{2})}\int \limits_{0}^{\pi}\sin^{n-3}\theta\;\mathrm{d}\theta\int\limits_{0}^{\infty} \frac{\mathrm{d}^{n}p}{2E_{p}}e^{-i(E_{p}t-pr)}.\]
The angular integral can be performed by analytic continuation of the standard result from [28] (pg. 346. 3.387 (1) ),
\[\int\limits_{0}^{\pi}\sin^{k}\theta e^{ipr\cos\theta}\;\mathrm{d}\theta=\sqrt{ \pi}\left(\frac{2}{pr}\right)^{\frac{k}{2}}\Gamma\left(\frac{k+1}{2}\right)J_{ \frac{k}{2}}(pr),\]
where \(J_{n}(x)\) are the Bessel functions of order \(n\). To remove clutter in what follows, we define,
\[\zeta=\frac{1}{2(2\pi)^{(n-1)/2}r^{(n-3)/2}}. \tag{10}\]
Using this result and inserting our modified form for \(E_{p}\), we have the following integral to perform,
\[D(t,r)=\zeta\int\limits_{0}^{\infty}\mathrm{d}p\;\frac{p^{\frac{n-3}{2}}}{(1+ \kappa\eta^{2}p^{2})^{1/2}}J_{\frac{n-3}{2}}(pr)e^{-ip(1+\kappa\eta^{2}p^{2})^ {1/2}t}. \tag{11}\]
Exploiting the fact that \(\kappa\eta^{2}\) is very small, we can expand the factor \((1+\kappa\eta^{2}p^{2})^{-\frac{1}{2}}\) to first order in \(\kappa\eta^{2}\), to obtain,
\[D(t,r)=\zeta\int\limits_{0}^{\infty}\mathrm{d}p\;p^{\frac{n-3}{2}}J_{\frac{n-3}{2}}(pr)e^{-ip(1+\kappa\eta^{2}p^{2})^{1/2}t}-\zeta\frac{\kappa\eta^{2}}{2}\int\limits_{0}^{\infty}\mathrm{d}p\;p^{\frac{n+1}{2}}J_{\frac{n-3}{2}}(pr)e^{-ip(1+\kappa\eta^{2}p^{2})^{1/2}t}\]
One can now expand the \((1+\kappa\eta^{2}p^{2})^{\frac{1}{2}}\) in the exponential,
\[e^{-ip(1+\kappa\eta^{2}p^{2})^{1/2}t}=e^{-ipt}e^{-i\frac{\kappa\eta^{2}}{2}p^ {3}t}.\]
For the second factor, for small \(\kappa\eta^{2}\) we can further expand the exponential to first order, obtaining \(e^{-i\frac{\kappa\eta^{2}}{2}p^{3}t}=1-i\frac{\kappa\eta^{2}}{2}p^{3}t\). The correction would give a term proportional to \(t\), which we are free to disregard as it is not an invariant quantity: we can rotate to a frame where \(r=\theta(-\sigma^{2})\sqrt{r^{2}-t^{2}+i\epsilon},\ t=0\), as in the \(1+1\) propagator case. This will compute the retarded or space-like massless propagator \(D^{-}(t,r)\).
We are left with the following integrals to evaluate,
\[D^{-}(t,r)= \zeta\int\limits_{0}^{\infty}\mathrm{d}p\;p^{\frac{n-3}{2}}J_{ \frac{n-3}{2}}(pr)e^{-ipt}\] \[-\zeta\frac{\kappa\eta^{2}}{2}\int\limits_{0}^{\infty}\mathrm{d} p\;p^{\frac{n+1}{2}}J_{\frac{n-3}{2}}(pr)e^{-ipt}\]
To solve these we analytically continue the standard integral from [28] (pg. 694. 6.623 (1)),
\[\int\limits_{0}^{\infty}e^{-\gamma x}J_{\nu}(\beta x)x^{\nu}\;\mathrm{d}x= \frac{(2\beta)^{\nu}\Gamma(\nu+\frac{1}{2})}{\sqrt{\pi}(\gamma^{2}+\beta^{2})^ {\nu+\frac{1}{2}}}. \tag{12}\]
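As a sanity check (not part of the derivation), this formula can be verified numerically for real, positive \(\gamma\) before invoking the analytic continuation:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import jv, gamma

def lhs(nu, beta, g):
    # direct numerical evaluation of the Bessel-Laplace integral
    val, _ = quad(lambda x: np.exp(-g * x) * jv(nu, beta * x) * x**nu, 0.0, np.inf, limit=200)
    return val

def rhs(nu, beta, g):
    # closed form quoted above
    return (2.0 * beta) ** nu * gamma(nu + 0.5) / (np.sqrt(np.pi) * (g**2 + beta**2) ** (nu + 0.5))

nu, beta, g = 0.5, 1.0, 2.0
print(lhs(nu, beta, g), rhs(nu, beta, g))   # both ~ 0.1596 for these parameters
```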
It will be convenient to subtract a small imaginary component \(i\epsilon\) from \(t\) to allow convergence of the integral later, replacing \(t^{2}\) with \(t^{2}-i\epsilon\) after discarding the term of \(\mathcal{O}(\epsilon^{2})\).
For the first integral we have,
\[\zeta\int\limits_{0}^{\infty}\mathrm{d}p\;p^{\frac{n-3}{2}}J_{\frac{n-3}{2}}( pr)e^{-ipt}=\frac{\Gamma(\frac{n}{2}-1)}{4(\pi)^{\frac{n}{2}}(r^{2}-t^{2}+i \epsilon)^{\frac{n}{2}-1}}. \tag{13}\]
The second integral is more problematic, but can be massaged into standard forms by taking advantage of the differential recurrence relationship of the Bessel functions and integrating by parts. We note,
\[\frac{\mathrm{d}}{\mathrm{d}x}\left\{x^{\nu}J_{\nu}(\beta x)\right\}=\beta x^{\nu}J_{\nu-1}(\beta x). \tag{14}\]
One can now write the integrand as,
\[\zeta\frac{\kappa\eta^{2}}{2}\int\limits_{0}^{\infty}\mathrm{d}p \;p^{\frac{n+1}{2}}J_{\frac{n-3}{2}}(pr)e^{-ipt}\] \[=\zeta\frac{\kappa\eta^{2}}{2r}\int\limits_{0}^{\infty}\mathrm{d}p \;pe^{-ipt}\frac{\mathrm{d}}{\mathrm{d}p}\left\{p^{\frac{n-1}{2}}J_{\frac{n-1 }{2}}(pr)\right\}.\]
This is in the form of \(\int u\mathrm{d}v=uv-\int v\mathrm{d}u\), with \(u=pe^{-ipt}\) and \(v=p^{\frac{n-1}{2}}J_{\frac{n-1}{2}}(pr)\). Rotation of the time axis by an arbitrary small angle \(\delta\leq\pi/2\), such that \(t->\infty e^{-i\delta}\) in the upper limit of the integral, ensures the convergence of \([uv]_{0}^{\infty}\), and we are left with,
\[\zeta\frac{\kappa\eta^{2}}{2r}\int\limits_{0}^{\infty}\mathrm{d}p \;pe^{-ipt}\frac{\mathrm{d}}{\mathrm{d}p}\left\{p^{\frac{n-1}{2}}J_{\frac{n-1 }{2}}(pr)\right\}=\] \[-\zeta\frac{\kappa\eta^{2}}{2r}\int\limits_{0}^{\infty}\mathrm{d}p \;e^{-ipt}p^{\frac{n-1}{2}}J_{\frac{n-1}{2}}(pr)\] \[+\zeta\frac{\kappa\eta^{2}}{2r}it\int\limits_{0}^{\infty}\mathrm{d}p \;e^{-ipt}p^{\frac{n+1}{2}}J_{\frac{n-1}{2}}(pr).\]
The second of these integrals can be discarded due to the leading factor of \(it\) as above, and the first is in the standard form used earlier. Using the standard integral Eq. (12) we have the result for this correction to order \(\eta^{2}\),
\[\zeta\frac{\kappa\eta^{2}}{2}\int\limits_{0}^{\infty}\mathrm{d}p\ p^{\frac{n+1}{2}}J_{\frac{n-3}{2}}(pr)e^{-ipt}=-\frac{\kappa\eta^{2}\Gamma(\frac{n}{2})}{4(\pi)^{\frac{n}{2}}(r^{2}-t^{2}+i\epsilon)^{\frac{n}{2}}}. \tag{11}\]
Substituting \(n=4\) into the two results above, with \(\sigma^{2}=t^{2}-r^{2}\) being the invariant interval, we obtain our final result for the propagator to first order in \(\kappa\),
\[D^{-}(t,r)=\frac{\theta(-\sigma^{2})}{4\pi^{2}(-\sigma^{2}+i\epsilon)}\left\{ 1+\frac{\kappa\eta^{2}}{(-\sigma^{2}+i\epsilon)}\right\}. \tag{12}\]
This has the corresponding advanced or time-like counterpart,
\[D^{+}(t,r)=\frac{-\theta(\sigma^{2})}{4\pi^{2}(\sigma^{2}-i\epsilon)}\left\{ 1-\frac{\kappa\eta^{2}}{(\sigma^{2}-i\epsilon)}\right\}. \tag{13}\]
### Second method of computation
We start with Eq. (12), however we proceed differently. Instead we make the simplification \(p(1+\kappa\eta^{2}p^{2})^{1/2}\sim\sqrt{\kappa}\eta p^{2}\) in the exponential, we can also expand the \((1+\kappa\eta^{2}p^{2})\) for small \(\kappa\eta^{2}\) in the denominator to \((1-\frac{\kappa\eta^{2}}{2}p^{2}+\cdots+\mathcal{O}(\kappa^{2}\eta^{4}))\), and following a change of variable \(x=p^{2}\), we have,
\[D(t,r)=\frac{\zeta}{2}\int\limits_{0}^{\infty}\mathrm{d}x\ \left(x^{\frac{n-5}{4}}-\frac{\kappa\eta^{2}}{2}x^{\frac{n-1}{4}} \right)J_{\frac{n-3}{2}}(r\sqrt{x})e^{-i\sqrt{\kappa}\eta xt}.\]
These gives two standard integrals [28] (pg. 701. 6.643(i)) of the form,
\[\int\limits_{0}^{\infty}\mathrm{d}x\ x^{\mu-1/2}e^{-ax}J_{2\nu}(2b \sqrt{x})= \tag{14}\] \[\frac{\Gamma(\mu+\nu+1/2)}{b\Gamma(2\nu+1)}\exp\frac{-b^{2}}{2a} a^{-\mu}M_{\mu,v}\left(\frac{b^{2}}{a}\right),\]
where \(M_{\mu,v}(x)\) is the Whittaker special function, and we analytically continue to imaginary exponents. For our integrals,
\[a =i\sqrt{\kappa}\eta t,\] \[b =\frac{r}{2},\]
and we have and two pairs of parameters \(\mu,\nu\). These are
\[\mu =\frac{n-3}{4}, \nu =\frac{n-3}{4},\,\text{for}\ x^{\frac{n-5}{4}},\,\text{and,}\] \[\mu =\frac{n+1}{4}, \nu =\frac{n-3}{4},\,\text{for}\ x^{\frac{n-1}{4}}.\]
Completing the substitutions we have for our Green's function,
\[D(t,r)=\frac{\zeta}{r}\exp\biggl{\{}\frac{-r^{2}}{8i\sqrt{\kappa }\eta t}\biggr{\}}\times \tag{15}\] \[\left[\frac{\Gamma(\frac{n-2}{2})}{\Gamma(\frac{n-1}{2})}(i\sqrt {\kappa}\eta t)^{-(\frac{n-3}{4})}M_{\frac{n-3}{4},\frac{n-3}{4}}\left(\frac {r^{2}}{4i\sqrt{\kappa}\eta t}\right)-\frac{\kappa\eta^{2}\Gamma(\frac{n}{2}) }{2\Gamma(\frac{n-1}{2})}(i\sqrt{\kappa}\eta t)^{-(\frac{n+1}{4})}M_{\frac{n+ 1}{4},\frac{n-3}{4}}\left(\frac{r^{2}}{4i\sqrt{\kappa}\eta t}\right)\right]\]
This unpleasant looking result collapses rather nicely in the limit of \(t\to 0\), when the argument of the Whittaker function becomes infinite. The Whittaker function \(M_{\mu,\nu}(z)\) has the following asymptotic behavior in the case of \(z\rightarrow\infty\),
\[\lim_{z->\infty}M_{\mu,\nu}(z)\sim\frac{\Gamma(1+2\nu)}{\Gamma(\frac{1}{2}+ \nu-\mu)}e^{\frac{1}{2}z}z^{-\mu}. \tag{16}\]
The exponential term is the reciprocal of the \(\exp\Bigl{\{}\frac{-r^{2}}{8i\eta t}\Bigr{\}}\) term before the brackets, and the \(z^{-\mu}\) cancels the \((i\sqrt{\kappa}\eta t)^{\mu}\) terms preceding the Whittaker functions, leaving the \(\frac{r^{2}}{4}\) factors. In this limit therefore, the Green's function becomes,
\[D(t,r)= \tag{17}\] \[\frac{\zeta}{r}\left[\frac{\Gamma(\frac{n-2}{2})}{\Gamma(\frac{n -1}{2})}\left(\frac{r^{2}}{4}\right)^{-(\frac{n-3}{4})}-\frac{\kappa\eta^{2}} {2}\frac{\Gamma(\frac{n}{2})}{\Gamma(-\frac{1}{2})}\left(\frac{r^{2}}{4}\right) ^{-(\frac{n+1}{4})}\right].\]
When taking the limit \(t\to 0\), we should interpret \(r^{2}\) as being the spacelike interval, \(r^{2}=\theta(-\sigma^{2})(-\sigma^{2}+i\epsilon)\), and the propagator as the retarded or space-like propagator \(D^{-}(t,r)\). Setting \(n=4\) and making this substitution for \(r\) we have,
\[D^{-}(t,r)=\frac{\theta(-\sigma^{2})}{4\pi^{2}(-\sigma^{2}+i\epsilon)}\left[1+ \frac{\kappa\eta^{2}}{(-\sigma^{2}+i\epsilon)}\right]. \tag{18}\]
We can further make the substitution
\(-(\sigma^{2}-i\epsilon)\) to recover the advanced or time-like propagator,
\[D^{+}(t,r)=\frac{-\theta(\sigma^{2})}{4\pi^{2}(\sigma^{2}-i\epsilon)}\left[1- \frac{\kappa\eta^{2}}{(\sigma^{2}-i\epsilon)}\right]. \tag{50}\]
It will be noted that this result is identical to that obtained using the first method in Appendix A.1.
## Appendix B Detector function calculation
To compute the transition probability we need to perform the integral in Eq. (10), and follow a calculation similar to that in [20]. To make progress we use the modified \(3+1\) propagator Eq. (8), choosing the advanced propagator (space-like) form,
\[D^{+}(\Delta t,\Delta z)= -\frac{1}{4\pi^{2}(\Delta t^{2}-\Delta z^{2}-i\epsilon)}\times\] \[\left\{1-\frac{\kappa\eta^{2}}{(\Delta t^{2}-\Delta z^{2}-i \epsilon)}\right\}.\]
Using the elementary properties of the hyperbolic functions, we have for \(\Delta t^{2}\) and \(\Delta z^{2}\),
\[\Delta t =2\alpha\cosh\left(\frac{\tau+\tau^{\prime}}{2\alpha}\right) \sinh\left(\frac{\tau-\tau^{\prime}}{2\alpha}\right)\] \[\Delta z =2\alpha\sinh\left(\frac{\tau+\tau^{\prime}}{2\alpha}\right) \sinh\left(\frac{\tau-\tau^{\prime}}{2\alpha}\right)\]
Inserting these back into the propagator we have,
\[\begin{split} D^{+}(\Delta\tau)=&-\frac{1}{16\pi^{2 }\alpha^{2}}\sinh^{-2}\left(\frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{\alpha }\right)\times\\ &\left[1-\frac{\kappa\eta^{2}}{4\alpha^{2}}\sinh^{-2}\left(\frac {\Delta\tau}{2\alpha}-\frac{i\epsilon}{\alpha}\right)\right].\end{split} \tag{51}\]
One can make use of the identities,
\[\sum_{k=-\infty}^{\infty}(x-\pi ik)^{-2} =\sinh^{-2}x,\] \[\sum_{k=-\infty}^{\infty}(x-\pi ik)^{-4} =\frac{1}{3}(\tanh^{-2}x\sinh^{-2}x+\sinh^{-4}x),\]
to allow us to restate the \(\sinh^{-4}\) term as,
\[\sinh^{-4}x=\sum_{k=-\infty}^{\infty}(x-\pi ik)^{-4}-\frac{2}{3}\sum_{k=- \infty}^{\infty}(x-\pi ik)^{-2}, \tag{52}\]
and then write the propagator as,
\[D^{+}(\Delta\tau)=-\frac{1}{4\pi^{2}}\left(1+\frac{\kappa\eta^{2}}{6\alpha^{2 }}\right)\sum_{k=-\infty}^{\infty}(\Delta\tau-2i\epsilon-2\pi i\alpha k)^{-2 }+\frac{\kappa\eta^{2}}{4\pi^{2}}\sum_{k=-\infty}^{\infty}(\Delta\tau-2i \epsilon-2\pi i\alpha k)^{-4} \tag{53}\]
This expression for the propagator can be substituted back into Eq. (9), and evaluated by reversing the order of the sum and the integral. To evaluate the integral we choose a semi-circular contour around the multiple pole at \(\Delta\tau=2\pi i\alpha k\), closed in the upper half plane. The integral then evaluates to \(2\pi i\) times the residue at \(2\pi i\alpha k\). These residues are found by elementary means to be \(-i(E-E_{0})e^{2\pi\alpha(E-E_{0})k}\), and \(\frac{i}{6}(E-E_{0})e^{2\pi\alpha(E-E_{0})k}\) respectively. We have for our two integrals the values,
\[\int\limits_{-\infty}^{\infty}\mathrm{d}(\Delta\tau)\,\frac{e^{-i (E-E_{0})\Delta\tau}}{(\Delta\tau-2i\epsilon-2\pi i\alpha k)^{2}} =2\pi(E-E_{0})e^{2\pi(E-E_{0})\alpha k}\] \[\int\limits_{-\infty}^{\infty}\mathrm{d}(\Delta\tau)\,\frac{e^{- i(E-E_{0})\Delta\tau}}{(\Delta\tau-2i\epsilon-2\pi i\alpha k)^{4}} =-\frac{\pi}{3}(E-E_{0})^{3}e^{2\pi(E-E_{0})\alpha k}\]
The sum over \(k\) is from \(k=-\infty\) to \(k=\infty\), but our choice of contour for the integral subtracts the contribution to the sum from \(k\in[-\infty,0)\) leaving the result as the sum of a geometric series. One obtains for the transition
probability,
\[\frac{c^{2}}{2\pi}\sum_{E}\left|\left\langle E\right|m(0)\left|E_{0}\right\rangle \right|^{2}\left\{\left(1+\frac{\kappa\eta^{2}}{6\alpha^{2}}\right)\frac{E-E_{0 }}{e^{2\pi(E-E_{0})\alpha}-1}+\left(\frac{\kappa\eta^{2}}{6}\right)\frac{(E-E_ {0})^{3}}{e^{2\pi(E-E_{0})\alpha}-1}\right\}. \tag{40}\]
And correspondingly for the detector response function,
\[\mathcal{F}(E)=\left\{\left(1+\frac{\kappa\eta^{2}}{6\alpha^{2}}\right)\frac{ E-E_{0}}{e^{2\pi(E-E_{0})\alpha}-1}+\left(\frac{\kappa\eta^{2}}{6}\right) \frac{(E-E_{0})^{3}}{e^{2\pi(E-E_{0})\alpha}-1}\right\}. \tag{41}\]
The Planck factor is unaffected by the modification to the propagator, and the presence of this factor suggests that the particles detected by the Unruh detector are essentially thermal. This can be further reinforced by examining the periodic behavior of the Green's function. It is well known [20] that a thermal Green's function \(G_{\beta}\) is periodic in imaginary time. In particular it satisfies \(G_{\beta}^{\mp}(t,\mathbf{x};t^{\prime},\mathbf{x}^{\prime})=G_{\beta}^{\mp} (t+i\beta,\mathbf{x};t^{\prime},\mathbf{x}^{\prime})\), where \(\beta=(kT)^{-1}\). It will be noted that for our smoothly accelerating detector the propagator Eq. (40) is composed of terms in \(\sinh^{-2}(\Delta t)\). It is well known that the hyperbolic functions are periodic with an imaginary period of \(2\pi i\), however, both of the \(\sinh\) terms are squared. This has the effect of changing the period to \(\pi i\). Inspection of Eq. (40), reveals that the period of the propagator is \(2\pi\alpha\), indicating that \(\beta=2\pi\alpha\) as suggested by the Planck factor in Eq. (11).
## Appendix C Computation of Horava-Lifshitz Corrections
As mentioned in Section I, and discussed in Section V, the Horava-Lifshitz theory of gravity can introduce higher order terms in \(p\) into the dispersion relationship. In general the \(p^{6}\) term has opposite sign to the \(p^{4}\) term, and so to asses the effect of light speed propagation on the result obtained in Section IV, we propose for a massless particle the following dispersion relation,
\[E^{2}=p^{2}+\kappa\eta^{2}p^{4}-\kappa\eta^{4}p^{6}, \tag{42}\]
where \(\eta^{4}\) is introduced on dimensional grounds.
We repeat our calculation in \(3+1\) dimensions to obtain the \(D^{+}(t,r)\) propagator as outlined in Appendix A.1. The crux of the calculation involves expanding the denominator of the integral measure, \(E_{p}=\sqrt{p^{2}+\kappa\eta^{2}p^{4}-\kappa\eta^{4}p^{6}}\), for small \(\kappa\eta^{2}\). We proceed as before, but collect terms up to \(p^{4}\), and as such we now have,
\[(E_{p})^{-1}\simeq p\left[1-\frac{1}{2}\kappa\eta^{2}p^{2}+\frac{1}{2}\kappa \left(\frac{3}{4}\kappa+1\right)\eta^{4}p^{4}+\ldots\right].\]
For brevity we will refer to \(\kappa^{\prime}=\kappa\left(\frac{3}{4}\kappa+1\right)\), but note for \(\kappa=1,\kappa^{\prime}=\frac{7}{4}\), and \(\kappa=-1,\kappa^{\prime}=-\frac{1}{4}\), preserving the opposite polarity of the effect to the \(p^{4}\) term.
Following the analysis in Appendix A.1, the expression for the space-like propagator acquires an additional integral contribution,
\[D^{-}(t,r)= \zeta\int\limits_{0}^{\infty}\mathrm{d}p\ p^{\frac{n-3}{2}}J_{ \frac{n-3}{2}}(pr)e^{-ipt}\] \[-\zeta\frac{\kappa\eta^{2}}{2}\int\limits_{0}^{\infty}\mathrm{d} p\ p^{\frac{n+1}{2}}J_{\frac{n-3}{2}}(pr)e^{-ipt}\] \[+\zeta\frac{\kappa^{\prime}\eta^{4}}{2}\int\limits_{0}^{\infty} \mathrm{d}p\ p^{\frac{n+5}{2}}J_{\frac{n-3}{2}}(pr)e^{-ipt},\]
after discarding the irrelevant contribution from expanding \(E_{p}\) in the exponential.
We can proceed identically for the third integral as undertaken for the second, making use of the Bessel function recursion relationship Eq. (41) twice. After manipulation the third integral is reduced to,
\[\frac{3\zeta\kappa^{\prime}\eta^{4}}{2r^{2}}\int\limits_{0}^{\infty}\mathrm{d} p\ p^{\frac{n+1}{2}}J_{\frac{n+1}{2}}(pr)e^{-ipt},\]
whose value can be read off from the standard integral Eq. (40), as,
\[\frac{3\Gamma(\frac{n+2}{2})\kappa^{\prime}\eta^{4}}{2(\pi)^{\frac{n}{2}}(r^ {2}-t^{2}+i\epsilon)^{\frac{n+2}{2}}}.\]
Setting \(n=4\), this can now be inserted into the expression for the propagator, and for the time-like advanced propagator we have,
\[D^{+}(t,r)=\frac{-\theta(\sigma^{2})}{4\pi^{2}(\sigma^{2}-i\epsilon)}\left\{1 -\frac{\kappa\eta^{2}}{(\sigma^{2}-i\epsilon)}+\frac{12\kappa^{\prime}\eta^{4 }}{(\sigma^{2}-i\epsilon)^{2}}\right\}. \tag{43}\]
We note the positive sign for the \((\sigma^{2}-i\epsilon)^{-2}\) term, which arises from the conversion of \((-\sigma^{2}+i\epsilon)\) to \((\sigma^{2}-i\epsilon)\) inside an overall cubic denominator.
Turning to the computation of the detector response function, we follow the calculation in Appendix B, noting that upon inserting the smoothly accelerating trajectory
we had an additional term in the propagator,
\[D^{+}(\Delta\tau)=-\frac{1}{16\pi^{2}\alpha^{2}}\sinh^{-2}\left( \frac{\Delta\tau}{2\alpha}-\frac{i\epsilon}{\alpha}\right)\times\] \[\left[1-\frac{\kappa\eta^{2}}{4\alpha^{2}}\sinh^{-2}\left(\frac{ \Delta\tau}{2\alpha}-\frac{i\epsilon}{\alpha}\right)+\frac{3\kappa^{\prime} \eta^{4}}{4\alpha^{4}}\sinh^{-4}\left(\frac{\Delta\tau}{2\alpha}-\frac{i \epsilon}{\alpha}\right)\right]. \tag{10}\]
Similar considerations to Eq. (11) allow us to substitute for the \(\sinh^{-6}\) term the following infinite sum,
\[\sinh^{-6}x=\sum_{k=-\infty}^{\infty}(x-\pi ik)^{-6}-\sum_{k=-\infty}^{\infty} (x-\pi ik)^{-4}-\frac{4}{5}\sum_{k=-\infty}^{\infty}(x-\pi ik)^{-2}. \tag{11}\]
The calculation proceeds identically and yields a similar result with additional correction terms at all orders of \((E-E_{0})\) in the detector response function,
\[\mathcal{F}(E)=\left\{\left(1+\frac{\kappa\eta^{2}}{6\alpha^{2}}-\frac{3 \kappa^{\prime}\eta^{4}}{5\alpha^{4}}\right)\frac{E-E_{0}}{e^{2\pi(E-E_{0}) \alpha}-1}+\left(\frac{\eta^{2}[\kappa+12\kappa^{\prime}\eta^{2}]}{6}\right) \frac{(E-E_{0})^{3}}{e^{2\pi(E-E_{0})\alpha}-1}+\left(\frac{\kappa^{\prime} \eta^{4}}{10}\right)\frac{(E-E_{0})^{5}}{e^{2\pi(E-E_{0})\alpha}-1}\right\}. \tag{12}\]
It was remarked in Appendix B that the propagator, being stated in terms of \(\sinh^{-2}\), is periodic with a period of \(\pi i\), which strengthens the interpretation of \(2\pi\alpha\) as the temperature of the radiation. Although we now have a term in \(\sinh^{-4}\), the period is the same and we can conclude that the propagator Eq. (10) satisfies \(G_{\beta}^{\pm}(t,\mathbf{x};t^{\prime},\mathbf{x}^{\prime})=G_{\beta}^{\mp}( t+i\beta,\mathbf{x};t^{\prime},\mathbf{x}^{\prime})\), where \(\beta=(kT)^{-1}=2\pi\alpha\).
|
2304.13590 | Synthetic Aperture Anomaly Imaging | Previous research has shown that in the presence of foliage occlusion,
anomaly detection performs significantly better in integral images resulting
from synthetic aperture imaging compared to applying it to conventional aerial
images. In this article, we hypothesize and demonstrate that integrating
detected anomalies is even more effective than detecting anomalies in
integrals. This results in enhanced occlusion removal, outlier suppression, and
higher chances of visually as well as computationally detecting targets that
are otherwise occluded. Our hypothesis was validated through both: simulations
and field experiments. We also present a real-time application that makes our
findings practically available for blue-light organizations and others using
commercial drone platforms. It is designed to address use-cases that suffer
from strong occlusion caused by vegetation, such as search and rescue, wildlife
observation, early wildfire detection, and sur-veillance. | Rakesh John Amala Arokia Nathan, Oliver Bimber | 2023-04-26T14:34:43Z | http://arxiv.org/abs/2304.13590v1 | # Synthetic Aperture Anomaly Imaging
###### Abstract
Previous research has shown that in the presence of foliage occlusion, anomaly detection performs significantly better in integral images resulting from synthetic aperture imaging compared to applying it to conventional aerial images. In this article, we hypothesize and demonstrate that integrating detected anomalies is even more effective than detecting anomalies in integrals. This results in enhanced occlusion removal, outlier suppression, and higher chances of visually as well as computationally detecting targets that are otherwise occluded. Our hypothesis was validated through both: simulations and field experiments. We also present a real-time application that makes our findings practically available for blue-light organizations and others using commercial drone platforms. It is designed to address use-cases that suffer from strong occlusion caused by vegetation, such as search and rescue, wildlife observation, early wildfire detection, and surveillance.
synthetic aperture imaging, anomaly detection, occlusion removal, through-foliage
## 1 Introduction
Several time-critical aerial imaging applications, such as search and rescue (SAR), early wildfire detection, wildlife observation, border control, and surveillance are affected by occlusion caused by vegetation, particularly forests. With Airborne Optical Sectioning (AOS) [1-16], we have introduced a synthetic aperture imaging technique that removes occlusion in real-time (cf. Fig. 1). AOS utilizes conventional camera drones to capture a sequence of single images with telemetry data while flying along a path which defines an extremely wide synthetic aperture (SA). These images are then computationally registered and integrated (i.e., averaged) based on defined visualization parameters, such as a virtual focal plane. Aligning this focal plane with the forest floor, for instance, results in a shallow depth-of-field integral image of the ground surface. It approximates the image that a physically impossible optical lens of the SA's size would capture. In this integral image, the optical signals of out-of-focus occluders are suppressed, while focused targets are emphasized.
The principle synthetic aperture sensing applied by AOS in the optical field is also commonly used in other sensing domains where sensor size correlates to signal quality. The physical size limitations of sensors are overcome by computationally combining multiple measurements of small sensors to improve signal quality. This principle has found its application in various fields, including (but not limited) to radar [17-43], radio telescopes [44, 45], interferometric microscopy [46], sonar [47-50], ultrasound [51, 52], LiDAR [53, 54], and imaging [55-62].
One major advantage of AOS, in addition to its real-time capability, is its wavelength independence. The same technique can be used in the visible range, near-infrared range, and far-infrared range, which opens up many application fields. We have demonstrated that image processing techniques, such as deep-learning based classification [8-10] and anomaly detection [12,15], benefit significantly from integral images when compared to conventional single-image inputs. As a result, we have explored the application of AOS in search and rescue with autonomous drones [8-10], bird census in ornithology [5] and through-foliage tracking for surveillance and wildlife observation [12,14,16].
One advantage of model-based anomaly detection over machine-learning based classification is its robustness and invariance to training data. A common unsupervised anomaly detection method that can be applied to multispectral images is Reed-Xaoli (RX) detection [63-64], which is often considered as benchmark for anomaly detection. It calculates global background statistics over the entire image and then compares individual pixels based on the Mahalanobis distance:
\[\alpha(r)=(r-\mu)^{T}R_{n\times n}^{T}(r-\mu),\] (Eqn. 1)
where \(K_{n\times n}\) is the covariance matrix of the image with \(n\) input channels, the \(n\)-dimensional vector \(r\) is the pixel under test, and the \(n\)-dimensional vector \(\mu\) is the image mean. The \(t\%\) of all image pixels with the highest anomaly scores \(\alpha\) are detected as abnormal by RX-detector, where \(t\) is referred to as RX threshold.
In [12], we have demonstrated that RX detection performs significantly better on integral images than on single images. The reason for this is that the background statistics of integral images is much more uniform than of single images, due to the defocused occluders. Our new hypothesis is that integrating anomalies (i.e., anomalies detected in single images before integration) even outperforms the detection of anomalies in integral images, as illustrated in Fig. 2. We refer to this principle as _Synthetic Aperture Anomaly Imaging_ (SAAI).
Figure 1: AOS principle: Registering and integrating multiple images captured along a synthetic aperture \(a\) while computationally focusing on focal plane \(F\) at distance \(h\) will defocus occluders \(O\) at distance \(o\) from \(F\) (with a point-spread of \(b\)) while focusing targets on \(F\).
We substantiate our hypothesis quantitatively through simulations that provide ground truth data. For real field experiments we have developed an application that is compatible with the latest DJI enterprise platforms, such as the Mavic 3T or the Matrice 30T. This application runs in real-time on DJI' Plus and Pro smart controllers and is freely available1 to support blue-light organizations (BOS) and other organizations.
Footnote 1: [https://github.com/JKU-ICG/AOS/](https://github.com/JKU-ICG/AOS/)
## 2 Results
The results presented in Fig. 3 are based on simulations conducted as described in Sect. 3.1, using procedural forests of different densities (300-500 trees/ha) with a hidden avatar lying on the ground. The far-infrared (thermal) channel is computed for cloudy and sunny environmental conditions, and is color-mapped (hot color bar, as shown in Fig. 2). We compare two cases: First, the thermal channel is integrated and the RX detector is applied to the integral (AD on integral), as done in [12]. Second, the RX detector is applied to the single images and the resulting anomalies are then integrated (SAAI).
Figure 2: Top row: Integrating single images (color-mapped thermal images in this example) first and applying anomaly detection (AD) on the integral next. RX is used in this example. Bottom row: Applying anomaly detection to single images first, and then integrating the detected anomalies (SAAI). Results are simulated with a procedural forest model under sunny conditions (see Section 3.1).
Since the ground truth projection of the unconcluded target can be computed in the simulation, its maximum visibility without occlusion is known. This is the aera of the target's projected footprint in the simulated images. One quality metric that can be considered is the remaining _target visibility_ in case of occlusion. Thus, a target visibility of 61.5%, for instance, indicates that 61.5% of the complete target's footprint is still visible. However, target visibility considers only true positives (i.e., if a pixel that belongs to the target is visible or not). To consider false positives as well (i.e., pixels that are indicated but don't belong to the target), we determine _precision_ as a second metric, which is the intensity integral of all true positives divided by the sum of true positives and false positives intensity integrals. High precision values indicate more true positives and fewer false positives.
As shown in Fig. 3, integrating anomalies (SAAI) always outperforms anomaly detections on integrals (AD on integral), both, in target visibility and precision. This is the case for all forest densities and also for cloudy and sunny environmental conditions. Target visibility and precision drop generally with higher densities due to more severe occlusion. In particular under sunny conditions, many and large false positive areas are detected. This is due to the higher thermal radiation of non-target objects, such as on tree-tops, that appear similar hot as the target. Under cloudy and cool conditions, the biggest thermal contribution comes mainly from the target itself.
One major difference between AD on integrals and SAAI is that the first case results in binary masks, as pixels are being indicated to be abnormal if they belong to the 1% of pixels with highest anomaly scores \(\alpha\) (Eqn. 1), while for the latter case the detections of the 1% top-abnormal pixels of each single image are integrated. This leads to a non-binary value per pixels which corresponds to visibility (i.e., how often an abnormal point on the focal plane was captured free of occlusion).
Note that because the background models of single and integral images differ significantly [12], two different RX thresholds had to be applied to compare the two cases in Fig. 3 (\(t\)=99% for AD on integral, and \(t\)=90% for SAAI). They have been chosen such that the results of both cases approach each other as good as possible. For other thresholds SAAI outperforms AD on integral even more. Our simulations initially confirm our hypothesis that SAAI outperforms previous AD on integral approaches, and Sect. 4 discusses the reasons in more detail.
Figure 4 illustrates results of real field experiments under the more difficult (i.e., sunny) conditions for thermal imaging. For these experiments we implemented real-time SAAI on commercially available drones, as explained in Sect. 3.2. In contrast to simulated results, a ground truth does not exist here. Consequently, results can only be presented and compared visually.
Figure 3: Simulated results for cloudy (left) and sunny (right) conditions and various forest densities (300, 400, 500 trees/ha): Anomaly detection applied to integral (top row) vs. integrated anomalies (bottom row). As shown in Fig. 2, color-mapped thermal images and RX detection were applied. We use a constant SA of \(a\)=10 m (integrating 10 images at 1 m sampling distance captured from an altitude of \(l\)=35 m AGL).
Compared to single thermal images where the target is partially or fully occluded and its fractional footprint is often indistinguishable from false heat singles of the surrounding, SAAI clearly reveals target shape and suppresses false heat signals well. Fig. 5 illustrates that applying anomaly detection to the integrals under the same conditions results in many wrong detections that form larger clusters, just like in our simulations. Identifying the target can be challenging for AD on integrals while SAAI leads to clear improvements.
Figure 4: Real SAAI field-experiment results under sunny conditions: Screen-shots of our application running in real-time on the drone’s smart controller. Color-mapped single thermal image and RGB image of our forest test site (top-left). SAAI detection results for sitting persons (left column) and for lying persons (right column). Left side of split-screen visualization shows the single image (color-mapped thermal) and ride side shows integral of anomalies (RX applied to single images before integration). See supplementary video for run-time details. We applied a maximal SA of \(a\)=15 m (integrating no more than 30 images at 0.5 m sampling distance, captured from an altitude of \(h\)=35 m AGL). Note, that depending on the local occlusion situations, the target became well visible after covering the max. SA to a shorter or larger extend. The RX threshold was individually optimized to achieve the best possible tradeoff between false and true positives (\(t\)=97.4% - 99.3%).
## 3 Materials and Methods
### Simulation
Our simulations were realized with a procedural forest algorithm called _ProTree2_ and was implemented using WebGL. We computed 512x512 px aerial images (color-mapped thermal) for drone flights over a predefined area using defined sampling parameters (e.g., waypoints, altitudes, and camera field-of-view). Figure 1 shows examples of such simulated images. The virtual rendering camera (FOV = 50 deg in our case) applied perspective projection and was aligned with its look-at vector parallel to the ground surface normal (i.e., pointing downwards). Procedural tree parameters, such as tree height (20 m - 25 m), trunk length (4 m - 8 m), trunk radius (20 cm-50 cm), and leaf size (5 cm-20 cm) were used to generate a representative mixture of tree species. Finally, a seeded random generator was applied to generate a variety of trees at defined densities and degrees of similarity. Besides thermal effects of direct sunlight on tree crowns, other environmental properties, such as varying tree species, foliage and time of year, were assumed to be constant. The simulated forest densities were considered sparse with 300 trees/ha, medium with 400 trees/ha, and dense with 500 trees/ha. The simulated environment was a 1 ha procedural forest with one hidden avatar lying on the ground. Since the maximal visibility of the target (i.e., the maximal pixel coverage of the target's footprint in the simulated camera images under no occlusion) is known, the simulation allows quantitative comparisons in target visibility and precision, as presented in Fig. 3.
Footnote 2: [https://github.com/supereggbert/proctree.js](https://github.com/supereggbert/proctree.js)
### Real-Time Application on Drones
The real-time application used for our field-experiments was developed using DJI's Mobile SDK 5 which currently supports the new DJI enterprise series, such as Mavic 3T and Matric 30T. It runs on the Android 10 smart controllers (DJI RC Plus and Pro) and, as shown in Fig. 6, it supports three modes of operation: a _flight mode_ that displays the live video stream of either the wide FOV, thermal, or zoom RGB camera, a _scan mode_ and _parameter mode_ in which single images for integration are recorded, the visualization parameters are interactively adjusted, and the live video stream (RGB or thermal) are displayed on the left side while the resulting integral (thermal, RGB, or RX = SAAI / AD on integral) is shown on the right side. The application supports networked Real-Time Kinematics (RTK), if available.
We forgo the use of soft-buttons or -sliders to be operated on the touch-screen. Instead, we map all functionality of the application to hard-buttons and -sticks. This allows more robust control and supports first-person-view (FPV) flying by attaching goggles to the HDMI port of the smart controller. The control assignments on the smart controller are chosen as follows (using DJI's naming convention): The C2 button switches forth and back between flight mode and scan mode. In parameter mode, it resets the focal plane parameters. The C1 button switches forth and back between scan mode and parameter mode. The record button switches forth and back between RGB and thermal imaging (and zoom-camera in flight mode only) in flight and scan modes. The shutter button turns on/off anomaly detection (RX). The C3 button opens parameter options (different settings for RGB
Figure 5: Real AD on integral field-experiment results under same conditions as in Fig. 5: Here, the RX detector is applied to the thermal integral images. Results show lying persons (left two images) and sitting persons (right two images). As in Fig.5, the RX threshold was individually optimized to achieve the best possible tradeoff between false and true positives (\(t\)=99.0% - 99.9%).
and thermal imaging), such as RTK settings, sampling distance and integration window size, thermal color modes, etc. The left dial changes the gimbal tilt in flight and scan modes. The right dial changes the RX threshold or contrast of integral images (the pause button is used to toggle). In flight mode, it changes the zoom of the zoom camera. The pause button toggle between changing contrast of the integral image or RX threshold in parameter mode. The right stick controls the drone in flight and scan modes. In parameter mode, it changes the focal plane distance and compass correction settings (push/pull: focal plane up/down, left/right: compass correction counter-clockwise/clockwise). The left stick controls the drone in flight and scan modes. In parameter mode, it changes the focal plane orientation (push/pull: tilt forward/backward, left/right: tilt left/right).
Figure 7 depicts a schematic overview over the application's main software-system components. It consists of three parallelly running threads that share image and telemetry data through queues. This is essential for buffering slight temporal differences in runtime of these threads. The first thread (implemented in Java) receives raw (RGB or thermal) video and telemetry data (RTK corrected GPS, compass direction, gimbal angles) from the drone, decodes the video data, and selects frames (video-telemetry pairs) based on the defined sampling distance. Thus, if the selected sampling distance is 0.5 m, for example, only frames at flight distance \(\geq\) 0.5 m are selected and pushed into the queue. Note, that while video streams are delivered at 30 Hz, GPS sampling is limited to a maximum of 10 Hz (10 Hz with RTK, 5 Hz with conventional GPS). Therefore, SAAI results are displayed at a speed of 10 Hz / 5 Hz.
Figure 6: Common process of operation: (1) Take off and fly to target area in _flight mode_. (2) Scan in target area by flying a linear sideways path in _scan mode_. (3) Fine-tune visualization parameters (focal plane, compass correction, contrast, RX threshold) in _parameter mode_. Steps 2 and 3 can be repeated to cover larger areas. Operation supports interactive visualization directly on the smart control as well as first person view (FPV) using additional goggles.
The next thread (implemented in PyTorch for C++) computes single image RX detection on the GPU of the smart controller. It receives the preselected frames from the first queue and computes the per-pixel anomalies base on the selected RX threshold \(t\) (which is called \(Rx\) in the application's GUI). These frames are then forwarded through the second queue to the third thread. The last thread (implemented in C++) registers a window of images (i.e., the \(n\) latest frames, where \(n\) is the integral window size of the SA) based on their telemetry data and given visualization parameters: the focal plane distance \(h\) (called \(FP\) in the application's GUI), pitch (\(Pi\)) and roll (\(Ro\)), the compass correction value (CC), and a contrast enhancement factor (\(Ct\)). The final integral is displayed on the right side of the split screen. Note, that \(FP\), \(Ro\), \(Pi\), \(CC\), \(Cn\), and \(Rx\) can be interactively changed in parameter mode, as explained above. The _sampling distance_ and _integral windows size_ are defined in the application's settings. Note that _sampling distance_ times _integral windows size_ equals the SA size (\(a\) in Fig. 1).
Performance measures were timed on a DJI RC Pro Enterprise running on Android 10: The initial thread required 10 - 20 ms, the second thread 5 - 10 ms, and the last thread took approximately 40 ms. Overall, this led to approximately 45-75 ms required processing time. However, the maximum GPS sampling speed of 10Hz (RTK) defers the processing to 100 ms in practice.
## 4 Discussion and Conclusion
It was previously shown that, in the presence of foliage occlusion, anomaly detection performs significantly better in integral images resulting from synthetic aperture imaging than in conventional single aerial images [12]. The reason for this is the much more uniform background statistics of integral images compared to the one of single images. In this article, we demonstrate that integrating detected anomalies significantly outperforms detecting anomalies in integrals. This leads to enhanced occlusion removal and outlier suppression, and consequently, to higher chances for detecting otherwise occluded targets visually (i.e., by a human observer) as well as computationally (e.g., by an automatic classification method). This new finding can be explained as follows:
With respect to Fig. 1, the integral signal of a target point \(F\) is the sum of all registered ray contributions of all overlapping areal images. This integral signal consists of a mixture of uncconcluded (signal of target) and occluded (signal of forest background) ray contributions. Only if the uncconcluded contributions dominate, the resulting integral pixel can robustly be detected as an anomaly. On the one hand, applying anomaly detection before integration zeros out occluding rays initially while assigning the highest possibly signal contribution of 1 to the target rays. Thus, integrating detected anomalies reduces background noise in the integrated target signal. In fact,
Figure 7: Software-system overview: The main components of our application are three parallel running threads that exchange video and telemetry data through two queues. They are distributed on CPU and GPU of the smart controller for optimal performance and parallel processing.
the integrated target signal corresponds directly to visibility (i.e., how often an abnormal point on the focal plane was captured free of occlusion). On the other hand, the integral signals of occluders (e.g., \(O\) in Fig. 1) can also be high (e.g., tree crowns that are headed by sun light). They would be considered as anomalies if they differ too much from the background model, and would be binary masked just like targets. Consequently, anomaly detection applied to integrals can lead to severe false positives that are indistinguishable from true positives, as shown in Fig. 3 (top row). Certainly, this is also the case when anomaly detection is applied to single images. But integrating the anomaly masks suppresses the contribution of false positives as their rays are not registered on the focal plane, as shown in Fig. 3 (bottom row). Only false positives located directly on the focal plane (e.g., open ground patches that are heated by sunlight, as the large bright patches shown by the two top-right examples in Fig. 5) remain registered and can lead to classification confusion. For dense forests, however, we can expect that large open ground patches are rare, and that most of the sunlight that could cause false detections is reflected by the tree crowns.
We have already demonstrated that using more channels (e.g., RGB+thermal instead of RGB or thermal only) improves anomaly detection in general [15]. Since a simultaneous access of the raw RGB and thermal video streams is not supported by consumer DJI drones, we focused on the most relevant (thermal) channel in our application so far. Extending anomaly detection to more channels (including motion) is on our future work list. Also, the integration of classification approaches, such as the automatic person classifier that was trained for AOS data [10], will be considered in future. We expect that applying classification to SAAI results leads to better results than an application to integrals (as done in [10]), as false positive pixels are initially filtered out.
Supporting information and the AOS application for DJI can be downloaded at: [https://github.com/IKU-ICG/AOS/](https://github.com/IKU-ICG/AOS/) (AOS for DJI). The supplementary video is available at: [https://user-images.githubusercontent.com/83944465/217470172-74a2b272-2cd4-431c-9e21-b91938a340f2.mp4](https://user-images.githubusercontent.com/83944465/217470172-74a2b272-2cd4-431c-9e21-b91938a340f2.mp4)
Data Availability StatementAll experimental data presented in this article is available at: [https://doi.org/10.5281/ze-modo.7867080](https://doi.org/10.5281/ze-modo.7867080)
Author ContributionsConceptualization O.B.; methodology O.B. and R.N.; software R.N.; validation R.N. and O.B.; formal analysis R.N. and O.B.; investigation R.N.; resources R.N.; data curation R.N; writing\(-\)original draft preparation O.B.; writing\(-\)review and editing O.B. and R.N.; visualization O.B. and R.N.; supervision O.B.; project administration O.B.; funding acquisition O.B. All authors have read and agreed to the published version of the manuscript. FundingThis research was funded by the Austrian Science Fund (FWF) and German Research Foundation (DFG) under grant numbers P 32185-NBL and I 6046-N, and by the State of Upper Austria and the Austrian Federal Ministry of Education, Science and Research via the LIT-Linz Institute of Technology under grant number LIT-2019-8-SEE-114.
We thank Francis Seits, Rudolf Ortner, and Indrajit Kurmi for contributing to the implementation of the application, and Mohammed Abbass for supporting the field experiments. Conflicts of InterestThe authors declare no conflict of interest.
|
2303.14270 | Dependence of the loop group method on the base point | We discuss how the loop group method for harmonic maps from Riemann manifolds
M to inner symmetric spaces S depends on the choice of a base point in M. | Josef F. Dorfmeister | 2023-03-24T20:49:25Z | http://arxiv.org/abs/2303.14270v1 | # Dependence of the loop group method on the base point
###### Abstract.
We discuss, how the loop group method for harmonic maps \(f:M\to\mathcal{S}\) into symmetric spaces depends on the choice of a base point \(z_{*}\in M\).
Actually, we consider two ways of discussing this dependence:
\(\bullet\) The first way follows strictly the procedure of [9], but we do not assume from the beginning that the symmetric space \(\mathcal{S}\) is realized a priori as a fixed quotient \(G/K\)
In this approach one chooses first a basepoint, say \(z_{0}\) and then considers the natural realization of the symmetric space \(\mathcal{S}\) as \(\mathcal{S}=G/K\), where \(K\) denotes the fixed point group of \(G\) at the point \(f(z_{0})\in\mathcal{S}\).
Given another base point \(z_{1}\in M\), one makes the analogous choices.
The result of the first part of the paper is a dictionary, how to translate the description/realization of \(f\) relative to \(z_{0}\) to the one relative to \(z_{1}\).
\(\bullet\) The second way, more convenient for explicit computations, see, e.g., [13], is discussed in the second part of the paper.
In this case one realizes the symmetric space \(\mathcal{S}\) once and for all as a fixed quotient space. We show that the corresponding frames are related by dressing.
In both cases we explain what these realizations mean for the construction of the compact dual of the symmetric space \(\mathcal{S}\).
To avoid repeated distinctions of cases we assume throughout that \(\mathcal{S}\) is an inner symmetric space. Moreover, since all frames and potentials are defined only on the universal cover \(\mathbb{D}\) of \(M\), we use \(M=\mathbb{D}\) unless the opposite is stated explicitly.
**Keywords:** Duality; Iwasawa decomposition; normalized potential; non-compact symmetric space;.
**MSC(2010):** **53A30, 53C30, 53C35**
## 1. Introduction
The "DPW-procedure" [9] has been applied to the discussion of the construction of many types of surfaces ( with or without symmetries), considered as submanifolds of certain specific manifolds. More precisely, we consider surface classes for which a harmonic "Gauss type map" exists which characterizes the given surface of its class. Examples for this are, among many others, the surfaces of constant mean curvature in \(\mathbb{R}^{3}\), (CMC surfaces in \(\mathbb{R}^{3}\)) [4], the pseudo-spherical surfaces in \(\mathbb{R}^{3}\), [8] the minimal surfaces in the Heisenberg group [7], the indefinite proper affine spheres [3], the "unfashionable geometries"[2].
For the discussion of surface classes one considers primarily harmonic "Gauss type " maps into semi-simple symmetric spaces. In this note we consider only cases, where the associated symmetric space is inner.
It is natural to normalize the setting by choosing, once and for all, a fixed base point \(z_{0}\) in the simply connected Riemann surface \(\mathbb{D}\) of definition of the harmonic map \(f:\mathbb{D}\to G/K\) and to require (w.l.g. up to congruence) that \(f(z_{0})=eK\) holds, where \(e\) is the identity element of \(G\).
The procedure suggested in [9] was quite successful in many ways.
But it was never discussed, in what way the discussion needs to be adapted, if the fixed base point is chosen different from the originally chosen one.
This note gives two answers to this issue: |
2305.16931 | Measurement incompatibility is strictly stronger than disturbance | The core of Heisenberg's heuristic argument for the uncertainty principle,
involving the famous $\gamma$-ray microscope $\textit{Gedankenexperiment}$,
hinges upon the existence of measurements that irreversibly alter the state of
the system on which they are acting, causing an irreducible disturbance on
subsequent measurements. The argument was put forward to justify measurement
incompatibility in quantum theory, namely, the existence of measurements that
cannot be performed jointly$-$a feature that is now understood to be different
from irreversibility of measurement disturbance, though related to it. In this
article, on the one hand, we provide a compelling argument showing that
measurement incompatibility is indeed a sufficient condition for
irreversibility of measurement disturbance; while, on the other hand, we
exhibit a toy theory, termed the minimal classical theory (MCT), that is a
counterexample for the converse implication. This theory is classical, hence it
does not have complementarity nor preparation uncertainty relations, and it is
both Kochen-Specker and generalised noncontextual. However, MCT satisfies not
only irreversibility of measurement disturbance, but also the properties of
no-information without disturbance and no-broadcasting, implying that these
cannot be understood $\textit{per se}$ as signatures of nonclassicality. | Marco Erba, Paolo Perinotti, Davide Rolino, Alessandro Tosini | 2023-05-26T13:47:00Z | http://arxiv.org/abs/2305.16931v5 | # Measurement incompatibility is strictly stronger than disturbance
###### Abstract
The core of Heisenberg's argument for the uncertainty principle, involving the famous \(\gamma\)-ray microscope _Gedankenexperiment_, consists in the existence of measurements that irreversibly alter the state of the system on which they are acting, causing an irreducible disturbance on subsequent measurements. The argument was put forward to justify the existence of incompatible measurements, namely, measurements that cannot be performed jointly. In this Letter, on the one hand, we provide a compelling argument showing that incompatibility is indeed a sufficient condition for disturbance, while, on the other hand, we exhibit a toy theory that is a counterexample for the converse implication.
Since Heisenberg's \(\gamma\)-ray microscope _Gedankenexperiment_[1], the relation between measurement disturbance and the existence of pairs of observables that cannot be jointly measured has puzzled the authors that tackled quantum measurement theory [2; 3; 4; 5; 6; 7; 8; 9; 10]. Over the years, several arguments have been proposed in favor of the fact that these two facets of quantum theory might not necessarily equivalent. While it is intuitively true that the impossibility of jointly measuring two observables necessarily implies measurement disturbance, a proof of this fact has never been given in a theory-independent fashion. On the other hand, the converse implication does not even intuitively hold true. The main difficulty in this direction is that Quantum Theory (QT) exhibits both features, while Classical Theory (CT) none of them. In order to understand the logical relation between incompatibility and disturbance, one needs to move outside the limited scenarios of QT and CT, broadening the perspective to the wider context of general probabilistic theories.
In the present Letter, we address the above question in the framework of Operational Probabilistic Theories (OPT) [11; 12; 13; 14]. These are generic theories of information, including QT and CT as particular cases, but also encompassing a wealth of toy theories sharing the same basic compositional structures for systems and processes. This framework is the appropriate one to seek general arguments about the logical dependency of different properties that physical theories may exhibit.
Indeed, the operational-probabilistic framework has been devised in order to survey general physical theories "from outside". Relevant quantum properties, such as entanglement and contextuality, has been then investigated in this general scope. For instance, in Ref. [15] entanglement is established as an inevitable feature of any theory superseding CT while admitting of emergent classicality. Furthermore, in Ref. [16] a contextuality witness is deduced in terms of the functional form of an uncertainty relation, thus pinpointing some aspects of quantum uncertainty that may constitute a genuine evidence of nonclassicality.
In this Letter, we show that the existence of pairs of incompatible measurements--a property that we term _incompatibility_--is a strictly stronger condition than the existence of operations that irreversibly disturb the state of the system on which they act--that we name _irreversibility_. In order to achieve this we will prove that the former property implies the latter one, while exhibiting a toy theory that violates the converse implication. The counterexample consists in a theory--that we call Minimal Classical Theory (MCT)--where all measurements are compatible, but any measurement irreversibly alter the state of the system. This theory is obtained from CT by restricting to the bone the set of operations one is allowed to perform on a system. Being the states of systems of MCT classical, this theory also represents a proof that, contrarily to what is normally believed, the disturbance action caused by the interaction with a system is not a characteristic property of the quantum world. Moreover, this result is complementary to the one of Ref. [16], in that MCT has irreversibility while having no incompatibility of measurements--hence no uncertainty thereon--thus being clearly noncontextual. Furthermore, MCT, being embeddable [17] into classical theory, is generalized-noncontextual according to Ref. [18].
_Framework.--_ We now sketch the framework of Operational Probabilistic Theories which is here leveraged. An OPT is meant as a theory of elementary systems and their processes, where all the elements that are mathematically defined have an operational counterpart. The probabilistic aspect consists in rules to assess the probability of events in any network of processes occurring on a given set of systems. In detail, a generic OPT has a collection of _systems_ and of _tests_ thereon. Systems are denoted by capital Roman letters \(\mathrm{A},\mathrm{B},...\in\mathsf{Sys}\left(\Theta\right)\). As
an example, every system in QT corresponds to a complex Hilbert space. Tests, \(\mathsf{T}_{\mathsf{X}}=\left\{\mathscr{T}_{x}\right\}_{x\in\mathsf{X}}\), represent the experiments that we can perform, acting on a given input system \(\mathrm{A}\), and obtaining the output system \(\mathrm{B}\). The class of tests with input system \(\mathrm{A}\) and output system \(\mathrm{B}\) is denoted by \(\mathsf{Instr}\left(\mathrm{A}\!\rightarrow\!\mathrm{B}\right)\). Every test consists in a collection of possible _events_\(\mathscr{T}_{x}\), labelled by the possible outcomes \(x\in\mathsf{X}\) of the experiment. Accordingly, a finite _outcome space_\((\mathsf{X})\) is associated with each instrument. The class of events with input \(\mathrm{A}\) and output \(\mathrm{B}\) is denoted by \(\mathsf{Transf}\left(\mathrm{A}\!\rightarrow\!\mathrm{B}\right)\). Referring again to QT, tests are _quantum instruments_, and events are _quantum operations_. For example, in a Stern and Gerlach experiment the instrument that models the action of the magnetic field is of the form \(\mathsf{T}_{(\uparrow,\downarrow)}=\left\{\mathscr{T}_{\uparrow},\mathscr{T}_ {\downarrow}\right\}\), where the two transformations \(\mathscr{T}_{\uparrow}\) and \(\mathscr{T}_{\downarrow}\) represent the two occurrences in which the system collapses into a state with spin up or down, respectively. A test is _deterministic_ if it has only one event. A deterministic test does not provide information (it has a unique outcome that occurs with certainty), and can represent e.g. the evolution of an open system. In QT, a deterministic test is a quantum channel.
The first main feature of tests is that they can be applied in a sequence, with the simple rule that a sequence is well defined if the input of the second test is of the same type as the output of the first one. Tests (and events) will be drawn as boxes, and this makes intuitive the diagram for a sequence of tests (events):
where \(\mathsf{G}_{\mathsf{Y}}\circ\mathsf{T}_{\mathsf{X}}=\left\{\mathscr{G}_{y} \circ\mathscr{T}_{x}\right\}_{(x,y)\in\mathsf{X}\times\mathsf{Y}}\).
A second defining structure of OPTs is _parallel composition_, that allows one to combine any pair of systems \(\mathrm{A}\) and \(\mathrm{B}\) in a _composite system_\(\mathrm{AB}\). Given a composite system, moreover, one can independently apply tests \(\mathsf{T}_{\mathsf{X}}\) and \(\mathsf{G}_{\mathsf{Y}}\) on the two components. The resulting test is the parallel composition \(\mathsf{T}_{\mathsf{X}}\boxtimes\mathsf{G}_{\mathsf{Y}}\) that is drawn as follows
where \(\mathsf{T}_{\mathsf{X}}\boxtimes\mathsf{G}_{\mathsf{Y}}=\left\{\mathscr{T}_{x} \boxtimes\mathscr{G}_{y}\right\}_{(x,y)\in\mathsf{X}\times\mathsf{Y}}\). Both sequential and parallel composition are associative.
A special kind of test consists in the _preparation_ of a system \(\mathrm{A}\). This kind of tests are called _preparation-instruments_, and their class deserves a dedicated symbol: \(\mathsf{Prep}\left(\mathrm{A}\right)\). The possible events of a preparation test are _states_, that can be denoted as \(\left|\rho\right\rangle_{\mathrm{A}}\in\mathsf{St}\left(\mathrm{A}\right)\). Similarly, a special class of tests is that representing measurements after which the system \(\mathrm{A}\) is destroyed, discarded, or just neglected. These tests are called _observation-instruments_ (\(\mathsf{Obs}\left(\mathrm{A}\right)\)), and their events are called _effects_ (for example \(\left(a\right|_{\mathrm{A}}\in\mathsf{Eff}\left(\mathrm{A}\right)\)). Observation-instruments are the generalization of POVMs of QT to generic theories. We will draw states and effects as
\[\left\langle\underline{\rho}\right\rangle^{\mathrm{A}}\quad,\quad\underline{ \mathsf{A}}\quad,\quad\underline{\mathsf{A}}\quad,\]
respectively. Preparation (observation) instruments can be regarded as special tests whose input (output) system is _trivial_. The (unique) trivial system is denoted by \(\mathrm{I}\). From an operational point of view this system represents "nothing the theory cares to describe" [12]. The trivial system behaves as a unit for parallel composition: \(\mathrm{AI}=\mathrm{IA}=\mathrm{A}\).
For every system \(\mathrm{A}\) of the theory, we require the existence of a deterministic transformation \(\mathscr{I}_{\mathrm{A}}\)--called _identity_--representing "doing nothing" on the system, i.e. such that \(\mathscr{I}_{\mathrm{A}}\mathscr{T}=\mathscr{T}\mathscr{I}_{\mathrm{B}}= \mathscr{T}\), for every event \(\mathscr{T}\in\mathsf{Transf}\left(\mathrm{A}\!\rightarrow\!\mathrm{B}\right)\). A transformation \(\mathscr{T}\in\mathsf{Transf}\left(\mathrm{A}\!\rightarrow\!\mathrm{B}\right)\) is _reversible_ if there exists \(\mathscr{T}^{-1}\in\mathsf{Transf}\left(\mathrm{B}\!\rightarrow\!\mathrm{A}\right)\) such that \(\mathscr{T}\mathscr{T}^{-1}=\mathscr{I}_{\mathrm{B}}\) and \(\mathscr{T}^{-1}\mathscr{T}=\mathscr{I}_{\mathrm{A}}\).
Moreover, for every pair of systems \(\mathrm{A}\) and \(\mathrm{B}\), we require the existence of the swap operation representing the exchange of the two systems, i.e.
\[\begin{array}{c}\includegraphics[width=142.364pt]{A}\end{array}\,\]
which is a deterministic reversible transformation. Tests and transformations can slide along the crossed wires through a swap.
In an OPT every circuit that starts with a preparation instrument and ends with an observation instrument represents a probability distribution of the events in the circuit. For example,
\[\begin{array}{c}\includegraphics[width=142.364pt]{A}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{A}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c} \includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c} \includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c} \includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c}\includegraphics[width=142.364pt]{B}\end{array}\ \ \begin{array}{c} \includegraphics[width=142.
We now introduce the notion of compatibility of observation-tests, which will play a central role in our results. The definition is borrowed from a wide literature on the subject (see e.g. [19; 20; 21]), where compatibility is ubiquitously identified with joint measurability. In precise terms, we say that the observation-tests \(\mathsf{a}_{\mathsf{X}}\in\mathsf{Obs}\left(\mathrm{A}\right)\) and \(\mathsf{b}_{\mathsf{Y}}\in\mathsf{Obs}\left(\mathrm{A}\right)\) are _compatible_ if there exists a third test \(\mathsf{c}_{\mathsf{X}\times\mathsf{Y}}\in\mathsf{Obs}\left(\mathrm{A}\right)\) such that
\[\begin{array}{rcl}(\mathrm{a}_{x}|_{\mathrm{A}}&=&\displaystyle\sum_{y\in\mathsf{Y}}(\mathrm{c}_{(x,y)}|_{\mathrm{A}},\\ (\mathrm{b}_{y}|_{\mathrm{A}}&=&\displaystyle\sum_{x\in\mathsf{X}}(\mathrm{c}_{(x,y)}|_{\mathrm{A}}.\end{array}\]
Accordingly, we will say that a theory has _incompatibility_ if it admits of a system \(\mathrm{A}\) and a pair of observation-tests for \(\mathrm{A}\) that are not compatible.
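As a concrete illustration of this definition, the following minimal Python sketch (our own, using an arbitrary three-outcome classical system rather than any example from the Letter) checks the marginal conditions for a joint observation-test numerically:

```python
import numpy as np

# Illustrative classical system with 3 pure states; effects act as row vectors
# on probability vectors. a_X is a two-outcome coarse-graining, b_Y is the
# full three-outcome sharp measurement.
a = np.array([[1.0, 1.0, 0.0],   # a_0
              [0.0, 0.0, 1.0]])  # a_1
b = np.eye(3)                    # b_0, b_1, b_2

# Candidate joint test c_{(x,y)}: the elementwise product of the two effects,
# which works here because classical sharp effects are diagonal and commute.
c = np.array([[a[x] * b[y] for y in range(3)] for x in range(2)])

# Compatibility conditions: a_x = sum_y c_{(x,y)} and b_y = sum_x c_{(x,y)}.
assert np.allclose(c.sum(axis=1), a)
assert np.allclose(c.sum(axis=0), b)

# The c_{(x,y)} also sum to the deterministic effect (the all-ones covector).
assert np.allclose(c.sum(axis=(0, 1)), np.ones(3))
print("a_X and b_Y admit the joint observation-test c, hence they are compatible")
```

In quantum theory, by contrast, a pair of sharp non-commuting observables admits no such joint observation-test, which is the sense in which QT has incompatibility.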
In order to determine whether an OPT exhibits tests with a disturbance in the sense of Heisenberg, i.e. when an OPT has _irreversibility_, we require the existence of at least one test that irreversibly alters the state of the system on which it acts. In this way, we are stating that these operations set a direction for the arrow of time, in analogy with the second law of thermodynamics.
We then say that an instrument is _intrinsically irreversible_ if its occurrence precludes the possibility of implementing some other instrument [22] on the same input system. Notice that in general one can implement an instrument using ancillary systems, and our definition allows one to post-process them along with the output system. The precise definition of intrinsic irreversibility is then the following. We say that the test \(\mathsf{A}_{\mathsf{X}}\in\mathsf{Instr}\left(\mathrm{A}\!\to\!\mathrm{B}\right)\) is intrinsically irreversible if it _excludes_ some other test [22], i.e. there exists a test \(\mathsf{B}_{\mathsf{Y}}\in\mathsf{Instr}\left(\mathrm{A}\!\to\!\mathrm{C}\right)\) such that, for every \(\mathsf{C}_{\mathsf{Z}}\in\mathsf{Instr}\left(\mathrm{A}\!\to\!\mathrm{BE}\right)\) and every disjoint partition \(\{S_{x}\}_{x\in\mathsf{X}}\) of \(\mathrm{Z}\) with
\[\mathscr{A}_{x}=\sum_{z\in S_{x}}(e|_{\mathrm{E}}\,\mathscr{C}_{z},\qquad x\in\mathsf{X}, \tag{1}\]
there exists no post-processing \(\mathsf{P}_{\mathsf{Y}}^{(z)}\in\mathsf{Instr}\left(\mathrm{BE}\!\to\!\mathrm{C}\right)\) such that
\[\mathscr{B}_{y}=\sum_{z\in\mathsf{Z}}\mathscr{P}^{(z)}_{y}\,\mathscr{C}_{z},\qquad y\in\mathsf{Y}. \tag{2}\]
A first result that we can prove is that a test is intrinsically irreversible if and only if it excludes the identity test. Indeed, if this is the case, the above definition holds choosing \(\mathsf{B}_{\mathsf{Y}}=\{\mathscr{I}_{\mathrm{A}}\}\). On the other hand, by contradiction, if \(\mathsf{A}_{\mathsf{X}}\) can be post-processed to the identity test, then it can be post-processed to any other test.
In the light of the above discussion, we will say that a theory has _irreversibility_ if it admits of a test that is intrinsically irreversible. Notice that, according to our definition, in QT--where all channels admit of a unitary dilation--no channel is intrinsically irreversible. On the other hand, almost all quantum instruments are intrinsically irreversible. Irreversibility thus stems, at least in QT, from the very extraction of information in a measurement.
_Incompatibility implies irreversibility.--_ We can now prove the first of our two main results: the existence of incompatible observation-instruments implies irreversibility of the theory.
Given two generic observation-instruments \(\left\{\mathsf{a}_{x}\right\}_{x\in\mathsf{X}},\left\{\mathsf{b}_{y}\right\}_{y\in\mathsf{Y}}\in\mathsf{Obs}\left(\mathrm{A}\right)\), there always exist two tests \(\left\{\mathscr{F}_{x}\right\}_{x\in\mathsf{X}}\in\mathsf{Instr}\left(\mathrm{A}\!\to\!\mathrm{B}\right)\) and \(\left\{\mathscr{G}_{y}\right\}_{y\in\mathsf{Y}}\in\mathsf{Instr}\left(\mathrm{A}\!\to\!\mathrm{C}\right)\) such that
\[(\mathrm{a}_{x}|_{\mathrm{A}}=(e|_{\mathrm{B}}\,\mathscr{F}_{x},\qquad(\mathrm{b}_{y}|_{\mathrm{A}}=(e|_{\mathrm{C}}\,\mathscr{G}_{y}. \tag{3}\]
This set of tests is always non-empty since in any OPT and for every system it is possible to choose a measure-and-prepare instrument
\[\mathscr{F}_{x}=|\rho)_{\mathrm{B}}(\mathrm{a}_{x}|_{\mathrm{A}},\qquad x\in\mathsf{X},\]
where \(\left|\rho\right\rangle_{\mathrm{B}}\in\mathsf{St}\left(\mathrm{B}\right)\) is an arbitrary deterministic state, and analogously for \(\left\{\mathsf{b}_{y}\right\}_{y\in\mathsf{Y}}\).
Now we observe that, whenever either of the two instruments \(\mathsf{F}_{\mathsf{X}}\) or \(\mathsf{G}_{\mathsf{Y}}\) does not exclude the other, the observation-instruments \((e|\circ\mathsf{F}_{\mathsf{X}}\) and \((e|\circ\mathsf{G}_{\mathsf{Y}}\) are compatible [22]. Then, whenever a theory has incompatibility, there must exist at least one pair of instruments that exclude each other--specifically, the ones associated with the effects as in (3)--and thus they are intrinsically irreversible. In summary, incompatibility is a sufficient condition for irreversibility.
_Irreversibility does not imply incompatibility.--_ We now proceed to the second main result, exhibiting a toy theory called Minimal Classical Theory (MCT) that has irreversibility but no incompatibility. This theory is obtained by restricting the sets of allowed transformations and instruments of CT, keeping its sets of states and effects. In more detail, the only allowed instruments (and consequently transformations) are the ones that can be obtained by combining preparation- and observation-instruments with the identity and swap operations (and limits of sequences of tests thereof).
MCT is an instance of a family of OPTs that can be obtained analogously: starting from an OPT, one builds its minimal version by only allowing preparation- and observation-tests, permutations of systems, and arbitrary compositions or limits thereof. These theories can be called Minimal Operational Probabilistic Theories.
Causal classical theories are here defined as OPTs where the state spaces are simplexes whose vertices are perfectly discriminable [14; 23].
We can now review some aspects of MCT, actually referring to results that hold for arbitrary minimal OPTs. The detailed proofs can be found in Appendix H. Instruments of a minimal OPT--with the exclusion at most of
limit instruments--are of the form
[Circuit form of a generic minimal-OPT instrument: a preparation-instrument \(\{\rho_{x}\}_{x\in\mathsf{X}}\), two permutations \(\mathcal{S}^{(1)}\) and \(\mathcal{S}^{(2)}\), and an observation-instrument \(\{\mathrm{a}_{y}\}_{y\in\mathsf{Y}}\).] (4)
where \(\left\{\rho_{x}\right\}_{x\in\mathsf{X}}\in\mathsf{Prep}\left(\mathrm{CB}^{\prime}\right)\) and \(\left\{\mathrm{a}_{y}\right\}_{y\in\mathsf{Y}}\in\mathsf{Obs}\left(\mathrm{CA}^{\prime}\right)\) are generic preparation- and observation-instruments, and \(\mathcal{S}^{(1)}\) and \(\mathcal{S}^{(2)}\) are generic permutations [24]. Notice that there is some degree of arbitrariness in the choice of the systems \(\mathrm{A}^{\prime}\), \(\mathrm{B}^{\prime}\), \(\mathrm{C}\), \(\mathrm{E}\), and in some cases they can be taken as the trivial system \(\mathrm{I}\). As a consequence of the realisation scheme of tests in Eq. (4), MCT satisfies the property called _no-information without disturbance_[7, 10, 13, 25]. This consists in the fact that every test whose full coarse-graining is the identity transformation (no-disturbance) has a probability distribution that does not depend on the input state (no-information). This fact was first observed in QT [7, 13], while it does not hold in \(\mathrm{CT}\), where all the measurements on a system can be made without disturbing it. In Ref. [25] it was proved in full generality that no-information without disturbance is equivalent to the atomicity of the identity transformation--i.e. every instrument whose full coarse-graining is equal to \(\mathscr{I}\) must be of the form \(\left\{p_{x}\mathscr{I}\right\}_{x\in\mathsf{X}}\), with \(\left\{p_{x}\right\}_{x\in\mathsf{X}}\) a probability distribution.
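As an illustration of the atomicity statement (our own sketch, for a classical three-outcome system with an arbitrarily chosen re-prepared state), the full coarse-graining of a measure-and-prepare instrument is a constant channel rather than the identity, so an instrument of that form cannot decompose \(\mathscr{I}\) while still extracting information:

```python
import numpy as np

# Classical 3-outcome system: states are probability vectors, effects are row
# vectors, transformations act as matrices on column vectors.
a = np.eye(3)                      # sharp observation-test {a_x}
rho = np.array([0.2, 0.5, 0.3])    # arbitrary deterministic state to re-prepare

# Measure-and-prepare events  E_x = |rho)(a_x|  as rank-one matrices.
events = [np.outer(rho, a_x) for a_x in a]

# Full coarse-graining of the instrument: it maps every input state to rho,
# i.e. it is a constant channel, far from the identity transformation.
coarse = sum(events)
test_state = np.array([0.7, 0.1, 0.2])
print(coarse @ test_state)                 # -> rho, independently of the input
print(np.allclose(coarse, np.eye(3)))      # -> False: information extraction
                                           #    comes with disturbance here
```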
The proof that MCT has no-information without disturbance proceeds as follows. For the details, we refer to Appendix H.3. First, suppose that there is some instrument that decomposes \(\mathscr{I}_{\mathrm{A}}\). This instrument is the limit of some sequence \(\mathsf{T}_{\mathsf{X}}^{(n)}\) of tests of the form (4). The important fact here is that the arbitrary systems \(\mathrm{A}^{\prime}{}_{n}\), \(\mathrm{B}^{\prime}{}_{n}\), \(\mathrm{C}_{n}\), \(\mathrm{E}_{n}\), as well as the permutations \(\mathcal{S}_{n}^{(1)}\) and \(\mathcal{S}_{n}^{(2)}\), for the tests \(\mathsf{T}_{\mathsf{X}}^{(n)}\) in the sequence can be taken to be independent of \(n\). Then, the full coarse-graining of the limit test--that coincides with the limit of the sequence of full coarse-grainings--is of the form of Eq. (4) where the observation-test \(\left\{\mathrm{a}_{y}\right\}_{y\in\mathsf{Y}}\) reduces to the deterministic effect \(e_{\mathrm{CA}^{\prime}}\). Since in our case \(\mathrm{A}\equiv\mathrm{B}\), it must also be \(\mathrm{A}^{\prime}\equiv\mathrm{B}^{\prime}\). Moreover, since the overall transformation must be the identity, one can easily check that it must be \(\mathcal{S}^{(2)}=[\mathcal{S}^{(1)}]^{-1}\). In summary, one must have
[Circuit identity: the coarse-grained limit instrument, written in the form of Eq. (4) with the observation-test replaced by the deterministic effect and with \(\mathcal{S}^{(2)}=[\mathcal{S}^{(1)}]^{-1}\), equated to the identity \(\mathscr{I}_{\mathrm{A}}\).] (5)
and finally, inverting the permutations on both sides, we end up with
[Circuit identity: the condition of Eq. (5) after inverting the permutations on both sides.] (6)
which then requires \(\mathrm{A}^{\prime}\equiv\mathrm{I}\). By the stability of the systems \(\mathrm{A}^{\prime}{}_{n}\), \(\mathrm{B}^{\prime}{}_{n}\), \(\mathrm{C}_{n}\), \(\mathrm{E}_{n}\), also the sequence of tests of the form (4) converging to our decomposition of \(\mathscr{I}_{\mathrm{A}}\) must have trivial systems \(\mathrm{A}^{\prime}\equiv\mathrm{B}^{\prime}\equiv\mathrm{I}\). As a consequence, all such tests must contain events proportional to \(\mathscr{I}_{\mathrm{A}}\), and so must have the limit test.
We finally prove that every (non-trivial) theory with no-information without disturbance, and then MCT, has irreversibility. This is shown by contradiction. Suppose that a theory has no intrinsically irreversible tests. Then any test \(\mathsf{A}_{\mathsf{X}}\in\mathsf{Instr}\left(\mathrm{A}\!\rightarrow\!\mathrm{B}\right)\) does not exclude the identity and is achievable via an instrument \(\left\{\mathscr{C}_{z}\right\}_{z\in\mathsf{Z}}\in\mathsf{Instr}\left(\mathrm{A}\!\rightarrow\!\mathrm{BE}\right)\) such that Eq. (2) holds with \(\mathscr{B}_{y}\) replaced by the identity \(\mathscr{I}_{\mathrm{A}}\). Due to the atomicity of the identity one has \((e|_{\mathrm{BE}}\,\mathscr{C}_{z}=p_{z}(e|_{\mathrm{A}}\), which means that the observation-test associated to \(\mathsf{A}_{\mathsf{X}}\in\mathsf{Instr}\left(\mathrm{A}\!\rightarrow\!\mathrm{B}\right)\) is of the form \(\left\{p_{z}\mathrm{e}\right\}_{z\in\mathsf{Z}}\), namely it is trivial. Since this is true for every test, all observation-tests must be trivial, which is possible only if the theory has just one system corresponding to the trivial one. MCT is not trivial: indeed, one can take for example the observation-test \(\left\{\mathrm{a}_{x}\right\}_{x\in\mathsf{X}}=\left\{\mathrm{a}_{0},\mathrm{a}_{1}\right\}\), where \(\left(\mathrm{a}_{0}\right|\) and \(\left(\mathrm{a}_{1}\right|\) are the effects that perfectly discriminate between the two states \(\left|0\right)\) and \(\left|1\right)\) of a bi-dimensional system of the theory. Therefore, since MCT satisfies no-information without disturbance, it has irreversibility. To conclude the argument, it is sufficient to observe that MCT does not have incompatibility, since it has the same observation-tests as \(\mathrm{CT}\), where all the measurements are compatible.
_Discussion.--_ In this Letter we have proven that, in a general theory of physical systems, the presence of incompatible measurements implies the existence of tests that are intrinsically irreversible, but the converse does not hold. The counterexample is given in terms of a fully-fledged OPT, which we termed MCT, whose state spaces are simplexes. This is expected, in the light of the results of Ref. [26], where the author proves that compatibility of all observation-tests is equivalent to simplicial state spaces.
The notion of intrinsic irreversibility has also been introduced here for the first time in an operational framework, to qualify instruments that cannot be post-processed to the identity, even with access to arbitrary ancillary systems. The consequent notion of irreversibility--i.e., the property of a theory with an intrinsically irreversible instrument--is very restrictive, and one may conjecture that it lies at the origin of thermodynamic irreversibility. The analysis of this hypothesis will be the subject of future studies.
The toy theory presented here, exhibiting irreversibility but also full compatibility, can also be used to compare other features that are beyond the scope of this Letter. For example, MCT establishes that classicality is not sufficient for a theory to have full compatibility of instruments, where the latter is defined according to Refs. [27, 22]. Moreover, MCT provides evidence that no-information without disturbance is not a signature of nonclassicality.
As a final remark, we observe that MCT has neither complementarity nor preparation uncertainty relations, such as Robertson's ones [2], which shows that those are
not implied by irreversibility, just as incompatibility is not. It is yet unknown whether or not the converse of the above statement holds.
P. P. acknowledges financial support from PNRR MUR project PE0000023. A. T. acknowledges the financial support of Elvia and Federico Faggin Foundation (Silicon Valley Community Foundation Project ID#2020-214365). M. E. acknowledges financial support from the National Science Centre, Poland (Project title: "Categorical foundations of the nonclassicality of nature", Grant n. 2021/41/B/ST2/03149).
|
2308.01269 | VAPI: Vectorization of Algorithm for Performance Improvement | This study presents the vectorization of metaheuristic algorithms as the
first stage of vectorized optimization implementation. Vectorization is a
technique for converting an algorithm, which operates on a single value at a
time to one that operates on a collection of values at a time to execute
rapidly. The vectorization technique also operates by replacing multiple
iterations into a single operation, which improves the algorithm's performance
in speed and makes the algorithm simpler and easier to be implemented. It is
important to optimize the algorithm by implementing the vectorization
technique, which improves the program's performance, which requires less time
and can run long-running test functions faster, also execute test functions
that cannot be implemented in non-vectorized algorithms and reduces iterations
and time complexity. Converting to vectorization to operate several values at
once and enhance algorithms' speed and efficiency is a solution for long
running times and complicated algorithms. The objective of this study is to use
the vectorization technique on one of the metaheuristic algorithms and compare
the results of the vectorized algorithm with the algorithm which is
non-vectorized. | Mahmood Yashar, Tarik A. Rashid | 2023-07-19T11:23:03Z | http://arxiv.org/abs/2308.01269v2 | # VAPI: Vectorization of Algorithm for Performance Improvement
###### Abstract
This study presents the vectorization of metaheuristic algorithms as the first stage of a vectorized optimization implementation. Vectorization is a technique for converting an algorithm that operates on a single value at a time into one that operates on a collection of values at a time, so that it executes rapidly. The vectorization technique also works by replacing multiple iterations with a single operation, which improves the algorithm's speed and makes the algorithm simpler and easier to implement. Optimizing an algorithm with the vectorization technique improves the program's performance: it requires less time, runs long-running test functions faster, executes test functions that cannot be handled by non-vectorized algorithms, and reduces iterations and time complexity. Converting an algorithm to a vectorized form, so that several values are operated on at once, is therefore a solution for long running times and complicated algorithms, enhancing the algorithm's speed and efficiency. The objective of this study is to apply the vectorization technique to one of the metaheuristic algorithms and compare the results of the vectorized algorithm with those of the non-vectorized one.
**Keywords:** metaheuristic algorithms, non-vectorized algorithms, vectorized algorithms, algorithms, optimizations, nature-inspired optimization algorithms.
## 1 Introduction
Vectorization is a technique that can be applied to algorithms to modify their behavior and enable them to process multiple values at once. This vectorization-induced change in algorithm behavior results in parallel processing, which reduces the number of iterations, improves the algorithm's speed, and makes it possible to run test functions that otherwise cannot be executed in a short amount of time. In terms of time complexity, running large algorithms or complex equations in optimal time is very important for real-world applications (Bauckhage, 2018). To apply the vectorization technique, the Python language provides a standard library known as NumPy. Vectorization plays an important role in different fields of science, including machine learning, image processing, and audio processing, where it reduces processing time (Josh Patterson, 2017). Many problems can be solved through optimization; in some cases, however, a metaheuristic algorithm runs slowly, and some algorithms cannot even obtain results for certain test functions. The focus of this paper is to apply the vectorization technique to the Ant Nesting Algorithm (ANA), a metaheuristic algorithm, in order to improve its performance speed.
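As a minimal illustration of this idea (our own sketch; the sphere benchmark function and the array sizes are illustrative choices and are not taken from the ANA implementation), the following code contrasts an explicit Python loop with its vectorized NumPy equivalent:

```python
import time
import numpy as np

def sphere_loop(population):
    """Sphere test function evaluated one element at a time."""
    fitness = []
    for candidate in population:
        total = 0.0
        for value in candidate:
            total += value ** 2
        fitness.append(total)
    return fitness

def sphere_vectorized(population):
    """Same computation expressed as a single array operation."""
    return np.sum(population ** 2, axis=1)

population = np.random.uniform(-5.12, 5.12, size=(20000, 30))

start = time.perf_counter()
loop_result = sphere_loop(population)
loop_time = time.perf_counter() - start

start = time.perf_counter()
vec_result = sphere_vectorized(population)
vec_time = time.perf_counter() - start

print(np.allclose(loop_result, vec_result))          # identical results
print(f"loop: {loop_time:.3f}s, vectorized: {vec_time:.3f}s")
```

On typical hardware the vectorized version runs orders of magnitude faster, because the nested loops over candidates and dimensions are replaced by a single array operation executed in compiled code.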
This study presents a vectorized algorithm to minimize computing time, enhance algorithm speed, make the algorithm clearer to understand, and reduce the iterations in the algorithm, which are a major cause of slow execution. Python is a programming language that offers the NumPy library, which can be used to vectorize mathematical calculations, avoid memory copying, and reduce operation counts. A NumPy array is a multidimensional array represented as a contiguous block of memory, which makes it easy to manipulate numbers; it allows the central processing unit (CPU) to store and access multidimensional arrays in memory efficiently (Van Der Walt, Colbert, and Varoquaux, 2011). Typical implementations rely on explicit iterations, which results in poor algorithm speed, particularly when applied to large datasets. For example, applying one arithmetic operation to an array would normally require a loop, but in the vector representation the algorithm can perform the same operation on all items without iteration, resulting in increased speed. NumPy arrays provide a compact and powerful way to operate on and manipulate data in vectors or matrices (Van Der Walt, Colbert, and Varoquaux, 2011; Harris _et al._, 2020). |
2302.13636 | Deterministic and stochastic coarsening control in optically-addressed
spatial light modulators subject to optical feedback | Phase separation accompanied by further domain growth and coarsening is a
phenomenon common to a broad variety of dynamical systems. In this connection,
controlling such processes represents a relevant interdisciplinary problem.
Using methods of numerical modelling, we demonstrate two approaches for the
coarsening control in bistable systems based on the example of a
spatially-extended model describing an optically-addressed spatial light
modulator with two color illumination subject to optical feedback. The first
method implies varying system parameters such that the system evolves as the
pitchfork or saddle-node normal forms. The second method leverages noise whose
intensity is used as an additional system parameter. Both, deterministic and
stochastic schemes allow to control the direction and speed of the fronts
separating spatial domains. The considered stochastic control represents a
particular case of the noise-sustained front propagation in bistable systems
and involves the properties of the optical system under study. In contrast, the
proposed deterministic control technique can be applied to bistable systems of
different nature. | Vladimir V. Semenov, Xavier Porte, Laurent Larger, Daniel Brunner | 2023-02-27T10:04:13Z | http://arxiv.org/abs/2302.13636v1 | Deterministic and stochastic coarsening control in optically-addressed spatial light modulators subject to optical feedback
###### Abstract
Phase separation accompanied by further domain growth and coarsening is a phenomenon common to a broad variety of dynamical systems. In this connection, controlling such processes represents a relevant interdisciplinary problem. Using methods of numerical modelling, we demonstrate two approaches for the coarsening control in bistable systems based on the example of a spatially-extended model describing an optically-addressed spatial light modulator with two color illumination subject to optical feedback. The first method implies varying system parameters such that the system evolves as the pitchfork or saddle-node normal forms. The second method leverages noise whose intensity is used as an additional system parameter. Both, deterministic and stochastic schemes allow to control the direction and speed of the fronts separating spatial domains. The considered stochastic control represents a particular case of the noise-sustained front propagation in bistable systems and involves the properties of the optical system under study. In contrast, the proposed deterministic control technique can be applied to bistable systems of different nature.
spatially-extended system, bistability, front propagation, coarsening, noise, control, pitchfork bifurcation, saddle-node bifurcation, bifurcation normal forms
pacs: 05.10.-a, 05.40.-a, 43.50.+y, 46.65.+g
## I Introduction
Besides the well-known Turing patterns, reaction-diffusion systems exhibit a wide variety of spatio-temporal dynamics [1; 2; 3; 4] including traveling fronts, solitary and periodic pulses, spiral turbulence, scroll waves and noise-induced pattern formation. In particular, bistable reaction-diffusion media can exhibit dynamics in which two kinds of domains form and evolve in space, while the fronts separating them arise and propagate. Such propagating wavefronts [33] are of frequent occurrence in chemistry, see, for instance, the Schlogl model [5; 6; 7] developed for the explanation of an auto-catalytic reaction mechanism, as well as in electronics [8], flame propagation theory [9], just to name a few.
In the simplest case, front propagation appears in 1D space. If the studied bistable medium evolves in 2D space, then the peculiarities of front propagation are additionally determined by the shape of the domains formed by such fronts. In such a case, one observes an effect often referred to as 'coarsening'. Coarsening is a particular form of front propagation, and corresponds to the expansion of domains that invade the entire space at the cost of other domains. It is a fundamental phenomenon demonstrated in the context of different areas: physics of liquid crystals [10] and magnetism [11; 12; 13; 14], physics and chemistry of materials [15; 16; 17; 18], laser physics [19; 20; 21], electronics [22] and animal population statistics [23]. It occurs in bistable spatially-extended systems [11] and time-delay oscillators [22; 19; 20] when a bistable system is prepared in an inhomogeneous state including both steady states. Under such conditions, separating fronts start to propagate such that growing spatial domains appear and one state (phase) therefore starts to dominate the whole system. Analogous processes can arise in stochastic systems as an accompaniment of noise-induced phase transitions [24; 25].
It is well-known that the presence of any kind of asymmetry in bistable spatially-extended systems has a principal impact on the speed of wavefront propagation, for instance, in bistable reaction-diffusion models [26; 7]: the bigger the asymmetry, the faster the wavefronts propagate. Moreover, control over the system's asymmetry allows one to stop the wavefront propagation or even to invert its direction. In the current paper we illustrate these facts by means of numerical modelling on the example of a spatially-extended bistable dynamical system describing the optical device considered in Ref. [27]. In particular, we use the Taylor-series-based technique of the pitchfork and saddle-node bifurcation implementation developed in Ref. [27] to control the system's asymmetry and consequently to control coarsening.
The second strategy for controlling the propagating fronts is based on noise [28; 29; 26]. In particular, it has been established that multiplicative noise can influence the systematic part of front dynamics [26; 4; 30]. We numerically show how the stochastic scheme of the propagation control can be implemented in terms of optics on the example of the optical device stochastic model.
Generally speaking, we consider two options for coarsening control in bistable dynamical systems: deterministic and stochastic approaches. The deterministic control scheme is a universal methodology for bringing the model equations to the pitchfork or saddle-node bifurcation normal forms and can be applied to dynamical systems of different nature. In contrast, the stochastic approach involves the specificity of the concrete optical device model and hence is not universal. However, the obtained results complement the manifold of stochastic phenomena associated with propagating fronts with optical processes and are potentially interesting for specialists in optics, nonlinear dynamics and the theory of stochastic processes.
## II Model under study
The central element of the system discussed in this paper (see Fig. 1) is an optically-addressed spatial light modulator (OASLM). Our OASLM model was developed in [27] based on an experimental study of the OASLM sample fabricated according to the concept reported in [31]. The OASLM is a light-transmissive device, and it is assumed in the following that the OASLM fully transmits the incident light, i.e. has zero absorption. This is a valid approximation, as the device relies on nanometer-thin photo-sensitive layers. The OASLM operates as an optically controlled birefringent phase plate leveraging a nematic liquid crystal (LC) layer, the phase retardation of which, \(\Gamma\), is a dynamic quantity. The LC-layer is located between two a-As\({}_{2}\)S\({}_{3}\) chalcogenide thin films that simultaneously function as photosensitive (PS) and alignment layers. The OASLM is connected to a DC-power source resulting in a voltage drop across the LC layer that is uniform without illumination. However, illumination spatially modifies the PS's conductivity and in turn the local voltage drop across the LC layer. Due to the induced dipole moment, LC molecules change their orientation in response, which results in a spatial birefringence distribution that is a function of the optical illumination profile. Consequently, an optical wave crossing the LC experiences a change of its polarization state due to phase retardation \(\Gamma\) between the ordinary and the extraordinary axes, where \(\Gamma(I)=(\alpha I+\beta)^{-1}+\gamma\)[27]. Notably, the second PS-layer does not increase the device's dynamical complexity and has no principal impact on the dynamics, besides doubling the responsivity of the device [27]. In the rest of the manuscript we therefore only consider an OASLM with a single PS-layer located on the left side of the LC-layer.
In our generic setup, depicted in Fig. 1, the single-PS-layer OASLM operates in the phase modulation regime (the OASLM's rotation angle \(\psi=m\pi\) where \(m\in\mathbb{Z}\)). After traversing the OASLM, a dichroic mirror transmits the green light beam but reflects the blue with reflectivity \(R\), which then interferes with the original blue illumination. Thus, the dichroic mirror creates optical feedback and potentially coupling. Generally, optical interference, i.e. a temporal beating originating from the superposition of blue and green light, can be ignored due to the vast difference in frequencies of both light sources. It is assumed that the PS's thickness is significantly smaller than the wavelength.
Figure 1: Single-PS-layer OASLM under simultaneous blue and green illumination when the blue light beam is reflected by the dichroic mirror and creates feedback. The system contains a defocusing lens to emulate local diffusion by spatially broadening the field distribution of the back-reflected optical field. Lenses L1 and L2 create 4f-imaging of the OASLM's state back on itself after reflection by the mirror.
Using Jones matrix calculus, we describe the optical field distributions which determine the system dynamics. Here, we consider uniformly distributed horizontally polarized blue illumination, \(\vec{E}_{0}(x,y)=[E_{0},0]\). After passing the OASLM one obtains \(\vec{E}_{1}(x,y)=\exp\left(i(\phi_{0}+\Gamma(x,y))\right)\vec{E}_{0}(x,y)\), where \(\phi_{0}\) is the constant retardation induced by the OASLM without illumination and \(i\) is the imaginary unit. The setup in Fig. 1 contains two optical lenses L\({}_{1}\) and L\({}_{2}\) to create 4f-imaging of the OASLM's state back on itself after reflection by the mirror, and a defocusing lens L\({}_{D}\) within the optical feedback path. Defocusing leads to blurred imaging, as illustrated in Fig. 1, and its impact can be mathematically described as a convolution with a Gaussian of a width that can be controlled through the positioning lenses. Applying this, one obtains a spatial distribution of the returned light Jones vector \(\vec{E}_{2}(x,y)\) as
\[\vec{E}_{2}(x,y)=\Bigg{(}R\exp(i\phi_{1})\vec{E}_{1}(x,y)\Bigg{)}*\Bigg{(}\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}-\frac{y^{2}}{2\sigma^{2}}\right)\Bigg{)}, \tag{1}\]
where \(\phi_{1}\) is the retardation accumulated in the external cavity round-trip, '\(*\)' is the convolution operation and the Gaussian function plays the role of a point spread function widened from the normal imaging setup via the defocusing lens. Finally, the optical wave passes through the OASLM again and one obtains the field \(\vec{E}_{3}(x,y)=\exp\left(i(\phi_{0}+\Gamma(x,y))\right)\vec{E}_{2}(x,y)\); the resulting blue light field at the PS-layer on the left side of the OASLM is \(\vec{E}_{\rm b}(x,y)=\vec{E}_{0}(x,y)+\vec{E}_{3}(x,y)\) with intensity \(I_{\rm b}(x,y)=\left|\vec{E}_{\rm b}(x,y)\right|^{2}\). To simplify the model, diffusive processes inside the OASLM are neglected and the width \(\sigma\) of the optical convolution kernel is chosen to be several times greater than the OASLM resolution, \(\sigma_{\rm OASLM}=3.5\mu\)m (see details in papers [27; 31]). Furthermore, optical feedback is considered instantaneous relative to the OASLM's response time \(\varepsilon\). The temporal evolution of the blue light's retardation then takes the form
\[\begin{array}{l}\vec{E}_{0}(x,y)\ =\ \left[\begin{matrix}E_{0}\\ 0\end{matrix}\right],\qquad\vec{E}_{1}(x,y)\ =\ \exp\left(i(\phi_{0}+\Gamma(x,y))\right)\left[\begin{matrix}E_{0}\\ 0\end{matrix}\right],\\ \vec{E}_{2}(x,y)=\Bigg{(}R\exp(i\phi_{1})\vec{E}_{1}(x,y)\Bigg{)}*\Bigg{(}\frac{1}{2\pi\sigma^{2}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}-\frac{y^{2}}{2\sigma^{2}}\right)\Bigg{)},\\ \vec{E}_{3}(x,y)\ =\ \exp\left(i(\phi_{0}+\Gamma(x,y))\right)\vec{E}_{2}(x,y),\qquad I_{\rm b}(x,y)=\left|\vec{E}_{0}(x,y)+\vec{E}_{3}(x,y)\right|^{2},\\ \varepsilon\frac{d\Gamma(x,y)}{dt}=-\Gamma(x,y)+\frac{1}{\alpha_{\rm b}I_{\rm b}(x,y)+\tilde{\alpha}_{\rm g}I_{\rm 0g}+\beta}+\gamma,\end{array} \tag{2}\]
where \(\tilde{\alpha}_{\rm g}=\frac{\lambda_{\rm g}}{\lambda_{\rm b}}\alpha_{g}\) is the retardation effect of \(I_{\rm 0g}\) on the blue signal.
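To make the role of the defocusing convolution in Eqs. (1) and (2) concrete, a minimal numerical sketch is given below; it is our own illustration, the grid, domain size and retardation pattern \(\Gamma(x,y)\) are arbitrary placeholders rather than experimental values, and the FFT-based convolution implies periodic boundaries.

```python
import numpy as np

# Illustrative grid; the numbers below are placeholders, not experimental ones.
N, L = 256, 500e-6                       # grid points, domain size [m]
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing="ij")
dx = x[1] - x[0]

R, phi0, phi1, sigma = 0.95, np.pi / 2, np.pi, 10e-6
E0 = 1.0
Gamma = 0.3 * np.exp(-(X**2 + Y**2) / (2 * (50e-6) ** 2))   # arbitrary pattern

# Field after the first pass through the OASLM (x-polarized component only).
E1 = np.exp(1j * (phi0 + Gamma)) * E0

# Normalized Gaussian point spread function emulating the defocusing lens.
kernel = np.exp(-(X**2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# FFT-based evaluation of the convolution in Eq. (1); dx*dx approximates the
# continuous integral, and ifftshift puts the kernel origin at index (0, 0).
E2 = np.fft.ifft2(np.fft.fft2(R * np.exp(1j * phi1) * E1) *
                  np.fft.fft2(np.fft.ifftshift(kernel))) * dx * dx

E3 = np.exp(1j * (phi0 + Gamma)) * E2
I_b = np.abs(E0 + E3) ** 2               # blue intensity at the PS-layer
print(I_b.mean(), I_b.max())
```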
If one neglects spatial aspects of the dynamics and excludes the optical convolution implemented by L\({}_{D}\), the system is reduced into an ordinary differential equation (see paper [27]) for the temporal evolution of the retardation:
\[\begin{array}{l}\varepsilon\frac{d\Gamma}{dt}=-\Gamma+\frac{1}{\alpha_{\rm b }I_{\rm b}(\Gamma)+\tilde{\alpha}_{\rm g}I_{\rm 0g}+\beta}+\gamma,\\ I_{\rm b}(\Gamma)=I_{\rm 0b}\Big{\{}1+R^{2}+2R\cos(2\phi_{0}+\phi_{1}+2 \Gamma)\Big{\}}.\end{array} \tag{3}\]
The action of the convolution operation in the spatially-extended model (2) is associated with homogeneous coupling of the system state at any point on the plane (\(x\),\(y\)) with its neighbour states in some range \(x\in[x-\Delta x;x+\Delta x]\), \(y\in[y-\Delta y;y+\Delta y]\). Defocusing represents a natural physical approach for the homogeneous coupling implementation, similar to diffusive effects occurring inside the OASLM. If the coupling radius does not exceed the OASLM's linear pixel size, one deals with local coupling, whose impact is identical to the action of diffusion. Then one can expect to observe the effects of wave propagation and coarsening in Eq. (2), where the system parameters correspond to the regime of bistability in the single-oscillator model of Eq. (3). Our numerical study is based on modelling Eq. (2) using the Heun method [32] with time step \(\Delta t=10^{-3}\). In the rest of the manuscript Eq. (2) is studied for the fixed parameter set \(R=0.95\), \(\alpha_{\rm b}=0.117\) [m\({}^{2}\)Rad\({}^{-1}\)W\({}^{-1}\)],
\(\alpha_{\rm g}=98.5\times 10^{-6}\) [m\({}^{2}\)Rad\({}^{-1}\)W\({}^{-1}\)], \(\beta=0.052\) [Rad\({}^{-1}\)], \(\gamma=-0.55\) [Rad], \(\phi_{0}=\pi/2\) [Rad], \(\phi_{1}=\pi\) [Rad], \(\varepsilon=1\) [s], \(\sigma=10^{-5}\) [m], and varying \(I_{\rm 0b}\) and \(I_{\rm 0g}\). The blue and green light wavelength are chosen according to experiments carried out in article [27], \(\lambda_{\rm b}=450\times 10^{-9}\) [m] and \(\lambda_{\rm g}=532\times 10^{-9}\) [m] correspondingly.
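For completeness, a minimal sketch of the Heun scheme is given below, applied for simplicity to the single-oscillator limit of Eq. (3) rather than to the full spatial model; the parameter values are those quoted above, while the illumination intensities and the initial condition are illustrative choices.

```python
import numpy as np

# Parameters quoted in the text.
eps, R = 1.0, 0.95
alpha_b, alpha_g = 0.117, 98.5e-6
beta, gamma = 0.052, -0.55
phi0, phi1 = np.pi / 2, np.pi
lam_b, lam_g = 450e-9, 532e-9
alpha_g_t = (lam_g / lam_b) * alpha_g            # \tilde{alpha}_g

def f(G, I0b, I0g):
    """Right-hand side of Eq. (3)."""
    I_b = I0b * (1 + R**2 + 2 * R * np.cos(2 * phi0 + phi1 + 2 * G))
    return (-G + 1.0 / (alpha_b * I_b + alpha_g_t * I0g + beta) + gamma) / eps

def heun(G0, I0b, I0g, dt=1e-3, T=50.0):
    """Heun (explicit trapezoidal) integration of Eq. (3)."""
    G = G0
    for _ in range(int(T / dt)):
        k1 = f(G, I0b, I0g)
        k2 = f(G + dt * k1, I0b, I0g)
        G += 0.5 * dt * (k1 + k2)
    return G

# Illustrative run with arbitrary initial condition and example intensities.
print(heun(G0=2.0, I0b=0.015, I0g=22.0))
```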
## III Deterministic control
As demonstrated in [27], one can implement the pitchfork and the saddle-node bifurcation of steady states in our system described by Eq. (3), if \(I_{\rm 0b}\) and \(I_{\rm 0g}\) are chosen according to the corresponding bifurcation condition curves depicted in Fig. 2. In more detail, when \(I_{\rm 0b}\) and \(I_{\rm 0g}\) are varied according to the curves in Fig. 2, the right-hand side function \(f(\Gamma)\) of Eq. (3) represented in the form \(\dfrac{d\Gamma}{dt}=f(\Gamma)\) evolves in some range of \(\Gamma\) as a cubic and a quadratic function. Then Eq. (3) is considered as the pitchfork and saddle-node bifurcation normal forms, \(\dfrac{d\Gamma}{dt}=b\Gamma-d\Gamma^{3}\) and \(\dfrac{d\Gamma}{dt}=a+c\Gamma^{2}\). Here, we use these bifurcation conditions to control the effect of coarsening in Eq. (2). We consider Eq. (3) in the form \(\dfrac{d\Gamma}{dt}=f(\Gamma)\) and illustrate the right-hand side function \(f(\Gamma)\) for varying \(I_{\rm 0b}\) and \(I_{\rm 0g}\). For all parameter values, \(f(\Gamma)\) shows three steady states corresponding to the condition \(f(\Gamma)=0\): stable steady states A and B and an unstable equilibrium between them (see Fig. 3 and Fig. 4). We visualise the fact that the symmetry properties of Eq. (3) describing the local dynamics without coupling are reflected in the duration and direction of coarsening in Eq. (2). For this purpose, we fix the initial spatial pattern at \(t=0\) and observe the spatial evolution when \(I_{\rm 0b}\) and \(I_{\rm 0g}\) change.
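The steady states mentioned here can also be located numerically by scanning \(f(\Gamma)\) for sign changes and refining each bracket by bisection; the following sketch is our own, with an illustrative scanning interval and the intensities of point 1 in Fig. 2 (a).

```python
import numpy as np

R, alpha_b, alpha_g_t = 0.95, 0.117, (532 / 450) * 98.5e-6
beta, gamma, phi0, phi1 = 0.052, -0.55, np.pi / 2, np.pi

def f(G, I0b, I0g):
    """Right-hand side of Eq. (3) for given illumination intensities."""
    I_b = I0b * (1 + R**2 + 2 * R * np.cos(2 * phi0 + phi1 + 2 * G))
    return -G + 1.0 / (alpha_b * I_b + alpha_g_t * I0g + beta) + gamma

def steady_states(I0b, I0g, G_min=0.0, G_max=25.0, n=20001):
    """Bracket the zeros of f on a fine grid and refine each by bisection."""
    G = np.linspace(G_min, G_max, n)
    vals = f(G, I0b, I0g)
    idx = np.where(np.sign(vals[:-1]) * np.sign(vals[1:]) < 0)[0]
    roots = []
    for i in idx:
        lo, hi = G[i], G[i + 1]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if f(lo, I0b, I0g) * f(mid, I0b, I0g) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

# In the bistable regime an odd number of roots with alternating stability is
# expected: stable A, the unstable equilibrium, and stable B.
print(steady_states(I0b=0.015, I0g=22.0))
```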
### Pitchfork bifurcation conditions
Let us fix light intensities \(I_{\rm 0g}=22\), \(I_{\rm 0b}=|\vec{E}_{0}|^{2}=0.015\) [W/m\({}^{2}\)] in Eq. (2). This parameter set corresponds to the regime of bistability in the single-oscillator model described by Eq. (3), but the pitchfork bifurcation conditions are not fulfilled (this parameter set corresponds to point 1 in Fig. 2 (a)) and the right-hand side function of Eq. (3) is asymmetric, see Fig. 3 (a). In that case, the spatially extended model described by Eq. (2) exhibits coarsening, see Fig. 3 (b1-b3). The system asymmetry is reflected in the fact that the basin of attraction of state B is larger than the one of state A, and the unstable fixed point is closer to attractor A than to the stable steady state B. This results in the spatial evolution of Eq. (2) such that the red domains corresponding to state B extend and invade the entire space (\(x\),\(y\)).
Increasing \(I_{\rm 0g}\) allows to fulfil the pitchfork bifurcation conditions at \(I_{\rm 0g}\approx 30.1\) [W/m\({}^{2}\)] (point 2 in Fig. 2 (a)), for which the asymmetry of the right-hand side function \(f(\Gamma)\) is minimized, see Fig. 3 (c), and coarsening is substantially slower. Consequently, a longer time is necessary for the transformation of the same initial metastable state as in Fig. 3 (b1) (the initial states in Fig. 3(b1,d1,f1) are identical) into the quiescent regime when either steady state A or B invades the entire space, see Fig. 3 (d1-d3). It should be noted that in the case of minimal asymmetry, the probabilities to observe the final state \(\Gamma(x,y)=A\) or \(\Gamma(x,y)=B\) starting from random initial conditions are almost identical.
If one continues to increase the green light intensity, the phase space structure is inverted in comparison with the initial configuration, as can be seen from comparison of \(f(\Gamma)\) in Fig. 3 (a,e). The motion of fronts separating domains reverses, and coarsening direction becomes opposite: steady state A invades the whole space, see Fig. 3(f1-f3) corresponding to \(I_{\rm 0b}=0.015\) [W/m\({}^{2}\)] and \(I_{\rm 0g}=36\) [W/m\({}^{2}\)] (point 3 in Fig. 2 (a)).
### Saddle-node bifurcation conditions
Varying \(I_{\rm 0b}\) and \(I_{\rm 0g}\) according to the curve obtained using the saddle-node bifurcation conditions, corresponding to the blue line in Fig. 2 (b), allows to move the right-hand side function of Eq. (3) up and down, see Fig. 4 (a,c,e). A symmetric configuration of \(f(\Gamma)\) can be achieved during this motion, see Fig. 4 (c), and the same effects as in the previous section can be observed. First, the system's asymmetry is well-pronounced, as illustrated in Fig. 4 (a), and the state B rapidly invades the space (\(x\),\(y\)), see Fig. 4 (b1-b3). When \(I_{\rm 0b}\) and \(I_{\rm 0g}\) are adjusted such that the saddle-node bifurcation conditions are fulfilled, the system passes through the symmetric state (see Fig. 4 (c)), and the coarsening effect slows down as illustrated in Fig. 4 (d1-d3). Further changing \(I_{\rm 0b}\) and \(I_{\rm 0g}\) inverts the asymmetry, see Fig. 4 (e), and the motion of fronts separating blue and red domains reverses
its direction, see Fig. 4 (f1-f3).
## IV Stochastic control
Consider a stochastic model of the optical setup illustrated in Fig. 1. For that purpose, it is assumed that the green light illumination contains a stochastic contribution according to \(I_{0\rm g}(x,y)=I_{0\rm g}+\xi(x,y)\). Here, \(\xi(x,y)\) represents a source of spatial coloured noise described by the first-order Ornstein-Uhlenbeck process
\[\tau_{c}\frac{d\xi(x,y)}{dt}=-\xi(x,y)+\sqrt{2D_{\rm g}\tau_{c}}n(x,y,t), \tag{4}\]
where \(\tau_{c}\) is the coloured noise correlation time, \(n(x,y,t)\) is a normalized source of white Gaussian noise, \(D_{\rm g}\) plays a role of the noise intensity. The temporal and spatial correlation properties of the noise source \(n(x,y,t)\) at any point \(\vec{r}_{0}\) are described by the delta function \(<n(\vec{r}_{0},t)>=0,<n(\vec{r}_{0},t)n(\vec{r}_{0},t+\tau)>=\delta(\tau)\), \(<n(\vec{r}_{0},t)n(\vec{r}_{0}+\vec{r}_{d},t)>=\delta(\vec{r}_{d})\) (here, the brackets \(<...>\) denote the mean value), which means that the correlation time of the source \(n(x,y,t)\) equals zero and the noise signal values \(n(x,y,t)\) at any different points (\(x_{1}\), \(y_{1}\)) and (\(x_{2}\), \(y_{2}\)) are statistically independent.
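A minimal sketch of how such a noise field can be generated numerically is given below; it applies the exact one-step update of the Ornstein-Uhlenbeck process independently at every grid point (the discrete counterpart of the spatial delta-correlation), with grid size, time step and noise parameters chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative grid and noise parameters; only tau_c << OASLM response time
# is taken over from the assumptions stated in the text.
N = 128
tau_c, D_g, dt = 1e-2, 3.7e3, 1e-3
I_0g = 22.0

# Exact one-step update of the Ornstein-Uhlenbeck process of Eq. (4), applied
# pixelwise with independent white noise at every grid point.
decay = np.exp(-dt / tau_c)
kick = np.sqrt(D_g * (1.0 - decay**2))

xi = np.zeros((N, N))
for _ in range(10000):
    xi = decay * xi + kick * rng.standard_normal((N, N))

# Clip so that the total green intensity I_0g + xi stays non-negative.
xi_clipped = np.maximum(xi, -I_0g)
I_green = I_0g + xi_clipped
print(xi.std(), np.sqrt(D_g))   # sample std approaches sqrt(D_g) in the stationary regime
print(I_green.min())            # never below zero after clipping
```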
Physically, the random spatial component \(\xi(x,y)\) can be included in the green illumination by adding an electronically-addressed spatial light modulator that spatially modifies the green illumination. With such coloured noise, the spatial random illumination is characterised by a finite temporal correlation determined by the parameter \(\tau_{c}\). It is assumed in the following that the noise correlation time \(\tau_{c}\) is much smaller than the OASLM's response time \(\varepsilon\). In addition, all instantaneous values \(\xi(x,y,t)<-I_{0\rm g}\) are changed to \(\xi(x,y,t)=-I_{0\rm g}\) since the total green light intensity \(I_{0\rm g}+\xi(x,y,t)\) cannot be negative. Finally, the stochastic spatial model of the setup in Fig. 1 takes the form
\[\begin{split}&\varepsilon\frac{d\Gamma(x,y)}{dt}=-\Gamma(x,y)\\ &+\frac{1}{\alpha_{\rm b}I_{\rm b}(x,y)+\bar{\alpha}_{\rm g}(I_{ 0\rm g}+\xi(x,y))+\beta}+\gamma,\\ &\tau_{c}\frac{d\xi(x,y)}{dt}=-\xi(x,y)+\sqrt{2D_{\rm g}\tau_{c}} n(x,y,t),\end{split} \tag{5}\]
where all the Jones vector components determining the
blue light intensity are the same as in Eq. (2).
First, Eq. (5) is considered for a set of parameters corresponding to Fig. 3 (a) when the basin of attraction of steady state B is larger than the basin of state A. Equation (5) therefore exhibits coarsening and the system state \(\Gamma(x,y)=B\) invades the entire space in the absence of noise, \(D_{\rm g}=0\) (see Fig. 5 (a-c)). However, increasing noise intensity \(D_{\rm g}\) slows down the effect of coarsening, see Fig. 5 (d-f), and above a threshold at around \(D\approx 3.7\times 10^{3}\) [s\({}^{-1}\)], noise inverts the front propagation dynamics and state A dominates, see Fig. 5 (g-i).
Similarly, if the system parameter set corresponds to Fig. 3 (e), one observes invading state A [Fig. 6 (a-c)]. In such a case increasing the noise intensity speeds up the process [Fig. 6 (d-f)]. Thus, it is demonstrated in Fig. 5 and Fig. 6 that, depending on the particular system configuration, noise can speed up coarsening, slow it down or even invert its direction.
The theoretically rigorous explanation of the stochastic coarsening control in OASLM-based spatial models is significantly more challenging as compared with, for instance, the theoretical analysis given in Refs. [30; 4] for basic reaction-diffusion models with multiplicative noise. In particular, the'small-noise-expansion approach' used in [30; 4] cannot be applied in the context of Eq. (5) due to the fact that any polynomial expression of Eq. (5) is challenging to obtain, and will furthermore give rise to stochastic terms in all the polynomial components. Consequently, it becomes extremely difficult to distinguish the systematic part of the noise influence. Nevertheless, we would like to emphasize the similarity between the processes observed in the basic models discussed in Refs. [4; 26; 30] and in OASLM-based spatial model described by Eq. (5). To visualise the fact that stochastic forcing has an asymmetric impact on Eq. (5), a single-oscillator stochastic model corresponding to Eq. (5) at \(\sigma\to 0\) is taken into consideration. If \(\sigma\to 0\), the spatial coupling is absent and the retardation \(\Gamma\) individually evolves according to Eq. (3) at each point of the illuminated area,
Figure 4: Coarsening and the saddle-node bifurcation conditions: evolution of the right-hand side function of Eq. (3) and coarsening in Eq. (2) when \(I_{\rm 0b}\) and \(I_{\rm 0g}\) vary according to the saddle-node bifurcation conditions for Eq. (3): \(I_{\rm 0b}=0.228\) [W/m\({}^{2}\)], \(I_{\rm 0g}=990\) [W/m\({}^{2}\)] (panels (a) and (b)) corresponding to point 1 in Fig. 2 (b), \(I_{\rm 0b}=0.241\) [W/m\({}^{2}\)], \(I_{\rm 0g}=1050\) [W/m\({}^{2}\)] (panels (c) and (d)) corresponding to point 2 in Fig. 2 (b), \(I_{\rm 0b}=0.2645\) [W/m\({}^{2}\)], \(I_{\rm 0g}=1153\) [W/m\({}^{2}\)] (panels (e) and (f)) corresponding to point 3 in Fig. 2 (b). Other parameters are: \(\varepsilon=1\) [s], \(\alpha_{\rm b}=0.117\) [m\({}^{2}\)Rad\({}^{-1}\)W\({}^{-1}\)], \(\alpha_{\rm g}=98.5\times 10^{-6}\) [m\({}^{2}\)Rad\({}^{-1}\)W\({}^{-1}\)], \(\beta=0.052\) [Rad\({}^{-1}\)], \(\gamma=-0.55\) [Rad], \(\phi_{0}=\pi/2\) [Rad], \(\phi_{1}=\pi\) [Rad], \(\lambda_{\rm b}=450\times 10^{-9}\) [m], \(\lambda_{\rm g}=532\times 10^{-9}\) [m], \(R=0.95\), \(\sigma=10^{-5}\) [m].
but in the presence of the noise term \(\xi\)
\[\begin{split}&\varepsilon\frac{d\Gamma}{dt}=-\Gamma+\frac{1}{\alpha_{\rm b}I_{\rm b}+\tilde{\alpha}_{\rm g}(I_{\rm 0g}+\xi)+\beta}\\ &+\gamma+\sqrt{0.02}n_{a}(t),\\ &\tau_{c}\frac{d\xi}{dt}=-\xi+\sqrt{2D_{\rm g}\tau_{c}}n(t),\\ & I_{\rm b}=I_{\rm 0b}\Big{\{}1+R^{2}+2R\cos(2\phi_{0}+\phi_{1}+2\Gamma)\Big{\}},\end{split} \tag{6}\]
where the additive white Gaussian noise term \(\sqrt{0.02}n_{a}(t)\) has no impact on the system's symmetry and is included to obtain a stationary distribution of the normalised probability density function for the dynamical variable, \(P_{n}(\Gamma)\), in numerical simulations. The evolution of \(P_{n}(\Gamma)\) caused by increasing noise intensity \(D_{\rm g}\) illustrated in Fig.
7 indicates that the left peak becomes broadened faster than the right one. Thus, the action of noise \(\xi(t)\) is significantly stronger in the vicinity of the left steady state \(\Gamma_{*}=A\). This effect is similar to the noise-induced evolution of \(P_{n}(u)\) in the phenomenological model defined by equation \(\dfrac{dx}{dt}=-x(x-a+\xi_{a})(x+b+\xi_{b})+\nabla^{2}x\) (see paper [26]).
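For illustration, a stationary histogram of this kind can be estimated by a straightforward Euler-Maruyama simulation of Eq. (6); the sketch below is our own, and the noise parameters and run length are arbitrary choices rather than those used for Fig. 7.

```python
import numpy as np

rng = np.random.default_rng(1)

eps, R = 1.0, 0.95
alpha_b, alpha_g_t = 0.117, (532 / 450) * 98.5e-6
beta, gamma, phi0, phi1 = 0.052, -0.55, np.pi / 2, np.pi
I0b, I0g = 0.015, 22.0              # example intensities; other runs analogous
tau_c, D_g, dt = 1e-2, 1e3, 1e-3    # noise parameters chosen for illustration
D_add = 0.02                         # intensity of the additive noise term

def drift(G, xi):
    I_b = I0b * (1 + R**2 + 2 * R * np.cos(2 * phi0 + phi1 + 2 * G))
    xi_eff = max(xi, -I0g)           # green intensity cannot become negative
    return (-G + 1.0 / (alpha_b * I_b + alpha_g_t * (I0g + xi_eff) + beta) + gamma) / eps

G, xi = 17.0, 0.0
samples = []
for step in range(1_000_000):
    # Euler-Maruyama step for Gamma (additive noise) and for the OU variable xi.
    G += drift(G, xi) * dt + np.sqrt(D_add * dt) * rng.standard_normal()
    xi += (-xi / tau_c) * dt + np.sqrt(2 * D_g / tau_c * dt) * rng.standard_normal()
    if step % 100 == 0:
        samples.append(G)

hist, edges = np.histogram(samples, bins=200, density=True)  # estimate of P_n(Gamma)
print(edges[np.argmax(hist)])        # location of the most probable retardation
```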
## V Conclusions
The peculiarities of the bifurcation transitions to bistable dynamics discussed in paper [27] in the context of single-oscillator models are reflected in the behaviour of the corresponding spatially-extended systems, such as Eq. (2) or similar models corresponding to different OASLM rotation angles or incident light polarization states, as the formation of localized spatial domains corresponding to the attraction of the two coexisting steady states. If the right-hand side function is asymmetric, the steady state characterized by the larger basin of attraction invades the entire space. This process is accompanied by the effect of coarsening, which is determined by both the asymmetry and the shape of the evolving domains.
Applying the saddle-node or pitchfork bifurcation conditions, one can remove the system asymmetry, and then the expansion of the dominating domain is slowed down. Moreover, if the incident green and blue light intensities vary and obey the saddle-node bifurcation condition, one can controllably invert the front propagation direction. However, the saddle-node bifurcation conditions do not allow one to rigorously define the absolutely symmetric state, while applying the pitchfork bifurcation conditions provides a mathematical derivation of the appropriate parameter values.
The second approach to control coarsening is the introduction of noise into the system. In particular, the presence of parametric noise modulating the green light intensity gives rise to speeding up or slowing down and inverting the effects of front propagation and coarsening. The ability to control the dynamics by increasing the noise intensity results from the fact that fluctuation growth changes the system symmetry. A detailed theoretical analysis of the stochastic control represents an issue for further investigations.
The interdisciplinary significance of the obtained results consists in the developed approach for the control of propagating fronts in bistable spatially-extended systems of any nature exhibiting the coexistence of two steady states. Representing the function responsible for the local dynamics in a polynomial form by using the Taylor-series expansion, one can derive the pitchfork or saddle-node bifurcation conditions in a similar way as in the current paper and then apply them to tune the system's symmetry and, as a result, the front propagation speed and direction. Finally, the developed approach for controlling the symmetry of bistable spatially-extended systems offers great opportunities for future implementations of spin-state networks.
###### Acknowledgements.
We are very grateful to Professor Serhiy Yanchuk for fruitful discussions. This work has been supported by the EIPHI Graduate School (Contract No. ANR-17-EURE-0002) and the Bourgogne-Franche-Comte Region and H2020 Marie Sklodowska-Curie Actions (MULTIPLY, No. 713694). The results of the numerical simulations presented in Sec. III and IV were obtained by V.V.S. in the framework of the Russian Science Foundation Grant No. 22-72-00038.
## Declarations
* The authors have no conflicts to disclose.
* The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2305.09811 | On the properties $\mathrm{SOP}_{2^{n+1}+1}$ | We show that approximations of strict order can calibrate the fine structure
of genericity. Particularly, we find exponential behavior within the
$\mathrm{NSOP}_{n}$ hierarchy from model theory. Let $0$-$\eth$-independence
denote forking-independence. Inductively, a formula $(n+1)$-$\eth$-divides over
$M$ if it divides by every $n$-$\eth$-independent Morley sequence over $M$, and
$(n+1)$-$\eth$-forks over $M$ if it implies a disjunction of formulas that
$(n+1)$-$\eth$-divide over $M$; the associated independence relation over
models is called $(n+1)$-$\eth$-independence. We show that a theory where
$n$-$\eth$-independence is symmetric or transitive must be
$\mathrm{NSOP}_{2^{n+1}+1}$. We then show that, in the classical examples of
$\mathrm{NSOP}_{2^{n+1}+1}$ theories, $n$-$\eth$-independence is symmetric and
transitive; in particular, there are strictly $\mathrm{NSOP}_{2^{n+1}+1}$
theories where $n$-$\eth$-independence is symmetric and transitive, leaving
open the question of whether symmetry or transitivity of
$n$-$\eth$-independence is equivalent to $\mathrm{NSOP}_{2^{n+1}+1}$. | Scott Mutchnik | 2023-05-16T21:25:36Z | http://arxiv.org/abs/2305.09811v2 | # On the properties \(\mathrm{SOP}_{2^{n+1}+1}\)
###### Abstract.
We show that approximations of strict order can calibrate the fine structure of genericity. Particularly, we find exponential behavior within the \(\mathrm{NSOP}_{n}\) hierarchy from model theory. Let \(\bigsqcup\,^{\mathfrak{G}^{n}}\) denote forking-independence. Inductively, a formula \((n+1)\)-\(\mathfrak{G}\)-_divides_ over \(M\) if it divides by every \(\bigsqcup\,^{\mathfrak{G}^{n}}\)-Morley sequence over \(M\), and \((n+1)\)-\(\mathfrak{G}\)-_forks_ over \(M\) if it implies a disjunction of formulas that \((n+1)\)-\(\mathfrak{G}\)-divide over \(M\); the associated independence relation over models is denoted \(\bigsqcup\,^{\mathfrak{G}^{n+1}}\). We show that a theory where \(\bigsqcup\,^{\mathfrak{G}^{n}}\) is symmetric or transitive must be \(\mathrm{NSOP}_{2^{n+1}+1}\). We then show that, in the classical examples of \(\mathrm{NSOP}_{2^{n+1}+1}\) theories, \(\bigsqcup\,^{\mathfrak{G}^{n}}\) is symmetric and transitive; in particular, there are strictly \(\mathrm{NSOP}_{2^{n+1}+1}\) theories where \(\bigsqcup\,^{\mathfrak{G}^{n}}\) is symmetric and transitive, leaving open the question of whether symmetry or transitivity of \(\bigsqcup\,^{\mathfrak{G}^{n}}\) is equivalent to \(\mathrm{NSOP}_{2^{n+1}+1}\).
## 1. Introduction
This paper is on Shelah's _strong order property_ hierarchy, the properties \(\mathrm{SOP}_{n}\) introduced in [35] and extended in [36], [13]. For \(n\geq 3\), these are defined as follows:
**Definition 1.1**.: _A theory \(T\) is \(\mathrm{NSOP}_{n}\) (that is, does not have the n-strong order property) if there is no definable relation \(R(x_{1},x_{2})\) with no \(n\)-cycles, but with tuples \(\{a_{i}\}_{i\in\omega}\) with \(\models R(a_{i},a_{j})\) for \(i<j\). Otherwise it is \(\mathrm{SOP}_{n}\)._
For \(1\leq n\leq 4\), these properties have been developed to various degrees. In [17], Kaplan and Ramsey extend the theory of forking-independence in simple theories to _Kim-independence_ in \(\mathrm{NSOP}_{1}\)_theories_, by modifying the definition of dividing to require an invariant Morley sequence to witness the dividing. There, a characterization of \(\mathrm{NSOP}_{1}\) is given in terms of symmetry for Kim-independence, using work of Chernikov and Ramsey in [9], and also in terms of the independence theorem for Kim-independence. Later work has continued the development of Kim-independence in \(\mathrm{NSOP}_{1}\) theories; for example, see Kaplan and Ramsey ([18]) for transitivity and witnessing; Kaplan, Ramsey and Shelah ([19]) for local character; Dobrowolski, Kim and Ramsey ([12]) and Chernikov, Kim and Ramsey ([4]) for independence over sets; Kruckman and Ramsey ([23]) and Kruckman, Tran and Walsberg ([24]) for improvements upon the independence theorem; Kim ([21]) for canonical bases; and Kamsma ([16]), Dobrowolski and Kamsma ([12]) and Dmitrieva, Gallinaro and Kamsma ([3]) for extensions to positive logic, as well as the examples, by various authors, of \(\mathrm{NSOP}_{1}\) theories in applied settings. See also Kim and Kim [22], Chernikov and Ramsey ([9]), Ramsey ([33]), and Casanova and Kim ([6]) for type-counting and combinatorial criteria for \(\mathrm{SOP}_{1}\) and \(\mathrm{SOP}_{2}\), and Ahn and Kim ([1]) for connections of \(\mathrm{SOP}_{1}\) and \(\mathrm{SOP}_{2}\) to the antichain tree property further developed by Ahn, Kim and Lee in [2].
The \(\mathrm{SOP}_{2}\)_theories_ were characterized by Dzamonja and Shelah ([13]), Shelah and Usvyatsov ([37]), and Malliaris and Shelah ([26]) as (under mild set-theoretic assumptions) the maximal class in the order \(\lhd^{*}\), related to Keisler's order, and in celebrated work of Malliaris and Shelah ([25]), they were shown to be maximal in Keisler's order, in ZFC. Then in [29], it was shown that a theory is \(\mathrm{NSOP}_{2}\) if and only if it is \(\mathrm{NSOP}_{1}\), bringing together Kim-independence and Keisler's order. It remains open whether all \(\mathrm{NSOP}_{3}\) theories are \(\mathrm{NSOP}_{2}\)
Generalizing work of Evans and Wong in [15], showing the \(\omega\)-categorical Hrushovski constructions introduced in [14], which have a natural notion of free amalgamation, are either simple or \(\mathrm{SOP}_{3}\), and work of Conant in [10] showing that all modular free amalgamation theories are simple or \(\mathrm{SOP}_{3}\), the author in [30] isolates two structural properties, with no known \(\mathrm{NSOP}_{4}\) counterexamples, which generalize [15] and [10] and imply that a theory must be either \(\mathrm{NSOP}_{1}\) or \(\mathrm{SOP}_{3}\). As a consequence, all free amalgamation theories are \(\mathrm{NSOP}_{1}\) or \(\mathrm{SOP}_{3}\). Malliaris and Shelah ([26]) show symmetric inconsistency for _higher formulas_ in \(\mathrm{NSOP}_{3}\) theories, and Malliaris ([27]) investigates the graph-theoretic depth of independence in relation to \(\mathrm{SOP}_{3}\). In [20], Ramsey, Kaplan and Simon show very recently that all binary \(\mathrm{NSOP}_{3}\) theories are simple, by giving a theory of independence for a class of theories containing all binary theories. Until recently, no consequences of \(\mathrm{NSOP}_{n}\) were known for the program of further extending the theory of Kim-independence in \(\mathrm{NSOP}_{2}=\mathrm{NSOP}_{1}\) theories to \(\mathrm{NSOP}_{n}\) for \(n>2\); then the author, in [28], shows that types in \(\mathrm{NSOP}_{3}\) theories with internally \(\mathrm{NSOP}_{1}\) structure satisfy Kim's lemma at an external level, as well as an independence theorem, and also shows that \(\mathrm{NSOP}_{3}\) theories with symmetric Conant-independence satisfy an independence theorem for finitely satisfiable types with the same Morley sequences, related to that proposed for \(\mathrm{NTP}_{2}\) theories by Simon ([38]).
Shelah, in [35], gives results on universal models in \(\mathrm{SOP}_{4}\) theories. Generalizing a line of argument from the literature originally used by Patel ([31]), Conant, in [10] (where an historical overview of this argument can be found), shows free amalgamation theories are \(\mathrm{NSOP}_{4}\). In [30], the author connects this result to a potential theory of independence in \(\mathrm{NSOP}_{4}\) theories, defining the relation of _Conant-independence_1:
Footnote 1: This was originally introduced under a nonstandard definition in [29], to show \(\mathrm{NSOP}_{2}\) theories are \(\mathrm{NSOP}_{1}\).
**Definition 1.2**.: _Let \(M\) be a model and \(\varphi(x,b)\) a formula. We say \(\varphi(x,b)\)_Conant-divides _over \(M\) if for every invariant Morley sequence \(\{b_{i}\}_{i\in\omega}\) over \(M\) starting with \(b\), \(\{\varphi(x,b)\}_{i\in\omega}\) is inconsistent. We say \(\varphi(x,b)\)_Conant-forks _over \(M\) if and only if it implies a disjunction of formulas Conant-dividing over \(M\). We say \(a\) is_Conant-independent _from \(b\) over \(M\), written \(a\underset{M}{\overset{K^{*}}{\downarrow}}_{M}b\), if \(\mathrm{tp}(a/Mb)\) does not contain any formulas Conant-forking over \(M\)._
By Kim's lemma (Theorem 3.16 of [17]), this coincides with Kim-independence in \(\mathrm{NSOP}_{1}\) theories. Conant-independence gives a plausible theory of independence for \(\mathrm{NSOP}_{4}\) theories:
**Fact 1.1**.: _(Theorem 6.2, [30]) Any theory where Conant-independence is symmetric is \(\mathrm{NSOP}_{4}\), and there are strictly \(\mathrm{NSOP}_{4}\) (\(\mathrm{NSOP}_{4}\) and \(\mathrm{SOP}_{3}\)) theories where Conant-independence is
_is symmetric. Thus \(n=4\) is the largest value of \(n\) so that there are strictly \(\mathrm{NSOP}_{n}\) theories where Conant-independence is symmetric._
In [30], the author characterizes Conant-independence in most of the known examples of \(\mathrm{NSOP}_{4}\) theories, where it is symmetric. This leaves open the question of whether Conant-independence is symmetric in all \(\mathrm{NSOP}_{4}\) theories, giving a full theory of independence for the class \(\mathrm{NSOP}_{4}\) theories.
However, to our knowledge, other than some examples ([35], [7]; see also [11]), little has been known about the properties \(\mathrm{SOP}_{n}\) for \(n\geq 5\).
The main result of this paper will be to generalize the interactions betwen \(\mathrm{SOP}_{4}\) and Conant-independence to the higher levels of the \(\mathrm{SOP}_{n}\) hierarchy. As with Conant-independence, we will move from the forking-independence "at a generic scale" considered by Kruckman and Ramsey's work on Kim-independence in ([17], where the phrase is coined), to forking-independence at a _maximally generic scale_, grounding our notion of independence in dividing with respect to _every_ Morley sequence of a certain kind, rather than just some Morley sequence. There is precedent for this kind of definition in the "strong Kim-dividing" of Kaplan, Ramsey and Shelah in [19], defined in the context of "dual local character" in \(\mathrm{NSOP}_{1}\) theories and grounding the defintion of Conant-independence.
We will also turn our attention to the _fine structure_ of the genericity in the sequences that witness dividing, taking into account the variation between different classes of Morley sequences. For Kim-independence in \(\mathrm{NSOP}_{1}\) theories, this fine structure is submerged: by Corollary 5.4 of [18], Kim-independence in \(\mathrm{NSOP}_{1}\) theories remains the same when one replaces invariant Morley sequences in the genericity with Kim-independence itself. In the examples of \(\mathrm{NSOP}_{4}\) theories where Conant-independence has been characterized, it can also be seen that Conant-independence remains the same when one replaces invariant Morley sequences in the definition (Definition 1.2) with Conant-nonforking Morley sequences; see remarks at the end of Section 2 of this paper. However, in, say, strictly \(\mathrm{NSOP}_{5}\) theories, Conant-independence cannot be symmetric, but a symmetric notion of independence can be obtained in some examples by replacing the invariant Morley sequences with nonforking Morley sequences. More generally, we can iteratively obtain different levels of genericity, the independence relations \(\mathop{\mathchoice{\kern 1.0pt\hbox{\vrule width 0.
In Section 2, we define \(\big{\downarrow}^{\mathfrak{S}^{n}}\), show some basic properties necessary for our main result, and give some connections with stability motivating the possibility that symmetry for \(\big{\downarrow}^{\mathfrak{S}^{n}}\) forms a hierarchy.
In section 3, we characterize \(\big{\downarrow}^{\mathfrak{S}^{n}}\) in the classical examples of \(\mathrm{NSOP}_{2^{n+1}+1}\) theories, including the generic directed graphs without short directed cycles and undirected graphs without short odd cycles of [34], and the free roots of the complete graph of [7], developed in [11]. Though \(\big{\downarrow}^{\mathfrak{S}^{n}}=\big{\downarrow}^{a}\) will be trivial (and thus symmetric) in these classical examples, giving us the existence result of our main theorem, it is promising for the full characterization, Question 4.6, that the cycle-free examples and the free roots of the complete graph have \(\big{\downarrow}^{\mathfrak{S}^{n}}=\big{\downarrow}^{a}\) for different reasons. In the cycle-free examples, successive approximations of forking-independence tend towards larger graph-theoretic distances, while in the free roots of the complete graph, distances in successive approximations of forking-independence tend _away_ from the extremes.
In section 4, we show that if \(\big{\downarrow}^{\mathfrak{S}^{n}}\) is symmetric in the theory \(T\), then \(T\) is \(\mathrm{NSOP}_{2^{n+1}+1}\), completing our main result. We pose the converse as an open question.
## 2. Definitions and basic properties
We recall the defintion of _forking-independence_:
**Definition 2.1**.: _A formula \(\varphi(x,b)\) divides over a model \(M\) if there is an \(M\)-indiscernible sequence \(\{b_{i}\}_{i<\omega}\) with \(b_{0}=b\) and \(\{\varphi(x,b_{i})\}_{i<\omega}\) inconsistent. A formula \(\varphi(x,b)\) forks over a model \(M\) if there are \(\varphi_{i}(x,b_{i})\) dividing over \(M\) so that \(\models\varphi(x,b)\to\bigvee_{i=1}^{n}\varphi_{i}(x,b_{i})\). We say that \(a\) is_ forking-independent _from \(b\) over \(M\), denoted \(a\big{\downarrow}^{f}_{A}b\), if \(\mathrm{tp}(a/Mb)\) contains no formulas forking over \(M\)._
We define the relations \(\big{\downarrow}^{\mathfrak{S}^{n}}\), in analogy with the _Conant-independence_ of [30]. To give the definition, we need to generalize the notion of Morley sequence to any relation between sets over a model.
**Definition 2.2**.: _Let \(\big{\downarrow}\) be a relation between sets over a model. An \(\big{\downarrow}\)-Morley sequence over \(M\) is an \(M\)-indiscernible sequence \(\{b_{i}\}_{i<\omega}\) with \(b_{i}\big{\downarrow}_{M}b_{0}\ldots b_{i-1}\) for \(i<\omega\)._
Let \(a\big{\downarrow}^{u}_{M}b\) denote that \(\mathrm{tp}(a/Mb)\) is \(M\)-finitely satisfiable; as elsewhere in the literature, a _finitely satisfiable_ or _coheir Morley sequence_ will be a \(\big{\downarrow}^{u}\)-Morley sequence.
**Definition 2.3**.: _(1) Let \(\big{\downarrow}^{\mathfrak{S}^{0}}\), \(0\)-\(\mathfrak{G}\)-independence, denote forking-independence over a model \(M\); a formula \(0\)-\(\mathfrak{G}\)-divides (\(0\)-\(\mathfrak{G}\)-forks) over \(M\) if it divides (forks) over \(M\)._
_Inductively,_
_(2a) A formula \(\varphi(x,b)\)\((n+1)\)-\(\mathfrak{G}\)-divides over a model \(M\) if, for any \(\big{\downarrow}^{\mathfrak{S}^{n}}\)-Morley sequence \(\{b_{i}\}_{i<\omega}\) with \(b_{0}=b\), \(\{\varphi(x,b_{i})\}_{i<\omega}\) is inconsistent.2_
Footnote 2: It is not immediate that this defintion is independent of adding or removing unusual parameters in \(b\), though this is corrected by the definition of \(n+1\)-\(\mathfrak{G}\)-forking. We fix the convention that a formula only has finitely many parameters. Fixing this convention, it will follow from the results of this section that \(n\)-\(\mathfrak{G}\)-dividing of a formula \(\varphi(x,b)\) is independent of adding or removing unused parameters in \(b\) for \(n>1\); this is not known for \(n=1\).
_(2b) A formula \(\varphi(x,b)\)\((n+1)\)-\(\eth\)-forks over a model \(M\) if there are \(\varphi_{i}(x,b_{i})\)\((n+1)\)-\(\eth\)-dividing over \(M\) so that \(\models\varphi(x,b)\to\bigvee_{i=1}^{n}\varphi_{i}(x,b_{i})\)._
_(2c) We say that \(a\) is \((n+1)\)-\(\eth\)-independent from \(b\) over \(M\), denoted \(a\mathop{\mathchoice{\kern 0.0pt\hbox{\vrule width 0.0pt height 3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule 
width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0.0pt height -3.0pt depth -0.0pt\kern-3.0pt\vrule width 0pt height -3.0pt depth -0.
\(e_{i}\subseteq d_{i}\) with \(\{e_{i}\}_{i<\omega}\ Ma^{\prime}c^{\prime}\)-indiscernible and \(e_{0}=e\). By definition of \(\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 2.099968pt\vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{ \hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}}^{\mathfrak{n}^{n-1}}\)-Morley sequence, for \(i<\omega\), \(d_{i}\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 1.499977pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}}^{\mathfrak{n}^{n-1}}d_{0}\ldots d_{i-1}\). So it follows from the definition of \(\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 2.099968pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}}_{M}^{\mathfrak{n}^{n-1}}\) that \(e_{i}\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 2.099968pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}}_{M}^{\mathfrak{n}^{n-1}}e_{0}\ldots e_{i-1}\) (i.e. \(\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 2.099968pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}}_{M}^{\mathfrak{n}^{n-1}}\) is monotone.) So \(\{e_{i}\}_{i<\omega}\) is an \(\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 1.499977pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}}_{M}^{\mathfrak{n}^{n-1}}\)-Morley sequence over \(M\). Let \(\varphi^{\prime}(y,x,e)\in\operatorname{tp}(c^{\prime}a^{\prime}/Md)\). Then by \(Ma^{\prime}c^{\prime}\)-indiscernibility, \(\{\varphi^{\prime}(y,x,e_{i})\}_{i<\omega}\) is consistent, realized by \(a^{\prime}c^{\prime}\). So \(\varphi^{\prime}(y,x,e)\) does not \(n\)-\(\mathfrak{G}\)-divide over \(M\).
**Proposition 2.2**.: _For \(n\geq 2\), \(n\)-\(\mathfrak{G}\)-forking coincides with \(n\)-\(\mathfrak{G}\)-dividing._
Proof.: Exactly as in Fact 6.1 of [30], using right- and left-extension for \(\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 2.099968pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}}^{\mathfrak{n}^{n-1}}\), and the standard arguments. Suppose \(\varphi(x,b)\)\(n\)-\(\mathfrak{G}\)-forks over \(M\). Then \(\varphi(x,b)\to\bigvee_{j=1}^{N}\varphi_{j}(x,c^{j})\) for some \(\varphi_{j}(x,c^{j})\)\(n\)-\(\mathfrak{G}\)-dividing over \(M\). We show that \(\varphi(x,b)\)\(n\)-\(\mathfrak{G}\)-dividing over \(M\); suppose otherwise. Then by the definition of \(n\)-\(\mathfrak{G}\)-dividing and compactness, there is an \(\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.0999968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 2.099968pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}}^{\mathfrak{n}^{n-1}}\)-Morley sequence \(\{b_{i}\}_{i<\kappa}\), for large \(\kappa\), i.e. indiscernible over \(M\) with \(b_{i}\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt\vrule height 6.299904pt wid th 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{ \kern 1.499977pt\vrule height 6.299904pt width 1px\hss}\hbox{$ \searrow$}}}}^{\mathfrak{n}^{n-1}}b_{<i}\) for \(i<\kappa\), with \(b_{0}=b\) and \(\{\varphi(x,b_{i})\}_{i<\kappa}\) consistent. By induction we find \(c^{j}_{i}\), \(1\leq j\leq N\), \(i<\kappa\), so that \(\{c^{j}_{i}\}_{j=1}^{N}b_{i}\equiv_{M}\{c^{j}\}_{j=1}^{N}b\) and \(\{c^{j}_{i}\}_{j=1}^{N}b_{i}\mathop{\mathchoice{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 2.099968pt \vrule height 6.299904pt width 1px\hss}\hbox{$\searrow$}}}{\hbox{\hbox to 0.0pt{\kern 1.499977pt\vrule heigh 6.299904pt width 1px\hss}\hbox{$\searrow$}}}}_{M}^{\mathfrak{n}^{n-1}}\{c^{j}_{<\lambda}\}_{j=1}^{N}b_{<i}\) for \(i<\kappa\). Suppose by induction that for \(\lambda<\kappa\) we have found \(c^{j}_{i}\), \(1\leq j\leq N\), \(i<\lambda\), so that \(\{
**Definition 2.4**.: _A theory \(T\) satisfies the stable \(n\)-\(\etheth\)-forking conjecture if whenever \(a\mathrel{\mathop{\mathchoice{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox t o 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt \hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$ \mid$}\hss}{\kern 0.0pt\hbox to 0.0pt{\hss$\mid$}\hss}{\kern 0.
By lowness and \(n\)-\(\eth\)-dividing of \(\varphi(x,a)\) over \(M\), the successors to each node witness \(k\)-dividing of \(\varphi(x,a)\) over \(M\) for some fixed \(k\). This together with \(1\) gives the \(k\)-tree property for \(\varphi(x,a)\), a contradiction.
We now show the linear hierarchy obtained when symmetry of \(n\)-\(\eth\)-independence is improved to the conclusion of the stable \(n\)-\(\eth\)-forking conjecture.
**Proposition 2.6**.: _If \(T\) satisfies the stable \(n\)-\(\eth\) forking conjecture, then \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}=\raisebox{-0.86pt}{\mbox{ \Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n+1}}\) (so \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}=\raisebox{-0.86pt}{\mbox{ \Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{m}}\) for \(m\geq n\).)_
Proof.: It suffices to show that for \(\varphi(x,y)\) a stable formula, if \(\varphi(x,b)\)\(n\)-\(\eth\)-forks over \(M\), so \(n\)-\(\eth\)-divides over \(M\), then it \(n+1\)-\(\eth\)-divides over \(M\). Let \(\{b_{i}\}_{i<\omega}\) be an \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\)-Morley sequence over \(M\); by the definition of \(n+1\)-\(\eth\)-dividing, it suffices to show that \(\{\varphi(x,b_{i})\}_{i<\omega}\) is \(k\)-inconsistent. Suppose otherwise. By compactness, extend \(I=\{b_{i}\}_{i<\omega}\) to a \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\)-Morley sequence \(\{b_{i}\}_{i<\omega+\omega}\) over \(M\). Then \(\{b_{i}\}_{\omega\leq i<\omega+\omega}\) is a (nonforking) Morley sequence over \(MI\) that does not witness dividing of \(\varphi(x,b_{\omega})\). So it suffices to show that \(\varphi(x,b_{\omega})\) divides over \(MI\) anyway, contradicting the basic properties of stability. By the previous proposition, \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\) is symmetric, so \(I\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}_{M}^{\eth^{n}}b_{\omega}\). Note that \(\varphi(x,b_{\eta})\) divides over \(M\). So by lowness, there is some \(k\) so that each \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n-1}}\)-Morley sequence over \(M\) starting with \(b_{\omega}\) witnesses \(k\)-dividing of \(\varphi(x,b_{\eta})\). Now for any formula in \(\operatorname{tp}(b_{\omega}/MI)\), by \(I\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}_{M}^{\eth^{n}}b_{\omega}\) there is an \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n-1}}\)-Morley sequence over \(M\) starting with \(b_{\omega}\), each term of which realizes this formula. In sum, for any formula in \(\operatorname{tp}(b_{\omega}/MI)\), there is an \(M\)-indiscernible sequence of realizations of this formula witnessing the \(k\)-dividing of \(\varphi(x,b_{\omega})\) over \(M\). So by compactness, \(\varphi(x,b_{\eta})\)\(k\)-divides over \(MI\).
In the next section, we will characterize \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\) in the classical examples of \(\operatorname{NSOP}_{2^{n+1}+1}\) theories, for \(n\geq 1\); it will be trivial in these examples, so satisfies the stable \(n\)-\(\eth\)-forking conjecture. Note that, if we start with the analogous stability assumption for Conant-independence (see [30]), the proof of the previous propositions show that it coincides with \(n\)-\(\eth\)-independence for \(n\geq 1\), and is symmetric. There are no known counterexamples to the "stable Conant-forking conjecture" for \(\operatorname{NSOP}_{4}\) theories. 3
Footnote 3: Conant-independence is characterized for some classical examples of \(\operatorname{NSOP}_{4}\) theories in [30] For the Fraisse-Hrushovski constructions of finite language, where the author has shown that Conant-independence coincides with \(d\)-independence, the proof of the stable forking conjecture for the simple case of these structures in [32], [14] should extend to the general case using this characterization.
## 3. Attainability/examples
In \(\operatorname{NSOP}_{1}\) theories, \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\) is just Kim-independence for \(n\geq 1\). Moreover, in the examples of \(\operatorname{NSOP}_{4}\) theories where Conant-independence has been characterized, it coincides with \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\) and is symmetric (see the end of the previous section). We now give some proper examples. These examples will show the attainability of \(\operatorname{SOP}_{2^{n+1}}\) as the bound on the levels of the \(\operatorname{SOP}_{k}\) hierarchy where \(\raisebox{-0.86pt}{\mbox{\Large$\mid$}}\raisebox{-0.86pt}{\mbox{\Large$\mid$}} \raisebox{-0.86pt}{\mbox{\Large$\mid$}}^{\eth^{n}}\) can be symmetric or transitive.
**Example 3.1**.: (Free roots of the complete graph) In [7], Casanovas and Wagner show that the theory \(T_{n}^{-}\) of metric spaces valued in the set \(\{0,\ldots,n\}\) has a model companion
\(T_{n}\). (More precisely, this is interdefinable with the theory introduced in [7], but we use the language of metric spaces.) They show that this theory is \(\omega\)-categorical, eliminates quantifiers, and has trivial algebraicity, and that it is NSOP but not simple. Later, Conant and Terry show in [11] that \(T_{n}\) is strictly \(\mathrm{NSOP}_{n+1}\). We want to show the following
**Theorem 3.2**.: _In \(T_{n}\), if \(2^{k+1}\leq n\), a \(\mathop{\mathchoice{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt \hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 
0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox t o o 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\kern \kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to {\kern 0.0pt\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to 0.0pt{\kern 0.0pt\hbox to.0pt{\kern 0.0pt\hbox to 0.0pt{\kern
In the case where \(d(a_{1},b)<n^{*}\) and \(d(a_{2},b)\leq n^{*}\), we first show \(d(a_{1},b)\leq d(a_{2},b)+d(a_{1},a_{2})\) and \(d(a_{2},b)\leq d(a_{1},b)+d(a_{1},a_{2})\). If additionally, \(d(a_{2},b)=n^{*}\), then the first of these inequalities is immediate, so it suffices to prove the second inequality, as by symmetry we will have then proven the first inequality when both \(d(a_{1},b)<n^{*}\) and \(d(a_{2},b)<n^{*}\). By (a), there is some \(c\in C\) with \(d(a_{1},b)=d(a_{1},c)+d(b,c)\), so by (a), (c), \(d(a_{2},b)\leq d(a_{2},c)+d(c,b)\leq d(a_{1},a_{2})+d(a_{1},c)+d(c,b)=d(a_{1},a _{2})+d(a_{1},b)\). Finally, we show in this case that \(d(a_{1},a_{2})\leq d(a_{2},b)+d(a_{1},b)\). For any \(c\in C\), if \(d(a_{2},b)=n^{*}\) this must be because \(m^{a_{2}b}\leq n^{*}\), and if \(d(a_{2},b)<n^{*}\), then \(d(a_{2},b)=m_{ab}\geq m^{ab}\), so \(d(a_{2},b)\geq d(a_{2},c)-d(c,b)\). So \(d(a_{1},a_{2})\leq d(a_{2},b)+d(a_{1},b)\) follows exactly as in the case of \(d(a_{1},b)<n^{*}\) and \(d(a_{2},b)>n^{*}\).
Finally, if \(d(a_{1},b)>n^{*}\) and \(d(a_{2},b)\geq n^{*}\), then \(d(a_{1},a_{2})\leq d(a_{1},b)+d(a_{2},b)\) is immediate. We next show \(d(a_{1},b)\leq d(a_{2},b)+d(a_{1},a_{2})\) and \(d(a_{2},b)\leq d(a_{1},b)+d(a_{1},a_{2})\). If \(d(a_{2},b)=n^{*}\), then the second inequality is immediate, so it suffices to prove the first inequality, as by symmetry we will have then proven the second inequality when both \(d(a_{1},b)>n^{*}\) and \(d(a_{2},b)>n^{*}\). By (a) there will be some \(c\in C\) with either \(d(a_{1},b)=d(a_{1},c)-d(b,c)\) or \(d(a_{1},b)=d(b,c)-d(a_{1},c)\). In the case of \(d(a_{1},b)=d(a_{1},c)-d(b,c)\) by (b), (c), \(d(a_{2},b)\geq d(a_{2},c)-d(b,c)\geq d(a_{1},c)-d(a_{2},a_{1})-d(b,c)=d(a_{1}, b)-d(a_{2},a_{1})\), proving the first inequality in this case. In the case of \(d(a_{1},b)=d(b,c)-d(a_{1},c)\), again by (b), (c), \(d(a_{2},b)\geq d(b,c)-d(a_{2},c)=d(a_{1},c)+d(a_{1},b)-d(a_{2},c)=d(a_{1},b)- (d(a_{2},c)-d(a_{1},c))\geq d(a_{1},b)-d(a_{1},a_{2})\), proving the first inequality in the other case.
**Remark 3.4**.: Say that in the statement of Lemma 3.3, we let \(n^{*}=\lceil\frac{n}{2}\rceil+1\) instead of \(\lceil\frac{n}{2}\rceil\). Then the above proof works: \(n^{*}\) can be any constant at least \(\lceil\frac{n}{2}\rceil\), and the only place that \(n^{*}\geq\lceil\frac{n}{2}\rceil\) is used is in the case where \(a_{1},a_{2}\in A\backslash C\), \(b\in B\backslash C\) and \(d(a_{1},b)=d(a_{2},b)=n^{*}\). In the sequel, we will still let \(n^{*}\) denote \(\lceil\frac{n}{2}\rceil\)
**Definition 3.1**.: _Let \(C\subseteq A,B\) be subspaces of some fixed metric space with values in \(\{0,\ldots,n^{*}\}\), with \(A\cap B=C\)._
_(1) \(A\) and \(B\) are freely amalgamated over \(C\) if the inclusions \(\iota_{A}\) and \(\iota_{B}\) satisfy the conclusion of Lemma 3.3._
_(2a) For \(k\leq n\), \(A\) and \(B\) have distance \(\leq n\) over \(C\) if for \(a\in A\backslash C\), \(b\in B\backslash C\), \(d(a,b)\leq\max(k,m^{ab})\)._
_(2b) For \(k\geq 1\), \(A\) and \(B\) have distance \(\geq n\) over \(C\) if for \(a\in A\backslash C\), \(b\in B\backslash C\), \(d(a,b)\geq\max(k,m_{ab})\)._
**Lemma 3.5**.: _Let \(C\subseteq A,B,D\) be subsets of a fixed metric space, and \(1\leq k_{1}\leq n_{*}\leq n_{2}\leq n\). Suppose \(A\cup D\) and \(B\cup D\) are freely amalgamated over \(D\), and both \(A\) and \(B\) have distance \(\geq k_{1}\) and \(\leq k_{2}\) from \(D\) over \(C\). Then \(A\) has distance \(\geq\min(n^{*},2k_{1})\) and \(\leq\max(k_{2}-k_{1},n^{*})\) from \(B\) over \(C\), and moreover, \(D\) has distance \(\geq k_{1}\) and \(\leq k_{2}\) from \(A\cup B\) over \(C\)._
Proof.: The second clause is obvious, so we prove the first; let \(d\) be the metric. First we show that \(A\) has distance \(\geq\min(n^{*},2k_{1})\) from \(B\) over \(C\). Let \(a\in A\backslash C\), \(b\in B\backslash C\). It suffices to show that if \(d(a,b)<\min(n^{*},2k_{1})\) then there is some \(c\in C\) with \(d(a,c)+d(c,b)\leq d(a,b)\). Because \(d(a,b)<n^{*}\), there is some \(d\in D\) with \(d(a,d)+d(b,d)=d(a,b)<2k_{1}\). So either \(d(a,d)\) or \(d(b,d)\) must be less that \(k_{1}\). Without loss of generality, \(d(a,d)<k_{1}\)
Then because \(D\) and \(A\) have distance \(\leq k_{1}\) over \(C\), there is some \(c\in C\) with \(d(a,d)=d(a,c)+d(c,d)\). Then \(d(a,b)=d(a,c)+d(c,d)+d(b,d)\geq d(a,c)+d(c,b)\). Next we show that \(A\) has distance \(\leq\max(k_{1}-k_{2},n^{*})\) from \(B\) over \(C\). Let \(a\in A\backslash C\), \(b\in B\backslash C\). It suffices to show that if \(d(a,b)>\max(k_{2}-k_{1},n^{*})\), there is some \(c\in C\) with \(d(a,b)\leq|d(a,c)-d(b,c)|\). Because \(d(a,b)>n^{*}\), without loss of generality there is some \(d\in D\) with \(d(a,d)-d(b,d)=d(a,b)>k_{2}-k_{1}\). So either \(d(a,d)>k_{2}\) or \(d(b,d)<k_{1}\). In the case where \(d(a,d)>k_{2}\), since \(A\) has distance \(\geq k_{2}\) from \(D\) over \(C\), there is some \(c\in C\) so that either \(d(a,d)=d(a,c)-d(c,d)\) or \(d(a,d)=d(c,d)-d(a,c)\). If \(d(a,d)=d(a,c)-d(c,d)\), then \(d(a,b)=d(a,c)-d(b,d)-d(c,d)\leq d(a,c)-d(b,c)\). If \(d(a,d)=d(c,d)-d(a,c)\), then \(d(a,b)=d(c,d)-d(a,c)-d(b,d)\leq d(c,b)-d(a,c)\). In the case where \(d(b,d)<k_{1}\), since \(B\) has distance \(\geq k_{1}\) from \(D\) over \(C\), there is some \(c\in C\) with \(d(b,d)=d(b,c)+d(c,d)\). Then \(d(a,b)=d(a,d)-d(b,c)-d(c,d)\leq d(a,c)-d(b,c)\).
**Lemma 3.6**.: _Let \(A\) and \(B\) have distance \(\geq n^{*}\) and \(\leq n^{*}+1\) over \(C\). Then \(A\mathop{\mathchoice{\kern 0.0pt\hbox{\lower 3.0pt\hbox{\lower 3.0pt\hbox{\vrule height 3.0pt depth -0.0pt width 1px\hss}\hss}}{\kern 3.0pt\hbox{\lower 3.0pt \hbox{\vrule height 3.0pt depth -0.
there is some \(c\in C\) with either \(d(b,a)=d(b,c)-d(a,c)\) or \(d(b,a)=d(a,c)-d(b,c)\). If \(d(b,a)=d(b,c)-d(a,c)\), then \(d(a,d)=(d(b,c)-d(b,d))-d(a,c)\leq d(d,c)-d(a,c)\). If \(d(b,a)=d(a,c)-d(b,c)\), then \(d(a,d)=d(a,c)-(d(b,c)+d(b,d))\leq d(a,c)-d(d,c)\). Either way, this is as desired.
Then if \(A\) and \(B\) have distance \(\geq n^{*}\) and \(\leq n^{*}+1\) over \(C\), we can assume \(B\) is an \(|C|^{+}\)-saturated model by the claim, so the it follows as in the second paragraph of the proof of Lemma 2.1.2 and the fact that \(\operatorname{tp}(A/BC)\) does not divide over \(C\) that \(A\mathrel{\mathop{\mathchoice{\kern 0.0pt\hbox to 0.0pt{\hss\vrule height 6.0pt wid th 0.4pt depth -0.0pt\hss}\hbox to 0.0pt{\hss\vrule height 6.0pt wid th 0.4pt depth -0.0pt\hss}}{\kern 0.0pt\hbox to 0.0pt{\hss\vrule height 6.0pt wid th 0.4pt depth -0.0pt\hss}}{\kern 0.0pt\hbox to 0.0pt{\hss\vrule height 6.0pt wid th 0.4pt depth -0.0pt\hss}}{\kern 0.
So if \(n\) is even any pair of distinct nodes \(a,b\) has one of \(2n^{*}=n\) types, depending on the length of a minimal path, which will be at most \(n^{*}\), and the direction of that path; call this the "directed distance" between \(a\) and \(b\). If \(n\) is odd, any pair of distinct nodes has one of \(2n^{*}+1=n\) types, depending on the length of a minimal path, which will be at most \(n^{*}\), and the direction of that path if its length is \(<n^{*}\); if the length of a minimal path is \(n^{*}\), \(a\) and \(b\) will have an bi-directed distance of \(n^{*}\), and \(a\) and \(b\) will otherwise have a directed distance of \(<n^{*}\).
The type of a set \(S\) of distinct nodes will be determined by the distances between any two elements of \(S\). We give necessary and sufficient criteria for an assignment of distances to pairs of nodes in \(S\) to be consistent:
(1) If a directed path of length \(k\leq\frac{n}{2}\) is indicated by a chain of directed distances from \(a\) and \(b\) with total length \(d\) (i.e. there are \(a=a_{0},a_{1},\ldots,a_{m-1},a_{m}=b\), with a directed distance of \(d_{i}\) from \(a_{i-1}\) to \(a_{i}\) and \(d=\sum_{i=1}^{n}d_{i}\leq\frac{n}{2}\)), then the directed distance from \(a\) to \(b\) is at most \(d\).
(2) Chains of distances distances cannot indicate a directed cycle of length \(\leq n\) (i.e. \(a=a_{0},a_{1},\ldots,a_{m-1},a_{m}=a\), with a directed (or bi-directed) distance of \(a_{i}\) from \(a_{i-1}\) to \(a_{i}\) and \(d=\sum_{i=1}^{n}d_{i}\leq n\)).
Clearly (1) and (2) are necessary. To show they are sufficient, assume distances are assigned to pairs in a set \(S\); we will find a model \(M\) containing \(S\) realizing those assignments. For each pair \(a,b\in S\), if a directed distance of \(d\) (or an bi-directed distance of \(d=n^{*}\)) is assigned from \(a\) to \(b\), draw a path from \(a\) to \(b\) of length \(d\) as well as a path in the opposite direction of length \(n+1-d\). We show that \(S\) together with these new vertices has no directed cycles of length \(\leq n\). Suppose it contained such a cycle. That cycle can be partitioned into the paths added between elements of \(S\). Suppose it contained the longer of the two paths, of length \(n-d+1\), added from, say, \(b\) to \(a\). It cannot contain the shorter path, of length \(d\), between \(a\) and \(b\), as then this cycle would be too long. But perhaps there another path that is even shorter than the directed distance \(d\) from \(a\) to \(b\), formed out of paths of length less than \(d\leq n^{*}\) between other vertices in \(S\). This cannot happen, by (1). (This also handles the case where \(a\) and \(b\) have an bi-directed distance of \(n^{*}\) when \(n\) is odd.) Otherwise, all of the added paths between elements of \(S\) that make up the cycle of length \(\leq n\), are the paths of length \(<n^{*}\) going in the direction of the directed distances. But this cannot happen, by (2). Call the union of \(S\) with the additional paths \(T\), which we have shown has no cycles of length \(\leq n\), and find a model \(M\) containing \(T\). Then any two nodes in \(S\) will have directed distance in \(M\) at most the assigned distance \(d\), as we added the shorter path going in that direction, but no less than the assigned distance, as we added the longer paths of length \(n-d\) in the other direction, so a new path of length \(<d\) in the direction of the directed distance would create a cycle of length \(\leq n\).
We next show, as an analogue of Lemma 3.3, that sets can be "freely amalgamated:" if \(A\cap B=C\) with \(A\) and \(B\) given consistent assignments of distances agreeing on \(C\), then there is an assignment of distances on \(C\) agreeing with that on \(A\) and \(B\) and so that for \(a\in A\backslash C\), \(b\in B\backslash C\), the distance between \(a\) and \(b\) is the least total length of a chain of directed distances of total length \(\leq n^{*}\) between \(A\) and \(B\) going through \(C\) (i.e., without requiring a directed distance between a point of \(A\backslash C\) and a point of \(B\backslash C\) as one of the steps in the chain), going in the direction of that chain, and is otherwise of length \(n^{*}\). We
first show that the directed distances already within \(A\cup B\) (i.e. not between a point of \(A\backslash C\) and a point of \(B\backslash C\)) satisfy (1) and (2) (i.e., if \(a\in A\) then \(b\in A\), and if \(a_{i}\in A\) then \(a_{i+1}\in A\), and similarly for \(B\).) For (1), we can assume without loss of generality that \(a\) and \(b\) are in \(A\), and then the chain can be broken up into parts in \(A\) and parts in \(B\) each going between two nodes of \(C\), but all of the parts in \(B\) can be presumed to be in \(C\subseteq A\), by (1) on \(B\); having assumed this, we can then apply 1 on \(A\). For (2), a directed cycle of length \(\leq n\) formed by chains of directed distances can similarly be broken up into parts in \(A\) and parts in \(B\), but only one of the parts, say in \(A\) can have length \(\geq n^{*}\), and all of the other parts, by (1) in \(A\) and \(B\), can be presumed in \(C\subseteq A\), contradicting (2) in \(A\). So the direction of the chain of shortest total length \(\leq n^{*}\) between a point of \(A\backslash C\) and a point of \(B\backslash C\) going through \(C\) is well-defined, by (2) for chains in \(A\cup B\) going through \(C\), so if such a chain exists we use it as the definition for the directed distance, and otherwise we choose a distance in an _arbitrary_ direction of size \(n^{*}\).
It remains to show (1) and (2) on the whole of \(A\cup B\). For (1), if \(a,b\) are not both in \(A\) or both in \(B\), then any directed distance of length \(\leq n^{*}\) on a chain from \(a\) to \(b\) between a point of \(A\backslash C\) and a point of \(B\backslash C\) can be replaced with a chain going through \(C\) of the same total length in the same direction, by construction. So any chain of directed distances between \(a\) and \(b\) can be replaced by a chain of directed distances of the same total length between \(a\) and \(b\) and in the same direction, going through \(C\), so the directed distance between \(a\) and \(b\) will be as short or shorter in that direction, by definition. To complete (1) on the whole of \(A\cup B\), if \(a,b\) both belong to, say, \(A\), then again by construction we can assume a chain of directed distances to be a chain going through \(C\), and then use (1) for chains going through \(C\). Finally, for (2), suppose first that the distances indicating a \(\leq n\) cycle are either between two points of \(A\) or \(B\) or added between a point of \(A\backslash C\) and a point of \(B\backslash C\) because of a chain of the same length and the same direction going through \(C\). Then those distances can be replaced with those chains, reducing (2) to the case of chains going through \(C\), see above. Otherwise, one of the distances must be \(n^{*}\) between a point \(a\in A\backslash C\) and a point \(b\in B\backslash C\), added because there is no chain going through \(C\) of length \(\leq n^{*}\). But the rest of the cycle must be a chain of distances of total length \(\leq n^{*}\), each of which, being of length \(<n^{*}\) can be replaced with chains going through \(C\), and can thus be assumed a chain of length \(<n^{*}\) going through \(C\) from \(a\) to \(b\), a contradiction to the assumption on \(a\) and \(b\) that caused their distance to have length \(n^{*}\).
Say that \(C\subset A,B\), sitting in a fixed model, have distance \(\geq k\) over \(C\) for \(k\leq n^{*}\) if \(A\cap B=C\) and any \(a\in A\backslash C\) and \(b\in B\backslash C\) have distance in the same length and direction as the directed chain of distances of minimal total length through \(C\) in \(A\cup B\) if that length is \(<k\), and otherwise have distance of length \(\geq k\). We show an analogue of Lemma 3.5: if \(C\subseteq A,B,D\) with \(A\cup D\) and \(B\cup D\) freely amalgamated over \(D\) as in the above discussion, and \(A\) and \(B\) both of distance \(\geq k\) from \(D\) over \(C\), then (a) \(A\) has distance \(\geq\min(2k,n^{*})\) from \(B\) over \(C\) and (b) \(A\cup B\) has distance \(\geq k\) from \(D\) over \(C\). To show \(A\), if a point \(a\in A\backslash C\) and \(b\in B\backslash C\) have distance of length \(<\min(2k,n^{*})\) (say, going from \(a\) to \(b\)), then there must be a chain of length \(<\min(2k,n^{*})\) going through \(D\). This chain can be broken alternately into parts entirely in \(A\) and entirely in \(B\). Using the fact that the chain has length \(\leq n^{*}\), all of the parts except for the first and the last can be assumed entirely in \(D\), and then replaced by (1) with a single distance between two points in \(D\); that distance and
the distance from the second point in \(D\) to \(B\) can then be replaced by a single distance, giving a chain of length \(2\), going from \(a\) to \(d\in D\) and then to \(B\). One of the two distances, say from \(a\) to \(d\), must have length \(<k\), so there must be a chain of distances with the same total length and direction from \(A\) to \(D\), going through \(C\) in \(A\cup D\). So we can replace the distance from \(a\) to \(d\) with a distance from \(a\) to \(c\) followed by from \(c\) to \(d\), and can then replace the distance from \(c\) to \(d\) and from \(d\) to \(b\) with a single distance from \(c\) to \(b\), yielding a chain of equal or shorter length from \(a\) to \(b\) going through \(C\), as desired. Meanwhile, (b) is obvious.
Next, we show, if \(A\bigmdownarrow_{C}B\) denotes a distance of \(\geq n^{*}\) (so, \(A\) and \(B\) are freely amalgamated over \(C\); note that this amalgamation is not unique), it implies forking-independence. We first show it implies dividing-independence: let \(I=\{B_{i}\}_{i<\omega}\) with \(B_{0}=B\) be a \(C\)-indiscernible sequence. Assign distances on \(A\cup I\) to give \(AB_{i}\) the same quantifier-free type as \(AB\), and so the assignment agrees with the actual one on \(I\). It suffices to show this satisfies (1) and (2). If \(b\in B_{i}\backslash C\) and \(a\in A\backslash C\), then if there is a chain of distances of length \(\leq n^{*}\) in \(A\cup I\) going through \(C\) between \(a\) and \(b\), then by breaking this chain up into parts in \(A\) and parts in \(I\), and replacing the parts in \(I\) with parts in \(B_{i}\), there is a chain of distances of total length at least as short going in the same direction in \(A\cup B_{i}\) through \(C\). So the distance between \(a\) and \(b\) in our chosen assignment is the length and direction of the chain of least total length between \(a\) and \(b\) in \(A\cup I\) going through \(C\), using the actual assignments on \(A\) and \(I\). For the other pairs \(a\in A\backslash C\) and \(b\in I\backslash C\), a distance of length \(n^{*}\) is chosen. But in constructing the free amalgam, we showed that an arbitrary choice of direction for distances of length \(n^{*}\) (though there will only be a choice when \(n\) is even) is allowed for pairs with no path of length \(\leq n^{*}\) going through the base. So our chosen assignment is one instance of the construction of the free amalgam of \(A\) and \(I\) over \(C\), so in fact satisfies (1) and (2).
It remains to show right-extension for \(\bigmdownarrow\), which will hold if it is transitive, that is for \(C\subseteq A\) and \(C\subseteq B\subseteq D\), \(A\bigmdownarrow_{C}B\) and \(A\bigmdownarrow_{B}D\) implies \(A\bigmdownarrow_{C}D\). Suppose that \(a\in A\backslash C\), \(d\in D\backslash C\) and \(a\) and \(d\) have a distance \(<n^{*}\), say, going from \(a\) to \(d\). Then there is, as above, a chain of distances going from, say \(a\) to \(b\in B\) to \(d\), of the same total length. There is also a chain going from \(a\) to \(c\in C\) to \(d\) of the same total length as the distance between \(a\) and \(d\), because that distance is also \(<n^{*}\), so following that with the distance from \(d\in D\subseteq B\) to \(b\in B\), we get a chain going through \(C\) of the same total length as the distance from \(a\) to \(b\), as desired.
Now let \(\bigmdownarrow^{0}=\bigmdownarrow^{a}\), \(\bigmdownarrow^{i}\) denote a distance of \(\min(2^{i},n^{*})\) over the base. Exactly as in Example 3.1, we can then show that \(\bigmdownarrow^{\emptyset^{k}}=\bigmdownarrow^{a}\) when \(2^{k+1}\geq n\).
**Example 3.9**.: (Model companion of undirected graphs without odd cycles of length \(\leq n\), for \(n\) odd).
In [35], Shelah shows that the theory \(T_{n}\) of undirected graphs without odd cycles of length \(\leq n\), for \(n\) odd, has a model companion and is strictly \(\operatorname{NSOP}_{n+1}\). (This theory is further developed in [8].) Again, this theory has quantifier elimination in the language with binary relations for paths of length \(k\) ([35]). Instead of directed distances in the previous example, we now have the minimal length of a path between two vertices, an undirected distance which will be \(\leq n^{*}=\lceil\frac{n}{2}\rceil\). A similar analysis to example 3.8 will hold in this
theory. Note, however, that because it is never true that \(n=2^{k}\), \(T_{n}\) in this case cannot be used to witness that there are \(\mathrm{SOP}_{2^{k}}\) theories where \(\raisebox{-1.5pt}{\includegraphics[height=1.5pt]{10.eps}}^{\mathfrak{g}^{k}}\) is symmetric.
## 4. Bounds for symmetry and transitivity
We now show that \(\mathrm{SOP}_{2^{k+1}+1}\) is required for \(\raisebox{-1.5pt}{\includegraphics[height=1.5pt]{10.eps}}^{\mathfrak{g}^{k}}\) to be symmetric. (See Theorem 6.2 of [30] for a related result on \(\mathrm{NSOP}_{4}\).) From this and the previous section will follow the second clause of this theorem:
**Theorem 4.1**.: _Assume \(\raisebox{-1.5pt}{\includegraphics[height=1.5pt]{10.eps}}^{\mathfrak{g}^{n}}\) is symmetric for \(n\geq 1\). Then \(T\) is \(\mathrm{NSOP}_{2^{n+1}+1}\). Thus \(2^{n+1}+1\) is the least \(k\) so that every theory where \(\raisebox{-1.5pt}{\includegraphics[height=1.5pt]{10.eps}}^{\mathfrak{g}^{n}}\) is symmetric is \(\mathrm{NSOP}_{k}\)._
We state the construction; fix a Skolemization of \(T\). Suppose \(T\) is \(\mathrm{SOP}_{2^{n+1}+1}\); we show \(\raisebox{-1.5pt}{\includegraphics[height=1.5pt]{10.eps}}^{\mathfrak{g}^{n}}\) is asymmetric. Let \(R(x,y)\) witness this; then there is an indiscernible sequence \(\{c^{*}_{i}\}_{i\in\mathbb{Z}}\) so that \(\models R(c_{i},c_{j})\) for \(i<j\), but there are no \((2^{n+1}+1)\)-cycles. Let \(M=\mathrm{dcl}_{\mathbb{S}}(\{c^{*}_{i}\}_{i\in\mathbb{Z}}\cup\{c^{*}_{2\mathbb{ Z}+i}\}_{i\in\mathbb{Z}})\), and let \(c_{i}=c^{*}_{\mathbb{Z}+i}\) for \(i\in\mathbb{Z}\). For \(k\geq 1\), let \(R_{k}(x,y)=:\exists x_{0}\ldots x_{n-2}R(x,x_{0})\wedge\bigwedge_{i=0}^{n-3}R( x_{i},x_{i+1})\wedge R(x_{n-2},y)\) (so \(R_{1}(x,y)=:R(x,y)\) and \(R_{2}(x,y)=:\exists x_{0}R(x,x_{0})\wedge R(x_{0},y)\). We find instances of \(k\)-\(\eth\)-dividing:
**Lemma 4.2**.: _Let \(0\leq k\leq n\). Then_
\[R^{k}(y_{0},\ldots,y_{2^{k}-1},c_{0},\ldots,c_{2^{k}})=:\bigwedge_{i=0}^{2^{k} -1}R_{2^{n-k}}(c_{i},y_{i})\wedge R_{2^{n-k}}(y_{i},c_{i+1})\]
\(k\)_-\(\eth\)-divides (and therefore \(k\)-\(\eth\)-forks) over \(M\)._
Proof.: By induction on \(k\). For \(k=0\), we show that \(R_{2^{n}}(c_{0},y)\wedge R_{2^{n}}(y,c_{1})\) divides over \(M\), specifically by \(\{c_{2i}c_{2i+1}\}_{i<\omega}\). That is, \(\{R_{2^{n}}(c_{2i},y)\wedge R_{2^{n}}(y,c_{2i+1})\}_{i<\omega}\) is inconsistent; suppose it is consistent, realized by \(c\). Then \(\models R_{2^{n}}(c,c_{1})\wedge R(c_{1},c_{2})\wedge R_{2^{n}}(c_{2},c)\). So there is a \((2^{n+1}+1)\)-cycle, a contradiction.
Now suppose the statement holds for \(0\leq k\leq n-1\); we prove it is true for \(k+1\). Let \(\{c^{i}_{j}\}_{0\leq j\leq 2^{k+1}}^{i<\omega}\) be a sequence with \(c^{0}_{j}=c_{j}\) so that \(\{R^{k+1}(y_{0},\ldots,y_{2^{k+1}-1},c^{i}_{0},\ldots,c^{i}_{2^{k+1}})\}_{i<\omega}\) is realized by \(c^{\prime}_{0},\ldots,c^{\prime}_{2^{k+1}-1}\); by the definition of \((k+1)\)-\(\eth\)-dividing, it suffices to show that \(\{c^{i}_{j}\}_{0\leq j\leq 2^{k+1}}^{i<\omega}\) is not an \(\mathop{\downarrow}^{\eth^{k}}\)-Morley sequence over \(M\). For \(0\leq i\leq 2^{k}-1\), in particular
\[\models R_{2^{n-(k+1)}}(c^{1}_{2i},c^{\prime}_{2i})\wedge R_{2^{n-(k+1)}}(c^{ \prime}_{2i},c^{0}_{2i+1})\wedge R_{2^{n-(k+1)}}(c^{0}_{2i+1},c^{\prime}_{2i+ 1})\wedge R_{2^{n-(k+1)}}(c^{\prime}_{2i+1},c^{1}_{2(i+1)})\]
It follows that \(\models R_{2^{n-k}}(c^{1}_{2i},c^{0}_{2i+1})\wedge R_{2^{n-k}}(c^{0}_{2i+1},c^ {1}_{2(i+1)})\) for \(0\leq i\leq 2^{k}-1\). Therefore,
\[R^{k}(y_{0},\ldots,y_{2^{k}-1},c^{1}_{0},\ldots,c^{1}_{2i},\ldots,c^{1}_{2^{k+1 }})\in\mathrm{tp}(c^{0}_{1}\ldots c^{0}_{2i+1}\ldots c^{0}_{2^{k+1}-1}/Mc^{1}_{ 0}\ldots c^{1}_{2i}\ldots c^{1}_{2^{k+1}})\]
Because \(c^{1}_{0}\ldots c^{1}_{2i}\ldots c^{1}_{2^{k+1}}\equiv_{M}c^{0}_{0}\ldots c^{0}_{2i}\ldots c^{0}_{2^{k+1}}=c_{0}\ldots c_{2i}\ldots c_{2^{k+1}}\equiv_{M}c_{0}\ldots c_{2^{k}}\), and \(R^{k}(y_{0},\ldots,y_{2^{k}-1},c_{0},\ldots,c_{2^{k}})\) \(k\)-\(\eth\)-divides over \(M\) by the induction hypothesis, \(R^{k}(y_{0},\ldots,y_{2^{k}-1},c^{1}_{0},\ldots,c^{1}_{2i},\ldots,c^{1}_{2^{k+1}})\) \(k\)-\(\eth\)-divides over \(M\), so
\[c^{0}_{1}\ldots c^{0}_{2i+1}\ldots c^{0}_{2^{k+1}-1}\mathop{\not\downarrow}^{\eth^{k}}_{M}c^{1}_{0}\ldots c^{1}_{2i}\ldots c^{1}_{2^{k+1}}\]
and therefore,
\[c^{0}_{0}\ldots c^{0}_{2^{k+1}}\mathop{\not\downarrow}^{\eth^{k}}_{M}c^{1}_{0}\ldots c^{1}_{2^{k+1}}\]
So \(\{c_{j}^{i}\}_{0\leq j\leq 2^{k+1}}^{i<\omega}\) is not an \(\mathop{\downarrow}^{\eth^{k}}\)-Morley sequence over \(M\).
It follows from the case \(k=n\) of Lemma 4.2 and an automorphism that
\[\{c_{2i-1}\}_{-2^{n-1}<i\leq 2^{n-1}}\mathop{\not\downarrow}^{\eth^{n}}_{M}\{c_{2i}\}_{-2^{n-1}\leq i\leq 2^{n-1}}\]
So we have obtained a failure of \(n\)-\(\eth\)-independence. When \(n=1\), so \(c_{-1}c_{1}\mathop{\not\downarrow}^{\eth^{1}}_{M}c_{-2}c_{0}c_{2}\), this is one direction of the asymmetry; the other direction is \(c_{-2}c_{0}c_{2}\mathop{\downarrow}^{\eth^{1}}_{M}c_{-1}c_{1}\). To show this, we extend \(\{c_{i}\}_{i\in\mathbb{Z}}\) to an \(M\)-indiscernible sequence \(\{c_{i}\}_{i\in\mathbb{Q}}\). Then by construction, \(\{c_{-(1+i)}c_{1+i}\}_{i\in[0,1)}\) (note \(c_{-(1+0)}c_{1+0}=c_{-1}c_{1}\)) is a finitely satisfiable Morley sequence over \(M\), indiscernible over \(Mc_{-2}c_{0}c_{2}\). So \(c_{-2}c_{0}c_{2}\mathop{\downarrow}^{\eth^{1}}_{M}c_{-1}c_{1}\) follows from the following fact, which is immediate from Fact 6.1 of [30] (this is just a standard application of left extension for finite satisfiability, as in the proof of Proposition 2.2):
**Fact 4.3**.: _Let \(\{b_{i}\}_{i<\omega}\) be a finitely satisfiable Morley sequence over \(M\) with \(b_{0}=b\) so that \(\{\varphi(x,b_{i})\}_{i<\omega}\) is consistent. Then \(\varphi(x,b)\) does not \(1\)-\(\eth\)-fork over \(M\)._
This concludes the \(\mathrm{NSOP}_{5}\) case. When \(n\geq 2\), we may not be able to obtain the desired finitely satisfiable Morley sequence. However, unlike the case where \(n=1\), we would not need anything stronger than an \(\mathop{\downarrow}^{\eth^{n-1}}_{M}\)-Morley sequence to show that a formula does not \(n\)-\(\eth\)-fork over \(M\), as opposed to just \(n\)-\(\eth\)-dividing over \(M\), because \(n\)-\(\eth\)-forking already coincides with \(n\)-\(\eth\)-dividing (Proposition 2.2). Still, we do not show that \(\{c_{2i}\}_{-2^{n}\leq i\leq 2^{n}}\mathop{\downarrow}^{\eth^{n}}_{M}\{c_{2i-1}\}_{-2^{n}<i\leq 2^{n}}\) using an explicit \(\mathop{\downarrow}^{\eth^{n-1}}_{M}\)-Morley sequence. Rather, let \(m\leq 2^{n-1}\) be least such that
\[\{c_{2i-1}\}_{-m<i\leq m}\mathop{\not\downarrow}^{\eth^{n}}_{M}\{c_{2i}\}_{-m\leq i\leq m}\]
Then \(m>0\). Let \(\bar{a}=\{c_{2i}\}_{-m<i<m}\), \(\bar{b}=\{c_{2i-1}\}_{-m<i\leq m}\), and \(\bar{c}=c_{-2m}c_{2m}\). Then \(\bar{b}\mathop{\not\downarrow}^{\eth^{n}}_{M}\bar{a}\bar{c}\), while \(\bar{a}\mathop{\downarrow}^{\eth^{n}}_{M}\bar{b}\) by minimality of \(m\) and the fact that \(\bar{a}\bar{b}\equiv_{M}\{c_{2i-1}\}_{-(m-1)<i\leq m-1}\{c_{2i}\}_{-(m-1)\leq i\leq m-1}\), and \(\operatorname{tp}(\bar{a}\bar{b}/\bar{c}M)\) is finitely satisfiable over \(M\) by construction. To show asymmetry of \(\mathop{\downarrow}^{\eth^{n}}\), it remains to show that \(\bar{a}\bar{c}\mathop{\downarrow}^{\eth^{n}}_{M}\bar{b}\). We use the proof technique from Claim 6.2 of [29]. Let \(\varphi(\bar{x},\bar{z},\bar{b})\in\operatorname{tp}(\bar{a}\bar{c}/M\bar{b})\); we show it does not \(n\)-\(\eth\)-fork over \(M\), for which it suffices that it not \(n\)-\(\eth\)-divide over \(M\). More explicitly, \(\models\varphi(\bar{a},\bar{c},\bar{b})\), so \(\varphi(\bar{a},\bar{y},\bar{b})\in\operatorname{tp}(\bar{c}/M\bar{a}\bar{b})\). By finite satisfiability, there is some \(\bar{m}\in M\) so that \(\models\varphi(\bar{a},\bar{m},\bar{b})\). So \(\varphi(\bar{x},\bar{m},\bar{b})\in\operatorname{tp}(\bar{a}/M\bar{b})\). Because \(\bar{a}\mathop{\downarrow}^{\eth^{n}}_{M}\bar{b}\), there is then some \(\mathop{\downarrow}^{\eth^{n-1}}_{M}\)-Morley sequence \(\{\bar{b}_{i}\}_{i<\omega}\) over \(M\) with \(\bar{b}_{0}=\bar{b}\) so that \(\{\varphi(\bar{x},\bar{m},\bar{b}_{i})\}_{i<\omega}\) is consistent. A fortiori, \(\{\varphi(\bar{x},\bar{z},\bar{b}_{i})\}_{i<\omega}\) is consistent. So \(\varphi(\bar{x},\bar{z},\bar{b})\)
does not \(n\)-\(\eth\)-divide over \(M\), and as \(\varphi(\bar{x},\bar{z},\bar{b})\in\operatorname{tp}(\bar{a}\bar{c}/M\bar{b})\) was arbitrary, \(\bar{a}\bar{c}\mathop{\downarrow}^{\eth^{n}}_{M}\bar{b}\), as desired.
\(e_{I_{2}}^{0}=e_{I_{2}}\), because every element of \(I_{2}\) is above every element of \(J\) and \(J\) has no greatest element. Moreover, because no element of \(I_{1}\) is in between any two elements of \(I_{2}\), \(\{e_{I_{2}}^{i}\}_{i<\omega}\) is indiscernible over \(e_{J}e_{I_{1}}\). So \(e_{I_{1}}\mathop{\downarrow}^{\eth^{n}}_{e_{J}}e_{I_{2}}\) by Fact 4.3.
This completes the case of right transitivity. For left transitivity, we must find \(M_{0}=M^{0}\prec M^{1}\prec\ldots\prec M^{k}\) so that for \(0\leq i\leq 2^{n}\), \(M^{i+1}\mathop{\downarrow}^{\eth^{n}}_{M^{i}}b\), and \(a\mathop{\downarrow}^{\eth^{n}}_{M^{k}}b\), despite having shown \(a\mathop{\not\downarrow}^{\eth^{n}}_{M_{0}}b\). For \(0<i\leq 2^{n}\), let \(M^{i}=\mathrm{dcl}_{\mathrm{Sk}}(\{a_{0}\ldots a_{i-1}\})\). Then for \(0\leq i\leq 2^{n}\), \(M^{i+1}\mathop{\downarrow}^{u}_{M^{i}}b\), and \(a\mathop{\downarrow}^{u}_{M^{k}}b\). This completes the case of left transitivity and thus the proof of Theorem 4.4.
We conclude by asking whether the converse holds, giving us a theory of independence for \(\mathrm{NSOP}_{2^{n+1}+1}\) theories:
**Question 4.6**.: _Does \(\mathrm{NSOP}_{2^{n+1}+1}\) imply symmetry of \(\mathop{\downarrow}^{\eth^{n}}\)? Does it imply transitivity of \(\mathop{\downarrow}^{\eth^{n}}\)?_
|
2301.12078 | Division and new multiplication between vectors | The division between two vectors belonging to the same vector space is
obtained by elementary procedures of vector algebra and is defined by a matrix.
This representation is obtained for two and three dimensional vector spaces. A
new vector multiplication is defined, along with the inverse vector. Through this
multiplication we can obtain the division previously defined. The meanings of
vector division and multiplication are analyzed. | José E H Ramírez, E R Oria | 2023-01-28T03:44:24Z | http://arxiv.org/abs/2301.12078v1 | # Division and new multiplication between vectors
###### Abstract
The division between two vectors belonging to the same vector space is obtained by elementary procedures of vector algebra and is defined by a matrix. This representation is obtained for two and three dimensional vector spaces. A new vector multiplication is defined, along with the inverse vector. Through this multiplication we can obtain the division previously defined. The meanings of vector division and multiplication are analyzed.
Vector Division; Vector Multiplication; Inverse Vector.
## 1 Introduction
The division between vectors in the framework of vector algebra is a subject that has not been widely treated. Few studies have addressed it, and those that have conclude that it is impossible to define this operation. In \(1966\), Kenneth O. May wrote an article [3] in which he states that it is impossible to divide two vectors in a three-dimensional space. This impossibility is essentially due to the ways that have been used to define division between vectors: previous attempts tried to obtain it from one of the existing products between vectors, either the dot product or the vector product, neither of which is a proper multiplication like the one defined between numbers. Such approaches lead to infinitely many solutions. In this work we will show that it is possible to define the division between vectors within the framework of vector algebra by means of a geometric representation.
As is well known, in the field **K** of real numbers, \(\mathbb{R}\), or of complex numbers, \(\mathbb{C}\), the division operation can be introduced as the inverse operation of multiplication, i.e., \(a/b\) is the number \(c\) such that \(bc=a\); provided that \(b\neq 0\), \(c\) exists and is unique. If \(b=0\), \(c\) exists if and only if \(a=0\), and in this case \(c\) is any number in **K**. Since in the case of vectors there is no multiplication operation by which to perform the inverse operation, it is common to define the quotient between vectors as a particular tensor. Often the quotient of two quantities is of a different, more complicated type than the quantities themselves: the quotient of two integers is generally not an integer but a rational number. Similarly, the quotient of two vectors is not a vector, as is well known, but a quantity of a new type: a tensor [1]; this means that the division of vectors can be exploited only with knowledge of tensor algebra. In [2], division of vectors is treated in the context of multivector algebra, which is not usually covered in basic courses. In that algebra, multiplication and the inverse of a multivector define the division between multivectors, of which vectors are a special case. Our main goal in this paper is to use elementary procedures to divide two vectors, **a** and **b**, of a finite-dimensional vector space \(V\) with a defined inner product. In what follows we will assume that \(\textbf{b}\neq\textbf{0}\) unless otherwise stated, and we denote \(|\textbf{a}|=a\), etc.
## 2 Vector Division
Let **a** and **b** be two vectors in a finite \(n\)-dimensional space \(V\) over the real numbers, with an inner product defined. We can decompose the vector **a** along two directions, one parallel and the other perpendicular to the vector **b**, as
\[\textbf{a}=\textbf{a}_{\parallel}+\textbf{a}_{\perp}, \tag{1}\]
thus, \(\textbf{a}_{\parallel}=\alpha\,\textbf{b}\) and \(\textbf{a}_{\perp}=\beta\,\textbf{b}_{\perp}\) (see Fig. 1a), where \(\alpha\) and \(\beta\) are two real proportionality coefficients to be determined, and \(\textbf{b}_{\perp}\) is a perpendicular rotation of the vector **b**. So we can write the above equation as
\[\mathbf{a}=\alpha\,\mathbf{b}+\beta\,\mathbf{b}_{\perp}. \tag{2}\]
Scalar multiplication by \(\mathbf{b}\) and \(\mathbf{b}_{\perp}\) lets us find the \(\alpha\) and \(\beta\) coefficients,
\[\alpha = \frac{\mathbf{a}\cdot\mathbf{b}}{b^{2}},\] \[\beta = \frac{\mathbf{a}\cdot\mathbf{b}_{\perp}}{b^{2}}.\]
We can write \(\mathbf{b}_{\perp}\) as \(\mathbf{b}_{\perp}=R\,\mathbf{b}\), where \(R\) is a perpendicular rotation matrix, so Eq. (2) can be written as
\[\mathbf{a}=\alpha\,I\,\mathbf{b}+\beta\,R\,\mathbf{b}, \tag{3}\]
where \(I\) is the unit matrix. Factoring out \(\mathbf{b}\) in Eq. (3) gives
\[\mathbf{a}=\left(\alpha\,I\,+\beta\,R\right)\,\mathbf{b}, \tag{4}\]
where we can define:
**Definition 2.1**.: _The division between the vectors \(\mathbf{a}\) and \(\mathbf{b}\), which we denote as \(\mathbf{a}/\mathbf{b}\), where \(\mathbf{b}\neq\mathbf{0}\), is given by the matrix \(E\),_
\[\frac{\mathbf{a}}{\mathbf{b}} = \alpha\,I\,+\beta\,R. \tag{5}\]
If \(V\) is the vector space \(\mathcal{R}^{2}\), then
\[E = \left[\begin{array}{cc}\alpha&-\beta\\ \beta&\alpha\end{array}\right]\] \[= \frac{a}{b}\left[\begin{array}{cc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right].\]
Matrix \(E\) rotates vector \(\mathbf{b}\) and then adjusts its magnitude to the magnitude of vector \(\mathbf{a}\). In the particular case that \(\mathbf{a}=\mathbf{b}\), then \(E\) is the usual rotation.
In this case the determinant of \(E\) (\(\det E\)) is given by
\[detE = \alpha^{2}+\beta^{2}\] \[= \left(\frac{a}{b}\right)^{2}\]
If \(V\) is the vector space \(\mathcal{R}^{3}\), we have
\[E = \left[\begin{array}{ccc}\alpha&C\beta&-B\beta\\ -C\beta&\alpha&A\beta\\ B\beta&-A\beta&\alpha\end{array}\right]\] \[= \frac{a}{b}\left[\begin{array}{ccc}\cos\theta&C\sin\theta&-B\sin\theta\\ -C\sin\theta&\cos\theta&A\sin\theta\\ B\sin\theta&-A\sin\theta&\cos\theta\end{array}\right],\]
where \(A\), \(B\) and \(C\) are the components of the unit vector \(\mathbf{n}\) given by,
\[\mathbf{n} = \frac{\mathbf{a}\times\mathbf{b}}{\|\mathbf{a}\times\mathbf{b}\|}\] \[= \left(A,B,C\right)\]
Here,
\[detE = \alpha\left(\alpha^{2}+\beta^{2}\right)\] \[= \left(\frac{a}{b}\right)^{3}\cos\theta.\]
Let's consider two examples:
_Example 2.1_.: Let \(\mathbf{a}\) = (3, -1) and \(\mathbf{b}\) = (2, 5) be two vectors in \(\mathbb{R}^{2}\); find \(\mathbf{a}/\mathbf{b}\).
\[\alpha = \frac{\mathbf{a}\cdot\mathbf{b}}{b^{2}}\] \[= \frac{1}{29}.\]
\[\beta = \frac{\mathbf{a}\cdot\mathbf{b}_{\perp}}{b^{2}}\]
There are two vectors perpendicular to \(\mathbf{b}\), \(\mathbf{b}_{1\perp}=(-5,2)\) and \(\mathbf{b}_{2\perp}=(5,-2)\). The perpendicular rotation matrix \(R\) will depend on which vector we choose. Let's choose \(\mathbf{b}_{1\perp}\), then
\[\beta = -\frac{17}{29}\]
and
\[E = \frac{1}{29}I-\frac{17}{29}R\] \[= \frac{1}{29}\left[\begin{array}{cc}1&17\\ -17&1\end{array}\right].\]
where \(R\) is given by
\[R = \left[\begin{array}{cc}0&-1\\ 1&0\end{array}\right]\]
Note that if we choose \(\mathbf{b}_{2\perp}\), the corresponding perpendicular rotation matrix would be the transpose of the previous one. In either case we can easily verify that \(\mathbf{a}=E\,\mathbf{b}\).
_Example 2.2_.: Let \(\mathbf{a}\) = (3, -1, 2) and \(\mathbf{b}\) = (2, 5, 1) be two vectors in \(\mathbb{R}^{3}\); find \(\mathbf{a}/\mathbf{b}\).
In this case we find that \(\alpha=3/30\). To find \(\beta\), let's take the cross product of \(\mathbf{a}\) and \(\mathbf{b}\)
\[\mathbf{a}\times\mathbf{b} = -11\mathbf{i}+\mathbf{j}+17\mathbf{k}\]
being
\[\textbf{n} = \frac{1}{\sqrt{411}}(-11,1,17)\]
In this way we find that \(E\) is given by
\[E = \frac{1}{30}\left[\begin{array}{ccc}3&17&-1\\ -17&3&-11\\ 1&11&3\end{array}\right]. \tag{11}\]
Again we can verify that \(\textbf{a}=E\,\textbf{b}\).
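Because the construction of \(E\) is entirely elementary, it is easy to check numerically. The following Python/NumPy sketch is our own illustration (the function names are ours): it builds \(E\) from \(\alpha\), \(\beta\) and the perpendicular rotation exactly as in Definition 2.1, reproduces Examples 2.1 and 2.2, and assumes in 3D that \(\mathbf{a}\) and \(\mathbf{b}\) are not parallel.

```python
import numpy as np

def divide_2d(a, b):
    """E with a = E @ b (Definition 2.1), using b_perp = R @ b and the
    90-degree rotation R, matching the choice made in Example 2.1."""
    R = np.array([[0.0, -1.0], [1.0, 0.0]])
    alpha = (a @ b) / (b @ b)
    beta = (a @ (R @ b)) / (b @ b)
    return alpha * np.eye(2) + beta * R

def divide_3d(a, b):
    """E with a = E @ b in R^3, built from alpha, beta and the unit
    normal n = a x b / |a x b|; assumes a and b are not parallel."""
    alpha = (a @ b) / (b @ b)
    cross = np.cross(a, b)
    beta = np.linalg.norm(cross) / (b @ b)   # a . b_perp / b^2 = |a x b| / b^2
    A, B, C = cross / np.linalg.norm(cross)
    R = np.array([[0.0,   C,  -B],
                  [ -C, 0.0,   A],
                  [  B,  -A, 0.0]])
    return alpha * np.eye(3) + beta * R

a2, b2 = np.array([3.0, -1.0]), np.array([2.0, 5.0])
a3, b3 = np.array([3.0, -1.0, 2.0]), np.array([2.0, 5.0, 1.0])
print(np.allclose(divide_2d(a2, b2) @ b2, a2))   # True, Example 2.1
print(np.allclose(divide_3d(a3, b3) @ b3, a3))   # True, Example 2.2
```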
### Division Properties
* \(\textbf{a}/\textbf{a}=I\) Proof.: By Definition 2.1 we have that \(\textbf{a}/\textbf{a}=\textbf{a}\cdot\textbf{a}/a^{2}\,I+\textbf{a}\cdot \textbf{a}_{\perp}/a^{2}\,R\) but \(\textbf{a}\cdot\textbf{a}_{\perp}=0\) and \(\textbf{a}\cdot\textbf{a}/a^{2}=1\), then \(\textbf{a}/\textbf{a}=I\).
* \(\textbf{0}/\textbf{a}=O\) Proof.: By Definition 2.1 we have that \(\textbf{0}/\textbf{a}=\textbf{0}\cdot\textbf{a}/a^{2}\,I+\textbf{0}\cdot\textbf{a}_{\perp}/a^{2}\,R\), then \(\textbf{0}/\textbf{a}=O\).
* \(\textbf{a}/\textbf{b}=(\textbf{b}/\textbf{a})^{-1}\) Proof.: We are going to prove this property by taking the product \((\textbf{a}/\textbf{b})(\textbf{b}/\textbf{a})\) and showing that it is equal to the unit matrix \(I\). \((\textbf{a}/\textbf{b})(\textbf{b}/\textbf{a})=(\textbf{a}\cdot\textbf{b}/b^{2}\,I+\textbf{a}\cdot\textbf{b}_{\perp}/b^{2}\,R)(\textbf{a}\cdot\textbf{b}/a^{2}\,I+\textbf{a}\cdot\textbf{b}_{\perp}/a^{2}\,\dot{R})\). Carrying out the indicated operation and considering that \(R\,\dot{R}=I\) and \(R=-\dot{R}\), we arrive at the expected result.
* \((\textbf{a}+\textbf{b})/\textbf{c}=\textbf{a}/\textbf{c}+\textbf{b}/\textbf{c}\) Proof.: By definition, \((\textbf{a}+\textbf{b})/\textbf{c}=((\textbf{a}+\textbf{b})\cdot\textbf{c})/c^{2}\,I\,+\,((\textbf{a}+\textbf{b})\cdot\textbf{c}_{\perp})/c^{2}\,R\). Because the scalar product follows the distributive law, we have that \((\textbf{a}+\textbf{b})/\textbf{c}=\textbf{a}\cdot\textbf{c}/c^{2}\,I+\,\textbf{b}\cdot\textbf{c}/c^{2}\,I+\textbf{a}\cdot\textbf{c}_{\perp}/c^{2}\,R+\textbf{b}\cdot\textbf{c}_{\perp}/c^{2}\,R\). Rearranging the terms we get that \((\textbf{a}+\textbf{b})/\textbf{c}=\textbf{a}/\textbf{c}+\textbf{b}/\textbf{c}\), as expected.
## 3 Vector Multiplication
The deficiency of the dot and cross products is due to the fact that through them it is not possible to define the inverse vector. For this reason, we dedicate this section to defining a new multiplication between vectors through which we can obtain the inverse vector, and which is consistent with the division operation between vectors previously defined.
**Definition 3.1** (Vector Multiplication).: _Let \(V\) be a vector space with inner product defined and **a** and **b** two vectors \(\in V\). The product between the vectors **a** and **b**, which we denote as \(\textbf{a}\otimes\textbf{b}\), is given by the matrix \(E\),_
\[\textbf{a}\otimes\textbf{b} = E\] \[= \alpha\,I\,+\beta\,R,\]
_where \(I\) is the identity matrix, \(R\) a perpendicular rotation matrix and \(\alpha\) and \(\beta\) two coefficients given by_
\[\alpha = \textbf{a}\cdot\textbf{b}\] \[\beta = \textbf{a}\cdot\textbf{b}_{\perp}=\textbf{a}_{\perp}\cdot \textbf{b}.\]
The \(\beta\) coefficient is closely related to the perpendicular rotation matrix \(R\) and to the perpendicular vectors \(\textbf{b}_{\perp}\) or \(\textbf{a}_{\perp}\).
Now we are going to define the inverse vector by considering Definition 3.1.
**Definition 3.2**.: _Let \(\textbf{a}\neq\textbf{0}\) a vector in \(V\). The inverse vector of **a**, which we denote as \(\textbf{a}^{-1}\), is, from Definition 3.1, such that_
\[\textbf{a}\otimes\textbf{a}^{-1} = I. \tag{13}\]
From Definition 3.2 it follows that \(\textbf{a}^{-1}=\textbf{a}/a^{2}\), which shows that the vectors **a** and \(\textbf{a}^{-1}\) are collinear. Now let's see the properties of multiplication.
For two- and three-dimensional spaces we have that
\[\textbf{a}\otimes\textbf{b} = a\,b\left[\begin{array}{ccc}\cos\theta&-\sin\theta\\ \sin\theta&\cos\theta\end{array}\right], \tag{14}\]
and
\[\textbf{a}\otimes\textbf{b} = \left[\begin{array}{ccc}\alpha&C\beta&-B\beta\\ -C\beta&\alpha&A\beta\\ B\beta&-A\beta&\alpha\end{array}\right]\] \[= a\,b\left[\begin{array}{ccc}\cos\theta&C\sin\theta&-B\sin\theta\\ -C\sin\theta&\cos\theta&A\sin\theta\\ B\sin\theta&-A\sin\theta&\cos\theta\end{array}\right]\]
These matrices rotate the inverse vector \({\bf a}^{-1}\) (respectively \({\bf b}^{-1}\)) towards \({\bf b}\) (respectively \({\bf a}\)) and then adjust its magnitude, converting it into the vector \({\bf b}\) (respectively \({\bf a}\)).
From Eq.(15) we can obtain the well-known vector algebra relation,
\[(a\,b)^{2} = ({\bf a}\cdot{\bf b})^{2}+(\|{\bf a}\times{\bf b}\|)^{2}\,. \tag{16}\]
### Multiplication Properties
* \({\bf a}\otimes{\bf a}^{-1}=I\), multiplicative inverse.
Proof.: We will show that there exists a multiplicative inverse of \({\bf a}\) (which we denote as \({\bf a}^{-1}\)) such that \({\bf a}\otimes{\bf a}^{-1}=I\). Since \(\alpha=1\), it follows that \(a^{-1}=1/a\). On the other hand, because \(\beta=0\) the vectors are collinear (that is, \({\bf a}^{-1}=k\,{\bf a}\)), so we have that \(k=1/a^{2}\); in this way we find that \({\bf a}^{-1}={\bf a}/a^{2}\).
* \({\bf a}\otimes{\bf b}={\bf b}\otimes{\bf a}\), commutative law for multiplication.
Proof.: Let \({\bf a}\otimes{\bf b}=\alpha\,I+\beta\,R\) and \({\bf b}\otimes{\bf a}=\dot{\alpha}\,I+\dot{\beta}\,\dot{R}\), with \({\bf a}\neq{\bf 0}\), \({\bf b}\neq{\bf 0}\) and \(\theta\) the angle between the vectors \({\bf a}\) and \({\bf b}\). We must show that \(\alpha=\dot{\alpha}\), \(\beta=\dot{\beta}\) and \(R=\dot{R}\). By definition \(\alpha={\bf a}\cdot{\bf b}\) and \(\dot{\alpha}={\bf b}\cdot{\bf a}\), so \(\alpha=\dot{\alpha}\). On the other hand, \(\beta={\bf a}\cdot{\bf b}_{\perp}\) and \(\dot{\beta}={\bf b}\cdot{\bf a}_{\perp}\), so \(\beta=\dot{\beta}\). In the plane of the vectors \({\bf a}\) and \({\bf b}\), there are two vectors perpendicular to \({\bf b}\) (see Fig. 1b), which we will call \({\bf b}_{\perp 1}\) and \({\bf b}_{\perp 2}\), where \({\bf b}_{\perp 1}=-{\bf b}_{\perp 2}\). Since \(\beta=\dot{\beta}\), we have that \({\bf a}\cdot{\bf b}_{\perp 1}={\bf a}\cdot{\bf b}_{\perp 2}\). If we assume that \({\bf b}_{\perp 1}\neq{\bf b}_{\perp 2}\), we get that \(2\,{\bf a}\cdot{\bf b}_{\perp 1}=0\), which is a contradiction because \({\bf a}\neq 0\) and \({\bf b}_{\perp 1}\) is not, in general, perpendicular to \({\bf a}\). Therefore our assumption that \({\bf b}_{\perp 1}\neq{\bf b}_{\perp 2}\) is not correct, so \({\bf b}_{\perp 1}={\bf b}_{\perp 2}\). On the other hand, we have that \(R\,{\bf b}^{-1}={\bf b}_{\perp 1}^{-1}\) and \(\dot{R}\,{\bf b}^{-1}={\bf b}_{\perp 2}^{-1}={\bf b}_{\perp 1}^{-1}\), then we conclude that \(R=\dot{R}\), so we have shown that \({\bf a}\otimes{\bf b}={\bf b}\otimes{\bf a}\).
* \({\bf a}\otimes{\bf u}={\bf u}\otimes{\bf a}=a\,I\), identity element of multiplication.
Proof.: We are going to show that there exists a vector \({\bf u}\) such that \({\bf a}\otimes{\bf u}=a\,I\). By definition \({\bf a}\otimes{\bf u}=\alpha\,I+\beta\,R\), where \(\alpha={\bf a}\cdot{\bf u}\) and \(\beta={\bf a}\cdot{\bf u}_{\perp}={\bf a}_{\perp}\cdot{\bf u}\), but we require \({\bf a}\cdot{\bf u}=a\) and \({\bf a}\cdot{\bf u}_{\perp}=0\), so \(u=1/\cos\theta\) and \(a\,u\tan\theta=0\); then \(\theta=0\) and \(u=1\). Since \({\bf u}=({\bf a}\otimes{\bf u})\,{\bf a}^{-1}\), we have that \({\bf u}=a\,I\,{\bf a}^{-1}=a\,{\bf a}/a^{2}={\bf a}/a\). That is, we find that \({\bf u}={\bf a}/a\).
* \({\bf a}\otimes({\bf b}+{\bf c})={\bf a}\otimes{\bf b}+{\bf a}\otimes{\bf c}\), distributive with respect to vector addition.
Proof.: By definition \({\bf a}\otimes({\bf b}+{\bf c})={\bf a}\cdot({\bf b}+{\bf c})\,I+{\bf a}\cdot({ \bf b}+{\bf c})_{\perp}\,R\). Since the dot product is distributive, we have that \({\bf a}\otimes({\bf b}+{\bf c})={\bf a}\cdot{\bf b}\,I+{\bf a}\cdot{\bf c}\,I+ {\bf a}\cdot{\bf b}_{\perp}\,R+{\bf a}\cdot{\bf c}_{\perp}\,R\)
\[{\bf a}\otimes({\bf b}+{\bf c})=({\bf a}\cdot{\bf b}\,I+{\bf a}\cdot{\bf b}_{ \perp}\,R)+({\bf a}\cdot{\bf c}\,I+{\bf a}\cdot{\bf c}_{\perp}\,R).\] Then \({\bf a}\otimes({\bf b}+{\bf c})={\bf a}\otimes{\bf b}+{\bf a}\otimes{\bf c}\).
* \({\bf a}\otimes k\,{\bf b}=k\,{\bf a}\otimes{\bf b}\), linear with respect to scalar multiplication, where \(k\in\mathbb{R}\).
Proof.: By definition, \({\bf a}\otimes k\,{\bf b}=\alpha\,I+\beta\,R\), where \(\alpha={\bf a}\cdot k{\bf b}=k{\bf a}\cdot{\bf b}\) and \(\beta=k{\bf a}\cdot{\bf b}_{\perp}\) then \({\bf a}\otimes k\,{\bf b}=k{\bf a}\cdot{\bf b}\,I\,+k{\bf a}\cdot{\bf b}_{ \perp}\,R=k\,({\bf a}\otimes{\bf b})\)
* If \({\bf a}\otimes{\bf b}=O\), then either \({\bf a}={\bf 0}\) or \({\bf b}={\bf 0}\).
Proof.: \({\bf a}\otimes{\bf b}=O\) means that \(\alpha=0\) and \(\beta=0\), that is, \(ab\cos\theta=0\) and \(ab\sin\theta=0\) simultaneously; since \(\cos\theta\) and \(\sin\theta\) cannot both vanish, necessarily \(a=0\) or \(b=0\), which means that \({\bf a}={\bf 0}\) or \({\bf b}={\bf 0}\).
Let's now look at two examples of multiplication, returning to the vectors of Examples 2.1 and 2.2.
_Example 3.1_.: Let \({\bf a}\) = (3, -1) and \({\bf b}\) = (2, 5) be two vectors in \(\mathbb{R}^{2}\); find \({\bf a}\otimes{\bf b}\).
\({\bf a}\otimes{\bf b}={\bf a}\cdot{\bf b}\,I+{\bf a}\cdot{\bf b}_{\perp}\,R\), where \({\bf a}\cdot{\bf b}=1\) and \({\bf a}\cdot{\bf b}_{\perp}=-17\) (we choose \({\bf b}_{\perp}=(-5,2)\)).
\(R=\left[\begin{array}{cc}0&-1\\ 1&0\end{array}\right]\), then
\({\bf a}\otimes{\bf b}=\left[\begin{array}{cc}1&17\\ -17&1\end{array}\right]\).
If we replace \({\bf b}\) by its inverse \({\bf b}^{-1}=\frac{1}{29}\,(2,5)\), then we can see that \({\bf a}\otimes{\bf b}^{-1}\) matches the division of Example 2.1, as expected.
_Example 3.2_.: Let \({\bf a}\) = (3, -1, 2) and \({\bf b}\) = (2, 5, 1) be two vectors in \(\mathbb{R}^{3}\); find \({\bf a}\otimes{\bf b}\).
\({\bf a}\otimes{\bf b}=3\,I+\sqrt{411}\,R\),
where \(\alpha={\bf a}\cdot{\bf b}=3\) and \(\beta={\bf a}\cdot{\bf b}_{\perp}=\sqrt{411}\)
\[R = \frac{1}{\sqrt{411}}\left[\begin{array}{ccc}0&17&-1\\ -17&0&-11\\ 1&11&0\end{array}\right],\]
then
\[\mathbf{a}\otimes\mathbf{b} = \left[\begin{array}{ccc}3&17&-1\\ -17&3&-11\\ 1&11&3\end{array}\right].\]
If we replace \(\mathbf{b}\) by its inverse \(\mathbf{b}^{-1}=\frac{1}{30}\left(2,5,1\right)\), then we can see that \(\mathbf{a}\otimes\mathbf{b}^{-1}\) matches the division of Example 2.2.
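The product \(\otimes\) and the inverse vector admit an equally direct numerical check. The short NumPy sketch below is our own illustration of the 2D case (function names are ours): it computes \(\mathbf{a}\otimes\mathbf{b}\) as in Example 3.1, and confirms that multiplying by \(\mathbf{b}^{-1}\) recovers the division matrix of Example 2.1 and that \(\mathbf{a}\otimes\mathbf{a}^{-1}=I\).

```python
import numpy as np

R = np.array([[0.0, -1.0], [1.0, 0.0]])      # perpendicular rotation in R^2

def otimes_2d(a, b):
    """a (x) b = alpha I + beta R with alpha = a.b, beta = a.b_perp (Definition 3.1)."""
    return (a @ b) * np.eye(2) + (a @ (R @ b)) * R

def inverse_vec(a):
    """Inverse vector a^{-1} = a / |a|^2 (Definition 3.2)."""
    return a / (a @ a)

a, b = np.array([3.0, -1.0]), np.array([2.0, 5.0])
print(otimes_2d(a, b))                                    # [[1, 17], [-17, 1]], Example 3.1
print(np.allclose(otimes_2d(a, inverse_vec(b)),
                  np.array([[1.0, 17.0], [-17.0, 1.0]]) / 29))   # equals a/b from Example 2.1
print(np.allclose(otimes_2d(a, inverse_vec(a)), np.eye(2)))      # a (x) a^{-1} = I
```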
## 4 Conclusions
It is possible, in the framework of vector algebra, to define the division between vectors in the vector spaces \(\mathbb{R}^{2}\) and \(\mathbb{R}^{3}\); this division is given by a matrix that represents a rotation together with a scaling. It is also possible to define the corresponding multiplication, which likewise represents a rotation. The inverse vector was defined through this multiplication, and through it the division could be recovered. Fundamental properties of division and multiplication between vectors, analogous to the properties of operations with numbers, were given along with their proofs. We will continue with the study of these operations and their applications in Physics. We think that they can be linked to group theory.
|
2301.05504 | Combining Dynamic Mode Decomposition with Ensemble Kalman Filtering for
Tracking and Forecasting | Data assimilation techniques, such as ensemble Kalman filtering, have been
shown to be a highly effective and efficient way to combine noisy data with a
mathematical model to track and forecast dynamical systems. However, when
dealing with high-dimensional data, in many situations one does not have a
model, so data assimilation techniques cannot be applied. In this paper, we use
dynamic mode decomposition to generate a low-dimensional, linear model of a
dynamical system directly from high-dimensional data, which is defined by
temporal and spatial modes, that we can then use with data assimilation
techniques such as the ensemble Kalman filter. We show how the dynamic mode
decomposition can be combined with the ensemble Kalman filter (which we call
the DMDEnKF) to iteratively update the current state and temporal modes as new
data becomes available. We demonstrate that this approach is able to track time
varying dynamical systems in synthetic examples, and experiment with the use of
time-delay embeddings. We then apply the DMDEnKF to real world seasonal
influenza-like illness data from the USA Centers for Disease Control and
Prevention, and find that for short term forecasting, the DMDEnKF is comparable
to the best mechanistic models in the ILINet competition. | Stephen A Falconer, David J. B. Lloyd, Naratip Santitissadeekorn | 2023-01-13T12:13:58Z | http://arxiv.org/abs/2301.05504v1 | # Combining Dynamic Mode Decomposition with Ensemble Kalman Filtering for Tracking and Forecasting
###### Abstract
Data assimilation techniques, such as ensemble Kalman filtering, have been shown to be a highly effective and efficient way to combine noisy data with a mathematical model to track and forecast dynamical systems. However, when dealing with high-dimensional data, in many situations one does not have a model, so data assimilation techniques cannot be applied. In this paper, we use dynamic mode decomposition to generate a low-dimensional, linear model of a dynamical system directly from high-dimensional data, which is defined by temporal and spatial modes, that we can then use with data assimilation techniques such as the ensemble Kalman filter. We show how the dynamic mode decomposition can be combined with the ensemble Kalman filter (which we call the DMDEnKF) to iteratively update the current state and temporal modes as new data becomes available. We demonstrate that this approach is able to track time varying dynamical systems in synthetic examples, and experiment with the use of time-delay embeddings. We then apply the DMDEnKF to real world seasonal influenza-like illness data from the USA Centers for Disease Control and Prevention, and find that for short term forecasting, the DMDEnKF is comparable to the best mechanistic models in the ILINet competition.
## Keywords
Dynamic mode decomposition; Ensemble Kalman filter; Data-driven modelling; Data assimilation; Dynamical systems
## 1 Introduction
Data assimilation refers to the collection of methods that integrate vast data sets with sophisticated mathematical models, to track and forecast systems that may evolve or change [38]. The majority of its applications lie in the earth sciences [51], however due to the generality of its techniques they have also been successfully applied in a wide range of areas from medicine [18] to ecology [40]. The Kalman filter [36] is one such data assimilation technique widely used throughout industry [3] that optimally combines predictions from a linear model with Gaussian data. Whilst traditionally applied to a model's state, the parameters of the model can simultaneously be filtered, leading to
what is known as the joint state-parameter estimation problem [33]. If the system being filtered is nonlinear, alternative versions of the Kalman filter can be utilized such as the extended Kalman filter [53], unscented Kalman filter [59] or ensemble Kalman filter (EnKF) [23]. The EnKF represents the distribution of a system's state with an ensemble of random samples, that can then be used to estimate useful statistics like the state's covariance via the sample covariance or a point estimate of the state via the sample mean [23] and is well-suited for high-dimensional problems. All of these methods require a model of the system, however if no model exists then one must be generated and the most generalizable way to do this is via data-driven modelling.
Dynamic mode decomposition (DMD) is a data-driven modelling technique for identifying low dimensional, spatial and temporal patterns within a dynamical system directly from high-dimensional data [54]. It does this by postulating the state vector is evolved via a linear system and looking for a low-dimensional approximation of the eigenvalues (temporal modes) and corresponding eigenvectors (spatial modes). Spatial modes can be thought of as modes that decompose state variables into separate components that evolve together linearly in time. The corresponding temporal modes describe whether a spatial mode is growing, decaying or stationary in time. DMD has been used to approximate dynamical systems from measurement data in a multitude of fields, ranging from epidemiology [49], finance [43] and neuroscience [12]. Due to its popularity, it has been extended to systems that are nonlinear in their recorded measurement functions via Extended/Kernel DMD [61]/[44], with one such extension Hankel-DMD [2] employing time-delay embeddings of the original observables. In the presence of measurement noise, the standard DMD has been shown to induce a systematic bias by asymmetrically attributing all noise to the model's target output measurements and none to its inputs during training [15]. This systematic bias, prompted the creation of noise handling variants of DMD that directly account for the noise term [15], the Forward Backward DMD [15] that performs DMD forwards and backward in time and combines the results, and Total DMD (TDMD) [31] that minimizes the total least squares error as opposed to minimizing the ordinary least squares error.
The aim of this paper is to develop an algorithm that iteratively improves the temporal modes (eigenvalues) and state estimates produced by DMD with the EnKF as new data becomes available. This would be highly useful for dynamical systems that make a change from growing or decaying behaviour over time. While estimating just the state of the system using the DMD modes can be done using a standard Kalman filter, without also filtering the model's temporal mode, estimates are likely to suffer if the system changes over time. Methods already exist that combine DMD with the Kalman filter [45] or extended Kalman filter [46], which apply filtering to estimate the entire system dynamics matrix. The filtering in our work is instead focused on efficiently tracking the system's temporal modes, and forecasting the system's future states. DMD produces a linear model which makes it a natural fit for the Kalman filter, however when a system's state and temporal modes are estimated simultaneously the filtering process becomes nonlinear. Hence, we need to use a filter designed for a nonlinear model, and we chose the EnKF due to its versatility, scalability to large dynamical systems, and ease of implementation [52]. While any DMD variant that produces temporal modes would be compatible with the DMDEnKF framework, we use TDMD to remain consistent with the EnKF's assumption that noise is present in the data. In tandem, we apply the DMDEnKF using a total least squares version of Hankel-DMD, henceforth referred to as the Hankel-DMDEnKF, to investigate the effect time-delay embeddings have on our framework.
To demonstrate the DMDEnKF method, we first test it on synthetically generated datasets. Initially, on a simple noisy oscillating system with a decreasing period of oscillation, we use the DMDEnKF to track the system's temporal modes and compare results with the Hankel-DMDEnKF, other iterative DMD variants, and "gold standard" filtering methods. Next, we simulate a pandemic and evaluate the DMDEnKF's ability to track the system's temporal modes and generate multistep ahead forecasts.
Finally, we apply the DMDEnKF and Hankel-DMDEnKF to real seasonal influenza-like illness (ILI) data in the United States from the Centers for Disease Control and Prevention (CDC) ILINet [25] shown in Figure 1, with the aim of investigating their forecasting skills for ILI consultation rates. ILI is defined as a fever with a cough or sore throat that has no known cause other than influenza [25] and infects between 9 and 35 million people in the US each year [24]. Due to its prevalence, a multitude of methods have already been developed to model the spread of ILI [14, 47] and the approaches these models take can broadly be classified as either mechanistic or statistical [37]. Mechanistic methods [5, 48] make explicit hypotheses about what is driving the spread of an infectious disease, before then fitting parameters in the proposed models to the data. They have the advantage of being highly interpretable making them useful when trying to understand how one can control the spread of a disease [39], however can make assumptions that are oversimplified [4]. For example, a simple SIR-type model would struggle to describe specific behaviours like the drop in ILI consultations around Christmas seen in Figure 1. Statistical methods [11, 60] are generally more versatile as they require fewer domain-specific assumptions, but both methods achieve a similar predictive skill in real time on the ILINet dataset [50]. The DMDEnKF attempts to find a middle ground between the two methods, remaining versatile by virtue of being purely data-driven but also providing some level of interpretability via the associated DMD modes.
The remainder of this paper will be structured as follows. First, a brief summary of DMD, Hankel
DMD and EnKF algorithms is given for completeness. After which, the DMDEnKF algorithm will be described in full. We will then apply the DMDEnKF and Hankel-DMDEnKF to synthetic data and compare their performance against other pre-existing, iterative DMD variants. Finally, we will use the DMDEnKF and Hankel-DMDEnKF on ILINet data to forecast the rate of ILI consultations up to 4 weeks into the future and examine their performance.

Figure 1: _ILI consultations as a percentage of total weekly GP consultations in the US from 2003 to end of 2018. The data shows the annual peaks in ILI consultations that vary in size, timing and shape, which would make them difficult to model with a simple SIR-type model._
## 2 DMDEnKF
### Dynamic Mode Decomposition (DMD)
Consider an \(n\) dimensional state \(\mathbf{x}_{k}\in\mathbb{R}^{n}\) measured at regular time intervals \(k=1,...,m\). Assuming this time-series data was generated by a linear dynamical system, the consecutive states \(\mathbf{x}_{k}\) and \(\mathbf{x}_{k+1}\) are connected via
\[\mathbf{x}_{k+1}=\mathbf{A}\mathbf{x}_{k} \tag{2.1}\]
for some unknown matrix \(\mathbf{A}\in\mathbb{R}^{n\times n}\). By denoting
\[\mathbf{X}=\left[\begin{array}{cccc}|&|&&|\\ \mathbf{x}_{1}&\mathbf{x}_{2}&...&\mathbf{x}_{m-1}\\ |&|&&|\end{array}\right],\quad\mathbf{X}^{\prime}=\left[\begin{array}{cccc} |&|&&|\\ \mathbf{x}_{2}&\mathbf{x}_{3}&...&\mathbf{x}_{m}\\ |&|&&|\end{array}\right], \tag{2.2}\]
equation (2.1) can be written succinctly over all consecutive data pairs as
\[\mathbf{X}^{\prime}=\mathbf{A}\mathbf{X}. \tag{2.3}\]
To minimize the mean squared error term \(\sum_{k=1}^{m-1}\|\mathbf{x}_{k+1}-\mathbf{A}\mathbf{x}_{k}\|_{2}^{2}\), the standard DMD defines
\[\mathbf{A}=\mathbf{X}^{\prime}\mathbf{X}^{+}, \tag{2.4}\]
where \(\mathbf{X}^{+}\) is the Moore-Penrose pseudoinverse [6] of \(\mathbf{X}\). Efficiently solving for the eigendecomposition of \(\mathbf{A}\) is the primary purpose of DMD, as these eigenvalues/eigenvectors correspond to spatio-temporal patterns in the data.
The DMD method starts by applying the Singular Value Decomposition (SVD) to the data matrix \(\mathbf{X}\), representing it as the matrix multiplication of 2 real-valued, orthonormal matrices (complex and unitary if \(\mathbf{X}\in\mathbb{C}^{n\times m}\)) \(\mathbf{U}\in\mathbb{R}^{n\times n},\mathbf{V}\in\mathbb{R}^{m\times m}\) and a rectangular diagonal matrix with decreasing non-negative real values (\(\mathbf{\Sigma}\in\mathbb{R}^{n\times m}\)) in the form
\[\mathbf{X}=\mathbf{U}\mathbf{\Sigma}\mathbf{V}^{*}. \tag{2.5}\]
The best rank \(r\) approximation of a matrix according to the Eckart-Young Theorem [22] is obtained by truncating its SVD, hence by truncating equation (2.5) to a suitable rank \(r\)[26] we can compress the data matrix with minimal loss of information, which we write as
\[\mathbf{X}\approx\mathbf{U}_{r}\mathbf{\Sigma}_{r}\mathbf{V}_{r}^{*}. \tag{2.6}\]
By performing this compression, we are implicitly assuming that there exists a low dimensional (\(\leq r\)), linear structure within the high-dimensional data.
The Moore-Penrose pseudoinverse can be found directly from the SVD computed in equation (2.5) as \(\mathbf{V}\mathbf{\Sigma}^{-1}\mathbf{U}^{*}\). We use the rank \(r\) truncated matrices from equation (2.6) for reasons of efficiency, setting
\[\begin{split}\mathbf{A}&=\mathbf{X}^{\prime}\mathbf{ X}^{+},\\ &\approx\mathbf{X}^{\prime}\mathbf{V}_{r}\mathbf{\Sigma}_{r}^{-1} \mathbf{U}_{r}^{*}.\end{split} \tag{2.7}\]
This approximation of \(\mathbf{A}\) now acts only on an \(r\) dimensional subspace defined by \(\text{Col}(\mathbf{U}_{r})\). Hence, we can restrict \(\mathbf{A}\) onto this \(r\) dimensional subspace (representing the largest \(r\) POD modes of \(\mathbf{X}\)) and denote the restricted \(\mathbf{A}\in\mathbb{R}^{r\times r}\) as
\[\begin{split}\tilde{\mathbf{A}}&=\mathbf{U}_{r}^{*} \mathbf{A}\mathbf{U}_{r},\\ &\approx\mathbf{U}_{r}^{*}\mathbf{X}^{\prime}\mathbf{V}_{r}\mathbf{ \Sigma}_{r}^{-1}.\end{split} \tag{2.8}\]
We calculate the eigenvalues (\(\lambda_{i}\)), and corresponding eigenvectors (\(\mathbf{v}_{i}\)) of \(\tilde{\mathbf{A}}\), and define
\[\mathbf{\Lambda}=\left[\begin{array}{ccc}\lambda_{1}&0&0\\ 0&\ddots&0\\ 0&0&\lambda_{r}\end{array}\right],\quad\mathbf{W}=\left[\begin{array}{ccc} |&|&&|\\ \mathbf{v}_{1}&\mathbf{v}_{2}&...&\mathbf{v}_{r}\\ |&|&&|\end{array}\right]. \tag{2.9}\]
Reconstructing the eigenvalues/eigenvectors of the original operator \(\mathbf{A}\) will provide insights into the structure of the system [1] and allow us to propagate it forward in time. The eigenvalues of \(\tilde{\mathbf{A}}\) (\(\mathbf{\Lambda}\)) can be shown to be equal to the eigenvalues of the original operator \(\mathbf{A}\)[34], however recovering the original eigenvectors is more involved and can be done using either projected or exact DMD. We use the exact DMD method introduced by Tu et al. [34] as it finds the exact DMD modes (\(\mathbf{\Phi}\)) for all eigenvectors with non-zero \(\lambda_{i}\), where \(\mathbf{\Phi}\) is defined as
\[\mathbf{\Phi}=\mathbf{X}^{\prime}\mathbf{V}_{r}\mathbf{\Sigma}_{r}^{-1}\mathbf{W}. \tag{2.10}\]
DMD modes with zero eigenvalues have no effect on the system's dynamics, so this restriction of exact DMD is of little consequence. This method finds \(\mathbf{A}\) such that \(\mathbf{A}\mathbf{X}=\mathbf{X}^{\prime}\) exactly provided \(r\geq\text{rank}(\mathbf{X})\) and \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) are linearly consistent [34].
With \(\mathbf{\Lambda}\) and \(\mathbf{\Phi}\) in hand, we can construct a \(r\) dimensional approximation of \(\mathbf{A}\), however still need to find the initial phase and amplitude of each mode. The standard method [54] for computing this vector (\(\mathbf{b}\)) is to rewrite the initial state \(\mathbf{x}_{1}\) in a basis of the DMD modes via
\[\mathbf{b}=\mathbf{\Phi}^{+}\mathbf{x}_{1}. \tag{2.11}\]
It is worth noting that there exist alternative methods for example [13, 35] that focus on optimizing \(\mathbf{b}\) over all data points with additional conditions.
To summarise, the final solution to the discrete system can be written as
\[\mathbf{x}_{k}=\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}. \tag{2.12}\]
In the remainder of the paper, we call \(\mathbf{\Lambda}\) the temporal modes and \(\mathbf{\Phi}\) the spatial modes.
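For readers who prefer code to equations, a minimal NumPy sketch of the exact DMD described above is given below. This is our own illustration rather than the authors' implementation (the DMDEnKF's spin-up actually uses the noise-aware TDMD variant, which is not reproduced here); equation numbers in the comments refer to this section.

```python
import numpy as np

def exact_dmd(X, Xp, r):
    """Exact DMD of the snapshot pair (X, X') from equation (2.2), truncated to rank r.
    Returns the temporal modes (eigenvalues) lam, the exact DMD modes Phi and
    the amplitudes b."""
    U, S, Vh = np.linalg.svd(X, full_matrices=False)
    Ur, Sr, Vr = U[:, :r], S[:r], Vh[:r, :].conj().T     # rank-r truncation, eq. (2.6)
    Atilde = Ur.conj().T @ Xp @ Vr / Sr                  # restricted operator, eq. (2.8)
    lam, W = np.linalg.eig(Atilde)                       # eigendecomposition, eq. (2.9)
    Phi = Xp @ Vr / Sr @ W                               # exact DMD modes, eq. (2.10)
    b = np.linalg.pinv(Phi) @ X[:, 0]                    # amplitudes from x_1, eq. (2.11)
    return lam, Phi, b

def dmd_predict(lam, Phi, b, k):
    """x_k = Phi Lambda^k b, equation (2.12); the imaginary part is numerically
    zero for real-valued data."""
    return Phi @ (lam ** k * b)
```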
#### 2.1.1 Hankel-DMD
Hankel-DMD first augments the original, measured state \(\mathbf{x}_{k}\in\mathbb{R}^{n}\), by appending to it measurements of the state at the previous \(d-1\) time steps
\[h(\mathbf{x}_{k})=\left[\begin{array}{ccc}\mathbf{x}_{k}{}^{T}&\mathbf{x}_{k- 1}{}^{T}&\ldots&\mathbf{x}_{k-(d-1)}{}^{T}\end{array}\right]^{T}, \tag{2.13}\]
to form a new state \(h(\mathbf{x}_{k})\in\mathbb{R}^{dn}\). This is known as a time-delay embedding, and we refer to \(d\) as the delay-embedding dimension. Taking time-delay embeddings, \(h(\mathbf{x}_{k})\), to be our new states, matrices \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) from equation (2.2) now become
\[\mathbf{X}=\left[\begin{array}{ccc}\mathbf{x}_{d}&\ldots&\mathbf{x}_{m-1}\\ \vdots&\ddots&\vdots\\ \mathbf{x}_{1}&\ldots&\mathbf{x}_{m-d}\end{array}\right],\quad\mathbf{X}^{ \prime}=\left[\begin{array}{ccc}\mathbf{x}_{d+1}&\ldots&\mathbf{x}_{m}\\ \vdots&\ddots&\vdots\\ \mathbf{x}_{2}&\ldots&\mathbf{x}_{m-(d-1)}\end{array}\right]. \tag{2.14}\]
With \(\mathbf{X}\) and \(\mathbf{X}^{\prime}\) defined above, Hankel-DMD proceeds exactly as the standard DMD algorithm, generating eigenvalues \(\mathbf{\Lambda}\), DMD modes \(\mathbf{\Phi}\) and their initial states \(\mathbf{b}\) as described above. The original system can be reconstructed/forecast for all time steps from \(d\) onwards by applying equation (2.12) and restricting the result to the first \(n\) rows.
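A sketch of the time-delay embedding is given below; it builds the block-Hankel matrices of equation (2.14) from an \(n\times m\) array of snapshots, after which the DMD sketch above can be applied unchanged. This is again our own illustration of the construction.

```python
import numpy as np

def hankel_matrices(snapshots, d):
    """Block-Hankel data matrices of equation (2.14).
    `snapshots` has columns x_1, ..., x_m; block row j holds the snapshots
    delayed by j steps, so the most recent snapshot sits in the top block."""
    n, m = snapshots.shape
    X = np.vstack([snapshots[:, d - 1 - j : m - 1 - j] for j in range(d)])
    Xp = np.vstack([snapshots[:, d - j : m - j] for j in range(d)])
    return X, Xp
```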
#### 2.1.2 Iterative DMD Variants
There exist other variants of DMD that are designed to be applied iteratively, and in this paper we will compare these with the DMDEnKF in their ability to track a system's eigenvalues and make future state predictions. Streaming DMD [32] is an adaptation of the standard DMD algorithm to efficiently process new data as it becomes available, and the noise-aware variant Streaming TDMD [30] is the first variant we wish to compare against. The second method we will use for comparison is Windowed DMD [62], where the standard DMD described above is applied over a sliding window of the \(w\) most recent data snapshots only. The final method we will be comparing against is Online DMD [62], specifically the variant of this algorithm that places an exponentially decaying weight \(\rho\) on the importance of past measurements.
### Ensemble Kalman Filter (EnKF)
Consider a discrete-time, nonlinear dynamical system with a stochastic perturbation
\[\mathbf{x}_{k}=\mathbf{F}(\mathbf{x}_{k-1})+\mathbf{w}_{k},\quad\mathbf{w}_{ k}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_{k}), \tag{2.15}\]
where \(\mathbf{F}\) is a nonlinear function \(\mathbf{F}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\), \(\mathbf{x}_{k}\in\mathbb{R}^{n}\) is the system's state, \(\mathbf{w}_{k}\in\mathbb{R}^{n}\) is a stochastic perturbation and \(\mathcal{N}\) is the normal distribution with mean \(\mathbf{0}\) and covariance matrix \(\mathbf{Q}_{k}\). A measurement equation that relates what we observe to the true state of the system is given by
\[\mathbf{y}_{k}=\mathbf{H}(\mathbf{x}_{k})+\mathbf{v}_{k},\quad\mathbf{v}_{k} \sim\mathcal{N}(\mathbf{0},\mathbf{R}_{k}), \tag{2.16}\]
where \(\mathbf{H}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{l}\) is the system's observation operator, \(\mathbf{y}_{k}\in\mathbb{R}^{l}\) is an observation of the system, \(\mathbf{v}_{k}\in\mathbb{R}^{l}\) is the noise in the observation and \(\mathcal{N}\) is the normal distribution with mean \(\mathbf{0}\) and covariance matrix \(\mathbf{R}_{k}\). We focus on the instance relevant to our use case where \(\mathbf{H}\) is linear, so can be represented by a matrix \(\mathbf{H}\in\mathbb{R}^{l\times n}\).
In general, filtering methods aim to combine information from the state-transition model (2.15) and observation model (2.16) to compute the conditional density \(p(\mathbf{x}_{k}|\mathbf{Y}_{k})\), where \(\mathbf{Y}_{k}=(\mathbf{y_{1}},...,\mathbf{y}_{k})\). The Kalman filter is the optimal filter if \(\mathbf{F}\) and \(\mathbf{H}\) are both linear and the stochastic perturbations are normal [36]. The EnKF was developed to deal with the filtering problem where either the linear or normal assumption (or both) is violated [23]. It exploits the Kalman formulation to propagate an ensemble of the state into a region of high probability in such a way that the ensemble spread would be consistent with the linear and normal model.
To begin the EnKF algorithm, an initial ensemble of \(N\) state estimates \(\hat{\mathbf{x}}_{0}^{(1)}\),..., \(\hat{\mathbf{x}}_{0}^{(N)}\) is required. If an ensemble is not available, one can be generated from initial state estimates \(\hat{\mathbf{x}}_{0}\) and covariance matrix \(\mathbf{P}_{0}\) by taking \(N\) independent draws from \(\mathcal{N}(\hat{\mathbf{x}}_{0},\mathbf{P}_{0})\).
#### Algorithm

The EnKF then acts as follows [23]:
**Step 1:** Propagate forward in time each ensemble member using equation (2.15) for \(i=1,...,N\) via
\[\hat{\mathbf{x}}_{k|k-1}^{(i)}=\mathbf{F}(\hat{\mathbf{x}}_{k-1|k-1}^{(i)})+ \mathbf{w}_{k}^{(i)}. \tag{2.17}\]
The notation \(\hat{\mathbf{x}}_{k|k-1}^{(i)}\) denotes the state estimate at time \(k\) of the \(i\)th ensemble member \(\hat{\mathbf{x}}_{k}^{(i)}\) using only information up to time \(k-1\), and \(\hat{\mathbf{x}}_{k-1|k-1}^{(i)}\) represents the same ensemble member at time \(k-1\) using information up to time \(k-1\). Each \(\mathbf{w}_{k}^{(i)}\) is independently drawn from \(\mathcal{N}(\mathbf{0},\mathbf{Q}_{k})\). The current covariance matrix can also now be estimated via the sample covariance of the ensemble, which we denote as \(\mathbf{\hat{P}}_{k|k-1}\). This can then be used to estimate the Kalman Gain matrix \(\mathbf{\hat{K}}_{k}\) as
\[\mathbf{\hat{K}}_{k}=\mathbf{\hat{P}}_{k|k-1}\mathbf{H}^{T}(\mathbf{H} \mathbf{\hat{P}}_{k|k-1}\mathbf{H}^{T}+\mathbf{R}_{k})^{-1}. \tag{2.18}\]
**Step 2:** Calculate the measurement innovation utilizing equation (2.16).
From measurement \(\mathbf{y}_{k}\), we again use \(i=1,...,N\) and generate simulated measurements
\[\mathbf{y}_{k}^{(i)}=\mathbf{y}_{k}+\mathbf{v}_{k}^{(i)} \tag{2.19}\]
where each \(\mathbf{v}_{k}^{(i)}\) is an independent draw from \(\mathcal{N}(\mathbf{0},\mathbf{R}_{k})\). These simulated measurements \(\mathbf{y}_{k}^{(i)}\) are combined with the ensemble members \(\hat{\mathbf{x}}_{k|k-1}^{(i)}\) from equation (2.17) to define \(N\) measurement innovations
\[\mathbf{e}_{k}^{(i)}=\mathbf{y}_{k}^{(i)}-\mathbf{H}\hat{\mathbf{x}}_{k|k-1}^ {(i)}. \tag{2.20}\]
The \(\mathbf{e}_{k}^{(i)}\) represent samples from the distribution of the distance of the model's prediction from the measured value.
**Step 3:** Combine the model estimates in equation (2.17) and measurement innovation of equation (2.20) via the estimated Kalman gain from (2.18) to update each ensemble member's state estimate
\[\hat{\mathbf{x}}_{k|k}^{(i)}=\hat{\mathbf{x}}_{k|k-1}^{(i)}+\mathbf{\hat{K}}_ {k}\mathbf{e}_{k}^{(i)}. \tag{2.21}\]
We can generate a point estimate for the state \(\hat{\mathbf{x}}_{k}\) using the mean of the \(N\) updated ensemble members. This process then repeats every time a new state measurement becomes available, with the updated ensemble from the previous data point becoming the initial ensemble for the new one.
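As a concrete illustration of Steps 1-3, one forecast/analysis cycle of the stochastic EnKF is sketched below in NumPy. It is a bare-bones sketch of the standard algorithm rather than the authors' code: it assumes a linear observation matrix \(\mathbf{H}\) (the case used later in the DMDEnKF), takes the state map \(\mathbf{F}\) as a Python callable, and omits practical refinements such as covariance inflation or localisation.

```python
import numpy as np

def enkf_step(ensemble, y, F, H, Q, R, rng):
    """One forecast/analysis cycle of the stochastic EnKF (Steps 1-3).
    ensemble: N x n array of state samples; y: length-l measurement;
    F: state map acting on one state vector; H: l x n observation matrix;
    Q, R: model and measurement noise covariances."""
    N, n = ensemble.shape
    # Step 1: propagate every member through the model and add process noise, eq. (2.17)
    forecast = np.array([F(x) for x in ensemble]) \
        + rng.multivariate_normal(np.zeros(n), Q, size=N)
    P = np.cov(forecast, rowvar=False)                      # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)            # Kalman gain, eq. (2.18)
    # Step 2: perturbed observations and innovations, eqs. (2.19)-(2.20)
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    innovations = y_pert - forecast @ H.T
    # Step 3: analysis update of each member, eq. (2.21)
    analysis = forecast + innovations @ K.T
    return analysis, analysis.mean(axis=0)                  # updated ensemble and point estimate
```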
We combine these 2 previously described techniques to form the DMDEnKF. This new, hybrid method uses DMD to generate a low dimensional model of a dynamical system that is then iteratively improved by the EnKF as new data emerges.
### DMDEnKF
We now describe how we carry out filtering of the temporal modes and state of the system, while keeping the spatial modes found by one's chosen version of DMD on the "spin-up" fixed. We note that once we allow the temporal modes to vary with the spatial modes being fixed, these are no longer eigenvalues/eigenvectors, and we then call them temporal modes. Consider an \(n\) dimensional state \(\mathbf{x}_{k}\in\mathbb{R}^{n}\) measured at regular time intervals \(k=1,...,m\) and then measured iteratively at times \(k=m+1,...\).
#### Algorithm
**Step 1:** Perform the chosen version of DMD on the dataset \(\mathbf{x}_{1},...,\mathbf{x}_{m}\), defining \(\mathbf{X},\mathbf{X}^{\prime}\) as before in equation (2.2) to obtain the expression
\[\mathbf{x}_{k}=\mathbf{\Phi}\mathbf{\Lambda}^{k}\mathbf{b}, \tag{2.22}\]
where
\[\mathbf{\Lambda}=\left[\begin{array}{cccc}\lambda_{1}&0&0\\ 0&\ddots&0\\ 0&0&\lambda_{r}\end{array}\right],\quad\mathbf{\Phi}=\left[\begin{array}{ cccc}|&|&&|\\ \mathbf{d}_{1}&\mathbf{d}_{2}&...&\mathbf{d}_{r}\\ |&|&&|\end{array}\right],\quad\mathbf{b}=\left[\begin{array}{c}b_{1}\\ \vdots\\ b_{r}\end{array}\right], \tag{2.23}\]
and defining \(\lambda_{i}\), \(\mathbf{d}_{i}\), \(b_{i}\) as the \(i\)th temporal mode, DMD mode, initial condition triplet of the \(r\) retained modes. This acts as a spin-up process to generate a model we can then filter using the EnKF.
**Step 2:** Define the matrices required for the EnKF's ensemble initialisation, propagation via equation (2.15), and measurement using equation (2.16).
First, rewrite each of the \(r\) temporal modes in polar coordinates as
\[\lambda_{i}=\tau_{i}e^{\theta_{i}i}, \tag{2.24}\]
where \(\tau_{i}\geq 0\), \(0\leq\theta_{i}<2\pi\) and \(\mathrm{i}^{2}=-1\). As \(\mathbf{x}_{k}\in\mathbb{R}^{n}\), the temporal modes in the DMD model's spectrum will either be real or in a complex conjugate pair. When filtering, we view the temporal modes as a time varying parameter. However, we must enforce that the real temporal modes remain real and complex conjugate pairs remain intact, as this ensures the state output by the model will still be real. We do this by defining the filterable model parameters \(\mu_{i}\) as new variables for \(i=1,...,r\)
\[\mu_{i}=\begin{cases}\tau_{i},&\text{if $\theta_{i}=0$, or $\nexists j$ for $j<i$ such that $\lambda_{j}^{*}=\lambda_{i}$},\\ \theta_{i},&\text{otherwise}.\end{cases} \tag{2.25}\]
Written in this way, these \(\mu_{i}\)'s represent all the possible degrees of freedom in the model's temporal modes under the additional constraint of producing a real state estimate. By maintaining a note of the positional indexes of each complex conjugate pair produced in the initial DMD, it is possible to recover the \(\lambda_{i}\) representation from the \(\mu_{i}\)'s. While this transformation technically requires the full list of \(\mu_{i}\)'s, we informally write \(\Lambda(\mu_{i})=\lambda_{i}\) to symbolize the reversion from \(\mu_{i}\)'s back to \(\lambda_{i}\)'s.
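The bookkeeping between the eigenvalue representation \(\lambda_{i}\) and the filterable parameters \(\mu_{i}\) is simple but easy to get wrong, so one possible implementation is sketched below. The pairing convention (first member of a conjugate pair carries the modulus, second carries the argument) and the treatment of repeated or negative real eigenvalues are our own simplifying assumptions for illustration, not a prescription from the text.

```python
import numpy as np

def eigs_to_mu(lams):
    """Map eigenvalues to the parameters mu_i of equation (2.25).
    Returns (mu, pairs): real or first-of-pair eigenvalues contribute their
    modulus tau_i, the second member of a conjugate pair contributes its
    argument theta_i; `pairs` maps first index -> second index."""
    mu, pairs = np.empty(len(lams)), {}
    for i, lam in enumerate(lams):
        partner = next((j for j in range(i)
                        if np.isclose(lams[j], np.conj(lam)) and j not in pairs), None)
        if np.isclose(lam.imag, 0.0) or partner is None:
            mu[i] = np.abs(lam)                 # tau_i
        else:
            pairs[partner] = i
            mu[i] = np.angle(lam)               # theta_i of the second pair member
    return mu, pairs

def mu_to_eigs(mu, pairs):
    """Invert eigs_to_mu, i.e. the map written informally as Lambda(mu_i) = lambda_i."""
    lams = mu.astype(complex)                   # unpaired modes: modulus assumed non-negative
    for first, second in pairs.items():
        lams[second] = mu[first] * np.exp(1j * mu[second])
        lams[first] = np.conj(lams[second])
    return lams
```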
**Ensemble initialisation:** We can now define an augmented joint parameter state \(\mathbf{z}_{0}\in\mathbb{R}^{n+r}\) to be used as the initial state for the EnKF
\[\mathbf{z_{0}}=\left[\begin{array}{cccc}-&\mathbf{x}_{m}&-&\mu_{1}&\ldots& \mu_{r}\end{array}\right]^{T}. \tag{2.26}\]
We denote the joint parameter state at time \(m+k\) as \(\mathbf{z}_{k}\in\mathbb{R}^{n+r}\). To generate an initial ensemble from this state, we first define sample covariance \(\mathbf{C}=(1/m)(\mathbf{X}^{\prime}-\boldsymbol{\Phi}\boldsymbol{\Lambda} \boldsymbol{\Phi}^{+}\mathbf{X})(\mathbf{X}^{\prime}-\boldsymbol{\Phi} \boldsymbol{\Lambda}\boldsymbol{\Phi}^{+}\mathbf{X})^{T}\), which represents the current state uncertainty based on prediction errors in the spin-up DMD. We then form the initial covariance matrix
\[\mathbf{P}_{0}=\left[\begin{array}{c|c}\mathbf{C}&\mathbf{0}\\ \hline\mathbf{0}&\alpha_{2}I_{r}\end{array}\right], \tag{2.27}\]
where \(\alpha_{2}>0\), \(I_{r}\) is the \(r\)-dimensional identity matrix, and the \(\alpha_{2}I_{r}\) term determines the initial uncertainty in the spin-up DMD's temporal modes. Take independent draws from \(\mathcal{N}(\mathbf{z}_{0},\mathbf{P}_{0})\) until the ensemble is sufficiently large. The optimal ensemble size will vary from problem to problem; adding ensemble members will increase accuracy, but at the cost of computational efficiency.
**Propagation:** Using the notation \(\mathbf{z}_{k}^{i}\) to signify the \(i\)th element of \(\mathbf{z}_{k}\), we define the matrix \(\boldsymbol{\Lambda}_{\mathbf{z}_{k}}\in\mathbb{R}^{r\times r}\) for state \(\mathbf{z}_{k}\) as
\[\boldsymbol{\Lambda}_{\mathbf{z}_{k}}=\left[\begin{array}{ccc}\Lambda( \mathbf{z}_{k}^{n+1})&0&0\\ 0&\ddots&0\\ 0&0&\Lambda(\mathbf{z}_{k}^{n+r})\end{array}\right]. \tag{2.28}\]
The EnKF's propagation equation can be written as
\[\mathbf{z}_{k+1}=\left[\begin{array}{c|c}\boldsymbol{\Phi}\boldsymbol{ \Lambda}_{\mathbf{z}_{k}}\boldsymbol{\Phi}^{+}&\mathbf{0}\\ \hline\mathbf{0}&I_{r}\end{array}\right]\mathbf{z}_{k}+\mathbf{w}_{k}. \tag{2.29}\]
For convenience, we introduce notation \(\mathbf{z}_{k}^{i:j}\in\mathbb{R}^{j-i+1}\) to denote the \(i\)th through to the \(j\)th element of \(\mathbf{z}_{k}\) where \(i\leq j\)
\[\mathbf{z}_{k}^{i:j}=\left[\begin{array}{ccc}\mathbf{z}_{k}^{i}&\ldots&\mathbf{z}_{k}^{j}\end{array}\right]^{T}. \tag{2.30}\]
Equation (2.29) propagates \(\mathbf{z}_{k}^{1:n}\) representing the state in the DMD framework \(\mathbf{x}_{m+k}\) forward in time using the standard DMD equation with the updated temporal modes from \(\boldsymbol{\Lambda}_{\mathbf{z}_{k}}\). The vector \(\mathbf{z}_{k}^{n+1:n+r}\) represents the current estimate of the temporal modes in their \(\mu_{i}\) representation and is unchanged other than the addition of noise by the propagation equation, for although we assume the temporal modes vary in time no direction of drift in the parameters is explicitly foreknown. The vector \(\mathbf{w}_{k}\in\mathbb{R}^{n+r}\) is a normally distributed variable \(\mathbf{w}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{Q}_{k})\), and this represents the uncertainty within the model of the system. We construct \(\mathbf{Q}_{k}\) as follows,
\[\mathbf{Q}_{k}=\left[\begin{array}{c|c}\alpha_{1}I_{n}&\mathbf{0}\\ \hline\mathbf{0}&\alpha_{2}I_{r}\end{array}\right], \tag{2.31}\]
where \(\alpha_{1}\) and \(\alpha_{2}\) are constants determined by the user such that \(\alpha_{2}\ll\alpha_{1}\). This construction, with \(\mathbf{Q}_{k}\) a diagonal matrix, assumes that the model errors for each element of \(\mathbf{z}_{k}\) are uncorrelated with one another. The condition \(\alpha_{2}\ll\alpha_{1}\) ensures that the state of the DMD system \(\mathbf{z}_{k}^{1:n}\) changes significantly faster than its temporal modes \(\mathbf{z}_{k}^{n+1:n+r}\), as parameters by definition should vary slowly in time. Furthermore, for the temporal modes' moduli being filtered, it prevents the strictly positive modulus from dropping below 0.
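The propagation step (2.28)-(2.31), applied member by member, might look as follows. Here `mu_to_lambda` stands for the reversion helper sketched earlier, e.g. `lambda mu: mus_to_lambdas(mu, pair_starts, r)`, and the noise levels \(\alpha_{1},\alpha_{2}\) are user-chosen as described above; this is a sketch, not the authors' implementation.

```python
import numpy as np

def propagate(Z, Phi, mu_to_lambda, alpha1, alpha2, rng):
    """One forecast step applied to every ensemble member.

    Z: (N, n+r) ensemble; Phi: fixed spatial modes from the spin-up DMD;
    mu_to_lambda: callable returning the diagonal of Lambda_{z_k}.
    """
    N = Z.shape[0]
    n, r = Phi.shape
    Phi_pinv = np.linalg.pinv(Phi)
    out = Z.copy()
    for i in range(N):
        lams = mu_to_lambda(Z[i, n:])                           # diag of Lambda_{z_k}
        out[i, :n] = (Phi @ (lams * (Phi_pinv @ Z[i, :n]))).real  # Phi Lambda Phi^+ x
        # out[i, n:] left unchanged: temporal modes persist, drift enters via noise
    noise = np.hstack([np.sqrt(alpha1) * rng.standard_normal((N, n)),
                       np.sqrt(alpha2) * rng.standard_normal((N, r))])
    return out + noise
```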
**Measurement:** We write the EnKF's measurement equation as
\[\mathbf{y}_{k}=\left[\begin{array}{c|c}I_{n}&\mathbf{0}\\ \hline\mathbf{0}&\mathbf{0}\end{array}\right]\mathbf{z}_{k}+\mathbf{v}_{k}, \tag{2.32}\]
where \(\mathbf{y}_{k}\in\mathbb{R}^{n}\) are observations of the DMD state \(\mathbf{x}_{m+k}\), and \(\mathbf{v}_{k}\in\mathbb{R}^{n}\) is a normally distributed variable \(\sim\mathcal{N}(\mathbf{0},\mathbf{R}_{k})\) representing the noise in the measurements. We assume new measurements \(\mathbf{y}_{k}\) to be available for the full DMD state \(\mathbf{z}_{k}^{1:n}\) but not its temporal modes \(\mathbf{z}_{k}^{n+1:n+r}\), as this is consistent with the format of the data used to generate the spin-up DMD model. We also assume uncorrelated measurement noise on each dimension of the state, so choose a diagonal matrix \(\mathbf{R}_{k}\).
**Step 3:** State measurements \(\mathbf{x}_{m+k}\) at times \(k=1,...\) are iteratively generated. By setting \(\mathbf{y}_{k}=\mathbf{x}_{m+k}\) as each new measurement arrives, we can iteratively apply the EnKF to produce a hybrid estimate for \(\mathbf{z}_{k}\) that combines model predictions from \(\mathbf{z}_{k-1}\) and the noisy measurement \(\mathbf{y}_{k}\). A brief summary of how the EnKF does this is provided in Section 2.2, and a more expansive description can be found in [42].
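The analysis step itself is the standard stochastic (perturbed-observation) EnKF update with \(\mathbf{H}=[\,I_{n}\;|\;\mathbf{0}\,]\). The sketch below is a plain-vanilla version with no inflation or localisation and is not claimed to match every detail of the implementation described in [42].

```python
import numpy as np

def enkf_update(Zf, y, R, n, rng):
    """Stochastic EnKF analysis step.

    Zf: (N, n+r) forecast ensemble; y: length-n measurement of the DMD state;
    R: (n, n) measurement covariance.  H picks out the first n components.
    """
    N = Zf.shape[0]
    A = Zf - Zf.mean(axis=0)                    # ensemble anomalies
    P = A.T @ A / (N - 1)                       # sample covariance, (n+r, n+r)
    PHt = P[:, :n]                              # P H^T
    S = P[:n, :n] + R                           # H P H^T + R
    K = PHt @ np.linalg.solve(S, np.eye(n))     # Kalman gain, (n+r, n)
    Ypert = y + rng.multivariate_normal(np.zeros(n), R, size=N)  # perturbed obs
    return Zf + (Ypert - Zf[:, :n]) @ K.T       # analysis ensemble
```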
**Step 4:** The state of the original system \(\mathbf{x}_{m+k}\) can be reconstructed from \(\mathbf{z}_{k}\) by simply taking its first \(n\) elements \(\mathbf{z}_{k}^{1:n}\). Predictions \(p\) steps ahead at time \(m+k\) can also be forecast from \(\mathbf{z}_{k}\) via
\[\mathbf{x}_{m+k+p}=\mathbf{\Phi}\mathbf{\Lambda}_{\mathbf{z}_{k}}^{p}\mathbf{ \Phi}^{+}\mathbf{z}_{k}^{1:n}. \tag{2.33}\]
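In code, the \(p\)-step forecast (2.33) is a one-liner once the \(\mu\to\lambda\) reversion is available; as before, `mu_to_lambda` is an assumed helper, not notation from the paper.

```python
import numpy as np

def forecast(z, Phi, mu_to_lambda, p):
    """p-step-ahead forecast from a filtered joint state z, equation (2.33)."""
    n, r = Phi.shape
    lams = mu_to_lambda(z[n:])                          # diagonal of Lambda_{z_k}
    return (Phi @ (lams ** p * (np.linalg.pinv(Phi) @ z[:n]))).real
```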
The Hankel-DMDEnKF is defined algorithmically in exactly the same way, with the only difference being that Hankel-DMD is applied over the "spin-up" period as opposed to standard DMD.
## 3 Synthetic Applications
### Comparison against other iterative DMD variants
To test the DMDEnKF, we first apply it to data generated from a synthetic system with time varying eigenvalues, which we aim to track. The dynamics of this system are governed by the 2 dimensional rotation matrix, where the angle of rotation \(\theta_{k}\) increases linearly from \(\pi/64\) to \(\pi/8\) over the course of \(500\) time steps. The evolution of the state \(\mathbf{x}_{k}\) of the system can hence be written as
\[\mathbf{x}_{k+1}=\left[\begin{array}{cc}\cos(\theta_{k})&-\sin(\theta_{k}) \\ \sin(\theta_{k})&\cos(\theta_{k})\end{array}\right]\mathbf{x}_{k},\quad\mathbf{ x}_{1}=\left[\begin{array}{c}1\\ 0\end{array}\right], \tag{3.1}\]
where \(\theta_{k}=\pi/64+\frac{(k-1)(7\pi/64)}{499}\) and \(k=(1,...,500)\).
We assume noisy measurement values \(\mathbf{y}_{k}\) to be available for the state at each time step, such that
\[\mathbf{y}_{k}=\mathbf{x}_{k}+\mathbf{v}_{k},\quad\mathbf{v}_{k}\sim\mathcal{ N}(\mathbf{0},\sigma^{2}I_{2}), \tag{3.2}\]
where in each experiment \(\sigma=0.05\) or \(\sigma=0.5\), simulating a low or high level of measurement noise respectively. The \(500\) values of \(\mathbf{y}_{k}\) (shown in Figure 2) are used to train the DMDEnKF and Hankel-DMDEnKF, with the first \(100\) time steps being used for the spin-up process described in Step 1 of the DMDEnKF algorithm to produce the output described in equation (2.23).
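The synthetic data of (3.1)-(3.2) can be reproduced with a few lines; the seed and function name are arbitrary choices of ours.

```python
import numpy as np

def rotation_data(K=500, sigma=0.05, seed=0):
    """Generate the time-varying rotation system (3.1) and noisy measurements (3.2)."""
    rng = np.random.default_rng(seed)
    thetas = np.pi / 64 + np.arange(K) * (7 * np.pi / 64) / (K - 1)  # pi/64 -> pi/8
    X = np.zeros((2, K))
    X[:, 0] = [1.0, 0.0]
    for k in range(K - 1):
        c, s = np.cos(thetas[k]), np.sin(thetas[k])
        X[:, k + 1] = np.array([[c, -s], [s, c]]) @ X[:, k]          # rotate the state
    Y = X + sigma * rng.standard_normal(X.shape)                     # noisy measurements
    return X, Y
```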
We will also train the iterative variants of DMD described at the end of Section 2.1 (Streaming TDMD, Windowed DMD and Online DMD) on this dataset to compare their ability to track the system's time varying eigenvalues against that of the DMDEnKF. Within the Windowed DMD algorithm, we replace DMD with TDMD to allow for this method to effectively handle the noise in the data, henceforth referring to this amalgamation of the two methods as Windowed TDMD. To implement Online DMD, we use code made available by its creators [29]. Computational parameters were set as follows: window size \(w=10\) for Windowed TDMD, exponential decay rate \(\rho=0.9\) for Online DMD, delay-embedding dimension \(d=50\) for the Hankel-DMDEnKF and spin-up time steps \(m=100\) for the DMDEnKF as previously stated.
At each time step \(k\), the system's true eigenvalues can be written in modulus-argument form as
\[\lambda_{k}=1e^{\pm\theta_{k}i}, \tag{3.3}\]
and for each time step where the models are defined their estimates of the system's eigenvalues can also be written as
\[\hat{\lambda}_{k}=\hat{\tau}_{k}e^{\pm\hat{\theta}_{k}i}. \tag{3.4}\]
We start by comparing the errors in each method's estimate of the constant eigenvalue modulus (\(\hat{\tau}_{k}-1\)). A thousand runs of the synthetic data were generated for each value of \(\sigma\), and the differences of each method's eigenvalue modulus and argument from their true values at every time step after the spin-up period were collected. When any of the methods failed to identify the eigenvalues as a complex conjugate pair at a given time step in a run, the dominant eigenvalue's modulus was used for modulus error calculations. The average errors in the eigenvalue modulus estimates are shown in Table 1.
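The error quantities compared here can be computed as below. The fallback to the dominant eigenvalue follows the convention stated above; the sign conventions and function name are our own.

```python
import numpy as np

def eig_errors(lam_hat, theta_true):
    """Modulus and argument errors for one set of estimated eigenvalues.

    lam_hat: array of estimated eigenvalues at one time step; theta_true: true
    argument theta_k.  The true modulus is 1, so the modulus error is |lam| - 1.
    """
    lam = lam_hat[np.argmax(np.abs(lam_hat))]       # dominant eigenvalue
    mod_err = np.abs(lam) - 1.0
    arg_err = np.abs(np.angle(lam)) - theta_true    # compare the positive-argument branch
    return mod_err, arg_err
```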
For all levels of measurement noise, Streaming TDMD estimated the eigenvalue modulus the most accurately. This is due to the method's assumption of a stationary system, hence assigning an equal weight to the importance of each data point, which works well in the case of estimating a constant parameter. At low levels of measurement noise as seen in the first column of Table 1, Windowed TDMD, Online DMD, the DMDEnKF and Hankel-DMDEnKF all performed similarly
Figure 2: _Time series for a synthetic system with a linearly increasing eigenvalue argument, showing the state’s first dimension with no, low (\(\sigma=0.05\)) and high (\(\sigma=0.5\)) measurement noise._
well with mean eigenvalue modulus errors below \(0.01\). As errors in the eigenvalue modulus grow exponentially when forecasting future states, these 4 methods could produce acceptable short term forecasts but would quickly diverge from the true state as the forecast horizon was extended. At high levels of noise shown in the second column of Table 1, Windowed TDMD and Online DMD's eigenvalue modulus estimates degrade significantly, making them unsuitable for forecasting in this scenario. The errors in the DMDEnKF and Hankel-DMDEnKF remain fairly small, however are still an order of magnitude greater than those produced by Streaming TDMD.
A typical trajectory of the eigenvalue argument estimates (\(\hat{\theta}_{k}\)) for each method over the course of one run from the end of the spin-up period onwards can be seen in Figures 3(a) and 3(c). The error distributions for each method's eigenvalue argument estimates (\(\hat{\theta}_{k}-\theta_{k}\)) over all 1000 runs are plotted in Figures 3(b) and 3(d).
At low levels of noise, as seen in Figures 3(a) and 3(b), all 5 methods on average underestimated the eigenvalue argument of the system. This is to be expected, as the eigenvalue argument is increasing with time, meaning that all but the last data pair available to each method would have been generated using an argument smaller than its current value. Streaming TDMD exhibited the worst performance, again due to its equal weighting of every data point, which in this instance is a drawback, as it hampers the model's ability to adapt to fresh data that reflects the changing parameter. Windowed TDMD, Online DMD, the DMDEnKF and Hankel-DMDEnKF all performed similarly. Online DMD produced a tighter error distribution, but with a slightly larger bias than Windowed TDMD. This suggests that Online DMD's soft thresholding reduces the model volatility caused by measurement noise compared to the hard cut-off employed by Windowed TDMD. For this same reason however, Online DMD is slower to adapt to new measurements than Windowed TDMD, leading to a larger bias below the system's true eigenvalue argument. The DMDEnKF and Hankel-DMDEnKF performed very similarly to Windowed TDMD at this noise level, however tweaks to the magnitude of the DMDEnKF's system uncertainty matrix can be made to balance the speed of model innovation with its volatility and produce distributions closer to that of Online DMD if required.
At higher noise levels, shown in Figures 3(c) and 3(d), the performance of Windowed TDMD and Online DMD significantly degrades. Placing a larger weight on more recent samples allowed these methods
\begin{table}
\begin{tabular}{||c c c||}
\hline
Iterative DMD & \multicolumn{2}{c||}{Mean Eigenvalue Modulus Error} \\
Variant & \(\sigma=0.05\) & \(\sigma=0.5\) \\ \hline \hline
Windowed TDMD & \(9.82\times 10^{-3}\) & \(1.39\) \\ \hline
Online DMD & \(6.04\times 10^{-3}\) & \(3.06\times 10^{-1}\) \\ \hline
Streaming TDMD & \(2.31\times 10^{-4}\) & \(2.50\times 10^{-3}\) \\ \hline \hline
DMDEnKF & \(8.07\times 10^{-3}\) & \(1.89\times 10^{-2}\) \\ \hline
Hankel-DMDEnKF & \(9.49\times 10^{-3}\) & \(1.38\times 10^{-2}\) \\ \hline
\end{tabular}
\end{table}
Table 1: _Mean absolute errors in the synthetic system’s eigenvalue modulus estimates produced by each iterative DMD variant over all time steps over the course of all 1000 runs. Measurement noise is set to either low levels with \(\sigma=0.05\) (left) or high levels with \(\sigma=0.5\) (right). Streaming TDMD scored significantly lower errors than all other methods, and as noise levels increased, errors in Windowed TDMD and Online DMD grew significantly larger than those produced by the DMDEnKF and Hankel-DMDEnKF._
to quickly adapt to changes in the system's parameters, however as the noise increases this induces an extreme volatility in their respective models. The performance of Streaming TDMD is not largely changed from the low noise case, still lagging behind the true system values but somewhat insulated from the noise by its symmetric treatment of all data points. Here the benefit of explicit inclusion of measurement noise in the DMDEnKF framework becomes apparent, as at this noise level the DMDEnKF and Hankel-DMDEnKF are the only techniques tested capable of producing an accurate eigenvalue argument estimate.
Furthermore, here we see the first significant difference in the performance of the DMDEnKF and Hankel-DMDEnKF, as the DMDEnKF's error distribution has a thin tail extending down to \(-\pi/8\) which is not present in the error distribution of the Hankel-DMDEnKF. These additional errors are caused by the spin-up DMD for the DMDEnKF method occasionally failing to identify the system's eigenvalues as a complex conjugate pair (empirically, this happens \(\sim 3\%\) of the time), due to the increased noise in the data. When this happens, the DMDEnKF fails catastrophically, as the EnKF is unable to generate complex eigenvalues from real ones, regardless of how many future time steps are filtered, due to its formulation in equation (2.25). This failure of the DMDEnKF can be mitigated in the following way. If the errors produced by the DMDEnKF during the filtering stage are deemed too large (e.g. exceed a given threshold) for a prolonged period of time, then the spin-up DMD process can be rerun on an extended dataset consisting of the original spin-up data,
Figure 3: _Estimates of the synthetic system’s eigenvalue argument produced by each iterative DMD variant. Presented are typical trajectories of the eigenvalue argument at each time step over the course of 1 experiment’s run (left) and error distributions of the difference between the true system’s eigenvalue argument and the estimated eigenvalue argument over all time steps over the course of all 10 runs (right). Measurement noise is set to either low levels with \(\sigma=0.05\) (top) or high levels with \(\sigma=0.5\) (bottom). The DMDEnKF and Hankel-DMDEnKF experience similar errors to Online DMD and Windowed TDMD at low measurement noise, but track the eigenvalue argument much more accurately than them for high measurement noise._
plus the newly available data used so far in the filtering step. By including more data in the spin-up process, the spin-up DMD model is more likely to successfully capture the signal component in the data as opposed to the measurement noise, and hence produce eigenvalues with the same structure as those of the true system. Time-delay embeddings make the SVD step in the DMD algorithm more robust to measurement noise [16]. Hence, while the Hankel-DMDEnKF is similarly restricted by the eigenvalues it can produce at the filtering stage, in all 1000 runs of the synthetic data the spin-up Hankel-DMD was able to identify the system's eigenvalues as a complex conjugate pair, so this was not an issue for the Hankel-DMDEnKF.
### Comparing against DMD with a particle filter
Having compared the performance of the DMDEnKF against other iterative DMD variants, we now focus on evaluating the filtering component of the algorithm. Since the linear DMD model acts nonlinearly in the filter when applied to both the model's state and eigenvalues, we compare the EnKF filter with a particle filter. Particle filters [27] have been shown to converge to the optimal filter as the number of particles tends to infinity for general nonlinear models with non-Gaussian noise [17]. However, particle filters are restricted to low dimensional systems only, as the number of particles required scales approximately exponentially with the dimension of the state [57]. Hence, we compare the DMDEnKF and Hankel-DMDEnKF with a DMD plus particle filter, which we take to be the "gold standard" estimate, to assess how well the EnKF handles the nonlinear filtering problem.
We use the same synthetic system (3.1) with a linearly increasing eigenvalue argument as in the previous subsection to generate data with high levels of measurement noise (\(\sigma=0.5\)); a trajectory of which can be seen in Figure 2. Again, the time-delay embedding dimension \(d=50\) for the Hankel-DMDEnKF, and the first 100 time steps are used to train a spin-up DMD model, with the next 400 used to filter the state and spin-up model's eigenvalues.
The DMDEnKF's filter state thus has dimension 4 (2 state dimensions and 2 temporal modes), while the Hankel-DMDEnKF's filter state is of dimension 102 (100 state dimensions and 2 temporal modes). To generate a "gold standard" solution, at the filtering step we use a particle filter with 10,000 particles, applying multinomial importance resampling [20] every time the effective sample size falls below half the number of particles to avoid sample degeneracy [21]. For the DMDEnKF and Hankel-DMDEnKF at the filtering step, we run the EnKF with varying numbers of ensemble members (\(N\)), to see whether, as \(N\) increases, their estimates' mean and covariance tend to those of the particle filter ensemble. We generated 1000 runs of the synthetic data, applied the particle filter and the EnKF with each value of \(N\) to them, and collected the errors in the eigenvalue argument estimates for each method at every time step.
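The resampling rule used for the "gold standard" filter can be sketched as follows, assuming the importance weights have already been normalised; the function name is ours.

```python
import numpy as np

def resample_if_degenerate(particles, weights, rng):
    """Multinomial importance resampling, triggered when the effective sample
    size drops below half the number of particles."""
    N = weights.size
    ess = 1.0 / np.sum(weights ** 2)            # effective sample size
    if ess < N / 2:
        idx = rng.choice(N, size=N, p=weights)  # multinomial resampling
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)           # reset to uniform weights
    return particles, weights
```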
As can be seen in Figure 4(a), the DMD particle filter with 10,000 particles produces an extremely tight error distribution that is slightly biased to produce estimates below that of the true eigenvalue's argument. This is to be expected, as mentioned in the previous subsection, due to the system's eigenvalue argument constantly increasing. There is also a thin tail in the error distribution that extends down to \(-\pi/8\). This is again a result of the spin-up DMD sometimes failing to identify a complex conjugate eigenvalue pair, trapping the particle filter in the faulty assumption that the eigenvalues are real.
For low numbers of ensemble members (\(N=5\)), the DMDEnKF and Hankel-DMDEnKF are centred at a similar value to the "gold standard". However, they produce a far larger spread with long tails in both directions that imply a lack of robustness with this few ensemble members. With only a small increase to \(N=10\), both methods become more stable, as although they still have a larger variance than the particle filter, the long positive tails from \(N=5\) have been eliminated. A similar pattern occurs as we move to \(N=20\), with more ensemble members resulting in a tighter error distribution. At this point, the Hankel-DMDEnKF's distribution can be distinguished from that of the DMDEnKF and DMD particle filter by its aforementioned lack of a persistent thin negative tail. By \(N=40\), the main peaks of the DMDEnKF, Hankel-DMDEnKF and "gold standard" are almost indistinguishable on the graphical scale, with the DMDEnKF and DMD particle filter both sharing a thin negative tail.
Figure 4(b) shows how the mean squared error for the eigenvalue arguments predicted by the DMDEnKF and Hankel-DMDEnKF is affected by varying the number of ensemble members. For the DMDEnKF, errors initially sharply decline as \(N\) is increased, however on this small synthetic system returns diminish quickly after \(N=20\). By \(N=50\), we achieve a mean squared error with the DMDEnKF only \(\sim 3\%\) larger than that of the "gold standard", despite using 200 times fewer particles. When comparing the Hankel-DMDEnKF to the "gold standard", the errors in the DMD particle filter's eigenvalue estimates are skewed by the runs in which the spin-up DMD was unable to identify a complex conjugate eigenvalue pair, as Hankel-DMD did not encounter this problem on these synthetic examples. To attempt to fairly compare the filtering methods, we remove all runs in which the spin-up DMD failed in this way, before again calculating the mean squared error for the DMD particle filter and recording it in Figure 4(b). A similar pattern of reducing errors with diminishing returns can be seen for the Hankel-DMDEnKF as ensemble size is increased, and by \(N=50\) its mean squared error is within \(5\%\) of the newly calculated DMD particle filter's score. Our results show that, in this simple, synthetic case at least, the EnKF is an efficient and effective solution to the nonlinear filtering problems that arise within the DMDEnKF framework.
Figure 4: _Error distributions (left) and mean squared errors (right) for estimates of the synthetic system’s eigenvalue arguments produced by the DMDEnKF and Hankel-DMDEnKF with varying numbers of ensemble members (\(N\)) against those produced by a particle filter with 10,000 particles. Increasing \(N\) quickly leads to error levels in the DMDEnKF and Hankel-DMDEnKF that are similar to those produced by their respective “gold standards”._
### Tracking a synthetically generated pandemic
Lastly, we test the DMDEnKF's performance on synthetic data designed to simulate a simple pandemic with a state \(\mathbf{x}_{k}\) representing the level of infection in 3 different population classes. The system's dynamics are governed by a matrix \(\mathbf{A}\in\mathbb{R}^{3\times 3}\) that we randomly generate with non-negative elements each being drawn from the Uniform distribution \(\mathcal{U}[0,1)\). The \((i,j)\)th element of \(\mathbf{A}\) represents how the level of infection in class \(j\) at time \(k\) will affect the level of infection in class \(i\) at time \(k+1\). To control whether the synthetic pandemic is spreading or dying off, we then define a new matrix \(\mathbf{\hat{A}}=\frac{\mathbf{A}}{\lambda_{1}}\) where \(\lambda_{1}\) is the largest eigenvalue of \(\mathbf{A}\), thus ensuring the spectral radius \(\rho(\mathbf{\hat{A}})=1\). By introducing a constant \(\gamma\), we can replace \(\mathbf{A}\) with \(\gamma\mathbf{\hat{A}}\) causing the state to grow if \(\gamma>1\) or decay for \(\gamma<1\). To simulate a pandemic, we linearly decrease \(\gamma\) from \(1.01\) to \(0.99\) over the course of the experiment's \(1000\) time steps. The initial state used is a vector of ones. The system that generates the synthetic data can be written as
\[\mathbf{x}_{k+1}=\gamma_{k}\mathbf{\hat{A}}\mathbf{x}_{k},\quad\mathbf{x}_{1}= \left[\begin{array}{ccc}1&1&1\end{array}\right]^{T}, \tag{3.5}\]
where the state \(\mathbf{x}_{k}\in\mathbb{R}^{3}\), \(\gamma_{k}=1.01-\frac{0.02(k-1)}{999}\) and \(k=(1,...,1000)\). We assume not to have access to the true state of the system \(\mathbf{x}_{k}\) but instead noisy measurements \(\mathbf{y}_{k}\) defined by
\[\mathbf{y}_{k}=\mathbf{x}_{k}+\mathbf{v}_{k}, \tag{3.6}\]
The constant \(\sigma\) that governs the level of measurement noise is set to \(\sigma=0.05\) to represent low noise and \(\sigma=0.5\) for high noise as in (3.2). Figure 5 shows the values of the system's three state dimensions and the respective available measurements over the course of one run.
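A sketch of the data-generating process (3.5)-(3.6) is given below. The random seed is arbitrary, and because \(\mathbf{A}\) has non-negative entries its largest eigenvalue coincides with the spectral radius used for the normalisation.

```python
import numpy as np

def pandemic_data(K=1000, sigma=0.05, seed=0):
    """Generate the synthetic pandemic system (3.5) and noisy measurements (3.6)."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 1.0, size=(3, 3))                  # infection coupling matrix
    A_hat = A / np.max(np.abs(np.linalg.eigvals(A)))        # spectral radius rho(A_hat) = 1
    gammas = 1.01 - 0.02 * np.arange(K) / (K - 1)           # growth factor: 1.01 -> 0.99
    X = np.zeros((3, K))
    X[:, 0] = 1.0                                           # initial state of ones
    for k in range(K - 1):
        X[:, k + 1] = gammas[k] * A_hat @ X[:, k]
    Y = X + sigma * rng.standard_normal(X.shape)            # noisy measurements
    return X, Y
```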
All five DMD variants tested had their computational parameters set to the same values as those used in the synthetic experiments in Section 3.1. The only small difference was that Streaming TDMD, Windowed TDMD, the DMDEnKF and Hankel-DMDEnKF truncated the data by removing the smallest singular value to reduce model instability caused by what was often a very low
Figure 5: _Time series for a synthetic system that simulates a pandemic, showing all 3 dimensions of the state with no, low (\(\sigma=0.05\)) and high (\(\sigma=0.5\)) measurement noise._
signal-to-noise ratio in this direction. Online DMD did not apply any truncation to the data as the method was not designed to do so, however it did not appear to suffer from any stability issues as a consequence.
The first 100 measurements \((\mathbf{y}_{1},...,\mathbf{y}_{100})\) were used to initialize the models, and as each new data point \((\mathbf{y}_{100},...,\mathbf{y}_{1000})\) was successively fed into the models, they produced 50 step ahead forecasts \((\mathbf{\hat{x}}_{150},...,\mathbf{\hat{x}}_{1050})\). We generate 1000 data points, however a standard flu season lasts around 20 weeks, so 50 synthetic time steps correspond to roughly one week of a real season. For this reason, we chose to forecast 50 steps ahead to mimic forecasting 1 week ahead in a more realistic timescale. The relative prediction errors \(\hat{e}_{k}=\frac{\|\mathbf{x}_{k}-\mathbf{\hat{x}}_{k}\|}{\|\mathbf{x}_{k}\|}\) could then be calculated for \(k=(150,...,1000)\), and the mean of these errors was the main metric we used to evaluate the forecasting skill of each method over the course of one run. A thousand runs were performed for both low and high levels of noise, and the empirical cumulative distributions of 50 step ahead forecast mean run relative errors for low noise (\(\sigma=0.05\)) can be seen in Figure 6.
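The run metric \(\hat{e}_{k}\) and its average can be computed as below; the indexing is illustrative, translating the 1-based time steps of the text into 0-based array columns.

```python
import numpy as np

def mean_relative_error(X_true, X_hat, first_k=150):
    """Mean of e_k = ||x_k - x_hat_k|| / ||x_k|| over k = first_k, ..., final step."""
    errs = [np.linalg.norm(X_true[:, k] - X_hat[:, k]) / np.linalg.norm(X_true[:, k])
            for k in range(first_k - 1, X_true.shape[1])]
    return float(np.mean(errs))
```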
The first noteworthy feature of the cumulative error distributions is the wide range in some methods' forecast errors. This is a result of the 50 step ahead forecasts being produced by training each model to forecast 1 step ahead, then applying the trained model to the data 50 times. As such, forecast errors compound exponentially and small errors over a 1-step forecast horizon can become vast after 50 iterations. Inspecting the individual methods, we see that Windowed TDMD is the worst performing method. This is due to its aforementioned instability under measurement noise caused by considering only a small subset of the data at a time. This instability could be reduced by increasing the window size (\(w\)) computational parameter, however as \(w\) increases the model's ability to track a system that changes with time diminishes. Streaming TDMD had the second-largest errors, caused by the method's assumption of a stationary system hindering its ability to
Figure 6: _Cumulative error distributions of the mean run relative errors for the 50 step ahead forecasts of each iterative DMD variant. Mean relative errors were calculated over all time steps for each run of the experiment, with the results from 1000 runs under low levels of measurement noise (\(\sigma=0.05\)) displayed. Forecast errors had a wide range for some methods, due to exponentially compounding errors caused by forecasting 50 steps ahead. The DMDEnKF, Hankel-DMDEnKF and Online DMD produced errors orders of magnitude smaller than those of Streaming TDMD and Windowed TDMD._
correctly adapt to the system's changing eigenvalues as new data became available. In the majority of cases, Online DMD, the DMDEnKF and Hankel-DMDEnKF all performed similarly well. All three methods exhibited cumulative error distributions tightly concentrated around a low error value, however in a few runs the DMDEnKF became unstable and produced large errors. It is clear, even at low levels of noise, that the forecasting performance of Online DMD, the DMDEnKF and Hankel-DMDEnKF is far superior on this type of system to that of Windowed TDMD and Streaming TDMD. Hence, we now focus exclusively on these top three performing methods to allow for a thorough comparison of them on an appropriate error scale.
In Figure 7 for Online DMD, the DMDEnKF and Hankel-DMDEnKF, we plot the distributions of 50 step ahead forecast mean run relative errors at both low and high levels of measurement noise. At low levels of noise, Online DMD's errors peak at a lower level than those of the DMDEnKF and Hankel-DMDEnKF, however as noise levels increase we see the peaks switch sides, and the DMDEnKF/Hankel-DMDEnKF become the better performing methods. At both noise levels, the peak in the DMDEnKF's error distribution is centred at the same value as the Hankel-DMDEnKF's peak, however it is less dense due to the additional probability mass stored in the long tail of the DMDEnKF's error distribution, which is not present in that of the Hankel-DMDEnKF.
These disproportionately large errors in the DMDEnKF distribution's tail occur when the spin-up DMD process fails to produce a model similar enough to the system's true dynamics. As briefly touched upon in the first synthetic example, if the spin-up DMD model is sufficiently inaccurate then it can stop the EnKF from effectively assimilating new data, leading to the catastrophic failure of the DMDEnKF. In this example, as the signal-to-noise ratio in the direction of the second-largest singular value was often low, an unfortunate random draw of the system dynamics (\(\mathbf{A}\)) and measurement noise (\(\mathbf{v}_{k}\)) in the spin-up period could produce large errors in DMD's second eigenvalue. Empirically, using the interquartile range method to detect outlying forecast errors, this DMDEnKF failure occurred 5.5% of the time for \(\sigma=0.05\). The errors would persist in the filtering step as new data was processed, whereas other methods were able to recover from poor initial model estimates more effectively. The quality of the model produced by the initial DMD is dependent on the quality and volume of the spin-up data, hence this problem was exacerbated and occurred much more regularly at higher noise levels (21.9% of the time for \(\sigma=0.5\)). It could be
Figure 7: _Error distributions of the mean run relative errors for the 50 step ahead forecasts of Online DMD, the DMDEnKF and Hankel-DMDEnKF, attained over 1000 runs under low \(\sigma=0.05\) (left) and high \(\sigma=0.5\) (right) levels of measurement noise. Similar errors were found at both noise levels, with Online DMD performing better at low measurement noise and the DMDEnKF/Hankel-DMDEnKF performing better at high measurement noise._
mitigated somewhat by increasing the number of time steps in the spin-up stage, as described at the end of Section 3.1. However, similarly to Windowed TDMD, because the system is assumed to be time varying there likely exists a point of negative returns once the spin-up period becomes too long, due to the stationarity assumption of batch DMD becoming progressively more violated.
Unlike the DMDEnKF, the Hankel-DMDEnKF and Online DMD do not suffer from a long tail in their error distributions, and perform consistently well over all 1000 runs. At both noise levels, their error distributions have a similar variance, with the Hankel-DMDEnKF's errors being slightly more tightly grouped than those of Online DMD. Hence, the average error is the main factor when differentiating between the methods' performance in this example, meaning Online DMD is the preferred method at low noise and the Hankel-DMDEnKF (or DMDEnKF, provided the spin-up DMD does not catastrophically fail) is more accurate at high noise. As both methods possess useful yet differing attributes, we generate a typical data trajectory (one for which the DMDEnKF does not fail) for both low and high measurement noise. We then investigate how each model's 50 step ahead forecasts and dominant eigenvalue estimates change over the course of each run, as shown in Figure 8.
First, observing the low noise forecasts in Figure 8(a), it is clear Online DMD produces forecasts that are more robust and closer to the true state's value than those of the DMDEnKF and Hankel-DMDEnKF. This was to be expected, by virtue of Online DMD's lower average errors in the error distributions of Figure 7(a) at this noise level. As noise is increased, the forecasts in Figure 8(c) show the DMDEnKF and Hankel-DMDEnKF becoming the more accurate methods, however Online
Figure 8: Typical trajectories of the 50 step ahead forecasts for the value of the state's first dimension (left) and estimates of the dominant eigenvalue's current modulus (right) under low (\(\sigma=0.05\)) and high (\(\sigma=0.5\)) levels of measurement noise for Online DMD, the DMDEnKF and Hankel-DMDEnKF over the course of 1 run. Online DMD forecasts 50 steps ahead more accurately at low noise, and the DMDEnKF/Hankel-DMDEnKF more accurately at high noise, however when the signal-to-noise ratio is low (at the start and end of the experiment) Online DMD's eigenvalue estimates become unstable.
DMD's forecasts remains fairly stable, and still appear to be a viable forecasting option.
Analysing the eigenvalue estimates in Figures 8(b) and 8(d), we see that over the middle section of data, where \(k=(250,...,750)\), Online DMD is able to track the dominant eigenvalue effectively. However, at the beginning and end of the dataset, when states and hence the signal component of each new data point are small relative to the measurement noise, Online DMD's eigenvalue estimates become progressively more unstable. In the low noise case this is not a problem, as Online DMD's estimates are significantly more accurate than those of the DMDEnKF/Hankel-DMDEnKF, so even in the poorly performing sections of the data its estimates still match or better those of the DMDEnKF. For higher noise however, Online DMD provides significantly less robust estimates of the dominant eigenvalue at the start and end of the datasets than those generated by the DMDEnKF and Hankel-DMDEnKF. In the epidemiological context of an infectious disease outbreak, which this synthetic example attempts to mimic, scientists will often try to calculate the basic reproduction number (\(R_{0}\)) [58] using noisy data from the small number of initial infections. If \(R_{0}>1\) the number of infections will grow exponentially if left unchecked, and if \(R_{0}<1\) the number of infections will decay naturally to 0. Within this example, using the DMDEnKF/Hankel-DMDEnKF one can quickly determine that initially \(R_{0}>1\) and take any required action, thanks to the stability of their early eigenvalue estimates, whereas it takes significantly longer and a higher level of infection for Online DMD to consistently determine whether \(R_{0}\) is above or below the growth/decay threshold.
## 4 Seasonal Influenza-like Illness Application
### Problem setup
DMD based methods have previously been applied to infectious disease data [49]. In this case, DMD modes can be viewed as stationary, spatial modes used to create a reduced order model in which only the amplitudes and frequencies are time varying [9]. Hence, modelling influenza-like illness (ILI) data is a prime potential application for the DMDEnKF/Hankel-DMDEnKF.
The CDC's ILINet data [25] that we will be using records the number of ILI General Practitioner (GP) consultations in the US each week, alongside the number of total GP consultations, which can be used to normalize the ILI data. We use a subset of the data from the start of 2003, the first year when data is available all year round, to the end of 2018 as seen in Figure 1. We then split each week's data into demographics, consisting of 4 age groups (0-4, 5-24, 25-64, 65+) and 10 Health and Human Services (HHS) regions. Each region consists of the following locations:
* Region 1 - Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont.
* Region 2 - New Jersey, New York, Puerto Rico, and the U.S. Virgin Islands.
* Region 3 - Delaware, District of Columbia, Maryland, Pennsylvania, Virginia, and West Virginia.
* Region 4 - Alabama, Florida, Georgia, Kentucky, Mississippi, North Carolina, South Carolina, and Tennessee.
* Region 5 - Illinois, Indiana, Michigan, Minnesota, Ohio, and Wisconsin.
* Region 6 - Arkansas, Louisiana, New Mexico, Oklahoma, and Texas.
* Region 7 - Iowa, Kansas, Missouri, and Nebraska.
* Region 8 - Colorado, Montana, North Dakota, South Dakota, Utah, and Wyoming.
* Region 9 - Arizona, California, Hawaii, and Nevada.
* Region 10 - Alaska, Idaho, Oregon, and Washington.
Whilst ILI consultation data is available over all 40 of these strata, total GP consultation data is only provided by region. To generate an age breakdown for a region's total consultations we linearly interpolate using census data to approximate the US population's age demographics for a given week. We then allocate the region's total consultations to each age group based on the proportion of the total population they represent. This method assumes that all age groups have a similar likelihood of attending the GP's, which may be flawed but we believe it to be sufficient for the purpose of demonstrating the DMDEnKF on real-world data.
### Building the spin-up DMD model
The format of the ILI data used in the DMDEnKF is thus a 40-dimensional vector for each week, recording ILI consultations as a percentage of total GP consultations over every demographic. This data exists in \(\mathbb{R}_{+}^{40}\); however, DMD computes modes in \(\mathbb{R}^{40}\). Hence, to ensure the model's estimates are non-negative, we first transform the data by adding a small constant (\(c=1\)) then taking the natural logarithm of each element. For the Hankel-DMDEnKF, this transformed data is then delay-embedded with the previous 99 time steps (\(d=100\)) to form a state in \(\mathbb{R}^{4000}\). We use data up to the end of 2012 as training data for the spin-up DMD processes of the DMDEnKF/Hankel-DMDEnKF detailed in Step 1 of the DMDEnKF algorithm, and then filter the remaining years from 2013-2018. The transformed, centred data, with the split where the spin-up process ends and the filtering begins marked, is shown in Figure 9.
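The preprocessing described here, in sketch form; the stacking order of the delay embedding is a convention choice and the helper name is ours.

```python
import numpy as np

def transform_and_embed(Y, c=1.0, d=100):
    """Log-transform the 40-dimensional ILI percentages and build the
    delay-embedded (Hankel) state used by the Hankel-DMDEnKF."""
    Z = np.log(Y + c)                            # elementwise shift-and-log transform
    n, T = Z.shape
    # column k of H stacks weeks k, k-1, ..., k-d+1 (most recent block first)
    H = np.vstack([Z[:, d - 1 - j : T - j] for j in range(d)])
    return Z, H                                  # H has shape (n*d, T-d+1)
```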
We initially choose to truncate to 8 DMD modes for the DMDEnKF to demonstrate the method. We discuss the effect of changing the truncation on the DMDEnKF method below, but at around 8 DMD modes the amount of additional variance in the data that is retained by keeping more modes diminishes significantly. This is evidenced by the "elbow" seen in the cumulative variance plot of Figure 14(a), at the point where the graph transitions from rapidly increasing in variance with \(r\) to a more gradual ascent. We also truncate the Hankel-DMDEnKF to 8 DMD modes, to allow for a more direct comparison between the two variants.
The spectrum and dominant DMD/Hankel-DMD modes associated with each frequency identified by the spin-up DMD processes can be seen in Figure 10. All eigenvalues shown in Figures 10(a) and 10(c) had a modulus of \(\sim 1\), meaning in both cases each mode was expected to persist in the data without growing exponentially. The major difference between the two methods spectra is that Hankel-DMD identifies the most dominant mode to have a period of one year, whereas DMD does not detect any modes with this period. Annual peaks in ILI consultations occurring at a relatively similar time each year indicates that the data contains a strong mode of period one year, and this is supported by Fourier analysis [10] which also identifies one year as the dominant period. Hence, DMD is missing the yearly mode present in the data which Hankel-DMD is able to detect, and
this is likely due to Hankel-DMD's aforementioned enhanced robustness to measurement noise. There are two clear patterns in the structure of the dominant DMD and Hankel-DMD modes seen in Figures 10(b) and 10(d). Firstly, their strata generally move together. This is shown by the vast majority of entries for each DMD mode, and entries within the same delay-embedded week (denoted by a vertical slice through the mode) for Hankel-DMD modes, sharing the same sign. This implies that the percentage of ILI consultations increases and decreases at a similar time across all ages and regions. Secondly, the variance is higher for the younger age groups. This is demonstrated by the absolute value of elements in the top two rows of each region generally being larger than those in the bottom two. From Figure 10(b), this is visible trivially for the DMD modes. In Figure 10(d), age groups are arranged in ascending order for each region, so this effect is evidenced in the Hankel-DMD modes by the presence of more intensely coloured horizontal lines of width 2, followed by less intensely coloured lines of width 2, repeating over each of the 10 regions. This indicates that there are sharper peaks and deeper troughs in the percentage of ILI consultations for the young, while the rates for those 25 and over remain more stable.
### Applying the filter
The filtering steps of the DMDEnKF/Hankel-DMDEnKF are then applied over the remaining data using the spatial and temporal modes from the spin-up DMD and spin-up Hankel-DMD respectively. Producing a 4-week ahead ILI forecast for the ILINet data that consistently outperforms a simple historical baseline prediction is difficult even for state-of-the-art models [50]. As such, to test the DMDEnKF/Hankel-DMDEnKF we use a forecast horizon of 4 weeks when making predictions. In Figure 11, the DMDEnKF and Hankel-DMDEnKF's forecasting of total ILI consultations as
Figure 9: ILI consultations as a percentage of total weekly GP consultations across 4 age brackets and the 10 HHS regions in the US, log transformed and centred. The peaks of varying size, timing and shape in Figure 1 are visible here as vertical red areas of varying width and intensity that encompass most demographics.
Figure 10: _Eigenvalue Spectrum (left) and DMD modes in descending order of dominance (right) generated by the DMD (top)/Hankel-DMD (bottom) applied to the data in Figure 9 up to the spin-up date. In both cases, all eigenvalues lie approximately on the unit circle, and dominant modes feature the same sign for most demographics with a magnitude that varies with age. The DMD modes are more interpretable, but Hankel-DMD identifies the mode with period 1 year, which DMD does not._
a percentage of total weekly GP consultations can be seen in full. Up until 2012, we generate 4-week ahead reconstructions using the spin-up DMD models only, estimating each state by taking the state measurement from 4 weeks prior, then iteratively applying the model to it 4 times. The 4-week ahead DMD reconstruction in Figure 11(a) captures more fluctuations in the data than that of Hankel-DMD, however these high frequency fluctuations can also indicate the effect of noise in the measurements. The Hankel-DMD reconstruction shown in Figure 11(b) is much less sensitive to noise, although it fails to identify the sharper peaks in the data, which suggests it may be over-smoothing. From 2012 onwards the filtering begins, and forecasts are generated as described in the DMDEnKF algorithm using equation (2.33). The DMDEnKF forecasts become significantly more stable, while the Hankel-DMDEnKF forecasts improve in capturing the true shape of the data, however both suffered from some degree of lag in their predictions.
During this second section of the data, the models are producing actual forecasts, as the DMDEnKFs only have access to data up to 4 weeks prior to the prediction target's date. Hence, it is in this section of the data that we compare the models' performance against that of the historical baseline. The historical baseline prediction was created in a similar manner to that used in [8], taking ILI consultation rates from the same week of every previous year in the data (excluding the pandemic year of 2009) and then producing a probability distribution for the current week's consultations via Gaussian kernel density estimation (KDE) [55]. KDE bandwidths were determined using Silverman's rule of thumb [56], and when point estimates were required they were taken as the median of the distribution. The results of the comparisons can be seen in Figure 12 and Table 2. Here it is worth noting that although we use data dated 4 weeks prior to the prediction date, in reality this data is often subject to revisions, so the ILINet data as it currently stands would not necessarily be available in real time [50].
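A sketch of how such a baseline could be assembled with `scipy` is given below; the evaluation grid used to extract the median is an implementation detail of ours, not from [8].

```python
import numpy as np
from scipy.stats import gaussian_kde

def historical_baseline(history, exclude_years=(2009,)):
    """Historical baseline for one calendar week.

    history: dict mapping year -> ILI rate observed in that week.  Returns a
    Gaussian KDE (Silverman bandwidth) over previous seasons, pandemic year
    excluded, together with its median as the point estimate.
    """
    rates = np.array([v for yr, v in history.items() if yr not in exclude_years])
    kde = gaussian_kde(rates, bw_method="silverman")
    grid = np.linspace(rates.min() - 3.0, rates.max() + 3.0, 2000)
    cdf = np.cumsum(kde(grid)); cdf /= cdf[-1]
    median = grid[np.searchsorted(cdf, 0.5)]
    return kde, median
```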
### Evaluating the DMDEnKF's performance
Figure 12 demonstrates graphically the 4-step ahead DMDEnKF, Hankel-DMDEnKF and historical baseline forecasts. The DMDEnKFs more successfully capture the shape and height of each flu season's peak, however they tend to predict the peaks late, whilst the historical baseline consistently under-predicts the peak rates but is fairly accurate on the timings. The Hankel-DMDEnKF's forecasts are smoother than those of the DMDEnKF, however they do not capture smaller details within the shape of the peaks. We also plot the 95% confidence intervals for the DMDEnKF and Hankel-DMDEnKF's forecasts in Figure 12, generated using the ensemble that is maintained and propagated in the EnKF framework. At all times, the real data lies within the DMDEnKF's confidence interval, which is not true for the Hankel-DMDEnKF. The DMDEnKF's confidence interval is significantly wider than that of the Hankel-DMDEnKF, and this is due to Hankel-DMD's robustness to noise, meaning that when the ensemble is propagated through the model, a large amount of probability mass is concentrated in a small area of the state space. This then leads to the Hankel-DMDEnKF underestimating the uncertainty in the system, and hence some real data values falling outside the boundaries of its 95% confidence interval.
To numerically compare performance, we used metrics designed for the Forecast the Influenza Season Collaborative Challenge (FluSight), in which multiple teams would submit predictions about the weekly ILINet consultation rates for the upcoming flu season at a national and HHS regional level [7], [8]. The FluSight challenge evaluated models' abilities to generate 1-4-week ahead forecasts
Figure 11: _ILI consultations as a percentage of total weekly GP consultations forecast 4 weeks ahead using the DMDEnKF (top) and Hankel-DMDEnKF (bottom). The DMD reconstruction captures the shape of the data well but is unstable, whereas the Hankel-DMD reconstruction is less sensitive to noise but over-smooths. The DMDEnKF and Hankel-DMDEnKF forecasts help reduce these issues present in their respective reconstructions, but both suffer from some degree of lag in their predictions._
Figure 12: _ILI consultations as a percentage of total weekly GP consultations, forecast 4 weeks ahead using the DMDEnKF (top) and Hankel-DMDEnKF (bottom). A 95% confidence interval for each forecast, and historical baseline predictions are also shown. The Hankel-DMDEnKF forecasts are smoother than those of the DMDEnKF, but both forecasts contain some lag. The real data always lies within the DMDEnKF’s confidence interval but not the Hankel-DMDEnKF’s, however this is likely due to the DMDEnKF’s confidence interval being significantly wider than that of the Hankel-DMDEnKF._
known as short-term targets over the course of a flu season, as well as other longer term targets, known as seasonal targets, before the season had begun. The DMDEnKF is primarily intended to be a tool for tracking and short-term forecasting, hence we focus on forecasting these short-term targets only. For this purpose we used two different metrics: the log probability measure (log score), slightly adjusted from the FluSight challenge as used in [50], and the mean squared error, due to its popular use in regression problems. The log score represents the geometric average probability of each model's prediction being accurate, with accuracy deemed as a forecast within \(+/-0.5\) of the true ILI consultation rate. The higher the log score, the better the forecast. Metrics are calculated from week 40 to week 20 of the following year to prioritize evaluation of forecasts during the flu season, and we use the 6 full seasons from 2012/13 to 2017/18.
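Given an ensemble of forecast values (or samples from the baseline KDE) for one target, the per-forecast log probability can be estimated as below. The probability floor is a numerical safeguard of our own; the geometric-average score reported in Table 2 would then be the exponential of the mean of these log values over all forecasts.

```python
import numpy as np

def forecast_log_prob(truth, forecast_samples, tol=0.5):
    """Log of the probability mass the forecast places within +/- tol of the
    true ILI consultation rate, estimated from samples of the forecast
    distribution (e.g. the propagated EnKF ensemble)."""
    samples = np.asarray(forecast_samples)
    p = np.mean(np.abs(samples - truth) <= tol)   # fraction of mass near the truth
    return np.log(max(p, 1e-10))                  # floor avoids log(0)
```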
The results for the historical baseline prediction and the DMDEnKF/Hankel-DMDEnKF's forecasts at a national level can be seen in Table 2. As one would expect, the accuracy of both DMDEnKFs degrades as they make predictions further into the future. The DMDEnKF achieves a higher log score and a lower mean squared error than the historical baseline for forecast horizons up to 3 weeks ahead, and at 4 weeks ahead it attains a similar level of forecast skill. For forecasts of 5 or more weeks ahead, the DMDEnKF is unable to outperform the historical baseline in either metric. The Hankel-DMDEnKF consistently underperforms against the DMDEnKF in both metrics over these short forecast horizons. The top 3 statistical models and top 3 mechanistic models in the FluSight challenge achieved log scores of 0.32 and 0.3 respectively for their 4-week ahead forecasts, hence the DMDEnKF has lower (but comparable) forecasting skill than current state of the art ILI models. As the forecast horizon is extended up to 12 weeks ahead, the DMDEnKF's forecast scores continue to decrease monotonically, whereas the Hankel-DMDEnKF's log scores for 9-12 weeks ahead are no worse than those for 5-8 weeks ahead. As such, the DMDEnKF is preferred for short-term forecasting, while the Hankel-DMDEnKF is considered superior when forecasting over longer timescales.
Figure 13 shows the log scores for the 4-week ahead DMDEnKF forecast, and how these compare to
\begin{table}
\begin{tabular}{||c c c||} \hline Forecast & Log Score & Mean Squared Error \\ \hline \hline Historical Baseline & 0.28 & 1.24 \\ \hline
1-week ahead DMDEnKF & 0.49 & 0.33 \\ \hline
2-week ahead DMDEnKF & 0.38 & 0.61 \\ \hline
3-week ahead DMDEnKF & 0.32 & 0.87 \\ \hline
4-week ahead DMDEnKF & 0.27 & 1.16 \\ \hline
1-week ahead Hankel-DMDEnKF & 0.41 & 0.49 \\ \hline
2-week ahead Hankel-DMDEnKF & 0.33 & 0.70 \\ \hline
3-week ahead Hankel-DMDEnKF & 0.29 & 0.97 \\ \hline
4-week ahead Hankel-DMDEnKF & 0.23 & 1.26 \\ \hline \end{tabular}
\end{table}
Table 2: _The log scores and mean squared errors for the DMDEnKF and Hankel-DMDEnKF with differing forecast horizons, and the historical baseline prediction. The DMDEnKF achieves a higher log score and a lower mean squared error than the historical baseline for forecast horizons up to 3 weeks ahead, and attains a similar level of forecast skill 4 weeks ahead. The Hankel-DMDEnKF consistently underperforms against the DMDEnKF in both metrics over these short forecast horizons. Scores are calculated over the 6 flu seasons from 2012/13 to 2017/18._
the scores attained by the historical baseline prediction at an age and regional level. The Hankel-DMDEnKF scored similarly to the DMDEnKF across all ages and regions, so its breakdown is rather similar to that of the DMDEnKF, and we do not include it to avoid redundancy. In the DMDEnKF's log scores, we see a major pattern in the older age groups scoring higher and hence being better predicted than the younger demographics. This pattern does not persist when the historical baseline's scores are removed, indicating it is a more general trait of the data as opposed to a specific quality of the DMDEnKF's modelling technique. There is also a significant difference in the predictability from region to region. For example, region 1 was the most predictable region for both the DMDEnKF and historical baseline, which is consistent with the findings in [50]. However, the DMDEnKF improved on the historical baseline's forecast for only two of the four age groups in this region. In [50] it was found that the most overall improvement gained by forecasting for a region using a model as opposed to the historical baseline prediction also occurred in region 1, so one would expect to see improvements by the DMDEnKF over the historical baseline in all four age groups. As log score is heavily influenced by the amount of uncertainty in a forecast, it is possible that the covariance matrices used in the DMDEnKF were too large for this region. Hence, setting the magnitude of the DMDEnKF's variance on a region by region basis could lead to better results and more accurate forecasts. Region 6 was the worst forecast region by the DMDEnKF, and the historical baseline predictions were slightly more accurate. Again, this is consistent with [50] where region 6 was the lowest scoring region for the models. In that work however, region 6 experienced the second most improvement by using a model over the historical baseline prediction. Hence, for this region a larger variance within the DMDEnKF may have been more appropriate to account for its extra unpredictability, further supporting the idea of varying the variance by region in the future.
Figure 13: _Log scores over all ages and regions for the DMDEnKF’s 4-week ahead forecast (left), followed by those same scores with the log scores of the historical baseline prediction subtracted (right). The Hankel-DMDEnKF scored similarly to the DMDEnKF across all ages and regions, so we do not include its breakdown to avoid redundancy. In the top figure, the generally increasing intensity of red as one moves down the age groups shows the DMDEnKF performing more accurately for older age groups. The bottom figure’s varying areas of red/blue shows the DMDEnKF/historical baseline vary in superiority of forecasting skill depending on the age and region being forecast, with the historical baseline scoring more highly for most regions._
### Varying the truncation rank
Having analysed the DMDEnKF and Hankel-DMDEnKF's ILI forecasting with 8 DMD modes, we now investigate the effect different truncation ranks (\(r\)) have on their performance in Figure 14. From Figure 14(a), the subjective process of identifying an "elbow" in the data could lead an observer to determine a suitable rank for truncation as low as 4 or as high as 12. Application of the algorithm of Gavish and Donoho for identifying the optimal truncation threshold [26] also finds the truncation rank to be 12, hence we will focus on investigating values of \(r\) in the interval from 4 to 12.
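A simple way to automate a rank choice from the singular values is sketched below. This is a cumulative-variance cut-off for illustration only, not the Gavish and Donoho estimator of [26]; the energy threshold is an arbitrary parameter of ours.

```python
import numpy as np

def truncation_rank(X, energy=0.95):
    """Smallest r whose leading singular values retain the given fraction of
    the total variance in the data matrix X."""
    s = np.linalg.svd(X, compute_uv=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)     # cumulative variance fraction
    return int(np.searchsorted(cum, energy) + 1)
```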
Figures 14(b) and 14(d) show how the metrics we use to measure the DMDEnKF and Hankel-DMDEnKF's forecasting skill vary with \(r\). An ideal forecast will have a high log score, reflecting a relatively tight and accurate probability distribution, with a low mean squared error indicating a point estimate close to the true percentage of ILI consultations. For both methods, log score is maximised and mean squared error minimised by \(r=8\), indicating this is the optimal rank to truncate at for our models. For \(r=4\), we have the simplest model tested, hence it has a low degree of uncertainty resulting in a relatively high log score, however it is too simple to properly model the system so receives a high mean squared error. By increasing \(r\), we allow the DMDEnKFs more freedom to capture complexity within the system, resulting in a more accurate representation of the true dynamics and hence a generally lower mean squared error. When the number of eigen
Figure 14: _On the left, the \(\%\) of the total variance in the data (top) and delay-embedded data (bottom), dependent on the number of singular values that are retained (\(r\)). An “elbow” in the data occurs around \(r=8\) where we choose to truncate, however determining the exact position of the “elbow” is subjective and could be considered anywhere from \(r=4\) to \(r=12\). On the right, the log score and mean squared errors for 4-step ahead forecasts generated using the DMDEnKF (top) and Hankel-DMDEnKF (bottom) with differing values of \(r\). In both cases, log score is maximised and mean squared error minimised for \(r=8\)._
values is increased too far, however, it begins modelling elements of the noise in the system, which negatively impacts future predictions, as seen in the increase in mean squared errors for \(r>8\). The additional freedom afforded to the DMDEnKF by increasing \(r\) also means the model contains more parameters, each of which has an associated degree of uncertainty. This causes the overall forecast's probability distribution to become more spread out and, when no longer offset by the increased model accuracy up to \(r=8\), reduces the forecast's log score.
## 5 Conclusion
To conclude, we have defined two new algorithms, the DMDEnKF and Hankel-DMDEnKF, that combine dynamic mode decomposition and Hankel dynamic mode decomposition respectively with ensemble Kalman filtering, to update state and temporal mode estimates of a dynamical system as new data becomes available. When applied to simple, synthetic systems with a time varying parameter and low measurement noise, the DMDEnKFs performed similarly to other iterative DMD variants tested in tracking the system's time varying parameter and forecasting future states. As measurement noise was increased, the DMDEnKFs outperformed the other methods tested in both metrics, and the Hankel-DMDEnKF produced more stable forecasts than those of the DMDEnKF. Both DMDEnKFs achieved similar performance levels to their equivalent DMD Particle Filters (an alteration to the DMDEnKF algorithms where the ensemble Kalman filters were switched for Particle Filters), while requiring significantly fewer ensemble members. When forecasting influenza-like illness across age groups and HHS regions in the US using data from the CDC, the DMDEnKF produced more accurate forecasts than a historical baseline prediction up to 3 weeks ahead, and forecasts approximately as accurate 4 weeks ahead. The Hankel-DMDEnKF produced less accurate forecasts for these short-term targets than the DMDEnKF, however in general its forecasts were more stable. Also, the Hankel-DMDEnKF was able to identify the presence of a mode with period 1 year, which is strongly visible in the data, yet not identified by the DMDEnKF. Both DMDEnKFs exhibited lower forecasting skill than current state of the art influenza-like illness models.
A natural extension of the DMDEnKF would be to apply extended/kernel DMD in the spin-up DMD phase, allowing the algorithm to be used more effectively on dynamical systems that act nonlinearly in their measured states. Instead of taking the observed values alone as the system's state \(\mathbf{x}_{k}\), these variants use for the state a collection of functions on the observables \(\mathbf{g}(\mathbf{x}_{k})\), which often increases the state's dimension \(n\). The EnKF is well suited to this pairing, as it scales more computationally efficiently in the state dimension than other Kalman filtering methods [23]. The best choice of the collection of functions \(\mathbf{g}(\mathbf{x}_{k})\) as an embedding for nonlinear systems so that DMD may be effectively utilized is an interesting area of future work. Many methods have been developed that propose ways of generating \(\mathbf{g}(\mathbf{x}_{k})\), for example using deep learning [41] or reservoir computing [28], and this remains a promising avenue for future work.
## Code availability
Codes used to produce the results in this paper are available at [https://github.com/falconical/DMDEnKF](https://github.com/falconical/DMDEnKF).
## Data availability statement
All data used to produce the results in this paper will be made available upon reasonable request.
## Acknowledgements
This work was supported by the UKRI, whose Doctoral Training Partnership Studentship helped fund Stephen Falconer's PhD. He would also like to thank Nadia Smith and Spencer Thomas from the National Physical Laboratory for their valuable discussions.
|
2310.05245 | Influence of Camera-LiDAR Configuration on 3D Object Detection for
Autonomous Driving | Cameras and LiDARs are both important sensors for autonomous driving, playing
critical roles in 3D object detection. Camera-LiDAR Fusion has been a prevalent
solution for robust and accurate driving perception. In contrast to the vast
majority of existing arts that focus on how to improve the performance of 3D
target detection through cross-modal schemes, deep learning algorithms, and
training tricks, we devote attention to the impact of sensor configurations on
the performance of learning-based methods. To achieve this, we propose a
unified information-theoretic surrogate metric for camera and LiDAR evaluation
based on the proposed sensor perception model. We also design an accelerated
high-quality framework for data acquisition, model training, and performance
evaluation that functions with the CARLA simulator. To show the correlation
between detection performance and our surrogate metrics, we conduct experiments
using several camera-LiDAR placements and parameters inspired by self-driving
companies and research institutions. Extensive experimental results of
representative algorithms on nuScenes dataset validate the effectiveness of our
surrogate metric, demonstrating that sensor configurations significantly impact
point-cloud-image fusion based detection models, which contribute up to 30%
discrepancy in terms of the average precision. | Ye Li, Hanjiang Hu, Zuxin Liu, Xiaohao Xu, Xiaonan Huang, Ding Zhao | 2023-10-08T17:37:32Z | http://arxiv.org/abs/2310.05245v2 | # Influence of Camera-LiDAR Configuration on 3D Object Detection for Autonomous Driving
###### Abstract
Cameras and LiDARs are both important sensors for autonomous driving, playing critical roles in 3D object detection. Camera-LiDAR fusion has been a prevalent solution for robust and accurate autonomous driving perception. In contrast to the vast majority of existing works that focus on how to improve the performance of 3D target detection through cross-modal schemes, deep learning algorithms, and training tricks, we devote attention to the impact of sensor configurations on the performance of learning-based methods. To achieve this, we propose a unified information-theoretic surrogate metric for camera and LiDAR evaluation based on the proposed sensor perception model. We also design an accelerated high-quality framework for data acquisition, model training, and performance evaluation that functions with the CARLA simulator. To show the correlation between detection performance and our surrogate metrics, we conduct experiments using several camera-LiDAR placements and parameters inspired by self-driving companies and research institutions. Extensive experimental results of representative algorithms on the NuScenes dataset validate the effectiveness of our surrogate metric, demonstrating that sensor configurations significantly impact point-cloud-image fusion based detection models, which contribute up to a 30% discrepancy in terms of average precision.
## I Introduction
Multi-sensor fusion plays an important role in autonomous driving perception. Existing 3D object detection algorithms based on sensor fusion mainly use cameras and LiDAR. Cameras capture the rich texture and features of 3D objects in the environment [1, 2], and LiDARs obtain the geometric characteristics of 3D objects [3, 4, 5]. In this paper, we study the perception system from the perspective of the physical design of multiple sensors, rather than fusion algorithms, and focus on the influence of camera-LiDAR configurations on 3D object detection performance.
Well-designed sensor configurations are critical for autonomous driving, as improper camera and LiDAR configurations can lead to poor input data quality, which can affect detection performance [6]. Previous works [7, 8, 9] have explored a number of novel camera-LiDAR fusion perception algorithms, achieving excellent accuracy on autonomous driving datasets, such as NuScenes [10] and Waymo [11]. However, only a few preliminary arts [12, 13, 14, 15] study the perception problem from the sensor-configuration perspective, e.g. different placements or parameters of sensors. Most current research focuses on LiDAR [16, 17] and camera [18] configurations for sensing performance separately, but rarely establishes consistent criteria for both sensors. To this end, we aim to investigate the unified evaluation methods for both camera and LiDAR configurations, as shown in Fig. 1.
Fast evaluation of 3D detection performance under different camera and LiDAR configurations in the real world is quite challenging due to the laboriousness of data acquisition, model training, and performance testing [12]. Besides, efficiently comparing different sensor configurations for better 3D perception remains an open and critical question under a general trend of using more multi-modal sensors in autonomous driving [19]. To this point, this paper investigates the impact of camera-LiDAR configuration on 3D object detection performance, and proposes a novel and unified framework for accelerating the evaluation of different camera-LiDAR configurations. The main contributions of this paper are summarized as follows:
* We establish a new systematic framework to efficiently evaluate the 3D detection performance of different camera-LiDAR configurations without the effort-costly loop of data collection, model training and evaluation.
* We propose an easy-to-compute unified surrogate metric based on the sensing mechanisms of both cameras and LiDARs, effectively characterizing the sensing procedure and accelerating the evaluation of perception performance.
* Extensive experimental results in CARLA validate the correlation between our surrogate metric and the detection
Fig. 1: LiDAR configurations of (a) Center, (b) Pyramid, (c) Line, (d) Trapezoid, and camera configurations of (e) Wide and (f) Narrow. |
2308.07618 | Vision-based Semantic Communications for Metaverse Services: A Contest
Theoretic Approach | The popularity of Metaverse as an entertainment, social, and work platform
has led to a great need for seamless avatar integration in the virtual world.
In Metaverse, avatars must be updated and rendered to reflect users' behaviour.
Achieving real-time synchronization between the virtual bilocation and the user
is complex, placing high demands on the Metaverse Service Provider (MSP)'s
rendering resource allocation scheme. To tackle this issue, we propose a
semantic communication framework that leverages contest theory to model the
interactions between users and MSPs and determine optimal resource allocation
for each user. To reduce the consumption of network resources in wireless
transmission, we use the semantic communication technique to reduce the amount
of data to be transmitted. Under our simulation settings, the encoded semantic
data only contains 51 bytes of skeleton coordinates instead of the image size
of 8.243 megabytes. Moreover, we implement Deep Q-Network to optimize reward
settings for maximum performance and efficient resource allocation. With the
optimal reward setting, users are incentivized to select their respective
suitable uploading frequency, reducing down-sampling loss due to rendering
resource constraints by 66.076\% compared with the traditional average
distribution method. The framework provides a novel solution to resource
allocation for avatar association in VR environments, ensuring a smooth and
immersive experience for all users. | Guangyuan Liu, Hongyang Du, Dusit Niyato, Jiawen Kang, Zehui Xiong, Boon Hee Soong | 2023-08-15T07:56:33Z | http://arxiv.org/abs/2308.07618v1 | # Vision-based Semantic Communications for Metaverse Services: A Contest Theoretic Approach
###### Abstract
The popularity of Metaverse as an entertainment, social, and work platform has led to a great need for seamless avatar integration in the virtual world. In Metaverse, avatars must be updated and rendered to reflect users' behaviour. Achieving real-time synchronization between the virtual location and the user is complex, placing high demands on the Metaverse Service Provider (MSP)'s rendering resource allocation scheme. To tackle this issue, we propose a semantic communication framework that leverages contest theory to model the interactions between users and MSPs and determine optimal resource allocation for each user. To reduce the consumption of network resources in wireless transmission, we use the semantic communication technique to reduce the amount of data to be transmitted. Under our simulation settings, the encoded semantic data only contains 51 bytes of skeleton coordinates instead of the image size of 8.243 megabytes. Moreover, we implement Deep Q-Network to optimize reward settings for maximum performance and efficient resource allocation. With the optimal reward setting, users are incentivized to select their respective suitable uploading frequency, reducing down-sampling loss due to rendering resource constraints by 66.076% compared with the traditional average distribution method. The framework provides a novel solution to resource allocation for avatar association in VR environments, ensuring a smooth and immersive experience for all users.
semantic communications, Metaverse, contest theory, resource allocation
## I Introduction
Metaverse is a shared and persistent three-dimensional (3D) virtual reality (VR) space that integrates multiple technologies, such as sensing, communication, and computing while integrating virtuality and reality. Based on the initial concept proposed in Neal Stephenson's 1992 science fiction novel _Snow Crash_, there has been substantial growth in the popularity of Metaverse as an entertainment, social, and working platform [1]. Undoubtedly, such popularity has led to the need for seamless avatar integration in the virtual world. As the first step, avatars need to be updated and rendered to represent users in Metaverse in a way that accurately reflects their behaviour. Recently, with the advancement of Artificial Intelligence (AI), several studies [2, 3] have deployed Human pose estimation (HPE) as a tool to acquire users' behaviour information for avatar association purposes.
Even though the deployment of HPE for avatar association has been researched, the allocation of server resources for users' avatar synchronization in such scenarios has yet to be well discussed [4]. Here, resource allocation refers to distributing computational resources, such as rendering power and memory. In HPE for avatar association, it is pertinent to consider resource allocation, as the MSPs must allocate sufficient resources to accurately update the avatars of all users in real-time. This becomes increasingly relevant in virtual environments where high-fidelity, real-time avatar action is crucial for providing an immersive experience. The MSPs must ensure that computational resources are distributed fairly among users and that no single user's experience is negatively impacted by resource constraints. Additionally, the MSP should allocate resources dynamically based on changes in the number of users and the complexity of their movements. In this way, the MSP can accommodate different services as well as different users with a better Quality of Service (QoS).
In response to this need, we propose a contest theory-based semantic communication framework. This framework leverages the principles of contest theory, i.e., a branch of non-cooperative game theory that studies decision-making in situations where multiple individuals or users have conflicting interests and do not cooperate to achieve a common goal. In the context of HPE information for avatar association, the conflicting interests are each user's updating demands and the server's limited rendering resources. The contributions of this paper can be summarised as follows:
* We deploy HPE to perform semantic encoding of pose information. This encoding reduces the amount of data that must be transmitted over the network. Test results that demonstrate the effectiveness of the encoding scheme are also presented.
* We propose a framework that uses a game-theoretic approach to model the interactions between the users and the MSPs and determines an optimal resource allocation for each user. The framework considers factors such as the rendering demands, i.e., each user's movement, the server's available resources, and the desired QoS. By implementing this contest theory-based semantic communication framework, we aim to provide a novel solution to the challenge of resource allocation in deploying HPE for avatar association and ensure that the VR experience is smooth and immersive for all users.
* We deploy a Deep Q-Network (DQN) to optimize award settings to minimize the down-sampling loss induced by limited resources. The results show the effectiveness of DQN for reward optimization and its ability to reduce both the fluctuations caused by the discrete action space and the total down-sampling loss. The framework provides a novel solution to resource allocation for avatar association in VR environments.
## II Related works
### _Dynamic resource allocation for MSP_
The growing Metaverse market has increased the demand for virtual services, including socializing, entertainment, and education [5]. This has highlighted the need for resource allocation by MSPs. A key concern in resource allocation is supporting synchronization between the Metaverse and the real world. For instance, in [6], the digital twin association was discussed as an important factor affecting immersion and users' quality of experience (QoE). Similarly, the importance of dynamic resource allocation frameworks for Metaverse services was emphasized in [7]. These works demonstrate the significance of resource allocation and synchronization for optimal user experience in Metaverse services.
### _3D human pose estimation_
HPE is a task in computer vision that involves identifying the position and joints of a human body in a given scene. Various approaches have been proposed, such as the top-down approach [8], which recognizes the human as an object first, then identifies the joints and forms the skeleton. Alternatively, the bottom-up approach [8] recognizes the human joints first and assembles them into the skeleton. In 3D pose estimation, one approach involves using stereoscopic or multiple cameras to obtain depth information to establish the 3D skeleton. Another approach is to use a monocular camera to obtain a 2D skeleton and perform 2D-3D lifting to reconstruct the 3D pose [9, 10]. These methods have been applied in various applications, such as fitness and rehabilitation, and Metaverse services. The emergence of these skeleton capture techniques has created ample opportunities for vision-based avatar associations.
### _Contest theory_
Contests can be defined as games in which participants invest resources with a certain probability of winning prizes. This concept has been widely utilized for solving different problems. For instance, authors in [11] utilized contest theory to balance requests and services among users in a service exchange application with low-demand service. Moreover, a contest-based incentive mechanism was introduced in [12] to maximize participants' efforts in improving the QoS of MSPs.
Inspired by the above studies, this paper proposes a novel incentive framework for a Metaverse service by using a joint approach of contest theory and deep reinforcement learning. Our approach utilizes the semantic encoding method through pose estimation to minimize the quantity of sensing data that requires transmission. Additionally, we present a dynamic resource allocation strategy based on contest theory for the MSP's avatar rendering service to further enhance the QoS.
## III System Overview
### _Cameras in Physical World_
As illustrated in Fig. 1, cameras from Closed Circuit Television (CCTV), computers, and smart devices capture real-time images to obtain visual data for 3D body key-points detection. Such data will then be encoded by extracting users' pose skeletons to obtain semantic data. The extraction is performed by smart devices or MSPs:
* The extraction is performed by the sensing devices: capturing images, processing them to extract the user's body keypoints, and transmitting the extracted semantic pose keypoints will incur large latency due to limited storage and computing resources.
* The extraction is performed by the MSPs: the captured images, instead of the encoded semantic information, are transmitted to the MSPs. This creates a large data flow and a threat to users' information privacy, since visual images contain far more information than the needed keypoints.
### _3D Body Key-points Detection Encoding at Tier-2 Devices_
One of this paper's main focuses is capturing and maintaining users' motion characteristics for associating digital twin avatars. As illustrated in Fig. 2, to extract such semantic pose
Fig. 1: System diagram of the proposed framework
Fig. 2: Structure of the three-stage encoder
data from the received visual data, Tier-2 devices such as edge nodes or fog nodes conduct 3D body key-points recognition with respect to the transmitter. Considering that frequently only a monocular camera is available, a monocular 3D body key-points extraction encoder proposed by Meta (formerly Facebook) [10] is deployed. This semantic encoder consists of three stages: a human bounding box detection model, a top-down HPE model, and a 2D-3D keypoints lifting model. These models [13] are used as encoders before the incentive mechanism, and the selection of the models depends on the available computational power and specifications of the tier-2 devices. Other models may be substituted as needed to achieve the desired levels of accuracy and precision for the respective service.
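To make the data flow of this three-stage encoder concrete, the sketch below wires the stages together around the \((T,17,3)\) skeleton output assumed in the later sections. The three stage functions are placeholders for whichever detector, top-down 2D pose model and 2D-3D lifting model a tier-2 device actually runs; none of the names refer to the specific models of [10] or [13] or to any particular library API.

```
# Schematic three-stage semantic encoder (placeholder implementations only): the
# real detector / HPE / lifting models would replace the dummy bodies below.
import numpy as np

def detect_person_bbox(frame):
    # Stage 1 placeholder: a real system would run a person detector here.
    h, w = frame.shape[:2]
    return (0, 0, w, h)

def estimate_2d_keypoints(frame, bbox):
    # Stage 2 placeholder: a real system would run a top-down 2D pose model here.
    return np.zeros((17, 2))

def lift_2d_to_3d(kps_2d_seq):
    # Stage 3 placeholder: a real system would run a temporal 2D-3D lifting model.
    return np.zeros((kps_2d_seq.shape[0], 17, 3))

def encode_semantics(frames):
    """Run the three stages and return the (T, 17, 3) skeleton stream to upload."""
    kps_2d = [estimate_2d_keypoints(f, detect_person_bbox(f)) for f in frames]
    return lift_2d_to_3d(np.stack(kps_2d))

frames = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(4)]
print(encode_semantics(frames).shape)   # (4, 17, 3)
```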
### _Contest Theory_
MSPs typically serve multiple services with multiple users simultaneously. The MSP's resources are distributed among users in a fair manner, which means that users who move rapidly or have significant movements will experience nonconsecutive avatar updates. Meanwhile, users who move slowly or have smaller movements will receive extra resources to update their avatars. In the worst case, lots of resources are wasted on users with static motion. Current solutions include increasing the smoothness of avatar linkage by allocating enough resources for all users, assuming they all have significant movement. However, such a practice could decrease the total number of users a single server could accommodate.
Note that keypoint upload rates directly impact avatar refresh rates and rendering of MSPs, which further affect the QoS. Thus, we introduce an incentive mechanism for tier-2 devices in the service framework based on contest theory. In order to win the award, the transmitter needs to upload extracted keypoints at the proper frequency. As a result, this mechanism improves MSP's overall rendering and refresh resource allocation, which further enhances QoS.
## IV Contest Theory-based Resource Allocation
To improve the user's experience with Metaverse services and optimize the resource usage of MSPs, an efficient incentive mechanism is designed to encourage tier-2 devices to upload semantic data at an appropriate rate. In particular, our objective is to incentivize all contestants to choose a suitable update frequency with a fixed total payment, rather than upload as frequently as possible. A suitable solution to this problem is to use payments to host a contest among transmitters. We define the contest as a game in which contestants, i.e., the tier-2 devices for an individual user, must choose a suitable uploading frequency to earn awards based on the down-sampling loss. This section examines the payoff of tier-2 devices and the payment of MSPs utilizing contest theory [14].
### _Capability, Effort and Award_
Under the aforementioned scenario, the contestants are tier-2 devices, and the respective exerted efforts are the semantic data updating frequencies. Let \(f_{n}\) denote the uploading frequency of the \(n^{\mathrm{th}}\) contestant in \(T\) updates. Therefore, the \(n^{\mathrm{th}}\) contestant's cost function can be defined as a twice differentiable function, i.e., \(C(a_{n},f_{n})\), complying with [14]
\[\left\{\begin{array}{l}\frac{\partial C(a_{n},f_{n})}{\partial f_{n}}>0, \\ \frac{\partial C(a_{n},f_{n})}{\partial a_{n}}<0,\\ \frac{\partial^{2}C(a_{n},f_{n})}{\partial a_{n}\partial f_{n}}<0,\end{array}\right. \tag{1}\]
where \(a_{n}\) denotes the capability of the \(n^{\mathrm{th}}\) contestant. The inequalities in (1) show that the more capable a contestant is, the more likely it is to choose a larger \(f_{n}\) as its upload frequency. Depending on the chosen effort \(f_{n}\) that the tier-2 device
exerts, there is a loss between the real semantic signal and the sampled signal, as illustrated in Fig. 3. Given the same down-sampling frequency, users with larger movements will create more loss relative to the original signal. Such loss reflects the magnitude of each user's motion and is thus regarded as the capability of each user. Let \((X_{ij},Y_{ij},Z_{ij})\) be the 3D coordinates of the \(j^{\mathrm{th}}\) keypoint out of \(k\) keypoints of the \(i^{\mathrm{th}}\) update, and \((X^{\prime}_{ij},Y^{\prime}_{ij},Z^{\prime}_{ij})\) the rendered 3D coordinates according to \(f_{n}\) on the Metaverse server. The capability \(a_{n}\) can be expressed as
\[a_{n}=\Delta(f_{n})=\sqrt{\frac{1}{T}\times\sum_{i=1}^{T}\sum_{j=1}^{k}d}, \tag{2}\]
where \(d=\left(X_{ij}-X^{\prime}_{ij}\right)^{2}+\left(Y_{ij}-Y^{\prime}_{ij}\right)^{2}+\left(Z_{ij}-Z^{\prime}_{ij}\right)^{2}\), and \(\sqrt{d}\) is the Euclidean distance between the 3D coordinates of the original semantic skeleton keypoint and those of the rendered down-sampled skeleton keypoint.
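For reference, a small sketch of Eq. (2) is given below: it renders a down-sampled stream by holding the last uploaded frame (a zero-order-hold assumption about the server's rendering, which the text does not spell out) and returns the RMS keypoint displacement. Shapes follow the paper's \(T\) frames and \(k=17\) keypoints; the toy data are illustrative.

```
# Sketch of Eq. (2): RMS keypoint displacement between the original skeleton stream
# and the stream rendered from a down-sampled upload (zero-order hold is assumed).
import numpy as np

def downsample_render(skeleton, f_n, M_n):
    # skeleton: (T, k, 3) stream captured at M_n fps; keep every (M_n // f_n)-th
    # frame and hold it until the next uploaded frame arrives.
    step = M_n // f_n
    idx = (np.arange(skeleton.shape[0]) // step) * step
    return skeleton[idx]

def capability(skeleton, f_n, M_n):
    rendered = downsample_render(skeleton, f_n, M_n)
    d = np.sum((skeleton - rendered) ** 2, axis=(1, 2))  # summed squared error per frame
    return np.sqrt(d.mean())                             # Eq. (2)

# Example: T = 60 frames, k = 17 keypoints, M_n = 60 fps, f_n = 12 fps.
rng = np.random.default_rng(1)
skel = np.cumsum(rng.normal(scale=0.01, size=(60, 17, 3)), axis=0)
print(capability(skel, f_n=12, M_n=60))
```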
As the capability is dependent on the chosen uploading frequency, for a given user with a fixed movement size, a more capable user will choose a larger frequency, resulting in a lower loss. By taking (1) into consideration, the cost function can be expressed as \(C(a_{n},f_{n})=f_{n}/a_{n}\). We sort the \(N_{T}\) contestants in descending order according to their exerted efforts to obtain the effort list \(\{f_{1},f_{2},\ldots,f_{N_{T}}\}\). The \(n^{\mathrm{th}}\) contestant receives award \(r_{n}(1\leq n\leq N_{A}\leq N_{T})\), where \(N_{A}\) is the number of available awards. As the reward is given in descending order,
Fig. 3: Two examples of how down-sampling loss is created
\(r_{1}\geq r_{2}\geq\cdots\geq r_{N_{A}}\), and when \(n>N_{A}\), \(r_{n}=0\). The total award is fixed, as it is often necessary in game theory and mechanism design to ensure that the total reward paid to participants is fixed. This rule is implemented to ensure a fair game is played and to prevent collusion between users [14]. Thus, the total award is expressed as \(r_{T}=\sum_{n=1}^{N_{A}}r_{n}=R\).
### _Utility_
The utility of the \(n^{\rm th}\) contestant can be derived as follows:
\[U(a_{n},f_{n})=\left\{\begin{aligned} & r_{m}-C(a_{m},f_{m}),& \text{for the }\mathrm{m}^{\rm th}\ \mathrm{rank}\ \mathrm{user},\\ &-C(a_{n},f_{n}),&\text{$n>N_{A}$}.\end{aligned}\right. \tag{3}\]
Under such a scenario, the users only possess knowledge of their own capability without access to others' data. The cumulative distribution of capability in the population is represented by a continuous function \(\mathcal{P}(\Delta(f_{n}))\). Here, we assume that the \(\mathcal{P}\) follows a uniform distribution [12]. The considered distribution is easily extensible to other distributions. For the \(n^{\rm th}\) contestant, the probability that the other contestant's capability is smaller than its capability, i.e.,\(\mathcal{P}(\Delta(f_{n}))\) is
\[\mathcal{P}(\Delta(f_{n}))=\left\{\begin{aligned} &\frac{\Delta_{\max}-\Delta(f_{n})}{\Delta_{\max}},& 0\leq\Delta(f_{n})\leq\Delta_{\max},\\ & 0,&\mathrm{otherwise},\end{aligned}\right. \tag{4}\]

where \(\Delta_{\max}\) denotes the maximum value of \(\Delta(f_{n})\) over the user population.
With the obtained \(\mathcal{P}(\Delta(f_{n}))\) and a fixed payment pool, the expected number of contestants who win before the \(n^{\rm th}\) contestant is \((N_{T}-1)\left(1-\mathcal{P}(\Delta(f_{n}))\right)\). Since the reward pool is fixed, the expected payment received by the \(n^{\rm th}\) contestant is equal to the probability of winning the \(i^{\rm th}\) payment (i.e., exactly \(i-1\) contestants rank higher than the \(n^{\rm th}\) contestant) times the respective \(i^{\rm th}\) payment, summed over \(i\). Therefore, the expected payment received by the \(n^{\rm th}\) contestant can be calculated as follows:
\[E(\Delta(f_{n}),r_{m})=\sum_{i=1}^{N_{A}}u(r_{m})\binom{N_{T}-1}{i-1}\mathcal{P}^{N_{T}-i}(\Delta(f_{n}))\left(1-\mathcal{P}(\Delta(f_{n}))\right)^{i-1}. \tag{5}\]
### _Optimal Loss Analysis_
As illustrated in Fig. 1, the effort of a contestant can be measured in terms of the semantic keypoints uploading frequency. Let \(M_{n}\) denote the frequency at which the \(n^{\rm th}\) contestant receives the keypoints data and \(f_{n}\) be the uploading frequency. Due to the sampling nature of visual information, \(f_{n}\) must be a factor of \(M_{n}\) to avoid uneven sampling. Furthermore, the total effort of all users cannot exceed the allocated resource \(f_{T}\). With the constraint of evenly sampling the visual signal, the available efforts for selection are limited to the factors of \(M_{n}\). With \(\mathbb{F}(M_{n})\) denoting the factor set of \(M_{n}\), the effort is selected as follows:
\[f_{n}=\arg\max E(\Delta(f^{*}),r_{m}),\quad f^{*}\in\mathbb{F}(M_{n}). \tag{6}\]
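The selection rule of Eqs. (4)-(6) can be sketched as follows. The capability distribution follows the uniform assumption of Eq. (4); taking \(u(\cdot)\) as the identity and the illustrative \(\Delta(f)\) model are assumptions, since the paper leaves both unspecified beyond Eq. (5).

```
# Sketch of Eqs. (4)-(6): winning probability, expected payment, and the choice of
# an upload frequency among the factors of M_n.  u(r) = r is an assumption.
from math import comb

def win_prob(delta, delta_max):
    # Eq. (4): probability that another contestant's capability is smaller.
    return (delta_max - delta) / delta_max if 0 <= delta <= delta_max else 0.0

def expected_payment(delta, awards, N_T, delta_max):
    # Eq. (5): sum over award ranks i = 1..N_A.
    P = win_prob(delta, delta_max)
    return sum(r * comb(N_T - 1, i - 1) * P ** (N_T - i) * (1 - P) ** (i - 1)
               for i, r in enumerate(awards, start=1))

def choose_effort(M_n, delta_of, awards, N_T, delta_max):
    # Eq. (6): pick the factor of M_n with the largest expected payment.
    factors = [f for f in range(1, M_n + 1) if M_n % f == 0]
    return max(factors, key=lambda f: expected_payment(delta_of(f), awards, N_T, delta_max))

# Example: a user whose loss shrinks as 1/f, with the award split between two ranks.
delta_of = lambda f: 10.0 / f
print(choose_effort(M_n=60, delta_of=delta_of, awards=[50, 50, 0, 0], N_T=4, delta_max=10.0))
```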
Under the setting of the previous scenario, the server could easily adjust the resource allocation by adjusting the award setting \(r_{m}\). The server adjusts the awards to minimize the sum of down-sampling-induced loss by solving
\[\begin{aligned}\min_{N_{A},r_{1},\ldots,r_{N_{A}}}\ &\sum_{n=1}^{N_{T}}\Delta(f_{n},r_{n})\\ \mathrm{s.t.}\ &\sum_{n=1}^{N_{T}}f_{n}\leq f_{T},\quad\sum_{m=1}^{N_{A}}r_{m}=R.\end{aligned} \tag{7}\]
### _Optimal Award Setting Analysis_
The analysis of the award settings poses a challenge as it is a mixed-integer linear programming problem, which traditional numerical tools may not be able to accommodate. Therefore, in this section, we use the DQN to solve (7) and obtain the optimal award setting that minimizes the sum of the Loss, even when dealing with a flexible range of users and services.
**State space:** The state space is a set of values that represent the current award setting \([r_{1},r_{2},\ldots,r_{N_{T}}]\) and the effort exerted by the users \([f_{1},f_{2},\ldots,f_{N_{T}}]\).
**Action space:** The action space would naturally be constructed from the award setting of the receiver: \([r_{1},r_{2},\ldots,r_{N_{T}}]\). However, even with the limitation of a fixed total reward \(R\), this action space is still too large to explore. In the case where \(R=100\) and \(N_{T}=4\), the number of possible actions is 103883550. Such a large action space will lead to difficulties in convergence. Thus, we set the action to \([i_{1},i_{2},\ldots,i_{N_{T}}]\), where the values of \(i_{n}\) can be \(1\) (Increase), 0 (Stay), and \(-1\) (Decrease). Such a design greatly reduces the action space. For example, when \(N_{T}=4\), the number of possible actions is only 19.
**Reward function:** Rewards are set based on both the objective function and constraints related to (7). To be able to determine the optimal award setting between the instant reward term and the long-term reward term, the requirement-aware reward function is defined as
\[reward(f_{n},r_{m})=\left\{\begin{aligned} & 0,& \mathrm{Condition1}\\ & c\times\frac{1}{\sum_{n=1}^{N_{T}}\Delta(f_{n},r_{m})},& \mathrm{otherwise}\end{aligned}\right. \tag{8}\]
where \(c\) is a constant, and Condition 1 is \(\sum_{n=1}^{N_{T}}f_{n}<f_{T}\) or \(\sum_{n=1}^{N_{A}}r_{n}>R\).
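A compact sketch of this design is given below: the increase/stay/decrease action is applied to the current award vector and the requirement-aware reward of Eq. (8) is evaluated. The per-step increment applied to each award slot and the numbers in the example are assumptions, as the paper does not state them.

```
# Sketch of the action encoding and the requirement-aware reward of Eq. (8).
def apply_action(awards, action, step=1):
    # action: one entry in {-1, 0, +1} per award slot; `step` is an assumed increment.
    return [r + step * a for r, a in zip(awards, action)]

def reward_fn(efforts, awards, losses, f_T, R, c=100.0):
    # Eq. (8): zero when Condition 1 holds (as stated above), otherwise inversely
    # proportional to the total down-sampling loss.
    if sum(efforts) < f_T or sum(awards) > R:
        return 0.0
    return c / sum(losses)

# One illustrative environment step.
awards = apply_action([40, 30, 20, 10], action=[+1, 0, -1, 0])
print(awards, reward_fn(efforts=[60, 30, 20, 10], awards=awards,
                        losses=[3.1, 2.4, 1.0, 0.2], f_T=120, R=100))
```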
For clarity, the details of the proposed DQN-based approach are summarised in Algorithm 1. Additionally, we analyze the algorithm's time complexity [15]. We assume that the Deep Neural Network (DNN) deployed in the DQN contains \(L\) layers, with \(N_{l}\) neurons in the \(l^{\rm th}\) layer, and that \(\overline{I}\) local iterations are performed in each training round. The training time complexity of such a Q-network is \(O\left(\overline{I}\sum_{l=1}^{L}N_{l}N_{l+1}\right)\). The inference time complexity is \(O\left(\sum_{l=1}^{L}N_{l}N_{l+1}\right)\).
of skeleton information for each user. Each frame consisted of 17 3D coordinates, providing data for later analysis. Besides the semantic skeleton data, other important parameters used in the study are summarized in Table I. Although rendering resources are not typically measured in fps, we adopt this metric to reflect the resources allocated to each service. This is based on the observation that the processing and rendering resources required for each frame are consistent, given the constant size of each frame of skeleton information. Thus, we use the number of fps that the server can accommodate and render with the allocated resources as the measure.
### _Performance Evaluation_
#### Iv-B1 Users Movement
The Euclidean distance between each frame's pose is illustrated in Fig. 4 to validate the movements of each user. As shown in Fig. 4, User 1 possesses the largest average movement, followed by Users 2, 3, and then 4. In a real-world scenario, factors such as the user's distance to the camera, body size, and camera angle can affect these measurements. To mitigate these disturbances, a 2D-3D lifting process is implemented, which eliminates the distortions by using an origin-rebased function with a Dilated Temporal Convolutions network [10]. This results in a skeleton with a unified height, which captures only the user's actions and eliminates the effect of the camera angle. This provides an added layer of privacy, as the height and camera position are not made known to MSPs.
#### Iv-B2 User chosen effort under different award setting
We present experimental results on a system with varying reward settings, as specified in Table I. The results, presented in Fig. 5, show that when no constraints are imposed on the reward distribution for \(f_{T}\), awarding the prize solely to the first-place winner leads to users putting in their best efforts according to their abilities. Conversely, when the prize is evenly distributed among all users, all users exert minimal effort irrespective of their capabilities, which aligns with the finding in [12].
Put simply, when the reward is allocated only to the first winner \(([100,0,0,0])\), all users try to upload as frequently as possible to minimize down-sampling loss and increase their chances of winning, as shown in Fig. 5(a). However, when the reward is evenly distributed among users \(([25,25,25,25])\), each user receives the same reward regardless of their effort, resulting in all users choosing the lowest uploading frequency and exerting minimal effort, as shown in Fig. 5(c).
Furthermore, when the award is evenly distributed to the first two winners \(([50,50,0,0])\), users with higher capability choose to upload more frequently to secure the award, while lower capability users upload less frequently to reduce the cost of uploading as shown in Fig. 5(b). Such varied behaviours under different award setting scenarios demonstrate that users exert suitable effort based on the award settings.
#### Iv-B3 Comparison of the amount of data before and after pose estimation
With the proposed semantic encoding approach, a significant reduction in the amount of data can be achieved while preserving system performance. A single frame image signal, which initially consists of approximately 8.243 megabytes of data (\(1080\times 1908\times 32/8\)), is now reduced to 51 bytes (\(17\times 3\times 1\)) of encoded data. Additionally, such an encoding mechanism ensures the security of information by not allowing any MSPs to obtain excessive visual information from users. Video information no longer has to propagate through the entire network, thereby reducing the threat to confidentiality.
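The quoted payload sizes follow directly from the counts stated above; the arithmetic is reproduced below, under the paper's implicit assumptions of a \(1080\times 1908\) image at 32 bits per pixel and one byte per skeleton coordinate.

```
# Payload arithmetic as quoted in the text (per-value sizes taken from the paper).
image_bytes = 1080 * 1908 * 32 // 8      # 8,242,560 bytes, i.e. ~8.243 MB per frame
skeleton_bytes = 17 * 3 * 1              # 51 bytes per encoded skeleton frame
print(image_bytes, skeleton_bytes, round(image_bytes / skeleton_bytes))  # ~161,619x smaller
```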
Fig. 4: Comparison of motion difference among 4 users.
Fig. 5: Users' chosen effort under the different schemes
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Parameters** & \multicolumn{4}{c|}{**Values**} \\ \hline
**Users** & 1 & 2 & 3 & 4 \\ \hline
**Video actions** & Run & Dance & Wave & Stand \\ \hline
**Number of users \(N_{T}\)** & \multicolumn{4}{c|}{4} \\ \hline
**Original frequency \(M_{n}\)** & \multicolumn{4}{c|}{60 fps} \\ \hline
**Key points extracted \(k\)** & \multicolumn{4}{c|}{17} \\ \hline
**Total resource \(f_{T}\)** & \multicolumn{4}{c|}{120 fps} \\ \hline
**Total award \(R\)** & \multicolumn{4}{c|}{100} \\ \hline \end{tabular}
\end{table} TABLE I: Parameters for the experiment settings.
#### V-C4 DQN optimization on reward setting
DQN is effectively utilized in the optimization process to determine the optimal reward setting that incentivizes the desired level of effort. As illustrated in Fig. 6(a), the results show a clear improvement in the average rewards obtained per episode, which increased steadily and converged over time. Additionally, as depicted in Fig. 6(b), the average down-sampling loss decreased consistently and eventually converged. The comparison between the averagely distributed resources and the user-chosen effort under DQN optimized reward setting, as shown in Fig. 7, highlights the significant improvement achieved, with the sum of losses decreasing from \(339.3578\) to \(115.1214\). With a significant reduction of 66.076% in the total loss achieved through dynamic resource allocation, the effectiveness of utilizing DQN for optimizing reward settings is demonstrated. The results also highlight the ability of DQN to effectively reduce fluctuations in the discrete action space, leading to convergence over time.
## VI Conclusion
In this paper, we propose a novel framework that leverages a joint approach of the contest theory and deep reinforcement learning to incentivize user effort in a Metaverse service. The proposed framework offers the advantage of reduced data transmission, increased privacy for users, and optimized reward settings that can effectively incentivize users to provide semantic data to the MSPs. The experiment results have shown the efficacy of the semantic encoding approach in reducing the amount of transmitted data while maintaining system performance. The results also highlight the significant impact of reward distribution on users' efforts, emphasizing the importance of reward settings in motivating users. Moreover, the DQN optimization has demonstrated the capability of optimizing reward settings to encourage the desired level of user effort, resulting in a substantial reduction in the sum of losses. These findings suggest the potential of the proposed approach for real-world applications.
|
2304.08826 | Perceive, Excavate and Purify: A Novel Object Mining Framework for
Instance Segmentation | Recently, instance segmentation has made great progress with the rapid
development of deep neural networks. However, there still exist two main
challenges including discovering indistinguishable objects and modeling the
relationship between instances. To deal with these difficulties, we propose a
novel object mining framework for instance segmentation. In this framework, we
first introduce the semantics perceiving subnetwork to capture pixels that may
belong to an obvious instance from the bottom up. Then, we propose an object
excavating mechanism to discover indistinguishable objects. In the mechanism,
preliminary perceived semantics are regarded as original instances with
classifications and locations, and then indistinguishable objects around these
original instances are mined, which ensures that hard objects are fully
excavated. Next, an instance purifying strategy is put forward to model the
relationship between instances, which pulls the similar instances close and
pushes away different instances to keep intra-instance similarity and
inter-instance discrimination. In this manner, the same objects are combined as
the one instance and different objects are distinguished as independent
instances. Extensive experiments on the COCO dataset show that the proposed
approach outperforms state-of-the-art methods, which validates the
effectiveness of the proposed object mining framework. | Jinming Su, Ruihong Yin, Xingyue Chen, Junfeng Luo | 2023-04-18T08:47:03Z | http://arxiv.org/abs/2304.08826v1 | # Perceive, Excavate and Purify: A Novel Object Mining Framework for Instance Segmentation
###### Abstract
Recently, instance segmentation has made great progress with the rapid development of deep neural networks. However, there still exist two main challenges including discovering indistinguishable objects and modeling the relationship between instances. To deal with these difficulties, we propose a novel object mining framework for instance segmentation. In this framework, we first introduce the semantics perceiving subnetwork to capture pixels that may belong to an obvious instance from the bottom up. Then, we propose an object excavating mechanism to discover indistinguishable objects. In the mechanism, preliminary perceived semantics are regarded as original instances with classifications and locations, and then indistinguishable objects around these original instances are mined, which ensures that hard objects are fully excavated. Next, an instance purifying strategy is put forward to model the relationship between instances, which pulls the similar instances close and pushes away different instances to keep intra-instance similarity and inter-instance discrimination. In this manner, the same objects are combined as the one instance and different objects are distinguished as independent instances. Extensive experiments on the COCO dataset show that the proposed approach outperforms state-of-the-art methods, which validates the effectiveness of the proposed object mining framework.
## 1 Introduction
Instance segmentation is a fundamental perception task, which aims to detect and segment pre-defined objects at the instance level. Over the past years, the task of instance segmentation has made significant progress with the development of deep neural networks [18] and has a wide range of applications in real-world scenarios such as autonomous driving [28] and video surveillance [40].
To address the task of instance segmentation, lots of learning-based methods have been proposed in recent years, achieving impressive performance. For the basic task, the existing methods can be summarized into three categories. (1) Top-down methods [2, 12, 39, 35, 15] address the problem based on object detection, _i.e_., detecting and segmenting the object in the box. (2) Bottom-up methods [9, 11, 29] deal with the problem in a labeling and clustering manner, _i.e_.,
Figure 1: Challenges of instance segmentation. (a) Indistinguishable objects. Adjacent and overlapping objects (_e.g_., missed persons from SOLOv2 [37]) are difficult to locate and segment. (b) Under-researched relationship between instances. The relationships (_e.g_., feature distance) between different instances are rarely studied, while they are important for instance distinguishment.
learning the per-pixel embeddings first and then clustering them into groups. (3) The latest direct methods [36, 37] aim at dealing with instance segmentation directly, without dependence on object detection or embedding-and-clustering. However, there still exist two challenges that hinder the development of instance segmentation. First, there usually exist indistinguishable objects in real-world scenarios, especially adjacent and overlapping objects, as shown in Fig. 1 (a). Second, the relationships (such as the distance in feature space) between different instances are rarely studied, while they are important for instance distinguishment, as shown in Fig. 1 (b). Due to these two difficulties, instance segmentation remains a challenging vision task.
To address these two difficulties, many methods have made some efforts. To deal with indistinguishable objects, online hard example mining (OHEM) [32, 33] automatically selects hard examples to make training more effective and efficient, which boosts the detection performance and is easy to transfer to instance segmentation. In addition, focal loss [25] focuses training on a sparse set of hard examples and prevents the vast number of easy negatives from overwhelming the detector during training. These methods mainly enhance the learning ability on difficult examples at the level of training strategies, which are indirect schemes without awareness at the object level. To address the second difficulty of modeling the instance relationship, relation network [14] proposes an object relation module to process a set of objects simultaneously through interaction between their appearance feature and geometry, which allows modeling of their relations. Besides, IRNet [1] defines two kinds of relations between a pair of pixels: the displacement between their coordinates and their class equivalence, which are used to train the network effectively. These methods utilize neural networks to implicitly model the instance relationship, but it is still difficult to explore an explicit representation for the relationship between instances.
Inspired by these observations and analysis, we propose a novel object mining framework for instance segmentation, as shown in Fig. 2. In this framework, we first introduce semantics perceiving to capture all pixels potentially belonging to an instance, and we select pixels with high category confidence as the original instance descriptors with semantics-level classification and location. In this manner, independent objects are easy to be perceived, and indistinguishable objects mainly exist in the case caused by object gathering. In order to discover indistinguishable objects, we propose an object excavating mechanism. In this mechanism, original descriptors are fed into an instance learning subnetwork to determine whether each pixel is the center of the instance around original descriptors. In this way, indistinguishable objects around origin instances are detected with classification and locations, which are named mined descriptors. After that, we put all the instance descriptors (including both original instance descriptors and mined descriptors) into the instance purifying graph to model the relationship between instances. In this graph, we constrain instance descriptors of the same objects to have the highest similarity but instance descriptors of different objects have
Figure 2: Framework of our approach. We first extract pyramid features by the feature pyramid network, which provides features for semantics perceiving and mask learning. Next, a semantics perceiving subnetwork based on semantic segmentation is introduced to capture pixels that may belong to an obvious instance called original instance descriptors with classifications and locations. Then, by learning instances from the region around original descriptors, the object excavating subnetwork is used to discover indistinguishable objects called mined descriptors. After excavating, all descriptors are modeled in a relationship graph according to feature distances, which can link all instances with each other and obtain final independent instances. Note that \(\otimes\) means convolutional operation.
little similarity, which can pull the similar instances close and push away different instances. Therefore, the instance purifying graph can keep intra-instance similarity and inter-instance discrimination. Finally, all independent instances after purifying are utilized to generate final masks by mask learning. Experimental results on the COCO [26] dataset achieve state-of-the-art performance, which verifies the effectiveness of the proposed method.
The main contributions of this paper are as follows: 1) we propose an object mining framework for instance segmentation, which achieves state-of-the-art performance and validates the effectiveness of this perceiving, excavating, and purifying pipeline. 2) we put forward the object excavating mechanism, in which indistinguishable objects around original instances are discovered so that hard instances in an image are fully mined. 3) we introduce the instance purifying mechanism to model the relationship between instances, which maintains the intra-instance similarity and inter-instance discrimination.
## 2 Related Work
In this section, we review related works that aim to resolve the challenges of instance segmentation in three aspects.
### Instance Segmentation
Instance segmentation is a basic vision task, which requires instance-level and pixel-level predictions. With the development of deep neural networks, many methods [36, 37, 11, 29, 10, 38] have made great progress on the benchmark dataset [26]. For the challenging task, the existing methods can be classified into three folds. First, top-down methods are usually based on object detection to segment objects in a bounding box. For example, Mask R-CNN [12] extends the object detection network Faster R-CNN [30] by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. In particular, recent anchor-free methods (_e.g_., FCOS [35] and PolarMask [39]) also achieve good results with fast inference. The second category is bottom-up methods, which usually learn the feature of each pixel and then cluster pixels into groups in feature space. For example, SSAP [11] proposes a pixel-pair affinity pyramid, which computes the probability that two pixels belong to the same instance in a hierarchical manner. The last category of methods has been proposed in recent years and aims at addressing the task of instance segmentation in direct ways. For example, a series of methods, like SOLO [36, 37], assign categories to each pixel within an instance according to the instance's location and size, thus nicely converting instance segmentation into a single-shot classification-solvable problem. In the above methods, various strategies have improved the performance of instance segmentation. However, there still exist two main challenges, including discovering indistinguishable objects and modeling the relationship between instances.
### Hard Example Mining
Hard example mining is used to speed up the convergence of neural networks and enhance the discriminative power of the learned features. To reduce the computational overhead of identifying hard examples, existing works have been explored in two directions including an exact search within each mini-batch and an approximate search from the whole dataset [16, 32, 33, 25]. For example, OHEM [32] automatically selects hard examples in a mini-batch to make training more effective and more efficient, which is an intuitive algorithm that eliminates several heuristics and hyper-parameters in common use. In addition, UHEM [16] presents how large numbers of hard negatives can be obtained automatically by analyzing the output of a trained detector on video sequences, and describes simple procedures for mining large numbers of such hard negatives (and also hard positives) from unlabeled video data. Moreover, SCHEM [33] makes use of class signatures that keep track of feature embedding online with minor additional cost during training and identifies hard negative example candidates using the signatures. As for the above methods, they mainly enhance the learning ability with hard examples by means of training strategies, which are indirect schemes without awareness at the object level.
### Object Relation
It is well believed that modeling relations between objects would make contributions to object recognition. Early works make use of object relations as a post-processing step [10], in which the detected objects are re-scored by considering object relationships. After stepping into the era of deep learning, learning-based object relation is widely explored. For example, relation network [14] introduces an object relation module to process a set of objects simultaneously through interaction between their appearance feature and geometry, thus allowing modeling of their relations. Additionally, IRNet [1] estimates rough areas of individual instances and detects boundaries between different object classes, which makes it possible to assign instance labels to the seeds and to propagate them within the boundaries. In this way, the entire areas of instances can be estimated accurately, and IRNet is trained with inter-pixel relations on the attention maps, thus requiring no extra supervision. In these methods, object relation is studied and proved to be effective for object detection or instance segmentation. Nevertheless, it is not clear which information has been learned in these relation modules.
## 3 The Proposed Approach
To address the aforementioned two difficulties (_i.e_., perceiving indistinguishable objects and modeling the relationship between instances), we propose a novel object mining framework for instance segmentation as depicted in Fig. 2, denoted as **PEP** (Perceiving, Excavating, and Purifying). In this framework, we first extract common features for further learning. Then, the semantics perceiving subnetwork is introduced to obtain original instance descriptors. The object excavating subnetwork is also proposed to mine indistinguishable objects as mined instance descriptors. Next, the instance purifying mechanism is used to model the relationship between all instances. Finally, all the obtained instance descriptors are fed into mask learning. Details of the proposed approach are described as follows.
### Feature Extractor
Inspired by recent methods for panoptic segmentation and instance segmentation [23, 34, 36], we adopt the pyramid convolutional network to extract the common features. As shown in Fig. 2, the proposed method PEP takes the feature pyramid network [13, 24] as the feature extractor, which is modified by removing the last two layers (_i.e_., the classification and global average pooling layers) for the pixel-level prediction task. The feature extractor is composed of five stages for feature encoding, denoted as \(\mathcal{C}_{s}(\pi_{s})\) with parameters \(\pi_{s}\), where \(s\) (\(s=1,2,\ldots,5\)) represents the \(s\)th stage of the feature extractor. After the feature extractor, features in each stage are fed into two subnetworks: the semantics perceiving subnetwork and the mask learning subnetwork. For each subnetwork, there are four \(3\times 3\) convolution layers with 256 kernels to learn the feature transfer.
### Semantics Perceiving
In fact, it is easy to perceive independent objects; indistinguishable objects mainly exist in the case of object gathering. Therefore, in order to recognize all instances, we first introduce the semantics perceiving subnetwork to capture all pixels that may belong to an instance. Specifically, we carry out semantic segmentation to classify each pixel and then choose pixels with high category confidence as the original instance descriptors with semantics-level classifications and locations.
In the semantics perceiving subnetwork, we first conduct common semantic segmentation via the semantics perceiving branch \(\mathcal{P}_{s}(\pi_{\mathcal{P}_{s}})\), which consists of several convolution layers, to obtain a feature map of size \(C_{\mathcal{P}_{s}}\times H_{s}\times W_{s}\), where \(C_{\mathcal{P}_{s}}\), \(H_{s}\), and \(W_{s}\) denote the number of channels, the height, and the width of the feature, respectively. In this feature, each pixel \((c,h,w)\in\mathbb{R}^{C_{\mathcal{P}_{s}}\times H_{s}\times W_{s}}\) may represent one object, where \(h\) and \(w\) are the location of the object in the feature, and \(c\) represents the confidence that the object belongs to each class, with the maximum confidence determining the classification of the object. To learn the locations and classifications of objects, we expect the output of the semantics perceiving branch to approximate the ground-truth mask (represented as \(G_{\mathcal{P}_{s}}\), which is a one-hot encoding map in the channel dimension and is generated from the ground-truth mask of instance semantics) of classes and locations for each object by minimizing the loss:
\[\mathcal{L}_{\mathcal{P}}=\sum_{s=1}^{5}CE(\mathcal{P}_{s},G_{\mathcal{P}_{s}}), \tag{1}\]
where \(CE(\cdot,\cdot)\) means the cross-entropy loss function with the following formulation:
\[CE(P,G)=-\sum_{i}^{H\times W}\sum_{j}^{C}G_{i,j}\mathrm{log}P_{i,j}, \tag{2}\]
where \(P_{i,j}\) represents the predicted probabilities that the \(i\)th (\(i\in\mathbb{R}^{H\times W}\)) pixel belongs to \(j\)th (\(0\leq j\leq C\), 0 represents background) class, \(G_{i,j}\) is the ground truth.
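A small numerical sketch of Eq. (2) is shown below, with the probability map flattened to \(H\times W\) rows and \(C\) columns; the epsilon guard and the toy shapes are implementation choices, not part of the method.

```
# Sketch of the cross-entropy of Eq. (2) between per-pixel predictions P (rows sum
# to one) and a one-hot ground truth G, summed over pixels and classes.
import numpy as np

def cross_entropy(P, G, eps=1e-12):
    # P[i, j]: predicted probability that pixel i belongs to class j.
    # G[i, j]: one-hot ground truth for pixel i (class 0 = background).
    return -np.sum(G * np.log(P + eps))

# Toy usage: 4 pixels, 3 classes.
P = np.array([[0.70, 0.20, 0.10],
              [0.10, 0.80, 0.10],
              [0.20, 0.20, 0.60],
              [0.90, 0.05, 0.05]])
G = np.eye(3)[[0, 1, 2, 0]]
print(cross_entropy(P, G))
```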
To represent instances better, we adopt descriptors to characterize instances. Toward this end, we introduce instance descriptor extracting branch \(\mathcal{D}_{s}(\pi_{\mathcal{D}_{s}})\) with several convolution layers. It can learn the instance descriptors represented by the output feature map with size \(C_{\mathcal{D}_{s}}\times H_{s}\times W_{s}\), in which these symbols have similar meanings as ones in the object semantics perceiving branch. In the descriptor branch, each object is represented as an instance descriptor \(\mathcal{I}\) with size \(C_{\mathcal{D}_{s}}\times 1\times 1\). In this manner, each object is represented by a \(C_{\mathcal{D}_{s}}\)-dimensional vector, which simultaneously ensures the consistency of features and reduces the difficulty of instance representation during learning. By the way, this branch does not directly learn an objective function but is indirectly supervised.
For the proposed semantics and descriptors, object representation is redundant because each non-background pixel can be represented as one object, which will bring extra computation in the next procedures. To reduce the redundancy,
Figure 3: Structure of the object excavating subnetwork. The original instance descriptor is fed into the instance-level semantic segmentation to detect objects around original instances. In order to learn instance descriptors of the newly detected instances, instance descriptors of original objects are copied and concatenated with the coordinate to learn new instance descriptors. Note that \(\oplus\) means concatenation operation, and Coord means CoordConv.
only instance descriptors with high category confidence are retained while others are discarded. After that, we get a set of instance descriptors \(\mathbb{D}=\{\mathcal{I}_{ind}\}_{ind=1}^{N_{ori}}\) called original instance descriptors, where \(\mathcal{I}_{ind}\) denotes the instance descriptor with index \(ind\), and \(N_{ori}\) is the number of selected instance descriptors.
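The selection of original descriptors can be sketched as below: keep the grid cells whose maximum class confidence exceeds a threshold and read the \(C_{\mathcal{D}_{s}}\)-dimensional descriptor at the same location. The threshold value, the channel-last layout and the toy sizes are assumptions; the paper only states that low-confidence pixels are discarded.

```
# Sketch of original-descriptor selection from the class-confidence and descriptor maps.
import numpy as np

def select_descriptors(class_map, desc_map, conf_thresh=0.5):
    # class_map: (H, W, C) per-pixel class probabilities; desc_map: (H, W, C_D).
    conf = class_map.max(axis=-1)
    cls = class_map.argmax(axis=-1)
    ys, xs = np.where(conf > conf_thresh)            # high-confidence locations
    return desc_map[ys, xs], cls[ys, xs], np.stack([ys, xs], axis=1)

# Toy usage: a 16x16 grid, 80 classes, 256-d descriptors (threshold picked for the toy data).
rng = np.random.default_rng(0)
class_map = rng.dirichlet(np.ones(80), size=(16, 16))
desc_map = rng.normal(size=(16, 16, 256))
descriptors, labels, locations = select_descriptors(class_map, desc_map, conf_thresh=0.06)
print(descriptors.shape, labels.shape, locations.shape)
```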
### Object Excavating
After semantics perceiving, independent objects can be perceived, and original instance descriptors with classifications and locations are obtained. In fact, original instances can be regarded as easily identifiable objects. Due to object gathering, the rest of the indistinguishable objects mainly exist around the easily identifiable objects. In order to perceive indistinguishable objects, we propose the object excavating subnetwork to mine indistinguishable objects around the original instances.
For mining indistinguishable objects, we introduce the object excavating subnetwork. In this subnetwork, each instance descriptor in \(\mathbb{D}\) is fed into an instance-level semantic segmentation subnetwork to detect whether each pixel is the center of a new instance around the original instances, as shown in Fig. 3. The output \(\mathcal{E}_{ind}\) of instance-level semantic segmentation around the original instance \(\mathcal{I}_{ind}\) is easy to learn with several convolutional layers and is then supervised by the ground truth of instance-level semantic segmentation \(G_{\mathcal{E}_{ind}}\) (a binary map, in which the locations are set to 1 if the pixel is the center of an instance, while others representing background are set to 0). To learn the feature map \(\mathcal{E}_{ind}\), the optimization objective to be minimized is as follows:
\[\mathcal{L}_{\mathcal{E}}=\sum_{ind=1}^{N_{ori}}CE(\mathcal{E}_{ind},G_{ \mathcal{E}_{ind}}). \tag{3}\]
In this manner, indistinguishable objects around each original instance are detected as the key pixels.
Then, we learn the corresponding instance descriptor from each key pixel by following operations. Firstly, descriptors of the original instance are copied \(N_{\mathcal{E}_{ind}}\) times, which is the same as the number of key pixels. Then, the corresponding coordinate information on each copied feature descriptor is concatenated by CoordConv [27]. Then, the new instance descriptors are learned by convolutional layers. At the same time, similar classification learning is carried out for each key pixel \(\mathcal{E}_{ind}^{k}(e=1,\dots,N_{\mathcal{E}_{ind}})\) to gain semantic information. To achieve this, we expect the output \(\mathcal{P}_{\mathcal{E}_{ind}^{*}}\) of classification module for the key pixel to approximate its ground truth (represented as \(\mathcal{G}_{\mathcal{E}_{ind}^{*}}\), which is a similar map as \(G_{\mathcal{P}_{s}}\)) by minimizing the loss:
\[\mathcal{L}_{\mathcal{P}_{\mathcal{E}}}=\sum_{ind=1}^{N_{ori}}\sum_{e=1}^{N_{\mathcal{E}_{ind}}}CE(\mathcal{P}_{\mathcal{E}_{ind}^{e}},\mathcal{G}_{\mathcal{E}_{ind}^{e}}). \tag{4}\]
As a result, the object excavating subnetwork mines objects around each original instance, recovering previously imperceptible objects together with their classifications and descriptors. Supposing the number of newly mined objects is \(N_{mined}\), we add these new instance descriptors to the original descriptor set and obtain \(\mathbb{D}=\{\mathcal{I}_{ind}\}_{ind=1}^{N_{all}}\), where \(N_{all}=N_{ori}+N_{mined}\). For brevity, we denote the whole object excavating subnetwork as \(\mathcal{M}_{s}(\pi_{\mathcal{M}_{s}})\).
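A minimal sketch of the copy-and-concatenate step is given below; the use of normalized pixel coordinates, the linear head (equivalent to a 1x1 convolution on per-pixel vectors), and every tensor name are illustrative assumptions about one way to realize the described operations.

```python
import torch
import torch.nn as nn

def excavate_descriptors(original_desc, key_pixels, feat_hw, head):
    """
    original_desc: (C,) descriptor of one original instance.
    key_pixels:    (N_e, 2) integer (y, x) centers of newly detected objects.
    feat_hw:       (H_s, W_s), used to normalize the coordinates.
    head:          module mapping (C + 2) -> C new descriptors.
    """
    n_e = key_pixels.shape[0]
    copies = original_desc.unsqueeze(0).expand(n_e, -1)      # copy the descriptor N_e times
    h, w = feat_hw
    norm = torch.tensor([h - 1.0, w - 1.0])
    coords = key_pixels.float() / norm                        # CoordConv-style coordinates
    return head(torch.cat([copies, coords], dim=1))           # new instance descriptors

head = nn.Linear(256 + 2, 256)                                # assumed descriptor head
new_desc = excavate_descriptors(torch.randn(256),
                                torch.tensor([[10, 12], [40, 7]]), (100, 152), head)
```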
### Instance Purifying
To strengthen the mutual promotion between different instances, we introduce the instance purifying mechanism, which directly models the relationship between instances, as displayed in Fig. 4. In this mechanism, we construct the relationship between each instance and all other instances, where each instance is represented by its descriptor \(\mathcal{I}_{ind}\in\mathbb{D}\).
To model the relationship, we propose an instance purifying subnetwork to maintain intra-instance similarity and inter-instance discrimination. As shown in Fig. 4, we put all the instance descriptors from the semantic perceiving subnetwork and object excavating subnetwork into the instance purifying graph, in which each node is regarded as an instance descriptor, and the weight of the connection edge between nodes represents the feature distance between two instance descriptors. For the purifying graph, its adjacency matrix M can represent the relationship between instance descriptors. This matrix has the size of \(N_{all}\times N_{all}\). In the matrix, the rows and columns represent the indexes of instances, and each value in the matrix represents the similarities between instances of the corresponding row and column. We drive the matrix M to approach its ground truth \(G_{\text{M}}\) by minimizing the loss function
\[\mathcal{L}_{\text{M}}=CE(\text{M},G_{\text{M}}). \tag{5}\]
It is worth noting that modeling pixel-level relationships is usually difficult due to the large amount of computation involved. However, the number of instance descriptors is limited (\(N_{all}\leq 30\)), so the computation here is efficient.
Figure 4: Structure of the instance purifying mechanism. Each instance descriptor learns the similarity with other instance descriptors in feature space, ensuring that the similar instances are close and different instances are far away. Then, close instances are merged and different instances are distinguished to get the final instance descriptors.
By graph learning, we constrain descriptors of the same object to have the highest similarity, while different objects have no similarity. In other words, graph learning pulls similar instances close and pushes different instances apart, which maintains intra-instance similarity and inter-instance discrimination.
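The sketch below gives one plausible realization of the purifying-graph supervision in Eq. (5): cosine similarities between descriptors form the predicted adjacency matrix, supervised with a binary cross-entropy against the ground-truth adjacency. The similarity measure and all names are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def purifying_loss(descriptors, gt_adjacency):
    """
    descriptors:  (N_all, C) instance descriptors from both subnetworks.
    gt_adjacency: (N_all, N_all) float ground truth (1 if two descriptors belong
                  to the same object, 0 otherwise).
    """
    d = F.normalize(descriptors, dim=1)
    sim = (d @ d.t() + 1.0) / 2.0            # predicted adjacency M, mapped to [0, 1]
    sim = sim.clamp(1e-6, 1.0 - 1e-6)
    return F.binary_cross_entropy(sim, gt_adjacency)
```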
During inference, we predict the similarity matrix in feature space. Then, we merge the close instances and distinguish different instances to obtain the final instance descriptors. For the convenience of description, we denote this instance purifying subnetwork as \(\mathcal{IL}(\pi_{\mathcal{IL}})\).
### Mask Learning
After obtaining the instance descriptors \(\mathcal{I}_{ind}\), we use a general feature learning branch \(\mathcal{B}_{s}(\pi_{\mathcal{B}_{s}})\) to produce a general feature with size \(C_{\mathcal{B}_{s}}\times H_{s}\times W_{s}\) to represent the feature space of object mask learning, which provides shared pixel-level feature bases for all instance descriptors. In this way, all descriptors are convolved with the general features \(\mathcal{B}\) to obtain the instance segmentation result as follows.
\[M_{ind}=\mathcal{I}_{ind}\otimes\mathcal{B}, \tag{6}\]
where \(\otimes\) means the convolutional operation. Instance segmentation result \(M_{ind}\) is expected to approximate the ground-truth masks \(G_{M_{ind}}\) by minimizing the loss
\[\mathcal{L}_{M}=\sum_{ind=1}^{N_{all}}CE(M_{ind},G_{M_{ind}}). \tag{7}\]
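Because each descriptor acts as a 1x1 convolution kernel over the shared general feature, the mask prediction of Eq. (6) can be written compactly as a dynamic convolution; the sketch below is an illustrative PyTorch rendering under that assumption, with a per-pixel binary cross-entropy standing in for \(CE\).

```python
import torch
import torch.nn.functional as F

def predict_masks(descriptors, general_feature):
    """
    descriptors:     (N_all, C) final instance descriptors I_ind.
    general_feature: (C, H_s, W_s) shared general feature B.
    Returns (N_all, H_s, W_s) mask logits, one map per instance (Eq. 6).
    """
    weight = descriptors.view(descriptors.shape[0], -1, 1, 1)   # descriptors as 1x1 kernels
    return F.conv2d(general_feature.unsqueeze(0), weight)[0]

def mask_loss(mask_logits, gt_masks):
    """Sketch of L_M in Eq. (7), reading CE as a per-pixel binary cross-entropy."""
    return F.binary_cross_entropy_with_logits(mask_logits, gt_masks)
```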
Combining the losses of Eqs. (1), (3), (4), (5), and (7), the overall learning objective can be formulated as follows:
\[\min_{\mathbb{P}}\mathcal{L}_{\mathcal{P}}+\alpha\mathcal{L}_{\mathcal{E}}+ \beta\mathcal{L}_{\mathcal{P}_{\mathcal{E}}}+\gamma\mathcal{L}_{\text{M}}+ \delta\mathcal{L}_{M}, \tag{8}\]
where, for convenience of presentation, \(\mathbb{P}\) denotes the set of parameters \(\{\pi_{s},\pi_{\mathcal{B}_{s}},\pi_{\mathcal{P}_{s}},\pi_{\mathcal{D}_{s}},\pi_{\mathcal{M}_{s}}\}_{s=1}^{5}\) together with \(\pi_{\mathcal{IL}}\).
## 4 Experiments and Results
### Experimental Setup
Dataset. To evaluate the performance of the proposed method, we carry out all experiments on the COCO dataset [26]. Following the setup of previous work [12, 24], we train our network on the 118k training images of _train2017_, conduct ablation studies on the 5k validation images of _val2017_, and report final results on _test-dev2017_ for comparison with state-of-the-art methods.
Metrics. We report results with the standard COCO metrics, including AP (averaged over IoU thresholds), AP\({}_{50}\), AP\({}_{75}\), as well as AP\({}_{S}\), AP\({}_{M}\), and AP\({}_{L}\) (AP at different object scales).
Training and Inference. Our proposed method can be trained end-to-end. We apply the stochastic gradient descent algorithm with a weight decay of 0.0001 and momentum of 0.9 to optimize the loss in Eq. 8. In the optimization process, parameters in the feature extractor are initialized from the pre-trained ResNet-101 [13] model, whose output dimensions for the second to fifth stages are 256, 512, 1024, and 2048, respectively. The COCO _train2017_ dataset is used as the training set. The shorter side of the input image is resized to 800 pixels, and the longer side is kept at or below 1333 pixels. We train our network for 36 epochs with a mini-batch of 16 images (8 GPUs \(\times\) 2 images per GPU). A "step" learning rate policy is employed for all experiments: the initial learning rate is 0.01, and it is decreased by a factor of 10 at epochs 27 and 33, respectively. In addition, we empirically set the weights \(\alpha=\beta=\gamma=\delta=1\) in Eq. 8. The dimension \(C_{\mathcal{D}_{s}}\) of the instance descriptors is 256; to match it, the dimension \(C_{\mathcal{B}_{s}}\) of the general feature \(\mathcal{B}\) is also 256. During inference, all supervisions are removed, and the final result is obtained from the output of mask learning.
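For reference, these optimization settings correspond to the following PyTorch-style configuration; the framework choice and the stand-in `model` are our assumptions, since the paper does not specify an implementation.

```python
import torch

model = torch.nn.Conv2d(3, 8, 3)  # stand-in for the full network
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-4)
# decay the learning rate by 10x after epochs 27 and 33, training for 36 epochs
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[27, 33], gamma=0.1)

for epoch in range(36):
    # ... one pass over train2017 with 16 images per iteration (8 GPUs x 2) ...
    scheduler.step()
```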
### Comparisons with State-of-the-art Methods
We compare our approach with about 26 state-of-the-art algorithms in instance segmentation, including two-stage methods (MNC [8], FCIS [22], Mask R-CNN [12], MaskLab+ [5], MS R-CNN [15], HTC [4], DCT-Mask R-CNN [31], BMask R-CNN [7], and BCNet+Faster R-CNN [17]), and one-stage methods (YOLACT [2], TensorMask [6], ShapeMask [19], PolarMask [39], CenterMask-W [38], CenterMask-L [21], SOLO [36], SOLOv2 [37], CondInst [34], and BCNet+FCOS [17]) on the COCO _test-dev2017_ dataset.
To prove the effectiveness of our method, we report results based on ResNet-101 with FPN, as listed in Tab. 1. Comparing the proposed method with state-of-the-art methods, we can see that it consistently outperforms both one-stage and two-stage methods across almost all metrics. For AP, compared with the second-best methods, DCT-Mask R-CNN among two-stage methods and BCNet+FCOS among one-stage methods, our method improves by 0.8% (from 40.1% to 40.9%) and 1.3% (from 39.6% to 40.9%), respectively. It is also worth noting that, compared with DCT-Mask R-CNN, our method is significantly better on AP\({}_{M}\) (+1.7%) and AP\({}_{L}\) (+7.9%). For AP\({}_{S}\), our result is below that of methods such as BCNet+Faster R-CNN and DCT-Mask R-CNN, which conduct more computation for fine feature learning and decoding; our method, however, has lower computational overhead. Compared with BCNet+FCOS, our method gains an 8.7% higher result in AP\({}_{L}\). Overall, these observations demonstrate the effectiveness and robustness of our proposed method and validate that the object mining framework is useful for the task of instance segmentation.
Some examples generated by our approach are shown in Fig. 5. We can see that instances are distinguished and segmented with accurate locations and precise shapes by the proposed method in both easy situations (such as large objects) and complex ones (small objects, occlusions, and densely distributed scenes). These visualizations indicate that the proposed method has a good ability for object excavating and instance purifying, and they further demonstrate the effectiveness and superiority of the proposed object mining method.
### Ablation Analysis
To validate the effectiveness of different components, we carry out several experiments on COCO _val2017_ dataset and compare the performance variations of our framework PEP.
Firstly, to investigate the effectiveness of the proposed object excavating mechanism, we conduct ablation experiments with two different models for comparison. The first setting consists only of the feature extractor, semantics perceiving, and mask learning, and is regarded as the "Baseline" model. In addition, we build another model ("Baseline+excavating") by adding the object excavating subnetwork to the Baseline. The comparison of these two models is listed in Tab. 2. In terms of AP, the object excavating mechanism improves the performance of the Baseline from 38.5% to 39.8% (+1.3%).
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & **Backbone** & **AP** & **AP\({}_{50}\)** & **AP\({}_{75}\)** & **AP\({}_{S}\)** & **AP\({}_{M}\)** & **AP\({}_{L}\)** \\ \hline Two-stage methods & & & & & & & \\ \hline MNC [8] & ResNet-101 & 24.6 & 44.3 & 24.8 & 4.7 & 25.9 & 43.6 \\ FCIS [22] & ResNet-101 & 29.2 & 49.5 & - & 7.1 & 31.3 & 50.0 \\ Mask R-CNN [12] & ResNet-101 FPN & 35.7 & 58.0 & 37.8 & 15.5 & 38.1 & 52.4 \\ Mask R-CNN [12] & ResNext-101 FPN & 37.1 & 60.0 & 39.4 & 16.9 & 39.9 & 53.5 \\ MaskLab+ [5] & ResNet-101 FPN & 35.4 & 57.4 & 37.4 & 16.9 & 38.3 & 49.2 \\ MS R-CNN [15] & ResNet-101 FPN & 38.3 & 58.8 & 41.5 & 17.8 & 40.4 & 54.4 \\ HTC [4] & ResNet-101 FPN & 39.7 & 61.8 & 43.1 & 21.0 & 42.2 & 53.5 \\ MS R-CNN\({}^{\dagger}\) & ResNet-101 FPN & 39.6 & 60.7 & 43.1 & 18.8 & 41.5 & 56.2 \\ BMask R-CNN [7] & ResNet-101 FPN & 37.7 & 59.3 & 40.6 & 16.8 & 39.9 & 54.6 \\ BMask R-CNN with MS & ResNet-101 FPN & 38.7 & 59.1 & 41.9 & 17.4 & 40.7 & 55.5 \\ BCNet+Faster R-CNN [17] & ResNet-101 FPN & 39.8 & 61.5 & 43.1 & **22.7** & 42.4 & 51.1 \\ DCT-Mask R-CNN [31] & ResNet-101 FPN & 40.1 & 61.2 & 43.6 & **22.7** & 42.7 & 51.8 \\ \hline One-stage methods & & & & & & & \\ \hline YOLACT [2] & ResNet-101 FPN & 31.2 & 50.6 & 32.8 & 12.1 & 33.3 & 47.1 \\ TensorMask [6] & ResNet-101 FPN & 37.1 & 59.3 & 39.4 & 17.4 & 39.1 & 51.6 \\ ShapeMask [19] & ResNet-101 FPN & 37.4 & 58.1 & 40.0 & 16.1 & 40.1 & 53.8 \\ PolarMask [39] & ResNet-101 FPN & 30.4 & 51.9 & 31.0 & 13.4 & 32.4 & 42.8 \\ CenterMask-W [38] & Hourglass-104 & 34.5 & 56.1 & 36.3 & 16.3 & 37.4 & 48.4 \\ CenterMask-L [21] & ResNet-101 FPN & 38.3 & - & - & 17.7 & 40.8 & 54.5 \\ SOLO [36] & ResNet-101 FPN & 37.8 & 59.5 & 40.4 & 16.4 & 40.6 & 54.2 \\ BlendMask [3] & ResNet-101 FPN & 38.4 & 60.7 & 41.3 & 18.2 & 41.5 & 53.3 \\ MEIns [41] & ResNet-101 FPN & 33.9 & 56.2 & 35.4 & 19.8 & 36.1 & 42.3 \\ D-SOLO [36] & ResNet-101 FPN & 38.4 & 59.6 & 41.1 & 16.8 & 41.5 & 54.6 \\ SOLOv2 [37] & ResNet-101 FPN & 39.7 & 60.7 & 42.9 & 17.3 & 42.9 & 57.4 \\ CondInst [34] & ResNet-101 FPN & 39.1 & 60.9 & 42.0 & 21.5 & 41.7 & 50.9 \\ BCNet+FCOS [17] & ResNet-101 FPN & 39.6 & 61.2 & 42.7 & 22.3 & 42.3 & 51.0 \\ BoundaryFormer [20] & ResNet-101 FPN & 39.4 & 60.9 & 42.6 & 22.1 & 42.0 & 51.2 \\ \hline Ours & ResNet-101 FPN & **40.9** & **61.5** & **44.3** & 18.2 & **44.4** & **59.7** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparisons with state-of-the-art methods on COCO _test-dev2017_ dataset. The best result is in **bold** fonts.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & **AP** & **AP\({}_{50}\)** & **AP\({}_{75}\)** & **AP\({}_{S}\)** & **AP\({}_{M}\)** & **AP\({}_{L}\)** \\ \hline Baseline & 38.5 & 59.1 & 41.4 & 17.2 & 42.8 & 57.4 \\ Baseline + excavating & 39.8 & 60.9 & 42.1 & 17.5 & 43.3 & 58.7 \\ Baseline + purifying & 40.1 & 60.8 & 42.3 & 17.6 & 43.9 & 58.4 \\ \hline PEP & **40.6** & **61.0** & **43.2** & **17.7** & **44.3** & **59.8** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Performance of different settings of the proposed method on COCO _val2017_ dataset.
We also observe that the object excavating mechanism consistently improves the performance of the Baseline model on all metrics, by 0.3%-1.8%, which verifies the effectiveness of the proposed object excavating mechanism.
Secondly, to explore the effectiveness of the instance purifying mechanism, we conduct another experiment by combining the instance purifying subnetwork with Baseline as "Baseline+purifying". From the third row of Tab. 2, we find that the performance of instance segmentation can be consistently improved (from 38.5% to 40.1% on AP), which validates that the instance purifying mechanism is effective.
Thirdly, comparing the proposed framework PEP with the other models in Tab. 2, we find that combining both the object excavating and instance purifying mechanisms yields a further performance gain (40.6% AP). This shows that the two proposed mechanisms are compatible with each other.
## 5 Conclusion
In this paper, we rethink the difficulties that hinder the development of instance segmentation and propose an object mining framework. In this framework, we introduce a novel object excavating mechanism to mine the objects around each instance, which ensures that hard samples are fully excavated. In addition, an instance purifying mechanism is introduced to model the relationship between instances, pulling similar instances close and pushing different instances apart. Extensive experiments on benchmark datasets validate the effectiveness of the proposed approach and show that the perspective of object mining is useful for the task.
Figure 5: Visualization examples of the proposed approach. |
2307.11827 | Obstructed swelling and fracture of hydrogels | Obstructions influence the growth and expansion of bodies in a wide range of
settings -- but isolating and understanding their impact can be difficult in
complex environments. Here, we study obstructed growth/expansion in a model
system accessible to experiments, simulations, and theory: hydrogels swelling
around fixed cylindrical obstacles with varying geometries. When the obstacles
are large and widely-spaced, hydrogels swell around them and remain intact. In
contrast, our experiments reveal that when the obstacles are narrow and
closely-spaced, hydrogels fracture as they swell. We use finite element
simulations to map the magnitude and spatial distribution of stresses that
build up during swelling at equilibrium in a 2D model, providing a route toward
predicting when this phenomenon of self-fracturing is likely to arise. Applying
lessons from indentation theory, poroelasticity, and nonlinear continuum
mechanics, we also develop a theoretical framework for understanding how the
maximum principal tensile and compressive stresses that develop during swelling
are controlled by obstacle geometry and material parameters. These results thus
help to shed light on the mechanical principles underlying growth/expansion in
environments with obstructions. | Abigail Plummer, Caroline Adkins, Jean-François Louf, Andrej Košmrlj, Sujit S. Datta | 2023-07-21T18:01:12Z | http://arxiv.org/abs/2307.11827v2 | # Obstructed swelling and fracture of hydrogels
###### Abstract
Obstructions influence the growth and expansion of bodies in a wide range of settings--but isolating and understanding their impact can be difficult in complex environments. Here, we study obstructed growth/expansion in a model system accessible to experiments, simulations, and theory: hydrogels swelling around fixed cylindrical obstacles with varying geometries. When the obstacles are large and widely-spaced, hydrogels swell around them and remain intact. In contrast, our experiments reveal that when the obstacles are narrow and closely-spaced, hydrogels unexpectedly fracture as they swell. We use finite element simulations to map the magnitude and spatial distribution of stresses that build up during swelling at equilibrium, providing a route toward predicting when this phenomenon of self-fracturing is likely to arise. Applying lessons from indentation theory, poroelasticity, and nonlinear continuum mechanics, we also develop a theoretical framework for understanding how the maximum principal tensile and compressive stresses that develop during swelling are controlled by obstacle geometry and material parameters. These results thus help to shed light on the mechanical principles underlying growth/expansion in environments with obstructions.
Many growth and expansion processes are sculpted through confinement by rigid obstructions. Familiar examples include muffins rising into their characteristic shape during baking [1], trees growing around boulders [2], and even cities expanding around inhospitable geographic features [3]. Obstructed growth and expansion also play pivotal roles--both harmful and beneficial--in many practical applications. For example, excessive tissue growth around metal mesh tubes inserted into blood vessels is a common, but life-threatening, complication of stenting [4; 5; 6]; conversely, the expansion of spray foam into cracks and in between walls underlies the thermal insulation of many energy-efficient buildings [7]. More broadly, obstructed growth and expansion critically influence the emergence of form and function across diverse non-living and living systems, ranging from hydrogels added to soil for water retention to biofilms and biological tissues in complex environments [8; 9; 10; 11; 12; 13; 14]. Therefore, we ask: are there general principles that dictate how obstructions influence growth and expansion? And if so, how do we discover them?
When the growth/expansion of a body is resisted by surrounding obstructions, large and spatially-nonuniform stresses can develop, influencing subsequent growth/expansion in turn [15; 16]. Being able to understand the distribution and magnitude of these stresses is thus a necessary step in the development of widely-applicable, predictive models of growth and expansion. Unfortunately, due to the complexity of real-world materials and environments, model systems in which the coupling between growth and stress can be systematically studied are scarce. Here, we address this issue using studies of spherical hydrogel beads swelling in 3D-printed obstacle arrays with defined geometries. Hydrogels are cross-linked networks of hydrophilic polymers that can absorb large amounts of water and swell while still retaining integrity. As a result, they are extensively used in biomedical, environmental, and manufacturing applications, and have well-characterized and highly-tunable properties such as their degree of swelling and elasticity [17; 18; 19; 20]. Indeed, the comprehensive theoretical literature on hydrogel swelling makes computations of shape and internal stresses accessible for a variety of boundary conditions. While much work has focused on the case of a hydrogel swelling while adhered to another material [21; 22; 23; 24; 25; 26; 27; 28; 29], non-adhered swelling around obstructions has received limited attention, primarily being considered in the context of indentation testing [22; 30; 31; 32].
Our study extends this body of work. First, we use experiments to directly visualize how hydrogel swelling is altered by obstacles of systematically-varying geometries. When obstacles are positioned further apart, the hydrogel swells through the spaces between them and maintains its integrity. By contrast, when the obstacles are closer together, we observe a surprising phenomenon: the hydrogel fractures, repeatedly tearing itself apart as it swells! We then use theory and simulations to rationalize these observations, and importantly, quantify and understand the distribution of stresses. Taken together, our work provides a prototypical example of obstructed growth/expansion and uncovers unexpected swelling behaviors and mechanical instabilities that can result during this process--highlighting the rich physics waiting to be explored in this area of soft mechanics.
### Experiments
Our experimental platform is schematized in Fig. 1A and detailed in _Materials and Methods_. To define the obstacles, we 3D-print four rigid cylindrical columns of radius \(r_{\mathrm{obs}}\) to be placed an equal distance \(r_{\mathrm{ctr}}\) from a central point; in the experiments, we vary \(r_{\mathrm{obs}}\) and \(r_{\mathrm{ctr}}\) between \(2.5-15\) mm and \(3.5-25\) mm, respectively. The cylinders are securely attached to horizontal, parallel laser-engraved acrylic plates spaced vertically by a fixed amount, \(\Delta z=40\) mm. Importantly, these plates are transparent, permitting direct visualization of a hydrogel as it swells between the cylindrical obstacles and parallel plates. Hence, at the beginning of each experiment, we place a spherical polyacrylamide hydrogel bead of initial radius \(\sim 6\) mm (characterized in SI Appendices A, B) in the center and submerge the entire apparatus in a bath of ultrapure milli-Q water--thereby initiating swelling, which we track using a camera focused on the top plate.
We first examine the case of obstacles that are spaced further apart. As the hydrogel swells, it contacts the top and bottom plates, as well as the cylinders, and continues to swell through the space between them (Fig. 1B, Movie S1). It eventually reaches an unchanging four-lobed equilibrium shape that reflects the balance between the osmotic pressure driving swelling and the elastic stresses arising from the polymer network and hydrogel-obstacle contact [8; 33].
The case of closer-spaced obstacles of the same size is dramatically different. At early times, we observe behavior similar to the previous, less-confined case: the hydrogel contacts the surrounding surfaces and swells through the space between them. As these lobes continue to swell, however, cracks abruptly form at the hydrogel surface (48 h in Fig. 1C), reflecting the development of large stresses during obstructed swelling. Remarkably, this process then continues, resulting in elaborate, multi-step fracturing of the hydrogel as it swells, repeatedly ejecting fragments of the hydrogel outward (Fig. 1C, Movie S2). This process eventually stops as the hydrogel reaches its final equilibrium degree of swelling. Several other representative examples of fracturing hydrogels with different obstacle geometries are shown in Movies S3 and S4. The fracturing process varies significantly between samples.
Figure 1: **Hydrogels swelling around obstacles remain intact at equilibrium when obstacles are further apart, but fracture when obstacles are closer together.** (A) Schematic of the experimental platform, showing a hydrogel (yellow) surrounded by cylindrical obstacles with radius \(r_{\mathrm{obs}}\) separated according to \(r_{\mathrm{ctr}}\) and confined vertically by parallel plates separated by \(\Delta z\). (B-C) Top view images taken over the course of swelling, with \(r_{\mathrm{obs}}=5\) mm. The approximate borders of the obstacles and hydrogels are outlined for clarity. As each hydrogel swells, the yellow dye fixed in its polymer network becomes more dilute; thus, the color intensity serves as a proxy for the local polymer concentration. (B) When the obstacles are further apart (\(r_{\mathrm{ctr}}=20\) mm), a hydrogel reaches a four-lobed clover-like shape at equilibrium. (C) When the obstacles are closer together (\(r_{\mathrm{ctr}}=10\) mm), cracks appear at the surface of the hydrogel as it swells, driving the repeated production of discrete fragments.
This variability reflects the acute sensitivity of crack formation and propagation to random imperfections in the hydrogel and to the complex topography arising from previous fracturing [34; 35; 36]. To our knowledge, this is the first report of fracture induced by obstructed swelling.
These observations suggest that, when sufficiently obstructed, a growing/expanding body can tear itself apart. To characterize the dependence of this phenomenon on confinement, we repeat these experiments with obstacles of varying sizes \(r_{\rm obs}\) and spacings \(r_{\rm ctr}\). We observe a similar phenomenon in all cases, as summarized in the state diagram in Fig. 2. When the obstacles are far apart, the hydrogel swells and retains its integrity (\(\square\)), while when the obstacles are closer i.e., \(r_{\rm ctr}\) is below a threshold value, the hydrogel repeatedly self-fractures as it swells (\(\times\)). This threshold value also depends on the size of the obstacles. One might expect that fracturing is promoted when the hydrogel has less free space to swell into i.e., larger \(r_{\rm obs}\); in stark contrast to this expectation, we find that for a given \(r_{\rm ctr}\), fracturing occurs _below_ a threshold value of \(r_{\rm obs}\). We rationalize this intriguing observation using theory and simulations in the following sections. The two geometric parameters \(r_{\rm ctr}\) and \(r_{\rm obs}\) thus delineate a boundary between swelling without fracture and obstructed swelling causing fracture, as shown by the light and dark green regions, respectively, in Fig. 2.
To demonstrate the generality of this phenomenon, we repeat our experiments in fully 3D granular packings, which more closely mimic the obstructions experienced by hydrogels in many applications [37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. Our previous experiments [8] using this platform used a solvent that promoted only slight hydrogel swelling, and therefore accessed the case of weak confinement; however, repeating these experiments with a solvent that promotes more hydrogel swelling indeed gives rise to self-fracturing, as shown in Fig. S4 (SI Appendix C). Thus, this phenomenon of obstructed growth/expansion causing fracture arises not just in idealized geometries, but also in more realistic complex spaces.
## II Theory and simulations
What are the essential physical principles that govern this phenomenon? To address this question, we develop finite element simulations that incorporate the energetic penalty of contacting obstructions into a model of hydrogel swelling based on classic Flory-Rehner theory. Our goal is to better understand the complex distribution of stresses that arises during obstructed swelling, not to quantitatively capture all the features of the experiments; as such, we use a simplified two-dimensional (2D) model that permits straightforward visualization of stresses and thereby enables us to develop an intuitive and analytic description of the underlying physics.
We describe stretching of the hydrogel polymer network and solvent-polymer interactions with the commonly-used free energy density [47; 22; 23; 48; 49; 24]:
\[\frac{F_{\rm en}}{A_{0}k_{B}T} =\frac{n_{p}}{2}\left(F_{iK}F_{iK}-2-2\ln(\det(\mathbf{F}))\right)\] \[+\frac{1}{\Omega}\left(\Omega C\ln\left(\frac{\Omega C}{1+\Omega C }\right)+\chi\frac{\Omega C}{1+\Omega C}\right). \tag{1}\]
The first term describes the elastic free energy: \(n_{p}\) is the number of polymer chains per unit dry reference area \(A_{0}\) and \(F_{iK}=\frac{\partial F_{i}}{\partial X_{K}}\) is the deformation gradient tensor, with \(F_{iK}F_{iK}=\mathrm{tr}(\mathbf{F}^{T}\mathbf{F})=\sum_{i}\lambda_{i}^{2}\) in terms of principal stretches \(\lambda_{i}\). Following standard conventions, deformed/current configurations are denoted by lowercase letters and reference/initial configurations are denoted by capital letters unless otherwise noted [16]. The second term describes the mixing free energy: \(\Omega\) is the area of a solvent molecule, \(C\) is the nominal concentration of solvent (number of solvent molecules per unit reference area), and \(\chi\) is the Flory-Huggins interaction parameter. Both terms are scaled by the product of the Boltzmann constant and temperature, \(k_{B}T\).
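As a numerical check of this energy, the short sketch below evaluates the dimensionless free energy density \(F_{\rm en}\Omega/(A_{0}k_{B}T)\) for a given 2D deformation gradient, anticipating the relation \(\det(\mathbf{F})=1+\Omega C\) introduced in the next paragraph; the function name and the choice to work in units of \(k_{B}T/\Omega\) are our own conventions, not the paper's.

```python
import numpy as np

def free_energy_density(F, np_omega, chi):
    """
    Dimensionless Flory-Rehner energy  F_en * Omega / (A_0 * k_B * T)  in 2D (Eq. 1).
    F:        2x2 deformation gradient relative to the dry state.
    np_omega: n_p * Omega, chain density scaled by the solvent-molecule area.
    chi:      Flory-Huggins interaction parameter.
    Assumes det(F) > 1 so that the solvent fraction det(F) - 1 is positive.
    """
    J = np.linalg.det(F)
    elastic = 0.5 * np_omega * (np.trace(F.T @ F) - 2.0 - 2.0 * np.log(J))
    phi_s = (J - 1.0) / J                     # Omega*C / (1 + Omega*C)
    mixing = (J - 1.0) * np.log(phi_s) + chi * phi_s
    return elastic + mixing

# isotropic swelling by a stretch of 2 in each direction
print(free_energy_density(2.0 * np.eye(2), np_omega=0.001, chi=0.3))
```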
Given that the hydrogel network is held together by permanent cross-links between its polymer chains, we additionally assume that it only changes volume by uptake of solvent, which allows us to express concentrations in terms of the deformation: \(\det(\mathbf{F})=1+\Omega C\). We require the chemical potential \(\mu\) to be zero on the boundary of the hydrogel to mimic submerging it in pure water, as in the experiments. To impose this boundary condition, we perform the standard Legendre transform of Eq. (1) and derive a new free energy \(\hat{F}_{\rm en}\) in terms of \(\mu\) as \(\hat{F}_{\rm en}(x_{i},\mu)=F_{\rm en}(x_{i},C)-\mu C\) [23]. Finally, we model contact by imposing an energy penalty for overlap between the hydrogel and obstacles.
Figure 2: **Experiments reveal a hydrogel fracture threshold that depends on obstacle radius and spacing.** Each \(\square\) indicates an experiment in which the hydrogel reached an intact equilibrium shape as in Fig. 1B, while each \(\times\) indicates an experiment that resulted in fracture as in Fig. 1C. Schematics show a top view of the obstacle geometries for the indicated points. The grey excluded region in the top left shows parameters for which the obstacles would overlap. The green background shading provides a guide to the eye.
As detailed in _Materials and Methods_, our results are insensitive to the choice of this penalty parameter. Note that the chemical potential boundary condition \(\mu=0\) is enforced in the regions of contact between the hydrogel and the obstacles (see SI Appendix E for further discussion).
To visualize and track hydrogel deformation during obstructed swelling, along with the concomitant development of internal stresses, we implement this model in FEniCS [49], an open-source finite element platform. Our simulations consider a circular hydrogel swelling around four fixed obstacles. Although our primary focus is the stress distribution in the hydrogel at equilibrium, we simulate the full dynamics of obstructed swelling to provide numerical stability and ensure that there is a realistic path from the initial to the final equilibrium swollen state (SI Appendix E).
We begin by examining hydrogel swelling around obstacles that have the same size, but varying spacing--just as in the experiments shown in Fig. 1(B,C). Four representative examples, varying from the case of weak to strong confinement, are shown in Fig. 3(A-D). The corresponding maximum tensile and compressive principal stresses on the line connecting the hydrogel center to an obstacle center are plotted in panels (E,F). We quantify these variations using \(\Delta/r^{*}\equiv\left(r^{*}-r_{\rm{ctr}}\right)/r^{*}\), where \(r^{*}\) is the equilibrium, fully-swollen, unconfined radius of the hydrogel and \(\Delta\) is the difference between \(r^{*}\) and the distance from the center to the closest obstacle \(r_{\rm{ctr}}\). The case of weak confinement in (A) has \(\Delta/r^{*}=0.02\), increasing to \(\Delta/r^{*}=0.6\) for the case of strong confinement in (D). As seen by the color maps of solvent concentration and principal stresses in panel (A), in this limit of weak confinement, the hydrogel is barely deformed and the resultant stresses are not visible when plotted on the same scale as (B-D). Thus, as we describe in Sec. II.1, the hydrogel stresses can be captured analytically using linear theories in this regime. Increasing hydrogel confinement causes the magnitudes of the stresses to increase (panels E,F), but they still remain primarily localized near the obstacles; as discussed in Sec. II.2 and Sec. II.3, the largest compressive stresses can still be described by linear theory, but the largest tensile stresses require consideration of a geometric nonlinearity. As the separation between obstacles decreases further, large compressive stresses span the entirety of the hydrogel (panel C), eventually triggering a symmetry-breaking instability (panel D) as discussed in Sec. II.4. Finally, while our model is not suitable to treat fracture directly, we discuss the connection between our calculations of stresses and the experimental observations of swelling-induced self-fracture in Sec. III.
### Weak confinement
Consider a hydrogel disk that has swollen around obstacles to reach equilibrium. The chemical potential is spatially uniform and therefore all solvent transport has stopped. Nonetheless, due to contact with the obstacles, the distribution of solvent is inhomogeneous through the hydrogel: solvent preferentially enters the uncompressed lobes of the hydrogel between the obstacles, as shown by the yellow color in Fig. 1B [33]. Now, consider another hydrogel that was first swollen, unobstructed, to its equilibrium size, and then slowly and incrementally squeezed by an identical set of obstacles moving towards its center, acting as four indenters. Solvent must exit the hydrogel where it is indented by the obstacles; recall our condition \(1+\Omega C=\det(\mathbf{F})\). For the same final obstacle geometry, these two scenarios must have identical solvent distributions and stress profiles at equilibrium. Thus, the long time limit of obstructed swelling can be treated as an indentation problem, which is well-studied in the limit of small deformations. Making this analogy between obstructed swelling and indentation allows us to apply lessons from a large body of literature on linear contact mechanics and poroelasticity to derive expressions for the stress tensor in the hydrogel [30; 31; 50; 51].
Assuming that deformations relative to the fully swollen state are small, we linearize Eq. (1) to find effective linear elastic parameters (SI Appendix F, [52; 53]). In particular, for a 2D hydrogel, the effective Poisson's ratio \(\nu\) and Young's modulus \(E\) are given by
\[\nu =1-2n_{p}\Omega\left(n_{p}\Omega\left(1+\frac{1}{\lambda_{0}^{2} }\right)+\frac{1}{\lambda_{0}^{2}(\lambda_{0}^{2}-1)}-\frac{2\chi}{\lambda_{0} ^{4}}\right)^{-1}, \tag{2}\] \[E =2(1+\nu)n_{p}k_{B}T, \tag{3}\]
where \(\lambda_{0}\) is the principal stretch corresponding to the fully swollen state. The expression for the Poisson's ratio can be understood as its value for an incompressible 2D solid, \(\nu=1\), minus a correction--reflecting the fact that the compressibility of the swollen hydrogel is generated via solvent transport. This linearization also yields the shear modulus, \(G_{0}=n_{p}k_{B}T\), which we use to normalize stresses throughout this paper.
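A direct transcription of Eqs. (2)-(3) is given below, with stresses measured in units of \(k_{B}T/\Omega\) so that the shear modulus is simply \(G_{0}\Omega/k_{B}T=n_{p}\Omega\); treating the fully-swollen stretch \(\lambda_{0}\) as a known input is an assumption of this sketch (in the model it follows from the \(\mu=0\) swelling equilibrium), and the numerical value used in the example is illustrative only.

```python
import numpy as np

def effective_elastic_parameters(np_omega, chi, lam0):
    """Effective 2D Poisson ratio and Young modulus of the swollen gel, Eqs. (2)-(3).
    E and G0 are returned in units of k_B*T / Omega."""
    bracket = (np_omega * (1.0 + 1.0 / lam0**2)
               + 1.0 / (lam0**2 * (lam0**2 - 1.0))
               - 2.0 * chi / lam0**4)
    nu = 1.0 - 2.0 * np_omega / bracket
    E = 2.0 * (1.0 + nu) * np_omega          # E = 2 (1 + nu) n_p k_B T
    G0 = np_omega                            # G0 = n_p k_B T
    return nu, E, G0

print(effective_elastic_parameters(np_omega=0.001, chi=0.3, lam0=3.8))
```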
Given these effective elastic parameters, we solve for the stresses in the hydrogel as a 2D linear contact mechanics problem (SI Appendix G, [54; 55]). This approach provides expressions for the stress tensor \(\sigma_{ij}\) along the line directly beneath the top obstacle as a function of \(y\) as shown in Fig. 4B:
\[\sigma_{xx} =\frac{2\zeta}{\pi}\Bigg{(}\frac{2{r^{*}}^{3}}{({r^{*}}^{2}+y^{2} )^{2}}-\frac{1}{r^{*}}-\frac{2(r^{*}-y)}{a^{2}}\] \[+\frac{2(r^{*}-y)^{2}+a^{2}}{a^{2}\sqrt{(r^{*}-y)^{2}+a^{2}}} \Bigg{)}, \tag{4}\] \[\sigma_{yy} =\frac{2\zeta}{\pi}\left(\frac{1}{r^{*}+y}-\frac{1}{r^{*}}+ \frac{2r^{*}y^{2}}{(r^{*2}+y^{2})^{2}}+\frac{1}{\sqrt{(r^{*}-y)^{2}+a^{2}}} \right),\] (5) \[\sigma_{xy} =0, \tag{6}\]
where \(\zeta<0\) is the force applied to the indenters and the half contact width \(a\) is
\[a=\sqrt{-\frac{4\zeta}{E\pi\left(\frac{1}{r_{\mathrm{obs}}}+\frac{1}{r^{*}} \right)}}, \tag{7}\]
as defined in Fig. 4B. Note that \(y\) is defined with respect to the hydrogel's unobstructed, fully-swollen state and ranges from \(-r^{*}\) to \(r^{*}\).
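These expressions are straightforward to evaluate numerically; the sketch below tabulates \(\sigma_{xx}(y)\) and \(\sigma_{yy}(y)\) for a prescribed (negative) indenter force \(\zeta\), with the specific numerical values chosen purely for illustration.

```python
import numpy as np

def contact_stresses(y, zeta, E, r_star, r_obs):
    """Eqs. (4)-(7): stresses on the line beneath the top obstacle (requires zeta < 0)."""
    a = np.sqrt(-4.0 * zeta / (E * np.pi * (1.0 / r_obs + 1.0 / r_star)))
    d = r_star - y
    sxx = (2.0 * zeta / np.pi) * (2.0 * r_star**3 / (r_star**2 + y**2)**2
                                  - 1.0 / r_star
                                  - 2.0 * d / a**2
                                  + (2.0 * d**2 + a**2) / (a**2 * np.sqrt(d**2 + a**2)))
    syy = (2.0 * zeta / np.pi) * (1.0 / (r_star + y)
                                  - 1.0 / r_star
                                  + 2.0 * r_star * y**2 / (r_star**2 + y**2)**2
                                  + 1.0 / np.sqrt(d**2 + a**2))
    return sxx, syy

y = np.linspace(0.0, 1.0, 201)   # from the gel center to the contact point
sxx, syy = contact_stresses(y, zeta=-0.05, E=1.0, r_star=1.0, r_obs=0.3)
```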
Given these analytical expressions, we next ask: Under what confinement regimes (as parameterized by \(\Delta/r^{*}\)) does this linearized theory work, and when does it break down? And since we would like to gain an intuitive understanding of obstructed swelling-induced fracture, how does the maximal principal tensile stress -- which can be used to approximate material strength [34] -- vary with confinement? To address these questions, we first re-cast Eqs. (4)-(5) in terms of \(\Delta/r^{*}\) to facilitate direct comparison with the results of the nonlinear simulations.
Figure 3: **Finite element simulations quantify how the equilibrium strains and stresses that develop in the hydrogel depend on obstacle geometry.** (A-D) Maps of the solvent concentration per unit reference area with the finite element grid superimposed (left column) and the principal stresses (right column) in the hydrogel at equilibrium, normalized by the area of the solvent molecule \(\Omega\) and the fully-swollen hydrogel shear modulus \(G_{0}\), respectively. Confinement increases from top to bottom, in this case by changing the obstacle spacing with fixed \(r_{\mathrm{obs}}/r^{*}=0.3\); thus, \(\Delta/r^{*}=1-r_{\mathrm{ctr}}/r^{*}=0.02,0.1,0.5,0.6\) from top to bottom, where \(\Delta\) is the difference between the unconfined swollen hydrogel radius \(r^{*}\) and the distance to an obstacle from the center \(r_{\mathrm{ctr}}\). Each mesh point in the right column bears two perpendicular lines, oriented and scaled according to the direction and magnitude of the principal stresses \(\sigma_{1}\), \(\sigma_{2}\) at that point, and colored such that compressive stresses are blue and tensile stresses are red. The stresses exceed the range of the color bar for (C-D). (E-F) Points show the maximum principal tensile and compressive stresses obtained from the simulations, taken along the line connecting the hydrogel center to an obstacle center, at this same fixed \(r_{\mathrm{obs}}/r^{*}=0.3\) as a function of \(\Delta/r^{*}\). Stresses are again normalized by \(G_{0}\). The values of \(\Delta/r^{*}\) corresponding to (A–D) are marked by the grey diamonds. The predictions of linear indentation theory (solid lines) agree well with the simulation data for small \(\Delta/r^{*}\) (purple shaded region). With increasing \(\Delta/r^{*}\), the maximum tension exceeds linear predictions, but can be reproduced by a _geometrically_ nonlinear elastic theory with a linear constitutive law (blue shaded region, SI Appendix H). For compression, the linear theory is accurate over a larger range of \(\Delta/r^{*}\), but the geometrically nonlinear theory cannot explain the deviations; instead, an elastic model incorporating _material_ nonlinearities (Eq. (10), dashed line) better captures the data. As \(\Delta/r^{*}\) increases even further, the hydrogel exhibits a symmetry-breaking instability (D, red shaded region).
To do so, we integrate the strain \(u_{yy}=\frac{\sigma_{yy}}{E}-\frac{\nu}{E}\sigma_{xx}\) using Eqs. (4)-(5) to find the displacement at the surface of the indenter relative to the center of the hydrogel, and expand the result to linear order in \(a/r^{*}\). This procedure gives
\[\Delta=-\frac{\zeta}{E\pi}\left(\ln\left(\frac{16r^{*2}}{a^{2}}\right)+\frac{1 }{2}(\pi-6-\pi\nu)\right), \tag{8}\]
which we then invert, apply Eqs. (2)-(3), and substitute the resulting expression for \(\zeta(\Delta)\) into Eqs. (4)-(5). The resulting expressions for \(\sigma_{xx}(\Delta)\) and \(\sigma_{yy}(\Delta)\) cannot be expressed in terms of elementary functions, so we omit them; they are shown by the solid lines in Fig. 4 for the illustrative case of \(r_{\rm obs}/r^{*}=0.3\), while the symbols show the results of the full nonlinear simulations for comparison.
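In practice this inversion is performed numerically; one simple way to do so, sketched below, is to root-find Eq. (8) for the force \(\zeta\) that produces a target indentation \(\Delta\). The bracketing interval and the parameter values in the example are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def delta_of_zeta(zeta, E, nu, r_star, r_obs):
    """Eq. (8), with the contact half-width a(zeta) from Eq. (7); requires zeta < 0."""
    a2 = -4.0 * zeta / (E * np.pi * (1.0 / r_obs + 1.0 / r_star))
    return -zeta / (E * np.pi) * (np.log(16.0 * r_star**2 / a2)
                                  + 0.5 * (np.pi - 6.0 - np.pi * nu))

def zeta_of_delta(delta, E, nu, r_star, r_obs):
    """Invert Eq. (8) for the indenter force at a prescribed indentation depth."""
    f = lambda z: delta_of_zeta(z, E, nu, r_star, r_obs) - delta
    return brentq(f, -E * r_star, -1e-12 * E * r_star)

zeta = zeta_of_delta(0.05, E=1.0, nu=0.4, r_star=1.0, r_obs=0.3)
```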
As expected, when \(\Delta/r^{*}\ll 1\), the linearized indentation solution agrees well with the nonlinear simulation results, while as \(\Delta/r^{*}\) increases, the discrepancy between the two becomes more apparent. In particular, as exemplified by the data for \(\Delta/r^{*}=0.16\) (dark blue circles) and \(\Delta/r^{*}=0.13\) (yellow squares) in Fig. 4, the linear solution underestimates the compression at both the center \(y=0\) and boundary \(y=r^{*}\) of the hydrogel, as well as the tension that builds up at \(y/r^{*}\approx 0.75\) (Fig. 4A, inset). Indeed, though tension (\(\sigma_{xx}>0\)) does appear in the linear solutions for small \(\Delta/r^{*}\), it disappears with increasing \(\Delta/r^{*}\), in contrast to the simulation results (see arrow in Fig. 3C, for example).
Since we are interested in fracture behavior, we thus focus our attention on the maximal value of this tensile stress for the same illustrative case of \(r_{\rm obs}/r^{*}=0.3\). By symmetry, we expect the largest stresses to lie beneath each obstacle, since stresses must go to zero at the hydrogel boundary away from obstacles in the weak confinement regime. Moreover, because the straight edges of the finite element mesh can introduce spurious tensile forces at the edge of the hydrogel-obstacle contact (described further in SI Appendix I), we plot the maximum and minimum values of the principal stresses along the \(x=0\) line shown in the schematic inset of Fig. 4B. The results are displayed in Fig. 3(E,F). As noted in Fig. 4A, the linear theory only predicts the presence of tension for small confinement before deviating from the nonlinear simulation results at \(\Delta/r^{*}\gtrsim 0.02\), as shown by the solid line and points in Fig. 3E, respectively. Interestingly, however, linear theory captures the maximum compressive stress over a broader range of confinement, shown by the solid line in Fig. 3F, which agrees well with the simulation data up to \(\Delta/r^{*}\approx 0.2\).
Thus, while linear indentation theory can predict both tensile and compressive stresses during obstructed swelling in weak confinement, it underpredicts both for larger deformations--suggesting that the assumptions made in the linear theory are no longer valid. We revisit these assumptions for both tension and compression in the next two sections, respectively.
### Tension beyond the linear regime
The linear theory in the previous section relies on a number of assumptions that can fail as deformations increase:
1. The effective elastic parameters [Eqs. (2)-(3)] are independent of strain,
2. The stress is linearly related to the strain,
3. The strain tensor is linear in the displacements.

Figure 4: **When a hydrogel is weakly confined, linear indentation theory can be used to predict the stresses, but misses key features in stronger confinement.** (A, B) Stress components \(\sigma_{xx}\) and \(\sigma_{yy}\), respectively, determined from simulations (points) and linear theory (lines) along a line running from the center of the hydrogel (\(x=y=0\)) to the point of contact with the top obstacle (\(x=0,\;y=r^{*}\)) with fixed \(r_{\rm obs}/r^{*}=0.3\). The inset to (B) defines the variables used: the hydrogel-obstacle contact width is \(2a\) and the indenter displacement is \(\Delta\), defined relative to the undeformed hydrogel radius \(r^{*}\) (dashed orange line). As the indentation depth \(\Delta/r^{*}\equiv 1-r_{\rm ctr}/r^{*}\) increases, the discrepancy between the linear theory and simulations increases, as expected. With increasing confinement, tension (\(\sigma_{xx}>0\)) builds up at \(r_{\rm ctr}/r^{*}\approx 0.75\), as shown by the magnified inset in A.
To assess the validity of these assumptions, we compare the results of our full nonlinear simulations to those of more complex elastic models that incorporate _material_/_geometric_ nonlinearities.
First, we explore the limits of assumption 1 by relaxing assumptions 2 & 3. Specifically, we compare the hydrogel simulation results to those of a compressible neo-Hookean elastic material with elastic parameters given by Eqs. (2)-(3). As detailed in SI Appendix F, the neo-Hookean model closely reproduces the stress profiles of the hydrogel simulations over a broad range of \(\Delta/r^{*}\) up to \(\approx 0.4\), well beyond the limits of the linear theory at \(\sim 0.02\). Therefore, nonlinearities due to the effective elastic parameter mapping can be neglected up to this point.
Next, we explore the limits of assumption 2 by relaxing assumption 3. Specifically, we use a St. Venant-Kirchhoff elastic model with strain tensor \(u_{ij}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}+\frac{\partial u_{k}}{\partial x_{i}}\frac{\partial u_{k}}{\partial x_{j}}\right)\); thus, the strain tensor is nonlinear in displacements, but we still require that the hydrogel material follows a linear constitutive law. Note that derivatives are taken with respect to coordinates in the unobstructed, fully-swollen state, denoted here with lowercase letters for simplicity of presentation. Intriguingly, as detailed in SI Appendix H, the St. Venant-Kirchhoff model quantitatively reproduces the maximum principal tensile stress in the hydrogel simulations up to \(\Delta/r^{*}\approx 0.1\), well beyond the limit of the linear theory at \(\approx 0.02\). Hence, the excess tension \(\sigma_{xx}\) that develops beneath the obstacles just beyond the linear regime is driven by _geometric_ nonlinearity, and does not require a nonlinear constitutive relationship. At even larger displacements \(\Delta/r^{*}>0.1\), the St. Venant-Kirchhoff simulations are unstable and we expect that both geometric and material nonlinearities contribute to the tensile stress.
How exactly does geometric nonlinearity generate tension during obstructed swelling? We answer this question using an illustrative argument reminiscent of the derivation of the Foppl-von Karman equation [56]. As detailed in _Materials and Methods_, we first find the variation of the integrated St. Venant-Kirchhoff strain-energy function with respect to displacements, which can be written in terms of the second Piola-Kirchhoff stress tensor, \(\sigma_{ij}^{PK}\). In 2D, this quantity gives the stress component in a material direction \(i\) perpendicular to a line that has unit length and normal \(j\) in the reference configuration [57]. We then make approximations specific to our obstacle geometry. Ultimately, we find
\[\sigma_{xx}^{PK}\approx-\frac{\partial\sigma_{yy}^{PK}}{\partial y}\left(1+ \frac{\Delta}{r^{*}}\right)r_{\text{obs}}. \tag{9}\]
Since \(\sigma_{yy}^{PK}\) becomes more negative as \(y\) increases, \(\frac{\partial\sigma_{yy}^{PK}}{\partial y}<0\) (Fig. 4B), and therefore \(\sigma_{xx}^{PK}>0\) indicating tension. Thus, geometric nonlinearity generates tension beneath an indenter perpendicular to the indentation direction, qualitatively matching our simulations.
### Compression beyond the linear regime
A notable result shown in Fig. 3 is that while linear indentation theory predicts tensile stresses for \(\Delta/r^{*}<0.02\), it captures the compressive stresses over a broader range, up to \(\Delta/r^{*}\approx 0.2\). Which nonlinearities drive the deviations that arise at even larger displacements? We answer this question by following the same procedure as in the previous section, detailed further in SI Appendix F & H. In contrast to the case of tension, we do not find any parameters for which the St. Venant-Kirchhoff model is more accurate than the linear model. Furthermore, as shown in Fig. 3F, past the linear regime, the compressive (Cauchy) stress underneath the top obstacle scales like that of a compressible neo-Hookean elastic material experiencing uniaxial compression, with the principal stretch parallel to indentation set to \(\lambda_{1}=1-\Delta/r^{*}\) and the principal stretch perpendicular to the indentation set to 1:
\[\sigma_{yy}\sim G_{0}\left(1-\frac{\Delta}{r^{*}}\right)+\frac{2G_{0}\nu}{1- \nu}\frac{\ln\left(1-\frac{\Delta}{r^{*}}\right)}{\left(1-\frac{\Delta}{r^{*} }\right)}-\frac{G_{0}}{\left(1-\frac{\Delta}{r^{*}}\right)}. \tag{10}\]
Thus, unlike the case of tension, deviations from the linear theory do not arise from geometric nonlinearities and can instead be attributed to _material_ nonlinearities.
### Symmetry-breaking instability
A striking phenomenon arises in our simulations as the separation between obstacles decreases further: as shown in Fig. 3D, the hydrogel displays a symmetry-breaking instability and swells preferentially along a diagonal. Why does this instability arise? Inspecting the spatial distribution of compressive stresses provides a clue. As the hydrogel swells in increasing amounts of confinement, its central core becomes increasingly compressed (see, e.g., Fig. 3C-D). Compressing this circular core along a _single_ axis, forming an ellipse, requires less energy than does compressing this core isotropically. Thus, as confinement increases, one expects the case of asymmetric swelling to be energetically preferred, leading to this instability--as described further in SI Appendix J.
## III Comparison between simulations and experiments
Our theoretical analyses and simulations capture the essential features of the hydrogel deformation observed in experiments (e.g., Figs. 1B and 3C). They also enabled us to explore how stresses develop during obstructed swelling more generally (Figs. 3-4). While our model is not suitable to directly treat the swelling-induced self-fracture observed experimentally, the simulated stresses help rationalize this phenomenon. To this end, we compare the simulations to the experimental state diagram
shown in Fig. 2 by plotting the maximum principal tensile and compressive stresses as a function of \(r_{\rm obs}\) and \(r_{\rm ctr}\). The results, shown in Fig. 5, bear a compelling resemblance to the experimental results. In particular, the convexity and shape of the experimental fracture boundary are similar to the simulated contours of maximum principal stress. Indeed, appealing to a commonly-used fracture criterion for brittle materials [34], we expect that hydrogel fracture occurs when the maximum tensile stress exceeds a threshold -- whose exact value would establish the position of the experimental fracture boundary in Fig. 5A. We conjecture that this threshold is reached prior to the symmetry-breaking instability revealed by the simulations, as we do not observe it in our experiments; experiments using tougher hydrogels than those studied here may be a useful way to probe this deformation mode in future work.
## IV Discussion
Despite the ubiquity of obstructed growth and expansion in our everyday lives, how exactly this process generates large, spatially non-uniform stresses in a body -- and how these stresses influence its subsequent growth/expansion in turn -- has remained challenging to systematically study. One reason is the lack of model systems in which this intricate coupling between growth and stress can be probed both experimentally and theoretically. We addressed this need by studying the swelling of spherical hydrogel beads in obstacle arrays of tunable geometries. Our experiments revealed a striking phenomenon: under weak confinement, a hydrogel retains its integrity and assumes a symmetric, four-lobed clover-like shape, while in stronger confinement, it repeatedly tears itself apart as it swells. We elucidated the underlying physics by adopting established models of hydrogel swelling to map the tensile and compressive stresses arising during swelling. In particular, we found that when a hydrogel is weakly deformed, stresses are well-described by linear indentation theory, while as a hydrogel is increasingly deformed, geometric and material nonlinearities engender large tensile and compressive stresses tangential and normal to the obstacles, respectively, driving the hydrogel towards fracture.
Because our study represents a first step toward fully unravelling the mechanics of obstructed growth and expansion, it necessarily has limitations. For example, while the experimental results shown in Fig. 2 revealed a fracture threshold that varies with obstacle geometry, quantitative comparison to theory and simulation will require more precise control over the system geometry and dimensionality [58; 59], both in experiments and in the model, along with a more detailed treatment of the microscopic processes underlying fracturing [60; 61; 62; 63; 64; 65]. Moreover, many more experimental trials near this threshold and with hydrogels of varying mechanical properties, along with high-resolution imaging of crack propagation [66], will be useful in characterizing the details of the fracturing process, which likely depend on the presence of microscopic imperfections in the hydrogel. Finally, although here we restricted our attention to the case of rigid obstacles, many scenarios involve growth/expansion around _compliant_ constraints--e.g., the development of biofilms, tissues, and organs in the body [67; 68; 69; 70], with potential implications for biological function [71; 72]. Extending our study to the case of deformable obstacles would therefore be a useful direction for future work.
Our results may be especially relevant to diverse applications of hydrogels, and other swellable materials, that frequently involve their confinement in tight and tortuous spaces. For example, driven by growing demands for food and water, hydrogels are increasingly being explored as additives to soils to absorb and release water to plants and therefore reduce the burden for irrigation [42; 43; 44; 45; 37]. They are also widely adopted in other applications, such as oil recovery, construction, mechanobiology, and filtration, all of which involve hydrogel swelling in confinement [38; 39; 40; 41]. A common assumption made in all these cases is that the hydrogel retains integrity as it swells; however, our study indicates that these applications should be evaluated for the possibility of swelling-induced self-fracture.
Figure 5: **Simulations predict that hydrogel stresses grow as obstacles are made smaller and brought closer together in a manner similar to the experimental findings.** (A, B) Most positive (tensile) principal stress \(\sigma_{\rm max}\) and magnitude of the most negative (compressive) principal stress \(|\sigma_{\rm min}|\), respectively, again normalized by the fully-swollen shear modulus \(G_{0}\). Each (light or dark) green square corresponds to a simulation. As in Fig. 2, the grey region corresponds to overlapping obstacles. Note that we take the maxima/minima over the entire mesh, rather than just beneath an obstacle, to avoid non-monotonic behavior in the instability regime.
Indeed, fracture could lead to the production and dispersal of many small hydrogel fragments, potentially reducing their utility and leading to environmental contamination. This process should therefore be carefully considered in a wide range of real-world contexts.
## Materials and Methods
### Experimental details
To create the obstacle array, we 3D print cylindrical columns in Clear v4 resin using a Form3+ industrial 3D printer (Formlabs), and cut the acrylic plates using an Epilog Laser Mini 24 laser cutter and engraving system. We secure the columns to the acrylic plates using a twist-and-lock mechanism. The hydrogels are polyacrylamide beads ("water gel beads" obtained from Jangostor) and are stored in a screw cap container prior to experiments; as such, they experience some slight swelling due to ambient humidity. The hydrogel beads have varying sizes and colors, but all appear to have the same swelling behavior and beads of similar sizes are used for experiments (SI Appendix A). These hydrogel beads were extensively characterized in our previous experiments [33, 8], which provide additional detail.
Early in the swelling process (e.g. 2 h in Fig. 1B,C), each hydrogel appears out of focus since it has not yet made contact with the top plate (focal plane for imaging), and cusps are visible on its surface due to differential swelling as water enters the hydrogel from its outer surface [73, 74, 75, 21]. We verify that the hydrogel beads swell to an equilibrium shape without rupturing when no obstacles are present (either with or without the 40 mm \(z\)-confinement), indicating the transient stresses that engender these cusps are insufficient to drive fracture as in other less tough gels [76, 66]. Because the plates and obstacles are made of acrylic or a polymeric resin, respectively, we do not expect or see evidence of adhesion between the polyacrylamide hydrogel surface and the confining surfaces. For experiments in which we image the entire process of obstructed swelling, we verify that the hydrogel reaches an equilibrium shape (in the cases of no fracturing) when its size/shape does not noticeably change for at least three hours. In other experiments where we do not image the entire process of obstructed swelling, we verify that equilibrium is reached by waiting an additional 12 hours. Each symbol in Fig. 2 represents a single experiment.
### Simulation details
We create meshes using FEniCS's built-in mesh generation function with cell size set such that there are 30 vertices along the radius. We confirmed that this mesh resolution was sufficient for numerical convergence. We set \(n_{p}\Omega=0.001\) and \(\chi=0.3\). The penalty function is integrated over the hydrogel boundary, and is given by
\[\frac{F_{\rm pen}}{2\pi r_{0}k_{B}T}=\frac{p}{4\pi r_{0}}\sum_{i}\langle r_{ \rm obs}^{2}-(\mathbf{x}-\mathbf{x}_{i})^{2}\rangle_{+}^{2}, \tag{11}\]
where \(p\) is the penalty strength, \(\mathbf{x}_{i}\) is the position of the center of the \(i^{\rm th}\) obstacle, \(r_{0}\) is the dry reference radius of the hydrogel, and the sum is over all obstacles. The brackets \(\langle\cdots\rangle_{+}\) take the positive part of the argument, defined as \(\langle x\rangle_{+}=\frac{x+|x|}{2}\). Thus, when evaluated at positions away from any obstacles, the penalty function is zero, but takes a large positive value inside the obstacles. We set \(p=6.25\pi/r_{0}^{4}\) to generate the data shown in this text, and have verified that using \(p=62.5\pi/r_{0}^{4}\) produces the same results.
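As an illustration, the bracketed overlap term can be evaluated with a few lines of NumPy (a sketch of the integrand of Eq. (11), not the production code; the function name and array layout are our own):

```python
import numpy as np

def penalty_overlap(x, centers, r_obs):
    """Sum over obstacles of <r_obs^2 - (x - x_i)^2>_+^2 at each boundary point.

    x       : (n, 2) array of positions on the hydrogel boundary
    centers : (m, 2) array of obstacle centers x_i
    r_obs   : obstacle radius

    Multiplying the result by p/(4*pi*r0) and integrating over the boundary
    gives the penalty energy of Eq. (11).
    """
    # squared distance of every boundary point to every obstacle center
    d2 = np.sum((x[:, None, :] - centers[None, :, :])**2, axis=-1)  # (n, m)
    overlap = np.maximum(r_obs**2 - d2, 0.0)                        # <...>_+ bracket
    return np.sum(overlap**2, axis=-1)                              # sum over obstacles
```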
Following the suggestions of Refs. [77, 78, 32], we use the backwards Euler scheme for time integration (see SI Appendix E for further discussion of dynamics), Taylor-Hood mixed elements (quadratic elements for the displacement field and linear elements for the chemical potential field), and early time ramping of the chemical potential boundary condition to ensure numerical stability. The Newton-Raphson method is used at each time step, and equilibrium is defined by when zero iterations are required for a step to complete.
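For reference, the Taylor-Hood mixed space described above could be set up as follows in the legacy `dolfin` Python interface (a minimal sketch with a placeholder square mesh; the actual simulations mesh a disk and assemble the full variational problem):

```python
from dolfin import (UnitSquareMesh, VectorElement, FiniteElement,
                    MixedElement, FunctionSpace)

mesh = UnitSquareMesh(32, 32)   # placeholder mesh; the paper meshes a disk
# Taylor-Hood pair: quadratic displacements, linear chemical potential
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)
W = FunctionSpace(mesh, MixedElement([P2, P1]))
```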
To find the maximum/minimum principal Cauchy stresses, we first calculate the eigenvalues of the stress tensor for a given displacement field. We project these eigenvalue fields onto a function space of discontinuous Lagrange elements of order 1. We then compare the eigenvalues defined on this mesh. The largest positive eigenvalue is the maximum principal (tensile) stress \(\sigma_{\rm max}\), and the most negative eigenvalue is the minimum principal (compressive) stress, \(\sigma_{\rm min}\). To find the minimum and maximum stresses beneath an obstacle (data in Fig. 3(E,F)), we instead project the stress tensor onto the vertical line directly beneath the top obstacle. The minimum value of \(\sigma_{yy}\) and the maximum value of \(\sigma_{xx}\) along this line are plotted as the minimum and maximum stresses below the top obstacle, respectively.
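The eigenvalue step can be summarised by a short NumPy sketch (ours; in the simulations the stress lives on a finite element mesh rather than a plain array):

```python
import numpy as np

def principal_stress_extrema(sigma):
    """sigma: (n, 2, 2) array of symmetric Cauchy stress tensors sampled on
    the mesh. Returns (sigma_max, sigma_min): the largest tensile and most
    negative compressive principal stresses over the field."""
    eig = np.linalg.eigvalsh(sigma)        # eigenvalues, sorted ascending
    return eig[:, -1].max(), eig[:, 0].min()
```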
### Tension generated by geometric nonlinearity
We describe the argument leading up to Eq. (9) in more detail, which demonstrates how tension can appear beneath an obstacle when a nonlinear strain tensor is used. The variation of the integrated St. Venant-Kirchhoff strain-energy function with respect to displacements in terms of the second Piola-Kirchhoff stress tensor is
\[\delta W=\int\sigma_{ij}^{PK}\delta u_{ij}dA=\int\sigma_{ij}^{PK}\left(\frac {\partial\delta u_{i}}{\partial x_{j}}+\frac{\partial u_{k}}{\partial x_{ i}}\frac{\partial\delta u_{k}}{\partial x_{j}}\right)dA. \tag{12}\]
Upon integrating by parts, assuming we cannot vary the displacements at the boundaries due to the presence of
obstacles, we find
\[\delta W =\int\left(\frac{\partial\sigma_{ij}^{PK}}{\partial x_{j}}\delta u_{i}+\frac{\partial}{\partial x_{j}}\left(\sigma_{ij}^{PK}\frac{\partial u_{k}}{\partial x_{i}}\right)\delta u_{k}\right)dA,\] \[=\int\left(\frac{\partial\sigma_{ij}^{PK}}{\partial x_{j}}\delta u_{i}+\frac{\partial\sigma_{ij}^{PK}}{\partial x_{j}}\frac{\partial u_{k}}{\partial x_{i}}\delta u_{k}+\sigma_{ij}^{PK}\frac{\partial^{2}u_{k}}{\partial x_{i}\partial x_{j}}\delta u_{k}\right)dA. \tag{13}\]
Next, in order to make approximations specific to our geometry, we explicitly list all the terms that appear for a two-dimensional solid. Since we will examine stresses directly beneath an obstacle, we set \(\sigma_{xy}^{PK}=0\) by symmetry, yielding:
\[\delta W \approx\int dA\delta u_{x}\Bigg{(}\frac{\partial\sigma_{xx}^{PK}} {\partial x}\left(1+\frac{\partial u_{x}}{\partial x}\right)+\frac{\partial \sigma_{yy}^{PK}}{\partial y}\frac{\partial u_{x}}{\partial y}\] \[+\sigma_{xx}^{PK}\frac{\partial^{2}u_{x}}{\partial x^{2}}+\sigma _{yy}^{PK}\frac{\partial^{2}u_{x}}{\partial y^{2}}\Bigg{)}\] \[+\int dA\delta u_{y}\Bigg{(}\frac{\partial\sigma_{yy}^{PK}}{ \partial y}\left(1+\frac{\partial u_{y}}{\partial y}\right)+\frac{\partial \sigma_{xx}^{PK}}{\partial x}\frac{\partial u_{y}}{\partial x}\] \[+\sigma_{xx}^{PK}\frac{\partial^{2}u_{y}}{\partial x^{2}}+\sigma _{yy}^{PK}\frac{\partial^{2}u_{y}}{\partial y^{2}}\Bigg{)}. \tag{14}\]
In equilibrium, the coefficients of \(\delta u_{x}\) and \(\delta u_{y}\) must be zero (in the absence of body forces). We focus our attention on the coefficient of \(\delta u_{y}\). If we consider the internal stresses directly beneath the obstacle, we can set \(\partial u_{y}/\partial x\) to zero by symmetry. We approximate the deformation as an affine contraction in the \(y\) direction, which sets \(\partial u_{y}/\partial y\approx\Delta/r^{*}\) and \(\partial^{2}u_{y}/\partial y^{2}\approx 0\). We assume that the curvature of the \(y\) displacement, \(\frac{\partial^{2}u_{y}}{\partial x^{2}}\), is determined by the curvature of the obstacle, \(1/r_{\rm obs}\). With these substitutions, the equilibrium condition becomes
\[\frac{\partial\sigma_{yy}^{PK}}{\partial y}\left(1+\frac{\Delta}{r^{*}} \right)+\frac{\sigma_{xx}^{PK}}{r_{\rm obs}}\approx 0. \tag{15}\]
Solving for \(\sigma_{xx}^{PK}\), we find
\[\sigma_{xx}^{PK}\approx-\frac{\partial\sigma_{yy}^{PK}}{\partial y}\left(1+ \frac{\Delta}{r^{*}}\right)r_{\rm obs}. \tag{16}\]
Just beyond the linear regime, \(\sigma_{yy}^{PK}\) should follow the same trends as \(\sigma_{yy}\) predicted by the linear theory. In Fig. 4B, we observe that \(\sigma_{yy}\) decreases as \(y\) increases (\(\partial\sigma_{yy}/\partial y<0\)) until it plateaus at \(y/r^{*}\approx 1\). Thus, \(\partial\sigma_{yy}/\partial y\) reaches its minimum a small distance away from the obstacle boundary, and our argument predicts that the largest geometric nonlinearity-generated tensions will appear there. This location is indeed where the greatest tensile stresses appear in simulations (arrow in Fig. 3C). We can also compare the magnitude of the tension predicted by Eq. (16) using \(\sigma_{yy}\) from the linear theory with the tension measured in hydrogel simulations--however, we find an estimate of the tension that is approximately five times larger than the maximum tension found via simulations and scales more gradually with increasing \(\Delta/r^{*}\). This discrepancy is not surprising given the approximations made in this calculation.
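For completeness, Eq. (16) can be evaluated numerically from any compressive stress profile; a minimal sketch (our own, taking \(\sigma_{yy}(y)\) as an input array, e.g. the linear-theory profile of Fig. 4B) is:

```python
import numpy as np

def tension_from_geometric_nonlinearity(y, sigma_yy, delta_over_rstar, r_obs):
    """Evaluate Eq. (16): sigma_xx ~ -(d sigma_yy / dy) (1 + Delta/r*) r_obs,
    given a compressive stress profile sigma_yy(y) beneath the obstacle."""
    dsyy_dy = np.gradient(sigma_yy, y)             # finite-difference derivative
    return -dsyy_dy * (1.0 + delta_over_rstar) * r_obs
```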
###### Acknowledgements.
It is a pleasure to acknowledge John Kolinski, Amaresh Sahu, and Carolyn Bull for helpful discussions, as well as funding from the NSF through the Princeton MRSEC DMR-2011750, the Project X Innovation Fund, a Princeton Materials Science Postdoctoral Fellowship (A.P.), and the Camille Dreyfus Teacher-Scholar Program of the Camille and Henry Dreyfus Foundation (S.S.D).
## References
* Ding _et al._ [2019]M. Ding, X. Han, S. Wang, T. F. Gast, and J. M. Teran, ACM Transactions on Graphics (TOG) **38**, 1 (2019).
* Taylor _et al._ [2021]I. Taylor, K. Lehner, E. McCaskey, N. Nirmal, Y. Ozkan-Aydin, M. Murray-Cooper, R. Jain, E. W. Hawkes, P. C. Ronald, D. I. Goldman, _et al._, Proceedings of the National Academy of Sciences **118**, e2018940118 (2021).
* Borsdorf and Haller [2020]A. Borsdorf and A. Haller, in _The Elgar Companion to Geography, Transdisciplinary and Sustainability_ (Edward Elgar Publishing, 2020) pp. 140-154.
* Byrne _et al._ [2015]R. A. Byrne, M. Joner, and A. Kastrati, European heart journal **36**, 3320 (2015).
* Kuhl _et al._ [2007]E. Kuhl, R. Maas, G. Himpel, and A. Menzel, Biomechanics and modeling in mechanobiology **6**, 321 (2007).
* Cheng and Zhang [2021]J. Cheng and L. T. Zhang, Journal of Biomedical Science and Engineering **14**, 33 (2021).
* Cabeza _et al._ [2010]L. F. Cabeza, A. Castell, M. Medrano, I. Martorell, G. Perez, and I. Fernandez, Energy and Buildings **42**, 630 (2010).
* Louf _et al._ [2021]J.-F. Louf, N. B. Lu, M. G. O'Connell, H. J. Cho, and S. S. Datta, Science Advances **7**, eabd2711 (2021).
* Beebe _et al._ [2000]D. J. Beebe, J. S. Moore, J. M. Bauer, Q. Yu, R. H. Liu, C. Devadoss, and B.-H. Jo, Nature **404**, 588 (2000).
* Alben [2022]S. Alben, Proceedings of the Royal Society A **478**, 20210742 (2022).
* Chu _et al._ [2018]E. K. Chu, O. Kilic, H. Cho, A. Groisman, and A. Levchenko, Nature communications **9**, 1 (2018).
* Streichan _et al._ [2014]S. J. Streichan, C. R. Hoerner, T. Schneidt, D. Holzer, and L. Hufnagel, Proceedings of the National Academy of Sciences **111**, 5586 (2014).
* Bengough _et al._ (1997)A. Bengough, C. Croser, and J. Pritchard, in _Plant Roots-From Cells to Systems_ (Springer, 1997) pp. 107-116.
* Alric _et al._ (2022)B. Alric, C. Formosa-Dague, E. Dague, L. J. Holt, and M. Delarue, Nature Physics **18**, 411 (2022).
* Ambrosi _et al._ (2019)D. Ambrosi, M. Ben Amar, C. J. Cyron, A. DeSimone, A. Goriely, J. D. Humphrey, and E. Kuhl, Journal of the Royal Society Interface **16**, 20190233 (2019).
* Goriely (2017)A. Goriely, _The mathematics and mechanics of biological growth_, Vol. 45 (Springer, 2017).
* Zhang and Khademhosseini (2017)Y. S. Zhang and A. Khademhosseini, Science **356**, eaaf3627 (2017).
* Oyen [2014]M. L. Oyen, International Materials Reviews **59**, 44 (2014), [https://doi.org/10.1179/1743280413Y.0000000022](https://doi.org/10.1179/1743280413Y.0000000022).
* Deligkaris _et al._ (2010)K. Deligkaris, T. S. Tadele, W. Olthuis, and A. van den Berg, Sensors and Actuators B: Chemical **147**, 765 (2010).
* Guo _et al._ (2020)Y. Guo, J. Bae, Z. Fang, P. Li, F. Zhao, and G. Yu, Chemical Reviews **120**, 7642 (2020).
* Tanaka _et al._ (1987)T. Tanaka, S.-T. Sun, Y. Hirokawa, S. Katayama, J. Kucera, Y. Hirose, and T. Amiya, Nature **325**, 796 (1987).
* Hong _et al._ (2008)W. Hong, X. Zhao, J. Zhou, and Z. Suo, Journal of the Mechanics and Physics of Solids **56**, 1779 (2008).
* Hong _et al._ (2009)W. Hong, Z. Liu, and Z. Suo, International Journal of Solids and Structures **46**, 3282 (2009).
* Kang and Huang (2010)M. K. Kang and R. Huang, Journal of Applied Mechanics **77** (2010).
* Marcombe _et al._ (2010)R. Marcombe, S. Cai, W. Hong, X. Zhao, Y. Lapusta, and Z. Suo, Soft Matter **6**, 784 (2010).
* Amar and Ciarletta (2010)M. B. Amar and P. Ciarletta, Journal of the Mechanics and Physics of Solids **58**, 935 (2010).
* Dervaux and Ben Amar (2012)J. Dervaux and M. Ben Amar, Annu. Rev. Condens. Matter Phys. **3**, 311 (2012).
* Kim _et al._ (2010)J. Kim, J. Yoon, and R. C. Hayward, Nature materials **9**, 159 (2010).
* Cho _et al._ (2019)H. J. Cho, N. B. Lu, M. P. Howard, R. A. Adams, and S. S. Datta, Soft matter **15**, 4689 (2019).
* Hui _et al._ (2006)C.-Y. Hui, Y. Y. Lin, F.-C. Chuang, K. R. Shull, and W.-C. Lin, Journal of Polymer Science Part B: Polymer Physics **44**, 359 (2006).
* Hu _et al._ (2010)Y. Hu, X. Zhao, J. J. Vlassak, and Z. Suo, Applied Physics Letters **96**, 121904 (2010).
* Bouklas _et al._ (2015)N. Bouklas, C. M. Landis, and R. Huang, Journal of the Mechanics and Physics of Solids **79**, 21 (2015).
* Louf and Datta (2021)J.-F. Louf and S. S. Datta, Soft matter **17**, 3840 (2021).
* Gdoutos (2020)E. E. Gdoutos, _Fracture mechanics: an introduction_, Vol. 263 (Springer Nature, 2020).
* Wang _et al._ (2022)M. Wang, M. Adda-Bedia, J. M. Kolinski, and J. Fineberg, Journal of the Mechanics and Physics of Solids **161**, 104795 (2022).
* Li _et al._ (2023)C. Li, X. Wei, M. Wang, M. Adda-Bedia, and J. M. Kolinski, Journal of the Mechanics and Physics of Solids, 105330 (2023).
* Misiewicz _et al._ (2022)J. Misiewicz, S. S. Datta, K. Lejcus, and D. Marczak, Materials **15**, 4465 (2022).
* Krafcik _et al._ (2017)M. J. Krafcik, N. D. Macke, and K. A. Erk, Gels **3**, 46 (2017).
* Kapur _et al._ (1996)V. Kapur, J. C. Charkoudian, S. B. Kessler, and J. L. Anderson, Industrial & engineering chemistry research **35**, 3179 (1996).
* Dolega _et al._ (2017)M. E. Dolega, M. Delarue, F. Ingremeau, J. Prost, A. Delon, and G. Cappello, Nature communications **8**, 14056 (2017).
* Lee _et al._ (2019)W. Lee, N. Kalashnikov, S. Mok, R. Halaoui, E. Kuzmin, A. J. Putnam, S. Takayama, M. Park, L. McCaffrey, R. Zhao, _et al._, Nature communications **10**, 144 (2019).
* Woodhouse and Johnson (1991)J. Woodhouse and M. Johnson, Agricultural water management **20**, 63 (1991).
* Demitri _et al._ (2013)C. Demitri, F. Scalera, M. Madaghiele, A. Sannino, and A. Maffezzoli, International Journal of Polymer Science **2013** (2013).
* Guilherme _et al._ (2015)M. R. Guilherme, F. A. Aouada, A. R. Fajardo, A. F. Martins, A. T. Paulino, M. F. Davi, A. F. Rubira, and E. C. Muniz, European Polymer Journal **72**, 365 (2015).
* Lejcus _et al._ (2018)K. Lejcus, M. Spitalniak, and J. Dabrowska, Polymers **10**, 271 (2018).
* Wei and Durian (2013)Y. Wei and D. J. Durian, Physical Review E **87**, 053013 (2013).
* Flory and Rehner (1943)P. J. Flory and J. Rehner, The Journal of Chemical Physics **11**, 521 (1943).
* Flory (1953)P. J. Flory, _Principles of polymer chemistry_ (Cornell University Press, 1953).
* Alnaes _et al._ (2015)M. Alnaes, J. Blechta, J. Hake, A. Johansson, B. Kehlet, A. Logg, C. Richardson, J. Ring, M. E. Rognes, and G. N. Wells, Archive of Numerical Software **3** (2015).
* Goriely _et al._ (2016)A. Goriely, J. Weickenmeier, and E. Kuhl, Physical review letters **117**, 138001 (2016).
* Weickenmeier _et al._ (2016)J. Weickenmeier, E. Kuhl, and A. Goriely, Journal of the Mechanics and Physics of Solids **96**, 572 (2016).
* Hu _et al._ (2011)Y. Hu, X. Chen, G. M. Whitesides, J. J. Vlassak, and Z. Suo, Journal of Materials Research **26**, 785 (2011).
* Bouklas and Huang (2012)N. Bouklas and R. Huang, Soft Matter **8**, 8194 (2012).
* Barber (2018)J. R. Barber, _Contact mechanics_ (Springer, 2018).
* Johnson (1987)K. L. Johnson, _Contact mechanics_ (Cambridge university press, 1987).
* Landau _et al._ (1986)L. D. Landau, A. Kosevich, L. P. Pitaevskii, and E. M. Lifshitz, _Theory of Elasticity_ (Butterworth-Heinemann, Oxford, 1986).
* Audoly and Pomeau (2010)B. Audoly and Y. Pomeau, _Elasticity and geometry: from hair curls to the non-linear response of shells_ (Oxford University Press, 2010).
* Joshi _et al._ (2022)C. Joshi, D. G. Goldstein, C. Wennerholm, E. Downey, E. Hamilton, S. Hocking, A. Andrei, J. H. Adler, and T. J. Atherton, arXiv preprint arXiv:2208.07859 (2022).
* Joshi _et al._ (2023)C. Joshi, M. Q. Giso, J.-F. Louf, S. S. Datta, and T. J. Atherton, arXiv preprint arXiv:2304.04252 (2023).
* Lai and Hu (2018)Y. Lai and Y. Hu, Soft Matter **14**, 2619 (2018).
* Kuna (2013)M. Kuna, Solid Mechanics and Its Applications **201**, 153 (2013).
* Mao and Anand (2018)Y. Mao and L. Anand, Journal of the Mechanics and Physics of Solids **115**, 30 (2018).
* Wang and Hong (2012)X. Wang and W. Hong, Soft Matter **8**, 8171 (2012).
* Baumberger and Ronsin (2020)T. Baumberger and O. Ronsin, Mechanics of Soft Materials **2**, 1 (2020).
* Yang _et al._ (2022)Y. Yang, H. Guo, Z. Du, W. Hong, T. Lu, and T. Wang, Journal of the Mechanics and Physics of Solids, 105007 (2022).
* Leslie _et al._ (2021)K.-A. Leslie, R. Doane-Solomon, S. Arora, S. J. Curley, C. Szczepanski, and M. M. Driscoll, Soft Matter **17**, 1513 (2021).
* Helmlinger _et al._ (1997)G. Helmlinger, P. A. Netti, H. C. Lichtenbeld, R. J. Melder, and R. K. Jain, Nature biotechnology **15**, 778 (1997).
* Zhang _et al._ (2021)Q. Zhang, J. Li, J. Nijjer, H. Lu, M. Kothari, R. Alert, T. Cohen, and J. Yan, Proceedings of the National Academy of Sciences **118**, e2107107118 (2021).
* Fortune _et al._ [2022]G. T. Fortune, N. M. Oliveira, and R. E. Goldstein, Physical Review Letters **128**, 178102 (2022).
* Goodwin _et al._ [2019]K. Goodwin, S. Mao, T. Guyomar, E. Miller, D. C. Radisky, A. Kosmrlj, and C. M. Nelson, Development **146**, dev181172 (2019).
* Mobius _et al._ [2015]W. Mobius, A. W. Murray, and D. R. Nelson, PLoS computational biology **11**, e1004615 (2015).
* Atis _et al._ [2019]S. Atis, B. T. Weinstein, A. W. Murray, and D. R. Nelson, Physical Review X **9**, 021058 (2019).
* Tanaka [1986]T. Tanaka, Physica A: Statistical Mechanics and its Applications **140**, 261 (1986).
* Bertrand _et al._ [2016]T. Bertrand, J. Peixinho, S. Mukhopadhyay, and C. W. MacMinn, Physical Review Applied **6**, 064010 (2016).
* Curatolo _et al._ [2017]M. Curatolo, P. Nardinocchi, E. Puntel, and L. Teresi, Journal of Applied Physics **122**, 145109 (2017).
* de Silva and Lapitsky [2016]U. K. de Silva and Y. Lapitsky, ACS applied materials & interfaces **8**, 29015 (2016).
* Zhang _et al._ [2009]J. Zhang, X. Zhao, Z. Suo, and H. Jiang, Journal of Applied Physics **105**, 093522 (2009).
* Ang _et al._ [2020]I. Ang, Z. Liu, J. Kim, C.-Y. Hui, and N. Bouklas, Journal of the Mechanics and Physics of Solids **145**, 104132 (2020).
**Supporting information (SI)**
###### Contents
* A. Hydrogel properties
* B. Fracture due to vertical confinement
* C. 3D packing experiments
* D. Dimensional reduction
* E. Dynamics
* F. Elastic moduli
* G. Indentation solutions
* H. St. Venant-Kirchhoff model
* I. Maximum tensile stresses in weak confinement
* J. Symmetry-breaking instability
### Hydrogel properties
#### Material properties
To assess the consistency of material properties between different hydrogels, we measured the density of five white beads in three different hydration states: ambient, dry, and swollen. The "ambient" state is the condition of the beads prior to any experimentation. This state is characterized by a small amount of swelling due to ambient humidity, evidenced by slight pliability but high elastic resistance. The "dry" state was created by placing hydrogel beads in an oven at \(100^{\circ}\)C for approximately 24 hours, until the material became completely rigid. These dry beads were then placed in deionized water for 96 hours to reach their fully "swollen" state. Volumes of the spherical beads for each state were extrapolated from the diameters measured with ImageJ. The results of these experiments and calculations are displayed in Figure S1.
### 3D packing experiments

Because the refractive index of the hydrogel does not match that of the glass beads, we are unable to collect clear images of the hydrogel during swelling. However, we can disassemble the packing for visual analysis post-swelling.
After a short period of 6 hours, we see that pieces of the hydrogel have already broken off and are distributed throughout the packing (Fig. S4). When this experiment was replicated with extended swelling periods of 12, 24, and 72 hours, increasing degrees of fragmentation were observed.
Hydrogel swelling can clearly generate stresses large enough to cause rupture in 3D granular media, and hydrogel fragmentation therefore may be relevant in agriculture applications. Further studies of hydrogels exposed to water in 3D granular media are an interesting and important extension of our current work.
### Dimensional reduction
In this section, we provide further discussion on our choice to use a 2D model and discuss the corrections we expect to have due to the 3D nature of our experiments. First, we argue that corrections due to the spherical shape of the hydrogel can be neglected near the midplane and thus locally approximate the spherical hydrogel as a cylinder. Then, we describe how an axially compressed hydrogel cylinder can be modeled in 2D via the generalized plane strain approximation.
To frame our discussion, we refer to Fig. S5 which shows the side view of a fully swollen hydrogel (top view shown in Fig. 1B of the main text). From this viewpoint, we clearly see that the spherical shape preferred by the hydrogel leads to \(z\)-dependent obstacle-induced stresses--the cylindrical obstacles penetrate further into the hydrogel at the \(xy\) plane halfway between the plates compared to the \(xy\) plane at the top or bottom plate.
In other words, if we consider horizontal slices of the hydrogel, each slice has a preferred radius that changes as a function of \(z\) due to the spherical geometry. We can estimate this preferred radius by neglecting the shape distortion caused by the top and bottom plates (i.e., assuming the hydrogel swells to a perfect sphere in the absence of obstacles). Given a preferred sphere radius \(R^{*}\) and defining \(z=0\) to be halfway between the plates, the preferred disk radius \(r^{*}\) of a slice of the hydrogel in the \(xy\) plane as a function of \(z\) is given by
\[r^{*}(z)=\sqrt{R^{*2}-z^{2}}\approx R^{*}-\frac{z^{2}}{2R^{*}}.\] (S1)
Indenter displacement is defined as the difference between the preferred hydrogel radius \(r^{*}(z)\) and the leading edge of the obstacle. Since the obstacles are cylinders and their shape/location is independent of \(z\), the \(z\)-dependent preferred hydrogel radius generates a \(z\)-dependent effective indenter displacement. For a fixed value of \(r_{\text{ctr}}\), the dimensionless indenter displacement as a function of \(z\) is
\[\frac{\Delta(z)}{r^{*}(z)}=\frac{r^{*}(z)-r_{\text{ctr}}}{r^{*}(z)}\approx \left(1-\frac{r_{\text{ctr}}}{R^{*}}\right)-\frac{r_{\text{ctr}}}{2R^{*}} \left(\frac{z^{2}}{R^{*2}}\right).\] (S2)
When corrections of order \(z^{2}/R^{*2}\) can be neglected, we can ignore the \(z\)-dependence and locally approximate the hydrogel as a cylinder. This approximation is reasonable close to the midplane of the hydrogel, where the confinement is greatest.
Thus, we assume our system meets the criteria of a generalized plane strain approximation near the midplane of the hydrogel at equilibrium. We next describe this approximation, first in the context of linear elasticity and then in the context of the nonlinear hydrogel model. We emphasize that this dimensional reduction only holds in equilibrium, as the dynamic features observed in experiments, including initial swelling as well as fracture itself, have 3D features that cannot be straightforwardly described in 2D.
#### Generalized plane strain: Hookean elasticity
Generalized plane strain approximations can describe a cylinder in frictionless contact with walls that impose a uniform axial contraction, experiencing tractions independent of \(z\) on its sides [3]. Accordingly, we assume that the strain tensor elements \(u_{xx}\), \(u_{xy}\) and \(u_{yy}\) are independent of \(z\), \(u_{xz}=u_{yz}=0\), and \(u_{zz}=\alpha\), a constant. Assuming
Hooke's law holds, we can write down the standard equations for the stress tensor \(\sigma_{ij}\) in terms of the strain tensor \(u_{ij}\), bulk Young's modulus \(E_{3D}\), and bulk Poisson's ratio \(\nu_{3D}\)[4].
\[\sigma_{xx} =\frac{E_{3D}}{(1+\nu_{3D})(1-2\nu_{3D})}\left((1-\nu_{3D})u_{xx}+ \nu_{3D}\left(u_{yy}+\alpha\right)\right),\] (S3) \[\sigma_{xy} =\frac{E_{3D}}{1+\nu_{3D}}u_{xy},\] (S4) \[\sigma_{yy} =\frac{E_{3D}}{(1+\nu_{3D})(1-2\nu_{3D})}\left((1-\nu_{3D})u_{yy} +\nu_{3D}\left(u_{xx}+\alpha\right)\right),\] (S5) \[\sigma_{xz} =\sigma_{yz}=0,\] (S6) \[\sigma_{zz} =\frac{E_{3D}}{(1+\nu_{3D})(1-2\nu_{3D})}\left((1-\nu_{3D}) \alpha+\nu_{3D}\left(u_{xx}+u_{yy}\right)\right),\] (S7)
Since \(\frac{\partial\sigma_{zz}}{\partial z}=\sigma_{xz}=\sigma_{yz}=0\), there are only two equations of equilibrium. Note that \(\alpha\) drops out of the equilibrium equations entirely.
To make comparisons with 2D elasticity, it will also be useful to express the strains in terms of stresses. In order to do this, we write \(\sigma_{zz}\) in terms of \(\sigma_{xx}\) and \(\sigma_{yy}\) using the relation for \(u_{zz}\).
\[\sigma_{zz}=\nu_{3D}(\sigma_{xx}+\sigma_{yy})+E_{3D}\alpha.\] (S8)
Using this relation, the elements of the strain tensor are
\[u_{xx} =\frac{1-\nu_{3D}^{2}}{E_{3D}}\sigma_{xx}-\frac{\nu_{3D}(1+\nu_{3D})}{E_{3D}}\sigma_{yy}-\nu_{3D}\alpha,\] (S9) \[u_{xy} =\frac{(1+\nu_{3D})}{E_{3D}}\sigma_{xy},\] (S10) \[u_{yy} =\frac{1-\nu_{3D}^{2}}{E_{3D}}\sigma_{yy}-\frac{\nu_{3D}(1+\nu_{3D})}{E_{3D}}\sigma_{xx}-\nu_{3D}\alpha,\] (S11) \[u_{zz} =\alpha,\qquad u_{yz}=u_{xz}=0.\] (S12)
When \(\alpha=0\), we recover the standard plane strain result.
#### Relation to 2D elastic solid
We compare the stress and strain tensors derived using the generalized plane strain approximation to those of a 2D elastic solid. For a 2D material, there is no \(z\) direction. The two equilibrium equations are therefore identical to those for generalized plane strain. However, the relationship between stress and strain differs. Strain tensor elements are given in terms of stress tensor elements as
\[u_{xx} =\frac{1}{E_{2D}}\sigma_{xx}-\frac{\nu_{2D}}{E_{2D}}\sigma_{yy},\] (S13) \[u_{xy} =\frac{(1+\nu_{2D})}{E_{2D}}\sigma_{xy},\] (S14) \[u_{yy} =\frac{1}{E_{2D}}\sigma_{yy}-\frac{\nu_{2D}}{E_{2D}}\sigma_{xx},\] (S15)
where \(E_{2D}=\frac{4\mu(\mu+\lambda)}{2\mu+\lambda}\) and \(\nu_{2D}=\frac{\lambda}{2\mu+\lambda}\) are the 2D Young's modulus and Poisson's ratio defined in terms of 2D Lamé parameters \(\mu\) and \(\lambda\). We observe that if we make the substitutions \(E_{2D}\rightarrow\frac{E_{3D}}{1-\nu_{3D}^{2}}\) and \(\nu_{2D}\rightarrow\frac{\nu_{3D}}{1-\nu_{3D}}\) and subtract \(\nu_{3D}\alpha\delta_{ij}\), we can transform the 2D strain tensor into the generalized plane strain tensor.
Therefore, if we solve for the stresses using a 2D model, we can create solutions to a corresponding generalized plane strain model using the procedure described above.
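A small numerical check of this correspondence, transcribing the compliances of Eqs. (S9), (S11), (S13) and (S15) (a sketch with arbitrary example values):

```python
import numpy as np

def strain_2d(sxx, syy, E2, nu2):
    """Purely 2D compliance, Eqs. (S13) and (S15), with sigma_xy = 0."""
    return (sxx - nu2 * syy) / E2, (syy - nu2 * sxx) / E2

def strain_gps(sxx, syy, E3, nu3, alpha):
    """Generalized plane strain compliance, Eqs. (S9) and (S11), sigma_xy = 0."""
    uxx = (1 - nu3**2) / E3 * sxx - nu3 * (1 + nu3) / E3 * syy - nu3 * alpha
    uyy = (1 - nu3**2) / E3 * syy - nu3 * (1 + nu3) / E3 * sxx - nu3 * alpha
    return uxx, uyy

E3, nu3, alpha = 1.0, 0.3, -0.05      # example values only
sxx, syy = 0.7, -0.2
u2 = strain_2d(sxx, syy, E3 / (1 - nu3**2), nu3 / (1 - nu3))   # substituted moduli
ugps = strain_gps(sxx, syy, E3, nu3, alpha)
assert np.allclose(np.array(u2) - nu3 * alpha, ugps)           # the mapping holds
```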
#### Generalized plane strain: nonlinear hydrogel theory
Thus far, we have only discussed the generalized plane strain approximation in the context of Hookean elasticity. We show here how the same kinematic assumptions affect the nonlinear hydrogel model when deformations are not necessarily small.
The free energy density of a 3D hydrogel is [5; 6]:
\[\frac{F_{\rm en}}{V_{0}k_{B}T}=\frac{n_{p}^{3D}}{2}\left(F_{iK}^{3D}F_{iK}^{3D}-3 -2\ln(\det({\bf F}^{3D}))\right)+\frac{1}{v}\left(vC_{3D}\ln\left(\frac{vC_{3D} }{1+vC_{3D}}\right)+\chi\frac{vC_{3D}}{1+vC_{3D}}\right),\] (S16)
where \({\bf F}^{3D}\) is the 3D deformation gradient tensor, \(v\) is the volume of a solvent molecule, \(n_{p}^{3D}\) is the number of polymer chains per unit reference volume \(V_{0}\), and \(C_{3D}\) is the 3D nominal concentration (number of solvent molecules per unit reference volume).
The Cauchy stress is given by
\[\sigma_{ij}=\frac{n_{p}^{3D}k_{B}T}{J_{3D}}\left(F_{iK}^{3D}F_{jK}^{3D}-\delta _{ij}\right)+\frac{k_{B}T}{vJ_{3D}}\mathcal{A}(J_{3D})\delta_{ij}-\frac{\mu_{ 3D}}{v}\delta_{ij},\] (S17)
where we define the function
\[\mathcal{A}(J_{3D})\equiv\left(J_{3D}\ln\left(\frac{J_{3D}-1}{J_{3D}}\right)+ 1+\frac{\chi}{J_{3D}}\right).\] (S18)
We now apply our generalized plane strain assumptions. The deformation gradient tensor and its inverse can be written
\[{\bf F}^{3D}=\begin{pmatrix}F_{xX}&F_{xY}&0\\ F_{yX}&F_{yY}&0\\ 0&0&\lambda_{z}\end{pmatrix},\qquad\quad({\bf F}^{3D})^{-1}=\begin{pmatrix}F_{ yY}/J&-F_{xY}/J&0\\ -F_{yX}/J&F_{xX}/J&0\\ 0&0&1/\lambda_{z}\end{pmatrix},\] (S19)
where \(J\) is the determinant of the 2D deformation gradient tensor \(F_{iJ}\) and \(\lambda_{z}=1+\alpha\) is the axial stretch. Thus, \(J_{3D}=\lambda_{z}J\). Constrained hydrogel swelling is well-studied--Ref. [7] provides a particularly relevant analysis of indentation on swollen constrained hydrogels.
With these assumptions, \(\sigma_{xz}=\sigma_{yz}=0\), and \(\sigma_{zz}\) is given by
\[\sigma_{zz}=\frac{k_{B}T}{v}\left(\frac{n_{p}^{3D}v}{\lambda_{z}J}\left( \lambda_{z}^{2}-1\right)+\frac{\mathcal{A}(\lambda_{z}J)}{\lambda_{z}J}-\frac {\mu_{3D}}{k_{B}T}\right).\] (S20)
As in Hookean elasticity, this stress is independent of \(z\) and fully determined by the strains in the \(x\) and \(y\) directions and the prescribed stretch in the \(z\) direction. Therefore, \(\frac{\partial\sigma_{zz}}{\partial z}=0\) and there are again only two equilibrium equations to solve. In the absence of body forces, these are \(\frac{\partial\sigma_{xj}}{\partial x_{j}}=0\) and \(\frac{\partial\sigma_{yj}}{\partial x_{j}}=0\).
The other nonzero elements of the Cauchy stress tensor can be written
\[\frac{\sigma_{ij}}{k_{B}T}=\frac{n_{p}^{3D}F_{iK}F_{jK}}{\lambda_{z}J}+\frac{1 }{v}\left(\frac{\mathcal{A}(\lambda_{z}J)}{\lambda_{z}J}-\frac{n_{p}^{3D}v}{ \lambda_{z}J}-\frac{\mu_{3D}}{k_{B}T}\right)\delta_{ij}.\] (S21)
In the absence of obstacles, the hydrogel will swell isotropically in \(x\) and \(y\), reaching an equilibrium swelling stretch ratio \(\lambda_{0}\) that will be a function of \(\lambda_{z}\).
We find \(\lambda_{0}\) by setting \(\sigma_{ij}=0\) with \(F_{iJ}=\lambda_{0}\delta_{iJ}\), \(J=\lambda_{0}^{2}\). This gives the relation
\[\frac{\mu_{3D}}{k_{B}T}=\frac{n_{p}^{3D}v}{\lambda_{z}}\left(1-\frac{1}{ \lambda_{0}^{2}}\right)+\frac{1}{\lambda_{z}\lambda_{0}^{2}}+\ln\left(1-\frac{ 1}{\lambda_{z}\lambda_{0}^{2}}\right)+\frac{\chi}{\lambda_{z}^{2}\lambda_{0}^ {4}}.\] (S22)
This equation must be solved numerically. The relationship between \(\lambda_{z}\) and \(\lambda_{0}\) for the parameters used in this report is shown in Fig. S6 (\(\mu_{3D}=0\), \(n_{p}^{3D}v=0.001\), \(\chi=0.3\)). When \(\lambda_{z}=\lambda_{0}\), we regain the standard 3D result [8].
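A minimal sketch of this numerical solve (using a bracketing root finder; the function name, bracket, and default parameters are our choices) is:

```python
import numpy as np
from scipy.optimize import brentq

def lambda0_of_lambdaz(lam_z, npv=0.001, chi=0.3, mu3d_over_kBT=0.0):
    """Solve Eq. (S22) for the stress-free in-plane stretch lambda_0 at a
    prescribed axial stretch lambda_z (parameters as in Fig. S6)."""
    def residual(lam0):
        J3 = lam_z * lam0**2                     # 3D swelling ratio
        return (npv / lam_z * (1.0 - 1.0 / lam0**2)
                + 1.0 / J3
                + np.log(1.0 - 1.0 / J3)
                + chi / J3**2
                - mu3d_over_kBT)
    lower = (1.0 + 1e-6) / np.sqrt(lam_z)        # J3 must exceed 1
    return brentq(residual, lower, 1e3)

# lambda0_of_lambdaz(1.0) evaluates to roughly 3.9 for these parameters
```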
#### Relation to 2D hydrogel
We compare the stress tensor for a hydrogel in generalized plane strain to that of the purely 2D hydrogel described in the main text. For the energy functional
\[\frac{F}{A_{0}k_{B}T}=\frac{n_{p}}{2}\left(F_{iK}F_{iK}-2-2\ln(\det({\bf F})) \right)+\frac{1}{\Omega}\left(\Omega C\ln\left(\frac{\Omega C}{1+\Omega C} \right)+\chi\frac{\Omega C}{1+\Omega C}\right),\] (S23)
The corresponding Cauchy stresses are
\[\frac{\sigma_{ij}}{k_{B}T}=\frac{n_{p}F_{iK}F_{jK}}{J}+\frac{1}{\Omega}\left( \frac{\mathcal{A}(J)}{J}-\frac{n_{p}\Omega}{J}-\frac{\mu}{k_{B}T}\right)\delta_ {ij},\] (S24)
where
\[\mathcal{A}(J)=\left(J\ln\left(\frac{J-1}{J}\right)+1+\frac{\chi}{J}\right).\] (S25)
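For reference, Eqs. (S24)-(S25) translate directly into a few lines of NumPy (a sketch returning the dimensionless stress \(\Omega\sigma_{ij}/k_{B}T\); valid for swollen states with \(J>1\)):

```python
import numpy as np

def cauchy_stress_2d(F, np_omega, chi, mu_over_kBT):
    """Dimensionless 2D Cauchy stress Omega*sigma/(kB*T) of Eqs. (S24)-(S25)
    for a 2x2 deformation gradient F (requires det(F) > 1)."""
    J = np.linalg.det(F)
    A = J * np.log((J - 1.0) / J) + 1.0 + chi / J            # Eq. (S25)
    iso = (A / J - np_omega / J - mu_over_kBT) * np.eye(2)   # isotropic part
    return np_omega * (F @ F.T) / J + iso
```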
We can use 2D solutions to construct generalized plane strain solutions with arbitrary \(\lambda_{z}\). If we make the following substitutions in Eq. (S24), we regain Eq. (S21).
\[F_{iJ} \rightarrow \sqrt{\lambda_{z}}F_{iJ},\] (S26) \[J \rightarrow \lambda_{z}J,\] (S27) \[n_{p} \rightarrow \frac{n_{p}^{3D}}{\lambda_{z}},\] (S28) \[\Omega \to v,\] (S29) \[\frac{\mu}{k_{B}T} \rightarrow \frac{\mu_{3D}}{k_{B}T}-\frac{n_{p}^{3D}v}{\lambda_{z}J}\left( \frac{1}{\lambda_{z}}-1\right).\] (S30)
Note that, as in the Hookean elasticity example, these substitutions change the units of various terms--this occurs because 2D and 3D stress have different units.
#### Summary
Our experimental system can be reasonably modeled in 2D using generalized plane strain assumptions near the hydrogel midsection in equilibrium. In the main text, we work with 2D models to simplify our calculations and simulations. As described above, it is straightforward to transform 2D elasticity solutions to generalized plane strain solutions and vice versa. The imposed stretch in the \(z\) direction enters as an effective chemical potential in the hydrogel model, so chemical potential boundary conditions in the 2D simulations would need to be modified to make more quantitative comparisons with experiments.
### Dynamics
We simulate hydrogel swelling dynamically using the kinetic law proposed in Hong _et al._[5].
\[j_{i}=-\frac{cD}{k_{B}T}\frac{\partial\mu}{\partial x_{i}},\] (S31)
where \(j_{i}\) and \(c\) are the solvent flux and concentration in the current configuration respectively, \(\mu\) is the chemical potential, and \(D\) is solvent diffusivity. To evolve the concentration field, we discretize the continuity equation using the concentration and solvent flux in the reference configuration: \(\frac{\partial C}{\partial t}+\frac{\partial J_{K}}{\partial X_{K}}=0\), with \(J_{K}=\det(\mathbf{F})\frac{\partial X_{K}}{\partial x_{i}}j_{i}\). [9].
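As an illustration only, a deliberately simplified one-dimensional, fixed-grid version of this kinetic law with explicit time stepping can be sketched as follows (the actual simulations use the reference-configuration flux and backward Euler):

```python
import numpy as np

def step_concentration(c, mu, dx, dt, D, kBT):
    """One explicit step of dc/dt = -dj/dx with j = -(c*D/kBT) dmu/dx,
    a 1D simplification of Eq. (S31) and the continuity equation."""
    j = -(c * D / kBT) * np.gradient(mu, dx)   # solvent flux
    return c - dt * np.gradient(j, dx)         # updated concentration
```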
We do not expect the dynamical behavior of our simulations to closely reproduce what we observe in our experiments for several reasons. First, the kinetic law we use is simple and does not account for effects such as changes to the diffusion constant as the polymer network expands. See, e.g., [10; 11; 12], for further discussions about appropriate expressions for solvent flux. Second, before the hydrogel makes contact with the top and bottom plates, it swells equally in all three dimensions. A 2D simulation therefore only provides a reasonable approximation of equilibrium experimental behavior, as discussed in SI Appendix D.
Nevertheless, the simulated 2D dynamics are interesting and complex, and are relevant to hydrogel geometries that remain quasi-2D throughout the swelling process, such as a thin hydrogel disk between two flat plates. We therefore show some characteristic results and provide a brief discussion on simulated dynamical behavior here. In all cases, we initialize the model at a radius \(r_{0}\) and impose an isotropic initial stretch \(\lambda_{x}=\lambda_{y}=1.5\) to avoid the singularity that appears for the dry state with \(\lambda_{x}=\lambda_{y}=1\)[8]. As described in _Materials and Methods_, simulations are run until they reach equilibrium which occurs at different times for different trials.
In Fig. S7, we show the maximum principal tensile (top set of curves) and compressive (bottom set of curves) stresses as a function of time for different values of \(\Delta/r^{*}\) with \(r_{\text{obs}}/r^{*}=0.3\). These data are a subset of the data displayed in Fig. 5 of the main text, with maxima/minima taken over the entire mesh.
To understand these data, we use the \(\Delta/r^{*}=0\) curve as a reference. This curve corresponds to the unconstrained case. We observe a peak in both the maximum compressive and tensile principal stresses at dimensionless time \(\hat{t}\equiv\frac{tD}{r_{0}^{2}}\approx 4\), followed by a decay to zero. This peak in stress occurs because the outer regions of the hydrogel swell faster than the inner regions, and can lead to the formation of transient lobed structures, as described in the main text and elsewhere [13]. For \(\Delta/r^{*}\leq 0.5\), simulations follow the free swelling curve until they make contact with the obstacles. Following contact, the hydrogel develops stresses that do not disappear at equilibrium. The data for \(\Delta/r^{*}=0.6\), the strongest confinement shown here, has a number of interesting differences from the other trials. This hydrogel encounters obstacles prior to the time points shown in Fig. S7, and thus sustains larger stresses earlier compared to the other trials. At \(\hat{t}\approx 700\), we see a sharp increase in both the compressive and tensile stresses in this trial, not observed in other trials--this is the symmetry-breaking instability discussed in Sec. IID of the main text.
We compare these dynamics to those in Fig. S8. Here, we show principal stresses as a function of time with \(\Delta/r^{*}\) held fixed and \(r_{\rm obs}/r^{*}\) varied, again with maxima/minima taken over the entire mesh. These data correspond to vertical slices of Figs. 5(A,B). Since \(\Delta/r^{*}\) is the same for all trials, the hydrogels all encounter obstacles at the same time, \(\hat{t}\approx 6\). After that time, trials behave differently due to the different curvature of the obstacles.
Though beyond the scope of this work, we note that it would be interesting to consider the impact of dynamic boundary conditions in this problem: for example, a no-flux boundary condition could be imposed following contact between the hydrogel and obstacles. It is not obvious whether imposing a dynamic no-flux boundary condition or a constant chemical potential boundary condition in 2D is more faithful to the 3D experiments, so we have chosen the constant chemical potential boundary for simplicity in this work.
We conclude this discussion of dynamics by providing an estimate of the poroelastic relaxation time \(\tau\) for the hydrogels used in experiments. A typical obstacle length scale \(l_{c}\) in our experiments is 10 mm. Thus, using the estimate of \(D\approx 7.5\times 10^{-10}\rm m^{2}s^{-1}\) given in Louf and Datta [14],
\[\tau=\frac{l_{c}^{2}}{D}\approx 37\;\;\rm h.\] (S32)
The relaxation timescale \(\tau\) is therefore comparable to a typical fracture timescale (see, e.g., Fig. 1C).
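This estimate is a one-line calculation (reproducing Eq. (S32) with the values quoted above):

```python
l_c = 10e-3          # obstacle length scale (m)
D   = 7.5e-10        # solvent diffusivity (m^2/s), from Louf and Datta [14]
tau = l_c**2 / D     # Eq. (S32): poroelastic relaxation time (s)
print(tau / 3600.0)  # ~37 hours
```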
### Elastic moduli
In this section, we provide the derivation for the effective Poisson's ratio and Young's modulus of a 2D hydrogel (Eqs. (2) and (3) of the main text) by adapting the argument presented in Bouklas and Huang [15]. We begin with the Cauchy stress for a 2D hydrogel model,
\[\frac{\Omega\sigma_{ij}}{k_{B}T}=\frac{n_{p}\Omega}{J}F_{iK}F_{jK}+\left(\frac {\mathcal{A}(J)}{J}-\frac{n_{p}\Omega}{J}-\frac{\mu}{k_{B}T}\right)\delta_{ij},\] (S33)
with
\[\mathcal{A}(J)=\left(J\ln\left(\frac{J-1}{J}\right)+1+\frac{\chi}{J}\right).\] (S34)
We apply a uniaxial stress in the \(x\) direction and require \(\sigma_{yy}=0\). This sets
\[0=\frac{\Omega\sigma_{yy}}{k_{B}T}=\frac{n_{p}\Omega}{\lambda_{x}\lambda_{y}} \left(\lambda_{y}^{2}-1\right)+\left(\frac{\mathcal{A}(\lambda_{x}\lambda_{y})} {\lambda_{x}\lambda_{y}}-\frac{\mu}{k_{B}T}\right).\] (S35)
We assume that the resulting stretches are small deformations relative to the stress-free equilibrium state with stretch \(\lambda_{0}\), defined for the 2D hydrogel via the equation
\[\frac{\mu}{k_{B}T}=\frac{n_{p}\Omega}{\lambda_{0}^{2}}\left(\lambda_{0}^{2}-1 \right)+\frac{\mathcal{A}(\lambda_{0}^{2})}{\lambda_{0}^{2}}=n_{p}\Omega\left( 1-\frac{1}{\lambda_{0}^{2}}\right)+\ln\left(1-\frac{1}{\lambda_{0}^{2}}\right) +\frac{1}{\lambda_{0}^{2}}+\frac{\chi}{\lambda_{0}^{4}}.\] (S36)
This allows us to define the strain tensor elements in terms of stretches as
\[\lambda_{x} =\lambda_{0}(1+u_{xx}),\] (S37) \[\lambda_{y} =\lambda_{0}(1+u_{yy}).\] (S38)
We next expand Eq. (S35) to linear order in \(u_{xx}\) and \(u_{yy}\). After simplifying using Eq. (S36), our result is
\[u_{xx}\left(n_{p}\Omega\left(1-\frac{1}{\lambda_{0}^{2}}\right)+\frac{2\chi}{ \lambda_{0}^{4}}-\frac{1}{\lambda_{0}^{2}(\lambda_{0}^{2}-1)}\right)=u_{yy} \left(n_{p}\Omega\left(1+\frac{1}{\lambda_{0}^{2}}\right)+\frac{1}{\lambda_{0}^ {2}(\lambda_{0}^{2}-1)}-\frac{2\chi}{\lambda_{0}^{4}}\right).\] (S39)
Defining the Poisson's ratio as
\[\nu=-\frac{u_{yy}}{u_{xx}},\] (S40)
we find
\[\nu=\frac{n_{p}\Omega\left(\frac{1}{\lambda_{0}^{2}}-1\right)+\frac{1}{\lambda_{0}^{2}(\lambda_{0}^{2}-1)}-\frac{2\chi}{\lambda_{0}^{4}}}{n_{p}\Omega\left(\frac{1}{\lambda_{0}^{2}}+1\right)+\frac{1}{\lambda_{0}^{2}(\lambda_{0}^{2}-1)}-\frac{2\chi}{\lambda_{0}^{4}}}=1-\frac{2n_{p}\Omega}{n_{p}\Omega\left(1+\frac{1}{\lambda_{0}^{2}}\right)+\frac{1}{\lambda_{0}^{2}(\lambda_{0}^{2}-1)}-\frac{2\chi}{\lambda_{0}^{4}}}.\] (S41)
To find the Young's modulus, we perform the same operations in the equation for \(\sigma_{xx}\):
\[\frac{\Omega\sigma_{xx}}{k_{B}T}=\frac{n_{p}\Omega}{\lambda_{x}\lambda_{y}} \left(\lambda_{x}^{2}-1\right)+\left(\frac{\mathcal{A}(\lambda_{x}\lambda_{y}) }{\lambda_{x}\lambda_{y}}-\frac{\mu}{k_{B}T}\right).\] (S42)
We substitute \(u_{yy}=-\nu u_{xx}\) and expand in \(u_{xx}\) to linear order. After simplifying, we find
\[\frac{\Omega\sigma_{xx}}{k_{B}T}=2(1+\nu)n_{p}\Omega u_{xx}.\] (S43)
Thus, the effective 2D Young's modulus is
\[E=2(1+\nu)n_{p}k_{B}T,\] (S44)
and the effective 2D shear modulus is
\[G_{0}\equiv\frac{E}{2(1+\nu)}=n_{p}k_{B}T.\] (S45)
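Collecting Eqs. (S41), (S44) and (S45), the effective parameters can be computed as follows (a sketch; moduli are returned in units of \(n_{p}k_{B}T\)):

```python
def effective_2d_parameters(lam0, np_omega=0.001, chi=0.3):
    """Effective 2D Poisson's ratio (Eq. S41) and, in units of n_p*kB*T,
    the Young's and shear moduli (Eqs. S44-S45) at swelling stretch lam0 > 1."""
    denom = (np_omega * (1.0 + 1.0 / lam0**2)
             + 1.0 / (lam0**2 * (lam0**2 - 1.0))
             - 2.0 * chi / lam0**4)
    nu = 1.0 - 2.0 * np_omega / denom      # Eq. (S41)
    E_over_npkBT = 2.0 * (1.0 + nu)        # Eq. (S44)
    G0_over_npkBT = 1.0                    # Eq. (S45)
    return nu, E_over_npkBT, G0_over_npkBT
```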
#### Robustness of mapping
In the derivation of effective elastic parameters, we neglect terms quadratic in \(u_{ij}\). Thus, for large deformations, we expect our mapping between hydrogel model parameters and linear elastic parameters [Eqs. (S41) and (S44)] to become inaccurate (i.e., the effective Poisson's ratio and Young's modulus will acquire a dependence on strain). This additional source of nonlinearity has the potential to complicate our comparisons between the nonlinear hydrogel model and the St. Venant-Kirchhoff model in Sec. IIB and C of the main text. For example, an inhomogeneous strain state could induce spatially-varying effective elastic parameters. However, using simulations of a neo-Hookean elastic model, we argue that this nonlinearity can be neglected for the deformations considered in this work.
We simulate a neo-Hookean elastic model in FEniCS, using the same mesh resolution as in the hydrogel trials (approximately 30 vertices across the radius; see Fig. 3, left column). The strain-energy density function is
\[W=\frac{G_{0}}{2}\left(F_{iK}F_{iK}-2-2\ln(\det(\mathbf{F}))\right)+\frac{G_{0} \nu}{1-\nu}\ln(\det(\mathbf{F}))^{2},\] (S46)
where the shear modulus \(G_{0}\) and Poisson's ratio \(\nu\) are set according to Eqs. (S41) and (S44). Setting the coefficients in this manner ensures consistency with linear elasticity. Displacements are defined relative to the zero stress reference configuration with radius \(r^{*}\).
In Fig. S9, we compare the maximum principal stresses in the neo-Hookean elastic model and the hydrogel model. Although there are some differences between the two data sets, it is difficult to tell whether these differences increase as a function of strain as expected. We learn more by comparing the stress profiles directly in Fig. S10. As \(\Delta/r^{*}\) increases, we clearly see differences between the models increase. However, deviations only appear at the largest values of \(\Delta/r^{*}\) tested, and even then, they remain small relative to the magnitude of the stress. These comparisons suggest that our mapping from hydrogel parameters to elastic parameters is reasonably accurate, perhaps even surprisingly so, for large indentation-type deformations.
### Indentation solutions
Here we derive the indentation theory equations in Sec. IIA of the main text. We start with the Flamant solution applied to a point force \(\zeta\) acting normal to an elastic half space in 2D [16]. We assume the half space fills the region \(y<0\) and use the convention that \(\zeta<0\) corresponds to a force that compresses the plane. In radial coordinates with the origin at the location at which the point force acts, this is a simple radial stress distribution
\[\sigma_{rr}=-\frac{\zeta\sin\theta}{\pi r}.\] (S47)
Directly beneath the point source, \(\theta=-\pi/2\) and \(\sigma_{rr}=\frac{\zeta}{\pi r}<0\).
The displacements due to a point force grow logarithmically in 2D. Therefore, we must be mindful of boundaries, as the finite size of our deformable material will influence our calculations. We can gain an appreciation for this subtlety
by considering two diametrically opposed point forces acting on the top and bottom of a 2D elastic disk, as in Goodier and Timoshenko [17] p 107. In this case, all points on the boundary of the disk experience isotropic compression due to the pair of point forces, and a uniform tension \(\sigma_{ij}=-\frac{\zeta}{\pi r^{*}}\delta_{ij}\) must be added to maintain a stress-free boundary. By the same argument, \(\sigma_{ij}=-\frac{2\zeta}{\pi r^{*}}\delta_{ij}\) must be added to the stress tensor to free the boundaries when four perpendicular point forces are acting on the elastic disk.
We now consider the specific geometry we are interested in: four circular indenters of radius \(r_{\rm obs}\) acting on an elastic disk of radius \(r^{*}\). In the weak confinement regime, we expect the greatest stresses to develop directly beneath the obstacle, as stresses go to zero at the boundary. Therefore, by symmetry we only need to solve for stresses along the line that passes through the center of the elastic disk and the center of an obstacle (\(x=0\) line in inset to Fig. 4B in the main text). Appealing to Saint-Venant's principle, we simplify our calculation by modeling one obstacle as a 2D Hertzian indenter and the other three obstacles as point forces. We proceed by generalizing the calculation in Johnson [18] p129.
We place the origin at the center of the contact between the elastic body and the Hertzian indenter, and let \(b>0\) be the distance from this origin along the centerline. The stress tensor contributions from the point indenters along the line of interest are
\[\sigma_{xx} = \frac{4\zeta r^{*3}}{\pi(r^{*2}+(r^{*}-b)^{2})^{2}},\] (S48) \[\sigma_{yy} = \frac{2\zeta}{\pi(2r^{*}-b)}+\frac{4\zeta r^{*}(r^{*}-b)^{2}}{ \pi(r^{*2}+(r^{*}-b)^{2})^{2}},\] (S49) \[\sigma_{xy} = 0.\] (S50)
The Hertzian indenter contributes
\[\sigma_{xx} = \frac{\zeta}{\pi}\left(\frac{2(a^{2}+2b^{2})}{a^{2}\sqrt{a^{2}+b ^{2}}}-\frac{4b}{a^{2}}\right),\] (S51) \[\sigma_{yy} = \frac{2\zeta}{\pi\sqrt{a^{2}+b^{2}}},\] (S52) \[\sigma_{xy} = 0,\] (S53)
where the contact length \(2a\) is given according to
\[a^{2} = -\frac{4\zeta}{E\pi}\left(\frac{1}{r_{\rm obs}}+\frac{1}{r^{*}} \right)^{-1}.\] (S54)
The stress tensor is the sum of these contributions, plus the isotropic tension from the corrective solution, \(\sigma_{ij}=-\frac{2\zeta}{\pi r^{*}}\delta_{ij}\). With the substitution \(y=r^{*}-b\), we find Eqs. (4)-(7) of the main text.
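The resulting centerline profile can be evaluated directly from these expressions; a sketch (our own transcription of Eqs. (S48)-(S54), vectorised over the coordinate \(b\)) is:

```python
import numpy as np

def centerline_stresses(b, zeta, E, r_star, r_obs):
    """sigma_xx and sigma_yy along the centerline, a distance b from the
    Hertzian contact, for an indenter line force zeta < 0 (compressive)."""
    a2 = -4.0 * zeta / (E * np.pi) / (1.0 / r_obs + 1.0 / r_star)   # Eq. (S54)
    # three remote obstacles as point forces, Eqs. (S48)-(S49)
    sxx = 4.0 * zeta * r_star**3 / (np.pi * (r_star**2 + (r_star - b)**2)**2)
    syy = (2.0 * zeta / (np.pi * (2.0 * r_star - b))
           + 4.0 * zeta * r_star * (r_star - b)**2
           / (np.pi * (r_star**2 + (r_star - b)**2)**2))
    # nearest obstacle as a 2D Hertzian indenter, Eqs. (S51)-(S52)
    sxx += zeta / np.pi * (2.0 * (a2 + 2.0 * b**2)
                           / (a2 * np.sqrt(a2 + b**2)) - 4.0 * b / a2)
    syy += 2.0 * zeta / (np.pi * np.sqrt(a2 + b**2))
    # isotropic tension from the corrective solution that frees the boundary
    sxx -= 2.0 * zeta / (np.pi * r_star)
    syy -= 2.0 * zeta / (np.pi * r_star)
    return sxx, syy
```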
The strain \(u_{yy}\) can be found as
\[u_{yy}=\frac{\sigma_{yy}}{E}-\frac{\nu}{E}\sigma_{xx}.\] (S55)
We find the displacement at the surface directly beneath the indenter by integrating
\[-\Delta=\int_{0}^{r^{*}}u_{yy}(b)db.\] (S56)
We expand this expression in the limit \(a/r^{*}\ll 1\), neglecting terms quadratic in \(a/r^{*}\) to find
\[\Delta=-\frac{\zeta}{E\pi}\left(\ln\left(\frac{16r^{*2}}{a^{2}}\right)+\frac{ 1}{2}(\pi-6-\pi\nu)\right).\] (S57)
This expression can now be inverted using the Lambert W-function or product logarithm ([19], §4.13). Note that this function is multivalued for small indenter force and the \(k=-1\) branch must be chosen for consistent results.
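A sketch of this inversion is given below; the algebraic regrouping of Eqs. (S54) and (S57) into the form \(t\ln(K/t)=E\pi\Delta\) (with \(t=-\zeta\)) is our own, so the constants \(K\) and \(\beta\) defined in the code are not taken from the text, and the round-trip check at the end verifies consistency with the forward relation:

```python
import numpy as np
from scipy.special import lambertw

def indenter_force(delta, E, nu, r_star, r_obs):
    """Invert Eqs. (S54) and (S57) for the compressive line force zeta < 0
    producing an indenter displacement delta, using the k = -1 (small-force)
    branch of the Lambert W-function."""
    beta = 0.5 * (np.pi - 6.0 - np.pi * nu)
    c = 4.0 / (E * np.pi * (1.0 / r_obs + 1.0 / r_star))   # so that a^2 = c*(-zeta)
    K = 16.0 * r_star**2 * np.exp(beta) / c
    w = lambertw(-E * np.pi * delta / K, k=-1).real        # requires delta <= K/(e*E*pi)
    return -K * np.exp(w)                                  # zeta

# round-trip check against the forward relation, Eq. (S57) (example values):
E, nu, r_star, r_obs, delta = 1.0, 0.45, 1.0, 0.3, 0.05
zeta = indenter_force(delta, E, nu, r_star, r_obs)
a2 = -4.0 * zeta / (E * np.pi) / (1.0 / r_obs + 1.0 / r_star)
back = -zeta / (E * np.pi) * (np.log(16.0 * r_star**2 / a2)
                              + 0.5 * (np.pi - 6.0 - np.pi * nu))
assert np.isclose(back, delta)
```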
### St. Venant-Kirchhoff model
As described in Sec. IIB of the main text, we simulate St. Venant-Kirchhoff materials surrounded by four circular indenters in FEniCS. We use the same mesh as in the hydrogel trials (approximately 30 vertices across the radius; see Fig. 3, left column). The strain-energy density function is
\[W =G_{0}u_{ij}^{2}+\frac{\lambda}{2}u_{kk}^{2}=G_{0}\left(u_{ij}^{2 }+\frac{\nu}{1-\nu}u_{kk}^{2}\right),\] (S58) \[u_{ij} =\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{ \partial u_{j}}{\partial x_{i}}+\frac{\partial u_{k}}{\partial x_{i}}\frac{ \partial u_{k}}{\partial x_{j}}\right),\] (S59)
where \(G_{0}=\frac{Y}{2(1+\nu)}\) and \(\lambda=\frac{Y\nu}{1-\nu^{2}}=\frac{2G_{0}\nu}{1-\nu}\) are Lame coefficients, set to match the effective elastic properties of the hydrogel as described in SI Appendix F, \(u_{i}\) is a displacement in the \(i\)th direction, and indices run over \(x\) and \(y\). Displacements are defined relative to the zero stress reference configuration with radius \(r^{*}\).
As the indenter displacement increases, St. Venant-Kirchhoff simulations can become unstable. For example, due to the linear constitutive law, there is a finite energetic cost for compressing material to a point [20]. To maintain stability when possible, we incrementally increase both indenter displacement and penalty strength. Nonetheless, we cannot simulate values of \(\Delta/r^{*}>0.115\) for \(r_{\text{obs}}/r^{*}=0.3\). Results from these simulations prior to this threshold are shown in Fig. S11 and S12.
As discussed in the main text, the St. Venant-Kirchhoff model is able to capture the maximum tensile stress reasonably well. However, the compressive stresses in the hydrogel model (and neo-Hookean model, see Fig. S10B) are significantly larger than those in the St. Venant-Kirchhoff model for large deformations.
### Maximum tensile stresses in weak confinement
In Figs. S9 and S11, the global maximum of principal tensile stress is larger than the theoretical expectation at small indentation depths. These deviations are the result of isolated cells with large tensions on the boundary of the hydrogel where the material loses contact with the obstacle, and we consider it to be an unphysical effect originating with our finite mesh resolution. In Fig. S13, we show how the tension changes as we increase the resolution. At the relatively coarse resolution used for the simulations in this work with 30 vertices along the radius, the anomalous tension appears close to the boundary, while at higher resolutions it is more reliably located on the boundary itself. We compare the maximum principal stress excluding the boundary points (blue diamonds in Fig. S13) to the maximum principal stress including the boundary and the maximum tensile stress beneath the obstacle. We find good agreement between the maximum stress excluding the boundary and the theoretical expectation at high resolution. However, since the maximum compression occurs close to the boundary as well, excluding these points systematically creates disagreement between theory and simulation for compression. Thus, in Fig. 3. we display the maximum principal stresses taken along a line connecting the obstacle center to the hydrogel center, beneath the top obstacle.
Figure S11: Comparison between maximum principal Cauchy compressive and tensile stresses in simulations with the nonlinear hydrogel model and St. Venant-Kirchhoff model. In (a), we see that the St. Venant-Kirchhoff model displays compressive stresses that are lower than those in the hydrogel model, implying that a material nonlinearity is responsible for the additional compressive stress. In (b), we observe that the St. Venant-Kirchhoff model produces approximately the same tensile stresses as the hydrogel model, implying that a geometric nonlinearity can explain tensile stresses up to \(\sim\Delta/r^{*}=0.115\). Note that we plot both the maximum/minimum principal stress over the entire hydrogel domain (labeled global max./min.) as well as the maximum/minimum principal stress along the line connecting the hydrogel center to an obstacle center (labeled below obstacle max./min.).
### Symmetry-breaking instability
When obstacles are very close together (\(\Delta/r^{*}>0.56\) for \(r_{\rm obs}/r^{*}=0.3\)), simulations show that the hydrogel disk swells primarily along a diagonal, rather than equally in all four pore spaces. This effect is quantified in Fig. S14.
As discussed in the main text, this instability can be understood as the hydrogel prioritizing an elliptical shape over a circular shape. In the weak confinement regime, deformations are relatively localized. Therefore, by symmetry, the hydrogel will swell evenly into all four pores. However, as the confinement increases and the deformation becomes global, maintaining symmetric swelling requires isotropic compression of the center of the hydrogel. Assuming uniform deformations, it is less costly for the hydrogel to compress an amount \(\Delta\) along a single axis, forming an ellipse, than to compress a circle an amount \(\Delta\) along two axes, forming a smaller circle. For concreteness, we demonstrate this for an incompressible neo-Hookean model. To transform a circle into a smaller circle, we impose stretches \(\lambda_{1}=\lambda_{2}=1-\Delta/r^{*}\). The energy density is therefore
\[W_{\rm circle}\propto 2\left(1-\frac{\Delta}{r^{*}}\right)^{2}-2.\] (S60)
To transform a circle into an ellipse, we impose \(\lambda_{1}=1-\Delta/r^{*},\lambda_{2}=1\). The energy for this deformation is lower.
\[W_{\rm ellipse}\propto\left(1-\frac{\Delta}{r^{*}}\right)^{2}-1.\] (S61) |
2310.12739 | Strongly stable dual-pairing summation by parts finite difference
schemes for the vector invariant nonlinear shallow water equations -- I:
Numerical scheme and validation on the plane | We present an energy/entropy stable and high order accurate finite difference
method for solving the nonlinear (rotating) shallow water equations (SWE) in
vector invariant form using the newly developed dual-pairing and
dispersion-relation preserving summation-by-parts finite difference operators.
We derive new well-posed boundary conditions for the SWE in one space
dimension, formulated in terms of fluxes and applicable to linear and nonlinear
SWEs. For the nonlinear SWE, entropy stability ensures the boundedness of
numerical solution but does not guarantee convergence. Adequate amount of
numerical dissipation is necessary to control high frequency errors which could
negatively impact accuracy in the numerical simulations. Using the dual-pairing
summation by parts framework, we derive high order accurate and nonlinear
hyper-viscosity operator which dissipates entropy and enstrophy. The
hyper-viscosity operator effectively minimises oscillations from shocks and
discontinuities, and eliminates high frequency grid-scale errors. The numerical
method is most suitable for the simulations of subcritical flows typically
observed in atmospheric and geostrophic flow problems. We prove a priori error
estimates for the semi-discrete approximations of both linear and nonlinear
SWE. Convergence, accuracy, and well-balanced properties are verified via the
method of manufactured solutions and canonical test problems such as the dam
break and lake at rest. Numerical simulations in two-dimensions are presented
which include the rotating and merging vortex problem and barotropic shear
instability, with fully developed turbulence. | Justin Kin Jun Hew, Kenneth Duru, Stephen Roberts, Christopher Zoppou, Kieran Ricardo | 2023-10-19T13:37:45Z | http://arxiv.org/abs/2310.12739v2 | Strongly stable dual-pairing summation by parts finite difference schemes for the vector invariant nonlinear shallow water equations - I: numerical scheme and validation
###### Abstract
We present an energy/entropy stable and high order accurate finite difference method for solving the linear/nonlinear shallow water equations (SWE) in vector invariant form using the newly developed dual-pairing (DP) and dispersion-relation preserving (DRP) summation by parts (SBP) finite difference operators. We derive new well-posed boundary conditions for the SWE in one space dimension, formulated in terms of fluxes and applicable to linear and nonlinear problems. For nonlinear problems, entropy stability ensures the boundedness of numerical solutions; however, it does not guarantee convergence. An adequate amount of numerical dissipation is necessary to control high frequency errors which could ruin numerical simulations. Using the dual-pairing SBP framework, we derive a high order accurate and nonlinear hyper-viscosity operator which dissipates entropy and enstrophy. The hyper-viscosity operator effectively tames oscillations from shocks and discontinuities, and eliminates spurious high frequency grid-scale errors. The numerical method is most suitable for the simulations of sub-critical flows typically observed in atmospheric and geostrophic flow problems. We prove _a priori_ error estimates for the semi-discrete approximations of both linear and nonlinear SWE. We verify convergence, accuracy and the well-balanced property via the method of manufactured solutions (MMS) and canonical test problems such as the dam break, lake at rest, and a two-dimensional rotating and merging vortex problem.
## 1 Introduction
The depth-averaged shallow water equations (SWE) are a set of hyperbolic partial differential equations (PDE) governing incompressible fluid flow subject to small wavelength amplitude disturbances. They were first derived by Saint-Venant in 1871 using the Navier-Stokes (momentum) equation, and have since been used to model various types of shallow flows, such as riverine flooding [1], dam breaks [2], coastal floods [3], estuary and lake flows [4], and even for simulating weak oblique shock-wave reflection [5]. Moreover, they are also widely used to model atmospheric flows [6, 7], particularly with respect to the motion constrained near the thin fluid layer on Earth, which is well-known to behave as a nearly incompressible fluid, with only minor changes in density. One unique aspect of the SWE compared to the traditional quasi-geostrophic equations is that they allow for high-fidelity prediction of linear dispersive waves in high-altitude atmospheric flows, often characterised by large Rossby number and the presence of large-scale baroclinic vorticity; these effects are well known to have a major influence on weather (see e.g., the role of atmospheric Rossby waves in stratospheric warming [8]), and must therefore be accounted for in weather forecast models. Moreover, the SWEs are also important for the simulation of Rossby waves in astrophysical flows (see e.g., the review by [9]). Therefore, provably accurate and efficient numerical methods for reliable simulations of the SWE are critically important in many applications.
The properties of the SWE can be characterised by a dimensionless number called the _Froude number_, defined as \(\mathrm{Fr}=|\mathbf{u}|/\sqrt{gh}\), where \(\mathbf{u}\) is the depth-averaged fluid velocity field, \(g\) is the gravitational acceleration and \(h\) is the flow depth. The flow is called subcritical when \(\mathrm{Fr}<1\), critical when \(\mathrm{Fr}=1\) and supercritical when \(\mathrm{Fr}>1\). The different regimes change the fundamental nature of the flow behaviour. Therefore, numerical schemes solving the SWE across different regimes require careful boundary treatments in order to ensure stable and accurate simulations. However,
in this study we are particularly interested in sub-critical flows, with \(\mathrm{Fr}<1\), which are commonly observed in atmospheric and geostrophic flow problems.
Global-scale atmospheric flows modelled by the SWE are often posed on the surface of the sphere where periodic boundary conditions suffice for a well-posed model and stable numerical implementations using cubed sphere meshes [10, 11, 12]. For regional-scale atmospheric models [6], oceanic flow models [13] and many other modelling scenarios [14, 15], well-posed boundary conditions with stable numerical implementations are crucial for accurate and reliable simulations. However, in order to prove accuracy and stability in the nonlinear regime, nonlinear analysis and entropy stability are necessary. Many prior studies [16, 17] have performed linear analysis, where the assumption of smooth solutions is then used to simulate nonlinear problems. The analysis of nonlinear hyperbolic IBVPs, without linearisation, is technically daunting. There are recent efforts, however, (see, e.g., [18]) where a nonlinear analysis is considered for the SWE subject to nonlinear boundary conditions. However, to the best of our knowledge there is no numerical evidence yet verifying the theory. One objective of this study is the derivation of well-posed nonlinear boundary conditions and stable numerical implementations for the nonlinear SWE in vector invariant form [19, 20, 21, 22, 23]. It is also noteworthy that a nonlinear analysis must be consistent with the linear theory, since the error equation required for convergence proofs for smooth solutions is linear. For sub-critical flows in 1D we derive new well-posed boundary conditions for the vector invariant form of the SWEs, formulated in terms of fluxes and applicable to the linear and nonlinear SWE.
Traditional approaches to solving the SWE rely on either Godunov-type approximate Riemann solvers [24], which are typically second-order in space, or low-order finite difference methods [25, 26]. These schemes are low-order accurate and cannot be expected to accurately capture highly dispersive wave-dominated phenomena, e.g., gravitational or Rossby wave propagation. Recent trends for the development of robust and high order accurate numerical methods for the nonlinear SWE are variational based schemes such as discontinuous Galerkin (DG) and continuous Galerkin (CG) finite/spectral element methods [27, 28, 22]. While DG and CG finite/spectral element schemes are flexible for resolving complex geometries, they are often designed for the conservative form of the equations and may not discretely preserve some important invariants such as potential/absolute vorticity and enstrophy, which are critical for accurate simulations of atmospheric/oceanic flows. DG and CG finite/spectral element schemes formulated for the vector invariant form of the SWE can be designed to preserve several invariants imposed by the system [29, 22, 30, 31, 32]. Finite difference (FD) schemes on structured meshes are attractive because they are more computationally efficient, see e.g., [33]. However, the development of provably stable and high order accurate FD schemes for the nonlinear SWE is a challenge. Similarly, several FD schemes for the SWE are derived for the conservative form of the equations and may not discretely preserve some important invariants of the system [16]. The ultimate goal of this paper is to develop a provably accurate FD method for the vector invariant form of the SWE with solid mathematical support.
A necessary ingredient for the development of robust numerical methods is the summation-by-parts (SBP) principle [34, 35]. An important property of SBP methods is their mimetic structure, which allows provably stable methods to be constructed at the semi-discrete level, provided careful treatment of the boundary conditions. Traditional SBP FD operators are central FD stencils on co-located grids with one-sided boundary stencil closures designed such that the FD operator obeys a discrete integration-by-parts principle. Generally, central FD stencils in the interior have been accepted as necessary for the SBP property [35]. However, the recently developed dual-pairing (DP) SBP framework [36] has shown that this is not necessarily true. The DP-SBP operators are a pair of high order backward and forward difference stencils and can include additional degrees of freedom that can be tuned to diminish numerical dispersion errors, yielding the so-called dispersion-relation-preserving (DRP) DP-SBP FD operators [37].
In this paper we derive provably stable and high order accurate FD schemes for the nonlinear vector invariant SWE using traditional SBP, DP-SBP and DRP DP-SBP FD operators [36, 37]. For nonlinear problems, although energy/entropy stability ensures the boundedness of numerical solutions, it does not guarantee convergence of numerical errors with mesh refinement. A suitable amount of numerical dissipation is necessary to control high frequency oscillations in the numerical simulations. Using the dual-pairing SBP framework, we design a high order accurate and nonlinear hyper-viscosity operator which dissipates entropy. The hyper-viscosity operator effectively tames oscillations from shocks and discontinuities and eliminates poisonous high frequency grid-scale errors. We prove energy/entropy stability, and derive a priori error estimates for smooth solutions. The scheme is suitable for the simulations of sub-critical flow regimes typically prevalent in atmospheric and geostrophic flow problems. The method has been validated with a number of canonical test cases as well as with the method of manufactured smooth solutions, including 2D merging vortex problems.
The rest of the paper is organised as follows. In Section 2 we perform continuous analysis of the rotational shallow water equations in vector invariant form, before introducing numerical preliminaries in Section 3 and semi-discrete analysis in Section 4. Then, we derive error convergence rates in Sec. 5 and energy-stable hyperviscosity
in Sec. 6. Finally, we conclude with numerical experiments in one and two space dimensions (Sec. 7), verifying our theoretical analysis. Section 8 summarises the main results of the study.
## 2 Continuous analysis
Here we will analyse the shallow water equations in one space dimension (1D), in vector invariant form. Both linear and nonlinear analyses will be performed. We then provide relevant definitions and derive well-posed linear and nonlinear boundary conditions. Much of the argument for the linear analysis can be deduced from standard texts [38]. The new results here are well-posed boundary conditions for the vector invariant form of the SWEs, formulated in terms of fluxes and applicable to linear and nonlinear problems. We prove that the boundary conditions are well-posed and energy/entropy stable.
### Rotating shallow water equation in vector invariant form
The vector invariant form of the nonlinear rotating SWEs in two space dimensions are:
\[\frac{\partial h}{\partial t}+\nabla\cdot(\mathbf{u}h)=G_{h},\quad\frac{ \partial\mathbf{u}}{\partial t}+\omega\mathbf{u}^{\perp}+\nabla\bigg{(}\frac{ |\mathbf{u}|^{2}}{2}+gh\bigg{)}\hskip-2.845276pt=\mathbf{G}_{u},\ \ \ \omega=\frac{\partial v}{\partial x}-\frac{\partial u}{\partial y}+f_{c},\ \ \ (x,y)\in\Omega\subset \mathbb{R}^{2},\ \ \ t\geq 0, \tag{1}\]
where \((x,y)\) are the spatial variables, \(t\) denotes time, \(h>0\) and \(\mathbf{u}=[u,v]^{T}\) are primitive variables denoting water height and the velocity vector, \(\omega\) is the absolute vorticity, \(f_{c}\) is the Coriolis frequency and \(\mathbf{u}^{\perp}=[-v,u]^{T}\). The vector \(\mathbf{G}=[G_{h},\mathbf{G}_{u}]\) is a source term, and can be used to model non-flat topography, with \(\mathbf{G}=[0,-g\nabla b]\) where \(b\) is the profile of the topography. In 1D, with \(\mathbf{u}=[u,0]^{T}\) and \(\partial/\partial y=0\), (1) reduces to
\[\frac{\partial h}{\partial t}+\frac{\partial}{\partial x}(uh)=G_{h},\quad \frac{\partial u}{\partial t}+\frac{\partial}{\partial x}\bigg{(}\frac{u^{2}} {2}+gh\bigg{)}\hskip-2.845276pt=G_{u},\ \ x\in\Omega\subset\mathbb{R},\ \ \ t\geq 0, \tag{2}\]
which can be written as
\[\frac{\partial\mathbf{q}}{\partial t}+\frac{\partial\mathbf{F}}{\partial x}= \mathbf{G},\ \ \ x\in\Omega\subset\mathbb{R},\ \ \ t\geq 0, \tag{3}\]
where \(\mathbf{G}=[G_{h},G_{u}]^{T}\), \(\mathbf{q}=[h,u]^{T}\), and \(\mathbf{F}=[\mathbf{F}_{1},\mathbf{F}_{2}]^{T}\) are the nonlinear fluxes with \(\mathbf{F}_{1}=uh\) and \(\mathbf{F}_{2}=u^{2}/2+gh\). We note that unlike the flux form [16], where the integrals of the conserved variables \([h,uh]^{T}\) are invariants, here in (3) the integrals of the primitive variables \(\mathbf{q}=[h,u]^{T}\) are the invariants.
We will also consider the linear version of the SWE (3). Introducing zero-mean quantities in the form of perturbed variables, i.e., \(u=U+\widetilde{u}\) and \(h=H+\widetilde{h}\) with the mean states \(H>0\), \(U\), and discarding nonlinear terms of order \(\mathcal{O}(\widetilde{u}^{2},\widetilde{h}^{2},\widetilde{u}\widetilde{h})\), we obtain the linear vector-invariant SWE (3) where \(\mathbf{F}=[\mathbf{F}_{1},\mathbf{F}_{2}]^{T}\) are the linear fluxes, with \(\mathbf{F}_{1}=Uh+Hu\) and \(\mathbf{F}_{2}=Uu+gh\), and we have dropped the tilde on fluctuating quantities for convenience.
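For concreteness, both flux definitions can be evaluated pointwise on a grid. The following sketch (in Python/NumPy, with function names of our own choosing rather than from any established library) implements the nonlinear fluxes of (3) and the linearised fluxes introduced above.

```python
import numpy as np

g = 9.81  # gravitational acceleration

def nonlinear_flux(h, u):
    """Nonlinear vector-invariant fluxes F1 = u*h and F2 = u**2/2 + g*h, cf. (3)."""
    return u * h, 0.5 * u**2 + g * h

def linear_flux(h, u, H, U):
    """Linearised fluxes F1 = U*h + H*u and F2 = U*u + g*h about the mean state (H, U)."""
    return U * h + H * u, U * u + g * h

# example: a small perturbation of a lake at rest
x = np.linspace(0.0, 1.0, 101)
h = 1.0 + 0.01 * np.exp(-100.0 * (x - 0.5)**2)
u = np.zeros_like(x)
F1, F2 = nonlinear_flux(h, u)
```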
### Linear analysis
As mentioned earlier, here we consider first the analysis of the linear SWE. To begin we note that in the linear case, the flux \(\mathbf{F}\) has a more simple expression, namely
\[\mathbf{F}=M\mathbf{q},\ \ \ M=\begin{bmatrix}U&H\\ g&U\end{bmatrix}. \tag{4}\]
To simplify the linear analysis, we will assume that the mean states are independent of the time variable \(t\) but can depend on the spatial variable \(x\), that is \(U(x)\), \(H(x)\). However, the analysis is valid when the mean states depend on both space and time, that is \(U(x,t)\), \(H(x,t)\), see Remark 2.2.1.
#### 2.2.1 Well-posedness
We consider the linear SWE (3)-(4) with a forcing function
\[\frac{\partial\mathbf{q}}{\partial t}=D\mathbf{q}+\mathbf{G},\ \ \ D:=-\frac{\partial}{\partial x}M. \tag{5}\]
Further, we consider (5) in the bounded domain \(\Omega=[0,L]\) with the boundary points \(\Gamma=\{0,L\}\), and augment (5) with the initial condition and homogeneous boundary conditions
\[\mathbf{q}(x,t=0)=\mathbf{f}(x)\in L^{2}(\Omega),\quad x\in\Omega, \tag{6a}\] \[\mathcal{B}\mathbf{q}=0,\quad x\in\Gamma. \tag{6b}\]
Here \(\mathcal{B}\) is a linear boundary operator with homogeneous boundary data. The analysis can be extended to non-homogeneous boundary data, but this would complicate the algebra.
In the following, we will introduce the relevant notation required to prove the well-posedness of the IBVP, (5)-(6). To begin, we define the weighted \(L^{2}(\Omega)\) inner product and norm
\[(\mathbf{p},\mathbf{q})_{W}=\int_{\Omega}\mathbf{p}^{T}W\mathbf{q}\,dx,\quad \|\mathbf{q}\|_{W}^{2}=(\mathbf{q},\mathbf{q})_{W}, \tag{7}\]
where \(W=W^{T}\) and \(\mathbf{q}^{T}W\mathbf{q}>0\quad\forall\mathbf{q}\in\mathbb{R}^{2}\backslash \{\mathbf{0}\}\) (i.e., symmetric and positive definite). For the linear SWE, we choose specifically the symmetric weight matrix
\[W=\begin{bmatrix}g&U\\ U&H\end{bmatrix}, \tag{8}\]
The eigenvalues of \(W\) are \(\lambda_{W}^{\pm}=\frac{1}{2}\left((g+H)\pm\sqrt{(g+H)^{2}-4gH(1-\mathrm{Fr}^{2})}\right)\), which are positive for sub-critical flows, \(\mathrm{Fr}=|U|/\sqrt{gH}<1\), so that \(W=W^{T}\) and \(\mathbf{q}^{T}W\mathbf{q}>0\quad\forall\mathbf{q}\in\mathbb{R}^{2}\backslash\{\mathbf{0}\}\).
**Definition 2.1**.: _The IBVP (5)-(6) is strongly well-posed if a unique solution, \(\mathbf{q}=[h,u]^{T}\) satisfies_
\[\|\mathbf{q}\|_{W}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W}+\max_{\tau\in[0,t] }\|\mathbf{G}(\,.\,\tau)\|_{W}\bigg{)},\]
_for every forcing function \(\mathbf{G}\in L^{2}(\Omega)\) and some constants \(K>0\), \(\mu\in\mathbb{R}\) independent of \(\mathbf{f}\) and \(\mathbf{G}\)._
The well-posedness of the IBVP (5)-(6) can be related to the boundedness of the differential operator \(D\). We introduce the function space
\[\mathbb{V}=\left\{\mathbf{p}\ |\ \mathbf{p}(x)\in\mathbb{R}^{2},\quad\|\mathbf{p} \|_{W}<\infty,\quad 0\leq x\leq L,\quad\{\mathcal{B}\mathbf{p}=0,x\in\Gamma\} \right\}. \tag{9}\]
The following definitions will be useful.
**Definition 2.2**.: _The differential operator \(D\) is semi-bounded in the function space \(\mathbb{V}\) if it satisfies_
\[(\mathbf{q},D\mathbf{q})_{W}\leq\mu\|\mathbf{q}\|_{W}^{2},\ \forall\mathbf{q}\in \mathbb{V},\ \mu\in\mathbb{R}.\]
**Lemma 1**.: _Consider the linear differential operator \(D=-\frac{\partial}{\partial x}M\) given in (5) subject to the boundary conditions, \(\mathcal{B}\mathbf{q}=0\), (6b), where \(M\) is the constant coefficient matrix defined in (4). Let \(W\) be the symmetric positive definite matrix given by (8), defining the weighted \(L^{2}\)-norm (7), then the matrix product \(\widetilde{M}=WM\) is symmetric. Further, if \(\mathcal{B}\mathbf{q}=0\) is such that \(\bigg{(}\mathbf{q}^{T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}^{L}=\bigg{(} \mathbf{q}^{T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{L}-\bigg{(}\mathbf{q}^{ T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}\geq 0\), then \(D\) is semi-bounded._
Proof.: First, we consider the matrix product, \(\widetilde{M}=WM\),
\[\widetilde{M}=\ \begin{bmatrix}2gU&U^{2}+gH\\ U^{2}+gH&2HU\end{bmatrix}. \tag{10}\]
Clearly, \(\widetilde{M}\) is symmetric.
Now consider \((\mathbf{q},D\mathbf{q})_{W}\) and integrate by parts, we have
\[(\mathbf{q},D\mathbf{q})_{W}=-\frac{1}{2}\int_{\Omega}\frac{\partial}{\partial x}\Big{(}\mathbf{q}^{T}\widetilde{M}\mathbf{q}\Big{)}\,dx=-\frac{1}{2}\bigg{(}\mathbf{q}^{T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}^{L}.\]
So for the boundary operator \(\mathcal{B}\mathbf{q}=0\), if \(\bigg{(}\mathbf{q}^{T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}^{L}\geq 0\) then
\[(\mathbf{q},D\mathbf{q})_{W}=-\frac{1}{2}\bigg{(}\mathbf{q}^{T}\widetilde{M} \mathbf{q}\bigg{)}\bigg{|}_{0}^{L}\leq 0.\]
The bound holds with \(\mu=0\), since \((\mathbf{q},D\mathbf{q})_{W}\leq 0\), and \(D\) is therefore semi-bounded by Definition 2.2.
It is often possible to formulate boundary conditions \(\mathcal{B}\mathbf{q}=0\) such that \(\bigg{(}\mathbf{q}^{T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}^{L}\geq 0\). An immediate example is the case of periodic boundary conditions, where \(\bigg{(}\mathbf{q}^{T}\widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}^{L}=0\). For non-periodic boundary conditions, however, the boundary operator must not destroy the existence of solutions. We will now introduce the definition of maximal semi-boundedness of \(D\), which will ensure well-posedness of the IBVP.
**Definition 2.3**.: _The differential operator \(D\) is maximally semi-bounded if it is semi-bounded in the function space \(\mathbb{V}\) but not semi-bounded in any function space with fewer boundary conditions._
The maximal semi-boundedness property is intrinsically connected to well-posedness of the IBVP. We formulate this result in the following theorem. The reader can consult [38] for more elaborate discussions.
**Theorem 2**.: _Consider the IBVP (5)-(6) if the differential operator \(D\) is maximally semi-bounded, \((\mathbf{q},D\mathbf{q})_{W}\leq\mu\|\mathbf{q}\|_{W}^{2}\), then it is strongly well-posed. That is, there is a unique solution \(\mathbf{q}\) satisfying the estimate_
\[\|\mathbf{q}\|_{W}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W}+\max_{\tau\in[0,t ]}\|\mathbf{G}(.\,\tau)\|_{W}\bigg{)},\quad K=\max\big{(}1,(1-e^{-\mu t})/\mu\big{)}.\]
Proof.: We consider
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W}^{2}=\bigg{(}\mathbf{q},\frac{\partial \mathbf{q}}{\partial t}\bigg{)}_{W}=\big{(}\mathbf{q},D\mathbf{q}\big{)}_{W}+ \big{(}\mathbf{q},\mathbf{G}\big{)}_{W}. \tag{11}\]
Semi-boundedness and the Cauchy-Schwarz inequality yield
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W}^{2}\leq\mu\|\mathbf{q}\|_{W}^{2}+\| \mathbf{q}\|_{W}\|\mathbf{G}\|_{W}\iff\frac{d}{dt}\|\mathbf{q}\|_{W}\leq\mu\| \mathbf{q}\|_{W}+\|\mathbf{G}\|_{W}. \tag{12}\]
Combining Gronwall's Lemma and Duhamel's principle gives
\[\|\mathbf{q}\|_{W}\leq e^{\mu t}\|\mathbf{f}\|_{W}+\int_{0}^{t}e^{\mu(t-\tau)}\|\mathbf{G}(\,\cdot\,,\tau)\|_{W}d\tau\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W}+\max_{\tau\in[0,t]}\|\mathbf{G}(\,\cdot\,,\tau)\|_{W}\bigg{)}, \tag{13}\]
where \(K=\max\left(1,(1-e^{-\mu t})/\mu\right)\). When \(\mu=0\), \(K=\max\left(1,t\right)\) where \(t>0\) is the final time, and we have the required result of well-posedness given by Definition 2.1.
Theorem 2 assumes that the weight matrix \(W\) (that is \(\begin{bmatrix}g&U\\ U&H\end{bmatrix}\)) is time-independent. Nevertheless, the theorem also holds when \(W\) is time-dependent. We formulate this as a remark.
**Remark**.: _Note that if \(W(x,t)\) then_
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W}^{2}=\bigg{(}\mathbf{q},\frac{ \partial\mathbf{q}}{\partial t}\bigg{)}_{W}+\frac{1}{2}\left(\mathbf{q},W_{t} \mathbf{q}\right)=\big{(}\mathbf{q},D\mathbf{q}\big{)}_{W}+\left(\mathbf{q}, \mathbf{G}\right)_{W}+\frac{1}{2}\left(\mathbf{q},W_{t}\mathbf{q}\right). \tag{14}\]
_On the right hand side we use the fact that \(D\) is maximally semi-bounded, and the last two terms can be bounded by the Cauchy-Schwarz inequality, giving_
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W}^{2}\leq(\mu+\alpha)\|\mathbf{q}\|_{W }^{2}+\|\mathbf{q}\|_{W}\|\mathbf{G}\|_{W}\iff\frac{d}{dt}\|\mathbf{q}\|_{W} \leq(\mu+\alpha)\|\mathbf{q}\|_{W}+\|\mathbf{G}\|_{W}, \tag{15}\]
_where \(0\leq\alpha\leq\max_{\tau\in[0,t]}(|W_{t}|/\min(2|\lambda_{W}^{-}|,2|\lambda_{W}^{+}|))\) is a constant. Again, combining Gronwall's Lemma and Duhamel's principle gives_
\[\|\mathbf{q}\|_{W}\leq e^{\mu_{0}t}\|\mathbf{f}\|_{W}+\int_{0}^{t}e^{\mu_{0}(t-\tau)}\|\mathbf{G}(\,\cdot\,,\tau)\|_{W}d\tau\leq Ke^{\mu_{0}t}\bigg{(}\|\mathbf{f}\|_{W}+\max_{\tau\in[0,t]}\|\mathbf{G}(\,\cdot\,,\tau)\|_{W}\bigg{)},\quad\mu_{0}=\mu+\alpha. \tag{16}\]
#### 2.2.2 Well-posed linear boundary conditions
We will now formulate well-posed boundary conditions for the IBVP (5)-(6). Well-posed boundary conditions require the differential operator \(D\) to be maximally semi-bounded in the function space \(\mathbb{V}\). That is, we need a minimal number of boundary conditions so that \(D\) is semi-bounded in \(\mathbb{V}\). In general, the number of boundary conditions must be equal to the number of nonzero eigenvalues of \(\widetilde{M}\), while the location of the boundary conditions depends on the signs of the eigenvalues. In particular, the number of boundary conditions at \(x=0\) must be equal to the number of positive eigenvalues of \(\widetilde{M}\), and the number of boundary conditions at \(x=L\) must equal the number of negative eigenvalues of \(\widetilde{M}\). The matrix \(\widetilde{M}\) has two nonzero eigenvalues, namely
\[\lambda^{+}=U(H+g)+\sqrt{U^{2}(H-g)^{2}+(U^{2}+gH)^{2}},\quad \lambda^{-}=U(H+g)-\sqrt{U^{2}(H-g)^{2}+(U^{2}+gH)^{2}}.\]
The eigenvalues are real and have opposite signs, with \(\lambda^{+}>0\) and \(\lambda^{-}<0\). This implies that, for sub-critical flow, we require one BC at \(x=0\), and one BC at \(x=L\).
To enable effective numerical treatments, for both the linear and nonlinear SWE, we will formulate the boundary condition in terms of fluxes. First we note that \(\frac{1}{2}\mathbf{q}^{T}\widetilde{M}\mathbf{q}=F_{1}F_{2}\), where \(F_{1}=Uh+Hu\) and \(F_{2}=gh+Uu\) are linear mass flux and linear velocity flux, respectively. The boundary condition is given by
\[\left\{\mathcal{B}\mathbf{p}=0,\ x\in\Gamma\right\}\equiv\left\{ \alpha_{1}F_{1}+\alpha_{2}F_{2}=0,\ x=0;\ \beta_{1}F_{1}-\beta_{2}F_{2}=0,\ x=L\right\}, \tag{17}\]
where \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) (for \(j=1,2\)) are real constants that do not simultaneously vanish at each boundary, that is \(\alpha_{1}+\alpha_{2}>0\) and \(\beta_{1}+\beta_{2}>0\). The boundary condition (17) can model different physical situations. For example
1. linear mass flux: \(\alpha_{1}>0,\quad\alpha_{2}=0\) or \(\beta_{1}>0,\quad\beta_{2}=0\),
2. linear velocity flux (pressure BC): \(\alpha_{1}=0,\quad\alpha_{2}>0\) or \(\beta_{1}=0,\beta_{2}>0\),
3. linear transmissive BC: \(\alpha_{1}=1,\quad\alpha_{2}=\sqrt{H/g}\) or \(\beta_{1}=1,\quad\beta_{2}=\sqrt{H/g}\).
The linear transmissive BC is equivalent to setting the incoming linear Riemann invariants to zero on the boundary, that is
\[\left\{\mathcal{B}\mathbf{p}=0,\ x\in\Gamma\right\}\equiv\left\{ \sqrt{\frac{g}{H}}h+u=0,\ x=0;\ \sqrt{\frac{g}{H}}h-u=0,\ x=L\right\}. \tag{18}\]
**Lemma 3**.: _Consider the linear differential operator \(D=-\frac{\partial}{\partial x}M\) given in (5) subject to the boundary conditions, \(\mathcal{B}\mathbf{q}=0\), (17) with \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) (for \(j=1,2\)). For sub-critical flows, with \(\text{Fr}=|U|/\sqrt{gH}<1\), the differential operator \(D\) is maximally semi-bounded in \(\mathbb{V}\)._
Proof.: As required, there is one BC at \(x=0\), and one BC at \(x=L\). It suffices to show that the differential operator \(D\) is semi-bounded \((\mathbf{q},D\mathbf{q})_{W}\leq 0\). Now consider \((\mathbf{q},D\mathbf{q})_{W}\) and integrate by parts, we have
\[(\mathbf{q},D\mathbf{q})_{W}=-\frac{1}{2}\bigg{(}\mathbf{q}^{T} \widetilde{M}\mathbf{q}\bigg{)}\bigg{|}_{0}^{L}=-F_{1}F_{2}\bigg{|}_{0}^{L}=F_ {1}F_{2}\bigg{|}_{0}-F_{1}F_{2}\bigg{|}_{L}.\]
Note that if \(\alpha_{1}=0,\quad\alpha_{2}>0\) then \(F_{2}=0\) at \(x=0\), and if \(\beta_{1}=0,\beta_{2}>0\) then \(F_{2}=0\) at \(x=L\), and we have \((\mathbf{q},D\mathbf{q})_{W}=0\). Furthermore, if \(\alpha_{1}>0\) then \(F_{1}=-\alpha_{2}/\alpha_{1}F_{2}\) at \(x=0\) and if \(\beta_{1}>0\) then \(F_{1}=\beta_{2}/\beta_{1}F_{2}\) at \(x=L\), giving \((\mathbf{q},D\mathbf{q})_{W}=-\alpha_{2}/\alpha_{1}F_{2}^{2}|_{x=0}-\beta_{2}/ \beta_{1}F_{2}^{2}|_{L}\leq 0\). Thus for all \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) we must have
\[(\mathbf{q},D\mathbf{q})_{W}=F_{1}F_{2}\bigg{|}_{0}-F_{1}F_{2} \bigg{|}_{L}\leq 0.\]
As before, the bound holds with \(\mu=0\), so that \((\mathbf{q},D\mathbf{q})_{W}\leq 0\) and \(D\) is semi-bounded.
We introduce the boundary term BT defined by
\[\text{BT}=F_{1}F_{2}\bigg{|}_{0}-F_{1}F_{2}\bigg{|}_{L}\leq 0. \tag{19}\]
Note that by Lemma 3 we must have \((\mathbf{q},D\mathbf{q})_{W}\leq\mu\|\mathbf{q}\|_{W}^{2}\leq 0\), where \(\mu=\max_{\tau\in[0,t]}\frac{\text{BT}}{\|\mathbf{q}\|_{W}^{2}}\leq 0\).
**Theorem 4**.: _Consider the vector invariant form of the linear SWE (5), at sub-critical flows, with \(\text{Fr}=|U|/\sqrt{gH}<1\), subject to the initial condition (6a) and the boundary condition (6b), \(\mathcal{B}\mathbf{q}=0\), where \(\mathcal{B}\mathbf{q}\) is given by (17) with \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) (for \(j=1,2\)) and \(\mathrm{BT}\leq 0\) given by (19). The corresponding IBVP (5)-(6) is strongly well-posed. That is, there is a unique \(\mathbf{q}\) satisfying the estimate_
\[\|\mathbf{q}\|_{W}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W}+\max_{\tau\in[0, t]}\|\mathbf{G}(.\,\tau)\|_{W}\bigg{)},\quad\mu=\max_{\tau\in[0,t]}\frac{\mathrm{BT}}{\|\mathbf{ q}\|_{W}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu t})/\mu\big{)}.\]
Proof.: Invoking Lemma 3 and Theorem 2 yields the required result.
### Nonlinear analysis
We will now extend the linear analysis performed in section 2.2.1 to the nonlinear vector invariant SWE. It is necessary that the nonlinear analysis does not contradict the conclusions drawn from the linear analysis. Otherwise, any valid linearisation will violate the linear analysis and would render the nonlinear theory ineffective. For example, the number and location of boundary conditions must be consistent with the linear theory. The consistency between the linear and nonlinear theory is also necessary for proving the convergence of the numerical method for nonlinear problems, see Section 5 for details.
A main contribution of this study is the proof of strong well-posedness and strict stability of numerical approximations for the IBVP for the nonlinear SWE in vector invariant form for sub-critical flows, \(\text{Fr}<1\). Many prior studies [16] have proven linear stability, where the assumption of smooth solutions is then used to simulate nonlinear problems. There are a few exceptions, however, (see, e.g., [18]) where a nonlinear analysis is considered for the IBVP. However, to the best of our knowledge there is no numerical evidence yet verifying the theory.
The proof we present here is very similar to that given in Section 2.2.1, so we shall keep it brief. To begin, we extend the linear boundary condition (17) to the nonlinear regime. The nonlinear boundary condition is given by
\[\left\{\mathcal{B}\mathbf{p}=0,\ x\in\Gamma\right\}\equiv\left\{\alpha_{1}F_{1 }+\alpha_{2}F_{2}=0,\ x=0;\ \beta_{1}F_{1}-\beta_{2}F_{2}=0,\ x=L\right\}, \tag{20}\]
where \(F_{1}=uh\) and \(F_{2}=u^{2}/2+gh\) are the nonlinear mass flux and velocity flux for (3), and \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) (for \(j=1,2\)) are nonlinear coefficients that do not simultaneously vanish at each boundary. As above, the nonlinear boundary condition (20) can model different physical situations. We give the following nonlinear examples
1. Mass flux: \(\alpha_{1}>0,\quad\alpha_{2}=0\) or \(\beta_{1}>0,\quad\beta_{2}=0\).
2. Velocity flux (pressure BC): \(\alpha_{1}=0,\quad\alpha_{2}>0\) or \(\beta_{1}=0,\beta_{2}>0\).
3. Transmissive BC: \(\alpha_{1}=1,\quad\alpha_{2}=\sqrt{h/g}(\sqrt{gh}-0.5u)/(\sqrt{gh}-u)>0\), or \(\beta_{1}=1,\quad\beta_{2}=\sqrt{h/g}(\sqrt{gh}+0.5u)/(\sqrt{gh}+u)>0\).
With some tedious but straightforward algebraic manipulations, it can be shown that the nonlinear transmissive BC is equivalent to setting the incoming nonlinear Riemann invariants to zero on the boundary,
\[\left\{\mathcal{B}\mathbf{p}=0,\ x\in\Gamma\right\}\equiv\left\{u+2\sqrt{gh}= 0,\ x=0;\ u-2\sqrt{gh}=0,\ x=L\right\}. \tag{21}\]
For sub-critical flows, with \(\text{Fr}=|u|/\sqrt{gh}<1\), the coefficients \(\alpha_{2}=\sqrt{h/g}(\sqrt{gh}-0.5u)/(\sqrt{gh}-u)>0\) and \(\beta_{2}=\sqrt{h/g}(\sqrt{gh}+0.5u)/(\sqrt{gh}+u)>0\) for the nonlinear transmissive boundary conditions are positive and depend nonlinearly on \(u\) and \(h>0\). This is opposed to the linear case where \(\alpha_{2}=\sqrt{H/g}\), \(\beta_{2}=\sqrt{H/g}\), and \(H>0\) are known constants.
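As an illustration, the nonlinear transmissive coefficients of item 3 above can be evaluated directly from the boundary trace of the numerical solution. The helper below is a sketch with our own naming, valid for sub-critical states \(h>0\), \(\mathrm{Fr}<1\), where the denominators \(\sqrt{gh}\mp u\) do not vanish.

```python
import numpy as np

g = 9.81

def transmissive_coefficients(h, u):
    """Nonlinear transmissive BC coefficients of item 3, cf. (20)-(21):
    (alpha1, alpha2) for x = 0 and (beta1, beta2) for x = L."""
    c = np.sqrt(g * h)                                        # local gravity-wave speed
    alpha = (1.0, np.sqrt(h / g) * (c - 0.5 * u) / (c - u))   # x = 0
    beta = (1.0, np.sqrt(h / g) * (c + 0.5 * u) / (c + u))    # x = L
    return alpha, beta

# sub-critical example: Fr = |u| / sqrt(g*h) < 1
(alpha1, alpha2), (beta1, beta2) = transmissive_coefficients(h=1.0, u=0.5)
```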
We introduce the nonlinear differential operator
\[\mathcal{D}\mathbf{q}:=-\begin{bmatrix}\frac{\partial}{\partial x}(uh)\\ \frac{\partial}{\partial x}\Big{(}\frac{u^{2}}{2}+gh\Big{)}\end{bmatrix}=- \frac{\partial\mathbf{F}}{\partial x}. \tag{22}\]
We also define
\[(\mathbf{p},\mathbf{q})_{W}=\int_{\Omega}\mathbf{p}^{T}W\mathbf{q}\;dx,\quad \|\mathbf{q}\|_{W}^{2}=(\mathbf{q},\mathbf{q})_{W},\quad W=\begin{bmatrix}g& \frac{u}{2}\\ \frac{u}{2}&\frac{h}{2}\end{bmatrix},\quad h>0. \tag{23}\]
The eigenvalues of \(W\) are \(\lambda_{W}^{\pm}=\frac{1}{2}(g+\frac{1}{2}h)\pm\frac{1}{2}\sqrt{(g+\frac{1}{2}h)^{2}-gh(2-\text{Fr}^{2})}\), which are positive whenever \(\text{Fr}<\sqrt{2}\), and in particular for sub-critical flows, \(\text{Fr}<1\), so that \(W=W^{T}\) and \(\mathbf{q}^{T}W\mathbf{q}>0\quad\forall\mathbf{q}\in\mathbb{R}^{2}\backslash\{\mathbf{0}\}\). We will prove the nonlinear equivalence of Lemma 3.
**Lemma 5**.: _Consider the nonlinear differential operator \(\mathcal{D}\) given in (22) subject to the nonlinear boundary condition (20) with \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) (for \(j=1,2\)). For sub-critical flows, with \(\text{Fr}=|u|/\sqrt{gh}<1\), the differential operator \(\mathcal{D}\) is maximally semi-bounded in \(\mathbb{V}\)._
Proof.: There is one BC at \(x=0\), and one BC at \(x=L\). So it suffices to show that the differential operator \(\mathcal{D}\) is semi-bounded \((\mathbf{q},\mathcal{D}\mathbf{q})_{W}\leq 0\). We consider \((\mathbf{q},\mathcal{D}\mathbf{q})_{W}\) and integrate by parts, we have
\[(\mathbf{q},\mathcal{D}\mathbf{q})_{W}=-F_{1}F_{2}\bigg{|}_{0}^{L}=F_{1}F_{2} \bigg{|}_{0}-F_{1}F_{2}\bigg{|}_{L}.\]
It is sufficient to show that \(F_{1}F_{2}\bigg{|}_{0}\leq 0\) and \(F_{1}F_{2}\bigg{|}_{L}\geq 0\), which follows from the proof of Lemma 3. Therefore, for all \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) we must have \((\mathbf{q},\mathcal{D}\mathbf{q})_{W}\leq 0\).
To prove the nonlinear equivalence of Theorem 4, we introduce the nonlinear weighted \(L^{2}\) norm
\[\|\mathbf{q}\|_{W^{\prime}}^{2}=\int_{\Omega}\mathbf{q}^{T}W^{\prime}\mathbf{ q}dx,\quad W^{\prime}=\begin{bmatrix}g&0\\ 0&h\end{bmatrix},\quad h>0. \tag{24}\]
For sub-critical flows, with \(\text{Fr}=|u|/\sqrt{gh}<1\), the two nonlinear norms, \(\|\cdot\|_{W}\) and \(\|\cdot\|_{W^{\prime}}\), are equivalent, that is for positive real numbers \(0<\alpha\leq 1\), \(\beta\geq 2\) we have
\[\alpha\|\cdot\|_{W^{\prime}}\leq\|\cdot\|_{W}\leq\beta\|\cdot\|_{W^{\prime}}. \tag{25}\]
Note that for \(\mathbf{q}=[h,u]^{T}\), the quantity
\[e=\frac{1}{2}\mathbf{q}^{T}W^{\prime}\mathbf{q}=\frac{1}{2}(gh^{2}+hu^{2}), \tag{26}\]
is the elemental energy. For sub-critical flows, with \(\text{Fr}=|u|/\sqrt{gh}<1\), the elemental energy \(e\) is a convex function (in terms of the prognostic variables \(u\) and \(h\)) and defines an entropy, see [22]. When \(\mathbf{G}=0\), the entropy/energy is conserved for smooth solutions, and dissipated across shocks and discontinuous solutions [38].
Let BT be given by (19). Note that by Lemma 5 we must have \((\mathbf{q},\mathcal{D}\mathbf{q})_{W}\leq\mu\|\mathbf{q}\|_{W^{\prime}}^{2}\leq 0\), where \(\mu=\max_{\tau\in[0,t]}\frac{BT}{\|\mathbf{q}\|_{W^{\prime}}^{2}}\leq 0\). We are now ready to state the theorem which proves the nonlinear equivalence of Theorem 4.
**Theorem 6**.: _Consider the vector invariant form of the nonlinear SWE (3), at sub-critical flows with \(\text{Fr}=|u|/\sqrt{gh}<1\), subject to the initial condition (6a) and the nonlinear boundary condition (20) with \(\alpha_{j}\geq 0\) and \(\beta_{j}\geq 0\) (for \(j=1,2\)) and \(\text{BT}\leq 0\) be given by (19). The solutions \(\mathbf{q}\) of the corresponding IBVP (3), (6a) and (20) satisfy the nonlinear energy estimate_
\[\|\mathbf{q}\|_{W^{\prime}}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W^{\prime}}+ \max_{\tau\in[0,t]}\|\mathbf{G}(\,\cdot\,,\tau)\|_{W^{\prime}}\bigg{)},\quad \mu=\max_{\tau\in[0,t]}\frac{BT}{\|\mathbf{q}\|_{W^{\prime}}^{2}}\leq 0,\quad K= \max\big{(}1,(1-e^{-\mu t})/\mu\big{)}.\]
Proof.: As before, from the left, we multiply the vector invariant nonlinear SWE (3) with \([\mathbf{F}_{2},\ \mathbf{F}_{1}]=\mathbf{q}^{T}W\) and integrate over the domain \(\Omega\), where here \(W\) is defined by (23). This yields,
\[\bigg{(}\mathbf{q},\frac{\partial\mathbf{q}}{\partial t}\bigg{)}_{W}=(\mathbf{ q},\mathcal{D}\mathbf{q})_{W}+(\mathbf{q},\mathbf{G})_{W}. \tag{27}\]
We note that different from the linear case, we have,
\[\mathbf{q}^{T}W\frac{\partial\mathbf{q}}{\partial t}=\frac{1}{2}\frac{ \partial}{\partial t}\left(\mathbf{q}^{T}W^{\prime}\mathbf{q}\right),\quad W^ {\prime}=\begin{bmatrix}g&0\\ 0&h\end{bmatrix},\quad h>0.\]
The semi-boundedness of \(\mathcal{D}\) yields,
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W^{\prime}}^{2}\leq\mu\|\mathbf{q}\|_{W ^{\prime}}^{2}+(\mathbf{q},\widetilde{\mathbf{G}})_{W^{\prime}},\quad\widetilde{ \mathbf{G}}=(W^{\prime})^{-1}WG, \tag{28}\]
where \(\mu=\max_{\tau\in[0,t]}\frac{BT}{\|\mathbf{q}\|_{W^{\prime}}^{2}}\leq 0\). Note that at sub-critical flow regime we have \(|\widetilde{\mathbf{G}}|\leq|\mathbf{G}|\). We apply Cauchy-Schwarz inequality to the right hand side of (28) giving
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W^{\prime}}^{2}\leq\mu\|\mathbf{q}\|_{W^ {\prime}}^{2}+\|\mathbf{q}\|_{W^{\prime}}\|\widetilde{\mathbf{G}}\|_{W^{ \prime}}\leq\mu\|\mathbf{q}\|_{W^{\prime}}^{2}+\|\mathbf{q}\|_{W^{\prime}}\| \mathbf{G}\|_{W^{\prime}}. \tag{29}\]
Making use of Gronwall's Lemma and Duhamel's principle gives
\[\|\mathbf{q}\|_{W^{\prime}}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W^{\prime}}+ \max_{\tau\in[0,t]}\|\mathbf{G}(\,.\,,\tau)\|_{W^{\prime}}\bigg{)}. \tag{30}\]
In the coming sections we will introduce numerical approximations for the IBVP using DP SBP operators. We will prove both linear and nonlinear numerical stability by deriving discrete energy estimates analogous to Theorems 4-6.
## 3 Dual-pairing summation by parts finite difference operators
In order to introduce the relevant notation and keep our study self-contained, we will firstly give a quick introduction to the standard DP SBP finite difference operators derived by [36], as an extension to the traditional (centred difference) SBP (see e.g., reviews by [34, 35]). Then we discuss the recently derived dispersion-relation preserving DP operators of [37].
We consider a finite 1D interval, \(\Omega=[0,L]\), and discretise it into \(N+1\) grid points \(x_{j}\) with constant spatial step \(\Delta x>0\); we have
\[x_{j}=j\Delta x,\quad j\in\{0,1,\ldots,N\},\quad\Delta x=\frac{L}{N},\quad \mathbf{x}:=\left(x_{0},x_{1},\ldots,x_{N}\right)^{T}\in\mathbb{R}^{N+1}. \tag{31}\]
We will use DP upwind, \(\alpha\)-DRP-DP upwind and traditional SBP operators [36, 33, 37, 34] to approximate the spatial derivatives, \(\partial/\partial x\). Let
\[P:=\mathrm{diag}\left(p_{0},p_{1},\ldots,p_{N}\right),\quad p_{j}>0,\quad \forall j \tag{32}\]
be the weights of a composite quadrature rule such that \(\sum_{j=0}^{N}p_{j}\zeta(x_{j})\approx\int_{0}^{L}\zeta(x)dx\) for an integrable function \(\zeta(x)\). Then, we introduce the DP first derivative operators [37, 36] \(D_{+},D_{-}:\mathbb{R}^{N+1}\mapsto\mathbb{R}^{N+1}\) with \((D_{\pm}\mathbf{f})_{j}\approx\partial f/\partial x|_{x=x_{j}}\) so that the SBP property holds:
\[\mathbf{g}^{T}P\left(D_{+}\mathbf{f}\right)+\mathbf{f}^{T}P\left(D_{-}\mathbf{g}\right)=f \left(x_{N}\right)g\left(x_{N}\right)-f\left(x_{0}\right)g\left(x_{0}\right), \tag{33}\]
where \(\mathbf{f}=\left(f\left(x_{0}\right),\ldots,f\left(x_{N}\right)\right)^{T}\), \(\mathbf{g}=\left(g\left(x_{0}\right),\ldots,g\left(x_{N}\right)\right)^{T}\) are vectors sampled from differentiable functions, \(f,g\in C^{1}([0,L])\). Now we introduce the matrix operators,
\[Q_{\pm}=PD_{\pm},\quad A_{\pm}=Q_{\pm}-\frac{1}{2}B,\quad B=e_{N}e_{N}^{T}-e _{1}e_{1}^{T},\quad e_{1}:=\left(1,0,\ldots,0\right)^{T},\quad e_{N}:=\left(0,\ldots,0,1\right)^{T}\in\mathbb{R}^{N+1}, \tag{34}\]
so that from (33) we have \(Q_{+}+Q_{-}^{T}=B\) and \(A_{+}+A_{-}^{T}=0\).
**Definition 3.1**.: _Let \(D_{\pm},Q_{\pm},A_{\pm}:\mathbb{R}^{N+1}\mapsto\mathbb{R}^{N+1}\) be linear operators that satisfy (33) and (34) for the diagonal norm \(P\in\mathbb{R}^{(N+1)\times(N+1)}\). If the matrix \(S_{+}:=A_{+}+A_{+}^{T}\) is negative semi-definite and \(S_{-}:=A_{-}+A_{-}^{T}\) is positive semi-definite, then the 3-tuple \((P,D_{-},D_{+})\) is called an upwind diagonal-norm DP SBP operator. We call \((P,D_{-},D_{+})\) an upwind diagonal-norm DP SBP operator of order \(q\) if the accuracy conditions_
\[\left(D_{\pm}\mathbf{x}^{i}\right)_{j}=ix_{j}^{i-1},\quad i\in\{0,1,\ldots,q\},\]
_is satisfied within the interior points \(x_{j}\), \(n_{s}<j<N-n_{s}+1\) for some \(n_{s}\geq 1\). The FD stencils for boundary points \(x_{j}\), for \(0\leq j\leq n_{s}\) and \(N-n_{s}+1\leq j\leq N\), satisfy the same property but have either \(q/2\) order accurate stencils if \(q\) is even, or \((q-1)/2\) if \(q\) is odd._
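To make the SBP identities (33)-(34) concrete, the sketch below assembles the classical second-order diagonal-norm SBP operator (central interior stencil with one-sided boundary closures) and uses it as the degenerate dual pair \(D_{+}=D_{-}=D\), for which \(Q_{+}+Q_{-}^{T}=B\) and \(S_{\pm}=0\). Genuine upwind DP and \(\alpha\)-DRP pairs, with strictly definite \(S_{\pm}\) and higher order, are constructed in [36, 37] and are not reproduced here; this is only an assumed minimal stand-in used for illustration in the sketches that follow.

```python
import numpy as np

def sbp_2nd_order(N, L):
    """Classical 2nd-order diagonal-norm SBP first-derivative operator on N+1 points."""
    dx = L / N
    p = dx * np.ones(N + 1)
    p[0] = p[-1] = 0.5 * dx                       # diagonal quadrature weights
    D = np.zeros((N + 1, N + 1))
    D[0, 0], D[0, 1] = -1.0, 1.0                  # one-sided boundary closures
    D[-1, -2], D[-1, -1] = -1.0, 1.0
    for j in range(1, N):
        D[j, j - 1], D[j, j + 1] = -0.5, 0.5      # central interior stencil
    D /= dx
    return np.diag(p), D

N, L = 20, 1.0
P, D = sbp_2nd_order(N, L)
Dp = Dm = D                                       # degenerate dual pair D+ = D- = D
B = np.zeros((N + 1, N + 1))
B[0, 0], B[-1, -1] = -1.0, 1.0                    # B = e_N e_N^T - e_1 e_1^T
Qp, Qm = P @ Dp, P @ Dm
assert np.allclose(Qp + Qm.T, B)                  # the SBP property (33)-(34)
```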
## 4 Semi-discrete approximation
Now we consider the semi-discrete vector-invariant SWE, and discuss numerical stability and convergence properties of the semi-discrete approximation. To begin, we consider the semi-discrete approximation of the SWE, using the DP-SBP framework [37]
\[\frac{d\mathbf{q}}{dt}+\mathbf{D}_{x}\mathbf{F}=\mathbf{G}+\mathbf{SAT},\quad \mathbf{q}(0)=\mathbf{f},\quad\mathbf{D}_{x}=\begin{bmatrix}D_{+}&\mathbf{0}\\ \mathbf{0}&D_{-}\end{bmatrix},\quad\mathbf{SAT}=\begin{bmatrix}\mathrm{SAT}_{ 1}\\ \mathrm{SAT}_{2}\end{bmatrix}, \tag{35}\]
where \(\mathbf{q}=[\mathbf{h}\quad\mathbf{u}]^{T}\) are grid functions, and \(\mathbf{F}=[\mathbf{F}_{1}\quad\mathbf{F}_{2}]^{T}\) are fluxes, with \(\mathbf{F}_{1}=\mathbf{u}\mathbf{h}\) and \(\mathbf{F}_{2}=\mathbf{u}^{2}/2+g\mathbf{h}\) for the nonlinear SWE and \(\mathbf{F}_{1}=U\mathbf{h}+H\mathbf{u}\) and \(\mathbf{F}_{2}=U\mathbf{u}+g\mathbf{h}\) for the linear SWE. Here \(\mathbf{SAT}\) are penalty terms defined by
\[\begin{split}\mathrm{SAT}_{1}&=-\tau_{11}P^{-1} \mathbf{e}_{1}\left(\alpha_{1}F_{10}+\alpha_{2}F_{20}\right)-\tau_{12}P^{-1} \mathbf{e}_{N}\left(\beta_{1}F_{1N}-\beta_{2}F_{2N}\right),\\ \mathrm{SAT}_{2}&=-\tau_{21}P^{-1}\mathbf{e}_{1} \left(\alpha_{1}F_{10}+\alpha_{2}F_{20}\right)-\tau_{22}P^{-1}\mathbf{e}_{N} \left(\beta_{1}F_{1N}-\beta_{2}F_{2N}\right),\end{split} \tag{36}\]
weakly implementing the linear BC (17) or the nonlinear BC (20). The real coefficients \(\tau_{ij}\) are penalty parameters to be determined by requiring stability. Note that the spatial derivative in the continuity equation is approximated with \(D_{+}\) and the spatial derivative in the momentum equation is approximated with the dual-pair \(D_{-}\).
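A matrix-free evaluation of the right hand side of (35)-(36) might look as follows. This is a sketch under our own naming conventions: the DP pair, the vectors \(P^{-1}\mathbf{e}_{1}\), \(P^{-1}\mathbf{e}_{N}\), the boundary coefficients \((\alpha_{1},\alpha_{2})\), \((\beta_{1},\beta_{2})\) and the penalty parameters \(\tau_{ij}\) (one admissible choice is sketched after Lemma 7 below) are all supplied as inputs.

```python
import numpy as np

g = 9.81

def swe_rhs(h, u, Dp, Dm, Pinv_e1, Pinv_eN, alpha, beta, tau,
            G=(0.0, 0.0), linear=False, H=1.0, U=0.0):
    """Semi-discrete right hand side of (35)-(36): dq/dt = -D_x F + G + SAT."""
    if linear:
        F1, F2 = U * h + H * u, U * u + g * h     # linear fluxes
    else:
        F1, F2 = u * h, 0.5 * u**2 + g * h        # nonlinear fluxes
    # boundary flux combinations entering the SATs (36)
    bc0 = alpha[0] * F1[0] + alpha[1] * F2[0]
    bcN = beta[0] * F1[-1] - beta[1] * F2[-1]
    (t11, t12), (t21, t22) = tau
    # continuity equation uses D+, momentum equation uses the dual D-
    dhdt = -Dp @ F1 + G[0] - t11 * Pinv_e1 * bc0 - t12 * Pinv_eN * bcN
    dudt = -Dm @ F2 + G[1] - t21 * Pinv_e1 * bc0 - t22 * Pinv_eN * bcN
    return dhdt, dudt

# usage with the second-order SBP sketch above (Dp = Dm = D):
# e1, eN = np.eye(N + 1)[0], np.eye(N + 1)[-1]
# Pinv_e1, Pinv_eN = np.linalg.solve(P, e1), np.linalg.solve(P, eN)
```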
We will now analyse the semi-discrete approximation (35). As in the continuous setting, we will begin with the linear analysis and proceed later to the nonlinear analysis.
### Linear semi-discrete analysis
Here we consider the semi-discrete equation (35) with linear fluxes \(\mathbf{F}_{1}=U\mathbf{h}+H\mathbf{u}\) and \(\mathbf{F}_{2}=U\mathbf{u}+g\mathbf{h}\), and prove linear stability of the numerical approximation. To begin, we introduce the discrete weighted \(L_{2}\)-norm,
\[\|\mathbf{q}\|_{WP}^{2}:=\mathbf{q}^{T}\left(W\otimes P\right)\mathbf{q}>0, \quad\forall\mathbf{q}\neq\mathbf{0}. \tag{37}\]
where the linear weight matrix \(W\) is given by (8) and \(P\) is the diagonal SBP norm defined in (32). The main idea is to choose penalty parameters \(\tau_{ij}\) such that we can prove a discrete analogue of Theorem 4. To be precise, we introduce the following definition.
**Definition 4.1**.: _The semi-discrete approximation (35) is strongly stable if the solution \(\mathbf{q}\) satisfies_
\[\|\mathbf{q}\|_{WP}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{WP}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{WP}\bigg{)},\]
_for some constants \(K>0\), \(\mu\in\mathbb{R}\)._
We introduce the boundary term BT given by
\[\begin{split}\mathrm{BT}&=F_{10}F_{20}-F_{1N}F_{2N}- \tau_{11}F_{20}\left(\alpha_{1}F_{10}+\alpha_{2}F_{20}\right)-\tau_{12}F_{2N} \left(\beta_{1}F_{1N}-\beta_{2}F_{2N}\right)\\ &-\tau_{21}F_{10}\left(\alpha_{1}F_{10}+\alpha_{2}F_{20}\right)- \tau_{22}F_{1N}\left(\beta_{1}F_{1N}-\beta_{2}F_{2N}\right).\end{split} \tag{38}\]
As we will see below, the numerical approximation (35) can be shown to be stable if \(\mathrm{BT}\leq 0\). We can prove the following Lemma
**Lemma 7**.: _Consider the boundary term BT given by (38). If the penalty parameters \(\tau_{ij}\) are chosen such that_
\(\tau_{11}\geq 0,\quad\tau_{21}=1/\alpha_{2},\,\tau_{12}\leq 0,\quad\tau_{22}=1/ \beta_{2},\) _for \(\alpha_{1}=0\), \(\alpha_{2}>0\), \(\beta_{1}=0\), \(\beta_{2}>0\), and_
\(\tau_{11}=1/\alpha_{1},\quad\tau_{21}=0,\,\tau_{12}=-1/\beta_{1},\quad\tau_{22}=0,\) _for \(\alpha_{1}>0\), \(\alpha_{2}\geq 0\), \(\beta_{1}>0\), \(\beta_{2}\geq 0\),_
_then \(BT\!\leq 0\)._
Proof.: If \(\tau_{11}\geq 0,\quad\tau_{21}=1/\alpha_{2},\,\tau_{12}\leq 0,\quad\tau_{22}=1/ \beta_{2},\) for \(\alpha_{1}=0\), \(\alpha_{2}>0\), \(\beta_{1}=0\), \(\beta_{2}>0\), then we have
\[\mathrm{BT}=-\tau_{11}\alpha_{2}F_{20}^{2}+\tau_{12}\beta_{2}F_{2N}^{2}\leq 0.\]
If \(\tau_{11}=1/\alpha_{1},\quad\tau_{21}=0\), \(\tau_{12}=-1/\beta_{1},\quad\tau_{22}=0,\) for \(\alpha_{1}>0\), \(\alpha_{2}\geq 0\), \(\beta_{1}>0\), \(\beta_{2}\geq 0\), then we also have
\[\mathrm{BT}=-\frac{\alpha_{2}}{\alpha_{1}}F_{20}^{2}-\frac{\beta_{2}}{\beta_{1}}F_ {2N}^{2}\leq 0.\]
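For completeness, one admissible set of penalty parameters satisfying Lemma 7 can be assembled boundary by boundary, which is consistent with the decoupled structure of (38); the helper below (our naming) is a sketch of such a choice.

```python
def penalty_parameters(alpha, beta):
    """One admissible choice of penalty parameters per Lemma 7, so that BT <= 0.
    (tau11, tau21) act on the x = 0 SATs and (tau12, tau22) on the x = L SATs."""
    a1, a2 = alpha
    b1, b2 = beta
    # left boundary x = 0
    if a1 > 0.0:
        tau11, tau21 = 1.0 / a1, 0.0
    else:                         # a1 = 0, a2 > 0 (velocity-flux / pressure BC)
        tau11, tau21 = 0.0, 1.0 / a2
    # right boundary x = L
    if b1 > 0.0:
        tau12, tau22 = -1.0 / b1, 0.0
    else:                         # b1 = 0, b2 > 0
        tau12, tau22 = 0.0, 1.0 / b2
    return (tau11, tau12), (tau21, tau22)
```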
The penalty parameters \(\tau_{ij}\) (\(i=1,2\), \(j=1,2\)) given by Lemma 7 cover all well-posed boundary condition parameters \(\alpha_{j}\geq 0\), \(\beta_{j}\geq 0\), although they are not the only admissible choices. We will now state the theorem which proves the stability of the semi-discrete approximation (35) for linear fluxes.
**Theorem 8**.: _Consider the semi-discrete approximation (35) with \(\mathbf{F}_{1}=U\mathbf{h}+H\mathbf{u}\) and \(\mathbf{F}_{2}=U\mathbf{u}+g\mathbf{h},\) at sub-critical flows, with \(\text{Fr}=|U|/\sqrt{gH}<1\). If the penalty parameters \(\tau_{ij}\) are chosen as in Lemma 7 such that \(\text{BT}\leq 0\), where BT is given by (38), then the semi-discrete approximation (35) is strongly stable. That is, the numerical solution \(\mathbf{q}\) satisfies the estimate_
\[\|\mathbf{q}\|_{WP}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{WP}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{WP}\bigg{)},\quad\mu=\max_{\tau\in[0,t]}\frac{BT}{ \|\mathbf{q}\|_{WP}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu t})/\mu\big{)}.\]
Proof.: Similar to the continuous setting, we consider
\[\begin{split}\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{WP}^{2}& =\bigg{(}\mathbf{q},\frac{d\mathbf{q}}{dt}\bigg{)}_{WP}=-\left( \mathbf{q},\mathbf{D}_{x}\mathbf{F}\right)_{WP}+\left(\mathbf{q},\mathbf{G}+ \mathbf{SAT}\right)_{WP}\\ &=\underbrace{-\left(\mathbf{F}_{2}^{T}P\left(D_{+}\mathbf{F}_{1 }\right)+\mathbf{F}_{1}^{T}P\left(D_{-}\mathbf{F}_{2}\right)\right)}_{\text{ by DP-SBP \eqref{eq:SBP}: $F_{10}F_{20}-F_{1N}F_{2N}$}}+\left(\mathbf{q},\mathbf{G}+\mathbf{SAT}\right)_{WP }.\end{split} \tag{39}\]
Using the DP-SBP property (33) and the definition (36) for **SAT** gives
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{WP}^{2}=\text{BT}+\left(\mathbf{q}, \mathbf{G}\right)_{WP}, \tag{40}\]
where BT is given by (38). The Cauchy-Schwarz inequality yields
\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{WP}^{2}\leq\mu\|\mathbf{q}\|_{WP}^{2}+ \|\mathbf{q}\|_{WP}\|\mathbf{G}\|_{WP}\iff\frac{d}{dt}\|\mathbf{q}\|_{WP}\leq \mu\|\mathbf{q}\|_{WP}+\|\mathbf{G}\|_{WP}, \tag{41}\]
with \(\mu=\max_{\tau\in[0,t]}\frac{BT}{\|\mathbf{q}\|_{WP}^{2}}\leq 0.\) Again, Gronwall's Lemma and Duhamel's principle give
\[\|\mathbf{q}\|_{WP}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{WP}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{WP}\bigg{)}.\]
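The energy identity (40) can also be checked numerically: for linear fluxes with \(\mathbf{G}=0\), the instantaneous rate \(\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{WP}^{2}=\mathbf{F}_{2}^{T}P\,\frac{d\mathbf{h}}{dt}+\mathbf{F}_{1}^{T}P\,\frac{d\mathbf{u}}{dt}\) must equal the boundary term BT of (38) to machine precision. The sketch below reuses the hypothetical helpers `sbp_2nd_order`, `swe_rhs` and `penalty_parameters` from the surrounding sketches.

```python
import numpy as np

g, H, U = 9.81, 1.0, 0.2                      # sub-critical mean state, Fr < 1
N, L = 50, 1.0
P, D = sbp_2nd_order(N, L)                    # degenerate DP pair D+ = D- = D
e1, eN = np.eye(N + 1)[0], np.eye(N + 1)[-1]
Pinv_e1, Pinv_eN = np.linalg.solve(P, e1), np.linalg.solve(P, eN)

alpha = beta = (1.0, np.sqrt(H / g))          # linear transmissive BC, item 3
tau = penalty_parameters(alpha, beta)

x = np.linspace(0.0, L, N + 1)
h = 0.01 * np.cos(np.pi * x)                  # perturbation variables, nonzero at the boundaries
u = 0.005 * np.cos(2.0 * np.pi * x)

dhdt, dudt = swe_rhs(h, u, D, D, Pinv_e1, Pinv_eN, alpha, beta, tau,
                     linear=True, H=H, U=U)

# left hand side of (40): (q, dq/dt)_{WP} = F2^T P dh/dt + F1^T P du/dt
F1, F2 = U * h + H * u, U * u + g * h
lhs = F2 @ P @ dhdt + F1 @ P @ dudt
# boundary term (38)
(t11, t12), (t21, t22) = tau
bc0 = alpha[0] * F1[0] + alpha[1] * F2[0]
bcN = beta[0] * F1[-1] - beta[1] * F2[-1]
BT = (F1[0] * F2[0] - F1[-1] * F2[-1]
      - t11 * F2[0] * bc0 - t12 * F2[-1] * bcN
      - t21 * F1[0] * bc0 - t22 * F1[-1] * bcN)
assert np.isclose(lhs, BT) and BT <= 0.0
```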
### Nonlinear semi-discrete analysis
We will now consider the nonlinear fluxes, with \(\mathbf{F}_{1}=\mathbf{u}\mathbf{h}\) and \(\mathbf{F}_{2}=\mathbf{u}^{2}/2+g\mathbf{h}\), and prove nonlinear stability for the semi-discrete numerical approximation (35). We will follow closely the steps taken in the last section to prove linear stability. To begin, we note that the penalty parameters \(\tau_{ij}\) given by Lemma 7 are applicable to the nonlinear boundary conditions (20). However, in this case the well-posed boundary parameters \(\alpha_{1}\geq 0\), \(\beta_{1}\geq 0\), \(\alpha_{2}\geq 0\), \(\beta_{2}\geq 0\) as well as the penalty parameters \(\tau_{ij}\) depend nonlinearly on the solution.
As in the continuous setting we will consider the nonlinear weight matrices \(W\) and \(W^{\prime}\), given by (23)-(24), and projected on the grid yielding
\[\mathbf{W}^{\prime}=\begin{bmatrix}g\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\text{diag}(\mathbf{h})\end{bmatrix},\quad\mathbf{W}=\begin{bmatrix} g\mathbf{I}&\frac{1}{2}\text{diag}(\mathbf{u})\\ \frac{1}{2}\text{diag}(\mathbf{u})&\frac{1}{2}\text{diag}(\mathbf{h})\end{bmatrix},\quad\mathbf{h}>0. \tag{42}\]
Now, we introduce the discrete nonlinearly-weighted \(L_{2}\)-norm,
\[\|\mathbf{q}\|_{W^{\prime}P}^{2}:=\mathbf{q}^{T}\mathbf{W}^{\prime}\left(I_{2} \otimes P\right)\mathbf{q}>0,\quad\forall\mathbf{q}\neq\mathbf{0}, \tag{43}\]
as well as the definition for nonlinear stability.
**Definition 4.2**.: _The semi-discrete approximation (35), with nonlinear fluxes \(\mathbf{F}_{1}=\mathbf{u}\mathbf{h}\) and \(\mathbf{F}_{2}=\mathbf{u}^{2}/2+g\mathbf{h}\), is strongly stable if the numerical solution \(\mathbf{q}\) satisfies_
\[\|\mathbf{q}\|_{W^{\prime}P}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W^{\prime}P}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{W^{\prime}P}\bigg{)},\]
_for some constants \(K>0\), \(\mu\in\mathbb{R}\)._
The following theorem proves the nonlinear equivalence of Theorem 8.
**Theorem 9**.: _Consider the semi-discrete approximation (35) with \(\mathbf{F}_{1}=\mathbf{u}\mathbf{h}\) and \(\mathbf{F}_{2}=\mathbf{u}^{2}/2+g\mathbf{h}\), at sub-critical flows with \(\text{Fr}=|u|/\sqrt{gh}<1\). If the penalty parameters \(\tau_{ij}\) are chosen as in Lemma 7 such that \(\text{BT}\leq 0\), where BT is given by (38), then the semi-discrete approximation (35) is strongly stable. That is, the numerical solution \(\mathbf{q}\) satisfies the estimate_
\[\|\mathbf{q}\|_{W^{\prime}P}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W^{\prime}P}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{W^{\prime}P}\bigg{)},\quad\mu=\max_{\tau\in[0,t]}\frac{BT}{\|\mathbf{q}\|_{W^{\prime}P}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu t})/\mu\big{)}.\]
Proof.: This proof follows analogously from the continuous setting (Theorem 6) once again, so we shall keep it brief. Multiplying (35) with \(\mathbf{q}^{T}\mathbf{W}\left(I_{2}\otimes P\right)=\left[\mathbf{F}_{2}, \mathbf{F}_{1}\right]^{T}\left(I_{2}\otimes P\right)\) from the left yields,
\[\begin{split}\left(\mathbf{q},\frac{d\mathbf{q}}{dt}\right)_{WP}&=-\left(\mathbf{q},\mathbf{D}_{x}\mathbf{F}\right)_{WP}+\left(\mathbf{q},\mathbf{G}+\mathbf{SAT}\right)_{WP}\\ &=\underbrace{-\left(\mathbf{F}_{2}^{T}P\left(D_{+}\mathbf{F}_{1}\right)+\mathbf{F}_{1}^{T}P\left(D_{-}\mathbf{F}_{2}\right)\right)}_{\text{by DP-SBP (33): }F_{10}F_{20}-F_{1N}F_{2N}}+\left(\mathbf{q},\mathbf{G}+\mathbf{SAT}\right)_{WP}.\end{split}\]

Different from the linear case, we have \(\mathbf{q}^{T}\mathbf{W}\left(I_{2}\otimes P\right)\frac{d\mathbf{q}}{dt}=\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W^{\prime}P}^{2}\), with \(\mathbf{W}^{\prime}\) given by (42). Using the DP-SBP property (33) and the definition (36) of \(\mathbf{SAT}\) then gives

\[\frac{1}{2}\frac{d}{dt}\|\mathbf{q}\|_{W^{\prime}P}^{2}=\mathrm{BT}+\left(\mathbf{q},\mathbf{G}\right)_{WP},\]

where BT is given by (38) and \(\mathrm{BT}\leq 0\) by Lemma 7. Proceeding as in the proof of Theorem 6, the forcing term is bounded in the \(\|\cdot\|_{W^{\prime}P}\) norm, and the Cauchy-Schwarz inequality, Gronwall's Lemma and Duhamel's principle yield the stated estimate.

## 5 A priori error estimates

We will now derive a priori error estimates for the semi-discrete approximation (35). Let \(\mathbf{q}_{e}\) denote the exact solution of the IBVP restricted to the grid, and define the error \(\mathcal{E}:=\mathbf{q}-\mathbf{q}_{e}\). Since the exact solution satisfies (35) up to the truncation error of the difference operators, the error satisfies the error equation

\[\frac{d\mathcal{E}}{dt}+\mathbf{D}_{x}\left(\mathbf{F}(\mathbf{q})-\mathbf{F}(\mathbf{q}_{e})\right)=\mathbf{SAT}(\mathbf{q})-\mathbf{SAT}(\mathbf{q}_{e})+\mathbb{T},\quad\mathcal{E}(0)=\mathbf{0}, \tag{47}\]

where \(\mathbb{T}\) is the truncation error, determined by the accuracy of the difference operators,

\[\mathbb{T}_{j}=\mathcal{O}\left(\Delta x^{\gamma}\right)\ \text{at the boundary points},\qquad\mathbb{T}_{j}=\mathcal{O}\left(\Delta x^{\zeta}\right)\ \text{at the interior points}. \tag{48}\]

Here \(\gamma\) and \(\zeta\) denote the order of accuracy of the boundary closure stencils and the interior stencils, respectively.
Traditional SBP operators have one-sided boundary closures and central difference stencils in the interior, with order \((\gamma,\zeta)=(p,2p)\), \(p\in\{1,2,\cdots\}\) [39, 38]; the order of accuracy of the interior stencil of a traditional SBP operator is thus always even. DP and DRP SBP operators, which satisfy the diagonal norm property but utilise upwind stencils, can support both even and odd order interior stencils. DP SBP operators with even order accuracy in the interior satisfy the same boundary-interior order of accuracy as the traditional SBP operators, \((\gamma,\zeta)=(p,2p)\), while odd order interior DP SBP operators have \((\gamma,\zeta)=(p,2p+1)\).
We will assume that the exact solution \(\mathbf{q}_{e}\) is sufficiently smooth such that the truncation error \(\mathbb{T}_{j}\) given in (48) is defined for all grid points \(j\in\{0,1,\cdots,N\}\). The following theorem proves the convergence of the semi-discrete approximation (35).
**Theorem 10**.: _Consider the error equation (47). The error satisfies the estimate,_
\[\|\mathcal{E}\|_{WP}\leq K\bigg{(}\max_{\tau\in[0,t]}\|\mathbb{T}(\tau)\|_{WP }\bigg{)},\quad\mu=\max_{\tau\in[0,t]}\frac{BT(\mathcal{E})}{\|\mathcal{E}\|_{ WP}^{2}}\leq 0,\quad K=\frac{e^{\mu t}-1}{\mu}.\]
Proof.: The proof is analogous to the proof of Theorem 8.
Theorem 10 proves the convergence of the semi-discrete approximation (35). In particular, the theorem shows that the weighted \(l_{2}\) error is bounded above by the truncation error \(\mathbb{T}\) of the operators and converges to zero at the rate \(O(\Delta x^{p})\). However, this error estimate is not sharp, and it is sub-optimal. Nearly-optimal \(O(\Delta x^{p+1/2})\) and optimal \(O(\Delta x^{p+1})\) convergence rates can be proven using the Laplace transform technique (see e.g., [40, 38, 41, 42, 43]).
**Remark**.: _It suffices to only consider a linear flux, \(\mathbf{F}=[\mathbf{F}_{1},\mathbf{F}_{2}]^{T}\) in the error equation (47). Note that for the nonlinear flux, by mean-value theorem, we have_
\[\mathbf{F}(\mathbf{q})-\mathbf{F}(\mathbf{q}_{e})=M\mathcal{E},\quad M= \begin{bmatrix}U&H\\ g&U\end{bmatrix},\]
_where \(M\) is the Jacobian of \(\mathbf{F}\) evaluated at \(\mathbf{q}_{0}=[H,U]^{T}\), where \(\mathbf{q}_{0}(x,t)\) generally depends on space and time, that is \(H(x,t)\), \(U(x,t)\) and \(W(x,t)=\begin{bmatrix}g&U\\ U&H\end{bmatrix}\). This extends the convergence analysis to the nonlinear case, under the assumption of smoothness of the exact solution \(\mathbf{q}_{e}\) and the flux \(\mathbf{F}\) in \([0,L]\), with the error estimate_
\[\|\mathcal{E}\|_{W}\leq Ke^{\mu_{0}t}\bigg{(}\max_{\tau\in[0,t]}\|\mathbb{T}( \tau)\|_{WP}\bigg{)},\quad\mu=\max_{\tau\in[0,t]}\frac{BT(\mathcal{E})}{\| \mathcal{E}\|_{WP}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu_{0}t})/\mu_{0} \big{)},\quad\mu_{0}=\mu+\alpha,\]
_where \(0\leq\alpha\leq\max_{\tau\in[0,t]}(|W_{t}|/\min(2|\lambda_{W}^{-}|,2|\lambda_{ W}^{+}|))\)._
## 6 Energy/entropy stable hyper-viscosity
The continuous and numerical analyses for nonlinear energy stability in the previous sections show that, away from boundaries, energy/entropy is conserved when \(\mathbf{G}=0\). This is the physically consistent behaviour for smooth solutions; for non-smooth solutions, energy/entropy must be dissipated across shocks and discontinuities. To minimise unwanted oscillations across shocks, we introduce here a nonlinear energy/entropy stable hyper-viscosity.
### Semi-discrete approximation with hyper-viscosity
We denote the viscosity operator \(\mathcal{K}\) and make the ansatz
\[\mathcal{K}=\mathbf{W}^{-1}\left(I_{2}\otimes P^{-1}\right)\left(I_{2}\otimes \mathcal{A}\right),\quad\mathcal{A}=\mathcal{A}^{T},\quad\mathbf{u}^{T} \mathcal{A}\mathbf{u}\leq 0,\quad\forall\mathbf{u}\in\mathbb{R}^{N+1}. \tag{49}\]
Here \(P\) is the diagonal SBP norm (32). For the linear problem \(\mathbf{W}=W\otimes\mathbf{I}\) where \(W\) is the weight matrix given by (8), and for the nonlinear problem \(\mathbf{W}\) is given by (42). Note that the matrix products above commute, that is \(\mathbf{W}^{-1}\left(I_{2}\otimes P^{-1}\right)=\left(I_{2}\otimes P^{-1} \right)\mathbf{W}^{-1}\) and \(\mathbf{W}\left(I_{2}\otimes P\right)=\left(I_{2}\otimes P\right)\mathbf{W}\). The semi-discrete approximation with hyper-viscosity is obtained by appending the (nonlinear) hyper-viscosity operator \(\mathcal{K}\) to the right hand side of (35) giving
\[\frac{d\mathbf{q}}{dt}+\mathbf{D}_{x}\mathbf{F}=\mathcal{K}\mathbf{q}+\mathbf{G }+\mathbf{SAT},\quad\mathbf{q}(0)=\mathbf{f}. \tag{50}\]
We can prove the following linear and nonlinear stability results for the semi-discrete approximation (50).
**Theorem 11**.: _Consider the semi-discrete approximation (50) where the difference operator \(\mathbf{D}_{x}\) is given by (35), the SAT vector is given by (36) and the hyper-viscosity operator \(\mathcal{K}\) defined by (49). For linear fluxes \(\mathbf{F}_{1}=U\mathbf{h}+H\mathbf{u}\) and \(\mathbf{F}_{2}=U\mathbf{u}+g\mathbf{h}\) and sub-critical flows, with \(\text{Fr}=|U|/\sqrt{gH}<1\), if the penalty parameters \(\tau_{ij}\) are chosen as in Lemma 7 such that \(\text{BT}\leq 0\), where BT is given by (38), then the semi-discrete approximation (50) is strongly stable. That is, the numerical solution \(\mathbf{q}\) satisfies the estimate_
\[\|\mathbf{q}\|_{WP}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{WP}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{WP}\bigg{)},\quad\mu=\max_{\tau\in[0,t]}\frac{BT+ \mathbf{q}^{T}\left(I_{2}\otimes\mathcal{A}\right)\mathbf{q}}{\|\mathbf{q}\|_ {WP}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu t})/\mu\big{)}.\]
**Theorem 12**.: _Consider the semi-discrete approximation (50) where the difference operator \(\mathbf{D}_{x}\) is given by (35), the SAT vector is given by (36) and the nonlinear hyper-viscosity operator \(\mathcal{K}\) defined by (49). For nonlinear fluxes \(\mathbf{F}_{1}=\mathbf{u}\mathbf{h}\) and \(\mathbf{F}_{2}=\mathbf{u}^{2}/2+g\mathbf{h}\) and sub-critical flows with \(\text{Fr}=|u|/\sqrt{gh}<1\), if the penalty parameters \(\tau_{ij}\) are chosen as in Lemma 7 such that \(\text{BT}\leq 0\), where BT is given by (38), then the semi-discrete approximation (50) is strongly stable. That is, the numerical solution \(\mathbf{q}\) satisfies the estimate_
\[\|\mathbf{q}\|_{W^{\prime}P}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W^{\prime}P }+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{W^{\prime}P}\bigg{)},\quad\mu=\max _{\tau\in[0,t]}\frac{BT+\mathbf{q}^{T}\left(I_{2}\otimes\mathcal{A}\right) \mathbf{q}}{\|\mathbf{q}\|_{W^{\prime}P}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu t })/\mu\big{)}.\]
The proofs of Theorems 11 and 12 follow from the proofs of Theorems 8 and 9.
### Hyper-viscosity operator
In order to construct the hyper-viscosity operator \(\mathcal{A}\), we consider the even order (\(2p\), \(p=1,2,\ldots\)) derivative operator
\[(-1)^{p-1}\alpha\frac{\partial^{p}}{\partial x^{p}}c(x)\frac{\partial^{p}}{ \partial x^{p}},\quad\alpha\geq 0,\quad c(x)\geq 0,\]
and approximate it with the DP SBP operators on the grid. Here \(\alpha\) is a real constant and \(c(x)\) is a non-negative smooth function that vanishes at the boundaries \(x=0,L\), together with its derivatives up to the \((p-1)\)th derivative. Figure 1 shows an example of such a smooth function \(c(x)\), which vanishes on the boundaries together with its first and second derivatives. Let \(\mathbf{c}=\text{diag}([c(x_{0}),c(x_{1}),\cdots,c(x_{N})])\); the 4th order hyper-viscosity operator is then given by
\[P^{-1}\mathcal{A}=-\alpha D_{+}D_{-}\mathbf{c}D_{+}D_{-}=-\alpha P^{-1}\left( D_{-}^{T}PD_{-}\left(\mathbf{c}P^{-1}\right)D_{-}^{T}PD_{-}\right),\quad\alpha= \delta\Delta x^{3},\quad\delta\geq 0,\]
and the 6th order hyper-viscosity operator is given by
\[P^{-1}\mathcal{A}=\alpha D_{-}D_{+}D_{-}\mathbf{c}D_{+}D_{-}D_{+}=-\alpha P^{ -1}\left(D_{+}^{T}PD_{+}P^{-1}D_{+}^{T}\left(P\mathbf{c}\right)D_{+}P^{-1}D_{+ }^{T}PD_{+}\right),\quad\alpha=\delta\Delta x^{5},\quad\delta\geq 0.\]
On the right hand sides of \(P^{-1}\mathcal{A}\), we have eliminated boundary contributions using the fact that the smooth function \(c(x)\) and its first and second derivatives vanish on the boundaries, that is, \(c(x)=c^{\prime}(x)=c^{\prime\prime}(x)=0\) at \(x=0,L\). Note that since \(P\) and \(\mathbf{c}\) are diagonal matrices, the products \(\mathbf{c}P^{-1}\) and \(\mathbf{c}P\) are also diagonal. Therefore, for \(\alpha\geq 0\) the dissipation operators \(\mathcal{A}=-\alpha\left(D_{-}^{T}PD_{-}\left(\mathbf{c}P^{-1}\right)D_{-}^{T}PD_{-}\right)\) and \(\mathcal{A}=-\alpha\left(D_{+}^{T}PD_{+}P^{-1}D_{+}^{T}\left(P\mathbf{c}\right)D_{+}P^{-1}D_{+}^{T}PD_{+}\right)\) are symmetric and negative semi-definite. Similarly to the above, higher order hyper-viscosity operators can also be derived using the DP SBP operators; however, we then require that the corresponding higher derivatives of \(c(x)\) vanish at the boundaries.
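To make the symmetry and negative semi-definiteness argument concrete, the following minimal sketch assembles the 4th order dissipation matrix and checks both properties numerically. It does not use the paper's actual DP/DRP operators: a simple first-order backward difference, a diagonal norm \(P=\Delta x\,I\), and the weight \(c(x)=\sin^{4}(\pi x/L)\) are assumed stand-ins chosen only for illustration; the algebraic structure \(\mathcal{A}=-\alpha\,B\,(\mathbf{c}P^{-1})\,B\) with \(B=D_{-}^{T}PD_{-}\) guarantees the result for any such choice.

```python
import numpy as np

# Check that A = -alpha * D_-^T P D_- (c P^{-1}) D_-^T P D_- is symmetric and
# negative semi-definite for a stand-in D_-, diagonal P > 0 and c >= 0.
N, L = 50, 10.0
dx = L / N
x = np.linspace(0.0, L, N + 1)

Dm = (np.eye(N + 1) - np.eye(N + 1, k=-1)) / dx   # stand-in backward difference
Dm[0, :] = Dm[1, :]                               # crude one-sided closure at the left boundary
P = dx * np.eye(N + 1)                            # stand-in diagonal SBP norm

c = np.diag(np.sin(np.pi * x / L) ** 4)           # smooth weight vanishing at x = 0, L
alpha = 0.1 * dx ** 3                             # alpha = delta * dx^3 with delta = 0.1

B = Dm.T @ P @ Dm                                 # symmetric by construction
A = -alpha * B @ (c @ np.linalg.inv(P)) @ B

print("symmetric:", np.allclose(A, A.T))
print("largest eigenvalue (<= 0 up to round-off):", np.linalg.eigvalsh(A).max())
```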
### Discrete eigen-spectrum
To verify the linear stability analysis we compute the eigenvalues of the semi-discrete spatial evolution operator, including the SAT penalty terms which enforce the boundary conditions. We consider specifically mass flux, velocity flux and transmissive boundary conditions, with hyper-viscosity (\(\alpha>0\)) and without hyper-viscosity (\(\alpha=0\)). The numerical eigenvalues are displayed in Figure 2. Note that there are no eigenvalues with positive real parts, which verifies the stability proofs of Theorems 8 and 11. For the mass flux BC and the velocity flux BC without hyper-viscosity, the eigenvalues of the spatial evolution operators are purely imaginary, that is, they have zero real parts. The addition of hyper-viscosity moves the eigenvalues into the negative half of the complex plane. Note, however, that with hyper-viscosity the magnitude of the real parts of the eigenvalues is several orders of magnitude (\(\sim 10^{-6}\)) smaller than that of the imaginary parts. This also implies that the addition of hyper-viscosity has a negligible impact on the stable time-step for explicit numerical time-integration of the semi-discrete approximation (35).
**Remark**.: _Theorems 8, 9, 11 and 12 for linear and nonlinear strong stability of the semi-discrete approximation (35) and the convergence result Theorem 10 hold for standard DP upwind and \(\alpha\)-DRP operators closed with periodic FD stencils, where \(\mathbf{SAT}\equiv 0\)._
In the next section we present numerical experiments in 1D verifying the theory derived in this paper. We also present 2D numerical simulations showing the extension of the method to the 2D nonlinear rotating shallow water equations.
## 7 Numerical experiments
In this section we present detailed numerical experiments verifying the theoretical analysis performed in the previous sections. In particular, the experiments are designed to verify the stability, accuracy and convergence properties of the method, including verification of the nonlinear transmissive boundary conditions. We also present canonical test problems such as the dam break with wet domain, and the well-balanced test called the lake at rest with non-smooth and smooth immersed bump. A numerical example in 2D within a doubly periodic domain is presented, showing the extension of the method to 2D and the effectiveness of the hyper-viscosity operator for simulating merging vortices.
For time discretisation, we use the classical fourth order accurate explicit Runge-Kutta method. Throughout the 1D numerical experiments, we set the global time step
\[\Delta t=\frac{\text{CFL}\times\Delta x}{\max_{x}\left(|U|+\sqrt{gH}\right)}, \tag{51}\]
where \(\Delta x=L/N\), \(N+1\) is the number of grid points and \(\text{CFL}=0.3\). We use natural units, with \(g=9.81\), \(H=1\) and \(U=-0.3\sqrt{gH}\); here \(|U|+\sqrt{gH}\) is the characteristic wave speed of the linear problem, while for the nonlinear problem we use \(U=u(t=0,x)\) and \(H=h(t=0,x)\).
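A small sketch of the time-step rule (51) is given below. The parameter values \(g\), \(H\), \(U\) and CFL are taken from the text, while the grid size and the nonlinear initial state are illustrative assumptions.

```python
import numpy as np

g, H = 9.81, 1.0
U = -0.3 * np.sqrt(g * H)
L, N, CFL = 10.0, 200, 0.3
dx = L / N

# Linear problem: constant characteristic speed |U| + sqrt(gH).
dt_linear = CFL * dx / (abs(U) + np.sqrt(g * H))

# Nonlinear problem: use initial data u(0, x), h(0, x) on the grid (illustrative here).
x = np.linspace(0.0, L, N + 1)
u0 = np.exp(-(x - 0.5 * L) ** 2)
h0 = u0 + 10.0
dt_nonlinear = CFL * dx / np.max(np.abs(u0) + np.sqrt(g * h0))
print(dt_linear, dt_nonlinear)
```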
### Numerical experiments in 1D
We first study the accuracy of the numerical method via the method of manufactured solutions (MMS), before moving to canonical test problems such as the dam break problem with wet domain, and the well-balanced test called the lake at rest with non-smooth and smooth immersed bump, as proposed in [44].
#### 7.1.1 Method of manufactured solutions
In this section, we use MMS to verify the convergence of our numerical scheme. We consider both linear and nonlinear problems. The computational domain is \(x\in[0,L]\) with length \(L=10\). We force the system with the exact solution
\[u_{e}(x,t)=\exp\bigl{(}-(x-x_{0}-c_{s}t)^{2}\bigr{)},\quad h_{e}(x,t)=u_{e}(x,t )+10,\quad x_{0}=0.5L,\quad c_{s}=\sqrt{gH}, \tag{52}\]
while enforcing the linear mass flux BC \(F_{1}(x,t):=Hu(x,t)+Uh(x,t)=Hu_{e}(x,t)+Uh_{e}(x,t)\) at \(x=0,L\) for the linear problem and the nonlinear mass flux BC \(F_{1}(x,t):=u(x,t)h(x,t)=u_{e}(x,t)h_{e}(x,t)\) at \(x=0,L\) for the
Figure 1: Smooth boxcar function used for the hyper-viscosity operator.
Figure 2: Numerical eigenspectra of the DP operators of order \(6\) with \(N=501\), \(g=H=1\), for the three main BCs considered using SAT terms: mass flux, velocity flux and transmissive BCs. The first two are energy conserving without hyperviscosity and maintain zero real parts up to machine error, while adding hyperviscosity and utilising transmissive BCs should dissipate energy, which yields non-positive real parts. All spectra clearly indicate strict stability of the eigensystem.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline DP, order 4, MMS & \multicolumn{6}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ \(N\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -6.5665 & -6.5454 & - & - & -5.5362 & -5.7156 & - & - \\
81 & -10.8428 & -10.8198 & 4.2763 & 4.2743 & -8.9666 & -9.0302 & 3.4305 & 3.3146 \\
161 & -14.8953 & -14.8755 & 4.0525 & 4.0557 & -12.8307 & -13.0011 & 3.9284 & 4.0598 \\
321 & -18.9189 & -18.9011 & 4.0236 & 4.0256 & -16.7592 & -17.0609 & 3.9284 & 4.0598 \\
641 & -22.9305 & -22.9139 & 4.0116 & 4.0127 & -20.7279 & -21.0780 & 3.9687 & 4.0171 \\ \hline \hline DRP, order 4, MMS & \multicolumn{6}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ \(N\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -4.2724 & -4.7107 & - & - & -4.2123 & -4.3524 & - & - \\
81 & -10.7031 & -10.7447 & 6.4307 & 6.0341 & -9.3048 & -8.5032 & 5.0925 & 4.1508 \\
161 & -10.7031 & -10.7447 & 4.4595 & 4.3913 & -12.9814 & -12.8435 & 3.6765 & 4.3403 \\
321 & -15.1626 & -15.1360 & 4.4595 & 4.3913 & -16.8969 & -17.1903 & 3.9155 & 4.3468 \\
641 & -23.1426 & -23.1243 & 3.9983 & 4.0011 & -20.8564 & -21.2958 & 3.9596 & 4.1054 \\ \hline \hline DP, order 6, MMS & \multicolumn{6}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ \(N\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -5.9761 & -5.0590 & - & - & -5.6406 & -4.7045 & - & - \\
81 & -10.5185 & -10.4705 & 4.5424 & 5.4115 & -10.2011 & -10.5153 & 4.5605 & 5.8108 \\
161 & -15.5532 & -15.2167 & 5.0348 & 4.7461 & -15.1654 & -14.7981 & 4.9643 & 4.2828 \\
321 & -20.1258 & -19.7233 & 4.5726 & 4.5067 & -19.7564 & -19.2424 & 4.5910 & 4.4443 \\
641 & -24.7807 & -24.2329 & 4.6549 & 4.5095 & -24.4788 & -23.7511 & 4.7224 & 4.5087 \\ \hline \hline DRP, order 6, MMS & \multicolumn{6}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ \(N\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -3.9230 & -5.2152 & - & - & -3.1407 & -4.1012 & - & - \\
81 & -8.3058 & -9.0906 & 4.3827 & 3.8754 & -8.0456 & -8.6623 & 4.9049 & 4.5611 \\
161 & -14.3327 & -14.1198 & 6.0270 & 5.0292 & -13.7421 & -13.7835 & 5.6965 & 5.1212 \\
321 & -18.9399 & -18.3042 & 4.6072 & 4.1844 & -18.3152 & -17.6874 & 4.5731 & 3.9039 \\
641 & -23.4452 & -22.7525 & 4.5053 & 4.4483 & -22.8357 & -22.1028 & 4.5205 & 4.4154 \\ \hline \hline \end{tabular}
\end{table}
Table 1: log \(2\)\(L^{2}\) norm errors and convergence rate of various finite difference operators using MMS, for the primitive variables \(u\),\(h\).
nonlinear problem. Note that \(Hu_{e}(x,t)+Uh_{e}(x,t)\) and \(u_{e}(x,t)h_{e}(x,t)\) yield nonzero boundary data. We run the simulations until the final time \(t=0.5\).
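The manufactured solution and the corresponding mass-flux boundary data can be evaluated with the short helper below. This is a sketch for illustration only (the MMS source terms that force the scheme are not shown), using the parameter values stated in the text.

```python
import numpy as np

g, H = 9.81, 1.0
U = -0.3 * np.sqrt(g * H)
L, x0, cs = 10.0, 5.0, np.sqrt(g * H)

def u_exact(x, t):
    return np.exp(-(x - x0 - cs * t) ** 2)

def h_exact(x, t):
    return u_exact(x, t) + 10.0

def mass_flux_data(x, t, linear=True):
    # Boundary data used in the SATs: H u_e + U h_e (linear) or u_e h_e (nonlinear).
    ue, he = u_exact(x, t), h_exact(x, t)
    return H * ue + U * he if linear else ue * he

# Nonzero boundary data at x = 0 and x = L at the final time t = 0.5:
print(mass_flux_data(0.0, 0.5), mass_flux_data(L, 0.5, linear=False))
```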
MMS without hyper-viscosity. We first consider grid convergence tests without hyper-viscosity and proceed later to convergence tests with hyper-viscosity. The numerical errors and convergence rates for the different operators, without hyper-viscosity, for the linear and nonlinear cases are reported in Table 1. As has already been noted in prior works [36, 45, 16], the diagonal norm DP upwind operators have "higher than expected" convergence rates. Our experiments show that the DRP operators exhibit the same higher-than-expected convergence rates, as they also satisfy the upwind property. We do not yet understand the precise reason for this; further analysis is needed to unravel the super-convergence properties exhibited by the schemes. For additional convergence studies using traditional SBP operators and odd-order upwind DP and DRP operators, see Appendix A.
MMS with hyper-viscosity. We now assess the accuracy of the method with hyper-viscosity for linear and nonlinear problems. To do this we use MMS and perform grid convergence studies for smooth solutions. We set \(\alpha=\delta\Delta x^{p-1}\) and \(\delta=0.1\), where \(p=4,6\) for the 4th and 6th order derivative hyper-viscosity operators. For smooth solutions the truncation error of the hyper-viscosity operators is \(\mathbb{T}\sim\Delta x^{p-1}+\mathcal{O}(\Delta x^{p})\), so we expect the convergence rate of the errors to be at most \(\mathcal{O}(\Delta x^{p-1})\). Table 2 shows the errors and convergence rates for DP upwind and DRP operators with interior accuracy of orders 4 and 6, for linear and nonlinear problems. Our DP and DRP schemes thus remain high order accurate with hyper-viscosity for smooth solutions, for both linear and nonlinear problems.
#### 7.1.2 Lake at rest with immersed bump
We consider the canonical 1D lake at rest problem, modeled by the nonlinear SWE (2) with bottom topography (immersed bump), as described in [44]. The length of the domain is \(L=25\) and the bottom topography embedded
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline DP order 4, hyper-viscosity, MMS & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ \(N\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(\log_{2}|\mathrm{error}|_{2,A}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(\log_{2}|\mathrm{error}|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -7.1027 & -6.2709 & 0 & 0 & -6.7094 & -5.9420 & 0 & 0 \\
81 & -11.3750 & -11.0446 & 4.2723 & 4.7738 & -11.1816 & -10.8197 & 4.4722 & 4.8777 \\
161 & -15.8828 & -15.6486 & 4.5078 & 4.6040 & -15.4499 & -15.0417 & 4.2683 & 4.2220 \\
321 & -19.4923 & -19.4737 & 3.6095 & 3.8251 & -18.8463 & -18.3501 & 3.3964 & 3.3084 \\ \hline \hline DRP order 4, hyper-viscosity, MMS & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{\(\log_{2}|\mathrm{error}|_{2,u}\)} & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -7.1027 & -6.2709 & 0 & 0 & -3.8329 & -4.01740 & 0 & 0 \\
81 & -11.3750 & -11.0446 & 4.2723 & 4.7738 & -8.7963 & -10.9147 & 4.9633 & 6.8973 \\
161 & -15.8828 & -15.6486 & 4.5078 & 4.6040 & -14.2213 & -14.0322 & 5.4250 & 3.1175 \\
321 & -19.4923 & -19.4737 & 3.6095 & 3.8251 & -18.5059 & -17.9750 & 4.2846 & 3.9429 \\ \hline \hline DP order 6, hyper-viscosity, MMS & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{\(\log_{2}|\mathrm{error}|_{2,u}\)} & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(q_{u}\) & \(q_{h}\) \\
1 & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -7.0942 & -6.2723 & 0 & 0 & -6.6884 & -5.9477 & 0 & 0 \\
81 & -11.4008 & -11.0658 & 4.2723 & 4.7738 & -11.2279 & -10.8404 & 4.5396 & 4.8927 \\
161 & -16.2131 & -15.8455 & 4.5078 & 4.6040 & -15.9767 & -15.4594 & 4.7487 & 4.6190 \\
321 & -20.9717 & -20.5254 & 4.7586 & 4.6798 & -20.6898 & -20.1138 & 4.7131 & 4.6544 \\ \hline \hline DRP order 6, hyper-viscosity, MMS & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{Linear} & \multicolumn{6}{c}{nonlinear} \\ & \multicolumn{3}{c}{\(\log_{2}|\mathrm{error}|_{2,u}\)} & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(\log_{2}|\mathrm{error}|_{2,u}\) & \(q_{u}\) & \(q_{h}\) \\
1 & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -4.0209 & -4.7415 & 0 & 0 & -3.8306 & -4.0142 & 0 & 0 \\
81 & -9.2636 & -10.1468 & 5.0607 & 5.4053 & -8.7986 & -11.1187 & 4.4968 & 7.1045 \\
161 & -14.9135 & -14.6628 & 3.8338 & 3.9573 & -14.3226 & -14.0398 & 5.5251 & 2.9211 \\
321 & -19.7461 & -19.0091 & 4.8326 & 4.3463 & -19.1884 & -18.4021 & 4.8648 & 4.3623 \\ \hline \hline \end{tabular}
\end{table}
Table 2: log \(2\)\(L^{2}\) norm errors and convergence rate of the DP and DRP operators of order \(p=4,6\) interior operators, with hyper-viscosity pre-factor \(\delta=0.1\).
below the fluid is modelled as:
\[b(x)=\begin{cases}0.2-0.05(x-10)^{2}&\text{ if }8<x<12,\\ 0&\text{ otherwise},\end{cases} \tag{53}\]
The bottom topography \(b(x)\) enters the nonlinear SWE (2) through its derivative
\[b^{\prime}(x)=\begin{cases}-0.1(x-10)&\text{ if }8<x<12,\\ 0&\text{ otherwise},\end{cases} \tag{54}\]
which appears as a source term in the momentum equation. Note that \(b^{\prime}(x)\) is discontinuous at \(x=\{8,12\}\).
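A short sketch of the lake-at-rest setup, using the domain length and topography from (53)–(54), is given below. The grid itself is an illustrative assumption and not the authors' discretisation.

```python
import numpy as np

L, N = 25.0, 201
x = np.linspace(0.0, L, N)   # illustrative grid

def b(x):
    return np.where((x > 8.0) & (x < 12.0), 0.2 - 0.05 * (x - 10.0) ** 2, 0.0)

def db(x):
    return np.where((x > 8.0) & (x < 12.0), -0.1 * (x - 10.0), 0.0)

h0 = 0.5 - b(x)              # lake at rest: h + b = 0.5
u0 = np.zeros_like(x)
```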
No perturbation. We will now verify the well-balanced property of the numerical method. We prescribe the initial condition
\[h(x,0)+b=0.5,\quad u(x,0)=0,\]
throughout the domain. As the initial conditions solve a steady state problem, a good numerical scheme should maintain the prescribed initial condition for all time. Following [16], we prescribe periodic BCs at \(x=0,L\), with periodic finite difference stencils. No hyper-viscosity is used for this test case, despite the fact that \(b(x)\) is non-smooth. As shown in Fig. 3, our scheme maintains both heights and velocities at the analytical value up to machine error. This is independent of the grid resolution, and confirms the well-balanced property of the numerical scheme.
\begin{table}
\begin{tabular}{c c c c} \hline \hline FD, order 6, Lake at Rest & & & \\ & \(\log_{10}\|\text{error}\|_{2,u}\) & \(\log_{10}\|\text{error}\|_{2,u}\) & \(\log_{10}\|\text{error}\|_{2,u}\) \\ \(N\) & (SBP) & (DP) & (DRP) \\ \hline
51 & -14.6270 & -14.0611 & -15.4622 \\
101 & -14.3264 & -13.7623 & -15.1600 \\
151 & -14.1503 & -13.5869 & -14.9836 \\
201 & -14.0253 & -12.7646 & -14.8581 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Log10 error of the lake-at-rest velocity profile; all operators achieve values near machine precision, regardless of grid resolution.
Figure 3: Plot of the height for the canonical lake at rest problem at end time \(t=5\), where the topography \(b(x)\) is defined via (53) and is non-smooth at \(x=8\) and \(x=12\). The simulation is initialised with \(h+b=0.5\), \(N=201\), \(L=25\). Our numerical scheme maintains the initial condition up to machine precision.
Small perturbations. The aim of this experiment is to verify the implementation and accuracy of the nonlinear transmissive boundary conditions (21). As above we consider the 1D lake at rest problem, with zero initial condition for velocity, \(u(x,0)=0\), and we add small perturbations to the initial condition for height,
\[h(x,0)=0.5-b(x)+\delta b(x),\quad\delta b(x)=0.1\max_{x}|b(x)|\times e^{-\frac{(x -10)^{2}}{0.3}}.\]
The perturbation will generate variations in both height and velocity which will propagate through the domain. However, the variations introduced by the perturbation are expected to leave the domain through the transmissive boundaries without reflections.
In Fig. 4 we show the time evolution of the perturbed velocity and height profiles. Clearly the perturbations leave the domain without reflections, and we recover the steady state solutions. We have also performed grid convergence studies, see Table 4 for the convergence rates of the numerical error. The errors converge to zero at an optimal rate for order 6 FD operators.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \multicolumn{7}{c}{FD, order 6, Lake at Rest, smooth perturbations} \\ & \multicolumn{2}{c}{(SBP)} & \multicolumn{2}{c}{(DP)} & \multicolumn{2}{c}{(DRP)} \\ \(N\) & \(q_{u}\) & \(q_{h}\) & \(q_{u}\) & \(q_{h}\) & \(q_{u}\) & \(q_{h}\) \\ \hline 101 & - & - & - & - & - & - \\ 201 & 2.0126 & 1.8438 & 2.6958 & 2.8914 & 2.2873 & 1.6657 \\ 401 & 2.9720 & 2.9958 & 3.6705 & 3.5118 & 2.7088 & 2.8301 \\ 801 & 3.9608 & 3.8156 & 5.5908 & 5.5869 & 3.9935 & 4.1396 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Convergence rates for the lake at rest problem with small (10%) perturbations on \(h\) and smooth topography of Gaussian form (see Section 7.1.2). Satisfactory convergence rates are achieved with smooth \(b(x)\).
Figure 4: Time evolution of the lake at rest profile with 10% perturbations on \(h\) and transmissive BCs. The waves exit through the left and right boundaries, and the flow returns to the steady lake-at-rest state.
#### 7.1.3 Dam break with wet domain
Next, we investigate the efficacy of the method in the presence of nonlinear shocks. We consider the canonical dam break problem [44] in a wet domain with the following initial conditions:
\[h(x,t=0)=\begin{cases}h_{l}&\text{ if }0\leq x\leq x_{0},\\ h_{r}&\text{ if }x_{0}<x\leq L,\end{cases}\qquad u(x,t=0)=0, \tag{55}\]
where we use \(x_{0}=5\), \(h_{l}=1\), \(h_{r}=0.5\) and \(L=10\). Note that the initial condition for the water height is discontinuous at \(x=x_{0}\). The exact solution consists of a right-going shock front and a left-going rarefaction fan. Using the method of characteristics, the exact solution has the closed form expression [44]
\[h(t,x)=\begin{cases}h_{l}&\text{ if }x\leq x_{A}(t),\\ \frac{4}{9g}\left(\sqrt{gh_{l}}-\frac{x-x_{0}}{2t}\right)^{2}&\text{ if }x_{A}(t)\leq x\leq x_{B}(t),\\ \frac{c_{m}^{2}}{g}&\text{ if }x_{B}(t)\leq x\leq x_{C}(t),\\ h_{r}&\text{ if }x_{C}(t)\leq x,\end{cases}\qquad u(t,x)=\begin{cases}0&\text{ if }x\leq x_{A}(t),\\ \frac{2}{3}\left(\frac{x-x_{0}}{t}+\sqrt{gh_{l}}\right)&\text{ if }x_{A}(t)\leq x\leq x_{B}(t),\\ 2\left(\sqrt{gh_{l}}-c_{m}\right)&\text{ if }x_{B}(t)\leq x\leq x_{C}(t),\\ 0&\text{ if }x_{C}(t)\leq x,\end{cases}\]
where subscripts \(l\) and \(r\) denote the left and right states, and \(c_{m}\) is defined as the solution of the algebraic equation
\[-8gh_{r}c_{m}^{2}\left(\sqrt{gh_{l}}-c_{m}\right)^{2}+\left({c_{m}}^{2}-gh_{r} \right)^{2}\left({c_{m}}^{2}+gh_{r}\right)=0\]
and the coordinates, \(x_{A},x_{B},x_{C}\) are defined by
\[x_{A}(t)=x_{0}-t\sqrt{gh_{l}},\quad x_{B}(t)=x_{0}+t\left(2\sqrt{gh_{l}}-3c_{ m}\right)\text{ and }x_{C}(t)=x_{0}+t\frac{2c_{m}^{2}\left(\sqrt{gh_{l}}-c_{m}\right)}{c_{m}^{2}-gh_{r}}. \tag{56}\]
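A sketch of how this exact solution can be evaluated is given below; it is not the authors' code. The middle wave speed \(c_{m}\) is obtained with a bracketing root finder on the algebraic equation above (the physical root lies between \(\sqrt{gh_{r}}\) and \(\sqrt{gh_{l}}\)), and \(h\) and \(u\) are then assembled piecewise.

```python
import numpy as np
from scipy.optimize import brentq

g, hl, hr, x0 = 9.81, 1.0, 0.5, 5.0

def cm_equation(cm):
    return (-8.0 * g * hr * cm**2 * (np.sqrt(g * hl) - cm) ** 2
            + (cm**2 - g * hr) ** 2 * (cm**2 + g * hr))

# The physical root satisfies sqrt(g*hr) < c_m < sqrt(g*hl).
cm = brentq(cm_equation, np.sqrt(g * hr), np.sqrt(g * hl))

def exact(t, x):
    xA = x0 - t * np.sqrt(g * hl)
    xB = x0 + t * (2.0 * np.sqrt(g * hl) - 3.0 * cm)
    xC = x0 + t * 2.0 * cm**2 * (np.sqrt(g * hl) - cm) / (cm**2 - g * hr)
    h = np.select([x <= xA, x <= xB, x <= xC],
                  [hl,
                   4.0 / (9.0 * g) * (np.sqrt(g * hl) - (x - x0) / (2.0 * t)) ** 2,
                   cm**2 / g],
                  default=hr)
    u = np.select([x <= xA, x <= xB, x <= xC],
                  [0.0,
                   2.0 / 3.0 * ((x - x0) / t + np.sqrt(g * hl)),
                   2.0 * (np.sqrt(g * hl) - cm)],
                  default=0.0)
    return h, u

h_ex, u_ex = exact(1.0, np.linspace(0.0, 10.0, 1001))
```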
To perform numerical simulations we close the boundaries with transmissive boundary conditions and discretise the computational domain with \(N=1001\) grid points. We run the simulation until the final time \(t=2\) using the 6th order accurate DP SBP operators, with hyper-viscosity \(\delta=0.1\) and without hyper-viscosity \(\delta=0\). In Figure 5 we compare numerical solutions with the exact solutions at \(t=1000\Delta t\). Note that the numerical solution is stable and the shock speed and the rarefaction fan are well resolved by our high order numerical method. However, without hyper-viscosity there are spurious oscillations from the shock front which pollute the numerical solutions. The addition of the hyper-viscosity, with \(\delta=0.1\), eliminates the oscillations without destroying the high order accuracy of the solution in smooth regions. Furthermore, in Figs. 6 and 7 we show the time evolution of the height and velocity, with hyper-viscosity. It is clearly demonstrated that the numerical and analytical solutions agree well, despite the presence of shocks and the highly nonlinear nature of the flow problem.
Figure 5: Time evolution of the height and velocity profiles of the dam break with wet domain problem, with the overlaid analytical solution, using the DP order 6 operators, \(N=501\); the snapshot shown is at \(t=1000\Delta t\), comparing the solutions with and without hyper-viscosity. The spurious oscillations are clearly much stronger in the solution without hyper-viscosity.
### 2D merging vortex problem
As an extension of the 1D numerical experiments to 2D, we consider the "merging vortex problem" [46]. This problem is modelled by the 2D rotating shallow water equations (1) in the spatial domain \(\Omega=(0,2\pi]\times(0,2\pi]\) with periodic boundary conditions. The initial conditions are a pair of Gaussian vortices with the incompressible stream function
\[\psi=e^{-5\left((y-\pi)^{2}+(x-2.6\pi/3)^{2}\right)}+e^{-5\left((y-\pi)^{2}+(x- 3.5\pi/3)^{2}\right)}. \tag{57}\]
The initial conditions for the velocity field are defined through \(\mathbf{u}(x,y,0)=\nabla^{\perp}\psi\), \(\nabla^{\perp}=(-\partial_{y},\partial_{x})\), and the initial condition for the water height \(h\) is obtained from linear geostrophic balance, \(f\mathbf{u}^{\perp}(x,y,0)+g\nabla h(x,y,0)=0\implies h(x,y,0)=H+(f/g)\,\psi(x,y)\), with \(f=g=H=8\).
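The initial state can be assembled as in the sketch below, using analytic derivatives of the Gaussian stream function (57). The grid layout (equispaced, periodic, without the repeated endpoint) is an assumption for illustration and need not match the authors' discretisation.

```python
import numpy as np

N = 251
f, g, H = 8.0, 8.0, 8.0
xv = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y = np.meshgrid(xv, xv, indexing="ij")

centers = [2.6 * np.pi / 3.0, 3.5 * np.pi / 3.0]
psi = sum(np.exp(-5.0 * ((Y - np.pi) ** 2 + (X - xc) ** 2)) for xc in centers)
dpsi_dx = sum(-10.0 * (X - xc) * np.exp(-5.0 * ((Y - np.pi) ** 2 + (X - xc) ** 2))
              for xc in centers)
dpsi_dy = -10.0 * (Y - np.pi) * psi   # both Gaussians share the same y-center

u0 = -dpsi_dy                  # u = -psi_y (first component of grad-perp)
v0 = dpsi_dx                   # v =  psi_x (second component of grad-perp)
h0 = H + (f / g) * psi         # linear geostrophic balance
```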
The continuous 2D rotating shallow water equations (1) preserve infinitely many invariants. However, any discrete approximation can (approximately) preserve only a finite number of them [22]. For the numerical approximation we select a subset of the invariants, namely: 1) total energy/entropy \(E(t)\), 2) total enstrophy \(E_{\mathrm{s}}(t)\), 3) total vorticity \(W(t)\) and 4) total mass \(M(t)\), defined by
\[E(t)=\int_{\Omega}e\,\mathrm{d}x\mathrm{d}y,\quad E_{\mathrm{s}}(t)=\int_{\Omega}\omega^{2}h\,\mathrm{d}x\mathrm{d}y,\quad W(t)=\int_{\Omega}\omega\,\mathrm{d}x\mathrm{d}y,\quad M(t)=\int_{\Omega}h\,\mathrm{d}x\mathrm{d}y, \tag{58}\]
which the numerical method should accurately preserve. Here, \(e=\frac{1}{2}\left(gh^{2}+hu^{2}+hv^{2}\right)\) is the elemental energy/entropy and \(\omega\) is the absolute vorticity defined in (1). The energy/entropy \(E(t)\) is critical for nonlinear stability. The enstrophy \(E_{\mathrm{s}}\) is a higher order moment and bounds the derivatives of the solution. More importantly the enstrophy
Figure 6: Time evolution of the height profile of the dam break with wet domain problem, with the overlaid analytical solution using the DRP order 6 operators, \(N=1001\), with final time \(t=1\). The hyperviscosity parameter is set to \(\delta=0.1\)
Figure 7: Same as Fig. 6 but for the time evolution of the velocity profile.
is critical for controlling grid-scale errors and ensuring that high frequency oscillations do not ruin the accuracy of the solution.
The numerical approximations of the invariants are given by
\[\mathcal{E}(t)=\sum_{ij}^{N}e_{ij}\Delta x\Delta y,\quad\mathcal{E}_{\mathrm{s}}( t)=\sum_{ij}^{N}\omega_{ij}^{2}h_{ij}\Delta x\Delta y,\quad\mathcal{W}(t)=\sum_{ ij}^{N}\omega_{ij}\Delta x\Delta y,\quad\mathcal{M}(t)=\sum_{ij}^{N}h_{ij} \Delta x\Delta y, \tag{59}\]
where \(e_{ij}\) and \(\mathbf{\omega}\) are defined in (63) and (61), respectively, and where \(i,j\) are indices corresponding to grid points. We define the relative changes in the discrete invariants
\[\Delta_{r}\mathcal{E}(t)=\frac{\mathcal{E}(t)-\mathcal{E}(0)}{\mathcal{E}(0)},\quad\Delta_{r}\mathcal{E}_{s}(t)=\frac{\mathcal{E}_{s}(t)-\mathcal{E}_{s}(0 )}{\mathcal{E}_{s}(0)},\quad\Delta_{r}\mathcal{W}(t)=\frac{\mathcal{W}(t)- \mathcal{W}(0)}{\mathcal{W}(0)},\quad\Delta_{r}\mathcal{M}(t)=\frac{\mathcal{M }(t)-\mathcal{M}(0)}{\mathcal{M}(0)}. \tag{60}\]
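As a concrete illustration of (59) and (60), the following helper accumulates the four discrete invariants and their relative changes. It is a sketch with assumed array inputs `h`, `u`, `v`, `w` and grid spacings `dx`, `dy`; the vorticity array itself is assumed to come from the 2D operators of Appendix B.

```python
import numpy as np

def invariants(h, u, v, w, dx, dy, g=8.0):
    """Discrete total energy, enstrophy, vorticity and mass as in (59)."""
    e = 0.5 * (g * h**2 + h * u**2 + h * v**2)   # elemental energy/entropy, cf. (63)
    E = np.sum(e) * dx * dy
    Es = np.sum(w**2 * h) * dx * dy
    W = np.sum(w) * dx * dy
    M = np.sum(h) * dx * dy
    return E, Es, W, M

def relative_change(value_t, value_0):
    """Relative change of a discrete invariant as in (60)."""
    return (value_t - value_0) / value_0
```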
The semi-discrete approximation of the 2D rotating shallow water equations (1) using the dual-pairing SBP framework is derived in Appendix B. In Theorem 13, we prove that the semi-discrete approximation conserves the discrete energy/entropy without hyper-viscosity \(\delta=0\) and dissipates energy/entropy with hyper-viscosity \(\delta>0\). Semi-discrete conservation of total mass and vorticity also follow, but the proofs are not given.
As before, the semi-discrete approximations are evolved in time using the classical fourth order accurate explicit Runge-Kutta method with \(\mathrm{CFL}=0.1\). In Fig. 8 we show snapshots of potential vorticity using DP SBP operators of order 4 on a uniform grid of \(251^{2}\) grid points. The left panel shows the snapshots of the numerical solution without hyper-viscosity (\(\delta=0\)) and the right panel shows the snapshots of the numerical solution with hyper-viscosity (\(\delta=0.5\)). It is noteworthy that in both cases the numerical solutions remain bounded throughout the simulation. This is consistent with the energy stability proof of Theorem 13. However, without hyper-viscosity (\(\delta=0\), left panel), the structure in the solution is completely destroyed by spurious numerical artefacts. The addition of hyper-viscosity to the scheme (\(\delta=0.5\), right panel) eliminates the spurious wave modes which can destroy the accuracy of the numerical solution. With hyper-viscosity the numerical method preserves approximate geostrophic balance and the solution remains accurate throughout the simulation. The numerical solutions are comparable to the results given in the literature [46] using compatible finite element methods. We also note that in [46], in order to tame spurious numerical oscillations, the authors utilised the so-called Artificial Potential Vorticity Method (APVM) for the finite element method. In general, for nonlinear problems, although energy/entropy conservation ensures the boundedness of the numerical solution, it does not guarantee convergence of the numerical errors. A suitable amount of numerical dissipation is necessary to control high frequency errors.
We also compute the relative change in the discrete invariants (59) as defined in (60). The time evolution of the relative change in the discrete invariants is shown in Figs. 9 and 10 for DP and DRP SBP operators of order 4 and 6. Note that total mass, energy and vorticity are conserved up to the grid resolution. Without hyper-viscosity the enstrophy grows linearly, leading to the growth of high frequency errors which pollute the solutions. With the addition of hyper-viscosity, the change in enstrophy is relatively smaller in magnitude and decays linearly with time.
Figure 8: Potential vorticity, \(\omega/h\) for the 2D merging vortex problem (57) using DP operators of order 4. The left panels show the solutions without hyperviscosity, while the right show those with hyperviscosity, where \(\delta=0.5\). Snapshots are at \(t=0,0.3,0.7,1.5,2.0,3.0,3.5,3.9\).
Figure 9: Relative change of the total energy, vorticity, enstrophy and mass for the 2D merging vortex problem using DP operators of order 4. Clearly, energy and vorticity remains conserved up to grid resolution, but enstrophy is dissipated due to the introduction of hyperviscosity.
Figure 10: Same as Fig. 9 but for order 6 DP and DRP operators.
Future work will extend the method to spherical and complex geometries using curvilinear grids. We will also extend the 1D nonlinear boundary conditions to 2D which will enable efficient numerical simulations in non-periodic bounded domains.
## 9 Acknowledgements
We thank Kenny Wiratama, Kieran Ricardo and David Lee for insightful discussions about higher order discretisation methods for hyperbolic systems. We are also grateful to Christopher Williams, Alberto Martin, Rudi Prihandoko, James R. Beattie and Neco Kriel for general discussions regarding this work. J. K. J. H. acknowledges funding via the ANU Chancellor's International Scholarship, the Space Plasma, Astronomy and Astrophysics Research Award, the Boswell Technologies Endowment Fund and Higher Degree Research (HDR) Award for Space Exploration. We also acknowledge computational resources provided by the National Computational Infrastructure (NCI) under the grant xx52, which is supported by the Australian Government.
## 10 Data Availability Statement
The datasets produced/analysed in the present study are available upon reasonable request to the corresponding author (J.K.J.H.).
## Appendix A Additional convergence studies for traditional SBP and odd-order DP and DRP operators
Here in Tables 5 and 6, we show additional convergence tables with the Gaussian MMS conducted using the traditional SBP operators (Table 5), as well as odd-order DP and DRP operators (Table 6). As confirmed earlier, we observe superconvergence for the 4th and 6th order upwind DP and DRP operators. In contrast, the traditional SBP operators achieve a globally 3rd order convergence rate for the order 4 interior operator, consistent with the \(p+1\) global accuracy expected from Laplace transform analysis, and a 5th order global convergence rate for the order 6 interior operator.
However, as shown in Table 6, the odd-order DP and DRP operators also exhibit super-convergent behaviour, which has been observed in a number of recent works [36, 45, 16] for these types of upwind SBP operators; we note that the precise reason for this is still unknown.
## Appendix B 2D energy/entropy stable semi-discrete approximation
To extend the 1D semi-discrete approximation (35) to the 2D rotating SWE (1) we discretise the 2D doubly-periodic domain \((x,y)\in\Omega=[0,2\pi]\times[0,2\pi]\) into \((N+1)\times(N+1)\) grid points \((x_{i},y_{j})=(i\Delta x,j\Delta y)\) with the uniform spatial step \(\Delta x=\Delta y=2\pi/N\) for \(N\in\mathbb{N}\). A 2D scalar field \(v(x,y)\) on the grid \((x_{i},y_{j})\) is re-arranged row-wise into a vector \(\mathbf{v}\in\mathbb{R}^{(N+1)^{2}}\). The 2D rotating SWE in semi-discrete form can be written as:
\[\frac{d\mathbf{h}}{dt}+\nabla_{\mathbf{D}_{+}}\cdot\left(\mathbf{U}\mathbf{h} \right)=G_{h},\quad\frac{d\mathbf{U}}{dt}+\boldsymbol{\omega}\mathbf{U}^{ \perp}+\nabla_{\mathbf{D}_{-}}\!\left(\frac{|\mathbf{U}|^{2}}{2}+g\mathbf{h} \right)\!\!=\!\mathbf{G}_{u},\ \ \boldsymbol{\omega}=\mathbf{D}_{-x}\mathbf{v}-\mathbf{D}_{-y}\mathbf{u}+f_{c}, \quad\mathbf{q}(0)=\mathbf{f}, \tag{61}\]
where \(\mathbf{U}=(\mathbf{u},\mathbf{v})^{T}\), \(\mathbf{U}^{\perp}=(-\mathbf{v},\mathbf{u})^{T}\), \(|\mathbf{U}|^{2}=\mathbf{u}^{2}+\mathbf{v}^{2}\), and \(\nabla_{\mathbf{D}_{\eta}}=(\mathbf{D}_{\eta x},\mathbf{D}_{\eta y})^{T}\), with \(\eta\in\{+,-\}\), is the discrete approximation of the gradient operator \(\nabla=(\partial/\partial x,\partial/\partial y)^{T}\). As before, note that the derivatives in the continuity equation are approximated with \(D_{+}\) and the derivatives in the momentum equation are approximated with the dual operator \(D_{-}\). Using the Kronecker product \(\otimes\), the 2D DP derivative operators are defined as \(\mathbf{D}_{\pm x}=D_{\pm}\otimes\mathbf{I}_{y}\), \(\mathbf{D}_{\pm y}=\mathbf{I}_{x}\otimes D_{\pm}\), where \(D_{\pm}\) are 1D DP operators closed with periodic boundary conditions and satisfy \(D_{+}+D_{-}^{T}=0\). The 2D SBP norm is defined as \(\mathbf{P}=\Delta x\Delta y\mathbf{I}\) with \(\mathbf{I}=(\mathbf{I}_{x}\otimes\mathbf{I}_{y})\) and \(\mathbf{I}_{x},\mathbf{I}_{y}\in\mathbb{R}^{(N+1)\times(N+1)}\) being identity matrices.
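The Kronecker construction of the 2D operators can be sketched as follows. Note that simple periodic first-order forward and backward differences are used here as stand-ins for the high-order DP/DRP pair; they are chosen only because they satisfy the same dual-pairing identity \(D_{+}+D_{-}^{T}=0\), which the assertion below verifies.

```python
import numpy as np

N = 16
dx = 2.0 * np.pi / N
I1 = np.eye(N)

Sf = np.roll(I1, 1, axis=1)         # shift matrix: (Sf u)_i = u_{i+1 mod N}
Dp = (Sf - I1) / dx                 # periodic forward difference (stand-in for D_+)
Dm = (I1 - Sf.T) / dx               # periodic backward difference (stand-in for D_-)
assert np.allclose(Dp + Dm.T, 0.0)  # dual-pairing property D_+ + D_-^T = 0

# 2D operators acting on fields v[i, j] flattened row-wise (x-index outermost).
Dpx = np.kron(Dp, I1)
Dpy = np.kron(I1, Dp)
Dmx = np.kron(Dm, I1)
Dmy = np.kron(I1, Dm)
```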
We define the 2D energy functional
\[\|\mathbf{q}\|_{\mathbf{PW}^{\prime}}^{2}:=\mathbf{q}^{T}\left(\mathbf{PW}^{ \prime}\right)\mathbf{q}=\Delta x\Delta y\sum_{ij}\left(gh_{ij}^{2}+h_{ij}u_{ ij}^{2}+h_{ij}v_{ij}^{2}\right),\quad\mathbf{W}^{\prime}=\begin{bmatrix}g\mathbf{I}& \mathbf{0}&\mathbf{0}\\ \mathbf{0}&\mathrm{diag}(\mathbf{h})&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&\mathrm{diag}(\mathbf{h})\end{bmatrix}, \tag{62}\]
and the elemental energy
\[e_{ij}:=\frac{1}{2}\left(gh_{ij}^{2}+h_{ij}u_{ij}^{2}+h_{ij}v_{ij}^{2}\right), \tag{63}\]
defines an entropy functional for sub-critical flows, \(u_{ij}^{2}+v_{ij}^{2}<gh_{ij}\). That is, the elemental energy \(e_{ij}\) is a convex function of the prognostic variables \(h_{ij},u_{ij},v_{ij}\). It is also noteworthy that
\[\mathbf{W}\mathbf{q}=\begin{bmatrix}\frac{|\mathbf{U}|^{2}}{2}+g\mathbf{h} \\ \mathbf{u}\mathbf{h}\\ \mathbf{v}\mathbf{h}\end{bmatrix},\ \mathbf{q}^{T}\left(\mathbf{PW}\right)\frac{d \mathbf{q}}{dt}=\frac{1}{2}\frac{d}{dt}\!\left(\mathbf{q}^{T}\left(\mathbf{PW} ^{\prime}\right)\mathbf{q}\right)\!,\ \mathbf{W}=\begin{bmatrix}g\mathbf{I}&\frac{1}{2}\mathrm{diag}(\mathbf{u})& \frac{1}{2}\mathrm{diag}(\mathbf{v})\\ \frac{1}{2}\mathrm{diag}(\mathbf{u})&\frac{1}{2}\mathrm{diag}(\mathbf{h})&0\\ \frac{1}{2}\mathrm{diag}(\mathbf{v})&0&\frac{1}{2}\mathrm{diag}(\mathbf{h}) \end{bmatrix},\ \mathbf{W}=\mathbf{W}^{T}. \tag{64}\]
To make the 2D analysis amenable to the 1D theory we will reformulate the semi-discrete approximation (61). As before the 2D hyper-viscosity operator is constructed such that when appended to the semi-discrete formulation
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline DP, order 5, MMS & & & & & & & & & \\ \(N\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -7.2370 & -7.1095 & - & - & -7.0995 & -6.7450 & - & - \\
81 & -11.3378 & -10.9423 & 4.1008 & 3.8328 & -11.6030 & -10.6515 & 4.5035 & 3.9064 \\
161 & -15.2832 & -15.0097 & 3.9454 & 4.0674 & -15.4952 & -14.7624 & 3.8921 & 4.1109 \\
321 & -18.9627 & -18.8146 & 3.6795 & 3.8049 & -19.1202 & -18.6201 & 3.6250 & 3.8577 \\
641 & -22.5701 & -22.5139 & 3.6074 & 3.6993 & -22.7087 & -22.3616 & 3.5885 & 3.7415 \\ \hline \hline DRP, order 5, MMS & & & & & & & & \\ \(N\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) & \(\log_{2}\|\mathrm{error}\|_{2,u}\) & \(\log_{2}\|\mathrm{error}\|_{2,h}\) & \(q_{u}\) & \(q_{h}\) \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) \\ \hline
41 & -6.7009 & -5.7814 & - & - & -6.5358 & -5.1352 & - & - \\
81 & -10.2166 & -9.7475 & 3.5158 & 3.9661 & -10.1635 & -9.2776 & 3.6276 & 4.1424 \\
161 & -14.2013 & -13.9006 & 3.9846 & 4.1531 & -14.1764 & -13.5178 & 4.0129 & 4.2402 \\
321 & -17.9694 & -17.8282 & 3.7681 & 3.9276 & -17.9550 & -17.5155 & 3.7786 & 3.9977 \\
641 & -21.6402 & -21.6193 & 3.6708 & 3.7911 & -21.6441 & -21.3592 & 3.6891 & 3.8437 \\ \hline \hline \end{tabular}
\end{table}
Table 6: \(\log\) 2 \(L^{2}\) norm errors and convergence rate of DP and DRP operators of odd order (order 5) for primitive variables \(u,h\).
(61), it becomes,
\[\frac{d\mathbf{q}}{dt}+\nabla_{\mathbf{D}}\cdot\mathbf{F}+\mathbf{\Omega}_{f}= \mathcal{K}\mathbf{q}+\mathbf{G},\quad\mathbf{\Omega}_{f}=\begin{bmatrix} \mathbf{0}\\ -\mathbf{\omega}\mathbf{v}\\ \mathbf{\omega}\mathbf{u}\end{bmatrix},\quad\mathbf{\omega}=\mathbf{D}_{-x}\mathbf{v} -\mathbf{D}_{-y}\mathbf{u}+f_{c},\quad\mathbf{q}(0)=\mathbf{f}, \tag{65}\]
where \(\nabla_{\mathbf{D}}\cdot\mathbf{F}=\mathbf{D}_{x}\mathbf{F}_{x}+\mathbf{D}_{y} \mathbf{F}_{y}\) and the flux and its gradients are given by
\[\mathbf{F}_{x}:=\begin{bmatrix}\mathbf{u}\mathbf{h}\\ \frac{|\mathbf{U}|^{2}}{2}+g\mathbf{h}\\ \mathbf{0}\end{bmatrix},\;\mathbf{F}_{y}:=\begin{bmatrix}\mathbf{v}\mathbf{h}\\ \mathbf{0}\\ \frac{|\mathbf{U}|^{2}}{2}+g\mathbf{h}\end{bmatrix},\quad\mathbf{D}_{x} \mathbf{F}_{x}:=\begin{bmatrix}\mathbf{D}_{+x}\left(\mathbf{u}\mathbf{h} \right)\\ \mathbf{D}_{-x}\left(\frac{|\mathbf{U}|^{2}}{2}+g\mathbf{h}\right)\\ \mathbf{0}\end{bmatrix},\;\mathbf{D}_{y}\mathbf{F}_{y}:=\begin{bmatrix} \mathbf{D}_{+y}\left(\mathbf{v}\mathbf{h}\right)\\ \mathbf{0}\\ \mathbf{D}_{-y}\left(\frac{|\mathbf{U}|^{2}}{2}+g\mathbf{h}\right)\end{bmatrix}.\]
Here the 2D nonlinear hyper-viscosity operator \(\mathcal{K}\) is given by
\[\mathcal{K}=\mathbf{W}^{-1}\left(I_{3}\otimes\mathbf{P}^{-1}\right)\left((I_{ 3}\otimes\mathcal{A}\otimes\mathbf{I}_{y})+(I_{3}\otimes\mathbf{I}_{x}\otimes \mathcal{A})\right),\quad\mathcal{A}=\mathcal{A}^{T},\quad\mathbf{u}^{T} \mathcal{A}\mathbf{u}\leq 0,\quad\forall\mathbf{u}\in\mathbb{R}^{N+1}, \tag{66}\]
where \(\mathcal{A}\) is the 1D hyper-viscosity operator derived in Section 6 and \(I_{3}\in\mathbb{R}^{3\times 3}\) is the identity matrix.
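The Kronecker-sum structure in (66) can be checked numerically with the short sketch below. The 1D matrix \(A_1=-M^{T}M\) used here is only a stand-in symmetric negative semi-definite matrix, not the actual hyper-viscosity operator; the point is that the combination \((I_{3}\otimes\mathcal{A}\otimes\mathbf{I}_{y})+(I_{3}\otimes\mathbf{I}_{x}\otimes\mathcal{A})\) inherits symmetry and negative semi-definiteness from \(\mathcal{A}\).

```python
import numpy as np

N = 8
I1, I3 = np.eye(N), np.eye(3)

# Stand-in 1D dissipation matrix: A1 = -M^T M is symmetric and negative semi-definite.
M = np.diff(np.eye(N), axis=0)
A1 = -M.T @ M

A2d = np.kron(I3, np.kron(A1, I1)) + np.kron(I3, np.kron(I1, A1))
assert np.allclose(A2d, A2d.T)
assert np.linalg.eigvalsh(A2d).max() <= 1e-12   # non-positive up to round-off
```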
We are now ready to prove that the numerical method (61) or (65) is strongly energy stable. Analogously to the 1D result, Theorem 12, we have
**Theorem 13**.: _Consider the semi-discrete approximation (61) with periodic boundary conditions and the nonlinear hyper-viscosity operator \(\mathcal{K}\) defined by (66). Let the symmetric matrix \(\mathbf{W}\) and the diagonal matrix \(\mathbf{W}^{\prime}\) be defined as in (62) and (64). For sub-critical flows with \(F^{2}=(u_{ij}^{2}+v_{ij}^{2})/(gh_{ij})<1\), the semi-discrete approximation (61) or (65) is strongly stable. That is, the numerical solution \(\mathbf{q}\) satisfies the estimate,_
\[\|\mathbf{q}\|_{W^{\prime}P}\leq Ke^{\mu t}\bigg{(}\|\mathbf{f}\|_{W^{\prime} P}+\max_{\tau\in[0,t]}\|\mathbf{G}(\tau)\|_{W^{\prime}P}\bigg{)},\]
_where_
\[\mu=\max_{\tau\in[0,t]}\frac{\mathbf{q}^{T}\left((I_{3}\otimes\mathcal{A} \otimes\mathbf{I}_{y})+(I_{3}\otimes\mathbf{I}_{x}\otimes\mathcal{A})\right) \mathbf{q}}{\|\mathbf{q}\|_{W^{\prime}P}^{2}}\leq 0,\quad K=\max\big{(}1,(1-e^{-\mu t })/\mu\big{)}.\]
The proof is analogous to Theorem 12 and is omitted for brevity.
|
2308.10364 | SE(3) Equivariant Augmented Coupling Flows | Coupling normalizing flows allow for fast sampling and density evaluation,
making them the tool of choice for probabilistic modeling of physical systems.
However, the standard coupling architecture precludes endowing flows that
operate on the Cartesian coordinates of atoms with the SE(3) and permutation
invariances of physical systems. This work proposes a coupling flow that
preserves SE(3) and permutation equivariance by performing coordinate splits
along additional augmented dimensions. At each layer, the flow maps atoms'
positions into learned SE(3) invariant bases, where we apply standard flow
transformations, such as monotonic rational-quadratic splines, before returning
to the original basis. Crucially, our flow preserves fast sampling and density
evaluation, and may be used to produce unbiased estimates of expectations with
respect to the target distribution via importance sampling. When trained on the
DW4, LJ13, and QM9-positional datasets, our flow is competitive with
equivariant continuous normalizing flows and diffusion models, while allowing
sampling more than an order of magnitude faster. Moreover, to the best of our
knowledge, we are the first to learn the full Boltzmann distribution of alanine
dipeptide by only modeling the Cartesian positions of its atoms. Lastly, we
demonstrate that our flow can be trained to approximately sample from the
Boltzmann distribution of the DW4 and LJ13 particle systems using only their
energy functions. | Laurence I. Midgley, Vincent Stimper, Javier Antorán, Emile Mathieu, Bernhard Schölkopf, José Miguel Hernández-Lobato | 2023-08-20T20:49:15Z | http://arxiv.org/abs/2308.10364v6 | # SE(3) Equivariant Augmented Coupling Flows
###### Abstract
Coupling normalizing flows allow for fast sampling and density evaluation, making them the tool of choice for probabilistic modeling of physical systems. However, the standard coupling architecture precludes endowing flows that operate on the Cartesian coordinates of atoms with the SE(3) and permutation invariances of physical systems. This work proposes a coupling flow that preserves SE(3) and permutation equivariance by performing coordinate splits along additional augmented dimensions. At each layer, the flow maps atoms' positions into learned SE(3) invariant bases, where we apply standard flow transformations, such as monotonic rational-quadratic splines, before returning to the original basis. Crucially, our flow preserves fast sampling and density evaluation, and may be used to produce unbiased estimates of expectations with respect to the target distribution via importance sampling. When trained on the DW4, LJ13, and QM9-positional datasets, our flow is competitive with equivariant continuous normalizing flows, while allowing sampling more than an order of magnitude faster. Moreover, to the best of our knowledge, we are the first to learn the full Boltzmann distribution of alanine dipeptide by only modeling the Cartesian positions of its atoms. Lastly, we demonstrate that our flow can be trained to approximately sample from the Boltzmann distribution of the DW4 and LJ13 particle systems using only their energy functions.
## 1 Introduction
Modeling the distribution of a molecule's configurations at equilibrium, known as the Boltzmann distribution, is a promising application of deep generative models (Noe et al., 2019). While the unnormalized density of the Boltzmann distribution can be obtained via physical modeling, sampling from it typically requires molecular dynamics (MD) simulations, which are expensive and produce correlated samples. A promising alternative is to rely on surrogate deep generative models, known as Boltzmann generators. We can draw independent samples from these and debias any expectations
estimated with the samples via importance weighting. The Boltzmann distribution typically admits rotation and translation symmetries, known as the Special Euclidean group \(\mathrm{SE}(3)\), as well as permutation symmetry. These constraints are important to incorporate into the model as they improve training efficiency and generalization (Cohen and Welling, 2016; Batzner et al., 2022; Kohler et al., 2020). Other key desiderata of Boltzmann generators are that they allow for fast sampling and density evaluation. These are necessary for energy-based training, that is, training using the Boltzmann distribution's unnormalized density (Noe et al., 2019; Stimper et al., 2022; Midgley et al., 2023). Training by energy is critical, as it prevents the model quality from being constrained by the quality and quantity of MD samples.
Existing coupling flows which approximate the Boltzmann distribution of molecules are at least partially defined over internal coordinates, i.e. bond distances, bond angles, and dihedral angles (Wu et al., 2020; Campbell et al., 2021; Kohler et al., 2023; Midgley et al., 2023), which are \(\mathrm{SE}(3)\) invariant. However, the definition of these depends on the molecular graph and they are non-unique for most graphs. Furthermore, models parameterized in internal coordinates struggle to capture interactions among nodes far apart in the graph and cannot capture the permutation invariance of some atoms. Thus, they are not suitable for particle systems such as LJ13 (Kohler et al., 2020). \(\mathrm{SE}(3)\) equivariance constraints have been applied to continuous normalizing flows (CNFs) operating on Cartesian coordinates (Kohler et al., 2020; Satorras et al., 2021), and to the closely related diffusion models (Xu et al., 2022; Hoogeboom et al., 2022; Yim et al., 2023). These models are built upon \(\mathrm{SE}(3)\) equivariant graph neural networks (GNNs) (Satorras et al., 2021; Geiger and Smidt, 2022; Batzner et al., 2022). These architectures can be applied to any molecular graph (Jing et al., 2022), enabling a single generative model to generalize across many molecules. Alas, sampling and evaluating the density of CNFs and diffusion models typically requires thousands of neural network evaluations (Xiao et al., 2022), preventing them from being trained by energy. As such, presently no Boltzmann generator exists that (i) acts on Euclidean coordinates of atoms (ii) enforces \(\mathrm{SE}(3)\) equivariance, and (iii) allows for fast sampling.
To address this gap, we propose a flexible \(\mathrm{SE}(3)\) equivariant coupling flow that operates on the Cartesian coordinates of atoms, allowing for fast sampling and density evaluation. Our contributions are:
* We extend coupling layers to be \(\mathrm{SE}(3)\) equivariant by augmenting their input space with auxiliary variables (Huang et al., 2020) which can be acted upon on by \(\mathrm{SE}(3)\). We update the atom positions conditioned on the auxiliary variables by first projecting the atoms into an \(\mathrm{SE}(3)\)-invariant space and then applying a standard normalizing flow transform before projecting its output back onto the equivariant space.
* We demonstrate that, when trained by maximum likelihood, our flow matches the performance of both existing \(\mathrm{SE}(3)\) CNFs and coupling flows operating on internal coordinates on molecular generation tasks. Our flow is more than 10 times faster to sample from than \(\mathrm{SE}(3)\) CNFs. Concurrently with Klein et al. (2023), we are the first to learn the full Boltzmann distribution of alanine dipeptide solely in Cartesian coordinates.
* We demonstrate our flow in the energy-based training setting on the DW4 and LJ13 problems, where parameters are learned using only the molecular energy function. Energy-based training of the CNF is intractable due to slow sampling and density evaluation. Flows that operate on internal coordinates are not able to capture the permutation invariance of these problems. Hence, our flow is the only existing permutation and \(\mathrm{SE}(3)\) equivariant method that can tractably be applied there.
## 2 Background: coupling flows and invariant models
### Normalizing flows and coupling transforms
A (discrete-time) normalizing flow is a flexible parametric family of densities on \(\mathcal{X}\), defined as the push-forward of a base density \(q_{0}\) along an invertible automorphism \(f_{\theta}:\mathcal{X}\to\mathcal{X}\) with parameters \(\theta\in\Theta\) (Papamakarios et al., 2021). The density is given by the change of variable formula:
\[q_{\theta}(x)=q_{0}(f_{\theta}^{-1}(x))\,\left|\det\!\frac{\partial f_{\theta}^{-1}(x)}{\partial x}\right|. \tag{1}\]
We can efficiently sample from the flow by sampling from \(q_{0}\) and mapping these samples through \(f_{\theta}\) in a single forward pass. A popular way to construct \(f_{\theta}\) is to use coupling transforms. The
\(D\)-dimensional input \(\mathbf{x}\in\mathcal{X}\) is split into two sets, transforming the first set conditional on the second, while leaving the second set unchanged:
\[\mathbf{y}_{1:d}= \,\mathcal{T}(\mathbf{x}_{1:d};\mathbf{x}_{d+1:D}), \tag{2}\] \[\mathbf{y}_{d+1:D}= \,\mathbf{x}_{d+1:D}.\]
They induce a lower triangular Jacobian, such that its determinant becomes \(|\partial\mathcal{T}(\mathbf{x}_{1:d};\mathbf{x}_{d+1:D})/\partial\mathbf{x}_{1:d}|\). Further, choosing \(\mathcal{T}\) to have an easy to compute determinant, such as an elementwise transformation (Dinh et al., 2015, 2017; Durkan et al., 2019), allows for fast density evaluation and sampling at low computational cost.
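To illustrate the coupling structure in (2), the following minimal sketch (our own illustration, not the equivariant architecture proposed in this paper) implements a plain affine coupling layer with stand-in linear conditioner maps; the log-determinant reduces to a sum of log-scales because the Jacobian is triangular.

```python
import numpy as np

rng = np.random.default_rng(0)
D, d = 6, 3
Ws = 0.1 * rng.standard_normal((d, D - d))   # stand-in "scale" conditioner
Wt = 0.1 * rng.standard_normal((d, D - d))   # stand-in "shift" conditioner

def coupling_forward(x):
    x1, x2 = x[:d], x[d:]
    s, t = Ws @ x2, Wt @ x2
    y1 = x1 * np.exp(s) + t                  # elementwise affine transform of x1
    log_det = np.sum(s)                      # triangular Jacobian: sum of log-scales
    return np.concatenate([y1, x2]), log_det

def coupling_inverse(y):
    y1, y2 = y[:d], y[d:]
    s, t = Ws @ y2, Wt @ y2
    return np.concatenate([(y1 - t) * np.exp(-s), y2])

x = rng.standard_normal(D)
y, log_det = coupling_forward(x)
assert np.allclose(coupling_inverse(y), x)
```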
### Equivariance and invariance for coupling flow models of molecular conformations
Throughout, we deal with observations of an \(n\)-body system represented by a matrix \(\mathbf{x}=\left[x^{1},\ldots,x^{n}\right]\in\mathcal{X}=\mathbb{R}^{3\times n}\), where the rows index Cartesian coordinates and the columns index individual particles. We seek to construct flows on \(\mathcal{X}\) endowed with the symmetries present in molecular energy functions. These are invariant to rotations and translations of \(\mathbf{x}\) (\(\mathrm{SE}(3)\)), and to permutation of atoms of the same type (\(S_{n}\)). We will formalize them in this section.
Symmetry groupsThe special Euclidean group \(\mathrm{SE}(3)\) is the set of orientation preserving rigid transformations in Euclidean space. Its elements \(t\in\mathrm{SE}(3)\) can be decomposed into two components \(t=(R,u)\) where \(R\in\mathrm{SO}(3)\) is a \(3\times 3\) rotation matrix and \(u\in\mathbb{R}^{3}\) represents a translation; for a coordinate \(v\in\mathbb{R}^{3}\), \(t\cdot v=Rv+u\) denotes the action of \(t\) on \(v\). The symmetric group \(S_{n}\) defined over a set of \(n\) atoms consists of all \(n!\) permutations that can be performed with said atoms. Its elements \(\sigma\in S_{n}\) act on an \(n\)-body system as \(\sigma\cdot\mathbf{x}=[x^{\sigma(1)},\ldots,x^{\sigma(n)}]\).
Equivariant mapsA map \(f:\mathcal{X}\to\mathcal{Y}\) is said to be _\(G\)-equivariant_ if it commutes with the group action, i.e. if for any \(x\in\mathcal{X}\) and \(g\in G\) we have \(f(g\cdot x)=g\cdot f(x)\). Invariance is a special case where for any \(x\in\mathcal{X}\) and \(g\in G\) we have \(f(g\cdot x)=f(x)\). There has been a plethora of recent work on constructing graph neural network functions equivariant to the action of \(G=\mathrm{SE}(3)\times S_{n}\)(e.g. Thomas et al., 2018; Satorras et al., 2021; Geiger and Smidt, 2022), which we will leverage to construct our equivariant coupling flow model.
Invariant densityA density \(p:\mathcal{X}\to\mathbb{R}_{+}\) is \(G\)-invariant if for any \(x\in\mathcal{X}\) and \(g\in G\) we have \(p(g\cdot x)\!=\!p(x)\). Combining an invariant base density \(q_{0}\) with an equivariant invertible transform \(f\), as in (1), yields an invariant flow density (Papamakarios et al., 2021; Kohler et al., 2020). This gives a practical way to design invariant densities models.
Challenges in constructing \(\mathbf{SO(3)\times S_{n}}\) invariant flow modelsUnfortunately, no coupling transform can be simultaneously equivariant to both permutation of the particles and their rotations; coupling splits must be performed either across particles or spatial dimensions which would break either permutation or rotational symmetry (Kohler et al., 2020; Bose et al., 2022).
Furthermore, there does not exist a translation invariant _probability_ measure, as any such measure would be proportional to the Lebesgue measure and therefore not have unit volume. This precludes us from defining an invariant base distribution directly on \(\mathcal{X}\). Fortunately, Proposition A.1 and its converse allow us to disintegrate the probability measure into a translational measure proportional to the Lebesgue measure and an \(\mathrm{SO}(3)\)-invariant probability measure on the subspace of \(\mathbb{R}^{3\times n}\) with zero center of mass. We can drop the former and only model the latter.
## 3 Method: \(\mathbf{SE(3)\times S_{n}}\) equivariant augmented coupling flow model
This section describes our main contribution, an \(\mathrm{SE}(3)\times S_{n}\) equivariant coupling flow. We first lay the groundwork for achieving translation invariance by defining our flow density on a lower-dimensional "zero Center of Mass (CoM)" space. To preserve permutation and rotation equivariance, we leverage the augmented flow framework of Huang et al. (2020). Specifically, we use sets of augmented variables as a pivot for coupling transforms. Sec. 3.1 introduces a novel class of coupling transforms that achieve the aforementioned permutation and rotation equivariance by operating on atoms projected using a set of equivariant bases. Sec. 3.2 describes our choice of invariant base distribution and, finally, in Sec. 3.3, we discuss several schemes to train the augmented flow from either samples or energy functions and how to perform efficient density evaluation.
**Translation invariance** is obtained by modelling the data on the quotient space \(\mathbb{R}^{3\times n}/\,\mathbb{R}^{3}\triangleq\hat{\mathcal{X}}\subseteq \mathcal{X}\), where all \(n\)-body systems that only differ by a translation are "glued" together, i.e. where \(\mathbf{x}\sim\mathbf{x}^{\prime}\) if \(\mathbf{x}=\mathbf{x}^{\prime}+p\) with \(p\in\mathbb{R}^{3}\). Constructing a parametric probabilistic model over \(\hat{\mathcal{X}}\) automatically endows it with translation invariance. In practice, we still work with Cartesian coordinates, but center the data so as to zero its CoM: \(\tilde{\mathbf{x}}\triangleq\mathbf{x}-\bar{\mathbf{x}}\) with \(\bar{\mathbf{x}}\triangleq\frac{1}{n}\sum_{i=1}^{n}[\mathbf{x}]^{i}\). Thus, \(\tilde{\mathbf{x}}\) lies on \(\hat{\mathcal{X}}\), an \((n-1)\times 3\) dimensional hyperplane embedded in \(\mathcal{X}\).
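The centering step can be illustrated with a tiny sketch (a simple numerical check, not the authors' code): subtracting the per-system mean position removes the translational degrees of freedom, so the centered representation is unchanged when the whole system is translated.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 8))             # 3 x n matrix of atom positions
shift = rng.standard_normal((3, 1))         # arbitrary global translation

center = lambda x: x - x.mean(axis=1, keepdims=True)
assert np.allclose(center(x), center(x + shift))
```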
Augmented variable pivoted couplingTo allow for simultaneously permutation and rotation equivariant coupling transforms, we introduce _augmented_ variables \(\mathbf{a}\in\mathcal{A}\). Our coupling layers update the particle positions \(\mathbf{x}\) conditioned on \(\mathbf{a}\) and vice-versa. The augmented variables need to "behave" similarly to \(\mathbf{x}\), in that they can also be acted upon by elements of \(\mathrm{SE}(3)\times S_{n}\). We achieve this by choosing \(\mathbf{a}\) to be a set of \(k\) of observation-sized arrays \(\mathcal{A}=\mathcal{X}^{k}\), which we will discuss further in App. B.4. Importantly, we do not restrict \(\mathbf{a}\) to be zero-CoM.
Invariant flow density on the extended spaceWe parameterize a density \(q\) over the extended space \(\hat{\mathcal{X}}\times\mathcal{A}\) w.r.t. the product measure \(\lambda_{\hat{\mathcal{X}}}\otimes\lambda_{\mathcal{A}}\), where \(\lambda_{\hat{\mathcal{X}}}\in\mathcal{P}(\hat{\mathcal{X}})\) and \(\lambda_{\mathcal{A}}\in\mathcal{P}(\mathcal{A})\) respectively denote the Lebesgue measure on \(\hat{\mathcal{X}}\) and \(\mathcal{A}\). We use \(q(\mathbf{x},\mathbf{a})\) as a shorthand for the density of the corresponding zero-CoM projection \(q(\tilde{\mathbf{x}},\mathbf{a})\). The density \(q\) is constructed to be invariant to the action of \(G\triangleq\mathrm{SO}(3)\times S_{n}\) when simultaneously applied to the observed and augmented variables, that is, \(q(\mathbf{x},\mathbf{a})=q(g\cdot\mathbf{x},g\cdot\mathbf{a})\) for any \(g\in G\). We aim to construct a \(G\)-equivariant flow \(f:\hat{\mathcal{X}}\times\mathcal{A}\rightarrow\hat{\mathcal{X}}\times \mathcal{A}\) on this extended space, and combine it with \(q_{0}:\hat{\mathcal{X}}\times\mathcal{A}\rightarrow\mathbb{R}^{+}\), a \(G\)-invariant base density function, to yield the invariant flow density
\[q(\mathbf{x},\mathbf{a})=q_{0}(f^{-1}(\mathbf{x},\mathbf{a}))\,\left|\text{det}\frac{\partial f ^{-1}(\mathbf{x},\mathbf{a})}{\partial(\mathbf{x},\mathbf{a})}\right|. \tag{3}\]
**Proposition 3.1** (Invariant marginal).: _Assume \(q:\mathcal{X}\times\mathcal{A}\rightarrow\mathbb{R}_{+}\) is a G-invariant density over the probability space \((\mathcal{X}\times\mathcal{A},\lambda_{\mathcal{X}}\otimes\lambda_{\mathcal{A}})\), then \(q_{\mathbf{x}}\triangleq\int_{\mathcal{A}}q(\cdot,\mathbf{a})\lambda_{\mathcal{A}}( \mathrm{d}\mathbf{a}):\mathcal{X}\rightarrow\mathbb{R}_{+}\) is a G-invariant density w.r.t. to the measure \(\lambda_{\mathcal{X}}\)._
Proof.: For any \(g\in G\), \(\mathbf{x}\in\mathcal{X}\) and \(\mathbf{a}\in\mathcal{A}\)
\[q_{\mathbf{x}}(g\cdot\mathbf{x})=\!\!\int_{\mathcal{A}}\!\!\!q(g\cdot\mathbf{x},\mathbf{a}) \lambda_{\mathcal{A}}(\mathrm{d}\mathbf{a})=\!\!\int_{g^{-1}\mathcal{A}}\!\!\!q(g \cdot\mathbf{x},g\cdot\mathbf{a})\lambda_{\mathcal{A}}(\mathrm{d}\,g\cdot\mathbf{a})=\!\! \int_{\mathcal{A}}\!\!q(\mathbf{x},\mathbf{a})\lambda_{\mathcal{A}}(\mathrm{d}\mathbf{a})= q_{\mathbf{x}}(\mathbf{x}),\]
where we used the \(G\)-invariance of \(q\) and of the measure \(\lambda_{\mathcal{A}}\), as well as \(g^{-1}\mathcal{A}=\mathcal{A}\).
### \(\mathrm{SE}(3)\) and permutation equivariant coupling transform
We now derive our \(\mathrm{SE}(3)\times S_{n}\) equivariant map \(f:\tilde{\mathcal{X}}\times\mathcal{A}\rightarrow\tilde{\mathcal{X}}\times \mathcal{A}\) defined on the extended space. We introduce two modules: a shift-CoM transform, which swaps the center of mass between the observed and augmented variables, and an equivariant core transformation, which updates \(\tilde{\mathbf{x}}\) conditional on \(\mathbf{a}\) and vice versa. Composing them yields our equivariant coupling layer illustrated in Fig. 1.
Figure 1: Illustration of the equivariant coupling layer of our augmented normalizing flow, where our variable with zero center of mass (CoM) \(\tilde{\mathbf{x}}\) is transformed with the augmented variable \(\mathbf{a}\).
**\(\mathrm{SE}(3)\times S_{n}\) equivariant coupling.** We dissociate the equivariance constraint from the flexible parameterized flow transformation by (1) projecting the atoms' Cartesian coordinates into a learned, local (per-atom) invariant space, (2) applying the flexible flow to the invariant representation of the atoms' positions, and (3) then projecting back into the original Cartesian space. Specifically, we construct a core coupling transformation that composes (a) an invariant map \(\gamma:\Psi\times\mathcal{X}\to\mathcal{Y}\), where \(\mathcal{Y}\) is isomorphic to \(\mathcal{X}\), and \(\mathbf{r}\in\Psi\) parametrizes the map. We denote the parametrized map as \(\gamma_{\mathbf{r}}\). It is followed by (b) a standard flexible normalizing flow transform, e.g. a neural spline, \(\tau:\mathcal{Y}\to\mathcal{Y}\), and (c) the inverse map \(\gamma^{-1}\). Denoting inputs with superscript \(\ell\) and outputs with \(\ell+1\), our core transformation \(\mathcal{F}:(\mathbf{x}^{\ell},\mathbf{a}^{\ell})\mapsto(\mathbf{x}^{\ell+1},\mathbf{a}^{\ell +1})\) is given as
\[\mathbf{x}^{\ell+1} =\gamma_{\mathbf{r}}^{-1}\cdot\tau_{\theta}(\gamma_{\mathbf{r}}\cdot\mathbf{ x}^{\ell}),\quad\text{with}\quad(\mathbf{r},\theta)=h(\mathbf{a}^{\ell}), \tag{4}\] \[\mathbf{a}^{\ell+1} =\mathbf{a}^{\ell}.\]
Here, \(h\) is a (graph) neural network that returns a set of equivariant reference vectors \(\mathbf{r}\), which parametrize the map \(\gamma_{\mathbf{r}}\), and invariant parameters \(\theta\). \(\mathcal{Y}\) is a rotation invariant space. This means that any rotations applied to the inputs will be cancelled by \(\gamma_{\mathbf{r}}\), i.e. \(\gamma_{g\cdot\mathbf{r}}=\gamma_{\mathbf{r}}\cdot g^{-1}\) or equivalently \((\gamma_{g\cdot\mathbf{r}})^{-1}=g\cdot\gamma_{\mathbf{r}}^{-1}\) for all \(g\in\mathrm{SO}(3)\). We use the inverse projection \(\gamma_{\mathbf{r}}^{-1}\) to map the invariant features back to equivariant features. The function \(\tau_{\theta}\) is a standard invertible inner-transformation such as an affine or spline based transform, that we apply to the invariant features (Papamakarios et al., 2021).
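To make the project–transform–unproject pattern of Eq. (4) concrete, the following is a minimal numpy sketch in the spirit of the Cartesian-proj variant described below, with an affine inner transform \(\tau_{\theta}\). The per-atom origins and rotation matrices stand in for the output of the network \(h\); they are hypothetical inputs here, not the architecture used in our experiments.

```python
import numpy as np

def core_coupling(x, origins, rotations, log_scale, shift):
    """Core transform in the spirit of Eq. (4), Cartesian-proj style.

    x:          (n, 3) atom positions.
    origins:    (n, 3) per-atom reference origins    (would come from h(a)).
    rotations:  (n, 3, 3) per-atom rotation matrices (would come from h(a)).
    log_scale, shift: (n, 3) invariant parameters of an affine inner transform.
    Returns transformed positions and log |det Jacobian|.
    """
    # (1) project into the local invariant frame: y_i = R_i^T (x_i - o_i)
    y = np.einsum('nij,nj->ni', rotations.transpose(0, 2, 1), x - origins)
    # (2) flexible elementwise transform in the invariant space (affine here)
    y = y * np.exp(log_scale) + shift
    # (3) project back to Cartesian coordinates: x_i' = R_i y_i + o_i
    x_new = np.einsum('nij,nj->ni', rotations, y) + origins
    # rotations have unit determinant, so only the affine part contributes
    return x_new, log_scale.sum()

# quick equivariance check with a random global rotation
rng = np.random.default_rng(0)
n = 4
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
Q *= np.sign(np.linalg.det(Q))          # make it a proper rotation
x = rng.normal(size=(n, 3)); o = rng.normal(size=(n, 3))
R = np.stack([np.eye(3)] * n)           # identity frames, for simplicity
s, t = rng.normal(size=(n, 3)), rng.normal(size=(n, 3))
out1, _ = core_coupling(x @ Q.T, o @ Q.T, np.einsum('ij,njk->nik', Q, R), s, t)
out2, _ = core_coupling(x, o, R, s, t)
assert np.allclose(out1, out2 @ Q.T)
```

Because the projection in step (1) cancels any global rotation applied simultaneously to the positions and to the reference frames, rotating all inputs simply rotates the output; this is the equivariance property established in Proposition 3.2 below.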
We now show that the above described transform is rotation and permutation equivariant.
**Proposition 3.2** (Equivariant augmented coupling flow).: _If \(h:\mathcal{A}\to\mathcal{X}^{n}\times\Theta^{n}\) is \(\mathrm{SO}(3)\)-equivariant for its first output, \(\mathrm{SO}(3)\)-invariant for its second, and \(S_{n}\) equivariant for both, and \((\gamma_{g\cdot\mathbf{r}})^{-1}=g\cdot\gamma_{\mathbf{r}}^{-1}\) for all \(g\in\mathrm{SO}(3)\), the transform \(\mathcal{F}\) given by (4) is \(\mathrm{SO}(3)\times S_{n}\) equivariant._
Proof.: For \(\mathrm{SO}(3)\): We first notice that \(h(g\cdot\mathbf{a})=(g\cdot\mathbf{r},\theta)\), and then since \((\gamma_{g\cdot\mathbf{r}})^{-1}=g\cdot\gamma_{\mathbf{r}}^{-1}\) we have \(\mathcal{F}(g\cdot\mathbf{x},g\cdot\mathbf{a})=(\gamma_{g\cdot\mathbf{r}})^{-1}\cdot\tau_ {\theta}(\gamma_{g\cdot\mathbf{r}}\cdot g\cdot\mathbf{x}^{\ell})=g\cdot\gamma_{\mathbf{r }}^{-1}\cdot\tau_{\theta}(\gamma_{\mathbf{r}}\cdot g^{-1}\cdot g\cdot\mathbf{x}^{ \ell})=g\cdot\mathcal{F}(\mathbf{x},\mathbf{a})\).
For \(S_{n}\): We first note that \(h(\sigma\cdot\mathbf{a})=(\sigma\cdot\mathbf{r},\sigma\cdot\theta)\). Then, using that \(\gamma_{\mathbf{r}}\) and \(\tau\) act on \(\mathbf{x}\) atom-wise, we have \(\mathcal{F}(\sigma\cdot\mathbf{x},\sigma\cdot\mathbf{a})=\gamma_{\sigma\cdot\mathbf{r}}^{-1} \cdot\tau_{\sigma\cdot\theta}(\gamma_{\sigma\cdot\mathbf{r}}\cdot(\sigma\cdot\mathbf{ x}))=(\sigma\cdot\gamma_{\mathbf{r}}^{-1})\cdot(\sigma\cdot\tau_{\theta})((\sigma \cdot\gamma_{\mathbf{r}})\cdot(\sigma\cdot\mathbf{x}))=\sigma\cdot\mathcal{F}(\mathbf{x },\mathbf{a})\).
For the Jacobian of the coupling described above to be well-defined, the variable being transformed must be non-zero CoM (see App. B.1 for a derivation). Thus, although our observations live on \(\tilde{\mathcal{X}}\), for now, assume that the inputs to the transform are not zero-CoM and we will deal with this assumption in the following paragraphs. This choice also allows us to use standard equivariant GNNs for \(h\)(Satorras et al., 2021; Geiger and Smidt, 2022) which leverage per-node features defined in the ambient space, such as atom type and molecular graph connectivity.
**Choices of projection \(\gamma\).** The equivariant vectors \(\mathbf{r}\) parameterize a local (per-atom) \(\mathrm{SO}(3)\) equivariant reference frame used in the projection \(\gamma_{\mathbf{r}}\). We introduce three different projection strategies. (i) The first strategy is for \(\mathbf{r}\) to parameterize a frame composed of an origin and an orthonormal rotation matrix into which we project each atom's positions. We then take \(\tau\) to be a dimension-wise transformation for each of the projected atoms' coordinates. We dub this method Cartesian-proj. (ii) Alternatively, we let \(\mathbf{r}\) parameterize an origin, zenith direction and azimuth direction for spherical coordinates, as in Liu et al. (2022). We then apply elementwise transforms to each atom's radius, polar angle and azimuthal angle. We call this Spherical-proj. (iii) Lastly, we consider a variant of Spherical-proj where just the radius is transformed and the polar and azimuth angles are held constant. Here, \(\mathbf{r}\) parameterizes a single reference point, a per-atom origin. We refer to this last variant as Vector-proj.
**Architectural details.** For the transformations applied in the invariant projected space we consider affine mappings (Dinh et al., 2017) and monotonic rational-quadratic splines (Durkan et al., 2019). Additionally, to limit computational cost, we have our GNN \(h\) output \(M\) sets of reference vectors \(\mathbf{r}\) and invariant parameters \(\theta\). These parametrize \(M\) core coupling transformations with a single GNN forward pass. For the Cartesian-proj and Spherical-proj flows we include a loss term that discourages certain reference vectors from being collinear, which improves the projection's stability. We provide further details for this and the various projection types in App. B.3.
**Center of mass shift.** The shift-CoM transform allows us to apply the aforementioned \(\mathrm{SE}(3)\times S_{n}\) equivariant coupling in the ambient space rather than the zero-CoM subspace. In particular, before transforming our observed vector \(\tilde{\mathbf{x}}\in\tilde{\mathcal{X}}\) with \(\mathcal{F}\), we lift it onto \(\mathcal{X}\). We achieve this by swapping the center of mass between \(\tilde{\mathbf{x}}\) and \(\mathbf{a}\). For now, assume \(\mathcal{A}=\mathcal{X}\), i.e. \(k=1\), with App. B.4 providing details for \(k>1\). Letting \(\tilde{\mathcal{A}}\subseteq\mathcal{A}\) be the subspace where all augmented variables that differ by
a translation occupy the same point, and \(\tilde{\mathbf{a}}\in\tilde{\mathcal{A}}\) be defined analogously to \(\tilde{\mathbf{x}}\), we apply the map \(\mathrm{ShiftCoM}:\tilde{\mathcal{X}}\times\mathcal{A}\to\mathcal{X}\times \tilde{\mathcal{A}}\) which acts on both of its arguments by subtracting from each of them the latter's CoM, that is,
\[\mathrm{ShiftCoM}(\tilde{\mathbf{x}},\mathbf{a})\triangleq(\tilde{\mathbf{x}}-\bar{\mathbf{a}},\,\mathbf{a}-\bar{\mathbf{a}})\quad\text{with}\quad\bar{\mathbf{a}}\triangleq\tfrac{1}{n} \sum_{i=1}^{n}[\mathbf{a}]^{i}. \tag{5}\]
This operation is invertible, with inverse \(\mathrm{ShiftCoM}(\tilde{\mathbf{a}},\mathbf{x})\), and has unit Jacobian determinant.
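A minimal numpy sketch of the ShiftCoM map (5) and of the inversion property just described (the function and variable names are ours):

```python
import numpy as np

def shift_com(x_tilde, a):
    """Eq. (5): subtract the CoM of `a` from both arguments.
    Afterwards `a` is zero-CoM and `x` picks up the (negated) CoM of `a`."""
    a_bar = a.mean(axis=0, keepdims=True)
    return x_tilde - a_bar, a - a_bar

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 3)); x_tilde = x - x.mean(axis=0, keepdims=True)
a = rng.normal(size=(6, 3))

x_full, a_tilde = shift_com(x_tilde, a)        # forward map
a_back, x_back = shift_com(a_tilde, x_full)    # inverse: same map with arguments swapped
assert np.allclose(x_back, x_tilde) and np.allclose(a_back, a)
```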
```
Inputs: Zero-CoM observation \(\tilde{\mathbf{x}}\), augmented variable \(\mathbf{a}\), coupling transforms \(\mathcal{F}^{(1)},\mathcal{F}^{(2)}\)
\((\mathbf{x},\tilde{\mathbf{a}})\leftarrow\mathrm{ShiftCoM}(\tilde{\mathbf{x}},\mathbf{a})\)
\((\mathbf{x},\tilde{\mathbf{a}})\leftarrow\mathcal{F}_{M}^{(1)}\circ\cdots\circ\mathcal{F}_{1}^{(1)}(\mathbf{x},\tilde{\mathbf{a}})\)
\((\mathbf{a},\tilde{\mathbf{x}})\leftarrow\mathrm{ShiftCoM}(\tilde{\mathbf{a}},\mathbf{x})\)
\((\mathbf{a},\tilde{\mathbf{x}})\leftarrow\mathcal{F}_{M}^{(2)}\circ\cdots\circ\mathcal{F}_{1}^{(2)}(\mathbf{a},\tilde{\mathbf{x}})\)
Output: \(\tilde{\mathbf{x}},\mathbf{a}\)
```
**Algorithm 1** Flow block \(f\)
**Putting the building blocks together.** Our flow transform is built as a sequence of \(L\) blocks. Each block, described in Alg. 1, consists of two equivariant coupling layers, see Fig. 1. Our observations \(\tilde{\mathbf{x}}\in\tilde{\mathcal{X}}\) are lifted onto \(\mathcal{X}\) with \(\mathrm{ShiftCoM}\), they are transformed with \(M\) core transformations \(\big{(}\mathcal{F}_{i}^{(1)}\big{)}_{i=1}^{M}\), and \(\mathrm{ShiftCoM}\) is applied one more time to map the observations back to the zero-CoM hyperplane. After this, our augmented variables \(\mathbf{a}\) are transformed with \(\big{(}\mathcal{F}_{i}^{(2)}\big{)}_{i=1}^{M}\).
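A compact sketch of one flow block as laid out in Alg. 1, with the core couplings passed in as generic callables; this is our own schematic rendering, not the implementation used in the experiments.

```python
import numpy as np

def shift_com(u, v):
    """Eq. (5): subtract the CoM of the second argument from both arguments."""
    v_bar = v.mean(axis=0, keepdims=True)
    return u - v_bar, v - v_bar

def flow_block(x_tilde, a, transforms_1, transforms_2):
    """One block of Alg. 1.  transforms_1/2 are lists of callables
    (u, cond) -> u_new, each playing the role of a core coupling like Eq. (4)."""
    x, a_tilde = shift_com(x_tilde, a)      # lift x off the zero-CoM hyperplane
    for F in transforms_1:
        x = F(x, a_tilde)                   # update observations conditioned on a
    a, x_tilde = shift_com(a_tilde, x)      # return x to zero CoM, lift a
    for F in transforms_2:
        a = F(a, x_tilde)                   # update augmented variables conditioned on x
    return x_tilde, a

# example with trivial "couplings" that just rescale their first argument
rng = np.random.default_rng(0)
x0 = rng.normal(size=(5, 3)); x0 -= x0.mean(axis=0, keepdims=True)
a0 = rng.normal(size=(5, 3))
toy_couplings = [lambda u, cond: 1.1 * u]
x1, a1 = flow_block(x0, a0, toy_couplings, toy_couplings)
assert np.allclose(x1.mean(axis=0), 0.0)    # observations end on the zero-CoM plane
```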
**Joint density evaluation.** Evaluation of the joint density \(q(\mathbf{x},\mathbf{a})\) is performed with Alg. 2. We first subtract the observed variables' CoM from both the observed and augmented variables. We then apply our \(L\) flow transform blocks before evaluating the transformed variables' density under our base distribution \(q_{0}\), which is described next. The log determinant of the core flow transform, \(f\), has a contribution from the projection, transform in the invariant space, and inverse projection (see App. B.3 for details).
### \(\mathrm{SE}(3)\times\mathbf{S}_{n}\) Invariant base distribution
Again, we assume \(\mathcal{A}=\mathcal{X}\), i.e. \(k=1\), with the generalization given in App. B.4. Our invariant choice of base distribution is \(q_{0}(\mathbf{x},\mathbf{a})=\tilde{\mathcal{N}}(\mathbf{x};\,0,I)\)\(\mathcal{N}(\mathbf{a};\,\mathbf{x},\eta^{2}I)\) where \(\mathbf{x}\in\mathbb{R}^{3n}\) and \(\mathbf{a}\in\mathbb{R}^{3n}\) refer to \(\mathbf{x}\) and \(\mathbf{a}\) flattened into vectors, \(\eta^{2}\) is a hyperparameter and we denote Gaussian distributions on \(\tilde{\mathcal{X}}\) as \(\tilde{\mathcal{N}}\)(Satorras et al., 2021; Yim et al., 2023) with density
\[\tilde{\mathcal{N}}(\mathbf{x};\,0,I)=(2\pi)^{-3(n-1)/2}\exp(-\tfrac{1}{2}\|\tilde {\mathbf{x}}\|_{2}^{2}). \tag{6}\]
We sample from it by first sampling from a standard Gaussian \(\mathcal{N}(0,I)\) and then removing the CoM. On the other hand, the distribution for \(\mathbf{a}\) is supported on \(\mathcal{A}\) which includes non-zero CoM points. It is centered on \(\mathbf{x}\), yielding joint invariance to translations. The isotropic nature of \(q_{0}\) makes its density invariant to rotations, reflections, and permutations (Satorras et al., 2021; Yim et al., 2023).
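A minimal sketch of sampling from this base distribution and evaluating its log-density via Eq. (6); the function names and the concrete value of \(\eta\) below are our own choices for illustration.

```python
import numpy as np

def sample_base(n, eta, rng):
    """Sample (x, a) from q0: x is a standard Gaussian projected to zero CoM,
    a is Gaussian centred on x with variance eta**2 (not zero-CoM)."""
    x = rng.normal(size=(n, 3))
    x -= x.mean(axis=0, keepdims=True)      # project x onto the zero-CoM hyperplane
    a = x + eta * rng.normal(size=(n, 3))
    return x, a

def log_q0(x, a, eta):
    """log q0(x, a) = log N~(x; 0, I) + log N(a; x, eta^2 I), cf. Eq. (6)."""
    n = x.shape[0]
    log_px = -0.5 * np.sum(x ** 2) - 0.5 * 3 * (n - 1) * np.log(2 * np.pi)
    log_pa = (-0.5 * np.sum((a - x) ** 2) / eta ** 2
              - 0.5 * 3 * n * np.log(2 * np.pi * eta ** 2))
    return log_px + log_pa

rng = np.random.default_rng(0)
x, a = sample_base(n=8, eta=1.0, rng=rng)
print(log_q0(x, a, eta=1.0))
```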
### Training and likelihood evaluation
In this section, we discuss learning and density evaluation with augmented variables.
**Invariant augmented target distribution.** We assume the density of our observations \(p\) is \(\mathrm{SE}(3)\times S_{n}\) invariant. Our target for augmented variables is \(\pi(\mathbf{a}|\mathbf{x})=\mathcal{N}(\mathbf{a};\,\mathbf{x},\eta^{2}I)\), where \(\eta^{2}\) matches the variance of the base Gaussian density over \(\mathbf{a}\). This satisfies joint invariance \(p(g\cdot\mathbf{x})\pi(g\cdot\mathbf{a}|g\cdot\mathbf{x})=p(\mathbf{x})\pi(\mathbf{a}|\mathbf{x})\) for any \(g\in\mathrm{SE}(3)\times S_{n}\), as shown in App. B.5.
**Learning from samples.** When data samples \(\mathbf{x}\sim p\) are available, we train our flow parameters by maximizing the joint likelihood, which is a lower bound on the marginal log-likelihood over observations up to a fixed constant
\[\mathbb{E}_{\mathbf{x}\sim p(\mathbf{x}),\mathbf{a}\sim\pi(\mathbf{a}|\mathbf{x})}[\log q(\mathbf{x}, \mathbf{a})]\leq\mathbb{E}_{p(\mathbf{x})}[\log q(\mathbf{x})]+C. \tag{7}\]
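In practice the left-hand side of (7) is estimated on minibatches by sampling \(\mathbf{a}\sim\pi(\cdot|\mathbf{x})\) for each data point. A minimal sketch, assuming a hypothetical `flow_log_prob(x, a)` that returns the joint log-density \(\log q(\mathbf{x},\mathbf{a})\) (this interface is a stand-in, not a specific library API):

```python
import numpy as np

def joint_nll(flow_log_prob, x_batch, eta, rng):
    """Monte Carlo estimate of -E[log q(x, a)] with a ~ pi(.|x) = N(x, eta^2 I).
    `flow_log_prob(x, a)` is assumed to return the joint log-density log q(x, a)."""
    losses = []
    for x in x_batch:                            # x: (n, 3) positions of one sample
        a = x + eta * rng.normal(size=x.shape)   # sample the augmented variable
        losses.append(-flow_log_prob(x, a))
    return float(np.mean(losses))

# dummy stand-in for the flow's joint log-density, only to show the call pattern
toy_log_prob = lambda x, a: -0.5 * (np.sum(x ** 2) + np.sum((a - x) ** 2))
rng = np.random.default_rng(0)
batch = [rng.normal(size=(5, 3)) for _ in range(4)]
print(joint_nll(toy_log_prob, batch, eta=1.0, rng=rng))
```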
**Learning from energy.** When samples are not available but we can query the unnormalized energy of a state \(U(\mathbf{x})\), with \(p(\mathbf{x})\propto\exp(-U(\mathbf{x}))\), we can minimize the joint reverse KL divergence. By
the chain rule of the KL divergence, this upper bounds the KL between marginals
\[D_{\mathrm{KL}}\left(q(\mathbf{x},\mathbf{a})\,||\,p(\mathbf{x})\pi(\mathbf{a}|\mathbf{x})\right)\geq D_{\mathrm{KL}}\left(q(\mathbf{x})\,||\,p(\mathbf{x})\right). \tag{8}\]
However, the reverse KL encourages mode-seeking (Minka, 2005) which may result in the model failing to characterize the full set of meta-stable molecular states. Therefore, we instead use _flow annealed importance sampling bootstrap_ (FAB) (Midgley et al., 2023), which targets the mass covering \(\alpha\)-divergence with \(\alpha=2\). In particular, we minimize the \(\alpha\)-divergence over the joint which leads to an upper bound on the divergence of the marginals
\[D_{2}\left(q(\mathbf{x},\mathbf{a})\,||\,p(\mathbf{x})\pi(\mathbf{a}|\mathbf{x})\right)\triangleq \int\tfrac{p(\mathbf{x})^{2}\pi(\mathbf{a}|\mathbf{x})^{2}}{q(\mathbf{x},\mathbf{a})}\,\mathrm{d} \mathbf{a}\,\mathrm{d}\mathbf{x}\geq\int\tfrac{p(\mathbf{x})^{2}}{q(\mathbf{x})}\,\mathrm{d} \mathbf{x}\triangleq D_{2}\left(q(\mathbf{x})\,||\,p(\mathbf{x})\right). \tag{9}\]
To compute unbiased expectations with the augmented flow we rely on the estimator \(\mathbb{E}_{p(\mathbf{x})}[f(\mathbf{x})]=\mathbb{E}_{q(\mathbf{x},\mathbf{a})}[w(\mathbf{x},\mathbf{a })f(\mathbf{x})]\) where \(w(\mathbf{x},\mathbf{a})=p(\mathbf{x})\pi(\mathbf{a}|\mathbf{x})/q(\mathbf{x},\mathbf{a})\). Minimizing the joint \(\alpha\)-divergence with \(\alpha=2\) corresponds to minimizing the variance in the joint importance sampling weights \(w(\mathbf{x},\mathbf{a})\), which allows for the aforementioned expectation to be approximated accurately.
**Evaluating densities.** To evaluate the marginal density of observations we use the importance-weighted estimator \(q(\mathbf{x})=\mathbb{E}_{\mathbf{a}\sim\pi\left(\cdot|\mathbf{x}\right)}\left[\frac{q(\mathbf{x},\mathbf{a})}{\pi(\mathbf{a}|\mathbf{x})}\right]\), noting that \(\pi\) is Gaussian and thus supported everywhere. The estimator variance vanishes when \(q(\mathbf{a}|\mathbf{x})=\pi(\mathbf{a}|\mathbf{x})\), as shown in App. B.9.
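A minimal sketch of this estimator, computed in log space for numerical stability, again with a hypothetical `flow_log_prob` stand-in for the flow's joint log-density:

```python
import numpy as np

def log_marginal(flow_log_prob, x, eta, num_samples, rng):
    """Estimate log q(x) = log E_{a ~ pi(.|x)}[ q(x, a) / pi(a|x) ] with K samples."""
    n3 = x.size                                  # flattened dimension 3n
    log_ratios = []
    for _ in range(num_samples):
        a = x + eta * rng.normal(size=x.shape)   # a ~ pi(.|x) = N(x, eta^2 I)
        log_pi = (-0.5 * np.sum((a - x) ** 2) / eta ** 2
                  - 0.5 * n3 * np.log(2 * np.pi * eta ** 2))
        log_ratios.append(flow_log_prob(x, a) - log_pi)
    lr = np.array(log_ratios)
    m = lr.max()
    return m + np.log(np.mean(np.exp(lr - m)))   # log-mean-exp of the ratios
```

The log-mean-exp trick avoids underflow when the individual ratios \(q(\mathbf{x},\mathbf{a})/\pi(\mathbf{a}|\mathbf{x})\) span many orders of magnitude.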
## 4 Experiments
### Training with samples: DW4, LJ13 and QM9 positional
First, we consider 3 problems that involve only positional information, with no additional features such as atom type or connectivity. Thus, the target densities are fully permutation invariant. The first two of these, namely DW4 and LJ13, are toy problems from Kohler et al. (2020), where samples are obtained by running MCMC on the 4-particle double-well energy function (DW4) and the 13-particle Lennard-Jones energy function (LJ13) respectively. The third problem, i.e. QM9 positional (Satorras et al., 2021), selects the subset of molecules with 19 atoms from the commonly used QM9 dataset (Ramakrishnan et al., 2014) and discards their node features.
For our model, Equivariant Augmented Coupling Flow (E-ACF), we consider all projection types (Vector-proj, Cartesian-proj, Spherical-proj) and compare them to: (1) Non-E-ACF: An augmented flow that is not rotation equivariant but is translation and permutation equivariant, as in (Klein et al., 2023). This model uses the same structure as the E-ACF but replaces the EGNN with a transformer which acts directly on atom positions, without any projection. We train Non-E-ACF with data-augmentation whereby we apply a random rotation to each sample within each training batch. (2) E-CNF ML: The SE(3) equivariant continuous normalizing flow from Satorras et al. (2021) trained by maximum likelihood. (3) E-CNF FM: An SE(3) equivariant continuous normalizing flow trained via flow matching (Lipman et al., 2023; Klein et al., 2023). (4) E-CNF-Diff: An SE(3) equivariant diffusion model (Hoogeboom et al., 2022) evaluated as a continuous normalizing flow. All equivariant generative models use the \(\mathrm{SE}(3)\) GNN architecture proposed by Satorras et al. (2021). The Cartesian-proj exhibited numerical instability on QM9-positional causing runs to crash early. To prevent these crashes it was trained at a lower learning rate than the other E-ACF models. App. C.3.1 provides a detailed description of these experiments.
On Tab. 1, we see that on DW4 and LJ13, E-CNF FM performs best, while E-ACF is competitive with E-CNF ML and E-CNF-Diff. We note that both DW4 and LJ13 have biased datasets (see App. C.3.1). On QM9-positional, the E-ACF is competitive with E-CNF-Diff and E-CNF FM, while the under-trained E-CNF ML from Satorras et al. (2021) performs poorly. We expect that with further tuning E-CNF-Diff and E-CNF FM would match the performance of Spherical-proj on QM9-positional. The Non-E-ACF performs much worse than the E-ACF, despite being trained for more epochs, demonstrating the utility of built-in equivariance. Furthermore, Fig. 2 shows that the distribution of inter-atomic distances of samples from our flow matches the training data well. Importantly, sampling and density evaluation of the E-ACF on an A100 GPU takes roughly 0.01 seconds. For the CNF trained by flow matching (E-CNF FM) and score matching (E-CNF-Diff), sampling takes on average 0.2 and 5 seconds, respectively. Thus, the E-ACF is faster for sampling than the CNF by more than an order of magnitude.
### Training with samples: Alanine dipeptide
Next, we approximate the Boltzmann distribution of alanine dipeptide in an implicit solvent at temperature \(T=800\,\mathrm{K}\). We train the models via maximum likelihood on samples generated by a replica exchange MD simulation (Mori and Okamoto, 2010), which serve as a ground truth. Our generated dataset consists of \(10^{6}\) samples in the training and validation set as well as \(10^{7}\) samples in the test set. Besides our E-ACF using the different projection schemes that we introduced in Sec. 3.1, we train the non-SO(3) equivariant flow (Non-E-ACF) with data augmentation similarly to the previous experiments. Moreover, we train a flow on internal coordinates as in Midgley et al. (2023). The joint effective sample size (ESS) is reported for the E-ACF models which is a lower bound of the marginal ESS (see Eq. (9)). For the CNFs, the Hutchinson trace estimator is used for the log density (Grathwohl et al., 2018). This results in a biased estimate of the ESS, which may therefore be spurious. Further details about model architectures, training and evaluation are given in App. C.3.2.
The results are shown in Fig. 3 and Tab. 2. All variants of our E-ACF clearly outperform the Non-E-ACF: the Kullback-Leibler divergence (KLD) of the Ramachandran plots and the NLL are significantly lower, while the reverse and forward ESS, see App. B.10, are higher. The flow trained on internal coordinates is only marginally better regarding the NLL and KLD than
\begin{table}
\begin{tabular}{l c c c c} \hline \hline & KLD & NLL & Rev ESS (\%) & Fwd ESS (\%) \\ \hline Flow on internal coordinates & \((2.01\pm 0.04)\cdot 10^{-3}\) & \(-190.15\pm 0.02\) & \(1.61\pm 0.03\) & – \\ E-CNF FM & \((3.83\pm 0.18)\cdot 10^{-2}\) & \(\mathbf{-190.13\pm 0.09}\) & \((3.1\pm 0.2)\cdot 10^{-4}\) & \((2.5\pm 0.8)\cdot 10^{-226}\) \\ E-CNF-Diff & \((8.86\pm 0.49)\cdot 10^{-3}\) & \(-188.31\pm 0.01\) & \((8.1\pm 1.1)\cdot 10^{-4}\) & \((5.1\pm 4.1)\cdot 10^{-237}\) \\ Non-E-ACF & \((1.66\pm 0.01)\cdot 10^{-1}\) & \(-184.57\pm 0.35\) & \(0.14\pm 0.07\) & \((5.5\pm 4.5)\cdot 10^{-30}\) \\ Vector-proj E-ACF & \((6.15\pm 1.21)\cdot 10^{-3}\) & \(-188.56\pm 0.01\) & \(19.4\pm 13.4\) & \((\mathbf{9.4\pm 7.7})\cdot\mathbf{10^{-7}}\) \\ Cartesian-proj E-ACF & \((3.46\pm 0.28)\cdot 10^{-3}\) & \(-188.59\pm 0.00\) & \(\mathbf{52.5\pm 3.2}\) & \((9.7\pm 7.9)\cdot 10^{-9}\) \\ Spherical-proj E-ACF & \((\mathbf{2.55\pm 0.29})\cdot\mathbf{10^{-3}}\) & \(-188.57\pm 0.00\) & \(\mathbf{48.4\pm 7.2}\) & \((5.0\pm 4.1)\cdot 10^{-14}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Alanine dipeptide results. KLD is the empirical KLD of the Ramachandran plots (see Fig. 3). Forward ESS is estimated with the test set. Reverse ESS is estimated with \(10^{5}\) model samples. Results are averaged over 3 seeded runs, with the standard error reported as uncertainty.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & DW4 & LJ13 & QM9 positional \\ \hline E-CNF ML & \(8.15\pm N/A\) & \(30.56\pm N/A\) & \(-70.2\pm N/A\) \\ E-CNF FM & \(\mathbf{7.42\pm 0.03}\) & \(\mathbf{28.79\pm 0.25}\) & \(-123.62\pm 1.03\) \\ E-CNF-Diff & \(8.01\pm 0.03\) & \(31.02\pm 0.12\) & \(-158.30\pm 0.15\) \\ Non-E-ACF & \(10.07\pm 0.03\) & \(33.32\pm 0.34\) & \(-76.76\pm 1.77\) \\ Vector-proj E-ACF & \(8.69\pm 0.03\) & \(30.19\pm 0.12\) & \(-152.23\pm 6.44\) \\ Cartesian-proj E-ACF & \(8.82\pm 0.08\) & \(30.89\pm 0.09\) & \(-138.62\pm 0.74\) \\ Spherical-proj E-ACF & \(8.61\pm 0.05\) & \(30.33\pm 0.16\) & \(\mathbf{-165.71\pm 1.35}\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Negative log-likelihood results for flows trained by maximum likelihood on DW4, LJ13 and QM9-positional. E-CNF ML results are from Satorras et al. (2021). Best results are emphasized in **bold**. The results are averaged over 3 seeded runs, with the standard error reported as uncertainty.
the Cartesian-proj and Spherical-proj E-ACF, while we outperform it considering the ESS. Note that the flow on internal coordinates explicitly models the angles \(\phi\) and \(\psi\), while the E-ACF operates on the underlying Cartesian coordinates. The E-ACFs outperform the E-CNF-Diff model in all metrics, but the E-CNF trained with flow matching has a slightly lower NLL while having a significantly higher KLD on the Ramachandran plot. This could be due to a better performance on other marginals or on some outlier data points in the test set. The forward ESS is very low for all models, which suggests that the models do not cover some regions in the target distribution, and that the reverse ESS is spurious. Alternatively, this may be from numerical instabilities in the models. To the best of our knowledge, our models are the first to learn the full Boltzmann distribution of a molecule purely on Cartesian coordinates while being competitive with a flow trained on internal coordinates.
### Energy-based training: DW4 and LJ13
Lastly, we demonstrate that our proposed flow can be trained on the DW4 and LJ13 problems using only the target's unnormalized density with the FAB algorithm (Midgley et al., 2023). The annealed importance sampling procedure within FAB requires sampling from the flow and evaluating its density multiple times. This is used within the training loop of FAB making it significantly more expensive per parameter update than training by maximum likelihood. Given that sampling and density evaluation with CNFs is very expensive, training them with FAB is intractable. Thus, we only report results for our flow, as well as for the Non-E-ACF. We train the Non-E-ACF for more iterations than the E-ACF, such that the training times are similar, given that the Non-E-ACF is faster per iteration. App. C.4 provides further details on the experimental setup.
Figure 3: Ramachandran plots, i.e. marginal distribution of the dihedral angles \(\phi\) and \(\psi\) (see App. C.3.2), obtained with MD (ground truth) and various models.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{DW4} & \multicolumn{3}{c}{LJ13} \\ & Rev ESS (\%) & Fwd ESS (\%) & NLL & Rev ESS (\%) & Fwd ESS (\%) & NLL \\ \hline Non-E-ACF & \(35.94\pm 2.63\) & \(5.45\pm 4.10\) & \(7.38\pm 0.01\) & \(5.38\pm 3.66\) & \(4.14\pm 3.10\) & \(33.22\pm 0.96\) \\ Vector-proj E-ACF & \(\mathbf{84.29\pm 0.41}\) & \(\mathbf{83.39\pm 0.79}\) & \(\mathbf{7.11\pm 0.00}\) & \(59.60\pm 1.13\) & \(65.20\pm 1.61\) & \(30.33\pm 0.03\) \\ Cartesian-proj E-ACF & \(82.44\pm 0.50\) & \(80.08\pm 0.64\) & \(7.13\pm 0.00\) & \(60.68\pm 0.41\) & \(65.54\pm 0.37\) & \(30.34\pm 0.01\) \\ Spherical-proj E-ACF & \(80.44\pm 0.88\) & \(81.46\pm 0.95\) & \(7.14\pm 0.00\) & \(\mathbf{62.09\pm 0.76}\) & \(\mathbf{66.13\pm 0.11}\) & \(\mathbf{30.21\pm 0.02}\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results for training by energy with FAB. Best results are emphasized in **bold**. Results are averaged over 3 seeded runs, with the standard error reported as uncertainty. Reverse ESS is estimated with \(10^{4}\) samples from the flow. Forward ESS is estimated with the test sets.
Tab. 3 shows that the E-ACF trained with FAB successfully approximates the target Boltzmann distributions, with reasonably high joint ESS, and NLL comparable to the flows trained by maximum likelihood. Additionally, the ESS may be improved further by combining the trained flow with AIS, as shown in App. C.4. In both problems the Non-E-ACF performs worse, both in terms of ESS and NLL. All models trained by maximum likelihood have a much lower ESS (see App. C.3.1). This is expected, as unlike the \(\alpha=2\)-divergence loss used in FAB, the maximum likelihood objective does not explicitly encourage minimizing importance weight variance. Furthermore, the flows trained by maximum likelihood use a relatively small, biased training set, which therefore limits their quality.
## 5 Discussion and Related Work
Augmented flows have been used for improving expressiveness of Boltzmann generators (Kohler et al., 2023, 2023, 2024); however, these models were not equivariant. Klein et al. (2023) proposed an augmented normalizing flow architecture to provide conditional proposal distributions for MD simulations and use a coupling scheme similar to ours. However, this model only achieves translation and permutation equivariance and the authors make their flow approximately rotation invariant through data augmentation. In our experiments, we found data augmentation to perform significantly worse than intrinsic invariance. In principle, Klein et al. (2023)'s model could be made fully invariant by substituting in our flow's projection-based coupling transform.
An alternative to our equivariant flow and equivariant CNFs are the equivariant residual flows proposed in Bose et al. (2022). Alas, residual flows require fixed-point iteration for training and inversion. This is expensive and may interact poorly with energy-based training methods such as FAB (Midgley et al., 2023) which require fast exact sampling and densities. Furthermore, Bose et al. (2022) found that the spectral normalization required for residual flows did not interact well with the equivariant CNN architecture in their experiments.
There has been recent progress in improving the sampling speed of CNFs/diffusion models (Tong et al., 2023; Song et al., 2023) and on using these models for sampling from unnormalized densities (Vargas et al., 2023; Zhang and Chen, 2022; Zhang et al., 2023). Thus, in the future CNF/diffusion models trained by energy may prove to be competitive with discrete-time flow based methods.
The strategy of projection into a local reference frame to enforce equivariance has been successfully employed in existing literature, specifically for protein backbone generation (Jumper et al., 2021; Yim et al., 2023). Here we have focused on modelling the full set of Cartesian coordinates of a molecule, but an interesting avenue for future work is to extend our framework to other domains, such as modelling rigid bodies, which has applications to protein backbone generation (Jumper et al., 2021; Yim et al., 2023) and many-body systems (Kohler et al., 2023).
**Limitations.** Although our flow is significantly faster than alternatives such as CNFs, the expensive EGNN forward pass required in each layer of the flow makes it more computationally expensive than flows on internal coordinates. Additionally, we found our flow to be less numerically stable than flows on internal coordinates, which we mitigate via adjustments to the loss, optimizer and neural network (see App. B.3, App. C.1, App. C.2). Our implementation uses the E(3) equivariant EGNN proposed by Satorras et al. (2021). However, recently there have been large efforts towards developing more expressive, efficient and stable EGNN architectures (Fuchs et al., 2020; Batatia et al., 2022; Musaelian et al., 2023; Liao and Smidt, 2023). Incorporating these into our flow may improve performance, efficiency and stability. This would be especially useful for energy-based training, where the efficiency of the flow is a critical factor.
## 6 Conclusion
We have proposed an SE(3) equivariant augmented coupling flow that achieves similar performance to CNFs when trained by maximum likelihood, while allowing for faster sampling and density evaluation by more than an order of magnitude. Furthermore, we showed that our flow can be trained as a Boltzmann generator using only the target's unnormalized density, on problems where internal coordinates are inadequate due to permutation symmetries, and doing so with a CNF is computationally intractable. It is possible to extend our model to learn the Boltzmann distribution of diverse molecules, by conditioning on their molecular graph, which we hope to explore in the future.
## Acknowledgments and Disclosure of Funding
We thank Gabor Csanyi and his group for the helpful discussions. Laurence Midgley acknowledges support from Google's TPU Research Cloud (TRC) program and from the EPSRC through the Syntech PhD program. Vincent Stimper acknowledges the Max Planck Computing and Data Facilities for providing extensive computing resources and support. Javier Antorán acknowledges support from Microsoft Research, through its PhD Scholarship Programme, and from the EPSRC. Jose Miguel Hernandez-Lobato acknowledges support from a Turing AI Fellowship under grant EP/V023756/1. Jose Miguel Hernandez-Lobato and Emile Mathieu are supported by an EPSRC Prosperity Partnership EP/T005386/1 between Microsoft Research and the University of Cambridge. This work has been performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service ([http://www.hpc.cam.ac.uk](http://www.hpc.cam.ac.uk)) funded by an EPSRC Tier-2 capital grant. It was also supported by the German Federal Ministry of Education and Research (BMBF): Tubingen AI Center, FKZ: 01IS18039B; and by the Machine Learning Cluster of Excellence, EXC number 2064/1 - Project number 390727645.
|
2305.11086 | Temporal correlation in the inverse-gamma polymer | Understanding the decay of correlations in time for (1+1)-dimensional polymer
models in the KPZ universality class has been a challenging topic. Following
numerical studies by physicists, concrete conjectures were formulated by
Ferrari and Spohn (Ferrari-Spohn '16) in the context of planar exponential last
passage percolation. These have mostly been resolved by various authors. In the
context of positive temperature lattice models, however, these questions have
remained open. We consider the time correlation problem for the exactly
solvable inverse-gamma polymer in $\mathbb{Z}^2$. We establish, up to constant
factors, upper and lower bounds on the correlation between free energy
functions for two polymers rooted at the origin (droplet initial condition)
when the endpoints are either close together or far apart. We find the same
exponents as predicted in (Ferrari-Spohn '16). Our arguments rely on the
understanding of stationary polymers, coupling, and random walk comparison. We
use recently established moderate deviation estimates for the free energy. In
particular, we do not require asymptotic analysis of complicated exact
formulae. | Riddhipratim Basu, Timo Seppäläinen, Xiao Shen | 2023-05-18T16:18:43Z | http://arxiv.org/abs/2305.11086v2 | # Temporal correlation in the inverse-gamma polymer
###### Abstract
Understanding the decay of correlations in time for (1+1)-dimensional polymer models in the KPZ universality class has been a challenging topic. Following numerical studies by physicists, concrete conjectures were formulated by Ferrari and Spohn [27] in the context of planar exponential last passage percolation. These have mostly been resolved by various authors. In the context of positive temperature lattice models, however, these questions have remained open. We consider the time correlation problem for the exactly solvable inverse-gamma polymer in \(\mathbb{Z}^{2}\). We establish, up to constant factors, upper and lower bounds on the correlation between free energy functions for two polymers rooted at the origin (droplet initial condition) when the endpoints are either close together or far apart. We find the same exponents as predicted in [27]. Our arguments rely on the understanding of stationary polymers, coupling, and random walk comparison. We use recently established moderate deviation estimates for the free energy.
In particular, we do not require asymptotic analysis of complicated exact formulae.
###### Contents
* 1 Introduction
* 1.1 Universality in stochastic growth
* 1.2 Methods in the study of the KPZ class
* 1.3 Time correlation problem
* 1.4 Organization of the paper
* 2 Main results
* 3 Preliminaries on the polymer model
* 3.1 Notation
* 3.2 Regularity of the shape function and the characteristic direction
* 3.3 Free energy estimates
* 3.4 Stationary inverse-gamma polymer
* 3.5 Random walk comparison for the free energy profile
* 4 Local fluctuations
* 5 Proof of Theorem 2.1
* 6 Proof of Theorem 2.2
* 6.1 Barrier event \(\mathcal{B}_{\rm bar}\)
* 6.2 Concentration of the free energy between \((0,0)\) and \(\mathcal{L}_{r}\)
* 6.3 Concentration of the global free energy along \(\mathcal{L}_{r}\)
* 6.4 Expectation bounds
* 6.5 Constrained variance lower bound
* 6.6 Covariance lower bound
* 7 Nonrandom fluctuation lower bound
* 8 Moderate deviation bounds for the left tail
* 8.1 Upper bound for the left tail
* 8.2 Lower bound for the left tail
* A Proofs of the free energy estimates of Section 3.3
* A.1 Free energy and path fluctuations
* A.2 Interval-to-line free energy
* A.3 Estimates for the constrained free energy
* A.4 Minimum and Maximum of the constrained free energy in a box
* B Proof of the random walk comparison in Section 3.5
* C Monotonicity for the polymer model
* D Sub-exponential random variables.
* E Random walk estimate
## 1 Introduction
### Universality in stochastic growth
Random growth models have always been at the heart of probability theory. The simplest example of random growth is a sum of independent and identically distributed (i.i.d.) random variables. Provided their second moment is finite, the large-scale behavior of the centered sum is independent of the distribution of the summands, as described by the _central limit theorem_. With the fluctuation exponent \(1/2\) and the Gaussian distribution as the central limit scaling law, this model is a member of the _Gaussian universality class_.
Following the seminal 1986 physics work of Kardar, Parisi and Zhang [39], one major goal of recent probability research has been to demonstrate that very different universal behavior arises in a wide class of stochastic models with spatial dependence. Extensive computer simulations, non-rigorous physical arguments, laboratory experiments, and rigorous mathematical results have all suggested that this _Kardar-Parisi-Zhang universality class_ (KPZ) is rich. It includes interacting particle systems, percolation models, polymer models, random tilings, certain stochastic PDEs and more. All known members of the KPZ class share universal fluctuation exponents and limit distributions from random matrix theory [18, 48].
In the past twenty-five years, many ground-breaking advances in understanding KPZ universality have come through the study of _exactly solvable_ or _integrable_ models. However, for the vast majority of the conjectural members of the KPZ class, these fine techniques of integrable probability, representation theory, and algebraic combinatorics do not apply. With the eventual goal of extending results beyond the integrable cases, a second line of research uses in principle broadly applicable probabilistic techniques and geometric arguments to study the integrable models. This paper falls in the latter category. We study the temporal correlation in the inverse-gamma polymer model, originally introduced by the second author [50].
In the remainder of this introduction, Section 1.2 gives a brief overview of the presently used mathematical methods in KPZ study, Section 1.3 discusses the correlation problem studied in this paper, and Section 1.4 explains the organization of the rest of the paper.
### Methods in the study of the KPZ class
Several different approaches to studying the exactly solvable models of the KPZ class have emerged over the last 25 years. We describe these methods briefly on a very general level, mainly in the context of zero-temperature last-passage percolation (LPP) with exponential or geometric weights, where mathematical development is farthest along.
#### 1.2.1 Integrable probability
For exactly solvable LPP models based on RSK correspondence or similar remarkable bijections, it is possible to write down explicit formulas for one-point and multi-point distributions. These formulas typically involve Fredholm determinants and are rather complicated to work with. Integrable probability estimates refer to estimates and asymptotics obtained by careful analysis of these formulas. Beginning with the seminal work of Baik, Deift and Johansson [1], which established the Tracy-Widom scaling limit for the longest increasing subsequence problem, this approach has brought much success. This includes Airy process limits for the last-passage time profile started from particular initial conditions and more recently the construction of the KPZ fixed point that admits general initial conditions. Formulas for two-time distributions have also been obtained but so far these have not yielded useful asymptotics.
#### 1.2.2 Gibbsian line ensembles
A useful approach based on resampling in line ensembles was introduced by Corwin and Hammond [20]. In the zero temperature model of Brownian last-passage percolation, where the corresponding line ensemble has the _Brownian Gibbs property_, a detailed understanding of the passage time profile was obtained in a series of works [31, 32, 33, 34]. A similar approach exists for the positive temperature KPZ equation and has been used recently to great effect [21]. The Gibbsian line ensemble approach led to the construction of the _directed landscape_ (DL), the space-time scaling limit of zero temperature models. Subsequently, DL limits were established for the KPZ equation [57, 56].
#### 1.2.3 Percolation methods with integrable inputs
Yet another suite of techniques uses black-box integrable inputs together with probabilistic and geometric arguments that can in general be referred to as _percolation arguments_. The inputs typically used in this line of work include (i) uniform curvature of the limit shape, (ii) moderate deviation estimates of the passage time, and (iii) convergence of the one point distribution to the
GUE Tracy-Widom law that has a negative mean [8, 10]. Some cases require more sophisticated inputs such as the Airy process limit of the full profile [11]. These inputs are typically obtained from the first approach above. In cases like exponential and Brownian LPP, one can also exploit random matrix connections to obtain similar, albeit usually a bit weaker, estimates [44]. These estimates are then applied to obtain fine information about the geodesic geometry, which in turn provides further information about the space-time profile of last-passage times. An axiomatic framework for these types of arguments has been developed and used in [10].
#### 1.2.4 Coupling methods
The most probabilistic approach that minimizes the role of integrability utilizes couplings with stationary growth processes. In zero temperature the seminal work was [17], followed by [5], and [50] began this development in positive temperature polymer models. This effort has been recently revolutionized by [23] that made possible certain quantitatively optimal bounds. Presently this approach still relies on a special feature of the model, namely, that the stationary measure is explicitly known. It has been most successful in the study of solvable models. Its applicability does extend to some stochastic processes presently not known to be fully integrable, namely classes of zero-range processes and interacting diffusion [6, 41]. Through comparisons with the stationary process, many results about the geodesics, parallel to those developed by the previous approach, have been proved [3, 51]. Following the optimal bounds of [23], some of the integrable inputs of the percolation approach can now be supplied by coupling techniques, thereby reducing dependence on random matrix theory and integrable probability.
#### 1.2.5 The approach of this paper
The current paper uses a combination of the final two approaches discussed above to study the temporal decay of correlations in the positive-temperature exactly solvable inverse gamma polymer model. The major barrier to applying the percolation arguments from [9, 11] has been the lack of one-point moderate deviation estimates. One advantage of the coupling techniques is that they can be extended from zero temperature to positive temperature [24, 58]. In the context of the semi-discrete O'Connell-Yor polymer, one-point estimates have recently been obtained under stationary initial conditions [43] and more recently for the point-to-point problem [42]. These techniques carry over to the inverse gamma polymer model as well. This opens the door for proving versions of the lattice LPP correlation results obtained through one-point estimates, now in the context of the positive temperature polymer models. Our paper provides the first example in this vein, by proving bounds on the time correlation structure of a lattice polymer model. We expect similar techniques to be applicable to a number of other related problems. To emphasize, although our approach here is similar to the one described in SS 1.2.3, we only require the one point moderate deviation inputs, and these are provided by the recent advances in the coupling/stationary polymer approach of SS 1.2.4. Therefore our work does not rely on the integrable methods described in SS 1.2.1-1.2.2.
### Time correlation problem
We turn to the precise problem we are studying, its history, and our contributions.
A central object in KPZ models is the random _height function_\(h:\mathbb{R}\times\mathbb{R}_{\geq 0}\to\mathbb{R}\). Depending on the situation studied, \(h(x,t)\) can be the height of a randomly moving interface over spatial location \(x\) at time \(t\), the passage time on the plane from the origin to \((x,t)\), or the free energy of point to point polymers between the origin and location \((x,t)\).
The spatial statistics \(x\mapsto h(x,t_{0})\) at a fixed time \(t_{0}\) are much better understood than the temporal process \(t\mapsto h(x_{0},t)\). Multi-time joint distributions for the height function have been obtained in several exactly solvable models [2, 36, 37, 38, 45, 46]. However, it has remained difficult to extract useful information from these impressive formulas.
Short of capturing the full distribution of the temporal evolution, a natural object to study is the two-time correlation function
\[\mathbb{C}\mathrm{orr}(h(0,t_{1}),h(0,t_{2})), \tag{1.1}\]
where we have now singled out the origin \(x_{0}=0\) as the spatial location. This correlation was first studied by physicists Takeuchi and Sano [55], who measured the quantity (1.1) from a turbulent liquid crystal experiment. Subsequently came numerical simulations [52, 54] that predicted the behavior of (1.1) by fixing \(t_{1}\) and sending \(t_{2}\) to infinity.
#### 1.3.1 Prior rigorous time correlation results
Ferrari and Spohn [27] studied the large-time behavior of (1.1) from various initial conditions in the corner growth model (CGM). CGM is the most-studied zero-temperature KPZ last-passage growth model on the lattice. Taking time to infinity, they obtained a variational formulation of the (rescaled) height function in terms of two independent _Airy processes_. From the variational problem they derived an explicit formula for the limiting two-time covariance under the stationary initial distribution, as \(t_{1},t_{2}\) both tend to infinity. For the step and flat initial conditions [27] conjectured asymptotics in the regimes \(t_{1}/t_{2}\to 0\) and \(t_{1}/t_{2}\to 1\).
Following the conjectures of [27], several rigorous works studied this problem under different initial conditions in the zero-temperature setting.
The time correlation problem for the droplet initial condition in exponential LPP was solved in two parallel works. Both employed a combination of integrable inputs and a geometric study of geodesics. The results of [25], which also utilizes comparison with stationary processes, are limiting in nature and also used the convergence of the passage time profile to the Airy\({}_{2}\) process. They also obtained an exact formula for the stationary case and identified universal behavior with respect to the initial condition when the two time points are close to one another. In contrast, [9] used one point estimates, convergence to Tracy-Widom GUE distribution (and the negativity of its mean), together with geometric arguments, to obtain similar, but quantitatively weaker, results for the droplet initial condition, but valid also in the pre-limit setting. When the two time points are far away, the case of the flat initial condition was dealt with in [11]. This work relied on strong Brownian comparison results for the Airy\({}_{2}\) process, in addition to convergence to it. The time correlation problem in the half-space exponential LPP has also been recently studied in [26].
The Gibbsian line ensemble approach has also been useful in this context. In an unpublished work, Corwin and Hammond solved the time correlation problem in Brownian LPP with this approach. Subsequently, together with Ghosal, they extended their work to the positive temperature KPZ equation [19].
#### 1.3.2 Our work: temporal correlations in positive temperature on the lattice
Prior to the present work, there does not appear to be any mathematically rigorous work on this problem for positive temperature lattice models. Application of Gibbsian line ensemble techniques appears difficult. Although the one-point convergence to Tracy-Widom GUE is known for the inverse gamma polymer [7, 40, 14], the convergence of the free energy profile does not appear to be known for any positive temperature lattice model (see [56] for recent results along these lines for
the O'Connell-Yor polymer and the KPZ equation), and hence the approach of [25] does not seem too feasible in this case.
Our approach is inspired by [9] and the recent progress in stationary techniques. One cannot directly apply the techniques of [9] in the positive temperature set-up, as much of it refers to the fluctuations and coalescence of _geodesics_ which do not exist in our setting. We modify their definitions appropriately and construct events in terms of the free energy profile and restricted free energies, which can serve similar purposes. Certain estimates are directly proved using stationary techniques. The novel technical ingredients of our paper are developed in these directions.
For the upper bound, we directly prove the locally diffusive behavior of the free energy profile (Section 4) instead of utilizing local fluctuations of geodesics as in [9]. For the lower bound, we use the FKG inequality as in [9], but in the absence of geodesics, the resampling argument is significantly different, and, in fact, somewhat simpler. We give a direct proof of a lower bound of the difference between the expected free energy and its long-term value, at the standard deviation scale. This way we avoid the need for the Tracy-Widom limit, and in fact, this provides a new proof of the negativity of the mean of the Tracy-Widom distribution. Our arguments carry over to the zero temperature setting as well, thus eliminating the integrable probability inputs from the LPP results of [9].
To summarize, we establish the exponents that govern the decay of correlations in the time direction for the inverse gamma polymer model. As expected on universality grounds, the exponents are the same as in the zero-temperature case. Ours is the first such result in a lattice polymer model in the KPZ class. The only special feature of the model we use is the explicit description of the stationary process. In particular, we do not use any weak convergence result (to the Tracy-Widom distribution, for example). Our techniques consist of one-point estimates obtained through stationary polymers, random walk comparisons, and percolation arguments. Ours is the first instance where the stationary polymer techniques have been put together with the percolation arguments in a positive temperature setting. This combination can be useful for extending many zero-temperature results to the inverse gamma polymer.
That our approach does not rely on integrable inputs is not only potentially useful for future extensions, but also necessary in the current state of the subject. There are fewer integrable tools available for our model in comparison with exponential LPP, Brownian LPP or the KPZ equation. There is no determinantal formula for the multi-point joint distribution of the free energy, and there is no corresponding Brownian Gibbs property in this discrete setting (unless one takes a certain limit of the model). Lastly, the inverse-gamma polymer model sits higher up in the hierarchy of the KPZ models. This means that through appropriate transformations and limits, LPP, BLPP, and the KPZ equation can be derived from the inverse-gamma polymer. In consequence, our results should carry over to these other models and thereby remove the integrable inputs utilized in previous works.
### Organization of the paper
The polymer model is defined and our main results on the correlation bounds, Theorems 2.1 and 2.2, are stated in Section 2. Theorem 2.1 is proved in Section 5 and Theorem 2.2 in Section 6. Auxiliary results needed for the main proofs are collected in Sections 3 and 4. We treat the proofs of these auxiliary results differently depending on their status. Those that require significant proof are verified in Sections 7 and 8, while those based on existing work, such as analogous zero-temperature results, are in the appendices. Next, we explain the organization of the supporting results in more detail.
Section 3.1 contains additional notation and conventions, in particular, for various subsets of the lattice
and partition functions of restricted collections of paths. Section 3.2 collects regularity properties of the shape function. Nothing beyond calculus is used here.
Section 3.3 covers various estimates for the free energy, organized into several subsections.
* Section 3.3.1 gives moderate deviation estimates for the point-to-point free energy. Two estimates for the left tail that appear here are used multiple times in the paper and proved in Section 8.
* Sections 3.3.2-3.3.6 contain a variety of estimates. These are used only for the lower bound of Theorem 2.2. Those that have previously appeared in the zero-temperature setting have their proofs in Appendix A.1.
* Section 3.3.7 gives a lower bound on the discrepancy between the asymptotic free energy and the finite-volume expected free energy, sometimes called the _non-random fluctuation_. It is proved in Section 7 by comparison with the increment-stationary polymer. This result is used in the proof of the lower bound of the left tail in Section 3.3.1 and the construction of the Barrier event \(\mathcal{B}_{\mathrm{bar}}\) in Section 6.1.
Section 3.4 introduces the increment-stationary inverse-gamma polymer and discusses some of its properties. The proofs for these properties can be found in Section 7. Among the results here are upper and lower bounds on the free energy difference between the stationary model and the i.i.d. model.
Section 3.5 presents a random walk comparison of the free energy profile. Specifically, we establish upper and lower bounds on the free energy along a down-right path using two random walks. The proof of this comparison, which relies on the stationary polymer process, can be found in Appendix B.
Section 4 is dedicated to local fluctuations in the free energy profile. The proofs in this section rely on the moderate deviation estimates and the random walk comparison. The results obtained here are crucial for the proofs of the main theorems.
**Acknowledgements.** RB was partially supported by a MATRICS grant (MTR/2021/000093) from SERB, Govt. of India, DAE project no. RTI4001 via ICTS, and the Infosys Foundation via the Infosys-Chandrasekharan Virtual Centre for Random Geometry of TIFR. TS was partially supported by National Science Foundation grants DMS-1854619 and DMS-2152362 and by the Wisconsin Alumni Research Foundation. Part of this work was conducted at the International Centre for Theoretical Sciences (ICTS), Bengaluru, India during the program "First-passage percolation and related models" in July 2022 (code: ICTS/fpp-2022/7); the authors thank ICTS for the hospitality.
## 2 Main results
Let \(\{Y_{\mathbf{z}}\}_{\mathbf{z}\in\mathbb{Z}^{2}}\) be a collection of positive weights on the integer lattice \(\mathbb{Z}^{2}\). Fix two points \(\mathbf{u},\mathbf{v}\in\mathbb{Z}^{2}\); we denote the collection of up-right paths between them by \(\mathbb{X}_{\mathbf{u},\mathbf{v}}\). This means that each \(\gamma\in\mathbb{X}_{\mathbf{u},\mathbf{v}}\) is viewed as a sequence of vertices \(\gamma=(\gamma_{0},\gamma_{1},\ldots,\gamma_{|\mathbf{u}-\mathbf{v}|_{1}})\) such that \(\gamma_{0}=\mathbf{u}\), \(\gamma_{|\mathbf{u}-\mathbf{v}|_{1}}=\mathbf{v}\) and \(\gamma_{i+1}-\gamma_{i}\in\{\mathbf{e}_{1},\mathbf{e}_{2}\}\).
The _point-to-point polymer partition function_ between \(\mathbf{u}\) and \(\mathbf{v}\) is defined by
\[Z_{\mathbf{u},\mathbf{v}}=\sum_{\gamma\in\mathbb{X}_{\mathbf{u},\mathbf{v}}} \prod_{i=1}^{|\mathbf{u}-\mathbf{v}|_{1}}Y_{\gamma_{i}}, \tag{2.1}\]
provided that \(\mathbf{u}\neq\mathbf{v}\) and \(\mathbb{X}_{\mathbf{u},\mathbf{v}}\) is non-empty. Otherwise, we set \(Z_{\mathbf{u},\mathbf{v}}=0\). Note the convention here that the weight \(Y_{\mathbf{u}}\) at the beginning of the path does not enter into the definition of the partition function, since the product starts with \(i=1\).
The _free energy_ is defined to be \(\log Z_{\mathbf{u},\mathbf{v}}\) and takes the value \(-\infty\) if \(Z_{\mathbf{u},\mathbf{v}}=0\). Provided that \(Z_{\mathbf{u},\mathbf{v}}>0\), the _quenched polymer measure_ is a probability measure on the set of paths \(\mathbb{X}_{\mathbf{u},\mathbf{v}}\) which is defined by
\[Q_{\mathbf{u},\mathbf{v}}\{\gamma\}=\frac{1}{Z_{\mathbf{u},\mathbf{v}}}\prod_ {i=1}^{|\mathbf{u}-\mathbf{v}|_{1}}Y_{\gamma_{i}}\qquad\text{ for }\gamma\in \mathbb{X}_{\mathbf{u},\mathbf{v}}.\]
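Although the sum in (2.1) runs over exponentially many paths, the partition function obeys the recursion \(Z_{\mathbf{u},\mathbf{v}}=Y_{\mathbf{v}}\,(Z_{\mathbf{u},\mathbf{v}-\mathbf{e}_{1}}+Z_{\mathbf{u},\mathbf{v}-\mathbf{e}_{2}})\), which follows from decomposing a path according to its last step. For illustration only, the following minimal numpy sketch computes the free energy by this recursion, working in log space to avoid overflow (the function name and the placeholder weights are ours):

```python
import numpy as np

def log_partition(log_Y):
    """Free energy log Z_{(0,0),(i,j)} for all (i,j), given log-weights log_Y
    on an (N1 x N2) grid.  Uses Z_{u,v} = Y_v (Z_{u,v-e1} + Z_{u,v-e2}), with the
    convention that the weight at the starting point is not counted."""
    N1, N2 = log_Y.shape
    logZ = np.full((N1, N2), -np.inf)
    logZ[0, 0] = 0.0                      # dummy base case: acts as log 1 for the recursion only
    for i in range(N1):
        for j in range(N2):
            if i == 0 and j == 0:
                continue
            prev = []
            if i > 0:
                prev.append(logZ[i - 1, j])
            if j > 0:
                prev.append(logZ[i, j - 1])
            logZ[i, j] = log_Y[i, j] + np.logaddexp.reduce(prev)
    return logZ

# example with arbitrary positive i.i.d. placeholder weights
rng = np.random.default_rng(0)
Y = rng.exponential(size=(51, 51))        # any positive weights work here
print(log_partition(np.log(Y))[50, 50])   # free energy log Z_{(0,0),(50,50)}
```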
In general, the positive weights \(\{Y_{\mathbf{z}}\}_{\mathbf{z}\in\mathbb{Z}^{2}}\) can be chosen as a collection of i.i.d. positive random variables on some probability space \((\Omega,\mathbb{P})\). Under a mild moment assumption such as
\[\mathbb{E}\big{[}|\log Y_{\mathbf{z}}|^{p}\big{]}<\infty\quad\text{ for some }p>2,\]
a law of large numbers type result called the _shape theorem_ holds for the free energy (Section 2.3 of [35]): there exists a concave, positively homogeneous and deterministic continuous function \(\Lambda:\mathbb{R}_{\geq 0}^{2}\to\mathbb{R}\) that satisfies
\[\lim_{n\to\infty}\sup_{\mathbf{z}\in\mathbb{Z}_{\geq 0}^{2}:|\mathbf{z}|_{1} \geq n}\frac{|\log Z_{(0,0),\mathbf{z}}-\Lambda(\mathbf{z})|}{|\mathbf{z}|_{1 }}=0\qquad\mathbb{P}\text{-almost surely}.\]
For general i.i.d. weights, regularity properties of \(\Lambda\) such as strict concavity or differentiability, expected to hold at least for continuous weights, are unknown. There is a special case, first observed in [50], that if the i.i.d. weights have the inverse-gamma distribution, then \(\Lambda\) can be computed explicitly. The density function of the inverse-gamma distribution is defined by
\[f_{\mu}(x)=\frac{1}{\Gamma(\mu)}x^{-\mu-1}e^{-\frac{1}{x}}\quad\text{for }x>0. \tag{2.2}\]
The shape parameter \(\mu\in(0,\infty)\) plays the role of temperature in this polymer model. We derive several properties for \(\Lambda\) in Section 3.2 which will be used in our proofs later on. In addition, for this inverse-gamma polymer, many more explicit estimates can be established, hence it is often referred to as an exactly solvable model.
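As a purely numerical aside (not part of the arguments of this paper), the partition function (2.1) is easy to simulate: with \(Z_{\mathbf{u},\mathbf{u}}\) read as \(1\), it satisfies the up-right recursion \(Z_{\mathbf{u},\mathbf{v}}=Y_{\mathbf{v}}\big(Z_{\mathbf{u},\mathbf{v}-\mathbf{e}_{1}}+Z_{\mathbf{u},\mathbf{v}-\mathbf{e}_{2}}\big)\). The minimal Python sketch below assumes `numpy`; all function and variable names in it are ours and purely illustrative.

```python
import numpy as np

def log_inverse_gamma_weights(mu, shape, rng):
    # If G ~ Gamma(mu, 1), then 1/G has the inverse-gamma density (2.2); we store log-weights.
    return -np.log(rng.gamma(mu, 1.0, size=shape))

def log_Z(logY):
    # L[i, j] = log Z_{(0,0),(i,j)} from (2.1): the weight at the starting vertex (0,0) is
    # excluded, and the recursion Z_v = Y_v (Z_{v-e1} + Z_{v-e2}) is run in log-space.
    n1, n2 = logY.shape
    L = np.empty((n1, n2))
    L[0, 0] = 0.0
    L[0, 1:] = np.cumsum(logY[0, 1:])
    L[1:, 0] = np.cumsum(logY[1:, 0])
    for i in range(1, n1):
        for j in range(1, n2):
            L[i, j] = logY[i, j] + np.logaddexp(L[i - 1, j], L[i, j - 1])
    return L

rng = np.random.default_rng(0)
mu, N = 2.0, 300
L = log_Z(log_inverse_gamma_weights(mu, (N + 1, N + 1), rng))
# By the shape theorem, log Z_{(0,0),(N,N)} / (2N) approximates Lambda((1/2, 1/2));
# its explicit value in the inverse-gamma case is recalled in (3.2) below.
print(L[N, N] / (2 * N))
```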
As is standard, the correlation coefficient of two random variables \(\zeta\) and \(\eta\) is defined by
\[\mathbb{C}\mathrm{orr}(\zeta,\eta)=\frac{\mathbb{C}\mathrm{ov}(\zeta,\eta)}{ \mathbb{V}\mathrm{ar}(\zeta)^{1/2}\,\mathbb{V}\mathrm{ar}(\eta)^{1/2}}=\frac {\mathbb{E}[\zeta\eta]-\mathbb{E}\zeta\cdot\mathbb{E}\eta}{\mathbb{E}[\,| \zeta-\mathbb{E}\zeta|^{2}\,]^{1/2}\,\mathbb{E}[\,|\eta-\mathbb{E}\eta|^{2} \,]^{1/2}}.\]
Our main results establish the time correlation exponents \(1/3\) and \(2/3\) for two free energies, depending on the separation of their endpoints.
The bounds in the next two theorems are valid under the assumption that the weights \(\{Y_{\mathbf{z}}\}\) have the i.i.d. inverse-gamma distribution (2.2) for some choice of the parameter \(\mu\in(0,\infty)\).
**Theorem 2.1**.: _There exist positive constants \(C_{1},C_{2},c_{0},N_{0}\) such that, whenever \(N\geq N_{0}\) and \(N/2\leq r\leq N-c_{0}\), we have_
\[1-C_{1}\Big{(}\frac{N-r}{N}\Big{)}^{2/3}\leq\mathbb{C}\mathrm{orr}\big{(}\log Z _{(0,0),(r,r)},\log Z_{(0,0),(N,N)}\big{)}\leq 1-C_{2}\Big{(}\frac{N-r}{N} \Big{)}^{2/3}.\]
**Theorem 2.2**.: _There exist positive constants \(C_{3},C_{4},c_{0},N_{0}\) such that, whenever \(N\geq N_{0}\) and \(c_{0}\leq r\leq N/2\), we have_
\[C_{3}\Big{(}\frac{r}{N}\Big{)}^{1/3}\leq\mathbb{C}\mathrm{orr}\big{(}\log Z_{ (0,0),(r,r)},\log Z_{(0,0),(N,N)}\big{)}\leq C_{4}\Big{(}\frac{r}{N}\Big{)}^{1 /3}.\]
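To get a rough feel for Theorems 2.1 and 2.2, the correlation can be estimated by direct Monte Carlo at small sizes. The sketch below is our own illustration (it assumes `numpy`, and the sizes used are nowhere near the asymptotic regime of the theorems); it repeats the log-space recursion for (2.1) and reports the empirical correlation of \(\log Z_{(0,0),(r,r)}\) and \(\log Z_{(0,0),(N,N)}\).

```python
import numpy as np

def log_Z_grid(logY):
    # log Z_{(0,0),(i,j)} for all (i,j), start weight excluded, computed in log-space.
    n1, n2 = logY.shape
    L = np.empty((n1, n2))
    L[0, 0] = 0.0
    L[0, 1:] = np.cumsum(logY[0, 1:])
    L[1:, 0] = np.cumsum(logY[1:, 0])
    for i in range(1, n1):
        for j in range(1, n2):
            L[i, j] = logY[i, j] + np.logaddexp(L[i - 1, j], L[i, j - 1])
    return L

rng = np.random.default_rng(1)
mu, N, r, samples = 2.0, 60, 45, 300
x = np.empty(samples)
y = np.empty(samples)
for s in range(samples):
    logY = -np.log(rng.gamma(mu, 1.0, size=(N + 1, N + 1)))
    L = log_Z_grid(logY)
    x[s], y[s] = L[r, r], L[N, N]   # log Z_{(0,0),(r,r)} and log Z_{(0,0),(N,N)}
# Empirical correlation; for r close to N, Theorem 2.1 predicts 1 - Corr of order ((N-r)/N)^{2/3}.
print(np.corrcoef(x, y)[0, 1])
```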
## 3 Preliminaries on the polymer model
### Notation
Generic positive constants will be denoted by \(C,C^{\prime}\) in the calculations of the proofs. They may change from line to line without change in notation. Other important positive constants appearing in the results will be numbered in the form \(C_{\text{number}}\).
As shown in Figure 3.1, for any point \(\mathbf{a}\in\mathbb{Z}^{2}\), \(\mathcal{L}_{\mathbf{a}}=\{\mathbf{a}+(j,-j):j\in\mathbb{Z}\}\) denotes the anti-diagonal line with slope \(-1\) going through the point \(\mathbf{a}\), and for any positive constant \(k\), set
\[\mathcal{L}_{\mathbf{a}}^{k}=\{\mathbf{x}\in\mathcal{L}_{\mathbf{a}}:| \mathbf{x}-\mathbf{a}|_{\infty}\leq k\}.\]
For \(\mathbf{a},\mathbf{b}\in\mathbb{Z}^{2}\) and \(k\in\mathbb{R}_{\geq 0}\), \(R_{\mathbf{a},\mathbf{b}}^{k}\) denotes the parallelogram spanned by the four corners \(\mathbf{a}\pm(-k,k)\) and \(\mathbf{b}\pm(-k,k)\).
For a collection of directed paths \(\mathfrak{A}\), let \(Z(\mathfrak{A})\) be the free energy obtained by summing over all the paths in \(\mathfrak{A}\). For \(A,B\subset\mathbb{R}^{2}\), let \(Z_{A,B}\) denote the partition function obtained by summing over all directed paths starting from integer points
\[A^{\circ}=\{\mathbf{a}\in\mathbb{Z}^{2}:(\mathbf{a}+[0,1)^{2})\cap A\neq\emptyset\}\]
and ending in
\[B^{\circ}=\{\mathbf{b}\in\mathbb{Z}^{2}:(\mathbf{b}+[0,1)^{2})\cap B\neq\emptyset\}.\]
Furthermore, set
\[Z_{A,B}^{\text{max}}=\max_{\mathbf{a}\in A^{\circ},\mathbf{b}\in B^{\circ}}Z_{ \mathbf{a},\mathbf{b}}.\]
For \(A,B\subset\mathbb{R}^{2}\), \(\mathbf{c},\mathbf{d}\in\mathbb{Z}^{2}\) and \(h>0\) we define two specific partition functions:
\[Z_{A,B}^{\text{in},R_{\mathbf{c},\mathbf{d}}^{h}} =\text{sum over directed paths from $A$ to $B$ contained inside the parallelogram $R_{\mathbf{c},\mathbf{d}}^{h}$},\] \[Z_{A,B}^{\text{exit},R_{\mathbf{c},\mathbf{d}}^{h}} =\text{sum over directed paths from $A$ to $B$ that exit at least one of}\] \[\quad\text{ the sides of $R_{\mathbf{c},\mathbf{d}}^{h}$ parallel to $\mathbf{d}-\mathbf{c}$}.\]
We simplify the notation when the starting and end places of the free energy match with the parallelogram, for example
\[Z_{\mathcal{L}_{\mathbf{a}}^{s_{1}},\mathcal{L}_{\mathbf{b}}^{s_{2}}}^{\text{in},R_{\mathbf{a},\mathbf{b}}^{k}}=Z_{\mathcal{L}_{\mathbf{a}}^{s_{1}},\mathcal{L}_{\mathbf{b}}^{s_{2}}}^{\text{in},k}\qquad\text{ and }\qquad Z_{\mathcal{L}_{\mathbf{a}}^{s_{1}},\mathcal{L}_{\mathbf{b}}^{s_{2}}}^{\text{exit},R_{\mathbf{a},\mathbf{b}}^{k}}=Z_{\mathcal{L}_{\mathbf{a}}^{s_{1}},\mathcal{L}_{\mathbf{b}}^{s_{2}}}^{\text{exit},k}.\]
Integer points on the diagonal are abbreviated as \(a=(a,a)\in\mathbb{Z}^{2}\). Common occurrences of this include \(Z_{r,N}=Z_{(r,r),(N,N)}\), \(Z_{\mathbf{p},N}=Z_{\mathbf{p},(N,N)}\), \(\mathcal{L}_{a}^{k}=\mathcal{L}_{(a,a)}^{k}\) and \(R_{a,b}^{k}=R^{k}(a,b)=R_{(a,a),(b,b)}^{k}\).
The standard gamma function is \(\Gamma(s)=\int_{0}^{\infty}x^{s-1}e^{-x}\,dx\) and the polygamma functions are \(\Psi_{k}(s)=\frac{d^{k+1}}{ds^{k+1}}\log\Gamma(s)\) for \(k=0,1,2,\dots\).
Finally, we point out two conventions. First, we drop the integer floor function to simplify notation. For example, if we divide the line segment from \((0,0)\) to \((N,N)\) in 5 equal pieces, we denote the free energy of the first segment by \(\log Z_{0,N/5}\) even if \(N/5\) is not an integer. The second one is about the dependence of constants on parameters. A statement of the type "there exists a positive \(\theta_{0}\) such that for each \(0<\theta<\theta_{0}\), there exist positive constants \(C_{0},N_{0},t_{0}\) such that..." means that \(C_{0},N_{0}\) and \(t_{0}\) can (and often necessarily do) depend on \(\theta\).
### Regularity of the shape function and the characteristic direction
Henceforth fix the shape parameter \(\mu\in(0,\infty)\) and assume that the weights \(\{Y_{\mathbf{z}}\}\) have the i.i.d. inverse-gamma distribution (2.2). Define the _characteristic direction_ as a function of \(\rho\in(0,\mu)\)
\[\boldsymbol{\xi}[\rho]=\Big{(}\frac{\Psi_{1}(\rho)}{\Psi_{1}(\rho)+\Psi_{1}( \mu-\rho)}\,,\,\frac{\Psi_{1}(\mu-\rho)}{\Psi_{1}(\rho)+\Psi_{1}(\mu-\rho)} \Big{)}. \tag{3.1}\]
The term characteristic direction becomes meaningful when we define the stationary inverse-gamma polymer in Section 3.4. \(\Psi_{1}\) is strictly decreasing and \(C^{\infty}\) on \(\mathbb{R}_{>0}\). Thus \(\boldsymbol{\xi}[\rho]\) is a continuous bijection between \(\rho\in(0,\mu)\) and vectors (or directions) on the open line segment between \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\). Denote the slope of the vector \(\boldsymbol{\xi}[\rho+z]\) by
\[m_{\rho}(z)=\frac{\boldsymbol{\xi}[\rho+z]\cdot\mathbf{e}_{2}}{\boldsymbol{ \xi}[\rho+z]\cdot\mathbf{e}_{1}}=\frac{\Psi_{1}(\mu-\rho-z)}{\Psi_{1}(\rho+z)}.\]
It is \(C^{\infty}\) with non-vanishing derivative on the interval \(z\in(-\rho,\mu-\rho)\). Its inverse function \(z_{\rho}(m)\) is \(C^{\infty}\) and has a non-vanishing positive derivative for \(m\in(0,\infty)\). The graph of \(m_{\mu/2}(z)\) is illustrated in Figure 3.2. Taylor expansion for \(m_{\rho}(z)\) around \(z=0\) gives this estimate:
**Proposition 3.1** (Lemma 3.1 of [15]).: _There exist positive constants \(C_{5},C_{6},\epsilon\) such that for each \(z\in[-\epsilon,\epsilon]\) and each \(\rho\in[\epsilon,\mu-\epsilon]\), we have_
\[\big{|}m_{\rho}(z)-(m_{\rho}(0)+C_{5}z)\big{|}\leq C_{6}z^{2}.\]
Figure 3.2: The graph of \(m_{\mu/2}(z)\) for \(z\in(-\mu/2,\mu/2)\). The function \(m\) is smooth and has a non-vanishing derivative on \((-\mu/2,\mu/2)\). The image of \(m_{\mu/2}(z)\) is \((0,\infty)\), which corresponds to the slopes of the directions in the open segment between \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\).
The next few results specialize to the diagonal direction \(\rho=\mu/2\). We drop the subscript and write \(m=m_{\mu/2}\) and \(z=z_{\mu/2}\). Taylor expansion of \(z(m)\) around \(m=1\) gives this estimate:
**Proposition 3.2**.: _There exist positive constants \(C_{7},C_{8},\epsilon\) such that for each \(m\in[1-\epsilon,1+\epsilon]\), we have_
\[\big{|}z(m)-C_{7}(m-1)\big{|}\leq C_{8}(m-1)^{2}.\]
Next we quantify the dependence of the shape function on \(\rho\). Let \(f(\rho)\) denote the shape function evaluated at the vector \(\boldsymbol{\xi}[\rho]\), and recall from [50] that
\[f(\rho)=\Lambda(\boldsymbol{\xi}[\rho])=-\tfrac{\Psi_{1}(\rho)}{\Psi_{1}(\rho )+\Psi_{1}(\mu-\rho)}\cdot\Psi_{0}(\mu-\rho)-\tfrac{\Psi_{1}(\mu-\rho)}{\Psi_{ 1}(\rho)+\Psi_{1}(\mu-\rho)}\Psi_{0}(\rho). \tag{3.2}\]
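For numerical checks it is convenient to evaluate (3.1) and (3.2) directly. The short sketch below is ours and assumes `scipy`, whose `polygamma(k, x)` implements \(\Psi_{k}(x)\).

```python
import numpy as np
from scipy.special import polygamma

def xi(rho, mu):
    # Characteristic direction (3.1).
    a, b = polygamma(1, rho), polygamma(1, mu - rho)
    return np.array([a, b]) / (a + b)

def f(rho, mu):
    # Shape function along the characteristic direction, formula (3.2).
    a, b = polygamma(1, rho), polygamma(1, mu - rho)
    return -(a * polygamma(0, mu - rho) + b * polygamma(0, rho)) / (a + b)

mu = 2.0
print(xi(mu / 2, mu))   # (1/2, 1/2): the diagonal direction
print(f(mu / 2, mu))    # f_d = f(mu/2) = -Psi_0(mu/2); about 0.5772 for mu = 2
```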
Let \(f_{d}=f(\mu/2)\) denote the shape function in the diagonal direction. From concavity and symmetry, we get this inequality:
**Proposition 3.3**.: _For each \(\mu>0\) and each \(z\in(-\mu/2,\mu/2)\), \(f(\mu/2)\geq f(\mu/2+z)\)._
The next bound captures the curvature of the shape function.
**Proposition 3.4**.: _There exist positive constants \(C_{9},C_{10},\epsilon\) such that for each \(z\in[-\epsilon,\epsilon]\), we have_
\[\Big{|}(f(\mu/2+z)-f(\mu/2))-(-C_{9}z^{2})\Big{|}\leq C_{10}z^{4}.\]
Proof.: From (3.2),
\[f(\mu/2+z)-f(\mu/2)\] \[=\Big{[}-\Big{(}\tfrac{\Psi_{1}(\mu/2+z)}{\Psi_{1}(\mu/2+z)+\Psi_ {1}(\mu/2-z)}\Psi_{0}(\mu/2-z)+\tfrac{\Psi_{1}(\mu/2-z)}{\Psi_{1}(\mu/2+z)+ \Psi_{1}(\mu/2-z)}\Psi_{0}(\mu/2+z)\Big{)}\Big{]} \tag{3.3}\] \[-\Big{[}-\Big{(}\tfrac{\Psi_{1}(\mu/2)}{\Psi_{1}(\mu/2)+\Psi_{1}( \mu/2)}\Psi_{0}(\mu/2)+\tfrac{\Psi_{1}(\mu/2)}{\Psi_{1}(\mu/2)+\Psi_{1}(\mu/2 )}\Psi_{0}(\mu/2)\Big{)}\Big{]}. \tag{3.4}\]
Taylor expand (3.3) around \(z=0\). The "zeroth" derivative terms and (3.4) cancel each other. The coefficients of \(z\), \(z^{3}\), \(z^{5}\) are zero. The coefficient of \(z^{2}\) is \(\tfrac{1}{2}\Psi_{2}(\mu/2)<0\). Hence \(f(\mu/2+z)-f(\mu/2)=\tfrac{1}{2}\Psi_{2}(\mu/2)z^{2}+O(z^{4})\), which gives the claim with \(C_{9}=-\tfrac{1}{2}\Psi_{2}(\mu/2)\) and a suitable \(C_{10}\).
Our next proposition controls the variation of the shape function on a segment \(\mathcal{L}_{N}^{hN^{2/3}}\).
**Proposition 3.5**.: _There exist positive constants \(C_{11},N_{0},\epsilon_{0}\) such that for each \(N\geq N_{0}\), \(h\leq\epsilon_{0}N^{1/3}\) and each \(\mathbf{p}\in\mathcal{L}_{N}^{hN^{2/3}}\), we have_
\[\big{|}\Lambda(\mathbf{p})-2Nf_{d}\big{|}\leq C_{11}h^{2}N^{1/3}.\]
Proof.: Since each \(\mathbf{p}\in\mathcal{L}_{N}^{hN^{2/3}}\) has the same \(\ell^{1}\)-norm \(2N\), let us rewrite \(\mathbf{p}=2N\boldsymbol{\xi}[\mu/2+z_{\mathbf{p}}]\) for some real number \(z_{\mathbf{p}}\). Then, by our definition \(\Lambda(\mathbf{p})=2Nf(\mu/2+z_{\mathbf{p}})\).
Since the perpendicular \(\ell^{\infty}\)-distance from \(\mathbf{p}\) to the diagonal is at most \(hN^{2/3}\), the slope of the characteristic vector \(\boldsymbol{\xi}[\mu/2+z_{\mathbf{p}}]\) satisfies
\[|m(z_{\mathbf{p}})-1|\leq 2hN^{-1/3}.\]
Provided that \(\epsilon_{0}\) is fixed sufficiently small, Proposition 3.2 gives
\[|z_{\mathbf{p}}|\leq ChN^{-1/3}.\]
Finally, applying Proposition 3.4, we obtain that
\[|f(\mu/2+z_{\mathbf{p}})-f_{d}|\leq Ch^{2}N^{-2/3}\]
which directly implies the result of our proposition after multiplying by \(2N\) on both sides.
### Free energy estimates
In this section, we collect a number of estimates used later in the proofs, organized thematically into subsections. Some results are merely quoted, some are proved later, and in cases where the result has already appeared in the zero-temperature setting, the positive-temperature proofs are given in Appendix A.
#### 3.3.1 Moderate deviation estimates for the free energy
There are four moderate deviation estimates: upper and lower bounds for both left and right tails.
The first theorem gives the upper bound on the right tail of the free energy. This result for the inverse-gamma polymer was first proved as a combination of the moderate deviation estimate from [7], which used integrable techniques, and the large deviation estimate from [30]. The same moderate deviation upper bound was also recently proven in [42] for the O'Connell-Yor polymer without input from integrable probability. Also without integrable techniques, the forthcoming work [24] proves this bound and obtains the sharp leading order term \(\frac{4}{3}t^{3/2}\) in the exponent for \(t\leq CN^{2/3}\). A version of this bound can be found in the Ph.D. thesis of one of the authors of [24], as Theorem 4.3.1 in [58]. The right tail estimate for the KPZ equation with the sharp leading order term was also recently obtained in [29]. Finally, we note that both [42] and [24] use the technique of the zero-temperature version of the estimate that first appeared in [23].
**Proposition 3.6**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{12},N_{0}\) depending on \(\epsilon\) such that for each \(N\geq N_{0}\), \(t\geq 1\), and each \(\rho\in[\epsilon,\mu-\epsilon]\), we have_
\[\mathbb{P}(\log Z_{0,2N\boldsymbol{\xi}[\rho]}-2Nf(\rho)\geq tN^{1/3})\leq e^{-C_{12}\min\{t^{3/2},tN^{1/3}\}}.\]
The next theorem is the corresponding lower bound for the right tail, restricted to the diagonal direction. This was recently proved for the O'Connell-Yor polymer in [42]. The proof uses the subadditivity of the free energy and the Tracy-Widom limit of the free energy. Since the Tracy-Widom limit of the inverse-gamma polymer is also known from integrable techniques [7], the proof for the O'Connell-Yor polymer in Section 9 of [42] can be repeated verbatim for the inverse-gamma polymer. A similar argument in the zero-temperature setting appeared earlier in [9, 28]. Without this input from integrable probability, a lower bound with the correct leading order \(\frac{4}{3}t^{3/2}\) for \(t\leq CN^{2/3}\) over all directions in a compact interval away from \(\mathbf{e}_{1}\) and \(\mathbf{e}_{2}\) will appear in [24].
**Proposition 3.7**.: _There exist positive constants \(C_{13},N_{0},t_{0},\epsilon_{0}\) such that for each \(N\geq N_{0}\), \(t_{0}\leq t\leq\epsilon_{0}N^{2/3}\), we have_
\[\mathbb{P}(\log Z_{0,N}-2Nf_{d}\geq tN^{1/3})\geq e^{-C_{13}t^{3/2}}.\]
The next theorem is the upper bound for the left tail. A similar result was stated as Proposition 3.4 in [42] for the O'Connell-Yor polymer. We prove this estimate for the inverse-gamma polymer in Section 8. Our proof is similar to [42], based on ideas from the zero-temperature work [22].
**Proposition 3.8**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{14},N_{0}\) depending on \(\epsilon\) such that for each \(N\geq N_{0}\), \(t\geq 1\) and each \(\rho\in[\epsilon,\mu-\epsilon]\), we have_

\[\mathbb{P}(\log Z_{0,2N\boldsymbol{\xi}[\rho]}-2Nf(\rho)\leq-tN^{1/3})\leq e^{-C_{14}\min\{t^{3/2},tN^{1/3}\}}.\]
_Remark 3.9_.: The correct order of the left tail should be \(e^{-C\min\{t^{3},tN^{4/3}\}}\) for all \(t\geq t_{0}\). This is different from the zero-temperature model where the left tail behaves as \(e^{-Ct^{3}}\). For the O'Connell-Yor polymer, the authors in [42] also proved an upper bound \(e^{-Ct^{3}}\) when \(t_{0}\leq t\leq N^{2/3}(\log N)^{-1}\). This is done by adapting the bootstrapping argument from the zero-temperature work [28]. We do not pursue this here but expect the same result.
Finally, we have the lower bound on the left tail, which we prove in Section 8. The same lower bound was proved in [42] for the O'Connell-Yor polymer. The idea of the proof follows the zero-temperature work [28].
**Proposition 3.10**.: _There exist positive constants \(C_{15},N_{0},t_{0},\epsilon_{0}\) such that for each \(N\geq N_{0}\) and each \(t_{0}\leq t\leq\epsilon_{0}N^{2/3}/(\log N)^{2}\), we have_
\[\mathbb{P}(\log Z_{0,N}-2Nf_{d}\leq-tN^{1/3})\geq e^{-C_{15}t^{3}}.\]
#### 3.3.2 Free energy and path fluctuations
In this section we state the estimates which capture the loss of free energy when the paths have high fluctuations.
**Proposition 3.11**.: _There exist positive constants \(C_{16},C_{17},N_{0}\) such that for each \(N\geq N_{0}\), \(h\in\mathbb{Z}\) and \(t\geq 0\) we have_
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{N^{2/3}},\mathcal{L}_{N}^{N^{2/3}}}-2Nf_{d}\geq(-C_{16}h^{2}+t)N^{1/3}\Big{)}\leq e^{-C_{17}(|h|^{3}+\min\{t^{3/2},tN^{1/3}\})}.\]
Then, by essentially a union bound, we obtain the following proposition.
**Proposition 3.12**.: _There exist positive constants \(C_{18},C_{19},N_{0}\) such that for each \(N\geq N_{0}\), \(t\geq 1\) and \(s\geq 0\), we have_
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}\backslash\mathcal{L}_{N}^{(s+t)N^{2/3}}}-2Nf_{d}\geq-C_{18}t^{2}N^{1/3}\Big{)}\leq e^{-C_{19}t^{3}}.\]
Following this, we have the next result, which states that paths with high fluctuation tend to have a much smaller free energy.
**Theorem 3.13**.: _There exist positive constants \(C_{20},C_{21},N_{0}\) such that for each \(N\geq N_{0}\), \(1\leq t\leq N^{1/3}\) and \(0<s<e^{t}\), we have_
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{N^{2/3}}}^{\mathrm{exit},(s+t)N^{2/3}}-2Nf_{d}\geq-C_{20}t^{2}N^{1/3}\Big{)}\leq e^{-C_{21}t^{3}}.\]
From this, we have the following corollary which is a similar bound for point-to-point free energy that is slightly off the diagonal direction.
**Corollary 3.14**.: _There exist positive constants \(C_{22},C_{23},N_{0}\) such that for each \(N\geq N_{0}\), \(1\leq t\leq N^{1/3}\) and \(0<s<t/10\), we have_
\[\mathbb{P}\Big{(}\log Z_{(-sN^{2/3},sN^{2/3}),N}^{\mathrm{exit},tN^{2/3}}-2Nf_{d}\geq-C_{22}t^{2}N^{1/3}\Big{)}\leq e^{-C_{23}t^{3}}.\]
#### 3.3.3 Interval-to-line free energy
In our work, we will also need an upper bound for the right tail of the interval-to-line free energy.
**Theorem 3.15**.: _There exist positive constants \(C_{24},C_{25},N_{0}\) such that for each \(N\geq N_{0}\), \(t\geq 1\) and \(1\leq h\leq e^{C_{24}\min\{t^{3/2},tN^{1/3}\}}\), we have_

\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{hN^{2/3}},\mathcal{L}_{N}}-2Nf_{d}\geq tN^{1/3}\Big{)}\leq e^{-C_{25}\min\{t^{3/2},tN^{1/3}\}}.\]
#### 3.3.4 Estimates for the constrained free energy
When we constrain the paths, the free energy decreases because we are summing over a smaller collection of paths in the partition function. The first theorem captures that the point-to-point free energy cannot be too small if we constrain the paths to a fixed rectangle of size of order \(N\times N^{2/3}\), which obeys the KPZ transversal fluctuation scale. Our second theorem gives a lower bound for the probability that a constrained free energy is large.
**Theorem 3.16**.: _For each positive \(a_{0}\), there exist positive constants \(C_{26},t_{0}\) such that for each \(0<\theta\leq 100\), there exists a positive constant \(N_{0}\) such that for each \(N\geq N_{0}\), \(t\geq t_{0}\) and \(\mathbf{p}\in\mathcal{L}_{N}^{a_{0}\theta N^{2/3}}\), we have_
\[\mathbb{P}\Big{(}\log Z_{0,\mathbf{p}}^{\mathrm{in},\theta N^{2/3}}-2Nf_{d}\leq-tN^{1/3}\Big{)}\leq\tfrac{\sqrt{t}}{\theta}e^{-C_{26}\theta t}.\]
**Theorem 3.17**.: _For any positive constant \(s\), there exist positive constants \(C_{27},t_{0},N_{0}\) such that for each \(N\geq N_{0}\), \(t_{0}\leq t\leq N^{2/3}\),_
\[\mathbb{P}\Big{(}\log Z_{0,N}^{\mathrm{in},sN^{2/3}}-2Nf_{d}\geq tN^{1/3}\Big{)}\geq e^{-C_{27}t^{3/2}}.\]
#### 3.3.5 Minimum and maximum estimate for the free energy
Our first theorem is the box-to-point minimum bound. This was first proved in the zero-temperature setting, in [13] for the Poissonian LPP model and later in [11] for the exponential LPP model. The proof follows the idea from Section C.4 of [11].
**Theorem 3.18**.: _There exist positive constants \(C_{28},N_{0},t_{0}\) such that for each \(N\geq N_{0}\) and \(t\geq t_{0}\), we have_
\[\mathbb{P}\Big{(}\min_{\mathbf{p}\in R_{0,9N/10}^{N^{2/3}}}\Big{\{}\log Z_{\mathbf{p},N}^{\mathrm{in},R_{0,N}^{N^{2/3}}}-(2N-|\mathbf{p}|_{1})f_{d}\Big{\}}\leq-tN^{1/3}\Big{)}\leq e^{-C_{28}t}.\]
Lastly, we state a box-to-line maximum bound.
**Theorem 3.19**.: _There exist positive constants \(C_{29},N_{0},t_{0}\) such that for each \(N\geq N_{0}\) and each \(t\geq t_{0}\), we have_
\[\mathbb{P}\Big{(}\max_{\mathbf{p}\in R_{0,9N/10}^{N^{2/3}}}\Big{\{}\log Z_{\mathbf{p},\mathcal{L}_{N}}-(2N-|\mathbf{p}|_{1})f_{d}\Big{\}}\geq tN^{1/3}\Big{)}\leq e^{-C_{29}t}.\]
_Remark 3.20_.: In both bounds of Theorem 3.18 and Theorem 3.19, the power \(1\) on the exponent \(t^{1}\) is not expected to be optimal.
#### 3.3.6 Variance bound for the free energy
We state the variance bound of the free energy which follows directly from the upper and lower bounds for the left and right tails. We omit its proof. The upper bound was first shown in [50] where the inverse-gamma polymer was first introduced.
**Theorem 3.21**.: _There exist positive constants \(C_{30},C_{31},N_{0}\) such that for each \(N\geq N_{0}\), we have_
\[C_{30}N^{2/3}\leq\mathbb{V}\mathrm{ar}(\log Z_{0,N})\leq C_{31}N^{2/3}.\]
#### 3.3.7 Nonrandom fluctuation
Finally, we record a lower bound for the nonrandom fluctuation of the free energy in the i.i.d. inverse-gamma polymer. This result follows directly from the Tracy-Widom limit of the inverse-gamma model and the fact that the Tracy-Widom distribution has a negative mean. Our contribution here is an alternative proof (in Section 7) that does not rely on the Tracy-Widom limit.
**Theorem 3.22**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{32},N_{0}\) such that for each \(N\geq N_{0}\) and \(\rho\in[\epsilon,\mu-\epsilon]\), we have_
\[2Nf(\rho)-\mathbb{E}[\log Z_{0,2N\mathbf{\xi}[\rho]}]\geq C_{32}N^{1/3}.\]
### Stationary inverse-gamma polymer
The (increment) stationary inverse-gamma polymer (with southwest boundary) is defined on a quadrant. To start, we fix a parameter \(\rho\in(0,\mu)\) and a base vertex \(\mathbf{v}\in\mathbb{Z}^{2}\). For each \(\mathbf{z}\in\mathbf{v}+\mathbb{Z}^{2}_{>0}\), the (vertex) bulk weights are defined by \(Y_{\mathbf{z}}\sim\mathrm{Ga}^{-1}(\mu)\). On the boundary vertices \(\mathbf{v}+k\mathbf{e}_{1}\) and \(\mathbf{v}+k\mathbf{e}_{2}\) for \(k\in\mathbb{Z}_{>0}\), the (edge) weights have the distributions
\[I^{\rho}_{[\mathbf{v}+(k-1)\mathbf{e}_{1},\,\mathbf{v}+k\mathbf{e}_{1}]} \sim\mathrm{Ga}^{-1}(\mu-\rho) \tag{3.5}\] \[J^{\rho}_{[\mathbf{v}+(k-1)\mathbf{e}_{2},\,\mathbf{v}+k\mathbf{e}_{2}]} \sim\mathrm{Ga}^{-1}(\rho).\]
And all the weights in the quadrant are independent. We denote the probability measure for the stationary inverse-gamma polymer by \(\mathbb{P}\) and record the parameter \(\rho\) and the base point \(\mathbf{v}\) in the notation of the partition function. For \(\mathbf{w}\in\mathbf{v}+\mathbb{Z}^{2}_{\geq 0}\), let us define
\[Z^{\rho}_{\mathbf{v},\mathbf{w}}=\sum_{\gamma\in\mathbb{X}_{\mathbf{v},\mathbf{w}}}\prod_{i=0}^{|\mathbf{v}-\mathbf{w}|_{1}}\widetilde{Y}_{\gamma_{i}}\qquad\text{ where }\widetilde{Y}_{\gamma_{i}}=\begin{cases}1&\text{if }\gamma_{i}=\mathbf{v}\\ I^{\rho}_{[\gamma_{i}-\mathbf{e}_{1},\gamma_{i}]}&\text{if }\gamma_{i}\cdot\mathbf{e}_{2}=\mathbf{v}\cdot\mathbf{e}_{2}\\ J^{\rho}_{[\gamma_{i}-\mathbf{e}_{2},\gamma_{i}]}&\text{if }\gamma_{i}\cdot\mathbf{e}_{1}=\mathbf{v}\cdot\mathbf{e}_{1}\\ Y_{\gamma_{i}}&\text{otherwise}.\end{cases}\]
And for \(\gamma\in\mathbb{X}_{\mathbf{v},\mathbf{w}}\), the quenched polymer measure is defined by
\[Q^{\rho}_{\mathbf{v},\mathbf{w}}(\gamma)=\frac{1}{Z^{\rho}_{\mathbf{v}, \mathbf{w}}}\prod_{i=0}^{|\mathbf{v}-\mathbf{w}|_{1}}\widetilde{Y}_{\gamma_{ i}}.\]
The name (increment) stationary inverse-gamma polymer is justified by the next theorem, which first appeared in [50, Theorem 3.3].
**Theorem 3.23**.: _For each \(\mathbf{w}\in\mathbf{v}+\mathbb{Z}_{>0}^{2}\), we have_
\[\frac{Z^{\rho}_{\mathbf{v},\mathbf{w}}}{Z^{\rho}_{\mathbf{v},\mathbf{w}-\mathbf{ e}_{1}}}\sim\mathrm{Ga}^{-1}(\mu-\rho)\qquad\text{ and }\qquad\frac{Z^{\rho}_{\mathbf{v},\mathbf{w}}}{Z^{\rho}_{\mathbf{v},\mathbf{w}- \mathbf{e}_{2}}}\sim\mathrm{Ga}^{-1}(\rho).\]
_Furthermore, let \(\eta=\{\eta_{i}\}\) be any finite or infinite down-right path in \(\mathbf{v}+\mathbb{Z}_{\geq 0}^{2}\). This means \(\eta_{i+1}-\eta_{i}\) is either \(\mathbf{e}_{1}\) or \(-\mathbf{e}_{2}\). Then, the increments \(\{Z^{\rho}_{\mathbf{v},\eta_{i+1}}/Z^{\rho}_{\mathbf{v},\eta_{i}}\}\) are independent._
From Theorem 3.23 above, we have the following identity for the expectation of the free energy.
\[\mathbb{E}\Big{[}\log Z^{\rho}_{0,(a,b)}\Big{]}=-a\Psi_{0}(\mu-\rho)-b\Psi_{0}(\rho). \tag{3.6}\]
Because the weights appearing on the boundary are stochastically larger than the bulk weights, the sampled polymer paths tend to stay on the boundary. However, for each fixed \(\rho\in(0,\mu)\), there is a unique direction in which this effect of the \(\mathbf{e}_{1}\)- and \(\mathbf{e}_{2}\)-boundaries is balanced out; we call it the characteristic direction \(\boldsymbol{\xi}[\rho]\), defined previously in (3.1).
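The identity (3.6) can be tested by simulation. The sketch below is our own illustration (assuming `numpy` and `scipy`): it builds the stationary model with base vertex \((0,0)\), boundary weights distributed as in (3.5), i.i.d. inverse-gamma bulk weights, and compares the empirical mean of \(\log Z^{\rho}_{0,(a,b)}\) with \(-a\Psi_{0}(\mu-\rho)-b\Psi_{0}(\rho)\).

```python
import numpy as np
from scipy.special import digamma

def log_Z_stationary(mu, rho, a, b, rng):
    # log Z^rho_{0,(a,b)}: boundary edge weights I ~ Ga^{-1}(mu - rho) on the e1-axis,
    # J ~ Ga^{-1}(rho) on the e2-axis, bulk vertex weights Y ~ Ga^{-1}(mu), weight 1 at (0,0).
    L = np.zeros((a + 1, b + 1))
    L[1:, 0] = np.cumsum(-np.log(rng.gamma(mu - rho, 1.0, size=a)))   # log of products of I
    L[0, 1:] = np.cumsum(-np.log(rng.gamma(rho, 1.0, size=b)))        # log of products of J
    logY = -np.log(rng.gamma(mu, 1.0, size=(a + 1, b + 1)))           # bulk weights
    for i in range(1, a + 1):
        for j in range(1, b + 1):
            L[i, j] = logY[i, j] + np.logaddexp(L[i - 1, j], L[i, j - 1])
    return L[a, b]

rng = np.random.default_rng(2)
mu, rho, a, b = 2.0, 0.7, 30, 20
est = np.mean([log_Z_stationary(mu, rho, a, b, rng) for _ in range(2000)])
# The two printed numbers should agree up to Monte Carlo error, as predicted by (3.6).
print(est, -a * digamma(mu - rho) - b * digamma(rho))
```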
The first estimate below is the upper bound for the right tail of the free energy. It first appeared in the Ph.D. thesis [58] and was then proven again in [43]. Note that by (3.6), \(2Nf(\rho)\) can be thought of as the expectation of \(\log Z^{\rho}_{0,2N\boldsymbol{\xi}[\rho]}\), if we ignore the error from the integer rounding.
**Theorem 3.24**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{33},N_{0}\) such that for each \(N\geq N_{0}\), \(t\geq 1\) and \(\rho\in[\epsilon,\mu-\epsilon]\), we have_
\[\mathbb{P}\Big{(}\log Z^{\rho}_{0,2N\boldsymbol{\xi}[\rho]}-2Nf(\rho)\geq tN^{1/3}\Big{)}\leq e^{-C_{33}\min\{t^{3/2},tN^{1/3}\}}.\]
Along the characteristic direction, the sampled paths tend to stay on the boundary for on the order of \(N^{2/3}\) steps. Our next result is a corollary of this fact, which appears as Corollary 4.2 in [49]. Fix \(\mathbf{w}\in\mathbf{v}+\mathbb{Z}_{\geq 0}^{2}\) and any \(k\in\mathbb{R}_{>0}\). Let \(\{\tau_{\mathbf{v},\mathbf{w}}\geq k\}\) denote the subset of \(\mathbb{X}_{\mathbf{v},\mathbf{w}}\) such that the first \(\lfloor k\rfloor\) steps of the path are all \(\mathbf{e}_{1}\)-steps. Similarly, \(\{\tau_{\mathbf{v},\mathbf{w}}\leq-k\}\) is the subset of \(\mathbb{X}_{\mathbf{v},\mathbf{w}}\) whose first \(\lfloor k\rfloor\) steps are all \(\mathbf{e}_{2}\)-steps. When \(\tau_{\mathbf{v},\mathbf{w}}\) appears inside a quenched polymer measure as below, we simplify the notation to \(\tau_{\mathbf{v},\mathbf{w}}=\tau\), as the starting and end points of the paths are clear from the context.
**Theorem 3.25**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{34},C_{35},N_{0}\) such that for all \(\rho\in[\epsilon,\mu-\epsilon]\), \(N\geq N_{0}\) and \(r\geq 1\), we have_

\[\mathbb{P}^{\rho}(Q_{0,2N\boldsymbol{\xi}[\rho]+rN^{2/3}\mathbf{e}_{1}}\{\tau\leq-1\}\geq e^{-C_{34}r^{3}})\leq e^{-C_{35}r^{3}}.\]
Let \(\widetilde{Z}\) denote the version of the free energy that also includes the weight at the beginning of the path. The following is essentially a lower bound for the difference between the free energies of the stationary boundary model and the i.i.d. bulk polymer. We included an additional boundary weight with the i.i.d. bulk free energy in the estimate below because this version will be used to prove Theorem 3.22. Its proof will appear in Section 7.
**Theorem 3.26**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{36},N_{0}\) such that for each \(N\geq N_{0}\), \(0<\delta\leq 1/2\) and \(\rho\in[\epsilon,\mu-\epsilon]\), we have_
\[\mathbb{P}\Big{(}\log Z^{\rho}_{-1,2N\boldsymbol{\xi}[\rho]}-\Big{(}\log I^{ \rho}_{[(-1,-1),(0,-1)]}+\log\widetilde{Z}_{0,2N\boldsymbol{\xi}[\rho]}\Big{)} \leq\delta N^{1/3}\Big{)}\leq C_{36}|\log(\delta\lor N^{-1/3})|\cdot(\delta \lor N^{-1/3}).\]
For completeness, we also record the following upper bound for the difference between the stationary and i.i.d. free energy. This result follows directly from Theorem 3.24 and Proposition 3.8 using a union bound, hence we omit its proof.
**Theorem 3.27**.: _Let \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{37},N_{0}\) such that for each \(N\geq N_{0}\), \(t\geq 1\) and \(\rho\in[\epsilon,\mu-\epsilon]\), we have_
\[\mathbb{P}\Big{(}\log Z^{\rho}_{-1,2N\boldsymbol{\xi}[\rho]}-\log Z_{0,2N\boldsymbol{\xi}[\rho]}\geq tN^{1/3}\Big{)}\leq e^{-C_{37}\min\{t^{3/2},tN^{1/3}\}}.\]
### Random walk comparison for the free energy profile
The stationary polymer allows one to compare the free energy profile along a segment of a down-right path to random walks. This technique has appeared previously in [3, 5, 15, 49, 50, 51], among other places.
To start, fix \(\rho\in(0,\mu)\) and define
\[\mathbf{v}_{N}=2N\boldsymbol{\xi}[\rho].\]
Let \(\Theta_{k}\) denote a down-right path of \(k\) (edge) steps that goes through the vertex \(\mathbf{v}_{N}\). Order the vertices of \(\Theta_{k}\) as \(\mathbf{z}_{0},\ldots,\mathbf{z}_{k}\), where \(\mathbf{z}_{0}\) has the largest \(\mathbf{e}_{2}\)-coordinate value. Define the free energy profile to be the following collection of random variables
\[\log Z_{0,\mathbf{z}_{i}}-\log Z_{0,\mathbf{z}_{i-1}}\quad\text{ where }i=1, \ldots,k. \tag{3.7}\]
The proof of the following theorem appears in Appendix B.
**Theorem 3.28**.: _Fix \(\epsilon\in(0,\mu/2)\). There exist positive constants \(C_{38},N_{0},s_{0},a_{0},q_{0}\) such that for each \(\rho\in[\epsilon,\mu-\epsilon]\), \(N\geq N_{0}\), \(s_{0}\leq s\leq a_{0}N^{1/3}\), \(1\leq k\leq sN^{2/3}\) and each down-right path \(\Theta_{k}=\{\mathbf{z}_{0},\ldots,\mathbf{z}_{k}\}\), there exist two collections of random variables \(\{X_{i}\}\) and \(\{Y_{i}\}\) such that the following holds. Set_
\[\lambda=\rho+q_{0}sN^{-1/3}\qquad\text{ and }\qquad\eta=\rho-q_{0}sN^{-1/3}.\]
_The random variables \(\{X_{i}\}\) are mutually independent with marginal distributions_
\[X_{i} \sim\log(\mathrm{Ga}^{-1}(\mu-\lambda)) \text{ if }\mathbf{z}_{i}-\mathbf{z}_{i-1}=\mathbf{e}_{1}\] \[-X_{i} \sim\log(\mathrm{Ga}^{-1}(\lambda)) \text{ if }\mathbf{z}_{i}-\mathbf{z}_{i-1}=-\mathbf{e}_{2}\.\]
_The random variables \(\{Y_{i}\}\) are mutually independent with marginal distributions_
\[Y_{i} \sim\log(\mathrm{Ga}^{-1}(\mu-\eta)) \text{ if }\mathbf{z}_{i}-\mathbf{z}_{i-1}=\mathbf{e}_{1}\] \[-Y_{i} \sim\log(\mathrm{Ga}^{-1}(\eta)) \text{ if }\mathbf{z}_{i}-\mathbf{z}_{i-1}=-\mathbf{e}_{2}\.\]
_Furthermore, \(X_{i}\) and \(Y_{i}\) bound the free energy profile with high probability._
\[\mathbb{P}\Big{(}\log\tfrac{9}{10}+Y_{i}\leq\log Z_{0,\mathbf{z}_{i}}-\log Z_{0,\mathbf{z}_{i-1}}\leq\log\tfrac{10}{9}+X_{i}\text{ for each }i=1,2,\ldots,k\Big{)}\geq 1-e^{-C_{38}s^{3}}.\]
We also note that when \(\Theta_{k}\) is vertical or horizontal, then \(X_{i}\) and \(Y_{i}\) can be coupled together with an explicit joint distribution that allows calculations, see [3, 4, 15, 49, 51]. However, we will not use this fact in this paper.
## 4 Local fluctuations
In this section, we look at fluctuations for the polymer near \(0\) or \((N,N)\) where the time scale can be much smaller than the full scale \(N\). We start with an estimate for the fluctuation of the free energy profile along the anti-diagonal line. This result was first proved for a zero-temperature model (Brownian last-passage percolation) in [34] using the Brownian Gibbs property. Other related results and extensions for the various zero-temperature models have appeared in [9, 11, 16]. Compared to these, our proof does not rely on integrable probability which was used in [16, 34], and we improve the tail estimate from [9, 11] to optimal order.
**Proposition 4.1**.: _There exist positive constants \(C_{39},C_{40},c_{0},N_{0}\) such that for each \(N\geq N_{0}\), \(1\leq t\leq c_{0}N^{1/2}\), and each \(a\in\mathbb{Z}_{\geq 0}\), we have_
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}-\log Z_{0,N}\geq C_{39}t\sqrt{a}\Big{)}\leq e^{-C_{40}\min\{t^{2},t\sqrt{a}\}}.\]
_Remark 4.2_.: Since the free energy profile is expected to be locally Brownian after dividing by \(N^{1/3}\), the difference of the free energies in the probability above should approximate the running maximum of a two-sided random walk. Thus the tail bound above is optimal.
Proof.: The case \(a=0\) is trivial, so we will always assume \(a\in\mathbb{Z}_{>0}\). We may prove the proposition with the maximum version of the free energy since
\[\log Z_{0,N}\leq\log Z_{0,\mathcal{L}_{N}^{a}}\leq\log Z_{0,\mathcal{L}_{N}^{a}}^{\max}+10\log(a+1).\]
Let us also note that when \(a\geq t^{2/3}N^{2/3}\), the estimate is straightforward. It holds that
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}^{\max}-\log Z_{0,N}\geq Ct\sqrt{a}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}}^{\max}-\log Z_{0,N}\geq C\tfrac{\sqrt{a}}{t^{1/3}N^{1/3}}t^{4/3}N^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}}^{\max}-\log Z_{0,N}\geq Ct^{4/3}N^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}}^{\max}-2Nf_{d}\geq\tfrac{C}{2}t^{4/3}N^{1/3}\Big{)}\] \[\qquad\qquad\qquad+\mathbb{P}\Big{(}\log Z_{0,N}-2Nf_{d}\leq-\tfrac{C}{2}t^{4/3}N^{1/3}\Big{)}\leq e^{-Ct^{2}},\]
where the last inequality comes from Proposition A.2 and Proposition 3.8.
From now on, we will assume that the integer \(a\) satisfies \(1\leq a\leq t^{2/3}N^{2/3}\). In addition, note that our estimate for the difference of two free energies does not change if we include the weight \(Y_{(0,0)}\) in both partition functions. For the remaining part of the proof, we will also assume this without introducing a new notation for this version of the partition function.
By a union bound, it suffices to prove our estimate for
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a,+}}^{\max}-\log Z_{0,N}\geq C^{ \prime}t\sqrt{a}\Big{)} \tag{4.1}\]
where \(\mathcal{L}_{N}^{a,+}\) is the part of \(\mathcal{L}_{N}^{a}\) lying on or above \((N,N)\). For any fixed \(k=0,\dots,a\), let us rewrite
\[\log Z_{0,(N-k,N+k)}-\log Z_{0,N}=\sum_{i=1}^{k}\Big{[}\log Z_{0,(N-i,N+i)}-\log Z_{0,(N-(i-1),N+(i-1))}\Big{]}=:S_{k}.\]
This allows us to work with a running maximum of the walk \(S_{k}\) since
\[(4.1)=\mathbb{P}\Big{(}\max_{0\leq k\leq a}S_{k}\geq C^{\prime}t\sqrt{a}\Big{)}. \tag{4.2}\]
The steps of \(S_{k}\) are not i.i.d.; however, Theorem 3.28 allows us to work with an i.i.d. random walk \(\widetilde{S}_{k}\) which upper bounds \(S_{k}\) with high probability. More precisely, the down-right path \(\Theta_{2a}\) will be the staircase from \((N-a,N+a)\) to \((N,N)\). Because the steps of \(S_{k}\) and the free energy profile defined in (3.7) differ by a negative sign, the perturbed parameter will be \(\eta=\mu/2-q_{0}t^{2/3}N^{-1/3}\), and the distribution of the steps of \(\widetilde{S}_{k}\) is given by \(\log(\mathrm{Ga}^{-1}(\eta))-\log(\mathrm{Ga}^{-1}(\mu-\eta))\).
Let \(A\) denote the event that \(\log\frac{10}{9}+\widetilde{S}_{k}\geq S_{k}\) for each \(k=0,1\ldots,a\). Then, we have
\[(4.2) \leq\mathbb{P}\Big{(}\left\{\max_{0\leq k\leq a}S_{k}\geq C^{\prime}t\sqrt{a}\right\}\cap A\Big{)}+\mathbb{P}(A^{c})\] \[\leq\mathbb{P}\Big{(}\log\tfrac{10}{9}+\max_{0\leq k\leq a}\widetilde{S}_{k}\geq C^{\prime}t\sqrt{a}\Big{)}+\mathbb{P}(A^{c}).\]
From Theorem 3.28, we know \(\mathbb{P}(A^{c})\leq e^{-Ct^{2}}\). Absorb the constant \(\log(10/9)\) into the constant \(C\), and it suffices to obtain the upper bound
\[\mathbb{P}\Big{(}\max_{0\leq k\leq a}\widetilde{S}_{k}\geq C^{\prime}t\sqrt{a }\Big{)}\leq e^{-C\min\{t^{2},t\sqrt{a}\}}. \tag{4.3}\]
This is a standard running maximum estimate for an i.i.d. random walk whose steps are sub-exponential. We omit the details here and postpone the proof of (4.3) to the end of Appendix D.
Next, we extend the value of \(t\) in the previous proposition from \(t_{0}\leq t\leq c_{0}N^{1/2}\) to all \(t\geq t_{0}\). The cost of this is a non-optimal exponent appearing in the exponential bound.
**Proposition 4.3**.: _There exist positive constants \(t_{0},N_{0}\) such that for each \(N\geq N_{0}\), \(t\geq t_{0}\), and each \(a\in\mathbb{Z}_{\geq 0}\), we have_
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}\ -\log Z_{0,N}\geq t\sqrt{a} \Big{)}\leq e^{-t^{1/10}}.\]
Proof.: The case \(a=0\) is trivial, so we will always assume \(a\geq 1\). Due to Proposition 4.1, we only have to show the estimate when \(t\geq C_{39}c_{0}N^{1/2}\), where both constants \(C_{39}\) and \(c_{0}\) are from Proposition 4.1. Suppose \(t=zN^{1/2}\) where \(z\geq C_{39}c_{0}\). Then,
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}-\log Z_{0,N}\geq t\sqrt{a}\Big{)} \leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}-\log Z_{0,N}\geq t\Big{)}\] \[=\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}-\log Z_{0,N}\geq(zN^{1/6})N^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}^{a}}-2Nf_{d}\geq(\tfrac{1}{2}zN^{1/6})N^{1/3}\Big{)}+\mathbb{P}\Big{(}\log Z_{0,N}-2Nf_{d}\leq-(\tfrac{1}{2}zN^{1/6})N^{1/3}\Big{)}\] \[\leq e^{-t^{1/10}}\,.\]
The last inequality comes from Proposition A.2 and Proposition 3.8.
Fix \(0\leq r\leq N/2\). Recall that \(\mathcal{L}_{r}\) is the anti-diagonal through the point \((r,r)\). Let \(\mathbf{p}_{*}\) denote the random maximizer in
\[\max_{\mathbf{p}\in\mathcal{L}_{r}}\big{\{}\log Z_{0,\mathbf{p}}+\log Z_{ \mathbf{p},N}\big{\}}=\log Z_{0,\mathbf{p}_{*}}+\log Z_{\mathbf{p}_{*},N}.\]
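The maximizer \(\mathbf{p}_{*}\) is easy to observe in simulation: since every up-right path from \((0,0)\) to \((N,N)\) crosses \(\mathcal{L}_{r}\) at exactly one vertex, \(Z_{0,N}=\sum_{\mathbf{p}\in\mathcal{L}_{r}}Z_{0,\mathbf{p}}Z_{\mathbf{p},N}\), and both factors can be computed by a forward and a backward recursion. The sketch below is our own illustration (assuming `numpy`); it records the transversal displacement of \(\mathbf{p}_{*}\) on the \(r^{2/3}\) scale, which is the quantity controlled by the next proposition.

```python
import numpy as np

def forward(logY):
    # F[i, j] = log Z_{(0,0),(i,j)} (weight at (0,0) excluded).
    n = logY.shape[0]
    F = np.empty((n, n)); F[0, 0] = 0.0
    F[0, 1:] = np.cumsum(logY[0, 1:]); F[1:, 0] = np.cumsum(logY[1:, 0])
    for i in range(1, n):
        for j in range(1, n):
            F[i, j] = logY[i, j] + np.logaddexp(F[i - 1, j], F[i, j - 1])
    return F

def backward(logY):
    # B[i, j] = log Z_{(i,j),(N,N)} (weight at (i,j) excluded, weight at (N,N) included).
    n = logY.shape[0]
    B = np.empty((n, n)); B[n - 1, n - 1] = 0.0
    for j in range(n - 2, -1, -1):
        B[n - 1, j] = B[n - 1, j + 1] + logY[n - 1, j + 1]
    for i in range(n - 2, -1, -1):
        B[i, n - 1] = B[i + 1, n - 1] + logY[i + 1, n - 1]
        for j in range(n - 2, -1, -1):
            B[i, j] = np.logaddexp(logY[i + 1, j] + B[i + 1, j], logY[i, j + 1] + B[i, j + 1])
    return B

rng = np.random.default_rng(3)
mu, N, r = 2.0, 120, 40
logY = -np.log(rng.gamma(mu, 1.0, size=(N + 1, N + 1)))
F, B = forward(logY), backward(logY)
pts = [(i, 2 * r - i) for i in range(max(0, 2 * r - N), min(2 * r, N) + 1)]
i_star, _ = pts[int(np.argmax([F[i, j] + B[i, j] for (i, j) in pts]))]
print(abs(i_star - r) / r ** (2 / 3))   # |p_* - (r,r)|_inf on the transversal scale r^{2/3}
```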
The proposition below captures the KPZ transversal fluctuation which says that the maximizer \(\mathbf{p}_{*}\) cannot be too far from the diagonal on the local scale \(r^{2/3}\). This can of course be much smaller than the global fluctuation scale \(N^{2/3}\). This result was first proved in the zero-temperature model in [12].
**Proposition 4.4**.: _There exist positive constants \(C_{41},c_{0},t_{0},N_{0}\) such that for each \(N\geq N_{0}\), \(c_{0}\leq r\leq N/2\) and \(t\geq t_{0}\), we have_
\[\mathbb{P}(|\mathbf{p}_{*}-(r,r)|_{\infty}>tr^{2/3})\leq e^{-C_{41}t^{3}}.\]
Proof.: Abbreviate \(J^{h}=\mathcal{L}_{(r-2hr^{2/3},r+2hr^{2/3})}^{r^{2/3}}\). We bound the probability as follows.
\[\mathbb{P}(|\mathbf{p}_{*}-(r,r)|_{\infty}>tr^{2/3})\] \[\leq\mathbb{P}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{r}\setminus\mathcal{L}_{r}^{tr^{2/3}}}\Big{\{}\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\Big{\}}>\log Z_{0,r}+\log Z_{r,N}\Big{)}\] \[\leq\sum_{|h|=\lfloor t/2\rfloor}^{r^{1/3}}\mathbb{P}\Big{(}\log Z_{0,J^{h}}^{\max}+\log Z_{J^{h},N}^{\max}>\log Z_{0,r}+\log Z_{r,N}\Big{)}\] \[=\sum_{|h|=\lfloor t/2\rfloor}^{r^{1/3}}\mathbb{P}\Big{(}\Big{[}\log Z_{0,J^{h}}^{\max}-\log Z_{0,r}\Big{]}+\Big{[}\log Z_{J^{h},N}^{\max}-\log Z_{r,N}\Big{]}>0\Big{)}\] \[\leq\sum_{|h|=\lfloor t/2\rfloor}^{r^{1/3}}\mathbb{P}\Big{(}\log Z_{0,J^{h}}^{\max}-\log Z_{0,r}\geq-Dh^{2}r^{1/3}\Big{)} \tag{4.4}\] \[\qquad\qquad\qquad+\mathbb{P}\Big{(}\log Z_{J^{h},N}^{\max}-\log Z_{r,N}\geq Dh^{2}r^{1/3}\Big{)}. \tag{4.5}\]
where \(D\) is a small positive constant that we will fix.
Provided \(t_{0}\) is fixed sufficiently large, we may upper bound (4.4) using Proposition 3.11 and Proposition 3.8 as follows:
\[(4.4)\leq\mathbb{P}\Big{(}\log Z_{0,J^{h}}^{\max}-2rf_{d}\geq-2Dh^{2}r^{1/3}\Big{)}+\mathbb{P}\Big{(}\log Z_{0,r}-2rf_{d}\leq-Dh^{2}r^{1/3}\Big{)}\leq e^{-Ch^{3}},\]

once \(D\) is fixed sufficiently small relative to the constant \(C_{16}\) of Proposition 3.11. For (4.5), note that the endpoints in \(J^{h}\) lie at anti-diagonal distance of order \(hr^{2/3}\geq h(N-r)^{2/3}\) from the diagonal, so the free energies \(\log Z_{J^{h},N}^{\max}\) are taken in directions that deviate from the diagonal
on the full distance scale \(N-r\). Provided that \(t_{0}\) is fixed sufficiently large depending on \(\epsilon_{0}\), (4.5) can be upper bounded with a similar argument as in (4.4).
To summarize, the arguments above show that
\[\sum_{|h|=\lfloor t/2\rfloor}^{r^{1/3}}\Big{[}(4.4)+(4.5)\Big{]}\leq\sum_{|h|=\lfloor t/2\rfloor}^{r^{1/3}}e^{-Ch^{3}}\leq e^{-C_{41}t^{3}},\]

which completes the proof of Proposition 4.4.
The development culminates in the following theorem.
**Theorem 4.8**.: _There exist positive constants \(c_{0},t_{0},N_{0},\) such that for each \(N\geq N_{0}\), \(N/2\leq r\leq N-c_{0}\), \(t\geq t_{0}\), we have_
\[\mathbb{P}\Big{(}\log Z_{0,N}-[\log Z_{0,r}+\log Z_{r,N}]\geq t(N-r)^{1/3}\Big{)} \leq e^{-t^{1/10}}.\]
Proof.: This follows directly from the fact that
\[\log Z_{0,N}\leq\max_{\mathbf{p}\in\mathcal{L}_{r}}\{\log Z_{0,\mathbf{p}}+ \log Z_{\mathbf{p},N}\}+2\log(N-r)\]
and Proposition 4.7.
## 5 Proof of Theorem 2.1
This section proves Theorem 2.1. Throughout, \(c_{0}\leq N-r\leq N/2\) is assumed. First observe,
\[\begin{split}\mathbb{V}\mathrm{ar}(U-V)&\geq\inf_{\lambda\in\mathbb{R}}\mathbb{V}\mathrm{ar}(U-\lambda V)=(1-\mathbb{C}\mathrm{orr}^{2}(U,V))\mathbb{V}\mathrm{ar}(U)\\ &=(1-\mathbb{C}\mathrm{orr}(U,V))(1+\mathbb{C}\mathrm{orr}(U,V))\mathbb{V}\mathrm{ar}(U).\end{split} \tag{5.1}\]
Apply this to bound \(1-\mathbb{C}\mathrm{orr}(U,V)\) for \(U=\log Z_{0,N}\) and \(V=\log Z_{0,r}\). By the FKG inequality, \(\mathbb{C}\mathrm{orr}(\log Z_{0,N},\log Z_{0,r})\in[0,1]\). (5.1) gives
\[\frac{\inf_{\lambda\in\mathbb{R}}\mathbb{V}\mathrm{ar}(\log Z_{0,N}-\lambda \log Z_{0,r})}{2\mathbb{V}\mathrm{ar}(\log Z_{0,N})}\leq 1-\mathbb{C}\mathrm{orr}( \log Z_{0,N},\log Z_{0,r})\leq\frac{\mathbb{V}\mathrm{ar}(\log Z_{0,N}-\log Z _{0,r})}{\mathbb{V}\mathrm{ar}(\log Z_{0,N})}. \tag{5.2}\]
Since Theorem 3.21 gives \(\mathbb{V}\mathrm{ar}(\log Z_{0,N})\geq CN^{2/3}\), the lower bound of Theorem 2.1 follows from the second inequality of (5.2) and
\[\mathbb{V}\mathrm{ar}(\log Z_{0,N}-\log Z_{0,r})\leq C(N-r)^{2/3}. \tag{5.3}\]
To show (5.3), apply the inequality \(\mathbb{V}\mathrm{ar}(A)\leq 2(\mathbb{V}\mathrm{ar}(B)+\mathbb{E}[(A-B)^{2}])\) to \(A=\log Z_{0,N}-\log Z_{0,r}\) and \(B=\log Z_{r,N}\). The bound \(\mathbb{V}\mathrm{ar}(B)\leq C(N-r)^{2/3}\) follows from Theorem 3.21, and \(\mathbb{E}[(A-B)^{2}]\leq C(N-r)^{2/3}\) follows from Theorem 4.8. The proof of the lower bound of Theorem 2.1 is complete.
We turn to prove the upper bound of Theorem 2.1 by bounding a conditional variance. Recall that \(\llbracket 0,(N,N)\rrbracket\) is the square with lower left corner at \((0,0)\) and upper right corner at \((N,N)\). Let \(\mathcal{F}\) be the \(\sigma\)-algebra generated by the weights in \(\llbracket 0,(N,N)\rrbracket\) that lie on or below the anti-diagonal line \(\mathcal{L}_{r}\). Note that \(\log Z_{0,r}\) is \(\mathcal{F}\)-measurable.
\[\mathbb{V}\mathrm{ar}(\log Z_{0,N}|\mathcal{F}) =\mathbb{V}\mathrm{ar}(\log Z_{0,N}-\log Z_{0,r}|\mathcal{F})\] \[=\mathbb{E}\Big{[}\Big{(}\log Z_{0,N}-\log Z_{0,r}-\mathbb{E}[ \log Z_{0,N}-\log Z_{0,r}|\mathcal{F}]\Big{)}^{2}\Big{|}\mathcal{F}\Big{]}. \tag{5.4}\]
We develop a lower bound for the last conditional expectation above.
By Theorem 4.8,
\[\Big{|}\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}]-\mathbb{E}[\log Z_{r,N}]\Big{|} \leq C(N-r)^{1/3}. \tag{5.5}\]
In Proposition 3.7 the centering \(2Nf_{d}\) can be replaced with \(\mathbb{E}[\log Z_{0,N}]\) because \(\mathbb{E}[\log Z_{0,N}]\leq 2Nf_{d}\) by superadditivity. Thus altered, Proposition 3.7 and (5.5) give
\[e^{-C_{13}t^{3/2}} \leq\mathbb{P}\big{(}\log Z_{r,N}-\mathbb{E}[\log Z_{r,N}]\geq t(N-r)^{1/3}\big{)}\] \[\leq\mathbb{P}\big{(}\log Z_{r,N}-\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}]\geq(t-C)(N-r)^{1/3}\big{)}.\]
Let \(s_{0}\) be a large constant and define the event
\[A_{r,N}=\big{\{}\log Z_{r,N}-\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}]\geq s_{0}(N- r)^{1/3}\big{\}}. \tag{5.6}\]
\(A_{r,N}\) is independent of \(\mathcal{F}\) and \(\mathbb{P}(A_{r,N})\) is bounded below independently of \(r\) and \(N\).
Next, using Chebyshev's inequality we get
\[\mathbb{P}\Big{(}\Big{|}\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}|\mathcal{F}]-\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}]\Big{|}>t(N-r)^{1/3}\Big{)}\] \[\leq\frac{\mathbb{V}\text{ar}(\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}|\mathcal{F}])}{t^{2}(N-r)^{2/3}}\] \[\leq\frac{\mathbb{V}\text{ar}(\log Z_{0,N}-\log Z_{0,r})}{t^{2}(N-r)^{2/3}}\leq C/t^{2}\qquad\text{by (5.3).}\]
By choosing \(t\) and \(s_{0}\) large enough, there is an event \(B_{r,N}\in\mathcal{F}\), with positive probability bounded below independently of \(N\) and \(r\), on which
\[\Big{|}\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}|\mathcal{F}]-\mathbb{E}[\log Z_{0, N}-\log Z_{0,r}]\Big{|}\leq\frac{s_{0}}{10}(N-r)^{1/3}. \tag{5.7}\]
On \(A_{r,N}\cap B_{r,N}\) we have the following bound, using first the superadditivity \(\log Z_{0,N}-\log Z_{0,r}\geq\log Z_{r,N}\), then (5.7), and lastly (5.6):
\[\log Z_{0,N}-\log Z_{0,r}-\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}| \mathcal{F}]\] \[\geq\log Z_{r,N}-\mathbb{E}[\log Z_{0,N}-\log Z_{0,r}]-\frac{s_{ 0}}{10}(N-r)^{1/3}\geq\frac{9s_{0}}{10}(N-r)^{1/3}.\]
Square this bound and insert it inside the conditional expectation on line (5.4). Continuing from that line, we then have
\[\mathbb{V}\text{ar}(\log Z_{0,N}|\mathcal{F})\geq C(N-r)^{2/3}\,\mathbb{E}[ \mathbf{1}_{A_{r,N}}\mathbf{1}_{B_{r,N}}|\mathcal{F}]\geq C(N-r)^{2/3}\, \mathbf{1}_{B_{r,N}}.\]
By the law of total variance, for all \(\lambda\in\mathbb{R}\),
\[\mathbb{V}\text{ar}(\log Z_{0,N}-\lambda\log Z_{0,r})\] \[=\mathbb{E}\big{[}\mathbb{V}\text{ar}(\log Z_{0,N}-\lambda\log Z _{0,r}|\mathcal{F})\big{]}+\mathbb{V}\text{ar}\big{[}\mathbb{E}(\log Z_{0,N}- \lambda\log Z_{0,r}|\mathcal{F})\big{]}\] \[\geq\mathbb{E}\big{[}\mathbb{V}\text{ar}(\log Z_{0,N}-\lambda\log Z _{0,r}|\mathcal{F})\big{]}\] \[=\mathbb{E}\big{[}\mathbb{V}\text{ar}(\log Z_{0,N}|\mathcal{F}) \big{]}\geq C(N-r)^{2/3}\,\mathbb{P}(B_{r,N})\geq C(N-r)^{2/3}.\]
Apply this lower bound to the numerator of the first member of (5.2) and apply Theorem 3.21 to the denominator. The upper bound of Theorem 2.1 has been established.
## 6 Proof of Theorem 2.2
We assume throughout that \(c_{0}\leq r\leq N/2\). First, we prove the upper bound. By the Cauchy-Schwarz inequality and the independence of \(Z_{0,r}\) and \(Z_{r,N}\),
\[\mathbb{C}\mathrm{ov}(\log Z_{0,r},\log Z_{0,N}) =\mathbb{C}\mathrm{ov}(\log Z_{0,r},\log Z_{0,N}-\log Z_{r,N})\] \[\leq\mathbb{V}\mathrm{ar}(\log Z_{0,r})^{1/2}\cdot\mathbb{V} \mathrm{ar}(\log Z_{0,N}-\log Z_{r,N})^{1/2}.\]
It therefore suffices to show that both variances above have upper bounds of the order \(r^{2/3}\). The first variance satisfies \(\mathbb{V}\mathrm{ar}(\log Z_{0,r})\leq Cr^{2/3}\) by Theorem 3.21. The second variance can be bounded again using the inequality \(\mathbb{V}\mathrm{ar}(A)\leq 2(\mathbb{V}\mathrm{ar}(B)+\mathbb{E}[(A-B)^{2}])\) with \(A=\log Z_{0,N}-\log Z_{r,N}\) and \(B=\log Z_{0,r}\). The bound \(\mathbb{V}\mathrm{ar}(B)\leq Cr^{2/3}\) follows from Theorem 3.21, and \(\mathbb{E}[(A-B)^{2}]\leq Cr^{2/3}\) from Theorem 4.8 with the parameters \(r\) and \(N-r\) swapped. This finishes the proof of the upper bound. The remainder of this section is dedicated to the lower bound of Theorem 2.2.
Our approach follows ideas from [9, 11] that we now describe. For \(\theta>0\), let \(\mathcal{F}_{\theta}\) denote the \(\sigma\)-algebra generated by the weights in the set \([\![(0,0),(N,N)]\!]\setminus R_{0,r}^{\theta r^{2/3}}\). In Section 6.6, we will show that there exists an event \(\mathcal{E}_{\theta}\in\mathcal{F}_{\theta}\) with \(\mathbb{P}(\mathcal{E}_{\theta})\geq\epsilon_{0}>0\) (\(\epsilon_{0}\) independent of \(r\) and \(N\)) such that
\[\mathbb{C}\mathrm{ov}(\log Z_{0,N},\log Z_{0,r}|\mathcal{F}_{\theta})(\omega )\geq Cr^{2/3}\quad\text{ for }\omega\in\mathcal{E}_{\theta}. \tag{6.1}\]
Then, by (6.1) and applying the FKG inequality twice, we have
\[\mathbb{E}[\log Z_{0,N}\log Z_{0,r}]\] \[=\mathbb{E}\Big{[}\mathbb{E}[\log Z_{0,N}\log Z_{0,r}|\mathcal{ F}_{\theta}]\Big{]}\] \[=\int_{\mathcal{E}_{\theta}}\mathbb{E}[\log Z_{0,N}\log Z_{0,r}| \mathcal{F}_{\theta}]\,d\mathbb{P}+\int_{\mathcal{E}_{\theta}^{c}}\mathbb{E} [\log Z_{0,N}\log Z_{0,r}|\mathcal{F}_{\theta}]\,d\mathbb{P}\] \[\geq\int_{\mathcal{E}_{\theta}}\mathbb{E}[\log Z_{0,N}|\mathcal{ F}_{\theta}]\mathbb{E}[\log Z_{0,r}|\mathcal{F}_{\theta}]\,d\mathbb{P}+C \epsilon_{0}r^{2/3}+\int_{\mathcal{E}_{\theta}^{c}}\mathbb{E}[\log Z_{0,N}| \mathcal{F}_{\theta}]\mathbb{E}[\log Z_{0,r}|\mathcal{F}_{\theta}]\,d \mathbb{P}\] \[=\mathbb{E}\Big{[}\mathbb{E}[\log Z_{0,N}|\mathcal{F}_{\theta}] \mathbb{E}[\log Z_{0,r}|\mathcal{F}_{\theta}]\Big{]}+C\epsilon_{0}r^{2/3}\] \[\geq\mathbb{E}[\log Z_{0,N}]\mathbb{E}[\log Z_{0,r}]+C\epsilon_{ 0}r^{2/3}.\]
This shows \(\mathbb{C}\mathrm{ov}(\log Z_{0,N},\log Z_{0,r})\geq Cr^{2/3}\), hence the lower bound in our theorem. In the next few sections, we prove (6.1).
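As a quick numerical sanity check (a toy sketch, not the actual free energies), the FKG/Harris inequality used twice above says that coordinatewise increasing functions of the i.i.d. weights are positively correlated; the functionals \(f,g\) below are arbitrary increasing stand-ins.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, samples = 2.0, 10, 200_000
w = 1.0 / rng.gamma(mu, size=(samples, n))   # i.i.d. inverse-gamma weights (toy dimension n)

# Two coordinatewise increasing functionals of the weights (stand-ins for log-partition functions).
f = np.log(w[:, :6].sum(axis=1))
g = np.log(w[:, 3:].sum(axis=1))             # shares coordinates 3, 4, 5 with f

cov = (f * g).mean() - f.mean() * g.mean()
print("empirical Cov(f, g) =", cov, " (FKG/Harris predicts >= 0)")
```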
### 6.1 Barrier event \(\mathcal{B}_{\text{bar}}\)
This section defines a barrier event \(\mathcal{B}_{\text{bar}}\in\mathcal{F}_{\theta}\) and investigates consequences of conditioning on it. Fix the parameters \(0<\theta<1/2,\phi_{1}=\theta^{-10},\phi_{2}=\theta^{-100}\), and \(L=\theta^{-1000}.\) We have the freedom to decrease \(\theta\) if necessary, so that \(0<\theta\leq\theta_{0}\).
Next, we define a barrier event around the rectangle \(R_{0,r}^{\theta r^{2/3}}\). This part of the construction is illustrated on the left of Figure 6.1. The region \(R_{0,r}^{\phi_{2}r^{2/3}}\setminus R_{0,r}^{\theta r^{2/3}}\) is formed by two disjoint rectangles, \(U_{1}\) above the diagonal and \(U_{2}\) below the diagonal.
The anti-diagonal lines \(\{\mathcal{L}_{kr/L}\}_{k=1}^{L-1}\) cut each of \(U_{1}\) and \(U_{2}\) into \(L\) small rectangles. If we fix \(r\) sufficiently large depending on \(L\), these rectangles are not degenerate. Denote these small rectangles by \(U_{i}^{k}\) for \(i=1,2\) and \(k=1,\ldots,L\). Let \(\underline{U}_{i}^{k}\) and \(\overline{U}_{i}^{k}\) denote the top and bottom sides (with slope
\(-1\)) of the rectangle \(U_{i}^{k}\). We define the event
\[\mathcal{B}_{\mathrm{bar}}=\bigcap_{i=1}^{2}\bigcap_{k=1}^{L}\Big{\{}\log Z_{\underline{U}_{i}^{k},\overline{U}_{i}^{k}}^{\mathrm{in},U_{i}^{k}}-2(r/L)f_{d}\leq-Lr^{1/3}\Big{\}}. \tag{6.2}\]
**Lemma 6.1**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) depending on \(\theta\) such that for each \(r\geq c_{0}\), we have_
\[\mathbb{P}(\mathcal{B}_{\mathrm{bar}})\geq e^{-e^{L^{100}}}.\]
Proof.: Note \(\mathcal{B}_{\mathrm{bar}}\) is the intersection of \(2L\) events, of equal probability by translation invariance. It therefore suffices to lower bound the probability
\[\mathbb{P}\Big{(}\log Z_{\underline{U}_{1}^{1},\overline{U}_{1}^{1}}^{\mathrm{in},U_{1}^{1}}-2(r/L)f_{d}\leq-Lr^{1/3}\Big{)}. \tag{6.3}\]
The following construction is illustrated on the right of Figure 6.1. Using diagonal and anti-diagonal lines, we cut the rectangle \(U_{1}^{1}\) into smaller rectangles whose diagonal \(\ell^{\infty}\)-length is \(\frac{r}{L^{30}}\) and whose anti-diagonal \(\ell^{\infty}\)-length is \(\frac{r^{2/3}}{L^{20}}\). Then the number of rectangles in this grid (see the right of Figure 6.1) along the diagonal will be \(L^{29}\), and the number of rectangles in the anti-diagonal direction is no more than \(L^{21}\). Let us use \(R(u,v)\) to enumerate these small rectangles, where the index \(u=1,2,\ldots,L^{29}\) records the position along the diagonal direction, and \(v=1,2,\ldots,v_{L}\leq L^{21}\) enumerates the small rectangles along each anti-diagonal line. Let us also use \(\mathcal{L}(u)\) to denote the anti-diagonal line which contains the upper anti-diagonal side of \(R(u,v)\), and let \(\underline{R(u,v)}\) denote the lower anti-diagonal side of \(R(u,v)\).
Let \(D\) be the small constant from Proposition 8.3, and define the event
\[A=\bigcap_{u,v}\Big{\{}\log Z_{\underline{R(u,v)},\mathcal{L}(u)}-2(r/L^{30} )f_{d}\leq-D(r/L^{30})^{1/3}\Big{\}}. \tag{6.4}\]
Using the FKG inequality and Proposition 8.3, we have \(\mathbb{P}(A)\geq e^{-e^{L^{99}}}\).
Figure 6.1: _Left_: A demonstration of the region used to define the barrier event \(\mathcal{B}_{\mathrm{bar}}\) in (6.2). _Right_: construction used in the event \(A\) in (6.4).
Next, the constrained free energy can be upper bounded by
\[\begin{split}\log Z_{\underline{U}_{1}^{1},\overline{U}_{1}^{1}}^{\mathrm{in},U_{1}^{1}}&\leq\sum_{u=1}^{L^{29}}\Big{(}100\log L+\max_{v}\log Z_{\underline{R(u,v)},\mathcal{L}(u)}\Big{)}\\ &\leq L^{29}\Big{(}100\log L+\max_{u,v}\log Z_{\underline{R(u,v)},\mathcal{L}(u)}\Big{)}\\ \text{(restricting to the event }A\text{)}\quad&\leq L^{29}\Big{(}100\log L+2(r/L^{30})f_{d}-Dr^{1/3}/L^{10}\Big{)}\\ &\leq 2(r/L)f_{d}-DL^{19}r^{1/3}+L^{30}\\ &\leq 2(r/L)f_{d}-Lr^{1/3}\end{split}\]
provided that \(\theta_{0}\) is sufficiently small (which makes \(L\) large) and \(c_{0}\) is then taken sufficiently large. With this, we have shown

\[(6.3)\geq e^{-e^{L^{100}}},\]

which finishes the proof of the lemma.
### 6.2 Concentration of the free energy between \((0,0)\) and \(\mathcal{L}_{r}\)
Our goal in this section is to show that when conditioned on \(\mathcal{B}_{\text{bar}}\) the free energy \(\log Z_{0,\mathcal{L}_{r}}\) is concentrated on paths that go from \((0,0)\) to \(\mathcal{L}_{r}^{r^{2/3}}\) and are contained between the diagonal sides of the rectangle \(\mathcal{R}_{0,r-r/L}^{3\theta r^{2/3}}\). This is stated in Proposition 6.9 at the end of this subsection.
Before stating Proposition 6.9, we define our high probability events. To start, split the collection of paths from \((0,0)\) to \(\mathcal{L}_{r}\) as follows. First, let
\[\mathfrak{A} =\text{all paths from }(0,0)\text{ to }\mathcal{L}_{r}^{\phi_{1}r^{2/3 }}\text{ that stay inside }R_{0,r}^{\phi_{2}r^{2/3}}.\] \[\mathfrak{B} =\text{all other paths from }(0,0)\text{ to }\mathcal{L}_{r}.\]
Then among \(\mathfrak{A}\), let us further split \(\mathfrak{A}=\mathfrak{A}_{1}\cup\mathfrak{A}_{2}\cup\mathfrak{A}_{3}\cup \mathfrak{A}_{4}\) where
\[\mathfrak{A}_{1}=\text{paths from }(0,0)\text{ to }\mathcal{L}_{r}^{r^{2/3}}\text{ that stay between the diagonal sides of }R_{0,r-r/L}^{3\theta r^{2/3}}\text{ and}\] \[\text{touch each }R_{ir/L,(i+1)r/L}^{\theta r^{2/3}}\text{ for }i=0,1,\dots,L-2,\] \[\mathfrak{A}_{2}=\text{paths that avoid at least one of }R_{ir/L,(i+1)r/L}^{\theta r^{2/3}}\text{ completely for }i=0,1,\dots,L-2,\] \[\mathfrak{A}_{3}=\text{paths that exit from the diagonal sides of }R_{0,r}^{3\theta r^{2/3}}\text{ and}\] \[\text{intersect }R_{ir/L,(i+1)r/L}^{\theta r^{2/3}}\text{ for all }i=0,1,\dots,L-2,\] \[\mathfrak{A}_{4}=\text{paths from }(0,0)\text{ to }\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\setminus\mathcal{L}_{r}^{r^{2/3}}\text{ that stay between the diagonal sides of }R_{0,r-r/L}^{3\theta r^{2/3}}\text{ and}\] \[\text{touch each }R_{ir/L,(i+1)r/L}^{\theta r^{2/3}}\text{ for }i=0,1,\dots,L-2, \tag{6.5}\]
And among \(\mathfrak{B}\), we write \(\mathfrak{B}=\mathfrak{A}_{5}\cup\mathfrak{A}_{6}\) where
\[\mathfrak{A}_{5}=\text{all paths from }(0,0)\text{ to }\mathcal{L}_{r} \setminus\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\] \[\mathfrak{A}_{6}=\text{all paths from }(0,0)\text{ to }\mathcal{L}_{r}^{\phi_{1}r^{2/3 }}\text{ that exit }R_{0,r}^{\phi_{2}r^{2/3}}\]
Let \(\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{i})\) denote the free energy when we sum over only the paths inside \(\mathfrak{A}_{i}\), and define the following events:
\[\mathcal{A}_{1} =\big{\{}\log Z_{0,r}^{\mathrm{in},\theta r^{2/3}}-2rf_{d}\geq- \theta^{-5}r^{1/3}\big{\}},\] \[\mathcal{A}_{2} =\big{\{}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2})-2rf_{d} \leq-\theta^{-900}r^{1/3}\big{\}}\] \[\mathcal{A}_{3} =\big{\{}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{3})-2rf_{d} \leq-\theta^{-900}r^{1/3}\big{\}}.\] \[\mathcal{A}_{4} =\big{\{}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{4})-2rf_{d} \leq-\theta^{-900}r^{1/3}\big{\}}.\] \[\mathcal{A}_{5} =\big{\{}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{5})-2rf_{d} \leq-\theta^{-10}r^{1/3}\big{\}}\] \[\mathcal{A}_{6} =\big{\{}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{6})-2rf_{d} \leq-\theta^{-100}r^{1/3}\big{\}}\]
Next, we show that all six events are likely events.
**Proposition 6.2**.: _For each \(i=1,\ldots,6\), there exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}(\mathcal{A}_{i}|\mathcal{B}_{\mathrm{bar}})\geq 1-e^{-\theta^{-2}}.\]
To prove Proposition 6.2, we will split it into six separate lemmas, according to \(i=1,\ldots,6\).
**Lemma 6.3**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}(\mathcal{A}_{1}|\mathcal{B}_{\mathrm{bar}})\geq 1-e^{-\theta^{-2}}.\]
Proof.: We upper bound \(\mathbb{P}(\mathcal{A}_{1}^{c}|\mathcal{B}_{\mathrm{bar}})\). By independence and Theorem 3.16,
\[\mathbb{P}(\mathcal{A}_{1}^{c}|\mathcal{B}_{\mathrm{bar}})=\mathbb{P}\Big{(} \log Z_{0,r}^{\mathrm{in},\theta r^{2/3}}-2rf_{d}\leq-\theta^{-5}r^{1/3}\Big{)} \leq e^{-\theta^{-2}}.\]
**Lemma 6.4**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}(\mathcal{A}_{2}|\mathcal{B}_{\mathrm{bar}})\geq 1-e^{-\theta^{-10}}.\]
Proof.: Let us further rewrite \(\mathfrak{A}_{2}\) as a non-disjoint union of paths \(\bigcup_{k=1}^{L-2}\mathfrak{A}_{2}^{k,+}\cup\mathfrak{A}_{2}^{k,-}\) where \(\mathfrak{A}_{2}^{k,+}\) and \(\mathfrak{A}_{2}^{k,-}\) are the collections of paths which avoid \(R_{kr/L,(k+1)r/L}^{\theta r^{2/3}}\) by going above or below. For simplicity of the notation, let
\[U_{k}^{+}=\mathcal{L}^{\frac{\phi_{2}-\theta}{2}r^{2/3}}_{(\frac{kr}{L}-\frac{\phi_{2}+\theta}{2}r^{2/3},\,\frac{kr}{L}+\frac{\phi_{2}+\theta}{2}r^{2/3})}\quad\text{ and }\quad U_{k}^{-}=\mathcal{L}^{\frac{\phi_{2}-\theta}{2}r^{2/3}}_{(\frac{kr}{L}+\frac{\phi_{2}+\theta}{2}r^{2/3},\,\frac{kr}{L}-\frac{\phi_{2}+\theta}{2}r^{2/3})}.\]
For each fixed \(k\in\llbracket 1,L-2\rrbracket\) and \(\Box\in\{+,-\}\), we have the upper bound
\[\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2}^{k,\Box})\leq\log Z_{0,\mathcal{L }_{kr/L}}+\log Z_{U_{k}^{\Box},U_{k+1}^{\Box}}+\log Z_{\mathcal{L}_{(k+1)r/L},\mathcal{L}_{r}}. \tag{6.6}\]
Since we are working conditionally on the event \(\mathcal{B}_{\mathrm{bar}}\), we have
\[\log Z_{U_{k}^{\Box},U_{k+1}^{\Box}}-2(r/L)f_{d}\leq-Lr^{1/3}. \tag{6.7}\]
Using the FKG inequality and the interval to line estimate from Theorem 3.15, we have
\[\begin{split}&\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{kr/L}}-2(kr/ L)f_{d}\geq\sqrt{L}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\qquad\qquad\qquad\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{kr /L}}-2(kr/L)f_{d}\geq\sqrt{L}r^{1/3}\Big{)}\leq e^{-\theta^{-100}}\\ &\mathbb{P}\Big{(}\log(Z_{\mathcal{L}_{(k+1)r/L},\mathcal{L}_{r} })-2((L-k-1)r/L)f_{d}\geq\sqrt{L}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)} \\ &\qquad\qquad\qquad\qquad\leq\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{ (k+1)r/L},\mathcal{L}_{r}}-2((L-k-1)r/L)f_{d}\geq\sqrt{L}r^{1/3}\Big{)}\leq e^ {-\theta^{-100}}.\end{split} \tag{6.8}\]
From (6.6), (6.7), and (6.8), we obtain the following estimate for each fixed \(k=1,2,\ldots,L-2\):
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2}^{k,\square})-2rf_ {d}\geq-\tfrac{L}{2}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\leq e^{- \theta^{-90}}. \tag{6.9}\]
Using (6.9), a union bound gives the desired result of the lemma:
\[\begin{split}&\mathbb{P}\Big{(}\mathcal{A}_{2}^{c}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\leq\mathbb{P}\Big{(}\Big{\{}\log\Big{(}\sum_{k=1}^{L-2}Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2}^{k,+})+Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2}^{k,-})\Big{)}-2rf_{d}\geq-\tfrac{L}{10}r^{1/3}\Big{\}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\leq\mathbb{P}\Big{(}\Big{\{}\max_{\begin{subarray}{c}k\in[\![1,L-2]\!]\\ \square\in\{+,-\}\end{subarray}}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2}^{k,\square})+2\log L-2rf_{d}\geq-\tfrac{L}{10}r^{1/3}\Big{\}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\leq\sum_{k=1}^{L-2}\sum_{\square\in\{+,-\}}\mathbb{P}\Big{(}\Big{\{}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{2}^{k,\square})-2rf_{d}\geq-\tfrac{L}{9}r^{1/3}\Big{\}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\leq e^{-\theta^{-10}}\end{split}\]
where the last inequality comes from (6.9), provided \(\theta_{0}\) is sufficiently small.
**Lemma 6.5**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}\Big{(}\mathcal{A}_{3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\geq 1 -e^{-\theta^{-10}}.\]
Proof.: As before, let us rewrite \(\mathfrak{A}_{3}\) as a non-disjoint union of path collections \(\bigcup_{k=0}^{L-2}\mathfrak{A}_{3}^{k,+}\cup\mathfrak{A}_{3}^{k,-}\), where \(\mathfrak{A}_{3}^{k,+}\) and \(\mathfrak{A}_{3}^{k,-}\) are the collections of paths which exit from the upper and lower diagonal sides, respectively, of the rectangle \(R_{kr/L,(k+1)r/L}^{3\theta r^{2/3}}\).
Let us fix \(k\) and look at \(\mathfrak{A}_{3}^{k,+}\). We will show that all paths in this collection must have high transversal fluctuations. This fact is illustrated in Figure 6.2. First, we further break up this collection of paths by where they cross the lines \(\mathcal{L}_{kr/L}\) and \(\mathcal{L}_{(k+1)r/L}\). For \(i,j\in\mathbb{Z}_{\geq 0}\), let
\[\begin{split} I_{kr/L}^{i}&=\mathcal{L}_{(kr/L-(i+ \frac{1}{2})\theta r^{2/3},kr/L+(i+\frac{1}{2})\theta r^{2/3})}^{\frac{1}{2} \theta r^{2/3}}\\ J_{(k+1)r/L}^{j}&=\mathcal{L}_{((k+1)r/L-(j+\frac{1 }{2})\theta r^{2/3},(k+1)r/L+(j+\frac{1}{2})\theta r^{2/3})}^{\frac{1}{2} \theta r^{2/3}}.\end{split}\]
Then any path in \(\mathfrak{A}_{3}^{k,+}\) must cross \(I_{kr/L}^{i}\) and \(J_{(k+1)r/L}^{j}\) for some \(i,j\in[\![0,\phi_{2}\theta^{-1}]\!]\). Thus we may rewrite \(\mathfrak{A}_{3}^{k,+}\) as a non-disjoint union \(\bigcup_{i,j=0}^{\phi_{2}\theta^{-1}}\mathfrak{A}_{3}^{k,+}(i,j)\) where \(\mathfrak{A}_{3}^{k,+}(i,j)\) is the collection of paths
Figure 6.2: An illustration of the paths from the collection \(\mathfrak{A}_{3}^{k,+}\). A path in this collection must intersect the two gray rectangles shown in the picture. The top two paths cross the lines \(\mathcal{L}_{kr/L}\) and \(\mathcal{L}_{(k+1)r/L}\) at neighboring segments (the case \(|i-j|\leq 1\) from the proof of Lemma 6.5). They must have a high transversal fluctuation between \(\mathcal{L}_{kr/L}\) and \(\mathcal{L}_{(k+1)r/L}\) because they have to reach the gray rectangles. The bottom picture shows a path that crosses the lines \(\mathcal{L}_{kr/L}\) and \(\mathcal{L}_{(k+1)r/L}\) at non-neighboring positions (the case \(|i-j|\geq 2\)). This path has a high transversal fluctuation between \(\mathcal{L}_{kr/L}\) and \(\mathcal{L}_{(k+1)r/L}\) because of these non-neighboring crossing positions.
inside \(\mathfrak{A}_{3}^{k,+}\) that go through \(I^{i}_{kr/L}\) and \(J^{j}_{(k+1)r/L}\). Next, we split into two cases according to whether \(|i-j|\leq 1\) or not.
By our assumption, the paths inside \(\mathfrak{A}_{3}^{k,+}\) must intersect \(R^{\theta r^{2/3}}_{kr/L,(k+1)r/L}\) while also exiting through the upper diagonal side of \(R^{3\theta r^{2/3}}_{kr/L,(k+1)r/L}\). If \(|i-j|\leq 1\), there must be an unusually large transversal fluctuation, of size at least \(\theta r^{2/3}=(\theta L^{2/3})(r/L)^{2/3}\), for the segment of any path in \(\mathfrak{A}_{3}^{k,+}(i,j)\) between \(I^{i}_{kr/L}\) and \(J^{j}_{(k+1)r/L}\). We may invoke Theorem 3.13 and obtain that for \(|i-j|\leq 1\),
\[\mathbb{P}\Big{(}\log Z_{I^{i}_{kr/L},J^{j}_{(k+1)r/L}}(\mathfrak{A}_{3}^{k,+}(i,j))-2(r/L)f_{d}\geq-D(\theta L^{2/3})^{2}(r/L)^{1/3}\Big{)}\leq e^{-\theta^{-100}} \tag{6.10}\]
for some small constant \(D\).
Next, when \(|i-j|\geq 2\), there is already a large transversal fluctuation, of size at least \(\theta r^{2/3}=(\theta L^{2/3})(r/L)^{2/3}\), for the segment of any path in \(\mathfrak{A}_{3}^{k,+}(i,j)\) between \(I^{i}_{kr/L}\) and \(J^{j}_{(k+1)r/L}\). By Proposition 3.12, we obtain that for \(|i-j|\geq 2\),
\[\mathbb{P}\Big{(}\log Z_{I^{i}_{kr/L},J^{j}_{(k+1)r/L}}(\mathfrak{A}_{3}^{k,+} (i,j))-2(r/L)f_{d}\geq-D(\theta L^{2/3})^{2}(r/L)^{1/3}\Big{)}\leq e^{-\theta^ {-100}} \tag{6.11}\]
for some small constant \(D\).
By the FKG inequality, (6.10) and (6.11) still hold if we replace the probability measure \(\mathbb{P}\) with \(\mathbb{P}(\cdot|\mathcal{B}_{\mathrm{bar}})\). Now as before, we upper bound the value of the free energy of the paths outside the region between \(\mathcal{L}_{kr/L}\) and \(\mathcal{L}_{(k+1)r/L}\) by (6.8). We obtain
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{3}^{k,\square}(i,j) )-2rf_{d}\geq-L^{0.99}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\leq e^{- \theta^{-90}}. \tag{6.12}\]
We have
\[\begin{split}\mathbb{P}\Big{(}\mathcal{A}_{3}^{c}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}&\leq\mathbb{P}\Big{(}\Big{\{}\max_{\begin{subarray}{c}k\in[\![0,L-2]\!]\\ \square\in\{+,-\}\\ i,j\in[\![0,\phi_{2}\theta^{-1}]\!]\end{subarray}}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{3}^{k,\square}(i,j))-2rf_{d}\geq-L^{0.95}r^{1/3}\Big{\}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\leq\sum_{\begin{subarray}{c}k\in[\![0,L-2]\!]\\ \square\in\{+,-\}\\ i,j\in[\![0,\phi_{2}\theta^{-1}]\!]\end{subarray}}\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{3}^{k,\square}(i,j))-2rf_{d}\geq-L^{0.95}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\\ &\leq e^{-\theta^{-80}}\end{split}\]
where the last inequality uses (6.12), provided \(\theta_{0}\) is sufficiently small.
**Lemma 6.6**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}\Big{(}\mathcal{A}_{4}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\geq 1 -e^{-\theta^{-2}}.\]
Proof.: This is simply because the last part of these paths has a very large transversal fluctuation:
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\setminus\mathcal{L}_{r}^{r^{2/3}}}(\mathfrak{A}_{4})-2rf_{d}\geq-L^{0.9}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\]
\[\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r-r/L}}+\log Z_{\mathcal{L}^{3\theta r^{2/3}}_{r-r/L},\,\mathcal{L}^{\phi_{1}r^{2/3}}_{r}\setminus\mathcal{L}^{r^{2/3}}_{r}}-2rf_{d}\geq-L^{0.9}r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r-r/L}}-2(r-r/L)f_{d}\geq(L-L^{0.9})r^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)} \tag{6.13}\] \[\qquad\qquad+\mathbb{P}\Big{(}\log Z_{\mathcal{L}^{3\theta r^{2/3}}_{r-r/L},\,\mathcal{L}^{\phi_{1}r^{2/3}}_{r}\setminus\mathcal{L}^{r^{2/3}}_{r}}-2(r/L)f_{d}\geq-Lr^{1/3}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}. \tag{6.14}\]
By the FKG inequality and Theorem 3.15,
\[(6.13)\leq\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{r-r/L}}-2(r-r/L)f_{d}\geq(L-L^{0.9})r^{1/3}\Big{)}\leq e^{-L^{0.1}}.\]
By the FKG inequality and Proposition 3.12,
\[(6.14)\leq\mathbb{P}\Big{(}\log Z_{\mathcal{L}^{3\theta r^{2/3}}_{r-r/L},\,\mathcal{L}^{\phi_{1}r^{2/3}}_{r}\setminus\mathcal{L}^{r^{2/3}}_{r}}-2(r/L)f_{d}\geq-Lr^{1/3}\Big{)}\leq e^{-L^{0.1}}.\]
With these, we have finished the proof of this lemma.
**Lemma 6.7**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}\Big{(}\mathcal{A}_{5}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\geq 1 -e^{-\theta^{-2}}.\]
Proof.: By the FKG inequality, it suffices to show \(\mathbb{P}(\mathcal{A}_{5}^{c})\leq e^{-\theta^{-2}}.\) Then, this estimate follows directly from Proposition 3.12.
**Lemma 6.8**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\),_
\[\mathbb{P}\Big{(}\mathcal{A}_{6}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{)}\geq 1 -e^{-\theta^{-2}}.\]
Proof.: By the FKG inequality, it suffices to show \(\mathbb{P}(\mathcal{A}_{6})\leq e^{-\theta^{-2}}.\) Then, this estimate follows directly from Theorem 3.13.
With these lemmas, we have shown Proposition 6.2. Finally, we have the following proposition.
**Proposition 6.9**.: _On the event \(\cap_{i=1}^{6}\mathcal{A}_{i}\), we have_
\[\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})\leq\log Z_{0,\mathcal{L}_{r}} \leq\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})+\log 6.\]
Proof.: This follows directly from the definition of our events \(\mathcal{A}_{i}\) and the fact
\[\max_{j}\{\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{j})\}\leq\log Z_{0, \mathcal{L}_{r}}\leq\max_{j}\{\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{j})\}+ \log 6\]
and \(\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})\geq\log Z_{0,r}^{\mathrm{in}, \theta r^{2/3}}.\)
### 6.3 Concentration of the global free energy along \(\mathcal{L}_{r}\)
Define \(\mathbf{p}^{*}\) to be the maximizer in
\[\max_{\mathbf{p}\in\mathcal{L}_{r}}\bigl{\{}\log Z_{0,\mathbf{p}}+\log Z_{ \mathbf{p},N}\bigr{\}}.\]
Our goal in this section is to show that when conditioned on \(\mathcal{B}_{\mathrm{bar}}\), with high probability, \(\mathbf{p}^{*}\in\mathcal{L}_{r}^{r^{2/3}}\). This is stated as Proposition 6.12 at the end of this subsection.
Again, we start by defining our high probability events,
\[\begin{split}\mathcal{E}_{1}&=\bigcap_{j=1}^{\phi_{2}}\Big{\{}\log Z^{\max}_{\mathcal{L}_{r}^{jr^{2/3}},N}-\log Z_{r,N}\leq\theta^{-1}\sqrt{jr^{2/3}}\Big{\}}\\ \mathcal{E}_{2}&=\Big{\{}\max_{\mathbf{p}\in\mathcal{L}_{r}\setminus\mathcal{L}_{r}^{\phi_{1}r^{2/3}}}\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\leq\log Z_{0,r}^{\mathrm{in},\theta r^{2/3}}+\log Z_{r,N}-\theta^{-1}r^{1/3}\Big{\}}.\end{split}\]
The next two lemmas show that \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) are high probability events.
**Lemma 6.10**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exist positive constants \(c_{0},N_{0}\) such that for each \(N\geq N_{0}\), \(c_{0}\leq r\leq N/2\), we have_
\[\mathbb{P}(\mathcal{E}_{1}|\mathcal{B}_{\mathrm{bar}})\geq 1-e^{-\theta^{-1/100}}.\]
Proof.: By independence, it suffices to prove this estimate for \(\mathbb{P}(\mathcal{E}_{1})\). We fix \(\theta>0\) small. We will upper bound \(\mathbb{P}(\mathcal{E}_{1}^{c})\) using Proposition 4.3. In our application of this proposition, the variables \(a=\sqrt{jr^{2/3}}\) and \(t=\theta^{-1}\). By Proposition 4.3,
\[\mathbb{P}(\mathcal{E}_{1}^{c})\leq\sum_{j=1}^{\phi_{2}}\mathbb{P}\Big{(}\log Z _{\mathcal{L}_{r}^{jr^{2/3}},N}^{\max}-\log Z_{r,N}\geq\theta^{-1}\sqrt{jr^{2 /3}}\Big{)}\leq\sum_{j=1}^{\phi_{2}}e^{-\theta^{-1/50}}\leq e^{-\theta^{-1/100 }}.\]
**Lemma 6.11**.: _There exists a positive constant \(\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exist positive constants \(c_{0},N_{0}\) such that for each \(N\geq N_{0}\), \(c_{0}\leq r\leq N/2\), we have_
\[\mathbb{P}(\mathcal{E}_{2}|\mathcal{B}_{\mathrm{bar}})\geq 1-e^{-\theta^{-2}}.\]
Proof.: By the FKG inequality, it suffices to show \(\mathbb{P}(\mathcal{E}_{2})\geq 1-e^{-\theta^{-2}}\). To do this, we upper bound \(\mathbb{P}(\mathcal{E}_{2}^{c})\). For simplicity, let us denote \(\mathcal{L}_{(r-2hr^{2/3},r+2hr^{2/3})}^{r^{2/3}}\) simply as \(J^{h}\).
\[\mathbb{P}(\mathcal{E}_{2}^{c}) \leq\sum_{|h|=\phi_{1}/2}^{r^{1/3}}\mathbb{P}\Big{(}\max_{\mathbf{ p}\in J^{h}}\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\geq\log Z_{0,r}^{ \mathrm{in},\theta r^{2/3}}+\log Z_{r,N}-\theta^{-1}r^{1/3}\Big{)}\] \[\leq\sum_{|h|=\phi_{1}/2}^{r^{1/3}}\mathbb{P}\Big{(}\log Z_{0,J^{ h}}^{\mathrm{max}}+\log Z_{J^{h},N}^{\mathrm{max}}-\log Z_{0,r}^{\mathrm{in}, \theta r^{2/3}}-\log Z_{r,N}\geq-\theta^{-1}r^{1/3}\Big{)}\] \[\leq\sum_{|h|=\phi_{1}/2}^{r^{1/3}}\mathbb{P}\Big{(}\log Z_{0,J^{ h}}^{\mathrm{max}}-\log Z_{0,r}^{\mathrm{in},\theta r^{2/3}}\geq-C^{\prime}h^{2}r^{1/3} \Big{)} \tag{6.15}\]
\[+\mathbb{P}\Big{(}\log Z^{\max}_{J^{h},N}-\log Z_{r,N}\geq(C^{\prime}h^{2}- \theta^{-1})r^{1/3}\Big{)} \tag{6.16}\]
where \(C^{\prime}\) is a positive constant which we will fix (independent of \(\theta\)).
Next, since \(h^{2}\geq\theta^{-5}\), we see that (6.16) is bounded by \(e^{-C|h|^{3}}\) as it is exactly the same as (4.5) appearing in the proof of Proposition 4.4.
The probability in (6.15) can be bounded as
\[(6.15)\leq\mathbb{P}\Big{(}\log Z^{\max}_{0,J^{h}}-2rf_{d}\geq-2C^{\prime}h^{2}r^{1/3}\Big{)}+\mathbb{P}\Big{(}\log Z^{\mathrm{in},\theta r^{2/3}}_{0,r}-2rf_{d}\leq-C^{\prime}h^{2}r^{1/3}\Big{)}.\]
Provided that \(C^{\prime}\) is fixed sufficiently small, the two probabilities above are upper bounded by \(e^{-Ch^{2}}\) using Proposition 3.11 and Theorem 3.16. To summarize, we have shown that
\[\mathbb{P}(\mathcal{E}^{c}_{2})\leq\sum_{|h|=\phi_{1}/2}^{\infty}e^{-Ch^{2}} \leq e^{-\theta^{-2}},\]
and this finishes the proof of the lemma.
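For the reader's convenience, here is the short computation behind the final summation, using only \(h^{2}\geq(\phi_{1}/2)\,h\) on the range of summation, a geometric series, and \(\phi_{1}=\theta^{-10}\):

\[\sum_{|h|\geq\phi_{1}/2}e^{-Ch^{2}}\leq 2\sum_{h\geq\phi_{1}/2}e^{-C(\phi_{1}/2)h}\leq\frac{2\,e^{-C\phi_{1}^{2}/4}}{1-e^{-C\phi_{1}/2}}\leq e^{-\theta^{-2}}\]

for \(\theta\) sufficiently small (with \(C\) the constant fixed above).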
**Proposition 6.12**.: _On the event \((\cap_{i=1}^{6}\mathcal{A}_{i})\bigcap(\cap_{j=1}^{2}\mathcal{E}_{j})\), we have \(\mathbf{p}^{*}\in\mathcal{L}_{r}^{r^{2/3}}\)._
Proof.: The main idea of the proof is the following. First, we know that the inequality below must hold
\[\max_{\mathbf{p}\in\mathcal{L}_{r}}\{\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\}-\log Z_{r,N}-\log Z^{\mathrm{in},\theta r^{2/3}}_{0,r}\geq 0.\]
We will show that if \(\mathbf{p}\not\in\mathcal{L}_{r}^{r^{2/3}}\), then on the event \((\cap_{i=1}^{6}\mathcal{A}_{i})\bigcap(\cap_{j=1}^{2}\mathcal{E}_{j})\),

\[\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}-\log Z_{r,N}-\log Z^{\mathrm{in},\theta r^{2/3}}_{0,r}<-r^{1/3}<0,\]

hence it must be true that the maximizer \(\mathbf{p}^{*}\in\mathcal{L}_{r}^{r^{2/3}}\).
First, we note that because we are on the event \(\mathcal{E}_{2}\), the maximizer \(\mathbf{p}^{*}\) must be in \(\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\). Then, within \(\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\), the event \(\cap_{i=1}^{6}\mathcal{A}_{i}\) says that we lose more than \(\theta^{-50}r^{1/3}\) in free energy by going from \((0,0)\) to a point \(\mathbf{p}\in\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\setminus\mathcal{L}_{r}^{r^{2/3}}\) instead of going to \((r,r)\). And above \(\mathcal{L}_{r}\), the event \(\mathcal{E}_{1}\) says that for any \(\mathbf{p}\) inside \(\mathcal{L}_{r}^{\phi_{1}r^{2/3}}\), we gain at most \(\theta^{-1}\sqrt{\phi_{1}}r^{1/3}\) in free energy compared with going from \((r,r)\) to \((N,N)\). Thus the loss \(\theta^{-50}r^{1/3}\) is greater than the gain \(\theta^{-1}\sqrt{\phi_{1}}r^{1/3}\), and hence \(\mathbf{p}^{*}\in\mathcal{L}_{r}^{r^{2/3}}\).
### 6.4 Expectation bounds
In this subsection, we prove two propositions about the expected difference of free energies when conditioning on \(\mathcal{B}_{\mathrm{bar}}\).
**Proposition 6.13**.: _There exist positive constants \(C_{43},\theta_{0}\) such that for \(0<\theta<\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(r\geq c_{0}\), we have_
\[\mathbb{E}\Big{[}\Big{(}\log Z_{0,\mathcal{L}_{r}}-\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}\Big{)}^{2}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{]}\leq C_{43}r^{2/3}.\]
Proof.: Let us denote the high probability event
\[\mathcal{D}=\cap_{i=1}^{6}\mathcal{A}_{i},\]
and we have \(\mathbb{P}(\mathcal{D}^{c}|\mathcal{B}_{\mathrm{bar}})\leq\mathbb{P}(\mathcal{D} ^{c})\leq e^{-\theta^{-2}}\) which is the statement of Proposition 6.2.
Now let us look at the expectation on the event \(\mathcal{D}\). Using Proposition 6.9, we obtain the first inequality below. The second inequality follows from the fact that \(\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})+\log 6\geq\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\) on the event \(\mathcal{D}\), so replacing \(\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\) by a smaller quantity makes the expectation bigger. We have
\[\begin{split}\mathbb{E}\Big{[}\Big{(}\log Z_{0,\mathcal{L}_{r}}-\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{)}^{2}\mathbb{1}_{\mathcal{D}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{]}&\leq\mathbb{E}\Big{[}\Big{(}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})-\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}+\log 6\Big{)}^{2}\mathbb{1}_{\mathcal{D}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{]}\\ &\leq\mathbb{E}\Big{[}\Big{(}\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})-\max_{\mathbf{p}\in R^{\theta r^{2/3}}_{r-\theta^{3/2}r,\,r-\theta^{3/2}r+r/L}}\Big{\{}\log Z_{0,\mathbf{p}}^{\mathrm{in},R_{0,r}^{3\theta r^{2/3}}}+\log Z_{\mathbf{p},r}^{\mathrm{in},R_{0,r}^{\theta r^{2/3}}}\Big{\}}+\log 6\Big{)}^{2}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{]}\end{split} \tag{6.17}\]
To simplify the notation, let us denote the rectangle under the maximum above as
\[R^{*}=R_{r-\theta^{3/2}r,r-\theta^{3/2}r+r/L}^{\theta r^{2/3}}.\]
Now, recall from the definition of \(\mathfrak{A}_{1}\) in (6.5) that every path in \(\mathfrak{A}_{1}\) must touch \(R^{*}\). If we let \(\mathbf{p}^{*}\) be the maximizer of \(\max_{\mathbf{p}\in R^{*}}\log Z_{0,\mathbf{p}}^{\mathrm{in},R_{0,r}^{3\theta r^{2/3}}}+\log Z_{\mathbf{p},r}^{\mathrm{in},R_{0,r}^{\theta r^{2/3}}}\), then
\[\log Z_{0,\mathcal{L}_{r}}(\mathfrak{A}_{1})\leq\log Z_{0,\mathbf{p}^{*}}^{\mathrm{in},3\theta r^{2/3}}+\log Z_{\mathbf{p}^{*},\mathcal{L}_{r}}.\]
Therefore, the expectation (6.17) can be upper bounded by
The remaining estimates bound the right-hand side of (6.17), and hence the expectation on \(\mathcal{D}\), by \(Cr^{2/3}\); the expectation on \(\mathcal{D}^{c}\) is handled by a crude Cauchy–Schwarz bound as in the proof of the next proposition. This completes the proof of Proposition 6.13.

**Proposition 6.14**.: _There exist positive constants \(C_{44},\theta_{0}\) such that for \(0<\theta<\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(c_{0}\leq r\leq N/2\), we have_

\[\mathbb{E}\Big{[}\Big{(}\log Z_{0,N}-\log Z_{r,N}-\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{)}^{2}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{]}\leq C_{44}r^{2/3}.\]

Proof.: We bound the conditional expectation separately on the events \(\mathcal{D}\) and \(\mathcal{D}^{c}\); the bound on \(\mathcal{D}\) uses Proposition 6.12 together with the estimates of the previous subsections. For the event \(\mathcal{D}^{c}\), we have

\[\mathbb{E}\Big{[}\Big{(}\log Z_{0,N}-\log Z_{r,N}-\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{)}^{2}\mathbb{1}_{\mathcal{D}^{c}}\Big{|}\mathcal{B}_{\mathrm{bar}}\Big{]}\]
\[\leq\mathbb{E}\Big{[}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{r}}\{\log Z_{0, \mathbf{p}}+\log Z_{\mathbf{p},N}\}-\log Z_{r,N}-\log Z_{0,r}^{\mathrm{in},3 \theta r^{2/3}}\Big{)}^{2}\mathbb{1}_{\mathcal{D}^{c}}\Big{|}\mathcal{B}_{ \mathrm{bar}}\Big{]}\] \[\leq\mathbb{E}\Big{[}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{r}} \{\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\}-\log Z_{r,N}-\log Z_{0,r}^{ \mathrm{in},\theta r^{2/3}}\Big{)}^{4}\Big{]}^{1/2}\mathbb{P}(\mathcal{D}^{c} )^{1/2}. \tag{6.20}\]
The fourth moment term above can be bounded as
\[\mathbb{E}\Big{[}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{r}}\{\log Z_{0, \mathbf{p}}+\log Z_{\mathbf{p},N}\}-\log Z_{r,N}-\log Z_{0,r}^{\mathrm{in}, \theta r^{2/3}}\Big{)}^{4}\Big{]}\]
\[\leq C\mathbb{E}\Big{[}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{r}}\{ \log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\}-\log Z_{r,N}-\log Z_{0,r}\Big{)}^{ 4}\Big{]}+C\mathbb{E}\Big{[}\Big{(}\log Z_{0,r}-\log Z_{0,r}^{\text{in}\theta r ^{2/3}}\Big{)}^{4}\Big{]}\] \[\leq C\mathbb{E}\Big{[}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{r}} \{\log Z_{0,\mathbf{p}}+\log Z_{\mathbf{p},N}\}-\log Z_{r,N}-\log Z_{0,r}\Big{)} ^{4}\Big{]}+C\mathbb{E}\Big{[}\Big{(}\log Z_{0,r}-2rf_{d}\Big{)}^{4}\Big{]}\] \[\leq Cr^{4/3}+Cr^{4/3}+C\theta^{-4}r^{4/3}\]
using Proposition 4.7, Proposition 3.6, and Theorem 3.16. Since \(\mathbb{P}(\mathcal{D}^{c})^{1/2}\leq e^{-\theta^{-1/200}}\), the expectation on \(\mathcal{D}^{c}\) is also upper bounded by \(Cr^{2/3}\). With this, we have finished the proof.
### 6.5 Constrained variance lower bound
The main purpose of this section is to prove the following theorem on the lower bound of the constrained variance. Recall \(\mathcal{F}_{\theta}\) as the \(\sigma\)-algebra generated by the weights in the set \([\![(0,0),(N,N)]\!]\setminus R_{0,r}^{\theta r^{2/3}}\).
**Theorem 6.15**.: _There exist positive constants \(C_{45},\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) and an event \(\mathcal{B}^{\prime}\subset\mathcal{B}_{\mathrm{bar}}\) with \(\mathbb{P}(\mathcal{B}^{\prime}|\mathcal{B}_{\mathrm{bar}})\geq 1/2\) such that for each \(r\geq c_{0}\), we have_
\[\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}^{\text{in},3\theta r^{2/3}}\Big{|} \mathcal{F}_{\theta}\Big{)}(\omega)\geq C_{45}\theta^{-1/2}r^{2/3}\qquad\text { for each }\omega\in\mathcal{B}^{\prime}.\]
First, let us define the following sequence of events. For \(i\in(\frac{1}{3}\theta^{-3/2},\frac{2}{3}\theta^{-3/2})\) and a positive constant \(q^{*}\) which we fix in Proposition 6.16 below,
\[\mathcal{U}_{i}=\Big{\{}\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}-\log Z_{0,i\theta^{3/2}r}^{\mathrm{in},3\theta r^{2/3}}\leq q^{*}\sqrt{\theta}r^{1/3}\Big{\}}\] \[\mathcal{V}_{i}=\Big{\{}\log Z_{\mathcal{L}_{(i+1)\theta^{3/2}r},r}-\log Z_{(i+1)\theta^{3/2}r,r}^{\mathrm{in},3\theta r^{2/3}}\leq q^{*}\sqrt{\theta}r^{1/3}\Big{\}}\] \[\mathcal{W}_{i}=\Big{\{}\log Z_{\mathcal{L}^{3\theta r^{2/3}}_{i\theta^{3/2}r},\,\mathcal{L}^{3\theta r^{2/3}}_{(i+1)\theta^{3/2}r}}-2\theta^{3/2}rf_{d}\leq q^{*}\sqrt{\theta}r^{1/3}\Big{\}}\]
On the event \(\mathcal{B}_{\mathrm{bar}}\), the events defined above happen with high probability.
**Proposition 6.16**.: _There exist positive constants \(q^{*},\theta_{0}\) such that for each \(0<\theta\leq\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(r\geq c_{0}\),_
\[\mathbb{P}(\mathcal{U}_{i}\cap\mathcal{V}_{i}\cap\mathcal{W}_{i}|\mathcal{B}_ {\mathrm{bar}})\geq 0.9999.\]
Proof.: We upper bound the probabilities of \(\mathcal{U}_{i}^{c}\) and \(\mathcal{W}_{i}^{c}\); by symmetry, the estimate for \(\mathcal{V}_{i}^{c}\) is the same as that for \(\mathcal{U}_{i}^{c}\).
First, by the FKG inequality and Theorem 3.15,
\[\mathbb{P}(\mathcal{W}_{i}^{c}|\mathcal{B}_{\mathrm{bar}})\leq\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{i\theta^{3/2}r}^{3\theta r^{2/3}},\,\mathcal{L}_{(i+1)\theta^{3/2}r}^{3\theta r^{2/3}}}-2\theta^{3/2}rf_{d}\geq q^{*}\sqrt{\theta}r^{1/3}\Big{)}\leq e^{-Cq^{*}}.\]
Next, we upper bound \(\mathbb{P}(\mathcal{U}_{i}^{c}|\mathcal{B}_{\mathrm{bar}})\). Let \(\mathfrak{A}\) denote the collection of paths going from \((0,0)\) to \(\mathcal{L}_{i\theta^{3/2}r}^{3\theta r^{2/3}}\) such that they stay within the diagonal sides of \(R_{0,(i-1)\theta^{3/2}r}^{3\theta r^{2/3}}\) and they touch the box
\(R^{\theta r^{2/3}}_{(i-2)\theta^{3/2}r,(i-1)\theta^{3/2}r}\). (Note the \(\mathfrak{A}\) here plays the role of \(\mathfrak{A}_{1}\) from (6.5).) Applying Proposition 6.9 (for the free energy from \(0\) to \(i\theta^{3/2}r\) instead of from \(0\) to \(r\)), we know that, for \(\theta\) sufficiently small,
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}\leq\log Z_{0, \mathcal{L}_{i\theta^{3/2}r}}(\mathfrak{A})+\log 6\Big{|}\mathcal{B}_{\rm bar }\Big{)}\geq 0.99999999.\]
Then, it suffices for us to upper bound the probability of the event
\[\Big{\{}\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}(\mathfrak{A})-\log Z_{0,i \theta^{3/2}r}^{\rm in,3\theta r^{2/3}}>q^{*}\sqrt{\theta}r^{1/3}-\log 6 \Big{\}}. \tag{6.21}\]
Now, since all the paths in \(\mathfrak{A}\) enter the box
\[R^{*}=R^{\theta r^{2/3}}_{(i-2)\theta^{3/2}r,(i-1)\theta^{3/2}r},\]
let \(\mathfrak{A}_{\bf p}\) denote the collection of paths in \(\mathfrak{A}\) that go through \({\bf p}\in R^{*}\). Let \({\bf p}^{*}\) be the maximizer of
\[\max_{{\bf p}\in R^{*}}\Big{\{}\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}( \mathfrak{A}_{\bf p})\Big{\}}.\]
And it holds that
\[\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}(\mathfrak{A})\leq\log Z_{0,\mathcal{ L}_{i\theta^{3/2}r}}(\mathfrak{A}_{{\bf p}^{*}})+100\log r.\]
Now, to bound (6.21), it suffices for us to upper bound
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}(\mathfrak{A}_{{\bf p }^{*}})-\log Z_{0,i\theta^{3/2}r}^{\rm in,3\theta r^{2/3}}>\tfrac{1}{2}q^{*} \sqrt{\theta}r^{1/3}\Big{|}\mathcal{B}_{\rm bar}\Big{)}.\]
We will replace these two free energies appearing above. Since
\[\begin{split}\log Z_{0,\mathcal{L}_{i\theta^{3/2}r}}(\mathfrak{A}_{{\bf p}^{*}})&=\log Z_{0,{\bf p}^{*}}+\log Z_{{\bf p}^{*},\mathcal{L}_{i\theta^{3/2}r}}\\ \log Z_{0,i\theta^{3/2}r}^{\mathrm{in},3\theta r^{2/3}}&\geq\log Z_{0,{\bf p}^{*}}^{\mathrm{in},R^{3\theta r^{2/3}}_{0,r}}+\log Z_{{\bf p}^{*},i\theta^{3/2}r}^{\mathrm{in},R^{\theta r^{2/3}}_{0,r}},\end{split}\]
it suffices for us to upper bound
\[\mathbb{P}\Big{(}\log Z_{{\bf p}^{*},\mathcal{L}_{i\theta^{3/2}r} }-\log Z_{{\bf p}^{*},i\theta^{3/2}r}^{\rm in,R^{\theta r^{2/3}}_{0,r}}>\tfrac {1}{2}q^{*}\sqrt{\theta}r^{1/3}\Big{|}\mathcal{B}_{\rm bar}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{{\bf p}^{*},\mathcal{L}_{i\theta^{3/ 2}r}}-\log Z_{{\bf p}^{*},i\theta^{3/2}r}^{\rm in,R^{\theta r^{2/3}}_{0,r}}> \tfrac{1}{2}q^{*}\sqrt{\theta}r^{1/3}\Big{)}. \tag{6.22}\]
And we may bound (6.22) as
\[\begin{split}(6.22)&\leq\mathbb{P}\Big{(}\max_{{\bf p}\in R^{*}}\log Z_{{\bf p},\mathcal{L}_{i\theta^{3/2}r}}-(2i\theta^{3/2}r-|{\bf p}|_{1})f_{d}>\tfrac{1}{4}q^{*}\sqrt{\theta}r^{1/3}\Big{)}\\ &\qquad\qquad+\mathbb{P}\Big{(}\min_{{\bf p}\in R^{*}}\log Z_{{\bf p},i\theta^{3/2}r}^{\mathrm{in},R^{\theta r^{2/3}}_{0,r}}-(2i\theta^{3/2}r-|{\bf p}|_{1})f_{d}<-\tfrac{1}{4}q^{*}\sqrt{\theta}r^{1/3}\Big{)}.\end{split}\]
Both of these probabilities are bounded by \(e^{-q^{*}{}^{1/10}}\) by Theorem 3.18 and Theorem 3.19. Finally, by fixing \(q^{*}\) sufficiently large, this completes the proof of this proposition.
We say an index \(i\in(\tfrac{1}{3}\theta^{-3/2},\tfrac{2}{3}\theta^{-3/2})\) is _good_ if
\[\mathbb{P}(\mathcal{U}_{i}\cap\mathcal{V}_{i}\cap\mathcal{W}_{i}|\mathcal{F}_{ \theta})(\omega)\geq 0.99\qquad\text{ where }\omega\in\mathcal{B}_{\rm bar}.\]
Note for a given \(\omega\in\mathcal{B}_{\rm bar}\), the set of good indices is deterministic.
**Lemma 6.17**.: _Let \(\mathcal{B}^{\prime}\subset\mathcal{B}_{\mathrm{bar}}\) denote the event that the number of good indices is at least \(\frac{1}{6}\theta^{-3/2}\). Then \(\mathbb{P}(\mathcal{B}^{\prime}|\mathcal{B}_{\mathrm{bar}})\geq 1/2\)._
Proof.: Since \(\mathcal{B}_{\mathrm{bar}}\) is \(\mathcal{F}_{\theta}\)-measurable, by Markov's inequality,
\[\mathbb{P}(i\text{ is bad}|\mathcal{B}_{\mathrm{bar}})\leq\mathbb{P}\Big{(} \{\omega:\mathbb{P}(\mathcal{U}_{i}^{c}\cup\mathcal{V}_{i}^{c}\cup\mathcal{W}_ {i}^{c}|\mathcal{F}_{\theta})(\omega)>0.01\}\Big{|}\mathcal{B}_{\mathrm{bar}} \Big{)}\leq 100\mathbb{P}(\mathcal{U}_{i}^{c}\cup\mathcal{V}_{i}^{c}\cup \mathcal{W}_{i}^{c}|\mathcal{B}_{\mathrm{bar}})\leq 1/10.\]
Then, the expected number of bad indices (conditional on \(\mathcal{B}_{\mathrm{bar}}\)) is upper bounded by \(\frac{1}{10}\cdot\frac{1}{3}\theta^{-3/2}\), and a further application of Markov's inequality completes the proof.
Next we prove Theorem 6.15 using the \(\mathcal{B}^{\prime}\) obtained in Lemma 6.17. Let us fix an \(\omega^{\prime}\in\mathcal{B}^{\prime}\) for the remainder of this proof; the projection of every configuration we shall define onto the vertices outside \(R_{0,r}^{\theta r^{2/3}}\) will agree with \(\omega^{\prime}\). Given this \(\omega^{\prime}\), the collection of good indices is known. Let us fix an enumeration of a portion of the good indices
\[J=\{i_{1},i_{2},\ldots,i_{K}\}\]
where \(K\geq\frac{1}{6}\theta^{-3/2}\). Now, define a sequence of \(\sigma\)-algebras \(\mathcal{S}_{0}\subset\mathcal{S}_{1}\subset\mathcal{S}_{2}\subset\cdots \subset\mathcal{S}_{K}\) where \(\mathcal{S}_{0}\) is generated by the configuration \(\omega^{\prime}\in\mathcal{B}^{\prime}\) together with the configuration on \(R_{i\theta^{3/2}r,(i+1)\theta^{3/2}r}^{\theta r^{2/3}}\) for all \(i\not\in J\), and for \(j\geq 1\), \(\mathcal{S}_{j}\) is the \(\sigma\)-algebra generated by \(\mathcal{S}_{j-1}\) and the configuration inside \(R_{i_{j}\theta^{3/2}r,(i_{j}+1)\theta^{3/2}r}^{\theta r^{2/3}}\). Note that \(S_{K}\) is the \(\sigma\)-algebra of the entire weight configuration.
Consider the Doob martingale
\[M_{j}=\mathbb{E}\big{[}\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}|\mathcal{S} _{j}\big{]}.\]
By the variance decomposition of a Doob martingale, it follows that
\[\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{|} \mathcal{F}_{\theta}\Big{)}(\omega^{\prime})\geq\sum_{j=1}^{K}\mathbb{E}[(M_{j }-M_{j-1})^{2}|\mathcal{F}_{\theta}](\omega^{\prime}).\]
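This is the standard orthogonality argument for Doob martingale increments; since \(\mathcal{F}_{\theta}\subseteq\mathcal{S}_{0}\), a sketch of the derivation is

\[\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}\geq\mathbb{E}\Big{[}\mathbb{V}\mathrm{ar}\big{(}M_{K}\big{|}\mathcal{S}_{0}\big{)}\Big{|}\mathcal{F}_{\theta}\Big{]}=\mathbb{E}\Big{[}\sum_{j=1}^{K}\mathbb{E}\big{[}(M_{j}-M_{j-1})^{2}\big{|}\mathcal{S}_{0}\big{]}\Big{|}\mathcal{F}_{\theta}\Big{]}=\sum_{j=1}^{K}\mathbb{E}\big{[}(M_{j}-M_{j-1})^{2}\big{|}\mathcal{F}_{\theta}\big{]},\]

where the first step is the law of total variance, and the middle equality holds because \(\mathbb{E}[(M_{j}-M_{j-1})(M_{k}-M_{k-1})|\mathcal{S}_{0}]=0\) for \(j\neq k\) (condition on \(\mathcal{S}_{\max(j,k)-1}\) and use the martingale property).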
Theorem 6.15 follows directly from the lemma below. The proof of the lemma goes by a re-sampling argument. This idea first appeared in the zero-temperature set-up in [9, 11], although the setup there is different from ours.
**Lemma 6.18**.: _There exist positive constants \(\theta_{0},C_{46}\) such that for each \(0<\theta<\theta_{0}\), there exists a positive constant \(c_{0}\) such that for each \(r\geq c_{0}\), the following holds: for each \(i_{j}\in J\), there is an \(\mathcal{S}_{j-1}\) measurable event \(\mathcal{G}_{j}\) with \(\mathbb{P}(\mathcal{G}_{j}|\mathcal{F}_{\theta})(\omega^{\prime})\geq 1/2\) for each \(\omega^{\prime}\in\mathcal{B}^{\prime}\), and for each \(\omega\in\mathcal{G}_{j}\) we have_
\[\mathbb{E}[(M_{j}-M_{j-1})^{2}|\mathcal{S}_{j-1}](\omega)\geq C_{46}\theta r^{ 2/3}.\]
Proof.: Define the event
\[F=\Big{\{}\log Z_{i_{j}\theta^{3/2}r,(i_{j}+1)\theta^{3/2}r}^{\mathrm{in}, \theta r^{2/3}}-2(\theta^{3/2}r)f_{d}\geq 100q^{*}\sqrt{\theta}r^{1/3}\Big{\}}\]
where the fixed constant \(q^{*}\) is from Proposition 6.16. By Theorem 3.17, \(\mathbb{P}(F)\geq c>0\). Let \(\omega_{1}\) denote the configuration on \(R_{i_{j}\theta^{3/2}r,(i_{j}+1)\theta^{3/2}r}^{\theta r^{2/3}}\) drawn from the i.i.d. inverse-gamma distribution, and let \(\widetilde{\omega}_{1}\) denote the configuration on \(R_{i_{j}\theta^{3/2}r,(i_{j}+1)\theta^{3/2}r}^{\theta r^{2/3}}\) which is drawn from the i.i.d. inverse-gamma distribution but conditioned on \(F\). By the FKG inequality and Strassen's Theorem [53], there
exists a coupling measure of the joint distribution \((\omega_{1},\widetilde{\omega}_{1})\) such that \(\omega_{1}\leq\widetilde{\omega}_{1}\) coordinatewise. Let \(\beta\) denote such a coupling measure.
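To make the coupling step concrete, here is a minimal one-dimensional sketch (an illustrative toy under assumed parameters, not the construction used in the proof, where \(F\) is an increasing event of the whole block configuration): for a single inverse-gamma weight, the law conditioned on an increasing event \(\{w\geq t\}\) stochastically dominates the unconditioned law, and reusing the same uniform variable in the quantile transform produces an explicit monotone coupling, the one-coordinate analogue of \(\beta\).

```python
import numpy as np
from scipy.stats import invgamma

mu, t = 2.0, 1.5                        # toy shape parameter and threshold (assumptions)
rng = np.random.default_rng(2)
U = rng.uniform(size=100_000)

dist = invgamma(mu)                     # law of a single inverse-gamma weight
w = dist.ppf(U)                         # unconditioned draw via the quantile transform
p = dist.cdf(t)
w_cond = dist.ppf(p + U * (1.0 - p))    # same U, targeting the law conditioned on {w >= t}

assert np.all(w_cond >= w)              # monotone coupling: conditioned copy dominates pointwise
print("P(w >= t) =", 1.0 - p, "  min(w_cond - w) =", float((w_cond - w).min()))
```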
Let \(\omega_{0}\) denote the configuration on all vertices that are revealed in \(\mathcal{S}_{j-1}\), and recall that the projection of \(\omega_{0}\) outside of \(R^{\theta r^{2/3}}_{0,r}\) is the same as \(\omega^{\prime}\in\mathcal{B}^{\prime}\), the environment which we have fixed previously. Let \(\omega_{2}\) denote the remaining weight configuration besides \(\omega_{0},\omega_{1},\widetilde{\omega}_{1}\). Let \(\underline{\omega}=(\omega_{0},\omega_{1},\omega_{2})\) and \(\widetilde{\underline{\omega}}=(\omega_{0},\widetilde{\omega}_{1},\omega_{2})\) denote the two coupled environments under \(\beta\), which was defined in the previous paragraph. We have
\[\frac{1}{\mathbb{P}(F)}\mathbb{E}[M_{j}\mathbb{1}_{F}|\mathcal{S}_{j-1}]-M_{j-1}=\int\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}(\widetilde{\underline{\omega}})-\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}(\underline{\omega})\,\beta(d\omega_{1},d\widetilde{\omega}_{1})\mathbb{P}(d\omega_{2}). \tag{6.23}\]
Since \(F\) is independent of \(\mathcal{S}_{j-1}\) and \(M_{j-1}\), we also have
\[\begin{split}\mathbb{E}[(M_{j}-M_{j-1})^{2}|\mathcal{S}_{j-1}] &\geq\mathbb{P}(F)\frac{1}{\mathbb{P}(F)}\mathbb{E}[(M_{j}-M_{j- 1})^{2}\mathbb{1}_{F}|\mathcal{S}_{j-1}]\\ &\geq\mathbb{P}(F)\Big{(}\frac{1}{\mathbb{P}(F)}\mathbb{E}[M_{j} \mathbb{1}_{F}|\mathcal{S}_{j-1}]-M_{j-1}\Big{)}^{2}.\end{split} \tag{6.24}\]
Next, we construct the event \(\mathcal{G}_{j}\). Note that \(\mathcal{U}_{i_{j}}\) is \(\mathcal{S}_{j-1}\) measurable, but \(\mathcal{V}_{i_{j}}\) is not. Define another \(\mathcal{S}_{j-1}\) measurable event
\[\widetilde{\mathcal{V}}_{i_{j}}=\{\omega_{0}:\mathbb{P}(\mathcal{V}_{i_{j}}| \mathcal{S}_{j-1})(\omega_{0})\geq 0.9\}.\]
We set \(\mathcal{G}_{j}=\widetilde{\mathcal{U}}_{i_{j}}\cap\widetilde{\mathcal{V}}_{i_ {j}}\) where
\[\widetilde{\mathcal{U}}_{i_{j}}=\{\omega_{0}:\mathbb{1}_{\mathcal{U}_{i_{j}}} =1\}.\]
By the definition that \(i_{j}\) is good, \(\mathbb{P}(\mathcal{U}_{i_{j}}|\mathcal{F}_{\theta})(\omega^{\prime})\geq 0.99\). By Markov's inequality,
\[\mathbb{P}(\widetilde{\mathcal{V}}^{c}_{i_{j}}|\mathcal{F}_{\theta})(\omega^{ \prime})=\mathbb{P}(\{\omega_{0}:\mathbb{P}(\mathcal{V}^{c}_{i_{j}}|S_{j-1})( \omega_{0})\geq 0.1\}|\mathcal{F}_{\theta})(\omega^{\prime})\leq 10\mathbb{P}( \mathcal{V}^{c}_{i_{j}}|\mathcal{F}_{\theta})(\omega^{\prime})\leq 0.1,\]
which implies \(\mathbb{P}(\widetilde{\mathcal{V}}_{i_{j}}|\mathcal{F}_{\theta})(\omega^{\prime})\geq 0.9\). Hence, \(\mathcal{G}_{j}\) satisfies the requirement
\[\mathbb{P}(\mathcal{G}_{j}|\mathcal{F}_{\theta})(\omega^{\prime})\geq 1/2.\]
For \(\omega_{0}\in\mathcal{G}_{j}\), combining (6.23) and (6.24), we obtain that
\[\mathbb{E}[(M_{j}-M_{j-1})^{2}|\mathcal{S}_{j-1}](\omega_{0})\geq\mathbb{P}(F)\Big{(}\int\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}(\widetilde{\underline{\omega}})-\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}(\underline{\omega})\,\beta(d\omega_{1},d\widetilde{\omega}_{1})\mathbb{P}(d\omega_{2})\Big{)}^{2}. \tag{6.25}\]
Since the integrand above is non-negative, we further lower bound the integral in the right-hand side of (6.25) by
\[\int_{\omega_{1}\in\widetilde{\mathcal{W}}_{i_{j}},\omega_{2}\in D}\left(\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}(\widetilde{\underline{\omega}})-\log Z^{\mathrm{in},3\theta r^{2/3}}_{0,r}(\underline{\omega})\right)\beta(d\omega_{1},d\widetilde{\omega}_{1})\mathbb{P}(d\omega_{2}) \tag{6.26}\]
where
\[\widetilde{\mathcal{W}}_{i_{j}}=\{\omega_{1}:\mathbb{1}_{\mathcal{W}_{i_{j}}} (\underline{\omega})=1\}\]
(note that since \(\omega_{0}\) (and \(\omega^{\prime}\)) has been fixed, \(\mathbb{1}_{\mathcal{W}_{i_{j}}}\) is determined by \(\omega_{1}\)) and
\[D=D(\omega_{0})=\Big{\{}\omega_{2}:\ \mathcal{V}_{i_{j}}\text{ holds on }(\omega_{0}, \omega_{2})\Big{\}}\]
(notice also that \(\mathcal{V}_{i_{j}}\) is determined by \((\omega_{0},\omega_{2})\)).
Now, it holds that
\[\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}(\widetilde{\underline{\omega}})\geq \log Z_{0,i_{j}\theta^{3/2}r}^{\mathrm{in},3\theta r^{2/3}}(\widetilde{ \underline{\omega}})+\log Z_{i_{j}\theta^{3/2}r,(i_{j}+1)\theta^{3/2}r}^{ \mathrm{in},\theta r^{2/3}}(\widetilde{\underline{\omega}})+\log Z_{(i_{j}+1) \theta^{3/2}r,r}^{\mathrm{in},3\theta r^{2/3}}(\widetilde{\underline{\omega}})\]
and
\[\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}(\underline{\omega})\leq\log Z_{0,\mathcal{L}_{i_{j}\theta^{3/2}r}}(\underline{\omega})+\log Z_{\mathcal{L}^{3\theta r^{2/3}}_{i_{j}\theta^{3/2}r},\,\mathcal{L}^{3\theta r^{2/3}}_{(i_{j}+1)\theta^{3/2}r}}(\underline{\omega})+\log Z_{\mathcal{L}_{(i_{j}+1)\theta^{3/2}r},r}(\underline{\omega}).\]
Since \(\omega_{0}\in\mathcal{G}_{j}\), we have on \(\{\omega_{1}\in\widetilde{\mathcal{W}}_{i_{j}},\omega_{2}\in D\}\),
\[\begin{split}&\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}(\widetilde{\underline{\omega}})-\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}(\underline{\omega})\\ &\qquad\qquad\geq\log Z_{i_{j}\theta^{3/2}r,(i_{j}+1)\theta^{3/2}r}^{\mathrm{in},\theta r^{2/3}}(\widetilde{\omega}_{1})-\log Z_{\mathcal{L}^{3\theta r^{2/3}}_{i_{j}\theta^{3/2}r},\,\mathcal{L}^{3\theta r^{2/3}}_{(i_{j}+1)\theta^{3/2}r}}(\omega_{1})-2q^{*}\sqrt{\theta}r^{1/3}\\ &\qquad\qquad\geq 50q^{*}\sqrt{\theta}r^{1/3}\end{split}\]
where the last inequality holds since \(\widetilde{\omega}_{1}\in F\) by definition.
We can therefore lower bound (6.26) by
\[50q^{*}\sqrt{\theta}r^{1/3}\int_{\omega_{1}\in\widetilde{\mathcal{W}}_{i_{j}} }\beta(d\omega_{1},d\widetilde{\omega}_{1})\int_{\omega_{2}\in D}\mathbb{P}(d \omega_{2}).\]
Since, \(\mathcal{W}_{i_{j}}(\underline{\omega})\) is determined by \(\omega^{\prime}\) and \(\omega_{1}\), it follows that
\[\int_{\omega_{1}\in\widetilde{\mathcal{W}}_{i_{j}}}\beta(d\omega_{1},d \widetilde{\omega}_{1})=\mathbb{P}(\mathcal{W}_{i_{j}}|\mathcal{F}_{\theta})( \omega^{\prime})\geq 0.99\]
since \(i_{j}\) is good. Since \(\mathcal{G}_{j}\subseteq\widetilde{\mathcal{V}}_{i_{j}}\), it follows from the definition of \(D\) that \(\int_{\omega_{2}\in D}\mathbb{P}(d\omega_{2})\geq 0.9\) for all \(\omega_{0}\in\mathcal{G}_{j}\). We can therefore lower bound the right-hand side of (6.25) by \(\theta r^{2/3}\,\mathbb{P}(F)\times(0.9\times 0.99\times 50q^{*})^{2}\), thereby completing the proof.
### 6.6 Covariance lower bound
To start with fix \(\theta\) sufficiently small. Let us recall three subsets of \(\mathcal{B}_{\mathrm{bar}}\) from Theorem 6.15, Proposition 6.13 and Proposition 6.14,
\[\mathcal{B}^{\prime} :\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r ^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)\geq C_{45}\theta^{-1/2}r^{2/ 3}\quad\text{ for }\omega\in\mathcal{B}^{\prime}.\] \[\mathcal{B}^{\prime\prime} :\mathbb{E}\Big{[}\Big{(}\log Z_{0,\mathcal{L}_{r}}-\log Z_{0,r}^ {\mathrm{in},3\theta r^{2/3}}\Big{)}^{2}\Big{|}\mathcal{F}_{\theta}\Big{]}( \omega)\leq 100C_{43}r^{2/3}\quad\text{ for }\omega\in\mathcal{B}^{\prime\prime}\] \[\mathcal{B}^{\prime\prime\prime} :\mathbb{E}\Big{[}\Big{(}\log Z_{0,N}-\log Z_{r,N}-\log Z_{0,r}^ {\mathrm{in},3\theta r^{2/3}}\Big{)}^{2}\Big{|}\mathcal{F}_{\theta}\Big{]}( \omega)\leq 100C_{44}r^{2/3}\quad\text{ for }\omega\in\mathcal{B}^{\prime\prime\prime}.\]
Besides \(\mathbb{P}(\mathcal{B}^{\prime}|\mathcal{B}_{\mathrm{bar}})\geq 1/2\), we also know \(\mathbb{P}(\mathcal{B}^{\prime\prime}|\mathcal{B}_{\mathrm{bar}})\geq 0.99\) and \(\mathbb{P}(\mathcal{B}^{\prime\prime\prime}|\mathcal{B}_{\mathrm{bar}})\geq 0.99\) by Markov inequality applied to their complements.
Now going back to (6.1), let us define
\[\mathcal{E}_{\theta}=\mathcal{B}^{\prime}\cap\mathcal{B}^{\prime\prime}\cap \mathcal{B}^{\prime\prime\prime}.\]
We know \(\mathbb{P}(\mathcal{E}_{\theta})>\epsilon_{0}>0\) because \(\mathbb{P}(\mathcal{E}_{\theta}|\mathcal{B}_{\mathrm{bar}})\geq 0.1\) and \(\mathbb{P}(\mathcal{B}_{\mathrm{bar}})>0\) by Lemma 6.1. Finally, let us show (6.1). This follows directly from a sequence of inequalities. For \(\omega\in\mathcal{E}_{\theta}\),
\[\mathbb{C}\mathrm{ov}(\log Z_{0,r},\log Z_{0,N}|\mathcal{F}_{ \theta})(\omega)\] \[=\mathbb{C}\mathrm{ov}(\log Z_{0,r},\log Z_{0,N}-\log Z_{r,N}| \mathcal{F}_{\theta})(\omega)\] \[=\mathbb{C}\mathrm{ov}\Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r ^{2/3}},\log Z_{0,N}-\log Z_{r,N}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)\] \[\qquad\qquad\qquad+\mathbb{C}\mathrm{ov}\Big{(}\log Z_{0,r}-\log Z _{0,r}^{\mathrm{in},3\theta r^{2/3}},\log Z_{0,N}-\log Z_{r,N}\Big{|}\mathcal{ F}_{\theta}\Big{)}(\omega)\] \[=\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r ^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)+\mathbb{C}\mathrm{ov} \Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}},\log Z_{0,N}-\log Z_{r,N}- \log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)} (\omega)\] \[\qquad\qquad\qquad+\mathbb{C}\mathrm{ov}\Big{(}\log Z_{0,r}-\log Z _{0,r}^{\mathrm{in},3\theta r^{2/3}},\log Z_{0,N}-\log Z_{r,N}-\log Z_{0,r}^{ \mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)\] \[\geq\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}^{\mathrm{in},3 \theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)-\sqrt{\mathbb{V} \mathrm{ar}\Big{(}\log Z_{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F} _{\theta}\Big{)}(\omega)}\sqrt{\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}-\log Z _{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)}\] \[\qquad\qquad-\sqrt{\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,r}-\log Z _{0,r}^{\mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega) }\sqrt{\mathbb{V}\mathrm{ar}\Big{(}\log Z_{0,N}-Z_{r,N}-\log Z_{0,r}^{ \mathrm{in},3\theta r^{2/3}}\Big{|}\mathcal{F}_{\theta}\Big{)}(\omega)}\] \[\geq C\theta^{-1/2}r^{2/3}\]
where the last inequality holds by the definition of the event \(\mathcal{E}_{\theta}\) for \(\theta\) sufficiently small.
## 7 Nonrandom fluctuation lower bound
First, let us prove Theorem 3.26.
Proof of Theorem 3.26.: To simplify the notation, let us simply use \(Z\) (instead of \(\widetilde{Z}\)) to denote the version of the partition function where we also include the weight at the starting point. Note this does not apply to \(Z^{\rho}\). Also, let us define
\[\mathbf{v}_{N}=2N\boldsymbol{\xi}[\rho].\]
To prove the theorem, it suffices for us to assume that
\[\delta\geq N^{-1/3}. \tag{7.1}\]
Recall the definition of the exit time \(\tau\) above Theorem 3.25, and we start with a simple bound
\[\begin{split}&\mathbb{P}\Big{(}\log Z_{-1,\mathbf{v}_{N}}^{\rho}-\Big{(}\log I_{[(-1,-1),(0,-1)]}^{\rho}+\log Z_{0,\mathbf{v}_{N}}\Big{)}\leq\delta N^{1/3}\Big{)}\\ &\leq\mathbb{P}\Big{(}\log Z_{-1,\mathbf{v}_{N}}^{\rho}(1\leq\tau\leq N^{2/3})-\Big{(}\log I_{[(-1,-1),(0,-1)]}^{\rho}+\log Z_{0,\mathbf{v}_{N}}\Big{)}\leq\delta N^{1/3}\Big{)}\\ &\leq\mathbb{P}\Big{(}\max_{1\leq k\leq N^{2/3}}\log Z_{-1,\mathbf{v}_{N}}^{\rho}(\tau=k)-\Big{(}\log I_{[(-1,-1),(0,-1)]}^{\rho}+\log Z_{0,\mathbf{v}_{N}}\Big{)}\leq\delta N^{1/3}\Big{)}.\end{split} \tag{7.2}\]
For each \(k=1,\ldots,N^{2/3}\), let us denote the term inside our maximum as
\[\log Z_{-1,\mathbf{v}_{N}}^{\rho}(\tau=k)-\Big{(}\log I_{[(-1,-1),(0,-1)]}^{ \rho}+\log Z_{0,\mathbf{v}_{N}}\Big{)}\]
\[=(\log Z_{(k,0),{\bf v}_{N}}-\log Z_{0,{\bf v}_{N}})+\sum_{i=1}^{k}\log I^{\rho}_{[(i-1,-1),(i,-1)]}\] \[=S_{k}.\]
Then, our estimate can be written as a running maximum.
\[(7.2)=\mathbb{P}\Big{(}\max_{1\leq k\leq N^{2/3}}S_{k}\leq\delta N^{1/3}\Big{)}. \tag{7.3}\]
The steps of \(S_{k}\) are not i.i.d. because of the term \(\log Z_{(k,0),{\bf v}_{N}}-\log Z_{0,{\bf v}_{N}}.\) Our next step is to lower bound \(S_{k}\) by a random walk with i.i.d. steps using Theorem 3.28. In the application of Theorem 3.28, we will rotate our picture \(180^{\circ}\) so the path \(\Theta_{1+N^{2/3}}\) is the segment \([\![(0,0),(N^{2/3},0)]\!]\). And our perturbed parameter will be \(\lambda=\rho+q_{0}sN^{-1/3}\) where \(s=|\log\delta|\). Note our condition \(\delta\geq N^{-1/3}\) verifies the assumption \(s\leq a_{0}N^{2/3}\) from Theorem 3.28.
Let us denote the lower bounding i.i.d. random walk as \(\widetilde{S}_{k}\), and the distribution of the steps of \(\widetilde{S}_{k}\) is given by the independent sum \(-\log(\operatorname{Ga}^{-1}(\mu-\lambda))+\log(\operatorname{Ga}^{-1}(\mu- \rho))\). Define the event
\[A=\{S_{k}\geq\log\tfrac{9}{10}+\widetilde{S}_{k}\text{ for }k=0,1,\ldots,N^{2/3}\}.\]
Continuing from (7.3), we have
\[\begin{split}(7.3)&\leq\mathbb{P}\Big{(}\Big{\{}\max_{0\leq k\leq N^{2/3}}S_{k}\leq\delta N^{1/3}\Big{\}}\cap A\Big{)}+\mathbb{P}(A^{c})\\ &\leq\mathbb{P}\Big{(}\Big{\{}\max_{0\leq k\leq N^{2/3}}\widetilde{S}_{k}\leq\delta N^{1/3}+\log\tfrac{10}{9}\Big{\}}\Big{)}+\mathbb{P}(A^{c}).\end{split}\]
By Theorem 3.28, we know \(\mathbb{P}(A^{c})\leq e^{-Cs^{3}}\leq\delta\). It remains to upper bound the running maximum
\[\mathbb{P}\Big{(}\max_{0\leq k\leq N^{2/3}}\widetilde{S}_{k}\leq\delta N^{1/3 }+\log\tfrac{10}{9}\Big{)}. \tag{7.4}\]
Lastly, Proposition E.3 gives \((7.4)\leq C|\log\delta|\delta\). With this, we have finished the proof of the theorem.
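As a numerical illustration of this last step (under simplifying assumptions: mean-zero Gaussian steps in place of the tilted walk \(\widetilde{S}_{k}\), so the drift coming from \(\lambda\) is ignored), the probability that the running maximum over \(n=N^{2/3}\) steps stays below \(\delta N^{1/3}=\delta\sqrt{n}\) is indeed of order \(\delta\):

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials = 400, 10_000                  # n plays the role of N^{2/3}
for delta in (0.05, 0.1, 0.2, 0.4):
    steps = rng.standard_normal((trials, n))                 # toy mean-zero i.i.d. steps
    running_max = np.maximum.accumulate(steps.cumsum(axis=1), axis=1)[:, -1]
    prob = np.mean(running_max <= delta * np.sqrt(n))
    print(f"delta={delta:4.2f}  P(max_k S_k <= delta*sqrt(n)) ~ {prob:.4f}  (ratio {prob/delta:.2f})")
```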
Our nonrandom fluctuation lower bound follows directly from Theorem 3.26.
Proof of Theorem 3.22.: By Theorem 3.26, there exist \(\delta_{0},N_{0}\) such that for all \(N\geq N_{0}\), we have
\[\mathbb{P}\Big{(}\log Z_{-1,2N\boldsymbol{\xi}[\rho]}^{\rho}-\Big{(}\log I^{ \rho}_{[(-1,-1),(0,-1)]}+\log Z_{0,2N\boldsymbol{\xi}[\rho]}\Big{)}\geq\tfrac {1}{2}\delta_{0}N^{1/3}\Big{)}\geq 0.99.\]
Let us denote the above event as \(A\), and let
\[X=\log Z_{-1,2N\boldsymbol{\xi}[\rho]}^{\rho}-\Big{(}\log I^{\rho}_{[(-1,-1),( 0,-1)]}+\log Z_{0,2N\boldsymbol{\xi}[\rho]}\Big{)}.\]
Note that \(X\geq 0\). Using (3.2) and (3.6), we have
\[\Big{|}\mathbb{E}\big{[}\log Z_{-1,2N\boldsymbol{\xi}[\rho]}^{\rho}\big{]}-2Nf(\rho)\Big{|}\leq 10\max\{|\Phi_{0}(\rho)|,|\Phi_{0}(\mu-\rho)|\}.\]
With these, we have
\[2Nf(\rho)-\Big{(}\mathbb{E}[\log I^{\rho}_{[(-1,-1),(0,-1)]}]+\mathbb{E}[ \log Z_{0,2N\boldsymbol{\xi}[\rho]}]\Big{)}\]
\[\geq\mathbb{E}[X]-10\max\{|\Phi_{0}(\rho)|,|\Phi_{0}(\mu-\rho)|\}\] \[=\mathbb{E}[X\mathds{1}_{A}]+\mathbb{E}[X\mathds{1}_{A^{c}}]-10 \max\{|\Phi_{0}(\rho)|,|\Phi_{0}(\mu-\rho)|\}\] \[\geq\tfrac{1}{10}\delta_{0}N^{1/3}-10\max\{|\Phi_{0}(\rho)|,|\Phi_ {0}(\mu-\rho)|\}.\]
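The last inequality above is a short computation: since \(X\geq\tfrac{1}{2}\delta_{0}N^{1/3}\) on \(A\), \(\mathbb{P}(A)\geq 0.99\), and \(X\geq 0\) everywhere,
\[\mathbb{E}[X\mathds{1}_{A}]\geq\tfrac{1}{2}\delta_{0}N^{1/3}\,\mathbb{P}(A)\geq\tfrac{0.99}{2}\delta_{0}N^{1/3}\geq\tfrac{1}{10}\delta_{0}N^{1/3}\qquad\text{and}\qquad\mathbb{E}[X\mathds{1}_{A^{c}}]\geq 0.\]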
The calculation above translates to our desired lower bound
\[2Nf(\rho)-\mathbb{E}[\log Z_{0,2N\boldsymbol{\xi}[\rho]}]\geq\tfrac{1}{10}\delta_{0}N^{1/3}-10\max\{|\Phi_{0}(\rho)|,|\Phi_{0}(\mu-\rho)|\}-|\mathbb{E}[\log I^{\rho}_{[(-1,-1),(0,-1)]}]|,\]
provided that \(N_{0}\) is fixed sufficiently large depending on \(\epsilon\) (recall \(\rho\in[\epsilon,\mu-\epsilon]\)).
## 8 Moderate deviation bounds for the left tail
### Upper bound for the left tail
Without introducing a new notation, let us assume that we are working with the version of the partition function \(Z\) which includes the weight at the starting point. Once the upper bound for the left tail is proved for this version of \(Z\), by a union bound it easily implies that the same result holds for the partition function which does not include the starting weight.
Proof of Proposition 3.8.: For simplicity of the notation, let us denote
\[\mathbf{v}_{N}=2N\boldsymbol{\xi}[\rho].\]
We will first show the upper bound in the theorem for \(t\) in the range
\[t_{0}\leq t\leq a_{0}N^{2/3} \tag{8.1}\]
for some positive \(a_{0}\) which we will fix during the proof. Because of Theorem 3.24, it suffices for us to show that
\[\mathbb{P}\Big{(}\frac{Z^{\rho}_{-1,\mathbf{v}_{N}}}{Z_{0,\mathbf{v}_{N}}}\geq e ^{C^{\prime}tN^{1/3}}\Big{)}\leq e^{-Ct^{3/2}}. \tag{8.2}\]
We start by decomposing the partition function \(Z^{\rho}_{-1,\mathbf{v}_{N}}\) according to the value of the exit time \(\tau\). After controlling the contribution of the exit times outside \([1,\sqrt{t}N^{2/3}]\), the proof of (8.2) reduces to showing
\[\mathbb{P}\Big{(}\frac{Z^{\rho}_{-1,\mathbf{v}_{N}}(1\leq\tau\leq\sqrt{t}N^{2/3})}{Z_{0,\mathbf{v}_{N}}}\geq\tfrac{1}{2}e^{C^{\prime}tN^{1/3}}\Big{)}\leq e^{-Ct^{3/2}}. \tag{8.5}\]
Since the numerator appearing in the probability in (8.5) can be bounded as
\[\log Z^{\rho}_{-1,\mathbf{v}_{N}}(1\leq\tau\leq\sqrt{t}N^{2/3})\leq\max_{1\leq k\leq\sqrt{t}N^{2/3}}\log Z^{\rho}_{-1,\mathbf{v}_{N}}(\tau=k)+100\log N,\]
to get (8.5) it suffices to upper bound the probability
\[\mathbb{P}\Big{(}\frac{\max_{1\leq k\leq\sqrt{t}N^{2/3}}Z^{\rho}_{-1,{\bf v}_{ N}}(\tau=k)}{Z_{0,{\bf v}_{N}}}\geq\tfrac{1}{5}e^{C^{\prime}tN^{1/3}}\Big{)}. \tag{8.6}\]
For any fixed \(k=1,\ldots,\sqrt{t}N^{2/3}\), let us denote
\[\log Z^{\rho}_{-1,\mathbf{v}_{N}}(\tau=k)-\log Z_{0,\mathbf{v}_{N}}=\log I^{\rho}_{\llbracket(-1,-1),(0,-1)\rrbracket}+\Big{(}(\log Z_{(k,0),\mathbf{v}_{N}}-\log Z_{0,\mathbf{v}_{N}})+\sum_{i=1}^{k}\log I^{\rho}_{\llbracket(i-1,-1),(i,-1)\rrbracket}\Big{)}=\log I^{\rho}_{\llbracket(-1,-1),(0,-1)\rrbracket}+S_{k}.\]
By a union bound
\[(8.6)\leq\sum_{k=1}^{\sqrt{t}N^{2/3}}\mathbb{P}\Big{(}\log I^{\rho}_{\llbracket(-1,-1),(0,-1)\rrbracket}+S_{k}\geq C^{\prime}tN^{1/3}-\log 5\Big{)}.\]
Each summand is controlled by comparing \(S_{k}\) with an i.i.d. random walk as in Theorem 3.28 and applying moderate deviation estimates to its steps; summing over \(k\) then gives the bound \(e^{-Ct^{3/2}}\), hence (8.5), and therefore (8.2) for \(t\) in the range (8.1). The range can be extended to \(t_{0}\leq t\leq\alpha N^{2/3}\) for any fixed \(\alpha>0\) by adjusting the constants.
Finally, we will show that for \(\alpha\) sufficiently large and for \(t\geq\alpha N^{2/3}\),
\[\mathbb{P}(\log Z_{0,\mathbf{v}_{N}}-2Nf(\rho)\leq-tN^{1/3})\leq e^{-CtN^{1/3}}.\]
Let us write \(t=zN^{2/3}\) where \(z\geq\alpha\). We may replace the free energy by the sum of weights along a single path \(\gamma\in\mathbb{X}_{0,\mathbf{v}_{N}}\), which has a smaller value. Then, fixing \(\alpha\) sufficiently large (so that, for instance, \(2Nf(\rho)\leq\tfrac{1}{2}zN\) for all \(z\geq\alpha\)), we have for \(z\geq\alpha\)
\[\begin{split}\mathbb{P}(\log Z_{0,\mathbf{v}_{N}}-2Nf(\rho)\leq- tN^{1/3})&\leq\mathbb{P}\Big{(}\sum_{i=1}^{2N}\log Y_{\gamma_{i}} \leq-\tfrac{1}{2}zN\Big{)}\\ &=\mathbb{P}\Big{(}\sum_{i=1}^{2N}\log Y_{\gamma_{i}}^{-1}\geq \tfrac{1}{2}zN\Big{)}\\ &\leq e^{-CzN}=e^{-CtN^{1/3}},\end{split} \tag{8.9}\]
where the last inequality follows Theorem D.1. With this, we have finished the proof of our theorem.
### Lower bound for the left tail
The approach we employ here follows the idea of Theorem 4 in [28], which proves the optimal lower bound in the setting of last-passage percolation; the same idea was adapted to the O'Connell-Yor polymer in [42]. To start, we have the following proposition.
**Proposition 8.1**.: _Let \(\rho\in(0,\mu)\). There exist positive constants \(C_{47},C_{48},N_{0}\) such that for each \(N\geq N_{0}\), we have_
\[\mathbb{P}(\log Z_{0,2N\boldsymbol{\xi}[\rho]}-2Nf_{d}\leq-C_{47}N^{1/3})\geq C_{48}.\]
Proof.: This follows directly from Proposition 3.3 which says \(2Nf_{d}\geq 2Nf(\rho)\), and Theorem 3.22 which says \(2Nf(\rho)\geq\mathbb{E}[\log Z_{0,2N\boldsymbol{\xi}[\rho]}]+CN^{1/3}\). With these, on the positive probability event \(\{\log Z_{0,2N\boldsymbol{\xi}[\rho]}-\mathbb{E}[\log Z_{0,2N\boldsymbol{\xi} [\rho]}]\leq 0\}\), we have \(\{\log Z_{0,2N\boldsymbol{\xi}[\rho]}-2Nf_{d}\leq-CN^{1/3}\}\).
Using a step-back argument, we obtain an interval to interval lower bound.
**Proposition 8.2**.: _There exist positive constants \(C_{49},C_{50},\eta,N_{0}\) such that for each \(N\geq N_{0}\) and each integer \(h\in[-\eta N^{1/3},\eta N^{1/3}]\), we have_
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{N^{2/3}},\mathcal{L}_{(N-2hN^{2/3},N+2hN^{2/3})}^{N^{2/3}}}^{\max}-2Nf_{d}\leq-C_{49}N^{1/3}\Big{)}\geq C_{50}.\]
Proof.: For simplicity of the notation, let us denote
\[J^{h}=\mathcal{L}_{(N-2hN^{2/3},N+2hN^{2/3})}^{N^{2/3}}\quad\text{ and }\quad I= \mathcal{L}_{0}^{N^{2/3}}.\]
The proof uses a step-back argument. For any \(\epsilon>0\), let us first define \(I^{\epsilon}=\epsilon^{2/3}I\). We may increase the cutoff \(N_{0}\) depending on \(\epsilon\) so that \(I^{\epsilon}\) is non-empty. We cover \(I\) by a sequence of shifted \(I^{\epsilon}\)'s, i.e.
\[I\subset\bigcup_{i=-K}^{K}I_{i}^{\epsilon}\]
where \(I^{\epsilon}_{i}=(-2i(\epsilon N)^{2/3},2i(\epsilon N)^{2/3})+I^{\epsilon}\) and \(K=\lfloor 1/\epsilon\rfloor+1\). We do the same for \(J^{h}\) and obtain the collection \(\{J^{h,\epsilon}_{j}\}_{j=-K}^{K}\). Next, we will show that for each pair \(i,j\in\llbracket-K,K\rrbracket\), there exist positive constants \(c,c^{\prime}\) such that
\[\mathbb{P}(\log Z^{\max}_{I^{\epsilon}_{i},J^{h,\epsilon}_{j}}-2Nf(\rho)\leq- cN^{1/3})\geq c^{\prime}. \tag{8.10}\]
Let \(\mathbf{u}^{*}\in I^{\epsilon}_{i}\) and \(\mathbf{v}^{*}\in J^{h,\epsilon}_{j}\) be the pair of points such that
\[Z_{\mathbf{u}^{*},\mathbf{v}^{*}}=Z^{\max}_{I^{\epsilon}_{i},J^{h,\epsilon}_{ j}}.\]
And let us denote the midpoints of \(I^{\epsilon}_{i}\) and \(J^{h,\epsilon}_{j}\) as \(\widetilde{\mathbf{a}}\) and \(\widetilde{\mathbf{b}}\).
Next, we define the step back points \(\mathbf{a}=\widetilde{\mathbf{a}}-\epsilon(N,N)\) and \(\mathbf{b}=\widetilde{\mathbf{b}}+\epsilon(N,N)\). With these new endpoints, we have
\[\log Z_{\mathbf{a},\mathbf{b}}\geq\log Z_{\mathbf{a},\mathbf{u}^{*}}+\log Z^{ \max}_{I^{\epsilon}_{i},J^{h,\epsilon}_{j}}+\log Z_{\mathbf{v}^{*},\mathbf{b}}. \tag{8.11}\]
Let us look at the term \(\log Z_{\mathbf{a},\mathbf{b}}\) on the left side. By Proposition 3.3, we have
\[\Lambda(\mathbf{b}-\mathbf{a})\leq 2(N+2\epsilon N)f_{d}.\]
By Proposition 8.1, we know there exists an event \(A\) with \(\mathbb{P}(A)\geq C_{48}\) such that on the event \(A\), we have
\[\log Z_{\mathbf{a},\mathbf{b}}-2(N+2\epsilon N)f_{d}\leq-C_{47}(N+2\epsilon N)^{1/3}. \tag{8.12}\]
Next, we show that on a high probability event \(B\) with \(\mathbb{P}(B)\geq 1-C_{48}/2\), we have
\[\log Z_{\mathbf{a},\mathbf{u}^{*}}+\log Z_{\mathbf{v}^{*},\mathbf{b}}-4\epsilon Nf(\rho)\geq-\tfrac{C_{47}}{2}N^{1/3}. \tag{8.13}\]
Once we have these, on the event \(A\cap B\), which has probability at least \(C_{48}/2\), estimates (8.11), (8.12) and (8.13) will imply
\[\log Z^{\max}_{I^{\epsilon}_{i},J^{h,\epsilon}_{j}}-2Nf_{d}\leq-\tfrac{C_{47}}{2}N^{1/3},\]
which is the statement in (8.10).
By symmetry, we will work with the term \(\log Z_{\mathbf{a},\mathbf{u}^{*}}\). By Theorem 3.18,
\[\mathbb{P}(\log Z_{\mathbf{a},\mathbf{u}^{*}}-2\epsilon Nf_{d}<-M(\epsilon N)^{1/3})\leq e^{-CM}\leq\tfrac{C_{48}}{10},\]
provided \(M\) is fixed sufficiently large. Let \(B_{1}\) denote the complement of the event above, and let \(B_{2}\) be the similar event defined for \(\log Z_{\mathbf{v}^{*},\mathbf{b}}\). We define \(B=B_{1}\cap B_{2}\), and \(\mathbb{P}(B)\geq 1-\tfrac{C_{48}}{2}.\) Let us fix \(\epsilon\) sufficiently small so that \(M\epsilon^{1/3}\leq\tfrac{C_{47}}{4}\). With this, we have shown (8.13), thus finishing the proof of (8.10).
Finally, to prove the proposition, note
\[\{\log Z^{\max}_{I,J^{h}}-2Nf_{d}\leq-cN^{1/3}\}\supset\bigcap_{i,j=-K}^{K} \Big{\{}\log(Z^{\max}_{I^{\epsilon}_{i},J^{h,\epsilon}_{j}})-2Nf_{d}\leq-cN^{ 1/3}\Big{\}}.\]
By the FKG inequality
\[\mathbb{P}\Big{(}\bigcap_{i,j=-K}^{K}\Big{\{}\log Z^{\max}_{I^{\epsilon}_{i},J^ {h,\epsilon}_{j}}-2Nf_{d}\leq-cN^{1/3}\Big{\}}\Big{)}\geq\prod_{i,j=-K}^{K} \mathbb{P}\Big{(}\log Z^{\max}_{I^{\epsilon}_{i},J^{h,\epsilon}_{j}}-2Nf(\rho )\leq-cN^{1/3}\Big{)},\]
and (8.10) says each term inside the product is lower bounded by some positive \(c^{\prime}\). Hence, we obtain that
\[\mathbb{P}\Big{(}\log Z^{\max}_{I,J^{h}}-2Nf(\rho)\leq-cN^{1/3}\Big{)}\geq(c^{\prime})^{(2/\epsilon)^{2}}=C_{50},\]
and we have finished the proof of this proposition.
Using the FKG inequality, we will further improve our lower bound to the following.
**Proposition 8.3**.: _There exist positive constants \(C_{51},C_{52},N_{0}\) such that for all \(N\geq N_{0}\)_
\[\mathbb{P}\Big{(}\log Z^{\max}_{\mathcal{L}^{N^{2/3}}_{0},\mathcal{L}_{N}}-2Nf_{d}\leq-C_{51}N^{1/3}\Big{)}\geq C_{52}.\]
Proof.: For simplicity of the notation, let us denote
\[J^{h}=\mathcal{L}^{N^{2/3}}_{(N-2hN^{2/3},N+2hN^{2/3})}\quad\text{ and }\quad I= \mathcal{L}^{N^{2/3}}_{0}.\]
The main idea is to cover the line \(\mathcal{L}\) by \(J^{h}\) for \(h\in\mathbb{Z}\). For some large fixed \(h_{0}\) which will be chosen later, we then split the possible values of \(h\) into two parts \([\![-h_{0},h_{0}]\!]\) and \(\mathbb{Z}\setminus[\![-h_{0},h_{0}]\!]\). For \(h\in[\![-h_{0},h_{0}]\!]\) we use the FKG inequality and the lower bound from Proposition 8.2. On the other hand, for \(h\in\mathbb{Z}\setminus[\![-h_{0},h_{0}]\!]\), Proposition 3.11 shows that the probability is actually exponentially high, i.e.
\[\mathbb{P}(\log Z^{\max}_{I,J^{h}}-2Nf(\rho)\leq-cN^{1/3})\geq 1-e^{-C|h|^{3}},\]
provided \(c\) is sufficiently small. Thus, we have the lower bound
\[\mathbb{P}(\log Z^{\max}_{I,\mathcal{L}}-2Nf_{d}\leq-cN^{1/3})\geq C_{50}^{100h_{0}}\prod_{|h|=h_{0}}^{\infty}(1-e^{-C|h|^{3}})=C_{52},\]
where \(C_{50}\) is the probability lower bound from Proposition 8.2, and the infinite product is a positive constant since \(\sum_{h\geq h_{0}}e^{-C|h|^{3}}<\infty\). With this, we have finished the proof of this proposition.
We prove a lower bound for the constrained free energy.
**Proposition 8.4**.: _There exist constants \(C_{53},C_{54},N_{0},t_{0},a_{0}\) such that for each \(N\geq N_{0}\), \(t_{0}\leq t\leq a_{0}N^{2/3}/(\log N)^{2}\) and \(0<l\leq N^{1/3}\), we have_
\[\mathbb{P}(\log Z^{\mathrm{in},lN^{2/3}}_{0,N}-2Nf_{d}\leq-C_{53}tN^{1/3})\geq e^{-C_{54}lt^{5/2}}.\]
Proof.: Using diagonal and anti-diagonal lines, we cut the rectangle \(R^{lN^{2/3}}_{0,N}\) into smaller rectangles with diagonal \(\ell^{\infty}\)-length \(N/t^{3/2}\) and anti-diagonal \(\ell^{\infty}\)-length \((N/t^{3/2})^{2/3}\). Let us denote these small rectangles as \(R(u,v)\), where the index \(u=1,2,\ldots,t^{3/2}\) indicates the anti-diagonal level and \(v=1,2,\ldots,lt\) enumerates the rectangles inside the same anti-diagonal level. Recall that the notation \(\overline{R(u,v)}\) and \(\underline{R(u,v)}\) denotes the upper and lower anti-diagonal sides of \(R(u,v)\). Let us finally define \(\mathcal{L}(u)\) to denote the anti-diagonal line which contains \(\overline{R(u,v)}\).
Let us define the event
\[A=\bigcap_{u,v}\Big{\{}\log Z_{\underline{R(u,v)},\mathcal{L}(u)}-2(N/t^{3/2})f_{d}\leq-C_{51}(N/t^{3/2})^{1/3}\Big{\}},\]
where the constant \(C_{51}\) is from Proposition 8.3. By the FKG inequality and Proposition 8.3, we know \(\mathbb{P}(A)\geq e^{-Clt^{5/2}}\).
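Spelling out the count behind this bound: there are \(t^{3/2}\cdot lt=lt^{5/2}\) rectangles \(R(u,v)\), and each event in the intersection defining \(A\) is decreasing, so the FKG inequality together with Proposition 8.3 (applied at scale \(N/t^{3/2}\)) gives
\[\mathbb{P}(A)\geq\prod_{u,v}\mathbb{P}\Big{(}\log Z_{\underline{R(u,v)},\mathcal{L}(u)}-2(N/t^{3/2})f_{d}\leq-C_{51}(N/t^{3/2})^{1/3}\Big{)}\geq C_{52}^{lt^{5/2}}=e^{-\log(1/C_{52})\,lt^{5/2}}.\]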
Next, we see that our constrained free energy can be upper-bounded as follows.
\[\begin{split}\log Z^{\mathrm{in},lN^{2/3}}_{0,N}&\leq t^{3/2}\Big{(}\log(tl)+\max_{u,v}\log Z_{\underline{R(u,v)},\mathcal{L}(u)}\Big{)}\\ &\leq t^{3/2}\Big{(}\log(tl)+2(N/t^{3/2})f_{d}-C_{51}(N/t^{3/2})^{1/3}\Big{)}\qquad\text{on the event }A\\ &=2Nf_{d}-C_{51}tN^{1/3}+t^{3/2}\log(tl).\end{split}\]
Finally, fix \(a_{0}\) sufficiently small; the assumption \(t\leq a_{0}N^{2/3}/(\log N)^{2}\) implies that \(\frac{1}{2}C_{51}tN^{1/3}\geq t^{3/2}\log(tl).\) With this, we have shown that
\[\log Z^{\mathrm{in},lN^{2/3}}_{0,N}-2Nf_{d}\leq-\tfrac{1}{2}C_{51}tN^{1/3}\qquad\text{on the event }A,\]
hence finishing the proof.
Finally, we prove Proposition 3.10.
Proof of Proposition 3.10.: This follows from the FKG inequality, Proposition 8.4 and Theorem 3.13. Set the parameter \(l=\sqrt{t}\) in Proposition 8.4, then,
\[\begin{split}\mathbb{P}(\log Z_{0,N}-2Nf_{d}\leq-C^{\prime}tN^{1/3})&\geq\mathbb{P}\Big{(}\log Z^{\mathrm{in},\sqrt{t}N^{2/3}}_{0,N}-2Nf_{d}\leq-C^{\prime}tN^{1/3}\Big{)}\,\mathbb{P}\Big{(}\log Z^{\mathrm{exit},\sqrt{t}N^{2/3}}_{0,N}-2Nf_{d}\leq-C^{\prime}tN^{1/3}\Big{)}\\ &\geq e^{-Ct^{3}}\cdot(1-e^{-Ct^{3/2}}),\end{split}\]
provided that \(C^{\prime}\) is fixed sufficiently small.
## Appendix A Proofs of the free energy estimates of Section 3.3
### Free energy and path fluctuations
#### a.1.1 Proof of Proposition 3.11
Proof of Proposition 3.11.: For this proof, let us define
\[I=\mathcal{L}^{N^{2/3}}_{0}\qquad J^{h}=\mathcal{L}^{N^{2/3}}_{(N-2hN^{2/3},N+ 2hN^{2/3})}.\]
Without loss of generality, we will assume that \(h\in\mathbb{Z}_{\geq 0}\) and that it satisfies
\[0\leq h\leq\frac{1}{2}N^{1/3},\] (A.1)
since \(Z_{I,J^{h}}\) is \(0\) otherwise. We also note that it suffices to prove this estimate for \(Z^{\mathrm{max}}_{I,J^{h}}\), since
\[\log Z_{I,J^{h}}\leq 100\log N+\log Z^{\mathrm{max}}_{I,J^{h}}.\]
Next, we describe the step back argument; the points \(\widetilde{\mathbf{a}},\widetilde{\mathbf{b}},\mathbf{a},\mathbf{b}\) below are illustrated in Figure A.1. Let \(\widetilde{\mathbf{a}}\) and \(\widetilde{\mathbf{b}}\) denote the midpoints of \(I\) and \(J^{h}\). Let us define the step back points
\(\mathbf{a}=\widetilde{\mathbf{a}}-w_{0}(N,N)\) and \(\mathbf{b}=\widetilde{\mathbf{b}}+w_{0}(N,N)\), where \(w_{0}\) is a constant that we will fix later. Let us use \(\mathbf{u}^{*}\in I\) and \(\mathbf{v}^{*}\in J^{h}\) to denote the random points such that \(Z^{\max}_{I,J^{h}}=Z_{\mathbf{u}^{*},\mathbf{v}^{*}}\). Then, we have
\[\log Z_{\mathbf{a},\mathbf{b}}\geq\log Z_{\mathbf{a},\mathbf{u}^{*}}+\log Z^{ \max}_{I,J^{h}}+\log Z_{\mathbf{v}^{*},\mathbf{b}}.\] (A.2)
Since \(|\mathbf{b}-\mathbf{a}|_{1}=2(1+2w_{0})N\), we rewrite the vector \(\mathbf{b}-\mathbf{a}\) as \(2(1+2w_{0})N\boldsymbol{\xi}[\mu/2+z_{\mathbf{a},\mathbf{b}}]\) for some nonnegative constant \(z_{\mathbf{a},\mathbf{b}}\). Note the perpendicular \(\ell^{1}\)-distance from \(\mathbf{b}-\mathbf{a}\) to the diagonal line is
\[(\mathbf{b}-\mathbf{a})\cdot(\mathbf{e}_{1}-\mathbf{e}_{2})=(\widetilde{ \mathbf{b}}-\widetilde{\mathbf{a}})\cdot(\mathbf{e}_{1}-\mathbf{e}_{2}),\]
which is the same as the \(\ell^{1}\)-distance from \(\widetilde{\mathbf{b}}-\widetilde{\mathbf{a}}\) to the diagonal line. For each fixed \(h\) in our range (A.1), it holds that
\[(\widetilde{\mathbf{b}}-\widetilde{\mathbf{a}})\cdot(\mathbf{e}_{1}-\mathbf{ e}_{2})=2hN^{2/3}.\]
From the perpendicular distance to the diagonal, we see that
\[\text{slope of }\mathbf{b}-\mathbf{a}\ =\tfrac{2(1+2w_{0})N+hN^{2/3}}{2(1+2w_{0})N- hN^{2/3}}=1+\tfrac{2h}{2(1+2w_{0})N^{1/3}-h}.\] (A.3)
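Here the last equality is simply the cancellation of a common factor \(N^{2/3}\):
\[\frac{2hN^{2/3}}{2(1+2w_{0})N-hN^{2/3}}=\frac{2h}{2(1+2w_{0})N^{1/3}-h}.\]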
Because of the upper bound \(h\leq\tfrac{1}{2}N^{1/3}\) from (A.1), we can choose \(w_{0}\) and \(N_{0}\) to be large enough so that for all \(N\geq N_{0}\) and \(0\leq h\leq\tfrac{1}{2}N^{1/3}\), the slope in (A.3) is contained inside the interval \([1-\epsilon,1+\epsilon]\) from Proposition 3.2. Then, by Proposition 3.2 and possibly increasing the value \(w_{0}\) if necessary, there exists a positive constant \(C\) such that
\[z_{\mathbf{a},\mathbf{b}}\in\left[\tfrac{1}{C}\tfrac{2h}{2(1+2w_{0})N^{1/3}-h },C\tfrac{2h}{2(1+2w_{0})N^{1/3}-h}\right]\] (A.4)
where the constant \(C\) above is independent of \(h\) as long as \(0\leq h\leq\tfrac{1}{2}N^{1/3}\).
Next, we can increase \(w_{0}\) further if necessary so that the interval from (A.4) falls into the small interval \([-\epsilon,\epsilon]\) from Proposition 3.4. Then, Proposition 3.4 and increasing \(w_{0}\) further if necessary give us the first inequality below, and the second inequality comes from (A.4):
\[2(1+2w_{0})N\big{[}f(\mu/2+z_{\mathbf{a},\mathbf{b}})-f_{d}\big{]}\leq(1+2w_{ 0})N\big{[}-Cz_{\mathbf{a},\mathbf{b}}^{2}\big{]}\leq-\widetilde{C}h^{2}N^{1/3}.\] (A.5)
Note that \(w_{0}\) has now been fixed, so we absorb it into \(\widetilde{C}\).
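Concretely, the order of the right-hand side of (A.5) can be read off from the lower end of (A.4):
\[Nz_{\mathbf{a},\mathbf{b}}^{2}\geq N\Big{(}\frac{1}{C}\cdot\frac{2h}{2(1+2w_{0})N^{1/3}}\Big{)}^{2}=\frac{h^{2}N^{1/3}}{C^{2}(1+2w_{0})^{2}}.\]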
Figure A.1: An illustration of the points \(\widetilde{\mathbf{a}},\widetilde{\mathbf{b}},\mathbf{a},\mathbf{b}\) from the step back argument.
Finally, let us upper bound the probability stated in the proposition,
\[\mathbb{P}\Big{(}\log Z^{\max}_{I,J^{h}}-2Nf_{d}\geq(-Ch^{2}+t)N^{1/3} \Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{\mathbf{a},\mathbf{b}}-\log Z_{ \mathbf{a},\mathbf{u}^{*}}-\log Z_{\mathbf{v}^{*},\mathbf{b}}-2Nf_{d}\geq(-Ch^{2 }+t)N^{1/3}\Big{)}\] \[=\mathbb{P}\Big{(}[\log Z_{\mathbf{a},\mathbf{b}}-(2+4w_{0})Nf_{d} ]-[\log Z_{\mathbf{a},\mathbf{u}^{*}}-2w_{0}Nf_{d}]\] \[\qquad\qquad\qquad\qquad-[\log Z_{\mathbf{v}^{*},\mathbf{b}}-2w_{ 0}Nf_{d}]\geq(-Ch^{2}+t)N^{1/3}\Big{)}\] by (A.5) \[\leq\mathbb{P}\Big{(}[\log Z_{\mathbf{a},\mathbf{b}}-(2+4w_{0})Nf (\mu/2+z_{\mathbf{a},\mathbf{b}})]-[\log(Z_{\mathbf{a},\mathbf{u}^{*}})-2w_{0 }Nf_{d}]\] \[\qquad\qquad\qquad-[\log Z_{\mathbf{v}^{*},\mathbf{b}}-2w_{0}Nf_{ d}]\geq((\widetilde{C}-C)h^{2}+t)N^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{\mathbf{a},\mathbf{b}}-2(1+2w_{0})Nf (\mu/2+z_{\mathbf{a},\mathbf{b}})\geq\tfrac{1}{3}((\widetilde{C}-C)h^{2}+t)N^ {1/3}\Big{)}\] (A.6) \[\qquad+\mathbb{P}\Big{(}\log Z_{\mathbf{a},\mathbf{u}^{*}}-2w_{0 }Nf_{d}\leq-\tfrac{1}{3}((\widetilde{C}-C)h^{2}+t)N^{1/3}\Big{)}\] (A.7) \[\qquad+\mathbb{P}\Big{(}\log Z_{\mathbf{v}^{*},\mathbf{b}}-2w_{0 }Nf_{d}\leq-\tfrac{1}{3}((\widetilde{C}-C)h^{2}+t)N^{1/3}\Big{)}\] (A.8)
Fix \(C\) small so that \(\widetilde{C}-C>0\) in the expression above; we will now show that all three probabilities decay faster than \(e^{-C(|h|^{3}+\min\{t^{3/2},tN^{1/3}\})}\). The bound (A.6) \(\leq e^{-C(|h|^{3}+\min\{t^{3/2},tN^{1/3}\})}\) follows from Proposition 3.6. By symmetry, the estimates for the remaining two terms are the same, so let us work with (A.7). Since \(\mathbf{u}^{*}\) only depends on the edges between the lines \(\mathcal{L}_{0}\) and \(\mathcal{L}_{N}\), by Proposition 3.8
\[\begin{split}&\mathbb{P}\Big{(}\log Z_{\mathbf{a},\mathbf{u}^{*}}-2w_{0}Nf_{d}<-\tfrac{1}{3}((\widetilde{C}-C)h^{2}+t)N^{1/3}\Big{)}\\ &\qquad\leq\sup_{\mathbf{u}\in I}\mathbb{P}\Big{(}\log Z_{\mathbf{a},\mathbf{u}}-2w_{0}Nf_{d}<-\tfrac{1}{3}((\widetilde{C}-C)h^{2}+t)N^{1/3}\Big{)}\leq e^{-C(|h|^{3}+\min\{t^{3/2},tN^{1/3}\})}.\end{split}\]
With this, we have finished the proof of this theorem.
#### a.1.2 Proof of proposition 3.12
Proof of Proposition 3.12.: For this proof, let us define
\[I^{k}=\mathcal{L}^{N^{2/3}}_{(-2kN^{2/3},2kN^{2/3})}\qquad J^{h}=\mathcal{L}^ {N^{2/3}}_{(N-2hN^{2/3},N+2hN^{2/3})}.\]
Since the number of points between \(\mathcal{L}^{sN^{2/3}}_{0}\) and \(\mathcal{L}_{N}\setminus\mathcal{L}^{(s+t)N^{2/3}}_{N}\) which are connected by directed paths is at most \(N^{100}\), we may work with the maximum version of the free energy since
\[Z_{\mathcal{L}^{sN^{2/3}}_{0},\mathcal{L}_{N}\setminus\mathcal{L}^{(s+t)N^{2/ 3}}_{N}}\leq Z^{\max}_{\mathcal{L}^{sN^{2/3}}_{0},\mathcal{L}_{N}\setminus \mathcal{L}^{(s+t)N^{2/3}}_{N}}+100\log N.\]
By translation invariance and Proposition 3.11,
\[\mathbb{P}(\log Z_{I^{-k},J^{h}}-2Nf_{d}\geq-C_{16}(h+k)^{2}N^{1/3})=\mathbb{P}(\log Z_{I^{0},J^{h+k}}-2Nf_{d}\geq-C_{16}(h+k)^{2}N^{1/3})\leq e^{-C_{17}(h+k)^{3}}.\]
Then, the following union bound finishes the proof
\[\mathbb{P}\Big{(}\log Z^{\max}_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}\setminus\mathcal{L}_{N}^{(s+t)N^{2/3}}}-2Nf_{d}\geq-C_{16}t^{2}N^{1/3}\Big{)}\leq\sum_{k,h}\mathbb{P}\big{(}\log Z_{I^{-k},J^{h}}-2Nf_{d}\geq-C_{16}(h+k)^{2}N^{1/3}\big{)}\leq\sum_{k,h}e^{-C_{17}|h+k|^{3}}\leq e^{-Ct^{3}},\]
where the sum runs over \(k\in[\![-s,s]\!]\) and \(|h|\geq s+t\), for which the segments \(I^{-k}\) cover \(\mathcal{L}_{0}^{sN^{2/3}}\) and the segments \(J^{h}\) cover \(\mathcal{L}_{N}\setminus\mathcal{L}_{N}^{(s+t)N^{2/3}}\), so that \(|h+k|\geq t\) in every summand.

#### a.1.3 Proof of Theorem 3.13

Proof of Theorem 3.13.: We decompose the collection of paths from \(\mathcal{L}_{0}^{sN^{2/3}}\) to \(\mathcal{L}_{N}^{sN^{2/3}}\) that exit the strip of width \((s+t\sum_{i=0}^{\infty}2^{-i/5})N^{2/3}\) into events \(A_{0},A_{1},\ldots,A_{k_{0}}\), according to the dyadic scale at which the exit occurs.
Using this decomposition of the paths, we have
\[\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}^{ \text{exit},(s+t\sum_{i=0}^{\infty}2^{-i/5})N^{2/3}} =\log\Big{(}\sum_{k=0}^{k_{0}}Z_{\mathcal{L}_{0}^{sN^{2/3}}, \mathcal{L}_{N}^{sN^{2/3}}}(A_{k})\Big{)}\] \[\leq\log(k_{0})+\max_{0\leq k\leq k_{0}}\{\log Z_{\mathcal{L}_{0 }^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k})\}\] \[\leq\log N+\max_{0\leq k\leq k_{0}}\{\log Z_{\mathcal{L}_{0}^{sN^ {2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k})\}.\]
Since our estimate is on the scale \(N^{1/3}\), we may ignore the \(\log N\) term above. Now, it suffices for us to upper bound
\[\mathbb{P}\Big{(}\max_{0\leq k\leq k_{0}}\big{\{}\log Z_{\mathcal{ L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k})\big{\}}-2Nf_{d}\geq-\widetilde{ C}t^{2}N^{1/3}\Big{)}\] (A.11) \[\leq\sum_{k=0}^{k_{0}}\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{ sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k})-2Nf_{d}\geq-\widetilde{C}t^{2}N^{1/3} \Big{)}\]
Next, let us upper bound each term inside the sum above,
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}} }(A_{k})-2Nf_{d}\geq-\widetilde{C}t^{2}N^{1/3}\Big{)}.\] (A.12)
Define \(U_{k}=\{2m-1:m=1,\ldots,2^{k}\}\). For \(l\in U_{k}\), we can write \(A_{k}\) as a (non-disjoint) union of \(A_{k}^{l}\) where \(A_{k}^{l}\) contains the collection of paths between \(\mathcal{L}_{0}^{sN^{2/3}}\) and \(\mathcal{L}_{N}^{sN^{2/3}}\) that go through the segments \(\mathcal{L}_{(l-1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k-1}2^{-i/5})N^{2/3}}\) and \(\mathcal{L}_{(l+1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k-1}2^{-i/5})N^{2/3}}\) while avoiding the segment \(\mathcal{L}_{lN/2^{k+1}}^{(s+t\sum_{i=0}^{k}2^{-i/5})N^{2/3}}\) in between. Then, we have
\[\text{(A.12)}\leq\mathbb{P}\Big{(}\log\Big{(}\sum_{l\in U_{k}}Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k}^{l})\Big{)}-2Nf_{d}\geq-\widetilde{C}t^{2}N^{1/3}\Big{)}\]
\[\leq\mathbb{P}\Big{(}\max_{l\in U_{k}}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k}^{l})-2Nf_{d}\geq-2\widetilde{C}t^{2}N^{1/3}\Big{)}\leq\sum_{l\in U_{k}}\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k}^{l})-2Nf_{d}\geq-2\widetilde{C}t^{2}N^{1/3}\Big{)}.\] (A.13)
Again, let us look at the probability inside the sum (A.13). First, note we have the following upper bound
\[\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_ {k}^{l}) \leq\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{(l-1)N/2^{k+1}}}\] (A.14) \[\qquad+\log Z^{\mathop{\rm mid}\nolimits,(s+t\sum_{i=0}^{k}2^{-i /5})N^{2/3}}_{\mathcal{L}_{(l-1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k}2^{-i/5})N^{2/3} }},\mathcal{L}_{(l+1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k}2^{-i/5})N^{2/3}}\] (A.15) \[\qquad+\log Z_{\mathcal{L}_{(l+1)N/2^{k+1}},\mathcal{L}_{N}^{sN^ {2/3}}}.\] (A.16)
Now, note for (A.15), the transversal fluctuation of the paths between \(\mathcal{L}_{(l-1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k-1}2^{-i/5})N^{2/3}}\) and \(\mathcal{L}_{(l+1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k-1}2^{-i/5})N^{2/3}}\) is more than
\[2^{-k/5}tN^{2/3}=\frac{2^{2k/3}}{2^{k/5}}t(N/2^{k})^{2/3}.\]
Thus, by Proposition A.1, for some positive constants \(C^{\prime}\) and \(C^{\prime\prime}\),
\[\mathbb{P}\Big{(}\text{(A.15)}-2\frac{N}{2^{k}}f_{d}>-C^{\prime}\Big{(}\frac{2^{2k/3}}{2^{k/5}}t\Big{)}^{2}(N/2^{k})^{1/3}\Big{)}\leq e^{-C^{\prime\prime}\big{(}\frac{2^{2k/3}}{2^{k/5}}t\big{)}^{3}}.\] (A.17)
And note that
\[\Big{(}\frac{2^{2k/3}}{2^{k/5}}t\Big{)}^{2}(N/2^{k})^{1/3}=2^{3k/5}t^{2}N^{1/3}\]
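The exponent bookkeeping in this identity is \(2\big{(}\tfrac{2}{3}-\tfrac{1}{5}\big{)}-\tfrac{1}{3}=\tfrac{3}{5}\), that is,
\[\Big{(}\frac{2^{2k/3}}{2^{k/5}}\Big{)}^{2}\cdot 2^{-k/3}=2^{\frac{4k}{3}-\frac{2k}{5}-\frac{k}{3}}=2^{\frac{3k}{5}}.\]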
With this, we may upper bound the probability inside the sum (A.13) as
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_ {N}^{sN^{2/3}}}(A_{k}^{l})-2Nf_{d}\geq-2\widetilde{C}t^{2}N^{1/3}\Big{)}\] \[\qquad\leq\mathbb{P}\Big{(}\log Z^{\mathop{\rm mid}\nolimits,(s+t \sum_{i=0}^{k}2^{-i/5})N^{2/3}}_{\mathcal{L}_{(l-1)N/2^{k+1}}^{(s+t\sum_{i=0}^ {k-1}2^{-i/5})N^{2/3}}},\mathcal{L}_{(l+1)N/2^{k+1}}^{(s+t\sum_{i=0}^{k-1}2^{- i/5})N^{2/3}}-2\frac{N}{2^{k}}f_{d}\geq-C^{\prime}2^{3k/5}t^{2}N^{1/3}\Big{)}\] (A.18) \[\qquad+\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}}, \mathcal{L}_{(l-1)N/2^{k+1}}}+\log Z_{\mathcal{L}_{(l+1)N/2^{k+1}},\mathcal{L} _{N}^{sN^{2/3}}}\] \[\qquad\qquad\qquad\qquad\qquad-\Big{(}2N-2\frac{N}{2^{k}}\Big{)} f_{d}\geq C^{\prime}2^{3k/5}t^{2}N^{1/3}-2\widetilde{C}t^{2}N^{1/3}\Big{)}\] (A.19)
Note that we have seen in (A.17) that (A.18) \(\leq e^{-C^{\prime\prime}2^{k/100}t^{3}}\). To bound (A.19), note that by lowering the value of \(\widetilde{C}\) if needed,
\[C^{\prime}2^{3k/5}t^{2}N^{1/3}-2\widetilde{C}t^{2}N^{1/3}\geq\frac{1}{2}C^{ \prime}2^{3k/5}t^{2}N^{1/3},\]
then the event in (A.19) should be rare because the free energy is unusually large. By a union bound
\[\text{\eqref{eq:C1}}\leq\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{sN^{2/3}}, \mathcal{L}_{(l-1)N/2^{k+1}}}-2\frac{(l-1)N}{2^{k+1}}f_{d}\geq\tfrac{1}{4}C^{ \prime}2^{3k/5}t^{2}N^{1/3}\Big{)}\] (A.20)
\[+\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{(l+1)N/2^{k+1}},\mathcal{L}_{N}^{sN^{2/3}}}-(2N-2\tfrac{(l+1)N}{2^{k+1}})f_{d}\geq\tfrac{1}{4}C^{\prime}2^{3k/5}t^{2}N^{1/3}\Big{)}.\]
By symmetry, let us bound (A.20) above. For simplicity of the notation, let \(M=\tfrac{(l-1)N}{2^{k+1}}\), then
\[\text{(A.20)}=\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{s(N/M)^{2/3}M^{2/3}},\mathcal{L}_{M}}-2Mf_{d}\geq\tfrac{C^{\prime}2^{3k/5}t^{2}N^{1/3}}{4M^{1/3}}M^{1/3}\Big{)}.\]
To upper bound this term, we would like to apply the interval-to-line bound from Theorem 3.15. The only assumption from Theorem 3.15 that we need to verify here is the width of the interval can not be too wide. A sufficient bound that guarantees the assumption could be
\[s(N/M)^{2/3}\leq e^{\tfrac{C^{\prime}2^{3k/5}t^{2}N^{1/3}}{4M^{1/3}}}.\]
The inequality above holds by our assumption that \(s\leq e^{t}.\) Thus, we obtain
\[\text{(A.20)}\leq e^{-C\min\big{\{}\big{(}\tfrac{2^{3k/5}t^{2}N^{1/3}}{M^{1/3}}\big{)}^{3/2},\,\tfrac{2^{3k/5}t^{2}N^{1/3}}{M^{1/3}}M^{1/3}\big{\}}}\leq e^{-C2^{k/100}t^{3}}.\]
To summarize this last part, we have shown that
\[\text{(A.12)}\leq\sum_{l\in U_{k}}e^{-C2^{k/100}t^{3}}+e^{-C2^{k/100}t^{3}}\leq 2^{k}\cdot e^{-C2^{k/100}t^{3}}.\]
And going back to our goal (A.11), we have shown
\[\mathbb{P}\Big{(}\max_{0\leq k\leq k_{0}}\log Z_{\mathcal{L}_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}(A_{k})-2Nf_{d}\geq-\widetilde{C}t^{2}N^{1/3}\Big{)}\leq\sum_{k=0}^{k_{0}}2^{k}\cdot e^{-C2^{k/100}t^{3}}\leq e^{-Ct^{3}}.\]
With this, we have finished the proof of this theorem.
#### a.1.4 Proof of Corollary 3.14
Proof.: Because of the choice \(s<t/10\), we see that
\[R_{0,N}^{(s+\tfrac{t}{2})N^{2/3}}\subset R_{(-sN^{2/3},sN^{2/3}),N}^{tN^{2/3}}.\]
Then, we have the following bound for the free energy
\[\log Z_{(-sN^{2/3},sN^{2/3}),N}^{\text{exit},tN^{2/3}}\leq\log Z_{\mathcal{L }_{0}^{sN^{2/3}},\mathcal{L}_{N}^{sN^{2/3}}}^{\text{exit},(s+\tfrac{t}{2})N^ {2/3}}.\]
Our corollary follows directly from Theorem 3.13 when applied to the right side above.
### Interval-to-line free energy
We start with a point-to-line bound.
**Proposition A.2**.: _There exist positive constants \(C_{57},N_{0}\) such that for each \(N\geq N_{0}\) and each \(t\geq 1\), we have_
\[\mathbb{P}\Big{(}\log Z_{0,\mathcal{L}_{N}}-2Nf_{d}\geq tN^{1/3}\Big{)}\leq e^{-C_{57}\min\{t^{3/2},tN^{1/3}\}}.\]
Proof.: Note it suffices to prove the same estimate for
\[\mathbb{P}(\log Z_{0,\mathcal{L}_{N}}^{\max}-2Nf_{d}\geq tN^{1/3})\] (A.21)
since \(\log Z_{0,\mathcal{L}_{N}}\leq\log(Z_{0,\mathcal{L}_{N}}^{\max})+100\log N.\) Let \(J^{h}=\mathcal{L}_{(N-2hN^{2/3},N+2hN^{2/3})}^{N^{2/3}}\). By a union bound and Proposition 3.11, we have
\[\text{(A.21)}\leq\sum_{h\in\mathbb{Z}}\mathbb{P}(\log Z_{0,J^{h}}^{\max}-2Nf_{d}\geq tN^{1/3})\leq\sum_{h\in\mathbb{Z}}e^{-C(|h|^{3}+\min\{t^{3/2},tN^{1/3}\})}\leq e^{-C\min\{t^{3/2},tN^{1/3}\}}.\]
Next, we use a step-back argument to upgrade the point-to-line bound of Proposition A.2 to Theorem 3.15.
Proof of Theorem 3.15.: First, we prove the case when \(h=1\). Since
\[\log Z_{\mathcal{L}_{0}^{N^{2/3}},\mathcal{L}_{N}}\leq\max_{\mathbf{p}\in \mathcal{L}_{0}^{N^{2/3}}}\log Z_{\mathbf{p},\mathcal{L}_{N}}+100\log N,\]
it suffices to work with the maximum above. Let \(\mathbf{p}^{*}\) denote the random maximizer that
\[\max_{\mathbf{p}\in\mathcal{L}_{0}^{N^{2/3}}}\log Z_{\mathbf{p},\mathcal{L}_{ N}}=\log Z_{\mathbf{p}^{*},\mathcal{L}_{N}}.\]
Then, we have
\[\log Z_{-N,\mathcal{L}_{N}}\geq\log Z_{-N,\mathbf{p}^{*}}+\log Z_{\mathbf{p}^{* },\mathcal{L}_{N}}.\]
With this, we see that
\[\mathbb{P}\Big{(}\max_{\mathbf{p}\in\mathcal{L}_{0}^{N^{2/3}}} \log Z_{\mathbf{p},\mathcal{L}_{N}}-2Nf_{d}\geq tN^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}[\log Z_{-N,\mathcal{L}_{N}}-4Nf_{d}]-[\log Z _{-N,\mathbf{p}^{*}}-2Nf_{d}]\geq tN^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{-N,\mathcal{L}_{N}}-4Nf_{d}\geq\tfrac {t}{2}N^{1/3}\Big{)}\] (A.22) \[\qquad\qquad\qquad\qquad+\mathbb{P}\Big{(}\log Z_{-N,\mathbf{p}^ {*}}-2Nf_{d}\leq-\tfrac{t}{2}N^{1/3}\Big{)}.\] (A.23)
From Proposition A.2, we obtain \(\text{(A.22)}\leq e^{-C\min\{t^{3/2},tN^{1/3}\}}\). Because \(\mathbf{p}^{*}\) only depends on weights between \(\mathcal{L}_{0}\) and \(\mathcal{L}_{N}\), and by Proposition 3.8, we have
\[\text{(A.23)}\leq\max_{\mathbf{p}\in\mathcal{L}_{0}^{N^{2/3}}}\mathbb{P}\Big{(}\log Z_{-N,\mathbf{p}}-2Nf_{d}\leq-\tfrac{t}{2}N^{1/3}\Big{)}\leq e^{-\widetilde{C}\min\{t^{3/2},tN^{1/3}\}}.\] (A.24)
This finishes the case when \(h=1\).
Next, for each integer \(j\in\llbracket-h,h\rrbracket\), let us define \(I^{j}=\mathcal{L}_{(-2jN^{2/3},+2jN^{2/3})}^{N^{2/3}}\). Then, it holds that
\[\log Z_{\mathcal{L}_{0}^{hN^{2/3}},\mathcal{L}_{N}}\leq\max_{j\in\llbracket-h,h\rrbracket}\log Z_{I^{j},\mathcal{L}_{N}}+\log h.\]
Using this and a union bound,
\[\mathbb{P}\Big{(}\log Z_{\mathcal{L}_{0}^{hN^{2/3}},\mathcal{L}_{N}}-2Nf_{d} \geq tN^{1/3}\Big{)}\leq\mathbb{P}\Big{(}\max_{j\in\llbracket-h,h\rrbracket} \log Z_{I_{j},\mathcal{L}_{N}}-2Nf_{d}\geq\tfrac{t}{3}N^{1/3}\Big{)}\]
\[\leq 10h\,\mathbb{P}\Big{(}\log Z_{I^{0},\mathcal{L}_{N}}-2Nf_{d}\geq\tfrac{t}{3}N^{1/3}\Big{)}\leq 10e^{C_{24}\min\{t^{3/2},tN^{1/3}\}}e^{-\widetilde{C}\min\{t^{3/2},tN^{1/3}\}}\leq e^{-C\min\{t^{3/2},tN^{1/3}\}},\]
where the last inequality holds if we fix \(C_{24}\leq\tfrac{1}{2}\widetilde{C}\), where \(\widetilde{C}\) is the constant appearing in (A.24). With this, we have finished the proof of the theorem.
### Estimates for the constrained free energy
#### a.3.1 Proof of Theorem 3.16
Proof.: First, we prove the estimate when
\[t_{0}\leq t\leq N^{2/3}.\]
To do this, we break the line segment from \((0,0)\) to \(\mathbf{p}\) into equal pieces with \(\ell^{1}\) length \(2N\theta/\sqrt{t}\). And let us denote the endpoints in between as \(\{\mathbf{p}_{i}\}\).
Let \(0<C^{\prime}\leq 1/2\) which we will fix later. By a union bound, we have
\[\mathbb{P}\Big{(}\log Z_{0,\mathbf{p}}^{\mathrm{in},\theta N^{2 /3}}-2Nf_{d}\leq-C^{\prime}t^{2}N^{1/3}\Big{)}\] \[\leq\frac{\sqrt{t}}{\theta}\mathbb{P}\Big{(}\log Z_{0,\mathbf{p }_{1}}^{\mathrm{in},\theta N^{2/3}}-2(N\theta/\sqrt{t})f_{d}\leq-C^{\prime}t^ {2/3}\theta^{2/3}(N\theta/\sqrt{t})^{1/3}\Big{)}\] (A.25)
Using the fact that
\[\log Z_{0,\mathbf{p}_{1}}\leq\log 2+\max\Big{\{}\log Z_{0,\mathbf{p}_{1}}^{ \mathrm{in},\theta N^{2/3}},\log Z_{0,\mathbf{p}_{1}}^{\mathrm{exit},\theta N ^{2/3}}\Big{\}},\]
we may continue the bound
\[\eqref{eq:C2}\leq\frac{\sqrt{t}}{\theta}\Big{[}\mathbb{P}\Big{(} \log Z_{0,\mathbf{p}_{1}}-2(N\theta/\sqrt{t})f_{d}\leq-C^{\prime}t^{2/3} \theta^{2/3}(N\theta/\sqrt{t})^{1/3}+\log 2\Big{)}\] (A.26) \[\qquad+\mathbb{P}\Big{(}\log Z_{0,\mathbf{p}_{1}}^{\mathrm{exit },t^{1/3}\theta^{1/3}(\theta N/\sqrt{t})^{2/3}}-2(N\theta/\sqrt{t})f_{d}\geq-C ^{\prime}(t^{1/3}\theta^{1/3})^{2}(N\theta/\sqrt{t})^{1/3}\Big{)}\Big{]}.\] (A.27)
It remains to upper bound each of the probabilities above and this would finish the proof of this theorem.
First, we show that the probability in (A.26) is bounded by \(e^{-C\theta t}\). There exists an absolute constant \(a_{0}\) such that
\[\Big{|}(N\theta/\sqrt{t},N\theta/\sqrt{t})-\mathbf{p}_{1}\Big{|}_{\infty}\leq \frac{a_{0}\theta^{4/3}}{t^{1/6}}(N\theta/\sqrt{t})^{2/3}.\]
Then, by Proposition 3.5
\[\Big{|}\Lambda(\mathbf{p}_{1})-2(N\theta/\sqrt{t})f_{d}\Big{|}\leq C\Big{(} \frac{a_{0}\theta^{4/3}}{t^{1/6}}\Big{)}^{2}(N\theta/\sqrt{t})^{1/3},\]
and the fraction \(\frac{a_{0}\theta^{4/3}}{t^{1/6}}\) is bounded uniformly for \(0<\theta\leq 100\) and \(t\geq t_{0}\). Hence, we may replace the \(2(N\theta/\sqrt{t})f_{d}\) in (A.26) by \(\Lambda(\mathbf{p}_{1})\), and Proposition 3.8 can be applied.
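Spelling out this application: Proposition 3.8, used at scale \(N\theta/\sqrt{t}\) with a deviation parameter of order \(t^{2/3}\theta^{2/3}\), bounds the probability in (A.26) by
\[e^{-C\min\{(t^{2/3}\theta^{2/3})^{3/2},\,t^{2/3}\theta^{2/3}(N\theta/\sqrt{t})^{1/3}\}}\leq e^{-C\theta t},\]
where the last inequality uses the current assumption \(t\leq N^{2/3}\).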
For the probability in (A.27), we may apply Theorem 3.13 and obtain
\[\mathbb{P}\Big{(}\log Z^{\mathrm{exit},t^{1/3}\theta^{1/3}(N\theta/\sqrt{t})^{2/3}}_{0,\mathbf{p}_{1}}-2(N\theta/\sqrt{t})f_{d}\geq-C^{\prime}(t^{1/3}\theta^{1/3})^{2}(N\theta/\sqrt{t})^{1/3}\Big{)}\leq e^{-C\theta t},\]
provided that \(C^{\prime}\) is fixed sufficiently small. Note here the assumption \(t^{1/3}\theta^{1/3}\leq(N\theta/\sqrt{t})^{1/3}\) in Theorem 3.13 is satisfied because of our current assumption \(t\leq N^{2/3}\). This finishes the proof of the estimate when \(t_{0}\leq t\leq N^{2/3}\), as we have shown that the probability appearing in our theorem is upper bounded by \(\frac{\sqrt{t}}{\theta}e^{-C\theta t}\).
Finally, to generalize the range of \(t\), the steps are exactly the same as how we generalized the range of \(t\) in the proof of Proposition 3.8. First, we can trivially generalize to the range \(t_{0}\leq t\leq\alpha N^{2/3}\) for any large positive \(\alpha\). This only changes the constant \(C\) in our upper bound \(\frac{\sqrt{t}}{\theta}e^{-C\theta t}\). For \(t\geq\alpha N^{2/3}\), we replace the constrained free energy \(\log Z^{\text{in},\theta N^{2/3}}_{0,\mathbf{p}}\) by a sum of weights from a single deterministic path inside our parallelogram \(R^{\theta N^{2/3}}_{0,\mathbf{p}}\). Then, our estimate follows from Theorem D.1, as shown in (8.9).
#### a.3.2 Proof of Theorem 3.17
Proof.: We may lower bound the constrained free energy by an i.i.d. sum
\[\log Z^{\text{in},sN^{2/3}}_{0,N}\geq\sum_{i=1}^{ks^{-3/2}}\log Z^{\text{in}, sN^{2/3}}_{(i-1)\frac{s^{3/2}N}{k},i\frac{s^{3/2}N}{k}}.\] (A.28)
Note that
\[\mathbb{P}\Big{(}\log Z^{\text{in},sN^{2/3}}_{0,\frac{s^{3/2}N}{k }}-2(s^{3/2}N/k)f_{d}\geq(s^{3/2}N/k)^{1/3}\Big{)}\] (A.29) \[\geq\mathbb{P}\Big{(}\log Z_{0,\frac{s^{3/2}N}{k}}-2(s^{3/2}N/k) f_{d}\geq 2(s^{3/2}N/k)^{1/3}\Big{)}\] (A.30) \[\qquad\qquad\qquad-\mathbb{P}\Big{(}\log Z^{\text{exit,}sN^{2/3} }_{0,\frac{s^{3/2}N}{k}}-2(s^{3/2}N/k)f_{d}\geq(s^{3/2}N/k)^{1/3}\Big{)}.\] (A.31)
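The first inequality here uses that every path either stays inside the strip of width \(sN^{2/3}\) or exits it, so that, writing \(m=s^{3/2}N/k\) for brevity, \(Z_{0,m}=Z^{\mathrm{in},sN^{2/3}}_{0,m}+Z^{\mathrm{exit},sN^{2/3}}_{0,m}\); hence on the event where \(\log Z_{0,m}-2mf_{d}\geq 2m^{1/3}\) but \(\log Z^{\mathrm{exit},sN^{2/3}}_{0,m}-2mf_{d}<m^{1/3}\),
\[Z^{\mathrm{in},sN^{2/3}}_{0,m}=Z_{0,m}-Z^{\mathrm{exit},sN^{2/3}}_{0,m}\geq e^{2mf_{d}}\big{(}e^{2m^{1/3}}-e^{m^{1/3}}\big{)}\geq e^{2mf_{d}+m^{1/3}},\]
where the last step holds once \(m^{1/3}\geq\log 2\).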
The probability (A.30) is lower bounded by an absolute constant \(c_{0}\) by Proposition 3.7, provided \(s^{3/2}N/k\geq N_{0}^{*}\) where \(N_{0}^{*}\) is the \(N_{0}\) from Proposition 3.7. And the probability (A.31) is upper bounded by \(e^{-Ck}\) from Theorem 3.13 when \(k\leq\sqrt{s}N^{1/3}\) and (A.31) is zero when \(k>\sqrt{s}N^{1/3}\). Thus,
\[\text{(A.29)}\geq c_{0}-e^{-Cs^{3/2}t^{3/2}}>c_{0}/10\]
when \(t\) is large.
Finally, let \(k=\frac{1}{N_{0}^{*}}s^{3/2}t^{3/2}\). On the intersection of the \(ks^{-3/2}\) independent events on which each term of the sum in (A.28) is large as in (A.29), we have
\[\mathbb{P}\Big{(}\log Z^{\text{in},sN^{2/3}}_{0,N}-2Nf_{d}\geq\tfrac{1}{N_{0}^ {*}}tN^{1/3}\Big{)}\geq\big{(}c_{0}-e^{-Cs^{3/2}t^{3/2}}\big{)}^{Ct^{3/2}}\geq e ^{-Ct^{3/2}}\]
where the last constant \(C\) depends on \(s\). With this, we have finished the proof of this theorem.
### Minimum and Maximum of the constrained free energy in a box
#### a.4.1 Proof of Theorem 3.18
Proof.: First, we will prove the following estimate,
\[\mathbb{P}\Big{(}\min_{{\bf p}\in R_{0,N/16}^{N^{2/3}}}\log Z_{{\bf p},N}^{{\rm in },R_{0,N}^{N^{2/3}}}-(2N-|{\bf p}|_{1})f_{d}\leq-tN^{1/3}\Big{)}\leq e^{-Ct}.\] (A.32)
Then, the statement of the theorem follows from a union bound, which we will show at the end of the proof. To start, we construct a tree \(\mathcal{T}\) with the base at \((N,N)\). Define \(\mathcal{T}_{0}=\{(N,N)\}\) and we will define the remaining part of the tree. Fix a positive constant \(J\) such that
\[N^{1/4}\leq N8^{-J}\leq N^{1/3-0.01},\] (A.33)
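In terms of \(J\), the two constraints in (A.33) read
\[\Big{(}\tfrac{2}{3}+0.01\Big{)}\frac{\log N}{\log 8}\leq J\leq\frac{3}{4}\cdot\frac{\log N}{\log 8},\]
an interval of length \(\big{(}\tfrac{1}{12}-0.01\big{)}\frac{\log N}{\log 8}\), which exceeds \(1\) once \(N\) is large;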
such \(J\) always exists provided that \(N_{0}\) is sufficiently large. Next, for \(j=1,2,\ldots,J\), \(\mathcal{T}_{j}\) is the collection of \(32^{j}\) vertices which we now define. For \(i=1,2,\ldots,8^{j}\), from each segment \(\mathcal{L}_{\frac{2i+1}{32}8^{-j}N}^{N^{2/3}}\), we collect \(4^{j}\) vertices which split the segment \(\mathcal{L}_{\frac{2i+1}{32}8^{-j}N}^{N^{2/3}}\) into \(4^{j}+1\) equal pieces. We define the vertices of our tree \(\mathcal{T}\) as the union \(\bigcup_{j=0}^{J}\mathcal{T}_{j}\).
Now, we form the edges between the vertices. Let us label the vertices in \(\mathcal{T}_{j}\) as
\[\{x_{(i,k)}^{j}:1\leq i\leq 8^{j},1\leq k\leq 4^{j}\}.\]
A fixed index \(i\) records the anti-diagonal segment \(\mathcal{L}_{\frac{2i+1}{32}8^{-j}N}^{N^{2/3}}\). And along this segment, we label the \(4^{j}\) chosen vertices by their \(e_{2}\)-coordinate values with index \(k=1,\ldots,4^{j}\). For \(k=1\), we choose \(x_{(i,1)}^{j}\) to be the vertex with the smallest \(e_{2}\)-coordinate (which could be negative). Next, for \(j=1,2,\ldots,J-1\), we connect the vertex \({\bf x}_{(i,k)}^{j}\in\mathcal{T}_{j}\) with the 32 vertices inside \(\mathcal{T}_{j+1}\) which are of the form \(x_{8(i-1)+i^{\prime},4(k-1)+j^{\prime}}^{j+1}\) where \(1\leq i^{\prime}\leq 8\) and \(1\leq j^{\prime}\leq 4.\) This completes the construction of the tree \(\mathcal{T}\).
For fixed \(j,i\) and \(k\), let us denote the collection of 32 points in \(T_{j+1}\) which are connected to \({\bf x}_{(i,k)}^{j}\) as \(V_{j,i,k}\). Now, for each \({\bf v}\in V_{j,i,k}\), the diagonal distance between the \({\bf v}\) and \({\bf x}_{(i,k)}^{j}\) satisfies
\[|{\bf x}_{(i,k)}^{j}|_{1}-|{\bf v}|_{1}\in\Big{[}2\frac{8^{-j}}{32}N,6\frac{8^ {-j}}{32}N\Big{]},\] (A.34)
and their anti-diagonal distance is upper bounded as
\[\Big{|}{\bf x}_{(i,k)}^{j}\cdot({\bf e}_{1}-{\bf e}_{2})-{\bf v}\cdot({\bf e} _{1}-{\bf e}_{2})\Big{|}\leq 2\cdot 4^{-j}N^{2/3}.\] (A.35)
Similarly, we look at the vertices inside \(\mathcal{T}_{J}\), they form a grid inside the rectangle \(R_{0,\frac{248-J}{32}N}^{N^{2/3}}\) which contains \(R_{0,N/16}^{N^{2/3}}\). Then, for each \({\bf p}\in R_{0,N/16}^{N^{2/3}}\), there exists an \(x_{(i[{\bf p}],k[{\bf p}])}^{J}=x_{\bf p}^{J}\in\mathcal{T}^{J}\) such that
\[|x_{\bf p}^{J}|_{1}-|{\bf p}|_{1}\in\Big{[}2\frac{8^{-J}}{32}N,6\frac{8^{-J}}{ 32}N\Big{]}\] (A.36)
and
\[\Big{|}x_{\bf p}^{J}\cdot({\bf e}_{1}-{\bf e}_{2})-{\bf p}\cdot({\bf e}_{1}-{ \bf e}_{2})\Big{|}\leq 2\cdot 4^{-J}N^{2/3}.\] (A.37)
Provided that \(N_{0}\) is sufficiently large, the collection of up-right paths from \(\mathbf{p}\) to \(x_{\mathbf{p}}^{J}\) while remaining inside \(R_{0,N/16}^{N^{2/3}}\) has to be non-empty. This is because by our choice of \(J\) from (A.33), the estimates (A.36) and (A.37) imply the diagonal distance between \(\mathbf{p}\) and \(x_{\mathbf{p}}^{J}\) is lower bounded by \(N^{1/4}/100\), while their anti-diagonal distance is upper bounded by \(2(N8^{-J})^{2/3}\leq N^{2/9}\).
Now, for each \(\mathbf{v}\in V_{j,i,k}\), let us define the event
\[\mathcal{R}_{j,i,k}^{\mathbf{v}}=\Big{\{}\log Z_{\mathbf{v},\mathbf{x}_{(i,k)}^{j}}^{\mathrm{in},h4^{-(j+1)}N^{2/3}}-(|\mathbf{x}_{(i,k)}^{j}|_{1}-|\mathbf{v}|_{1})f_{d}\geq-2^{-j/5}tN^{1/3}\Big{\}}.\] (A.38)
Here, note that the path constraint to the parallelogram in the definition above is compatible with the global constraint, since
\[R_{\mathbf{v},\mathbf{x}_{(i,k)}^{j}}^{h4^{-(j+1)}N^{2/3}}\subset R_{0,N}^{N^ {2/3}}.\]
From (A.34), (A.35) and the choice of the width of the rectangle \(R_{\mathbf{v},\mathbf{x}_{(i,k)}^{j}}^{h4^{-(j+1)}N^{2/3}}\), by Theorem 3.16,
\[\mathbb{P}((\mathcal{R}_{j,i,k}^{\mathbf{v}})^{c})=\mathbb{P}\Big{(}\log Z_{\mathbf{v},\mathbf{x}_{(i,k)}^{j}}^{\mathrm{in},\frac{h}{4}(8^{-j}N)^{2/3}}-(|\mathbf{x}_{(i,k)}^{j}|_{1}-|\mathbf{v}|_{1})f_{d}<-2^{-j/5}tN^{1/3}\Big{)}\leq e^{-C2^{j/10}t}.\]
Hence,
\[\mathbb{P}\Big{(}\cup_{j=0}^{J-1}\cup_{i=1}^{8^{j}}\cup_{k=1}^{4^{j}}\cup_{\mathbf{v}\in V_{j,i,k}}(\mathcal{R}_{j,i,k}^{\mathbf{v}})^{c}\Big{)}\leq\sum_{j=0}^{\infty}100^{j}e^{-C2^{j/10}t}\leq e^{-Ct},\]
provided that \(t\) is sufficiently large.
Next, let us define the event
\[\mathcal{R}_{\mathrm{start}}=\Big{\{}100\cdot 8^{-J}N\min_{\mathbf{z}\in R_{0,N} ^{N^{2/3}}}\{\log Y_{\mathbf{z}}\}\geq-tN^{1/3}\Big{\}}.\]
Recall that \(N8^{-J}\leq N^{1/3-0.01}\), then
\[\mathbb{P}((\mathcal{R}_{\mathrm{start}})^{c}) \leq\mathbb{P}\Big{(}\min_{\mathbf{z}\in R_{0,N}^{N^{2/3}}}\{\log Y _{\mathbf{z}}\}\leq-tN^{0.001}\Big{)}\leq N^{2}\cdot\mathbb{P}\Big{(}\log Y_{ \mathbf{0}}\leq-tN^{0.001}\Big{)}\] \[\leq e^{-CN^{0.001}t}\leq e^{-Ct}.\]
Then, on the event \(\mathcal{R}_{\mathrm{start}}\cap\big{(}\cap_{j=0}^{J-1}\cap_{i=1}^{8^{j}}\cap_{k=1}^{4^{j}}\cap_{\mathbf{v}\in V_{j,i,k}}\mathcal{R}_{j,i,k}^{\mathbf{v}}\big{)}\), which has probability at least \(1-e^{-Ct}\), we must have
\[\min_{\mathbf{p}\in R_{0,N/16}^{N^{2/3}}}\log Z_{\mathbf{p},N}^{\mathrm{in},R_{0,N}^{N^{2/3}}}-(2N-|\mathbf{p}|_{1})f_{d}\geq-\Big{(}1+\sum_{j=0}^{\infty}2^{-j/5}\Big{)}tN^{1/3}.\]
To see this, for each \(\mathbf{p}\in R_{0,N/16}^{N^{2/3}}\), we may go to \(x_{\mathbf{p}}^{J}\). Then, from \(x_{\mathbf{p}}^{J}\), we obtain a sequence of points \(x_{\mathbf{p}}^{j}\) which traces back to \((N,N)\). Then, we have
\[\log Z_{\mathbf{p},N}^{\mathrm{in},R_{0,N}^{N^{2/3}}}\geq|\mathbf{p}-x_{\mathbf{p}}^{J}|_{1}\min_{\mathbf{z}\in R_{0,N}^{N^{2/3}}}\{0\wedge\log Y_{\mathbf{z}}\}+\sum_{j=0}^{J-1}\log Z_{x_{\mathbf{p}}^{j+1},x_{\mathbf{p}}^{j}}^{\mathrm{in},h4^{-(j+1)}N^{2/3}}.\]
And on the event \(\mathcal{R}_{\mathrm{start}}\cap\big{(}\cap_{j=0}^{J-1}\cap_{i=1}^{8^{j}}\cap_{k=1}^{4^{j}}\cap_{\mathbf{v}\in V_{j,i,k}}\mathcal{R}_{j,i,k}^{\mathbf{v}}\big{)}\), the right side above is greater than \((2N-|\mathbf{p}|_{1})f_{d}-\Big{(}1+\sum_{j=0}^{\infty}2^{-j/5}\Big{)}\,tN^{1/3}\). With this, we have finished the proof of (A.32).
Finally, the estimate from our theorem simply follows from a union bound using (A.32). We rewrite the rectangle \(R_{0,9N/10}^{N^{2/3}}\) as a union of smaller rectangles
\[R_{0,\frac{9}{10}N}^{N^{2/3}}=\bigcup_{k=0}^{143}R_{\frac{kN}{160},\frac{(k+1)N} {160}}^{N^{2/3}}.\]
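This is indeed a covering of \(R_{0,\frac{9}{10}N}^{N^{2/3}}\), since along the diagonal
\[\bigcup_{k=0}^{143}\Big{[}\tfrac{kN}{160},\tfrac{(k+1)N}{160}\Big{]}=\Big{[}0,\tfrac{144N}{160}\Big{]}=\Big{[}0,\tfrac{9N}{10}\Big{]}.\]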
Then, (A.32) can be applied to each one of these rectangles, and a union bound gives
\[\mathbb{P}\Big{(}\min_{\mathbf{p}\in R_{0,\frac{9}{10}N}^{N^{2/3}}}\log Z_{\mathbf{p},N}^{\mathrm{in},R_{0,N}^{N^{2/3}}}-(2N-|\mathbf{p}|_{1})f_{d}\leq-tN^{1/3}\Big{)}\leq e^{-Ct},\]
which finishes the proof.
#### a.4.2 Proof of Theorem 3.19
Proof of Theorem 3.19.: This follows from a step-back argument. First, let \(\mathbf{p}^{*}\) denote the random maximizer of
\[\max_{\mathbf{p}\in R_{0,\frac{9}{10}N}^{N^{2/3}}}\log Z_{\mathbf{p},\mathcal{ L}_{N}}-(2N-|\mathbf{p}|_{1})f_{d}.\]
Then,
\[\log Z_{-N,\mathcal{L}_{N}}\geq\log Z_{-N,\mathbf{p}^{*}}+\log Z_{\mathbf{p}^ {*},\mathcal{L}_{N}}.\]
By a union bound,
\[\mathbb{P}\Big{(}\log Z_{\mathbf{p}^{*},\mathcal{L}_{N}}-(2N-| \mathbf{p}^{*}|_{1})f_{d}\geq tN^{1/3}\Big{)}\] \[\leq\mathbb{P}\Big{(}\log Z_{-N,\mathcal{L}_{N}}-4Nf_{d}\geq\tfrac {1}{2}tN^{1/3}\Big{)}\] \[\qquad\qquad+\mathbb{P}\Big{(}\log Z_{-N,\mathbf{p}^{*}}-(2N+| \mathbf{p}^{*}|_{1})f_{d}\leq-\tfrac{1}{2}tN^{1/3}\Big{)}.\]
The two probabilities are bounded by \(e^{-Ct}\) by Proposition 3.8 and Theorem 3.18.
## Appendix B Proof of the random walk comparison in Section 3.5
Proof of Theorem 3.28.: We will construct the upper bound \(X_{i}\). The construction of \(Y_{i}\) follows from a similar argument, which we sketch at the end of the proof. In addition, we will assume that the partition functions include the weight \(Y_{(0,0)}\), which does not change the profile.
To start, recall the profile that we are looking at is along \(\Theta_{k}=\{\mathbf{z}_{0},\ldots,\mathbf{z}_{k}\}\). Let us first fix an \(a_{0}\) sufficiently small that \(\mathbf{z}_{0}\cdot\mathbf{e}_{2}\geq\tfrac{1}{2}\mathbf{v}_{N}\cdot\mathbf{e }_{2}\). Next, we will fix the constant \(q_{0}\), and the idea is illustrated in Figure B.1. Recall \(\lambda=\rho+q_{0}sN^{-1/3}.\) By Proposition 3.1, for \(q_{0}>0\),
\[\text{the slope of the vector }\boldsymbol{\xi}[\lambda]\geq m_{\rho}(0)+cq_{0}sN^{-1/3}.\]
This means increasing \(q_{0}\) will make the dotted line appearing in Figure B.1 more vertical. Then, because the slope between \((0,0)\) and \(v_{N}\) is \(m_{\rho}(0)\) and \(|\mathbf{z}_{0}-\mathbf{v}_{N}|_{\infty}\leq sN^{2/3}\), there exists a positive constant \(q_{0}\) sufficiently large such that the \(-\boldsymbol{\xi}[\lambda]\)-directed ray starting at \(\mathbf{z}_{0}\) (the dotted line) will cross the horizontal line \(y=-1\) on the right of the vertical line \(x=sN^{2/3}\), as shown in Figure B.1.
Once \(q_{0}\) is fixed, we may lower the value of \(a_{0}\) further if necessary, so the parameters \(\lambda\) and \(\mu-\lambda\) are both contained inside \([\frac{\epsilon}{2},\mu-\frac{\epsilon}{2}]\). We will place \(\mathrm{Ga}^{-1}(\mu-\lambda)\) and \(\mathrm{Ga}^{-1}(\lambda)\) on the \(\mathbf{e}_{1}\)- and \(\mathbf{e}_{2}\)-boundaries based at the base \((-1,-1)\).
By Theorem 3.25, we have
\[\mathbb{P}\Big{(}Q^{\lambda}_{-1,\mathbf{z}_{0}}\{\tau\leq-1\}\geq 1/10\Big{)} \leq e^{-Cs^{3}},\] (B.1)
and let us define the complement of (B.1) as
\[A=\Big{\{}Q^{\lambda}_{-1,\mathbf{z}_{0}}\{\tau\geq 1\}\geq 9/10\Big{\}}.\]
By Proposition C.2, it holds that for \(i=1,\ldots,k\),
\[Q^{\lambda}_{-1,\mathbf{z}_{i}}\{\tau\geq 1\}\geq Q^{\lambda}_{-1,\mathbf{z}_{ 0}}\{\tau\geq 1\}.\]
Let \(Z^{\lambda,\mathrm{south}}_{(0,-1),\star}\) be the partition function that uses the same weights as \(Z^{\lambda}_{-1,\star}\), except that \(Z^{\lambda,\mathrm{south}}_{(0,-1),\star}\) does not see or use any of the weights on the vertical boundary along \(x=-1\). Then, we can upper bound \(\log Z_{0,\mathbf{z}_{i}}-\log Z_{0,\mathbf{z}_{i-1}}\) as follows,
\[e^{\log Z_{0,\mathbf{z}_{i}}-\log Z_{0,\mathbf{z}_{i-1}}} = \frac{Z_{0,\mathbf{z}_{i}}}{Z_{0,\mathbf{z}_{i-1}}}\] by Proposition C.1 \[\leq \frac{Z^{\lambda,\mathrm{south}}_{(0,-1),\mathbf{z}_{i}}}{Z^{ \lambda,\mathrm{south}}_{(0,-1),\mathbf{z}_{i-1}}}=\frac{Z^{\lambda,\mathrm{ south}}_{(0,-1),\mathbf{z}_{i}}}{Z^{\lambda,\mathrm{south}}_{(0,-1), \mathbf{z}_{i-1}}}\cdot\frac{I^{\lambda,\mathrm{south}}_{[(-1,-1),(0,-1)]}}{ I^{\lambda,\mathrm{south}}_{[(-1,-1),(0,-1)]}}\] \[= \frac{Z^{\lambda}_{-1,\mathbf{z}_{i}}(\tau\geq 1)}{Z^{ \lambda}_{-1,\mathbf{z}_{i-1}}(\tau\geq 1)}=\frac{Q^{\lambda}_{-1,\mathbf{z}_{i}}( \tau\geq 1)}{Q^{\lambda}_{-1,\mathbf{z}_{i-1}}(\tau\geq 1)}\cdot\frac{Z^{ \lambda}_{-1,\mathbf{z}_{i}}}{Z^{\lambda}_{-1,\mathbf{z}_{i-1}}}\] on the event \[A \leq\frac{10}{9}\frac{Z^{\lambda}_{-1,\mathbf{z}_{i}}}{Z^{ \lambda}_{-1,\mathbf{z}_{i-1}}}.\]
Figure B.1: The setup in the proof of Theorem 3.28.
Choosing \(X_{i}=\log\frac{Z_{-1,\mathbf{z}_{i}}^{\lambda}}{Z_{-1,\mathbf{z}_{i-1}}^{\lambda}}\) finishes the proof of the upper bound. Note that the distributional properties of \(X_{i}\) are guaranteed by Theorem 3.23.
For the lower bound, by increasing the value \(q_{0}\) if necessary, the \(-\boldsymbol{\xi}[\eta]\)-directed ray starting from \(\mathbf{z}_{k}\) will hit the vertical line \(x=-1\) above the horizontal line \(y=sN^{2/3}\). We place \(\mathrm{Ga}^{-1}(\mu-\eta)\) and \(\mathrm{Ga}^{-1}(\eta)\) on the \(\mathbf{e}_{1}\)- and \(\mathbf{e}_{2}\)-boundaries based at the base \((-1,-1)\). Then by Theorem 3.25, we have
\[\mathbb{P}\Big{(}Q_{-1,\mathbf{z}_{k}}^{\eta}\{\tau\geq 1\}\geq 1/10\Big{)}\leq e^{-Cs^{3}}.\]
Let us define the complement of the event above as
\[B=\Big{\{}Q_{-1,\mathbf{z}_{k}}^{\eta}\{\tau\leq-1\}\geq 9/10\Big{\}}.\]
By Proposition C.2, it holds that for \(i=0,\ldots,k-1\),
\[Q_{-1,\mathbf{z}_{i}}^{\eta}\{\tau\leq-1\}\geq Q_{-1,\mathbf{z}_{k}}^{\eta}\{ \tau\leq-1\}.\]
Then, we lower bound \(\log Z_{0,\mathbf{z}_{i}}-\log Z_{0,\mathbf{z}_{i-1}}\) as follows,
\[e^{\log Z_{0,\mathbf{z}_{i}}-\log Z_{0,\mathbf{z}_{i-1}}} =\frac{Z_{0,\mathbf{z}_{i}}}{Z_{0,\mathbf{z}_{i-1}}}\] \[\text{by Proposition C.1} \geq\frac{Z_{(-1,0),\mathbf{z}_{i}}^{\eta,\text{west}}}{Z_{(-1,0),\mathbf{z}_{i-1}}^{\eta,\text{west}}}=\frac{Z_{(-1,0),\mathbf{z}_{i}}^{\eta,\text{west}}}{Z_{(-1,0),\mathbf{z}_{i-1}}^{\eta,\text{west}}}\cdot\frac{J_{[(-1,-1),(-1,0)]}^{\eta,\text{west}}}{J_{[(-1,-1),(-1,0)]}^{\eta,\text{west}}}\] \[=\frac{Z_{-1,\mathbf{z}_{i}}^{\eta}(\tau\leq-1)}{Z_{-1,\mathbf{z}_{i-1}}^{\eta}(\tau\leq-1)}=\frac{Q_{-1,\mathbf{z}_{i}}^{\eta}(\tau\leq-1)}{Q_{-1,\mathbf{z}_{i-1}}^{\eta}(\tau\leq-1)}\cdot\frac{Z_{-1,\mathbf{z}_{i}}^{\eta}}{Z_{-1,\mathbf{z}_{i-1}}^{\eta}}\] \[\text{on the event }B \geq\frac{9}{10}\frac{Z_{-1,\mathbf{z}_{i}}^{\eta}}{Z_{-1,\mathbf{z}_{i-1}}^{\eta}}.\]
Choosing \(Y_{i}=\log\frac{Z_{-1,\mathbf{z}_{i}}^{\eta}}{Z_{-1,\mathbf{z}_{i-1}}^{\eta}}\) will give us the desired lower bound.
## Appendix C Monotonicity for the polymer model
The following two propositions hold for arbitrary positive weights on the lattice, and there is no probability involved. The first proposition is Lemma A.2 from [15], and the second proposition is Lemma A.5 from [49].
**Proposition C.1**.: _Let \(\mathbf{x},\mathbf{y},\mathbf{z}\in\mathbb{Z}^{2}\) be such that \(\mathbf{x}\cdot\mathbf{e}_{1}\leq\mathbf{y}\cdot\mathbf{e}_{1}\), \(\mathbf{x}\cdot\mathbf{e}_{2}\geq\mathbf{y}\cdot\mathbf{e}_{2}\), and coordinatewise \(\mathbf{x},\mathbf{y}\leq\mathbf{z}\). Then_
\[\frac{Z_{\mathbf{x},\mathbf{z}}}{Z_{\mathbf{x},\mathbf{z}-\mathbf{e}_{1}}}\leq \frac{Z_{\mathbf{y},\mathbf{z}}}{Z_{\mathbf{y},\mathbf{z}-\mathbf{e}_{1}}} \qquad\text{and}\qquad\frac{Z_{\mathbf{x},\mathbf{z}}}{Z_{\mathbf{x},\mathbf{z} -\mathbf{e}_{2}}}\geq\frac{Z_{\mathbf{y},\mathbf{z}}}{Z_{\mathbf{y},\mathbf{z}- \mathbf{e}_{2}}}.\]
**Proposition C.2**.: _For any \(k,l,m\in\mathbb{Z}_{\geq 0}\) and \(\mathbf{z}\in\mathbb{Z}_{\geq 0}^{2}\),_
\[Q_{0,\mathbf{z}}\{\tau\geq k\}\leq Q_{0,\mathbf{z}+l\mathbf{e}_{1}-m\mathbf{ e}_{2}}\{\tau\geq k\}.\]
## Appendix D Sub-exponential random variables
First, we state a general result for the running maximum of sub-exponential random variables. Recall that a random variable \(X_{1}\) is sub-exponential if there exist two positive constants \(K_{0}\) and \(\lambda_{0}\) such that
\[\log(\mathbb{E}[e^{\lambda(X_{1}-\mathbb{E}[X_{1}])}])\leq K_{0}\lambda^{2}\quad \text{ for }\lambda\in[0,\lambda_{0}].\] (D.1)
Let \(\{X_{i}\}\) be a sequence of i.i.d. sub-exponential random variables with the parameters \(K_{0}\) and \(\lambda_{0}\). Define \(S_{0}=0\) and \(S_{k}=X_{1}+\cdots+X_{k}-k\mathbb{E}[X_{1}]\) for \(k\geq 1\). The following theorem captures the right tail behavior of the running maximum.
**Theorem D.1**.: _Let the random walk \(S_{k}\) be defined as above. Then,_
\[\mathbb{P}\Big{(}\max_{0\leq k\leq n}S_{k}\geq t\sqrt{n}\Big{)}\leq\begin{cases} e^{-t^{2}/(4K_{0})}&\quad\text{if }t\leq 2\lambda_{0}K_{0}\sqrt{n}\\ e^{-\frac{1}{2}\lambda_{0}t\sqrt{n}}&\quad\text{if }t\geq 2\lambda_{0}K_{0} \sqrt{n}\end{cases}.\]
Proof.: Since \(S_{k}\) is a mean-zero random walk, \(e^{\lambda S_{k}}\) is a non-negative sub-martingale for \(\lambda\geq 0\). By Doob's maximal inequality,
\[\mathbb{P}\Big{(}\max_{0\leq k\leq n}S_{k}\geq t\sqrt{n}\Big{)}=\mathbb{P} \Big{(}\max_{0\leq k\leq n}e^{\lambda S_{k}}\geq e^{\lambda t\sqrt{n}}\Big{)} \leq\frac{\mathbb{E}[e^{\lambda S_{n}}]}{e^{\lambda t\sqrt{n}}}=\frac{\mathbb{ E}[e^{\lambda X_{1}}]^{n}}{e^{\lambda t\sqrt{n}}}.\]
Taking the logarithm of the expression above, and using our assumption (D.1) for \(X_{1}\), we obtain
\[\log\Big{(}\frac{\mathbb{E}[e^{\lambda X_{1}}]^{n}}{e^{\lambda t\sqrt{n}}} \Big{)}=n\log(\mathbb{E}[e^{\lambda X_{1}}])-\lambda t\sqrt{n}\leq nK_{0} \lambda^{2}-\lambda t\sqrt{n}\quad\text{ for }\lambda\in[0,\lambda_{0}].\]
Let us denote the quadratic function in \(\lambda\in[0,\lambda_{0}]\) by
\[h(\lambda)=nK_{0}\lambda^{2}-\lambda t\sqrt{n}.\]
Note that the minimizer of \(h\) over \([0,\lambda_{0}]\) is given by \(\lambda_{t}^{\min}=\min\{\lambda_{0},\frac{t}{2K_{0}\sqrt{n}}\}\), and
\[h(\lambda_{t}^{\min})=\begin{cases}-\frac{t^{2}}{4K_{0}}&\quad\text{if }t\leq 2 \lambda_{0}K_{0}\sqrt{n}\\ nK_{0}\lambda_{0}^{2}-\lambda_{0}t\sqrt{n}\leq-\frac{1}{2}\lambda_{0}t\sqrt{n}& \quad\text{if }t\geq 2\lambda_{0}K_{0}\sqrt{n}\end{cases}.\]
With this, we have finished the proof of the theorem.
Our next proposition shows that both \(\log(\text{Ga})\) and \(\log(\text{Ga}^{-1})=-\log(\text{Ga})\) are sub-exponential random variables.
**Proposition D.2**.: _Fix \(\epsilon\in(0,\mu/2)\). There exist positive constants \(K_{0},\lambda_{0}\) depending on \(\epsilon\) such that for each \(\alpha\in[\epsilon,\mu-\epsilon]\) and \(X\sim\text{Ga}(\alpha)\), we have_
\[\log(\mathbb{E}[e^{\pm\lambda(\log(X)-\Psi_{1}(\alpha))}])\leq K_{0}\lambda^{ 2}\qquad\text{ for }\lambda\in[0,\lambda_{0}].\]
Proof.: First, note that \(\mathbb{E}[X^{\pm\lambda}]=\frac{\Gamma(\alpha\pm\lambda)}{\Gamma(\alpha)}\), provided that \(\alpha\pm\lambda>0\). Then, the proof essentially follows from Taylor's theorem,
\[\log(\mathbb{E}[e^{\pm\lambda(\log(X)-\Psi_{1}(\alpha))}]) =\log(\mathbb{E}[X^{\pm\lambda}]e^{\mp\lambda\Psi_{1}(\alpha)})\] \[=\log(\Gamma(\alpha\pm\lambda))-[\log(\Gamma(\alpha))\pm\lambda \Psi_{1}(\alpha)]\]
\[(\text{recall }\log(\Gamma(\alpha))^{\prime}=\Psi_{1}(\alpha)) =\frac{\Psi_{1}^{\prime}(\alpha)}{2}\lambda^{2}+o(\lambda^{2})\] \[\leq K_{0}\lambda^{2}\]
provided \(\lambda_{0}\) is fixed sufficiently small. The constant \(K_{0}\) can be chosen uniformly for all \(\alpha\) in the compact interval \([\epsilon,\mu-\epsilon]\) because \(\Psi_{1}\) is a smooth function on \((0,+\infty)\).
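A quick numerical sanity check of this bound is sketched below (it is not part of the proof above; the concrete choices \(\mu=1\), \(\epsilon=0.2\), \(\lambda_{0}=\epsilon/2\) and the value of \(K_{0}\) are our own assumptions for illustration). It evaluates the exact log-gamma cumulant \(\log\mathbb{E}[e^{\pm\lambda(\log X-\mathbb{E}\log X)}]=\log\Gamma(\alpha\pm\lambda)-\log\Gamma(\alpha)\mp\lambda\,\psi(\alpha)\), where \(\psi\) denotes the digamma function.

```python
# Numerical sanity check of Proposition D.2 (illustration only; mu, epsilon, lambda_0
# and K_0 below are our own choices, not constants from the paper).
import numpy as np
from scipy.special import gammaln, digamma, polygamma

mu, eps = 1.0, 0.2
lam0 = eps / 2                          # keeps alpha - lambda >= eps/2 > 0
K0 = polygamma(1, eps / 2) / 2          # trigamma is decreasing, so this dominates the remainder

def cumulant(alpha, lam, sign):
    """log E[exp(sign*lam*(log X - E log X))] for X ~ Gamma(alpha) with unit rate."""
    return gammaln(alpha + sign * lam) - gammaln(alpha) - sign * lam * digamma(alpha)

for alpha in np.linspace(eps, mu - eps, 9):
    for lam in np.linspace(0.0, lam0, 25)[1:]:
        for sign in (+1, -1):
            assert cumulant(alpha, lam, sign) <= K0 * lam ** 2 + 1e-12
print("log(Ga) and log(Ga^{-1}) pass the sub-exponential bound on the tested grid")
```

Of course, such a grid check only confirms the inequality for one admissible choice of \((K_{0},\lambda_{0})\); the Taylor-expansion argument above gives it for all \(\alpha\in[\epsilon,\mu-\epsilon]\).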
Proof of (4.3).: First, let us center the walk \(\widetilde{S}_{k}\) by subtracting its expectation, and let us denote the centered walk by \(\overline{S}_{k}\). The expectation of a step of \(\widetilde{S}_{k}\) is
\[-\Psi_{0}(\eta)+\Psi_{0}(\mu-\eta)\] \[=-\Psi_{0}(\mu/2-q_{0}t^{2/3}N^{-1/3})+\Psi_{0}(\mu/2+q_{0}t^{2/ 3}N^{-1/3})\] \[\leq c_{1}t^{2/3}N^{-1/3}\]
provided \(N_{0}\) is sufficiently large and \(c_{0}\) is sufficiently small. Then,
\[\mathbb{E}\big{[}\widetilde{S}_{k}\big{]}\leq ac_{1}t^{2/3}N^{-1/3}\leq c_{1} t\sqrt{a}.\]
By fixing \(C^{\prime}=2c_{1}\) in (4.3), we see that
\[\text{the left-hand side of (4.3)}\leq\mathbb{P}\Big{(}\max_{0\leq k\leq a}\overline{S}_{k}\geq c_{1}t\sqrt{a}\Big{)}.\]
The sum of two independent sub-exponential random variables is still sub-exponential; together with Proposition D.2, this shows that the steps of \(\overline{S}_{k}\) are sub-exponential. Now, we may apply the right-tail bound on the running maximum from Theorem D.1 to the term
\[\mathbb{P}\Big{(}\max_{0\leq k\leq a}\overline{S}_{k}\geq c_{1}t\sqrt{a} \Big{)},\]
and this finishes the proof.
## Appendix E Random walk estimate
First, let us recall two results from [47]. Let \(\{X_{i}\}_{i\in\mathbb{Z}_{>0}}\) be an i.i.d. sequence of random variables with
\[\mathbb{E}[X_{i}]=\mu,\quad\mathbb{V}\text{ar}[X_{i}]=1\quad\text{and}\quad c_{3}=\mathbb{E}|X_{i}-\mu|^{3}<\infty.\]
Define \(S_{k}=\sum_{i=1}^{k}X_{i}\) with \(S_{0}=0\).
**Lemma E.1** ([47] Lemma 5).: _There exists an absolute constant \(C\) such that for any \(l>0\)_
\[\mathbb{P}\Big{(}\max_{1\leq k\leq N}S_{k}<l\Big{)}-\mathbb{P}\Big{(}\max_{1 \leq k\leq N}S_{k}<0\Big{)}\leq C(c_{3}l+c_{3}^{2})(|\mu|+1/\sqrt{N}).\] (E.1)
**Lemma E.2** ([47] Lemma 7).: _There exists an absolute constant \(C\) such that_
\[\mathbb{P}\Big{(}\max_{1\leq k\leq N}S_{k}<0\Big{)}\leq Cc_{3}^{2}(|\mu|+1/ \sqrt{N}).\] (E.2)
Combining them, we obtain the following proposition.
**Proposition E.3**.: _There exists an absolute constant \(C\) such that for any \(l\geq 0\),_
\[\mathbb{P}\Big{(}\max_{1\leq k\leq N}S_{k}<l\Big{)}\leq C(c_{3}l+c_{3}^{2})(| \mu|+1/\sqrt{N}).\] (E.3) |
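As an illustration of how Proposition E.3 is typically used, the following small Monte Carlo experiment (our own sketch; the Gaussian steps, the drift of order \(1/\sqrt{N}\), and the sample sizes are arbitrary choices, and the absolute constant \(C\) is not tracked) compares the empirical probability \(\mathbb{P}(\max_{1\leq k\leq N}S_{k}<l)\) with the quantity \((c_{3}l+c_{3}^{2})(|\mu|+1/\sqrt{N})\) appearing on the right-hand side of (E.3).

```python
# Monte Carlo illustration of Proposition E.3 (our own sketch; parameters are arbitrary
# and the absolute constant C is not tracked, so this is only a qualitative comparison).
import numpy as np

rng = np.random.default_rng(0)
N, trials = 2500, 2000
mu = -0.5 / np.sqrt(N)                      # small drift, as in the intended application
c3 = float(np.mean(np.abs(rng.normal(size=200_000)) ** 3))   # E|X - mu|^3 for unit-variance steps

steps = rng.normal(loc=mu, scale=1.0, size=(trials, N))
walk_max = np.cumsum(steps, axis=1).max(axis=1)               # max_{1<=k<=N} S_k per trial

for l in (0.0, 1.0, 2.0, 4.0):
    lhs = float(np.mean(walk_max < l))
    rhs = (c3 * l + c3 ** 2) * (abs(mu) + 1 / np.sqrt(N))
    print(f"l = {l:3.1f}:  P(max S_k < l) ~ {lhs:.4f},  (c3*l + c3^2)(|mu| + 1/sqrt(N)) = {rhs:.4f}")
```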
2310.15056 | Schrödinger operator with a complex steplike potential | The purpose of this article is to study pseudospectral properties of the
one-dimensional Schr\"{o}dinger operator perturbed by a complex steplike
potential. By constructing the resolvent kernel, we show that the
pseudospectrum of this operator is trivial if and only if the imaginary part of
the potential is constant. As a by-product, a new method to obtain a sharp
resolvent estimate is developed, answering a concern of Henry and
Krej\v{c}i\v{r}\'{i}k, and a way to construct an optimal pseudomode is
discovered, answering a concern of Krej\v{c}i\v{r}\'{i}k and Siegl. The
spectrum and the norm of the resolvent of the complex point interaction of the
operator is also studied carefully in this article. | Tho Nguyen Duc | 2023-10-23T15:57:54Z | http://arxiv.org/abs/2310.15056v1 | # Schrodinger operator with a complex steplike potential
###### Abstract.
The purpose of this article is to study pseudospectral properties of the one-dimensional Schrodinger operator perturbed by a complex steplike potential. By constructing the resolvent kernel, we show that the pseudospectrum of this operator is trivial if and only if the imaginary part of the potential is constant. As a by-product, a new method to obtain a sharp resolvent estimate is developed, answering a concern of Henry and Krejcirik, and a way to construct an optimal pseudomode is discovered, answering a concern of Krejcirik and Siegl. The spectrum and the norm of the resolvent of the complex point interaction of the operator is also studied carefully in this article.
## 1. Introduction
### Context and Motivations
Since the birth of quantum physics at the turn of the 20th century, the spectral theory of self-adjoint operators has attracted considerable attention and experienced many fertile developments. Its _non-self-adjoint_ counterpart, however, has been noticed and investigated only recently, essentially within the last two decades. One of the challenges that we face when working with non-self-adjoint operators is the absence of a spectral theorem [18]. Clear evidence for this is the failure of the well-known formula for the norm of the resolvent
\[\|(\mathscr{L}-z)^{-1}\|=\frac{1}{\operatorname{dist}(z,\sigma(\mathscr{L}))}, \tag{1.1}\]
which is valid for self-adjoint \(\mathscr{L}\) (or, more generally, for unbounded normal operators; the reader can find a simple proof in subsection 4.2). The failure of this formula is illustrated by the fact that there exist many non-self-adjoint operators whose resolvent norms blow up even when the spectral parameter travels far away from the spectrum. Therefore, the notion of _pseudospectrum_ was introduced to address this pathological aspect of non-self-adjoint operators [24, 9]. More precisely, given \(\varepsilon>0\), the \(\varepsilon\)-_pseudospectrum_ of a linear operator \(\mathscr{L}\) is defined by
\[\sigma_{\varepsilon}(\mathscr{L})=\sigma(\mathscr{L})\cup\{z\in\mathbb{C}:\|( \mathscr{L}-z)^{-1}\|>\varepsilon^{-1}\}. \tag{1.2}\]
The usefulness of the pseudospectrum is that it answers the question "How does the spectrum respond to a slight change of the initial operator?" by virtue of the formula
\[\sigma_{\varepsilon}(\mathscr{L})=\bigcup_{V\in L(H),\,\|V\|<\varepsilon} \sigma(\mathscr{L}+V).\]
In particular, when the pseudospectrum contains regions very far from the spectrum, the spectrum is unstable under small perturbations, which indicates that it would be difficult to compute the spectrum numerically. The reader may find a discussion of this topic in the introduction of [22].
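For readers who prefer a concrete finite-dimensional picture (this example is ours and is not taken from the references), the definition (1.2) can be explored numerically through the identity \(\|(A-z)^{-1}\|=1/s_{\min}(A-z)\), where \(s_{\min}\) denotes the smallest singular value; for a Jordan block the resolvent norm is enormous even at distance of order one from the unique eigenvalue, so the \(\varepsilon\)-pseudospectra are large discs rather than small neighbourhoods of the spectrum.

```python
# Finite-dimensional illustration of definition (1.2): z lies in the eps-pseudospectrum of a
# matrix A precisely when the smallest singular value of A - z is below eps, because
# ||(A - z)^{-1}|| = 1 / s_min(A - z).  Example: a 30x30 nilpotent Jordan block, sigma(A) = {0}.
import numpy as np

n = 30
A = np.diag(np.ones(n - 1), k=1)

def resolvent_norm(z):
    smin = np.linalg.svd(A - z * np.eye(n), compute_uv=False)[-1]
    return np.inf if smin == 0.0 else 1.0 / smin

for z in (0.5, 0.9, 1.1):
    print(f"z = {z}: dist(z, sigma(A)) = {abs(z):.2f}, ||(A - z)^(-1)|| = {resolvent_norm(z):.3e}")
# For |z| < 1 the norm grows roughly like |z|^(-n): the eps-pseudospectra are far larger
# than eps-neighbourhoods of the spectrum, exactly the pathology described above.
```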
According to the definition (1.2), the more information we have on the level sets of the resolvent norm, the better described the pseudospectrum is. However, in almost all cases it is not easy to calculate the resolvent of a given operator, and even when the resolvent is known, it is not easy to calculate its norm. Therefore, in practice, we often use an equivalent definition of the pseudospectrum, namely
\[\sigma_{\varepsilon}(\mathscr{L})=\sigma(\mathscr{L})\cup\left\{z\in\mathbb{C }:\exists\Psi\in\operatorname{Dom}(\mathscr{L}),\,\|(\mathscr{L}-z)\Psi\|< \varepsilon\|\Psi\|\right\}. \tag{1.3}\]
The number \(z\) and the vector \(\Psi\) in the definition (1.3) are respectively called the _pseudoeigenvalue_ and _pseudomode_ (also known as the _pseudoeigenfunction_, _pseudoeigenvector_, or _quasimode_) of \(\mathscr{L}\). We list here some references [15, 1, 2] using the definition (1.2) and some references [8, 3, 10, 20, 15, 19, 17, 21] using the definition (1.3) to investigate the pseudospectra of differential operators.
In this article, we would like to use both of the aforementioned definitions to study the pseudospectrum of the free Schrodinger operator perturbed by a complex steplike potential (see Figure 1),
\[\mathscr{L}=-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+V(x),\qquad V(x)\coloneqq \begin{cases}V_{+}&\text{for }x\geq 0,\\ V_{-}&\text{for }x<0,\end{cases}\qquad\text{with }V_{+},V_{-}\in\mathbb{C},\]
and its complex point interaction
\[\mathscr{L}_{\alpha}=-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+V(x)+\alpha \delta_{0},\qquad\alpha\in\mathbb{C}, \tag{1.4}\]
where \(\delta_{0}\) is the Dirac delta generalized function.
There are three main results in our paper. The first result concerns the spectra of the two operators \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\). Theorem 2.4 and Theorem 2.12 will provide explicit answers to the elementary questions:
1. Depending on \(V\), what does the spectrum of \(\mathscr{L}\) look like?
2. Depending on \(\alpha\in\mathbb{C}\), how does the spectrum of \(\mathscr{L}\) change under the effect of the complex point interaction?
For the first question, one can predict that the spectrum of \(\mathscr{L}\) is obtained by shifting the ray \([0,+\infty)\) (the spectrum of the free Schrodinger operator) by the two vectors \(V_{+}\) and \(V_{-}\). For the second question, we will see that an eigenvalue emerges when \(\alpha\) belongs to a geometric region \(\Omega\subset\mathbb{C}\) depending on the difference \(V_{+}-V_{-}\) (see Figure 7).
Our second result is related to the resolvents of \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\). As above, Theorem 2.7 and Theorem 2.14 will address two questions:
1. What is the asymptotic behavior of the norm of the resolvent of \(\mathscr{L}\) inside the numerical range?
2. How does the complex point interaction affect this behavior of the resolvent of \(\mathscr{L}\)?
To answer the first question, we develop a new method to obtain an explicit formula for the divergence of the resolvent norm,
\[\left\|(\mathscr{L}-z)^{-1}\right\|=\frac{2|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V _{-}|}{|V_{+}-V_{-}|}\frac{\mathrm{Re}\,z}{|\mathrm{Im}\,V_{+}-\mathrm{Im}\,z |[\mathrm{Im}\,V_{-}-\mathrm{Im}\,z]}\left(1+\mathcal{O}\left(\frac{1}{| \mathrm{Re}\,z|}\right)\right),\]
Figure 1. The complex steplike potential \(V(x)\).
as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(\operatorname{Im}z\) between \(\operatorname{Im}V_{+}\) and \(\operatorname{Im}V_{-}\). Our finding addresses a concern of Henry and Krejcirik [15], who wondered whether we could find an optimal constant and a sharp dependence on the distance between the spectral parameter and the spectrum in their specific case \(V(x)=i\operatorname{sgn}(x)\). What is more, by applying this method for the operator \(\mathscr{L}_{\alpha}\), with \(\alpha\in\mathbb{C}\setminus\{0\}\), we can answer the second question, that is
\[\left\|(\mathscr{L}_{\alpha}-z)^{-1}\right\|=\frac{|\operatorname{Im}V_{+}- \operatorname{Im}V_{-}|\sqrt{\operatorname{Re}z}}{|\alpha||\operatorname{Im} V_{+}-\operatorname{Im}z||\operatorname{Im}V_{-}-\operatorname{Im}z|}\left(1+ \mathcal{O}\left(\frac{1}{|\operatorname{Re}z|^{1/2}}\right)\right).\]
as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(\operatorname{Im}z\) between \(\operatorname{Im}V_{+}\) and \(\operatorname{Im}V_{-}\). We see that not only does the constant change (it now depends on both \(\operatorname{Im}V_{\pm}\) and \(\alpha\)), but the divergence rate of the resolvent norm also decreases from \(\operatorname{Re}z\) to \(\sqrt{\operatorname{Re}z}\).
Our final result is related to the pseudomode construction for the operators \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\). It is interesting because it gives hope that the pseudospectrum can be described accurately even when no formula for the resolvent of the operator is available. More precisely, in Theorem 2.9 we derive an explicit formula for a pseudomode \(\Psi_{z}\in\operatorname{Dom}(\mathscr{L})\) such that
\[\frac{\|(\mathscr{L}-z)\Psi_{z}\|}{\|\Psi_{z}\|}=\frac{|V_{+}-V_{-}|}{2| \operatorname{Im}V_{+}-\operatorname{Im}V_{-}|}\frac{|\operatorname{Im}V_{+} -\operatorname{Im}z||\operatorname{Im}V_{-}-\operatorname{Im}z|}{\operatorname {Re}z}\left(1+\mathcal{O}\left(\frac{1}{|\operatorname{Re}z|}\right)\right),\]
as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(\operatorname{Im}z\) between \(\operatorname{Im}V_{+}\) and \(\operatorname{Im}V_{-}\). Our finding goes beyond the concern of Krejcirik and Siegl in [19], who tried to construct a pseudomode for the Schrodinger operator with \(V=i\operatorname{sgn}(x)\), _i.e._, \(V_{+}=i\) and \(V_{-}=-i\), but the best decay rate they could obtain is \(\mathcal{O}\left(\frac{1}{z^{1/2}}\right)\) as \(z\) goes to \(+\infty\) on the real line. Here, our method provides an optimal pseudomode which gives us the exact constant \(\frac{|V_{+}-V_{-}|}{2|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|}\), the precise rate \(\operatorname{Re}z\), and the correct distance to the spectrum \(|\operatorname{Im}V_{+}-\operatorname{Im}z||\operatorname{Im}V_{-}-\operatorname{Im}z|\). To confirm that our construction method is not restricted to \(\mathscr{L}\), we apply it to the model \(\mathscr{L}_{\alpha}\) (\(\alpha\neq 0\)); it is still applicable and produces an optimal pseudomode in Theorem 2.15.
Although the model with a steplike potential is simple, it has its own applications in scattering theory [13, 12] and in dispersive estimates [7]. Also because it is simple, it is often chosen as a pioneering model for understanding other models whose potentials behave asymptotically like a steplike function. We hope that our model may provide more information about the pseudospectra of Schrodinger operators with complex potentials, which has recently become a very active topic.
### General notations
Let us fix some notations employed throughout the paper.
1. We use the following conventions for number sets:
* As usual, \(\mathbb{R}\) stands for the real numbers and \(\mathbb{C}\) for the complex numbers; \(\mathbb{R}_{+}\coloneqq(0,+\infty)\) and \(\mathbb{R}_{-}\coloneqq(-\infty,0)\).
* For \(a,b\in\mathbb{R}\) with \(a\neq b\), we write \(|(a,b)|\) for the open interval whose endpoints are \(a\) and \(b\), _i.e._, \(|(a,b)|=(a,b)\) if \(a<b\) and \(|(a,b)|=(b,a)\) if \(a>b\). Similarly, we write \(|[a,b]|=[a,b]\) if \(a<b\) and \(|[a,b]|=[b,a]\) if \(a>b\).
* For the horizontal ray starting from a complex number \(C\) and running to infinity in \(\mathbb{C}\), we write \([C,+\infty)\), _i.e._, \([C,+\infty)\coloneqq C+[0,+\infty)\).
* When we write \(\langle X,Y\rangle_{\mathbb{R}^{2}}\) for \(X,Y\in\mathbb{C}\), we consider \(X,Y\) as vectors in \(\mathbb{R}^{2}\) and \(\langle X,Y\rangle_{\mathbb{R}^{2}}\) denotes the real inner product of these two vectors, that is, \(\langle X,Y\rangle_{\mathbb{R}^{2}}=X_{1}Y_{1}+X_{2}Y_{2}\) for \(X=X_{1}+iX_{2}\) and \(Y=Y_{1}+iY_{2}\).
* For two real-valued functions \(a\) and \(b\), we will occasionally write \(a\lesssim b\) (respectively, \(a\gtrsim b\)) instead of \(a\leq Cb\) (respectively, \(a\geq Cb\)) for some constant \(C>0\) whose value is unimportant.
2. For an indicator function (characteristic function) of a subset \(E\) in \(\mathbb{R}\), we denote by \(\mathbf{1}_{E}\), _i.e._, \(\mathbf{1}_{E}(x)\) has value \(1\) at points in \(E\) and \(0\) at points in \(\mathbb{R}\setminus E\).
3. The inner product on \(L^{2}(\mathbb{R})\) is denoted by \(\langle\cdot,\cdot\rangle\). We use the symbol \(\|\cdot\|\) for \(L^{2}\)-norm of complex-valued functions defined on \(\mathbb{R}\) and when we want to consider this norm restricted on \(\mathbb{R}_{+}\) (or, on \(\mathbb{R}_{-}\)), we will use a clear symbol \(\|\cdot\|_{L^{2}(\mathbb{R}_{+})}\) (respectively, \(\|\cdot\|_{L^{2}(\mathbb{R}_{-})}\)). The norm on Sobolev space \(H^{1}(\mathbb{R})\) is denoted by \(\|\cdot\|_{H^{1}}\).
4. For a linear operator \(S\), as usual, we employ the notations \(\operatorname{Dom}(S)\), \(\operatorname{Ran}(S)\), \(\operatorname{Ker}(S)\), \(\rho(S)\) and \(\sigma(S)\) for, respectively, the domain, the range, the kernel, the resolvent set and the spectrum of \(S\). When \(S\) is a densely defined operator, let us recall here some classes of unbounded operators:
* \(S\) is called normal if \(\operatorname{Dom}(S)=\operatorname{Dom}(S^{*})\) and \(\|Su\|=\|S^{*}u\|\) for every \(u\in\operatorname{Dom}(S)\).
* \(S\) is called self-adjoint if \(S=S^{*}\).
* \(S\) is called \(\mathcal{T}\)-self-adjoint if \(S^{*}=\mathcal{T}S\mathcal{T}\), where \(\mathcal{T}\) is the antilinear operator of complex conjugation defined by \(\mathcal{T}\psi(x)=\overline{\psi(x)}\).
* \(S\) is called \(\mathcal{P}\)-self-adjoint if \(S^{*}=\mathcal{P}S\mathcal{P}\) with the parity operator \(\mathcal{P}\) defined by \((\mathcal{P}\psi)(x)=\psi(-x)\).
* \(S\) is called \(\mathcal{P}\mathcal{T}\)-symmetric if \([S,\mathcal{P}\mathcal{T}]=0\), _i.e._, \(\mathcal{P}\mathcal{T}S\subset S\mathcal{P}\mathcal{T}\); this means that whenever \(f\in\operatorname{Dom}(S)\), \(\mathcal{P}\mathcal{T}f\) also belongs to \(\operatorname{Dom}(S)\) and \(\mathcal{P}\mathcal{T}Sf=S\mathcal{P}\mathcal{T}f\).
The properties of these kinds of operators can be found in [18, Subsection 5.2.5]; in particular, a recent and interesting study of \(\mathcal{T}\)-self-adjointness under its modern name _Complex-self-adjointness_ can be found in [5]. We often decompose the spectrum of a closed operator as follows \[\sigma(S)=\sigma_{\mathrm{p}}(S)\cup\sigma_{\mathrm{r}}(S)\cup\sigma_{\mathrm{c}}(S),\] in which \[\sigma_{\mathrm{p}}(S) \coloneqq\{z\in\mathbb{C}:S-z\text{ is not injective}\},\] \[\sigma_{\mathrm{r}}(S) \coloneqq\{z\in\mathbb{C}:S-z\text{ is injective and }\overline{\operatorname{Ran}(S-z)}\subsetneq\mathcal{H}\},\] \[\sigma_{\mathrm{c}}(S) \coloneqq\{z\in\mathbb{C}:S-z\text{ is injective and }\overline{\operatorname{Ran}(S-z)}=\mathcal{H}\text{ and }\operatorname{Ran}(S-z)\subsetneq\mathcal{H}\}.\] The set \(\sigma_{\mathrm{p}}(S)\) (respectively, \(\sigma_{\mathrm{r}}(S)\) and \(\sigma_{\mathrm{c}}(S)\)) is called the _point spectrum_ (respectively, the _residual spectrum_ and the _continuous spectrum_) of \(S\). For the _essential spectrum_, we use the definitions of the various types of essential spectra given in [11, Sec. IX] or [18, Sec. 5.4]: \(\sigma_{\mathrm{ek}}(S)\) for \(\mathrm{k}\in\{1,2,3,4,5\}\). The discrete spectrum of \(S\), which is the set of isolated eigenvalues \(z\) of \(S\) which have finite algebraic multiplicity and such that \(\operatorname{Ran}(S-z)\) is closed in \(\mathcal{H}\), is labeled by \(\sigma_{\mathrm{dis}}(S)\).
5. By abusing of notation, we shall denote integral operators and their kernels by the same symbol. For example, we will write the integral operator \(\mathcal{R}\) as \(\mathcal{R}f(x)=\int_{\mathbb{R}}\mathcal{R}(x,y)f(y)\,\mathrm{d}y\).
### Structure of the paper
The organization of this paper is as follows. Section 2 is devoted to all the statements and main results: the definition of the operator \(\mathscr{L}\), its resolvent and spectrum, its pseudospectrum and optimal pseudomode, and finally the same achievements for \(\mathscr{L}_{\alpha}\). As usual, the remaining sections are used to provide proofs or to describe the methods that we have employed; more precisely,
* In Section 3, the kernel of the resolvent of \(\mathscr{L}\) is established and the spectrum of \(\mathscr{L}\) is characterized.
* In Section 4, the pseudospectrum of \(\mathscr{L}\) is studied by estimating the resolvent norm inside the numerical range.
* Section 5 is used to study the spectral properties of the operator \(\mathscr{L}_{\alpha}\). The stability of the essential spectra under the Dirac interaction is proved. Then, the existence of discrete eigenvalues depending on \(\alpha\) is discussed. Once the spectrum is understood,
the resolvent norm behavior of \(\mathscr{L}_{\alpha}\) is determined as the spectral parameter goes to infinity in the region between two essential spectrum lines. Finally, the optimal pseudomode is also constructed for this delta interaction model.
* Two appendices, A and B, are employed to define the operators \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\) and to establish their basic properties.
### Acknowledgement
I would like to thank Professor David Krejcirik for giving me many precious opportunities to continue my research career. To me, he is not only a great mathematician, but he is also a great leader who takes care of his people very kindly. I am sincerely thankful to Professor Petr Siegl for his comments when I visited him in Graz. This project was supported by the EXPRO grant number 20-17749X of the Czech Science Foundation (GACR).
## 2. Statements and main Results
### The operator, the resolvent and the spectrum
Let us begin by defining our operator via a sesquilinear form whose formula and domain are given by
\[Q(u,v) \coloneqq\int_{\mathbb{R}}u^{\prime}(x)\overline{v^{\prime}(x)} \,\mathrm{d}x+V_{+}\int_{0}^{+\infty}u(x)\overline{v(x)}\,\mathrm{d}x+V_{-} \int_{-\infty}^{0}u(x)\overline{v(x)}\,\mathrm{d}x,\] \[\mathrm{Dom}(Q) \coloneqq H^{1}(\mathbb{R}).\]
The description and some useful properties of the operator are presented in our first proposition. The proof of this proposition is rather elementary; however, we would like to write it down for the convenience of the readers, especially for young researchers like the author. The proof can be found in Appendix A.
**Proposition 2.1**.: _There exists a closed densely defined operator \(\mathscr{L}\) whose domain is given by_
\[\mathrm{Dom}\left(\mathscr{L}\right)=\left\{\begin{aligned} & u\in H^{1}(\mathbb{R}): \text{there exists }f\in L^{2}(\mathbb{R})\text{ such that }\\ & Q(u,v)=\left\langle f,v\right\rangle\text{ for all }v\in H^{1}( \mathbb{R})\end{aligned}\right\},\]
_and_
\[Q(u,v)=\left\langle\mathscr{L}u,v\right\rangle,\qquad\forall u\in\mathrm{Dom} (\mathscr{L}),\qquad\forall v\in H^{1}(\mathbb{R}). \tag{2.1}\]
_Then, the following holds._
**(1)**: _The domain and the action of_ \(\mathscr{L}\) _are given by_
\[\mathrm{Dom}\left(\mathscr{L}\right) =H^{2}(\mathbb{R}),\] \[\mathscr{L}u =-u^{\prime\prime}+V(x)u,\qquad\forall u\in H^{2}(\mathbb{R}),\]
_and its resolvent set_ \(\rho(\mathscr{L})\) _is nonempty._
**(2)**: _The numerical range of_ \(\mathscr{L}\) _is given by_ \((\)_see Figure_ 2\()\)__
\[\mathrm{Num}(\mathscr{L})=\left\{(0,+\infty)+sV_{+}+(1-s)V_{-}:\,s\in[0,1] \right\}, \tag{2.2}\]
_and as a consequence,_ \(\mathscr{L}\) _is a_ \(m-\)_sectorial operator._
**(3)**: _The adjoint of_ \(\mathscr{L}\) _is given by_
\[\mathscr{L}^{*}=-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+\overline{V(x)}\,, \qquad\mathrm{Dom}(\mathscr{L}^{*})=H^{2}(\mathbb{R}), \tag{2.3}\]
_and as a consequence,_
**(a)**: \(\mathscr{L}\) _is normal if and only if_ \(\mathrm{Im}\,V_{+}=\mathrm{Im}\,V_{-}\)_;_
**(b)**: \(\mathscr{L}\) _is self-adjoint if and only if_ \(\mathrm{Im}\,V_{+}=\mathrm{Im}\,V_{-}=0\)_;_
**(c)**: \(\mathscr{L}\) _is always_ \(\mathscr{T}\)_-self-adjoint;_
**(d)**: \(\mathscr{L}\) _is_ \(\mathscr{P}\)_-self-adjoint if and only if_ \(\mathrm{Re}\,V_{+}=\mathrm{Re}\,V_{-}\) _and_ \(\mathrm{Im}\,V_{+}=-\mathrm{Im}\,V_{-}\)_;_
**(e)**: \(\mathscr{L}\) _is_ \(\mathscr{PT}\)_-symmetric if and only if_ \(\mathrm{Re}\,V_{+}=\mathrm{Re}\,V_{-}\) _and_ \(\mathrm{Im}\,V_{+}=-\mathrm{Im}\,V_{-}\)
Our next proposition will describe explicitly the resolvent of the operator \(\mathscr{L}\). It shows that the resolvent can be written in the integral form.
**Proposition 2.2**.: _Let \(V_{\pm}\in\mathbb{C}\) and \(\mathscr{L}\) be the operator defined as in Proposition 2.1. For all \(z\in\mathbb{C}\setminus([V_{+},+\infty)\cup[V_{-},+\infty))\) and for every \(f\in L^{2}(\mathbb{R})\), we have_
\[\left[\left(\mathscr{L}-z\right)^{-1}f\right](x)=\int_{\mathbb{R}}\mathcal{R} _{z}(x,y)f(y)\,\mathrm{d}y, \tag{2.4}\]
_where \(\mathcal{R}_{z}(x,y)\) is defined by_
\[\mathcal{R}_{z}(x,y)=\begin{cases}\dfrac{1}{2k_{+}(z)}e^{-k_{+}(z)|x-y|}+ \dfrac{k_{+}(z)-k_{-}(z)}{2k_{+}(z)\left(k_{+}(z)+k_{-}(z)\right)}e^{-k_{+}(z) (x+y)}&\text{for }\{x>0,y>0\};\\ \dfrac{1}{2k_{-}(z)}e^{-k_{-}(z)|x-y|}-\dfrac{k_{+}(z)-k_{-}(z)}{2k_{-}(z) \left(k_{+}(z)+k_{-}(z)\right)}e^{k_{-}(z)(x+y)}&\text{for }\{x<0,y<0\};\\ \dfrac{1}{k_{+}(z)+k_{-}(z)}e^{-k_{+}(z)x+k_{-}(z)y}&\text{for }\{x>0,y<0\};\\ \dfrac{1}{k_{+}(z)+k_{-}(z)}e^{k_{-}(z)x-k_{+}(z)y}&\text{for }\{x<0,y>0\}; \end{cases}\]
_where we set_
\[k_{+}(z)\coloneqq\sqrt{V_{+}-z},\qquad k_{-}(z)\coloneqq\sqrt{V_{-}-z}. \tag{2.5}\]
Here and throughout the article, we choose the principal branch of the square root, _i.e._, the map \(z\mapsto\sqrt{z}\) defined on \(\mathbb{C}\), holomorphic on \(\mathbb{C}\setminus(-\infty,0]\) and positive on \((0,+\infty)\).
**Remark 2.3**.: _When \(V_{+}=V_{-}=v\in\mathbb{C}\), we obtain a simple formula for the resolvent kernel of the Schrodinger operator \(\mathscr{L}\):_
\[\mathcal{R}_{z}(x,y)=\frac{1}{2\sqrt{v-z}}e^{-\sqrt{v-z}|x-y|},\qquad\text{ for almost everywhere }(x,y)\in\mathbb{R}^{2}. \tag{2.6}\]
_Obviously, the resolvent of the Schrodinger operator with steplike potential has more terms than the resolvent of the free Schrodinger operator, i.e., the case \(V_{+}=V_{-}=0\). Besides the exponential terms involving the difference \(x-y\), it also contains exponential terms in which \(x\) and \(y\) separate. We will see that the latter play the main role in the blowing-up of the resolvent inside the numerical range, which makes the pseudospectrum highly non-trivial (see subsection 4.1)._
Figure 2. Illustration of the numerical range of \(\mathscr{L}\) in the magenta color.
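The kernel of Proposition 2.2 can also be checked symbolically (an independent sanity check, not a substitute for the proof in Section 3): for a fixed \(y>0\) one verifies that it solves the homogeneous equation away from \(x=y\), that the \(\{x>0,y>0\}\) and \(\{x<0,y>0\}\) formulas match in a \(C^{1}\) fashion at \(x=0\), and that the \(x\)-derivative jumps by \(-1\) across \(x=y\). In the sketch below, \(k_{p},k_{m}\) stand for \(k_{+}(z),k_{-}(z)\) and are treated as generic symbols.

```python
# Symbolic sanity check of the resolvent kernel of Proposition 2.2 for a fixed y > 0
# (k_p, k_m play the roles of k_+(z), k_-(z) and are generic symbols here).
import sympy as sp

x, y = sp.symbols('x y', positive=True)
kp, km = sp.symbols('k_p k_m')

c = (kp - km) / (2 * kp * (kp + km))
R_in  = sp.exp(-kp * (y - x)) / (2 * kp) + c * sp.exp(-kp * (x + y))   # 0 < x < y
R_out = sp.exp(-kp * (x - y)) / (2 * kp) + c * sp.exp(-kp * (x + y))   # x > y > 0
R_neg = sp.exp(km * x - kp * y) / (kp + km)                            # x < 0 < y

# (i) homogeneous equation -R'' + k^2 R = 0 away from x = y (recall k_pm(z)^2 = V_pm - z)
for R, k in [(R_in, kp), (R_out, kp), (R_neg, km)]:
    assert sp.simplify(-sp.diff(R, x, 2) + k ** 2 * R) == 0

# (ii) C^1 matching of the {x>0,y>0} and {x<0,y>0} formulas at x = 0
assert sp.simplify(R_in.subs(x, 0) - R_neg.subs(x, 0)) == 0
assert sp.simplify(sp.diff(R_in, x).subs(x, 0) - sp.diff(R_neg, x).subs(x, 0)) == 0

# (iii) continuity and derivative jump -1 across the diagonal x = y
assert sp.simplify(R_out.subs(x, y) - R_in.subs(x, y)) == 0
assert sp.simplify(sp.diff(R_out, x).subs(x, y) - sp.diff(R_in, x).subs(x, y) + 1) == 0
print("resolvent kernel checks passed")
```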
By using Weyl sequences, we can show that the set \([V_{+},+\infty)\cup[V_{-},+\infty)\) is indeed the spectrum of \(\mathscr{L}\), whose characterization is described explicitly in the following theorem.
**Theorem 2.4**.: _Let \(V_{\pm}\in\mathbb{C}\) and \(\mathscr{L}\) be the operator defined as in Proposition 2.1. The spectrum of \(\mathscr{L}\) is given by \((\)see Figure 3\()\)_
\[\sigma\,(\mathscr{L})=[V_{+},+\infty)\cup[V_{-},+\infty). \tag{2.7}\]
_Furthermore, the spectrum of \(\mathscr{L}\) is purely continuous, i.e.,_
\[\sigma_{\rm p}(\mathscr{L})=\emptyset,\qquad\sigma_{\rm r}(\mathscr{L})=\emptyset,\qquad\sigma_{\rm c}(\mathscr{L})=[V_{+},+\infty)\cup[V_{-},+\infty),\]
_and all its essential spectra are identical:_
\[\sigma_{\rm e1}(\mathscr{L})=\sigma_{\rm e2}(\mathscr{L})=\sigma_{\rm e3}( \mathscr{L})=\sigma_{\rm e4}(\mathscr{L})=\sigma_{\rm e5}(\mathscr{L})=[V_{+},+\infty)\cup[V_{-},+\infty).\]
In view of Theorem 2.4, it can be seen that all the essential spectra are identical even when \(\mathscr{L}\) is not necessarily self-adjoint. When \(V_{+}=V_{-}=0\), the well-known result on the spectrum of the free Schrodinger operator, which is classically attained from the positiveness of \(-\frac{{\rm d}^{2}}{{\rm d}x^{2}}\) and the existence of the Weyl sequence (_approximate eigenfunctions_), is recovered
\[\sigma(\mathscr{L})=[0,+\infty).\]
When \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\), the spectrum of \(\mathscr{L}\) is restricted to the axis
\[\sigma(\mathscr{L})=[\widehat{V},+\infty),\qquad\widehat{V}\coloneqq\min( \operatorname{Re}V_{+},\operatorname{Re}V_{-})+i\operatorname{Im}V_{+}, \tag{2.8}\]
which is as same as the spectrum of the free Schrodinger operator translated by a constant potential \(\widehat{V}\). When \(V(x)=i\operatorname{sgn}(x)\), we attain the result given in [15, Proposition 2.1].
Figure 3. The spectrum of \(\mathscr{L}\) is expressed by the magenta lines whose starting points are \(V_{+}\) and \(V_{-}\).
### The pseudospectrum
Since the free Schrodinger operator is self-adjoint, it is deduced from (1.2) and (1.1) that
\[\sigma_{\varepsilon}\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\right)=\left\{ z\in\mathbb{C}:\mathrm{dist}\left(z,\sigma\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^ {2}}\right)\right)<\varepsilon\right\}.\]
Therefore, its pseudospectrum is trivial. Here, we call the pseudospectrum of a linear operator \(S\) trivial if there exists \(C>0\) such that, for all \(\varepsilon>0\), we have
\[\sigma_{\varepsilon}(S)\subset\{z\in\mathbb{C}:\mathrm{dist}(z,\sigma(S))<C \varepsilon\}.\]
After adding a steplike potential, which is a bounded perturbation, it is natural to wonder whether the pseudospectrum remains trivial. At the end of this subsection, we will show that the pseudospectrum is trivial if and only if \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\), _i.e._, \(\operatorname{Im}V(x)\) is constant. The following proposition, which is a direct consequence of general operator theory, concerns the resolvent norm outside the numerical range.
**Proposition 2.5**.: _Let \(V_{\pm}\in\mathbb{C}\) and \(\mathscr{L}\) be the operator defined as in Proposition 2.1. For all \(z\in\mathbb{C}\setminus\overline{\operatorname{Num}(\mathscr{L})}\), we have_
\[\frac{1}{\mathrm{dist}(z,\sigma(\mathscr{L}))}\leq\|(\mathscr{L}-z)^{-1}\|\leq\frac{1}{\mathrm{dist}(z,\operatorname{Num}(\mathscr{L}))}. \tag{2.9}\]
_As a consequence, we have_
\[\|(\mathscr{L}-z)^{-1}\|=\frac{1}{\mathrm{dist}(z,\sigma(\mathscr{L}))}, \tag{2.10}\]
_for all \(z\in W(\mathscr{L})\coloneqq\Big{\{}z\in\mathbb{C}\setminus\overline{ \operatorname{Num}(\mathscr{L})}:\mathrm{dist}(z,\sigma(\mathscr{L}))=\mathrm{ dist}(z,\operatorname{Num}(\mathscr{L}))\Big{\}}\)._
Proof.: The first inequality in (2.9) comes from [9, Theorem 1.2.10] and the second one comes from [9, Lemma 9.3.14].
Figure 4. The region \(W(\mathscr{L})\) is described in the green color, in which two dashed lines are perpendicular to line segment \(V_{+}V_{-}\) at \(V_{+}\) and \(V_{-}\).
**Remark 2.6**.: _In Figure 4, we give a description of the set \(W(\mathscr{L})\) in Proposition 2.5 in which the norm of the resolvent behaves like in the case of the normal operator. However, \(W(\mathscr{L})\) is not the largest region in \(\mathbb{C}\) such that the equality (2.10) happens. To see this, for simplicity, let us consider a simple case \(V_{+}=i\) and \(V_{-}=-i\) (see Figure 5) and thanks to the integration by parts, we have, for all \(u\in H^{2}(\mathbb{R})\),_
\[\|(\mathscr{L}-z)u\|^{2}= |i-z|^{2}\|u\|_{L^{2}(\mathbb{R}_{+})}^{2}+|-i-z|^{2}\|u\|_{L^{2 }(\mathbb{R}_{-})}^{2}+\|u^{\prime\prime}\|^{2}-2\mathrm{Re}\,z\|u^{\prime}\|^ {2}+4\mathrm{Im}\,\left(u^{\prime}(0)\overline{u(0)}\right).\]
_Notice that \(|-i-z|^{2}-|i-z|^{2}=4\mathrm{Im}\,z\) and we write_
\[\|(\mathscr{L}-z)u\|^{2}=|i-z|^{2}\|u\|^{2}+4\mathrm{Im}\,z\|u\|_{L^{2}( \mathbb{R}_{-})}^{2}+\|u^{\prime\prime}\|^{2}-2\mathrm{Re}\,z\|u^{\prime}\|^ {2}+4\mathrm{Im}\,\left(u^{\prime}(0)\overline{u(0)}\right)\]
_Assume that \(\mathrm{Re}\,z<0\). By using the inequalities (see for instance (B.1))_
\[|u(0)|^{2}\leq 2\|u\|_{L^{2}(\mathbb{R}_{-})}\|u^{\prime}\|_{L^{2}(\mathbb{R}_ {-})}\text{ and }|u^{\prime}(0)|^{2}\leq 2\|u^{\prime}\|_{L^{2}(\mathbb{R}_{+})}\|u^{ \prime\prime}\|_{L^{2}(\mathbb{R}_{+})}\]
_and the AM-GM inequality for four numbers, we have_
\[\left|4\mathrm{Im}\,\left(u^{\prime}(0)\overline{u(0)}\right)\right| \leq 8\|u^{\prime\prime}\|_{L^{2}(\mathbb{R}_{+})}^{1/2}\|u^{\prime}\|_{L^{2}(\mathbb{R}_{+})}^{1/2}\|u^{\prime}\|_{L^{2}(\mathbb{R}_{-})}^{1/2}\|u\|_{L^{2}(\mathbb{R}_{-})}^{1/2}\] \[\leq \|u^{\prime\prime}\|_{L^{2}(\mathbb{R}_{+})}^{2}-2\mathrm{Re}\,z\|u^{\prime}\|_{L^{2}(\mathbb{R}_{+})}^{2}-2\mathrm{Re}\,z\|u^{\prime}\|_{L^{2}(\mathbb{R}_{-})}^{2}+\frac{4}{(\mathrm{Re}\,z)^{2}}\|u\|_{L^{2}(\mathbb{R}_{-})}^{2}.\]
_Then, it yields that, for all \(u\in H^{2}(\mathbb{R})\),_
\[\|(\mathscr{L}-z)u\|^{2}\geq|i-z|^{2}\|u\|^{2}+\left(4\mathrm{Im}\,z-\frac{4} {(\mathrm{Re}\,z)^{2}}\right)\|u\|_{L^{2}(\mathbb{R}_{-})}^{2}.\]
_Therefore, if \(\mathrm{Im}\,z\geq\frac{1}{(\mathrm{Re}\,z)^{2}}\) and \(\mathrm{Re}\,z<0\), then, for all \(u\in H^{2}(\mathbb{R})\),_
\[\|(\mathscr{L}-z)u\|^{2}\geq|i-z|^{2}\|u\|^{2}.\]
_In the same manner, it can be shown that if \(-\mathrm{Im}\,z\geq\frac{1}{(\mathrm{Re}\,z)^{2}}\) and \(\mathrm{Re}\,z<0\), then, for all \(u\in H^{2}(\mathbb{R})\),_
\[\|(\mathscr{L}-z)u\|^{2}\geq|-i-z|^{2}\|u\|^{2}.\]
_This implies that if \(|\mathrm{Im}\,z|\geq\frac{1}{(\mathrm{Re}\,z)^{2}}\) and \(\mathrm{Re}\,z<0\), then_
\[\|(\mathscr{L}-z)^{-1}\|\leq\frac{1}{\min\{|i-z|,|-i-z|\}}=\frac{1}{\mathrm{ dist}(z,\sigma(\mathscr{L}))}.\]
_By using the first inequality in (2.9), we obtain the equality_
\[\|(\mathscr{L}-z)^{-1}\|=\frac{1}{\mathrm{dist}(z,\sigma(\mathscr{L}))}\]
_on the blue region in Figure 5._
Since the resolvent \((\mathscr{L}-z)^{-1}\) is a holomorphic function on \(\rho(\mathscr{L})\), its norm is continuous and thus bounded on any compact set in \(\rho(\mathscr{L})\). Therefore, if we want to find a place where the resolvent norm blows up, we should look for it in an unbounded subset of \(\rho(\mathscr{L})\). In view of Proposition 2.5, the resolvent norm remains bounded when \(z\) moves to infinity parallel to the numerical range while staying outside it, and it decays when \(z\) moves far away from the numerical range. This tells us that the blow-up of the resolvent norm should be sought inside the numerical range, and this is the content of the following theorem.
**Theorem 2.7**.: _Let \(V_{\pm}\in\mathbb{C}\) such that \(\mathrm{Im}\,V_{+}\neq\mathrm{Im}\,V_{-}\) and \(\mathscr{L}\) be the operator defined as in Proposition 2.1. Let \(z\in\mathbb{C}\) such that \(\mathrm{Im}\,z\in|(\mathrm{Im}\,V_{+},\mathrm{Im}\,V_{-})|\), then_
\[\left\|(\mathscr{L}-z)^{-1}\right\|=\frac{2|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V _{-}|}{|V_{+}-V_{-}|}\frac{\mathrm{Re}\,z}{|\mathrm{Im}\,V_{+}-\mathrm{Im}\,z ||\mathrm{Im}\,V_{-}-\mathrm{Im}\,z|}\left(1+\mathcal{O}\left(\frac{1}{| \mathrm{Re}\,z|}\right)\right),\]
_as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(\operatorname{Im}z\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\)._
Let us note that when \(V_{+}=i\) and \(V_{-}=-i\), thanks to Theorem 2.7, we obtain the formula
\[\left\|(\mathscr{L}-z)^{-1}\right\|=\frac{2\operatorname{Re}z}{1-| \operatorname{Im}z|^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\operatorname{Re}z |}\right)\right),\]
as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(|\operatorname{Im}z|<1\); this statement is clearly an improvement of [15, Theorem 2.3]. Furthermore, by setting \(\frac{2\operatorname{Re}z}{1-|\operatorname{Im}z|^{2}}=\frac{1}{\varepsilon}\), we obtain the shape of the level curves in Figure 6, which are very similar to the level sets \(\{z\in\mathbb{C}:\|(\mathscr{L}-z)^{-1}\|=\frac{1}{\varepsilon}\}\) inside the numerical range computed numerically in [15, Figure 1].
Figure 6. The level curves \(\frac{2\operatorname{Re}z}{1-|\operatorname{Im}z|^{2}}=\frac{1}{\varepsilon}\) in the complex \(z\)-plane are computed for several values of \(\varepsilon\); the red lines are the essential spectrum of \(\mathscr{L}\) in the case \(V_{+}=i\) and \(V_{-}=-i\); the smaller \(\varepsilon\) is, the closer to infinity and to the red lines the level curve is.
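These level curves can also be probed by a crude numerical experiment (ours, for illustration only): discretize \(\mathscr{L}\) with \(V(x)=i\operatorname{sgn}(x)\) by finite differences on a truncated interval and compute \(\|(\mathscr{L}_{h}-z)^{-1}\|=1/s_{\min}(\mathscr{L}_{h}-z)\). The truncation length, the grid, and the Dirichlet conditions at the endpoints below are ad hoc choices and introduce their own discretization errors, so only rough agreement with \(\frac{2\operatorname{Re}z}{1-|\operatorname{Im}z|^{2}}\) should be expected.

```python
# Finite-difference illustration of the level curves for V(x) = i*sgn(x) (V_+ = i, V_- = -i).
# The interval [-L, L], the grid size n and the Dirichlet truncation are ad hoc choices.
import numpy as np

L, n = 50.0, 1200
xs = np.linspace(-L, L, n)
h = xs[1] - xs[0]
V = 1j * np.sign(xs)

H = (np.diag(2.0 / h ** 2 + V)
     - np.diag(np.ones(n - 1), 1) / h ** 2
     - np.diag(np.ones(n - 1), -1) / h ** 2)      # discretization of -d^2/dx^2 + V(x)

def resolvent_norm(z):
    smin = np.linalg.svd(H - z * np.eye(n), compute_uv=False)[-1]
    return 1.0 / smin

for re_z in (10.0, 20.0, 30.0):
    z = re_z + 0.3j                               # Im z = 0.3 lies strictly between -1 and 1
    predicted = 2 * re_z / (1 - 0.3 ** 2)
    print(f"Re z = {re_z}: discretized norm ~ {resolvent_norm(z):.1f}, "
          f"asymptotic 2 Re z / (1 - (Im z)^2) = {predicted:.1f}")
```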
A direct consequence of Proposition 2.5 and Theorem 2.7 is the following corollary, which shows that the source of the non-trivial pseudospectra is the difference between \(\operatorname{Im}V_{+}\) and \(\operatorname{Im}V_{-}\), regardless of the values of \(\operatorname{Re}V_{+}\) and \(\operatorname{Re}V_{-}\).
**Corollary 2.8**.: _Let \(V_{\pm}\in\mathbb{C}\) and \(\mathscr{L}\) be the operator defined as in Proposition 2.1. Then, the following holds._
1. _If_ \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\)_, the pseudospectrum of_ \(\mathscr{L}\) _is given by_ \[\sigma_{\varepsilon}(\mathscr{L})=\{z\in\mathbb{C}:\operatorname{dist}(z, \sigma(\mathscr{L}))<\varepsilon\}.\]
2. _If_ \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\)_, for every_ \(\varepsilon^{\prime}\in(0,1)\)_, there exists_ \(M>0\) _such that, for every_ \(\varepsilon>0\)_, we have_ \[\sigma_{\varepsilon}(\mathscr{L})\supset \{z\in\mathbb{C}:\operatorname{dist}(z,\sigma(\mathscr{L}))<\varepsilon\}\] \[\bigcup\left\{z\in\mathbb{C}:\begin{aligned} & \operatorname{Re}z>M\text{ and }\operatorname{Im}z\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\text{ and }\\ &\operatorname{Re}z>\frac{1}{\varepsilon(1-\varepsilon^{\prime})}\frac{|V_{+}-V_{-}||\operatorname{Im}V_{+}-\operatorname{Im}z||\operatorname{Im}V_{-}-\operatorname{Im}z|}{2|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|}\end{aligned}\right\}.\]
_As a consequence, \(\sigma_{\varepsilon}(\mathscr{L})\) is trivial if and only if \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\)._
### The optimal pseudomode
One of the early studies of pseudomode construction for non-self-adjoint Hamiltonians can be attributed to Davies' work [8]. In his work, Davies scaled the variable so that the original Schrodinger operator is transformed into its semiclassical version, and thanks to this fact, a WKB pseudomode is constructed. However, this method seems effective only for a certain class of polynomial potentials in which
the scaling can be performed, while it is inapplicable to other types of potentials such as logarithmic or exponential potentials. Twenty years later, Krejcirik and Siegl in [19] developed a _direct construction of large-energy pseudomodes_ for the Schrodinger operator, which does not require a passage through the semiclassical setting and can cover all of the above-mentioned potentials. Furthermore, they also created two methods, the first called the _perturbative approach_ and the second called the _mollification strategy_, to deal with lower-regularity potentials (discontinuous potentials included). However, when these methods are applied to the potential \(V(x)=i\operatorname{sgn}(x)\), the best possible decay rate of the quotient \(\frac{\|(\mathscr{L}-z)\Psi_{z}\|}{\|\Psi_{z}\|}\) that may be attained is \(\mathcal{O}\left(\frac{1}{z^{1/4}}\right)\) for the perturbative approach, see [19, Example 4.5], and \(\mathcal{O}\left(\frac{1}{z^{1/2}}\right)\) for the mollification strategy, see [19, Example 4.11], as \(z\to+\infty\) on the real axis. Our next result presents an explicit pseudomode \(\Psi_{z}\) that yields the precise decay rate, which should be \(\mathcal{O}\left(\frac{1}{z}\right)\). Better yet, the outcome is optimal, as can be verified from Theorem 2.7.
**Theorem 2.9**.: _Let \(V_{\pm}\in\mathbb{C}\) such that \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\) and \(\mathscr{L}\) be the operator defined as in Proposition 2.1, and \(k_{\pm}(z)\) as in (2.5). For each \(z\in\rho(\mathscr{L})\), we define a function \(\Psi_{z}\) as follows_
\[\Psi_{z}(x)=\left(n_{1}(z)e^{k_{-}(z)x}+n_{2}(z)e^{\overline{k_{-}(z)}x} \right)\mathbf{1}_{\mathbb{R}_{-}}(x)+\left(p_{1}(z)e^{-k_{+}(z)x}+p_{2}(z)e^{ -\overline{k_{+}(z)}x}\right)\mathbf{1}_{\mathbb{R}_{+}}(x), \tag{2.11}\]
_where \(n_{1}(z),p_{1}(z)\) and \(n_{2}(z),p_{2}(z)\) are complex numbers depending on \(z\) given by_
\[\begin{cases}n_{1}(z)=\dfrac{k_{+}(z)+\overline{k_{-}(z)}}{k_{+}(z)+k_{-}(z)}| \mathrm{Im}\,V_{+}-\mathrm{Im}\,z|+\dfrac{k_{+}(z)-\overline{k_{+}(z)}}{k_{+}( z)+k_{-}(z)}|\mathrm{Im}\,V_{-}-\mathrm{Im}\,z|,\\ p_{1}(z)=-\dfrac{k_{-}(z)-\overline{k_{-}(z)}}{k_{+}(z)+k_{-}(z)}|\mathrm{Im}\,V _{+}-\mathrm{Im}\,z|-\dfrac{\overline{k_{+}(z)}+k_{-}(z)}{k_{+}(z)+k_{-}(z)}| \mathrm{Im}\,V_{-}-\mathrm{Im}\,z|,\\ n_{2}(z)=-|\mathrm{Im}\,V_{+}-\mathrm{Im}\,z|,\\ p_{2}(z)=|\mathrm{Im}\,V_{-}-\mathrm{Im}\,z|.\end{cases} \tag{2.12}\]
_Then, \(\Psi_{z}\in H^{2}(\mathbb{R})\) for all \(z\in\rho(\mathscr{L})\) and it makes_
\[\frac{\|(\mathscr{L}-z)\Psi_{z}\|}{\|\Psi_{z}\|}=\frac{|V_{+}-V_{-}|}{2| \mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}|}\frac{|\mathrm{Im}\,V_{+}-\mathrm{Im} \,z||\mathrm{Im}\,V_{-}-\mathrm{Im}\,z|}{\mathrm{Re}\,z}\left(1+\mathcal{O} \left(\frac{1}{|\mathrm{Re}\,z|}\right)\right),\]
_as \(\mathrm{Re}\,z\to+\infty\) and uniformly for all \(\mathrm{Im}\,z\in|(\mathrm{Im}\,V_{+},\mathrm{Im}\,V_{-})|\)._
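The quotient of Theorem 2.9 can be evaluated numerically without discretizing the operator: since \(e^{-k_{+}(z)x}\) and \(e^{k_{-}(z)x}\) solve the homogeneous equations on the corresponding half-lines, \((\mathscr{L}-z)\Psi_{z}\) consists only of the terms coming from the conjugated exponents, each multiplied by \(2i\operatorname{Im}(V_{\pm}-z)\). The following sketch is our own check; the choice \(V_{+}=i\), \(V_{-}=-i\), \(\operatorname{Im}z=0.4\) and the test values of \(\operatorname{Re}z\) are arbitrary.

```python
# Numerical evaluation of the pseudomode quotient of Theorem 2.9 for V_+ = i, V_- = -i.
# We use that exp(-k_+(z)x) and exp(k_-(z)x) solve the homogeneous equations, so (L - z)Psi_z
# keeps only the conjugated-exponent terms; the test values of z below are arbitrary.
import numpy as np

Vp, Vm = 1j, -1j

def half_line_l2sq(f, decay_rate, npts=200_000):
    xs = np.linspace(0.0, 12.0 / decay_rate, npts)         # ~12 decay lengths of |f|^2
    vals = np.abs(f(xs)) ** 2
    dx = xs[1] - xs[0]
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule

def quotient(z):
    kp, km = np.sqrt(Vp - z), np.sqrt(Vm - z)              # principal branch, as in (2.5)
    A, B = abs((Vp - z).imag), abs((Vm - z).imag)
    n1 = ((kp + km.conjugate()) * A + (kp - kp.conjugate()) * B) / (kp + km)
    p1 = -((km - km.conjugate()) * A + (kp.conjugate() + km) * B) / (kp + km)
    n2, p2 = -A, B
    psi_m = lambda t: n1 * np.exp(-km * t) + n2 * np.exp(-km.conjugate() * t)   # Psi_z(-t), t > 0
    psi_p = lambda s: p1 * np.exp(-kp * s) + p2 * np.exp(-kp.conjugate() * s)   # Psi_z(s),  s > 0
    res_m = lambda t: n2 * 2j * (Vm - z).imag * np.exp(-km.conjugate() * t)     # (L - z)Psi, x < 0
    res_p = lambda s: p2 * 2j * (Vp - z).imag * np.exp(-kp.conjugate() * s)     # (L - z)Psi, x > 0
    num = half_line_l2sq(res_m, 2 * km.real) + half_line_l2sq(res_p, 2 * kp.real)
    den = half_line_l2sq(psi_m, 2 * km.real) + half_line_l2sq(psi_p, 2 * kp.real)
    return np.sqrt(num / den)

for re_z in (50.0, 200.0, 800.0):
    z = re_z + 0.4j
    asym = abs(Vp - Vm) / (2 * abs((Vp - Vm).imag)) * (1 - 0.4 ** 2) / re_z
    print(f"Re z = {re_z}: quotient = {quotient(z):.4e}, asymptotic = {asym:.4e}")
```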
### Complex point interaction
Having rigorously investigated the operator \(\mathscr{L}\) above, in this subsection we want to discuss its perturbation by the Dirac delta generalized function \(\delta_{0}\), the so-called _point interaction_. The title of this subsection indicates that the added distribution \(\delta_{0}\) is multiplied by a complex number \(\alpha\); the formal expression of this perturbed operator, denoted by \(\mathscr{L}_{\alpha}\), is then given as in (1.4). Our first aim is to define its _realization_ in \(L^{2}(\mathbb{R})\) via the following sesquilinear form
\[Q_{\alpha}(u,v) \coloneqq\int_{\mathbb{R}}u^{\prime}(x)\overline{v^{\prime}(x)} \,\mathrm{d}x+V_{+}\int_{0}^{+\infty}u(x)\overline{v(x)}\,\mathrm{d}x+V_{-} \int_{-\infty}^{0}u(x)\overline{v(x)}\,\mathrm{d}x+\alpha u(0)\overline{v(0)},\] \[\mathrm{Dom}(Q_{\alpha}) \coloneqq H^{1}(\mathbb{R}).\]
The definition and some properties of \(\mathscr{L}_{\alpha}\) are given in the following proposition, whose detailed proof can be found in Appendix B.
**Proposition 2.10**.: _There exists a closed densely defined operator \(\mathscr{L}_{\alpha}\) whose domain is given by_
\[\mathrm{Dom}\left(\mathscr{L}_{\alpha}\right)=\left\{\begin{aligned} & u\in H^{1}(\mathbb{R}):\text{ there exists }f\in L^{2}(\mathbb{R})\text{ such that }\\ & Q_{\alpha}(u,v)=\left\langle f,v\right\rangle\text{ for all }v\in H^{1}(\mathbb{R})\end{aligned}\right\},\]
_and_
\[Q_{\alpha}(u,v)=\left\langle\mathscr{L}_{\alpha}u,v\right\rangle,\qquad\forall u \in\mathrm{Dom}(\mathscr{L}_{\alpha}),\qquad\forall v\in H^{1}(\mathbb{R}). \tag{2.13}\]
_Then, the following holds._
**(1)**: _The domain and the action of_ \(\mathscr{L}_{\alpha}\) _are given by_
\[\mathrm{Dom}(\mathscr{L}_{\alpha})\coloneqq\{u\in H^{1}(\mathbb{R})\cap H^{ 2}(\mathbb{R}\setminus\{0\}):u^{\prime}(0^{+})-u^{\prime}(0^{-})=\alpha u(0)\},\]
\[\mathscr{L}_{\alpha}u=-u^{\prime\prime}+V(x)u,\qquad\forall u\in\mathrm{Dom}( \mathscr{L}_{\alpha}),\]
_and its resolvent set_ \(\rho(\mathscr{L}_{\alpha})\) _is nonempty;_
**(2)**: _The numerical range_ \(\mathrm{Num}(\mathscr{L}_{\alpha})\) _is included in the sector_ \(S_{C_{V,\alpha},\frac{\pi}{4}}\) _with a vertex_
\[C_{V,\alpha}=\min\{\mathrm{Re}\,V_{+},\mathrm{Re}\,V_{-}\}-\max\{|\mathrm{Im} \,V_{+}|,|\mathrm{Im}\,V_{-}|\}-(|\mathrm{Re}\,\alpha|+|\mathrm{Im}\,\alpha| )^{2}, \tag{2.14}\]
_and as a consequence,_ \(\mathscr{L}_{\alpha}\) _is a_ \(m-\)_sectorial operator._
**(3)**: _The adjoint of_ \(\mathscr{L}_{\alpha}\) _is given by_
\[\mathrm{Dom}(\mathscr{L}_{\alpha}^{*})=\{u\in H^{1}(\mathbb{R})\cap H^{2}( \mathbb{R}\setminus\{0\}):u^{\prime}(0^{+})-u^{\prime}(0^{-})=\overline{ \alpha}u(0)\}, \tag{2.15}\] \[\mathscr{L}_{\alpha}^{*}u=-u^{\prime\prime}+\overline{V(x)}u,\qquad \forall u\in\mathrm{Dom}(\mathscr{L}_{\alpha}^{*}).\]
_and as a consequence_
**(a)**: \(\mathscr{L}_{\alpha}\) _is normal if and only if either_ \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}=0\) _and_ \(\alpha\in\mathbb{R}\)_, or_ \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\neq 0\) _and_ \(\alpha=0\)_;_
* \(\mathscr{L}_{\alpha}\) _is self-adjoint if and only if_ \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}=0\) _and_ \(\alpha\in\mathbb{R}\)_;_
* \(\mathscr{L}_{\alpha}\) _is always_ \(\mathcal{T}\)_-self-adjoint;_
* \(\mathscr{L}_{\alpha}\) _is_ \(\mathcal{P}\)_-self-adjoint if and only if_ \(\operatorname{Re}V_{+}=\operatorname{Re}V_{-}\) _and_ \(\operatorname{Im}V_{+}=-\operatorname{Im}V_{-}\) _and_ \(\operatorname{Re}\alpha=0\)_;_
* \(\mathscr{L}_{\alpha}\) _is_ \(\mathcal{PT}\)_-symmetric if and only if_ \(\operatorname{Re}V_{+}=\operatorname{Re}V_{-}\) _and_ \(\operatorname{Im}V_{+}=-\operatorname{Im}V_{-}\) _and_ \(\operatorname{Re}\alpha=0\)_._
In light of this proposition, it is clear that the complex number \(\alpha\), which appears in the action of the quadratic form \(Q_{\alpha}\), does not appear in the action of \(\mathscr{L}_{\alpha}\) but rather enters the domain of the operator. It is remarkable that the conditions \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\) and \(\alpha\in\mathbb{R}\) are not enough to conclude that the operator \(\mathscr{L}_{\alpha}\) is normal. Furthermore, the determination of the explicit numerical range of the operator \(\mathscr{L}_{\alpha}\) constitutes an interesting open problem, even in the simple case \(V_{+}=V_{-}=0\).
The main purpose of this subsection is to reproduce for the operator \(\mathscr{L}_{\alpha}\) all the results that we achieved for \(\mathscr{L}\) in the previous subsections. The first outcome that we want to obtain is the spectrum of the operator \(\mathscr{L}_{\alpha}\). However, we do not need to determine the resolvent set of \(\mathscr{L}_{\alpha}\) to deduce its spectrum as we did in subsection 2.1. Instead, we can study \(\sigma(\mathscr{L}_{\alpha})\) directly. To do that, we will show through the following lemma that some types of the essential spectra are stable under the interaction.
**Lemma 2.11**.: _Let \(V_{\pm}\in\mathbb{C}\), \(\alpha\in\mathbb{C}\) and \(\mathscr{L}_{\alpha}\) be the operator defined as in Proposition 2.10. Then, the first three essential spectra of \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\) are identical:_
\[\sigma_{\rm ek}(\mathscr{L})=\sigma_{\rm ek}(\mathscr{L}_{\alpha})\qquad \text{ for }\mathrm{k}\in\{1,2,3\}.\]
_As a consequence, we have_
\[\sigma_{\rm ek}(\mathscr{L}_{\alpha})=[V_{+},+\infty)\cup[V_{-},+\infty), \qquad\text{ for }\mathrm{k}\in\{1,2,3,4,5\}.\]
Now, we can describe the spectrum of the operator \(\mathscr{L}_{\alpha}\) by the following theorem.
**Theorem 2.12**.: _Let \(V_{\pm}\in\mathbb{C}\), \(\alpha\in\mathbb{C}\) and \(\mathscr{L}_{\alpha}\) be the operator defined as in Proposition 2.10. Then, the spectrum of \(\mathscr{L}_{\alpha}\) is composed of point spectrum and continuous spectrum,_
\[\sigma(\mathscr{L}_{\alpha})=\sigma_{\rm c}(\mathscr{L}_{\alpha})\cup\sigma_ {\rm p}(\mathscr{L}_{\alpha}),\]
_in which,_
* _the continuous spectrum is also the essential spectrum_ \[\sigma_{\rm c}(\mathscr{L}_{\alpha})=\sigma_{\rm ek}(\mathscr{L}_{\alpha})=[V _{+},+\infty)\cup[V_{-},+\infty),\qquad\text{ for }\mathrm{k}\in\{1,2,3,4,5\},\] (2.16)
* _the point spectrum is also the discrete spectrum_ \[\sigma_{\rm p}(\mathscr{L}_{\alpha})=\sigma_{\rm dis}(\mathscr{L}_{\alpha})= \begin{cases}\{z(\alpha)\}&\text{ if }\alpha\in\Omega,\\ \emptyset&\text{ if }\alpha\in\mathbb{C}\setminus\Omega,\end{cases}\] (2.17) _where_ \[z(\alpha) \coloneqq\frac{V_{+}+V_{-}}{2}-\frac{(V_{+}-V_{-})^{2}}{4\alpha^ {2}}-\frac{\alpha^{2}}{4},\] \[\Omega \coloneqq\left\{\alpha\in\mathbb{C}:|\langle V_{+}-V_{-},\alpha \rangle_{\mathbb{R}^{2}}|<-|\alpha|^{2}\mathrm{Re}\,\alpha\right\}.\]
_When \(\alpha\in\Omega\), the eigenspace of \(z(\alpha)\) is given by_
\[\mathrm{Ker}(\mathscr{L}_{\alpha}-z(\alpha))=\operatorname{span}\left\{u_{ \alpha}(x)\right\},\]
_where_
\[u_{\alpha}(x)\coloneqq\begin{cases}e^{\left(\frac{\alpha}{2}+\frac{V_{+}-V_{- }}{2\alpha}\right)x},&\text{ for }x\geq 0,\\ e^{-\left(\frac{\alpha}{2}-\frac{V_{+}-V_{-}}{2\alpha}\right)x},&\text{ for }x\leq 0.\end{cases} \tag{2.18}\]
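The eigenpair can be verified symbolically (a sanity check only; the \(L^{2}\) condition, which is equivalent to \(\alpha\in\Omega\), is not tested here): one confirms that \(u_{\alpha}\) solves \(-u''+V_{\pm}u=z(\alpha)u\) on each half-line, is continuous at \(0\), and satisfies the jump condition \(u'(0^{+})-u'(0^{-})=\alpha u(0)\).

```python
# Symbolic check of the eigenpair (z(alpha), u_alpha) from Theorem 2.12 (sanity check only;
# square-integrability, i.e. alpha in Omega, is not verified here).
import sympy as sp

x = sp.symbols('x', real=True)
Vp, Vm, a = sp.symbols('V_p V_m alpha', nonzero=True)

z = (Vp + Vm) / 2 - (Vp - Vm) ** 2 / (4 * a ** 2) - a ** 2 / 4
u_plus  = sp.exp(( a / 2 + (Vp - Vm) / (2 * a)) * x)      # u_alpha(x) for x >= 0
u_minus = sp.exp((-a / 2 + (Vp - Vm) / (2 * a)) * x)      # u_alpha(x) for x <= 0

# eigenvalue equation -u'' + V u = z(alpha) u on each half-line
assert sp.simplify((-sp.diff(u_plus,  x, 2) + Vp * u_plus  - z * u_plus)  / u_plus)  == 0
assert sp.simplify((-sp.diff(u_minus, x, 2) + Vm * u_minus - z * u_minus) / u_minus) == 0

# continuity at 0 and the delta-interaction jump u'(0+) - u'(0-) = alpha * u(0)
assert sp.simplify(u_plus.subs(x, 0) - u_minus.subs(x, 0)) == 0
jump = sp.diff(u_plus, x).subs(x, 0) - sp.diff(u_minus, x).subs(x, 0)
assert sp.simplify(jump - a * u_plus.subs(x, 0)) == 0
print("eigenvalue equation and matching conditions verified")
```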
According to Theorem 2.12, there are values \(\alpha\in\mathbb{C}\) for which the spectra of \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\) are the same, and values \(\alpha\in\mathbb{C}\) for which they differ. The latter happens when \(\alpha\) belongs to the set \(\Omega\), which lies to the left of the imaginary axis in the complex plane and whose shape depends on the difference \(V_{+}-V_{-}\).
**Remark 2.13**.: _In Figure 7, some shapes of \(\Omega\) corresponding to several values of \(V_{+}-V_{-}\) are represented. In particular, in the case \(V_{+}-V_{-}=2i\), we disprove the statement in [15, Proposition 7.1 and Figure 3] that the domain \(\mathbb{C}\setminus\Omega\) is a curve in the complex plane. In any case, the sets \(\Omega\) and \(\mathbb{C}\setminus\Omega\) must be two-dimensional regions in \(\mathbb{C}\). Obviously, among all possible values of \(V_{+}-V_{-}\), the case \(V_{+}-V_{-}=0\) produces the largest region \(\Omega\), namely \(\Omega=\{z\in\mathbb{C}:\operatorname{Re}z<0\}\)._
By applying the same method that we used to calculate the asymptotic behavior of the resolvent norm of \(\mathscr{L}\) in Theorem 2.7, we deduce the following statement.
**Theorem 2.14**.: _Let \(V_{\pm}\in\mathbb{C}\) such that \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\) and let \(\alpha\in\mathbb{C}\setminus\{0\}\) and \(\mathscr{L}_{\alpha}\) be the operator defined as in Proposition 2.10. Let \(z\in\mathbb{C}\) such that \(\operatorname{Im}z\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\), then_
\[\left\|(\mathscr{L}_{\alpha}-z)^{-1}\right\|=\frac{|\operatorname{Im}V_{+}- \operatorname{Im}V_{-}|\sqrt{\operatorname{Re}z}}{|\alpha||\operatorname{Im }V_{+}-\operatorname{Im}z||\operatorname{Im}V_{-}-\operatorname{Im}z|}\left( 1+\mathcal{O}\left(\frac{1}{|\operatorname{Re}z|^{1/2}}\right)\right).\]
_as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(\operatorname{Im}z\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\)._
Figure 7. Illustrations of the set \(\Omega\) in the complex plane, at which the eigenvalue of \(\mathscr{L}_{\alpha}\) exists, for several values of \(V_{+}-V_{-}\).

It is clear that the closer \(\alpha\) is to zero, the faster the resolvent norm grows. Although the blow-up rate of the resolvent norm in the case \(\alpha\neq 0\) is slower than in the interaction-free case, the pseudospectrum is still non-trivial when \(\operatorname{Im}V(x)\) is not constant. We do not know the asymptotic behavior of the resolvent norm of \(\mathscr{L}_{\alpha}\) outside the region bounded by the two essential-spectrum half-lines, but we conjecture that it remains bounded when the spectral parameter \(z\) moves parallel to this region and decays when \(z\) moves far away from this region to infinity.
Our final result is devoted to the pseudomode construction for the complex point interaction operator \(\mathscr{L}_{\alpha}\). Again, our method for Theorem 2.9 is still applicable and yields an optimal pseudomode for the operator \(\mathscr{L}_{\alpha}\).
**Theorem 2.15**.: _Let \(V_{\pm}\in\mathbb{C}\) such that \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\) and let \(\alpha\in\mathbb{C}\setminus\{0\}\) and \(\mathscr{L}_{\alpha}\) be the operator defined as in Proposition 2.10. We define a function \(\Psi_{z,\alpha}\), for \(z\in\rho(\mathscr{L}_{\alpha})\), as follows_
\[\Psi_{z,\alpha}(x)=\left(n_{1}(z,\alpha)e^{k_{-}(z)x}+n_{2}(z)e^{ \overline{k_{-}(z)}x}\right)\mathbf{1}_{\mathbb{R}_{-}}(x)+\left(p_{1}(z, \alpha)e^{-k_{+}(z)x}+p_{2}(z)e^{-\overline{k_{+}(z)}x}\right)\mathbf{1}_{ \mathbb{R}_{+}}(x), \tag{2.19}\]
_where \(n_{1}(z,\alpha),p_{1}(z,\alpha)\) and \(n_{2}(z),p_{2}(z)\) are given by_
\[\begin{cases}n_{1}(z,\alpha)=\dfrac{k_{+}(z)+\overline{k_{-}(z)}+ \alpha}{k_{+}(z)+k_{-}(z)+\alpha}|\operatorname{Im}V_{+}-\operatorname{Im}z| +\dfrac{k_{+}(z)-\overline{k_{+}(z)}}{k_{+}(z)+k_{-}(z)+\alpha}|\operatorname{ Im}V_{-}-\operatorname{Im}z|,\\ p_{1}(z,\alpha)=-\dfrac{k_{-}(z)-\overline{k_{-}(z)}}{k_{+}(z)+k_{-}(z)+ \alpha}|\operatorname{Im}V_{+}-\operatorname{Im}z|-\dfrac{\overline{k_{+}(z) }+k_{-}(z)+\alpha}{k_{+}(z)+k_{-}(z)+\alpha}|\operatorname{Im}V_{-}- \operatorname{Im}z|,\\ n_{2}(z)=-|\operatorname{Im}V_{+}-\operatorname{Im}z|,\\ p_{2}(z)=|\operatorname{Im}V_{-}-\operatorname{Im}z|.\end{cases} \tag{2.20}\]
_Then, \(\Psi_{z,\alpha}\in\operatorname{Dom}(\mathscr{L}_{\alpha})\) for all \(z\in\rho(\mathscr{L}_{\alpha})\) and it makes_
\[\frac{\|(\mathscr{L}_{\alpha}-z)\Psi_{z,\alpha}\|}{\|\Psi_{z,\alpha}\|}=\frac{ |\alpha|}{|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|}\frac{|\operatorname {Im}V_{+}-\operatorname{Im}z||\operatorname{Im}V_{-}-\operatorname{Im}z|}{ \sqrt{\operatorname{Re}z}}\left(1+\mathcal{O}\left(\frac{1}{|\operatorname{ Re}z|^{1/2}}\right)\right),\]
_as \(\operatorname{Re}z\to+\infty\) and uniformly for all \(\operatorname{Im}z\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\)._
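Before moving on, the coefficients (2.20) can be checked to produce an admissible Ansatz. The short sketch below uses illustrative values of \(V_{\pm}\), \(\alpha\) and \(z\); as in the sketch after Remark 2.13, it assumes that membership in \(\operatorname{Dom}(\mathscr{L}_{\alpha})\) amounts to continuity at \(0\) together with the coupling condition \(\Psi^{\prime}(0^{+})-\Psi^{\prime}(0^{-})=\alpha\,\Psi(0)\), and it verifies these two matching conditions for \(\Psi_{z,\alpha}\).

```python
# Matching conditions for the pseudomode Psi_{z,alpha} of Theorem 2.15 (illustration only).
import numpy as np

Vp, Vm = 1.0 + 3.0j, -2.0 - 1.0j
alpha = 0.7 - 0.4j
z = 50.0 + 0.5j                             # Im z strictly between Im V_- and Im V_+
kp, km = np.sqrt(Vp - z), np.sqrt(Vm - z)   # principal branch of the square root
ap, am = abs(Vp.imag - z.imag), abs(Vm.imag - z.imag)

S = kp + km + alpha
n1 = ((kp + np.conj(km) + alpha)*ap + (kp - np.conj(kp))*am)/S   # cf. (2.20)
p1 = (-(km - np.conj(km))*ap - (np.conj(kp) + km + alpha)*am)/S
n2, p2 = -ap, am

psi_m, psi_p = n1 + n2, p1 + p2             # Psi(0-), Psi(0+)
dpsi_m = km*n1 + np.conj(km)*n2             # Psi'(0-)
dpsi_p = -kp*p1 - np.conj(kp)*p2            # Psi'(0+)

print("continuity defect:", abs(psi_p - psi_m))               # ~ machine precision
print("coupling defect:  ", abs(dpsi_p - dpsi_m - alpha*psi_m))
```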
## 3. Calculation of the resolvent and the spectrum
The main goal of this section is to study the resolvent and spectrum of the operator \(\mathscr{L}\).
### Integral form of the resolvent: Proof of Proposition 2.2
We start by solving the resolvent equation. Let us fix \(z\in\mathbb{C}\setminus([V_{+},+\infty)\cup[V_{-},+\infty))\) and \(f\in L^{2}(\mathbb{R})\); we look for a solution \(u\in H^{2}(\mathbb{R})\) such that
\[(\mathscr{L}-z)u=f.\]
Because of the discontinuity of the potential \(V\) at \(0\), we look for the solution of the above equation in the form
\[u(x)=\begin{cases}u_{+}(x)&\text{for $x>0$},\\ u_{-}(x)&\text{for $x<0$}.\end{cases}\]
It means that we need to find functions \(u_{\pm}\) satisfying the corresponding equations
\[-u_{\pm}^{\prime\prime}(x)+(V_{\pm}-z)u_{\pm}(x)=f(x). \tag{3.1}\]
The variation of parameters method (VPM) is employed to find \(u_{\pm}\): we first find two independent solutions of the homogeneous equations (_i.e._, with \(f=0\)), namely
\[e^{k_{\pm}(z)x}\text{ and }e^{-k_{\pm}(z)x}.\]
Then, the general solutions of the non-homogeneous equations can be found in the form
\[u_{\pm}(x)=\alpha_{\pm}(x)e^{k_{\pm}(z)x}+\beta_{\pm}(x)e^{-k_{\pm}(z)x}, \tag{3.2}\]
where \(\alpha_{\pm}:\mathbb{R}_{\pm}\to\mathbb{C}\) and \(\beta_{\pm}:\mathbb{R}_{\pm}\to\mathbb{C}\) are functions yet to be determined. By taking the first derivative of \(u_{\pm}\), we obtain
\[u^{\prime}_{\pm}(x)=\left[\alpha^{\prime}_{\pm}(x)e^{k_{\pm}(z)x}+\beta^{\prime} _{\pm}(x)e^{-k_{\pm}(z)x}\right]+\left[\alpha_{\pm}(x)\left(e^{k_{\pm}(z)x} \right)^{\prime}+\beta_{\pm}(x)\left(e^{-k_{\pm}(z)x}\right)^{\prime}\right].\]
The VPM starts by assuming that
\[\alpha^{\prime}_{\pm}(x)e^{k_{\pm}(z)x}+\beta^{\prime}_{\pm}(x)e^{-k_{\pm}(z)x }=0, \tag{3.3}\]
and thus
\[u^{\prime}_{\pm}(x)=\alpha_{\pm}(x)\left(e^{k_{\pm}(z)x}\right)^{\prime}+\beta _{\pm}(x)\left(e^{-k_{\pm}(z)x}\right)^{\prime}.\]
From this, the second derivative of \(u_{\pm}\) is obtained, that is

\[u^{\prime\prime}_{\pm}(x)=\left[\alpha^{\prime}_{\pm}(x)\left(e^{k_{\pm}(z)x}\right)^{\prime}+\beta^{\prime}_{\pm}(x)\left(e^{-k_{\pm}(z)x}\right)^{\prime}\right]+\left[\alpha_{\pm}(x)\left(e^{k_{\pm}(z)x}\right)^{\prime\prime}+\beta_{\pm}(x)\left(e^{-k_{\pm}(z)x}\right)^{\prime\prime}\right].\]

Substituting this into the non-homogeneous equations (3.1) and remembering that \(e^{k_{\pm}(z)x}\) and \(e^{-k_{\pm}(z)x}\) are solutions of the homogeneous ones, we arrive at
\[\alpha^{\prime}_{\pm}(x)\left(e^{k_{\pm}(z)x}\right)^{\prime}+\beta^{\prime} _{\pm}(x)\left(e^{-k_{\pm}(z)x}\right)^{\prime}=-f(x),\qquad\pm x>0. \tag{3.4}\]
Solving (3.3) and (3.4), we obtain
\[\alpha^{\prime}_{\pm}(x)=-\frac{1}{2k_{\pm}(z)}e^{-k_{\pm}(z)x}f(x),\qquad \beta^{\prime}_{\pm}(x)=\frac{1}{2k_{\pm}(z)}e^{k_{\pm}(z)x}f(x),\qquad\pm x>0\]
Hence, we can choose
\[\alpha_{\pm}(x) =-\frac{1}{2k_{\pm}(z)}\int_{0}^{x}e^{-k_{\pm}(z)y}f(y)\,\mathrm{ d}y+A_{\pm}, \qquad\pm x>0, \tag{3.5}\] \[\beta_{\pm}(x) =\frac{1}{2k_{\pm}(z)}\int_{0}^{x}e^{k_{\pm}(z)y}f(y)\,\mathrm{ d}y+B_{\pm},\qquad\qquad\pm x>0,\]
where \(A_{\pm}\), \(B_{\pm}\) are complex constants, to be determined later from the requirements that \(u\) and its derivative \(u^{\prime}\) are continuous at zero and that \(u\) decays at infinity. We start with the decay of \(u\) at \(+\infty\). By the density of \(C^{\infty}_{c}(\mathbb{R})\) in \(L^{2}(\mathbb{R})\), for arbitrary \(\varepsilon>0\) there exists \(f_{\varepsilon}\in C^{\infty}_{c}(\mathbb{R})\) such that \(\|f-f_{\varepsilon}\|\leq\varepsilon\). Then, by the triangle and Hölder inequalities, we have
\[\left|\left(\int_{0}^{x}e^{k_{+}(z)y}f(y)\,\mathrm{d}y\right)e^{-k _{+}(z)x}\right| \leq\int_{0}^{x}e^{\mathrm{Re}\,k_{+}(z)(y-x)}|f(y)-f_{\varepsilon} (y)|\,\mathrm{d}y+\int_{0}^{x}e^{\mathrm{Re}\,k_{+}(z)(y-x)}|f_{\varepsilon}(y )|\,\mathrm{d}y\] \[\leq\sqrt{\frac{1-e^{-2\mathrm{Re}\,k_{+}x}}{2\mathrm{Re}\,k_{+} (z)}}\|f-f_{\varepsilon}\|+\int_{0}^{x}e^{\mathrm{Re}\,k_{+}(z)(y-x)}|f_{ \varepsilon}(y)|\,\mathrm{d}y.\]
Since \(z\notin[V_{+},+\infty)\), we have \(\mathrm{Re}\,k_{+}(z)>0\), and applying the dominated convergence theorem to the integral \(\int_{0}^{x}e^{\mathrm{Re}\,k_{+}(z)(y-x)}|f_{\varepsilon}(y)|\,\mathrm{d}y\) yields that
\[\lim_{x\to+\infty}\left|\left(\int_{0}^{x}e^{k_{+}(z)y}f(y)\,\mathrm{d}y\right) e^{-k_{+}(z)x}\right|\leq\frac{\varepsilon}{\sqrt{2\mathrm{Re}\,k_{+}(z)}}.\]
From the arbitrariness of \(\varepsilon\), it leads to
\[\lim_{x\to+\infty}\left(\int_{0}^{x}e^{k_{+}(z)y}f(y)\,\mathrm{d}y\right)e^{-k_ {+}(z)x}=0. \tag{3.6}\]
In other words, we have shown that \(\lim\limits_{x\to+\infty}\beta_{+}(x)e^{-k_{+}(z)x}=0\). Therefore, from the formula of \(u_{+}\) in (3.2), we have the following equivalences
\[\lim\limits_{x\to+\infty}u_{+}(x)=0\Longleftrightarrow\lim\limits_{x\to+\infty} \alpha_{+}(x)e^{k_{+}(z)x}=0\Longleftrightarrow A_{+}=\frac{1}{2k_{+}(z)} \int_{0}^{+\infty}e^{-k_{+}(z)y}f(y)\,\mathrm{d}y. \tag{3.7}\]
Indeed, the second equivalence (whose left-to-right implication is easy to see) follows from the density of \(C_{c}^{\infty}(\mathbb{R})\) in \(L^{2}(\mathbb{R})\), the Hölder inequality and the dominated convergence theorem, as above:
\[\left|\left(\int_{x}^{+\infty}e^{-k_{+}(z)y}f(y)\,\mathrm{d}y \right)e^{k_{+}(z)x}\right| \leq\int_{x}^{+\infty}e^{\mathrm{Re}\,k_{+}(z)(x-y)}|f(y)-f_{ \varepsilon}(y)|\,\mathrm{d}y\] \[\qquad+\int_{x}^{+\infty}e^{\mathrm{Re}\,k_{+}(z)(x-y)}|f_{ \varepsilon}(y)|\,\mathrm{d}y\] \[\leq\frac{\varepsilon}{\sqrt{2\mathrm{Re}\,k_{+}(z)}}+\int_{0}^ {+\infty}e^{\mathrm{Re}\,k_{+}(z)(x-y)}\chi_{[x,+\infty)}(y)|f_{\varepsilon}( y)|\,\mathrm{d}y\] \[\xrightarrow{x\to+\infty}\frac{\varepsilon}{\sqrt{2\mathrm{Re}\,k _{+}(z)}}.\]
Similarly, we also have
\[\lim\limits_{x\to-\infty}u_{-}(x)=0\qquad\Longleftrightarrow\qquad B_{-}= \frac{1}{2k_{-}(z)}\int_{-\infty}^{0}e^{k_{-}(z)y}f(y)\,\mathrm{d}y. \tag{3.8}\]
Since \(u\) is found to satisfy regularity conditions
\[u_{+}(0)=u_{-}(0),\qquad u_{+}^{\prime}(0)=u_{-}^{\prime}(0),\]
we obtain the system
\[\begin{cases}A_{+}+B_{+}=A_{-}+B_{-},\\ k_{+}(z)A_{+}-k_{+}(z)B_{+}=k_{-}(z)A_{-}-k_{-}(z)B_{-},\end{cases}\]
which allows us to determine \(A_{-}\) and \(B_{+}\) in terms of \(A_{+}\) and \(B_{-}\):
\[\begin{cases}A_{-}=\frac{2k_{+}(z)}{k_{+}(z)+k_{-}(z)}A_{+}-\frac{k_{+}(z)-k_{ -}(z)}{k_{+}(z)+k_{-}(z)}B_{-},\\ B_{+}=\frac{k_{+}(z)-k_{-}(z)}{k_{+}(z)+k_{-}(z)}A_{+}+\frac{2k_{-}(z)}{k_{+}(z )+k_{-}(z)}B_{-}.\end{cases} \tag{3.9}\]
Here, \(k_{+}(z)+k_{-}(z)\neq 0\) for all \(z\notin[V_{+},+\infty)\cup[V_{-},+\infty)\) because \(\mathrm{Re}\,\left(k_{+}(z)+k_{-}(z)\right)>0\) (from the choice of the principal branch of the square root). By replacing the values of the constants \(A_{\pm}\), \(B_{\pm}\) from (3.7), (3.8) and (3.9) into the formula of \(u_{\pm}\) in (3.2), we have
\[u_{+}(x)= \frac{1}{2k_{+}(z)}\int_{0}^{+\infty}e^{-k_{+}(z)|x-y|}f(y)\, \mathrm{d}y+\frac{k_{+}(z)-k_{-}(z)}{2k_{+}(z)(k_{+}(z)+k_{-}(z))}\int_{0}^{+ \infty}e^{-k_{+}(z)(x+y)}f(y)\,\mathrm{d}y\] \[+\frac{1}{k_{+}(z)+k_{-}(z)}\int_{-\infty}^{0}e^{-k_{+}(z)x+k_{-} (z)y}f(y)\,\mathrm{d}y,\qquad x>0,\] \[u_{-}(x)= \frac{1}{2k_{-}(z)}\int_{-\infty}^{0}e^{-k_{-}(z)|x-y|}f(y)\, \mathrm{d}y-\frac{k_{+}(z)-k_{-}(z)}{2k_{-}(z)(k_{+}(z)+k_{-}(z))}\int_{- \infty}^{0}e^{k_{-}(z)(x+y)}f(y)\,\mathrm{d}y\] \[+\frac{1}{k_{+}(z)+k_{-}(z)}\int_{0}^{+\infty}e^{k_{-}(z)x-k_{+} (z)y}f(y)\,\mathrm{d}y,\qquad x<0.\]
Thus, given \(f\in L^{2}(\mathbb{R})\), we constructed a solution \(u\) of the differential equation \((\mathscr{L}-z)u=f\) that has the integral form
\[u(x)=\int_{\mathbb{R}}\mathscr{R}_{z}(x,y)f(y)\,\mathrm{d}y,\]
where \(\mathcal{R}_{z}(x,y)\) is given in the statement of Proposition 2.2. After having a solution \(u\) for the resolvent equation, we need to show that \(u\in L^{2}(\mathbb{R})\) and this can be done by using the Schur test, cf. [14, Lem. 7.1]. We will check that
\[\sup_{x\in\mathbb{R}}\int_{\mathbb{R}}|\mathcal{R}_{z}(x,y)|\,\mathrm{d}y<+ \infty\qquad\text{ and }\qquad\sup_{y\in\mathbb{R}}\int_{\mathbb{R}}|\mathcal{R}_{z}(x,y)|\, \mathrm{d}x<+\infty. \tag{3.10}\]
After noticing that \(\mathcal{R}_{z}(x,y)\) is symmetric, _i.e.,_\(\mathcal{R}_{z}(x,y)=\mathcal{R}_{z}(y,x)\) for almost every \((x,y)\in\mathbb{R}^{2}\), we just need to check the first condition in (3.10). Directly from the formula of the kernel \(\mathcal{R}_{z}(x,y)\), we have, for all \(x>0\),
\[\int_{\mathbb{R}}|\mathcal{R}_{z}(x,y)|\,\mathrm{d}y\leq \frac{1}{|k_{+}(z)+k_{-}(z)|}\int_{-\infty}^{0}e^{-\mathrm{Re}\,k _{+}(z)x+\mathrm{Re}\,k_{-}(z)y}\,\mathrm{d}y+\frac{1}{2|k_{+}(z)|}\int_{0}^{ +\infty}e^{-\mathrm{Re}\,k_{+}(z)|x-y|}\,\mathrm{d}y\] \[+\frac{|k_{+}(z)-k_{-}(z)|}{2|k_{+}(z)||k_{+}(z)+k_{-}(z)|}\int_ {0}^{+\infty}e^{-\mathrm{Re}\,k_{+}(z)(x+y)}\,\mathrm{d}y\] \[\leq \frac{1}{\mathrm{Re}\,k_{-}(z)|k_{+}(z)+k_{-}(z)|}+\frac{1}{ \mathrm{Re}\,k_{+}(z)|k_{+}(z)|}+\frac{|k_{+}(z)-k_{-}(z)|}{2\mathrm{Re}\,k_{+ }(z)|k_{+}(z)||k_{+}(z)+k_{-}(z)|}.\]
In the same way, for \(x<0\), we obtain
\[\int_{\mathbb{R}}|\mathcal{R}_{z}(x,y)|\,\mathrm{d}y\leq \frac{1}{\mathrm{Re}\,k_{+}(z)|k_{+}(z)+k_{-}(z)|}+\frac{1}{ \mathrm{Re}\,k_{-}(z)|k_{-}(z)|}+\frac{|k_{+}(z)-k_{-}(z)|}{2\mathrm{Re}\,k_{- }(z)|k_{+}(z)||k_{+}(z)+k_{-}(z)|}.\]
Both the right hand sides of the above bounds for the integral \(\int_{\mathbb{R}}|\mathcal{R}_{z}(x,y)|\,\mathrm{d}y\) are finite provided \(z\in\mathbb{C}\setminus([V_{+},+\infty)\cup[V_{-},+\infty))\). Thus, \(u\in L^{2}(\mathbb{R})\) and then \(u\) will automatically belong to \(H^{2}(\mathbb{R})\) since \(u^{\prime\prime}=(V-z)u-f\in L^{2}(\mathbb{R})\).
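As a cross-check of the explicit formulas for \(u_{\pm}\) obtained above, the following sketch (an illustration only; the values of \(V_{\pm}\), \(z\), the source term \(f\) and all grid parameters are arbitrary choices) compares the integral representation with a direct finite-difference solve of \((\mathscr{L}-z)u=f\) on a truncated interval.

```python
# Cross-check of the resolvent formula (illustration only).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

Vp, Vm = 2.0 + 2.0j, -1.0 - 1.0j
z = -3.0 + 0.5j                             # away from [V_+,+inf) and [V_-,+inf)
kp, km = np.sqrt(Vp - z), np.sqrt(Vm - z)   # principal branch, Re k_pm > 0

def f(y):
    return np.exp(-y**2)                    # smooth, rapidly decaying right-hand side

def R(x, y):                                # kernel read off from u_+ and u_- above
    if x > 0 and y > 0:
        return np.exp(-kp*abs(x - y))/(2*kp) + (kp - km)*np.exp(-kp*(x + y))/(2*kp*(kp + km))
    if x < 0 and y < 0:
        return np.exp(-km*abs(x - y))/(2*km) - (kp - km)*np.exp(km*(x + y))/(2*km*(kp + km))
    if x > 0 and y < 0:
        return np.exp(-kp*x + km*y)/(kp + km)
    return np.exp(km*x - kp*y)/(kp + km)    # remaining case x < 0, y > 0

# finite-difference solve of -u'' + (V - z)u = f on [-L, L] with Dirichlet ends
L, N = 12.0, 6000
h = 2*L/N
x = -L + (np.arange(N) + 0.5)*h             # staggered grid: no node at the jump x = 0
V = np.where(x > 0, Vp, Vm)
A = diags([-np.ones(N - 1), 2*np.ones(N), -np.ones(N - 1)], [-1, 0, 1], format="csc")/h**2 \
    + diags(V - z, 0, format="csc")
u_fd = spsolve(A, f(x).astype(complex))

# integral representation u(x) = int R(x, y) f(y) dy at a few sample points (midpoint rule)
samples = [-3.0, -0.7, 0.9, 2.5]
u_int = [h*sum(R(s, yj)*f(yj) for yj in x) for s in samples]
u_ref = np.interp(samples, x, u_fd.real) + 1j*np.interp(samples, x, u_fd.imag)
print("max discrepancy:", max(abs(a - b) for a, b in zip(u_int, u_ref)))
# the discrepancy is of the order of the discretization error and shrinks as N grows
```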
### Characterization of the spectrum: Proof of Theorem 2.4
Let us consider \(\xi\in C_{c}^{\infty}(\mathbb{R})\) such that \(\|\xi\|_{L^{2}}=1\) and its support lives in \((-1,1)\). We set
\[\xi_{n}^{\pm}(x)\coloneqq\frac{1}{\sqrt{n}}\xi\left(\frac{x}{n}\mp n\right).\]
For \(n\geq 1\), it is not hard to see that \(\operatorname{Supp}\xi_{n}^{+}\subset(n^{2}-n,n^{2}+n)\subset\mathbb{R}_{+}\) and \(\operatorname{Supp}\xi_{n}^{-}\subset(-n^{2}-n,-n^{2}+n)\subset\mathbb{R}_{-}\). Let us fix arbitrary \(z_{\pm}\in[V_{\pm},+\infty)\) and define the sequences \(u_{n}^{\pm}\) as follows
\[u_{n}^{\pm}(x)\coloneqq\xi_{n}^{\pm}(x)e^{k_{\pm}(z_{\pm})x}.\]
Notice that \(k_{\pm}(z_{\pm})=\sqrt{z_{\pm}-V_{\pm}}\in i\overline{\mathbb{R}_{+}}\), it leads to the fact that \(u_{n}^{\pm}\) is normalized:
\[\|u_{n}^{\pm}\|_{L^{2}}^{2}=\int_{\mathbb{R}}\frac{1}{n}\left|\xi\left(\frac{x} {n}\mp n\right)\right|^{2}\,\mathrm{d}x=\|\xi\|_{L^{2}}^{2}=1.\]
Since \(e^{k_{\pm}(z_{\pm})x}\) satisfies \((\mathscr{L}-z_{\pm})e^{k_{\pm}(z_{\pm})x}=0\) on \(\mathbb{R}_{\pm}\), by taking the support of \(\xi_{n}^{\pm}\) into account, we obtain
\[(\mathscr{L}-z_{\pm})u_{n}^{\pm}(x) =\xi_{n}^{\pm}(x)(\mathscr{L}-z_{\pm})e^{k_{\pm}(z_{\pm})x}+\left[\mathscr{L}-z_{\pm},\xi_{n}^{\pm}(x)\right]e^{k_{\pm}(z_{\pm})x}\] \[=\left[-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}},\xi_{n}^{\pm}(x)\right]e^{k_{\pm}(z_{\pm})x}\] \[=\left(-\frac{\mathrm{d}^{2}\xi_{n}^{\pm}}{\mathrm{d}x^{2}}(x)-2k_{\pm}(z_{\pm})\frac{\mathrm{d}\xi_{n}^{\pm}}{\mathrm{d}x}(x)\right)e^{k_{\pm}(z_{\pm})x}.\]
Here \([A,B]\coloneqq AB-BA\) is the notation of the commutator. Notice that, for each \(k\in\{1,2\}\), we have
\[\left\|\frac{\mathrm{d}^{k}\xi_{n}^{\pm}}{\mathrm{d}x^{k}}(x)\right\|^{2}=\int_{\pm n^{2}-n}^{\pm n^{2}+n}\frac{1}{n^{2k+1}}\left|\xi^{(k)}\left(\frac{x}{n}\mp n\right)\right|^{2}\,\mathrm{d}x=\frac{1}{n^{2k}}\int_{-1}^{1}\left|\xi^{(k)}(x)\right|^{2}\,\mathrm{d}x\xrightarrow{n\to+\infty}0.\]
It yields that \(\|(\mathscr{L}-z_{\pm})u_{n}^{\pm}(x)\|\xrightarrow{n\to+\infty}0\). In other words, \((u_{n}^{+})_{n\geq 1}\) forms a Weyl sequence for \(z_{+}\) and \((u_{n}^{-})_{n\geq 1}\) forms a Weyl sequence for \(z_{-}\). Using for instance [6, Lem. 3.3], we have \(z_{\pm}\in\sigma(\mathscr{L})\), and from the arbitrariness of \(z_{\pm}\) in \([V_{\pm},+\infty)\) we conclude that
\[[V_{+},+\infty)\cup[V_{-},+\infty)\subset\sigma(\mathscr{L}).\]
From Proposition 2.2, we deduce that
\[\sigma(\mathscr{L})\subset[V_{+},+\infty)\cup[V_{-},+\infty).\]
Then the conclusion on the spectrum of \(\mathscr{L}\) follows. Since \(\mathscr{L}\) is \(J\)-self-adjoint, its residual spectrum is empty (see [18, Section 5.2.5.4]). Now we check that no point of \([V_{+},+\infty)\cup[V_{-},+\infty)\) can be an eigenvalue of \(\mathscr{L}\), so that \(\sigma_{p}(\mathscr{L})=\emptyset\). Take \(z\in[V_{+},+\infty)\) and assume that \(z\) is an eigenvalue of \(\mathscr{L}\) with associated eigenfunction \(u\), _i.e._, \((\mathscr{L}-z)u=0\). From (3.2), the restrictions of \(u\) to \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\), denoted by, respectively, \(u_{+}\) and \(u_{-}\), have the expression
\[u_{\pm}(x)=A_{\pm}e^{k_{\pm}(z)x}+B_{\pm}e^{-k_{\pm}(z)x},\qquad\text{for }\pm x>0. \tag{3.11}\]
Since \(z\in[V_{+},+\infty)\), then \(\operatorname{Re}k_{+}(z)=0\) and thus,
\[|u_{+}(x)|^{2}=|A_{+}|^{2}+|B_{+}|^{2}+2\operatorname{Re}\left(A_{+}\overline {B}_{+}e^{i\operatorname{Im}k_{+}(z)x}\right)\geq\left(|A_{+}|-|B_{+}|\right)^ {2}.\]
Since \(u_{+}\) belongs to \(L^{2}(\mathbb{R}_{+})\) and \(\mathbb{R}_{+}\) is unbounded, this implies that \(|A_{+}|=|B_{+}|\). Writing \(A_{+}=|A_{+}|e^{i\operatorname{Arg}A_{+}}\), \(B_{+}=|B_{+}|e^{i\operatorname{Arg}B_{+}}\), where \(\operatorname{Arg}\,(w)\) denotes the principal argument of a complex number \(w\), we obtain
\[|u_{+}(x)|^{2}= 2|A_{+}|^{2}\left(1+\cos\left(\operatorname{Arg}\,A_{+}- \operatorname{Arg}\,B_{+}+\operatorname{Im}k_{+}(z)x\right)\right)\] \[= 4|A_{+}|^{2}\cos^{2}\left(\frac{\operatorname{Arg}\,A_{+}- \operatorname{Arg}\,B_{+}+\operatorname{Im}k_{+}(z)x}{2}\right).\]
Since \(u_{+}\in L^{2}(\mathbb{R}_{+})\), it follows that \(A_{+}=0\) (a function of this trigonometric form is not square-integrable on an unbounded domain such as \(\mathbb{R}_{+}\) unless it vanishes identically), and thus \(u_{+}=0\) on \(\mathbb{R}_{+}\). Similarly, \(u_{-}\in L^{2}(\mathbb{R}_{-})\) implies that \(u_{-}=0\) on \(\mathbb{R}_{-}\), and thus we obtain a contradiction. Therefore \([V_{+},+\infty)\) is not a subset of the point spectrum \(\sigma_{p}(\mathscr{L})\). The same argument applies to \([V_{-},+\infty)\). Therefore, the spectrum of \(\mathscr{L}\) is purely continuous.
In order to obtain the statement on the essential spectra of \(\mathscr{L}\), we show that the above sequences \(u_{n}^{\pm}\) are singular, _i.e._, that they converge weakly to zero. Indeed, take \(f\in L^{2}(\mathbb{R})\); by the density of \(C_{c}^{\infty}(\mathbb{R})\) in \(L^{2}(\mathbb{R})\), for arbitrary \(\varepsilon>0\) there exists \(f_{\varepsilon}\in C_{c}^{\infty}(\mathbb{R})\) such that \(\|f-f_{\varepsilon}\|\leq\varepsilon\). Then, by the triangle, Cauchy-Schwarz and Hölder inequalities, together with \(\|u_{n}^{\pm}\|=1\) and the definition of \(\xi_{n}^{\pm}\), we obtain

\[|\langle u_{n}^{\pm},f\rangle|\leq|\langle u_{n}^{\pm},f-f_{\varepsilon}\rangle|+|\langle u_{n}^{\pm},f_{\varepsilon}\rangle|\leq\|f-f_{\varepsilon}\|+\frac{1}{\sqrt{n}}\|\xi\|_{L^{\infty}}\|f_{\varepsilon}\|_{L^{1}}\leq\varepsilon+\frac{1}{\sqrt{n}}\|\xi\|_{L^{\infty}}\|f_{\varepsilon}\|_{L^{1}}.\]

Letting \(n\to+\infty\) and using the arbitrariness of \(\varepsilon>0\), it follows that \(\langle u_{n}^{\pm},f\rangle\xrightarrow[n\to+\infty]{}0\). Since \(f\) is arbitrary in \(L^{2}(\mathbb{R})\), we have proved the weak convergence of \(u_{n}^{\pm}\) to zero. Thanks to [11, Theo. IX.1.3], we deduce that \(\sigma_{\mathrm{e}2}(\mathscr{L})=\sigma(\mathscr{L})\). Since \(\mathscr{L}\) is \(J\)-self-adjoint, the first four essential spectra \(\sigma_{\mathrm{e}k}(\mathscr{L})\) (\(k\in\{1,2,3,4\}\)) are identical (see [11, Theo. IX.1.6]). Since \(\mathbb{C}\setminus\sigma_{\mathrm{e}1}(\mathscr{L})=\rho(\mathscr{L})\) is connected, the fifth essential spectrum coincides with them as well (cf. [18, Prop. 5.4.4]).
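The scalings driving the Weyl-sequence argument are easy to observe numerically. The sketch below is an illustration only: the bump \(\xi\) and the value taken for \(|k_{\pm}(z_{\pm})|\) are arbitrary concrete choices, and the printed quantity is the upper bound \(\|\xi_{n}^{\prime\prime}\|+2|k_{\pm}(z_{\pm})|\,\|\xi_{n}^{\prime}\|\) on \(\|(\mathscr{L}-z_{\pm})u_{n}^{\pm}\|\), which tends to zero.

```python
# Decay of the Weyl-sequence defect (illustration only).
import numpy as np

def xi(t):                             # a concrete smooth bump supported in (-1, 1)
    out = np.zeros_like(t)
    m = np.abs(t) < 1
    out[m] = np.exp(-1.0/(1.0 - t[m]**2))
    return out

h = 1e-4
t = np.arange(-1.2, 1.2, h)
x0 = xi(t)
x0 /= np.sqrt(np.sum(x0**2)*h)         # normalize ||xi||_{L^2} = 1

x1 = np.gradient(x0, h)                # xi'
x2 = np.gradient(x1, h)                # xi''
norm1 = np.sqrt(np.sum(x1**2)*h)       # ||xi'||
norm2 = np.sqrt(np.sum(x2**2)*h)       # ||xi''||

k_abs = 1.3                            # an arbitrary value of |k_pm(z_pm)|
for n in (5, 10, 20, 40, 80):
    # |e^{k_pm(z_pm) x}| = 1 on the half-lines, and ||xi_n^{(k)}|| = n^{-k} ||xi^{(k)}||,
    # so ||(L - z_pm) u_n^pm|| <= n^{-2} ||xi''|| + 2 |k_pm(z_pm)| n^{-1} ||xi'||
    print(f"n = {n:3d}:  defect bound = {norm2/n**2 + 2*k_abs*norm1/n:.5f}")
```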
## 4. Pseudospectral estimates
From now on, unless otherwise stated, for simplicity we will denote \(z=\tau+i\delta\) with \((\tau,\delta)\in\mathbb{R}_{+}\times|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\), and each time we write an asymptotic formula with a big \(\mathcal{O}\) notation, we understand that this formula holds as \(\tau\to+\infty\) and uniformly for all \(\delta\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\).
### Resolvent estimate inside the numerical range: Proof of Theorem 2.7
We rewrite \((\mathscr{L}-z)^{-1}\) as the sum of two integral operators
\[(\mathscr{L}-z)^{-1}=\mathscr{R}_{1,z}+\mathscr{R}_{2,z},\]
where
\[\mathscr{R}_{1,z}f(x)\coloneqq\int_{\mathbb{R}}\mathscr{R}_{1,z}(x,y)f(y)\, \mathrm{d}y,\qquad\mathscr{R}_{2,z}f(x)\coloneqq\int_{\mathbb{R}}\mathscr{R}_ {2,z}(x,y)f(y)\,\mathrm{d}y, \tag{4.1}\]
whose kernel are given by
\[\mathscr{R}_{1,z}(x,y) =\begin{cases}K_{++}(z)e^{-k_{+}(z)(x+y)},&\text{for $\{x>0,y>0\}$;}\\ K_{--}(z)e^{k_{-}(z)(x+y)},&\text{for $\{x<0,y<0\}$;}\\ K_{+-}(z)e^{-k_{+}(z)x+k_{-}(z)y},&\text{for $\{x>0,y<0\}$;}\\ K_{+-}(z)e^{k_{-}(z)x-k_{+}(z)y},&\text{for $\{x<0,y>0\}$;}\end{cases} \tag{4.2}\] \[\mathscr{R}_{2,z}(x,y) =\begin{cases}\frac{1}{2k_{+}(z)}e^{-k_{+}(z)|x-y|},&\text{for $\{x>0,y>0\}$;}\\ \frac{1}{2k_{-}(z)}e^{-k_{-}(z)|x-y|},&\text{for $\{x<0,y<0\}$;}\\ 0,&\text{for $\{xy<0\}$.}\end{cases} \tag{4.3}\]
Here, we denote
\[K_{++}(z) \coloneqq\frac{k_{+}(z)-k_{-}(z)}{2k_{+}(z)\left(k_{+}(z)+k_{-}(z )\right)},\qquad K_{--}(z)\coloneqq-\frac{k_{+}(z)-k_{-}(z)}{2k_{-}(z)\left(k _{+}(z)+k_{-}(z)\right)}, \tag{4.4}\] \[K_{+-}(z) \coloneqq\frac{1}{k_{+}(z)+k_{-}(z)}.\]
Our strategy is to show that the norm of \(\mathscr{R}_{1,z}\) will play the main role, while the norm of \(\mathscr{R}_{2,z}\) is just a small perturbation compared with \(\mathscr{R}_{1,z}\) in the divergence of the resolvent norm inside the numerical range. To do that, two-sided estimate of the norm of \(\mathscr{R}_{1,z}\) will be clearly established with the help of the following optimization lemma.
**Lemma 4.1**.: _Let \(A,B,C\in\mathbb{R}\), consider a function of two variables_
\[f(x,y)=Ax^{2}+Bxy+Cy^{2}\]
_on the circle \(x^{2}+y^{2}=1\). Then it attains the maximum on this circle and_
\[\max_{x^{2}+y^{2}=1}f(x,y)=\frac{A+C+\sqrt{(A-C)^{2}+B^{2}}}{2}.\]
Proof.: Let us write \(x=\cos(\theta)\) and \(y=\sin(\theta)\) for \(\theta\in[0,2\pi)\) and write
\[f(\cos(\theta),\sin(\theta))= A\cos^{2}(\theta)+B\sin(\theta)\cos(\theta)+C\sin^{2}(\theta)\] \[= \frac{1}{2}\left(A+C+(A-C)\cos(2\theta)+B\sin(2\theta)\right)\]
By using the Cauchy-Schwarz inequality, we obtain the upper bound, for every \(\theta\in[0,2\pi)\),
\[f(\cos(\theta),\sin(\theta))\leq\frac{1}{2}\left(A+C+\sqrt{(A-C)^{2}+B^{2}} \right),\]
and the equality can be obtained when
\[(\cos(2\theta),\sin(2\theta))=\pm\left(\frac{A-C}{\sqrt{(A-C)^{2}+B^{2}}}, \frac{B}{\sqrt{(A-C)^{2}+B^{2}}}\right).\]
Then, the conclusion of the lemma follows.
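As a quick sanity check (not needed for the proof), the closed-form maximum of Lemma 4.1 can be compared with a brute-force evaluation on the unit circle; the random coefficients below are arbitrary test values.

```python
# Brute-force confirmation of Lemma 4.1 (illustration only).
import numpy as np

rng = np.random.default_rng(1)
theta = np.linspace(0.0, 2*np.pi, 200001)
for _ in range(5):
    A, B, C = rng.normal(size=3)
    brute = np.max(A*np.cos(theta)**2 + B*np.sin(theta)*np.cos(theta) + C*np.sin(theta)**2)
    closed = (A + C + np.hypot(A - C, B))/2
    assert abs(brute - closed) < 1e-6
print("Lemma 4.1 maximum formula confirmed on random samples")
```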
**Proposition 4.2**.: _Let \(\mathcal{R}_{1,z}\) be the integral operator with the kernel \(\mathcal{R}_{1,z}(x,y)\) defined as in (4.2). For all \(z\in\rho(\mathscr{L})\), \(\mathcal{R}_{1,z}\) is a bounded operator on \(L^{2}(\mathbb{R})\) whose norm satisfies_
\[\begin{split}\|\mathcal{R}_{1,z}\|&\leq\frac{1}{ \sqrt{2}}\sqrt{\sqrt{(A(z)-C(z))^{2}+B(z)^{2}}+A(z)+C(z)+2D(z)},\\ \|\mathcal{R}_{1,z}\|&\geq\frac{1}{\sqrt{2}}\sqrt{ \sqrt{(A(z)-C(z))^{2}+\widetilde{B}(z)^{2}}+A(z)+C(z)+2D(z)},\end{split} \tag{4.5}\]
_where_
\[\begin{split} A(z)&\coloneqq\frac{|K_{++}(z)|^{2}}{ 4(\operatorname{Re}k_{+}(z))^{2}},\qquad C(z)\coloneqq\frac{|K_{--}(z)|^{2}}{4 (\operatorname{Re}k_{-}(z))^{2}},\qquad D(z)\coloneqq\frac{|K_{+-}(z)|^{2}}{4 (\operatorname{Re}k_{-}(z))(\operatorname{Re}k_{+}(z))},\\ B(z)&\coloneqq\frac{|K_{+-}(z)|}{2\sqrt{(\operatorname {Re}k_{-}(z))(\operatorname{Re}k_{+}(z))}}\left(\frac{|K_{++}(z)|}{ \operatorname{Re}k_{+}(z)}+\frac{|K_{--}(z)|}{\operatorname{Re}k_{-}(z)} \right),\\ \widetilde{B}(z)&\coloneqq\frac{1}{2\sqrt{( \operatorname{Re}k_{-}(z))(\operatorname{Re}k_{+}(z))}}\left(\frac{ \operatorname{Re}\,\left(K_{++}(z)\overline{K_{+-}(z)}\right)}{ \operatorname{Re}k_{+}(z)}+\frac{\operatorname{Re}\,\left(K_{--}(z) \overline{K_{+-}(z)}\right)}{\operatorname{Re}k_{-}(z)}\right).\end{split}\]
Proof.: Consider \(f\in L^{2}(\mathbb{R})\) such that \(\|f\|=1\), we have
\[\begin{split}\|\mathcal{R}_{1,z}f\|^{2}=&\int_{ \mathbb{R}}|\mathcal{R}_{1,z}f(x)|^{2}\,\mathrm{d}x\\ =&\int_{0}^{+\infty}\left|\int_{-\infty}^{0}K_{+-}(z )e^{-k_{+}(z)x+k_{-}(z)y}f(y)\mathrm{d}y+\int_{0}^{+\infty}K_{++}(z)e^{-k_{+}( z)(x+y)}f(y)\,\mathrm{d}y\right|^{2}\,\mathrm{d}x\\ &+\int_{-\infty}^{0}\left|\int_{-\infty}^{0}K_{--}(z)e^{k_{-}(z)( x+y)}f(y)\mathrm{d}y+\int_{0}^{+\infty}K_{+-}(z)e^{k_{-}(z)x-k_{+}(z)y}f(y)\, \mathrm{d}y\right|^{2}\,\mathrm{d}x\\ =&\frac{1}{2\operatorname{Re}k_{+}(z)}\left|\int_{- \infty}^{0}K_{+-}(z)e^{k_{-}(z)y}f(y)\mathrm{d}y+\int_{0}^{+\infty}K_{++}(z)e^{ -k_{+}(z)y}f(y)\,\mathrm{d}y\right|^{2}\\ &+\frac{1}{2\operatorname{Re}k_{-}(z)}\left|\int_{-\infty}^{0}K_{ --}(z)e^{k_{-}(z)y}f(y)\mathrm{d}y+\int_{0}^{+\infty}K_{+-}(z)e^{-k_{+}(z)y}f( y)\,\mathrm{d}y\right|^{2}.\end{split}\]
Using Hölder's inequality and remembering that \(\|f\|_{L^{2}(\mathbb{R}_{-})}^{2}+\|f\|_{L^{2}(\mathbb{R}_{+})}^{2}=1\), we obtain
\[\begin{split}\|\mathcal{R}_{1,z}f\|^{2}\leq&\frac{1 }{2\operatorname{Re}k_{+}(z)}\left(\frac{|K_{+-}(z)|}{\sqrt{2\operatorname{Re }k_{-}(z)}}\|f\|_{L^{2}(\mathbb{R}_{-})}+\frac{|K_{++}(z)|}{\sqrt{2\operatorname {Re}k_{+}(z)}}\|f\|_{L^{2}(\mathbb{R}_{+})}\right)^{2}\\ &+\frac{1}{2\operatorname{Re}k_{-}(z)}\left(\frac{|K_{--}(z)|}{ \sqrt{2\operatorname{Re}k_{-}(z)}}\|f\|_{L^{2}(\mathbb{R}_{-})}+\frac{|K_{+-} (z)|}{\sqrt{2\operatorname{Re}k_{+}(z)}}\|f\|_{L^{2}(\mathbb{R}_{+})}\right)^{ 2}\\ =&\frac{|K_{++}(z)|^{2}}{4(\operatorname{Re}k_{+}(z))^{ 2}}\|f\|_{L^{2}(\mathbb{R}_{+})}^{2}+\frac{|K_{--}(z)|^{2}}{4(\operatorname{ Re}k_{-}(z))^{2}}\|f\|_{L^{2}(\mathbb{R}_{-})}^{2}+\frac{|K_{+-}(z)|^{2}}{4( \operatorname{Re}k_{-}(z))(\operatorname{Re}k_{+}(z))}\\ &+\frac{|K_{+-}(z)|}{2\sqrt{(\operatorname{Re}k_{-}(z))( \operatorname{Re}k_{+}(z))}}\left(\frac{|K_{--}(z)|}{\operatorname{Re}k_{-}(z )}+\frac{|K_{++}(z)|}{\operatorname{Re}k_{+}(z)}\right)\|f\|_{L^{2}(\mathbb{R} _{-})}\|f\|_{L^{2}(\mathbb{R}_{+})}.\end{split}\]
By applying Lemma 4.1, it yields that
\[\|\mathcal{R}_{1,z}f\|^{2}\leq\frac{1}{2}\left(A(z)+C(z)+\sqrt{(A(z)-C(z))^{2} +B(z)^{2}}+2D(z)\right),\]
where \(A(z),B(z),C(z),D(z)\) are defined as in the statement of this proposition. For all \(z\in\rho(\mathscr{L})\) we have \(\operatorname{Re}k_{\pm}(z)>0\), so all the coefficients \(A(z),B(z),C(z),D(z)\) are finite; for that reason, \(\mathcal{R}_{1,z}\) is bounded and the upper bound in (4.5) is obtained. In order to get the lower bound, we introduce a test function
\[f_{0}(y)=\begin{cases}\alpha e^{-\overline{k_{+}(z)}y}&\text{for }y\geq 0,\\ \beta e^{\overline{k_{-}(z)}y}&\text{for }y\leq 0,\end{cases} \tag{4.6}\]
where \(\alpha,\beta\in\mathbb{R}\) will be determined later.
By straightforward computation, the action of the operator \(\mathcal{R}_{1,z}f_{0}\) is given by
\[\left[\mathcal{R}_{1,z}f_{0}\right](x)=\begin{cases}e^{-k_{+}(z)x}\left(\frac{ \alpha K_{++}(z)}{2\text{Re}\,k_{+}(z)}+\frac{\beta K_{+-}(z)}{2\text{Re}\,k_{ -}(z)}\right)&\text{for }x>0,\\ e^{k_{-}(z)x}\left(\frac{\alpha K_{+-}(z)}{2\text{Re}\,k_{+}(z)}+\frac{\beta K _{--}(z)}{2\text{Re}\,k_{-}(z)}\right)&\text{for }x<0.\end{cases}\]
It yields that
\[\|\mathcal{R}_{1,z}f_{0}\|^{2}=\frac{1}{2\text{Re}\,k_{+}(z)}\left|\frac{ \alpha K_{++}(z)}{2\text{Re}\,k_{+}(z)}+\frac{\beta K_{+-}(z)}{2\text{Re}\,k _{-}(z)}\right|^{2}+\frac{1}{2\text{Re}\,k_{-}(z)}\left|\frac{\alpha K_{+-}(z )}{2\text{Re}\,k_{+}(z)}+\frac{\beta K_{--}(z)}{2\text{Re}\,k_{-}(z)}\right|^ {2}.\]
By the definition of \(f_{0}\), we have
\[\|f_{0}\|^{2}=\frac{|\alpha|^{2}}{2\text{Re}\,k_{+}(z)}+\frac{|\beta|^{2}}{2 \text{Re}\,k_{-}(z)}.\]
To normalize the norm of \(f_{0}\), let us write
\[\alpha=\sqrt{2\text{Re}\,k_{+}(z)}a,\qquad\beta=\sqrt{2\text{Re}\,k_{-}(z)}b,\text{ with some }a,b\in\mathbb{R}:a^{2}+b^{2}=1.\]
Then, our problem turns to find \((a,b)\in\mathbb{R}\) such that the quantity
\[\|\mathcal{R}_{1,z}f_{0}\|^{2}= \frac{1}{2\text{Re}\,k_{+}(z)}\left|\frac{aK_{++}(z)}{\sqrt{2 \text{Re}\,k_{+}(z)}}+\frac{bK_{+-}(z)}{\sqrt{2\text{Re}\,k_{-}(z)}}\right|^{ 2}+\frac{1}{2\text{Re}\,k_{-}(z)}\left|\frac{aK_{+-}(z)}{\sqrt{2\text{Re}\,k _{+}(z)}}+\frac{bK_{--}(z)}{\sqrt{2\text{Re}\,k_{-}(z)}}\right|^{2}\] \[= \frac{a^{2}|K_{++}(z)|^{2}}{4(\text{Re}\,k_{+}(z))^{2}}+\frac{b^{ 2}|K_{--}(z)|^{2}}{4(\text{Re}\,k_{-}(z))^{2}}+\frac{|K_{+-}(z)|^{2}}{4(\text{ Re}\,k_{-}(z))(\text{Re}\,k_{+}(z))}\] \[+\frac{1}{2\sqrt{(\text{Re}\,k_{-}(z))(\text{Re}\,k_{+}(z))}} \left(\frac{\text{Re}\,\left(K_{++}(z)\overline{K_{+-}(z)}\right)}{\text{Re} \,k_{+}(z)}+\frac{\text{Re}\,\left(K_{--}(z)\overline{K_{+-}(z)}\right)}{ \text{Re}\,k_{-}(z)}\right)ab\]
attains its maximum. Lemma 4.1 tells us that there exists such a couple \((a,b)\) and its maximum is
\[\|\mathcal{R}_{1,z}f_{0}\|^{2}=\frac{1}{2}\left(A(z)+C(z)+\sqrt{(A(z)-C(z))^{ 2}+\widetilde{B}(z)^{2}}+2D(z)\right).\]
With this choice of test function, the quantity \(\|\mathcal{R}_{1,z}f_{0}\|\) becomes the lower bound for the norm of \(\mathcal{R}_{1,z}\).
Having obtained explicit formulas for the two-sided bounds on the norm of \(\mathcal{R}_{1,z}\) in the above proposition, we now send \(z\) to infinity inside the numerical range of \(\mathscr{L}\) to determine the asymptotic behavior of these two-sided bounds. The following lemma provides the asymptotic formulas from which this behavior is deduced.
**Lemma 4.3**.: _Let \(z=\tau+i\delta\) with \(\tau>0\), \(\delta\in K\setminus\{\operatorname{Im}V_{+},\operatorname{Im}V_{-}\}\), where \(K\) is some compact set in \(\mathbb{R}\). Then, the following asymptotic formulas hold as \(\tau\to+\infty\) and uniformly for all
\(\delta\in K\setminus\{\operatorname{Im}V_{+},\operatorname{Im}V_{-}\}\):_
\[k_{+}(z)= \operatorname{sgn}(\operatorname{Im}V_{+}-\delta)\left[i\sqrt{ \tau}+\frac{(\operatorname{Im}V_{+}-\delta)-i\operatorname{Re}V_{+}}{2\sqrt{ \tau}}+\mathcal{O}\left(\frac{1}{|\tau|^{3/2}}\right)\right],\] \[k_{-}(z)= \operatorname{sgn}(\operatorname{Im}V_{-}-\delta)\left[i\sqrt{ \tau}+\frac{(\operatorname{Im}V_{-}-\delta)-i\operatorname{Re}V_{-}}{2\sqrt{ \tau}}+\mathcal{O}\left(\frac{1}{|\tau|^{3/2}}\right)\right],\]
_and_
\[\operatorname{Re}k_{+}(z)=\frac{|\operatorname{Im}V_{+}-\delta|}{2\sqrt{\tau }}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right),\qquad\quad \operatorname{Re}k_{-}(z)=\frac{|\operatorname{Im}V_{-}-\delta|}{2\sqrt{\tau }}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right).\]
Proof.: Let us give the proof for \(k_{+}(z)\) and \(\operatorname{Re}k_{+}(z)\); the asymptotics for \(k_{-}(z)\) and \(\operatorname{Re}k_{-}(z)\) are obtained by simply changing the plus sign into the minus sign. We start with \(k_{+}(z)\). Recall that the square root fixed in this article is the principal branch; thus, for \(\tau>0\), we have
\[k_{+}(z)= \sqrt{\operatorname{Re}V_{+}-\tau+i(\operatorname{Im}V_{+}- \delta)}\] \[= i\sqrt{\tau}\operatorname{sgn}(\operatorname{Im}V_{+}-\delta) \sqrt{\frac{-\operatorname{Re}V_{+}-i(\operatorname{Im}V_{+}-\delta)}{\tau}+1}.\]
We consider a smooth complex-valued function \(F_{\delta}\) defined on \(\mathbb{R}\) by
\[F_{\delta}(x)=\sqrt{ax+1},\qquad a=-\operatorname{Re}V_{+}-i(\operatorname{Im }V_{+}-\delta).\]
By applying Taylor's theorem (for example, [16, Theo. 1.36]) for \(F_{\delta}\) expanding at zero, we have, for all \(x\) in some neighbourhood of zero,
\[F_{\delta}(x)=1+\frac{a}{2}x+\frac{x^{2}}{2}\int_{0}^{1}(1-s)F_{\delta}^{(2)}( xs)\,\mathrm{d}s.\]
The second derivative of \(F_{\delta}\) can be calculated explicitly as follows
\[F_{\delta}^{(2)}(x)=-\frac{a^{2}}{4\sqrt{(ax+1)^{3}}}.\]
By considering \(|x|\) sufficiently small and using the fact that \(|ax+1|\gtrsim 1\) uniformly for all \(\delta\in K\), we get that \(|F_{\delta}^{(2)}(x)|\lesssim 1\) uniformly for all \(\delta\in K\) and thus, as \(|x|\to 0\),
\[F_{\delta}(x)=1+\frac{a}{2}x+\mathcal{O}(|x|^{2})\qquad\text{ uniformly for all }\delta\in K.\]
Replacing \(x=\frac{1}{\tau}\), we obtain the asymptotic for \(k_{+}(z)\) as in statement of the lemma. Next, to obtain the expansion for \(\operatorname{Re}k_{+}(z)\), we use the algebraic formula for the square root of a non-real complex number \(w\):
\[\operatorname{Re}\sqrt{w}=\frac{|\operatorname{Im}w|}{\sqrt{2(|w|-\operatorname {Re}w)}}.\]
For \(\delta\neq\operatorname{Im}V_{+}\), we have
\[\operatorname{Re}k_{+}(z)=\frac{|\operatorname{Im}V_{+}-\delta|}{\sqrt{2}\sqrt {|k_{+}(z)|^{2}+\tau-\operatorname{Re}V_{+}}}. \tag{4.7}\]
Since \(|k_{+}(z)|^{2}=\sqrt{(\tau-\operatorname{Re}V_{+})^{2}+(\operatorname{Im}V_{+} -\delta)^{2}}\), we can show that
\[\sqrt{|k_{+}(z)|^{2}+\tau-\operatorname{Re}V_{+}}-\sqrt{2\tau}=\frac{-4( \operatorname{Re}V_{+})\tau+(\operatorname{Im}V_{+}-\delta)^{2}}{\left(\sqrt{ |k_{+}(z)|^{2}+\tau-\operatorname{Re}V_{+}}+\sqrt{2\tau}\right)(|k_{+}(z)|^{2} +\tau+\operatorname{Re}V_{+})}.\]
Therefore, as \(\tau\to+\infty\), we get
\[\sqrt{|k_{+}(z)|^{2}+\tau-\operatorname{Re}V_{+}}=\sqrt{2\tau}+\mathcal{O}\left( \frac{1}{\sqrt{\tau}}\right).\]
Replacing this into the denominator of the right hand side of (4.7), we deduce the asymptotic behavior of \(\operatorname{Re}k_{+}(z)\) as in the statement of the lemma.
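The expansions of Lemma 4.3 can also be observed numerically. In the sketch below, \(V_{+}\) and \(\delta\) are arbitrary illustrative values; both printed errors decay like \(\tau^{-3/2}\), in agreement with the remainder terms of the lemma.

```python
# Numerical illustration of Lemma 4.3 (illustration only).
import numpy as np

Vp = 1.5 + 2.0j
delta = 0.3                                  # delta != Im V_+
s = np.sign(Vp.imag - delta)

for tau in (1e2, 1e4, 1e6):
    z = tau + 1j*delta
    k_exact = np.sqrt(Vp - z)                # principal branch
    k_approx = s*(1j*np.sqrt(tau) + ((Vp.imag - delta) - 1j*Vp.real)/(2*np.sqrt(tau)))
    re_approx = abs(Vp.imag - delta)/(2*np.sqrt(tau))
    print(f"tau = {tau:.0e}:  |k_+ - approx| = {abs(k_exact - k_approx):.2e},  "
          f"|Re k_+ - approx| = {abs(k_exact.real - re_approx):.2e}")
```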
Now, we restrict ourselves to \(\delta\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\), note that we have the following relations
\[\operatorname{sgn}(\operatorname{Im}V_{+}-\delta)=\operatorname{sgn}( \operatorname{Im}V_{+}-\operatorname{Im}V_{-}),\qquad\operatorname{sgn}( \operatorname{Im}V_{-}-\delta)=-\operatorname{sgn}(\operatorname{Im}V_{+}- \operatorname{Im}V_{-}). \tag{4.8}\]
As a consequence, Lemma 4.3 produces the following asymptotic expansion
\[|k_{+}(z)|=\sqrt{\tau}\left(1+\mathcal{O}\left(\frac{1}{|\tau|} \right)\right), |k_{-}(z)|=\sqrt{\tau}\left(1+\mathcal{O}\left(\frac{1}{|\tau|} \right)\right), \tag{4.9}\] \[|k_{+}(z)+k_{-}(z)|=\frac{|V_{+}-V_{-}|}{2\sqrt{\tau}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|}\right)\right), |k_{+}(z)-k_{-}(z)|=2\sqrt{\tau}\left(1+\mathcal{O}\left(\frac{1}{|\tau|} \right)\right).\]
Using these expansions, we obtain the asymptotic formulas for \(|K_{++}(z)|,|K_{--}(z)|\) and \(|K_{+-}(z)|\) defined in (4.4):
\[|K_{++}(z)|=\frac{2\sqrt{\tau}}{|V_{+}-V_{-}|}\left(1+\mathcal{O} \left(\frac{1}{|\tau|}\right)\right),\ \ \ \ \ |K_{--}(z)|=\frac{2\sqrt{\tau}}{|V_{+}-V_{-}|}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|}\right)\right),\] \[|K_{+-}(z)|=\frac{2\sqrt{\tau}}{|V_{+}-V_{-}|}\left(1+\mathcal{O} \left(\frac{1}{|\tau|}\right)\right),\]
Next step, we compute the asymptotic expansions for \(A(z),B(z),C(z),D(z)\) and \(\widetilde{B}(z)\) appearing in Proposition 4.2. For \(A(z),C(z),D(z)\), it is easier to obtain their asymptotic formulas by using Lemma 4.3 for \(\operatorname{Re}k_{\pm}(z)\) and the above estimates for \(K_{++}(z)\), \(K_{--}(z)\) and \(K_{+-}(z)\):
\[A(z)= \frac{4\tau^{2}}{|V_{+}-V_{-}|^{2}|\operatorname{Im}V_{+}-\delta |^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right),\] \[C(z)= \frac{4\tau^{2}}{|V_{+}-V_{-}|^{2}|\operatorname{Im}V_{-}-\delta |^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right),\] \[D(z)= \frac{4\tau^{2}}{|V_{+}-V_{-}|^{2}|\operatorname{Im}V_{+}-\delta ||\operatorname{Im}V_{-}-\delta|}\left(1+\mathcal{O}\left(\frac{1}{|\tau|} \right)\right).\]
For \(B(z)\) and \(\widetilde{B}(z)\), we need more efforts, but it is a straightforward calculation:
\[B(z)= \frac{2\tau\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right)}{|V_{+}-V_{-}||\operatorname{Im}V_{+}-\delta|^{1/2}|\operatorname{Im}V_{-}-\delta|^{1/2}}\cdot\frac{4|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|\tau\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right)}{|V_{+}-V_{-}||\operatorname{Im}V_{+}-\delta||\operatorname{Im}V_{-}-\delta|}\] \[= \frac{8|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|\tau^{2}}{|V_{+}-V_{-}|^{2}|\operatorname{Im}V_{+}-\delta|^{3/2}|\operatorname{Im}V_{-}-\delta|^{3/2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right).\]
Here, in the second equality, we used (4.8) to obtain
\[|{\rm Im}\,V_{+}-\delta|+|{\rm Im}\,V_{-}-\delta|=|{\rm Im}\,V_{+}-{\rm Im}\,V_ {-}|\qquad\text{for all }\delta\in|({\rm Im}\,V_{+},{\rm Im}\,V_{-})|. \tag{4.10}\]
From the formula of \(\widetilde{B}(z)\), in order to obtain its asymptotics, we first need to calculate \({\rm Re}\,\left(K_{++}(z)\overline{K_{+-}(z)}\right)\) and \({\rm Re}\,\left(K_{--}(z)\overline{K_{+-}(z)}\right)\). From the definitions of \(K_{++}(z)\) and \(K_{+-}(z)\) in (4.4) and thanks to Lemma 4.3, (4.8) and (4.9), we get
\[{\rm Re}\,\left(K_{++}(z)\overline{K_{+-}(z)}\right)= \frac{1}{|k_{+}(z)+k_{-}(z)|^{2}}{\rm Re}\,\left(\frac{k_{+}(z)-k _{-}(z)}{2k_{+}(z)}\right)\] \[= \frac{4\tau}{|V_{+}-V_{-}|^{2}}\left(1+\mathcal{O}\left(\frac{1} {|\tau|}\right)\right){\rm Re}\,\left(\frac{2i\sqrt{\tau}+\mathcal{O}\left( \frac{1}{|\tau|^{1/2}}\right)}{2i\sqrt{\tau}+\mathcal{O}\left(\frac{1}{|\tau| ^{1/2}}\right)}\right)\] \[= \frac{4\tau}{|V_{+}-V_{-}|^{2}}\left(1+\mathcal{O}\left(\frac{1} {|\tau|}\right)\right)\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right)\] \[= \frac{4\tau}{|V_{+}-V_{-}|^{2}}\left(1+\mathcal{O}\left(\frac{1} {|\tau|}\right)\right),\]
and in the same way, we also obtain
\[{\rm Re}\,\left(K_{--}(z)\overline{K_{+-}(z)}\right)=\frac{4\tau}{|V_{+}-V_{-}|^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right).\]
From these estimates, the asymptotic expansion for \(\widetilde{B}(z)\) is followed from Lemma 4.3 (for \({\rm Re}\,k_{\pm}(z)\)) and (4.10):
\[\widetilde{B}(z)= \frac{\sqrt{\tau}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right) \right)}{|{\rm Im}\,V_{+}-\delta|^{1/2}|{\rm Im}\,V_{-}-\delta|^{1/2}}\left( \frac{\frac{8\tau^{3/2}}{|V_{+}-V_{-}|^{2}}\left(1+\mathcal{O}\left(\frac{1}{ |\tau|}\right)\right)}{|{\rm Im}\,V_{+}-\delta|}+\frac{\frac{8\tau^{3/2}}{|V_{ +}-V_{-}|^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right)}{|{\rm Im }\,V_{-}-\delta|}\right)\] \[= \frac{8|{\rm Im}\,V_{+}-{\rm Im}\,V_{-}|\tau^{2}}{|V_{+}-V_{-}|^{ 2}|{\rm Im}\,V_{+}-\delta|^{3/2}|{\rm Im}\,V_{-}-\delta|^{3/2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|}\right)\right).\]
We notice that \(\widetilde{B}(z)\) has the same asymptotic limit as \(B(z)\), this makes the upper bound and the lower bound of \(\|\mathcal{R}_{1,z}\|\) in (4.5) also have the same asymptotic limit as \(\tau\to+\infty\). Now, we can calculate the asymptotic behavior of all terms appearing in these bounds:
\[\sqrt{\left(A(z)-C(z)\right)^{2}+B(z)^{2}}= \frac{4|{\rm Im}\,V_{+}-{\rm Im}\,V_{-}|^{2}\tau^{2}}{|V_{+}-V_{- }|^{2}|{\rm Im}\,V_{+}-\delta|^{2}|{\rm Im}\,V_{-}-\delta|^{2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|}\right)\right),\] \[A(z)+C(z)+2D(z)= \frac{4|{\rm Im}\,V_{+}-{\rm Im}\,V_{-}|^{2}\tau^{2}}{|V_{+}-V_{- }|^{2}|{\rm Im}\,V_{+}-\delta|^{2}|{\rm Im}\,V_{-}-\delta|^{2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|}\right)\right).\]
Here, in the first and the second equalities, respectively, we used the fact that (follows from (4.8)), for all \(\delta\in|({\rm Im}\,V_{+},{\rm Im}\,V_{-})|\),
\[|{\rm Im}\,V_{+}+{\rm Im}\,V_{-}-2\delta|^{2}+4|{\rm Im}\,V_{+}-\delta||{\rm Im}\,V_{-}-\delta|=|{\rm Im}\,V_{+}-{\rm Im}\,V_{-}|^{2}, \tag{4.11}\] \[|{\rm Im}\,V_{+}-\delta|^{2}+|{\rm Im}\,V_{-}-\delta|^{2}+2|{\rm Im}\,V_{+}-\delta||{\rm Im}\,V_{-}-\delta|=|{\rm Im}\,V_{+}-{\rm Im}\,V_{-}|^{2}.\]
Therefore, the norm of \(\mathcal{R}_{1,z}\) has the following asymptotic behavior
\[\|\mathcal{R}_{1,z}\|=\frac{2|{\rm Im}\,V_{+}-{\rm Im}\,V_{-}|\tau}{|V_{+}-V_{- }||{\rm Im}\,V_{+}-\delta||{\rm Im}\,V_{-}-\delta|}\left(1+\mathcal{O}\left( \frac{1}{|\tau|}\right)\right). \tag{4.12}\]
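The asymptotic formula (4.12) can be tested against the exact two-sided bounds of Proposition 4.2, which are explicitly computable. In the sketch below, \(V_{\pm}\) and \(\delta\) are arbitrary illustrative values; both ratios tend to \(1\) as \(\tau\) grows.

```python
# Exact bounds of Proposition 4.2 versus the asymptotics (4.12) (illustration only).
import numpy as np

Vp, Vm = 1.0 + 3.0j, -2.0 - 1.0j
delta = 0.5                                  # Im z strictly between Im V_- and Im V_+

def bounds(tau):
    z = tau + 1j*delta
    kp, km = np.sqrt(Vp - z), np.sqrt(Vm - z)
    Kpp = (kp - km)/(2*kp*(kp + km))         # coefficients (4.4)
    Kmm = -(kp - km)/(2*km*(kp + km))
    Kpm = 1/(kp + km)
    A = abs(Kpp)**2/(4*kp.real**2)
    C = abs(Kmm)**2/(4*km.real**2)
    D = abs(Kpm)**2/(4*km.real*kp.real)
    B = abs(Kpm)/(2*np.sqrt(km.real*kp.real))*(abs(Kpp)/kp.real + abs(Kmm)/km.real)
    Bt = ((Kpp*np.conj(Kpm)).real/kp.real + (Kmm*np.conj(Kpm)).real/km.real) \
         / (2*np.sqrt(km.real*kp.real))
    upper = np.sqrt((np.hypot(A - C, B) + A + C + 2*D)/2)    # two-sided bounds (4.5)
    lower = np.sqrt((np.hypot(A - C, Bt) + A + C + 2*D)/2)
    asym = 2*abs(Vp.imag - Vm.imag)*tau/(abs(Vp - Vm)*abs(Vp.imag - delta)*abs(Vm.imag - delta))
    return lower, upper, asym

for tau in (1e2, 1e3, 1e4):
    lo, up, asym = bounds(tau)
    print(f"tau = {tau:.0e}:  lower/asym = {lo/asym:.4f},  upper/asym = {up/asym:.4f}")
```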
In the next step, we show that \(\|\mathcal{R}_{2,z}\|\) is merely a small perturbation of \(\|\mathcal{R}_{1,z}\|\) as \(\tau\to+\infty\). To do that, an upper bound for the norm of \(\mathcal{R}_{2,z}\) is established by applying
the Schur test. Considering the kernel of \(\mathcal{R}_{2,z}\) given in (4.3), we have
\[\int_{\mathbb{R}}|\mathcal{R}_{2,z}(x,y)|\,\mathrm{d}y=\begin{cases}\dfrac{1}{2|k _{+}(z)|}\dfrac{2-e^{-\operatorname{Re}k_{+}(z)x}}{\operatorname{Re}k_{+}(z)}& \text{ for }x>0,\\ \dfrac{1}{2|k_{-}(z)|}\dfrac{2-e^{\operatorname{Re}k_{-}(z)x}}{ \operatorname{Re}k_{-}(z)}&\text{ for }x<0,\end{cases}\]
It yields that, for all \(z\in\rho(\mathscr{L})\) (hence, \(\operatorname{Re}k_{\pm}(z)>0\)),
\[\sup_{x\in\mathbb{R}}\int_{\mathbb{R}}|\mathcal{R}_{2,z}(x,y)|\,\mathrm{d}y= \max\left(\frac{1}{|k_{+}(z)|\operatorname{Re}k_{+}(z)},\frac{1}{|k_{-}(z)| \operatorname{Re}k_{-}(z)}\right).\]
Since \(\mathcal{R}_{2,z}(x,y)\) is symmetric for almost every \((x,y)\in\mathbb{R}^{2}\), the Schur test yields
\[\|\mathcal{R}_{2,z}\|\leq\frac{1}{|k_{+}(z)|\operatorname{Re}k_{+}(z)}+\frac{ 1}{|k_{-}(z)|\operatorname{Re}k_{-}(z)}=\frac{2|\mathrm{Im}\,V_{+}-\mathrm{Im }\,V_{-}|}{|\mathrm{Im}\,V_{+}-\delta||\mathrm{Im}\,V_{-}-\delta|}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|}\right)\right). \tag{4.13}\]
Then, by considering (4.12), we obtain what we want to show, that is
\[\frac{\|\mathcal{R}_{2,z}\|}{\|\mathcal{R}_{1,z}\|}=\mathcal{O}\left(\frac{1}{ |\tau|}\right).\]
By the triangle inequality, it then follows that
\[\left\|(\mathscr{L}-z)^{-1}\right\|=\|\mathcal{R}_{1,z}\|\left(1+\mathcal{O} \left(\frac{1}{|\tau|}\right)\right). \tag{4.14}\]
Thus, the asymptotic expansion for \(\|\left(\mathscr{L}-z\right)^{-1}\|\) is deduced from (4.14) and (4.12).
### (Non)-triviality of the pseudospectrum of \(\mathscr{L}\): Proof of Corollary 2.8
Let us consider two situations mentioned in this corollary:
**(1)**: If \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\), then, thanks to (2.2) and (2.7), we have
\[\overline{\mathrm{Num}(\mathscr{L})}=[\widehat{V},+\infty)=\sigma(\mathscr{ L}),\]
where \(\widehat{V}\) defined as in (2.8). By using Proposition 2.5 in which \(W(\mathscr{L})=\rho(\mathscr{L})\), we obtain, for all \(z\in\rho(\mathscr{L})\),
\[\|(\mathscr{L}-z)^{-1}\|=\frac{1}{\operatorname{dist}(z,\sigma(\mathscr{L}))}. \tag{4.15}\]
From the definition (1.2) of the pseudospectrum, we deduce that the \(\varepsilon\)-pseudospectrum of \(\mathscr{L}\) is exactly the \(\varepsilon\)-neighbourhood of the spectrum \(\sigma(\mathscr{L})\). In fact, formula (4.15) holds for any unbounded normal operator \(T\) (and \(\mathscr{L}\) is a normal operator here, by Proposition 2.1). Indeed, since \(T\) is normal, its resolvent is a bounded normal operator (see [23, Proposition 3.26(v)]); therefore, the norm of the resolvent is equal to its spectral radius [6, Proposition 3.27]:
\[\|(T-z)^{-1}\|= \sup\left\{|\lambda|:\lambda\in\sigma((T-z)^{-1})\right\}=\sup \left\{\frac{1}{|\eta-z|}:\eta\in\sigma(T)\right\}\] \[= \frac{1}{\inf\{|\eta-z|:\eta\in\sigma(T)\}}=\frac{1}{\operatorname {dist}(z,\sigma(T))}.\]
Here, in the second equality, we have used the fact that \(\lambda\in\sigma((T-z)^{-1})\setminus\{0\}\) if and only if \(\lambda=\frac{1}{\eta-z}\) for some \(\eta\in\sigma(T)\).
**(2)**: If \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\), Theorem 2.7 implies that, for all \(\varepsilon^{\prime}\in(0,1)\), there exists \(M>0\) such that, for all \(\operatorname{Re}z>M\) and for all \(\operatorname{Im}z\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\), we have
\[\left\|(\mathscr{L}-z)^{-1}\right\|\geq(1-\varepsilon^{\prime})\frac{2| \operatorname{Im}V_{+}-\operatorname{Im}V_{-}|}{|V_{+}-V_{-}|}\frac{ \operatorname{Re}z}{|\operatorname{Im}V_{+}-\operatorname{Im}z||\operatorname {Im}V_{-}-\operatorname{Im}z|}.\]
Thus, for any \(\varepsilon>0\), if we consider
\[\operatorname{Re}z>\frac{1}{\varepsilon(1-\varepsilon^{\prime})}\frac{|V_{+}- V_{-}||\operatorname{Im}V_{+}-\operatorname{Im}z||\operatorname{Im}V_{-}- \operatorname{Im}z|}{2|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|}\]
then \(\|(\mathscr{L}-z)^{-1}\|>\frac{1}{\varepsilon}\) and the conclusion in this case follows.
In order to see the non-triviality of the pseudospectrum of \(\mathscr{L}\) when \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\), we just need to take \(\varepsilon\) sufficiently small and consider \(z_{\varepsilon}\) on the line \(\operatorname{Im}z=\frac{\operatorname{Im}V_{+}+\operatorname{Im}V_{-}}{2}\) with real part large enough so that \(z_{\varepsilon}\in\sigma_{\varepsilon}(\mathscr{L})\); it is easy to see that \(z_{\varepsilon}\) always stays at distance \(\frac{1}{2}|\operatorname{Im}V_{+}-\operatorname{Im}V_{-}|\) from the spectrum.
### Accurate pseudomode for \(\mathscr{L}\): Proof of Theorem 2.9
The aim of this part is to construct the pseudomode for the operator \(\mathscr{L}\). Let us fix \(z\in\rho(\mathscr{L})\), we construct the Ansatz in the form
\[\Psi_{z}(x)=\Psi_{1,z}(x)+\Psi_{2,z}(x),\]
where
\[\Psi_{1,z}(x) =n_{1}(z)e^{k_{-}(z)x}\mathbf{1}_{\mathbb{R}_{-}}(x)+p_{1}(z)e^{- k_{+}(z)x}\mathbf{1}_{\mathbb{R}_{+}}(x),\] \[\Psi_{2,z}(x) =n_{2}(z)e^{\overline{k_{-}(z)}x}\mathbf{1}_{\mathbb{R}_{-}}(x)+ p_{2}(z)e^{-\overline{k_{+}(z)}x}\mathbf{1}_{\mathbb{R}_{+}}(x),\]
in which \(n_{1}(z),n_{2}(z),p_{1}(z),p_{2}(z)\) are complex numbers to be determined later. Here, we notice that \(\Psi_{1,z}\) and \(\Psi_{2,z}\) satisfy, respectively, for all \(x\in\mathbb{R}\setminus\{0\}\),
\[\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+V(x)-z\right)\Psi_{1,z}(x)=0, \qquad\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+\overline{V(x)}-\overline {z}\right)\Psi_{2,z}(x)=0. \tag{4.16}\]
In order for \(\Psi_{z}\) to belong to \(H^{2}(\mathbb{R})\), the domain of \(\mathscr{L}\), it must satisfy the conditions \(\Psi_{z}(0^{+})=\Psi_{z}(0^{-})\) and \(\Psi_{z}^{\prime}(0^{+})=\Psi_{z}^{\prime}(0^{-})\). These conditions read
\[\begin{cases}n_{1}(z)+n_{2}(z)=p_{1}(z)+p_{2}(z),\\ k_{-}(z)n_{1}(z)+\overline{k_{-}(z)}n_{2}(z)=-k_{+}(z)p_{1}(z)-\overline{k_{+}( z)}p_{2}(z),\end{cases} \tag{4.17}\]
This allows us to compute \(n_{1}(z)\) and \(p_{1}(z)\) in terms of \(n_{2}(z)\) and \(p_{2}(z)\) as follows
\[\begin{cases}n_{1}(z)=-\frac{k_{+}(z)+\overline{k_{-}(z)}}{k_{+}(z)+k_{-}(z)}n _{2}(z)+\frac{k_{+}(z)-\overline{k_{+}(z)}}{k_{+}(z)+k_{-}(z)}p_{2}(z),\\ p_{1}(z)=\frac{k_{-}(z)-\overline{k_{-}(z)}}{k_{+}(z)+k_{-}(z)}n_{2}(z)-\frac{ \overline{k_{+}(z)}+k_{-}(z)}{k_{+}(z)+k_{-}(z)}p_{2}(z).\end{cases} \tag{4.18}\]
In the following, we will show that this Ansatz \(\Psi_{z}\) belongs to \(H^{2}(\mathbb{R})\) and that \(n_{2}\) and \(p_{2}\) can be calculated by optimizing (in fact, minimizing) the quotient \(\frac{\|(\mathscr{L}-z)\Psi_{z}\|}{\|\Psi_{z}\|}\). Indeed, assume that \(\Psi_{z}\) satisfies the conditions in (4.17). Since \(z\in\rho(\mathscr{L})\), we have \(\operatorname{Re}k_{\pm}(z)=\operatorname{Re}\overline{k_{\pm}(z)}>0\), and it is easy to see that \(\Psi_{z}\in L^{2}(\mathbb{R})\). Let \(\varphi\in C^{1}_{\mathrm{c}}(\mathbb{R})\); by integration by parts on \(\mathbb{R}_{-}\) and on \(\mathbb{R}_{+}\),
and using the first equality in (4.17), we get
\[\begin{split}\int_{\mathbb{R}}\Psi_{z}(x)\varphi^{\prime}(x)\,\mathrm{d}x=&(n_{1}(z)+n_{2}(z)-p_{1}(z)-p_{2}(z))\varphi(0)\\ &-\int_{-\infty}^{0}\left(n_{1}(z)k_{-}(z)e^{k_{-}(z)x}+n_{2}(z)\overline{k_{-}(z)}e^{\overline{k_{-}(z)}x}\right)\varphi(x)\,\mathrm{d}x\\ &+\int_{0}^{+\infty}\left(p_{1}(z)k_{+}(z)e^{-k_{+}(z)x}+p_{2}(z)\overline{k_{+}(z)}e^{-\overline{k_{+}(z)}x}\right)\varphi(x)\,\mathrm{d}x\\ =&-\int_{-\infty}^{0}\left(n_{1}(z)k_{-}(z)e^{k_{-}(z)x}+n_{2}(z)\overline{k_{-}(z)}e^{\overline{k_{-}(z)}x}\right)\varphi(x)\,\mathrm{d}x\\ &+\int_{0}^{+\infty}\left(p_{1}(z)k_{+}(z)e^{-k_{+}(z)x}+p_{2}(z)\overline{k_{+}(z)}e^{-\overline{k_{+}(z)}x}\right)\varphi(x)\,\mathrm{d}x.\end{split}\]
Therefore, the distributional derivative of \(\Psi_{z}\) is given by
\[\begin{split}\Psi_{z}^{\prime}(x)=&\left(n_{1}(z)k_ {-}(z)e^{k_{-}(z)x}+n_{2}(z)\overline{k_{-}(z)}e^{\overline{k_{-}(z)}x}\right) \mathbf{1}_{\mathbb{R}_{-}}(x)\\ &+\left(-p_{1}(z)k_{+}(z)e^{-k_{+}(z)x}-p_{2}(z)\overline{k_{+}( z)}e^{-\overline{k_{+}(z)}x}\right)\mathbf{1}_{\mathbb{R}_{+}}(x),\end{split} \tag{4.19}\]
which also belongs to \(L^{2}(\mathbb{R})\). In other words, \(\Psi_{z}\in H^{1}(\mathbb{R})\). In the same manner, by using the second equality in (4.17), we can obtain the distributional second derivative of \(\Psi_{z}\) as follows
\[\begin{split}\Psi_{z}^{\prime\prime}(x)=&\left(n_{1}(z)k_{-}(z)^{2}e^{k_{-}(z)x}+n_{2}(z)\overline{k_{-}(z)}^{2}e^{\overline{k_{-}(z)}x}\right)\mathbf{1}_{\mathbb{R}_{-}}(x)\\ &+\left(p_{1}(z)k_{+}(z)^{2}e^{-k_{+}(z)x}+p_{2}(z)\overline{k_{+}(z)}^{2}e^{-\overline{k_{+}(z)}x}\right)\mathbf{1}_{\mathbb{R}_{+}}(x),\end{split}\]
and it belongs to \(L^{2}(\mathbb{R})\). In conclusion, \(\Psi_{z}\in H^{2}(\mathbb{R})\).
Now, we will send \(z\) to infinity in the numerical range to get the asymptotic behavior of the quotient \(\frac{\|(\mathscr{L}-z)\Psi_{z}\|}{\|\Psi_{z}\|}\). Thanks to Lemma 4.3 and (4.8), we obtain the following expressions
\[\begin{split} k_{+}(z)+\overline{k_{-}(z)}&=\operatorname {sgn}(\operatorname{Im}V_{+}-\operatorname{Im}V_{-})2i\sqrt{\tau}+\mathcal{O} \left(\frac{1}{|\tau|^{1/2}}\right),\\ k_{+}(z)-\overline{k_{+}(z)}&=\operatorname{sgn} (\operatorname{Im}V_{+}-\operatorname{Im}V_{-})2i\sqrt{\tau}+\mathcal{O} \left(\frac{1}{|\tau|^{1/2}}\right),\\ k_{-}(z)-\overline{k_{-}(z)}&=-\operatorname{sgn} (\operatorname{Im}V_{+}-\operatorname{Im}V_{-})2i\sqrt{\tau}+\mathcal{O} \left(\frac{1}{|\tau|^{1/2}}\right),\\ \overline{k_{+}(z)}+k_{-}(z)&=-\operatorname{sgn} (\operatorname{Im}V_{+}-\operatorname{Im}V_{-})2i\sqrt{\tau}+\mathcal{O} \left(\frac{1}{|\tau|^{1/2}}\right).\end{split} \tag{4.20}\]
Now, we impose the following conditions on the coefficients \(n_{2}(z)\) and \(p_{2}(z)\)
\[|n_{2}(z)|=\mathcal{O}(1),\qquad|p_{2}(z)|=\mathcal{O}(1),\qquad 1=\mathcal{O}(|n_{2} (z)-p_{2}(z)|), \tag{4.21}\]
as \(\tau\to+\infty\) and uniformly for all \(\delta\in|(\operatorname{Im}V_{+},\operatorname{Im}V_{-})|\). Then, it follows from the first equation in (4.18), (4.20) and (4.9) that
\[|n_{1}(z)|^{2}= \left(\left|k_{+}(z)+\overline{k_{-}(z)}\right|^{2}|n_{2}(z)|^{2}+ \left|k_{+}(z)-\overline{k_{+}(z)}\right|^{2}|p_{2}(z)|^{2}\right.\] \[\left.-2\mathrm{Re}\,\left((k_{+}(z)+\overline{k_{-}(z)}) \overline{(k_{+}(z)-\overline{k_{+}(z)})}n_{2}(z)\overline{p_{2}(z)}\right) \right)\div|k_{+}(z)+k_{-}(z)|^{2}\] \[= \frac{4\tau|n_{2}(z)-p_{2}(z)|^{2}\left(1+\mathcal{O}\left(\frac{ 1}{|\tau|}\right)\right)}{\frac{|V_{+}-V_{-}|^{2}}{4\tau}\left(1+\mathcal{O} \left(\frac{1}{|\tau|}\right)\right)},\] \[= \frac{16\tau^{2}|n_{2}(z)-p_{2}(z)|^{2}}{|V_{+}-V_{-}|^{2}} \left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right). \tag{4.22}\]
Similarly, we also obtain the asymptotic expression for \(|p_{1}(z)|^{2}\),
\[|p_{1}(z)|^{2}=\frac{16\tau^{2}|n_{2}(z)-p_{2}(z)|^{2}}{|V_{+}-V_{-}|^{2}} \left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right). \tag{4.23}\]
With \(\Psi_{1,z}\) and \(\Psi_{2,z}\) defined as in (4.16), we get
\[\|\Psi_{1,z}\|^{2}=\frac{|n_{1}(z)|^{2}}{2\mathrm{Re}\,k_{-}(z)}+\frac{|p_{1} (z)|^{2}}{2\mathrm{Re}\,k_{+}(z)},\qquad\|\Psi_{2,z}\|^{2}=\frac{|n_{2}(z)|^{2 }}{2\mathrm{Re}\,k_{-}(z)}+\frac{|p_{2}(z)|^{2}}{2\mathrm{Re}\,k_{+}(z)}. \tag{4.24}\]
By using assumption (4.21) on \(n_{2}(z),p_{2}(z)\) together with (4.22) and (4.23), \(\|\Psi_{2,z}\|\) can be shown to be a small perturbation of \(\|\Psi_{1,z}\|\):
\[\frac{\|\Psi_{2,z}\|^{2}}{\|\Psi_{1,z}\|^{2}}\leq\frac{|n_{2}(z)|^{2}}{|n_{1} (z)|^{2}}+\frac{|p_{2}(z)|^{2}}{|p_{1}(z)|^{2}}=\mathcal{O}\left(\frac{1}{| \tau|^{2}}\right).\]
Thanks to the triangle inequality, it yields that
\[\|\Psi_{z}\|=\|\Psi_{1,z}\|\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right) \right). \tag{4.25}\]
From (4.16), it yields that, for all \(x\in\mathbb{R}\setminus\{0\}\),
\[\left(\mathscr{L}-z\right)\Psi_{z}(x)=2i(\operatorname{Im}V(x)-\operatorname{ Im}z)\Psi_{2,z}(x),\]
and the square of its norm is given by
\[\|(\mathscr{L}-z)\Psi_{z}\|^{2}=\frac{2|n_{2}(z)|^{2}}{\mathrm{Re}\,k_{-}(z)}| \mathrm{Im}\,V_{-}-\delta|^{2}+\frac{2|p_{2}(z)|^{2}}{\mathrm{Re}\,k_{+}(z)}| \mathrm{Im}\,V_{+}-\delta|^{2}.\]
Applying (4.25) and (4.24), we have
\[\frac{\|(\mathscr{L}-z)\Psi_{z}\|^{2}}{\|\Psi_{z}\|^{2}}= \frac{\frac{2|\mathrm{Im}\,V_{-}-\delta|^{2}}{\mathrm{Re}\,k_{-}(z )}|n_{2}(z)|^{2}+\frac{2|\mathrm{Im}\,V_{+}-\delta|^{2}}{\mathrm{Re}\,k_{+}(z) }|p_{2}(z)|^{2}}{\frac{|n_{1}(z)|^{2}}{2\mathrm{Re}\,k_{-}(z)}+\frac{|p_{1}(z) |^{2}}{2\mathrm{Re}\,k_{+}(z)}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|} \right)\right)\] \[= 4\frac{\mathrm{Re}\,k_{+}(z)|\mathrm{Im}\,V_{-}-\delta|^{2}|n_{2 }(z)|^{2}+\mathrm{Re}\,k_{-}(z)|\mathrm{Im}\,V_{+}-\delta|^{2}|p_{2}(z)|^{2}} {\mathrm{Re}\,k_{+}(z)|n_{1}(z)|^{2}+\mathrm{Re}\,k_{-}(z)|p_{1}(z)|^{2}} \left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right)\right).\]
Employing Lemma 4.3 for \(\operatorname{Re}k_{\pm}(z)\), (4.21), (4.22), and (4.23), we get
\[\operatorname{Re}k_{+}(z)|\mathrm{Im}\,V_{-}-\delta|^{2}|n_{2}(z)| ^{2}= \frac{|\mathrm{Im}\,V_{+}-\delta||\mathrm{Im}\,V_{-}-\delta|^{2}} {2\sqrt{\tau}}|n_{2}(z)|^{2}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right) \right),\] \[\operatorname{Re}k_{-}(z)|\mathrm{Im}\,V_{+}-\delta|^{2}|p_{2}(z )|^{2}= \frac{|\mathrm{Im}\,V_{+}-\delta|^{2}|\mathrm{Im}\,V_{-}-\delta|} {2\sqrt{\tau}}|p_{2}(z)|^{2}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right) \right),\] \[\operatorname{Re}k_{+}(z)|n_{1}(z)|^{2}= \frac{8\tau^{3/2}|\mathrm{Im}\,V_{+}-\delta|}{|V_{+}-V_{-}|^{2}} |n_{2}(z)-p_{2}(z)|^{2}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right) \right),\] \[\operatorname{Re}k_{-}(z)|p_{1}(z)|^{2}= \frac{8\tau^{3/2}|\mathrm{Im}\,V_{-}-\delta|}{|V_{+}-V_{-}|^{2}} |n_{2}(z)-p_{2}(z)|^{2}\left(1+\mathcal{O}\left(\frac{1}{|\tau|}\right) \right).\]
Then, by using (4.10), the quotient has the asymptotic behavior as follows
\[\frac{\|(\mathscr{L}-z)\Psi_{z}\|^{2}}{\|\Psi_{z}\|^{2}}= \frac{|V_{+}-V_{-}|^{2}}{4|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-} |}\frac{|\mathrm{Im}\,V_{+}-\delta||\mathrm{Im}\,V_{-}-\delta|}{\tau^{2}} \tag{4.26}\] \[\times\frac{|\mathrm{Im}\,V_{-}-\delta||n_{2}(z)|^{2}+|\mathrm{Im }\,V_{+}-\delta||p_{2}(z)|^{2}}{|n_{2}(z)-p_{2}(z)|^{2}}\left(1+\mathcal{O} \left(\frac{1}{|\tau|}\right)\right).\]
When \(z\in\rho(\mathscr{L})\), the inverse of the resolvent norm can be expressed as follows
\[\|\left(\mathscr{L}-z\right)^{-1}\|^{-1}=\inf_{\Psi\in H^{2}(\mathbb{R})\setminus \{0\}}\frac{\|(\mathscr{L}-z)\Psi\|}{\|\Psi\|}.\]
From this expression, \(n_{2}(z)\) and \(p_{2}(z)\) will be chosen in order to minimize the function
\[F(n_{2}(z),p_{2}(z))\coloneqq\frac{|\mathrm{Im}\,V_{-}-\delta||n_{2}(z)|^{2}+| \mathrm{Im}\,V_{+}-\delta||p_{2}(z)|^{2}}{|n_{2}(z)-p_{2}(z)|^{2}}.\]
From (4.18), in order to have \(\Psi_{z}\neq 0\), at least one of the two numbers \(n_{2}(z)\) or \(p_{2}(z)\) must be non-zero; we assume that \(p_{2}(z)\neq 0\). By dividing both the numerator and the denominator of \(F(n_{2}(z),p_{2}(z))\) by \(|p_{2}(z)|^{2}\), our problem turns into searching for the infimum of the one-variable function
\[f(x)\coloneqq\frac{|\mathrm{Im}\,V_{-}-\delta|x^{2}+|\mathrm{Im}\,V_{+}- \delta|}{(x-1)^{2}}.\]
For simplification, we search its infimum on \(\mathbb{R}\): its derivative is
\[f^{\prime}(x)=-2\frac{|\mathrm{Im}\,V_{-}-\delta|x+|\mathrm{Im}\,V_{+}- \delta|}{(x-1)^{3}},\]
By studying the variations of \(f(x)\), we see that it attains its global minimum at the point \(x_{0}=-\frac{|\mathrm{Im}\,V_{+}-\delta|}{|\mathrm{Im}\,V_{-}-\delta|}\), and using (4.10) we can calculate the minimum value of \(f\):
\[f(x_{0})=\frac{|\mathrm{Im}\,V_{+}-\delta||\mathrm{Im}\,V_{-}-\delta|}{| \mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}|}.\]
Therefore, we need to choose \(n_{2}(z)\) and \(p_{2}(z)\) such that \(\frac{n_{2}(z)}{p_{2}(z)}=-\frac{|\mathrm{Im}\,V_{+}-\delta|}{|\mathrm{Im}\,V_{-}-\delta|}\) while satisfying assumption (4.21). We have \(|n_{2}(z)-p_{2}(z)|=\frac{|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}|}{|\mathrm{Im}\,V_{-}-\delta|}|p_{2}(z)|\); thus, in order to have \(1\lesssim|n_{2}(z)-p_{2}(z)|\lesssim 1\), we can choose \(p_{2}(z)\) of the form \(p_{2}(z)=C|\mathrm{Im}\,V_{-}-\delta|\), where \(C\) is a non-zero complex constant, and then \(n_{2}(z)=-C|\mathrm{Im}\,V_{+}-\delta|\). However, by inserting these numbers \(n_{2}(z)\), \(p_{2}(z)\) into the first two equations in (2.12) to obtain \(n_{1}(z),p_{1}(z)\), we see that the constant \(C\) merely plays the role of a (normalizing) multiplicative constant. Therefore, we can choose \(C=1\), which completes the proof of Theorem 2.9.
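Since the optimal Ansatz consists of pure exponentials, the quotient \(\|(\mathscr{L}-z)\Psi_{z}\|/\|\Psi_{z}\|\) can be evaluated exactly (the integrals reduce to integrals of exponentials) and compared with the leading term obtained by inserting \(f(x_{0})\) into (4.26). In the sketch below, the values of \(V_{\pm}\) and \(\delta\) are arbitrary illustrative choices and the constant \(C=1\) is used, as in the proof.

```python
# Exact quotient for the optimal pseudomode versus its leading term (illustration only).
import numpy as np

Vp, Vm = 1.0 + 3.0j, -2.0 - 1.0j
delta = 0.5                                  # Im z strictly between Im V_- and Im V_+
ap, am = abs(Vp.imag - delta), abs(Vm.imag - delta)

def quotient(tau):
    z = tau + 1j*delta
    kp, km = np.sqrt(Vp - z), np.sqrt(Vm - z)
    n2, p2 = -ap, am                                       # optimal choice with C = 1
    n1 = -(kp + np.conj(km))/(kp + km)*n2 + (kp - np.conj(kp))/(kp + km)*p2   # (4.18)
    p1 = (km - np.conj(km))/(kp + km)*n2 - (np.conj(kp) + km)/(kp + km)*p2
    # ||Psi_z||^2, using int_0^infty e^{-a x} dx = 1/a for Re a > 0
    norm2 = (abs(n1)**2 + abs(n2)**2)/(2*km.real) + (n1*np.conj(n2)/km).real \
          + (abs(p1)**2 + abs(p2)**2)/(2*kp.real) + (p1*np.conj(p2)/kp).real
    # ||(L - z)Psi_z||^2 as computed in the proof
    res2 = 2*am**2*abs(n2)**2/km.real + 2*ap**2*abs(p2)**2/kp.real
    return np.sqrt(res2/norm2)

for tau in (1e2, 1e3, 1e4):
    leading = abs(Vp - Vm)*ap*am/(2*abs(Vp.imag - Vm.imag)*tau)
    print(f"tau = {tau:.0e}:  quotient/leading = {quotient(tau)/leading:.4f}")
# the ratio approaches 1, confirming the optimality of the pseudomode
```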
## 5. Pseudospectrum of the complex point interaction
### Stability of essential spectra under point interaction: Proof of Lemma 2.11
We prove this lemma by the method of finite-dimensional extensions, cf. [11, Sec. IX.4]. This method says that if \(\mathscr{L}\) is a closed \(n\)-dimensional extension of a closed, densely defined operator \(\mathscr{T}\), _i.e._, \(\mathscr{T}\subset\mathscr{L}\) and there is an \(n\)-dimensional subspace \(F\) of \(\operatorname{Dom}(\mathscr{L})\) such that \(\operatorname{Dom}(\mathscr{L})=\operatorname{Dom}(\mathscr{T})\dotplus F\) (_i.e._, \(\operatorname{Dom}(\mathscr{L})\) is the direct sum of the two linear vector spaces \(\operatorname{Dom}(\mathscr{T})\) and \(F\)), then the first three essential spectra of \(\mathscr{L}\) and \(\mathscr{T}\) are identical: \(\sigma_{\mathrm{ek}}(\mathscr{L})=\sigma_{\mathrm{ek}}(\mathscr{T})\) for \(\mathrm{k}\in[[1,3]]\). We cannot apply this method directly to the two operators \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\), since their domains are not extensions of each other. We need an intermediate operator between \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\) in order to obtain a transitivity property between the essential spectra of these operators. Let us make the previous sentence precise: we consider an operator \(\mathscr{T}\) which is a common restriction of \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\), defined by
\[\operatorname{Dom}(\mathscr{T}) =\{u\in H^{1}(\mathbb{R})\cap H^{2}(\mathbb{R}):u(0)=u^{\prime}(0 )=0\},\] \[\mathscr{T}u =-u^{\prime\prime}+V(x)u,\qquad\forall u\in\operatorname{Dom}( \mathscr{T}).\]
We will show the following:
**(a)**: \(\mathscr{T}\) is a closed densely defined operator;
**(b)**: \(\mathscr{L}\) is a \(2\)-dimensional extension of \(\mathscr{T}\);
**(c)**: \(\mathscr{L}_{\alpha}\) is a \(2\)-dimensional extension of \(\mathscr{T}\).
Let us start with **(a)**. It is not hard to prove that \(\operatorname{Dom}(\mathscr{T})\) is dense in \(L^{2}(\mathbb{R})\). Indeed, since \(C_{c}^{\infty}(\mathbb{R})\) is dense in \(L^{2}(\mathbb{R})\), it suffices to approximate functions \(f\in C_{c}^{\infty}(\mathbb{R})\). We consider a sequence of smooth functions \((\xi_{n})_{n\geq 1}\) such that \(\operatorname{supp}\xi_{n}\subset\{x\in\mathbb{R}:0<|x|<n+1\}\) and \(\xi_{n}(x)=1\) for all \(x\in\mathbb{R}\) satisfying \(\frac{1}{n}\leq|x|\leq n\); then \(\xi_{n}f\in\operatorname{Dom}(\mathscr{T})\) and \(\|\xi_{n}f-f\|\xrightarrow{n\to+\infty}0\) by the dominated convergence theorem. To prove that \(\mathscr{T}\) is closed, we consider the graph norm \(\|u\|_{\mathscr{T}}\coloneqq\sqrt{\|\mathscr{T}u\|^{2}+\|u\|^{2}}\) defined for all \(u\in\operatorname{Dom}(\mathscr{T})\); this norm is equivalent to the norm of \(H^{2}(\mathbb{R})\). Therefore, any Cauchy sequence \(u_{n}\) in \(\operatorname{Dom}(\mathscr{T})\) under the graph norm converges to a function \(u\) in \(H^{2}(\mathbb{R})\). By the Sobolev embedding of \(H^{2}(\mathbb{R})\) into \(C^{1}(\mathbb{R})\), we have \(u(0)=u^{\prime}(0)=0\). Thus, we have shown that \((\operatorname{Dom}(\mathscr{T}),\|\cdot\|_{\mathscr{T}})\) is complete or, equivalently, that \(\mathscr{T}\) is closed.
Next, we prove **(b)**. Let \(\phi\in C_{c}^{\infty}(\mathbb{R})\) be such that \(\phi(x)=1\) in some neighbourhood of zero. Take \(u\in\operatorname{Dom}(\mathscr{L})\); then we can show that
\[u(x)-[u(0)\phi(x)+u^{\prime}(0)x\phi(x)]\in\operatorname{Dom}(\mathscr{T}).\]
It means that
\[\operatorname{Dom}(\mathscr{L})\subset\operatorname{Dom}(\mathscr{T})+F, \qquad F:=\operatorname{span}\{\phi(x),x\phi(x)\}.\]
Note that \(\dim F=2\) (since \(\phi(x)\) and \(x\phi(x)\) are linearly independent). Since both \(\operatorname{Dom}(\mathscr{T})\) and \(F\) are subspaces of \(\operatorname{Dom}(\mathscr{L})\), we also have the reverse inclusion \(\operatorname{Dom}(\mathscr{L})\supset\operatorname{Dom}(\mathscr{T})+F\). Furthermore, \(\operatorname{Dom}(\mathscr{T})\cap F=\{0\}\), which implies the statement **(b)**.
To prove **(c)**, we modify the functions in \(F\) so that they belong to \(\operatorname{Dom}(\mathscr{L}_{\alpha})\). Let us consider the function \(\phi_{\alpha}(x)\) defined on \(\mathbb{R}\) by
\[\phi_{\alpha}(x):=\left(\frac{\alpha}{2}|x|+1\right)\phi(x).\]
It is clear that \(\phi_{\alpha}\in H^{1}(\mathbb{R})\cap H^{2}(\mathbb{R}\setminus\{0\})\) with \(\phi_{\alpha}(0)=1\), \(\phi_{\alpha}^{\prime}(0^{+})=\frac{\alpha}{2}\) and \(\phi_{\alpha}^{\prime}(0^{-})=-\frac{\alpha}{2}\). This implies that \(\phi_{\alpha}\in\operatorname{Dom}(\mathscr{L}_{\alpha})\). It is also clear that \(x\phi_{\alpha}(x)\in\operatorname{Dom}(\mathscr{L}_{\alpha})\); therefore, we have
\[G:=\operatorname{span}\{\phi_{\alpha}(x),x\phi_{\alpha}(x)\}\subset \operatorname{Dom}(\mathscr{L}_{\alpha}).\]
Take \(u\in\operatorname{Dom}(\mathscr{L}_{\alpha})\); we can verify that
\[v(x)\coloneqq u(x)-\left[u(0)\phi_{\alpha}(x)+\frac{u^{\prime}(0^{+})+u^{ \prime}(0^{-})}{2}x\phi_{\alpha}(x)\right]\in\operatorname{Dom}(\mathscr{T}).\]
Indeed, by direct computation, we have \(v(0)=u(0)-u(0)=0\) and
\[v^{\prime}(0^{+}) =u^{\prime}(0^{+})-\frac{\alpha u(0)}{2}-\frac{u^{\prime}(0^{+})+u^ {\prime}(0^{-})}{2}=\frac{u^{\prime}(0^{+})-u^{\prime}(0^{-})-\alpha u(0)}{2}=0,\] \[v^{\prime}(0^{-}) =u^{\prime}(0^{-})+\frac{\alpha u(0)}{2}-\frac{u^{\prime}(0^{+})+u ^{\prime}(0^{-})}{2}=-\frac{u^{\prime}(0^{+})-u^{\prime}(0^{-})-\alpha u(0)}{2} =0,\]
where the last equalities come from the jump condition satisfied by \(u\in\operatorname{Dom}(\mathscr{L}_{\alpha})\). In other words, we have shown that \(\operatorname{Dom}(\mathscr{L}_{\alpha})\subset\operatorname{Dom}(\mathscr{T})+G\). Therefore, we obtain the equality \(\operatorname{Dom}(\mathscr{L}_{\alpha})=\operatorname{Dom}(\mathscr{T})+G\) (since both \(\operatorname{Dom}(\mathscr{T})\) and \(G\) are subspaces of \(\operatorname{Dom}(\mathscr{L}_{\alpha})\)). It is easy to check that \(\operatorname{Dom}(\mathscr{T})\cap G=\{0\}\); thus the statement **(c)** follows since \(\dim G=2\).
Thanks to **(a)**, **(b)**, **(c)**, we can apply [11, Corollary IX.4.2] to obtain
\[\sigma_{\operatorname{ek}}(\mathscr{L})=\sigma_{\operatorname{ek}}(\mathscr{T })=\sigma_{\operatorname{ek}}(\mathscr{L}_{\alpha})\qquad\text{ for }k\in\{1,2,3\}.\]
The statement on the essential spectra of \(\mathscr{L}_{\alpha}\) of the remaining types, _i.e._, \(\mathrm{k}\in\{4,5\}\), follows from [18, Proposition 5.4.4] because \(\mathbb{C}\setminus\sigma_{\operatorname{e1}}(\mathscr{L}_{\alpha})\) is connected.
### Existence of the eigenvalue: Proof of Theorem 2.12
Since \(\mathscr{L}_{\alpha}\) is \(\mathcal{T}\)-self-adjoint (see Proposition 2.10), its residual spectrum is empty; thus its spectrum decomposes as the union of the point spectrum and the continuous spectrum. Thanks to Lemma 2.11, the set \([V_{+},+\infty)\cup[V_{-},+\infty)\) is the essential spectrum of \(\mathscr{L}_{\alpha}\). Therefore, for \(z\in\mathbb{C}\setminus\left([V_{+},+\infty)\cup[V_{-},+\infty)\right)\), the range \(\operatorname{Ran}\left(\mathscr{L}_{\alpha}-z\right)\) is closed, and this implies that \(z\in\mathbb{C}\setminus\sigma_{\operatorname{c}}(\mathscr{L}_{\alpha})\) (because if \(z\in\sigma_{\operatorname{c}}(\mathscr{L}_{\alpha})\), we obtain a contradiction with the definition of the continuous spectrum). In other words, we have shown that
\[\sigma_{\operatorname{c}}(\mathscr{L}_{\alpha})\subset[V_{+},+\infty)\cup[V_ {-},+\infty). \tag{5.1}\]
Since the actions of \(\mathscr{L}_{\alpha}\) and \(\mathscr{L}\) are the same, we can use the argument in the proof of Proposition 2.2 to solve the eigenvalue equation \((\mathscr{L}_{\alpha}-z)u=0\). From (3.2), noting that \(f=0\) in that argument, the general solution restricted to \(\mathbb{R}_{\pm}\), denoted by \(u_{\pm}\), reads
\[u_{\pm}(x)=A_{\pm}e^{k_{\pm}(z)x}+B_{\pm}e^{-k_{\pm}(z)x},\qquad\text{for }\pm x>0. \tag{5.2}\]
In the same manner as in the proof of Theorem 2.4 (subsection 3.2), we can easily show that no point in \([V_{+},+\infty)\cup[V_{-},+\infty)\) can be an eigenvalue of \(\mathscr{L}_{\alpha}\), and thus
\[[V_{+},+\infty)\cup[V_{-},+\infty)\subset\sigma_{\operatorname{c}}(\mathscr{L }_{\alpha}).\]
Combining this inclusion with (5.1), we conclude (2.16). The fact that the point spectrum is the discrete spectrum comes from [18, Proposition 5.4.3] which states that \(\sigma_{\operatorname{dis}}(\mathscr{L}_{\alpha})=\sigma(\mathscr{L}_{\alpha} )\setminus\sigma_{\operatorname{e5}}(\mathscr{L}_{\alpha})\).
Now, we look for the eigenvalues of \(\mathscr{L}_{\alpha}\) in the set \(\mathbb{C}\setminus([V_{+},+\infty)\cup[V_{-},+\infty))\). Take \(z\in\mathbb{C}\setminus([V_{+},+\infty)\cup[V_{-},+\infty))\) and assume that \((z,u)\) is an eigenpair of \(\mathscr{L}_{\alpha}\). From the decay of \(u\) at \(\pm\infty\), arguing as in (3.7) and (3.8), we obtain \(A_{+}=0\) and \(B_{-}=0\). Substituting these conditions into (5.2), we get
\[u(x)=\begin{cases}B_{+}e^{-k_{+}(z)x},&\text{for }x>0,\\ A_{-}e^{k_{-}(z)x},&\text{for }x<0.\end{cases} \tag{5.3}\]
In order for \(u\) to belong to \(\operatorname{Dom}(\mathscr{L}_{\alpha})\), it is necessary that \(A_{-}\) and \(B_{+}\) form a non-trivial solution of the system
\[\begin{cases}B_{+}-A_{-}=0,\\ -k_{+}(z)B_{+}-(k_{-}(z)+\alpha)A_{-}=0.\end{cases}\]
Conversely, if there exist non-zero constants \(A_{-}\) and \(B_{+}\) satisfying the above system, it is easy to check that \(u(x)\) given by (5.3) is indeed an eigenfunction associated with the eigenvalue \(z\in\mathbb{C}\setminus([V_{+},+\infty)\cup[V_{-},+\infty))\). This is equivalent to saying that \(z\) is an eigenvalue if and only if \(z\) satisfies the following algebraic equation depending on the parameter \(\alpha\in\mathbb{C}\),
\[k_{+}(z)+k_{-}(z)=-\alpha, \tag{5.4}\]
and \(z\notin([V_{+},+\infty)\cup[V_{-},+\infty))\). We can assume that \(\alpha\neq 0\) (since when \(\alpha=0\) we have \(\mathscr{L}_{0}=\mathscr{L}\), and we already know that there is no eigenvalue in this case). Using the fact that
\[k_{+}(z)^{2}-k_{-}(z)^{2}=V_{+}-V_{-},\]
we deduce from (5.4) that
\[k_{+}(z)-k_{-}(z)=-\frac{V_{+}-V_{-}}{\alpha}. \tag{5.5}\]
Thanks to (5.4) and (5.5), we get
\[\sqrt{V_{+}-z}=-\frac{\alpha}{2}-\frac{V_{+}-V_{-}}{2\alpha}\text{ and }\sqrt{V_{-}-z}=-\frac{\alpha}{2}+\frac{V_{+}-V_{-}}{2\alpha}. \tag{5.6}\]
By squaring either of the two equations in (5.6), we obtain
\[z=z(\alpha)=\frac{V_{+}+V_{-}}{2}-\frac{(V_{+}-V_{-})^{2}}{4\alpha^{2}}-\frac {\alpha^{2}}{4}. \tag{5.7}\]
Thus, we have shown that if equation (5.4) has a solution, then this solution is necessarily of the form (5.7). In other words, for \(\alpha\in\mathbb{C}\), the point \(z(\alpha)\) is an eigenvalue of \(\mathscr{L}_{\alpha}\) if and only if \(\alpha\) satisfies
\[\begin{cases}\sqrt{V_{+}-z(\alpha)}+\sqrt{V_{-}-z(\alpha)}=-\alpha,\\ z(\alpha)\notin[V_{+},+\infty)\cup[V_{-},+\infty).\end{cases} \tag{5.8}\]
Since the two equations in (5.6) are consequences of (5.4), \(\alpha\) satisfies (5.8) if and only if
\[\begin{cases}\sqrt{V_{+}-z(\alpha)}=-\frac{\alpha}{2}-\frac{V_{+}-V_{-}}{2 \alpha},\\ \sqrt{V_{-}-z(\alpha)}=-\frac{\alpha}{2}+\frac{V_{+}-V_{-}}{2\alpha},\\ z(\alpha)\notin[V_{+},+\infty)\cup[V_{-},+\infty).\end{cases} \tag{5.9}\]
We will show that (5.9) is equivalent to
\[\begin{cases}\operatorname{Re}\,\left(-\frac{\alpha}{2}-\frac{V_{+}-V_{-}}{2 \alpha}\right)>0,\\ \operatorname{Re}\,\left(-\frac{\alpha}{2}+\frac{V_{+}-V_{-}}{2\alpha}\right)> 0.\end{cases} \tag{5.10}\]
Indeed, we prove the equivalence in both directions:
* (5.9) \(\Longrightarrow\) (5.10): Since \(V_{+}-z(\alpha)\notin(-\infty,0]\), the real part of the square root \(\sqrt{V_{+}-z(\alpha)}\) must be positive. The same argument applies to the real part of the square root \(\sqrt{V_{-}-z(\alpha)}\).
* (5.10) \(\Longrightarrow\) (5.9): Using the fact that, for \(w\) is a complex number, \(\sqrt{w^{2}}=w\) if \(\operatorname{Re}w>0\), we have \[\sqrt{V_{+}-z(\alpha)} =\sqrt{\left(-\frac{\alpha}{2}-\frac{V_{+}-V_{-}}{2\alpha}\right)^ {2}}=-\frac{\alpha}{2}-\frac{V_{+}-V_{-}}{2\alpha},\] \[\sqrt{V_{-}-z(\alpha)} =\sqrt{\left(-\frac{\alpha}{2}+\frac{V_{+}-V_{-}}{2\alpha}\right) ^{2}}=-\frac{\alpha}{2}+\frac{V_{+}-V_{-}}{2\alpha}.\]
If \(V_{\pm}-z(\alpha)\in(-\infty,0]\), then \(\operatorname{Re}\sqrt{V_{\pm}-z(\alpha)}=0\) and thus, we have a contradiction with (5.10). Therefore, the last condition in (5.9) is obtained.
In summary, we have shown that the point \(z(\alpha)\) is an eigenvalue of \(\mathscr{L}_{\alpha}\) if and only if \(\alpha\) satisfies the conditions in (5.10). By elementary calculations, the conditions on \(\alpha\) in (5.10) can be rewritten as
\[|\langle V_{+}-V_{-},\alpha\rangle_{\mathbb{R}^{2}}|<-|\alpha|^{2}\text{Re}\,\alpha,\]
which is the description of the set \(\Omega\) in the statement of this theorem. Any eigenfunction associated with \(z(\alpha)\) must be of the form (5.3) with \(A_{-}=B_{+}\); then, thanks to (5.9), the eigenspace associated with \(z(\alpha)\) is expressed as
\[\text{Ker}(\mathscr{L}_{\alpha}-z(\alpha))=\{Cu_{\alpha}:C\in\mathbb{C}\},\]
where \(u_{\alpha}\) is given by (2.18).
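As a numerical illustration of the above characterization (with hypothetical values of \(V_{+}\) and \(V_{-}\)), the following sketch samples random \(\alpha\in\mathbb{C}\), forms \(z(\alpha)\) from (5.7), and checks that the identity \(\sqrt{V_{+}-z(\alpha)}+\sqrt{V_{-}-z(\alpha)}=-\alpha\) (with principal square roots) holds precisely when \(\alpha\) satisfies the inequality describing \(\Omega\); boundary cases are avoided with probability one by the random sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
Vp, Vm = 2.0 + 3.0j, -1.0 - 2.0j        # hypothetical values of V_+ and V_-

def in_Omega(al):
    # |<V_+ - V_-, alpha>_{R^2}| < -|alpha|^2 Re(alpha)
    return abs(((Vp - Vm) * np.conj(al)).real) < -abs(al) ** 2 * al.real

def z_of(al):
    return (Vp + Vm) / 2 - (Vp - Vm) ** 2 / (4 * al ** 2) - al ** 2 / 4

def residual(al):
    z = z_of(al)
    # numpy's principal square root has nonnegative real part
    return abs(np.sqrt(Vp - z) + np.sqrt(Vm - z) + al)

alphas = [complex(*rng.normal(scale=3.0, size=2)) for _ in range(20_000)]
res_in = [residual(a) for a in alphas if in_Omega(a)]
res_out = [residual(a) for a in alphas if not in_Omega(a)]

print(max(res_in))   # tiny (machine precision): relation (5.4) holds on Omega
print(min(res_out))  # stays away from 0 for these samples: no eigenvalue outside Omega
```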
### Asymptotic resolvent norm of \(\mathscr{L}_{\alpha}\): Proof of Theorem 2.14
Since the actions of \(\mathscr{L}\) and \(\mathscr{L}_{\alpha}\) are the same, by following the proof of Proposition 2.2, the resolvent of \(\mathscr{L}_{\alpha}\) can be constructed in the same way. That is, the solution \(u\) of the resolvent equation \((\mathscr{L}_{\alpha}-z)u=f\) can be found in the form (3.2), in which
* \(u_{\pm}\) is the restriction of \(u\) on \(\mathbb{R}_{\pm}\);
* \(\alpha_{\pm}\) and \(\beta_{\pm}\) are the functions defined in (3.5), where \(A_{+}\) and \(B_{-}\) are the numbers defined in (3.7) and (3.8) by the decay conditions of \(u_{\pm}\) at \(\pm\infty\), while \(A_{-}\) and \(B_{+}\) are determined by the continuity of \(u\) at zero (\(u_{+}(0)=u_{-}(0)\)) and the jump condition \(u_{+}^{\prime}(0)-u_{-}^{\prime}(0)=\alpha u(0)\).
Then, the resolvent of \(\mathscr{L}_{\alpha}\) can be expressed in the integral form,
\[\left[\left(\mathscr{L}_{\alpha}-z\right)^{-1}f\right](x)=\int_{\mathbb{R}} \mathscr{R}_{z,\alpha}(x,y)f(y)\,\mathrm{d}y, \tag{5.11}\]
where \(\mathscr{R}_{z,\alpha}(x,y)\) is defined by
\[\mathscr{R}_{z,\alpha}(x,y)\coloneqq\begin{cases}\frac{1}{2k_{+}(z)}e^{-k_{+} (z)|x-y|}+\frac{k_{+}(z)-k_{-}(z)-\alpha}{2k_{+}(z)\left(k_{+}(z)+k_{-}(z)+ \alpha\right)}e^{-k_{+}(z)(x+y)}&\text{for }\{x>0,y>0\};\\ \frac{1}{2k_{-}(z)}e^{-k_{-}(z)|x-y|}-\frac{k_{+}(z)-k_{-}(z)+\alpha}{2k_{-}( z)\left(k_{+}(z)+k_{-}(z)+\alpha\right)}e^{k_{-}(z)(x+y)}&\text{for }\{x<0,y<0\};\\ \frac{1}{k_{+}(z)+k_{-}(z)+\alpha}e^{-k_{+}(z)x+k_{-}(z)y}&\text{for }\{x>0,y<0\};\\ \frac{1}{k_{+}(z)+k_{-}(z)+\alpha}e^{k_{-}(z)x-k_{+}(z)y}&\text{for }\{x<0,y>0\}.\end{cases}\]
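As a sanity check on this kernel, the following sketch verifies numerically, for random complex values of \(k_{+}(z)\), \(k_{-}(z)\) and \(\alpha\), that for fixed \(y\neq 0\) the map \(x\mapsto\mathscr{R}_{z,\alpha}(x,y)\) is continuous at \(x=0\) and that the jump of its \(x\)-derivative at \(x=0\) equals \(\alpha\,\mathscr{R}_{z,\alpha}(0,y)\), i.e., the kernel respects the interface conditions defining \(\operatorname{Dom}(\mathscr{L}_{\alpha})\); the common factors \(e^{-k_{+}(z)y}\) and \(e^{k_{-}(z)y}\) cancel, so only the coefficient identities are tested.

```python
import numpy as np

rng = np.random.default_rng(1)

for _ in range(1000):
    kp, km, al = (complex(*rng.normal(size=2)) for _ in range(3))
    if abs(kp + km + al) < 1e-6:       # the kernel requires k_+ + k_- + alpha != 0
        continue
    Kpp = (kp - km - al) / (2 * kp * (kp + km + al))
    Kmm = -(kp - km + al) / (2 * km * (kp + km + al))
    Kpm = 1.0 / (kp + km + al)

    # continuity of x -> R_{z,alpha}(x, y) at x = 0 (cases y > 0 and y < 0)
    assert np.isclose(1 / (2 * kp) + Kpp, Kpm)
    assert np.isclose(1 / (2 * km) + Kmm, Kpm)
    # jump of the x-derivative at x = 0 equals alpha * R_{z,alpha}(0, y)
    assert np.isclose(0.5 - kp * Kpp - km * Kpm, al * Kpm)
    assert np.isclose(0.5 - km * Kmm - kp * Kpm, al * Kpm)
```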
Next, we decompose the resolvent as in Section 4, that is
\[(\mathscr{L}_{\alpha}-z)^{-1}=\mathscr{R}_{1,z,\alpha}+\mathscr{R}_{2,z, \alpha},\]
where
\[\mathscr{R}_{1,z,\alpha}f(x)\coloneqq\int_{\mathbb{R}}\mathscr{R}_{1,z,\alpha }(x,y)f(y)\,\mathrm{d}y,\qquad\mathscr{R}_{2,z,\alpha}f(x)\coloneqq\int_{ \mathbb{R}}\mathscr{R}_{2,z,\alpha}(x,y)f(y)\,\mathrm{d}y, \tag{5.12}\]
with
\[\mathscr{R}_{1,z,\alpha}(x,y) =\begin{cases}K_{++}(z,\alpha)e^{-k_{+}(z)(x+y)},&\text{for }\{x>0,y>0\};\\ K_{--}(z,\alpha)e^{k_{-}(z)(x+y)},&\text{for }\{x<0,y<0\};\\ K_{+-}(z,\alpha)e^{-k_{+}(z)x+k_{-}(z)y},&\text{for }\{x>0,y<0\};\\ K_{+-}(z,\alpha)e^{k_{-}(z)x-k_{+}(z)y},&\text{for }\{x<0,y>0\};\end{cases} \tag{5.13}\] \[\mathscr{R}_{2,z,\alpha}(x,y) =\mathscr{R}_{2,z}(x,y). \tag{5.14}\]
in which
\[K_{++}(z,\alpha) \coloneqq\frac{k_{+}(z)-k_{-}(z)-\alpha}{2k_{+}(z)\left(k_{+}(z)+k_{- }(z)+\alpha\right)},\qquad K_{--}(z,\alpha)\coloneqq-\frac{k_{+}(z)-k_{-}(z)+ \alpha}{2k_{-}(z)\left(k_{+}(z)+k_{-}(z)+\alpha\right)},\] \[K_{+-}(z,\alpha) \coloneqq\frac{1}{k_{+}(z)+k_{-}(z)+\alpha}.\]
By applying the proof of Proposition 4.2, the norm of \(\mathcal{R}_{1,z,\alpha}\) is estimated as follows, for all \(z\in\rho(\mathscr{L}_{\alpha})\),
\[\|\mathcal{R}_{1,z,\alpha}\| \leq\frac{1}{\sqrt{2}}\sqrt{\sqrt{(A(z,\alpha)-C(z,\alpha))^{2}+ B(z,\alpha)^{2}}+A(z,\alpha)+C(z,\alpha)+2D(z,\alpha)}, \tag{5.15}\] \[\|\mathcal{R}_{1,z,\alpha}\| \geq\frac{1}{\sqrt{2}}\sqrt{\sqrt{(A(z,\alpha)-C(z,\alpha))^{2}+ \widetilde{B}(z,\alpha)^{2}}+A(z,\alpha)+C(z,\alpha)+2D(z,\alpha)},\]
where the quantities \(A(z,\alpha)\), \(B(z,\alpha)\), \(C(z,\alpha)\), \(D(z,\alpha)\), \(\widetilde{B}(z,\alpha)\) are defined by replacing \(K_{++}(z)\), \(K_{--}(z)\) and \(K_{+-}(z)\) in the formulas of \(A(z)\), \(B(z)\), \(C(z)\), \(D(z)\), \(\widetilde{B}(z)\) by the above \(K_{++}(z,\alpha)\), \(K_{--}(z,\alpha)\) and \(K_{+-}(z,\alpha)\). Let us recall and reuse here the conventions from the beginning of Section 4. Thanks to Lemma 4.3, it is easy to obtain the following asymptotic expansions:
\[|k_{+}(z)-k_{-}(z)-\alpha| =2\sqrt{\tau}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}} \right)\right),\] \[|k_{+}(z)-k_{-}(z)+\alpha| =2\sqrt{\tau}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}} \right)\right),\] \[|k_{+}(z)+k_{-}(z)+\alpha| =|\alpha|\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right) \right),\] \[|K_{++}(z,\alpha)| =\frac{1}{|\alpha|}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2 }}\right)\right),\] \[|K_{--}(z,\alpha)| =\frac{1}{|\alpha|}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2 }}\right)\right),\] \[|K_{+-}(z,\alpha)| =\frac{1}{|\alpha|}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2 }}\right)\right).\]
From these expansions, the asymptotic formulas of \(A(z,\alpha),C(z,\alpha),D(z,\alpha)\) and \(B(z,\alpha),\widetilde{B}(z,\alpha)\) are deduced, that is
\[A(z,\alpha) =\frac{\tau}{|\alpha|^{2}|\text{Im}\,V_{+}-\delta|^{2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right),\] \[C(z,\alpha) =\frac{\tau}{|\alpha|^{2}|\text{Im}\,V_{-}-\delta|^{2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right),\] \[D(z,\alpha) =\frac{\tau}{|\alpha|^{2}|\text{Im}\,V_{+}-\delta||\text{Im}\,V_ {-}-\delta|}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right),\] \[B(z,\alpha) =\frac{2|\text{Im}\,V_{+}-\text{Im}\,V_{-}|\tau}{|\alpha|^{2}| \text{Im}\,V_{+}-\delta|^{3/2}|\text{Im}\,V_{-}-\delta|^{3/2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right),\] \[\widetilde{B}(z,\alpha) =\frac{2|\text{Im}\,V_{+}-\text{Im}\,V_{-}|\tau}{|\alpha|^{2}| \text{Im}\,V_{+}-\delta|^{3/2}|\text{Im}\,V_{-}-\delta|^{3/2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right).\]
Then, by using (4.11), we obtain
\[\sqrt{\left(A(z,\alpha)-C(z,\alpha)\right)^{2}+B(z,\alpha)^{2}} =\frac{|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}|^{2}\tau}{|\alpha|^{ 2}|\mathrm{Im}\,V_{+}-\delta|^{2}|\mathrm{Im}\,V_{-}-\delta|^{2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right),\] \[A(z,\alpha)+C(z,\alpha)+2D(z,\alpha) =\frac{|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}|^{2}\tau}{|\alpha|^ {2}|\mathrm{Im}\,V_{+}-\delta|^{2}|\mathrm{Im}\,V_{-}-\delta|^{2}}\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right).\]
From (5.15), since \(B(z,\alpha)\) and \(\widetilde{B}(z,\alpha)\) have the same asymptotic behaviors, we deduce that
\[\|\mathcal{R}_{1,z,\alpha}\|=\frac{|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}| \sqrt{\tau}}{|\alpha||\mathrm{Im}\,V_{+}-\delta||\mathrm{Im}\,V_{-}-\delta|} \left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right).\]
By employing the upper bound (4.13) for the norm of \(\mathcal{R}_{2,z}\), we see that \(\mathcal{R}_{2,z,\alpha}\) is indeed a small perturbation of \(\mathcal{R}_{1,z,\alpha}\):
\[\frac{\|\mathcal{R}_{2,z,\alpha}\|}{\|\mathcal{R}_{1,z,\alpha}\|}=\mathcal{O} \left(\frac{1}{|\tau|^{1/2}}\right).\]
Then, the triangle inequality gives
\[\|(\mathscr{L}_{\alpha}-z)^{-1}\|=\|\mathcal{R}_{1,z,\alpha}\|\left(1+ \mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right),\]
and the conclusion of the theorem follows.
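The following sketch checks the key asymptotic expansions used above, assuming (as the notation suggests) the convention \(z=\tau+i\delta\) with \(\tau\to+\infty\) and \(\delta=\operatorname{Im}z\) strictly between \(\operatorname{Im}V_{-}\) and \(\operatorname{Im}V_{+}\), and \(k_{\pm}(z)=\sqrt{V_{\pm}-z}\) taken as the principal square root (positive real part); the potential values, \(\alpha\) and \(\delta\) are hypothetical. The three printed ratios tend to \(1\), in line with \(|k_{+}-k_{-}-\alpha|\sim 2\sqrt{\tau}\), \(|k_{+}+k_{-}+\alpha|\sim|\alpha|\) and \(|K_{++}(z,\alpha)|\sim 1/|\alpha|\).

```python
import numpy as np

# hypothetical data; delta lies strictly between Im V_- = -1 and Im V_+ = 2
Vp, Vm, al, delta = 1.0 + 2.0j, -0.5 - 1.0j, 0.8 - 0.3j, 0.4

for tau in [1e2, 1e4, 1e6, 1e8]:
    z = tau + 1j * delta
    kp, km = np.sqrt(Vp - z), np.sqrt(Vm - z)       # principal roots, Re k > 0
    Kpp = (kp - km - al) / (2 * kp * (kp + km + al))
    print(f"tau={tau:.0e}",
          abs(kp - km - al) / (2 * np.sqrt(tau)),   # -> 1
          abs(kp + km + al) / abs(al),              # -> 1
          abs(Kpp) * abs(al))                       # -> 1
```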
### Accurate pseudomode for \(\mathscr{L}_{\alpha}\): Proof of Theorem 2.15
Let us fix \(\alpha\in\mathbb{C}\setminus\{0\}\) and \(z\in\rho(\mathscr{L}_{\alpha})\). We follow the proof of Theorem 2.9 in subsection 4.3 and modify some of its calculations. We start by choosing the pseudomode in the form
\[\Psi_{z,\alpha}(x)=\Psi_{1,z,\alpha}(x)+\Psi_{2,z,\alpha}(x),\]
where
\[\Psi_{1,z,\alpha}(x)=n_{1}(z,\alpha)e^{k_{-}(z)x}\mathbf{1}_{\mathbb{R}_{-}}( x)+p_{1}(z,\alpha)e^{-k_{+}(z)x}\mathbf{1}_{\mathbb{R}_{+}}(x),\]
\[\Psi_{2,z,\alpha}(x)=n_{2}(z,\alpha)e^{\overline{k_{-}(z)x}}\mathbf{1}_{ \mathbb{R}_{-}}(x)+p_{2}(z,\alpha)e^{-\overline{k_{+}(z)x}}\mathbf{1}_{ \mathbb{R}_{+}}(x),\]
in which \(n_{1}(z,\alpha),n_{2}(z,\alpha),p_{1}(z,\alpha),p_{2}(z,\alpha)\) are complex numbers to be determined later. At the end of the proof, we will see that \(n_{2}\) and \(p_{2}\) can be chosen independently of \(\alpha\), and thus so can \(\Psi_{2,z,\alpha}\). In order for \(\Psi_{z,\alpha}\) to belong to the domain \(\mathrm{Dom}(\mathscr{L}_{\alpha})\), it is necessary that \(\Psi_{z,\alpha}(0^{+})=\Psi_{z,\alpha}(0^{-})\) and \(\Psi^{\prime}_{z,\alpha}(0^{+})-\Psi^{\prime}_{z,\alpha}(0^{-})=\alpha\Psi_{z,\alpha}(0)\), which imposes the following conditions on the coefficients of the pseudomode:
\[\begin{cases}n_{1}(z,\alpha)+n_{2}(z,\alpha)=p_{1}(z,\alpha)+p_{2}(z,\alpha), \\ -k_{+}(z)p_{1}(z,\alpha)-\overline{k_{+}(z)}p_{2}(z,\alpha)-k_{-}(z)n_{1}(z, \alpha)-\overline{k_{-}(z)}n_{2}(z,\alpha)=\alpha(p_{1}(z,\alpha)+p_{2}(z, \alpha)).\end{cases}\]
Since \(z\in\rho(\mathscr{L}_{\alpha})\), we have \(k_{+}(z)+k_{-}(z)+\alpha\neq 0\) (see the argument around (5.4)); hence \(n_{1}(z,\alpha)\) and \(p_{1}(z,\alpha)\) can be expressed in terms of \(n_{2}(z,\alpha)\) and \(p_{2}(z,\alpha)\) as follows:
\[\begin{cases}n_{1}(z,\alpha)=-\dfrac{k_{+}(z)+\overline{k_{-}(z)}+\alpha}{k_{ +}(z)+k_{-}(z)+\alpha}n_{2}(z,\alpha)+\dfrac{k_{+}(z)-\overline{k_{+}(z)}}{k_ {+}(z)+k_{-}(z)+\alpha}p_{2}(z,\alpha),\\ p_{1}(z,\alpha)=\dfrac{k_{-}(z)-\overline{k_{-}(z)}}{k_{+}(z)+k_{-}(z)+\alpha} n_{2}(z,\alpha)-\dfrac{\overline{k_{+}(z)}+k_{-}(z)+\alpha}{k_{+}(z)+k_{-}(z)+ \alpha}p_{2}(z,\alpha).\end{cases} \tag{5.16}\]
Thanks to the condition \(\Psi_{z,\alpha}(0^{+})=\Psi_{z,\alpha}(0^{-})\), we can easily show that \(\Psi_{z,\alpha}\in H^{1}(\mathbb{R})\) and then \(\Psi^{\prime}_{z,\alpha}\in H^{1}(\mathbb{R}\setminus\{0\})\), see (4.19). After that, the jump condition \(\Psi^{\prime}_{z,\alpha}(0^{+})-\Psi^{\prime}_{z,\alpha}(0^{-})=\alpha\Psi_{z,\alpha}(0)\) implies that \(\Psi_{z,\alpha}\in\mathrm{Dom}(\mathscr{L}_{\alpha})\). Let us recall and reuse the convention from the beginning of Section 4, and send \(z\to\infty\) inside the strip bounded by the two essential spectrum half-lines \([V_{+},+\infty)\) and \([V_{-},+\infty)\). By imposing the same assumptions (4.21) on the coefficients \(n_{2}\), \(p_{2}\) and working as in (4.22), we obtain
\[\begin{split}|n_{1}(z,\alpha)|^{2}&=\frac{4\tau|n_{2 }(z,\alpha)-p_{2}(z,\alpha)|^{2}}{|\alpha|^{2}}\left(1+\mathcal{O}\left(\frac{ 1}{|\tau|^{1/2}}\right)\right),\\ |p_{1}(z,\alpha)|^{2}&=\frac{4\tau|n_{2}(z,\alpha)- p_{2}(z,\alpha)|^{2}}{|\alpha|^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}} \right)\right).\end{split} \tag{5.17}\]
The squared norms of \(\Psi_{1,z,\alpha}\) and \(\Psi_{2,z,\alpha}\) can be computed explicitly exactly as in (4.24), and we can then show that \(\Psi_{2,z,\alpha}\) is just a small perturbation of \(\Psi_{1,z,\alpha}\); more precisely, \(\|\Psi_{2,z,\alpha}\|=\mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\|\Psi_{1,z,\alpha}\|\), and thus, by the triangle inequality, we have
\[\|\Psi_{z,\alpha}\|=\|\Psi_{1,z,\alpha}\|\left(1+\mathcal{O}\left(\frac{1}{| \tau|^{1/2}}\right)\right).\]
Without difficulty, we obtain the asymptotic behavior for the quotient
\[\frac{\|(\mathscr{L}_{\alpha}-z)\Psi_{z,\alpha}\|^{2}}{\|\Psi_{z,\alpha}\|^{2}}= \frac{|\alpha|^{2}|\mathrm{Im}\,V_{+}-\delta||\mathrm{Im}\,V_{-}- \delta|}{\tau|\mathrm{Im}\,V_{+}-\mathrm{Im}\,V_{-}|}\] \[\times\frac{|\mathrm{Im}\,V_{-}-\delta||n_{2}(z,\alpha)|^{2}+| \mathrm{Im}\,V_{+}-\delta||p_{2}(z,\alpha)|^{2}}{|n_{2}(z,\alpha)-p_{2}(z, \alpha)|^{2}}\left(1+\mathcal{O}\left(\frac{1}{|\tau|^{1/2}}\right)\right).\]
As in the argument at the end of subsection 4.3, \(n_{2}\) and \(p_{2}\) are chosen to minimize the function
\[F(n_{2},p_{2})=\frac{|\mathrm{Im}\,V_{-}-\delta||n_{2}|^{2}+|\mathrm{Im}\,V_{ +}-\delta||p_{2}|^{2}}{|n_{2}-p_{2}|^{2}},\]
while satisfying the assumptions in (4.21); they are
\[n_{2}(z,\alpha)=n_{2}(z)=-|\mathrm{Im}\,V_{+}-\delta|,\qquad p_{2}(z,\alpha)= p_{2}(z)=|\mathrm{Im}\,V_{-}-\delta|.\]
With these values of \(n_{2}\) and \(p_{2}\), the conclusion of Theorem 2.15 follows.
## Appendix A Properties of the operator \(\mathscr{L}\): Proof of Proposition 2.1
Let us introduce a translated sesquilinear form associated with \(Q\), that is
\[\widetilde{Q}(u,v)\coloneqq Q(u,v)+(1-\min\{\mathrm{Re}\,V_{+},\mathrm{Re}\, V_{-}\})\langle u,v\rangle,\qquad\mathrm{Dom}(\widetilde{Q})\coloneqq H^{1}( \mathbb{R}).\]
The coercivity of the sesquilinear form \(\widetilde{Q}\) is given by, for all \(u\in H^{1}(\mathbb{R})\),
\[\left|\widetilde{Q}(u,u)\right|\geq\mathrm{Re}\,\,\widetilde{Q}(u,u) =\|u^{\prime}\|^{2}+\|u\|^{2}+(\mathrm{Re}\,V_{+}-\min\{\mathrm{Re} \,V_{+},\mathrm{Re}\,V_{-}\})\int_{0}^{+\infty}|u|^{2}\,\mathrm{d}x\] \[\qquad+(\mathrm{Re}\,V_{-}-\min\{\mathrm{Re}\,V_{+},\mathrm{Re}\, V_{-}\})\int_{-\infty}^{0}|u|^{2}\,\mathrm{d}x\] \[\geq\|u\|_{H^{1}}^{2}.\]
It is easy to obtain the continuity of \(\widetilde{Q}\), _i.e.,_ there exists a positive constant \(C\) such that \(\left|\widetilde{Q}(u,v)\right|\leq C\|u\|_{H^{1}}\|v\|_{H^{1}}\). Then, by the Lax-Milgram theorem (cf. [6, Theorem 2.89]), \(\widetilde{Q}\) is associated with a closed, densely defined and bijective operator \(\widetilde{\mathscr{L}}\) whose domain is given by
\[\mathrm{Dom}\left(\widetilde{\mathscr{L}}\right)=\left\{\begin{aligned} u \in H^{1}(\mathbb{R}):&\text{there exists }\widetilde{f}\in L^{2}(\mathbb{R}) \text{ such that}\\ &\widetilde{Q}(u,v)=\left\langle\widetilde{f},v\right\rangle \text{ for all }v\in H^{1}(\mathbb{R})\end{aligned}\right\},\] \[\left\langle\widetilde{\mathscr{L}}u,v\right\rangle= \widetilde{Q}(u,v),\qquad\forall u\in\mathrm{Dom}\left(\widetilde{\mathscr{L}} \right),\forall v\in H^{1}(\mathbb{R}).\]
Then, the operator \(\mathscr{L}\) is defined by shifting the operator \(\widetilde{\mathscr{L}}\), that is,
\[\mathscr{L}\coloneqq\widetilde{\mathscr{L}}-(1-\min\{\operatorname{Re}V_{+}, \operatorname{Re}V_{-}\})\,,\qquad\operatorname{Dom}(\mathscr{L})= \operatorname{Dom}(\widetilde{\mathscr{L}}).\]
**(1)**: We will show that \(\operatorname{Dom}\left(\mathscr{L}\right)=H^{2}(\mathbb{R})\). Let \(u\in\operatorname{Dom}\left(\mathscr{L}\right)\). By taking \(v\) in (2.1) from the space of test functions \(C_{c}^{\infty}(\mathbb{R})\), we get, in the distributional sense, that
\[\mathscr{L}u=-u^{\prime\prime}+V(x)u.\]
Since \(\mathscr{L}u\in L^{2}(\mathbb{R})\) and \(V(x)u\in L^{2}(\mathbb{R})\), it follows that \(u^{\prime\prime}\in L^{2}(\mathbb{R})\) and thus \(u\in H^{2}(\mathbb{R})\). Therefore, \(\operatorname{Dom}\left(\mathscr{L}\right)\subset H^{2}(\mathbb{R})\). The reverse inclusion \(H^{2}(\mathbb{R})\subset\operatorname{Dom}\left(\mathscr{L}\right)\) is easily obtained by integration by parts. Since \(\widetilde{\mathscr{L}}\) is bijective, we deduce that \(-\left(1-\min\{\operatorname{Re}V_{+},\operatorname{Re}V_{-}\}\right)\in\rho(\mathscr{L})\). In other words, \(\rho(\mathscr{L})\) is nonempty.
**(2)**: Let us recall the definition of the numerical range of an operator: it is the subset of \(\mathbb{C}\) defined by
\[\operatorname{Num}(\mathscr{L})=\left\{\langle\mathscr{L}\psi,\psi\rangle\in \mathbb{C}:\,\psi\in\operatorname{Dom}(\mathscr{L}),\,\|\psi\|=1\right\}.\]
Given \(\psi\in\operatorname{Dom}(\mathscr{L})\) such that \(\|\psi\|=1\), we have
\[\langle\mathscr{L}\psi,\psi\rangle= \|\psi^{\prime}\|^{2}+V_{+}\int_{0}^{+\infty}|\psi|^{2}\,\mathrm{ d}x+V_{-}\int_{-\infty}^{0}|\psi|^{2}\,\mathrm{d}x.\]
Let \(s=\int_{0}^{+\infty}|\psi|^{2}\,\mathrm{d}x\); it is clear that \(s\in[0,1]\) because of the normalization of \(\psi\). Then we can write
\[\langle\mathscr{L}\psi,\psi\rangle=\|\psi^{\prime}\|^{2}+sV_{+}+(1-s)V_{-}.\]
This implies that \(\operatorname{Num}(\mathscr{L})\subset\{(0,+\infty)+sV_{+}+(1-s)V_{-}:\,s\in[0,1]\}\). In order to prove the reverse inclusion, we fix a function \(f\in C_{c}^{\infty}(\mathbb{R})\) such that \(\operatorname{supp}f\subset\mathbb{R}_{+}\) and \(\|f\|=\|f\|_{L^{2}(\mathbb{R}_{+})}=1\), and we introduce, for \(\lambda>0\), the functions in \(C_{c}^{\infty}(\mathbb{R})\)
\[\psi_{\lambda}(x):=\lambda^{\frac{1}{2}}f(\lambda x),\]
for which it is obvious that \(\|\psi_{\lambda}\|=\|f\|=1\) and \(\|\psi_{\lambda}^{\prime}\|=\lambda\|f^{\prime}\|\). Therefore,
\[\langle\mathscr{L}\psi_{\lambda},\psi_{\lambda}\rangle=t+V_{+},\]
where \(t=\lambda^{2}\|f^{\prime}\|^{2}\) can take any positive value as \(\lambda\) runs over \((0,\infty)\). Consequently, \([V_{+},+\infty)\subset\operatorname{Num}(\mathscr{L})\). In the same manner, by choosing a function \(f\) whose support lies inside \(\mathbb{R}_{-}\), we also obtain \([V_{-},+\infty)\subset\operatorname{Num}(\mathscr{L})\). In other words, the two half-lines \([V_{+},+\infty)\) and \([V_{-},+\infty)\) are contained in \(\operatorname{Num}(\mathscr{L})\). Since \(\operatorname{Num}(\mathscr{L})\) is a convex set, it also contains the convex hull (_i.e._ the smallest convex set containing it) of the set \([V_{+},+\infty)\cup[V_{-},+\infty)\), which is precisely the set \(\{(0,+\infty)+sV_{+}+(1-s)V_{-}:s\in[0,1]\}\). Therefore, the numerical range is described exactly as in (2.2).
We use [23, Proposition 3.19] to show that \(\mathscr{L}\) is an \(m\)-sectorial operator. We choose a sector \(S_{c,\theta}\) whose vertex \(c\) lies between the two points \(\min\{\operatorname{Re}V_{+},\operatorname{Re}V_{-}\}-1\) and \(\min\{\operatorname{Re}V_{+},\operatorname{Re}V_{-}\}\) on the real axis, with a suitable semi-angle \(\theta\in[0,\frac{\pi}{2})\) such that the numerical range \(\operatorname{Num}(\mathscr{L})\) is contained in \(S_{c,\theta}\). Since the point \(\min\{\operatorname{Re}V_{+},\operatorname{Re}V_{-}\}-1\) belongs to \(\rho(\mathscr{L})\setminus S_{c,\theta}\), the operator \(\mathscr{L}\) is \(m\)-sectorial.
**(3)**: The operator \(\mathscr{L}\) can be seen as the sum of the self-adjoint operator \(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\) (with domain \(H^{2}(\mathbb{R})\)) and the bounded multiplication operator \(V(x)\) (on \(L^{2}(\mathbb{R})\)); hence, thanks to [23, Prop. 1.6(vii)], we have
\[\mathscr{L}^{*}=\left(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+V(x)\right)^{*}= \left(-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}\right)^{*}+\left(V(x)\right)^{*} =-\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+\,\overline{V(x)}.\]
By direct computation, we get that, for all \(f\in H^{2}(\mathbb{R})\),
\[\begin{split}\|\mathscr{L}f\|^{2}&=\|-f^{\prime\prime} +(\operatorname{Re}V)f\|^{2}+\|(\operatorname{Im}V)f\|^{2}+2\operatorname{Im} \left\langle-f^{\prime\prime},\operatorname{Im}Vf\right\rangle,\\ \|\mathscr{L}^{*}f\|^{2}&=\|-f^{\prime\prime}+( \operatorname{Re}V)f\|^{2}+\|(\operatorname{Im}V)f\|^{2}-2\operatorname{Im} \left\langle-f^{\prime\prime},\operatorname{Im}Vf\right\rangle.\end{split}\] (A.1)
Since the operator \(\mathscr{L}\) and its adjoint \(\mathscr{L}^{*}\) share the same domain, \(\mathscr{L}\) is normal if and only if \(\|\mathscr{L}f\|=\|\mathscr{L}^{*}f\|\) for all \(f\in H^{2}(\mathbb{R})\), or
\[\operatorname{Im}\,\left\langle-f^{\prime\prime},\operatorname{Im}Vf\right\rangle =0,\qquad\forall f\in H^{2}(\mathbb{R}).\]
By integration by parts, this condition is equivalent to
\[(\operatorname{Im}V_{+}-\operatorname{Im}V_{-})\operatorname{Im}\,\left(f^{ \prime}(0)\overline{f(0)}\right)=0,\qquad\forall f\in H^{2}(\mathbb{R}).\]
Since there exists \(f\in H^{2}(\mathbb{R})\) such that \(\operatorname{Im}\,\left(f^{\prime}(0)\overline{f(0)}\right)\neq 0\), for example, \(f(x)=\frac{1}{\sqrt{x^{2}+1}}+i\frac{x}{x^{2}+1}\), the normality of \(\mathscr{L}\) is equivalent to \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\). From the formulas of \(\mathscr{L}\) and \(\mathscr{L}^{*}\), it is obvious that \(\mathscr{L}\) is self-adjoint if and only if \(V=\overline{V}\), _i.e._, \(\operatorname{Im}V=0\). Next, let us check when \(\mathscr{L}\) is \(\mathcal{P}\)-self-adjoint, \(\mathcal{T}\)-self-adjoint and \(\mathcal{PT}\)-symmetric by a straightforward computation: given \(u\in H^{2}(\mathbb{R})\) and \(x\in\mathbb{R}\),
\[\begin{split}\mathcal{P}\mathscr{L}\mathcal{P}u(x)&=-u^{\prime\prime}(x)+V(-x)u(x),\\ \mathcal{T}\mathscr{L}\mathcal{T}u(x)&=-u^{\prime\prime}(x)+\overline{V}(x)u(x),\\ \mathscr{L}\mathcal{P}\mathcal{T}u(x)&=-\overline{u}^{\prime\prime}(-x)+V(x)\overline{u}(-x),\\ \mathcal{P}\mathcal{T}\mathscr{L}u(x)&=-\overline{u}^{\prime\prime}(-x)+\overline{V}(-x)\overline{u}(-x).\end{split}\] (A.2)
Thus, all the conclusions on the \(\mathcal{P}\)-self-adjointness, \(\mathcal{T}\)-self-adjointness and \(\mathcal{PT}\)-symmetry of \(\mathscr{L}\) follow immediately.
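For completeness, here is a small symbolic check (a sketch, using SymPy) of the example \(f(x)=\frac{1}{\sqrt{x^{2}+1}}+i\frac{x}{x^{2}+1}\) invoked above: it confirms that \(\operatorname{Im}\left(f^{\prime}(0)\overline{f(0)}\right)=1\neq 0\).

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = 1 / sp.sqrt(x**2 + 1) + sp.I * x / (x**2 + 1)

# Im( f'(0) * conj(f(0)) )
val = sp.im(sp.diff(f, x).subs(x, 0) * sp.conjugate(f.subs(x, 0)))
print(sp.simplify(val))   # prints 1
```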
## Appendix B Properties of the operator \(\mathscr{L}_{\alpha}\): Proof of Proposition 2.10
First recall that if \(u\in H^{1}(\mathbb{R})\), then \(|u|^{2}\in H^{1}(\mathbb{R})\) and \((|u|^{2})^{\prime}=2\operatorname{Re}\left(u^{\prime}\overline{u}\right)\) (cf. [4, Corollary 8.10]). Thus, for any \(\varepsilon>0\) and any \(u\in H^{1}(\mathbb{R})\), we have
\[\begin{split}|u(0)|^{2}=\int_{-\infty}^{0}\left(|u(x)|^{2}\right) ^{\prime}\mathrm{d}x=\int_{-\infty}^{0}2\operatorname{Re}\left(u^{\prime} \overline{u}\right)\mathrm{d}x\leq& 2\|u\|_{L^{2}(\mathbb{R}_{-})}\|u^{\prime}\|_{L^{2}( \mathbb{R}_{-})}\\ \leq& 2\|u\|\|u^{\prime}\|\leq\varepsilon\|u^{\prime}\| ^{2}+\frac{1}{\varepsilon}\|u\|^{2}.\end{split}\] (B.1)
In the same manner as for the operator \(\mathscr{L}\) in Proposition 2.1, we define the operator \(\mathscr{L}_{\alpha}\) via the Lax-Milgram theorem applied to a translated form. Choosing and fixing some \(\varepsilon>0\) such that \(\varepsilon|\mathrm{Re}\,\alpha|<1\), we introduce the translated sesquilinear form
\[\widetilde{Q_{\alpha}}(u,v)\coloneqq Q_{\alpha}(u,v)+C_{\operatorname{Re} \alpha,\operatorname{Re}V}\langle u,v\rangle,\qquad\operatorname{Dom}( \widetilde{Q_{\alpha}})\coloneqq H^{1}(\mathbb{R}),\]
where \(C_{\operatorname{Re}\alpha,\operatorname{Re}V}\) is a constant depending on \(\operatorname{Re}\alpha\), \(\operatorname{Re}V_{+}\), \(\operatorname{Re}V_{-}\) defined by
\[C_{\operatorname{Re}\alpha,\operatorname{Re}V}\coloneqq 1-\varepsilon|\mathrm{Re}\, \alpha|-\min\{\operatorname{Re}V_{+},\operatorname{Re}V_{-}\}+\frac{| \mathrm{Re}\,\alpha|}{\varepsilon}.\]
By using the inequality (B.1), we can show that, for all \(u\in H^{1}(\mathbb{R})\),
\[\left|\widetilde{Q_{\alpha}}(u,u)\right|\geq \mathrm{Re}\,\widetilde{Q_{\alpha}}(u,u)\] \[= \|u^{\prime}\|^{2}+\left(1-\varepsilon|\mathrm{Re}\,\alpha|+\frac{ |\mathrm{Re}\,\alpha|}{\varepsilon}\right)\|u\|^{2}+(\mathrm{Re}\,V_{+}-\min\{ \mathrm{Re}\,V_{+},\mathrm{Re}\,V_{-}\})\int_{0}^{+\infty}|u|^{2}\,\mathrm{d}x\] \[+(\mathrm{Re}\,V_{-}-\min\{\mathrm{Re}\,V_{+},\mathrm{Re}\,V_{-} \})\int_{-\infty}^{0}|u|^{2}\,\mathrm{d}x+\mathrm{Re}\,(\alpha)|u(0)|^{2}\] \[\geq (1-\varepsilon|\mathrm{Re}\,\alpha|)\|u\|_{H^{1}}^{2}.\]
Thus, \(\widetilde{Q_{\alpha}}\) is coercive. Using (B.1) again, the continuity of \(\widetilde{Q_{\alpha}}\) is easily obtained. Then, thanks to the Lax-Milgram theorem, there exists a closed, densely defined and bijective operator \(\widetilde{\mathscr{L}_{\alpha}}\) defined by
\[\mathrm{Dom}\left(\widetilde{\mathscr{L}_{\alpha}}\right)= \begin{cases}u\in H^{1}(\mathbb{R}):\text{there exists }\widetilde{f}\in L^{2}(\mathbb{R}) \text{ such that}\\ \widetilde{Q_{\alpha}}(u,v)=\left\langle\widetilde{f},v\right\rangle\text{ for all }v\in H^{1}(\mathbb{R})\end{cases},\] \[\left\langle\widetilde{\mathscr{L}_{\alpha}}u,v\right\rangle= \widetilde{Q_{\alpha}}(u,v),\qquad\forall u\in\mathrm{Dom}\left(\widetilde{ \mathscr{L}_{\alpha}}\right),\forall v\in H^{1}(\mathbb{R}).\]
Then, we define the operator \(\mathscr{L}_{\alpha}\) as usual
\[\mathscr{L}_{\alpha}\coloneqq\widetilde{\mathscr{L}_{\alpha}}-C_{\mathrm{Re} \,\alpha,\mathrm{Re}\,V},\qquad\mathrm{Dom}(\mathscr{L}_{\alpha})\coloneqq \mathrm{Dom}(\widetilde{\mathscr{L}_{\alpha}}).\]
* Let us now identify the domain \(\mathrm{Dom}\left(\widetilde{\mathscr{L}_{\alpha}}\right)\) explicitly. Take \(u\in\mathrm{Dom}(\widetilde{\mathscr{L}_{\alpha}})\); there exists \(f\in L^{2}(\mathbb{R})\) such that, for all \(v\in H^{1}(\mathbb{R})\), \[\int_{\mathbb{R}}u^{\prime}(x)\overline{v^{\prime}(x)}\,\mathrm{d}x+\int_{\mathbb{R}}V(x)u(x)\overline{v(x)}\,\mathrm{d}x+\alpha u(0)\overline{v(0)}=\int_{\mathbb{R}}f(x)\overline{v(x)}\,\mathrm{d}x.\] (B.2) By restricting our consideration to \(v\in C_{c}^{1}(\mathbb{R}\setminus\{0\})\), we get \[\int_{\mathbb{R}}u^{\prime}(x)\overline{v^{\prime}(x)}\,\mathrm{d}x=-\int_{\mathbb{R}}\left(-f(x)+V(x)u(x)\right)\overline{v(x)}\,\mathrm{d}x.\] Thus, \(u^{\prime}\in H^{1}(\mathbb{R}\setminus\{0\})\) and hence \(u\in H^{2}(\mathbb{R}\setminus\{0\})\). From the Sobolev embedding, \(u^{\prime}(0^{+})\) and \(u^{\prime}(0^{-})\) are well-defined. Now, considering \(v\in H^{1}(\mathbb{R})\) and integrating by parts the two functions \(u^{\prime}\) and \(v\) on \(\mathbb{R}_{+}\) and \(\mathbb{R}_{-}\), we have \[\int_{\mathbb{R}}u^{\prime}(x)\overline{v^{\prime}(x)}\,\mathrm{d}x=\left(u^{\prime}(0^{-})-u^{\prime}(0^{+})\right)\overline{v(0)}-\int_{\mathbb{R}}u^{\prime\prime}(x)\overline{v(x)}\,\mathrm{d}x.\] (B.3) Substituting this into (B.2), we deduce that \[\left|u^{\prime}(0^{-})-u^{\prime}(0^{+})+\alpha u(0)\right|\left|v(0)\right|\leq\left(\|u^{\prime\prime}\|+\|V\|_{L^{\infty}}\|u\|+\|f\|\right)\|v\|.\] It follows that \(u^{\prime}(0^{-})-u^{\prime}(0^{+})+\alpha u(0)=0\), since there exists a sequence \(v_{n}\) in \(H^{1}(\mathbb{R})\) such that \(v_{n}(0)=1\) and \(\|v_{n}\|\xrightarrow{n\to+\infty}0\) (for example, take a function \(\varphi\in C_{c}^{1}(\mathbb{R})\) satisfying \(\varphi(0)=1\) and consider \(v_{n}(x)=\varphi(nx)\) for all \(n\in\mathbb{N}\)). Hence, we have proved that \[u\in\mathrm{Dom}(\mathscr{L}_{\alpha})\Rightarrow u\in H^{1}(\mathbb{R})\cap H^{2}(\mathbb{R}\setminus\{0\})\text{ and }u^{\prime}(0^{+})-u^{\prime}(0^{-})=\alpha u(0).\] Conversely, if \(u\in H^{1}(\mathbb{R})\cap H^{2}(\mathbb{R}\setminus\{0\})\) satisfies the jump condition \(u^{\prime}(0^{+})-u^{\prime}(0^{-})=\alpha u(0)\), then, by using (B.3), we have \[Q_{\alpha}(u,v)=\langle-u^{\prime\prime}+V(x)u,v\rangle,\qquad\forall v\in H^{1}(\mathbb{R}),\] and thus \(u\) belongs to the domain of \(\mathscr{L}_{\alpha}\). Furthermore, this also yields the action of \(\mathscr{L}_{\alpha}\) on its domain. Clearly, \(-C_{\mathrm{Re}\,\alpha,\mathrm{Re}\,V}\in\rho(\mathscr{L}_{\alpha})\).
**(2)**: Let \(u\in\operatorname{Dom}(\mathscr{L}_{\alpha})\) be such that \(\|u\|=1\). From (2.13), we have
\[\langle\mathscr{L}_{\alpha}u,u\rangle=\|u^{\prime}\|^{2}+V_{+}\|u\|^{2}_{L^{2}(\mathbb{R}_{+})}+V_{-}\|u\|^{2}_{L^{2}(\mathbb{R}_{-})}+\alpha|u(0)|^{2}.\]
Let \(\varepsilon>0\) and using inequality (B.1), we get
\[|\operatorname{Im}\,\langle\mathscr{L}_{\alpha}u,u\rangle|\leq\varepsilon| \operatorname{Im}\alpha|\|u^{\prime}\|^{2}+\frac{|\operatorname{Im}\alpha|}{ \varepsilon}+\max\{|\operatorname{Im}V_{+}|,|\operatorname{Im}V_{-}|\},\]
and
\[\operatorname{Re}\,\langle\mathscr{L}_{\alpha}u,u\rangle\geq(1-\varepsilon| \operatorname{Re}\alpha|)\|u^{\prime}\|^{2}+\min\{\operatorname{Re}V_{+}, \operatorname{Re}V_{-}\}-\frac{|\operatorname{Re}\alpha|}{\varepsilon}.\]
By choosing \(\varepsilon=\frac{1}{|\operatorname{Re}\alpha|+|\operatorname{Im}\alpha|}\), we deduce that
\[\operatorname{Re}\,\langle\mathscr{L}_{\alpha}u,u\rangle-|\operatorname{Im} \,\langle\mathscr{L}_{\alpha}u,u\rangle|\geq C_{V,\alpha},\]
where \(C_{V,\alpha}\) is defined in (2.14). In other words, we have shown that \(\langle\mathscr{L}_{\alpha}u,u\rangle\in\{\lambda\in\mathbb{C}:|\operatorname{Im}\lambda|\leq\operatorname{Re}\lambda-C_{V,\alpha}\}=S_{C_{V,\alpha},\frac{\pi}{4}}\). The arbitrariness of \(u\in\operatorname{Dom}(\mathscr{L}_{\alpha})\) gives the inclusion \(\operatorname{Num}(\mathscr{L}_{\alpha})\subset S_{C_{V,\alpha},\frac{\pi}{4}}\). From this, it is also clear that \(\mathscr{L}_{\alpha}\) is \(m\)-sectorial.
**(3)**: Let us consider the conjugate-transposed sesquilinear form of \(Q_{\alpha}(u,v)\), that is,
\[\widehat{Q_{\alpha}}(u,v)\coloneqq\overline{Q_{\alpha}(v,u)},\qquad \operatorname{Dom}(\widehat{Q_{\alpha}})=H^{1}(\mathbb{R}),\]
More precisely, for all \(u,v\in H^{1}(\mathbb{R})\),
\[\widehat{Q_{\alpha}}(u,v)=\int_{\mathbb{R}}u^{\prime}(x)\overline{v^{\prime}( x)}\,\mathrm{d}x+\overline{V_{+}}\int_{0}^{+\infty}u(x)\overline{v}(x)\, \mathrm{d}x+\overline{V_{-}}\int_{-\infty}^{0}u(x)\overline{v}(x)\,\mathrm{d} x+\overline{\alpha}u(0)\overline{v(0)}.\]
By working as above (translating the sesquilinear form by the real constant \(C_{\operatorname{Re}\alpha,\operatorname{Re}V}\) to obtain a coercive and continuous one, using the Lax-Milgram theorem to define a corresponding operator, and translating this operator back by the constant \(C_{\operatorname{Re}\alpha,\operatorname{Re}V}\)), we can also define an operator \(\widehat{\mathscr{L}_{\alpha}}\) by
\[\operatorname{Dom}(\widehat{\mathscr{L}_{\alpha}})=\{u\in H^{1}( \mathbb{R})\cap H^{2}(\mathbb{R}\setminus\{0\}):u^{\prime}(0^{+})-u^{\prime}( 0^{-})=\overline{\alpha}u(0)\},\] \[\widehat{\mathscr{L}_{\alpha}}u=-u^{\prime\prime}+\overline{V(x )}u,\qquad\forall u\in\operatorname{Dom}(\widehat{\mathscr{L}_{\alpha}}).\]
Thanks to [6, Theorem 2.90], we have \(\mathscr{L}_{\alpha}^{*}=\widehat{\mathscr{L}_{\alpha}}\). Having obtained the formula for the adjoint, we now discuss the normality of \(\mathscr{L}_{\alpha}\). By comparing the domains of \(\mathscr{L}_{\alpha}\) and \(\mathscr{L}_{\alpha}^{*}\), we first notice that \(\operatorname{Dom}(\mathscr{L}_{\alpha})=\operatorname{Dom}(\mathscr{L}_{\alpha}^{*})\) if and only if \(\alpha\in\mathbb{R}\). Indeed, if \(\alpha\in\mathbb{R}\), it is clear that the two domains coincide, while if \(\alpha\in\mathbb{C}\setminus\mathbb{R}\), we consider, for example, \(u(x)=e^{\frac{\alpha}{2}|x|}\varphi(x)\), where \(\varphi\) is a smooth cut-off function equal to \(1\) in a neighbourhood of zero; then \(u\in\operatorname{Dom}(\mathscr{L}_{\alpha})\) but \(u\notin\operatorname{Dom}(\mathscr{L}_{\alpha}^{*})\). Therefore, working as in (A.1) and integrating by parts, we obtain that \(\mathscr{L}_{\alpha}\) is normal if and only if \(\alpha\in\mathbb{R}\) and
\[\left(\operatorname{Im}V_{+}f^{\prime}(0^{+})-\operatorname{Im}V_{-}f^{ \prime}(0^{-})\right)\overline{f(0)}=0,\qquad\forall f\in\operatorname{Dom}( \mathscr{L}_{\alpha}).\] (B.4)
We consider the following cases:
**Case 1**: \(\operatorname{Im}V_{+}\neq\operatorname{Im}V_{-}\). We consider, for example, the function
\[f(x)=\left(e^{\frac{(1-\operatorname{Im}V_{+}\alpha)x}{\operatorname{Im}V_{+} -\operatorname{Im}V_{-}}}\mathbf{1}_{\mathbb{R}_{-}}(x)+e^{\frac{(1- \operatorname{Im}V_{-}\alpha)x}{\operatorname{Im}V_{+}-\operatorname{Im}V_{-}} }\mathbf{1}_{\mathbb{R}_{+}}(x)\right)\varphi(x),\]
where \(\varphi(x)\) is a smooth cut-off function mentioned above, then,
\[f(0)=1,\qquad f^{\prime}(0^{+})=\frac{1-\operatorname{Im}V_{-}\alpha}{ \operatorname{Im}V_{+}-\operatorname{Im}V_{-}},\qquad f^{\prime}(0^{-})=\frac{ 1-\operatorname{Im}V_{+}\alpha}{\operatorname{Im}V_{+}-\operatorname{Im}V_{-}}.\]
It is clear that \(f\in\operatorname{Dom}(\mathscr{L}_{\alpha})\) and \(\left(\operatorname{Im}V_{+}f^{\prime}(0^{+})-\operatorname{Im}V_{-}f^{\prime }(0^{-})\right)\overline{f(0)}=1\). Therefore, \(\mathscr{L}_{\alpha}\) is nonnormal in this case.
**Case 2**: \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}=0\). Since (B.4) is automatically satisfied for all \(f\in\operatorname{Dom}(\mathscr{L}_{\alpha})\), \(\mathscr{L}_{\alpha}\) is normal if and only if \(\alpha\in\mathbb{R}\).
**Case 3**: \(\operatorname{Im}V_{+}=\operatorname{Im}V_{-}\neq 0\). Condition (B.4) is equivalent to
\[\alpha|f(0)|^{2}=0\qquad\forall f\in\operatorname{Dom}(\mathscr{L}_{\alpha}).\]
It is easy to check, using the function \(e^{\frac{\alpha}{2}|x|}\varphi(x)\) mentioned above, that \(\mathscr{L}_{\alpha}\) is normal if and only if \(\alpha=0\).
From the above cases, the necessary and sufficient conditions for the normality of \(\mathscr{L}_{\alpha}\) are as stated in this proposition. The condition for the self-adjointness of \(\mathscr{L}_{\alpha}\) follows easily by comparing the domains and the actions of \(\mathscr{L}_{\alpha}\) and \(\mathscr{L}_{\alpha}^{*}\). In order to check when \(\mathscr{L}_{\alpha}\) is \(\mathcal{P}\)-self-adjoint, \(\mathcal{T}\)-self-adjoint and \(\mathcal{P}\mathcal{T}\)-symmetric, we notice that, since the actions of \(\mathscr{L}_{\alpha}\) and \(\mathscr{L}\) (similarly, of \(\mathscr{L}_{\alpha}^{*}\) and \(\mathscr{L}^{*}\)) are the same on their respective domains, we can use (A.2) to get the conditions on \(V\) as in Proposition 2.1. However, unlike the operator \(\mathscr{L}\), the operator \(\mathscr{L}_{\alpha}\) and its adjoint \(\mathscr{L}_{\alpha}^{*}\) do not have the same domain, so \(\operatorname{Dom}(\mathscr{L}_{\alpha}^{*})\) and \(\operatorname{Dom}(\mathcal{P}\mathscr{L}_{\alpha}\mathcal{P})\) (or \(\operatorname{Dom}(\mathcal{T}\mathscr{L}_{\alpha}\mathcal{T})\)) are not necessarily the same. Thus, we need to check them carefully to get the condition on \(\alpha\). We have
\[\operatorname{Dom}(\mathcal{P}\mathscr{L}_{\alpha}\mathcal{P})= \{u\in L^{2}(\mathbb{R}):u(-x)\in\operatorname{Dom}(\mathscr{L}_ {\alpha})\}\] \[= \{u\in H^{1}(\mathbb{R})\cap H^{2}(\mathbb{R}\setminus\{0\}):u^ {\prime}(0^{+})-u^{\prime}(0^{-})=-\alpha u(0)\},\]
and
\[\operatorname{Dom}(\mathcal{T}\mathscr{L}_{\alpha}\mathcal{T})= \{u\in L^{2}(\mathbb{R}):\overline{u(x)}\in\operatorname{Dom}( \mathscr{L}_{\alpha})\}\] \[= \{u\in H^{1}(\mathbb{R})\cap H^{2}(\mathbb{R}\setminus\{0\}):u^ {\prime}(0^{+})-u^{\prime}(0^{-})=\overline{\alpha}u(0)\}.\]
Therefore, we have \(\operatorname{Dom}(\mathcal{P}\mathscr{L}_{\alpha}\mathcal{P})=\operatorname{Dom}(\mathscr{L}_{\alpha}^{*})\) if and only if \(\overline{\alpha}=-\alpha\), in other words, \(\operatorname{Re}\alpha=0\), while \(\operatorname{Dom}(\mathcal{T}\mathscr{L}_{\alpha}\mathcal{T})=\operatorname{Dom}(\mathscr{L}_{\alpha}^{*})\) holds for any \(\alpha\in\mathbb{C}\). In the same manner, we can check that the inclusion \(\operatorname{Dom}(\mathcal{P}\mathcal{T}\mathscr{L}_{\alpha})\subset\operatorname{Dom}(\mathscr{L}_{\alpha}\mathcal{P}\mathcal{T})\) is ensured if and only if \(\operatorname{Re}\alpha=0\).
|
2301.03175 | Tight Convergence Rate in Subgradient Norm of the Proximal Point
Algorithm | Proximal point algorithm has found many applications, and it has been playing
fundamental roles in the understanding, design, and analysis of many
first-order methods. In this paper, we derive the tight convergence rate in
subgradient norm of the proximal point algorithm, which was conjectured by
Taylor, Hendrickx and Glineur [SIAM J.~Optim., 27 (2017), pp.~1283--1313]. This
sort of convergence results in terms of the residual (sub)gradient norm is
particularly interesting when considering dual methods, where the dual residual
gradient norm corresponds to the primal distance to feasibility. | Guoyong Gu, Junfeng Yang | 2023-01-09T05:24:45Z | http://arxiv.org/abs/2301.03175v1 | # Tight Convergence Rate in Subgradient Norm
###### Abstract
Proximal point algorithm has found many applications, and it has been playing fundamental roles in the understanding, design, and analysis of many first-order methods. In this paper, we derive the tight convergence rate in subgradient norm of the proximal point algorithm, which was conjectured by Taylor, Hendrickx and Glineur [SIAM J. Optim., 27 (2017), pp. 1283-1313]. This sort of convergence results in terms of the residual (sub)gradient norm is particularly interesting when considering dual methods, where the dual residual gradient norm corresponds to the primal distance to feasibility.
**Keywords:** proximal point algorithm, performance estimation framework, subgradient norm, tight convergence rate.
## 1 Introduction
Consider the unconstrained minimization problem
\[\min_{x\in\mathbb{R}^{n}}f(x). \tag{1.1}\]
Here, the objective function \(f:\mathbb{R}^{n}\to\mathbb{R}\cup\{+\infty\}\) is a convex, closed and proper (not necessarily differentiable) function; the class of such functions is denoted by \(\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\) in the sequel. Since the objective function \(f\) is extended real valued, constraints such as nonnegativity, box and/or ball constraints can simply be absorbed into the objective function via indicator functions. Therefore, model (1.1) encompasses a broad class of convex optimization problems.
Let \(\langle\cdot,\cdot\rangle\) be the standard dot inner product and \(B\) any symmetric positive definite matrix of order \(n\). Assume that \(\mathbb{R}^{n}\) is endowed with the weighted inner product \(\langle x,y\rangle_{B}:=\langle Bx,y\rangle\) for \(x,y\in\mathbb{R}^{n}\). Then, the induced Euclidean norm and its dual norm are, respectively, given by
\[\|x\|_{B}=\sqrt{\langle Bx,x\rangle}\ \ \text{and}\ \ \|u\|_{B^{-1}}=\sqrt{ \langle u,B^{-1}u\rangle},\ \ \ x,u\in\mathbb{R}^{n}.\]
In particular, when \(B\) equals the identity matrix, both the Euclidean norm and its dual will be denoted by \(\|\cdot\|\) for simplicity. The proximal mapping of \(f\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\) is defined by
\[\text{prox}_{\alpha f}(x)=\text{argmin}_{y\in\mathbb{R}^{n}}\Big{\{}\alpha f( y)+\frac{1}{2}\|y-x\|_{B}^{2}\Big{\}},\ \text{for}\ \alpha>0\ \text{and}\ x\in\mathbb{R}^{n}. \tag{1.2}\]
The proximal mapping is uniquely well defined for any \(x\in\mathbb{R}^{n}\) as the objective function in (1.2) is strongly convex with respect to \(y\). In this paper, we focus on the proximal point algorithm (PPA, Algorithm 1) for solving the aforementioned minimization problem (1.1).
It then follows from (1.2), (1.3) and Fermat's rule that
\[0\in\alpha_{i}\partial f(x_{i})+B(x_{i}-x_{i-1}),\text{ i.e., }\frac{B(x_{i-1}-x_{i})}{ \alpha_{i}}\in\partial f(x_{i}). \tag{1.4}\]
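As an informal illustration of the iteration and of relation (1.4) (Algorithm 1 itself is not reproduced in this excerpt; its update is \(x_{i}=\text{prox}_{\alpha_{i}f}(x_{i-1})\), i.e., (1.3)), the following minimal Python sketch runs the PPA with \(B=I\) on the test function \(f(x)=\frac{1}{2}\|x\|^{2}\), whose proximal mapping has the closed form \(x/(1+\alpha)\), and checks that the residual \((x_{i-1}-x_{i})/\alpha_{i}\) equals \(\nabla f(x_{i})=x_{i}\) at every step.

```python
import numpy as np

def ppa(prox, x0, alphas):
    """Proximal point iterations x_i = prox_{alpha_i f}(x_{i-1}), with B = I."""
    xs = [np.asarray(x0, dtype=float)]
    for a in alphas:
        xs.append(prox(xs[-1], a))
    return xs

# Test function f(x) = 0.5*||x||^2, whose proximal mapping is x / (1 + a).
prox_quad = lambda x, a: x / (1.0 + a)

rng = np.random.default_rng(0)
alphas = rng.uniform(0.1, 1.0, size=20)
xs = ppa(prox_quad, rng.normal(size=5), alphas)

# Relation (1.4): (x_{i-1} - x_i)/alpha_i is a (sub)gradient of f at x_i;
# for this f it must equal grad f(x_i) = x_i.
for i, a in enumerate(alphas, start=1):
    assert np.allclose((xs[i - 1] - xs[i]) / a, xs[i])
```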
PPA dates back to [14]. It was first introduced to the optimization community in [12], and later analyzed and refined by Rockafellar [17] and Guler [6, 7]. For some recent surveys, we refer to the works of Combettes and Pesquet [1], Parikh and Boyd [16], among others.
### Convergence in function and subgradient values
The standard convergence result in terms of the function value for the PPA is provided by Guler [6, Theorem 2.1]:
\[f(x_{N})-f(x_{*})\leq\frac{R^{2}}{2\sum_{i=1}^{N}\alpha_{i}},\]
for any starting point \(x_{0}\in\mathbb{R}^{n}\) satisfying \(\|x_{0}-x_{*}\|_{B}\leq R\), where \(R>0\) is a constant and \(x_{*}\in\mathbb{R}^{n}\) is an optimal solution. Recently, by using the performance estimation framework, this upper bound was improved by a factor of \(2\) by Taylor et al.:
**Theorem 1.1** ([20, Theorem 4.1]).: _Let \(\{\alpha_{i}\}_{i\geq 1}\) be a sequence of positive step sizes and \(x_{0}\in\mathbb{R}^{n}\) some initial iterate satisfying \(\|x_{0}-x_{*}\|_{B}\leq R\) for some optimal point \(x_{*}\). Any sequence \(\{x_{i}\}_{i\geq 1}\) generated by the PPA with step sizes \(\{\alpha_{i}\}_{i\geq 1}\) applied to a function \(f\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\) satisfies_
\[f(x_{N})-f(x_{*})\leq\frac{R^{2}}{4\sum_{i=1}^{N}\alpha_{i}},\]
_and this bound cannot be improved, even in dimension one._
The tightness of the bound provided in Theorem 1.1 is illustrated by the \(l_{1}\)-shaped one dimensional function
\[f(x)=\frac{\sqrt{B}R|x|}{2\sum_{i=1}^{N}\alpha_{i}}=\frac{R\|x\|_{B}}{2\sum_{ i=1}^{N}\alpha_{i}}\in\mathcal{F}_{0,\infty}(\mathbb{R}),\]
for which \(x_{*}=0\). Here, \(B>0\) is a constant and the starting point \(x_{0}=-R/\sqrt{B}\) is used in the PPA.
If the residual (sub)gradient norm is used as the convergence measure, strong numerical evidence based on performance estimation suggests the following conjecture.
**Conjecture 1.2** ([20, Conjecture 4.2]).: _Let \(\{\alpha_{i}\}_{i\geq 1}\) be a sequence of positive step sizes and \(x_{0}\in\mathbb{R}^{n}\) some initial iterate satisfying \(\|x_{0}-x_{*}\|_{B}\leq R\) for some optimal point \(x_{*}\). For any
sequence \(\{x_{i}\}_{i\geq 1}\) generated by the PPA with step sizes \(\{\alpha_{i}\}_{i\geq 1}\) on a function \(f\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\), there exists for every iterate \(x_{N}\) a subgradient \(g_{N}\in\partial f(x_{N})\) such that_
\[\|g_{N}\|_{B^{-1}}\leq\frac{R}{\sum_{i=1}^{N}\alpha_{i}}.\]
_In particular, the choice \(g_{N}=\frac{x_{N-1}-x_{N}}{\alpha_{N}}\) is a subgradient satisfying the inequality._
This bound cannot be improved, as it is attained by the one-dimensional \(l_{1}\)-shaped function
\[f(x)=\frac{\sqrt{B}R|x|}{\sum_{i=1}^{N}\alpha_{i}}\in\mathcal{F}_{0,\infty}( \mathbb{R}) \tag{1.5}\]
started from \(x_{0}=-R/\sqrt{B}\).
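The following minimal Python sketch (not from the paper; it takes \(B=1\), so that \(\|\cdot\|_{B}\) and \(\|\cdot\|_{B^{-1}}\) reduce to the absolute value) runs the PPA on the worst-case function (1.5) from \(x_{0}=-R\) and confirms that the final residual subgradient satisfies \(\|g_{N}\|_{B^{-1}}=R/\sum_{i=1}^{N}\alpha_{i}\) up to roundoff; the proximal step of \(c|\cdot|\) is soft-thresholding.

```python
import numpy as np

rng = np.random.default_rng(1)
R, N = 1.0, 20
alphas = rng.uniform(0.1, 1.0, size=N)
c = R / alphas.sum()                      # slope of the worst-case function (1.5), B = 1

def soft(x, t):                           # prox of t*|.| (soft-thresholding)
    return np.sign(x) * max(abs(x) - t, 0.0)

x = -R                                    # x_0 = -R/sqrt(B) with B = 1
for a in alphas:
    x_new = soft(x, a * c)
    g = (x - x_new) / a                   # residual subgradient, cf. (1.4)
    x = x_new

print(abs(g), R / alphas.sum())           # agree up to roundoff
```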
### The contribution and organization of the paper
PPA has been playing fundamental roles both theoretically and algorithmically in the optimization area [17, 6, 7, 23, 13, 11]. Even the convergence rate of the famous alternating direction method of multipliers can be established within the framework of PPA, see, e.g., [8].
Convergence rate in terms of the residual (sub)gradient norm is particularly interesting when considering dual methods. In that case, the dual residual gradient norm corresponds to the primal distance to feasibility [2]. Moreover, the (sub)gradient norm is more amenable than function value in nonconvex optimization.
By using the performance estimation framework [3, 20, 21], Conjecture 1.2, i.e., Conjecture 4.2 of [20], is proved in this paper, which gives the first direct proof of the convergence rate in terms of the residual (sub)gradient norm for the PPA. In addition, this convergence rate for the PPA is tight, which means it is the best bound one can derive. Note that a convergence rate in terms of the residual (sub)gradient norm can be proved by combining the convergence rate in terms of \(f(x_{N})-f(x_{*})\) or \(\|x_{N}-x_{*}\|\) with the \(L\)-smoothness of \(f\) [20, 15], but this kind of proof is considered indirect. We want to add that the \(L\)-smoothness of \(f\) is not required in our setting.
At first, our analysis will be restricted to the case where \(B\) is the identity matrix, i.e., the norm \(\|\cdot\|\) is used. After that, we will show that the analysis can easily be extended to the case of the more general norm \(\|\cdot\|_{B}\). The rest of this paper is organized as follows. In the next section, the performance estimation framework is briefly recalled and the worst-case complexity bound is computed numerically via solving a semidefinite programming (SDP) reformulation. In Section 3, the analytical optimal Lagrange multipliers are constructed based on the numerical solutions of the SDP. In Section 4, Conjecture 1.2, i.e., Conjecture 4.2 of [20], is proved under the norm \(\|\cdot\|\), and the result is then extended to the case of the more general norm \(\|\cdot\|_{B}\). Finally, some concluding remarks are drawn in Section 5.
## 2 Performance estimation and numerical results
Performance estimation was originally developed by Drori and Teboulle [3]. Their approach is based on semidefinite relaxations, and was taken further by Kim and Fessler [10], who derived the optimized gradient method analytically. By using convex interpolation and smooth (strongly) convex interpolation, the performance estimation problems were transformed into SDP problems without any relaxation by Taylor et al. [21, 20, 22]. Recently, performance estimation was extended by Ryu et al. [18] to study operator splitting methods for monotone inclusion problems. In [5], the authors established a tight nonergodic sublinear convergence rate
of the PPA for solving maximal monotone inclusion problems. More recently, an accelerated proximal point type algorithm for maximal monotone operator inclusion problems has been derived in [9].
### Performance estimation and SDP reformulation
Under the Euclidean norm \(\|\cdot\|\), the worst-case performance of the PPA (as given in Algorithm 1) with the residual subgradient norm as the performance measure can be formulated as the optimal value of the following performance estimation problem:
\[\sup_{f,x_{*},x_{0},x_{1},\ldots,x_{N},g_{N}} \|g_{N}\|\] \[\text{s.t.}\quad\ f\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n}),\] \[\|x_{0}-x_{*}\|\leq R,\] \[x_{*}\in\arg\min_{x}f(x),\] (PEP) \[x_{i}\text{ is determined by (\ref{eq:pPA}) for }1\leq i\leq N,\] \[g_{N}=\frac{x_{N-1}-x_{N}}{\alpha_{N}}\in\partial f(x_{N}).\]
Suppose that \(\|g_{N}\|\) attains some value at a function \(\tilde{f}\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\) with optimal solution \(x_{*}\) and initial iterate \(x_{0}\); then the same value of the subgradient norm is attained at \(f(\cdot)=\tilde{f}(\cdot+x_{*})-\tilde{f}(x_{*})\) with optimal solution \(0\) and initial iterate \(x_{0}-x_{*}\). This implies that we may assume \(x_{*}=0\) and \(f(x_{*})=0\) without affecting the optimal value of (PEP).
Due to the black-box property, the PPA is determined only by the function values and the subgradients at its iterates. According to the definition of \(\mathcal{F}_{\mu,L}\)-interpolation [21, Definition 2] or [20, Definition 1.1], the iterates of the PPA, along with the function values and subgradients at these iterates, can be considered as optimization variables instead of function \(f\) itself. As a result, the optimization problem (PEP) can be rewritten as
\[\sup_{f_{1},\ldots,f_{N},x_{0},x_{1},\ldots,x_{N},g_{1},\ldots,g_{ N}} \|g_{N}\|\] \[\text{s.t.}\quad\ \|x_{0}\|\leq R,\] \[x_{i}\text{ is determined by (\ref{eq:pPA}) for }1\leq i\leq N,\] \[\big{\{}(0,0,0),(x_{1},g_{1},f_{1}),\ldots,(x_{N},g_{N},f_{N}) \big{\}}\text{ is }\mathcal{F}_{0,\infty}\text{-interpolable},\] \[\text{ where }f_{i}=f(x_{i}),\ g_{i}=\frac{x_{i-1}-x_{i}}{\alpha_{i} },\text{ for }1\leq i\leq N.\]
Remember that \((0,0,0)\) in the interpolation set \(\big{\{}(0,0,0),(x_{1},g_{1},f_{1}),\ldots,(x_{N},g_{N},f_{N})\big{\}}\) corresponds to the optimal solution \(x_{*}=0\), \(0\in\partial f(0)\), and the optimal value \(f(0)=0\). In view of [20, Theorem 3.3], \(\big{\{}(0,0,0),(x_{1},g_{1},f_{1}),\ldots,(x_{N},g_{N},f_{N})\big{\}}\) is \(\mathcal{F}_{0,\infty}\)-interpolable if and only if
\[f_{i} \geq 0+\langle 0,x_{i}-0\rangle,\ 1\leq i\leq N,\] \[0 \geq f_{i}+\langle g_{i},0-x_{i}\rangle,\ 1\leq i\leq N,\] \[f_{j} \geq f_{i}+\langle g_{i},x_{j}-x_{i}\rangle,\ 1\leq i,j\leq N,\ i\neq j.\]
By replacing the \(\mathcal{F}_{0,\infty}\)-interpolable condition with the last inequalities and eliminating \(g_{i}\), the
optimization problem (PEP) may further be rewritten as
\[\sup_{f_{1},\ldots,f_{N},x_{0},x_{1},\ldots,x_{N}} \|x_{N-1}-x_{N}\|^{2}/\alpha_{N}^{2}\] (2.1) s.t. \[\|x_{0}\|^{2}\leq R^{2},\] \[f_{i}\geq 0,\ 1\leq i\leq N,\] \[0\geq f_{i}+\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},-x_{i }\big{\rangle},\ 1\leq i\leq N,\] \[f_{j}\geq f_{i}+\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},x_ {j}-x_{i}\big{\rangle},\ 1\leq i,j\leq N,\ i\neq j.\]
Here, both the objective function and the constraint \(\|x_{0}\|\leq R\) are squared. These changes make the objective function and all the constraints simple quadratic functions over the iterates, while neither one affects the optimal solution.
We gather the iterates of the PPA and the function values at these iterates in the following matrices
\[P =\big{[}x_{0},x_{1},\ldots,x_{N}\big{]}\in\mathbb{R}^{n\times(N+1 )},\] \[F =\big{[}f_{1},f_{2},\ldots,f_{N}\big{]}\in\mathbb{R}^{1\times N}.\]
In addition, we introduce the Gram matrix \(G=P^{T}P\in\mathbb{S}_{+}^{N+1}\). Notice that the rank of the Gram matrix \(G\) is less than or equal to \(n\). On the other hand, to reconstruct the matrix \(P\) from \(G\), we need the dimension \(n\) to be greater than or equal to the rank of \(G\), which is upper bounded by \(N+1\). By replacing the optimization variables with \(F\) and \(G\), we obtain an equivalent SDP problem, linear in \(F\) and \(G\), with rank constraint \(\operatorname{rank}(G)\leq n\). Since the worst case iteration bound is considered, the dimension \(n\) can be chosen freely. Hence, dropping the rank constraint does not incur any relaxation.
Similar reformulations have been adopted in [20, 21, 4, 18]. Below, the SDP, with the rank constraint being dropped, is referred to as the primal SDP, and it shares the same optimal objective function value and the Lagrange multipliers with the optimization problem (2.1).
### Numerical results
SeDuMi [19] is utilized to solve the primal SDP, i.e., the SDP reformulation of the (PEP). For \(R=1\), \(N=20\), and the sequence of positive step lengths \(\{\alpha_{i}\}_{i=1}^{N}\) randomly drawn from the uniform distribution between \(0\) and \(1\), the absolute difference between the numerically computed optimal value of \(\|g_{N}\|\) and the upper bound in Conjecture 1.2, namely \(1/\sum_{i=1}^{N}\alpha_{i}\), is around \(10^{-8}\). In addition, the computed Gram matrix \(G\) is of rank one. The same phenomenon has been observed for many tests on other values of \(N\). This implies that there is an \(f\in\mathcal{F}_{0,\infty}(\mathbb{R})\) in one-dimensional space that attains this bound.
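For reproducibility, the Gram-matrix SDP can be assembled in a few lines with an off-the-shelf modeling tool. The sketch below is our own re-implementation using CVXPY (the paper relied on SeDuMi in MATLAB), and all variable names are ours; it compares the computed worst-case value of \(\|g_{N}\|^{2}\) with the conjectured bound \((R/\sum_{i}\alpha_{i})^{2}\).

```python
# Sketch (our own re-implementation, assuming CVXPY instead of SeDuMi/MATLAB):
# builds the Gram-matrix SDP reformulation of (2.1) and compares the worst-case
# value of ||g_N||^2 with the conjectured bound (R / sum_i alpha_i)^2.
import numpy as np
import cvxpy as cp

def pep_sdp_value(alphas, R=1.0):
    N = len(alphas)
    G = cp.Variable((N + 1, N + 1), PSD=True)   # Gram matrix of [x_0, x_1, ..., x_N]
    F = cp.Variable(N)                          # F[i-1] = f(x_i); recall f(x_*) = 0, x_* = 0
    # <g_i, x_j> with g_i = (x_{i-1} - x_i)/alpha_i translates to Gram-matrix entries:
    gdot = lambda i, j: (G[i - 1, j] - G[i, j]) / alphas[i - 1]
    cons = [G[0, 0] <= R ** 2]                  # ||x_0 - x_*||^2 <= R^2
    for i in range(1, N + 1):
        cons.append(F[i - 1] >= 0)              # f_i >= f_* + <g_*, x_i - x_*>, with g_* = 0
        cons.append(0 >= F[i - 1] - gdot(i, i)) # f_* >= f_i + <g_i, x_* - x_i>
        for j in range(1, N + 1):
            if i != j:                          # f_j >= f_i + <g_i, x_j - x_i>
                cons.append(F[j - 1] >= F[i - 1] + gdot(i, j) - gdot(i, i))
    objective = cp.Maximize((G[N - 1, N - 1] - 2 * G[N - 1, N] + G[N, N]) / alphas[-1] ** 2)
    prob = cp.Problem(objective, cons)
    prob.solve()
    return prob.value

alphas = np.random.rand(10)
print(pep_sdp_value(alphas), (1.0 / alphas.sum()) ** 2)  # the two values should agree closely
```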
In fact, if the upper bound of the subgradient norm \(\|g_{N}\|\) can be expressed as the quotient of two linear functions of \(\{\alpha_{i}\}_{i=1}^{N}\), the coefficients of these \(\{\alpha_{i}\}_{i=1}^{N}\) can be found by solving a system of linear equations. First, by multiplying both sides with the denominator, a linear equation over the coefficients of these \(\{\alpha_{i}\}_{i=1}^{N}\) can be derived for each fixed sequence \(\{\alpha_{i}\}_{i=1}^{N}\). Then, by choosing different sequences of positive step lengths \(\{\alpha_{i}\}_{i=1}^{N}\), a system of linear equations over these coefficients can be constructed, from which the coefficients can be recovered numerically.
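The sketch below is our own illustration of this recovery step: assuming the bound takes the form \((c_{0}+\sum_{i}c_{i}\alpha_{i})/(d_{0}+\sum_{i}d_{i}\alpha_{i})\), each sampled step-length sequence yields one homogeneous linear equation in \((c,d)\), and the coefficient vector can be read off a null-space direction of the stacked system. The oracle `pep_sdp_value` is the function from the previous sketch, not part of the paper.

```python
# Sketch (our own illustration): recover the coefficients of a conjectured bound
#   ||g_N|| = (c_0 + sum_i c_i alpha_i) / (d_0 + sum_i d_i alpha_i)
# from numerically computed worst-case values; pep_sdp_value is the CVXPY sketch above.
import numpy as np

N, samples = 5, 40
rows = []
for _ in range(samples):
    a = np.random.rand(N)
    v = np.sqrt(pep_sdp_value(a))                          # numerically computed ||g_N||
    rows.append(np.concatenate(([1.0], a, [-v], -v * a)))  # c_0 + c^T a - v*(d_0 + d^T a) = 0
A = np.array(rows)
z = np.linalg.svd(A)[2][-1]                                # (approximate) null-space direction
c, d = z[:N + 1], z[N + 1:]
print(np.round(c / d[1], 4), np.round(d / d[1], 4))        # expect c ~ [1,0,...,0], d ~ [0,1,...,1]
```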
## 3 The analytical Lagrange multipliers
The optimization problem (2.1) and the primal SDP share the same Lagrange multipliers. These Lagrange multipliers correspond to the analytical solutions of the dual SDP, which play
a key role in our proof of the conjecture. Numerical solvers such as SeDuMi [19] only provide numerical solutions. To find an analytical one, considerably more effort is required.
### Heuristics
To begin with, we remove the constraints that are always inactive in the primal SDP, as the dual variables corresponding to these constraints are \(0\). For randomly chosen step lengths, we numerically solve the primal SDP with and without certain constraints. If the optimal values of the two corresponding SDPs always remain the same, then these constraints are considered inactive. As such, our experiments indicate that the constraints corresponding to
\[f_{i}\geq 0,\ 1\leq i\leq N-1,\] \[f_{j}\geq f_{i}+\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},x_{j }-x_{i}\big{\rangle},\ \text{for all}\ |i-j|\geq 2,\ 1\leq i,j\leq N,\]
in (2.1) can be removed from the primal SDP, which indicates that these constraints play no role in the primal SDP.
Recall that the bound in Conjecture 1.2 is attained by the (one-dimensional) \(l_{1}\)-shaped function \(f(x)=\frac{R|x|}{\sum_{k=1}^{N}\alpha_{k}}\) started from \(x_{0}=-R\). Setting \(R=1\), the iterations of the PPA (as given in Algorithm 1) yield
\[P =\Big{[}-1,\ -\frac{\alpha_{2:N}}{\alpha_{1:N}},\ -\frac{\alpha_{3:N}}{ \alpha_{1:N}},\ \ldots,\ -\frac{\alpha_{N:N}}{\alpha_{1:N}},\ 0\Big{]}\in\mathbb{R}^{1\times(N+1)},\] \[F =\Big{[}\frac{\alpha_{2:N}}{(\alpha_{1:N})^{2}},\ \frac{\alpha_{3:N}}{(\alpha_{1:N})^{2}},\ \ldots,\ \frac{\alpha_{N:N}}{(\alpha_{1:N})^{2}},\ 0\Big{]}\in \mathbb{R}^{1\times N},\]
from which the analytical optimal solutions for (2.1) and the primal SDP can be recovered. Here and in the following, to simplify the notation, we denote
\[\alpha_{i:j}:=\begin{cases}\sum_{k=i}^{j}\alpha_{k},&i\leq j,\\ 0,&i>j.\end{cases} \tag{3.1}\]
The activeness of the remaining constraints in (2.1) and the primal SDP can be verified at the aforementioned optimal solution.
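As a quick sanity check (our own illustration, not part of the original argument), the PPA on this \(l_{1}\)-shaped function reduces to soft-thresholding, and one can verify numerically that the iterates and the final subgradient norm match the expressions above.

```python
# Sketch (our own sanity check): the PPA on f(x) = |x| / alpha_{1:N} with x_0 = -1
# reduces to soft-thresholding; the iterates should match the entries of P above,
# and ||g_N|| = |x_{N-1} - x_N| / alpha_N should equal 1 / alpha_{1:N}.
import numpy as np

alphas = np.random.rand(8)
c = 1.0 / alphas.sum()                                   # slope of the l1-shaped function
x, iterates = -1.0, []
for a in alphas:
    x = np.sign(x) * max(abs(x) - a * c, 0.0)            # x_i = prox_{alpha_i f}(x_{i-1})
    iterates.append(x)
expected = [-alphas[i + 1:].sum() / alphas.sum() for i in range(len(alphas))]
print(np.allclose(iterates, expected))                   # True
print(abs(iterates[-2] - iterates[-1]) / alphas[-1], c)  # ||g_N|| and 1/alpha_{1:N} coincide
```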
The Lagrange multiplier to the inequality constraint \(f_{N}\geq 0\) in (2.1) or the corresponding constraint in the primal SDP can be computed numerically via the method proposed in paragraph 2 of Subsection 2.2. Furthermore, by assuming that the multiplier is the quotient of two quadratic functions of \(\{\alpha_{i}\}_{i=1}^{N}\), the method can be generalized to compute the Lagrange multipliers corresponding to
\[\|x_{0}\|^{2} \leq 1,\ \text{(recall that $R$ is set to $1$)},\] \[f_{N-1} \geq f_{N}+\big{\langle}\frac{x_{N-1}-x_{N}}{\alpha_{N}},x_{N-1} -x_{N}\big{\rangle}.\]
However, Lagrange multipliers to the other constraints cannot be computed this way.
With the inactive constraints removed, there are in total \(3N\) active constraints left in the reduced (2.1) or the primal SDP. As a consequence, the number of Lagrange multipliers in the Lagrangian of the reduced (2.1) is also \(3N\). On the other hand, by setting the partial derivatives of the Lagrangian for the reduced (2.1) to \(0\), we have \(2N+1\) equations over the Lagrange multipliers. This gap implies that some Lagrange multipliers are not unique, which prevents us from computing them numerically.

In the Lagrangian of the reduced (2.1), for \(i=2,\ldots,N\), by setting the partial derivative with respect to \(f_{i}\) to \(0\), we have that the difference between the Lagrange multipliers associated with
\[0\geq f_{i}+\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},-x_{i} \big{\rangle},\] \[f_{i}\geq f_{i-1}+\big{\langle}\frac{x_{i-2}-x_{i-1}}{\alpha_{i-1 }},x_{i}-x_{i-1}\big{\rangle},\]
appears in the resulting dual constraint. Moreover, by setting the partial derivative with respect to \(x_{N}\) to \(0\), we have the last difference, i.e., the difference between the Lagrange multipliers associated with
\[0\geq f_{N}+\big{\langle}\frac{x_{N-1}-x_{N}}{\alpha_{N}},-x_{N} \big{\rangle},\] \[f_{N}\geq f_{N-1}+\big{\langle}\frac{x_{N-2}-x_{N-1}}{\alpha_{N-1 }},x_{N}-x_{N-1}\big{\rangle},\]
equals \((\alpha_{N}-\alpha_{1:N-1})/(\alpha_{1:N}\alpha_{N})\). This equation motivates us to fix one of the two Lagrange multipliers to \(0\) while keeping the other greater than or equal to \(0\), depending on the sign of \((\alpha_{N}-\alpha_{1:N-1})/(\alpha_{1:N}\alpha_{N})\). A similar treatment can be applied to the pairs of multipliers involved in the differences for \(i=2,\ldots,N-1\). Thus, among the Lagrange multipliers of the reduced (2.1), \(N-1\) of them are fixed to \(0\).
### The Lagrange multipliers
The Lagrange multipliers depend on the step lengths \(\{\alpha_{i}\}_{i=1}^{N}\). In order to write down all the nonzero Lagrange multipliers, we need to introduce a separator.
**Definition 3.1** (Separator).: Let \(\{\alpha_{i}\}_{i=1}^{N}\) be a sequence of positive step sizes, then there exists a unique positive integer \(s\) such that
\[\alpha_{1:s}>\alpha_{s+1:N},\] \[\alpha_{1:s-1}\leq\alpha_{s:N}.\]
This positive integer \(s\) is defined as the separator.
Note that notation (3.1) is used here. Trivially, the separator \(s\) satisfies \(1\leq s\leq N\) and
\[\alpha_{1:i}>\alpha_{i+1:N},\ i\geq s,\] \[\alpha_{1:i}\leq\alpha_{i+1:N},\ i<s.\]
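A direct way to compute the separator (a small helper of our own, not from the paper) is the following:

```python
# Sketch (our own helper): compute the separator s of Definition 3.1.
def separator(alphas):
    total, head = sum(alphas), 0.0
    for s, a in enumerate(alphas, start=1):
        head += a                       # head = alpha_{1:s}
        if head > total - head:         # alpha_{1:s} > alpha_{s+1:N}
            return s
    return len(alphas)                  # unreachable for positive step sizes

print(separator([1, 1, 1, 5]), separator([5, 1, 1, 1]))  # 4 1
```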
All the nonzero Lagrange multipliers and their corresponding constraints are summarized
below.
\[\begin{aligned}
\frac{1}{(\alpha_{1:N})^{2}}\ &:\quad \|x_{0}\|^{2}\leq 1,\\
\frac{2}{\alpha_{1:N}}\ &:\quad f_{N}\geq 0,\\
\frac{2\alpha_{i}}{\alpha_{i:N}\alpha_{i+1:N}}\ &:\quad 0\geq f_{i}+\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},-x_{i}\big{\rangle},\ 1\leq i\leq s-1,\\
\frac{2\alpha_{1:i}}{\alpha_{1:N}\alpha_{i+1:N}}\ &:\quad f_{i}\geq f_{i+1}+\big{\langle}\frac{x_{i}-x_{i+1}}{\alpha_{i+1}},x_{i}-x_{i+1}\big{\rangle},\ 1\leq i\leq s-1,\\
\frac{2(\alpha_{s:N}-\alpha_{1:s-1})}{\alpha_{1:N}\alpha_{s:N}}\ &:\quad 0\geq f_{s}+\big{\langle}\frac{x_{s-1}-x_{s}}{\alpha_{s}},-x_{s}\big{\rangle},\\
\frac{2(\alpha_{1:i}-\alpha_{i+2:N})}{\alpha_{1:N}\alpha_{i+1}}\ &:\quad f_{i}\geq f_{i+1}+\big{\langle}\frac{x_{i}-x_{i+1}}{\alpha_{i+1}},x_{i}-x_{i+1}\big{\rangle},\ s\leq i\leq N-1,\\
\frac{2(\alpha_{1:i}-\alpha_{i+1:N})}{\alpha_{1:N}\alpha_{i+1}}\ &:\quad f_{i+1}\geq f_{i}+\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},x_{i+1}-x_{i}\big{\rangle},\ s\leq i\leq N-1.
\end{aligned}\]
where
\[\begin{aligned}
A_{3} =\;&-\Big{\|}\frac{x_{N-1}-x_{N}}{\alpha_{N}}\Big{\|}^{2}+\frac{1}{(\alpha_{1:N})^{2}}\|x_{0}\|^{2}+\frac{2(\alpha_{s:N}-\alpha_{1:s-1})}{\alpha_{1:N}\alpha_{s:N}}\big{\langle}\frac{x_{s-1}-x_{s}}{\alpha_{s}},-x_{s}\big{\rangle}\\
&+2\sum_{i=1}^{s-1}\Bigl{[}\frac{\alpha_{i}}{\alpha_{i:N}\alpha_{i+1:N}}\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},-x_{i}\big{\rangle}+\frac{\alpha_{1:i}}{\alpha_{1:N}\alpha_{i+1:N}}\big{\langle}\frac{x_{i}-x_{i+1}}{\alpha_{i+1}},x_{i}-x_{i+1}\big{\rangle}\Bigr{]}\\
&+2\sum_{i=s}^{N-1}\Bigl{[}\frac{\alpha_{1:i}-\alpha_{i+2:N}}{\alpha_{1:N}\alpha_{i+1}}\big{\langle}\frac{x_{i}-x_{i+1}}{\alpha_{i+1}},x_{i}-x_{i+1}\big{\rangle}+\frac{\alpha_{1:i}-\alpha_{i+1:N}}{\alpha_{1:N}\alpha_{i+1}}\big{\langle}\frac{x_{i-1}-x_{i}}{\alpha_{i}},x_{i+1}-x_{i}\big{\rangle}\Bigr{]}.
\end{aligned}\]
Next, we reformulate \(A_{3}\) as a sum of nonnegatively weighted squares, which implies that \(A_{3}\) is always nonnegative. According to the value of the separator \(s\), the proof is divided into the following three cases.
(i) For \(s=1\),
\[A_{3} =\Big{\|}\frac{x_{0}}{\alpha_{1:N}}-\frac{\alpha_{1}-\alpha_{3:N} }{\alpha_{1}\alpha_{2}}x_{1}+\frac{\alpha_{1}-\alpha_{2:N}}{\alpha_{1}\alpha_ {2}}x_{2}\Big{\|}^{2}+\frac{\alpha_{1}-\alpha_{2:N}}{\alpha_{1:N}\alpha_{1}^{2 }\alpha_{2}^{2}}\|\alpha_{3:N}x_{1}-\alpha_{2:N}x_{2}\|^{2}\] \[\quad+\sum_{i=s+1}^{N-1}\frac{\alpha_{1:i}-\alpha_{i+1:N}}{\alpha _{1:N}}\Big{\|}\frac{x_{i-1}}{\alpha_{i}}-\frac{\alpha_{i}+\alpha_{i+1}}{ \alpha_{i}\alpha_{i+1}}x_{i}+\frac{x_{i+1}}{\alpha_{i+1}}\Big{\|}^{2}.\]
(ii) For \(2\leq s\leq N-1\),
\[\begin{aligned}
A_{3} =\;&\Big{\|}\frac{x_{0}}{\alpha_{1:N}}-\frac{x_{1}}{\alpha_{2:N}}\Big{\|}^{2}+\sum_{i=1}^{s-2}\frac{\alpha_{i+1}\alpha_{1:N}+2\alpha_{1:i}\alpha_{i+2:N}}{\alpha_{1:N}\alpha_{i+1}}\Big{\|}\frac{x_{i}}{\alpha_{i+1:N}}-\frac{x_{i+1}}{\alpha_{i+2:N}}\Big{\|}^{2}\\
&+\frac{\alpha_{s}\alpha_{1:N}+2\alpha_{1:s-1}\alpha_{s+1:N}}{\alpha_{1:N}\alpha_{s}}A_{4}\\
&+\frac{(\alpha_{1:s}-\alpha_{s+1:N})(\alpha_{s:N}-\alpha_{1:s-1})}{\alpha_{1:N}\alpha_{s}\alpha_{s+1}^{2}(\alpha_{s}\alpha_{1:N}+2\alpha_{1:s-1}\alpha_{s+1:N})}\Big{\|}\alpha_{s+2:N}x_{s}-\alpha_{s+1:N}x_{s+1}\Big{\|}^{2}\\
&+\sum_{i=s+1}^{N-1}\frac{\alpha_{1:i}-\alpha_{i+1:N}}{\alpha_{1:N}}\Big{\|}\frac{x_{i-1}}{\alpha_{i}}-\frac{\alpha_{i}+\alpha_{i+1}}{\alpha_{i}\alpha_{i+1}}x_{i}+\frac{x_{i+1}}{\alpha_{i+1}}\Big{\|}^{2},
\end{aligned}\]
with
\[A_{4}=\Big{\|}\frac{x_{s-1}}{\alpha_{s:N}}-\frac{\alpha_{s+1}\alpha_{1:N}+ \alpha_{s:N}(\alpha_{1:s}-\alpha_{s+1:N})}{\alpha_{s+1}(\alpha_{s}\alpha_{1:N} +2\alpha_{1:s-1}\alpha_{s+1:N})}x_{s}+\frac{\alpha_{s:N}(\alpha_{1:s}-\alpha_ {s+1:N})}{\alpha_{s+1}(\alpha_{s}\alpha_{1:N}+2\alpha_{1:s-1}\alpha_{s+1:N})} x_{s+1}\Big{\|}^{2}.\]
(iii) For \(s=N\),
\[A_{3} =\Big{\|}\frac{x_{0}}{\alpha_{1:N}}-\frac{x_{1}}{\alpha_{2:N}} \Big{\|}^{2}+\sum_{i=1}^{N-2}\frac{\alpha_{i+1}\alpha_{1:N}+2\alpha_{1:i} \alpha_{i+2:N}}{\alpha_{1:N}\alpha_{i+1}}\Big{\|}\frac{x_{i}}{\alpha_{i+1:N}} -\frac{x_{i+1}}{\alpha_{i+2:N}}\Big{\|}^{2}\] \[\quad+\frac{\alpha_{N}-\alpha_{1:N-1}}{\alpha_{1:N}\alpha_{N}^{2 }}\|x_{N}\|^{2}.\]
The reformulation may be verified by comparing the coefficients of \(\langle x_{i},x_{j}\rangle\) for \(i,j=0,1,\ldots,N\), and, according to Definition 3.1, the coefficients of the norm squares in the reformulations of \(A_{3}\) are always nonnegative.
By combining the nonnegativity of \(A_{3}\), the equation (1.4) and the inequality (4.2), we derive that for \(R=1\),
\[\|g_{N}\|^{2}=\Big{\|}\frac{x_{N-1}-x_{N}}{\alpha_{N}}\Big{\|}^{2}\leq\frac{1}{ (\alpha_{1:N})^{2}}.\]
For \(R\neq 1\), if we scale all the \(\{x_{i}\}_{i=0,1,\dots,N}\) by \(R\) and all the \(\{f_{i}\}_{i=1,\dots,N}\) by \(R^{2}\), respectively, we notice that the objective function value is scaled by \(R^{2}\). Hence, the following theorem is proved.
**Theorem 4.1**.: _Let \(\{\alpha_{i}\}_{i\geq 1}\) be a sequence of positive step sizes and \(x_{0}\) some initial iterate satisfying \(\|x_{0}-x_{*}\|\leq R\) for some optimal point \(x_{*}\). For any sequence \(\{x_{i}\}_{i\geq 1}\) generated by the PPA with step sizes \(\{\alpha_{i}\}_{i\geq 1}\) on any function \(f\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\), there exists for every iterate \(x_{N}\) a subgradient \(g_{N}\in\partial f(x_{N})\) such that_
\[\|g_{N}\|\leq\frac{R}{\sum_{i=1}^{N}\alpha_{i}}.\]
_In particular, the choice \(g_{N}=\frac{x_{N-1}-x_{N}}{\alpha_{N}}\) is a subgradient satisfying the inequality._
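As an illustration of Theorem 4.1 (our own sketch, not part of the proof), one can run the PPA on simple convex functions with closed-form proximal maps and check the bound numerically; here we use one-dimensional quadratics \(f(x)=\frac{\lambda}{2}x^{2}\), for which \(\mathrm{prox}_{\alpha f}(x)=x/(1+\alpha\lambda)\).

```python
# Sketch (our own illustration of Theorem 4.1): for f(x) = 0.5*lam*x^2 the proximal map is
# prox_{alpha f}(x) = x / (1 + alpha*lam), and g_N = f'(x_N) = lam*x_N, so the bound
# ||g_N|| <= R / alpha_{1:N} (with x_* = 0, R = |x_0|) can be checked directly.
import numpy as np

for _ in range(1000):
    alphas = np.random.rand(10)
    lam, x = 10 * np.random.rand(), np.random.randn()   # random curvature and initial point
    R = abs(x)
    for a in alphas:
        x = x / (1 + a * lam)                           # PPA step
    assert lam * abs(x) <= R / alphas.sum() + 1e-12     # ||g_N|| <= R / sum(alphas)
print("bound holds on all random quadratic instances")
```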
Note that although some heuristics were involved in constructing the Lagrange multipliers, our proof of Theorem 4.1 does not rely on these heuristics.
Up to now, our analysis has been limited to the case of the Euclidean norm \(\|\cdot\|\). For the more general norm \(\|\cdot\|_{B}\), a one-to-one correspondence can be formed between the feasible solutions of the Euclidean norm \(\|\cdot\|\) case and those of the general norm \(\|\cdot\|_{B}\) case. Suppose that \((f_{1},\dots,f_{N},x_{0},x_{1},\dots,x_{N})\) is a feasible solution to (2.1); then \(\tilde{f}_{1}=f_{1},\dots,\tilde{f}_{N}=f_{N}\), and \(\tilde{x}_{0}=B^{-1/2}x_{0},\ \tilde{x}_{1}=B^{-1/2}x_{1},\dots,\tilde{x}_{N}=B^{-1/2}x_{N}\) is a feasible solution to
\[\sup_{\tilde{f}_{1},\dots,\tilde{f}_{N},\tilde{x}_{0},\tilde{x}_{ 1},\dots,\tilde{x}_{N}} \Big{\|}\frac{B(\tilde{x}_{N-1}-\tilde{x}_{N})}{\alpha_{N}} \Big{\|}_{B^{-1}}^{2}\] s.t. \[\|\tilde{x}_{0}\|_{B}^{2}\leq R^{2},\] \[\tilde{f}_{i}\geq 0,\ 1\leq i\leq N, \tag{4.3}\] \[0\geq\tilde{f}_{i}+\Big{\langle}\frac{B(\tilde{x}_{i-1}-\tilde{ x}_{i})}{\alpha_{i}},-\tilde{x}_{i}\Big{\rangle},\ 1\leq i\leq N,\] \[\tilde{f}_{j}\geq\tilde{f}_{i}+\Big{\langle}\frac{B(\tilde{x}_{i- 1}-\tilde{x}_{i})}{\alpha_{i}},\tilde{x}_{j}-\tilde{x}_{i}\Big{\rangle},\ 1\leq i,j\leq N,\ i\neq j,\]
and vice versa. The optimization problem (4.3) is just a reformulation of the performance estimation problem under the general norm \(\|\cdot\|_{B}\). Notice that the two optimization problems (2.1) and (4.3) have the same objective function value at these corresponding feasible solutions. As a consequence, they also have the same optimal value. Hence, we have the following corollary.
**Corollary 4.2** ([20, Conjecture 4.2]).: _Let \(\{\alpha_{i}\}_{i\geq 1}\) be a sequence of positive step sizes and \(x_{0}\) some initial iterate satisfying \(\|x_{0}-x_{*}\|_{B}\leq R\) for some optimal point \(x_{*}\). For any sequence \(\{x_{i}\}_{i\geq 1}\) generated by the PPA (based on the general norm \(\|\cdot\|_{B}\)) with step sizes \(\{\alpha_{i}\}_{i\geq 1}\) on a function \(f\in\mathcal{F}_{0,\infty}(\mathbb{R}^{n})\), there exists for every iterate \(x_{N}\) a subgradient \(g_{N}\in\partial f(x_{N})\) such that_
\[\|g_{N}\|_{B^{-1}}\leq\frac{R}{\sum_{i=1}^{N}\alpha_{i}}.\]
_In particular, the choice \(g_{N}=\frac{B(x_{N-1}-x_{N})}{\alpha_{N}}\) is a subgradient satisfying the inequality._
Note that this bound cannot be improved, as it is attained on the (one dimensional) \(l_{1}\)-shaped function (1.5).
## 5 Concluding remarks
The PPA plays a fundamental role, both theoretically and algorithmically, in optimization. Convergence results in terms of the residual (sub)gradient norm are particularly interesting when considering dual methods [2]. Moreover, the (sub)gradient norm is a more amenable measure than the function value in nonconvex optimization.
By using the performance estimation framework, Conjecture 4.2 in [20] is proved, which gives the first direct proof of the convergence rate of the PPA in terms of the residual (sub)gradient norm. To the best of our knowledge, even for the PPA with a universal step length, i.e., with all step lengths frozen at the same positive constant, no direct proof of the convergence rate in terms of the residual (sub)gradient norm was known before. In addition, this convergence rate is tight, which means it is the best bound one can derive.
|
2304.03973 | RobCaps: Evaluating the Robustness of Capsule Networks against Affine
Transformations and Adversarial Attacks | Capsule Networks (CapsNets) are able to hierarchically preserve the pose
relationships between multiple objects for image classification tasks. Other
than achieving high accuracy, another relevant factor in deploying CapsNets in
safety-critical applications is the robustness against input transformations
and malicious adversarial attacks.
In this paper, we systematically analyze and evaluate different factors
affecting the robustness of CapsNets, compared to traditional Convolutional
Neural Networks (CNNs). Towards a comprehensive comparison, we test two CapsNet
models and two CNN models on the MNIST, GTSRB, and CIFAR10 datasets, as well as
on the affine-transformed versions of such datasets. With a thorough analysis,
we show which properties of these architectures better contribute to increasing
the robustness and their limitations. Overall, CapsNets achieve better
robustness against adversarial examples and affine transformations, compared to
a traditional CNN with a similar number of parameters. Similar conclusions have
been derived for deeper versions of CapsNets and CNNs. Moreover, our results
unleash a key finding that the dynamic routing does not contribute much to
improving the CapsNets' robustness. Indeed, the main generalization
contribution is due to the hierarchical feature learning through capsules. | Alberto Marchisio, Antonio De Marco, Alessio Colucci, Maurizio Martina, Muhammad Shafique | 2023-04-08T09:58:35Z | http://arxiv.org/abs/2304.03973v2 | RobCaps: Evaluating the Robustness of Capsule Networks against Affine Transformations and Adversarial Attacks
###### Abstract
Capsule Networks (CapsNets) are able to hierarchically preserve the pose relationships between multiple objects for image classification tasks. Other than achieving high accuracy, another relevant factor in deploying CapsNets in safety-critical applications is the robustness against input transformations and malicious adversarial attacks.
In this paper, we systematically analyze and evaluate different factors affecting the robustness of CapsNets, compared to traditional Convolutional Neural Networks (CNNs). Towards a comprehensive comparison, we test two CapsNet models and two CNN models on the MNIST, GTSRB, and CIFAR10 datasets, as well as on the affine-transformed versions of such datasets. With a thorough analysis, we show which properties of these architectures better contribute to increasing the robustness and their limitations. Overall, CapsNets achieve better robustness against adversarial examples and affine transformations, compared to a traditional CNN with a similar number of parameters. Similar conclusions have been derived for deeper versions of CapsNets and CNNs. Moreover, our results unleash a key finding that the dynamic routing does not contribute much to improving the CapsNets' robustness. Indeed, the main generalization contribution is due to the hierarchical feature learning through capsules.
Machine Learning, Deep Neural Networks, Convolutional Neural Networks, Capsule Networks, Dynamic Routing, Adversarial Attacks, Affine Transformations, Security, Robustness, Vulnerability
## I Introduction
In recent years, many works have explored the problems of adversarial examples and affine transformations in Convolutional Neural Networks (CNNs) for image classification applications [1][2][3][4]. Szegedy et al. [5] proposed the concept of adversarial examples, i.e., examples with small perturbations, imperceptible to the human eye, that mislead high-confidence models when added to the input. The same limitation of Deep Neural Networks (DNNs) in image classification is also noticed if the input is affected by affine transformations that do not modify the pixels but their relative position in space. The most common means of limiting these problems is to increase the generalization level of a CNN, which is achievable using different methods. Some research works proposed to increase the depth of CNN architectures [6], others proposed to modify the hyper-parameters [7] and using data pre-processing during the training [8]. For a CNN, the convolutional and the Max Pooling layers provide the generalization and the capability to detect high-order features in a large region of the image (_invariance property_), but without preserving any relation with other identified features.
With the introduction of the Capsule Networks (CapsNets) by Google [9], the basic building block of a neural network, i.e., the neuron, has been replaced by a group of neurons, called _capsule_. The capsules encode spatial information in a vector form. When a detected feature moves around the image, the probability of being detected does not vary, but its _pose_ information changes (_equivariance property_). The work in [10] proposes an efficient way of learning the coupling between capsules from different layers through the so-called _dynamic routing_ algorithm, an iterative process that replaces the behavior of the max pooling, but without losing any information. Hence, such a capsule structure improves the network's generalization because it can efficiently learn cross-correlations between different features of the inputs. Recently, Rajasegaran et al. [11] showed that a deeper version of CapsNets can achieve high accuracy also on mid-complex datasets like the CIFAR10 [12], despite reducing the number of parameters compared to the shallower CapsNet in [10].
Existing works [13][14][15][16] have analyzed the vulnerabilities and robustness of CapsNets against affine transformations and adversarial attacks, respectively. _However, they lack a systematic study comparing different types of CapsNets and CNNs and a detailed analysis of the impact of different CapsNet functions (like dynamic routing) on the robustness._ Moreover, Michels et al. [15] did not investigate the CapsNets' robustness when an adversarial defense, such as the adversarial training [17], is applied.
Such analyses would establish an understanding of differences between CNNs and CapsNets w.r.t. the robustness against adversarial attacks and how the robustness of CapsNets changes depending on the model features. This could help future CapsNet designs in _accounting for the security vulnerabilities into design constraints_, increasing the applicability of CapsNets in real-world scenarios [18].
**Research Questions and Associated Challenges**
The goal of our paper is to investigate these research questions:
1. _Are CapsNets more robust than CNNs against adversarial
attacks and affine transformations?_
2. _If yes, how can these phenomena be analyzed in a systematic way?_
3. _Which CapsNet functions contribute more to the robustness improvement?_
Answering these questions is a challenging task. Firstly, we evaluate a good metric of comparison between CapsNets and CNNs, i.e., which network models give a fair and significant robustness comparison, which types of adversarial attacks are applied, etc. Then, it should be interesting to analyze the transferability of the adversarial attacks, i.e., white-box attacks. _If an adversarial example has been generated to fool network A, does it also fool network B?_
**Our Novel Contributions are** (see Figure 1):
* We generate an affined-transformed version of the CIFAR10 and GTSRB datasets, called **affCIFAR** and **affGTSRB**, respectively. _(Section IV-A)_
* We evaluate / compare the **robustness of different CapsNets and CNNs (like ShallowCaps, DeepCaps, ResNet20) against affine trans-formations** for different datasets and different networks. _(Section IV-B)_
* We compare the robustness of different networks **against adversarial attacks** for different datasets. Further analyses have been carried out in the presence of a defense such as the **adversarial training**. _(Section V)_
* We evaluate the role of the **dynamic routing** towards the CapsNets robustness. _(Section VI)_
In summary, our key results depicted in Table I show that the DeepCaps [11] is more robust than a deeper ResNet20 [6] against affine transformations and different types of adversarial attacks when increasing the complexity of the input data. As we will demonstrate, such improvements in the robustness also hold when the adversarial examples are transferred from one network to the other and vice-versa.
After showing the power of the capsules, we focus our analysis on the dynamic routing, which increases the confidence of the prediction, with a consequent improvement in terms of accuracy. By knowing that, our challenging question is: _Is the dynamic routing also helpful in guaranteeing the CapsNets robustness?_ Our results and analyses provide great insights when relating CNNs and CapsNets against adversarial attacks and affine transformations, as well as how CapsNets' behavior changes when modifying model features.
Before proceeding to the technical sections, we discuss adversarial attacks and CapsNets in Section II to a level of detail necessary to understand our contributions.
## II Background and Related Work
### _Adversarial Attacks_
Formally, given an example \(x\) that is correctly classified by a well-trained model \(M(x)=y_{true}\), an adversarial example \(x^{\prime}=x+\eta\) is defined as a new input, perceptually identical to the original one, but wrongly classified by the model, i.e., \(M(x^{\prime})\neq y_{true}\). Goodfellow et al. [19] proposed the fast gradient sign method (FGSM), a white-box attack to generate adversarial examples by exploiting the gradient of the model w.r.t. the input image, towards the direction of the highest loss. An example of its functionality is shown in Figure 2, where the crafted noise added to the original input is imperceptible to the human eye but results in a misclassification. Afterwards, Madry et al. [17] and Kurakin et al. [20] proposed two different versions of the projected gradient descent (PGD) attack, which is an iterative version of the FGSM that applies the perturbation in multiple smaller steps of size \(\alpha\). After each iteration, the PGD projects the generated image into a ball with a radius \(\varepsilon\), keeping the size of the perturbation small. It is a white-box attack and has both the targeted and untargeted versions. The algorithm consists of the iteration expressed in Equation (1), where \(\theta\) is the set of parameters and \(t\) is the target label.
\[x^{\prime}_{i}=x^{\prime}_{i-1}-proj_{\varepsilon}(\alpha\cdot sign(\nabla_{x} loss(\theta,x,t))) \tag{1}\]
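For concreteness, a generic untargeted \(\ell_{\infty}\) PGD loop is sketched below. This is our own PyTorch sketch (the paper used CleverHans adapted for Keras/TensorFlow), and it implements the standard formulation with projection onto the \(\varepsilon\)-ball after each ascent step, not necessarily the exact targeted variant written in Eq. (1); all names are ours.

```python
# Sketch (assumption: PyTorch): standard untargeted l_inf PGD with projection
# onto the eps-ball around the clean input x after each step.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.005, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()     # ascent step on the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)    # project onto the eps-ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)             # keep a valid image range
    return x_adv.detach()
```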
Carlini and Wagner [21] proposed a powerful white-box targeted attack method that exploits \(l_{\infty}\), \(l_{1}\) and \(l_{2}\) distances to preserve the imperceptibility of the adversarial example. It is performed by solving the optimization problem expressed in Equation (2).
\[||\eta||_{2}+c\cdot max(G(x,\eta,t)-M(x,\theta)_{t},-k) \tag{2}\]
The algorithm aims to minimize both components of the equation: (i) the distance \(\eta\) between the input and the adversarial image, and (ii) the distance between the max output activation (\(G(x,\eta,t):=max_{i\neq t}(M(x+\eta))\)) and the confidence \(M(x)_{t}\) of the target label \(t\). The value \(c\) is updated at every iteration to balance the two terms during the generation of the attacked data. Many works showed the success of such attacks in fooling DNNs and provided state-of-the-art success rate results [19][21][17]. A common countermeasure to defend against such attacks is the adversarial training [17], which extends the training set for DNNs by also including the adversarial examples.

Fig. 1: Overview of our novel contributions in this work.

Fig. 2: Example of an adversarial attack’s functionality, where strawberries are misclassified as chesums [19].
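The optimization in Eq. (2) can be prototyped with a gradient-based optimizer, as sketched below. This is our own simplified PyTorch version, not the CleverHans implementation used in the paper: the trade-off constant \(c\) is kept fixed (no binary search), the perturbation is optimized directly, and a small random initialization of \(\eta\) keeps the \(l_{2}\) term differentiable at the first step.

```python
# Sketch (assumption: PyTorch): simplified Carlini-Wagner-style targeted l2 attack for Eq. (2),
# with a fixed trade-off constant c and no binary search over c. All names are ours.
import torch

def cw_l2_attack(model, x, target, c=1.0, kappa=0.0, steps=100, lr=0.01):
    eta = (1e-3 * torch.randn_like(x)).requires_grad_(True)    # perturbation eta
    opt = torch.optim.Adam([eta], lr=lr)
    for _ in range(steps):
        logits = model(torch.clamp(x + eta, 0.0, 1.0))
        onehot = torch.zeros_like(logits).scatter(1, target.view(-1, 1), 1.0)
        target_logit = (logits * onehot).sum(dim=1)             # activation of the target class
        best_other = (logits - 1e9 * onehot).max(dim=1).values  # G(x, eta, t) = max_{i != t}
        loss = (eta.flatten(1).norm(dim=1)
                + c * torch.clamp(best_other - target_logit, min=-kappa)).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.clamp(x + eta.detach(), 0.0, 1.0)
```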
### _Capsule Networks_
CapsNets gathered attention due to their capability to achieve higher classification accuracy than traditional CNNs. Sabour et al. [10] introduced the first CapsNet architecture, based on the following differences w.r.t. traditional CNNs:
* _capsules_: multi-D entities, instead of single neurons, that constitute each layer.
* a _dynamic routing_ algorithm between two adjacent capsules selects the capsules that must be propagated, based on their pose agreement.
* a _squash_ function compresses the components of each capsule in a small interval at the end of each layer.
The architecture designed in [10], which we call _ShallowCaps_ (for ease of discussion), is composed of:
* a first standard convolutional layer with \(256\)\(9\times 9\) kernels.
* a Primary Capsule layer, convolutional with \(9\times 9\) kernels and the same parameters as the previous layer, but reshaped to form \(32\)\(8\)-dimensional capsules.
* a DigitCaps layer of \(10\) capsules of dimension \(16\).
The last layer defines a transformation matrix that, during the training, learns the relationship between all the capsules of the Primary Capsule layer and the capsules of the DigitCaps layer. The dynamic routing (Fig. 3) has the task of propagating only the activations with a high contribution by updating a set of coupling coefficients. Specifically, this iterative algorithm ensures that only the most voted opinion among the predictions is propagated to the DigitCaps layer.
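A minimal sketch of the squash non-linearity and of the routing-by-agreement loop described above is given below (our own simplified PyTorch notation, not the authors' implementation); `u_hat` holds the prediction vectors from every lower-level capsule to every higher-level capsule.

```python
# Sketch (assumption: PyTorch; simplified notation, ours). u_hat has shape
# [batch, n_lower, n_upper, dim_upper]: prediction of capsule i for capsule j.
import torch

def squash(s, dim=-1, eps=1e-8):
    sq_norm = (s ** 2).sum(dim=dim, keepdim=True)
    return (sq_norm / (1.0 + sq_norm)) * s / torch.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, iterations=3):
    b = torch.zeros(u_hat.shape[:3], device=u_hat.device)  # routing logits b_ij
    for _ in range(iterations):
        c = torch.softmax(b, dim=2)                         # coupling coefficients c_ij
        s = (c.unsqueeze(-1) * u_hat).sum(dim=1)            # weighted sum over lower capsules
        v = squash(s)                                       # output capsules [batch, n_upper, dim]
        b = b + (u_hat * v.unsqueeze(1)).sum(dim=-1)        # agreement <u_hat_ij, v_j>
    return v
```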
The limit of this architecture is that it cannot correctly generalize to a complex dataset like CIFAR10. Kumar et al. [22] proposed a three-layer architecture, like the previous one, for the GTSRB dataset [23], increasing the number of capsules coupled with the DigitCaps layer. This architecture needs a huge number of parameters and a wasteful use of resources to reach performance similar to traditional CNN models. To solve this problem, the DeepCaps [11] has been designed to reduce the number of parameters by exploiting deeper capsule architectures. Without stacking more than one fully-connected layer of capsules, the DeepCaps introduces a new kind of 3D dynamic routing that exploits 3D convolutions.
Both the dynamic routing and the expectation-maximization routing used by Hinton et al. [24] are computationally expensive in terms of execution time. Many works tried to accelerate the procedure at the algorithmic level [25][26][27] or at the hardware level [28][29][30][31][32][33], and others proposed novel routing strategies [34][35]. On the contrary, many other works proposed to incorporate the routing procedure into the training process, thereby removing it. In other words, it is possible to learn the coupling coefficients implicitly, by including them in the weights of the transformation matrix. Furthermore, [36] proposed a different algorithm, called _self-routing_, that introduces new coupling weights between two capsule layers.
Our analysis (_Section VI_) also shows that the dynamic routing does not contribute effectively to the robustness against attacks and affine transformations. Hence, incorporating it into the training process could be a way to avoid this expensive procedure.
Recent works showed the vulnerability of CapsNets against adversarial attacks. Frosst et al. [37] investigated the detection of adversarial examples using the reconstruction quality of the CapsNets. Peer et al. [38] and Marchisio et al. [39] applied the FGSM method [19] and their proposed attack on CapsNets, respectively. Michels et al. [15] compared the results of different attacks on CapsNets trained on different datasets. The RoHNAS framework [40] includes adversarial robustness among the optimization objectives and conducts Neural Architecture Search to obtain energy-efficient and robust CapsNets. However, before employing CapsNets in safety-critical applications, their robustness must be analyzed in practical use-case scenarios, e.g., investigating applications where the CapsNets' classification accuracy is on par with or better than that of state-of-the-art DNNs, and where robust defenses like adversarial training are adopted.
## III In-Depth View of our RobCaps Methodology
Fig. 3: Schematic overview of the processing flow occurring in the Dynamic routing of the DigitCaps layer.

CapsNets have been considered relatively more robust against adversarial attacks than traditional CNNs. To investigate this intuition, we present a detailed analysis to answer our main research questions and to show (1) if and why a Capsule Network under attack provides a better response than traditional CNNs, and (2) which model qualities play an important role, and what their limits are. Knowing the main differences between CapsNets and traditional CNNs, we explore how these networks respond to affine transformations and adversarial attacks. Moreover, we study the role of different functions of a CapsNet in the robustness against these attacks. Towards a fair and comprehensive evaluation, the results for the ShallowCaps have been compared with three different architectures (chosen according to their properties, their number of parameters, and their depth) for three different datasets, i.e., MNIST [41], GTSRB [23] and CIFAR10 [12]:
* A deeper CapsNet architecture, like the _DeepCaps_ model [11]. Despite being deeper than the ShallowCaps, it has fewer parameters. The _DeepCaps_ employs four groups of 2D convolutional capsule layers, with a 3D convolution layer in the last group and a fully connected capsule layer of \(10\) 32D capsules.
* _ResNet20_ (He et al. [6]) is one of the best performing CNN architectures for CIFAR10, used in various applications. It would be interesting to compare the capabilities of the CapsNet with a widely used CNN, which is deeper and employs Residual Blocks with convolutional and average pooling layers.
* A traditional _CNN_ with the same depth as the DeepCaps, but without multidimensional entities such as capsules. The dimensions of the layers are reshaped in a 2D fashion, using traditional convolutional layers with batch normalization instead of capsules with squash compression, and a traditional fully connected layer instead of the DigitCaps layer with dynamic routing. Its comparison w.r.t the DeepCaps highlights the contribution to the robustness of 3D convolutions and capsules.
### _Step-By-Step View of our Methodology_
Our methodology, shown in Fig. 4, is composed of these following steps:
1. **Evaluation of robustness on affine transformations:** i) Train our networks with the clean datasets using the same hyperparameters and data augmentation. ii) Generate the affine-transformed version of each dataset for a given set of affine transformations. For the CIFAR10 and the GTSRB datasets, we design two novel transformed datasets with random translations, rotations, and zooms (which we call _affCIFAR_ and _affGTSRB_, see Section IV-A). iii) Use such affCIFAR and affGTSRB datasets for inference, as is the case for the already existing affNIST [42], to evaluate the response of the networks to affine transformations.
2. **Evaluation of robustness on adversarial attacks:** We use the saved parameters of the trained models to evaluate the gradient, with respect to the input, for the two implemented white-box attacks. The key steps of our methodology are: i) Apply the projected gradient descent (PGD) attack for each architecture and dataset. ii) Test the networks with the generated adversarial inputs, evaluating the accuracy behavior, increasing the perturbation level. iii) Apply the Carlini Wagner attack (CW) for each dataset. iv) Evaluate the mean distortion required by the algorithm to misclassify \(500\) images of the test datasets and its fooling rate. v) Apply at the input to a network the adversarial image generated with another one to test the transferability of the attack. vi) Test the robustness when the adversarial training defense is applied.
3. **Analyzing the contribution of the dynamic routing to the CapsNet's robustness:** i) Modify the dynamic routing of the DigitCaps layer of the DeepCaps and then generate three versions of it with different routing algorithms. ii) Analyze the robustness against affine transformations. iii) Analyze the robustness against PGD and CW attacks.
### _Experimental Setup_
These architectures have been trained with the \(40\!\times\!40\) sized version of the MNIST dataset and tested on the affNIST for evaluating the robustness against affine transformations. For all the architectures tested on CIFAR10, input data have been resized before the training, from \(32\times 32\) to \(64\times 64\), following the pre-processing steps used in [11]. For the GTSRB dataset, the input images' size is kept at \(32\times 32\). The data augmentation and hyperparameters used for the training are kept the same for all the networks. As a regularization term, the CapsNets have the reconstruction loss provided by the decoder. For the evaluation of the loss, we use the same function as in [10] for CapsNets and the Cross-Entropy for CNNs.
Fig. 4: Overview of our RobCaps methodology.
We implemented the attack algorithms using the CleverHans [43] library, adapted for the Keras framework [44] with Tensorflow backend [45]. The networks have been trained on multiple Nvidia RTX-2080Ti GPUs with CUDA \(10\). To have a good comparison metric, we train different versions of the DeepCaps architecture modifying/removing the dynamic routing.
## IV Robustness Against Affine Transformations
### _Affine-CIFAR10 (affCIFAR) and Affine-GTSRB (affGTSRB) Datasets Generation_
While a dataset with affine transformed images of the MNIST dataset (affNIST) is already available, we create affine versions of the CIFAR10 and GTSRB datasets, which we call _affCIFAR_ and _affGTSRB_, to compare the response of the networks defined in Section III. The test data is created by modifying the \(10\,000\) test images from the original dataset with random affine transformations. Every image is transformed following these criteria (a possible transform configuration is sketched after the list):
* _Translations:_ random pixels translations in one or in two dimensions by a factor between \(10\%\) and \(25\%\) of the input image size, with a fixed interval.
* _Rotations:_ random rotations between \(+20\) and \(-20\) degrees with a fixed step.
* _Zooms:_ the vertical and horizontal expansions are chosen uniformly between \(0.8\) (i.e., shrinking the image by \(20\%\)) and \(1.2\) (i.e., enlarging the image by \(20\%\)).
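A possible way to realize such random transformations is torchvision's `RandomAffine`, as sketched below. This is only one option among many and is an assumption on our part: the exact generation pipeline of affCIFAR/affGTSRB is not reproduced here, and `RandomAffine` samples the parameters continuously rather than on the fixed grids described above.

```python
# Sketch (assumption: torchvision; parameter ranges taken from the criteria above).
from torchvision import transforms

affine = transforms.RandomAffine(
    degrees=20,              # random rotations in [-20, +20] degrees
    translate=(0.25, 0.25),  # random shifts of up to 25% of the image size
    scale=(0.8, 1.2),        # random zooms between 0.8x and 1.2x
)
# e.g., applied to each PIL test image: transformed = affine(image)
```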
### _Affine Transformations Results_
For each model defined in Section III, we evaluate the accuracy for all the datasets and their respective affine-transformed versions. The results are shown in Figure 5.
**ShallowCaps vs. DeepCaps**: As shown in Figure 5, the ShallowCaps on the CIFAR10 dataset achieves lower accuracy than the state-of-the-art (\(77.32\%\)). Such a limitation is solved by the DeepCaps, which reaches better results even when using the affine version of the respective dataset (\(78.66\%\)). Thus, using a deeper architecture while keeping the same capsule structure, the DeepCaps model has fewer parameters while having better generalization. Its accuracy on the CIFAR10 dataset (\(91.52\%\)) and on the affine transformed datasets is much higher compared to the ShallowCaps. In fact, despite the shallower model reaching a good accuracy on the normal MNIST and GTSRB datasets, it is still unable to generalize as well as the DeepCaps against affine transformations. The improvement could also be explained by the presence of the 3D convolutional layer. The effect of having 3D convolutions, compared to a stack of fully connected capsules, is similar to the difference in generalization level offered by Multi-Layer Perceptrons (MLPs) and CNNs. In the DigitCaps layer, each element of the transformation matrix learns whether a capsule is correlated with each capsule of the following layer. On the contrary, with the 3D convolution, sliding a 3D kernel, the same weights are shared among all the capsules of the layer. This characteristic also allows learning the presence of a particular feature when the input image is spatially transformed (e.g., translated, rotated, or zoomed), preserving the capsule structure.
**DeepCaps vs. CNN and ResNet20**: Another significant result is provided by comparing the response of the DeepCaps with a traditional CNN, having a similar base architecture. While the accuracy of the CNN on the MNIST, GTSRB, and CIFAR10 datasets is similar to the DeepCaps, the CNN's robustness against the affNIST, affGTSRB, and affCIFAR is much lower. These results confirm the benefits of capsules against affine transformations. Compared to the DeepCaps, the ResNet20 is deeper but has fewer parameters. It can generalize better for the affMNIST and affGTSRB but worse for the affCIFAR dataset. This apparently contradictory result is due to the difference in complexity between the datasets. While for simple datasets, a deep CNN, like the ResNet20, can generalize very well, for more complex tasks like the affCIFAR, it is outperformed by the DeepCaps. This observation highlights the capability of the capsule architectures to preserve spatial correlations between the features detected, and this difference w.r.t deeper traditional CNNs is even more evident when the input dataset is composed of complex features like the CIFAR10.
## V Robustness Against Adversarial Attacks
### _Evaluations under the PGD Attack_
We analyze the network response while increasing the perturbation level \(\varepsilon\) generated by the algorithm. Figures 6a, 6b, and 6c show the results for the MNIST, GTSRB, and CIFAR10 datasets, respectively.
**ShallowCaps vs. ResNet20**: Applying the PGD attack on the MNIST dataset, the ResNet20 is less vulnerable than the other networks for low levels of \(\varepsilon\). The ShallowCaps' robustness behavior, not far from that of the ResNet20, overtakes the ResNet20 when \(\varepsilon\approx 0.065\). Hence, despite the low number of layers, the ShallowCaps responds to the PGD attack similarly to a deeper CNN.
Fig. 5: Robustness against affine transformations.

**DeepCaps vs. ShallowCaps**: According to the results, the ShallowCaps is more robust than the DeepCaps, in contrast to what happens for affine transformations. This means that increasing the depth of a CapsNet does not provide more robustness against perturbed images. Note that the ShallowCaps response for the CIFAR10 dataset has not been examined because of its very low baseline accuracy, which is not comparable with other networks.
**DeepCaps vs. ResNet20 vs. CNN**: For this kind of algorithm and the MNIST dataset, Figure 6a shows that the DeepCaps behaves worse than the ResNet20. On the contrary, for more complex datasets like CIFAR10 or GTSRB, the results in Figure 6 show that the ResNet20 is not as robust as for the MNIST dataset. By increasing the perturbation's size, the attack's success rate grows faster than on the DeepCaps. Such an outcome is in line with the takeaway from Section IV-B, which showed the DeepCaps to be more robust than the ResNet20 against the transformations in affCIFAR.
The behavior of the CNN curve for GTSRB and CIFAR10 always stays below the curve of the DeepCaps. It means that the capsule architecture plays a fundamental role in improving the robustness against the PGD attacks when the dataset becomes more complex than the MNIST.
**Transferability ResNet20 \(\Longleftrightarrow\) DeepCaps:** Towards a more comprehensive study of the robustness against the PGD, we analyze the transferability of the attacks, between the ResNet20 and the DeepCaps, presenting the two opposite behaviors. We provide as inputs to the DeepCaps the adversarial examples generated with the gradient of the ResNet20 and vice-versa. Figure 7 shows the transferability between these two networks for different datasets.
For the MNIST dataset, the attacks generated for the ResNet20 and tested on the DeepCaps have a more significant effect than in the opposite direction. As shown in Figure 7a, this outcome confirms that the ResNet20 is well suited to generalizing on MNIST. The opposite results are obtained for the GTSRB and CIFAR10 datasets, where the DeepCaps shows greater robustness than the ResNet20 due to a better generalization ability on a more complex dataset.
### _Evaluations under the Carlini Wagner (CW) Attack_
To have a more solid comparison, the CapsNets and CNNs have also been tested against the CW attack, a different kind of algorithm that does not define a threshold for the magnitude of the perturbation (like the \(\varepsilon\) in the PGD attack). It is an iterative targeted algorithm that tries to reduce the gap between the target and the predicted class (success rate) with the minimum perturbation (mean distortion), estimated as the \(l_{2}\) distance. For a more robust network, the algorithm requires more iterations for the probability of the target class to overcome the probability of the correct class. As a consequence, more iterations also imply a higher \(l_{2}\) distance between the original image and the adversarial example. For our estimations, we set a maximum of \(10\) iterations for the MNIST and \(5\) iterations for the CIFAR10 dataset. In addition, for the attacks on CIFAR10, the algorithm has been forced to set the confidence of the targeted class to \(0.5\) higher than the confidence of the true label. Table II reports the fooling rate and the mean distortion for both the datasets.
**CapsNets vs. CNNs:** The CW attack is very effective for traditional CNNs. In fact, it reaches a \(100\%\) fooling rate for all three datasets. Similar findings were also made in [21]. On the other hand, both CapsNets show greater robustness (i.e., lower fooling rate) than CNNs, for the MNIST dataset (and also for the GTSRB, even if the fooling rate of the DeepCaps is just a little bit lower than \(100\%\)). The CapsNets also require a higher mean distortion than the CNNs, which makes the resulting adversarial example more perceptible. For the CIFAR10 dataset, the CW attack shows its effectiveness because, for all the networks, the fooling rate is \(100\%\). However, we can notice that CapsNets are more robust due to a higher mean distortion.
Fig. 6: Robustness against the PGD attack for (a) the MNIST, (b) the GTSRB, and (c) the CIFAR10 datasets.
Fig. 7: Transferability for the PGD attack: comparison of the network response for (a) MNIST, (b) GTSRB, and (c) CIFAR10 datasets.
**DeepCaps vs. ShallowCaps:** The DeepCaps appears to be more robust than the ShallowCaps, because of a lower fooling rate, despite having slightly lower mean distortion. Therefore, the depth and the 3D convolutions help to generalize better against the CW attack.
**Transferability ResNet20 \(\Longleftrightarrow\) DeepCaps:** Table III shows the transferability of the attacks between ResNet20 and DeepCaps for the CW attack. The values report the accuracies of the two models that receive as input a sample of \(500\) targeted adversarial examples generated by the CW algorithm applied to the other network. The high accuracy values demonstrate the low level of transferability of the CW attack. Despite this, the ResNet20 still achieves lower accuracies than the DeepCaps, thereby performing less robustly.
### _DeepCaps defended with the PGD Adversarial Training_
We also evaluate the robustness of DeepCaps when the PGD adversarial training is applied, compared to the normally trained DeepCaps. We chose an input perturbation \(\varepsilon\) equal to \(0.03\), with step size \(0.005\) in each iteration of the algorithm. From Figure 8, we can derive that the adversarial training increases the robustness of the DeepCaps against the PGD attack, both for the CIFAR10 and GTSRB datasets, because its classification accuracy is higher than the baseline DeepCaps.
The PGD-based adversarial training also helps the networks against the CW attack. For both datasets, Table IV shows, in terms of both the mean distortion and the fooling rate, that the defended DeepCaps is more robust. Hence, the adversarial training improves the model's interpretability and reduces the learning of brittle features, even when the attack algorithm used for the defense is different from the one used for the actual attack.
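A single PGD adversarial-training step, in the spirit of Madry et al. [17], can be sketched as follows. This is our own minimal PyTorch version, reusing the `pgd_attack` sketch given earlier and the \(\varepsilon\) and step size reported in this subsection; it is not the authors' training script.

```python
# Sketch (assumption: PyTorch): one PGD adversarial-training step following Madry et al. [17],
# reusing the pgd_attack sketch from Section II-A.
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y):
    model.eval()
    x_adv = pgd_attack(model, x, y, eps=0.03, alpha=0.005, steps=10)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on the adversarial examples
    loss.backward()
    optimizer.step()
    return loss.item()
```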
## VI Analyzing the Contribution of Dynamic Routing to the Robustness of the DeepCaps
As a further analysis, we investigate the contribution of the dynamic routing towards the CapsNets generalization and, as a consequence, towards their robustness. We train two versions of the DeepCaps architecture. (i) The original dynamic routing with three iterations has been replaced by a simple connection (i.e., one iteration of dynamic routing) in both the 3D convolutional and the DigitCaps layers. (ii) The dynamic routing has been replaced by the self-routing algorithm in the last fully connected layer. Then, we run the experiments on such networks and compare them with the original DeepCaps.
### _Evaluations under the Affine Transformations_
The results in Table V compare the accuracies achieved by the DeepCaps with and without dynamic routing, and with self-routing, for the MNIST, GTSRB, and CIFAR10 datasets. While the difference is minimal, the response of the DeepCaps without dynamic routing against affine transformations appears to be slightly better. For the CIFAR10 dataset, even if the accuracy with the normal dataset is higher with the dynamic routing compared to the case without it, the latter works better for the affCIFAR dataset. The self-routing shows some limits as the complexity of the datasets increases.
We can derive that the dynamic routing does not contribute significantly to the robustness against affine transformations. Indeed, it makes the DeepCaps computationally much heavier. The functionality of the dynamic routing is to inhibit the propagation of the activation vectors with lower contribution by lowering the values of the coupling coefficients in such connections. Instead, the relationship between objects is learned during training by the transformation matrix, which could wrongly recognize some relationships between the inputs and a wrong output label; the dynamic routing then amplifies these, together with the correct agreements. Hence, the confidence of the incorrect label increases.
Fig. 8: Adversarially vs. normally trained DeepCaps with (a) the GTSRB and (b) the CIFAR10 datasets.
### _Evaluations under the Adversarial Attacks_
The comparative analysis for the PGD attack applied to the MNIST, GTSRB, and CIFAR10 datasets is shown in Figures 9a, 9b, and 9c, respectively.
For the MNIST dataset, the DeepCaps with dynamic routing is slightly more robust than the version without it. On the contrary, for the CIFAR10, the accuracy of the DeepCaps without dynamic routing decreases faster when increasing the perturbation \(\varepsilon\). We can conclude that, as the complexity of the dataset increases from MNIST toward CIFAR10, the dynamic routing does not improve the classification capability when the input is perturbed.
Table VI shows the results of the CW attack. The self-routing seems to confer robustness against this attack, even if the architecture with dynamic routing is again outperformed by the one without it. Since the fooling rate is lower and the mean distortion is higher without dynamic routing, we can derive that the dynamic routing does not improve the robustness against such an attack. This confirms that the dynamic routing does not contribute much to the generalization.
## VII Conclusion
In this paper, we proposed a methodology to systematically analyze the robustness of CapsNets against affine transformations and adversarial attacks. Comparing CapsNets and CNNs, we investigated which differences play critical roles in increasing the robustness. The ShallowCaps are more robust than comparable CNNs. However, despite the high cost of training many parameters, they do not generalize well on more complex datasets. The analysis results demonstrate that they are more robust against adversarial attacks but show their limits against affine transformations. Going deeper, the DeepCaps model reduces this problem, decreasing the gap between the transformed and untransformed versions of the datasets, despite the lower number of parameters. Against the adversarial attacks, the DeepCaps does not reach the same robustness as the ShallowCaps for a simple task like the MNIST classification. However, for a more complex dataset like the CIFAR10, their performances overcome not only a CNN with a similar architecture but also the ResNet20. In addition, the DeepCaps offers even higher robustness when the adversarial training is employed. The same conclusion can be obtained for the affine transformations, where the DeepCaps reaches a higher accuracy than the ResNet20 with the affCIFAR dataset. Moreover, our results show that the dynamic routing does not contribute much to improving the CapsNets' robustness.
Our thorough analysis paves the way for future CapsNet designs, allowing designers to take adversarial attacks into account when targeting safety-critical applications, and opening the path for new adversarial attacks against CapsNets.
Fig. 9: PGD results: comparison of the DeepCaps response for the (a) MNIST, (b) GTSRB, and (c) CIFAR10 datasets.
## Acknowledgment
This work has been supported in part by the Doctoral College Resilient Embedded Systems, which is run jointly by the TU Wien's Faculty of Informatics and the UAS Technikum Wien. This work was also supported in parts by the NYUAD Center for Artificial Intelligence and Robotics (CAIR), funded by Tamkeen under the NYUAD Research Institute Award CG010, the NYUAD Center for Interacting Urban Networks (CITIES), funded by Tamkeen under the NYUAD Research Institute Award CG001, and the NYUAD Center for CyberSecurity (CCS), funded by Tamkeen under the NYUAD Research Institute Award G1104.
|
2307.04594 | Parameterized Analysis of the Cops and Robber Problem | _Pursuit-evasion games_ have been intensively studied for several decades due to their numerous applications in artificial intelligence, robot motion planning, database theory, distributed computing, and algorithmic theory. Cops and Robber (CnR) is one of the most well-known pursuit-evasion games played on graphs, where multiple _cops_ pursue a single _robber_. The aim is to compute the _cop number_ of a graph, \(k\), which is the minimum number of cops that ensures the _capture_ of the robber. From the viewpoint of parameterized complexity, CnR is W[2]-hard parameterized by \(k\) [Fomin et al., TCS, 2010]. Thus, we study structural parameters of the input graph. We begin with the _vertex cover number_ (\(\mathsf{vcn}\)). First, we establish that \(k\leq\frac{\mathsf{vcn}}{3}+1\). Second, we prove that CnR parameterized by \(\mathsf{vcn}\) is FPT by designing an exponential kernel. We complement this result by showing that it is unlikely for CnR parameterized by \(\mathsf{vcn}\) to admit a polynomial compression. We extend our exponential kernels to the parameters _cluster vertex deletion number_ and _deletion to stars number_, and design a linear vertex kernel for _neighborhood diversity_. Additionally, we extend all of our results to several well-studied variations of CnR. | Harmender Gahlawat, Meirav Zehavi | 2023-07-10T14:35:29Z | http://arxiv.org/abs/2307.04594v1 | # Parameterized Analysis of the Cops and Robber Problem
###### Abstract
_Pursuit-evasion games_ have been intensively studied for several decades due to their numerous applications in artificial intelligence, robot motion planning, database theory, distributed computing, and algorithmic theory. Cops and Robber (CnR) is one of the most well-known pursuit-evasion games played on graphs, where multiple _cops_ pursue a single _robber_. The aim is to compute the _cop number_ of a graph, \(k\), which is the minimum number of cops that ensures the _capture_ of the robber.
From the viewpoint of parameterized complexity, CnR is W[2]-hard parameterized by \(k\)[Fomin et al., TCS, 2010]. Thus, we study structural parameters of the input graph. We begin with the _vertex cover number_ (vcn). First, we establish that \(k\leq\frac{\text{vcn}}{3}+1\). Second, we prove that CnR parameterized by vcn is FPT by designing an exponential kernel. We complement this result by showing that it is unlikely for CnR parameterized by vcn to admit a polynomial compression. We extend our exponential kernels to the parameters _cluster vertex deletion number_ and _deletion to stars number_, and design a linear vertex kernel for _neighborhood diversity_. Additionally, we extend all of our results to several well-studied variations of CnR.
## 1 Introduction
In _pursuit-evasion_, a set of agents, called _pursuers_, plan to catch one or multiple _evaders_. Classically, pursuit-evasion games were played on geometric setups, where pursuers and evaders move on the plane following some rules [46, 49, 67]. Parsons [65, 66] formulated pursuit-evasion on graphs to model the search for a person trapped in caves, giving rise to the field of graph searching. Since then, pursuit-evasion has been studied extensively, having applications in artificial intelligence [50], robot motion planning [27, 55], constraint satisfaction and database theory [42, 43, 44], distributed computing [5, 29] and network decontamination [62], and significant implications in graph theory and algorithms [75, 34, 1, 39].
Cops and Robber (CnR) is one of the most intensively studied pursuit-evasion games on graphs, where a set of cops pursue a single robber. Players move in discrete time steps alternately, starting with the cops. In each move, a player can move to an adjacent vertex, and the cops win by _capturing_ the robber (i.e., if a cop and the robber occupy the same vertex). The goal is to compute the _cop number_ of a graph \(G\), denoted \(\mathsf{c}(G)\), which is the minimum number of cops required to win in \(G\). We define the game formally in Section 2. CNR is well studied in the artificial intelligence literature under the name Moving Target Pursuit (MTP) [51], where we consider sub-optimal but faster strategies from an applicative point of view. The results have found numerous applications in game design, police chasing, path planning, and robot motion planning [78, 61, 77, 6, 81].
Determining the parameterized complexity of games is a well-studied research topic [72, 18, 17]. Most pursuit-evasion games are, in fact, AW[*]-hard [73]. In particular, CnR is W[2]-hard parameterized by
\(\mathsf{c}(G)\)[36]. Thus, we consider structural parameterizations, focusing on _kernelization_, also known as polynomial-time preprocessing with a parametric guarantee. Due to the profound impact of preprocessing, kernelization was termed "the lost continent of polynomial time" [35]. We begin with the most studied structural parameter in parameterized complexity: the _vertex cover number_ (\(\mathsf{vcn}\)) of the input graph. We bound \(\mathsf{c}(G)\) in terms of \(\mathsf{vcn}\), as well as achieve both positive and negative results concerning the kernelization complexity of \(\mathsf{CnR}\) parameterized by \(\mathsf{vcn}\). We generalize our kernelization results to the smaller parameters _cluster vertex deletion number_ (\(\mathsf{cvd}\)) and _deletion to stars number_ (\(\mathsf{dts}\)), as well as to the parameter _neighborhood diversity_ (\(\mathsf{nd}\)). Furthermore, we extend all our results to several well-studied variants of \(\mathsf{CnR}\).
The choice of \(\mathsf{vcn}\) as a parameter to study pursuit-evasion games is natural due to various scenarios where \(\mathsf{vcn}\) is significantly smaller than the graph size. For example, this includes scenarios where we model the existence of one or few (possibly interconnected) central hubs--for illustration, suppose an intruder is hiding in a system of buildings where we have only few corridors but a large number of rooms, or suppose we have few virtual servers with many stations (e.g., of private users) that can communicate only with the servers. Furthermore, \(\mathsf{vcn}\) is one of the most efficiently computable parameters from both approximation [80] and parameterized [28] points of view, making it fit from an applicative perspective even when a vertex cover is not given along with the input. Moreover, \(\mathsf{vcn}\) is the best choice for proving negative results--indeed, our negative result on the kernelization complexity of \(\mathsf{CnR}\) for \(\mathsf{vcn}\) implies the same for many other well-known smaller parameters such as treewidth, treedepth and feedback vertex set [37]. One shortcoming of \(\mathsf{vcn}\) as a parameter is that it is very high for some simple (and easy to resolve) dense graphs like complete graphs. However, we generalize our kernel to \(\mathsf{cvd}\), which is small for these dense graphs, and for \(\mathsf{dts}\). Furthermore, we design a _linear_ kernel for the well-studied parameter \(\mathsf{nd}\). We further discuss the utility of our kernels in the Conclusion.
### Brief Survey
\(\mathsf{CnR}\) was independently introduced by Quilliot [71] and by Nowakowski and Winkler [63] with exactly one cop1. Aigner and Fromme [3] generalized the game to multiple cops and defined the _cop number_ of a graph. The notion of cop number and some fundamental techniques introduced by Aigner and Fromme [3] yielded a plethora of results on this topic. For more details, we refer the reader to the book [16].
Footnote 1: In fact, a specific instance of \(\mathsf{CnR}\) on a specific graph was given as a puzzle in Problem 395 of the book Amusements in Mathematics [33] already in 1917.
The computational complexity of finding the cop number of a graph has been a challenging subject of research. On the positive side, Berarducci and Intrigila [10] gave a backtracking algorithm that decides whether \(G\) is \(k\)-copwin in \(\mathcal{O}(n^{2k+1})\) time. On the negative side, Fomin et al. [36] proved that determining whether \(G\) is \(k\)-copwin is NP-hard, and W[2]-hard parameterized by \(k\). Moreover, Mamino [60] showed that the game is PSPACE-hard, and later, Kinnersley [54] proved that determining the cop number of a graph is, in fact, EXPTIME-complete. Recently, Brandt et al. [22] provided fine-grained lower bounds, proving that the time complexity of any algorithm for \(\mathsf{CnR}\) is \(\Omega(n^{k-o(1)})\) conditioned on _Strong Exponential Time Hypothesis_ (\(\mathsf{SETH}\)), and \(2^{\Omega(\sqrt{n})}\) conditioned on _Exponential Time Hypothesis_ (\(\mathsf{ETH}\)).
Since \(\mathsf{CnR}\) admits an XP-time algorithm, it is sensible to bound the cop number for various graph classes or by some structural parameters. Nowadays, we know that the cop number is 3 for the class of planar graphs [3] and toroidal graphs [57], 9 for unit-disk graphs [11], 13 for string graphs [30], and is bounded for bounded genus graphs [19] and minor-free graphs [4]. Moreover, it is known that the cop number of a graph \(G\) is at most \(\frac{\mathsf{tw}(G)}{2}+1\)[53], where \(\mathsf{tw}(G)\) denotes the treewidth of \(G\), and at most \(\mathsf{cw}(G)\)[36], where \(\mathsf{cw}(G)\) denotes the clique-width of \(G\).
### Our Contribution
We conduct a comprehensive analysis of CnR parameterized by \(\mathsf{vcn}\). We start by bounding the cop number of a graph by its vertex cover number:
**Theorem 1**.: _For a graph \(G\), \(\mathsf{c}(G)\leq\frac{\mathsf{vcn}}{3}+1\)._
The proof is based on the application of three reduction rules. Each of our rules controls its own cop that, in particular, guards at least three vertices that belong to the vertex cover. Once our rules are no longer applicable, we exhibit that the remaining unguarded part of the graph is of a special form. In particular, we exploit this special form to prove that, now, the usage of only two additional cops suffices. We complement Theorem 1 with an argument (Lemma 20) that it might be difficult to improve this bound further using techniques similar to ours.
Second, we prove that CnR parameterized by \(\mathsf{vcn}\) is \(\mathsf{FPT}\) by designing a kernelization algorithm:
**Theorem 2**.: \(\mathsf{CNR}\) _parameterized by \(\mathsf{vcn}\) admits a kernel with at most \(\mathsf{vcn}+\frac{2^{\mathsf{vcn}}}{\sqrt{\mathsf{vcn}}}\) vertices._
Our kernel is also based on the application of reduction rules. However, these rules are very different than those used for the proof of Theorem 1. While our main rule is quite standard in kernelization (involving the removal of, in particular, false twins), the proof of its correctness is (arguably) not. Theorem 2, along with Theorem 1 and an XP-algorithm (Proposition 13), gives the following immediate corollary:
**Corollary 3**.: \(\mathsf{CNR}\) _is \(\mathsf{FPT}\) parameterized by \(\mathsf{vcn}\), and is solvable in \(\left(\mathsf{vcn}+\frac{2^{\mathsf{vcn}}}{\sqrt{\mathsf{vcn}}}\right)^{ \frac{\mathsf{vcn}}{3}+2}\cdot n^{\mathcal{O}(1)}\) time._
We complement our kernel by showing that it is unlikely for CnR to admit polynomial compression, by providing a _polynomial parameter transformation_ from Red-Blue Dominating Set. In particular, our reduction makes non-trivial use of a known construction of a special graph having high girth and high minimum degree.
**Theorem 4**.: \(\mathsf{CNR}\) _parameterized by \(\mathsf{vcn}\) does not admit polynomial compression, unless \(\mathsf{NP}\subseteq\mathsf{coNP}/\mathsf{poly}\)._
Next, we present a linear kernel for CnR parameterized by neighbourhood diversity:
**Theorem 5**.: \(\mathsf{CNR}\) _parameterized by \(\mathsf{nd}\) admits a kernel with at most \(\mathsf{nd}\) vertices._
On the positive side, we extend our exponential kernel to two smaller structural parameters, \(\mathsf{cvd}\) and \(\mathsf{dts}\):
**Theorem 6**.: \(\mathsf{CNR}\) _parameterized by \(\mathsf{cvd}\) admits a kernel with at most \(2^{2^{\mathsf{cvd}}+\sqrt{\mathsf{cvd}}}\) vertices. Moreover, \(\mathsf{CNR}\) parameterized by \(\mathsf{dts}\) admits a kernel with at most \(2^{2^{\mathsf{dts}}+\mathsf{dts}^{1.5}}\) vertices._
Several variants of CnR have been studied due to their copious applications. We extend our results, parameterized by \(\mathsf{vcn}\), to some of the most well-studied ones. We define these variants (and used notations) in Section 2. We first bound the cop number of these variants by \(\mathsf{vcn}\):
**Theorem 7**.: _For a graph \(G\): (1) \(\mathsf{c}_{lazy}\leq\frac{\mathsf{vcn}}{2}+1\); (2) \(\mathsf{c}_{attack}\leq\frac{\mathsf{vcn}}{2}+1\); (3) \(\mathsf{c}_{active}(G)\leq\mathsf{vcn}\); (4) \(\mathsf{c}_{surround}(G)\leq\mathsf{vcn}\); (5) \(\mathsf{c}_{s}(G)\leq\mathsf{vcn}\) (for any value of \(s\)); (6) for a strongly connected orientation \(\overrightarrow{G}\) of \(G\), \(\mathsf{c}(\overrightarrow{G})\leq\mathsf{vcn}\)._
We also extend our exponential kernel to these variants:
**Theorem 8**.: \(\mathsf{Cops}\) _and Attacking Robber and Lazy \(\mathsf{CNR}\) parameterized by \(\mathsf{vcn}\) admit a kernel with at most \(\mathsf{vcn}+\frac{2^{\mathsf{vcn}}}{\sqrt{\mathsf{vcn}}}\) vertices. Moreover, \(\mathsf{CNR}\) on strongly connected directed graphs admits a kernel with at most \(3^{\mathsf{vcn}}+\mathsf{vcn}\) vertices._
Then, we present a slightly more general kernelization that works for most variants of the game. In particular, we define a new variant of the game (in Section 2), Generalized CnR, that generalizes various well-studied variants of CnR. We have the following result that proves that Generalized CnR parameterized by \(\mathsf{vcn}\) admits an exponential kernel.
**Theorem 9**.: Generalized CnR _parameterized by \(\mathsf{vcn}\) admits a kernel with at most \(\mathsf{vcn}+\mathsf{vcn}\cdot 2^{\mathsf{vcn}}\) vertices._
Then, we show that the same kernelization algorithm also provides us the following result:
**Theorem 10**.: Fully Active CnR, Cops and Fast Robber, and Surrounding CnR _parameterized by \(\mathsf{vcn}\) admit a kernel with at most \(\mathsf{vcn}+\mathsf{vcn}\cdot 2^{\mathsf{vcn}}\) vertices._
Finally, we complement our exponential kernels for these variants by arguing about their incompressibility:
**Theorem 11**.: Lazy CnR, Cops and Attacking Robber, Cops and Fast Robber, Fully Active CnR, and \(\mathsf{CnR}\) on strongly connected directed and oriented graphs parameterized by \(\mathsf{vcn}\) do not admit a polynomial compression, unless \(\mathsf{NP}\subseteq\mathsf{coNP}/\mathsf{poly}\).
### Additional Related Works
For a graph with girth at least 5, the cop number is lower bounded by the minimum degree of the graph [3]. As implied by the lower bound for the _Zarankiewicz problem_[82], an extremal graph with girth 5 has \(\Omega(n^{3/2})\) edges. In a graph with \(\Omega(n^{3/2})\) edges, if there is a vertex whose degree is smaller than \(c\sqrt{n}\), for an appropriate constant \(c\), then we can remove it and still get a smaller graph with \(\Omega(n^{3/2})\) edges. Hence, eventually, every vertex has degree \(\Omega(\sqrt{n})\). Therefore, the cop number of such a graph is \(\Omega(\sqrt{n})\). Meyniel [38] conjectured this to be tight, that is, \(\mathcal{O}(\sqrt{n})\) cops are sufficient to capture the robber in any connected graph. This is probably the deepest conjecture in this field (see [7]). Since then, several attempts have been made to bound the cop number of general graphs [26, 59, 74]. Although these results establish that \(\mathsf{c}(G)=o(n)\), even the question whether \(\mathsf{c}(G)=\mathcal{O}(n^{1-\epsilon})\), for \(\epsilon>0\), remains open.
Many graph classes have unbounded cop number. The graph classes for which the cop number is \(\Omega(\sqrt{n})\) are called _Meyniel extremal_. These include bipartite graphs [7], subcubic graphs [48], and polarity graphs [13]. Meyniel's conjecture was also considered for random graphs [69, 70, 12].
Lastly, we remark that variations of CnR vary mainly depending on the capabilities of the cops and the robber. Some of these variations were shown to have correspondence with several width measures of graphs like treewidth [75], pathwidth [65], tree-depth [41], hypertree-width [2], cycle-rank [41], and directed tree-width [52]. Moreover, Abraham et al. [1] defined the concept of a _cop-decomposition_, which is based on the cop strategy in the CnR game on minor-free graphs provided by Andreae [4], and showed that it has significant algorithmic applications.
## 2 Preliminaries
For \(\ell\in\mathbb{N}\), let \([\ell]\) = \(\{1,\ldots,\ell\}\). Whenever we mention \(\frac{a}{b}\), we mean \(\lceil\frac{a}{b}\rceil\).
### Graph Theory
For a graph \(G\), we denote its vertex set by \(V(G)\) and edge set by \(E(G)\). We denote the size of \(V(G)\) by \(n\) and size of \(E(G)\) by \(m\). In this paper, we consider finite, connected2, and simple graphs. Let \(v\) be a vertex of a graph \(G\). Then, by \(N(v)\) we denote the _open neighbourhood_ of \(v\), that is, \(N(v)=\{u\mid uv\in E(G)\}\).
By \(N[v]\) we denote the _close neighbourhood_ of \(v\), that is, \(N[v]=N(v)\cup\{v\}\). For \(X\subseteq V(G)\), we define \(N_{X}(v)=N(v)\cap X\) and \(N_{X}[v]=N[v]\cap X\). We say that \(v\)_dominates_\(u\) if \(u\in N[v]\). The _girth_ of a graph \(G\) is the length of a shortest cycle contained in \(G\). A \(u,v\)_-path_ is a path with endpoints \(u\) and \(v\). A path is _isometric_ if it is a shortest path between its endpoints. For \(u,v\in V(G)\), let \(d(u,v)\) denote the length of a shortest \(u,v\)-path.
Let \(G\) be a graph and \(U\subseteq V(G)\). Then, \(G[U]\) denotes the subgraph of \(G\) induced by \(U\). A set \(U\subseteq V(G)\) is a _vertex cover_ if \(G[V(G)\setminus U]\) is an independent set. The minimum cardinality of a vertex cover of \(G\) is its _vertex cover number_ (vcn). Moreover, \(U\) is a _cluster vertex deletion set_ if \(G[V(G)\setminus U]\) is a disjoint union of cliques. The minimum size of a cluster vertex deletion set of a graph is its _cluster vertex deletion number_ (cvd). Additionally, \(U\) is a _deletion to stars set_ if \(G[V(G)\setminus U]\) is a disjoint union of star graphs. The minimum size of a deletion to stars set of a graph is its _deletion to stars number_ (dts). Two vertices \(u,v\in V(G)\) have the _same type_ if and only if \(N(v)\setminus\{u\}=N(u)\setminus\{v\}\). A graph \(G\) has _neighborhood diversity_ at most \(w\) if there exists a partition of \(V(G)\) into at most \(w\) sets, such that all the vertices in each set have the same type.
### Cops and Robber
CnR is a two-player perfect information pursuit-evasion game played on a graph. One player is referred to as the _cop player_ and controls a set of _cops_, and the other player is referred to as the _robber player_ and controls a single _robber_. The game starts with the cop player placing each cop on some vertex of the graph, and multiple cops may simultaneously occupy the same vertex. Then, the robber player places the robber on a vertex. Afterwards, the cop player and the robber player make alternate moves, starting with the cop player. In the cop player move, the cop player, for each cop, either moves it to an adjacent vertex (along an edge) or keeps it on the same vertex. In the robber player move, the robber player does the same for the robber. For simplicity, we will say that the cops (resp., robber) move in a cop (resp., robber) move instead of saying that the cop (resp., robber) player moves the cops (resp., robber). Throughout, we denote the robber by \(\mathcal{R}\).
A situation where one of the cops, say, \(\mathcal{C}\), occupies the same vertex as \(\mathcal{R}\) is a _capture_. (We also say that \(\mathcal{C}\) captures \(\mathcal{R}\) and that \(\mathcal{R}\) is captured by \(\mathcal{C}\).) The cops win if they have a strategy to capture \(\mathcal{R}\), and \(\mathcal{R}\) wins if it has a strategy to evade a capture indefinitely. A graph \(G\) is _\(k\)-copwin_ if \(k\) cops have a winning strategy in \(G\). The _cop number_ of \(G\), denoted \(\mathsf{c}(G)\), is the minimum \(k\) such that \(G\) is \(k\)-copwin. For brevity, \(G\) is said to be _copwin_ if it is \(1\)-copwin (i.e. \(\mathsf{c}(G)=1\)). Accordingly, we have the following decision version of the problem.
Cops and Robber
**Input**: A graph \(G\), and an integer \(k\in\mathbb{N}\)
**Question**: Is \(G\) \(k\)-copwin?
We say that some cops _guard_ a subgraph \(H\) of \(G\) if \(\mathcal{R}\) cannot enter \(H\) without getting captured by one of these cops in the next cop move. We shall use the following result:
**Proposition 12** ([3]).: _Let \(P\) be an isometric path in \(G\). Then one cop can guard \(P\) after a finite number of rounds/cop moves._
Currently, the best known algorithm to decide whether \(G\) is \(k\)-copwin is by Petr et al. [68]:
**Proposition 13** ([68]).: \(\textsc{CNR}\) _is solvable in \(\mathcal{O}(kn^{k+2})\) time._
If a cop \(\mathcal{C}\) occupies a vertex \(v\), then \(\mathcal{C}\)_attacks_\(N[v]\). A vertex \(u\) is _safe_ if it is not being attacked by any cop. If \(\mathcal{R}\) is on a vertex that is not safe, then \(\mathcal{R}\) is _under attack_.
### Variations of CnR
Several variations of CnR have been studied in the literature, differing mainly in the rules of movements of agents, the definition of the capture, and the capabilities of the agents. We provide below the definitions of the games considered in this paper. We list below some of the primary properties of the gameplay in which these variations differ:
1. _Speed of agents_: If an agent has speed \(s\), where \(s\in\mathbb{N}\), then the agent can move along at most \(s\) edges in its turn. We note that a robber with speed \(s\) cannot move over a cop, that is, the robber can move along a path of length at most \(s\) not containing any cop, in its turn.
2. _Lazy/active/flexible cops_: Let \(C\) be the set of cops and let \(A\cup F\cup L\) be a partition of the set of cops such that \(A\) is the set of _active_ cops, \(F\) be the set of _flexible_ cops, and \(L\) be the set of _lazy_ cops. Then, in each cop move, at most one cop from \(L\) can make a move, each cop from \(A\) must make a move, and each cop from \(F\) can either make a move or stay on the same vertex. Unless mentioned otherwise, all cops are assumed to be flexible.
3. _Reach of cops:_ If a cop \(\mathcal{C}_{i}\) has _reach_\(\lambda_{i}\), then \(\mathcal{R}\) cannot access a vertex that is at a distance at most \(\lambda_{i}\) from the vertex occupied by \(\mathcal{C}_{i}\). Here, think of the cop \(\mathcal{C}_{i}\) as having a gun with range \(\lambda_{i}\). Hence, if \(\mathcal{C}_{i}\) can reach a vertex that is at most distance \(\lambda_{i}\) from the robber's vertex at the end of a cop move, then \(\mathcal{C}_{i}\) can shoot \(\mathcal{R}\), and the cops win. Similarly, on a robber move, even if \(\mathcal{R}\) has speed \(s\), then it can move only along a path of length at most \(s\) that does not contain any vertex that is at a distance at most \(\lambda_{i}\) from \(\mathcal{C}_{i}\). In CnR, for each cop \(\mathcal{C}_{i}\), \(\lambda_{i}=0\).
4. _Visible/invisible robber_: If the robber is _visible_, then the cops know the position of the robber. If the robber is _invisible_, then the cops do not know the position of the robber. Moreover, we say that cops have \(d\)_-visibility_ if cops can see the position of the robber only if it is at most \(d\) edges away from at least one of the cops.
Next, we define the variants of CnR for which we will extend our results.
**Lazy CnR:** Lazy CnR [64] is one of the most well-studied variants of CnR games [8, 76]. In this variant, the cops are lazy, that is, at most one cop can move during a cops' turn. This restricts the ability of the cops with respect to the classical version. The minimum number of lazy cops that can ensure a capture in a graph \(G\) is known as the _lazy cop number_ and is denoted by \(\mathsf{c}_{lazy}(G)\). Clearly, \(\mathsf{c}(G)\leq\mathsf{c}_{lazy}(G)\), as \(\mathsf{c}_{lazy}(G)\) cops can capture the robber in the classical version (using the winning strategy of the Lazy CnR game). We remark that this game is also studied with the name _one-cop-moves_ game [40, 79].
**Cops and Attacking Robber:** In Cops and Attacking Robber[15], the robber is able to _strike back_ against the cops. If on a robber's turn, there is a cop in its neighborhood, then the robber can attack the cop and _eliminate_ it from the game. However, if more than one cop occupy a vertex and the robber attacks them, then only one of the cops gets eliminated, and the robber gets captured by one of the other cops on that vertex. The cop number for capturing an attacking robber on a graph \(G\) is denoted by \(\mathsf{c}_{attack}(G)\), and is referred to as the _attacking cop number_ of \(G\). Clearly, \(\mathsf{c}(G)\leq\mathsf{c}_{attack}(G)\leq 2\cdot\mathsf{c}(G)\), as, on the one hand, \(\mathsf{c}_{attack}(G)\) cops can capture the robber in the classical version. On the other hand, if we play the attacking version with \(2\cdot\mathsf{c}(G)\) cops using the strategy of the classical variant with the only difference that there are always at least two cops on a vertex, then the cops have a winning strategy.
**Fully Active CnR:** In the game of Fully Active CnR [45], each cop as well as the robber are active, that is, in a cop/robber move, each cop/robber has to move to an adjacent vertex. The _active cop number_ of a graph \(G\), denoted by \(\mathsf{c}_{active}(G)\), is the minimum number of cops that can ensure capture in this game. It is easy to see that \(\mathsf{c}_{active}(G)\leq 2\cdot\mathsf{c}(G)\), as if we keep one extra cop adjacent to each cop in the winning
strategy for CnR, then whenever some cop has to skip a move, it can simply do so by switching with the extra cop adjacent to it.
**Surrounding CnR:** In the game of Surrounding CnR [23, 20], the definition of capture is different. In this game, a cop and the robber can occupy the same vertex of the graph during the game, but the robber cannot end its turn by remaining at a vertex occupied by some cop. The cops win by _surrounding_ the robber, that is, if the robber occupies a vertex \(v\), then there is a cop at each vertex \(u\in N(v)\). The _surrounding cop number_ for a graph \(G\) is denoted as \(\mathsf{c}_{surround}(G)\). It is easy to see that \(\mathsf{c}_{surround}(G)\geq\delta(G)\), where \(\delta(G)\) is the _minimum degree_ of the graph.
**Cops and Fast Robber:** In the game of Cops and Fast Robber[36], the robber can move faster than the cops. If \(\mathcal{R}\) has speed \(s\), then it can move along a path with at most \(s\) edges not containing any cop. The minimum number of cops that can ensure capture of a fast robber with speed \(s\) in a graph \(G\) is denoted by \(\mathsf{c}_{s}(G)\). For \(s\geq 2\), deciding whether \(\mathsf{c}_{s}(G)\leq k\) is NP-hard as well as W[2]-hard even when input graph \(G\) is restricted to be a split graph [36]. The game of Cops and Fast Robber is well-studied [31, 9, 24].
**CnR on Directed Graphs:** The game of CnR is also well-studied for oriented/directed graphs [58, 21, 47]. The game is played on a directed graph \(\overrightarrow{G}\), and the players can only move along the orientation of the arcs.
Finally, we define a variant of CnR that generalizes many well-studied variants of CnR:
**Generalized CnR:** Consider the following generalized version of CnR. Here the input is \((G,\mathcal{C}_{1},\ldots,\mathcal{C}_{k},\mathcal{R})\) where each cop \(\mathcal{C}_{i}\) has _speed_\(s_{i}\) (possibly different for each cop) and \(\mathcal{R}\) has speed \(s_{R}\). Moreover, each cop can be either forced to be _active_ (all active cops have to move in each turn), _lazy_ (at most one lazy cop moves in each turn), or _flexible_ (a flexible cop can either move or stay on the same vertex in its move). Moreover, the robber can also be forced to be either lazy or flexible. Furthermore, each cop \(\mathcal{C}_{i}\) can have _reach_\(\lambda_{i}\) (possibly different for each cop). This game generalizes several well-studied variants of CnR along with Cops and Fast Robber, Fully Active CnR, and Cops and Robber From a Distance[14]. It also generalizes the game of [61, 50].
Finally, we note that we assume the notion of "being active" to be defined only when the agent has speed \(s=1\). But, this notion can be defined in multiple ways if the agent has speed \(s>1\): the player might have to move at least \(s^{\prime}\leq s\) edges, the player may have to move to a vertex at a distance at least \(s^{\prime}\leq s\) from the current vertex, the player may or may not be allowed to repeat edges, and so on. We remark that our kernelization result for Generalized CnR can be made to work, with some changes, considering any of these notions discussed.
### An XP Algorithm for Variants
For graph searching games, there is a standard technique to get an XP-time algorithm with running time \(n^{\mathcal{O}(k)}\) (where \(n\) is the size of input graph and the question is whether \(k\) cops have a winning strategy). This technique involves generating a _game graph_ where each vertex represents a possible placement of all the agents on the vertices of \(G\). Since \(k\) cops and a single robber can have \(n^{k+1}\) possible placements on \(G\), the game graph has \(n^{k+1}\) vertices. The following step is to mark all of the _winning states_ (that is, where the robber is captured). Afterwards, we use an algorithm to keep adding states to the set of winning states in the following manner. On a cop move, from a given state \(S\), if there exists a movement of cops that can change the game state \(S\) to a winning state, we add \(S\) to the winning states. On a robber move, for a game state \(S\), if all the possible moves of the robber lead to a winning state, we add \(S\) to the winning state. Finally, if there exists a position of \(k\) cops such that, for any position of the robber, these states are in winning states, we declare that \(k\) cops have a winning strategy in \(G\). It is easy to see that this algorithm can be implemented in \(n^{\mathcal{O}(k)}\) time.
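To make this propagation over game states concrete, the following Python sketch (our own illustration, not taken from [68] or from this paper) implements the generic backward-induction scheme for classical CnR. The function name `k_copwin` and the adjacency-dictionary input format are assumptions of the sketch, and no attempt is made at the refined bookkeeping behind the \(\mathcal{O}(kn^{k+2})\) bound.

```python
from itertools import product

def k_copwin(adj, k):
    """Backward induction over the game graph to decide whether k cops have a
    winning strategy in classical Cops and Robber on the graph `adj`
    (a dict mapping each vertex to the set of its neighbours; vertex labels
    are assumed to be sortable)."""
    V = list(adj)
    closed = {v: adj[v] | {v} for v in V}            # "move or stay" options
    placements = sorted({tuple(sorted(p)) for p in product(V, repeat=k)})

    win = set()                                      # states already won by the cops
    for cops in placements:
        for r in V:
            if r in cops:                            # the robber is captured
                win.add((cops, r, 'C'))
                win.add((cops, r, 'R'))

    changed = True
    while changed:                                   # propagate wins to a fixpoint
        changed = False
        for cops in placements:
            for r in V:
                # Cop turn: winning if SOME joint cop move leads to a won state.
                if (cops, r, 'C') not in win and any(
                        (tuple(sorted(nxt)), r, 'R') in win
                        for nxt in product(*(closed[c] for c in cops))):
                    win.add((cops, r, 'C'))
                    changed = True
                # Robber turn: winning if EVERY robber move leads to a won state.
                if (cops, r, 'R') not in win and all(
                        (cops, rr, 'C') in win for rr in closed[r]):
                    win.add((cops, r, 'R'))
                    changed = True

    # The cops choose a placement first, the robber then picks any start vertex,
    # and the cops move next; so one placement must win against every start.
    return any(all((cops, r, 'C') in win for r in V) for cops in placements)
```

For instance, `k_copwin({1: {2, 3}, 2: {1, 3}, 3: {1, 2}}, 1)` returns `True`, while `k_copwin({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}, 1)` returns `False`, matching \(\mathsf{c}(C_{3})=1\) and \(\mathsf{c}(C_{4})=2\).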
Petr, Portier, and Versteegan [68] gave an implementation of this algorithm, for CnR, that runs in \(\mathcal{O}(kn^{k+2})\) time. It is not difficult to see that this algorithm can be made to work for all the variants we discussed by changing the rules to navigate between game states. For Cops and Attacking Robber, the only extra consideration is that if \(\mathcal{R}\) attacks a cop (among \(k\) cops) and does not get captured in the next cop move, then we have a game state, say, \(S^{\prime}\), with \(k-1\) cops and one robber, where the placement of these agents is a subset of a placement of \(k+1\) agents in one of the original game states, and hence we prune \(S^{\prime}\). Thus, we have the following proposition.
**Proposition 14**.: _For any variant of CnR considered in this paper, an instance \((G,k)\) can be solved in \(\mathcal{O}(kn^{k+2})\) time._
### Parameterized complexity
In the framework of parameterized complexity, each problem instance is associated with a non-negative integer, called a _parameter_. A parameterized problem \(\Pi\) is _fixed-parameter tractable_ (\(\mathsf{FPT}\)) if there is an algorithm that, given an instance \((I,k)\) of \(\Pi\), solves it in time \(f(k)\cdot|I|^{\mathcal{O}(1)}\) for some computable function \(f(\cdot)\). Central to parameterized complexity is the W-hierarchy of complexity classes: \(\mathsf{FPT}\subseteq\mathsf{W}[1]\subseteq\mathsf{W}[2]\subseteq\ldots \subseteq\mathsf{XP}\).
Two instances \(I\) and \(I^{\prime}\) (possibly of different problems) are _equivalent_ when \(I\) is a Yes-instance if and only if \(I^{\prime}\) is a Yes-instance. A _compression_ of a parameterized problem \(\Pi_{1}\) into a (possibly non-parameterized) problem \(\Pi_{2}\) is a polynomial-time algorithm that maps each instance \((I,k)\) of \(\Pi_{1}\) to an equivalent instance \(I^{\prime}\) of \(\Pi_{2}\) such that size of \(I^{\prime}\) is bounded by \(g(k)\) for some computable function \(g(\cdot)\). If \(g(\cdot)\) is polynomial, then the problem is said to admit a _polynomial compression_. A _kernelization algorithm_ is a compression where \(\Pi_{1}=\Pi_{2}\). Here, the output instance is called a _kernel_. Let \(\Pi_{1}\) and \(\Pi_{2}\) be two parameterized problems. A _polynomial parameter transformation_ from \(\Pi_{1}\) to \(\Pi_{2}\) is a polynomial time algorithm that, given an instance \((I,k)\) of \(\Pi_{1}\), generates an equivalent instance \((I^{\prime},k^{\prime})\) of \(\Pi_{2}\) such that \(k^{\prime}\leq p(k)\), for some polynomial \(p(\cdot)\). It is well-known that if \(\Pi_{1}\) does not admit a polynomial compression, then \(\Pi_{2}\) does not admit a polynomial compression [28]. We refer to the books [28, 37] for details on parameterized complexity.
## 3 Bounding the Cop Number
In the following lemma, we give a general upper bound for the cop number, which we use to derive bounds for several graph parameters.
**Lemma 15**.: _Let \(G\) be a graph and let \(U\subseteq V(G)\) be a set of vertices such that for each connected component \(H\) of \(G[V(G)\setminus U]\), \(\mathsf{c}(H)\leq\ell\). Then, \(\mathsf{c}(G)\leq\lceil\frac{|U|}{2}\rceil+\ell\)._
Proof.: We note that this proof uses techniques used to bound \(\mathsf{c}(G)\) in terms of \(tw(G)\) by Joret et al. [53]. Denote \(U=\{u_{1},\ldots,u_{q}\}\). Consider isometric paths \(P_{1},\ldots,P_{\lceil\frac{q}{2}\rceil}\) such that the endpoints of \(P_{i}\) are \(u_{2i-1}\) and \(u_{2i}\). Note that these isometric paths always exist as we assume that the graph is connected. Here, \(P_{\lceil\frac{q}{2}\rceil}\) might be a single vertex path containing only the vertex \(u_{q}\). Now, we guard each path \(P_{i}\) using a single cop (due to Proposition 12). These \(\lceil\frac{q}{2}\rceil\) cops restrict the robber to one connected component \(H\) of \(G[V(G)\setminus U]\). Since each of these components is \(\ell\)-copwin, \(\frac{q}{2}+\ell\) cops have a clear winning strategy in \(G\).
We know that the classes of star graphs, complete graphs, chordal graphs, and trees are copwin [63]. These bounds, along with Lemma 15, imply the following theorem.
**Theorem 16**.: _Let \(G\) be a graph and \(t=\min\{\mathsf{cvd},\mathsf{dts}\}\). Then, \(\mathsf{c}(G)\leq\frac{t}{2}+1\)._
### Bounding Cop Number by \(\mathsf{vcn}\):
Let \(U\) be a vertex cover of size \(t\) in \(G\) and \(I\) be the independent set \(V(G)\setminus U\). Lemma 15 implies that \(\mathsf{c}(G)\leq\lceil\frac{t}{2}\rceil+1\). In this section, we improve this bound. First, we provide the following reduction rules.
**Reduction Rule 1** (Rr1).: _If there is a vertex \(v\in I\) such that \(|N(v)|\geq 3\), then place a cop at \(v\) and delete \(N[v]\)._
**Reduction Rule 2** (Rr2).: _If there is a vertex \(v\in U\) such that \(|N[v]\cap U|\geq 3\), then place a cop at \(v\) and delete \(N[v]\)._
**Reduction Rule 3** (Rr3).: _If there is an isometric path \(P\) such that \(P\) contains at least three vertices from \(U\), then guard \(P\) using one cop and delete \(V(P)\) (see Proposition 12)._
We remark that RR1 and RR2 can be merged, but we prefer to keep them separate to ease the presentation. Moreover, we note the following.
**Note 17**.: _In the application of reduction rules RR1-RR3, whenever a set of vertices \(X\subseteq V(G)\) is deleted by the application of rules RR1-RR3, it implies that each vertex \(x\in X\) is being guarded by some cop, and hence, is not accessible to \(\mathcal{R}\). We do not actually delete the vertices, and this deletion part is just for the sake of analysis. Hence, from the cop player's perspective, the graph remains connected._
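For concreteness, here is a minimal Python sketch (our own illustration; the paper uses RR1-RR3 only in the analysis) of a greedy application of these rules. The adjacency-dictionary input, the helper names, and the criterion used to detect an application of RR3 (an isometric path with at least three cover vertices exists exactly when there are distinct \(x,z,y\in U\) with \(d(x,z)+d(z,y)=d(x,y)\) in the remaining graph) are choices made for this sketch. In line with Note 17, "deleted" vertices are only removed from the set of vertices considered accessible to \(\mathcal{R}\).

```python
from collections import deque

def bfs(adj, alive, src):
    """BFS inside the subgraph induced by `alive`; returns (dist, parent)."""
    dist, parent = {src: 0}, {src: None}
    q = deque([src])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w in alive and w not in dist:
                dist[w], parent[w] = dist[v] + 1, v
                q.append(w)
    return dist, parent

def path_to(parent, t):
    """Reconstructs the BFS path from the BFS source to t (source first)."""
    path = [t]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

def apply_rr1_rr3(adj, U):
    """Greedily applies RR1-RR3 on `adj` (vertex -> neighbour set) with a given
    vertex cover U; returns the cop stations used and the set of vertices that
    remain accessible to the robber."""
    alive, stations = set(adj), []
    while True:
        liveU = U & alive
        # RR1: v in I with >= 3 remaining neighbours (all of which lie in U).
        # RR2: v in U with >= 3 cover vertices in its remaining closed nbhd.
        v = next((v for v in alive
                  if (v not in U and len(adj[v] & alive) >= 3)
                  or (v in U and len((adj[v] & liveU) | {v}) >= 3)), None)
        if v is not None:
            stations.append(('cop', v))
            alive -= adj[v] | {v}
            continue
        # RR3: some z in U lies on a shortest path between two other vertices
        # of U in the remaining graph.
        data = {x: bfs(adj, alive, x) for x in liveU}
        hit = next(((x, z, y) for x in liveU for z in liveU for y in liveU
                    if len({x, z, y}) == 3 and z in data[x][0]
                    and y in data[x][0] and y in data[z][0]
                    and data[x][0][z] + data[z][0][y] == data[x][0][y]), None)
        if hit is None:
            return stations, alive
        x, z, y = hit
        path = path_to(data[x][1], z) + path_to(data[z][1], y)[1:]
        stations.append(('path', path))          # guarded via Proposition 12
        alive -= set(path)
```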
Second, we have the following lemma concerning the structure of subgraphs accessible to \(\mathcal{R}\) after an exhaustive application of rules RR1-RR3.
**Lemma 18**.: _Let \(H\) be a connected component of \(G\) where rules RR1-RR3 cannot be applied anymore. Then, for every two distinct vertices \(x,y\in V(H)\cap U\), either \(xy\in E(G)\) or there exists a vertex \(w\in I\) such that \(xw\in E(G)\) and \(yw\in E(G)\)._
Proof.: For contradiction, let us assume that there exist two distinct vertices \(x,y\in V(H)\cap U\) such that \(xy\notin E(G)\) and there does not exist a vertex \(w\in I\) such that \(xw\in E(G)\) and \(yw\in E(G)\). Since \(x\) and \(y\) are part of the connected component \(H\), there exists an \(x,y\)-path. Let \(P\) be an isometric \(x,y\)-path.
Let \(P=x,v_{1},\ldots,v_{\ell},y\), and note that \(\ell\geq 1\) since \(xy\notin E(G)\). If \(\ell=1\), then \(v_{1}\) is a common neighbor of \(x\) and \(y\); by our assumption \(v_{1}\notin I\), and hence \(v_{1}\in U\). If \(\ell\geq 2\), then, since the vertices in \(I\) form an independent set, the internal vertices \(v_{1},\ldots,v_{\ell}\) cannot all be from \(I\), so there exists at least one \(v_{i}\), for \(i\in[\ell]\), such that \(v_{i}\in U\). In either case, \(P\) is an isometric path containing at least three vertices from \(U\). Therefore, we can apply RR3, and hence, we reach a contradiction.
Next, we argue that, after an exhaustive application of rules RR1-RR3, the cop number of each connected component accessible to \(\mathcal{R}\) is bounded. We have the following lemma.
**Lemma 19**.: _Once we cannot apply rules RR1-RR3 anymore, let the robber be in a connected component \(H\). Then, \(c(H)\leq 2\)._
Proof.: We present a winning strategy for two cops. If \(H\) contains at most two vertices from \(U\), then the cops have a winning strategy by placing a cop on each of these vertices. Hence, we assume there exist at least three vertices in \(H\) from \(U\). Let \(x\) and \(y\) be two distinct vertices of \(H\) from \(U\). Then, we place a cop on each of these vertices. Denote the cops by \(\mathcal{C}_{1}\) and \(\mathcal{C}_{2}\). We consider the two cases as follows.
**Case 1:** If \(\mathcal{R}\) is on a vertex \(w\in I\), then due to reduction rule RR1, it can have at most two neighbors in \(U\). Let them be \(u\) and \(v\). Now, due to Lemma 18, the cops can move to vertices such that one of them, say \(x^{\prime}\), dominates the vertex \(u\) and the other, say \(y^{\prime}\), dominates the vertex \(v\). See Figure 1 for reference. So, the cops move to the vertices \(x^{\prime}\) and \(y^{\prime}\). This restricts \(\mathcal{R}\) to stay on its current vertex \(w\) in \(I\) (else it is captured in the next move of the cops). Now, in the next move of the cops, they move to the vertices \(u\) and \(v\). Again,
this restricts \(\mathcal{R}\) to stay on the vertex \(w\) (else it is indeed captured). Finally, in the next move of the cops, the cops capture \(\mathcal{R}\).
**Case 2:** If \(\mathcal{R}\) is on a vertex \(u\in U\), then \(\mathcal{C}_{1}\) can move to a vertex in \(I\), say \(x^{\prime}\), to attack \(\mathcal{R}\) (due to Lemma 18). This forces \(\mathcal{R}\) to either move to a vertex \(w\in I\) or to a vertex \(z\in U\). Accordingly, we consider two sub-cases.
1. If \(\mathcal{R}\) moves to a vertex \(w\in I\), then note that \(w\) can have at most two neighbors in \(U\) (due to RR1), and one of them is \(u\) (being attacked by \(\mathcal{C}_{1}\)). Let the other neighbor of \(w\) be \(v\). Now, \(\mathcal{C}_{2}\) can move to a vertex such that it attacks \(v\) (due to Lemma 18). This game state is identical to case 1. Hence, the cops can capture the robber in two rounds.
2. If \(\mathcal{R}\) moves to a vertex \(z\in U\), then \(\mathcal{C}_{1}\) moves to \(u\). This forces \(\mathcal{R}\) to move to a vertex in \(I\): due to RR2, \(z\) can have at most one neighbor in \(U\), and that neighbor is \(u\), which is now occupied by \(\mathcal{C}_{1}\), with both cops being in \(U\). This game state is again identical to case 1, and thus the cops win in at most two rounds.
This completes our proof.
Finally, we have the following theorem.
**Theorem 1**.: _For a graph \(G\), \(\mathsf{c}(G)\leq\frac{\mathsf{vcn}}{3}+1\)._
Proof.: The correctness of this theorem follows from Lemma 19 and the fact that using each cop in the reduction rules RR1, RR2, and RR3, we remove at least three vertices from \(U\). If we can apply these rules \(\frac{t}{3}\) times, \(\mathcal{R}\) gets restricted to a vertex in \(I\), and thus one additional cop can capture \(\mathcal{R}\). Else, when we apply these rules at most \(\frac{t}{3}-1\) times, we then need two additional cops (by Lemma 19), that is, we overall need at most \(\frac{t}{3}+1\) cops to ensure capture.
We note here that a similar technique will fail if we try to "remove" four vertices in each reduction rule. More precisely, if we have the following reduction rules, then we might not get a graph with a bounded (by a constant independent of the input) cop number.
**Reduction Rule 4** (Rr4).: _If there is a vertex \(v\in I\) such that \(|N(v)|\geq 4\), then place a cop at \(v\) and delete \(N[v]\)._
**Reduction Rule 5** (Rr5).: _If there is a vertex \(v\in U\) such that \(|N[v]\cap U|\geq 4\), then place a cop at \(v\) and delete \(N[v]\)._
Figure 1: Illustration for the proof of Lemma 19.
**Reduction Rule 6** (Rr6).: _If there is an isometric path \(P\) such that \(P\) contains at least four vertices from \(U\), then guard \(P\) using one cop and delete \(V(P)\) (see Proposition 12)._
We have the following claim.
**Lemma 20**.: _For every \(k\in\mathbb{N}\), there exists a graph \(G\) with a vertex cover \(U\) and independent set \(I=V(G)\setminus U\), such that we cannot apply the rules RR4-RR6, and \(\mathsf{c}(G)>k\)._
Proof.: Bonato and Burgess [13] proved that for every \(k\), there exists a diameter-2 graph \(H\) such that \(c(H)\geq k\). Let \(H\) be a diameter-2 graph such that \(c(H)\geq k\). Joret et al. [53] showed that subdividing each edge of a graph an equal number of times does not reduce the cop number. So, we subdivide each edge of \(H\) to get the graph \(G\) such that \(\mathsf{c}(G)\geq k\). Now, we can put the original vertices in the vertex cover \(U\), and the newly introduced vertices in the independent set \(I\). We cannot apply any of the rules RR4 (because each vertex in \(I\) has degree exactly 2), RR5 (because \(U\) is an independent set), and RR6 (since any isometric path in \(G\) containing more than three vertices of \(U\) will contradict the fact that \(H\) is a diameter-2 graph). Hence, \(G\) is a graph that satisfies the conditions of our lemma.
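As a small illustration of the construction used in this proof, the following Python snippet (our own; the adjacency-dictionary representation and the labels of the new vertices are assumptions) subdivides every edge of a graph once and returns the original vertices as the vertex cover \(U\), so that the newly introduced degree-2 vertices form the independent set \(I\).

```python
def subdivide_once(adj):
    """Subdivides every edge of `adj` (vertex -> set of neighbours) once.
    Returns the new graph together with the vertex cover formed by the
    original vertices (vertex labels are assumed to be sortable)."""
    edges = {tuple(sorted((u, v))) for u in adj for v in adj[u]}
    new_adj = {v: set() for v in adj}
    for (u, v) in edges:
        w = ('sub', u, v)            # the new degree-2 vertex on the edge uv
        new_adj[w] = {u, v}
        new_adj[u].add(w)
        new_adj[v].add(w)
    return new_adj, set(adj)         # the original vertices form a vertex cover
```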
### Bounding the Cop Number for Variants
Here we extend the result of Theorem 1 to several variations of the game. In particular, we prove the following result.
**Lemma 21**.: _Let \(G\) be a graph with a vertex cover \(U\) of size \(t\). Then,_
1. \(\mathsf{c}_{lazy}\leq\frac{t}{2}+1\)_._
2. \(\mathsf{c}_{attack}\leq\frac{t}{2}+1\)_._
Proof.: Let \(I\) be the independent set \(V(G)\setminus U\). First, we note here that in Cops and Attacking Robber, one cop cannot ensure guarding of an isometric path [15], and in Lazy CnR, multiple cops, say, \(\ell\) cops, cannot ensure guarding \(\ell\) paths simultaneously. (This is evident from the fact that there exists a planar graph \(G\) with \(\mathsf{c}_{lazy}(G)\geq 4\)[40].) Therefore, reduction rules RR1-RR3 will not directly imply an upper bound on the respective cop numbers here. So, we have the following reduction rules:
**Reduction Rule 7** (Rr7).: _If there is a vertex \(v\in I\) such that \(|N(v)|>1\), then place a cop at \(v\) and delete \(N[v]\)._
**Reduction Rule 8** (Rr8).: _If there is a vertex \(v\in U\) such that \(|N_{U}[v]|>1\), then place a cop at \(v\) and delete \(N[v]\)._
Observe that after an exhaustive application of reduction rules RR7 and RR8, we are left with a collection of stars, each of which has its center vertex in \(U\).
In the case of Lazy CnR, we can easily apply rules RR7 and RR8, since a cop, once placed according to one of these rules, does not move again except to capture \(\mathcal{R}\). Finally, \(\mathcal{R}\) is restricted to a star, and one extra lazy cop can move and capture \(\mathcal{R}\).
In the case of Cops and Attacking Robber, all cops start at the same vertex. Whenever the cop player wants to station one of the cops at a vertex \(v\) according to rules RR7 and RR8, all of the cops that are not stationed yet move together to the vertex \(v\) (to avoid getting attacked). Note that once a cop is stationed at a vertex \(u\), the cop never moves and hence can never be attacked (because if \(\mathcal{R}\) wants to attack a cop at vertex \(v\), it has to reach a vertex in \(N(v)\) in the previous round, and now the cop at \(v\) can move and capture \(\mathcal{R}\)). Once we cannot apply rules RR7 and RR8 anymore, \(\mathcal{R}\) is restricted to a star. At this point, if there are at least two unstationed cops, then these two cops can move to capture \(\mathcal{R}\). Else, let \(v\) be the last vertex
where we stationed a cop. Since at this point we have stationed all but one cop (\(\frac{t}{2}\) cops stationed), observe that for each vertex \(x\in U\), there is a cop in \(N[x]\), and therefore, \(\mathcal{R}\) is restricted to one vertex, say, \(u\), of \(I\). Now, \(\mathcal{R}\) can only attack a cop if it is at a vertex in \(N(u)\) (and \(N(u)\subseteq U\)). Finally, the only unstationed cop, say, \(\mathcal{C}\), moves to a vertex in \(N(u)\) in a finite number of steps (at this point \(\mathcal{R}\) cannot attack \(\mathcal{C}\) without getting captured as \(\mathcal{C}\) is on a vertex in \(U\)), and captures \(\mathcal{R}\) in the next round.
The bound on the cop numbers follow from the fact that in each reduction rule, we remove at least two vertices from \(U\) and place only one cop.
We have the following straightforward observation concerning the bounds on the cop number for the remaining variants.
**Observation 22**.: _Let \(t\) be the \(\mathsf{vcn}\) of a graph \(G\). Then, \(\mathsf{c}_{active}(G)\leq t\), \(\mathsf{c}_{surround}(G)\leq t\), \(\mathsf{c}_{s}(G)\leq t\) (for any value of \(s\)), and for a strongly connected orientation \(\overrightarrow{G}\) of \(G\), \(\mathsf{c}(\overrightarrow{G})\leq t\)._
We remark that the cop number for an oriented graph \(\overrightarrow{G}\) (with underlying graph \(G\)) that is not strongly connected can be arbitrarily larger than the \(\mathsf{vcn}\) of \(G\). To see this, consider a vertex cover \(U\) of size \(t\) in \(G\). Next, we add \(\ell\) vertices \(v_{1},\ldots,v_{\ell}\) such that each vertex \(v_{i}\), for \(i\in[\ell]\), has only outgoing edges to vertices in \(U\). Now, if we do not place a cop on some \(v_{j}\), for \(j\in[\ell]\), then \(\mathcal{R}\) can start at \(v_{j}\) and cops can never capture \(\mathcal{R}\). Hence, \(\mathsf{c}(\overrightarrow{G})\geq\ell\).
The proof of Theorem 7 directly follows from Lemma 21 and Observation 22.
## 4 Kernelization Algorithms
In this section, we provide kernelization algorithms for CnR and its variants.
### Exponential Kernel for CnR by \(\mathsf{vcn}\):
Let \(G\) be a graph where a vertex cover \(U\) of size \(t\) is given. If no such vertex cover is given, then we can compute a vertex cover \(U\) of size \(t\leq 2\cdot\mathsf{vc}(G)\) using a polynomial-time approximation algorithm [80]. Then, the vertices in \(V(G)\setminus U\) form an independent set \(I\) of size \(n-t\). Recall that the question is whether \(G\) is \(k\)-copwin.
Our kernelization algorithm is based on the exhaustive application of the following reduction rules.
**Reduction Rule 9** (Rr9).: _If \(k\geq\frac{t}{3}+1\), then answer positively._
**Reduction Rule 10** (Rr10).: _If \(k=1\), then apply an \(\mathcal{O}(n^{3})\) time algorithm (Proposition 13) to check whether \(G\) is copwin._
**Reduction Rule 11** (Rr11).: _If there are two distinct vertices \(u,v\in I\) such that \(N(u)\subseteq N(v)\), then delete \(u\)._
The safeness of rule RR9 follows from Theorem 1. For the safeness of rule RR11, we have the following lemma. We note that Lemma 23 can also be derived from [10, Corollary 3.3], but we give a self-contained proof for the sake of completeness.
**Lemma 23**.: _Let \(u\) and \(v\) be two distinct vertices of \(G\) such that \(N(u)\subseteq N(v)\). Consider the subgraph \(H\) of \(G\) induced by \(V(G)\setminus\{u\}\). Let \(k\geq 2\). Then, \(G\) is \(k\)-copwin if and only if \(H\) is \(k\)-copwin._
Proof.: First, we show that if \(G\) is \(k\)-copwin, then \(H\) is \(k\)-copwin. For the graph \(H\), the \(k\) cops borrow the winning strategy that they have for \(G\), with the only difference that whenever a cop has to move to the vertex \(u\) in \(G\), it moves to \(v\) (in \(H\)) instead. Since \(N(u)\subseteq N(v)\), the cop can make the next move as it does in the winning cop strategy for \(G\). Note that using this strategy, the cops can capture \(\mathcal{R}\) if \(\mathcal{R}\) is restricted to \(V(H)\) in \(G\). Therefore, using this strategy, \(k\) cops will capture \(\mathcal{R}\) in \(H\) as well.
Second, we show that if \(H\) is \(k\)-copwin, then \(G\) is \(k\)-copwin. Here, for each vertex \(x\neq u\) of \(G\), we define \(I(x)=x\), and for \(u\), we define \(I(u)=v\). Observe that for each \(x\in V(G)\), \(I(x)\) is restricted to \(H\) and if \(xy\in E(G)\), then \(I(x)I(y)\in E(H)\). Therefore, every valid move of a player from a vertex \(x\) to \(y\) in \(G\) can be translated to a valid move from \(I(x)\) to \(I(y)\) in \(H\). Now, the cops have the following strategy. If the robber is on a vertex \(x\), the cops consider the _image_ of the robber on the vertex \(I(x)\). Since the robber's image is restricted to \(H\), the cops can use the winning strategy for \(H\) to capture the image of the robber in \(G\). Once the image is captured, if the robber is not on the vertex \(u\), then the robber is also captured. Otherwise, the robber is on the vertex \(u\), and at least one cop is on \(v\). See Figure 2 for an illustration. So, one cop, say \(\mathcal{C}_{1}\), stays on \(v\) and this prevents the robber from ever leaving \(u\). Indeed, this follows because \(N(u)\subseteq N(v)\), and so, if \(\mathcal{R}\) ever leaves \(u\), it will be captured by \(\mathcal{C}_{1}\) in the next cop move. Finally, since \(k>1\), some other cop, say \(\mathcal{C}_{2}\), can use a finite number of moves to reach \(u\) and capture the robber.
This completes our proof.
Note that the requirement for \(k\geq 2\) in Lemma 23 is crucial. It might so happen that we can get an \(H\) such that \(c(H)=1\), but \(\mathsf{c}(G)>1\). To see this, consider the example of \(C_{4}\), where any two diagonal (i.e., non-adjacent) vertices satisfy the property in Rule RR9, and if we remove one of them, the cop number reduces from \(2\) to \(1\). However, this does not harm our algorithm because if we are given \(k=1\), then RR10 is applied (before RR11).
Two sets \(A\) and \(B\) are _incomparable_ if neither \(A\subseteq B\) nor \(B\subseteq A\). We shall use the following proposition that follows from Sperner's Theorem and Stirling's approximation.
**Proposition 24**.: _Let \(X\) be a set of cardinality \(N\). Moreover, let \(Y\) be a set of subsets of \(X\) such that for every two distinct \(a,b\in Y\), \(a\) and \(b\) are incomparable. Then, \(|Y|\leq\frac{2^{N}}{\sqrt{N}}\)._
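For completeness, the calculation behind Proposition 24 is the standard one (a sketch; the constants are not optimized). By Sperner's Theorem, an antichain \(Y\) of subsets of \(X\) satisfies
\[|Y|\;\leq\;\binom{N}{\lfloor N/2\rfloor}\;\leq\;\frac{2^{N}}{\sqrt{N}},\]
where the second inequality follows from Stirling's approximation, since \(\binom{N}{\lfloor N/2\rfloor}=(1+o(1))\,2^{N}\sqrt{2/(\pi N)}\) and \(\sqrt{2/\pi}<1\) (the finitely many small cases can be checked directly).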
Once we cannot apply RR9-RR11 anymore, we claim that the size of the reduced graph \(G^{\prime}\) is bounded by a function of \(t\). Let \(U^{\prime}=U\cap V(G^{\prime})\) and \(I^{\prime}=I\cap V(G^{\prime})\). Clearly, \(|U^{\prime}|\leq t\). Now, each vertex \(u\in I^{\prime}\) is associated with a neighborhood \(N(u)\) such that \(N(u)\subseteq U^{\prime}\). Moreover, for any two vertices \(u,v\in I^{\prime}\), the sets
Figure 2: Illustration for Lemma 23. Here, \(\mathcal{R}\) is at vertex \(u\) and \(\mathcal{C}_{1}\) is at vertex \(v\).
\(N(u)\) and \(N(v)\) are incomparable. Hence, due to Proposition 24, \(|I^{\prime}|\leq\frac{2^{t}}{\sqrt{t}}\), and therefore, \(|V(G^{\prime})|\leq t+\frac{2^{t}}{\sqrt{t}}\), which proves the following theorem.
**Theorem 2**.: CNR _parameterized by_ vcn _admits a kernel with at most \(\mathsf{vcn}+\frac{2^{\mathsf{vcn}}}{\sqrt{\mathsf{vcn}}}\) vertices._
Now, we can apply the XP-time algorithm (Proposition 13) for CNR on our kernel. Since \(k\leq\frac{t}{3}\), the running time we get is exponential only in \(t\) and polynomial in \(n\). Specifically, the running time of the algorithm is \(t\cdot\left(t+\frac{2^{t}}{\sqrt{t}}\right)^{\frac{t}{3}+2}\cdot n^{\mathcal{ O}(1)}\). Moreover, if a vertex cover \(U\) of size \(t=\mathsf{vc}(G)\) is not given, then we can compute one in time \(1.2738^{t}\cdot n^{\mathcal{O}(1)}\)[25]. Thus, we have the following corollary.
**Corollary 3**.: CNR _is_ FPT _parameterized by_ vcn_, and is solvable in \(\left(\mathsf{vcn}+\frac{2^{\mathsf{vcn}}}{\sqrt{\mathsf{vcn}}}\right)^{ \frac{\mathsf{vcn}}{3}+2}\cdot n^{\mathcal{O}(1)}\) time._
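As an illustration of how the kernelization might be implemented, here is a minimal Python sketch of RR9-RR11 (our own; the adjacency-dictionary input, the greedy antichain selection used for RR11, and the omission of the single-cop subroutine of RR10 are assumptions of the sketch rather than part of the paper).

```python
def vcn_kernel(adj, U, k):
    """Sketch of RR9-RR11.  `adj` maps each vertex to its neighbour set
    (a Python set), U is a vertex cover of the graph, and k is the number
    of cops; returns either the answer or a reduced instance."""
    t = len(U)
    if 3 * k >= t + 3:                   # RR9: k >= ceil(t/3) + 1, a Yes-instance
        return 'YES'
    if k == 1:
        # RR10: here one would run the O(n^3) single-cop algorithm of
        # Proposition 13; it is omitted from this sketch.
        raise NotImplementedError
    I = [v for v in adj if v not in U]
    # RR11: delete u in I whenever N(u) is contained in N(v) for another kept
    # v in I.  Scanning I by non-increasing degree keeps one representative
    # per maximal neighbourhood, so the kept neighbourhoods are incomparable.
    kept_I = []
    for u in sorted(I, key=lambda v: len(adj[v]), reverse=True):
        if not any(adj[u] <= adj[v] for v in kept_I):
            kept_I.append(u)
    kept = set(U) | set(kept_I)
    reduced_adj = {v: adj[v] & kept for v in kept}
    return reduced_adj, k                # the kernel instance (G', k)
```

Note that, exactly as required by Lemma 23, the rule RR11 is only applied once \(k\geq 2\) is guaranteed (RR10 handles \(k=1\) beforehand).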
### Exponential Kernel for CnR by cvd:
To get a kernel for CNR parameterized by cvd, we employ techniques similar to the ones we used to get a kernel for CNR parameterized by vcn. Let \(U\) be a cluster vertex deletion set of size \(t\). Let \(S=V(G)\setminus U\), and \(C_{1},\ldots,C_{\ell}\) be the set of disjoint cliques that form the graph \(G[S]\). Since \(\mathsf{c}(G)\leq\frac{t}{2}+1\) (Theorem 16), we have the following reduction rule.
**Reduction Rule 12** (Rr12).: _If \(k\geq\frac{t}{2}+1\), then report Yes-instance._
Next, we have the following lemma.
**Lemma 25**.: _Let \(u\) and \(v\) be vertices of some clique \(C\) of \(G[S]\). If \(N_{U}(u)\subseteq N_{U}(v)\), then \(\mathsf{c}(G)=\mathsf{c}(G\setminus\{u\})\)._
Proof.: First we observe that, since \(u\) and \(v\) are part of the same clique \(C\), \(N[u]\subseteq N[v]\). Then, the proof of this lemma follows from the proof of Lemma 23. We remark that this proof also follows from the idea of retracts used in the CnR literature [10, 63]. Additionally, we remark that, here, \(\mathsf{c}(G)\) need not be necessarily greater than \(1\). To see this, consider the situation when \(\mathcal{R}\) is at \(u\) and a cop, say, \(\mathcal{C}_{1}\), is at \(v\). Now, \(\mathcal{R}\) cannot move to a vertex in \(U\) since \(N_{U}(u)\subseteq N_{U}(v)\) and \(\mathcal{R}\) cannot stay on a vertex in \(C\) since \(v\) is a part of \(C\). Thus, \(\mathcal{R}\) gets captured in the next move by \(\mathcal{C}_{1}\).
Hence, we can apply the following reduction rule, whose safeness was proved by Lemma 25.
**Reduction Rule 13** (Rr13).: _Let \(u\) and \(v\) be vertices of some clique \(C\in G[S]\) such that \(N[u]\subseteq N[v]\). Then, delete \(u\)._
Once we cannot apply reduction rule RR13 anymore, the size of each clique in \(G[S]\) is at most \(\frac{2^{t}}{\sqrt{t}}\) (due to Proposition 24).
Similarly to Lemma 25, we have the following lemma.
**Lemma 26**.: _Let \(C_{i}\) and \(C_{j}\) be two cliques in \(G[S]\) such that for each vertex \(u\in V(C_{i})\), there exists a vertex \(v\in V(C_{j})\) such that \(N_{U}(u)\subseteq N_{U}(v)\). Then, \(k>1\) cops have a winning strategy in \(G\) if and only if they have a winning strategy in \(G[V(G)\setminus V(C_{i})]\)._
Proof.: The proof idea here is similar to the proof idea of Lemma 23. Let \(H=G[V(G)\setminus V(C_{i})]\). Here, we will just prove that if \(k\) cops have a winning strategy in \(H\), then \(k\) cops have a winning strategy in \(G\). (The proof of the reverse direction is rather easy to see, combining arguments from Lemma 23 and the arguments we present in the rest of this proof).
Let \(k\geq 2\) cops have a winning strategy in \(H\). Similarly to Lemma 23, for each vertex \(x\in V(G)\setminus V(C_{i})\), we define \(I(x)=x\), and for each vertex \(u\in V(C_{i})\), we have a vertex \(v\in V(C_{j})\) such that \(N_{U}(u)\subseteq N_{U}(v)\)
and we define \(I(u)=v\). (Note that there might be multiple choices for \(v\) here. We can choose any such vertex.)
Observe that for each vertex \(x\in V(G)\), \(I(x)\) is restricted to \(H\). Moreover, if \(xy\in E(G)\), then \(I(x)I(y)\in E(H)\) for the following reasons. If \(x,y\in V(G)\setminus V(C_{i})\), then it is obvious. Else, if \(x,y\in V(C_{i})\), then observe that \(I(x)\) and \(I(y)\) are part of some clique \(C_{j}\), and \(N_{U}(x)\subseteq N_{U}(I(x))\) and \(N_{U}(y)\subseteq N_{U}(I(y))\). Hence, in this case, if \(xy\in E(G)\), then \(I(x)I(y)\in E(H)\). Finally, assume without loss of generality that \(x\in V(C_{i})\) and \(y\in V(G)\setminus V(C_{i})\). In this case, \(xy\in E(G)\) only if \(y\in U\). Since \(N_{U}(x)\subseteq N_{U}(I(x))\), \(I(x)I(y)\in E(H)\). Thus, if \(xy\in E(G)\), then \(I(x)I(y)\in E(H)\). Therefore, every valid move of a player from a vertex \(x\) to a vertex \(y\) in \(G\) can be translated to a move from \(I(x)\) to \(I(y)\) in \(H\).
Now, cops play their winning strategy in \(H\) with the following consideration: When the robber is at a vertex \(x\) in \(G\), the cops consider the _image_ of the robber at vertex \(I(x)\) in \(G\). Since the robber's image is restricted to the vertices of \(H\), the cops can use a winning strategy from \(H\) to capture the image of the robber in \(G\). Once the image is captured, if the robber is at a vertex \(x\notin V(C_{i})\), then the robber is also captured. Otherwise, the robber is at a vertex \(x\in V(C_{i})\), and one of the cops is at vertex \(I(x)\) in \(C_{j}\). Now, observe that the robber cannot immediately move to a vertex in \(U\). Anyhow, the robber can move to some other vertex \(y\in V(C_{i})\), and in this case, the cop at vertex \(I(x)\) can move to vertex \(I(y)\in V(C_{j})\). This way, the cop occupying the robber's image can prevent the robber from ever leaving \(C_{i}\). Since \(k\geq 2\), some other cop can move to capture the robber in \(C_{i}\) (as cliques are copwin). This completes our proof.
Thus, we can apply the following reduction rule, whose safeness was proved by Lemma 26.
**Reduction Rule 14** (Rr14).: _Let \(C_{i}\) and \(C_{j}\) be two cliques in \(G[S]\) such that for each vertex \(u\in V(C_{i})\), there exists a vertex \(v\in V(C_{j})\) such that \(N_{U}(u)\subseteq N_{U}(v)\). Then, delete \(V(C_{i})\)._
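A similarly hedged sketch of a single application of RR14, in the same assumed adjacency-dict format (the helper names are again illustrative only, and the caller repeats the call until it returns `False`):

```python
def n_U(adj, v, U):
    """N_U(v): the neighbours of v inside the deletion set U (a set of vertices)."""
    return adj[v] & U

def dominates(adj, U, Ci, Cj):
    """True if clique Cj dominates clique Ci in the sense of RR14:
    every u in Ci has some v in Cj with N_U(u) <= N_U(v)."""
    return all(any(n_U(adj, u, U) <= n_U(adj, v, U) for v in Cj) for u in Ci)

def apply_rr14_once(adj, U, cliques):
    """Delete one dominated clique, if any.  (Safeness requires k > 1, as in Lemma 26.)"""
    for i, Ci in enumerate(cliques):
        for j, Cj in enumerate(cliques):
            if i != j and dominates(adj, U, Ci, Cj):
                for u in Ci:
                    for w in adj.pop(u):
                        if w in adj:
                            adj[w].discard(u)
                del cliques[i]
                return True
    return False
```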
Finally, we use the following lemma to bound the size of the desired kernel from Theorem 6.
**Lemma 27**.: _After an exhaustive application of RR12-RR14, the size of the reduced graph is at most \(2^{2^{t}+\sqrt{t}}\)._
Proof.: Once we cannot apply the reduction rules RR13 and RR14, due to Proposition 24, each clique can have at most \(\frac{2^{t}}{\sqrt{t}}\) vertices. Moreover, the total number of cliques possible is at most \(\frac{2^{\frac{2^{t}}{\sqrt{t}}}}{\sqrt{\frac{2^{t}}{\sqrt{t}}}}\) (due to Proposition 24). Thus, the total number of vertices in the reduced graph is at most \(2^{2^{t}+\sqrt{t}}\).
Since \(k\leq\frac{t}{2}+1\) (by Reduction Rule RR12), applying the XP-algorithm for CnR from Proposition 13 to the kernel in Theorem 6 gives us the following corollary.
**Corollary 28**.: _CnR is \(\mathsf{FPT}\) parameterized by \(\mathsf{cvd}\). Specifically, it is solvable in \((\mathsf{cvd}+2^{2^{\mathsf{cvd}}+\sqrt{\mathsf{cvd}}})^{\frac{\mathsf{cvd}}{ 2}+2}\cdot n^{\mathcal{O}(1)}\) time._
### Exponential Kernel for CnR by \(\mathsf{dts}\)
Using the ideas we presented in Section 4.2, we can also get a kernel for CnR with respect to deletion to stars number. Let \(U\) be a deletion to stars vertex set of size \(t\). Also, let \(S=V(G)\setminus U\), and let \(X_{1},\ldots X_{\ell}\) be the stars in the graph \(G[S]\). Specifically, we have the following reduction rules along with reduction rule RR12.
**Reduction Rule 15** (Rr15).: _Let \(u\) and \(v\) be two leaf vertices of some star \(X\) in \(G[S]\) such that \(N_{U}(u)\subseteq N_{U}(v)\). Then, delete \(u\)._
**Reduction Rule 16** (Rr16).: _Let \(X\) and \(Y\) be two stars in \(G[S]\) such that \(V(X)=\{x,x_{1},\ldots,x_{p}\}\) and \(V(Y)=\{y,y_{1},\ldots,y_{q}\}\), where \(x\) and \(y\) are center vertices of \(X\) and \(Y\), respectively. If \(N_{U}(x)\subseteq N_{U}(y)\) and for each vertex \(x_{i}\) (for \(i\in[p]\)), there is a vertex \(y_{j}\) (for \(j\in[q]\)) such that \(N_{U}(x_{i})\subseteq N_{U}(y_{j})\), then delete \(X\)._
The safeness of RR12 follows from Theorem 16. We have the following lemma, which establishes that reduction rules RR15 and RR16 are safe.
**Lemma 29**.: _Assuming \(k>1\), reduction rules RR15 and RR16 are safe._
Proof.: To prove that rule RR15 is safe, it suffices to observe that for leaf vertices \(u\) and \(v\) of some star \(X\in S\), if \(N_{U}(u)\subseteq N_{U}(v)\), then \(N(u)\subseteq N(v)\) in \(G\). Indeed, the rest of the proof follows directly from the proof of Lemma 23.
Next, we give a proof idea for the safeness of rule RR16. Here, we just define the function of the image of the robber, and the rest of the proof is similar to the proofs of Lemmas 23 and 26. For each vertex \(u\notin V(X)\), \(I(u)=u\). For each \(x_{i}\), \(I(x_{i})=y_{j}\) such that \(N_{U}(x_{i})\subseteq N_{U}(y_{j})\) (there might be multiple choices for \(y_{j}\) and we can choose any one of them), and \(I(x)=y\).
Now, we claim that once we cannot apply rules RR15 and RR16 anymore, the size of the graph is bounded by a function of \(t\). First, we note that the size of each star is at most \(\frac{2^{t}}{\sqrt{t}}+1\) (by Proposition 24). Let \(X\) and \(Y\) be two stars in \(G[S]\) such that \(x\) and \(y\) are the center vertices of \(X\) and \(Y\), respectively. We say that \(X\) and \(Y\) have the same _neighbourhood type_ if \(N_{U}(x)=N_{U}(y)\). Second, it is easy to see that there can be at most \(2^{t}\) neighbourhood types. Next, we bound the number of stars in each neighbourhood type. Let \(S_{1},\ldots,S_{z}\) be the stars having the same neighbourhood type, and let \(v_{i}\) be the center vertex of star \(S_{i}\). For each star \(S_{i}\), for \(i\in[z]\), let \(\mathcal{S}_{i}=\{N(v):v\in V(S_{i})\setminus\{v_{i}\}\}\). Since we have applied reduction rule RR15 exhaustively, we know that for each \(A\in\mathcal{S}_{i}\), \(A=N(v)\) for a unique vertex \(v\in V(S_{i})\setminus\{v_{i}\}\). Observe that each \(S_{i}\) is a subset of the power set of \(U\) and the power set of \(U\) has size \(2^{t}\). Moreover, since we have applied reduction rule RR16 exhaustively, we know that for any \(i,j\in[z]\), neither \(\mathcal{S}_{i}\subseteq\mathcal{S}_{j}\) nor \(\mathcal{S}_{j}\subseteq\mathcal{S}_{i}\). Hence, due to Proposition 24, \(z\leq\frac{2^{2^{t}}}{\sqrt{2^{t}}}\). Therefore, the size of the reduced graph can be at most \(\frac{2^{2^{t}}}{\sqrt{2^{t}}}\cdot 2^{t}\cdot(\frac{2^{t}}{\sqrt{t}}+1)\). Thus, we have the desired kernel from Theorem 6.
Since \(k\leq\frac{t}{2}+1\) (by reduction rule RR12), applying the XP-algorithm for CnR from Proposition 13 to the kernel in Theorem 6 gives us the following corollary.
**Corollary 30**.: \(\textsc{CnR}\) _is \(\mathsf{FPT}\) parameterized by \(\mathsf{dts}\). Specifically, it is solvable in \((\mathsf{dts}+2^{2^{\mathsf{dts}+\mathsf{dts}^{1.5}}})^{\frac{\mathsf{dts}}{2} +2}\cdot n^{\mathcal{O}(1)}\) time._
### Exponential Kernels for Different Variants
Here, we extend the result of Theorem 2 to several variations of the game. We have the following results.
#### 4.4.1 Lazy CnR and Cops and Attacking Robber:
First, we prove the following lemma.
**Lemma 31**.: _Let \(u\) and \(v\) be two distinct vertices of \(G\) such that \(N(u)\subseteq N(v)\). Consider the graph \(H\) induced by \(V(G)\setminus\{u\}\). Then for \(k>1\) and for \(x\in\{lazy,attack\}\), \(\mathsf{c}_{x}(G)\leq k\) if and only if \(\mathsf{c}_{x}(H)\leq k\)._
Proof.: The proof for the forward direction (\(c_{x}(G)\leq k\) implies \(c_{x}(H)\leq k\)) is easy and follows from arguments similar to those in the proof of Lemma 23. We prove the reverse direction (\(c_{x}(H)\leq k\) implies \(c_{x}(G)\leq k\)) for both the variants below. Moreover, similarly to the proof of Lemma 23, we define \(I(u)=v\) and \(I(x)=x\) when \(x\neq u\). Similarly, when \(\mathcal{R}\) is at a vertex \(x\), we say that the _image_ of \(\mathcal{R}\) is at vertex \(I(x)\). (Note that the image of \(\mathcal{R}\) is restricted to \(H\).) In both of the variants, the cops will play in \(G\) to capture the image of the robber using the winning strategy of \(H\).
In Lazy CnR, the cops begin by capturing the image of \(\mathcal{R}\) in \(G\). If \(\mathcal{R}\) is at a vertex \(x\neq u\), then \(\mathcal{R}\) is captured. If \(\mathcal{R}\) is at vertex \(u\), then observe that there is a cop, say, \(\mathcal{C}\), at \(v\) that has captured the image of
\(\mathcal{R}\). Now, \(\mathcal{C}\) ensures that \(\mathcal{R}\) cannot move, and some other lazy cop can move to capture \(\mathcal{R}\) in a finite number of rounds.
In Cops and Attacking Robber, the main observation is that if the cops can capture \(\mathcal{R}\) in \(H\), they can capture the image of \(\mathcal{R}\) in \(G\) without getting attacked by \(\mathcal{R}\). If \(\mathcal{R}\) is at a vertex \(x\neq u\) when the image of \(\mathcal{R}\) is captured, then \(\mathcal{R}\) is captured. Otherwise, \(\mathcal{R}\) is at \(u\), and a cop, say, \(\mathcal{C}_{1}\), is at vertex \(v\). Now another cop, say, \(\mathcal{C}_{2}\), can move to a vertex \(w\in N(v)\) (in a finite number of steps) to capture \(\mathcal{R}\). If \(\mathcal{R}\) attacks \(\mathcal{C}_{2}\) at this point, then note that \(\mathcal{C}_{1}\) can move to capture \(\mathcal{R}\) in the next round. If \(\mathcal{R}\) does not attack, then \(\mathcal{C}_{2}\) moves to capture \(\mathcal{R}\) in the next round.
Lemma 31 establishes that reduction rule RR11 is safe for both Cops and Attacking Robber and Lazy CnR. Before applying reduction rule RR11, we apply the following reduction rules.
**Reduction Rule 17** (RR17).: _If \(k\geq\frac{t}{2}+1\), then answer positively (Theorem 7)._
**Reduction Rule 18** (RR18).: _If \(k=1\), then apply the \(\mathcal{O}(n^{3})\) time algorithm from Proposition 14._
The size of the kernel, by using these reduction rules, is dependent on RR9. Therefore, the proof of the existence of the desired kernel from Theorem 8 follows directly.
Moreover, Theorem 8, along with the XP-algorithms from Proposition 14 for these variants, gives the following immediate corollary.
**Corollary 32**.: Cops and Attacking Robber _and Lazy CnR are_ FPT _parameterized by_ vcn_. Specifically, they are solvable in \(\left(\mathsf{vcn}+\frac{2^{\mathsf{vcn}}}{\sqrt{\mathsf{vcn}}}\right)^{\frac{\mathsf{vcn}}{2}+2}\cdot n^{\mathcal{O}(1)}\) time._
#### 4.4.2 CnR on Directed Graphs:
Next, we consider the game of CnR on oriented graphs. For a directed graph \(\overrightarrow{G}\) and a vertex \(v\in V(\overrightarrow{G})\), let \(N^{+}(v)\) and \(N^{-}(v)\) denote the set of out-neighbors and in-neighbors of \(v\), respectively. We have the following lemma.
**Lemma 33**.: _Let \(u\) and \(v\) be two distinct vertices of a strongly connected directed graph \(\overrightarrow{G}\) such that \(N^{+}(u)\subseteq N^{+}(v)\) and \(N^{-}(u)\subseteq N^{-}(v)\). Let \(\overrightarrow{H}\) be the graph induced by \(V(\overrightarrow{G})\setminus\{u\}\). Then, for \(k>1\), \(k\) cops have a winning strategy in \(\overrightarrow{H}\) if and only if \(k\) cops have a winning strategy in \(\overrightarrow{G}\)._
Proof.: First, observe that \(\overrightarrow{H}\) is also strongly connected.
Second, let \(k\) cops have a winning strategy in \(\overrightarrow{G}\). Then, the cops can use this winning strategy in \(\overrightarrow{H}\), with the only difference that whenever a cop, say, \(\mathcal{C}\), has to move to \(u\) in \(\overrightarrow{G}\), \(\mathcal{C}\) moves to \(v\) in \(\overrightarrow{H}\) instead (\(\mathcal{C}\) can do so because \(N^{-}(u)\subseteq N^{-}(v)\)). Next, whenever \(\mathcal{C}\) has to move to a vertex, say, \(w\), from \(u\), in the strategy in \(G\), then \(\mathcal{C}\) can move to \(w\) from \(v\) also (since \(N^{+}(u)\subseteq N^{+}(v)\)). As \(\mathcal{R}\) is restricted to \(V(\overrightarrow{H})\) in \(\overrightarrow{G}\), cops will capture \(\mathcal{R}\) using this strategy in \(\overrightarrow{H}\) as well.
Finally, let \(k\) cops have a winning strategy in \(\overrightarrow{H}\). We use this strategy to get a winning strategy in \(\overrightarrow{G}\) using \(k\) cops. First, we define \(I(x)=x\) for \(x\neq u\) and \(I(u)=v\). Since \(I(x)\) is restricted to \(\overrightarrow{H}\), we use the winning strategy in \(\overrightarrow{H}\) to capture \(I(x)\). At this point if \(x\neq u\), then \(\mathcal{R}\) is captured. Else, \(\mathcal{R}\) is at \(u\) and one of the cops, say, \(\mathcal{C}_{1}\), is at \(v\). Since \(N^{+}(u)\subseteq N^{+}(v)\), \(\mathcal{R}\) cannot move as long as \(\mathcal{C}_{1}\) occupies \(v\). Since \(\overrightarrow{G}\) is strongly connected, one of the other cops, say, \(\mathcal{C}_{2}\), can move to \(u\) in a finite number of rounds to capture \(\mathcal{R}\).
Let \(G\) be a graph with a vertex cover \(U\) of size \(t\), and let \(I=V(G)\setminus U\). Let \(\overrightarrow{G}\) be a strongly connected orientation of \(G\). We apply the following reduction rules.
**Reduction Rule 19** (RR19).: _If \(k\geq t\), then answer positively._
**Reduction Rule 20** (RR20).: _If \(k=1\), then apply the \(\mathcal{O}(n^{3})\) time algorithm from Proposition 14 to check whether \(\overrightarrow{G}\) is copwin._
**Reduction Rule 21** (RR21).: _If \(u\) and \(v\) are two distinct vertices in \(I\) such that \(N^{+}(u)\subseteq N^{+}(v)\) and \(N^{-}(u)\subseteq N^{-}(v)\), then delete \(u\)._
Safeness of reduction rules RR19 and RR21 follows from Theorem 7 and Lemma 33, respectively. Now, we argue that once we cannot apply reduction rules RR19-RR21, the size of \(\overrightarrow{G}\) is bounded by a function of \(t\). Observe that each vertex \(u\) in \(I\) has a unique neighbourhood \((N^{+}(u)\cup N^{-}(u))\) and there are three choices for a vertex \(v\in U\) to appear in the neighbourhood of a vertex \(u\in I\), that is, either \(v\in N^{+}(u)\), or \(v\in N^{-}(u)\), or \(v\notin N^{+}(u)\cup N^{-}(u)\). Therefore, the total number of possible vertices in \(I\) is at most \(3^{t}\). Thus, applying reduction rules RR19-RR21, we get the desired kernel from Theorem 8.
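As an illustrative sketch only (the names and the assumed representation, with `out_adj` and `in_adj` mapping every vertex to its out- and in-neighbour sets and `I` the independent set \(V(G)\setminus U\) given as a Python set, are not taken from the text), RR21 can be applied exhaustively as follows:

```python
def apply_rr21(out_adj, in_adj, I):
    """Exhaustively delete u in I whenever some other v in I satisfies
    N+(u) <= N+(v) and N-(u) <= N-(v).  Since I is independent, all of these
    neighbours automatically lie in the vertex cover U."""
    changed = True
    while changed:
        changed = False
        for u in list(I):
            if any(v != u and out_adj[u] <= out_adj[v] and in_adj[u] <= in_adj[v]
                   for v in I):
                for w in out_adj.pop(u):
                    in_adj[w].discard(u)
                for w in in_adj.pop(u):
                    out_adj[w].discard(u)
                I.discard(u)
                changed = True
    return out_adj, in_adj, I
```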
Theorem 8, along with rule RR21 and Proposition 14, gives the following corollary.
**Corollary 34**.: _CnR on strongly connected directed graphs is_ FPT _parameterized by the vertex cover number \(t\). In particular, it is solvable in \((3^{t}+t)^{t+1}\cdot n^{\mathcal{O}(1)}\) time._
### General Kernelization
In this section, we provide a general reduction rule that works for most variants of CnR parameterized by the vertex cover number. Let \(U\) be a vertex cover of size \(t\) in \(G\) and \(I\) be the independent set \(V(G)\setminus U\). For each subset \(S\subseteq U\), we define the following equivalence class: \(\mathcal{C}_{S}=\{v\in I\colon N(v)=S\}\). Given an instance \(((G,k),t)\), we have the following reduction rule.
**Reduction Rule 22** (RR22).: _If there is an equivalence class \(\mathcal{C}_{S}\) such that \(|\mathcal{C}_{S}|>k+1\), then keep only \(k+1\) arbitrary vertices from \(\mathcal{C}_{S}\) in \(G\), and delete the rest._
First, we present (informal) intuition why reduction rule RR22 is safe. Since the neighbourhood of each vertex in \(\mathcal{C}_{S}\) is the same, all of these vertices are equivalent with respect to the movement rules in any of the variants discussed. We keep \(k+1\) copies of such vertices because, on a robber move, there is at least one vertex that is not occupied by any cop. We refer to such a vertex as a _free vertex_. Note that there might be multiple free vertices. On a robber player's turn, if \(\mathcal{R}\) plans to move to a vertex in \(\mathcal{C}_{S}\), it can move to a free vertex. Moreover, if a fast robber wants to use a vertex from \(\mathcal{C}_{S}\) as an intermediate vertex, it can use a free vertex for this purpose as well. We prove safeness for individual variants later in this section.
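A minimal illustrative sketch of RR22 (the names and the adjacency-dict representation, with `adj` mapping each vertex to its neighbour set and `U` the given vertex cover, are assumptions of this sketch):

```python
def apply_rr22(adj, U, k):
    """Group the vertices of I = V(G) \\ U by their neighbourhood in U and
    keep at most k + 1 (arbitrary) vertices of every equivalence class C_S."""
    I = set(adj) - set(U)
    classes = {}                          # frozenset S -> vertices v in I with N(v) = S
    for v in I:
        classes.setdefault(frozenset(adj[v]), []).append(v)
    for members in classes.values():
        for v in members[k + 1:]:         # delete all but k + 1 representatives
            for w in adj.pop(v):
                adj[w].discard(v)
    return adj
```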
Moreover, we have the following lemma that we will use later.
**Lemma 35**.: _Let \(G\) be a graph with a vertex cover \(U\) of size \(t\). After an exhaustive application of reduction rules RR19 and RR22, the reduced graph has at most \(t+t\cdot 2^{t}\) vertices._
Proof.: There can be at most \(2^{t}\) equivalence classes, and for each equivalence class, we keep at most \(k+1\) vertices in \(I\). Due to rule RR19, we can assume \(k<t\). Thus, the size of \(I\) is at most \(t\cdot 2^{t}\). The size of \(G\) is, therefore, at most \(|U|+|I|\leq t+t\cdot 2^{t}\).
#### 4.5.1 Generalized CnR
In this section, we establish that RR22 is safe for Generalized CnR. We have the following lemma to prove this claim.
**Lemma 36**.: _Let \(G\) be a graph with a vertex cover \(U\) of size \(t\). Let \(\mathcal{C}_{S}\) (for \(S\subseteq U\)) be an equivalence class such that \(|\mathcal{C}_{S}|=\ell>k+1\). Moreover, let \(H\) be a subgraph formed by deleting an arbitrary vertex \(v\) of \(\mathcal{C}_{S}\) from \(G\). Then, \((G,\mathcal{C}_{1},\ldots,\mathcal{C}_{k},\mathcal{R})\) is a Yes-instance if and only if \((H,\mathcal{C}_{1},\ldots,\mathcal{C}_{k},\mathcal{R})\) is a Yes-instance._
Proof.: Let \(\mathcal{C}_{S}=\{v_{1},\ldots,v_{\ell}\}\). Without loss of generality, let us assume that vertices \(v_{1},\ldots,v_{\ell-1}\) belong to the graph \(H\), and \(v=v_{\ell}\). Since there are at most \(k\) cops in the game and \(\ell>k+1\), at least one vertex of \(v_{1},\ldots,v_{\ell-1}\) is not occupied by any cop. We denote this vertex by \(x\) (\(x\) is dependent on the position of the cops and may change during the course of the game). Moreover, here we modify the definition of a _safe vertex_ slightly: A vertex \(y\) is _safe_ if it is at a distance at least \(\lambda_{i}+1\) from \(\mathcal{C}_{i}\), for \(i\in[k]\). Since each vertex in \(\mathcal{C}_{S}\) has the same neighborhood, observe that either each vertex in \(\mathcal{C}_{S}\) not occupied by a cop is a safe vertex or none of the vertices in \(\mathcal{C}_{S}\) is safe. Moreover, for each vertex \(y\in V(G)\setminus\{v\}\), let \(I(y)=y\) and \(I(v)=x\). Note that for each vertex \(u\), \(I(u)\) is restricted to vertices of \(V(H)\), \(N(u)=N(I(u))\), and if \(u\) is a safe vertex, then \(I(u)\) is also a safe vertex. To ease the presentation, instead of saying that the cops/robber has a winning strategy in \((G,\mathcal{C}_{1},\ldots,\mathcal{C}_{k},\mathcal{R})\) (or \((H,\mathcal{C}_{1},\ldots,\mathcal{C}_{k},\mathcal{R})\)), we will say that the cops/robber has a winning strategy in \(G\) (or \(H\)).
First, let \(\mathcal{R}\) has a winning strategy \(\mathcal{S}\) in \(G\). To show that \(\mathcal{R}\) has a winning strategy in \(H\), we will prove a slightly stronger statement that \(\mathcal{R}\) has a winning strategy, say, \(\mathcal{S}^{\prime}\), in \(G\) even if \(\mathcal{R}\) is restricted to the vertices of \(V(H)\) in \(G\). We get \(\mathcal{S}^{\prime}\) from \(\mathcal{S}\) as follows: If \(\mathcal{R}\) has to use a vertex \(y\) in \(\mathcal{S}\) in some move during the game, it uses \(I(y)\) instead. We first show that \(\mathcal{R}\) can safely enter the graph. Let \(y\) be the vertex \(\mathcal{R}\) enters in the strategy \(\mathcal{S}\). Then, \(\mathcal{R}\) enters at \(I(y)\) in \(\mathcal{S}^{\prime}\). Since \(y\) is a safe vertex (as \(\mathcal{S}\) is a winning strategy for \(\mathcal{R}\)), \(I(y)\) is also a safe vertex. Hence \(\mathcal{R}\) can safely enter a vertex. Now, the only thing to argue is that if \(\mathcal{R}\) can move safely from a vertex \(y\) to a vertex \(z\) in \(G\), then it can safely move from vertex \(I(y)\) to \(I(z)\) in \(G\). Let \(\mathcal{R}\) moves from \(y\) to \(z\) using a path \(P_{1}=(y=y_{1},\ldots,y_{r}=z)\), where \(r\in[s_{R}]\), during some move in \(\mathcal{S}\). Notice that since \(\mathcal{S}\) is a winning strategy, each vertex \(y_{i}\) (\(i\in[r]\)) is a safe vertex, and hence, each vertex \(I(y_{i})\) is also a safe vertex. Moreover, since \(N(y_{i})=N(I(y_{i}))\), \(W=(I(y_{1}),\ldots,I(y_{r}))\) is a walk with at most \(r\) vertices between \(I(y)\) and \(I(z)\). (It might not be a path since vertex \(x\) may repeat in this walk.) Since the existence of a walk between two vertices implies the existence of a path between these vertices using vertices from a subset of the walk vertices, we have an \(I(y),I(z)\)-path of length at most \(r\) using (safe) vertices from \(\{I(y_{1}),\ldots,I(y_{r})\}\). Hence \(\mathcal{R}\) can safely move from \(I(y)\) to \(I(z)\). Thus, \(\mathcal{S}^{\prime}\) is a winning strategy for \(\mathcal{R}\) even when \(\mathcal{R}\) is restricted to vertices of \(V(H)\) in \(G\).
In the reverse direction, let \(\mathcal{R}\) have a winning strategy in \(H\). Then, we show that \(\mathcal{R}\) has a winning strategy in \(G\) as well. Here, whenever a cop \(\mathcal{C}_{i}\) moves to a vertex \(y\), \(\mathcal{R}\) assumes its image at the vertex \(I(y)\). Observe that \(I(y)\) is restricted to \(V(H)\) in \(G\). Let \(y_{1},\ldots,y_{k}\) be the vertices occupied by cops during some instance in the game. Let \(F\) be the set of vertices in \(V(H)\) that are safe during this turn. Moreover, let \(F^{\prime}\) be the set of vertices in \(V(H)\) that are safe if cops are occupying the vertices \(I(y_{1}),\ldots,I(y_{k})\). Then, we have the following claim.
**Claim 37**.: \(F^{\prime}\subseteq F\)_._
Proof of Claim.: Towards a contradiction, assume that \(y\in F^{\prime}\) but \(y\notin F\). Then, there exists some \(i\in[k]\) such that \(d(y,y_{i})\leq\lambda_{i}\) but \(d(y,I(y_{i}))>\lambda_{i}\). If \(y_{i}\neq v\), then this is not possible since for \(y_{i}\neq v\), \(I(y_{i})=y_{i}\). Hence, we can assume that \(y_{i}=v\) and \(I(y_{i})=x\). Since \(N(v)=N(x)\), for each vertex \(y\) in \(V(G)\setminus\{v\}\) (and \(y\in F^{\prime}\subseteq V(H)\)), \(d(y,x)\leq d(y,v)\), that is, \(d(y,I(y_{i}))\leq d(y,y_{i})\), a contradiction.
We note that it might not be true that \(F\subseteq F^{\prime}\), as it might so happen that \(F\) contains the vertex \(x\), but \(F^{\prime}\) does not.
Due to Claim 37, it is sufficient to show that if \(\mathcal{R}\) has a winning strategy in \(H\) considering the image of cop \(\mathcal{C}_{i}\) as a cop with the capabilities of \(\mathcal{C}_{i}\), then \(\mathcal{R}\) has a winning strategy in \(G\). To this end, \(\mathcal{R}\) can use its winning strategy from \(H\) since images of the cops are restricted to \(V(H)\). Thus, \(\mathcal{R}\) has a winning strategy in \(G\).
Finally, note that, in both directions of proofs, \(\mathcal{R}\) moves in \(H\) (respectively, in \(G\)) if and only if \(\mathcal{R}\) moves in \(G\) (respectively, in \(H\)). Hence, if \(\mathcal{R}\) is active/flexible in the original strategy, \(\mathcal{R}\) is active/flexible in the designed strategy. This completes the proof.
Observe that \(\mathsf{vcn}+1\) cops always have a winning strategy in \(G\). Therefore, we have the following theorem as a consequence of Lemma 36 and Lemma 35.
**Theorem 9**.: Generalized CnR _parameterized by \(\mathsf{vcn}\) admits a kernel with at most \(\mathsf{vcn}+\mathsf{vcn}\cdot 2^{\mathsf{vcn}}\) vertices._
Theorem 9 directly implies the existence of the desired kernel for Fully Active CnR and Cops and Fast Robber from Theorem 10. The existence of the desired kernel for Surrounding CnR from Theorem 10 follows from Lemma 35 and the following lemma, which proves the safeness of RR22 for Surrounding CnR.
**Lemma 38**.: _Let \(G\) be a graph with a vertex cover \(U\) of size \(t\). Let \(\mathcal{C}_{S}\) (for \(S\subseteq U\)) be an equivalence class such that \(|\mathcal{C}_{S}|=\ell>k+1\). For a subgraph \(H\) formed by deleting \(\ell-k-1\) arbitrary vertices of \(\mathcal{C}_{S}\) from \(G\), \(\mathsf{c}_{surround}(H)\leq k\) if and only if \(\mathsf{c}_{surround}(G)\leq k\)._
Proof.: Let \(\mathcal{C}_{S}=\{v_{1},\ldots,v_{\ell}\}\). Without loss of generality, let us assume that the vertices \(v_{1},\ldots v_{k+1}\) belong to the graph \(H\) and vertices \(v_{k+2},\ldots,v_{\ell}\) are deleted. We begin by noting that \(\mathcal{R}\) cannot be surrounded at a vertex in \(S\) in \(G\) (since each vertex in \(S\) has at least \(k+1\) neighbours). Therefore, throughout the proof, we have the implicit assumption that when \(\mathcal{R}\) is surrounded, it is not on a vertex in \(S\).
Let \(k\) cops have a winning strategy in \(G\). Then, to surround \(\mathcal{R}\), cops use this strategy with the following changes in \(H\). Whenever a cop has to move to a vertex in \(\{v_{k+2},\ldots,v_{\ell}\}\), it moves to vertex \(v_{1}\) instead. Since all vertices in \(\mathcal{C}_{S}\) have the same neighbourhood, the next move of this cop can be the same as it was in (the winning strategy of) \(G\). Note that using this strategy, the cops can surround the robber in \(G\) if \(\mathcal{R}\) is restricted to \(V(H)\) in \(G\), and also the moves of cops are restricted to \(V(H)\) in \(G\). Therefore, the cops can surround \(\mathcal{R}\) using this strategy in \(H\) as well.
Now, let \(k\) cops have a winning strategy in \(H\). We use this strategy to surround \(\mathcal{R}\) in \(G\), in the following manner. Since we have only \(k\) cops, during any time in the gameplay, there is at least one vertex in \(\{v_{1},\ldots,v_{k+1}\}\) that is not occupied by any cop. Let us call this vertex a _free vertex_ (there might be multiple free vertices). Again we use the concept of the _image of the robber_. For each vertex \(x\in V(G)\), if \(x\in V(H)\), then we define \(I(x)=x\); else, if \(x\in\{v_{k+1},\ldots v_{\ell}\}\), then we define \(I(x)=y\), where \(y\) is a free vertex at that instance. Whenever \(\mathcal{R}\) moves to a vertex \(x\in V(G)\), we say that the image of the robber moves to \(I(x)\). Moreover, we remind that, in this game, although some cop and \(\mathcal{R}\) can be at the same vertex, the robber cannot end its move at the same vertex as one of the cops. Cops use this capability to force \(\mathcal{R}\) to move from a vertex. Therefore, we also have to argue that whenever cops force \(\mathcal{R}\) to move, they force the image of the robber to move as well. To this end, observe that the image of the robber and the robber are on different vertices only if \(\mathcal{R}\) is on some vertex \(x\in\{v_{k+1},\ldots,v_{\ell}\}\) and the image of the robber is on a free vertex, say, \(y\). Notice that if, in the strategy for \(H\), \(\mathcal{R}\) was occupying \(y\) and the cop player wants to force \(\mathcal{R}\) to move out of \(y\), then it does so by moving a cop, say, \(\mathcal{C}\), from a vertex \(w\in N(y)\) to \(y\). Cop player adapts this strategy in \(G\) by moving \(\mathcal{C}\) form \(w\) to \(x\) instead of \(w\) to \(y\). This move is possible because \(N(x)=N(y)\). Thus, \(\mathcal{R}\), as well as the image of \(\mathcal{R}\), are forced to move as they would have been forced to move in the winning strategy of \(k\) cops in \(H\).
Hence, the image of \(\mathcal{R}\) is restricted to \(V(H)\) in \(G\) and has to follow the rules of the movement of the robber. Thus, the cops will finally surround the image of \(\mathcal{R}\) in \(G\). At this point, if \(\mathcal{R}\) is on a vertex \(v\in\{v_{k+2},\ldots,v_{\ell}\}\), note that \(I(\mathcal{R})\) is on a vertex \(u\in\{v_{1},\ldots,v_{k+1}\}\). Observe that here, if \(I(\mathcal{R})\) is surrounded, then there is a cop on each vertex in \(S\), and thus, \(\mathcal{R}\) is surrounded as well. If \(\mathcal{R}\) was on a vertex in \(V(H)\setminus S\) when surrounded, then \(I(\mathcal{R})\) and \(\mathcal{R}\) are at the same vertex, and thus, \(\mathcal{R}\) is surrounded as well.
This finishes the proof of Theorem 10. The following corollary is a direct consequence of Theorem 7, Theorem 10, Theorem 9, and Proposition 14.
**Corollary 39**.: Cops and Fast Robber, Fully Active CnR, Surrounding CnR, and Generalized CnR are \(\mathsf{FPT}\) parameterized by \(\mathsf{vcn}\). Specifically, each of these variants is solvable in \((\mathsf{vcn}\cdot 2^{\mathsf{vcn}}+\mathsf{vcn})^{\mathsf{vcn}+1}\cdot n^{\mathcal{O}(1)}\) time.
## 5 Polynomial Kernels for CnR
In this section, we provide a linear kernel for CnR parameterized by the neighbourhood diversity (nd) of the input graph. One of the key benefits of nd as a parameter is that it is computable in polynomial time [56]. More specifically, in polynomial time, we can compute a minimum partition of \(V(G)\) into classes \(V_{1},\ldots,V_{w}\) such that each \(V_{i}\) contains vertices of the same type. Hence, a linear kernel parameterized by nd can be very useful from an applicative perspective.
Since for any two vertices \(u,v\in V_{i}\), for \(i\in[w]\), \(N(u)\setminus\{v\}=N(v)\setminus\{u\}\), we have that each \(V_{i}\) is either an independent set (\(N(v)=N(u)\) in this case) or a clique (\(N[v]=N[u]\) in this case). Now, we use the following reduction rules.
**Reduction Rule 23** (Rr23).: _If \(k\geq w\), then answer positively._
We have the following lemma to prove that RR23 is safe.
**Lemma 40**.: _For a graph \(G\), \(\mathsf{c}(G)\leq\mathsf{nd}\)._
Proof.: Let \(S\) be a set of vertices such that \(S\) contains exactly one vertex, say \(v_{i}\), from each neighbourhood class \(V_{i}\). Then, (since we assume \(G\) to be connected) observe that \(S\) is a dominating set of \(G\). Hence, the cops have a trivial winning strategy by placing a cop on each vertex of \(S\) (and \(|S|\leq w\)). Therefore, \(\mathsf{c}(G)\leq\mathsf{nd}\).
Next, if \(k=1\), then we apply RR10 (the XP-algorithm for CnR from Proposition 13). Hence, we assume that \(k\geq 2\). Next, we have the following reduction rule.
**Reduction Rule 24** (Rr24).: _For each neighbourhood class \(V_{i}\), keep one arbitrary vertex and delete the rest._
We have the following lemma to prove that RR24 is safe.
**Lemma 41**.: _Let \(V_{i}=\{v_{1},\ldots,v_{\ell}\}\) be a neighbourhood class of \(G\) having at least two vertices (\(\ell\geq 2\)). Consider the subgraph \(H\) of \(G\) induced by \(V(G)\setminus\{v_{\ell}\}\). Then, for \(k>1\), \(G\) is \(k\)-copwin if and only if \(H\) is \(k\)-copwin._
Proof.: We have the following two cases depending on whether \(V_{i}\) is an independent set or a clique.
1. \(V_{i}\) is an independent set: Note that, in this case, \(N(v_{\ell})=N(v_{1})\). Therefore, due to Lemma 23, we have that \(G\) is \(k\)-copwin if and only if \(H\) is \(k\)-copwin.
2. \(V_{i}\) is a clique: Note that in this case, \(N[v_{\ell}]=N[v_{1}]\). The proof, in this case, (specifically the forward direction) follows from arguments presented in the proof of Lemma 23. For the reverse direction, here for \(x\neq v_{\ell}\), \(I(x)=x\) and \(I(v_{\ell})=v_{1}\). Now, note that every possible move of \(\mathcal{R}\) in \(G\) can be mapped to a valid move of the _image of the robber_ in \(H\), just like in the proof of Lemma 23. The only difference here is that when \(\mathcal{R}\) is at \(v_{\ell}\) (image of \(\mathcal{R}\) is at \(v_{1}\)), \(\mathcal{R}\) can move to \(v_{1}\) as well (along with vertices in \(N(v_{1})\)). Notice that this move can be translated to the movement of the image of \(\mathcal{R}\) in \(H\) where the image of \(\mathcal{R}\) chooses to stay on the same vertex in its move. Hence, the cops will first capture the image of \(\mathcal{R}\) in \(H\), and then capture \(\mathcal{R}\) in \(G\).
This completes the proof of this lemma.
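As an illustrative sketch of how the type partition behind RR23 and RR24 can be computed (the function names and the adjacency-dict representation are assumptions of this sketch; the greedy grouping is correct because having the same type is an equivalence relation):

```python
def same_type(adj, u, v):
    """u and v have the same type iff N(u) \\ {v} == N(v) \\ {u}."""
    return (adj[u] - {v}) == (adj[v] - {u})

def nd_classes(adj):
    """Greedily group V(G) into neighbourhood-diversity classes V_1, ..., V_w."""
    classes = []
    for v in adj:
        for cls in classes:
            if same_type(adj, v, cls[0]):
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes

def rr24_kernel(adj):
    """RR24 keeps one arbitrary representative of each class."""
    return [cls[0] for cls in nd_classes(adj)]
```

The number of classes returned is \(w=\mathsf{nd}\), so keeping one representative per class yields the kernel of Theorem 5.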
Since we keep only one vertex of each type and there are at most \(w\) types, we have the following theorem.
**Theorem 5**.: _CnR parameterized by nd admits a kernel with at most nd vertices._
We have the following corollary as a consequence of Theorem 5.
**Corollary 42**.: _CnR is_ FPT _parameterized by nd. Specifically, it is solvable in \(\mathsf{nd}^{\mathsf{nd}}\cdot n^{\mathcal{O}(1)}\) time._
Moreover, it is not difficult to see that this kernelization can be extended to Lazy CnR and Cops and Attacking Robber using an extension of Lemma 31, giving us a kernel with at most \(\mathsf{nd}\) vertices. Furthermore, using a reduction rule similar to RR22 where we keep \(k\) vertices of each type, we can have a kernel with at most \(k\cdot\mathsf{nd}\) vertices for Cops and Fast Robber and Fully Active CnR. We have the following lemma, for which we provide a proof outline.
**Lemma 43**.: _Let \(V_{i}=\{v_{1},\ldots,v_{\ell}\}\) be a neighbourhood class of \(G\) containing at least \(k+2\) vertices \((\ell\geq k+2)\). Consider the subgraph \(H\) of \(G\) induced by \(V(G)\setminus\{v_{\ell}\}\). Then, for \(k>1\), \(s\geq 1\), and \(x\in\{active,s\}\), \(\mathsf{c}_{x}(G)\leq k\) if and only if \(\mathsf{c}_{x}(H)\leq k\)._
Proof Sketch.: Similar to the proof of Lemma 41, we have the following two cases depending on whether \(V_{i}\) is an independent set or a clique.
1. \(V_{i}\) is an independent set: Proof of this case follows from the proof of Lemma 36.
2. \(V_{i}\) is a clique: Here, for each \(v_{j}\in V_{i}\), \(N[v_{j}]=N[v_{\ell}]\). For each vertex \(x\neq v_{\ell}\), let \(I(x)=x\) and \(I(v_{\ell})=v_{1}\). First, let \(\mathsf{c}_{x}(G)\leq k\). Then, we use the strategy of the cops from \(G\) to capture \(\mathcal{R}\) in \(H\) with the only change that whenever a cop, say, \(\mathcal{C}_{i}\), wants to move to a vertex \(x\) in \(G\), it moves to \(I(x)\) in \(H\) instead with the only contingency that if \(\mathcal{C}_{i}\) wants to move from \(v_{1}\) to \(v_{\ell}\), then it moves to \(v_{2}\) (so that if the cops are active, then this is indeed a valid move of \(\mathcal{C}_{i}\) in \(H\)). Observe that the cops can capture \(\mathcal{R}\) in \(G\) using this strategy even when the cops are restricted to the vertices of \(H\). Hence, the cops can capture \(\mathcal{R}\) using this strategy in \(H\). In the reverse direction, let \(\mathsf{c}_{x}(H)\leq k\). Note that if \(k\) active cops have a winning strategy against a flexible robber in \(G\), then \(k\) active cops have a winning strategy against an active robber in \(G\) as well. Hence, for the ease of arguments, we show that \(k\) active cops have a winning strategy in \(G\) even if \(\mathcal{R}\) is flexible to show that \(\mathsf{c}_{active}\leq k\). The cops assume that the _image of_\(\mathcal{R}\) is occupying the vertex \(I(x)\) when \(\mathcal{R}\) is occupying the vertex \(x\). Thus, we have an image of \(\mathcal{R}\) moving in \(H\) with the same capabilities as \(\mathcal{R}\). The cops will capture the image of \(\mathcal{R}\) using their winning strategy from \(H\). Notice that once the image of \(\mathcal{R}\) is captured, if \(\mathcal{R}\) is at a vertex \(x\neq v_{\ell}\), then \(\mathcal{R}\) is captured as well. Otherwise, \(\mathcal{R}\) is at \(v_{\ell}\) and there is some cop, say, \(\mathcal{C}_{1}\), at \(v_{1}\). In the case of Fully Active CnR, \(\mathcal{R}\) will be captured in the next move of cops (since \(v_{1}v_{\ell}\in E(G)\)). In the case of Cops and Fast Robber, if this is a cop move (that is, the image of \(\mathcal{R}\) is captured on a robber move), then \(\mathcal{C}_{1}\) will capture \(\mathcal{R}\) in the next move. Otherwise, in the previous move of the cops, \(\mathcal{C}_{1}\) moved to \(v_{1}\) while \(\mathcal{R}\) was at \(v_{\ell}\). In this case, since \(N[v_{1}]=N[v_{\ell}]\), \(\mathcal{C}_{1}\) could have moved to \(v_{\ell}\) to capture \(\mathcal{R}\) itself. Hence, \(\mathsf{c}_{x}(G)\leq k\).
This completes the proof.
Since \(\mathsf{c}(G)\leq\mathsf{nd}\) for all of these variants (as there is a dominating set of size \(\mathsf{nd}\) in \(G\)), we have the following result as a consequence of Lemma 43 (and arguments presented above).
**Theorem 44**.: Lazy CnR _and Cops and Attacking Robber parameterized by \(\mathsf{nd}\) admit a kernel with at most \(\mathsf{nd}\) vertices. Moreover, Cops and Fast Robber and Fully Active CnR parameterized by \(\mathsf{nd}\) admit a kernel with at most \(\mathsf{nd}^{2}\) vertices._
Finally, we remark that this technique of kernelization will not work directly for Surrounding CnR. For example, consider a complete graph on \(n\) vertices, for which \(\mathsf{nd}=1\) (all the vertices have the same type) and \(\mathsf{c}_{surround}=n\), and if we remove any vertex from this clique, \(\mathsf{c}_{surround}\) decreases. Moreover, as evident from our example of complete graphs, \(\mathsf{c}_{surround}\) cannot be bounded by any computable function that depends only on \(\mathsf{nd}\).
## 6 Incompressibility
### Incompressibility of CnR
In this section, we show that it is unlikely that the CnR problem parameterized by vcn admits a polynomial compression. For this purpose, we first define the following problem. In Red-Blue Dominating Set, we are given a bipartite graph \(G\) with a vertex bipartition \(V(G)=T\cup N\) and a non-negative integer \(k\). A set of vertices \(N^{\prime}\subseteq N\) is said to be an _RBDS_ if each vertex in \(T\) has a neighbour in \(N^{\prime}\). The aim of Red-Blue Dominating Set is to decide whether there exists an _RBDS_ of size at most \(k\) in \(G\). Accordingly, we have the following decision version of Red-Blue Dominating Set.
Red-Blue Dominating Set
**Input**: A bipartite graph \(G\) with vertex bipartition \(V(G)=T\cup N\), and a non-negative integer \(k\).
**Question**: Does \(G\) have an _RBDS_ of size at most \(k\)?
Dom, Lokshtanov, and Saurabh [32] proved that it is unlikely for Red-Blue Dominating Set parameterized by \(|T|+k\) to admit a polynomial compression. More precisely, they proved the following result.
**Proposition 45** ([32]).: Red-Blue Dominating Set _parameterized by \(|T|+k\) does not admit a polynomial compression, unless NP \(\subseteq\) coNP/poly._
We show that CnR parameterized by the vcn does not have a polynomial compression by developing a polynomial parameter transformation from Red-Blue Dominating Set parameterized by \(|T|+k\) to CnR parameterized by vcn.
#### 6.1.1 Bipartite Graphs with Large Degree and Girth
For our reduction, we borrow a construction by Fomin et al. [36] of bipartite graphs having high girth and high minimum degree, which they used to prove NP-hardness (and \(W[2]\)-hardness for the solution size \(k\)) of CnR. For positive integers \(p,q\), and \(r\), we can construct a bipartite graph \(H(p,q,r)\) with \(rqp^{2}\) edges and a bipartition \((X,Y)\), with \(|X|=|Y|=pq\). The set \(X\) is partitioned into sets \(U_{1},\ldots,U_{p}\), and the set \(Y\) is partitioned into sets \(W_{1},\ldots,W_{p}\), with \(|U_{i}|=|W_{i}|=q\). By \(H_{i,j}\) we denote the subgraph of \(H(p,q,r)\) induced by \(U_{i}\cup W_{j}\), and by \(\mathsf{deg}_{i,j}(z)\) we denote the degree of vertex \(z\) in \(H_{i,j}\). Fomin et al. [36] provided the following construction:
**Proposition 46** ([36]).: _Let \(q\geq 2p(r+1)\frac{(p(r+1)-1)^{6}-1}{(p(r+1)-1)^{2}-1}\). Then, we can construct \(H(p,q,r)\) in time \(\mathcal{O}(r\cdot q\cdot p^{2})\) with the following properties._
1. _The girth of_ \(H(p,q,r)\) _is at least 6._
2. _For every vertex_ \(z\in V(H_{i,j})\) _and every_ \(i,j\in[p]\)_, we have_ \(r-1\leq\mathsf{deg}_{i,j}(z)\leq r+1\)_._
#### 6.1.2 Polynomial Parameter Transformation
Suppose that we are given an instance \((G,k)\) with \(V(G)=T\cup N\) of the Red-Blue Dominating Set problem. First, we construct a graph \(G^{\prime}\) with \(V(G^{\prime})=T^{\prime}\cup N^{\prime}\) from \(G\) by introducing two new vertices, \(x\) and \(y\), such that \(T^{\prime}=T\cup\{x\}\) and \(N^{\prime}=N\cup\{y\}\), and \(E(G^{\prime})=E(G)\cup\{xy\}\). We have the following observation.
**Observation 47**.: \(G\) _has an RBDS of size at most \(k\) if and only if \(G^{\prime}\) has an RBDS of size at most \(k+1\). Moreover, any RBDS of \(G^{\prime}\) contains \(y\)._
Now, we present the main construction for our reduction. Denote the vertex set \(V(T^{\prime})\) by \(\{v_{1},v_{2},\ldots,v_{p^{\prime}},x\}\). Moreover, let \(p=p^{\prime}+1\), \(\ell=k+1\), \(r=\ell+2\), and \(q=\lceil 2p(r+1)\frac{(p(r+1)-1)^{6}-1}{(p(r+1)-1)^{2}-1}\rceil\).
We construct \(H(p,q,r)\) such that each of \(U_{i}\) and \(W_{i}\), for \(0<i\leq p^{\prime}\), contains \(q\) copies of vertex \(v_{i}\), and each of \(U_{p}\) and \(W_{p}\) contains \(q\) copies of vertex \(x\). Now, we obtain a graph \(G^{\prime\prime}\) by adding one more set of vertices \(P\) to \(H(p,q,r)\) such that \(V(P)=V(N^{\prime})\). Moreover, if there is an edge between a vertex \(u\in N^{\prime}\) and a vertex \(v_{i}\in T^{\prime}\), then we add an edge between \(u\) and every vertex of \(U_{i}\), and also between \(u\) and every vertex of \(W_{i}\). Similarly, we add an edge between \(y\) and every vertex of \(U_{p}\), and between \(y\) and every vertex of \(W_{p}\). Finally, we make the vertex \(y\) adjacent to every vertex of \(P\). See Figure 3 for reference.
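For concreteness, the assembly of \(G^{\prime\prime}\) can be sketched as follows. This is an illustration only: `build_H` is a hypothetical stand-in for the construction of \(H(p,q,r)\) from Proposition 46 (assumed to return an adjacency dict together with the blocks \(U_{1},\ldots,U_{p}\) and \(W_{1},\ldots,W_{p}\) as lists of vertices), `adj_Gp` is an adjacency dict of \(G^{\prime}\), and `T_prime` is ordered so that \(x\) is its last element.

```python
def build_G2(adj_Gp, T_prime, N_prime, y, k, build_H):
    """Sketch of the construction of G'' from G' (with vertex set T' and N')."""
    p = len(T_prime)                       # p = p' + 1, with x as the last entry
    ell = k + 1
    r = ell + 2
    a = p * (r + 1) - 1
    q = -(-(2 * p * (r + 1) * (a ** 6 - 1)) // (a ** 2 - 1))   # integer ceiling
    adj, U_blocks, W_blocks = build_H(p, q, r)                  # hypothetical helper
    for u in N_prime:                      # P is a copy of N'
        adj[u] = set()
    for u in N_prime:                      # u ~ v_i in G'  =>  u ~ all of U_i and W_i
        for i, v in enumerate(T_prime):    # (this also joins y to U_p and W_p, via xy)
            if v in adj_Gp.get(u, set()):
                for w in U_blocks[i] + W_blocks[i]:
                    adj[u].add(w)
                    adj[w].add(u)
    for u in N_prime:                      # finally, y is made adjacent to all of P
        if u != y:
            adj[y].add(u)
            adj[u].add(y)
    return adj
```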
For correctness, we have the following lemma.
**Lemma 48**.: \(G^{\prime}\) _has an RBDS of size at most \(\ell\) if and only if \(G^{\prime\prime}\) is \(\ell\)-copwin._
Proof.: First, we show that if \(G^{\prime}\) has an _RBDS_ of size \(\ell\), then \(\ell\) cops have a winning strategy in \(G^{\prime\prime}\). Let \(S\subseteq N^{\prime}\) be an RBDS in \(G^{\prime}\) of size at most \(\ell\). The cops begin by choosing the vertices corresponding to \(S\) in \(P\). Observe that the vertex \(y\) has to be present in \(S\). Since vertex \(y\) dominates each vertex in \(P\), the robber cannot safely enter a vertex in \(P\). Additionally, due to the construction of \(G^{\prime\prime}\), the vertices of \(S\) dominate each vertex in \(H\). Hence, the robber cannot safely enter a vertex in \(H\). Therefore, the robber will be captured in the first move of the cops.
Next, we show that if there is no _RBDS_ of size \(\ell\) in \(G^{\prime}\), then \(\ell\) cops do not have a winning strategy. We prove this by giving a winning strategy for the robber. First, we show that the robber can safely enter the graph. In the beginning, let there be \(\ell_{1}\leq\ell\) cops in \(P\) and \(\ell_{2}\leq\ell\) cops in \(H\). Since there is no _RBDS_ of size \(\ell\) in \(G^{\prime}\), for every placement of at most \(\ell\) cops in \(P\), there exists at least one pair of \(U_{i}\) and \(W_{i}\) such that no vertex of \(U_{i}\) and \(W_{i}\) is dominated by the cops from \(P\). Let \(U_{i}\) and \(W_{i}\) be one such pair of sets such that no vertex of \(U_{i}\) and \(W_{i}\) is dominated by the cops from \(P\). Moreover, since each vertex of \(H\) can dominate at most \(p(r+1)\) vertices in \(H\), \(\ell_{2}\) cops can dominate at most \(\ell\cdot p(r+1)\) vertices. Since \(U_{i}\) (and \(W_{i}\) also) contains \(q\) vertices, and \(q>\ell\cdot p(r+1)\), the \(\ell_{2}\) cops in \(H\) cannot dominate all vertices of \(U_{i}\), and hence the robber can safely enter a vertex of \(U_{i}\).

Figure 3: Illustration for \(H(p,q,r)\) and \(P\). If a vertex \(u\) is connected to \(v_{j}\) in \(G\), then here \(u\) is connected to every vertex of \(W_{j}\) and \(U_{j}\). Moreover, for every \(i,j\), each vertex in \(U_{i}\) has at least \(r-1\) neighbors in \(W_{j}\).
Now, whenever the robber is under attack, it does the following. Without loss of generality, let us assume that the robber is in \(U_{i}\) (the case of \(W_{i}\) is symmetric). Since there are at most \(\ell\) cops in \(P\), there is always a \(W_{j}\) such that no vertex of \(W_{j}\) is dominated by cops from \(P\). Since each vertex in \(U_{i}\) has at least \(r-1=\ell+1\) neighbours in \(W_{j}\), the robber can move to at least \(\ell+1\) vertices of \(W_{j}\). Since the girth of \(H\) is at least \(6\), no vertex from \(H\) can dominate two vertices of \(W_{j}\) that are adjacent to the robber; else, we get a cycle on four vertices. Hence, at most \(\ell\) cops from \(H\) can dominate at most \(\ell\) neighbors of the robber in \(W_{j}\), and the robber has at least \(\ell+1\) neighbors in \(W_{j}\). Hence, the robber can move to a safe vertex in \(W_{j}\). Since the graph \(H\) is symmetric, the robber can move safely from \(W_{j^{\prime}}\) to \(W_{i^{\prime}}\) also. The robber follows this strategy to avoid capture forever.
This completes the proof of our lemma.
Next, we have the following observation to show that there exists a vertex cover \(U\) of \(G^{\prime\prime}\) such that \(|U|=poly(|T|,k)\).
**Observation 49**.: \(V(H)\cup\{y\}\) _is a vertex cover of \(G^{\prime\prime}\). Therefore, the vertex cover number of \(G^{\prime\prime}\) is at most \(2\cdot p\cdot q+1=1+2p\cdot\lceil 2p(k+3)\frac{(p(k+3)-1)^{6}-1}{(p(k+3)-1)^{2}-1}\rceil\), where \(p=|T|+1\)._
This completes the proof of the argument that CnR parameterized by the vcn is unlikely to admit a polynomial compression. Thus, we have the following theorem as a consequence of Lemma 48, Observation 49 and Proposition 45.
**Theorem 4**.: \(\textsc{CnR}\) _parameterized by_ vcn _does not admit a polynomial compression, unless_ NP \(\subseteq\) _coNP/poly_.
We prove the incompressibility of the variants (Theorem 11) in the Appendix.
### Incompressibility for Variants
In this section, we prove Theorem 11. In Theorem 4, we proved that it is unlikely for CnR to admit a polynomial compression. For this purpose, we constructed a graph \(G^{\prime\prime}\) where \(k\) cops have a winning strategy if and only if the graph \(G^{\prime}\) has an \(RBDS\) of size at most \(k\). If \(G^{\prime}\) has an \(RBDS\) of size \(k\), then there is a dominating set of size \(k\) in \(G^{\prime\prime}\). Else, there is no winning strategy for \(k\) cops in \(G^{\prime\prime}\). Here, we use the same construction to show that the variants we study (except for Surrounding CnR) are unlikely to admit a polynomial compression parameterized by vcn. We establish this by proving that \(G^{\prime\prime}\) is \(k\)-copwin for these variants if and only if \(G^{\prime}\) has an \(RBDS\) of size at most \(k\).
As discussed earlier, for a graph \(G\), \(\mathsf{c}(G)\leq\mathsf{c}_{lazy}(G)\), \(\mathsf{c}(G)\leq\mathsf{c}_{attacking}(G)\), and \(\mathsf{c}(G)\leq\mathsf{c}_{s}(G)\) (for any \(s\geq 1\)). Therefore, if \(G\) does not have an \(RBDS\) of size at most \(k\), then \(\mathsf{c}(G)>k\), and hence, \(\mathsf{c}_{lazy}(G)>k\), \(\mathsf{c}_{attacking}(G)>k\), and \(\mathsf{c}_{s}(G)>k\) (for \(k>0\)). To see the reverse direction, observe that in each of these three variants, if the cops start by occupying a dominating set, then they win in the next round. Hence, this establishes that it is unlikely for Cops and Attacking Robber, Cops and Fast Robber, and Lazy CnR parameterized by vcn to admit a polynomial compression.
Similarly, it is true for Fully Active CnR also, that if the cops start by occupying a dominating set, then they win in the next round. Hence, we only have to show that if there is no \(RBDS\) of size \(k\) in \(G^{\prime}\) (and hence, no dominating set of size \(k\) in \(G^{\prime\prime}\)), then \(k\) cops do not have a winning strategy in \(G^{\prime\prime}\) for Fully Active CnR. The robber uses the following strategy. When \(\mathcal{R}\) is under attack, it follows the strategy from Cops and Robber. Now, \(\mathcal{R}\) is forced to move (because it is active) even when it is on a safe vertex. Note that \(\mathcal{R}\) always stays in \(H(p,q,r)\). Due to symmetry, let us assume it is in some vertex \(v\) in some block \(U_{i}\). In this case, \(\mathcal{R}\) can simply move to a vertex in \(W_{i}\). Observe here that since vertices in \(U_{i}\) are safe, the vertices in \(W_{i}\) are also safe.
Thus, we have the following lemma to establish that these variants are unlikely to admit a polynomial compression.
**Lemma 50**.: _Lazy CnR, Cops and Attacking Robber, Cops and Fast Robber, and Fully Active CnR parameterized by the vertex cover number do not admit a polynomial compression, unless \(\textsf{NP}\subseteq\textsf{coNP}/\textsf{poly}\)._
This result can also be extended to directed (or oriented) graphs. We have the following lemma.
**Lemma 51**.: \(\textsc{CnR}\) _on strongly connected directed and oriented graphs parameterized by vertex cover number does not admit a polynomial compression, unless \(\textsf{NP}\subseteq\textsf{coNP}/\textsf{poly}\)._
Proof.: For the case of directed graphs, we can simply replace each edge in the construction with a loop edge (directed cycle on two vertices).
To prove this result for oriented graphs, we do the following. Here, we change the underlying graph \(G^{\prime\prime}\). First, instead of having two partitions \(U\) and \(W\), we have three partitions \(U,W\), and \(X\) (with the same rules). See Figure 4 for an illustration. Second, we add edges between \(U\) and \(W\), \(W\) and \(X\), and \(X\) and \(U\) following the rules of the construction. Moreover, the edge rules for vertices in \(P\) are the same (that is, if a vertex has edges with each vertex in some \(U_{i}\), it has edges with each vertex in \(W_{i}\) and \(X_{i}\) as well). Next, we define orientations. For the vertex \(y\), we orient all the edges as outgoing. For every vertex \(u\in P\setminus\{y\}\), we mark all the edges as outgoing, except for the edge \(uy\) (which is oriented \(\overrightarrow{yu}\)). For each edge \(uv\) such that \(u\in U\) and \(v\in W\), orient the edge \(\overrightarrow{uv}\). For each edge \(xu\) such that \(x\in X\) and \(u\in U\), orient the edge \(\overrightarrow{xu}\). Finally, add an extra vertex \(z\), and add arc \(\overrightarrow{zy}\). Moreover, for each vertex \(v\in U_{p}\cup W_{p}\cup X_{p}\), add an arc \(\overrightarrow{vz}\).
It is straightforward to see that \(\overrightarrow{G^{\prime\prime}}\) is a strongly connected oriented graph. Moreover, if \(G^{\prime\prime}\) has a dominating set of size \(k\), then \(k\) cops have a winning strategy by occupying these vertices in \(\overrightarrow{G^{\prime\prime}}\). Observe that, at this point, \(\mathcal{R}\) can enter only at vertex \(z\) and cannot move as long as there is a cop, say, \(\mathcal{C}_{1}\), at \(y\) (which there is due to the construction of \(G^{\prime}\) and \(G^{\prime\prime}\)). Now, since \(\overrightarrow{G^{\prime\prime}}\) is strongly connected, some other cop, say, \(\mathcal{C}_{2}\), can move to capture \(\mathcal{R}\) in a finite number of rounds. For the reverse direction, if \(G^{\prime}\) does not have
an \(RBDS\) of size \(k\) (and hence \(G^{\prime\prime}\) does not have a dominating set of size \(k\)), then following the arguments of Lemma 48, \(\mathcal{R}\) can enter at a safe vertex in \(U\). Then, whenever \(\mathcal{R}\) is under attack, it can move to a safe vertex in \(W\). Similarly, it can move from \(W\) to \(X\) and from \(X\) to \(U\) when under attack. Moreover, note that vertex \(z\) does not attack any vertex in \(U\cup W\cup X\). Hence, \(\mathcal{R}\) has a clear evading strategy.
This completes our proof.
Lemma 50 and Lemma 51 directly imply Theorem 11.
## 7 Conclusion and Future Directions
In this paper, we conducted a comprehensive analysis of the parameterized complexity of CnR parameterized by vcn. First, we showed that the cop number of a graph is upper bounded by \(\frac{\operatorname{vcn}}{3}+1\). Second, we proved that CnR parameterized by vcn is FPT by designing an exponential kernel. We complemented this result by proving that it is unlikely for CnR parameterized by vcn to admit a polynomial compression. We then extended these results to other variants as well as to other parameters.
To achieve our kernelization results, the rules we used concerned removing (false or true) twins from the graph. These rules are easy to implement and hence can be used to reduce the complexity of the input graph, even when the input graph is far from the considered parameters. For example, for cographs, none of the considered parameters is constant/bounded, but cographs can be reduced to a single vertex with the operation of removing twins, and hence, our reduction rules give an alternate proof that the cop number of cographs is at most two [53] for several variants. Moreover, MTP is well-studied with the motivation of designing computer games. Some examples of these variants include: multiple targets and multiple pursuer search [81] with applications in controlling non-player characters in video games; MTP from the robber's perspective with faster cops [61] where the strategies were tested on Baldur's Gate; MTP modeled with edge weights and different speeds of agents [50] with the specific examples of Company of Heroes and Supreme Commander. Moreover, the PACMAN game's movement can be considered as an instance of Fully Active CnR on a partial grid. One of the key aspects of designing these games is to come up with scenarios that are solvable but look complex and challenging. Our reduction rule can help in this regard. One can begin with an easy-to-resolve instance of CnR, and then keep adding twins to this instance (recursively) to get an instance that looks sufficiently complex but has the same complexity.
Finally, we defined a new variant of CnR, named Generalized CnR, that generalizes many well-studied variants of CnR including Cops and Fast Robber, Fully Active CnR, Cops and Robber From a Distance [14], and also generalizes the games of [61, 50]. We showed that RR22 provides a kernel for Generalized CnR as well. This gives hope that RR22 can be used to get kernels for many practical variants not explicitly studied in this paper.
Still, many questions on the parameterized complexity of CnR remain open. We list some of these questions below.
**Question 52**.: _Does there exist an_ FPT _algorithm for CnR parameterized by_ vcn _with running time \(2^{\mathcal{O}(\operatorname{vcn})}\cdot n^{\mathcal{O}(1)}\)?_
**Question 53**.: _Does there exist a better bound for the cop number with respect to_ vcn _? In particular, is \(\operatorname{c}(G)=o(\operatorname{vcn})\)?_
**Question 54**.: _Does CnR parameterized by_ vcn _ admit a polynomial \(\alpha\)-approximate kernel?_
**Question 55**.: _Study CnR with respect to the following parameters: (1) feedback vertex set, (2) treewidth, (3) treedepth. In particular, is_ CnR FPT _parameterized by treewidth?_ |
2303.16467 | A complex analogue of the Goodman-Pollack-Wenger theorem | A \textit{$k$-transversal} to family of sets in $\mathbb{R}^d$ is a
$k$-dimensional affine subspace that intersects each set of the family. In 1957
Hadwiger provided a necessary and sufficient condition for a family of pairwise
disjoint, planar convex sets to have a $1$-transversal. After a series of three
papers among the authors Goodman, Pollack, and Wenger from 1988 to 1990,
Hadwiger's Theorem was extended to necessary and sufficient conditions for
$(d-1)$-transversals to finite families of convex sets in $\mathbb{R}^d$ with
no disjointness condition on the family of sets. We prove an analogue of the
Goodman-Pollack-Wenger theorem in the complex setting. | Daniel McGinnis | 2023-03-29T05:52:09Z | http://arxiv.org/abs/2303.16467v2 | # A necessary and sufficient condition for \((2d-2)\)-transversals in \(\mathbb{R}^{2d}\)
###### Abstract.
A \(k\)_-transversal_ to a family of sets in \(\mathbb{R}^{d}\) is a \(k\)-dimensional affine subspace that intersects each set of the family. In 1957 Hadwiger provided a necessary and sufficient condition for a family of pairwise disjoint, planar convex sets to have a \(1\)-transversal. After a series of three papers among the authors Goodman, Pollack, and Wenger from 1988 to 1990, Hadwiger's Theorem was extended to necessary and sufficient conditions for \((d-1)\)-transversals to finite families of convex sets in \(\mathbb{R}^{d}\) with no disjointness condition on the family of sets. However, no such conditions for a finite family of convex sets in \(\mathbb{R}^{d}\) to have a \(k\)-transversal for \(0<k<d-1\) have previously been proven or conjectured. We make progress in this direction by providing necessary and sufficient conditions for a finite family of convex sets in \(\mathbb{R}^{2d}\) to have a \((2d-2)\)-transversal.
The author was supported by NSF grant DMS-1839918 (RTG).
## 1. Introduction
The well-known Helly's theorem [10] states that if a finite family \(\mathcal{F}\) of convex sets in \(\mathbb{R}^{d}\) has the property that any choice of \(d+1\) or less sets in \(\mathcal{F}\) have a non-empty intersection, then there is a point in common to all the sets in \(\mathcal{F}\) (see [1][1] for surveys on Helly's theorem and related results). A _\(k\)-transversal_ is a \(k\)-dimensional affine space that intersects each set of \(\mathcal{F}\), so Helly's theorem provides a necessary and sufficient condition for \(\mathcal{F}\) to have a \(0\)-transversal. In 1935, Vincensini was interested in the natural extension of Helly's theorem of finding necessary and sufficient conditions for a finite family of convex sets \(\mathcal{F}\) in \(\mathbb{R}^{d}\) to have a \(k\)-transversal for \(k>0\)[21]. In particular, Vincensini asked if there exists some constant \(r=r(k,d)\) such that if every choice of \(r\) or fewer sets in \(\mathcal{F}\) has a \(k\)-transversal, then \(\mathcal{F}\) has a \(k\)-transversal. However, Santalo provided examples showing that such a constant \(r\) does not exist for any \(k>0\)[16]. Many other related problems in geometric transversal theory have also been considered. For more information we refer the reader to the surveys [1][1].
In 1957, Hadwiger made the first positive progress toward this extension of Helly's theorem considered by Vincensini by proving the following theorem.
**Theorem 1.1** (Hadwiger [1]).: _A finite family of pairwise disjoint convex sets in \(\mathbb{R}^{2}\) has a \(1\)-transversal if and only if the sets in the family can be linearly ordered such that any three sets have a \(1\)-transversal consistent with the ordering._
Hadwiger's theorem has been generalized in different ways, eventually resulting in an encompassing result for \((d-1)\)-transversals in \(\mathbb{R}^{d}\). The first significant result in this direction was made by Goodman and Pollack who showed that in \(\mathbb{R}^{d}\), the linear ordering in Hadwiger's theorem can be replaced with the notion of an order
type of points in \(\mathbb{R}^{d-1}\) given the additional condition that the family is \((d-2)\)-separable, which generalizes the disjointness condition in Hadwiger's theorem (see [11] for the precise statement of the theorem and definitions of these notions). Soon after, Wenger showed that the disjointness condition of Hadwiger's theorem can be dropped [12]. Finally, Pollack and Wenger completed the picture by proving the necessary and sufficient conditions required to have a \((d-1)\)-transversal in \(\mathbb{R}^{d}\) with no additional separability conditions on the family of sets [13]. We note that several extensions of this result, including colorful generalizations, have been studied for instance in [1][2][1][14][15].
Despite the previous work on the existence of \((d-1)\)-transversals, no necessary and sufficient conditions for the existence of \(k\)-transversals in \(\mathbb{R}^{d}\) for \(0<k<d-1\) have been proven or conjectured. In this paper, we make progress in this direction by providing necessary and sufficient conditions for \((2d-2)\)-transversals in \(\mathbb{R}^{2d}\).
## 2. Hyperplane transversals revisited
Here we will describe the result of Pollack and Wenger on \((d-1)\)-transversals in \(\mathbb{R}^{d}\) as presented in [1], then we will discuss an equivalent rephrasing of this theorem to put our main result in Section 3 into context.
Let \(\mathcal{F}\) be a finite family of convex sets in \(\mathbb{R}^{d}\) and let \(P\) be a subset of points in \(\mathbb{R}^{k}\) for some \(k\). We say that \(\mathcal{F}\)_separates consistently_ with \(P\) if there exists a map \(\phi:\mathcal{F}\to P\) such that for any two subfamilies \(\mathcal{F}_{1},\ \mathcal{F}_{2}\subset\mathcal{F}\), we have that
\[\operatorname{conv}(\mathcal{F}_{1})\cap\operatorname{conv}(\mathcal{F}_{2}) =\emptyset\implies\operatorname{conv}(\phi(\mathcal{F}_{1}))\cap \operatorname{conv}(\phi(\mathcal{F}_{2}))=\emptyset.\]
Here we mean \(\operatorname{conv}(\mathcal{F}_{i})\) to be \(\operatorname{conv}(\cup_{F\in\mathcal{F}_{i}}F)\). Another way to think about this condition is that if the sets of \(\mathcal{F}_{1}\) can be separated from the sets of \(\mathcal{F}_{2}\) by a hyperplane in \(\mathbb{R}^{d}\), then the sets of points \(\phi(\mathcal{F}_{1})\) and \(\phi(\mathcal{F}_{2})\) can be separated by a hyperplane in \(\mathbb{R}^{k}\). We also note that \(\mathcal{F}\) separates consistently with \(P\) if and only if
\[\operatorname{conv}(\mathcal{F}_{1})\cap\operatorname{conv}(\mathcal{F}_{2}) =\emptyset\implies\operatorname{conv}(\phi(\mathcal{F}_{1}))\cap \operatorname{conv}(\phi(\mathcal{F}_{2}))=\emptyset.\]
whenever \(|\mathcal{F}_{1}|+|\mathcal{F}_{2}|\leq k+2\). This is a consequence of the well-known Kirchberger's theorem [16], which states that if \(U\) and \(V\) are finite point sets in \(\mathbb{R}^{k}\) such that for every set of \(k+2\) points \(S\subset U\cup V\), we have that \(\operatorname{conv}(S\cap U)\cap\operatorname{conv}(S\cap V)=\emptyset\), then \(\operatorname{conv}(U)\cap\operatorname{conv}(V)=\emptyset\).
We now have the terminology to state the Goodman-Pollack-Wenger theorem.
**Theorem 2.1** (Goodman-Pollack-Wenger theorem [13]).: _A finite family of convex sets \(\mathcal{F}\) in \(\mathbb{R}^{d}\) has a \((d-1)\)-transversal if and only if \(\mathcal{F}\) separates consistently with a set \(P\subset\mathbb{R}^{d-1}\)._
The condition in our main result of Section 3 is quite similar to the condition in Theorem 2.1, and we will first provide a slight rephrasing of the definition for \(\mathcal{F}\) to separate consistently with \(P\) in order to make this similarity more apparent. By taking the contrapositive of the implication in the definition of separating consistently, we may equivalently say that \(\mathcal{F}\) separates consistently with \(P\) if there exists a map \(\phi:\mathcal{F}\to P\subset\mathbb{R}^{k}\) such that
\[\operatorname{conv}(\phi(\mathcal{F}_{1}))\cap\operatorname{conv}(\phi( \mathcal{F}_{2}))\neq\emptyset\implies\operatorname{conv}(\mathcal{F}_{1}) \cap\operatorname{conv}(\mathcal{F}_{2})\neq\emptyset.\]
In other words, the existence of an affine dependence
\[\sum_{F\in\mathcal{F}_{1}\cup\mathcal{F}_{2}}a_{F}=0,\,\sum_{F\in\mathcal{F}_ {1}\cup\mathcal{F}_{2}}a_{F}\phi(F)=0\]
where \(a_{F}\geq 0\) for all \(F\in\mathcal{F}_{1}\) (not all \(0\)) and \(a_{F}\leq 0\) for all \(F\in\mathcal{F}_{2}\) implies the existence of points \(p_{F}\in F\) and real numbers \(r_{F}\geq 0\) such that
\[\sum_{F\in\mathcal{F}_{1}\cup\mathcal{F}_{2}}r_{F}a_{F}=0,\sum_{F\in\mathcal{F} _{1}\cup\mathcal{F}_{2}}(r_{F}a_{F})p_{F}=0\]
is an affine dependence of the points \(p_{F}\) and the numbers \(r_{F}a_{F}\) are not all \(0\).
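To make the rephrased condition concrete, here is a small illustration of our own (it is not part of the original argument). Take \(k=1\), \(\mathcal{F}_{1}=\{A\}\), \(\mathcal{F}_{2}=\{B,C\}\) and suppose \(\phi(A)=0\), \(\phi(B)=-1\), \(\phi(C)=1\) in \(\mathbb{R}\). Then

\[2-1-1=0,\qquad 2\,\phi(A)-\phi(B)-\phi(C)=0\]

is an affine dependence with \(a_{A}=2\geq 0\) and \(a_{B}=a_{C}=-1\leq 0\), reflecting that \(\operatorname{conv}(\phi(\mathcal{F}_{1}))\cap\operatorname{conv}(\phi(\mathcal{F}_{2}))\neq\emptyset\). Separating consistently then requires points \(p_{A}\in A\), \(p_{B}\in B\), \(p_{C}\in C\) and reals \(r_{A},r_{B},r_{C}\geq 0\), not all of \(r_{A}a_{A},r_{B}a_{B},r_{C}a_{C}\) zero, with

\[2r_{A}-r_{B}-r_{C}=0,\qquad 2r_{A}p_{A}-r_{B}p_{B}-r_{C}p_{C}=0,\]

which forces \(r_{A}>0\) and exhibits \(p_{A}\) as a convex combination of \(p_{B}\) and \(p_{C}\); in other words, \(\operatorname{conv}(\mathcal{F}_{1})\cap\operatorname{conv}(\mathcal{F}_{2})\neq\emptyset\), exactly as in the original formulation.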
## 3. Main result
In order to simplify the statement of our main result, we will state our theorem as a necessary and sufficient condition for a finite family \(\mathcal{F}\) of convex sets in \(\mathbb{C}^{d}\) to have a _complex_\((d-1)\)-transversal, where a complex \((d-1)\)-transversal here is a complex \((d-1)\)-dimensional affine subspace of \(\mathbb{C}^{d}\) that intersects each set in \(\mathcal{F}\). Since \(\mathbb{C}^{d}\) can be identified with \(\mathbb{R}^{2d}\) and a complex \((d-1)\)-transversal can subsequently be identified with a \((2d-2)\)-transversal in \(\mathbb{R}^{2d}\), our result will equivalently provide a necessary and sufficient condition for a finite family of convex sets \(\mathcal{F}\) in \(\mathbb{R}^{2d}\) to have a \((2d-2)\)-transversal.
First, following our discussion from Section 2, we make the following definition in order to articulate our main theorem, Theorem 3.4.
**Definition 3.1**.: Let \(\mathcal{F}\) be a finite family of convex sets in \(\mathbb{C}^{d}\), and let \(P\subset\mathbb{C}^{k}\). We say that \(\mathcal{F}\) is _dependency-consistent_ with \(P\) if there exists a map \(\phi:\mathcal{F}\to P\) such that for every subfamily \(\mathcal{F}^{\prime}\subset\mathcal{F}\) and every affine dependence
\[\sum_{F\in\mathcal{F}^{\prime}}a_{F}=0,\;\sum_{F\in\mathcal{F}^{\prime}}a_{F} \phi(F)=0\]
for complex numbers \(a_{F}\), there exist real numbers \(r_{F}\geq 0\) and points \(p_{F}\in F\) for \(F\in\mathcal{F}^{\prime}\) such that
\[\sum_{F\in\mathcal{F}^{\prime}}r_{F}a_{F}=0,\;\sum_{F\in\mathcal{F}^{\prime}} (r_{F}a_{F})p_{F}=0\]
where not all of the values \(r_{F}a_{F}\) are \(0\).
_Remark 3.2_.: Note that we could add the additional restriction that \(|\mathcal{F}^{\prime}|\leq 2k+3\) in Definition 3.1, and we would get an equivalent definition. Indeed, by associating the points \((a_{F}\phi(F),a_{F})\) with points in \(\mathbb{R}^{2k+2}\), we have that the set of points \(\{(a_{F}\phi(F),a_{F})\}_{F\in\mathcal{F}^{\prime}}\) contains \(0\in\mathbb{R}^{2k+2}\) in its convex hull. Therefore, by Caratheodory's Theorem, there exist \(m\leq 2k+3\) sets \(F_{1},\ldots,F_{m}\in\mathcal{F}^{\prime}\) and real numbers \(s_{i}>0\) such that \(\sum_{i=1}^{m}s_{i}(a_{F_{i}}\phi(F_{i}),a_{F_{i}})=0\). In other words, there is the complex affine dependence
\[\sum_{i=1}^{m}s_{i}a_{F_{i}}=0,\;\sum_{i=1}^{m}(s_{i}a_{F_{i}})\phi(F_{i})=0\]
among the points \(\phi(F_{1}),\ldots,\phi(F_{m})\).
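As a small worked instance of Definition 3.1 (again our own illustration, not taken from the original text), let \(d=2\), so that \(P\subset\mathbb{C}\), and map three sets to \(\phi(F_{1})=0\), \(\phi(F_{2})=1\), \(\phi(F_{3})=i\). The complex coefficients \(a_{F_{1}}=i-1\), \(a_{F_{2}}=-i\), \(a_{F_{3}}=1\) form an affine dependence, since

\[(i-1)+(-i)+1=0,\qquad(i-1)\cdot 0+(-i)\cdot 1+1\cdot i=0.\]

Dependency-consistency then asks for reals \(r_{F_{j}}\geq 0\) and points \(p_{F_{j}}\in F_{j}\) with \(\sum_{j}r_{F_{j}}a_{F_{j}}=0\) and \(\sum_{j}(r_{F_{j}}a_{F_{j}})p_{F_{j}}=0\), not all terms zero. Note that, in contrast to the real setting of Section 2, the coefficients can no longer be partitioned into a nonnegative and a nonpositive group, which is precisely where the complex condition departs from consistent separation.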
In our proof of Theorem 3.4 we will make use of the Borsuk-Ulam Theorem below. We note that the Borsuk-Ulam Theorem was also employed in the proof of the Goodman-Pollack-Wenger theorem in [10], and in fact our proof of Theorem 3.4 takes significant inspiration from this proof.
**Theorem 3.3** (Borsuk-Ulam Theorem).: _If \(n\geq m\) and \(f:S^{n}\to\mathbb{R}^{m}\) is an odd map, i.e. \(f(-x)=-f(x)\) for all \(x\in S^{n}\), then \(0\in\text{Im}(f)\)._
We are now ready to state and prove the main result.
**Theorem 3.4** (Main theorem).: _A finite family of convex sets \(\mathcal{F}\) in \(\mathbb{C}^{d}\) has a complex \((d-1)\)-transversal if and only if \(\mathcal{F}\) is dependency-consistent with a set \(P\subset\mathbb{C}^{d-1}\)._
Proof.: We associate \(\mathbb{C}^{d}\) with the set of points
\[H=\{(z_{1},\ldots,z_{d+1})\in\mathbb{C}^{d+1}\mid z_{d+1}=1\}.\]
Thus, we think of the convex sets from the family \(\mathcal{F}\) as lying in \(H\) as well.
We will now define a continuous odd map \(f\) from \(\{\mathbf{z}\in\mathbb{C}^{d+1}\mid||\mathbf{z}||=1\}\), which can be identified with the \((2d+1)\)-dimensional sphere, \(S^{2d+1}\), to \(\mathbb{R}^{2d}\). Such a map will have a zero by Theorem 3.3, and we will show that for the map that we define, this zero will correspond to a complex \((d-1)\)-transversal of \(\mathcal{F}\).
Let \(\mathbf{x}\in\{\mathbf{z}\in\mathbb{C}^{d+1}\mid||\mathbf{z}||=1\}\), and let \(P_{\mathbf{x}}\) be the complex subspace spanned by \(\mathbf{x}\). Additionally, for a set \(F\in\mathcal{F}\) we denote the orthogonal projection of \(F\) onto \(P_{\mathbf{x}}\) by \(F_{\mathbf{x}}\); note that \(F_{\mathbf{x}}\) is a convex set.
For each set \(F\in\mathcal{F}\), let \(p_{\mathbf{x},F}\) be the complex number \(c\) such that \(c\mathbf{x}\) is the element of \(F_{\mathbf{x}}\) that is closest in distance to \(\mathbf{0}\), the origin in \(\mathbb{C}^{d+1}\). Note that the convexity of \(F_{\mathbf{x}}\) implies that \(c\) is unique. Furthermore, for a fixed set \(F\), the point \(p_{\mathbf{x},F}\) varies continuously with \(\mathbf{x}\). Let \(\phi\) be the map witnessing the fact that \(\mathcal{F}\) is dependency-consistent with \(P\); we define \(f(\mathbf{x})\in\mathbb{C}\times\mathbb{C}^{d-1}\), which can be identified with \(\mathbb{R}^{2d}\), as follows
\[f(\mathbf{x})=\sum_{F\in\mathcal{F}}\left(p_{\mathbf{x},F},\,\overline{p_{ \mathbf{x},F}}\phi(F)\right),\]
where \(\overline{p_{\mathbf{x},F}}\) denotes the complex conjugate of \(p_{\mathbf{x},F}\). Because \(p_{\mathbf{x},F}=-p_{-\mathbf{x},F}\) and \(p_{\mathbf{x},F}\) varies continuously with \(\mathbf{x}\), we have that \(f\) is indeed a continuous odd map and thus has a zero, say \(\mathbf{x}_{0}\), by Theorem 3.3.
It cannot be the case that \(\mathbf{x}_{0}=(0,\ldots,0,z_{d+1})\) for some \(z_{d+1}\neq 0\), since in this case, we would have that \(p_{\mathbf{x}_{0},F}=1/z_{d+1}\) for every \(F\in\mathcal{F}\), which cannot satisfy \(\sum_{F\in\mathcal{F}}p_{\mathbf{x}_{0},F}=0\). Thus, \(\mathbf{x}_{0}\neq(0,\ldots,0,z_{d+1})\), and in particular, this implies that the orthogonal complement of \(\mathbf{x}_{0}\) intersects \(H\) in a complex \((d-1)\)-dimensional affine space, say \(T\). If \(T\) intersects each set in \(\mathcal{F}\), then \(\mathcal{F}\) has a complex \((d-1)\)-transversal, and we are done. Therefore, we assume that \(T\) does not intersect each set in \(\mathcal{F}\), so we have that some of the values \(p_{\mathbf{x}_{0},F}\) must be nonzero. We show that this assumption leads to a contradiction.
Let \(\mathcal{F}^{\prime}\subset\mathcal{F}\) be the family of sets \(F\) such that \(p_{\mathbf{x}_{0},F}\neq 0\). Since \(f(\mathbf{x}_{0})=0\), we have the affine dependence
\[\sum_{F\in\mathcal{F}^{\prime}}p_{\mathbf{x}_{0},F}=0,\,\sum_{F\in\mathcal{F} ^{\prime}}\overline{p_{\mathbf{x}_{0},F}}\phi(F)=0.\]
Since \(\mathcal{F}\) is dependency-consistent with \(P\), we have that there exist points \(p_{F}\in F\) for all \(F\in\mathcal{F}^{\prime}\) and real numbers \(r_{F}\geq 0\) such that
\[\sum_{F\in\mathcal{F}^{\prime}}r_{F}p_{\mathbf{x}_{0},F}=0,\,\sum_{F\in \mathcal{F}^{\prime}}\overline{r_{F}p_{\mathbf{x}_{0},F}}p_{F}=0.\]
Now, consider the \(\mathbb{C}\)-linear map \(\mathrm{proj}_{\mathbf{x}_{0}}:\mathbb{C}^{d+1}\to P_{\mathbf{x}_{0}}\) given by orthogonal projection onto \(P_{\mathbf{x}_{0}}\). By linearity, we have that
\[\mathrm{proj}_{\mathbf{x}_{0}}\left(\sum_{F\in\mathcal{F}^{\prime}}\overline{r_ {F}p_{\mathbf{x}_{0},F}}p_{F}\right)=\sum_{F\in\mathcal{F}^{\prime}}\overline{r _{F}p_{\mathbf{x}_{0},F}}\mathrm{proj}_{\mathbf{x}_{0}}(p_{F})=0.\]
However, this immediately leads to a contradiction. Indeed, recall that \(p_{\mathbf{x}_{0},F}\mathbf{x}_{0}\) is the point of \(F_{\mathbf{x}_{0}}\) closest to \(\mathbf{0}\) and \(F_{\mathbf{x}_{0}}\ni\mathrm{proj}_{\mathbf{x}_{0}}(p_{F})\) is a convex set. This implies that the angle between \(p_{\mathbf{x}_{0},F}\mathbf{x}_{0}\) and \(\mathrm{proj}_{\mathbf{x}_{0}}(p_{F})\) is less than 90 degrees. Therefore, writing \(\mathrm{proj}_{\mathbf{x}_{0}}(p_{F})=c_{F}\mathbf{x}_{0}\), we have that the absolute difference in argument between the complex numbers \(p_{\mathbf{x}_{0},F}\) and \(c_{F}\) is less than 90 degrees, which implies that \(\mathrm{Re}(\overline{p_{\mathbf{x}_{0},F}}c_{F})>0\). This is in contradiction to the fact that \(\sum_{F\in\mathcal{F}^{\prime}}\overline{r_{F}p_{\mathbf{x}_{0},F}}\mathrm{ proj}_{\mathbf{x}_{0}}(p_{F})=0\), and hence completes the proof.
## 4. Acknowledgements
The author would like to thank Shira Zerbib for her helpful comments on a first draft of this paper.
|
2303.11308 | Thermal decomposition of the Kitaev material $α$-RuCl$_3$ and its
influence on low-temperature behavior | We explore the effect of heat treatment in argon atmosphere under various
temperatures up to $500^\circ$C on single crystals of $\alpha$-RuCl$_3$ by
study of the mass loss, microprobe energy dispersive x-ray spectroscopy, powder
x-ray diffraction, electrical resistance as well as low-temperature magnetic
susceptibility and specific heat. Clear signatures of dechlorination and
oxidation of Ru appear for annealing temperatures beyond $300^\circ$C. Analysis
of the specific heat below 2~K reveals a RuO$_2$ mass fraction of order $1\%$
for pristine $\alpha$-RuCl$_3$ which increases up to $20\%$ after thermal
annealing, fully consistent with mass-loss analysis. The small RuO$_2$
inclusions drastically reduce the global electrical resistance and may thus
significantly affect low-temperature thermal transport and Hall effect. | Franziska A. Breitner, Anton Jesche, Vladimir Tsurkan, Philipp Gegenwart | 2023-03-20T17:46:44Z | http://arxiv.org/abs/2303.11308v1 | Thermal decomposition of the Kitaev material \(\alpha\)-RuCl\({}_{3}\) and its influence on low-temperature behavior
###### Abstract
We explore the effect of heat treatment in argon atmosphere under various temperatures up to 500\({}^{\circ}\)C on single crystals of \(\alpha\)-RuCl\({}_{3}\) by study of the mass loss, microprobe energy dispersive x-ray spectroscopy, powder x-ray diffraction, electrical resistance as well as low-temperature magnetic susceptibility and specific heat. Clear signatures of dechlorination and oxidation of Ru appear for annealing temperatures beyond 300\({}^{\circ}\)C. Analysis of the specific heat below 2 K reveals a RuO\({}_{2}\) mass fraction of order 1% for pristine \(\alpha\)-RuCl\({}_{3}\) which increases up to 20% after thermal annealing, fully consistent with mass-loss analysis. The small RuO\({}_{2}\) inclusions drastically reduce the global electrical resistance and may thus significantly affect low-temperature thermal transport and Hall effect.
## I Introduction
The 4d layered spin orbit Mott insulator \(\alpha\)-RuCl\({}_{3}\)[1; 2; 3; 4] is one of the most studied "Kitaev materials" [5; 6], implying nearest-neighbor bond-directional Ising interactions on the honeycomb lattice [7]. The pure Kitaev model offers an exciting route towards a quantum spin liquid with exotic fractionalized excitations and potential application for topological quantum computation [7; 8; 9]. Although \(\alpha\)-RuCl\({}_{3}\) displays a zigzag magnetic order below \(T_{\rm N}\sim\)7-8 K [10; 11; 12], its intriguing dynamical properties [13; 4; 14] and the possibility to suppress the order by moderate in-plane magnetic fields [11; 15; 16] led to a strong interest in this material. This was further boosted in 2018 when Kasahara _et al.,_ reported a half-integer quantized plateau in the thermal Hall conductance, in accordance with chiral Majorana edge modes [17]. Subsequent studies of the thermal Hall effect by different groups however questioned a generic regime of half-quantization and indicated that the thermal Hall conductance is strongly sample dependent [18; 19; 20; 21; 22; 23; 24; 25]. Oscillatory structures of the magnetothermal conductivity [20] were related to coexistent secondary phases that feature differing critical magnetic fields due to stacking disorder [23]. \(\alpha\)-RuCl\({}_{3}\) has a monoclinic symmetry at room temperature [12; 26] and displays a first-order structural transition with large hysteresis around 150 K [27; 28]. Crystals with structural domains featuring stacking disorder show multiple antiferromagnetic transitions in the specific heat [26; 28; 12]. This holds mainly for powder specimens and low-quality crystals, whose signature in the specific heat is an anomaly near 14 K. High-quality single crystalline samples usually show only one transition at 7 K, which can be observed in the heat capacity as a sharp peak [29]. Stacking disorder can however easily arise in the van der Waals material \(\alpha\)-RuCl\({}_{3}\) by non-careful handling or small strain effects during cooling.
In addition, it has been known since 1968 that transition-metal chloride hydrates are chemically unstable and decompose upon heating above 150\({}^{\circ}\)C [30]. This raises the question whether a possible degradation of \(\alpha\)-RuCl\({}_{3}\) single crystals may influence its low-temperature physical properties. In particular, if \(\alpha\)-RuCl\({}_{3}\) undergoes a thermally activated degradation, then it could be assumed that already during growth some small fraction of the crystals become degraded. This motivates our systematic study of the effect of moderate temperature treatments on high-quality \(\alpha\)-RuCl\({}_{3}\) single crystals.
In this paper, we report thermal annealing (in Argon atmosphere) studies on \(\alpha\)-RuCl\({}_{3}\) single crystals at temperatures up to 500\({}^{\circ}\)C. Analysis of the mass loss in combination with EDX and XRD reveals clear evidence for dechlorination and the formation of RuO\({}_{2}\) clusters penetrating from the surface into the bulk. While RuO\({}_{2}\) inclusions have little influence on magnetic susceptibility as well as on the specific heat anomaly at \(T_{N}\), they dominate over the gapped magnon contribution in \(C(T)\) below 2 K. The low-\(T\) specific heat reveals approximately 1% RuO\({}_{2}\) even in untreated \(\alpha\)-RuCl\({}_{3}\) single crystals. Our study shows that the bulk electrical conductance is strongly enhanced by metallic RuO\({}_{2}\) inclusions suggesting that the latter may also affect the low-\(T\) thermal transport properties.
## II Methods
High quality crystals of \(\alpha\)-RuCl\({}_{3}\) were grown using vacuum sublimation as described in [31]. Zero-field heat capacity measurements were done to check the quality of all crystals before heat treatments. Thereby, a single transition at 7 K and no signature at 14 K were detected.
Heat treatments were performed in two different ways, see supplemental material (SM) for a table with all studied samples [32]. Two of the crystals (sample 1 and sample 5) were sealed in a quartz ampoule under 150 mbar Ar atmosphere, after evacuating the tube several times down
to \(2\cdot 10^{-2}\) mbar and flushing with Ar gas, then heated in a muffle furnace up to 400 or 450\({}^{\circ}\)C for 12 h. To avoid contamination of the sample the quartz tube was previously cleaned using acetone and then baked out at 70\({}^{\circ}\)C for one hour before inserting the crystal. The other samples were heat treated using an Al\({}_{2}\)O\({}_{3}\) crucible placed inside a DTA chamber, which was then evacuated down to 3 mbar and flooded with Argon gas before heating the sample in Ar flow to 500\({}^{\circ}\)C for 1 h.
Heat capacity (HC) measurements in the range of 0.35 - 20 K were performed in a Quantum Design PPMS with Helium-3 Option. The samples were mounted onto the platform using Apiezon N grease. Magnetic susceptibility in the range of 2 - 300 K was measured utilizing the Quantum Design MPMS 3. The sample was mounted onto a quartz rod using GE varnish and later removed using isopropanol. Electrical transport measurements in the range of 125 - 300 K were performed in the PPMS utilizing the ETO option. Contacts for four-wire measurements were made using two-component silver epoxy.
Powder X-ray diffraction measurements were performed using a Rigaku Miniflex600 powder diffractometer (Cu-K\({}_{\alpha}\) radiation). A scanning electron microscope (SEM, Merlin Gemini 2, Zeiss) equipped with an energy dispersive x-ray (EDX) analysis probe (X-Max 80N SDD detector, Oxford Instruments) was utilized for structural and compositional investigation. Silver epoxy was used to mount the crystals onto the sample holder.
After each measurement the samples were carefully cleaned using n-butyl acetate to avoid carrying any epoxy or grease residue into the next measurement while at the same time avoiding damage to the crystals.
## III Results and Discussion
Before performing any kind of heat treatments we checked whether the specific heat is affected by multiple removals from HC and EDX pucks as it is known that less careful handling can potentially induce stacking disorder that profoundly changes the \(T_{\mathrm{N}}\) and the signature of magnetic order [12]. As shown in SM [32], no change in the HC was found, confirming that any changes in our study are induced by heat treatments.
For sample 1, heat treatments were performed at increasing temperatures, until a change in the HC could be detected. The first change was observed after heat treatment at 400\({}^{\circ}\)C. Already an increase of the HC towards low temperatures for temperatures below 1.5 K as well as a shrinking of the peak at 7 K can be detected, as can be seen in Fig. 2(a). The procedure was repeated with a maximum heat treatment temperature of 450\({}^{\circ}\)C. Again, the HC showed an even more pronounced increase towards low temperatures. Here, the exponential impact of the maximum temperature on the activation process exceeds that of longer dwell time, thus no experiments with longer dwell times were conducted.
After each step the sample mass was determined. Each heat treatment led to a notable decrease, the exact values of which are listed in Tab. 1.
The heat treatment for sample 2 was performed at 500\({}^{\circ}\)C in Argon flow. Comparing the HC of the heat treated and the untreated crystal, as is shown in Fig. 2(b), the same increase towards low temperatures and shrinking of the 7 K peak can be observed.
Figure 1: Crystals used for heat capacity study. Sample 1 (a) is shown before heat treatments, sample 2 (b) after being heat treated at 500\({}^{\circ}\)C in Argon flow.
Figure 2: Heat capacity of initial samples 1 (a) and 2 (b) compared with those after several heat treatments. For \(T<1.5\) K the heat capacity increases towards low temperatures with increasing annealing temperature, while the peak at 7 K is reduced by magnitude. For comparison, the fraction of heat capacity attributed to RuO\({}_{2}\) is shown as dotted lines.
Compared to its inital mass of 5.44 mg the mass of the heat treated sample 2 was reduced to 4.58 mg indicating a relative mass loss of 16%. As the sample surface appeared rather porous after heat treatment, very careful handling was required to avoid parts breaking off during transport or handling.
EDX was used to determine the sample stoichiometry and map the elemental distribution on the surface. After the initial consistency check, EDX analysis was only performed once the HC changed in order to minimize stress on the crystal. For the untreated crystals, the obtained molar ratio of Ru:Cl amounted to 25(3):75(3), both Ru and Cl were evenly distributed across the observed surface, as can be seen in SM [32]. No significant change in elemental distribution was observed for temperatures up to 300\({}^{\circ}\)C. However, it should be noted, that only a fraction of the crystal surface was evaluated in greater detail due to spatial limitations and time constraints. Upon heating sample 1 to 400\({}^{\circ}\)C, the formation of clusters, some as large as 70 \(\upmu\)m in diameter, was observed (see Fig. 3(a)-(c)). Stoichiometry analysis of such clusters shows a decrease of Cl concentration down to 25 at% in some areas and a corresponding increase in Ru concentration. Averaging over the investigated surface, the molar ratio of Ru:Cl is determined to be about 30:70 for sample 1 treated at 400\({}^{\circ}\)C and 32:68 for 450\({}^{\circ}\)C. We therefore conclude that further degradation of the sample has occurred due to the second heat treatment.
After heat treatment at 500\({}^{\circ}\)C sample 2 did not show any visible formation of clusters, however the Cl concentration was significantly diminished across the whole crystal surface. The average molar ratio Ru:Cl was determined to be 62:38.
As EDX analysis is limited to the surface layers due to a penetration depth of the electron beam below \(\sim 1\mu\)m, the question arises as to how deep into the crystal this effect can still be observed. For this purpose, another crystal (sample 3) was prepared as previously described in order to investigate the penetration depth of the degradation process without rendering sample 2 unusable for further measurements. From sample 3 few layers were peeled off, then the crystal was cut into half and EDX was performed on the freshly obtained surfaces. The distribution of Ru and Cl on the surface can be seen in Fig. 3(d), revealing large areas on the crystal surface with predominantly Ru being detected. While after a few layers, the molar ratio of Ru:Cl is still significantly enhanced to 44:56, roughly halfway into the 0.5 mm thick crystal only 27 at% Ru are detected. In order to determine whether the accumulated Ru on the surface is metallic ruthenium or some other product, enough material was carefully removed from the crystal surface and ground into fine powder using an agate mortar and pestle in order to perform X-ray powder diffraction. The obtained diffraction pattern shown in Fig. 4 matches that of the metallic transition-metal oxide RuO\({}_{2}\)[33] while no pure Ru could be detected.
Using this information we examine the low-\(T\) HC of initial and heat-treated \(\alpha\)-RuCl\({}_{3}\) for samples 1 and 2. As shown in Fig. 5, the measured data below 1.9 K are described by the sum of two contributions arising from phonons in \(\alpha\)-RuCl\({}_{3}\) and phonons and electrons in RuO\({}_{2}\). Note, that the magnon contribution \(\sim\exp(-\Delta/k_{B}T)\) with \(\Delta=1.7\) meV [15] is negligible compared to phonons in this temperature range. For the fit, we used the measured total sample mass and described the total HC (in units of J/K) by the function \(C/T=(m_{\text{sample}}-m_{\text{RuO}_{2}})\cdot(\beta_{\text{RuCl}_{3}}T^{2} )+m_{\text{RuO}_{2}}\cdot C_{\text{m,RuO}_{2}}\) with \(m_{\text{RuO}_{2},\text{HC}}\) as free fit parameter. The (molar) specific heat of RuO\({}_{2}\) was measured on a pellet and found in good agreement to literature [34]. For details, we refer to SM [32]. The converted (mass) specific heat \(C_{\text{m,RuO}_{2}}\) was then used in the above fit of the HC. The fit also includes the phonon contribution of \(\beta_{\text{RuCl}_{3}}\), which was determined by fitting the untreated crystals (yielding the parameters given in the caption of Fig. 5) and then
Figure 3: Elemental maps of Ru and Cl for sample 1 after heat treatment at 400\({}^{\circ}\)C (a-c) and sample 3 which was treated analogous to sample 2 (d). For sample 1 in some areas a significant increase in the Ru concentration along with a corresponding decrease in the Cl concentration can be observed (e.g. green areas in (b)). Sample 2 displays an overall diminished Cl concentration on the surface, with large parts of the surface showing a majority of Ru.
Figure 4: XRD pattern of ground surface material. The peaks match those for RuO\({}_{2}\).
fixed for all further fits. The fitted values for \(m_{\rm RuO_{2},HC}\) listed in Tab. 1 are in good agreement with those obtained from the analysis of the weight loss according to \(m_{\rm RuO_{2},scale}=\Delta m(1-\frac{M_{\rm mol,RuCl_{3}}}{M_{\rm mol,RuO_{2}}})^{-1}\), where \(\Delta m<0\) denotes the measured mass difference between heat-treated and pristine samples, arising by the loss of chlorine and gain of oxygen according to 2RuCl\({}_{3}\) + 2O\({}_{2}\) \(\rightarrow\) 2RuO\({}_{2}\) + 3Cl\({}_{2}\). Applying the same fitting procedure to the HC of the pristine samples yields RuO\({}_{2}\) masses corresponding to 1-2% of total sample mass. This suggests that even in unannealed crystals of RuCl\({}_{3}\) a tiny RuO\({}_{2}\) metal fraction cannot be excluded. Furthermore, the Sommerfeld coefficient of the respective RuO\({}_{2}\) contribution can be accessed directly by looking at the intersection of the fit function with the \(C/T\) axis, cf. the insets of Fig. 5.
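For concreteness, the conversion from mass loss to RuO\({}_{2}\) content can be spelled out in a few lines of code. The following sketch is ours (it is not part of the original analysis); it assumes standard molar masses and the reaction 2RuCl\({}_{3}\) + 2O\({}_{2}\) \(\rightarrow\) 2RuO\({}_{2}\) + 3Cl\({}_{2}\) quoted above, and it reproduces the 0.68 mg and 0.89 mg balance entries of Tab. 1 for sample 1 (400\({}^{\circ}\)C and 450\({}^{\circ}\)C anneals).

```python
# Sketch (not from the paper): RuO2 mass implied by the measured mass loss,
# assuming 2 RuCl3 + 2 O2 -> 2 RuO2 + 3 Cl2 and standard molar masses.
M_RU, M_CL, M_O = 101.07, 35.45, 16.00       # g/mol
M_RUCL3 = M_RU + 3 * M_CL                    # ~207.4 g/mol
M_RUO2 = M_RU + 2 * M_O                      # ~133.1 g/mol

def ruo2_from_mass_loss(m_initial_mg, m_final_mg):
    """RuO2 mass (mg) consistent with losing 3 Cl and gaining 2 O per formula unit."""
    dm = m_final_mg - m_initial_mg           # negative for a net mass loss
    return dm / (1.0 - M_RUCL3 / M_RUO2)     # = |dm| * M_RuO2 / (M_RuCl3 - M_RuO2)

print(round(ruo2_from_mass_loss(6.34, 5.96), 2))  # sample 1 after 400 C: ~0.68 mg
print(round(ruo2_from_mass_loss(6.34, 5.84), 2))  # sample 1 after 450 C: ~0.89 mg
```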
Fitting the heat capacity with Ru instead of RuO\({}_{2}\) yields fits of lower quality, with Ru masses significantly higher than what would be possible given the measured mass loss. Another possibility would be to fit the data with a combination of Ru and RuO\({}_{2}\). However, fitting with the masses as free parameters results in the same values as obtained for the fit with just RuO\({}_{2}\), with the mass of Ru chosen as zero. We therefore conclude that most if not all of the degraded \(\alpha\)-RuCl\({}_{3}\) turns into RuO\({}_{2}\) upon heating.
Another crystal (sample 4) was used to investigate the change in magnetic behavior due to heat treatment. The sample was tempered at 500\({}^{\circ}\)C in argon flow for 1 h analogous to sample 2. The susceptibility measurement of the heat treated crystal showed significantly lower absolute values compared to the initial measurement. Scaling with the \(\alpha\)-RuCl\({}_{3}\) and RuO\({}_{2}\) concentration determined from mass loss resulted in a plot showing good agreement with the first measurement.
\begin{table}
\begin{tabular}{l c c c c} & \(m_{\rm sample}\) & \(m_{\rm RuO_{2},balance}\) & \(m_{\rm RuO_{2},HC}\) & \(\frac{m_{\rm RuO_{2},HC}}{m_{\rm sample}}\) \\ \hline \hline
**sample 1** & 6.34 mg & - & 0.06 mg & 0.9\% \\ \hline
400\({}^{\circ}\)C & 5.96 mg & 0.68 mg & 0.61 mg & 10.2\% \\ \hline
450\({}^{\circ}\)C & 5.84 mg & 0.89 mg & 0.93 mg & 15.9\% \\ \hline
**sample 2** & 5.44 mg & - & 0.09 mg & 1.6\% \\ \hline
500\({}^{\circ}\)C & 4.58 mg & 0.89 mg & 0.85 mg & 18.6\% \\ \end{tabular}
\end{table}
Table 1: Comparison of the masses of RuO\({}_{2}\) in the initial and heat treated samples determined via fit to the low-\(T\) heat capacity versus values calculated from mass loss determined by balance.
Figure 6: Temperature dependence of the magnetic susceptibility measured with a magnetic field of \(H=1\) T applied along the \(ab\)-plane before and after heat treatment at 500\({}^{\circ}\)C. The values given in brackets refer to the sample masses, wherein the heat treated sample contains 6 mg RuO\({}_{2}\) according to calculation from mass loss. Scaling the heat treated measurement (green) with a corrected mass of \(\alpha\)-RuCl\({}_{3}\) due to degradation yields a plot (red) corresponding to that of the initial sample (blue).
Electrical transport measurements were performed on a pristine and a 400\({}^{\circ}\)C heat treated crystal (sample 5). While the untreated sample clearly shows insulating behaviour, the resistivity of the heat treated sample is reduced by several orders of magnitude. From the previous analysis, we know that after heat treatment at 400\({}^{\circ}\)C only \(\sim\)10% of the sample consists of RuO\({}_{2}\), resulting in a deviation visible only at very low temperatures in the heat capacity, yet the influence on the electronic transport properties is significant over the whole temperature range. For comparison we also plotted the resistivity of pure RuO\({}_{2}\)[35], revealing that the values of the heat treated sample are already closer to those of RuO\({}_{2}\) than to those of \(\alpha\)-RuCl\({}_{3}\). Since the milli-Kelvin thermal conductance of metallic RuO\({}_{2}\) is far higher than that of insulating \(\alpha\)-RuCl\({}_{3}\), we expect a significant influence of RuO\({}_{2}\) inclusions on the low-\(T\) thermal transport and Hall effect.
Remarkably we find a complete oxidation of dechlorinated ruthenium in our experiments, despite sealing the samples in an ampoule which was evacuated and flushed with Argon gas several times. In air \(\alpha\)-RuCl\({}_{3}\) is extremely sensitive to decomposition and oxidation already under very moderate heating [30]. While it is well established that \(\alpha\)-RuCl\({}_{3}\) needs to be handled mechanically with maximal care to avoid the formation of stacking faults, our experiments indicate that in addition special care is needed to avoid degradation and oxidation. This concerns for instance long-term storage in air or baking of glued metal wire contacts on the crystal surface, required for thermal transport measurements. Protected gas atmosphere is recommended, along with careful check for partial dechlorination and oxidation.
It should further be noted that XRD analysis of a different batch revealed that the as-purchased powder was not pure \(\alpha\)-RuCl\({}_{3}\), but rather contained some RuO\({}_{2}\) and other impurities. This offers a possible explanation for the small (of order 1%) RuO\({}_{2}\) fraction in the initial crystals, yet not for the increase in RuO\({}_{2}\) in annealed crystals. While the exact origin of oxygen for the degradation reaction in our study remains unclear [32], the above-mentioned observations along with the heat capacity data lead us to conclude that even for unannealed crystals the presence of a small percentage of metallic RuO\({}_{2}\) cannot be excluded.
## IV Conclusion
In conclusion, we performed heat treatments on \(\alpha\)-RuCl\({}_{3}\) single crystals in closed Argon atmosphere up to 450\({}^{\circ}\)C as well as Argon flow up to 500\({}^{\circ}\)C in order to investigate the impact of annealing on the low temperature physical properties. Both samples show enhanced heat capacity towards low temperatures for T\(<\)1.5 K. SEM and EDX revealed the formation of Ru rich clusters on the surface of sample 1 and an overall decreased percentage of Cl on the surface of sample 2, after the samples were heated to at least 400\({}^{\circ}\)C. Dechlorination and oxidation takes place beneath the sample surface to some degree, however seems to be less pronounced towards the center of the investigated crystal. Powder XRD analysis revealed the surface material to be RuO\({}_{2}\). The deviation of both heat capacity and susceptibility measured after heat treatment from the initial measurement can be explained by a decreased amount of \(\alpha\)-RuCl\({}_{3}\) along with the formation of RuO\({}_{2}\). The RuO\({}_{2}\) content determined via fit in both cases agrees well with the value calculated from the measured mass loss and amounts to 10-20 mass%. Such relatively small mass fraction of metallic RuO\({}_{2}\) already reduces the electrical resistance of degraded \(\alpha\)-RuCl\({}_{3}\) by several orders of magnitude. Importantly, the low-temperature specific heat analysis of pristine \(\alpha\)-RuCl\({}_{3}\) crystals (before thermal treatment) also yields the presence of 0.9-1.6 mass% RuO\({}_{2}\). It would be important to clarify whether such a low fraction of metallic inclusions as found in pristine crystals, though effectively invisible in most physical properties, may have an impact on the low-temperature thermal transport and Hall effect in \(\alpha\)-RuCl\({}_{3}\) crystals, as found in our electrical transport measurements.
## Acknowledgements
We are grateful to Alexander Herrnberger and Klaus Wiedenmann for technical support and acknowledge fruitful discussions with Alexander A. Tsirlin, A. Loidl, Y.-J. Kim, S.E. Nagler and R. Valenti. This work was supported by the German Science Foundation through TRR80 (Project No. 107745057). Partial support of ANCD via project 20.80009.5007.19 is acknowledged.
Figure 7: Measurement of the electrical transport in the temperature range of 125-300 K for an untreated (red) and heat treated at 400\({}^{\circ}\) C (blue) \(\alpha\)-RuCl\({}_{3}\) crystal. The green line shows literature data [35] of pure RuO\({}_{2}\). |
2301.07313 | Efficient Black-box Checking of Snapshot Isolation in Databases | Snapshot isolation (SI) is a prevalent weak isolation level that avoids the
performance penalty imposed by serializability and simultaneously prevents
various undesired data anomalies. Nevertheless, SI anomalies have recently been
found in production cloud databases that claim to provide the SI guarantee.
Given the complex and often unavailable internals of such databases, a
black-box SI checker is highly desirable.
In this paper we present PolySI, a novel black-box checker that efficiently
checks SI and provides understandable counterexamples upon detecting
violations. PolySI builds on a novel characterization of SI using generalized
polygraphs (GPs), for which we establish its soundness and completeness. PolySI
employs an SMT solver and also accelerates SMT solving by utilizing the compact
constraint encoding of GPs and domain-specific optimizations for pruning
constraints. As demonstrated by our extensive assessment, PolySI successfully
reproduces all of 2477 known SI anomalies, detects novel SI violations in three
production cloud databases, identifies their causes, outperforms the
state-of-the-art black-box checkers under a wide range of workloads, and can
scale up to large-sized workloads. | Kaile Huang, Si Liu, Zhenge Chen, Hengfeng Wei, David Basin, Haixiang Li, Anqun Pan | 2023-01-18T05:27:32Z | http://arxiv.org/abs/2301.07313v2 | # Efficient Black-box Checking of Snapshot Isolation in Databases
###### Abstract.
Snapshot isolation (SI) is a prevalent weak isolation level that avoids the performance penalty imposed by serializability and simultaneously prevents various undesired data anomalies. Nevertheless, SI anomalies have recently been found in production cloud databases that claim to provide the SI guarantee. Given the complex and often unavailable internals of such databases, a black-box SI checker is highly desirable.
In this paper we present PolySI, a novel black-box checker that efficiently checks SI and provides understandable counterexamples upon detecting violations. PolySI builds on a novel characterization of SI using generalized polygraphs (GPs), for which we establish its soundness and completeness. PolySI employs an SMT solver and also accelerates SMT solving by utilizing the compact constraint encoding of GPs and domain-specific optimizations for pruning constraints. As demonstrated by our extensive assessment, PolySI successfully reproduces all of 2477 known SI anomalies, detects novel SI violations in three production cloud databases, identifies their causes, outperforms the state-of-the-art black-box checkers under a wide range of workloads, and can scale up to large-sized workloads.
The source code, data, and/or other artifacts have been made available at [https://github.com/anonymous-hipp/PolySI](https://github.com/anonymous-hipp/PolySI).
workloads but also with standard key-value/SQL APIs. We call this extended principle SIEGE+.
None of the existing SI checkers, to the best of our knowledge, satisfies SIEGE+ (see Section 7 for the detailed comparison). For example, dbcop (Dupuis et al., 2017) is incomplete, incurs exponentially increasing overhead under higher concurrency (Section 5.4), and returns no counterexamples upon finding a violation; Elle (2017) relies on specific database APIs such as lists and the (internal) timestamps of transactions to infer isolation anomalies, thus not conforming to our black-box setting.
**The PolySI Checker.** We present PolySI, a novel, black-box SI checker designed to achieve all the SIEGE+ criteria. PolySI builds on three key ideas in response to three major challenges.
First, despite previous attempts to characterize SI (Blei et al., 2017; Dwork et al., 2018; Dwork et al., 2018), its semantics is usually explained in terms of low-level implementation choices invisible to the database outsiders. Consequently, one must _guess_ the dependencies (aka uncertain/unknown dependencies) between client-observable data, for example, which of the two writes was first recorded in the database.
We introduce a novel dependency graph, called _generalized polygraph_ (GP), based on which we present a new _sound_ and _complete_ characterization of SI. There are two main advantages of a GP: (i) it naturally models the guesses by capturing _all_ possible dependencies between transactions in a single compacted data structure; and (ii) it enables the acceleration of SMT solving by compacting constraints (see below) as demonstrated by our experiments.
Second, there have been recent advances in SAT and SMT solving for checking _graph properties_ such as the MonoSAT solver (Blei et al., 2017) and its successful application to the black-box checking of SER (Stein
**Example 2** (Lost Update).: Dan and Emma share a banking account with 10 dollars. Both simultaneously deposit 50 dollars. The resulting balance is 60, instead of 110, as one of the deposits is lost.
In this paper we focus on the prevalent _strong session_ variant of SI [11, 21], which additionally requires a transaction to observe all the effects of the preceding transactions in the same _session_[45]. Many production databases, including DGraph [22], Galera [15], and CockroachDB [16], provide this isolation level in practice.
### Snapshot Isolation: Formal Definition
We recall the formalization of SI over dependency graphs, which serves as the theoretical foundation of PolySI. The following account is standard, see for example [12], and Table 1 summarizes the notation used throughout the paper.
We consider a distributed key-value store managing a set of keys \(\text{Key}=\{x,y,z,\dots\}\), which are associated with values from a set Val.3 We denote by \(\text{Op}\) the set of possible read or write operations on keys: \(\text{Op}=\{\text{R}_{i}(x,v),\text{W}_{i}(x,v)\mid i\in\text{OpId},x\in\text{Key},v\in\text{Val}\}\), where \(\text{OpId}\) is the set of operation identifiers. We omit operation identifiers when they are unimportant.
Footnote 3: We discuss how to support SQL queries in Section 6. However, we do not support predicates in this work.
#### 2.2.1. Relations, Orderings, Graphs, and Logics
A binary relation \(R\) over a given set \(A\) is a subset of \(A\times A\), i.e., \(R\subseteq A\times A\). For \(a,b\in A\), we use \((a,b)\in R\) and \(a\stackrel{{ R}}{{\rightarrow}}b\) interchangeably. We use \(R\)? and \(R^{+}\) to denote the reflexive closure and the transitive closure of \(R\), respectively. A relation \(R\subseteq A\times A\) is _acyclic_ if \(R^{+}\cap I_{A}=\emptyset\), where \(I_{A}\triangleq\{(a,a)\mid a\in A\}\) is the identity relation on \(A\). Given two binary relations \(R\) and \(S\) over set \(A\), we define their composition as \(R\) ; \(S=\{(a,c)\mid\exists b\in A:a\stackrel{{ R}}{{\rightarrow}}b \stackrel{{ S}}{{\rightarrow}}c\}\). A strict partial order is an irreflexive and transitive relation. A strict total order is a relation that is a strict partial order and total.
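These operators are straightforward to realize over finite relations. The sketch below is our own illustration (not part of PolySI), in plain Python with relations represented as sets of pairs; the later SI condition is stated in terms of exactly these operations.

```python
def compose(R, S):
    """R ; S = {(a, c) | there is a b with (a, b) in R and (b, c) in S}."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

def transitive_closure(R):
    """R^+ : iterate R := R ∪ (R ; R) until a fixed point is reached."""
    closure = set(R)
    while True:
        bigger = closure | compose(closure, closure)
        if bigger == closure:
            return closure
        closure = bigger

def reflexive_closure(R, A):
    """R? over a carrier set A."""
    return set(R) | {(a, a) for a in A}

def is_acyclic(R):
    """A relation is acyclic iff its transitive closure has no pair (a, a)."""
    return all(a != b for (a, b) in transitive_closure(R))

R = {("T1", "T2"), ("T2", "T3")}
assert transitive_closure(R) == {("T1", "T2"), ("T2", "T3"), ("T1", "T3")}
assert is_acyclic(R) and not is_acyclic(R | {("T3", "T1")})
```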
For a directed labeled graph \(G=(V,E)\), we use \(V_{G}\) and \(E_{G}\) to denote the set of vertices and edges in \(G\), respectively. For a set \(F\) of edges, \(G|_{F}\) denotes the directed labeled graph that has the set \(V\) of vertices and the set \(F\) of edges.
In logical formulas, we write \(\_\) for irrelevant parts that are implicitly existentially quantified. We use \(\exists!\) to mean "unique existence."
#### 2.2.2. Transactions and Histories
**Definition 3**.: A _transaction_ is a pair \((O,\text{po})\), where \(O\subseteq\text{Op}\) is a finite, non-empty set of operations and \(\text{po}\subseteq O\times O\) is a strict total order called the _program order_.
For a transaction \(T\), we let \(T\vdash\text{W}(x,v)\) if \(T\) writes to \(x\) and the last value written is \(v\), and \(T\vdash\text{R}(x,v)\) if \(T\) reads from \(x\) before writing to it and \(v\) is the value returned by the first such read. We also use \(\text{WriteTx}_{x}=\{T\mid T\vdash\text{W}(x,\_)\}\).
Clients interact with the store by issuing transactions during _sessions_. We use a _history_ to record the client-visible results of such interactions. For conciseness, we consider only committed transactions in the formalism [12]; see further discussions in Section 4.5.
**Definition 4**.: A _history_ is a pair \(\mathcal{H}=(\mathcal{T},\text{SO})\), where \(\mathcal{T}\) is a set of transactions with disjoint sets of operations and the _session order_ \(\text{SO}\subseteq\mathcal{T}\times\mathcal{T}\) is the union of strict total orders on disjoint sets of \(\mathcal{T}\), which correspond to transactions in different sessions.
#### 2.2.3. Dependency Graph-based Characterization of SI
A dependency graph extends a history with three relations (or typed edges, in terms of graphs): WR, WW, and RW, representing three types of dependencies between transactions in this history [12]. The WR relation associates a transaction that reads some value with the one that writes this value. The WW relation stipulates a strict total order (aka the version order [2]) among the transactions writing the same key. The RW relation is derived from WR and WW, relating a transaction that reads some value to the one that overwrites this value, in terms of the version orders specified by the WW relation.
**Definition 5**.: A _dependency graph_ is a tuple \(\mathcal{G}=(\mathcal{T},\text{SO},\text{WR},\text{WW},\text{RW})\), where \((\mathcal{T},\text{SO})\) is a history and
1. \(\text{WR}:\text{Key}\to 2^{\mathcal{T}\times\mathcal{T}}\) is such that \(\forall x\in\text{Key}\). \(\forall S\in\mathcal{T}\). \(S\vdash\text{R}(x,\_)\Longrightarrow\exists!T\in\mathcal{T}\). \(T\xrightarrow{\text{WR}(x)}S\), and \(\forall x\in\text{Key}\). \(\forall T,S\in\mathcal{T}\). \(T\xrightarrow{\text{WR}(x)}S\Longrightarrow T\neq S\wedge\exists v\in\text{Val}\). \(T\vdash\text{W}(x,v)\wedge S\vdash\text{R}(x,v)\);
2. \(\text{WW}:\text{Key}\to 2^{\mathcal{T}\times\mathcal{T}}\) is such that for every \(x\in\text{Key}\), \(\text{WW}(x)\) is a strict total order on the set \(\text{WriteTx}_{x}\);
3. \(\text{RW}:\text{Key}\to 2^{\mathcal{T}\times\mathcal{T}}\) is such that \(\forall T,S\in\mathcal{T}\). \(\forall x\in\text{Key}\). \(T\xrightarrow{\text{RW}(x)}S\)\(\Longleftrightarrow\)\(T\neq S\wedge\exists T^{\prime}\in\mathcal{T}\). \(T^{\prime}\xrightarrow{\text{WR}(x)}T\wedge T^{\prime}\xrightarrow{\text{WW}(x)}S\).
We denote a component of \(\mathcal{G}\), such as WW, by \(\text{WW}_{\mathcal{G}}\). We write \(T\xrightarrow{\text{WR}/\text{WW}/\text{RW}}S\) when the key \(x\) in \(T\xrightarrow{\text{WR}(x)/\text{WW}(x)/\text{RW}(x)}S\) is irrelevant or the context is clear.
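Note that RW is completely determined once WR and WW are fixed. The sketch below (our own illustration; the transaction names and the single key are hypothetical) derives RW(\(x\)) from WR(\(x\)) and WW(\(x\)) following the RW clause of Definition 5.

```python
def derive_rw(wr, ww):
    """T --RW(x)--> S iff T != S and some T' has T' --WR(x)--> T and T' --WW(x)--> S."""
    rw = {}
    for x, ww_x in ww.items():
        rw[x] = set()
        for (t_prime, t) in wr.get(x, set()):
            for (writer, s) in ww_x:
                if writer == t_prime and s != t:
                    rw[x].add((t, s))
    return rw

# Hypothetical two-writer example: T and S both write x, and T' reads T's write.
wr = {"x": {("T", "T'")}}          # T --WR(x)--> T'
ww = {"x": {("T", "S")}}           # T --WW(x)--> S (T's version is the older one)
print(derive_rw(wr, ww))           # {'x': {("T'", 'S')}}, i.e. T' --RW(x)--> S
```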
Intuitively, a history satisfies SI if and only if it can be extended to a dependency graph that contains only cycles (if any) with at least two adjacent RW edges. Formally,
**Theorem 6** (Dependency Graph-based Characterization of SI (Theorem 4.1 of [12])).: _For a history \(\mathcal{H}=(\mathcal{T},\text{SO})\),_
\[\mathcal{H}\models\text{SI}\;\Longleftrightarrow\;\mathcal{H}\models\text{Int}\,\wedge\,\exists\,\text{WR},\text{WW},\text{RW}.\;\mathcal{G}=(\mathcal{H},\text{WR},\text{WW},\text{RW})\text{ is a dependency graph}\,\wedge\,\big((\text{SO}_{\mathcal{G}}\cup\text{WR}_{\mathcal{G}}\cup\text{WW}_{\mathcal{G}})\ ;\ \text{RW}_{\mathcal{G}}?\big)\text{ is acyclic}.\]
| **Category** | **Notation** | **Meaning** |
|---|---|---|
| _KV Store_ | Key | set of keys |
| | Val | set of values |
| | Op | set of operations |
| _Relations_ | \(R\)? | reflexive closure of \(R\) |
| | \(R^{+}\) | transitive closure of \(R\) |
| | \(R\) ; \(S\) | composition of \(R\) with \(S\) |
| _Dependency_ | SO, WR, WW, RW | dependency relations/edges |
| | \(G=(V,E,C)\) | (generalized) polygraph |
| | \(V_{G}\), \(E_{G}\), \(C_{G}\) | components of \(G\) |
| | \(G\vert_{F}\) | digraph with set \(F\) of edges |
| _Algorithm_ | \(\mathcal{H}=(\mathcal{T},\text{SO})\) | history to check |
| | \(I\) | SI induced graph |
| | BV | set of Boolean variables |
| | CL | set of clauses |

Table 1. Notation
The _internal consistency axiom_Int ensures that, within a transaction, a read from a key returns the same value as the last write to or read from this key in the transaction.
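For a fully resolved dependency graph (i.e., once WR, WW, and hence RW are fixed), the condition of Theorem 6 is an ordinary acyclicity test on a composed relation. The sketch below is our own illustration with hypothetical transactions: it flags the long-fork shape (two non-adjacent RW edges) as a violation while accepting a pure write-skew cycle (two adjacent RW edges), which SI allows.

```python
def si_condition_holds(so, wr, ww, rw, txns):
    """Check that ((SO ∪ WR ∪ WW) ; RW?) is acyclic (Theorem 6)."""
    dep = set(so) | set(wr) | set(ww)
    rw_refl = set(rw) | {(t, t) for t in txns}                 # RW?
    rel = {(a, c) for (a, b) in dep for (b2, c) in rw_refl if b == b2}
    closure = set(rel)                                         # transitive closure
    while True:
        bigger = closure | {(a, c) for (a, b) in closure
                            for (b2, c) in closure if b == b2}
        if bigger == closure:
            break
        closure = bigger
    return all(a != b for (a, b) in closure)

txns = {"T1", "T2", "T3", "T4"}
# Long fork: T1 -WR-> T3 -RW-> T2 -WR-> T4 -RW-> T1 (the RW edges are not adjacent)
print(si_condition_holds(set(), {("T1", "T3"), ("T2", "T4")}, set(),
                         {("T3", "T2"), ("T4", "T1")}, txns))   # False: SI violated
# Write skew: only two adjacent RW edges, which SI permits
print(si_condition_holds(set(), set(), set(),
                         {("T1", "T2"), ("T2", "T1")}, txns))   # True
```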
### The SI Checking Problem
Definition 7 ().: The _SI checking problem_ is the decision problem of determining whether a given history \(\mathcal{H}\) satisfies SI, i.e., is \(\mathcal{H}\models\mathrm{SI}\)?
We take the common "UniqueValue" assumption on histories (Alberts et al., 2009; Alberts et al., 2009; Alberts et al., 2010; Alberts et al., 2011; Alberts et al., 2012): for each key, every write to the key assigns a unique value. For database testing, we can produce such histories by ensuring the uniqueness of the values written on the client side (or workload generator) using, e.g., the client identifier and local counter. Under this assumption, each read can be associated with the transaction that issues the corresponding write (Alberts et al., 2009).
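A minimal way to realize the UniqueValue assumption on the client side is sketched below (our own illustration; the 32-bit counter width is an arbitrary choice): each written value packs the client identifier and a per-client counter.

```python
import itertools

class UniqueValueGenerator:
    """Globally unique integers: high bits = client id, low bits = local counter."""
    def __init__(self, client_id, counter_bits=32):
        self.client_id = client_id
        self.counter_bits = counter_bits
        self._counter = itertools.count()

    def next_value(self):
        c = next(self._counter)
        assert c < (1 << self.counter_bits), "per-client counter exhausted"
        return (self.client_id << self.counter_bits) | c

gen = UniqueValueGenerator(client_id=7)
print(gen.next_value(), gen.next_value())   # two distinct values from client 7
```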
Theorem 6 provides a brute-force approach to the SI checking problem: enumerate all possible WW relations and check whether any of them results in a dependency graph that contains only cycles with at least two adjacent RW edges. This approach is, however, prohibitively expensive.
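To see why, note that the number of candidate WW relations is the product of factorials of the per-key writer counts. The toy enumeration below (our own illustration with made-up writer sets) makes the blow-up concrete.

```python
from itertools import permutations, product
from math import factorial, prod

def candidate_ww_relations(writers_per_key):
    """Yield one candidate WW per combination of per-key write orders."""
    keys = list(writers_per_key)
    per_key_orders = [list(permutations(writers_per_key[k])) for k in keys]
    for choice in product(*per_key_orders):
        # turn each chosen total order into its set of ordered pairs
        yield {k: {(a, b) for i, a in enumerate(order) for b in order[i + 1:]}
               for k, order in zip(keys, choice)}

writers = {"x": ["T1", "T2", "T3"], "y": ["T2", "T4"]}
print(sum(1 for _ in candidate_ww_relations(writers)))          # 3! * 2! = 12
print(prod(factorial(len(ws)) for ws in writers.values()))      # 12, closed form
```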
### Polygraphs
A dependency graph extending a history represents _one_ possibility of dependencies between transactions in this history. To capture _all_ possible dependencies between transactions in a single structure, we rely on polygraphs (Kal
### Characterizing SI
According to Theorem 6, we are interested in the _induced_ graph of a generalized polygraph \(G\), obtained by composing the edges of \(G\) according to the rule (\((\mathsf{SO}\cup\mathsf{WR}\cup\mathsf{WW})\) ; \(\mathsf{RW}\)?).
Definition 11 ().: The _induced SI graph_ of a polygraph \(G=(V,E,C)\) is the graph \(G^{\prime}=(V,E,C,\mathcal{R})\), where \(\mathcal{R}=(\mathsf{SO}\cup\mathsf{WR}\cup\mathsf{WW})\) ; \(\mathsf{RW}\)? is the induce rule.
The concept of _compatible_ graphs gives a meaning to polygraphs and their induced SI graphs. A graph is compatible with a polygraph when it is a resolution of the constraints of the polygraph. Thus, a polygraph corresponds to a family of its compatible graphs.
Definition 12 ().: A directed labeled graph \(G^{\prime}=(V^{\prime},E^{\prime})\) is _compatible with a generalized polygraph_\(G=(V,E,C)\) if
* \(V^{\prime}=V\);
* \(E^{\prime}\supseteq E\); and
* \(\forall(\textit{either},\textit{or})\in C\). (_either_\(\subseteq E^{\prime}\wedge\textit{or}\cap E^{\prime}=\emptyset\)) \(\vee\) (_or_\(\subseteq E^{\prime}\wedge\textit{either}\cap E^{\prime}=\emptyset\)).
By applying the induce rule \(\mathcal{R}\) to a compatible graph of a polygraph, we obtain a compatible graph with the induced SI graph of this polygraph.
Definition 13 ().: Let \(G^{\prime}=(V^{\prime},E^{\prime})\) be a compatible graph with a polygraph \(G\). Then \(G^{\prime}|_{(\mathsf{SO}_{G^{\prime}}\cup\mathsf{WR}_{G^{\prime}}\cup\mathsf{WW}_{G^{\prime}})\ ;\ \mathsf{RW}_{G^{\prime}}?}\) is a _compatible graph with the induced SI graph of \(G\)_.
Example 14 (Compatible Graphs).: There are two compatible graphs with the generalized polygraph of Figure 1(b): one is with the edge set \(\{(T,T^{\prime},\mathsf{WR}),(S,S^{\prime},\mathsf{WR}),(T,S,\mathsf{WW}),(T^ {\prime},S,\mathsf{RW})\}\), and the other is with \(\{(T,T^{\prime},\mathsf{WR}),(S,S^{\prime},\mathsf{WR}),(S,T,\mathsf{WW}),(S^ {\prime},T,\mathsf{RW})\}\).
Accordingly, there are also two compatible graphs with the induced SI graph of the polygraph of Figure 1(b): one is with the edge set \(\{(T,T^{\prime},\mathsf{WR}),(S,S^{\prime},\mathsf{WR}),(T,S,\mathsf{WW}),(T,S, \mathsf{WR}\) ; \(\mathsf{RW})\}\). The edge \((T,S,\mathsf{WR}\) ; \(\mathsf{RW})\) is obtained from \((T,T^{\prime},\mathsf{WR})\) : \((T^{\prime},S,\mathsf{RW})\). It is identical to \((T,S,\mathsf{WW})\) if the edge types are ignored. The other is with \(\{(T,T^{\prime},\mathsf{WR}),(S,S^{\prime},\mathsf{WR}),(S,T,\mathsf{WW}),(S,T, \mathsf{WR}\) ; \(\mathsf{RW})\}\). Similarly, \((S,T,\mathsf{WR}\) ; \(\mathsf{RW})\) is identical to \((S,T,\mathsf{WW})\) if the edge types are ignored.
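The example can be replayed mechanically. In the sketch below (our own code), edges are (from, to, type) triples, the single constraint mirrors the either/or choice of the polygraph of Figure 1(b), and the induce rule of Definition 11 is applied to each compatible graph; the printed edge sets match the two cases described above.

```python
def compatible_edge_sets(known_edges, constraints):
    """Enumerate the edge sets of all graphs compatible with the polygraph (Def. 12)."""
    graphs = [set(known_edges)]
    for either, or_ in constraints:
        graphs = [g | set(side) for g in graphs for side in (either, or_)]
    return graphs

def induce_si(edges):
    """Apply (SO ∪ WR ∪ WW) ; RW? to an edge set and drop bare RW edges (Def. 13)."""
    dep = {(a, b) for (a, b, t) in edges if t in ("SO", "WR", "WW")}
    rw = {(a, b) for (a, b, t) in edges if t == "RW"}
    composed = {(a, c, "dep;RW") for (a, b) in dep for (b2, c) in rw if b == b2}
    return {(a, b, t) for (a, b, t) in edges if t != "RW"} | composed

known = {("T", "T'", "WR"), ("S", "S'", "WR")}
constraint = ([("T", "S", "WW"), ("T'", "S", "RW")],   # either: T's write is older
              [("S", "T", "WW"), ("S'", "T", "RW")])   # or:     S's write is older
for g in compatible_edge_sets(known, [constraint]):
    print(sorted(induce_si(g)))
```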
We are concerned with the acyclicity of polygraphs and their induced SI graphs.
Definition 15 ().: An induced SI graph is _acyclic_ if there exists an acyclic compatible graph with it, _when the edge types are ignored_. A polygraph is _SI-acyclic_ if its induced SI graph is acyclic.
Finally, we present the generalized polygraph-based characterization of SI. Its proof can be found in Appendix B. The key lies in the correspondence between compatible graphs of polygraphs and dependency graphs.
Theorem 16 (Generalized Polygraph-based Characterization of SI).: _A history \(\mathcal{H}\) satisfies SI if and only if \(\mathcal{H}\models\textsc{Int}\) and the generalized polygraph of \(\mathcal{H}\) is SI-acyclic._
## 4. The checking algorithm for SI
Given a history \(\mathcal{H}\), PolySI encodes the induced SI graph of the generalized polygraph of \(\mathcal{H}\) into an SAT formula and utilizes the MonoSAT solver (Bordes and T
\(\{(T_{1},T_{5},WW),(T_{3},T_{5},RW)\}\), \(or=\{(T_{5},T_{1},WW)\}\)) on the order as a SAT formula
\[(\text{BV}_{1,5}\wedge\text{BV}_{3,5}\wedge\neg\text{BV}_{5,1})\vee(\text{BV}_{5,1}\wedge\neg\text{BV}_{1,5}\wedge\neg\text{BV}_{3,5}),\]
where \(\text{BV}_{i,j}\) is a Boolean variable indicating the existence of the edge from \(T_{i}\) to \(T_{j}\) in the pruned polygraph. We then encode the induced SI graph, denoted \(I\). Since \(T_{2}\xrightarrow{\text{WR}(y)}T_{4}\xrightarrow{\text{RW}(x)}T_{5}\), we have \(\text{BV}_{2,5}^{I}=\text{BV}_{2,4}\wedge\text{BV}_{4,5}\), where \(\text{BV}_{i,j}^{I}\) is a Boolean variable indicating the existence of the edge from \(T_{i}\) to \(T_{j}\) in \(I\). Similarly, we have \(\text{BV}_{1,2}^{I}=\text{BV}_{1,3}\wedge\text{BV}_{3,2}\) and \(\text{BV}_{2,1}^{I}=\text{BV}_{2,4}\wedge\text{BV}_{4,1}\). In contrast, since it is possible that \(T_{3}\xrightarrow{\text{RW}(x)}T_{5}\), we have \(\text{BV}_{1,5}^{I}=\text{BV}_{1,3}\wedge\text{BV}_{3,5}\).
**MonoSAT Solving.** Finally, we feed the SAT formula to MonoSAT for an acyclicity test of the graph \(I\). MonoSAT successfully finds an undesired cycle \(T_{1}\xrightarrow{\text{WR}(x)}T_{3}\xrightarrow{\text{RW}(y)}T_{2} \xrightarrow{\text{WR}(y)}T_{4}\xrightarrow{\text{RW}(x)}T_{1}\), which contains two _non-adjacent_ RW edges; see Figure 2(e). Therefore, this history violates SI.
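The cycle criterion itself is easy to state programmatically: a cycle witnesses an SI violation unless two of its RW edges are adjacent. A small sketch (our own), applied to the reported cycle:

```python
def cycle_violates_si(edge_types):
    """edge_types lists the edge types along a cycle, in order (wrapping around).
    The cycle is allowed under SI only if two consecutive edges are both RW."""
    n = len(edge_types)
    adjacent_rw = any(edge_types[i] == "RW" and edge_types[(i + 1) % n] == "RW"
                      for i in range(n))
    return not adjacent_rw

# T1 -WR(x)-> T3 -RW(y)-> T2 -WR(y)-> T4 -RW(x)-> T1: the RW edges are not adjacent
print(cycle_violates_si(["WR", "RW", "WR", "RW"]))   # True: SI violation
print(cycle_violates_si(["RW", "RW"]))               # False: write skew, allowed by SI
```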
### Constructing the Generalized Polygraph
We construct the generalized polygraph \(G\) of the history \(\mathcal{H}\) in two steps. First, we create the known graph of \(G\) by adding the known edges of types SO and WR to \(E_{G}\). Second, we generate the generalized constraints of \(G\) on possible dependencies between transactions. Specifically, for each key \(x\) and each pair of transactions \(T\) and \(S\) that both write \(x\), we generate a generalized constraint of the form \(\langle\)_either_, \(or\rangle\) according to Definition 9.
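A simplified version of this construction is sketched below (our own illustration, not PolySI's Java implementation). A history is given as sessions of transactions, each carrying its write and read maps; the UniqueValue assumption lets every read be matched to its unique writer, and the constraint shape follows the worked examples in this section.

```python
from collections import defaultdict

def build_generalized_polygraph(sessions):
    """sessions: list of sessions, each a list of (txn_id, writes, reads),
    where writes and reads map keys to values."""
    edges, constraints = set(), []
    writer_of, writers, readers = {}, defaultdict(list), defaultdict(list)
    txns = [t for sess in sessions for t in sess]

    for sess in sessions:                      # SO: consecutive edges per session
        for (a, _, _), (b, _, _) in zip(sess, sess[1:]):
            edges.add((a, b, "SO"))
    for tid, writes, _ in txns:
        for k, v in writes.items():
            writer_of[(k, v)] = tid
            writers[k].append(tid)
    for tid, _, reads in txns:                 # WR: reader linked to its unique writer
        for k, v in reads.items():
            w = writer_of.get((k, v))
            if w is not None and w != tid:
                edges.add((w, tid, "WR"))
                readers[(k, w)].append(tid)
    for k, ws in writers.items():              # one constraint per pair of writers of k
        for i, t in enumerate(ws):
            for s in ws[i + 1:]:
                either = [(t, s, "WW")] + [(r, s, "RW") for r in readers[(k, t)] if r != s]
                or_ = [(s, t, "WW")] + [(r, t, "RW") for r in readers[(k, s)] if r != t]
                constraints.append((either, or_))
    return edges, constraints
```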
### Pruning Constraints
To accelerate MonoSAT solving, we prune as many constraints as possible before encoding (line 10). A constraint can be pruned if either of its two possibilities, represented by _either_ or _or_, cannot happen, i.e., adding the edges in one of the two possibilities would create a cycle in the reduced SI graph. If neither of the two possibilities in a constraint can happen, PolySI immediately returns False. This process is repeated until no more constraints can be pruned (line 31).
In each iteration, we first construct the _currently known part_ of the induced SI graph, denoted \(KI\), of \(G\). To do this, we define two auxiliary graphs, namely \(Dep\gets G|_{\text{SO}_{G}\cup\text{WR}_{G}\cup\text{WW}_{G}}\) and \(AntiDep\gets G|_{\text{RW}_{G}}\). By Definition 13, \(KI\) is \(Dep\cup(Dep\ ;\ AntiDep)\) (line 14). Then, we compute the reachability relation of \(KI\). Next, for each constraint _cons_ of the form \(\langle\)_either_, _or_\(\rangle\), we check if _either_ or _or_ would create cycles in \(KI\) (line 16). Consider an edge (_from_, _to_, _type_) in _either_ (line 17). By construction, it must be of type WW or RW. Note that \(KI\) does not contain any RW edges by definition. Therefore, an RW edge from _from_ to _to_, together with a path from _to_ to
Figure 4. Two cases for pruning constraints.
Figure 3. The “long fork” anomaly: an illustrating example of PolySI.
from_ in _KI_, does _not_ necessarily create a cycle in _KI_. This fails the simple reachability-based strategy used in Cobra (Cobra, 2017).
Suppose first that (_from_, _to_) is a WW edge; see Figure 3(a). If there is already a path from _to_ to _from_ in _KI_ (line 2), adding the WW edge would create a cycle in _KI_. Thus, we can prune the constraint _cons_ and the edges in the other possibility _or_ become known.
Now suppose that (_from_, _to_) is an RW edge; see Figure 3(b). We check if there is a path in _KI_ from _to_ to any immediate predecessor _prec_ of _from_ in _Dep_ (line 2). If there is a path, adding this RW edge would introduce, via composition with the edge from _prec_ to _from_, an edge from _prec_ to _to_ in _KI_ (the dashed arrow in Figure 3(b)). Then, with the path from _to_ to _prec_, we obtain a cycle in _KI_.
The pruning process for the _or_ possibility is the same as that for _either_, except that it returns False if both the _either_ and _or_ possibilities of a constraint are pruned.
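The sketch below condenses the pruning loop described above into a single pass (our own simplification: reachability is recomputed naively, and a resolved constraint side is simply merged into the known edges; PolySI's actual implementation repeats the pass until a fixed point).

```python
from collections import defaultdict

def reach(pairs):
    """Transitive closure of a set of (from, to) pairs."""
    closure = set(pairs)
    while True:
        bigger = closure | {(a, c) for (a, b) in closure
                            for (b2, c) in closure if b == b2}
        if bigger == closure:
            return closure
        closure = bigger

def side_closes_cycle(side, dep, ki_reach):
    """Would adding this constraint side create a cycle in KI?"""
    preds = defaultdict(set)
    for (a, b) in dep:
        preds[b].add(a)
    for (frm, to, typ) in side:
        if typ == "WW" and (to, frm) in ki_reach:
            return True                       # a back path to the WW source exists
        if typ == "RW" and any(p == to or (to, p) in ki_reach for p in preds[frm]):
            return True                       # a path back to a predecessor of frm
    return False

def prune_once(dep, antidep, constraints):
    """dep and antidep are mutable sets of (from, to) pairs of known edges."""
    ki = set(dep) | {(a, c) for (a, b) in dep for (b2, c) in antidep if b == b2}
    ki_reach = reach(ki)
    kept, consistent = [], True
    for either, or_ in constraints:
        bad_either = side_closes_cycle(either, dep, ki_reach)
        bad_or = side_closes_cycle(or_, dep, ki_reach)
        if bad_either and bad_or:
            consistent = False                # neither side is possible: SI is violated
        elif bad_either or bad_or:
            for (a, b, t) in (or_ if bad_either else either):
                (antidep if t == "RW" else dep).add((a, b))   # surviving side becomes known
        else:
            kept.append((either, or_))
    return consistent, kept
```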
The following theorem states that PruneConstraints is correct in that (1) it preserves the SI-acyclicity of polygraphs; and (2) it does not introduce new undesired cycles, which ensures that any violation found in the pruned polygraph using MonoSAT later also exists in the original polygraph. This is crucial to the informativeness of PolySI. The theorem's proof can be found in Appendix B.
Theorem 17 (Correctness of PruneConstraints).: _Let \(G\) and \(G_{p}\) be the generalized polygraphs before and after PruneConstraints, respectively. Then,_
1. \(G\) _is SI-acyclic if and only if PruneConstraints returns_ True _and_ \(G_{p}\) _is SI-acyclic._
2. _Suppose that_ \(G_{p}\) _is not SI-acyclic. Let_ \(\mathcal{C}\) _be a cycle in a compatible graph with the induced SI graph of_ \(G_{p}\)_. Then there is a compatible graph with the induced SI graph of_ \(G\) _that contains_ \(\mathcal{C}\)_._
Combining Theorems 16 and 17, we prove PolySI's soundness.
Theorem 18 (Soundness of PolySI).: PolySI _is sound, i.e., if PolySI returns False, then the input history indeed violates SI._
### SAT Encoding
In this step we encode the induced SI graph, denoted \(I\), of the pruned polygraph \(G\) into an SAT formula (line 3). We use BV and CL to denote the set of Boolean variables and the set of clauses of the SAT formula, respectively. For each pair of vertices \(v_{i}\) and \(v_{j}\), we create two Boolean variables \(\text{BV}_{i,j}\) and \(\text{BV}^{I}_{i,j}\): one for the polygraph \(G\), and the other for its induced SI graph \(I\). An edge (\(v_{i}\), \(v_{j}\)) is in the compatible graph with \(I\) (resp., \(G\)) if and only if \(\text{BV}^{I}_{i,j}\) (resp., \(\text{BV}_{i,j}\)) is assigned to True by MonoSAT in testing the acyclicity of \(I\).
We first encode the polygraph \(G\). For each edge \((v_{i},v_{j})\) in the known graph of \(G\), we add the unit clause \(\text{BV}_{i,j}\) to CL. For each constraint \(\langle\)_either_, _or_\(\rangle\) of \(G\), we add a clause requiring that either every edge in _either_ is present and every edge in _or_ is absent, or the other way around:
\[\Big(\bigwedge_{(v_{i},v_{j},\_)\,\in\,\textit{either}}\text{BV}_{i,j}\;\wedge\bigwedge_{(v_{i},v_{j},\_)\,\in\,\textit{or}}\neg\text{BV}_{i,j}\Big)\;\vee\;\Big(\bigwedge_{(v_{i},v_{j},\_)\,\in\,\textit{or}}\text{BV}_{i,j}\;\wedge\bigwedge_{(v_{i},v_{j},\_)\,\in\,\textit{either}}\neg\text{BV}_{i,j}\Big).\]
We then encode the induced SI graph \(I\) over the variables \(\text{BV}^{I}_{i,j}\) by relating them to the \(\text{BV}_{i,j}\) variables according to the induce rule \((\text{SO}\cup\text{WR}\cup\text{WW})\ ;\ \text{RW}?\): an edge of \(I\) from \(v_{i}\) to \(v_{j}\) exists when the corresponding dependency edge exists in \(G\), or when a dependency edge from \(v_{i}\) to some \(v_{k}\) composes with an RW edge from \(v_{k}\) to \(v_{j}\), as illustrated by the running example above.
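The clause-level bookkeeping can be pictured symbolically as below (our own illustration: clauses are kept as readable strings rather than CNF, and no solver API is invoked; PolySI hands the real encoding to MonoSAT). The single constraint reuses the one from the overview example, so the printed conjuncts match the formula shown there; the known edges listed are only a fragment of the known graph.

```python
def encode_polygraph(known_edges, constraints):
    """Unit clauses for known edges plus one either/or clause per constraint."""
    bv = lambda a, b: f"BV[{a},{b}]"
    unit_clauses = [bv(a, b) for (a, b, _) in sorted(known_edges)]
    either_or_clauses = []
    for either, or_ in constraints:
        take_either = [bv(a, b) for (a, b, _) in either] + \
                      [f"!{bv(a, b)}" for (a, b, _) in or_]
        take_or = [bv(a, b) for (a, b, _) in or_] + \
                  [f"!{bv(a, b)}" for (a, b, _) in either]
        either_or_clauses.append((take_either, take_or))
    return unit_clauses, either_or_clauses

known = {("T1", "T3", "WR"), ("T2", "T4", "WR")}          # fragment of the known graph
cons = [([("T1", "T5", "WW"), ("T3", "T5", "RW")],        # either
         [("T5", "T1", "WW")])]                           # or
units, choices = encode_polygraph(known, cons)
print(" AND ".join(choices[0][0]))   # BV[T1,T5] AND BV[T3,T5] AND !BV[T5,T1]
```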
* _Intermediate Reads_: a transaction cannot read a value that was overwritten by the transaction that wrote it.
Note that PolySI's completeness relies on a common assumption about _determinate_ transactions (Bauer et al., 2010; Chen et al., 2011; Chen et al., 2012; Chen et al., 2013; Chen et al., 2014), i.e., the status of each transaction, whether committed or aborted, is legitimately decided. Indeterminate transactions are inherent to black-box testing: it is difficult for a client to justify the status of a transaction due to the invisibility of system internals. Together with the completeness of the dependency-graph-based characterization of SI in Theorem 6, we prove PolySI's completeness.
Theorem 19 (Completeness of PolySI).: PolySI _is complete with respect to a history that contains only determinate transactions, i.e., if such a history indeed violates SI, then PolySI returns false._
## 5. Experiments
We have presented our SI checking algorithm PolySI and established its _soundness_ and _completeness_. In this section, we conduct a comprehensive assessment of PolySI to answer the following questions with respect to the remaining criteria of SIEGE+ (Section 1):
**(1) Effective:** Can PolySI find SI violations in (production) databases?
**(2) Informative:** Can PolySI provide understandable counterexamples for SI violations?
**(3) Efficient:** How efficient is PolySI (and its components)? Can PolySI outperform the state of the art under _various_ workloads and scale up to large-sized workloads?
Our answer to (1) is twofold (Section 5.2): (i) PolySI successfully reproduces all of 2477 known SI anomalies in production databases; and (ii) we use PolySI to detect novel SI violations in three cloud databases of different kinds: the graph database Dgraph (Dwork et al., 2015), the relational database MariaDB-Galera (Dwork et al., 2015), and YugabyteDB (Zhu et al., 2016) supporting multiple data models. To answer (2) we provide an algorithm that recovers the violating scenario, highlighting the cause of the violation found (Section 5.3). Regarding (3), we (i) show that PolySI outperforms several competitive baselines including the most performant SI and serializability checkers to date; (ii) measure the contributions of its different components/optimizations to the overall performance under both general and specific transaction workloads (Section 5.4); and (iii) demonstrate its scalability for large-sized workloads with one billion keys and one million transactions. Note that we demonstrate PolySI's **generality** along with the answers to questions (1) and (3).
### Workloads, Benchmarks, and Setup
#### 5.1.1. Workloads and Benchmarks
To evaluate PolySI on _general_ read-only, write-only, and read-write transaction workloads, we have implemented a parametric workload generator. Its parameters are: the number of client sessions (#sess; 20 by default), the number of transactions per session (#txns/sess; 100 by default), the number of read/write operations per transaction (#ops/txn; 15 by default), the percentage of reads (%reads; 50% by default), the total number of keys (#keys; 10k by default), and the key-access distribution (dist) including uniform, zipfian (by default), and hotspot (80% operations touching 20% keys). Note that the default 2k transactions with 30k operations issued by 20 sessions are sufficient to distinguish PolySI from competing tools (see Section 5.4.1).
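A stripped-down version of such a generator is sketched below (our own code; numpy's Zipf sampler with an arbitrary exponent stands in for the zipfian option, and values are made unique per session and counter as discussed earlier).

```python
import random
import numpy as np

def generate_history(n_sess=20, txns_per_sess=100, ops_per_txn=15,
                     read_ratio=0.5, n_keys=10_000, dist="zipfian", seed=0):
    rng = random.Random(seed)
    np_rng = np.random.default_rng(seed)

    def pick_key():
        if dist == "uniform":
            return rng.randrange(n_keys)
        if dist == "hotspot":                    # 80% of operations on 20% of keys
            hot = rng.random() < 0.8
            lo, hi = (0, n_keys // 5) if hot else (n_keys // 5, n_keys)
            return rng.randrange(lo, hi)
        return int(np_rng.zipf(a=1.5)) % n_keys  # skewed (zipfian-like) accesses

    history, counter = [], 0
    for s in range(n_sess):
        session = []
        for _ in range(txns_per_sess):
            ops = []
            for _ in range(ops_per_txn):
                k = pick_key()
                if rng.random() < read_ratio:
                    ops.append(("R", k))
                else:
                    counter += 1                 # UniqueValue: value = (session, counter)
                    ops.append(("W", k, (s, counter)))
            session.append(ops)
        history.append(session)
    return history
```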
Among such general workloads, we also consider three representatives, each with 10k transactions and 80k operations in total (#sess=25, #txns/sess=400, and #ops/txn=8), in the comparison with Cobra and the decomposition and differential analysis of PolySI: (i) GeneralRH, read-heavy workload with 95% reads; (ii) GeneralRW, medium workload with 50% reads; and (iii) GeneralWH, write-heavy workloads with 30% reads.
We also use three synthetic benchmarks with only serializable histories of at least 10k transactions (which also satisfy SI):
* RUBiS (Peters et al., 2014): an eBay-like bidding system where users can, for example, register and bid for items. The dataset archived by (Peters et al., 2014) contains 20k users and 200k items.
* TPC-C (Cheng et al., 2015): an open standard for benchmarking online transaction processing with a mix of five different types of transactions (e.g., for orders and payment) portraying the activity of a wholesale supplier. The dataset includes one warehouse, 10 districts, and 30k customers.
* C-Twitter (Peters et al., 2014): a Twitter clone where users can, for example, tweet and follow or unfollow other users (following the zipfian distribution).
To assess PolySI's scalability, we also consider large-sized workloads with one billion keys and one million transactions (#sess=20; #txns/sess=50k). The workloads contain both short and long transactions; the default sizes are 15 and 150, respectively.
#### 5.1.2. Setup
We use a PostgreSQL (v15 Beta 1) instance to produce _valid_ histories without isolation violations: for the performance comparison with other SI checkers and the decomposition and differential analysis of PolySI itself, we set the isolation level to _repeatable read_ (implemented as SI in PostgreSQL (Peters et al., 2014)); for the runtime comparison with Cobra (Section 5.4.1), we use the _serializable_ isolation level to produce serializable histories. We co-locate the client threads and PostgreSQL (or other databases for testing; see Section 5.2.2) on a local machine. Each client thread issues a stream of transactions produced by our workload generator to the database and records the execution history. All histories are saved to a file to benchmark each tool's performance.
We have implemented PolySI in 2.3k lines of Java code, and the workload generator, including the transformation from generated key-value operations to SQL queries (for the interactions with relational databases such as PostgreSQL), in 2.2k lines of Rust code. We ensure unique values written for each key using counters. We use a simple database schema of a two-column table storing keys and values, which is effective to find real violations in three production databases (see Section 5.2).
We conducted all experiments with a 4.5GHz Intel Xeon E5-2620 (6-core) CPU, 48GB memory, and an NVIDIA K620 GPU.
### Finding SI Violations
#### 5.2.1. Reproducing Known SI Violations
PolySI successfully reproduces _all_ known SI violations in an extensive collection of 2477 anomalous histories (Chen et al., 2011; Chen et al., 2012; Chen et al., 2013). These histories were obtained from the earlier releases of three different production databases, i.e., CockroachDB, MySQL-Galera, and YugabyteDB; see Table 2 for
details. This set of experiments provides supporting evidence for PolySI's _soundness_ and _completeness_, established in Section 4.
#### 5.2.2. Detecting New Violations
We use PolySI to examine recent releases of three well-known cloud databases (of different kinds) that claim to provide SI: Dgraph (Dgraph, 2017), MariaDB-Galera (2017), and YugabyteDB (2018). See Table 2 for details. We have found and reported novel SI violations in all three databases which, as of the time of writing, are being investigated by the developers. In particular, as communicated with the developers, (i) our finding has helped the DGraph team confirm some of their suspicions about their latest release; and (ii) Galera has confirmed the incorrect claim on preventing lost updates for transactions issued on different cluster nodes and thereafter removed any claims on SI or "partially supporting SI" from the previous documentation.4
Footnote 4: [https://github.com/codership/documentation/commit/ec8d61e51f76749abe61e2cc825s3d65ececee67a](https://github.com/codership/documentation/commit/ec8d61e51f76749abe61e2cc825s3d65ececee67a) and [https://github.com/codership/documentation/commit/d8711bd15s10fe59973cb7cc892061ec7b80](https://github.com/codership/documentation/commit/d8711bd15s10fe59973cb7cc892061ec7b80)
### Understanding Violations
MonoSAT reports cycles, constructed from its output logs, upon detecting an SI violation. However, such cycles are _uninformative_ with respect to understanding how the violation actually occurred. For instance, Figure 5(a) depicts the original cycle returned by MonoSAT for an SI violation found in MariaDB-Galera, where it is difficult to identify the cause of the violation.
Hence, we have designed an algorithm to interpret the returned cycles. The key idea is to (i) bring back any potentially involved transactions and the associated dependencies, (ii) restore the violating scenario by identifying the core participants and dependencies, and (iii) remove the "irrelevant" dependencies to simplify the scenario. We have integrated into PolySI the algorithm written in 300 lines of C++ code. The pseudocode is given in Appendix C. We have also integrated the Graphviz tool (Garibor, 2017) into PolySI to visualize the final counterexamples (e.g., Figure 5).
**Minimal Counterexample.** A "minimal" counterexample would facilitate understanding how the violation actually occurred. We define a minimal violation as a polygraph where no dependency can be removed; otherwise, the resulting polygraph would pass the verification of PolySI. Given a polygraph \(G\) (constructed from a collected history) and a cycle \(C\) (returned by MonoSAT), there may however be more than one minimal violation with respect to \(G\) and \(C\) due to different interpretations of uncertain dependencies. We call the one with the least number of dependencies _the minimal counterexample with respect to \(G\) and \(C\)_.
PolySI guarantees the minimality of returned counterexample:
Theorem 20 (Minimality).: _PolySI always returns a minimal counterexample with respect to \(G\) and \(C\), with \(G\) the polygraph built from a history and \(C\) the cycle output by MonoSAT._
We defer to Appendix E for the formal definitions of the minimal violation and counterexample and the proof of Theorem 20.
**Violation Found in MariaDB-Galera.** We present an example violation detected in MariaDB-Galera. In particular, we illustrate how the interpretation algorithm helps us locate the violation cause: _lost update_. We defer the Dgraph and YugabyteDB anomalies (causality violations) to Appendix D. In the following example, we use T:(s, _n_) to denote the \(n\)th transaction issued by session s.
Given the original cycles returned by MonoSAT in Figure 5(a), PolySI first finds the (only) "missing" transaction T:(1,4) (colored in green) and the associated dependencies, as shown in Figure 5(b). Note that some of the dependencies are uncertain at this moment, e.g., the WW dependency between T:(1,4) and T:(1,5) (in red). PolySI then restores the violating scenario by resolving such uncertainties. For example, as depicted in Figure 5(c), PolySI determines that W(0,4) was actually installed first in the database, i.e., T:(1,4)\(\xrightarrow{\text{WW}}\)T:(1,5), because there would otherwise be an undesired cycle with the known dependencies, i.e., T:(1,5)\(\xrightarrow{\text{WW}}\)T:(1,4)\(\xrightarrow{\text{WR}}\)T:(1,5). The same reasoning applies to determine the WW dependency between T:(1,4) and T:(2,13) (in blue). Finally, PolySI finalizes the violating scenario by removing any remaining uncertainties including those dependencies not involved in the actual violation (the WW dependency between T:(1,5) and T:(2,13) in this case).
The violating scenario now becomes informative and explainable: transaction T:(1,4) writes value 4 on key 0, which is read by transactions T:(2,13) and T:(1,5). Both transactions subsequently commit their writes on key 0 by W(0,13) and W(0,5), respectively, which results in a _lost update_ anomaly.
### Performance Evaluation
In this section, we conduct an in-depth performance analysis of PolySI and compare it to the following black-box checkers:
* dbcop (DBLAN, 2017) is, to the best of our knowledge, the most efficient black-box SI checker that does not use an off-the-shelf solver. Note that, unlike our PolySI tool, dbcop does not check _aborted reads_ or _intermediate reads_ (see Section 4.5).
* Cobra (CBLAN, 2017) is the state-of-the-art SER checker utilizing both MonoSAT and GPUs to accelerate the checking procedure. Cobra serves as a baseline because (i) checking SI is more complicated than checking SER in general (DBLAN, 2017), and constraint pruning and the MonoSAT encoding for SI are more challenging in particular due to more complex cycle patterns in dependency graphs (Theorem 6, Section 2.2.3); and (ii) Cobra is the most performant SER checker to date.
* CobraSI: We implement the incremental algorithm (DBLAN, 2017, Section 4.3) for reducing checking SI to checking serializability (in polynomial time) to leverage Cobra. We consider two variants: (i) CobraSI without GPU for a fair comparison with
| Database | GitHub Stars | Kind | Release |
|---|---|---|---|
| **New violations found:** | | | |
| Dgraph | 18.2k | Graph | v2.11.2.0 |
| MariaDB-Galera | 4.4k | Relational | v10.7.3 |
| YugabyteDB | 6.7k | Multi-model | v2.11.1.0 |
| **Known bugs (DBLAN, 2017; DBLAN, 2017):** | | | |
| CockroachDB | 25.1k | Relational | v2.1.0 |
| MySQL-Galera | 381 | Relational | v25.3.26 |
| YugabyteDB | 6.7k | Multi-model | v1.1.10.0 |

Table 2. Summary of tested databases. Multi-model refers to relational DBMS, document store, and wide-column store.
PolySI and dbcop, which do not employ GPU or multithreading; and (ii) CobraSI with GPU as a strong competitor.
#### 5.4.1. Performance Comparison with State of the Art
Our first set of experiments compares PolySI with the competing SI checkers under a wide range of workloads. The input histories extracted from PostgreSQL (with the _repeatable read_ isolation level) are all valid with respect to SI. The experimental results are shown in Figure 6: PolySI significantly surpasses not only the state-of-the-art SI checker dbcop but also CobraSI with GPU. In particular, with more concurrency, such as more sessions (a), transactions per session (b), and operations per transaction (c), CobraSI with GPU exhibits exponentially increasing checking time5 while PolySI incurs only moderate overhead.
Footnote 5: The incremental algorithm for reducing checking SI to checking serializability typically doubles the number of transactions in a given history [(7)], rendering the checking even more expensive.
Figure 5. Lost update: the SI violation found in MariaDB-Galera. The original output dependencies are represented by dotted black arrows. The recovered dependencies are colored in red/blue with dashed and solid arrows representing uncertain and certain dependencies, respectively. The missing transaction is colored in green. We omit key 0, associated with all dependencies.
Figure 8. Comparison on time and memory overhead with Cobra with GPU acceleration under representative workloads.
Figure 6. Performance comparison with the competing SI checkers under various workloads. Experiments time out at 180s; data points are not plotted for timed-out experiments.
Figure 7. Comparison on memory overhead with competing SI checkers under various workloads.
The result depicted in Figure 6(f) is also consistent: with the skewed key accesses representing high concurrency as in the zipfian and hotspot distributions, both dbcop and CobraSI without GPU acceleration time out. Moreover, even with the GPU acceleration, CobraSI takes 6x more time than PolySI. Finally, unlike the other SI checkers, PolySI's performance is fairly stable with respect to varying read/write proportions (d) and keys (e).
In Figure 8(a) we compare PolySI with the baseline serializability checker Cobra. We present the checking time on various benchmarks. PolySI outperforms Cobra (with its GPU acceleration enabled) in five of the six benchmarks with up to 3x improvement (as for GeneralRH). The only exception is TPC-C, where most of the transactions have the read-modify-write pattern,6 for which Cobra implements a specific optimization to efficiently infer dependencies before pruning and encoding.
Footnote 6: In a read-modify-write transaction each read is followed by a write on the same key.
We also measure the memory usage for all the checkers under the same settings as in Figure 6 and Figure 8(a). As shown in Figure 7, PolySI consumes less memory (for storing both generated graphs and constraints) than the competitors in general. Note that dbcop, the only checker that does not rely on solving and stores no constraints, is not competitive with PolySI for most of the cases. Regarding the comparison on specific benchmarks (Figure 8(b)), PolySI and Cobra with GPU acceleration have similar overheads, while PolySI (resp. Cobra) requires less memory for read-heavy workloads (resp. TPC-C).
#### 5.4.2. Decomposition Analysis of PolySI
We measure PolySI's checking time in terms of stages: _constructing_, which builds up a generalized polygraph from a given history; _pruning_, which prunes constraints in the generalized polygraph; _encoding_, which encodes the graph and the remaining constraints; and _solving_, which runs the MonoSAT solver.
Figure 9 depicts the results on six different datasets. Constructing a generalized polygraph is relatively inexpensive. The overhead of pruning is fairly constant, regardless of the workloads; PolySI can effectively prune (resp. resolve) a huge number of constraints (resp. unknown dependencies) in this phase. See Table 3 for details. In particular, for TPC-C which contains only read-only and read-modify-write transactions, PolySI is able to resolve all uncertainties on WW relations and identify the unique version chain for each key. The encoding effort is moderate; TPC-C incurs more overhead as the number of operations in total is 5x more than the others. The solving time depends on the remaining constraints and unknown dependencies after pruning, e.g., the left four datasets incur negligible overhead (see Table 3).
#### 5.4.3. Differential Analysis of PolySI
To investigate the contributions of PolySI's two major optimizations, we experiment with three variants: (i) PolySI itself; (ii) PolySI without pruning (P) constraints; and (iii) PolySI without both compacting (C) and pruning the constraints. Figure 10 demonstrates the acceleration produced by each optimization. Note that the two variants without optimization exhibit (16GB) memory-exhausted runs on TPC-C, which contain considerably more uncertain dependencies (3628k) and constraints (386k) without pruning than the other datasets (see Table 3).
#### 5.4.4. Scalability
To assess PolySI's scalability, we generate transaction workloads with one billion keys and one million transactions with hundreds of millions of operations. We experiment with varying read proportions and long transaction sizes (up to 450 operations per transaction). As shown in Figure 11, PolySI consumes less than 40GB memory in all cases and at most 4 hours for checking one million transactions. We also observe that the time used increases linearly with larger-sized transactions while the memory overhead is fairly stable. To conclude, large-sized workloads are quite manageable for PolySI on modern hardware. Note that the competing checkers, as expected, fail to handle such workloads.
| Benchmark | #cons. before P | #cons. after P | #unk. dep. before P | #unk. dep. after P |
|---|---|---|---|---|
| TPC-C | 386k | 0 | 3628k | 0 |
| GeneralRH | 4k | 29 | 39k | 77 |
| RUBiS | 14k | 149 | 171k | 839 |
| C-Twitter | 59k | 277 | 307k | 776 |
| GeneralRW | 90k | 2565 | 401k | 5435 |
| GeneralWH | 167k | 6962 | 468k | 14376 |

Table 3. Number of constraints and unknown dependencies before and after pruning (P) in the six benchmarks.
Figure 11. PolySI’s overhead on large-sized workloads with one billion keys and one million transactions.
Figure 9. Decomposing PolySI’s checking time into stages.
## 6. Discussion
**Fault Injection.** We have found SI violations in three production databases without injecting faults, such as network partition and clock drift. Since PolySI is an off-the-shelf checker, it is straightforward to integrate it into existing testing frameworks with fault injection such as Jepsen (Kal
Finally, despite the theoretical soundness and completeness (modulo determinate transactions) claim, Elle's actual implementation is unsound for efficiency reasons and there are also anomalies it cannot detect. We have confirmed this with the developer (Kang et al., 2019).
ConsAD (Kang et al., 2019) is a checker tailored to application servers as opposed to black-box databases in our setting. Its SI checking algorithm is also based on dependency graphs. To determine the WW dependencies, ConsAD enforces the commit order of update transactions using, e.g., artificial SQL queries, to acquire exclusive locks on the database records, resulting in additional overhead (Sandel et al., 2019; Kang et al., 2019). Moreover, ConsAD is incapable of detecting non-cycle anomalies.
CAT (Kang et al., 2019) is a dynamic white-box checker for SI (and several other isolation levels). The current release is restricted to distributed databases implemented in the Maude formal language (Mauda et al., 2019). CAT must capture the internal transaction information, e.g., start/commit times, during a system run.
## 8. Conclusion
We have presented the design of PolySI, along with a novel characterization of SI using generalized polygraphs. We have established the soundness and completeness of our new characterization and PolySI's checking algorithm. Moreover, we have demonstrated PolySI's effectiveness by reproducing all of 2477 known anomalies and by finding new violations in three popular production databases, its efficiency by experimentally showing that it outperforms the state-of-the-art tools and can scale up to large-sized workloads, and its generality, operating over a wide range of workloads and databases of different kinds. Finally, we have leveraged PolySI's interpretation algorithm to identify the causes of the violations.
PolySI is the first black-box SI checker that satisfies the SIEGE+ principle. The obvious next step is to apply SMT solving to build SIEGE+ black-box checkers for other data consistency properties such as transactional causal consistency (Kang et al., 2019; Kang et al., 2019) and the recently proposed regular sequential consistency (Kang et al., 2019). Moreover, we will pursue the research directions discussed in Section 6.
###### Acknowledgements.
We would like to thank the anonymous reviewers for their helpful feedback. This work was supported by the CCF-Tencent Open Fund (Tencent RAGR20200201). |
2305.13849 | Gaussian Latent Representations for Uncertainty Estimation using
Mahalanobis Distance in Deep Classifiers | Recent works show that the data distribution in a network's latent space is
useful for estimating classification uncertainty and detecting
Out-of-distribution (OOD) samples. To obtain a well-regularized latent space
that is conducive for uncertainty estimation, existing methods bring in
significant changes to model architectures and training procedures. In this
paper, we present a lightweight, fast, and high-performance regularization
method for Mahalanobis distance-based uncertainty prediction, and that requires
minimal changes to the network's architecture. To derive Gaussian latent
representation favourable for Mahalanobis Distance calculation, we introduce a
self-supervised representation learning method that separates in-class
representations into multiple Gaussians. Classes with non-Gaussian
representations are automatically identified and dynamically clustered into
multiple new classes that are approximately Gaussian. Evaluation on standard
OOD benchmarks shows that our method achieves state-of-the-art results on OOD
detection with minimal inference time, and is very competitive on predictive
probability calibration. Finally, we show the applicability of our method to a
real-life computer vision use case on microorganism classification. | Aishwarya Venkataramanan, Assia Benbihi, Martin Laviale, Cedric Pradalier | 2023-05-23T09:18:47Z | http://arxiv.org/abs/2305.13849v3 | # Gaussian Latent Representations for Uncertainty Estimation
###### Abstract
Recent works show that the data distribution in a network's latent space is useful for estimating classification uncertainty and detecting Out-Of-Distribution (OOD) samples. To obtain a well-regularized latent space that is conducive for uncertainty estimation, existing methods bring in significant changes to model architectures and training procedures. In this paper, we present a lightweight, fast, and high-performance regularization method for Mahalanobis distance (MD)-based uncertainty prediction, and that requires minimal changes to the network's architecture. To derive Gaussian latent representation favourable for MD calculation, we introduce a self-supervised representation learning method that separates in-class representations into multiple Gaussians. Classes with non-Gaussian representations are automatically identified and dynamically clustered into multiple new classes that are approximately Gaussian. Evaluation on standard OOD benchmarks shows that our method achieves state-of-the-art results on OOD detection with minimal inference time, and is very competitive on predictive probability calibration. Finally, we show the applicability of our method to a real-life computer vision use case on microorganism classification.
## 1 Introduction
Current deep learning classification networks achieve superior performance and find widespread applications in various industrial domains such as biology and robotics [1, 2, 3]. While they achieve state-of-the-art accuracy, there remain two main challenges that hinder the deployment of deep classifiers in critical situations: the derivation of calibrated classification and a measure of the classification uncertainty. Without those, a network exposed to Out-of-Distribution (OOD) data makes incorrect predictions with high confidence [4] and no human-in-the-loop can catch such errors. It is thus necessary to obtain calibrated probabilities [4]_i.e.,_ predict probabilities that represent true likelihood, and to estimate the uncertainty in the network's predictions to allow users to make informed decisions.
Among deep uncertainty estimation approaches [5, 6, 7, 8] are Bayesian Neural Networks [9], MC-Dropout [10] and Deep Ensemble [11]. These stochastic methods require multiple forward-passes so they are not scalable to large systems. Aware of the scalability requirements, current research focuses on estimating uncertainty from deterministic single-forward-pass networks [12, 13, 14, 15, 16, 17]. Distance-based methods belong to this category and are an attractive alternative for their excellent performance in OOD detection [18, 19].
Distance-based methods rely on the distance between the test samples and the In-Distribution (ID) samples in a network's latent space to determine if the test samples are OOD. A relevant distance is the Mahalanobis distance (MD) [20] for its superior performance over Euclidean Distance (ED) [21, 22, 23]. One key MD assumption, however, is that the in-distribution samples in the latent space should follow class-conditional Gaussian distributions. In practice, there is nothing in the classification training that constrains the latent space to fulfil such an assumption [24]. Instead, research on representation learning shows that each
class is usually composed of several clusters of visually similar images [25, 26, 27]. This can be due to intra-class variance of images taken from different view-points, the presence of additional objects in the image, and variations in object shapes. In the network's latent space, these variations appear as distinct distributions or deviate from a Gaussian distribution. This breaks the MD assumption, which could lead to incorrect or imprecise uncertainty estimation.
In this paper, we introduce MAPLE, a self-supervised representation learning method that regularizes a classification network's latent space to exhibit multivariate Gaussian distributions. MAPLE generates a latent space where class representations are Gaussian, making it compliant with the MD assumption and allows fast and high-performance MD-based OOD detection, uncertainty estimation, and calibrated classification. The effect of MAPLE is illustrated in Fig. 1 with the 2D projection of the latent space of a Convolutional Neural Network (CNN) trained on CIFAR10.
MAPLE stands for MAhalanobis distance based uncertainty Prediction for reLiablE classification, and is illustrated in Fig. 2. MAPLE relies on two components: i) a self-supervised intra-class label refinement through clustering in the latent space; ii) a deep metric learning loss that improves the class separation. During training, the representations associated to a class that deviate from a Gaussian distribution are divided into several clusters that are approximately Gaussian. The cluster assignments become the new labels of the representations, and the training goes on. Since each cluster gathers samples that exhibit similar intra-class variations, the clustering step is akin to automatic fine-grained annotation. The metric-learning then reinforces the fined-grained class separation by pushing apart the new classes. The combination of in-class clustering and metric learning results in classification representations that are well-clustered and approximately Gaussian, which makes them suitable for MD-based uncertainty estimation.
We evaluate MAPLE against existing uncertainty quantification methods on the three standard benchmarks: CIFAR10 [28]_vs._ SVHN [29]/CIFAR100 [28], FashionMNIST [30]_vs._ MNIST [31] for OOD detection and predictive probability calibration. Results show that MAPLE achieves the best compromise between performance and run time efficiency while being the most lightweight integration-wise. It achieves very competitive performance with the state-of-the-art and has the best inference time. Also, it introduces minor architectural changes and does not require additional fine-tuning to OOD datasets.
We summarize the paper's contributions as follows. **i)** We develop a self-supervised representation learning method that constrains a classification network's latent space to be approximately Gaussian. **ii)** We show that such representations allow for reliable OOD detection and probability calibration using MD. **iii)** We design the method such that it has a minimal impact on the network's original architecture, has low computational cost during inference, and achieves results competitive with the state-of-the-art on OOD detection.
Figure 1: **Self-supervised latent space regularization with MAPLE** for uncertainty estimation and OOD detection. MAPLE improves class separation as illustrated by the PCA visualization of a CNN’s latent space trained on CIFAR10 without regularization (left) and with MAPLE regularization (right). Our method constrains the latent representations to be approximately Gaussian to enable efficient distance-based uncertainty estimation.
## 2 Related Work
**Multi-forward-pass Uncertainty Estimation.** Traditional uncertainty quantification methods rely on Bayesian Neural Networks [32, 33] to learn a distribution over the network weights. To extract predictive probability variance, sampling [34] or variational methods [9] are used. The application of these methods is limited, as they increase the number of parameters by a factor of two and hinder convergence. As a lighter alternative, MC Dropout [10] enables dropout at test time and averages the network's output over several forward passes. While MC Dropout paves the way towards faster and lighter uncertainty estimation, it has been shown to produce over-confident predictions [11] and underestimate uncertainty [35]. To improve uncertainty estimation, Deep Ensembles [11] average the predictions from an ensemble of trained models and achieve state-of-the-art performance on several classification tasks. It remains computationally expensive due to the training of multiple models and the several forward passes during inference. By deriving uncertainty from a single forward pass, MAPLE achieves significantly faster inference time without sacrificing performance.
**Single-forward-pass Uncertainty Estimation.** One line of work relies on the distribution of data samples in the network's latent space. A test sample is considered ID if it lies within the training data manifold, otherwise it is labelled as OOD. Methods differ in the way they regularize the representation space and the way they derive distances. DUQ [18] uses a Radial Basis Function (RBF) kernel in the representation space to measure distances between test samples and the centroids of various classes. Additionally, they use gradient penalty to obtain a regularized space, which improves the prediction's quality. SNGP [36] uses Spectral Normalization on the network's weights to satisfy the bi-Lipchitz condition, which is a more gradient-friendly regularization than DUQ. This condition preserves semantically meaningful distance changes in the representation space with respect to input changes. The prediction's uncertainty is then given by a Gaussian Process layer on the output. To improve the scalability of the Gaussian Process estimation, [14] proposes Deep Kernel Learning to process the input images with a distance-preserving network and fit a Gaussian on inducing
Figure 2: **Representation regularization with MAPLE for uncertainty estimation. Our approach trains a classification network to learn representations that are approximately Gaussian for each class. During inference, the Mahalanobis distance between a test sample and the class centroids is used for classification, uncertainty estimation and OOD detection.**
points only. Contrary to these methods, MAPLE avoids the Gaussian Process estimation and gradient regularization during training and instead relies on simple metric learning. Similarly, VMDLS [24] simplifies the Gaussian enforcement by training the network with a KL-divergence loss so that each class representations follow an isotropic Gaussian distribution in the latent space. However, this ignores the possible intra-class variation within each class and requires the Gaussian variance to be tuned manually. Instead, MAPLE uses a simpler self-supervised clustering that automatically fits the data. Also, MAPLE makes the latent space not only suitable for OOD detection but also for calibrated probability prediction.
**Mahalanobis-Distance for OOD detection.** MD is a common distance in the OOD detection literature. Early work by Lee et al. [19] derives confidence values as a function of MD to predict the likelihood of a sample being ID. To obtain competitive performance, the method requires several tweaks such as adding noise to input samples, combining confidence values from multiple feature layers, and fine-tuning on OOD datasets. [23] proposes two light improvements: Partial MD and Marginal MD. In Partial MD, the MD is computed on lower-dimensional representations with PCA. Marginal MD uses all training representations to fit a single Gaussian to calculate the MD. While both perform well on Far-OOD datasets _i.e.,_ where ID and OOD samples are significantly distinct, their results are limited on Near-OOD [37], where the OOD samples are semantically similar to the ID ones. Relative MD (RMD) [22] improves the MD performance on Near-OOD by computing a global MD between the test sample and the samples of all classes combined, and then subtracting this value from the per-class MDs. All these methods exhibit satisfactory performance, but their main limitation is their strong assumption that the image representations follow a Gaussian distribution, even though standard classification training does not enforce such a constraint. MAPLE addresses this limitation with a self-supervised regularization. By doing so, the features better fit the theoretical framework of MD-based OOD detection, thereby improving the performance.
## 3 Method
In this section, we describe MAPLE, a self-supervised regularization method for MD-based OOD detection, uncertainty estimation, and calibrated classification. It augments a standard CNN classifier with a self-supervised regularization to output both class probabilities and MD-based uncertainty. To enable MD for OOD detection, the representations of the training samples are dynamically clustered into multiple Gaussians using X-Means [38] during training. The samples are assigned new pseudo-class labels defined by their cluster assignment. The network is then optimized with the cross-entropy loss and the triplet loss. With periodic validation, the clusters are updated and the total number of classes changes with every validation. At inference time, the MD between a test sample and each cluster's centroid is used to estimate the classification uncertainty and the probability of the point being OOD. Note that the only modification to the original network architecture is in the final layer, where the number of output neurons changes according to the number of clusters identified. This makes MAPLE easy to integrate into any classification network. An algorithmic description is provided in Appendix C.
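To make the training objective concrete, the sketch below shows one optimization step combining the cross-entropy and triplet losses. It is a minimal illustration rather than the authors' implementation: it assumes a model that returns both logits and latent features, and `sample_triplets` is a hypothetical helper standing in for whichever triplet-mining strategy is used.

```python
import torch.nn as nn

# Minimal sketch of the joint objective (not the paper's exact code).
# Assumptions: `model(images)` returns (logits, latent_features);
# `sample_triplets` is a hypothetical helper returning anchor/positive/negative indices.
ce_loss = nn.CrossEntropyLoss()
triplet_loss = nn.TripletMarginLoss(margin=1.0)

def training_step(model, optimizer, images, pseudo_labels):
    logits, z = model(images)
    a, p, n = sample_triplets(z.detach(), pseudo_labels)   # mine triplets on detached features
    loss = ce_loss(logits, pseudo_labels) + triplet_loss(z[a], z[p], z[n])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```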
**Self-Supervised Dynamic Relabelling.** During training, MAPLE updates the training labels to make them representative of the features' separation in the latent space. Every \(p\) epochs, the network is evaluated on \(\mathbf{\mathcal{D}_{val}}\) and the classes with a false negative ratio higher than a threshold \(t\) are updated. This captures the scenario where samples of a given class are frequently misclassified, which is typical of classes with high intra-class variation. For every class to update, the training representations belonging to that class are extracted and clustered using X-Means [38]. The resulting clusters form well-separated groups and we use the cluster assignment as new pseudo-labels for the training samples. If \(k^{\prime}\) additional clusters are introduced by X-Means, each of them is treated as an independent class. Thus, the number of classes becomes \(K=k+k^{\prime}\), and the final layer of the model is updated to have \(K\) neurons. Then, the network training continues with the new labels. During inference, the pseudo labels are remapped to the original set of \(k\) labels to identify their original class.
Fig. 3 illustrates the benefits of jointly using X-Means and the triplet loss on the representations: X-Means splits classes with high intra-class variations into separated classes that are semantically more representative of the data, and the triplet loss reinforces this separation.
The method introduces three hyperparameters: the false negative ratio threshold \(t\), the validation frequency \(p\), and the maximum number of clusters (max_num_cluster), which is required by X-Means. More details on the hyperparameters are provided in Appendix B.4.
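A rough sketch of the relabelling step is given below. It is schematic rather than the authors' code: `xmeans_cluster` is a hypothetical placeholder for any X-Means implementation returning one cluster index per sample (capped at `max_num_cluster`), and the false negative ratio is read from a per-class confusion matrix computed on \(\mathbf{\mathcal{D}_{val}}\).

```python
import numpy as np

def relabel_classes(train_feats, train_labels, val_confusion, t=0.3, max_num_cluster=5):
    """Schematic version of MAPLE's periodic relabelling (run every p epochs)."""
    labels = train_labels.copy()
    next_label = labels.max() + 1
    for c in range(val_confusion.shape[0]):
        false_negative_ratio = 1.0 - val_confusion[c, c] / val_confusion[c].sum()
        if false_negative_ratio <= t:
            continue                                   # class is classified well enough
        idx = np.where(labels == c)[0]
        clusters = xmeans_cluster(train_feats[idx], max_num_cluster)  # hypothetical X-Means call
        for k in np.unique(clusters)[1:]:              # first cluster keeps the original label
            labels[idx[clusters == k]] = next_label    # remaining clusters become new pseudo-classes
            next_label += 1
    return labels  # the final layer is then resized to labels.max() + 1 output neurons
```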
**Clustering.** The motivation for using X-Means over other commonly used clustering methods such as K-Means [39], DB-SCAN [40] and Gaussian Mixture Models (GMMs) is two-fold: (1) X-Means is scalable and automatically identifies the number of clusters based on the Bayesian Information Criterion (BIC); (2) the BIC uses a maximum-likelihood estimate of the variance under a spherical Gaussian assumption, so each resulting cluster is modelled as an approximately spherical Gaussian.
### Representation Distance
This section describes the MD derivation over the latent representations. To avoid matrix singularities, the latent representations are first reduced using PCA.
**Dimensionality reduction.** Representations extracted from large neural networks usually have high dimensionality with many redundant dimensions. Computing the MD requires inverting the covariance matrix of these features, but redundancy can make the covariance matrix singular. Furthermore, [22] shows that the presence of non-informative dimensions could be detrimental to MD performance. This motivates the use of dimensionality reduction.
A common dimensionality reduction method is t-SNE [41], widely used for latent space visualization. While t-SNE maintains the local distribution of points, it fails to represent global distributions accurately, which is undesirable for distance-based uncertainty predictions. Instead, we use Principal Component Analysis (PCA) for dimensionality reduction. The principal components are constructed from the covariance matrix of the standardized training representations. The eigenvectors of the covariance matrix are the principal components and the eigenvalues account for the amount of original information (variance) present in these components. We automatically set the number of principal components to the smallest number of leading eigenvalues required to explain 95% of the original data variance. This transformation is denoted by \(g:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\), where \(d^{\prime}\) is the dimension of the reduced features. With \(\mathbf{z^{\prime}_{train}}=f^{\theta}(\mathbf{x_{train}})\) the full-dimensional training features, we denote \(\mathbf{z_{train}}=g(\mathbf{z^{\prime}_{train}})\) the reduced features.
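As an illustration of the reduction step \(g\), the snippet below standardizes the training representations and keeps the smallest number of principal components explaining 95% of the variance (scikit-learn selects this automatically when `n_components` is given as a float). Variable names such as `z_train_full` are placeholders, not the paper's code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def fit_reduction(z_train_full):
    """Fit g(.): standardize, then keep components explaining 95% of the variance."""
    scaler = StandardScaler().fit(z_train_full)                       # z_train_full: (N, d)
    pca = PCA(n_components=0.95).fit(scaler.transform(z_train_full))
    g = lambda z: pca.transform(scaler.transform(np.atleast_2d(z)))   # maps R^d -> R^d'
    return g, pca.n_components_       # d' is reused later as the chi^2 degrees of freedom
```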
**Mahalanobis Distance.** The MD is a generalized version of Euclidean distance that takes into account the data correlation to measure the distance. Hence, the MD is more accurate when predicting the distance between a point and a distribution of points. Here, MD is calculated on the PCA-reduced representations as follows. Let \(\{z_{i}\}\) be the set of training representations after dimensionality reduction, \(\mu_{c}\) be the class centroids with \(c=1,2,...,K\), and \(\mathbf{\Sigma}\) be the shared covariance for all training samples, given by
Figure 3: **Visualizing intra-class label refinement and feature optimization.** The original data is not perfectly Gaussian due to intra-class variations. X-Means refines the labelling by dividing the samples into multiple clusters that are approximately Gaussian. The clusters are considered as separate classes during training. Triplet loss optimizes the representations by bringing the in-class samples together and separating them from other classes.
\[\mu_{c}=\frac{1}{N_{c}}\sum_{i:y_{i}=c}z_{i},\qquad\mathbf{\Sigma}=\frac{1}{N}\sum_{c}\sum_{i:y_{i}=c}(z_{i}-\mu_{c})(z_{i}-\mu_{c})^{T} \tag{1}\]
The following Eq. 2 gives the Mahalanobis distance between the centroid \(\mu_{c}\) of class \(c\) and a test sample \(\tilde{x}\) with reduced representation \(\tilde{z}=g(f^{\theta}(\tilde{x}))\)
\[MD_{c}(\tilde{x})=\sqrt{(\tilde{z}-\mu_{c})^{T}\mathbf{\Sigma}^{-1}(\tilde{z} -\mu_{c})} \tag{2}\]
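In code, Eqs. (1)-(2) amount to a few array operations; the following is a small NumPy sketch, not the authors' implementation.

```python
import numpy as np

def fit_md_parameters(z_train, y_train, num_classes):
    """Eq. (1): per-class centroids and the shared covariance of the reduced features."""
    d = z_train.shape[1]
    mu = np.stack([z_train[y_train == c].mean(axis=0) for c in range(num_classes)])
    sigma = np.zeros((d, d))
    for c in range(num_classes):
        diff = z_train[y_train == c] - mu[c]
        sigma += diff.T @ diff
    sigma /= len(z_train)
    return mu, np.linalg.inv(sigma)

def mahalanobis_distances(z_test, mu, sigma_inv):
    """Eq. (2): MD between one reduced test representation and every class centroid."""
    diff = mu - z_test                                   # shape (K, d')
    return np.sqrt(np.einsum('kd,de,ke->k', diff, sigma_inv, diff))
```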
### Classification and Uncertainty Estimation
We now show how to use the MD calculated in Eq. 2 for three purposes: classification, predictive probability, and uncertainty prediction.
**MD-based Classification.** The predicted class is the one whose centroid \(c^{*}\) is closest to the test sample \(\tilde{x}\):
\[c^{*}=\underset{c}{\text{argmin}}(MD_{c}(\tilde{x})) \tag{3}\]
Note that this classification is inferred in addition to the usual classification done by the network by taking the maximum of the output logits.
**Predictive Probability.** We convert the MD into a calibrated classification probability using the following property: for Gaussian-distributed representations of dimension \(d^{\prime}\), the squared MD follows a chi-squared distribution \(\chi^{2}_{d^{\prime}}\) with \(d^{\prime}\) degrees of freedom. The MD is converted as follows:
\[P^{c}_{MD}=1-\text{cdf}(\chi^{2}_{d^{\prime}})(MD_{c}(\tilde{x})^{2}) \tag{4}\]
where cdf(.) is the cumulative distribution function. \(P^{c}_{MD}\) represents the probability that a test sample belongs to class \(c\). When the test point belongs to a particular class, the MD to that class is low and the corresponding \(P^{c}_{MD}\) is high. The predictive probability is the one associated with the class \(c^{*}\) obtained in Eq. 3:
\[P^{c^{*}}_{MD}=\underset{c}{\text{max}}(P^{c}_{MD}) \tag{5}\]
Note that, contrary to a CNN's softmax 'probabilities', this classification probability is calibrated and can be interpreted as a confidence in the classification output. This means \(P^{c}_{MD}\) represents the actual probability that a sample belongs to class \(c\).
**Uncertainty Prediction.** We define the predictive uncertainty, which is the uncertainty in the network prediction as
\[u_{c^{*}}=1-P^{c^{*}}_{MD} \tag{6}\]
For small values of MD, \(u_{c^{*}}\) is around 0 and goes to 1 as the MD increases.
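Putting Eqs. (3)-(6) together, inference reduces to an argmin over distances and a chi-squared tail probability. The sketch below is illustrative only; `md` is assumed to be the vector of per-class Mahalanobis distances of Eq. (2) for one test sample.

```python
import numpy as np
from scipy.stats import chi2

def md_inference(md, d_prime):
    """MD-based class (Eq. 3), calibrated probability (Eqs. 4-5) and uncertainty (Eq. 6)."""
    md = np.asarray(md)
    c_star = int(np.argmin(md))                          # Eq. (3): closest centroid
    p_md = 1.0 - chi2.cdf(md ** 2, df=d_prime)           # Eq. (4): one probability per class
    confidence = float(p_md[c_star])                     # Eq. (5)
    uncertainty = 1.0 - confidence                       # Eq. (6)
    return c_star, confidence, uncertainty
```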
## 4 Experiments
We compare MAPLE with the following related works: two multi-forward-pass methods, MC-Dropout [10] (10 dropout samples) and Deep Ensemble [11] (10 models), and four single-forward-pass methods: DUQ [18], SNGP [36], DUE [14] and VMDLS [24]. Following the standard evaluation protocol for OOD detection, we evaluate the methods on classification, predictive probability calibration, and OOD detection on three benchmark setups: FashionMNIST [30]_vs._ MNIST [31], CIFAR10 [28]_vs._ SVHN [29], and CIFAR10 [28]_vs._ CIFAR100.
We also compare MAPLE with MD-based methods on OOD detection, namely, the approach by Lee et al. [19], Marginal MD [23] and RMD [22]. We used the near-OOD CIFAR10 _vs._ CIFAR100 for the comparison, which is notably challenging for OOD detection.
### Evaluation Metrics
We report the standard evaluation metrics [18, 36], namely the classification accuracy, the Expected Calibration Error (ECE), the Negative Log-Likelihood (NLL), the Area Under the Receiver Operating Characteristics (AUROC) and the Area Under the Precision-Recall curve (AUPR). For qualitative analysis, we use calibration plots and uncertainty histograms (Appendix D.1). As mentioned previously, MAPLE produces two classification outputs, so we report the accuracies obtained from both the traditional softmax probability and the MD-based classification (Sec. 3.2). The ECE and the NLL are calculated from the predictive probability \(P_{MD}^{c^{*}}\). AUROC and AUPR are calculated from the uncertainty \(u_{c^{*}}\). The definitions of these standard metrics are recalled in Appendix A.
### Implementation Details
As in [18], the network architecture used for training FashionMNIST is a three-layer CNN. The CIFAR10 training follows [14, 36] and uses a Wide ResNet 28-10 [42] for the classification backbone. The hyperparameters for training are \(p=10,t=0.3\) and max_num_cluster\(=5\). Additional details on the network architecture, dataset splits, hyperparameter search, and the hardware used for training are provided in Appendices B.1, B.2 and B.4.
### Results
We report the results on FashionMNIST and CIFAR10 in Tables (Tab.) 1 and 2, respectively.
**OOD Detection Results.** MAPLE outperforms all baseline methods by up to 12% on the AUROC and AUPR scores, and does so with the least computation time1. Note that competitive approaches, such as SNGP and DUE, derive their performance from spectral normalization and a Gaussian process layer, which are invasive training add-ons. In contrast, MAPLE relies only on the layers of a standard CNN architecture to achieve superior performance.
Footnote 1: The latency value for MAPLE includes the time for inference plus post-processing with MD. Latencies for MC Dropout and Deep Ensemble are measured with the inferences performed serially.
When it comes to inference speed, MC Dropout and Deep Ensemble perform the worst, which is expected since they require multiple forward passes during inference. In contrast, most single-forward-pass methods achieve scores comparable to MC Dropout and Deep Ensemble while being faster; MAPLE, in particular, is close to 8 times faster than Deep Ensemble. This reinforces MAPLE's motivation: the distribution of feature points in a network's latent space holds reliable information for fast prediction of a network's uncertainty and detection of OOD samples.
\begin{table}
\begin{tabular}{c||c c c|c c|c} \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{ID metrics} & \multicolumn{3}{c|}{OOD metrics} & Latency\(\downarrow\) \\ & Accuracy \(\uparrow\) & ECE \(\downarrow\) & NLL \(\downarrow\) & AUROC \(\uparrow\) & AUPR \(\uparrow\) & (ms/sample) \\ \hline MC Dropout [10] & 0.923 & 0.069 & **0.213** & 0.912 & 0.895 & 15.46 \\ Deep ensemble [11] & **0.939** & 0.018 & 0.238 & 0.874 & 0.866 & 23.87 \\ DUQ [18] & 0.923 & 0.045 & 0.276 & 0.941 & 0.945 & 2.61 \\ SNGP [36] & 0.924 & **0.009** & 0.259 & 0.981 & 0.978 & 2.54 \\ DUE [14] & 0.923 & 0.028 & 0.284 & 0.954 & 0.948 & 2.57 \\ VMDLS [24] & 0.920 & - & - & 0.963 & 0.970 & 2.60 \\ MAPLE & 0.925/0.924 & 0.020 & 0.262 & **0.995** & **0.994** & **2.48** \\ \hline \end{tabular}
\end{table}
Table 1: **FashionMNIST (ID) vs MNIST (OOD).** MAPLE achieves the best performance on OOD detection and has the best inference time. It is very competitive with other single-pass methods on the classification task. Blue: classification based on prediction from softmax probability. Orange: MD-based classification.
**Classification Results.** MAPLE achieves results competitive with the state of the art, only 1% below the top method, Deep Ensemble [11], whose score comes at the cost of training and inference on several models. Note that the two MAPLE accuracies, from the softmax probability and from the MD-based classification, are close. A finer analysis shows that the slight difference in accuracy with the MD-based classification occurs on samples the network is uncertain about: MAPLE achieves top accuracy on high-confidence predictions (above 80% and 90% confidence) and the accuracy slightly decreases for lower-confidence predictions. See Appendix D.2 for an extended analysis.
**Calibration Results.** MAPLE is competitive with state-of-the-art SNGP [36] and Deep Ensembles. When training on FashionMNIST, one source of ECE error is MAPLE's under-confidence on the accuracy range below 80%. This is visible in the calibration plot (Fig. 4) where the curve goes above the ideal calibration: the confidence is lower than the accuracy. This is typical of the scenario where the inter-class representations are widely spread out. Even though a sample falls closest to its ground-truth centroid, the distance to that centroid remains large, which decreases the confidence. The sample is then correctly classified, but with low confidence. Note that while optimal calibration is the gold standard, MAPLE's under-confidence still makes it better suited to safety-critical applications than other methods that make over-confident predictions, which can be disastrous. On CIFAR10, all methods are well-calibrated, except for the overconfident MC-Dropout, which explains its high ECE score. When the accuracy is below 0.4, baseline methods become
\begin{table}
\begin{tabular}{c||c c c|c c|c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{ID metrics} & \multicolumn{3}{c|}{OOD AUROC \(\uparrow\)} & \multicolumn{2}{c|}{OOD AUPR \(\uparrow\)} & \multicolumn{2}{c}{Latency\(\downarrow\)} \\ & Accuracy \(\uparrow\) & ECE \(\downarrow\) & NLL \(\downarrow\) & SVHN & CIFAR100 & SVHN & CIFAR100 & (ms/sample) \\ \hline MC Dropout [10] & 0.960 & 0.048 & 0.293 & 0.932 & 0.835 & 0.965 & 0.829 & 27.10 \\ Deep Ensemble [11] & **0.964** & 0.014 & **0.134** & 0.934 & 0.864 & 0.935 & 0.885 & 38.10 \\ DUQ [18] & 0.945 & 0.023 & 0.222 & 0.927 & 0.872 & 0.973 & 0.833 & 8.68 \\ SNGP [36] & 0.957 & 0.016 & 0.153 & 0.991 & 0.911 & 0.994 & 0.907 & 6.25 \\ DUE [14] & 0.956 & 0.015 & 0.179 & 0.936 & 0.852 & 0.967 & 0.850 & 6.94 \\ VMDLS [24] & 0.951 & - & - & 0.932 & 0.868 & 0.953 & 0.864 & 5.61 \\ MAPLE & 0.956/0.954 & **0.012** & 0.142 & **0.996** & **0.926** & **0.997** & **0.918** & **4.96** \\ \hline \hline \end{tabular}
\end{table}
Table 2: **CIFAR10 (ID) vs SVHN / CIFAR100 (OOD).** MAPLE outperforms all single- and multi-pass methods on OOD detection, with significantly faster inference. Classification with MAPLE is very competitive with the state of the art and the predicted probabilities are better calibrated. Blue: classification based on prediction from softmax probability. Orange: MD-based classification.
Figure 4: **Calibration plots.** A perfectly calibrated plot is when the predicted confidence equals the true likelihood _i.e.,_ the accuracy. This is shown by the linear dotted line in the plots. MAPLE is closer to optimal calibration than existing methods, especially for low-accuracy samples.
overconfident whereas MAPLE is closer to optimal calibration and achieves the best ECE score.
### Comparison with other MD methods
**Setup.** MAPLE is compared against MD-based OOD detectors [19, 22, 23]. These methods are tailored for OOD detection, so for fairness we report only the metric relevant to this task. We report the AUROC score on the challenging near-OOD dataset CIFAR10 _vs._ CIFAR100. The experiments are done with a Wide ResNet 28-10 [42].
**OOD Detection Results.** MAPLE achieves top-performance on Near-OOD detection (Tab. 3), which supports MAPLE's representation regularization. Note that the primary difference between MAPLE and the baselines is their lack of constraints on the latent representation. In contrast, we force the samples of every class to be Gaussian before calculating MD. Non-Gaussian samples lead to incorrect mean and covariance calculations, resulting in incorrect distance values. The error is more pronounced when the samples deviate from the Gaussian distribution by a large factor. This explains why the MD-based approaches under-perform compared to MAPLE on Near-OOD.
### Ablation analysis
In this study, we assess how the different components of MAPLE impact its performance. We train a Wide ResNet 28-10 [42] network on CIFAR10 and use SVHN and CIFAR100 as OOD datasets.
\begin{table}
\begin{tabular}{c||c c c|c c|c c|c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c|}{ID metrics} & \multicolumn{2}{c|}{OOD metrics - SVHN} & \multicolumn{2}{c|}{OOD metrics - CIFAR100} & \multirow{2}{*}{\#Eig} \\ & Softmax Accuracy \(\uparrow\) & MD-based Accuracy\(\uparrow\) & ECE \(\downarrow\) & AUROC \(\uparrow\) & AUPR \(\uparrow\) & AUROC \(\uparrow\) & AUPR \(\uparrow\) \\ \hline
**DNN+MD (1)** & 0.950 & 0.943 & 0.086 & 0.752 & 0.762 & 0.583 & 0.564 & - \\
**DNN+PCA+MD (2)** & 0.950 & 0.946 & 0.053 & 0.855 & 0.839 & 0.813 & 0.859 & 12 \\ \hline
**DNN+PCA+ED (3)** & 0.950 & 0.943 & 0.105 & 0.829 & 0.804 & 0.734 & 0.765 & 12 \\ \hline
**DNN+Triplet+PCA+MD (4)** & 0.954 & 0.953 & 0.013 & 0.945 & 0.948 & 0.912 & 0.894 & 11 \\
**DNN+Clustering+PCA+MD (5)** & 0.947 & 0.945 & 0.032 & 0.922 & 0.908 & 0.811 & 0.815 & 12 \\ \hline
**MAPLE (6)** & **0.956** & **0.954** & **0.012** & **0.996** & **0.997** & **0.926** & **0.930** & 12 \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Ablation study.** We evaluate the influence of several MAPLE components. **PCA** (1 vs 2) results in a significant improvement of the OOD detection by discarding non-informative dimensions. The distances derived on these reduced features are better representative of the similarity between the input samples. The **MD** (2 vs 3) is better suited than ED for calibrated classification and OOD detection, which reiterates conclusions already found in previous works. The **triplet loss** (2 vs 4) improves both the accuracy and the OOD metrics by increasing the class separation. **Clustering** alone (2 vs 5) also contributes to a better separation of the classes, but the results are not as significant. The joint use of **triplet loss and clustering**, as done in MAPLE (6) achieves the best results on both classification and OOD detection. Note: #Eig refers to the number of principal components, whenever applicable.
\begin{table}
\begin{tabular}{c c} \hline \hline Method & AUROC \(\uparrow\) \\ \hline Lee et al. [19] & 0.893 \\ Marginal MD [23] & 0.838 \\ RMD [22] & 0.897 \\ MAPLE & **0.926** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **Comparison with MD-based OOD detection.** MAPLE performs significantly better in OOD detection than existing MD-based methods on the CIFAR10 _vs._ CIFAR100 setup. By enforcing the learned representations to follow a Gaussian distribution, MAPLE allows for distance derivations that are more semantically meaningful.
**Dimensionality Reduction.** We consider two scenarios: **(1) DNN+MD** - A baseline where a standard Deep Neural Network (DNN) is trained with the cross-entropy loss and with no feature regularization. The MD is computed on the raw features, and we add a value of \(1e^{-20}\) to the diagonal elements [22] to avoid a singular covariance matrix. **(2) DNN+PCA+MD** - It follows (1) except that the MD is derived on PCA-reduced features.
_Results:_ Dimensionality reduction (2) drastically improves the network's performance, as shown in the first two rows of Tab. 4. The improvement amounts to 7-30% on the OOD metrics and 3% on the ID metrics. One possible explanation is that the retained dimensions are the ones that contribute to distinguishing ID samples from OOD ones, as previously observed by [22]. When including all the feature dimensions in the MD, the dimensions that do not contribute to discriminating ID and OOD samples add up and dominate the final MD score.
**Distance Definition.** We compare Mahalanobis distance and Euclidean distance (ED) in the network's latent space. We compare **(2) DNN+PCA+MD** with the new experiment **(3) DNN+PCA+ED** - It follows (2) except that the MD is replaced with ED. As for MD, the \(\chi^{2}_{d^{\prime}}\) distribution is used to obtain the probability values from ED (Sec 3.2).
_Results:_ The results show that MD boosts the performance in terms of ID and OOD metrics. The ECE score improves by 5%, and the OOD metrics improve by 3-9% when using MD. This is because MD takes into account the data correlation, which gives a better estimate of the probability and uncertainty values.
**Representation training.** To study the influence of the training on the representations, we consider three experiments: **(4) DNN+Triplet+PCA+MD** - We train the DNN using both cross-entropy and triplet loss. **(5) DNN+Clustering+PCA+MD** - We train using the cross-entropy loss only and periodically cluster the feature points using X-Means. **(6) MAPLE** - This is our proposed method that fuses (4) and (5). For all experiments, the MD is derived on the reduced features.
_Results:_ Using the triplet loss (4) improves the performance considerably compared to training with the cross-entropy loss only (2). An explanation is that the triplet loss pulls in-class feature embeddings together, and pushes the other class features apart. This encourages the representations to be well separated and makes it easier to distinguish OOD features. Choosing the triplet loss for metric learning is empirically motivated: experiments using contrastive loss showed that triplet loss has a slightly better performance.
Periodic clustering (5) improves the ECE score by 2%, and the AUROC and AUPR scores on SVHN by about 7% compared to (2). However, there is a slight drop in accuracy by 0.3% and in the OOD metrics by 4% on CIFAR100. One explanation is that clustering increases the chance that the newly created classes overlap. This phenomenon is illustrated in the centre plot of Fig. 3. The class overlap is particularly hindering when the new domain is close to the training one: with clustering (5), the SVHN scores are better, but the near-OOD CIFAR100 scores are better without clustering (2).
MAPLE uses clustering together with the triplet loss and achieves top performance. The triplet loss reduces the overlap introduced by the clustering by pulling apart the newly created classes. With MAPLE, the latent representations are approximately Gaussian and well-clustered, resulting in better MD estimates and superior performance in both ID and OOD metrics. Compared to experiment (2), the calibration error drops by 4% and the OOD scores improve by 4-11%.
**False Negative Ratio \(t\).** We evaluate the influence of the clustering trigger, _i.e.,_ the False Negative Ratio. We train MAPLE with a range of \(t\) values on CIFAR10 (Tab. 5).
_Results:_ A low value of \(t\) results in overclustering, where multiple clusters contain similar images. This further increases the chances of misclassification, leading to a decrease in the metric values. On the other hand, high \(t\) values result in underclustering. Note that for \(t>0.3\), no additional clusters are generated. This is because the classes have false negative ratios below this threshold and so are not clustered. For CIFAR10, a \(t\) value of 0.3 yields the best results. An extended ablation analysis on the influence of classification backbones, clustering methods, and hyperparameters is provided in Appendix E.
## 5 Discussion
With the periodical clustering and the dynamic re-labeling, a natural question that arises is _'Is there a drop in performance when the ground truth labels change during training?'_. Experimentally, we observe a drop in training accuracy by 2-3% in the following epoch after every clustering phase. However, the network makes up for the drop within 4-5 epochs of training.
It can happen that the clusters contain very few samples, which introduces label imbalance when classifying. This is exacerbated when the samples are over-clustered. To mitigate this, we restrict X-Means to only cluster the classes that get misclassified. These are the classes with a false negative ratio higher than the threshold \(t\). Automatic clustering regularization [43, 44, 45] is left for future work.
## 6 Use Case: Microorganism Classification
We consider the real-life computer vision use-case of image-based diatom identification [46]. Diatoms are microorganisms present in water, and their distribution is a useful indicator for predicting water quality. Diatoms consist of several species, or 'taxa', each corresponding to a different class with a different appearance. As is typical of many biology applications, the image dataset includes a lot of intra-class variance (Fig. 5). In this study, we evaluate the performance of the different approaches when encountering taxa that were not previously trained on.
We train a Wide ResNet 28-10 on 130 taxa and use 36 taxa as OOD. The dataset is particularly challenging since it is fine-grained and Near-OOD. Additional details on the dataset and experimental setup are provided
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Method & Accuracy \(\uparrow\) & ECE \(\downarrow\) & AUROC \(\uparrow\) & AUPR \(\uparrow\) & Latency (ms/sample)\(\downarrow\) \\ \hline MC-Dropout [10] & 0.936 & 0.039 & 0.548 & 0.589 & 129.7 \\ Deep Ensemble [11] & **0.969** & **0.025** & 0.589 & 0.570 & 146.81 \\ SNGP [36] & 0.954 & 0.196 & 0.798 & 0.826 & 26.25 \\ MAPLE & 0.963 & 0.036 & **0.864** & **0.865** & **17.38** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Real Case Application: microorganism classification. With its top performance and state-of-the-art speed, MAPLE makes for a particularly applicable method for classification and OOD detection on real case datasets.**
\begin{table}
\begin{tabular}{c||c c c|c c} \hline \hline False Negative & & & & SVHN & CIFAR100 \\ Ratio (t) & \#Classes & Accuracy\(\uparrow\) & ECE\(\downarrow\) & AUROC\(\uparrow\) & AUROC\(\uparrow\) \\ \hline
0.0 & 23 & 0.9449 & 0.014 & 0.922 & 0.888 \\
0.1 & 18 & 0.9534 & 0.013 & 0.964 & 0.918 \\
0.2 & 14 & **0.9544** & 0.012 & 0.991 & 0.925 \\
0.3 & 12 & 0.9541 & **0.012** & **0.996** & **0.926** \\
0.4 & 10 & 0.9535 & 0.013 & 0.961 & 0.921 \\
0.5 & 10 & 0.9535 & 0.012 & 0.955 & 0.915 \\ \hline \hline \end{tabular}
\end{table}
Table 5: **Metrics for different values of the False Negative Ratio evaluated on CIFAR10.** #Classes refers to the total number of output classes obtained after clustering. A low value of \(t\) results in overclustering, whereas a high \(t\) fails to detect classes with high variance.
in Appendix B.3. As shown in Tab. 6, MAPLE outperforms all baselines on OOD detection. While Deep Ensemble has a slightly better classification accuracy and ECE score, MAPLE significantly outperforms it in OOD with a 30% score boost and a runtime 8 times faster.
## 7 Conclusion
This paper presents MAPLE, a self-supervised regularization method for uncertainty estimation and out-of-distribution detection on CNN classifiers. The uncertainty is derived from the Mahalanobis Distance (MD) between an image representation and the class representations in the network's latent space. MAPLE derives meaningful MD distances by introducing a regularizer based on self-supervised label refinement and metric learning. Thus, MAPLE learns well-clustered representations that are approximately Gaussian for each class, which complies with the theoretical requirements of MD-based uncertainty estimation. Experimental results show that MAPLE achieves state-of-the-art results on out-of-distribution detection with the shortest inference time, and is very competitive with existing methods on predictive probability calibration. MAPLE also has the significant advantage of introducing the least architectural changes. Finally, we demonstrate a real-life use-case of our method on microorganism classification for the automatic assessment of water quality in natural ecosystems.
Figure 5: **Micro-organisms belonging to the same class.** These images of **one** diatom class show wide appearance changes due to different viewpoints during the acquisition. These translate into separate distributions in the latent space, deviating from Gaussian distribution. MAPLE’s regularization makes the latent space Gaussian, hence suitable for MD calculation. |
2310.17899 | Nuclear chiral rotation within Relativistic Configuration-interaction
Density functional theory | The Relativistic Configuration-interaction Density functional (ReCD) theory
that combines the advantages of large-scale configuration-interaction shell
model and relativistic density functional theory is extended to study nuclear
chiral rotation. The energy spectra and transition probabilities of the chiral
doublet bands are reproduced satisfactorily without any free parameters. By
analyzing the probability amplitudes of the wavefunctions, the significant
roles of configuration mixing and four quasiparticle states to the chiral
doublets are revealed. The evolution from chiral vibration to static chirality
is clearly illustrated by the K plot and azimuthal plot. The present
investigation provides both microscopic and quantal descriptions for nuclear
chirality for the first time and demonstrates the robustness of chiral geometry
against the configuration mixing as well as the four quasiparticle states. | Yakun Wang, Pengwei Zhao, Jie Meng | 2023-10-27T05:17:33Z | http://arxiv.org/abs/2310.17899v1 | # Nuclear chiral rotation within Relativistic Configuration-interaction Density functional theory
###### Abstract
The _R_elativistic _C_onfiguration-interaction _D_ensity functional (ReCD) theory that combines the advantages of large-scale configuration-interaction shell model and relativistic density functional theory is extended to study nuclear chiral rotation. The energy spectra and transition probabilities of the chiral doublet bands are reproduced satisfactorily without any free parameters. By analyzing the probability amplitudes of the wavefunctions, the significant roles of configuration mixing and of four-quasiparticle states in the chiral doublets are revealed. The evolution from chiral vibration to static chirality is clearly illustrated by the _K plot_ and the _azimuthal plot_. The present investigation provides both microscopic and quantal descriptions of nuclear chirality for the first time and demonstrates the robustness of the chiral geometry against configuration mixing as well as four-quasiparticle states.
keywords: Chiral rotation, rotational symmetry restoration, relativistic density functional theory, configuration interaction +
Footnote †: journal: Physics Letters B
## 1 Introduction
Chirality has emerged as a prominent research area in many fields, such as chemistry, biology, and physics. The chirality in atomic nuclei has garnered great attention since its first prediction by Frauendorf and Meng in 1997 [1]. The predicted topology, namely the mutually perpendicular angular momenta of the valence protons, valence neutrons,
and the core, forms left- and right-handed configurations and leads to the spontaneous chiral symmetry breaking in the intrinsic frame. The restoration of the broken chiral symmetry in the laboratory frame is manifested by the observation of the chiral doublet bands, which consist of a pair of nearly degenerate \(\Delta I=1\) rotational bands with the same parity. In 2006, a new phenomenon, multiple chiral doublets (M\(\chi\)D), i.e., more than one pair of chiral doublets in one single nucleus is predicted [2]. The evidence of M\(\chi\)D is confirmed experimentally in 2013 [3]. The M\(\chi\)D phenomenon demonstrates the coexistence of nuclear triaxiality and the multiple facets of nuclear chiral rotation. So far, over 60 chiral doublet bands, including 8 M\(\chi\)D, have been experimentally reported; see reviews [4, 5, 6] and also data compilation [7] for more details.
Theoretically, the nuclear chirality has been extensively studied using both phenomenological [1, 8, 9, 10, 11, 12, 13, 14] and microscopic methods [15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25]. Based on the successful relativistic density functional theory (DFT) [26], the three-dimensional tilted axis cranking model (3D-TAC-DFT) [22], with core polarization and nuclear currents considered self-consistently, has received particular attention. Up to now, the relativistic 3D-TAC-DFT has been extended to study the M\(\chi\)D [22], the nuclear chiral conundrum [23], and the superfluidity effects on nuclear chiral rotation [25]. In these studies, however, the angular momenta and transition probabilities are treated in semiclassical ways, and only the lower-energy band of the chiral doublets can be obtained.
To describe the lower- and upper-energy bands of the doublets simultaneously, the nuclear chirality is investigated dynamically within the time-dependent relativistic 3D-TAC-DFT [27]. The energy splitting between the doublet bands can be reproduced and explained by the chiral precession. However, the time-dependent relativistic 3D-TAC-DFT is still within the framework of mean-field approximation.
Recently, the _R_elativistic _C_onfiguration-interaction _D_ensity functional (ReCD) theories for the axial [28] and triaxial [29] cases have been developed. The basic idea of the ReCD theory is the following. Firstly, a relativistic DFT calculation is performed to obtain a self-consistent solution, which corresponds to the minimum of the potential energy surface and already includes important physics. Secondly, a configuration space including various quasiparticle excitation states is constructed based on the self-consistent solution. Thirdly, the broken rotational symmetry for states in the configuration space
is restored by the angular momentum projection, and a set of projected bases with good angular momentum is obtained. Finally, a shell-model calculation, namely diagonalizing the Hamiltonian in symmetry restored space expanded by the projected bases, is carried out to consider the correlations required to describe nuclear spectroscopic properties. It is clear that the ReCD theory combines the advantages of large-scale configuration-interaction shell model and relativistic DFT and, thus, provides a promising tool to study the properties of nuclear chirality in both microscopic and quantal ways.
In this work, we report the first application of the ReCD theory to nuclear chirality. The chiral doublet bands in the odd-odd nucleus \({}^{130}\)Cs [30] are investigated as an example.
## 2 Theoretical Framework
In the ReCD theory, the nuclear wavefunction formulated in the laboratory frame is expressed as [28; 29; 31]
\[|\Psi_{\sigma}^{IM}\rangle=\sum_{K\kappa}F_{K\kappa}^{I\sigma}\hat{P}_{MK}^{I} |\Phi_{\kappa}\rangle, \tag{1}\]
where \(\hat{P}_{MK}^{I}\) is the three-dimensional angular momentum projection operator [32], \(\hat{P}_{MK}^{I}|\Phi_{\kappa}\rangle\) is the projected basis with good angular momentum quantum numbers \(I\) and \(M\), and \(|\Phi_{\kappa}\rangle\) represents an intrinsic state in the configuration space, which, including up to four-quasiparticle states for odd-odd nuclei, is constructed as [29]
\[\{\hat{\beta}_{\pi_{0}}^{\dagger}\hat{\beta}_{\nu_{0}}^{\dagger}|\Phi_{0} \rangle,\hat{\beta}_{\pi_{i}}^{\dagger}\hat{\beta}_{\nu_{j}}^{\dagger}|\Phi_{ 0}\rangle,\hat{\beta}_{\pi_{i}}^{\dagger}\hat{\beta}_{\nu_{j}}^{\dagger}\hat{ \beta}_{\pi_{k}}^{\dagger}\hat{\beta}_{\pi_{l}}^{\dagger}|\Phi_{0}\rangle, \hat{\beta}_{\pi_{i}}^{\dagger}\hat{\beta}_{\nu_{j}}^{\dagger}\hat{\beta}_{ \nu_{k}}^{\dagger}\hat{\beta}_{\nu_{l}}^{\dagger}|\Phi_{0}\rangle\}. \tag{2}\]
Here, \(\hat{\beta}_{\pi}^{\dagger}\) and \(\hat{\beta}_{\nu}^{\dagger}\) are the quasiparticle (qp) creation operators for protons and neutrons, respectively. Among all the states in Eq. (2), \(|\Phi_{\pi_{0}\nu_{0}}\rangle\equiv\hat{\beta}_{\pi_{0}}^{\dagger}\hat{\beta}_{\nu_{0}}^{\dagger}|\Phi_{0}\rangle\) has the lowest intrinsic total energy and is obtained by iteratively solving the triaxial relativistic Hartree-Bogoliubov (TRHB) equation [26]. To ensure the correct number parity for the odd-odd nucleus, the proton (\(\pi_{0}\)) and the neutron (\(\nu_{0}\)) orbits with the lowest qp energies are blocked during the iterative calculations [32]. The coefficients \(F_{K\kappa}^{I\sigma}\) in Eq. (1) play the role of variational parameters and are determined by the following generalized eigenvalue equation
\[\sum_{K\kappa}\{H_{K^{\prime}\kappa^{\prime}K\kappa}^{I}-E^{I\sigma}N_{K^{ \prime}\kappa^{\prime}K\kappa}^{I}\}F_{K\kappa}^{I\sigma}=0. \tag{3}\]
The Hamiltonian matrix element and the norm matrix element are defined as
\[H^{I}_{K^{\prime}\kappa^{\prime}K\kappa}=\langle\Phi_{\kappa^{\prime}}|\hat{H} \hat{P}^{I}_{K^{\prime}K}|\Phi_{\kappa}\rangle,\quad N^{I}_{K^{\prime}\kappa^{ \prime}K\kappa}=\langle\Phi_{\kappa^{\prime}}|\hat{P}^{I}_{K^{\prime}K}|\Phi_{ \kappa}\rangle, \tag{4}\]
and are evaluated by the Pfaffian algorithms proposed in Refs. [33; 34]. The Hamiltonian \(\hat{H}\) is derived from a universal relativistic Lagrangian density by the Legendre transformation [28]. Once \(F^{I\sigma}_{K\kappa}\) are obtained, one can calculate the physical observables associated with the nuclear chirality, including the \(E2\) and \(M1\) transition probabilities.
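Numerically, Eq. (3) is a generalized eigenvalue problem built from the Hamiltonian and norm kernels of Eq. (4). A standard way to solve it, sketched below for one angular momentum \(I\), is to diagonalize the norm matrix first and discard near-zero eigenvalues that signal linearly dependent projected basis states. This is a generic recipe, not the code used in the present work, and the cutoff value is only illustrative.

```python
import numpy as np

def solve_projected_eigenproblem(H, N, norm_cutoff=1e-6):
    """Schematic solution of H F = E N F (Eq. 3) in the projected basis.

    H and N are the Hermitian Hamiltonian and norm matrices of Eq. (4),
    indexed by the flattened pair (K, kappa).
    """
    n_val, n_vec = np.linalg.eigh(N)
    keep = n_val > norm_cutoff                     # drop (near-)singular directions
    T = n_vec[:, keep] / np.sqrt(n_val[keep])      # map to an orthonormal "natural" basis
    E, C = np.linalg.eigh(T.conj().T @ H @ T)      # ordinary eigenproblem in that basis
    F = T @ C                                      # coefficients F^{I sigma}_{K kappa}
    return E, F
```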
It is known that the projected bases \(\{\hat{P}^{I}_{MK}|\Phi_{\kappa}\rangle\}\) are not orthogonal and the coefficients \(F^{I\sigma}_{K\kappa}\) should not be interpreted as the probability amplitudes for the state \(|\Phi_{\kappa}\rangle\). To obtain the probability amplitudes, one needs to construct the following collective wavefunctions [28]
\[g^{I\sigma}_{K\kappa}=\sum_{K^{\prime}\kappa^{\prime}}(N^{I})^{1/2}_{K\kappa K ^{\prime}\kappa^{\prime}}F^{I\sigma}_{K^{\prime}\kappa^{\prime}} \tag{5}\]
with \((N^{I})^{1/2}_{K\kappa K^{\prime}\kappa^{\prime}}\) the matrix element of the square root of the norm matrix in Eq. (4). The probability amplitude for each state \(|\Phi_{\kappa}\rangle\) in the configuration space is then expressed as
\[G^{I\sigma}_{\kappa}=\sum_{K}|g^{I\sigma}_{K\kappa}|^{2}. \tag{6}\]
The \(G^{I\sigma}_{\kappa}\) will be used to analyze the dominant configurations of the wavefunction \(|\Psi^{IM}_{\sigma}\rangle\) and examine the structural evolution of chiral doublet bands with the total angular momentum \(I\). The collective wavefunctions in Eq. (5) can also be used in the calculations of the _K plot_ and the _azimuthal plot_, which have been introduced in Ref. [18] to illustrate the chiral geometry of the chiral doublet bands. The _K plot_ represents the probability distributions of the components of the total angular momentum on the three intrinsic axes [18],
\[P^{I\sigma}(|K|)=\sum_{\kappa}|g^{I\sigma}_{K\kappa}|^{2}+|g^{I\sigma}_{-K \kappa}|^{2}. \tag{7}\]
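Once the norm matrix and the coefficients \(F^{I\sigma}_{K\kappa}\) are available, Eqs. (5)-(7) reduce to simple matrix operations. The sketch below assumes the \((K,\kappa)\) pair is flattened with \(K\) as the slower index; it only illustrates the bookkeeping and does not reproduce the actual code.

```python
import numpy as np
from scipy.linalg import sqrtm

def collective_quantities(N, F, n_K, n_kappa):
    """Eqs. (5)-(7): collective wavefunctions g, amplitudes G_kappa, and the K distribution."""
    g = np.real(sqrtm(N)) @ F                      # Eq. (5); columns label the states sigma
    g = g.reshape(n_K, n_kappa, -1)                # split the flattened (K, kappa) index
    G_kappa = np.sum(np.abs(g) ** 2, axis=0)       # Eq. (6): weight of each intrinsic state
    P_K = np.sum(np.abs(g) ** 2, axis=1)           # Eq. (7) before folding K with -K
    return G_kappa, P_K
```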
The _azimuthal plot_ represents the probability distributions of the orientation of the total angular momentum on the intrinsic \((\theta,\phi)\) plane [18],
\[{\cal P}^{I\sigma}(\theta,\phi)=\sum_{\kappa}\int_{0}^{2\pi}d\psi^{\prime}|W^ {I\sigma}_{\kappa}(\psi^{\prime},\theta,\pi-\phi)|^{2}, \tag{8}\]
where the integrand \(W^{I\sigma}_{\kappa}(\psi^{\prime},\theta,\pi-\phi)\) reads
\[W^{I\sigma}_{\kappa}(\psi^{\prime},\theta,\pi-\phi)=\sqrt{\frac{2I+1}{8\pi^{2 }}}\sum_{K}g^{I\sigma}_{K\kappa}D^{I*}_{IK}(\psi^{\prime},\theta,\pi-\phi). \tag{9}\]
Here, \(\theta\) is the angle between the total angular momentum and the long (\(l\)) axis, and \(\phi\) is the angle between the projection of the total angular momentum on the intermediate-short (\(i\)-\(s\)) plane and \(i\) axis.
## 3 Results and discussion
In the present work, the chiral doublet bands (denoted as Band A and Band B) in \({}^{130}\)Cs are studied using the ReCD theory. The point-coupling Lagrangian PC-PK1 [35] is adopted to derive the Hamiltonian \(\hat{H}\) in Eq. (4) and the TRHB equation. A finite-range separable pairing force with strength \(G=728\) MeV fm\({}^{3}\)[36] is used to treat the pairing correlations. The TRHB equation is solved in the three-dimensional harmonic oscillator basis [37] with 10 major shells. By solving the TRHB equation iteratively, it is found that the deformation parameters (\(\beta,\gamma\)) for \({}^{130}\)Cs are (\(0.20,21^{\circ}\)). Similar to Refs. [28; 29; 31], the dimension of the configuration space is truncated with a qp excitation energy cutoff \(E_{\rm cut}\). The \(E_{\rm cut}\) = 5.0 MeV is adopted in the present calculation. The resultant configuration space is sufficient to obtain convergent results for the spectroscopic properties of \({}^{130}\)Cs. The calculations are free of additional parameters.
The calculated energy spectra for Bands A and B in \({}^{130}\)Cs and their comparison with the data [30] are shown in Fig. 1. The predicted spectra, including the near degeneracy
Figure 1: (Color online) Energy spectra of the chiral doublet bands in \({}^{130}\)Cs calculated by the ReCD theory (right panel) in comparison with data [30] (left panel). The energy levels are shifted by taking the level \(9^{+}\) in Band A as a reference.
between Bands A and B, agree satisfactorily with the data. In more detail, the energy levels with \(I\geq 15\hbar\) are slightly overestimated. Such overestimation might be alleviated by considering states beyond the four-qp configurations.
The theoretical \(E2\) and \(M1\) transition probabilities for the chiral doublets in \({}^{130}\)Cs are depicted in Fig. 2, in comparison with available data [38]. The predicted \(B(E2)\) values reproduce well the experimental data, and they are similar for Bands A and B, as expected for the chiral doublets [8; 10]. Note that there is no need to introduce the effective charge when calculating the \(E2\) transition probabilities in the ReCD theory, as demonstrated in Refs. [28; 31]. The \(B(M1)\) values are somewhat overestimated for states with \(I\leq 14\hbar\). The predicted staggering phenomenon of \(B(M1)\) values for Band A at \(I=16\hbar\) is also weaker than the data. It is known that the staggering of \(B(M1)\) values for chiral doublets depends sensitively on the triaxial deformation \(\gamma\), and its amplitude decreases significantly as \(\gamma\) deviates from \(30^{\circ}\)[39]. The weaker staggering of \(B(M1)\) values predicted here may indicate that the \(\gamma\) value of \({}^{130}\)Cs is slightly underestimated by the relativistic DFT.
Figure 2: (Color online) \(E2\) and \(M1\) transition probabilities of Band A and Band B calculated by the ReCD theory, in comparison with the experimental data [38].
\begin{table}
\begin{tabular}{c c} \hline Labels & Configurations \\ \hline
1 & \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=9/2})\) \\
2 & \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=-9/2})\) \\
3 & \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=5/2})\) \\
4 & \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=-5/2})\) \\
5 & \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=9/2}h_{11/2;\Omega=7/2}h_{11 /2;\Omega=-7/2})\) \\
6 & \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=-9/2}h_{11/2;\Omega=7/2}h_{11 /2;\Omega=-7/2})\) \\ \hline \end{tabular}
\end{table}
Table 1: Detailed information of the most dominant configurations for Bands A and B. Here, \(\Omega\) represents the projection of the spin of the \(h_{11/2}\) qp orbital on the quantization axis, while \(\pi\) and \(\nu\) indicate protons and neutrons.
Figure 3: (Color online) The probability amplitudes \(G_{\kappa}^{I}\) for configurations that play important roles for describing Bands A and B. Here, configurations whose contributions to the doublet bands larger than 1% are listed. The most dominant configurations are marked as “1”, “2”, “3”, “4”, “5”, and “6”, and their detailed information are shown in Table 1.
To pin down the dominating configurations and examine the structural evolution of the chiral doublets in \({}^{130}\)Cs, the probability amplitudes \(G_{\kappa}^{I}\) obtained from Eq. (6) are shown in Fig. 3. Here, we list only the configurations whose contributions to Bands A and B are larger than 1%. The most dominant configurations are marked as "1", "2", "3", "4", "5", and "6", and their detailed information are shown in Table 1. The dominant configurations for Bands A and B are similar, as expected for nuclear chiral rotation. For \(I<16\hbar\), the 2qp states \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=9/2})\) and \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=-9/2})\) are dominant configurations. For \(I\geq 16\hbar\), the 2qp states \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=5/2})\) and \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=-5/2})\), and the 4qp states \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=9/2}h_{11/2;\Omega=7/2}h_{11/2 ;\Omega=-7/2})\) and \(\pi(h_{11/2;\Omega=1/2})\otimes\nu(h_{11/2;\Omega=-9/2}h_{11/2;\Omega=7/2}h_{11 /2;\Omega=-7/2})\) take over, indicating the important roles of qp configuration mixing and 4qp states for describing the chiral doublets in \({}^{130}\)Cs.
The chiral geometry for chiral doublets can be illustrated by the _K plot_ and the _azimuthal plot_, as proposed in Ref. [18]. The _K plot_, i.e., the probability distributions of the components of the total angular momentum on the three intr
Figure 4: (Color online) The _K plot_, i.e., the \(K\) distributions of the angular momentum on the three principal axes, for Band A and Band B at \(I=11\hbar\), \(14\hbar\), \(16\hbar\), and \(17\hbar\). The \(K\) distributions on _short_, _intermediate_, and _long_ axes are shown respectively in the first, second, and third rows.
and B at \(I=11\hbar\), \(14\hbar\), \(16\hbar\), and \(17\hbar\) are shown in Fig. 4. The evolution from the chiral vibration to the static chirality can be clearly seen from the changes of \(K\) distributions on the \(i\) axis. At \(I=11\hbar\), the \(K\) distribution for Band A exhibits a peak at \(K\approx 0\hbar\), and the one for Band B peaks at \(K\approx 8\hbar\). This is the typical feature of zero- and one-phonon states and can be interpreted as chiral vibration with respect to the _l-s_ plane, as demonstrated in Refs. [18; 19; 20]. At \(I=14\hbar\), the \(K\) distribution becomes rather flat for Band A. This indicates that the total angular momentum of Band A begins to deviate from the _l-s_ plane and the collective rotation around \(i\) axis develops. At \(I=16\hbar\) and \(17\hbar\), the \(K\) distributions for both Bands A and B peak at \(K\approx 16\hbar\). The similar \(K\) distributions for Bands A and B suggest the appearance of static chirality.
The chiral geometry can also be examined by the _azimuthal plot_. The _azimuthal plot_, i.e., the probability distribution profiles for the orientation of the angular momentum on the intrinsic \((\theta,\phi)\) plane, for Bands A and B at \(I=11\hbar\), \(14\hbar\), \(16\hbar\), and \(17\hbar\) are shown in Fig. 5.
At \(I=11\hbar\), the _azimuthal plot_ for Band A has a single peak at \((\theta,\phi)=(63^{\circ},90^{\circ})\), which means that the total angular momentum of Band A lies mainly in the _l-s_ plane and corresponds to a planar rotation. This is in accordance with the expectation for the zero-phonon state [18; 19]. The _azimuthal plot_ for Band B exhibits two peaks at \((\theta,\phi)=(59^{\circ},48^{\circ})\) and \((\theta,\phi)=(59^{\circ},132^{\circ})\), together with a node at \((\theta,\phi)=(70^{\circ},90^{\circ})\), supporting
Figure 5: (Color online) The _azimuthal plot_, i.e., the probability distribution profiles for the orientation of the angular momentum on the intrinsic \((\theta,\phi)\) plane for Band A (upper panel) and Band B (lower panel) at \(I=11\hbar\), \(14\hbar\), \(16\hbar\), and \(17\hbar\). The black stars represent the positions of local minima.
the interpretation of the one-phonon vibration. Therefore, the picture of chiral vibration is clearly demonstrated, and this is also consistent with the same probability amplitudes \(G_{\kappa}^{I}\) for Bands A and B at \(I=11\hbar\), as shown in Fig. 3.
At \(I=14\hbar\), there remains one single peak in the _azimuthal plot_ for Band A, but the corresponding probability distribution profile along \(\phi\) direction is very flat. This means the probability of the total angular momentum deviating from \(\phi=0^{\circ}\) (_l-s_ plane) begins to increase. Such phenomenon is consistent with the result revealed by the \(K\) distribution on the \(i\) axis at \(I=14\hbar\), as shown in Fig. 4.
At \(I=16\hbar\) and \(17\hbar\), two peaks corresponding to aplanar orientations of the total angular momentum are found, i.e., \((\theta,\phi)\approx(77^{\circ},55^{\circ})\) and \((77^{\circ},125^{\circ})\) for Band A, and \((\theta,\phi)\approx(69^{\circ},45^{\circ})\) and \((69^{\circ},135^{\circ})\) for Band B. These features suggest the realization of static chirality. Moreover, the non-vanishing distribution at \(\theta=90^{\circ}\) and \(\phi=90^{\circ}\) reflects the tunneling between the left- and right-handed configurations, which explains also the slight differences of the probability amplitudes \(G_{\kappa}^{I}\) between Bands A and B, as shown in Fig. 3. The chiral geometry illustrated by the _K plot_ and the _azimuthal plot_ is thus confirmed to be robust against the configuration mixing and the four qp states.
## 4 Summary
In summary, the ReCD theory, i.e., the _Re_lativistic _C_onfiguration-interaction _D_ensity functional theory, that combines the advantages of large-scale configuration-interaction shell model and relativistic density functional theory is extended to study the chiral rotation in \({}^{130}\)Cs. Without any free parameters, the energy spectra and transition properties of the chiral doublets are reproduced satisfactorily. By calculating the probability amplitudes for states in the configuration space, the composition of the wavefunctions is analyzed. It is found that the quasiparticle configuration mixing and four quasiparticle states play important roles for the chiral doublets. The chiral geometry of the doublets is illustrated in terms of the _K plot_ and _azimuthal plot_, from which the evolution from chiral vibration to static chirality with the total angular momentum is clearly seen. The present investigation provides both microscopic and quantal descriptions for nuclear chirality for the first time and demonstrates the robustness of chiral geometry against the configuration mixing and the four quasiparticle states.
## Acknowledgments
This work was partly supported by the National Natural Science Foundation of China (Grants No. 12141501, No. 12105004, No. 12070131001, No. 11935003, and No. 11975031), the China Postdoctoral Science Foundation under Grant No. 2020M680183, the High-performance Computing Platform of Peking University, and State Key Laboratory of Nuclear Physics and Technology, Peking University.
|
2308.12573 | Conditional Kernel Imitation Learning for Continuous State Environments | Imitation Learning (IL) is an important paradigm within the broader
reinforcement learning (RL) methodology. Unlike most of RL, it does not assume
availability of reward-feedback. Reward inference and shaping are known to be
difficult and error-prone methods particularly when the demonstration data
comes from human experts. Classical methods such as behavioral cloning and
inverse reinforcement learning are highly sensitive to estimation errors, a
problem that is particularly acute in continuous state space problems.
Meanwhile, state-of-the-art IL algorithms convert behavioral policy learning
problems into distribution-matching problems which often require additional
online interaction data to be effective. In this paper, we consider the problem
of imitation learning in continuous state space environments based solely on
observed behavior, without access to transition dynamics information, reward
structure, or, most importantly, any additional interactions with the
environment. Our approach is based on the Markov balance equation and
introduces a novel conditional kernel density estimation-based imitation
learning framework. It involves estimating the environment's transition
dynamics using conditional kernel density estimators and seeks to satisfy the
probabilistic balance equations for the environment. We establish that our
estimators satisfy basic asymptotic consistency requirements. Through a series
of numerical experiments on continuous state benchmark environments, we show
consistently superior empirical performance over many state-of-the-art IL
algorithms. | Rishabh Agrawal, Nathan Dahlin, Rahul Jain, Ashutosh Nayyar | 2023-08-24T05:26:42Z | http://arxiv.org/abs/2308.12573v1 | # Conditional Kernel Imitation Learning for Continuous State Environments
###### Abstract
Imitation Learning (IL) is an important paradigm within the broader reinforcement learning (RL) methodology. Unlike most of RL, it does not assume availability of reward-feedback. Reward inference and shaping are known to be difficult and error-prone methods particularly when the demonstration data comes from human experts. Classical methods such as behavioral cloning and inverse reinforcement learning are highly sensitive to estimation errors, a problem that is particularly acute in continuous state space problems. Meanwhile, state-of-the-art IL algorithms convert behavioral policy learning problems into distribution-matching problems which often require additional online interaction data to be effective. In this paper, we consider the problem of imitation learning in continuous state space environments based solely on observed behavior, without access to transition dynamics information, reward structure, or, most importantly, any additional interactions with the environment. Our approach is based on the Markov balance equation and introduces a novel conditional kernel density estimation-based imitation learning framework. It involves estimating the environment's transition dynamics using conditional kernel density estimators and seeks to satisfy the probabilistic balance equations for the environment. We establish that our estimators satisfy basic asymptotic consistency requirements. Through a series of numerical experiments on continuous state benchmark environments, we show consistently superior empirical performance over many state-of-the-art IL algorithms.
## 1 Introduction
Reinforcement Learning (RL) has produced a series of breakthroughs over the last decade, from exceeding human proficiency at simple games such as those in the Atari suite [14] to Go [17] and StarCraft [18], and to protein structure prediction systems [16]. A fundamental premise, namely that the 'reward is enough' [15], underlies all of this RL methodology. And yet, in most problems, a natural reward function is not available, nor may it be possible to engineer one from intuition. Thus, a lot of effort is spent on reward shaping [20] to make RL algorithms work, often without success.
This problem is particularly acute when humans are in the loop, either as demonstrators or as evaluators. Often, demonstration data comes from human experts and it is impossible to infer precisely what reward function human experts really have in mind while taking actions. To be fair, several inverse RL (IRL) algorithms such as MaxEntropy-IRL [14] use a methodology wherein a reward function is first inferred from the demonstration data, and then used in conjunction with RL algorithms to design near-optimal policies. This has two lacunae. First, the performance of the RL algorithms can be very sensitive to errors in the reward function estimate. And second, the expert may not be using a policy that is optimal with respect to any reward objective at all! There is thus a need to develop imitation learning algorithms that do not depend on reward inference as a first step [1].
Behavioral Cloning (BC) is a simple and natural idea for imitation learning [23]. It is a supervised learning scheme that aims to learn the expert policy as a map from states to actions, largely ignoring the inherent sequential nature of reinforcement learning problems. Unfortunately, it suffers from severe covariate shift issues: because it ignores the sequential decision-making aspect, it fails to generalize to less-visited parts of the state space. This leads to error propagation in the agent's performance [24] and a limited practical ability to generalize effectively.
To address the issue of compounding errors that can afflict methods like BC, IRL algorithms [1, 25, 26, 27] take a different approach by learning a reward function that takes into account that transitions come from trajectories. But this requires use of reinforcement learning algorithms, making them extremely expensive to run, and at the same time sensitive to reward function estimation errors.
In contrast, Adversarial Imitation Learning (AIL) is a technique centered around distribution matching through adversarial learning. It gained significant traction in the recent past as an approach to imitation learning [13, 24, 25]. Within this framework, the objective transforms into finding a behavioral policy that minimizes the divergence between the target distribution and the distribution of state-action pairs generated by the behavioral policy during its interaction with the environment. The primary drawback of current distribution matching methods via AIL is that estimating distribution density ratios, a crucial step in each iteration, usually demands samples from the behavioral policy distribution. Consequently, new interactions with the environment are necessary for every behavioral policy update iteration. This limitation makes these algorithms unsuitable for problems where only offline data is available. This downside is even more apparent in continuous state and action space problems wherein each visited state is visited at most once, with most states not being visited at all in the demonstration data.
A related strand of literature on imitation learning [12] assumes access to a generative model so trajectory data can be generated on the fly. We make no such assumption in our problem formulation.
In this paper, we aim to develop imitation learning algorithms that do not need reward feedback, do not rely on distribution matching between occupancy measures, do not perform behavioral cloning but instead use the 'meta' knowledge that the underlying dynamics are Markovian, do not need access to a generative model, work for continuous state space problems, and allow for batch processing of the offline dataset. This version of the imitation learning problem is relevant in many real-world decision-making applications, such as medicine, healthcare, robotic manipulation, and autonomous vehicles [10], where experimentation is either costly or unsafe.
We introduce a simple and natural framework based on a simple premise. Namely, that the demonstration trajectory data satisfies the balance equation between the demonstration policy, the Markov decision process (MDP) transition density and that of the induced Markov chain. This allows us to incorporate the fact that the demonstration data is coming from a system with Markovian dynamics. The transition densities for the MDP and the Markov chain are then estimated using conditional kernel density estimators which are universal density estimators with proven asymptotic consistency properties. We start with the discrete state and action space setting, but then show that the framework extends to the continuous state space setting as well. We prove consistency properties of the estimators, and validate the algorithm in a series of continuous state space problems. Conceptual extension to continuous action space is straightforward but requires more work for numerical robustness.
Related Work. We now discuss prior work broadly related to our paper. As already mentioned, behavioral cloning faces a fundamental limitation due to discarding distributional insights from the demonstrations [12, 13]. To address this, several remedies have been suggested [12, 13], which involve either further online interactions with the environment or the demonstrator, or using insights into model dynamics or the sparsity of rewards, all of which are in general impractical. The recent work [11] aims to overcome these issues by using additional data from non-expert policies, which circumvents the need for further online interactions, but such additional offline data may not be available. The EDM approach [15] captures the expert's state occupancy measure by training an explicit energy-based model, but its limitations have been scrutinized in [14].
There also have been efforts to further develop IRL approaches to overcome the limitations of earlier algorithms. [10, 12] introduce \(LSTD\)-\(\mu\), a temporal difference technique for calculating feature expectations. However, these approaches share the weaknesses of least squares estimators, being highly sensitive to basis features and training data distribution. [10] propose _DSFN_, which estimates feature expectations in an off-policy setting. They also propose a transition-regularized imitation network that produces an initial policy close to expert behavior and an efficient feature representation. Despite these advancements, the assumption of complete knowledge about reward feature functions in these methods can often be unrealistic, especially for complex problems [1]. [13] introduced \(RCAL\), a non-parametric algorithm that employs a boosting method to minimize the criterion directly without feature selection steps and can help tackle some of the above issues. [10] propose \(AVRIL\), adopting a variational approach to jointly learn an approximate posterior distribution over reward and policy. However, due to inherent covariate shift problems, these methods encounter significant reward extrapolation errors, leading to misguided outcomes in novel environments and reduced learning efficiency. To address this, the _CLARE_[12] model-based offline Inverse Reinforcement Learning (IRL) approach introduces conservation to its estimated reward. It employs an IRL algorithm within an estimated dynamics model to learn the reward. However, limitations arise when dealing with a limited number of expert demonstrations or predominantly low-quality transition samples from a behavior policy. In such cases, forcing alignment of the empirical state-action visitation measure across all data can lead to a recovered reward or policy that mimics the suboptimal behavior policy, undermining the accuracy of the expert model [11].
Adversarial Imitation Learning (AIL) approaches [13] were a breakthrough when they were introduced a few years ago [12, 14]. However, these approaches require online interactions with the environment, and thus are not applicable when we must work only with offline data. Employing a distribution matching strategy, [14] introduces _ValueDICE_, an offline objective for assessing the distribution ratio between the imitator and expert policies. Although theoretically allowing for comprehensive offline learning, the approach undertakes a complex alternating maximization-minimization optimization procedure. Additionally, it suffers from the difficulty of estimating the expectation of an exponential, which introduces bias when approximating gradients using minibatches [15].
Thus, the algorithm we present in this paper is quite distinct in its approach from most of the prior literature. Additionally, it demonstrates promising preliminary empirical results.
## 2 Preliminaries
### The Imitation Learning Problem
An infinite horizon discounted Markov decision process (MDP) \(M\) is defined by the tuple (\(S,A,T,r,\gamma\)) with states \(s\in S\), actions \(a\in A\) and successor states \(s^{\prime}\in S\) drawn from the transition function \(T(s^{\prime}|s,a)\). The reward function \(r:S\times A\rightarrow\mathbb{R}\) maps state-action pairs to scalar rewards, and \(\gamma\) is the discount factor. Policy \(\pi\) is a probability distribution over actions conditioned on state and is given by \(\pi(a_{t}|s_{t})=P_{\pi}(A_{t}=a_{t}|S_{t}=s_{t})\), where \(a_{t}\in A\), \(s_{t}\in S\), \(\forall t=0,1,2,\cdots\). The induced occupancy measure of a policy is given as \(\rho_{\pi}(s,a):=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}\,\mathbf{1}_{s_{t}=s,a_{t}=a}]\), where the expectation is taken over \(a_{t}\sim\pi(\cdot|s_{t})\), \(s_{t+1}\sim T(\cdot|s_{t},a_{t})\) for all \(t\), and the initial state \(s_{0}\). The corresponding state-only occupancy measure is given as \(\rho_{\pi}(s)=\sum_{a}\rho_{\pi}(s,a)\).
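For intuition, the discounted occupancy measure above can be approximated by Monte Carlo roll-outs in a small finite MDP. The sketch below is purely illustrative: the transition tensor `T`, policy matrix `pi`, and initial state are assumed inputs, not objects defined in this paper.

```python
import numpy as np

def occupancy_measure(T, pi, s0, gamma=0.99, n_rollouts=500, horizon=100):
    """Monte Carlo estimate of rho_pi(s, a) = E[sum_t gamma^t 1{s_t = s, a_t = a}]."""
    n_states, n_actions = pi.shape
    rho = np.zeros((n_states, n_actions))
    rng = np.random.default_rng(0)
    for _ in range(n_rollouts):
        s = s0
        for t in range(horizon):
            a = rng.choice(n_actions, p=pi[s])     # a_t ~ pi(.|s_t)
            rho[s, a] += gamma ** t                # accumulate the discounted visit
            s = rng.choice(n_states, p=T[s, a])    # s_{t+1} ~ T(.|s_t, a_t)
    return rho / n_rollouts

# toy usage with a random 4-state, 2-action MDP
rng = np.random.default_rng(1)
T = rng.random((4, 2, 4)); T /= T.sum(axis=2, keepdims=True)
pi = rng.random((4, 2)); pi /= pi.sum(axis=1, keepdims=True)
print(occupancy_measure(T, pi, s0=0).round(2))
```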
In the imitation learning (IL) framework, the agent is provided with trajectories generated by a demonstration policy \(\pi_{D}\), collected as \(D=\{(s_{0},a_{0}),(s_{1},a_{1}),(s_{2},a_{2}),...\}\). The data \(D\) does _not_ include the corresponding reward information \(r_{t}\) at each time step. Indeed, rather than finding the optimal policy that maximizes the long-term reward, the IL objective is to learn a policy \(\pi^{\star}\) that is close to \(\pi_{D}\) in the following sense [20]:
\[\pi^{\star}\in\operatorname*{arg\,min}_{\pi\in\Pi}\mathbb{E}_{s\sim\rho_{\pi} }[\mathcal{L}(\pi(\cdot|s),\pi_{D}(\cdot|s))], \tag{1}\]
where \(\Pi\) is the set of all randomized (Markovian) stationary policies, and \(\mathcal{L}\) is a chosen loss function. In practice, (1) can only be solved approximately in part due to the assumption that \(\pi_{D}\) is unknown and only observed via the finite dataset \(D\).
### Conditional Kernel Density Estimation (CKDE)
The imitation learning approach we will introduce depends on transition density estimation. Though statistical theory exists for it, conditional density estimation is a difficult problem because it is unclear which parametric families of density functions are good candidates. Thus, we adopt the kernel density estimation framework [23], which provides provably universal probability density estimators.
We next outline the method in the case of two continuous scalar random variables, \(X\) and \(Y\) for the sake of simplicity. Let \(f\) and \(g\) denote the joint density of \((X,Y)\) and the marginal density of \(X\), respectively. The conditional distribution of \(Y\), given \(X\), is denoted as \(h_{Y|X}(y|x)=f_{X,Y}(x,y)/g_{X}(x)\).
Selecting a pair of kernel functions \(K:\mathbb{R}\rightarrow\mathbb{R}\) and \(K^{\prime}:\mathbb{R}\rightarrow\mathbb{R}\) with respective scalar bandwidth parameters \(h>0\) and \(h^{\prime}>0\) and given a set of \(n\) samples \(\{(x_{i},y_{i})\}_{i=1}^{n}\), the kernel density estimation (KDE) approximations \(\widehat{f}\) and \(\widehat{g}\) for the joint and marginal distributions, respectively, are obtained as follows:
\[\widehat{f}_{X,Y}(x,y)=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{h}K \bigg{(}\frac{x-x_{i}}{h}\bigg{)}\frac{1}{h^{\prime}}K^{\prime}\bigg{(}\frac{ y-y_{i}}{h^{\prime}}\bigg{)}, \tag{2}\] \[\widehat{g}_{X}(x)=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{h}K\bigg{(} \frac{x-x_{i}}{h}\bigg{)}.\]
Using the approximations in (2), the approximate conditional density \(\widehat{h}_{Y|X}\) can be computed as
\[\widehat{h}_{Y|X}(y|x)=\frac{\widehat{f}_{X,Y}(x,y)}{\widehat{g}_{X}(x)}. \tag{3}\]
In more general cases involving random vectors, analogous estimates to those in (2) and (3) may be obtained using kernel functions defined according to
\[K_{H}(x)=|H|^{-\frac{1}{2}}K(H^{-\frac{1}{2}}x), \tag{4}\]
where \(H\) is a symmetric positive definite _bandwidth matrix_ of appropriate dimension, \(m\), with determinant \(|H|\), and \(K\) is a real-valued function satisfying \(\int_{\mathbb{R}^{m}}K(x)dx=1\). For example, the KDE estimate for the marginal distribution of random vector \(X\) is defined as
\[\widehat{g}_{X}(x;H)=\frac{1}{n}\sum_{i=1}^{n}K_{H}\big{(}x-x_{i}\big{)}. \tag{5}\]
An example of such a multivariate kernel function is the standard \(m\)-variate normal density function
\[K(x):=(2\pi)^{-\frac{m}{2}}\exp\left(-\frac{x^{T}x}{2}\right).\]
For more details on kernel density estimation, see [1].
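To make Eqs. (2)–(5) concrete, the snippet below implements a scalar conditional KDE with Gaussian kernels and a multivariate marginal KDE with a bandwidth matrix. The bandwidths and synthetic data are illustrative choices, not values used later in the paper.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_kernel(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def conditional_kde(x, y, xs, ys, h=0.3, h_prime=0.3):
    """Estimate h_{Y|X}(y|x) from paired samples (x_i, y_i), as in Eqs. (2)-(3)."""
    kx = gaussian_kernel((x - xs) / h) / h                # K((x - x_i)/h) / h
    ky = gaussian_kernel((y - ys) / h_prime) / h_prime    # K'((y - y_i)/h') / h'
    f_hat = np.mean(kx * ky)                              # joint density estimate
    g_hat = np.mean(kx)                                   # marginal density estimate
    return f_hat / g_hat

def kde_marginal(x, samples, H):
    """g_hat_X(x; H), Eq. (5): with a Gaussian K, K_H(x - x_i) is the N(x_i, H) density at x."""
    return np.mean([multivariate_normal.pdf(x, mean=xi, cov=H) for xi in samples])

# usage on synthetic data with Y = 2X + noise
rng = np.random.default_rng(0)
xs = rng.normal(size=2000)
ys = 2.0 * xs + 0.1 * rng.normal(size=2000)
print(conditional_kde(0.5, 1.0, xs, ys))                  # density of Y = 1 given X = 0.5
print(kde_marginal(np.zeros(2), rng.normal(size=(500, 2)), H=0.2 * np.eye(2)))
```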
## 3 Conditional Kernel Imitation Learning
We next describe our imitation learning algorithm. A key premise of our algorithm is that the demonstration trajectories from an expert must satisfy the Markov balance equation under the demonstrator's policy \(\pi_{D}\), and thus we must use that to guide the agent's learning. The use of the balance equation then requires estimation of certain transition (conditional probability) density functions which we do via conditional kernel density estimation methods. Then, the problem reduces to identifying policies that best fit the balance equation. We elucidate this procedure in Algorithm 1, prove its theoretical properties and then present numerical evidence of its efficacy on a number of benchmark environments.
### The Markov Balance Equation
Consider a demonstration policy \(\pi_{D}\) that is used to take actions starting from an initial state \(s_{0}\). Let \(T(s^{\prime}|s,a)\) denote the transition density function of the MDP. Note that \(\pi_{D}(a|s)\) is a randomized Markovian, stationary policy that denotes the probability of taking action \(a\) in state \(s\). This will induce a Markov chain on the state space \(S\). Let its transition density be denoted \(P(s^{\prime}|s)\). Then, the Markov balance equation is given by
\[P(s^{\prime}|s)=\sum_{a}\pi_{D}(a|s)T(s^{\prime}|s,a).\]
Unfortunately, this involves a sum, and hence is difficult to use. We thus, use the following alternate form which is a transition density of the induced Markov chain on the state-action space,
\[P_{\pi_{D}}(s^{\prime},a^{\prime}|s,a)=\pi_{D}(a^{\prime}|s^{\prime})T(s^{ \prime}|s,a). \tag{6}\]
The above balance equation is the basis of our imitation learning approach. If we can estimate \(P_{\pi_{D}}\) and \(T\) in (6) (estimates denoted by \(\widehat{P}\) and \(\widehat{T}\) respectively), we can then infer a policy \(\pi_{D}\) that satisfies it. Unfortunately, the problem is ill-conditioned, and we will need to impose an additional criterion, such as a regularization term.
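As a sanity check on (6), the following toy example builds a random finite MDP and verifies that the induced state–action transition kernel is a proper conditional distribution; everything here is synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_s, n_a = 3, 2
T = rng.random((n_s, n_a, n_s)); T /= T.sum(axis=2, keepdims=True)        # T(s'|s,a)
pi_D = rng.random((n_s, n_a)); pi_D /= pi_D.sum(axis=1, keepdims=True)    # pi_D(a|s)

# Eq. (6): P_{pi_D}(s', a' | s, a) = pi_D(a'|s') T(s'|s,a)
P = np.einsum('xb,sax->saxb', pi_D, T)

# every (s, a) slice must sum to one over (s', a')
assert np.allclose(P.reshape(n_s, n_a, -1).sum(axis=2), 1.0)
```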
We consider a parametric class of policies for the agent parametrized by \(\theta\) and formulate the following optimization problem:
\[\min_{\theta\in\Theta} \int_{(s^{\prime},a^{\prime})}\int_{(s,a)}\big{[}\widehat{P}(s^{ \prime},a^{\prime}|s,a)-\pi_{\theta}(a^{\prime}|s^{\prime})\widehat{T}(s^{ \prime}|s,a)\big{]}^{2} \tag{7}\] \[d\mu(s,a)\,d\mu(s^{\prime},a^{\prime})-\lambda\int_{s^{\prime}}H (\pi_{\theta}(\cdot|s^{\prime}))\,d\nu(s^{\prime}).\]
In (7), the first term ensures that the balance equation is satisfied approximately. The second term involves \(H(\pi_{\theta}(\cdot|s^{\prime}))\), which is the entropy of the probability distribution \(\pi_{\theta}(\cdot|s^{\prime})\) on actions when the state is \(s^{\prime}\). It penalizes less randomized policies in favor of highly randomized policies. \(\lambda\geq 0\) is a regularization parameter that governs relative weight on the first and second terms. Here, \(\mu\) and \(\nu\) denote reference measures on state-action pairs and states respectively. For example, they can be the counting measures obtained from the dataset. \(\Theta\) is a given parameter set. The parameters could be weights of a neural network, for example.
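For illustration, here is a minimal empirical version of (7) in which the reference measures \(\mu\) and \(\nu\) are taken to be the empirical measures over a batch of observed tuples, so the squared residual is evaluated only at observed \((s,a,s^{\prime},a^{\prime})\) pairs. The callables `P_hat`, `T_hat`, and `policy` are assumed to be supplied elsewhere; this sketch is our interpretation, not the authors' implementation.

```python
import numpy as np

def ckil_objective(batch, P_hat, T_hat, policy, lam=0.1, eps=1e-12):
    """batch: list of observed (s, a, s_next, a_next) tuples; actions are discrete indices."""
    balance, entropy = 0.0, 0.0
    for (s, a, s_next, a_next) in batch:
        probs = policy(s_next)                       # pi_theta(.|s') as a probability vector
        resid = P_hat(s_next, a_next, s, a) - probs[a_next] * T_hat(s_next, s, a)
        balance += resid ** 2                        # balance-equation mismatch
        entropy += -np.sum(probs * np.log(probs + eps))
    n = len(batch)
    return balance / n - lam * entropy / n           # smaller is better
```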
### Transition Density Estimation
We now discuss how to use kernel density estimation methods for estimating the two conditional densities \(P_{\pi_{D}}\) and \(T\). We first discuss the discrete state and action space setting where the form of estimates is simple and intuitive. We then discuss the continuous setup.
Discrete Spaces. For environments where both the state and action spaces are discrete, the estimates \(\widehat{P}\) and \(\widehat{T}\) can be calculated as:
\[\widehat{T}(s^{\prime}|s,a):=\frac{\eta(s,a,s^{\prime})}{\eta(s,a)}, \text{ and } \tag{9}\] \[\widehat{P}(s^{\prime},a^{\prime}|s,a):=\frac{\eta(s,a,s^{ \prime},a^{\prime})}{\eta(s,a)}\]
where \(\eta\) denotes the counting measure, i.e., the number of times a given tuple or sequence appears in the dataset \(D\). If the counting measure in the denominator is zero, i.e., that state-action pair is not visited at all in the given dataset, then we have no information about transitions from it. In such cases, we take the conditional density to be uniform.
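A direct implementation of the counting estimates in (9) might look as follows, with the uniform fallback for unvisited state–action pairs described above; the helper names are illustrative.

```python
from collections import Counter

def discrete_estimates(trajectories, n_states, n_actions):
    """trajectories: lists of (state, action) pairs observed under the demonstration policy."""
    cnt_sa, cnt_sas, cnt_sasa = Counter(), Counter(), Counter()
    for traj in trajectories:
        for (s, a), (s2, a2) in zip(traj[:-1], traj[1:]):
            cnt_sa[(s, a)] += 1
            cnt_sas[(s, a, s2)] += 1
            cnt_sasa[(s, a, s2, a2)] += 1

    def T_hat(s2, s, a):
        n_sa = cnt_sa[(s, a)]
        return cnt_sas[(s, a, s2)] / n_sa if n_sa else 1.0 / n_states       # uniform fallback

    def P_hat(s2, a2, s, a):
        n_sa = cnt_sa[(s, a)]
        return cnt_sasa[(s, a, s2, a2)] / n_sa if n_sa else 1.0 / (n_states * n_actions)

    return T_hat, P_hat
```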
Continuous Spaces. Estimation of transition densities in this setting is more challenging since no visited state is likely to appear twice, and most states are never visited in any given dataset. This calls for conditional density estimation using more sophisticated methods.
These include a range of techniques such as parametric approaches like mixture density network [1], normalizing flows [16]; non-parametric methods like Gaussian process conditional density estimation [10], CKDE [11]; and semi-parametric methods like least squares conditional density estimation [15]. In this study, we opt for using CKDE since it is a closed-form, non-parametric method that can be easily implemented and adapted to different data types. Further, CKDE provides a consistent estimator under appropriate conditions [10].
As described in Section 2.2, the kernel functions use a difference between two samples/values (e.g., \(x-x_{i}\)) as their argument (5). This difference can be alternatively replaced by a suitable distance metric, as indicated in prior work [1]. We define three distinct distance metrics: one to assess the dissimilarity between (next state, next action) pairs, another for (state, action) pairs, and a final one for next states. These metrics are denoted as \(d_{1}:(S\times A)\times(S\times A)\rightarrow\mathbb{R}_{+},d_{2}:(S\times A) \times(S\times A)\rightarrow\mathbb{R}_{+}\), and \(d_{3}:S\times S\rightarrow\mathbb{R}_{+}\) respectively. Similarly, we define \(H_{1}\), \(H_{2}\), and \(H_{3}\) as bandwidth matrices for the kernels \(K_{H_{1}}\), \(K_{H_{2}}\), and \(K_{H_{3}}\), respectively. \(H_{1}\), \(H_{2}\), and \(H_{3}\) are square matrices with dimensions matching those of the \((s^{\prime},a^{\prime})\) pair, \((s,a)\) pair, and \(s^{\prime}\), respectively. The CKDE approximations \(\widehat{P}\) and \(\widehat{T}\) are then computed as
\[\widehat{P}(s^{\prime},a^{\prime}|s,a)= \tag{10}\] \[\frac{\sum_{l=1}^{n}K_{H_{1}}\big{(}d_{1}((s^{\prime},a^{\prime}), (s^{\prime}_{l},a^{\prime}_{l}))\big{)}K_{H_{2}}\big{(}d_{2}((s,a),(s_{l},a_{ l}))\big{)}}{\sum_{l=1}^{n}K_{H_{2}}\big{(}d_{2}((s,a),(s_{l},a_{l}))\big{)}},\]
and

\[\widehat{T}(s^{\prime}|s,a)=\frac{\sum_{l=1}^{n}K_{H_{3}}\big{(}d_{3}(s^{\prime},s^{\prime}_{l})\big{)}K_{H_{2}}\big{(}d_{2}((s,a),(s_{l},a_{l}))\big{)}}{\sum_{l=1}^{n}K_{H_{2}}\big{(}d_{2}((s,a),(s_{l},a_{l}))\big{)}}.\]
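To illustrate how the estimates in (10) can be evaluated, the sketch below uses Euclidean distances for \(d_1\), \(d_2\), \(d_3\) and isotropic Gaussian kernels with scalar bandwidths, with discrete actions encoded numerically (e.g., one-hot). The bandwidth values and array layout are assumptions made for this example only.

```python
import numpy as np

def iso_gauss(dist, h, dim):
    """Isotropic Gaussian kernel evaluated at a vector of distances."""
    return np.exp(-0.5 * (dist / h) ** 2) / ((np.sqrt(2.0 * np.pi) * h) ** dim)

def ckde_estimates(S, A, S2, A2, h1=0.5, h2=0.5, h3=0.5):
    """S, A, S2, A2: (n, dim) arrays holding the observed (s, a, s', a') tuples."""
    SA, S2A2 = np.hstack([S, A]), np.hstack([S2, A2])

    def T_hat(s2, s, a):
        d2 = np.linalg.norm(SA - np.concatenate([s, a]), axis=1)
        d3 = np.linalg.norm(S2 - s2, axis=1)
        w2 = iso_gauss(d2, h2, SA.shape[1])
        return np.sum(iso_gauss(d3, h3, S2.shape[1]) * w2) / np.sum(w2)

    def P_hat(s2, a2, s, a):
        d2 = np.linalg.norm(SA - np.concatenate([s, a]), axis=1)
        d1 = np.linalg.norm(S2A2 - np.concatenate([s2, a2]), axis=1)
        w2 = iso_gauss(d2, h2, SA.shape[1])
        return np.sum(iso_gauss(d1, h1, S2A2.shape[1]) * w2) / np.sum(w2)

    return T_hat, P_hat
```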
We combine the transition estimation procedures of (9) and (10) with the balance equation based optimization problem in (7) in our conditional kernel imitation learning (CKIL) algorithm whose pseudo-code is presented in Algorithm 1.
### Theoretical Guarantees
In this section, we focus on the conditional kernel density estimation part of our approach and show that, as the training dataset size \(n\) approaches infinity, the CKDE estimates in (10) converge towards the corresponding true conditional distributions.
Here we introduce the kernel functions \(K_{i}\), \(i\in\{1,2,3\}\), so that, using (4), the kernels \(K_{H_{i}}\) for \(i\in\{1,2,3\}\) appearing in (10) are given by

\[K_{H_{i}}(x)=|H_{i}|^{-\frac{1}{2}}K_{i}(H_{i}^{-\frac{1}{2}}x), \tag{11}\]
where \(x\) is of appropriate dimension. Our theoretical guarantee holds under the following assumptions [1]:
**(A1)**: Suppose the buffer \(B\) in Algorithm 1 consists of \(n\) iid tuples \((s,a,s^{\prime},a^{\prime})\) generated according to a probability distribution \(P(s,a,s^{\prime},a^{\prime})=\mu(s,a)\bar{P}_{\pi_{D}}(s^{\prime},a^{\prime}|s,a)\), where \(P_{\pi_{D}}\) is the transition probability density of the induced Markov chain on the state-action space under the demonstration policy \(\pi_{D}\) (see (6)) and \(\mu\) is a reference measure on \((s,a)\). Further, \(P\) has a density function \(g\) that is square-integrable and twice differentiable, with all of its second-order partial derivatives bounded, continuous and square integrable. Also assume that the marginals \(P(s^{\prime},s,a)\) and \(P(s,a)\) satisfy these properties.
**(A2)**: The kernels \(K_{i}\) for \(i\in\{1,2,3\}\) in (11) are square integrable, zero-mean, spherically symmetric, and with common finite second-order moment \(\int_{\mathbb{R}^{m_{i}}}zz^{T}K_{i}(z)dz=\sigma^{2}I_{m_{i}}\).
**(A3)**: For each kernel \(K_{H_{i}}\) as defined in (4), the bandwidth matrices \(H_{i}(n)\) (where \(n\) is the number of tuples in \(B\)) form a sequence of positive definite, symmetric matrices such that \(H_{i}(n)\to 0\) and \(n^{-1/2}|H_{i}(n)|^{-1/2}\to 0\) as \(n\rightarrow\infty\).
**Theorem 1**.: _Suppose assumptions (A1)-(A3) are true. Let \(\widehat{P}_{n}\) and \(\widehat{T}_{n}\) be the CKDE estimates constructed using (10) and a buffer \(B\) with \(n\) tuples. Then, for each \((s,a,s^{\prime},a^{\prime})\), as \(n\rightarrow\infty\),_
\[\widehat{P}_{n}(s^{\prime},a^{\prime}|s,a)\smash{\mathop{\longrightarrow} \limits^{P}}P_{\pi_{D}}(s^{\prime},a^{\prime}|s,a),\] \[\widehat{T}_{n}(s^{\prime}|s,a)\smash{\mathop{\longrightarrow} \limits^{P}}T(s^{\prime}|s,a). \tag{12}\]
Proof.: Let \(\widehat{f}(\cdot)\) and \(\widehat{g}(\cdot)\) represent the numerator and denominator of \(\widehat{T}(s^{\prime}|s,a)\) respectively from (10), i.e.,
\[\widehat{f}(s^{\prime},s,a)=\sum_{l=1}^{n}K_{H_{3}}\bigl{(}d_{3}(s^{\prime},s^{\prime}_{l})\bigr{)}K_{H_{2}}\bigl{(}d_{2}((s,a),(s_{l},a_{l}))\bigr{)}, \tag{13}\]

\[\widehat{g}(s,a)=\sum_{l=1}^{n}K_{H_{2}}\bigl{(}d_{2}((s,a),(s_{l},a_{l}))\bigr{)}.\]
\(\widehat{f}(\cdot)\) and \(\widehat{g}(\cdot)\) are, in fact, the KDEs of \(P(s,a,s^{\prime})\) and \(P(s,a)\), where \(P(s,a,s^{\prime})\) and \(P(s,a)\) are marginals of \(P(s,a,s^{\prime},a^{\prime})\) mentioned in Assumption (A1).
We now show why Lemma 2, given in Appendix, can be adopted to prove this theorem under assumptions (A1)-(A3). We argue as follows:
1. We assume (A1) that \(P(s,a,s^{\prime},a^{\prime})\) has a density function \(g\) that is square-integrable and twice differentiable, with all of its second-order partial derivatives bounded, continuous and square integrable and so does its marginals \(P(s^{\prime},s,a)\) and \(P(s,a)\). This leads to the satisfaction of condition (C1).
2. From assumption (A2), \(\int_{\mathbb{R}^{m_{i}}}zK_{i}(z)dz=0\) for \(i=\{2,3\}\), where \(z_{i}\) is a vector of size \(m_{i}\). Partition the vector \(z\) as \(z=[z_{3},z_{2}]\) and let \(m=m_{2}+m_{3}\) and \(K(z)=K_{3}(z_{3})K_{2}(z_{2})\). Then for \(t\leq m_{3}\), \[\int_{\mathbb{R}^{m}}z_{t}K(z)dz=\int_{\mathbb{R}^{m}}z_{t}K_{3}( z_{3})K_{2}(z_{2})dz\] \[\quad=\int_{\mathbb{R}^{m_{2}}}K_{2}(z_{2})dz_{2}\int_{\mathbb{R} ^{m_{3}}}z_{t}K_{3}(z_{3})dz_{3}\] (14) \[\quad=\int_{\mathbb{R}^{m_{3}}}z_{t}K_{3}(z_{3})dz_{3}=0,\] which follows from (A2). This can be shown for any \(t\in\{1,2,\ldots,m\}\). Hence, \(\int_{\mathbb{R}^{m}}zK(z)dz=0\) is satisfied corresponding to condition (C2).
Now,
\[\int_{\mathbb{R}^{m}} zz^{T}K(z)dz\] \[=\int_{\mathbb{R}^{m}}\begin{bmatrix}z_{3}z_{3}^{T}&z_{3}z_{2}^{T}\\ z_{2}z_{3}^{T}&z_{2}z_{2}^{T}\end{bmatrix}K_{3}(z_{3})K_{2}(z_{2})dz_{3}dz_{2}\] \[=\sigma^{2}\begin{bmatrix}I_{m_{3}}&0\\ 0&I_{m_{2}}\end{bmatrix}=\sigma^{2}I_{m}.\]
Hence, \(K(z)=K_{3}(z_{3})K_{2}(z_{2})\) satisfies condition (C2).
3. Consider \(H(n)\) to be a block diagonal matrix with \(H_{3}(n)\) and \(H_{2}(n)\) as the two block diagonal entries, with \(H_{3}(n)\) and \(H_{2}(n)\) satisfying assumption (A3). Then the matrices \(H(n)\) form a sequence of positive definite, symmetric matrices. Using (5) with this \(H\), the kernel estimate for \(P(s^{\prime},s,a)\) takes the product kernel form as seen for \(\widehat{f}(\cdot)\) in (13). Now, since \(|H(n)|=|H_{3}(n)||H_{2}(n)|\), it follows that \(n^{-1}|H(n)|^{-1/2}\to 0\) as \(n\rightarrow\infty\), because \(n^{-1/2}|H_{i}(n)|^{-1/2}\to 0\) for \(i\in\{2,3\}\). Also, vec \(H(n)\to 0\) since vec \(H_{i}(n)\to 0\) for \(i\in\{2,3\}\). Therefore, condition (C3) is satisfied.
Having satisfied conditions (C1)-(C3), we may apply the argument found in Sections 2.6-2.9 of Chacon and Duong (2018) and conclude that
\[\widehat{f}(s^{\prime},s,a)\smash{\mathop{\longrightarrow}\limits^{P}} P(s^{\prime},s,a),\] \[\widehat{g}(s,a)\smash{\mathop{\longrightarrow}\limits^{P}} P(s,a).\]
It follows from the Continuous Mapping Theorem (Mann and Wald 1943) that taking the ratio of \(\widehat{f}\) and \(\widehat{g}\) produces a consistent estimator of \(\frac{P(s^{\prime},s,a)}{P(s,a)}=T(s^{\prime}|s,a)\). That is, \(\widehat{T}_{n}(s^{\prime}|s,a)=\frac{\widehat{f}(s^{\prime},s,a)}{\widehat{ g}(s,a)}\smash{\mathop{\longrightarrow}\limits^{P}}T(s^{\prime}|s,a)\).
A similar approach can be used for establishing the asymptotic convergence in probability for the CKDE of \(P_{\pi_{D}}(s^{\prime},a^{\prime}|s,a)\).
## 4 Experimental Results
### Experimental Setup
We now validate the empirical performance of the CKIL algorithm on a number of benchmark environments from OpenAI Gym Brockman et al. (2016), which span a range of complexities used in the RL literature: the MountainCar environment, where the goal is to reach the top of the mountain Moore (1990); CartPole, which aims to balance a pendulum on a frictionless track Barto, Sutton, and Anderson (1983); Acrobot, which aims to swing limbs around a system of joints to achieve a specified height Sutton (1995); and LunarLander, which aims to optimize a rocket's trajectory to achieve a successful landing Klimov (2019).
To create a demonstration dataset \(D\), we generate data using pre-trained and hyperparameter-optimized agents available in the RL Baselines Zoo Raffin (2020). In particular, we used a PPO agent for LunarLander-v2, a DQN agent for CartPole-v1 and an A2C agent for Acrobot-v1.
Benchmark Algorithms. We compare the performance of our CKIL algorithm (Algorithm 1) with a range of offline IRL/IL/AIL benchmarks. This comprehensive assessment covers a spectrum of methodologies, including the inherently offline Behavioral Cloning (BC); ValueDICE (VDICE), a sample-efficient AIL approach designed for offline scenarios by removing replay regularization; reward-regularized classification (RCAL), a large margin classification approach, which introduces a sparsity-based penalty on inferred rewards to exploit dynamics information; Energy-based Distribution Matching (EDM), an offline imitation learning algorithm that captures the expert's state occupancy patterns through explicit training of an energy-based model; AVRIL, a recent model-free offline IRL technique employing a variational approach to simultaneously learn an approximate posterior distribution over rewards and policies; and Deep Successor Feature Network (DSFN), an offline adaptation of the max-margin IRL algorithm that transcends linear approaches by introducing a deep network architecture and employing least-squares temporal-difference learning to produce both reward and policy outputs.
Implementation. The policy \(\pi_{\theta}\) in (8) is embodied by a neural network (NN) architecture. This NN comprises two hidden layers featuring the Rectified Linear Unit (ReLU) activation function. The final layer employs a softmax function to produce a probability distribution over actions when given a state as an input. To facilitate comparison, all benchmarks adopt a common neural network architecture consisting of two hidden layers comprising 64 units each, with Exponential Linear Unit (ELU) activation functions. Training is carried out using the Adam optimizer Kingma and Ba (2015) with individually tuned learning rates.
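A sketch of the described policy network is given below; the layer width, learning rate, and the LunarLander-style input/output dimensions are illustrative placeholders rather than the exact training configuration.

```python
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    """pi_theta(.|s): two hidden ReLU layers followed by a softmax over discrete actions."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions), nn.Softmax(dim=-1),
        )

    def forward(self, state):
        return self.net(state)                 # probability distribution over actions

policy = PolicyNet(state_dim=8, n_actions=4)   # e.g., LunarLander-sized dimensions
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
```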
In the case of VDICE, we used the publicly available source code provided by Kostrikov et al. (2020). It is worth noting that, for VDICE, offline learning is achieved by configuring the "replay regularization" coefficient to zero. Our execution of EDM leveraged the source code accessible at [11]. It is essential to highlight that the contrast between BC and EDM predominantly stems from the introduction of \(L_{\rho}\), an occupancy loss defined in the EDM work, while deriving the RCAL loss is a straightforward process involving the inversion of the Bellman equation. As for AVRIL and DSFN, the applicable source codes are accessible at [10] and [12], respectively. Additional specifics regarding the experimental setup and benchmark implementations are available in the Appendix.

Figure 1: Average rewards attained by the CKIL agent in a discretized MountainCar environment for a varying number of trajectories (higher values indicate better performance).
Choice of Kernel. We utilize the Gaussian kernel due to its universal nature, i.e., its capacity to uniformly approximate arbitrary continuous target functions on any compact subset of the input space [13]. Moreover, its square integrability, finite second-order moment, and spherical symmetry are desirable properties for density estimation, facilitating the asymptotic convergence of the conditional kernel density estimator to the true conditional density.
We consider a Euclidean distance metric for \(d_{1}\), \(d_{2}\), and \(d_{3}\) and utilize a diagonal bandwidth matrix with the same values across its diagonal elements. These matrices can then be denoted as \(H_{i}=h_{i}I_{m_{i}}\), where \(m_{i}\) is the corresponding appropriate dimension for \(i=1,2,3\). Here, each \(h_{i}\) is treated as a hyper-parameter and we tuned these in our experimental work depending on what yielded better performance. We would like to emphasize that in prior research [16, 17, 18, 19], approaches for systematic selection of bandwidth parameters, which should decrease as the dataset size grows, have been developed. These methods can be applied to more intricate problems where manual tuning is impractical. Specific values of \(h_{i}\) employed for various experiments are detailed in the Appendix.
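The paper treats each \(h_i\) as a manually tuned hyper-parameter. As a purely illustrative alternative, a classical data-driven choice such as a diagonal variant of Scott's rule also produces a bandwidth matrix that shrinks as the dataset grows, in the spirit of the systematic selection rules referenced above.

```python
import numpy as np

def scott_bandwidth(samples):
    """Diagonal Scott's-rule bandwidth matrix (covariance scale) for an (n, d) sample array."""
    n, d = samples.shape
    return np.diag(samples.var(axis=0)) * n ** (-2.0 / (d + 4))

# usage: bandwidth for (s, a) pairs stacked as rows of a data matrix
H2 = scott_bandwidth(np.random.default_rng(0).normal(size=(1000, 5)))
```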
### Results
Discrete States and Actions. We first consider the discretized mountain car problem [15] as an example of a discrete state and action space environment. To that end, we transform the original continuous \(2\)-dimensional state space of the MountainCar environment into a grid configuration measuring \(15\) by \(15\). Subsequently, we estimate \(\widehat{P}\) and \(\widehat{T}\) using equation (9). The dataset \(D\) is generated using the policy outlined in [10]. We begin with one trajectory in \(D\), and increase to \(50\) trajectories, with observations summarized in Figure 1. With more data, we see that the CKIL agent's performance improves and achieves expert-level proficiency with 50 trajectories.
Continuous States. We next address our main goal, namely imitation learning for continuous state environments. When provided ample demonstration data, all benchmarks exhibit the capability to attain performance comparable to the demonstrator level. Thus, we evaluate the algorithms' capacity to manage sample complexity in scenarios with limited data. To achieve this, we assess their performance while granting access to a specific number of trajectories, which we vary. This setup mirrors the configuration described in [11]. We vary the amount of demonstration data \(D\), ranging from a single trajectory to 15, to illustrate the sample complexity concept.
The algorithms used a batch of 1, 3, 7, 10, or 15 trajectories, each uniformly drawn from a pool of 1000 expert trajectories. Subsequently, each algorithm underwent a training phase until convergence. For testing, 300 live roll-outs were conducted in the simulated environment, during which the average accumulated reward for each episode was measured. This entire process was repeated 10 times, utilizing diverse initialization and observed trajectories in each iteration.
Figure 2 illustrates the average rewards for policies learned by different algorithms with increasing numbers of demonstration trajectories. Across all tasks, the results showcase CKIL's capability to learn effective policies, manifesting robust and consistently superior performance compared to other well-known algorithms, especially when data is scarce.

Figure 2: Average rewards achieved by benchmark IRL/IL/AIL and CKIL policies during real-time deployment plotted against the number of trajectories included in demonstration dataset \(D\) (higher values indicate better performance).
Remarkably, even with a mere single expert trajectory, CKIL manages to attain expert-level performance in the Acrobot environment, and it approaches expert-level performance in the CartPole environment. Moreover, employing only three trajectories enables it to achieve expert-level performance on the CartPole environment. In the LunarLander environment, a substantial performance enhancement becomes evident within \(3\) trajectories. At this juncture, the agent's performance nearly matches that of an expert, surpassing all benchmark algorithms by a considerable margin.
Additionally, it is worth highlighting that within the confines of this exclusively batch-oriented context, the off-policy adaptations of online algorithms (VDICE, DSFN) do not exhibit the same degree of consistent performance as their inherently offline counterparts. This highlights the inadequacy of solely adopting online algorithms in offline scenarios. Furthermore, the challenges associated with estimating the expectation of an exponential distribution may contribute to VDICE's potential under-performance when compared to the baseline behavioral cloning algorithm.
## 5 Conclusions
In this paper, we introduce Conditional Kernel Imitation Learning (CKIL), a simple but novel approach to imitation learning for continuous state space problems. The approach depends on finding policies that satisfy a Markov balance equation, and hence incorporates the sequential nature of the problem in a natural way. It uses conditional kernel density estimators for estimating transition densities of the MDP and the induced Markov chain, and allows behavioral policies to be represented as neural networks, which together make the approach almost universally applicable. Furthermore, it does not need access to a generative model or further online interaction data, does not require reward inference as a first step, does not perform distribution matching, and allows for batch processing of offline data for scalability. It also makes no assumptions about the form of the transition densities, nor, of course, of the reward function. The algorithm is supported by theoretical consistency results for the estimators and shows remarkably good empirical performance compared to a wide range of state-of-the-art IL, IRL, and AIL algorithms against which comparison is meaningful, on a number of continuous state OpenAI Gym benchmark environments.
There are a number of interesting directions for future work. Since the method relies on kernel density estimation, scalability to high-dimensional problems needs to be addressed. One potential remedy involves exploring a modified version of CKDE designed to enhance computational efficiency [12]. Given that our framework is fundamentally rooted in the estimation of conditional probability densities, it readily lends itself to adaptations that substitute other estimators for the CKDE, as evidenced in various alternative approaches [20], [17], [16]. In fact, conditional densities themselves could have non-parametric representations. There is also scope for deeper and more detailed theoretical analysis, including non-asymptotic sample complexity bounds. Such bounds are rarely available in the current literature for IL, IRL or AIL algorithms.
|
2306.04865 | MyStyle++: A Controllable Personalized Generative Prior | In this paper, we propose an approach to obtain a personalized generative
prior with explicit control over a set of attributes. We build upon MyStyle, a
recently introduced method, that tunes the weights of a pre-trained StyleGAN
face generator on a few images of an individual. This system allows
synthesizing, editing, and enhancing images of the target individual with high
fidelity to their facial features. However, MyStyle does not demonstrate
precise control over the attributes of the generated images. We propose to
address this problem through a novel optimization system that organizes the
latent space in addition to tuning the generator. Our key contribution is to
formulate a loss that arranges the latent codes, corresponding to the input
images, along a set of specific directions according to their attributes. We
demonstrate that our approach, dubbed MyStyle++, is able to synthesize, edit,
and enhance images of an individual with great control over the attributes,
while preserving the unique facial characteristics of that individual. | Libing Zeng, Lele Chen, Yi Xu, Nima Kalantari | 2023-06-08T01:35:43Z | http://arxiv.org/abs/2306.04865v3 | # MyStyle++: A Controllable Personalized Generative Prior
###### Abstract.
In this paper, we propose an approach to obtain a personalized generative prior with explicit control over a set of attributes. We build upon MyStyle, a recently introduced method, that tunes the weights of a pre-trained StyleGAN face generator on a few images of an individual. This system allows synthesizing, editing, and enhancing images of the target individual with high fidelity to their facial features. However, MyStyle does not demonstrate precise control over the attributes of the generated images. We propose to address this problem through a novel optimization system that organizes the latent space in addition to tuning the generator. Our key contribution is to formulate a loss that arranges the latent codes, corresponding to the input images, along a set of specific directions according to their attributes. We demonstrate that our approach, dubbed MyStyle++, is able to synthesize, edit, and enhance images of an individual with great control over the attributes, while preserving the unique facial characteristics of that individual.
sample the convex hull of the anchor points until a desired image is reached by chance. For image editing, MyStyle uses the editing directions provided by approaches, such as InterFaceGAN (Shen et al., 2020), to offer controllability over the attributes of the generated images. Since these editing directions are learned over the entire domain, they may not reside within the personalized subspace. As shown in Fig. 2 (top), by performing the edits using the original editing direction, the latent codes will quickly fall outside the personalized subspace, producing images with a different identity. To address this issue, MyStyle personalizes the editing direction by projecting it into the subspace. While the projected edit direction keeps the latent codes within the personalized subspace, it loses the ability to perform disentangled edits. As shown in Fig. 2 (middle), removing the expression also results in changing the yaw angle.
Our goal is to address these problems by providing full control over a set of pre-defined attributes of the generated images. To this end, we make a key observation that anchors corresponding to a single person are usually clustered together in a small region within the latent space. Therefore, we can organize the latent space within that region by rearranging the anchors. Since it is easier for a generator, like StyleGAN, to preserve the smoothness of the output variation over the space of the latent space, rearranging the anchors causes the space in between to be dragged with them, resulting in an organized latent space.
Armed with this observation, we propose a novel optimization system to personalize a generative prior by both tuning the generator and organizing the latent space through optimizing the anchors. Our key contribution is to formulate a loss function that arranges the anchors with specific attributes along a particular direction in the latent space. Specifically, we project the anchors onto a set of principal axes and minimize the variance of the projections over all the anchors that share the same attribute value. By doing so, the generator becomes highly tuned to one individual, while the attributes can be controlled within a small hypercube in the latent space.
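The sketch below illustrates one way to express this idea for a single attribute: anchors are projected onto a chosen direction, and the variance of the projections is penalized within each group of anchors sharing the same quantized attribute value. This is our reading of the stated loss, written for illustration; it is not the authors' exact formulation or code.

```python
import torch

def attribute_alignment_loss(anchors, attr_labels, direction):
    """anchors: (N, D) latent codes; attr_labels: (N,) quantized attribute ids;
    direction: (D,) unit vector associated with this attribute."""
    proj = anchors @ direction                       # scalar coordinate of each anchor
    loss = anchors.new_zeros(())
    for value in torch.unique(attr_labels):
        group = proj[attr_labels == value]
        if group.numel() > 1:
            loss = loss + group.var()                # same attribute value -> same coordinate
    return loss
```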
We demonstrate that our proposed method, called MyStyle++, allows synthesizing images with high fidelity to the characteristics of one individual, while providing full control over a set of pre-defined attributes. We also show that our method can better disentangle different attributes compared to MyStyle (Nikan et al., 2022). Moreover, we demonstrate that our system can produce images with a desired attribute during image enhancement.
## 2. Related Work
### Deep Generative Networks
Generative Adversarial Networks (GANs) consist of two main modules: a generator and a discriminator (Goodfellow et al., 2014). The generator takes a noise vector as input and tries to capture the distribution of true examples. The generator focuses on producing an output that fools the discriminator, whose purpose is to classify whether the output is real or fake. GANs have been used extensively to synthesize images that are in line with the training data distribution (Brock et al., 2019; Karras et al., 2018, 2019; Zhu et al., 2017). Among different variants, StyleGAN (Karras et al., 2018, 2019, 2020), which is a carefully re-designed generator architecture, produces the best results, particularly for human faces, that are indistinguishable from real photographs. Instead of feeding the latent code only through the input layer, StyleGAN maps the input to an intermediate latent space, which is injected at each convolution block of the generator. This architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair). In our work, we use StyleGAN2 (Karras et al., 2020) as the base network and personalize it by tuning the generator and organizing the latent space.
### Controllable GANs
StyleGAN generates photorealistic portrait images of faces, but it lacks a rig-like control over semantic face parameters that are interpretable in 3D, such as face pose, expressions, and scene illumination. Recently, many StyleGAN variants (Abdal et al., 2021; Gal et al., 2022; Harkonen et al., 2020; Patashnik et al., 2021; Shoshan et al., 2021; Tov et al., 2021; Wang et al., 2021, 2022; Wu et al., 2021) have been introduced to address this problem. For example, StyleFlow (Abdal et al., 2021) proposes flow models for non-linear exploration of a StyleGAN latent space. GANSpace (Harkonen et al., 2020) attempts to analyze the GAN space by identifying latent directions based on principal component analysis (PCA), applied either in latent space or feature space.
Most controllable portrait image generation methods (Bret et al., 2021; Ji et al., 2022; Sun et al., 2022; Tewari et al., 2020, 2020; Wang et al., 2021, 2019) either rely on 3D morphable face models (3DMMs) (Blanz and Vetter, 1999) to achieve rig-like control over StyleGAN, or utilize another modality as guidance (_e.g._, facial landmark and audio) to control the generation. For instance, by building the bijective mapping between the StyleGAN latent code and the 3DMM parameter sets, StyleRig (Tewari et al., 2020) achieves the controllable parametric nature of existing morphable face models and the high photorealism of generative face models. Ji et al. propose EAMM (Ji et al., 2022) to generate one-shot emotional talking faces controlled by an emotion source video and an audio clip.
Figure 2. On the top, we show the editing results of MyStyle using expression direction from InterFaceGAN (Shen et al., 2020). Since the original direction does not reside within the personalized subspace, editing with this direction produces results with altered identity (rightmost image). By performing the edit using the projected direction, the identity is better preserved, but the expression becomes entangled with the yaw angle. Our method preserves the identity and keeps the other attributes intact while removing the smile.
Unfortunately, these approaches either struggle to retain crucial facial features (identity) after editing (Nitzan et al., 2022) or are unable to maintain explicit control over fully disentangled attributes.
### Few-shot GANs and Personalization
Drawing inspiration from the human capability of picking up the essence of a novel object from a small number of examples and generalizing from there, many works (Chen et al., 2021; Liu et al., 2019; Nitzan et al., 2022; Ojha et al., 2021; Wang et al., 2019; Zakharov et al., 2019) seek to further improve the generation quality by adapting the pre-trained model to few-shot image samples. Zakharov et al. (2019) propose a framework that performs lengthy meta-learning on a large dataset of videos. After this training, this method is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators. The appearance information of the unseen target person is learned by the adaptive instance normalization layers. More recently, MyStyle (Nitzan et al., 2022) tunes the weights of a pre-trained StyleGAN face generator to form a local, low-dimensional, personalized manifold in the latent space within a small reference set of portrait images of the target person. The synthesized images within the adapted personalized latent space have better identity-preserving ability compared with the original StyleGAN. However, MyStyle does not demonstrate precise control of the attributes of the generated images. We focus on addressing this issue by organizing the personalized subspace according to a set of pre-defined attributes.
## 3. Algorithm
Given a few images of an individual with a set of corresponding attributes, our goal is to obtain a personalized generative prior that allows us to synthesize images of that individual with high fidelity and _full control over the desired attributes_. Specifically, we use the pre-trained StyleGAN (Karras et al., 2019, 2020) face generator and adapt it to the target individual through a novel optimization system. During tuning, we organize the latent space by optimizing the anchors according to the attributes to be able to easily sample an image with a desired set of attributes. Additionally, we optimize the generator to ensure it can produce images that are faithful to the characteristics of the target individual. Below we discuss our approach in detail by first explaining our data pre-processing.
### Data Pre-processing
Given a set of \(N\) images of an individual, we first follow the pre-processing steps of MyStyle (Nitzan et al., 2022) to align, crop, and resize the images. We then estimate a set of \(M\) pre-defined attributes (e.g., yaw and expression) for each image. Certain attributes have a discrete domain, while others are continuous. We leave the discrete attributes unchanged, but quantize the range of the continuous ones to obtain \(a_{m,p}\), where \(m\) refers to the attribute type \(m\in\{1,\cdots,M\}\), while \(p\in\{1,\cdots,P(m)\}\) is the index of the attribute value. Note that the number of quantization levels \(P(m)\) could be different for each attribute \(m\). The estimated attributes for each image are then snapped to the nearest quantized values. A simple example illustrating this process is shown in Fig. 3. We provide more details on the attributes and our quantization strategy in Sec. 4.
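To make the quantization step concrete, here is a minimal sketch; the attribute names, the bin widths, and the input format are illustrative placeholders rather than the paper's released code (the actual levels used are described in Sec. 4).

```python
import numpy as np

# Hypothetical per-attribute bin widths (degrees for angles, years for age).
BIN_WIDTH = {"yaw": 5.0, "pitch": 5.0, "age": 2.0}

def quantize_attributes(raw_attrs):
    """Snap each continuous attribute to its nearest quantization level a_{m,p}.

    raw_attrs: dict mapping attribute name -> list of per-image float values.
    Returns a dict mapping attribute name -> array of quantized values.
    """
    quantized = {}
    for name, values in raw_attrs.items():
        values = np.asarray(values, dtype=float)
        if name in BIN_WIDTH:  # continuous attribute: snap to the nearest level
            w = BIN_WIDTH[name]
            quantized[name] = np.round(values / w) * w
        else:                  # discrete attribute: keep as-is
            quantized[name] = values
    return quantized

# Example: yaw angles estimated for 4 images, snapped to 5-degree levels
print(quantize_attributes({"yaw": [-12.3, -2.6, 2.6, 17.0]}))
# {'yaw': array([-10., -5., 5., 15.])}
```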
### Controllable Personalization
We begin by projecting the input images into the latent space of StyleGAN, using the pre-trained encoder by Richardson et al. (2021), to obtain a set of \(N\) latent codes \(\{\mathbf{w}_{n}\}_{n=1}^{N}\). We follow MyStyle (Nitzan et al., 2022) terminology and call these latent codes, anchors. As discussed, in addition to tuning the generator to improve its fidelity to the target individual, we would like to organize the latent space to have full control over a set of attributes. An overview of our approach is shown in Fig. 4.
Our key observation is that we can organize the latent space by only rearranging the anchors. This is because the output of StyleGAN changes smoothly with respect to the input, and thus as an anchor moves, its neighborhood will be dragged with it. Based on this observation, we formulate an anchor loss to rearrange the anchors based on their attributes.
Before explaining our anchor loss in detail, we discuss the properties of an ideal latent space: **1)** Each attribute should change along a known direction; \(\mathbf{d}_{m}\) for the \(m^{\text{th}}\) attribute. This is to ensure we can perform semantic editing and change a particular attribute by simply modifying a latent code along that attribute's direction. **2)** All the latent codes that project to the same value along an attribute direction should have the same attribute. For example, all the latent codes that project to 0.5 along the yaw direction should correspond to images of front faces. This allows us to directly sample an image with a certain set of attributes by ensuring that the latent code projects to appropriate values along each attribute direction. **3)** The directions for different attributes should be orthogonal to guarantee that the attributes are fully disentangled and changing one will not result in modifying the other attributes.
We propose to codify the three properties into the following anchor loss:
Figure 3. Illustration of our data organization with two attributes \(M=2\), yaw and expression. We quantize the range of continuous attributes to obtain a set of discrete levels \(a_{m,p}\) across all attributes. The estimated attributes for each image are then assigned to their nearest discrete level.
\[\mathcal{L}_{\text{anc}}=\sum_{m=1}^{M}\mathcal{L}_{d}(\mathbf{d}_{m})=\sum_{m=1}^ {M}\sum_{n=1}^{N}\|\mathbf{w}_{n}\cdot\mathbf{d}_{m}-c_{n,m}\|. \tag{1}\]
Here, \(\mathbf{w}_{n}\cdot\mathbf{d}_{m}\) computes the projection of the anchor for the \(n^{\text{th}}\) image onto the direction of \(m^{\text{th}}\) attribute through dot product. Moreover, \(c_{n,m}\) is the average of the projected anchors into direction \(\mathbf{d}_{m}\) for all the images with the same \(m^{\text{th}}\) attribute as the \(n^{\text{th}}\) image (subset denoted as \(\mathcal{N}_{n,m}\)). Formally, we can write this as follows:
\[c_{n,m}=\frac{1}{|\mathcal{N}_{n,m}|}\sum_{k\in\mathcal{N}_{n,m}}\mathbf{w}_{ k}\cdot\mathbf{d}_{m}, \tag{2}\]
where
\[\mathcal{N}_{n,m}=\{k\in\{1,\cdots,N\}\mid k\neq n,f_{a}(\mathbf{I}_{n})[m]=f _{a}(\mathbf{I}_{k})[m]\}. \tag{3}\]
Here, \(f_{a}(\mathbf{I}_{n})[m]\) returns the quantized \(m^{\text{th}}\) attribute of image \(\mathbf{I}_{n}\). We note that \(c_{n,m}\) changes at every iteration of the optimization. By minimizing the loss in Eq. 1, we ensure that all the anchors with the same \(m^{\text{th}}\) attribute, project to the same point along \(m^{\text{th}}\) attribute direction \(\mathbf{d}_{m}\), satisfying our second desired property. This loss also ensures that each attribute is changed along its specific direction, satisfying the first property. This can be seen visually in Fig. 3; for example, if all the images with a specific yaw (each column) project to the same point in the yaw direction, moving along this direction will change the yaw.
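A minimal PyTorch sketch of Eqs. (1)–(3) is given below. The tensor layout (anchors as rows of `W`, quantized labels as integer indices) and the choice of an L1 distance for the norm are assumptions made for illustration, not taken from the authors' code.

```python
import torch

def loss_along_direction(W, d, labels):
    """L_d(d) from Eq. (1) for one attribute: anchors that share the same
    quantized label should project to the same point along direction d."""
    proj = W @ d                                  # (N,) projections w_n . d
    loss = W.new_zeros(())
    for n in range(W.shape[0]):
        same = labels == labels[n]
        same[n] = False                           # N_{n,m} excludes the image itself
        if same.any():
            c_nm = proj[same].mean()              # Eq. (2), recomputed every iteration
            loss = loss + (proj[n] - c_nm).abs()  # Eq. (1) with an L1 norm
    return loss

def anchor_loss(W, directions, attr_labels):
    """Eq. (1): sum of L_d over the M attribute directions."""
    return sum(loss_along_direction(W, directions[m], attr_labels[:, m])
               for m in range(directions.shape[0]))
```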
To satisfy the third property, we apply principal component analysis (PCA) to all the \(N\) anchors and use a subset of the principal components as our \(\mathbf{d}_{m}\). We assign a specific principal component to each \(\mathbf{d}_{m}\) through the following objective:
\[\mathbf{d}_{m}=\arg\min_{\mathbf{v}_{i}\in\mathbf{V}}\mathcal{L}_{d}(\mathbf{ v}_{i}) \tag{4}\]
where \(\mathbf{V}\) is the set of all the principal components. The intuition behind this is that we would like to perform the least amount of rearrangement by ensuring that the latent space is already well aligned with respect to the selected directions. Note that we perform PCA at every iteration of training. Therefore, as we rearrange the anchor points in different iterations, the directions will be updated as well. We also note that although the objective in Eq. 4 could potentially assign different principal components to a particular attribute direction \(\mathbf{d}_{m}\) in different iterations, we did not observe this phenomenon in our experiments.
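A sketch of the direction assignment in Eq. (4) follows, reusing `loss_along_direction` from the previous snippet; the cap on the number of candidate components is arbitrary, and recomputing the PCA every iteration is left to the surrounding training loop.

```python
import torch

def pick_attribute_directions(W, attr_labels, num_components=10):
    """Sketch of Eq. (4): give each attribute the principal component of the
    anchors along which its single-attribute loss L_d is already smallest."""
    Wc = W - W.mean(dim=0, keepdim=True)
    # rows of Vt are the principal components of the centered anchors
    _, _, Vt = torch.linalg.svd(Wc, full_matrices=False)
    components = Vt[:num_components]                        # (num_components, D)

    directions = []
    for m in range(attr_labels.shape[1]):
        losses = torch.stack([loss_along_direction(W, v, attr_labels[:, m])
                              for v in components])
        directions.append(components[losses.argmin()])
    return torch.stack(directions)                          # (M, D) selected components
```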
To perform personalization, we minimize the combination of the anchor and reconstruction losses
\[\mathcal{L}=\mathcal{L}_{\text{anc}}+\mathcal{L}_{\text{rec}}, \tag{5}\]
where the reconstruction loss \(\mathcal{L}_{\text{rec}}\) minimizes the error between the synthesized \(G(\mathbf{w}_{n})\) and the corresponding input images \(\mathbf{I}_{n}\). We follow MyStyle and use a combination of LPIPS (Zhang et al., 2018) and L2 as our reconstruction loss. During optimization, both the latent codes corresponding to the anchors and the weights of the generator are updated. Note that in addition to adapting the generator to the input image set, the reconstruction loss plays a critical role in avoiding trivial solutions to the anchor loss, e.g., collapsing all the anchors to a single point.
Once the optimization is performed, we obtain an organized latent space \(\mathcal{W}^{*}\) and tuned generator \(G^{*}\). All the attributes can be controlled within a \(M\)-dimensional hypercube in the organized latent space. The bounds of this hypercube can simply be found by projecting all the anchors into each axis of the hypercube \(\mathbf{d}_{m}\) and computing the minimum and maximum values. Note that all the other attributes, not being used during optimization, are encoded in the remaining PCA dimensions.
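The hypercube bounds can be read off the organized anchors directly; `W_star` and `directions` below are assumed to come from the optimization above.

```python
def attribute_bounds(W_star, directions):
    """Per-attribute bounds of the hypercube: min/max projection of the
    organized anchors onto each attribute direction d_m."""
    proj = W_star @ directions.T                       # (N, M) projections
    return proj.min(dim=0).values, proj.max(dim=0).values
```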
### Controllable Synthesis, Edit, and Enhancement
We now describe how to use our personalized generative prior for various tasks.
_Synthesis:_ Controlling the synthesized images can easily be done by ensuring that the sampled latent code projects to the desired location in the \(M\)-dimensional hypercube. However, special care must be taken to ensure the latent code does not fall outside of the personalized space. Following MyStyle, we define the convex hull of all the organized anchors \(\mathbf{w}_{n}^{*}\) as the personalized subspace
Figure 4. On the top, we show an overview of our controllable personalization approach. Given a set of input images of an individual with their corresponding attributes, we first encode them into the \(\mathcal{W}\) space of StyleGAN to obtain a set of anchors, shown with circles. Note that the colors indicate the attributes of the images. In this case, different views are indicated with yellow, orange, and red, while the colors for different expressions are green and blue. We then minimize an objective function consisting of anchor \(\mathcal{L}_{\text{anc}}\) and reconstruction \(\mathcal{L}_{\text{rec}}\) losses to organize the latent space, by updating the anchors, while tuning the generator. After optimization, we obtain an organized latent space \(\mathcal{W}^{*}\) that can be easily sampled according to a set of attributes, and a tuned generator that can produce images that are faithful to the facial characteristics of the target individual.
within \(\mathcal{W}^{*}\). This convex hull is represented through generalized barycentric coordinates as the weighted sum of the anchors, where the weights (coordinates) \(\mathbf{\alpha}=\{\alpha_{n}\}_{n=1}^{N}\) sum up to 1 and are greater than \(-\beta\) (\(\beta\) is a positive value). The latter condition dilates the space by a small amount to ensure expressiveness.
We propose a simple strategy to perform controlled sampling in the dilated convex hull. Specifically, we first randomly sample \(\mathbf{\alpha}\) to ensure the latent code is within the personalized subspace. We then project the sampled latent code into PCA and set the projected values along the attribute directions \(\mathbf{d}_{m}\) to the desired values. Note that, while it is possible for the modified latent codes to fall outside the dilated convex hull and require reprojection to the personalized space, we did not observe such cases in practice. This is mainly because our latent space is organized according to the attributes and our modifications are performed inside a hypercube which is part of the subspace.
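A sketch of this controlled sampling strategy is shown below. For simplicity it draws non-negative barycentric weights (strictly inside the undilated hull) and assumes the attribute directions are orthonormal PCA components; the dilation by \(\beta\) and any reprojection step are omitted.

```python
import torch

def sample_with_attributes(W_star, directions, targets):
    """Controlled synthesis sketch: draw a latent code inside the personalized
    subspace, then pin its coordinate along each attribute direction d_m to the
    requested value.  `targets` is an (M,) tensor of desired hypercube positions."""
    N = W_star.shape[0]
    # simple non-negative barycentric weights (the paper's dilated hull would
    # additionally allow weights down to -beta)
    alpha = torch.rand(N)
    alpha = alpha / alpha.sum()
    w = alpha @ W_star                            # (D,) latent code inside the hull
    # overwrite the coordinate along each (orthonormal) attribute direction
    for m in range(directions.shape[0]):
        d = directions[m]
        w = w + (targets[m] - w @ d) * d
    return w
```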
_Semantic Editing:_ Since our latent space is organized, the editing process for sampled images is straightforward. To edit an image, we project its latent code to PCA and perform the edit by changing the coordinate in the hypercube. To edit a real image \(\mathbf{I}\), we first project the image into the \(\mathbf{\alpha}\) space through the following objective:
\[\mathbf{\alpha}^{*}=\arg\min_{\mathbf{\alpha}}\mathcal{L}_{\text{rec}}(G(\mathbf{W}^ {*}\mathbf{\alpha}),\mathbf{I}), \tag{6}\]
where \(\mathbf{W}^{*}\) is a matrix with organized anchors along its columns. Note that we follow MyStyle's approach to ensure \(\mathbf{\alpha}\) values satisfy the conditions of the dilated convex hull, i.e., they sum up to 1 and are greater than \(-\beta\). Once we obtain the optimized latent code, following Roich et al. (2022), we further tune the generator to better match the input image. We then perform the semantic edits, by changing the latent code in PCA.
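A hedged sketch of the projection in Eq. (6) follows. The softmax reparameterization used to keep \(\mathbf{\alpha}\) in the dilated simplex (weights summing to 1 and bounded below by \(-\beta\)) is our own simplification, not necessarily MyStyle's strategy, and `G`, `rec_loss`, and the hyperparameters are placeholders.

```python
import torch

def project_to_alpha(G, W_star, image, rec_loss, beta=0.02, steps=300, lr=0.05):
    """Sketch of Eq. (6): find barycentric coordinates alpha of a real image.
    G        : tuned generator, mapping a (D,) latent code to an image tensor.
    W_star   : (N, D) organized anchors.
    rec_loss : reconstruction loss, e.g. LPIPS + L2 (placeholder callable)."""
    N = W_star.shape[0]
    logits = torch.zeros(N, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        # alpha sums to 1 and every entry is >= -beta
        alpha = torch.softmax(logits, dim=0) * (1 + N * beta) - beta
        loss = rec_loss(G(alpha @ W_star), image)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.softmax(logits, dim=0) * (1 + N * beta) - beta
```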
_Image Enhancement:_ Given an input image \(\mathbf{I}\) with a known degradation function \(Q\), our goal is to enhance the image, while controlling the attributes of the reconstructed image. We propose to do this through the following objective:
\[\mathbf{\alpha}^{*}=\arg\min_{\mathbf{\alpha}}\mathcal{L}_{\text{rec}}(Q(G(\mathbf{W}^ {*}\mathbf{\alpha})),\mathbf{I})+\lambda\sum_{m=1}^{M}\|(\mathbf{W}^{*}\mathbf{\alpha })\cdot\mathbf{d}_{m}-a_{m}\|, \tag{7}\]
where \(\lambda\) controls the balance between the two terms and we set it to one in our implementation. Here, the first term ensures that the generated image, after applying the degradation function, is similar to the input image. The second term encourages the projection of the latent code \(\mathbf{W}^{*}\mathbf{\alpha}\) onto the \(m^{\text{th}}\) attribute direction to be similar to the desired value \(a_{m}\). Note that, we can perform enhancement by controlling a subset of the attributes, by only applying the second term to the attributes of interest. Similarly, for uncontrolled enhancement, we simply remove the second term.
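The enhancement objective of Eq. (7) only adds a degradation operator and an attribute penalty to the previous sketch, so it can be expressed as a drop-in loss for the same \(\mathbf{\alpha}\)-optimization loop; `Q`, `targets`, and `rec_loss` are assumed inputs.

```python
def enhancement_loss(w, G, Q, degraded, directions, targets, rec_loss, lam=1.0):
    """Per-step objective of Eq. (7); plug it into the alpha-optimization loop of
    the Eq. (6) sketch above, with w = alpha @ W_star.
    Q is the known degradation (e.g. a downsampler for super-resolution or a
    mask for inpainting); targets holds the desired a_m values."""
    data_term = rec_loss(Q(G(w)), degraded)                   # match the degraded input
    attr_term = sum((w @ directions[m] - targets[m]).abs()    # pin the chosen attributes
                    for m in range(directions.shape[0]))
    return data_term + lam * attr_term
```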
## 4. Results
We implement the proposed approach in PyTorch and adopt the ADAM optimizer (Kingma and Ba, 2015) with the default parameters. All the results are obtained after tuning a pre-trained StyleGAN2 (Karras et al., 2020) generator on the FFHQ (Karras et al., 2019) dataset. We perform the tuning for 3000 epochs with a batch size of one and a learning rate of 5e-3 across all datasets. _We will release our source code and the network weights (for a few individuals) upon publication_.
We have tested our system on the following individuals: Barack Obama (93 images), Emma Watson (304 images), Joe Biden (142 images), Leonardo DiCaprio (217 images), Michelle Obama (138 images), Oprah Winfrey (106 images), Scarlett Johansson (179 images), and Taylor Swift (129 images). We consider the expression, as well as yaw and pitch angles, as the attributes for all individuals. For Leonardo DiCaprio and Emma Watson, we include age in addition to the other three attributes. Throughout this section, we demonstrate our results on some of these individuals, but more results can be found in the supplementary materials.
We estimate the expression, yaw, and pitch by leveraging AWS Rekognition API (Amazon, 2023), while we employ the DEX VGG network (Rothe et al., 2018) to estimate the age attribute. We quantize yaw and pitch angles by every 5 degrees and age by every 2 years during the data pre-processing stage, described in Sec. 3.1. For expression, we utilize a combination of the "Smile" and "MouthOpen" attributes of the AWS output, which indicates the presence of the attribute as true or false with a confidence level ranging from 50 to 100. We divide the confidence level by 20% and round it down to the nearest integer, resulting in three groups of presence and three groups of absence for each attribute. We then combine the lowest groups of presence and absence (presence and absence with 50% to 60% confidence) into the same group, resulting in five quantization levels for both "Smile" and "MouthOpen". The images with the same "Smile" and "MouthOpen" quantization levels are then grouped together.
We compare our approach against two versions of MyStyle, called MyStyle_I and MyStyle_P, where the editing directions are obtained from InterFaceGAN (Shen et al., 2020) and PCA (using Eq. 4), respectively. Note that in MyStyle_P we do not organize the latent space and only tune the generator, i.e., minimize the reconstruction loss, but not the anchor loss. Although MyStyle does not demonstrate controllable synthesis, we use the approach discussed in Sec. 3.3 with the directions from InterFaceGAN and PCA to imbue MyStyle with this capability.
Here, we show a subset of our results, but more comparisons and evaluations can be found in our accompanying video and supplementary materials.
_Synthesis:_ We begin by comparing our controllable synthesis results for Oprah Winfrey, Barack Obama, Scarlett Johansson, and Leonardo DiCaprio against MyStyle_I and MyStyle_P. For each person, we show a set of results by fixing one attribute and randomly sampling the rest. As shown in Fig. 5, both MyStyle_I and MyStyle_P produce results with large variations in the attribute of interest, because the directions from InterFaceGAN (Shen et al., 2020) and PCA do not match the correct attribute directions in the personalized subspace. For example, on the top, a large smile is expected, whereas images generated by MyStyle_I and MyStyle_P exhibit a range of different expressions. While yaw is usually the dominant attribute in the latent space and relatively easy to control, MyStyle_I and MyStyle_P exhibit undesirable yaw variance for Barack Obama. Similarly, these baselines produce results with large pitch and age variations for Scarlett Johansson and Leonardo DiCaprio, respectively. In contrast, our approach produces results that are consistent
in all four cases. Note that InterFaceGAN does not provide a direction corresponding to the pitch, and thus we only compare against MyStyle_P for the case with fixed pitch.
We further numerically evaluate the ability of our method to control the attributes in comparison with MyStyle_P and MyStyle_I in Table 1. To accomplish this, we generate 100 images by fixing one attribute and randomly sampling the other ones. We then estimate the attributes of the generated images, using AWS Rekognition for expression, as well as the yaw and pitch angles, and DEX VGG [12] for age, and compute the standard deviation of the estimated attribute for all the 100 images. For each attribute, we show the results for five normalized values (0.0, 0.25, 0.5, 0.75, 1.0). As seen, MyStyle_P and MyStyle_I generate inferior results as the PCA and InterFaceGAN attribute directions are not well-aligned with the correct attribute directions in the subspace. In contrast, our approach consistently demonstrates the smallest standard deviation across all attributes for both Scarlett Johansson and Leonardo DiCaprio.
A potential concern is whether our latent space organization could compromise the diversity and preservation of the identity of the results. To numerically evaluate this, we compute the ID metric, as proposed in MyStyle [13], on the results generated by both our approach and MyStyle for Scarlett Johansson and Leonardo DiCaprio. This metric measures the cosine similarity of the features extracted by a deep classifier between the generated image and the closest one from the training data. Besides measuring the ability to preserve the identity, we also compute the diversity of the synthesized images. We follow the protocol suggested by Ojha et al. [2021] to compute the intra-cluster diversity using the LPIPS score. Specifically, we generate 1000 images and assign them to one of the 10 training images, by using the lowest LPIPS distance. Then we compute the average pair-wise LPIPS distance within members of the same cluster and then average over the 10 clusters. As shown in Table 2, our method generates results that are comparable to MyStyle in terms of ID metric and diversity score, demonstrating that our latent space organization does not compromise the diversity and identity preservation of the results.
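For reference, a sketch of this diversity protocol using the publicly available `lpips` package is given below; image ranges, batch handling, and the backbone choice are assumptions, not the exact evaluation code.

```python
import itertools
import lpips   # pip install lpips
import torch

dist = lpips.LPIPS(net='alex')   # perceptual distance used for both steps

def intra_cluster_diversity(generated, training):
    """Assign each generated image to the training image with the lowest LPIPS
    distance, then average pairwise LPIPS within each cluster and over clusters.
    Both arguments are lists of (1, 3, H, W) tensors scaled to [-1, 1]."""
    clusters = [[] for _ in training]
    with torch.no_grad():
        for img in generated:
            d = [dist(img, ref).item() for ref in training]
            clusters[d.index(min(d))].append(img)
        scores = []
        for members in clusters:
            pairs = list(itertools.combinations(members, 2))
            if pairs:
                scores.append(sum(dist(a, b).item() for a, b in pairs) / len(pairs))
    return sum(scores) / len(scores)
```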
_Semantic Editing:_ We begin by comparing our semantic editing results against MyStyle_P and MyStyle_I in Fig. 6. Specifically, we modify the expression, yaw, pitch, and age of Scarlett Johansson, Michelle Obama, Joe Biden, and Leonardo DiCaprio, respectively. MyStyle_P has difficulties editing Scarlett Johansson's expression and predominantly changes the yaw. While MyStyle_I is better able to edit the expression, it slightly changes the yaw (see the supplementary video) and produces a neutral face with altered identity (the leftmost image). Moreover, both MyStyle_P and MyStyle_I change the expression when editing Michelle Obama's yaw angle. For Joe Biden, MyStyle_P struggles to properly edit the pitch angle as the PCA direction is not well-aligned with the pitch attribute direction in the subspace. Finally, when editing the age of Leonardo DiCaprio, both MyStyle_P and MyStyle_I exhibit noticeable changes to the expression and pitch, respectively. Additionally, both approaches struggle to preserve the identity of the edited images in extreme cases (rightmost for MyStyle_P and leftmost for MyStyle_I). In contrast to these techniques, our method only changes the attribute of interest when producing edited results and is able to better preserve the identity. Again, we note that we do not show pitch editing for MyStyle_I as InterFaceGAN does not provide a direction corresponding to the pitch attribute.
Next, we compare our method against the other techniques for editing real images of Barack Obama, Emma Watson, Scarlett Johansson, and Leonardo DiCaprio, in Fig. 7. Both MyStyle_P and MyStyle_I have difficulties preserving the identity of Barack Obama when removing the smile. Additionally, MyStyle_P struggles to maintain the yaw angle. For Emma Watson, both MyStyle_P and MyStyle_I change the expression when editing the yaw angle. For Scarlett Johansson, MyStyle_P is unable to edit the pitch and instead modifies the yaw angle. Finally, MyStyle_P changes the yaw angle
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline & & \multicolumn{5}{c}{Scarlett Johansson} \\ \cline{3-6} & & 0.0 & 0.25 & 0.5 & 0.75 & 1.0 \\ \hline \multirow{3}{*}{Exp} & MyStyle\_P & 0.651 & 0.881 & 0.926 & 0.878 & 0.667 \\ & MyStyle\_I & 0.242 & 0.771 & 0.788 & 0.391 & 0.001 \\ & Ours & **0.025** & **0.182** & **0.470** & **0.230** & **0.000** \\ \hline \multirow{3}{*}{Yaw} & MyStyle\_P & 7.815 & 7.331 & 4.418 & 5.563 & 6.568 \\ & MyStyle\_I & 3.894 & 3.553 & 2.374 & 3.005 & 3.579 \\ & Ours & **2.677** & **2.335** & **1.963** & **1.567** & **2.390** \\ \hline \multirow{3}{*}{Pitch} & MyStyle\_P & 4.945 & 3.912 & 4.808 & 4.803 & 5.162 \\ & MyStyle\_I & - & - & - & - & - \\ & Ours & **3.717** & **3.030** & **2.501** & **2.710** & **2.670** \\ \hline \hline \end{tabular}
\end{table}
Table 1: We numerically compare our controlled synthesis results against MyStyle_P and MyStyle_I. We generate 100 images for each fixed attribute value and report the standard deviation of the estimated attribute of interest over the generated images. Note that the attribute values (e.g., 0.25) are in the normalized coordinate \(d_{m}\). The best results are shown in bold. Note that we do not report any fixed pitch synthesis results for MyStyle_I as InterFaceGAN [2] does not provide an edit direction for Pitch.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline & & Scarlett Johansson & Leonardo DiCaprio \\ \hline \multirow{3}{*}{ID \(\uparrow\)} & MyStyle & 0.760\(\pm\)0.007 & 0.7828\(\pm\)0.050 \\ & Ours & **0.763\(\pm\)0.004** & **0.7856\(\pm\)0.070** \\ \hline \multirow{3}{*}{Diversity \(\uparrow\)} & MyStyle & 0.455\(\pm\)0.081 & 0.417\(\pm\)0.049 \\ & Ours & **0.471\(\pm\)0.053** & **0.456\(\pm\)0.035** \\ \hline \hline \end{tabular}
\end{table}
Table 2: We compare our results against MyStyle in terms of the ID metric [13] and diversity score [12]. Higher numbers are better. Our method produces similar results compared to MyStyle, which demonstrates that latent organization does not hurt the quality of our results. The best results are shown in bold.
when editing Leonardo DiCaprio's age, while MyStyle_I has difficulties maintaining the identity. In contrast to these methods, our approach disentangles the attributes more effectively and is better at preserving the identities in all four cases.
We note that the reason behind MyStyle's occasional failure to preserve the identity is that the edited latent codes, in some cases, fall outside the personalized subspace. While the loss of identity can be resolved by projecting the edited latent codes back to the convex hull, using MyStyle's suggested strategy, this process produces results with undesirable attributes. This is shown in Fig. 8 where the objective is to completely remove Barack Obama's smile and produce a teenage Leonardo DiCaprio. MyStyle_I produces results with altered identities as evident both visually and numerically through the ID metric. The identity is improved by projecting the edited latent codes to the subspace (third column), but this process increases the smile (top) and age (bottom).
We further numerically compare our real image editing results against MyStyle_P and MyStyle_I on Leonardo DiCaprio and Michelle Obama in Tab. 3. Specifically, we evaluate the editing consistency by computing the mean standard deviation of the edited attribute, while we measure the attribute disentanglement by calculating the mean standard deviation of the non-edited attributes. The standard deviation is computed over 21 edits and they are averaged over 15 and 21 images for Michelle Obama and Leonardo DiCaprio, respectively. We additionally evaluate the ability of different methods to preserve the identity using the ID metric. As seen, our method consistently outperforms MyStyle_P and MyStyle_I across all metrics.
_Image Enhancement:_ As discussed in Sec. 3.3, since our method provides precise control over the attributes, it can be used to perform controllable image enhancement. This is shown in Figs. 9 and 10 for image inpainting and super-resolution, respectively. As seen our method can produce inpainted and super-resolved images with the desired expressions.
## 5. Limitations and Future Work
Our approach is able to produce high-quality results with great control over a set of attributes. However, it has a few limitations. First, the number of images required for personalization increases significantly with the number of desired attributes. This is because we rely on the propagation of the anchors to the neighboring regions. If the anchors in certain regions are sparse, those areas are not going to be personalized appropriately. However, this is not unique to our approach and MyStyle suffers from the same drawback. For example, if MyStyle is personalized with images of a young subject, it cannot produce images of the subject at an old age with high fidelity. Second, while our approach provides great control over the attributes, our reconstructions for attributes like view are not physically accurate. In the future, it would be interesting to incorporate the image formation process into our system to improve accuracy.
We note that although our approach has the potential to be applied to cases beyond MyStyle, such as organizing the entire latent space of StyleGAN, one significant challenge arises: organizing the entire latent space necessitates a large number of anchor images, resulting in time-consuming and difficult optimization. Furthermore, special attention must be given to prevent anchors with different identities from being placed closely together after optimization; this is not an issue when handling a single individual.
## 6. Conclusion
We have presented an approach to obtain a controllable personalized generative prior from a set of images of an individual. Our system allows for reconstructing images of the individual that faithfully preserve the key facial features of the individual, while providing full control over a set of pre-defined attributes. In addition to tuning a pre-trained generator, we organize its latent space such that different attributes change along certain known directions. To do this, we formulate a loss that rearranges the latent codes, corresponding to the input images, according to the attributes. We show that our method better disentangles the attributes than MyStyle, while providing full control over the attributes.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline & & & \multicolumn{3}{c}{Michelle Obama} \\ \hline \multirow{5}{*}{Exp} & \multirow{5}{*}{MyStyle\_P} & Exp\({}^{*}\) & Yaw & Pitch & ID \(\uparrow\) \\ \cline{3-6} & & 0.547 & 0.716 & 4.436 & 0.786\(\pm\)0.057 \\ & MyStyle\_I & 0.445 & 1.069 & 0.715 & 0.757\(\pm\)0.065 \\ & Ours & **0.306** & **0.576** & **0.596** & **0.794\(\pm\)0.034** \\ \hline \multirow{5}{*}{Yaw} & \multirow{5}{*}{MyStyle\_P} & Yaw\({}^{*}\) & Exp & Pitch & ID \(\uparrow\) \\ \cline{3-6} & & 2.773 & 0.477 & 2.419 & 0.780\(\pm\)0.053 \\ & MyStyle\_I & 4.424 & 0.510 & 2.181 & 0.715\(\pm\)0.110 \\ & Ours & **0.876** & **0.298** & **1.137** & **0.792\(\pm\)0.047** \\ \hline \multirow{5}{*}{Pitch} & \multirow{5}{*}{MyStyle\_P} & Pitch\({}^{*}\) & Exp & Yaw & ID \(\uparrow\) \\ \cline{3-6} & & & & & \\ \cline{3-6} & MyStyle\_P & 6.306 & 0.463 & 2.242 & 0.727\(\pm\)0.047 \\ & MyStyle\_I & - & - & - & - \\ \cline{3-6} & Ours & **2.045** & **0.231** & **1.311** & **0.731\(\pm\)0.110** \\ \hline \hline \multicolumn{6}{c}{Leonardo DiCaprio} \\ \hline \multirow{5}{*}{Exp} & \multirow{5}{*}{MyStyle\_P} & \multirow{5}{*}{0.794} & \multirow{5}{*}{4.866} & \multirow{5}{*}{1.965} & \multirow{5}{*}{3.584} & \multirow{5}{*}{0.743\(\pm\)0.103} \\ & & & & & & \\ \cline{3-6} & & 2.57 & 1.538 & 1.227 & 3.179 & 0.731\(\pm\)0.108 \\ & Ours & **0.268** & **1.204** & **0.999** & **2.201** & **0.752\(\pm\)0.107** \\ \hline \multirow{5}{*}{Yaw} & \multirow{5}{*}{MyStyle\_P} & \multirow{5}{*}{4.069} & \multirow{5}{*}{0.213} & \multirow{5}{*}{2.570} & \multirow{5}{*}{2.893} & \multirow{5}{*}{0.717\(\pm\)0.108} \\ & & & & & & \\ \cline{3-6} & MyStyle\_I & 3.925 & 0.111 & 2.426 & 2.700 & 0.716\(\pm\)0.117 \\ & Ours & **2.097** & **0.075** & **2.108** & **2.212** & **0.728\(\pm\)0.115** \\ \hline \multirow{5}{*}{Pitch} & \multirow{5}{*}{MyStyle\_P} & \multirow{5}{*}{5.463} & \multirow{5}{*}{0.281} & \multirow{5}{*}{3.030} & \multirow{5}{*}{3.720} & \multirow{5}{*}{0.717\(\pm\)0.121} \\ & & & & & & \\ \cline{3-6} & MyStyle\_I & - & - & - & - & - \\ \cline{3-6} & Ours & **3.591** & **0.071** & **1.786** & **3.023** & **0.726\(\pm\)0.114** \\ \hline \multirow{5}{*}{Age} & \multirow{5}{*}{MyStyle\_P} & \multirow{5}{*}{5.113} & \multirow{5}{*}{0.230} & \multirow{5}{*}{2.808} & \multirow{5}{*}{2.824} & \multirow{5}{*}{0.734\(\pm\)0.118} \\ \cline{3-6} & & & Age\({}^{*}\) & Exp & Yaw & Pitch & ID \(\uparrow\) \\ \cline{3-6} & MyStyle\_P & 5.113 & 0.230 & 2.808 & 2.824 & 0.734\(\pm\)0.118 \\ \cline{3-6} & MyStyle\_I & 7.152 & 0.134 & 1.294 & 2.095 & 0.723\(\pm\)0.120 \\ \cline{3-6} & Ours & **3.473** & **0.087** & **0.467** & **1.217** & **0.739\(\pm\)0.113** \\ \hline \end{tabular}
\end{table}
Table 3. We compare our editing results against MyStyle_I and MyStyle_P in terms of the mean standard deviation (STD) of the edited attribute to show editing consistency (marked with \(*\)), and of fixed attributes to demonstrate attribute disentanglement. We additionally report the ID metric to evaluate identity preservation ability. The best results are shown in bold. |
2302.08570 | The complexity of counting planar graph homomorphisms of domain size 3 | We prove a complexity dichotomy theorem for counting planar graph
homomorphisms of domain size 3. Given any 3 by 3 real valued symmetric matrix
$H$ defining a graph homomorphism from all planar graphs $G \mapsto Z_H(G)$, we
completely classify the computational complexity of this problem according to
the matrix $H$. We show that for every $H$, the problem is either polynomial
time computable or \#P-hard. The P-time computable cases consist of precisely
those that are P-time computable for general graphs (a complete classification
is known) or computable by Valiant's holographic algorithm via matchgates. We
also prove several results about planar graph homomorphisms for general domain
size $q$. The proof uses mainly analytic arguments. | Jin-Yi Cai, Ashwin Maran | 2023-02-16T20:33:07Z | http://arxiv.org/abs/2302.08570v1 | # The complexity of counting planar graph homomorphisms of domain size 3
###### Abstract
We prove a complexity dichotomy theorem for counting planar graph homomorphisms of domain size 3. Given any 3 by 3 real valued symmetric matrix \(H\) defining a graph homomorphism from all planar graphs \(G\mapsto Z_{H}(G)\), we completely classify the computational complexity of this problem according to the matrix \(H\). We show that for every \(H\), the problem is either polynomial time computable or #P-hard. The P-time computable cases consist of precisely those that are P-time computable for general graphs (a complete classification is known [25]) or computable by Valiant's holographic algorithm via matchgates. We also prove several results about planar graph homomorphisms for general domain size \(q\). The proof uses mainly analytic arguments.
## 1 Introduction
Given graphs \(G\) and \(H\), a mapping from \(V(G)\) to \(V(H)\) is called a _homomorphism_ if edges of \(G\) are mapped to edges of \(H\). This is put in a more general or quantitative setting by the notion of a _partition function_. Let \(M=(m_{i,j})\) be a symmetric \(q\times q\) matrix. In this paper we consider arbitrary real entries \(m_{i,j}\in\mathbb{R}\); if \(m_{i,j}\in\{0,1\}\) (or \(m_{i,j}\geq 0\)), then \(M\) is the unweighted (or nonnegatively weighted) adjacency matrix of a graph \(H=H_{M}\). Given \(M\), the partition function \(Z_{M}(G)\) for any input undirected multi-graph \(G=(V,E)\) is defined as
\[Z_{M}(G)=\sum_{\sigma:V\rightarrow[q]}\prod_{(u,v)\in E}m_{\sigma(u),\sigma(v)}.\]
Obviously isomorphic graphs \(G\cong G^{\prime}\) have the same value \(Z_{M}(G)=Z_{M}(G^{\prime})\), and thus every \(M\) defines a graph property \(Z_{M}(\cdot)\). For a 0-1 matrix \(M\), \(Z_{M}(G)\) counts the number of homomorphisms from \(G\) to \(H\). Graph homomorphism (GH) encompasses a great deal of graph properties [34].
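For very small instances the partition function can be evaluated directly from this definition; the brute-force sketch below (exponential in \(|V|\)) is only meant as a reference point for the notation.

```python
import itertools
import numpy as np

def partition_function(M, edges, num_vertices):
    """Brute-force Z_M(G): sum over all assignments sigma: V -> [q] of the
    product of M[sigma(u), sigma(v)] over the edges of G."""
    M = np.asarray(M)
    q = M.shape[0]
    total = 0.0
    for sigma in itertools.product(range(q), repeat=num_vertices):
        prod = 1.0
        for u, v in edges:
            prod *= M[sigma[u], sigma[v]]
        total += prod
    return total

# Example: with M the adjacency matrix of K_3, Z_M of a triangle counts the
# proper 3-colorings of the triangle, which is 6.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(partition_function(K3, edges=[(0, 1), (1, 2), (2, 0)], num_vertices=3))  # 6.0
```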
Each \(M\) defines a computational problem, denoted by \(\mathtt{GH}(M)\): given an input graph \(G\) compute \(Z_{M}(G)\). The complexity of \(\mathtt{GH}(M)\) has been a major focus of research. A number of increasingly general complexity dichotomy theorems have been achieved. Dyer and Greenhill [18] proved that for any 0-1 symmetric matrix \(M\), computing \(Z_{M}(G)\) is either in P-time or is #P-complete. Bulatov
and Grohe [7] found a complete classification for \(\mathtt{GH}(M)\) for all nonnegative matrices \(M\). Goldberg, Grohe, Jerrum and Thurley [25] then proved a dichotomy for all real-valued matrices \(M\). Finally, Cai, Chen, and Lu [11] established a dichotomy for all complex valued matrices \(M\). We also note that graph homomorphism can be viewed as a special case of counting CSP, with one binary constraint function. For counting CSP, a series of results established a complexity dichotomy for any set of constraint functions \(\mathcal{F}\), going from 0-1 valued [9, 20, 21, 19] to nonnegative rational valued [6], to nonnegative real valued [12], to all complex valued functions 1.
Footnote 1: The last counting CSP dichotomy [10] is not known to have a decidable dichotomy criterion, while the dichotomy criterion on GH [11] is polynomial-time decidable. So, [10] does not strictly supersede [11].
Parallel to this development, Valiant [41] introduced _holographic algorithms_. It is well known that counting the number of perfect matchings (#PM) is #P-complete [42]. On the other hand, since the 60's, there has been a famous FKT algorithm [33, 40, 32, 31] that can compute #PM on planar graphs in P-time. Valiant's holographic algorithms greatly extended its reach, in fact so much so that a most intriguing question arises: Is this a _universal_ algorithm that every counting problem (expressed as a sum-of-products) that _can be solved_ in P-time on planar graphs (but #P-hard in general) _is solved_ by this method alone? Such a universality statement must appear to be extraordinarily, if not overly, ambitious.
To attack this problem, Holant problems are introduced [16]. Holant problems are edge-models, where for an input graph, constraint functions are attached to vertices, and the edges serve as variables. Typical examples are #PM, counting proper edge colorings, or cycle covers, etc. It can be shown that counting CSP can be expressed as Holant problems, but not conversely [22]. It is in the framework of Holant problems, a classification of such counting problems can be studied, and the power of holographic algorithms be understood.
After a series of work [16, 15, 13, 3, 2, 44, 24, 23, 13] it was established that for every set of complex valued constraint functions \(\mathcal{F}\) on the Boolean domain (i.e., \(q=2\)) there is a 3-way classification for #CSP(\(\mathcal{F}\)): (1) P-time solvable, (2) P-time solvable over planar graphs but #P-hard over general graphs, (3) #P-hard over planar graphs. Moreover, category (2) consists of precisely those problems that can be solved by Valiant's holographic algorithm using FKT. Note that, curiously, this is a reduction to #PM, which is a Holant problem, but not a #CSP problem. More mysteriously, it is further proved that for the broader class of Holant problems on the Boolean domain, Valiant's holographic algorithm is _not_ universal for category (2) [14]. So far we have very limited knowledge on this universality question for higher domain problems (\(q>2\)). Before this work, no complexity classification was known for the _planar_ version of \(\mathtt{GH}(M)\) for \(q=3\), even for 0-1 matrices \(M\).
On the other hand, the _planar_ version of \(\mathtt{GH}\) (for general \(q\)) has been found to be intimately related to quantum information theory [4, 17, 1, 39, 37, 36]. Mancinska and Roberson [38] showed that two graphs \(H\) and \(H^{\prime}\) are _quantum isomorphic_ iff for every _planar_ input graph \(G\), the partition function \(Z_{H}(G)=Z_{H^{\prime}}(G)\). This is in contrast to a classical result by Lovasz [35] that \(H\) and \(H^{\prime}\) are _isomorphic_ iff \(Z_{H}(G)=Z_{H^{\prime}}(G)\) for _every_ graph \(G\). They further proved that it is in general undecidable for two graphs \(H\) and \(H^{\prime}\), whether \(Z_{H}(G)=Z_{H^{\prime}}(G)\) for all _planar_ graphs \(G\)[38, 1]. (\(H\) and \(H^{\prime}\) need not be planar.)
Our goal is strictly on the complexity question. What is the computational complexity of \(Z_{M}(G)\) from planar input graphs \(G\)? This paper marks the beginning of this quest. Let \(\mbox{\tt Pl-GH}(M)\) denote the problem \(\mbox{\tt GH}(M)\) when the input graphs are restricted to planar graphs. (Again, the underlying graph \(H_{M}\) is not restricted to planar graphs.) For domain size \(q=3\), we give a complete classification of the complexity of \(\mbox{\tt Pl-GH}(M)\) for all real valued matrices \(M\). We prove that an exact classification according to the three categories above holds for this class, and a holographic reduction to FKT remains a _universal_ algorithm for category (2). As in previous work, generalizing a dichotomy from domain size \(2\) to domain size \(3\) has to overcome significant difficulty and can lead to future progress [8]. We also prove several results about \(\mbox{\tt Pl-GH}(M)\) for general \(q\). For example, we give a generic criterion that leads to #P-hardness of \(\mbox{\tt Pl-GH}(M)\) for non-negative real matrices (Theorems 24 and 25), and also prove that \(\mbox{\tt Pl-GH}(M)\) is #P-hard for almost all \(M\) (Theorem 26).
Now we give some highlights of the proof for the \(q=3\) case. First, we use the Boolean domain dichotomy to handle certain \(3\times 3\) matrices. This includes both the _reducible_ matrices as well as a subtler case called _twinned_ matrices. For the latter, we can transform the problem to a version of the partition function with degree dependent vertex weights. Then we use a gadget construction from [26] and interpolation to get rid of the dependency on vertex degree.
We then formulate a _lattice condition_ on the eigenvalues of \(M\), which if satisfied, would allow us to carry out a successful #P-hardness proof using Vandermonde systems. After some work, it boils down to proving that \(\mbox{\tt Pl-GH}(M(p))\) is #P-hard for some real \(p>1\), where \(M(p)\) is a family of matrices of the form \((p^{x_{ij}})\) for some integers \(x_{ij}\geq 0\). We have very little control of \(x_{ij}\) except that \(M(p)\) has full rank in some small interval \(p\in I_{\epsilon}=(1,1+\epsilon)\) for some \(\epsilon>0\). Let \(\lambda_{i}=\lambda_{i}(p)\) be the eigenvalues of \(M(p)\) ordered by \(|\lambda_{1}|\leq|\lambda_{2}|\leq|\lambda_{3}|\). By the Perron theorem we have \(|\lambda_{2}|<|\lambda_{3}|\) is strict and there is a well defined and unique \(t(p)\in(0,1]\) such that \(|\lambda_{2}|=|\lambda_{1}|^{t(p)}|\lambda_{3}|^{1-t(p)}\).
The crux of the proof is to show the following: (i) \(\lim_{p\to 1^{+}}t(p)\) exists, and is either \(1/2\) or \(1\). (ii) \(t(p)\) cannot be identically \(1/2\) in \(I_{\epsilon}\). (iii) If \(t(p)\) is identically \(1\) in \(I_{\epsilon}\), then \(\mbox{\tt Pl-GH}(M)\) is #P-hard. From (i), if \(t(p)\) is constant in \(I_{\epsilon}\) it can only be \(1/2\) or \(1\). From (ii) and (iii), we may assume neither case holds, and so \(t(p)\) is not constant. Thus, by the intermediate value theorem there exists some \(p\in I_{\epsilon}\) such that \(t(p)\) is _irrational_. This irrational \(t(p)\) will fulfill the lattice condition!
Now a perceptive reader may object that the \(p\) that produces an irrational \(t(p)\) may not even be algebraic, and the usual definition for the complexity of partition functions requires that the real numbers be algebraic so that strict bit complexity in terms of Turing machines can apply.
This is a serious quandary. Our proof is intrinsically analytic. Also there are indeed non-constant continuous functions \(f(\cdot)\) that map all algebraic \(p\) to rational \(f(p)\) (see Appendix B). Furthermore, it seems hopeless to prove that our \(t(p)\) is not such a function (although probably true).
We resolve this difficulty by a bold approach--we will allow _all_ real \(M\) for \(\mbox{\tt GH}(M)\) and still stay within strict bit complexity of Turing machines. This uses the theorem of unique transcendence degree [30]. Details are in Section 2.1. We remark that this makes it non-constructive. For instance, it is unknown whether \(\mathbb{Q}(e,\pi)\) has transcendence degree \(1\) or \(2\). If \(M\) has both \(e\) and \(\pi\) our formal definition of \(\mbox{\tt GH}(M)\) treats this degree as "known" (existentially); \(M\) is a fixed constant for the computational problem \(\mbox{\tt GH}(M)\), and the complexity statements refer to the _existence_ of either P
time algorithms or reductions (but not how to get them).
## 2 Preliminaries
Let \(M=(m_{i,j})\) be a symmetric \(q\times q\) real valued matrix, \(i,j\in[q]\). Given a planar, undirected multi-graph \(G=(V,E)\), we can perform certain elementary operations (that preserve planarity) on the graph \(G\) to transform it into a new graph \(G^{\prime}\), such that \(Z_{M}(G^{\prime})=Z_{M^{\prime}}(G)\) for some matrix \(M^{\prime}\). For most of this paper we will use two such operations, thickening and stretching.
From any planar multi-graph \(G=(V,E)\), and a positive integer \(k\), we can construct the planar multi-graph \(T_{k}G\), by replacing every edge in \(G\) with \(k\) parallel edges between the same vertices. This process is called _thickening_. Clearly \(Z_{M}(T_{k}G)=Z_{T_{k}M}(G)\), where \(T_{k}M\) is the \(q\times q\) matrix with entries \(\big{(}(m_{i,j})^{k}\big{)}\) for \(i,j\in[q]\). In particular, \(\texttt{Pl-GH}(T_{k}M)\leq\texttt{Pl-GH}(M)\) for all \(k\geq 1\).
Similarly, from any planar multi-graph \(G=(V,E)\), and a positive integer \(k\), we can construct the planar multi-graph \(S_{k}G\) by replacing every edge \(e\in E\) with a path of length \(k\). This process is called _stretching_. It is also easily seen that \(Z_{M}(S_{k}G)=Z_{S_{k}M}(G)\), where \(S_{k}M=M^{k}\), the \(k\)-th power of \(M\). So, we also have \(\texttt{Pl-GH}(S_{k}M)\leq\texttt{Pl-GH}(M)\) for all \(k\geq 1\).
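The two operations act on \(M\) in a purely mechanical way, as the short numpy sketch below illustrates; the sanity check reuses the brute-force `partition_function` from the earlier snippet.

```python
import numpy as np

def thicken(M, k):
    """T_k M: entrywise k-th power -- matches replacing every edge by k parallel edges."""
    return np.asarray(M) ** k

def stretch(M, k):
    """S_k M = M^k: k-th matrix power -- matches replacing every edge by a path of
    length k (the internal path vertices are summed out)."""
    return np.linalg.matrix_power(np.asarray(M), k)

# Sanity check of Z_M(T_3 G) = Z_{T_3 M}(G) on a triangle.
M = np.array([[2., 1., 1.], [1., 2., 1.], [1., 1., 2.]])
tri = [(0, 1), (1, 2), (2, 0)]
tri_thick = [e for e in tri for _ in range(3)]
assert partition_function(M, tri_thick, 3) == partition_function(thicken(M, 3), tri, 3)
```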
### Model of Computation
The Turing machine model is naturally suited to the study of computation over discrete structures such as integers or graphs. When \(M\in\mathbb{R}^{q\times q}\), for \(\texttt{Pl-GH}(M)\) one usually restricts \(M\) to be a matrix with only algebraic numbers. This is strictly for the consideration of the model of computation, even though allowing all real-valued matrices would be more natural.
There is a formal (albeit nonconstructive) method to treat \(\texttt{Pl-GH}(M)\) for arbitrary real-valued matrices \(M\) and yet stay strictly within the Turing machine model in terms of bit-complexity. In this paper, because our proof depends heavily on analytic arguments with continuous functions on the real line, this logical formal view becomes necessary.
To begin with, we recall a theorem from field theory: Every extension field \(\mathbf{F}\) over \(\mathbb{Q}\) by a finite set of real numbers is a finite algebraic extension \(\mathbf{E}^{\prime}\) of a certain purely transcendental extension field \(\mathbf{E}\) over \(\mathbb{Q}\), which has the form \(\mathbf{E}=\mathbb{Q}(X_{1},\ldots,X_{m})\) where \(m\geq 0\) and \(X_{1},\ldots,X_{m}\) are algebraically independent [30] (Theorem 8.35, p. 512). \(\mathbf{F}\) is said to have a finite transcendence degree \(m\) over \(\mathbb{Q}\). It is known that \(m\) is uniquely defined for \(\mathbf{F}\). Since \(\mathrm{char}\ \mathbb{Q}=0\), the finite algebraic extension \(\mathbf{E}^{\prime}\) over \(\mathbf{E}\) is actually simple, \(\mathbf{E}^{\prime}=\mathbf{E}(\beta)\) for some \(\beta\), and it is specified by a minimal polynomial in \(\mathbf{E}[X]\). Now given a real matrix \(M\), let \(\mathbf{F}=\mathbb{Q}(M)\) be the extension field by adjoining the entries of \(M\). We
Figure 1: A graph \(G\), the thickened graph \(T_{3}G\), and the stretched graph \(S_{2}G\).
consider \(M\) to be fixed for the problem \(\mathtt{Pl\mbox{-}GH}(M)\), and thus we may assume (nonconstructively) that the form \(\mathbf{F}=\mathbf{E}(\beta)\) and \(\mathbf{E}=\mathbb{Q}(X_{1},\ldots,X_{m})\) are given. (This means, among other things, that the minimal polynomial of \(\beta\) over \(\mathbf{E}\) is given, and all arithmetic operations can be performed on \(\mathbf{F}\).)
Now, the computational problem \(\mathtt{Pl\mbox{-}GH}(M)\) is the following: Given a planar \(G\), compute \(Z_{M}(G)\) as an element in \(\mathbf{F}\) (which is expressed as a polynomial in \(\beta\) with coefficients in \(\mathbf{E}\)). More concretely, we can show that this is equivalent to the following problem \(\mathsf{COUNT}(M)\): The input is a pair \((G,x)\), where \(G=(V,E)\) is a planar graph and \(x\in\mathbf{F}\). The output is
\[\#_{M}(G,x)=\Big{|}\big{\{}\sigma:V\to[q]\,:\,\prod_{(u,v)\in E}m_{\sigma(u), \sigma(v)}=x\big{\}}\Big{|},\]
a non-negative integer. Note that, in this definition, we are basically combining terms with the same product value in the definition of \(Z_{M}(G)\).
Let \(n=|E|\). Define \(X\) to be the set of all possible product values appearing in \(Z_{M}(G)\):
\[X=\left\{\prod_{i,j\in[q]}m_{ij}^{k_{ij}}\Big{|}\;\mbox{integers $k_{ij}\geq 0$ and $\sum_{i,j\in[q]}k_{ij}=n$}\right\}. \tag{1}\]
There are \(\binom{n+q^{2}-1}{q^{2}-1}=n^{O(1)}\) many integer sequences \((k_{i,j})\) such that \(k_{i,j}\geq 0\) and \(\sum_{i,j\in[q]}k_{i,j}=n\). \(X\) is defined as a set, not a multi-set. After removing repeated elements the cardinality \(|X|\) is also polynomial in \(n\). For fixed and given \(\mathbf{F}\) the elements in \(X\) can be enumerated in polynomial time in \(n\). (It is important that \(\mathbf{F}\) and \(q\) are all treated as fixed constants.) It then follows from the definition that \(\#_{M}(G,x)=0\) for any \(x\notin X\). This gives us the following relation:
\[Z_{M}(G)=\sum_{x\in X}x\cdot\#_{M}(G,x),\quad\mbox{for any graph $G$},\]
and thus, \(\mathtt{Pl\mbox{-}GH}(M)\leq\mathsf{COUNT}(M)\).
For the other direction, we construct, for any \(p\in[|X|]\) (recall that \(|X|\) is polynomial in \(n\)), a planar graph \(T_{p}G\) from \(G\) by replacing every edge of \(G\) with \(p\) parallel edges. Then,
\[Z_{M}(T_{p}G)=\sum_{x\in X}x^{p}\cdot\#_{M}(G,x),\quad\mbox{for any graph $G$}.\]
This is a Vandermonde system; it has full rank since elements in \(X\) are distinct by definition. So by querying \(\mathtt{PI\mbox{-}GH}(M)\) for the values of \(Z_{M}(T_{p}G)\), we can solve it in polynomial time and get \(\#_{M}(G,x)\) for every non-zero \(x\in X\). To obtain \(\#_{M}(G,0)\) (if \(0\in X\)), we note that
\[\sum_{x\in X}\#_{M}(G,x)=q^{|V|}.\]
This gives us a polynomial-time reduction and thus, \(\mathsf{COUNT}(M)\leq\mathtt{Pl\mbox{-}GH}(M)\). We have proved
**Lemma 1**.: _For any fixed \(M\in\mathbb{R}^{q\times q}\), \(\mathtt{Pl\mbox{-}GH}(M)\equiv\mathsf{COUNT}(M)\)._
Thus, \(\mathtt{Pl-GH}(M)\) can be identified with the problem of producing those polynomially many integer coefficients in the canonical expression for \(Z_{M}(G)\) as a sum of (distinct) terms from \(X\).
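The following small numerical illustration of the Vandermonde step in the proof of Lemma 1 recovers these coefficients for a toy matrix and a path of length 2; enumerating \(X\) by brute force here is only a shortcut for the demo (the proof enumerates it from Eq. (1) in polynomial time).

```python
import itertools
import numpy as np

def assignment_products(M, edges, num_vertices):
    """All per-assignment edge products (brute force; only to set up the demo)."""
    M = np.asarray(M, dtype=float)
    prods = []
    for sigma in itertools.product(range(M.shape[0]), repeat=num_vertices):
        p = 1.0
        for u, v in edges:
            p *= M[sigma[u], sigma[v]]
        prods.append(p)
    return prods

def vandermonde_recovery(M, edges, num_vertices):
    """Recover #_M(G, x) for the distinct nonzero product values x from the
    partition functions of the thickened graphs T_p G alone."""
    prods = assignment_products(M, edges, num_vertices)
    X = sorted({p for p in prods if p != 0.0})
    # Z_M(T_t G) = sum_x x^t * #_M(G, x) for t = 1, ..., |X|
    z = [sum(p ** t for p in prods) for t in range(1, len(X) + 1)]
    V = np.array([[x ** t for x in X] for t in range(1, len(X) + 1)])
    counts = np.linalg.solve(V, np.array(z))
    return {x: int(c) for x, c in zip(X, np.rint(counts))}

M = [[3., 1., 1.], [1., 2., 1.], [1., 1., 1.]]
print(vandermonde_recovery(M, edges=[(0, 1), (1, 2)], num_vertices=3))
# {1.0: 17, 2.0: 4, 3.0: 4, 4.0: 1, 9.0: 1}
```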
This formalistic view has the advantage that we can treat the complexity of \(\mathtt{Pl-GH}(M)\) for general \(M\), and not restricted to algebraic numbers. Thus, numbers such as \(e\) or \(\pi\) need not be excluded. More importantly, in this paper this generality is essential, due to the proof technique that we employ. Furthermore, once freed from this restriction we in fact explicitly use transcendental numbers as a tool in our proof (see Lemma 19). In short, in this paper, treating the complexity of \(\mathtt{Pl-GH}(M)\) for general real \(M\) is not a _bug_ but a _feature_.
However, we note that this treatment has the following subtlety. For the computational problem \(\mathtt{Pl-GH}(M)\) the formalistic view demands that \(\mathbf{F}\) be specified in the form \(\mathbf{F}=\mathbf{E}(\beta)\). Such a form exists, and its specification is of constant size when measured in terms of the size of the input graph \(G\). However, in reality many basic questions for transcendental numbers are unknown. For example, it is still unknown whether \(e+\pi\) or \(e\pi\) are rational, algebraic irrational or transcendental, and it is open whether \(\mathbb{Q}(e,\pi)\) has transcendence degree 2 (or 1) over \(\mathbb{Q}\), i.e., whether \(e\) and \(\pi\) are algebraically independent. The formalistic view here non-constructively assumes this information is given for \(\mathbf{F}\). A polynomial time reduction \(\Pi_{1}\leq\Pi_{2}\) from one problem to another in this setting merely implies that the _existence_ of a polynomial time algorithm for \(\Pi_{2}\) logically implies the _existence_ of a polynomial time algorithm for \(\Pi_{1}\). We do not actually obtain such an algorithm constructively.
This logical detour notwithstanding, if a reader is interested only in the complexity of \(\mathtt{Pl-GH}(M)\) for integer matrices \(M\), then the complexity dichotomy proved in this paper holds according to the standard definition of \(\mathtt{Pl-GH}(M)\) for integral \(M\) in terms of the model of computation; the fact that this is proved in a broader setting for all real matrices \(M\) is irrelevant. This is akin to the situation in analytic number theory, where one might be interested in a question strictly about the ordinary integers, but the theorems are proved in a broader setting of analysis.
## 3 Reduction to Boolean domain matrices
In this section, we handle those \(3\times 3\) matrices for which the planar graph homomorphism problem is equivalent to the same problem on a \(2\times 2\) matrix. We first state the following theorem [27]:
**Theorem 2**.: _The problem \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard, for \(M=(\begin{smallmatrix}x&y\\ y&z\end{smallmatrix})\), unless one of the following conditions holds, in which case \(\mathtt{Pl-GH}(M)\) is polynomially tractable:_
\[\text{(1)}\ \ xz=y^{2},\ \ \ \ \text{(2)}\ \ y=0,\ \ \ \ \text{(3)}\ \ x=z,\ \ \ \ \text{or}\ \ \ \ \text{(4)}\ \ xz=-y^{2}\ \ \&\ x=-z.\]
Case (3) (the Ising model) consists of precisely the problems for which \(\mathtt{Pl-GH}(M)\) is P-time solvable, but \(\mathtt{GH}(M)\) is \(\#\)P-hard; these are also exactly the ones solvable by a holographic algorithm using matchgates.
### Reducible matrices
**Definition 3**.: _A \(q\times q\) symmetric matrix \(M\) is reducible if there exists a permutation matrix \(P\), such that \(PMP^{\tt T}\) is a direct sum \(A\oplus B=\begin{pmatrix}A&\mathbf{0}\\ \mathbf{0}&B\end{pmatrix}\) for some (nonempty) matrices \(A\) and \(B\). A symmetric matrix \(M\) is irreducible if it is not reducible. Specialized to \(q=3\), a symmetric matrix \(M\) is reducible if it has the form_
\[M^{\pi}=\begin{pmatrix}x&y&0\\ y&z&0\\ 0&0&w\end{pmatrix} \tag{2}\]
_after its rows and columns are permuted by a permutation \(\pi\)._
As \(M\) is symmetric, the permutation \(\pi\) on the rows and columns must be the same. Clearly \(Z_{M}(G)=Z_{M^{\pi}}(G)\) for all \(G\). For a reducible \(q\times q\) matrix \(M^{\pi}=A\oplus B\), it is known from [11] (Lemma 4.3) that \(\mathtt{GH}(M)\) is #P-hard iff at least one of \(\mathtt{GH}(A)\) or \(\mathtt{GH}(B)\) is #P-hard, and \(\mathtt{GH}(M)\) is P-time tractable iff both \(\mathtt{GH}(A)\) and \(\mathtt{GH}(B)\) are P-time tractable. The P-time tractability statement holds for \(\mathtt{Pl-GH}\) as well. Also, it can be checked that the proof for #P-hardness in [11] (Lemma 4.3) also works in the planar setting. So, we have the following
**Lemma 4**.: _For a \(q\times q\) reducible symmetric matrix \(M\) such that \(M=A\oplus B\), \(\mathtt{Pl-GH}(M)\) is #P-hard iff at least one of \(\mathtt{Pl-GH}(A)\) or \(\mathtt{Pl-GH}(B)\) is #P-hard, and \(\mathtt{Pl-GH}(M)\) is P-time tractable iff both \(\mathtt{Pl-GH}(A)\) and \(\mathtt{Pl-GH}(B)\) are P-time tractable._
Specifically for the \(q=3\) case, we see that if \(M\) is reducible, \(M\) is in form (2), and \(\mathtt{Pl-GH}(B)\) is trivially tractable since \(B\) is a \(1\times 1\) matrix. Therefore, we have the following lemma.
**Lemma 5**.: _For \(M\) in form (2), \(\mathtt{Pl-GH}(M)\equiv\mathtt{Pl-GH}(M^{\prime})\), where \(M^{\prime}=(\begin{smallmatrix}x&y\\ y&z\end{smallmatrix})\)._
Thus, Theorem 2 already classifies \(\mathtt{Pl-GH}(M)\) for reducible matrices.
### Twinned matrices
\(\mathtt{Pl-GH}(M)\) is equivalent to a problem on \(2\times 2\) matrices for another set of matrices \(M\).
**Definition 6**.: _A \(q\times q\) symmetric matrix \(M\) is a twinned matrix if any of its rows (columns) is a multiple of another row (column). Specialized to \(q=3\), a symmetric matrix \(M\) is a twinned matrix if it has the form_
\[M^{\pi}=\begin{pmatrix}x&cx&y\\ cx&c^{2}x&cy\\ y&cy&z\end{pmatrix} \tag{3}\]
_after its rows and columns are permuted by a permutation \(\pi\)._
Let \(M^{\prime}=\begin{pmatrix}x&y\\ y&z\end{pmatrix}\), and \(\mathcal{D}=\{D^{[r]}\}_{r\in\mathbb{N}}\), where \(D^{[r]}\) denotes vertex weights \(D^{[r]}=\begin{pmatrix}1+c^{r}&0\\ 0&1\end{pmatrix}\) for vertices of degree \(r\). Define \(\mathtt{Pl-GH}(M^{\prime},\mathcal{D})\) to be the problem of evaluating
\[Z_{M^{\prime},\mathcal{D}}(G)=\sum_{\sigma:V\to[2]}\prod_{\{u,v\}\in E}m^{ \prime}_{\sigma(u)\sigma(v)}\prod_{v\in V}D^{[\deg(v)]}_{\sigma(v)\sigma(v)},\]
for any planar graph \(G=(V,E)\).
Then for \(M\) in form (3),
\[Z_{M}(G) =\sum_{\sigma:V\to\{1,2\}}\prod_{\{u,v\}\in E}m^{\prime}_{\sigma(u) \sigma(v)}\left(\sum_{\tau:\sigma^{-1}(1)\to\{-,+\}}\left(\prod_{v\in\tau^{-1}(+ )}(c^{\text{deg}(v)})\right)\right)\] \[=\sum_{\sigma:V\to\{1,2\}}\prod_{\{u,v\}\in E}m^{\prime}_{\sigma(u )\sigma(v)}\prod_{v\in V}D^{[\text{deg}(v)]}_{\sigma(v)\sigma(v)}\] \[=Z_{M^{\prime},\mathcal{D}}(G).\]
Therefore, \(\texttt{Pl-GH}(M)\equiv\texttt{Pl-GH}(M^{\prime},\mathcal{D})\). The following theorem is adapted from [26]. (We give a proof for completeness in Appendix A.)
**Theorem 7**.: _For a full rank \(M^{\prime}\) and \(\mathcal{D}\) given above, there exists \(p_{0}\geq 1\), such that for all \(p\geq p_{0}\),_
\[\texttt{Pl-GH}(N_{p})\leq\texttt{Pl-GH}(M^{\prime},\mathcal{D})\text{ where }N_{p}=\begin{pmatrix}c_{p}^{2}x&c_{p}y\\ c_{p}y&z\end{pmatrix}\text{ and }c_{p}=\frac{1+c^{2p+1}}{1+c^{2p}}.\]
Before we prove the hardness for general twinned matrices, we do need to consider a special case where the problem is tractable.
**Definition 8**.: _A \(q\times q\) symmetric matrix \(M\) is a bipartite matrix if it has the form \(\begin{pmatrix}\mathbf{0}&A\\ A^{\mathsf{T}}&\mathbf{0}\end{pmatrix}\) for some \(r\times(q-r)\) matrix \(A\), where \(0<r<q\), after the rows and columns of \(M\) are permuted by a permutation \(\pi\)._
Specialized to \(q=3\), irreducible bipartite matrices happen to just be twinned matrices such that \(x=z=0\). It is known that \(\texttt{Pl-GH}(M)\) is tractable in this case [7, 25].
**Lemma 9**.: _Let \(M\) be a twinned real-valued symmetric matrix in form (3). Assume \(M\) is irreducible and non-bipartite. Then \(\texttt{Pl-GH}(M)\) is \(\#\)P-hard, unless \(xz=y^{2}\), in which case, it is polynomial-time tractable._
Proof.: If \(xz=y^{2}\) then \(M\) is block rank 1, and \(\texttt{Pl-GH}(M)\) is polynomial-time tractable [7, 25]. Now we assume that this does not occur. Note that \(y\neq 0\) and \(c\neq 0\), for otherwise \(M\) would be reducible. Also, \((x,z)\neq\mathbf{0}\), for otherwise \(M\) would be bipartite.
Then for the \(M^{\prime}\) and \(\mathcal{D}\) given above, Theorem 7 applies, and we only need to prove that \(\texttt{Pl-GH}(N_{p})\) is \(\#\)P-hard for a large positive integer \(p\).
We first assume \(c\neq\pm 1\). Then \(c_{p}y\neq 0\) for any \(p\), and also \(\det(N_{p})\neq 0\). It is easy to verify that
\[c_{p+1}-c_{p}=\frac{c^{2p}(c+1)(c-1)^{2}}{(1+c^{2p})(1+c^{2p+2})}.\]
Since \(c\neq\pm 1\), we have a strictly monotonic sequence \(\{c_{p}\mid p\geq 1\}\). Hence for a large \(p\), \(c_{p}^{2}x\neq\pm z\). Then \(\texttt{Pl-GH}(N_{p})\) is \(\#\)P-hard by Theorem 2. It follows that \(\texttt{Pl-GH}(M)\) is also \(\#\)P-hard.
Now suppose \(c=\pm 1\). We have several cases.
**Case 1: \(x^{2}z^{2}\neq y^{4}\) and \(x^{2}\neq z^{2}\)**
In this case we use \(T_{2}M\). From Theorem 7, we have \(\texttt{Pl-GH}(N)\leq\texttt{Pl-GH}(T_{2}M)\leq\texttt{Pl-GH}(M)\), where \(N=\left(\begin{smallmatrix}x^{2}&y^{2}\\ y^{2}&z^{2}\end{smallmatrix}\right)\). It follows from Theorem 2 that \(\texttt{Pl-GH}(N)\) is \(\#\)P-hard, and so is \(\texttt{Pl-GH}(M)\).
**Case 2: \(x^{2}z^{2}\neq y^{4}\) and \(x^{2}=z^{2}\)**
In this case we use \(S_{2}T_{2}M\). From Theorem 7, we have \(\texttt{Pl-GH}(N^{\prime})\leq\texttt{Pl-GH}(S_{2}T_{2}M)\leq\texttt{Pl-GH}(M)\), where \(N^{\prime}=\left(\begin{smallmatrix}2x^{4}+y^{4}&3x^{2}y^{2}\\ 3x^{2}y^{2}&x^{4}+2y^{4}\end{smallmatrix}\right)\). Since \(x^{2}z^{2}\neq y^{4}\), the rank of \(T_{2}M\) is \(2\), and therefore, so is the rank of \(S_{2}T_{2}M\). Therefore, it follows that \(N^{\prime}\) is also a rank \(2\) matrix, and so, \(\det(N^{\prime})\neq 0\). Moreover, we note that since \(x^{2}=z^{2}\), \(x^{2}z^{2}=x^{4}\neq y^{4}\). Therefore, we see that \(2x^{4}+y^{4}\neq x^{4}+2y^{4}\). So \(\texttt{Pl-GH}(N^{\prime})\) is \(\#\)P-hard from Theorem 2 and so is \(\texttt{Pl-GH}(M)\).
**Case 3: \(x^{2}z^{2}=y^{4}\) and \(2x^{2}\neq y^{2}\)**
In this case we use \(S_{2}M\). As \(xz\neq y^{2}\), \(x^{2}z^{2}=y^{4}\) gives \(xz=-y^{2}\). Since \(y\neq 0\), we have \(x\neq 0\) and \(z\neq 0\). Clearly \(M=C^{\intercal}M^{\prime}C\) where \(C=\left(\begin{smallmatrix}1&c&0\\ 0&0&1\end{smallmatrix}\right)\) and \(M^{\prime}=\left(\begin{smallmatrix}x&y\\ y&z\end{smallmatrix}\right)\). Then \(S_{2}M\) has the matrix \(M^{2}=C^{\intercal}M^{\prime\prime}C\), where \(M^{\prime\prime}=M^{\prime}\left(\begin{smallmatrix}2&0\\ 0&1\end{smallmatrix}\right)M^{\prime}=\left(\begin{smallmatrix}x^{\prime}&y^{ \prime}\\ y^{\prime}&z^{\prime}\end{smallmatrix}\right)\), with \(x^{\prime}=2x^{2}+y^{2},y^{\prime}=2xy+yz\) and \(z^{\prime}=2y^{2}+z^{2}\).
Clearly \(M^{2}\) has a \(2\times 2\) submatrix \(M^{\prime\prime}\) with \(\det(M^{\prime\prime})\neq 0\). Also \(x^{\prime}z^{\prime}+(y^{\prime})^{2}>0\) since \(y^{2}>0\). Therefore, \((x^{\prime})^{2}(z^{\prime})^{2}-(y^{\prime})^{4}\neq 0\). Also \((x^{\prime},z^{\prime})\neq(0,0)\). Finally, since \(2x^{2}\neq y^{2}\), we have \(y^{\prime}\neq 0\), so \(S_{2}M\) is irreducible. Therefore, it follows from cases 1 and 2 that \(\texttt{Pl-GH}(S_{2}M)\) is \(\#\)P-hard. Since \(\texttt{Pl-GH}(S_{2}M)\leq\texttt{Pl-GH}(M)\), so is \(\texttt{Pl-GH}(M)\).
**Case 4: \(x^{2}z^{2}=y^{4}\) and \(2x^{2}=y^{2}\)**
In this case we use \(T_{3}M\). Again we have \(xz=-y^{2}\). So, \(z=-\frac{y^{2}}{x}=-\frac{2x^{2}}{x}=-2x\). Then \(T_{3}M=\left(\begin{smallmatrix}1&c&0\\ 0&0&1\end{smallmatrix}\right)^{\intercal}\left(\begin{smallmatrix}x^{\prime}&y^{ \prime}\\ y^{\prime}&z^{\prime}\end{smallmatrix}\right)\left(\begin{smallmatrix}1&c&0\\ 0&0&1\end{smallmatrix}\right)\), where \(x^{\prime}=x^{3},y^{\prime}=y^{3}\) and \(z^{\prime}=-8x^{3}\). Therefore, \(T_{3}M\) is irreducible, \((x^{\prime},z^{\prime})\neq(0,0)\), and \(x^{\prime}z^{\prime}\neq(y^{\prime})^{2}\). Also, \((y^{\prime})^{2}=y^{6}=8x^{6}\neq 2x^{6}=2(x^{\prime})^{2}\). Therefore, this case is reduced to case 3, and \(\texttt{Pl-GH}(T_{3}M)\) is therefore \(\#\)P-hard. Since \(\texttt{Pl-GH}(T_{3}M)\leq\texttt{Pl-GH}(M)\), it implies that \(\texttt{Pl-GH}(M)\) is \(\#\)P-hard.
Combining the results from Lemma 9 and Theorem 2 we have
**Theorem 10**.: _If \(M\) is a twinned real-valued symmetric matrix in form (3), then \(\texttt{Pl-GH}(M)\) is \(\#\)P-hard, except in the following cases where \(\texttt{Pl-GH}(M)\) is polynomial-time tractable:_
_(1)_ \(y=0,\) _(2)_ \(x=z=0,\) _(3)_ \(xz=y^{2},\) _(4)_ \(c=0\) & \(x=z\) _or (5)_ \(c=0\) & \(x=-z\) & \(xz=-y^{2}\)_._
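For concreteness, the tractability cases of Theorem 10 can be collected in a small checker. This is only an illustrative sketch of the stated conditions (the function name and the exact-equality tests on real parameters are assumptions made for the example), not part of any proof.

```python
# Which tractable case of Theorem 10, if any, does a twinned matrix of form (3)
# with parameters (x, y, z, c) fall into?  Exact comparisons are used here,
# which is only meaningful for exactly represented (e.g. rational) inputs.
def tractable_case_theorem_10(x, y, z, c):
    if y == 0:
        return "case (1): y = 0"
    if x == 0 and z == 0:
        return "case (2): x = z = 0"
    if x * z == y ** 2:
        return "case (3): xz = y^2"
    if c == 0 and x == z:
        return "case (4): c = 0 and x = z"
    if c == 0 and x == -z and x * z == -y ** 2:
        return "case (5): c = 0, x = -z and xz = -y^2"
    return None          # no tractable case applies: Pl-GH(M) is #P-hard

print(tractable_case_theorem_10(1, 1, 1, 2))    # case (3)
print(tractable_case_theorem_10(2, 1, 3, 2))    # None
```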
## 4 Interpolation by Thickening
We have now classified, as polynomial-time tractable or #P-hard, all problems \(\mathtt{Pl-GH}(M)\) that are equivalent to a problem of domain size two. We will now consider the problems for which such equivalences do not hold. In this section we furthermore do not restrict ourselves to \(q=3\), but instead consider the problem more generally.
Let us now consider the thickening operation more closely. We note that
\[Z_{M}(T_{k}G)=Z_{T_{k}M}(G)=\sum_{x\in X}x^{k}\cdot\#_{M}(G,x) \tag{4}\]
where \(X\) is as in Eq. (1). Note that \(\#_{M}(G,x)\) does not depend on \(k\), but depends on the entries of the matrix \(M\). We will deal with this dependence now.
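Since \(T_{k}G\) replaces each edge by \(k\) parallel edges, \(T_{k}M\) is simply the entrywise \(k\)-th power of \(M\), and the identity \(Z_{M}(T_{k}G)=Z_{T_{k}M}(G)\) can be checked by brute force on a small graph. A sketch in Python, with illustrative numbers:

```python
# Check Z_M(T_k G) = Z_{T_k M}(G) on a triangle: parallel edges multiply the
# edge weights, so T_k M raises every entry of M to the k-th power.
from itertools import product
from math import prod, isclose

def Z(n, edges, M):
    q = len(M)
    return sum(prod(M[s[u]][s[v]] for u, v in edges)
               for s in product(range(q), repeat=n))

def thicken_matrix(M, k):
    return [[entry ** k for entry in row] for row in M]

M = [[1.0, 2.0, 0.5], [2.0, 3.0, 1.0], [0.5, 1.0, 2.0]]
edges = [(0, 1), (1, 2), (0, 2)]
lhs = Z(3, edges * 3, M)                  # T_3 G: three parallel copies of each edge
rhs = Z(3, edges, thicken_matrix(M, 3))   # T_3 M: entrywise cube
assert isclose(lhs, rhs, rel_tol=1e-9)
```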
### Generating sets
**Definition 11**.: _Let \(\mathcal{A}\) be a set of nonzero real numbers. A finite set of positive real numbers \(\{g_{t}\}_{t\in[d]}\), for some integer \(d\geq 0\), is a generating set of \(\mathcal{A}\) if for every \(a\in\mathcal{A}\), there exists a unique \((e_{1},\ldots,e_{d})\in\mathbb{Z}^{d}\) such that \(a=\pm g_{1}^{e_{1}}\cdots g_{d}^{e_{d}}\)._
**Lemma 12**.: _Every finite set \(\mathcal{A}\) of nonzero real numbers has a generating set._
Proof.: Consider the multiplicative group \(\mathcal{G}\) generated by the positive real numbers \(\{|a|:a\in\mathcal{A}\}\). It is a subgroup of the multiplicative group \((\mathbb{R}^{+},\cdot)\). Since \(\mathcal{A}\) is finite, and \((\mathbb{R}^{+},\cdot)\) is torsion-free, the group \(\mathcal{G}\) is a finitely generated free Abelian group, and thus isomorphic to \(\mathbb{Z}^{d}\) for some \(d\geq 0\). Let \(\{g_{t}\}_{t\in[d]}\) be a basis of this free Abelian group, the lemma follows.
We note that \(\{g_{t}\}_{t\in[d]}\subset\mathcal{G}\) and \(\{\log g_{t}\}_{t\in[d]}\) is linearly independent over \(\mathbb{Q}\).
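When the entries happen to be nonzero rationals, a generating set can be computed explicitly from prime factorizations; the sketch below illustrates only this special case (the entries of a general real matrix \(M\) need not be rational, and the function names are assumptions made for the example).

```python
# For nonzero rationals, the primes dividing the numerators and denominators
# form a generating set, and the exponent vector of each value is unique.
from fractions import Fraction
from sympy import factorint

def generating_set(values):
    primes = set()
    for v in values:
        v = Fraction(v)
        primes |= set(factorint(abs(v.numerator)))
        primes |= set(factorint(v.denominator))
    return sorted(primes)

def exponents(value, primes):
    v = Fraction(value)
    e = {p: 0 for p in primes}
    for p, k in factorint(abs(v.numerator)).items():
        e[p] += k
    for p, k in factorint(v.denominator).items():
        e[p] -= k
    return [e[p] for p in primes]

vals = [Fraction(6), Fraction(-4, 9), Fraction(3, 2)]
g = generating_set(vals)                        # here: [2, 3]
print(g, [exponents(v, g) for v in vals])       # exponent vectors over {2, 3}
```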
We now use Lemma 12 to find a generating set for the entries \((m_{ij})_{i,j\in[q]}\) of any matrix \(M\in\mathbb{R}^{q\times q}\) with no zero entries. Note that this generating set need not be unique. With respect to a generating set, for any \(m_{ij}\), there are unique integers \(e_{ij0}\in\{0,1\}\), \(e_{ij1},\ldots,e_{ijd}\), such that
\[m_{ij}=(-1)^{e_{ij0}}\cdot g_{1}^{e_{ij1}}\cdots g_{d}^{e_{ijd}}. \tag{5}\]
We also note that \(\texttt{Pl-GH}(M)\equiv\texttt{Pl-GH}(cM)\) for any real \(c\neq 0\), since \(Z_{cM}(G)=c^{|E(G)|}Z_{M}(G)\). By choosing some \(c=g_{1}^{e_{1}}\cdots g_{d}^{e_{d}}\) we may assume that \(e_{ijt}\geq 0\) for all \(i,j\in[q]\) and \(t\in[d]\).
**Lemma 13**.: _Let \(M\in\mathbb{R}^{q\times q}\) be symmetric with no zero entries, with entries \((m_{ij})_{1\leq i\leq j\leq q}\) given in Eq. (5). Define \(\mathcal{M}:\mathbb{R}^{d}\to\mathbb{R}^{q\times q}\) where \(\mathcal{M}(\mathbf{p})_{ij}=\mathcal{M}(p_{1},\ldots,p_{d})_{ij}=(-1)^{e_{ij0}}\cdot p_{1}^{e_{ij1}}\cdots p_{d}^{e_{ijd}}\) for all \(i,j\in[q]\). Then, \(\mathtt{Pl-GH}(\mathcal{M}(\mathbf{p}))\leq\mathtt{Pl-GH}(M)\) for all \(\mathbf{p}\in\mathbb{R}^{d}\)._
Proof.: We already know that for any \(k\geq 1\),
\[Z_{M}(T_{k}G)=\sum_{x\in X}x^{k}\cdot\#_{M}(G,x),\]
where
\[X=\left\{\prod_{i,j\in[q]}m_{ij}^{k_{ij}}\ \Big{|}\ \text{integers}\ k_{ij}\geq 0 \ \text{and}\ \sum_{i,j\in[q]}k_{ij}=|E|\right\}.\]
Since the values \(x\in X\) are distinct and \(|X|\leq|E|^{O(1)}\), if we can compute \(Z_{M}(T_{k}G)\) for \(k\in[|X|]\), we have a full rank Vandermonde system of linear equations, which can be solved in polynomial time to find \(\#_{M}(G,x)\) for all \(x\in X\).
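The interpolation step can be illustrated with a toy Vandermonde system; the numbers below are placeholders standing in for the oracle values \(Z_{M}(T_{k}G)\) and the unknown counts \(\#_{M}(G,x)\) (a sketch, assuming numpy).

```python
# Recover the coefficients #_M(G, x) from the values of Z_M(T_k G), k = 1..|X|,
# by solving the Vandermonde system  sum_x x^k * c_x = Z_M(T_k G).
import numpy as np

xs = np.array([2.0, 3.0, 6.0])        # the distinct values in X (illustrative)
c_true = np.array([5.0, 1.0, 2.0])    # hypothetical counts #_M(G, x)
ks = np.arange(1, len(xs) + 1)
V = xs[None, :] ** ks[:, None]        # V[k-1, j] = xs[j] ** k
rhs = V @ c_true                      # plays the role of the oracle values
recovered = np.linalg.solve(V, rhs)
assert np.allclose(recovered, c_true)
```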
Now, let us consider the set \(X\) more closely. Given any \(x\in X\), we see that \(x=\prod m_{ij}^{k_{ij}}\) for some (not necessarily unique) integers \(k_{ij}\geq 0\) such that \(\sum k_{ij}=|E|\). From Eq. (5), we know that each \(m_{ij}\) is generated by the set \(\{g_{t}\}_{t\in[d]}\). Therefore, any \(x\in X\) can be represented as
\[x=(-1)^{e_{0}^{x}}g_{1}^{e_{1}^{x}}\cdots g_{d}^{e_{d}^{x}}\]
Moreover, the exponents \(e_{0}^{x}\in\{0,1\}\), and \(e_{1}^{x},\ldots,e_{d}^{x}\in\mathbb{Z}\) are unique, since \(\{g_{t}\}_{t\in[d]}\) is a generating set.
Consider a fixed \(\mathbf{p}=(p_{1},\ldots,p_{d})\in\mathbb{R}^{d}\). We can now define the function \(y:X\to\mathbb{R}\), such that \(y(x)=(-1)^{e_{0}^{x}}\cdot p_{1}^{e_{1}^{x}}\cdots p_{d}^{e_{d}^{x}}\) for all \(x\in X\). Now, let
\[Y=\left\{\prod_{i,j\in[q]}(\mathcal{M}(\mathbf{p})_{ij}^{k_{ij}}\Bigm{|}\text { integers }k_{ij}\geq 0\text{ and }\sum_{i,j\in[q]}k_{ij}=|E|\right\}.\]
We note that for any \(y\in Y\),
\[\#_{\mathcal{M}(\mathbf{p})}(G,y)=\sum_{x\in X:\ y(x)=y}\#_{M}(G,x).\]
Since \(\#_{M}(G,x)\) have already been computed for all \(x\in X\), we can now compute in polynomial time,
\[\sum_{x\in X}y(x)\cdot\#_{M}(G,x)=\sum_{y\in Y}y\cdot\#_{\mathcal{M}(\mathbf{ p})}(G,y)=Z_{\mathcal{M}(\mathbf{p})}(G).\]
Therefore, \(\mathtt{Pl-GH}(\mathcal{M}(\mathbf{p}))\leq\mathtt{Pl-GH}(M)\).
We need the following theorem from [43]:
**Theorem 14**.: _For \(x,y\in\mathbb{C}\), evaluating the Tutte polynomial at \((x,y)\) is \(\#\)P-hard over planar graphs unless \((x-1)(y-1)\in\{1,2\}\) or \((x,y)\in\{(1,1),(-1,-1),(\omega,\omega^{2}),(\omega^{2},\omega)\}\), where \(\omega=e^{2\pi i/3}\). In each exceptional case, the problem is in polynomial time._
**Corollary 15**.: \(\mathtt{Pl-GH}(\mathrm{VC}_{q})\) _is \(\#\)P-hard for \(q\geq 3\), where \(\mathrm{VC}_{q}\) is the \(q\times q\) matrix with entries \((v_{ij})\) such that \(v_{ij}=1\) if \(i\neq j\), and \(v_{ij}=0\) otherwise._
Theorem 14 allows us to prove our first hardness result.
**Lemma 16**.: _Let \(M\) be a \(q\times q\) (\(q\geq 3\)) real-valued, symmetric matrix with no zero entries, as given in Eq. (5). Furthermore, assume for all \(i\in[q]\) there exists some (not necessarily distinct) \(t(i)\in\{1,\ldots,d\}\), such that \(e_{ii\,t(i)}>0\), and \(e_{jk\,t(i)}=0\) for all \(j\neq k\). Then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard._
Proof.: We apply Lemma 13. Let \(\mathbf{p}\in\mathbb{R}^{d}\) be defined by \(p_{t(i)}=0\) for \(i\in[q]\), and \(p_{t}=1\) for all other \(t\in[d]\). Then, it is easy to see that \(T_{2}(\mathcal{M}(\mathbf{p}))=\mathrm{VC}_{q}\).
From Lemma 13, we get \(\mathtt{Pl-GH}(\mathcal{M}(\mathbf{p}))\leq\mathtt{Pl-GH}(M)\). Therefore,

\[\mathtt{Pl-GH}(\mathrm{VC}_{q})\leq\mathtt{Pl-GH}(\mathcal{M}(\mathbf{p}))\leq \mathtt{Pl-GH}(M).\]

It follows from Corollary 15 that \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard.
## 5 Interpolation by Stretching
In this section we focus on full rank matrices. Using stretching, we shall prove the hardness of a more interesting class of matrices than we were able to in Lemma 16.
Consider a \(q\times q\) positive real valued, symmetric matrix \(M\). There exist a real orthogonal matrix \(H=(h_{ij})_{i,j\in[q]}\) and a diagonal matrix \(D=\operatorname{diag}(\lambda_{1},\ldots,\lambda_{q})\), such that
\[M=HDH^{\mathsf{T}},\]
where \(\lambda_{1},\ldots,\lambda_{q}\) are the eigenvalues of \(M\) and the columns of \(H\) are the corresponding eigenvectors.
### Lattice condition
From the decomposition \(M=HDH^{\mathsf{T}}\), we have \(M^{k}=HD^{k}H^{\mathsf{T}}\), and
\[(M^{k})_{ij}=(h_{i1}h_{j1})\lambda_{1}^{k}+\cdots+(h_{iq}h_{jq})\lambda_{q}^{k}.\]
It follows that
\[Z_{M^{k}}(G)=\sum_{\begin{subarray}{c}x_{1},\ldots,x_{q}\geq 0\\ \sum_{i}x_{i}=|E|\end{subarray}}c_{(x_{i})_{i\leq q}}\cdot\left(\lambda_{1}^{ x_{1}}\cdots\lambda_{q}^{x_{q}}\right)^{k}, \tag{6}\]
where
\[c_{(x_{i})_{i\leq q}}=\sum_{\sigma:V\to[q]}\left(\sum_{\begin{subarray}{c}E_ {1}\sqcup\cdots\sqcup E_{q}=E\\ |E_{i}|=x_{i}\end{subarray}}\left(\prod_{i\in[q]}\prod_{\{u,v\}\in E_{i}}h_{ \sigma(u)i}h_{\sigma(v)i}\right)\right)\]
depends only on \(G\) and the orthogonal matrix \(H\), but not on \(D\).
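Stretching an edge into a path of \(k\) edges and summing out the internal vertices replaces the edge weight matrix \(M\) by \(M^{k}\), which is the identity \(Z_{M}(S_{k}G)=Z_{M^{k}}(G)\) used below. A brute-force sketch with illustrative numbers (assuming numpy):

```python
# Check Z_M(S_k G) = Z_{M^k}(G) on a triangle by explicitly subdividing edges.
import numpy as np
from itertools import product
from math import prod, isclose

def Z(n, edges, M):
    q = M.shape[0]
    return sum(prod(M[s[u], s[v]] for u, v in edges)
               for s in product(range(q), repeat=n))

def stretch(n, edges, k):
    """Subdivide every edge into a path of k edges; returns (n', edges')."""
    new_edges, nxt = [], n
    for u, v in edges:
        chain = [u] + [nxt + i for i in range(k - 1)] + [v]
        nxt += k - 1
        new_edges += list(zip(chain, chain[1:]))
    return nxt, new_edges

M = np.array([[1.0, 2.0, 0.5], [2.0, 3.0, 1.0], [0.5, 1.0, 2.0]])
edges = [(0, 1), (1, 2), (0, 2)]
n3, e3 = stretch(3, edges, 3)
assert isclose(Z(n3, e3, M), Z(3, edges, np.linalg.matrix_power(M, 3)),
               rel_tol=1e-9)
```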
**Definition 17**.: _A nonempty set of nonzero real numbers \((r_{i})_{i\in[d]}\) satisfies the lattice condition, if the only integer sequence \((n_{i})_{i\in[d]}\) with the property \(n_{1}+\cdots+n_{d}=0\) and \(r_{1}^{n_{1}}\cdots r_{d}^{n_{d}}=1\) is \((n_{i})_{i\in[d]}=\mathbf{0}\)._
**Lemma 18**.: _If \(M\) is a \(q\times q\) full rank, real valued, symmetric matrix, whose eigenvalues \((\lambda_{1},\ldots,\lambda_{q})\) satisfy the lattice condition then \(\mathtt{Pl-GH}(H\Delta H^{\mathsf{T}})\leq\mathtt{Pl-GH}(M)\) for any diagonal matrix \(\Delta\)._
Proof.: By the lattice condition, for any integer sequences \((x_{i})_{i\leq q}\) and \((y_{i})_{i\leq q}\) with \(\prod_{i\in[q]}\lambda_{i}^{x_{i}}=\prod_{i\in[q]}\lambda_{i}^{y_{i}}\) and \(\sum_{i\leq q}x_{i}=\sum_{i\leq q}y_{i}\) we get \(x_{i}=y_{i}\) for all \(i\in[q]\). Therefore, from the values \(Z_{M}(S_{k}G)=Z_{M^{k}}(G)\) for \(k\in\left[\binom{|E|+q-1}{q-1}\right]\), we have a full-rank Vandermonde system of linear equations with unknowns \(c_{(x_{i})_{i\leq q}}\). Solving this linear system in polynomial time, we can compute
\[\sum_{\begin{subarray}{c}x_{1},\ldots,x_{q}\\ \sum x_{i}=|E|\end{subarray}}c_{(x_{i})_{i\leq q}}\cdot\left(\alpha_{1}^{x_{1} }\cdots\alpha_{q}^{x_{q}}\right)=Z_{H\Delta H^{\mathsf{T}}}(G)\]
for any \(\Delta=\operatorname{diag}(\alpha_{1},\ldots,\alpha_{q})\). Therefore, \(\mathtt{Pl-GH}(H\Delta H^{\mathsf{T}})\leq\mathtt{Pl-GH}(M)\).
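The lattice condition of Definition 17 can be tested exactly when the numbers involved are positive rationals, by linear algebra on their prime-exponent vectors. The eigenvalues appearing in this section are in general irrational, so the following is only an illustrative sketch of the definition (assuming sympy).

```python
# Positive rationals r_1..r_d satisfy the lattice condition iff the matrix whose
# columns are their prime-exponent vectors, augmented with an all-ones row (for
# the constraint n_1 + ... + n_d = 0), has full column rank.
from fractions import Fraction
from sympy import Matrix, factorint

def satisfies_lattice_condition(rs):
    rs = [Fraction(r) for r in rs]
    assert all(r > 0 for r in rs)
    primes = sorted({p for r in rs
                     for p in set(factorint(r.numerator)) | set(factorint(r.denominator))})
    def expvec(r):
        num, den = factorint(r.numerator), factorint(r.denominator)
        return [num.get(p, 0) - den.get(p, 0) for p in primes]
    A = Matrix([expvec(r) for r in rs]).T          # rows: primes, columns: r_i
    A = A.col_join(Matrix([[1] * len(rs)]))        # append the sum-zero constraint
    return A.rank() == len(rs)

print(satisfies_lattice_condition([2, 4, 8]))      # False: 2 * 4**(-2) * 8 = 1
print(satisfies_lattice_condition([2, 3, 5]))      # True
```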
We now prove that there exists some \(\Delta\) such that \(\mathtt{Pl-GH}(H\Delta H^{\mathsf{T}})\) is \(\#\)P-hard.
**Lemma 19**.: _If \(M=HDH^{\texttt{T}}\) is a \(q\times q\) (\(q\geq 3\)) full rank, positive real valued, symmetric matrix, whose eigenvalues \((\lambda_{1},\ldots,\lambda_{q})\) satisfy the lattice condition, then there exists a diagonal matrix \(\Delta\) such that \(\mathtt{Pl-GH}(H\Delta H^{\texttt{T}})\) is \(\#\)P-hard._
Proof.: Let \((m_{ij})_{i,j\in[q]}\) be the entries of the matrix \(M\). By assumption, we know that these are positive reals. We pick some \(\kappa\in\mathbb{R}\) such that \(\kappa+m_{ii}>0\) for all \(i\), and \(\kappa\) is transcendental over the field \(\mathbf{F}=\mathbb{Q}(\{m_{ij}\}_{i,j\in[q]})\). Such a transcendental real number exists because there are only a countable number of algebraic numbers over \(\mathbf{F}\). Let \(\{g_{i}\}_{i\in[d]}\) (respectively, \(\{f_{i}\}_{i\in[d^{\prime}]}\)) be a basis of the multiplicative free Abelian group generated by \(\{m_{ij}\}_{i\neq j\in[q]}\) (respectively, \(\{\kappa+m_{ii}\}_{i\in[q]}\)) as in the proof of Lemma 12. Finally, we let \(\Delta=D+\kappa I\).
**Claim 20**.: \(\{g_{i}\}_{i\in[d]}\cup\{f_{i}\}_{i\in[d^{\prime}]}\) _is a generating set of the entries of \(H\Delta H^{\texttt{T}}\)._
Clearly, every element of \(H\Delta H^{\texttt{T}}\) can be expressed as a product of integer powers of \(\{g_{i}\}_{i\in[d]}\cup\{f_{i}\}_{i\in[d^{\prime}]}\), by construction. We now want to show uniqueness of such an expression.
Being in the Abelian group generated by \(\{\kappa+m_{jj}\}_{j\in[q]}\), there exist integers \(x_{i,j}\) for \(i\in[d^{\prime}]\) and \(j\in[q]\), such that
\[f_{i}=(\kappa+m_{11})^{x_{i,1}}\cdots(\kappa+m_{qq})^{x_{i,q}}. \tag{7}\]
Every element in \(\{g_{i}\}_{i\in[d]}\cup\{f_{i}\}_{i\in[d^{\prime}]}\) is positive. Suppose for some \((e_{1},\ldots,e_{d},e^{\prime}_{1},\ldots,e^{\prime}_{d^{\prime}})\in\mathbb{ Z}^{d+d^{\prime}}\) such that
\[g_{1}^{e_{1}}\cdots g_{d}^{e_{d}}\cdot(f_{1})^{e^{\prime}_{1}}\cdots(f_{d^{ \prime}})^{e^{\prime}_{d^{\prime}}}=1.\]
First, if \((e_{1},\ldots,e_{d})=\mathbf{0}\) then \(\prod_{i\in[d]}g_{i}^{e_{i}}=1\), and since \(\{f_{i}\}_{i\in[d^{\prime}]}\) is a generating set we get \((e^{\prime}_{1},\ldots,e^{\prime}_{d^{\prime}})=\mathbf{0}\), therefore \((e_{1},\ldots,e_{d},e^{\prime}_{1},\ldots,e^{\prime}_{d^{\prime}})=\mathbf{0}\). Now assume \((e_{1},\ldots,e_{d})\neq\mathbf{0}\).
Substituting \(f_{i}\) using Eq. (7), we get
\[\prod_{i\in[d]}g_{i}^{e_{i}}\cdot\prod_{j\in[q]}(\kappa+m_{jj})^{y_{j}}=1,\]
where \(y_{j}=\sum_{i\in[d^{\prime}]}e^{\prime}_{i}x_{i,j}\) for \(j\in[q]\). Since \((e_{1},\ldots,e_{d})\neq\mathbf{0}\), we see that \(\prod_{i\in[d]}g_{i}^{e_{i}}\neq 1\). Therefore, \((y_{1},\ldots,y_{q})\neq\mathbf{0}\). Separating out positive and negative \(y_{j}\)'s, we have
\[\left(\prod_{i\in[d]}g_{i}^{e_{i}}\right)\cdot\prod_{j\in[q]:y_{j}>0}(\kappa+ m_{jj})^{y_{j}}=\prod_{j\in[q]:y_{j}<0}(\kappa+m_{jj})^{-y_{j}}. \tag{8}\]
Both sides of Eq. (8) are polynomials in \(\kappa\) over the field \(\mathbf{F}\), with different leading coefficients. This contradicts our assumption that \(\kappa\) is transcendental to \(\mathbf{F}\). Claim 20 is thus proved.
Now for any \(i\in[q]\), there exists some \(t(i)\in[d^{\prime}]\), such that \(e_{ii\,t(i)}>0\), but \(e_{jk\,t(i)}=0\) for all \(j\neq k\). This is because \(\{f_{i}\}_{i\in[d^{\prime}]}\), without \(\{g_{i}\}_{i\in[d]}\), is a generating set for \(\{\kappa+m_{ii}\}_{i\in[q]}\), and \(\kappa+m_{ii}\neq 1\). Also, \(\{g_{i}\}_{i\in[d]}\), without \(\{f_{i}\}_{i\in[d^{\prime}]}\), is a generating set for \(\{m_{ij}\}_{i\neq j\in[q]}\). Therefore, from Lemma 16, we conclude that \(\mathtt{Pl-GH}(H\Delta H^{\texttt{T}})\) is \(\#\)P-hard.
We have thus proved the following theorem:
**Theorem 21**.: _If \(M\) is a \(q\times q\) (\(q\geq 3\)) full rank, positive real valued, symmetric matrix, whose eigenvalues \((\lambda_{1},\ldots,\lambda_{q})\) satisfy the lattice condition, then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard._
### Extensions of the hardness criterion
The requirement in Theorem 21 that the eigenvalues satisfy the lattice condition is not entirely necessary. The following is a simple adaptation of Lemma 18.
**Lemma 22**.: _If \(M\) is a \(q\times q\) full rank, real valued, symmetric matrix, such that its set of eigenvalues \(\{\lambda_{i}:i\in[q]\}\) (without duplicates as a set) satisfies the lattice condition, then for any function \(f:\{\lambda_{i}:i\in[q]\}\to\mathbb{R}\), we have \(\mathtt{Pl-GH}(H\Delta H^{\mathtt{T}})\leq\mathtt{Pl-GH}(M)\) where \(\Delta=\operatorname{diag}(f(\lambda_{1}),\ldots,f(\lambda_{q}))\)._
Proof.: Note that as a function, if \(\lambda_{i}=\lambda_{j}\) then \(f\) must map \(f(\lambda_{i})=f(\lambda_{j})\). Accordingly, we can define a partition \(\mathcal{P}=(P_{1},\ldots,P_{\ell})\) of \([q]\) collecting equal values of \(\lambda_{i}\) together. We rename their distinct values as \(\{\mu_{1},\ldots,\mu_{\ell}\}\) such that the \(\mu_{i}\) are all distinct, and \(\lambda_{j}=\mu_{i}\) for all \(i\in[\ell]\) and \(j\in P_{i}\). By hypothesis, \((\mu_{1},\ldots,\mu_{\ell})\) satisfies the lattice condition, and \(f\) is defined on the set \(\{\mu_{i}:i\in[\ell]\}\).
Now note that
\[Z_{M}(G)=\sum_{\begin{subarray}{c}x_{1},\ldots,x_{\ell}\\ \sum x_{i}=|E|\end{subarray}}c_{(x_{i})_{i\leq\ell}}\cdot\mu_{1}^{x_{1}}\cdots \mu_{\ell}^{x_{\ell}}\]
where
\[c_{(x_{i})_{i\leq\ell}}=\sum_{\sigma:V\to[q]}\left(\sum_{ \begin{subarray}{c}E_{1}\sqcup\cdots\sqcup E_{q}=E,\\ \sum_{t\in P_{i}}|E_{t}|=x_{i}\end{subarray}}\left(\prod_{i\in[q]}\prod_{\{u,v \}\in E_{i}}h_{\sigma(u)i}h_{\sigma(v)i}\right)\right).\]
Since \((\mu_{1},\ldots,\mu_{\ell})\) satisfies the lattice condition, for any \((x_{i})_{i\leq\ell}\) and \((y_{i})_{i\leq\ell}\) with \(\sum_{i}x_{i}=\sum_{i}y_{i}=|E|\), and
\[\mu_{1}^{x_{1}}\cdots\mu_{\ell}^{x_{\ell}}=\mu_{1}^{y_{1}}\cdots\mu_{\ell}^{y _{\ell}},\]
we have \(x_{i}=y_{i}\) for \(i\in[\ell]\). Therefore, from the values \(Z_{M}(S_{k}G)=Z_{M^{k}}(G)\) for \(k\in\left[\binom{|E|+\ell-1}{\ell-1}\right]\), we can form a full rank Vandermonde system of linear equations with unknowns \(c_{(x_{i})_{i\leq\ell}}\). Solving this in polynomial time, we can compute
\[\sum_{\begin{subarray}{c}x_{1},\ldots,x_{\ell}\\ \sum x_{i}=|E|\end{subarray}}c_{(x_{i})_{i\leq\ell}}\cdot f(\mu_{1})^{x_{1}} \cdots f(\mu_{\ell})^{x_{\ell}}=Z_{H\Delta H^{\mathtt{T}}}(G)\]
for any \(\Delta=\operatorname{diag}(f(\lambda_{1}),\ldots,f(\lambda_{q}))\). Therefore, \(\mathtt{Pl-GH}(H\Delta H^{\mathtt{T}})\leq\mathtt{Pl-GH}(M)\).
We now have the following extension of Theorem 21.
**Theorem 23**.: _If \(M\) is a \(q\times q\) (\(q\geq 3\)) full rank, positive real valued, symmetric matrix, such that its set of eigenvalues \(\{\lambda_{i}:i\in[q]\}\) (without duplicates as a set) satisfies the lattice condition, then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard._
Proof.: Let \((m_{ij})_{i,j\in[q]}\) be the entries of the matrix \(M\). By assumption, we know that these are positive reals. We pick some \(\kappa\in\mathbb{R}\) such that \(\kappa+m_{ii}>0\) for all \(i\), and \(\kappa\) is transcendental over the field \(\mathbf{F}=\mathbb{Q}(\{m_{ij}\}_{i,j\in[q]})\). Let \(\{g_{i}\}_{i\in[d]}\) be a basis of the multiplicative free Abelian group generated by \(\{m_{ij}\}_{i\neq j\in[q]}\) as in the proof of Lemma 12. Let \(\{f_{i}\}_{i\in[d^{\prime}]}\) be a basis for the multiplicative free
Abelian group generated by \(\{\kappa+m_{ii}\}_{i\in[q]}\). Finally, we let \(\Delta=D+\kappa I\). We know from Claim 20 that the set \(\{g_{i}\}_{i\in[d]}\cup\{f_{i}\}_{i\in[d^{\prime}]}\) is a generating set of the entries of \(H\Delta H^{\tt T}=M+\kappa I\).
Now for any \(i\in[q]\), there exists some \(t(i)\in[d^{\prime}]\), such that \(e_{ii\,t(i)}>0\), but \(e_{jk\,t(i)}=0\) for all \(j\neq k\). Therefore, from Lemma 16, we conclude that \(\mathtt{Pl-GH}(H\Delta H^{\tt T})\) is \(\#\)P-hard. Moreover, \(\Delta\) is of the form \(\operatorname{diag}(f(\lambda_{1}),\dots,f(\lambda_{q}))\) with \(f(\lambda_{i})=\lambda_{i}+\kappa\). So, from Lemma 22, we see that \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard.
Clearly, Theorem 21 is a special case of Theorem 23. We also have the following theorem.
**Theorem 24**.: _Let \(M\) be a \(q\times q\) (\(q\geq 3\)) non-bipartite, irreducible, full rank, non-negative real valued symmetric matrix. If its set of absolute values of eigenvalues \(\{|\lambda_{i}|:i\in[q]\}\) (without duplicates as a set) satisfies the lattice condition, then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard._
Proof.: The matrix \(M\) with non-negative values represents a weighted graph \(H\). Since \(M\) is irreducible, \(H\) is a connected graph. Since \(M\) is non-bipartite, \(H\) contains a cycle of odd length \(t\). Moreover, \(H\) trivially has closed walks of length \(2\). Since \(H\) is connected, and \(\gcd(t,2)=1\), for any large enough integer \(n\), there is a walk of length \(n\) between any \(i,j\in[q]\). In other words, for some large enough integer \(n\), \(M^{n}\) is a full rank, positive valued matrix. Since this is true for all sufficiently large \(n\), we may assume \(n\) is even. The eigenvalues of \(M^{n}\) are \(\lambda_{1}^{n},\dots,\lambda_{q}^{n}\), with possible repetition. Suppose \(\{\mu_{1},\dots,\mu_{\ell}\}\) is the set \(\{|\lambda_{i}|:i\in[q]\}\) after removing duplicates, then by hypothesis it satisfies the lattice condition. Then \(\{\mu_{1}^{n},\dots,\mu_{\ell}^{n}\}\) is the set \(\{\lambda_{i}^{n}:i\in[q]\}\) without duplicates as a set. Indeed, if \(\lambda_{i}^{n}=\lambda_{j}^{n}\) then as real numbers \(\lambda_{i}=\pm\lambda_{j}\), and so \(|\lambda_{i}|=|\lambda_{j}|\). Thus only one of \(\lambda_{i}^{n}\) and \(\lambda_{j}^{n}\) appears in \(\{\mu_{1}^{n},\dots,\mu_{\ell}^{n}\}\). It follows that \(\{\mu_{1}^{n},\dots,\mu_{\ell}^{n}\}\) also satisfies the lattice condition. Therefore, \(\mathtt{Pl-GH}(S_{n}M)\) is \(\#\)P-hard by Theorem 23. It follows that \(\mathtt{Pl-GH}(M)\) is also \(\#\)P-hard.
Next we prove the same theorem for the bipartite case. For the bipartite case, we note that if a \(q\times q\) matrix \(M\) has full rank, then \(q\) is even and after a permutation, \(M\) has the form \(\left(\begin{smallmatrix}\mathbf{0}&A\\ A^{\tt T}&\mathbf{0}\end{smallmatrix}\right)\), for some matrix \(A\) of order \(q/2\). That the lattice condition implies \(\#\)P-hardness, as in Theorem 24, really only works when \(q/2\geq 3\). So the following theorem is stated for \(q\geq 6\). After the theorem, we give a complete classification for such bipartite matrices with \(q=4\).
**Theorem 25**.: _Let \(M\) be a \(q\times q\) (\(q\geq 6\)) bipartite, irreducible, full rank, non-negative real valued symmetric matrix. If the absolute values of the eigenvalues \(\{|\lambda_{i}|:i\in[q]\}\), as a set without duplicates, satisfies the lattice condition, then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard._
Proof.: For a \(q\times q\) bipartite matrix \(M\), its square \(M^{2}\) is a reducible matrix of the form \(M^{2}=A\oplus B\). Since \(M\) has full rank \(q\), \(q\) must be even, and \(A\) and \(B\) are both \(q/2\times q/2\). Both \(A\) and \(B\) are non-negative real valued symmetric matrices. As \(M\) is irreducible and bipartite, the underlying graph is connected, and every pair of vertices in each part is connected by a walk of even length. Thus \(A\) and \(B\) are irreducible. Both \(A\) and \(B\) contain self-loops and so they are non-bipartite. They have full rank since \(\det(M^{2})=\det(A)\det(B)\). \(\{\lambda_{i}^{2}:i\in[q]\}\) is the union of the eigenvalues of \(A\) and \(B\). Since \(\{|\lambda_{i}|:i\in[q]\}\) (after removal of duplicates as a set) satisfies the lattice condition, so does the subset of \(\{\lambda_{i}^{2}:i\in[q]\}\) that corresponds to \(A\) (and to \(B\)), both after removal of duplicates as a set. As \(q\geq 6\), we have \(q/2\geq 3\) and we can conclude that \(\mathtt{Pl-GH}(A)\) is \(\#\)P-hard by Theorem 24. Then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard by Lemma 4.
The corresponding case \(q=4\) in Theorem 25 can be completely classified. In this case, \(M=\left(\begin{smallmatrix}0&N\\ N^{\mathsf{T}}&\mathbf{0}\end{smallmatrix}\right)\), where \(N=\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\) is a full rank, non-negative real valued matrix. Now, we consider \(M^{2}=A\oplus B\), where
\[A=NN^{\mathsf{T}}=\begin{pmatrix}a^{2}+b^{2}&ac+bd\\ ac+bd&c^{2}+d^{2}\end{pmatrix},\qquad B=N^{\mathsf{T}}N=\begin{pmatrix}a^{2}+ c^{2}&ab+cd\\ ab+cd&b^{2}+d^{2}\end{pmatrix}.\]
Since \(M\) is irreducible, all entries of \(A\) and \(B\) are positive (as each side of the bipartite graph has a vertex connected to both vertices of the other side). Therefore, we see that \(\mathtt{Pl}\text{-}\mathtt{GH}(M^{2})\) is #P-hard unless both \(\mathtt{Pl}\text{-}\mathtt{GH}(A)\) and \(\mathtt{Pl}\text{-}\mathtt{GH}(B)\) are tractable, and by Theorem 2 this is so iff \(a^{2}+b^{2}=c^{2}+d^{2}\) and \(a^{2}+c^{2}=b^{2}+d^{2}\). Since \(N\) is a non-negative matrix, this implies that \(a=d\) and \(b=c\). Therefore, as long as \((a,b)\neq(d,c)\), at least one of \(\mathtt{Pl}\text{-}\mathtt{GH}(A)\) or \(\mathtt{Pl}\text{-}\mathtt{GH}(B)\) is #P-hard, and this would imply the #P-hardness of \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\).
If \((a,b)=(d,c)\), then \(M=X\otimes Y\), where \(X=\left(\begin{smallmatrix}0&1\\ 1&0\end{smallmatrix}\right)\), and \(Y=\left(\begin{smallmatrix}a&b\\ b&a\end{smallmatrix}\right)\). \(\mathtt{Pl}\text{-}\mathtt{GH}(X)\) and \(\mathtt{Pl}\text{-}\mathtt{GH}(Y)\) are tractable from Theorem 2. We note that for any planar graph \(G=(V,E)\),
\[Z_{M}(G) =\sum_{\sigma:V\rightarrow[4]}\prod_{\{u,v\}\in E}m_{\sigma(u) \sigma(v)}\] \[=\sum_{(\sigma_{1},\sigma_{2}):V\rightarrow[2]\times[2]}\prod_{ \{u,v\}\in E}X_{\sigma_{1}(u)\sigma_{1}(v)}Y_{\sigma_{2}(u)\sigma_{2}(v)}\] \[=\left(\sum_{\sigma_{1}:V\rightarrow[2]}\prod_{\{u,v\}\in E}X_{ \sigma_{1}(u)\sigma_{1}(v)}\right)\left(\sum_{\sigma_{2}:V\rightarrow[2]}\prod _{\{u,v\}\in E}Y_{\sigma_{2}(u)\sigma_{2}(v)}\right)\] \[=Z_{X}(G)\cdot Z_{Y}(G)\]
Therefore, \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\) is also polynomial time tractable.
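The factorization \(Z_{X\otimes Y}(G)=Z_{X}(G)\,Z_{Y}(G)\) used above can be confirmed by brute force on a small graph; a sketch with illustrative entries (assuming numpy):

```python
# The Kronecker product indexes states by pairs, so the partition function factors.
import numpy as np
from itertools import product
from math import prod, isclose

def Z(n, edges, M):
    q = M.shape[0]
    return sum(prod(M[s[u], s[v]] for u, v in edges)
               for s in product(range(q), repeat=n))

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[2.0, 3.0], [3.0, 2.0]])
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]           # a 4-cycle
assert isclose(Z(4, edges, np.kron(X, Y)), Z(4, edges, X) * Z(4, edges, Y),
               rel_tol=1e-9)
```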
Note that in this case, the eigenvalues of \(M\) are \(\{\pm(a+b),\pm(a-b)\}\) for real \(a\neq\pm b\), with absolute values \(\{|a+b|,|a-b|\}\) after removal of duplicates. One is greater than the other, and so they _do_ satisfy the lattice condition. Thus, the formal statement of Theorem 25 for \(q=4\) is false (assuming #P does not collapse to P.)
**Theorem 26**.: _The set of \(q\times q\) (\(q\geq 3\)) real symmetric matrices \(M\) such that \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\) is not #P-hard has measure 0._
Proof.: Using the same proof idea as Lemma 1, we can show that \(\mathtt{Pl}\text{-}\mathtt{GH}(|M|)\leq\mathtt{Pl}\text{-}\mathtt{GH}(M)\) where \(|M|\) denotes the matrix (\(|m_{i,j}|\)) obtained from \(M\) by taking entry-wise absolute values. Consider the set of \(q\times q\) non-negative real symmetric matrices. The subset that has rank \(<q\) has measure 0. This is also the case for bipartite matrices and reducible matrices. The set of lattice conditions specified by an integer sequence \((n_{i})_{i\in[q]}\neq\mathbf{0}\) in Definition 17 is a countable set. Each such condition defines a hypersurface \(n_{1}\log(\lambda_{1}^{2})+\dots+n_{q}\log(\lambda_{q}^{2})=0\), where the eigenvalues \(\lambda_{i}^{2}\) are continuous and piece-wise differentiable functions of the entries of \(M\). (If we order \(\lambda_{1}\leq\ldots\leq\lambda_{q}\), we can avoid a measure 0 subset where two eigenvalues are equal, which is specified by the vanishing of the discriminant, a polynomial in the entries of \(M\).) Thus, the subset where the lattice condition fails is also of measure \(0\). It follows that \(\mathtt{Pl-GH}(M)\) is #P-hard for almost all \(M\) in the sense of Lebesgue measure.2
Footnote 2: The number of Turing machines is countable, and so there are only a countable number of algorithms. But this observation does not trivialize Theorem 26, since it is possible (and indeed true) that a single TM can solve uncountably many problems \(\mathtt{Pl-GH}(M)\), by the strict definition of \(\mathtt{Pl-GH}(M)\) for all real \(M\).
## 6 Hardness of \(3\times 3\) matrices
Consider a \(q\times q\) full rank, positive real valued, symmetric matrix \(M\), with a generating set \(\{g_{t}\}_{t\in[d]}\) obtained as in Lemma 12, and let \(\mathcal{M}:\mathbb{R}^{d}\to\mathbb{R}^{q\times q}\) be defined as in Lemma 13.
**Lemma 27**.: _For any nonzero polynomial \(f(x_{1},\ldots,x_{d})\in\mathbb{Z}[x_{1},\ldots,x_{d}]\), there exist nonnegative integers \(e_{1},\ldots,e_{d}\), such that \(f(x^{e_{1}},\ldots,x^{e_{d}})\in\mathbb{Z}[x]\) is a nonzero polynomial._
Proof.: If \(d=0\), then \(f\) is a nonzero integer, and the lemma is trivial. If \(d=1\), the lemma is proved by taking \(e_{1}=1\). Assume \(d>1\). There exist \(p,p_{2},\ldots,p_{d}\in\mathbb{Z}\) such that \(f(p,p_{2},\ldots,p_{d})\neq 0\). We may assume \(p\neq-1,0,1\), since \(f(x,p_{2},\ldots,p_{d})\in\mathbb{Z}[x]\) is a nonzero polynomial, and has only finitely many zeros. Then \(f(p,x_{2},p_{3},\ldots,p_{d})\in\mathbb{Z}[x_{2}]\) is a nonzero univariate polynomial, which has finitely many zeros. Thus, for some integer \(e_{2}\geq 0\), \(f(p,p^{e_{2}},p_{3},\ldots,p_{d})\neq 0\). Inductively, assume \(f(p^{1},p^{e_{2}},\ldots,p^{e_{t-1}},p_{t},\ldots,p_{d})\neq 0\), for some \(t\), then \(f(p^{1},p^{e_{2}},\ldots,p^{e_{t-1}},x_{t},p_{t+1},\ldots,p_{d})\in\mathbb{Z}[x_ {t}]\) is a nonzero univariate polynomial, and thus for some integer \(e_{t}\geq 0\), \(f(p^{1},p^{e_{2}},\ldots,p^{e_{t}},p_{t+1},\ldots,p_{d})\neq 0\). Finally, \(f(p^{1},p^{e_{2}},\ldots,p^{e_{d}})\neq 0\), and so the univariate polynomial \(f(x^{1},x^{e_{2}},\ldots,x^{e_{d}})\in\mathbb{Z}[x]\) is nonzero.
**Corollary 28**.: _If \(M\) is a \(q\times q\) full rank, positive real valued, symmetric matrix, then there exist nonnegative integers \(e_{1},\ldots,e_{d}\), and real \(\epsilon>0\), such that \(\det\left(\mathcal{M}(p^{e_{1}},\ldots,p^{e_{d}})\right)\neq 0\), for all \(1<p<e^{\epsilon}\)._
Proof.: \(\det(\mathcal{M}(x_{1},\ldots,x_{d}))\in\mathbb{Z}[x_{1},\ldots,x_{d}]\) is a nonzero polynomial since \(\det(\mathcal{M}(g_{1},\ldots,g_{d}))=\det(M)\neq 0\). By Lemma 27 we have a nonzero univariate polynomial \(\det\left(\mathcal{M}(p^{e_{1}},\ldots,p^{e_{d}})\right)\). It has at most finitely many zeros, and so for some \(\epsilon>0\), the value is nonzero for all \(1<p<e^{\epsilon}\).
We will now focus our attention back on \(3\times 3\) matrices specifically, and prove the hardness of all full rank, positive real valued matrices. We define the function \(M:\mathbb{R}\to\mathbb{R}^{3\times 3}\) as \(M(p):=\mathcal{M}(p^{e_{1}},\ldots,p^{e_{d}})\), where \(e_{1},\ldots,e_{d}\) are as in Lemma 27 and Corollary 28. Each entry of \(M(p)\) has the form \(M(p)_{ij}=p^{x_{ij}}\) for some non-negative integer \(x_{ij}\), and \(x_{ij}=x_{ji}\). We know from Corollary 28 that there exists some \(\epsilon>0\), such that \(\det(M(p))\neq 0\) for all \(1<p<e^{\epsilon}\).
**Lemma 29**.: _If \(M\) is a \(3\times 3\) full rank, positive real valued symmetric matrix, with \(M(p)_{ij}=p^{x_{ij}}\) for all \(i,j\in[3]\) and \(x_{ij}=x_{ji}\), then_
\[\lim_{\delta\to 0}\frac{\det M(e^{\delta})}{\delta^{2}}=0\quad\implies \quad\lim_{\delta\to 0}\frac{\det M(e^{\delta})}{\delta^{3}}\neq 0.\]
Proof.: Consider the matrix \(M(e^{\delta})=(e^{x_{ij}\delta})_{i,j\in[3]}\). Define \(X\) to be the \(3\times 3\) matrix with the entries \((x_{ij})_{i,j\in[3]}\), and consider the Taylor series expansion of \(f(\delta)=\det M(e^{\delta})\),
\[f(\delta)=f(0)+f^{\prime}(0)\delta+f^{\prime\prime}(0)\frac{\delta^{2}}{2!}+ \big{(}3g(X)+6\det X\big{)}\frac{\delta^{3}}{3!}+O(\delta^{4}), \tag{9}\]
where \(f(0)=\det M(e^{0})=0\) as \(M(e^{0})=J\) is the all-1 matrix, and
\[f^{\prime}(0)=\begin{vmatrix}x_{11}&x_{12}&x_{13}\\ 1&1&1\\ 1&1&1\end{vmatrix}+\begin{vmatrix}1&1&1\\ x_{12}&x_{22}&x_{23}\\ 1&1&1\end{vmatrix}+\begin{vmatrix}1&1&1\\ 1&1&1\\ x_{13}&x_{23}&x_{33}\end{vmatrix}=0,\]
\[g(X)=\begin{vmatrix}(x_{11})^{2}&(x_{12})^{2}&(x_{13})^{2}\\ x_{12}&x_{22}&x_{23}\\ 1&1&1\end{vmatrix}+\begin{vmatrix}(x_{11})^{2}&(x_{12})^{2}&(x_{13})^{2}\\ 1&1&1\\ x_{13}&x_{23}&x_{33}\end{vmatrix}+\begin{vmatrix}x_{11}&x_{12}&x_{13}\\ (x_{12})^{2}&(x_{22})^{2}&(x_{23})^{2}\\ 1&1&1\end{vmatrix}+\begin{vmatrix}1&1&1\\ (x_{12})^{2}&(x_{22})^{2}&(x_{23})^{2}\\ x_{13}&x_{23}&x_{33}\end{vmatrix}+\begin{vmatrix}x_{11}&x_{12}&x_{13}\\ 1&1&1\\ (x_{13})^{2}&(x_{23})^{2}&(x_{33})^{2}\end{vmatrix}+\begin{vmatrix}1&1&1\\ x_{12}&x_{22}&x_{23}\\ (x_{13})^{2}&(x_{23})^{2}&(x_{33})^{2}\end{vmatrix}.\]
We remark that \(f^{\prime\prime}(0)=0\) if rank \(X\leq 1\).
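As a quick sanity check of the first two terms of the expansion (9), one can verify symbolically that \(f(0)=f^{\prime}(0)=0\) for arbitrary symmetric exponents \(x_{ij}\); a sketch (assuming sympy):

```python
# f(delta) = det(e^{x_ij * delta}) vanishes to second order at delta = 0.
import sympy as sp

d = sp.symbols('delta')
x11, x12, x13, x22, x23, x33 = sp.symbols('x11 x12 x13 x22 x23 x33')
M = sp.Matrix([[sp.exp(x11 * d), sp.exp(x12 * d), sp.exp(x13 * d)],
               [sp.exp(x12 * d), sp.exp(x22 * d), sp.exp(x23 * d)],
               [sp.exp(x13 * d), sp.exp(x23 * d), sp.exp(x33 * d)]])
f = M.det()
assert sp.expand(f.subs(d, 0)) == 0                 # f(0): det of the all-1 matrix
assert sp.expand(sp.diff(f, d).subs(d, 0)) == 0     # f'(0) = 0
```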
By the Taylor expansion,
\[\lim_{\delta\to 0}\frac{\det M(e^{\delta})}{\delta^{2}}=\frac{1}{2}f^{\prime \prime}(0).\]
After some row operations, we have
\[\frac{1}{2}f^{\prime\prime}(0)=\begin{vmatrix}x_{11}-x_{13}&x_{12}-x_{23}&x_{ 13}-x_{33}\\ x_{12}-x_{13}&x_{22}-x_{23}&x_{23}-x_{33}\\ 1&1&1\end{vmatrix}.\]
Now we assume \(f^{\prime\prime}(0)=0\). So there exist real numbers \((a,b,c,d)\neq\mathbf{0}\), such that \(a+b+c=0\), and
\[a\left(x_{11}\quad x_{12}\quad x_{13}\right)+b\left(x_{12}\quad x_{22}\quad x _{23}\right)+c\left(x_{13}\quad x_{23}\quad x_{33}\right)+d\left(1\quad 1 \quad 1\right)=\mathbf{0}.\]
Since \((a,b,c,d)\neq\mathbf{0}\), this equation also gives \((a,b,c)\neq\mathbf{0}\). Therefore, we may assume without loss of generality that \(c=-(a+b)\neq 0\). Let \(\alpha=\frac{a}{a+b}\) and \(\beta=\frac{d}{a+b}\), then
\[\left(x_{13}\quad x_{23}\quad x_{33}\right)=\alpha\left(x_{11}\quad x_{12} \quad x_{13}\right)+(1-\alpha)\left(x_{12}\quad x_{22}\quad x_{23}\right)+ \beta\left(1\quad 1\quad 1\right).\]
This gives the expression \(X=A+\beta B\), where
\[A=N^{\mathrm{T}}\begin{pmatrix}x_{11}&x_{12}\\ x_{12}&x_{22}\end{pmatrix}N,\quad N=\begin{pmatrix}1&0&\alpha\\ 0&1&1-\alpha\end{pmatrix},\quad B=\begin{pmatrix}0&0&1\\ 0&0&1\\ 1&1&2\end{pmatrix}.\]
We will now make use of the following claim, which we shall prove later:
**Claim 30**.: _If \(x_{11}+x_{22}=2x_{12}\), then \(\det M(e^{\delta})=0\) for all \(\delta>0\)._
Since we have assumed that \(M\) is a full rank matrix, we know from Corollary 28 that for small enough values of \(\delta>0\), \(\det M(e^{\delta})\neq 0\). Therefore, we have \(x_{11}+x_{22}\neq 2x_{12}\). Next we consider the matrix \(A-kJ\), where
\[k=\frac{x_{11}x_{22}-(x_{12})^{2}}{x_{11}+x_{22}-2x_{12}},\hskip 14.226378ptJ= \begin{pmatrix}1&1&1\\ 1&1&1\\ 1&1&1\end{pmatrix}.\]
As \(x_{11}+x_{22}\neq 2x_{12}\), the value \(k\) is well-defined. The following claim will also be proved later:
**Claim 31**.: _The matrix \(A-kJ\) has rank at most one._
This implies that there exists a vector
\[\mathbf{u}=\begin{pmatrix}u_{1}&u_{2}&u_{3}\end{pmatrix}^{\intercal},\]
such that \(A=\mathbf{uu}^{\intercal}+kJ\). Therefore, \(X=\mathbf{uu}^{\intercal}+kJ+\beta B\). Next, we note that
\[\det M(e^{\delta}) = \begin{vmatrix}e^{\delta((u_{1})^{2}+k)}&e^{\delta(u_{1}u_{2}+k) }&e^{\delta(u_{1}u_{3}+k+\beta)}\\ e^{\delta(u_{1}u_{2}+k)}&e^{\delta((u_{2})^{2}+k)}&e^{\delta(u_{2}u_{3}+k+ \beta)}\\ e^{\delta(u_{1}u_{3}+k+\beta)}&e^{\delta(u_{2}u_{3}+k+\beta)}&e^{\delta((u_{3 })^{2}+k+2\beta)}\end{vmatrix} \tag{10}\] \[= e^{\delta(3k+2\beta)}\begin{vmatrix}e^{\delta(u_{1})^{2}}&e^{ \delta u_{1}u_{2}}&e^{\delta u_{1}u_{3}}\\ e^{\delta u_{1}u_{2}}&e^{\delta(u_{2})^{2}}&e^{\delta u_{2}u_{3}}\\ e^{\delta u_{1}u_{3}}&e^{\delta u_{2}u_{3}}&e^{\delta(u_{3})^{2}}\end{vmatrix}\] \[= e^{\delta(3k+2\beta)}\left(g(\mathbf{uu}^{\intercal})\frac{ \delta^{3}}{2}+O(\delta^{4})\right). \tag{11}\]
Here from Eq. (10) to (11) we used the Taylor expansion Eq. (9) on the rank one matrix \(\mathbf{uu}^{\intercal}\).
Finally, we will use this following claim, which we shall also prove later:
**Claim 32**.: \(g(\mathbf{uu}^{\intercal})=((u_{2}-u_{1})(u_{3}-u_{1})(u_{3}-u_{2}))^{2}\)_._
So, if \(g(\mathbf{uu}^{\intercal})=0\), it must be the case that \(u_{i}=u_{j}\) for some \(i\neq j\). But in that case, we see that \(\det M(e^{\delta})=0\) for all \(\delta>0\), by Eq. (10), which we know to be false. Therefore, it must be the case that \(g(\mathbf{uu}^{\intercal})\neq 0\). Since the leading term of \(e^{\delta(3k+2\beta)}\) is \(1\), this implies that when \(f^{\prime\prime}(0)=0\), then the coefficient of \(\delta^{3}\) in \(\det M(e^{\delta})\) is \(\frac{1}{2}g(\mathbf{uu}^{\intercal})\neq 0\). So, \(\lim\limits_{\delta\to 0}\frac{\det M(e^{\delta})}{\delta^{3}}\neq 0\).
The Taylor expansion Eq. (9) and Lemma 29 say that \(\det M(e^{\delta})\) has exact order either \(\delta^{2}\) or \(\delta^{3}\). We shall now finish the proof of the claims above.
**Claim 30**.: _If \(x_{11}+x_{22}=2x_{12}\), then \(\det M(e^{\delta})=0\) for all \(\delta>0\)._
Proof.: Consider the matrix \(M(e^{\delta})\). Note that
\[\begin{pmatrix}e^{\delta(x_{12})}\\ e^{\delta(x_{22})}\\ e^{\delta(\alpha x_{12}+(1-\alpha)x_{22}+\beta)}\end{pmatrix}=e^{\delta(x_{12 }-x_{11})}\begin{pmatrix}e^{\delta(x_{11})}\\ e^{\delta(x_{12})}\\ e^{\delta(\alpha x_{11}+(1-\alpha)x_{12}+\beta)}\end{pmatrix}\]
This implies that the second column of the matrix is a multiple of the first column, for all \(\delta>0\). Therefore, \(\det M(e^{\delta})=0\) for all \(\delta>0\).
**Claim 31**.: _The matrix \(A-kJ\) has rank at most one._
Proof.: We have
\[A-kJ=N^{\mathtt{t}}\left(\begin{pmatrix}x_{11}&x_{12}\\ x_{12}&x_{22}\end{pmatrix}-kJ_{2}\right)N,\]
where \(N=\left(\begin{smallmatrix}1&0&\alpha\\ 0&1&1-\alpha\end{smallmatrix}\right)\), and \(J_{2}=\left(\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}\right)\). Since \(x_{11}+x_{22}\neq 2x_{12}\), \(k\) is well defined and satisfies
\[(x_{11}-k)(x_{22}-k)=(x_{12}-k)^{2}.\]
By the matrix factorization, we see that \(A-kJ\) has rank at most one.
**Claim 32**.: \(g(\mathbf{uu^{\mathtt{T}}})=((u_{2}-u_{1})(u_{3}-u_{1})(u_{3}-u_{2}))^{2}\)_._
Proof.: Note that
\[g(\mathbf{uu^{\mathtt{T}}}) =\begin{vmatrix}(u_{1})^{4}&(u_{1}u_{2})^{2}&(u_{1}u_{3})^{2}\\ u_{1}u_{2}&(u_{2})^{2}&u_{2}u_{3}\\ 1&1&1\end{vmatrix}+\begin{vmatrix}(u_{1})^{4}&(u_{1}u_{2})^{2}&(u_{1}u_{3})^{2}\\ 1&1&1\\ u_{1}u_{3}&u_{2}u_{3}&(u_{3})^{2}\end{vmatrix}+\begin{vmatrix}(u_{1})^{2}&u_{1}u_{2}&u_{1}u_{3}\\ (u_{1}u_{2})^{2}&(u_{2})^{4}&(u_{2}u_{3})^{2}\\ 1&1&1\end{vmatrix}\] \[\qquad+\begin{vmatrix}1&1&1\\ (u_{1}u_{2})^{2}&(u_{2})^{4}&(u_{2}u_{3})^{2}\\ u_{1}u_{3}&u_{2}u_{3}&(u_{3})^{2}\end{vmatrix}+\begin{vmatrix}(u_{1})^{2}&u_{1}u_{2}&u_{1}u_{3}\\ 1&1&1\\ (u_{1}u_{3})^{2}&(u_{2}u_{3})^{2}&(u_{3})^{4}\end{vmatrix}+\begin{vmatrix}1&1&1\\ u_{1}u_{2}&(u_{2})^{2}&u_{2}u_{3}\\ (u_{1}u_{3})^{2}&(u_{2}u_{3})^{2}&(u_{3})^{4}\end{vmatrix}\] \[=\begin{vmatrix}1&1&1\\ u_{1}&u_{2}&u_{3}\\ u_{1}^{2}&u_{2}^{2}&u_{3}^{2}\end{vmatrix}\left(-(u_{1})^{2}u_{2}-(u_{2})^{2 }u_{3}-(u_{3})^{2}u_{1}+(u_{2})^{2}u_{1}+(u_{3})^{2}u_{2}+(u_{1})^{2}u_{3}\right)\] \[=((u_{2}-u_{1})(u_{3}-u_{1})(u_{3}-u_{2}))^{2}\]
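Independently of how \(g\) is written out, the way Claim 32 enters Eq. (10)–(11) can be verified symbolically: the \(\delta^{3}\) coefficient of \(\det\big{(}e^{\delta u_{i}u_{j}}\big{)}\) equals \(\frac{1}{2}((u_{2}-u_{1})(u_{3}-u_{1})(u_{3}-u_{2}))^{2}\). A sketch (assuming sympy):

```python
# The delta^3 coefficient of det(exp(delta * u_i * u_j)) is g(uu^T)/2.
import sympy as sp

d, u1, u2, u3 = sp.symbols('delta u1 u2 u3')
u = [u1, u2, u3]
M = sp.Matrix(3, 3, lambda i, j: sp.exp(d * u[i] * u[j]))
coeff3 = sp.diff(M.det(), d, 3).subs(d, 0) / 6      # coefficient of delta^3
target = sp.Rational(1, 2) * ((u2 - u1) * (u3 - u1) * (u3 - u2)) ** 2
assert sp.expand(coeff3 - target) == 0
```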
Let \(\lambda_{i}=\lambda_{i}(p)\) be the eigenvalues of \(M(p)\), ordered by \(|\lambda_{1}|\leq|\lambda_{2}|\leq|\lambda_{3}|\). Clearly, for \(i\in[3]\), \(\lambda_{i}(p)\) are well-defined and continuous functions of \(p\) (see Theorem VI.1.4 and Corollary VI.1.6 in pages 154-155 of [5]). As \(M(1)\) is the all-1 matrix \(J\), \(\lambda_{1}(1)=\lambda_{2}(1)=0\) and \(\lambda_{3}(1)=3\), and when \(1<p<e^{\epsilon}\), \(\lambda_{i}(p)\neq 0\) for \(i\in[3]\) by Corollary 28. Moreover, since \(M(p)\) is a positive valued matrix, the Perron theorem (see Theorem 8.2.8 in page 526 of [28]) implies that \(|\lambda_{1}(p)|\leq|\lambda_{2}(p)|<|\lambda_{3}(p)|\), and we have \(\log\left(\left|\lambda_{1}(p)\right|/\left|\lambda_{3}(p)\right|\right)\neq 0\). So, the following function \(t(p)\) is well-defined on \(I_{\epsilon}=(1,e^{\epsilon})\), and is continuous as a function of \(p\):
\[t(p)=\frac{\log\left(\left|\lambda_{2}(p)\right|/\left|\lambda_{3}(p)\right| \right)}{\log\left(\left|\lambda_{1}(p)\right|/\left|\lambda_{3}(p)\right| \right)}.\]
Clearly, \(|\lambda_{2}|=|\lambda_{1}|^{t(p)}|\lambda_{3}|^{1-t(p)}\), and \(t(p)\) is unique satisfying this equation.
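For intuition, \(t(p)\) can be computed numerically for a concrete exponent matrix; the exponents below are illustrative and not tied to any particular \(M\) (a sketch, assuming numpy).

```python
# Numerical evaluation of t(p) for M(p)_{ij} = p**x_ij with a sample symmetric X.
import numpy as np

X = np.array([[2, 1, 0], [1, 3, 1], [0, 1, 2]])     # illustrative exponents

def t(p):
    lam = np.linalg.eigvalsh(np.power(float(p), X))
    l1, l2, l3 = sorted(lam, key=abs)               # |l1| <= |l2| < |l3|
    return np.log(abs(l2) / abs(l3)) / np.log(abs(l1) / abs(l3))

for p in (1.01, 1.05, 1.1):
    print(p, t(p))
```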
**Lemma 33**.: _Let \(M(p)\) be a \(3\times 3\) full rank, positive real valued, symmetric matrix for \(p\in I_{\epsilon}\) (where \(\epsilon>0\)), with \(\left(M(p)\right)_{ij}=p^{x_{ij}}\) for all \(i,j\in[3]\). If \(t(r)\) is irrational for some \(r\in I_{\epsilon}\), then \(\mathtt{Pl-GH}(M(r))\) is #P-hard._
Proof.: We have
\[\left|\lambda_{2}(r)\right|=\left|\lambda_{1}(r)\right|^{t(r)}\left|\lambda_{ 3}(r)\right|^{1-t(r)}.\]
If the eigenvalues \(\lambda_{i}(r)\) do not satisfy the lattice condition, then there are integers \(n_{i}\) not all \(0\), such that \(n_{1}+n_{2}+n_{3}=0\) and \(\lambda_{1}(r)^{n_{1}}\lambda_{2}(r)^{n_{2}}\lambda_{3}(r)^{n_{3}}=1\). We have \(n_{2}=-(n_{1}+n_{3})\neq 0\), for otherwise \(n_{1}=-n_{3}\neq 0\) and \(\left|\lambda_{1}(r)\right|=\left|\lambda_{3}(r)\right|\), a contradiction. Then,
\[\left|\lambda_{2}(r)\right|=\left|\lambda_{1}(r)\right|^{\frac{n_{1}}{n_{1}+n _{3}}}\left|\lambda_{3}(r)\right|^{\frac{n_{3}}{n_{1}+n_{3}}}.\]
By the uniqueness, \(t(r)=\frac{n_{1}}{n_{1}+n_{3}}\) is rational.
Therefore if \(t(r)\) is irrational, then the eigenvalues \((\lambda_{1},\lambda_{2},\lambda_{3})\) of \(M(r)\) must satisfy the lattice condition, and \(\mathtt{Pl-GH}(M(r))\) is #P-hard by Theorem 23.
**Corollary 34**.: _For \(M(p)\) given in Lemma 33, if \(t(p)\) is not a constant for all \(p\in I_{\epsilon}\), then \(\mathtt{Pl-GH}(M)\) is #P-hard._
Proof.: By the intermediate value theorem for continuous functions, if \(t(p)\) is not a constant function within \(I_{\epsilon}\), then there is some \(r\in I_{\epsilon}\) such that \(t(r)\) is irrational. Thus \(\mathtt{Pl-GH}(M(r))\) is #P-hard. Since \(\mathtt{Pl-GH}(M(r))\leq\mathtt{Pl-GH}(M)\), this also implies that \(\mathtt{Pl-GH}(M)\) is #P-hard.
**Remark 35**.: _Here, it is important to note that our choice of \(r\) in Corollary 34 need not be rational. In fact, it may even be the case that \(r\) is transcendental. (See Appendix B.) Therefore, we may not assume that \(M(r)\) has rational or even algebraic values. However, as we have seen in Section 2.1, this does not cause any issues with our model of computation, as we can continue to represent \(Z_{M(r)}(G)\) as a polynomial sized tuple of integers, in terms of \(\mathtt{COUNT}(M(r))\)._
**Lemma 36**.: _Let \(M(p)\) be as given in Lemma 33. Then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard, unless \(t(p)=1\) or \(t(p)=\frac{1}{2}\) for all \(p\in I_{\epsilon}\)._
Proof.: Recall that \(\lambda_{1}(1)=0\), \(\lambda_{2}(1)=0\) and \(\lambda_{3}(1)=3\), and \(\lambda_{i}(p)\) are continuous as functions of \(p\). By definition \(|\lambda_{1}(p)|/|\lambda_{2}(p)|\leq 1\). We claim that either \(\lim\limits_{p\to 1^{+}}\frac{|\lambda_{1}(p)|}{|\lambda_{2}(p)|}=0\), or this ratio stays above some \(c>0\). To see that, let
\[S(p)=\frac{\lambda_{1}\lambda_{2}}{\lambda_{3}}+\frac{\lambda_{2}\lambda_{3}}{ \lambda_{1}}+\frac{\lambda_{3}\lambda_{1}}{\lambda_{2}}.\]
\(S(p)\) is a symmetric rational function. Indeed, \(S(p)=\frac{s_{2}^{2}-2s_{1}s_{3}}{s_{3}}\), where
\[s_{1}=\lambda_{1}+\lambda_{2}+\lambda_{3},\quad s_{2}=\lambda_{1}\lambda_{2}+ \lambda_{2}\lambda_{3}+\lambda_{3}\lambda_{1},\quad s_{3}=\lambda_{1}\lambda_ {2}\lambda_{3}\]
are the elementary symmetric polynomials of \(\lambda_{i}\), which are polynomials in the entries of \(M(p)\).
Note that \(S(p)=\frac{\lambda_{1}^{2}\lambda_{2}^{2}+\lambda_{2}^{2}\lambda_{3}^{2}+ \lambda_{3}^{2}\lambda_{1}^{2}}{\lambda_{1}\lambda_{2}\lambda_{3}}\) cannot be identically \(0\), for otherwise as a polynomial in \(p\), the numerator is identically \(0\), which would imply that \(\lambda_{1}\lambda_{2}=\lambda_{2}\lambda_{3}=\lambda_{3}\lambda_{1}\) are identically \(0\) functions
in \(p\). But \(\lambda_{3}\to 3\neq 0\) as \(p\to 1^{+}\), and so \(\lambda_{1},\lambda_{2}\) are identically \(0\). However, this contradicts \(\det M(p)\neq 0\) for \(p\in I_{\epsilon}\).
Expanding \(S(p)\) as a Laurent series in \((p-1)\), we have \(S(p)=c_{k}(p-1)^{k}+c_{k+1}(p-1)^{k+1}+\ldots\), where \(k\in\mathbb{Z}\), and \(c_{k}\neq 0\). Since \(\lambda_{1}\lambda_{2}/\lambda_{3}\to 0\) as \(p\to 1^{+}\), and \(|\lambda_{3}|\left|\frac{\lambda_{1}}{\lambda_{2}}\right|\) stays bounded above, it follows that \(k<0\) iff \(S(p)\to\infty\) iff \(\lambda_{1}/\lambda_{2}\to 0\), and \(k\geq 0\) iff \(|\lambda_{1}/\lambda_{2}|\geq c>0\), for all \(p\in I_{\epsilon}\).
Suppose \(|\lambda_{1}/\lambda_{2}|\geq c>0\) for all \(p\in I_{\epsilon}\), then
\[t(p)=1-\frac{\log\left(\left|\lambda_{1}(p)\right|/\left|\lambda_{2}(p)\right| \right)}{\log\left(\left|\lambda_{1}(p)\right|/\left|\lambda_{3}(p)\right| \right)}\to 1.\]
Therefore the only possibility that \(t(p)\) is a constant on \(I_{\epsilon}\) is that it is constant \(1\). If \(t(p)\) is not constant \(1\), then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard by Corollary 34.
Now suppose \(\lim\limits_{p\to 1^{+}}\left|\frac{\lambda_{1}(p)}{\lambda_{2}(p)} \right|=0\). As we already noted, \(s_{2}\) is a polynomial in the entries of \(M(p)\), which we can express as a polynomial in \((p-1)\). It is not identically \(0\). This can be seen by \(s_{2}=\lambda_{2}(\lambda_{1}+\lambda_{3}(1+\frac{\lambda_{1}}{\lambda_{2}}))\), where the second factor has the same limit as \(\lambda_{3}\to 3\). So if \(s_{2}\) were identically \(0\) we would have \(\lambda_{2}\) identically \(0\), contradicting \(\det M(p)\neq 0\) for \(p\in I_{\epsilon}\). Let \(s_{2}=a_{j}(p-1)^{j}+a_{j+1}(p-1)^{j+1}+\ldots\) be in increasing power terms, where \(a_{j}\neq 0\) is the first nonzero term. It has the same order as \(\lambda_{2}\) when \(p\to 1^{+}\), since \(s_{2}=\lambda_{2}(\lambda_{1}+\lambda_{3}(1+\frac{\lambda_{1}}{\lambda_{2}}))\), where \(\lambda_{1}\to 0\), \(\frac{\lambda_{1}}{\lambda_{2}}\to 0\) and \(\lambda_{3}\to 3\). So \(\lambda_{2}\) also has the exact order \(j\). The important point is that this exact order \(j\) is an integer.
From Lemma 29, we know that \(\lambda_{1}\lambda_{2}\lambda_{3}\) is either of exact order \(\Theta((p-1)^{2})\) or \(\Theta((p-1)^{3})\). Since \(\lambda_{3}\to 3\) the same is true for the product \(\lambda_{1}\lambda_{2}\). As both \(\lambda_{1},\lambda_{2}\to 0\), and we are in the case \(|\lambda_{1}|/|\lambda_{2}|\to 0\), the only possibility is \(j=1\), i.e., \(\lambda_{1}=\Theta((p-1)^{2})\) and \(\lambda_{2}=\Theta((p-1)^{1})\).
In this case, we see that \(\lim\limits_{p\to 1^{+}}t(p)=\frac{1}{2}\). Therefore, once again by Corollary 34, if \(t(p)\) is not a constant \(\frac{1}{2}\) for \(p\in I_{\epsilon}\), then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard.
**Lemma 37**.: _Let \(M(p)\) be given in Lemma 33. If \(t(p)=1\) for all \(p\in I_{\epsilon}\), then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard._
Proof.: From \(t(p)=1\) we have \(|\lambda_{2}(p)|=|\lambda_{1}(p)|\). Consider the matrix \(M(p)^{2}\). Since \(M(p)\) is a positive matrix, so is \(M(p)^{2}\). Moreover, its eigenvalues are \(\lambda_{1}(p)^{2}=\lambda_{2}(p)^{2}<\lambda_{3}(p)^{2}\), and the two distinct values trivially satisfy the lattice condition. Therefore, \(\mathtt{Pl-GH}(M(p)^{2})\) is \(\#\)P-hard, as a consequence of Theorem 23. Since \(\mathtt{Pl-GH}(M(p)^{2})\leq\mathtt{Pl-GH}(M(p))\leq\mathtt{Pl-GH}(M)\), it also means that \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard.
**Lemma 38**.: _Let \(M(p)\) be given in Lemma 33. Then \(t(p)\) is not a constant \(\frac{1}{2}\) on the interval \(I_{\epsilon}\)._
Proof.: Let \(\mu_{i}=\lambda_{i}^{2}=(\lambda_{i}(p))^{2}\); we define the function
\[F(p)=(\mu_{1}^{2}-\mu_{2}\mu_{3})(\mu_{2}^{2}-\mu_{3}\mu_{1})(\mu_{3}^{2}-\mu_ {1}\mu_{2}).\]
Note that \(F(p)\) is a symmetric polynomial of the eigenvalues \(\mu_{i}\) of \(M(p)^{2}\), and in fact
\[F(p)=s_{1}^{3}s_{3}-s_{2}^{3},\]
where \(s_{i}\) are the elementary symmetric polynomials of \(\{\mu_{1},\mu_{2},\mu_{3}\}\),
\[s_{1}=\mu_{1}+\mu_{2}+\mu_{3},\quad s_{2}=\mu_{1}\mu_{2}+\mu_{2}\mu_{3}+\mu_{3} \mu_{1},\quad s_{3}=\mu_{1}\mu_{2}\mu_{3}.\]
As coefficients of \(\det(xI-M(p)^{2})\), they are polynomials in the entries of \(M(p)\).
Suppose for a contradiction that \(t(p)=\frac{1}{2}\) on the interval \(I_{\epsilon}\), then \(\mu_{2}^{2}=\mu_{1}\mu_{3}\) in that interval. Then \(F(p)=0\) for all \(p\in I_{\epsilon}\). Since \(F(p)\) is a polynomial in \(p\), we have a polynomial identity
\[s_{1}^{3}s_{3}=s_{2}^{3}.\]
As \(p\to 1^{+}\), both \(s_{1}\) and \(s_{3}\) are nonzero, and so all three are nonzero polynomials. By the unique factorization of polynomials \(s_{1}\mid s_{2}\), and so \(s_{2}=s_{1}f\) for some nonzero polynomial \(f(p)\). It follows that \(f^{3}=s_{3}=\det(M(p))^{2}\). The exact order of any irreducible polynomial \(q\) in \(f^{3}\) is \(\operatorname{ord}_{q}(f^{3})=3\cdot\operatorname{ord}_{q}(f)\equiv 0\bmod 3\), which is also \(2\cdot\operatorname{ord}_{q}(\det(M(p)))\). Thus \(\det(M(p))\) is a cubic power of a polynomial, \(\det(M(p))=g^{3}\).
Now
\[g(p)^{3}=\det(M(p))=p^{x_{11}+x_{22}+x_{33}}+p^{x_{12}+x_{23}+x_{13}}+p^{x_{12 }+x_{23}+x_{13}}-p^{x_{11}+2x_{23}}-p^{x_{33}+2x_{12}}-p^{x_{22}+2x_{13}}.\]
In the expression for \(\det(M(p))\) there are three positive and three negative terms. If any cancellation occurs, an equal number of positive and negative terms are cancelled, hence the nonzero polynomial \(\det(M(p))\) has either \(2\) or \(4\) or \(6\) terms with equal number having \(+1\) and \(-1\) coefficients after cancellation. So \(g(p)\) is not a monomial. We may write \(g(p)=c_{1}p^{x_{1}}+\cdots+c_{k}p^{x_{k}}\) with \(x_{1}>\cdots>x_{k}\) and nonzero integers \(c_{1},\ldots,c_{k}\), with \(k\geq 2\). Then \(g(p)^{3}\) has the following terms which have distinct degrees and cannot be cancelled:
\[c_{1}^{3}p^{3x_{1}},\quad 3c_{1}^{2}c_{2}p^{2x_{1}+x_{2}},\quad 3c_{k}^{2}c_{k- 1}p^{2x_{k}+x_{k-1}},\quad c_{k}^{3}p^{3x_{k}}.\]
In terms of monomial terms with \(\pm 1\) coefficients there are at least \(8\) terms. These cannot be matched by at most \(6\) monomial terms with \(\pm 1\) coefficients. This contradiction proves the lemma.
**Theorem 39**.: _If \(M\) is a \(3\times 3\) full rank, positive real valued, symmetric matrix, then \(\mathtt{Pl-GH}(M)\) is #P-hard._
Proof.: We define \(M(p)\) for \(p\in I_{\epsilon}\) for some \(\epsilon>0\), as was done after Corollary 28. From Lemma 38, \(t(p)\) is not a constant \(\frac{1}{2}\) for all \(p\in I_{\epsilon}\). On the other hand, if \(t(p)=1\) for all \(p\in I_{\epsilon}\), from Lemma 37, we know that \(\mathtt{Pl-GH}(M)\) is #P-hard. Finally, if \(t(p)\) is not constant \(1\) or \(\frac{1}{2}\) for all \(p\in I_{\epsilon}\), then \(\mathtt{Pl-GH}(M)\) is #P-hard from Lemma 36.
## 7 Dichotomy for \(3\times 3\) matrices
From Section 6, if \(M\) is a \(3\times 3\) full rank, positive real valued, symmetric matrix, then \(\mathtt{Pl-GH}(M)\) is #P-hard. We will now use Theorem 39 to establish a dichotomy for all \(3\times 3\) matrices.
**Lemma 40**.: _If \(M\) is a \(3\times 3\) full rank, non-negative real valued, symmetric matrix that is irreducible, then \(\mathtt{Pl-GH}(M)\) is #P-hard._
Proof.: As we saw in Section 3.2, a \(3\times 3\) matrix can only be bipartite if it is a twinned matrix. Since \(M\) is full rank, it is therefore non-bipartite. The proof of this lemma is then the same as that of Theorem 24, and is a consequence of Theorem 39.
**Lemma 41**.: _If \(M\) is a \(3\times 3\) rank two, real valued, symmetric matrix that is irreducible and not twinned, then \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\) is \(\#\text{P-hard}\)._
Proof.: If \(M\) is a rank two matrix that is not twinned, it must be of the form
\[M=N^{\mathtt{T}}\begin{pmatrix}x&y\\ y&z\end{pmatrix}N,\quad\text{where}\quad N=\begin{pmatrix}1&0&\alpha\\ 0&1&\beta\end{pmatrix},\]
for some \(\alpha\neq 0\) and \(\beta\neq 0\).
If \(xz=y^{2}\), then \(M\) has rank at most one, contrary to assumption. So, \(xz\neq y^{2}\). Now, we consider \(T_{2}M\). We have
\[T_{2}M=\begin{pmatrix}x^{2}&y^{2}&\alpha^{2}x^{2}+\beta^{2}y^{2}+2\alpha\beta xy \\ y^{2}&z^{2}&\alpha^{2}y^{2}+\beta^{2}z^{2}+2\alpha\beta yz\\ \alpha^{2}x^{2}+\beta^{2}y^{2}+2\alpha\beta xy&\alpha^{2}y^{2}+\beta^{2}z^{2 }+2\alpha\beta yz&\left(\alpha^{2}x+2\alpha\beta y+\beta^{2}z\right)^{2} \end{pmatrix}.\]
Miraculously,
\[\det(T_{2}M)=2\alpha^{2}\beta^{2}\left(xz-y^{2}\right)^{3}.\]
Thus, \(T_{2}M\) is a full rank matrix. Since \(M\) is irreducible, it also follows that \(T_{2}M\) is irreducible. Moreover, since \(T_{2}M\) is non-negative valued, we see from Lemma 40 that \(\mathtt{Pl}\text{-}\mathtt{GH}(T_{2}M)\) is \(\#\text{P-hard}\). The lemma follows from \(\mathtt{Pl}\text{-}\mathtt{GH}(T_{2}M)\leq\mathtt{Pl}\text{-}\mathtt{GH}(M)\).
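The determinant identity above can be confirmed symbolically; a sketch (assuming sympy):

```python
# det(T_2 M) = 2 * alpha^2 * beta^2 * (xz - y^2)^3 for M = N^T [[x,y],[y,z]] N.
import sympy as sp

x, y, z, a, b = sp.symbols('x y z alpha beta')
N = sp.Matrix([[1, 0, a], [0, 1, b]])
M = N.T * sp.Matrix([[x, y], [y, z]]) * N
T2M = M.applyfunc(lambda entry: entry ** 2)          # thickening: entrywise square
assert sp.expand(T2M.det() - 2 * a**2 * b**2 * (x*z - y**2)**3) == 0
```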
**Lemma 42**.: _If \(M\) is a \(3\times 3\) full rank, real valued, symmetric matrix that is irreducible, then \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\) is \(\#\text{P-hard}\)._
Proof.: We consider the matrix \(T_{2}M\). We note that this is a non-negative valued matrix. Since \(M\) is irreducible, so is \(T_{2}M\). If \(T_{2}M\) is a full rank matrix, the hardness of \(\mathtt{Pl}\text{-}\mathtt{GH}(T_{2}M)\), and therefore, the hardness of \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\) follows from Lemma 40.
Next, suppose \(T_{2}M\) has rank two. If \(T_{2}M\) is not twinned, then the hardness of \(\mathtt{Pl}\text{-}\mathtt{GH}(T_{2}M)\), and therefore, the hardness of \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\) follows from Lemma 41. Therefore, let us consider the case where \(T_{2}M\) is twinned. From Theorem 10, we can see that \(\mathtt{Pl}\text{-}\mathtt{GH}(T_{2}M)\) can be polynomially tractable only if \(T_{2}M\) is rank one, or if \(T_{2}M\) is reducible, or if \(T_{2}M\) is of the form
\[T_{2}M=\begin{pmatrix}0&0&y\\ 0&0&z\\ y&z&0\end{pmatrix}\]
up to permutations, in which case, it must be the case that \(M\) is also of the form above, and must be a rank two matrix. Since none of these are true, it follows from Theorem 10 that \(\mathtt{Pl}\text{-}\mathtt{GH}(T_{2}M)\) is \(\#\)P-hard, and therefore, so is \(\mathtt{Pl}\text{-}\mathtt{GH}(M)\).
Finally, suppose \(T_{2}M\) has rank one. If \(T_{2}M\) has any zero entries, it can be easily checked that \(T_{2}M\), and therefore \(M\) is reducible. So, \(T_{2}M\) has no zero entries. In this case, we consider \(\mathcal{M}(\mathbf{1})\). The entries of this matrix may be denoted by \(\sigma_{ij}\in\{+1,-1\}\). Since \(T_{2}M\) is a rank one matrix, all three rows are multiples of each other. If we further have that two rows of \(\mathcal{M}(\mathbf{1})\) are multiples
of each other, then \(M\) itself is not full rank. Therefore, no two rows of \(\mathcal{M}(\mathbf{1})\) are multiples of each other. Now, we consider \(S_{2}\mathcal{M}(\mathbf{1})=\mathcal{M}(\mathbf{1})^{2}\). Clearly, \(\left(\mathcal{M}(\mathbf{1})^{2}\right)_{ii}=\sigma_{i1}^{2}+\sigma_{i2}^{2}+\sigma_{i3}^{2}=3\) for \(i\in[3]\). Moreover, if \(i\neq j\), since no two rows are multiples of each other, we see that \(\left(\mathcal{M}(\mathbf{1})^{2}\right)_{ij}=\sigma_{i1}\sigma_{j1}+\sigma_{i2}\sigma_{j2}+\sigma_{i3}\sigma_{j3}\in\{+1,-1\}\), as exactly one pair cancels. So, the set \(\{3\}\) is a generating set for the entries of \(\mathcal{M}(\mathbf{1})^{2}\), and the hardness of \(\mathtt{Pl-GH}(\mathcal{M}(\mathbf{1})^{2})\) follows from Lemma 16. Since \(\mathtt{Pl-GH}(\mathcal{M}(\mathbf{1})^{2})\leq\mathtt{Pl-GH}(\mathcal{M}(\mathbf{1}))\leq\mathtt{Pl-GH}(M)\), we see that \(\mathtt{Pl-GH}(M)\) is also \(\#\text{P-hard}\).
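The counting step above, that two non-proportional \(\pm 1\) rows of length three always have inner product \(\pm 1\), can be confirmed by exhaustive enumeration; a small illustrative check, not from the paper:

```python
# Brute-force check: for +/-1 vectors of length 3 that are not scalar multiples
# of each other, the inner product is always +1 or -1.
from itertools import product

rows = list(product((1, -1), repeat=3))
for r in rows:
    for s in rows:
        if r == s or r == tuple(-v for v in s):
            continue  # equal or negated rows are scalar multiples; excluded
        dot = sum(a * b for a, b in zip(r, s))
        assert dot in (1, -1), (r, s, dot)
print("all non-proportional +/-1 row pairs have inner product +1 or -1")
```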
Combining all our results, we have our final dichotomy:
**Theorem 43**.: _If \(M\) is a \(3\times 3\) real valued, symmetric matrix, then \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard, unless \(M\) is of one of the following forms up to a permutation of rows and columns, in which case \(\mathtt{Pl-GH}(M)\) is tractable in polynomial time:_
1. \[M=\begin{pmatrix}x^{2}&xy&xz\\ xy&y^{2}&yz\\ xz&yz&z^{2}\end{pmatrix}=\mathbf{u}\mathbf{u}^{\mathtt{t}}.\]
2. \[M=\begin{pmatrix}x&y&0\\ y&z&0\\ 0&0&t\end{pmatrix}\] _such that_ \(xz=y^{2}\).
3. \[M=\begin{pmatrix}0&0&x\\ 0&0&y\\ x&y&0\end{pmatrix}.\]
Proof.: The listed forms are all polynomial-time tractable. If \(M\) has rank \(\leq 1\), then \(M\) has form 1. Now we assume \(M\) has rank \(\geq 2\). If \(M\) is reducible, by Lemma 5 and Theorem 2, \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard unless \(M\) has form 2. Below, we assume \(M\) is irreducible. If \(M\) is a rank two twinned matrix, by Theorem 10, \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard unless it has form 3 (i.e. \(M\) is bipartite). If \(M\) has rank two and is not twinned, by Lemma 41, \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard. So now we assume \(M\) is a full rank matrix. Then, by Lemma 42, \(\mathtt{Pl-GH}(M)\) is \(\#\)P-hard.
|
2308.02760 | Neural Collapse in the Intermediate Hidden Layers of Classification
Neural Networks | Neural Collapse (NC) gives a precise description of the representations of
classes in the final hidden layer of classification neural networks. This
description provides insights into how these networks learn features and
generalize well when trained past zero training error. However, to date, (NC)
has only been studied in the final layer of these networks. In the present
paper, we provide the first comprehensive empirical analysis of the emergence
of (NC) in the intermediate hidden layers of these classifiers. We examine a
variety of network architectures, activations, and datasets, and demonstrate
that some degree of (NC) emerges in most of the intermediate hidden layers of
the network, where the degree of collapse in any given layer is typically
positively correlated with the depth of that layer in the neural network.
Moreover, we remark that: (1) almost all of the reduction in intra-class
variance in the samples occurs in the shallower layers of the networks, (2) the
angular separation between class means increases consistently with hidden layer
depth, and (3) simple datasets require only the shallower layers of the
networks to fully learn them, whereas more difficult ones require the entire
network. Ultimately, these results provide granular insights into the
structural propagation of features through classification neural networks. | Liam Parker, Emre Onal, Anton Stengel, Jake Intrater | 2023-08-05T01:19:38Z | http://arxiv.org/abs/2308.02760v1 | # Neural Collapse in the Intermediate Hidden Layers of Classification Neural Networks
###### Abstract
_Neural Collapse_ (\(\mathcal{NC}\)) gives a precise description of the representations of classes in the final hidden layer of classification neural networks. This description provides insights into how these networks learn features and generalize well when trained past zero training error. However, to date, \(\mathcal{NC}\) has only been studied in the final layer of these networks. In the present paper, we provide the first comprehensive empirical analysis of the emergence of \(\mathcal{NC}\) in the intermediate hidden layers of these classifiers. We examine a variety of network architectures, activations, and datasets, and demonstrate that some degree of \(\mathcal{NC}\) emerges in most of the intermediate hidden layers of the network, where the degree of collapse in any given layer is typically positively correlated with the depth of that layer in the neural network. Moreover, we remark that: (1) almost all of the reduction in intra-class variance in the samples occurs in the shallower layers of the networks, (2) the angular separation between class means increases consistently with hidden layer depth, and (3) simple datasets require only the shallower layers of the networks to fully learn them, whereas more difficult ones require the entire network. Ultimately, these results provide granular insights into the structural propagation of features through classification neural networks.
## 1 Introduction
Modern, highly-overparameterized deep neural networks have exceeded human performance on a variety of computer vision tasks [1, 2, 3]. However, despite their many successes, it remains unclear how these overparameterized networks converge to solutions which generalize well. In a bid to demystify neural networks' performance, a recent line of inquiry has explored the internally represented 'features' of these networks during the Terminal Phase of Training (TPT), i.e. when networks are trained past the point of zero error on the training data [4, 5, 6].
The phenomenon of _Neural Collapse_ (\(\mathcal{NC}\)), introduced by Papyan, Han, and Donoho [7, 8], represents one such avenue of inquiry. \(\mathcal{NC}\) refers to the emergence of a simple geometry in neural network classifiers during TPT. Specifically, \(\mathcal{NC}\) describes the phenomenon by which neural network classifiers converge to learning maximally separated, negligible-variance representations of classes in their last-layer activation maps. However, despite the extensive documentation of \(\mathcal{NC}\) in the last-layer representations of classification neural networks [9, 10, 11, 12],
there has been no exploration of its presence throughout the intermediate hidden layers of these networks.
In the present study, we provide a detailed account of the emergence of \(\mathcal{NC}\) in the intermediate hidden layers of neural networks across a range of different settings. Specifically, we investigate the impact of varying architectures, datasets, and activation functions on the degree of \(\mathcal{NC}\) present in the intermediate layers of classification networks. Our results show that some level of \(\mathcal{NC}\) typically occurs in these intermediate layers in all explored settings, where the strength of \(\mathcal{NC}\) in a given layer increases as the depth of that layer within the network increases. By examining the presence of \(\mathcal{NC}\) not only in the final hidden layer but also the intermediate hidden layers of classification networks, we gain a more nuanced understanding of the mechanisms that drive the behavior of these networks.
## 2 Methodology
### Network Architecture and Training
We examine the degree of \(\mathcal{NC}\) present in the hidden layers of three different neural network classifiers. Two of these models are popular in the computer vision community, and have been extensively studied and widely adopted: VGG11 [13] and ResNet18 [14]. We also train a fully-connected (FC) classification network, MLP6, with network depth of \(\ell=6\) and layer width of \(d=4096\) for each of its hidden layers. MLP6 serves as a toy model in which to more easily explore \(\mathcal{NC}\). In addition to varying network architecture, we also vary the activation functions. Specifically, we explore the effects on \(\mathcal{NC}\) of the ReLU, Tanh, and LeakyReLU activation functions. We train these classification neural networks on five popular computer vision classification datasets: MNIST [15], CIFAR10 and CIFAR100 [16], SVHN [17], and FashionMNIST [18]. To rebalance the datasets, MNIST and SVHN were subsampled to \(N=5,000\), \(N=4,600\), and \(N=600\) images per class, respectively. We normalize all datasets but do not perform any data augmentation. We use stochastic gradient descent with 0.9 momentum, Mean Square Error Loss (MSE) 1, \(\lambda=10^{-5}\) weight decay, and the one-cycle learning rate scheduler for all training [20].
Footnote 1: We use MSE loss rather than Cross Entropy loss as it has been shown to exhibit a greater degree of \(\mathcal{NC}\) in classification neural networks while maintaining a greater degree of mathematical clarity [7, 19].
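For concreteness, the training recipe described above (SGD with momentum, MSE loss on one-hot targets, \(10^{-5}\) weight decay, one-cycle learning-rate schedule) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' code; `model`, `train_loader`, `max_lr`, and `epochs` are placeholders.

```python
import torch
import torch.nn as nn

def train(model, train_loader, num_classes, epochs, max_lr):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device)
    criterion = nn.MSELoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=max_lr,   # lr is rescaled by the scheduler
                                momentum=0.9, weight_decay=1e-5)
    scheduler = torch.optim.lr_scheduler.OneCycleLR(
        optimizer, max_lr=max_lr, total_steps=epochs * len(train_loader))
    for _ in range(epochs):
        for images, labels in train_loader:
            images, labels = images.to(device), labels.to(device)
            targets = nn.functional.one_hot(labels, num_classes).float()
            optimizer.zero_grad()
            loss = criterion(model(images), targets)
            loss.backward()
            optimizer.step()
            scheduler.step()                                     # one-cycle steps per batch
    return model
```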
### Intermediate Layer Analysis
To assess the extent of \(\mathcal{NC}\) in the hidden layers of our classifiers, we perform a step of "\(\mathcal{NC}\) analysis" at various points during training. This analysis involves freezing the network and passing all the training samples through it. We then collect the network's post-activation representation of each sample after every hidden layer of interest. Specifically, we collect post-activations after each FC layer in MLP6, each convolutional layer in VGG11, and each convolutional layer in ResNet18. We then flatten these post-activation representations into vectors \(\mathbf{h}_{i,c}^{j}\), where \(j\) is the hidden layer, \(i\) is the training sample, and \(c\) is the class. We then compute four quantities in each of these hidden layer post-activation vectors to track \(\mathcal{NC}\): intra-class variance collapse \((\mathcal{NC}1)\), intra-class norm equality \((\mathcal{NC}2)\), inter-class maximal angular separation \((\mathcal{NC}2)\), and simplification to nearest-class center classifier \((\mathcal{NC}4)\) following the general outline provided in [7]. The specifics of these quantities are provided in Appendix A.
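As a rough illustration of the per-layer analysis, the following NumPy sketch measures a simplified within-class/between-class variability ratio (an \(\mathcal{NC}1\)-style quantity) and the spread of pairwise cosines between centred class means (an \(\mathcal{NC}2\)-style quantity) for one hidden layer. It is a stand-in for the exact metrics of Appendix A, not the authors' implementation, and the function and argument names are illustrative.

```python
import numpy as np

def layer_collapse_stats(acts_by_class):
    """acts_by_class: one (n_c, d) array of flattened post-activations per class."""
    means = np.stack([a.mean(axis=0) for a in acts_by_class])        # (C, d) class means
    centred = means - means.mean(axis=0)                             # centre on the global mean
    # average within-class scatter vs. scatter of the class means (NC1-style ratio)
    within = np.mean([((a - m) ** 2).sum(axis=1).mean()
                      for a, m in zip(acts_by_class, means)])
    between = (centred ** 2).sum(axis=1).mean()
    nc1 = within / between
    # pairwise cosines between centred class means (equiangularity / NC2-style check)
    unit = centred / np.linalg.norm(centred, axis=1, keepdims=True)
    cosines = unit @ unit.T
    off_diag = cosines[~np.eye(len(means), dtype=bool)]
    return nc1, off_diag.std()

# usage: nc1, angle_spread = layer_collapse_stats([acts_layer_j[c] for c in range(num_classes)])
```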
## 3 Results
We present the results for each of the \(\mathcal{NC}\) conditions for MLP6, VGG11, and ResNet18 with ReLU activation functions trained to TPT on FashionMNIST, MNIST, SVHN, CIFAR10, and CIFAR100 in Figure 1 to Figure 4.
\(\mathcal{NC}1\): The within-class variability decreases linearly with layer depth in the shallower layers of the classification networks, indicating that the earlier fully-connected/convolutional layers are equally effective in clustering samples of the same class. However, in the deeper layers of the networks, the within-class variability plateaus, suggesting that the network has already maximally reduced the variance between same-class samples even when they have only partially propagated through the network. This behavior is observed in most tested network architectures and datasets, except for CIFAR100. Ultimately, the earlier layers of the classifiers primarily group same-class samples together and contribute to the generation of \(\mathcal{NC}1\).
\(\mathcal{NC}2\): The two phenomena related to the emergence of the simplex ETF structure in the class means, i.e. the convergence of class means to equal norms and to maximal equal angles, also exhibit a somewhat linear relationship between the degree of collapse in any given hidden layer and that layer's depth in the network. However, unlike \(\mathcal{NC}1\), the collapse continues to strengthen even in the deeper layers of the network, rather than plateauing after the first few layers; this is more prevalent for the angular separation between different class means than for the similarity between their norms. This phenomenon also persists across most architectures and datasets, and suggests that the network continues to separate classes as it feeds samples forward through its full depth. This makes sense, as features extracted in the shallower layers of the network can be used to learn more and more effective ways of separating different-class samples in deeper layers, leading to an increase in the recorded strength of \(\mathcal{NC}2\) over layer depth.
\(\mathcal{NC}4\): The degree of \(\mathcal{NC}4\) in any given layer during training seems to be influenced by the presence of \(\mathcal{NC}1\) and \(\mathcal{NC}2\) in that layer. For the nearest class mean to accurately predict net
Figure 1: **Intra-class Variance Collapse (\(\mathcal{NC}1\)) in the ReLU classifiers’ intermediate hidden layers.** The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}1\) at initialization and the red solid line indicates \(\mathcal{NC}1\) after TPT.
work output, samples need to be close to their class mean (\(\mathcal{NC}1\)) and class means should be well-separated (\(\mathcal{NC}2\)). In most experiments, \(\mathcal{NC}4\) decreases linearly in the shallower layers and plateaus in the deeper layers. However, the plateauing occurs later than \(\mathcal{NC}1\) due to the additional angular separation between class means in these deeper layers, which contributes valuable information for classification. The degree of \(\mathcal{NC}4\) in the \(j\)-th hidden layer indicates how much of the network's classification ability is captured by its \(\leq j\) hidden layers. If the mismatch between the nearest neighbor classification in the \(j\)-th layer and the total network classification is zero, then all of the
Figure 3: **Maximal Angles (\(\mathcal{NC}2\)) in the ReLU classifiers’ intermediate hidden layers.** The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}2\) at initialization and the red solid line indicates \(\mathcal{NC}2\) after TPT.
Figure 2: **Equal Norms (\(\mathcal{NC}2\)) in the ReLU classifiers’ intermediate hidden layers.** The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}2\) at initialization and the red solid line indicates \(\mathcal{NC}2\) after TPT.
network's classification ability is already present by the \(j\)-th layer. For simpler datasets like MNIST and SVHN, the NCC mismatch between shallower layers and the network output reaches zero, while for more complex datasets like CIFAR100, the NCC mismatch only reaches zero in the final layers. This observation aligns with the notion that complex datasets require the deeper networks for complete classification, while simpler datasets can achieve it with the shallower layers; however, it will be important for future studies to observe how this generalizes to test/validation datasets.
However, despite these general trends, we note that the models trained on CIFAR100 exhibit a number of unique behaviors that are not consistent with our broader observations on \(\mathcal{NC}\), the most striking of which is the absence of any significant decrease in within-class variability over training. Instead, the data seems largely noisy for this \(\mathcal{NC}\) condition. This merits future investigation across other challenging datasets such as TinyImageNet, as well as more exploratory analysis.
In addition to the experiments performed on the classification networks with ReLU activation functions above, we also perform the same set of experiments on Tanh and LeakyReLU classifiers in Appendix B and Appendix C respectively. These experiments largely demonstrate the same characteristics as the ReLU experiments.
## 4 Conclusions
Our work demonstrates that the _Neural Collapse_ (\(\mathcal{NC}\)) phenomenon occurs throughout most of the hidden layers of classification neural networks trained through the terminal phase of training (TPT). We demonstrate these findings across a variety of settings, including varying network architectures, classification datasets, and network activations functions, and find that the degree of \(\mathcal{NC}\) present in any hidden layer is typically correlated with the depth of that layer in the network. In particular, we make the following specific observations: (1) almost all of the reduction in intra-class variance in the samples occurs in the shallower layers of the classification networks, (2) the angular separation between class means is increased consistently as samples propagate through the entire
Figure 4: **Simplification to NCC (\(\mathcal{NC}4\)) in the ReLU classifiers’ intermediate hidden layers. The results are generated at various points in training, where the blue dotted line indicates \(\mathcal{NC}4\) at initialization and the red solid line indicates \(\mathcal{NC}4\) after TPT.**
network, and (3) simpler datasets require only the shallower layers of the networks to fully learn them, whereas more difficult ones require the entire network. Ultimately, these results provide a granular view of the structural propagation of features through classification networks. In future work, it will be important to analyze how these results generalize to held-out validation data. For example, do our observations of \(\mathcal{NC}4\) serve as a proxy for the \(\leq j\) network's ability to classify data extend to validation data? Moreover, is there a broader relationship between \(\mathcal{NC}\) and network generalization/over-training?
## 5 Contributions
**Parker**: Initiated problem concept; led experiments and analysis design; collaborated on computational work; collaborated on results analysis; wrote the paper. **Onal**: Collaborated on experiments and analysis design; collaborated on computational work; collaborated on results analysis. **Stengel**: Collaborated on background research and initial experiment design; collaborated on computational work. **Intrater**: Collaborated on problem concept and background research.
## 6 Acknowledgements
We would like to thank Professor Boris Hanin for his generous guidance and support in the completion of this paper as well as for his insightful and exciting class, Deep Learning Theory, which ultimately led to the creation of this project.
|
2309.02449 | League of Legends: Real-Time Result Prediction | This paper presents a study on the prediction of outcomes in matches of the
electronic game League of Legends (LoL) using machine learning techniques. With
the aim of exploring the ability to predict real-time results, considering
different variables and stages of the match, we highlight the use of
unpublished data as a fundamental part of this process. With the increasing
popularity of LoL and the emergence of tournaments, betting related to the game
has also emerged, making the investigation in this area even more relevant. A
variety of models were evaluated and the results were encouraging. A model
based on LightGBM showed the best performance, achieving an average accuracy of
81.62\% in intermediate stages of the match when the percentage of elapsed time
was between 60\% and 80\%. On the other hand, the Logistic Regression and
Gradient Boosting models proved to be more effective in early stages of the
game, with promising results. This study contributes to the field of machine
learning applied to electronic games, providing valuable insights into
real-time prediction in League of Legends. The results obtained may be relevant
for both players seeking to improve their strategies and the betting industry
related to the game. | Jailson B. S. Junior, Claudio E. C. Campelo | 2023-09-02T02:01:51Z | http://arxiv.org/abs/2309.02449v1 | # League of Legends: Real-Time Result Prediction
###### Abstract
This paper presents a study on the prediction of outcomes in matches of the electronic game League of Legends (LoL) using machine learning techniques. With the aim of exploring the ability to predict real-time results, considering different variables and stages of the match, we highlight the use of unpublished data as a fundamental part of this process. With the increasing popularity of LoL and the emergence of tournaments, betting related to the game has also emerged, making the investigation in this area even more relevant. A variety of models were evaluated and the results were encouraging. A model based on LightGBM showed the best performance, achieving an average accuracy of 81.62% in intermediate stages of the match when the percentage of elapsed time was between 60% and 80%. On the other hand, the Logistic Regression and Gradient Boosting models proved to be more effective in early stages of the game, with promising results. This study contributes to the field of machine learning applied to electronic games, providing valuable insights into real-time prediction in League of Legends. The results obtained may be relevant for both players seeking to improve their strategies and the betting industry related to the game.
prediction of results, league of legends, machine learning, prediction models, game strategies, betting
## I Introduction
League of Legends is one of the most popular video games in the world today. There are over 150 million registered players, with an average of over 100 million monthly active players and over 10 million daily [1]. League of Legends (LoL) is a Multiplayer Online Battle Arena (MOBA) game developed by Riot Games in 2009. In the game, two teams face off with the objective of destroying the structures of the opposing team, with the main target being a structure called the nexus. The teams are composed of five players, each controlling a champion/character, and they battle on a map with three lanes and a jungle, with each player having a specific role in the match. There are many important objectives and variables in the game that directly impact victory, from destroying towers to killing dragons across the map, thereby generating advantages for the team in pursuit of victory.
With the game's fame and the growth of eSports [2, 3], the League of Legends World Championship, also known as Worlds, emerged. The event takes place annually and brings together hundreds of players from around the world, with the tournament being held in different cities every year. The first tournament was organized by Riot Games in 2011 and had a prize pool of US$100,000 [4]. Currently, the tournament is in its 12th edition, which took place in September 2022, with a total prize pool of US$2,225,000 [5].
With the popularity of tournaments and the associated prize money, betting and prediction pools have emerged within the community and even by Riot Games themselves. When Worlds begins, Riot Games creates a prediction pool for League of Legends players, where those who accurately predict the match outcomes receive in-game rewards. Additionally, there are betting websites that include eSports categories and allow users to bet on League of Legends matches. Websites like Betway and Rivalry are examples of platforms where you can place bets on LoL tournaments.
Predicting the outcomes of a LoL match can be quite challenging. Many factors can affect the final result, such as objective completions, ranging from jungle objectives like dragons and the Rift Herald to major objectives like tower destruction and inhibitors. The advantage one team has over the other in terms of gold (the in-game currency), items, or champion level is also taken into account. All these variables can change throughout the match, making it an even greater challenge to predict the final outcome.
This paper presents a machine learning-based approach to real-time prediction of LoL match results with the highest possible accuracy. Emphasis is placed on using recent and unique data as a fundamental part of this process, providing a comprehensive and up-to-date analysis of the game. Additionally, it aims to identify the best technique or combination thereof, considering the use of these distinct data sources. The effectiveness of the approach is also evaluated at different moments during the match, ensuring a dynamic and up-to-date approach to result prediction in the context of League of Legends.
The remainder of this paper is structured as follows. Section II presents background information. Then Section III discusses the related work. Following this, the data and methods employed are described in Section IV. Afterwards, Section V discusses the results. Finally, Section VI concludes the paper and points to future directions.
## II Background
In this section, we provide a background that serves as a basis for understanding the analyses and models developed in this study. Firstly, we present an explanation of the MOBA (Multiplayer Online Battle Arena) genre, and subsequently, we
delve into the understanding of the game League of Legends, which is the central focus of this work.
### _MOBA Genre: Multiplayer Online Battle Arena_
The MOBA genre, or Multiplayer Online Battle Arena, is one of the most popular genres in the competitive video game scene. MOBA games stand out for their combination of real-time strategy elements and intense action, providing a dynamic and challenging experience for players. In this genre, matches are contested between two teams, each composed of a group of players who control a single character, known as a hero or champion. Each hero has unique abilities and plays a specific role in the team, such as attack, defense, or support.
The central objective of MOBA games is to destroy the enemy base while defending one's own. The bases are protected by defensive towers and computer-controlled units that try to prevent the opposing team's advance. During the match, players must work together, plan strategies, coordinate attacks, and make tactical decisions to gain advantages and overcome opponents. Effective communication and synchronization among team members are crucial to success in the game.
Additionally, MOBA games feature elements such as character progression throughout the match, resource gathering, map objectives, and strategic decision-making. These elements contribute to the strategic depth and complexity of matches, making the MOBA genre a highly competitive and engaging form of gameplay.
### _League of Legends_
Among the numerous titles in the MOBA genre, League of Legends (LoL) stands out as one of the most popular and influential electronic games today. Developed by Riot Games, LoL was launched in 2009 and has garnered a massive player base worldwide.
In League of Legends, two teams composed of five players each face off on a strategically divided map with three main lanes (top, middle, and bottom) and an area called the jungle. Each player takes control of a champion with unique abilities, responsible for performing a specific role within the team. Champions are selected at the beginning of the match and have various play styles, such as fighters, mages, marksmen, tanks, and supports.
The main objective in LoL is to destroy the enemy Nexus, a structure located in the opposing team's base. To achieve this goal, players must advance through the lanes, eliminate computer-controlled minions, destroy defensive towers, and neutral objectives spread across the map, such as dragons and the Baron Nashor. These accomplishments provide resources and strategic advantages, such as gold, experience, and buffs, which can be used to strengthen the team and increase the chances of victory.
League of Legends stands out for its variety of champions, team strategies, ability combinations, and strategic complexity. Each match is unique, requiring quick adaptation, tactical decision-making, and efficient teamwork. The depth of the game and the constantly evolving meta game (dominant strategies) contribute to its longevity and continuous popularity in the eSports scene.
## III Related Work
This section analyzes research related to the prediction of outcomes in electronic games, with a focus on MOBA and League of Legends. These studies explore different ways to predict who will win the matches and understand what affects the success of teams. This review gives us an idea of what researchers have been studying in this field.
### _Prediction of Results in MOBA Games_
In the area of predicting results in MOBA games, specifically in the context of DOTA 2, studies have been conducted that contribute to the advancement of this field. Ke et al. [6] propose an innovative approach to identify and define team fights as crucial events during DOTA 2 matches. The main objective of this study is to explore the potential of team fight information in real-time prediction of match results. By using different recurrent neural network models, the study achieved an accuracy of over 70% when considering all team fight information up to the first 32 minutes of each match.
Another study conducted in the context of DOTA 2 investigated the use of machine learning models, from supervised linear regression to deep learning models such as Neural Networks and Long Short-Term Memory (LSTM) [7]. The results obtained demonstrated that deep learning models achieved high accuracy rates, with averages of 82% for linear regression and up to 93% for LSTM models. Furthermore, analyses were conducted considering different future prediction stages and temporal correlation, revealing the importance of these aspects in obtaining more accurate results.
In a different context, in the game Heroes of the Storm (HOTS), Swidler [8] explored the possibility of predicting match outcomes. Players have the option to upload replay files to a website called HOTSLogs [9], which analyzes the games and provides relevant statistics. Although Blizzard Entertainment, the game developer, does not provide a match history for download, HOTSLogs offers a way to obtain a summary of the last 30 days of games. This data was used to develop a model that accurately predicts over 62% of matches and estimates the probability of a team's victory in a given game.
### _Prediction of League of Legends Results_
The articles analyzed in this section address the prediction of League of Legends match results using different machine learning approaches. In common, all studies aim to identify patterns and variables that may influence match outcomes in order to develop accurate prediction models.
Some works approach the prediction of League of Legends match results using neural networks [10, 11]. They explore different sets of features such as Dragon, Level, Rift Heralds, Towers, gold, kills, assists, destroyed towers, and drakes. These studies demonstrate the capability of neural networks
to achieve high prediction accuracy rates, ranging from 75.1% to 93.75%.
In contrast, other studies utilize decision tree algorithms to predict outcomes [12, 13]. These studies consider features such as duration, kills, towers, and gold. Both studies obtained promising results, with average accuracies of 80% and accuracy rates ranging from 68.33% to 85.17%.
On the other hand, Silva et al. [14] compare different types of Recurrent Neural Networks (RNNs) in result prediction. A simple RNN achieved higher accuracy and was selected for further experiments with different time intervals. The accuracy ranged from 63.91% to 83.54% depending on the considered time interval. The study suggests the use of RNNs to analyze power spikes and make strategic decisions in the game.
Do et al. [15] highlight the importance of champion selection and player experience in result prediction. A machine learning model using Deep Neural Network (DNN) achieved an accuracy of 75.1% when considering the player's experience with the champion as one of the features. The study emphasizes the need for fair matchmaking systems that take into account players' champion selection.
In summary, the existing approaches share the goal of predicting League of Legends match results but differ in their approaches and considered features. Neural networks and decision tree algorithms have shown promise, while the analysis of time intervals and player experience with the champion have also been addressed. These studies provide valuable insights for the gaming community and developers, providing a solid foundation for future work in the field of result prediction in competitive games like League of Legends.
Table I presents a comparison of related works, highlighting the techniques used and the data considered in each study.
## IV Methodology
This Methodology section describes the approach adopted in this research for predicting League of Legends games. We present the dataset used, the applied machine learning techniques, and the evaluation metrics employed. The objective is to explore different approaches and provide accurate predictions to enhance the understanding and performance in this competitive gaming context.
### _Data Collection_
The data was collected using the official API from Riot Games [17], the developer of the game League of Legends. The API provides access to a variety of game-related information, such as match statistics, player details, results, and other relevant data for the study. Through the API, it was possible to obtain up-to-date and accurate data for analysis and modeling.
The data collection process involved the use of the _riotwatcher_ library [18], a Python library specifically developed to interact with the official Riot Games API. This library facilitated the sending of requests and the manipulation of the data returned by the API, streamlining the collection process.
It is important to highlight that the data collection through the Riot Games API, with the assistance of the _riotwatcher_ library, ensured the reliability and integrity of the data, as the information was obtained directly from the official source of the game. Additionally, all privacy policies and terms of use of the API were strictly followed during the collection process.
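A hedged sketch of this collection step is shown below. The wrapper and method names follow the riotwatcher interface for the match-v5 endpoints (`LolWatcher`, `match.by_id`, `match.timeline_by_match`); the API key, routing value, and match id are placeholders, and the exact signatures and response fields should be checked against the library and Riot API documentation rather than taken as the authors' pipeline.

```python
# Illustrative use of riotwatcher to pull one match summary and its timeline.
# Key, routing value and match id are placeholders; field names follow the
# Riot match-v5 responses (teamId 100 = blue side).
from riotwatcher import LolWatcher, ApiError

watcher = LolWatcher("RGAPI-your-key-here")
routing = "americas"                    # continental routing for match-v5
match_id = "BR1_1234567890"

try:
    match = watcher.match.by_id(routing, match_id)                   # end-of-game summary
    timeline = watcher.match.timeline_by_match(routing, match_id)    # per-minute frames
except ApiError as err:
    raise SystemExit(f"Riot API request failed: {err}")

blue_team = next(t for t in match["info"]["teams"] if t["teamId"] == 100)
blue_win = blue_team["win"]
frames = timeline["info"]["frames"]     # minute-by-minute snapshots for time-based features
```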
### _Dataset_
The dataset used in this study represents a representative sample of ranked matches in League of Legends, encompassing players of different skill levels, ranging from Iron to Diamond elo. The 64,556 collected matches provide a comprehensive and diverse view of the competitive landscape during the period from January 12, 2023, to May 18, 2023. [19]
Each recorded match in the dataset contains a wealth of valuable information for analysis and modeling. Among the available variables, we have information about champion kills, including the number of kills performed by the blue and red teams, the first kill that occurred in each match, as well as the players' performance in securing kills and assists.
Additionally, we have data on the achievement of important objectives during matches, such as the elimination of dragons, towers, and inhibitors. These strategic elements are crucial in determining a team's advantage and their ability to control the game map.
Other relevant variables include the total gold acquired by each team, the number of minions slain both in lanes and in the jungle, the average level of players, and many other metrics that allow us to better understand the dynamics of matches and the factors that can influence the final result.
By exploring this diverse dataset, the goal is to identify patterns, trends, and correlations among the variables in order to develop more accurate and robust prediction models for predicting the outcomes of League of Legends matches. This detailed analysis of the dataset is essential for understanding the game and developing more effective strategies.
### _Data Preprocessing_
During the data preprocessing process, several approaches were adopted to ensure the quality and consistency of the
dataset. One of the initial steps involved converting time values, expressed in milliseconds, to minutes. This transformation allowed for a better understanding of the match duration and facilitated the analysis of events at different moments of the game.
The dataset was then divided into four scenarios, representing different percentages of the total match duration: 20% PET (Percent Elapsed Time), 40% PET, 60% PET, and 80% PET. This strategic division, using the concept of Percent Elapsed Time (PET), was carried out to examine the performance of the model when dealing with varying amounts of information available about the game. While intuition leads us to expect that the model will make better predictions as more information is provided, it is important to highlight that the performance of the model does not always exhibit a linear growth as the amount of information increases. This analytical approach allows us to explore how the model behaves in different temporal contexts.
To ensure the reliability of the data, measures were taken to address potential issues with data quality. As part of this process, matches with a duration of less than 5 minutes were excluded from the dataset. This decision was made to eliminate instances that are likely to be remakes rather than actual matches.
Another important step was the transformation of boolean variables into binary numeric values, assigning the value \(0\) for "false" and the value \(1\) for "true". This approach simplifies the representation of the data and allows for the direct application of modeling and numerical analysis techniques.
Additionally, redundant or opposing variables were removed. For example, the variables _redWin_ and _redFirstBlood_ were removed as they provide similar information to the remaining variables and do not add additional value to the analysis. This removal of redundant variables resulted in a more concise dataset focused on the most relevant information for the study.
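A minimal pandas sketch of these cleaning steps is given below. The column names (`gameDuration`, `redWin`, `redFirstBlood`) are assumptions mirroring the variables mentioned in the text, and the four PET views are assumed to be built upstream by truncating the timeline-derived counts at 20/40/60/80% of game time; this is not the study's actual code.

```python
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    df["gameDurationMin"] = df["gameDuration"] / 60_000          # milliseconds -> minutes
    df = df[df["gameDurationMin"] >= 5]                          # drop probable remakes
    bool_cols = df.select_dtypes(include="bool").columns
    df[bool_cols] = df[bool_cols].astype(int)                    # True/False -> 1/0
    return df.drop(columns=["redWin", "redFirstBlood"], errors="ignore")
```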
### _Exploratory Data Analysis_
In this subsection, we will perform an exploratory data analysis to understand the general characteristics, identify patterns and trends, and extract relevant insights for our analysis. We will use visualization techniques and descriptive statistics to explore the data in detail and gain a comprehensive view of the dataset.
We now present a concise overview of the key findings and observations derived from our Exploratory Data Analysis.
During the exploratory data analysis, it was observed that the win ratio is 50.5% for the blue team and 49.5% for the red team. This indicates that there is no significant imbalance between the teams, with a relatively balanced outcome.
Regarding the duration of the matches, the longest one recorded in the dataset is 67.35 minutes, while the median duration of the matches is 30.01 minutes. This suggests that the majority of matches have a duration around this mark.
Fig. 1 shows the distribution of match durations in minutes for both the complete match (100%) and each of the portions considered in predictive analysis (20%, 40%, 60%, 80%). These values provide insights into the progression of game time throughout the matches.
significant. On the other hand, the performance of the red team in kills, the achievement of Dragons, and the destruction of towers by the red team have a negative impact on the victory of the blue team. These insights help us understand which aspects are more determinant for the success of the blue team at different moments of the game.
### _Feature Selection_
When analyzing the correlation between the variables and the victory of the blue team, we observed that different variables showed different levels of correlation in different scenarios of the matches. Overall, the variables _blueChampionKill_, _blueDragonKill_, and _blueFirstBlood_ showed a stronger positive correlation with the victory of the blue team in almost all the analyzed scenarios. This suggests that having a higher number of champion kills, objectives related to dragons, and securing the first kill are associated with a higher probability of victory for the blue team.
However, it is important to highlight that correlation does not imply direct causality. Although these variables show a significant correlation, other factors may influence the outcome of the match. Therefore, it is necessary to consider these variables in conjunction with other relevant information to build more accurate prediction models.
Additionally, during the analysis of the different scenarios, we observed some differences in the correlations of the variables. For example, in the 40% PET scenario of the matches, the initial kill (_firstBlood_), the number of kills, and the achievement of dragons showed equal correlations with the victory of the blue team, indicating the importance of these events in the team's performance.
In the 60% PET scenario of the matches, the variables related to champion kills, dragons, and the first kill showed more significant correlations with the victory of the blue team. This may suggest that these objectives are crucial for the team's success in this stage of the game.
In the 80% PET scenario of the matches, we observed a strengthening of the correlations between champion kills, dragons, and towers conquered by the blue team. These results may indicate that in more advanced matches, these aspects become even more determinative for victory.
Therefore, considering these correlation analyses in different scenarios, we can conclude that certain variables have a more significant relationship with the victory of the blue team in different stages of the matches.
Table VI presents the variables that were taken into account in the construction of prediction models based on the correlation analyses discussed earlier.
The variables listed in Table VI represent the aspects identified as having a significant correlation with the victory of the blue team at different stages of the matches.
This information is valuable for the construction of prediction models that take into account these specific characteristics to predict the outcome of League of Legends matches with greater accuracy.
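The correlation screen described above can be reproduced, in outline, with a few lines of pandas. This is an illustrative sketch only; the target and feature column names are assumptions consistent with the variables discussed in the text.

```python
# Rank the features of one PET scenario by absolute Pearson correlation with the outcome.
import pandas as pd

def top_correlated_features(df: pd.DataFrame, target: str = "blueWin", k: int = 10) -> pd.Series:
    corr = df.corr(numeric_only=True)[target].drop(target)
    order = corr.abs().sort_values(ascending=False).index
    return corr.reindex(order).head(k)

# usage: print(top_correlated_features(pet_views[0.6]))   # pet_views built from the cleaned data
```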
### _Construction and Evaluation of Prediction Models_
In this subsection, we describe the methodology adopted for the construction and evaluation of prediction models for League of Legends matches, taking into consideration the percentage of elapsed time in the matches. In addition to the machine learning algorithms mentioned earlier, we considered this new variable as an important attribute to enhance the accuracy of the models.
Several machine learning algorithms were selected for this study, including Logistic Regression, Random Forest, Naive Bayes, scikit-learn's Gradient Boosting, XGBoost, LightGBM, Neural Network, and Bagging. This choice was motivated by these algorithms' capability to handle classification problems and their common usage in prediction studies.
The evaluation of the models was conducted using the cross-validation technique, an approach that provides robust
evaluation considering the percentage of elapsed time as part of the input data. This method allows us to capture patterns and behaviors over time, contributing to the improvement of the predictive capability of the models.
During the model evaluation, essential metrics such as accuracy, precision, recall, F1-score, and AUC-ROC were applied. Furthermore, we considered the models' performance and predictive ability while taking into account the percentage of elapsed time in the matches.
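A condensed sketch of this evaluation loop, using scikit-learn's cross-validation utilities and a subset of the listed models, is shown below. The fold count and hyperparameters are illustrative defaults, not the settings used in the study.

```python
# Score several candidate classifiers with 5-fold cross-validation on the
# metrics discussed above (accuracy, precision, recall, F1, ROC-AUC).
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from lightgbm import LGBMClassifier

MODELS = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(),
    "GradientBoosting": GradientBoostingClassifier(),
    "LightGBM": LGBMClassifier(),
}
SCORING = ["accuracy", "precision", "recall", "f1", "roc_auc"]

def evaluate(X, y):
    results = {}
    for name, model in MODELS.items():
        scores = cross_validate(model, X, y, cv=5, scoring=SCORING)
        results[name] = {m: scores[f"test_{m}"].mean() for m in SCORING}
    return results
```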
As part of model optimization, we chose to employ the hyperparameter search technique known as Random Search on the Random Forest and LightGBM algorithms. This approach enabled us to efficiently explore a wide range of hyperparameter combinations, aiming to identify more promising configurations for the models' performance.
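The Random Search step can be sketched with scikit-learn's `RandomizedSearchCV`, as below for LightGBM; the search space shown is an illustrative assumption rather than the grid actually explored in the study.

```python
# Randomized hyperparameter search for the LightGBM classifier.
from scipy.stats import randint, uniform
from sklearn.model_selection import RandomizedSearchCV
from lightgbm import LGBMClassifier

param_distributions = {
    "num_leaves": randint(16, 128),
    "learning_rate": uniform(0.01, 0.2),
    "n_estimators": randint(100, 600),
    "min_child_samples": randint(10, 100),
}

search = RandomizedSearchCV(
    LGBMClassifier(),
    param_distributions=param_distributions,
    n_iter=50, cv=5, scoring="accuracy", random_state=42, n_jobs=-1,
)
# usage: search.fit(X_train, y_train); print(search.best_params_, search.best_score_)
```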
When comparing the results obtained from various models, we did not limit ourselves to the analysis of accuracy and traditional metrics alone. We also considered the models' ability to capture and interpret variations over time. This approach enables us to identify the most suitable model for predicting the outcomes of League of Legends matches, considering the impact of the percentage of elapsed time.
By including the percentage of elapsed time as an additional attribute, we were able to obtain more precise and relevant insights for predicting match outcomes. The combination of machine learning algorithms, cross-validation, evaluation metrics, and the inclusion of the percentage of elapsed time enriches the construction of effective and reliable prediction models.
## V Results and Discussion
During the analysis of the correlations between the variables and the victory of the blue team, we observed that these correlations vary throughout the duration of the match. In the early stages, the kills made by the champions of the blue team and the achievement of first blood were strongly correlated with victory. As the match progressed, the acquisition of Dragons and the destruction of towers by the blue team also showed a positive correlation with the outcome.
Regarding the feature importance of the random forest model, we observed that it varies over time in the match. Specifically, at 20% of the elapsed time, the most important variable for predicting the victory of the blue team was _redTotalGold_. This means that the total gold obtained by the red team played a crucial role in determining the outcome in this early phase of the match. This observation is evidenced in Fig. 2, where we can see the graph of the top 12 features with the highest importances in this scenario.
In the 40% PET, 60% PET, and 80% PET scenarios, the importance of the features underwent changes. At 40% PET scenario, the variable "blueTotalGold" became the most relevant, as illustrated in Fig. 3. This indicates that the total gold acquired by the blue team was a determining factor for their victory in this intermediate phase of the match. When PET=60%, it could be observed that the variable "blueTotalGold" remained the most important, as shown in Fig. 4, reinforcing its relevance in this stage of the game. For PET=80%, the variable "blueChampionKill" held the highest importance, as shown in Fig. 5, emphasizing the significance of champion kills executed by the blue team in this stage of the match.
These findings highlight how the relevant features for the victory of the blue team can vary over time. Therefore, it is essential to consider the different stages of the match when building prediction models and developing strategies to succeed at different moments of the game. Additionally, when comparing the results of correlations with the feature importance obtained through Random Forest, significant differences can be noticed. While correlations provide information about the direct relationship between variables and victory, feature importance indicates the degree of contribution of each variable to the accuracy of the prediction model.
It is interesting to note that, in some cases, a variable may have a strong correlation with the victory of the blue team, but relatively low importance according to Random Forest. This can be explained by the fact that Random Forest evaluates the importance of a variable considering not only its direct relationship with the target variable but also its interaction with other variables and its ability to reduce model uncertainty.
Therefore, it is important to consider both correlations and feature importance when interpreting the results. While correlations help identify the direct relationships between variables and victory, feature importance provides insights into the relative impact of each variable on the overall performance of the prediction model. This combination of information allows us to obtain a more comprehensive and accurate understanding of the factors that influence the outcome of League of Legends matches.
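For reference, the per-scenario importances discussed above can be read off a fitted random forest as follows; the estimator settings and column names are illustrative assumptions, not those of the study.

```python
# Rank features of one PET scenario by random-forest impurity importance.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def rf_importances(X: pd.DataFrame, y, top: int = 12) -> pd.Series:
    forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    return (pd.Series(forest.feature_importances_, index=X.columns)
              .sort_values(ascending=False)
              .head(top))

# usage: rf_importances(df.drop(columns=["blueWin"]), df["blueWin"])
```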
Based on the results obtained from applying the constructed prediction models, we evaluated the performance of each of them considering the percentage of elapsed time in League of Legends matches. The evaluation metrics used were accuracy, precision, recall, F1-score, and AUC-ROC.
Table VII presents the evaluation metrics for each model at different PET.
In the analysis of accuracy metrics among models across different time intervals, we can observe interesting trends. When considering 20% of the elapsed time, the standout models were Logistic Regression, followed by Gradient Boosting and XGBoost. In the 40% interval, we had LightGBM leading, followed by Logistic Regression and again XGBoost.
Moving on to the 60% elapsed time mark, LightGBM achieved the highest performance, followed by Random Forest and Gradient Boosting. Finally, reaching 80% of the elapsed time, LightGBM once again demonstrated the best accuracy, followed by Random Forest and XGBoost.
This analysis showcases the variation in model performance throughout the course of the match. LightGBM consistently excelled across different intervals, highlighting its adaptability to various phases of the game. It's worth noting that the success of each model could depend on specific match factors, and considering the percentage of elapsed time as an additional attribute could be beneficial for the model learning process.
Table VIII presents the accuracy rates achieved through the application of the hyperparameter search technique known as Random Search on the LightGBM and Random Forest models.
This efficient approach explored various combinations of hyperparameters, aiming to optimize the most promising models through the Random Search method. The comparison between optimized and non-optimized versions revealed noticeable improvements in accuracy, validating the effectiveness of Random Search in enhancing predictive performance. In summary, the utilization of Random Search on the most promising models led to significant performance enhancements, underscoring its importance in optimizing models and obtaining more reliable results.
It is noteworthy that, overall, the models demonstrated enhanced performance when considering the percentage of elapsed time as an additional attribute in their learning processes.
Fig. 6 displays the average accuracy of the models at different elapsed time percentages.
We can observe a trend of increasing average accuracy of the models as the elapsed time percentage increases. This indicates that the information about the elapsed time in the matches is relevant for predicting the outcomes.
## VI Conclusions and Future Work
Based on the results obtained and the conducted analyses, we can conclude that including the elapsed time percentage in League of Legends matches as an additional attribute has a significant impact on the performance of prediction models.
The models showed overall improvement in their evaluation metrics when considering this temporal information.
The LightGBM model proved to be the most effective in terms of performance, especially when the elapsed time percentage was between 40% and 80%. However, in the early stages of the game (elapsed time percentage of 20%), other models such as Logistic Regression and Gradient Boosting had superior results. This indicates that different models may be more suitable depending on the stage of the match.
Considering an accuracy of 60%, we can state that there is potential for making predictions in games that have only passed 20% of the time. However, it is important to note that predicting outcomes in League of Legends matches still presents significant challenges due to the dynamic nature of the game. Therefore, it is necessary to consider other factors beyond model accuracy, such as the analysis of other performance metrics and understanding the limitations of the study.
The insights gained revealed the importance of different variables over time. Variables related to champion kills, dragons, and conquered towers had a greater impact on predictions in the early stages, while variables such as total gold and first blood became more relevant as the elapsed time increased. This game evolution highlights the importance of considering different stages of the matches when building prediction models.
Although the results were promising, there is still room for improvement. The dynamic and complex nature of League of Legends poses significant challenges for modeling and predicting outcomes. Therefore, it is crucial to consider other variables and additional information, as well as explore new approaches and datasets, in order to further enhance the prediction models.
An interesting perspective for future work is to address the challenge of predicting the outcomes of League of Legends matches as a time series. This approach could enable the capture of emerging patterns and trends in the game, considering the dynamic and ever-evolving nature of the matches. Additionally, a compelling suggestion for future work is the real-time guidance implementation. This implementation could provide predictions to players during the actual gameplay, enhancing their experience and enabling tests of the practical effectiveness of predictions in a real game scenario. As a complementary notion, the evaluation of real-time performance is also valuable. Investigating practical aspects of the model, such as latency and processing rate, in conjunction with real-time implementation, would yield significant insights into the challenges and requirements associated with real-time deployment. By exploring these three research directions, we aim to broaden the applicability and effectiveness of predictions within the ever-changing context of League of Legends.
|
2305.02944 | A cold-atom Ramsey clock with a low volume physics package | We demonstrate a Ramsey-type microwave clock interrogating the 6.835~GHz
ground-state transition in cold \textsuperscript{87}Rb atoms loaded from a
grating magneto-optical trap (GMOT) enclosed in an additively manufactured
loop-gap resonator microwave cavity. A short-term stability of $1.5
\times10^{-11} $~$\tau^{-1/2}$ is demonstrated, in reasonable agreement with
predictions from the signal-to-noise ratio of the measured Ramsey fringes. The
cavity-grating package has a volume of $\approx$67~cm\textsuperscript{3},
ensuring an inherently compact system while the use of a GMOT drastically
simplifies the optical requirements for laser cooled atoms. This work is
another step towards the realisation of highly compact portable cold-atom
frequency standards. | Alan Bregazzi, Etienne Batori, Ben Lewis, Christoph Affolderbach, Gaetano Mileti, Erling Riis, Paul Griffin | 2023-05-04T15:46:03Z | http://arxiv.org/abs/2305.02944v1 | # A cold-atom Ramsey clock with a low volume physics package
###### Abstract
We demonstrate a Ramsey-type microwave clock interrogating the 6.835 GHz ground-state transition in cold \({}^{87}\)Rb atoms loaded from a grating magneto-optical trap (GMOT) enclosed in an additively manufactured loop-gap resonator microwave cavity. A short-term stability of \(1.5\times 10^{-11}\)\(\tau^{-1/2}\) is demonstrated, in reasonable agreement with predictions from the signal-to-noise ratio of the measured Ramsey fringes. The cavity-grating package has a volume of \(\approx\)67 cm\({}^{3}\), ensuring an inherently compact system while the use of a GMOT drastically simplifies the optical requirements for laser cooled atoms. This work is another step towards the realisation of highly compact portable cold-atom frequency standards.
+
Footnote †: preprint: AIP/1203
Compact frequency standards based on the interrogation of both ions and neutral atoms continue to receive much interest, with many different schemes now reported in the literature [1; 2; 3; 4; 5; 6; 7; 8; 9]. While compact clocks based on thermal atomic vapours and coherent population trapping (CPT) remain unrivalled in terms of size, weight and power (SWaP), they typically exhibit stabilities that are limited in the medium to long term due to light-shift effects [10] and the buffer gases required to reduce atomic collisions [1]. In addition to this, CPT clocks struggle to reach atomic shot noise due to the limited signal-to-noise ratio (SNR), a consequence of the low detected signal photon count per atom. This limitation often requires complex interrogation schemes to optimise the clock stability [11; 12; 13].
The development in the last decade of pulsed optically pumped (POP) clocks in thermal vapours has driven new research, with state-of-the-art stabilities [2; 14; 15]. However, these clocks still suffer from buffer gas shifts, providing the ultimate limitation to their long-term stabilities. Achievable Ramsey times within these systems are also limited to a few ms by spin relaxation of the thermal atoms [2; 16], restricting the short-term stability performance.
In an effort to combat these limitations several groups have now developed compact cold atom microwave clocks based on spherical optical-integrating sphere cavities [17; 18], cylindrical cavities [19; 20] and more recently loop-gap-resonator cavities [21]. Three examples of cold atom microwave clocks are now even commercially available [22; 23; 24]. Different laser cooling schemes with varying optical geometries have been used within these systems, with isotropic cooling of an optical molasses [17; 18; 20; 22], pyramid MOTs [25] and larger traditional 6-beam MOTs [19; 21] all being utilised.
In this letter we demonstrate the operation and a first stability measurement of a cold-atom atomic clock based on an additively manufactured loop-gap resonator (LGR) cavity with a volume of \(\approx\) 67 cm\({}^{3}\) incorporating an integrated GMOT [26]. Although vapour-cell atomic clocks based on LGR structures with external diameters of order 1 cm have been demonstrated [16], such small devices will have limited benefit when using cold atoms. Previous investigations with both GMOTs [27] and 6-beam MOTs [28] have shown that constraining the trap beam diameter limits the maximum trapped atom number, therefore restricting the potential stability of the final clock.
In the present study a sample of cold atoms is optically pumped into the clock state, probed in a double resonance Ramsey-type scheme and the resulting state populations read out through an absorptive method. The use of additive manufacturing allows for complex electrode geometries, while maintaining a highly scalable manufacturing process due to the lack of precision machining and assembly required [29; 30]. While the main aim of this work has been to demonstrate the feasibility of integrating this cavity design and its fabrication technique with the GMOT architecture, we envisage that further substantial reduction in size is possible by adapting the cavity to form the bulk of the vacuum system. The minimal optical access required of this scheme also allows for a reduction in the number of apertures present in the cavity body which must be carefully considered during the design process so as not to degrade the cavity mode and field homogeneity [21; 30].
Figure 1: Simplified schematic of the physics package. Trap light (red arrow) propagates parallel to the magnetic bias field (black dashed arrow), 90:10 non-polarising beam splitter (NPBS) splits the optical pumping (blue arrow) and readout (green arrow) light onto a reference photodiode (\(\text{PD}_{\text{Ref}}\)) and signal photodiode (\(\text{PD}_{\text{Sig}}\)) after being retro-reflected by a mirror (M). Local oscillator (LO) is connected via a SMA vacuum feedthrough.
A commercial laser system (Muquans ILS) at 780 nm with an integrated electro-optic modulator (EOM) for the creation of optical sidebands on the carrier frequency is used throughout. The laser light is split into three distinct optical paths, each with a double passed acousto-optic modulator (AOM) for power and frequency control to enable trapping, optical pumping and state readout. The trapping light is coupled into a fibre to be passed to the physics package. The optical pumping and state readout beams are coupled into a single additional fibre and likewise passed to the physics package.
The microwave cavity itself, described in detail in Ref. [30], consists of a loop-gap structure with a four-electrode geometry and has an outer radius of 44 mm and a height of roughly 44 mm. The cavity operates in a TE\({}_{011}\)-like mode, tuned to the ground-state splitting of \({}^{87}\)Rb with a quality factor of Q\(\approx\)360, and is mounted within a stainless steel vacuum chamber, maintained at ultra-high-vacuum (UHV) by an ion pump. Optical access is enabled via viewports in the vacuum chamber along two orthogonal axes. The first axis is parallel to the cylindrical symmetry axis of the cavity (in the following this is referred to as the cavity axis) and allows the trap light to be directed onto the grating chip after expansion and collimation from the fibre. Optical pumping and state readout light, expanded from the fibre to a 1/e\({}^{2}\) diameter of 7 mm, is directed onto the atoms through two 4 mm holes drilled in the side of the cavity body. A retro-reflecting mirror for this light is placed outside the vacuum chamber to decrease the acceleration experienced by the atoms when interacting with the optical pumping and probe beams and increase signal amplitudes. A simplified schematic of this is shown in Fig.1. No significant degradation of the cavity field is observed by the introduction of the holes in the cavity body [30].
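For a sense of scale, the quoted quality factor translates directly into a cavity linewidth via the standard relation \(\Delta\nu=\nu_{0}/Q\); the short sketch below assumes the quoted values are the loaded centre frequency and Q.

```python
# Rough cavity bandwidth estimate from the quoted quality factor
# (values taken from the text; assumed to be the loaded parameters).
f0 = 6.834682611e9     # 87Rb ground-state hyperfine splitting, Hz
Q = 360                # quoted cavity quality factor
bandwidth_hz = f0 / Q  # 3 dB full width of the cavity resonance
print(f"Cavity 3 dB bandwidth ~ {bandwidth_hz / 1e6:.0f} MHz")  # ~19 MHz
```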
A pair of anti-Helmholtz coils is mounted within the vacuum chamber, along the cavity axis, in order to create the quadrupole magnetic field required for the trapping process. Three orthogonal pairs of Helmholtz shim coils, mounted externally to the vacuum chamber, are used for the cancellation of external stray DC magnetic fields and to apply a magnetic bias along the cavity axis of \(\approx\) 100 mG in order to lift the atomic degeneracy during optical molasses and clock interrogation. We note that the current demonstration is a proof of concept with no magnetic shielding of the experiment present, limiting the potential stability of the system.
The experimental cycle is initiated by turning the trapping coils on with the trap light tuned to be approximately \(\Delta\)=-2\(\Gamma\) red detuned (\(\Gamma/2\pi\)=6.07 MHz) from the \({}^{87}\)Rb D\({}_{2}\)\(F\)=2\(\rightarrow\)\(F^{\prime}\)=3 cycling transition, with re-pump light generated by the EOM operating at 6.57 GHz to produce 5% optical sidebands. A Rb vapour, maintained at the \(1\times 10^{-9}\) Torr level, is produced by resistively heating a pair of alkali metal dispensers. We then perform a 6 ms optical molasses by turning the trap coils off and linearly ramping the light detuning to \(\Delta\)=-5\(\Gamma\) while simultaneously decreasing the trap light intensity. After molasses we measure atomic temperatures of \(\approx 10\)\(\mu K\). In order to decrease the clock cycle time and mitigate experimental dead time, we employ atom recapture between experimental cycles [3; 31]. In steady state this allows the trapping of \(>3\times 10^{6}\) atoms with a load time of 100 ms for a clock cycle operating at \(\approx\)7 Hz. Once the atoms have been trapped and cooled, the trap light is extinguished by an AOM and blocked by a mechanical shutter [32] to ensure no light leakage during microwave interrogation. After molasses, the atoms are assumed to be roughly evenly distributed between the five \(m_{F}\) levels of the \(F=2\) hyperfine ground-state manifold. We therefore employ a 1 ms optical pumping stage with 10 \(\mu\)W total power of linearly polarised light, polarisation axis parallel to the quantization axis, tuned to the \(F\)=2\(\rightarrow\)\(F^{\prime}\)=2 and re-pump light tuned to the \(F\)=1\(\rightarrow\)\(F^{\prime}\)=2 transition. Due to selection rules this pumps \(>95\%\) of the atoms into the \(5^{2}\)S\({}_{1/2}\), \(|F=2,m_{F}=0\rangle\) state [33] as seen by the almost complete elimination of the \(m_{F}=1\to m_{F}^{\prime}=1\) transitions, when scanning the microwave detuning, increasing the contrast of the resulting clock signal.
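For orientation, the stage durations quoted in this letter can be collected into a simple timing budget for one clock cycle; the readout and remaining dead time are not broken down explicitly in the text, so they are lumped into a single entry obtained by difference from the \(\approx\)140 ms total cycle time.

```python
# Illustrative timing budget for one clock cycle. All listed durations are
# quoted in the text; the final entry is inferred by difference and is therefore
# only an assumption about how the remaining time is spent.
stages_ms = {
    "MOT load (with recapture)": 100.0,
    "optical molasses": 6.0,
    "optical pumping": 1.0,
    "first pi/2 pulse": 0.2,
    "Ramsey free evolution": 10.0,
    "second pi/2 pulse": 0.2,
}
total_cycle_ms = 140.0  # full cycle time T_C
accounted_ms = sum(stages_ms.values())
print(f"Accounted for: {accounted_ms:.1f} ms")
print(f"State readout + dead time (by difference): {total_cycle_ms - accounted_ms:.1f} ms")
```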
A Keysight E8257D microwave synthesizer is used as a local oscillator and microwave source throughout. Square microwave \(\pi/2\) pulses with a duration of 200 \(\mu\)s and typical power levels of 0.04 mW are applied to the atoms. The specified phase noise performance of the local oscillator allows the Dick effect [34] to be reduced to the \(6.12\times 10^{-13}\) level at 1 s for the experimental cycle time. After the desired microwave pulses have been applied, the atomic states are read out by an absorption method using 30 \(\mu\)W of optical power, measured by recording the transmission on a photodiode of two subsequent probe pulses. First, a short pulse of readout light tuned to the \(F\)=2\(\rightarrow\)\(F^{\prime}\)=3 transition is applied, giving a measure of the number of atoms in the \(F=2\) ground-state. We then apply a second readout pulse with re-pump light present. This measures the total number of atoms, providing a normalisation of the signal and reduced sensitivity to atom number fluctuations. Intensity noise in the readout pulses is reduced by the use of a reference photodiode.
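One simple way to picture the normalisation is sketched below: the first pulse measures the \(F=2\) population, the second (with re-pump light) measures the total atom number, and the reference photodiode removes common-mode probe intensity noise. The exact signal processing is not described in the text, so this is only a schematic model that assumes the optically thin limit, where the absorbed fraction is proportional to atom number.

```python
import numpy as np

def f2_fraction(sig_f2, ref_f2, sig_total, ref_total):
    """Estimate the fraction of atoms in F=2 from the two probe pulses.

    sig_* are signal-photodiode samples (transmitted probe), ref_* are the
    corresponding reference-photodiode samples used to cancel probe intensity
    noise. Assumes absorption proportional to atom number (optically thin
    limit); the actual detection chain may differ.
    """
    absorbed_f2 = 1.0 - np.mean(sig_f2) / np.mean(ref_f2)           # F=2 atoms only
    absorbed_total = 1.0 - np.mean(sig_total) / np.mean(ref_total)  # F=1 + F=2 (re-pump on)
    return absorbed_f2 / absorbed_total
```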
An example Ramsey fringe taken with a Ramsey time, \(T_{R}\)=10 ms, corresponding to a fringe linewidth of 50 Hz is shown in Fig.2. This fringe exhibits an SNR of around 110, measured at the half-width points of the central fringe, where SNR is defined as the ratio of the peak amplitude divided by the amplitude noise.
Figure 2: Typical Ramsey fringe taken with a 10 ms free evolution time.
The predicted SNR-limited short-term relative frequency instability of a local oscillator stabilised to an atomic transition, in terms of Allan deviation, is given by [35]:
\[\sigma_{SNR}(\tau)=\frac{1}{\pi C}\frac{\Delta f}{f_{0}}\frac{1}{SNR}\sqrt{\frac{ T_{C}}{\tau}} \tag{1}\]
where \(C\) is the fringe contrast, \(\Delta f\) is the signal linewidth (\(\approx 1/(2T_{R})\)), \(f_{0}\) is the central frequency (6.8346... GHz), \(T_{C}\) is the full experimental cycle time (\(T_{C}=140\) ms) and \(\tau\) is the averaging time. We use this relationship as a basis to optimise our experimental cycle to maximise the potential stability of our clock. A plot of the predicted stability, using the measured SNR, as a function of the Ramsey time is shown in Fig.3. From this we find that as the Ramsey time is increased the predicted stability also improves, up to the level of \(8.95\times 10^{-12}\) for a 10 ms Ramsey time. After this time, however, the stability begins to decrease as the SNR is degraded. This degradation in SNR is attributed to atomic losses due to both the thermal expansion of the cloud and the atoms falling out of the probe region under gravity. The extension of this Ramsey time should be possible by moving the holes in the cavity body lower down, introducing elliptical holes to maintain good probe-atom overlap along the path of gravity, or by implementing a grating-chip atomic fountain [36]. This last option is particularly attractive because, as with traditional atomic fountains, it would be possible to apply both \(\pi/2\) pulses when the atoms are at the same vertical point of the cavity. This will allow the phase difference observed by the atoms between the two microwave pulses to be minimised, essential for high contrast Ramsey fringes in a relatively low-Q cavity such as used here. A grating-chip atomic fountain (using CPT interrogation) such as this has already demonstrated Ramsey times out to 100 ms, with a corresponding fringe linewidth of 5 Hz [36].
To assess the stability of our system we stabilise the local oscillator to the atomic signal. The signal is modulated and demodulated at the fringe's half maximum to construct the error signal which is used to feedback onto the local oscillator by voltage tuning its 10 MHz internal reference. Fig.4 shows an overlapping Allan deviation of the resulting frequency stability when compared to an oven controlled quartz crystal oscillator (Wenzel Associates 501-29647) disciplined to GPS (GPSDO). From Fig.4 we see that the clock stability averages down with a \(1.5\times 10^{-11}\)\(\tau^{-1/2}\) dependence out to 10 s, at which point it deviates slightly from the ideal \(\tau^{-1/2}\) dependence. This is in reasonable agreement with the theoretical stability obtained from Fig.3 at a 10 ms Ramsey time and is well above the ultimate stability limit set by the quantum projection noise (QPN) of \(4.9\times 10^{-13}\)\(\tau^{-1/2}\), calculated by replacing the SNR term in (1) by \(\sqrt{N}\), where \(N\) is the atom number.
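Equation (1) and the quantum-projection-noise figure quoted above are straightforward to reproduce numerically. The fringe contrast \(C\) is not stated explicitly in the text, so it is left as a free parameter (set to 1 below), which is why the results only approximately match the quoted \(8.95\times 10^{-12}\) and \(4.9\times 10^{-13}\) values; all other numbers are taken from the text.

```python
import math

def sigma_snr(tau, snr, contrast=1.0, T_R=10e-3, T_C=0.140, f0=6.834682611e9):
    """SNR-limited Allan deviation from Eq. (1), with linewidth 1/(2*T_R)."""
    delta_f = 1.0 / (2.0 * T_R)
    return (delta_f / (math.pi * contrast * f0 * snr)) * math.sqrt(T_C / tau)

# Short-term stability predicted from the measured fringe SNR of ~110 at T_R = 10 ms.
print(f"sigma_SNR(1 s) ~ {sigma_snr(1.0, snr=110):.1e}")             # ~8e-12 for C = 1

# Quantum projection noise limit: replace SNR by sqrt(N), with N > 3e6 trapped atoms.
print(f"sigma_QPN(1 s) ~ {sigma_snr(1.0, snr=math.sqrt(3e6)):.1e}")  # ~5e-13
```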
As the experiment is currently operated in a magnetically unshielded environment the instability contribution due to the second-order Zeeman shift was expected to limit the clock stability in the medium-term. This was confirmed by measuring the magnetic field stability via the \(|F=1,m_{F}=1\rangle\rightarrow|F^{\prime}=2,m_{F}^{\prime}=1\rangle\) microwave transition, exhibiting a magnetic field sensitivity of \(\beta_{1}\)=1.4 MHz/G [37]. The expected stability of the \(m_{F}=0\) clock transition (\(\beta_{0}\)=575 Hz/G\({}^{2}\)[37]) could then be calculated, shown as black points in Fig.4. This plot shows that the second-order Zeeman shift is indeed an important limitation to the clock stability at \(\tau>\)10 s, due to a pronounced hump in the magnetic field stability around this time, after which the stability averages down slightly to the level of \(<2\times 10^{-12}\). For future iterations of this set-up it will be imperative to introduce magnetic shielding for long-term stability performance.
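The quoted coefficients allow a quick estimate of how bias-field fluctuations feed into the clock instability: the \(m_{F}=0\) transition shifts by \(\beta_{0}B^{2}\), so a small fluctuation \(\delta B\) around the \(\approx\)100 mG bias gives a fractional frequency change of roughly \(2\beta_{0}B\,\delta B/f_{0}\). A minimal sketch follows; the field-fluctuation value used in the example is purely illustrative and not a number from the text.

```python
beta0 = 575.0        # second-order Zeeman coefficient of the clock transition, Hz/G^2
f0 = 6.834682611e9   # clock transition frequency, Hz
B_bias = 0.100       # applied bias field, G (approx. 100 mG)

def zeeman_fractional_shift(delta_B):
    """Fractional frequency change caused by a small bias-field change delta_B (in G)."""
    return 2.0 * beta0 * B_bias * delta_B / f0

# Example: a 100 uG field fluctuation (illustrative value only).
print(f"{zeeman_fractional_shift(100e-6):.1e}")  # ~1.7e-12
```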
Figure 4: Red points: Overlapping Allan deviation of local oscillator’s stability when locked to atomic signal. Blue solid line: stability predicted by (1) and measured fringe SNR. Red dashed line: \(\tau^{-1/2}\) dependence of the measured 1 s stability. Black points: stability limit due to the second-order Zeeman shift. Yellow points: measured GPSDO reference oscillator stability. Error bars represent 1\(\sigma\) confidence bound.
Figure 3: Predicted short-term stability as a function of Ramsey time. Inset: Typical central Ramsey fringe obtained at 10 ms free evolution time. Red points indicate measured normalised probe absorption while the black line shows a sinusoidal fit to the data. The fringe offset from 0 Hz detuning corresponds to the frequency shift expected from the second-order Zeeman shift at a 100 mG bias field. |
2310.10648 | Bridging the Novice-Expert Gap via Models of Decision-Making: A Case
Study on Remediating Math Mistakes | Scaling high-quality tutoring remains a major challenge in education. Due to
growing demand, many platforms employ novice tutors who, unlike experienced
educators, struggle to address student mistakes and thus fail to seize prime
learning opportunities. Our work explores the potential of large language
models (LLMs) to close the novice-expert knowledge gap in remediating math
mistakes. We contribute Bridge, a method that uses cognitive task analysis to
translate an expert's latent thought process into a decision-making model for
remediation. This involves an expert identifying (A) the student's error, (B) a
remediation strategy, and (C) their intention before generating a response. We
construct a dataset of 700 real tutoring conversations, annotated by experts
with their decisions. We evaluate state-of-the-art LLMs on our dataset and find
that the expert's decision-making model is critical for LLMs to close the gap:
responses from GPT4 with expert decisions (e.g., "simplify the problem") are
+76% more preferred than without. Additionally, context-sensitive decisions are
critical to closing pedagogical gaps: random decisions decrease GPT4's response
quality by -97% than expert decisions. Our work shows the potential of
embedding expert thought processes in LLM generations to enhance their
capability to bridge novice-expert knowledge gaps. Our dataset and code can be
found at: \url{https://github.com/rosewang2008/bridge}. | Rose E. Wang, Qingyang Zhang, Carly Robinson, Susanna Loeb, Dorottya Demszky | 2023-10-16T17:59:50Z | http://arxiv.org/abs/2310.10648v3 | # Step-by-Step Remediation of Students' Mathematical Mistakes
###### Abstract
Scaling high-quality tutoring is a major challenge in education. Because of the growing demand, many platforms employ novice tutors who, unlike professional educators, struggle to effectively address student mistakes and thus fail to seize prime learning opportunities for students. In this paper, we explore the potential for large language models (LLMs) to assist math tutors in remediating student mistakes. We present ReMath, a benchmark co-developed with experienced math teachers that deconstructs their thought process for remediation. The benchmark consists of three step-by-step tasks: (1) infer the type of student error, (2) determine the strategy to address the error, and (3) generate a response that incorporates that information. We evaluate the performance of state-of-the-art instruct-tuned and dialog models on ReMath. Our findings suggest that although models consistently improve upon original tutor responses, we cannot rely on models alone to remediate mistakes. Providing models with the error type (e.g., the student is guessing) and strategy (e.g., simplify the problem) leads to a 75% improvement in the response quality over models without that information. Nonetheless, despite the improvement, the quality of the best model's responses still falls short of experienced math teachers. Our work sheds light on the potential and limitations of using current LLMs to provide high-quality learning experiences for both tutors and students at scale. Our work is open-sourced at this link: [https://github.com/rosewang2008/remath](https://github.com/rosewang2008/remath).
## 1 Introduction
_If you can both listen to children and accept their answers not as things to just be judged right or wrong but as pieces of information which may reveal what the child is thinking, you will have taken a giant step toward becoming a master teacher rather than merely a disseminator of information._
Human tutoring plays a critical role in accelerating student learning, and is one of the primary ways to combat recent pandemic-related learning losses (Fryer Jr and Howard-Noveck, 2020; Nickow et al., 2020; Robinson and Loeb, 2021; of Education, 2021; Accelerator, 2022). To accommodate the growing demand for tutoring, many tutoring providers engage novice tutors. While novice tutors may possess the domain knowledge, they often lack the specialized training of professional educators in interacting with students. However, research suggests that, with proper training, many people can serve as effective tutors--not just trained educators (Nickow et al., 2020).
Responding to student mistakes in real-time is a critical area of tutoring where novice tutors tend to struggle. Mistakes are prime learning opportunities to address misconceptions (Boaler, 2013), but responding to them effectively requires expertise in pedagogical techniques to probe students' thinking (Shaughnessy et al., 2021). The importance of fostering positive educator-student relationships amplifies this challenge. Prior research has shown that positive educator-student relationships improve student outcomes (Pianta, 2016; Roorda et al., 2011). The way that educators respond to student errors can shape how students perceive themselves, which subsequently impacts their engagement in the learning process (Robinson, 2022). Therefore, effective tutors not only provide _useful_ responses, but also _caring_ responses to remediate student mistakes.
Experienced educators could provide high-quality feedback to novice tutors. However, hiring experienced educators to provide timely feedback is resource-intensive (Kraft et al., 2018; Kelly et al., 2020). A potential solution is the use of automated tutors (Graesser et al., 2004). With recent advances in large language models (LLMs), such as ChatGPT or GPT-4, this approach has gained even more interest (Khan Academy, 2023). However, even though LLMs have the potential to address the _scalability_ challenge faced by tutoring platforms, their ability to effectively remediate student mistakes is yet to be evaluated. Prior work suggests that they have several shortcomings, including lacking reliable subject knowledge [14] and pedagogical expertise [20]. They also suffer from hallucination, generating text that is factually incorrect [11].
To address these challenges, our work makes the following key contributions:
1. We collaborate closely with experienced teachers to develop the ReMath framework. ReMath breaks down the process of remediating student mistakes into three tasks: **Task A**: Infer the type of student error, **Task B**: Determine the strategy and intention behind the strategy for remediation, and **Task C**: Generate a response to the student's mistake.
2. We contribute a dataset of original tutor responses and expert teacher annotations using ReMath. Our dataset consists of tutor-student interactions from chat-based tutoring sessions conducted with 1st through 5th grade students in Title I schools that predominantly serve low-income students of color. Each example is annotated by two expert teachers.
3. Using our framework and dataset, we conduct a thorough evaluation, combining automated and human approaches, of LLM and teacher responses to student mistakes. To our knowledge, our work is the first to assess the performance of LLMs--such as GPT-4 and open-sourced models--on remediating student mistakes. We find that models' responses improve significantly over the original tutors' responses, but they still fall short of responses written by experienced teachers. Providing the model with the error type and strategy improves response quality, close to that of the expert teachers. This suggests that a promising approach lies in combining the strengths of LLMs with teacher expertise to scale and improve tutoring effectiveness.
## 2 Related Work
### Responding to Student Mistakes in Mathematics
The ability of teachers to identify student errors is essential for effectively adjusting instruction and remediating student misunderstanding. Research emphasizes the importance of recognizing misconceptions to facilitate meaningful learning and long-term retention [14, 15, 16, 17, 18]. In the context of mathematics instruction, attention to the mathematical details in children's strategies provides valuable insights into their understanding [12, 13, 14, 15]. They also allow the teacher to be intentional in remediating the student's mistakes. Prior education research discusses multiple good practices in remediating student mistakes. These include offering visual aids [1], adopting a Socratic approach [14], and eliciting student thinking through questions [16]. Effective instructional strategies coincide with strong teacher-student relationships, which significantly contribute to instructional effectiveness, student motivation and student engagement (Wentzel, 1997; Pianta et al., 2003; Robinson, 2022; Wentzel, 2022).
Figure 1: Illustration of the ReMath benchmark. The left-hand side shows the original tutor-student conversation on the lesson topic, rounding. The last message is the student's mistake that should be remediated. The right-hand side shows the original tutor's response and the math teacher's step-by-step remediation annotations. The benchmark tasks are Task A: the inferred error that the student makes (e), Task B: the strategy and desired outcome of the strategy (z), and Task C: the remediation response (\(c_{r}\)).
### Automated Feedback for Educators
Recent advances in NLP have produced tools that provide teachers with feedback on their classroom discourse and have been shown to be beneficial and cost-effective (Samei et al., 2014; Donnelly et al., 2017; Kelly et al., 2018; Jensen et al., 2020; Demszky and Liu, 2023; Wang and Demszky, 2023; Demszky et al., 2023). For example, Suresh et al. (2021) provides feedback to teachers on their teaching moves, such as how frequently the teacher asks students to reason aloud. Jacobs et al. (2022) provides evidence that K-12 math teachers receive this kind of feedback positively. The development of LLMs such as GPT-4 has rekindled excitement around autotutors in providing equitable access to high-quality education (Graesser et al., 2004; Rus et al., 2013; Litman, 2016; Hobert and Meyer von Wolff, 2019; OpenAI, 2023; Khan Academy, 2023). However, these models are known to unreliably solve math problems (Frieder et al., 2023) and, more broadly, hallucinate information (Ji et al., 2023). Having a human tutor in the loop is important for catching these undesirable responses and preventing them from being disseminated to students. We explore the potential for a collaborative approach where a model and a human tutor could together provide students with effective guidance, thereby overcoming the limitations of either and ensuring a high-quality learning experience.
## 3 The ReMath Benchmark
This section discusses the ReMath benchmark which focuses on the step-by-step remediation process of expert teachers. The benchmark has three core tasks: **Task A**: Infer the type of student error, **Task B**: Determine a response strategy and intention of that strategy, and **Task C**: Generate a response that incorporates the information. Figure 1 illustrates ReMath. The following subsections describe the categories under each task; due to space constraints, we provide examples of each category in Appendix B. The framework for ReMath emerged from an extensive co-development with math teachers, spanning several months of collaboration. We developed it with the intention that the framework is comprehensive, intuitive, and aligned with the process that teachers actually follow. For more details on the development process, please refer to Appendix A.
### Task A: Infer the Type of Student Error
Identifying the student's errors is a prerequisite to successful remediation (Easley and Zwoyer, 1975; Bamberger et al., 2010). Task A requires teachers to infer the most likely cause of the mistake from context. Prior research--particularly in math teacher education--has often focused on topic-specific categories of misconceptions, such as Bamberger et al. (2010). Our approach intends to support tutors who are not necessarily content experts; therefore, we instead define "error" as a student's degree of understanding. This definition aligns with literature on the hierarchical design of mathematics curricula (Gagne, 1962, 1968; White, 1973; Resnick et al., 1973) and psychometrics, including constructs like the Zone of Proximal Development and item-response theory, which have continuous scales of mastery and use student responses to update the inferred level of student understanding (Glaser and Nitko, 1970; Vygotsky and Cole, 1978; Wertsch, 1985; Embretson and Reise, 2013). As such, our error categories are topic-agnostic descriptions of a student's understanding, and complement the topic-agnostic strategies in Task B.
**guess: The student does not seem to understand or guessed the answer.** This error type is characterized by expressions of uncertainty or answers unrelated to the problem or target answer.
**misinterpret: The student misinterpreted the question.** This error type is characterized by answers that arise from a misunderstanding of the question. Students may mistakenly address a subtly different question, leading to an incorrect response. One example is reversing numbers, e.g., interpreting "2 divided by 6" as "6 divided by 2."
**careless: The student made a careless mistake.** This error type is characterized by answers that appear to utilize the correct mathematical operation but contain a small numerical mistake, resulting in an answer that is slightly off.
**right-idea: Student has the right idea, but is not quite there.** This error type is characterized by situations where the student demonstrates a general understanding of the underlying concept but falls short of reaching the correct solution. For example, a student with a right-idea error may recognize that multiplication is required to compute areas but may struggle with applying it to a specific problem.
**imprecise: Student's answer is not precise enough or the tutor is being too picky about the form of the student's answer.** This error type is characterized by answers that lack the necessary level of precision, or by cases where the tutor places excessive emphasis on the form of the student's response.
**not-sure: Not sure, but I'm going to try to diagnose the student.** This option is used if the teacher is not sure why the student made the mistake from the context provided. We encourage the teachers to use this error type sparingly.
**N/A: None of the above, I have a different description.** This option is used when none of the other options reflect the error type. Similar to not-sure, we encourage teachers to use this error type sparingly.
### Task B: Determine a Response Strategy and Intention of the Strategy
Student errors are usually persistent unless the teacher intervenes pedagogically [1]. This task involves selecting a course of action that guides the student towards improving their understanding. It also involves specifying the intention.
**Strategies:** Explain a concept, Ask a question, Provide a hint, Provide a strategy, Provide a worked example, Provide a minor correction, Provide a similar problem, Simplify the question, Affirm the correct answer, Encourage the student, Other.
**Intentions:** Motivate the student, Get the student to elaborate their answer, Correct the student's mistake, Hint at the student's mistake, Clarify a student's misunderstanding, Help the student understand the lesson topic or solution strategy, Diagnose the student's mistake, Support the student in their thinking or problem-solving, Explain the student's mistake (e.g., what is wrong in their answer or why is it incorrect), Signal to the student that they have solved or not solved the problem, Other.
### Task C: Generate the Response
Once the student error has been identified and a response strategy has been determined, the next task is to generate a suitable response. We instruct teachers to respond in a useful and caring way. Experienced educators possess the instructional expertise to generate responses that are tailored to the individual student's needs (e.g., their error type) and age group. This is important as the students from this tutoring program are elementary school students, who require different pedagogical strategies than older students [1].
## 4 ReMath Formalism
This section presents the formalism for the ReMath benchmark. Given a conversation history \(c_{h}\) that contains evidence of a student's mistake, the ultimate goal is to generate a remediation response \(c_{r}\) that is useful and caring. The motivation behind ReMath is that experienced educators infer the student's error \(e\) and determine their response strategy and intention \(z\) before generating their final response. We model their response as \(c_{r}^{*}\):
\[c_{r}^{*}\sim\underbrace{p(c_{r}|c_{h},\underbrace{e}_{\text{Task A}}, \underbrace{z}_{\text{Task B}})}_{\text{Task C}}\]
In our dataset, a single annotation tuple is \((c_{h},e,z,c_{r}^{*})\). Each tuple contains the conversation history \(c_{h}\) which includes the lesson topic and the last 5 conversation messages leading up to the student's turn where the mistake is made; in other words \(c_{h}[-1]\) is the student's conversation turn where they make a mistake. Finally, the tuple contains the remediation annotations from the ReMath benchmark.
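To make the formalism concrete, one possible in-code representation of a single annotation tuple \((c_{h},e,z,c_{r}^{*})\) is sketched below; the field names, label strings, and the toy conversation are illustrative choices rather than the dataset's actual schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RemediationExample:
    """One annotated example: history c_h, error e, strategy/intention z, response c_r*."""
    lesson_topic: str
    history: List[str]     # last few messages; history[-1] is the student's mistake
    error_type: str        # Task A label, e.g. "guess" or "careless"
    strategy: str          # Task B strategy, e.g. "Provide a hint"
    intention: str         # Task B intention, e.g. "Hint at the student's mistake"
    teacher_response: str  # Task C remediation response c_r*

# Toy example (invented for illustration; not an item from the dataset).
example = RemediationExample(
    lesson_topic="Rounding",
    history=["[TUTOR]: What is 47 rounded to the nearest ten?", "[STUDENT]: 40"],
    error_type="right-idea",
    strategy="Provide a hint",
    intention="Hint at the student's mistake",
    teacher_response="You're close! Look at the ones digit: is 7 closer to 0 or to 10?",
)
```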
## 5 Dataset
**Data source.** Our data is sourced from a tutoring provider that offers end-to-end services for districts, including the tutoring platform, instructional materials, and tutors. The conversations used in ReMath were collected as part of their work in a Southern district in the United States that serves over 30k students. The students in these tutoring sessions are in the first to fifth grade, learning a variety of topics in math. The majority of schools are classified as Title I and three-quarters of students identify as Hispanic/Latinx. This district was focused on addressing existing achievement gaps among their students, as well as responding to the learning disruptions caused by the pandemic. To support the district's goals, the tutoring provider delivered 1:1
virtual tutoring to students at least twice a week, in-school over the course of the 2021-22 school year. The tutoring interactions are text-based, integrated on the providers' online platform. The platform has several features, including a whiteboard and pre-defined problems. The tutor communicates primarily through text message in a chat box, while the student uses either voice recording or the chat.
**Preprocessing.** The chat transcripts are de-identified by the tutoring provider. The student's name is replaced with [STUDENT] and the tutor's name is replaced with [TUTOR]. ReMath uses excerpts from the original tutoring chat sessions, where the tutor responds to a mistake. Tutors on this platform use templated responses to flag mistakes, such as "That is incorrect" or "Good try." We leverage these templates to create a set of signalling expressions used by the tutor to identify excerpts. Specifically, we search for a three-turn conversation pattern where (1) the tutor sends a message containing a question mark "?", (2) the student responds via text, then (3) the tutor uses a signalling expression. The set of signalling expressions was validated on a random sample of 100 conversations to ensure complete coverage. Appendix C includes the full set of signalling expressions we use.
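A sketch of this three-turn pattern match is shown below. The first two signalling expressions are the examples quoted above; the rest of the validated set lives in the paper's Appendix C, so the list here is incomplete and the extra entry is an assumption.

```python
import re

# Signalling expressions: the first two come from the examples quoted above;
# "not quite" is a placeholder, since the full validated set is in Appendix C.
SIGNALLING = [r"that is incorrect", r"good try", r"not quite"]
SIGNAL_RE = re.compile("|".join(SIGNALLING), flags=re.IGNORECASE)

def find_mistake_excerpts(messages):
    """Yield (tutor_question, student_reply, tutor_flag) triples.

    `messages` is a list of (speaker, text) pairs in conversation order. A triple
    qualifies when the tutor asks a question, the student replies, and the
    tutor's next message contains a signalling expression flagging a mistake.
    """
    for i in range(len(messages) - 2):
        (s1, t1), (s2, t2), (s3, t3) = messages[i:i + 3]
        if (s1 == "tutor" and "?" in t1 and s2 == "student"
                and s3 == "tutor" and SIGNAL_RE.search(t3)):
            yield (t1, t2, t3)
```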
**Teacher annotation.** We work closely with four math teachers from diverse demographics in terms of gender (3 female, 1 male) and race (Asian, Black/African American, White/Caucasian, Multiracial/Biracial). Three have more than 8 years of teaching experience, and the other has 6 years of teaching experience. They also have taught in a broad range of school settings including public schools, Title 1 schools, and charter schools. We work with two of the teachers in developing the framework and compensate them at $50/hour. We work with all four teachers in annotating the dataset, and compensate them at $40/hour.
Appendix A discusses the quality checks and onboarding process conducted prior to annotation. We randomly sampled 350 unique excerpts for annotation and assigned each to two teachers. Each teacher annotated whether the conversation has enough context about the problem and then added respective annotations for each of the three tasks.
**Dataset statistics.** The final dataset contains 700 items, which we split into train, validation, and test sets with a 6:1:3 ratio. Each conversation excerpt contains 4 conversation messages. The train set contains 420 items, the validation set 70, and the test set 210.
## 6 Experiments
### Models
We compare the teacher-written ReMath responses to four models. We fine-tune an instruction-tuned model Flan-T5 (large) (Chung et al., 2022). We also fine-tune a goal-directed dialog model GODEL (large) (Peng et al., 2022) because our data involves dialog-based interactions. Both models are fine-tuned on the training dataset using the teacher-generated responses, and not the original tutor responses. Appendix D contains details on the models and finetuning setup. We also ran GODEL and Flan-T5 zero-shot, however the generations were very poor upon manual inspection and have been omitted from the paper. We additionally compare against gpt-3.5-turbo2 and gpt-4 that have been optimized for chat. Appendix E contains the prompts used for gpt-3.5-turbo and gpt-4 on the tasks. We use greedy decoding for all models.
Footnote 2: We use the gpt-3.5-turbo-0301 model, and not the recently updated gpt-3.5-turbo-0613 version.
### Task Setup and Ablations
**Task A.** We prompt the models to predict the student's error type from prior context \(c_{h}\): \(\arg\max_{e}p(e|c_{h})\).
**Task B.** We prompt the models to predict what strategy and intention to use from the context: \(\arg\max_{z}p(z|c_{h})\). Although there are many ways to predict \(z\) from context--for example, \(z\) could be predicted from \(e\) and \(c_{h}\)--our experiments focus on the setting where the strategy and intention are determined from \(c_{h}\).
**Task C.** We prompt the models to generate a remediation response based on prior context, the error type and strategy: \(c_{r}\sim p(c_{r}|c_{h},e,z)\). We refer to the above as _complete-remediation generation_. In our evaluations, we provide the models with the teachers' error and strategy annotations. There are alternative comparisons such as prompting each model to generate their own error and strategy, then decoding their response from these predictions. We leave this extension for future work. We additionally run ablations to determine the importance of providing the error type and strategy. We run an _unconstrained generation_ where the response is generated conditioned
only on the context (\(c_{r}\sim p(c_{r}|c_{h})\)), an _error-constrained generation_ conditioned on the error type (\(c_{r}\sim p(c_{r}|c_{h},e)\)) and a _strategy-constrained generation_ conditioned on the strategy and intention annotation \(c_{r}\sim p(c_{r}|c_{h},z)\). Appendix E contains the prompts used for these settings.
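The four settings differ only in which annotations are appended to the conversation context before prompting. The sketch below shows one way such prompts could be assembled; the exact wording used in the paper is given in its Appendix E, so these template strings are illustrative only.

```python
def build_prompt(history, error=None, strategy=None, intention=None):
    """Assemble a remediation prompt. Which keyword arguments are supplied selects
    the ablation: unconstrained (none), error-constrained (error only),
    strategy-constrained (strategy + intention), or complete (all three)."""
    lines = ["You are a math tutor. The student's last message contains a mistake.",
             "Conversation so far:"]
    lines += [f"  {speaker}: {text}" for speaker, text in history]
    if error is not None:
        lines.append(f"The student's error type is: {error}.")
    if strategy is not None:
        lines.append(f"Respond using this strategy: {strategy}.")
    if intention is not None:
        lines.append(f"The intention behind the strategy is: {intention}.")
    lines.append("Write a useful and caring response that remediates the mistake:")
    return "\n".join(lines)
```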
### Evaluation Setup
For both Task A and Task B, we want to measure the similarity between human and model annotations. To this end, we report the inter-rater reliability (IRR) as measured by Cohen's kappa and the percentage of exact label matches between the human and the model. We are also interested in whether the models can identify a diverse set of errors, as the teachers did during the development of the ReMath benchmark; for this, we report entropy and the annotation percentage per category.
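The agreement and diversity measures described here are standard and can be computed directly; a minimal sketch using scikit-learn and scipy, assuming the human and model labels are given as parallel lists, is shown below.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.metrics import cohen_kappa_score

def agreement_metrics(human_labels, model_labels, label_set):
    """Exact-match rate, Cohen's kappa, and label-distribution entropy for one annotator."""
    human = np.asarray(human_labels)
    model = np.asarray(model_labels)
    match = float(np.mean(human == model))  # fraction of exact label matches
    kappa = cohen_kappa_score(human, model, labels=label_set)
    counts = np.array([np.sum(model == lab) for lab in label_set], dtype=float)
    label_entropy = float(entropy(counts / counts.sum())) if counts.sum() else 0.0
    return match, kappa, label_entropy
```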
For Task C, we measure the extent to which the generated responses improve over the original tutors' responses. We recruit new teachers on Prolific (identified through Prolific's screening criteria) to perform pairwise comparisons between the original tutor response and a response generated by the teacher or one of the 16 models. A random set of 40 pairs per model is evaluated by 3 annotators each, who are blind to the source of the responses.
Raters evaluate the pairs along four dimensions. The first two items are _usefulness_ and _care_, as these have been identified as key qualities of effective remediation in prior work (Roorda et al., 2011; Pianta, 2016; Robinson, 2022). The third item is _human-soundingness_; we discovered in a preliminary analysis of the data that low learning outcomes strongly correlated with the student being distracted by the question of whether their tutor was a robot during their tutoring session. Given that the tutoring is chat-based, we include this as another dimension for measuring the effectiveness of responses. Each dimension is rated on a 5-point Likert scale with the following choices: Response A is much more <dimension> (caring/useful/human-sounding), Response A is somewhat more <dimension>, Response A and B are equally <dimension>, Response B is much more <dimension>, Response B is somewhat more <dimension>, where Response A represents the original tutor response and Response B represents the response by the teacher or the models. Finally, we ask the teachers to rate which responses they would _prefer_ using, if they were the tutor. We use a similar 5-point Likert scale as the dimensions above indicating preference. We convert all Likert scale responses to integers between -2 and 2 for analysis. We additionally run automated metrics to compare the responses; however, we found these metrics not to be as insightful into the response quality and include them in Appendix G.
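One small implementation detail worth spelling out is the Likert-to-integer conversion used for analysis; the sketch below paraphrases the rating labels and assumes positive values favour the teacher- or model-written response (Response B), consistent with how Table 3 is read.

```python
# Map pairwise Likert choices onto -2..2; positive values favour Response B
# (the teacher- or model-written response) over the original tutor response.
LIKERT_TO_INT = {
    "A is much more": -2,
    "A is somewhat more": -1,
    "A and B are equally": 0,
    "B is somewhat more": 1,
    "B is much more": 2,
}

def mean_rating(choices):
    """Average a list of Likert choices for one dimension across raters and items."""
    scores = [LIKERT_TO_INT[c] for c in choices]
    return sum(scores) / len(scores)
```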
## 7 Results
**Task A: Inference of error type.** Table 1 summarizes the results. The teachers and the models all commonly annotate the student's error as guess. gpt-4 identifies the most diverse error categories and maintains the highest agreement with teachers out of all the models. However, the relatively higher human IRR (\(0.38\)) and lower human-model IRR (\(0.04-0.23\)) indicate that there are still settings where the teachers and models disagree. Additionally, we find that the other models generally exhibit low diversity in selecting error type (rf. "entropy" column). The fine-tuned models potentially exhibit this behavior because the distribution of teacher-annotated errors is already skewed. We hypothesize that gpt-3.5-turbo exhibits low diversity because some categories like right-idea require knowledge of younger students' error traces, which may not be common in the model's training data.
We found that the human IRR being only fair (\(0.38\)) can mostly be attributed to some conversations not providing enough context about the conversation or the problem; for example, the tutor might flag a student's multiple-choice answer as incorrect, but the conversation does not show which options are available because the problem is presented on their shared whiteboard screen. Nonetheless, we assume that even if those problems were available in the chat, the relative ordering of the IRR values would remain the same.
**Task B: Identifying the strategy and intention.** Two key observations can be made from Table 2. One, humans have lower agreement with each other on the strategies on this task compared to Task A. From our discussions with the teachers, the low agreement is due to them taking different strategies to remediate the same type of error. Employing different strategies is vital because it provides the student with multiple access points to understanding the content (Rose and Strangman, 2007; Glass et al., 2013; CAST, 2018).
This leads to our second observation, which is that while teachers use a diversity of strategies, the models do not, as indicated by the models' lower entropy scores. Appendix G complements these observations with additional results on the strategy-intention pairings. These highlight how humans pick different strategies for the same intention (and vice versa), whereas models do not.
**Task C: Generate response.** Table 3 summarizes the results. Notably, models consistently outperform the original tutor response (indicated by positive values) on all dimensions, with the exception of strategy-constrained Flan-T5 being worse on all dimensions and gpt-3.5-turbo being worse on care. The teacher-written responses are the most highly rated on all dimensions except, surprisingly, on human-soundingness.
The best model across all settings is gpt-4, and it benefits most from teacher-annotations. The ratings for gpt-4 from the _unconstrained_ to _strategy-constrained_ setting nearly double across all the dimensions except care. Its care rating only improves when error information is added. Figure 2 shows responses from gpt-4 that illustrate the diversity in remediation strategies. In the unconstrained setting, gpt-4 directly corrects the student, while the other models utilize different approaches to prompt the student to try again. The error-constrained gpt-4 provides a solution strategy tailored to the specific problem, while the strategy-constrained gpt-4 abstracts the details of applying the strategy. The complete-constrained gpt-4 model combines both approaches, offering a comprehensive but long remediation response. These results highlight the challenge for models in generating simultaneously useful and caring responses to student mistakes. Appendix G includes additional qualitative examples.
Unlike gpt-4, gpt-3.5-turbo's performance suffers with strategy information alone, but benefits from having either the error provided or both the error and strategy (rf. "overall"). We found that the model's _strategy-constrained_ generations tend to interpret the provided strategies in a corrective manner (e.g., immediately telling the correct answer), whereas the _error-constrained_ generations propose alternative approaches (e.g., "try out this solution approach"). Table 5 illustrates this: the _strategy-constrained_ response provides the worked-out solution to the problem as the strategy annotation suggests, whereas the _error-constrained_ response suggests that the student draw out a number line and count down.
\end{table}
Table 1: Values show percentage of examples annotated with a given error type. “(f)” denotes the finetuned models. The most frequent error type for each model is **bolded**. The “match” column reports the \(\%\) of items with an exact match with the human teacher label. IRR computes the Cohen kappa between the model and the human teachers (values are computed separately for each annotator then averaged across annotators).
\begin{table}
\end{table}
Table 2: Annotation results for the _strategy_ (a) and _intention_ (b). The results include the percentage of examples annotated with the given label, entropy of the label distribution, and agreement with human annotations (“match” and “IRR” columns, same as Table 1). “(f)” denotes the finetuned models. The most frequent label within a model is **bolded**. The Intention columns use a shortform for error (E), student’s understanding (U), and answer (A).
answer) whereas the _error-constrained_ generations propose alternative approaches (e.g., "try out this solution approach"). Table 5 illustrates this: the _strategy-constrained_ response provides the worked out solution to the problem as the strategy annotation suggests, whereas the _error-constrained_ response suggests the student to draw out a number line and count down.
The models fine-tuned on the expert teachers' data--Flan-T5 and GODEL--generally perform the best without constraints. These results suggest that Flan-T5 and GODEL are not good at generalizing with constraints. This may be due to the imbalance of student error types and strategies in the training data, and the models' pre-training data. Human evaluators of Flan-T5 and GODEL responses frequently commented that the tutor responses were almost always factually incorrect. For example, when suggesting a worked-out problem, _strategy-constrained_ Flan-T5 generates a response like "Great try! Let's try another problem. Mike has 4 cookies and eats 3 cookies, so he has 4 - 3 = 2 cookies left." This contributes to the lower scores on the constrained generation setting.
## 8 Discussion & Conclusion
Our work focuses on remediation of student mistakes because mistakes are prime learning opportunities to correct misunderstandings that hinder learning progress. We present several contributions for understanding and performing remediation of student mistakes. First, we develop the ReMath framework, which sheds light on the thought process of experienced teachers for addressing student mistakes. This framework provides concrete tasks and evaluations for measuring the effectiveness of remediation responses. Second, we contribute a dataset that contains rich annotations from experienced math teachers, including the type of student error and the strategies the teachers would use to address the student's mistake. The dataset comes from a tutoring program working with a majority of Title I schools. We hope that it can serve as a valuable resource for providing equitable, high-quality learning experiences. Finally, we perform a thorough evaluation of responses from the experienced math teachers, instruct-tuned and dialog models on the ReMath tasks. We demonstrate that LLMs alone struggle to accurately infer student errors and generate useful, caring responses. However, when combined with the expert teacher annotations, the quality of LLM-generated responses significantly improves.
Our results indicate two promising avenues for scaling the remediation process. One approach is to directly prompt the model and ask human tutors to adapt the model's response such that it is appropriate for the student. Another approach is to prompt the tutor to identify the error and select a strategy from our predefined list, which can then be fed into the model and edited by the tutor for further improvements. Our work shows promising results that indicate that either approach can lead to significant improvements over the tutor's response alone.
Figure 2: An example of how the tutor, a math teacher and gpt-4 models remediate this student's mistake. The error used here is guess, and the strategy and intention are to "Provide a solution strategy" and "Help student understand the lesson topic or solution strategy."

| Setting | Method | Prefer | Useful | Care | Not Robot | Overall |
| --- | --- | --- | --- | --- | --- | --- |
| | Human | **1.26** | **1.19** | **0.86** | 0.78 | **1.02** |
| Unconstrained | Flan-T5 | 0.38 | 0.38 | 0.56 | 0.46 | 0.45 |
| Unconstrained | GODEL | 0.51 | 0.47 | 0.38 | 0.39 | 0.44 |
| Unconstrained | GPT-3.5 | 0.46 | 0.45 | -0.04 | 0.22 | 0.27 |
| Unconstrained | GPT-4 | 0.54 | 0.54 | 0.50 | 0.47 | 0.51 |
| Error-constrained | Flan-T5 | 0.17 | 0.17 | 0.17 | 0.10 | 0.15 |
| Error-constrained | GODEL | 0.23 | 0.24 | 0.26 | 0.40 | 0.28 |
| Error-constrained | GPT-3.5 | 0.41 | 0.44 | 0.14 | 0.17 | 0.29 |
| Error-constrained | GPT-4 | 0.88 | 0.64 | 0.79 | 0.83 | 0.79 |
| Strategy-constrained | Flan-T5 | -0.13 | -0.15 | -0.04 | -0.03 | -0.09 |
| Strategy-constrained | GODEL | 0.34 | 0.29 | 0.33 | 0.55 | 0.37 |
| Strategy-constrained | GPT-3.5 | 0.27 | 0.29 | -0.03 | 0 | 0.13 |
| Strategy-constrained | GPT-4 | 0.97 | **1.08** | 0.5 | **1.07** | **0.91** |
| Complete | Flan-T5 | -0.02 | 0.11 | 0.11 | 0.16 | 0.09 |
| Complete | GODEL | 0.38 | 0.23 | 0.45 | 0.88 | 0.48 |
| Complete | GPT-3.5 | 0.65 | 0.58 | -0.04 | 0.59 | 0.45 |
| Complete | GPT-4 | 0.95 | 0.97 | 0.7 | 0.7 | 0.83 |

Table 3: Human evaluations on remediation responses written by educators (Human row) and models. The educator-written responses are grayed as a reference. The bolded values are the highest values within that column. The yellow cells are the highest values amongst all the models.
## 9 Limitations and Future Work
While our work provides a useful starting point for remediating student mistakes, there are limitations to our work. Addressing these limitations will be an important area for future research.
**Access to questions.** In some cases, the chat transcripts do not include the question the tutor and the student are working on together. This is because the questions are sometimes displayed on a shared whiteboard, and not posted in the chat. Even though our dataset includes annotations for when there's not enough context, future work could improve upon our analysis by always including information about the question. For example, this may improve the IRR scores on Task A.
**Expanding to other subjects.** Our dataset and benchmark currently focus on mathematics. The taxonomy may not directly transfer to other subjects, although it may serve as a good starting point for remediating student mistakes in other domains.
**Evaluation with students.** Our human evaluations are currently limited to the teacher's perspective. However, ultimately, the effectiveness of the responses relies on how students receive and interpret them, and whether these interactions positively impact their learning outcomes. To address this limitation, future research should work towards evaluating this method with students. This is important as previous studies like Wentzel (2022) highlight the potential disparity between teachers and students in determining what responses are more caring or useful.
## Ethics Statement
We recognize that our research on the integration of large language models (LLMs) in education ventures into a less explored territory of NLP with numerous ethical considerations. LLMs open up new possibilities for enhancing the quality of human education; however, there are several ethical considerations we actively took into account while performing this work. We hope that these serve as guidelines for responsible practices, and hope that future work does the same.
First is the privacy of both students and tutors. We obtained approval from the tutoring program for repurposing the data into the ReMath dataset. We handled all data with strict confidentiality, adhering to best practices in data anonymization and storage security.
Furthermore, we are committed to promoting equity and inclusivity in education. The compensation provided to the experienced math teachers involved in our benchmarking process was set at a significantly higher rate, reflecting our recognition of their invaluable contributions and domain expertise. By compensating teachers fairly, we aim to foster a culture of respect, collaboration, and mutual support within the NLP and education community.
Finally, we are committed to the responsible use of our research findings. We encourage the adoption of our benchmark and methodologies by the research community, with the understanding that the ultimate goal is to improve educational outcomes for all students and provide support to educators. We actively promote transparency, openness, and collaboration to drive further advancements in the field of natural language processing (NLP) for education.
## Acknowledgements
We'd like to give a big thanks to Jiang Wu, Hannah Shuchhardt, and two anonymous individuals for their help and feedback on ReMath. We'd also like to thank the Stanford NLP group for their helpful feedback on the paper draft.
|