# Flow states of two dimensional active gels driven by external shear

Wan Luo, Aparna Baskaran, Robert A. Pelcovits, Thomas R. Powers

2023-07-24 | arXiv:2307.12910v1 | http://arxiv.org/abs/2307.12910v1
###### Abstract
Using a minimal hydrodynamic model, we theoretically and computationally study active gels in straight and annular two-dimensional channels subject to an externally imposed shear. The gels are isotropic in the absence of externally- or activity-driven shear, but have nematic order that increases with shear rate. Using the finite element method, we determine the possible flow states for a range of activities and shear rates. Linear stability analysis of an unconfined gel in a straight channel shows that an externally imposed shear flow can stabilize an extensile fluid that would be unstable to spontaneous flow in the absence of the shear flow, and destabilize a contractile fluid that would be stable against spontaneous flow in the absence of shear flow. These results are in rough agreement with the stability boundaries between the base shear flow state and the nonlinear flow states that we find numerically for a confined active gel. For extensile fluids, we find three kinds of nonlinear flow states in the range of parameters we study: unidirectional flows, oscillatory flows, and dancing flows. To highlight the activity-driven spontaneous component of the nonlinear flows, we characterize these states by the average volumetric flow rate and the wall stress. For contractile fluids, we only find the linear shear flow and a nonlinear unidirectional flow in the range of parameters that we studied. For large magnitudes of the activity, the unidirectional contractile flow develops a boundary layer. Our analysis of annular channels shows how curvature of the streamlines in the base flow affects the transitions among flow states.
## 1 Introduction
The defining property of an active fluid is that energy is added to the system at the small length scales of the particles that make up the fluid, instead of at the large length scales of the bounding walls or inlets of the system. [1] Commonly studied examples include cytoplasm [2] or its reconstituted components, [3, 4, 5] collections of swimming microorganisms, [6, 7, 8] and model two-dimensional layers of cells. [9] The interplay of the energy injected at small scales and the interactions among the constituent particles leads to nonequilibrium collective behavior, including spontaneous coherent flows, [10, 11, 12] sustained oscillations, [1, 13] active turbulence, [14, 15, 16] and two-dimensional [4] or three-dimensional [17, 18] topological defects in active liquid crystalline fluids. These phenomena suggest that active fluids may be used for novel microfluidics applications, including fluids that pump themselves or mix themselves. Since these applications require a degree of control over active fluids, recent investigations have studied how confinement of active fluids affects flows and the formation of defects. [19, 20, 21, 22] In this paper, we build on these investigations by studying the flow states of an active gel in a channel with moving boundaries to see how an imposed shear affects the possible flow states and the transitions among them.
By 'active gel' we mean a model liquid crystal which tends to the isotropic phase away from boundaries with strong anchoring conditions and in the absence of shear flow. The motionless, isotropic state of an unbounded two-dimensional active gel is unstable to spontaneous flow and nematic ordering above a critical activity. [23, 24] Recent numerical calculations have identified the spontaneous flow states in straight three-dimensional [25, 26] and two-dimensional channels [25, 13, 27] with stationary walls. In a two-dimensional channel with no-torque anchoring conditions at the walls, the critical activity for spontaneous flow increases as the channel width decreases. [25] Thus, confinement is stabilizing, as has been found in other related situations. [28] For a given value of the activity parameter, new flow states emerge as the channel width increases, with the flow progressing through unidirectional, undulating (also known as 'oscillatory', [13]) and dancing flow states. [25, 29] A similar sequence of flow states is found for fixed channel width and increasing activity. [25]
Our work is motivated by the experimental observation that imposed shear can prevent [26] the spontaneous instability of a solution [12] of microtubule bundles and kinesin motors in the presence of the molecular fuel ATP. Instead of a motionless state, our base state is the state of simple shear in which the flow field is given by the solution to the Stokes equation for our straight or annular channel geometry. Working at fixed channel width, we find that increasing the activity leads to a sequence of flow states which are reminiscent of the ones seen in the case of no external shear, but with some important new elements. For example, the imposed shear rate can be stabilizing in the same sense that confinement is stabilizing: for an extensile active gel, we find that
the critical activity for the imposed simple shear flow to develop a spontaneous flow component increases with the imposed shear rate. A similar result was established using linear stability analysis of a polar system by Muhuri, Rao, and Ramaswamy.[30] Here we give a more systematic treatment of this problem for the apolar case, revealing that the imposed shear also leads to oscillatory behavior in the unstable modes. For a contractile active gel, we find that shear is _destabilizing_. Earlier work has also examined the rheology of active nematics and gels, showing that polar active particles have a nonmonotonic stress-strain relation at high activity,[31] and illuminating the nature of shear banding in apolar active gels.[32] Our work extends these investigations to the case of an annular channel, illustrating the role of the curvature of the streamlines of the base flow.
Our paper begins with a minimal hydrodynamic model for active gels. We then study the linear stability of an active gel in a straight channel subject to a uniform shear flow imposed by a moving plate. In the stable region, the linear rheology, orientational order, and the shear stress exerted by the active fluids on the moving boundary are analytically calculated for the state of uniform shear. Then we turn to the other flow states using the finite element method to characterize the flow transitions for the extensile and contractile fluids. Next, we turn to an annular channel and carry out similar analytical and numerical studies to assess the effects of the curvature of the boundaries.
## 2 Minimal hydrodynamic model
We use a well-studied continuum hydrodynamic model for nematic liquid crystals[33, 34] to describe apolar microtubules, adding a term corresponding to non-equilibrium active forces as was done in the "minimal" model used by Varghese et al.[25] In two dimensions, the orientational order of apolar active matter is described by a traceless, symmetric tensor--the tensor order parameter that is used in the theory of nematic liquid crystals--\(Q_{ij}=S(2n_{i}n_{j}-\delta_{ij})\), with \(i,j=x,y\).[35] The unit vector \(\mathbf{n}(\mathbf{x})\) is the director at position \(\mathbf{x}\) and the scalar order parameter \(S\) represents the degree of alignment. The equilibrium state of the microtubule bundles is governed by a Landau-Ginzburg free energy density,
\[\mathcal{F}=\frac{K}{2}\partial_{i}Q_{jk}\,\partial_{i}Q_{jk}+\frac{A}{2}Q_{ij}Q_{ij}+\frac{C}{4}\left(Q_{ij}Q_{ij}\right)^{2}, \tag{1}\]
where repeated indices are summed over. The single Frank elastic constant \(K\) penalizes gradients of \(Q_{ij}\). Since we focus on a low concentration isotropic phase, \(A\) will be positive to guarantee that the minimizing state is disordered. In two dimensions there is no term cubic in \(Q_{ij}\), and the isotropic-nematic transition is continuous. In the isotropic phase we consider in this paper, the term proportional to \(C\) can be neglected, as was done in previous studies of two-dimensional and three-dimensional channel flow.[25, 26]
A minimal hydrodynamic model for incompressible flow in two dimensions is given by[25]
\[0 = \nabla\cdot\mathbf{v}, \tag{2}\]
\[0 = -\boldsymbol{\nabla}p+\eta\nabla^{2}\mathbf{v}-a\nabla\cdot\mathbf{Q}, \tag{3}\]
\[0 = -\nu\left(\partial_{t}\mathbf{Q}+\mathbf{v}\cdot\nabla\mathbf{Q}+\mathbf{Q}\cdot\Omega-\Omega\cdot\mathbf{Q}\right)-A\mathbf{Q}+K\nabla^{2}\mathbf{Q}+2\lambda\nu\mathbf{E}, \tag{4}\]
where \(\eta\) is the shear viscosity, \(\nu\) is the rotational viscosity, \(p\) is pressure, \((\mathbf{v}\cdot\boldsymbol{\nabla}\mathbf{Q})_{ij}=v_{k}\partial_{k}Q_{ij}\), \(\mathbf{E}=(\nabla\mathbf{v}+(\nabla\mathbf{v})^{\mathrm{T}})/2\) is the strain rate tensor, \(\Omega=(\nabla\mathbf{v}-(\nabla\mathbf{v})^{\mathrm{T}})/2\) [i.e. \(\Omega_{ij}=(\partial_{j}v_{i}-\partial_{i}v_{j})/2\)] is the vorticity tensor, and \(a\) is the strength of the activity. A positive value of \(a\) corresponds to extensile particles, and a negative value of \(a\) corresponds to contractile particles. The shape parameter \(\lambda\) is positive for prolate particles and negative for oblate particles; \(\lambda=1\) corresponds to needle-like particles. Note that in three dimensions there will be additional nonlinear terms proportional to \(\lambda\) appearing in eqn (4).
We disregard inertial effects because the Reynolds number of the typical active flows we study is small. In this minimal hydrodynamic model, passive backflow effects are neglected and the order parameter field \(\mathbf{Q}\) only affects the flow through the active stress \(-a\mathbf{Q}\). The active time scale which results from the competition between viscosity and activity is given by \(\eta/|a|\). From the dynamical equation for \(\mathbf{Q}\), eqn (4), it is apparent that the relaxation time \(\tau\) for distortions away from the equilibrium isotropic state is \(\tau=\nu/A\). Likewise, \(\sqrt{K/A}\) is a correlation length for the liquid crystalline order, which we write in nondimensional form as \(\ell=\sqrt{K/A}/W\), where \(W\) is the width of the straight or annular channel. The factor \(\lambda\nu\) characterizes the flow birefringence of a passive (\(a=0\)) liquid crystal.[36] When weak shear \(\dot{\gamma}\ll 1/\tau\) is applied to a nematic liquid crystal in the isotropic state, the rods align such that \(A\mathbf{Q}\approx 2\lambda\nu\mathbf{E}\), which implies that the scalar order parameter is proportional to the shear rate: \(S\propto\dot{\gamma}\tau\).
## 3 Straight channel: start-up problem and linear stability analysis
Let us begin by reviewing the linear stability analysis of an unbounded two-dimensional active gel.[37] An isotropic (\(\mathbf{Q}=0\)), motionless (\(\mathbf{v}=0\)) gel is unstable to shear flow and nematic ordering when the effective shear viscosity (\(\eta_{\rm eff}\equiv\eta-a\lambda\tau\)) vanishes, which occurs for a critical activity \(a_{c}=\eta/(\lambda\tau)\).[23, 24] The form of the effective shear viscosity shows that extensile particles tend to reduce the shear viscosity, whereas contractile particles tend to increase it. In the unstable state of the unconfined geometry, the pattern of alignment of the bundles follows a sine wave, appearing like a bent filament, or like the nematic configuration of bend.[35]

Fig. 1: The base state for the flow field and tensor order parameter field \(Q_{ij}\) of an active gel in a straight channel with the bottom wall moving at a fixed speed \(\dot{\gamma}W\). The double-headed arrows correspond to the director field \(\mathbf{n}\) of the extensile apolar active bundles. The tensor order parameter field is uniform throughout the channel because the flow is uniform and because we impose Neumann boundary conditions on \(Q_{ij}\).
Next, let us consider an active gel confined to an infinite straight channel of width \(W\) and subject to a steady uniform shear flow \(\mathbf{v}_{0}=\dot{\gamma}(W-y)\mathbf{\hat{x}}\) as shown in Fig. 1. We assume no-slip boundary conditions on the channel walls for the velocity field, and Neumann conditions (\(\partial_{y}Q_{ij}=0\)), or "zero-torque conditions," for the order parameter field on the walls. Given the parallel planar channel walls and zero-torque boundary conditions, the nematic order parameter is uniform and divergenceless for the imposed uniform shear flow. In our hydrodynamic model, activity only appears in eqn (3), and thus, when activity is below the critical value for the instability, the order parameter field is unaffected by the activity.
Before considering the stability of simple shear flow, we solve the startup problem, assuming an initially stationary isotropic gel with activity below the critical value (to be deduced below). Since the Reynolds number is assumed to be small, the flow immediately assumes its steady-state value \(\mathbf{v}_{0}\). But the order parameter field attains its steady-state value only after a time comparable to the liquid crystal relaxation time \(\tau\).[38] Given the boundary conditions on the order parameter, we may assume that \(\mathsf{Q}\) is uniform in space. Since \(\mathsf{Q}\) is uniform, the divergence of the active stress vanishes and the flow remains simple shear as the order-parameter field evolves. The order parameter equations (eqn (4)) reduce to
\[\partial_{t}Q_{xx} = -\frac{1}{\tau}Q_{xx}-\dot{\gamma}Q_{xy}, \tag{5}\]
\[\partial_{t}Q_{xy} = \dot{\gamma}Q_{xx}-\frac{1}{\tau}Q_{xy}-\lambda\dot{\gamma}. \tag{6}\]
Assuming \(\mathsf{Q}(t=0)=0\), we find
\[Q_{xx} = Q_{xx}^{(0)}\left[1-\mathrm{e}^{-t/\tau}\cos\left(\dot{\gamma}t\right)\right]+Q_{xy}^{(0)}\mathrm{e}^{-t/\tau}\sin\left(\dot{\gamma}t\right), \tag{7}\]
\[Q_{xy} = Q_{xy}^{(0)}\left[1-\mathrm{e}^{-t/\tau}\cos\left(\dot{\gamma}t\right)\right]-Q_{xx}^{(0)}\mathrm{e}^{-t/\tau}\sin\left(\dot{\gamma}t\right), \tag{8}\]
where the steady-state order parameter tensor \(\mathsf{Q}_{0}\) is given by
\[Q_{xx}^{(0)} = \frac{\lambda\dot{\gamma}^{2}\tau^{2}}{1+\dot{\gamma}^{2}\tau^{2}}, \tag{9}\]
\[Q_{xy}^{(0)} = -\frac{\lambda\dot{\gamma}\tau}{1+\dot{\gamma}^{2}\tau^{2}}. \tag{10}\]
The order parameter rises to its steady state, with oscillations that become apparent when the shear rate is greater than the relaxation rate \(1/\tau\). These oscillations are reminiscent of the oscillations observed[39] in the apparent viscosity during the startup flow of 8CB, a director-tumbling nematogen.[40] In simple shear, the director of a tumbling nematic makes a complete revolution, like a rod undergoing a Jeffery orbit in shear flow.[40] In our case, as long as \(\tau\) is finite, the directors oscillate about their final steady state. Fig. 2 shows the director angle \(\phi=\arctan[Q_{xy}/(S+Q_{xx})]\) (measured counterclockwise from the \(x\)-axis) as a function of time.
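The startup dynamics are easy to verify numerically. The sketch below (Python with NumPy and SciPy, an assumed tooling choice independent of the finite element code used later) integrates eqns (5) and (6) for \(\lambda=1\) and checks the late-time order parameter and director angle against eqns (9)-(12).

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, tau = 1.0, 1.0        # shape parameter and relaxation time
gdot = 5.0                 # imposed shear rate, so gdot*tau = 5 as in Fig. 2

def rhs(t, Q):
    Qxx, Qxy = Q
    return [-Qxx/tau - gdot*Qxy,                 # eqn (5)
            gdot*Qxx - Qxy/tau - lam*gdot]       # eqn (6)

sol = solve_ivp(rhs, (0.0, 10.0*tau), [0.0, 0.0], max_step=0.01)
Qxx, Qxy = sol.y[0, -1], sol.y[1, -1]
S = np.hypot(Qxx, Qxy)                  # scalar order parameter
phi = np.arctan2(Qxy, S + Qxx)          # director angle, as plotted in Fig. 2

# steady-state predictions, eqns (9)-(12)
g = gdot*tau
phi0 = -np.arctan(1.0/(np.sqrt(1 + g**2) + g))
S0 = lam*g/np.sqrt(1 + g**2)
print(phi, phi0, S, S0)   # the pairs agree once t >> tau
```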
The steady-state scalar order parameter and the director angle are given by
\[S = \frac{\lambda\dot{\gamma}\tau}{\sqrt{1+\dot{\gamma}^{2}\tau^{2}}}, \tag{11}\]
\[\phi = -\arctan\left(\frac{1}{\sqrt{1+\dot{\gamma}^{2}\tau^{2}}+\dot{\gamma}\tau}\right). \tag{12}\]
Equations (11) and (12) show that in steady state, the flow aligns the nematic director at a nonzero angle with the horizontal streamlines, with a degree of order that increases with increasing shear rate. At low shear rates, \(\dot{\gamma}\tau\ll 1\), the bundles are oriented at an angle of \(\phi=-\pi/4\) with the streamlines, and the order is weak (\(S\ll 1\)). At high shear rates, the bundles tend to align parallel to the streamlines, and \(S\approx\lambda\). For needle-like particles, with \(\lambda\approx 1\), the order is strong in the limit of high shear rate. The shear stress on the moving plate in the stable region is
\[\sigma_{\rm W}=-\eta\dot{\gamma}-aQ_{xy}^{(0)}=\dot{\gamma}\left(-\eta+\frac{ a\lambda\tau}{1+\dot{\gamma}^{2}\tau^{2}}\right),\quad a<a_{c}. \tag{13}\]
From eqn (13), the wall shear stress varies linearly with the activity, but its dependence on the imposed shear rate is nonlinear when the activity is below the critical value.
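As a quick illustration of eqn (13), the wall stress normalized by the passive viscous stress \(-\eta\dot{\gamma}\) reduces to \(1-\alpha/(1+\dot{\gamma}^{2}\tau^{2})\) in terms of the dimensionless activity \(\alpha=a\lambda\tau/\eta\) introduced in Sec. 4; a minimal evaluation (Python, with illustrative parameter values):

```python
def normalized_wall_stress(alpha, g):
    """sigma_W / (-eta*gdot) from eqn (13); alpha = a*lam*tau/eta, g = gdot*tau."""
    return 1.0 - alpha/(1.0 + g**2)

for alpha in (0.5, -0.5):   # extensile lowers the stress, contractile raises it
    print(alpha, normalized_wall_stress(alpha, g=0.2))
```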
To analyze the stability of the base configuration with flow field \(\mathbf{v}_{0}\) and confinement \(W\), we consider a perturbation that is independent of \(x\), the coordinate along the channel axis. 1 Thus, \(\mathbf{v}=\mathbf{v}_{0}+\mathbf{v}_{1}\) and \(\mathsf{Q}=\mathsf{Q}_{0}+\mathsf{Q}_{1}\), where \(\mathbf{v}_{0}\) and \(\mathsf{Q}_{0}\) are the same as in the previous section,
Footnote 1: A more general assumption would be to suppose the perturbation depends on both \(x\) and \(y\), but here we forbid \(x\)-dependence to simplify the analysis. The more general analysis using pseudospectral methods will be reported elsewhere.
Fig. 2: Director angle \(\phi\) as a function of time for various shear rates for the startup problem of the liquid crystal order parameter in the case of steady simple shear. From top to bottom, the shear rates are \(\dot{\gamma}\tau=5\) (green curve), \(\dot{\gamma}\tau=1\) (gold curve), and \(\dot{\gamma}\tau=0.2\) (blue curve).
with the perturbations
\[\mathbf{v}_{1} = v_{x}\sin(n\pi y/W)\exp\left(\beta t\right)\hat{\mathbf{x}}, \tag{14}\]
\[\mathsf{Q}_{1} = \begin{pmatrix}\mathscr{Q}_{xx}&\mathscr{Q}_{xy}\\ \mathscr{Q}_{xy}&-\mathscr{Q}_{xx}\end{pmatrix}\cos(n\pi y/W)\exp\left(\beta t\right), \tag{15}\]
where \(v_{x}\), \(\mathscr{Q}_{xx}\), and \(\mathscr{Q}_{xy}\) are constants, \(n\) is a positive integer, and \(\beta\) is the growth rate of the perturbation. With these assumptions, the \(x\) component of the force equation eqn (3) implies
\[v_{x}=\frac{a\mathscr{Q}_{xy}W}{n\pi\eta}. \tag{16}\]
Using eqn (16) in the linearized equations for \(Q_{1}\) yields
\[\beta_{\pm} = -\frac{1}{\tau}\left(1+\frac{\pi^{2}Kn^{2}}{AW^{2}}\right)+\frac{\lambda a}{2\eta(1+\dot{\gamma}^{2}\tau^{2})}\pm\sqrt{\left[\frac{\lambda a}{2\eta(1+\dot{\gamma}^{2}\tau^{2})}\right]^{2}-\dot{\gamma}^{2}\left(1+\frac{\lambda a\tau/\eta}{1+\dot{\gamma}^{2}\tau^{2}}\right)}. \tag{17}\]
There are two modes. In the limit of a passive fluid, \(a=0\), the modes collapse to a single mode corresponding to oscillations of the order parameter as it decays to its equilibrium value given by eqn (11): \(\beta_{\pm}=-(1/\tau)\left[1+\pi^{2}K/(AW^{2})\right]\pm i\dot{\gamma}\). Note the similarity between these damped oscillations and the damped oscillations in the startup problem, eqns (7) and (8). A nonzero activity makes the two modes distinct. In the limit of zero shear rate, \(\beta_{-}\) is negative and independent of activity even if \(a\neq 0\), and corresponds to the decay of the scalar order parameter of a passive isotropic nematic when it is perturbed from the isotropic value \(S=0\). The other mode corresponds to the spontaneous flow and ordering of an active isotropic nematic when \(a>a_{c}=[1+\pi^{2}K/(AW^{2})]\eta/(\lambda\tau)\). Note that the confining channel walls raise the critical activity above the previously quoted critical value for unbounded space. The elastic constant \(K\) only enters the growth rate if the channel width is finite.
In general, the critical activity for instability depends on the shear rate, and is found by determining when \(\text{Re}(\beta_{+})=0\) for \(n=1\). The modes are oscillatory when the square root in eqn (17) is imaginary, or when \(a_{-}<a<a_{+}\), where
\[a_{\pm}=\frac{2\eta\dot{\gamma}}{\lambda}(1+\dot{\gamma}^{2}\tau^{2})\left(\dot{\gamma}\tau\pm\sqrt{1+\dot{\gamma}^{2}\tau^{2}}\right). \tag{18}\]
When \(a_{-}<a<a_{+}\), the critical curve \(\text{Re}[\beta_{+}(n=1)]=0\) in the \(\dot{\gamma}\)-\(a\) plane is given by
\[a_{\text{1c}}=\frac{2\eta}{\lambda\tau}\left(1+\dot{\gamma}^{2}\tau^{2}\right)\left(1+\pi^{2}\ell^{2}\right), \tag{19}\]
where \(\ell\) is the dimensionless correlation length defined in the previous section. When \(a<a_{-}\) or \(a>a_{+}\), the growth rate is purely real, and the critical curve \(\beta_{+}(n=1)=0\) is given by
\[a_{\text{2c}}=\frac{\eta}{\tau}\frac{\left(1+\dot{\gamma}^{2}\tau^{2}\right)\left[(1+\pi^{2}\ell^{2})^{2}+\dot{\gamma}^{2}\tau^{2}\right]}{\lambda\left(1+\pi^{2}\ell^{2}-\dot{\gamma}^{2}\tau^{2}\right)}. \tag{20}\]
Note that \(a_{\text{2c}}>0\) for \(\sqrt{1+\pi^{2}\ell^{2}}>\dot{\gamma}\tau\), and \(a_{\text{2c}}<0\) for \(\sqrt{1+\pi^{2}\ell^{2}}<\dot{\gamma}\tau\).
The stability boundaries are plotted in Fig. 3 for the case of \(\ell=0\) (zero Frank elasticity). The region of oscillatory growth rates, \(a_{-}<a<a_{+}\), is the region between the dashed lines. The stable region is the shaded blue region between the solid blue curves, whereas the unstable regions are the white regions. Note that the upper stability boundary is given by \(a_{\text{1c}}\) in the oscillatory region, and \(a_{\text{2c}}\) in the non-oscillatory region. The lower stability boundary lies wholly in the non-oscillatory region, and is therefore given by \(a_{\text{2c}}\). Since the upper stability boundary near \(\dot{\gamma}=0\) increases with shear rate, our results are in agreement with Muhuri et al., [30] who found that shear counteracts the instability for extensile particles. Surprisingly, we also find that shear can be _destabilizing_ for contractile active particles if the magnitude of the activity is large enough.
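The stability boundaries of Fig. 3 can be traced numerically from eqn (17). The sketch below (Python with NumPy, an assumed tooling choice) evaluates \(\tau\beta_{+}\) for \(n=1\) on a grid in the \(\dot{\gamma}\tau\)-\(\alpha\) plane, with \(\alpha=a\lambda\tau/\eta\), and also evaluates the closed-form boundaries, eqns (19) and (20), in the same dimensionless variables.

```python
import numpy as np

ell = 0.0                                 # dimensionless correlation length (Fig. 3)
g = np.linspace(0.01, 2.0, 400)           # gdot * tau
alpha = np.linspace(-8.0, 8.0, 801)       # a * lam * tau / eta
G, A = np.meshgrid(g, alpha)

decay = -(1.0 + (np.pi*ell)**2)           # relaxation term of eqn (17), n = 1
drive = A/(2.0*(1.0 + G**2))              # activity term
disc = drive**2 - G**2*(1.0 + A/(1.0 + G**2))   # argument of the square root
beta_plus = decay + drive + np.sqrt(disc.astype(complex))
stable = beta_plus.real < 0               # shaded region of Fig. 3
oscillatory = disc < 0                    # region between the dashed lines

# closed-form critical curves
a1c = 2.0*(1.0 + g**2)*(1.0 + (np.pi*ell)**2)                     # eqn (19)
a2c = ((1.0 + g**2)*((1.0 + (np.pi*ell)**2)**2 + g**2)
       /(1.0 + (np.pi*ell)**2 - g**2))                            # eqn (20)
```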
## 4 Straight channel: nonlinear spontaneous flows
The linear analysis of the previous section predicts that simple shear flow with uniform nematic order is stable as long as the activity and externally imposed shear rate lie in the shaded region of Fig. 3. However, there may be transitions to flow states that are not captured by linear stability analysis, and furthermore, the linear equations cannot describe the fully-developed flow states. Thus, we explore the activity-induced flow states and the transitions between them by numerically solving the full nonlinear equations, eqns (2)-(4). We use the open source finite element software FEniCS [41, 42, 43] to solve the nonlinear equations, employing a backwards Euler scheme to solve for the time dependence. We characterize the flow states by the spontaneous volumetric flow rate as well as the wall shear stress.
The system is initialized with a small value of the nematic order parameter \(S\), appropriate for an isotropic state. For sufficiently small values of the external shear, the direction of the activity-induced flow for \(a>a_{c}\) depends on the configuration of the nematic order. We can achieve positive flow--flow in the same direction the bottom wall moves--or negative flow--flow against the direction the bottom wall moves--by imposing appropriate initial conditions on the directors. These conditions will be described below for the extensile and contractile cases. The initial director fields also have small random fluctuations. Because we are neglecting inertial effects, we do not need to initialize the velocity field, which is determined from eqns (2)-(4). Instead of attempting to simulate a very long channel, we use periodic boundary conditions on the left and right boundaries of the channel. The length \(L\) of the channel is chosen to be five times the width \(W\); we found this to be the longest channel we could simulate in a reasonable amount of computing time. We focus on situations in which the width \(W\) of the channel is large compared to the correlation length \(\sqrt{K/A}\) of the liquid crystal. Therefore, our simulations are carried out with a small value of the Frank elasticity, \(K/A=0.01W^{2}\) (i.e. \(\ell=0.1\)). In our numerical calculations, \(W\) is the unit of length, \(\tau\) is the unit of time, and \(\eta/\tau\) is the unit of pressure. We also define the dimensionless activity \(\alpha=a\lambda\tau/\eta\), and restrict our simulations to the case of needle-like particles, \(\lambda=1\).

Fig. 3: Linear stability analysis results for a two-dimensional active gel in a straight channel of width \(W\) subject to a shear flow with rate \(\dot{\gamma}\). The Frank elasticity is small: \(K=0.01AW^{2}\). Simple shear flow is stable against perturbations in the shaded blue region, and the perturbations are oscillatory in the region between the two dashed lines.
### Extensile fluids
For extensile fluids, we find three types of flow states when the activity is above the critical value \(a_{c}\): unidirectional, oscillatory, and dancing. These states are similar to three of the states found by Samui et al., [13] who studied an active nematic fluid confined to a channel in the absence of external shear. These authors also found an active turbulent state at high activity, which we do not explore here. The unidirectional flow is steady, consisting of a superposition of spontaneous flow and simple shear flow. The oscillatory flow is unsteady, with a pattern of flow and order that translates at a constant velocity along the channel, which makes the spatially-averaged wall stress constant in time. The dancing flow is truly unsteady, with a spatially-averaged wall stress that oscillates in time. These states will be described in more detail below. Fig. 4 shows the phase diagram for flow states for dimensionless activity in the range \(0\leq\alpha\lesssim 2.5\) and shear rate in the range \(0\leq\dot{\gamma}\tau\lesssim 1\). To get positive spontaneous flow, we imposed initial conditions with the directors converging to the right, as in Fig. 5b. To get negative spontaneous flow, we imposed initial conditions with the directors converging to the left, as in Fig. 5d. We ran each simulation until either all transients died out, or \(t=600\tau\), whichever came first. The final state could either be a steady state or a state with regular periodic behaviour. Then we classified the states as follows. The simple shear and unidirectional flow states generally emerge at times \(t<600\tau\). Both states are steady with negligible \(y\)-component of velocity, and these two flow states are easily distinguished since simple shear has the standard linear flow profile \(v_{x}=\dot{\gamma}(W-y)\), whereas unidirectional flow has a spontaneous flow component added to the linear flow. If there is a nonzero \(y\)-component of the velocity at the end of the simulation, we check for oscillations in the average wall stress, \(\bar{\sigma}_{w}=\int_{0}^{L}\mathrm{d}x\,\sigma_{xy}(x,y=0)/L\), for times in the range \(550\tau\)-\(600\tau\). Negligible oscillation in the average wall stress implies an oscillatory flow state, while non-negligible oscillation implies a dancing state. Most of the points shown in Fig. 4 reached a steady or regular periodic state by \(t=600\tau\), or came very close to doing so. But a few cases near transitions between flow states needed much longer to fully develop.
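In code, the classification procedure just described reduces to a few tests on the late-time fields; the sketch below (Python/NumPy; the tolerance and argument names are illustrative, not the values used in our runs) mirrors that logic.

```python
import numpy as np

def classify_state(vx, vy, wall_stress_ts, gdot, y, W, tol=1e-3):
    """Classify a late-time flow state following the procedure in the text.

    vx, vy         : velocity components sampled at heights y across the channel
    wall_stress_ts : spatially averaged wall stress sampled over t = 550-600 tau
    """
    if np.max(np.abs(vy)) < tol:              # steady flow along the channel axis
        if np.max(np.abs(vx - gdot*(W - y))) < tol:
            return "simple shear"             # the linear base profile
        return "unidirectional"               # spontaneous component added
    if np.ptp(wall_stress_ts) < tol:          # unsteady, but wall stress constant
        return "oscillatory"
    return "dancing"                          # wall stress oscillates in time
```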
Fig. 4: Numerically determined flow states for extensile fluids with \(\ell=0.1\) and \(\lambda=1\) in a straight channel. See the text for an explanation of how the flow states were determined. The blue shaded region and the dashed line indicate the linearly stable states and the boundary for oscillatory modes, respectively, for \(\ell=0.1\) (compare with Fig. 3). The location of the transitions is generally insensitive to whether the initial splay of the nematic directors converges to the right (as in Fig. 5b) or the left (as in Fig. 5d). Note that depending on the noise in the initial conditions, some of the states at higher \(\alpha\) can be either dancing or oscillatory-like states; an example is shown in the SI.

Fig. 5: Steady states of spontaneous flow and nematic order for dimensionless activity \(\alpha=1.5\) and dimensionless shear rate \(\dot{\gamma}\tau=0.2\) (the bottom wall moves to the right, and the top wall is stationary). In the velocity field plots (panels (a) and (c)), color denotes flow speed, and arrows denote the direction of the flow. In the order parameter field (panels (b) and (d)), color denotes the scalar order parameter and lines denote the director field. Note that the leftward spontaneous flow in (c) is not sufficient to reverse the net flow near the moving wall, but leads to negative net flow near the stationary wall.

The limit of stability for the simple shear flow states in Fig. 4 is the boundary between the region with black crosses and the regions with other symbols. We observe that the numerical limit of stability for simple shear flow matches very well with the prediction of linear stability analysis (filled blue region), but only for the transition from simple shear to unidirectional flow, \(\dot{\gamma}\tau\lesssim 0.3\). The disagreement between the linear stability boundary and the transition from simple shear flow to oscillatory flow may be due to our neglect of the possibility that the perturbation could depend on \(x\) as well as \(y\). In the region of simple shear flow (black crosses in Fig. 4), our numerical results show that the wall stress decreases with activity, in agreement with eqn (13). Fig. 6 shows the numerically computed wall stress, normalized by the passive (viscous) stress. When the flow state is simple shear, activity reduces the total wall stress in proportion to the activity, in accord with the general understanding that extensile activity reduces the effective viscosity.[37]
**Unidirectional flow.** When the externally imposed shear is in the range \(0\leq\dot{\gamma}\tau\lesssim 0.3\), and the dimensionless activity is in a relatively narrow band near \(\alpha\approx 1\) (Fig. 4), activity creates a steady unidirectional flow along the \(x\)-axis (Fig. 5). The activity-induced component spontaneously breaks the left-right symmetry of the channel, with the actual direction of the active flow component determined not by the imposed external shear but instead by the initial conditions of the directors, as described above. Since the total shear rate vanishes at the value of \(y\) at which the flow has an extremum, the scalar order parameter vanishes at this same value of \(y\) (Fig. 5). Fig. 7 shows the flow profile with the imposed shear flow subtracted off, for fixed activity and various values of \(\dot{\gamma}\), for both the left-moving and right-moving spontaneous flows. It indicates that the spontaneous active component of the flow depends on \(\dot{\gamma}\); in other words, the total flow is not simply a superposition of the passive shear flow \(v_{x}=\dot{\gamma}(W-y)\) and the spontaneous flow at _zero_ externally imposed shear.
To better characterize these flows, we subtract the passive volumetric flow rate from the total volumetric flow rate to get the dimensionless activity-induced volumetric flow rate (per unit channel width),
\[q^{\text{active}}\equiv\left(\int_{0}^{W}\mathrm{d}y\,v_{x}-\frac{\dot{\gamma}W^{2}}{2}\right)\frac{\tau}{W^{2}}, \tag{21}\]
shown in Fig. 8. This quantity serves as an order parameter describing the transitions among the various flow states. Fig. 8 shows that the activity-driven flow rate has the same magnitude for the left-moving and right-moving flows, and also that the amplitude of the unidirectional flows increases as the activity increases.
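Evaluating eqn (21) from a computed profile is a one-line quadrature; a sketch (Python/NumPy, with \(v_x\) assumed sampled on a grid of \(y\) values):

```python
import numpy as np

def active_flux(y, vx, gdot, W, tau):
    """Dimensionless activity-induced volumetric flow rate, eqn (21)."""
    q_total = np.trapz(vx, y)        # integral of v_x across the channel
    q_passive = gdot*W**2/2.0        # flux of the imposed simple shear
    return (q_total - q_passive)*tau/W**2
```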
Examining Fig. 6 for the case of \(\dot{\gamma}\tau=0.2\) reveals that the reduction of the normalized wall stress with increasing activity ceases at the onset of the unidirectional flow, and the normalized wall stress at \(y=0\) starts to increase slightly as activity increases further.
Fig. 6: The spatially-averaged wall stress on the bottom wall of a straight channel, normalized by the passive wall stress. Because the average stress depends somewhat on the spatial period of the oscillatory and dancing flow states, only the results for one representative period are shown. For dancing flow states, the spatially-averaged wall stress oscillates in time, and the square symbols show the time-averaged value of the wall shear stress.
Fig. 7: Spontaneous component of velocity profiles in dimensionless units for unidirectional flow at different rates of externally imposed shear. For each value of the shear rate, there are two branches, with the upper branch corresponding to the positive spontaneous flow, and the lower branch corresponding to the negative spontaneous flow.
The active component of the wall stress at \(y=0\) in the unidirectional flow has the opposite sign compared to that of the simple shear flow, as can be seen from the opposite orientation of the directors near the wall \(y=0\) in Fig. 5b and Fig. 1. Also, the active component of the flow changes the sign of the flow gradient near the wall, as can be seen from Fig. 7. These two effects together lead to the rise in the normalized wall stress at the onset of unidirectional flow.
**Oscillatory flow.** Our phase diagram of flow states shows that for \(\dot{\gamma}\tau\lesssim 0.3\), there is a transition with increasing activity from the unidirectional flow states to two-dimensional oscillatory flows (Fig. 9). When \(\dot{\gamma}\tau\gtrsim 0.3\), the simple shear states transition directly to two-dimensional oscillatory flows as activity increases. Although the oscillatory flow states are unsteady, with the velocity and order parameter taking the form of a traveling wave, the flow pattern and order parameter configuration rigidly translate in the \(x\) direction with wave speed \(v_{\rm T}\). In other words, in the frame moving relative to the channel walls with speed \(v_{\rm T}\), the streamlines meander in space but are steady. Likewise, the configuration of the order parameter tensor is steady in this frame. Because we use periodic boundary conditions, the flow field and orientational order parameter must have period in \(x\) equal to the channel length \(L\). But these fields could also have a shorter period, which must evenly divide the total channel length. Since we use a channel length \(L=5W\), the possible wavelengths for a periodic configuration are \(5W\), \(5W/2\), \(5W/3\),.... Different wavelengths are selected in the dynamical final state depending on the initial state of the nematic directors, as well as the value of the activity and the imposed shear. Because it is difficult to determine the relationship between the random fluctuations imposed on the initial directors and the wavelength that is finally selected, we did not make a systematic study of all the possible wavelengths. It is natural to worry that the steady translation of the flow field and order parameter pattern could be an artifact of the periodic boundary conditions. In Sec. 5, we study an annular geometry as a single domain without the need for periodic boundary conditions. Since we also observe an oscillatory flow state with constant angular wave speed in that situation, we are confident the constant wave speed \(v_{\rm T}\) we see in the straight channel is not an artifact of the periodic boundary conditions.
We measured the volumetric flux for times in the range \(t=550\)-\(600\tau\), which is when the system is generally in its final dynamically stable state. In the final state, the volumetric flow rate and wall shear stress of the oscillatory flows are constant. For small externally imposed shear (e.g. \(\dot{\gamma}\tau\leq 0.2\) in Fig. 4), the spontaneous activity-induced flows can be either positive or negative, depending on the form of the splay in the initial conditions for nematic order, as for the unidirectional flows. See Figs. 9a-d (movies are in the SI). For positive spontaneous flow, the streamlines undulate, but the externally imposed shear breaks the up-down symmetry of the waves with respect to the horizontal centerline of the channel. The velocity at the valleys of the waves is higher than at the peaks. For negative spontaneous flow, since the activity-induced flow is opposite to the direction of the externally imposed shear flow, the flow more easily forms circular streamlines. Thus, for \(\dot{\gamma}\neq 0\), the absolute value of the activity-driven flux of negative spontaneous flows is slightly smaller than the flux for the positive spontaneous flows, as can be seen by looking very closely at Fig. 8.
The direction of the spontaneous flow not only determines the shape of the streamlines, but also determines the direction of translation of the total flow pattern, including the passive viscous flow. For positive spontaneous flow, the total flow pattern translates in the \(+x\) direction, while for negative spontaneous flow, it translates in the \(-x\) direction. The activity-driven volumetric flow rate is nonzero but generally decreases with increasing activity, as shown in Fig. 8. Fig. 10 shows that the wave speed \(v_{\rm T}\) of the pattern is faster than the flux, and that the difference between these two quantities decreases as the activity grows.
We now turn to larger externally imposed shear (e.g. \(\dot{\gamma}\tau\geq 0.3\) in Fig. 4). In this case, only the positive spontaneous flow appears; the symmetry is broken by the flow imposed by the external shear. The activity-driven volumetric flow rate is zero because the imposed shear rate is large enough to close the streamlines. Interestingly, our numerical results indicate that the wave speed is equal to the average volumetric flow rate of simple shear, \(v_{\rm T}=\dot{\gamma}W/2\).
Fig. 9: Examples of the unsteady oscillatory flow state at \(t=600\tau\). Panels (a), (b), (c) and (d) show the flow patterns and order parameter fields corresponding to positive and negative spontaneous flows for a small external shear rate. Panels (e) and (f) show a case with a larger shear rate. At this shear rate, we only observe positive spontaneous flow.

**Dancing flow.** At higher activity, the flow field and tensor order parameter field become unsteady in any frame, and we find states (Fig. 11; movies are in the SI) analogous to the dancing flows found by Shendruk et al. [29] and Samui et al. [13] in their study of active nematic flow in a two-dimensional channel. The volumetric flow rate of the dancing flow is still constant in time. Additionally, in the range we study (\(\alpha\leq 2.5\)), when activity is large enough to dynamically close all streamlines for the part of the flow that is activity-driven, the total flux is the same as in the passive case. As in the case of the oscillatory flows, we sometimes find multiple states at the same values of the parameters. For example, noise in the initial conditions may cause the system to exhibit oscillatory-like states in the region of the phase diagram where dancing flows are also found.
Given a director configuration \(\hat{\mathbf{n}}=\cos\phi\hat{\mathbf{x}}+\sin\phi\hat{\mathbf{y}}\), we may define the topological charge inside a closed loop by computing \(\oint\mathrm{d}\phi=2\pi m\) around the loop, where \(m\) is the charge. Applying this definition to the configuration in Fig. 11b may be problematic because the order parameter \(S\) vanishes not just in small cores but in extended two-dimensional regions. If the loop drawn to encircle a potential topological defect crosses a region where \(S\) vanishes, the angle \(\phi\) and the topological charge are ill-defined. Nevertheless, we can simply look at the director configuration of dancing flow and see that there are parts of the configuration around the regions of small \(S\) near the center of the channel that closely approximate the director field of \(+1/2\) defects. The \(+1/2\) defects appear in pairs, and the two defect cores move with undulations of the flow in opposite directions, leading to the pairs exchanging partners with the pair to the immediate left and immediate right, consistent with the Ceilidh dance observed by Shendruk et al. [29] and Samui et al. [13]
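Where the loop stays in regions of nonvanishing \(S\), the winding computation is straightforward to implement. A sketch (Python/NumPy; `phi_loop` is a hypothetical array of director angles sampled in order around the loop): because the director is apolar, each increment of \(\phi\) is defined only modulo \(\pi\), which is what allows half-integer charges.

```python
import numpy as np

def topological_charge(phi_loop):
    """Winding number m of the director angle around a closed loop."""
    dphi = np.diff(np.append(phi_loop, phi_loop[0]))   # close the loop
    dphi = (dphi + np.pi/2) % np.pi - np.pi/2          # wrap into [-pi/2, pi/2)
    return np.sum(dphi)/(2.0*np.pi)

# a +1/2 defect: the director rotates by pi over one circuit of the loop
phi = 0.5*np.linspace(0.0, 2.0*np.pi, 100, endpoint=False)
print(topological_charge(phi))   # 0.5
```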
The spatially averaged shear stress imposed by the active flow on the moving wall also oscillates in time. The average wall shear stress no longer decreases linearly with activity in the spontaneous flow region.
### Contractile fluids
Negative activity corresponds to contractile particles. When the activity is sufficiently negative and the shear rate is large enough, \(\dot{\gamma}\tau>\sqrt{1+\pi^{2}\ell^{2}}\), we observe unidirectional flow states in our finite-element calculations. The stability boundary that we find in our numerical calculations is consistent with the results of our linear stability analysis (Fig. 12). As in the extensile case, we get both positive and negative flows, depending on whether the initial configuration of the directors bends downward as in Fig. 13b, or upward as in Fig. 13d. After transients have died out, the active component of the volumetric flow rate is equal in magnitude for the positive and negative flows, and the amplitude of the flow rate increases as the magnitude of the activity increases. It is well-known that contractile elongated particles in a shear flow enhance the shear viscosity. [37] Thus, the wall stress (normalized by the passive stress) increases linearly with the magnitude of the activity when the flow is simple shear, according to eqn (13). When the flow transitions to unidirectional flow, we also find that the normalized wall stress increases linearly with the magnitude of the activity, although with a slightly smaller magnitude of slope. The figures showing the dependence of the active component of the flow rate and the dependence of the normalized wall stress on activity are in the SI.
Fig. 10: Wave speed \(v_{\rm T}\) of the flow pattern relative to the average dimensionless flow rate for the oscillatory flow state at various (small) externally imposed shear rates.

Fig. 11: Snapshot of a representative unsteady dancing flow state at \(t=599.8\tau\). In the velocity field plots, colors denote flow speed, and arrows denote flow direction. In the order parameter field, colors denote the scalar order parameter, and lines denote directors.

Fig. 12: Flow states for contractile particles in a straight channel with \(\ell=0.1\). As in Fig. 4, the blue shaded region is linearly stable, and the modes of the linearized equations are damped but oscillatory above the dashed line. The finite element results are insensitive to the initial conditions of the director field.

When the magnitude of the activity becomes large, we observe a boundary layer in the flow velocity. Since we found only steady-state unidirectional flow states for contractile activity, it is computationally more efficient to reduce the governing partial differential equations to ordinary differential equations [see eqns (23)-(26) below] and solve them using the bvp5c solver of MATLAB.[44] Fig. 14 shows the active component of the flow for the positive and negative spontaneous flows of contractile gels. When the absolute value of the activity is large, we observe that the spontaneous component of the flow approaches simple shear flow, with a boundary layer of dimensionless thickness \(\ell_{\delta}\) near one of the walls, which we define as the displacement boundary layer thickness [45]
\[\ell_{\delta}\equiv\frac{\int_{0}^{W}\mathrm{d}y\left(\dot{\gamma}_{0}y-(v_{x}-v_{x}^{\mathrm{passive}})\right)}{\int_{0}^{W}\mathrm{d}y\,\dot{\gamma}_{0}y}, \tag{22}\]
where \(\dot{\gamma}_{0}=\mathrm{d}(v_{x}-v_{x}^{\mathrm{passive}})/\mathrm{d}y\) at \(y=0\) for positive spontaneous flow. The boundary layer thickness is the same for positive and negative spontaneous flow. Fig. 14 shows that the peak flow speed of the active component is higher and the boundary layer is thinner for larger magnitudes of the activity. From Fig. 15, we find that \(\ell_{\delta}\propto|\alpha-\alpha_{c}|^{\zeta}\), where \(\zeta\) is close to \(-0.5\), but its magnitude increases with \(\dot{\gamma}\tau\). This dependence will be studied in another publication.
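Both eqn (22) and the scaling fit can be evaluated directly from computed profiles; a sketch (Python/NumPy, with the active velocity component assumed sampled on a grid in \(y\), in the same dimensionless units as Fig. 14):

```python
import numpy as np

def displacement_thickness(y, v_active):
    """Dimensionless displacement boundary layer thickness, eqn (22)."""
    gdot0 = (v_active[1] - v_active[0])/(y[1] - y[0])   # slope at y = 0
    return np.trapz(gdot0*y - v_active, y)/np.trapz(gdot0*y, y)

def fit_zeta(alphas, thicknesses, alpha_c):
    """Exponent zeta in ell_delta ~ |alpha - alpha_c|**zeta (log-log fit)."""
    zeta, _ = np.polyfit(np.log(np.abs(alphas - alpha_c)),
                         np.log(thicknesses), 1)
    return zeta
```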
### Weakly nonlinear analysis for \(\dot{\gamma}=0\).
To conclude this section, we turn to a weakly nonlinear analysis of the spontaneous steady unidirectional flow near the transition from the motionless isotropic state.[46] We continue to assume \(\lambda=1\) and only consider the case of zero shear rate, \(\dot{\gamma}=0\), leaving the case of nonzero \(\dot{\gamma}\) for another publication. Assuming that the velocity field, order parameter tensor, and pressure depend only on the coordinate \(y\), the dimensionless governing equations are
\[v_{x}^{\prime\prime}-\alpha Q_{xy}^{\prime} = 0 \tag{23}\]
\[-p^{\prime}+\alpha Q_{xx}^{\prime} = 0 \tag{24}\]
\[\ell^{2}Q_{xx}^{\prime\prime}-Q_{xx}+v_{x}^{\prime}Q_{xy} = 0 \tag{25}\]
\[\ell^{2}Q_{xy}^{\prime\prime}-Q_{xy}-v_{x}^{\prime}Q_{xx}+v_{x}^{\prime} = 0 \tag{26}\]
with no-slip boundary conditions \(v_{x}(0)=v_{x}(1)=0\) and no-torque (Neumann) boundary conditions \(Q_{ij}^{\prime}(0)=Q_{ij}^{\prime}(1)=0\). The prime denotes a derivative with respect to \(y\). We already saw in Sec. 3 that the motionless, isotropic state at zero imposed shear rate is unstable when \(\alpha>\alpha_{c}\), where \(\alpha_{c}=1+\pi^{2}\ell^{2}\) is the dimensionless critical activity. Here we study the spontaneous flow and weak ordering for \(\alpha=\alpha_{c}+\delta\alpha\), with \(\delta\alpha>0\).
Fig. 14: Active component of the velocity profile in dimensionless units for contractile particles in the unidirectional flow state, for \(\dot{\gamma}\tau=2.0\) and various activities. The upper branches correspond to positive spontaneous flow, and the lower branches correspond to negative spontaneous flow. The boundary layer thickness is denoted by \(\ell_{\delta}\).
Fig. 13: An example of the velocity and order parameter fields of positive and negative spontaneous flow for contractile particles. In the velocity field plots (a) and (c), color denotes the flow speed, and arrows denote flow direction. In the order parameter field plots (b) and (d), color denotes the scalar order parameter, and lines denote the directors.
The balance \(Q_{xx}\approx v^{\prime}_{x}Q_{xy}\) in eqn (25) suggests that to leading order, \(v_{x}=\mathcal{O}(\delta\alpha^{1/2})\), \(Q_{xy}=\mathcal{O}(\delta\alpha^{1/2})\), and \(Q_{xx}=\mathcal{O}(\delta\alpha)\). Thus, we expand in powers of \(\delta\alpha^{1/2}\):
\[v_{x} = \delta\alpha^{1/2}v^{(1)}_{x}+\delta\alpha v^{(2)}_{x}+\delta \alpha^{3/2}v^{(3)}_{x}+\ldots \tag{27}\] \[Q_{ij} = \delta\alpha^{1/2}Q^{(1)}_{ij}+\delta\alpha Q^{(2)}_{ij}+\delta \alpha^{3/2}Q^{(3)}_{ij}+\ldots. \tag{28}\]
At \(\mathcal{O}(\delta\alpha^{1/2})\), we find the steady versions of the linearized equations we used in Sec. 3 to solve for the growth rate,
\[v^{(1)\prime\prime}_{x}-\alpha_{\rm c}Q^{(1)\prime}_{xy} = 0 \tag{29}\]
\[\ell^{2}Q^{(1)\prime\prime}_{xx}-Q^{(1)}_{xx} = 0 \tag{30}\]
\[\ell^{2}Q^{(1)\prime\prime}_{xy}-Q^{(1)}_{xy}+v^{(1)\prime}_{x} = 0. \tag{31}\]
The Neumann boundary conditions on \(Q_{ij}\) together with eqn (30) imply that \(Q^{(1)}_{xx}(y)=0\). Integrating eqn (29) yields \(v^{(1)\prime}_{x}-\alpha_{\rm c}Q^{(1)}_{xy}=\sigma^{(1)}\), where \(\sigma^{(1)}\) is a constant. Eliminating \(v^{(1)}_{x}\) from eqn (31) leads to
\[\ell^{2}Q^{(1)\prime\prime}_{xy}+(\alpha_{\rm c}-1)Q^{(1)}_{xy}=-\sigma^{(1)}. \tag{32}\]
To get a solution for \(Q^{(1)}_{xy}\) that satisfies the Neumann boundary conditions, we must have
\[Q^{(1)}_{xy} = c_{1}\cos\Bigl(\sqrt{\alpha_{\rm c}-1}\,y/\ell\Bigr)-\frac{\sigma^{(1)}}{\alpha_{\rm c}-1} \tag{33}\]
\[Q^{(1)}_{xy} = c_{1}\cos\pi y-\frac{\sigma^{(1)}}{\alpha_{\rm c}-1}. \tag{34}\]
Using eqn (29) and the no-slip boundary conditions implies \(\sigma^{(1)}=0\) and \(v^{(1)}_{x}=(c_{1}\alpha_{\rm c}/\pi)\sin\pi y\). Note that to leading order, \(v_{x}\) and \(Q_{xy}\) are \(\mathcal{O}(\delta\alpha^{1/2})\), but \(Q_{xx}\) is at most \(\mathcal{O}(\delta\alpha)\). At the next order, the equations are
\[v^{(2)\prime\prime}_{x}-(1+\pi^{2}\ell^{2})Q^{(2)\prime}_{xy} = 0 \tag{35}\]
\[-\ell^{2}Q^{(2)\prime\prime}_{xy}+Q^{(2)}_{xy}-v^{(2)\prime}_{x} = 0 \tag{36}\]
\[-\ell^{2}Q^{(2)\prime\prime}_{xx}+Q^{(2)}_{xx} = c_{1}^{2}(1+\pi^{2}\ell^{2})\cos^{2}\pi y, \tag{37}\]
with solutions
\[Q^{(2)}_{xx} = c_{1}^{2}\frac{1+\pi^{2}\ell^{2}}{2}\left(1+\frac{\cos 2\pi y}{1+4\pi^{2}\ell^{2}}\right) \tag{38}\]
\[Q^{(2)}_{xy} = c_{2}\cos\pi y \tag{39}\]
\[v^{(2)}_{x} = c_{2}\frac{1+\pi^{2}\ell^{2}}{\pi}\sin\pi y, \tag{40}\]
where \(c_{2}\) is a constant.
To determine \(c_{1}\), we must expand to \(\mathcal{O}(\delta\alpha^{3/2})\):
\[v^{(3)\prime\prime}_{x}-(1+\pi^{2}\ell^{2})Q^{(3)\prime}_{xy} = -c_{1}\pi\sin(\pi y) \tag{41}\]
\[\ell^{2}Q^{(3)\prime\prime}_{xy}-Q^{(3)}_{xy}+v^{(3)\prime}_{x} = c_{1}^{3}C_{0}\left[\left(\frac{3}{2}+4\pi^{2}\ell^{2}\right)\cos\pi y+\frac{1}{2}\cos 3\pi y\right], \tag{42}\]
where \(C_{0}=(1+\pi^{2}\ell^{2})^{2}/[2(1+4\pi^{2}\ell^{2})]\). Integrating eqn (41) yields
\[v^{(3)\prime}_{x}=(1+\pi^{2}\ell^{2})Q^{(3)}_{xy}+c_{1}\cos\pi y+\sigma^{(3)}, \tag{43}\]
where the constant \(\sigma^{(3)}\) appears in the expansion of the stress, \(\sigma=v^{\prime}_{x}-\alpha Q_{xy}=\delta\alpha^{1/2}\sigma^{(1)}+\delta \alpha\sigma^{(2)}+\delta\alpha^{3/2}\sigma^{(3)}+\ldots\). The solutions we have already found at lower order imply that \(\sigma^{(1)}=\sigma^{(2)}=0\). The no-slip boundary conditions on \(v^{(3)}_{x}\) also imply that \(\sigma^{(3)}=0\). Thus, the stress vanishes not only at the critical value of the activity, but also as \(\alpha\) is increased above \(\alpha_{\rm c}\). Our numerical computations give the same result just above the critical activity. Using eqn (43) to eliminate \(v^{(3)}\) from eqn (42) yields
\[\ell^{2}Q^{(3)\prime\prime}_{xy}+\pi^{2}\ell^{2}Q^{(3)}_{xy}=C_{1}\cos\pi y+C_ {2}\cos 3\pi y, \tag{44}\]
where
\[C_{1} = \frac{c_{1}^{3}(1+\pi^{2}\ell^{2})^{2}(3+8\pi^{2}\ell^{2})}{4(1+4 \pi^{2}\ell^{2})}-c_{1} \tag{45}\] \[C_{2} = \frac{c_{1}^{3}(1+\pi^{2}\ell^{2})^{2}}{4(1+4\pi^{2}\ell^{2})}. \tag{46}\]
To find \(c_{1}\), we use the Fredholm alternative,[47] which implies that the right-hand side of eqn (44) must be orthogonal to the solution of the corresponding homogeneous equation. Thus, \(C_{1}=0\), and
\[Q_{xx} = \frac{2\delta\alpha}{\alpha_{\rm c}}\frac{1+4\pi^{2}\ell^{2}}{3+8\pi^{2}\ell^{2}}\left(1+\frac{\cos 2\pi y}{1+4\pi^{2}\ell^{2}}\right)+\mathcal{O}(\delta\alpha^{3/2}) \tag{47}\]
\[Q_{xy} = \pm\frac{2\delta\alpha^{1/2}}{\alpha_{\rm c}}\sqrt{\frac{1+4\pi^{2}\ell^{2}}{3+8\pi^{2}\ell^{2}}}\cos\pi y+\mathcal{O}(\delta\alpha) \tag{48}\]
\[v_{x} = \pm\frac{2\delta\alpha^{1/2}}{\pi}\sqrt{\frac{1+4\pi^{2}\ell^{2}}{3+8\pi^{2}\ell^{2}}}\sin\pi y+\mathcal{O}(\delta\alpha), \tag{49}\]
where the two signs for \(v_{x}\) and \(Q_{xy}\) correspond to the two different spontaneous directions of flow, and the corresponding orientation of the directors. These analytical solutions agree well with our numerical solutions for the spontaneous unidirectional flow state with activity just above the critical activity.
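For comparison with these expansions, the boundary-value problem (23)-(26) can also be solved directly. The paper's calculations use MATLAB's bvp5c; the sketch below substitutes SciPy's `solve_bvp` (an assumed, equivalent choice), seeding the solver with the weakly nonlinear solution (47)-(49) so that it converges to the nontrivial branch rather than the trivial one. The pressure equation (24) decouples and is omitted.

```python
import numpy as np
from scipy.integrate import solve_bvp

ell = 0.1
alpha_c = 1.0 + (np.pi*ell)**2
alpha = alpha_c + 0.1                    # activity just above critical

# state vector s = [v_x, v_x', Q_xx, Q_xx', Q_xy, Q_xy']
def odes(y, s):
    vx, dvx, Qxx, dQxx, Qxy, dQxy = s
    return np.vstack([dvx,
                      alpha*dQxy,                     # eqn (23)
                      dQxx,
                      (Qxx - dvx*Qxy)/ell**2,         # eqn (25)
                      dQxy,
                      (Qxy + dvx*Qxx - dvx)/ell**2])  # eqn (26)

def bcs(s0, s1):
    # no-slip on v_x and Neumann on Q_ij at both walls
    return np.array([s0[0], s1[0], s0[3], s1[3], s0[5], s1[5]])

y = np.linspace(0.0, 1.0, 200)
amp = 2.0*np.sqrt((alpha - alpha_c)*(1.0 + 4.0*(np.pi*ell)**2)
                  /(3.0 + 8.0*(np.pi*ell)**2))/alpha_c    # from eqn (48)
guess = np.zeros((6, y.size))
guess[0] = amp*alpha_c/np.pi*np.sin(np.pi*y)              # v_x, eqn (49)
guess[1] = amp*alpha_c*np.cos(np.pi*y)                    # v_x'
guess[4] = amp*np.cos(np.pi*y)                            # Q_xy, eqn (48)
sol = solve_bvp(odes, bcs, y, guess)
```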
## 5 Annular channel: nonlinear spontaneous flows
In our work on the straight channel, we saw that simple shear flow led to a spatially uniform order parameter field Q when the activity is less than a critical value. Uniform Q leads to zero active force on the fluid. In contrast, if the shear rate in the flow is spatially nonuniform, the alignment and degree of ordering of the directors will also be spatially nonuniform, leading to an active
force. This situation arises in the case of curved boundaries--as in an annular channel--for any nonzero value of the activity, no matter how small. Previous theoretical studies involving curved boundaries have focused on the case of motionless walls. For example, Woodhouse and Goldstein found spontaneous circular flow in a circular chamber, [10] and Norton et al. showed that the nature of topological defects in the director field is determined by the flow rather than the director anchoring conditions at the wall of a circular chamber. [20]
In this section, we introduce curvature by considering the flow states of a two-dimensional active gel in the Taylor-Couette geometry of an annular channel between two concentric circular boundaries of radius \(R\) and \(R+W\). We impose external shear by rotating the inner boundary with steady angular frequency \(\omega\), leaving the outer boundary stationary. Stokes flow in this geometry, known as Couette flow, is given by [48]
\[v_{\theta}=\frac{\omega R^{2}}{(2R+W)W}\left[\frac{(R+W)^{2}}{r}-r\right], \tag{50}\]
where \(r\) is the radial polar coordinate. The second term of eqn (50) corresponds to rigid body rotation and does not lead to any strain rate, but the first term leads to a nonuniform strain rate, and thus induces a nonuniform order parameter field and an active force on the fluid for _any_ nonzero value of the activity. To study the nonlinear flow states of active flows in the annular channel, we again employ the finite element software FEniCS to solve the full nonlinear equations, eqns (2)-(4). We set \(\ell=0.1\), \(\lambda=1\) and \(R/W=1\).
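As a consistency check on this decomposition of eqn (50), the polar strain-rate component \(E_{r\theta}=(r/2)\,\mathrm{d}(v_{\theta}/r)/\mathrm{d}r\) can be computed symbolically; a small sketch (SymPy, an assumed tool):

```python
import sympy as sp

r, R, W, omega = sp.symbols('r R W omega', positive=True)
pref = omega*R**2/((2*R + W)*W)
v_theta = pref*((R + W)**2/r - r)            # eqn (50)

# total strain rate: only the 1/r term survives, decaying as 1/r**2
E_rtheta = sp.simplify(r/2*sp.diff(v_theta/r, r))
print(E_rtheta)   # equals -omega*R**2*(R + W)**2/(W*r**2*(2*R + W))

# the rigid-rotation part (the term proportional to r) is strain-free
print(sp.simplify(r/2*sp.diff(-pref*r/r, r)))   # 0
```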
### Extensile fluids
We begin our discussion of the flow states in the annulus with extensile active gels, \(\alpha>0\). As in the case of the straight channel, we give the initial director field some splay to induce counterclockwise or clockwise spontaneous flow, with the flow direction depending on the sense of the splay. For example, splay with the rods converging as we move counterclockwise around the annulus (Fig. 16d) leads to counterclockwise active flow (Fig. 16c). For the activities we used, we find the same kinds of active flow states as in the straight channel: Couette-like states which have no radial component of flow and are the analogs of the unidirectional states in the straight channel (Figs. 16a-d), oscillatory states (Figs. 16e and f), and dancing states (Figs. 16g and h). We run the simulations until \(t=600\tau\), and characterize the flow states as we did in the case of the straight channel (Sec. 4). For the Couette-like flows, we distinguish two flow states by checking whether the maximum velocity is at the moving wall or in the interior of the annulus. If the flow is fastest on the wall, we label it a "Couette-like 1" flow state; otherwise the label is "Couette-like 2". If the transverse component of the velocity \(v_{r}\) is non-negligible, we check whether the torque exerted by the total flow on the inner boundary oscillates during the time interval 550\(\tau\)-600\(\tau\). If it oscillates, then the state is dancing; otherwise it is oscillatory. There are a few flow states near transitions that need a longer time to equilibrate. We also find multiple solutions for particular values of \(\omega\tau\) and \(\alpha\) for the oscillatory and dancing flows. Fig. 17 shows the flow transitions in the annular channel in the range \(0\leq\alpha\lesssim 2.5\) and \(0\leq\omega\tau\lesssim 1\). The transition from Couette-like to oscillatory flow states is relatively robust, with the transition states showing little dependence on the initial conditions. However, compared with the case of the straight channel, the states observed in the transition from oscillatory to dancing flow are more sensitive to the choice of initial conditions.
In the case of a straight channel, our numerical calculations always yielded the Newtonian simple shear state solution as long as the magnitude of the activity was small enough. The situation is different for the annular channel: our numerical calculations only yield the Newtonian Couette flow state solution (eqn (50)) when the activity vanishes. As emphasized earlier, any nonzero value of activity leads to an active force and an active component of the flow because the order parameter field is nonuniform for nonzero wall rotation speed \(\omega\). Green, Toner and Vitelli examined a similar phenomenon for active nematics, in which a surface of nonvanishing Gaussian curvature generates a spontaneous flow at arbitrarily low values of the activity parameter. [49] As long as \(\omega\tau\) is sufficiently small, the flow profile varies continuously between the Couette, Couette-like 1, and Couette-like 2 states as the activity increases (Fig. 18). Note that the flow velocity increases with activity for a given imposed rotation rate, as expected because extensile activity reduces the effective shear viscosity. [37] Also, the deviation from the Newtonian Couette flow profile is small as long as the activity is modest, \(\alpha\lesssim 0.9\) (Fig. 18), a limit that we examine in Sec. 6.
The oscillatory flow in an annular channel (Figs. 16e and f; movies are in the SI) is similar to the oscillatory flow in a straight channel. The flow and order parameter patterns are steady in a frame that rotates at constant speed, and the average volumetric flow rate (\(\int v_{\theta}\,\mathrm{d}r/W\)) is constant in time. Since we solve the equations in the annular domain without applying periodic boundary conditions, and still see steady rotation of the flow pattern and order parameter pattern, we can be confident that the constant wave speed we saw in the case of the oscillatory flows in the straight channel is not an artifact of the periodic boundary condition.
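This diagnostic is a one-line quadrature over the gap; the following sketch (the stand-in velocity profile is an illustrative assumption, not simulation data) also shows how the active component used later is obtained by subtracting the passive Couette profile of eqn (50).

```python
import numpy as np

def couette(r, R, W, omega):
    """Passive Couette profile, eqn (50)."""
    return omega * R**2 / ((2 * R + W) * W) * ((R + W) ** 2 / r - r)

def avg_flow_rate(r, v_theta):
    """Average volumetric flow rate, (1/W) * int v_theta dr across the gap."""
    return np.trapz(v_theta, r) / (r[-1] - r[0])

R, W, omega = 1.0, 1.0, 0.2
r = np.linspace(R, R + W, 400)
v_total = couette(r, R, W, omega) + 0.05 * np.sin(np.pi * (r - R) / W)  # stand-in profile

active_rate = avg_flow_rate(r, v_total) - avg_flow_rate(r, couette(r, R, W, omega))
```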
In the dancing state, the flow and order parameter patterns change periodically in time, similar to the case of the straight channel. Unlike the straight channel, the volumetric flow rate of the dancing flow state (Figs. 16g and h; movies are in the SI) in the annular case is not constant in time. This time dependence arises because the difference in curvatures of the inner and outer boundaries of the annulus breaks the reflection symmetry of the boundaries of the straight channel that relates the dancing flow at the top wall to the dancing flow at the bottom wall. Also, as in the straight channel, we observe moving pairs of \(+1/2\) defect-like patterns with an exchange of partners in the annular dancing flow. In the straight channel, the defect pairs are mirror images of each other (see Fig. 11b), but in the annulus, the different curvatures of the two boundaries spoil this symmetry. Joshi et al. also found similar oscillatory and dancing flow states for active nematics by changing the curvatures of an annular channel without external shear. [50]
Fig. 16: Examples of spontaneous flow states of an active gel in an annular channel. In these examples, the inner boundary rotates with dimensionless speed \(\omega\tau=0.2\). The Couette-like 1 and 2 states are steady. In the oscillatory flow state, the flow pattern and order parameter configuration rotate at a steady rate. The dancing flow state is unsteady. Videos of the oscillatory and the dancing flow states can be found in the SI, sections S2.1 and S2.2 respectively. In the velocity field plots (top panels), colors denote flow speed and arrows denote flow direction. In the order parameter field plots (bottom panels), colors denote the scalar order parameter and lines denote the directors.

Fig. 17: Flow states for extensile fluids in an annular channel with \(\ell=0.1\) and \(R/W=1\). The Couette states at \(\omega=0\) are states of zero flow and zero order.

Fig. 19 shows the active component of the average flow rate (defined, as before, as the average flow rate of the total flow minus the average flow rate of the \(\alpha=0\) case) for the various flow states we studied in the annular channel. For the case of zero applied shear (\(\omega\tau=0\)), there are positive and negative spontaneous flows when the activity exceeds a critical value. But for \(\omega\tau\neq 0\), the flow rate has no bifurcation: it continuously increases from zero as the activity increases from zero. Another striking difference with the straight channel is that for nonzero rotation rates of the inner curved wall, we only observe positive spontaneous flows (Fig. 19), even when we attempt to reverse the direction of flow by altering the initial conditions of the directors. This rectification arises because in the curved channel, the non-uniform alignment of the directors arising from the applied shear leads to spontaneous flow with the same rotation sense as the rotating wall. Furthermore, since the wave translation direction corresponds to the direction of the spontaneous component of the flow, the oscillatory flow patterns all translate in the \(+\theta\) direction when \(\omega\tau\neq 0\). Another difference from the straight channel case is that the active contribution to the average flow rate does not disappear in the annular channel for larger shear rate.
Fig. 20 shows the torque exerted by the active fluid on the inner boundary, normalized by the wall torque in the passive case. The wall torque depends on the activity much as the wall stress does in the straight channel case; in particular, the normalized wall torque decreases with increasing activity in the Couette-like 1 flow state.
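For reference, the passive normalization can be evaluated in closed form from eqn (50): for \(v_{\theta}=Ar+B/r\) the only off-diagonal stress is \(\sigma_{r\theta}=-2\mu B/r^{2}\), so the magnitude of the torque per unit depth on the inner wall is \(M_{0}=2\pi R^{2}|\sigma_{r\theta}(R)|=4\pi\mu|B|\). A minimal sketch (the viscosity and parameter values are illustrative):

```python
import numpy as np

def passive_inner_torque(mu, omega, R, W):
    """Magnitude of the torque per unit depth on the inner wall for the
    Newtonian Couette flow of eqn (50): B = omega R^2 (R+W)^2 / ((2R+W) W),
    sigma_rtheta = -2 mu B / r^2, so |M0| = 2 pi R^2 |sigma_rtheta(R)| = 4 pi mu |B|."""
    B = omega * R**2 * (R + W) ** 2 / ((2 * R + W) * W)
    return 4.0 * np.pi * mu * abs(B)

M0 = passive_inner_torque(mu=1.0, omega=0.2, R=1.0, W=1.0)  # normalization for Fig. 20
```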
The change in slope of the active flow rate vs. \(\alpha\) curve in Fig. 19, or of the normalized wall torque vs. \(\alpha\) curve in Fig. 20, indicates the transition from the Couette-like flow state to the oscillatory flow state. As noted earlier, our numerical approach sometimes finds oscillatory patterns of different wavelengths for the same values of the parameters, which would likely result in values of the volumetric flow rate and wall torque different from those shown in Fig. 19 and Fig. 20. Some of the variation in the normalized torque in the oscillatory and dancing flow regimes in Fig. 20 arises from abrupt changes in wavelength as \(\alpha\) was varied.
We compare the wall torque and wall stress of the annular and straight channels in Fig. 21 to show the effect of curvature on the wall stress as a function of external shear in the range \(0<\alpha\leq 1\). The normalized wall torque and wall stress are close to each other for small external shear rates and both increase with the external shear rate, but the increase is larger in the annular channel, i.e., the normalized wall torque stays closer in value to the passive case. Thus, the curvature of the channel reduces the effect of activity on the wall with increasing external shear.
### Contractile fluids
We studied contractile active fluids in a two-dimensional annulus with the parameters in the range \(-16\leq\alpha<0\) and \(0<\omega\tau\leq 2.4\). When \(\alpha<0\), we only found Couette-like states with no radial component of the flow. Since contractile activity is effectively shear thickening, [37] the effect of the activity is always to reduce the flow relative to Newtonian Couette flow (Fig. 22). As in the extensile case, the direction that the active component of the flow travels around the annulus is independent of the initial conditions, but unlike the extensile case, the active component of flow is negative (against the direction imposed by the externally applied shear). The magnitude of the negative flow is always less than the magnitude of the externally imposed Couette flow; therefore, the total flow never reverses. In this sense, the contractile annular flows are Couette-like 1 states rather than Couette-like 2 states. In accord with the effective shear thickening of contractile active fluids, the total torque on the inner boundary is always greater than the hydrodynamic torque in Couette flow (SI Fig. S4).
## 6 Annular channel: Linear analysis of curvature at low shear rate
In this section we study the limit in which the flow in the annular channel is slow enough that the induced order is small, \(S\ll 1\). For slow enough flow, it is valid to neglect the nonlinear terms in eqn (4). On the one hand, this analysis offers a theoretical explanation of some of the observations in Sec. 5; on the other hand, it gives some insight into the role of the curvature of the boundaries, which we did not vary in the previous section. For convenience, here we restate the modified Stokes equation (eqn (3)) in dimensionless form, along with the dimensionless form of the steady linearized equation for \(\mathbf{Q}\):
\[0 = -\boldsymbol{\nabla}p+\nabla^{2}\mathbf{v}-\frac{\alpha}{\lambda}\nabla\cdot\mathbf{Q}, \tag{51}\]
\[0 = -\mathbf{Q}+\ell^{2}\nabla^{2}\mathbf{Q}+2\lambda\mathbf{E}. \tag{52}\]
As in our numerical calculations, we use the width \(W\) of the channel as the unit of length. Since we seek to study the Couette-like flow state, we assume \(\mathbf{v}=v_{\theta}(r)\hat{\boldsymbol{\theta}}\). Note that this flow is incompressible. We also suppose that \(p\), \(Q_{rr}\), and \(Q_{r\theta}\) are functions of radius only. With these assumptions, the \(rr\) component of eqn (52) is homogeneous, which together with the Neumann boundary conditions \(\partial_{r}Q_{rr}=0\) at \(r=R/W\) and \(r=R/W+1\) implies \(Q_{rr}=0\). Since the radial component of the modified Stokes equation (eqn (51)) with \(Q_{rr}=0\) implies that the pressure gradient vanishes, we take \(p=0\).

Fig. 19: Active component of the volumetric flow rate for extensile fluids in dimensionless units of \(v_{\theta}\tau/W\) as a function of activity in the annular channel. The symbols denote the flow states and the colors denote the externally imposed dimensionless shear rate. For the dancing flows, the square symbols show the average value of the oscillation of the volumetric flow rate.

Fig. 20: The torque imposed by the active flow of an extensile fluid on the rotating disk normalized by the passive torque. For the dancing flows, the square symbols denote the average value of the oscillation of the torque.
To solve for the velocity and order parameter fields, we take the divergence of eqn (52), use \(\nabla\cdot\mathbf{E}=\frac{1}{2}\nabla^{2}\mathbf{v}\) for incompressible flow, and eliminate \(\nabla\cdot\mathbf{Q}=(\lambda/\alpha)\nabla^{2}\mathbf{v}\) using eqn (51) with \(p=0\) to find
\[\nabla^{2}\left(\nabla^{2}-\frac{1}{\xi^{2}}\right)\mathbf{v}=0, \tag{53}\]
where \(\xi^{2}=\ell^{2}/(1-\alpha)\). To focus our attention on the Couette-like states only, we restrict our analysis to \(\alpha<1\) in this section. Thus,
\[v_{\theta}=c_{1}r+c_{2}/r+c_{3}I_{1}(r/\xi)+c_{4}K_{1}(r/\xi), \tag{54}\]
where the \(c_{i}\) are constants to be determined, and \(I_{1}(x)\) and \(K_{1}(x)\) are modified Bessel functions. Inserting the velocity field eqn (54) into the \(r\theta\) component of eqn (52),
\[0=\ell^{2}\left(Q_{r\theta}^{\prime\prime}+\frac{1}{r}Q_{r\theta}^{\prime}- \frac{4}{r^{2}}Q_{r\theta}\right)-Q_{r\theta}+\lambda\left(v_{\theta}^{\prime }-\frac{v_{\theta}}{r}\right), \tag{55}\]
yields
\[\ell^{2}\left(Q_{r\theta}^{\prime\prime}+\frac{1}{r}Q_{r\theta}^{\prime}-\frac{4}{r^{2}}Q_{r\theta}\right)-Q_{r\theta}=\frac{\lambda}{\xi}\left[\frac{2c_{2}\xi}{r^{2}}-c_{3}I_{2}(r/\xi)+c_{4}K_{2}(r/\xi)\right], \tag{56}\]
which has general solution
\[Q_{r\theta}=c_{5}I_{2}(r/\ell)+c_{6}K_{2}(r/\ell)-\frac{2c_{2}\lambda}{r^{2}}-c_{3}\frac{\lambda\xi}{\ell^{2}-\xi^{2}}I_{2}(r/\xi)+c_{4}\frac{\lambda\xi}{\ell^{2}-\xi^{2}}K_{2}(r/\xi). \tag{57}\]
Fig. 21: Comparison of the normalized torque on the inner boundary of the annulus as a function of the dimensionless frequency of rotation of the inner disk and the normalized wall stress on the bottom wall of the straight channel as a function of the dimensionless shear rate.

Fig. 22: Velocity profiles in dimensionless units for Couette flow (\(\alpha=0\)) and the contractile Couette-like 1 (\(\alpha<0\)) flow states, for \(\omega\tau=2.4\).

Fig. 23: Analytical results for the flow velocity of the Couette-like 1 state in an annular channel with weak order for \(\ell=0.1\), \(\lambda=1\), and various values of \(R/W\) for (a) a contractile fluid with \(\alpha=-0.9\) and (b) an extensile fluid with \(\alpha=0.99\). The dashed lines in panel (a) show the results for Newtonian flows (\(\alpha=0\)) for comparison.

Inserting this solution into the modified Stokes equation [eqn (51)] shows that \(c_{5}=c_{6}=0\). The rest of the integration constants are determined by the no-slip boundary conditions on the (dimensionless) velocity, \(v_{\theta}(R/W)=\omega\tau R/W\) and \(v_{\theta}(R/W+1)=0\), and the Neumann boundary conditions on the order parameter field, \(Q_{r\theta}^{\prime}(R/W)=Q_{r\theta}^{\prime}(R/W+1)=0\). The complete formulas are too complicated to display, but we plot the velocity in Fig. 23 for various ratios of \(R/W\) for a representative contractile case (top panel) and extensile case (bottom panel). In both cases, the flow velocity approaches a linear profile as \(R/W\) becomes large, as expected, since in that limit the curvature of the annulus becomes unimportant and the flow approaches simple shear flow. For the contractile case, Fig. 23a, the velocity profile is close to the Newtonian result, with the agreement between the two cases improving as \(R/W\) increases. For the extensile case, the velocity curves for different values of \(R/W\) draw closer to each other as \(\alpha\) increases, becoming very close to the linear profile around \(\alpha=0.885\). Above this value of activity, the order of the curves reverses, with the linear curve lying below all the other curves. When \(\alpha\) gets very close to unity and \(R/W\) is small, the maximum velocity is not at the wall, i.e., the flow continuously changes from the Couette-like 1 state to the Couette-like 2 state [Fig. 23b]. Fig. 24 shows the total torque \(M=2\pi R^{2}\sigma_{r\theta}\) on the circle \(r=R\) as a function of \(R/W\). Note that the limit of a straight channel is almost attained once \(R\) becomes comparable to \(W\). The torque for a contractile fluid is higher than the passive value since contractile activity effectively increases the shear viscosity. Likewise, the torque for an extensile fluid is less than the passive value since extensile fluids are effectively shear thinning. The torque approaches the passive value when \(R\ll W\). Note that since we use \(W\) as the unit of length, the limit \(R\ll W\) corresponds to an inner cylinder of vanishing radius. When \(R<\ell\), the term \(\ell^{2}\nabla^{2}\mathbf{Q}\) dominates eqn (52), and therefore \(\mathbf{Q}\to 0\). In this limit, the active force vanishes, and the flow is Couette flow.
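Although the closed-form coefficients are unwieldy, they are easy to obtain numerically: the four boundary conditions applied to eqns (54) and (57) form a \(4\times 4\) linear system for \(c_{1},\ldots,c_{4}\). The sketch below (parameter values are illustrative; the finite-difference derivative is a convenience, and scaled Bessel functions would improve conditioning at small \(\xi\)) reproduces profiles like those in Fig. 23.

```python
import numpy as np
from scipy.special import iv, kv

ell, lam, alpha, RW, omega_tau = 0.1, 1.0, 0.5, 1.0, 0.2   # illustrative values
xi = np.sqrt(ell**2 / (1.0 - alpha))

def v_basis(r):
    """Columns multiplying c1..c4 in eqn (54)."""
    return np.array([r, 1.0 / r, iv(1, r / xi), kv(1, r / xi)])

def Q_basis(r):
    """Columns multiplying c1..c4 in eqn (57), with c5 = c6 = 0."""
    pref = lam * xi / (ell**2 - xi**2)
    return np.array([0.0, -2.0 * lam / r**2, -pref * iv(2, r / xi), pref * kv(2, r / xi)])

def Qprime_basis(r, h=1e-7):
    return (Q_basis(r + h) - Q_basis(r - h)) / (2.0 * h)

# rows: v(R/W) = omega*tau*R/W, v(R/W+1) = 0, Q'(R/W) = 0, Q'(R/W+1) = 0
A = np.vstack([v_basis(RW), v_basis(RW + 1.0), Qprime_basis(RW), Qprime_basis(RW + 1.0)])
b = np.array([omega_tau * RW, 0.0, 0.0, 0.0])
c1, c2, c3, c4 = np.linalg.solve(A, b)

r = np.linspace(RW, RW + 1.0, 200)
v_theta = c1 * r + c2 / r + c3 * iv(1, r / xi) + c4 * kv(1, r / xi)
```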
It is informative to find the velocity and the order parameter field in the limit \(R\gg W\), where the curvature of the annulus is small. Rather than taking the limit of the formulas used to make Figs. 23 and 24, it is simplest to solve the equations directly using regular perturbation theory in powers of \(W/R\). Reinstating the dimensions and writing \(r=R+y\), we find
\[v_{\theta} = \omega R\left(1-\frac{y}{W}\right)-\frac{\omega y}{2}\left(1-\frac{y}{W}\right)+\frac{2\alpha\ell^{2}\omega W}{1-\alpha}\left[1-\frac{\cosh\left[\left(1-2y/W\right)/(2\xi)\right]}{\cosh[1/(2\xi)]}\right], \tag{58}\]
\[Q_{r\theta} = -\frac{\lambda\omega\tau R}{W}+\frac{\lambda\omega\tau}{2}\left[\frac{4y}{W}-3+4\xi\,\frac{\sinh\left[\left(1-2y/W\right)/(2\xi)\right]}{\cosh[1/(2\xi)]}\right]. \tag{59}\]
The first terms of eqns (58) and (59) correspond to the velocity and order parameter field, respectively, of a straight channel with an infinitesimal imposed shear rate \(\dot{\gamma}=\omega R/W\). The remaining terms are the corrections due to the nonzero curvature of the annular channel. Unlike our weakly nonlinear analysis of the active flow in the straight channel (Sec. 4.3), which had spontaneous flow in either direction, here we see that the component of flow driven by the activity has a definite sign, and is in the same direction as the externally imposed flow for extensile fluids.
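As a quick numerical check on eqn (58) (a sketch with illustrative parameter values), the curvature correction vanishes at both walls, so the no-slip conditions \(v_{\theta}(0)=\omega R\) and \(v_{\theta}(W)=0\) hold:

```python
import numpy as np

ell, alpha = 0.1, 0.5                       # illustrative values, alpha < 1
xi = np.sqrt(ell**2 / (1.0 - alpha))
omega, R, W = 1.0, 10.0, 1.0                # R >> W limit

def v_theta(y):
    """Eqn (58): Couette term, curvature correction, and active correction."""
    base = omega * R * (1.0 - y / W) - 0.5 * omega * y * (1.0 - y / W)
    u = (1.0 - 2.0 * y / W) / (2.0 * xi)
    active = (2.0 * alpha * ell**2 * omega * W / (1.0 - alpha)
              * (1.0 - np.cosh(u) / np.cosh(1.0 / (2.0 * xi))))
    return base + active

print(v_theta(0.0) - omega * R, v_theta(W))  # both should be (numerically) zero
```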
## 7 Summary
We investigated the stability and flow states of an active gel confined in a channel subject to an external shear. An externally imposed shear flow can stabilize an extensile fluid that would be unstable to spontaneous flow when there is no external shear flow, and destabilize a contractile fluid that would be stable against spontaneous flow when there is no external shear flow. In accordance with previous simulations [13, 25] carried out in the absence of external shear, we find three kinds of nonlinear flow states for extensile fluids in the range of parameters we study: unidirectional flows, oscillatory flows, and dancing flows. The unidirectional flow observed in the straight channel can have a spontaneous active component which is either positive--in the same direction as the moving wall--or negative--in the opposite direction of the moving wall. The oscillatory flow states also have two possible directions for the spontaneous active component when the externally imposed shear rate is small. For greater imposed shear rates, the spontaneous flow direction is the same as that of the moving wall. For contractile gels, we only observe unidirectional flow states in the range of parameters that we studied. These unidirectional flows can have positive or negative spontaneous active components. In the analysis of the wall stress caused by the active flow on the moving boundary, the extensile flow assists the motion of the moving boundary, while the contractile flow resists it. Moreover, the external shear flow can weaken this effect of activity on the motion.
Our analysis of the curvature shows that there are three main differences between the flow states for the straight channel and the annular channel. First, in the annular channel there is no critical activity below which the system is stable against spontaneous flow when a nonzero external shear is applied; any nonzero activity produces an active flow component. Second, we only observe one direction of spontaneous flow: positive for extensile gels, but negative for contractile gels. Last, the average volumetric flow rate in the annular case oscillates with time for the dancing flow state, while it is steady in the straight channel. Also, we find that increasing the curvature of the streamlines weakens the dependence of the wall stress on activity.

Fig. 24: Dependence of the normalized wall torque on \(R/W\) for different \(\alpha\) from the linear analysis at low shear with \(\ell=0.1\) and \(\lambda=1\). The colors denote different activities.
Our work suggests several directions for future study. An obvious extension is to work in three dimensions, allowing both the directors and velocity vectors to point out of the plane and vary in both directions across a channel. Also, it would be natural to study the effect of aligning flows induced by a pressure gradient rather than a moving wall, since Poiseuille-like flow may be easier to study experimentally.
## Acknowledgements
This work was supported in part by the National Science Foundation through Grant Nos. MRSEC DMR-2011846, CBET-2227361, and PHY-1748958. We are grateful to Jesse Ault, Kenny Breuer, Guillaume Duclos, Hamid Karani, Jasper Chen, Alexander Morozov, and Pranay Sampat for helpful discussions. We also thank the Center for Computation and Visualization (CCV) at Brown University for use of high performance computing facilities.
## Notes and references
|
2308.13253 | Impact of hot exozodiacal dust on the polarimetric analysis of close-in
exoplanets | Hot exozodiacal dust (HEZD) found around main-sequence stars through
interferometric observations in the photometric bands H to L is located close
to the dust sublimation radius, potentially at orbital radii comparable to
those of close-in exoplanets. Consequently, HEZD has a potential influence on
the analysis of the scattered-light polarization of close-in exoplanets and
vice versa. We analyze the impact of HEZD on the polarimetric characterization
of close-in exoplanets. This study is motivated in particular by the recently
proven feasibility of exoplanet polarimetry. Applying the 3D Monte Carlo
radiative transfer code POLARIS in an extended and optimized version for
radiative transfer in exoplanetary atmospheres and an analytical tool for
modeling the HEZD, we simulated and compared the polarization characteristics
of the wavelength-dependent scattered-light polarization of HEZD and close-in
exoplanets. The varied parameters are the planetary phase angle
($0^\circ-180^\circ$), the dust grain radius ($0.02\ \mu$m $- \ 10\ \mu$m), the
HEZD mass ($10^{-10}$$\rm{M}_{\oplus}$ $-\ 10^{-8}$$\rm{M}_{\oplus}$), the
orbital inclination ($0^\circ-90^\circ$), the composition of the planetary
atmosphere (Mie and Rayleigh scattering atmosphere), the orbital radius of the
HEZD ($0.02$ au $-\ 0.4$ au), and the planetary orbital radius ($0.01$ au $-\
0.05$ au). The dust grain radius has the strongest influence on the
polarimetric analysis due to its significant impact on the wavelength-dependent
polarization characteristics and the total order of magnitude of the
scattered-light polarization. In certain scenarios, the scattered-light
polarization of the HEZD even exceeds that of the close-in exoplanet. | Kevin Ollmann, Sebastian Wolf, Moritz Lietzow, Thomas A. Stuber | 2023-08-25T08:59:04Z | http://arxiv.org/abs/2308.13253v1 | # Impact of hot exozodiacal dust on the polarimetric
###### Abstract
Context: Hot exozodiacal dust (HEZD) found around main-sequence stars through interferometric observations in the photometric bands \(H\) to \(L\) is located close to the dust sublimation radius, potentially at orbital radii comparable to those of close-in exoplanets. Consequently, HEZD has a potential influence on the analysis of the scattered-light polarization of close-in exoplanets and vice versa.

Aims: We analyze the impact of HEZD on the polarimetric characterization of close-in exoplanets. This study is motivated in particular by the recently proven feasibility of exoplanet polarimetry.

Methods: Applying the 3D Monte Carlo radiative transfer code POLARIS in an extended and optimized version for radiative transfer in exoplanetary atmospheres and an analytical tool for modeling the HEZD, we simulated and compared the polarization characteristics of the wavelength-dependent scattered-light polarization of HEZD and close-in exoplanets. As a starting point for our analysis, we defined a reference model consisting of a close-in exoplanet with a scattered-light polarization consistent with the upper limit determined for WASP-18b, and a HEZD consistent with the near-infrared excess detected for HD 22484 (10 Tau).

Results: The varied parameters are the planetary phase angle (\(0^{\circ}-180^{\circ}\)), the dust grain radius (\(0.02\,\mu\mathrm{m}-10\,\mu\mathrm{m}\)), the HEZD mass (\(10^{-10}\mathrm{M}_{\oplus}-10^{-8}\mathrm{M}_{\oplus}\)), the orbital inclination (\(0^{\circ}-90^{\circ}\)), the composition of the planetary atmosphere (Mie and Rayleigh scattering atmosphere), the orbital radius of the HEZD (\(0.02\,\mathrm{au}-0.4\,\mathrm{au}\)), and the planetary orbital radius (\(0.01\,\mathrm{au}-0.05\,\mathrm{au}\)). The dust grain radius has the strongest influence on the polarimetric analysis due to its significant impact on the wavelength-dependent polarization characteristics and the total order of magnitude of the scattered-light polarization. In certain scenarios, the scattered-light polarization of the HEZD even exceeds that of the close-in exoplanet, for example for a dust grain radius of \(0.1\,\mu\mathrm{m}\), a HEZD mass of \(8\times 10^{-10}\mathrm{M}_{\oplus}\), an orbital radius of the HEZD of \(0.04\,\mathrm{au}\), and an orbital inclination of \(90^{\circ}\).

Conclusions: The presence of HEZD potentially has a significant impact on the polarimetric investigations of close-in exoplanets. Furthermore, interferometric observations are required to better constrain the parameter space for HEZD and thus the possible resulting scattered-light polarization.
## 1 Introduction
A strong near-infrared (NIR) excess has been detected at the 1% level around more than two dozen main-sequence stars by interferometric observations in the photometric bands \(H\) to \(L\) (e.g., Absil et al. 2006, 2013; Ertel et al. 2014, 2016). This excess is attributed to submicrometer dust grains in the close vicinity of stars (e.g., Di Folco et al. 2007) and is often referred to as hot exozodiacal dust (hereafter: HEZD). A simple but so far sufficient model for fitting the observed NIR excesses is an optically thin, geometrically narrow dust ring located close to the dust sublimation radius. Graphite grains with radii often smaller than their blowout size, a steep particle size distribution, and low HEZD masses around \(10^{-10}\) M\({}_{\oplus}\) to \(10^{-8}\) M\({}_{\oplus}\) are suitable for fitting the observed NIR excess (e.g., Kirchschlager et al. 2018). Because of the high contrast and the small angular distance between this circumstellar dust and the star, observations of HEZD have so far only been possible through NIR and mid-infrared (MIR) long-baseline and nulling interferometry (e.g., Absil et al. 2021). Moreover, no clear correlations between the NIR excess and the MIR or far-infrared (FIR) excess tracing dust farther out have been observed so far (Millan-Gabet et al. 2011; Mennesson et al. 2014). In addition, the contribution of silicate grains is negligible because the weak MIR excess is inconsistent with the otherwise expected strong \(N\) band feature (Akeson et al. 2009; Kirchschlager et al. 2017).
The mechanisms that sustain HEZD at the observed level are currently unclear because these small grains are thought to sublimate rapidly or be blown out of the system (Backman and Paresce 1993; Wyatt et al. 2007; Lebreton et al. 2013). The replenishment of dust grains in situ through a steady-state collisional cascade close to the star can most likely also be excluded because the dust lifetime at these distances is short (e.g., Wyatt 2008; Lebreton et al. 2013). Several alternative scenarios to explain the presence of HEZD, for instance magnetic trapping or cometary supply, have been discussed, but none of these scenarios, individually or in combination, provides a comprehensive explanation of how the dust is transported to the inner regions, how it survives there, or how it is replenished efficiently (Kobayashi et al. 2009; Faramaz et al. 2017; Sezestre et al. 2019; Stamm et al. 2019; Kimura et al. 2020; Rigley and Wyatt 2020; Pearce et al. 2020, 2022).
This HEZD may offer a way to probe the inner regions of extrasolar planetary systems because its presence might constrain
the architecture of the planetary system (e.g., Kral et al. 2017). Direct imaging of those close-in planets is not yet possible because the faint planetary signal is lost in the bright stellar glare. Thus, polarimetry has become a useful tool in recent years to potentially distinguish and characterize the weak, polarized signal that is reflected by the planet from the direct stellar radiation (e.g., Bott et al. 2016). Close-in planets such as hot Jupiters are well suited for polarimetric investigations considering their large radii and the proximity to their host star and thus the high fraction of radiation that is scattered off their atmosphere (Seager et al. 2000; Stam et al. 2004). High-precision polarimeters, such as the ground-based HIgh-Precision Polarimetric Instruments (HIPPI, HIPPI-2; Bailey et al. 2015, 2020) and the POlarimeter at Lick for Inclination Studies of Hot jupiters 2 (POLISH2; Wiktorowicz and Nofi 2015), allow measuring the polarized flux of several hot Jupiters at the parts per million (ppm) level, for example, within the "Wide Angle Search for Planets" (WASP) exoplanet detection program for the hot Jupiter WASP-18b (Bott et al. 2018; Bailey et al. 2021).
At the same time the possible presence of small hot dust grains potentially interferes with the polarimetric characterization of planets in the habitable zone (Agol, 2007; Beckwith, 2008; Roberge et al., 2012; Stapelfeldt et al., 2021). Therefore, HEZD has to be considered in the attempt to characterize exoplanets via their scattered-light polarization.
We investigate the influence of HEZD on the analysis of the wavelength-dependent scattered-light polarization of close-in exoplanets. For this purpose we examine characteristic features of the individual signatures of a close-in exoplanet and a HEZD in the wavelength-dependent polarization signal in the wavelength range from 550 nm to 4 \(\mu\)m, covering the photometric bands and the respective representative wavelengths \(V\) (550 nm), \(R\) (650 nm), \(Z\) (878 nm), \(I\) (1.00 \(\mu\)m), \(J\) (1.25 \(\mu\)m), \(H\) (1.65 \(\mu\)m), \(K\) (2.22 \(\mu\)m) and \(L\) (3.45 \(\mu\)m).
This article is organized as follows: In Sect. 2 we give a brief overview about the assumptions based on which we simulate the scattered-light polarization. Moreover, we describe the reference model that provides the basis for the analysis of the influence of selected physical parameters on the polarization characteristics of the close-in planet and the HEZD. In Sect. 3 we study the impact of selected model parameters on the wavelength-dependent polarization degree (Sect. 3.1) and on the polarized flux intensity (Sect. 3.2) of the HEZD and the close-in planet. Moreover, the polarization degrees are discussed in the context of the intrinsic stellar polarization, especially for observed active stars. Our findings are summarized in Sect. 4.
## 2 Reference model
In Sect. 2.1 we define the polarimetric quantities that we use for the qualitative description of the linear polarization. Subsequently, we define a reference model consisting of three components: a star (Sect. 2.2), a close-in planet (Sect. 2.3), and a HEZD (Sect. 2.4).
### Polarimetric quantities
We defined the HEZD model as an optically thin, geometrically narrow ring. Therefore, considering only single-scattering events, we calculated the net scattered-light polarization analytically by applying the Stokes formalism. To simulate the radiative transfer in the generally optically thick planetary atmosphere we used the numerical 3D Monte Carlo solver POLARIS (Reissl et al. 2016; Lietzow et al. 2021). As the polarization mechanism in this study, we considered scattering of unpolarized stellar radiation by the atmospheric gas as well as by spherical cloud and dust particles. For a more detailed model description see Appendix A.
The observable polarized radiation of the considered system results from the superposition of the direct stellar flux and the thermally reemitted and scattered radiation of the HEZD and the planet. The unpolarized stellar flux dominates the net flux and thus the level of the observable polarization. For practical purposes in the context of our subsequent studies, we thus defined the observable wavelength-dependent linear polarization degree once including the stellar flux (\(P_{\star}\)) and once without (\(P\)):
\[P_{\star}(\lambda)=\frac{F_{\mathrm{pol}}(\lambda)}{F_{\mathrm{sca}}(\lambda)+ F_{\mathrm{therm}}(\lambda)+F_{\star}(\lambda)}, \tag{1}\]
\[P(\lambda)=\frac{F_{\mathrm{pol}}(\lambda)}{F_{\mathrm{sca}}(\lambda)+F_{ \mathrm{therm}}(\lambda)}, \tag{2}\]
denoting the flux of the linearly polarized radiation as \(F_{\mathrm{pol}}\), the scattered flux as \(F_{\mathrm{sca}}\), the thermally reemitted flux as \(F_{\mathrm{therm}}\) and the stellar flux as \(F_{\star}\) (see Appendix A for the individual definitions).
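As a minimal numerical illustration of eqns (1) and (2) (toy flux values only, not model output), the two normalizations differ by the large unpolarized stellar flux in the denominator:

```python
def polarization_degrees(F_pol, F_sca, F_therm, F_star):
    """Linear polarization degree with (eqn (1)) and without (eqn (2))
    the direct stellar flux in the normalization."""
    P_star = F_pol / (F_sca + F_therm + F_star)
    P = F_pol / (F_sca + F_therm)
    return P_star, P

# toy numbers: a weak polarized signal against a dominant stellar flux
print(polarization_degrees(F_pol=1.0, F_sca=30.0, F_therm=10.0, F_star=1.0e5))
```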
### Reference star
Motivated by the presumable detection of polarized radiation in the WASP-18 system by Bott et al. (2018), we defined a star of spectral type F6 with radius \(R_{\star}=1.24\ R_{\odot}\), a stellar effective temperature of \(T_{\star}=6350\ \mathrm{K}\), and a distance to the observer of \(d_{\mathrm{obs}}=126.4\ \mathrm{pc}\); the same stellar parameters define the illuminating reference source of the system. While we neglected the possible intrinsic stellar polarization in the case of this reference star in our simulations, we compare the polarization degrees of the HEZD and the close-in planet with the measured intrinsic stellar polarization of the Sun and selected FGK dwarfs in the wavelength range from 430 nm to 600 nm in Sect. 3.1.4.
### Reference close-in planet
The properties of our reference close-in planet are motivated by polarimetric measurements of the hot Jupiter WASP-18b, for which an upper boundary of the polarization degree (40 ppm) has most likely been detected (Bott et al. 2018). Thus, we chose a planetary radius of \(R_{\mathrm{planet}}=1.16\ R_{\mathrm{Jup}}\), an orbital radius of \(d_{\mathrm{planet}}=0.02\ \mathrm{au}\), and an equilibrium temperature of \(T_{\mathrm{planet}}=2411\ \mathrm{K}\) (Sheppard et al. 2017). This choice is further supported by the fact that the observed polarization degree in the optical wavelength range of hot Jupiters is most likely of the same order of magnitude as that of WASP-18b (Bailey et al. 2021).
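As a consistency check on these numbers (our own sketch, not part of the original analysis), the adopted equilibrium temperature follows from the standard zero-albedo, full-redistribution estimate \(T_{\mathrm{planet}}=T_{\star}\sqrt{R_{\star}/(2d_{\mathrm{planet}})}\):

```python
import numpy as np

R_sun, au = 6.957e8, 1.495978707e11            # meters
T_star, R_star, d_planet = 6350.0, 1.24 * R_sun, 0.02 * au

# zero-albedo equilibrium temperature with full heat redistribution
T_eq = T_star * np.sqrt(R_star / (2.0 * d_planet))
print(round(T_eq))                             # about 2411 K, matching the adopted value
```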
Because currently available data for hot Jupiters are not sufficient to derive a unique model for their atmospheres (e.g., Pluriel 2023), we neglected the potentially high horizontal temperature contrasts and the mixtures of species in ionic, atomic, molecular, and condensate phases in atmospheres of hot Jupiters (Helling et al. 2019; Fortney et al. 2021), without addressing any nonequilibrium processes that might affect the atmospheric composition and chemistry. The chemical composition of the
atmosphere of this reference planet was thus chosen to be as simple as possible to obtain qualitative insights into the polarization characteristics.
Because Rayleigh scattering from small cloud particles is anticipated to be the dominant source of polarization for hot Jupiters (e.g., Bailey et al. 2018; Bott et al. 2018), we considered an atmosphere consisting of molecular hydrogen (H\({}_{2}\)) and optically thin forsterite cloud particles (Mg\({}_{2}\)SiO\({}_{4}\)) with an effective radius of \(r_{\rm eff}=0.05\)\(\mu\)m and an effective variance of \(v_{\rm eff}=0.01\). A more detailed description of the pressure structure and cloud formation is given in Appendix A.4. Because the polarization degree, the quantity that potentially best distinguishes the individual contributions of the planet and the HEZD to the net polarization degree of the system, is expected to vary most strongly along the orbit when the orbital plane is viewed edge-on, we considered a transiting planet (orbital inclination of \(i=90^{\circ}\)).
The simulated maximum polarization degree (about 14 ppm) of the (reference) planet-star system at a wavelength of 470 nm is reached at a planetary phase angle of 67\({}^{\circ}\), which is approximately where the observed polarization peaks as well. This consistency supports the plausibility of the chosen properties of the reference model for the close-in planet.
### Reference hot exozodiacal dust
The strong NIR excess of HEZD that was observed for a wide range of stellar spectral types can be modeled by an optically thin narrow dust ring with a total mass (\(M_{\rm dust}\)) in a range of about \(0.03\times 10^{-9}\rm M_{\oplus}-10.2\times 10^{-9}\rm M_{\oplus}\), consisting of graphite grains with radii (\(a\)) between 0.01 \(\mu\)m and 1.0 \(\mu\)m. The dust is typically located at distances of 0.02 au \(-\) 0.4 au from the central star (\(d_{\rm dust}\)), corresponding to dust temperatures (\(T_{\rm dust}\)) between 1000 K and 2800 K (Kirchschlager et al. 2017, 2018, 2020).
We defined the reference HEZD model to fit the NIR excess of HD 22484 (10 Tau) with the following appropriate parameter configuration (Kirchschlager et al. 2018): The dust grains were assumed to consist of graphite, to be compact spheres with a single radius of \(a=0.71\)\(\mu\)m, and to be distributed within a ring with a radius of \(d_{\rm dust}=0.04\) au centered on the star. The corresponding dust temperature was \(T_{\rm dust}=1780\) K, which is close to the sublimation temperature of graphite (2000 K; Lamoreaux et al. 1987), and the HEZD mass was \(M_{\rm dust}=1.23\times 10^{-9}\rm M_{\oplus}\). Because the observed NIR excesses of HEZD can also partly be modeled with small dust grains whose temperatures are above this sublimation temperature, the aspect of sublimation was neglected in the context of this study. In addition, the orbital inclination was set to \(i=90^{\circ}\) (edge-on). In accordance with the observational constraints, we used a density of \(\rho=2.24\) g cm\({}^{-3}\) for the graphite mixture (Weingartner & Draine 2001), applying the one-third to two-thirds dust grain orientation weighting for the calculation of the optical properties (Draine & Malhotra 1993).
According to Absil et al. (2013), a HEZD-star flux ratio of 1.21% was measured at a wavelength of 2.13 \(\mu\)m for HD 22484. At the same wavelength, our reference model reproduces about 87% of this measured HEZD-star flux ratio, which shows that our model is plausible. An overview of the parameter values of the reference model is compiled in Table 1.
## 3 Results
In Sect. 3.1 we compare the polarization characteristics between the net polarization degree of the radiation scattered by the close-in exoplanet (hereafter: planetary polarization) and the net polarization degree of the radiation scattered by the HEZD (hereafter: dust polarization) for various representative model configurations. We consider the wavelength-dependent polarization characteristics of a HEZD and a planet for selected parameters to identify characteristic features. We vary the dust grain radius and orbital inclination in Sect. 3.1.2 and the composition of the atmosphere and planetary phase angle in Sect. 3.1.3. Moreover, we compare dust and planetary polarization with existing measurements of intrinsic stellar polarization in Sect. 3.1.4.
In Sect. 3.2 we investigate a system consisting of a close-in planet and a HEZD to study the impact of HEZD on polarimetric investigations of close-in planets for selected wavelengths representative for the photometric bands \(V,K\) and \(L\). We vary the following parameters for a comparison of the resulting linear polarized fluxes: The dust grain radius and the HEZD mass (Sect. 3.2.1) and the composition of the planetary atmosphere and phase angle (Sect. 3.2.2). We also investigate the potential of analyzing the individual Stokes parameters \(Q\) and \(U\) (Sect. 3.2.3). Unless stated otherwise, we applied the parameter setup of the reference model (see Table 1). The discussion is conducted under the assumption of ideal observations without consideration of possible technical limitations.
### Wavelength-dependent polarization
The following discussion focuses on individual features of the wavelength-dependent polarization for a HEZD and a planet in the wavelength range from 550 nm to 4 \(\mu\)m. To derive conclusions about the polarization characteristics independent of the HEZD mass, the radius of the HEZD and planetary orbital radius, we consider the polarization as a function of the wavelength relative to the polarization at a wavelength of 550 nm that is representative for the photometric \(V\) band, in the vicinity of which previous polarimetric observations of close-in exoplanets or HEZD have been performed (e.g., Marshall et al. 2016; Bott et al. 2018).
#### 3.1.1 Characterization of wavelength-dependent polarization
In the case of the planet the polarization ratio \(P(\lambda)/P_{V}\) decreases steeply within the considered wavelength range (see Fig. 1). In the case of the HEZD the ratio decreases with minor fluctuations, but less steeply than that of the planet. The thermal flux \(F_{\rm therm}\) of the planet and the HEZD is higher in the NIR wavelength range than the scattered flux \(F_{\rm sca}\) (see Fig. 2), resulting in lower polarization ratios in the NIR wavelength range. The ratio of the planetary scattered flux to the total planetary flux is similar to that in observations from Krenn et al. (2023) for low geometric albedos of hot Jupiters.

Table 1: Parameters of the reference model consisting of the central star, the close-in planet, and the HEZD.

| | Parameter | Variable | Value |
| --- | --- | --- | --- |
| **Star** | Radius | \(R_{\star}\) | 1.24 \(R_{\odot}\) |
| | Effective temperature | \(T_{\star}\) | 6350 K |
| | Distance to observer | \(d_{\rm obs}\) | 126.4 pc |
| **Planet** | Orbital inclination | \(i\) | 90\({}^{\circ}\) |
| | Radius | \(R_{\rm planet}\) | 1.16 \(R_{\rm Jup}\) |
| | Atmospheric composition | | H\({}_{2}\), Mg\({}_{2}\)SiO\({}_{4}\) clouds |
| | Orbital radius | \(d_{\rm planet}\) | 0.02 au |
| | Phase angle | \(\alpha\) | 67\({}^{\circ}\) |
| | Equilibrium temperature | \(T_{\rm planet}\) | 2411 K |
| **Dust** | Grain radius | \(a\) | 0.71 \(\mu\)m |
| | HEZD mass | \(M_{\rm dust}\) | \(1.23\times 10^{-9}\rm M_{\oplus}\) |
| | Radius | \(d_{\rm dust}\) | 0.04 au |
| | Temperature | \(T_{\rm dust}\) | 1780 K |
The wavelength-dependent polarization characteristics result from the optical properties of dust grains and atmospheric particles. In the planetary atmosphere, Rayleigh scattering by H\({}_{2}\) molecules and small Mg\({}_{2}\)SiO\({}_{4}\) cloud particles determines the wavelength-dependent polarization, while it is Mie scattering in the case of the HEZD (for a detailed discussion of the influence of the dust grain radius on the polarization characteristics see Sect. 3.1.2). When the direct unpolarized stellar radiation is taken into account, the resulting polarization ratio \(P_{\star}(\lambda)/P_{\star\nu}\) is affected as well (see Fig. 1). As the stellar flux has its maximum in the optical wavelength range close to the \(V\) band, the polarization ratios show a flatter decline in the NIR wavelength range. The impact of central stars with spectral types A and G on the net wavelength-dependent polarization characteristics of the star-planet-HEZD system is given in Appendix B.1. We conclude that a distinction between a planet and a HEZD based on their wavelength-dependent polarization characteristics for the considered parameters is thus possible by comparing the ratios of polarization degree measurements in the optical and NIR wavelength range.
#### 3.1.2 Dust grain radius and inclination
Although the wavelength dependence of the NIR excess of a HEZD is best explained by the thermal reemission of submicrometer-sized dust grains (Kirchschlager et al. 2017, 2018, 2020), the lack of detections and of stringent constraints at longer wavelengths, in the MIR to millimeter range, has so far not allowed larger particles to be excluded (Stuber et al. 2023). For this reason, particle radii up to 10 \(\mu\)m are also considered in the following discussion of the impact of the dust grain radius, for a fixed HEZD mass, on the wavelength-dependent polarization characteristics. The impact of a dust grain size distribution on the wavelength-dependent polarization characteristics is given in Appendix B.2. We identify the following three trends (see Fig. 3):
Figure 1: Polarization as a function of wavelength relative to the polarization at a wavelength of 550 nm for a HEZD and a planet (reference model; Table 1), shown both with (\(P_{\star}(\lambda)/P_{\star,V}\)) and without (\(P(\lambda)/P_{V}\)) the stellar flux included in the net polarization degree. See Sect. 3.1.1 for details.
Figure 3: Dust polarization for different wavelengths and dust grain radii. Top: Dust polarization as a function of wavelength and dust grain radius for an orbital inclination of 90\({}^{\circ}\). Bottom: Dust polarization as a function of wavelength and dust grain radius relative to the polarization at a wavelength of 550 nm for an orbital inclination of 90\({}^{\circ}\). See Sect. 3.1.2 for details.
Figure 2: Relative contribution of the scattered flux of a HEZD and a planet to the net flux (scattered and thermally reemitted flux) of both components (reference model; Table 1). See Sect. 3.1.1 for details.
First, for dust grain radii smaller than 0.1 \(\mu\)m and wavelengths longer than 550 nm the polarization drops to below 1% (top plot of Fig. 3) because small dust grains have a higher temperature than larger dust grains at the same orbital distance from the central star. Their thermal reemission therefore dominates the net flux in the NIR wavelength range, which strongly reduces the resulting net polarization. Consequently, the contribution of scattered radiation is negligible for these small grains. Second, the highest polarization is about 7% at a wavelength of 550 nm for a dust grain radius of 0.1 \(\mu\)m, which has a strong impact on the polarimetric analysis of close-in planets in the presence of a HEZD (see Sect. 3.2). Third, at the maximum considered wavelength and dust grain radius (\(\lambda=4\)\(\mu\)m, \(a=10\)\(\mu\)m), the polarization ratio reaches its highest value of about 2.5 (bottom plot of Fig. 3) because the scattering efficiency increases at infrared wavelengths (larger dust grains are redder in their scattering color). We furthermore note that the dust polarization degree in the \(K\) band agrees well with the findings of Kirchschlager et al. (2017).
In Fig. 4 the dependence of the dust polarization on the orbital inclination of the HEZD ring is shown for dust grains with a radius of 0.1 \(\mu\)m at a wavelength of 550 nm. These particular values of the grain radius and wavelength were chosen because the dust polarization was found to reach its maximum value in this case. The net polarization increases with increasing inclination and reaches its maximum of about 7% at 90\({}^{\circ}\). Because of symmetry constraints, the net polarization equals zero in the case of zero inclination (face-on).
In summary we find that while the orbital inclination affects the net dust polarization, the dust grain radius has a strong impact on the wavelength-dependent polarization characteristics and also on the net polarization. It therefore strongly affects the polarimetric investigation of a close-in planet in the presence of a HEZD. For small dust grain radii the resulting net polarization of a HEZD is negligible within the wavelength range from 550 nm to 4 \(\mu\)m for a polarimetric analysis like this. Furthermore, polarimetric measurements of HEZD in the NIR wavelength range and in the \(V\) band potentially provide constraints on the dust grain radius by a comparison of the polarization ratios.
#### 3.1.3 Composition of the planetary atmosphere
We illustrate the planetary polarization as a function of wavelength relative to the polarization in the \(V\) band for selected atmospheric compositions to investigate their impact on the planetary polarization characteristics (see Fig. 5). The considered mixing ratios (CO/H\({}_{2}=0.2\), H\({}_{2}\)O/H\({}_{2}=10^{-3}\)) were adopted from Sheppard et al. (2017) to represent the atmospheric composition of WASP-18b, and the effective radii \(r_{\rm eff}=0.05\)\(\mu\)m, 0.5 \(\mu\)m and effective variances \(\nu_{\rm eff}=0.01,0.1\) for Mg\({}_{2}\)SiO\({}_{4}\) and Al\({}_{2}\)O\({}_{3}\) cloud particles were adopted from Bailey et al. (2018), Bott et al. (2018) and Kedziora-Chudczer et al. (2019) (see Appendix A.4 for the size distribution of the cloud particles). Although WASP-18b differs from many other hot Jupiters in terms of its temperature and planetary orbital radius, the results presented here can be generalized to hot Jupiters because the selected atmospheric compositions, resulting in Rayleigh or Mie scattering, also apply to other extensively studied hot Jupiters such as HD 189733b (Sing et al. 2015; Wakeford & Sing 2015; Mansfield 2023) and are thus expected to result in at least qualitatively similar polarization characteristics.
Figure 4: Dust polarization as a function of orbital inclination at a wavelength of 550 nm for a dust grain radius of 0.1 \(\mu\)m. See Sect. 3.1.2 for details.
Figure 5: Planetary polarization as a function of wavelength relative to the polarization at a wavelength of 550 nm for compositions of the planetary atmosphere (H\({}_{2}\), CO, H\({}_{2}\)O, Mg\({}_{2}\)SiO\({}_{4}\) and Al\({}_{2}\)O\({}_{3}\) clouds). Bottom and top: With and without the atmospheric specific absorption coefficients. See Sect. 3.1.3 for details.
Three scattering scenarios contribute in this case: Rayleigh scattering on atmospheric gas particles, and Rayleigh and Mie scattering on cloud particles. When no absorption of stellar radiation by the atmospheric particles is considered (see top graph of Fig. 5), the polarization ratios of the CO/H\({}_{2}\), H\({}_{2}\)O/H\({}_{2}\) and H\({}_{2}\) compositions are close to the ratio of the reference atmospheric composition (H\({}_{2}\) atmosphere and Mg\({}_{2}\)SiO\({}_{4}\) cloud particles with an effective radius of 0.05 \(\mu\)m) and thus have similar wavelength-dependent polarization characteristics. The reason is that Rayleigh scattering has the strongest impact on the resulting polarization characteristics for the selected compositions because the radii of the gas molecules are small. Moreover, scattering on H\({}_{2}\) is the source of polarization in the clear-atmosphere case, and H\({}_{2}\) is the dominant gas in the other mixing ratios. Rayleigh scattering also applies to the Mg\({}_{2}\)SiO\({}_{4}\) and Al\({}_{2}\)O\({}_{3}\) clouds with an effective radius of 0.05 \(\mu\)m and an effective variance of 0.01.
The polarization in the NIR wavelength range is significantly lower than in the \(V\) band for every selected atmospheric composition, especially for the Rayleigh-scattering particles. Additionally, the polarization increases in the NIR wavelength range for cloud particles with an effective radius of 0.5 \(\mu\)m and an effective variance of 0.1 because Mie scattering applies for particles of such radii. Due to the strong similarities of the polarization ratio for the selected atmospheric compositions, drawing unambiguous conclusions about the composition of the atmosphere only from the measured polarization is hardly feasible, at least as long as small cloud particles dominate the composition of the atmosphere.
In addition, we also considered the specific molecular absorption of stellar radiation by the atmospheric gas particles (see bottom graph of Fig. 5). These absorption features are imprinted on the wavelength-dependent polarization, resulting in significant differences between the polarization ratios for the clear atmospheres with different chemical compositions, particularly in the NIR wavelength range. However, for atmospheres with clouds these features are strongly damped because the absorption by the gas particles is negligible in comparison to the absorption by the cloud particles within the model we considered.
Motivated by the wavelength dependence of the planetary polarization for different effective cloud particle radii, we illustrate the polarization degree for three different atmospheric compositions as a function of phase angle in Fig. 6. When the effective cloud particle radii are increased to 0.5 \(\mu\)m, the Mg\({}_{2}\)SiO\({}_{4}\) and Al\({}_{2}\)O\({}_{3}\) particles move into the Mie scattering regime, which results in a substantially reduced polarization degree and a different phase angle dependence of the planetary polarization in comparison to the Rayleigh scattering regime (the maximum polarization is now at about 40\({}^{\circ}\)) because the peak of the back-scattering probability is narrower.
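For orientation, the idealized single-scattering Rayleigh limit can be written in closed form; the sketch below (our own illustration, not the POLARIS model) evaluates \(P(\alpha)=\sin^{2}\theta/(1+\cos^{2}\theta)\) with scattering angle \(\theta=180^{\circ}-\alpha\), which peaks at a phase angle of \(90^{\circ}\), whereas the full multiple-scattering planet model peaks near \(67^{\circ}\):

```python
import numpy as np

def rayleigh_single_scatter_polarization(phase_angle_deg):
    """Degree of linear polarization for single Rayleigh scattering,
    P = sin^2(theta) / (1 + cos^2(theta)), theta = 180 deg - phase angle."""
    theta = np.radians(180.0 - phase_angle_deg)
    return np.sin(theta) ** 2 / (1.0 + np.cos(theta) ** 2)

alpha = np.linspace(0.0, 180.0, 19)
print(np.round(rayleigh_single_scatter_polarization(alpha), 3))  # maximum at 90 deg
```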
In conclusion, a comparison of the characteristic wavelength-dependent polarization ratios for dust and atmospheric particles enables a distinction of the polarization characteristics for the HEZD and the planet, whereby the resulting polarization of large cloud particles potentially mimic the wavelength-dependent polarization characteristics of large dust grains (Mie scattering) and likewise with respect to small dust grains and atmospheric particles (Rayleigh scattering). Within the considered parameter space, an additional consideration of the molecule-specific absorption coefficients for the wavelength-dependent polarization characteristics enables a unique characterization of the atmospheric and dust particles in the case of clear atmospheres.
#### 3.1.4 Intrinsic stellar polarization
To investigate whether the intrinsic stellar polarization affects the polarimetric analysis of a HEZD and a planet (see Table 1 for the model parameters), we compared their wavelength-dependent polarization degrees in the range from 430 nm to 600 nm with polarization measurements of the Sun and of selected FGK dwarfs (see Fig. 7). The maximum dust polarization for parameters from the reference model is about 13 ppm while the maximum planetary polarization is about 17 ppm (both at a wavelength of 430 nm).
Figure 6: Planetary polarization as a function of phase angle for compositions of the planetary atmosphere (H\({}_{2}\), Mg\({}_{2}\)SiO\({}_{4}\) and Al\({}_{2}\)O\({}_{3}\) clouds) at a wavelength of 550 nm. See Sect. 3.1.3 for details.

Figure 7: Polarization as a function of wavelength for a HEZD and a planet (reference model; Table 1). Two measured polarization degrees of the Sun (red dots; Kemp et al. 1987) and average polarization degrees for three types of observed FGK dwarfs are also shown (green, yellow and purple dots; Cotton et al. 2017). See Sect. 3.1.4 for details.

The average polarization degrees for a sample of nearby FGK dwarfs (Cotton et al. 2017) fall into three classes: The first class consists of inactive stars with an average polarization degree of about 2.9 ppm. The second class consists of inactive stars with debris disks, with an average polarization degree of about 7.8 ppm; here, the additional polarization of about 4.9 ppm in comparison to the inactive stars without debris disks results from scattering of stellar radiation by the debris disk. The third class contains active stars with an average polarization degree of about 23 ppm, exceeding even the maximum planetary polarization. These high intrinsic stellar polarization degrees presumably result from differential saturation induced by the global magnetic field, whereby differential saturation means that many spectral lines overlap and merge with each other to produce a net broadband linear polarization (Cotton et al. 2017, 2019). Measured polarization degrees of the Sun are also shown in Fig. 7 (Kemp et al. 1987), at about 0.8 ppm at 450 nm and about 0.2 ppm at 550 nm, which fits the order of magnitude of the average polarization for inactive stars without a debris disk (the first class of sampled FGK dwarfs). In summary, the stellar polarization contributes significantly to the total polarization of the system, especially for active stars. In conclusion, an analysis of the scattered-light polarization for close-in planets orbiting active stars in the presence of a HEZD is therefore challenging and requires independent constraints on the intrinsic wavelength-dependent stellar polarization. Time domain polarimetry represents a promising approach to disentangle stellar and planetary signals based on their potentially different temporal dependence, similar to the established technique of high-precision long-duration photometry (e.g., Bruno et al. 2016).
### Contribution of hot exozodiacal dust to the total polarized flux
We studied the extent to which the net flux of the radiation polarized by the HEZD (hereafter: polarized flux of the dust, \(F_{\rm pol\ dust}\)) contributes relative to the net flux of the radiation polarized by the planet (hereafter: planetary polarized flux, \(F_{\rm pol\ planet}\)) at 550 nm, 2.22 \(\mu\)m, and 3.45 \(\mu\)m, representative of the photometric bands \(V\), \(K\) and \(L\). We varied the following parameters: the HEZD mass, the dust grain radius for a fixed HEZD mass (Sect. 3.2.1), the planetary phase angle, and the composition of the planetary atmosphere (Sect. 3.2.2).
For the parameters from the reference model (Table 1), the maximum planetary polarized flux (about 122 \(\mu\)Jy) is higher than the maximum polarized flux of dust (about 39 \(\mu\)Jy), but it decreases much faster (see Fig. 8). This is due to Mie scattering in the case of the HEZD and Rayleigh scattering in the case of the planetary atmosphere. For polarimetric observations in the NIR wavelength range, the influence of the planet is therefore negligible within the considered parameter range.
#### 3.2.1 Hot exozodiacal dust mass and dust grain radius
Figure 8: Polarized flux as a function of wavelength for a HEZD and a planet (reference model; Table 1). See Sect. 3.2 for details.

Figure 9: Polarized flux of the dust as a function of HEZD mass and dust grain radius relative to the planetary polarized flux (reference model; Table 1). These relations are shown at wavelengths of 550 nm (top), 2.22 \(\mu\)m (middle) and 3.45 \(\mu\)m (bottom). See Sect. 3.2.1 for details.

Because current constraints on HEZD masses cover a wide range (around \(10^{-10}\,\mathrm{M_{\oplus}}-10^{-8}\,\mathrm{M_{\oplus}}\)) and the dust grain radius has a strong impact on the wavelength-dependent polarization characteristics (see Sect. 3.1.2), we studied the effect of combining these two parameters on the polarized flux of the dust. Furthermore, we investigated the contribution of the polarized flux of the dust relative to the planetary polarized flux. We find three trends (see Fig. 9):
First, for the selected HEZD model, the dust grain radius required to reach the maximum polarized flux of the dust increases from 0.1 \(\mu\)m at a wavelength of 550 nm, through 0.5 \(\mu\)m at 2.22 \(\mu\)m, to 0.8 \(\mu\)m at 3.45 \(\mu\)m. Second, the maximum ratio of the polarized flux of the dust to the planetary polarized flux increases from about 10 at a wavelength of 550 nm, through about 100 at 2.22 \(\mu\)m, to about 1000 at 3.45 \(\mu\)m, because the planetary polarized flux decreases faster than the polarized flux of the dust with increasing wavelength for the selected model parameters.
Third, the minimum HEZD mass required for the polarized flux of the dust to exceed the planetary polarized flux (\(F_{\rm{pol\ dust}}/F_{\rm{pol\ planet}}>1\)) for a dust grain radius of 0.1 \(\mu\)m (see the highlighted orange contour line in Fig. 9) is about \(0.08\times 10^{-8}\)M\({}_{\oplus}\) at a wavelength of 550 nm. The minimum HEZD mass required for the polarized flux of the dust to exceed the planetary polarized flux at 2.22 \(\mu\)m and 3.45 \(\mu\)m for certain dust grain radii is even below the lowest considered HEZD mass of \(10^{-10}\) M\({}_{\oplus}\). To increase the polarized flux of the dust for a fixed dust grain radius, a higher HEZD mass and thus more particles of the same dust grain radius are required.
The orbital radius of a HEZD as well as the planetary orbital radius cover wide ranges (\(d_{\rm{dust}}\) around 0.02 au \(-\) 0.4 au and \(d_{\rm{planet}}\) around 0.01 au \(-\) 0.05 au). With all other parameters characterizing the planet and the HEZD fixed, their relative individual contributions are almost exclusively determined by the geometrical dilution of the stellar radiation before scattering (the minor impact of different illumination geometries at different distances from the central star is neglected at this point). We therefore express the polarized flux of the dust relative to the planetary polarized flux as a function of these parameters (\(d_{\rm{dust}}\), \(d_{\rm{planet}}\)), with all other model parameters fixed. We define this relation for the dust grain radii resulting in the highest polarized flux of the dust at the wavelengths 550 nm, 2.22 \(\mu\)m, and 3.45 \(\mu\)m, with the other model parameters selected according to the reference model (see Table 1), as
\[\frac{F_{\rm{pol\ dust}}\left(\lambda,d_{\rm{dust}}\right)}{F_{\rm{pol\ planet}}( \lambda,d_{\rm{planet}})}=j(\lambda)\cdot\frac{d_{\rm{planet}}^{2}}{d_{\rm{ dust}}^{2}}, \tag{3}\]
with \(j(550\ {\rm nm})=0.68,\ j(2.22\ \mu{\rm m})=34.28\) and \(j(3.45\ \mu{\rm m})=360\). In conclusion, the impact of the presence of a HEZD on the polarimetric analysis of a close-in planet increases with increasing HEZD mass, for a particular dust grain radius that depends on the observed wavelength, with increasing radius of the planetary orbit, and with decreasing orbital radius of the HEZD. Due to the large possible parameter space of a HEZD, a parameter configuration suitable to fit the observed NIR excess in certain cases contains dust particles with radii smaller than 0.1 \(\mu\)m, resulting in low polarization. In these cases the dust therefore has to be located very close to the star and the HEZD mass has to be sufficiently high for the dust to contribute significantly to the total polarization of the system when a close-in planet is present.
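The scaling in Eq. (3) is straightforward to evaluate numerically. The following minimal Python sketch (a hypothetical helper, not part of the original analysis) computes the flux ratio for the \(j(\lambda)\) values given above:

```python
# Flux ratio of dust vs. planetary polarized flux following Eq. (3).
# The j(lambda) factors are taken from the text; radii are in au.

J_FACTORS = {"550 nm": 0.68, "2.22 um": 34.28, "3.45 um": 360.0}

def flux_ratio(j_factor: float, d_planet_au: float, d_dust_au: float) -> float:
    """Return F_pol_dust / F_pol_planet according to Eq. (3)."""
    return j_factor * (d_planet_au / d_dust_au) ** 2

# Illustrative configuration: d_planet = 0.02 au, d_dust = 0.1 au.
for band, j in J_FACTORS.items():
    print(band, flux_ratio(j, d_planet_au=0.02, d_dust_au=0.1))
```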
#### 3.2.2 Phase angle and planetary atmosphere
Motivated by the findings in Sect. 3.1.2 and Sect. 3.1.3, we investigated the influence of the phase angle on the relation between planetary polarized flux and polarized flux of the dust in one example case of Rayleigh scattering (dust grain radius of 0.02 \(\mu\)m and forsterite cloud particles with an effective radius of 0.05 \(\mu\)m) and one example case of Mie scattering (dust grain radius of 0.71 \(\mu\)m and forsterite cloud particles with an effective radius of 0.5 \(\mu\)m) for three selected wavelengths (550 nm, 2.22 \(\mu\)m and 3.45 \(\mu\)m). We find three interesting trends:
First, while in the case of Rayleigh scattering (top graph of Fig. 10) the phase angle of the maximum flux ratio deviates only slightly from 67\({}^{\circ}\) with increasing wavelength, it shifts to almost 40\({}^{\circ}\) in the case of Mie scattering (bottom graph of Fig. 10). Second, in the Mie scattering case, the planetary polarized flux exceeds the polarized flux of the dust (\(F_{\rm{pol\ planet}}/F_{\rm{pol\ dust}}>1\)) only at a wavelength of 550 nm, and only by a factor of about 2.4. However, in the case of Rayleigh scattering, the flux ratio exceeds a value of 10 in the NIR wavelength range and even 40 at 550 nm. Third, the polarized flux of the dust never exceeds the planetary polarized flux in the Rayleigh scattering case for the selected parameters.
The atmospheric composition, as well as the phase angle, thus have a significant influence on a polarimetric analysis of a HEZD and a close-in planet. In the case of a planetary Rayleigh-scattering atmosphere, small dust particles of the HEZD that also fall in the Rayleigh scattering regime contribute significantly less to the total polarized flux than particles of a certain grain radius (see Sect. 3.2.1). In the case of a planetary Mie-scattering atmosphere, however, larger cloud particles severely limit the phase angle range required for the planetary polarized flux to exceed the polarized flux of the dust.

Figure 10: Planetary polarized flux as a function of phase angle relative to the polarized flux of the dust at the wavelengths 550 nm, 2.22 \(\mu\)m, and 3.45 \(\mu\)m. Top: Planetary atmosphere consisting of H\({}_{2}\) and Mg\({}_{2}\)SiO\({}_{4}\) cloud particles with an effective radius of \(r_{\rm eff}=0.05\ \mu\)m; the HEZD consists of dust grains with a radius of 0.02 \(\mu\)m. Bottom: Same chemical composition, but for an effective cloud particle radius of \(r_{\rm eff}=0.5\ \mu\)m and a HEZD consisting of dust grains with a radius of 0.71 \(\mu\)m. All other parameters for the planet and the HEZD are those of the reference model (Table 1). See Sect. 3.2.2 for details.
#### 3.2.3 Potential of analyzing the individual Stokes parameters \(Q\) and \(U\)
While the degree and orientation of the linear polarization can be derived from the combination of \(Q\) and \(U\), these individual components of the Stokes vector already provide (albeit limited) information about the polarization state of the radiation. This is made use of in differential polarimetry (e.g., Kuhn et al. 2001; Potter et al. 2003; Quanz et al. 2011). Moreover, as illustrated by Marshall et al. (2023), \(Q\) and \(U\) may provide the basis for distinguishing the weakly polarized signal of the scientific target (e.g., a debris disk or an exoplanet) from that of the foreground interstellar polarization. In the following, we illustrate the potential of the Stokes parameters \(Q\) and \(U\) for the analysis of observations of a system consisting of a star, a HEZD and a hot Jupiter.
For this purpose, the two Stokes parameters are shown as a function of wavelength for our reference HEZD and planet seen at two different inclinations (\(i=45^{\circ}\), \(90^{\circ}\)) in Fig. 11. Furthermore, the polarizers were assumed to be oriented parallel and perpendicular to the main axis of the disk and the planetary orbit in the case of \(Q\). Consequently, \(U\) cancels out in the case of the HEZD ring and for a planetary orbit seen edge-on (\(i=90^{\circ}\)). Thus, for our defined reference model (edge-on configuration), the total linear polarized flux only results from the Stokes parameter \(Q\). We find that \(Q\) and \(U\) decrease with increasing wavelength. This trend is only slightly modified by the fluctuating wavelength dependence in the case of the Mie scattering-dominated HEZD.
The generally strong trends in the wavelength dependence of the resulting polarization degree, as found for the planet (strong dependence) and the HEZD (weaker dependence) in Sect. 3.1, are already imprinted in the underlying Stokes parameters \(Q\) and \(U\). On the one hand, this behavior is due to the decreasing polarization of the radiation of the planet and the HEZD, as illustrated for example in Fig. 1. On the other hand, as both \(Q\) and \(U\) are defined as intensity differences, albeit for different combinations of the orientation of the polarizers, this behavior is also linked to the decreasing scattered and thus polarized fraction of the radiation of both components (e.g., Fig. 2). In addition to the polarization degree and thus the underlying absolute values of both Stokes parameters, the signs of \(Q\) and \(U\) contain additional information about the orientation of the linear polarization. In contrast to the basic analysis of the individual Stokes parameters, it will be particularly difficult to derive the orientation of the linear polarization, because it is determined by the ratio of these quantities, namely \(\frac{1}{2}\arctan(U/Q)\), resulting in a strong impact of measurement uncertainties. In conclusion, the basic characteristics of the wavelength-dependent polarization state of both the planetary and the HEZD radiation, and thus the potential of distinguishing between the two components, are already imprinted in the Stokes parameters \(Q\) and \(U\).
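As a minimal numerical illustration of these relations (a sketch, not part of the original analysis), the linear polarization degree and orientation follow from the Stokes parameters as:

```python
import numpy as np

def linear_polarization(I: float, Q: float, U: float):
    """Degree p and orientation chi (radians) of linear polarization.
    chi = 0.5 * atan2(U, Q); atan2 is assumed here to resolve the
    quadrant ambiguity of the arctan(U/Q) expression in the text."""
    p = np.hypot(Q, U) / I
    chi = 0.5 * np.arctan2(U, Q)
    return p, chi

# Example: edge-on reference configuration, where U ~ 0 and Q carries
# the full linearly polarized flux.
print(linear_polarization(I=1.0, Q=1e-5, U=0.0))
```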
## 4 Summary and conclusions
We compared the wavelength-dependent scattered-light polarization of HEZD and close-in exoplanets starting from a reference model. This model was motivated by observational constraints from the polarization measurements of WASP-18b for a reference close-in planet and a suitable parameter configuration to fit the NIR excess of HD 22484 (10 Tau) as a reference for a HEZD. We varied different model parameters to study their individual impact on the polarization characteristics in the wavelength range from 550 nm to 4 \(\mu\)m. We defined the net polarization degree with and without including the stellar flux separately. The net polarization degree furthermore included the scattered (polarized) radiation of a close-in planet and a HEZD, as well as their thermally reemitted flux. We used a numerical tool (the 3D Monte Carlo solver POLARIS) for the radiative transfer in the Rayleigh scattering atmosphere of the close-in planet and an analytical tool to calculate the net scattered-light polarization of the HEZD, modeled as an optically thin dust ring. We also investigated the ratio of the polarized flux of the HEZD to the planetary polarized flux for wavelengths representative of the photometric bands \(V\), \(K\) and \(L\) for selected parameter combinations. We found several trends that have the potential to distinguish the polarization from HEZD and close-in planets. We reach the following conclusions:
1. The presence of HEZD needs to be considered in any effort to characterize exoplanets via polarimetric observations, because the two components have different wavelength-dependent polarization characteristics and, for certain model configurations, scattered-light polarizations of different orders of magnitude. This complicates any analysis of the radiation in such a system because the scattered-light polarization of the HEZD may exceed that of the close-in planet for a subset of the parameter space that is usually considered for modeling HEZD. Stricter constraints on the parameters of HEZD (especially on the grain radii), which can be achieved with interferometers such as MATISSE (Lopez 2014), are thus required to restrict the potentially resulting scattered-light polarization.

Figure 11: Wavelength-dependent Stokes parameters for the reference model of the HEZD-planet-star system (see Table 1) for two orbital inclinations (\(i=45^{\circ}\), \(90^{\circ}\)). We show \(Q\) (top) and \(U\) (bottom). See Sect. 3.2.3 for details.
2. Within the investigated parameter space, the presence of a HEZD complicates the polarimetric analysis of a close-in planet in the optical wavelength range most under the following conditions: The dust grain radius is 0.1 \(\mu\)m at an observed wavelength of 550 nm, the orbital radius of the HEZD is as small as possible, the HEZD mass is as high as possible, the orbital radius of the planet is as large as possible and the planetary phase angle deviates strongly from 67\({}^{\circ}\).
3. The dust grain radius has the most significant impact on the scattered-light polarization because it strongly changes the wavelength-dependent polarization characteristics of the HEZD and also the total order of magnitude of the polarization (see Sect. 3.1.2 and Sect. 3.2.1).
4. The scattered-light polarization of a close-in planet is strongly dependent on the phase angle. For the defined reference planet a phase angle of about 67\({}^{\circ}\) results in the highest polarization degree in the \(V\) band (see Sect. 3.1.3 and Sect. 3.2.2).
5. The orbital radii of the HEZD and of the planet, as well as the HEZD mass, are scaling factors that can change the order of magnitude of the polarized flux (see Sect. 3.2.1 and Sect. 3.2.2).
6. Different compositions of atmospheres of hot Jupiters affect the resulting linear polarization degree only slightly as long as Rayleigh scattering dominates the scattering behavior of the atmospheric and cloud particles. On the other hand, when the absorption of the atmospheric particles is also taken into account and larger cloud particles are considered, the significant differences between the linear polarization degrees, especially in the NIR wavelength range, provide a means of characterizing the atmospheric composition. For cloud particles with sizes comparable to the observing wavelength, Mie scattering dominates. This results in similar wavelength-dependent polarization characteristics for the planet and a HEZD consisting of dust grains of similar radii (the same applies for Rayleigh scattering in the case of small cloud particles and a HEZD consisting of small dust grains; see Sect. 3.1.3 and Sect. 3.2.2).
7. The potential to distinguish between a HEZD and a close-in exoplanet is already imprinted in the underlying Stokes parameters \(Q\) and \(U\) (see Sect. 3.2.3).
8. Orbital inclinations of the HEZD up to 90\({}^{\circ}\) increase the net scattered-light polarization (see Sect. 3.1.2).
9. The intrinsic polarization of the stellar radiation of active stars may have a significant impact on the net polarization of the system and thus on the polarimetric analysis (see Sect. 3.1.4).
As a next step to complete the investigation of the mutual influence of a planet and a HEZD on their individual polarization signatures, the gravitational impact of the planet on the spatial structure of the HEZD, which might result in asymmetries, has to be considered. The exact mechanisms that sustain or reproduce the HEZD are not yet well understood. The results of such a study would therefore strongly depend on the underlying model and thus on the dominating physical processes. For example, if the HEZD consists of submicron-sized grains close to the star, the dust grains may be trapped in the stellar magnetic field while otherwise, resonant trapping could determine the spatial dust distribution (e.g., Kral et al., 2017).
Three of the systems with an observed HEZD from Kirchschlager et al. (2017) also contain exoplanets (Pepe et al. 2011; Feng et al. 2017; Lagrange et al. 2019). They can therefore be selected as a starting point for such an investigation, although these exoplanets are not close-in planets. However, the required simulations are difficult to perform because the exact physics explaining the presence of HEZD first needs to be understood.
## ORCID IDs
K. Ollmann https://orcid.org/0009-0003-6954-5252
S. Wolf https://orcid.org/0000-0001-7841-3452
M. Lietzow https://orcid.org/0000-0001-9511-3371
T. A. Stuber https://orcid.org/0000-0003-2185-0525
###### Acknowledgements.
This work is supported by the Research Unit FOR 2285 "Debris Disks in Planetary Systems" of the Deutsche Forschungsgemeinschaft (DFG). K.O. and S.W. acknowledge the DFG for financial support under contract WO 857/15-2. We thank the anonymous referee for the very useful suggestions.
arXiv:2308.05425 | Aleksandar Z. Jovanović, Ana S. Dobrota, Natalia V. Skorodumova, Igor A. Pašti | 2023-08-10 | http://arxiv.org/abs/2308.05425v1

# Reactivity of Stone-Wales defect in graphene lattice - DFT study
###### Abstract
Understanding the reactivity of carbon surfaces is crucial for the development of advanced functional materials. In this study, we systematically investigate the reactivity of graphene surfaces with the Stone-Wales (SW) defect using Density Functional Theory calculations. We explore the atomic adsorption of various elements, including rows 1-3 of the Periodic Table, potassium, calcium, and selected transition metals. Our results demonstrate that the SW defect enhances binding with the studied adsorbates when compared to pristine graphene, with carbon and silicon showing the most significant differences. Additionally, we examine the effects of mechanical deformation on the lattice by constraining the system with the SW defect to the pristine graphene cell. Interestingly, these constraints lead to even stronger binding interactions. Furthermore, for carbon, nitrogen, and oxygen adsorbates, we observe that mechanical deformation triggers the incorporation of adatoms into the carbon bond network, leading to the reorganization of the SW defect structure. This work establishes a foundation for future studies in the defect and strain engineering of graphene, opening avenues for developing advanced materials and catalysts with enhanced reactivity and performance.
graphene; Stone-Wales defect; atomic adsorption; mechanical deformation
## 1 Introduction
Graphene is an allotrope of carbon consisting of a single layer of atoms arranged in a two-dimensional honeycomb lattice. It is the basic structural element of other C allotropes, including graphite, charcoal, carbon nanotubes, and fullerenes [1]. This atomic arrangement gives graphene its unique properties, including high electrical and thermal conductivity and extraordinary mechanical strength [2]. Due to these properties, graphene has been used in a variety of applications, including solar cells [3, 4, 5], touch screens [6], sensors [7, 8], flexible electronics [9, 10], energy storage [11, 12, 13], and many others. In fact, these days, it is very difficult to find any contemporary technology in which graphene is not present or at least tested for possible applications.
Although typically imagined as a flat sheet of atoms, in reality, graphene is not perfectly flat [14, 15] and the presence of defects in the graphene structure cannot be avoided. Various types of defects can occur in graphene, broadly classified into point defects and line defects [16, 17]. Point defects are localized disruptions to the regular atomic arrangement and can be either vacancies (missing atoms) or impurities (substitutions of foreign atoms) [17, 18]. Line defects are linear disruptions to the atomic structure and include things like edge dislocations and grain boundaries [17, 18]. They can be created by removing or substituting multiple carbon atoms or introducing a foreign atom into the lattice.
Both point and line defects can have a significant impact on the properties of graphene [17]. For example, point defects can act as sites for chemical reactions [17] even if no dangling bonds are present, while line defects can affect the electrical and thermal conductivity of graphene [18, 19]. Therefore, it is important to carefully control the number and type of defects present in graphene when it is being used for applications such as electronics or energy storage [20, 21]. In general, the more defects there are in a graphene sample, the more its properties will be degraded with respect to that of pristine graphene. However, the presence of defects is crucial for some applications of graphene-based materials. This particularly relates to chemical applications, including chemical reactions with or on the graphene surface, as pristine graphene is exceptionally chemically inert and interacts very weakly with most atomic adsorbates [22]. However, vacancies are the sites of enhanced reactivity and interact with different species much stronger than pristine graphene [23], resulting from the formation of localized states near the vacancy.
Another common type of defect in graphene is the Stone-Wales (SW) defect, which is also present in other nanocarbon materials [17]. The SW defect results from rotating two carbon atoms about a bond axis, resulting in two five-membered rings and two seven-membered rings instead of four hexagons [24]. For this reason, the SW defect is also called the 5775 defect, and its formation requires approximately 5 eV [25, 26]. As in the case of vacancies in graphene, the presence of Stone-Wales defects can significantly impact the electronic structure, chemical reactivity, and adsorption properties of the material [27, 28, 29], acting as adsorption sites for molecules or ions or increasing the rate of chemical reactions by providing an active site for the reaction [30, 31]. For example, the presence of a SW defect enhances the interaction of graphene with ambient gases (N\({}_{2}\), O\({}_{2}\), Ar, CO\({}_{2}\), and H\({}_{2}\)O), causing damage in the material, particularly in the presence of oxygen [30]. However, it can also be utilized to turn graphene sheets into a membrane with a high proton permeability and isotope selectivity [31].
Besides the alteration of the chemical properties of graphene due to the presence of defects, mechanical deformations can significantly impact the reactivity of graphene [32]. Thus, it is essential to
consider how mechanical deformations will affect the reactivity of graphene before using it in applications where it may be subject to such forces. For example, reactivity tuning by mechanical strain could be employed to effectively store and release hydrogen from the graphene surface [33, 34, 35]. In contrast, combining heteroatoms and mechanical strain can induce side-selective reactivity towards different species [36].
In our previous works, we have addressed atomic adsorption on pristine graphene [22] and graphene with single vacancy [23] in a systematic fashion, considering all the elements in the first six rows of the Periodic Table of Elements (PTE). In this work, we continue the quest for understanding the reactivity trends of graphene surfaces by analyzing the atomic adsorption of graphene with SW defect. We have restricted our study to the elements located in rows 1-3 of the PTE, additionally including K, Ca, d-elements, which are particularly interesting for catalysis and electrocatalysis (Ni, Ru, Rh, Pd, Ir, Pt), and coinage metals (Cu, Ag, and Au). In the present work, we have also analyzed how the selected elements interact with the SW defect in the lattice under compressive strain invoking structural deformation of the surface layer. The presented results provide insight into the reactivity trends of the SW-containing graphene and can help develop novel strategies for graphene functionalization by defect and strain engineering.
## 2 Computational details
We calculated the adsorption of all the elements of the PTE located in rows 1 to 3, in the 4x4 cell of graphene (32 atoms) containing a SW defect. Additionally, we analyzed the adsorption of K, Ca, selected d-elements (Ni, Ru, Rh, Pd, Ir, Pt), and coinage metals (Cu, Ag, and Au). Due to the systematic nature of this work, the minimal simulation cell size (32 atoms), which does not induce strong interactions between periodic images, was used.
The first-principles DFT calculations were performed using the Vienna _ab initio_ simulation package (VASP) [37, 38, 39, 40]. We used the generalized gradient approximation (GGA) in the parametrization by Perdew, Burke and Ernzerhof [41] and the projector augmented wave (PAW) method [42, 43]. To account for dispersion interactions, the DFT+D3 formulation of Grimme was used [44]. In this approach, the total energy is corrected by a pairwise term, which accounts for dispersion interactions and is added to the total energy of the system calculated using the PBE functional. A cut-off energy of 600 eV and Gaussian smearing with a width of \(\sigma\) = 0.025 eV for the occupation of the electronic levels were used. A Monkhorst-Pack \(\Gamma\)-centered 10\(\times\)10\(\times\)1 k-point mesh was used. We selected 31 initial adsorption sites (**Figure S1**, Supplementary Information), which were systematically investigated. Two sets of calculations were done. First, calculations were done with the cell corresponding to that of pristine graphene. As explained later on, this induces corrugation of the SW-containing lattice due to compressive strain. In this set of calculations, all the atoms in the cell were allowed to fully relax. In the second set of calculations, both the atoms and the cell were fully relaxed. The relaxation procedure was stopped when the Hellmann-Feynman forces on all atoms were smaller than 10\({}^{-2}\) eV Å\({}^{-1}\), which corresponded to the total energy being converged below 0.01 meV. Spin polarization was taken into account in all calculations. The repeated graphene sheets were separated from each other by 20 Å of vacuum.
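For concreteness, a minimal sketch of input settings consistent with the parameters stated above is given below as plain Python that writes an INCAR file. Tags not explicitly stated in the text (e.g., ISMEAR = 0 for Gaussian smearing, IVDW = 11 for Grimme's DFT-D3) are assumptions based on standard VASP usage, not values taken from this work:

```python
# Hypothetical helper writing an INCAR that mirrors the stated settings.
incar = {
    "ENCUT": 600,      # plane-wave cutoff, eV (stated in the text)
    "ISMEAR": 0,       # Gaussian smearing (assumed tag)
    "SIGMA": 0.025,    # smearing width, eV (stated in the text)
    "ISPIN": 2,        # spin polarization (stated in the text)
    "IVDW": 11,        # Grimme DFT-D3 dispersion correction (assumed tag)
    "EDIFFG": -1e-2,   # force convergence criterion, eV/Angstrom
}
with open("INCAR", "w") as f:
    for tag, value in incar.items():
        f.write(f"{tag} = {value}\n")
```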
The binding energies (\(E_{\text{b}}\)) obtained within different sets of calculations were calculated as:
\[E_{b}^{rel}(A)=\ E_{0}^{rel}(A@SW-G)-\ E_{0}^{rel}(SW-G)-\ E_{0}(A) \tag{1}\]
\[E_{b}^{pris}(A)=\ E_{0}^{pris}(A@SW-G)-\ E_{0}^{pris}(SW-G)-\ E_{0}(A) \tag{2}\]
where \(E_{0}\) are the ground state energies of the adatom on SW-graphene [A@SW-G], SW-graphene [SW-G], and adatom [A] alone. Superscript "rel" or "pris" indicates whether the supercell was allowed to relax or was fixed to that of a pristine graphene lattice. \(E_{b}\) is negative when adsorption is exothermic.
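As a minimal worked example of Eqs. (1)-(2) (the energies below are placeholders, not values from this work):

```python
def binding_energy(e_adatom_on_sw: float, e_sw_graphene: float,
                   e_atom: float) -> float:
    """E_b = E0(A@SW-G) - E0(SW-G) - E0(A); negative means exothermic."""
    return e_adatom_on_sw - e_sw_graphene - e_atom

# Illustrative numbers only:
print(binding_energy(-300.45, -298.20, -0.05))  # -> -2.20 eV
```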
Visualization was done using VESTA [45], while the graphical presentation of Densities of States (DOS) was done using sumo tools for VASP [46].
## 3 Results and discussion
### SW defect in the graphene lattice
As mentioned in the introduction, the SW defect is formed by a rotation of one C-C bond, forming two five-membered and two seven-membered rings. This process requires a very high energy input; hence, the formation of the SW defect is not probable at low temperatures [17, 25]. However, if formed during the synthesis and quenched into the structure, a high energy barrier has to be overcome to restore the perfect lattice, so the defect remains in the structure.
Figure 1: Two structures of the SW defect in the graphene lattice. On the left (a), the SW defect in a fully relaxed lattice is presented with the charge density distribution, Densities of States (DOS), and isosurfaces of the partial charge density obtained by DOS integration from \(-2\) eV to the Fermi level (set to \(0\)). On the right (b), analogous data for the SW defect constrained to the pristine graphene lattice are presented.
Here we introduced the SW defect into the (4\(\times\)4) supercell of pristine graphene and relaxed the structure in two ways. First, the cell and all the atoms were fully relaxed while keeping the simulation cell volume constant. This resulted in an almost perfectly flat graphene sheet with the embedded SW defect (**Figure 1**, a). This finding agrees with previous reports, where only minor deviations from planarity were found for the SW defect in sp\({}^{2}\)-bonded planar materials [26]. In the second approach, we allowed only ionic relaxation while keeping the cell fixed to that of pristine graphene. As a result, the graphene sheet underwent deformation, developing curvature (**Figure 1**, b). The electronic structures of the two SW-graphene surface models are quite similar (**Figure 1**, a2 and b2), although we observed a small build-up of electronic states in the vicinity of the Fermi level for the system with the fixed cell. **Figure 1**, a3 and b3, compares the partial charge densities obtained by density of states (DOS) integration from \(-2\) eV to the Fermi level (0 eV) for the two systems. This DOS range was selected as it was previously shown that it can be linked with the reactivity of graphene [47]. While the partial charge densities are qualitatively similar, less charge was found in the case of the fully relaxed system: 4.8 electrons in the \(-2\) to 0 eV window, compared to 5.18 electrons in total for the system in the pristine graphene cell (per 32 atoms). The largest difference is observed for the carbon atoms forming the SW defect; thus, the charge accumulation caused by the mechanical deformation of the sheet is expected to additionally increase the reactivity of the SW defect in the graphene lattice.
To study atomic adsorption on the SW-containing graphene plane, we adopted a similar approach in terms of structural relaxation. The systems where both cell and ionic relaxation were allowed are denoted as "Relaxed cell", while the systems where only ionic relaxation was allowed are denoted as "Pristine graphene cell". Next, we discuss atomic adsorption with respect to the adsorbate's location in the PTE. First, hydrogen and the s-metals are discussed. Then, we move to the p-elements and continue with the noble gases. We then discuss transition metals of relevance to catalysis. Finally, the reactivity of the SW-defective graphene is compared to that of pristine graphene to provide a complete overview of the reactivity trends.
### H and s-metals
For the Group 1 elements, the binding energies decrease from H to Na, with the exception of K, which has the highest \(E_{\text{b}}\) in the series (**Table 1**). This trend suggests that the binding strength of Group 1 elements to graphene with a Stone-Wales defect decreases moving down the group, likely due to the increasing atomic size and decreasing electronegativity along the group. Hydrogen forms a single H-C bond (on-top adsorption; **Figure 2**), while Li, Na, and K position themselves in the center of the seven-membered ring. These geometries were found both in the relaxed cell calculations (**Figure 2**) and in the calculations in the pristine graphene cell (**Figure S2**, Supplementary Information).
The binding energy of Be is lower than that of Mg, while Ca has a significantly lower binding energy compared to the other two elements. When the cell is fully relaxed, Be is accommodated in the center of the five-membered ring of the SW defect (**Figure 2**). However, when the cell is constrained, it binds directly to the C atom, forming one Be-C bond (**Figure S2**, Supplementary Information). Mg is located very far from the surface, while Ca adsorbs in the center of the seven-membered ring (**Figure 2** and **Figure S2**, Supplementary Information).
When comparing atomic adsorption on the fully relaxed surface and the constrained one, modifications of reactivity can be observed. H, Be, and Ca show enhanced binding on the constrained surface (pristine graphene cell), while Li and K show weaker bonding on the constrained surface. Finally, Na and Mg adsorption is only weakly affected by the surface mechanical deformation (**Table 1**).
Considering the overall trends, the electronic structure (see the DOS plots in **Figure 2** and **Figure S2**, Supplementary Information) is not significantly affected by the adsorption of H and the s-metals on the SW-defective graphene. Some modifications can be observed, while the Fermi level shifts are a consequence of pronounced charge transfer, that is, n-doping, when s-metals are adsorbed on the surface. Moreover, magnetization was observed only for Be in the fully relaxed cell (**Table 1**).
### p-elements
The binding energies of the p-elements on the relaxed cell of graphene with the Stone-Wales defect are generally more negative than the binding energies of the Group 1 and 2 elements. This results from the formation of covalent bonds between the adsorbates and the surface (**Figure 3**). Namely, when the cell is fully relaxed, B adsorbs at the center of the five-membered ring of the SW defect, while the group 14, 15, and 16 elements bind at the bridge position. Halogens (F and Cl) bind directly on top of the C atom in the center of the defect. Across the p-elements, there is a general decreasing trend in binding energies from left to right within the periodic table (B to Cl). However, \(E_{\rm b}\) passes through a maximum for the elements with the ns\({}^{2}\)np\({}^{3}\) configuration (N and P) (**Table 2**).
Figure 2: Optimized structures and DOSes for H, alkali, and alkaline earth elements interacting with SW-graphene in the fully relaxed lattice.

When comparing the binding energies of the p-elements in the relaxed cell calculations to those in the constrained cell calculations, we observe more pronounced changes in the binding energies than for the Group 1 and 2 elements. In general, except for Al, the binding energies of the p-elements on the constrained lattice of pristine graphene become more negative (more favorable) compared to the relaxed cell calculations. This finding suggests that mechanical deformation enhances the binding strength of the p-elements to the graphene surface with the Stone-Wales defect. The overall trends in binding energies are similar for the relaxed cell and constrained cell calculations.
It is interesting to observe the distinct behavior of the C, N, and O adsorbates in the fully relaxed (**Figure 3**) and the constrained cells (**Figure S3**, Supplementary Information). In the fully relaxed cell, these atomic adsorbates bind at the bridge site between the two central carbon atoms of the SW defect. However, if the cell is constrained (and thus exposed to mechanical deformation), these adsorbates get incorporated into the lattice, breaking the SW defect and forming a defective structure which could be denoted as "578" in analogy to the "5775" notation of the SW defect. Such behavior aligns with previous computational predictions that C adatoms can heal the SW defect in the graphene lattice [48] by opening the defect structure and recombining it to the pristine one with an activation energy of only 0.87 eV (which is still too high for the process to take place at room temperature). Here we find that similar behavior might be expected for N and O, and that the SW defect reconstruction can also be triggered by the mechanical deformation of the graphene sheet.
Due to the formation of chemical bonds between the defective graphene surface and the adsorbates, the electronic structure is more disrupted (**Figure 3** and **Figure S3**, Supplementary Information) than in the case of Group 1 and 2 element adsorption, but we have not seen any bandgap opening.
\begin{table}
\begin{tabular}{l c c c c c c}
 & \multicolumn{3}{c}{**Relaxed cell**} & \multicolumn{3}{c}{**Pristine graphene cell**} \\
**adsorbate** & \(E_{\text{b}}\) / eV & \(d\) / Å & \(M\) / \(\mu_{\text{B}}\) & \(E_{\text{b}}\) / eV & \(d\) / Å & \(M\) / \(\mu_{\text{B}}\) \\ \hline
B & -1.917 & 1.932 & 0.00 & -3.193 & 1.806 & 0.00 \\
C & -2.877 & 1.533 & 0.00 & -4.301 & 1.395 & 0.00 \\
N & -2.121 & 1.464 & 0.85 & -3.569 & 1.336 & 0.00 \\
O & -3.053 & 1.447 & 0.00 & -3.996 & 1.386 & 0.00 \\
F & -2.592 & 1.460 & 0.00 & -2.878 & 1.440 & 0.00 \\
Al & -1.654 & 2.388 & 0.01 & -1.604 & 2.483 & 0.00 \\
Si & -1.958 & 1.976 & 0.00 & -2.080 & 1.942 & 0.00 \\
P & -1.343 & 1.907 & 0.94 & -1.537 & 1.886 & 0.95 \\
S & -1.848 & 1.861 & 0.00 & -2.068 & 1.840 & 0.00 \\
Cl & -1.269 & 1.957 & 0.00 & -1.521 & 1.889 & 0.00 \\ \hline
\end{tabular}
\end{table}
Table 2: Binding energies (\(E_{\text{b}}\)), adsorbate-C nearest distance (\(d\)), and total magnetization (\(M\)) for p-elements in rows 2 and 3 of the PTE.
However, in the case of P adsorption we observe magnetization in both cells considered here (**Figure 4**), with the magnetic moment primarily located on the P adatom. In the case of N, magnetization is observed only for the fully relaxed cell (**Table 2**), when N is adsorbed at the bridge site between the central carbon atoms of the SW defect. When the cell is constrained, N triggers the reconstruction of the SW defect, and the magnetization disappears.
### Noble gases
Noble gases, as expected, interact very weakly with the SW-defective graphene surface. There are only slight differences in the binding energies between the relaxed cell and constrained cell calculations. In general, the fixed cell calculations result in slightly stronger binding than the relaxed cell calculations for all the considered noble gases (**Table 3**). Irrespective of the cell relaxation, the noble gas atoms are located more than 3 Å away from the surface (**Table 3**, **Figure 5**, and **Figure S4**, Supplementary Information).
Among the noble gases considered, the binding energies increase from helium to neon to argon, indicating a weak trend of increasing binding strength. This trend is the same as for the case of noble gas adsorption on pristine graphene, where the binding energies were found to scale with the polarizabilities of the noble gas atoms [22].
### d- and coinage metals
For all of the d- and coinage metals, the binding energies in the pristine graphene lattice configuration are slightly more negative than those for the relaxed cell configuration (**Table 4**). This result suggests that the mechanical deformation of the graphene surface with the Stone-Wales defect can enhance the binding affinity of the transition and coinage metals. The particular trends along a row or group depend on the actual electron configuration of a given element. However, there is a clear trend of more negative binding energies for the considered transition metals (Ni, Ru, Rh, Pd, Ir, Pt) compared to the coinage metals (Cu, Ag, Au). We find that constraining the surface relaxation affects the final geometry of the adsorbed metals. In the fully relaxed cell, Ru and Rh form a multicentric bond, being adsorbed at the center of the five-membered ring (**Figure 6**). In contrast, when the cell is constrained (**Figure S5**, Supplementary Information), Ru binds directly on top of the C atom in the center of the SW defect, while Rh prefers the bridge configuration. Irrespective of the cell relaxation, Ni, Pd, Ir, and Pt adsorb in the bridge configuration in the center of the SW defect, while the coinage metals prefer on-top adsorption (**Figure 6**).
Figure 6: Optimized structures of selected d-elements and coinage metals adsorbed at the SW defect in the fully relaxed cell.

Clearly, the SW defect is a center of altered reactivity in the graphene lattice, which is further affected by mechanical deformation of the lattice. We have therefore mapped the reactivity of the graphene lattice with the SW defect considering the fully relaxed cell (**Figure 7**) and the constrained cell scenario (**Figure S6**, Supplementary Information). The mapping was done considering the final position of the adsorbate atom upon relaxation, as in Ref. [49]. Briefly, the binding energy of the adatom at the site to which it relaxes was ascribed to the initial site in which it was placed. Once all the sites in the cell were described with the corresponding binding energy of the relaxed structure, heatmaps were constructed by linear interpolation of these values. For both considered cases, the constructed reactivity maps clearly show that the SW defect acts as an attractor for the adsorbed metals. Also, the energy landscape of the SW-defective graphene lattice for the adsorption of the studied metals is not exceptionally rough; the largest spread in binding energies was found for Ru in the relaxed cell (0.95 eV). For the relaxed cell, the reactivity maps are symmetric around the defect, while in the case of the constrained cell the symmetry is still present but lower than for the fully relaxed cell. The obtained results suggest that the coinage metals are likely highly mobile on the graphene surface with the SW defect and that their mobility can be triggered by applying mechanical deformation to the graphene sheet.
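A minimal sketch of the mapping procedure described above (with hypothetical site coordinates and binding energies; SciPy's `griddata` is one way to do the linear interpolation):

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical data: (x, y) coordinates of initial adsorption sites and
# the binding energy of the structure each site relaxes into (eV).
sites = np.array([[0.0, 0.0], [1.2, 0.7], [2.4, 0.0], [1.2, 2.1]])
e_bind = np.array([-1.95, -2.31, -1.87, -2.05])

# Dense grid over the cell; linear interpolation yields the reactivity map.
gx, gy = np.mgrid[0.0:2.4:200j, 0.0:2.1:200j]
reactivity_map = griddata(sites, e_bind, (gx, gy), method="linear")
print(np.nanmin(reactivity_map), np.nanmax(reactivity_map))
```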
Surprisingly, only a few metals exhibit non-zero magnetic moments (\(M\)) when adsorbed at the SW defect. Notably, Cu, Ag, and Ir exhibit non-zero magnetic moments in the relaxed cell configuration, while Cu, Ru, Rh, and Ir exhibit magnetization in the pristine cell calculations. In the other cases, no magnetic moment is observed, while the d-states of the atomic adsorbates are generally well localized in the energy window between \(-5\) eV and the Fermi level (**Figure 8** and **Figure S7**, Supplementary Information). Subtle differences in the electronic structure can be observed when comparing the relaxed and constrained cell cases. We therefore believe that single metal atom trapping at the SW sites, combined with mechanical deformation of the graphene substrate, could be a plausible route to tuning the reactivity of the resulting metal single atoms (catalysts). This issue is, however, beyond the scope of the present work.
Ir was the only adsorbate that exhibited magnetization irrespective of the cell lattice (**Table 4**). **Figure 9** presents the orbital-resolved DOS and the spin density of the Ir adatom on a fully relaxed graphene surface with the SW defect. Based on the spin density symmetry and the presented DOS, it can be concluded that the magnetism arises from partially filled d\({}_{z^{2}}\) orbitals, while hybridized d\({}_{xz}\) and d\({}_{z^{2}}\) orbitals are mainly responsible for bonding with the graphene surface (bridge configuration).
Figure 7: Reactivity maps of the SW defect in the fully relaxed cell towards selected d-elements and coinage metals. Color coding is provided below each map giving the value of the adsorbate binding energy in eV (blue – weaker bonding, red – stronger bonding).
Figure 8: DOSes of selected d-elements and coinage metals adsorbed at the SW defect in the fully relaxed cell.
Figure 9: Spin density of the Ir adatom at the SW defect (fully relaxed lattice, isosurface value 1.5\(\times\)10\({}^{-3}\) e Å\({}^{-3}\)) and the corresponding lm-decomposed DOS.
### SW-defected graphene vs. pristine graphene
When atomic adsorption on the SW-defective graphene surface (fully relaxed cell) is compared to adsorption on the pristine graphene surface, it can be seen that in all cases the SW defect binds the studied atomic adsorbates more strongly than the pristine graphene surface (**Figure 10**, a). The differences in the binding energies are up to ~1.4 eV, reaching the maximum for C and Si. As the SW defect was found to be an attractor for the studied adsorbates, it can be concluded that the introduction of the SW defect can be an effective way to fine-tune the reactivity of the graphene lattice: even though there are no surface dangling bonds, the reactivity is improved compared to the pristine graphene surface. In contrast, the correlation between the binding energies on the defective and pristine surfaces is quite low (**Figure 10**, c). However, the binding on the SW defect is still much weaker than on the single vacancy [23]. For example, none of the studied transition metal adsorbate binding energies were found to exceed the cohesive energy of the corresponding bulk metal phase. Thus, the SW defect is not likely to be able to stabilize single atoms of these metals. Nevertheless, these systems might be of high academic interest for studying fundamental aspects of the reactivity of graphene and graphene-supported single-atom catalysts. Moreover, the mobility of these adatoms is generally high, although bonding is preferentially to the defect site.
Figure 10: The overall adsorption trends: (a) variation of the calculated \(E_{b}\) for selected elements in the fully relaxed cell along the rows of the PTE, compared to \(E_{b}\) calculated for SW-graphene in the pristine cell and for pristine graphene (data taken from Ref. [22] for PBE-D3), (b) correlation of \(E_{b}\) calculated for adsorption on SW-graphene in the fully relaxed cell and in the pristine cell, (c) correlation of \(E_{b}\) calculated for adsorption on SW-graphene in the fully relaxed cell and on pristine graphene.
We also note that in the majority of cases, our calculations with the pristine graphene (constrained) cell led to more negative binding energies for the studied atomic adsorbates. A clear correlation exists between the binding energies calculated in the relaxed and constrained cells. B, C, N, and O are outliers, with much more negative binding energies on the surface in the pristine graphene cell (**Figure 10**, b). Among these four elements, C, N, and O exhibit particularly interesting behavior: under mechanical deformation, they "destroy" the SW defect, rearrange the carbon bond network, and lead to the formation of N- and O-doped defective graphene lattices.
## 4 Conclusions
Introducing the SW defect enhances the binding strength between atomic adsorbates and the graphene lattice, and the SW defect acts as an attractor for the studied atomic adsorbates. Despite the absence of surface dangling bonds, the reactivity of SW-defective graphene outperforms that of pristine graphene. The SW defect may not be suitable for stabilizing single atoms of certain transition metals, but it holds potential for studying fundamental aspects of the reactivity of graphene-supported single-atom catalysts. Thus, defect engineering by introducing the SW defect can be an effective strategy for fine-tuning the reactivity of the graphene surface. Moreover, the presented results highlight the influence of lattice constraints on binding energies. Calculations with the pristine graphene (constrained) cell generally led to more negative binding energies for most atomic adsorbates compared to the fully relaxed cell. The behavior of the elements B, C, N, and O deviates from this trend, exhibiting notably more negative binding energies in the pristine graphene cell. We find that under mechanical deformation, C, N, and O can "destroy" the SW defect, forming N- and O-doped defective graphene lattices. The presented work contributes to a deeper understanding of the reactivity of SW-defective graphene and its potential applications in catalysis and electrocatalysis. The stronger binding energies observed on the SW defect surface offer promise for designing novel graphene-based functional materials. However, further investigations are needed to explore the full potential of SW-defective graphene and its interactions with various atomic adsorbates.
## Acknowledgment
A.Z.J., A.S.D., and I.A.P. acknowledge the support provided by the Serbian Ministry of Science, Technological Development, and Innovation (451-03-47/2023-01/200146). The computations and data handling were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) at the NSC center of Linkoping University, partially funded by the Swedish Research Council through grant agreement no. 2018-05973.
arXiv:2301.09714 | Ernesto García, Pablo Lessa | 2023-01-23 | http://arxiv.org/abs/2301.09714v1

# Dimension drop of harmonic measure for some finite range random walks on Fuchsian Schottky groups
###### Abstract
We prove that the harmonic measures of certain finite range random walks on Fuchsian Schottky groups have dimension strictly smaller than the Hausdorff dimension of the corresponding limit set.
## 1 Introduction
### A dimension drop conjecture
Let \(G\) be the group of isometries of the hyperbolic plane \(\mathbb{H}^{2}\) and \(\Gamma<G\) be a non-elementary discrete subgroup.
Let \(\Lambda\) be the limit set of \(\Gamma\), which is a closed \(\Gamma\) invariant subset of the visual boundary \(\partial\mathbb{H}^{2}\) of \(\mathbb{H}^{2}\). We denote by \(\dim(\Lambda)\) the Hausdorff dimension of \(\Lambda\) with respect to the visual distance based at any point \(o\in\mathbb{H}^{2}\).
For any finitely supported probability measure \(\mu\) whose support generates \(\Gamma\) as a semi-group there exists a unique \(\mu\)-stationary probability \(\nu\) on \(\partial\mathbb{H}^{2}\). That is, \(\nu\) satisfies
\[\nu=\sum_{g\in\Gamma}\mu(g)g_{*}\nu,\]
where \(g_{*}\nu\) denotes the push-forward of \(\nu\) by \(g\). Since any \(\Gamma\)-invariant compact set admits a \(\mu\)-stationary measure, it follows by uniqueness that the support of \(\nu\) must be contained in \(\Lambda\).
The measure \(\nu\) is also the harmonic measure of any random walk of the form \(x_{n}=g_{1}\cdots g_{n}o\) where \(o\in\mathbb{H}^{2}\) and \(g_{1},\ldots,g_{n},\ldots\) are i.i.d. random elements with common distribution \(\mu\). By this we mean that it is the distribution of the random limit point \(x_{\infty}=\lim\limits_{n\to+\infty}x_{n}\).
Furthermore, \(\nu\) is exact dimensional (see [19], and also [10], [11], and [12]), i.e. there exists a non-negative real number \(\dim(\nu)\) such that
\[\dim(\nu)=\lim\limits_{r\downarrow 0}\frac{\log(\nu(B(\xi,r)))}{\log(r)},\]
for \(\nu\)-a.e. \(\xi\in\partial\mathbb{H}^{2}\), where \(B(\xi,r)\) denotes the ball of radius \(r\) centered at \(\xi\) with respect to the visual metric based at some point \(o\in\mathbb{H}^{2}\).
The dimension \(\dim(\nu)\) can be characterized as the infimum of the Hausdorff dimensions of Borel sets with full \(\nu\)-measure (see for example [23, Proposition 2.1]). Since \(\nu\) is supported on \(\Lambda\) one has \(\dim(\nu)\leq\dim(\Lambda)\). We conjecture that equality is never attained for finitely supported \(\mu\).
**Conjecture 1** (Dimension drop conjecture).: _Let \(\mu\) be a finitely supported probability on \(G\) whose support generates (as a semi-group) a discrete non-elementary subgroup \(\Gamma\), and let \(\nu\) be the unique \(\mu\)-stationary measure on the limit set \(\Lambda\) of \(\Gamma\). Then \(\dim(\nu)<\dim(\Lambda)\)._
The conjecture is motivated by the observation that, in other contexts, the harmonic measure of a random walk 'built from local information' has a dimension drop with respect to the natural geometric measure on the boundary.
Two contexts where analogous statements have been proven are the harmonic measure of a Jordan domain bounded by a curve with Hausdorff dimension strictly larger than \(1\) (see [16]), and the harmonic measure for the simple random walk on a Galton-Watson tree (see [10]).
Another motivation for Conjecture 1 is to extend the scope of the following well known conjecture of singularity with respect to visual measure on the boundary (see [11, Conjecture 1.21], and [12]) to the case where \(\Lambda\) is not the entire visual boundary, allowing it to be investigated in a larger family of examples.
**Conjecture 2** (Singularity conjecture).: _If \(\mu\) is a finitely supported probability whose support generates (as a semi-group) a Fuchsian group \(\Gamma\), then the unique \(\mu\)-stationary measure \(\nu\) on \(\partial\mathbb{H}^{2}\) is singular with respect to the class of visual measures._
We notice that Conjecture 2 has been studied in other contexts such as, higher dimensional hyperbolic space [13], Teichmuller space [15], and higher rank semi-simple Lie groups [12]. In all these contexts analogous statements to Conjecture 1 seem to warrant further investigation.
Conjecture 2 is known to be true when \(\Gamma\) is not co-compact but has finite co-volume (see [10], [1], [11], [13]).
Some progress has been made in the case when \(\Gamma\) is co-compact. In [12] and [14] the conjecture is verified for nearest neighbor random walks on tilings by regular polygons. A general result for symmetric random walks on the fundamental group of the surface of genus two is obtained in [12]. For any finite support probability measure on the fundamental group of a closed surface, dimension drop for the harmonic measure associated to discrete and faithful representations of the group outside of a compact subset of Teichmuller space was established in [1].
Dimension drop of harmonic measure was established for finite range random walks on discrete groups \(\Gamma\) which are not virtually free and whose boundary is endowed with a distance coming from a word metric in [15, Theorem 1.1]. However, their result does not apply in the current setting (see [15, Remark 1.1]).
If \(\mu\) is not assumed to be finitely supported then Conjecture 1 is false. In the co-compact case a measure \(\mu\) whose stationary measure is continuous can be constructed via the Furstenberg-Lyons-Sullivan discretization procedure (see [10]) or by more general results of Connell and Muchnik [14]. In the convex co-compact case it has been shown that the probability can be chosen with finite exponential moment (see [12, Appendix]).
Conjecture 2 is false if the support of \(\mu\) generates a dense group (see [1] and [1]).
Conjecture 1 should be contrasted with the classical case of self-similar sets defined by iterated function systems satisfying the open set condition. In that case a probability \(\mu\) on the generators of the iterated function system can be found such that the \(\mu\)-stationary measure realizes the Hausdorff dimension of the self-similar set (see [11] and [12]).
### A partial result for Schottky groups
The purpose of this article is to prove some special cases of Conjecture 1 when the discrete group \(\Gamma\) is a Schottky group in two generators.
To state our result we will need to discuss the expression of elements of the group \(\Gamma\) in terms of words in a free generator. For this purpose we fix the finite alphabet \(\Sigma=\{a,b,a^{-1},b^{-1}\}\) and let \(\Sigma^{*}\) denote the set of finite words on this alphabet, including the empty word which we denote by \(\varepsilon\).
We fix the rewriting rules \(R=\{(aa^{-1},\varepsilon),(a^{-1}a,\varepsilon),(bb^{-1},\varepsilon),(b^{-1 }b,\varepsilon)\}\) which are the standard presentation for the free group with generators \(\{a,b\}\). Each word \(w\in\Sigma^{*}\) has a unique reduced form \(\overline{w}\) which is obtained by successively replacing any occurrence of the left-hand side of a pair in \(R\) with the empty word.
The set of reduced words \(\Sigma^{*}_{R}\) is a free group with generators \(a,b\) when endowed with the product \(w\cdot w^{\prime}=\overline{ww^{\prime}}\) (where \(ww^{\prime}\) is the concatenation of \(w\) and \(w^{\prime}\)).
We now consider a probability measure \(\mu\) on \(\Sigma^{*}_{R}\) whose support \(\operatorname{supp}(\mu)\) is finite and generates \(\Sigma^{*}_{R}\) as a semi-group, and we denote by \(\operatorname{Prefix}(\operatorname{supp}(\mu))\) the set of words which are a prefix of some word in \(\operatorname{supp}(\mu)\) (this includes the empty word \(\varepsilon\)).
Suppose \(\rho:\Sigma^{*}_{R}\to\Gamma\) is an isomorphism from \(\Sigma^{*}_{R}\) to a Schottky group in \(G\). By this we mean that there exists a family of half-planes \(\{H_{x}:x\in\Sigma\}\) whose closures in \(\mathbb{H}^{2}\cup\partial\mathbb{H}^{2}\) are pairwise disjoint and such that \(\rho(x)H_{y}\subset H_{x}\) for all \(x\) and all \(y\in\Sigma\setminus\{x^{-1}\}\). See Figure 1.
We let \(\nu\) denote the \(\rho_{*}\mu\)-stationary measure on the limit set \(\Lambda\) of \(\Gamma\).
Our main theorem is the following:
**Theorem A** (Dimension drop for some finite range random walks on Schottky groups).: _In the setting above suppose that \(a\) occurs as the last letter of exactly one element of \(\operatorname{Prefix}(\operatorname{supp}(\mu))\). Then \(\dim(\nu)<\dim(\Lambda)\)._
The theorem applies to nearest neighbor random walks, i.e. when \(\operatorname{supp}(\mu)=\Sigma\). Examples with arbitrarily large cardinality may be obtained by defining \(\operatorname{supp}(\mu)\)
as the set of all non-empty reduced words with length at most \(n\), which either do not contain the letter \(a\), or are of the form \(aw\) where \(w\) does not contain \(a\). For \(n=2\) this yields
\[\operatorname{supp}(\mu)=\{b,a^{-1},b^{-1},b^{2},ba^{-1},a^{-1}b,a^{-2},a^{-1}b^ {-1},b^{-1}a^{-1},b^{-2}\}\cup\{a,ab,ab^{-1}\}.\]
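To make this family concrete, the following Python sketch (ours, with \(a^{-1}\) and \(b^{-1}\) encoded as the capital letters A and B, a convention chosen purely for illustration) enumerates the support for a given \(n\) and recovers the 13 words above for \(n=2\).

```python
from itertools import product

# Sketch (ours): a^{-1} and b^{-1} are encoded as "A" and "B".
ALPHABET = ["a", "b", "A", "B"]
INVERSE = {"a": "A", "A": "a", "b": "B", "B": "b"}

def reduced_words(max_len):
    """All non-empty reduced words of length at most max_len."""
    for n in range(1, max_len + 1):
        for letters in product(ALPHABET, repeat=n):
            if all(INVERSE[letters[i]] != letters[i + 1] for i in range(n - 1)):
                yield "".join(letters)

def support(n):
    """Words with no letter 'a', plus words 'a' + w with w free of 'a'."""
    return [w for w in reduced_words(n)
            if "a" not in w or (w[0] == "a" and "a" not in w[1:])]

print(len(support(2)), sorted(support(2), key=len))  # 13 words, as listed above
```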
## 2 Hidden Markov property of harmonic measure
We let \(\Sigma^{\omega}\) denote the set of infinite words and \(\Sigma^{\omega}_{R}\) the subset of infinite reduced words (i.e. every prefix is reduced). We use \(w[i]\) for the \(i\)-th letter of a word (infinite or finite) so \(w=w[1]w[2]\cdots\), and set \(w[i:j]=w[i]w[i+1]\cdots w[j]\).
Figure 1: Half-planes \(H_{x}\) for \(x\in\Sigma\) associated to the Schottky group \(\rho(\Sigma^{*}_{R})\), in the Poincaré disk model of \(\mathbb{H}^{2}\).
The set \(\Sigma^{\infty}=\Sigma^{*}\cup\Sigma^{\omega}\) is a compact Polish space under letter-wise convergence, i.e. \(w_{n}\to w\) if and only if for each \(i\in\{1,\ldots,|w|\}\) one has \(w_{n}[i]=w[i]\) for all \(n\) large enough. The subset \(\Sigma^{\infty}_{R}=\Sigma^{*}_{R}\cup\Sigma^{\omega}_{R}\subset\Sigma^{\infty}\) is compact.
In the setting of Theorem A let \(g_{1},g_{2},\ldots\) be i.i.d. with common distribution \(\mu\). The limits
\[g_{\infty}=\lim_{n\to+\infty}g_{1}g_{2}\cdots g_{n},\]
and
\[\overline{g_{\infty}}=\lim_{n\to+\infty}\overline{g_{1}g_{2}\cdots g_{n}},\]
exist almost surely and belong to \(\Sigma^{\omega}\) and \(\Sigma^{\omega}_{R}\) respectively. We let \(\nu_{\Sigma^{\omega}}\) denote the distribution of \(g_{\infty}\) and \(\nu_{\Sigma^{\omega}_{R}}\) the distribution of \(\overline{g_{\infty}}\).
We define \(\pi:\Sigma^{*}\to\Sigma\cup\{\varepsilon\}\) as the projection associating to each non-empty word its last letter, and satisfying \(\pi(\varepsilon)=\varepsilon\).
**Theorem 1** (Reduced infinite words are hidden Markov).: _There exists a Markov chain \(x_{1},x_{2},\ldots\) on \(\mathrm{Prefix}(\mathrm{supp}(\mu))\setminus\{\varepsilon\}\) such that_
\[\lim_{n\to+\infty}\pi(x_{1})\pi(x_{2})\cdots\pi(x_{n}),\]
_exists, belongs to \(\Sigma^{\omega}_{R}\) almost surely, and has distribution \(\nu_{\Sigma^{\omega}_{R}}\)._
Theorem 1 implies in particular that \(\nu_{\Sigma^{\omega}_{R}}\) is a hidden Markov measure. This result follows from the recent work on coloured random walks [1]. The construction we give for the Markov lift is roughly equivalent to the discussion in [1, Section 1.2] though our proof is different. Since we need the specific statement above for what follows, we present the details in the present section.
The hidden Markov property of \(\nu_{\Sigma^{\omega}_{R}}\) is also implicit in formulas for harmonic measure in terms of matrix products in [11]. However, the number of states of the Markov pre-image implied by Lalley's formulas is large, and his construction would not suffice for our proof of Theorem A.
For nearest neighbor random walks on free groups the Markov property of harmonic measure is well known and has been extended to certain types of free products and other tree-like graphs (see [13], [14], [20], [15], and [16]).
Our result implies for example, that when \(\mathrm{supp}(\mu)=\{ab,a^{-1},b^{-1}\}\), then \(\nu_{\Sigma^{\omega}_{R}}\) is Markov. Other examples of random walks which are not nearest-neighbor but have Markov harmonic measures can be obtained by using stopping times on a nearest-neighbor walk as in [10].
Since the image of a Markov measure is rarely Markov (see for example [1], [12], and [1]), the case where \(\nu_{\Sigma^{\omega}_{R}}\) is hidden Markov but not Markov also occurs (and in some sense should be generic).
An example is obtained by setting \(\mu(\{a\})=p_{1},\mu(\{a^{2}\})=p_{2},\mu(\{b\})=p_{3}\) where \(0<\min(p_{1},p_{2},p_{3})<p_{1}+p_{2}+p_{3}=1\). To see this, observe that infinite words in \(\{a,a^{2},b\}\) are already reduced and let \([w]\) denote the set of such words with \(w\) as a prefix. So we get
\[\frac{\nu_{\Sigma^{\omega}_{R}}([ab])}{\nu_{\Sigma^{\omega}_{R}}([a])}=\frac{p _{1}p_{3}}{p_{1}+p_{2}}<\frac{p_{1}^{2}p_{3}+p_{2}p_{3}}{p_{1}^{2}+p_{1}p_{2}+ p_{2}}=\frac{\nu_{\Sigma^{\omega}_{R}}([a^{2}b])}{\nu_{\Sigma^{\omega}_{R}}([a^{2}])}.\]
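Clearing denominators, the inequality reduces to \(0<p_{3}p_{2}^{2}\), so it is strict whenever all three probabilities are positive. A quick numerical check (ours) confirms this:

```python
import random

# Numerical confirmation (ours) of the strict inequality above.
random.seed(0)
for _ in range(1000):
    p1, p2 = random.uniform(0.01, 0.49), random.uniform(0.01, 0.49)
    p3 = 1.0 - p1 - p2
    if p3 <= 0.0:
        continue
    lhs = p1 * p3 / (p1 + p2)
    rhs = (p1**2 * p3 + p2 * p3) / (p1**2 + p1 * p2 + p2)
    assert lhs < rhs, (p1, p2, p3)
print("inequality holds on all sampled (p1, p2, p3)")
```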
A natural context for the results of this section seems to be that of finite complete rewriting systems, i.e. finite sets of reduction rules with the property that applying available reductions in any order one eventually arrives at a unique reduced form for each word.
For example, the plain groups studied in [1], [14], and [13], admit a finite complete rewriting system where every rule substitutes a pair of consecutive letters either with the empty string or a single letter.
There exist finite presentations of surface groups which are complete rewriting systems (see [10] and [11]). It seems of interest, in view of Conjecture 2, to explore whether the harmonic measure on the set of infinite reduced words for these presentations have the hidden Markov property.
### Infinite non-reduced words
As an initial step towards the proof of Theorem 1 we give an explicit construction for the following:
**Lemma 1** (Non-reduced infinite words are hidden Markov).: _There exists a Markov chain \(y_{1},y_{2},\ldots\) on \(\operatorname{Prefix}(\operatorname{supp}(\mu))\) such that_
\[\lim_{n\to+\infty}\pi(y_{1})\pi(y_{2})\cdots\pi(y_{n}),\]
_exists, belongs to \(\Sigma^{\omega}\) almost surely, and has distribution \(\nu_{\Sigma^{\omega}}\)._
#### 2.1.1 Weighted prefix graph
We will construct a weighted graph out of \(\operatorname{Prefix}(\operatorname{supp}(\mu))\), see Figure 2 for an example.
We say \(w^{\prime}\) is a one step prefix of \(w\) and write \(w^{\prime}\stackrel{1}{\prec}w\), if \(w=w^{\prime}x\) for some \(x\in\Sigma\). More generally we write \(w^{\prime}\prec w\) if \(w^{\prime}\) is a prefix of \(w\).
**Definition 1** (Weighted prefix graph).: _By the prefix graph associated to \(\mu\) we mean the graph with vertex set \(\operatorname{Prefix}(\operatorname{supp}(\mu))\), where a single directed edge is added from \(w^{\prime}\) to \(w\) whenever \(w^{\prime}\stackrel{{ 1}}{{\prec}}w\), and also from \(w\) to \(\varepsilon\) for each \(w\in\operatorname{supp}(\mu)\)._
_To each edge of the form \(w\stackrel{{ 1}}{{\prec}}w^{\prime}\) we associate the weight_
\[W(w,w^{\prime})=\sum_{w^{\prime}\prec w^{\prime\prime}}\mu(\{w^{\prime\prime} \}),\]
_and to each edge of the form \(w\to\varepsilon\) the weight \(W(w,\varepsilon)=\mu(\{w\})\)._
We now prove two basic properties of the weighted prefix graph which we need to prove Lemma 1.
**Proposition 1**.: _There exists \(N\) such that every path of length \(N\) in the prefix graph passes through \(\varepsilon\) at least once._
Proof.: Each step along a path that does not arrive at \(\varepsilon\) increases word length. The maximal possible word length is \(\max\{|w|:w\in\operatorname{supp}(\mu)\}\) which is finite since \(\operatorname{supp}(\mu)\) is a finite set.
**Proposition 2** (Incoming and outgoing weights are equal).: _With the above definition one has_
\[\sum_{y}W(y,x)=\sum_{y}W(x,y),\]
_for all \(x\in\operatorname{Prefix}(\operatorname{supp}(\mu))\)._
Proof.: If \(x=\varepsilon\) the left-hand side is \(1\) since there is a term \(\mu(\{w\})\) for each \(w\in\operatorname{supp}(\mu)\). The right-hand side is
\[\sum_{x\in\Sigma}\sum_{x\prec w}\mu(\{w\}),\]
which is the same sum grouped by first letter and therefore is also \(1\).
For \(x\neq\varepsilon\) one has \(x^{\prime}\stackrel{1}{\prec}x\) for a unique (possibly empty) word \(x^{\prime}\in\operatorname{Prefix}(\operatorname{supp}(\mu))\). Hence, the left-hand side is
\[\sum_{x\prec w}\mu(\{w\}).\]
If \(x\notin\operatorname{supp}(\mu)\) then the right-hand side is
\[\sum_{x\stackrel{1}{\prec}x^{\prime\prime}}\sum_{x^{\prime\prime}\prec w}\mu(\{w\}),\]
which is the sum above regrouped by the first letter after \(x\).
If \(x\in\operatorname{supp}(\mu)\) then the two sums differ by \(\mu(\{x\})\) which appears in the right-hand side as \(W(x,\varepsilon)\). This concludes the proof of the claim.
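The following sketch (ours) builds the weighted prefix graph of Figure 2 for \(\operatorname{supp}(\mu)=\{a,a^{2},b\}\) with illustrative values of \((p_{1},p_{2},p_{3})\) and verifies the flow balance of Proposition 2 at every vertex; since this support contains only positive letters, it suffices to extend prefixes by \(a\) and \(b\).

```python
from collections import defaultdict

# Sketch (ours) of Definition 1 for supp(mu) = {a, aa, b}, as in Figure 2.
p = {"a": 0.2, "aa": 0.3, "b": 0.5}   # illustrative (p1, p2, p3)

prefixes = {""} | {w[:i] for w in p for i in range(1, len(w) + 1)}

W = defaultdict(float)
for w in prefixes:
    for x in "ab":                    # this support uses only positive letters
        if w + x in prefixes:         # edge w -> wx carries the mass extending wx
            W[(w, w + x)] = sum(p[u] for u in p if u.startswith(w + x))
for w in p:                           # edge w -> empty word, weight mu({w})
    W[(w, "")] = p[w]

# Proposition 2: incoming and outgoing weights agree at every vertex.
for v in prefixes:
    w_in = sum(wt for (u, x), wt in W.items() if x == v)
    w_out = sum(wt for (u, x), wt in W.items() if u == v)
    assert abs(w_in - w_out) < 1e-12, v
print(dict(W))
```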
#### 2.1.2 Proof of Lemma 1
To conclude the proof of Lemma 1 we consider the Markov chain on the weighted prefix graph whose transition probabilities are proportional to the given weights.
**Lemma 2** (Markov chain on the weighted prefix graph).: _Let \(y_{1},y_{2},\ldots\) be a Markov chain on the weighted prefix graph starting at \(\varepsilon\) and with transition probabilities given by_
\[P(x,y)=\frac{W(x,y)}{\sum_{z}W(x,z)}.\]
_Then \(\lim_{n\to+\infty}\pi(y_{1})\pi(y_{2})\cdots\pi(y_{n})\) has distribution \(\nu_{\Sigma^{\omega}}\)._
Proof.: Let \(\tau_{0}=1\) and inductively define \(\tau_{n+1}=\min\{k>\tau_{n}:y_{k}=\varepsilon\}\) for \(n=0,1,\ldots\). By the strong Markov property the sequence \(y_{\tau_{1}-1},y_{\tau_{2}-1},\ldots\) is i.i.d. It suffices to show that \(y_{\tau_{1}-1}\) has distribution \(\mu\) to prove the claim.
For this purpose fix \(w\in\mathrm{supp}(\mu)\), set \(n=|w|\), and observe using Proposition 2 that
\[\mathbb{P}\left(y_{\tau_{1}-1}=w\right)=P(\varepsilon,w[1:1])P(w[1:1],w[1:2])\cdots P(w[1:n-1],w)P(w,\varepsilon)=\frac{W(w,\varepsilon)}{\sum\limits_{x\in\Sigma}W(\varepsilon,x)}=\mu(\{w\}),\]
where the middle equality follows by telescoping: by Proposition 2 each denominator \(\sum_{z}W(w[1:k],z)\) equals the incoming weight \(W(w[1:k-1],w[1:k])\), which cancels the numerator of the preceding factor. \(\qed\)
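Continuing the sketch above (it reuses the dictionary `W` built there), one can simulate the chain of Lemma 2 and check empirically that the state visited just before each return to \(\varepsilon\), i.e. \(y_{\tau-1}\), has distribution \(\mu\):

```python
import random
from collections import defaultdict

def step(state):
    """One transition of the Lemma 2 chain: proportional to outgoing weights."""
    targets, weights = zip(*[(x, wt) for (u, x), wt in W.items() if u == state])
    return random.choices(targets, weights=weights)[0]

random.seed(0)
counts = defaultdict(int)
state, prev = "", None
for _ in range(200_000):
    prev, state = state, step(state)
    if state == "":                 # an excursion just returned to the empty word
        counts[prev] += 1           # record y_{tau - 1}
total = sum(counts.values())
print({w: round(c / total, 3) for w, c in counts.items()})
# expected ~ {'a': 0.2, 'aa': 0.3, 'b': 0.5}, i.e. the distribution mu
```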
### Proof of Theorem 1
Let \(y_{1},y_{2},\ldots\) be as in Lemma 1 and define \(S=\mathrm{Prefix}(\mathrm{supp}(\mu))\setminus\{\varepsilon\}\).
Given a reduced word \(w\) we define the stopping time
\[\tau_{w}=\min\{n:\overline{\pi(y_{1})\cdots\pi(y_{n})}=w\}.\]
Let \(f:S\to[0,1]\) be defined by
\[f(s)=\mathbb{P}\left(\tau_{\pi(s)}<+\infty,y_{\tau_{\pi(s)}}=s\right).\]
Similarly let \(g:S\times S\to[0,1]\) be defined by
\[g(s,s^{\prime})=\mathbb{P}^{s}\left(\tau_{\pi(s)\pi(s^{\prime})}<+\infty,y_{ \tau_{\pi(s)\pi(s^{\prime})}}=s^{\prime}\right),\]
where \(\mathbb{P}^{s}\) denotes the probability for the Markov chain conditioned to start at \(s\).
Figure 2: The weighted prefix graph associated to a probability \(\mu\) with support \(\mathrm{supp}(\mu)=\{a,a^{2},b\}\).
Finally let \(h:S\to[0,1]\) be defined by
\[h(s)=\mathbb{P}^{s}\left(\pi(s)\prec\overline{\pi(y_{1})\cdots\pi(y_{n})}\text{ for all $n$ large enough}\right).\]
**Definition 2** (Reduced prefix graph).: _We observe that \(g(s,s^{\prime})>0\) if and only if there exists a path \(s=s_{0}\to s_{1}\to\cdots\to s_{n}=s^{\prime}\) in the weighted prefix graph such that_
\[\overline{\pi(s_{0})\cdots\pi(s_{n})}=\pi(s)\pi(s^{\prime})\]
_and furthermore_
\[\overline{\pi(s_{0})\cdots\pi(s_{k})}\neq\pi(s)\pi(s^{\prime}),\]
_for all \(k<n\)._
_By the reduced prefix graph we mean the set \(S=\operatorname{Prefix}(\operatorname{supp}(\mu))\setminus\{\varepsilon\}\) with a single directed edge between \(s\) and \(s^{\prime}\) when the above property holds._
The following result concludes the proof of Theorem 1.
**Lemma 3**.: _Let \(r=\lim\limits_{n\to+\infty}\overline{\pi(y_{1})\cdots\pi(y_{n})}\) and set \(\tau_{n}=\min\{k:\overline{\pi(y_{1})\cdots\pi(y_{k})}=r[1:n]\}\) for \(n=1,2,\ldots\)._
_Then the sequence \(x_{1}=y_{\tau_{1}},x_{2}=y_{\tau_{2}},\ldots\) is a Markov chain and furthermore_
\[\mathbb{P}\left(x_{1}=s_{1},\ldots,x_{n}=s_{n}\right)=f(s_{1})g(s_{1},s_{2}) \cdots g(s_{n-1},s_{n})h(s_{n}), \tag{1}\]
_for all \(s_{1},\ldots,s_{n}\in S\)._
Proof.: It suffices to prove equation (1), since this implies that \(x_{1},\ldots,x_{n}\) is Markov with initial distribution \(p(s)=f(s)h(s)\) and transition probabilities \(P(s,s^{\prime})=h(s)^{-1}g(s,s^{\prime})h(s^{\prime})\).
For this purpose we let \(w=\pi(s_{1})\cdots\pi(s_{n})\) and write
\[\mathbb{P}\left(x_{1}=s_{1},\ldots,x_{n}=s_{n}\right)=\mathbb{P}\left(F_{1}\cap\cdots\cap F_{n}\cap G\right)=\mathbb{P}\left(F_{n}\cap G\right), \tag{2}\]
where \(F_{i}=\{\tau_{w[1:k]}<+\infty,y_{\tau_{w[1:k]}}=s_{k}\text{ for all }k\leq i\}\) and \(G=\{w\prec r\}\).
We observe that conditioned on \(F_{n}\) the event \(G\) is equivalent to
\[\left\{\pi(s_{n})\prec\overline{\pi(y_{\tau_{w}})\cdots\pi(y_{k+\tau_{w}})} \text{ for all }k\text{ large enough}\right\}.\]
Therefore, by the strong Markov property we obtain
\[\mathbb{P}\left(F_{n}\cap G\right)=\mathbb{P}\left(F_{n}\right)\mathbb{P} \left(G|F_{n}\right)=\mathbb{P}\left(F_{n}\right)h(s_{n}). \tag{3}\]
For each \(i=1,\ldots,n-1\) we observe that conditioned on \(F_{i}\) the event \(F_{i+1}\) is equivalent to there being some finite \(k\) such that \(\overline{\pi(y_{\tau_{w[1:i]}})\cdots\pi(y_{k+\tau_{w[1:i]}})}=\pi(s_{i})\pi(s_{i+1})\), with the minimal such \(k\) satisfying \(y_{k+\tau_{w[1:i]}}=s_{i+1}\).
Once again by the strong Markov property we obtain
\[\mathbb{P}\left(F_{i+1}\right)=\mathbb{P}\left(F_{i}\right)\mathbb{P}\left(F_ {i+1}|F_{i}\right)=\mathbb{P}\left(F_{i}\right)g(s_{i},s_{i+1}). \tag{4}\]
Hence we have
\[\mathbb{P}\left(x_{1}=s_{1},\ldots,x_{n}=s_{n}\right) =\mathbb{P}\left(F_{n}\cap G\right)\] \[=\mathbb{P}\left(F_{n}\right)\mathbb{P}\left(G|F_{n}\right)\] \[=\mathbb{P}\left(F_{n-1}\right)\mathbb{P}\left(F_{n}|F_{n-1} \right)h(s_{n})\] \[=\cdots\] \[=\mathbb{P}\left(F_{1}\right)g(s_{1},s_{2})\cdots g(s_{n-1},s_{n })h(s_{n}),\]
where the equality \(\mathbb{P}\left(F_{1}\right)=f(s_{1})\) is direct from the definition of \(f\). This concludes the proof.
## 3 Symbolic coding and thermodynamic formalism
Recall that there exists a family of half-planes \(\{H_{x}:x\in\Sigma\}\) whose closures in \(\mathbb{H}^{2}\cup\partial\mathbb{H}^{2}\) are disjoint and such that for all \(x\) one has \(\rho(x)H_{y}\subset H_{x}\) for all \(y\in\Sigma\setminus\{x^{-1}\}\).
We fix a basepoint \(o\in\mathbb{H}^{2}\setminus\bigcup\limits_{x\in\Sigma}H_{x}\) and let dist denote the hyperbolic distance. See Figure 1.
Given a finite reduced word \(w\in\Sigma_{R}^{*}\) we denote the set of infinite reduced words having \(w\) as a prefix by
\[[w]=\{w^{\prime}\in\Sigma_{R}^{\omega}:w\prec w^{\prime}\}.\]
**Theorem 2**.: _In the setting of Theorem A either \(\dim(\nu)<\dim(\Lambda)\) or there exists a constant \(C>1\) such that_
\[C^{-1}\exp(-\delta\operatorname{dist}(o,\rho(w)o))\leq\nu_{\Sigma_{R}^{\omega}}\left([w]\right)\leq C\exp(-\delta\operatorname{dist}(o,\rho(w)o)),\]
_for all \(w\in\Sigma_{R}^{*}\), where \(\delta=\dim(\Lambda)\)._
### Coding of the limit set
We recall basic facts on the symbolic coding of \(\Lambda\), see [1, Section 9].
The mapping \(\pi_{\Lambda}:\Sigma_{R}^{\omega}\to\Lambda\) defined by
\[\pi_{\Lambda}(w)=\lim_{n\to+\infty}\rho(w[1:n])o,\]
is a Hölder homeomorphism when \(\Lambda\) is endowed with the visual distance \(\operatorname{dist}_{o}\) based at \(o\) and \(\Sigma^{\omega}\) with the distance
\[\operatorname{dist}(w,w^{\prime})=\exp\left(-\min\{n:w[n]\neq w^{\prime}[n]\} \right).\]
We define \(f:\Sigma_{R}^{\omega}\to\mathbb{R}\) as
\[f(w)=\log|F^{\prime}(\pi_{\Lambda}(w))|,\]
where \(F:\Lambda\to\Lambda\) restricted to \(\overline{H_{x}}\cap\partial\mathbb{H}^{2}\) is \(\rho(x^{-1})\) for each \(x\in\Sigma\), and the derivative \(F^{\prime}\) is taken with respect to the constant speed parametrization with respect to visual distance based at \(o\).
### Thermodynamical formalism
The mapping \(F\) is conjugate via \(\pi_{\Lambda}\) to the left shift \(\sigma:\Sigma_{R}^{\omega}\to\Sigma_{R}^{\omega}\) defined by \(\sigma(w)[i]=w[i+1]\).
The Hausdorff dimension \(\delta=\dim(\Lambda)\) is characterized (see [1, Section 4]) as the unique value of \(s\geq 0\) such that \(P(s)=0\) where
\[P(s)=\sup_{m}\left\{h(m)-s\int f(w)dm(w)\right\},\]
and the supremum is taken over all shift-invariant probability measures \(m\), where \(h(m)\) denotes the measure-theoretic entropy of \(m\). There is a unique shift-invariant probability measure \(m_{\delta}\) attaining the supremum above.
Recall that the Gibbs measure on \(\Sigma_{R}^{\omega}\) corresponding to a Hölder potential \(\psi:\Sigma_{R}^{\omega}\to\mathbb{R}\) is the unique shift-invariant probability such that for some \(C>1\) one has
\[C^{-1}\exp\left(-nP+\sum_{k=0}^{n-1}\psi(\sigma^{k}(w^{\prime}))\right)\leq m ([w])\leq C\exp\left(-nP+\sum_{k=0}^{n-1}\psi(\sigma^{k}(w^{\prime}))\right),\]
for all \(w\in\Sigma_{R}^{*}\) and all \(w^{\prime}\in[w]\), where \(n=|w|\) and \(P\) is the pressure of \(\psi\), which is equal to \(0\) in the case of \(m_{\delta}\).
**Lemma 4**.: _If \(m\) is a Gibbs measure on \(\Sigma_{R}^{\omega}\) then either \(m=m_{\delta}\) or \(\dim(\pi_{*}m)<\dim(\Lambda)\)._
Proof.: From [1, Theorem 8.1.4] one has that \(\pi_{*}m\) is exact dimensional with dimension
\[\dim(\pi_{*}m)=\frac{h(m)}{\int f(w)dm(w)}.\]
Since \(m_{\delta}\) is the unique shift-invariant measure attaining the supremum \(P(\delta)=0\), for all \(m\neq m_{\delta}\) one has
\[h(m)-\delta\int f(w)dm(w)<P(\delta)=0.\]
Hence, \(\dim(\pi_{*}m)<\delta\) as claimed.
For \(\xi\in\partial\mathbb{H}^{2}\) we denote by \(b_{\xi}:\mathbb{H}^{2}\times\mathbb{H}^{2}\to\mathbb{R}\) the Busemann function defined by
\[b_{\xi}(x,y)=\lim_{t\to+\infty}\operatorname{dist}(\alpha(t),x)-\operatorname {dist}(\alpha(t),y),\]
where \(\alpha:[0,+\infty)\to\mathbb{H}^{2}\) is any geodesic ray ending at \(\xi\) parametrized by arclength (see [1, Theorem 1.18]).
Directly from the definition it follows that \(b_{\xi}(x,z)=b_{\xi}(x,y)+b_{\xi}(y,z)\) for all \(x,y,z\in\mathbb{H}^{2}\). Furthermore one has \(|b_{\xi}(x,y)|\leq\operatorname{dist}(x,y)\).
We will need the following additional property for what follows:
**Lemma 5**.: _For all \(g\in G\) one has \(|(g^{-1})^{\prime}(\xi)|=\exp(b_{\xi}(o,go))\) where \(g^{\prime}\) is the derivative with respect to the constant speed parametrization with respect to the visual distance on \(\partial\mathbb{H}^{2}\) based at \(o\)._
Proof.: The proof is by direct calculation in the Poincaré disk model where \(\mathbb{H}^{2}\) is identified with \(\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\) and \(\partial\mathbb{H}^{2}\) with the boundary circle \(\partial\mathbb{D}\). Setting \(o=0\), the parametrization of \(\partial\mathbb{D}\) is by Euclidean arclength.
We fix an orientation preserving isometry \(g\) which in this model is of the form
\[g(z)=\frac{az+b}{\overline{b}z+\overline{a}},\]
with \(|a|^{2}-|b|^{2}=1\). By direct calculation
\[g^{-1}(z)=\frac{\overline{a}z-b}{-\overline{b}z+a}.\]
We calculate \(|(g^{-1})^{\prime}(\xi)|=1/|a-\overline{b}\xi|^{2}\) while one has (see [1, pg. 273] and \(b_{\xi,x}\) defined there corresponds to \(-b_{\xi}(o,x)\) in our notation)
\[\exp(b_{\xi}(o,g(o)))=\frac{1-|g(0)|^{2}}{|g(0)-\xi|^{2}}=\frac{1-|b|^{2}/| \overline{a}|^{2}}{|b/\overline{a}-\xi|^{2}}=1/|b-\overline{a}\xi|^{2}.\]
Since \(|\xi|=1\) we have
\[|a-\overline{b}\xi|=|\overline{\overline{b}\xi-a}|=|b\overline{\xi}- \overline{a}|=|\overline{\xi}(b-\overline{a}\xi)|=|b-\overline{a}\xi|.\qed\]
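The identity of Lemma 5 can be checked numerically in the disk model; the sketch below (ours, for an arbitrary choice of \(a\) and \(b\) normalized so that \(|a|^{2}-|b|^{2}=1\)) compares \(|(g^{-1})^{\prime}(\xi)|\) with the Busemann expression \((1-|g(0)|^{2})/|g(0)-\xi|^{2}\):

```python
import cmath
import math

# Numerical sanity check (ours) of Lemma 5 in the Poincare disk model.
a, b = 2.0 + 1.0j, 1.5 - 1.0j
norm = math.sqrt(abs(a) ** 2 - abs(b) ** 2)   # any a, b with |a|^2 > |b|^2
a, b = a / norm, b / norm                     # now |a|^2 - |b|^2 = 1

def g(z):
    """The disk isometry z -> (az + b) / (conj(b) z + conj(a))."""
    return (a * z + b) / (b.conjugate() * z + a.conjugate())

xi = cmath.exp(0.7j)                          # a boundary point, |xi| = 1
lhs = 1.0 / abs(a - b.conjugate() * xi) ** 2  # |(g^{-1})'(xi)|
rhs = (1 - abs(g(0)) ** 2) / abs(g(0) - xi) ** 2   # exp(b_xi(o, g o))
assert math.isclose(lhs, rhs), (lhs, rhs)
print(lhs, rhs)
```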
We now establish the estimates for \(m_{\delta}\) needed for Theorem 2.
**Lemma 6**.: _There exists \(C>1\) such that_
\[C^{-1}\exp(-\delta\operatorname{dist}(o,\rho(w)o))\leq m_{\delta}([w])\leq C \exp(-\delta\operatorname{dist}(o,\rho(w)o)),\]
_for all \(w\in\Sigma_{R}^{*}\)._
Proof.: From the Gibbs property of \(m_{\delta}\) we obtain
\[m_{\delta}([w])\approx|(F^{|w|})^{\prime}(\pi_{\Lambda}(\tilde{w}))|^{-\delta }=|(\rho(w)^{-1})^{\prime}\xi|^{-\delta}\]
where \(F^{k}\) denotes the \(k\)-th iterate of \(F\), \(\approx\) means up to a multiplicative constant independent of \(w\), \(\tilde{w}\in[w]\), and \(\xi=\pi_{\Lambda}(\tilde{w})\).
By Lemma 5 we have
\[|(\rho(w)^{-1})^{\prime}\xi|^{-\delta}=\exp(-\delta b_{\xi}(o,\rho(w)o)).\]
Now take \(R>0\) sufficiently large so that all geodesics joining \(H_{x}\) to \(H_{y}\) for distinct \(x,y\in\Sigma\) intersect \(B(o,R)\). For this value of \(R\) one has that the geodesic ray from \(o\) to \(\xi\) must intersect \(B(\rho(w)o,R)\). It follows that
\[|b_{\xi}(o,\rho(w)o)-\operatorname{dist}(o,\rho(w)o)|\leq 2R.\qed\]
### Proof of Theorem 2
From [11, Proposition 3.2] there exists a continuous positive density \(\varphi\) such that \(\varphi\nu_{\Sigma_{R}^{\omega}}\) is a Gibbs measure.
Assume that \(\dim(\nu)=\dim(\Lambda)\). Since \(\nu=\pi_{*}\nu_{\Sigma_{R}^{\omega}}\) from Lemma 4 it follows that \(\varphi\nu_{\Sigma_{R}^{\omega}}=m_{\delta}\). In particular, since \(\varphi\) is bounded away from zero and infinity there exists \(C^{\prime}>1\) such that
\[(C^{\prime})^{-1}m_{\delta}([w])\leq\nu_{\Sigma_{R}^{\omega}}([w])\leq C^{ \prime}m_{\delta}([w]),\]
for all reduced words \(w\). Combining this with Lemma 6 concludes the proof.
## 4 Proof of Theorem A
We denote the translation length of an isometry \(g\) of \(\mathbb{H}^{2}\) by
\[\ell(g)=\lim_{n\to+\infty}\frac{1}{n}\operatorname{dist}(o,g^{n}o).\]
We say a word \(w\in\Sigma_{R}^{*}\) is cyclically reduced if its first and last letters are not inverses.
**Lemma 7**.: _Either \(\dim(\nu)<\dim(\Lambda)\) or for all cyclically reduced \(w\in\Sigma_{R}^{*}\) one has_
\[\lim_{n\to+\infty}\left(\nu_{\Sigma_{R}^{\omega}}([w^{n}])\right)^{\frac{1}{ n}}=\exp(-\delta\ell(\rho(w))).\]
Proof.: This is a direct corollary of Theorem 2.
**Lemma 8**.: _Either \(\dim(\nu)<\dim(\Lambda)\) or for all cyclically reduced \(aw_{1},aw_{2}\in\Sigma_{R}^{*}\) one has_
\[\ell(\rho(aw_{1}aw_{2}))=\ell(\rho(aw_{1}))+\ell(\rho(aw_{2})).\]
Proof.: Given \(w\in\Sigma_{R}^{*}\) we denote by \(A(w)\) the set of \(w\)-paths, which is to say \(\alpha\in A(w)\) if \(\alpha=(s_{0}\to\cdots\to s_{n})\) is a path in the reduced prefix graph such that \(\pi(s_{0})\cdots\pi(s_{n})=w\). For such paths we write \(g(\alpha)=g(s_{0},s_{1})\cdots g(s_{n-1},s_{n})\), and \(g_{+}(\alpha)=g(\alpha)h(s_{n})\).
By hypothesis there is a unique element \(s_{a}\in\operatorname{Prefix}(\operatorname{supp}(\mu))\) such that \(\pi(s_{a})=a\). This implies that if \(\alpha\in A(w)\) and \(w[i]=a\) then the \(i\)-th vertex visited by \(\alpha\) is \(s_{a}\).
Using this observation and Lemma 3 we calculate for \(aw\) cyclically reduced
\[\nu_{\Sigma_{R}^{\omega}}([(aw)^{n}])=f(s_{a})\left(\sum_{\alpha\in A(awa)}g( \alpha)\right)^{n-1}\sum_{\alpha\in A(aw)}g_{+}(\alpha).\]
By Lemma 7 this implies
\[\exp(-\delta\ell(\rho(aw)))=\sum_{\alpha\in A(awa)}g(\alpha),\]
whenever \(aw\) is cyclically reduced.
In particular we have
\[\exp(-\delta\ell(\rho(aw_{1}aw_{2}))) =\sum_{\alpha\in A(aw_{1}aw_{2}a)}g(\alpha)\] \[=\sum_{\alpha\in A(aw_{1}a)}g(\alpha)\sum_{\alpha\in A(aw_{2}a)}g(\alpha)\] \[=\exp(-\delta\ell(\rho(aw_{1})))\exp(-\delta\ell(\rho(aw_{2}))),\]
which concludes the proof.
If it were the case that \(\dim(\nu)=\dim(\Lambda)\), by Lemma 8 setting \(w=aa\) and \(w^{\prime}=ab\) one would have \(\ell(\rho(ww^{\prime}))=\ell(\rho(w))+\ell(\rho(w^{\prime}))\).
Notice that \(\rho(w)\) is a translation along a geodesic beginning in \(H_{a^{-1}}\) and ending in \(H_{a}\) while the axis of \(\rho(w^{\prime})\) begins in \(H_{b^{-1}}\) and ends in \(H_{a}\).
If the axes intersect then \(\ell(\rho(ww^{\prime}))<\ell(\rho(w))+\ell(\rho(w^{\prime}))\). If they do not, they must be at a positive distance, and since both translations are toward the endpoint in \(H_{a}\) we would have \(\ell(\rho(ww^{\prime}))>\ell(\rho(w))+\ell(\rho(w^{\prime}))\) (see for example [15, Theorem 2.1]).
In both cases \(\ell(\rho(ww^{\prime}))\neq\ell(\rho(w))+\ell(\rho(w^{\prime}))\), from which we deduce that \(\dim(\nu)<\dim(\Lambda)\).
## 5 Acknowledgments
The authors would like to thank Françoise Dal'bo, Sébastien Gouëzel and François Ledrappier for several helpful conversations. We also thank Steve Lalley, Jean Mairesse, Pablo Shmerkin, and Wolfgang Woess for directing us towards relevant literature.
|
2304.05928 | Cosmological constraints from HSC Y1 lensing convergence PDF | We utilize the probability distribution function (PDF) of normalized
convergence maps reconstructed from the Subaru Hyper Suprime-Cam (HSC) Y1 shear
catalogue, in combination with the power spectrum, to measure the matter
clustering amplitude $S_8=\sigma_8\sqrt{\Omega_m/0.3}$. The large-scale
structure's statistical properties are incompletely described by the
traditional two-point statistics, motivating our investigation of the PDF -- a
complementary higher-order statistic. By defining the PDF over the standard
deviation-normalized convergence map we are able to isolate the non-Gaussian
information. We use tailored simulations to compress the data vector and
construct a likelihood approximation. We mitigate the impact of survey and
astrophysical systematics with cuts on smoothing scales, redshift bins, and
data vectors. We find $S_8=0.860^{+0.066}_{-0.109}$ from the PDF alone and
$S_8=0.798^{+0.029}_{-0.042}$ from the combination of PDF and power spectrum
(68% CL). The PDF improves the power spectrum-only constraint by about 10%. | Leander Thiele, Gabriela A. Marques, Jia Liu, Masato Shirasaki | 2023-04-12T15:48:54Z | http://arxiv.org/abs/2304.05928v2 | # Cosmological constraints from HSC Y1 lensing convergence PDF
###### Abstract
We utilize the probability distribution function (PDF) of convergence maps reconstructed from the Subaru Hyper Suprime-Cam (HSC) Y1 shear catalogue, in combination with the power spectrum, to measure the matter clustering amplitude \(S_{8}=\sigma_{8}\sqrt{\Omega_{\rm m}/0.3}\). The large-scale structure's statistical properties are incompletely described by the traditional two-point statistics, motivating our investigation of the PDF -- a complementary higher-order statistic. By defining the PDF over the standard deviation-normalized convergence map we are able to isolate the non-Gaussian information. We use tailored simulations to compress the data vector and construct a likelihood approximation. We mitigate the impact of survey and astrophysical systematics with cuts on smoothing scales, redshift bins, and data vectors. We find \(S_{8}=0.852^{+0.048}_{-0.094}\) from the PDF alone and \(S_{8}=0.827^{+0.033}_{-0.044}\) from the combination of PDF and power spectrum (\(68\,\%\,\rm{CL}\)). The PDF improves the power spectrum-only constraint by about \(10\,\%\).
## I Introduction
Weak lensing surveys are rapidly catching up with the precision afforded by cosmic microwave background (CMB) experiments (e.g., [1; 2; 3]). Weak lensing directly probes the clustering of matter through statistical measurements of distortion of galaxy shapes. In recent years, weak lensing surveys have produced increasingly precise measurements of the matter clustering amplitude \(S_{8}\)[e.g., 4; 5; 6; 7; 8], showing hints of a discrepancy between lensing and CMB determinations, dubbed the \(S_{8}\)-tension. Typically, these studies adopt two-point statistics -- the two-point correlation function or the power spectrum.
While two-point statistics are sufficient to describe Gaussian fields such as the CMB, the large-scale structure that sources the weak lensing signal has undergone nonlinear growth and is thus far from Gaussian. Therefore, weak lensing non-Gaussian statistics have been proposed to extract complementary information.1 Equally importantly, non-Gaussian statistics are affected by systematics differently than the two-point statistics. This is especially relevant for the \(S_{8}\)-tension, since whether its origin stems from new physics or systematics is currently under hot debate. Non-Gaussian statistics will be particularly beneficial for upcoming surveys such as _Rubin_ LSST [47], Euclid [48], and _Roman_[49]. These surveys will provide a high source density and hence probe deeper into the nonlinear regime.
Footnote 1: Popular non-Gaussian statistics include minima, peaks [9; 10; 11; 12; 13; 14; 15; 16; 17], voids [18], Minkowski functionals [19; 20; 21; 22; 23; 24; 25], Betti numbers [26; 27], persistent homology [28; 29], wavelets [30], scattering transform [31; 32], moments [21; 22; 33; 34; 35; 36], higher-order correlation functions [37; 38; 39; 40; 41; 42; 43; 44; 45], and convolutional neural networks [43; 44; 45]. A comparison of some of these in a forecast setting for Euclid has been carried out by [46], finding about similar performance for many of these statistics but limited to the Fisher approximation.
In this work, we focus on the probability distribution function [PDF, 50; 51; 52; 53] of the lensing convergence map2 from the Subaru Hyper Suprime-Cam (HSC) Y1 data release [54; 55]. HSC is currently the deepest large-scale lensing survey and is thus considered a precursor of Stage-IV surveys. The PDF collects the amplitudes -- but not shapes -- of correlation functions of all orders. Thus, it is a highly non-Gaussian statistic and contains distinct information "far" from the power spectrum. Forecasts have shown promise in tightening the constraints on not only \(S_{8}\) but also the neutrino mass sum and the dark energy equation of state [56; 57; 58; 59].
Footnote 2: Also called mass map.
Our work is the first to obtain cosmological constraints from the lensing PDF with observations. While there exist analytic models of the PDF based on deviation theory [60; 61; 62] and the halo model [63; 64; 59], they are not yet
at the level of accuracy and flexibility required to incorporate the complex survey configurations and systematics. Therefore, we use a large set of cosmological simulations tailored to HSC Y1 data to model the lensing PDF and its likelihood.
## Methods
In this section, we provide a brief overview of the HSC Y1 data, the PDF and power spectrum measurements, the simulations suite, the likelihood, our blinding procedure, and systematics. More detail may be found in our companion paper on counts of lensing peaks and minima [65].
### HSC Y1 Data
We use lensing convergence maps estimated from the HSC Y1 galaxy shapes catalogue [66]. After applying masks, the shear maps span \(136.9\,\mathrm{deg}^{2}\) in six spatially disjoint fields. We adopt galaxy redshifts determined using the MLZ code [67]. After applying redshift cuts of \(0.3<z_{s}<1.5\) and restricting to sources with reliable shape measurements, we obtained a total number density of \(\sim 17\,\mathrm{arcmin}^{-2}\). We split source galaxies into four tomographic redshift bins with edges [0.3, 0.6, 0.9, 1.2, 1.5] and construct convergence maps via Kaiser-Squires inversion [68].
### PDF and power spectrum
The primary summary statistic we consider is the lensing PDF, calculated as histograms of pixels in normalized convergence maps. One crucial step in our computation is that we define the PDF over the signal-to-noise ratio, i.e., each convergence map is divided by its standard deviation before its pixels are histogrammed. This is important to remove information duplicated in the power spectrum. The PDF's non-Gaussian character becomes apparent in Fig. 1, where the tails deviate from a Gaussian (which is dominated by galaxy shape noise).
We measure PDFs on maps smoothed with Gaussian filters \(\propto\exp\bigl{(}-\theta^{2}/\theta_{s}^{2}\bigr{)}\), where we choose \(\theta_{s}=\{5,7,10\}\,\mathrm{arcmin}\) to mitigate systematics while retaining non-Gaussian information. For each smoothing scale, we histogram in 19 equally spaced bins between -4 and 4 (cf. Fig. 1). Of these, we remove the first three bins to minimize contamination by baryons [cf. 69]. Finally, we remove the 10th bin as otherwise the linear constraint from fixing the first three moments would render the PDF's auto-covariance almost singular.
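Schematically, the measurement amounts to smoothing, normalizing by the map's standard deviation, and histogramming; the sketch below (ours, not the pipeline code) illustrates this with `scipy`, with the smoothing scale given in pixels and the conversion from \(\theta_{s}\) to the filter width glossed over.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_pdf(kappa, mask, sigma_pix, bins=np.linspace(-4, 4, 20)):
    """Histogram of a smoothed, standard-deviation-normalized convergence map."""
    smoothed = gaussian_filter(kappa, sigma_pix)   # Gaussian smoothing in pixels
    pix = smoothed[mask]                           # keep unmasked pixels only
    snr = pix / np.std(pix)                        # normalize out the variance
    hist, _ = np.histogram(snr, bins=bins, density=True)
    return hist                                    # 19 bins; cuts later drop 4

# Example on a pure-noise map:
rng = np.random.default_rng(0)
kappa = rng.normal(size=(512, 512))
mask = np.ones_like(kappa, dtype=bool)
print(normalized_pdf(kappa, mask, sigma_pix=5.0))
```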
We also consider the auto-power spectrum for each tomographic redshift bin, deconvolving the survey mask using the pseudo-\(C_{\ell}\) method as implemented in NaMaster[70, 71]. We measure \(C_{\ell}^{\kappa\kappa}\) in 14 logarithmically spaced bins in angular multipole \(81<\ell<6580\). Of these, we remove \(\ell<285\) as Ref. [72] found unmodeled systematic errors for these scales in HSC Y1 data. Furthermore, we remove \(\ell>1000\) due to possible contamination by baryons. We note that \(\ell_{\mathrm{max}}\sim 1000\) is also comparable to the minimum smoothing scale of \(5\,\mathrm{arcmin}\) used in the PDF data vector. This leaves us with four \(\ell\)-bins. In contrast to previous two-point analyses [73, 74] we do not include cross-spectra between tomographic redshift bins. Upon unblinding the power spectrum-only posteriors we found that the highest tomographic redshift bin causes significant shifts in \(S_{8}\).3 Thus, we exclude the highest redshift bin in our analysis.
Footnote 3: This may be due to deficiencies in our simulations; for example, the simulations use lower resolution at high redshifts which may be not sufficient. Alternatively, the highest source redshift bin may be subject to photo-\(z\) calibration errors [7, 8].
In summary, our raw data vector consists of the PDF with 135 dimensions (3 smoothing scales \(\times\) 3 redshifts \(\times\) 15 bins) and the power spectrum with 12 dimensions (3 redshifts \(\times\) 4 bins).
Figure 1: Example of the non-Gaussian lensing convergence PDFs from the HSC Y1 data (orange) and the fiducial simulation (blue), for the tomographic redshift bin \(0.6<z_{s}<0.9\) and with a smoothing scale \(\theta_{s}=5\,\mathrm{arcmin}\). For most of the data points the error bars are invisible. Unfilled data points are not used in the analysis. The PDF’s deviation from a Gaussian (black) provides the cosmological information to be exploited in this work.
### Simulations
We adopt two sets of \(N\)-body simulations in our analysis:
1. To model the covariance, we use 2268 realizations of the HSC Y1 footprint at a fiducial point (\(S_{8}=0.791\), \(\Omega_{m}=0.279\)), generated from the 108 quasi-independent [75] full-sky maps of Ref. [76].
2. To model our statistics, we use simulations at 100 different cosmological models with varying values of \(S_{8}\) and \(\Omega_{m}\), as introduced in Ref. [77]. For each cosmology, 50 quasi-independent realizations are generated by randomly placing observers within the periodic box. These simulations are used to construct emulators for the mean theory prediction; thus, they are designed to have better accuracy than the fiducial simulations.
For each of these 7268 (=2268+100\(\times\)50) realizations, mock galaxy shapes are generated following Refs. [78; 79; 80]. Convergence map generation and summary statistic measurements are performed on these simulated mocks in the same way as for the real data.
### Likelihood and inference
For the PDF, to reduce the size of the data vector and Gaussianize its likelihood, we score-compress the logarithmic PDF under the approximation of a Gaussian likelihood [MOPED, 81].4 This reduces the number of PDF bins from 135 to only 2, corresponding to the number of parameters (\(S_{8}\) and \(\Omega_{m}\)). We then construct at each of the 100 cosmological models a 2\(\times\)2 covariance matrix, using the 50 realizations. Finally, we build emulators of both the compressed PDF and the cosmology-dependent inverse covariance using Gaussian processes. If we use a cosmology-independent covariance the \(S_{8}\)-posterior tightens and its peak is almost unchanged. However, with that choice the ranks plot discussed below would indicate an overconfident posterior.
Footnote 4: We note that upon combining the PDF with the power spectrum, the compression is likely sub-optimal; future work could investigate more rigorous ways to preserve a maximum of information. Of course, the choice of likelihood to compute the score cannot induce biases, only information loss.
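A minimal sketch of such a score compression, assuming a Gaussian likelihood with mean \(\mu(\theta)\), its parameter derivatives, and a fixed covariance (the cosmology-dependent covariance and the logarithm of the PDF used in our pipeline are omitted here; all names are illustrative):

```python
import numpy as np

def moped_compress(d, mu, dmu, cov):
    """Score compression t = (dmu/dtheta) C^{-1} (d - mu).

    dmu has shape (n_params, n_data); the output has one entry per
    parameter, here 2 for (S8, Omega_m).
    """
    return dmu @ np.linalg.solve(cov, d - mu)

# Toy dimensions matching the text: 135-dimensional PDF data vector -> 2 numbers
rng = np.random.default_rng(2)
n = 135
cov = np.eye(n)
mu = rng.normal(size=n)                 # emulator mean at the fiducial point
dmu = rng.normal(size=(2, n))           # derivatives, e.g. by finite differences
d = mu + rng.normal(size=n)             # mock "observed" data vector
print(moped_compress(d, mu, dmu, cov))  # 2-dimensional compressed statistic
```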
The power spectrum data vector is small enough and its distribution is known to be close to Gaussian. Therefore, we apply no data compression to the power spectrum. We build a Gaussian process emulator to model the power spectrum and estimate a cosmology-independent covariance matrix using the fiducial simulations. To jointly analyze the compressed PDF and the power spectrum, we approximate the cross-correlation between them as constant and estimate it from the fiducial simulations.
We adopt uniform priors on our parameters \(S_{8}\) and \(\Omega_{m}\), in intervals \([0.5,1.0]\) and \([0.2,0.4]\), respectively. Our prior is well-covered by the available simulations. Markov chain Monte Carlo (MCMC) sampling is performed using emcee[82; 83].
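The sampling setup can be sketched as follows, with our uniform priors hard-coded and a Gaussian toy likelihood standing in for the emulator-based one:

```python
import numpy as np
import emcee

def log_prob(theta):
    """Toy log-posterior with our uniform priors on (S8, Omega_m)."""
    s8, om = theta
    if not (0.5 < s8 < 1.0 and 0.2 < om < 0.4):
        return -np.inf           # outside the prior box
    # placeholder Gaussian likelihood standing in for the emulator-based one
    return -0.5 * ((s8 - 0.8) / 0.05) ** 2 - 0.5 * ((om - 0.3) / 0.05) ** 2

ndim, nwalkers = 2, 16
rng = np.random.default_rng(0)
p0 = np.column_stack([rng.uniform(0.5, 1.0, nwalkers),
                      rng.uniform(0.2, 0.4, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000)
chain = sampler.get_chain(discard=500, flat=True)
print(chain.shape)   # (24000, 2): posterior samples of (S8, Omega_m)
```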
We validate our likelihoods with a "ranks plot" [e.g., 84] shown in Fig. 2. To do so, we run MCMC on realizations drawn from the 30 cosmology-varied simulations within our prior. To cleanly separate our training and test sets, all emulators are rebuilt without the test cosmology. We then order each Markov chain by its \(S_{8}\) values and find the rank of the true \(S_{8}\). For a valid likelihood, it should be impossible to statistically distinguish the true \(S_{8}\) from randomly selected Monte Carlo samples, so the ranks should follow a uniform distribution. If the posterior is over-confident (under-confident), the histogram exhibits a U-shape (inverse U). However, because we only have a relatively small number of simulated cosmologies available within our prior, a perfectly uniform distribution may not be possible to attain. Indeed, Fig. 2 shows an approximately uniform distribution for all data vector choices (PDF, power spectrum, and them jointly), except for spikes at the edges for the power spectrum and joint statistic. These spikes are attributable to a small number of simulations near the prior boundary where the power spectrum emulation appears to work less well. We do not expect these to affect our results as they have little overlap with our final posterior and the problem is mostly due to underestimated tails which are less important for the confidence levels typically considered.
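The construction of the ranks can be illustrated with a toy Gaussian model (ours); a perfectly calibrated posterior gives uniform ranks, while an overconfident one piles them up at the edges:

```python
import numpy as np

def rank_of_truth(chain_s8, true_s8):
    """Position of the true S8 within the ordered Markov chain."""
    return int(np.searchsorted(np.sort(chain_s8), true_s8))

# Toy model (ours): truth drawn from the prior, one Gaussian observation,
# and exact posterior samples N(obs, 1); the resulting ranks are uniform.
rng = np.random.default_rng(1)
ranks = []
for _ in range(300):
    true = rng.normal()
    obs = true + rng.normal()                 # mock "observation"
    chain = obs + rng.normal(size=1000)       # posterior samples given obs
    ranks.append(rank_of_truth(chain, true))
hist, _ = np.histogram(ranks, bins=10, range=(0, 1000))
print(hist)   # roughly flat; a U-shape would signal an overconfident posterior
```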
### Systematics
We study potential biases caused by effects not included or wrongly described in the simulated model by running inference on fiducial mocks contaminated with the following methods:
* We simulate miscalibration of multiplicative bias by shifting it by \(\pm 1\,\%\), corresponding to the uncertainty on the mean quoted in Ref. [85].
* We assess the impact of photometric redshift uncertainty by generating fiducial mocks with source redshifts from two other codes, frankenz and mizuki.
* We model the impact of baryonic effects using the
\(\kappa\)TNG simulations [86] -- two sets of convergence maps with and without baryons. We contaminate the fiducial mocks with the ratio of hydrodynamic to dark matter-only data vectors.
* We estimate the impact of intrinsic alignments similarly to the baryons case by multiplying the fiducial data vectors with a fixed ratio which we obtain from mocks with nonlinear intrinsic alignments [87]. We choose alignment amplitudes, \(A_{\rm IA}=\{-0.32,1.18\}\), comparable to the \(1\sigma\) constraints in Refs. [73; 74].

We show the resulting biases on \(S_{8}\) for these systematics in the left panel of Fig. 3 (also see the more comprehensive discussion in Ref. [65]). The most constraining joint PDF+\(C_{\ell}^{\kappa\kappa}\) data vector is used. We do not find biases exceeding \(0.45\sigma\) in \(S_{8}\).
In addition, we also consider potential biases caused by imperfections in the simulations and emulators:
* The mean recovered \(S_{8}\) on the fiducial simulations is about \(1\sigma\) higher than the input, indicating a systematic difference with the cosmology-varied simulations. This is likely due to the lower fidelity of the fiducial simulations. With this in mind, the observed \(1\sigma\) bias can be considered an upper bound on biases caused by resolution effects in the \(N\)-body simulations.
* To assess biases caused by the limited training set size, at each cosmology we divide the realizations into 25 samples each for training and testing. We match the realization indices (i.e., observer positions) between cosmologies so as to maximize statistical independence between training and testing samples. In addition, all realizations at the test cosmology are removed from the training. The right panel of Fig. 3 shows the average biases on \(S_{8}\) within our prior range, in units of the standard deviation. The bias is computed using the mode of the marginalized \(S_{8}\) posterior, to minimize the effect of the prior which can shift the mean. The bias is a few tens of percent for most of the points, except at the edges where the emulator quality decreases. Since the described test aggressively reduces the training set size, our bias estimates are likely a conservative upper bound.
### Blinding
To help build best practice for non-Gaussian statistical analyses with Stage-IV surveys (e.g., [46]), we follow a three-step blinding procedure in our analysis:
1. We build the mock generation pipeline using one random realization from the fiducial model as "observation". This includes shear bias correction, convergence map reconstruction, masking, in-painting, and summary statistics measurements.
2. We construct the inference pipeline using the ranks plot and similar tests. This includes data compression, covariances, emulators, and MCMC sampling.
3. We select smoothing scales, redshift bins, and cuts on data vectors to minimize the impact of systematics.
Figure 2: Validations of our likelihoods. The histograms are over hundreds of Markov chains, run with realizations from the cosmo-varied simulation set as “observation”. The \(x\)-axis is the position of the simulation’s true \(S_{8}\) value in the ordered chain. A uniform distribution is expected for a valid likelihood. The grey band represents the \(2\sigma\) interval approximating the bin counts as Poissonian.

Figure 3: _Left_: Estimate of systematic errors. We contaminate the PDF+\(C_{\ell}^{\kappa\kappa}\) data vector with various systematics as described in the text. No systematic exceeds \(0.45\sigma(S_{8})\). _Right_: Assessment of training set and emulator quality. Each point is one of the cosmologies for which cosmo-varied realizations exist. The bias indicated is estimated by averaging over chains run on 25 realizations, while the emulator is trained with the other 25 realizations and without including the cosmology where the inference is performed.

4. First unblinding: B-modes. We compare the power spectra and PDFs measured in B-mode maps of the HSC Y1 data and our fiducial simulation.5 We validate that the two distributions are statistically consistent.
5. Second unblinding: power spectrum. We apply inference to the measured HSC Y1 power spectrum and compare our results internally between different redshift bins and to the official HSC Y1 analyses by Refs. [73; 74]. We discover systematics in the highest tomographic redshift bin and remove it from our final analysis.
6. Third unblinding: PDF. We unblind the PDF-only posterior during an invited presentation at the HSC weak lensing working group telecom. The initial posterior was approximately uniform. This failure was due to a bug in computing the standard deviations of the HSC data maps.6 The bug did not affect our previous steps, so once we resolved it we did not change any other parts of the pipeline.

Footnote 5: B-mode maps are built by rotating galaxy shapes by 45 degrees.
Footnote 6: Thanks to the feedback from the HSC weak lensing working group, we learned a valuable lesson of comparing the PDFs before moving on to inferences. They also pointed out that a more sophisticated blinding policy, as typically employed by collaborations, would have flagged this issue before unblinding.
Since the HSC Y1 data is already public our blinding procedure is rather an honor system.7
Footnote 7: During our study, we also became aware of other recently completed non-Gaussian statistical analyses using the same dataset [88; 45]. To avoid confirmation bias, we refrained from reading these papers until we finalized our results.
## Results
We show the \(S_{8}\) posteriors obtained from the power spectrum, the PDF, and the two combined in Fig. 4. In all cases, the constraint on \(\Omega_{m}\) is prior-dominated and thus not shown here for brevity. For reference, we also show the 68 % intervals from the official HSC Y1 two-point analyses with the power spectrum [73] and the two-point correlation function [74].
Our two-point result (blue) agrees well with the previous analyses in terms of mean value. However, our posterior is wider: \(\sigma\simeq 0.042\) compared to \(\sigma\simeq 0.03\) in the previous two-point analyses. This is expected as these analyses differ in many respects, such as additional cross-spectra between tomographic redshift bins, choice of priors, treatment of systematic errors, data vector binning, and scale cuts.
Our PDF-only posterior (orange) shows a clear detection of \(S_{8}\), or specifically _nonlinear_ clustering, at 9 \(\sigma\). This is after removing variance information from the PDF and thus almost independent of the two-point results. The mean \(S_{8}\) value is slightly higher than the one from \(C_{\ell}^{\kappa\kappa}\) but statistically consistent. Upon combining the PDF and \(C_{\ell}^{\kappa\kappa}\), we observe a slight upwards shift in the mean \(S_{8}\) and a \(\approx\,10\) % tightening of the posterior. This shows that the PDF indeed contains complementary information to the power spectrum.
In many aspects, our results match the CNN analysis from Ref. [45], including the \(S_{8}\) posterior and an \(\mathcal{O}(10\,\%)\) improvement compared to the power spectrum.
## Conclusions
Figure 4: \(S_{8}\) posteriors from different data vector choices. Since the posteriors are slightly asymmetric, we quote their modes as well as the symmetric 68 % confidence interval. For comparison, we include 68 % intervals from Refs. [73; 74]. Note that these analyses differ in various details. The inset compares the best-fit model to the data (PTE = 0.92). We remind the reader that the PDF does not contain information on the variance.

In this work, we obtained the first cosmological constraints using the probability distribution function of weak lensing convergence. We built convergence maps using the HSC Y1 shear catalogue, constructed a likelihood based on tailored numerical simulations, and validated that known survey and astrophysical systematics are under control after applying cuts on smoothing scales, redshift, and data vectors. We designed and followed a three-step blinding procedure to minimize confirmation bias. We make our inference code publicly available at this URL.
The PDF improves the power spectrum-only constraint by about 10 %. For the clustering amplitude, we obtained \(S_{8}=0.852^{+0.086}_{-0.094}\) and \(S_{8}=0.827^{+0.033}_{-0.044}\) from the lensing PDF alone and the combination of PDF and power spectrum, respectively (68 % CL). We computed the PDF on convergence maps normalized by their standard deviation, hence maximally removed two-point information from the PDF. Our results are consistent with previous analyses on the same data and show that the PDF provides additional information not contained in the power spectrum.
We find no tension between the \(S_{8}\) inferred from HSC Y1 lensing and from primordial CMB measurements.
Future work could investigate alternatives to the probably lossy compression step as well as the forward modeling of systematics to allow inclusion of smaller scales. Additionally, Stage-IV data will be sensitive to cosmological parameters beyond \(S_{8}\), for example, the neutrino mass sum. The PDF could be instrumental in complementing two-point statistics in order to optimally constrain such model extensions.
Acknowledgements. We thank Sihao Cheng, Will Coulton, Daniela Grandón, Surhud More, Ken Osato, David Spergel, Sunao Sugiyama, and Masahiro Takada for useful discussions. We thank Joachim Harnois-Déraps for sharing the IA mocks. We thank the organizers of the Kyoto CMBxLSS workshop, during which this work was completed. This work was supported by JSPS KAKENHI Grant Number 23K13095 (to JL), 19K14767 and 20H05861 (to MS). This work was supported in part by the UTokyo-Princeton Strategic Partnership Teaching and Research Collaboration. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. The authors are pleased to acknowledge that the work reported on in this paper was substantially performed using the Princeton Research Computing resources at Princeton University which is a consortium of groups led by the Princeton Institute for Computational Science and Engineering (PICSciE) and Office of Information Technology's Research Computing.
|
2310.10504 | Flavour Physics at LHCb -- 50 years of the KM paradigm | The LHCb experiment is in operation since 2009 and has provided measurements
of the KM matrix with unprecedented precision. 50 years after the original
paper we are in the position of pinning down the parameters of the theory, or
possibly to show its limitations. In these proceedings the status of LHCb is
shown in a historical perspective, along with some anecdotes. | Patrick Koppenburg | 2023-10-16T15:22:54Z | http://arxiv.org/abs/2310.10504v1 | # Flavour Physics at LHCb -- 50 years of the KM paradigm
###### Abstract
The LHCb experiment is in operation since 2009 and has provided measurements of the KM matrix with unprecedented precision. 50 years after the original paper we are in the position of pinning down the parameters of the theory, or possibly to show its limitations. In these proceedings the status of LHCb is shown in a historical perspective, along with some anecdotes.
LHCb-PROC-2023-003, Nikhef-2023-005
## 1 Introduction
It took Kobayashi and Maskawa [1] a bit short of a decade after the observation of \(C\!P\) violation in neutral kaons [2] to provide a theoretical description, the KM model. In hindsight, this time span seems relatively short, as one needed to wait until the twenty-first century to get confirmation of the model in the \(b\) and \(c\) sectors (Fig. 1). Since then however, \(C\!P\) violation has been observed in interference of mixing and decay (also known as "indirect") in the \(B^{0}\)[4; 5] and \(B^{0}_{s}\)[6] systems, and \(C\!P\) violation in decays (also known as "direct") of \(K^{0}\)[8; 9], \(B^{0}\)[10; 11], \(B^{0}_{s}\)[12], \(B^{+}\)[13] and \(D^{0}\)[7] mesons.
The original observations of \(C\!P\) violation in the \(B^{0}\) system that led to the Nobel Prize awarded to Kobayashi and Maskawa in 2008 were the work of the BaBar and Belle experiments. Since then all of the following "firsts" were performed by LHCb. Barring the still missing observation of \(C\!P\) violation in baryons (and leptons, but this is another story), we are now entering the precision regime in which the KM paradigm no longer needs to be established, but is tested to its ultimate precision.
## 2 LHCb
The LHCb experiment was designed at a time before the \(B\) factories Belle and BaBar came into operation, and when the amount of \(C\!P\) violation in the SM was poorly constrained [15]. The benchmark observables were the \(\sin 2\phi_{1}\)\(C\!P\) asymmetry amplitude1
obtained from the decay \(B^{0}\!\to\!J\!/\!\psi K^{0}_{\rm S}\) -- _because it may be that the \(B\) factories wouldn't be able to measure \(C\!P\) violation_[16] -- and the frequency of \(B^{0}_{s}\) oscillations -- as it was hoped that LHCb would see them before the Tevatron. Incidentally both measurements were performed before LHCb came into operation [4, 5, 17], but these benchmarks set constraints on the design of the experiment from which the physics programme largely profits today. In particular the excellent vertex resolution is a legacy of the requirement that \(B^{0}_{s}\) oscillations should be resolvable for frequencies up to \(60\,{\rm ps}^{-1}\).

Figure 1: The briefest history of \(C\!P\) violation [1; 2; 3; 4; 5; 6; 7]
The LHCb detector [18, 19] depicted in Fig. 2 is a single-arm forward spectrometer covering the pseudorapidity range \(2<\eta<5\), designed for the study of particles containing \(b\) or \(c\) quarks. It consists of a silicon-strip vertex detector surrounding the \(pp\) interaction region that allows \(c\) and \(b\) hadrons to be identified from their characteristically long flight distance; a tracking system that provides a measurement of the momentum of charged particles; and two ring-imaging Cherenkov detectors that are able to discriminate between different species of charged hadrons.
Lessons from past experiments have taught LHCb to have a versatile trigger (Sec. 6), which allowed the collaboration to quickly adapt to varying experimental conditions. These variations are kept minimal by offsetting the LHC beams in order to keep the luminosity
constant throughout a fill, and throughout a data-taking period. The nominal value was around 2 to \(4\times 10^{32}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\).

Figure 2: The LHCb detector. Figure adapted from Ref. [14].
In the startup year of 2010, LHCb was able to identify the first \(B\) meson at the LHC2 with low luminosity and then cope with a rapid increase of the collision rate. Eventually LHCb capped the luminosity and collected data corresponding to 1, 2 and \(6\,\mathrm{fb}^{-1}\) at 7, 8 and \(13\,\mathrm{TeV}\) collision energies, respectively. Together they form the \(9\,\mathrm{fb}^{-1}\) "legacy" sample. The huge cross-section at the LHC provided more than \(10^{12}\)\(b\overline{b}\) pairs in the LHCb acceptance, in rough proportion of 4:4:2:1 \(B^{+}\):\(B^{0}\):\(\Lambda_{b}^{0}\):\(B_{s}^{0}\) hadrons [21]. In total LHCb collected more than \(10^{11}\) events between 2010 and 2018, a number that can be compared to the 770 million \(B\overline{B}\) pairs collected by Belle. More numbers can be found in Table 1.
\begin{table}
\begin{tabular}{l l l||c|c c c|c c c c c|c}
Quantity & & unit & TDR & 2011 & 2012 & Run 1 & 2015 & 2016 & 2017 & 2018 & Run 2 & Tot/Avg \\ \hline \hline
Peak Luminosity & \(\mathcal{L}_{\mathrm{peak}}\) & \(\upmu\mathrm{b}^{-1}\)/s & 280 & 461 & 492 & & 302 & 422 & 453 & 493 & & \\
Average Luminosity & \(\mathcal{L}_{\mathrm{avg}}\) & \(\upmu\mathrm{b}^{-1}\)/s & 200 & 250 & 330 & 298 & 140 & 240 & 280 & 310 & 268 & 278 \\
Seconds of running & \(t\) & \(10^{6}\) s & 10.0 & 4.3 & 6.2 & 10.5 & 1.6 & 6.9 & 4.6 & 6.9 & 20.0 & 30.5 \\
Integrated luminosity & \(\int\!\mathcal{L}\,\mathrm{d}t\) & \(\mathrm{fb}^{-1}\) & 2.0 & 1.1 & 2.1 & 3.2 & 0.5 & 1.9 & 1.5 & 2.5 & 6.4 & 9.6 \\
Bunches & \(N_{\mathrm{b}}\) & & 2600 & 1320 & 1320 & & 1710 & 2036 & 2332 & 2332 & & \\
Energy & \(E\) & TeV & 14 & 7 & 8 & & 13 & 13 & 13 & 13 & & \\
Inelastic cross-section & \(\sigma_{\mathrm{inel}}\) & mb & 80 & 64 & 67 & & 77 & 77 & 77 & 77 & & \\
Charged multiplicity & \(\frac{\mathrm{d}N_{\mathrm{ch}}}{\mathrm{d}\eta}\) & & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 & 6 \\
\(b\overline{b}\) cross-section (acc.) & \(\sigma_{b\overline{b}}\) & \(\upmu\)b & 150 & 72 & 83 & & 144 & 144 & 144 & 144 & & \\ \hline
\(pp\) interactions/BX & \(\mu=\frac{\mathcal{L}_{\mathrm{avg}}\sigma_{\mathrm{inel}}}{f_{\mathrm{LHC}}N_{\mathrm{b}}}\) & & 0.55 & 1.08 & 1.49 & 1.32 & 0.56 & 0.81 & 0.82 & 0.91 & 0.83 & 0.99 \\
Non-empty rate & \(f_{\mathrm{LHC}}N_{\mathrm{b}}(1-e^{-\mu})\) & MHz & 12.3 & 9.8 & 11.5 & 10.8 & 8.3 & 12.7 & 14.7 & 15.7 & 14.0 & 12.9 \\
Avg. MB rate & \(\sigma_{\mathrm{inel}}\mathcal{L}_{\mathrm{avg}}\) & MHz & 16.0 & 16.0 & 22.1 & 19.7 & 10.8 & 18.5 & 21.6 & 23.9 & 20.7 & 20.3 \\
MB events & \(\sigma_{\mathrm{inel}}\int\mathcal{L}_{\mathrm{avg}}\,\mathrm{d}t\) & \(10^{12}\) & 160 & 70 & 141 & 211 & 38 & 146 & 116 & 192 & 493 & 704 \\
Peak particle flow & \(\frac{\mathrm{d}N_{\mathrm{ch}}}{\mathrm{d}\eta}\sigma_{\mathrm{inel}}\mathcal{L}_{\mathrm{peak}}\) & \(10^{6}\)/s & 134 & 177 & 198 & & 140 & 195 & 209 & 228 & & \\
Irradiation & \(\frac{\mathrm{d}N_{\mathrm{ch}}}{\mathrm{d}\eta}\sigma_{\mathrm{inel}}\int\mathcal{L}\,\mathrm{d}t\) & \(10^{15}\) & 1.0 & 0.4 & 0.8 & 1.3 & 0.2 & 0.9 & 0.7 & 1.2 & 3.0 & 4.2 \\
\(b\overline{b}\) rate & \(\sigma_{b\overline{b}}\mathcal{L}_{\mathrm{avg}}\) & kHz & 30 & 18 & 27 & 24 & 20 & 35 & 40 & 45 & 39 & 34 \\
\(b\overline{b}\) yield & \(\sigma_{b\overline{b}}\int\mathcal{L}\,\mathrm{d}t\) & \(10^{9}\) & 300 & 79 & 174 & 254 & 72 & 274 & 216 & 360 & 922 & 1175 \\ \hline
Output rate & \(\lambda_{\mathrm{HLT}}\) & kHz & 2.0 & 2.6 & 4.5 & 3.7 & 10.4 & 6.1 & 7.5 & 5.8 & 6.6 & 5.7 \\
Stored events (bkk) & \(\lambda_{\mathrm{HLT}}t\) & \(10^{9}\) & 20 & 11 & 28 & 39 & 17 & 42 & 35 & 40 & 133 & 172 \\
Event size & \(S_{\mathrm{ev}}\) & kB & 2 & 53 & 59 & 56 & 48 & 55 & 58 & 58 & 56 & 56 \\
HLT B/W & \(S_{\mathrm{ev}}\lambda_{\mathrm{HLT}}\) & MB/s & 5 & 136 & 263 & 212 & 501 & 333 & 438 & 333 & 371 & 319 \\
Total storage & \(S_{\mathrm{ev}}\lambda_{\mathrm{HLT}}t\) & PB & 0.1 & 0.6 & 1.6 & 2.2 & 0.8 & 2.3 & 2.0 & 2.3 & 7.4 & 9.6 \\ \hline
\end{tabular}
Table 1: Operational statistics of LHCb running conditions in Runs 1 and 2. The Run 1, Run 2 and last column contain sums or weighted averages depending on which is more appropriate (or none if neither makes sense). \(f_{\mathrm{LHC}}\simeq c/27\,\mathrm{km}\simeq 11\,\mathrm{kHz}\) is the LHC revolution frequency.
Table 1: Operational statistics of LHCb running conditions in Runs 1 and 2. The Run 1, Run 2 and last column contain sums or weighted averages depending on which is more appropriate (or none if neither makes sense). \(f_{\mathrm{LHC}}\simeq c/27\,\mathrm{km}\simeq 11\,\mathrm{kHz}\) is the LHC revolution frequency.
## 3 Unitarity triangle
Unitarity relations of the KM matrix are exploited to draw triangles in the complex plane, the most renowned being the "\(B^{0}\)" unitarity triangle ("UT", Fig. 3) formed from the first and last columns of the KM matrix. The two non-unit sides intersect at an apex, which defines three angles: \(\phi_{1}\), \(\phi_{2}\), and \(\phi_{3}\). All sides and angles are observables while only two are independent.3 This opens the way to over-constraining the triangle and thus putting the KM paradigm to test.
Footnote 3: There is also a constraint from kaon physics, noted \(\epsilon_{K}\), which is beyond the scope of these proceedings.
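For reference, the standard conventions behind these quantities (assumed here, as the text does not spell them out) are the unitarity relation and angle definitions

\[V_{ud}V_{ub}^{*}+V_{cd}V_{cb}^{*}+V_{td}V_{tb}^{*}=0,\qquad\phi_{1}=\arg\!\left(-\frac{V_{cd}V_{cb}^{*}}{V_{td}V_{tb}^{*}}\right),\quad\phi_{2}=\arg\!\left(-\frac{V_{td}V_{tb}^{*}}{V_{ud}V_{ub}^{*}}\right),\quad\phi_{3}=\arg\!\left(-\frac{V_{ud}V_{ub}^{*}}{V_{cd}V_{cb}^{*}}\right),\]

so that \(\phi_{1}+\phi_{2}+\phi_{3}=180^{\circ}\) by construction.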
LHCb has measured the top-right side -- proportional to \(|V_{td}V_{tb}^{*}|\) -- by precision measurements of the \(B^{0}\) and \(B_{s}^{0}\) mixing frequencies. Such a mixing plot is shown in Fig. 4. LHCb obtained the most precise values of \(\Delta m_{s}=17.766\pm 0.006\,\mathrm{ps}^{-1}\)[23] and \(\Delta m_{d}=505.0\pm 2.3\,\mathrm{ns}^{-1}\)[24]. Their conversion into constraints on the KM UT is however limited by hadronic uncertainties.
The other side is obtained from semileptonic \(b\) decays to \(c\) and \(u\) quarks to determine \(V_{cb}\) and \(V_{ub}\), respectively. Such processes are members of a category known colloquially as "things LHCb cannot do (but still does)". LHCb was never designed for decays with missing neutrinos. However, the precise vertexing allows the determination of the primary production (PV) and secondary decay (SV) vertices with a resolution of a few tens of microns.
Figure 3: Status of the KM \(B^{0}\) unitarity triangle as of March 2021 [22].
The corrected mass \(m_{\rm corr}=\sqrt{m^{2}+p_{\perp}^{2}}+p_{\perp}\) is built from \(p_{\perp}\), the momentum of the reconstructed particles transverse to the direction of flight, and peaks at the \(b\) hadron mass if the lost particle is of small mass, see Fig. 5. LHCb measured the ratio \(|V_{ub}|/|V_{cb}|\) in decays of \(\Lambda_{b}^{0}\) baryons [24] and \(B_{s}^{0}\) mesons [25], which are inaccessible to \(B\) factories running at the \(\mathchar 28935\relax(4S)\) resonance. The resulting constraints are shown as diagonal bands in Fig. 4. It is particularly striking that the constraints from \(B_{s}^{0}\!\to\!K^{-}\mu^{+}\nu\) in the low and high ranges of the four-momentum transfer squared, \(q^{2}\), are inconsistent, owing to inconsistent form factors from light-cone sum rules [26] and Lattice QCD [27], respectively. The figure also shows the discrepancy between the inclusive (\(b\!\to\!q\ell\nu\), i.e. \(B\!\to\!X\ell\nu\)) and exclusive determinations of \(|V_{ub}|\) and \(|V_{cb}|\). This puzzle is still unresolved in spite of multiple measurements and computations spanning several decades. Let us hope we won't have to wait for an \(e^{+}e^{-}\) collider running at the \(WW\) threshold to resolve the issue.
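A minimal sketch of this estimator (not LHCb code; the vector names are illustrative):

```python
import numpy as np

def corrected_mass(m_vis, p_vis, pv, sv):
    """m_corr = sqrt(m_vis^2 + p_perp^2) + p_perp, in natural units,
    with p_perp the visible momentum transverse to the PV->SV flight direction."""
    u = np.asarray(sv, float) - np.asarray(pv, float)
    u /= np.linalg.norm(u)                    # unit flight direction
    p = np.asarray(p_vis, float)
    p_perp = np.linalg.norm(p - np.dot(p, u) * u)
    return np.sqrt(m_vis**2 + p_perp**2) + p_perp
```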
The situation in the measurements of the angles is somewhat clearer. The angle \(\phi_{1}\) is obtained from final states that are \(C\!P\) eigenstates, reached via the interference of \(b\!\to\!c\bar{c}s\) amplitudes and their charge-conjugated processes after \(B^{0}\) to \(\kern 1.8pt\overline{\kern-1.8ptB}{}^{0}\) mixing [28]. The angle is known with sub-degree precision: LHCb now holds the most precise value of \(\sin 2\phi_{1}=0.717\pm 0.013\pm 0.008\), from a combination of the decay modes \(B^{0}\!\to\!J\!/\!\psi K_{\rm S}^{0}\) and \(\psi(2S)K_{\rm S}^{0}\)[29].
The angle \(\phi_{3}\) is often referred to as a standard candle of the Standard Model -- to the extent that one is certain that no new physics affects tree-dominated decays, which is a bold
assumption [30]. The \(C\!P\) asymmetry is generated from interference of \(b\!\to c\overline{u}q\) and \(b\!\to u\overline{c}q\) where the \(D\) mesons formed by \(c\overline{u}\) (\(\overline{c}u\)) decay to a common final state. If it is assumed that such tree-level processes are free from new physics contributions, then the determination of the angle \(\phi_{3}\) will be dominated by the experimental resolution in the foreseeable future and beyond [31].
However, unlike with \(B^{0}\!\to J\!/\!\psi K^{0}_{\rm S}\) for \(\sin 2\phi_{1}\), there is no single process that yields a good resolution on its own. Multiple \(B\) meson (and potentially \(b\)-baryon) decays can be used, as well as many \(D\) decays. The ultimate precision will be reached once all combinations are analysed with the full data set. The present status is shown in Table 2, where multiple possible analyses are still missing. It is however unlikely that this table will ever be fully filled, as some combinations of \(B\) and \(D\) decay modes have marginal sensitivities. In addition to the interference pattern above, \(\phi_{3}\) can also be measured from the interference of mixing (of both \(B^{0}\) and \(B^{0}_{s}\)) and decay. These processes are however potentially affected by new physics entering the \(B\) mixing loop. A comparison of the values of \(\phi_{3}\) obtained from the two parts of Table 2 thus constitutes an additional test of the Standard Model.
A precise value of \(\phi_{3}=(63.8\begin{smallmatrix}+3.5\\ -3.7\end{smallmatrix})^{\circ}\) is obtained from a combination of the analyses reported in Ref. [32], which does not yet include some of the latest LHCb results [39, 42, 43]. Presently, the time-dependent measurements yield \(\phi_{3}\!=\!(79\begin{smallmatrix}+21\\ -23\end{smallmatrix})^{\circ}\), compatible with the value from time-integrated analyses.
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
 & \multicolumn{5}{c|}{\bf Time-integrated measurements} \\ \hline
Decays & \(B^{+}\!\to Dh^{+}\) & \(B^{+}\!\to D^{*0}h^{+}\) & \(B^{+}\!\to DK^{0}_{\rm S}\pi^{+}\) & \(B^{0}\!\to DK^{+}\pi^{-}\) & \(B^{+}\!\to DK^{+}\pi^{+}\pi^{-}\) \\ \hline
\(D\!\to h^{+}h^{-}\) & \({\bf 9\,fb^{-1}}\)[33] & \({\bf 9\,fb^{-1}}\)[33] & \({\rm 5\,fb^{-1}}\)[34] & \({\rm 5\,fb^{-1}}\)[35] & \({\rm 3\,fb^{-1}}\)[36] \\
\(D\!\to h^{+}\pi^{-}\pi^{+}\pi^{-}\) & \({\bf 3\,fb^{-1}}\)[37] & \({\bf 9\,fb^{-1}}\)[35] & \({\rm 5\,fb^{-1}}\)[34] & \({\rm 5\,fb^{-1}}\)[35] & \\
\(D\!\to K^{\pm}\pi^{\mp}\pi^{+}\pi^{-}\) & \({\bf 9\,fb^{-1}}\)[38] & & & & \\
\(D\!\to h^{+}h^{-}\pi^{+}\pi^{-}\) & \({\bf 9\,fb^{-1}}\)[39] & & & & \\
\(D\!\to h^{+}h^{-}\pi^{0}\) & \({\bf 9\,fb^{-1}}\)[40] & & & & \\
\(D\!\to K^{0}_{\rm S}h^{+}h^{-}\) & \({\bf 9\,fb^{-1}}\)[41] & \({\bf 9\,fb^{-1}}\)[42] & & \({\bf 9\,fb^{-1}}\)[43] & \\
\(D\!\to K^{0}_{\rm S}K^{+}\pi^{-}\) & \({\bf 9\,fb^{-1}}\)[44] & & & & \\ \hline
\end{tabular}
\begin{tabular}{|l|l|l|l|}
\hline
 & \multicolumn{3}{c|}{\bf Time-dependent measurements} \\ \hline
Decays & \(B^{0}\!\to D^{\mp}\pi^{\pm}\) & \(B^{0}_{s}\!\to D^{\mp}_{s}K^{\pm}\) & \(B^{0}_{s}\!\to D^{\mp}_{s}K^{\pm}\pi^{+}\pi^{-}\) \\ \hline
\(D^{\pm}\!\to K^{\pm}h^{+}h^{-}\) & \({\rm 3\,fb^{-1}}\)[45] & N/A & N/A \\
\(D^{\pm}_{s}\!\to h^{\pm}h^{\mp}\pi^{\pm}\) & N/A & \({\bf 9\,fb^{-1}}\)[46, 47] & \({\bf 9\,fb^{-1}}\)[48] \\ \hline
\end{tabular}
\end{table}
Table 2: Integrated luminosities used and references of measurements with sensitivity to \(\phi_{3}\), split by final state. Entries in boldface are not yet included in the combination [32].
The values reported in this combination also make use of the charm mixing [49] and \(C\!P\) violation [7; 50] measurements (which slightly affect the \(\phi_{3}\) determination for \(B\!\to D\) modes). The first observation of \(C\!P\) violation in charm [7] -- with a precision at the level of \(10^{-4}\) -- is a major achievement, but its direct understanding in terms of KM matrix elements is not yet within theoretical reach.
The third angle, \(\phi_{2}\), is obtained from interference in \(b\!\to\!u\) transitions. LHCb's main contribution is the most precise time-dependent measurement of \(C\!P\) asymmetries in \(B^{0}\!\to\pi^{+}\pi^{-}\)[12]. A full determination of \(\phi_{2}\) needs an isospin analysis of several decay modes involving \(\pi^{0}\) mesons [51], which are hard to reconstruct at LHCb.4
Footnote 4: See however Ref. [52].
The combination of these two sides and three angles yields the over-constrained triangle depicted in Fig. 3[22]. This is however only one of several possible triangles, dominated by \(B\) decay modes.
Another triangle, dubbed the \(B_{s}^{0}\) triangle, features the angle \(\phi_{s}\). In analogy to \(\phi_{1}\), it is obtained from \(B_{s}^{0}\) oscillation followed by a \(B_{s}^{0}\) decay to a \(c\overline{c}s\overline{s}\) \(C\!P\) eigenstate, _e.g._\(J\!/\!\psi\phi\). It used to be said that the SM expectation is close to vanishing, making any measurement of a non-zero value a sign of new physics. We are no longer in this regime: LHCb recently released their legacy measurement with the full \(9\,\mbox{fb}^{-1}\) dataset and obtain \(\phi_{s}=-0.039\pm 0.022\pm 0.006\,\mbox{rad}\)[54] from \(B_{s}^{0}\!\to\!J\!/\!\psi K^{+}K^{-}\) with the \(K^{+}K^{-}\) mass in the vicinity of the \(\phi\) meson. This value is combined with LHCb measurements using previous datasets or other decays [55; 56; 57; 58] as well as results from the Tevatron [59; 60] and other LHC experiments [61; 62]. The result is \(\phi_{s}=-0.050\pm 0.016\,\mbox{rad}\), which is now significantly deviating from zero. The expectation from KM fits is \(\phi_{s}=-0.037\pm 0.001\,\mbox{rad}\)[22; 63], with which the world average is well compatible, as shown in Fig. 6.
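For completeness, the standard definition assumed throughout is

\[\phi_{s}\simeq-2\beta_{s},\qquad\beta_{s}=\arg\!\left(-\frac{V_{ts}V_{tb}^{*}}{V_{cs}V_{cb}^{*}}\right),\]

up to subleading penguin contributions.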
There is some disagreement in the determination of the total decay width of the \(B^{0}_{s}\) meson, \(\Gamma_{s}\), and of the difference of the decay widths of the heavy and light \(B^{0}_{s}\) mass eigenstates, \(\Delta\Gamma_{s}\), which are by-products of these analyses. The update of ATLAS with the full Run 2 sample is eagerly awaited, hopefully resolving the issue.
In 2003 there was some excitement due to inconsistent values of \(\sin 2\phi_{1}\) obtained from \(b\!\rightarrow\!c\overline{c}s\) processes and from \(b\!\rightarrow\!s\overline{s}s\) processes, such as \(B^{0}\!\rightarrow\!\phi K^{0}_{\rm S}\)[64]. The latter are also sensitive to the same KM phases, but proceed via penguin diagrams and are thus more likely to be affected by heavy degrees of freedom in the loop. The issue turned out to be a fluke, but the penguin decay modes remained on the radar.
A similar test can be done by measuring the time-dependent \(C\!P\)-violating phase in \(B^{0}_{s}\!\rightarrow\!\phi\phi\) decays, which is strictly zero in the SM. LHCb obtain \(\phi_{s}^{s\overline{s}s}=-0.18\pm 0.09\,\mbox{rad}\) in a combination of all data samples [65], which is compatible with the SM expectation.
Resolution of the above-mentioned interference patterns requires flavour tagging, i.e. the identification of the original flavour of the \(B^{0}\) or \(B^{0}_{s}\) meson. LHCb uses the flavour of the accompanying \(b\) hadron from the \(b\overline{b}\) pair, through its muons, electrons and kaons, as well as the vertex charge (Fig. 7). In addition, accompanying pions, kaons and protons from fragmentation are used to determine the \(B\) meson flavour directly [67].
Improved understanding of proton-proton collisions at LHC energies has allowed a continuous increase of the effective tagging power from less than 2% in 2011 [68] to more than 6% for selected modes nowadays [29, 65].
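The effective tagging power quoted here follows the standard definition (assumed), combining the tagging efficiency \(\varepsilon_{\rm tag}\) and the mistag fraction \(\omega\):

\[\varepsilon_{\rm eff}=\varepsilon_{\rm tag}\,(1-2\omega)^{2},\]

i.e. a sample of \(N\) events carries the statistical power of \(\varepsilon_{\rm eff}N\) perfectly tagged events.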
A (somewhat outdated) comparison of several decay modes is shown in Fig. 8. Note however that the tagging performance anti-correlates with the hardware trigger efficiency: modes with muons profit from low trigger thresholds, while those with hadrons or electrons have lower efficiencies.
**Fig. 7** Schematics of information used in flavour tagging. Figure from Ref. [66].
These thresholds however bias the transverse momentum of the accompanying \(B\) and of the tagging particles from fragmentation, which improves the tagging performance. As an example, the best tagging performance at the LHC is obtained by CMS [62] thanks to their triggering on the tagging muon: trigger efficiency is traded off for better tagging.
## 4 Anomalies coming and going
After 30 years of unsuccessful searches at multiple experiments, the observation of \(B_{s}^{0}\!\rightarrow\!\mu^{+}\mu^{-}\) was one of the benchmark goals at the LHC. This loop-induced, GIM- and helicity-suppressed process is very rare in the SM -- its branching fraction is 3 in a billion [69; 70] -- which makes it sensitive to any new physics contribution at this or higher level. In supersymmetric extensions its rate is enhanced proportionally to the sixth power of \(\tan\beta\) -- the ratio of the vacuum expectation values of the two neutral Higgs bosons. Its non-observation increasingly shattered hopes for large-\(\tan\beta\) supersymmetry as the limit on the branching fraction decreased (Fig. 9). The first evidence was then finally reported by LHCb in 2013 [71], the first observation was achieved via a joint fit to LHCb and CMS data in 2014 [72]5 and LHCb reported the first single-experiment observation in 2017 [73], soon followed by CMS [74]. ATLAS are just a bit short of an observation [75]. The LHCb and CMS results were in the meantime updated using the full Run 1-2 data sets [76; 77; 78]. The average of the \(B_{s}^{0}\!\rightarrow\!\mu^{+}\mu^{-}\) measurements [79] is consistent with the Standard Model expectation, which sets strong constraints on new physics affecting \(b\overline{s}\ell\overline{\ell}\) operators.
Footnote 5: The fit to a joint dataset of two experiments was a major enterprise. It required first to align all definitions and treatments of backgrounds in the two experiments. Once this was done the complex simultaneous fit to multiple datasets returned the same result as one would have obtained from combining the LHCb and CMS likelihoods [72]. This teaches a lesson on the relative importance of agreeing between experiments versus developing complex fitting frameworks.
The further suppressed \(B^{0}\!\rightarrow\!\mu^{+}\mu^{-}\) mode still escapes clear detection, but one can nevertheless determine a ratio of branching fractions which is precisely predicted in the SM as
it is essentially a measurement of KM matrix elements:
\[{\cal R}_{\mu^{+}\mu^{-}}\equiv\frac{{\cal B}(B^{0}\!\rightarrow\mu^{+}\mu^{-})}{{ \cal B}(B^{0}_{s}\!\rightarrow\mu^{+}\mu^{-})}\stackrel{{\rm SM}}{{= }}\frac{\Gamma_{s}^{\rm H}}{\Gamma_{d}}\left(\frac{f_{B^{0}}}{f_{B^{0}_{s}}} \right)^{2}\frac{|V_{td}|^{2}}{|V_{ts}|^{2}}\frac{\sqrt{m_{B^{0}}^{2}-4m_{\mu }^{2}}}{\sqrt{m_{B^{0}_{s}}^{2}-4m_{\mu}^{2}}}, \tag{1}\]
and is measured as [76, 77]
\[{\cal R}_{\mu^{+}\mu^{-}}^{\rm exp}=\left(3.9\,^{+3.0}_{-2.4}\,{}^{+0.6}_{-0.4}\right)\times 10^{-2}.\]
Another decay mode sensitive to new physics in these operators is \(B\!\rightarrow{K^{*}}\ell^{+}\ell^{-}\). While \(B^{0}_{s}\!\rightarrow\mu^{+}\mu^{-}\) is essentially probing axial currents, \(B\!\rightarrow{K^{*}}\mu^{+}\mu^{-}\) also probes vector currents, and their interference, as shown in Fig. 10.
The exclusive decay \(B^{0}\!\rightarrow K^{*0}\ell^{+}\ell^{-}\), with \(K^{*0}\!\rightarrow K^{+}\pi^{-}\), provides a rich set of observables with different sensitivities to new physics, and for which theoretical predictions are available. This process is complicated by a dependence on \(q^{2}\), the dilepton mass squared. At low \(q^{2}\), \(B^{0}\!\rightarrow K^{*0}\ell^{+}\ell^{-}\) behaves like \(B^{0}\!\rightarrow K^{*0}\gamma\), with a slightly off-shell photon decaying to two leptons. At higher \(q^{2}\) values, there is an interference of the amplitudes controlled by the \({\cal O}_{9}\)
and \({\cal O}_{10}\) operators [80], related to the \(Z\) loop and \(W\) box diagrams, respectively. This "low-\(q^{2}\)" region between 1 and 6 \({\rm Ge\kern-1.0ptV}^{2}\!\!/c^{4}\) is the most interesting and theoretically cleanest. The observation of high mass resonances above the \(\psi(2S)\) meson by the LHCb collaboration [81] is an indication that a lot of care is needed when interpreting the high-\(q^{2}\) region.
Branching fraction predictions are affected by hadronic uncertainties (see also below), but selected ratios of observables benefit from cancellations of uncertainties, thus providing a cleaner test of the Standard Model [82, 83, 84, 85, 86, 87, 88]. The observable \(P_{5}^{\prime}\)[89] for instance is in tension with the theoretical prediction, as seen in Fig. 11, but the jury is still out on determining what the cause is. In order to address this question, LHCb recently performed an amplitude analysis in the \(B^{0}\!\to K^{*0}\mu^{+}\mu^{-}\) decay in which the short-distance Wilson coefficients and long-distance nuisance parameters are determined from the data [90]. The overall level of discrepancy with the Standard Model is at the level of \(2\sigma\).
Figure 11: Experimental results on the (left) \(B\!\to K\ell^{+}\ell^{-}\) differential decay rate [91, 92, 93, 94, 95, 96] compared to prediction of Ref. [97] and (right) \(P_{5}^{\prime}\)[89, 98, 99, 100, 101, 102, 103] asymmetry compared to predictions from Refs. [104, 105] (sea green) and Refs. [106, 107] (green).
Figure 10: Feynman diagrams of the dominant Standard Model contributions to \(b\!\to s\ell^{+}\ell^{-}\): (left) electroweak loop, (centre) box, (right) \(c\overline{c}\) loop diagram.
Until recently there was excitement about the evidence of lepton-universality breaking in the decays \(B\!\to Ke^{+}e^{-}\) and \(B\!\to K\mu^{+}\mu^{-}\)[108, 109], and similarly but less significantly in \(B\!\to K^{*}e^{+}e^{-}\) and \(B\!\to K^{*}\mu^{+}\mu^{-}\)[109, 110]. With these decays one defines the ratio [111]
\[R_{X}=\frac{\int\limits_{q_{\rm min}^{2}}^{q_{\rm max}^{2}}{\rm d }q^{2}\frac{d\Gamma\left(B\!\to X\mu^{+}\mu^{-}\right)}{{\rm d}q^{2}}}{\int \limits_{q_{\rm min}^{2}}^{q_{\rm max}^{2}}{\rm d}q^{2}\frac{d\Gamma\left(B\! \to Xe^{+}e^{-}\right)}{{\rm d}q^{2}}} \tag{2}\]
in a well-chosen range of \(q^{2}\) (usually 1 to \(6\,{\rm GeV}\!^{2}\!/c^{4}\)) in order to avoid charmonium resonances and photon poles. These ratios should be equal to unity at a level of precision well below the experimental resolution [112].
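A minimal numerical sketch of Eq. (2), assuming binned values of \({\rm d}\Gamma/{\rm d}q^{2}\) for both lepton modes on a common grid (all names illustrative):

```python
import numpy as np

def r_x(q2, dgamma_mu, dgamma_el, q2_min=1.0, q2_max=6.0):
    """Ratio of dGamma/dq2 integrals over [q2_min, q2_max] GeV^2/c^4 (Eq. 2)."""
    sel = (q2 >= q2_min) & (q2 <= q2_max)
    num = np.trapz(dgamma_mu[sel], q2[sel])  # muon-mode rate
    den = np.trapz(dgamma_el[sel], q2[sel])  # electron-mode rate
    return num / den
```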
Initially Hiller and Krüger [111] had introduced these ratios as they are linearly correlated with the branching fraction of \(B_{s}^{0}\!\to\!\mu^{+}\mu^{-}\) (under some reasonable assumptions, notably minimal flavour violation). In 2003 the hope was thus that Belle and BaBar could see signs of new physics in these ratios before the LHC would observe \(B_{s}^{0}\!\to\mu^{+}\mu^{-}\). At a high-luminosity workshop at SLAC in 2003, Hiller convinced me to check whether LHCb could measure this ratio. I showed that LHCb could do so [113], thereby demonstrating that LHCb had physics potential with electrons.6 A first proof-of-concept analysis was the topic of a master thesis [118] that exceeded my expectations and led to an LHCb measurement that was "compatible with the SM prediction within 2.6 standard deviations" [119].7 The result was _below_ unity,8 meaning we saw a deficit of muons; a case that had not been foreseen by Hiller in the original publication.
Footnote 6: Inspection of old reports to the LHCC [114, 115, 116, 117] shows that the use of electrons was essentially for flavour tagging, and adding some data to \(\phi_{s}\) and \(\sin 2\phi_{1}\) measurements.
Footnote 7: The result was submitted to PRL after the editor had expressed their interest in this result following a CERN news update.
Footnote 8: Since Hiller and Krüger expected \(R_{X}\geq 1\) they defined it as muon/electrons. This is an annoyance for experiments as the uncertainty on \(R_{X}\) is dominated by the electron modes, which leads to asymmetric uncertainties when expressed in the ratio. In hindsight one should have defined \(R_{X}^{-1}\), which is what is reported in Ref. [120].
Three years later the analysis of \(B^{0}\!\to K^{*0}\ell^{+}\ell^{-}\) yielded a similar deviation from unity. However this result already contained a hint that something was not quite right: the ratio \(R_{K^{*}}\) in the low \(q^{2}\) bin should have been unity, as this region is dominated by the photon pole and lepton-universality is known to be respected in electromagnetic decays. In spite of lengthy investigations nothing could be found that explained the discrepancy and the result was published as is [110]. The value of \(R_{K}\) in \(B^{+}\!\to K^{+}\ell^{+}\ell^{-}\) was updated twice with Run 2
data [121, 108], the latter of which exhibited \(3\sigma\) evidence. This was exciting news for model builders. Indeed, no hadronic effects can mimic values of \(R_{K}\) different from unity, unlike other deviations, which can be generated by stretching some not-so-certain QCD predictions. It was known that the discrepancy with the SM was driven by the \(b\!\to s\mu^{+}\mu^{-}\) channels, for which all rates are measured below the SM expectation [94, 122], while the electrons were thought to be SM-like.
The anomaly disappeared after a reappraisal of the LHCb result [95, 96]. More stringent particle identification requirements reduced the amount of backgrounds with two hadrons mis-identified as electrons, and these and similar backgrounds were determined from data and incorporated in the mass fit. The measurements of two bins in \(q^{2}\) (below and above \(1\,\mbox{GeV}^{2}\!/c^{4}\)) in \(B^{+}\!\to K^{+}\ell^{+}\ell^{-}\) and \(B^{0}\!\to K^{*0}\ell^{+}\ell^{-}\) are now all compatible with unity.
However, the demise of \(R_{K}\) replaced one anomaly with another. After the correction, the electron modes also have measured decay rates below the SM expectation. Some lattice QCD groups report discrepancies in excess of \(4\sigma\)[97] for the muon modes, while the electron and muon modes are experimentally compatible. Other groups however determine form factors from the data and get much lower tensions [124].
Input from all \(b\!\to s\ell^{+}\ell^{-}\), \(B\!\to\mu^{+}\mu^{-}\) and \(b\!\to s\gamma\) modes is used to perform fits to the Wilson coefficients \(C_{9}\) and \(C_{10}\), corresponding to the vector and axial operators, respectively. Multiple groups [123, 124, 125, 126, 127, 128, 129, 130, 131, 132] consistently hint at a modified vector \(b\overline{s}\ell\bar{\ell}\) operator -- with varying significance -- as shown in Fig. 12 [123].
The elephant in the room is the non-local \(c\overline{c}\) contribution (Fig. 10, right). Unlike the local form factors that are in principle calculable, the charm loops are nonlocal -- i.e. the \(b\overline{s}c\overline{c}\) and \(c\overline{c}\ell\bar{\ell}\) operators appear at different points in spacetime -- which requires involved integrals. In the \(B\!\to K\mu^{+}\mu^{-}\) channel, light-cone-sum-rule (LCSR) methods predict small nonlocal effects [133], and attempts to constrain the charm loops from data [124] reach the conclusion that they are too small to explain the observed discrepancies. Are the discrepancies due to form factors? The latest evidence by Belle II for the \(B^{+}\!\to K^{+}\nu\overline{\nu}\) decay [134], at a rate above
the SM predicted branching fraction, tends to indicate that the \(B\!\rightarrow\!K\) form factor is not smaller than the SM expectation. In \(B\!\rightarrow\!K^{*}\mu^{+}\mu^{-}\) decays the form factors and nonlocal effects are less well controlled [107].
Another test of lepton universality in tree decays is done with semileptonic decays to \(\tau\) leptons. The ratio \(R(D)\) of the \(B\!\rightarrow\!D\tau\nu\) to \(B\!\rightarrow\!D\mu\nu\) decay rates -- and mutatis mutandis for \(R(D^{*})\) -- is not unity because of phase-space factors, but is well predicted in the SM. There has been a long-standing discrepancy at the level of \(3\sigma\), mostly driven by a measurement from BaBar [135, 136], which pulls the experimental average away from the SM, see Fig. 13. In the spirit of "things LHCb cannot do (but still does)", LHCb is contributing to this programme: in spite of the missing neutrino and the multiplicity of backgrounds affecting these decays, LHCb is able to reconstruct the missing mass, \(q^{2}\) and the muon energy in the \(B\) frame, owing to the missing-momentum correction coming from the \(B\) pointing requirement. The LHCb results are compatible with the SM in \(R(D^{*})\), but show a slight tension in \(R(D)\)[137, 138]. The full legacy dataset is not yet exploited, so more updates will be coming, while waiting for the first results from Belle II.
Figure 14 shows a cherry-picked selection of measurements which are of particular interest. For each measurement, the SM prediction is placed at zero. The experimental value is then offset by its deviation from the SM in units of standard deviations. The quadratic sum of the two uncertainties is therefore unity by construction. This presentation shows which observables have an uncertainty dominated by experiment (\(R(D)\) and \({\cal B}(B^{+}\!\rightarrow\!\tau^{+}\nu)\) stick out) or by theory (such as \({\cal B}(B^{+}\!\rightarrow\!K^{+}\mu^{+}\mu^{-})\)), which tells where efforts are needed to reduce the uncertainties.9
Footnote 9: Additionally there are anomalies like the muon \(g-2\) value, where multiple theory determinations do not agree (we report Ref. [143]), or where there is disagreement between experiments (such as the \(W\) mass, which is not reported here).
(Fig. 14 lists, from top to bottom: \({\cal B}(B^{+}\!\to K^{+}\mu^{+}\mu^{-})\), \({\cal B}(B^{+}\!\to K^{+}e^{+}e^{-})\), \({\cal B}(B^{+}\!\to K^{+}\nu\bar{\nu})\), \({\cal B}(B^{0}_{s}\!\to\phi\mu^{+}\mu^{-})\), \({\cal B}(B^{0}_{s}\!\to\mu^{+}\mu^{-})\), \({\cal B}(B^{0}\!\to\mu^{+}\mu^{-})\), \(P^{\prime}_{5}(B^{0}\!\to K^{*0}\mu^{+}\mu^{-})\) in two \(q^{2}\) bins, \(R_{K}\), \(R_{K^{0}_{S}}\), \(R_{K^{*0}}\), \(R_{K^{*+}}\) and \(R_{pK}\) in several \(q^{2}\) ranges, the muon \(g-2\) (WP and BMW), \(R(D)\), \(R(D^{*})\), \(R(J/\psi)\), \(R(\Lambda_{c}^{+})\), and \({\cal B}(B^{+}\!\to\tau^{+}\nu)\).)
**Fig. 14**: Cherry-picked selection of measurements in flavour physics. The SM prediction is placed at zero. The experimental value is then offset by its deviation from the SM in units of standard deviations. The quadratic sum of the uncertainties is unity by construction. Values from Refs. [23, 53, 75, 76, 77, 78, 79, 95, 96, 97, 122, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151]. Figure from Ref. [123].
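A sketch of the pull construction behind Fig. 14 (illustrative, not the code used for the figure):

```python
import math

def pull(x_exp, sigma_exp, x_sm, sigma_th):
    """Offset of the measurement from the SM in units of the combined uncertainty;
    the returned error components add in quadrature to one by construction."""
    total = math.hypot(sigma_exp, sigma_th)
    return (x_exp - x_sm) / total, sigma_exp / total, sigma_th / total
```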
## 5 Spectroscopy
The LHC has become a hadron discovery machine [152] with 72 states observed so far, 64 of which were discovered by LHCb. A timeline is shown in Figure 15[153].
Many of these new states are exotic; but let us rewind history to 2003. I remember well the internal discussions about a bump in the \(J\!/\!\psi\pi^{+}\pi^{-}\) mass that was seen in Belle data of the \(B^{+}\!\to J\!/\!\psi K^{+}\pi^{+}\pi^{-}\) decay, and considered likely to be yet another charmonium state. Instead, this accidental observation [154] of what turned out to be the first tetraquark, called the \(X(3872)\) and later the \(\chi_{c1}(3872)\) meson, started the whole new field of exotic spectroscopy.
Multiple similar states -- some of them charged -- were then discovered by the \(B\) factories; see Ref. [155] for a history. We now know that the \(\chi_{c1}(3872)\) is an isospin singlet that decays mostly to \(D^{0}\kern 1.8pt\overline{\kern-1.8ptD}{}^{0}\pi^{0}\) or \(D^{*0}\kern 1.8pt\overline{\kern-1.8ptD}{}^{0}\)[156], or -- in an isospin-violating manner -- to \(J\!/\!\psi\pi^{+}\pi^{-}\) via a \(\rho^{0}\)[157], but with a sizeable \(\omega\) component [158].10 LHCb showed that the \(\chi_{c1}(3872)\) meson has quantum numbers \(J^{PC}\!=\!1^{++}\)[161, 162] and is consistent with a \(D^{*0}\kern 1.8pt\overline{\kern-1.8ptD}{}^{0}\) bound state with a \(24\,\)keV binding energy and a width of \(1.39\pm 0.26\,\)MeV [163] (see also Ref. [164]). This makes it look like a molecule, while its production mode is that of charmonium [165]. If both possibilities exist, reality must be a superposition of the two [163, 166].
Footnote 10: This \(\omega\) component has been underestimated for a long time because of a bug in the EvtGen model which made the generated \(\pi^{+}\pi^{-}\) mass from pure \(\chi_{c1}(3872)\!\to\!J\!/\!\psi\rho^{0}\) decays look like what was seen in data; see _e.g._ mass plots in Refs. [159, 160, 161].
In spite of some early papers (notably Ref. [167]), most members of the LHCb experiment only realised the potential of exotic spectroscopy when we stumbled upon the \(J\!/\!\psi p\) pentaquarks with minimal quark content \(c\overline{c}uud\)[168]. These states were first seen in \(\Lambda_{b}^{0}\!\to\!J\!/\!\psi pK^{-}\) decays,11 and established in an amplitude analysis. The \(3\,{\rm fb}^{-1}\) data set was best fit with a wide \(P_{\psi}^{N}(4380)^{+}\) and a narrow \(P_{\psi}^{N}(4450)^{+}\) state (using the naming scheme of
Figure 15: Masses and date of arxiv submission for all states observed at the LHC. Figure from Ref. [153].
Ref. [169]). A subsequent simplified analysis [170] of the \(9\,{\rm fb}^{-1}\) legacy data showed that the latter state is split into two states, \(P_{\psi}^{N}(4440)^{+}\) and \(P_{\psi}^{N}(4457)^{+}\), and that another one is needed at \(4312\,\)MeV. The full amplitude analysis is underway.
With the full legacy data sample, LHCb also discovered the doubly charmed state \(T_{cc}^{+}\)[181, 182], which differs from previously discovered tetraquarks in that it contains two charm quarks and two light anti-quarks, the \(T_{cs0,1}^{a}(2900)\) states with a single charm and a strange quark [178, 179], and the heavy \(T_{\psi\psi}\) states with two charm and two anti-charm quarks [171, 174, 175]. Table 3 attempts to classify these states by quark content.
Mapping out existing (and eventually non-existing) quark contents helps in understanding the internal structure of exotic hadrons. The dispute between proponents of the molecular picture and those of the compact tetra- and pentaquarks has calmed down lately. Most likely there are representatives of both kinds. Let us look at states containing two heavy quarks, \(QQ\) or \(Q\overline{Q}\). If the \(Q\overline{Q}\) pair is in a colour-singlet configuration, it will immediately hadronise into quarkonium. If the quarks are in different colourless hadrons, they may form a molecule. On the other hand, \(QQ\) can never be in a singlet configuration. A \(QQ\overline{q}\overline{q}\) state may thus be compact, while a \(Q\overline{Q}q\overline{q}\) state may not [183]. It is thus likely that all kinds of structures exist in nature: hadronic molecules, compact multi-quark objects, superpositions of those, as well as rescattering effects. Sorting them out will be an enterprise for the next decades [184].
## 6 LHCb status and prospects
The LHCb experiment has just undergone a major upgrade [14]. The detector layout is hardly changed -- actually the image in Fig. 2 is that of the new detector -- but many components have been changed. The goal of the upgrade was to allow for an increased instantaneous luminosity, in the \(10^{33}\,{\rm cm}^{-2}{\rm s}^{-1}\) range. In order to achieve that, the hardware trigger needed to be removed: Meeting a \(1\,\)MHz bottleneck would require \(p_{\rm T}\) thresholds of
several \(\,\mathrm{GeV}\), which starts to remove as much \(B\) signal as background. This in turn required changing the readout electronics of all the detectors not included in the hardware trigger, namely the vertex detector, the tracking system, and the RICH. The silicon strip vertex detector was replaced by a pixel detector, and the trackers -- consisting of silicon strips near the beam and straw tubes elsewhere -- were replaced by scintillating fibres.
The whole detector is read out at \(40\,\mathrm{MHz}\), of which \(30\,\mathrm{MHz}\) correspond to non-empty events. These data are sent to a first trigger farm of graphics processing units. The selected events are then buffered on disk while a calibration and alignment procedure is run. Only when all calibration constants are available are the data processed by the second trigger level. The final selected data thus have full offline-level quality and no further reconstruction is required. Most of the events are saved partially, keeping only the objects of interest for the analysis. The evolution of the trigger scheme is shown in Fig. 16.
The data stored by the trigger are massaged by a "sprucing" process, which may for instance add neighbouring tracks to a selected \(B\) candidate to form a potential excited state to be used in spectroscopy studies. These neighbouring tracks have to be duly requested by the relevant trigger selection as they may otherwise be lost. The spruced candidates are stored in data streams that are analysed by centrally managed user analysis productions, as shown in Fig. 17.
The detector is now fully installed and commissioning is ongoing. There was a recent setback, as the pressure inside the vertex detector vacuum exceeded the specifications due to a faulty sensor. The pressure difference with the vacuum of the LHC deformed the thin RF foil that
Figure 16: Evolution of LHCb trigger schemes in Run 1, 2 and 3. Figures from Ref. [185].
separates the two volumes. Luckily no sensors were affected, but the vertex detector cannot fully close to its nominal position until the foil is replaced during the 2023-24 winter shutdown. In the meantime the experiment can run with a partially open vertex detector and a consequently degraded resolution. Here degraded means a resolution similar to that of the decommissioned Run 1-2 vertex detector. The main issue is rather the lack of simulation samples perfectly matching this sub-optimal situation, needed to determine the geometrical acceptance of the partially open detector. It should all be recovered in 2024.
## 7 Future prospects
After Runs 3 and 4, LHCb plans to upgrade the detector again in order to keep up with the requirements of the High Luminosity LHC. A luminosity in the vicinity of \(10^{34}\,\mathrm{cm}^{-2}\mathrm{s}^{-1}\) will however generate several tens of \(pp\) collisions per bunch crossing. Fishing out \(b\) and \(c\) hadrons from such busy events will require 4D tracking including timing [191]. With such a detector one could achieve unprecedented sensitivities, some of which are listed in Table 4.
## 8 Conclusion
Such fantastic accuracies will require enormous efforts in understanding the detector, the backgrounds, and the simulation. The prize will be an extremely precise knowledge of the flavour sector, including of course the KM matrix at the origin of this meeting. Figures 18 (left)
Figure 17: LHCb offline dataflow. Figure from Ref. [185].
and 3 show the progress between the early days of KM metrology and now, and Fig. 18 (right) shows how much there is still to do.
Precision metrology should not be dismissed. It is about the best possible understanding of Nature and the best possible exploitation of the machines we have built. It is a prerequisite to the discovery of new physics. Who knows if at this level of precision the triangle will still close? Either way, it is a journey worth taking.
\begin{table}
\begin{tabular}{l|c|c|c}
 & Legacy & 2026 & U2 \\
Observable & (9 \(\mathrm{fb}^{-1}\)) & (23 \(\mathrm{fb}^{-1}\)) & (300 \(\mathrm{fb}^{-1}\)) \\ \hline
\(\sin 2\phi_{1}\), with \(B^{0}\!\rightarrow\!J\!/\!\psi K^{0}_{\mathrm{S}}\) & 0.015 [29] & 0.011 & 0.003 \\
\(\phi_{s}\), with \(B^{0}_{s}\!\rightarrow\!J\!/\!\psi K^{+}K^{-}\) [mrad] & 23 [188] & 14 & 4 \\
\(\phi_{s}^{s\bar{s}s}\), with \(B^{0}_{s}\!\rightarrow\!\phi\phi\) [mrad] & 80 [65] & 39 & 11 \\
\(\phi_{3}\) & \(4^{\circ}\) [32] & \(1.5^{\circ}\) & \(0.35^{\circ}\) \\
\(|V_{ub}|/|V_{cb}|\) & 6\% [189] & 3\% & 1\% \\ \hline
\(\mathcal{R}_{\mu^{+}\mu^{-}}\) & 90\% [76] & 34\% & 10\% \\
\(R_{K}\) \((1<q^{2}<6\,\mathrm{GeV}^{2}\!/c^{4})\) & 0.1 [95] & 0.025 & 0.007 \\
\(R_{K^{*}}\) \((1<q^{2}<6\,\mathrm{GeV}^{2}\!/c^{4})\) & 0.1 [95] & 0.031 & 0.008 \\ \hline
\(R(D^{*})\) & 0.022 [138] & 0.0072 & 0.002 \\
\(R(J/\psi)\) & 0.24 [144] & 0.071 & 0.02 \\ \hline
\(\Delta A_{C\!P}(KK-\pi\pi)\) \([10^{-5}]\) & 85 [190] & 17 & 3.0 \\
\end{tabular}
\end{table}
Table 4: Expected sensitivities for selected observables. Table adapted from Refs. [186, 187].
Figure 18: (left) The unitarity triangle in 1995 [22]. (right) Possible status after Run 5 of the LHC [186].
## Acknowledgements
I would like to thank Hashimoto-san for the kind invitation to the workshop. It was a pleasure to visit KEK after 20 years of absence. Many thanks to Carla Gobel for commenting on the manuscript.
|
2304.03381 | $G_0W_0$ Ionization Potentials of First-Row Transition Metal Aqua Ions | We report computations of the vertical ionization potentials within the $GW$
approximation of the near-complete series of first-row transition metal (V-Cu)
aqua ions in their most common oxidation states, i.e. V$^{3+}$, Cr$^{3+}$,
Cr$^{2+}$, Mn$^{2+}$, Fe$^{3+}$, Fe$^{2+}$, Co$^{2+}$, Ni$^{2+}$, and
Cu$^{2+}$. The $d$-orbital occupancy of these systems spans a broad range from
$d^2$ to $d^9$. All the structures were first optimized at the density
functional theory level using a large cluster of explicit water molecules that
are embedded in a continuum solvation model. Vertical ionization potentials
were computed with the one-shot $G_0W_0$ approach on a range of transition
metal ion clusters (6, 18, 40, and 60 explicit water molecules) wherein the
convergence with respect to the basis set size was evaluated using the systems
with 40 water molecules. We assess the results using three different density
functional approximations as starting points for the vertical ionization
potential calculations, namely $G_0W_0$@PBE, $G_0W_0$@PBE0, and
$G_0W_0$@r$^2$SCAN. While the predicted ground-state structures are similar
with all three exchange-correlation functionals, the vertical ionization
potentials were in closer agreement with the experiment when using the
$G_0W_0$@PBE0 and $G_0W_0$@r$^2$SCAN approaches, with the r2SCAN based
calculations being significantly less expensive. Computed bond distances and
vertical ionization potentials for all structures were compared with available
experimental data and are in good agreement. | Daniel Mejia-Rodriguez, Alexander A. Kunitsa, Edoardo Aprà, Niranjan Govind | 2023-04-06T21:22:27Z | http://arxiv.org/abs/2304.03381v2 | # \(G_{0}W_{0}\) Ionization Potentials of First-Row Transition Metal Aqua Ions
###### Abstract
We report computations of the vertical ionization potentials within the \(GW\) approximation of the near-complete series of first-row transition metal (V-Cu) aqua ions in their most common oxidation states, i.e. V\({}^{3+}\), Cr\({}^{3+}\), Cr\({}^{2+}\), Mn\({}^{2+}\), Fe\({}^{3+}\), Fe\({}^{2+}\), Co\({}^{2+}\), Ni\({}^{2+}\), Cu\({}^{2+}\), and Zn\({}^{2+}\). The \(d\)-orbital occupancy of these systems spans a broad range from \(d^{2}\) to \(d^{10}\). All the structures were first optimized at the density functional theory level using a large cluster of explicit water molecules that are embedded in a continuum solvation model. Vertical ionization potentials were computed with the one-shot \(G_{0}W_{0}\) approach on a range of transition metal ion clusters (6, 18, 40, and 60 explicit water molecules) wherein the convergence with respect to the basis set size was evaluated using the systems with 40 water molecules. We assess the results using three different density functional approximations as starting points for the vertical ionization potential calculations, namely \(G_{0}W_{0}\)@PBE, \(G_{0}W_{0}\)@PBE0, and \(G_{0}W_{0}\)@r\({}^{2}\)SCAN. While the
predicted ground-state structures are similar with all three exchange-correlation functionals, the vertical ionization potentials were in closer agreement with experiment when using the \(G_{0}W_{0}\)@PBE0 and \(G_{0}W_{0}\)@r\({}^{2}\)SCAN approaches, with the r\({}^{2}\)SCAN-based calculations being significantly less expensive. Computed bond distances and vertical ionization potentials for all structures were compared with available experimental data and are in very good agreement.
Hydrated transition metal (TM) ions are involved in many biological processes. In fact, five of the first-row TMs (Mn, Fe, Co, Cu, Zn) are essential to human health, while the other three (Cr, V, Ni) have shown both beneficial and detrimental biological effects [1]. In many instances, TM aqua ions have electronic structures very similar to those found in the active sites of metalloenzymes. Consequently, understanding their chemistry is important for the study of both biological systems and other chemical systems with relevant industrial applications [2, 3].
The chemistry of TM aqua ions has been a very active research topic for both experiment and theory. From a computational point of view, the accurate description of these systems is still challenging despite many recent advances in the field [4]. It is well known that a sufficiently accurate theoretical description of 3\(d\) TM aqua ions must include, at least, the first and second solvation shells explicitly [4, 5]. An implicit solvation model, like the conductor-like screening model (COSMO) [6], must also be included in order to simulate bulk water effects and appropriately screen the highly-charged metal center. Very recently, Ghosh and co-workers have demonstrated that near-quantitative agreement between experimental X-ray absorption spectroscopy (XAS) and theoretical simulations can be achieved only by including a realistic description of the solvent and an appropriate level of theory [7]. The resulting models were composed of 40 explicit water molecules around the charged TM center, described with a basis set of triple-\(\zeta\) quality, surrounded by an implicit solvation description of the bulk solvent environment.
In this letter, we have performed a comprehensive study of the vertical ionization potentials (IPs) of the near-complete series of first-row transition metal aqua ions (\(d\)-orbital occupancy from \(d^{2}\) to \(d^{10}\)) within the \(GW\) approximation [8] using our recently reported scalable implementation based on Gaussian basis sets [9, 10]. To the best of our knowledge, this study is the first application of
the \(GW\) method to the study of IPs of TM aqua ions. The \(GW\) approximation is a well-established method in the solid-state physics community,[11, 12, 13, 14] which has recently attracted increasing interest in the chemistry arena for molecular systems[15, 16, 17, 18, 19, 20, 21, 22, 23] due to a greatly improved balance between cost and accuracy. In particular, we have utilized the one-shot \(G_{0}W_{0}\) approach due to its lower computational cost. Performing these calculations with other electronic structure methods of similar accuracy would incur a high computational cost, even when using linear-scaling approaches like the domain-based pair natural orbital implementation of coupled cluster theory.[4]
In Figure 1 we report the TM ion-water average bond distances. The experimental values were obtained via fitting of the extended X-ray absorption fine structure (EXAFS) spectra.[24, 25, 26, 27] The theoretical values have been computed from the average distance to the first coordination water shell for the V\({}^{3+}\), Cr\({}^{3+}\), Mn\({}^{2+}\), Fe\({}^{3+}\), Fe\({}^{2+}\), Co\({}^{2+}\), and Ni\({}^{2+}\) aqua ions, or to the subset of only the equatorial water molecules for the Cr\({}^{2+}\) and Cu\({}^{2+}\) aqua ions. Clear trends can be observed upon increasing the size of the water cluster around the cations: most of the ion-water bonds elongate by about 0.03 Å, while the equatorial Cr\({}^{2+}\)-O and Cu\({}^{2+}\)-O bonds shorten by the same amount. These bond distances are stable with respect to the change of basis set, from triple-\(\zeta\) to quadruple-\(\zeta\), at least for the smaller [M(H\({}_{2}\)O)\({}_{6}\)]\({}^{n+}\) and [M(H\({}_{2}\)O)\({}_{18}\)]\({}^{n+}\) systems. The average bond distances with the PBE0 exchange-correlation functional, middle panel of Figure 1, for [M(H\({}_{2}\)O)\({}_{40}\)]\({}^{n+}\) are in very good agreement with those obtained from Ref. [7] using a different basis set for the water molecules, namely 6-311G**.[28] It is interesting to note that the r\({}^{2}\)SCAN-D3 functional yields geometries comparable with the PBE0-D3 global hybrid functional, but at a fraction of the computational cost.1
Footnote 1: where the speed-up comes from the use of the density fitting approach for the evaluation of the Coulomb potential in the computation of the ground state energy.
Figure 2 shows the IPs obtained using the \(G_{0}W_{0}\) approach for the same three sets of optimized cluster geometries of the hydrated aqua complexes. The experimental reference values were obtained from Ref. [5]. The theoretical IPs were obtained using the triple-\(\zeta\) basis set combination (def2-TZVP/Sapporo-TZP-2012)
Figure 1: Average metal-oxygen bond distances, in Å, obtained with three different exchange-correlation functionals. Complexes were modeled with 6, 18, 40, or 60 explicit water molecules surrounded by COSMO implicit solvation. The hydrated Cr\({}^{2+}\) and Cu\({}^{2+}\) average bond distances only took into account equatorial bonds.
Figure 2: \(G_{0}W_{0}\) vertical IPs, in eV, using orbitals obtained with three different exchange-correlation functionals.
and, as a consequence, the values are not converged with respect to the basis set size (see Supplementary Information). However, the trends are clear: \(G_{0}W_{0}\)@PBE0 overestimates the IPs of all aqua ions, while \(G_{0}W_{0}\)@PBE and \(G_{0}W_{0}\)@r\({}^{2}\)SCAN underestimate them. Figure 2 also shows that there are still some minimal variations between the 18- and 40-explicit-water models. It is important to note that our \(G_{0}W_{0}\) implementation does not take into account the optical response from the implicit solvation model. The effect of this omission is expected to be small since we are focusing on the occupied part of the spectrum of systems with relatively large charges.[29]
In order to further assess the convergence of the \(G_{0}W_{0}\)@DFT IP values, and evaluate indirectly the effect of the missing optical response term mentioned above, we decided to extend the size of the systems to 60 water molecules using the r\({}^{2}\)SCAN DFA. The choice of DFA follows from the quality of geometrical parameters and \(G_{0}W_{0}\) vertical IPs that can be obtained with r\({}^{2}\)SCAN at a fraction of the cost of PBE0 calculations. These results are shown in the bottom panels of Figures 1 and 2. It is evident that the [M(H\({}_{2}\)O)\({}_{40}\)]\({}^{n+}\) systems already offer converged results for both bond distances and IPs. This means that, as expected, the missing response term should be negligible in the larger models.
It is important to note that no dynamical information is contained in the previous results. That is, the bond distances and IPs were obtained from a single optimal conformation of each model. The lack of missing dynamical information is not expected to drastically change our results. Indirect confirmation of this observation follows from the relatively smooth behavior of bond distances and IPs obtained with growing model size, as the starting geometries were completely unrelated to each other. Thus, some conformational broadening is already included in the reported average ion-water distances. An exception is the hydrated Cu\({}^{2+}\) ion, which exhibits complex first-solvation shell dynamics.[30, 27, 31] In order to evaluate the effect that this fluxional behavior has on the vertical IP, we ran an _ab initio_ molecular dynamics (AIMD) simulation of the implicitly solvated [Cu(H\({}_{2}\)O)\({}_{40}\)]\({}^{n+}\) model at 298.15 K for 3.7 ps using the PBE0 functional, a double-\(\zeta\) basis set, and a timestep of 0.24 fs. Sharp distinctions between the different reported coordination symmetries
(commonly described as 6, 5+1, 4+1+1, and 5 coordinated complexes) were not observed but rather complex dynamics were obtained. Figure 3 shows the Cu-O bond distance fluctuations for the six nearest oxygen atoms. Equatorial bond lengths fluctuate around a mean value of slightly less than 2 A. Axial bond lengths exhibit wider oscillations bracketed between about 2.20 and 2.50 A. In some instances, however, axial molecules can be found at distances greater than 2.75 A. Note also the presence of Berry-like pseudorotations every \(\sim\) 1 ps: each pair of water molecules (corresponding to each column in Figure 3) occupied the axial position at different times, with the first pair (left column) occupying the position between 2 and 3 ps, the second pair (middle column) from the 3 ps mark onwards, and the last pair (right column) during the first 2 ps. These pseudorotations were previously found in MD simulations, using periodic boundary conditions, at about the same frequency [30].
The axial waters were further classified as bonded, partially bonded (corresponding to the "+1"
Figure 3: Cu–O bonds distances [Å] (top) and bond orders (bottom) for the six nearest oxygen atoms. The plots were split by axial pairs in order to facilitate visualization, with each column corresponding to a given pair. Note that the ordinate of the top panel was cut at 3 Å in order to reduce white space. The bond orders were obtained using intrinsic atomic orbitals (IAOs) representation.
in the 5+1 and 4+1+1 coordination symmetries), or not bonded; for this classification, the Wiberg bond orders [32] were computed using intrinsic atomic orbitals (IAOs) [33]. In Figure 3, the bond orders show continuous and discrete changes which closely track changes in the Cu-O distance. However, the bond order fluctuations are relatively larger, allowing the discrimination of the different coordination symmetries. The region around an IAO bond order of 0.20 corresponds to equatorial water molecules, while the region around 0.09 corresponds to axial ones. Based on this, we define water molecules with bond orders \(>0.09\) as fully bonded, those with bond orders between 0.05 and 0.09 as partially bonded, and those with bond orders \(<0.05\) as not bonded. Using this classification, we observe the following distribution of coordination modes for [Cu(H\({}_{2}\)O)\({}_{40}\)]\({}^{2+}\): 1) 4+1 coordination (2%); 2) 4+1+1 coordination (18.3%); 3) 5 coordination (16.3%); 4) 5+1 coordination (37.3%); and 5) 6 coordination (26%). For comparison, recent experimental results obtained by combining EXAFS and MXAN found that Cu\({}_{\rm(aq)}^{2+}\) existed half of the time as an axially elongated square pyramidal (5-coordinated) structure and the other half as a \(5+1\) structure [31].
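The bookkeeping implied by these thresholds can be summarized in a few lines (a sketch; `bond_orders` holds the six IAO bond orders of one snapshot):

```python
def coordination_label(bond_orders):
    """Map the six Cu-O IAO bond orders of a frame to a coordination label,
    using the thresholds above: >0.09 bonded, 0.05-0.09 partially bonded."""
    bonded = sum(1 for b in bond_orders if b > 0.09)
    partial = sum(1 for b in bond_orders if 0.05 <= b <= 0.09)
    return str(bonded) + "+1" * partial   # e.g. 4 bonded, 2 partial -> "4+1+1"
```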
The four dominant conformations of Cu\({}_{\rm(aq)}^{2+}\) according to our classification (4+1+1, 5, 5+1, and 6) have a significant impact on the highest occupied quasiparticle (HOQP) energy of [Cu(H\({}_{2}\)O)\({}_{40}\)]\({}^{2+}\). The top panel of Figure 4 shows, in black, the contribution of the HOQP to the valence photoelectron spectrum of Cu\({}_{\rm(aq)}^{2+}\), obtained by Lorentzian broadening each HOQP energy computed for more than 150 snapshots along the trajectory. Note that the range of HOQP energies sampled is about 1.8 eV, which is rather large. The top panel of Figure 4 also shows the contribution stemming from each individual coordination mode. Our classification shows a nearly complete splitting between the contributions from the different coordination modes, with lower coordinations leading to larger vertical IPs, as expected. The total spectra in the top panel of Figure 4 suggest sensitivity to the existence of at least three major structures (5, 5+1, and 6) in Cu\({}_{\rm(aq)}^{2+}\).
The bottom panel of Figure 4 shows the total simulated valence photoelectron spectrum in a 4 eV window (30 quasiparticle energies). The spectrum includes contributions from water, which bundle together from -11.5 eV to -10.5 eV. The rest of the spectrum, arising from the \(d\) shell of Cu\({}^{2+}\), is included within a 3 eV span between -10.5 eV and -7.5 eV. This is roughly the same width
shown in the differential valence photoelectron spectrum of Reference [5], and has roughly the same characteristics including a shoulder at about 1 eV and a maximum at around 2 eV from the low energy end.
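The broadening used for both panels amounts to summing unit-area Lorentzians centred at the quasiparticle energies; a minimal sketch (the grid and energies below are illustrative):

```python
import numpy as np

def broaden(energies, grid, fwhm):
    """Sum of unit-area Lorentzians of the given FWHM [eV] centred at `energies`."""
    gamma = 0.5 * fwhm
    e = np.asarray(energies)[:, None]
    return (gamma / np.pi / ((grid[None, :] - e)**2 + gamma**2)).sum(axis=0)

grid = np.linspace(-12.0, -7.0, 1001)        # energy grid [eV]
hoqp = [-8.6, -8.9, -9.2, -9.0, -8.4]        # illustrative snapshot HOQP energies
spectrum = broaden(hoqp, grid, fwhm=0.1)     # 0.1 eV FWHM as in the top panel
```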
This finding reinforces the suggestion that one needs more than two water coordination shells in order to simulate the complex dynamics of the Cu\({}^{2+}\) ion in aqueous solution. A short constant-temperature AIMD simulation of [Cu(H\({}_{2}\)O)\({}_{18}\)]\({}^{2+}\) also corroborates our suggestion, as its structural dynamics were not as complex as those of [Cu(H\({}_{2}\)O)\({}_{40}\)]\({}^{2+}\).
In summary, we have presented a detailed computational study of the vertical IPs of the first
Figure 4: Top: changes in the highest occupied quasiparticle energy [eV] along a 3.7 ps AIMD trajectory of the hydrated Cu\({}^{2+}\) ion. The plot was obtained by broadening each peak with a Lorentzian with 0.1 eV FWHM. Bottom: Total valence photoelectron spectra of hydrated Cu\({}^{2+}\) along the same 3.7 ps trajectory. Individual contributions from the 30 highest occupied quasiparticle states were broadened with a 0.25 eV FWHM Lorentzian.
row TM aqua ions with comparisons to experimental data. Specifically, we have used DFT-optimized structures of explicitly solvated TM aqua ions to compute vertical IPs using the one-shot \(G_{0}W_{0}\) approach. Vertical IPs were calculated on a range of TM ion clusters of increasing size (using 6, 18, 40, and 60 explicit water molecules). Convergence with respect to the number of explicit water molecules (surrounded by a continuum solvation model) and the basis set size was tested. We also evaluated the performance of three different DFAs as starting points, namely, PBE, PBE0, and r\({}^{2}\)SCAN. Utilizing water as the common solvation ligand for all the TM ions makes this series an excellent benchmark to test the predictive ability of theory, not only from a structural standpoint, but also for spectroscopies and for a deeper understanding of complex ligand chemistries. We have demonstrated that the \(G_{0}W_{0}\) approximation offers an excellent balance between cost and accuracy for predicting the vertical IPs of aquated open-shell first-row transition metal cations and for capturing the effects of the complex structural dynamics that some of the cations exhibit in aqueous solution.
## 2 Computational Methods
In order to perform a \(G_{0}W_{0}\) calculation, one needs to first compute the single-particle orbitals and energies obtained with a mean-field theory. Given that \(G_{0}W_{0}\) does not optimize these orbitals, the method has some degree of starting-point dependency. Here, we start from Kohn-Sham (KS) density functional theory (DFT) calculations using three different density functional approximations (DFAs) to the exchange-correlation energy. All ground-state DFT calculations were performed with the latest development version of the NWChem[34] computational chemistry software using the Perdew-Burke-Ernzerhof (PBE),[35] its global hybrid PBE0[36] extension, and the recent regularized-restored strongly constrained and appropriately normed (r\({}^{2}\)SCAN)[37] DFAs. The def2-{T,Q}ZVP basis sets[38] for hydrogen and oxygen were used in combination with the Sapporo-{T,Q}ZP basis sets for the TM atoms,[39] respectively. The van der Waals dispersion interactions were accounted for with the D3 empirical model[40] using Becke-Johnson damping.[41] D3
parameters for r\({}^{2}\)SCAN were obtained from Ref. [42].
The explicitly solvated TM aqua ion clusters were optimized at the corresponding DFT level without symmetry constraints, surrounded by the COSMO implicit solvation model [6] as modified by York and Karplus [43]. Different from traditional implementations of the COSMO model, the atomic spheres were discretized using a spherical Fibonacci lattice [44, 45, 46, 47] with 401 points. The York-Karplus switching function parameters, including the covalent radii, were updated due to the presence of many COSMO charges located inside the explicit water volume. Further details about these modifications can be found in the Supporting Information. The starting geometries for the optimization of the [M(H\({}_{2}\)O)\({}_{6}\)]\({}^{n+}\) and [M(H\({}_{2}\)O)\({}_{18}\)]\({}^{n+}\) aqua ions were taken from Ref. [48], the starting geometries for the larger [M(H\({}_{2}\)O)\({}_{40}\)]\({}^{n+}\) systems were taken from Ref. [7], and the starting geometries for the [M(H\({}_{2}\)O)\({}_{60}\)]\({}^{n+}\) systems were obtained by carving a sphere out of an equilibrated SPC/E water simulation and pre-optimizing it using a double-\(\zeta\) basis set.
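For illustration, a spherical Fibonacci lattice such as the 401-point one used here can be generated with the standard golden-angle recipe (a sketch, not the NWChem implementation):

```python
import numpy as np

def fibonacci_sphere(n=401):
    """n roughly uniform points on the unit sphere (golden-angle lattice)."""
    k = np.arange(n)
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    theta = 2.0 * np.pi * k / golden          # azimuthal angles
    z = 1.0 - (2.0 * k + 1.0) / n             # uniform in cos(polar angle)
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(theta), r * np.sin(theta), z], axis=1)

points = fibonacci_sphere(401)  # scale by the atomic radius to build a cavity
```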
Vertical ionization energies were computed using the \(GW\) module recently implemented in a development version of NWChem [9, 10] at the \(G_{0}W_{0}\) level. The necessary integrals were obtained using the contour-deformation approach, with 200 points for the numerical integration over the imaginary axis. The def2-{T,Q}ZVP-RIFIT charge density fitting basis [49] was used for H and O. The fitting basis for the TM atom was automatically generated using the "AutoAux" approach described in Ref. [50] and recently implemented in the Basis Set Exchange [49].
## 4 Supporting Information
### Supporting Information Available
Basis set convergence studies, details about COSMO implementation, and Tables with detailed values used in all plots can be found in the Supporting Information file.
## Acknowledgement
D.M.R, E.A., N.G. acknowledge support from the Center for Scalable Predictive methods for Excitations and Correlated phenomena (SPEC) under FWP 70942 and J.L.F. is supported under FWP 16248, which are funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Division of Chemical Sciences, Geosciences and Biosciences as part of the Computational Chemical Sciences (CCS) program at Pacific Northwest National Laboratory (PNNL). PNNL is operated by Battelle Memorial Institute for the United States Department of Energy under DOE contract number DE-AC05-76RL1830. This research benefited from computational resources provided by EMSL, a DOE Office of Science User Facility sponsored by the Office of Biological and Environmental Research and located at the Pacific Northwest National Laboratory (PNNL). This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.
|
2302.00506 | Decentralized Stream Runtime Verification for Timed Asynchronous
Networks | We study the problem of monitoring distributed systems where computers
communicate using message passing and share an almost synchronized clock. This
is a realistic scenario for networks where the speed of the monitoring is
sufficiently slow (at the human scale) to permit efficient clock
synchronization, where the clock deviations is small compared to the monitoring
cycles. This is the case when monitoring human systems in wide area networks,
the Internet or including large deployments.
More concretely, we study how to monitor decentralized systems where monitors
are expressed as stream runtime verification specifications, under a timed
asynchronous network. Our monitors communicate using the network, where
messages can take arbitrarily long but cannot be duplicated or lost. This
communication setting is common in many cyber-physical systems like smart
buildings and ambient living. Previous approaches to decentralized monitoring
were limited to synchronous networks, which are not easily implemented in
practice because of network failures. Even when network failures are unusual,
they can require several monitoring cycles to be repaired.
In this work we propose a solution to the timed asynchronous monitoring
problem and show that this problem generalizes the synchronous case. We study
the specifications and conditions on the network behavior that allow the
monitoring to take place with bounded resources, independently of the trace
length. Finally, we report the results of an empirical evaluation of an
implementation and verify the theoretical results in terms of effectiveness and
efficiency. | Luis Miguel Danielsson, César Sánchez | 2023-02-01T15:26:54Z | http://arxiv.org/abs/2302.00506v2 | # Decentralized Stream Runtime Verification for Timed Asynchronous Networks
###### Abstract
We study the problem of monitoring distributed systems where computers communicate using message passing and share an almost synchronized clock. This is a realistic scenario for networks where the speed of the monitoring is sufficiently slow (at the human scale) to permit efficient clock synchronization, where the clock deviation is small compared to the monitoring cycles. This is the case when monitoring human systems in wide area networks, the Internet, or large deployments.
More concretely, we study how to monitor decentralized systems where monitors are expressed as stream runtime verification specifications, under a timed asynchronous network. Our monitors communicate using the network, where messages can take arbitrarily long but cannot be duplicated or lost. This communication setting is common in many cyber-physical systems like smart buildings and ambient living. Previous approaches to decentralized monitoring were limited to synchronous networks, which are not easily implemented in practice because of network failures. Even when network failures are unusual, they can require several monitoring cycles to be repaired.
In this work we propose a solution to the timed asynchronous monitoring problem and show that this problem generalizes the synchronous case. We study the specifications and conditions on the network behavior that allow the monitoring to take place with bounded resources, independently of the trace length. Finally, we report the results of an empirical evaluation of an implementation and verify the theoretical results in terms of effectiveness and efficiency.
Keywords: decentralized monitoring, distributed, runtime verification, stream runtime verification, timed asynchronous networks
## 1 Introduction
We study the problem of decentralized runtime verification of stream runtime verification (SRV) specifications under the timed asynchronous model of computation. In decentralized monitoring a specification is decomposed into a network of monitors that communicate by exchanging messages. These monitors cooperatively evaluate the specification against a trace of input observations performed
at distributed locations. We present a solution to the decentralized monitoring problem under the timed asynchronous model of computation--in which processes share a sufficiently synchronized global clock but where messages can take arbitrarily long to arrive.
_Runtime verification_ (RV) is a dynamic technique for software quality assurance that consists of generating a monitor from a formal specification. This monitor then inspects a single trace of execution of the system under analysis. In contrast to static verification techniques, RV sacrifices completeness to provide a readily usable formal method that, for example, can be easily combined with testing and debugging. One of the problems that RV must handle is how to generate monitors from a specification. Early approaches to RV specification languages were based on temporal logics [11, 12, 13], regular expressions [20], timed regular expressions [20], rules [1], or rewriting [21]. Another approach to monitor specifications is Stream Runtime Verification (SRV)--pioneered by Lola [1]--which defines monitors by declaring equations that describe the dependencies between output streams of results and input streams of observations. SRV is a richer formalism than most RV solutions: it goes beyond Boolean verdicts (as in logical techniques) by allowing specifications that compute richer verdicts as output. Examples include counting events and other statistics, computing robustness values, or generating explanations of errors. See [1, 13, 14, 15, 16] for examples illustrating the expressivity of SRV languages.
Another important aspect of runtime verification is the operational execution of monitors: how to collect information and how to perform the monitoring task. We focus in this paper on _online_ monitoring, where the monitoring happens incrementally as the input trace is being observed. In [1, 17, 18] the authors consider a centralized specification which gets deployed as a network of distributed monitors connected via a synchronous network, where the global synchronous clock is used both for communication and periodic sampling. Monitors exchange messages and cooperate to perform the global monitoring task. This problem is called _decentralized monitoring_ (see [1]). We study here timed asynchronous networks of communication together with periodic sampling of inputs, that is, synchronous computation over an asynchronous network. Our solution subsumes the previously available SRV solution for synchronous computation and a synchronous reliable network studied in [18]. We call the more general problem studied in this paper the _timed asynchronous decentralized monitoring problem_. Our goal is to generate local monitors at each node that collaborate to monitor the specification, distributing the computational load while minimizing the network bandwidth and the latency of the computation of verdicts. Apart from more efficient evaluation, decentralized monitoring can provide fault-tolerance as the process can partially evaluate
a specification using the information provided by the part of the network that does not fail. In the same spirit, if part of the network of cooperating monitors is clogged--in the sense that it is working slower for some reason--the other part can keep its normal throughput. Consider for example an _if-then-else_ specification with a slow computation needed to obtain the value of both the _then_ and the _else_ parts. Consider a decentralized deployment with three monitors connected as a tree: the leaf monitors compute the _then_ and the _else_ parts, while the root monitor computes the specification using a Boolean input stream for the _if_ part. Assume that the condition is true 90% of the time, so most of the time the _then_ value is used and the _else_ value is discarded. Now, also consider that the network link between the root monitor and the leaf monitor that computes the _else_ part is slow. The throughput of the root of the specification remains unaffected for that 90% of the time, and the result can be produced without waiting for the long computation and the network delay of the link that affects the _else_ part. We plan to leverage the advantages of decentralized systems to aggressively incorporate fault-tolerance in future work.
#### Our Solution.
In this paper we provide a solution to the decentralized monitoring problem for Lola [1] specifications for arbitrary network topologies and placement of the local monitors.
We study _timed asynchronous_ networks [10], where nodes share a global clock (built upon bounding the network synchronicity delays and hardware clock drifts) but monitoring messages can take arbitrarily long. Timed asynchronous networks [10] "... allow practically needed services such as clock synchronization, membership, consensus, election, and atomic broadcast to be implemented". Synchronous networks are a special case where, additionally, messages take a known bounded time to arrive. We use the fact that a global clock is available to adopt a model of computation for monitoring that proceeds in rounds, where each round consists of input readings and processing of incoming messages, followed by an update of the internal state of the local monitors, and finally the production of output messages. This synchronous execution model is realistic in many scenarios, for example in smart buildings or smart cities--where clocks can be synchronized using a network time protocol--that is sufficiently precise for round cycles of tens of seconds. We also assume in this paper a reliable system: nodes do not crash, and messages are not lost or duplicated. In our solution, different parts of the specification (modeled as streams), including input readings, are deployed into different network nodes as a local monitor. Local monitors will communicate with other monitors when necessary to resolve the streams assigned to them, trying to minimize the communication overhead. Intuitively, data will be read from sensor monitors, and then each layer of intermediate monitors will compute sub-expressions and communicate partial results to remote monitors in charge of super-expressions, ultimately computing the stream of values of the root expression. A degenerate case of this setting is a centralized solution: nodes with mapped observations send their sensed values to a fixed central node that is responsible for computing the whole specification.
The SRV language that we consider is Lola [4, 10]. We will identify those specifications and conditions on the network behavior that allow the monitoring to take place with bounded resources, independently of the trace length.
#### Motivating Example.
Example 1: We use as a running example a smart building with rooms equipped with sensors and a central node. The aim is to generate alarms when there is a fire risk. The following specification captures this risk by detecting sharp rises in temperature and _CO\({}_{2}\)_ in a certain room. We place the computations needed to decide whether the measured variables rise 'enough' at those nodes where the sensor readings take place. In this way, the central node only needs to compute which nodes present both the temperature and the _CO\({}_{2}\)_ alarm. We omit the _CO\({}_{2}\)_ computation for simplicity and readability (as it is an exact mirror of the temperature computation). The _CO\({}_{2}\)_ values would be useful to assess the risk of fire at the 'Building' monitor.
```
@Room1 {
  input num t_1
  # tini_1 is a constant
  # with meaningful bounds
  define num low = 1.6 * tini_1
  define num hi = 2.0 * tini_1
  define num t_spike_q_1 =
    if t_1 <= low then 0
    else if t_1 > hi then 1
    else (t_1 - low) / (hi - low)
}
@Room2 {
  input num t_2
  define num t_low = 1.6 * tini_2
  define num t_hi = 2.0 * tini_2
  define num t_spike_q_2 =
    if t_2 <= t_low then 0
    else if t_2 > t_hi then 1
    else (t_2 - t_low) / (t_hi - t_low)
}
@Building {
  define bool fire_risk_q_1 = t_spike_q_1 > 0.5
  define bool fire_risk_q_2 = t_spike_q_2 > 0.5
}
```
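To make the quantized spike indicator concrete, the following Python sketch mirrors the piecewise definition of t_spike_q_1; the value of tini and the sample readings are hypothetical:

```python
def t_spike_q(t: float, tini: float) -> float:
    """Quantized temperature-spike indicator, mirroring the Lola equation."""
    low, hi = 1.6 * tini, 2.0 * tini
    if t <= low:
        return 0.0
    if t > hi:
        return 1.0
    return (t - low) / (hi - low)

# Hypothetical readings with tini = 20.0; fire risk is flagged above 0.5.
for t in [20.0, 35.0, 38.0, 45.0]:
    q = t_spike_q(t, tini=20.0)
    print(t, q, q > 0.5)
```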
#### Related work.
The term _decentralized monitoring_ is used in the survey [11] to distinguish it from distributed monitoring, where processes do not share a global clock. In distributed monitoring a completely asynchronous network is assumed, while typically decentralized monitoring assumes
a completely synchronous network where all samples and communication occur in lockstep. In this paper we explore the middle ground: network nodes share a sufficiently synchronized global clock (like in synchronous distributed systems) but communication can take arbitrarily long (like in asynchronous distributed systems). Also, [17] presents other concepts, such as policy checking, that are called decentralized monitoring but do not correspond to the monitoring presented in this paper, because they concern only global safety properties that can be used for asynchronous networks with asynchronous computations.
In [14] they also study timed asynchronous networks of cooperating monitors but use an SMT-solver for simplifying LTL formulas.
Distributed stream processing has been largely studied. In [15] streams are used in Complex Event Processing, where events may be structured datatypes and where computation may be complex in the sense that several operations are needed for each event, for example in sliding window operations that make aggregate calculations on the arriving events. The aim of [15] is to merge privacy and approximation techniques, obtaining zero-knowledge privacy together with low-latency and efficient analytics. In [13] Apache Flink is introduced, where streaming dataflow processing is used to handle both continuous streams and batch processing. Distributed and decentralized monitoring has been studied in the context of runtime verification. Sen et al. [12] introduce PT-DTL, a variant of LTL for monitoring distributed systems, but they consider a completely asynchronous distributed system and are limited to Boolean verdicts. The work in [17] uses slices to support node crashes and message errors when monitoring distributed message passing systems with a global clock. Bauer et al. [1] introduce a first-order temporal logic and a trace-length independent spawning automaton, and in [1] show a decentralized solution to monitor LTL\({}_{3}\) in synchronous systems using formula rewriting. LTL\({}_{3}\) is a three-valued variant of LTL with a central value in the lattice that captures when an expression has an unknown value so far, so more of the input trace must be processed to determine its truth value. This is improved in [18, 19] using an Execution History Encoding (EHE). EHE is a data structure that stores the partially evaluated expressions of different monitors with their partial information, allowing decentralized monitors to infer the state in which the monitoring automaton is. In [18] the authors extend the EHE with distributed and multi-threaded support, guaranteeing the determinism of the data structure by construction. Then they analyze the compatibility and monitorability of decentralized specifications using EHE. However, the verdicts and data are still Boolean and the network is assumed to be synchronous. In [1] global choreographies (as a kind of master-based protocol) are synthesized into distributed systems, including control flows, synchronization, notification, acknowledgment, and the embedding of computations. Also, they provide a transformation to Promela which allows verifying the implementation using LTL specifications. Some schemes that they showcase are variants of producer-consumer and two-phase commit, applied to building micro-services such as a buying system. This work focuses on synthesizing the flow of monitors, but again the observations and verdicts are Boolean. In [11] a synchronous network of LTL-monitors cooperates to reach a verdict on the system under test while the monitors may suffer crashes. In this scenario an SMT-based algorithm for synthesizing the automata for the LTL-monitors is presented that achieves fault tolerance, providing soundness even though crashed monitors never recover. Even though this work considers failures (which are out of the scope of our paper), it assumes synchronous communication. All these approaches consider only Boolean verdicts. In comparison, SRV can generate verdicts from arbitrary data domains.
All previous SRV efforts, from Lola [10], Lola2.0 [12], Copilot [13, 14, 15] and extensions to timed event streams, like TeSSLa [16], RTLola [12] or Striver [15], assume a centralized monitoring setting. In [15] the relationship between time-based (soft real time) and event-based models of computation and their effects on SRV are explored, but again in the centralized setting. The work in [1] shows how to monitor Metric Temporal Logic specifications of distributed systems (including failures and message reordering), where the nodes communicate in a tree fashion and the root emits the final verdict. The work in [10] proposes a solution to the synchronous monitoring of SRV specifications but assuming a synchronous network. We extend [10] to timed asynchronous networks.
#### Contributions and structure.
The main contribution of this paper is a solution, described in Section 3, to the timed asynchronous decentralized stream runtime verification problem. We provide a proof of correctness of the algorithms and show that our solution subsumes the synchronous decentralized problem without overhead. A second contribution, included in Section 3.6, is the description of those specifications and conditions on the network behavior that allow the monitoring to take place with bounded resources, independently of the trace length. Bounding resources is of the utmost importance in cyber-physical systems, where memory, bandwidth and even computing time are limited, in order to react properly and timely to the changing environment. If a cyber-physical system is trace-length independent it can run indefinitely even if the resources are physically constrained. A third contribution, detailed in Section 4, is a prototype implementation and an empirical evaluation. A fourth contribution, in Section 5, is a modified algorithm that allows nodes to save bandwidth by only communicating stream values when requested. Section 2 contains the preliminaries and Section 6 concludes.
## 2 Preliminaries: Stream Runtime Verification
We now briefly recall SRV. For a more detailed description see [10] and the tutorial [11]. The fundamental idea of SRV, pioneered by Lola [10], is to describe monitors declaratively via a set of equations that describe the dependencies between output streams of values and input streams of values. We focus here on online monitoring. A monitor is generated from a specification; at runtime it computes a sequence of values for the output streams as soon as possible after observing each value from the input streams. Input values are typically extracted from some sensor or read from a log file.
A Lola specification declares output streams in relation to the input streams, including both future and past temporal dependencies. The Lola language cleanly separates the temporal dependencies from the individual operations to be performed at each step, which leads to generalization of monitoring algorithms for logics to the computation of richer values such as numbers, strings or richer data-types.
### Lola Syntax.
A Lola specification consists of declaring the relation between output streams and input streams of events. Stream expressions are terms built using a collection of (interpreted) constructor symbols. Symbols are interpreted in the sense that each constructor is not only used to build terms, but is also associated with an evaluation function that, given values of the arguments, produces a value of the return type. Given a set \(Z\) of typed stream variables the set of _stream expressions_ consists of (1) variables from \(Z\), (2) offsets \(v[k,d]\) where \(v\) is a stream variable of type \(D\), \(k\) is an integer and \(d\) a value from \(D\), and (3) terms \(f(t_{1},\ldots,t_{n})\) applying constructor symbols \(f\) from the theories to previously defined terms. Stream variables represent sequences of values (streams) in the specification. The intended meaning of an offset expression \(v[-1,\mathit{false}]\) is the value of stream \(v\) in the previous position of the trace (or \(\mathit{false}\) if there is no such previous position, that is, at the beginning). We use \(\mathit{Term}_{D}(Z)\) for the set of stream expressions of type \(D\) constructed from variables from \(Z\) (and drop \(Z\) if clear from the context). Given a term \(t\), \(\mathit{sub}(t)\) represents the set of sub-terms of \(t\).
Definition 1 (Specification): A Lola specification \(\varphi(I,O)\) consists of a set \(I=\{r_{1},\ldots,r_{m}\}\) of input stream variables, a set \(O=\{s_{1},\ldots,s_{n}\}\) of output stream variables, and a set of defining equations, \(s_{i}=e_{i}(r_{1},\ldots,r_{m},s_{1},\ldots,s_{n})\) one per output variable \(s_{i}\in O\). The term \(e_{i}\) is from \(\mathit{Term}_{D}(I\cup O)\), where \(D\) is the type of \(s_{i}\).
We will use \(r\), \(r_{i}\), \(\ldots\) to refer to input stream variables; \(s\), \(s_{i}\), \(\ldots\) to refer to output stream variables; and \(u\), \(v\) for an arbitrary input or output stream variable. Given \(\varphi(I,O)\) we use \(\mathit{appears}(u)\) for the set of output streams that use \(u\), that
is \(\{s_{i}\mid u[-k,d]\in\mathit{sub}(e_{i})\text{ or }u\in\mathit{sub}(e_{i})\}\). Also, \(\mathit{ground}(t)\) indicates whether expression \(t\) is a ground expression (contains no variables or offsets) and therefore can be evaluated into a value using the interpretations of constants and function symbols.
Example 2: The property _"sum the previous values in input stream y, but if the reset stream is true, reset the count"_, can be expressed as follows, where stream variable root uses the accumulator acc and the input reset to compute the desired sum. The stream acc is defined with the keyword define to emphasize that it is an intermediate stream.
```
input bool reset
input num y
define int acc = y + root[-1|0]
output int root = if reset then 0 else acc
```
### Lola semantics.
We introduce now the formal semantics of Lola, which guarantees that there is a unique correct output stream for each input stream. This semantics makes it possible to prove that an algorithm is correct by showing that the algorithm produces the desired output. At runtime, input stream variables are associated incrementally with input streams of values.
Given input streams \(\sigma_{I}\) (one sequence per input stream variable) and an output candidate \(\sigma_{O}\) (one sequence per output stream variable), the formal semantics captures whether the pair \((\sigma_{I},\sigma_{O})\) matches the specification, which we write \((\sigma_{I},\sigma_{O})\vDash\varphi\). We use \(\sigma_{r}\) for the stream in \(\sigma_{I}\) corresponding to input variable \(r\) and \(\sigma_{r}(k)\) for the value of stream \(\sigma_{r}\) at position \(k\). For \((\sigma_{I},\sigma_{O})\vDash\varphi\) to hold, all streams must be sequences of the same length.
A _valuation_ of a specification \(\varphi\) is a pair \(\sigma:(\sigma_{I},\sigma_{O})\) that contains one stream of values of the appropriate type, all of the same length, for each input and output stream variable in \(\varphi\). Given a term \(t\), the _evaluation_ \([\![t]\!]_{\sigma}\) is a sequence of values of the type of \(t\) defined as follows:
* If \(t\) is a stream variable \(u\), then \([\![u]\!]_{\sigma}(j)=\sigma_{u}(j)\).
* If \(f=f(t_{1},\ldots,t_{k})\) then \([\![f(t_{1},\ldots,t_{k})]\!]_{\sigma}(j)=f([\![t_{1}]\!]_{\sigma}(j),\ldots, [\![t_{k}]\!]_{\sigma}(j))\).
* Finally, if \(t=v[i,c]\) is an offset, then its value at \(j\) is the value of \(v\) at \(j+i\) if \(j+i\) is a valid position of the trace, and the default value \(c\) otherwise. Formally: \([\![v[i,c]]\!]_{\sigma}(j)=[\![v]\!]_{\sigma}(j+i)\) if \(j+i\) is a valid position, and \(c\) otherwise.
A valuation \((\sigma_{I},\sigma_{O})\) satisfies a Lola specification \(\varphi\) whenever for every output variable \(s_{i}\), \([\![s_{i}]\!]_{(\sigma_{I},\sigma_{O})}=[\![e_{i}]\!]_{(\sigma_{I},\sigma_{O })}\). In this case we say that \(\sigma\) is an evaluation model of \(\varphi\) and write \((\sigma_{I},\sigma_{O})\vDash\varphi\).
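As an illustration of these semantics, the following Python sketch (our own encoding, not part of the Lola toolchain) computes the unique evaluation model of Example 2 over finite streams, using the offset semantics above:

```python
from typing import Dict, List

Stream = List[object]

def offset(sigma: Dict[str, Stream], v: str, j: int, i: int, default):
    """Semantics of v[i, default] at position j: sigma_v(j+i) if valid, else default."""
    k = j + i
    return sigma[v][k] if 0 <= k < len(sigma[v]) else default

# Example 2: acc = y + root[-1|0];  root = if reset then 0 else acc
def evaluate(reset: Stream, y: Stream) -> Stream:
    sigma = {"reset": reset, "y": y, "acc": [], "root": []}
    for j in range(len(y)):
        acc_j = y[j] + offset(sigma, "root", j, -1, 0)
        sigma["acc"].append(acc_j)
        sigma["root"].append(0 if reset[j] else acc_j)
    return sigma["root"]

print(evaluate([False, False, True, False], [1, 2, 3, 4]))  # [1, 3, 0, 4]
```

Note that the left-to-right computation order is only valid because the specification is well-formed: acc refers to root strictly in the past.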
The intention of a specification \(\varphi\) is to describe a unique output from a given input, which is guaranteed if \(\varphi\) has no cycles in the following sense. A _dependency graph_\(D_{\varphi}\) of a specification \(\varphi(I\cup O)\) is a weighted multi-graph \((V,E)\) whose vertices are the stream variables \(V=I\cup O\), and where \(E\) contains a directed
weighted edge \(u\xrightarrow{w}v\) whenever \(v[w,d]\) is a sub-term in the defining equation of \(u\). A specification \(\varphi\) is _well-formed_ if \(D_{\varphi}\) contains no zero-weight cycles, which guarantees that no stream depends on itself at the current position.
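Well-formedness can be checked on the dependency graph; a minimal Python sketch (sufficient for the small graphs in our examples, since it only enumerates simple cycles) is:

```python
from typing import Dict, List, Tuple

# Dependency graph as adjacency lists: u -> [(v, w)] when v[w, d] occurs in e_u.
Graph = Dict[str, List[Tuple[str, int]]]

def has_zero_weight_cycle(g: Graph) -> bool:
    """DFS over simple cycles; True if some cycle has total weight 0."""
    def dfs(start, node, weight, visited):
        for succ, w in g.get(node, []):
            if succ == start and weight + w == 0:
                return True
            if succ not in visited and dfs(start, succ, weight + w, visited | {succ}):
                return True
        return False
    return any(dfs(u, u, 0, {u}) for u in g)

# Example 2: the only cycle (root -> acc -> root) has weight -1, so well-formed.
g = {"root": [("reset", 0), ("acc", 0)], "acc": [("y", 0), ("root", -1)]}
print(has_zero_weight_cycle(g))  # False
```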
Consider Example 2 again; its dependency graph is shown in Fig. 1.
Given a stream variable \(u\) and position \(i\geq 0\) an _instant stream variable_ (or simply instant variable) is defined as the pair \(u\langle i\rangle\), which is a fresh variable of the same type as \(u\). Note there is one different instant variable \(u\langle i\rangle\) for each instant \(i\). The _evaluation graph EG_ is the unrolling expansion of the dependency graph for all instants. Given \(\varphi(I,O)\) and a trace length \(M\) (or \(M=\omega\) for infinite traces) the evaluation graph \(G_{\varphi,M}\) has as vertices the set of instant variables \(\{u\langle k\rangle\}\) for \(u\in I\cup O\) and \(0\leq k<M\), and has edges \(u\langle k\rangle\to v\langle k^{\prime}\rangle\) if the dependency graph contains an edge \(u\xrightarrow{j}v\) and \(k+j=k^{\prime}\)
The corresponding evaluation graph for \(M=5\) is shown in Fig. 2.
Considering Example 2, \(acc\langle 4\rangle\) points to \(root\langle 3\rangle\) in all evaluation graphs with \(M\geq 4\). We denote by \(e_{s}\langle k\rangle\) the term (whose leaves are instant variables) that results from \(e_{s}\) at \(k\), by replacing the offset terms with the corresponding instant variables corrected with the appropriate shift. Consider again Example 2. The instant stream expression \(e_{acc}\) for \(acc\) at instant \(4\) is \(acc\langle 4\rangle=y\langle 4\rangle+root\langle 3\rangle\).
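The instantiation \(e_{s}\langle k\rangle\) can be sketched as a small term transformation: an offset \(v[w,d]\) at instant \(k\) becomes the instant variable \(v\langle k+w\rangle\), or the default \(d\) when \(k+w\) falls off the trace. A Python fragment with our own tuple encoding of terms:

```python
# Terms: ("const", c) | ("var", v) | ("off", v, w, d) | ("app", f, [args])
def instantiate(term, k):
    """Turn a defining equation at instant k into a term over instant variables."""
    tag = term[0]
    if tag == "const":
        return term
    if tag == "var":
        return ("ivar", term[1], k)
    if tag == "off":
        _, v, w, d = term
        return ("ivar", v, k + w) if k + w >= 0 else ("const", d)
    _, f, args = term
    return ("app", f, [instantiate(a, k) for a in args])

# e_acc = y + root[-1|0]; at k = 4 this yields acc<4> = y<4> + root<3>
e_acc = ("app", "+", [("var", "y"), ("off", "root", -1, 0)])
print(instantiate(e_acc, 4))  # ('app', '+', [('ivar', 'y', 4), ('ivar', 'root', 3)])
print(instantiate(e_acc, 0))  # ('app', '+', [('ivar', 'y', 0), ('const', 0)])
```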
Nodes of the dependency graph form a DAG of Maximal Strongly Connected Components (MSCCs). Note also that specifications whose dependency graph has no positive cycles are called _efficiently monitorable specifications_ [4]. There are no cycles in the evaluation graph of an efficiently monitorable specification, which enables us to reason by induction on evaluation graphs, as we will do later. Note that these specifications can have positive edges (corresponding to future dependencies) as long as they do not form a positive cycle. As can be shown [10], these specifications can be evaluated online (incrementally) with finite memory by a central monitor.

Figure 1: Dependency graph for Example 2

Figure 2: Evaluation graph for Example 2
Example 3: The following code snippets show a non-efficiently monitorable, an efficiently monitorable, and a very efficiently monitorable specification. The first snippet is a non-efficiently monitorable specification because the stream b depends on itself in the future: in the Evaluation Graph (EG), every instant variable depends on the next instant, unboundedly into the future. As a result, no instant variable of b is ever resolved on an infinite trace.
```
input int a
output int b = b[1|0]
```
The next specification is efficiently monitorable because there are only bounded references to the future: each instant variable of b depends only on a position two steps ahead, so every instant variable \(b\langle k\rangle\) will be resolved at \(k+2\).
```
input int a
output int b = a[2|0] + b[-1|0]
```
The last one is a very efficiently monitorable specification because there are no references to the future: all offsets are either negative or zero.
```
input int a
output int b = a + b[-1|0]
```
### Decentralized Synchronous Online Monitoring
An online decentralized algorithm to monitor Lola specifications in synchronous networks is presented in [1]. The main idea is to use a network of cooperating nodes to monitor a Lola specification, sliced according to its syntax tree; each subexpression, including inputs, is mapped to a node. This requires monitors to share their partial results (of the subexpressions) via messages. At each time instant the algorithm reads inputs, updates internal expressions and communicates results to the appropriate nodes, so that the specification ends up being computed by means of those partial results. Therefore, given a well-formed Lola specification, the decentralized online algorithm presented in [1] incrementally computes the value for each output instant variable, assuming a synchronous network where messages are not lost or duplicated. The algorithms presented here extend [1] to the more general setting of timed asynchronous networks.
## 3 Decentralized Stream Runtime Verification for Timed Asynchronous Networks
In this section we describe our solution to the decentralized SRV problem for timed asynchronous networks. The algorithm that we present below will compute the unique values of the output instant variables based on the values of the input readings. We prove the termination of the algorithm in Theorem 1 and its correctness in Theorem 2, verifying that the operational semantics are equivalent to the denotational semantics. We require a well-formed Lola specification, and a mapping between streams and the network nodes where they are computed. Each network node will host a local monitor that is responsible for computing some of the streams of the specification. We denote by \(\mu(s)\), for stream variable \(s\), the network node whose local monitor is responsible for resolving the values of stream \(s\). Local monitors exchange messages containing partial results whenever needed in order to compute the global monitoring task. However, our decentralized algorithm may compute some output values at different time instants than a centralized version, due to the different location of the inputs and the delays caused by the communication. We study this effect both theoretically in Section 3.6, and empirically in Section 4. A centralized monitor corresponds to the operational semantics in [1, 2], which is equivalent to a network mapping that assigns all input and output streams to a single node and therefore avoids communication.
### Problem Description
#### Network.
We assume a network with a set of nodes \(N\), such that every node can communicate with every other network node by sending messages. We assume reliable unicast communication (no message loss or duplication) over a timed asynchronous network, so a given message can take an arbitrary amount of time to arrive. Since network nodes share a global clock, the computation proceeds in cycles. In every cycle, all nodes in the network execute--in parallel and to completion--the following actions: (1) read input messages, (2) perform a terminating local computation, (3) generate output messages. We use the following type of message: \((s\langle k\rangle,c,n_{s},n_{d})\) where \(s\langle k\rangle\) is an instant variable, \(c\) is a value of the type of \(s\), \(n_{s}\) is the source node and \(n_{d}\) is the destination node. We use the following abbreviations \(\textit{msg.src}=n_{s}\), \(\textit{msg.dst}=n_{d}\), \(\textit{msg.stream}=s\langle k\rangle\) and \(\textit{msg.val}=c\). These messages are used to inform of the actual values read or computed.
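These messages can be encoded directly; a minimal Python sketch (the field names are ours):

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Resp:
    """A message (s<k>, c, n_s, n_d): stream s resolved to value c at instant k."""
    stream: str   # s
    instant: int  # k
    val: Any      # c
    src: str      # n_s, the source node
    dst: str      # n_d, the destination node

# E.g., Room1 informs Building that t_spike_q_1<7> resolved to 0.75:
m = Resp(stream="t_spike_q_1", instant=7, val=0.75, src="Room1", dst="Building")
print(m.stream, m.instant, m.val, m.src, m.dst)
```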
#### Stream Assignment and Communication Strategy
Given a specification \(\varphi(I,O)\) and a network with nodes \(N\), a _stream assignment_ is a map \(\mu:I\cup O\to N\) that assigns a network node to each stream variable. The node \(\mu(r)\) for an input stream variable \(r\) is the location in the network where \(r\) is sensed in every clock tick. At runtime, at every instant \(k\) a new input value for \(r\langle k\rangle\) is read. On the other hand, the node \(\mu(s)\) for an output stream variable \(s\) is the location whose local monitor is responsible for resolving the values of \(s\).
An instant value \(v\langle k\rangle\) is automatically communicated to all potentially interested nodes whenever the value of \(v\langle k\rangle\) is resolved. Let \(v\) and \(u\) be two stream variables such that \(v\) appears in the equation of \(u\) and let \(n_{v}=\mu(v)\) and \(n_{u}=\mu(u)\). Then, \(n_{v}\) informs \(n_{u}\) of every value \(v\langle k\rangle=c\) that \(n_{v}\) resolves by
sending a message \((v\langle k\rangle,c,n_{v},n_{u})\). We are finally ready to define the decentralized SRV problem.
Definition 2: A decentralized SRV problem \(\langle\varphi,N,\mu\rangle\) is characterized by a specification \(\varphi\), a network with nodes \(N\) and a stream assignment \(\mu\) for every stream variable.
We use DSRV for decentralized SRV problem. Solving a DSRV instance consists of computing the values of instant variables corresponding to the output streams based on the values of the instant variables of the input streams, by means of a network of interconnected nodes that host local monitors.
### Model of Communication
We now describe in detail the timed asynchronous model of computation that we assume. Every message inserted in the network arrives at its destination according to the following conditions:
* _Always later_: every message \(m\) inserted at \(t\) will arrive at \(t^{\prime}\) with \(t^{\prime}>t\);
* _Arbitrary delay_: there is no a-priori bound on the amount of time that any message will take to arrive.
* _FIFO between each pair of nodes_: let \(m_{1}\) and \(m_{2}\) be two messages with the same origin and destination, \(m_{1}.\mathit{src}=m_{2}.\mathit{src}\) and \(m_{1}.\mathit{dst}=m_{2}.\mathit{dst}\). Let \(m_{1}\) be inserted at \(t_{1}\) and arrive at \(t^{\prime}_{1}\), and let \(m_{2}\) be inserted at \(t_{2}\) and arrive at \(t^{\prime}_{2}\). If \(t_{1}<t_{2}\), then \(t^{\prime}_{1}\leq t^{\prime}_{2}\). That is, \(m_{1}\) cannot arrive later than \(m_{2}\).
The synchronous model is a particular case of the timed asynchronous model in which all messages inserted in the network always take the same amount of time between each pair of network nodes; in this case the delay is a constant. Formally, to analyze the behavior of our algorithms we model the message delays as a family of functions \(\mathit{arr}_{u\to v}\) (one for each pair of nodes \((u,v)\)), which provides at every moment \(t\) the instant \(t^{\prime}\) at which a message sent at \(t\) from \(u\) will arrive at \(v\).
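One way to model the arr functions for analysis is a delay table; the sketch below uses a hypothetical congested period and checks the causality and FIFO conditions of Section 3.2:

```python
def make_arr(delay):
    """Build arr_{a->b}(t) = t + d(t, a, b) from a delay function d."""
    return lambda a, b, t: t + delay(t, a, b)

# Hypothetical behavior: a congested period [3, 6) doubles the delay on every link.
def d(t, a, b):
    return 2 if 3 <= t < 6 else 1

arr = make_arr(d)
# Causality: arr(a, b, t) > t.  FIFO: t < t' implies arr(a, b, t) <= arr(a, b, t').
assert all(arr("u", "v", t) > t for t in range(10))
assert all(arr("u", "v", t) <= arr("u", "v", t + 1) for t in range(10))
print([arr("u", "v", t) for t in range(8)])  # [1, 2, 3, 5, 6, 7, 7, 8]
```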
### DSRV for Timed Asynchronous Networks: monitor and algorithm
Our solution consists of a collection of local monitors, one for each network node \(n\). A local monitor \(\langle Q_{n},U_{n},R_{n}\rangle\) for \(n\) maintains an input queue \(Q_{n}\) and two storages:
* **Resolved** storage \(R_{n}\), where \(n\) stores resolved instant variables \((v\langle k\rangle,c)\).
* **Unresolved** storage \(U_{n}\), where \(n\) stores unresolved equations \(v\langle k\rangle=e\) where \(e\) is not a value, but an expression that contains other instant variables.
When \(n\) receives a message from a remote node, the information is added to \(R_{n}\), so future local requests for the information can be resolved locally and immediately. At the beginning of the cycle of computation at instant \(k\), node \(n\) reads the values for the input streams assigned to it using local sensors and instantiates
for \(k\) all output stream variables that \(n\) is responsible for. After that, the equations obtained are simplified using the knowledge acquired so far by \(n\), which is stored in \(R_{n}\). Finally, new messages are generated and inserted in the queues of the corresponding neighbors.
```
1:  procedure Monitor
2:      Q_n ← ∅; U_n ← ∅; R_n ← ∅; k ← Now()
3:      while not END do Step(k)
4:      M ← k; Finalize(M)
5:  procedure Step(k)
6:      R_old ← MS_n.R_n
7:      ProcessMessages(MS_n)
8:      R_n.add({r⟨k⟩ ↦ read(r, k) | r ∈ ins_n})
9:      U_n.add({s⟨k⟩ ↦ e_s⟨k⟩ | s ∈ outs_n})
10:     Evaluate(MS_n)
11:     R_new ← MS_n.R_n \ R_old
12:     SendResponses(MS_n, R_new)
13:     Prune(MS_n)
14: procedure Evaluate(MS_n)
15:     done ← false
16:     while not done do
17:         done ← true
18:         for all s⟨k⟩ ↦ e ∈ U_n do
19:             e′ ← Subst(e, R_n)
20:             if ground(e′) then done ← false
21:                 U_n.del(s⟨k⟩ ↦ e); R_n.add(s⟨k⟩ ↦ e′)
22:             else U_n.del(s⟨k⟩ ↦ e); U_n.add(s⟨k⟩ ↦ e′)
23: procedure ProcessMessages(MS_n)
24:     for all msg = ⟨resp, s⟨k⟩, c⟩ ← Q_n.pop() do
25:         R_n.add(s⟨k⟩ ↦ c)
26: procedure SendResponses(MS_n, R_new)
27:     for all u⟨k⟩ ↦ c ∈ R_new do
28:         send(resp, u⟨k⟩, c, n, n_r)
29: procedure Prune(MS_n)
30:     for all u⟨j⟩ ↦ c in R_n s.t. now ≥ MTR(u⟨j⟩) do
31:         R_n.del(u⟨j⟩ ↦ c)    ▷ Remove entries no longer needed
```
**Algorithm 1** Local monitor at node \(n\) with \(\langle Q_{n},U_{n},R_{n}\rangle\)
More concretely, every node \(n\) executes the procedure Monitor shown in Algorithm 1, which invokes Step at every clock tick. The procedure Finalize is used to resolve the pending values at the end of the trace to their default values. Note that this procedure is never invoked if the monitored trace never terminates (the monitor will be continuously observing and producing outputs). The procedure Step executes the following steps:
1. **Process Messages**: Line 7 invokes the ProcessMessages procedure (lines 23-25), which processes incoming response arrivals, adding them to \(R_{n}\).
2. **Read Inputs and Instantiate Outputs:** Line 8 reads new inputs for the current time \(k\), and line 9 instantiates the equation of every output stream that \(n\) is responsible for.
3. **Evaluate:** Line 10 invokes the procedure Evaluate (lines 14-22), which evaluates the unresolved equations.
4. **Send Responses:** Line 12 invokes SendResponses (lines 26-28), sending messages for all newly resolved variables.
5. **Prune:** Lines 29-31 prune the set \(R\) of information that is no longer needed. See Section 3.6.
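The Evaluate fixpoint (lines 14-22) can be sketched in Python as repeated substitution until no more equations become ground. We reuse the tuple encoding of terms from the earlier sketches; Subst, ground and the value interpretation are simplified stand-ins for the real data theory:

```python
def subst(term, resolved):
    """Replace resolved instant variables by constants (line 19)."""
    if term[0] == "ivar" and (term[1], term[2]) in resolved:
        return ("const", resolved[(term[1], term[2])])
    if term[0] == "app":
        return ("app", term[1], [subst(a, resolved) for a in term[2]])
    return term

def ground(term):
    if term[0] == "const":
        return True
    return term[0] == "app" and all(ground(a) for a in term[2])

def value(term):
    if term[0] == "const":
        return term[1]
    args = [value(a) for a in term[2]]
    return {"+": lambda x, y: x + y}[term[1]](*args)

def evaluate(U, R):
    """Move equations from U to R while substitution makes them ground (lines 14-22)."""
    done = False
    while not done:
        done = True
        for iv, e in list(U.items()):
            e2 = subst(e, R)
            if ground(e2):
                del U[iv]
                R[iv] = value(e2)
                done = False
            else:
                U[iv] = e2
    return U, R

U = {("acc", 4): ("app", "+", [("ivar", "y", 4), ("ivar", "root", 3)])}
R = {("y", 4): 10, ("root", 3): 5}
print(evaluate(U, R))  # acc<4> resolves to 15 and leaves the unresolved storage
```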
### Formal Correctness
We now show that our solution is correct by proving that the output computed is the same as in the denotational semantics, and that every output is eventually computed.
Theorem 1: _All of the following hold for every instant variable \(u\langle k\rangle\):_
1. _The value of_ \(u\langle k\rangle\) _is eventually resolved._
2. _The value of_ \(u\langle k\rangle\) _is_ \(c\) _if and only if_ \((u\langle k\rangle,c)\in R\) _at some instant._
3. _A response message for_ \(u\langle k\rangle\) _is eventually sent to all interested network nodes_ (_all nodes responsible for streams_ \(v\) _such that_ \(v\in\mathit{appears}(u)\)_)._
Proof: The proof proceeds by induction on the evaluation graph, showing (1)-(3) simultaneously in the induction step, as these depend on each other in the previous inductive steps. Let \(M\) be the length of a computation (which can be \(\omega\)) and \(\sigma_{I}\) be an input of length \(M\). Note that (1) to (3) above are all statements about instant variables \(u\langle k\rangle\), which are the nodes of the evaluation graph \(G_{\varphi,M}\). We proceed by induction on \(G_{\varphi,M}\) (which is acyclic because \(\varphi\) is well-formed, by assumption).
* **Base case**: The base cases are the vertices of the evaluation graph that have no outgoing edges, which are either
* instant variables that correspond to inputs read from local sensors or
* to defined variables whose instant equation does not contain other instant variables; this is the case when either the equation is a constant or the time instant is such that the resulting offset falls off the trace, and the default value is used. Statement (1) follows immediately for inputs because at instant \(k\), \(u\langle k\rangle\) is read at node \(\mu(u)\). For output equations that do not have variables, or whose variables have offsets that once instantiated become negative or greater than \(M\), the value of their leaves is determined either immediately or at \(M\), when the offset is calculated. At this point, the value computed is inserted in \(R\), so (2) also holds at \(\mu(u)\). Note that (2) also holds for other nodes because the
response message contains \(u\langle k\rangle=c\) if and only if \((u\langle k\rangle,c)\in R_{n}\), where \(\mu(u)=n\). Then the response message is inserted exactly at the point it is resolved, so (1) implies (3).
* **Inductive case**: Consider an arbitrary \(u\langle k\rangle\) in the evaluation graph \(G_{\varphi,M}\) and let \(u_{1}\langle k_{1}\rangle,\ldots,u_{l}\langle k_{l}\rangle\) be the instant variables that \(u\langle k\rangle\) depends on. These are nodes in \(G_{\varphi,M}\) that are lower than \(u\langle k\rangle\), so the inductive hypothesis applies and (1)-(3) hold for them. Let \(n=\mu(u)\). At instant \(k\), \(u\langle k\rangle\) is instantiated and inserted in \(U_{n}\). The values of the instant variables it depends on are calculated and sent as well (by (1) and (3)). At the latest time of arrival, the equation for \(u\langle k\rangle\) has no more variables and is evaluated to a value, so (1) holds and (2) holds at \(n\). At this point, the response message is sent, so (3) holds for \(u\langle k\rangle\), and (2) also holds at the remote nodes.
This finishes the proof.
Theorem 1 implies that every value of every defined stream at every point is eventually resolved by our network of cooperating monitors. Therefore, given input streams \(\sigma_{I}\), the algorithm computes (by (2)) the unique output streams \(\sigma_{i}\), one for each \(s_{i}\). The element \(\sigma_{i}(k)\) is the value resolved for \(s_{i}\langle k\rangle\) by the local monitor for \(\mu(s_{i})\). The following theorem captures that Algorithm 1 computes the right values (according to the denotational semantics of Lola), while Theorem 1 captures that all values are eventually computed.
We use \(\mathit{out}(\sigma_{I})\) as the function from input streams to output streams that the cooperating monitors compute. We use \([s]\) for the stream of values corresponding to stream variable \(s\) in \(\mathit{out}(\sigma_{I})\). We now show that the sequence of values computed corresponds to the semantics of the specification.
Theorem 2: _Let \(\varphi\) be a specification, \(S=\langle\varphi,N,\mu\rangle\) be a decentralized SRV problem, and \(\sigma_{I}\) an input stream of values. Then \((\sigma_{I},\mathit{out}(\sigma_{I}))\vDash\varphi\)._
Proof: Let \(\sigma_{O}\) be the unique evaluation model such that \((\sigma_{I},\sigma_{O})\vDash\varphi\) (we use \(\sigma_{O}(s)\) for the output stream for stream variable \(s\) and \(\sigma_{O}(s)(k)\) for its value in the \(k\)-th position). We need to show that for every \(s\) and \(k\), \([s](k)=\sigma_{O}(s)(k)\). We again proceed by induction on the evaluation graph \(G_{\varphi,M}\).
* **Base case:** For inputs the value follows immediately. The other basic case corresponds to output variables \(s\) at instants at which they do not depend on other variables (because all occurrences of offsets, if any, fall off the trace). The evaluation of the value is performed by network node \(\mu(s)\), and it satisfies the equation \(e_{s}\) of \(s\) without depending on the value of any other stream. Therefore, it satisfies \([s](k)=[\![e_{s}]\!](k)=\sigma_{O}(s)(k)\), as desired.
* **Inductive case:** Let \(s\) be an arbitrary stream variable and \(k\) an arbitrary instant between \(0\) and \(M-1\), and assume that all instant variables \(u\langle k^{\prime}\rangle\) that \(s\langle k\rangle\) can reach in the evaluation graph satisfy the inductive hypothesis. Let \(n\) be the node in charge of computing \(s\). By Theorem 1, all the values are eventually received by \(n\) and stored in \(R_{n}\), and by IH, these values are the same as in the denotational semantics, that is \([u](k^{\prime})=\sigma_{O}(u)(k^{\prime})\). The evaluation of \(s\langle k\rangle\) corresponds to computing \([\![e_{s}]\!]\), which uses the semantics
of the expression (according to Section 2). A simple structural induction on the expression \(e_{s}\) shows that the result of the evaluation, that is the value assigned to \(s\langle k\rangle\), is \(\llbracket e_{s}\rrbracket_{\sigma}(k)=\sigma_{O}(s)(k)\), as desired.
This finishes the proof.
### Simplifiers
The evaluation of expressions in Algorithm 1 assumes that all instant variables in an expression \(e\) are known (i.e., \(e\) is ground), so the interpreted functions in the data theory can evaluate \(e\). Sometimes, expressions can be partially evaluated (or even their value fully determined) knowing only some but not all of the instant variables involved. A simplifier is a function \(f:\mathit{Term}_{D}\rightarrow\mathit{Term}_{D}\) such that (1) the variables in \(f(t)\) are a subset of the variables in \(t\), and (2) every substitution of values for the variables of \(t\) produces the same value as the substitution in \(f(t)\). For example, the following are typical simplifiers:
\[\begin{array}{ll}
\mathit{if}\ \mathit{true}\ \mathit{then}\ t_{1}\ \mathit{else}\ t_{2} &\mapsto t_{1}\\
\mathit{if}\ \mathit{false}\ \mathit{then}\ t_{1}\ \mathit{else}\ t_{2} &\mapsto t_{2}\\
\mathit{true}\lor x &\mapsto \mathit{true}\\
\mathit{true}\land x &\mapsto x\\
0\cdot x &\mapsto 0
\end{array}\]
In practice, simplifiers can dramatically affect the performance in terms of the instant at which an instant variable is resolved and, in the case of decentralized monitoring, the delays and number of messages exchanged. Essentially, a simplifier is a function from terms to terms such that, for every possible valuation of the variables in the original term it does not change the final value obtained. It is easy to see that for every term \(t\) obtained by instantiating a defining equation and for every simplifier \(f\), \(\llbracket t\rrbracket_{\sigma_{I},\sigma_{O}}=\llbracket f(t)\rrbracket_{( \sigma_{I},\sigma_{O})}\), because the values of the variables in \(t\) and in \(f(t)\) are filled with the same values (taken from \(\sigma_{I}\) and \(\sigma_{O}\)).
Consider arbitrary simplifiers _simp_ used in line 19 of Algorithm 1 to simplify expressions. Let \(U_{n}\) be the unresolved storage for node \(n\) and let \(u\langle k\rangle\) be an instant variable with \(\mu(u)=n\). By Algorithm 1, the sequence of terms \((u\langle k\rangle,t_{0}),(u\langle k\rangle,t_{1}),\ldots,(u\langle k\rangle,t_{k})\) that \(U_{n}\) will store is such that each \(t_{i}\) has the simplifier applied. It follows that the value computed using simplifiers is the same as without simplifiers. It is also easy to show that the algorithm using simplifiers obtains the value of every instant variable no later than the algorithm that uses no simplifier: in the worst case every instant variable is resolved when all the instant variables it depends on are known, and all response messages are sent at the moment they are resolved.
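A sketch of such a simplifier over the tuple-encoded terms used in the earlier sketches (the rule set is the one displayed above):

```python
def simplify(term):
    """Apply the rewrite rules bottom-up; the term's final value never changes."""
    if term[0] != "app":
        return term
    f, args = term[1], [simplify(a) for a in term[2]]
    if f == "ite" and args[0] == ("const", True):
        return args[1]                  # if true then t1 else t2  ->  t1
    if f == "ite" and args[0] == ("const", False):
        return args[2]                  # if false then t1 else t2 ->  t2
    if f == "or" and ("const", True) in args:
        return ("const", True)          # true or x  ->  true
    if f == "and" and args[0] == ("const", True):
        return args[1]                  # true and x ->  x
    if f == "*" and ("const", 0) in args:
        return ("const", 0)             # 0 * x      ->  0
    return ("app", f, args)

# The condition alone decides the value, even with both branches unresolved:
t = ("app", "ite", [("const", True), ("ivar", "then", 3), ("ivar", "else", 3)])
print(simplify(t))  # ('ivar', 'then', 3)
```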
### Theoretical Resource Utilization
The aim of this section is to define conditions under which local monitors only need bounded memory to compute every output value. The first thing to consider
is that the specification must be _decentralized efficiently monitorable_ [1], which essentially states that every strongly connected component of the dependency graph \(D_{\varphi}\) must be mapped to the same network node. That is, if \(u\) appears, transitively, in the declaration of \(v\) and \(v\) appears in the declaration of \(u\) (with some offsets), then \(\mu(u)=\mu(v)\).
In order to guarantee that a given storage in a local monitor for node \(n\) is bounded, we must provide an upper bound for how long it takes to resolve an instant variable for a stream that is assigned to \(n\). We use Time to Resolve (TTR) to refer to the amount of time that a given instant variable \(u\langle k\rangle\) takes to get resolved. This is the number of time instants between the instantiation of the variable at time \(k\) and the instant at which it gets resolved, leaving \(U_{n}\) and being stored in \(R_{n}\). This happens in line 21 of Algorithm 1.
#### General Equations for the Time to Resolve
We introduce now a general definition of recursive equations that capture when an instant variable \(s\langle k\rangle\) is resolved. In order to bound the memory used by the monitor at network node \(n\), we need to bound storages \(U_{n}\) and \(R_{n}\):
* **Bound on \(R_{n}\)**: Resolved values that are needed remotely are sent immediately to the remote nodes, so \(R_{n}\) only contains resolved values that are needed in the future locally at \(n\). Since efficiently monitorable specifications only contain (future) bounded paths, there is a maximum future reference \(b\) used in the specification. This upper bound limits how long a resolved value \(v\langle k\rangle\) can remain in \(R_{n}\), because after at most \(b\) steps the instant variables \(u\langle k^{\prime}\rangle\) that need the value of \(v\langle k\rangle\) stored in \(R_{n}\) will have been instantiated (note that \(k^{\prime}-k\leq b\)). That is, \(v\langle k\rangle\) is not needed after \(t=\max(k+b,k+\mathit{TTR}(v\langle k\rangle))\); at \(t\), the value of \(v\langle k\rangle\) can be removed from \(R_{n}\) (see the sketch after this list). This guarantees that the size of \(R_{n}\) is always upper-bounded by a constant in every node \(n\).
* **Bound on \(U_{n}\)**: The size of the memory required for storage \(U_{n}\) at the node \(n\) responsible to resolve \(s\) (that is \(n=\mu(s)\)) is proportional to the number of instantiated but unresolved instant variables. Therefore, to bound \(U_{n}\) we need to compute the bound on the time it takes to resolve instant variables of streams assigned to \(n\).
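A sketch of the pruning condition for \(R_{n}\) from the first item above, where b is the maximum future reference of the specification (the numbers are hypothetical):

```python
def prune_time(k: int, b: int, ttr: int) -> int:
    """A resolved v<k> can leave R_n after max(k + b, k + TTR(v<k>))."""
    return max(k + b, k + ttr)

# b = 2 (largest future offset) and TTR(v<5>) = 3 cycles: prune v<5> at time 8.
print(prune_time(k=5, b=2, ttr=3))  # 8
```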
The general equations that we present below depend on the delay of messages in the network. We will later instantiate these general equations for the following cases of network behavior:
* a synchronous network;
* a timed-asynchronous network with an upper-bound on message delays for the whole trace (we call this the **aeternal** case);
* timed-asynchronous network with an upper-bound for message delays in a given time-horizon (we call this the **temporary** case).
Note that the correctness of the algorithm (Theorem 2) establishes that the output streams \(\sigma_{O}\) only depend on the input streams \(\sigma_{I}\) but does not state
bounds on the time at which each element of \(\sigma_{O}\) is resolved or on the delays of messages.
In this section we study how the delay of messages affects the time at which instant variables are resolved, which in turn affects the memory usage at the computation nodes. We use \(d(t,a,b)\) for the time it takes for a message sent from \(a\) to \(b\) at time \(t\) to arrive. In other words, \(\mathit{arr}_{a\to b}(t)=t+d(t,a,b)\). Recall that we assume that messages are causal and queues are FIFO, as described in Section 3.2. Causality means that messages arrive after they are sent (that is, for every \(a\), \(b\) and \(t\), \(\mathit{arr}_{a\to b}(t)>t\)) and FIFO means that for every \(a\) and \(b\), if \(t<t^{\prime}\) then \(\mathit{arr}_{a\to b}(t)\leq\mathit{arr}_{a\to b}(t^{\prime})\).
We now capture the _Moment to Resolve_ for a given instant variable \(s\langle t\rangle\), represented as \(\mathit{MTR}(s\langle t\rangle)\), which captures the instant of time at which \(s\langle t\rangle\) is guaranteed to be resolved by the monitor at network node \(\mu(s)\), responsible for computing \(s\). Our definition considers two components: the delay in resolving all local instant variables that \(s\langle t\rangle\) may depend on, and the resolution of remote instant variables, which also involves message delays. We use the concept of _remote moment to resolve_, denoted \(\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle)\), for the instant at which all remote values that \(s\langle t\rangle\) directly requires have arrived (which is \(t\) if all values arrive before \(t\)).
\[\mathit{MTR}(s\langle t\rangle)\ \stackrel{\mathrm{def}}{=}\ \max\big(\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle),\ \{\mathit{MTR}(r\langle t+w\rangle)\mid s\xrightarrow[\mathit{loc}]{w}r\}\big)\]

\[\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle)\ \stackrel{\mathrm{def}}{=}\ \max\big(t,\ \{\mathit{arr}_{r\to s}(\mathit{MTR}(r\langle t+w\rangle))\mid s\xrightarrow[\mathit{rem}]{w}r\ \text{and}\ t+w\geq 0\}\big)\]
Note that this is well-defined for every well-formed specification: the evaluation graph is acyclic, and the equation for \(s\langle t\rangle\) only depends on variables lower in the evaluation graph.
Example 4: Consider Example 2 with streams \(y\) and \(acc\) at network node 1 and streams _reset_ and _root_ computed at network node 2. Then, we can substitute in the equations to obtain \(\mathit{MTR}(\mathit{root}\langle 1\rangle)\):

\[\begin{array}{rl}
\mathit{MTR}(\mathit{root}\langle 1\rangle) &=\max(\mathit{MTR}(\mathit{reset}\langle 1\rangle),\ \mathit{MTR}_{\mathit{rem}}(\mathit{root}\langle 1\rangle))\\
&=\max(1,\ \max(1,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(\mathit{MTR}(\mathit{acc}\langle 1\rangle))))\\
&=\max(1,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(\max(\mathit{MTR}(y\langle 1\rangle),\ \mathit{MTR}_{\mathit{rem}}(\mathit{acc}\langle 1\rangle))))\\
&=\max(1,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(\max(1,\ \mathit{arr}_{\mathit{root}\to\mathit{acc}}(\mathit{MTR}(\mathit{root}\langle 0\rangle)))))\\
&=\max(1,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(\max(1,\ \mathit{arr}_{\mathit{root}\to\mathit{acc}}(\max(0,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(\mathit{MTR}(\mathit{acc}\langle 0\rangle)))))))\\
&=\max(1,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(\max(1,\ \mathit{arr}_{\mathit{root}\to\mathit{acc}}(\max(0,\ \mathit{arr}_{\mathit{acc}\to\mathit{root}}(0))))))
\end{array}\]

where \(\mathit{MTR}(\mathit{acc}\langle 0\rangle)=0\) because \(y\langle 0\rangle\) is read at instant \(0\) and \(\mathit{root}\langle-1\rangle\) resolves immediately to its default value.
The instant variable \(\mathit{root}\langle 1\rangle\) is guaranteed to be resolved when the response for the instant variable \(\mathit{acc}\langle 1\rangle\) arrives--that is the outermost \(\mathit{arr}_{\mathit{acc}\to\mathit{root}}\) part. This response can only be produced once \(\mathit{root}\langle 0\rangle\) has arrived at the node of \(\mathit{acc}\) (the \(\mathit{arr}_{\mathit{root}\to\mathit{acc}}\) part), which in turn waits for the response for \(\mathit{acc}\langle 0\rangle\), the innermost part \(\mathit{arr}_{\mathit{acc}\to\mathit{root}}(0)\). Note that we do not need to account for \(\mathit{root}\langle-1\rangle\) since it is resolved instantaneously to its default value. Likewise, the inputs are also resolved instantaneously and do not add any delay when obtaining the value of the \(\mathit{MTR}\).
Note that \(\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle)\) only considers those remote instant variables for which \(t+w\geq 0\), because otherwise the default value will be used at the moment of instantiating \(s\langle t\rangle\). In the equation for \(\mathit{MTR}(s\langle t\rangle)\) we take the base case \(\mathit{MTR}(s\langle t\rangle)=0\) when \(t<0\) because, again, the default value of the offset expression is used instead, which is known immediately. It is easy to see that the first equation is equivalent to:
\[\mathit{MTR}(s\langle t\rangle)\;\stackrel{{\mathrm{def}}}{{=}}\;\max(\{\mathit{MTR}_{\mathit{rem}}(r\langle t+w\rangle)\mid s\xrightarrow[loc]{w}\!{}^{*}\,r\})\]
We are now ready to prove that these definitions indeed capture the time at which \(s\langle t\rangle\) is resolved.
Theorem 4.1: _Let \(\varphi\) be a specification and \(\mu\) a network placement, let \(\sigma_{I}\) be the input trace and arr a network behavior. Every \(s\langle t\rangle\) is resolved at \(\mathit{MTR}(s\langle t\rangle)\) or before._
Proof: The proof proceeds by induction on the evaluation graph \(G_{\varphi,M}\) induced by \(\varphi\) and the length of \(\sigma_{I}\).
* **Base case**: inputs and instant variables \(s\langle t\rangle\) that do not depend on any other instant variables. These are the nodes of the _EG_ that do not have any outgoing edges. Since \(s\langle t\rangle\) is instantiated at \(t\), the value is resolved exactly at \(t\), either by reading a sensor or by instantiating to a default value. Also, \(\mathit{MTR}(s\langle t\rangle)=\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle)=t\).
* **General case**. Let \(s\langle t\rangle\) be an arbitrary instant variable and assume, by inductive hypothesis, that the theorem holds for all instant variables lower in the _EG_ than \(s\langle t\rangle\). At time \(\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle)\), all instant variables \(r\langle t+w\rangle\) from remote nodes that \(s\langle t\rangle\) depends on have arrived, because \(r\langle t+w\rangle\) is resolved at \(\mathit{MTR}(r\langle t+w\rangle)\) by the induction hypothesis. Similarly, all local elements that \(s\langle t\rangle\) depends on are below it in the dependency graph, so the induction hypothesis applies to them as well. Therefore, at time \[\max(\mathit{MTR}_{\mathit{rem}}(s\langle t\rangle),\{\mathit{MTR}(r\langle t+w\rangle)\mid s\xrightarrow[loc]{w}r\})\] or before, all elements that \(s\langle t\rangle\) depends on will be known and \(s\langle t\rangle\) will be resolved.
This finishes the proof.
The following corollary follows from the fact that nothing that happens after an instant variable has been resolved (either further values in \(\sigma_{I}\) or the network behavior) can affect the value computed. Therefore, the value and time at which \(s\langle t\rangle\) is computed does not depend on the future after \(\mathit{MTR}(s\langle t\rangle)\).
Corollary 1: _For all \(s\langle t\rangle\) there is a \(t^{\prime}\) such that \(s\langle t\rangle\) only depends on \(\sigma_{I}\) and arr up to \(t^{\prime}\)._
The _MTR_ for an instant variable depends on the delay of the network \(\mathit{arr}_{\rightarrow}\) between the network nodes that cooperate in order to compute that instant variable. Therefore we cannot guarantee a bound on _MTR_ if those delays can be arbitrarily long, so we cannot bound the memory usage. Consequently, monitoring is not trace-length independent in a general Time Asynchronous Network.
Next, we study how different conditions on the network behavior (concerning the delays on links) affect the _MTR_, establishing memory bounds and regaining trace-length independent monitoring under those conditions.
#### Instantiation to Synchronous Time
We assume first the synchronous model of computation, which is a particular case of the timed-asynchronous model where all message delays between two monitors take exactly the same amount of time throughout the trace. We use \(\mathit{dist}_{r\to s}\) to represent the delay that every message takes from \(\mu(r)\) to \(\mu(s)\), independently of the time instant at which the message is sent. Therefore \(\mathit{arr}_{r\to s}(t)=t+\mathit{dist}_{r\to s}\). This allows us to simplify \(\mathit{MTR}_{\mathit{rem}}\) for synchronous networks as follows:
\[\mathit{MTR}_{\mathit{rem}}^{sync}(s\langle t\rangle)\,\stackrel{{ \mathrm{def}}}{{=}}\,\max(t,M(s\langle t\rangle))\]
where \(M(s\langle t\rangle)=\)
\[\{\mathit{MTR}^{sync}(r\langle t+w\rangle)+\mathit{dist}_{r\to s}\mid s\xrightarrow[rem]{w}r,\ t+w\geq 0\}\]
Recall that the _time to resolve_ is the time interval between the moment at which a variable is instantiated and the instant at which it is resolved, that is, \(\mathit{TTR}(s\langle t\rangle)=\mathit{MTR}(s\langle t\rangle)-t\). In the synchronous case we obtain:
\[\begin{split}\mathit{TTR}^{sync}(s\langle t\rangle)&=\mathit{MTR}^{sync}(s\langle t\rangle)-t\\ &=\max(\{\mathit{MTR}^{sync}_{rem}(r\langle t+w\rangle)\mid s\xrightarrow[loc]{w}r\})-t\\ &=\max(t,M(s\langle t\rangle))-t\\ &=\max(0,\{\mathit{TTR}^{sync}(r\langle t+w\rangle)+\mathit{dist}_{r\to s}\mid s\xrightarrow[rem]{w}r\})\end{split}\]
Note that the value that determines the result is the \(\mathit{TTR}^{sync}\) of the slowest remote dependency, which includes both its resolve time and the time its message needs to traverse the network. Additionally, we can easily show by induction on the dependency graph that for every stream variable \(s\) there is a constant \(k\) such that \(\mathit{TTR}^{sync}(s\langle t\rangle)\leq k\); that is, \(s\) always takes at most \(k\) instants to be resolved. It follows that every decentralized efficiently monitorable specification can be monitored in constant space in every local monitor; in other words, synchronous decentralized monitoring of decentralized efficiently monitorable specifications is trace-length independent.
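As a quick hedged check of this constant-\(k\) claim, the generic `arr` of the earlier sketch specializes to a constant per-link delay (reusing `make_mtr` and the toy `deps` from above; the wiring is ours, not the paper's):

```python
dist = {("mid", "out"): 3}  # dist_{mid->out}: the constant link delay
mtr_sync, _ = make_mtr(deps, lambda r, s, t: t + dist[(r, s)])
# TTR^sync is the same constant k = 3 at every instant
assert all(mtr_sync("out", t) - t == 3 for t in range(50))
```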
#### Timed Asynchronous with AETERNALLY Bounded Delays
We now assume that there is a global upper bound on the delay of every message, which we call **aeternally** bounded delays. Formally, this assumption states that there is a \(d\) such that for every pair of streams \(r,s\) and for every time \(t\), \(\mathit{arr}_{r\to s}(t)\leq t+d\). Substituting the upper-bound value \(d\) in the equations for \(\mathit{MTR}\), we obtain a constant upper bound on the \(\mathit{MTR}\):
\[\mathit{MTR}^{\mathbf{g}}_{rem}(s\langle t\rangle)\leq\max(t,M(s\langle t \rangle))\]
where
\[M(s\langle t\rangle)=\{\mathit{MTR}^{\mathbf{g}}(r\langle t+w\rangle)+d\mid s\xrightarrow[rem]{w}r,\ t+w\geq 0\}\]
Note that in some cases \(s\langle t\rangle\) may be resolved before \(\mathit{MTR}^{\mathbf{g}}(s\langle t\rangle)\), because \(d\) is only an upper bound. In this case we can still bound the memory that every node needs to perform the monitoring process, but most of the time less memory will be necessary. We can see an example of an **aeternal** bound in Figure 3.
#### Timed Asynchronous with TEMPORARILY Bounded Delays
We now take a closer look at the equations to obtain a better bound on the time to resolve a given instant variable \(s\langle t\rangle\), without assuming an upper bound on all messages in the history of the computation, but only on the messages that can influence \(s\langle t\rangle\). The main idea to bound \(\mathit{MTR}(s\langle t\rangle)\) is to consider the time interval during which the messages that are relevant to compute \(s\langle t\rangle\) are sent.
We first define an auxiliary notion. We say that a stream variable \(r\) is a _direct remote influence_ on \(s\) with delay \(w\), and we write \(s\xrightarrow[drem]{w}r\), whenever there is a path \(s\xrightarrow[loc]{w_{1}}s_{1}\xrightarrow[loc]{w_{2}}s_{2}\ldots\xrightarrow[loc]{w_{k}}s_{k}\xrightarrow[rem]{w_{k+1}}r\) such that:
* no two nodes \(s_{i}\) and \(s_{j}\) are repeated (if \(i\neq j\) then \(s_{i}\neq s_{j}\)), and
* \(w=w_{1}+\ldots+w_{k}+w_{k+1}\).
Note that \(s\xrightarrow[drem]{w}r\) means that \(s\langle t\rangle\) may be influenced by the remote variable \(r\langle t+w\rangle\). We define the window of interest for \(s\langle t\rangle\) as:
\[\mathit{win}(s\langle t\rangle)=[\min S,\max S]\quad\text{where}\quad S=\{t\}\cup\{\mathit{MTR}_{\mathit{rem}}(r\langle t+w\rangle)\mid s\xrightarrow[drem]{w}r\text{ and }t+w>0\}\]
Note that \(S\) is the set of instants at which remote instant variables that influence \(s\langle t\rangle\) are sent.
Example 5: Consider the specification in example 2. Looking at the evaluation graph in Figure 2, we observe that the window of interest of any instant variable includes those of its dependencies in the evaluation graph. Therefore, its window of interest includes the minimum time for the earliest dependency to be resolved and the maximum time for the last dependency to be resolved. In this example, the window for \(root\langle 1\rangle\) includes the windows for \(acc\langle 1\rangle\), \(root\langle 0\rangle\) and \(acc\langle 0\rangle\), plus the time required for the response messages to travel from source to destination. Note that inputs do not affect the _MTR_.
Therefore \(\mathit{win}(s\langle t\rangle)\) contains the instants at which the remote information relevant to \(s\langle t\rangle\) is sent. This window always ends at most at \(\mathit{MTR}(s\langle t\rangle)\).
Figure 3: TTR, **aeternal** and **temporary** bounds for a peak delay
We then define the worst message sent to \(s\) for the computation of \(s\langle t\rangle\) as:
\[d_{\mathit{worst}}(s\langle t\rangle)=\max\{\mathit{arr}_{r\to s}(u)-u\mid s\xrightarrow[rem]{w}r\text{ and }u\in\mathit{win}(s\langle t\rangle)\}\]
Note that \(d_{\mathit{worst}}\) is still an over-approximation of the delay of the messages sent in order to compute \(s\langle t\rangle\), but this bound considers only those messages and only looks at a bounded interval of time. Since all the values that influence \(s\langle t\rangle\) are sent within \(\mathit{win}(s\langle t\rangle)\), we can bound \(\mathit{MTR}(s\langle t\rangle)\) as follows:
\[\mathit{MTR}^{\mathbf{temp}}_{\mathit{rem}}(s\langle t\rangle)\leq\max(t,M(s \langle t\rangle))\]
where \(M(s\langle t\rangle)=\)
\[\{\mathit{MTR}(r\langle t+w\rangle)+d_{\mathit{worst}}(s\langle t\rangle)\mid s\xrightarrow[rem]{w}r,\ t+w\geq 0\}.\]
We have finally arrived at the desired outcome: a finite window of time that contains the sending and receiving of the relevant messages for the computation of the instant variable. This implies that only a finite number of network delays affect the resolution of any instant variable \(s\langle t\rangle\). As we can always find the maximum delay in the window, we can upper bound the time that it will take for any instant variable to be resolved, and we know how long these instant variables are stored in \(U_{n}\) and \(R_{n}\). In turn, this allows us to determine when certain instant variables are no longer needed and when they can be pruned, releasing the used memory.
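The window-based analysis can be sketched in the same style as before (a hedged over-approximation, ours; `drem[s]` lists the \((w,r)\) pairs of direct remote influences, instants are assumed to be integers, and `arr`, `mtr` and `mtr_rem` are as in the earlier sketch):

```python
def window(s, t, drem, mtr_rem):
    # win(s<t>): the span of instants at which influencing values are sent
    S = [t] + [mtr_rem(r, t + w) for w, r in drem.get(s, ()) if t + w > 0]
    return min(S), max(S)

def d_worst(s, t, drem, arr, mtr_rem):
    # worst delay of any message sent towards s inside win(s<t>)
    lo, hi = window(s, t, drem, mtr_rem)
    return max((arr(r, s, u) - u
                for w, r in drem.get(s, ())
                for u in range(lo, hi + 1)), default=0)

def mtr_temp_bound(s, t, drem, arr, mtr, mtr_rem):
    # the temporarily-bounded over-approximation of MTR(s<t>)
    dw = d_worst(s, t, drem, arr, mtr_rem)
    return max([t] + [mtr(r, t + w) + dw
                      for w, r in drem.get(s, ()) if t + w >= 0])
```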
Figure 3 shows a peak network behavior and how the TTR adapts accordingly. We can observe the difference between the **temporary** and **aeternal** bounds: the **aeternal** bound is high and constant throughout the execution, while the **temporary** bound drops when the network has small delays.
#### Pruning the Resolved Storage \(R_{n}\).
We are finally ready to prune \(R_{n}\), because we now know when every instant variable will be resolved.
Corollary 2: _Every unresolved instant variable \(s\langle t\rangle\) in \(U_{n}\) is resolved at most at \(\mathit{MTR}(s\langle t\rangle)\)._
As soon as \(\mathit{MTR}(s\langle t\rangle)\) is reached (or before), the value of \(s\langle t\rangle\) will be known at the local monitor of \(\mu(s)\), and it will be sent to those remote monitors where it is needed. After this moment \(s\langle t\rangle\) can be pruned from \(U_{n}\). With this mechanism, we can ensure that every instant variable stays in memory (\(U_{n}\) or \(R_{n}\)) for a bounded amount of time. Corollary 2 implies that decentralized efficiently monitorable specifications in timed asynchronous networks can be monitored with bounded resources when there is a certain bound on the network behavior, be it synchronous, an **aeternal** or a **temporary** bound. This memory bound depends only linearly on the size of the specification and the delays between the nodes of the network. This result can also be interpreted from the opposite perspective: given a fixed amount of memory available, we could calculate the maximum delays in the network that would allow the monitoring to be performed correctly.
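A minimal sketch (names are ours) of the pruning rule that this corollary licenses:

```python
def prune_unresolved(U_n, now, mtr):
    # By Corollary 2, every (s, t) with MTR(s<t>) <= now is already
    # resolved (and its responses sent), so only later entries remain.
    return {(s, t) for (s, t) in U_n if mtr(s, t) > now}
```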
## 4 Empirical Evaluation
We have implemented our solution in a prototype tool tadLola, written in the Go programming language (available at [http://github.com/imdea-software/dLola](http://github.com/imdea-software/dLola)). We now describe:
* (1) an empirical study of the capabilities of tadLola in different scenarios with real data extracted from four different realistic public datasets.
* (2) the effect of the network behavior--in terms of delays--on memory usage and on the time to resolve outputs.
Our experimental setup intends to empirically determine the behavior of the asynchronous network and how failures affect the time to resolve of the streams.
### Datasets and Network Failures.
We have used four different datasets for this empirical evaluation, namely: SmartPolitech [13], Tomsk Heating [14], Orange4Home [2] and Context [15]. All datasets are related to smart buildings, except for Context, which is about Industry 4.0. The first two are concerned with building climate control and use sensors in different rooms or buildings, respectively. The Orange4Home dataset focuses on activity recognition: a tenant can move freely in an apartment, and the goal is to infer the activity performed. Lastly, Context is a dataset from a smart factory featuring a new class of failures, namely contextual failures, which arise when there is no specific sensor or collected data that directly signals the error.
Figure 4: TTR analysis of Tadlola for different network behaviors
Instead, the presence of the error and its underlying cause must be inferred from contextual knowledge. For each dataset we created a synthetic specification that showcases the functionality of our tool. We also injected synthetic delays to model network congestion and failures; the four delay behaviors are listed below, followed by a sketch of how such profiles can be generated.
* _constant_ behavior is modeled as a global constant delay between each pair of monitors, so every message takes exactly a fixed amount of time to reach the destination network node. This corresponds to the network behavior observable in synchronous monitoring.
* _constantPeak_ consists of a constant delay with a single high delay of the network modeling a network failure and recovery, so all messages get delayed until the problem is solved and then the network starts to recover gradually, until normal operation is reached again.
* _Normal_ behavior follows a normal distribution of the delays given an average delay.
* _normalPeak_ is similar to the constantPeak but with a baseline of the normal behavior.
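A hedged Python sketch (ours, not the tadLola injection code) of how such delay profiles can be generated, where \(\mathit{arr}_{r\to s}(t)=t+\mathit{delay}(t)\):

```python
import random

def constant(base):
    # every message takes exactly `base` ticks
    return lambda t: base

def constant_peak(base, peak, t_fail, recovery):
    # a failure at t_fail raises the delay to `peak`,
    # followed by a gradual (linear) recovery back to `base`
    def delay(t):
        if t < t_fail:
            return base
        return max(base, peak - (t - t_fail) * (peak - base) / recovery)
    return delay

def normal(mean, sigma=1.0):
    # delays drawn from a normal distribution around `mean`
    return lambda t: max(0.0, random.gauss(mean, sigma))

def normal_peak(mean, peak, t_fail, recovery, sigma=1.0):
    # normal baseline plus the transient failure component
    spike = constant_peak(0.0, peak, t_fail, recovery)
    base = normal(mean, sigma)
    return lambda t: base(t) + spike(t)
```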
Note that all these behaviors are both **aeternally** and **temporarily** bounded, since for all of them we can find an upper bound for the whole trace as well as a bound per window of interest of each instant variable.
Figure 4 shows the minimum, median and maximum TTR to resolve streams under these network behaviors.
We can observe an example of the delays observed under these behaviors in Figure 5. The system under observation is sampled periodically, obtaining the input traces for each of the measured variables. Thus, given the length of the trace and the sampling period, we can compute the system time that gets monitored throughout the experiment. For example, a trace of length \(200k\) with a sample period of 30 seconds corresponds to monitoring a system for \(200{,}000\times 30\,\mathrm{s}=6\times 10^{6}\,\mathrm{s}\approx 69.4\) days \(\approx 2.31\) months. For some of the experiments the traces of real data available in the datasets were not sufficiently long, so we extended those traces by repeating the samples as much as needed to reach the desired trace length. Also, some of the traces were based on events instead of periodically sensing a variable, so we interpolated them whenever needed in order to use a common clock tick for all events.
### Hypothesis
For the empirical evaluation of this paper we intend to evaluate the following hypothesis:
* (H1) Our timed asynchronous algorithm behaves no worse than the synchronous algorithm from [1] when the network presents a synchronous behavior.
* (H2) Synchronous SRV can simulate the monitoring of a timed asynchronous network with a software layer that provides the illusion of synchronicity, but at a very high cost in delays and memory usage.
* (H3) Our theoretical results of Section 3.6 hold for the execution of the experiments.
* (H4) Local memory of the root monitor is bounded, resulting in a trace-length independent monitor, as our theoretical results predict.
* (H5) Our algorithm scales in terms of the number of monitors and network usage. We expect that memory will increase linearly with network usage but will remain constant when increasing the number of local monitors. Here, by local monitor we refer to a non-empty set of streams that are computed at the same network node.
* (H6) We can benefit from using redundant specifications and redundant topologies (exploiting simplifiers) to reduce _TTRs_ by avoiding delays of slow or faulty links.
### Empirical Results
In order to validate hypothesis (H1) we built the following experiments:
Figure 5: Examples of network behaviors
* SmartPolitechDistr: we detect fire hazards by analysing the levels of temperature, CO\({}_{2}\) and humidity in the air in different rooms in university buildings. We use a quantitative robust specification.
* tomskHeating: we check that the heating system is behaving as expected (extracted from the data). Again, this is a quantitative robust specification.
* orange4Home: we detect fire hazards by analysing the activities performed by the tenant in the apartment.
* contextAct: we detect fire hazards by analysing the levels of temperature, CO\({}_{2}\) and humidity in the air in different rooms in a smart apartment. This is also a quantitative specification.
Figure 4 shows metrics of the delay of the root of the specification for the different datasets analyzed under different network behaviors. This shows empirically that TADSRV subsumes DSRV with no additional loss of performance, as expected from our theoretical proofs. Therefore (H1) holds. All these different network behaviors also show that TADSRV is more general than DSRV, as we expected.
For the validation of (H2) we built an experiment with a specification that obtains both the maximum and the sum of the inputs. We placed this in the topology shown in Figure 6.
We looked for the maximum delay present in the normalPeak traces that we have and used that duration as the global delay between each pair of monitors in the synchronous scenario. We measured both settings: simulating synchronicity and executing the timed asynchronous algorithm. The results are shown in Figure 7. The figure shows that we can emulate TADSRV with DSRV, but at a high cost in memory usage (\(+200\%\) over the worst instant) and incurring delays of _worst delay \(\times\) depth of topology_, which in this case is \(558\) instants. This corresponds to an increase of around \(30\) times the delay of the timed asynchronous algorithm. Therefore, (H2) holds as well. This result makes it clear that it is not feasible in practice to use DSRV in a timed asynchronous scenario (even with the layer that simulates synchrony), where the contribution of this work applies naturally with much better performance.
Also, we can see that the TTRs obtained empirically are below or equal to our estimated bounds calculated a priori with the equations described in Section 3.6. Hence, (H3) holds.
Figure 6: Monitor topology of the experiment for (H2)
For the validation of (H4)--studying the scalability in terms of trace length--we used the smartPolitechDistr dataset and ran it with a trace of \(200k\) instants with the normalPeak behavior. In the extract shown below we compute both a Boolean and a quantitative stream to look for temperature uprisings.
```
define bool temp_up eval =
    temp > 1.1*tempini and temp <= 1.6*tempini
define num temp_up_q eval =
    if temp <= 1.1*tempini then 0 else
    if temp > 1.6*tempini then 1 else
    (temp - 1.1*tempini)/(1.6*tempini - 1.1*tempini)
define bool temp_spike eval =
    temp > 1.6*tempini
define num temp_spike_q eval =
    if temp <= 1.6*tempini then 0 else
    if temp > 2*tempini then 1 else
    (temp - 1.6*tempini)/(2*tempini - 1.6*tempini)
```
Figure 8 shows that the memory used in the root monitor of this experiment remains bounded. The spikes in memory correspond to higher delays in the network links among nodes. These delays force monitors to keep records in their memory until the messages that they need arrive, at which point the monitors can resolve streams and prune their memories. This result suggests that the algorithm, with a decentralized efficiently monitorable specification, can behave in a trace-length independent fashion, validating hypothesis (H4).
Figure 9 shows that the memory usage of a single monitor does not depend on the number of other monitors in the network, but on the maximum depth of the part of its specification that travels the network. In this experiment the depth of the specification deployed in the network was kept constant (5) while we changed the number of monitors in a binary tree topology (preserving the depth in one branch). The intuition is that the variable that affects memory usage is not how many monitors we have, but the number of network nodes and links that affect the monitoring performance: the more links, the higher the probability that a failure in the network (modelled as a delay) affects the run. These results support hypothesis (H5).
#### Redundancy and Delays.
In this subsection we take a closer look at hypothesis (H6), so we build the topology and the specification to minimize the _TTRs_ of the instant variables. We seek to benefit from simplifiers to minimize the effect of network delays on the messages required to compute the instant variables. Thus, we intend to exploit the messages that go through the fastest path in the network from the nodes that read the inputs to the nodes that compute the root of the specification. Intermediate results are generated faster in the least congested deployment, and messages travel through the least-weight path (in terms of accumulated delays) between the inputs and the root of the specification, yielding a minimum TTR for the instant variables. This improvement can be achieved because intermediate results from slower monitors will not be needed, due to the use of simplifiers, and therefore the engine will not wait for them to produce a final result at the root monitor. We build the following fragment of the specification for the data in smartPolitech, where we make the streams C3_fire_risk_q and C3_fire_risk_q_red redundant copies of each other and deploy them in different monitors so that they are affected by different delays. We use a normal delay for the whole network but introduce a failure in the form of a peak in the delays between the monitors connected to monitor 3. This makes the path through monitor 2 faster. We can observe in Figure 10 how the delay of obtaining the value for the root of the specification takes the best delay possible. Since we use an OR to take advantage of the simplifiers, there is a gain in the best-case verdict (outcome true), but in the worst-case verdict (false) the redundant solution gains no speed, as the engine needs to wait for all the values to calculate the OR.
```
00 {
    define bool C3_alarm eval =
        (C3_fire_risk or C3_fire_risk_red) and
        (C3_fire_risk_q > 0.5 or C3_fire_risk_q_red > 0.5)
}
02 {
    define bool C3_fire_risk_red eval =
        AND(C3_temp_spike, C3_co2_spike, C3_humid_down)
    define num C3_fire_risk_q_red eval =
        AVG(C3_temp_spike_q, C3_co2_spike_q, C3_humid_down_q)
}
03 {
    define bool C3_fire_risk eval =
        AND(C3_temp_spike, C3_co2_spike, C3_humid_down)
    define num C3_fire_risk_q eval =
        AVG(C3_temp_spike_q, C3_co2_spike_q, C3_humid_down_q)
}
```
Figure 10 shows the difference between using the redundant specification with redundant topology and not using any redundancy. Even though a general study of exploiting redundant paths in the network is out of the scope of this paper, this case study illustrates how redundant deployments can improve decentralized monitoring.
## 5 Lazy Algorithm
We now introduce a variant of Algorithm 2 where some of the streams are not sent unless their values are requested. This is beneficial in cases where their value is rarely needed. We call these _lazy streams_.
The modified algorithm needs a new type of message: the _request_ message. We also use the term _response_ message for the messages containing the value of an instant variable.
* **Response** messages: (\(\mathbf{resp},s\langle k\rangle,c,n_{s},n_{d}\)) where \(s\langle k\rangle\) is an instant variable, \(c\) is a constant of the same datatype as \(s\langle k\rangle\), \(n_{s}\) is the source node and \(n_{d}\) is the destination node of the message.
* **Request** messages: (\(\mathbf{req},s\langle k\rangle,n_{s},n_{d}\)) where \(s\langle k\rangle\) is an instant variable, \(n_{s}\) is the source node and \(n_{d}\) is the destination node of the message.
Again, if \(\mathit{msg}=(\mathbf{req},s\langle k\rangle,n_{s},n_{d})\), then \(\mathit{msg.src}=n_{s}\), \(\mathit{msg.dst}=n_{d}\), \(\mathit{msg.type}=\mathbf{req}\) and \(\mathit{msg.stream}=s\langle k\rangle\). A response message has the same fields; the only difference is that we additionally have \(\mathit{msg.val}=c\).
Each stream variable \(v\) can be assigned one of the following two _communication strategies_ to denote whether an instant value \(v\langle k\rangle\) is automatically communicated to all potentially interested nodes, or whether its value is provided upon request only. Let \(v\) and \(u\) be two stream variables such that \(v\) appears in the equation of \(u\) and let \(n_{v}=\mu(v)\) and \(n_{u}=\mu(u)\).
* **Eager communication**: the node \(n_{v}\) informs \(n_{u}\) of every value \(v\langle k\rangle=c\) that it resolves by sending a message (\(\mathbf{resp},v\langle k\rangle,c,n_{v},n_{u}\)). This is what we have used previously in the paper.
* **Lazy communication**: node \(n_{u}\) requests from \(n_{v}\) the value of \(v\langle k\rangle\) (in case \(n_{u}\) needs it to resolve \(u\langle k^{\prime}\rangle\) for some \(k^{\prime}\)) by sending a message (\(\mathbf{req},v\langle k\rangle,n_{u},n_{v}\)). When \(n_{v}\) receives this message and resolves \(v\langle k\rangle\) to a value \(c\), \(n_{v}\) will respond with (\(\mathbf{resp},v\langle k\rangle,c,n_{v},n_{u}\)).
Each stream variable can be independently declared as eager or lazy. We use two predicates \(\mathit{eager}(u)\) and \(\mathit{lazy}(u)\) (defined as \(\neg\mathit{eager}(u)\)) to indicate the communication strategy of stream variable \(u\). Note that the lazy strategy involves two messages and the eager strategy only one, but eager sends every instant variable it resolves, while lazy only sends those that are requested. If the values are almost always needed, eager is preferable, while if values are less frequently required, lazy is preferred. We now add the communication strategy to the definition of the decentralized SRV problem: a decentralized SRV problem \(\langle\varphi,\mathcal{T},\mu,\mathit{eager}\rangle\) is now characterized by a specification \(\varphi\), a topology \(\mathcal{T}\), a stream assignment \(\mu\) and a communication strategy for every stream variable.
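As a small illustration (a hedged sketch with assumed field names, mirroring the tuples above), the two message types and the per-variable strategy flag can be modeled as:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Resp:            # (resp, s<k>, c, n_s, n_d)
    stream: str        # s
    k: int             # the instant
    val: Any           # the resolved value c
    src: str           # n_s
    dst: str           # n_d

@dataclass(frozen=True)
class Req:             # (req, s<k>, n_s, n_d)
    stream: str
    k: int
    src: str
    dst: str

# illustrative strategy assignment: eager variables push responses
# unprompted, lazy ones answer only when a Req arrives
eager = {"reset": True, "acc": False}

def is_lazy(u: str) -> bool:
    return not eager[u]
```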
### Lazy DSRV Algorithm for Timed Asynchronous Networks
We extend our local monitor to \(\langle Q_{n},U_{n},R_{n},P_{n},W_{n}\rangle\) adding the following two storages:
* **Pending** requests \(P_{n}\), where \(n\) records instant variables that have been requested from \(n\) by other monitors but that \(n\) has not resolved yet.
* **Waiting** for responses \(W_{n}\), where \(n\) records instant variables that \(n\) has requested from other nodes but has received no response yet.
The storage \(W_{n}\) is used to prevent \(n\) from requesting the same value twice while waiting for the first request to be responded. An entry in \(W_{n}\) is removed when the value is received, since the value will be subsequently fetched directly from
\(R_{n}\) and not requested through the network. The storage \(P_{n}\) is used to record that a value that \(n\) is responsible for has been requested, but \(n\) does not yet know the answer. When \(n\) computes the answer, it sends the corresponding response message and removes the entry from \(P_{n}\). Finally, request messages are generated for unresolved lazy instant variables and inserted in the queues of the corresponding neighbors.
More concretely, every node \(n\) executes the procedure Monitor shown in Algorithm 2, which invokes Step at every clock tick, until the input terminates or ad infinitum otherwise. Procedure Finalize resolves the pending values to their defaults if the trace ends. Procedure Step now executes some modified procedures and additional steps (a condensed sketch follows the list below):
1. **Process Messages**: Line 26 annotates requests in \(P_{n}\), which will be later resolved and responded. Lines 27-28 handle response arrivals, adding them to \(R_{n}\) and removing them from \(W_{n}\).
2. **Send Responses:** Lines 33-36 deal with pending lazy variables. If a pending instant variable is now resolved, the response message is sent and the entry is removed from \(P_{n}\).
3. **Send new Requests:** Lines 37-41 send new request messages for all lazy instant streams that are now needed.
4. **Prune:** Lines 42-44 prune the set \(R\) of information that is no longer needed. See Section 5.4.
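A condensed sketch of these additions (all helper names such as `pop_arrived`, `needed_lazy_values` and `owner` are our assumptions, not the actual Algorithm 2 code; `Req` and `Resp` are as in the earlier sketch):

```python
def lazy_step(node, now):
    # 1. process arrived messages
    for msg in node.Q.pop_arrived(now):
        if isinstance(msg, Req):
            node.P.add((msg.stream, msg.k, msg.src))   # remember who asked
        else:                                          # a Resp
            node.R[(msg.stream, msg.k)] = msg.val
            node.W.discard((msg.stream, msg.k))
    # 2. answer pending requests that are now resolved
    for (s, k, dst) in list(node.P):
        if (s, k) in node.R:
            node.send(Resp(s, k, node.R[(s, k)], node.name, dst))
            node.P.remove((s, k, dst))
    # 3. request newly needed lazy values (at most once each)
    for (s, k) in node.needed_lazy_values():
        if (s, k) not in node.W and (s, k) not in node.R:
            node.send(Req(s, k, node.name, node.owner(s)))
            node.W.add((s, k))
    # 4. prune resolved entries that are no longer needed (Section 5.4)
    node.prune_resolved(now)
```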
### Formal Correctness
We now show that our solution is again correct, proving that the output computed is the same as in the denotational semantics, and that every output is eventually computed.
Theorem 5.1: _All of the following hold for every instant variable \(u\langle k\rangle\):_
1. _The value of_ \(u\langle k\rangle\) _is eventually resolved._
2. _The value of_ \(u\langle k\rangle\) _is_ \(c\) _if and only if_ \((u\langle k\rangle,c)\in R\) _at some instant._
3. _If_ \(\mbox{eager}(u)\) _then a response message for_ \(u\langle k\rangle\) _is eventually sent._
4. _If_ \(\mbox{lazy}(u)\) _then all request messages for_ \(u\langle k\rangle\) _are eventually responded._
Proof: The proof proceeds by induction on the evaluation graph, showing (1)-(4) simultaneously in the induction step, as these depend on each other (in the previous inductive steps). Let \(M\) be the length of a computation and \(\sigma_{I}\) be an input of length \(M\). Note that (1) to (4) above are all statements about instant variables \(u\langle k\rangle\), which are the nodes of the evaluation graph \(G_{\varphi,M}\). We proceed by induction on \(G_{\varphi,M}\) (which is acyclic because \(D_{\varphi}\) is well-formed).
* **Base case**: The base cases are vertices of the evaluation graph that have no outgoing edges, which are either instant variables that correspond to inputs or to defined variables whose instant equation does not contain other instant variables. Statement (1) follows immediately for inputs because at instant \(k\), \(s\langle k\rangle\) is read at node \(\mu(s)\). For output equations that do not have variables, or whose variables have offsets that once instantiated become negative or greater than \(M\), the value of their leaves is determined either immediately or at \(M\) when the offset is calculated. At this point, the value computed is inserted in \(R\), so (2) also holds at \(\mu(u)\). Note that (2) also holds for other nodes because a response message contains \(u\langle k\rangle=c\) if and only if \((u\langle k\rangle,c)\in R_{n}\), where \(\mu(u)=n\). The response message is inserted exactly at the point the variable is resolved, so (1) implies (3). Finally, (4) also holds at the time of receiving the request message or resolving \(u\langle k\rangle\) (whichever happens later).
* **Inductive case**: Consider an arbitrary \(u\langle k\rangle\) in the evaluation graph \(G_{\varphi,M}\) and let \(u_{1}\langle k_{1}\rangle\ldots u_{l}\langle k_{l}\rangle\) be the instant variables that \(u\langle k\rangle\) depends on. These are nodes in \(G_{\varphi,M}\) that are lower than \(u\langle k\rangle\), so the inductive hypothesis applies and (1)-(4) hold for these instant variables. Let \(n=\mu(u)\). At instant \(k\), \(u\langle k\rangle\) is instantiated and inserted in \(U_{n}\). At the end of cycle \(k\), lazy variables among \(u_{1}\langle k_{1}\rangle\ldots u_{l}\langle k_{l}\rangle\) are requested. By the induction hypothesis, at some instant all these requests are responded, by (1) and (4). Similarly, the values of all eager variables are calculated and sent as well, by (1) and (3). At the latest time of arrival, the equation for \(u\langle k\rangle\) has no more variables and it is evaluated to a value, so (1) holds and (2) holds for \(u\langle k\rangle\) at \(n\). At this point, if \(\mathit{eager}(u)\) then the response message is sent (so (3) holds for \(u\langle k\rangle\)), and if \(\mathit{lazy}(u)\) then all requests (previously received in \(P_{n}\) or future requests) are answered, so (4) also holds.
This finishes the proof.
### Resources for Lazy
Analyzing the lazy case requires some modifications. In timed asynchronous networks we need to introduce a new kind of message, _confirmations_, which are only used to inform the receiving node that some instant variables are no longer needed so they can be pruned. This new message has the following form:
* **Confirmation** messages: \((\mathbf{confirm},s\langle k\rangle,n_{s},n_{d})\) where \(s\langle k\rangle\) is an instant variable, \(n_{s}\) is the source node and \(n_{d}\) is the destination node of the message.
This message is interpreted as: the source node \(n_{s}\) has resolved the instant variables of \(s\) up to \(k\). This information allows the destination node to conclude that instant variables required at the remote node for instants that have already been resolved are no longer necessary. We change \(\mathit{MTR}_{\mathit{rem}}\) to reflect that the response is emitted when the request arrives or when the remote instant variable gets resolved, whichever happens later.
\[\mathit{MTR}_{\mathit{rem}}^{\mathit{lazy}}(s\langle t\rangle)\;\stackrel{{\mathrm{def}}}{{=}}\;\max(t,M(s\langle t\rangle))\]
where
\[M(s\langle t\rangle)=\{\mathit{arr}_{r\to s}(t^{\prime})\mid s\xrightarrow[rem]{w}r\text{ and }t+w\geq 0\}\]
\[t^{\prime}=\max(\mathit{arr}_{s\to r}(t),\mathit{MTR}^{\mathit{lazy}}(r \langle t+w\rangle))\]
Here \(\mathit{arr}_{s\to r}(t)\) is the arrival time of the request, which is sent at time \(t\), when the instant variable \(s\langle t\rangle\) gets instantiated and stored in \(U\). \(\mathit{MTR}^{\mathit{lazy}}(r\langle t+w\rangle)\) is when the remote instant variable gets resolved. Finally, \(\mathit{arr}_{r\to s}(t^{\prime})\) is the moment at which the response for the lazy instant stream variable arrives at the requesting node.
#### Instantiation to Synchronous
Again, we first consider the case where the delay of any link is a constant throughout the execution. This constant simplifies the equations, but we now need to consider that, for each instant variable, a request and afterwards a response are needed in order to get the remote value. Again, \(\mathit{dist}_{r\to s}\) is used to represent the delay that every message takes from \(\mu(r)\) to \(\mu(s)\), independently of the time instant at which the message is sent. We use this knowledge to simplify \(\mathit{MTR}^{\mathit{lazy}}_{\mathit{rem}}\) for synchronous networks as follows:
\[\mathit{MTR}^{\mathit{sync\ lazy}}_{\mathit{rem}}(s\langle t\rangle)\stackrel{{\mathrm{def}}}{{=}}\max(t,\{t^{\prime}\mid s\xrightarrow[rem]{w}r,\ t+w\geq 0\})\]
where
\[t^{\prime}=\mathit{dist}_{r\to s}+\max(t+\mathit{dist}_{s\to r},\ \mathit{MTR}^{\mathit{sync\ lazy}}(r\langle t+w\rangle))\]
Here the value of the remote instant variable arrives when the response message arrives (\(\mathit{dist}_{r\to s}\) after it is emitted), and the response is emitted either when the request arrives (\(t+\mathit{dist}_{s\to r}\)) or when the remote value is resolved (\(\mathit{MTR}^{\mathit{sync\ lazy}}(r\langle t+w\rangle)\)), whichever occurs later.
#### AETERNALLY Bounded Delays
Now we consider the case where we know a maximum delay in the network that upper bounds all the other delays in the network behavior. Substituting the upper-bound value \(d\) in the equations for \(\mathit{MTR}\), we obtain a constant upper bound on the \(\mathit{MTR}\) (although this value can be a gross over-approximation):
\[\mathit{MTR}^{\mathbf{aeternal\ lazy}}_{rem}(s\langle t\rangle)\stackrel{{ \mathrm{def}}}{{=}}\max(t,M(s\langle t\rangle))\]
where
\[M(s\langle t\rangle)=\{d+\max(t+d,\,t^{\prime})\mid s\xrightarrow[rem]{w}r,\ t+w\geq 0\}\]
and
\[t^{\prime}=\mathit{MTR}^{\mathbf{aeternal\ lazy}}(r\langle t+w\rangle)\]
#### TEMPORARILY Bounded Delays

Finally, we do not assume an **aeternal** bound on the delays of the network. Instead, we only look at what affects the computation of the instant variables: the other instant variables they depend on and the network delays that affect the messages needed to compute them. We again take into account the window \(\mathit{win}(s\langle t\rangle)\), which contains all the instants at which values that influence \(s\langle t\rangle\) are resolved and sent. This window always ends at most at \(\mathit{MTR}(s\langle t\rangle)\). Inside this window we can find the worst delay of a message sent for the computation of the instant variable, \(d_{\mathit{worst}}(s\langle t\rangle)\). Then we can bound \(\mathit{MTR}(s\langle t\rangle)\) as follows for the lazy case:
\[\mathit{MTR}_{\mathit{rem}}^{\mathbf{temp}\ \mathit{lazy}}(s\langle t\rangle)\ \stackrel{{\mathrm{def}}}{{=}}\ \max(t,\{t^{\prime}\mid s\xrightarrow[rem]{w}r,\ t+w\geq 0\})\]
where
\[t^{\prime}=d_{\mathit{worst}}(s\langle t\rangle)+\max(t+d_{\mathit{worst}}(s\langle t\rangle),\ \mathit{MTR}^{\mathbf{temp}\ \mathit{lazy}}(r\langle t+w\rangle))\]
Here, \(d_{\mathit{worst}}(s\langle t\rangle)\) is the delay of the worst message affecting the computation of \(s\langle t\rangle\), and the window used for obtaining this value considers both request and response messages. We use this value to bound both the request and the response. First, we obtain the latest instant at which either the request arrives (\(t+d_{\mathit{worst}}(s\langle t\rangle)\)) or the remote dependency is resolved (\(\mathit{MTR}^{\mathbf{temp}\ \mathit{lazy}}(r\langle t+w\rangle)\)), and then we add the time for the response message to arrive with the value, \(d_{\mathit{worst}}(s\langle t\rangle)\). This yields the moment at which the remote dependency is guaranteed to be resolved and its value to have arrived at the requesting network node.
### Pruning the Resolved Storage.
We are finally ready to prune \(R_{n}\) in the lazy algorithm case, because we now know when every instant variable will be resolved.
Theorem 5.3: _Every unresolved instant variable \(s\langle t\rangle\) that is lazy in \(U_{n}\) is resolved at most at \(\mathit{MTR}^{\mathit{lazy}}(s\langle t\rangle)\)._
As soon as this moment is reached, and considering that the network delays are bounded, a confirmation message will be sent to those monitors where lazy instant variables that are dependencies of the resolved instant variable are computed, and this message will arrive in bounded time. Then the receiving node can prune the corresponding instant variables from its memory. We now need to add to this theorem \(\mathit{tconf}\), the time for the confirmation message to arrive:
Every unresolved \(s\langle k\rangle=e\) in \(U_{n}\) is pruned at most at \(\max(\{\mathit{MTR}^{\mathit{lazy}}(u\langle k-w\rangle)+\mathit{tconf}_{u}\})\), where \(u\langle k-w\rangle\) is a remote instant variable that contains \(s\langle k\rangle\) in its equation and \(\mathit{tconf}_{u}\) is the time for the confirmation message, sent at time \(\mathit{MTR}^{\mathit{lazy}}(u\langle k-w\rangle)\), to travel from \(\mu(u)\) to \(\mu(s)\). This message arrives at its destination in bounded time and the instant variable gets pruned, because at that point the receiving node knows that the instant variable is no longer needed and can prune it even if it is not resolved yet. With this mechanism, we can ensure that every instant variable will be in memory (\(U_{n}\), \(R_{n}\)) for a bounded amount of time. This implies that decentralized efficiently monitorable specifications in timed asynchronous networks can be monitored with bounded resources. The bound depends only linearly on the size of the specification, the diameter of the network and the delays among the nodes of the network.
## 6 Conclusions and Future Work
We have studied the problem of decentralized stream runtime verification for timed asynchronous networks, where messages can take an arbitrary amount of time to arrive. This problem starts from a specification and a network. Our solution consists of a placement of output streams and an online local monitoring algorithm that runs on every node. We prove the termination and correctness of the proposed algorithm. We have captured specifications and network assumptions (synchronous, **aeternal** and **temporary** bounds) that guarantee that the monitoring can be performed with constant memory, independently of the length of the trace, showing that our solution subsumes the previous synchronous algorithm. We report on an empirical evaluation of our prototype tool tadLola. Our empirical evaluation shows that placement is crucial for performance and suggests that in most cases careful placement can lead to bounded costs and delays. As future work we plan to extend our solution to disaster scenarios where some links may present a delay ad infinitum, so that no message can traverse that link. Our intuition is that we could use redundancy in the specifications and the network topology to provide resilience against faulty network links, while also providing better performance than just replicating the timed asynchronous algorithm and running the copies in parallel, isolated from each other.
|
2301.01348 | DADAgger: Disagreement-Augmented Dataset Aggregation | DAgger is an imitation algorithm that aggregates its original datasets by
querying the expert on all samples encountered during training. In order to
reduce the number of samples queried, we propose a modification to DAgger,
known as DADAgger, which only queries the expert for state-action pairs that
are out of distribution (OOD). OOD states are identified by measuring the
variance of the action predictions of an ensemble of models on each state,
which we simulate using dropout. Testing on the Car Racing and Half Cheetah
environments achieves comparable performance to DAgger but with reduced expert
queries, and better performance than a random sampling baseline. We also show
that our algorithm may be used to build efficient, well-balanced training
datasets by running with no initial data and only querying the expert to
resolve uncertainty. | Akash Haridas, Karim Hamadeh, Samarendra Chandan Bindu Dash | 2023-01-03T20:44:14Z | [http://arxiv.org/abs/2301.01348v1](http://arxiv.org/abs/2301.01348v1) | # DADAgger: Disagreement-Augmented Dataset Aggregation
###### Abstract
DAgger is an imitation algorithm that aggregates its original datasets by querying the expert on all samples encountered during training. In order to reduce the number of samples queried, we propose a modification to DAgger, known as DADagger, which only queries the expert for state-action pairs that are out of distribution (OOD). OOD states are identified by measuring the variance of the action predictions of an ensemble of models on each state, which we simulate using dropout. Testing on the Car Racing and Half Cheetah environments achieves comparable performance to DAgger but with reduced expert queries, and better performance than a random sampling baseline. We also show that our algorithm may be used to build efficient, well-balanced training datasets by running with no initial data and only querying the expert to resolve uncertainty.
Imitation Learning, Ensemble
## 1 Problem Description
The DAgger algorithm [1] addresses the covariate shift encountered in behavioural cloning by querying the expert again on all the states that the agent encounters during a test run, then aggregating these new samples with the existing dataset.
However, querying the expert can be costly in some situations. A drawback of DAgger is that it queries the expert at all collected observations without considering which actions are most valuable towards training a good policy. The DRIL algorithm [2] handles the issue of covariate shift by adding a regularizer term to the optimisation of the policy, implicitly favouring policies that choose to enter states which minimise the variance of an ensemble of policies on these states. The rationale is that the variance should be low for states that are in-distribution, so this should motivate the learner policy to stay within this distribution, assuming it is sufficient to achieve proper performance. A drawback of this method is that it may not be able to learn from incomplete datasets, and cannot explore further.
We therefore design an algorithm that does not needlessly query the expert, while also selectively querying it to learn about states which genuinely have not been encountered. This combines the flexibility and exploration of DAgger, while taking advantage of the ability of an ensemble to identify out-of-distribution states as done in Brantley et al. [2].
## 2 Related Work
Prior work in this area mainly consists of estimating uncertainty in the predictions of deep neural networks, as well as improving the sample efficiency of the DAgger algorithm.
Blundell et al. [3] learn a probability distribution on the weights of a neural network, which allows the network to make more reasonable predictions about unseen data. Gal and Ghahramani [4] estimate model uncertainty from the dropout layers in deep neural networks. A common way to estimate model uncertainty is with an ensemble of learners: the uncertainty is said to be high when the predictions of the ensemble have high variance. Wen et al. [5] reduces the computational cost of ensembles by sharing some weights across the networks. Brantley et al. [2] includes the ensemble variance in an additional loss term which implicitly trains the imitation policy to avoid states for which it has not seen demonstrations.
Several works have attempted to improve the sampling efficiency of the DAgger imitation learning algorithm. Kim and Pineau [6] propose a query metric that determines how close an encountered state is to the distribution of data gathered so far, and query the expert only if this metric is above some threshold. They use Maximum Mean Discrepancy as the metric, which was originally used to determine whether two sets of data are from different probability distributions. Zhang and Cho [7] achieve a similar goal by employing a separate network, called the safety policy, to predict whether the current policy being trained is likely to make an error, and subsequently use this information to determine which visited states need to be included in the next DAgger iteration. Similarly, Laskey et al. [8] use a Support Vector Machine classifier to determine the risk level of a particular state. Menda et al. [9] use ensembling to estimate model uncertainty to build a probabilistic variant of DAgger, where they use the model's uncertainty over its predicted actions to determine when to query the expert. They use this to enforce model safety, rather than to select when to augment the dataset in training as in our work.
## 3 Methodology
We propose a new algorithm called DADAgger (Disagreement-Augmented Dataset Aggregation algorithm), which is based on DAgger, and borrows from [2] the principle that out-of-distribution states induce higher disagreement among an ensemble of policies. We use this to modify DAgger to only query the expert on states with high disagreement (a particular percentage of visited states), in an attempt to gain maximal information and resolve the most uncertainty in a smaller number of queries. This improves on DAgger by reducing the number of expert calls, while still being able to handle incomplete datasets unlike the entirely offline DRIL. Importantly, this method does not query the expert to determine when the learner has encountered a state that it cannot handle, but rather relies entirely on the disagreement of an ensemble of learners.
Disagreement can be defined here as the variance of the outputs, if the outputs admit a distance between themselves (which is in keeping with the implementation in [2]), or possibly the entropy, if they are discrete and non-transitive. Since training an ensemble of neural networks is prohibitively expensive, in order to efficiently obtain the variance we instead use dropout layers to approximate the uncertainty of our estimators. By passing the same input \(M\) times through a single network with dropout enabled after every layer, we obtain a Gaussian Process sampling theoretically different networks [10]. DADAgger is formally stated below. Tunable hyperparameters include the percentage of states to save, denoted by \(\alpha\), according to their variance, and \(M\), the number of policies present in our ensemble. Since we are using dropout to simulate the ensemble, the choice of \(M\) does not significantly increase training time, as only one network has its weights updated. The network is called \(M\) times in order to determine the uncertainty associated with a particular action. Smaller values of \(\alpha\) query a smaller number of samples, but in turn make convergence more difficult due to less augmentation of the dataset. Note that during evaluation of the policy (i.e., at test time), we sample the network once to predict actions, rather than taking an ensemble mean, to ensure a fair comparison to DAgger.
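As a concrete illustration, the dropout-based disagreement estimate amounts to \(M\) stochastic forward passes with dropout kept active. The following is a minimal PyTorch sketch (ours, not the exact experiment code; the policy's input and output shapes are assumptions):

```python
import torch

def action_variance(policy, states, M=10):
    # policy: a torch.nn.Module with dropout layers; states: (N, ...) batch
    policy.train()  # keep dropout sampling active at inference time
    with torch.no_grad():
        preds = torch.stack([policy(states) for _ in range(M)])  # (M, N, A)
    # variance across the M passes, averaged over action dims: shape (N,)
    return preds.var(dim=0).mean(dim=-1)
```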
Figure 1 shows a simplified schematic of different models agreeing over an area of the number line \(\mathcal{R}\) centered on the origin (representing the in-distribution states), but disagreeing in areas beyond it (the out-of-distribution that is not being explicitly learned).
```
Initialise \(D\leftarrow\emptyset\)
Initialise \(\hat{\pi}_{1,1},\hat{\pi}_{1,2}\ldots\hat{\pi}_{1,M}\)
for \(i=1\) to \(i=n\) do
    Sample T-step trajectories using \(\hat{\pi}_{i,1}\)
    Compute variance of \(\hat{\pi}_{i,1}(s),\hat{\pi}_{i,2}(s)\ldots\hat{\pi}_{i,M}(s)\) for all visited \(s\)
    Keep the \(\alpha\) percent with the highest variance
    Get dataset \(D_{i}=\{s,\pi^{*}(s)\}\) for all filtered \(s\)
    Aggregate datasets \(D\leftarrow D_{i}\cup D\)
    Train classifiers \(\hat{\pi}_{i+1,1},\hat{\pi}_{i+1,2}\ldots\hat{\pi}_{i+1,M}\) on \(D\)
end for
return best \(\hat{\pi}_{i,1}\) on validation
```
**Algorithm 1** DADAgger Algorithm with Ensemble
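A hedged sketch of the sampling step of Algorithm 1 (reusing `action_variance` and the `torch` import from above; the expert interface, tensor shapes and all names are our assumptions):

```python
def select_and_aggregate(policy, expert, visited, D, alpha=0.2, M=10):
    # rank visited states by ensemble disagreement and keep the top alpha
    var = action_variance(policy, visited, M=M)
    n_keep = max(1, int(alpha * len(visited)))
    idx = torch.topk(var, n_keep).indices
    for s in visited[idx]:
        D.append((s, expert(s)))  # the expert is queried only on these states
    return D
```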
## 4 Experiments
### Baselines and Parameter Tuning
We implement our method on the Car Racing environment available through OpenAI Gym [11]. For the policy network we use a CNN with dropout layers in the fully-connected section, which takes in an image of the current state and predicts a steering action.
Our first experiment consists of varying the hyperparameters \(M\) and \(\alpha\) in order to gauge their effect on the convergence of DADAgger. Note that there are no "optimal parameters": increasing \(M\) and \(\alpha\) should hypothetically reduce the number of iterations required to reach convergence (to be verified in testing); however, these increases come at the cost of training time for more models in the ensemble and more expert queries. We compare the performance and convergence properties of our algorithm to DAgger (obtained by setting both \(\alpha\) and \(M\) to 1), as well as to an agent which queries the expert at observed states selected at random (obtained by setting \(M\) to 1 for varying \(\alpha\)), to test whether the quality of sampling plays a role in achieving convergence. We run the algorithm for 10 iterations with 5 different random seeds, and measure the proportion of runs that converged. We also measure the distribution of actions accumulated in the dataset, to obtain a quantitative assessment of the efficiency of the algorithm in exploring the action space. The reported standard deviation is that of a \(p=0.5\) binomial estimator, which is guaranteed to be an upper bound on the error.
### Half Cheetah
After implementing our algorithm on a simple environment, we test it on the Half Cheetah MuJoCo [12] environment. Compared to Car Racing, Half Cheetah involves more complex dynamics, and a higher-dimensional, continuous-valued action space. The observation is a state vector rather than an image, therefore we swap out the CNN for an MLP (with dropout layers), keeping the rest of the algorithm the same.
Figure 1: Simplified diagram of different functions agreeing over an area of \(\mathcal{R}\) and disagreeing beyond.
We repeat each of the values of \(M\) and \(\alpha\) used in the previous experiment, and measure the reward obtained by each policy at the end of each DADAgger iteration. We compare the performance and convergence properties with base DAgger, as well as with an agent querying the expert at observed states selected at random.
### Dataset Construction
Our final experiment exploits the efficiency of our choice of expert queries to construct a compact and representative dataset that allows rapid, one-shot training. Instead of using an initial training dataset as in previous experiments, we run DADAgger with an empty initial dataset, and consequently create the dataset over several iterations, querying the expert only to resolve uncertainty. Our hypothesis is that the final dataset upon convergence will be fairly small, due to the inclusion of only points which the models disagree on at different stages of training. We use \(\alpha=0.1\) and \(50\) iterations, which is what we found necessary to obtain proper convergence using this technique. We then measure the distribution of actions in our final dataset and compare it to the datasets produced by DAgger and DADAgger when supplied with an initial dataset.
## 5 Results and Discussion
### Baselines and Parameter Tuning
The results are shown in Table 1. For \(\alpha=0.4\), all trained models converge in all runs, including random querying. We hypothesise that this is due to the relatively small difference between consecutive frames, which means that skipping a frame, on average, will not be overly detrimental to training. With a lower \(\alpha\) of \(0.2\), our algorithm, regardless of the choice of \(M\), maintains 100% convergence, while random querying only succeeds 40% of the time. Our algorithm thus outperforms the random baseline on this metric (\(p<0.003\)). Finally, on the lowest setting of \(\alpha\), random sampling fails to converge entirely, while our algorithm only sometimes converges. This convergence often happens in the 9th or 10th iteration, so it is entirely possible that convergence would follow in the runs that had not yet converged; however, we restrict testing to 10 iterations to keep the comparison controlled across all methods. Note that the difference between \(M=25\) and \(M=10\) is not statistically significant.
To verify the mechanism by which our improvement is obtained, we plot the histogram of saved actions in different settings in Figure 3. DAgger is seen to heavily sample straight-line actions (close to the middle). Our algorithms substantially increase the proportion of sampled turning actions with respect to the size of the dataset, while also using fewer samples overall. Interestingly, the starting dataset is more balanced than any of the ones resulting from DAgger / DADAgger iterations, yet fails to converge, meaning the quality of a dataset is not entirely determined by how balanced it is.
### Half Cheetah
Figure 2 shows that all variants of DADAgger converge to a similar performance as base DAgger on Half Cheetah, which demonstrates that our uncertainty measurement and sampling technique is also capable of handling a multi-dimensional, continuous action space. It achieves this with as few as 10% of the queries to the expert (\(\alpha=0.1\)) compared to base DAgger, again suggesting that it samples only the most important datapoints to learn a policy and resolve uncertainty. Interestingly, neither of the methods (incl. base DAgger) matched the expert's performance, possibly due to the increased complexity of the environment.
### Dataset Construction
Convergence is obtained at roughly iteration 50 (hence our decision to stop). We are more interested in examining the nature of the constructed dataset than the performance of the policy. The final dataset consists of 746 samples, which is markedly less than even the initial dataset we were using for DAgger and DADAgger, which contained 1139 samples. It is thus far smaller than the datasets generated by the previous experiments, regardless of the choice of \(\alpha\). This is notable because a policy trained on this dataset is able to converge with 0 augmentation (i.e., 1 DAgger iteration, or one-shot), indicating a high quality of samples. We once again look to the histogram (Figure 2(e)), and find it to be the most balanced of all datasets, especially with respect to extreme sharp turns, which were relatively undersampled even in DADAgger experiments. DADAgger thus acts as a way to create small and efficient datasets for one-shot training.
## 6 Limitations and Conclusion
While DADAgger improves on the sampling efficiency of DAgger, a few weaknesses arise. Primarily, the agent is still required to engage with the environment as many times as DAgger, meaning there is no improvement in the number of episodes required to converge, should those be expensive. This is particularly true in the case where the dataset was initialised to zero, as many more episodes were required to achieve convergence. Another weakness we occasionally observed in our tests was high-confidence predictions of wrong outcomes, which can lead to skipping essential sampling and repeating the same error. It is therefore essential for our algorithm that a certain degree of independence exists between predictors. Any bias will create agreement in out-of-distribution states that may lead to the algorithm getting stuck in a cycle of not sampling, failing because of that error, and not sampling the requisite states again. Indeed, 30 of 50 iterations in the dataset construction experiment were failures of relatively similar form, which could hint at the existence of this problem in sparse datasets.
In conclusion, this paper introduces DADAgger, a novel method to increase querying efficiency for the DAgger algorithm without compromising convergence. Our method matches DAgger performance on both the Car Racing and Half Cheetah environments. Furthermore, our technique is efficient in its execution due to the use of dropout, and could also find use as a dataset generator due to its selectivity. Further avenues of exploration include alternate methods of assessing uncertainty that could allow the use of an absolute metric rather than the fraction \(\alpha\), and would not require several network evaluations per action. Another uncovered area is treating the sample space without regard to the action, and attempting to devise a measure of how out-of-distribution a particular sample is that is completely independent of the predictor which generates the action.
\begin{table}
\begin{tabular}{l l l l} \hline \hline
\(M\) / \(\alpha\) & 0.1 & 0.2 & 0.4 \\ \hline
10 & \(0\pm 20\) & \(100\pm 20\) & \(100\pm 20\) \\
25 & \(60\pm 20\) & \(100\pm 20\) & \(100\pm 20\) \\
Random & \(40\pm 20\) & \(40\pm 20\) & \(100\pm 20\) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Percentage of trials that led to a successful lap on the Car Racing environment.
Figure 2: All variants of DADAgger converge to a similar performance as base DAgger (\(M=1\) and \(\alpha=1\)) on the Half Cheetah environment, with significantly fewer queries to the expert. However, none of the methods achieves a performance comparable to the expert.
### Contributions
KH was responsible for the algorithm design and Experiment 3. Experiment 1 was split among all members. AH and SCBD conducted Experiment 2. SCBD and AH jointly wrote/adapted and optimized most of the code used to perform all experiments.
|
2303.17353 | Lateral quantum confinement effect on monolayer high-Tc superconductors | Despite decades of research in spatially confined superconducting systems to
understand the modification of superconductivity from reduced length scales,
the investigation of the quantum confinement effect on high-temperature
superconductors remains an outstanding challenge. Here, we report scanning
tunneling spectroscopy measurements on laterally confined FeSe monolayers on
SrTiO3 substrates, which are formed by epitaxially growing FeSe films with a
coverage less than one unit cell. Comparing to the uniform regions of FeSe
monolayers, the peninsula regions at the monolayer boundary exhibit reduced
Fermi energy and undiminished superconductivity, leading to a putative
crossover from a Bardeen-Cooper-Schrieffer state to a Bose-Einstein condensate
state. In isolated FeSe monolayer islands, superconductivity is shown to exist
in samples of smaller volume in contrast to conventional superconductors, while
the validity of Anderson's criterion remains fulfilled. Our work reveals
lateral quantum confinement effects in unconventional superconductors, to
enrich the understanding of high-temperature superconductivity in
low-dimensional systems. | Guanyang He, Yu Li, Yuxuan Lei, Andreas Kreisel, Brian M. Andersen, Jian Wang | 2023-03-30T13:11:07Z | http://arxiv.org/abs/2303.17353v2 | # Lateral quantum confinement effect on monolayer high-T\({}_{\text{C}}\) superconductors
###### Abstract
A fundamental problem in superconductivity research is to understand how superconductivity is modified at reduced length scales. Despite decades of research on spatially confined superconducting systems, a systematic investigation of the lateral quantum confinement effect on high-temperature superconductors is still rare and much desired. Here, we report scanning tunneling spectroscopy and quasiparticle interference measurements on laterally confined FeSe monolayers on SrTiO\({}_{3}\) substrates, which are formed by epitaxially growing FeSe films with a coverage of less than one unit cell. Compared to the uniform regions of FeSe monolayers, the peninsula regions at the monolayer boundary show a reduced Fermi energy and undiminished superconductivity, leading to a crossover from the Bardeen-Cooper-Schrieffer state to the Bose-Einstein condensate state. In isolated FeSe monolayer islands, superconductivity is shown to persist in samples of smaller volume than in conventional superconductors, while the validity of the Anderson criterion is still verified. Our work reveals the lateral quantum confinement effect on superconductors at the two-dimensional limit, and may shed light on the understanding of unconventional superconductivity in low-dimensional systems.
+
Footnote †: preprint: APS/123-QED
When the size of a superconductor falls below or becomes comparable to certain characteristic length scales, such as the superconducting (SC) coherence length \(\xi\), quantum confinement can modify SC properties and generate novel phenomena. Decades of active research have been fueled by the quantum confinement effect on superconductivity, and especially enriched by two-dimensional (2D) superconductors with thicknesses less than \(\xi\). Nowadays, 2D superconductors have become an important platform to study quantum phase transitions and other important quantum behaviors [1-9]. As the area of 2D superconductors decreases, lateral quantum confinement can also affect the local electronic structure and superconductivity.
Despite extensive research on the quantum confinement effect, lateral quantum confinement in 2D high critical temperature (T\({}_{\text{C}}\)) superconductors remains unexplored. Considering the small \(\xi\) of high-T\({}_{\text{C}}\) superconductors [10], it is challenging to approach these characteristic lengths laterally. In this work, to investigate such quantum confinement, we grew FeSe monolayers on the SrTiO\({}_{3}\)(001) (STO) substrate with a coverage of less than one unit cell, to generate laterally confined monolayer FeSe films and islands. The 0.55 nm thickness of FeSe monolayers is below their \(\xi\sim\)1.2 nm. Two kinds of laterally confined FeSe monolayers are studied. One is FeSe peninsulas near the film boundaries with confined widths around ten nanometers, where the SC gap remains intact but the Fermi energy (\(E_{F}\)) drops significantly. Accordingly, the SC pairing strength \(\Delta/E_{F}\) varies from 0.28 for uniform FeSe monolayers to 0.73 for peninsulas, indicating a crossover from the Bardeen-Cooper-Schrieffer (BCS) regime to Bose-Einstein condensation (BEC). The other is isolated monolayer FeSe islands with areas around tens of square nanometers, where a coexistence of the Coulomb gap and the SC gap is observed. Decreasing the size of FeSe
island, the SC gap is reduced and eventually disappears. This is consistent with the Anderson criterion that, for sufficiently small superconductors, the electronic energy level spacing can destabilize the SC order [11].
Our monolayer FeSe was epitaxially grown by molecular beam epitaxy (see Materials and methods for details). Figure 1a shows the scanning tunneling microscopy (STM) topographic image of the monolayer FeSe grown on the STO terraces, and the inset shows the top Se atom arrangement of FeSe. In Fig. 1b, a symmetrized dI/dV tunneling spectrum is obtained by scanning tunneling spectroscopy (STS) in the central uniform area of FeSe, exhibiting prominent coherence peaks and U-shaped SC gaps at 4.2 K (see Supplementary for the data processing of STS spectra). The two-band Dynes model incorporating anisotropic SC gap functions fits the dI/dV spectrum well with gap values \(\Delta_{1}=10.3\) meV and \(\Delta_{2}=18.1\) meV [12].
Quasiparticle interference (QPI) scattered by defects appears as periodic modulation in the differential conductance mapping \(g(\mathbf{r},E)=dI/dV(\mathbf{r},eV)\), which is a powerful tool to analyze electronic states [13-15]. First, the differential conductance mappings are measured on FeSe monolayers. After Fourier transform (FT), the modulus of the FT-QPI \(|\text{g}(\mathbf{q},\text{E})|\), as the scattering intensity, exhibits three types of ring-like scattering structures, denoted as \(\mathbf{q_{1}}\), \(\mathbf{q_{2}}\) and \(\mathbf{q_{3}}\) rings in Fig. 1c; \(\mathbf{q}\) is the momentum transfer between the scattering momenta \(\mathbf{k}\) on the Fermi pockets (\(\mathbf{q}=\mathbf{k_{1}}-\mathbf{k_{2}}\)). Due to the tunneling-matrix-element effect and the orbital structures on the Fermi pockets, the \(\mathbf{q_{1}}\) ring has the highest scattering intensity and the most complete ring shape [12, 16], and is thus used to extract the band dispersion. Since elastic scattering from the opposite side of the pocket (\(-\mathbf{k}\leftrightarrow+\mathbf{k}\)) normally results in the highest intensity at \(\mathbf{q}=2\mathbf{k}\)[14], the band dispersion is approximately \(E(\mathbf{k})=E(\mathbf{q}/2)\). The energy evolution of the \(\mathbf{q_{1}}\) ring is resolved from maximum-intensity points in a \(|\mathbf{q}|\)-\(E\) plane, corresponding to the dark points on both sides of the \(|\mathbf{q}|=0\) position in Fig. 1d. Here, as the \(\mathbf{q_{1}}\) ring corresponds to scattering within an electron pocket, the ring size shrinks as the energy decreases. The red dashed line in Fig. 1d is a parabolic fitting of the band dispersion (dark points), and the distance between the bottom of the parabola and the Fermi level is \(E_{F}\)[15, 17]. To present a sharper image, the contributions to the scattering intensity inside the parabola from other electronic pockets are manually subtracted (see Supplementary for QPI data analysis). With more QPI measurements in other uniform areas (Fig. S1 and Table S1), an averaged \(E_{F}=65.3\) meV is obtained, close to results in the literature [17].
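To make the fitting procedure concrete, the sketch below reproduces the parabolic extraction of \(E_{F}\) on synthetic maxima; all numbers are illustrative placeholders, not the measured data.

```python
import numpy as np

# Hypothetical maximum-intensity points (|q|, E) of the q1 ring; the
# dispersion coefficient and noise level are synthesized for illustration.
q = np.linspace(1.0, 7.0, 20)                              # nm^-1
E = 9.0 * (q / 2.0) ** 2 - 65.3                            # meV
E = E + np.random.default_rng(1).normal(0.0, 1.0, q.size)  # noise

# With E(k) = E(q/2), fit a line E = c * (q/2)^2 + E0; the intercept E0
# is minus the Fermi energy (parabola bottom relative to the Fermi level).
c, E0 = np.polyfit((q / 2.0) ** 2, E, 1)
print(f"E_F ~ {-E0:.1f} meV")                              # ~65 meV here
```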
Figure 2a shows a peninsula structure at the boundary of the FeSe monolayer, spatially confined in width as indicated by the double arrow; the dark area is the STO substrate. In the dashed box region, QPI measurements are performed again to extract the band dispersion (Fig. 2b), and the dI/dV tunneling spectrum is measured (Fig. 2c). Here, the Dynes fitting of the SC gaps gives \(\Delta_{1}=9.3\) meV and \(\Delta_{2}=19.5\) meV, which barely change from the values of the uniform area (\(\Delta_{1}=10.2\) meV, \(\Delta_{2}=18.1\) meV). Note that a previous study indicates that the gap value fluctuation in monolayer FeSe/STO is 1.5 meV [18]. Meanwhile, using the same method of \(E_{F}\) determination as in the previous paragraph, \(E_{F}\) decreases by nearly 40 meV in this peninsula compared to the uniform area (from 65.3 meV to 26.8 meV). Figure 2d-f shows the results for another peninsula, which corroborate the significant reduction of \(E_{F}\) and the undiminished SC gaps. Similar results from many more peninsulas are shown in Supplementary Figs. S3 and S4, and a summary of all \(\Delta\) and \(E_{F}\) values is given in Fig. 2g and Table S2. For each point in Fig. 2g, the horizontal error bar represents the standard deviation of the parabolic fitting for \(E_{F}\), and the vertical error bar represents the standard deviation of the Dynes fitting for the SC gaps. Moreover, we have measured temperature-dependent tunneling spectra from 4.2 K to 42 K, and the BCS fitting for \(\Delta_{1,2}(T)\) (see Supplementary for the BCS fitting) gives \(T_{c}\) (Fig. S2). The obtained \(T_{c}\) is nearly invariant between the uniform area and the peninsulas.
Figure 1: **STM topography, tunneling spectrum and band dispersion of the uniform area of FeSe monolayer.** (a) Large-scale STM topographic image of the FeSe monolayer grown on STO terraces, where darker colors mean lower heights. The inset is an atomically resolved image showing the topmost Se lattice of FeSe. (b) A typical symmetrized dI/dV spectrum measured at 4.2 K in the uniform area of the FeSe monolayer. The red curve is the theoretical Dynes fitting. (c) Typical FT-QPI pattern of \(|\)g(\(\mathbf{q}\), \(E=20\) meV)\(|\) after symmetrization, showing three types of ring-like structures denoted as \(\mathbf{q_{1}}\), \(\mathbf{q_{2}}\) and \(\mathbf{q_{3}}\) rings. (d) Intensity plot in the \(|\mathbf{q}|\)-\(E\) plane, where \(|\mathbf{q}|=0\) corresponds to the center of (c). The intensity at each energy is plotted versus the radial coordinate near the \(\mathbf{q_{1}}\) ring position after azimuthal averaging, showing the energy evolution of the \(\mathbf{q_{1}}\) ring. a\({}_{0}=0.38\) nm is the lattice constant of FeSe. The red dashed line is a parabolic fitting of the maximum-intensity points at each energy to find \(E_{F}\).
To better understand the undiminished superconductivity mentioned above, recall that \(T_{c}\) of superconducting Al, In, and Sn particles is observed to increase with decreasing sample size before reaching the Anderson limit [19, 20]. This is ascribed to phonon softening caused by the small size, specifically by structural changes in the lattice or a greater contribution from surface phonons [11, 21, 22]. Phonon softening results in a larger electron-phonon coupling constant \(\lambda\propto 1/\langle\omega^{2}\rangle\), \(\omega\) being the phonon frequency. According to the McMillan equation, \(T_{c}\) will be enhanced by the enlarged \(\lambda\) in the form of \(T_{C}=\frac{T_{D}}{1.45}\exp\left[-\frac{1.04(1+\lambda)}{\lambda-\mu^{*}(1+0.62\lambda)}\right]\), where \(\mu^{*}\) is the Coulomb pseudopotential parameter and \(T_{D}\) the Debye temperature [23]. For monolayer FeSe/STO, it is believed that electron doping from the STO substrate increases the carrier density of FeSe and boosts the superconductivity [24-26]. Compared to the uniform area of monolayer FeSe, the reduced \(E_{F}\) observed in the FeSe peninsula indicates less electron doping and should lead to smaller \(\Delta\) and \(T_{c}\). However, \(\Delta\) and \(T_{c}\) remain basically unchanged regardless of the reduced \(E_{F}\). To account for this, we suggest that in FeSe peninsulas, phonon softening due to the small size might counteract the effect of the reduced \(E_{F}\).
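As a numerical illustration of this trend, the sketch below evaluates the McMillan equation for a few coupling strengths; the values of \(T_{D}\), \(\lambda\), and \(\mu^{*}\) are placeholders chosen only to show the monotonic enhancement, not fitted parameters of FeSe/STO.

```python
import numpy as np

def mcmillan_tc(T_D, lam, mu_star=0.1):
    """McMillan Tc (K) from the Debye temperature T_D (K), the
    electron-phonon coupling lam, and the Coulomb pseudopotential
    mu_star (0.1 is a commonly assumed value, not from this work)."""
    return (T_D / 1.45) * np.exp(-1.04 * (1.0 + lam)
                                 / (lam - mu_star * (1.0 + 0.62 * lam)))

# Phonon softening raises lam ~ 1/<w^2>, and hence Tc; T_D = 400 K is
# only a placeholder used to display the trend.
for lam in (0.6, 0.8, 1.0):
    print(f"lambda = {lam}: Tc ~ {mcmillan_tc(400.0, lam):.1f} K")
```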
From another perspective, the band dispersions obtained by parabolic fittings in uniform areas and peninsulas of FeSe monolayers suggest a basically unchanged Fermi momentum \(|\mathbf{k}_{F}|\) for different \(E_{F}\) (see specific values of \(|\mathbf{k}_{F}|\) in Fig. S5a), corresponding to a Fermi wavelength around 1.1 nm. The effective mass \(m^{*}=\frac{\hbar^{2}|\mathbf{k}_{F}|^{2}}{2E_{F}}\) thus increases for decreasing \(E_{F}\) (see specific values of \(m^{*}\) in Fig. S5b). Note that in monolayer FeSe, the electron band at \(E_{F}\) mainly consists of the \(d\) orbitals of Fe. The decrease of \(E_{F}\) thus indicates a reduction of the electron occupation in the \(d\) orbitals (\(n_{d}\)). For most iron-based superconductors, \(n_{d}\) is around 6.25 and the Coulomb interaction is not strong enough to localize electrons, but slows them down, inducing a large effective mass (i.e., electron correlation) [27]. As \(n_{d}\) decreases, the Coulomb interaction tends to fully localize the electrons, as in the Mott insulator phase, and the electron correlation increases [28]. Furthermore, as \(n_{d}\) decreases, the electron correlations in different \(d\) orbitals are enhanced differently, and the enhancement of the \(d_{xy}\)-orbital electron correlation is dramatically larger than that of the other orbitals [29]. In monolayer FeSe, superconductivity is mainly contributed by \(d_{xy}\)-orbital electrons according to spin-fluctuation pairing, but this contribution is suppressed to some extent by the strong electron correlation of the \(d_{xy}\) orbital [30]. Further enhancement of the \(d_{xy}\)-orbital electron correlation will suppress superconductivity more, assuming the contributions from other orbitals do not change. In short, within the theory of spin-fluctuation pairing, a reduced \(E_{F}\) suggests an enhanced electron correlation in the \(d_{xy}\) orbital and leads to a suppression of superconductivity, at odds with our observation of unchanged \(\Delta\) and \(T_{c}\). This suppression should be counteracted by the phonon softening discussed in the preceding paragraph, which is based on electron-phonon coupling (EPC). Thus, our results suggest a cooperative pairing mechanism in monolayer FeSe/STO, where both EPC and spin fluctuations play a role [31-33].
The microscopic theory of superconductivity is based on the formation of fermion pairs. With a weak attractive interaction between fermions, they pair in momentum space with a large pair size \(\xi\), which is classified as the BCS limit. With a strong attractive interaction, fermions are bound tightly in pairs in real space; such preformed pairs subsequently undergo BEC to the superfluid state, classified as the BEC limit. The two limits are connected through an intermediate unitary regime called the BCS-BEC crossover [34, 35]. This crossover is considered important for understanding strongly correlated electronic systems, attracting attention in both the cold-atom and condensed-matter fields. In cold fermionic atom systems, the crossover is realized by tuning the interaction strength between atoms. The interaction strength is specified by the scattering length \(a_{s}\), which diverges and changes sign at the BCS-BEC crossover. Also, the ratio \(\Delta/E_{F}\) is around 0.5 as a hallmark of this crossover [36]. In condensed matter
Figure 2: **STM topographies, band dispersions and tunneling spectra in the peninsulas of FeSe monolayer.** (a) STM topographic image of a peninsula at the monolayer boundary. The dashed box indicates the area of QPI analysis, and the double arrow indicates the peninsula width. (b) Band dispersion extracted from the QPI patterns in the boxed region of (a) by a parabolic fitting. (c) A symmetrized tunneling spectrum measured at the center of the dashed box in (a) with a Dynes fitting. (d-f) Similar results to (a-c) for another peninsula. (g) The summarized results of \(\Delta\) and \(E_{F}\) from different peninsulas (peninsula widths from 6 to 27 nm). (h) Multiple SC systems in the BCS limit and BCS-BEC crossover regime. Grey pentagons represent the results from conventional superconductors. Green symbols represent cuprates. Blue symbols represent FeSe-based superconductors. Red symbols represent our results in (g).
systems, \(\Delta/E_{F}\) has been studied in various superconductors, as we summarize in Fig. 2h. For the conventional superconductors Al, Sn, and Nb, \(\Delta/E_{F}\) is nearly zero [37], and for MgB\({}_{2}\) the value of \(\Delta/E_{F}\) is 0.016 [38], all suggesting the BCS limit. On the other hand, the BCS-BEC crossover is expected in high-T\({}_{\rm C}\) cuprates due to their quasi-2D nature and strong pairing [39]. \(\Delta/E_{F}\) values of cuprates are higher than those of conventional superconductors, but mostly lower than 0.3. Specifically, upward triangles in Fig. 2h stand for YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{x}\), leftward triangles for La\({}_{2-x}\)Sr\({}_{x}\)CuO\({}_{4}\), and rightward triangles for Tl\({}_{2}\)Ba\({}_{2}\)Ca\({}_{2}\)Cu\({}_{3}\)O\({}_{10}\), Bi\({}_{2}\)Sr\({}_{2}\)CaCu\({}_{2}\)O\({}_{8}\), Bi\({}_{2}\)Sr\({}_{2}\)Ca\({}_{2}\)Cu\({}_{3}\)O\({}_{10}\), and Tl\({}_{2}\)Ba\({}_{2}\)CaCu\({}_{2}\)O\({}_{8}\)[40, 41, 42]. In iron-based superconductors, the effects of chemical doping have been explored, and \(\Delta/E_{F}\) of bulk Fe\({}_{1+x}\)Te\({}_{0.6}\)Se\({}_{0.4}\) can be raised from 0.16 to 0.50 by reducing x [43]. Different substrates have also been used to tune the band edge of the FeSe monolayer; an FeSe monolayer grown on trilayer graphene (TLG) shows \(\Delta/E_{F}=0.3\)[44]. Bulk FeSe has a uniquely tiny hole pocket (\(E_{F}\sim 10\) meV) and electron pocket (\(E_{F}\sim 3\) meV) at the Fermi surface [13], with \(\Delta=2.3\) or 1.5 meV for the hole or electron band, respectively [45]. Thus \(\Delta/E_{F}\sim 0.23\) or 0.5 for the hole or electron band of bulk FeSe. In our work, the monolayer FeSe/STO has a much larger \(T_{c}\) and yet a larger \(E_{F}\), yielding \(\Delta_{2}/E_{F}=0.28\) in the uniform area of our sample; this ratio is dramatically raised to 0.73 by the lateral quantum confinement in the peninsula area. Therefore, monolayer FeSe peninsulas turn out to be a promising platform to study the BCS-BEC crossover in condensed matter systems.
Next, we focus on the isolated islands of monolayer FeSe shown in Fig. 3a, which are more spatially confined, with more boundaries, compared to the peninsulas. When the size of a nano-island is small enough, electrons inside the island create a strong Coulomb repulsion that prevents the tunneling of an external electron, known as the Coulomb blockade (CB). When the applied voltage overcomes the electrostatic energy, external electrons can tunnel into the island one at a time, leading to equidistant conductance peaks in the differential conductance spectrum [46]. Figure 3b shows the tunneling spectrum measured on the FeSe island in Fig. 3a. Apart from the gap at the Fermi level \(E_{F}\) (i.e., zero bias), which will be discussed later, a pronounced differential conductance oscillation with nearly equidistant peaks is observed. The peak spacing is labeled as \(U\) in Fig. 3b. This observed phenomenon is consistent with the scenario of Coulomb peaks from the CB effect, where the voltage interval \(U\) is related to the addition energy of an electron to the island by \(eU=e^{2}/C\)[47, 48]. Here, \(C\) is the capacitance between the island and the STM tip, and \(eU=14.0\) meV gives \(C=11.4\) aF (Fig. 3b). The inset of Fig. 3b shows an enlarged view near zero bias. A peak asymmetry about \(E_{F}\) is observed, which can be attributed to residual charge on the island [47]. Fig. 3c shows the temperature dependence of the tunneling spectra measured on another FeSe island. With increasing temperature, the Coulomb gaps gradually decrease due to the raised thermal energy, in line with the expectation that the CB effect would be smeared out at higher temperatures [49, 50]. The black dots in Fig. 3d summarize the values of the Coulomb gap \(U\) for FeSe islands of different sizes (see more results in Fig. S6). For each dot, the error bar represents the standard deviation of the Coulomb gap measured on one individual island. These dots follow a linear relation (red dashed line) between \(U\) and \(1/S\), confirming the CB relationship \(U=e/C\propto 1/S\), with \(S\) the area of the FeSe islands.
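As a quick check of the quoted capacitance, the conversion \(C=e/U\) can be evaluated directly; the snippet below is only this one-line arithmetic check.

```python
e = 1.602176634e-19                     # elementary charge, C

def tip_island_capacitance(U):
    """Capacitance from the Coulomb-peak spacing via eU = e^2/C,
    i.e. C = e/U, with U in volts."""
    return e / U

# The measured spacing U = 14.0 mV reproduces C ~ 11.4 aF.
print(tip_island_capacitance(14.0e-3) * 1e18, "aF")
```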
The gap at \(E_{F}\) is abnormally enlarged compared to \(U\), as it includes a contribution from the SC gap of FeSe. Since an energy equal to the SC gap is needed to excite quasiparticles in a superconductor, the excitation energy adds to the Coulomb gap as the total energy cost for a single-electron tunneling process [51]. Similar to the addition of the Coulomb gap and SC gap in Sn nanoparticles, we can calculate the underlying SC gap \(\Delta\) under the assumption that the enlarged energy gap near \(E_{F}\) equals \(U+2\Delta\)[52]. The calculated \(\Delta\), shown as black dots in Fig. 3e, decreases with the volume of the FeSe islands, where the error bar indicates the standard deviation of \(U\). The red dashed line is a guide to the eye. Such behavior is expected, since small superconductors experience large thermal fluctuations in the order parameter, which quench superconductivity [11, 53, 54, 55]. Moreover, the Anderson criterion suggests that the SC state will completely disappear when the mean electronic energy level
spacing (Kubo gap) \(\delta\) near \(E_{F}\) exceeds the SC gap [56]. In a system of volume \(V\), \(\delta\) is around \(\frac{2\pi^{2}\hbar^{2}}{m^{*}k_{F}V}\), where \(k_{F}\) is the Fermi wave vector and \(m^{*}\) the effective electron mass [57]. For monolayer FeSe with or without lateral confinement, \(k_{F}\) is almost the same (\(\sim\)2.7 \(nm^{-1}\)), as shown in Fig. S5a. However, \(m^{*}\) becomes larger for smaller \(E_{F}\) values (Fig. S5b). Adopting \(m^{*}=10.56\)\(m_{e}\), \(\delta\) vs \(V\) is plotted in Fig. 3e as the blue dashed line. The Anderson limit is estimated by the intersection of the two dashed lines in red and blue, where \(\delta=\Delta\), giving a critical volume of 29 \(nm^{3}\). In contrast to conventional superconductors with Anderson limits around 64 \(\sim\) 216 nm\({}^{3}\)[11], superconductivity persists in monolayer FeSe islands down to a smaller volume. The critical volume of 29 \(nm^{3}\) is also consistent with the fact that we do not see an SC gap in islands smaller than 29 \(nm^{3}\) (Fig. S6d).
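As a numerical check of the Anderson-limit estimate, the sketch below evaluates the effective mass and the Kubo gap from the formulas above. The \(\delta\approx 1.8\) meV obtained at 29 \(nm^{3}\) is our own evaluation of the quoted expression, not a value stated explicitly in the text.

```python
import numpy as np

hbar = 1.054571817e-34                  # J s
m_e = 9.1093837015e-31                  # kg
meV = 1.602176634e-22                   # J

def m_star(E_F_meV, k_F=2.7e9):
    """Effective mass (in units of m_e) from E_F = hbar^2 k_F^2 / (2 m*)."""
    return hbar**2 * k_F**2 / (2.0 * E_F_meV * meV) / m_e

def kubo_gap_meV(V_nm3, m_eff=10.56 * m_e, k_F=2.7e9):
    """Kubo gap delta = 2 pi^2 hbar^2 / (m* k_F V), returned in meV."""
    return 2.0 * np.pi**2 * hbar**2 / (m_eff * k_F * V_nm3 * 1e-27) / meV

print(m_star(26.8))        # ~10.4 m_e for a peninsula-like E_F
print(kubo_gap_meV(29.0))  # ~1.8 meV at the 29 nm^3 critical volume
```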
In conclusion, our work reveals how lateral quantum confinement affects the unconventional superconductivity of monolayer FeSe/STO in peninsula structures and isolated islands. In our experiments, lateral quantum confinement brings a clear reduction in \(E_{F}\), but nearly undiminished superconductivity in FeSe peninsulas. The BCS-BEC crossover is thus detected, indicating a scenario in which fermions form pairs before condensation in such low-dimensional high-T\({}_{\rm C}\) superconductors. The pairing strength \(\Delta/E_{F}\) is tuned by lateral quantum confinement from
Figure 3: **Coexistence of Coulomb gap and SC gap in isolated monolayer FeSe islands.** (a) Topographic image of a rectangular monolayer FeSe island on the STO substrate. (b) Tunneling spectrum measured on the island in (a). The periodic gap \(U\) away from \(E_{F}\) is the Coulomb gap. The enlarged gap at \(E_{F}\) consists of the Coulomb gap \(U\) and the SC gap. (c) Temperature dependence of the CB effect measured on another FeSe island. (d) Coulomb gaps in relation to the reciprocal of the island areas, with the dashed line showing a linear fit. (e) SC gap value vs the volume of FeSe islands. The red dashed line is a guide to the eye for the SC gap \(\Delta\), and the blue dashed line shows the Kubo gap \(\delta\). The Anderson limit is estimated by the intersection of the two dashed lines.
0.28 to 0.73, suggesting a new method to modulate the BCS-BEC crossover. Considering that the superconductivity is almost invariant under the reduced \(E_{F}\) in FeSe peninsulas, we infer that phonon softening might counteract the effect of the reduced \(E_{F}\). Our work shows that QPI methods can be used to study the quantum confinement effect on low-dimensional high-T\({}_{\text{C}}\) superconductors with high spatial and energy resolution. As the lateral confinement becomes more severe in isolated monolayer FeSe islands, superconductivity is found to be suppressed and to coexist with the CB effect. For the first time, the evolution of the SC gap with the volume of monolayer FeSe islands is investigated, where superconductivity eventually disappears near the Anderson limit of 29 \(nm^{3}\). This limit is much smaller than in conventional superconductors, demonstrating that the Anderson limit is a valid criterion even in unconventional superconductors.
We sincerely thank Dingyu Shen for the help in data analysis and Yanan Li for discussions. This work is supported by the National Key Research and Development Program of China (Grant No. 2018YFA0305604), the Natural Science Foundation of China (Grant No. 11888101), the Beijing Natural Science Foundation (Z180010), the Innovation Program for Quantum Science and Technology (2021ZD0302400), the Strategic Priority Research Program of Chinese Academy of Sciences (XDB28000000), and the Postdoctoral International Exchange Program of China (YJ20210216). |
2306.05189 | EMO: Episodic Memory Optimization for Few-Shot Meta-Learning | Few-shot meta-learning presents a challenge for gradient descent optimization
due to the limited number of training samples per task. To address this issue,
we propose an episodic memory optimization for meta-learning, we call EMO,
which is inspired by the human ability to recall past learning experiences from
the brain's memory. EMO retains the gradient history of past experienced tasks
in external memory, enabling few-shot learning in a memory-augmented way. By
learning to retain and recall the learning process of past training tasks, EMO
nudges parameter updates in the right direction, even when the gradients
provided by a limited number of examples are uninformative. We prove
theoretically that our algorithm converges for smooth, strongly convex
objectives. EMO is generic, flexible, and model-agnostic, making it a simple
plug-and-play optimizer that can be seamlessly embedded into existing
optimization-based few-shot meta-learning approaches. Empirical results show
that EMO scales well with most few-shot classification benchmarks and improves
the performance of optimization-based meta-learning methods, resulting in
accelerated convergence. | Yingjun Du, Jiayi Shen, Xiantong Zhen, Cees G. M. Snoek | 2023-06-08T13:39:08Z | http://arxiv.org/abs/2306.05189v3 | # EMO: Episodic Memory Optimization for Few-Shot Meta-Learning
###### Abstract
Few-shot meta-learning presents a challenge for gradient descent optimization due to the limited number of training samples per task. To address this issue, we propose an episodic memory optimization for meta-learning, which we call _EMO_, inspired by the human ability to recall past learning experiences from the brain's memory. EMO retains the gradient history of past experienced tasks in external memory, enabling few-shot learning in a memory-augmented way. By learning to retain and recall the learning process of past training tasks, EMO nudges parameter updates in the right direction, even when the gradients provided by a limited number of examples are uninformative. We prove theoretically that our algorithm converges for smooth, strongly convex objectives. EMO is generic, flexible, and model-agnostic, making it a simple plug-and-play optimizer that can be seamlessly embedded into existing optimization-based few-shot meta-learning approaches. Empirical results show that EMO scales well with most few-shot classification benchmarks and improves the performance of optimization-based meta-learning methods, resulting in accelerated convergence.
## 1 Introduction
The vast majority of current few-shot learning methods fall within the general paradigm of meta-learning (Schmidhuber, 1987; Bengio et al., 1991; Thrun and Pratt, 1998), which searches for the best few-shot learning strategy as learning experience accumulates (Finn et al., 2017; Ravi and Larochelle, 2017; Andrychowicz et al., 2016). Optimization-based meta-learning (Finn et al., 2017; Ravi and Larochelle, 2017; Li et al., 2017; Raghu et al., 2019) is one of the most popular approaches, owing to its "model-agnostic" nature, which accommodates different model architectures, and its principled formulation, which allows application to various problems. Optimization-based meta-learning comprises inner-loop and outer-loop updates that operate on a batch of tasks per iteration. In the inner loop, these methods learn task-specific network parameters \(\theta\) by performing traditional gradient descent on a task-specific loss \(\mathcal{L}(\theta;\mathcal{S})\) with the support set \(\mathcal{S}\), where
\[\theta^{{}^{\prime}}=\theta-\alpha\nabla_{\theta}\mathcal{L}(\theta;\mathcal{ S}), \tag{1}\]
and \(\alpha\) is the learning rate, which determines the step size per inner iteration. Gradient estimation with a small support set is inherently noisy, which can cause the model to diverge or converge to a non-optimal minimum per task. Due to the small number of samples, traditional optimizers, e.g., (Allen-Zhu and Yuan, 2016; Kingma and Ba, 2015; Sutskever et al., 2013; Ruder, 2016; Li and Malik, 2017), tend to get trapped in local minima. In this paper, we propose a new inner-loop optimizer for few-shot meta-learning.
Our work is inspired by the human cognitive function of episodic memory in the brain, which enables us to quickly adapt to new tasks with limited training samples by recalling past learning experiences from episodic memory (Tulving, 1972; 1983; 2002). Episodic memory has been shown to be effective in various machine learning tasks, such as reinforcement learning and continual learning. In reinforcement learning (Zhu et al., 2020; Gershman and Daw, 2017), recent works use episodic memory to store past experiences and improve generalization ability quickly. In continual learning (Lopez-Paz and Ranzato, 2017; Chaudhry et al., 2019), episodic memory alleviates catastrophic forgetting while allowing beneficial knowledge transfer to previous tasks. Building on this inspiration, we introduce episodic memory into meta-learning for few-shot learning. Our approach learns to collect long-term episodic optimization knowledge for improved few-shot learning performance. By incorporating the cognitive function of episodic memory into meta-learning, we aim to improve machine learning models' generalization and adaptation ability to new tasks with limited training samples.
This paper proposes a new inner-loop optimization method for few-shot meta-learning, which we call _Episodic Memory Optimization_ (EMO). EMO exploits an external memory to accrue and store the gradient history gained from past training tasks, enabling the model to update to optimal parameters more accurately and quickly when faced with new
tasks. Specifically, the episodic memory stores the gradients of the parameters per network layer for previous tasks, which are aggregated with the gradient of the current task in a linear or learnable way. By doing so, episodic memory can help us reach better model parameters, despite having only limited training samples available for the new task. To avoid overloading the memory storage space, EMO incorporates a memory controller that implements three different replacement strategies for tasks in the memory. Furthermore, we prove that EMO with fixed-size memory converges under strong convexity assumptions, regardless of which gradients are selected or how they are aggregated to form the update step. EMO is a general gradient descent optimizer that is model-agnostic and serves as a plug-and-play module that can be seamlessly embedded into existing optimization-based few-shot meta-learning approaches. We conduct ablations and experiments on few-shot learning benchmarks and verify that optimization-based meta-learning methods with EMO easily outperform the original methods in terms of both performance and convergence.
## 2 Method
### Preliminaries
**Few-shot classification.** The goal of few-shot classification is to construct a model using a limited number of labelled examples. In the conventional few-shot classification scenario, following (Vinyals et al., 2016), we define the \(N\)-way \(K\)-shot classification problem, which has \(N\) classes, each with \(K\) labelled support examples. In this scenario, each task is a classification problem drawn from a predefined task distribution \(p(\mathcal{T})\). We denote the labeled support set by \(\mathcal{S}{=}\{(x_{i},y_{i})\}_{i=1}^{N\times K}\); each \((x_{i},y_{i})\) is a pair of an input and a label, where \(y_{i}\in\{1,2,\cdots,N\}\). Each task is also associated with a query set \(\mathcal{Q}{=}\{(x_{j},y_{j})\}_{j=1}^{N\times M}\) to evaluate the quality of the trained model. The query set \(\mathcal{Q}\) for each task is composed of examples of the same \(N\) classes. Optimizing and learning parameters for each task with only a few labelled training samples is usually difficult. Meta-learning offers a way of learning to improve performance by leveraging knowledge from multiple tasks.
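For concreteness, the following sketch shows one standard way to construct such an \(N\)-way \(K\)-shot episode from a labeled dataset; the function and the miniImageNet-style label layout in the usage line are illustrative assumptions, not the benchmark code.

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=1, m_query=15, rng=None):
    """Draw one N-way K-shot task: for each of N sampled classes, take
    K support and M query example indices. `labels` is an integer
    label array over the meta-training split."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.where(labels == c)[0])
        support.extend(idx[:k_shot])
        query.extend(idx[k_shot:k_shot + m_query])
    return np.asarray(support), np.asarray(query)

# Example: 100 classes with 600 images each, as in miniImageNet.
labels = np.repeat(np.arange(100), 600)
S_idx, Q_idx = sample_episode(labels)   # 5 support, 75 query indices
```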
**Optimization-based meta-learning.** In meta-learning, a sequence of tasks \(\{\mathcal{T}_{1},\mathcal{T}_{2},\cdots,\mathcal{T}_{N_{\mathcal{T}}}\}\) is sampled from a predefined task distribution \(p(\mathcal{T})\), where each one is a few-shot learning task. The core idea of meta-learning is to find a well-generalized meta-learner on the training tasks during the meta-training phase. For each task \(\mathcal{T}_{i}\), the meta-learner \(\Phi\) is applied to the base learner \(f_{\theta_{i}}\), and the parameters \(\theta_{i}\) and meta-learner \(\Phi\) are learned alternately. During the meta-testing phase, the learned meta-learner is applied to tackle new tasks composed of examples from unseen classes. Given a new few-shot learning task \(\mathcal{T}_{t}\), the optimal meta-learner \(\Phi^{*}\) is used to improve the effectiveness on \(\mathcal{T}_{t}\) by solving \(\min_{\theta_{t}}\mathcal{L}(\Phi^{*}(f_{\theta_{t}}),\mathcal{Q})\). In this way, meta-learning effectively adapts to new tasks, even when the training data for the new task is insufficient. Optimization-based meta-learning (Finn et al., 2017; Li and Malik, 2017; Raghu et al., 2019) strives to learn an optimization algorithm that is shared across tasks while being adaptable to new tasks. The most representative optimization-based meta-learning method is model-agnostic meta-learning (MAML) by Finn et al. (2017). MAML is formulated as a bilevel optimization problem with inner-loop and outer-loop optimization, where the inner loop computes the task-specific parameters (starting from the shared initialization \(\theta\)) via a few gradient updates:
\[\theta_{t+1}=\theta_{t}-\alpha\nabla_{\theta_{t}}\frac{1}{N\times K}\sum_{(x,y)\in \mathcal{S}}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta_{t}}(x),y)\;. \tag{2}\]
For the outer loop, the original model parameters are then updated after the inner-loop update, i.e.,
\[\theta\leftarrow\theta-\beta\nabla_{\theta}\frac{1}{N_{\mathcal{T}}}\sum_{i=1}^{N_{\mathcal{T}}}\frac{1}{N\times M}\sum_{(x,y)\in \mathcal{Q}}\mathcal{L}_{\mathcal{T}_{i}}(f_{\theta_{t+1}}(x),y)\;. \tag{3}\]
where \(\alpha\) and \(\beta\) are the inner-loop and outer-loop learning rates, respectively. Training results in a model initialization \(\theta\) that can be adapted to any new task with just a few gradient steps.
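As a concrete illustration of the bilevel update in Eqs. (2)-(3), the sketch below implements one outer-loop step in PyTorch. The functional `loss_fn(params, batch)` interface and the hyper-parameter values are our own assumptions, not the reference MAML code.

```python
import torch

def maml_outer_step(params, tasks, loss_fn, alpha=0.01, beta=0.001):
    """One outer-loop update in the spirit of Eqs. (2)-(3). `params` is
    a list of tensors with requires_grad=True; `tasks` is a list of
    (support, query) batches; `loss_fn(params, batch)` evaluates the
    task loss functionally. A sketch, not the reference code."""
    meta_grads = [torch.zeros_like(p) for p in params]
    for support, query in tasks:
        # Inner loop (Eq. 2): one gradient step on the support loss,
        # keeping the graph so the update itself stays differentiable.
        grads = torch.autograd.grad(loss_fn(params, support), params,
                                    create_graph=True)
        fast = [p - alpha * g for p, g in zip(params, grads)]
        # Outer loop (Eq. 3): query loss through the adapted weights.
        q_grads = torch.autograd.grad(loss_fn(fast, query), params)
        for m, g in zip(meta_grads, q_grads):
            m.add_(g / len(tasks))
    with torch.no_grad():
        for p, m in zip(params, meta_grads):
            p.sub_(beta * m)
```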
### Model
This paper focuses on inner-loop optimization for optimization-based meta-learning methods. We propose a new inner-loop optimization method called _Episodic Memory Optimization_ (EMO). Our proposed EMO model is composed of four parts: an **encoder** that generates a representation for the incoming query data and the available support data; an external **memory store**, which contains previously seen task representations and the per-layer gradients, with writing managed by a **memory controller**; and an **episodic gradient memory stochastic gradient descent** that ingests the gradients from the new task and data from the memory store to generate the new gradients for the current task.
**Encoder.** The encoder first takes in support data \(\mathcal{S}{=}\{(x_{i},y_{i})\}_{i=1}^{N\times K}\) and then converts these data to lower-dimensional representations \(\{e_{i}\}_{i=1}^{N\times K}\). In this paper, the input is an image, and we choose a convolutional network architecture for the encoder function \(f_{\theta}\).
**Memory store.** Our external memory module \(\mathcal{M}{=}\{M_{t}\}_{t=1}^{T}\) contains the stored learning processes of experienced tasks, where \(T\) is the memory capacity. Each slot corresponds to one experienced task. In our work, the memory stores a key-value pair in each row of the memory array, as in (Graves et al., 2014). The keys are the task representations \(K_{t}\) of each task, and the values are the gradient representations \(V_{t}^{l}\): \(M_{t}=[\mathbf{K}_{t},\mathbf{V}_{t}]\), where \(\mathbf{V}_{t}=\{V_{t}^{1},V_{t}^{2},\cdots,V_{t}^{l}\}\), with \(t\) indexing the task and \(l\) the \(l\)-th convolutional layer. The memory module is queried by finding the k-nearest neighbors between the test task representation and the stored keys \(K_{t}\). The distance metric used to calculate proximity between the points is a design choice; here we always use the Euclidean distance.
For the task representation \(K_{t}\), to allow the flexibility of variable input sizes of task representations, we use the generic Transformer architecture (Vaswani et al., 2017):
\[K_{t}=\texttt{Transformer}([\texttt{cls}_{t},e_{1},e_{2},\cdots,e_{n}])[0], \tag{4}\]
where \(\texttt{cls}_{t}\) is the task representation token embedding, and \(e_{i}=\textbf{Encoder}(x_{i})\) is the encoding of the \(i\)-th support pair from \(\mathcal{S}{=}\{(x_{i},y_{i})\}_{i=1}^{N\times K}\). After the transformer, we take the output at the cls position as the task embedding \(K_{t}\).
For the memory value \(V_{t}^{l}\), we first compute the gradients of task \(t\) at layer \(l\) as:
\[\mathbf{g}_{t}^{l}=\sum_{i=1}^{N\times K}\frac{\partial\mathcal{L}(\hat{y}_{i},y_{i})}{\partial\theta^{l}}, \tag{5}\]
where \(\theta^{l}\) denotes the parameters at layer \(l\), and \(\mathcal{L}(\cdot)\) is a loss function (such as the cross-entropy loss on the labels). To avoid confusion, we omit the superscript \(l\) for the memory from now on.
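The sketch below illustrates one possible realization of this key-value memory store with Euclidean k-nearest-neighbor recall; the class and method names are our own, and the per-layer bookkeeping is folded into a single value per task for brevity.

```python
import numpy as np

class EpisodicMemory:
    """Fixed-capacity key-value store: keys are task embeddings K_t,
    values are (per-layer) gradient representations V_t. A sketch of
    the memory store described above."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.keys, self.values = [], []

    def is_full(self):
        return len(self.keys) >= self.capacity

    def write(self, key, value):
        self.keys.append(key)
        self.values.append(value)

    def recall(self, query_key, k=3):
        # Euclidean distances between the query task key and all
        # stored task keys; return the values of the k nearest.
        d = np.linalg.norm(np.stack(self.keys) - query_key, axis=1)
        nearest = np.argsort(d)[:min(k, len(self.keys))]
        return [self.values[i] for i in nearest]
```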
**Memory controller.** To avoid overloading the memory storage space, we propose a memory controller that decides when to replace an episodic memory slot. The input of the memory controller consists of the gradient \(\mathbf{g}_{t}\) of the current task and the selected memory \(\hat{M}_{c}\) that needs to be replaced. The controller is written as:
\[M_{c}=\texttt{Controller}(\mathbf{g}_{t},\hat{M}_{c}). \tag{6}\]
Inspired by page replacement algorithms in operating systems, we propose three implementations of the memory controller: _First In First Out Episodic Memory_ (FIFO-EM), _Least Recently Used Episodic Memory_ (LRU-EM) and _Clock Episodic Memory_ (CLOCK-EM). (1) FIFO-EM keeps track of all memories in a queue, with the oldest memory at the front. When a memory needs to be replaced, the memory at the front of the queue is selected for removal. (2) LRU-EM is a content-based memory writer that writes episodic memories to the least recently used memory location. New task information is written into rarely-used locations, preserving recently encoded data, or written to the last used location, which can function as an update of the memory with newer, possibly more relevant information. (3) In CLOCK-EM, candidate memories for removal are considered in a round-robin fashion, and a memory that has been accessed between consecutive considerations is spared, similar to the CLOCK page replacement algorithm in operating systems (Janapastya et al., 2010). When the memory \(\mathcal{M}\) is not full, we directly store \(\mathbf{g}_{t}\) in \(\mathcal{M}\); once the memory is full, we use the Controller to perform memory replacement. The best-suited memory controller is specific to the underlying meta-learning method and dataset. We compare each controller in the experiments.
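As one concrete instance, a minimal LRU-EM controller compatible with the `EpisodicMemory` sketch above could look as follows; the slot-timestamp bookkeeping is an illustrative assumption rather than the paper's implementation.

```python
class LRUController:
    """LRU-EM replacement sketch: when the memory is full, overwrite
    the least recently used slot with the incoming task."""
    def __init__(self, memory):
        self.memory = memory
        self.last_used = {}     # slot index -> last access step
        self.step = 0

    def touch(self, slot):
        self.step += 1
        self.last_used[slot] = self.step

    def write(self, key, value):
        if not self.memory.is_full():
            self.memory.write(key, value)
            self.touch(len(self.memory.keys) - 1)
        else:
            # Victim = slot with the oldest access timestamp.
            victim = min(self.last_used, key=self.last_used.get)
            self.memory.keys[victim] = key
            self.memory.values[victim] = value
            self.touch(victim)
```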
**Episodic gradient memory stochastic gradient descent.** Episodic gradient memory stochastic gradient descent explicitly integrates episodic memory gradients into SGD. Specifically, the iteration takes the form:
\[\theta_{t+1}=\theta_{t}-\alpha\cdot\texttt{Aggr}(\mathbf{g}_{t},\ \mathcal{V}_{t}), \tag{7}\]
where \(\mathbf{g}_{t}\) are the gradients of the support set from the current task, \(\mathcal{V}_{t}\) is the collection of episodic gradients selected from the memory based on the similarity between the memory keys and the current task representation, and Aggr denotes an aggregation function used to combine the episodic gradients with the current-iteration gradient. We consider three possible functions for Aggr: Mean, the average of \(\mathbf{g}_{t}\) and all selected episodic gradients; Sum, the addition of \(\mathbf{g}_{t}\) to the average of all selected episodic gradients; and Transformer, a learnable combination of \(\mathbf{g}_{t}\) with all selected episodic gradients. Mathematically, these three Aggr functions are defined as:
\[\texttt{Mean}(\mathbf{g}_{t},\ \mathcal{V}_{t})=\frac{1}{M_{\mathcal{V}_{t}}+1}( \mathbf{g}_{t}+\sum_{V_{t}\in\mathcal{V}_{t}}V_{t}), \tag{8}\]
\[\texttt{Sum}(\mathbf{g}_{t},\;\mathcal{V}_{t})=\mathbf{g}_{t}+\frac{1}{M_{ \mathcal{V}_{t}}}\sum_{V_{t}\in\mathcal{V}_{t}}V_{t}, \tag{9}\]
\[\texttt{Transformer}(\mathbf{g}_{t},\;\mathcal{V}_{t})=\texttt{Transformer}([ \texttt{cls}_{g},\mathbf{g}_{t},V_{t}^{1},V_{t}^{2},\cdots,V_{t}^{M_{\mathcal{ V}_{t}}}])[0]. \tag{10}\]
where \(\texttt{cls}_{g}\) is the new gradient token embedding. The best-suited aggregation function is specific to the meta-learning method into which the episodic gradients are integrated. We compare each aggregation function with different optimization-based meta-learning methods in the experiments.
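A minimal sketch of the first two aggregation functions (Eqs. (8)-(9)) is given below, with the Transformer variant of Eq. (10) indicated only in a comment; `recalled_gradients` is a hypothetical name for the output of the memory recall.

```python
def aggr_mean(g_t, V_t):
    # Eq. (8): average the current gradient with the recalled ones.
    return (g_t + sum(V_t)) / (len(V_t) + 1)

def aggr_sum(g_t, V_t):
    # Eq. (9): add the mean recalled gradient to the current gradient.
    return g_t + sum(V_t) / len(V_t)

# Eq. (10) instead feeds [cls_g, g_t, V_t^1, ..., V_t^M] through a small
# Transformer and reads the aggregated gradient off the cls position.
# The inner-loop step of Eq. (7) then becomes, e.g.:
#   theta = theta - alpha * aggr_mean(g_t, recalled_gradients)
```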
### Meta-training and meta-test
Following (Ravi and Larochelle, 2017; Finn et al., 2017), we perform episodic training by exposing the model to a variety of tasks from the training distribution \(p(\mathcal{T})\). For a given training task \(\mathcal{T}_{i}\), the model first computes its parameters by Eq. (7) in the inner loop, then incurs a loss \(\mathcal{L}_{i}\) on this task, and updates the model parameters by Eq. (3); we sum these losses and back-propagate through the sum at the end of the task batch. We evaluate the model using a partition of the dataset that is class-wise disjoint from the training partition. The parameters of each EMO component are trained in an end-to-end framework. In the meta-test stage, the model first computes the gradients \(\mathbf{g}_{t}\) and recalls the memory \(\mathcal{V}_{t}\) based on the task representation \(K_{t}\), which is computed from the support set \(\mathcal{S}\). Then the model updates the task-specific parameters by Eq. (7) in the inner loop. After the inner loop, we evaluate the model on the query set. Note that during the meta-test phase, our model only uses the content of the acquired memory to update the network parameters and does not modify the content stored in the memory. Detailed algorithms for meta-training and meta-test are given in Algorithms 1 and 2 of the appendix.
### Analysis of Convergence
The core of EMO is the explicit integration of the current gradient with the episodic memory, \(\texttt{Aggr}(\mathbf{g}_{t},\mathcal{V}_{t})\). In practice, we observe that the proposed method has a higher convergence rate than previous optimizers. Here, we theoretically analyze the convergence rate of gradient descent with the proposed EMO optimization.
To do so, we reformulate the aggregation process as a linear multi-step system (Polyak, 1964; Assran and Rabbat, 2020), leading to \(\texttt{Aggr}(\mathbf{g}_{t},\mathcal{V}_{t}){=}\sum_{s=0}^{S-1}w_{t,s}g_{t-s}\), where \(S\) is the number of steps. At the \(t\)-th iteration, the multi-step system involves the gradients from the past \(S\) time steps. \(w_{t,s}\) is the aggregation scalar of the corresponding gradient \(g_{t-s}\) in the linear multi-step system, bounded by the interval \([0,1]\). The system involves all gradients in the episodic memory, \(\mathcal{V}_{t}\subseteq\{g_{t-s}\}_{s=0}^{S-1}\). For a gradient that does not appear in the memory, i.e., \(g_{t-s}\notin\mathcal{V}_{t}\) for some \(s\in\{1,2,...,S-1\}\), the corresponding aggregation scalar \(w_{t,s}\) is \(0\). In general, we define a model-agnostic objective as \(\min_{\theta}f(\theta)\). \(\theta_{t}\) and \(\theta^{*}\) denote the model parameters at the \(t\)-th iteration and at the optimum, respectively. The difference between the two is \(\Delta\theta_{t}=\theta_{t}-\theta^{*}\). We assume \(f\) is continuously differentiable, \(\mu\)-strongly convex and \(L\)-Lipschitz smooth (\(0<\mu\leq L\)). These assumptions imply that the Hessian matrix \(\nabla^{2}f(\theta)\) exists and is bounded by the interval \([\mu,L]\). We consider the stochastic gradient \(g_{t}\) as a random vector with \(\mathbb{E}[g_{t}]=\nabla f(\theta_{t})\). \(\epsilon_{t}\) denotes the independent gradient noise at iteration \(t\). The gradient noise has zero mean, and its variance is bounded by a finite constant \(\sigma^{2}\). Thus, the gradient in each iteration can be formulated as:
\[g_{t}=\Delta\theta_{t}\int_{0}^{1}\nabla^{2}f(\theta^{*}+u\Delta\theta_{t})du+ \epsilon_{t}, \tag{11}\]
where \(\int_{0}^{1}\nabla^{2}f(\theta^{*}+u\Delta\theta_{t})du\) is the average rate of the gradient changes from the \(t\)-th iteration to the optimal one with respect to the model parameters. Based on the assumptions of the objective, the average rate is also bounded between \(\mu\) and \(L\). We incorporate Eq. (11) into Eq. (7) with the linear multi-step system. In this case, the convergence of the system depends on the spectral properties of the system matrix (McRae et al., 2022).
**Theorem 1** (Convergence rate of EMO).: We define a system matrix1 for each iteration as \(A_{t}\), which contains aggregation scalars and average rates of gradient changes of the past \(S\) gradients. \(\lambda_{t}\) is the square root of the largest singular value of the corresponding system matrix, and thus the spectral norm of the system matrix is not larger than \(\lambda_{t}\). \(\lambda_{\max}\) is the upper bound for all \(\lambda_{t}\) corresponding to all system matrices. Since \(\alpha\) is chosen sufficiently small such that \(\lambda_{\max}<1\), we have that:
Footnote 1: For clarity, we provide the definition of \(A_{t}\) in Eq. (16) of Appendix D.
\[f(\theta_{t+1})-f(\theta^{*})\leq\frac{L}{2}(\lambda_{\max}^{2t}\left\|\Delta \theta_{1}\right\|^{2}+\frac{\alpha^{2}\sigma^{2}S}{1-\lambda_{\max}^{2}}). \tag{12}\]
From this theorem, the convergence rate mainly depends on \(\lambda_{\max}\): the lower \(\lambda_{\max}\), the faster the convergence and the smaller the variance. When the number of steps in the system is \(1\), Eq. (12) degenerates to conventional stochastic gradient descent as used in previous meta-learning methods (Finn et al., 2017; Li et al., 2017; Raghu et al., 2019). In practice, our model usually uses a large number of steps. In this case, it is possible to set the learning rate and aggregation scalars to obtain a faster convergence rate than SGD. Proofs are presented in Appendix D.
## 3 Related Work
**Episodic memory.** Episodic memory has shown its effectiveness in a variety of machine learning tasks. Recent works (Zhu et al., 2020; Gershman & Daw, 2017; Botvinick et al., 2019; Hu et al., 2021; Lampinen et al., 2021) use episodic memory to store past experiences to help an agent quickly adapt to new environments and improve its generalization ability. In continual learning, episodic memory alleviates catastrophic forgetting (Lopez-Paz & Ranzato, 2017; Chaudhry et al., 2019; Derakhshani et al., 2021) while allowing beneficial transfer of knowledge to previous tasks. We draw inspiration from the cognitive function of episodic memory and introduce it into meta-learning to learn to collect long-term episodic (optimization) knowledge for few-shot learning.
**Meta-learning.** Meta-learning designs models to learn new tasks or adapt to new environments quickly with only a few training examples. There are four common research lines in meta-learning: (1) metric-based meta-learning (Snell et al., 2017; Vinyals et al., 2016; Sung et al., 2018; Du et al., 2022; Triantafillou et al., 2020) generally learns a shared/adaptive embedding space in which query images can be accurately matched to support images for classification; (2) optimization-based meta-learning (Finn et al., 2017; 2018; Lee & Choi, 2018; Yoon et al., 2018; Grant et al., 2018; Kalais & Chatzis, 2022; Abbas et al., 2022; Flennerhag et al., 2021; Zou et al., 2021; Triantafillou et al., 2020) learns an optimization algorithm that is shared across tasks and can be adapted to new tasks, enabling learning to be conducted efficiently and effectively. Note that Proto-MAML (Triantafillou et al., 2020) combines the strengths of prototypical networks and MAML for few-shot learning: it initializes the task-specific linear layer from the ProtoNet before optimizing those parameters using MAML. Our method focuses on optimization-based meta-learning as well, with the key difference that our optimizer can be used with any optimization-based meta-learning approach; (3) model-based meta-learning (Mishra et al., 2018; Gordon et al., 2019) explicitly learns a base-learner that incorporates knowledge acquired by the meta-learner and effectively solves individual tasks; (4) memory-based meta-learning (Munkhdalai & Yu, 2017; Ramalho & Garnelo, 2019; Zhen et al., 2020; Santoro et al., 2016; Du et al., 2022) deploys an external memory to rapidly assimilate new data of unseen tasks, which is used for quick adaptation or to make decisions. Our method combines optimization-based meta-learning with memory-based meta-learning. To the best of our knowledge, it is the first optimization-based meta-learning method with episodic memory aimed at few-shot classification.
**Memory-based few-shot learning.** Both Andrychowicz et al. (2016) and Ravi & Larochelle (2017) propose update rules for neural network parameters that transform gradients via an LSTM, outperforming fixed SGD update rules. The Meta-network (Munkhdalai & Yu, 2017) learns to transform gradients to fast weights as memory, which are stored and retrieved via attention during testing. Conditionally shifted neurons (Munkhdalai & Trischler, 2018) modify activation values with task-specific shifts retrieved from an external memory module, which is populated rapidly based on limited task experience. Santoro et al. (2016) leverage the Neural Turing Machine (Graves et al., 2014) for online few-shot learning by designing efficient read-and-write protocols. Ramalho & Garnelo (2019) introduced adaptive posterior learning, which approximates probability distributions by remembering the most surprising observations it has encountered in external memory. Babu et al. (2021) proposed a distributed memory architecture, which recasts the problem of meta-learning as simply learning with memory-augmented models. These methods (Andrychowicz et al., 2016; Ravi & Larochelle, 2017) leverage an LSTM to design a new update rule for the network parameters, which can be seen as implicit memory. Compared to previous methods that rely on additive feature augmentation, our approach utilizes episodic memory to augment the gradients during the network parameter update, a novel and distinct memory approach to few-shot learning.
## 4 Experiments
### Experimental Setup
In our experiments we consider two datasets: (i) _Meta-Dataset-BTAF_ (Yao et al., 2019), which contains four fine-grained image classification datasets: _Bird_ (Wah et al., 2011), _Texture_ (Cimpoi et al., 2014), _Aircraft_ (Maji et al., 2013), and _Fungi_ (FUNGI, 2018); (ii) _mini_ImageNet (Vinyals et al., 2016), which consists of 100 randomly chosen classes
from ILSVRC2012 (Russakovsky et al., 2015). For the _Meta-Dataset-BTAF_, each meta-training and meta-test task samples classes from one of the four datasets. This benchmark is more heterogeneous and closer to real-world image classification. Following the conventional meta-learning settings (Vinyals et al., 2016; Finn et al., 2017), all datasets are divided into meta-training, meta-validation and meta-testing classes. The \(N\)-way \(K\)-shot settings are used to split the training and test sets for each task. We report the average few-shot classification accuracy (%, top-1) along with the 95% confidence intervals across all test images and tasks. The error bars in Figures 2 and 3 represent the 95% confidence intervals across all test images and tasks. Appendix B provides the detailed implementation and algorithm. The results for MAML, ANIL, and Meta-SGD on the _Meta-Dataset-BTAF_ are based on our re-implementations.
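To make the task protocol concrete, here is a minimal sketch of \(N\)-way \(K\)-shot episode sampling; the `dataset` dictionary, the function name, and the query-set size of 15 are illustrative assumptions rather than the exact pipeline used in our experiments.

```python
# Minimal sketch of N-way K-shot episode sampling. `dataset` is an
# assumed dict mapping each class label to a list of image paths.
import random

def sample_episode(dataset, n_way=5, k_shot=1, n_query=15):
    """Return one few-shot task as (support, query) lists of (image, label)."""
    classes = random.sample(sorted(dataset.keys()), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        images = random.sample(dataset[cls], k_shot + n_query)
        support += [(img, label) for img in images[:k_shot]]
        query += [(img, label) for img in images[k_shot:]]
    return support, query
```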
### Results
**Benefit of episodic memory optimizer.** To show the benefit of our proposed episodic memory optimizer, we compare MAML (Finn et al., 2017), Meta-SGD (Li et al., 2017), and ANIL (Raghu et al., 2019) with their EMO variants. Each original meta-learning method uses SGD as the inner-loop optimizer, while each EMO variant uses EMO as the inner-loop optimizer. Table 1 shows that adding EMO improves performance independent of the meta-learning method or dataset. On the challenging Texture dataset, which has the largest domain gap, Meta-SGD with EMO delivers 36.26%, surpassing Meta-SGD by 3.88%. In addition, Meta-SGD with EMO achieves the best performance compared with the other meta-learning methods. This is because Meta-SGD with EMO stores not only the gradients of each layer, but also the gradients of the inner-loop learning rates, thus accelerating training. ANIL only stores the gradients of the last layer, causing the number of parameters and the memory size to be much smaller than in MAML and Meta-SGD. Despite the reduced accuracy, ANIL with EMO is still beneficial for applications that require compute efficiency, as ANIL is about 4.8 times faster than MAML and Meta-SGD. We attribute the improvements with EMO to our model's ability to leverage the episodic memory to adjust the model parameters, allowing the model to update on the test task using update rules from the most similar training tasks.
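For intuition, the following is a minimal sketch of the kind of inner-loop step EMO performs: the current task gradient is augmented with stored gradients retrieved from the episodic memory by task similarity. The cosine-similarity retrieval, the `top_k` cutoff, and the flat-tensor memory layout are illustrative assumptions, not the exact implementation.

```python
# Hedged sketch of an EMO-style inner-loop step (not the exact method).
import torch
import torch.nn.functional as F

def emo_step(params, grad, task_repr, memory, lr=0.01, top_k=5):
    """memory: list of (stored_task_repr, stored_grad) flat tensors."""
    if memory:
        # retrieve the stored gradients of the most similar past tasks
        sims = torch.stack([F.cosine_similarity(task_repr, r, dim=0)
                            for r, _ in memory])
        idx = sims.topk(min(top_k, len(memory))).indices
        retrieved = torch.stack([memory[i][1] for i in idx])
        grad = grad + retrieved.mean(dim=0)  # 'Mean' aggregation (cf. Table 2)
    return params - lr * grad                # plain SGD step on the result
```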
| Dataset | MAML w/o EMO | MAML w/ EMO | ANIL w/o EMO | ANIL w/ EMO | Meta-SGD w/o EMO | Meta-SGD w/ EMO |
|---|---|---|---|---|---|---|
| **5-way 1-shot** | | | | | | |
| Bird | 53.94 ± 1.45 | 56.32 ± 1.33 | 52.57 ± 1.44 | 54.78 ± 1.43 | 55.58 ± 1.43 | 58.95 ± 1.41 |
| Texture | 31.66 ± 1.31 | 34.75 ± 1.41 | 31.45 ± 1.32 | 33.15 ± 1.31 | 32.38 ± 1.32 | 36.26 ± 1.33 |
| Aircraft | 51.37 ± 1.38 | 53.99 ± 1.33 | 50.45 ± 1.34 | 52.79 ± 1.33 | 52.99 ± 1.36 | 55.29 ± 1.35 |
| Fungi | 42.12 ± 1.36 | 43.15 ± 1.36 | 41.14 ± 1.34 | 43.75 ± 1.31 | 41.74 ± 1.34 | 45.24 ± 1.34 |
| **5-way 5-shot** | | | | | | |
| Bird | 68.52 ± 0.79 | 70.91 ± 0.71 | 67.17 ± 0.74 | 69.25 ± 0.73 | 67.87 ± 0.74 | 72.74 ± 1.40 |
| Texture | 44.56 ± 0.68 | 47.21 ± 0.64 | 43.41 ± 0.68 | 45.78 ± 0.68 | 45.49 ± 0.68 | 49.15 ± 0.68 |
| Aircraft | 66.18 ± 0.71 | 68.13 ± 0.61 | 65.34 ± 0.70 | 67.15 ± 0.71 | 66.84 ± 0.70 | 69.73 ± 0.70 |
| Fungi | 51.85 ± 0.85 | 56.17 ± 0.75 | 52.11 ± 0.83 | 54.35 ± 0.83 | 52.51 ± 0.81 | 58.21 ± 0.79 |

Table 1: Benefit of episodic memory optimizer for few-shot fine-grained classification. All evaluated optimization-based meta-learning methods consistently achieve better performance with EMO than without. Meta-SGD with EMO achieves the best performance, especially for the 5-way 5-shot setting.
Figure 1: Comparisons for MAML with EMO and other optimizers. EMO speeds up MAML training and outperforms the other optimizers for few-shot learning.
**Comparison with other optimizers.** To show the benefit of our episodic memory optimizer, we compare EMO with other commonly used optimizers in the inner-loop stage of MAML. Learning curves for MAML using different optimizers are shown in Figure 1. EMO outperforms the other optimizers by a considerable margin. Momentum and Adam perform somewhat worse than SGD, which suggests that these traditional optimizers cannot exploit past, inaccurate gradients for few-shot learning. EMO, however, speeds up training and improves performance since it learns to adaptively choose the most relevant task update rules for the test task.
**Effect of inner-loop steps.** We further analyze the effectiveness of our optimizer for fast adaptation by varying the number of update steps. Specifically, we compare the performance of MAML with SGD, Momentum, Adam, and EMO in Figure 2. We find that MAML with EMO achieves about 51.34% accuracy at step 0 (no support set is required to update the model), which is more than 19.86% higher than MAML, since EMO can utilize the past gradients in the memory to guide the learning of new tasks. Also, MAML with EMO reaches convergence very quickly (step 2 vs. step 5) and performs much better than the other optimizers. Results of comparisons with different optimizers are provided in Appendix F.1. We also report the adaptation speed of MAML with different optimizers for a varying number of update steps in Appendix F.5. Although MAML with other optimizers already performs fast adaptation within 5 steps, MAML with EMO is even faster and better. This again demonstrates the benefit of EMO.
**Comparison of our aggregation functions.** We also ablate the effect of EMO's aggregation function for generating the new gradients. We report the performance of MAML and Meta-SGD with EMO using different Aggr functions in Table 2; the experiments for ANIL with EMO are provided in Appendix F.2. The results show that the best-suited aggregation function for integrating episodic gradients is specific to the optimization-based meta-learning method. The best-suited aggregation function for MAML with EMO is the Mean, while the best-suited aggregation function for Meta-SGD with EMO is the Transformer. To ensure consistency of implementation on each dataset and for each model, we choose the Mean aggregation function for MAML with EMO and the Transformer aggregation function for Meta-SGD with EMO in the remaining experiments.
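The snippet below sketches the three Aggr choices; the single `MultiheadAttention` pooling layer stands in for the Transformer aggregator and is an assumption about its architecture, not the exact model used here.

```python
# Hedged sketch of the Sum / Mean / Transformer aggregation choices.
import torch

def aggregate(grads, mode="mean", attn=None):
    """grads: (k, d) tensor of retrieved episodic gradients."""
    if mode == "sum":
        return grads.sum(dim=0)
    if mode == "mean":
        return grads.mean(dim=0)
    if mode == "transformer":
        # attn: an assumed torch.nn.MultiheadAttention(embed_dim=d, num_heads=1);
        # one self-attention pass over the k gradients, then mean-pool.
        x = grads.unsqueeze(1)            # (k, 1, d): sequence of length k
        out, _ = attn(x, x, x)
        return out.squeeze(1).mean(dim=0)
    raise ValueError(f"unknown aggregation mode: {mode}")
```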
**Comparison of our memory controllers.** To assess the effect of the memory controller, we compare our three memory controllers, FIFO-EM, CLOCK-EM, and LRU-EM, on the _Meta-Dataset-BTAF_ under the 5-way 1-shot setting. The experimental results for MAML with EMO are reported in Table 3, and results for Meta-SGD and ANIL with EMO are in Appendix F.3. FIFO-EM achieves the worst performance compared to the other memory controllers since FIFO-EM may replace some crucial or commonly used memories, causing the test task to fail to find the precise memory needed to learn quickly.
| Dataset | MAML+EMO: Sum | MAML+EMO: Mean | MAML+EMO: Transformer | Meta-SGD+EMO: Sum | Meta-SGD+EMO: Mean | Meta-SGD+EMO: Transformer |
|---|---|---|---|---|---|---|
| Bird | 54.35 ± 1.34 | 56.32 ± 1.33 | 55.91 ± 1.35 | 57.15 ± 1.31 | 57.03 ± 1.40 | 58.95 ± 1.41 |
| Texture | 33.13 ± 1.45 | 34.75 ± 1.41 | 34.23 ± 1.40 | 34.93 ± 1.42 | 35.97 ± 1.41 | 36.26 ± 1.33 |
| Aircraft | 52.53 ± 1.30 | 53.99 ± 1.33 | 53.15 ± 1.30 | 53.12 ± 1.27 | 54.01 ± 1.25 | 55.29 ± 1.35 |
| Fungi | 44.07 ± 1.33 | 43.15 ± 1.36 | 45.27 ± 1.35 | 43.49 ± 1.32 | 44.13 ± 1.31 | 45.24 ± 1.34 |

Table 2: Effect of different aggregation functions on _Meta-Dataset-BTAF_ under the 5-way 1-shot setting. The best-suited aggregation function for MAML is Mean, while the best-suited aggregation function for Meta-SGD is Transformer.
Figure 3: Effect of task-memory size.
Figure 2: Effect of inner-loop steps.
With LRU-EM, MAML with EMO achieves a small but consistent gain across all datasets, as it replaces the memories that are not commonly used, and these memories can usually be seen as outliers. In Table 12, with CLOCK-EM, Meta-SGD with EMO achieves better performance on all datasets. CLOCK-EM allows the network to access the memory in a systematic and efficient manner by controlling the sequence of read and write operations, which makes it more suitable for methods that require a large memory, such as Meta-SGD with EMO, which additionally stores the gradients of the inner-loop learning rate. To ensure consistency of implementation on each dataset, we choose LRU-EM for MAML with EMO and ANIL with EMO, while CLOCK-EM is used for Meta-SGD with EMO.
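A minimal sketch of the LRU-EM controller, assuming the memory maps a task key to its stored representation and gradients, is given below; FIFO-EM and CLOCK-EM would differ only in the replacement policy.

```python
# Hedged sketch of the LRU-EM controller: evict the least recently
# used entry once the task memory is full.
from collections import OrderedDict

class LRUMemory:
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.store = OrderedDict()          # task_key -> (task_repr, grads)

    def read(self, key):
        self.store.move_to_end(key)         # mark entry as recently used
        return self.store[key]

    def write(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # drop the LRU entry
        self.store[key] = value
```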
**Effect of task-memory size.** The task-memory size cannot be increased indefinitely. To study its effect, we conduct an ablation of the task-memory size on _mini_ImageNet using MAML with EMO under the 5-way 1-shot and 5-shot settings. Note that the task-memory size is the number of stored meta-training tasks. For each task, we store the model gradients and their task representation. From Figure 3, we observe that performance increases with the task-memory size. This is expected since a larger memory provides more context information. Naturally, the memory size has a greater impact in the 1-shot setting. In this case, the model updated from only one example might be insufficiently representative of the object class. Leveraging context information provided by the memory compensates for the limited number of samples. We adopt memory sizes of 100 for 1-shot and 200 for 5-shot on each dataset.
**Computational cost and storage.** We also report the extra computational cost and storage of different models with various optimizers in Table 4. Although our model requires more parameters and computational cost than the baseline, it brings a 9.14% accuracy improvement with MAML on _mini_ImageNet.
**Analysis of episodic memory.** In this experiment, we meta-train MAML and MAML with EMO on the Bird dataset and meta-test on _Meta-Dataset-BTAF_; the episodes saved in the memory are therefore only from the Bird dataset. The experiments that meta-train on the other three datasets are provided in Appendix F.4. From Figure 4, MAML with EMO unsurprisingly achieves better performance than MAML on Bird. Surprisingly, MAML with EMO also outperforms MAML on Aircraft. It might be that the two datasets contain more similar shapes (wings), so the Bird memory can still help accelerate training on the Aircraft tasks. However, when the test task has a significant distribution shift from the training tasks, e.g., Texture, EMO harms performance. We will explore in future work how to use episodic memory to address cross-domain few-shot challenges.
**Comparison with the state-of-the-art.** We first compare our method on _Meta-Dataset-BTAF_ using a Conv-4 backbone under the 5-way 1-shot setting in Table 5. In this comparison, we apply ARML (Yao et al., 2020) with EMO since ARML is the current state-of-the-art optimization-based meta-learning algorithm. Our method achieves state-of-the-art performance on each dataset under the 5-way 1-shot setting. On Texture, our model surpasses the second best method, i.e., ARML (Yao et al., 2020), by 1.6%. The better performance confirms that EMO can find the training task most similar to the test task and update the parameters so that the model converges faster and performs better. We also evaluate our method on traditional few-shot classification, in which the training and test datasets are from the same dataset.
| Model | MAML: Extra FLOPs (M) | MAML: Extra parameters (M) | MAML: Memory usage (G) | Meta-SGD: Extra FLOPs (M) | Meta-SGD: Extra parameters (M) | Meta-SGD: Memory usage (G) |
|---|---|---|---|---|---|---|
| SGD | 0 | 0 | 7.3 | 0 | 0 | 8.4 |
| Adam | 0.003 | 0.0004 | 7.7 | 0.004 | 0.0006 | 8.9 |
| EMO | 0.04 | 0.003 | 8.1 | 0.07 | 0.009 | 9.7 |

Table 4: Computational cost in FLOPs, parameters, and GPU memory usage for different optimizers in the model.
Figure 4: EMO trained only on Bird, to show that EMO also holds semantic information. MAML with EMO achieves better performance on the same dataset (Bird) and on a dataset with similar shapes (Aircraft), while it is harmful on test tasks that have significant distribution shifts (Texture and Fungi) from the training tasks.
| Dataset | FIFO-EM | CLOCK-EM | LRU-EM |
|---|---|---|---|
| Bird | 51.91 ± 1.35 | 54.01 ± 1.33 | 56.2 ± 1.33 |
| Texture | 30.11 ± 1.40 | 32.14 ± 1.41 | 34.75 ± 1.41 |
| Aircraft | 48.16 ± 1.40 | 50.91 ± 1.38 | 54.39 ± 1.33 |
| Fungi | 41.17 ± 1.35 | 43.97 ± 1.35 | 43.15 ± 1.36 |

Table 3: Effect of the memory controller for MAML with EMO. LRU-EM achieves better performance than the alternatives.
Note that the meta-training tasks of _mini_ImageNet have a slow, gradual shift in task distribution compared with the meta-test tasks. In this experiment, we again apply ARML (Yao et al., 2020) with EMO. The results show a significant improvement of 7.42%. This suggests that EMO remains effective even in the absence of task boundaries and under a slow, gradual distribution shift. The results demonstrate that optimization-based meta-learning benefits from EMO for traditional few-shot learning.
## 5 Conclusions
This paper introduces episodic memory optimization (EMO), which retains the gradient history of past tasks in an external memory. EMO accumulates long-term, general knowledge about the learning processes of past tasks, enabling it to learn new tasks quickly based on task similarity. Our experiments show that integrating EMO with several optimization-based meta-learning methods accelerates learning in all settings and datasets tested and improves their performance. We also prove that EMO with fixed-size memory converges under assumptions of strong convexity, regardless of which gradients are selected or how they are aggregated to form the update step. We conduct thorough ablation studies to demonstrate the effectiveness of the memory-augmented optimizer. Experiments on several few-shot learning datasets further substantiate the benefit of the episodic memory optimizer.
## Acknowledgment
This work is financially supported by the Inception Institute of Artificial Intelligence, the University of Amsterdam and the allowance Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy.
|
2310.18158 | Identifying heavy stellar black holes at cosmological distances with
next generation gravitational-wave observatories | We investigate the detectability of single-event coalescing black hole
binaries with total mass of $100-600 M_{\odot}$ at cosmological distances ($5
\lesssim z \lesssim 20$) with the next generation of terrestrial gravitational
wave observatories, specifically Einstein Telescope and Cosmic Explorer. Our
ability to observe these binaries is limited by the low-frequency performance
of the detectors. Higher-order multipoles of the gravitational wave signal are
observable in these systems, and detection of such multipoles serves to both
extend the mass range over which black hole binaries are observable and improve the
recovery of their individual masses and redshift. For high redshift systems of
$\sim 200 M_{\odot}$ we will be able to confidently infer that the redshift is
at least $z=12$, and for systems of $\sim 400 M_{\odot}$ we can infer a minimum
redshift of at least $z=8$. We discuss the impact that these observations will
have in narrowing uncertainties on the existence of the pair-instability
mass-gap, and their implications on the formation of the first stellar black
holes that could be seeds for the growth of supermassive black holes powering
high-$z$ quasars. | Stephen Fairhurst, Cameron Mills, Monica Colpi, Raffaella Schneider, Alberto Sesana, Alessandro Trinca, Rosa Valiante | 2023-10-27T14:05:53Z | http://arxiv.org/abs/2310.18158v1 | # Identifying heavy stellar black holes at cosmological distances with next generation gravitational-wave observatories
###### Abstract
We investigate the detectability of single-event coalescing black hole binaries with total mass of \(100-600\,\mathrm{M}_{\odot}\) at cosmological distances (\(5\lesssim z\lesssim 20\)) with the next generation of terrestrial gravitational wave observatories, specifically Einstein Telescope and Cosmic Explorer. Our ability to observe these binaries is limited by the low-frequency performance of the detectors. Higher-order multipoles of the gravitational wave signal are observable in these systems, and detection of such multipoles serves to both extend the mass range over which black hole binaries are observable and improve the recovery of their individual masses and redshift. For high redshift systems of \(\sim 200\,\mathrm{M}_{\odot}\) we will be able to confidently infer that the redshift is at least \(z=12\), and for systems of \(\sim 400\,\mathrm{M}_{\odot}\) we can infer a minimum redshift of at least \(z=8\). We discuss the impact that these observations will have in narrowing uncertainties on the existence of the _pair-instability mass-gap_, and their implications on the formation of the first stellar black holes that could be _seeds_ for the growth of supermassive black holes powering high-\(z\) quasars.
## 1 Introduction
In the three observing runs by the LIGO-Virgo-KAGRA (LVK) Collaboration, many tens of gravitational-wave (GW) transient signals consistent with the merger of binary black holes (BBHs) have been detected (Abbott et al., 2019; The LIGO Scientific Collaboration et al., 2021, 2021). The majority of them, observed at \(z\lesssim 1\), are found to have one or both component masses between \(20\,\mathrm{M}_{\odot}\) and \(50\,\mathrm{M}_{\odot}\) and total mass \(\lesssim 80\,\mathrm{M}_{\odot}\).1 A variety of channels have been proposed for their origin: formation as field binaries primarily in low-metallicity galaxies at high redshifts, formation in dense stellar systems, in active galactic nucleus (AGN) discs, or after generations of repeated mergers (see Graziani et al., 2020; Mapelli, 2021; Gerosa and Fishbach, 2021; Mandel and Farmer, 2022, and references therein).
Footnote 1: All masses in this paper are quoted as measured in the source frame.
The next-generation of ground-based GW observatories, specifically the Einstein Telescope (ET, Punturo et al., 2014; Branchesi et al., 2023) and Cosmic Explorer (CE, Evans et al., 2021), will open the prospect of detecting the GW signatures of merging BBHs over a wider mass range and deeper redshifts, extending the realm of observations to BBHs out to \(z\sim 30\), when the first stars began to shine, and into the intermediate-mass range \(\mathcal{O}(100-1000)\,\mathrm{M}_{\odot}\)(Kalogera et al., 2019; Maggiore et al., 2020). Beyond redshift \(z\sim 30-40\), merging primordial black holes of \(\mathcal{O}(10)\,\mathrm{M}_{\odot}\), formed by quantum processes in the early Universe (Carr et al., 2021; Ng et al., 2022), may also be detected and studied (Ng et al., 2022, 2023).
In this paper we study systems with individual masses extending from 60 to \(480\,\mathrm{M}_{\odot}\), and total masses of \(180\,\mathrm{M}_{\odot}\) to \(600\,\mathrm{M}_{\odot}\), covering a mass range that is relevant for several reasons, as we explore below. Measuring such systems is most interesting at cosmological distances, which is only possible due to the enhanced sensitivity of ET and CE at frequencies below 10 Hz.
The formation of such heavy stellar-mass black holes (BHs) requires the presence of star forming regions of zero or extremely low metallicity, where fragmentation and cooling of the parent gas cloud, and mass loss from stellar winds are strongly suppressed (see Bromm, 2013; Spera and Mapelli, 2017, and references therein). These are conditions that occur in the high-redshift Universe, and are expected to result in a top-heavy mass function where stars heavier than \(150-300\,\mathrm{M}_{\odot}\) are more common than in the conventionally adopted, Kroupa-like, stellar initial mass function (IMF) (see Klessen and Glover, 2023 for a recent review on the first stars). At the highest redshifts, these heavy BHs may represent systems not yet affected by accretion of surrounding gas (Roupas and Kazanas, 2019; Safarzadeh and Haiman, 2020; van Son et al., 2020), and hence their masses reflect their birth conditions. Detecting GWs from these heavy stellar-mass binaries will let us constrain their merger rate which is intimately related to the rate of formation of massive stellar binary systems in pristine star forming galaxies (Stacy and Bromm, 2013; Chon and Hosokawa, 2019; Sugimura et al., 2023).
Some of the BBH masses we investigate reside within the so-called _pair-instability mass-gap_ (often referred to as the upper-mass gap or Pair Instability Supernova (PISN) gap). This gap is between about \(65\,\mathrm{M}_{\odot}\) and \(135\,\mathrm{M}_{\odot}\), where no BH is expected to form in evolution models of isolated stars. This mass-gap is attributed to a pair-instability, arising in metal-poor, massive stars between about \(130\,\mathrm{M}_{\odot}\) and \(250\,\mathrm{M}_{\odot}\), which leads to a supernova explosion due to uncontrolled \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) nuclear burning, leaving no remnant (Farmer et al., 2020; Woosley and Heger, 2021). During the third observing run of LVK a short-duration signal, GW190521, was detected and estimated to be consistent with the merger of two BHs with component masses of about \(85\,\mathrm{M}_{\odot}\) and \(66\,\mathrm{M}_{\odot}\) and total mass of \(142\,\mathrm{M}_{\odot}\) (Abbott et al., 2020, 2020). This is the heaviest BH observed in GWs to date, with an _intermediate-mass_ remnant and a primary component residing within the _pair-instability mass-gap_.2
Footnote 2: Alternative analyses (e.g. Nitz and Capano, 2021) find that GW190521 could instead be a \(\sim 170\,\mathrm{M}_{\odot}\) black hole with a companion of \(\sim 20\,\mathrm{M}_{\odot}\), suggesting that the primary is already an intermediate-mass BH, with a mass beyond the mass gap (Fishbach and Holz, 2020).
Detecting the GW signal from high-redshift heavy stellar BBH mergers, where one or both components are in the upper-mass gap or straddling it, would be highly informative. Various mechanisms could lead to the formation and coalescence of such binaries, and among them, evolution in isolated binaries, dynamical encounters in star clusters or a chain of Nth generation mergers.3 But at redshifts as high as \(\sim 10\) the contributions from the two dynamical channels appear to be negligible (Mapelli et al., 2022). Consequently, observing these systems at high \(z\) would allow us to better probe the physics of the isolated binary channel, the potential existence of an upper-mass gap and its imprint on the mass function of the earliest stellar BHs.
Footnote 3: There are several proposed channels for the origin of the components of GW190521-like systems observed at low redshift. For instance: mergers from the relics of the first stars (known as Population III stars) (Liu and Bromm, 2020; Tanikawa et al., 2021), isolated binary evolution with no-gap (Belczynski, 2020), stellar collisions in star clusters (see Costa et al., 2022; Arca-Sedda et al., 2021; Di Carlo et al., 2020; Renzo et al., 2020, and references therein), and hierarchical mergers (Gerosa and Fishbach, 2021).
Estimates of the location and width of the upper-mass gap are at best approximate. Current uncertainties on the reaction rate, on rotation, and on the presence of rich hydrogen envelopes may shift the instability interval for the explosion to higher masses, and further narrow the gap (Farmer et al., 2020; Belczynski, 2020; Woosley and Heger, 2021; Marchant and Moriya, 2020; Vink et al., 2021) or even fill it entirely (Costa et al., 2021). Testing the existence of this upper-mass gap and inferring its properties from GW observations depends critically upon the accuracy with which the masses of the individual BHs are measured from the merger signal. For this purpose, we carry out parameter estimation on BBHs with component masses which touch the _edges_ of the upper-mass gap, recognizing that all of the above arguments become compelling if the redshift of the observed systems is \(z\gtrsim 10\).
Determining the _lowest_ redshift one can claim the source to be _beyond_, and inferring posteriors for the distribution of the component masses is of paramount importance (Mancarella et al., 2023). A key challenge, then, is to accurately infer both the masses and redshift of the binary. There is a well-known degeneracy between measurements of the distance to and inclination of a binary from GW observations (Usman et al., 2019). At high redshifts, this degeneracy further impacts our ability to infer the masses of the binary. In GW observations, masses and redshift are degenerate, and only the redshifted masses, \(m_{1,2}(1+z)\), can be inferred from the signal. Given a cosmological model, the measured distance can be used to obtain the redshift and hence the source masses. However, if the distance is poorly constrained, this leads to significant uncertainties on the mass. For example, it is not unusual to have an uncertainty of \(\sim 50\%\) in the distance measurement of BBH signals (The LIGO Scientific Collaboration et al.
2021b). At \(z=10\) this translates to a redshift uncertainty of \(\pm 4\) and consequently an uncertainty in the masses of 40% due to redshift effects alone. The ability to accurately infer redshifts and masses is improved by a detector network, which can provide more accurate localization and distance measurements (Fairhurst, 2011; Mills et al., 2018; Singer et al., 2014), as well as by the observation of higher-order multipoles (HoMs) in the GW signal, which help break the distance-inclination degeneracy (Mills and Fairhurst, 2021; Fairhurst et al., 2023).
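To illustrate the scale of this effect, the short sketch below converts luminosity-distance samples into redshift and source-frame mass; the Planck18 cosmology and the detector-frame mass (chosen to represent a \(180\,\mathrm{M}_{\odot}\) source near \(z=10\)) are our illustrative assumptions.

```python
# Hedged sketch: how a ~50% distance uncertainty maps into redshift and
# source-frame mass for a binary near z = 10 (Planck18 is an assumption).
from astropy.cosmology import Planck18, z_at_value
import astropy.units as u

m_det = 1980.0                     # illustrative detector-frame mass [M_sun]
for d_L in (50e3, 100e3, 150e3):   # Mpc: roughly +/-50% around ~100 Gpc
    z = float(z_at_value(Planck18.luminosity_distance, d_L * u.Mpc))
    print(f"d_L = {d_L / 1e3:4.0f} Gpc -> z = {z:4.1f}, "
          f"m_src = {m_det / (1 + z):4.0f} M_sun")
```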
The paper is organized as follows. In Section 2, we discuss the observability of high-mass, high-redshift binaries with a focus on the HoMs. In Section 3, we provide detailed parameter estimation results for a number of astrophysically interesting simulated BBH merger signals and in Section 4 we summarize our results. We include two appendices. Appendix A provides additional figures showing detector sensitivity for binaries of varying mass ratio and Appendix B gives parameter estimation accuracy for low signal-to-noise ratio (SNR) systems.
## 2 The importance of higher order multipoles
High-mass, high-\(z\) BBH coalescences are intrinsically low-frequency GW sources. This is illustrated in Fig. 1, where we show the frequency evolution of the GW strain amplitude for a BBH of (120-60) \(\mathrm{M}_{\odot}\) placed at redshift \(z=14\), with an inclination \(\iota\) of 60\({}^{\circ}\) between the orbital angular momentum and the line of sight. The gravitational waveform for this signal only extends to 15 Hz and is therefore outside the sensitive frequency range of current GW observatories. The leading GW emission from the source, emitted in the (2, 2) multipole at twice the orbital frequency, extends to only 7 Hz in the detector, making discovery challenging.4 Although the (3, 3) and (4, 4) multipoles are intrinsically lower amplitude, they extend to higher frequencies (\(\sim\)1.5 and 2 times the frequency of the (2, 2)) and can therefore contribute significantly to the observed SNR. This improves the prospects of detecting such a system. Furthermore, the identification of these higher-order (higher-frequency) multipoles in the signal can significantly improve the ability to infer the parameters of the system, as they enable us to break measurement degeneracies that exist with observation of only a single multipole.
Footnote 4: We recall that for all BBH observed to date, the (2, 2) multipole, which is emitted at twice the orbital frequency, has been the dominant multipole detected in the GW signal. Additional multipoles of the GW signal have been observed for a handful of events (Abbott et al., 2020, 2020) but, as their amplitudes are lower, they are generally not identified for the majority of sources.
There are several well-known degeneracies in the emitted gravitational waveform, leading to some parameters being very well measured while others are not. For our purposes, we are most concerned with the degeneracy between the observed distance to and inclination of a binary, as discussed in Usman et al. (2019). When only the (2, 2) multipole is observed, the amplitude gives a good measurement of \(\cos\iota/d_{L}\), where \(\iota\) is the binary inclination and \(d_{L}\) is the luminosity distance. However, in many cases, the binary inclination is only restricted to the range \(\iota\in[0^{\circ},60^{\circ}]\), leading to a factor of two uncertainty in distance due to this degeneracy alone. When the binary is observed at a high redshift, the measurement of the masses also becomes degenerate with distance and inclination, and a factor of two uncertainty in distance can lead to a similar uncertainty on the masses. The observation of a second GW multipole can serve to break this degeneracy (Abbott et al., 2020; Mills and Fairhurst, 2021), as the relative amplitude of the different multipoles depends upon the orientation of the binary5.
Footnote 5: The ratio of the amplitude of the (3, 3) multipole to the (2, 2) scales as \(\sin\iota\) while the (4, 4) multipole scales as \(\sin^{2}\iota\) relative to the (2, 2).
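The leading-order scalings of footnote 5 can be tabulated directly; this is a sketch of the relative multipole amplitudes only, not a full waveform calculation.

```python
# Hedged sketch of the inclination scalings in footnote 5: relative to
# (2,2), the (3,3) amplitude ~ sin(iota) and (4,4) ~ sin^2(iota), so the
# HoM power vanishes for a face-on binary.
import numpy as np

for iota_deg in (0, 30, 60, 90):
    s = np.sin(np.radians(iota_deg))
    print(f"iota = {iota_deg:2d} deg: (3,3)/(2,2) ~ {s:.2f}, "
          f"(4,4)/(2,2) ~ {s**2:.2f}")
```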
In Fig. 2, we show the variation of the SNR with binary mass ratio \(q=m_{1}/m_{2}\) (assuming an inclination
Figure 1: Waveform amplitude from a (120,60) \(\mathrm{M}_{\odot}\) binary at \(z=14\), inclined at an angle of 60\({}^{\circ}\) to the line of sight. The figure shows the frequency evolution of the waveform, and also the individual contributions from the three leading multipoles (2, 2), (3, 3) and (4, 4) using the IMRPhenomXPHM waveform (Pratten et al., 2020). The oscillations in the full waveform are due to constructive/destructive interference between the multipoles. The projected noise curves for three future detectors: Cosmic Explorer, Einstein Telescope and A\({}^{\#}\) are also shown. The signal is represented in such a way that the area between the wave amplitude and the noise is indicative of the SNR (Abbott et al., 2016).
\(\iota=60^{\circ}\)) and inclination (assuming \(q=2\)) in each of the multipoles for a binary of total mass of \(180\,\mathrm{M}_{\odot}\) at \(z=14\) observed by ET. The SNR of the (2, 2) multipole is greatest for face-on signals (\(\iota=0^{\circ}\)) with equal mass components (\(q=1\)). For a face-on signal, the (2, 2) multipole is circularly polarized and, as the inclination increases, the amplitude of both polarizations decreases to a minimum for edge-on systems, \(\iota=90^{\circ}\), whose emission is linearly polarized. For the other multipoles considered, the SNR vanishes at face-on and peaks at \(\sim 50^{\circ}\) for the (3, 3) multipole and \(\sim 60^{\circ}\) for the (4, 4) multipole. The binary would be observable in ET at any orientation. For inclinations \(\iota\gtrsim 10^{\circ}\) or \(30^{\circ}\) the (3, 3) and (4, 4) multipoles would be identifiable, respectively. Since this waveform lasts only a few cycles in the detector band, the contributions from the different multipoles are not orthogonal. Consequently, the total SNR varies with the merger phase of the binary.
The SNR of each different multipole, and the full signal, also varies with mass ratio. The (2, 2) multipole is largest for equal mass systems and decreases by a factor of two by mass ratio \(q=5\), while the (3, 3) vanishes for equal mass and peaks around \(q=3\). For this signal, the SNR in the (4, 4) multipole does not vary significantly with mass ratio. The (2, 2) and (4, 4) multipoles would be identifiable at any mass ratio, and the (3, 3) for binaries with mass ratio above \(\sim\)1.2. Identification of more than one multipole enables an improved measurement of mass ratio, as well as binary orientation.
In Fig. 3, we show the same dependence of SNR on inclination and mass ratio for the CE detector. Since CE has sensitivity to the signal above 5 Hz, rather than 3 Hz for ET, the overall SNR is lower and the signal would be marginally observable. Furthermore, a broad range both in inclination and mass ratio, where the (3,3) multipole gives the dominant contribution to the SNR, becomes accessible. This provides a clear example of a signal where the HoMs enable detection as well as improved parameter recovery.
Given the above discussion, we are interested in identifying the regions of the mass space where HoMs can contribute to either the observability or parameter measurement accuracy of high-mass, high-redshift binaries. In Fig. 4 we show the sensitivity of the proposed ET and CE observatories to BBH mergers with mass ratio of 2 as a function of redshift. We show the maximum redshift at which a binary can be observed, at an SNR of 8, and also the redshifts at which 10%, 50% and 90% of binaries, averaged over sky location and orientation, will be observed. The detector sensitivity is shown for both the (2, 2) multipole, in orange, and the full waveform, in blue. At low masses, the (2, 2) multipole dominates the observable signal and therefore the distance to which the full waveform can be observed is essentially equal to that of the (2, 2) multipole. However, at high masses,
Figure 2: The total SNR and SNR in each multipole in ET for a binary of total mass of \(180\,\mathrm{M}_{\odot}\), at redshift \(z=14\) and overhead the detector. _Left_: Variation of SNR with inclination of the binary, for a BBH of \((120,60)\,\mathrm{M}_{\odot}\) at the fixed redshift. _Right_: Variation of SNR with mass ratio, for a \(180\,\mathrm{M}_{\odot}\) binary with inclination of \(60^{\circ}\) at the fixed redshift. The total SNR for the system is computed using the (2, 2), (2, 1), (3, 3), (3, 2) and (4, 4) multipoles and individual SNRs for the (2, 2), (3, 3) and (4, 4) multipoles are shown. At \(\iota=0\) in the left panel the full waveform SNR is higher than the (2,2) mode due to the contribution of the (2, 1) and (3, 2) modes, which are not shown. The total SNR depends upon the phase of the signal at merger, due to non-zero overlap between multipoles, and is therefore shown as a shaded blue area. In both panels, horizontal lines indicate \(\mathrm{SNR}=8\), an approximate threshold for detection, and \(\mathrm{SNR}=3\), an approximate threshold for observation of a second multipole, given that another multipole has been observed (Mills & Fairhurst, 2021).
the (3, 3) and (4, 4) multipoles contribute more significantly and incorporating them increases the sensitivity of the detectors to these systems. When a system has been observed, the identification of a second multipole, at SNR above 3, can greatly improve parameter recovery by breaking degeneracies between distance and inclination and improving mass ratio measurement. The range of masses and redshifts for which the binary would be observed with SNR above 8, and with SNR above 3 in at least two multipoles, is shown in black in Fig. 4. For example, in ET a 4,000 M\({}_{\odot}\) system is visible at \(z\approx 1\) with the (2, 2) multipole but up to \(z\approx 2\) with the full waveform.
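An estimate of this kind can be sketched with PyCBC, as below. Everything here is an illustrative assumption rather than the pipeline used for Fig. 4: the analytic ET noise curve, the per-multipole `mode_array` selection (as exposed by recent PyCBC releases), the approximate luminosity distance of \(\sim 154\) Gpc at \(z=14\), and the use of the plus polarization alone for the optimal SNR.

```python
# Hedged sketch of a per-multipole optimal-SNR estimate in ET for the
# (120, 60) M_sun binary at z = 14 (distance and settings are assumptions).
import numpy as np
from pycbc.waveform import get_fd_waveform
from pycbc.psd import EinsteinTelescopeP1600143
from pycbc.filter import sigma

z, m1, m2 = 14.0, 120.0, 60.0
d_L = 154e3                  # Mpc, approximate luminosity distance at z = 14
delta_f, f_low = 1.0 / 16, 3.0
psd = EinsteinTelescopeP1600143(int(64 / delta_f) + 1, delta_f, f_low)

for mode in ([2, 2], [3, 3], [4, 4]):
    # generate a single multipole with redshifted (detector-frame) masses
    hp, _ = get_fd_waveform(approximant="IMRPhenomXHM",
                            mass1=m1 * (1 + z), mass2=m2 * (1 + z),
                            distance=d_L, inclination=np.pi / 3,
                            mode_array=[mode],
                            delta_f=delta_f, f_lower=f_low)
    hp.resize(len(psd))
    print(mode, sigma(hp, psd=psd, low_frequency_cutoff=f_low))
```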
Remarkably, for the majority of binaries with \(M\gtrsim 100\) M\({}_{\odot}\) observed by ET, and \(M\gtrsim 30\) M\({}_{\odot}\) observed by CE, a second multipole will be observable. At lower masses, it is the (2, 2) and (3, 3) multipoles which contain most power, while at high masses it is the (3, 3) and (4, 4) multipoles that are observed, with the (2, 2) multipole power occurring at frequencies below the instrumental sensitivity. The picture is similar at different mass ratios, and figures showing the sensitivity to binaries with \(q=1,2,4\) and \(10\) are provided in Appendix A, for ET in Fig. 11 and CE in Fig. 12. The most significant difference occurs for equal mass binaries, where the (3, 3) multipole vanishes and we therefore require both (2, 2) and (4, 4) multipoles to be observable. This limits
Figure 4: Maximum observable redshift of binaries with mass ratio \(q=2\) with next-generation observatories. _Left_: ET, _Right_: CE. Shaded regions show redshifts at which 10%, 50% and 90% of sources are observable. The maximum observable redshift of the (2, 2) multipole at SNR = 8 is shown in orange, the observability of the full waveform at SNR = 8 is shown in blue, and the observability of the second most significant multipole above SNR = 3 is shown in black. The red asterisks represent the masses and redshifts of the \(q=2\) systems we study in detail in Section 3.
Figure 3: The same as Fig. 2 but for Cosmic Explorer (CE).
the range for which two multipoles can be seen and increases the minimum mass at which we expect to observe two multipoles to \(\sim 200\,\mathrm{M}_{\odot}\) in ET and \(\sim 50\,\mathrm{M}_{\odot}\) for CE. Nonetheless, for the majority of high-mass, high-redshift binaries, we expect to observe multiple multipoles, and therefore obtain good estimates of both the masses and redshift of the system. In the next section, we investigate those expectations in detail through parameter recovery of a series of systems.
## 3 Parameter recovery for high-mass, high-redshift binaries
Observation in a single GW observatory leads to large uncertainties in the sky location of the binary (Singer et al., 2014), and this is again degenerate with the inferred distance and redshift. A network of detectors with comparable sensitivity can provide accurate localization (Mills et al., 2018) and therefore improved redshift and mass accuracy. Binaries with black hole spins misaligned with the orbital angular momentum will precess. In principle, the observation of precession can further improve parameter estimates. However, given that so few cycles of the waveform are visible in the detectors, the prospects for observing precession are slim (Green et al., 2020). Therefore, in what follows we neglect precession effects.
To illustrate the expected performance of a next-generation GW network in observing and measuring these binaries, we perform a number of simulations and obtain parameter estimates with the LALInference (Veitch et al., 2015) package and a uniform-in-comoving-volume distance prior. We simulate four different binary mass combinations, denoting (in the source frame) with \(m_{1}\) (\(m_{2}\)) the primary (secondary) mass and with \(M\) the total mass. We consider \((120,60)\,\mathrm{M}_{\odot}\) and \((90,90)\,\mathrm{M}_{\odot}\) binaries, chosen so that the component BHs lie in, or close to, the upper mass-gap, and \((240,120)\,\mathrm{M}_{\odot}\) and \((480,120)\,\mathrm{M}_{\odot}\) binaries chosen to probe the observability of high-redshift intermediate-mass black holes (IMBHs) in binaries. In all cases, we simulate quasi-circular non-spinning BBHs, but allow for non-zero, aligned spins when performing parameter estimation6. This is important as the degeneracy between the binary mass ratio and BH spins (Hannam et al., 2013; Fairhurst et al., 2023) greatly impacts the accuracy with which mass ratio can be measured.
Footnote 6: The restriction to non-spinning BBH is solely to simplify presentation — all results presented here could be easily extended to aligned-spin BHs.
The simulated signals are added to data from a three-detector network of observatories with sensitivity matching ET (Punturo et al., 2014) and CE (Evans et al., 2021). Specifically, we use a single, triangular ET detector located in Europe and two 40 km CE observatories, one in the US and one in India. The simulations are performed at the optimal sky location for the network. Given the greater low-frequency sensitivity of ET, this leads to the binaries being essentially overhead ET. The signals are generated at varying inclination angle, to enable us to investigate the importance of HoMs. We choose the redshift of the sources to ensure a fixed SNR for all signals. In the main text, we use an SNR of 30, while in Appendix B we investigate quieter signals with an SNR of 15.
### Observing mass-gap objects
A mass gap in the BH mass distribution is expected due to the presence of the PISN. Farmer et al. (2020) investigated the location of this pair-instability region as a function of the temperature-dependent uncertainty in the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate. Determining the value of the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate is extremely important for tracing the evolution of massive stars. Thus, restricting this rate through GW observations would be of considerable astrophysical interest. According to Farmer et al. (2020), the width of the mass-gap remains roughly constant as a function of the (unknown) reaction rate, but the mass-range where no black hole can form varies. At the lowest rate relative to the median, the mass-gap extends from \(\sim 90\,\mathrm{M}_{\odot}\) to \(\sim 175\,\mathrm{M}_{\odot}\). At the highest rate, the location of the mass-gap is between \(\sim 60\,\mathrm{M}_{\odot}\) and \(\sim 120\,\mathrm{M}_{\odot}\). Interestingly, there exists a region of BH masses between \(90\,\mathrm{M}_{\odot}\) and \(120\,\mathrm{M}_{\odot}\) where we should not expect any black hole to form, _for any rate_. We refer to this region as the _forbidden strip_. Consequently, we choose to investigate systems which host at least one member with mass touching this narrow strip. Then, if their masses were to be determined with sufficient accuracy, their detection could constrain the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate to be at the extreme of the allowed range (see Fig. 5 of Farmer et al., 2020). In particular, we focus on \((120,60)\,\mathrm{M}_{\odot}\) and \((90,90)\,\mathrm{M}_{\odot}\) binaries which have components at the lower and upper range of the forbidden strip.
As seen in Fig. 4, BBHs with masses \((120,60)\,\mathrm{M}_{\odot}\) will be detectable at a maximum redshift of \(z\sim 25\), for an optimally located and oriented system, with 50% of mergers at \(z\sim 17\) and the vast majority of events at \(z\sim 10\) being detectable. The sensitivity to \((90,90)\,\mathrm{M}_{\odot}\) systems is comparable. If \((90,90)\,\mathrm{M}_{\odot}\) systems were to be observed, this will allow us to constrain the strength of the uncertain \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate. In particular, black holes with such masses would imply the rate
to be at the lower end of the explored range. A binary with mass \((120,60)\,\mathrm{M}_{\odot}\) would be challenging to form through stellar evolution. Specifically, allowing for variation in the reaction rate, the mass of the primary would require a very high reaction rate for \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\), while the mass of the secondary would be compatible with a value below the median. Therefore this would be a system where only one of the two black holes could originate from stellar evolution and the other would require a different formation channel.
In Figs. 5 and 6, we show the recovered parameter accuracies for both mass and redshift for \((90,90)\,\mathrm{M}_{\odot}\) and \((120,60)\,\mathrm{M}_{\odot}\) binaries observed with \(\mathrm{SNR}=30\). The first thing to note is that these high-mass, high-redshift systems could be identified with good accuracy by the next-generation GW network, as would be expected due to the relatively large SNRs. For all events, there is, at most, a factor of two uncertainty in the mass of the systems and a 50% uncertainty in the redshift, with both numbers quoted at the 90% confidence interval. However, we also notice a substantial variation in the accuracy of parameter measurement between the systems. The parameters of systems close to face-on (\(\iota=0^{\circ}\) or \(30^{\circ}\)) are recovered with significantly larger uncertainties than those which are more highly inclined (\(\iota=60^{\circ}\) or \(90^{\circ}\)). When the binary is close to face-on, the uncertainty in the component mass posterior is much greater and, for \(\iota=0^{\circ}\), the 90% mass region for the \((120,60)\,\mathrm{M}_{\odot}\) binary includes equal masses. As the inclination of the binary is increased, parameter accuracy improves significantly: already at \(\iota=30^{\circ}\) the posterior for the \((120,60)\,\mathrm{M}_{\odot}\) binary is inconsistent with equal masses, although large uncertainty in the mass ratio remains. For binaries inclined at \(60^{\circ}\) or \(90^{\circ}\) the parameter accuracy is excellent. In both cases, the mass ratio is very well constrained and uncertainty in total mass and redshift is \(\pm(10-20)\%\).
We can explain _why_ our parameter-estimation results display the features they do, using our understanding of the importance of HoMs discussed in Section 2. Let us begin by considering the \((120,60)\,\mathrm{M}_{\odot}\) system at \(z=14\), inclined at \(\iota=60^{\circ}\), that we have plotted in Figs. 1, 2 and 3. The signal is observed with high SNR in ET and the (3, 3) multipole is also clearly observed. In CE, the total SNR of the system is sufficient for it to be observed, with the (3, 3) multipole providing the dominant contribution. Since the event is observed in three detectors, it is relatively well localized,7 with a 90% localization area of \(300\,\mathrm{deg}^{2}\). The observation of HoMs in the waveform enables the accurate inference of both the binary inclination and mass ratio. Since the sky location, mass ratio and inclination are well measured, this enables accurate inference of the distance to the binary and consequently the redshift of the source. In the top panel of Fig. 7 we show the recovered values for the redshift, binary inclination and mass ratio. All three are recovered with an accuracy better than 10%.
Footnote 7: Since these are very low-frequency systems, the localization is poorer than for events in GWTC-3 (The LIGO Scientific Collaboration et al., 2021) as the localization depends upon the frequency bandwidth of the signal (Fairhurst, 2009).
Next, we consider a comparable system observed face-on (\(\iota=0^{\circ}\)) at the same SNR, and at a redshift of 17. In that case, the power in both the (3, 3) and (4, 4) multipoles vanishes. Consequently, the binary is no longer observable in CE as it has an SNR of 1.8 At this SNR, the CE observatories are unable to provide localization of the source, with a 90% localization area of \(10,000\,\mathrm{deg}^{2}\). Furthermore, the vanishing HoMs mean that only the (2, 2) multipole is observed in ET. Therefore, it is not possible to break the degeneracy between binary orientation and distance, nor to place a tight constraint upon the mass ratio. The bottom panel of Fig. 7 shows the recovered redshift, mass ratio and inclination for this system. The mass ratio _is not_ accurately recovered and, indeed, the binary is inferred to most likely be (close-to) equal mass, although the distribution does extend to \(1/q=0.5\). In addition, the binary orientation is not accurately recovered, with a broad distribution of \(\iota\lesssim 25^{\circ}\) -- more inclined systems are excluded as they would have observable power in the (4, 4) multipole. The mass ratio-inclination distribution does show a secondary peak close to the simulated value (\(1/q\approx 0.5\) and \(\iota<10^{\circ}\)); however, the preference is for an equal mass system. Despite both mass ratio and inclination being offset from the true values, the inferred redshift matches well with the simulated value. However, due to the uncertainties in other parameters, the redshift uncertainty is now close to 25%. The comparison of parameter accuracy for these two systems highlights the importance of both a network of detectors and also observability of the HoMs in accurate inference of binary properties.
Footnote 8: The waveform shown in Fig. 3 corresponds to a binary at \(z=14\) and we are now considering the same mass binary at \(z=17\). The larger redshift reduces the SNR primarily through redshifting the signal which lowers the frequency by 20%. This leads to an SNR which is lower than that shown in the figure.
It is worth noting that our intuition from current GW observations that the majority of sources are close
Figure 5: Inferred posteriors of component masses for binaries of total mass of \(180\,\mathrm{M}_{\odot}\) which straddle, or sit within, the upper mass gap. Results are shown for inclinations of \(\iota=0^{\circ},30^{\circ},60^{\circ}\) and \(90^{\circ}\), with the redshift of the system varied, as indicated in the labels, so that the signals are all simulated with an SNR of 30 in the ET–CE network, at the most sensitive sky location for the network. The grey region denotes the pair-instability mass-gap and the darker grey denotes the forbidden strip where no black hole is expected to form for any value of the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate (Farmer et al., 2020). Simulated values are denoted by a black cross and contours show the 90% credible region. _Left_: Binary with masses just below the mass gap: \(m_{1}=m_{2}=90\,\mathrm{M}_{\odot}\). _Right_: Binary with masses that straddle the mass gap: \(m_{1}=120\,\mathrm{M}_{\odot}\) and \(m_{2}=60\,\mathrm{M}_{\odot}\). The Greek letter \(\rho\) indicates the SNR throughout the paper.
Figure 6: Inferred posteriors for the redshift for binaries of total mass of \(180\,\mathrm{M}_{\odot}\) which straddle, or sit within the upper mass gap. Signals are simulated with varying inclination (in degrees) and redshift (\(z\)), as indicated in the labels, with a fixed SNR of 30 in the ET–CE network, at the most sensitive sky location for the network. Symbols \(\rho_{\ell m}\) indicate the SNR in the \((\ell,m)\) multipole HoMs. Black vertical lines indicate the 90% credible intervals, violins show the 99.5% range and yellow vertical lines show simulated values. _Left_: Binary with masses just below the mass gap: \(m_{1}=m_{2}=90\,\mathrm{M}_{\odot}\). _Right_: Binary with masses that straddle the mass gap: \(m_{1}=120\,\mathrm{M}_{\odot}\) and \(m_{2}=60\,\mathrm{M}_{\odot}\).
to face-on (or face-off) no longer holds in the next-generation network (Vitale, 2016). In the nearby Universe, where sources are approximately uniform in volume, a signal observed with a given SNR is most likely to originate from a distant binary which is close to face-on (or face off) as the number of sources increases as \(d_{L}^{2}\). For a high-redshift source, whose redshift is past the peak of the redshift distribution -- likely around a redshift of \(z\approx 2\) at which star formation peaks -- this is no longer the case. Now, the most likely origin is from a binary which is at lower redshift, where the intrinsic rate is higher, and is either poorly oriented or from a region of the sky where the detectors have lower sensitivity. Thus, the results from sources inclined at \(60^{\circ}\) and \(90^{\circ}\) are more typical of the observed population.
Let us return to the implications for probing the location of the Pair Instability Supernova (PISN) mass-gap. For both mass pairs, binaries inclined at \(60^{\circ}\) or \(90^{\circ}\) are those which provide the best mass measurements. For the \((90,90)\,\mathrm{M}_{\odot}\) system, we have \(m_{1}\) and \(m_{2}\) measured in the interval \(\in[70,100]\,\mathrm{M}_{\odot}\). So, this system is consistent with both components lying below the mass-gap provided the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) rate is low. For the \((120,60)\,\mathrm{M}_{\odot}\) system, we have \(m_{1}\in[100,140]\,\mathrm{M}_{\odot}\) and \(m_{2}\in[40,80]\,\mathrm{M}_{\odot}\). The masses are consistent with one above and one below the gap, provided the reaction rate is high. If _both_ signals were observed, this would be inconsistent with our current understanding of the PISN mass gap.
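Statements of this kind can be quantified directly from posterior samples; in the sketch below, an illustrative Gaussian posterior stands in for the actual parameter-estimation output.

```python
# Hedged sketch: fraction of primary-mass posterior samples inside the
# 'forbidden strip' (90-120 M_sun); the Gaussian samples are illustrative
# stand-ins for the LALInference output, not real results.
import numpy as np

rng = np.random.default_rng(0)
m1_samples = rng.normal(120.0, 12.0, 10_000)
in_strip = np.mean((m1_samples > 90.0) & (m1_samples < 120.0))
print(f"P(90 < m1 / Msun < 120) = {in_strip:.2f}")
```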
To investigate the observability at even higher redshifts, we have simulated a second set of signals, with the same masses and inclinations but with a lower SNR fixed at 15. For these systems, the redshifts range from \(z\gtrsim 20\) for face-on systems to \(z\approx 15\) for edge-on systems. Broadly, the results are consistent with those in Figs. 5 and 6, but with larger uncertainties due to the lower signal amplitude. In particular, for all but the face-on systems, we are able to clearly identify that the \((120,60)\,\mathrm{M}_{\odot}\) binary is of unequal mass, due to the observed power in HoMs. For the inclined systems, the uncertainty in total mass and redshift is around a factor of two (from 150 to \(300\,\mathrm{M}_{\odot}\) and \(z=12\) to 25). Thus, while it is possible to identify these systems as unambiguously high-mass and high-redshift sources, the uncertainties in masses and redshifts make it difficult to perform precision astrophysics. For the \((90,90)\,\mathrm{M}_{\odot}\) system, it is only at \(\iota=60^{\circ}\) or \(90^{\circ}\) that the parameters are well recovered. These results are shown in Figs. 13 and 14 in Appendix B.
For the face-on systems, we see an interesting feature whereby the binary can be mistaken for a different system with very different properties. Fig. 8 shows the inferred redshift and redshifted mass, \(M(1+z)\), distributions for a \((120,60)\,\mathrm{M}_{\odot}\) system at \(z=21\). The primary peak is at \(z=21\) and \(M(1+z)\approx 4,000\,\mathrm{M}_{\odot}\), corresponding to the simulated value. However, there is a secondary peak around \(z=5\), with a redshifted mass
Figure 7: Inferred posteriors for redshift, mass ratio (\(1/q=m_{2}/m_{1}\)) and inclination for \((120,60)\,\mathrm{M}_{\odot}\) binary observed at SNR=30. _Top:_ Parameter recovery for a system at \(z=14\) and \(\iota=60^{\circ}\). _Bottom:_ Parameter recovery for a system at \(z=17\) and \(\iota=0^{\circ}\).
around \(6,000\,\mathrm{M}_{\odot}\) corresponding to a binary with mass of \(1,000\,\mathrm{M}_{\odot}\). For this system, it is the (3, 3) multipole which is consistent with the simulated signal. This provides another example of the challenges which arise when identifying high-mass, high-redshift binaries. The signal would be observed only in ET, and have only one observable multipole. Not only does this lead to poor parameter recovery, but also the inability to distinguish between a \(180\,\mathrm{M}_{\odot}\) binary at \(z=21\) and a \(1,000\,\mathrm{M}_{\odot}\) binary at \(z=5\). Given the GW data alone, it would not be possible to distinguish between the two scenarios. The relative significance of the two will depend upon astrophysical knowledge of the mass and redshift distributions of BBHs. Here, we have used priors which are uniform in comoving volume and component masses. Other choices might lead to different conclusions about the mass and redshift of the binary.
In summary, for these two representative sources, and for high SNRs, we would confidently identify the systems as high-mass BBHs at high redshift. These could be potential seeds for the growth of supermassive black holes (SMBHs). The first system of \((90,90)\,\mathrm{M}_{\odot}\) would be marginally consistent with being a binary formed at the lower edge of the mass gap, corresponding to the lowest value of the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate, and inconsistent otherwise. The second system of \((120,60)\,\mathrm{M}_{\odot}\) would be consistent with one BH (the lightest) originating from the core-collapse of a massive star (provided the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) rate is low) and the second, in the midst of the pair-instability gap, would have a different origin. For a large fraction of the computed rates, the discovery of the latter system would be inconsistent with the explosion scenario implied by the pair instability, which would predict no BHs, and therefore a different channel has to be invoked for both components.
### Observing intermediate mass black holes in binaries
Next, let us consider higher-mass systems, containing an IMBH. For concreteness, we consider binaries with masses \((240,120)\,\mathrm{M}_{\odot}\) and \((480,120)\,\mathrm{M}_{\odot}\), which would be observable up to \(z\sim 10\). While the \(120\,\mathrm{M}_{\odot}\) BH would be at the border of the pair-instability gap, the primary BHs are well above the gap. Their observation would either require a sufficiently high upper-mass end of a Kroupa-like IMF (extending to at least \(300\,\mathrm{M}_{\odot}\)) or a top-heavy IMF (Klessen and Glover, 2023). Alternatively, the primary BH (particularly the most massive one, of \(480\,\mathrm{M}_{\odot}\)) could have had time to increase its original mass through accretion of gas from its surroundings. Thus, for these systems, we are interested in determining whether the mass of the primary and the redshift can be accurately inferred in order to identify these early IMBHs.
Assuming the mass gap to be that predicted by the median of the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate, between \(\sim 50\,\mathrm{M}_{\odot}\) and \(130\,\mathrm{M}_{\odot}\), and a standard Kroupa-like IMF, in the interval between \(0.1\,\mathrm{M}_{\odot}\) and \(150\,\mathrm{M}_{\odot}\), all our systems are expected to have a dynamical origin in dense star clusters (see, e.g., Mapelli et al., 2021).9 Among these, Nuclear Star Clusters (NSCs) could be the sites for 3rd- or 4th-generation BBHs with observed individual masses up to \(\sim 600\,\mathrm{M}_{\odot}\), merging on timescales smaller than 500 Myr, compatible with the redshift of observation. Formation in Globular Clusters (GCs) would be marginally compatible with our lightest systems. According to Mapelli et al. (2022), above \(z\sim 10\) there are two available BBH formation channels, isolated formation and dynamical formation in young star clusters. However, the maximum masses for these channels are \(\sim 50\,\mathrm{M}_{\odot}\) and \(\sim 100\,\mathrm{M}_{\odot}\), respectively. This implies that detecting BBHs with individual masses larger than \(\sim 100\,\mathrm{M}_{\odot}\) at \(z>10\) could point to a top-heavy IMF, as predicted for the first stellar generation.
Footnote 9: We warn that black holes with these masses could be of primordial origin or the outcome of post-formation accretion, as mentioned in the Introduction.
Figures 9 and 10 show the accuracy with which we can measure the masses and redshifts of the events. The broad features are similar to what we have already observed
Figure 8: Inferred posteriors for redshift and redshifted mass for a simulated \((120,60)\,\mathrm{M}_{\odot}\) BBH merger with \(\iota=0^{\circ}\) observed at \(z=21\).
Figure 10: Inferred posteriors for the redshift for heavy binaries. Signals are simulated with varying inclination and redshift, as indicated in the labels, with a fixed SNR of 30 in the ET–CE network, at the most sensitive sky location for the network. Black vertical lines indicate the 90% credible intervals, violins show the 99.5% range and yellow vertical lines show simulated values. Symbols \(\rho_{\ell m}\) indicate the SNR in the \((\ell,m)\) multipole HoMs. _Left_: \(m_{1}=240\,\mathrm{M}_{\odot}\) and \(m_{2}=120\,\mathrm{M}_{\odot}\). _Right_: \(m_{1}=480\,\mathrm{M}_{\odot}\) and \(m_{2}=120\,\mathrm{M}_{\odot}\).
Figure 9: Inferred posteriors for the component masses for heavy binaries. Results are shown for inclinations of \(\iota=0^{\circ},30^{\circ},60^{\circ}\) and \(90^{\circ}\), with the redshift of the system varied, as indicated in the labels, so that the signals are all simulated with an SNR of 30 in the ET-CE network, at the most sensitive sky location for the network. The grey region denotes the pair-instability mass-gap and the darker grey denotes the forbidden strip where no black hole is expected to form for any value of the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate (Farmer et al., 2020). Simulated values are denoted by a black cross and contours show the 90% credible region. _Left_: \(m_{1}=240\,\mathrm{M}_{\odot}\) and \(m_{2}=120\,\mathrm{M}_{\odot}\). _Right_: \(m_{1}=480\,\mathrm{M}_{\odot}\) and \(m_{2}=120\,\mathrm{M}_{\odot}\).
for the lower-mass systems, namely that the parameter recovery is significantly worse for face-on systems, due to the vanishing HoMs. Even though the mass ratio of the systems is 2 or 4, both are inferred to be consistent with equal-mass (or nearly equal-mass) binaries when viewed face-on. Furthermore, the uncertainty in redshift and total mass is about a factor of two. For the inclined systems, the recovery of masses and redshifts improves significantly, particularly for \(\iota\geq 60^{\circ}\). In that case, component masses and redshifts are recovered with \(\sim 20\%\) accuracy. In particular, the mass of the \(120\,\mathrm{M}_{\odot}\) BH will be constrained to be between \(90\,\mathrm{M}_{\odot}\) and \(\sim 150\,\mathrm{M}_{\odot}\) for all except the face-on system. This is consistent with a black hole above the mass gap and, due to uncertainties in the mass measurement, will not significantly restrict the \({}^{12}\mathrm{C}(\alpha,\gamma)^{16}\mathrm{O}\) reaction rate. In Appendix B, we also show results for events simulated at higher redshifts and at a lower SNR of 15. The results are comparable to those discussed above, with the masses and redshifts of inclined systems better measured, and masses constrained to be unequal. For face-on systems, the parameter recovery is significantly worse and we again see multiple peaks in the mass-redshift distributions, corresponding to different multipoles matching the signal.
Remarkably, the next-generation GW observatories have the capability to detect and accurately identify mergers involving a \(240\,\mathrm{M}_{\odot}\) BH at a redshift of 10, confidently inferring a minimum redshift of 7, and mergers involving a \(480\,\mathrm{M}_{\odot}\) BH at a redshift of 6, inferring a redshift of at least 4. These systems will be interesting to observe because we do not know whether BHs of those masses exist, and we can hope to shed light on their formation routes, either through accretion onto lower-mass BHs or through the direct collapse of very massive stars.
## 4 Discussion
The next generation of GW detectors provides a unique way to probe the existence of heavy stellar black holes in the high-redshift Universe. Future GW observations of BHs with masses above \(\sim 50\,\mathrm{M}_{\odot}\) at redshift \(z\sim 10-15\) will enable us to probe the properties of the first stars forming in the Universe and their initial mass function. If BHs in the mass range explored here exist, they can contribute, as _seeds_10, to the rapid growth of the population of quasars observed close to the reionization epoch, at \(z\approx 7.5\), and housing accreting BHs of \(\mathcal{O}(10^{8}-10^{9})\,\mathrm{M}_{\odot}\) (e.g. Volonteri and Rees, 2005; Madau et al., 2014; Pezzulli et al., 2016; Volonteri et al., 2015; Ricarte and Natarajan, 2018; Valiante et al., 2021; Trinca et al., 2022). Whether and how the bridge between stellar and supermassive black holes was established when the first galaxies were forming is currently unknown (Volonteri et al., 2021). The revolutionary data coming from JWST, with the recent discovery of more than 40 new faint accreting supermassive BHs of \(\mathcal{O}(10^{5}-10^{7})\,\mathrm{M}_{\odot}\) at \(4<z\leq 10.6\) (see Maiolino et al., 2023, and references therein), is an outstanding confirmation of the rich BH landscape at cosmic dawn predicted by theoretical models (Schneider et al., 2023). In the future, with the Laser Interferometer Space Antenna (LISA) in operation (Amaro-Seoane et al., 2017), we will detect low-frequency GWs from merging massive BHs of \(\mathcal{O}(10^{4}-10^{6})\,\mathrm{M}_{\odot}\) out to \(z\sim 10-15\). By combining and statistically confronting all observations of both merging and accreting BHs, we will be able to shed light on the origin and evolution of the BH populations, from the stellar to the supermassive through the intermediate-mass ones, across the cosmic epochs (Fragione and Loeb, 2023; Valiante et al., 2021).
Footnote 10: The so-called “light seeds” explored in the literature (Valiante et al., 2017; Inayoshi et al., 2020; Volonteri et al., 2021).
In this paper, we focused on the observability of high-redshift stellar BBHs with high masses, and, equally importantly, on the accuracy with which their masses and redshifts can be inferred. We have shown that both the observation of systems and the accurate measurement of their parameters depend critically on the inclusion of HoMs in the GW waveform. At the highest masses and redshifts, HoMs, which extend the signal to higher frequencies than the (2, 2) multipole, can significantly increase the sensitive range of the detectors. Across a broad range of masses and redshifts, we expect to see multiple GW multipoles in signals observed by CE and ET. Observation of more than one multipole, typically a HoM in addition to the (2, 2) multipole, enables the breaking of the degeneracy between binary inclination and distance, as well as a more accurate determination of the mass ratio. Additionally, a network of observatories is required for source localization. When a signal is seen in only a single detector, the sky location is poorly measured and, since the detector response varies significantly over the sky, this leads to large uncertainties in the distance. For very high-redshift sources, accurate distance/redshift measurement is vital for the measurement of the individual BH masses, as the observed signal depends upon \(M(1+z)\). By performing full parameter estimation on a set of representative systems, we demonstrated that it will be possible to measure masses and redshifts with an accuracy of \(10-20\%\), for signals at redshifts up to at least 15. Those systems which can be observed and accurately measured are typically seen in both CE and ET detectors, so they are well localized and
also tend to be viewed away from face-on (or face-off) so that more than one GW multipole is observed.
We examined systems with masses \((120,60)\,\mathrm{M}_{\odot}\) and \((90,90)\,\mathrm{M}_{\odot}\), which lie in, or around, the _pair-instability mass-gap_. For the best-measured of the examples we investigated, at a redshift of 10, we could measure the redshift and component masses with 10% uncertainty. This would unambiguously place these sources in the high-redshift Universe and serve to constrain the currently unknown location of the pair-instability mass-gap. We also investigated mergers of \((240,120)\,\mathrm{M}_{\odot}\) and \((480,120)\,\mathrm{M}_{\odot}\) binaries, which enable us to probe the observability of early IMBHs. It will be possible to observe these IMBHs at redshifts up to \(z=10\) and constrain the redshift to be at least \(z=7\).
The results in this paper complement those in Ng et al. (2022, 2023), which investigate lower-mass BH mergers in the next-generation GW network, and Mancarella et al. (2023), who introduced the concept of an "inference horizon", which maps the redshift beyond which a source can confidently be placed. In all cases, it is shown that the next-generation GW network provides a unique capability to probe high-redshift black hole formation.
The most critical feature of detector sensitivity for observing these systems is the low-frequency sensitivity of the detectors. In our study, we used a low-frequency limit of 3 Hz for ET and 5 Hz for CE. Even the relatively small change from 3 Hz to 5 Hz can have a profound impact on sensitivity to high-mass, high-redshift sources as shown in Fig. 2 and 3. Achieving the instrumental design sensitivity at low frequencies has been challenging in the current LIGO and Virgo observatories. As the detailed technical designs of the next-generation observatories are finalised, the desire to probe the remnants of high-mass stars in the early Universe should be considered as a motivation to optimize sensitivity at low frequencies.
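To make this concrete, a rough estimate (ours, assuming the approximate \((2,2)\) ringdown frequency \(f_{\rm ring}\approx 1.2\times 10^{4}\,{\rm Hz}\,/\,[M(1+z)/\mathrm{M}_{\odot}]\) of a non-spinning remnant; spin raises this somewhat) shows how the signals studied here straddle the 3-5 Hz band:

```python
# Rough estimate of where the detector-frame ringdown lands relative to the
# 3 Hz (ET) and 5 Hz (CE) low-frequency cutoffs used in this study.
# Assumes a non-spinning remnant: f_ring ~ 1.2e4 Hz / (M/Msun); remnant spin
# would push these values somewhat higher.

def ringdown_frequency(m_total_source, z):
    """Approximate detector-frame (2,2) ringdown frequency in Hz."""
    return 1.2e4 / (m_total_source * (1.0 + z))

for m, z in [(180, 14), (180, 21), (360, 10), (600, 6)]:
    f = ringdown_frequency(m, z)
    print(f"M={m} Msun, z={z}: f_ring = {f:.2f} Hz, "
          f"above 3 Hz: {f > 3.0}, above 5 Hz: {f > 5.0}")
# Several of these land between 3 and 5 Hz, so the choice of low-frequency
# cutoff directly decides whether such signals are observable at all.
```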
## Acknowledgements
We thank Riccardo Buscicchio for his careful reading and valuable comments. SF acknowledges the support of STFC grant ST/V005618/1 and a Leverhulme Trust International Fellowship. RS acknowledges support from the Amaldi Research Centre funded by the MIUR programme "Dipartimento di Eccellenza" (CUP:B81I18001170001). MC, RS, AT, RV acknowledge the INFN TEONGRAV specific initiative. MC acknowledges support by the 2017-NAZ-0418/PER grant, and by the Italian Ministry for Universities and Research (MUR) program "Dipartimenti di Eccellenza 2023-2027", within the framework of the activities of the "Centro Bicocca di Cosmologia Quantitativa (BiCoQ)". AS acknowledges the financial support provided under the European Union's H2020 ERC Consolidator Grant "Binary Massive Black Hole Astrophysics" (B Massive, Grant Agreement: 818691). MC and RS thank the Institut d'Astrophysique de Paris for kind hospitality.
## Appendix A Detector Sensitivity
In this Appendix, we show the sensitivity of ET and CE to binary mergers as a function of mass and redshift. Fig. 4 shows the sensitivity to mergers with mass ratio of two. In Fig. 11 we show the ET sensitivity for binaries with mass ratios 1, 2, 4 and 10. In Fig. 12, we show the same for CE. The maximum reach of the detectors is for equal mass binaries. However, at equal mass the (3, 3) multipole vanishes so there is a larger range for which only one multipole is visible. As we increase the mass ratio, the maximum sensitive redshift decreases, as the amplitude of the emitted GW also decreases. However, the relative significance of the HoMs increases so that an increasing fraction of sources will be observed with at least two multipoles.
The qualitative results are similar to those presented in Section 3, with broader posteriors as expected due to the lower SNR. Specifically, the masses and redshifts are poorly measured for face-on systems, and measurement accuracy improves for inclined systems (particularly \(\iota=60^{\circ},90^{\circ}\)) where there is observable power in the HoMs. In the best-case scenarios, masses and redshifts are measured with \(\sim 25\%\) accuracy. For all systems other than \(\iota=0^{\circ}\), the \((120,60)\,\mathrm{M}_{\odot}\) system is clearly identified as having unequal masses. However, the mass distributions are broad enough that limited information about the location of the pair-instability mass-gap can be extracted.
For the \(\iota=0^{\circ}\) systems, and \(\iota=30^{\circ}\) for the \((90,90)\,\mathrm{M}_{\odot}\) binary, there is a bimodality in the recovered redshift. In addition, the inferred mass distribution is broader than that shown in Fig. 13 and extends to \(\sim 1000\,\mathrm{M}_{\odot}\). For these events, there is zero (or limited) power in the HoMs so only a single GW multipole is observable. The secondary peak at high masses and \(z\approx 5\) corresponds to a binary configuration where the (3, 3) multipole has the correct amplitude and frequency content to match the simulated signal. This is discussed in more detail in Section 3, around Fig. 8.
In Fig. 15 and 16, we show the accuracy with which the masses and redshift are recovered for the \((240,120)\,\mathrm{M}_{\odot}\) and \((480,120)\,\mathrm{M}_{\odot}\) binaries. As for the lower-mass systems, the qualitative results are similar to those for the SNR 30 signals presented in Section 3, with broader posteriors as expected due to the lower SNR. Nonetheless, other than the face-on (\(\iota=0^{\circ}\)) systems, the binaries are clearly identified as unequal-mass systems containing an IMBH with minimum masses of \(200\,\mathrm{M}_{\odot}\) and \(400\,\mathrm{M}_{\odot}\) for the two systems, respectively. Redshifts are generally underestimated, likely due to the poor sky localization, and lower bounds on the redshift are _no better_ than for the higher SNR systems shown in Fig. 10.
Figure 11: Maximum observable redshift of BBH mergers with ET for mass ratios \(q=1,2,4\) and \(10\). Shaded regions show redshifts at which \(10\%\), \(50\%\) and \(90\%\) of sources are observable. The maximum observable redshift of the (2, 2) multipole at \(\mathrm{SNR}=8\) is shown in orange, the observability of the full waveform at \(\mathrm{SNR}=8\) is shown in blue, and the observability of the second most significant mode above \(\mathrm{SNR}=3\) is shown in black. The red asterisks represent the masses and redshifts of the (\(q=2\)) systems we study in detail in Section 3.
Again, the face-on systems show significant bimodality with a second peak at much lower redshifts and higher masses. As before, this corresponds to a system where the HoMs, rather than the (2, 2) multipole, are associated with the observed waveform.
|
2310.00717 | Exact Entanglement Propagation Dynamics in Integrable Heisenberg Chains | The exact single-magnon entanglement evolution in Heisenberg chains is
obtained using the Quantum Correlation Transfer Function (QCTF) formulation.
The individual spins' entanglement is given by a hypergeometric function, and
its transient behavior is described via a Bessel function of the first kind.
The presented characterization through the lens of QCTF allowed for calculating
the ballistic single-magnon entanglement edge velocity in Heisenberg chains,
which has not been achieved before. Our results can be extended to the
multi-magnon regime, therefore opening up the means to explain equilibration
dynamics and thermodynamics in Heisenberg chains. | Peyman Azodi, Herschel A Rabitz | 2023-10-01T16:36:03Z | http://arxiv.org/abs/2310.00717v2 | # Exact Entanglement Propagation Dynamics in Integrable Heisenberg Chains
###### Abstract
The exact single-magnon entanglement evolution in Heisenberg chains is obtained using the Quantum Correlation Transfer Function (QCTF) formulation. The individual spins' entanglement is given by a hypergeometric function, and its transient behavior is described via a Bessel function of the first kind. The presented characterization through the lens of QCTF allowed for calculating the ballistic single-magnon entanglement edge velocity in Heisenberg chains, which has not been achieved before. Our results can be extended to the multi-magnon regime, therefore opening up the means to explain equilibration dynamics and thermodynamics in Heisenberg chains.
_Introduction.-_ Understanding entanglement propagation in non-equilibrium many-body quantum systems is valuable for both fundamental and practical reasons, especially given new developments in the understanding of the interrelation between entanglement and thermodynamics in quantum systems [1; 2; 3; 4]. Integrable quantum systems have been a primary subject in the study of entanglement dynamics due to their importance and algebraic structure [5]. In this class of quantum systems, quasi-particles can transport correlations throughout the many-body lattice [6; 7; 8; 9; 10; 11; 12]. The development of macroscopic theories, including Generalized Hydrodynamics (GHD) [13], to study finite-temperature quasi-particle transport has been an extensive, ongoing research effort [14; 15; 16; 17]. In isolated quantum systems, thermal behavior re-emerges through entanglement between the constituents [18; 19; 20; 21]. In this case, the long-time behavior of generic local observables is conjectured to be given by conventional thermodynamics ensembles at relevant effective temperatures [22; 23; 24; 25; 26]. In the case of integrable systems, Generalized Gibbs Ensembles have been used extensively to predict the _asymptotic_ behavior of observables [27; 28; 29; 30; 31], including both successful [32; 33; 34; 35] and unsuccessful [36; 37] cases of this formulation in studying the Heisenberg model. Despite the analytical progress in studying entanglement, except in rare cases [38; 39; 40], the exact time dependence, in particular the transient behavior, and the underlying mechanism of the equilibration process thus far have remained confined to numerical treatments.
In non-relativistic quantum systems with short-range interactions, Lieb and Robinson's theorem provides a bound based on the maximum group velocity and a resulting causality light-cone for the ballistic propagation of correlations, beyond which correlations must decay exponentially [41; 42]. This phenomenon has been experimentally observed in several instances [43; 44]. The presence of long-range interactions breaks the Lieb-Robinson bound, but further modifications can be made to obtain the correlation transport velocity [45; 46; 47; 48; 49], which has proven to remain finite under certain circumstances [50]. Moreover, the spread of correlations is shown to have a double causality structure with different velocities, where one case corresponds to the edge - which is faster, given by the phase velocity in the lattice - and the other case is associated with the extremum of correlation transport [47; 51].
This paper presents analytical results on entanglement propagation in integrable Heisenberg chains through a new lens, the Quantum Correlation Transfer Functions (QCTFs) [52; 53]. In this framework, the dynamical properties of a subsystem's entanglement are encoded in the residues of a complex (QCTF) function which can be calculated directly from the system's Hamiltonian and its pre-quench state. Hence, in this framework, exponentially expensive (i.e., with respect to the number of bodies in the system) calculations of the system's time evolution can be avoided and the evolution of entanglement can be directly obtained from the system's Hamiltonian. To this end, the dynamics of entanglement is quantified using a geometric measure: the squared area spanned by projected wave functions (onto a local basis for the subsystem of interest) [52]. In the case of two-energy level subsystems, this measure of entanglement reduces to the determinant of the reduced density matrices in the Laplace domain. In order to obtain the QCTF function, we assign a unique integer number to an arbitrary set of eigenstates for the underlying Hilbert space; nevertheless, the residues of the QCTF, encoding entanglement between subsystems, are invariant to the chosen basis.
This treatment enables a full analysis of the single-magnon entanglement quench _dynamics_ in ferromagnetic Heisenberg spin-\(\frac{1}{2}\) chains with arbitrary length, thereby going beyond numerical analyses, mean-field models, and tensor-network-based approaches. The choice of local quench based on a single-magnon excitation allows for the study of the velocity of propagation of correlations in the chain. In addition to the exact characterization of entanglement dynamics in the chain with an arbitrary number of spins, another main finding of this paper is the exact calculation of the entanglement edge velocity \(v_{e}=\frac{e}{2}v_{group}\) in anisotropic Heisenberg chains, with no dependence on the anisotropy in the chain. Our results add to the understanding of entanglement dynamics in this well-studied class of integrable systems by revealing new aspects of this phenomenon through the lens of QCTF.
_Model and QCTF Analysis.-_ The goal of this paper is to study the quench entanglement dynamics of a single-magnon state in an anisotropic Heisenberg chain with the following Hamiltonian,
\[\mathbf{H}=-\mathbf{J}\sum_{j=-\frac{N-1}{2}}^{\frac{N-1}{2}}\Big(S_{j}^{x}S_{j+1}^{x}+S_{j}^{y}S_{j+1}^{y}+\Delta\big(S_{j}^{z}S_{j+1}^{z}-\tfrac{1}{4}\big)\Big), \tag{1}\]
of an odd number \(N\) of spins with periodic boundary conditions, i.e., \(S_{\frac{N+1}{2}}=S_{-\frac{N-1}{2}}\). Here, \(\mathbf{J}\) and \(\Delta\) denote the interaction strength and anisotropy. The pre-quench state of the quantum chain is the single-magnon excitation of one of the degenerate ferromagnetic states (\(S_{j}^{-}\left|F\right\rangle=0\)), where \(S_{j}^{\pm}=S_{j}^{x}\pm iS_{j}^{y}\) are the spin raising/lowering operators at site \(j\). Without loss of generality, due to translational invariance, we choose the magnon state as \(\left|0\right\rangle\equiv S_{0}^{+}\left|F\right\rangle\). Since the product state \(\left|F\right\rangle\) is an eigenstate of the Hamiltonian, the local quench in the magnon excitation is exclusively responsible for the entanglement evolution in the chain. In what follows, we study how entanglement evolves and propagates through the chain using the QCTF formulation.
To construct a QCTF model, we start by labeling an arbitrary set of basis kets for the one-magnon sector (with \(\left\langle S_{total}^{Z}\right\rangle=\left\langle\sum S^{z}\right\rangle=-\frac{N}{2}+1\)) as \(\left|p\right\rangle\doteq S_{p}^{+}\left|F\right\rangle\), where \(p=-\frac{N-1}{2},\cdots,0,\cdots,\frac{N-1}{2}\). Since \([\mathbf{H},S_{total}^{Z}]=0\), the higher-order magnon sectors can be ignored in the resolvent function, defined as \(\mathbf{G}(s)=(s\mathbf{I}-\frac{i}{\hbar}\mathbf{H})^{-1}\) (\(s\) is the Laplace variable). Employing the translational invariance of the chain, the coordinate Bethe ansatz gives the eigenstates of the chain as \(\left|K\right\rangle=(N)^{-\frac{1}{2}}\sum_{p=-\frac{N-1}{2}}^{\frac{N-1}{2}}e^{ipK}\left|p\right\rangle\), with dispersion relation \(E(K)=\mathbf{J}(\Delta-\cos(K))\) and momenta \(K=\frac{2\pi}{N}m\); \(m=-\frac{N-1}{2},\cdots,\frac{N-1}{2}\). Therefore, the resolvent defined in the sub-Hilbert space of interest (i.e., the one-magnon sector) can be written as follows:
\[\mathbf{G}(s)=\frac{1}{N}\sum_{m=-\frac{N-1}{2}}^{\frac{N-1}{2}}\Bigg[\bigg(s-\frac{i\mathbf{J}}{\hbar}\Big(\Delta-\cos\big(\tfrac{2\pi}{N}m\big)\Big)\bigg)^{-1}\sum_{p_{1},p_{2}}e^{\frac{2i\pi m}{N}(p_{1}-p_{2})}\left|p_{1}\right\rangle\left\langle p_{2}\right|\Bigg]. \tag{2}\]
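The plane-wave structure entering Eq. (2) is easy to verify numerically. The sketch below (our own cross-check, with \(\hbar=1\) and illustrative parameter values) builds the one-magnon block of the Hamiltonian (1), whose diagonal is \(\mathbf{J}\Delta\) and whose nearest-neighbour hopping amplitude is \(-\mathbf{J}/2\), and compares its spectrum with the Bethe dispersion:

```python
# Verify the one-magnon dispersion E(K) = J*(Delta - cos K) by diagonalizing
# the single-magnon block of the periodic XXZ Hamiltonian in the |p> basis.
# Minimal sketch; hbar = 1 and the parameter values are illustrative.
import numpy as np

N, J, Delta = 33, 1.0, 0.5

# Diagonal J*Delta (from the two bonds touching the flipped spin),
# hopping -J/2 between neighbouring sites, periodic boundary conditions.
H = J * Delta * np.eye(N)
for p in range(N):
    H[p, (p + 1) % N] = H[(p + 1) % N, p] = -J / 2

evals = np.sort(np.linalg.eigvalsh(H))
K = 2 * np.pi / N * np.arange(-(N - 1) // 2, (N - 1) // 2 + 1)
analytic = np.sort(J * (Delta - np.cos(K)))

print("max deviation from Bethe dispersion:", np.abs(evals - analytic).max())
# ~1e-15: the plane waves |K> are exact eigenstates, as used in Eq. (2).
```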
In the QCTF framework, entanglement dynamics of each individual spin can be obtained by finding the residues of a corresponding QCTF transformation. The first step is to find the QCTF centered at the spin number \(q\) (the subsystem of interest). For this model, the QCTF is defined as [52]:
\[\mathcal{K}_{q}(s,z_{d},z_{a})=\sum_{\begin{subarray}{c}p=-\frac{N-1}{2}\\ p\neq q\end{subarray}}^{\frac{N-1}{2}}z_{d}^{q-p}z_{a}^{q+p}\left\langle 0|\mathbf{G}^{\dagger}(s^{*})|p\right\rangle\star\left\langle q|\mathbf{G}(s)|0\right\rangle, \tag{3}\]
where \(z_{a}\), \(z_{d}\) are complex variables and the operator \(\star\) is the convolution in the \(s\) domain and regular multiplication in \(z_{d}\) and \(z_{a}\) domains [52]. In the remainder of the paper, a basic application of this operator, namely \((s+i\omega_{1})^{-1}\star(s+i\omega_{2})^{-1}=(s+i(\omega_{1}+\omega_{2}))^{ -1}\) will be used. Inserting (2) in the QCTF (3) leads to
\[\mathcal{K}_{q}=\frac{1}{N^{2}}\sum_{\begin{subarray}{c}m_{1},m_{2},p=-\frac{N-1}{2}\\ p\neq q\end{subarray}}^{\frac{N-1}{2}}z_{d}^{q-p}z_{a}^{q+p}\Bigg(s-\frac{i\mathbf{J}}{\hbar}\bigg(\cos\Big(\frac{2\pi}{N}m_{2}\Big)-\cos\Big(\frac{2\pi}{N}m_{1}\Big)\bigg)\Bigg)^{-1}e^{\frac{2i\pi}{N}(m_{1}q-m_{2}p)}. \tag{4}\]
This formula can be understood as a three-variable transformation of the density matrix: two transformations along the density matrix, one parallel (\(z_{d}\)) and one perpendicular (\(z_{a}\)) to its diagonal array of elements, as well as a transformation to the Laplace domain (\(s\)), which reflects the time evolution of entanglement. Note that the dependence on \(\Delta\) is not present in the QCTF function. This parameter is a constant shift in the energy of each fixed-magnon block on the diagonal of the Hamiltonian; therefore, it does not affect the linear combination of eigenvalues that appears in the QCTF entanglement measure. Having determined the QCTF, the dynamical entanglement measure of spin \(q\) (\(\tilde{\mathcal{Q}}_{q}(s)\)) can be obtained using the following relation [52]:
\[\tilde{\mathcal{Q}}_{q}(s)= \underset{z_{d}=0}{\mathbf{Res}}\big{(}(z_{d}z_{a})^{-1}\mathcal{ K}_{q}(z_{d},z_{a},s)\big{)}\bigg{|}_{z_{a}=1}. \tag{5}\]
with \(\mathcal{K}_{d}(s)=\underset{z_{d}=0}{\mathbf{Res}}\big{(}z_{d}^{-1}\mathcal{ K}_{q}(z_{d},z_{a},s)\big{)}\bigg{|}_{z_{a}=1}\). One can show that \(\tilde{\mathcal{Q}}_{q}(s)\) is the determinant of the reduced density matrix of spin \(q\), in the Laplace domain [52]. These residues can easily be found upon expanding the \(\star\) multiplication in (5) using (4), which gives the following dynamical entanglement measure:
This equation provides the frequency spectrum of the dynamical entanglement of spin \(q\). By taking the inverse Laplace transform, one finds the entanglement time evolution of each spin, which is shown in Figure 1 for \(N=33\) spins. Note that the poles of this function (6) have the inversion symmetry \(m_{i}\leftrightarrow-m_{i}\), as well as the \(m_{1}\leftrightarrow m_{4}\), \(m_{2}\leftrightarrow m_{3}\) symmetries.
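For a single magnon, the determinant of the reduced density matrix takes a simple closed form: because one-magnon states with the flipped spin at different sites are orthogonal on the complement of site \(q\), the reduced density matrix of spin \(q\) is diagonal, giving \(\mathcal{Q}_{q}(t)=|\psi_{q}(t)|^{2}\left(1-|\psi_{q}(t)|^{2}\right)\), where \(\psi_{q}(t)\) is the magnon amplitude at site \(q\). The following sketch (our own cross-check, with \(\hbar=1\), for times before the wavefront wraps around the ring) evolves the amplitude numerically and compares with the infinite-chain Bessel amplitude \(|\psi_{q}(t)|=|J_{q}(\mathbf{J}t/\hbar)|\), consistent with the Bessel-function transient noted in the abstract:

```python
# Cross-check of the single-magnon entanglement evolution (hbar = 1).
# det(rho_q) = |psi_q|^2 (1 - |psi_q|^2), since rho_q is diagonal for one magnon.
import numpy as np
from scipy.linalg import expm
from scipy.special import jv

N, J, Delta = 101, 1.0, 0.5
H = J * Delta * np.eye(N)
for p in range(N):
    H[p, (p + 1) % N] = H[(p + 1) % N, p] = -J / 2

psi0 = np.zeros(N)
psi0[0] = 1.0                                # magnon launched at site 0
for t in [2.0, 5.0, 10.0]:
    psi = expm(-1j * H * t) @ psi0           # exact evolution of the amplitudes
    for q in [0, 3, 6]:
        Q_num = abs(psi[q]) ** 2 * (1 - abs(psi[q]) ** 2)
        b = jv(q, J * t)                     # infinite-chain Bessel amplitude
        Q_bessel = b ** 2 * (1 - b ** 2)
        print(f"t={t}, q={q}: Q_num={Q_num:.6f}, Q_Bessel={Q_bessel:.6f}")
# The two columns agree to high precision until the front wraps around the ring.
```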
_Analysis.-_ We will present two different analyses of equation (6), in the frequency and time domains, respectively. The relation (6) shows that the entanglement frequency components (poles of \(\tilde{\mathcal{Q}}_{q}(s)\)) are upper-bounded by \(|s|<\frac{4\mathbf{J}}{\hbar}\). Importantly, as will be demonstrated, the frequencies in the upper half region, \(\frac{2\mathbf{J}}{\hbar}<|s|<\frac{4\mathbf{J}}{\hbar}\), are highly (polynomially in \(N\)) suppressed and therefore negligible in the \(N\rightarrow\infty\) limit. To produce these frequencies on the higher end of the spectrum, cooperative addition of all four terms in the frequency argument (i.e., \(\cos\bigl(\frac{2\pi}{N}m_{i}\bigr)\)) is required, which implies that \(m_{2}\neq m_{4}\). In this case, the inner summation over \(p\) leads to:
\[\sum_{p\neq q}e^{\frac{2i\pi}{N}\bigl{(}q(m_{1}-m_{3})+p(m_{4}-m_{2})\bigr{)}} =-e^{\frac{2i\pi}{N}q(m_{1}-m_{3}+m_{4}-m_{2})}. \tag{7}\]
Therefore, the inner summation reduces to a number with unit norm. Given the \(N^{-4}\) scaling in \(\tilde{\mathcal{Q}}_{q}(s)\), this situation not only suppresses all of the higher-end frequencies (\(\frac{2\mathbf{J}}{\hbar}<|s|<\frac{4\mathbf{J}}{\hbar}\)), but also the majority of frequencies on the lower end (\(0<|s|<\frac{2\mathbf{J}}{\hbar}\)). As a result, the dominant frequencies correspond to \(m_{2}=m_{4}\).
Entanglement of the initially excited spin (i.e., \(q=0\)) can be obtained directly from (6). In this case, the intensity of the dominant frequencies is proportional to their abundance. Thus, finding the intensity of each frequency component in the entanglement measure entails counting the instances when each particular frequency emerges as the four-tuple \((m_{1},m_{2},m_{3},m_{4})\) varies. This statement follows since the exponential term becomes unity when \(q=0\) and \(m_{2}-m_{4}=0\). As a result, the entanglement frequency spectrum of the initially excited spin (\(q=0\)) consists of two equal-intensity lines (see Figure 2), one for \(m_{1}m_{3}\neq 0\), and one for \(m_{1}m_{3}=0\), with lower intensity due to a lower number count.
Analogously, for the general case of \(q\neq 0\), the dynamical measure (6) gives the propagation of entanglement throughout the chain. Here, the transient behavior of entanglement (corresponding to the fast time scales) is of main interest. Transient features of entanglement correspond to the poles close to \(|s|\approx\frac{2\mathbf{J}}{\hbar}\), which can be verified to correspond to the following (note that all frequencies appear in positive and negative pairs; here only positive frequencies are considered for brevity):
\[m_{2}=m_{4},\qquad\Big(\tfrac{2\pi}{N}m_{1},\tfrac{2\pi}{N}m_{3}\Big)\approx(0,\pm\pi). \tag{8}\]
Therefore, the fastest dominant peak corresponds to:
\[(m_{1},m_{3})=(0,\pm\frac{N-1}{2}). \tag{9}\]
By employing the new set of variables \(\epsilon=|m_{1}|-|m_{3}|-\frac{N}{2}\) and \(\delta=|m_{1}|+|m_{3}|-\frac{N}{2}\), the intensity (\(\mathcal{I}_{q}\)) and frequency (\(\omega_{q}\)) of the (non-zero) dominant peaks are:
\[\mathcal{I}_{q}(\epsilon,\delta)\propto(-1)^{q}\Bigl(\cos\bigl(\tfrac{2\pi q}{N}\epsilon\bigr)+\cos\bigl(\tfrac{2\pi q}{N}\delta\bigr)\Bigr), \tag{10}\] \[\omega_{q}(\epsilon,\delta)=\tfrac{2\mathbf{J}}{\hbar}\cos\bigl(\tfrac{\pi}{N}\epsilon\bigr)\cos\bigl(\tfrac{\pi}{N}\delta\bigr). \tag{11}\]
As a result, based on (10), one expects to observe a _string_ of poles, close to and below the cut-off frequency (\(\frac{2\mathbf{J}}{\hbar}\)), whose intensities decay to zero (and cross the horizontal axis in Figure 2) more rapidly as \(q\) increases. A simple calculation shows that these crossings of the zero-intensity line occur each time \(|m_{1}|\) or \(|m_{3}|\) crosses the values near \(\frac{N}{2}-\frac{N}{4q}\) and \(\frac{N}{4q}\). Therefore, the first crossing (meaning the closest to the cut-off frequency) corresponds to \(m_{1}=\pm\frac{N-1}{2}\) and \(|m_{3}|\approx\frac{N}{4q}\), which, according to (11), will be at the frequency (shown with red marks in Figure 2):
Figure 1: Exact evolution of entanglement after a local quench at \(t=0\) in the middle of the chain obtained using the QCTF formulation. Due to periodic boundary condition and translational symmetry, all of the spins can be considered to be in the middle of the chain. The evolution shows clear light-cone behavior until quasi-particles arrive at the middle of the chain (located at the upper and lower edges of the figure).
\[2\mathbf{J}\cos\!\left(\frac{\pi}{4q}-\frac{\pi}{2N}\right)\cos\!\left(\frac{\pi}{ 4q}+\frac{\pi}{2N}\right)\approx 2\mathbf{J}\cos^{2}(\frac{\pi}{4q}). \tag{12}\]
This mechanism filters out fast modes through the oscillatory behavior of poles near the cut-off frequency, leading to retarded growth of entanglement for farther spins (larger \(q\)). This behavior is analyzed in detail in the following paragraphs.
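The location of the first crossing can be checked directly from Eqs. (10)-(12). A small sketch (our own, in units \(\hbar=\mathbf{J}=1\)):

```python
# Locate the first zero crossing of the pole-string intensity, Eq. (10),
# and compare its frequency, Eq. (11), with the estimate of Eq. (12).
# Units hbar = J = 1 assumed; N and q are illustrative.
import numpy as np

N, q = 201, 5
m1 = (N - 1) // 2                 # fastest branch of the string
m3 = np.arange(1, N // 2)         # scan the second index

eps = m1 - m3 - N / 2
dlt = m1 + m3 - N / 2
intensity = (-1) ** q * (np.cos(2 * np.pi * q * eps / N)
                         + np.cos(2 * np.pi * q * dlt / N))
freq = 2 * np.cos(np.pi * eps / N) * np.cos(np.pi * dlt / N)

k = np.where(np.diff(np.sign(intensity)) != 0)[0][0]   # first sign change
print(f"crossing near |m3| = {m3[k]} (prediction N/4q = {N / (4 * q):.2f})")
print(f"crossing frequency = {freq[k]:.4f}, "
      f"Eq. (12) estimate = {2 * np.cos(np.pi / (4 * q)) ** 2:.4f}")
```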
Here, we present an alternative and more in-depth analysis of the transient entanglement dynamics in the time domain. We demonstrate that entanglement of spins at the \(q\)'th distance from the initially quenched spin obeys the transient behavior \(\sim(\frac{v_{e}t}{q})^{2q}\), where \(v_{e}\) is a universal constant of the chain and describes the velocity of propagation for the entanglement edge. Given the even symmetry of the frequency components, the inverse Laplace transform of (6), which gives the entanglement dynamics of spin \(q\), has the following general form in the time domain:
Figure 2: Entanglement frequency spectra for \(q=0\) (initially excited spin) and further spins (\(q\) denotes the distance). The spectra consist of two main classes of poles: dominant poles that contribute mainly to the entanglement evolution and suppressed poles, with intensities close to \(\mathcal{I}_{q}=0\). For the initially excited spin (\(q=0\)), the dominant frequency spectrum consists of two equi-intensity lines of poles (refer to the main text for explanation). For further spins, the frequency spectra feature a string of poles close to the cut-off frequency (\(\omega=\frac{2\mathbf{J}}{\hbar}\)) that approach zero in intensity at \(\approx 2\mathbf{J}\cos^{2}(\frac{\pi}{4q})\) (these poles are marked by red circles), more rapidly for larger \(q\). More generally, further spins exhibit spectra that are more oscillatory in intensity (see equation (10)), which leads to filtering out the fast modes in the entanglement dynamics and eventually to delayed development of entanglement.
\[\mathcal{Q}_{q}(t)=\sum_{j}\mathcal{I}_{q,j}\cos(\omega_{j}t), \tag{13}\]
where \(j\) is the index for all possible frequencies (\(\omega_{j}=-is_{j}\), with \(s_{j}\) being the poles of (6)), arising from the four-tuples \((m_{1},m_{2},m_{3},m_{4})\), and \(\mathcal{I}_{q,j}\) is the intensity corresponding to \(\omega_{j}\), when considering spin \(q\). Therefore, all of the odd derivatives (with respect to time) of \(\mathcal{Q}_{q}(t)\) at \(t=0\) vanish and the even derivatives are:
\[\mathcal{Q}_{q}^{(2r)}(t)\bigg{|}_{t=0}=(-1)^{r}\sum_{j}\mathcal{I}_{q,j} \omega_{j}^{2r}. \tag{14}\]
We define the vector \(\mathcal{I}^{q}\doteq[\mathcal{I}_{q,1},\cdots,\mathcal{I}_{q,j},\cdots]^{T}\); thus, given that all of the even derivatives of \(\mathcal{Q}_{q}(t)\) up to order \(2(q-1)\) vanish at \(t=0\), we have the following linear system of equations:
\[V\mathcal{I}^{q}=\mathbf{0} \tag{15}\]
where \(V\) is the following (transposed) Vandermonde matrix:
\[(V_{kj})\doteq(i\omega_{j})^{2(k-1)};k=1,\cdots,q. \tag{16}\]
Thus, the intensities in the entanglement dynamics of the \(q\)'th spin, i.e., \(\mathcal{I}^{q}\), should belong to the null space of \(V\). The proof of this statement can be found in the supporting material. Moreover, it is shown that the derivatives of order higher than \(2(q-1)\), denoted by \(\mathcal{Q}_{q}^{(2(q+\bar{k}))}(t)\), are:
\[\mathcal{Q}_{q}^{(2(q+\bar{k}))}(t)\bigg|_{t=0}=\left(\frac{\mathbf{J}}{2\hbar}\right)^{2(q+\bar{k})}(-1)^{\bar{k}}\,(\cdots).\]
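The edge velocity quoted in the abstract can be recovered from the small-argument asymptotics of the Bessel amplitude together with Stirling's approximation. Sketching the argument (our own reconstruction, assuming \(\psi_{q}(t)\simeq i^{q}J_{q}(\mathbf{J}t/\hbar)\) at early times, so that \(\mathcal{Q}_{q}(t)\simeq|\psi_{q}(t)|^{2}\)):

\[\mathcal{Q}_{q}(t)\simeq\left|J_{q}\!\left(\frac{\mathbf{J}t}{\hbar}\right)\right|^{2}\simeq\frac{1}{(q!)^{2}}\left(\frac{\mathbf{J}t}{2\hbar}\right)^{2q}\simeq\frac{1}{2\pi q}\left(\frac{e\,\mathbf{J}t}{2\hbar q}\right)^{2q}=\frac{1}{2\pi q}\left(\frac{v_{e}t}{q}\right)^{2q},\qquad v_{e}=\frac{e}{2}\frac{\mathbf{J}}{\hbar}=\frac{e}{2}\,v_{group},\]

since the maximum group velocity of the dispersion \(E(K)=\mathbf{J}(\Delta-\cos K)\) is \(v_{group}=\max_{K}|E^{\prime}(K)|/\hbar=\mathbf{J}/\hbar\), independent of the anisotropy \(\Delta\).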
###### Acknowledgements.
P.A. acknowledges support from the Princeton Program in Plasma Science and Technology (PPST). H.R. acknowledges support from the U.S. Department of Energy (DOE) grant DE-FG02-02ER15344.
|
2310.06366 | On the Peak AoI of UAV-assisted IoT Networks: A Stochastic Geometry
Approach | In this paper, we analyze the peak age of information (PAoI) in UAV-assisted
internet of things (IoT) networks, in which the locations of IoT devices are
modeled by a Mat\'{e}rn cluster process (MCP) and UAVs are deployed at the
cluster centers to collect the status updates from the devices. Specifically,
we consider that IoT devices can either monitor the same physical process or
different physical processes and UAVs split their resources, time or bandwidth,
to serve the devices to avoid inter-cluster interference. Using tools from
stochastic geometry, we are able to compute the mean activity probability of
IoT devices and the conditional success probability of an individual device. We
then use tools from queuing theory to compute the PAoI under two load models
and two scenarios for devices, respectively. Our numerical results show
interesting system insights. We first show that for a low data arrival rate,
increasing the number of correlated devices can improve the PAoI for both load
models. Next, we show that even though the time-splitting technique causes
higher interference, it has a limited impact on the mean PAoI, and the mean
PAoI benefits more from the time-splitting technique. This is because of the
nature of UAV communication, especially at places where devices (users) are
spatially-clustered: shorter transmission distances and better communication
channels, comparing the links established by the cluster UAV and serving
devices (users) to links established by interferers. | Yujie Qin, Mustafa A. Kishk, Mohamed-Slim Alouini | 2023-10-10T07:07:10Z | http://arxiv.org/abs/2310.06366v1 | # On the Peak AoI of UAV-assisted IoT Networks: A Stochastic Geometry Approach
###### Abstract
In this paper, we analyze the peak age of information (PAoI) in UAV-assisted internet of thing (IoT) networks, in which the locations of IoT devices are modeled by a Matern cluster process (MCP) and UAVs are deployed at the cluster centers to collect the status updates from the devices. Specifically, we consider that IoT devices can either monitor the same physical process or different physical processes and UAVs split their resources, time or bandwidth, to serve the devices to avoid inter-cluster interference. Using tools from stochastic geometry, we are able to compute the mean activity probability of IoT devices and the conditional success probability of an individual device. We then use tools from queuing theory to compute the PAoI under two load models and two scenarios for devices, respectively. Our numerical results show interesting system insights. We first show that for a low data arrival rate, increasing the number of correlated devices can improve the PAoI for both load models. Next, we show that even though the time-splitting technique causes higher interference, it has a limited impact on the mean PAoI, and the mean PAoI benefits more from the time-splitting technique. This is because of the nature of UAV communication, especially at places where devices (users) are spatially-clustered: shorter transmission distances and better communication channels, comparing the links established by the cluster UAV and serving devices (users) to links established by interferers.
Stochastic geometry, peak age of information, Matern cluster process, resource allocation, correlated and uncorrelated devices
## I Introduction
With the development of advanced wireless communication and sensing techniques, a massive number of IoT devices are being widely deployed to enable multiple real-time applications such as localization, remote monitoring, image collection, and environment sensing. In such real-time scenarios, low latency and reliable communication channels are of vital importance. For instance, internet of things (IoT) devices can be used in disaster monitoring. During disasters, accurate and up-to-date information helps in making timely decisions, which can dramatically reduce the damage [1, 2]. To this end, the age of the information, which characterizes the freshness of the data, has drawn great attention [3, 4]. Compared to traditional metrics like delay, the concept of 'age of information' (AoI) offers a more insightful perspective. AoI is defined as the time elapsed since the last successfully received update packet at the monitor was generated at the source, effectively capturing the timeliness of updates [5]. The AoI grows with time until the arrival of the next update, at which point it is reset. Therefore, it captures the time duration between two successfully transmitted updates. Consequently, collecting data from IoT devices efficiently and reducing the AoI at the receiver side become crucial to maintaining the functionality and high quality of service of the network.
Featured by high flexibility in adjusting their locations and mobility in tracking serving users, unmanned aerial vehicles (UAVs), also known as drones, are expected to act widely as communication relays in future wireless networks [6, 7, 8]. For instance, UAVs can adjust their altitudes to establish line-of-sight (LoS) links with ground users and modify their locations when the spatial distribution of ground users changes [9, 10]. Besides, UAVs are expected to play a key role in IoT networks. Firstly, UAVs can wirelessly charge and wake up IoT devices, which prolongs the lifetime of these energy-limited devices [11, 12, 13]. Secondly, UAVs can modify their trajectories to get closer to and serve the IoT devices on demand [14, 15]. It is both time and energy efficient to use UAVs as communication relays between IoT devices and BSs. Moreover, at places where the devices are spatially clustered, such as industrial areas and remote areas, UAVs are more suitable owing to their altitudes, hence higher LoS probabilities and better channel quality [16], compared to ground BSs.
Motivated by the necessity for timely updates in IoT devices and recognizing the benefits of UAV communications, our study focuses on analyzing the PAoI in UAV-assisted IoT networks. In this context, PAoI is defined as the maximum value of AoI immediately before an update packet is received at the monitor, signifying the worst-case scenario for AoI. By investigating the PAoI, we gain insights into the worst AoI situations, which are crucial for ensuring efficient and timely data collection and transmission in UAV-assisted IoT architectures. Specifically, we use tools from stochastic geometry to model the locations of IoT devices, which are spatially distributed according to a Matern cluster process (MCP), and assume that UAVs hover above the cluster centers to collect information. Additionally, since we consider a cluster of devices, they can either monitor the same physical process or different physical processes, and UAVs need to split their resources to serve them and avoid inter-cluster interference. Hence, in this work, we compute the mean PAoI of UAV-assisted IoT networks under IoT devices in two scenarios: (i) correlated devices and (ii) uncorrelated devices, and two load models: (i) time splitting
and (ii) bandwidth splitting.
### _Related Work_
Literature related to this work can be categorized into: (i) AoI analysis, (ii) stochastic geometry-based approach to the AoI analysis, and (iii) Analysis of UAV-assisted IoT networks. A brief discussion on related works is provided in the following lines.
_Queueing theory-based AoI analysis._ An introduction to and survey of AoI was provided in [17], in which the authors introduced basic methods for analyzing AoI and focused mainly on recent contributions in the AoI area. The authors in [18] provided an introduction to AoI and its variations, and compared this performance metric to other well-known performance metrics. In [3, 19], the authors introduced the concept of data freshness and began to characterize the AoI. Following these works, AoI analysis has been widely investigated in the literature. For instance, the authors in [20] studied the AoI based on the queuing discipline of last-come-first-serve. The authors in [21] studied the AoI of a G/G/I/1 queuing system, and the authors in [22] derived a general formula for the stationary distribution of the AoI, which is given in terms of the system delay and the PAoI. Besides, the PAoI in an M/M/1 queue with packet delivery errors was computed in [23], and the mean AoI under an M/M/1 queuing system with a fixed or random deadline was studied in [24]. Other related works can be found in [25, 26, 27, 28, 29].
_Stochastic geometry-based approach to the AoI analysis._ The first work combining AoI and stochastic geometry was provided in [30], in which the authors considered a Poisson bipolar network and optimized the AoI by scheduling. Besides, the authors in [31] studied the AoI in large-scale wireless networks under first-come-first-serve and last-come-first-serve transmission protocols, respectively. The authors in [32, 33] studied the AoI in a large-scale network under a reinforcement learning framework and a deep reinforcement learning framework, respectively. In [34], the authors considered a large-scale network in which the locations of source-destination pairs were Poisson bipolar distributed. Besides, non-preemptive and preemptive queuing disciplines were used to compute the PAoI. Additionally, in [35] the authors considered device-to-device communication in a cellular-based IoT network. Specifically, the authors computed the network throughput and mean AoI. A PAoI analysis of IoT devices with time- and event-triggered traffic was provided in [36], where the locations of IoT devices and base stations (BSs) were modeled by two independent Poisson point processes (PPPs).
_Analysis of UAV-assisted IoT networks._ The authors in [37] optimized the UAV trajectory to minimize the mean PAoI of a network of source-destination pairs, in which UAVs act as communication relays. In [38], the authors considered a multipurpose UAV which delivers data for IoT clusters and packages simultaneously. The authors in [39] investigated the deployment and mobility of multiple UAVs to collect data from IoT devices while minimizing the total transmission power of the IoT devices. A UAV-assisted multiple-in-multiple-out (MIMO) communication IoT network was analyzed in [40], and the authors jointly designed the transmission duration and power of all devices to maximize the energy efficiency. In [41], the authors studied the throughput maximization problem in a UAV-enabled network, in which the trajectory and transmit power of UAVs are jointly considered. A game theory-based analysis was used in [42], in which the authors studied the UAV access selection and BS bandwidth allocation problems, with UAVs acting as communication relays between BSs and IoT devices.
Different from the existing literature, we compute the PAoI in a UAV-assisted IoT network, where the locations of IoT devices are modeled by a MCP. Specifically, we investigate the impact of the number of devices and resource allocation techniques on the PAoI.
### _Contribution_
In this paper, our main contribution is computing the mean PAoI in UAV-assisted IoT networks under two load models, (i) bandwidth splitting and (ii) time splitting. We quantify the freshness of the updates in the case of correlated or uncorrelated devices, respectively. The details of the contributions are listed next.
_System setup of UAV-assisted IoT networks._ We consider a UAV-assisted IoT network, in which UAVs are deployed above the IoT cluster centers to collect updates from IoT devices. We consider a cluster of devices and they can either monitor one physical process (correlated devices) or several processes (uncorrelated devices). Specifically, we consider that the cluster UAV splits its resources, time or bandwidth, to serve IoT devices to avoid inter-cluster interference.
_PAoI analysis under correlated devices._ While the PAoI of uncorrelated devices is widely studied, we mainly study the PAoI of correlated devices. Using tools from stochastic geometry, we model the locations of UAVs and IoT devices by a MCP, which means that IoT devices are uniformly located within the clusters and UAVs are deployed at the cluster centers. Considering that the success probability varies across devices and is a function of the mean activity probability of the devices (interferers), which in turn depends on the success probability of each device, we first provide a fixed-point equation for the mean activity probability of devices, which is solved iteratively. Besides, we also provide an approximation for computing the PAoI under bandwidth splitting, which works well in high-LoS environments.
_System-level insights._ By comparing the two load models, we show that time splitting works better than bandwidth splitting in UAV-assisted IoT networks for both correlated and uncorrelated devices. Firstly, we observe that time splitting causes higher interference, since the channel has a higher probability of being occupied. However, in high-LoS scenarios, high interference has a limited impact on UAV-assisted networks. This is because devices are more likely to establish LoS links with cluster UAVs and NLoS links with UAVs deployed at other clusters, and the signals received over LoS links are much stronger than those received over NLoS links. Meanwhile, when the UAV splits its time to serve multiple IoT devices, it is more time efficient: the UAV can use the data generation time of one device to serve other devices.
## II System Model
An uplink UAV-assisted IoT wireless communication network is considered, where IoT devices are spatially clustered and UAVs are deployed above the cluster centers to collect information from the IoT devices. The locations of the IoT cluster centers follow a PPP \(\Phi_{u}\) with density \(\lambda_{u}\), and IoT devices are uniformly distributed within the clusters with radii \(r_{c}\). Hence, the locations of IoT devices follow a MCP, and UAVs are assumed to hover above the cluster centers at a fixed altitude \(h\), as shown in Fig. 1 (a). The IoT devices send status updates to their cluster-center UAV, and fractional path-loss inversion power control with compensation factor \(\epsilon_{\{l,n\}}\) is used in this work, where the subscripts \(l\) and \(n\) correspond to LoS and NLoS links with the cluster UAV, respectively. Besides, a standard path-loss model with exponents \(\alpha_{\{l,n\}}\) and the Nakagami-m channel fading model are used in this work. The notation of this work is summarized in Table I.
Since we consider a cluster of IoT devices, they can either monitor different types of physical random processes or the same type of process; accordingly, we consider two scenarios for IoT devices in this work: (i) uncorrelated IoT devices, which monitor different types of processes, and (ii) correlated IoT devices, which monitor the same process, as shown in Fig. 1 (b). To characterize the freshness of the information transmitted by the IoT devices, we compute the mean peak age-of-information (PAoI) at the UAV side for devices in the two scenarios. Specifically, we consider two load models, (i) splitting the bandwidth and (ii) splitting the time, to avoid inter-cluster interference. To ensure a stable system and avoid infinite PAoI, we consider a non-preemptive discipline, which means that IoT devices can only generate new updates after the previous updates have been successfully transmitted.
Without loss of generality, we perform the analysis for a typical device, which is randomly selected from a random IoT cluster. Applying Slivnyak's theorem, the center of this cluster (the typical cluster), which contains the typical IoT device, is located at the origin. Consequently, the typical UAV is positioned at the origin as well.
### _Communication Channel_
As mentioned, we consider a truncated path-loss inversion power control. Thus, the transmit power of a device depends on the established LoS/NLoS channel and is given by
\[p_{t}=\begin{cases}p_{t,l}=\rho_{l}R_{u,l}^{\alpha_{l}\epsilon_{l}},\text{in the case of LoS},\\ p_{t,n}=\rho_{n}R_{u,n}^{\alpha_{n}\epsilon_{n}},\text{in the case of NLoS},\end{cases} \tag{1}\]
where the subscript \(\{l,n\}\) denotes the established LoS/NLoS channel between the IoT device and the serving UAV: \(R_{u,l}\), \(\rho_{l}\), \(\epsilon_{l}\) and \(\alpha_{l}\) denote the Euclidean transmission distance, the power control parameter (which adjusts the received power at the UAV), the compensation factor, and the path-loss exponent in the LoS scenario, respectively; \(R_{u,n}\), \(\rho_{n}\), \(\epsilon_{n}\), \(\alpha_{n}\) denote the same set of parameters for the NLoS scenario. Assuming that the maximum transmit power is \(p_{u}\), the range of \(\epsilon_{\{l,n\}}\) is
\[0\leq\epsilon_{\{l,n\}}\leq\frac{1}{\alpha_{\{l,n\}}}\log_{\sqrt{r_{c}^{2}+h^{2}}}\left(\frac{p_{u}}{\rho_{\{l,n\}}}\right), \tag{2}\]
in which \(h\) is the altitude of the UAV. Consequently, the received power at the UAV is
\[p_{r}(R_{u})=\begin{cases}p_{r,l}(R_{u,l})=\eta_{l}p_{t,l}G_{l}R_{u,l}^{-\alpha_{l}},&\text{LoS link},\\ p_{r,n}(R_{u,n})=\eta_{n}p_{t,n}G_{n}R_{u,n}^{-\alpha_{n}},&\text{NLoS link},\end{cases} \tag{3}\]
in which \(G_{l}\) and \(G_{n}\) are the fading gains, which follow Gamma distributions with shape and scale parameters (\(m_{l},\frac{1}{m_{l}}\)) and (\(m_{n},\frac{1}{m_{n}}\)) for LoS and NLoS transmissions, respectively, and \(\eta_{l}\) and \(\eta_{n}\) denote the mean additional losses in the LoS and NLoS scenarios, respectively. Given the horizontal distance \(r\) between the serving UAV and the device, the occurrence probability of a LoS link is given in [43] as
\[P_{l}(r)=\frac{1}{1+a\exp(-b(\frac{180}{\pi}\arctan(\frac{h}{r})-a))}, \tag{4}\]
where \(a\) and \(b\) are two environment variables. Consequently, the probability of NLoS link is \(P_{n}(r)=1-P_{l}(r)\).
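For concreteness, the power control of Eqs. (1)-(2) and the LoS model of Eq. (4) can be sketched as follows; all parameter values below, including the environment constants \(a\) and \(b\), are placeholders of our own, not values adopted in this paper:

```python
# Illustrative implementation of the LoS probability, Eq. (4), and the
# truncated path-loss inversion power control, Eqs. (1)-(2).
# All numerical values are placeholder assumptions.
import numpy as np

a, b, h = 9.61, 0.16, 100.0          # environment constants, UAV altitude (m)
alpha_l, alpha_n = 2.1, 3.5          # path-loss exponents
rho_l, rho_n = 1e-8, 1e-8            # power-control targets (W)
eps_l, eps_n = 0.8, 0.8              # compensation factors
p_max = 0.2                          # device maximum transmit power (W)

def p_los(r):
    """LoS probability for horizontal distance r, Eq. (4)."""
    theta_deg = np.degrees(np.arctan(h / r))
    return 1.0 / (1.0 + a * np.exp(-b * (theta_deg - a)))

def transmit_power(r, los):
    """Truncated fractional path-loss inversion, Eq. (1)."""
    R = np.hypot(r, h)               # Euclidean device-UAV distance
    alpha, rho, eps = (alpha_l, rho_l, eps_l) if los else (alpha_n, rho_n, eps_n)
    return min(rho * R ** (alpha * eps), p_max)  # truncation enforces Eq. (2)

for r in [50.0, 200.0, 500.0]:
    print(f"r={r:5.0f} m: P_LoS={p_los(r):.3f}, "
          f"p_t(LoS)={transmit_power(r, True):.2e} W")
```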
### _Load Modeling_
Recall that we consider two scenarios for IoT devices, (i) correlated IoT devices, or (ii) uncorrelated IoT devices, in a cluster, and UAVs fairly allocate their resources to serve these devices to avoid inter-cluster interference. Let \(N_{d}\) be
Fig. 1: Illustration of system setup: **(a)** UAVs are deployed above the cluster centers to collect updates from IoT devices and IoT devices are uniformly distributed within the clusters, and **(b)** (i) uncorrelated devices, which monitor different physical processes, and (ii) correlated devices, which monitor the same physical process.
the number of IoT devices in a cluster and \(B\) be the available bandwidth of UAVs. We consider the following two load models and analyze the impact of these two load models (resource block allocation strategy) on the mean PAoI. Let \(T\) be the time unit, e.g., sec or min.
**Definition 1** (Load Model 1: Bandwidth Splitting).: _Equal allocation of bandwidth to serve all the IoT devices. The serving UAV equally splits the available transmission bandwidth among the IoT devices. Thus, each IoT device is assigned a bandwidth equal to \(B/N_{d}\) and the data transmission time of a single device is \(N_{d}T\)._
**Definition 2** (Load Model 2: Time Splitting).: _Exclusive allocation of bandwidth to serve one IoT device. At each transmission time slot, the UAV periodically selects one IoT device and allocates the entire bandwidth to it. Besides, the UAV ensures that all the IoT devices are associated equally to transmit the data in the long term. Thus, each IoT device is assigned a bandwidth equal to \(B\) and the data transmission time of a single device is \(T\)._
Therefore, in both load models, the interference comes from the devices in other IoT clusters that transmit on the same resource block. Besides, noting that a device only transmits data when it is active, e.g., when it has a new update, the interference is also a function of the activity probability of the devices. To simplify the system model, we assume the number of devices is the same for all clusters. Therefore, in both Load Model 1 and Load Model 2, the density of IoT devices that might transmit on a specific resource block is \(\lambda_{u}\). Let \(\Phi_{i}\) be the point set containing the locations of IoT devices that transmit on a specific resource block, and let \(\bar{\pi}\) be the mean activity probability. Therefore, the density of interferers is \(\bar{\pi}\lambda_{u}\), and \(\Phi_{i}\) is obtained by displacement and thinning of \(\Phi_{u}\).
Without loss of generality, we perform the analysis at the typical IoT cluster, which is centered at the origin, for the typical device located within this cluster. If the signal-to-interference-plus-noise ratio (SINR) received at the UAV is above a predefined threshold, the update from an IoT device is successfully delivered. We define the probability of successful transmission over a typical device-to-UAV link as the success probability. The conditional success probability, conditioned on an arbitrary but fixed realization of \(\Phi_{u}\) and \(\Phi_{i}\), is provided next.
**Definition 3** (Conditional Success Probability).: _The conditional success probability is defined as,_
\[P_{s}(R_{u})=\mathbb{P}(\mathrm{SINR}(R_{u})>\theta|\Phi_{i},\Phi_{u}), \tag{5}\]
_in which \(\theta\) is the SINR threshold,_
\[\mathrm{SINR}(R_{u})=\frac{p_{r}(R_{u})}{I+\sigma^{2}}, \tag{6}\]
_where \(I\) denotes the aggregated interference,_
\[I=\sum_{u_{i}\in\Phi_{i}^{ll}}\eta_{l}G_{l}p_{t,l,u_{i}}D_{u_{i}}^{-\alpha_{l}}+\sum_{u_{i}\in\Phi_{i}^{ln}}\eta_{n}G_{n}p_{t,l,u_{i}}D_{u_{i}}^{-\alpha_{n}}+\sum_{u_{i}\in\Phi_{i}^{nl}}\eta_{l}G_{l}p_{t,n,u_{i}}D_{u_{i}}^{-\alpha_{l}}+\sum_{u_{i}\in\Phi_{i}^{nn}}\eta_{n}G_{n}p_{t,n,u_{i}}D_{u_{i}}^{-\alpha_{n}}, \tag{7}\]
_in which \(i_{0}\) denotes the location of the typical device; \(\Phi_{i}^{ll}\), \(\Phi_{i}^{ln}\), \(\Phi_{i}^{nl}\), and \(\Phi_{i}^{nn}\) are subsets of \(\Phi_{i}\setminus\{i_{0}\}\) denoting the locations of interfering IoT devices, where the first superscript indicates the LoS/NLoS state of the link to their own cluster UAV (which determines their transmit power) and the second superscript indicates the LoS/NLoS state of the link to the typical UAV (which determines the path loss); \(D_{\{\cdot\}}\) denotes the distances between the interfering devices and the typical UAV; and \(p_{t,\{l,n\},u_{i}}\) denotes the transmit power of an interferer that establishes a LoS/NLoS link with its cluster UAV._
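As a sanity check on Definition 3, the following hedged sketch draws one random realization of the interference in Eq. (7) and evaluates the success event of Eq. (5). The interferer distances, powers, and fading draws below are illustrative stand-ins, not the exact point-process statistics of \(\Phi_{i}\), and a single LoS class is used for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
m_l, alpha_l, eta_l = 3, 2.0, 1.0        # Nakagami parameter, path-loss exponent, excess loss
sigma2, theta = 1e-12, 1.0               # noise power and SINR threshold

# Serving link: fixed distance and power, Gamma(m, 1/m) fading as in the text.
p_r = eta_l * 1e-3 * rng.gamma(m_l, 1.0 / m_l) * 150.0**(-alpha_l)

# One realization of the aggregate interference of Eq. (7) (single class).
D = rng.uniform(200.0, 2000.0, size=50)  # interferer-to-UAV distances [m]
G = rng.gamma(m_l, 1.0 / m_l, size=50)   # fading gains of the interferers
I = np.sum(eta_l * 1e-3 * G * D**(-alpha_l))

sinr = p_r / (I + sigma2)                # Eq. (6)
print(sinr, sinr > theta)                # success event of Eq. (5)
```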
Since the cluster UAV serves multiple IoT devices and the locations of the devices are randomly distributed within the IoT clusters, the success probabilities vary across devices. Therefore, we consider the SINR meta distribution, which is the complementary cumulative distribution function (CCDF) of the success probability [44, 45, 46].
**Definition 4** (SINR Meta Distribution).: _The SINR meta distribution, the CCDF of the conditional success probability, is defined in [44] and given by_
\[\bar{F}_{P_{s}}(\gamma)=\mathbb{P}(P_{s}(R_{u})>\gamma), \tag{8}\]
_where \(\gamma\in[0,1]\)._
### _Age of Information_
As mentioned, AoI characterizes the freshness of the data. Let \(\Delta(t)\) track the AoI evolution at time slot \(t\). Let \(k\) be the index of the update at the IoT devices and \(G(k)\) denote the generation time of the \(k\)-th update. The AoI evolution of the \(k\)-th update is given in [3] as
\[\Delta_{k}(t+1)=\begin{cases}\Delta_{k}(t)+1,&\text{if transmission fails},\\ t+1-G(k),&\text{otherwise}.\end{cases} \tag{9}\]
At the serving UAV, \(\Delta(t)\) increases with time and drops upon the arrival of new updates. In this work, we assume that update generation and successful transmissions occur at the end of a time slot. Besides, for a unit time slot \(T\), the generation probability of an update is \(\lambda_{a}\), and the generation processes of the IoT devices are independent and follow a geometric distribution.
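The recursion in Eq. (9) is easy to simulate directly. The following minimal sketch tracks the age of a single device under a Bernoulli transmission-success model and geometric (per-slot Bernoulli) update generation; \(p_{s}\) and \(\lambda_{a}\) are illustrative values, and the recorded peaks approximate the PAoI defined below.

```python
import random

random.seed(0)
p_s, lam_a, T_slots = 0.7, 0.3, 10_000   # illustrative success/arrival probabilities

age, gen_time, has_update, peaks = 0, 0, False, []
for t in range(T_slots):
    # Non-preemptive: a new update can only appear when none is pending.
    if not has_update and random.random() < lam_a:
        gen_time, has_update = t, True
    age += 1                             # AoI grows by one per slot, Eq. (9)
    if has_update and random.random() < p_s:
        peaks.append(age)                # PAoI: age just before reception
        age = t + 1 - gen_time           # AoI resets per Eq. (9)
        has_update = False

print("empirical mean PAoI:", sum(peaks) / len(peaks))
```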
In this work, we characterize the mean peak AoI, defined as the average value of the age immediately before the reception of a new update, as shown in Fig. 2.
**Definition 5** (Mean PAoI).: _The mean PAoI is defined as the mean value of PAoI, in which PAoI is measured immediately before the reception of an update from UAV side,_
\[\overline{\Delta}=\mathbb{E}[\Delta_{k}|\Phi_{u},\Phi_{i}]=\mathbb{E}[T_{i}+T_ {\mathrm{tra}}|\Phi_{u},\Phi_{i}], \tag{10}\]
_in which \(\Delta_{k}\) denotes the PAoI of the \(k\)-th update, \(T_{i}\) is the inter-arrival time, \(T_{\mathrm{tra}}\) denotes the transmission time, and the expectation is over the locations of the devices and the fading gains._
In the case of uncorrelated devices, the AoI evolves independently for each monitored physical process, as shown in Fig. 2.
In the case of correlated devices, we consider all the devices working independently and assume that the cluster UAV only keeps the most recent update. That is, the AoI only drops when an arriving update was generated later than the information the UAV currently holds, as shown in Fig. 3 (a), where the green crosses denote updates dropped by the UAV. Besides, we plot the AoI evolution under the two load models in Fig. 3, where the downward arrows denote the generation of new updates and the upward arrows denote the successful transmission of an update. Moreover, Fig. 3 (a) shows that the devices can transmit an update immediately but have a longer transmission time, while Fig. 3 (b) shows that the devices need to wait for their transmission time slot but have a shorter transmission time.
The main difference between the PAoI of correlated and uncorrelated devices is that the PAoI of correlated devices contains only one successful transmission, while the PAoI of uncorrelated devices contains two successful transmissions, since we assume that a device can only generate a new update once the previous update has been successfully transmitted.
In the following sections, we analyze the mean PAoI under two load models and show the impact of the number of devices on the system performance.
## III Success Probability Analysis
In this section, we analyze the success probability under the two load models and the two scenarios for IoT devices. To do so, we first compute the probability density function (PDF) of the transmit power of the devices and then compute the success probability. Since the conditional success probability is a function of the density of interferers, we state the equations for the conditional activity probability of devices in this section; the details of the conditional activity probabilities are provided in the next section.
As mentioned previously, we consider truncated path-loss inversion power control in this uplink network. The transmit power of a device is a function of the transmission distance, and its distribution is given in the following lemma.
**Lemma 1** (Distribution of the Transmit Power).: _The PDF of the transmit power of a generic active device that establishes a LoS/NLoS link with its cluster UAV is_
\[f_{p_{t},l}(p)=\begin{cases}\frac{2P_{l}((\frac{p}{\rho_{l}})^{1/(\epsilon_{l}\alpha_{l})})(\frac{p}{\rho_{l}})^{2/(\epsilon_{l}\alpha_{l})-1}}{(\epsilon_{l}\alpha_{l})\rho_{l}r_{c}^{2}},&\rho_{l}<p<p_{u},\\ \int_{p_{u}}^{\infty}\frac{2P_{l}((\frac{x}{\rho_{l}})^{1/(\epsilon_{l}\alpha_{l})})(\frac{x}{\rho_{l}})^{2/(\epsilon_{l}\alpha_{l})-1}}{(\epsilon_{l}\alpha_{l})\rho_{l}r_{c}^{2}}\mathrm{d}x,&p=p_{u},\end{cases} \tag{11}\]

\[f_{p_{t},n}(p)=\begin{cases}\frac{2P_{n}((\frac{p}{\rho_{n}})^{1/(\epsilon_{n}\alpha_{n})})(\frac{p}{\rho_{n}})^{2/(\epsilon_{n}\alpha_{n})-1}}{(\epsilon_{n}\alpha_{n})\rho_{n}r_{c}^{2}},&\rho_{n}<p<p_{u},\\ \int_{p_{u}}^{\infty}\frac{2P_{n}((\frac{x}{\rho_{n}})^{1/(\epsilon_{n}\alpha_{n})})(\frac{x}{\rho_{n}})^{2/(\epsilon_{n}\alpha_{n})-1}}{(\epsilon_{n}\alpha_{n})\rho_{n}r_{c}^{2}}\mathrm{d}x,&p=p_{u},\end{cases} \tag{12}\]
_in which \(P_{l}(x)\) and \(P_{n}(x)\) are the LoS/NLoS probabilities as defined in (4)._
Proof.: Similar to [47, 48], in the uplink transmission of an MCP, the transmission distance has the PDF \(f_{R_{u,l}}(r)=\frac{2rP_{l}(r)}{r_{c}^{2}}\), and the transmit power of a device is \(p_{t,l}=\rho_{l}R_{u,l}^{\epsilon_{l}\alpha_{l}}\). The PDF of the transmit power is obtained by the change of variables \(r=(p_{t,l}/\rho_{l})^{1/(\epsilon_{l}\alpha_{l})}\) in \(f_{R_{u,l}}(r)\).
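The change of variables in Lemma 1 can be verified numerically. The sketch below samples device positions uniformly in the cluster and compares the empirical transmit-power histogram against the continuous part of Eq. (11); for this single-branch illustration we take \(P_{l}(\cdot)\equiv 1\), and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
r_c, h = 100.0, 100.0                     # cluster radius and UAV altitude [m]
rho_l, alpha_l, eps_l, p_u = 1e-7, 2.0, 0.9, 5e-4

r = r_c * np.sqrt(rng.uniform(size=200_000))       # uniform in the disk
R = np.hypot(r, h)                                  # Euclidean distance
p = np.minimum(rho_l * R**(alpha_l * eps_l), p_u)   # Eq. (1), truncated at p_u

def f_pt(pv):
    # Continuous part of Eq. (11) with P_l(.) set to 1.
    return (2.0 * (pv / rho_l)**(2.0 / (eps_l * alpha_l) - 1.0)
            / (eps_l * alpha_l * rho_l * r_c**2))

hist, edges = np.histogram(p[p < p_u], bins=40)
centers = 0.5 * (edges[1:] + edges[:-1])
expected = len(p) * f_pt(centers) * np.diff(edges)
print(np.max(np.abs(hist - expected) / expected))   # relative error should be small
```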
Before analyzing the interference, we first need to obtain the activity probability of the devices. Recall that we assume a non-preemptive discipline: a device can only generate a new update once the previous update has been successfully transmitted, and the device is active when it has a new update.
In the case of Load Model 1, the state transition diagram is shown in Fig. 4. The steady-state activity probability for both load models when \(N_{d}=1\) is
\[\pi_{1}=\frac{\lambda_{a}}{\lambda_{a}+P_{s}(R_{u})}. \tag{13}\]
The activity probabilities when \(N_{d}>1\) are provided in the following lemma. The activity probability under Load Model 2 is slightly different owing to the wait for the transmission time slot; hence, in the following lemma, we provide the formulas
Fig. 3: Illustration of the AoI evolution of correlated devices, **(a)** Load Model 1, split the bandwidth, and **(b)** Load Model 2, split the time.
Fig. 2: Illustration of the AoI evolution of a typical physical process of uncorrelated devices.
first and details are provided in the PAoI analysis under each load model in the next section.
**Lemma 2** (Conditional Activity Probability).: _Conditioned on the locations of devices, the activity probability of the typical device under Load Model 1 when \(N_{d}>1\) is_
\[\pi_{1,l_{1}}=\frac{\lambda_{a}^{{}^{\prime}}}{\lambda_{a}^{{}^{\prime}}+P_{s}(R _{u})}, \tag{14}\]
_in which \(P_{s}(R_{u})\) is the success probability of the device as defined in (5), and \(\lambda_{a}^{{}^{\prime}}\) reflects the impact of the number of devices on the conditional activity probability, which will be discussed in Section IV._
_Conditioned on the locations of devices, the activity probability of the typical device under Load Model 2 is_
\[\pi_{1,l_{2}}=P_{s}(R_{u})\lambda_{a}^{{}^{\prime\prime}}+(1-P_{s}(R_{u})), \tag{15}\]
_in which \(\lambda_{a}^{{}^{\prime\prime}}\) reflects the impact of \(N_{d}\) on the conditional activity probability; more details are provided in Section IV._
It is clear that the conditional activity probability is a function of the success probability, which depends on the realization of \(\Phi_{u}\) and \(\Phi_{i}\), and of the interference density, which depends on the mean activity probability. To solve this coupled problem, we propose the following method, which computes the mean activity probability accurately but requires an iteration over the arrival rate. We first let \(\bar{\pi}\) denote the mean activity probability, \(\bar{\pi}=\mathbb{E}[\{\pi_{1,l_{1}},\pi_{1,l_{2}}\}]\), and compute the first and second moments of the conditional success probability given \(\bar{\pi}\). We therefore first derive the conditional success probability and its first and second moments.
**Lemma 3** (Conditional Success Probability).: _Conditioned on the transmission distance between the IoT device and the cluster UAV, the success probability in the LoS/NLoS scenario is given by_
\[P_{s,l}(R_{u})=P_{l}(R_{u})\sum_{k=1}^{m_{l}}\binom{m_{l}}{k}(-1) ^{k+1}\exp(-g_{l}(R_{u})\sigma^{2})\] \[\prod_{c_{1},c_{2}\in\{l,n\}}\exp\bigg{(}-2\pi\bar{\pi}\lambda_{u }\int_{h}^{\infty}\int_{0}^{p_{u}}\bigg{[}1-\kappa(g_{l}(R_{u}),c_{1},c_{2}) \bigg{]}\] \[\times zP_{c_{1}}(\sqrt{z^{2}-h^{2}})f_{p_{t,c_{2}}}(p)\mathrm{d} p\mathrm{d}z\bigg{)}, \tag{16}\]
\[P_{s,n}(R_{u})=P_{n}(R_{u})\sum_{k=1}^{m_{n}}\binom{m_{n}}{k}(-1) ^{k+1}\exp(-g_{n}(R_{u})\sigma^{2})\] \[\prod_{c_{1},c_{2}\in\{l,n\}}\exp\bigg{(}-2\pi\bar{\pi}\lambda_{u }\int_{h}^{\infty}\int_{0}^{p_{u}}\bigg{[}1-\kappa(g_{n}(R_{u}),c_{1},c_{2}) \bigg{]}\] \[\times zP_{c_{1}}(\sqrt{z^{2}-h^{2}})f_{p_{t,c_{2}}}(p)\mathrm{d} p\mathrm{d}z\bigg{)}, \tag{17}\]
_where \(g_{l}(r)=k\beta_{2}(m_{l})m_{l}\theta r^{(1-\epsilon_{l})\alpha_{l}}(\rho_{l}\eta_{l})^{-1}\), \(g_{n}(r)=k\beta_{2}(m_{n})m_{n}\theta r^{(1-\epsilon_{n})\alpha_{n}}(\rho_{n}\eta_{n})^{-1}\), \(\beta_{2}(m)=(m!)^{-1/m}\), \(\bar{\pi}\) is the mean activity probability, and_
\[\kappa(g(r),c_{1},c_{2})=\bigg{(}\frac{m_{c_{1}}}{m_{c_{1}}+g(r)\eta_{c_{1}}pz^{-\alpha_{c_{1}}}}\bigg{)}^{m_{c_{1}}}. \tag{18}\]
Proof.: Similar to [16, Theorem 1].
Consequently, the first and the second moments of the conditional success probability are given in the following lemma. The moments are functions of \(\bar{\pi}\), since the mean activity probability determines the density of the interferers.
**Lemma 4** (The Moments of the Conditional Success Probability).: _The \(b\)-th moment of the conditional success probability is derived by taking the expectation over the locations of the interferers and the distance to the serving UAV, given by_
\[M_{b}(\bar{\pi})=M_{b,l}(\bar{\pi})+M_{b,n}(\bar{\pi})=\mathbb{E}[P_{s,l}^{b}( R_{u})]+\mathbb{E}[P_{s,n}^{b}(R_{u})], \tag{19}\]
_where \(b=1,2\) denotes the first and the second moment, and are given by_
\[M_{b,l}(\bar{\pi}) =\sum_{k_{1}=1}^{m_{l}}\cdots\sum_{k_{b}=1}^{m_{l}}\binom{m_{l}}{k_{1}}\cdots\binom{m_{l}}{k_{b}}(-1)^{k_{1}+\cdots+k_{b}+b}\] \[\int_{0}^{r_{c}}P_{l}(r)\mathcal{L}(g_{l,1}(r),\cdots,g_{l,b}(r))f_{R_{u}}(r)\mathrm{d}r,\] \[M_{b,n}(\bar{\pi}) =\sum_{k_{1}=1}^{m_{n}}\cdots\sum_{k_{b}=1}^{m_{n}}\binom{m_{n}}{k_{1}}\cdots\binom{m_{n}}{k_{b}}(-1)^{k_{1}+\cdots+k_{b}+b}\] \[\int_{0}^{r_{c}}P_{n}(r)\mathcal{L}(g_{n,1}(r),\cdots,g_{n,b}(r))f_{R_{u}}(r)\mathrm{d}r, \tag{20}\]
_where \(g_{l,i}(r)=k_{i}\beta_{2}(m_{l})m_{l}\theta r^{(1-\epsilon_{l})\alpha_{l}}(\rho_{l}\eta_{l})^{-1}\), \(g_{n,i}(r)=k_{i}\beta_{2}(m_{n})m_{n}\theta r^{(1-\epsilon_{n})\alpha_{n}}(\rho_{n}\eta_{n})^{-1}\), and \(\mathcal{L}(g_{1}(r),\cdots,g_{b}(r))\) is shown in Appendix A._
Proof.: Similar to [49, Lemma 2].
To simplify the analysis, we use the beta approximation for the meta distribution; our previous work [49, Lemma 2] and the existing literature [44] have shown the accuracy of this approximation. The SINR meta distribution of the proposed uplink network is given in the following lemma.
**Lemma 5** (SINR Meta Distribution).: _The SINR meta distribution, which is defined as the CCDF of the conditional success probability is approximated by_
\[\bar{F}_{P_{s}}(\gamma) \approx 1-I_{\gamma}(\frac{M_{1}(\bar{\pi})(M_{1}(\bar{\pi})-M_{2}( \bar{\pi}))}{M_{2}(\bar{\pi})-M_{1}^{2}(\bar{\pi})},\] \[\frac{(M_{1}(\bar{\pi})-M_{2}(\bar{\pi}))(1-M_{1}(\bar{\pi}))}{M_{ 2}(\bar{\pi})-M_{1}^{2}(\bar{\pi})}), \tag{21}\]
_where \(M_{1}(\bar{\pi})\) and \(M_{2}(\bar{\pi})\) are the first and the second moment of the proposed network,_
\[I_{x}(a,b) =\frac{\int_{0}^{x}t^{a-1}(1-t)^{b-1}\mathrm{d}t}{B(a,b)},\] \[B(a,b) =\int_{0}^{1}t^{a-1}(1-t)^{b-1}\mathrm{d}t. \tag{22}\]
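As an illustration of Lemma 5, the beta approximation is straightforward to evaluate once the two moments are known. In the sketch below, the moment values are hypothetical placeholders (any pair with \(M_{2}>M_{1}^{2}\) works), and the regularized incomplete beta function \(I_{x}(a,b)\) is the beta CDF.

```python
from scipy.stats import beta

M1, M2 = 0.7, 0.55                          # hypothetical moments (M2 > M1^2)
a = M1 * (M1 - M2) / (M2 - M1**2)           # first shape parameter, Eq. (21)
b = (M1 - M2) * (1.0 - M1) / (M2 - M1**2)   # second shape parameter, Eq. (21)

def meta_dist(gamma):
    """Approximate CCDF of the conditional success probability, Eq. (21)."""
    return 1.0 - beta.cdf(gamma, a, b)      # I_gamma(a, b) is the beta CDF

print(meta_dist(0.5), meta_dist(0.9))
```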
Note from (14) and (15) that the conditional activity probability is a function of the success probability. Thus, we can obtain the distribution of the conditional activity probability as
\[\mathbb{P}(\pi_{1,l_{1}}<x) =\mathbb{P}\bigg{(}\frac{\lambda_{a}^{{}^{\prime}}}{\lambda_{a}^{ ^{\prime}}+P_{s}(R_{u})}<x\bigg{)}\] \[=\mathbb{P}\bigg{(}P_{s}(R_{u})>\frac{1-x}{x}\lambda_{a}^{{}^{ \prime}}\bigg{)}=\bar{F}_{P_{s}}\bigg{(}\frac{1-x}{x}\lambda_{a}^{{}^{\prime}} \bigg{)},\] \[\mathbb{P}(\pi_{1,l_{2}}<x) =\mathbb{P}(P_{s}(R_{u})\lambda_{a}^{{}^{\prime\prime}}+(1-P_{s}(R _{u}))<x)\]
\[=\bar{F}_{P_{s}}\bigg{(}\frac{1-x}{1-\lambda_{a}^{\prime\prime}}\bigg{)}, \tag{23}\]
in which \(\lambda_{a}^{{}^{\prime}}\) and \(\lambda_{a}^{{}^{\prime\prime}}\) are provided in (28) and (38) and reflect the influence of the number of served devices. Therefore, the mean activity probability is obtained by taking the expectation, as shown in the following theorem.
**Theorem 1** (Activity Probability).: _The unconditional (mean) activity probability \(\bar{\pi}\) is given by_
\[\bar{\pi}= \int_{0}^{1}\big{[}1-\bar{F}_{P_{s}}(g(x))\big{]}\mathrm{d}x\] \[= \int_{0}^{1}I_{g(x)}\bigg{(}\frac{M_{1}(\bar{\pi})(M_{1}(\bar{\pi})-M_{2}(\bar{\pi}))}{M_{2}(\bar{\pi})-M_{1}^{2}(\bar{\pi})},\] \[\frac{(M_{1}(\bar{\pi})-M_{2}(\bar{\pi}))(1-M_{1}(\bar{\pi}))}{M_{2}(\bar{\pi})-M_{1}^{2}(\bar{\pi})}\bigg{)}\mathrm{d}x, \tag{24}\]
_in which \(g(x)\) is \(\min(1,(\frac{(1-x)\lambda_{a}^{\prime}}{x})^{+})\) in Load Model 1 and \(\min(1,(\frac{1-x}{1-\lambda_{a}^{\prime\prime}})^{+})\) in Load Model 2, where \((x)^{+}=\max(x,0)\). Observe that both sides of (24) contain the unknown variable \(\bar{\pi}\), e.g., \(M_{1}(\bar{\pi})\) and \(M_{2}(\bar{\pi})\) are functions of \(\bar{\pi}\); this equation can be solved by iteration: for a specific \(\bar{\pi}\), we find the \(\lambda_{a}\) which makes the equality hold._
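A minimal sketch of the iteration suggested by Theorem 1 is given below. The moment functions stand in for the integrals of Lemma 4 and are hypothetical monotone surrogates, used only to illustrate the fixed-point structure; the \(g(x)\) shown is the Load Model 1 case with an illustrative \(\lambda_{a}^{\prime}\).

```python
import numpy as np
from scipy.stats import beta

# Hypothetical monotone surrogates for the moments of Lemma 4.
def M1(pi): return 0.9 / (1.0 + 2.0 * pi)
def M2(pi): return M1(pi)**2 + 0.05

lam_a_eff = 0.3                                  # lambda_a' = N_d * lambda_a
xs = np.linspace(1e-3, 1.0 - 1e-3, 500)          # grid on (0, 1)

pi_bar = 0.5
for _ in range(100):                             # fixed-point iteration on Eq. (24)
    m1, m2 = M1(pi_bar), M2(pi_bar)
    a = m1 * (m1 - m2) / (m2 - m1**2)
    b = (m1 - m2) * (1.0 - m1) / (m2 - m1**2)
    g = np.minimum(1.0, (1.0 - xs) * lam_a_eff / xs)   # g(x), Load Model 1
    pi_new = float(np.mean(beta.cdf(g, a, b)))         # integral in Eq. (24)
    if abs(pi_new - pi_bar) < 1e-6:
        break
    pi_bar = pi_new
print("mean activity probability:", pi_bar)
```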
Having obtained the mean activity probability and the conditional success probability of an individual device, we can proceed to the next step: computing the mean PAoI.
## IV Mean PAoI Analysis
In this section, we compute the mean PAoI under the two load models and the two scenarios for IoT devices (correlated and uncorrelated). Specifically, we use the approximation that all the correlated devices have the same success probability. Besides, while the analytical results for the mean PAoI when \(N_{d}>1\) are provided in the following two subsections, the result for \(N_{d}=1\) is the same for both load models, namely
\[\bar{\Delta}=\mathbb{E}_{R_{u}}[T_{i}+T_{\mathrm{tra}}]=\mathbb{E}_{R_{u}} \bigg{[}\frac{2T}{P_{s}(R_{u})}+\frac{T}{\lambda_{a}}\bigg{]}, \tag{25}\]
in which \(\frac{T}{P_{s}(R_{u})}+\frac{T}{\lambda_{a}}\) is the mean inter-arrival time and \(\frac{T}{P_{s}(R_{u})}\) is the mean transmission time.
### _Mean PAoI Under Load Model 1_
In this subsection, we derive the mean PAoI under Load Model 1, in which the cluster UAV splits the bandwidth to serve correlated IoT devices. Recall that \(N_{d}\) denotes the number of devices within the cluster and that, for a unit of time \(T\), the arrival rate of an update is \(\lambda_{a}\) and the generation process follows a geometric distribution. When the UAV splits the bandwidth among the devices, the transmission time increases with the number of served devices: \(T_{m1}=N_{d}T\). A device is active when it has an update to transmit or when it is transmitting an update. Hence, the activity probability, computed as a time-average ratio, in Load Model 1 is
\[\pi_{1}=\frac{\bar{T}_{\mathrm{tra}}}{\bar{T}_{\mathrm{tra}}+\bar{T}_{i}}, \tag{26}\]
in which \(\bar{T}_{\mathrm{tra}}\) and \(\bar{T}_{i}\) are the mean transmission time and the mean data inter-arrival time, respectively. The successful transmission time follows a geometric distribution,
\[\mathbb{P}(T_{\mathrm{tra}}=nT_{m1})=(1-P_{s}(R_{u}))^{n-1}P_{s} (R_{u}),n=1,2,3,\cdots,\] \[\bar{T}_{i}=\frac{1}{\lambda_{a}}. \tag{27}\]
Consequently, the conditional activity probability and \(\lambda_{a}^{{}^{\prime}}\) in (14) are, respectively, given by
\[\pi_{1,l_{1}}=\frac{N_{d}\lambda_{a}}{N_{d}\lambda_{a}+P_{s}(R_{u })},\] \[\lambda_{a}^{{}^{\prime}}=N_{d}\lambda_{a}. \tag{28}\]
#### Iv-A1 Correlated Devices
Recall that we randomly choose one device from the cluster as the typical device; we use \(\Delta_{i}(t)\) to denote the AoI of the \(i\)-th device and \(\Delta_{i,k}\) to denote the PAoI of the \(i\)-th device, in which the subscript \(k\) denotes the index of the update. For the typical device \(o\), the probability mass function (PMF) of \(\Delta_{o}\) is given in the following lemma.
**Lemma 6** (Distribution of the PAoI under Load Model 1).: _The PMF of the PAoI under Load Model 1 is given by,_
\[f_{\Delta_{o}}(n\times T)=\mathbb{P}(\Delta_{o,k}=n)\] \[= \sum_{n_{1},n_{2}\in\mathcal{N}_{1}}\lambda_{a}P_{s}(R_{u})(1- \lambda_{a})^{n_{1}-1}(1-P_{s}(R_{u}))^{n_{2}-1},\] \[(n=N_{d}+2,N_{d}+3,\cdots),\] \[f_{\Delta_{o}}(1\times T)=f_{\Delta_{o}}(2\times T)=\cdots=f_{ \Delta_{o}}((N_{d}+1)\times T)=0, \tag{29}\]
_where \(\mathcal{N}_{1}=\{n_{1},n_{2}:n_{1}+n_{2}N_{d}=n\}\). Accordingly, the CCDF of PAoI is given by_
\[\bar{F}_{\Delta_{o}}(n\times T)=\mathbb{P}(\Delta_{o,k}>n)\] \[= \sum_{l=n+1}^{\infty}\sum_{n_{1}+n_{2}N_{d}=l}\lambda_{a}P_{s}(R _{u})(1-\lambda_{a})^{n_{1}-1}(1-P_{s}(R_{u}))^{n_{2}-1},\] \[(n=N_{d}+1,N_{d}+2,\cdots),\] \[\bar{F}_{\Delta_{o}}(1\times T)=\bar{F}_{\Delta_{o}}(2\times T)= \cdots=\bar{F}_{\Delta_{o}}((N_{d}+1)\times T)=1. \tag{30}\]
Proof.: The PAoI of the typical device in the case of Load Model 1 is computed by
\[f_{\Delta_{o}}(n\times T)=\mathbb{P}(\Delta_{o,k}=n)=\sum_{n_{1},n_{2}\in\mathcal{N}_{1}}\mathbb{P}(T_{i}=n_{1})\mathbb{P}(T_{\mathrm{tra}}=n_{2}T_{m1}), \tag{31}\]
and the proof is completed by inserting the PMFs of the inter-arrival time and the successful transmission time, both of which are geometrically distributed.
Different from the PAoI for \(N_{d}=1\), which requires two successful transmissions, the PAoI with multiple devices requires only one. This is because a new update can be generated while other updates are being transmitted.
Note that the value of the PAoI starts from \(N_{d}+2\) because we assume that updates are generated and data are transmitted at the end of a time slot. Hence, the minimum PAoI occurs when, after the previous data is successfully transmitted, the new update is generated immediately, which takes one \(T\), and is then successfully transmitted immediately, which takes \(N_{d}T\).
Having obtained the PMF of \(\Delta_{o}\) for an individual device, we can derive the mean PAoI of correlated devices, as given in the following theorem.
**Theorem 2** (Mean PAoI under Load Model 1 and Correlated Devices).: _In the case of correlated devices, the mean PAoI under the Load Model 1 is given by_
\[\bar{\Delta}_{l_{1}}\approx\mathbb{E}_{R_{u}}\bigg{[}\sum_{n}\bar{F}_{\Delta_{o}}^{N_{d}}(n\times T)\bigg{]}\times T, \tag{32}\]
_in which the approximation sign comes from the fact that we assume all the devices have the same success probability._
Proof.: As mentioned, we approximate that these devices have the same success probability. Conditioned on \(R_{u}\), the CCDF of the PAoI in the case of Load Model 1 and correlated devices is computed by
\[\bar{F}_{\Delta_{l_{1}}}(n\times T)=\mathbb{P}(\Delta_{1}>n, \cdots,\Delta_{N_{d}}>n)\] \[=\mathbb{P}(\Delta_{1}>n)\cdots\mathbb{P}(\Delta_{N_{d}}>n)\] \[=\bar{F}_{\Delta_{1}}(n\times T)\cdots\bar{F}_{\Delta_{N_{d}}}(n \times T)\] \[=\bar{F}_{\Delta_{o}}^{N_{d}}(n\times T), \tag{33}\]
and the proof is completed by summing the CCDF over \(n\) and taking the expectation over \(R_{u}\).
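Conditioned on a fixed success probability (so the outer expectation over \(R_{u}\) is omitted), Lemma 6 and Theorem 2 can be evaluated numerically as in the sketch below. The values of \(p_{s}\), \(\lambda_{a}\), and \(N_{d}\) are illustrative, and the enumeration uses the geometric supports \(n_{1},n_{2}\geq 1\).

```python
import numpy as np

p_s, lam_a, N_d, N_MAX = 0.7, 0.3, 4, 2000   # illustrative values

def pmf_paoi(n):
    """P(PAoI = n * T) for one device under Load Model 1, Eq. (29)."""
    total = 0.0
    for n2 in range(1, n // N_d + 1):        # n2 transmission rounds of N_d slots
        n1 = n - n2 * N_d                     # n1 inter-arrival slots
        if n1 >= 1:
            total += lam_a * p_s * (1 - lam_a)**(n1 - 1) * (1 - p_s)**(n2 - 1)
    return total

pmf = np.array([pmf_paoi(n) for n in range(N_MAX)])
ccdf = 1.0 - np.cumsum(pmf)                  # P(PAoI > n)
mean_single = ccdf.sum()                     # E[PAoI] = sum of the CCDF
mean_correlated = (ccdf**N_d).sum()          # Theorem 2: CCDF of the minimum
print(mean_single, mean_correlated)          # the correlated mean is smaller
```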
Besides, as the number of devices increases, the data arrival rate and the successful transmission probability observed at the serving UAV increase. In the following remark, we propose an approximation that uses a modified arrival rate and a modified success probability.
**Remark 1** (Approximation of Mean PAoI under Load Model 1 and Correlated Devices).: _At high values of the conditional success probability (high-LoS-probability areas), the mean PAoI under Load Model 1 can be computed by_
\[\bar{\Delta}_{l_{1}}\approx \mathbb{E}_{R_{u}}\bigg{[}\frac{T_{m1}}{\hat{P}_{s}(R_{u})}+\frac {T}{\hat{\lambda}_{a}}\bigg{]}\] \[= \int_{0}^{r_{c}}P_{l}(r)\bigg{(}\frac{T_{m1}}{\hat{P}_{s,l}(r)}+ \frac{T}{\hat{\lambda}_{a}}\bigg{)}f_{R_{u}}(r)\mathrm{d}r\] \[+\int_{0}^{r_{c}}P_{n}(r)\bigg{(}\frac{T_{m1}}{\hat{P}_{s,n}(r)}+ \frac{T}{\hat{\lambda}_{a}}\bigg{)}f_{R_{u}}(r)\mathrm{d}r, \tag{34}\]
_in which the approximation sign comes from the fact that we use the same success probability for all the devices, and_
\[\hat{\lambda}_{a} =(1-(1-\lambda_{a})^{N_{d}}),\] \[\hat{P}_{s}(R_{u}) =(1-(1-P_{s}(R_{u}))^{N_{d}}). \tag{35}\]
_However, this approximation shows gaps in low-LoS-probability environments, where the mean PAoI behaves more like that of a single device. More details are discussed in Section V._
#### Iv-A2 Uncorrelated Devices
In the case of uncorrelated devices, we compute the mean PAoI directly, as the sum of the mean data transmission time and the mean data inter-arrival time.
**Theorem 3** (Mean PAoI under Load Model 1 and Uncorrelated Devices).: _In the case of uncorrelated devices, the mean PAoI under the Load Model 1 is given by_
\[\bar{\Delta}_{l_{1}}= \mathbb{E}_{R_{u}}\bigg{[}\frac{2\times T_{m1}}{P_{s}(R_{u})}+ \frac{T}{\lambda_{a}}\bigg{]}\] \[= \int_{0}^{r_{c}}P_{l}(r)\bigg{(}\frac{2\times T_{m1}}{P_{s,l}(r)} +\frac{T}{\lambda_{a}}\bigg{)}f_{R_{u}}(r)\mathrm{d}r\] \[+\int_{0}^{r_{c}}P_{n}(r)\bigg{(}\frac{2\times T_{m1}}{P_{s,n}(r) }+\frac{T}{\lambda_{a}}\bigg{)}f_{R_{u}}(r)\mathrm{d}r, \tag{36}\]
_in which \(P_{s,\{l,n\}}(R_{u})\) are provided in (16) and (17)._
### _Mean PAoI Under Load Model 2_
In this subsection, we derive the mean PAoI under Load Model 2, in which the cluster UAV splits the time and serves the IoT devices periodically. Since the UAV serves the devices periodically with period \(T_{m2}=N_{d}T\), the PMF of the successful transmission time is
\[\mathbb{P}(T_{\mathrm{tra}}=(n-1)\times T_{m2}+T) =(1-P_{s}(R_{u}))^{n-1}P_{s}(R_{u}),\] \[(n=1,2,3,\cdots). \tag{37}\]
From the perspective of the cluster UAV, a transmission time slot is occupied if the served device has an update or if the previous transmission failed; therefore, the conditional transmission-time-slot occupancy probability is
\[\pi_{1,l_{2}} =P_{s}(R_{u})\lambda_{a}^{{}^{\prime\prime}}+(1-P_{s}(R_{u})),\] \[\lambda_{a}^{{}^{\prime\prime}} =1-(1-\lambda_{a})^{N_{d}}, \tag{38}\]
in which \(P_{s}(R_{u})\lambda_{a}^{{}^{\prime\prime}}\) denotes the arrival probability of a new update and \((1-P_{s}(R_{u}))\) denotes the probability of re-transmission.
#### Iv-B1 Correlated Devices
Different from Load Model 1, in Load Model 2 a device can only transmit an update during its serving time slot; that is, a new update needs to wait. Let \(X_{n}\) be the average waiting time of the updates, and let \(\Delta_{o}^{{}^{\prime}}=T_{i}+T_{\mathrm{tra}}\) be the AoI without waiting, as shown in Fig. 5. The PMF of \(\Delta_{o}^{{}^{\prime}}\) is given in the following lemma.
**Lemma 7** (Distribution of \(\Delta_{o}^{{}^{\prime}}\) under Load Model 2).: _The PMF of \(\Delta_{o}^{{}^{\prime}}\) under Load Model 2 is given by,_
\[f_{\Delta_{o}^{{}^{\prime}}}(n\times T) =\mathbb{P}(\Delta_{o,k}^{{}^{\prime}}=n\times T)\] \[=\sum_{n_{1}=0}^{\lfloor\frac{n-2}{N_{d}}\rfloor}\lambda_{a}P_{s}(R_{u})(1-\lambda_{a})^{n-2-N_{d}n_{1}}(1-P_{s}(R_{u}))^{n_{1}},\] \[(n=3,4,\cdots),\] \[f_{\Delta_{o}^{{}^{\prime}}}(1\times T) =f_{\Delta_{o}^{{}^{\prime}}}(2\times T)=0, \tag{39}\]
_accordingly, the CCDF of \(\Delta_{o}^{{}^{\prime}}\) is given by_
\[\bar{F}_{\Delta_{o}^{{}^{\prime}}}(n\times T)=\mathbb{P}(\Delta_{o,k}^{{}^{\prime}}>n\times T)\] \[=\sum_{l=n+1}^{\infty}\sum_{n_{1}=0}^{\lfloor\frac{l-2}{N_{d}}\rfloor}\lambda_{a}P_{s}(R_{u})(1-\lambda_{a})^{l-2-N_{d}n_{1}}(1-P_{s}(R_{u}))^{n_{1}},\] \[(n=3,4,\cdots),\]
Fig. 5: Illustration of the PAoI of Load Model 2.
\[\bar{F}_{\Delta_{o}^{\prime}}(1\times T)=\bar{F}_{\Delta_{o}^{\prime}}(2\times T)=1. \tag{40}\]
Proof:: The proof is similar to that of Lemma 6; note that the minimum value of \(\Delta_{o}^{\prime}\) is \(3T\) since we ignore the waiting time, and it occurs when the update is successfully transmitted immediately after it is generated.
Having obtained the PMF and CCDF of the PAoI of an individual device, we can derive the mean PAoI of correlated devices, as given in the following theorem.
**Theorem 4** (Mean PAoI under Load Model 2 and Correlated Devices).: _In the case of correlated devices, the mean PAoI under the Load Model 2 is given by_
\[\bar{\Delta}_{l_{2}}\approx\mathbb{E}_{R_{u}}\bigg{[}\sum_{n}\bar{F}_{\Delta_{o}^{\prime}}^{N_{d}}(n\times T)\bigg{]}\times T+X_{n}\times T, \tag{41}\]
_in which the approximation sign comes from the assumption that all the devices have the same success probability, and \(X_{n}\) characterizes the average waiting time, i.e., the number of time slots between the data arrival and the transmission time slot,_
\[X_{n} =\sum_{k=1}^{N_{d}-1}x_{n}(k)(N_{d}-k-1)+x_{n}(N_{d})(N_{d}-1),\] \[x_{n}(k) =(1-\lambda_{a})^{k-1}\frac{\lambda_{a}}{1-(1-\lambda_{a})^{N_{d} }}. \tag{42}\]
Proof:: Similar to the proof of Theorem 2, with \(X_{n}\) computed by
\[\mathbb{P}(T_{w}=(N_{d}-k-1)\times T)=\mathbb{P}(T_{a}=k\times T)\] \[\quad+\mathbb{P}(T_{a}=(k+N_{d})\times T)\] \[\quad+\mathbb{P}(T_{a}=(k+2N_{d})\times T)+\cdots\] \[=\sum_{i=0}^{\infty}\mathbb{P}(T_{a}=(k+iN_{d})\times T)=\sum_{i =0}^{\infty}(1-\lambda_{a})^{k+iN_{d}-1}\lambda_{a},\] \[X_{n} =\sum_{k=1}^{N_{d}-1}T_{w}\mathbb{P}(T_{w}=(N_{d}-k-1)\times T)\] \[\quad+(N_{d}-1)\mathbb{P}(T_{w}=N_{d}\times T), \tag{43}\]
in which \(T_{w}\) and \(T_{a}\) denote the waiting time and the time slot during which the update arrives.
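The waiting-time expression in Eq. (42) can be checked against a direct simulation of the arrival position within the serving period, following Eq. (43); the parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
lam_a, N_d, N_SAMP = 0.3, 6, 500_000        # illustrative values

# Closed form of Eq. (42).
x = lambda k: (1 - lam_a)**(k - 1) * lam_a / (1 - (1 - lam_a)**N_d)
X_n = sum(x(k) * (N_d - k - 1) for k in range(1, N_d)) + x(N_d) * (N_d - 1)

# Simulation per Eq. (43): the arrival slot's position within the period
# of length N_d determines the wait until the next transmission slot.
T_a = rng.geometric(lam_a, size=N_SAMP)      # geometric arrival slot
k = (T_a - 1) % N_d + 1                      # position within the period
wait = np.where(k == N_d, N_d - 1, N_d - k - 1)
print(X_n, wait.mean())                      # the two values should agree closely
```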
**Remark 2**.: _It is difficult to tell from (32) and (41) which load model provides a lower mean PAoI, since the PAoI in (41) starts at a lower value but has an additional waiting-time term. However, we can expect Load Model 2 to perform better, since the value of \(X_{n}\) is at most \(N_{d}\), which equals the minimum value of the PAoI in (41); this performance gap (in mean PAoI) depends on the arrival rate and increases with \(N_{d}\times T\)._
#### Iv-B2 Uncorrelated Devices
Compared to the correlated devices, the mean PAoI of uncorrelated devices also includes one more successful transmission, as shown in the following theorem.
**Theorem 5** (Mean PAoI under Load Model 2 and Uncorrelated Devices).: _In the case of uncorrelated devices, the mean PAoI under the Load Model 2 is given by_
\[\bar{\Delta}_{l_{2}}=\mathbb{E}_{R_{u}}\bigg{[}\sum_{n}\bar{F}_{\Delta_{o}^{\prime}}(n\times T)\times T+T_{n}\times T\bigg{]}+X_{n}\times T, \tag{44}\]
_in which \(T_{n}\) denotes the mean successful transmission time,_
\[T_{n}=\sum_{t_{n}=0}^{\infty}(t_{n}N_{d}+1)P_{s}(R_{u})(1-P_{s}(R_{u}))^{t_{n}}. \tag{45}\]
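For numerical evaluation, it may help to note that the geometric series in (45) admits a simple closed form, using \(\sum_{t\geq 0}tq^{t}=q/(1-q)^{2}\) with \(q=1-P_{s}(R_{u})\):

\[T_{n}=\sum_{t_{n}=0}^{\infty}(t_{n}N_{d}+1)P_{s}(R_{u})(1-P_{s}(R_{u}))^{t_{n}}=1+N_{d}\,\frac{1-P_{s}(R_{u})}{P_{s}(R_{u})}.\]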
We have now obtained all the required equations; in the next section, we validate these results with simulations.
## V Numerical Results
In this section, we validate the analytical results with simulations, evaluate the impact of the number of devices, the UAV altitude, and the environment parameters on the mean PAoI, and compare the two load models. Unless stated otherwise, we use the simulation parameters listed in Table II.
In the simulations of the considered system setup, we first generate one realization of the PPP to model the locations of the IoT cluster centers, with the typical IoT cluster centered at the origin. For the typical IoT cluster, we generate the locations of \(N_{d}\) IoT devices uniformly at random within the cluster. For each of the remaining IoT clusters, we generate one IoT device to model the interferers. For the typical IoT cluster, we first compute the success probability of each device and then compute the mean PAoI under the two load models. The unit of the mean PAoI in the following figures is \(T\).
We first show the accuracy of computing the mean activity probability in Fig. 6 (a), in which the four curves, from bottom to top, are plotted for (1) a suburban area with \(h=100\) m, (2) an urban area with \(h=100\) m, (3) an urban area with \(h=80\) m, and (4) a highrise urban area with \(h=100\) m. The simulation results match the analytical results well, the latter obtained by iteration as described in Theorem 1. We observe that the activity probability increases with the information arrival rate (more data is generated) and with decreasing LoS probability (longer successful transmission times). We then compare the activity probabilities under Load Model 1 and Load Model 2 in Fig. 6 (b), for two environments: (1) a highrise urban area with \(h=100\) m and (2) an urban area with \(h=100\) m. Interestingly, for Load Model 1, as the arrival rate increases from \(0\) to \(1\), the activity probability has a smaller range and cannot reach \(1\). This is explained by the state transition diagram in Fig. 4: it takes at least one time slot for devices to generate data; hence, the activity probability approaches \(1\) with decreasing success probability or increasing
transmission time (increasing \(N_{d}\)). Load Model 2 causes higher interference than Load Model 1, and the devices' activity probability can reach \(1\). This is because the UAV uses the data generation time to serve other devices.
In Fig. 7, we plot the mean PAoI under the two load models: Fig. 7 (a) bandwidth splitting and Fig. 7 (b) time splitting. The solid lines with cross markers denote the mean PAoI in suburban areas, and the dashed lines with circle markers denote the mean PAoI in dense areas with UAV altitude \(h=80\) m (we omit the curve for the dense area with \(h=100\) m since these three curves are very close to each other). Moreover, the approximation mentioned in Remark 1 matches well in these scenarios and is thus omitted from the figure. The mean PAoI increases only slightly when the environment changes from suburban to dense areas and the UAV altitude decreases. This is because, in the uplink of UAV-assisted networks, the success probability decreases slowly as the transmission distance increases, and since the devices are spatially clustered, the transmission distance is limited by the cluster radius. However, the mean PAoI increases dramatically in highrise urban areas (dashed lines with square markers), and the approximation in Remark 1 shows gaps (dotted line in Fig. 7 (a)). This is because, in highrise urban areas, the LoS probability and the success probability decrease dramatically with the transmission distance. Thus, devices near the cluster center have a much higher success probability than devices at the edge, and assuming that all devices have the same success probability is inaccurate: while the devices near the cluster center update their information periodically, the devices at the edge rarely deliver useful information.
In Fig. 8 (a), we compare the mean PAoI under the two proposed load models in the case of correlated devices (dense areas with altitude \(h=100\) m). We observe that an optimal number of devices exists for a fixed data arrival rate, and that Load Model 2 performs slightly better than Load Model 1 for a large number of devices, as expected from Remark 2. The mean PAoI first decreases, owing to a higher data arrival rate at the UAV side, and then increases, owing to a longer transmission time in Load Model 1 and a longer waiting time in Load Model 2. In addition, Load Model 2 schedules the transmission of updates, since the UAVs serve the devices periodically: a new update arrives once the previous update is successfully transmitted, so the updates also arrive periodically. We note that the two load models are compared fairly, since all the devices are served fairly. Interestingly, while an optimal number of devices only exists at low arrival rates in Load Model 1 (\(\lambda_{a}\leq 0.5\), as shown in Fig. 8 (b)) for the dense area with \(h=100\) m, an optimal value exists (two devices) for all arrival rates in Load Model 2, see Fig. 8 (b). This is because we assume that devices cannot generate updates during the transmission period. For instance, when \(\lambda_{a}=1\) and \(P_{s}=1\), the mean PAoI is \(3\times T\), comprising two successful transmissions and one data
Fig. 6: Simulation and analytical results for the mean activity probability of IoT devices: **(a)** \(N_{d}=1\); the four curves, from bottom to top, are plotted for (1) a suburban area with \(h=100\) m, (2) an urban area with \(h=100\) m, (3) an urban area with \(h=80\) m, and (4) a highrise urban area with \(h=100\) m; **(b)** Load Model 1 and Load Model 2, with \(N_{d}=2\).
Fig. 7: Simulation and analysis results of the mean PAoI of correlated devices: **(a)** Load Model 1, split the bandwidth, **(b)** Load Model 2, split the time.
update time slot, in the case of a single device, while the mean PAoI is \(2\times T\), comprising one successful transmission and one data update time slot, in the case of two devices. Hence, serving two correlated devices is more efficient than serving one device, owing to the efficient use of the data generation time slot to collect data from another device.
In Fig. 9, we plot the mean PAoI under the two load models in the case of uncorrelated devices. For uncorrelated devices, the mean PAoI increases with the number of devices. The mean PAoI remains almost the same when the UAV altitude changes from \(h=100\) to \(h=80\) and the environment changes from suburban to dense areas, while it increases dramatically in highrise urban areas. This is because the coverage probability (the expectation of the success probability over the transmission distance) in the highrise urban area is about \(0.24\), whereas it is about \(0.6\) to \(0.9\) in the other three scenarios. Thus, it takes much longer to successfully transmit an update in highrise urban areas.
In Fig. 10 (a), we plot the mean PAoI under the two proposed load models in the case of uncorrelated devices. Similar to the trend observed in Fig. 8 (a), Load Model 2 performs slightly better than Load Model 1, as expected from Remark 2. Additionally, we notice that the mean PAoI gap decreases with increasing arrival rate; e.g., at a low arrival rate the gap is about \(2(N_{d}-2-X_{n})\), and at a high arrival rate (\(\lambda_{a}=1\)) the gap is about \(1\).
To explain this, consider the scenario with \(N_{d}=18\), \(\lambda_{a,1}=1\), and \(\lambda_{a,2}=0.2\). When \(\lambda_{a,1}=1\), the data update occurs in the first time slot under both load models; the device then transmits the update over the following \(T_{m1}=18\) time slots in Load Model 1, while in Load Model 2 it waits for \(T_{m2}-2=16\) time slots and transmits in the \(18\)-th time slot. If both transmissions succeed, Load Model 2 requires \(1\) less time slot. If the transmissions fail, both devices spend another \(N_{d}\) slots on re-transmission; hence, Load Model 2 still requires \(1\) less time slot. In the case of \(\lambda_{a,2}=0.2\), the Load Model 1 device transmits the update as soon as it arrives and, intuitively, should perform better. However, the minimum AoI at arrival at the serving UAV is lower in Load Model 2, since the update may be generated in the middle of a period, e.g., one time slot before the transmission slot, in which case the AoI is \(2\times T\) when it arrives at the UAV. Besides, the inter-arrival time is also longer in Load Model 1: new updates arrive once the previous updates have been successfully transmitted. Therefore, the inter-arrival time in Load Model 1 is \(T_{m1}+\frac{1}{\lambda_{a}}\), while in Load Model 2 it is \(T+\frac{1}{\lambda_{a}}\), in which \(T_{m1}=N_{d}T\).
Besides, we notice in Fig. 10 (b) that, in Load Model 2, a higher arrival rate does not always result in a lower mean PAoI: for instance, we plot \(\lambda_{a,1}=1\) and \(\lambda_{a,2}=0.38\) in Fig. 10 (b). When \(N_{d}\leq 5\), the mean PAoI under \(\lambda_{a,1}\) is smaller, while for \(N_{d}>5\), \(\lambda_{a,2}\) yields a smaller mean PAoI. This is because of the waiting time \(X_{n}\): at high arrival rates, updates have a higher probability of arriving at the beginning of the period and thus waiting longer.
Fig. 8: Simulation and analysis results for the mean PAoI of correlated devices: **(a)** mean PAoI comparison between Load Model 1 and Load Model 2, **(b)** mean PAoI at different arrival rates under Load Model 2.
Fig. 9: Simulation and analysis results of the mean PAoI of uncorrelated devices: **(a)** mean PAoI under Load Model 1, **(b)** mean PAoI under Load Model 2. In both **(a)** and **(b)**, the curves from the top to the bottom are plotted under: highrise urban areas \(a=27,b=0.08\) and \(h=100\), dense area \(a=12,b=0.16\) and \(h=80\), dense area \(a=12,b=0.16\) and \(h=100\), and suburban area \(a=4.88,b=0.43\) and \(h=100\).
## VI Conclusion
This work presented a stochastic geometry-based analysis of the PAoI performance metric in a UAV-assisted IoT network, in which we considered correlated and uncorrelated IoT devices and two load models, assuming a non-preemptive queuing discipline. We first showed the impact of UAV altitude and environment parameters on the mean PAoI: the mean PAoI increases dramatically in highrise urban areas, while it remains almost the same from suburban to dense areas and under slight changes in UAV altitude. We then compared the mean PAoI under the two load models. For correlated devices, we showed that an optimal number of devices minimizing the mean PAoI exists at low arrival rates in Load Model 1, and that two devices always yield a lower PAoI in Load Model 2 for all arrival rates. In particular, we showed that even though Load Model 2 (time splitting) causes higher interference, it provides a lower mean PAoI than Load Model 1 (bandwidth splitting) for both IoT device scenarios. This is because of the spatially clustered devices and the higher probability of establishing LoS links. Additionally, it is more time-efficient for UAVs to split time to serve multiple devices, as they can use the data generation time to serve other devices.
In future research, an interesting avenue to explore would be the analysis of AoI in mobility-aware IoT networks. For instance, studying scenarios where UAVs dynamically move around to collect updates from IoT devices belonging to different IoT clusters or where IoT devices switch their connections between different UAVs as they move. Additionally, investigating the impact of limited resources on UAVs' AoI is another relevant direction for future work. UAVs possess finite resources, including battery power, processing capability, and communication bandwidth. Optimizing resource allocation to minimize PAoI, while considering these limitations poses a challenging task that could be tackled in further research.
## Appendix A
### _The Laplace Transform in Lemma 4_
The expression of the Laplace transform in Lemma 4 is provided in [49, Lemma 2] and, following the structure of (16) and (17), is given by
\[\mathcal{L}(g_{1}(r),\cdots,g_{b}(r))=\exp\bigg{(}-\sigma^{2}\sum_{i=1}^{b}g_{i}(r)\bigg{)}\prod_{c_{1},c_{2}\in\{l,n\}}\exp\bigg{(}-2\pi\bar{\pi}\lambda_{u}\int_{h}^{\infty}\int_{0}^{p_{u}}\bigg{[}1-\prod_{i=1}^{b}\kappa(g_{i}(r),c_{1},c_{2})\bigg{]}zP_{c_{1}}(\sqrt{z^{2}-h^{2}})f_{p_{t,c_{2}}}(p)\mathrm{d}p\mathrm{d}z\bigg{)},\]
Fig. 10: Simulation and analysis results for the mean PAoI of uncorrelated devices: **(a)** mean PAoI comparison between Load Model 1 and Load Model 2, **(b)** mean PAoI at different arrival rates under Load Model 2.
in which \(g_{l,i}(r)=k_{i}\beta_{2}(m_{l})m_{l}\theta r^{(1-\epsilon_{l})\alpha_{l}}(\rho_{l}\eta_{l})^{-1}\), \(g_{n,i}(r)=k_{i}\beta_{2}(m_{n})m_{n}\theta r^{(1-\epsilon_{n})\alpha_{n}}(\rho_{n}\eta_{n})^{-1}\), \(\beta_{2}(m)=(m!)^{-1/m}\), and \(\bar{\pi}\) is the mean activity probability.
|
2302.09244 | Dual-Domain Self-Supervised Learning for Accelerated Non-Cartesian MRI
Reconstruction | While enabling accelerated acquisition and improved reconstruction accuracy,
current deep MRI reconstruction networks are typically supervised, require
fully sampled data, and are limited to Cartesian sampling patterns. These
factors limit their practical adoption as fully-sampled MRI is prohibitively
time-consuming to acquire clinically. Further, non-Cartesian sampling patterns
are particularly desirable as they are more amenable to acceleration and show
improved motion robustness. To this end, we present a fully self-supervised
approach for accelerated non-Cartesian MRI reconstruction which leverages
self-supervision in both k-space and image domains. In training, the
undersampled data are split into disjoint k-space domain partitions. For the
k-space self-supervision, we train a network to reconstruct the input
undersampled data from both the disjoint partitions and from itself. For the
image-level self-supervision, we enforce appearance consistency obtained from
the original undersampled data and the two partitions. Experimental results on
our simulated multi-coil non-Cartesian MRI dataset demonstrate that DDSS can
generate high-quality reconstruction that approaches the accuracy of the fully
supervised reconstruction, outperforming previous baseline methods. Finally,
DDSS is shown to scale to highly challenging real-world clinical MRI
reconstruction acquired on a portable low-field (0.064 T) MRI scanner with no
data available for supervised training while demonstrating improved image
quality as compared to traditional reconstruction, as determined by a
radiologist study. | Bo Zhou, Jo Schlemper, Neel Dey, Seyed Sadegh Mohseni Salehi, Kevin Sheth, Chi Liu, James S. Duncan, Michal Sofka | 2023-02-18T06:11:49Z | http://arxiv.org/abs/2302.09244v1 | # Dual-Domain Self-Supervised Learning for Accelerated Non-Cartesian MRI Reconstruction
###### Abstract
While enabling accelerated acquisition and improved reconstruction accuracy, current deep MRI reconstruction networks are typically supervised, require fully sampled data, and are limited to Cartesian sampling patterns. These factors limit their practical adoption as fully-sampled MRI is prohibitively time-consuming to acquire clinically. Further, non-Cartesian sampling patterns are particularly desirable as they are more amenable to acceleration and show improved motion robustness. To this end, we present a fully self-supervised approach for accelerated non-Cartesian MRI reconstruction which leverages self-supervision in both k-space and image domains. In training, the undersampled data are split into disjoint k-space domain partitions. For the k-space self-supervision, we train a network to reconstruct the input undersampled data from both the disjoint partitions and from itself. For the image-level self-supervision, we enforce appearance consistency obtained from the original undersampled data and the two partitions. Experimental results on our simulated multi-coil non-Cartesian MRI dataset demonstrate that DDSS can generate high-quality reconstruction that approaches the accuracy of the fully supervised reconstruction, outperforming previous baseline methods. Finally, DDSS is shown to scale to highly challenging real-world clinical MRI reconstruction acquired on a portable low-field (0.064 T) MRI scanner with no data available for supervised training while demonstrating improved image quality as compared to traditional reconstruction, as determined by a radiologist study.
Self-supervised Learning, Dual-domain Learning, Non-Cartesian MRI, Accelerated MRI, Low-field Portable MRI
## 1 Introduction
Magnetic resonance imaging (MRI) is a common medical imaging modality for disease diagnosis (Vlaardingerbroek and Boer, 2013). However, MRI is inherently challenging due to its slow acquisition arising from physical and physiological constraints, with real-world scan times ranging from 15 mins to over an hour depending on the protocol and diagnostic use-case. Prolonged MR imaging sessions are impractical as they lead to increased patient discomfort and increased accumulation of motion artifacts and system imperfections in the image. Consequently, there is significant interest in accelerating MRI
acquisition while maintaining high image fidelity.
MRI is typically acquired by sequentially sampling the frequency domain (or _k-space_) in a pre-defined pattern. For Cartesian MRI, a k-space grid is regularly sampled and an inverse Fourier transform may be directly applied to reconstruct the image (assuming that the Nyquist sampling rate is met). However, accelerated MRI generally uses non-Cartesian sampling patterns, such as spiral (Delattre et al., 2010), radial (Knoll et al., 2011), variable density (Knoll et al., 2011), and optimized sampling patterns (Lazarus et al., 2019). Advantages of non-Cartesian sampling patterns include enabling more efficient coverage of k-space (Wright et al., 2014) and enhanced robustness to patient motion (Forbes et al., 2001; Pipe, 1999). Further, recent accelerated MRI reconstruction studies have also shown non-Cartesian sampling to be better suited to compressed sensing (CS) (Lustig et al., 2008) and potentially deep learning (DL) based reconstructions, as aliasing artifacts from non-Cartesian sampling show higher noise-like incoherence than those from Cartesian sampling. In this work, we focus on the reconstruction of non-Cartesian sampled data.
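For concreteness, a non-Cartesian trajectory such as golden-angle radial sampling can be generated in a few lines. The sketch below is purely illustrative (spoke and sample counts are arbitrary assumptions) and is not tied to any specific scanner protocol.

```python
import numpy as np

n_spokes, n_samples = 32, 256                    # arbitrary sketch sizes
golden_angle = np.pi / ((1 + np.sqrt(5)) / 2)    # ~111.25 degrees per spoke

angles = golden_angle * np.arange(n_spokes)
radii = np.linspace(-0.5, 0.5, n_samples, endpoint=False)
kx = radii[None, :] * np.cos(angles[:, None])    # shape (n_spokes, n_samples)
ky = radii[None, :] * np.sin(angles[:, None])
traj = np.stack([kx, ky], axis=-1)               # non-Cartesian k-space coordinates
print(traj.shape)                                # (32, 256, 2)
```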
To accelerate MRI acquisition, various efforts have been made for reconstructing high-quality images with undersampled k-space data. The previous methods can be summarized into two categories: CS-based reconstruction (Lustig et al., 2008) and DL-based reconstruction (Chandra et al., 2021). CS-based reconstruction methods typically use sparse coefficients in transform-domains (e.g. wavelets (Qu et al., 2010; Zhang et al., 2015)) with application-specific regularizers (e.g. total variation) to solve the ill-posed inverse problem in an iterative fashion (Liang et al., 2009). However, iterative sparse optimization approaches are prone to reconstructing over-smoothed anatomical structure and may yield undesirable image artifacts, especially when the acceleration factor is high (acceleration factor \(>3\)) (Ravishankar and Bresler, 2010). Moreover, iterative optimization-based reconstruction is time-consuming and requires careful parameter tuning across different scanners and protocols and may even require subject-specific tuning.
With recent advances in computer vision and the availability of large-scale MRI datasets (Zbontar et al., 2018; Bien et al., 2018), DL-based reconstruction methods have demonstrated significant improvements over CS-based methods. However, most DL-based reconstruction methods are limited to Cartesian sampling patterns and are supervised, thus requiring paired fully-sampled acquisitions for ground-truth (Wang et al., 2016; Sun et al., 2016; Schlemper et al., 2017; Qin et al., 2018; Zhu et al., 2018; Hammernik et al., 2018; Quan et al., 2018; Lonning et al., 2019; Han et al., 2019; Li et al., 2019; Zhang et al., 2019; Zhou and Zhou, 2020; Liu et al., 2021; Feng et al., 2021; Dar et al., 2020). Yet these requirements are impractical, as several real-world MRI use-cases may not have the time or resources to fully sample k-space for supervised training, or may prefer non-Cartesian sampling for its motion robustness advantages, among others. For example, full k-space sampling cannot be done for real-time cardiac MRI (Coelho-Filho et al., 2013) and functional brain MRI (Bagarinao et al., 2006), where data acquisition time frames are tightly restricted.
Currently, there exist a handful of DL-based reconstruction methods to address non-Cartesian sampling while still requiring paired fully-sampled data for supervision (Aggarwal et al., 2018; Schlemper et al., 2019; Ramzi et al., 2022, 2021). Further, to obviate the need for supervised training and fully-sampled data, _self-supervised_ MRI reconstruction methods have been recently proposed where reconstruction networks can be trained without fully-sampled data (Wang et al., 2020; Yaman et al., 2020, 2021; Acar et al., 2021; Hu et al., 2021; Martin-Gonzalez et al., 2021; Cole et al., 2020). However, these methods are still limited to Cartesian sampling patterns and do not address how to perform self-supervised learning for accelerated non-Cartesian MRI reconstruction.
In this work, we present Dual-Domain Self-Supervised (DDSS) reconstruction, a self-supervised learning method for accelerated non-Cartesian MRI reconstruction. DDSS is trained only on non-Cartesian undersampled data by first randomly partitioning the input undersampled k-space acquisitions into two disjoint sets with no overlap in k-space coordinates. The network is then trained under a combination objective function (Fig. 1) exploiting: (1) k-space self-similarity, where the network is trained to reconstruct the input undersampled k-space data from both the input and from the two disjoint partitions and; (2) reconstruction self-similarity, where the network is trained to enforce appearance consistency between the reconstructions obtained from the original undersampled data and the two partitions.
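The k-space partitioning step at the heart of this scheme amounts to a random disjoint split of the sampled points. The following is a minimal sketch with synthetic data; the array names and shapes are illustrative assumptions, not the actual DDSS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 8192                                        # illustrative sample count
coords = rng.uniform(-0.5, 0.5, size=(n_points, 2))    # sampled k-space locations
y = rng.standard_normal(n_points) + 1j * rng.standard_normal(n_points)

perm = rng.permutation(n_points)                       # random disjoint split
idx1, idx2 = perm[: n_points // 2], perm[n_points // 2:]
y_p1, coords_p1 = y[idx1], coords[idx1]                # partition 1
y_p2, coords_p2 = y[idx2], coords[idx2]                # partition 2
assert np.intersect1d(idx1, idx2).size == 0            # no overlap in k-space coordinates
```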
Experimentally, as existing large-scale MRI datasets use Cartesian acquisitions (Zbontar et al., 2018; Bien et al., 2018), we first simulate a multi-coil accelerated non-Cartesian dataset with spatial and phase degradations from the publicly available Human Connectome Project dataset (Van Essen et al., 2013) and use the simulated complex-valued images as ground truth for evaluation. We find that DDSS generates accurate reconstructions that approach the fidelity of full supervision and outperform a series of baseline methods and ablations. We then demonstrate the successful application of DDSS in reconstructing challenging real-world MRI data from a clinically deployed portable low-field (0.064 T) MRI scanner (Mazurek et al., 2021), which acquires k-space data with a non-Cartesian variable density sampling pattern with no fully sampled references available. In this case, a survey of multiple radiologists indicates DDSS improvements in terms of image quality as compared to the reconstruction algorithm deployed in the system. Our major contributions include:
* A self-supervised learning method that enables training deep reconstruction networks for non-Cartesian MRI without access to fully sampled data by exploiting dual-domain self-supervision in both k-space and image domains via novel loss functions and training strategies.
* Successful deployment (as measured by fidelity metrics, radiologist opinions, and improvements on previous work) on a simulated non-Cartesian accelerated MRI dataset and on real-world low-field 0.064T MRI scanners where fully sampled data is practically infeasible.
## 2 Related Work
**Fully-Supervised MRI Reconstruction.** With the recent advances in medical computer vision and the availability of large-scale paired Cartesian MRI datasets such as fastMRI (Zbontar et al., 2018) and MRNet (Bien et al., 2018), significant effort has been made towards the development of fully supervised deep networks for accelerated Cartesian MRI reconstruction. Wang et al. (2016) proposed the first DL-based MR reconstruction solution for training a three-layer convolutional network to map accelerated zero-filled reconstructions to fully sampled reconstructions. Sun et al. (2016) developed ADMM-Net, which included an iterative ADMM-style optimization procedure into a deep reconstruction network. Schlemper et al. (2017) proposed to use a deep cascade network with data consistency layers to approximate the closed-form solution of the iterative reconstruction. Qin et al. (2018) and Lonning et al. (2019) further advanced this network design with recurrent components to improve the reconstruction quality. Hammernik et al. (2018) proposed to unroll the iterative optimization steps into a variational network. Li et al. (2019) and Quan et al. (2018) suggested adversarial learning for improving anatomical details and structure in the reconstruction. Recently, Feng et al. (2021) aggregated spatial-frequency context with a dual-octave convolution network to further improve reconstruction fidelity.
In addition to image domain-based reconstruction, Han et al. (2019) proposed to restore missing k-space measurements with deep models allowing the direct application of the inverse Fourier transform for image reconstruction. Similarly, Zhu et al. (2018) proposed to directly learn a mapping between undersampled k-space measurements to the image domain. Combining image and k-space domains, Zhou and Zhou (2020) showed that dual-domain recurrent learning can further improve reconstruction as compared to learning reconstruction in a single-domain only. Liu et al. (2021) and Zhou and Zhou (2020) also demonstrated that using multi-modal MRI input into the reconstruction network improves anatomical accuracy in the reconstructions. To directly address the issue of having insufficient data in the target domain of MRI reconstruction, transfer-learning methods have also been investigated to perform training in a data-abundant domain (for example, natural images), and then to fine-tune reconstruction models with few samples in the target domain (Dar et al., 2020).
However, the aforementioned works focus on MRI reconstruction with Cartesian sampling patterns, with only a limited number of methods developed for non-Cartesian sampling. For example, Aggarwal et al. (2018) proposed a variational network with a conjugate gradient-based data consistency block that is suitable for non-Cartesian MRI reconstruction. Similarly, Schlemper et al. (2019) developed a gradient descent-based variational network for non-Cartesian MRI reconstruction. More recently, Ramzi et al. (2022) also designed a non-Cartesian density compensated unrolled network, called NC-PDNet, for high-quality non-Cartesian accelerated MRI reconstruction. However, again, these methods use supervised training from large-scale paired non-Cartesian MRI data, which may be hard or infeasible to obtain in the real world or on new scanners. We address these challenges by taking a self-supervised reconstruction approach using only undersampled non-Cartesian data without paired ground truth.
Figure 1: **Dual-Domain Self-Supervised (DDSS) learning. In training, undersampled k-space data \(y\) is randomly partitioned into disjoint sets \(y_{p_{1}}\) and \(y_{p_{2}}\). \(y_{p_{1}}\), \(y_{p_{2}}\), and \(y\) are then fed into the network to produce image-domain reconstructions (\(x_{p_{1}}\), \(x_{p_{2}}\), and \(x_{u}\), respectively) and k-space reconstructions (\(y_{pred_{1}}\), \(y_{pred_{2}}\), and \(y_{pred_{u}}\), respectively). These outputs are trained under the dual-domain losses \(\mathcal{L}_{PDC}\) (for k-space consistency) and \(\mathcal{L}_{AC}\) (for appearance consistency). Once trained, the network can directly reconstruct the image from \(y\).**
**Self-Supervised MRI Reconstruction.** To alleviate the dependence on pairs of undersampled and fully sampled k-space measurements for MRI reconstruction training, self-supervised MRI reconstruction methods have been recently explored. For example, Wang et al. (2020) proposed HQS-Net, which decouples the minimization of the data consistency term and regularization term in Schlemper et al. (2017) based on a neural network, such that the deep network training relies only on undersampled measurements. Cole et al. (2020) developed a self-supervised GAN-based reconstruction method, called AmbientGAN, in which the discriminator is trained to differentiate real undersampled k-space measurements from k-space measurements of a synthesized image, such that the generator learns to generate fully sampled measurements. Yaman et al. (2020, 2021) proposed a physically-guided self-supervised learning method, called SSDU, that partitions the undersampled k-space measurements into two disjoint sets and trains the reconstruction network (Schlemper et al., 2017) by predicting one k-space partition using the other. This method has also been successfully applied to dynamic MRI reconstruction (Acar et al., 2021). Extending SSDU, Hu et al. (2021) proposed a parallel self-supervised framework for improved accelerated Cartesian MRI reconstruction. On the other hand, instead of partitioning the undersampled k-space measurements into two disjoint sets as done in SSDU, Martin-Gonzalez et al. (2021) proposed to train the reconstruction network by predicting the undersampled k-space measurement with the same undersampled measurement as network input. Lastly, Korkmaz et al. (2022) presented an adversarial vision transformer for unsupervised patient-specific MRI reconstruction.
However, these methods focus mainly on Cartesian MRI reconstruction and heavily rely on k-space data, and thus do not take full advantage of self-supervised learning in the image domain. In this work, we take advantage of _both_ k-space and image domain self-supervision and develop a dual-domain self-supervised learning method for accelerated _non-Cartesian_ MRI reconstruction.
## 3 Methods
### Problem Formulation
Let \(x\in\mathbb{C}^{N}\) be a complex-valued 2D image to be reconstructed, where \(x\) is a vector of size \(N=N_{x}N_{y}\), with \(N_{x}\) and \(N_{y}\) the height and width of the image. Given an undersampled k-space measurement \(y\in\mathbb{C}^{M}\) (\(M\ll N\)), our goal is to reconstruct \(x\) from \(y\) by solving the unconstrained optimization problem,
\[\operatorname*{arg\,min}_{x}\quad\frac{\lambda}{2}\|Ax-y\|_{2}^{2}+R(x), \tag{1}\]
where \(A\) is a non-uniform Fourier sampling operator, and \(R\) is a regularization term on reconstruction. If data is acquired under Cartesian sampling patterns, then \(A=MF\), where \(M\) is a sampling mask with the same size as \(A\) and \(F\) is the discrete Fourier transform. If data is acquired under a non-Cartesian
Figure 2: Variational reconstruction network architecture. The variational network is an unrolled network that approximates gradient descent on equation 1. The backbone network is shown in the yellow block and the data consistency (DC) operation is shown in the red block.
sampling pattern, the k-space measurement locations will no longer be located on a uniform k-space grid, and thus a generalized definition of \(A\) can be given by the non-uniform discrete Fourier transform:
\[y(k_{x},k_{y})=\sum_{w=0}^{N_{x}-1}\sum_{h=0}^{N_{y}-1}x_{wh}\,e^{-2\pi i\left(\frac{w}{N_{x}}k_{x}+\frac{h}{N_{y}}k_{y}\right)}, \tag{2}\]
where \((k_{x},k_{y})\in\mathbb{R}^{2}\), in contrast to \((k_{x},k_{y})\in\mathbb{Z}^{2}\) under Cartesian sampling. With the non-uniform Fast Fourier Transform (NUFFT) (Fessler and Sutton, 2003; Greengard and Lee, 2004), equation 2 can be approximated by:
\[A=GF_{s}D, \tag{3}\]
where \(G\) is a gridding interpolation kernel, \(F_{s}\) is the fast Fourier transform (FFT) with an oversampling factor of \(s\), and \(D\) contains the de-apodization weights. The inversion of \(A\) under fully-sampled cases can be approximated by gridding reconstruction:
\[x=A^{H}Wy \tag{4}\]
where \(W\) is a diagonal matrix for the density compensation of irregularly spaced measurements. However, when undersampled, this inversion is ill-posed, thus requiring one to solve equation 1.
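To make these operator definitions concrete, the sketch below implements the direct (non-fast) form of the non-uniform DFT in equation 2 and the density-compensated adjoint in equation 4 in NumPy. It is illustrative only: the function names are ours, a practical system would apply \(A\) through the NUFFT decomposition of equation 3 rather than this \(\mathcal{O}(MN)\) direct evaluation, and the density-compensation weights are assumed to be precomputed.

```python
import numpy as np

def nudft_forward(x, kx, ky):
    """Direct non-uniform DFT (equation 2): image -> off-grid k-space.

    x: complex image of shape (Nx, Ny); kx, ky: real-valued sample
    locations of shape (M,). Written for clarity, not memory efficiency.
    """
    Nx, Ny = x.shape
    w = np.arange(Nx)[:, None, None]          # image row index
    h = np.arange(Ny)[None, :, None]          # image column index
    phase = np.exp(-2j * np.pi * (w / Nx * kx + h / Ny * ky))
    return (x[:, :, None] * phase).sum(axis=(0, 1))      # shape (M,)

def gridding_adjoint(y, kx, ky, Nx, Ny, density_weights):
    """Density-compensated adjoint (equation 4): x = A^H W y."""
    w = np.arange(Nx)[:, None, None]
    h = np.arange(Ny)[None, :, None]
    phase = np.exp(2j * np.pi * (w / Nx * kx + h / Ny * ky))
    return (phase * (density_weights * y)).sum(axis=2)   # shape (Nx, Ny)
```

The direct form above is useful mainly as a reference for testing a fast NUFFT implementation against.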
### Non-Cartesian Reconstruction Network
In this work, we use a variational network (Schlemper et al., 2019; Hammernik et al., 2018) to approximate the solution to the optimization in equation 1. The network structure and detailed optimization steps are illustrated in Fig. 2. Using a gradient descent algorithm, a locally optimal solution to equation 1 can be iteratively computed as,
\[x_{i}=x_{i-1}-\alpha_{i}\nabla_{x}f(x)|_{x=x_{i-1}}, \tag{5}\]
with an initial solution of:
\[x_{0}=f_{init}(A,y), \tag{6}\]
where \(f_{init}\) is an initialization function that is set as \(f_{init}(A,y)=A^{H}y\). \(\alpha_{i}\) is the gradient descent step size, and \(\nabla_{x}f\) is the gradient of the objective function,
\[\nabla_{x}f(x)=\lambda A^{H}(Ax-y)+\nabla_{x}R(x). \tag{7}\]
We unroll the sequential update steps and formulate them as a deep learning-based feed-forward model, in which the regularization gradient term \(\nabla_{x}R(x)\) is approximated by a neural network. Here, we use a 3-level UNet (Ronneberger et al., 2015) for this approximation. Thus, we have an end-to-end trainable variational network with \(N_{iter}\) blocks:
\[x_{i}=x_{i-1}-\lambda_{i}A^{H}(Ax_{i-1}-y)+f_{out}(x_{i-1}|\theta_{i}) \tag{8}\]
where \(\theta\) and \(\lambda\) are learnable parameters. The second term is the data consistency (DC) term, and the third term is the CNN term.
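The unrolled iteration of equation 8 can be sketched as follows. This is a minimal, framework-agnostic rendering: `A_forward`, `A_adjoint`, and `cnn_blocks` stand in for the NUFFT operator, its adjoint, and the per-iteration 3-level UNets, and all names are our own.

```python
def variational_reconstruction(y, A_forward, A_adjoint, cnn_blocks,
                               lambdas, n_iters=6):
    """Unrolled gradient descent of equations 5-8.

    y: undersampled k-space data; cnn_blocks[i] approximates the
    regularization gradient at iteration i; lambdas[i] is the learned
    data-consistency weight. Returns the reconstructed image x.
    """
    x = A_adjoint(y)                           # x_0 = f_init(A, y) = A^H y
    for i in range(n_iters):
        dc_grad = A_adjoint(A_forward(x) - y)  # data-consistency gradient
        x = x - lambdas[i] * dc_grad + cnn_blocks[i](x)
    return x
```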
### Dual-Domain Self-Supervised Learning
Let \(f_{m}(A,y)\) denote the variational network presented in the previous section, where \(A\) is the non-uniform Fourier sampling operator and \(y\) is the undersampled k-space measurement. DDSS training and testing pipelines are summarized in Fig. 1. During training, we randomly partition \(y\) into two disjoint sets:
\[y_{p_{1}} =S(y,p_{1}), \tag{9}\] \[y_{p_{2}} =S(y,p_{2}), \tag{10}\]
where \(S\) is a sampling function with sampling locations \(p_{1}\) and \(p_{2}\). \(S\) retrieves the k-space data at locations \(p_{1}\) and \(p_{2}\) to generate the partitioned data \(y_{p_{1}}\) and \(y_{p_{2}}\). \(p_{1}\) and \(p_{2}\) do not share any overlapping coordinates and are randomly generated during training. The partitioned data \(y_{p_{1}}\) and \(y_{p_{2}}\) are then fed into the variational network for parallel reconstruction (with shared weights),
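A minimal sketch of this random disjoint partitioning (equations 9 and 10) is shown below. The partition-rate range follows the training details in Section 4.2, while the function names and the uniform rate sampler are our assumptions.

```python
import numpy as np

def partition_kspace(y, traj, rng, rate_range=(0.2, 0.8)):
    """Randomly split undersampled k-space samples into two disjoint sets
    (equations 9-10). y: measurements of shape (M,); traj: their sampling
    locations of shape (M, 2). Returns (y_p1, traj_p1), (y_p2, traj_p2).
    """
    M = y.shape[0]
    rate = rng.uniform(*rate_range)        # fraction assigned to p1
    perm = rng.permutation(M)              # disjoint by construction
    p1, p2 = perm[:int(rate * M)], perm[int(rate * M):]
    return (y[p1], traj[p1]), (y[p2], traj[p2])

# Example: rng = np.random.default_rng(0); a fresh split is drawn
# every training step.
```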
\[x_{p_{1}} =f_{m}(A_{y_{p_{1}}},y_{p_{1}}), \tag{11}\] \[x_{p_{2}} =f_{m}(A_{y_{p_{2}}},y_{p_{2}}). \tag{12}\]
In addition to using \(y_{p_{1}}\) and \(y_{p_{2}}\), we also feed the original, unpartitioned measurement data \(y\) into \(f_{m}(\cdot)\) in parallel:
\[x_{u}=f_{m}(A,y) \tag{13}\]
Our first loss corresponds to a Partition Data Consistency (PDC) loss, which operates in k-space. If the reconstruction network can generate a high-quality image from any undersampled k-space measurements, the k-space data of the images \(x_{p_{1}}\), \(x_{p_{2}}\), and \(x_{u}\) predicted from \(y_{p_{1}}\), \(y_{p_{2}}\), and \(y\) should all be consistent with the original undersampled k-space data \(y\). The predicted k-space data on the original undersampled k-space locations is written as,
\[y_{pred_{1}}=Ax_{p_{1}}, \tag{14}\] \[y_{pred_{2}}=Ax_{p_{2}}, \tag{15}\] \[y_{pred_{u}}=Ax_{u}. \tag{16}\]
Thus, the PDC loss can be formulated as,
\[\mathcal{L}_{PDC}=\|y_{pred_{1}}-y\|_{1}+\|y_{pred_{2}}-y\|_{1}+\|y_{pred_{u}}- y\|_{1}, \tag{17}\]
where the first, second, and third terms are the data consistency losses for partitions 1 and 2 and the original undersampled data, respectively.
Our second loss is an Appearance Consistency (AC) loss which operates in the image domain. First, we regularize the reconstructions from the partitions \(y_{p_{1}}\) and \(y_{p_{2}}\) to be consistent with each other at the image-level. Second, we assume that the reconstruction from \(y\) should also be consistent with the reconstructions of \(y_{p_{1}}\) and \(y_{p_{2}}\). To enforce that, the AC loss is computed on both image intensities and image gradients for improved anatomical clarity,
\[\mathcal{L}_{AC}=\lambda_{img}\mathcal{L}_{img}+\lambda_{grad}\mathcal{L}_{ grad}, \tag{18}\]
where,
\[\mathcal{L}_{img}=\|x_{u}-x_{p_{1}}\|_{1}+\|x_{u}-x_{p_{2}}\|_{1}+\|x_{p_{1}}-x_{p_{2}}\|_{1} \tag{19}\]
Figure 4: T2w MRI reconstruction results on a simulated non-Cartesian degraded and undersampled dataset across all benchmarked methods. Both R=2 (top) and R=4 (bottom) acceleration results are shown alongside their respective error maps. Closer to a blank color (error map) indicates better performance.
Figure 3: T1w MRI reconstruction results on a simulated non-Cartesian degraded and undersampled dataset across all benchmarked methods. Both R=2 (top) and R=4 (bottom) acceleration results are shown alongside their respective error maps. Closer to a blank color (error map) indicates better performance.
\[\mathcal{L}_{grad}=\|\nabla_{v}x_{u}-\nabla_{v}x_{p_{1}}\|_{1}+\|\nabla_{h}x_{u}-\nabla_{h}x_{p_{1}}\|_{1} \tag{20}\] \[\qquad+\|\nabla_{v}x_{u}-\nabla_{v}x_{p_{2}}\|_{1}+\|\nabla_{h}x_{u}-\nabla_{h}x_{p_{2}}\|_{1}\] \[\qquad+\|\nabla_{v}x_{p_{1}}-\nabla_{v}x_{p_{2}}\|_{1}+\|\nabla_{h}x_{p_{1}}-\nabla_{h}x_{p_{2}}\|_{1},\]
where \(\nabla_{v}\) and \(\nabla_{h}\) are spatial intensity gradient operators in the vertical and horizontal directions, respectively. We set \(\lambda_{img}=2\) and \(\lambda_{grad}=1\). Combining the PDC loss in k-space and the AC loss in the image domain, our total loss is,
\[\mathcal{L}_{tot}=\mathcal{L}_{AC}+\lambda_{PDC}\mathcal{L}_{PDC} \tag{21}\]
where \(\lambda_{PDC}=10\), selected by hyper-parameter search (Fig. 6), is used to balance the scale between the k-space and image-domain losses.
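Putting equations 17-21 together, a minimal sketch of the total DDSS training loss might look as follows. The finite-difference form of \(\nabla_{v}\) and \(\nabla_{h}\) and all function names are our assumptions, and `A` is a callable implementing the forward NUFFT operator on the original sampling locations.

```python
import numpy as np

def l1(a, b):
    # L1 distance; np.abs handles complex-valued differences.
    return np.abs(a - b).sum()

def grad_v(x):  # vertical finite difference (our assumption for nabla_v)
    return np.diff(x, axis=0)

def grad_h(x):  # horizontal finite difference (our assumption for nabla_h)
    return np.diff(x, axis=1)

def ddss_loss(A, y, x_p1, x_p2, x_u,
              lam_img=2.0, lam_grad=1.0, lam_pdc=10.0):
    """Total DDSS loss of equation 21 from the three reconstructions."""
    # Partition data consistency in k-space (equations 14-17).
    pdc = sum(l1(A(x), y) for x in (x_p1, x_p2, x_u))
    # Appearance consistency in the image domain (equations 18-20).
    pairs = [(x_u, x_p1), (x_u, x_p2), (x_p1, x_p2)]
    img = sum(l1(a, b) for a, b in pairs)
    grad = sum(l1(grad_v(a), grad_v(b)) + l1(grad_h(a), grad_h(b))
               for a, b in pairs)
    return lam_img * img + lam_grad * grad + lam_pdc * pdc
```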
## 4 Experiments
### Data Preparation
We evaluated the proposed method on both simulated and real non-Cartesian data. For the simulation studies, we randomly select 505 T1-weighted and 125 T2-weighted 3D brain MR images from the Human Connectome Project (HCP) (Van Essen et al., 2013) with no subject overlap. The volumes were first resampled to \(1.5\times 1.5\times 5mm^{3}\) to match common clinical resolutions. We consider a 2D non-Cartesian multi-coil data acquisition protocol, where 8 coil sensitivity maps are analytically generated. We note that the publicly-available HCP dataset is magnitude-only and does not come with phase information. To this end, we add realistic phases to the magnitude images to create fully complex-valued target images for the experiments, which generates complex-valued multi-coil 3D images for the simulation study.
To generate non-Cartesian undersampled data, we use a non-Cartesian variable density sampling pattern where the sampling density decays from the k-space center at a quadratic rate (Lustig et al., 2008) with the central 10% of the k-space oversampled at 1.25 times the Nyquist rate. We generate two sampling trajectory settings with target acceleration factor \(R\in\{2,4\}\). 476 T1-weighted and 104 T2-weighted images are used for training and 29 T1-weighted and 21 T2-weighted MR images are used for evaluation.
For real-world low-field MRI studies, we collect 119 FLAIR and 125 FSE-T2w 3D brain MR images acquired using a Hyperfine Swoop\({}^{\text{TM}}\) portable MRI system1 with a field strength of 64 mT. An in-house 1-transmitter/8-receiver phased-array coil is implemented in the system. Walsh's method (Walsh et al., 2000) was used to estimate the coil sensitivity maps as required for the reconstruction. Due to the low field strength and real-world deployment, these images are more difficult to reconstruct, with higher levels of noise and imaging system artifacts. Both FLAIR and FSE-T2w images of resolution \(1.6\times 1.6\times 5mm^{3}\) are acquired using a non-Cartesian variable density sampling pattern with acceleration factors of 2 and 4 and are reconstructed slice-wise. 106 FLAIR and 112 FSE-T2w images were used for training and 13 FLAIR and 13 FSE-T2w MR images are used for evaluation.
Footnote 1: [https://www.hyperfine.io/](https://www.hyperfine.io/)
### Implementation Details
We implement all methods in Tensorflow and perform experiments using an NVIDIA Tesla M60 GPU with 8GB memory. The Adam solver (Kingma and Ba, 2014) is used to optimize
\begin{table}
\begin{tabular}{l|c|c|c||c|c|c} \hline \multicolumn{1}{c|}{**Evaluation**} & \multicolumn{3}{c||}{**Setting 1 (R=2)**} & \multicolumn{3}{c}{**Setting 2 (R=4)**} \\ \cline{2-7}
**T1w** & SSIM & PSNR & NMSE & SSIM & PSNR & NMSE \\ \hline Adjoint & \(0.810\pm 0.057\) & \(15.881\pm 4.150\) & \(0.848\pm 0.663\) & \(0.762\pm 0.071\) & \(14.584\pm 4.192\) & \(1.174\pm 0.934\) \\ Gridding/SDC & \(0.886\pm 0.032\) & \(19.941\pm 3.767\) & \(0.295\pm 0.145\) & \(0.818\pm 0.057\) & \(16.859\pm 4.097\) & \(0.675\pm 0.555\) \\ CG-SENSE & \(0.921\pm 0.021\) & \(22.641\pm 3.230\) & \(0.151\pm 0.049\) & \(0.880\pm 0.034\) & \(19.997\pm 3.450\) & \(0.291\pm 0.148\) \\ L1-Wavelet & \(0.927\pm 0.018\) & \(23.106\pm 3.091\) & \(0.134\pm 0.037\) & \(0.893\pm 0.028\) & \(20.787\pm 3.280\) & \(0.238\pm 0.109\) \\ \hline SSDU & \(0.960\pm 0.021\) & \(26.301\pm 1.253\) & \(0.073\pm 0.041\) & \(0.898\pm 0.033\) & \(23.370\pm 2.911\) & \(0.123\pm 0.019\) \\ KDSS & \(0.981\pm 0.004\) & \(30.454\pm 2.447\) & \(0.024\pm 0.005\) & \(0.941\pm 0.022\) & \(26.303\pm 3.169\) & \(0.063\pm 0.011\) \\ DDSS (Ours) & \(\mathbf{0.988\pm 0.006}\) & \(\mathbf{33.678\pm 3.926}\) & \(\mathbf{0.012\pm 0.005}\) & \(\mathbf{0.947\pm 0.022}\) & \(\mathbf{27.223\pm 3.353}\) & \(\mathbf{0.051\pm 0.013}\) \\ \hline Supervised (Upper Bound) & \(0.996\pm 0.002\) & \(39.080\pm 4.336\) & \(0.003\pm 0.002\) & \(0.954\pm 0.020\) & \(27.760\pm 3.349\) & \(0.046\pm 0.013\) \\ \hline \hline \multicolumn{1}{c|}{**Evaluation**} & \multicolumn{3}{c||}{**Setting 1 (R=2)**} & \multicolumn{3}{c}{**Setting 2 (R=4)**} \\ \cline{2-7}
**T2w** & SSIM & PSNR & NMSE & SSIM & PSNR & NMSE \\ \hline Adjoint & \(0.795\pm 0.033\) & \(15.909\pm 2.307\) & \(0.883\pm 0.356\) & \(0.745\pm 0.042\) & \(14.735\pm 2.322\) & \(1.184\pm 0.526\) \\ Gridding/SDC & \(0.866\pm 0.027\) & \(19.170\pm 2.556\) & \(0.402\pm 0.128\) & \(0.800\pm 0.042\) & \(16.712\pm 2.741\) & \(0.743\pm 0.303\) \\ CG-SENSE & \(0.908\pm 0.019\) & \(22.398\pm 2.454\) & \(0.183\pm 0.035\) & \(0.860\pm 0.032\) & \(19.550\pm 2.833\) & \(0.368\pm 0.121\) \\ L1-Wavelet & \(0.914\pm 0.017\) & \(22.889\pm 2.403\) & \(0.163\pm 0.030\) & \(0.874\pm 0.028\) & \(20.190\pm 2.750\) & \(0.314\pm 0.095\) \\ \hline SSDU & \(0.950\pm 0.021\) & \(26.030\pm 1.577\) & \(0.089\pm 0.041\) & \(0.900\pm 0.023\) & \(23.458\pm 2.552\) & \(0.142\pm 0.015\) \\ KDSS & \(0.980\pm 0.003\) & \(30.490\pm 2.071\) & \(0.028\pm 0.006\) & \(0.944\pm 0.017\) & \(26.513\pm 2.866\) & \(0.071\pm 0.011\) \\ DDSS (Ours) & \(\mathbf{0.988\pm 0.004}\) & \(\mathbf{33.739\pm 3.324}\) & \(\mathbf{0.013\pm 0.003}\) & \(\mathbf{0.949\pm 0.015}\) & \(\mathbf{27.213\pm 2.900}\) & \(\mathbf{0.060\pm 0.009}\) \\ \hline Supervised (Upper Bound) & \(0.996\pm 0.002\) & \(39.102\pm 3.743\) & \(0.004\pm 0.002\) & \(0.953\pm 0.013\) & \(27.544\pm 2.909\) & \(0.057\pm 0.011\) \\ \hline \end{tabular}
\end{table}
Table 1: Quantitative comparison of T1w (top) and T2w (bottom) reconstructions on the simulated non-Cartesian dataset under R=2 and R=4 acceleration, reported as mean \(\pm\) standard deviation. Best results excluding the supervised upper bound are in bold.
our models with \(lr=3\times 10^{-5}\), \(\beta_{1}=0.9\), and \(\beta_{2}=0.999\). We use a batch size of 8 and train all models for 200 epochs. To trade off training time versus reconstruction quality, the default number of iterations in the non-Cartesian reconstruction network was set to 6 unless otherwise noted. We use the adjoint for \(f_{init}\). We initialize the forward and adjoint operator based on Fessler and Sutton (2003) with an oversampling factor of 2. During training, the undersampled data partitioning rate is randomly generated between \([0.2,0.8]\) which was empirically determined to provide better results during initial prototyping.
### Baselines and Evaluation Strategies
**Simulated HCP.** For the simulated non-Cartesian HCP data with ground truth reconstruction available, we quantify reconstructed image quality via the structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and normalized mean squared error (NMSE). Benchmarks are run against previous methods including SSDU (Yaman et al., 2020), Compressed Sensing with \(L_{1}\)-Wavelet sparsity (Gu et al., 2021), CG-SENSE (Maier et al., 2021), Gridding (Zwart et al., 2012), and adjoint-only reconstruction. For fair comparison, the same non-Cartesian reconstruction network was deployed in SSDU. As an upper bound, we also compared our DDSS against a supervised strategy where the same reconstruction network was trained in a fully supervised fashion with ground truth available. As an ablation, we further evaluate the advantages of dual-domain training by comparing against a k-space-domain-only self-supervised (KDSS) model in which only the PDC loss is used for training.
**Real-world low-field MRI.** For the acquired low-field non-Cartesian data with no ground truth available, image quality assessment is performed via a reader study including three experienced radiologists. The radiologists are asked to compare DDSS to CG-SENSE (Maier et al., 2021), a reconstruction algorithm implemented by the Hyperfine system that robustly offers high-quality reconstructions. During the reader study, pairs of CG-SENSE and DDSS reconstructions with an effective acceleration rate of 2 were shown, where the readers are blinded to the reconstruction method. The evaluations are performed using a five-point Likert scale to rate the image quality and the consistency in diagnosis outcomes. The image quality is evaluated in terms of noise, sharpness, and overall perceptual quality between DDSS and CG-SENSE, with a rating scale of 2 being far better, 1 being better, 0 being the same, -1 being worse, -2 being far worse. The consistency in diagnosis outcomes is evaluated based on the agreement on giving consistent diagnoses between the two methods' reconstructions in terms of contrast, geometric fidelity, and presence of artifacts (2: strongly agree, 1: agree, 0: neutral, -1: disagree, -2: strongly disagree). Specifically, for contrast, the reader is asked to judge whether the contrast between different anatomical features in both image reconstructions is the same. For geometric fidelity, the reader assesses whether CG-SENSE and DDSS agree on how fine details in underlying anatomy are rendered. For artifact assessment, the reader evaluates whether CG-SENSE and DDSS agree on the presence or absence of imaging artifacts, such as motion and ghosting.
### Results on Simulated Non-Cartesian Data
For the simulation studies, quantitative results are summarized in Table 1. With acceleration \(R=2\) on T1w reconstruction, DDSS achieves SSIM\(=0.988\) and PSNR\(=\)33.678, which are significantly higher than all previous baseline methods, including SSDU (Yaman et al., 2020) with SSIM\(=\)0.960 and PSNR\(=\)26.301. When comparing the ablation KDSS against DDSS, we find that the AC loss further boosts reconstruction performance from 30.454 to 33.678 in terms of PSNR and from 0.981 to 0.988 in terms of SSIM, thus narrowing the gap in performance to the fully supervised upper bound with PSNR\(=\)39.080 and SSIM\(=\)0.996. Similar trends are observed for the T2w reconstruction experiments, where DDSS outperforms previous unsupervised (conventional or deep learning-based) baseline methods, achieving a PSNR of 33.739 and SSIM of 0.988. While overall performance decreases for all methods as the acceleration factor increases from \(R=2\) to \(R=4\), DDSS still outperforms previous methods on both T1w and T2w reconstruction tasks in terms of PSNR, SSIM, and NMSE. Interestingly, for \(R=2\), PSNR standard deviation marginally increases with increased mean PSNR performance, but we do not find similar patterns in the \(R=4\) experiments. Since DDSS, KDSS, and SSDU are implemented with the same reconstruction network, they share the same reconstruction time (\(<0.01\) s) and number of parameters (\(30\)M).
Qualitative comparisons of T1w reconstructions and T2w reconstructions under R\(=\)2 and R\(=\)4 are visualized in Fig. 3 and Fig. 4, respectively. We find that conventional methods such as CG-SENSE and \(L_{1}\)-Wavelet can significantly reduce aliasing as compared to the gridding reconstructions. SSDU further reduces the artifacts with decreased residual reconstruction errors as compared to conventional methods. On the other hand, DDSS reconstruction resulted in the lowest overall errors for both T1w and T2w reconstructions across all previous conventional and self-supervised methods. Even though the supervised model using fully sampled ground truth during training achieves the best quantitative results, the reconstructions from the supervised model and DDSS are comparable qualitatively.
We then study the impact of the number of iterations in the non-Cartesian reconstruction network (Fig. 5). We find that DDSS consistently outperforms KDSS across different settings of the number of iterations and achieves performance closer to that of the fully supervised model. For DDSS, the reconstruction performance starts to converge asymptotically after 6 iterations.
Figure 5: The impact of the number of iterations in the non-Cartesian reconstruction network for DDSS reconstruction on the Simulated HCP dataset.
Experiments on T1w and T2w reconstructions show similar behavior when varying the number of iterations from 2 to 10.
We further investigate the impact of \(\lambda_{PDC}\) used in DDSS training (Fig. 6). When sweeping \(\lambda_{PDC}\) from 1 to 500, the T1w and T2w reconstruction performance of DDSS is optimized when \(\lambda_{PDC}\) is set to 10, which is used as the default hyper-parameter in our experiments. Notably, training with \(\lambda_{PDC}=0\), i.e., using only the AC loss, was unable to converge due to training instability, indicating that both components of the overall DDSS loss are synergistic.
### Results on Real Data
For the real-world dataset, the data acquired from the low-field MRI scanner is non-Cartesian and accelerated with no ground truth available (Mazurek et al., 2021). We therefore evaluated DDSS performance on this dataset using expert studies. Specifically, we trained DDSS on the real data and compared it against the default CG-SENSE reconstruction (Maier et al., 2021) from the system. The reader study for the image quality assessment is summarized in Fig. 7 and Tbl. 2.
All 3 radiologists rated the DDSS FSE-T2 and FLAIR reconstructions to be better than or the same as the reconstructions from the CG-SENSE, in terms of noise, sharpness, and overall image quality. For noise, sharpness, and overall image quality, the averaged rating scores of the 3 radiologists were 1.05, 0.76, and 1.00, respectively for FSE-T2 and 1.12, 1.07, and 1.07 respectively for FLAIR reconstruction.
The reader study for the consistency in diagnoses is summarized in Fig. 8 and Tbl. 3. The first and second radiologists agreed that consistent diagnoses were attainable from the DDSS and CG-SENSE reconstructions for the majority of the study, in terms of contrast, geometric fidelity, and the presence of artifacts. Notably, the third radiologist rated neutral or disagree in terms of contrast consistency between the two methods and rated agree or neutral in terms of geometric consistency for the majority of cases. In the diagnosis consistency study, the averaged rating scores of the 3 radiologists for contrast, geometric fidelity, and presence of artifacts are 0.76, 1.33, and 1.79, respectively, for FSE-T2 and 0.58, 1.25, and 1.79, respectively, for FLAIR reconstruction.
Qualitative results are presented in Fig. 9 and Fig. 10, where we visualize results from FSE-T2w and FLAIR scans of two subjects presenting with hemorrhagic strokes. In addition to visualizing the CG-SENSE reconstruction, we also visualize the Gridding and L1-Wavelet reconstructions for qualitative comparisons. As we can see from Fig. 9 with R=2, the Gridding reconstructions suffer from blurring due to the accelerated data acquisition protocols. While the L1-Wavelet and CG-SENSE methods can reduce blurring, the proposed self-supervised DDSS reconstructions produce much sharper images, leading to enhanced visualization of neuroanatomy. Similar observations are made in Fig. 10, where DDSS provides reconstructions with better contrast, sharpness, and lower noise under both R=2 and R=4 acceleration factors.
## 5 Discussion
In this work, we developed a novel dual-domain self-supervised (DDSS) approach for accelerated non-Cartesian MRI reconstruction. Specifically, we proposed to train a non-Cartesian MRI reconstruction network in both image and k-space domains in a self-supervised fashion. We overcame two major difficulties in MRI reconstruction. First, the proposed self-supervised method allows the MRI reconstruction network to be trained without using any fully sampled MRI data, rather than relying on large-scale paired undersampled/fully-sampled data for training, which is infeasible to collect if the MRI system only uses accelerated acquisition protocols. Second, DDSS is applicable to non-Cartesian MRI reconstruction, a relatively understudied problem for deep reconstruction networks.
### Experimental Summaries
We first demonstrate that DDSS can reconstruct high-quality images on the simulated non-Cartesian MRI dataset (Table 1). First, DDSS achieves significantly better reconstruction performance than baseline reconstruction methods, including previous conventional and self-supervised methods. Second, we found that combining image-domain self-supervision
\begin{table}
\begin{tabular}{l|c c c|c} \hline
**FSE-T2** & **Reader \#1** & **Reader \#2** & **Reader \#3** & **Average** \\ \hline
**Noise** & \(1.00\pm 0.57\) & \(1.00\pm 0.00\) & \(1.15\pm 0.37\) & \(1.05\pm 0.39\) \\ \hline
**Sharpness** & \(0.69\pm 0.48\) & \(1.00\pm 0.00\) & \(0.61\pm 0.50\) & \(0.76\pm 0.42\) \\ \hline
**Overall** & \(0.92\pm 0.27\) & \(0.92\pm 0.27\) & \(1.15\pm 0.37\) & \(1.00\pm 0.32\) \\ \hline \hline
**FLAIR** & **Reader \#1** & **Reader \#2** & **Reader \#3** & **Average** \\ \hline
**Noise** & \(1.15\pm 0.55\) & \(0.92\pm 0.27\) & \(1.30\pm 0.48\) & \(1.12\pm 0.46\) \\ \hline
**Sharpness** & \(1.07\pm 0.64\) & \(0.92\pm 0.27\) & \(1.23\pm 0.59\) & \(1.07\pm 0.53\) \\ \hline
**Overall** & \(1.07\pm 0.49\) & \(0.84\pm 0.37\) & \(1.30\pm 0.48\) & \(1.07\pm 0.48\) \\ \hline \end{tabular}
\end{table}
Table 2: The averaged scores of the 3 radiologists from the image-quality reader study on both FSE-T2 and FLAIR reconstructions. Higher is better.
\begin{table}
\begin{tabular}{l|c c c|c} \hline
**FSE-T2** & **Reader \#1** & **Reader \#2** & **Reader \#3** & **Average** \\ \hline
**Contrast** & \(1.38\pm 0.50\) & \(1.00\pm 0.40\) & \(-0.07\pm 0.75\) & \(0.76\pm 0.84\) \\ \hline
**Geometry** & \(1.61\pm 0.50\) & \(1.92\pm 0.27\) & \(0.46\pm 0.51\) & \(1.33\pm 0.77\) \\ \hline
**Artifact** & \(1.92\pm 0.27\) & \(1.76\pm 0.43\) & \(1.69\pm 0.85\) & \(1.79\pm 0.57\) \\ \hline \hline
**FLAIR** & **Reader \#1** & **Reader \#2** & **Reader \#3** & **Average** \\ \hline
**Contrast** & \(1.15\pm 0.68\) & \(1.07\pm 0.64\) & \(-0.46\pm 0.96\) & \(0.58\pm 1.06\) \\ \hline
**Geometry** & \(1.53\pm 0.51\) & \(2.00\pm 0.00\) & \(0.23\pm 0.43\) & \(1.25\pm 0.84\) \\ \hline
**Artifact** & \(1.84\pm 0.37\) & \(1.84\pm 0.37\) & \(1.69\pm 0.85\) & \(1.79\pm 0.57\) \\ \hline \end{tabular}
\end{table}
Table 3: The averaged scores of the 3 radiologists from the diagnosis consistency reader study on both FSE-T2 and FLAIR reconstructions. Higher is better.
Figure 6: The impact of \(\lambda_{PDC}\) used in DDSS training (Eq. 21). Higher is better.
with k-space self-supervision can significantly boost the reconstruction performance, implying that the synergy of dual-domain self-supervision enhances the reconstruction learning of DDSS. Qualitatively, we observe from Figs. 3 and 4 that important anatomical structures are visually consistent with the ground truth. The ablation study on the impact of \(\lambda_{PDC}\), which controls the balance between the k-space and image-domain self-supervision losses, shows that properly selecting this hyper-parameter is important for optimal performance. We also found that setting \(\lambda_{PDC}=0\), i.e., using only image-domain self-supervision during training, prevents the loss from converging properly; thus, the performance under this setting is not reported. Finally, we also demonstrate successful application to a real dataset, where the low-field non-Cartesian MRI data is acquired using the Hyperfine Swoop system with only undersampled non-Cartesian data available (Figs. 9 and
Figure 8: The diagnostic-consistency expert assessments from the clinical radiologist reader study on the real low-field MRI dataset. Diagnosis consistency is evaluated in terms of contrast, geometric fidelity, and presence/absence of artifacts using a 5-point ordinal scale, comparing DDSS to CG-SENSE (Maier et al., 2021) reconstructions.
Figure 7: The image-quality expert assessments from the clinical radiologist reader study on the real low-field MRI dataset. Image-quality is evaluated in terms of noise, sharpness, and overall quality using a 5-point ordinal scale, comparing DDSS to CG-SENSE (Maier et al., 2021) reconstructions.
10). Our reader studies on image-quality (Fig. 7 and Tbl. 2) and diagnosis consistency (Fig. 8 and Tbl. 3) show that DDSS can provide superior image quality and highly consistent diagnosis results as compared to the conventional method deployed in the system, i.e., CG-SENSE.
### Limitations and Future Work
The presented work has opportunities for improvements that are the subject of our ongoing work.
* While DDSS performance does not currently exceed the performance of full supervision, several future modifications could potentially further increase its performance. First, the current non-Cartesian reconstruction network is based on gradient descent where the data consistency constraint could be further enforced. Using a conjugate gradient-based architecture, such as Aggarwal et al. (2018), could potentially further improve DDSS performance. Similarly, deploying a density-compensated primal dual network (Ramzi et al., 2022) with a density-compensated data consistency operation in DDSS could also improve reconstruction.
* DDSS only imposes image and k-space self-supervised losses on the final output of the non-Cartesian reconstruction network, whereas deep supervision on each cascade output (Zhou and Zhou, 2020) could be implemented to potentially further improve its performance.
* Even though this work uses a UNet (Ronneberger et al., 2015) as the backbone network in the non-Cartesian reconstruction network, we do not claim that this is an optimal backbone for reconstruction. As the DDSS backbone network is interchangeable, other state-of-the-art image restoration networks, such as OUCNet (Guo et al., 2021),
Figure 10: **Qualitative comparisons of FSE-T2w and FLAIR reconstructions from the real data acquired from a low-field (64mT) MRI system. The subject was diagnosed with a hemorrhagic stroke with associated midline shift. The DDSS reconstruction is compared to Gridding, L1-Wavelet, and CG-SENSE (Maier et al., 2021) reconstructions of this subject under both R=2 and R=4 acceleration factors.**
Figure 9: **Qualitative comparisons of FSE-T2w and FLAIR reconstructions from the real data acquired from a low-field (64mT) MRI system. The subject was diagnosed with a hemorrhagic stroke with lacunar infarcts. The DDSS reconstruction is compared to Gridding, L1-Wavelet, and CG-SENSE (Maier et al., 2021) reconstructions of this subject.**
ResViT (Dalmaz et al., 2021), and DCCT (Zhou et al., 2022), could use the DDSS loss functions which could lead to improved reconstruction. Deploying dual-domain reconstruction networks, such as DuDoRNet (Zhou and Zhou, 2020) and MDReconNet (Ran et al., 2020), in DDSS and extending the current 2D framework to 3D, could also potentially improve the reconstruction performance.
* While the coil sensitivity map is assumed to be known or straightforward to estimate in this work, previous work (Sriram et al., 2020) has attempted to use another sub-network for coil sensitivity map estimation. Integrating deep learning-based coil sensitivity map prediction into DDSS will be investigated in future work.
* This work focused on the non-Cartesian variable density sampling pattern in order to facilitate experimentation with the real clinical images generated by the Hyperfine system. Investigating more diverse non-Cartesian sampling patterns, such as spiral interleaves and radial spokes (Tsao and Kozerke, 2012), and their compatibility with DDSS, will be included in future work.
* The experiments in this paper focus on clinical scenarios, in which extremely high fidelity reconstruction is required, and as such the acceleration factors were investigated at 2\(\times\) and 4\(\times\). On the other hand, in research scenarios, much higher acceleration factors are often considered in order to probe the limits of the methods. From Table 1, we see that, while statistically significant, the performance gains are not as strong for \(R=4\) as they are for \(R=2\). It is plausible that even higher acceleration rates may further reduce the impact of self-supervision via k-space redundancy, especially in comparison to a fully-supervised model, which still utilizes the fully-sampled ground truth for supervision. Therefore, future work will investigate the sensitivity of the DDSS framework to higher acceleration factors and potential improvements.
## 6 Conclusion
This paper presented a dual-domain self-supervised learning method for training a non-Cartesian deep MRI reconstruction model _without_ using any fully sampled data. Novel loss functions leveraging self-supervision in both the k-space and image domains were developed leading to improved reconstructions. Experimental results on a simulated accelerated non-Cartesian dataset demonstrated that DDSS can generate highly accurate reconstructions that approach the fidelity of the fully supervised reconstruction. Finally, the proposed framework was shown to successfully scale to the reconstruction of challenging real MRI data from a portable low-field 0.064T MRI scanner, where fully sampled data is unavailable. These DDSS improvements were assessed by expert radiologists in a user study measuring image quality and diagnostic consistency and were found to outperform traditional reconstruction methods.
## Acknowledgments
The authors thank Dr. Ardavan Saeedi for valuable discussions and suggestions and the radiologists for participating in our expert study.
## Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Credit authorship contribution statement
**Bo Zhou**: Conceptualization, Methodology, Software, Visualization, Validation, Formal analysis, Writing original draft.
**Jo Schlemper**: Conceptualization, Methodology, Software, Visualization, Validation, Formal analysis, Writing - review and editing, Supervision. **Neel Dey**: Conceptualization, Writing - review and editing. **Seyed Sadegh Mohseni Salehi**: Conceptualization, Methodology, Software, Writing - review and editing. **Kevin Sheth**: Data preparation, Writing - review and editing. **Chi Liu**: Writing - review and editing. **James S. Duncan**: Writing - review and editing. **Michal Sofka**: Conceptualization, Methodology, Software, Visualization, Validation, Formal analysis, Writing - review and editing, Supervision.
|
2303.14882 | TransCODE: Co-design of Transformers and Accelerators for Efficient
Training and Inference | Automated co-design of machine learning models and evaluation hardware is
critical for efficiently deploying such models at scale. Despite the
state-of-the-art performance of transformer models, they are not yet ready for
execution on resource-constrained hardware platforms. High memory requirements
and low parallelizability of the transformer architecture exacerbate this
problem. Recently-proposed accelerators attempt to optimize the throughput and
energy consumption of transformer models. However, such works are either
limited to a one-sided search of the model architecture or a restricted set of
off-the-shelf devices. Furthermore, previous works only accelerate model
inference and not training, which incurs substantially higher memory and
compute resources, making the problem even more challenging. To address these
limitations, this work proposes a dynamic training framework, called DynaProp,
that speeds up the training process and reduces memory consumption. DynaProp is
a low-overhead pruning method that prunes activations and gradients at runtime.
To effectively execute this method on hardware for a diverse set of transformer
architectures, we propose ELECTOR, a framework that simulates transformer
inference and training on a design space of accelerators. We use this simulator
in conjunction with the proposed co-design technique, called TransCODE, to
obtain the best-performing models with high accuracy on the given task and
minimize latency, energy consumption, and chip area. The obtained
transformer-accelerator pair achieves 0.3% higher accuracy than the
state-of-the-art pair while incurring 5.2$\times$ lower latency and 3.0$\times$
lower energy consumption. | Shikhar Tuli, Niraj K. Jha | 2023-03-27T02:45:18Z | http://arxiv.org/abs/2303.14882v1 | # TransCODE: Co-design of Transformers and Accelerators for Efficient Training and Inference
###### Abstract
Automated co-design of machine learning models and evaluation hardware is critical for efficiently deploying such models at scale. Despite the state-of-the-art performance of transformer models, they are not yet ready for execution on resource-constrained hardware platforms. High memory requirements and low parallelizability of the transformer architecture exacerbate this problem. Recently-proposed accelerators attempt to optimize the throughput and energy consumption of transformer models. However, such works are either limited to a one-sided search of the model architecture or a restricted set of off-the-shelf devices. Furthermore, previous works only accelerate model inference and not training, which incurs substantially higher memory and compute resources, making the problem even more challenging. To address these limitations, this work proposes a dynamic training framework, called DynaProp, that speeds up the training process and reduces memory consumption. DynaProp is a low-overhead pruning method that prunes activations and gradients at runtime. To effectively execute this method on hardware for a diverse set of transformer architectures, we propose ELECTOR, a framework that simulates transformer inference and training on a design space of accelerators. We use this simulator in conjunction with the proposed co-design technique, called TransCODE, to obtain the best-performing models with high accuracy on the given task and minimize latency, energy consumption, and chip area. The obtained transformer-accelerator pair achieves 0.3% higher accuracy than the state-of-the-art pair while incurring 5.2\(\times\) lower latency and 3.0\(\times\) lower energy consumption.
Application-specific integrated circuits; hardware-software co-design; machine learning; neural network accelerators; transformers.
## I Introduction
Artificial intelligence (AI) is undergoing a paradigm shift with the rise of large language models, e.g., BERT [1], GPT-3 [2], DALL-E [3]. These models, backed by the transformer architecture [4], target many applications, including language [1], vision [5], robotic manipulation [6], reasoning [7], human interaction [8], and forecasting [9]. Training on comprehensive datasets (generally using self-supervision at scale) and finetuning on downstream tasks have enabled their widespread application. However, training and inference with such large models either involve high power consumption on graphical processing units (GPUs) or high energy and latency on off-the-shelf edge-AI devices. For instance, the lowest possible latency for transformer inference on a Raspberry Pi [10] is 2.1 seconds [11], which is too slow for real-time natural language processing (NLP) tasks. This makes efficient training (and even inference) of such models an unsolved problem. The increasing size of state-of-the-art language models [2, 12] results in a higher memory footprint and computational complexity, exacerbating this problem.
Previous works propose many specialized hardware accelerators to address the abovementioned challenges. For instance, A\({}^{3}\)[13] is one of the first accelerators to enable efficient transformer inference by leveraging algorithmic approximation and hardware specialization. SpAtten [14] proposes a cascade token pruning mechanism to prune the weights of a transformer at runtime. Energon [15] approximates this pruning mechanism to speed up inference. AccelTran [16], a state-of-the-art transformer accelerator, executes dynamic inference by skipping _all_ ineffectual multiply-and-accumulate (MAC) operations. It also _tiles_ the matrices to facilitate higher parallelization and hardware utilization while executing the matrix multiplication operations. However, the above accelerators do not support transformer training, which demands a higher memory footprint for transformer execution on resource-constrained edge-AI devices.
Tuli et al. [17] showed that each NLP task has a unique optimal transformer architecture, requiring a specialized accelerator for efficient evaluation. However, designing a single accelerator that efficiently executes a diverse set of transformer architectures takes significant time and effort. Transformers involve serially-connected encoder (and sometimes decoder) layers. An accelerator with too many processing elements (PEs) would have low compute resource utilization for a deep but narrow transformer model. A PE is the basic compute module in an accelerator. A shallow and wide transformer would incur low latency and enable high parallelization [11, 18] but would require many PEs and high bandwidth memory. However, a manual search of the best accelerator architecture and transformer design decisions is computationally too expensive due to the vastness of each design space [19].
To tackle the abovementioned challenges, previous works implement automated hardware-software co-design. NAAS [20] and CODEBench [19] simultaneously search for the best design choices for the convolutional neural network (CNN) and accelerator architecture. However, a CNN workflow is different from that of a transformer, warranting substantially different accelerator design choices. Some recent works target co-design with transformer models. Qi et al. [21] use a recurrent neural network (RNN) and a reinforcement learning (RL)-based controller to guide the search using a pool of five field-programmable gate arrays (FPGAs) and adjust the pruning parameters of an input transformer model. Peng et al. [22] explore the scheduling
and sparsity decisions on a single FPGA. CODEBench [19], although a framework for CNN accelerators, shows the advantages of exploring massive CNN and accelerator search spaces, resulting in high gains in accuracy, energy consumption, evaluation throughput, and chip area. Hence, we leverage its co-design technique, BOSHCODE, in our proposed framework (details in Section II-C). On the other hand, the abovementioned works on transformer accelerator search only target one (or few) transformer models on a limited set of hardware platforms. This restricts the gains from automated co-design, leading to low resource utilization and inefficient configurations.
In order to address the above issues, we propose TransCODE, a co-design framework for transformers and application-specific integrated circuit (ASIC)-based accelerators. Our main contributions are as follows.
* For efficient on-device training, we propose DynaProp, which dynamically prunes weights, activations, and gradients to skip ineffectual MAC operations and speed up the transformer training/inference process. DynaProp leverages specialized low-overhead hardware modules to induce sparsity into transformer training and inference.
* To support vast design spaces involving _flexible_ and _heterogeneous_ transformer architectures [17], we propose a flexible BERT accelerator (ELECTOR) framework. ELECTOR supports diverse transformer architectures within the FlexiBERT 2.0 design space [11]. It efficiently implements model operations through dedicated hardware modules and a functional transformer mapper. The design space within the ELECTOR framework involves disparate accelerators that can execute the transformers in the FlexiBERT 2.0 design space. ELECTOR also effectively implements the proposed DynaProp algorithm to speed up transformer training and inference. It involves 14,850,000 accelerators, a design space much more extensive than investigated in any previous work.
* We then leverage the proposed ELECTOR and FlexiBERT 2.0 design spaces to implement co-design and obtain a transformer-accelerator pair that maximizes the performance objectives within the given user-defined constraints. We call this framework, which co-designs the transformer-accelerator pair, TransCODE. It leverages the best-performing optimization technique in the considered design spaces.
We organize the rest of the article as follows. Section II presents background on transformer and accelerator design choices along with automated hardware-software co-design. Section III illustrates the TransCODE framework that includes DynaProp, ELECTOR framework and its design space, and the co-design pipeline. Section IV describes the experimental setup and targeted baselines. Section V presents the results. Section VI discusses the limitations of the proposed work and future work directions. Finally, Section VII concludes the article.
## II Background and Related Work
In this section, we present background material on popular transformer and accelerator architectures and the corresponding design decisions. We also describe previously proposed hardware-software co-design methods.
### _Transformer Design Space_
Previous works propose various transformer architectures. BERT is one of the most popular architectures that is widely used for language modeling [1]. Its variants leverage mechanisms other than vanilla self-attention [23] to optimize performance or reduce model size and complexity. They include RoBERTa [24] that implements robust pre-training techniques, ConvBERT [25] that uses one-dimensional convolutional operations, MobileBERT [26] that employs bottleneck structures and multiple feed-forward stacks, among many others. Further, architectures like FNet [27] and LinFormer [28] use Fourier transform and low-rank approximation, respectively, of the self-attention operation to aid efficiency and reduce the number of model parameters.
In order to search for the best-performing model for a given task, FlexiBERT [17] unifies and implements _heterogeneous_ and _flexible_ transformer architectures, encapsulating various self-attention operation types. Each encoder layer in its design space can have a different attention mechanism (heterogeneity) and a different hidden dimension (flexibility). Among many works that implement neural architecture search (NAS) in a design space of transformer models [29, 30, 31, 32], FlexiBERT has the largest and the most expressive design space. This results in state-of-the-art models that outperform previous architectures in accuracy. FlexiBERT 2.0 [11] extends the design space to 1.7 \(\times\) 10\({}^{88}\) transformer models, the largest and the most expressive transformer design space to date. We thus use the FlexiBERT 2.0 design space to implement co-design in this work. Note that no previously proposed accelerator supports heterogeneous and flexible transformer workflows. We discuss traditional transformer accelerators next.
### _Accelerator Design Space_
A transformer model's hardware performance (characterized by latency, energy consumption, and chip area) on a given platform depends on multiple factors. These factors include memory size and bandwidth, number of MAC units (that can execute matrix multiplication operations in parallel), number of specialized hardware modules (e.g., ones for softmax and layer-norm operations), operation scheduling, dataflow, model sparsity, etc. These design decisions lead to many existing accelerators proposed in the literature.
A\({}^{3}\)[13] is one of the first ASICs to support transformer acceleration. It uses several approximation strategies to avoid computing attention scores that are close to zero. SpAtten [14] proposes the top-\(k\) pruning algorithm that ranks input token and attention-head scores using a dedicated hardware module. However, it only considers part of the activations formed, not sparsity in all possible matrix multiplication operations. Further, implementing the proposed top-\(k\) pruning mechanism involves high compute overhead; its time complexity is \(\mathcal{O}(N^{3})\), leading to marginal gains in energy efficiency [16]. Energon [15] approximates this pruning mechanism. However,
since it is limited to being a co-processor, it requires high off-chip memory access. Finally, OPTIMUS [33] targets sparsity in a broader scope, using a set-associative rearranged compressed sparse column format to eliminate ineffectual MAC operations, although limited to weight matrices. Here, weights correspond to the trainable transformer model parameters and activations are represented by intermediate matrices formed by the transformer model operations.
To overcome the drawbacks of the abovementioned accelerators, AccelTran [16] implements dynamic inference with a transformer while pruning _all_ weights and activations. In addition, it leverages matrix tiling to improve parallelization and resource utilization. However, it only executes transformer inference and not training, uses a fixed set of design choices (e.g., a fixed tile size, number of PEs, buffer sizes), and does not support diverse models, thus leading to sub-optimal utilization. To tackle this problem, various works propose design spaces of transformer accelerators to efficiently obtain the optimal transformer architecture for the given task. However, such design spaces are limited to off-the-shelf FPGAs [21, 22] that only focus on inference. We next describe previous works on co-design of the AI model and hardware accelerator.
### _Hardware-software Co-design_
Various works target CNN-accelerator co-design [19, 20, 34]. CODEBench [19] searches over massive CNN and accelerator design spaces. However, its accelerators are geared toward CNN workflows and thus inefficient for transformer pipelines. As discussed before, Qi et al. [21] use an RNN and RL-based controller to guide search in a pool of five FPGAs and adjust the pruning parameters of an input transformer model. However, they only consider latency and accuracy constraints and do not optimize energy consumption and chip area. Peng et al. [22] explore the scheduling and sparsity decisions on an FPGA and adapt the input sequence length. SpAtten [14] implements hardware-aware NAS (HW-NAS), where it finds a sub-net of a trained super-net [18]. However, its design space only involves 72 transformers that are not flexible. Thus, there is a need for an exhaustive design space of transformer accelerators to implement co-design and obtain the best-performing transformer-accelerator pair. This pair should not only deliver high accuracy on a given task but also be energy-efficient and have a high throughput and low chip area.
In this work, we leverage Bayesian optimization using second-order gradients and a heteroscedastic surrogate model for co-design, i.e., BOSHCODE [19]. It is a scalable co-design framework that efficiently searches the hardware and software design spaces at scale. CODEBench [19] proposes and uses BOSHCODE to search over significantly large design spaces (4.2 \(\times\) 10\({}^{812}\) CNNs and 2.3 \(\times\) 10\({}^{8}\) accelerators). EdgeTran [11] leverages BOSHCODE to search over the joint space of FlexiBERT 2.0 and a set of off-the-shelf edge-AI devices, including Raspberry Pi [10], Apple M1 [35] system-on-chip (SoC) with a central processing unit (CPU) and a GPU, Intel Neural Compute stick [36] (a neural processing unit), and Nvidia Jetson Nano [37] (SoC with both CPU and GPU).
BOSHCODE supports co-design with any two search spaces. It leverages second-order gradient-based optimization [38] on an actively-trained [39] surrogate model for performance prediction (which is the optimization objective). The surrogate model combines a natural parameter network (NPN), a teacher, and a student network. The NPN predicts the mean performance of the transformer-accelerator pair along with the aleatoric uncertainty. The teacher and student networks predict the epistemic uncertainty in performance. Epistemic uncertainty is the uncertainty in performance due to an unexplored design space. In contrast, aleatoric uncertainty refers to the uncertainty due to parameter initializations and variations in model performance due to different training recipes. BOSHCODE exploits epistemic and aleatoric uncertainty estimates to obtain the best design decisions for the transformer, the accelerator, and the model training recipe that maximizes accuracy. We present more details on how we leverage BOSHCODE in our search process in Section III-C.
## III Methodology
Fig. 1 shows an overview of the TransCODE framework. ELECTOR, in Fig. 1(a), takes the accelerator embedding and transformer computational graph as input. Using the accelerator embedding, it implements a hardware accelerator with the corresponding design decisions. Next, it converts the computational graph into a corresponding transformer model with modular operations (supported by the FlexiBERT 2.0 design space), which it then maps to specialized hardware modules. It also tiles the matrices for efficient resource allocation, operation scheduling, and data reuse. Fig. 1(b) shows how we leverage the FlexiBERT 2.0 [17] framework to convert a transformer embedding to its corresponding computational graph and employ the surrogate model to predict model accuracy. Finally, Fig. 1(c) illustrates TransCODE, which uses previous performance results to train a surrogate model and query the next transformer-accelerator pair. Finally, it feeds the output accelerator and transformer embeddings to ELECTOR and FlexiBERT 2.0, respectively.
We now discuss the dynamic inference and training technique, DynaProp, that prunes activations and gradients to skip infectful operations. We then present the ELECTOR simulator and the accelerator design choices it supports. Finally, we describe the TransCODE pipeline that implements co-design and obtains the best-performing transformer-accelerator pair.
### _Dynamic Inference and Training_
DynaTran [16] is a low-overhead dynamic inference method that quickly prunes ineffectual weight and activation values at runtime. However, it only targets transformer inference and not training. We propose DynaProp that induces sparsity in weights and activations at runtime (during inference) and gradients (during training). DynaProp takes an input matrix, which is either a weight matrix (loaded from memory), an activation matrix (obtained from previous MAC operations), or a gradient matrix (formed while backpropagating gradients). It then prunes values with a magnitude less than a given
threshold \(\tau\) (i.e., it forces them to zero). Mathematically, we prune an input matrix \(\mathbf{M}\in\mathbb{R}^{m\times n}\) to \(\mathbf{M}^{\text{P}}\) as follows:
\[\mathbf{M}_{ij}^{\text{P}}=\begin{cases}\mathbf{M}_{ij}&\text{if }|\mathbf{M}_{ij}| \geq\tau\\ 0&\text{if }|\mathbf{M}_{ij}|<\tau\end{cases}\]
We implement each such comparison in parallel, thus requiring only one clock cycle for the pruning process. We define the pruning ratio (or level of sparsity) of the output matrix as:
\[\rho(\mathbf{M}^{\text{P}})=\frac{\sum_{x\in\mathbf{M}^{\text{P}}}\delta_{x,0 }}{m\times n}\]
where \(\delta\) is the Kronecker delta function. We profile the resultant sparsity in weights, activations, and gradients for different transformer models on diverse applications to obtain transfer functions that map each threshold \(\tau\) to a resultant sparsity \(\rho\). ELECTOR stores these curves in memory. For a desired value of \(\rho\), we determine the corresponding \(\tau\) at runtime through a simple look-up operation. We present such curves in Section V-A.
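To make these mechanics concrete, the following is a minimal NumPy sketch of the pruning rule and the runtime threshold look-up. The profiled \(\tau\)-to-\(\rho\) curve and the matrix sizes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def prune(M, tau):
    """Zero out entries with |M_ij| < tau; return M^P and its binary mask."""
    mask = np.abs(M) >= tau            # 1 where the element is kept
    return M * mask, mask

def pruning_ratio(M_p):
    """rho(M^P): fraction of zero entries in the pruned matrix."""
    return float(np.mean(M_p == 0))

# Hypothetical pre-profiled transfer function: sparsity rho achieved at each tau.
profiled_tau = np.array([0.0, 1e-4, 1e-3, 1e-2, 1e-1])
profiled_rho = np.array([0.0, 0.30, 0.55, 0.80, 0.95])

def tau_for_target_rho(rho_target):
    """Runtime look-up: interpolate the profiled curve to pick a threshold."""
    return np.interp(rho_target, profiled_rho, profiled_tau)

rng = np.random.default_rng(0)
G = rng.normal(scale=1e-2, size=(64, 64))   # stand-in gradient matrix
tau = tau_for_target_rho(0.9)
G_p, mask = prune(G, tau)
print(f"tau = {tau:.2e}, achieved sparsity = {pruning_ratio(G_p):.2f}")
```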
Table I shows the operations underlying the forward and backward passes for matrix multiplication and one-dimensional (1D) convolution. The table shows that training requires the same operation types as inference and thus mandates identical hardware, although with a separate dataflow. We also observe that the number of backward-pass and weight-update operations (executed during training) exceeds the number of forward-pass operations (executed during inference). Thus, training is much more computationally expensive than inference, involving more activations and gradients that the accelerator needs to account for. DynaProp prunes each such matrix before it executes the respective operation in hardware. Thus, the accelerator skips ineffectual operations, improving latency and energy efficiency.
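As a concrete illustration of the Table I operations, the sketch below implements the matrix-multiplication forward pass, backward pass, and weight update in NumPy, with the backward-pass \(\delta\) pruned before use, as DynaProp would do. The layer shapes, ReLU activation, learning rate, and threshold are illustrative assumptions.

```python
import numpy as np

def relu(z):   return np.maximum(z, 0.0)
def relu_p(z): return (z > 0).astype(z.dtype)   # f'(.) for ReLU

rng = np.random.default_rng(1)
W_i   = rng.normal(size=(32, 64))   # layer-i weights
X_im1 = rng.normal(size=(64, 8))    # activations X_{i-1}
alpha, tau_T = 1e-2, 1e-3           # learning rate, training threshold

# Forward pass: X_i = f_i(W_i X_{i-1})
Z_i = W_i @ X_im1
X_i = relu(Z_i)

# Backward pass: delta_i = (W_{i+1}^T delta_{i+1}) * f'(W_i X_{i-1})
W_ip1     = rng.normal(size=(16, 32))
delta_ip1 = rng.normal(size=(16, 8))
delta_i   = (W_ip1.T @ delta_ip1) * relu_p(Z_i)
delta_i[np.abs(delta_i) < tau_T] = 0.0          # DynaProp gradient pruning

# Weight update: grad = delta_i X_{i-1}^T ; W_i <- W_i - alpha * grad
grad_W_i = delta_i @ X_im1.T
W_i -= alpha * grad_W_i
print("gradient sparsity:", np.mean(grad_W_i == 0))
```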
Optimizers like Adam would require extra computation (e.g., the calculation of momentum and storage of previous weights/gradients). These computations can easily be incorporated into the accelerators supported in the proposed design space. However, second-order gradients would add much more computational overhead. We leave the application of complex optimizers to future work.
### _The ELECTOR Framework_
Accelerators in the ELECTOR design space take inspiration from previously proposed state-of-the-art accelerators, including SPRING [40] and AccelTran [16]. We divide the overall accelerator architecture into the accelerator tier and the (on-chip or off-chip) memory tier. Fig. 2 shows the organization of the accelerator tier in the proposed architecture. The control block receives the instruction stream of the
\begin{table}
\begin{tabular}{l l} \hline \hline
**Matrix Multiplication** & \\ \hline
Forward Pass & \(\mathbf{X}_{i}=f_{i}(\mathbf{W}_{i}\mathbf{X}_{i-1})\) \\
Backward Pass & \(\delta_{i}=\mathbf{W}_{i+1}^{\intercal}\delta_{i+1}\cdot f^{\prime}(\mathbf{W}_{i}\mathbf{X}_{i-1})\) \\
Weight Update & \(\nabla_{\mathbf{W}_{i}}\mathcal{L}=\delta_{i}\mathbf{X}_{i-1}^{\intercal}\) \\
 & \(\mathbf{W}_{i}=\mathbf{W}_{i}-\alpha\cdot\nabla_{\mathbf{W}_{i}}\mathcal{L}\) \\ \hline
**1D Convolution** & \\ \hline
Forward Pass & \(\mathbf{X}_{i}=\mathbf{w}_{i}\ast\mathbf{X}_{i-1}\) \\
Backward Pass & \(\delta_{i}=\overleftarrow{\mathbf{w}_{i+1}}\ast\delta_{i+1}\) \\
Weight Update & \(\nabla_{\mathbf{w}_{i}}\mathcal{L}=\delta_{i}\ast\overleftarrow{\mathbf{X}_{i-1}}\) \\
 & \(\mathbf{w}_{i}=\mathbf{w}_{i}-\alpha\cdot\nabla_{\mathbf{w}_{i}}\mathcal{L}\) \\ \hline \hline
\end{tabular}
\end{table} TABLE I: Forward and backward pass operations for matrix multiplication and 1D convolution.
Fig. 1: Overview of the TransCODE framework. (a) ELECTOR takes an accelerator embedding and a transformer computational graph to simulate its training/inference on the given accelerator. (b) FlexiBERT 2.0 converts the input transformer embedding to a computational graph and employs a pre-trained surrogate model to predict model accuracy. (c) The TransCODE optimizer takes in the performance values of the previously evaluated transformer-accelerator pair to query another pair in the active learning loop.
transformer model from the host CPU. The direct memory access (DMA) controller fetches the weights and embeddings from the main memory. Thus, the PEs communicate with the on-chip buffers while the DMA controller transfers data between the buffers and the on-chip/off-chip main memory. The activation-and-gradient buffer stores the activations and gradients formed during transformer evaluation. The weight buffer stores the transformer weights. ELECTOR stores all data in a compressed format (discussed in Section III-B1). Data compression relies on binary masks (stored in the mask buffer). The PEs employ the compressed data and associated masks to perform the main compute operations of any transformer in the FlexiBERT 2.0 design space.
#### Iii-B1 Hardware Modules
We now describe various modules supported in the ELECTOR design space.
* Main Memory: ELECTOR supports three memory types: an off-chip dynamic random access memory (DRAM) for scalable and economical deployments, an on-chip high-bandwidth memory (HBM) for memory-intensive edge/server applications, and an on-chip monolithic-3D resistive random access memory (RRAM). Monolithic-3D integration leverages monolithic inter-tier vias, allowing much higher density than traditional through-silicon-via-based 3D integration [41]. This leaves much more logic space and permits high memory bandwidth, which are crucial for large transformer models in the FlexiBERT 2.0 design space.
* Control Block: The control block takes the transformer model as input. It then converts all functions in the model into hardware-mappable operations (details in Section III-B2) that it later converts to _tiled_ operations. For instance, it converts the matrix multiplication operation **O = W\(\times\)A** to multiple operations of the form **O[b,i,j] = W[b,i,k]\(\times\)A[b,k,j]**, where each tile \(\in\mathbb{R}^{b\times x\times y}\), with \(b\times x\times y\) the tile size [16] (see the tiling sketch after this list). The control block also assigns and schedules the tiled operations to different PEs [16].
* Processing Elements: Fig. 3 shows the organization of a PE (the basic compute module of an accelerator) in the ELECTOR design space. The local registers of the PE store the compressed data. These are the first-in-first-out (FIFO) registers for the activations (and gradients) and weights. The data then enter the DynaProp module that induces sparsity based on the desired \(\rho\). As explained in Section III-A, this module prunes the given activation/gradient/weight matrices based on a pre-calculated threshold \(\tau\). The PE then feeds the sparse data to the pre-compute sparsity module with the binary masks. These binary masks have the same shape as the uncompressed data, where each binary bit in a mask depicts if the corresponding element in the original data vector is ineffectual or not. The pre-compute sparsity module converts the input data into a zero-free format based on the associated masks [16]. The PE then forwards this zero-free data to the MAC lanes (for matrix multiplication), softmax modules (for softmax operation), or the layer-norm module (for layer-norm operation). The zero-free data eliminate any ineffectual computations in these modules. Finally, the post-compute sparsity module [16] implements the inverse of this operation on the output activations before storing them in the corresponding FIFO register and, eventually, the main buffer.
* The MAC lanes execute multiplication between two tiles in a parallelized manner. We store all activation, gradient, and weight data in fixed-point format with \((\operatorname{IL}+\operatorname{FL})\) bits, denoting integer length (IL) and fractional length (FL), respectively [16]. Data first reach \(M\) multipliers and then an adder tree with depth \(\log_{2}M\). The MAC lanes also include a ReLU and a GeLU [42] module for feed-forward operations.
* Fig. 4 shows the DynaProp module that executes dynamic inference and training on the transformer. It takes the input activation/gradient/weight matrix and prunes ineffectual values for efficient evaluation. As explained in Section III-A, we prune the values of the input matrix by comparing their magnitude with a pre-determined threshold \(\tau\). The DynaProp module implements this in parallel for the entire tile. We first feed an input tile \(\mathbf{M}\in\mathbb{R}^{b\times x\times y}\) to the matrix transpose block, which carries out the transpose operation, if required. Mathematically, it
Fig. 3: Internal components of a PE.
Fig. 2: Organization of a typical accelerator in the ELECTOR design space.
outputs \(\mathbf{M}^{\intercal}\in\mathbb{R}^{b\times y\times x}\), transposing all matrices in the batch of size \(b\). It then feeds the input tile to \(b\times x\times y\) comparators. The threshold calculator determines the required threshold using the desired \(\rho\) and the pre-profiled transfer functions for different transformer models on diverse applications (stored in the internal register; more details in Section V-A). If the output of the comparator is zero, we set the corresponding mask bit to 1. Here, we represent the lines carrying mask information in grey and those carrying activation/gradient/weight information in black.
* For all other hardware modules, we use the proposed implementation of AccelTran [16]. However, we expand the operation executability of all modules (e.g., support for different tile sizes in the softmax module), as explained in Section III-B3.
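The following is a minimal sketch of the control block's tiling of a matrix multiplication into tile-sized work units, each of which a single MAC lane could evaluate. The tile sizes and matrix shapes are illustrative assumptions; for simplicity, the batch dimension is omitted and the shapes divide evenly into tiles.

```python
import numpy as np

def tiled_matmul(W, A, x=8, y=8, k=8):
    """Compute O = W @ A by accumulating (x, k) x (k, y) tile products,
    mimicking the control block's O[i, j] += W[i, k] x A[k, j] scheduling."""
    O = np.zeros((W.shape[0], A.shape[1]))
    for i in range(0, W.shape[0], x):
        for j in range(0, A.shape[1], y):
            for kk in range(0, W.shape[1], k):
                # One tiled operation, assignable to a MAC lane.
                O[i:i+x, j:j+y] += W[i:i+x, kk:kk+k] @ A[kk:kk+k, j:j+y]
    return O

rng = np.random.default_rng(1)
W, A = rng.normal(size=(32, 64)), rng.normal(size=(64, 16))
assert np.allclose(tiled_matmul(W, A), W @ A)
```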
The optimal selection of the number of PEs, buffer sizes, and other design choices results in the highest possible resource utilization while minimizing the number of compute/memory stalls (when we do not execute either a compute operation or a memory fetch operation). Hence, determining the best accelerator hyperparameters is essential for energy-efficient designs with a low chip area and high throughput.
#### Iii-B2 The Transformer Mapper
The FlexiBERT 2.0 [11] design space supports various operation types. We describe each operation next.
* Self-attention: The self-attention (SA) operation finds how much one token _attends_ to another token. For an output attention head \(\mathbf{H}_{i}\in\mathbb{R}^{N_{T}\times d_{\text{out}}}\) with query \(\mathbf{Q}_{i}\in\mathbb{R}^{N_{T}\times h/n}\), key \(\mathbf{K}_{i}\in\mathbb{R}^{N_{T}\times h/n}\), and value \(\mathbf{V}_{i}\in\mathbb{R}^{N_{T}\times h/n}\) matrices [17]: \[\mathbf{H}_{i}=\text{softmax}\left(\text{SA}\right)\mathbf{V}_{i}\mathbf{W}_{ i}^{o}\] where \(N_{T}\) is the input sequence length, \(h\) is the hidden dimension of the encoder layer, and \(n\) is the number of heads. The SA operation has two sub-types:
* The scaled dot-product (SDP) attention [4] is the de-facto standard operation in traditional transformer architectures. Mathematically, \[\text{SA}_{\text{SDP}}:=\frac{\mathbf{Q}_{i}\mathbf{K}_{i}^{\intercal}}{\sqrt {h}}.\]
* The weighted multiplicative attention (WMA) [43] involves a trainable weight matrix \(\mathbf{W}_{a}\in\mathbb{R}^{h/n\times h/n}\) such that \[\text{SA}_{\text{WMA}}:=\frac{\mathbf{Q}_{i}\mathbf{W}_{a}\mathbf{K}_{i}^{ \intercal}}{\sqrt{h}}.\] The mapper converts the self-attention operation into various MAC and softmax operations that the corresponding hardware modules can execute in the accelerator.
* Linear Transform: As the name suggests, this operation implements a linear transform (LT) on the input sequence. The FlexiBERT 2.0 design space supports two sub-types:
* The discrete Fourier transform (DFT) that we implement in hardware using the corresponding Vandermonde matrix \(\mathbf{V}_{\text{DFT}}\in\mathbb{R}^{N_{T}\times N_{T}}\) for the roots of unity (also called the DFT matrix) [44] such that \[\text{LT}_{\text{DFT}}:=\mathbf{V}_{\text{DFT}}\ \mathbf{H}\] where \(\mathbf{H}\in\mathbb{R}^{N_{T}\times d_{in}}\) represents a matrix for the input hidden states.
* The discrete cosine transform (DCT) that we again implement using an equivalent Vandermonde matrix \(\mathbf{V}_{\text{DCT}}\in\mathbb{R}^{N_{T}\times N_{T}}\) such that \[\text{LT}_{\text{DCT}}:=\mathbf{V}_{\text{DCT}}\ \mathbf{H}\] We store the \(\mathbf{V}_{\text{DFT}}\) and \(\mathbf{V}_{\text{DCT}}\) matrices in the buffer for subsequent use while executing the above operations. Although these operations are slower than the fast Fourier transform (FFT) [45] and the fast cosine transform (FCT) [46], respectively, sparsification of the input matrices results in a low overall execution time. Furthermore, converting these operations to MAC operations enables the reuse of the MAC lanes, thus not requiring special hardware modules for the LT operation. Nevertheless, these methods (FFT and FCT) may lead to high gains for transformer models that support long sequences [47], due to their \(\mathcal{O}(N\log N)\) complexity. We leave their hardware implementation to future work.
* Dynamic-span-based Convolution: The dynamic-span-based convolution (DSC) operation implements a 1D convolution over the input. Mathematically, \[\text{DSC}_{k}:=\mathbf{w}_{k}*\mathbf{H}\] where \(\mathbf{w}_{k}\) is the convolution kernel of length \(k\). To implement this operation in hardware, we convert the convolution operation into an equivalent matrix multiplication operation. In other words, we convert the convolutional kernel to a sparse matrix that we multiply with the input. We tweak the MAC lane module to incorporate this conversion.
Now that the mapper has converted the operations in the FlexiBERT 2.0 design space to hardware-implementable formats, the control block tiles, schedules, and assigns these
Fig. 4: Implementation of the DynaProp module. The wires for mask bits are in grey.
mapped operations to the accelerator for transformer evaluation.
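To illustrate the mapper's output, the sketch below expresses two of the operations above as plain matrix algebra: the SDP versus WMA attention scores, and the DFT linear transform via its Vandermonde matrix (checked against an FFT). All dimensions and random inputs are illustrative assumptions.

```python
import numpy as np

N_T, h, n = 16, 64, 4                    # sequence length, hidden dim, heads
rng = np.random.default_rng(3)
Q = rng.normal(size=(N_T, h // n))       # query matrix Q_i
K = rng.normal(size=(N_T, h // n))       # key matrix K_i
W_a = rng.normal(size=(h // n, h // n))  # trainable WMA weight W_a

SA_sdp = (Q @ K.T) / np.sqrt(h)          # SA_SDP
SA_wma = (Q @ W_a @ K.T) / np.sqrt(h)    # SA_WMA

# LT_DFT = V_DFT @ H, with V_DFT the DFT (Vandermonde) matrix of roots of unity.
H = rng.normal(size=(N_T, h))
omega = np.exp(-2j * np.pi / N_T)
V_dft = omega ** (np.arange(N_T)[:, None] * np.arange(N_T)[None, :])
assert np.allclose(V_dft @ H, np.fft.fft(H, axis=0))
```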
#### Iii-B3 Design Space
ELECTOR supports various accelerators in its design space. It allows adaptation of many design decisions in an ASIC-based accelerator. We describe these tunable hyperparameters next.
* Batch Tile Size: This is the size of a tile along the batch. Mathematically, a tile \(\mathbf{M}\in\mathbb{R}^{b\times x\times y}\) has the batch tile size \(b\).
* Spatial Tile Size: This is the size of a tile orthogonal to the batch dimension. In the above example, \(x=y\) is the spatial tile size (we assume square matrices for the tiles). A higher tile size (either \(b\) or \(x\)/\(y\)) would imply that each hardware module (MAC lane, softmax module, or layer-norm module) could execute more operations in parallel since the module evaluates a larger tile. This enables latency reduction at the cost of higher dynamic power.
* Activation Function: Transformer evaluation uses a nonlinear function following a feed-forward operation. We support two functions: ReLU and GeLU [42]. This is in accordance with the FlexiBERT 2.0 design space.
* Number of PEs: The number of PEs in the accelerator.
* Number of MAC Lanes per PE: The number of MAC lanes in each PE of the accelerator. We keep the number of MAC lanes constant for every PE.
* Number of MACs per Lane: The number of MAC units per MAC lane. Again, this is constant across all MAC units.
* Number of Softmax Modules per PE: The number of softmax modules in each PE. Every PE has only one layer-norm module. Therefore, the number of MAC lanes and softmax modules in each PE determines the net ratio of the number of MAC lanes, softmax modules, and layer-norm modules in an accelerator. One can tune this ratio based on the corresponding proportion of these operations in evaluating the selected transformer.
* Batch Size: The batch size for transformer evaluation. More compute resources and high bandwidth memory enable a larger batch, reducing evaluation latency.
* Activation and Gradient Buffer Size: The size of the activation/gradient buffer. Training requires more activation matrices than inference. It also has gradient matrices, requiring a larger buffer size.
* Weight Buffer Size: The size of the weight buffer. A larger transformer model requires a larger weight buffer.
* Mask Buffer Size: The size of the mask buffer that stores the binary masks for the zero-free format [16] used in the accelerators in ELECTOR.
Table II summarizes the possible design choices for accelerators in the ELECTOR design space. The possible memory configurations include the memory type (RRAM, DRAM, and HBM) along with the banks, ranks, and channels.
#### Iii-B4 Accelerator Embeddings
We now describe how we convert the selected accelerator configuration (a sample from Table II) to an embedding for surrogate modeling. We generate a 12-dimensional embedding (\(e\)) for a selected accelerator configuration as follows:
* \(e_{1}\) denotes the batch tile size, i.e., \(e_{1}=b\).
* \(e_{2}\) and \(e_{3}\) correspond to the spatial tile sizes, i.e., \(e_{2}=x\), \(e_{3}=y\). For the targeted design space, \(e_{2}=e_{3}\).
* \(e_{4}\) denotes the number of PEs.
* \(e_{5}\) denotes the number of MAC lanes per PE.
* \(e_{6}\) denotes the number of MACs per lane.
* \(e_{7}\) denotes the number of softmax modules in each PE.
* \(e_{8}\) denotes the selected batch size for model evaluation.
* \(e_{9}\), \(e_{10}\), and \(e_{11}\) denote the activation/gradient, weight, and mask buffer sizes, respectively, in MBs.
* \(e_{12}\) denotes the index of possible memory configurations in Table II, thus ranges from 1 to 11.
We use these generated embeddings to train the TransCODE surrogate model, which also outputs the subsequent query as an accelerator embedding.
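A minimal sketch of this encoding is shown below. The helper function, field names, and sample values are hypothetical (the sample values are chosen to be consistent with Table II); in particular, the ordering of the 11 memory configurations is an assumption, since the text above fixes only that \(e_{12}\) indexes them from 1 to 11.

```python
# Assumed ordering of the 11 [banks, ranks, channels] configurations of Table II.
MEMORY_CONFIGS = [
    ("RRAM", 16, 2, 2), ("RRAM", 8, 2, 4), ("RRAM", 4, 2, 8),
    ("RRAM", 2, 2, 16), ("RRAM", 32, 2, 1), ("RRAM", 1, 2, 32),
    ("DRAM", 16, 2, 2), ("DRAM", 8, 2, 4), ("DRAM", 32, 2, 1),
    ("DRAM", 16, 4, 1), ("HBM", 32, 1, 4),
]

def accelerator_embedding(cfg):
    return [
        cfg["batch_tile_size"],            # e_1 = b
        cfg["spatial_tile_size"],          # e_2 = x
        cfg["spatial_tile_size"],          # e_3 = y (= x for this design space)
        cfg["num_pes"],                    # e_4
        cfg["mac_lanes_per_pe"],           # e_5
        cfg["macs_per_lane"],              # e_6
        cfg["softmax_per_pe"],             # e_7
        cfg["batch_size"],                 # e_8
        cfg["act_grad_buffer_mb"],         # e_9
        cfg["weight_buffer_mb"],           # e_10
        cfg["mask_buffer_mb"],             # e_11
        MEMORY_CONFIGS.index(cfg["memory"]) + 1,   # e_12 in [1, 11]
    ]

cfg = dict(batch_tile_size=4, spatial_tile_size=32, num_pes=128,
           mac_lanes_per_pe=32, macs_per_lane=16, softmax_per_pe=4,
           batch_size=4, act_grad_buffer_mb=64, weight_buffer_mb=128,
           mask_buffer_mb=8, memory=("RRAM", 8, 2, 4))
print(accelerator_embedding(cfg))
```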
#### Iii-B5 Simulation Flow
Fig. 5 shows the simulation flow for evaluating an input accelerator configuration and tiled operations (obtained after mapping and tiling the input transformer) in ELECTOR. We first select the compute modules (including the tile size for parallel operation), buffer sizes, and main memory configuration. Next, we implement different hardware modules discussed in Section III-B1 at the register-transfer level (RTL) using SystemVerilog. We use Design Compiler [48] to synthesize the RTL design based on a
\begin{table}
\begin{tabular}{l l} \hline \hline
**Hyperparameter** & **Permissible values** \\ \hline
Batch tile size & 1, 4 \\
Spatial tile size & 8, 16, 32 \\
Activation function & ReLU, GeLU \\
\#PEs & 64, 128, 256, 512, 1024 \\
\#MAC lanes per PE & 8, 16, 32, 64, 128 \\
\#MACs per lane & 1, 16 \\
\#Softmax modules per PE & 2, 4, 8, 16, 32, 64 \\
Batch size & 4, 16, 32 \\
Act./grad. buffer size (MB) & 4, 8, 16, 32, 64 \\
Weight buffer size (MB) & 8, 16, 32, 64, 128 \\
Mask buffer size (MB) & 1, 2, 4, 8, 16 \\
Main memory configuration & RRAM: [16, 2, 2], [8, 2, 4], [4, 2, 8], [2, 2, 16], [32, 2, 1], [1, 2, 32] \\
[banks, ranks, channels] & DRAM: [16, 2, 2], [8, 2, 4], [32, 2, 1], [16, 4, 1] \\
 & HBM: [32, 1, 4] \\ \hline \hline
\end{tabular}
\end{table} TABLE II: Hyperparameters supported in the ELECTOR design space.
Fig. 5: Flow of simulation in ELECTOR.
14nm FinFET technology library [49]. Capo [50], an open-source floorplacer, performs floorplanning. FinCACTI [51], a cache modeling tool for deeply-scaled FinFETs, models the on-chip buffers. NVSim [52] and NVMain [53] model the main memory (either the off-chip DRAM or on-chip HBM/RRAM). ELECTOR then plugs the synthesized results into a Python-based cycle-accurate simulator. Finally, the control block segregates the tiled operations into compute and memory operations for separate execution pipelines [16].
### _TransCODE_
We use BOSHCODE to obtain the best-performing transformer-accelerator pair. BOSHCODE takes as input the accelerator and transformer embeddings and outputs the performance measure to be estimated. For the transformer embeddings, we use the embeddings used in FlexiBERT 2.0 [11] as opposed to the Transformer2vec encodings [17], since the former are fast and efficient to compute. This is critical for exploring the vast FlexiBERT 2.0 design space efficiently. For the accelerator embeddings, we use the embeddings from the accelerator configuration discussed in Section III-B4. We define the output performance measure as follows:
\[\text{Performance} =\alpha\times(1-\text{Latency})+\beta\times(1-\text{Area})\] \[+\gamma\times(1-\text{Dynamic Energy})\] \[+\delta\times(1-\text{Leakage Energy})+\epsilon\times\text{ Accuracy}\]
where the hyperparameters satisfy \(\alpha+\beta+\gamma+\delta+\epsilon=1\). We normalize the values of the individual performance measures with respect to their maximum values (hence, these values reside in the \([0,1]\) interval). Thus, for edge applications where the power envelope of devices is highly restricted, users can set the hyperparameters \(\gamma\) and \(\delta\) high. On the other hand, for server-side deployments, where accuracy is of utmost importance, one can set \(\epsilon\) high.
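A sketch of this objective as a plain function is given below. The default weights are those used later in Section IV-C; the metric values in the example call are illustrative and assumed to be pre-normalized to \([0,1]\).

```python
def performance(latency, area, dyn_energy, leak_energy, accuracy,
                alpha=0.1, beta=0.1, gamma=0.2, delta=0.1, epsilon=0.5):
    """Convex combination of normalized metrics; weights must sum to 1."""
    assert abs(alpha + beta + gamma + delta + epsilon - 1.0) < 1e-9
    return (alpha * (1 - latency) + beta * (1 - area)
            + gamma * (1 - dyn_energy) + delta * (1 - leak_energy)
            + epsilon * accuracy)

print(performance(latency=0.2, area=0.3, dyn_energy=0.25,
                  leak_energy=0.1, accuracy=0.8))
```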
TransCODE needs five performance values for the queried transformer-accelerator pair: latency, area, dynamic energy, leakage energy, and model accuracy. To obtain the first four performance values, we leverage the ELECTOR simulator. To obtain the transformer model accuracy, we employ the FlexiBERT 2.0 surrogate model, which outputs the GLUE score [54].
## IV Experimental Setup
In this section, we present the setup behind various experiments we performed, along with the baselines considered for comparison.
### _Evaluation Models and Datasets_
To test the efficacy of the DynaProp method, we evaluate transformer models in the FlexiBERT 2.0 design space. Table III shows the hyperparameter ranges supported by the FlexiBERT 2.0 design space [11]. Evidently, shallow models (e.g., with two encoder layers) incur lower latency relative to deep models (e.g., with 12 encoder layers) [11, 18]. Moreover, wide models (e.g., with 12 attention heads) require more compute resources to enable higher parallelization than narrow ones (e.g., with two attention heads). Further, different attention-head types have different latencies and energy consumption characteristics. Hence, there is a need for optimized dataflows when executing such heterogeneous architectures.
We test the models on representative natural language understanding tasks under the GLUE benchmark [54]. The included tasks are: SST-2 [55], MNLI [56], QQP, QNLI, MRPC [57], CoLA [58], STS-B [59], RTE [60], and WNLI [61]. The surrogate model trained on the FlexiBERT 2.0 design space [11] reports the overall GLUE score. We show the training set sizes and evaluation metrics in Table IV. The GLUE score represents average performance across all the tasks.
While running DynaProp, we target activation, weight, and gradient sparsity. Weight sparsity is static and depends on pruning performed during model pre-training or fine-tuning [62]. Activation and gradient sparsity change for every input sequence; we report their averages over the entire validation set.
### _The ELECTOR Design Space_
Table II summarizes the ELECTOR design space. Taking into account all the possible combinations presented in this table, ELECTOR supports 14,850,000 accelerators in its design space. This space includes accelerators meant for resource-constrained edge applications as well as those relevant to high-energy server settings that require high throughput. In addition, ELECTOR allows different memory configurations to support diverse user requirements, from high-bandwidth monolithic-3D RRAM to economic off-chip DRAM.
\begin{table}
\begin{tabular}{l l l} \hline \hline
Task & Training Size & Metric \\ \hline
SST-2 & 67K & Accuracy \\
MNLI & 393K & Accuracy \\
QQP & 364K & Accuracy \\
QNLI & 105K & Accuracy \\
MRPC & 3.7K & Accuracy \\
CoLA & 8.5K & Matthews Correlation \\
STS-B & 7K & Spearman Correlation \\
RTE & 2.5K & Accuracy \\
WNLI & 634 & Accuracy \\ \hline \hline
\end{tabular}
\end{table} TABLE IV: Data statistics of datasets in the GLUE benchmark.
\begin{table}
\begin{tabular}{l l} \hline \hline
**Design Element** & **Allowed Values** \\ \hline
Number of encoder layers (\(l\)) & \(\{2,4,6,8,10,12\}\) \\
Type of attention operation used (\(o^{j}\)) & \(\{\text{SA},\text{LT},\text{DSC}\}\) \\
Number of operation heads (\(n^{j}\)) & \(\{2,4,8,12\}\) \\
Hidden size (\(h^{j}\)) & \(\{128,256\}\) \\
Feed-forward dimension (\(f^{j}\)) & \(\{256,512,1024,2048,3072,4096\}\) \\
Number of feed-forward stacks & \(\{1,2,3\}\) \\
Operation parameters (\(p^{j}\)): & \\
if \(o^{j}=\) SA & Self-attention type: \{SDP, WMA\} \\
else if \(o^{j}=\) LT & Linear transform type: \{DFT, DCT\} \\
else if \(o^{j}=\) DSC & Convolution kernel size: \(\{5,9\}\) \\ \hline \hline
\end{tabular}
\end{table} TABLE III: Hyperparameter ranges in the FlexiBERT 2.0 design space [11]. Superscript (\(j\)) depicts the value for layer \(j\).
### _Co-design Pipeline_
To run BOSHCODE, we use the following parameter values to obtain the net performance measure: \(\alpha=0.1\), \(\beta=0.1\), \(\gamma=0.2\), \(\delta=0.1\), and \(\epsilon=0.5\) (see Section III-C). We leverage the network and hyperparameters used in EdgeTran [11] for co-design. The BOSHCODE model takes \(x_{\text{TXF}}\) and \(x_{\text{ACC}}\) as input and outputs the predicted performance measure. Here, \(x_{\text{TXF}}\) and \(x_{\text{ACC}}\) correspond to the FlexiBERT 2.0 and ELECTOR embeddings, respectively. BOSHCODE then leverages gradient-based optimization using backpropagation to the input (GOBI) [38] while freezing the model weights.
Not all input embeddings obtained using GOBI from the surrogate model are necessarily valid. For instance, \(x_{\text{ACC}}\) should be well-defined (e.g., we allow the batch tile size, \(b\), to only be 1 or 4). To add constraints to the optimization process, along with forcing the model to learn the performance only for valid input embeddings, we add a datapoint (\(x_{\text{TXF}}\), \(x_{\text{ACC}}\), \(P_{\text{MIN}}\)) to the dataset if either of the input embeddings is invalid or does not adhere to user-defined constraints. Another example of an input constraint could be that only transformers with up to six layers are allowed. \(P_{\text{MIN}}\) has a low value, set to \(-1\) for our experiments (whereas well-defined inputs result in \(P\) lying in the \([0,1]\) range).
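The following sketch illustrates this constraint-handling step. The specific validity checks and the dataset representation are illustrative; only the batch-tile-size rule, the six-layer example constraint, and \(P_{\text{MIN}}=-1\) come from the text above.

```python
P_MIN = -1.0   # penalty performance assigned to invalid queries

def is_valid_accelerator(e):
    return e[0] in (1, 4)              # batch tile size b must be 1 or 4

def is_valid_transformer(x):
    return x["num_layers"] <= 6        # example user-defined constraint

def maybe_penalize(x_txf, x_acc, dataset):
    """Append a penalized datapoint if either embedding violates a constraint."""
    if not (is_valid_transformer(x_txf) and is_valid_accelerator(x_acc)):
        dataset.append((x_txf, x_acc, P_MIN))
        return True
    return False

data = []
maybe_penalize({"num_layers": 8}, [4, 32, 32], data)
print(data)    # the eight-layer query is stored with performance P_MIN
```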
### _Evaluation Baselines_
We compare our experimental results with previously proposed transformer-accelerator pairs. The baseline accelerators include SpAtten and AccelTran, each hand-designed for a specific transformer architecture. SpAtten implements HW-NAS. For fair comparisons, we present an HW-NAS version of TransCODE in which we execute BOSHCODE while forcing the gradients to the accelerator to zero, i.e., we only search for transformer models run on a given edge platform. We also include co-design baselines implemented on a set of FPGAs [22].
## V Experimental Results
In this section, we present the experimental results and comparisons of the TransCODE framework with relevant baselines.
### _Dynamic Pruning of Transformer Weights, Activations, and Gradients_
Fig. 6 shows a breakdown of the memory required for three evaluation modes: inference, traditional training, and DynaProp training, for BERT-Tiny and BERT-Base. The evaluation mode does not affect the memory usage for the token and position embeddings. Moreover, inference and training require the same memory size for transformer model weights. However, training generates gradients and also more activation operations. Here, the \(\delta\)'s described in Table I define the gradient memory consumption. For BERT-Tiny and BERT-Base, training requires 2.8\(\times\) and 2.1\(\times\) more memory (for activations and gradients), respectively. However, the buffer can be smaller since it only stores the activations or gradients required by the PEs at a given time. Finally, we show the memory required for DynaProp training. We configure DynaProp to induce 50% sparsity in weights and activations (resulting in no loss in accuracy [16]) and 90% sparsity in the gradients (marginal accuracy loss, as shown below). DynaProp thus requires 1.5\(\times\) and 1.9\(\times\) smaller memory for BERT-Tiny and BERT-Base, respectively, while running training. This results in a smaller main memory, smaller buffers, and fewer MAC operations, thus leading to improved throughput.
To decouple and study the effects of pruning while running model inference and training, we execute DynaProp with two pruning thresholds: \(\tau_{\text{I}}\) and \(\tau_{\text{T}}\). It prunes activation and gradient matrices using \(\tau_{\text{I}}\) for the forward pass and \(\tau_{\text{T}}\) for the backward pass. It leverages movement pruning [62] for transformer weights [16]. Fig. 7(a) presents a contour plot showing the effect of these thresholds on accuracy for the BERT-Tiny model. As previously observed [15, 16], the accuracy first increases and then decreases as we increase \(\tau_{\text{I}}\). However, accuracy monotonically decreases on increasing \(\tau_{\text{T}}\). Fig. 7(b) shows the average between activation and gradient sparsities (or the net sparsity) when changing \(\tau_{\text{I}}\) and \(\tau_{\text{T}}\). The net sparsity increases as both \(\tau_{\text{I}}\) and \(\tau_{\text{T}}\) increase.
Fig. 6: Breakdown of the total memory required while running inference (solid bars), traditional training (hatched bars), and DynaProp training (dotted bars) for BERT-Tiny and BERT-Base.
Fig. 7: Effect of training (\(\tau_{\text{T}}\)) and inference (\(\tau_{\text{I}}\)) thresholds on (a) accuracy (%) of the SST-2 task and (b) averaged activation and gradient sparsity (%) in BERT-Tiny.
Fig. 8(a) shows accuracy plotted against \(\tau_{\text{T}}\). Unlike pruning during inference (using \(\tau_{\text{I}}\)), accuracy decreases under DynaProp as we increase the pruning threshold \(\tau_{\text{T}}\). However, this loss in accuracy is a result of high gradient sparsity, as shown in Fig. 8(b). This enables ELECTOR to skip many ineffectual MAC operations, reducing energy consumption and latency. We achieve 90% gradient sparsity when we set \(\tau_{\text{T}}\) to 0.0001 with an accuracy loss of only 0.4%. Fig. 9 shows a plot of accuracy against net sparsity. Again, we define _net sparsity_ as the average of the activation and gradient sparsities (weight sparsity remains constant at 50%). The plot shows that accuracy decreases with increasing net sparsity for BERT-Base. However, for BERT-Tiny, accuracy first increases and then decreases as we increase net sparsity.
Fig. 10 shows a plot of the normalized time for traditional and DynaProp training. Here, we evaluate the BERT-Tiny model on an Nvidia A100 GPU and an ELECTOR-supported accelerator (AccelTran-Edge [16] with added training support). Training takes 761.9\(\times\) longer than inference on a GPU. However, ELECTOR only requires 1.6\(\times\) more time. This is due to optimized scheduling, tiling of operation matrices, specialized hardware modules, and a dataflow curated for transformer workflows [16]. Since an off-the-shelf GPU does not automatically skip ineffectual computations (in other words, it is not _sparsity-aware_), DynaProp training hardly reduces evaluation time on the A100 GPU. However, due to the zero-free data format and specially designed hardware modules that skip ineffectual operations, ELECTOR reduces the training time by 2.3\(\times\). Thus, high activation, weight, and gradient sparsities enabled by DynaProp, along with ASIC-based acceleration, allow ELECTOR to substantially reduce evaluation times relative to a baseline GPU.
### _Design Space Exploration_
Fig. 11 shows convergence plots while executing co-design using BOSHCODE and various baselines. These baselines include random search, gradient-boosted regression trees (GBRT), Gaussian-process-based Bayesian optimization (GP-BO) that approximates performance through Gaussian process regression and optimizes it through the L-BFGS method [63], and random forest that fits various randomized decision trees over sub-samples of the dataset. As shown in Fig. 11, BOSHCODE achieves the highest performance. It yields the
Fig. 11: Co-design convergence plots for BOSHCODE and various baselines.
Fig. 8: Effect of changing training threshold (\(\tau_{\text{T}}\)) on (a) accuracy of the SST-2 task and (b) sparsity in gradient matrices.
Fig. 10: Evaluation time for traditional (hatched bars) and DynaProp (dotted bars) training normalized by the inference time (solid bars) on a GPU and an ELECTOR accelerator.
Fig. 9: Accuracy of the SST-2 task plotted against net sparsity in activation and gradient matrices.
optimal transformer-accelerator pair, FB*-ELECTOR* (FB is an acronym for FlexiBERT 2.0). Here, performance refers to the net measure found using a convex combination of accuracy, latency, area, dynamic energy, and leakage energy (Section III-C).
Table V summarizes the design choices of the converged co-design pair, i.e., FB*-ELECTOR*. To optimize latency, FB* uses only two encoder layers. However, FB* uses 12 attention heads in each encoder layer to avoid performance loss. Thus, BOSHCODE searches for a shallow but wide model to improve throughput while not incurring a performance penalty. The converged architecture is also highly heterogeneous, with diverse attention types in each layer, leveraging the modeling capabilities of each operation type. ELECTOR* has many PEs to parallelize the computation of 12 attention heads in each FB* layer. It also leverages monolithic-3D RRAM, which has the highest bandwidth and lowest energy consumption. The net area of this accelerator is 359.3 mm\({}^{2}\).
### _Performance Improvements_
We now compare the converged transformer-accelerator pairs obtained by the proposed approach with baseline pairs. Fig. 12 shows Pareto frontiers of GLUE scores with respect to hardware measures, i.e., latency, chip area, and energy consumption. We obtain GLUE scores from the surrogate model described in the EdgeTran framework [11]. We also plot state-of-the-art transformer-accelerator pairs for comparison. Our pair on the Pareto frontier with the same accuracy as BERT-Base evaluated on AccelTran-Server incurs 44.8\(\times\) lower latency. On the other hand, the pair on the Pareto frontier with the same latency as that of BERT-Tiny evaluated on AccelTran-Edge achieves a 14.5% higher GLUE score. Similarly, the pair with the same accuracy as that of BERT-Base evaluated on AccelTran-Server but on the Pareto frontier in Fig. 12(b) requires 34.5\(\times\) lower chip area. The one with the same chip area as that evaluated on AccelTran-Edge finds a transformer model on the frontier that achieves a 14.8% higher GLUE score. Finally, the pair with the same accuracy as that of BERT-Base incurs 1050\(\times\) lower energy consumption than that of the model evaluated on AccelTran-Server. In contrast, the _same-energy_ pair with BERT-Tiny evaluated on AccelTran-Edge, but on the Pareto frontier, achieves a 13.9% higher GLUE score.
Table VI compares the proposed TransCODE approach against various baselines. These baselines include HAT [18] and AutoTinyBERT [31], which implement HW-NAS on off-the-shelf edge-AI devices. We also add a co-design method implemented on a set of FPGAs [22] and another HW-NAS approach implemented on the SpAtten ASIC-based accelerator [14]. For fair comparisons, we also include a monolithic-3D-RRAM-based transformer accelerator, i.e., AccelTran-Server [16], that evaluates BERT-Base. Finally, the table presents an ablation study in which we implement HW-NAS (by forcing the gradients to the accelerator to zero) with AccelTran-Server [16] as the base accelerator. We also include performance values for FB*-ELECTOR* without DynaProp training implemented. Since the baselines do not support training, we report performance values for running inference with the proposed pairs. FB*-ELECTOR* outperforms the state-of-the-art pair, i.e., BERT-Base/AccelTran-Server, achieving 0.3% higher accuracy, 5.2\(\times\) lower latency, and 3.0\(\times\) lower energy consumption.
## VI Discussions and Future Work
In this section, we discuss the implications of the proposed work along with future work directions.
### _Multi-objective Optimization_
To perform co-design with the BOSHCODE framework, we model performance as a linear function of latency, energy consumption, chip area, and accuracy. This converts a multi-objective optimization problem into a single-objective optimization problem. We use this approach because BOSHCODE supports single-objective optimization only. The designer can decide the importance of each such objective when running the co-design pipeline. However, one could extend this approach to multi-objective optimization that maximizes a Pareto front's hypervolume [64, 65]. In this case, the designer would obtain a set of non-dominated solutions. We leave the application of multi-objective optimization methods to the FlexiBERT 2.0 and ELECTOR design spaces to future work.
### _In-memory and Reconfigurable Processors_
The proposed framework optimizes for a specific accelerator deployed in practice for edge-based training or inference.
\begin{table}
\begin{tabular}{l|l|l} \hline \hline
**Hyperparameter** & & **Value** \\ \hline
\multicolumn{3}{c}{**Transformer**} \\ \hline \hline
 & \(h^{1}\) & 256 \\
 & \#SA-SDP & 3 \\
 & \#SA-WMA & 1 \\
Encoder Layer 1 & \#LT-DFT & 1 \\
 & \#DSC-5 & 1 \\
 & \#DSC-9 & 1 \\
 & \#DSC-13 & 5 \\
 & FF & 1024, 1024, 512 \\ \hline \hline
 & \(h^{2}\) & 512 \\
 & \#LT-DFT & 4 \\
Encoder Layer 2 & \#DSC-5 & 5 \\
 & \#DSC-9 & 3 \\
 & FF & 256, 1024, 1024 \\ \hline \hline
\multicolumn{3}{c}{**Accelerator**} \\ \hline \hline
Batch tile size & & 4 \\
Spatial tile size & & 32 \\
Activation function & & GeLU \\
\#PEs & & 128 \\
\#MAC lanes per PE & & 32 \\
\#MACs per lane & & 16 \\
\#Softmax modules per PE & & 4 \\
Batch size & & 4 \\
Act./grad. buffer size (MB) & & 64 \\
Weight buffer size (MB) & & 128 \\
Mask buffer size (MB) & & 8 \\
Main memory configuration & & RRAM [8, 2, 4] \\ \hline \hline
\end{tabular}
\end{table} TABLE V: Design choices of the converged TransCODE pair.
However, any accelerator in the proposed ELECTOR design space can execute any transformer, although it would not be the best accelerator for that transformer (in terms of hardware performance). The hardware architectures are not reconfigurable at runtime (except for the pruning thresholds \(\tau_{\text{I}}\) and \(\tau_{\text{T}}\)). The architectures in the Sanger [66] design space are reconfigurable. However, Sanger is limited to only pruning a given model in the software space. Meanwhile, TransCODE leverages the FlexiBERT 2.0 design space to search for dense and small models. It also supports dynamic pruning of the model (using runtime-tunable pruning thresholds) to trade off accuracy with hardware performance, while also searching for the best-performing set of accelerator design decisions. Nevertheless, adding reconfigurability to accelerators in the ELECTOR design space would benefit dynamic workloads. One could also implement co-design for a group of transformers instead of just one. We leave this to future work.
## VII Conclusion
In this work, we presented TransCODE, a co-design framework for flexible and heterogeneous transformer models evaluated on diverse accelerator architectures. We proposed a novel, low-overhead dynamic inference-and-training scheme, DynaProp, that increases the sparsity of activations and gradients at runtime with controllable accuracy loss. DynaProp attains 90% sparsity in gradient matrices with negligible accuracy loss while improving training throughput by 2.3\(\times\) relative to traditional training. We further proposed a design space of diverse ASIC-based transformer accelerators: ELECTOR. It supports accelerators targeted at various scenarios, budgets, and user-defined constraints that support flexible and heterogeneous transformer inference and training. The best transformer-accelerator pair achieves 0.3% higher accuracy than the state-of-the-art pair while enabling 5.2\(\times\) lower latency and 3.0\(\times\) lower energy consumption.
## Acknowledgments
We performed the simulations presented in this article on computational resources managed and supported by Princeton Research Computing at Princeton University.
\begin{table}
\begin{tabular}{l c c c c c c c c} \hline \hline
**Method** & **Hardware-Aware** & **Flex. Layers** & **Co-design** & **ASIC-based Accelerator** & **Platform** & **GLUE Score (\%)** & **Latency (ms/seq)** & **Energy (J/seq)** \\ \hline
Devlin et al. [1] & ✗ & ✗ & ✗ & ✗ & A100 & 79.6 & 10.6 & 0.6 \\ \hline
\multicolumn{9}{c}{**Baselines**} \\ \hline
Wang et al. [18] & ✓ & ✗ & ✗ & ✗ & Raspberry Pi & 77.1 & 12,351.6 & 38.2 \\
Yin et al. [31] & ✗ & ✗ & ✗ & ✗ & Raspberry Pi & 78.3 & 10,427.7 & 20.7 \\
Peng et al. [22] & ✓ & ✗ & ✓ & ✗ & FPGA & 77.0 & 15.8 & 0.4 \\
Wang et al. [14] & ✓ & ✗ & ✗ & ✓ & SpAtten & 77.1 & 2.44 & 0.3 \\
Tuli et al. [16] & ✓ & ✗ & ✗ & ✓ & Acc |
2308.07232 | Lyapunov Exponents to Test General Relativity | Photon rings are key targets for near-future space-based very-long baseline
interferometry missions. The ratio of flux measured between successive
light-rings is characterized by the Lyapunov exponents of the corresponding
nearly-bound null geodesics. Therefore, understanding Lyapunov exponents in
this environment is of crucial importance to understanding black hole
observations in general, and in particular, they may offer a route for
constraining modified theories of gravity. While recent work has made
significant progress in describing these geodesics for Kerr, a theory-agnostic
description is complicated by the fact that Lyapunov exponents are
time-parameterization dependent, which necessitates care when comparing these
exponents in two different theories. In this work, we present a robust
numerical framework for computing and comparing the Lyapunov exponents of null
geodesics in Kerr with those in an arbitrary modified theory. We then present
results obtained from calculating the Lyapunov exponents for null geodesics in
two particular effective theories, scalar Gauss-Bonnet gravity and dynamical
Chern-Simons gravity. Using this framework, we determine accuracy lower-bounds
required before a very-long baseline interferometry observation can constrain
these theories. | Alexander Deich, Nicolás Yunes, Charles Gammie | 2023-08-14T16:16:03Z | http://arxiv.org/abs/2308.07232v2 | # Lyapunov Exponents to Test General Relativity
###### Abstract
Photon rings are key targets for near-future space-based very-long baseline interferometry missions. The ratio of flux measured between successive light-rings is characterized by the Lyapunov exponents of the corresponding nearly-bound null geodesics. Therefore, understanding Lyapunov exponents in this environment is of crucial importance to understanding black hole observations in general, and in particular, they may offer a route for constraining modified theories of gravity. While recent work has made significant progress in describing these geodesics for Kerr, a theory-agnostic description is complicated by the fact that Lyapunov exponents are time-parameterization dependent, which necessitates care when comparing these exponents in two different theories. In this work, we present a robust numerical framework for computing and comparing the Lyapunov exponents of null geodesics in Kerr with those in an arbitrary modified theory. We then present results obtained from calculating the Lyapunov exponents for null geodesics in two particular effective theories, scalar Gauss-Bonnet gravity and dynamical Chern-Simons gravity. Using this framework, we determine accuracy lower-bounds required before a very-long baseline interferometry observation can constrain these theories.
## I Introduction
The Event Horizon Telescope (EHT) and its space-based successors will soon offer unprecedented views of the structure of photon trajectories in the space very near the photon ring of a black hole [1; 2]. These trajectories can be characterized by the number of fractional orbits the photon completes before scattering to the detector, giving rise to the so-called "sub-rings". The ratio of flux received between adjacent sub-rings is determined by the Lyapunov exponent at the photon ring. These Lyapunov exponents are therefore an observable quantity, and offer a route to understanding the phase space of the environment in which they are observed. As measures of the stability of phase-space trajectories, Lyapunov exponents open a window onto the underlying physics of any given system. In particular, they provide clues about the underlying theory of gravity in play, and may therefore offer a way to use very-long baseline interferometry (VLBI) observations to place constraints on modified theories [3; 4].
Impressive strides have been made in understanding the structure of null orbits near black holes described by the Kerr metric [5; 4]. This includes full analytic evolution equations for null geodesics, which have in turn enabled a much deeper understanding of the structure of the photon ring that a VLBI instrument could detect. As a result, we now also have full analytic expressions for the Lyapunov exponents associated with the photon ring around a Kerr black hole. As Lyapunov exponents provide a method of testing the Kerr hypothesis, one may wish to also calculate Lyapunov exponents in different theories of gravity and investigate whether any difference from the Kerr result is indeed detectable.
Black holes in modified gravity, however, are not necessarily described by the Kerr metric. For example, two theories, scalar-Gauss-Bonnet (sGB) [6] and dynamical-Chern-Simons (dCS) gravity [7], predict distinct modifications to the spacetime of rotating black holes, and therefore, to the particular trajectories that geodesics follow in each theory. Such modifications will necessarily also affect the values that Lyapunov exponents take in these theories. Exact, analytic forms of these metrics remain unknown, leaving us with only approximate analytic forms written as power series in the black hole spin. Recent work, nonetheless, allows us to calculate these expansions to essentially arbitrary order in spin [8].
The calculation of Lyapunov exponents in GR is simplified by certain symmetries that are absent in modified gravity theories. This machinery relies on the existence of two Killing vectors (associated with stationarity and axisymmetry) and one Killing tensor (associated with a Carter constant) that render the geodesic equations separable and integrable [4; 9]. This is why Lyapunov exponents can be calculated analytically for the Kerr metric, simple parametric deformations of it [10], and the Manko-Novikov metric [11; 12]. Separability and integrability, however, are far from guaranteed in modified gravity black hole spacetimes. In particular, the nonexistence of a Carter-like constant for the spin-expanded sGB and dCS metrics beyond leading order in spin [13; 14] prevents us from casting the geodesic equations in terms of elliptic integrals, and thus, from calculating the Lyapunov exponents analytically.
Therefore, a clear need exists to calculate Lyapunov exponents numerically, accurately and in a theory-agnostic way; it is to this end that we present the current work. In
this paper, we introduce a framework to calculate Lyapunov exponents accurately for arbitrary, axisymmetric modifications to the Kerr metric, allowing us to predict the flux ratio between adjacent sub-rings, and in turn, to potentially test the Kerr hypothesis with VLBI images. We also develop a robust numerical method to check the calculation to high accuracy. We then implement this method on the two example theories described above, sGB and dCS gravity. We find the corrections are of \(\mathcal{O}(10^{-2})\) for geodesics around dCS black holes, and \(\mathcal{O}(10^{-0.8})\) for geodesics around sGB black holes. For any constraint to be realized, this method would also have to be used in conjunction with independent measurements of the black hole's spin and mass, as both of these also affect the size of the Lyapunov exponent.
The remainder of this paper is organized as follows. In Sec. II, we give a primer on symplectic geometry and provide a full derivation of Lyapunov exponents in general relativity. Then, in Sec. III, we describe the structure of the photon shell, photon ring, and its subrings in general relativity. We discuss in Sec. IV how Lyapunov exponents are calculated for the photon rings of Kerr black holes. In Sec. V, we briefly describe the modified theories of gravity under consideration and the black hole solutions permitted by them. Then, we cover how we calculate Lyapunov exponents for black holes in these theories, before presenting our results for dCS and sGB gravity. Finally, in Sec. VI we conclude and point to future research. Appendix A discusses the application of eigenvalue perturbation to Lyapunov exponents. Throughout this paper, we use the convention of \(G=1=c\).
## II Symplectic geometry and Lyapunov exponents
In this section, we go over the basics of symplectic geometry, which can be thought of as a theoretical prerequisite to formally understand Lyapunov exponents. We then develop the theory of Lyapunov exponents thoroughly, and briefly discuss several subtle details, which will be of importance for the present work.
### Phase Space Evolution and Symplectic Geometry
The study of Lyapunov exponents requires manipulating objects that are constructed out of both positions and momenta. We therefore must be careful: these constituents transform differently when contracted with a metric tensor, and so any object we construct from them will transform differently from more familiar objects. The primary object of focus will be a set, \(\mathbb{X}\), of components of coordinates, \(q^{\mu}\), _and_ momenta, \(p_{\mu}\), which live in an \(n-\)dimensional phase space. The set \(\mathbb{X}\) therefore has \(2n\) elements, and we index these with a gothic letter. Explicitly,
\[\mathbb{X}^{\mathfrak{a}} =q^{\mathfrak{a}},\qquad\mathfrak{a}\in[0,...,n-1], \tag{1}\] \[\mathbb{X}^{\mathfrak{a}} =p_{\mathfrak{a}-n},\qquad\mathfrak{a}\in[n,...,2n-1]. \tag{2}\]
The evolution of \(\mathbb{X}^{\mathfrak{a}}\) is governed by Hamilton's equations of motion, which in this context take the form
\[\dot{\mathbb{X}}^{\mathfrak{a}}=\Omega^{\mathfrak{a}\mathfrak{b}}\partial_{ \mathfrak{b}}H, \tag{3}\]
for a Hamiltonian \(H\), where \(\partial_{\mathfrak{b}}=\partial/\partial\mathbb{X}^{\mathfrak{b}}\), where the dot refers to a derivative with respect to proper time and where we have introduced the _symplectic matrix_, \(\Omega^{\mathfrak{a}\mathfrak{b}}\). This matrix is a \(2n\times 2n\) matrix whose elements take the form [15]
\[\Omega^{\mathfrak{a}\mathfrak{b}}=\begin{pmatrix}0_{n}&\mathbb{I}_{n}\\ -\mathbb{I}_{n}&0_{n}\end{pmatrix}, \tag{4}\]
where \(\mathbb{I}_{n}\) and \(0_{n}\) are the \(n\times n\) identity and null matrices, respectively.
### Analytic derivation of Lyapunov exponents
Lyapunov exponents are motivated by asking how a given trajectory responds to small perturbations in its initial phase space conditions. Given some phase space trajectory \(\mathbb{X}^{\mathfrak{a}}(t)\) whose evolution is governed by a Hamiltonian \(H\), as in Eq. (3), we can look at how a perturbation to \(\mathbb{X}^{\mathfrak{a}}(t)=\mathbb{X}^{\mathfrak{a}(0)}(t)+\delta\mathbb{X}^{\mathfrak{a}}(t)\) evolves by linearizing \(\dot{\mathbb{X}}^{\mathfrak{a}}(t)\) about small \(\delta\mathbb{X}^{\mathfrak{a}}(t)\):
\[\delta\dot{\mathbb{X}}^{\mathfrak{a}}(t) =\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}(t)\delta\mathbb{X}^{\mathfrak{b}}(t), \tag{5}\] \[\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}(t) \equiv\partial_{\mathfrak{b}}\dot{\mathbb{X}}^{\mathfrak{a}}\] (6) \[=\Omega^{\mathfrak{a}\mathfrak{c}}\partial_{\mathfrak{b}}\partial_{\mathfrak{c}}H. \tag{7}\]
In other words, \(\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}(t)\) describes how quickly perturbations grow in phase space. Furthermore, as this matrix is the product of two spatial derivatives, \(\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}(t)\) also encodes information about the curvature of the phase space, which is helpful for building intuition about the stability of \(\mathbb{X}^{\mathfrak{a}}(t)\). Schematically, unstable trajectories will lie on hilltops in appropriate phase space slicings (as depicted by the blue surface in Fig. 1).
The evolution of \(\delta\mathbb{X}^{\mathfrak{a}}(t)\) is then
\[\delta\mathbb{X}^{\mathfrak{a}}(t)=\mathbb{L}^{\mathfrak{a}}{}_{\mathfrak{b}} (t)\delta\mathbb{X}^{\mathfrak{b}}(0), \tag{8}\]
where
\[\mathbb{L}^{\mathfrak{a}}{}_{\mathfrak{b}}(t)\equiv\exp\biggl{(}\int_{t_{0}}^{ t}\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}(t^{\prime})dt^{\prime}\biggr{)} \tag{9}\]
is known as the _linear stability matrix_[15, 16], whose initial condition is \(\mathbb{L}^{\mathfrak{a}}{}_{\mathfrak{b}}(0)=\delta^{\mathfrak{a}}{}_{ \mathfrak{b}}\). The linear stability matrix controls the evolution of vectors that lie in the space that is tangent to the phase space. As \(\delta\mathbb{X}^{\mathfrak{a}}\) evolves, \(\mathbb{L}^{\mathfrak{a}}{}_{\mathfrak{b}}(t)\) will simultaneously coerce it to align with its largest eigenvector (the orange arrows in Fig. 1, read left-to-right),
while also scaling it exponentially by the corresponding eigenvalue [17].
The _Lyapunov spectrum_ is then given by the eigenvalues of \(\mathbb{L}^{\mathfrak{a}}{}_{\mathfrak{b}}\). The largest of these eigenvalues (the largest element in the Lyapunov spectrum) is the _principal Lyapunov exponent_, \(\lambda\). This quantity can be calculated directly by
\[\lambda\equiv\lim_{t\to\infty}\frac{1}{t}\ln\left(\frac{\mathbb{L}^{\mathfrak{ a}}{}_{\mathfrak{a}}(t)}{\mathbb{L}^{\mathfrak{b}}{}_{\mathfrak{b}}(0)} \right), \tag{10}\]
If \(\lambda\) is a positive number, then \(\mathbb{X}^{\mathfrak{a}}(t)\) is an unstable trajectory and the perturbation will grow without bound. If \(\lambda\) is an imaginary number, then the trajectory is stable to small perturbations [15; 16; 17].
It is worth pointing out a few details that will be useful later on. First, finding Lyapunov exponents analytically is rare. Calculating \(\mathbb{L}^{\mathfrak{a}}{}_{\mathfrak{b}}(t)\) requires knowing the path of a full trajectory, \(\mathbb{X}^{\mathfrak{a}}(t)\), which is not, in general, possible for many Hamiltonians. Second, at no point did we concern ourselves with tracking the evolution of the separation between two initially close trajectories, which is how the subject of Lyapunov exponents is sometimes presented. This approach can add unnecessary complexity; working in the linearized regime prevents this potential pitfall. Third, the fact that the Lyapunov exponent is an eigenvalue suggests that it is well-suited to perturbative calculations, as we show in Appendix A. While perturbative techniques are not necessary to complete the work we will present in this paper, it is worth noting that the techniques presented here are quite general. As a result, one could generate Lyapunov exponents for any conceivable modification to the Hamiltonian for null geodesic motion.
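Since analytic Lyapunov exponents are rare, a numerical pipeline built directly on Eqs. (3)-(10) is useful. The sketch below evolves a phase-space point together with the linear stability matrix \(\mathbb{L}\) and extracts the principal exponent from its growth rate. As a sanity check it uses an inverted harmonic oscillator, \(H=p^{2}/2-q^{2}/2\), whose exact exponent is 1; this toy Hamiltonian is our assumption here, standing in for the geodesic Hamiltonians treated later.

```python
import numpy as np
from scipy.integrate import solve_ivp

n = 1                                    # one degree of freedom, 2n phase space
Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])   # symplectic matrix, Eq. (4)

def hessian(X):
    # d^2 H / dX dX for the toy Hamiltonian H = p^2/2 - q^2/2 (constant here).
    return np.diag([-1.0, 1.0])

def rhs(t, y):
    X, L = y[:2 * n], y[2 * n:].reshape(2 * n, 2 * n)
    J = Omega @ hessian(X)               # Eq. (7)
    Xdot = J @ X                         # for this quadratic H, Omega dH = J X
    Ldot = J @ L                         # variational equation behind Eq. (9)
    return np.concatenate([Xdot, Ldot.ravel()])

y0 = np.concatenate([[1e-3, 0.0], np.eye(2 * n).ravel()])
T = 20.0
sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
L_T = sol.y[2 * n:, -1].reshape(2 * n, 2 * n)
lam = np.log(np.linalg.norm(L_T, 2)) / T  # growth rate of the fastest direction
print(f"numerical lambda = {lam:.4f} (exact: 1.0)")
```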
## III Photon trajectories around Kerr black holes
In this section, we briefly review the structure of null trajectories close to a Kerr black hole. Doing so will help establish intuition when we tackle the corresponding problem in modified gravity. For more complete treatments, see [3; 4; 5; 18].
### Geodesic Equations and Analytic Solutions
In Boyer-Lindquist coordinates, a given trajectory on a Kerr background with Kerr spin parameter \(a\) and mass \(M\) follows a four-momentum \(p^{\mu}\) that satisfies
\[\frac{\Sigma}{E}p^{r} =\pm_{r}\sqrt{\mathcal{R}(r)}, \tag{11a}\] \[\frac{\Sigma}{E}p^{\theta} =\pm_{\theta}\sqrt{\Theta(\theta)},\] (11b) \[\frac{\Sigma}{E}p^{\phi} =\frac{a}{\Delta}\left(r^{2}+a^{2}-a\ell\right)+\frac{\ell}{\sin^ {2}\theta}-a,\] (11c) \[\frac{\Sigma}{E}p^{t} =\frac{r^{2}+a^{2}}{\Delta}\left(r^{2}+a^{2}-a\ell\right)+a\left( \ell-a\sin^{2}\theta\right), \tag{11d}\]
where \(E=-p_{t}\) and \(\ell=p_{\phi}/E\) are the conserved energy and energy-rescaled angular momentum, respectively, and \(\Sigma=r^{2}+a^{2}\cos^{2}\theta\) and \(\Delta=r^{2}-2Mr+a^{2}\) are functions that appear in the metric. The functions \(\mathcal{R}(r)\) and \(\Theta(\theta)\) are the radial and polar potentials, respectively, which are defined by
\[\begin{split}\mathcal{R}(r)&=\left(r^{2}+a^{2}-a \ell\right)^{2}-\Delta\left[\eta+(\ell-a)^{2}\right],\\ \Theta(\theta)&=\eta+a^{2}\cos^{2}\theta-\ell^{2} \cot^{2}\theta,\end{split} \tag{12}\]
Figure 1: A representation of a photon ring orbit (red line) around a Schwarzschild black hole (black sphere), with the effective potential (blue surface) visualized. Two initially orthogonal vectors (\(\delta\mathbb{X}^{\mathfrak{a}}(0)\), orange arrows furthest on the left) tangent to the phase space evolve under the linear stability matrix. As they evolve counter-clockwise when looking at the black hole from above, they are rotated to align with the biggest eigenvector of the stability matrix (pink arrows). The log of the rate at which the magnitude of \(\delta\mathbb{X}^{\mathfrak{a}}(t)\) grows is the principal Lyapunov exponent. In this cartoon, the vectors are drawn with respect to a vertical \(p_{r}\) axis, while the height of the effective potential has units of energy. The direction pointing radially out from the black hole is the Schwarzschild \(r\) coordinate.
where \(\eta\) is the energy-rescaled Carter constant. The turning points, or maximum and minimum values of \(\theta\) of the trajectory's evolution, \(\theta_{\pm}\), are given by
\[\theta_{\pm}=\arccos\left(\mp\sqrt{u_{+}}\right), \tag{13}\]
where
\[u_{\pm}=\triangle_{\theta}\pm\sqrt{\triangle_{\theta}^{2}+\frac{\eta}{a^{2}}}, \quad\triangle_{\theta}=\frac{1}{2}\left(1-\frac{\eta+\ell^{2}}{a^{2}}\right). \tag{14}\]
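As a quick illustration of Eqs. (13)-(14), the snippet below evaluates the turning points for one set of conserved quantities; the values of \((a,\ell,\eta)\) are arbitrary choices of ours, not taken from any specific photon-shell orbit.

```python
import numpy as np

# Angular turning points from Eqs. (13)-(14); (a, ell, eta) are
# illustrative values, with G = c = M = 1.
a, ell, eta = 0.9, 2.0, 25.0

tri = 0.5 * (1.0 - (eta + ell**2) / a**2)     # triangle_theta, Eq. (14)
root = np.sqrt(tri**2 + eta / a**2)
u_plus, u_minus = tri + root, tri - root

theta_minus = np.arccos(+np.sqrt(u_plus))     # Eq. (13), upper sign
theta_plus  = np.arccos(-np.sqrt(u_plus))     # Eq. (13), lower sign
print(np.degrees([theta_minus, theta_plus]))  # ~[21.5, 158.5] degrees
```

Note that \(u_{+}\le 1\) is required for real turning points, while \(u_{-}<0\) (as in this example) is what makes \(\sqrt{-u_{-}}\) real in the expressions that follow.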
As explored in impressive detail in [4; 5], these evolution equations permit full analytic solutions in terms of elliptic functions. Of particular use to our current effort, recasting them in integral form will allow the unambiguous definition of a complete orbit of the black hole. We can do this by integrating along the path of a particle traveling from its source point with coordinates \((t_{s},r_{s},\theta_{s},\phi_{s})\) to its observed point at \((t_{o},r_{o},\theta_{o},\phi_{o})\). While the complete set of integrals is not needed in this paper, it will be useful to cover the \(\theta\) case. The curious reader is directed to [4; 5] for the full story.
Let us then define the quantity
\[G_{\theta}=\fint_{\theta_{s}}^{\theta_{o}}\frac{\mathrm{d}\theta}{\pm_{\theta} \sqrt{\Theta(\theta)}}, \tag{15}\]
where the slash indicates that the integral should be taken along the trajectory. Equation (15) can be expressed in terms of elliptic integrals as
\[G_{\theta}=\frac{1}{a\sqrt{-u_{-}}}\left[2mK\pm F_{s}\mp F_{o}\right], \tag{16}\]
where \(m\) is the number of angular turning points encountered in the trajectory,
\[F_{i}=F\left(\arcsin\left(\frac{\cos\theta_{i}}{\sqrt{u_{+}}}\right)\middle| \frac{u_{+}}{u_{-}}\right) \tag{17}\]
is the elliptic integral of the first kind (with \(i\in\{s,o\}\)), and
\[K=F\left(\frac{\pi}{2}\middle|\frac{u_{+}}{u_{-}}\right). \tag{18}\]
Let us now make a few observations. First, the quantity \(G_{\theta}\) defined in Eq. (15) is equivalent to the so-called "Mino time" [19], which can be used to decouple Eq. (11). Second, this notation enables the unambiguous comparison of trajectories that are closed with those that are not, by defining one complete orbit to be a complete oscillation in \(\theta\). We therefore adopt the convention of [4] and declare one full orbit to be the traversal from one turning point in \(\theta\), as defined in Eq. (13), back to itself again.
This, coupled with the definition of \(G_{\theta}\) above, allows us to define the number of orbits, \(n\), as
\[n=\frac{G_{\theta}}{G_{\theta}^{1}}, \tag{19}\]
where the normalization factor \(G_{\theta}^{1}\) is the time required to complete one orbit:
\[G_{\theta}^{1}\equiv 2\fint_{\theta_{-}}^{\theta_{+}}\frac{\mathrm{d}\theta}{ \sqrt{\Theta(\theta)}}=\frac{4K}{a\sqrt{-u_{-}}}. \tag{20}\]
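A sketch of how Eq. (20) can be evaluated numerically is given below, reusing the illustrative \((a,\ell,\eta)\) from before. Rather than relying on a particular library convention for elliptic integrals with negative parameter, \(K=F(\pi/2\,|\,u_{+}/u_{-})\) is computed directly from its defining integral.

```python
import numpy as np
from scipy.integrate import quad

# One-orbit normalization G_theta^1 of Eq. (20); (a, ell, eta) are the
# same illustrative values used above (G = c = M = 1).
a, ell, eta = 0.9, 2.0, 25.0
tri = 0.5 * (1.0 - (eta + ell**2) / a**2)
root = np.sqrt(tri**2 + eta / a**2)
u_p, u_m = tri + root, tri - root

m = u_p / u_m   # elliptic parameter of Eq. (18); negative here
K, _ = quad(lambda t: 1.0 / np.sqrt(1.0 - m * np.sin(t)**2), 0.0, np.pi/2)

G1 = 4.0 * K / (a * np.sqrt(-u_m))
print(f"G_theta^1 = {G1:.6f}")
```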
### The Photon Shell, Rings, and Subrings
The region near a black hole where there exist trapped null trajectories is called the _photon shell_. In this region, a given trajectory has no radial evolution, but can oscillate in \(\theta\) inside \((\theta_{-},\theta_{+})\), which depends on the chosen value of \(r\). For a Kerr black hole, the limiting values of the photon orbit radii are given by
\[r_{\pm}=2M\left[1+\cos\left(\frac{2}{3}\arccos\left(\pm\frac{a}{M}\right) \right)\right], \tag{21}\]
where \(M\) and \(a\) are the mass and spin of the black hole, respectively. Therefore, a photon on the shell has a fixed \(r-\)coordinate value \(\tilde{r}\), with \(r_{-}\leq\tilde{r}\leq r_{+}\). If \(\tilde{r}=r_{\pm}\), then the trajectory is confined to orbit on the equator. In Fig. 2, we show the values of \(\theta_{-}\) and \(\theta_{+}\) as a function of the value of \(\tilde{r}\) across a range of spin values for a Kerr black hole.
In the event that a photon trajectory is not exactly bound--i.e. its \(r\)-coordinate value is close to, but not exactly, the value of \(\tilde{r}\) that would place it in a bound orbit in the photon shell--then the photon will either scatter or plunge into the black hole event horizon. Ultimately, it is these almost-bound photons that are of interest to us,
Figure 2: The maximum and minimum values of \(\theta\) of a given photon shell orbit are determined entirely by its value of \(\tilde{r}\). Pictured here are the values of \(\theta_{+}\) for trajectories around a Kerr black hole across a range of dimensionless spin values. The orbits are symmetric about the equatorial plane, so the \(\theta_{-}\) values are identical to the above, mirrored across \(\theta=\pi/2\).
because the bound photons are never seen by observers far from the black hole. These almost-bound photons are the ones responsible for creating an image on the image plane of the observer's screen, which we call the _photon ring_. Now, consider what is required of a given set of light rays emanating from some point outside the black hole, if we require that they all eventually impact the detector. While there are infinitely many such light rays, the different paths they can take are uniquely indexed by the number of half-orbits they complete before hitting the detector, \(n\). This is visualized in Fig. 3 for the \(n=0,1,2\) trajectories (the blue, green, and orange lines). As a result, we can unambiguously refer to a single geodesic that connects the detector to a given point outside the black hole solely by \(n\).
## IV Lyapunov exponents on the photon shell
Here we cover the specifics of Lyapunov exponents for photon ring trajectories in axisymmetric spacetimes. First, we discuss how Lyapunov exponents are calculated for these geodesics. Then we cover some practical considerations for their observation by comparing the flux ratios from adjacent sub-rings.
### Calculating Lyapunov Exponents in General Axisymmetric Spacetimes
Photon shell trajectories are inherently unstable, owing to their position at the top of a local maximum in the effective potential, as shown by the yellow edge in both panels of Fig. 4. From this figure, we observe that in both dCS (left panel) and sGB (right panel) theories (see Sec. V.1 for their respective black hole metrics), the position of the photon ring shifts with the size of the coupling parameter. Thus, we expect them to possess positive Lyapunov exponents. Here we show one method of analytically calculating these Lyapunov exponents for null geodesics in the general class of axisymmetric spacetimes, to which the Kerr metric belongs. For more details, see [3; 20].
In order to simplify the problem, we first restrict ourselves to just those trajectories that are bound to the equator (i.e. \(\theta=\pi/2\)). We make this simplification for two reasons. First, this restricts the phase space to only 2 dimensions, in \(r\) and \(p_{r}\), so that the form of \(\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}\) is as straightforward as possible. Second, these circular geodesics are, for general axisymmetric spacetimes, the only geodesics for which closed-form solutions exist. We will exploit the same simplification when we extend this calculation to modified theories of gravity.
The Lagrangian is
\[\mathcal{L}=\frac{1}{2}g_{\mu\nu}\frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds}, \tag{22}\]
where \(g_{\mu\nu}\) is the spacetime metric, \(s\) is an affine parameter and \(x(s)\) is the trajectory's four-position [22]. From this, the canonical conjugate momenta are
\[p_{\mu}\equiv\frac{\partial\mathcal{L}}{\partial(dx^{\mu}/ds)}=g_{\mu\nu}\frac {dx^{\nu}}{ds}. \tag{23}\]
Then, performing a Legendre transformation, the Hamiltonian reads
\[H\equiv p_{\mu}\frac{dx^{\mu}}{ds}-\mathcal{L}=\frac{1}{2}g^{\mu\nu}p_{\mu}p_{ \nu}=0, \tag{24}\]
where the final equality follows from the fact that we are dealing with null trajectories. Considering only equatorial geodesics, the Hamiltonian above becomes
\[H=V_{\rm eff}+\frac{p_{r}^{2}}{g_{rr}}, \tag{25}\]
where \(V_{\rm eff}\) is the effective potential, given in terms of metric components by
\[V_{\rm eff}(r)=\frac{L^{2}g_{tt}(r)+E^{2}g_{\phi\phi}(r)+2ELg_{t\phi}(r)}{g_{ tt}(r)g_{\phi\phi}(r)-g_{t\phi}(r)^{2}}, \tag{26}\]
where \(E\) and \(L\) are the conserved energy and angular momentum of the trajectory, respectively. In the absence of analytic expressions for the trajectories, the location and angular momentum of the equatorial photon orbit are found by setting \(\theta=\pi/2\) and solving the system
\[V_{\rm eff}=0=\partial_{r}V_{\rm eff} \tag{27}\]
simultaneously for \(\tilde{r}\) and \(L\). Doing so will usually result in two real solutions, corresponding to \(r_{-}\) and \(r_{+}\).
Figure 3: Geodesics connecting some point (red dot) outside the black hole (black dot) to the observer are indexed by the number of half-orbits of the black hole they complete. Pictured are the \(n=0,1,2\) equatorial trajectories (blue, green, and orange, respectively) for a Kerr black hole with spin \(a=0.8M\). Observe how the number of half-orbits can be used to label geodesics.
Then, setting \(p_{r}=0\), as all photon shell trajectories require, the evolution Jacobian (5), with proper time parameterization, takes the form
\[\mathbb{J}^{\mathsf{a}}_{\ \mathsf{b}}=\begin{pmatrix}0&-g_{rr}^{-1}\\ \partial_{r}^{2}V_{\text{eff}}&0\end{pmatrix}, \tag{28}\]
whose eigenvalues, evaluated at \(r=\tilde{r}\), are the Lyapunov exponents
\[\lambda_{\text{p}}=\pm\sqrt{-\frac{\partial_{r}^{2}V_{\text{eff}}}{g_{rr}}} \bigg{|}_{r=\tilde{r}}, \tag{29}\]
which we note is dependent upon the full set of spacetime parameters, including spin.
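As a concrete illustration of Eqs. (26)-(29), the sketch below solves the system (27) for the equatorial Kerr photon orbit and evaluates \(\lambda_{\rm p}\) with finite differences. The equatorial Boyer-Lindquist metric components are standard, but their use here, the finite-difference derivatives, and the initial guess (which should select the prograde branch) are our own illustrative choices.

```python
import numpy as np
from scipy.optimize import fsolve

# Equatorial Kerr metric components in Boyer-Lindquist coordinates
# (theta = pi/2, G = c = 1), and the effective potential of Eq. (26).
M, a, E = 1.0, 0.9, 1.0

def gtt(r): return -(1.0 - 2.0*M/r)
def gtp(r): return -2.0*M*a/r
def gpp(r): return r**2 + a**2 + 2.0*M*a**2/r
def grr(r): return r**2 / (r**2 - 2.0*M*r + a**2)

def Veff(r, L):
    return (L**2*gtt(r) + E**2*gpp(r) + 2.0*E*L*gtp(r)) / \
           (gtt(r)*gpp(r) - gtp(r)**2)

def dVdr(r, L, h=1e-6):
    return (Veff(r + h, L) - Veff(r - h, L)) / (2.0*h)

# Eq. (27): V_eff = 0 = dV_eff/dr; the guess picks the prograde branch.
r_ph, L_ph = fsolve(lambda x: [Veff(*x), dVdr(*x)], x0=[2.0, 2.5])

# Eq. (29) with a finite-difference second derivative at r = r_tilde.
h = 1e-4
d2V = (Veff(r_ph + h, L_ph) - 2.0*Veff(r_ph, L_ph)
       + Veff(r_ph - h, L_ph)) / h**2
lam_p = np.sqrt(-d2V / grr(r_ph))
print(f"r_tilde = {r_ph:.4f} M,  L = {L_ph:.4f},  lambda_p = {lam_p:.4f}")
```

Because the photon orbit sits at a local maximum of \(V_{\rm eff}\), the second derivative is negative and Eq. (29) returns a real, positive exponent.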
Let us pause here for a moment to make clear an important point. Lyapunov exponents are not invariant under changes of the time parametrization used. Rather, it is the ratio between the Lyapunov timescale \(\tau_{\lambda}=1/\lambda_{\text{p}}\) and a relevant timescale that is invariant and may be compared between systems. In other words, the numerical value of a given trajectory's Lyapunov exponent will change if calculated in, say, proper time parametrization versus Schwarzschild time. But if we also calculate a timescale in the same parametrization, \(\tau^{\prime}\), then the ratio \(\tau^{\prime}/\tau_{\lambda}=\lambda_{\text{p}}\tau^{\prime}\) will be invariant under reparametrization. In the case currently being considered, the relevant timescale is the time required for a half-orbit, \(G_{\theta}^{1}\), as given by Eq. (20). Therefore, it is not sufficient to simply calculate Eq. (29); we must also calculate the time required for one complete orbit.
### Detectability of Sub-Rings and the Measurement of the Lyapunov Exponent by Future VLBIs
By tracking the fractional number of orbits completed by a trajectory as it evolves, we have actually created for ourselves another affine coordinate with which we can parametrize our trajectories. This is a handy parametrization for calculations involving the detection of the photon ring, as the fractional number of orbits determines the specific sub-ring to which the photon belongs. Thought of this way, we can consider how a perturbation in \(r\) evolves over some number \(n\) of orbits,
\[\delta r(n)=\exp(\lambda_{\text{M}}n)\delta r(0), \tag{30}\]
where \(\lambda_{\text{M}}\) is the principal Lyapunov exponent parametrized by the fractional number of orbits.
In general then, let us define \(\lambda_{\text{M}}\) as follows
\[\lambda_{\text{M}}=\lambda_{\text{p}}\tau_{\text{M}}\,, \tag{31}\]
where \(\lambda_{\text{p}}\) is calculated via Eq. (29) and \(\tau_{\text{M}}\) corresponds to the time it takes to complete a half-orbit for a null geodesic around a rotating black hole. For a Kerr black hole, this time is simply given in closed form by \(\tau_{\text{M,Kerr}}=G_{\theta}^{1}\) in Eq. (20). In this case then, one can show [3] that \(\lambda_{\text{M,Kerr}}\) takes the analytic form
\[\lambda_{\text{M,Kerr}}=\sqrt{\frac{\partial_{r}^{2}V_{\text{eff}}}{2g_{rr}}}G _{\theta}^{1}\bigg{|}_{r=\tilde{r}}=\frac{4\tilde{r}\sqrt{\tilde{\chi}}}{a \sqrt{-\tilde{u}_{-}}}\tilde{K}, \tag{32}\]
where
\[\tilde{\chi}\equiv 1-\frac{M\Delta(\tilde{r})}{\tilde{r}(\tilde{r}-M)^{2}} \tag{33}\]
and where each quantity with a tilde is understood to be evaluated on the photon shell. When defined like this, the principal Lyapunov exponent parameterized by the fractional number of orbits, \(\lambda_{\text{M}}\), implicitly gains the factor of \(G_{\theta}^{1}\) mentioned earlier.
As a trajectory starts closer and closer to an exactly bound orbit, and \(n\) grows large, the image on the detector screen approaches a closed curve known as the _critical curve_ [3].
Figure 4: Null orbit effective potentials in dCS gravity (left) and sGB gravity (right) for a range of values of \(\zeta_{\text{q}}\), with \(\chi=0.9\) and \(L\approx-6.832\). The photon orbit in Kerr, denoted by the dashed red lines, corresponds to the peak of the \(\zeta_{\text{q}}=0\) curve. Notice that the location of this point depends on the coupling parameter. Data from [21].
Giving this curve dimensionless detector-screen angular coordinates \((\rho_{c},\phi_{c})\), it can then be shown that any photons that impact at \(\delta\rho\) near \(\rho_{c}\) must be funneled into an exponentially narrowing annulus of width
\[\frac{\delta\rho}{\rho_{c}}\approx\exp(-\lambda_{\rm M}n). \tag{34}\]
This means that each sub-ring is sequentially nested according to the number of half-orbits completed en route. When one integrates Eq. (34) over a solid angle to determine the flux generated by each sub-ring, one finds [3; 4]
\[\frac{F^{n+1}}{F^{n}}\approx\exp(-\lambda_{\rm M}) \tag{35}\]
for the ratio in flux received between adjacent subrings.
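For orientation, a standard benchmark is the Schwarzschild limit, where the half-orbit exponent is \(\lambda_{\rm M}=\pi\): Eq. (35) then gives \(F^{n+1}/F^{n}\approx e^{-\pi}\approx 0.043\), i.e., each successive subring is only about \(4\%\) as bright as the one before it.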
## V Calculating Lyapunov Exponents in Modified Gravity
Here we introduce the field equations of two varieties of modified gravity theories, dCS and sGB gravity, and discuss the rotating black hole solutions in each. This allows us to then explain our method of calculating the half-orbit timescale and, consequently, the Lyapunov exponents of photon ring null orbits in these theories.
### Rotating Black Holes in sGB and dCS gravity
There are two main ways of motivating quadratic gravity. First, if we assume the Einstein-Hilbert (EH) action is simply the leading-order term in a more general effective field theory, we can modify the standard EH action by adding terms that are expansions in curvature. Second, these quadratic theories also occur naturally from low-energy expansions of certain string theories [7; 23]. These theories have an action that reads
\[S=S_{\rm EH}+S_{\rm mat}+S_{\vartheta}+S_{RR}, \tag{36}\]
where \(S_{\rm EH}\) is the EH action, \(S_{\rm mat}\) is the matter action, \(S_{\vartheta}\) is an action for a dynamical scalar or pseudo-scalar field, and \(S_{RR}\) couples a quadratic curvature term to the field. The only distinction between the two quadratic theories we are concerned with is in this final term. The EH action reads
\[S_{\rm EH}=\kappa\int d^{4}x\,\sqrt{-g}\,R\,, \tag{37}\]
where \(R=g^{\alpha\beta}g^{\rho\sigma}R_{\rho\alpha\sigma\beta}\) is the Ricci scalar, \(R_{\rho\alpha\sigma\beta}\) is the Riemann tensor, \(\kappa=(16\pi)^{-1}\), and \(g\) is the determinant of the metric tensor. Meanwhile, the scalar or pseudo-scalar field action is
\[S_{\vartheta}=-\frac{1}{2}\int d^{4}x\sqrt{-g}\left[\nabla_{\mu}\vartheta \nabla^{\mu}\vartheta+2V(\vartheta)\right], \tag{38}\]
where the potential of the scalar field is \(V(\vartheta)\). In order to preserve the shift symmetry of \(\vartheta\to\vartheta+\text{const.}\), we set \(V(\vartheta)=0\) to specify a massless theory, which is a feature often found in effective string theories at low energy, including in both dCS and sGB gravity [24].
For sGB and dCS gravities specifically, we can prescribe a curvature-coupling action that generically encompasses both theories. Let us then define
\[S_{RR}=\int d^{4}x\sqrt{|g|}\left\{\alpha_{\rm sGB}\vartheta_{ \rm sGB}RR+\alpha_{\rm dCS}\vartheta_{\rm dCS}R\tilde{R}\right\}, \tag{39}\]
where
\[RR=R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^{2} \tag{40}\]
is the so-called Gauss-Bonnet density, and where
\[R\tilde{R}\equiv{}^{*}R^{\alpha}{}_{\beta}{}^{\gamma\delta}R^{ \beta}{}_{\alpha\gamma\delta}\,, \tag{41}\]
is the Pontryagin density with \({}^{*}R^{\alpha}{}_{\beta}{}^{\gamma\delta}=\frac{1}{2}\epsilon^{\gamma \delta\rho\lambda}R^{\alpha}{}_{\beta\rho\lambda}\) the dual of the Riemann tensor. With this in hand, sGB gravity is defined by setting \(\alpha_{\rm dCS}=0\) in Eq. (39) and \(\vartheta=\vartheta_{\rm sGB}\) in Eq. (38), while dCS gravity is defined by setting \(\alpha_{\rm sGB}=0\) in Eq. (39) and \(\vartheta=\vartheta_{\rm dCS}\) in Eq. (38). The parameters \(\alpha_{\rm sGB}\) and \(\alpha_{\rm dCS}\) determine the coupling parameter strength of the particular theory being described, and they have dimensions of length squared in geometric units. In this paper, we report results after non-dimensionalizing the coupling parameter of either theory, so we define
\[\zeta_{\rm q}\equiv\frac{\alpha_{\rm q}^{2}}{\kappa M^{4}}, \tag{42}\]
where q is dCS or sGB, depending on the theory in question. We choose the maximum values of \(\zeta_{\rm q}\) that we will explore as the largest values that still satisfy the small-coupling approximation, i.e., those that generate corrections to the metric components that remain small relative to the Kerr metric components everywhere outside the black hole. Explicitly, we demand that \(|\zeta_{\rm q}H_{i}|<0.5\), where the \(H_{i}\) are the functions that perturb the metric components (see e.g. [21]). Doing so results in maximum values of \(\zeta_{\rm dCS}^{\rm max}=0.15\) and \(\zeta_{\rm sGB}^{\rm max}=0.5\).
Our two example theories have different motivations and phenomenology, and, therefore, different constraints. sGB gravity is motivated by a certain low-energy limit of string theory [6]. Uniquely among the two theories, sGB gravity modifies the spacetime regardless of whether it is spherically symmetric (i.e., regardless of whether the black hole is spinning or not), and it induces dipole scalar radiation in black hole binaries. This is why gravitational wave observations have already constrained \(\alpha_{\rm sGB}^{1/2}\leq 5.6\) km at 90% confidence [25].
On the other hand, dCS gravity finds motivation from a few sources, including loop quantum gravity [7; 26], the standard model gravitational anomaly [7; 27], and
investigations in string theory1 [29, 30]. Unlike the sGB case, dCS gravity does not modify spherically symmetric spacetimes, and thus it leaves non-rotating black holes unchanged; nor does it activate dipole scalar radiation in binaries. For this reason, dCS gravity has not yet been constrained with gravitational waves alone; rather, the most stringent bounds come from neutron star multi-messenger observations, which lead to \(\alpha_{\rm dCS}^{1/2}\leq 8.5\) km within a 90% confidence interval [31].
Footnote 1: A combination of both theories, sGB and dCS, with two scalar fields, can also arise in the effective action of heterotic string theory [28] if, somehow, the dilaton is not stabilized.
The field equations of the theories described above are so complicated that analytic exact solutions that represent black holes with arbitrary rotation have not yet been found. Instead, we are forced to work with metrics that are simultaneous expansions in small coupling and dimensionless spin, \(\chi\equiv a/M\), where \(a=S/M\) and \(S\) is the (magnitude of the) black hole spin angular momentum. While the expansion in the coupling \(\zeta_{\rm q}\) is always kept to linear order (because these theories are effective), recent work [8, 28] has made it possible to achieve arbitrary orders in the spin expansion. To label these expansions, we adopt the notation \(\mathcal{O}(\zeta_{\rm q}\chi^{m})\) for a metric that is expanded to \(m^{\rm th}\) order in dimensionless spin. Written in a generic form, the metric components are expanded to \(\mathcal{O}(\zeta_{\rm q}\chi^{m})\) in the following way:
\[g_{\mu\nu}=g_{\mu\nu}^{\rm Kerr}+\zeta^{\prime}\sum_{\ell=0}^{\ell=m}\left( \chi^{\prime}\right)^{\ell}\delta g_{\mu\nu}^{(\ell)}, \tag{43}\]
where \(\zeta^{\prime}\) and \(\chi^{\prime}\) are bookkeeping parameters labeling the expansion in coupling and expansion in spin, respectively, and \(g_{\mu\nu}^{\rm Kerr}\) is the Kerr metric. For the work presented here, we find that the quantities of interest achieve sufficient accuracy at \(\mathcal{O}(\zeta_{\rm q}\chi^{14})\) for both dCS and sGB theories. Finally, we note that the metrics available for sGB are not valid for \(\chi>0.8\) [21], which is the upper bound for spin in the work presented here.
Finally, we note that if the constraints from gravitational wave and neutron star observations are saturated, then \(\zeta_{\rm q}\), for supermassive black holes, takes values significantly smaller than the maximum values we consider here (if we take Sgr A*, then we find \(\zeta_{\rm sGB}=1.2\times 10^{-24}\) and \(\zeta_{\rm dCS}=2.8\times 10^{-24}\), respectively). However, these values for \(\alpha_{\rm q}^{1/2}\) were derived from objects with masses smaller by many orders of magnitude than the supermassive black holes which would be the targets of any VLBI observation [1, 31]. Further, the method we present here is, in principle, an independent test. Therefore, any deviations that are found could motivate a higher-order EFT which screens larger modifications at small scales.
### Generalized Equatorial Half-Orbit Timescales
We can apply the machinery developed in Sec. IV.1 to also calculate \(\tau\) for equatorial photon orbits in generalized spacetimes, such as those provided by sGB and dCS. To solve for \(\tau\), we still look for eigenvalue solutions to Eq. (5) as before, but with two key differences. First, we restrict our phase space to motion in \(\theta\) only. Second, in order to retrieve the harmonic solutions, we demand that the eigenvalues be imaginary. Thus, this eigenvalue, labeled \(\omega_{\theta}\), represents the frequency of oscillations in \(\theta\).
When this is carried through, one finds for \(\omega_{\theta}\) an expression very similar to that found for \(\lambda_{\rm p}\) earlier (Eq. 29):
\[\omega_{\theta}=\sqrt{\frac{\partial_{\theta}^{2}V_{\rm eff}}{g_{\theta\theta }}}\Bigg{|}_{r=\bar{r}}. \tag{44}\]
Figure 5: The log fractional difference in the calculated Lyapunov exponent for equatorial null geodesics around dCS (left) and sGB (right) black holes across a grid of dimensionless spin and coupling parameter.
Then, finding the half-orbit timescale via \(\tau=\pi/\omega_{\theta}\), we have all we need to calculate \(\lambda_{\rm p}\) in arbitrary spacetimes. All together, our expression for \(\lambda_{\rm M}\) reads
\[\lambda_{\rm M}=\pi\sqrt{-\frac{g_{\theta\theta}}{g_{rr}}\frac{\partial_{r}^{2}V _{\rm eff}}{\partial_{\theta}^{2}V_{\rm eff}}}\bigg{|}_{r=\tilde{r}}. \tag{45}\]
We were able to check the validity of this method by developing a numerical approach, which is described in Appendix B. This approach calculates \(\tau\) for arbitrary values of \(\theta\) and yields answers accurate to within \(10^{-6}\).
With this in hand, we can now compute the principal Lyapunov exponent parameterized in terms of half orbits through Eq. (45) and compare this to its Kerr value in Eq. (32). When we do this, we find the largest fractional difference to be of \(\mathcal{O}(10^{-2})\) for geodesics around dCS black holes (left panel of Fig. 5), and \(\mathcal{O}(10^{-0.8})\) for geodesics around sGB black holes (right panel of Fig. 5).
These results allow us to make several observations. First, dCS and sGB modifications to Lyapunov exponents increase with the coupling constant and spin. Therefore, the best target of future VLBI observations to constrain these theories would be rapidly spinning black holes. Both of these theories, however, are already constrained by other astrophysical observations (see Sec. V.1), and thus, the value of the coupling constant cannot be increased without bound.
Notice that the results presented here should not be understood as necessarily making a direct claim about the detectability of modified theories with this method. Such a statement would depend on, among other factors, an accurate and independent measurement of the BH dimensionless spin. This is because the modified gravity correction to \(\lambda_{\rm p}\) depends on both \(\zeta_{\rm q}\) and \(a/M\). In the absence of such data, our results should instead be read as a lower bound on the accuracy of the BH dimensionless spin measurement required before one can entertain constraints on a modified theory.
## VI Conclusions
We have here constructed a framework to calculate Lyapunov exponents in a theory-agnostic way that allows the direct computation of the flux ratio between adjacent sub-rings in VLBI images. We then applied this framework to two theories of modified gravity, dCS and sGB gravity, and calculated the log fractional difference between these theories and the Lyapunov exponent in a Kerr spacetime. We find the corrections are of \(\mathcal{O}(10^{-2})\) for geodesics around dCS black holes, and \(\mathcal{O}(10^{-0.8})\) for geodesics around sGB black holes.
However, our results do not necessarily imply that either theory can be constrained, due to a number of confounding factors present in real-world observations. First, a measurement of \(\lambda_{\rm M}\) would need to be made to within the accuracy of the log fractional differences we present. This would require, among other things, that the uncertainties due to the astrophysical environment be smaller than any (most probably very small) deviation from GR. Second, any statement about constraints on the theory would require disentangling the effect on \(\lambda_{\rm M}\) due to \(\zeta_{\rm q}\) from that due to the black hole spin, most probably necessitating an independent measurement of the latter quantity.
We expect these results to be of interest in two ways. First, they inform our knowledge of the behavior of photons very near black holes, and will be useful when designing observing campaigns of these environments. Second, the procedure developed here has applications in theory more broadly: the work we present is very general, being applicable to any conceivable modification to the metric or even to the Hamiltonian itself, especially given the prospect of calculating these effects perturbatively.
In the future, there are at least two avenues that would extend this work. One is to continue hunting for a means of measuring the differences in Lyapunov exponents due to quadratic gravity theories. An approach that exploits the auto-correlations of the photon ring [32; 33] could possibly amplify the effect, perhaps rendering more detectable the small differences calculated here. Another route would be to explore different terms that one could add to the Hamiltonian to change the size and shape of the effective potential. Such terms could include, for instance, the presence of a third body, or yet further modifications to the theory of gravity itself.
###### Acknowledgements.
We would like to thank Alex Lupsasca for several discussions, including the suggestion to use auto-correlations in future work. We also would like to thank Leo Stein for discussions about symplectic structure. NY and AD acknowledge support from the Simons Foundation through Award No. 896696, and the NSF through awards PHY-2207650 and WoU-2007936.
## Appendix A Perturbation of Local Lyapunov Exponents in Curved Backgrounds
The perturbation of eigenvalues of symplectic matrices follows closely the standard story from linear algebra. The only point of possible confusion is that there is no well-defined means of "raising" an index of a symplectic object as one may be accustomed to in differential geometry. Instead, upper- and lower-indexed eigenvectors are simply those vectors that solve the right- and left-eigenvalue problems, respectively. That is to say, for a symplectic matrix \(\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}\), vectors \(\mathbb{R}^{\mathfrak{a}}\) and \(\mathbb{L}_{\mathfrak{b}}\) are said to be its eigenvectors if they satisfy
\[\mathbb{L}_{\mathfrak{a}}\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}=\lambda \mathbb{L}_{\mathfrak{b}}, \tag{46}\] \[\mathbb{J}^{\mathfrak{a}}{}_{\mathfrak{b}}\mathbb{R}^{\mathfrak{b }}=\lambda\mathbb{R}^{\mathfrak{a}}, \tag{47}\]
where \(\lambda\) is the corresponding eigenvalue. Now, consider the following series expansions,
\[\mathbb{J}^{\mathsf{a}}{}_{\mathsf{b}} =\mathbb{J}^{(0)\mathsf{a}}{}_{\mathsf{b}}+\alpha\mathbb{J}^{(1) \mathsf{a}}{}_{\mathsf{b}}+..., \tag{11}\] \[\mathbb{R}^{\mathsf{a}} =\mathbb{R}^{(0)\mathsf{a}}+\alpha\mathbb{R}^{(1)\mathsf{a}}+...,\] (12) \[\mathbb{L}_{\mathsf{a}} =\mathbb{L}_{(0)\mathsf{a}}+\alpha\mathbb{L}_{(1)\mathsf{a}}+...,\] (13) \[\lambda =\lambda^{(0)}+\alpha\lambda^{(1)}+..., \tag{14}\]
where the parenthetical exponents denote expansion order and \(\alpha\) is a bookkeeping parameter. We can now follow through with the standard eigenvalue perturbation. We start by right-multiplying Eq. (11) by Eq. (12) to find
\[\left(\mathbb{J}^{(0)\mathsf{a}}{}_{\mathsf{b}}+\alpha\mathbb{J}^ {(1)\mathsf{a}}{}_{\mathsf{b}}+...\right)\left(\mathbb{R}^{(0)\mathsf{b}}+ \alpha\mathbb{R}^{(1)\mathsf{b}}+...\right) \tag{15}\] \[= \left(\lambda^{(0)}+\alpha\lambda^{(1)}+...\right)\left( \mathbb{R}^{(0)\mathsf{a}}+\alpha\mathbb{R}^{(1)\mathsf{a}}+...\right)\,. \tag{16}\]
Then, keeping only the linear-order terms, we find
\[\mathbb{J}^{(1)\mathsf{a}}{}_{\mathsf{b}}\mathbb{R}^{(0)\mathsf{b}}+\mathbb{ J}^{(0)\mathsf{a}}{}_{\mathsf{b}}\mathbb{R}^{(1)\mathsf{b}}=\lambda^{(1)} \mathbb{R}^{(0)\mathsf{a}}+\lambda^{(0)}\mathbb{R}^{(1)\mathsf{a}}. \tag{17}\]
Now, we can left-multiply both sides of Eq. (17) by \(\mathbb{L}_{(0)\mathsf{a}}\) to find
\[\mathbb{L}_{(0)\mathsf{a}}\mathbb{J}^{(1)\mathsf{a}}{}_{\mathsf{b }}\mathbb{R}^{(0)\mathsf{b}}+\mathbb{L}_{(0)\mathsf{a}}\mathbb{J}^{(0) \mathsf{a}}\mathbb{R}^{(1)\mathsf{b}}= \tag{18}\] \[\lambda^{(1)}\mathbb{L}_{(0)\mathsf{a}}\mathbb{R}^{(0)\mathsf{a}} +\lambda^{(0)}\mathbb{L}_{(0)\mathsf{a}}\mathbb{R}^{(1)\mathsf{a}}, \tag{19}\]
and use Eq. (19) and (10) to cancel the second term on the left-hand side with the second term on the right-hand side, leaving us with
\[\mathbb{L}_{(0)\mathsf{a}}\,\mathbb{J}^{(1)\mathsf{a}}{}_{\mathsf{b }}\,\mathbb{R}^{(0)\mathsf{b}}=\lambda^{(1)}, \tag{20}\]
after normalization, \(\mathbb{L}_{(0)\mathsf{a}}\mathbb{R}^{(0)\mathsf{a}}=1\) (so we drop that term from the final expression). The above expression gives us the familiar result that the first-order perturbation to the eigenvalue is given by contracting the unperturbed eigenvectors onto the first-order matrix perturbation. The fact that this machinery translates perfectly to the symplectic context means that Lyapunov exponents can be calculated for any imaginable perturbation to the Hamiltonian.
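The first-order formula above is easy to verify numerically; a minimal sketch is given below. The matrices are arbitrary illustrative choices of ours, not a physical Jacobian.

```python
import numpy as np

# Check lambda^(1) = L_(0) J^(1) R_(0) against a direct eigenvalue
# computation for a small perturbation of an arbitrary 2x2 matrix.
J0 = np.array([[2.0, 1.0], [0.5, -1.0]])
J1 = np.array([[0.3, -0.2], [0.1, 0.4]])
eps = 1e-6

w, R = np.linalg.eig(J0)            # right eigenvectors
wl, Lv = np.linalg.eig(J0.T)        # left eigenvectors (right of J0^T)
i = np.argmax(w.real)               # track the principal eigenvalue
j = np.argmin(np.abs(wl - w[i]))    # its matching left eigenvector
r, l = R[:, i], Lv[:, j]
l = l / (l @ r)                     # enforce the normalization L_(0)a R^(0)a = 1

lam1_pred = l @ J1 @ r              # first-order perturbation theory
w_eps = np.linalg.eigvals(J0 + eps * J1)
k = np.argmin(np.abs(w_eps - w[i]))
lam1_num = (w_eps[k] - w[i]) / eps  # numerical derivative

print(lam1_pred, lam1_num)          # should agree to O(eps)
```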
## Appendix B Numerically Calculated Half-Orbit Timescales
We develop here a method to calculate half-orbit timescales numerically to high precision. First, we use the geodesic evolution equations in second-order form, which after a trivial first-order reduction are
\[\frac{dx^{\mu}}{ds}=\frac{\partial H}{\partial p_{\mu}},\qquad\frac{dp_{\mu}} {ds}=-\frac{\partial H}{\partial x^{\mu}}. \tag{21}\]
Now, we must follow what is suggested by Eq. (19) and calculate the time required for a single half-orbit. In order to do so, one option would be to simply track a trajectory's numerical evolution in \(\theta\), starting at \(\theta_{+}\), and solve numerically for the proper time it takes for the trajectory to reach \(\theta_{-}\). However, this method is needlessly computationally intensive, as it would require using something like a bisection algorithm to accurately determine when the trajectory has crossed \(\theta_{-}\).
A better route is to instead invert the \(\dot{\theta}\) evolution equation and integrate between the angular turning points to find the proper time interval we are after. To reduce the accumulated error further, we can exploit the system's symmetry about the equator and integrate only to the equator, multiplying by 2 to find the half-orbit proper time interval. For a given null trajectory with four-velocity \(dx^{\mu}/ds\) in Boyer-Lindquist-like coordinates, we may then define
\[\tau_{\text{M},\mathsf{q}}\equiv 2\int_{\theta_{+}}^{\pi/2}\frac{ds}{d\theta}d\theta \tag{22}\]
as the time required to complete a half-orbit2. The problem then reduces to finding \(d\theta/ds\) as a function of \(\theta\). We therefore use the chain rule to rewrite the relevant Hamilton equations as
Footnote 2: This is the same as the “Mino time” for the trajectory, hence the subscript. For more on the Mino time, see [3; 4; 5].
\[\frac{dx^{a}}{d\theta} =\frac{dx^{a}}{ds}\frac{ds}{d\theta}=\left(\frac{\partial H}{ \partial p_{a}}\right)\left(\frac{\partial H}{\partial p_{\theta}}\right)^{-1},\] \[\frac{dp_{c}}{d\theta} =\frac{dp_{c}}{ds}\frac{ds}{d\theta}=-\left(\frac{\partial H}{ \partial x^{c}}\right)\left(\frac{\partial H}{\partial p_{\theta}}\right)^{-1}, \tag{23}\]
where \(a\in\{r,\phi\}\) and \(c\in\{r,\theta\}\), so that we can integrate the full, coupled system simultaneously with respect to \(\theta\).
Before we initialize these trajectories, we must deal with an issue that occurs because we are restricting ourselves to equatorial orbits, which ostensibly experience no evolution in \(\theta\) (or in other words, \(\theta_{+}=\pi/2\), and so the definition of \(\tau_{\text{M},\mathsf{q}}\) in Eq. (22) would be meaningless). We can deal with this by finding \(\tau_{\text{M},\mathsf{q}}\) for a series of trajectories whose initial \(\theta(0)\) value is shifted from the equator by an amount \(\delta\theta\), and tracking the value that \(\tau_{\text{M},\mathsf{q}}\) approaches as \(\delta\theta\to 0\), before using a Richardson extrapolation [34] to calculate the final value.
The above method also determines the order in which the evolution equations are initialized. Setting \(\theta=\pi/2+\delta\theta\) as described above, we can demand that the trajectory begins at the top of its \(\theta\) evolution by setting \(p_{\theta}=0\) (this therefore sets \(\theta_{+}=\pi/2+\delta\theta\) and \(\theta_{-}=\pi/2-\delta\theta\)). Then, we can again solve the system in Eq. (27) (evaluated at the aforementioned \(\theta\)) to find the initial values of \(r\) and \(L\). This allows us to finally integrate the equations in (23) from \(\theta_{+}\) to \(\pi/2\), and thereby extract a value for \(\tau_{\text{M},\mathsf{q}}\) through Eq. (22). We used a Runge-Kutta integrator of order 8 due to Dormand and Prince [35], with relative tolerance
set to \(10^{-13}\) and absolute tolerance set to \(10^{-14}\). We found that using three trajectories with \(\delta\theta=10^{-6},10^{-7}\), and \(10^{-8}\) was sufficient to achieve an accuracy in \(\tau_{\rm M}\) of \(10^{-6}\). This was verified by computing \(\tau_{\rm M,Kerr}\) numerically (i.e. \(\tau_{\rm M}\) for null geodesics around a Kerr black hole) and comparing the result to the analytically exact value of Eq. (20).
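The \(\delta\theta\to 0\) extrapolation step can be sketched as follows. Here the leading error is assumed to scale as \(\delta\theta^{2}\) (our assumption; [34] covers the general scheme), and the input values are synthetic placeholders standing in for the three DOP853 integrations.

```python
import numpy as np

# Richardson extrapolation of tau(delta_theta) to delta_theta -> 0,
# assuming a leading error term proportional to delta_theta^2.
# The taus below are synthetic placeholders for the integrator output.
deltas = np.array([1e-6, 1e-7, 1e-8])
tau0_true, c = 3.141593, 0.25              # made-up "exact" values
taus = tau0_true + c * deltas**2

# One Richardson step between successive levels (grid ratio 10):
# tau0 ~ (100*tau(d/10) - tau(d)) / 99 cancels the O(d^2) term exactly.
for t_coarse, t_fine in zip(taus[:-1], taus[1:]):
    print((100.0 * t_fine - t_coarse) / 99.0)
```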
|
2307.08784 | The first example of a simple $2-(81,6,2)$ design | We give the very first example of a simple $2-(81,6,2)$ design. Its points
are the elements of the elementary abelian group of order 81 and each block is
the union of two parallel lines of the 4-dimensional geometry over the field of
order 3. Hence it is also additive. | Anamari Nakic | 2023-07-17T18:54:58Z | http://arxiv.org/abs/2307.08784v1 | # The first example of a simple \(2-(81,6,2)\) design
###### Abstract
We give the very first example of a simple \(2-(81,6,2)\) design. Its points are the elements of the elementary abelian group of order 81 and each block is the union of two parallel lines of the 4-dimensional geometry over the field of order 3. Hence it is also additive.
**Keywords:** simple design; additive design; automorphism group; group action.
+
Footnote †: Faculty of Electrical Engineering and Computing, University at Zagreb, Croatia, email: [email protected]
## 1 Introduction
A \(t-(v,k,\lambda)\) design is a pair \((V,\mathcal{B})\) where \(V\) is a set of \(v\)_points_ and \(\mathcal{B}\) is a collection of \(k\)-subsets (_blocks_) of \(V\) with the property that every \(t\)-subset of \(V\) is contained in exactly \(\lambda\) blocks. A design is said to be _simple_ if it does not have repeated blocks, i.e., if the multiset of blocks is actually a set. We recall, in particular, that in a \(2-(v,k,\lambda)\) design the number of blocks containing any fixed point is \(\frac{\lambda(v-1)}{k-1}\) and that the number of blocks is \(\frac{\lambda v(v-1)}{k(k-1)}\). Thus \(\lambda(v-1)\equiv 0\) (mod \(k-1\)) and \(\lambda v(v-1)\equiv 0\) (mod \(k(k-1)\)) are trivial necessary conditions (called _divisibility conditions_) for the existence of a \(2-(v,k,\lambda)\) design.
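For the parameters considered in this note, \(v=81\), \(k=6\), \(\lambda=2\), both conditions hold: each point lies on \(\frac{2\cdot 80}{5}=32\) blocks and the design has \(\frac{2\cdot 81\cdot 80}{6\cdot 5}=432\) blocks.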
Two designs \((V,\mathcal{B})\) and \((V^{\prime},\mathcal{B}^{\prime})\) are _isomorphic_ if there exists a bijection between \(V\) and \(V^{\prime}\) turning \(\mathcal{B}\) into \(\mathcal{B}^{\prime}\). An automorphism group of a design \((V,\mathcal{B})\) is a group \(A\) of permutations on \(V\) leaving \(\mathcal{B}\) invariant. It is convenient to have designs with a "rich" automorphism group \(A\) since they can be stored more efficiently in terms of memory space; we do not need to give the entire collection of blocks but only a system of representatives for the \(A\)-orbits on it.
For general background on the topic we refer to [1].
The authors of [4] developed an interesting theory about _additive_ designs. These are designs \((V,\mathcal{B})\) for which, up to isomorphism, \(V\) is a subset of a suitable additive group \(G\) and the elements of any block sum up to zero. We propose to speak of a _strictly_ additive design if \(V\)_coincides_ with the set of elements of \(G\).
Among the strictly additive 2-designs with \(\lambda=1\) we obviously have the point-line \(2-(q^{n},q,1)\) designs associated with AG\((n,q)\), the \(n\)-dimensional affine geometry over the field of order \(q\). As far as we are aware, no other example of a strictly additive \(2-(v,k,1)\) design is known. In the attempt to find strictly additive 2-designs with a higher \(\lambda\) it is natural to look for 2-\((q^{n},mq,\lambda)\) designs whose points are those of AG\((n,q)\) and whose blocks are union of \(m\) pairwise disjoint lines. Indeed such a design would be strictly additive automatically. It is quite evident that the set of all possible unions of two parallel lines of an affine plane AG\((2,q)\) is a \(2-(q^{2},2q,2q-1)\) design. For \(q\) odd, this design has been characterized in [3] as the unique design with these parameters and the property that the intersection of any two distinct blocks has size 0, 4 or \(q\).
In this note we explicitly give a simple \(2-(81,6,2)\) design which is strictly additive. Indeed its points are those of AG\((4,3)\) and each block is the union of two parallel lines. As far as we are aware, this is the very first example of a simple design with these parameters.
## 2 The design
The values of \(v\) for which a \(2-(v,6,\lambda)\) design exists have been completely determined for \(\lambda>1\) by Hanani [7]. For \(\lambda=1\) the existence is still uncertain for twenty-nine values of \(v\), the third of which is \(81\). Thus it is unknown whether a \(2-(81,6,1)\) design exists. According to the tables of 2-designs of small order by Mathon and Rosa [8] there is only one known \(2-(81,6,2)\) design. Looking at its explicit description in Examples 2.7 on page 237 of [6], any expert reader will recognize that it has sixteen pairs of repeated blocks. Thus it cannot be isomorphic to the simple and strictly additive design with the same parameters that we explicitly construct below.
Let \(G=\mathbb{Z}_{3}^{4}\) be the elementary abelian group of order 81. Given two elements \(x\in G\setminus\{0\}\) and \(y\in G\setminus\{0,x,2x\}\), let \(B(x,y)\) be the union of the two parallel lines \(\{0,x,2x\}\) and \(\{y,x+y,2x+y\}\) of AG\((4,3)\). The \(G\)-stabilizer of \(B(x,y)\) (under the natural action of \(G\) on itself) is clearly given by \(\{0,x,2x\}\), hence its \(G\)-orbit has size \(|G|/3=27\). Also, from the divisibility conditions we infer that a \(2-(81,6,2)\) design has \(432=27\cdot 16\) blocks. Thus it makes sense to look for a design with these parameters whose collection of blocks is the union of the \(G\)-orbits of 16 suitable blocks of the form \(B(x,y)\). Such a 16-tuple of blocks has been found with a computer and it is given below.
\[\{(0,0,0,0),(0,0,0,1),(0,0,0,2),(0,1,0,0),(0,1,0,1),(0,1,0,2)\}\]
\[\{(0,0,0,0),(0,0,1,1),(0,0,2,2),(2,1,0,0),(2,1,1,1),(2,1,2,2)\}\]
\[\{(0,0,0,0),(0,1,1,1),(0,2,2,2),(0,0,1,0),(0,1,2,1),(0,2,0,2)\}\]
\[\{(0,0,0,0),(0,1,2,0),(0,2,1,0),(2,0,2,1),(2,1,1,1),(2,2,0,1)\}\]
\[\{(0,0,0,0),(1,0,0,0),(2,0,0,0),(0,2,2,1),(1,2,2,1),(2,2,2,1)\}\]
\[\{(0,0,0,0),(1,0,1,0),(2,0,2,0),(0,1,0,0),(1,1,1,0),(2,1,2,0)\}\]
\[\{(0,0,0,0),(1,0,1,1),(2,0,2,2),(0,0,2,0),(1,0,0,1),(2,0,1,2)\}\]
\[\{(0,0,0,0),(1,0,2,0),(2,0,1,0),(0,2,1,1),(1,2,0,1),(2,2,2,1)\}\]
\[\{(0,0,0,0),(1,0,2,2),(2,0,1,1),(0,1,2,1),(1,1,1,0),(2,1,0,2)\}\]
\[\{(0,0,0,0),(1,1,0,0),(2,2,0,0),(0,2,0,1),(1,0,0,1),(2,1,0,1)\}\]
\[\{(0,0,0,0),(1,1,0,1),(2,2,0,2),(0,2,2,0),(1,0,2,1),(2,1,2,2)\}\]
\[\{(0,0,0,0),(1,1,2,0),(2,2,1,0),(0,0,2,1),(1,1,1,1),(2,2,0,1)\}\]
\[\{(0,0,0,0),(1,1,2,1),(2,2,1,2),(0,2,1,1),(1,0,0,2),(2,1,2,0)\}\]
\[\{(0,0,0,0),(1,1,2,2),(2,2,1,1),(0,2,2,0),(1,0,1,2),(2,1,0,1)\}\]
\[\{(0,0,0,0),(1,2,1,2),(2,1,2,1),(0,0,2,1),(1,2,0,0),(2,1,1,2)\}\]
\[\{(0,0,0,0),(1,2,2,0),(2,1,1,0),(0,2,2,1),(1,1,1,1),(2,0,0,1)\}\]
Here is a short program in GAP [5] checking that the union of the \(G\)-orbits of the above sixteen 6-subsets of \(G\) actually is the collection of blocks of the desired \(2-(81,6,2)\) design.
```gap
# All points of AG(4,3)
pts := Tuples( [0..2], 4 );;

# blkOrbRep denotes the block orbit representatives listed above.
# All blocks of the design:
blks := Union( List( blkOrbRep, b -> List( pts, p ->
    AsSet( List( [1..Size(b)], i -> (b[i] + p) mod 3 ) ) ) ) );;

# Check that it is a 2-design: every pair of points must lie in
# exactly two blocks.
Collected( List( Combinations( pts, 2 ), p ->
    Number( blks, b -> (p[1] in b) and (p[2] in b) ) ) );
```
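Assuming the construction is correct, the final `Collected` call should return `[ [ 2, 3240 ] ]`: each of the \(\binom{81}{2}=3240\) point pairs lies in exactly \(\lambda=2\) blocks.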
It is evident that any block of the obtained design is a union of two parallel lines. Hence we conclude that this design is strictly additive. It is also easy to check that our design is simple. Thus, considering the comments that we made at the beginning of this section, we have the following new result.
The number of pairwise non-isomorphic \(2-(81,6,2)\) designs is at least equal to \(2\).
Some infinite classes of strictly additive 2-designs will be given in a future paper still in preparation [2].
## Acknowledgements
The author is supported by the Croatian Science Foundation under the project 9752.
|
2304.11785 | Imaging Preflare Broadband Pulsations in the Decimetric-metric
Wavelengths | Preflare activities contain critical information about the pre-cursors and
causes of solar eruptions. Here we investigate the characteristics and origin
of a group of broadband pulsations (BBPs) in the decimetric-metric wavelengths,
taking place during the preflare stage of the M7.1 flare dated on 2011
September 24. The event was recorded by multiple solar instruments including
the Nan\c{c}ay Radioheliograph that measures the properties of the radio source.
The BBPs start $\sim$24 min before the flare onset, extending from $<$ 360 to
above 800 MHz with no discernible spectral drift. The BBPs consist of two
stages, during the first stage the main source remains stationary, during the
second stage it moves outward along with a steepening extreme-ultraviolet (EUV)
wave driven by the eruption of a high-temperature structure. In both stages, we
observe frequent EUV brightenings and jets originating from the flare region.
During the second stage, the BBPs become denser in number and stronger in
general, with the level of the polarization increasing gradually from $<$ 20%
to $>$ 60% in the right-handed sense. These observations indicate the
steepening EUV wave is important to the BBPs during the second stage, while the
preflare reconnections causing the jets and EUV brightenings are important in
both stages. This is the first time such a strong association of an EUV wave
with BBPs is reported. We suggest a reconnection plus
shock-sweeping-across-loop scenario for the cause of the BBPs. | Maoshui Lv, Baolin Tan, Ruisheng Zheng, Zhao Wu, Bing Wang, Xiangliang Kong, Yao Chen | 2023-04-24T01:13:50Z | http://arxiv.org/abs/2304.11785v1 | # Imaging Preflare Broadband Pulsations in the Decimetric-metric Wavelengths
###### Abstract
Preflare activities contain critical information about the precursors and causes of solar eruptions. Here we investigate the characteristics and origin of a group of broadband pulsations (BBPs) in the decimetric-metric wavelengths, taking place during the preflare stage of the M7.1 flare of 2011 September 24. The event was recorded by multiple solar instruments, including the Nançay Radioheliograph, which measures the properties of the radio source. The BBPs start \(\sim\)24 min before the flare onset, extending from \(<\) 360 to above 800 MHz with no discernible spectral drift. The BBPs consist of two stages: during the first stage the main source remains stationary, while during the second stage it moves outward along with a steepening extreme-ultraviolet (EUV) wave driven by the eruption of a high-temperature structure. In both stages, we observe frequent EUV brightenings and jets originating from the flare region. During the second stage,
the BBPs become denser in number and stronger in general, with the level of the polarization increasing gradually from \(<20\%\) to \(>60\%\) in the right-handed sense. These observations indicate that the steepening EUV wave is important to the BBPs during the second stage, while the preflare reconnections causing the jets and EUV brightenings are important in both stages. This is the first time such a strong association of an EUV wave with BBPs has been reported. We suggest a reconnection plus shock-sweeping-across-loop scenario for the cause of the BBPs.
Solar coronal mass ejections (310), Solar activity (1475), Solar corona (1483), Solar flares (1496), Solar radio emission (1522)
## 1 Introduction
Solar radio pulsations represent (quasi-)periodic or irregular short fluctuations observed from the radio flux curves or the dynamic spectra, in almost all wavelength ranges from metric to microwave. In the decimetric-metric wavelengths, pulsations often appear as fine structures being superposed on the wideband continuum of the type-IV solar radio burst (Young et al., 1961; Kai and Takayanagi, 1973; Droege, 1977; Trottet et al., 1981; Khan et al., 2002. See the review by Nindos and Aurass, 2007). According to the bandwidths of pulsations, they can be classified as broadband pulsations (BBPs) and narrowband pulsations (NBPs).
At least two aspects of the mechanisms of the radio pulsations should be considered. One is the radiation mechanism; the other is the cause of the pulsations/modulations. For NBPs a coherent mechanism of plasma radiation may be involved, while for BBPs the gyrosynchrotron mechanism may be important. In addition, the Type III-like coherent radiation mechanism excited by fast-moving beam-type energetic electrons within large magnetic loops, as well as the loss-cone maser instability driven by trapped energetic electrons (Aurass et al., 2003; Zlotnik et al., 2003; see Ni et al., 2020, 2021 and Li et al., 2021 for the latest theoretical studies), have also been proposed as likely radiation mechanisms for BBPs. On the other hand, the pulsations/modulations are often explained with the following scenarios (Kuijpers, 1980; Aschwanden, 1987): (1) modulations by the magnetohydrodynamic (MHD)
oscillations in terms of, e.g., the fast sausage mode of the magnetic loop structure; (2) intrinsic oscillations of the kinetic radiation process involving the nonlinear wave-wave and wave-particle coupling; (3) modulations by the transient acceleration of energetic electrons, by, e.g., intermittent or bursty magnetic reconnections.
Most studies have focused on BBPs taking place during the impulsive or decay phases of solar flares. During the preflare stage, the energy releases, if any, are much weaker than those during the main phase of the flares, and thus the resultant radio signatures can be easily missed. Zhang et al. (2015) reported four flares with preflare microwave activities that contain fine structures such as quasi-periodic pulsations (QPPs) and millisecond dots, using the high-sensitivity data recorded by the Ondrejov radio spectrograph in the frequency range of 0.8-2.0 GHz. This reveals novel signatures of preflare activities. In a study on flare precursors in the low solar atmosphere, Wang et al. (2017) found two episodes of weak emission 10-30 min before the impulsive phase of the flare. The accompanying microwave spectra were recorded by the new Expanded Owens Valley Solar Array (EOVSA) in the frequency range of 2-18 GHz. They show that these precursor microwave emissions can be well modelled as quasi-thermal gyrosynchrotron sources, and the spectral fittings have been used to deduce the magnetic field strengths and their temporal variation. This provides an important constraint on the location and characteristics of the energy release from the radio perspective. Another study identified preflare microwave QPPs with the 17 GHz data recorded by the Nobeyama Radioheliograph (Li et al., 2020), in which the QPPs appear within the flare source region according to the microwave imaging data. Their periods increase from \(\sim\) 300 to 500 s. This has been used to infer the property of the electric currents of the pre-flare source region.
Here we report a rare decimetric-metric event of BBPs observed during the preflare stage of an M7.1 limb flare. Multi-wavelength data, including both EUV and radio imaging/spectral data, are available. The close-to-limb perspective ensures a minimal projection effect. The event provides a good opportunity to investigate the origin of BBPs. The observational data and results are presented in Sections 2 and 3. Conclusions and discussion are presented in Section 4.
The flare occurred on 2011 September 24, starting at 12:33 UT, peaking at 13:17 UT, and ending at 14:10 UT, according to the _Geostationary Operational Environmental Satellite_ (_GOES_) flare list. It originated from the NOAA AR 11302 close to the northeastern limb of the solar disk. Figure 1(a) shows the 171 A image observed by the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012) on board the _Solar Dynamics Observatory_ (_SDO_; Pesnell et al., 2012) at the flare peak time, from which overlying loops are clearly visible. Mainly the EUV data at 171 A (Fe IX, \(\sim\)0.6 MK), 193 A (Fe XII, \(\sim\)1.6 MK), and 94 A (Fe XVIII, \(\sim\)6 MK) were analyzed here. The EUV data have a pixel size of 0.6\({}^{\prime\prime}\) and a cadence of 12s. An accompanying halo CME was observed by the Large Angle and Spectrometric Coronagraph (LASCO; Brueckner et al., 1995) C2 onboard the _Solar and Heliospheric Observatory_ (_SOHO_; Domingo et al., 1995) first at 12:48 UT (see Figure 1(b)). According to the second-order polynomial fit of the height measurements by C2, the CME starts at \(\sim\)12:33 UT.
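The onset-time estimate from such a height-time fit can be sketched as follows; the \((t,h)\) pairs below are made-up placeholders, not the actual LASCO C2 measurements.

```python
import numpy as np

# Second-order polynomial fit of CME height-time data, extrapolated
# back to h = 1 R_sun to estimate the onset time. The data points are
# synthetic placeholders for the LASCO C2 measurements.
t = np.array([15.0, 27.0, 39.0, 51.0])   # minutes after 12:33 UT
h = np.array([2.5, 4.0, 6.0, 8.2])       # heliocentric height, R_sun

c2, c1, c0 = np.polyfit(t, h, 2)         # h(t) = c2*t^2 + c1*t + c0

roots = np.roots([c2, c1, c0 - 1.0])     # solve h(t) = 1 R_sun
roots = roots[np.isreal(roots)].real
t_onset = roots[np.argmin(np.abs(roots))]
print(f"estimated onset: {t_onset:+.1f} min relative to 12:33 UT")
```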
The radio bursts were recorded by the spectrographs of e-Callisto Bleien (175-870 MHz) with a time resolution of 0.25s (Figure 1(c)). The bursts took place in all stages of the flare, from the preflare and early-rising stages to the impulsive and decay stages. Liu et al. (2018) have analyzed the stationary type IV continuum burst (12:40-13:00 UT) during the early-rising stage. Here we focus on the radio bursts observed from \(\sim\)12:09-12:31 UT before the onset of the flare.
The Nançay Radioheliograph (NRH; Kerdraon & Delouis, 1997) provides imaging data with polarization measurements at 10 frequencies from 150 to 445 MHz. The spatial resolution depends on frequency and time of observation, being \(\sim\)2\({}^{\prime}\) at 445 MHz and \(\sim\)6\({}^{\prime}\) at 150 MHz in summer, and up to three times larger along the NS direction during winter. Mainly the NRH data at 4 frequencies (360, 408, 432, and 445 MHz) were analyzed here.
The magnetic-field data on the photosphere are from the Helioseismic and Magnetic Imager (HMI; Scherrer et al., 2012) on board _SDO_, with a pixel scale of 0.6\({}^{\prime\prime}\) and a cadence of 45s. The coronal magnetic field configuration is extrapolated with the Potential Field Source Surface (PFSS; Schrijver & De Rosa, 2003) model.
### Spectral Characteristics and Source Properties of the BBPs
From Figure 2(a), the preflare BBPs started at \(\sim\)12:09 UT and ended at \(\sim\)12:31 UT, i.e., starting \(\sim\)24 min before the flare onset. They extend from \(<\) 360 to above 800 MHz with no discernible spectral drift. Hundreds of decimetric-metric BBPs can be identified. They are rather intermittent from \(\sim\)12:09 to 12:17 UT, and become denser in number and stronger in intensity later. There exist several minutes of weak emission at \(\sim\)12:17 UT. This has been used to split the whole BBPs into two stages, with stage I for \(\sim\)12:09 to 12:17 UT and stage II for \(\sim\)12:17-12:31 UT. There exists some background continuum that also becomes stronger during stage II. In addition, the overall bandwidth of relatively strong emission becomes wider in the later stage. During most of this stage, the bandwidth is \(>\)500 MHz, which is close to the emission frequency. This is why we classify the emission as BBPs.
Figure 2(b) presents the temporal curves of the brightness temperature (\(T_{b}\)) at four NRH frequencies (360, 408, 432, and 445 MHz). In accordance with the spectral data, there exist plenty of local peaks of \(T_{b}\). Five blue vertical dashed lines are plotted to show the correspondence between the spectral data and the temporal curves, so as to confirm the absence of any observable spectral drift of each BBP. The \(T_{b}\) curves at the four frequencies are similar to each other, with the values of \(T_{b}\) being close for the three higher frequencies while being much smaller at 360 MHz. This is consistent with the cutoff of the spectral data, which is around 360 MHz. The values of \(T_{b}\) range from \(\sim\)10\({}^{7}\) K to \(4\times 10^{8}\) K in stage I and from \(\sim\)10\({}^{7}\) K to \(>\)10\({}^{9}\) K in stage II.
Figure 2(c) presents the temporal variations of the levels of polarization at the four NRH frequencies. The levels are close to each other, in agreement with the fact that they belong to the same BBPs. Another significant observation is that the polarization remains at a weak level during stage I while increasing consistently to strong levels (\(\sim\)60%) by the end of stage II; it continues to increase to \(\sim\)100% after 12:31 UT.
Figure 2(d) shows the results of the wavelet analysis of the trend-subtracted \(T_{b}\) profile at NRH 445 MHz, from which one can identify several periodic components. The \(\sim\)5 min signal lasts for only about 5 min and is thus not convincing; the \(\sim\)3 min signal lasts for about 9 min (\(\sim\)12:14-12:23 UT). Their origin can be inferred from the three intermittent enhancements of the spectral data (Figure 2(a)). There also exist several periodic components at \(\sim\)1 min and \(\sim\)2 min. Comprehensive studies exist regarding the origin of QPPs (reviewed briefly in the introduction section), therefore they are not considered further here.
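To illustrate this kind of timing analysis, the Python sketch below detrends a light curve and computes its wavelet power spectrum. The 5-min smoothing window, the complex Morlet wavelet, and the synthetic \(T_{b}\) series are our assumptions for illustration only; they are not the exact procedure behind Figure 2(d).

```python
import numpy as np
import pywt

dt = 0.25                                   # s, e-Callisto time resolution
t = np.arange(0, 22 * 60, dt)               # ~22 min covering 12:09-12:31 UT
tb = np.random.lognormal(size=t.size)       # placeholder for the 445 MHz T_b curve

# Subtract a slowly varying trend (a simple 5-min running mean) so that the
# wavelet power reflects the pulsations rather than the background continuum.
window = int(300 / dt)
trend = np.convolve(tb, np.ones(window) / window, mode="same")
detrended = tb - trend

# Complex Morlet transform probing ~0.5-6 min periodicities. For 'cmor' with
# center frequency fc, scale = fc * period / dt.
periods = np.linspace(30, 360, 80)          # seconds
scales = 1.0 * periods / dt                 # fc = 1.0 for 'cmor1.5-1.0'
coeffs, freqs = pywt.cwt(detrended, scales, "cmor1.5-1.0", sampling_period=dt)
power = np.abs(coeffs) ** 2                 # ridges mark quasi-periodic signals
```

In real data, persistent ridges in `power` at fixed period would correspond to the \(\sim\)1, \(\sim\)2, and \(\sim\)3 min components discussed above.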
Figure 3 presents the NRH sources at the four frequencies (panels (a)-(d)). Panel (e) shows the corresponding timeline overplotted on the spectra. From this figure and the accompanying movie, we observe a major source located around the equatorial part of the limb. The source persists at the four NRH frequencies during the pre-flare BBPs, indicating the existence of the background continuum radiation. In addition to this major source, two other sources located in the northern part of the images (beyond the limb) appear intermittently during the later period of stage II. Linking the three sources, one obtains a large-scale arcade-like structure.
The plus sign in each panel represents the centroid of the major source at 445 MHz, plotted for the convenience of comparison. The size of the source is about 150-200\({}^{\prime\prime}\) according to the outermost contours, which represent the 30% level of the maximum \(T_{b}\) (the 30% contours for short); this is close to the corresponding spatial resolution of NRH. The source centroids at the four frequencies are close to each other. During stage I the sources remain basically stationary (being \(\sim 40^{\prime\prime}\) above the solar limb), while during stage II the sources move outward systematically. This can be seen from Figure 4(a), which presents the source centroid locations with the 85% contours at different times. The dashed line that connects the centroids of the three frequencies (see Figure 4(a)) has been used as the slit to obtain the distance-time (\(d-t\)) measurement of the radio sources with the NRH data. The obtained \(d-t\) data at 445 MHz are plotted in Figure 4(b); the error bars are given by the dimension of the 85% contours of the source. They are overplotted onto the \(d-t\) map of the AIA data at 193 Å along the same slit, which will be analyzed later. From the radio data, the speed of the radio source during stage I is not significant, as expected, while the average speed increases to \(\sim\)120 km s\({}^{-1}\) around the end of stage II (from 12:28 to 12:31 UT).
### Coronal Activities and Their Relation to the BBPs
Figure 5 and the accompanying movie present the AIA images at 171, 193, and 94 Å. The left two columns present the AIA images at the start of stage I and around the end of stage II, respectively, and the rightmost column presents the difference images. The three passbands are selected to show (1) the system of the coronal loops overlying the active region and the sporadic jets and EUV brightenings within/around the flare source region with 171 Å, (2) the rise and steepening of the EUV wave structure with 193 Å, and (3) the hot eruptive structure that drives the EUV wave with 94 Å.
All three of these aspects of the event are self-evident in the AIA data. We plot arrows in Figure 5 to indicate their occurrence at specific moments. What is important here is their physical connection to the BBP sources.
The jets move outward from the flare source toward the main radio source. The EUV brightenings take place frequently during the BBPs. The two phenomena indicate the occurrence of intermittent magnetic reconnection during the pre-flare stage. This can explain the similarly intermittent behaviour of the BBPs if the radio-emitting energetic electrons are attributed to the same reconnection process.
The hot structure starts to rise around 12:10 UT according to the 94 Å data. As mentioned, in Figure 4(b) we have plotted the \(d-t\) map with the difference data at 193 Å. Figure 4(c) presents the \(d-t\) map at 94 Å along the same slit. The rise of the ejecta consists of a gradual and an impulsive stage. The turning point between the two stages is at \(\sim\)12:25 UT. The steepening of the EUV wave also takes place around this moment (see Figure 4(b)). According to the \(d-t\) maps and the AIA images, the EUV wave emerges at \(\sim\)12:16 UT and is always ahead of, and thus driven by, the hot eruptive structure. They have similar gradual-impulsive two-stage dynamics, with the speed of the EUV wave (hot structure) along the slit being \(\sim\)34 km s\({}^{-1}\) (\(\sim\)36 km s\({}^{-1}\)) during the first stage and \(\sim\)121 km s\({}^{-1}\) (\(\sim\)151 km s\({}^{-1}\)) during the second stage. Later, the EUV wave accelerates to \(\sim\)473 km s\({}^{-1}\) (see Figure 4(b)), indicating its steepening into a shock structure.
In Figure 6 and the accompanying movie we present the BBP sources at six NRH frequencies with the running-difference images at 193 Å. The major source remains stationary at first; after the appearance of the EUV wave, the source is located at the EUV wave front and moves together with the EUV wave. This agrees with what has been observed from the \(d-t\) map in Figure 4(b). Again, the arcade-like structure linking the three sources correlates well with the large-scale EUV wave front (see the white circles in Figure 6(g)).
## 4 Conclusions and Discussion
Here we presented detailed observations of the BBPs in the decimetric-metric wavelengths. The purpose is to understand the process causing the pulsations using multi-wavelength data, including the radio-imaging data at several frequencies from NRH and the EUV imaging data from AIA/SDO. Such a complete data set is reported here for the first time for preflare decimetric-metric BBPs. The BBPs start \(\sim\)24 min before the major eruption, during which frequent EUV brightenings occur and several jets eject from the flare region towards the main radio source. During the early stage (stage I) the BBP sources remain stationary, while during the later stage (stage II) the sources move outward with a steepening/accelerating EUV wave that is driven by an eruptive hot structure. The major BBP source, together with the other two sources, is located at the EUV wave front. In addition, the BBPs get intensified overall, and the level of polarization increases gradually upon the emergence of the EUV wave. These observations strongly indicate that the EUV wave contributes to the acceleration of the BBP-emitting energetic electrons during the later stage, in addition to the low-lying reconnection process that may provide seed particles to the EUV wave.
The combined action of the EUV wave and magnetic reconnection in releasing BBPs is demonstrated here for the first time. To see how this happens, in Figure 7(a) we show the HMI magnetogram superposed with (1) the magnetic field lines extrapolated with the usual PFSS method, (2) the EUV wave profile delineated with the 193 Å data at 12:31:55 UT (also see the white circles in Figure 6(g)), and (3) the 50% contour of the major source observed by NRH at 12:31:57 UT. Note that the PFSS result suffers from the potential-field assumption and the use of a synoptic magnetogram, so the extrapolated field lines should be treated with caution. Nevertheless, the overall morphology of the large-scale loop system agrees with the bright loops seen in the 171 Å images (see Figure 5). The location of the major BBP source correlates with the top of the loop system and the front of the EUV wave.
These observations are in line with the schematic of Figure 7(b). We suggest that during stage I the BBPs are radiated by energetic electrons released by the magnetic reconnection process taking place within the flare source region, as evidenced by the jets and EUV brightenings. During stage II, after the appearance of the EUV wave, these energetic electrons are further processed by the steepening wave, and the resultant BBP sources are carried outward by the EUV wave. In the meantime, the EUV wave sweeps a series of loop tops during its outward propagation. According to Kong et al. (2015, 2016), such an EUV (or shock)-wave-sweeping-loop-top process favors the acceleration of energetic particles, since the loop tops serve as an efficient trapping agency within which particles can be processed by the steepening wave/shock-like structure multiple times. The intensified pulsations during stage II are a natural result of this further processing, and the gradual increase of the polarization level can be caused by the gradual change of the overall magnetic field orientation within the loop tops swept by the EUV wave front.
We tried to derive some key parameters (e.g., magnetic field, plasma density) in the radio sources through fitting the flux density spectra assuming gyrosynchrotron emission. Such fittings require one to prescribe a group of free parameters, including the density of the background and energetic electrons, the field strength, the viewing angle, the column depth, and the source size. This means the obtained results are not unique and carry very large uncertainty. In addition, the number and range of frequencies with imaging data available here are quite limited. For a proper fitting to deduce the source conditions, more observational constraints, such as measurements of the flux density at more frequencies and independent measurements of the magnetic field or plasma density, are required.

This study is supported by the National Natural Science Foundation of China (11973031, 11790303, and 11873036). The authors acknowledge the team of NRH for making their data available to us. We thank the Institute for Data Science FHNW Brugg/Windisch, Switzerland for providing the e-Callisto data.
|
2309.00674 | Integrating dynamical mean-field theory and diagrammatic Monte Carlo | Dynamical mean-field theory (DMFT) is one of the most widely used theoretical
methods for electronic structure calculations, providing self-consistent
solutions even in low-temperature regimes, which are exact in the limit of
infinite dimension. The principal limitation of this method is that it neglects
spatial fluctuations, which become important in finite dimensions. Diagrammatic
Monte Carlo (diagMC), by contrast, provides results that are asymptotically
exact for a convergent or resummable series, but are typically limited to high
temperature as they depend on the analytic structure of the expansion. In this
work, we present a framework for integrating these two methods so that the
diagrammatic expansion is conducted around the DMFT solution. This results in a
series expansion conducted only in terms that explicitly depend on nonlocal
correlations, and which is asymptotically exact. | Johan Carlström | 2023-09-01T18:00:02Z | http://arxiv.org/abs/2309.00674v1 | # Integrating dynamical mean-field theory and diagrammatic Monte Carlo
###### Abstract
Dynamical mean-field theory (DMFT) is one of the most widely used theoretical methods for electronic structure calculations, providing self-consistent solutions even in low-temperature regimes, which are exact in the limit of infinite dimension. The principal limitation if this method is that it neglects spatial fluctuations, which become important in finite dimension. Diagrammatic Monte Carlo (diagMC), by contrast, provides results which are asymptotically exact for a convergent or resummable series, but are typically limited to high temperature as they depend on the analytic structure of the expansion. In this work, we present a framework for integrating these two methods so that the diagrammatic expansion is conducted around the DMFT solution. This results in a series expansion conducted only in terms that explicitly depend on nonlocal correlations, and which is asymptotically exact.
The problem of strongly interacting fermions is one of the most ubiquitous and most challenging in theoretical physics. It arises in condensed matter systems, ultra-cold atomic gases, materials science, nuclear physics and in the interior of celestial bodies. Despite an intense effort and notable progress, theory fails to produce decisive results in many of these scenarios, and central questions remain open, some for decades.
A major obstacle to theoretical progress in this field is the sign problem, which prevents the use of conventional quantum Monte Carlo techniques in fermionic systems. In response, a number of approximative methods have been proposed, ranging from DMFT [1; 2; 3] and generalizations thereof [4; 5; 6; 7; 8; 9; 10], to wave function methods [11; 12], the density matrix renormalization group theory [13] and auxiliary-field quantum Monte Carlo [14; 15; 16; 17]. Correlated systems are also extensively simulated with ultra-cold atomic gases [18; 19; 20]. Yet, despite considerable progress, a reliable phase diagram for even the most elementary fermionic many-body problem, namely the Hubbard model, has not been produced, and different numerical protocols produce results with notable discrepancies [21; 22].
A second obstacle to theoretical treatment of strongly correlated systems is the prospect of competing states that are situated closely in terms of free energy, making uncontrolled approximations potentially very misleading [21; 23]. This fact points to the need for extremely accurate, preferably controlled, numerical methods which can be applied in the macroscopic limit.
Diagrammatic Monte Carlo [24; 25; 26] is a method which was developed specifically for unbiased treatment of many-body fermions, and is based on stochastic sampling of Feynman type graphs. Given a series that is either convergent or resummable, it provides asymptotically exact results directly in the macroscopic limit. The principal limitation of this method is that it relies on the analytic structure of the series. In metallic systems this typically prevents treating large expansion parameters, and also limits the applicability to fairly high temperatures. Thus far, near-zero temperature physics has only been reached in semimetals [27; 28; 29].
The problems associated with a large expansion parameter mean that a conventional expansion in the interaction part of the Hamiltonian is not viable for strongly correlated systems. This problem has now been partially overcome with the introduction of strong-coupling diagrammatic Monte Carlo (SCDMC) [30; 31; 32; 33], which relies on expansion in the nonlocal part of the Hamiltonian. Reliable results now exist for the Hubbard model in the strong coupling limit [30], the BCS-BEC crossover regime [34] and magnetic moire systems [35]. The main remaining issue is that the method is still limited to high temperatures.
An alternative path to diagrammatic simulation of challenging regimes is centered on the analytic structure of the series. Homotopic action relies on shifting the starting point of the expansion in order to improve convergence, so that the point of interest in parameter space falls within the convergence radius [36]. By determining the analytical structure of a system in the weak coupling regime, it may be possible to reconstruct it in regimes where the series is not convergent [37]. By Borel resummation, it may even be possible to obtain results from a series with a zero convergence radius [38]. By themselves, these methods have provided access to \(U/t\leq 7\) in the Hubbard model. While this represents important progress, it is not by itself sufficient for addressing many key problems. It should be stressed, however, that these methods are generally agnostic to the origin of the underlying series, and could for example be combined with alternative diagrammatic techniques.
In this work, we demonstrate that SCDMC can be integrated with DMFT to produce an asymptotically exact expansion around the mean-field solution, thus providing the means to systematically improve DMFT results. This is expected to dramatically expand the parameter regimes in which controllable results can be obtained, and shed new light on beyond-mean-field physics in correlated systems at low temperature.
## Model
DMFT is aimed at lattice fermions, and becomes exact in infinite dimension. In this regime, non-local interactions become trivial in that they only shift the chemical potential, and are correspondingly neglected. SCDMC is applicable to
any lattice fermion problem, but was specifically developed to deal with strong contact interactions, by treating these non-perturbatively.
In demonstrating how these methods can be integrated, we will rely on the Hubbard model, which takes the form
\[H=\hat{\mu}+\hat{t}+\hat{U},\ \hat{\mu}=-\mu\sum_{i\sigma}c^{ \dagger}_{i\sigma}c_{i\sigma}, \tag{1}\] \[\hat{t}=\sum_{ij\sigma}t_{ij}c^{\dagger}_{i\sigma}c_{j\sigma},\ \hat{U}= \sum_{i}Uc^{\dagger}_{i\uparrow}c_{i\uparrow}c^{\dagger}_{i\downarrow}c_{i \downarrow}. \tag{2}\]
However, it should be noted that long range interactions can be treated perturbatively with this approach [30].
### Dynamical mean-field theory
DMFT is based on mapping the many-body problem to an impurity model consisting of a single site embedded in a bath. Taking the limit of infinite dimension, the self energy becomes local, \(\Sigma(\omega,\mathbf{k})\rightarrow\Sigma(\omega)\), which significantly simplifies the problem [39]. The corresponding Greens function is then obtained by solving an impurity problem of the form
\[G_{\text{imp}}(\tau-\tau^{\prime})=-\langle Tc(\tau)c^{\dagger}( \tau^{\prime})\rangle_{S_{\text{eff}}}, \tag{3}\]
where the effective action is given by
\[S_{\text{eff}}=-\int_{0}^{\beta}d\tau_{1}d\tau_{2}\sum_{\sigma}\bar{c}_{\sigma}(\tau_{1})\mathsf{g}_{0,\sigma}^{-1}(\tau_{1}-\tau_{2})c_{\sigma}(\tau_{2})+U\int d\tau\,n_{\uparrow}(\tau)n_{\downarrow}(\tau). \tag{4}\]
Since \(\mathsf{g}_{0}\) describes the bilinear part of the action, it is related to the full Greens function of the impurity problem by
\[\Sigma(\omega)=\mathsf{g}_{0}^{-1}(\omega)-G_{\text{imp}}^{-1}( \omega). \tag{5}\]
Translation invariance implies that the self energy is the same for the impurity and the bath, giving
\[G(\omega)=\sum_{\mathbf{k}}\frac{1}{i\omega-\epsilon_{\mathbf{k }}+\mu-\Sigma(\omega)}, \tag{6}\] \[\mathsf{g}_{0}^{-1}(\omega)=G^{-1}(\omega)+\Sigma(\omega). \tag{7}\]
From Eqs. (3)-(7), we obtain the DMFT cycle, as illustrated in Fig. 1. The impurity problem, which is the only nontrivial step, can be solved by a range of different methods, including quantum Monte Carlo simulations and exact diagonalization [3].
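As a schematic illustration of this cycle, the sketch below iterates Eqs. (5)-(7) on a Matsubara-frequency grid. The impurity solver is left as a placeholder, and the grid size, linear mixing, and iteration count are our assumptions.

```python
import numpy as np

def dmft_cycle(eps_k, U, mu, beta, solve_impurity, n_iter=50, mix=0.5):
    """Schematic DMFT loop for Eqs. (3)-(7); `solve_impurity` is a stand-in
    for any impurity solver returning G_imp(iw) given the Weiss field g0(iw)."""
    n_w = 256
    iw = 1j * (2 * np.arange(n_w) + 1) * np.pi / beta  # fermionic Matsubara freqs
    sigma = np.zeros(n_w, dtype=complex)               # initial self energy
    for _ in range(n_iter):
        # Eq. (6): local Green's function, k-sum written as an average over eps_k
        G_loc = np.mean(
            1.0 / (iw[:, None] - eps_k[None, :] + mu - sigma[:, None]), axis=1)
        g0_inv = 1.0 / G_loc + sigma                    # Eq. (7): Weiss field
        G_imp = solve_impurity(1.0 / g0_inv, U, beta)   # Eq. (3): impurity problem
        sigma_new = g0_inv - 1.0 / G_imp                # Eq. (5): Dyson equation
        sigma = mix * sigma_new + (1 - mix) * sigma     # mixing for stability
    return sigma, G_loc
```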
### Strong coupling diagrammatic Monte Carlo
SCDMC is based on expansion in the hopping integral \(t\) and a set of vertices which describe scattering processes that are mediated by the contact interaction \(\hat{U}\). The starting point for deriving this diagrammatic description is the treatment of nonlocal terms and interactions as a perturbation:
\[H_{0}=\hat{\mu},\ H_{1}=\hat{U}+\hat{t}. \tag{8}\]
Since \(H_{0}\) is bilinear, Eq. (8) can be treated via expansion and Wick's theorem:
\[\langle\hat{o}\rangle=\sum_{n}\frac{(-1)^{n}}{n!}\int_{0}^{\beta}d\tau_{1}...\langle T[H_{1}(\tau_{1})...H_{1}(\tau_{n})\hat{o}]\rangle_{H_{0},c}, \tag{9}\]
where the subscript \(c\) implies connected topologies. This gives two types of vertices, corresponding to the expansion terms \(\hat{t}\) and \(\hat{U}\), as shown in Fig. 2(a). The resulting diagrams describe fermions dispersing according to \(\sim t\), which are then subject to corrections that depend on the local terms \(\hat{\mu}\) and \(\hat{U}\), as exemplified in Fig. 2(b). The set of all such scattering processes for a given number of particles and corresponding quantum numbers (like spin) defines a vertex \(V_{N}\), where \(N\) denotes the number of scattered particles; see Fig. 2(c). Note that we suppress spin indices for brevity.
Since \(H_{0}\) contains no dispersion it follows that the bare Greens function is local,
\[G^{0}_{\alpha\beta}(i-j,\tau)=G^{0}_{\alpha\beta}(\tau)\delta_{i,j}. \tag{10}\]
The vertices \(V_{N}\) can be expressed in terms of infinite sums as shown in Fig. 2, (d). These are equivalent to connected \(N-\)particle Greens functions obtained in the atomic limit, and can be written
\[V_{N}[\bar{O}]=\sum_{n}\frac{(-1)^{n}}{n!}\int d\bar{\tau}\langle\hat{U}(\tau _{1})...\bar{O}\rangle_{\hat{\mu},c}, \tag{11}\]
where \(\bar{O}\) denotes the set of fermionic operators associated with the external lines, and the subscript \(c\) implies connected topologies. Working in the atomic limit, the \(N\)-particle Greens function is an exactly solvable problem. The connected Greens function is then obtained by a recursion, where the disconnected parts are removed [30].

Figure 1: **The DMFT cycle** is based on a self-consistency relation for the Greens function. Solving the impurity problem with respect to an effective action \(S_{\text{eff}}\) gives a local Greens function \(G_{\text{imp}}(\omega)\). Comparison to the bilinear part of the effective action gives the self energy \(\Sigma(\omega)\). Integrating over momenta provides a local Greens function \(G(\omega)\), from which the bilinear part of the action, \(\mathsf{g}_{0}^{-1}\), is obtained and fed into the effective action. After convergence, a self-consistent solution for the Greens function is acquired.
The vertices are connected by the hopping integrals \(\sim t\) to form diagrams. The skeleton graphs up to order \(4\) for the free energy are shown in Fig. 2, (e). Observables are obtained by inserting a measuring line into a diagram, as shown in Fig. 2 (f): One line is tagged and treated as an external line. The remainder of the diagrams corresponds to an element of the polarization operator of the hopping integral, \(\Pi\), which is the principal observable. The dressed hopping integral can then be obtained from a Bethe-Salpeter/Dyson type equation of the form
\[\tilde{t}(\omega,\mathbf{k})=\frac{1}{t^{-1}(\mathbf{k})-\Pi(\omega,\mathbf{k })}. \tag{12}\]
Expanding in \(\tilde{t}\) and retaining only skeleton graphs (to avoid double counting) gives a self-consistent solution for the dressed hopping integral and the polarization operator. The Greens function can then be obtained from
\[G(\omega,\mathbf{k})=\frac{1}{\Pi^{-1}(\omega,\mathbf{k})-t(\mathbf{k})}, \tag{13}\]
from which it also follows that
\[\tilde{t}(\omega,\mathbf{k})=t(\mathbf{k})+t^{2}(\mathbf{k})G(\omega,\mathbf{ k}). \tag{14}\]
More complex observables can be obtained by using multiple measuring lines.
The SCDMC cycle is shown in Fig. 3: The first step consists of constructing all vertices up to a given order for all sets of operators \(\bar{O}\) originating in the nonlocal terms \(\hat{t}\). The second stage consists of expanding in \(\tilde{t}\) to obtain \(\Pi\), which in turn gives a new \(\tilde{t}\). This process is repeated until convergence is obtained.
Figure 2: **Overview of strong coupling diagrammatic Monte Carlo**. (a) Treating \(\hat{t}\) and \(\hat{U}\) as expansion parameters, we obtain two types of vertices. (b) The resulting diagrams consist of particles propagating along the \(\sim t\) lines, which are then subject to corrections from local terms \(\sim G^{0}\) and \(\sim U\). These corrections can be classified as single-particle corrections and many-particle collisions, respectively. (c) The set of all single particle corrections can be summed into a single particle vertex V\({}_{1}\), while the set of \(N\)-particle collisions can be sorted into vertices V\({}_{N}\). (d) The vertices are identical to the connected \(N\)-particle Greens functions in the atomic limit, making them an exactly solvable problem. (e) The expansion can then be conducted in vertices V\({}_{N}\) connected by the hopping integral. Using the dressed hopping integral, only skeleton graphs need to be sampled. (f) Observables are extracted using a measuring line, which is treated as an external line. The remaining part of the diagram provides a contribution to the polarization operator of the hopping integral, from which a dressed hopping integral and full Greens function can be obtained.
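For concreteness, once \(\Pi\) is known on a frequency-momentum grid, Eqs. (12)-(13) amount to simple element-wise array operations. The sketch below assumes such grids with nonzero \(t(\mathbf{k})\) and \(\Pi\); the grid shapes are our assumptions.

```python
import numpy as np

def dress_hopping(t_k, Pi_wk):
    """Eqs. (12)-(13) as array operations: t_k is the bare hopping on a k-grid
    (shape (n_k,), assumed nonzero) and Pi_wk the polarization operator
    (shape (n_w, n_k), assumed nonzero)."""
    t_dressed = 1.0 / (1.0 / t_k[None, :] - Pi_wk)   # Eq. (12): dressed hopping
    G = 1.0 / (1.0 / Pi_wk - t_k[None, :])           # Eq. (13): Greens function
    # Eq. (14) then holds algebraically, t~ = t + t^2 G, which is a useful
    # consistency check on any numerical implementation.
    return t_dressed, G
```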
### Integration of dynamical mean-field theory and diagrammatic Monte Carlo
To see how these two methods can be combined, we note that the impurity problem (Eq. 3) can be written as a series expansion:
\[G_{\text{imp}}(\tau-\tau^{\prime})=Z_{H_{1}}^{-1}(\mu)\sum_{n}\frac{(-1)^{n}}{n!}\int_{0}^{\beta}d\tau_{1}...\,\text{Tr}\,e^{-\beta H_{0}}\,T[H_{1}(\tau_{1},\tau_{1}^{\prime})...c^{\dagger}(\tau^{\prime})c(\tau)]. \tag{15}\]
Here, \(H_{0}=\hat{\mu}\), \(H_{1}(\tau,\tau^{\prime})=\hat{U}\delta(\tau-\tau^{\prime})+\Delta(\tau-\tau^{ \prime})c^{\dagger}(\tau^{\prime})c(\tau)\). The partition function refers to contractions of \(H_{1}\) evaluated with respect to \(H_{0}\). Since \(H_{0}\) is bilinear we can express this as a diagrammatic expansion
\[G_{\text{imp}}\!=\!\sum_{n}\frac{(\!-\!1)^{n}}{n!}\!\int_{0}^{ \beta}d\bar{\tau}\langle T[H_{1}(\bar{\tau}_{1})...c^{\dagger}(\tau^{\prime}) c(\tau)]\rangle_{H_{0},c}, \tag{16}\]
where the subscript \(c\) denotes connected topologies and \(\bar{\tau}_{i}=\{\tau_{i},\tau_{i}^{\prime}\}\). The term \(\sim\Delta\) describes exchange of electrons with the bath [39], and takes the form
\[\Delta(\omega)=\frac{1}{\Big[\sum_{\mathbf{k}}\tilde{t}(\omega,\mathbf{k})\Big]^{-1}+\Pi_{\text{imp}}(\omega)}, \tag{17}\]
where \(\Pi_{\text{imp}}\) denotes the polarization operator of \(\Delta\) in the impurity problem.
In the next stage, we note that the expansion (16) can be conducted in an exchange term which is dressed with respect to the impurity polarization operator, denoted \(\tilde{\Delta}\), while retaining only the skeleton graphs (to avoid double counting). We then find
\[\tilde{\Delta}(\omega)=\frac{1}{\Delta^{-1}(\omega)-\Pi_{\text{imp}}(\omega)}=\sum_{\mathbf{k}}\tilde{t}(\omega,\mathbf{k}). \tag{18}\]
Separating the dressed hopping integral into local and inter-site parts
\[\tilde{t}_{L}(\omega)=\tilde{\Delta}(\omega),\;\tilde{t}_{I}( \omega,\mathbf{k})=\tilde{t}(\omega,\mathbf{k})-\tilde{t}_{L}(\omega), \tag{19}\]
the impurity problem can be cast into a strong-coupling expansion in \(\tilde{t}_{L}\) for a renormalized single-particle vertex, as illustrated in Fig. 4(a). The vertex \(\tilde{\mathrm{V}}_{1}\) is equivalent to the impurity Greens function, and provides a contribution to the polarization operator of the full strong-coupling problem. Thus, given \(\Delta\), the infinite series for \(\tilde{\mathrm{V}}_{1}\) can be accounted for nonperturbatively by solving the impurity problem.
This idea can be generalized by solving the impurity problem for connected \(N\)-particle Greens functions \(G_{\text{imp,c}}^{N}(\bar{\tau})\): Once again, we may cast this problem into a skeleton-graph expansion in the dressed exchange term \(\tilde{\Delta}=\tilde{t}_{L}\), which produces an infinite series for a corrected \(N\)-particle vertex \(\tilde{\mathrm{V}}_{N}\), as shown in Fig. 4(b). This problem is identical to the SCDMC expansion in the local part of the hopping integral \(\tilde{t}_{L}\), implying that this infinite class of diagrams for a renormalized vertex may be replaced by the corresponding DMFT solutions.
In the next stage, we note that we can conduct the SCDMC expansion in \(\tilde{\mathrm{V}}_{N}\), as opposed to in the bare vertices. However, if we connect two vertices by the local hopping term \(\tilde{t}_{L}\), we will produce an insertion that is by definition also an element of \(\tilde{\mathrm{V}}_{N}\), resulting in double counting. Consequently, when expanding in \(\tilde{\mathrm{V}}_{N}\), we are only allowed to connect vertices via the itinerant hopping \(\tilde{t}_{I}\). From a diagrammatic point of view, this treatment is equivalent to including \(\tilde{t}_{L}\) in the summation shown in Fig. 2(d), to produce the vertices \(\tilde{\mathrm{V}}_{N}\) rather than their bare counterparts \(\mathrm{V}_{N}\), which only depend on \(\hat{\mu}+\hat{U}\). At this point, \(\tilde{t}_{I}\) becomes the expansion term.

Figure 3: **SCDMC cycle.** In the first stage, a data base of vertices is constructed from the local part of the Hamiltonian, \(\hat{\mu}+\hat{U}\). In the second stage, expansion is conducted in these vertices and a dressed hopping integral \(\tilde{t}\) to produce a new estimate for \(\tilde{t}\), which is fed back into the diagrammatic expansion. The second stage is repeated until convergence is obtained. The Greens function is then obtained from \(\tilde{t}\).

Figure 4: **Mean-field like corrections.** (a) The single particle Greens function obtained from the impurity problem can be equated to a strong coupling expansion in the local part of the hopping integral, \(\tilde{t}_{L}\), for a renormalized vertex \(\tilde{\mathrm{V}}_{1}\). (b) The \(N\)-particle connected Greens function of the impurity problem is equivalent to the infinite series for a corresponding vertex \(\tilde{\mathrm{V}}_{N}\). By integrating DMFT and SCDMC, these corrections can be accounted for nonperturbatively via an impurity solver, allowing for an expansion in the corrected vertices \(\{\tilde{\mathrm{V}}_{N}\}\).
The fact that the expansion term is non-local, and that we only permit skeleton graphs, leads to a sharp reduction of the number of diagrams that are permitted. Up to an expansion order 6, we only obtain 4 topologies, as shown in Fig. 5. The remaining diagrams explicitly depend on nonlocal correlations, as all mean-field like contributions have been accounted for non-perturbatively.
With these results in place, we can proceed to construct an integrated DMFT-SCDMC algorithm in accordance with the diagram in Fig. 6:
1. An impurity solver is used to obtain the single and many-particle Greens functions \(G^{1}_{\text{imp}},\;G^{N}_{\text{imp}}\).
2. The many-particle Greens function is fed into a recursive algorithm which removes its disconnected contributions, producing the renormalized vertices \(\tilde{\text{V}}_{N}\) which are identical to connected Greens functions.
3. A diagrammatic expansion is conducted in the vertices \(\tilde{\text{V}}_{N}\) and the itinerant part of the dressed hopping integral \(\tilde{t}_{I}\), producing the beyond mean-field contribution to the polarization operator, \(\Pi_{\text{SCDMC}}\).
4. \(\Pi_{\text{SCDMC}}\) and \(G_{\text{imp}}\) are combined to form the full polarization operator \(\Pi_{\text{tot}}\), which is fed into a Bethe-Salpeter/Dyson type equation (12) to provide \(\tilde{t}\) and \(\tilde{\Delta}\).
5. \(\Delta\) is obtained from \(\tilde{\Delta}\) and \(\Pi_{\text{imp}}\) and fed back into the impurity solver, while \(\tilde{t}_{I}\) is inserted into the diagrammatic expansion.
The process above is repeated until convergence, providing a self-consistent solution for \(\Pi\), which gives the dressed hopping integral, the Greens function and therefore also the spectrum of the system. Once convergence has been obtained, additional observables like spin-spin/spin-charge correlations can be obtained by using multiple measuring lines.
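A schematic driver for this cycle might look as follows. Both solvers are placeholders, the crude initialization of \(\tilde{t}_{I}\) is our assumption, and the way \(G_{\text{imp}}\) enters \(\Pi_{\text{tot}}\) in step 4 is written here only schematically.

```python
import numpy as np

def dmft_scdmc_cycle(t_k, U, mu, beta, impurity_solver, diagmc_sampler,
                     Delta0, n_iter=20):
    """Schematic driver for steps 1-5. `impurity_solver` stands in for any
    solver returning (G_imp, Pi_imp, vertices) on a Matsubara grid, and
    `diagmc_sampler` for the skeleton expansion in the corrected vertices
    and the itinerant hopping t~_I."""
    n_w = Delta0.size
    Delta = Delta0
    t_itin = np.tile(t_k, (n_w, 1)) - t_k.mean()            # crude initial t~_I
    for _ in range(n_iter):
        G_imp, Pi_imp, vertices = impurity_solver(Delta, U, mu, beta)  # steps 1-2
        Pi_scdmc = diagmc_sampler(vertices, t_itin)                    # step 3
        Pi_tot = G_imp[:, None] + Pi_scdmc                             # step 4
        t_dressed = 1.0 / (1.0 / t_k[None, :] - Pi_tot)                # Eq. (12)
        Delta_dressed = t_dressed.mean(axis=1)                         # Eq. (18), k-sum
        t_itin = t_dressed - Delta_dressed[:, None]                    # Eq. (19)
        Delta = 1.0 / (1.0 / Delta_dressed + Pi_imp)                   # step 5
    return Pi_tot, t_dressed
```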
## Summary
In conclusion, we have provided a framework for integrating DMFT with SCDMC which allows a systematic and asymptotically exact expansion around the mean-field solution. The remaining corrections in this series explicitly depend on non-local correlations, which now represent the expansion parameter, as all other contributions have been accounted for non-perturbatively.
Since the renormalized vertices contain the same energy scales as the expansion parameter, it is very likely that the convergence properties of the series are substantially improved compared to conventional diagrammatics; this should give access to virtually exact results in parameter regimes where such results were previously considered out of reach.
DMFT has an extremely wide applicability, ranging from condensed matter theory and ultra-cold atomic gases to materials science. In the latter, it is often combined with DFT to conduct ab initio calculations of strongly correlated systems [40; 41]. Our method provides the means to dramatically improve results across these applications by systematically taking into account non-local correlations.
Figure 5: **Topologies of the mean-field corrected series up to order 6.** In the corrected vertices \(\{\tilde{\mathrm{V}}_{N}\}\), the local part of the dressed hopping integral \(\tilde{t}_{L}\) has effectively been integrated out. The corresponding expansion should therefore be conducted in the nonlocal part \(\tilde{t}_{I}\), while retaining only skeleton graphs. As a result, there is only a small number of allowed topologies (4 graphs up to order N=6), all of which explicitly depend on nonlocal correlations.
Figure 6: **DMFT+SCDMC cycle.** The impurity solver allows mean-field like corrections to be integrated out, resulting in an expansion only in diagrams that explicitly depend on non-local correlations. |
2303.03326 | Keep It Simple: CNN Model Complexity Studies for Interference
Classification Tasks | The growing number of devices using the wireless spectrum makes it important
to find ways to minimize interference and optimize the use of the spectrum.
Deep learning models, such as convolutional neural networks (CNNs), have been
widely utilized to identify, classify, or mitigate interference due to their
ability to learn from the data directly. However, there has been limited
research on the complexity of such deep learning models. The major focus of
deep learning-based wireless classification literature has been on improving
classification accuracy, often at the expense of model complexity. This may not
be practical for many wireless devices, such as internet of things (IoT)
devices, which usually have very limited computational resources and cannot
handle very complex models. Thus, it becomes important to account for model
complexity when designing deep learning-based models for interference
classification. To address this, we conduct an analysis of CNN based wireless
classification that explores the trade-off amongst dataset size, CNN model
complexity, and classification accuracy under various levels of classification
difficulty: namely, interference classification, heterogeneous transmitter
classification, and homogeneous transmitter classification. Our study, based on
three wireless datasets, shows that a simpler CNN model with fewer parameters
can perform just as well as a more complex model, providing important insights
into the use of CNNs in computationally constrained applications. | Taiwo Oyedare, Vijay K. Shah, Daniel J. Jakubisin, Jeffrey H. Reed | 2023-03-06T17:53:42Z | http://arxiv.org/abs/2303.03326v1 | # Keep It Simple: CNN Model Complexity Studies for Interference Classification Tasks
###### Abstract
The growing number of devices using the wireless spectrum makes it important to find ways to minimize interference and optimize the use of the spectrum. Deep learning models, such as convolutional neural networks (CNNs), have been widely utilized to identify, classify, or mitigate interference due to their ability to learn from the data directly. However, there have been limited research on the complexity of such deep learning models. The major focus of deep learning-based wireless classification literature has been on improving classification accuracy, often at the expense of model complexity. This may not be practical for many wireless devices, such as, internet of things (IoT) devices, which usually have very limited computational resources and cannot handle very complex models. Thus, it becomes important to account for model complexity when designing deep learning based models for interference classification. To address this, we conduct an analysis of CNN based wireless classification that explores the trade-off amongst dataset size, CNN model complexity, and classification accuracy under various levels of classification difficulty: namely, interference classification, heterogeneous transmitter classification, and homogeneous transmitter classification. Our study, based on three wireless datasets, shows that a simpler CNN model with fewer parameters can perform just as well as a more complex model, providing important insights into the use of CNNs in computationally constrained applications.
Interference Classification, Convolutional Neural Networks, Model Complexity.
## I Introduction
The proliferation of internet of things (IoT), 5G devices, and other wireless technologies has led to an increase in the number of wireless devices that interfere with each other, either intentionally or unintentionally. Classifying interference is essential for ensuring good communication quality and adhering to spectrum sharing policies. The concept of wireless interference, where a signal of interest is disrupted by another signal sharing the same channel, has been studied extensively in the field of wireless communication [1, 2, 3]. This type of interference can significantly degrade the signal-to-noise-plus-interference ratio (SINR) and disrupt communication between a transmitter and receiver.
Traditionally, many interference classification techniques have relied on rule-based approaches that are not effective when multiple coexisting technologies are in use. As a result, alternative classification algorithms that use feature detection or extraction techniques, such as cyclostationary feature detection [4], have been explored. However, these techniques require domain expertise and can result in a complicated solution that is oftentimes not scalable. In recent years, researchers have turned to deep learning techniques to reduce the need for domain expertise [3, 5, 6]. Interference suppression applications have also widely used deep learning [7].
Convolutional neural networks (CNNs), a model-free deep learning approach, have been shown to be effective in various domains such as image classification and natural language processing. CNNs have been used for various classification tasks, including protocol/interference classification [3, 5, 6, 8, 9, 10], transmitter classification [11], and modulation classification [12]. In using CNNs or other deep learning models for classification tasks in wireless communication applications, researchers need to ensure that they have access to high quality datasets and efficient models. While earlier works [5, 6, 9] have focused on improving the classification performance of different deep learning models, the relationship between the size of a dataset, the complexity of the CNN models used, and the difficulty of classification is often overlooked or assumed. Even though many researchers tune the hyper-parameters of their CNNs during training, the other factors that affect model complexity (such as filter size and the number of nodes in the hidden layer) have typically not been investigated. In resource-constrained applications like the IoT, CNN models that are too complex may not be feasible for classification in real-world situations. This is because IoT is marked by its limited processing power and storage capabilities, which can lead to challenges in terms of performance, security, privacy, and reliability [13, 14]. It is essential to understand the processes that lead to the selection of hyper-parameters in relation to model complexity, the size of the dataset and the difficulty of classification.
A typical CNN architecture consists of a series of feed-forward layers that apply convolutional filters and pooling operations, followed by fully-connected layers that convert the 2D feature maps produced by the earlier layers into 1D vectors for classification [15]. While CNNs do not require a separate feature extraction step before being used, they can be time-consuming and difficult to train from scratch because they require a large labeled dataset for building and training the model [16]. The complexity of deep learning models can be influenced by various factors, such as the number of layers, number of filters, size of the filters, and number of nodes in the
hidden layer. Researchers in the field of deep learning often aim to improve the performance of their models by hyper-parameter tuning and other optimization techniques. In the literature, there has been a focus on improving classification performance through these methods. Although it is important to optimize deep learning models for performance, there has been limited attention in the wireless communication literature on thoroughly analyzing the factors that influence model complexity. This paper aims to fill this gap by studying the relationship between model complexity, dataset size, and classification difficulty in a thorough and empirical manner. To the best of our knowledge, this is one of the first studies to examine this relationship.
Our contributions are as follows:
* We thoroughly analyze the complexity of three different CNN architectures (simple, medium, and complex) in relation to dataset size and classification difficulty.
* We show, empirically, that the performance of a simple CNN model with fewer parameters is comparable to that of a more complex CNN model. This is important because resource-constrained devices, which have limited processing power and storage capabilities, can benefit from using simpler CNN models.
## II Overview of Classification Tasks
In this paper, our interference classification task is performed at three levels of difficulty, as shown in Fig. 1. At the interference or protocol level, a CNN can be used to classify different protocols or interference sources. At the heterogeneous level, a CNN can be used to classify different transmitter categories. Finally, at the homogeneous level, a CNN can be used to classify specific emitters within a homogeneous category, such as transmitters of the same model and manufacturer. The outermost layer corresponds to the easiest classification, while the innermost layer corresponds to the most difficult.
### _Transmitter Categorization_
Most transmitters have features that are peculiar to each of them. For instance, when run at high power, the power amplifiers used in many wireless devices sometimes display non-linearities [17][18]. These non-linearities can be used to group the transmitters into different categories. We briefly discuss the features of the transmitters used for our classification tasks.
#### II-A1 Category A (USRP 2921)
These transmitters utilize more reliable linear power amplifiers and finer filters than other transmitter categories.
#### II-A2 Category B (USRP B200)
In comparison to those in category A, the components in the transmitters in this category are less reliable. They were designed with low-cost experimentation in mind. They employ a single chain of the \(AD9364\), which is frequently utilized to decrease hardware and software complexity.
#### II-A3 Category C (Adalm Pluto SDR)
This category's transmitters are far less capable than the other two categories'. Considering their small size and low cost, the Pluto SDR is capable of a wide range of useful SDR applications.
### _Levels of Classification_
In this section we discuss the three levels of classification experiments carried out for the model complexity study.
#### II-B1 Protocol or Interference Classification

The protocol or interference level of classification is easier than the other two discussed in this section. This is because there are enough distinguishing features at this level. For instance, in the radio frequency interference dataset used in this paper, there is a marked difference between the three types of jammers: the MCWI, which combines the SOI and a two-tone CW, is structurally different from the CI [19].
#### II-B2 Heterogeneous Classification

One transmitter from each of the three categories listed above in Section II-A is used in this study. While two of the three transmitters (USRP 2921 and USRP B200) were produced by the same company (National Instruments), all three are of distinct models. In terms of difficulty, this classification is easier than homogeneous classification (discussed in Section II-B3) but harder than the protocol or interference classification discussed in Section II-B1.
#### II-B3 Homogeneous Classification
In this classification task, we seek to distinguish transmitters within categories A, B, or C. The transmitters are identical (same manufacturer and model), making this the most challenging classification problem of all three levels. This is because the classification algorithm must identify slight variations in transmitters which have the same architecture and hardware components. The same OFDM waveform is sent by all of the devices, significantly complicating categorization.
## III Implementation Details of the CNN
### _CNN Parameters for Model Complexity_
The CNN algorithm utilized was a modified _Tensorflow_ CNN model that was used to categorize handwritten digits from the MNIST dataset. We created three levels of complexity for our CNN models by varying the number of nodes in the hidden layer, the number of filters, and the size of the training dataset. These factors all contribute to the number of parameters in the model, and as a general rule, the complexity of a CNN algorithm increases with the number of parameters. As a CNN architecture becomes more complex, it is generally expected that the performance of the CNN algorithm will improve. However, one potential downside is that the algorithm may begin to overfit to the training data, resulting in a higher training accuracy compared to the test accuracy.

Fig. 1: Overview of Interference Classification
Our network contains three convolutional layers, max pooling layers and a fully connected layer. A \(3\times 3\) filter is applied to the input matrix by the convolutional layers. Convolution operations are carried out in the resulting sub-region to produce a single value in the respective output feature map. To integrate non-linearities into the model, the scale-invariant rectified linear unit (ReLU) activation function is applied to the feature map values.
The data collected by the convolutional layer is down-sampled using the pooling layer. We utilized max pooling with a \(2\times 2\) window. This means that the most important features of the signals are kept while others are discarded [20], which facilitates transmitter classification. Table I lists specifics of the parameters for all three designs.
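To make the three complexity tiers concrete, here is a minimal Keras sketch of the "simple" model using the hyper-parameters of Table I (16 filters of size 3×3, roughly 260 hidden nodes, 50% dropout, 38×100 input, learning rate 0.0002). The exact ordering of convolution, stride, and pooling operations is our assumption, so the resulting parameter count will not match Table I exactly.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # 4 homogeneous devices (3 for the heterogeneous task)

model = models.Sequential([
    layers.Input(shape=(38, 100, 1)),               # CWT coefficient matrix
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.MaxPooling2D(2),
    layers.Conv2D(16, 3, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(260, activation="relu"),           # "0.26k" hidden nodes
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES),                      # logits for each device
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-4),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```

The medium and complex variants follow the same structure, differing mainly in the filter count (32) and the size of the hidden layer.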
### _CNN Architecture for Interference Classification_
In this section, we describe the architecture of the CNN model and the training parameters used for the interference classification task. The CNN model is a pre-trained _ResNet18_ model described in [21]. The architecture is summarized in Table II. The model consists of many convolutional layers, two fully-connected layers, and one output layer, in that order. The leaky ReLU activation function (\(\text{LeakyReLU}(x)=\max\{\alpha x,x\}\), where \(\alpha\in(0,1)\) is a preset parameter) with \(\alpha=0.2\) is applied to all convolutional layers and fully-connected layers. The softmax function is then applied to the resulting (output) layer. All convolutional layers are subject to batch normalization [22], while the output layer and fully-connected layers are excluded. Additionally, we use stride \(=2\) in the convolutional layers rather than \(2\times 2\) pooling layers for down-sampling. Such changes enhance performance and lower the variance of the results across various training epochs. The input \(x\) is normalized as \(x^{\prime}=x/x_{\text{max}}\), where \(x_{\text{max}}\) is the largest input value allowed in \(x\). We utilize the Adam optimizer with the suggested default values in [23]. The learning rate is \(1\times 10^{-4}\), and the batch size is \(64\).
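As a rough illustration of this setup, the sketch below adapts a pre-trained ResNet18 in PyTorch and attaches the Adam optimizer with the learning rate from Table II. The use of torchvision's weights, the four-class output (SOI plus three jammer types), and the training-step structure are our assumptions; the custom activation and down-sampling modifications described above belong to the network of [21] and are not reproduced here.

```python
import torch
from torch import nn
from torchvision import models

NUM_CLASSES = 4  # assumed: SOI plus CWI, MCWI, and CI jammer classes
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)  # replace the output layer

optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)  # Table II learning rate
criterion = nn.CrossEntropyLoss()

def train_step(batch_x, batch_y):
    """One optimization step on a batch of spectrogram-like inputs."""
    optimizer.zero_grad()
    loss = criterion(net(batch_x), batch_y)
    loss.backward()
    optimizer.step()
    return loss.item()
```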
## IV Experimental Setup
In this section, we describe the datasets used for the investigation of dataset size, model complexity and level of classification. It is worth noting that we only used publicly available datasets for interference classification, while we used our own testbed to generate datasets for the model complexity studies presented in this paper. This is because there are some limitations to using public datasets; for instance, we were not able to control the types of transmitters used to generate the interference. Also, the channel used for transmission cannot be changed, since we are using the dataset as is.
### _Dataset Generated For Model Complexity_
The dataset generation process used for the assessment of model complexity is similar to the one used in [11]. The only difference is that more training data was added when compared to the work in [11]. We define the process of creating a base-band waveform, transmitting it over a channel, and receiving it as the transmitter-receiver chain. It is important to note that the details of the transmitter-receiver chain can be found in [11].
#### IV-A1 Baseband Waveform Generation

The hardware devices transmit OFDM packets created using GNU Radio Companion signal processing blocks. This is done by generating a stream of bits (0s and 1s) from a random source. The waveform takes 10,000 data bits from the random source, which are mapped onto OFDM symbols using a QPSK modulation scheme with an FFT length of 512, occupying the center 200 subcarriers, with a cyclic prefix of 128.
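For illustration, a numpy sketch of this mapping is given below (random bits, QPSK, 512-point IFFT with the 200 center subcarriers occupied, cyclic prefix of 128). Details such as subcarrier ordering and DC-bin handling are our assumptions; the actual waveform was built from GNU Radio Companion blocks.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FFT, N_USED, N_CP = 512, 200, 128

bits = rng.integers(0, 2, size=10_000)                     # random bit source
bits = bits[: (bits.size // (2 * N_USED)) * 2 * N_USED]    # whole OFDM symbols

# Gray-free QPSK: 2 bits -> one complex symbol on the unit circle.
qpsk = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)
symbols = qpsk.reshape(-1, N_USED)

# Place symbols on the 200 center subcarriers of a 512-point grid.
used = np.arange(-N_USED // 2, N_USED // 2)
grid = np.zeros((symbols.shape[0], N_FFT), dtype=complex)
grid[:, used % N_FFT] = symbols

tx = np.fft.ifft(grid, axis=1)
tx = np.concatenate([tx[:, -N_CP:], tx], axis=1).ravel()   # prepend cyclic prefix
```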
#### IV-A2 RF Transmission
During transmission, GNU Radio and the transmitter hardware are connected through the USRP hardware driver (UHD) hardware support package. All USRP devices are managed and communicated with using a library called UHD. This is done using a GNU Radio block that takes as inputs the sampling rate, RF bandwidth, buffer size, center frequency, physical address of the device, and attenuation. The OFDM waveform is then up-converted to 2.45 GHz as the transmission center frequency and transmitted through the hardware's antenna. The sets of transmitters include four USRP 2921s, four USRP B200s, and four Adalm Pluto software defined radios (SDRs).
TABLE II: CNN Parameters for Interference Classification

| Parameter | ResNet |
|---|---|
| Batch size | 64 |
| Number of layers | 18 |
| Learning rate | 0.0001 |
| Maximum pooling dimension | 2×2 |
| Activation function | ReLU |
| Training/test split | 80%/20% |
| Dropout probability | 80% |
TABLE I: CNN Parameters for Model Complexity

| Parameter | Simple | Medium | Complex |
|---|---|---|---|
| Nodes in the hidden layer | 0.26k | 1.04k | 8k |
| Total number of parameters (whole network) | 6.1k | 48k | 276k |
| Filter size (no. of filters) | 3×3 (16) | 3×3 (32) | 3×3 (32) |

Parameters shared by all three architectures: batch size 16; 3 convolutional layers; stride 2; 4 classes for homogeneous devices (3 for heterogeneous devices); learning rate 0.0002; 2×2 max pooling; 50% dropout probability; 38×100 input matrix; ReLU activation; Adam optimizer with cross-entropy loss; training set sizes of 16, 64, 256, 1024, 4096, 8192, and 16384; test set size of 2k.
#### IV-A3 RF Reception

On the receiver side, the signal is received when the antenna and the transmitter are on the same channel. We assume that the receiver knows the center frequency and bandwidth of the transmitter and corrects for frequency offset at the receiver. The transmitted signal's center frequency and sample rate are stored in the UHD USRP Source block, which is used by a computer running GNU Radio Companion to down-convert the signal to baseband frequency. After being delivered to a low noise amplifier, the signal is separated into in-phase and quadrature components at baseband. It is then low-pass filtered and transferred to an analog-to-digital converter (ADC). When the ADC process is completed, the digital samples are clocked into an FPGA. Inside the FPGA, the signal is digitally down-converted using precision frequency tuning and decimated by a series of filters. After decimation, the raw samples are transmitted to a host computer through the host interface (an Ethernet gigabit link, in this case), made possible by the UHD. The complex samples are collected by the host computer using GNU Radio, and the IQ data is saved to a file and transferred to MATLAB for further processing.
#### IV-A4 Continuous Wavelet Transform (CWT) Signal Preprocessing

The features in the received signal that can be employed in the classification process are highlighted by the CWT. For a signal of length \(N\), the CWT generates an \(M\times N\) matrix of coefficients, where \(M\) represents the number of scales [24]. The scales are determined automatically using the wavelet's energy distribution in frequency and time [24].

The CWT MATLAB function is used to pre-process the signals in order to enhance key transmitter properties in the received signal. After the signal has been transformed, the resultant 2D matrix of size \(M\times N\) for each sample is stacked together and sent to the CNN classifier as a three-dimensional matrix. The output of the CWT process for this project is a \(38\times 100\) matrix of coefficients, whereas the input to the CWT is \(2\times 128\).
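A rough Python analogue of this step is sketched below using PyWavelets. The wavelet choice and the linear scale grid are our assumptions, since MATLAB's cwt selects its scales automatically (which is how the exact 38×100 output shape arises).

```python
import numpy as np
import pywt

def cwt_features(iq_frame, n_scales=38):
    """Turn one received frame (128 complex samples, i.e. the 2x128 I/Q pair)
    into a scales-by-time matrix of wavelet-coefficient magnitudes."""
    scales = np.arange(1, n_scales + 1)                   # assumed scale grid
    coeffs, _ = pywt.cwt(iq_frame.real, scales, "morl")   # (38, 128) coefficients
    return np.abs(coeffs)

# Random stand-in for one captured frame; real frames come from the testbed.
frame = (np.random.randn(128) + 1j * np.random.randn(128)) / np.sqrt(2)
features = cwt_features(frame)   # stacked over frames and fed to the CNN
```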
### _Datasets for Interference Classification_
#### IV-B1 Radio Frequency Interference Dataset
In this work, we use publicly available wireless interference datasets to evaluate our approach. The RFI dataset used in this study was created by the authors of [19]. They created the dataset by combining a signal of interest (SOI) with three types of jammers (continuous-wave interference (CWI), multi-continuous-wave interference (MCWI), and chirp interference (CI)) at different signal-to-noise ratios (SNRs).
#### IV-B2 CRAWDAD Dataset
The second dataset used in this paper was obtained from the Community Resource for Archiving Wireless Data at Dartmouth (CRAWDAD) website. This dataset, provided by Schmidt et al. [3], includes traces of IEEE 802.11b/g, IEEE 802.15.4, and Bluetooth packet transmissions with different SNRs in the baseband [25], as well as frequency offsets in the baseband [9]. There are a total of 15 distinct classes, 10 of which are IEEE 802.15.1 devices, 3 of which are IEEE 802.11 devices, and the remaining 2 are IEEE 802.15.4 devices.
## V Results
In this section we discuss the results from the model complexity and interference classification studies. Model complexity studies are important since they help to understand the relationship between classification performance and the number of parameters used by a deep learning algorithm (a CNN in this paper). It is important to note that generating our own dataset to study model complexity allows us to vary different conditions in the data generating process, which cannot be done with publicly available datasets.
### _Model Complexity_
This section examines the performance of the various CNN models across different device categories. When the test accuracy is significantly lower than the training accuracy, it suggests that the algorithm performs well on the training set but poorly on the test set, indicating overfitting. In order for a deep learning algorithm to perform well on new, unseen data, it is important for the training and test accuracies to be similar; this indicates that the algorithm has learned to generalize well.

TABLE III: Comparison of Training and Test Accuracy for All Model Complexities and Device Categories

Homogeneous classification (USRP 2921):

| Dataset size | Simple train | Simple test | Medium train | Medium test | Complex train | Complex test |
|---|---|---|---|---|---|---|
| 16 | 1.00 | 0.407 | 1.00 | 0.406 | 1.00 | 0.408 |
| 64 | 0.996 | 0.545 | 1.00 | 0.563 | 1.00 | 0.568 |
| 256 | 1.00 | 0.660 | 1.00 | 0.688 | 1.00 | 0.694 |
| 1024 | 0.971 | 0.744 | 1.00 | 0.784 | 1.00 | 0.788 |
| 4096 | 0.947 | 0.807 | 0.99 | 0.836 | 1.00 | 0.829 |
| 8192 | 0.899 | 0.838 | 0.986 | 0.856 | 0.99 | 0.842 |
| 16384 | 0.88 | 0.860 | 0.975 | 0.862 | 0.99 | 0.876 |

Homogeneous classification (USRP B200):

| Dataset size | Simple train | Simple test | Medium train | Medium test | Complex train | Complex test |
|---|---|---|---|---|---|---|
| 16 | 1.00 | 0.502 | 1.00 | 0.492 | 1.00 | 0.544 |
| 64 | 0.988 | 0.567 | 1.00 | 0.608 | 1.00 | 0.591 |
| 256 | 0.996 | 0.623 | 1.00 | 0.658 | 1.00 | 0.652 |
| 1024 | 0.955 | 0.663 | 1.00 | 0.709 | 1.00 | 0.722 |
| 4096 | 0.900 | 0.806 | 0.995 | 0.821 | 1.00 | 0.826 |
| 8192 | 0.864 | 0.830 | 0.984 | 0.837 | 0.99 | 0.858 |
| 16384 | 0.856 | 0.843 | 0.971 | 0.860 | 0.97 | 0.870 |

Homogeneous classification (Adalm Pluto SDR):

| Dataset size | Simple train | Simple test | Medium train | Medium test | Complex train | Complex test |
|---|---|---|---|---|---|---|
| 16 | 0.859 | 0.369 | 1.00 | 0.437 | 1.00 | 0.407 |
| 64 | 0.988 | 0.512 | 1.00 | 0.548 | 1.00 | 0.450 |
| 256 | 0.939 | 0.641 | 1.00 | 0.633 | 1.00 | 0.579 |
| 1024 | 0.862 | 0.700 | 0.993 | 0.689 | 1.00 | 0.673 |
| 4096 | 0.820 | 0.718 | 0.959 | 0.699 | 0.99 | 0.706 |
| 8192 | 0.775 | 0.736 | 0.901 | 0.738 | 0.96 | 0.718 |
| 16384 | 0.772 | 0.747 | 0.865 | 0.753 | 0.85 | 0.754 |

Heterogeneous classification:

| Dataset size | Simple train | Simple test | Medium train | Medium test | Complex train | Complex test |
|---|---|---|---|---|---|---|
| 16 | 0.96 | 0.762 | 1.00 | 0.842 | 1.00 | 0.816 |
| 64 | 1.00 | 0.836 | 1.00 | 0.844 | 1.00 | 0.841 |
| 256 | 0.997 | 0.883 | 1.00 | 0.885 | 1.00 | 0.895 |
| 1024 | 0.99 | 0.913 | 1.00 | 0.920 | 1.00 | 0.919 |
| 4096 | 0.97 | 0.937 | 0.99 | 0.954 | 1.00 | 0.952 |
| 8192 | 0.98 | 0.955 | 0.986 | 0.958 | 1.00 | 0.968 |
| 16384 | 0.98 | 0.97 | 0.975 | 0.970 | 1.00 | 0.972 |
Table III compares the training accuracy to the test accuracy for all the classification tasks. At smaller dataset sizes, all the models tend to overfit significantly. However, as the dataset size increases, the algorithms tend to overcome overfitting issues. For most of the classification categories, the overfitting problem is significantly reduced, with the difference between the training and test accuracy being within \(5\%\). Overfitting often occurs when the CNN is complex, with multiple layers and many nodes in the hidden layer. To prevent overfitting, regularization techniques such as dropout [26] and early stopping [27] can be used.
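As a concrete illustration, a minimal Python/Keras sketch of how dropout [26] and early stopping [27] might be wired into a small CNN of the kind studied here; the layer sizes, placeholder data, and class count are assumptions for illustration, not the paper's exact architectures:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

# placeholder data standing in for stacked CWT scalograms (38x100, 2 channels)
num_classes = 4
x_train = np.random.randn(256, 38, 100, 2).astype("float32")
y_train = np.random.randint(num_classes, size=256)

model = models.Sequential([
    layers.Input(shape=(38, 100, 2)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),  # dropout regularization [26]
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# early stopping [27]: halt training once validation loss stops improving
stopper = callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                  restore_best_weights=True)
model.fit(x_train, y_train, validation_split=0.2,
          epochs=100, callbacks=[stopper])
```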
Figure 3 demonstrates that, for most transmitters, the simple CNN performs similarly to the medium and complex CNNs, despite having significantly fewer parameters. In fact, Figure 3(d) shows that the performance of the simple CNN is comparable to that of the complex CNNs for heterogeneous transmitters beyond a dataset size of 100. While more complex or sophisticated CNN models may offer some benefits, there must be a balance between the network's generalization ability and its complexity. _These findings suggest that it is important for the deep learning model to be both simple and robust; this is especially true for resource-constrained applications_.
### _Interference Classification_
Accurate classification of interference sources is crucial for interference suppression or mitigation. Using a pre-trained ResNet18 model, we achieved a \(97.8\%\) accuracy on the RFI dataset, as shown in Table II. Figure 2(a) shows the confusion matrix for the interference classes, indicating excellent performance on this comparatively easy classification task. For the CRAWDAD dataset, the classification performance was about \(80\%\) after 25 epochs, as shown in the confusion matrix in Figure 2(b).
As discussed in Section V-A, while more complex CNN models tend to perform better in classification tasks, they are typically prone to overfitting. Overfitting occurs when a deep learning model memorizes the training dataset, leading to high training accuracy but low test or validation accuracy. This can be seen in the results for the Pluto SDR in both Table III and Figure 3(c). These results further emphasize the importance of our findings from the model complexity study discussed in Section V-A. As previously mentioned, a less complex model can often perform as well as a more complex one without the risk of overfitting. Therefore, it is important to use models that are neither too complex nor too simple. The right level of complexity can be determined by using just enough 2D convolution layers and filters to achieve good performance, starting with a simpler model and gradually increasing complexity as needed; this helps to prevent both underfitting and overfitting.
## VI Conclusion
In conclusion, this paper has thoroughly examined the use of CNN for interference classification. Our results demonstrate that the CNN model is capable of accurately classifying different interference sources, as shown in the two datasets we used. Our study found that, while medium and complex CNN classifiers performed slightly better than the simple classifier, the difference in performance was not significant. This is an important finding since resource-constrained devices can easily leverage the simpler models. When designing deep learning models, the goal is to create models that can be applied to new data. Simple models are more likely to be able to do
Fig. 2: Confusion Matrix for CNN classification of the RFI and CRAWDAD Datasets.
this because they are less prone to overfitting, which is a common issue with complex models. Different classification tasks, such as homogeneous and heterogeneous classification, may require different amounts of data and different levels of model complexity. In our study, the training dataset size and the number of nodes in the hidden layer had the greatest impact on CNN model performance. While the size of the dataset is important, we also note that the quality of the training dataset can significantly impact the performance of CNN classifiers.
## Acknowledgment
This work was supported in part by the U.S. Air Force Research Laboratory (AFRL) under Grant FA8750-20-2-0504, in part by the Lockheed Martin Corporation under Grant M16-005-RPP010, and in part by the National Science Foundation under Grant CNS-1564148.
|
2308.01643 | I am a global citizen. Or am I not? International Business Schools
students and Global Citizenship unified framework & a scoping literature
review of the last decade (2013-2022) | This review examines the scientific articles of the last decade, approaching
the subject through the methodology of the scoping literature review. Starting
with the Boolean search global citizens AND education AND (international
business OR international business school) in the ScienceDirect, Emerald, and
Scopus databases, the review resulted in only scientific journal articles,
strictly targeted at tertiary education ONLY of international business schools
and ONLY in those articles that study global citizenship. For reasons of
up-to-date knowledge, the present literature was content with the final decade.
A total of 13 articles are recorded as a result of the aforementioned Boolean
search from a total of 216 articles identified in the first phase of the
search. The results will help the researchers to acquire the required knowledge
base for their research, the academics to incorporate new methods in their
teaching and the approach of their students, and the policymakers to adapt the
schools curricula according to the data from the articles present in the
literature review. | Nikolaos Misirlis | 2023-08-03T09:20:21Z | http://arxiv.org/abs/2308.01643v1 | I am a global citizen. Or am I not? International Business Schools' students and Global Citizenship unified framework & a scoping literature review of the last decade (2013-2022)
###### Abstract
"I am not an Athenian or a Greek, but a citizen of the world" (Socrates, 399 BC). Almost two and a half millennia ago, Socrates, the Greek philosopher, claimed a global citizenship above his Greek or Athenian one.
Today, a citizen of the world is considered to be someone who recognizes and understands the world in its broadest sense, as well as his or her place within it. The citizen of the world takes an active role in the world, recognizing diversity and striving for a better, more just, and more peaceful planet. Today, the need to train young students to become not only worthy scientists but also active citizens of the world is even more pressing. More specifically, business school students now target 7 billion potential customers rather than their narrow geographic circle. The need for such an education is therefore even greater for business students.
This review examines the scientific articles of the last decade, approaching the subject through the methodology of the scoping literature review. Starting with the Boolean search "global citizens" AND "education" AND ("international business" OR "international business school?") in the ScienceDirect, Emerald, and Scopus databases, the review retained only scientific journal articles, strictly targeted at tertiary education ONLY at international business schools and ONLY those articles that study global citizenship. To keep the knowledge base current, the present review was confined to the final decade.
A total of 13 articles are recorded as a result of the aforementioned Boolean search from a total of 216 articles identified in the first phase of the search. The results will help the researchers to acquire the required knowledge base for their research, the academics to incorporate new methods in their teaching and the approach of their students, and the policymakers to adapt the schools' curricula according to the data from the articles present in the literature review.
Global citizen, university students, framework, literature review
## 1 Introduction
Global citizenship refers to the idea that individuals have a sense of belonging to a broader global community and have a responsibility to contribute to the betterment of the world as a whole. It involves recognizing the interdependence of different countries and cultures, and acknowledging that our actions have an impact on others beyond our immediate surroundings [1-3].
Global citizenship in students involves fostering a sense of responsibility towards the world and its inhabitants. It encourages students to think beyond their immediate surroundings and to recognize their role in creating a more just and sustainable future for all [4].
In practice, global citizenship education in schools and universities aims to equip students with the knowledge, skills, and values needed to address global challenges and promote positive change. This includes developing critical thinking skills to analyze complex global issues, as well as promoting empathy and intercultural understanding. In particular, students of international schools around the world feel the need to become global citizens all the more: to act, work, and live as global citizens [5]. Nowadays, the horizons of every young worker are not limited to the narrow framework of one country but extend to the global level. But how much are today's students really citizens of the world? How much do they train for it? How ready are academic teachers to prepare young students for the cosmos?
## 2 Methodology
In an effort to understand this framework, this paper presents a scoping literature review of academic articles from the last decade. Such reviews help academics start building knowledge around a topic [6]. The research was limited to this time period in order to record only the most up-to-date results of science. The main question of this research is the following: What are the main findings of the scientific community regarding the readiness of students and their institutions for global citizenship?
The result of the research is a complete table of the scientific articles included and analyzed, a summary of each of them, and a final framework of global citizenship focused on tertiary students of international business schools.
The search for articles was carried out through Boolean queries in the ScienceDirect, Emerald, and Scopus databases. The diagram below breaks down the Boolean phrase used ["global citizens" AND "education" AND ("international business" OR "international business school?")].
After the first screening, the articles were studied, analyzed, and categorized based on the year and methodology used. The following table summarizes these findings. Most importantly, besides summarizing all the articles under consideration in a single view, the table reports the main findings of the articles. Keyword and findings analyses were performed in order to create Figure 2. This figure represents a unified framework of global citizenship for students in international business schools. With this figure, researchers, business practitioners, and academics have a clear view of the components they need to incorporate into their research, policies, and academic curricula in order to better prepare the future global citizen/student.
## 3 Conclusions
The importance of global citizenship is today greater than ever. We want students who are respectful, responsible, collaborative, and embracing of diversity, in order to build a better future for all. Our students must be individuals who think beyond their personal self-interest, considering the needs of others, both in their own communities and around the world. By working towards this approach, universities will eventually help create a better, more peaceful, and more sustainable world for future generations.
|
2301.02641 | Can the double-slit experiment distinguish between quantum
interpretations? | Despite the astonishing successes of quantum mechanics, due to some
fundamental problems such as the measurement problem and quantum arrival time
problem, the predictions of the theory are in some cases not quite clear and
unique. Especially, there are various predictions for the joint spatiotemporal
distribution of particle detection events on a screen, which are derived from
different formulations and interpretations of the quantum theory. Although the
differences are typically small, our studies show that these predictions can be
experimentally distinguished by an unconventional double-slit configuration,
which is realizable using present-day single-atom interferometry. This
experiment would enrich our understanding of the foundations of quantum
mechanics. | Ali Ayatollah Rafsanjani, MohammadJavad Kazemi, Alireza Bahrampour, Mehdi Golshani | 2023-01-06T18:35:58Z | http://arxiv.org/abs/2301.02641v2 | # Can the double-slit experiment distinguish between quantum interpretations?
###### Abstract
Despite the astonishing successes of quantum mechanics, due to some fundamental problems such as the measurement problem and quantum arrival time problem, the predictions of the theory are in some cases not quite clear and unique. Especially, there are various predictions for the joint spatiotemporal distribution of particle detection events on a screen, which are derived from different formulations and interpretations of the quantum theory. Although the differences are typically small, our studies show that these predictions can be experimentally distinguished by an unconventional double-slit configuration, which is realizable using present-day single-atom interferometry.
## I Introduction
In textbook quantum theory, time is a parameter in the Schrodinger equation, not a self-adjoint operator, hence there is no unique and unambiguous way to compute the temporal probability distribution of events from first principles (i.e. the Born rule) [1]. Nonetheless, since clocks exist and time measurements are routinely performed in quantum experiments [2; 3], a complete quantum theory must be able to predict the temporal statistics of detection events. For example, in the famous double-slit experiment, each particle is detected at a _random time_, just as at a _random position_, on the detection screen [4; 5; 6; 7; 8]. Therefore, one can ask: What is the position-time _joint_ probability density \(\mathbb{P}(\mathbf{x},t)\) on the screen? Although this question is very old [9; 10; 11; 12], it is still open [13; 14; 15; 16; 17; 18]. In fact, the ambiguity in the arrival time distribution even prevents a clear prediction of the cumulative arrival position distribution, \(\int\mathbb{P}(\mathbf{x},t)dt\), which is typically measured in a non-time-resolved double-slit experiment [19].
Nonetheless, usual experiments are performed in the far-field (or scattering) regime, where a semiclassical analysis is often sufficient [19; 13]. In this analysis, it is assumed that particles move along classical trajectories, and the arrival time distribution is computed using the quantum momentum distribution [20; 8; 21]. However, because of the _quantum backflow_ effect [22], even in free space, the quantum mechanical time evolution of the position probability density is not consistent with the underlying uniform motion assumption, especially in near-field interference phenomena [23]. In fact, due to recent progress in ultra-fast detector technology (e.g. see [24; 25; 26; 27]), it will soon be possible to investigate the near-field regime, where the semiclassical approximation breaks down and a deeper analysis is demanded [28; 13; 29].
To remedy this problem, based on various interpretations and formulations of quantum theory, several attempts have been made to introduce a suitable arrival time distribution. On the one hand, according to the (generalized) standard canonical interpretation, the arrival distribution is considered as a generalized observable, which is described by a positive-operator-valued measure (POVM), satisfying some required symmetries [30; 31; 11; 10]. On the other hand, in the realistic-trajectory-based formulations of quantum theory, such as the Bohmian mechanics [32], Nelson stochastic mechanics [33], and many interacting worlds interpretation [34], the arrival time distribution could be obtained from particles trajectories [35; 36; 7; 18]. Moreover, in other approaches, the arrival time distribution is computed via phenomenological modeling of the detection process, such as the (generalized) path integral formalism in the presence of an absorbing boundary [37; 38; 39; 12], Schrodinger equation with complex potential or absorbing boundary [40; 41; 42; 43; 44], and so on [45; 46; 47].
In principle, the results of these approaches are different. However, in most of the experimental situations, the differences are typically slight, and so far as we know, in the situation where differences are significant, none of the proposals have been backed up by experiments in a strict manner [36; 8]. An experiment that can probe these differences would undoubtedly enrich our understanding of the foundations of quantum mechanics. The purpose of the present paper is to make it evident, via numerical simulations, that the famous two-slit experiment could be utilized to investigate these differences if we simply use a horizontal screen instead of a vertical one: see Fig. 1. Using current laser cooling and magneto-optical trapping technologies, this type of experiment can be realized by Bose-Einstein condensates, as a controllable source of coherent matter waves [48; 49; 50]. Moreover, our numerical study shows that the required space-time resolution in particle detection is achievable using fast single-atom detectors, such as the recent delay-line detectors described in [51; 52] or the detector used in [53; 6].
The structure of this paper is as follows: In Section II, we study the main proposed _intrinsic_ arrival distributions. Then, in Section III, we compare them in the double-slit setup with vertical and horizontal screens and in different detection schemes. In Section IV, we study the screen back-effect, and we summarize in Section V.
## II "Intrinsic" arrival distributions
In this section, we first review the semi-classical approximation and then scrutinize two main proposed intrinsic arrival time distributions [16; 36] and their associated screen observables. In these approaches, the effect of the detector's presence on the wave function evolution, before particle detection, is not considered. We discuss this effect in section IV.
### Semiclassical approximation
As mentioned, in experiments in which the detectors are placed far away from the support of the initial wave function (i.e. the far-field regime), the semiclassical arrival time distribution is routinely used to describe the particle time-of-flight [54; 55; 56; 21; 57]. In this approximation, it is assumed that particles move classically between preparation and measurement. In this approach, the arrival time randomness is understood as a result of the uncertainty of momentum, and so the arrival time distribution is obtained from the momentum distribution [13; 17; 58; 36]. In the one-dimensional case, the classical arrival time is given by
\[t=m(L-x_{0})/p_{0}, \tag{1}\]
which is applicable for a freely moving particle of mass \(m\) that at the initial time \(t\!=\!0\) had position \(x_{0}\) and momentum \(p_{0}\), arriving at a distant point \(L\) on a line. Hence, for a particle with the momentum wave function \(\tilde{\psi}_{0}(p)\), assuming \(\Delta x_{0}\!\ll\!|L-\langle x\rangle_{0}|\), the semiclassical arrival time distribution reads [58]
\[\Pi_{\rm SC}(t|x\!=\!L)=\frac{mL}{t^{2}}|\tilde{\psi}_{0}(mL/t)|^{2}. \tag{2}\]
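As a concrete numerical illustration of Eq. (2), a minimal Python sketch for a Gaussian momentum wave function; the parameter values loosely follow the helium-atom setup of Sec. III and are otherwise illustrative assumptions:

```python
import numpy as np

hbar = 1.054571817e-34
m = 6.64e-27        # metastable-helium mass (kg), as in Sec. III
L = 0.30            # distance of the detection point (m)
u = 3.0             # mean packet velocity (m/s)
sigma0 = 0.04e-6    # initial position spread (m)
sigma_p = hbar / (2 * sigma0)   # minimum-uncertainty momentum spread

def mom_density(p):
    """|psi_0(p)|^2 for a Gaussian packet centred at p0 = m*u."""
    return np.exp(-(p - m * u) ** 2 / (2 * sigma_p ** 2)) / (np.sqrt(2 * np.pi) * sigma_p)

t = np.linspace(0.05, 0.20, 4000)   # times around the classical value L/u = 0.1 s
pi_sc = (m * L / t ** 2) * mom_density(m * L / t)   # Eq. (2)
print("normalisation:", np.trapz(pi_sc, t))          # ~1
print("mean arrival time (s):", np.trapz(t * pi_sc, t))
```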
This analysis could be generalized in three-dimensional space. Then, the distribution of arrival time at a screen surface \(\mathbb{S}\) is given by [36]
\[\Pi_{\rm SC}(t|\mathbf{x}\!\in\!\mathbb{S})=\frac{m^{3}}{t^{4}}\int_{\mathbb{S }}|\tilde{\psi}_{0}(m\mathbf{x}/t)|^{2}\,\mathbf{x}\cdot d\mathbf{S}, \tag{3}\]
where \(d\mathbf{S}\) is the surface element directed outward. The other main distribution of interest is the joint position-time probability distribution on the screen, also called a "screen observable" [11]. Using the definition of conditional probability, the joint probability of finding the particle in \(dS\) and in a time interval \([t,t\!+\!dt]\) can be written as \(\mathbb{P}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})dSdt=[\Pi(t|\mathbf{x}\!\in\!\mathbb{S})dt]\times[\mathbb{P}(\mathbf{x}|\mathbf{x}\!\in\!\mathbb{S},t)dS]\,.\) In this regard, one can use the fact that \(\psi_{t}(\mathbf{x})\) is the state of the system, conditioned on the time being \(t\) in the Schrodinger picture. This implies that \(|\psi_{t}(\mathbf{x})|^{2}\) is the position probability density conditioned on a specific time \(t\) [14; 59; 15]. Therefore, in the semiclassical approximation, the joint spatiotemporal probability density reads
\[\mathbb{P}_{\rm SC}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})=N_{\rm SC}\Pi_ {SC}(t|\mathbf{x}\!\in\!\mathbb{S})\left|\psi_{t}(\mathbf{x})\right|^{2} \tag{4}\]
in which \(N_{\rm SC}\!\equiv\!1/\int_{\mathbb{S}}dS\left|\psi_{t}(\mathbf{x})\right|^{2}\) is the normalization constant, and \(dS\!=\!\mathbf{n}\!\cdot\!d\mathbf{S}\), where \(\mathbf{n}\) is the outward unit normal vector at \(x\!\in\!\mathbb{S}\).
### "Standard" approach
The first attempts to investigate the arrival time problem, based on the standard rules of quantum theory, were made at the beginning of the 1960s by Aharonov and Bohm [60], and also Paul [61]. This approach starts with a symmetric quantization of classical arrival time expression (1), as follows [62]:
\[\hat{t}_{AB}=mL\,\hat{p}\,^{-1}-\frac{m}{2}\,(\hat{p}\,^{-1}\hat{x}\,+\,\hat{x }\,\hat{p}\,^{-1}), \tag{5}\]
where \(\hat{x}\) and \(\hat{p}\!=\!-i\hbar\,\partial/\partial x\) are the usual position and momentum operators, respectively, and \(\hat{t}_{AB}\) is called the Aharonov-Bohm time operator. This operator satisfies the canonical commutation relation with the free Hamiltonian operator, \([\hat{t}_{AB},\hat{p}^{2}/2m]\!=\!i\hbar\), which has been used to establish the energy-time uncertainty relation [63; 64]. However, although \(\hat{t}_{AB}\) is Hermitian (or symmetric, in the mathematics literature), it is not a self-adjoint operator [65]--a fact that is in agreement with Pauli's theorem [1]. The origin of this non-self-adjointness can be understood as a result of the singularity at \(p\!=\!0\) in the momentum representation, \(\hat{t}_{AB}\to(i\hbar m/2)(p^{-2}-2p^{-1}\partial_{p})\) [65]. Nevertheless, although the (generalized) eigenfunctions of \(\hat{t}_{AB}\) are not orthogonal, they constitute an overcomplete set and provide a POVM, which is used to define the arrival-time distribution as follows [63; 65]:
\[\Pi_{\rm STD}(t|x\!=\!L)=\frac{1}{2\pi\hbar}\sum_{\alpha=\pm}\left|\int_{-\infty}^{\infty}\!dp\ \theta(\alpha p)\sqrt{\frac{|p|}{m}}\ \tilde{\psi}_{t}(p)\,e^{\frac{i}{\hbar}pL}\right|^{2}. \tag{6}\]
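A minimal numerical sketch of Eq. (6) by direct quadrature, in dimensionless units \(\hbar\!=\!m\!=\!1\) and for an illustrative Gaussian packet (all parameter values are assumptions, chosen so that the oscillatory integrand is well resolved on the grid):

```python
import numpy as np

# toy units hbar = m = 1; Gaussian packet with mean momentum p0 and width sp
L, p0, sp = 10.0, 2.0, 0.5

p = np.linspace(-8.0, 8.0, 8001)
psi0 = (2 * np.pi * sp**2) ** -0.25 * np.exp(-(p - p0) ** 2 / (4 * sp**2))

def pi_std(t):
    """Kijowski distribution of Eq. (6) at time t, by direct quadrature."""
    psit = psi0 * np.exp(-1j * p**2 * t / 2)   # free evolution in momentum space
    total = 0.0
    for sign in (+1, -1):                      # the alpha = +/- terms
        integrand = (sign * p > 0) * np.sqrt(np.abs(p)) * psit * np.exp(1j * p * L)
        total += np.abs(np.trapz(integrand, p)) ** 2
    return total / (2 * np.pi)

ts = np.linspace(0.1, 15.0, 400)
dist = np.array([pi_std(t) for t in ts])
print("normalisation ~", np.trapz(dist, ts))   # close to 1 for this packet
```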
The three-dimensional generalization of (6) was derived by Kijowski [10] via an axiomatic approach. The assumed axioms are implied by the principles of probability theory, the mathematical structure of standard quantum mechanics, and Galilei invariance [78]. Based on these axioms, Kijowski constructed the following arrival time distribution for a free particle that passes through a two-dimensional plane \(\mathbb{S}\):
\[\Pi_{\rm STD}(t|\mathbf{x}\!\in\!\mathbb{S})=\frac{1}{2\pi\hbar}\sum_{\alpha=\pm}\int_{\mathbb{R}^{2}}\!d^{2}\mathbf{p}_{\parallel}\\ \times\left|\int_{-\infty}^{\infty}\!\!dp_{\perp}\,\theta(\alpha\mathbf{p}\!\cdot\!\mathbf{n})\sqrt{\frac{|\mathbf{p}_{\perp}|}{m}}\ \tilde{\psi}_{t}(\mathbf{p})\,e^{\frac{i}{\hbar}\mathbf{x}\cdot\mathbf{p}_{\perp}}\right|^{2}\!, \tag{7}\]
where \(\mathbf{p}_{\perp}\!\equiv\!(\mathbf{p}\!\cdot\!\mathbf{n})\mathbf{n}\) and \(\mathbf{p}_{\parallel}\!\equiv\!\mathbf{p}-\mathbf{p}_{\perp}\) are the perpendicular and parallel components of \(\mathbf{p}\) relative to \(\mathbb{S}\), respectively, and \(\mathbf{n}\) is the outward normal of the plane \(\mathbb{S}\). In fact, Kijowski first proves the above expression for wave functions whose supports lie in the positive (or negative) values of \(p_{\perp}\). He then _uniquely_ derives the following self-adjoint variant of the (three-dimensional version of the) Aharonov-Bohm arrival time operator, by demanding that the time operator be self-adjoint and lead to (7) for these special cases via the Born rule [10; 78]:
\[\hat{t}_{L}=\mathrm{sgn}(\hat{p}_{\perp})\left[mL\hat{p}_{\perp}^{-1}-\frac{m }{2}(\hat{p}_{\perp}^{-1}\hat{x}_{\perp}+\hat{x}_{\perp}\hat{p}_{\perp}^{-1}) \right], \tag{8}\]
where \(\hat{x}_{\perp}\equiv\hat{\mathbf{x}}\cdot\mathbf{n}\), and \(L\) (\(\equiv\mathbf{x}\cdot\mathbf{n}\)) represents the distance between the detection surface and the origin [29]. Finally, for an arbitrary wave function, equation (7) can be derived from this self-adjoint operator. Moreover, considering this time operator, together with the components of the position operator in the detection plane, \(\hat{\mathbf{x}}_{\parallel}\equiv\hat{\mathbf{x}}-(\hat{\mathbf{x}}\cdot\mathbf{n})\mathbf{n}\), Kijowski obtains the following expression for the joint position-time distribution on the detection screen via the Born rule [78]:
\[\mathbb{P}_{\rm STD}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})=\sum_{\alpha= \pm}|\psi_{\mathbb{S}}^{\alpha}(\mathbf{x},t)|^{2}, \tag{9}\]
in which \(\psi_{\mathbb{S}}^{\pm}(\mathbf{x},t)\) is the wave function in the basis of the joint eigenstates of the operators \(\hat{t}_{L}\) and \(\hat{\mathbf{x}}_{\parallel}\). Explicitly,
\[\psi_{\mathbb{S}}^{\pm}(\mathbf{x},t)=\frac{1}{(2\pi\hbar)^{3/2}}\int d^{3} \mathbf{p}\,\theta(\pm\mathbf{p}.\mathbf{n})\,\sqrt{\frac{|\mathbf{p}_{\perp }|}{m}}\tilde{\psi}_{t}(\mathbf{p})e^{\frac{i}{\hbar}\mathbf{x}.\mathbf{p}}. \tag{10}\]
Note that the arrival time distribution (7) can be reproduced by integrating (9) over the whole screen plane. The joint space-time probability distribution (9), and its generalization for particles with arbitrary spin, have also been derived by Werner in another axiomatic manner [11]. Moreover, it is easy to see that the results (7) and (9) can be obtained from a regularized version of the (three-dimensional generalization of the) Aharonov-Bohm time operator, following the same procedure used by Grot, Rovelli and Tate in the one-dimensional case [64].
### Quantum flux and Bohmian approach
Inspired by classical intuition, another natural candidate for screen observables is the component of the quantum probability current perpendicular to the screen surface, \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\), where
\[\mathbf{J}(\mathbf{x},t)=\frac{\hbar}{m}\operatorname{Im}\left[\psi_{t}^{*}(\mathbf{x})\mathbf{\nabla}\psi_{t}(\mathbf{x})\right], \tag{11}\]
and \(\mathbf{n}\) is the outward normal to the screen \(\mathbb{S}\). This proposal is applicable for a particle in a generic external potential and for a generic screen surface, not necessarily an infinite plane. There have been several attempts to derive this proposal in various approaches, such as Bohmian mechanics for the scattering case in [79], the decoherent histories approach in [80] as an approximation, or in [81] as an exact formula using the concept of extended probabilities, and so on [45; 82; 46]. However, even if the wave function contains only momentum components in the same direction as \(\mathbf{n}\), \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\) can be negative due to the _backflow_ effect [22]. This property is incompatible with the standard notion of probability. Nevertheless, this problem can be treated from the Bohmian point of view: using Bohmian trajectories, it can be shown that the positive and negative values of \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\) correspond to particles that reach the point \(\mathbf{x}\) on \(\mathbb{S}\) in the same direction as \(\mathbf{n}\) or opposite to it, respectively [83; 84]. In this regard, using Bohmian mechanics in one dimension, Leavens demonstrated that the distribution of times of arrival at \(x\!=\!L\) from both sides can be obtained from the absolute value of the probability flux as [35; 85]
\[\Pi_{\rm QF}(t|x\!=\!L)=\frac{|J(L,t)|}{\int dt\,|J(L,t)|}, \tag{12}\]
which is free from the aforementioned problem.
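A self-contained sketch of Eq. (12) for a free Gaussian packet in dimensionless units \(\hbar\!=\!m\!=\!1\) (the packet parameters are illustrative assumptions); the current of Eq. (11) is evaluated by a central finite difference:

```python
import numpy as np

sig0, u, L = 1.0, 2.0, 10.0   # toy units: hbar = m = 1

def psi(x, t):
    """Free 1-D Gaussian packet of initial width sig0 and velocity u, cf. Eq. (18)."""
    st = sig0 * (1 + 1j * t / (2 * sig0**2))
    return ((2 * np.pi * st**2) ** -0.25
            * np.exp(-(x - u * t) ** 2 / (4 * sig0 * st))
            * np.exp(1j * u * (x - u * t / 2)))

def current(x, t, eps=1e-5):
    """Probability current J of Eq. (11), via a central finite difference."""
    dpsi = (psi(x + eps, t) - psi(x - eps, t)) / (2 * eps)
    return np.imag(np.conj(psi(x, t)) * dpsi)

t = np.linspace(0.01, 15.0, 3000)
flux = np.abs(current(L, t))
pi_qf = flux / np.trapz(flux, t)   # Eq. (12)
print("mean arrival time ~", np.trapz(t * pi_qf, t), "(classical value:", L / u, ")")
```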
The three-dimensional justification of \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\) as an operational formulation of the arrival time model has been made in [82]. Also, the generalization of (12) for arrival to the surface \(\mathbb{S}\) is given by [16; 13; 7; 86]
\[\Pi_{\rm QF}(t|\mathbf{x}\!\in\!\mathbb{S})=\frac{\int_{\mathbb{S}}dS|\mathbf{ J}(\mathbf{x},t)\!\cdot\!\mathbf{n}|}{\int dt\int_{\mathbb{S}}dS|\mathbf{J}( \mathbf{x},t).\mathbf{n}|}, \tag{13}\]
with \(dS\!=\!\mathbf{n}\!\cdot\!d\mathbf{S}\) the magnitude of the surface element \(d\mathbf{S}\) which is directed outward at \(\mathbf{x}\!\in\!\mathbb{S}\). To illustrate (13) and to generalize it to the case of joint arrival distribution, we can use the Bohmian point of view. In this theory, each particle has a specific trajectory, depending on the initial position, and so the rate of passing particles through an area element \(d\mathbf{S}\) centered at \(\mathbf{x}\!\in\!\mathbb{S}\), in the time interval between \(t\) and \(t+dt\), is proportional to \(\rho_{t}(\mathbf{x})|\mathbf{v}(\mathbf{x},t)\!\cdot\!d\mathbf{S}|dt\), where \(\mathbf{v}(\mathbf{x},t)\!=\!\mathbf{J}(\mathbf{x},t)/|\psi_{t}(\mathbf{x})|^{2}\) is the Bohmian velocity of the particle. Hence, using quantum equilibrium condition [87; 88], \(\rho_{t}(\mathbf{x})\!=\!|\psi_{t}(\mathbf{x})|^{2}\), and accomplishing normalization, the joint arrival distribution could be represented by the absolute value of the current density as
\[\mathbb{P}_{\mathrm{QF}}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})=\frac{|\mathbf{J} (\mathbf{x},t)\!\cdot\!\mathbf{n}|}{\int dt\int_{\mathbb{S}}dS|\mathbf{J}( \mathbf{x},t)\!\cdot\!\mathbf{n}|}. \tag{14}\]
Now, by integrating (14) over all \(\mathbf{x}\!\in\!\mathbb{S}\), we arrive at the three-dimensional arrival time distribution (13) for the screen surface \(\mathbb{S}\). It should be noted that Eq. (14) does not necessarily hold for an ensemble of classical particles, because a positive or negative current at a space-time point, \((\mathbf{x},t)\), can in general have contributions from all the particles arriving at \(\mathbf{x}\) at \(t\) from any direction. Nonetheless, since the Bohmian velocity field is single-valued, the particle trajectories cannot intersect each other at any point of space-time, and so only a single trajectory contributes to the current density \(\mathbf{J}(\mathbf{x},t)\) at the particular space-time point \((\mathbf{x},t)\). Moreover, this fact implies that when \(\mathbf{v}(\mathbf{x},t)\cdot\mathbf{n}\!>\!0\) the particle has passed through the screen from the inside, and vice versa for \(\mathbf{v}(\mathbf{x},t)\cdot\mathbf{n}\!<\!0\). Hence, one can define the joint probability distribution for the time of arrival at each side of \(\mathbb{S}\) as
\[\mathbb{P}_{\mathrm{QF}}^{\pm}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})= \frac{\mathbf{J}^{\pm}(\mathbf{x},t)\!\cdot\!\mathbf{n}}{\int dt\int_{ \mathbb{S}}dS\ \mathbf{J}^{\pm}(\mathbf{x},t)\!\cdot\!\mathbf{n}}, \tag{15}\]
where \(\mathbf{J}^{\pm}(\mathbf{x},t)=\pm\,\theta(\pm\mathbf{J}\!\cdot\!\mathbf{n})\,\mathbf{J}(\mathbf{x},t)\). In addition, note that there may be some trajectories which cross \(\mathbb{S}\) more than once--_multi-crossing_ trajectories (see the typical Bohmian trajectory in Fig. 1). The derivation of Eq. (14) above was such that multi-crossing trajectories could contribute several times (see Fig. 2 (a)). However, one could instead treat the detection surface as a barrier that does not allow a particle that has crossed it to return inside (see Fig. 2 (c)). In this case, it is suggested to use the truncated current defined as
\[\tilde{\mathbf{J}}(\mathbf{x},t)\coloneqq\begin{cases}\mathbf{J}(\mathbf{x}, t)&\text{if $(\mathbf{x},t)$ is a first exit through $\mathbb{S}$}\\ 0&\text{otherwise}\end{cases} \tag{16}\]
where \((\mathbf{x},t)\) is a first exit event through the boundary surface \(\mathbb{S}\), if the trajectory passing through \(\mathbf{x}\) at time \(t\) leaves inside \(\mathbb{S}\) at this time, for the first time since \(t\!=\!0\)[13, 79, 89]. The limiting condition in (16), imposes that the joint probability distribution based on it should be computed numerically using trajectories:
\[\tilde{\mathbb{P}}_{\mathrm{QF}}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})= \frac{\tilde{\mathbf{J}}(\mathbf{x},t)\!\cdot\!\mathbf{n}}{\int dt\int_{ \mathbb{S}}dS\ \tilde{\mathbf{J}}(\mathbf{x},t)\!\cdot\!\mathbf{n}}. \tag{17}\]
Of course, the detection screen is not always a barrier-like surface (see Fig. 2 (b)), and one could assume that there is a point-like detector that allows the multi-crossing trajectories to contribute to the distribution, in which case we can use (14).
## III "Intrinsic" screen observable in two-slit experiment
In this section, we study the proposals discussed in the previous section for the double-slit experiment. We compare the results of these proposals for vertical and horizontal screens (see Fig. 1), and also for different detection schemes. The main motivation for the study of the horizontal screen is the non-classical motion of particles along the \(y\)-direction in the Bohmian perspective; see the typical Bohmian trajectory in Fig. 1. This behavior is due to the changing sign of the \(y\)-component of the probability current. It does not occur for the \(x\)-component of \(\mathbf{J}\), and consequently not for the Bohmian motion of a particle along the \(x\)-direction.
As shown in Fig. 1, the setup contains two identical slits at \(y\!=\!\pm s\), and screens are placed at \(x\!=\!L_{x}\) and \(y\!=\!L_{y}\) correspond to the vertical and horizontal screens, respectively. To avoid the mathematical complexity of Fresnel diffraction at the sharp-edge slits, it is supposed that the slits have soft edges that generate waves having identical Gaussian profiles in the \(y\)-direction. So, for each slit, we can take the wave function as an uncorrelated two-dimensional Gaussian wave packet, which in each dimension has the form
\[\psi_{G}^{(i)}(x,t)=(2\pi s_{t}^{2})^{-\frac{1}{4}}\exp\left\{-\frac{(x-x_{0}^{(i)}-u_{x}t)^{2}}{4\sigma_{0}s_{t}}\right\}\\ \times\exp\left\{\frac{i}{\hbar}mu_{x}\left(x-x_{0}^{(i)}-\frac{u_{x}t}{2}\right)\right\}\ \ (i=1,2), \tag{18}\]
with \(m\) the particle's mass, \(\sigma_{0}\) the initial dispersion,
Figure 1: Schematic double-slit experiment setup. The center of two slits is considered as the coordinate origin, and the vertical and horizontal screens are placed at \(x\!=\!L_{x}\) and \(y\!=\!L_{y}\), respectively. The dashed black line shows a typical Bohmian trajectory that arrives at the horizontal screen. A suitable single-particle detector, in addition to particle arrival position, can record the arrival time using a proper clock.
\(u_{x}\) the wave packet's velocity, \(x_{0}^{(i)}\) the initial position of the wave packet, in other words the location of the \(i\)-th slit, and \(s_{t}=\sigma_{0}(1+i\hbar t/(2m\sigma_{0}^{2}))\). Therefore, when the particle passes through the slits, we have the total wave function as
\[\psi(x,y,t)=\frac{1}{\sqrt{2}}[\psi_{G}^{(1)}(x,t)\psi_{G}^{(1)}(y,t)+\psi_{G}^ {(2)}(x,t)\psi_{G}^{(2)}(y,t)], \tag{19}\]
where superscripts (1) and (2) correspond to upper and lower slits, respectively. This form of Gaussian superposition state is commonly used in the literature [7, 90, 91, 92, 93] and is feasible to implement by quantum technologies because such a state could be produced and controlled readily [94, 95], even without using slits [49]. In this paper, we have chosen the metastable helium atom, with mass \(m=6.64\times 10^{-27}\) kg, as the interfering particle, and the parameters as \(s=10\,\mu\)m, \(\sigma_{x}=0.04\,\mu\)m, \(\sigma_{y}=0.5\,\mu\)m, \(u_{x}=3\) m/s, and \(u_{y}=0\) m/s. These values are feasible according to the performed experiments [96]. Moreover, the meta-stable helium atom could be detected with high efficiency because of its large internal energy [97, 52].
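A minimal Python sketch of Eqs. (18)-(19) with the parameter values quoted above; the factorization into separate \(x\)- and \(y\)-packets follows the text, while numerical grids and any downstream use are left open:

```python
import numpy as np

hbar, m = 1.054571817e-34, 6.64e-27       # metastable helium
s, sig_x, sig_y, ux = 10e-6, 0.04e-6, 0.5e-6, 3.0

def s_t(t, sig):
    """Complex width s_t = sigma0 * (1 + i*hbar*t / (2*m*sigma0^2))."""
    return sig * (1 + 1j * hbar * t / (2 * m * sig**2))

def psi_g(x, t, x0, u, sig):
    """One-dimensional Gaussian packet of Eq. (18)."""
    w = s_t(t, sig)
    return ((2 * np.pi * w**2) ** -0.25
            * np.exp(-(x - x0 - u * t) ** 2 / (4 * sig * w))
            * np.exp(1j * m * u * (x - x0 - u * t / 2) / hbar))

def psi(x, y, t):
    """Two-slit superposition of Eq. (19); slits located at y = +s and y = -s."""
    fx = psi_g(x, t, 0.0, ux, sig_x)      # common x-factor for both slits
    return fx * (psi_g(y, t, +s, 0.0, sig_y)
                 + psi_g(y, t, -s, 0.0, sig_y)) / np.sqrt(2)
```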
### Vertical screen
The arrival time distribution for the vertical screen placed at different distances from the two slits is shown in Fig. 3. As one can see, this distribution is the same for all methods, and the average arrival time is close to the corresponding quantity for classical uniform motion. To calculate the mean time of arrival at the screen, we use the arrival time distribution of each method presented in Sec. II, i.e., Eqs. (3), (7) and (13), and we have
\[\bar{t}_{\mathbb{S}}=\int_{0}^{\infty}\!\!dt\ \Pi(t|\mathbf{x}\!\in\!\mathbb{S})\,t, \tag{20}\]
as the mean arrival time at the surface \(\mathbb{S}\). Furthermore, we can compute the average arrival time to each point on the screen using the joint probability distribution as
\[\bar{t}_{\mathbf{x}}=\frac{\int_{0}^{\infty}dt\ \mathbb{P}(\mathbf{x},t| \mathbf{x}\!\in\!\mathbb{S})\,t}{\int_{0}^{\infty}dt\ \mathbb{P}(\mathbf{x},t| \mathbf{x}\!\in\!\mathbb{S})}. \tag{21}\]
This observable is depicted in Fig. 4-b for a vertical screen placed at \(L_{x}\!=\!300\) mm. Apparently, the results of the _standard_ and _quantum flux_ methods are the same, and similar to the one obtained in [7] via Nelson's mechanics. Nevertheless, they differ from the _semiclassical_ approximation. However, when the interference pattern is calculated by either method, we see that the predicted cumulative position distributions do not differ much from one another (Fig. 4-a). This observable can be calculated by using the joint distribution as
Figure 3: Arrival time distributions of particles that arrive at the vertical screen of the double-slit experiment at different screen distances.
Figure 2: Different schemes of particle detection on the screen surface \(\mathbb{S}\). In the Bohmian point of view, particles could have a recursive motion on surface \(\mathbb{S}\) and cross it more than once (e.g. see the trajectory that plotted in Fig. 1). Assuming different detector types, one can prob variant possible observables on the screen. In panel (a) a conceivable particle trajectory is depicted, which crosses \(\mathbb{S}\) three times. In this panel, a movable point-like detector is placed on \(\mathbb{S}\), which can survey the whole screen and detect particles that arrive only from one side, while in panel (b) a two-sided point detector is placed on \(\mathbb{S}\), which can move along it and detect particles that arrive from up and down. In addition, one can assume there is (c) an array of side-by-side detectors covering the entire screen surface \(\mathbb{S}\). The last configuration blocks the trajectory and does not allow the crossed particle to return. In this scheme, we only detect _first-arrivals_ from one side.
\[P(\mathbf{x}|\mathbf{x}\!\in\!\mathbb{S})\!=\!\frac{\int_{0}^{\infty}dt\ \mathbb{P}( \mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})}{\int_{0}^{\infty}dt\int_{\mathbb{S}}dS \ \mathbb{P}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})}. \tag{22}\]
As mentioned, it should be noted that \(|\psi_{t}(\mathbf{x})|^{2}\) is just the _conditional_ position probability density at the specific time \(t\), not the position-time _joint_ probability density, and so the accumulated interference pattern, \(P(\mathbf{x}|\mathbf{x}\!\in\!\mathbb{S})\), is not given by \(\int dt|\psi_{t}(\mathbf{x})|^{2}\) [98].
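Once a joint density has been tabulated on a space-time grid, the screen observables of Eqs. (21) and (22) reduce to simple quadratures; a minimal sketch (the toy density and grid shapes are illustrative assumptions):

```python
import numpy as np

def mean_arrival_time(P, t):
    """t-bar(x) of Eq. (21) from a tabulated joint density P[i, j] ~ P(x_i, t_j)."""
    return np.trapz(P * t[None, :], t, axis=1) / np.trapz(P, t, axis=1)

def position_distribution(P, x, t):
    """Cumulative position distribution of Eq. (22), normalised over the screen."""
    marginal = np.trapz(P, t, axis=1)   # integrate out the arrival time
    return marginal / np.trapz(marginal, x)

# usage with a toy joint density on a 200 x 400 space-time grid
x = np.linspace(-0.03, 0.03, 200)
t = np.linspace(0.05, 0.20, 400)
P = np.exp(-(x[:, None] / 0.01) ** 2 - ((t[None, :] - 0.1) / 0.02) ** 2)
print(mean_arrival_time(P, t).shape, position_distribution(P, x, t).shape)
```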
### Horizontal screen
In this section, we compare the aforementioned proposals in the double-slit setup with a horizontal detection screen (see Fig. 1). In this regard, in Fig. 5 the arrival time distributions at the screen are plotted for horizontal screens located at \(L_{y}\!=\!15,\ 20,\ 25,\ \mathrm{and}\ 30\,\mu\mathrm{m}\). In this figure, solid-black, dashed-green, and dash-dotted-blue curves represent the distributions \(\Pi_{STD}\), \(\Pi_{QF}\) and \(\Pi_{SC}\), respectively. Also, the vertical lines show the average time of arrival at the screen, \(\bar{t}_{\mathbb{S}}\), associated with these arrival time distributions. From this figure, one can see that, although the averages almost coincide, the distributions are distinct. Moreover, as expected, when the screen's distance \(L_{y}\) from the center of the two slits decreases, the difference between the distributions increases. Most of these differences occur at early times, which are associated with the particles that arrive at \(\mathbb{S}\) in the near field. Furthermore, we observe that \(\Pi_{SC}\) behaves quite differently from \(\Pi_{QF}\) and \(\Pi_{STD}\). The distributions \(\Pi_{QF}\) and \(\Pi_{STD}\) are more or less in agreement; however, for the screen located at \(L_{y}\!=\!15\,\mu\mathrm{m}\), a significant difference between the standard and quantum flux distributions occurs around \(t\!\approx\!0.2\ \mathrm{ms}\).
For more comprehensive insight, we can look at the joint spatiotemporal arrival distributions in Fig. 6. In this figure, the joint distributions \(\mathbb{P}_{\mathrm{SC}}\), \(\mathbb{P}_{\mathrm{STD}}\) and \(\mathbb{P}_{\mathrm{QF}}\) are plotted in three panels, for the horizontal screen surface located at \(L_{y}=15\,\mu\mathrm{m}\). These density plots clearly visualize the differences between the mentioned arrival distribution proposals. In these plots, we can see separated fringes with different shapes, which implies that the particles arrive at the screen in detached space-time regions. In the insets, one can see that the shapes of these regions are different for each proposal. In the joint density of the semiclassical approximation
Figure 4: (a) The cumulative arrival position distribution, Eq. (22), for the vertical screen at \(L_{x}\!=\!300\ \mathrm{mm}\), and (b) the average arrival time at each point of the screen, Eq. (21).
Figure 5: Arrival time distributions of particles that arrive on the horizontal screen at four different distances from the center of two slits. The vertical lines show the average arrival time.
(Fig. 6-a), the fringes are well separated, while the standard distribution (Fig. 6-b) exhibits more continuity in its fringes. In addition, in the pattern of the quantum flux proposal (Fig. 6-c) there are grooves between every two fringes, which is due to the changing sign of \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\) in (14). In all panels of Fig. 6, the duration of the "temporal no-arrival windows" between every two typical fringes varies in the range between 0.01 and 0.2 ms, with a spatial extension of about 0.3 to 2 mm. These space-time scales are readily accessible empirically with current technologies [53, 96], which could be used to test these results.
The average time of arrival at each point of the screen and the cumulative position interference pattern can be calculated as in the vertical screen case by Eqs. (21) and (22). In Fig. 7(a)-(b), these two quantities are shown for the horizontal screen placed at \(y=15\,\mu\)m. In contrast to the vertical screen, the cumulative position distribution of the semiclassical approximation is entirely distinct from those of the two other proposals. The cumulative position distributions resulting from the standard and quantum flux approaches show obvious differences from each other, as well. As one can see in Fig. 7(b), the average arrival times are the same for all three methods at first, begin to deviate from each other at \(x\approx 5\) mm, and then converge again at approximately \(x\approx 25\) mm. The maximum deviation between the standard and quantum flux average arrival times occurs at \(x\approx 19\) mm, which is well within the far-field regime--the width of the initial wave function is \(\sim O(10^{-3})\) mm, much smaller than 19 mm. Therefore, one can suggest the average arrival time in the gray region of Fig. 7(b) as a practical target for comparing these approaches experimentally. To this end, we study arrival time distributions at some points of this region as _local arrival distributions_. The arrival time distribution conditioned on a specific point \(\mathbf{x}\) on the screen can be obtained as follows
\[\Pi_{\mathbf{x}}(t|\mathbf{x}\in\mathbb{S})=\frac{\mathbb{P}(\mathbf{x},t| \mathbf{x}\in\mathbb{S})}{\int_{0}^{\infty}dt\,\,\mathbb{P}(\mathbf{x},t| \mathbf{x}\in\mathbb{S})}. \tag{23}\]
Using the associated joint distribution of each proposal, we have plotted Fig. 7(c)-(f), which show \(\Pi_{\mathbf{x}}(t|\mathbf{x}\in\mathbb{S})\) at the positions \(x=16.2\), \(17.4\), \(18.4\), \(19.2\) mm, on the screen placed at \(L_{y}=15\,\mu\)m. The broken black curves in Fig. 7(c)-(f), resulting from the quantum flux proposal, in contrast to the smooth curves of the other two methods, can be understood as a consequence of the changing sign of the \(y\)-component of the probability current: note that the quantum flux distribution is given by the absolute value of the probability current. The origin of the distinctions between the local average arrival times is more perceptible from these local arrival distributions. In principle, these distributions could be probed using fast and high-resolution single-atom detectors [53, 97]. In particular, the delay-line detector recently developed by Keller et al. [51] seems suitable for our purpose: it can resolve single-atom detection events temporally with 220 ps and spatially with 177 \(\mu\)m at rates of several \(10^{6}\) events per second.
We estimate from a numerical investigation that these local arrival distributions can be well reconstructed from about \(10^{4}\) detection events. As an example, in Fig. 7, the histograms associated with the probability densities of panel (f) are plotted in panel (g), using \(10^{4}\) numerical random samples. It is easy to estimate that recording \(10^{4}\) particle detection events can determine the local average arrival time with a statistical error of about \(10^{-2}\) ms, while the differences between the local average arrival times of the various proposals are mostly larger than \(10^{-1}\) ms. Using the cumulative position distribution,
Figure 6: Density plots of joint arrival position-time distributions for particles that arrive at the horizontal screen of the double-slit experiment. Panels (a), (b), and (c) represent \(\mathbb{P}_{\mathrm{SC}}\), \(\mathbb{P}_{\mathrm{STD}}\) and \(\mathbb{P}_{\mathrm{QF}}\), respectively. Insets: Magnified contour plots of the joint distributions.
Fig. 7(b), one can estimate that, if the total number of particles that arrive at the screen is about \(10^{8}\), we have about \(10^{4}\) particles around \(x=19.2\,\mathrm{mm}\), in the spatial interval \((19.1,19.3)\). Given recent progress in laser cooling and magneto-optical trapping [97], the preparation of a coherent ensemble of metastable helium atoms with this number of particles is quite achievable [51].
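A minimal sketch of the kind of estimate made here: draw \(10^{4}\) detection events from a normalised local arrival density by inverse-CDF sampling and read off the statistical error of the mean (the Gaussian stand-in density and its width are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0.0, 5.0, 2001)                    # time grid (ms)
density = np.exp(-(t - 2.5) ** 2 / (2 * 0.4**2))   # stand-in local arrival density
density /= np.trapz(density, t)

cdf = np.cumsum(density) * (t[1] - t[0])           # inverse-CDF sampling
samples = np.interp(rng.random(10_000), cdf, t)

print("mean arrival time (ms):", samples.mean())
print("statistical error (ms):", samples.std(ddof=1) / np.sqrt(samples.size))  # ~4e-3
```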
One might be inclined to think that the difference between the quantum flux and standard average arrival times is just due to the changing sign of \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\), but in the following we show that, even without the contribution of the negative part of \(\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}\), these proposals are significantly distinguishable: see Fig. 8.
### Detection schemes
As we mentioned in section II.3, according to the Bohmian deterministic point of view, there are several possible schemes to detect arrived particles, especially for the horizontal screen surface, on which particles can have recursive motions (see Figs. 1 and 2). One can assume that the horizontal screen is swept with a point-like detector that surveys all particles arriving at the surface \(\mathbb{S}\), which we call the _spot-detection scheme_. In this scheme, one option is to use a unilateral detector to detect particles arriving at the top or bottom of \(\mathbb{S}\). In this case, the positive and negative parts of the quantum probability current correspond, respectively, to particles that arrive at the top or bottom of \(\mathbb{S}\) (as shown in Fig. 2 (a)), and we must use Eq. (15) to calculate the screen observables. Additionally, we can choose a bilateral detector (or two unilateral detectors) that probes all particles arriving from both sides of \(\mathbb{S}\), over time and with several repetitions of the experiment (as shown in Fig. 2 (b)). In these circumstances (i.e. the spot-detection scheme), there is no barrier in front of the particles before they reach the point of detection, and we can use Eq. (14) to obtain the screen observables, as in the two previous subsections.
As we have already shown in section II.3, whether the particles arrive from the top or the bottom of \(\mathbb{S}\), the absolute value of the quantum probability current yields the trajectories' density and consequently gives the joint distribution of the total arrivals at each point of \(\mathbb{S}\). This is the case for the standard method as well; however, there is a subtle difference between the two proposals in the spot-detection scheme. When we talk about
Figure 7: The space-time arrival statistics for the double-slit experiment with a horizontal screen placed at \(L_{y}\!=\!15\,\mu\mathrm{m}\). Panel (a) represents the average time of arrival at each point of the screen, \(\bar{t}_{\mathbf{x}}\). Panel (b) represents the cumulative position probability density. The panels (c)-(f) show the local arrival time probability densities, \(\Pi_{\mathbf{x}}(t|\mathbf{x}\!\in\!\mathbb{S})\), at the at the points \(x\!=\!16.2,\,17.4,\,18.4,\,19.2\) mm on the screen, which are chosen from the gray region in panel (b). The vertical lines in these panels represent the average arrival times. Panel (g) is Histograms associated with probability densities of panel (f), which are generated by \(10^{4}\) numerical random sampling.
the spot-detection in the Bohmian approach, the possibility of multi-crossing is taken into account, and the distribution includes _all-arrivals_ at \(\mathbb{S}\). In the standard method there is an interpretation for \(\psi_{\mathbb{S}}^{+}(\mathbf{x},t)\) and \(\psi_{\mathbb{S}}^{-}(\mathbf{x},t)\) in Eq. (10), which relates them to particles arriving at \(\mathbb{S}\) in the same direction as the outward normal of the screen \(\mathbf{n}\) or opposite to it, respectively [10, 64]; nevertheless, since there are no defined paths in this approach, it is unclear whether it counts only the _first-arrivals_ at each side of the screen or includes recursive movements of particles.
Alternatively to the spot-detection scheme, it could be assumed that there is a continuous flat barrier in front of the particles' paths, serving as the _detection surface_ or _screen_ surface, which does not allow particles to cross. Depending on the screen's length and position, there are several possibilities for the detection process. In each case, a specific set of particle paths contributes to the distribution of arrival times. In the simplest case, the screen blocks all the trajectories that reach the horizontal surface \(\mathbb{S}\), and we only detect the first-arrivals. In such a setup, we can no longer use the quantum flux method to represent the Bohmian trajectories' first encounter with the surface; hence, the screen observables must be obtained by numerical analysis, using the definition of the truncated current in Eq. (16) and its corresponding joint distribution, \(\tilde{\mathbb{P}}_{\mathrm{QF}}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})\), defined in Eq. (17). By computing the Bohmian trajectories, we can find the positions and times of the first-arrivals at the screen, and consequently calculate the arrival time distribution, which can mathematically be defined as
\[\tilde{\Pi}_{\mathrm{QF}}(t|\mathbf{x}\!\in\!\mathbb{S})=\int_{\mathbb{S}} \tilde{\mathbb{P}}_{\mathrm{QF}}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})dS. \tag{24}\]
Also, other observable quantities such as the cumulative spatial distribution and averaged arrival time over the detection surface could be defined and calculated numerically in a similar way--by substituting \(\tilde{\mathbb{P}}_{\mathrm{QF}}(\mathbf{x},t|\mathbf{x}\!\in\!\mathbb{S})\) in Eqs. (21) and (22). Furthermore, we can complete the computations to find the second and third encounters to the surface (regardless of the barrier).
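A schematic sketch of how such first-arrival statistics can be generated, assuming the psi(x, y, t) function from the sketch of Eqs. (18)-(19) above and initial positions sampled from \(|\psi_{0}|^{2}\) (quantum equilibrium); a fixed-step Euler integrator is used for brevity, whereas trajectory ensembles of the size reported here would require an adaptive scheme:

```python
import numpy as np

hbar, m = 1.054571817e-34, 6.64e-27
eps = 1e-9   # finite-difference step (m)

def velocity(x, y, t):
    """Bohmian velocity v = (hbar/m) * Im[grad(psi)/psi], by central differences."""
    p0 = psi(x, y, t)   # psi(x, y, t) from the earlier two-slit sketch
    vx = (hbar / m) * np.imag((psi(x + eps, y, t) - psi(x - eps, y, t)) / (2 * eps) / p0)
    vy = (hbar / m) * np.imag((psi(x, y + eps, t) - psi(x, y - eps, t)) / (2 * eps) / p0)
    return vx, vy

def first_arrival(x0, y0, Ly=15e-6, dt=1e-7, tmax=5e-3):
    """Follow one trajectory until it first crosses the screen at y = Ly."""
    x, y, t = x0, y0, 0.0
    while t < tmax:
        vx, vy = velocity(x, y, t)
        x, y, t = x + vx * dt, y + vy * dt, t + dt
        if y >= Ly:
            return x, t    # first-exit position and time on the screen
    return None             # trajectory never reached the screen in time
```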
In Fig. 8, we show the numerical results of our Bohmian trajectory simulations. The background scatter plot shows the positions and times of arrival of \(2\times 10^{6}\) trajectories. In this plot, the second and third arrivals are shown in blue and green, respectively. Here, it is clearer why
Figure 8: The space-time Bohmian arrival statistics for the double-slit experiment with a horizontal screen placed at \(L_{y}\!=\!15\,\mu\mathrm{m}\). The interior curves in the central figure are the averaged times of arrival obtained by different detection schemes: see Fig. 2. The Left and top plots are marginal arrival time distributions and marginal arrival position distributions, respectively. The scatter plot is generated using \(2\times 10^{6}\) Bohmian trajectories, and the black, blue, and green points of the scatter plot represent the first, second, and third arrivals of Bohmian particles to the screen, respectively. The inset is a zoom-in of the dashed rectangle.
we interpret the grooves of the quantum flux density plot (Fig. 6 (c)) as a result of the multi-crossing of Bohmian trajectories. The three middle graphs show the average times of the _first_- and all-arrivals, which are simulation results from \(10^{8}\) trajectories, compared with the quantum flux method. As expected, the average time of all-arrivals fits the quantum flux curve. However, the average time of first-arrivals deviates from that of all-arrivals in the area discussed in the previous section (between \(x=16.2\) mm and \(x=19.2\) mm).
To scrutinize the deviation zone of Fig. 8 (the gray region), Fig. 9 shows the arrival time distributions at the screen positions \(x=16.2,\,17.4,\,18.4,\,19.2\) mm. As one can see, at the first recursive points of the quantum flux distribution, the first-arrival distributions drop to zero. This implies that, in the presence of a barrier-like screen, there would be a large temporal gap between detected particles. These gaps can be understood as a result of the non-intersection property of Bohmian trajectories, which causes a unilateral motion of particles along the direction of the probability current field.
## IV Screen Back-Effect
In principle, the presence of the detector could modify the wave function evolution before the particle detection, which is called the detector back-effect. To investigate detection statistics more thoroughly, we should consider this effect. However, due to the measurement problem and the quantum Zeno effect [9], a complete investigation of the detector effects is problematic at the fundamental level, and it is less obvious how to model an ideal detector. Nonetheless, several inequivalent phenomenological models have been proposed, such as the generalized Feynman path integral approach in the presence of an absorbing boundary [37; 38; 12; 39], the Schrödinger equation with a complex potential [44], the Schrödinger equation with an absorbing (or complex Robin) boundary condition [40; 41; 42; 43; 44], and so on. The results of these approaches are not the same, and a detailed study of the differences is an interesting topic. In this section, we provide a brief review of the absorbing boundary rule (ABR) and path-integral with absorbing boundary (PAB) models; then we compare them in the double-slit setup with the horizontal screen.
### Absorbing Boundary Rule
Among the above-mentioned phenomenological models, the absorbing boundary condition approach is the most compatible with Bohmian mechanics [42]. The application of the absorbing boundary condition to the arrival time problem was first proposed by Werner [40], and it has recently been re-derived and generalized by Tumulka and others using various methods [41; 42; 43; 44]. In particular, it has recently been shown that, in a suitable (non-obvious) limit, the imaginary potential approach yields a distribution of detection time and position in agreement with the absorbing boundary rule [44]. According to this rule, the particle wave function \(\psi\) evolves according to the free Schrödinger equation, while the presence of a detection screen is modeled by imposing the following boundary condition on the detection screen, \(\mathbf{x}\in\mathbb{S}\),
\[\mathbf{n}\cdot\nabla\psi=i\kappa\psi, \tag{25}\]
where \(\kappa\!>\!0\) is a constant characterizing the type of detector, in which \(\hbar\kappa\) represents the momentum that the detector is most sensitive to. This boundary condition ensures that waves with wave number \(\kappa\) are completely absorbed, while waves with other wave numbers are partly absorbed and partly reflected [41; 99]. In the absorbing boundary rule, the joint spatiotemporal distribution of the detection event is given by the quantum flux. Considering (25), this distribution reads
\[\mathbb{P}_{\mathrm{ABR}}(t,\mathbf{x}|\mathbf{x}\!\in\!\mathbb{S})=\frac{| \psi_{\mathrm{ABC}}|^{2}}{\int dt\int_{\mathbb{S}}dS|\psi_{\mathrm{ABC}}|^{2}}, \tag{26}\]
where \(\psi_{\mathrm{ABC}}\) represents the solution of the free Schrödinger equation satisfying the aforementioned absorbing boundary condition. This distribution can be understood in terms of Bohmian trajectories. The Bohmian equation of motion, \(\dot{\mathbf{X}}=(\hbar/m)\mathrm{Im}\left[\nabla\psi_{\mathrm{ABC}}/\psi_{\mathrm{ABC}}\right]\), together with the boundary condition (25), implies that trajectories can cross the boundary \(\mathbb{S}\) only outwards, so there are no multi-crossing trajectories. If it is assumed
Figure 9: Arrival time distributions at the horizontal screen positions \(x=16.2\), \(17.4\), \(18.4\), \(19.2\) mm, and \(L_{y}=15\,\mu\)m, which lie in the gray region of Fig. 8. The width of sampling at each point is about \(\delta x=0.25\) mm, and \(10^{8}\) Bohmian trajectories were simulated to obtain these distributions.
that the detector clicks when and where the Bohmian particle reaches \(\mathbb{S}\), the probability distribution of detection events is given by (26), because the initial distribution of the Bohmian particle is \(|\psi_{\rm ABC}(x,0)|^{2}\)[41].
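As a side note, the boundary condition (25) is easy to impose in a numerical scheme. The following one-dimensional sketch (our illustration with \(\hbar=m=1\) and arbitrary illustrative parameters, not the two-dimensional setup of this paper) encodes \(\partial_{x}\psi=i\kappa\psi\) at the right endpoint through a ghost point; since the resulting evolution is non-unitary, the rate of norm loss gives the ABR detection-time density.

```python
import numpy as np

# 1D free evolution on [0, L] (hbar = m = 1) with the absorbing boundary
# condition of Eq. (25) at x = L, imposed via a ghost point, and
# Crank-Nicolson time stepping. All values are illustrative.
N, L, kappa, dt = 300, 60.0, 1.0, 0.01
dx = L / N
x = np.linspace(0.0, L, N + 1)

H = np.zeros((N + 1, N + 1), dtype=complex)    # H = -(1/2) d^2/dx^2
for j in range(1, N):
    H[j, j - 1] = H[j, j + 1] = -0.5 / dx**2
    H[j, j] = 1.0 / dx**2
H[0, 0] = 1.0 / dx**2                          # Dirichlet-type wall at x = 0
H[0, 1] = -0.5 / dx**2
# ghost point psi_{N+1} = psi_{N-1} + 2i*kappa*dx*psi_N gives the Robin row:
H[N, N - 1] = -1.0 / dx**2
H[N, N] = (1.0 - 1j * kappa * dx) / dx**2

A = np.eye(N + 1) + 0.5j * dt * H              # Crank-Nicolson matrices
B = np.eye(N + 1) - 0.5j * dt * H
P = np.linalg.solve(A, B)                      # constant propagator

psi = np.exp(-((x - 40.0) ** 2) / 8.0 + 1j * kappa * x)   # incoming packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

norms = [1.0]
for _ in range(2500):
    psi = P @ psi
    norms.append(np.sum(np.abs(psi) ** 2) * dx)
detect_density = -np.diff(norms) / dt   # absorbed probability per unit time
```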
### Path-Integral with Absorbing Boundary
In several papers [37; 38; 39; 12], Marchewka and Schuss developed an interesting method to calculate the detection effect of an absorbing surface using the Feynman path integral method. They postulate a _separation principle_ for the wave function, in which the (bounded) wave function is considered as a sum of two parts, \(\psi(\mathbf{x},t)=\psi_{1}(\mathbf{x},t)+\psi_{2}(\mathbf{x},t)\), such that \(\psi_{1}(\mathbf{x},t)\) corresponds to the surviving part of the wave, which is orthogonal to \(\psi_{2}(\mathbf{x},t)\) at time \(t\) and evolves independently [38]. So we can obtain the survival probability of the particle, denoted \(S(t)\), which is the probability of the particle not being absorbed by the time \(t\), as \(\int_{\mathbb{D}}d^{3}\mathbf{x}|\psi_{1}(\mathbf{x},t)|^{2}\), where the integral is over the domain \(\mathbb{D}\) outside the absorbing region. By discretizing the path integral in a time interval \([0,t]\) and eliminating the trajectories that reach the absorbing surface \(\mathbb{S}\) in any time interval \([t^{\prime},t^{\prime}+\Delta t^{\prime}]\) with \(t^{\prime}<t\), the survival probability, and consequently the absorption probability, is obtained. Based on this analysis, we can define a unidirectional probability current into the surface as \(\frac{d}{dt}[1-S(t)]\), which yields the normal component of the multidimensional probability current density at any point on \(\mathbb{S}\) as
\[\mathbf{J}(\mathbf{x},t)\cdot\mathbf{n}=\frac{\lambda\hbar}{m\pi}\left|\mathbf{n}\cdot\mathbf{\nabla}\psi(\mathbf{x},t)\right|^{2}\exp\left\{-\frac{\lambda\hbar}{m\pi}\int_{0}^{t}dt^{\prime}\oint_{\mathbb{S}}dS\,|\mathbf{n}\cdot\mathbf{\nabla}\psi(\mathbf{x}^{\prime},t^{\prime})|^{2}\right\},\]
where \(dS\!=\!\mathbf{n}\cdot d\mathbf{S}\) is the magnitude of the surface element \(d\mathbf{S}\), \(\mathbf{n}\) is the unit outer normal to the absorbing surface \(\mathbb{S}\), and \(\lambda\) is a proportionality factor with the dimension of length [37; 62]. Also, \(\psi(\mathbf{x},t)\) is the solution of the Schrödinger equation, bounded and normalized in the domain \(\mathbb{D}\). Moreover, the normal component \(\mathbf{J}(\mathbf{x},t)\!\cdot\!\mathbf{n}\) is taken to be the probability density for observing the particle at the point \(\mathbf{x}\) on the screen at time \(t\)[39; 12].
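Once the normal derivative of \(\psi\) on the screen has been computed, the expression above can be evaluated directly. A minimal sketch (our illustration; it assumes \(\partial\psi/\partial n\) has already been sampled on a space-time grid) reads:

```python
import numpy as np

def pab_density(dpsi_dn, t, xs, lam=1.0, hbar=1.0, m=1.0):
    """Marchewka-Schuss unidirectional current J.n on the screen.

    dpsi_dn : array (n_t, n_x), normal derivative of psi sampled on S
    t, xs   : time grid (n_t,) and screen-position grid (n_x,)
    Returns the joint detection density J.n(x, t) of the PAB model.
    """
    c = lam * hbar / (m * np.pi)
    g = c * np.abs(dpsi_dn) ** 2                         # (n_t, n_x)
    # trapezoidal surface integral over the screen at each time
    flux = ((g[:, 1:] + g[:, :-1]) * np.diff(xs)).sum(axis=1) / 2.0
    # cumulative time integral int_0^t flux dt' for the survival factor
    cum = np.concatenate([[0.0],
                          np.cumsum(0.5 * (flux[1:] + flux[:-1]) * np.diff(t))])
    return g * np.exp(-cum)[:, None]
```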
### Screen back-effect in the double-slit experiment
In order to complete the investigations carried out in Section III, we now study the screen back-effect in the double-slit experiment with a horizontal screen. In this regard, we compare the arrival distributions that result from the absorbing boundary rule (ABR), the path-integral with absorbing boundary (PAB), and the Bohmian truncated current (BTC).
We continue with the same initial conditions as in Section III, and choose \(\kappa\!=\!1\,\mu\mathrm{m}^{-1}\) for ABR. This value of \(\kappa\) leads to the maximum absorption probability, which is almost \(0.4\), for the chosen initial wave function. In addition, for a more meaningful comparison, we consider \(\lambda\!=\!1\,\mu\mathrm{m}\) in the PAB method, which leads to the same absorption probability as ABR. The resulting joint arrival time-position distributions of the three methods are depicted in Fig. 10. As one can see, the distributions of the ABR and PAB methods, i.e., panels (a) and (b) in Fig. 10, are more compatible with each other than with the result of the BTC method. However, there are differences between them which are more obvious in the zoomed areas. The joint density of the ABR is more uni
Figure 10: Density plots of joint probability distributions of position and time (screen observable) for the horizontal screen placed at \(y\!=\!15\,\mu\mathrm{m}\) in the double-slit experiment. These densities are calculated by the three methods which take the screen effects into account.
formly distributed than that of the PAB method. The empty areas between the fringes in panel (c) of Fig. 10 are due to the elimination of the recursive trajectories, or in other words, to the elimination of the second and third arrivals shown in Fig. 8.
For a more detailed comparison, Fig. 11 shows the spatial and temporal marginal distributions. In addition, the associated local average arrival times are compared in the central panel of this figure. The PAB method leads to significant discrepancies in the marginal distributions; the maximum difference is about 40%, occurring around \(x\approx 0.8\) mm, which should be clearly testable. In contrast to the previous results on intrinsic distributions, in which the difference between average arrival times was significant, there is good agreement in this observable between the ABR and PAB methods. However, there is a significant difference between the average arrival times of these two methods and that of the BTC around \(x=6\) mm. In Fig. 12, the local arrival time distributions at some points on the screen are plotted, which show similar behavior.
## V Summary and discussion
When and where does the wave function collapse? How can one model a detector in quantum theory? These are the questions that we investigated in this work. We tried to show that there is no agreed-upon answer to these questions, even for the double-slit experiment that _has in it the heart of quantum mechanics_[100]. This is a practical encounter with the measurement problem [73]. In this regard, we numerically investigated and compared the main proposed answers to these questions for a double-slit setup with a horizontal detection screen. It was shown that these proposals lead to experimentally distinguishable predictions, thanks to current single-atom detection technology.
In this work, we suggested the metastable helium atom as a suitable coherent source of the matter wave; however, other sources may lead to some practical improvements. For example, using heavier condensate atoms can lead to clearer discrepancies. Moreover, it is worth noting that although the experiment with photons may have some practical advantages, there are more complications in its theoretical analysis. This is partially because of the relativistic localization-causality problem [101; 102; 103; 104]. The theoretical investigation of a proposed experiment for photons would be an interesting extension of the present work, which has been left for future studies.
###### Acknowledgements.
We sincerely thank Mohammad Hossein Barati for carefully reviewing the manuscript, and Sheldon Goldstein for his helpful comments.
Figure 11: Averaged time of arrival at each point of the screen (central figure), cumulative interference pattern (upper figure), and distribution of time of arrival to the horizontal screen of the double-slit experiment placed at \(y=15\,\mu\)m (right-hand figure).
Figure 12: Arrival time distribution at the horizontal screen positions \(x=16.2\), \(17.4\), \(18.4\), \(19.2\) mm, and \(L_{y}=15\,\mu\)m, which are calculated for the three methods which take the screen effects into account. |
2306.07946 | STUDY: Socially Aware Temporally Causal Decoder Recommender Systems | Recommender systems are widely used to help people find items that are
tailored to their interests. These interests are often influenced by social
networks, making it important to use social network information effectively in
recommender systems. This is especially true for demographic groups with
interests that differ from the majority. This paper introduces STUDY, a
Socially-aware Temporally caUsal Decoder recommender sYstem. STUDY introduces a
new socially-aware recommender system architecture that is significantly more
efficient to learn and train than existing methods. STUDY performs joint
inference over socially connected groups in a single forward pass of a modified
transformer decoder network. We demonstrate the benefits of STUDY in the
recommendation of books for students who are dyslexic, or struggling readers.
Dyslexic students often have difficulty engaging with reading material, making
it critical to recommend books that are tailored to their interests. We worked
with our non-profit partner Learning Ally to evaluate STUDY on a dataset of
struggling readers. STUDY was able to generate recommendations that more
accurately predicted student engagement, when compared with existing methods. | Eltayeb Ahmed, Diana Mincu, Lauren Harrell, Katherine Heller, Subhrajit Roy | 2023-06-02T14:47:56Z | http://arxiv.org/abs/2306.07946v3 | # STUDY: Socially Aware Temporally Causal Decoder Recommender Systems
###### Abstract
Recommender systems are widely used to help people find items that are tailored to their interests. These interests are often influenced by social networks, making it important to use social network information effectively in recommender systems, especially for demographic groups with interests that differ from the majority. This paper introduces STUDY, a Socially-aware Temporally caUsal Decoder recommender sYstem. The STUDY architecture is significantly more efficient to learn and train than existing methods and performs joint inference over socially-connected groups in a single forward pass of a modified transformer decoder network. We demonstrate the benefits of STUDY in the recommendation of books for students who have dyslexia or are struggling readers. Students with dyslexia often have difficulty engaging with reading material, making it critical to recommend books that are tailored to their interests. We worked with our non-profit partner Learning Ally to evaluate STUDY on a dataset of struggling readers. STUDY was able to generate recommendations that more accurately predicted student engagement, when compared with existing methods.
## 1 Introduction
Recommender systems are one of the major applications of AI systems and are an essential driver of many of our online experiences today. With applications ranging from e-commerce [1] and advertising platforms [2] to video platforms [3], we are relying on recommender systems to surface relevant and interesting content to individual users. In this work, we focus on recommender systems deployed in the educational setting [4] to suggest relevant literature for students in grades 1 through 12.
Recommender systems for educational content have been studied in the context of online learning programs/massive open online courses (MOOCs) [5] but are not as common for primary and secondary school student applications. Experiments with recommender systems in education have generally been limited by the lack of publicly-available large data sources - one review found only 5 experimental studies with sample sizes of over 1000 participants [6]. However, versions of recommender systems have been applied in educational measurement and assessment for over four decades through computerized adaptive testing [7], where the test items presented to the test-takers depend on the current estimate of the student's ability.
In more recent literature, expansions on methods for computerized adaptive testing have been proposed for recommending new content in an adaptive learning framework [8] where content can be automatically presented to students given their particular stage of learning. These adaptive learning
systems require access to some measures of student subject-matter performance and do not account for the student's interest in the material or the social dynamics that may lead to greater engagement. Alternative approaches are needed in content recommendation contexts where a student's reading level cannot be measured or where such measures are not included in the data available for recommendations.
Previous studies have shown that higher levels of student motivation predict growth in reading comprehension [9]; thus promoting content that is most likely to align with a student's interests is hypothesized to produce better reading and literacy outcomes, particularly for students with reading difficulties. In the United States, the reading content assigned by teachers often aligns with a state or district-level curriculum for a particular grade level, but for assigning reading materials outside the required texts, other strategies are needed, and we hypothesize that incorporating social connections can be an effective strategy for recommending content successfully.
In one study of a book recommendation system, using an app function that allowed users to view the reading histories of peers had a beneficial long-term effect on reading choices [10], indicating that incorporating social dynamics into recommendations may lead to more effective recommendations. In another social network analysis of second and third graders in the US [11], the researchers found that on average students were able to effectively identify peers with higher reading skills and ask these peers for help; thus even for younger learners, peer relationships may be relevant to content selection. In rural areas, which sometimes lack resources to help struggling students, a study [12] found that adolescent reading choices were often motivated by conversations and materials borrowed from friends and family, suggesting that a recommender system that includes peer preferences could also be effective for reaching the rural student population.
### Possibilities for recommender systems
In the applied educational setting, systems can be targeted towards either teachers [13] or students [14] to suggest content, and in both cases the goal of these systems is to surface relevant and engaging educational material that is beneficial to the students' learning. A student-facing educational recommender system is built from data on students' interactions with the platform, which fall into the following categories [15]:
* Data about the students, or "user data"
* Data about the content to be recommended, or "item data"
* Data about the context of the current session (e.g. time of day, session device, etc.), or "context data".
In the case of our applied scenario to recommend books, we do not assume that student or "user data" includes measures of student performance.
Two widely used types of recommender systems are "batch" and "sequential". Batch recommender systems operate on representations of previous interactions and don't model time or relative order. They include collaborative filtering based methods [16] and Embarrassingly Shallow Autoencoders (EASE) [17]. Sequential recommender systems [18] operate on representations of historical user interactions as sequences [19].
The classroom setting enables socially-structured recommendation because of the availability of a clearly-defined hierarchical network, which groups students into classrooms, year cohorts, and schools. This makes the utilization of social recommendation systems [20] particularly attractive where the relationships between users are leveraged to make recommendations.
In this work we present Socially-aware Temporally caUsal Decoder recommender sYstems (STUDY), a sequence-based recommender system for educational content that makes use of known student hierarchies to improve recommendations. Our method does joint inference over groups of students who are adjacent in the social network, utilizing a single transformer decoder network to model both within-user and cross-user interactions. This paper is organized as follows: we provide a review of the related work in Section 2, we review related previous recommender systems in Section 3, we introduce our new socially-aware recommender system in Section 4, and we present our experimental results in Section 5.5 before concluding in Section 6.
In summary, the contributions of this paper are:
* Proposing a novel architecture and data pipeline for performing socially-aware sequential recommendation.
* Comparing the new method to modern and classical recommendation baselines.
* Performing ablations and performance breakdowns to better understand the new model.
## 2 Related Work
### Click-through Rate Prediction
One of the popular approaches for recommender systems is click-through rate prediction [21], where the probability of a user clicking on a specific presented item is predicted. These probabilities are then used as a proxy for user preferences. Click-through Rate (CTR) models typically make predictions for a suggested next item for a user based on the user's sequence of previous interactions, user data and context data. Model architectures used in this problem range from standard models like Transformers used in Behavior Sequence Transformers (BST) [22] and Convolutional Neural Networks used in [23] to more task specific architectures such as Wide & Deep models [24] and Field-Leveraged Embedding Networks (FLEN) [25]. This approach contrasts with other approaches such as neural collaborative filtering [26] and K-Nearest Neighbors (KNN) recommenders [27] where there is no attempt to explicitly model the likelihood of the user interacting with a specific item.
### Socially-Aware Recommendation Systems
When social connectivity information exists for users, there are many modeling approaches that leverage this information. Methods such as TrustMF [28] and Sorec [29] project user preference vectors into a latent space using matrix factorization approaches. The underlying assumption of these systems is homophily i.e. that users who are more socially connected are more likely to have similar preferences.
Deep-learning based methods have leveraged graph neural networks to learn using social connectivity structure. Methods such as DiffNet [30] and KCGN [31] utilize graph convolutional neural networks whereas methods such as GraphRec [32] and Social Attentional Memory Networks (SAMN) [33] employ graph attention mechanisms. Other notable work includes Disentangled Graph Neural
Figure 1: (a) a sequential autoregressive transformer with causal attention that processes each user individually, (b) an equivalent joint forward pass that results in the same computation as (a), c) shows that introducing new nonzero values (shown in purple) to the attention mask allows information to flow across users. Predictions condition on all interactions with an earlier timestamp, irrespective of whether the interaction came from the same user or not.
Networks (DGNN) which have the capability to model non-heterogeneous relationships and utilize a memory augmented graph network [20].
In this work we take a different approach to that of previous work, which has used graph neural networks or other custom architectures with separate components to handle cross-user interactions. We utilize a single transformer decoder with a specifically-designed attention mask to do joint inference over groups of users. With this approach we have developed a single consistent way to handle both within-user and cross-user interactions in a computationally efficient manner.
## 3 Review of Baseline Work
We review related work used as baselines for comparison in the experiments of Section 5.
### Item-based KNN recommender
KNN recommender systems [34] compute the cosine similarity between the user's current feature vector and each entry in the training dataset. They then recommend to the user the \(k\) distinct items with highest cosine similarity to the user's current feature vector. When feature vectors are sparse most entries in the training dataset will have a cosine similarity of exactly zero with the user's current feature vector. An efficient implementation in this situation precomputes an inverse index [35] for the training data that can be used to retrieve only the feature vectors which have a nonzero value at a specific index. Utilizing this inverse index enables only having to compute the cosine similarity for vectors that will have a nonzero similarity with current features. These are the vectors that have overlapping nonzero entries with the current user feature vector.
In this paper's KNN implementation, we iterate over every sequence in the training dataset and featurize each item by computing a feature vector from the \(h\) interactions immediately preceding it. Each item in the sequence is represented by a vector of size \(v+1\) (one component for each entry in the vocabulary as well as an entry for the out-of-vocabulary token). The \(i^{th}\) component of this vector is the number of times the \(i^{th}\) item in the vocabulary was interacted with in the user's previous \(h\) interactions with the system. As \(h\ll n\), these feature vectors are very sparse. To make recommendations at inference time, we compute a feature vector from the user's \(h\) most recent interactions.
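A minimal sketch of this scheme is given below (our illustration of the procedure just described; the helper names and tie-breaking details are ours). It builds the inverse index over featurized training positions and scores only the candidates with nonzero overlap.

```python
import numpy as np
from collections import defaultdict

def build_index(train_seqs, v, h):
    """Featurize every next-item position by counts of the previous h tokens
    and build an inverse index: token -> set of rows in which it appears."""
    rows, labels, index = [], [], defaultdict(set)
    for seq in train_seqs:                       # seq: token ids in [0, v]
        for k in range(1, len(seq)):
            window = seq[max(0, k - h):k]
            rows.append(np.bincount(window, minlength=v + 1).astype(float))
            labels.append(seq[k])
            for tok in set(window):
                index[tok].add(len(rows) - 1)
    return np.array(rows), labels, index

def recommend(window, rows, labels, index, v, n=5):
    q = np.bincount(window, minlength=v + 1).astype(float)
    cand = sorted(set().union(*(index[tok] for tok in set(window))))
    if not cand:
        return []
    R = rows[cand]                               # only nonzero-similarity rows
    sims = R @ q / (np.linalg.norm(R, axis=1) * np.linalg.norm(q) + 1e-12)
    out = []
    for r in np.argsort(-sims):                  # top-n distinct items
        item = labels[cand[r]]
        if item not in out:
            out.append(item)
        if len(out) == n:
            break
    return out
```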
### Individual
Following the click-through rate prediction method of recommendation [36], this methodology takes the next-click prediction approach and hence treats making recommendations as a causal sequence-modeling problem. In particular, this modeling framework borrows from the language modeling literature [37] due to a similar problem setup. Concretely, given a set of students \(s_{j}\in S\), and a set of historical item interactions \(\{i_{j}^{k}:\forall j|s_{j}\in S,\forall k<\|s_{j}\|,k\in\mathbb{N}\}\), we learn a propensity function
\[P(i_{j}^{k})=f(i_{j}^{k}|i_{j}^{k^{\prime}<k};\theta)\]
where the propensity of an item at a point in time is the likelihood that the student will interact with that item at that time. To this end we modeled \(f\) as a causal decoder-only transformer with a next-token prediction objective, maximizing the following likelihood of our data \(D\):
\[\mathbb{L}(D)=\sum_{i_{j}^{k}\in D}\log f(i_{j}^{k}\,|\,i_{j}^{k^{\prime}<k};\theta)\]
This is the formulation we used for the model referred to as **Individual**, since inference is carried out for each individual student separately.
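In implementation terms, this objective is the familiar next-token cross-entropy. A small sketch of the per-sequence log-likelihood (ours, not the paper's training code) is:

```python
import numpy as np

def sequence_log_likelihood(logits, targets):
    """logits: (T, V) model outputs at each position; targets: (T,) the
    observed next items. Returns sum_k log f(i^k | i^{k'<k}; theta)."""
    m = logits.max(axis=-1, keepdims=True)               # for stability
    logp = logits - m - np.log(np.exp(logits - m).sum(axis=-1, keepdims=True))
    return float(logp[np.arange(len(targets)), targets].sum())
```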
## 4 Method
We present our new Socially-aware Temporally Causal Decoder Recommender System (STUDY), enabling the efficient use of the unique social structure inherent to students in schools.
### Study
We motivate our model by observing that for students with few previous interactions, we can rely on data from other students with similar preferences to seed the model to improve predictive performance. Concretely, we concatenate the interaction sequences of multiple students within the same classroom. This precludes using a causal sequence modeling approach to model this problem, since some item-interactions for students presented earlier in the sequence could have occurred at a later point in time relative to item-interactions for the students presented later in the sequence. Modeling data represented in this format using causal sequence modeling would lead to anti-causal data leakage and the model would learn to make recommendations conditioned on information not available at inference time.
Hence we introduce temporally causal masking into our model: a change to our model's forward pass, using a training process similar to causal sequence modeling, that respects the causal relationships in our data as shown in Figure 1. Conceptually, we concatenate the user vectors of students in the same classroom and allow predictions for a particular item to condition on all interactions that happened in the past, both within-user and cross-user. In more detail, if there is a subset of users \(u^{1},u^{2},\cdots,u^{n}\) who are all in the same classroom, with interaction sequences \(\mathbf{S^{1}},\mathbf{S^{2}},\cdots,\mathbf{S^{n}}\), and with timestamp vectors \(\mathbf{T^{1}},\mathbf{T^{2}},\cdots,\mathbf{T^{n}}\), where \(t^{i}_{j}\) is the timestamp of the interaction described at \(s^{i}_{j}\), and each user vector \(\mathbf{S^{n}}\) and timestamp vector \(\mathbf{T^{n}}\) is terminated with a separator token, we define the concatenated classroom vectors generated by the procedure described in Section 5.2 as
\[\mathbf{\hat{S}} =\left(\mathbf{S^{1}}\mathbf{S^{2}}\cdots\mathbf{S^{n}}\right)\] \[\mathbf{\hat{T}} =\left(\mathbf{T^{1}}\mathbf{T^{2}}\cdots\mathbf{T^{n}}\right)\]
We define the matrix \(\mathbf{M}\)
\[m_{i,j}=\mathbb{1}_{\hat{t}_{j}<\hat{t}_{i}}\]
as the temporally causal mask matrix, so that position \(i\) may attend to position \(j\) only if the latter carries an earlier timestamp. This matrix is used as the mask in our attention operator in place of the usual causal mask used in decoder-only transformers. Hence we redefine the attention operator of the decoder-only transformer as follows:
\[\mathbf{A}=\mathrm{Softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\mathsf{T}}}{\sqrt{d_{k}}}\right)\odot\mathbf{M}\] \[\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathbf{A}\mathbf{V}\]
where \(\mathbf{Q}\) is the query matrix, \(\mathbf{K}\) is the key matrix and \(\mathbf{V}\) is the value matrix. With this modification we can use next-item prediction sequence modeling to train the model without anti-causal information leakage, utilizing a multihead generalization of this attention mechanism [38]. We call the model defined by this procedure **STUDY**.
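To illustrate, a single-head NumPy sketch of this attention operator is given below (our illustration; for numerical stability it realizes the product with \(\mathbf{M}\) by masking the logits before the softmax, which zeroes the same entries of \(\mathbf{A}\), and all tensor names are ours).

```python
import numpy as np

def temporally_causal_attention(Q, K, V, timestamps):
    """Position i attends only to positions j with an earlier timestamp,
    whether j belongs to the same student or to a classmate."""
    d_k = Q.shape[-1]
    logits = Q @ K.T / np.sqrt(d_k)                     # (n, n)
    M = timestamps[None, :] < timestamps[:, None]       # m_ij = 1 iff t_j < t_i
    logits = np.where(M, logits, -1e30)                 # mask before softmax
    w = np.exp(logits - logits.max(axis=-1, keepdims=True)) * M
    denom = w.sum(axis=-1, keepdims=True)
    w = np.where(denom > 0, w / np.maximum(denom, 1e-12), 0.0)
    return w @ V

# two interleaved students: timestamps need not be sorted in the sequence
t = np.array([1.0, 3.0, 2.0, 4.0])
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8)); K = rng.normal(size=(4, 8)); V = rng.normal(size=(4, 8))
out = temporally_causal_attention(Q, K, V, t)   # row 0 has no past -> zeros
```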
## 5 Experiments
### Data
We test STUDY on a dataset of historical interactions with an educational platform collected by our nonprofit partner, Learning Ally. This platform recommends and provides access to audiobooks with the goal of promoting reading in students with dyslexia. The data offered was anonymized, with each student, school and district identified only by a unique randomly generated identification number. Furthermore, all descriptive student data was only available as highly aggregated summaries. The dataset consists of historical records of interactions between students and audiobooks. For each recorded interaction we have a timestamp, an item ID, an anonymized student ID, an anonymized school ID and a grade level. This data was collected over two consecutive school years, with over 5 million interactions per school year, totaling over 10 million interactions. These interactions come from a cohort of over 390,000 students. We use the data from the first school year as our training dataset and split the data from the second school year into a validation dataset and a test dataset. This split was done according to the temporal global splitting strategy [39], to model the deployment scenario as realistically as possible. To partition the data from
the second school year into a test set and a validation set we split by student, following the user split strategy [39]. If a data split does not contain at least a full academic year then the distributions would not match due to seasonal trends in the data.
Overall this dataset is well suited to studying social recommendation algorithms due to the existence of implied social connections through known proximity and also due to the large amount of interaction data on record. The existing book selections were made through either student choice or teacher recommendation, where the teacher-assigned content often aligned with materials assigned to the whole class or the required curriculum. Interactions with the assigned content, however, were still up to the learner, and thus we believe the existing data is a good fit for modeling preferences and likely engagement with content. Further details on the data, including summary statistics, can be found in Appendix A.
### Preprocessing
In order to obtain the training data representation, we express the items as tokens. The top \(v\) most popular items each get a unique, sequential integer as their token, while the remaining items are assigned to an out-of-vocabulary token. A student's interaction history therefore becomes a list of such tokens, each associated with a time point.
The following additional processing steps are taken based on the model type used downstream:
* For transformer models: we split the student history into slices based on a context window of length \(c\).
* For models that process students jointly: we split the sequence associated with each student into segments of length \(s\), \(s\leq c\), then compose sequences of length \(c\) by joining segments from _multiple_ students in the same classroom, taking care to use a separator token.
Additional information, including a diagram of our preprocessing steps, is presented in Appendix B.
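A schematic of the segmentation-and-packing step might look as follows (our sketch, not the production pipeline of Appendix B; the separator token value and helper names are hypothetical choices).

```python
from itertools import chain

SEP = 0  # hypothetical reserved separator token

def tokenize(histories, vocab):
    """Map raw item ids to dense tokens; items outside the top-v -> OOV."""
    oov = len(vocab) + 1
    return [[vocab.get(item, oov) for item in seq] for seq in histories]

def classroom_sequences(student_seqs, s, c):
    """Split each student's history into segments of length at most s, then
    pack segments from the same classroom into inputs of length at most c,
    terminating each segment with a separator token."""
    segments = chain.from_iterable(
        [seq[i:i + s] for i in range(0, len(seq), s)] for seq in student_seqs)
    packed, cur = [], []
    for seg in segments:
        if cur and len(cur) + len(seg) + 1 > c:
            packed.append(cur)
            cur = []
        cur += seg + [SEP]
    if cur:
        packed.append(cur)
    return packed
```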
### Evaluating Models
We implement **KNN** and **SAMN**1 [33] as baseline models, a transformer-based model that does inference for each student separately, which we will call **Individual**, as well as a transformer that operates over groups of students, called **STUDY**.
Footnote 1: We used the author’s repository [https://github.com/chenchongthu/SAMN](https://github.com/chenchongthu/SAMN) as a guideline. We found a discrepancy between this code and the method described in the paper, but it didn’t affect final performance.
We compare results from the Individual model, the STUDY model, the item-based KNN baseline, and SAMN [33] as a social baseline. We tuned the learning rate hyperparameter on the validation set and report final results on the test set. We took both the context length \(c\) and the segment size \(s\) for our transformer models to be 65, enough to cover the full history of most students in our dataset. Details about further hyperparameters and compute can be found in Appendix C. Hits@n was used as our evaluation metric, where hits@n is the percentage of interactions for which the actual item interacted with falls within the top \(n\) recommendations from the model under evaluation. Since we observe that students tend to interact with an item repeatedly before completing it, we additionally evaluate our models on the subset of the dataset where the student is interacting with a different item than the one previously interacted with, referred to as _non-continuation_ evaluation. We also evaluate on the subset of the dataset where the students are interacting with an item for the first time, referred to as _novel_ evaluation. This is motivated by the fact that we are interested in the power of our recommendation systems to engage students with new items in order to maximize time spent on the educational platform. Aggregate statistics are computed per student and then averaged over students, to prevent students with large numbers of interactions from dominating the reported statistics. We also examine the relative performance of these models on different slices of data, looking at co-factors such as demographic characteristics and school performance. We present the results of this experiment in Section 5.5.
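Concretely, the per-student averaging of the metric can be sketched as follows (our illustration, with hypothetical names):

```python
import numpy as np

def hits_at_n(ranks_by_student, n):
    """ranks_by_student: dict mapping student id -> list of 1-based ranks
    that the ground-truth item received in the model's recommendation list.
    The hit rate is averaged within each student first, then across
    students, so highly active students do not dominate the statistic."""
    per_student = [np.mean([r <= n for r in ranks])
                   for ranks in ranks_by_student.values() if ranks]
    return 100.0 * float(np.mean(per_student))
```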
### Ablation Experiments
**Force Mix**: In our model, because we set the segment size \(s\) equal to the context size \(c\), we only do joint inference over groups of students when processing a student who does not have enough previous interactions to fill the transformer's context. We experiment with a shorter segment size \(s=20\ll c\), as per the definitions in Section 5.2. Practically, this leads to the model always sharing its context between students in a classroom when possible, even for students who have enough history to fill the transformer context. We present results for this ablation in Section 5.5.1.
**Classroom Grouping**: In STUDY we do joint inference over students in the same classroom. We ablate the importance of using this particular grouping. Concretely, we experiment with grouping students who are in the same district and school year as being in a single group. We also experiment with grouping all students in the dataset into a single group, which results in completely random groups of students being jointly processed together. We present results in Section 5.5.1.
**Data Tapering**: We compare the effect of using only a subset of the available data on the performance of STUDY and Individual. We compare the use of 25%, 50%, 75% and the entire dataset, with the aim of discerning the effect of using social information on the data efficiency of the system. We present results in Section 5.5.1.
### Results and Analysis
Table 1 shows the performance of the models STUDY, Individual, KNN and SAMN on the test split of audiobook usage data. We observe that both transformer models, Individual and STUDY, largely outperform KNN and SAMN, with the STUDY model outperforming the Individual model. We see that the social model SAMN, derived from the collaborative filtering family of models, fails to pick up on the sequential patterns in the dataset, such as users revisiting the exact same item or similar items. This is exhibited by SAMN having similar performance in the evaluation subsets _all_, _non-continuation_ and _novel_. The performance differences are most pronounced when evaluated on the entire test set as seen in the _all_ section of the table, but also holds up when evaluated across the more difficult _non-continuation_ and _novel_ test subsets. Crucially, with the STUDY model outperforming the individual model, we can see that leveraging the social hierarchy of our users to do joint predictions leads to improved recommendations.
In Figure 3 we see that the relative performance of the models under examination remains consistent, with STUDY outperforming Individual, which in turn outperforms KNN. SAMN trailed behind with
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline Evaluation Subset & \(n\) & KNN(\%) & SAMN(\%) & Individual (\%) & STUDY(\%) \\ \hline \multirow{5}{*}{All} & 1 & \(16.67\pm 0.14\) & \(0.32\pm 0.02\) & \(28.06\pm 0.14\) & \(31.86\pm 0.14\) \\ & 3 & \(31.97\pm 0.17\) & \(2.01\pm 0.05\) & \(35.74\pm 0.16\) & \(38.65\pm 0.14\) \\ & 5 & \(37.16\pm 0.20\) & \(3.64\pm 0.09\) & \(38.63\pm 0.18\) & \(41.17\pm 0.18\) \\ & 10 & \(43.17\pm 0.20\) & \(6.87\pm 0.10\) & \(42.56\pm 0.20\) & \(44.85\pm 0.18\) \\ & 20 & \(48.02\pm 0.20\) & \(11.56\pm 0.15\) & \(46.70\pm 0.20\) & \(48.90\pm 0.18\) \\ \hline \multirow{5}{*}{Non-continuation} & 1 & \(5.15\pm 0.08\) & \(0.33\pm 0.02\) & \(2.05\pm 0.04\) & \(3.75\pm 0.07\) \\ & 3 & \(9.35\pm 0.08\) & \(1.95\pm 0.04\) & \(10.38\pm 0.11\) & \(13.76\pm 0.11\) \\ & 5 & \(11.47\pm 0.13\) & \(3.56\pm 0.07\) & \(14.35\pm 0.10\) & \(17.66\pm 0.11\) \\ & 10 & \(14.93\pm 0.10\) & \(6.68\pm 0.09\) & \(19.96\pm 0.13\) & \(23.05\pm 0.13\) \\ & 20 & \(19.42\pm 0.15\) & \(11.29\pm 0.15\) & \(26.27\pm 0.14\) & \(29.50\pm 0.16\) \\ \hline \multirow{5}{*}{Novel} & 1 & \(0.58\pm 0.03\) & \(0.32\pm 0.02\) & \(1.06\pm 0.04\) & \(1.86\pm 0.06\) \\ & 3 & \(2.21\pm 0.06\) & \(1.87\pm 0.05\) & \(5.03\pm 0.09\) & \(6.60\pm 0.10\) \\ \cline{1-1} & 5 & \(3.73\pm 0.05\) & \(3.45\pm 0.06\) & \(8.02\pm 0.12\) & \(9.77\pm 0.13\) \\ \cline{1-1} & 10 & \(6.68\pm 0.08\) & \(6.47\pm 0.12\) & \(13.14\pm 0.12\) & \(15.06\pm 0.15\) \\ \cline{1-1} & 20 & \(11.12\pm 0.10\) & \(10.99\pm 0.11\) & \(19.56\pm 0.14\) & \(22.01\pm 0.16\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Hits@n percentage metrics for the different recommendation models evaluated on the historical data in the test split, across three subsets: _all_, _non-continuation_ and _novel_. Both transformer decoder approaches significantly outperform KNN and SAMN with STUDY having the best performance. Uncertainties are \(95\%\) confidence intervals computed over 50 bootstraps.
almost 0 hits@1; we attribute this to SAMN's non-sequential nature. This ordering is the same when slicing by demographic variables such as metro code (which describes schools as being in urban, suburban, rural or town areas) and school socio-economic indicators, which indicate the level of wealth of the area in the vicinity of a school. We also observe the same ordering of models by performance when slicing by academic variables such as classroom reading scores. In Figure 2 we slice model performance by student engagement, which we measure by the number of interactions the student has on record. Here we see a similar relative performance order for students with fewer than about 35 total interactions, but for students with more engagement, we see convergence between the performance of STUDY and Individual. This is discussed in more detail in Section 5.5.1.
#### 5.5.1 Ablations Results
**Force Mix**: In Figure 2 we compare the performance of the STUDY model to the Individual model and observe that STUDY significantly outperforms the Individual model on students with up to 35 previous interactions. The Force Mix model, in comparison, outperforms Individual only on students who have up to about 17 interactions on the platform. Given that our segment length for the Force Mix model is 20, it is at students with 20 previous interactions that Force Mix starts to forgo the available history of the student at hand in favor of conditioning on data from other peer students. From here we can conclude that conditioning on peer student history is beneficial if it is done in addition to conditioning on all available history for a student, but not if it comes at the cost of conditioning on less history than is available for the particular student.
**Classroom Grouping**: In Figure 3(a) we compare the performance of our model that uses classrooms to group students for joint inference compared to a model that uses intersection of district and school year to group students, to a model that uses a single group as well as to a model that does inference for each student individually. We can see that using classrooms for grouping results in the best performance, that using the intersection of district and school year for grouping performs slightly worse, and that putting all students in the same group performs similarly to individual processing. From here we can conclude that using groups of users whom we expect to have correlated interests is necessary for the performance of our model and using poorly designed groups can harm model performance.
**Data Tapering**: In Figure 3(b) we see that STUDY outperforms the Individual recommender across all data subsets used to train the models, confirming the benefit of adopting social recommender systems
Figure 2: Performance of the presented models broken down by student engagement, accompanied by a histogram of student engagement in the lower chart. (a) Hits@1 across four models KNN, SAMN, Individual and STUDY. The two transformer based approaches outperform KNN and SAMN. The STUDY model significantly outperforms the Individual model for students with low engagement of up to 35 interactions. (b) A comparison of the Individual against the Force Mix ablation. The Force Mix ablation only outperforms Individual on students with engagement of up to 17, with matched performance onwards. Uncertainties shown are \(95\%\) confidence intervals computed over 50 bootstraps.
such as STUDY even in a data-constrained environment. We also note that both models exhibit a performance drop when the amount of data used increases from 50% to 75%, suggesting that not all additional data is beneficial. We leave a deeper interpretation of this phenomenon to future work.
## 6 Conclusion
In this paper we present STUDY, a socially aware recommendation system that leverages cross-user information at inference time and we demonstrate its applicability to the practical problem of book recommendation for children inside a classroom. This is an important problem, as engagement with reading materials from an early age can positively impact language acquisition, communication skills, social skills and literacy skills.
Our novel STUDY method uses attention masks that are causal with respect to interaction timestamps and is able to process both within-user and across-user interactions in a single forward pass through a modified transformer decoder network. It avoids complex architectures and circumvents the need for graph neural networks, which are notoriously difficult to train; thus, STUDY is an efficient system that can be deployed by partners with limited computational bandwidth without sacrificing model performance. We also compare STUDY to a number of baselines, both sequential and
Figure 3: Model performance (hits@1) and histograms of number of students across slices (a) socio-economic indicator, (b) classroom reading score and (c) Metro code which describes schools as being Urban, Suburban, Rural or Town. We see the relative order of performance Temporally Causal \(>\) Individual \(\gg\) KNN \(\gg\) SAMN is maintained across all slices. Uncertainties shown are \(95\%\) confidence intervals. Note: (a) and (c) uncertainties are shown as error bars but are very small.
non-sequential, and social and non-social. We show that STUDY outperforms alternative sequential and social methods, in a variety of scenarios, as demonstrated in ablation studies.
**Limitations**: Evaluations were limited to offline evaluations on historical data, in line with much of the literature. However, these evaluations cannot account for the utility of recommended items that the user has never interacted with in the past but would have actually enjoyed. Furthermore, our method is limited to structures where all the known relationships between users are homogeneous: each student in the classroom is assumed to have the same relationship with every other student. Given that social dynamics in reality are more complicated, in future work we wish to explore extending this method to social networks with richer heterogeneous relationships between users, where proximity between users can vary within a classroom.
|
2301.12452 | Cox rings of morphisms and resolution of singularities | We extend the Cox-Hu-Keel construction of the Cox rings to any proper
birational morphisms of normal noetherian schemes. It allows the representation
of any proper birational morphism by a map of schemes with mild singularities
with torus actions. In a particular case, the notion generalizes the
combinatorial construction of Satriano and the recent construction of multiple
weighted blow-ups on Artin-stacks by Abramovich-Quek. The latter can be viewed
as an extension of stack theoretic blow-ups by Abramovich, Temkin, and
Wlodarczyk, a similar construction of McQuillan and the author's recent
cobordant blow-ups at weighted centers to a more general situation of arbitrary
locally monomial centers. We show some applications of this operation to the
resolution of singularities over a field of any characteristic. | Jarosław Włodarczyk | 2023-01-29T14:21:42Z | http://arxiv.org/abs/2301.12452v2 | # Cox rings of morphisms and resolution of singularities
###### Abstract.
We extend the Cox-Hu-Keel construction of the Cox rings to any proper birational morphisms of normal noetherian schemes. It allows the representation of any proper birational morphism by a map of schemes with mild singularities with torus actions.
In a particular case, the notion generalizes the combinatorial construction of Satriano [12] and the recent construction of multiple weighted blow-ups on Artin-stacks by Abramovich-Quek [1].
The latter can be viewed as an extension of stack theoretic blow-ups by Abramovich, Temkin and Wlodarczyk [1], a similar construction of McQuillan [1], and the author's recent cobordant blow-ups [20] at weighted centers to a more general situation of arbitrary
We show some applications of this operation to the resolution of singularities over a field of any characteristic.
This research is supported by BSF grant 2014365
## 1. Introduction
The importance of \(G_{m}\)-actions in birational geometry and their connection with the Mori theory was already discovered by Reid, Thaddeus, and many others (see [13], [14], [15], [16], [2]). This was also reflected in the proof of the Weak Factorization theorem, which relied on the notion of _birational cobordism_ and a critical role of \(G_{m}\)-action [20], [21], [1].
The idea of the birational cobordism from [20] is to construct a smooth variety with \(G_{m}\)-action which represents a proper birational morphism and parametrizes possible birational elementary modifications such as blow-ups, blow-downs, and flips. This allows decomposing the proper birational maps of smooth varieties into a sequence of blow-ups and blow-downs with smooth centers.
A similar idea was considered shortly after by Hu-Keel [1], who constructed their _Mori dream space_, parametrizing possible birational modifications in the Mori program via torus actions. The Mori dream space plays a vital role in the Mori theory. One of the key ingredients in constructing the Mori dream space is the Cox rings.
Recall that the Cox rings for toric varieties were considered first by Cox in [1]. The main idea of the construction comes from the convex geometry: Any polyhedral complex can be realized as the image of the simplicial complex. Similarly, any fan in toric geometry can be represented as the image of the subfan of a regular cone. This simple observation leads to the fundamental formula describing
the _Cox coordinate ring_ of tor the toric variety \(X\) as
\[\mathcal{C}(X):=\bigoplus_{D\in\operatorname{Cl}(X)}H^{0}(X,\mathcal{O}_{X}(D)),\]
where \(\operatorname{Cl}(X)\) is the Weil divisor class group. The action of torus \(T=\operatorname{Spec}\mathbb{Z}[\operatorname{Cl}(X)]\) naturally occurs in the construction, and is determined by the \(\operatorname{Cl}(X)\)-gradation.
The Cox formula generalizes the construction of the _coordinate ring_ of the projective scheme \(X=\mathbb{P}^{n}_{\mathbb{Z}}\), namely
\[\mathbb{Z}[x_{0},\dots,x_{n}]=\bigoplus_{n\in\mathbb{Z}}H^{0}(X,\mathcal{O}_{X }(n))\]
The projective space \(X=\mathbb{P}^{n}\) can be seen as the geometric quotient of the _characteristic space_
\[\hat{X}=\operatorname{Spec}_{X}(\bigoplus_{D\in\operatorname{Cl}(X)}\mathcal{ O}_{X}(D)\to X,\]
introduced in [1]. The characteristic space \(\hat{X}\) comes with the natural embedding \(\hat{X}\hookrightarrow\overline{X}\) into the coordinate space:
\[\overline{X}:=\operatorname{Spec}(\bigoplus_{D\in\operatorname{Cl}(X)}H^{0} (X,\mathcal{O}_{X}(D))\]
In particular, for \(X=\mathbb{P}^{n}\) we obtain
\[\hat{X}=\mathbb{A}^{n+1}_{\mathbb{Z}}\smallsetminus\{0\}\hookrightarrow \overline{X}=\mathbb{A}^{n+1}_{\mathbb{Z}}\]
This leads to the standard \(Proj\) -construction:
\[\begin{array}{ccc}Proj(\mathbb{Z}[x_{0},\dots,x_{n}])&=&(\operatorname{Spec}(\mathbb{Z}[x_{0},\dots,x_{n}])\smallsetminus V(x_{0},\dots,x_{n}))/G_{m}\\ \parallel&&\parallel\\ X&=&\hat{X}/T.\end{array}\]
The Cox rings have found many different applications in algebraic geometry. (See [1], [2], [1], [1], [1], [1], [2], [3], [4], [5], [6], [7], [8], [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19]). In particular, they were used to study minimal resolutions of singularities (see [15], [15], [16], [17], [20], [21]). In the recent paper [1] the Cox rings were defined for the morphisms of log pairs.
In this paper, we introduce the idea of Cox rings of proper birational morphisms of arbitrary normal noetherian schemes and propose a more general approach to embedded resolution problems in the language of torus actions, extending the ideas of the weighted resolution of McQuillan [16] and Abramovich-Temkin-Wlodarczyk [1], and of the multiple weighted resolutions of Abramovich-Quek [1].
The idea of utilizing group actions to resolve singularities is ancient and can be traced back to Newton. In the method that he developed, known later as the Newton-Puiseux theorem, he shows that any polynomial function \(f(x,y)\) on \(X=\mathbb{C}^{2}\) whose expansion contains the term \(y^{r}\) can, upon a coordinate change, be resolved by a Newton-Puiseux series \(y=g(x^{1/k})\). In other words, one considers the space \(X^{\prime}=\mathbb{C}^{2}\) with the group action of \(\mu_{k}=\langle\xi\rangle\), \(\xi(x,y)=(\xi\cdot x,y)\), giving the quotient \(X^{\prime}\to X\), \((x,y)\mapsto(x^{k},y)\), and a smooth holomorphic branch \(V(y-g(x^{1/k}))\) on \(X^{\prime}\) parametrizing the subspace \(V(f)\) on \(X\).
Originally in the Hironaka embedded resolution, only smooth centers were used (see [10],[11],[12],[13],[14], [15]). In the recent papers [1], [1] in the resolution process of logarithmic schemes and morphisms, we considered the stack-theoretic blow-ups of the centers of the form
\[\mathcal{J}=(u_{1},\dots,u_{k},m_{1}^{1/w_{1}},\dots,m_{r}^{1/w_{r}}),\]
in the context of Kummer etale topology on the logarithmic stacks. The functorial properties of the algorithm of logarithmic resolution of morphisms dictated such general centers.
Then in [1], we developed the formalism of the stack-theoretic blow-ups of weighted centers of the form \((u_{1}^{1/w_{1}},\dots,u_{k}^{1/w_{k}})\). This approach allows one to simplify the resolution procedure in characteristic zero. The algorithm is more efficient and avoids many unnecessary blow-ups, reducing technicalities. It uses a very simple geometric invariant, which improves after each step and is independent of the logarithmic structure. A similar result was obtained by McQuillan in [14]. More general centers were considered in the paper [13] of Quek in the logarithmic context.
In the work [1] of Abramovich-Quek, the authors introduce _multi-weighted blow-ups_, further extending the results in [13]. The multi-weighted blow-ups generalize the weighted blow-ups and are used to obtain a smooth and toroidal version of resolution on Artin stacks (see Section 5.5). The Abramovich-Quek weighted blow-up generalizes the Satriano toroidal construction on logarithmically smooth Artin stacks in [10] to locally monomial ideals.
Subsequently in the paper [14] the operation of _cobordant blow-up_\(B_{+}\to X\) with weighted centers \(\mathcal{J}=(u_{1}^{1/w_{1}},\dots,u_{k}^{1/w_{k}})\) was introduced, where \(u_{1},\dots,u_{k}\) is a partial system of local parameters
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},t^{w_{1}}x_{1},\dots,t^{w_{ k}}x_{k}]),\quad B_{+}=B\smallsetminus V(x_{1}t^{w_{1}},\dots,x_{k}t^{w_{k}}), \tag{1}\]
where \(t\) is an introduced unknown. A similar formula was discovered by Rydh in [14] and studied in the context of stack-theoretic blow-ups. Moreover, a certain relation between the toric Cox construction and toric weighted cobordant blow-ups was already observed in [14] and [13].
The operation of cobordant blow-up allows one to represent stack-theoretic weighted blow-ups and more general Kummer blow-ups in terms of smooth varieties with torus action, without stack-theoretic language. Moreover, apart from fast functorial resolution with SNC divisors in characteristic zero, the approach leads to the resolution of some classes of singularities in positive and mixed characteristic (see [13]).
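To fix ideas, here is a minimal instance of formula (1) (our own illustration, not an example taken from the cited works): take \(X=\mathbb{A}^{2}=\operatorname{Spec}k[x_{1},x_{2}]\) and the weighted center \(\mathcal{J}=(x_{1}^{1/2},x_{2}^{1/3})\), so \(w_{1}=2\) and \(w_{2}=3\). Then

\[B=\operatorname{Spec}k[x_{1},x_{2}][t^{-1},t^{2}x_{1},t^{3}x_{2}],\qquad B_{+}=B\smallsetminus V(t^{2}x_{1},t^{3}x_{2}),\]

with \(G_{m}\) acting through the grading by powers of \(t\), and the good quotient \(B_{+}/\!\!/\,G_{m}\) recovers the weighted blow-up of \(\mathbb{A}^{2}\) at this center.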
In the present paper, we associate the _Cox coordinate ring_ to arbitrary proper birational morphisms \(\pi:Y\to X\) of normal noetherian schemes as follows:
\[\mathcal{A}_{Y/X}:=\pi_{*}(\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{O} _{Y}(E)),\]
where \(\operatorname{Cl}(Y/X)\subset\operatorname{Cl}(Y)\) is a free group generated by the exceptional divisors. It comes with the coaction of the associated torus \(T=\operatorname{Spec}(\mathbb{Z}[\operatorname{Cl}(Y/X)])\).
Per analogy with the standard Cox construction, we call the space
\[B:=\operatorname{Spec}_{X}(\pi_{*}(\bigoplus_{E\in\operatorname{Cl}(Y/X)} \mathcal{O}_{Y}(E)) \tag{2}\]
the _relative Cox coordinate space_. The scheme
\[B_{+}:=\operatorname{Spec}_{Y}(\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{O}_{Y}(E)) \tag{3}\]
will be called the _relative Cox characteristic space_.
In this language, any proper birational morphism \(\pi:Y\to X\) can be represented by a \(T\)-equivariant morphism \(B_{+}\to B\) such that the induced morphism of the good quotients coincides with \(\pi:Y\to X\):
\[\begin{array}{ccc}B_{+}/\!\!/\,T&\to&B/\!\!/\,T\\ \parallel&&\parallel\\ Y&\stackrel{\pi}{\to}&X.\end{array}\]
As in the standard construction, the morphism \(B_{+}\subset B\) is an open immersion under some reasonable assumptions.
If \(Y\to X\) is the blow-up of an ideal \(\mathcal{J}\) on \(X\), the associated presentation \(B_{+}/\!\!/\,T\to X\) can be thought of as the normalized extended \(Proj\) introduced by Swanson-Huneke [10]:
\[B_{+}/\!\!/\,T=Proj_{X}(\mathcal{O}_{X}[\mathcal{J}t,t^{-1}])^{\operatorname{nor}}.\]
(See Section 5.2.)
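As a simple consistency check (our own computation): let \(\pi:Y\to X\) be the blow-up of the origin in \(X=\mathbb{A}^{2}=\operatorname{Spec}k[x,y]\), with exceptional divisor \(E\), so \(\operatorname{Cl}(Y/X)=\mathbb{Z}E\). Using \(\pi_{*}\mathcal{O}_{Y}(-nE)=\mathfrak{m}^{n}\) for \(n\geq 0\), where \(\mathfrak{m}=(x,y)\), and \(\pi_{*}\mathcal{O}_{Y}(nE)=\mathcal{O}_{X}\) for \(n\geq 0\), and writing \(t\) for the character corresponding to \(-E\), one finds

\[\mathcal{A}_{Y/X}=\bigoplus_{n\in\mathbb{Z}}\pi_{*}\mathcal{O}_{Y}(nE)\cong\mathcal{O}_{X}[t^{-1},tx,ty]=\mathcal{O}_{X}[\mathfrak{m}t,t^{-1}],\]

the extended Rees algebra of \(\mathfrak{m}\), in agreement both with formula (1) for weights \((1,1)\) and with the \(Proj\) description above.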
Note that the morphism \(B\to X\) is affine and is locally described by a single chart. The spaces \(B_{+}\) and \(B\) usually have milder singularities and simpler descriptions, and the morphism \(B_{+}\subset B\) is much simpler than the original \(\pi:Y\to X\). As in the standard Cox construction, the semiinvariant functions on \(B_{+}\) and \(B\) can be interpreted as forms on \(Y\) and are convenient for computations.
For instance, the construction can be applied to normalized blow-ups of locally monomial centers, leading to general classes of modifications of singularities of subschemes and ideals that preserve regular ambient schemes.
Given a locally toric or simply regular scheme \(X\) over a field and any locally toric proper birational morphism \(\pi:Y\to X\), one associates with \(\pi\) a morphism of Cox regular spaces \(B_{+}\subset B\), where
\[B=\operatorname{Spec}(\mathcal{O}_{X}[t_{1}^{-1},\dots,t_{k}^{-1},u_{1} \mathbf{t}^{\alpha_{1}},\dots,u_{k}\mathbf{t}^{\alpha_{k}}])\]
In particular, one represents the normalized blow-up of any locally monomial \(\mathcal{J}\) by a _smooth cobordant blow-up_ \(B_{+}\to X\) _of_ \(\mathcal{J}\) equipped with a torus action. The formula generalizes the weighted cobordant blow-up introduced in [23], with \(B_{+}=B\smallsetminus V(\mathcal{J}\mathbf{t}^{\alpha})\), for the corresponding multi-indices \(\alpha,\alpha_{1},\dots,\alpha_{k}\). It also leads to a version of _the multi-weighted blow-up_ of [1, Definition 2.1.6] when considering the stack-theoretic quotient \([B_{+}/\!\!/\ T]\).
One can think of this approach as an extension of the resolution by cobordant blow-ups with weighted centers to more general locally monomial ideals or \(\mathbb{Q}\)-ideals.
When replacing the group \(\operatorname{Cl}(Y/X)\) with a subgroup \(\Gamma\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\) in the formulas (2) and (3), one further generalizes the construction. We obtain
\[B^{\Gamma}:=\operatorname{Spec}_{X}(\pi_{*}(\bigoplus_{E\in\Gamma}\mathcal{O}_{Y}(E))),\quad B^{\Gamma}_{+}:=\operatorname{Spec}_{Y}(\bigoplus_{E\in\Gamma}\mathcal{O}_{Y}(E)).\]
This generalized construction can be linked to the weighted cobordant blow-ups as in [23] (See Section 5.4). In particular, let \(\pi:Y\to X\) be the weighted blow-up of a regular scheme with the \(\mathbb{Q}\)-ideal center \(\mathcal{J}=(u_{1}^{1/w_{1}},\dots,u_{k}^{1/w_{k}})\). This
is simply the normalized blow-up of the ideal \(\mathcal{J}^{(a)}:=(u_{1}^{a/w_{1}},\dots,u_{k}^{a/w_{k}})\), with the exceptional irreducible \(\mathbb{Q}\)-Cartier divisor \((1/a)E_{a}\) with \(\mathcal{O}_{Y}(-E_{a})=\mathcal{O}_{Y}\cdot\mathcal{J}^{(a)}\), where \(a\) is any positive integer such that \(w_{i}|a\). The cobordant blow-up of \(\mathcal{J}^{(a)}\) with respect to the group \(\Gamma=\mathbb{Z}\cdot\frac{1}{a}E_{a}\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\) gives the formula (1) for the cobordant weighted blow-up of \(\mathcal{J}=(u_{1}^{1/w_{1}},\dots,u_{k}^{1/w_{k}})\). Note that the above definition does not depend upon the choice of \(a\), and the \(\mathbb{Q}\)-Cartier divisor \((1/a)E_{a}\) can be interpreted as the divisor corresponding to the \(\mathbb{Q}\)-ideal \(\mathcal{O}_{Y}\cdot\mathcal{J}=\mathcal{O}_{Y}(-E_{a})^{1/a}\).
More generally, let \(\mathcal{J}\) be a locally monomial center, and \(\pi:Y\to X\) be the normalized blow-up of \(\mathcal{J}\). Denote by \(E_{1},\dots,E_{k}\) the exceptional divisors of \(\pi\). The cobordant blow-up of \(\mathcal{J}\) with respect to the subgroup
\[\Gamma=\mathbb{Z}\frac{1}{b_{1}}E_{1}\oplus\dots\oplus\mathbb{Z}\frac{1}{b_{k }}E_{k}\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q},\]
generated by \(\frac{1}{b_{1}}E_{1},\dots,\frac{1}{b_{k}}E_{k}\), where \(b_{1},\dots,b_{k}\) are any positive integers, leads to the _multiple weighted blow-up_, considered by Abramovich-Quek in [1]. It can be understood as the fantastack associated with the stack-theoretic quotient \([B_{+}\mathbin{/\!\!/}T]\) (See Section 5.5). Since the stabilizers of the action are not finite, in general, one obtains an Artin stack as the stack-theoretic quotient.
Note that in the resolution process of hypersurfaces, one often considers locally the corresponding Newton polytope. It is naturally associated with a certain coordinate system and gives rise to a locally monomial center. In a more general setting, the Newton polytope is replaced with the _dual valuation complex_ of the locally monomial center. We show some conditions on singularities under which the cobordant blow-up of such a center immediately resolves singularities (see Theorems 4.4.5, 4.5.9, 4.5.11, 4.6.5, 4.6.9, 4.8.1, 4.8.2). The particular resolution methods and theorems extend the relevant results for the weighted cobordant blow-ups in [20]. As Corollary 4.6.1, we obtain Abramovich-Quek's [1, Theorem 5.1.2].
The resolution algorithm outputs a regular scheme with a torus action which admits a good quotient with locally toric singularities, birational to the original scheme. It can be directly resolved by the canonical combinatorial methods in any characteristic as in [20, Theorem 7.17.1]. Alternatively, by Proposition 3.5.2, one can always replace in the resolution process each \(B_{+}\) with an open stable subset \(B^{s}\) admitting a geometric quotient, and then apply the destackification method of Bergh-Rydh [10]. It is also possible to use the canonical reduction of stabilizers due to Edidin-Rydh [1], and then the destackification method of Bergh-Rydh [10].
#### 1.0.1. Acknowledgements
The author would like to thank Dan Abramovich, Jürgen Hausen, Antonio Laface, Michael Temkin, Ilya Tyomkin, and Jarosław Wiśniewski for helpful discussions and suggestions.
### Preliminaries
The definition of Cox spaces of morphisms is similar, with some important differences, to the notion of Cox spaces of varieties, as presented in [1]. We shall assume that all the schemes considered in the paper are noetherian.
#### 1.1.1. Construction of Cox sheaves
Given a proper birational morphism \(\pi:Y\to X\) of normal integral schemes, consider the free group \(\operatorname{Cl}(Y/X)\subset\operatorname{Div}(Y)\) generated by the images of the exceptional irreducible divisors \(E_{i}\). It can be identified with the kernel of the surjective morphism \(\pi_{*}:\operatorname{Cl}(Y)\to\operatorname{Cl}(X)\).
**Definition 1.1.2**.: By _the relative Cox ring_ we mean the sheaf of graded \(\mathcal{O}_{Y}\)-algebras
\[\mathcal{C}_{Y/X}=\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{C}_{E}= \bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{O}_{Y}(E),\]
graded by \(\operatorname{Cl}(Y/X)\), where \(C_{E}:=\mathcal{O}_{Y}(E)\) for
\[\mathcal{O}_{Y}(E)(U)=\{f\in\kappa(Y)\mid(\operatorname{div}_{Y}(f)+E)_{|U} \geq 0\}\subset\kappa(Y)=\kappa(X).\]
Note that \(\mathcal{C}_{0}=\mathcal{O}_{Y}\). One can introduce the dummy variables \(t=(t_{1},\ldots,t_{k})\) so that \(E_{i}\) corresponds to \(t_{i}^{-1}\) and \(E\mapsto\mathbf{t}^{E}\). This defines the isomorphism of the gradings:
\[\operatorname{Cl}(Y/X)\simeq\{\mathbf{t}^{-\alpha}\mid\alpha\in\mathbb{Z}^{k }\}\simeq\mathbb{Z}^{k}\]
Using this notation, we can write
\[\mathcal{C}_{Y/X}=\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{C}_{E}\mathbf{t}^{E}=\bigoplus_{\alpha=(a_{1},\dots,a_{k})\in\mathbb{Z}^{k}}\mathcal{C}_{\alpha}\cdot t_{1}^{a_{1}}\cdot\ldots\cdot t_{k}^{a_{k}}\subseteq\bigoplus_{E\in\operatorname{Cl}(Y/X)}\kappa(Y)\mathbf{t}^{E}.\]
#### 1.1.3. Forms
As mentioned, the relative Cox ring construction, similarly to the absolute case, is analogous to the coordinate ring \(\mathbb{Z}[x_{0},\ldots,x_{n}]\) on projective space \(X=\mathbb{P}_{\mathbb{Z}}^{n}\). One can choose a very ample divisor, for instance \(D=V(x_{0})\), and identify the functions \(f=F(x_{0},\ldots,x_{n})/x_{0}^{n}\in\mathcal{O}_{X}(nD)\), where \(F\) is homogeneous of degree \(n\), with the forms \(F(x_{0},\ldots,x_{n})\), so that the vanishing locus \(V(F)\) equals
\[V(F)=V_{X}(F)=\operatorname{div}(f)+nD.\]
Per this analogy, and as in [1], the elements in \(\mathcal{C}_{E}\) will be called _forms of degree \(E\)_ on \(Y\) and can be written formally as \(F=f\mathbf{t}^{E}\), where \(f\in\mathcal{O}_{Y}(E)\), with the natural componentwise operation of addition and multiplication. We also define the _divisor of the form \(F=f\mathbf{t}^{E}\)_ on \(Y\) as \(\operatorname{div}_{Y}(F)=\operatorname{div}_{Y}(f)+E\), and its _vanishing locus on \(Y\)_ to be
\[V_{Y}(F):=\operatorname{supp}(\operatorname{div}_{Y}(f)+E).\]
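For a simple illustration, let \(\pi:Y\to X=\mathbb{A}^{2}_{\kappa}\) be the blow-up of the origin with the exceptional divisor \(E\). Then \(\operatorname{div}_{Y}(x_{1})=\widetilde{D}_{1}+E\), where \(\widetilde{D}_{1}\) is the strict transform of \(V(x_{1})\), so \(x_{1}\in H^{0}(Y,\mathcal{O}_{Y}(-E))\), and \(F=x_{1}\mathbf{t}^{-E}\) is a form of degree \(-E\) with \(\operatorname{div}_{Y}(F)=\widetilde{D}_{1}\) and \(V_{Y}(F)=\widetilde{D}_{1}\).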
#### 1.1.4. Exceptional valuations
By the _exceptional valuations_ of \(\pi:Y\to X\) we shall mean the valuations \(\nu_{1},\ldots,\nu_{k}\) of \(\kappa(X)=\kappa(Y)\) associated with the generic points of the exceptional divisors \(E_{1},\ldots,E_{k}\) of \(\pi\).
These valuations define ideals \(\mathcal{I}_{\nu,a,X}\subset\mathcal{O}_{X}\) on \(X\) for \(a\in\mathbb{Z}\), generated by the functions \(f\in\mathcal{O}_{X}\), with \(\nu(f)\geq a\). In particular \(\mathcal{I}_{\nu,a}=\mathcal{O}_{X}\) if \(a\leq 0\).
**Lemma 1.1.5**.: _Let \(E=\sum n_{i}E_{i}\) correspond to \(t_{1}^{-n_{1}}\cdot\ldots\cdot t_{k}^{-n_{k}}\). Then_
1. \(\pi_{*}(\mathcal{O}_{Y}(E_{i}))=\mathcal{O}_{X}\)_._
2. _If all_ \(n_{i}\geq 0\) _then_ \(\pi_{*}(\mathcal{O}_{Y}(E))=\mathcal{O}_{X}\)_._
3. _If there is_ \(n_{i}<0\)_, then_ \[\pi_{*}(\mathcal{O}_{Y}(E))=\bigcap_{n_{i}<0}\mathcal{I}_{\nu_{i},-n_{i},X}= \bigcap_{i=1}^{k}\mathcal{I}_{\nu_{i},-n_{i},X}.\]
Proof.: First, since \(\pi:Y\to X\) is proper, birational and \(X\) is normal, we have \(\pi_{*}(\mathcal{O}_{Y})=\mathcal{O}_{X}\).
We can reduce the situation to the case when \(X\) is affine since the problem is local on \(X\). Then
\[g\in\mathcal{O}_{Y}(E)(\pi^{-1}(X))\subset\kappa(X)=\kappa(Y)\]
if and only if
\[\operatorname{div}_{Y}(g)+E\geq 0.\]
This implies that \(\operatorname{div}_{U}(g)\geq 0\), where \(U:=Y\smallsetminus(\bigcup E_{i})\), and \(U\subset X\), where \(X\smallsetminus U\) is of codimension \(\geq 2\). Thus \(\operatorname{div}_{X}(g)\geq 0\). So, since \(X\) is normal, we get \(g\in\pi_{*}(\mathcal{O}_{Y})=\mathcal{O}_{X}\), whence \(\pi_{*}(\mathcal{O}_{Y}(E))\subseteq\mathcal{O}_{X}\).
**(1)** and **(2)** If \(E=\sum n_{i}E_{i}\) with \(n_{i}\geq 0\) then
\[\mathcal{O}_{X}=\pi_{*}(\mathcal{O}_{Y})\subseteq\pi_{*}(\mathcal{O}_{Y}(E)) \subseteq\mathcal{O}_{X}.\]
**(3)** In general, \(g\in\pi_{*}(\mathcal{O}_{Y}(E))\subseteq\mathcal{O}_{X}\) iff \(\operatorname{div}_{Y}(g)+E\geq 0\). This translates into \(\operatorname{div}_{Y}(g)+\sum_{n_{i}<0}n_{i}E_{i}\geq 0\) by part (2). Thus \(\nu_{i}(g)\geq-n_{i}\) for all \(n_{i}<0\), which yields
\[g\in\bigcap_{n_{i}<0}\mathcal{I}_{\nu_{i},-n_{i},X}=\bigcap_{i=1}^{k} \mathcal{I}_{\nu_{i},-n_{i},X}.\]
We use here the fact that by definition \(\mathcal{I}_{\nu_{i},-n_{i}}=\mathcal{O}_{X}\) if \(n_{i}\geq 0\).
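To illustrate the lemma: for the blow-up \(\pi:Y\to X=\mathbb{A}^{2}_{\kappa}\) of the origin with the exceptional divisor \(E\), the valuation \(\nu=\nu_{E}\) is the order of vanishing at the origin, so \(\mathcal{I}_{\nu,n,X}=\mathfrak{m}^{n}\) for \(n>0\), where \(\mathfrak{m}=(x_{1},x_{2})\). The lemma then gives \(\pi_{*}(\mathcal{O}_{Y}(nE))=\mathcal{O}_{X}\) and \(\pi_{*}(\mathcal{O}_{Y}(-nE))=\mathfrak{m}^{n}\) for \(n\geq 0\).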
### Cox coordinate space
#### 1.2.1. Cox algebra
As a corollary from Lemma 1.1.5, we obtain
**Proposition 1.2.2**.: _Let \(\pi:Y\to X\) be a proper birational morphism of normal irreducible schemes. Assume that \(E_{1},\ldots,E_{k}\) are the irreducible exceptional divisors of \(\pi\), and \(\nu_{i}\) are the associated valuations. Then the direct image \(\pi_{*}(\mathcal{C}_{Y/X})\) of the relative Cox ring is a \(\operatorname{Cl}(Y/X)=\mathbb{Z}^{k}\)-graded \(\mathcal{O}_{X}\)-algebra:_
\[\mathcal{A}_{Y/X}:=\pi_{*}(\mathcal{C}_{Y/X})=\bigoplus_{a_{i}\in\mathbb{Z}} \ \bigcap_{i=1}^{k}\mathcal{I}_{\nu_{i},a_{i}}\ \cdot t_{1}^{a_{1}}\cdot\ldots\cdot t_{k}^{a_{k}}\subset \mathcal{O}_{X}[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}],\]
_where \(E_{i}\) correspond to \(t_{i}^{-1}\). _
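Continuing the example of the blow-up of the origin in \(X=\mathbb{A}^{2}_{\kappa}\), the proposition yields
\[\mathcal{A}_{Y/X}=\bigoplus_{a\in\mathbb{Z}}\mathfrak{m}^{\max(a,0)}\,t^{a}=\mathcal{O}_{X}[t^{-1},tx_{1},tx_{2}],\]
the extended Rees algebra of \(\mathfrak{m}=(x_{1},x_{2})\), in agreement with the formula (1) for trivial weights.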
#### 1.2.3. Cox coordinate space
**Definition 1.2.4**.: Let \(\pi:Y\to X\) be a proper birational morphism of normal integral schemes. The _Cox relative coordinate space_ is the scheme
\[B=\operatorname{Cox}(Y/X):=\operatorname{Spec}_{X}(\mathcal{A}_{Y/X}),\]
over \(X\) with the natural action of \(T_{B}=\operatorname{Spec}\mathbb{Z}[\operatorname{Cl}(Y/X)]\). The _Cox relative characteristic space_ is the space
\[B_{+}=\operatorname{Cox}(Y/X)_{+}:=\operatorname{Spec}_{Y}(\mathcal{C}_{Y/X}).\]
over \(Y\). The _Cox trivial space_ is given by
\[B_{-}:=B\smallsetminus V_{B}(t_{1}^{-1}\cdot\ldots\cdot t_{k}^{-1}).\]
#### 1.2.5. Good and geometric quotient
We consider here a relatively affine action of
\[T=\operatorname{Spec}(\mathbb{Z}[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}])\]
on a scheme \(X\) over \(\mathbb{Z}\). By the _good quotient (or GIT-quotient)_ of \(X\) by \(T\) we mean an affine \(T\)-invariant morphism
\[\pi:X\to Y=X\mathbin{/\!\!/}\,T\]
such that the induced morphism of the sheaves \(\mathcal{O}_{Y}\to\pi_{*}(\mathcal{O}_{X})\) defines the isomorphism onto the subsheaf of invariants \(\mathcal{O}_{Y}\simeq\pi_{*}(\mathcal{O}_{X})^{T}\subset\pi_{*}(\mathcal{O}_{ X})\).
Then \(\pi:X\to Y=X/T\) will be called the _geometric quotient_ if additionally every fiber \(X_{\overline{y}}\) of \(\pi\) over a geometric point \(\overline{y}:\operatorname{Spec}(\overline{\kappa})\to Y\) defines a single orbit of the action of \(T_{\overline{\kappa}}=T\times_{\kappa}\operatorname{Spec}(\overline{\kappa})=\operatorname{Spec}(\overline{\kappa}[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}])\) on \(X_{\overline{y}}\).
**Lemma 1.2.6**.: _Let \(\pi:X\to Y=X\mathbin{/\!\!/}\,T\) be a good quotient of integral schemes of a relatively affine action of the torus \(T\). Then \(\pi\) is surjective. Moreover, the inverse image \(\pi^{-1}(Z)\subset X\) of a closed connected subscheme \(Z\subset Y\) is connected._
Proof.: The problem reduces to the affine situation \(\pi:X=\operatorname{Spec}(A)\to\operatorname{Spec}A^{T}\). Then the coaction of \(T\) on \(A\) determines the gradation
\[A=\bigoplus_{\alpha\in\mathbb{Z}^{n}}A_{\alpha}\mathbf{t}^{\alpha},\]
where \(A^{T}=A_{0}\). Then for any prime ideal \(p\subset A^{T}=A_{0}\), the extended ideal \(pA\) in \(A\) is proper, and \(p=pA\cap A_{0}\) is a contracted ideal. This implies that \(\pi\) is surjective.
Let \(I\subset A_{0}\) be an ideal such that the scheme \(\operatorname{Spec}(A_{0}/I)\) is connected. Suppose that for the ideal
\[IA=\bigoplus_{\alpha\in\mathbb{Z}^{n}}IA_{\alpha}\mathbf{t}^{\alpha}\]
of \(A\) the space \(\operatorname{Spec}(A/IA)\) is disconnected. Then there is a nontrivial ring decomposition \(A/IA=A^{\prime}\oplus A^{\prime\prime}\); the corresponding idempotents are \(T\)-invariant, so the decomposition is graded, and \((A/IA)_{0}=A^{\prime}_{0}\oplus A^{\prime\prime}_{0}\). Since \(\operatorname{Spec}((A/IA)_{0})=\operatorname{Spec}(A_{0}/I)\) is connected, either \(A^{\prime}_{0}=0\) or \(A^{\prime\prime}_{0}=0\). Consequently, either \(A^{\prime}=0\) or \(A^{\prime\prime}=0\), and the decomposition is trivial.
**Lemma 1.2.7**.: _The natural morphisms_
\[\pi_{B}:B\to B\mathbin{/\!\!/}\,T_{B}\simeq X,\quad\pi_{B_{+},Y}:B_{+}\to B_ {+}\mathbin{/\!\!/}\,T_{B}\simeq Y\]
_are good quotients._
Proof.: \[B/\!\!/\ T_{B}=\operatorname{Spec}_{X}(\mathcal{A}_{Y/X}^{T_{B}})=\operatorname{Spec}_{X}(\mathcal{O}_{X})=X,\qquad B_{+}/\!\!/\ T_{B}=\operatorname{Spec}_{Y}(\mathcal{C}_{Y/X}^{T_{B}})=\operatorname{Spec}_{Y}(\mathcal{O}_{Y})=Y.\]
#### 1.2.8. Exceptional divisors on \(B=\operatorname{Cox}(Y/X)\)
Let \(\pi:Y\to X\) be a proper birational morphism of normal schemes. Using the natural birational morphism \(i_{B}:B_{+}\to B\), one can interpret the notion of the exceptional divisors of \(B\).
Any exceptional divisor \(E_{i}\) on \(Y\) defines a canonical form
\[F_{i}=t_{i}^{-1}=\mathbf{t}^{E_{i}}\in\mathcal{O}_{Y}(E_{i})\mathbf{t}^{E_{i}}\subset\mathcal{C}_{Y/X}\]
on \(Y\) of degree \(E_{i}\), which vanishes on \(V_{Y}(F_{i})=E_{i}\). The form \(t_{i}^{-1}\) also defines a regular homogeneous function \(t_{i}^{-1}\) on \(B\) of degree \(E_{i}\). Its divisor \(D_{i}:=\operatorname{div}_{B}(t_{i}^{-1})\) on \(B\) determines the divisor \(D_{i+}:=D_{i|B_{+}}=\operatorname{div}_{B_{+}}(t_{i}^{-1})\) on \(B_{+}\) which maps to \(E_{i}\).
**Lemma 1.2.9**.: _The natural quotient morphism \(\pi_{B_{+},Y}:B_{+}\to Y\) (respectively \(\pi_{B}:B\to X\)) takes the exceptional divisor \(D_{i+}=V_{B_{+}}(t_{i}^{-1})\) (respectively \(D_{i}=V_{B}(t_{i}^{-1})\)) surjectively onto \(E_{i}\) (respectively onto the center \(Z_{X}(\nu_{i})=V_{X}(\mathcal{I}_{\nu_{i},1,X})\) of the valuation \(\nu_{i}\)). Moreover the induced morphism \(D_{i+}\to E_{i}\) (resp. \(D_{i}\to Z_{X}(\nu_{i})\)) is defined by the good quotient. \(\clubsuit\)_
Proof.: \[D_{i+}=V_{B_{+}}(t_{i}^{-1})=\operatorname{Spec}_{Y}(\mathcal{C}_{Y/X}/t_{i}^{-1}\mathcal{C}_{Y/X})=\operatorname{Spec}_{Y}(\bigoplus_{E}(\mathcal{O}_{Y}(E)/\mathcal{O}_{Y}(E-E_{i}))\mathbf{t}^{E})\quad\to\]

\[\to\quad\operatorname{Spec}_{Y}((\mathcal{C}_{Y/X}/t_{i}^{-1}\mathcal{C}_{Y/X})_{0})=\operatorname{Spec}_{Y}(\mathcal{O}_{Y}/\mathcal{O}_{Y}(-E_{i}))=E_{i}.\]
Thus the morphism \(D_{i+}\to E_{i}\) is defined by the good quotient and is surjective by Lemma 1.2.6. The proof for the divisors \(D_{i}\) is similar. \(\clubsuit\)
**Definition 1.2.10**.: The divisors \(D_{i}=V_{B}(t_{i}^{-1})\), respectively \(D_{i+}=V_{B_{+}}(t_{i}^{-1})\) will be called the _exceptional divisors_ of \(B=\operatorname{Cox}(Y/X)\to X\), respectively of \(B_{+}\to Y\).
**Lemma 1.2.11**.: _The divisors \(D_{i}=V_{B}(t_{i}^{-1})\) on \(B\) and \(D_{i+}=V_{B_{+}}(t_{i}^{-1})\) on \(B_{+}\) are irreducible._
Proof.: By Lemma 1.2.6, the divisors \(D_{i}\) are connected, so it suffices to show that they are locally irreducible. We can assume that \(X\) is affine. It suffices to show that
\[t_{i}^{-1}=\mathbf{t}^{E_{i}}\in\mathcal{O}(B)=\mathcal{A}_{Y/X}(X)\]
is a prime element. The latter can be verified for the homogeneous elements. Let \(\nu_{i}\) be the valuation on \(X\) associated with \(E_{i}\subset Y\).
Suppose that
\[t_{i}^{-1}=\mathbf{t}^{E_{i}}\mid(\mathbf{t}^{E}\cdot f)(\mathbf{t}^{E^{\prime}}\cdot g)=\mathbf{t}^{E+E^{\prime}}fg,\]
where \(f\in\pi_{*}(\mathcal{O}_{Y}(E))(X)\) and \(g\in\pi_{*}(\mathcal{O}_{Y}(E^{\prime}))(X)\), and suppose that \(t_{i}^{-1}\) divides neither \(\mathbf{t}^{E}\cdot f\) nor \(\mathbf{t}^{E^{\prime}}\cdot g\). The first assumption implies that \(\mathbf{t}^{E+E^{\prime}-E_{i}}fg\in\mathcal{O}(B)\). So \(fg\in\pi_{*}(\mathcal{O}_{Y}(E+E^{\prime}-E_{i}))\).
Write the presentations \(E=\sum n_{j}E_{j}\) and \(E^{\prime}=\sum n_{j}^{\prime}E_{j}\). Then the assumption that \(t_{i}^{-1}\) does not divide \(\mathbf{t}^{E}f\) (respectively \(\mathbf{t}^{E^{\prime}}g\)) implies \(\nu_{i}(f)=-n_{i}\) (respectively \(\nu_{i}(g)=-n_{i}^{\prime}\)). Thus, by Proposition 1.2.2, and the assumptions on \(f\) and \(g\), we have
\[\nu_{i}(fg)>-n_{i}-n_{i}^{\prime}=\nu_{i}(f)+\nu_{i}(g),\]
which is a contradiction since \(\nu_{i}\) is a valuation. The same reasoning works for \(D_{i+}\). \(\clubsuit\)
#### 1.2.12. Morphisms of Cox spaces
The following result is analogous to [ADHL15, Construction 1.6.3.1] for the Cox spaces of varieties.
**Proposition 1.2.13**.: _Let \(\pi:Y\to X\) be a proper birational morphism of normal schemes, and \(T_{B}:=\operatorname{Spec}(\mathbb{Z}[\operatorname{Cl}(Y/X)])\). Let \(E\) be the exceptional divisor with the components \(E_{i}\). Denote by \(U_{\pi}:=Y\smallsetminus E\subset Y\) the open subset of \(Y\), which can be identified with the open subset of \(X\) where \(Y\to X\) is an isomorphism. Let \(\pi_{B}:B\to X\), and \(\pi_{B_{+},Y}:B_{+}\to Y\) be the natural projections._
_There is a natural \(T_{B}\)-equivariant birational morphism_
\[i_{B}:B_{+}=\operatorname{Cox}(Y/X)_{+}\to B=\operatorname{Cox}(Y/X).\]
_over \(X\), which is an isomorphism over \(U_{\pi}\), with_
\[\pi_{B}^{-1}(U_{\pi})=\pi_{B_{+},Y}^{-1}(U_{\pi})=U_{\pi}\times T_{B},\]
_and such that \(i_{B}^{-1}(D_{i})=D_{i+}\)._
_Moreover, the morphism \(i_{B}\) induces the morphism of the good quotients:_
\[\pi:B_{+}/\!\!/\ T_{B}=Y\ \to\ B/\!\!/\ T_{B}=X.\]
Proof.: For any open affine \(U\subset X\), we have the natural identifications
\[\Gamma(U,\pi_{*}(\mathcal{C}_{Y/X}))=\Gamma(\pi_{B}^{-1}(U),\mathcal{O}_{B})\]
and
\[\Gamma(U,\pi_{*}(\mathcal{C}_{Y/X}))=\Gamma(\pi^{-1}(U),\mathcal{C}_{Y/X})= \Gamma(\pi_{B_{+},Y}^{-1}(\pi^{-1}(U)),\mathcal{O}_{B_{+}})\]
Combining both equalities gives us:
\[\Gamma(\pi_{B}^{-1}(U),\mathcal{O}_{B})=\Gamma(\pi_{B_{+},Y}^{-1}(\pi^{-1}(U) ),\mathcal{O}_{B_{+}}).\]
Since \(\pi_{B}^{-1}(U)\subset\operatorname{Cox}(Y/X)\) is affine we obtain a natural morphism
\[\phi_{U}:\pi_{B_{+},Y}^{-1}(\pi^{-1}(U))\to\pi_{B}^{-1}(U)\]
over \(U\) induced by the isomorphisms on global sections. The constructed morphisms are functorial for open embeddings \(U\subset V\) of affine subsets on \(X\) and glue to a global morphism \(B_{+}\to B\).
The morphism \(B_{+}\to B\) is birational, since \(i_{B}\) is an isomorphism over \(U_{\pi}\subset X\):
\[\pi_{B}^{-1}(U_{\pi})=\pi_{B_{+},Y}^{-1}(U_{\pi})=\operatorname{Spec}_{U_{\pi}} (\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{O}_{U_{\pi}}\mathbf{t}^{E})=U _{\pi}\times T_{B}.\]
By the construction,
\[i_{B}^{-1}(D_{i})=i_{B}^{-1}(V_{B}(t_{i}^{-1}))=V_{B_{+}}(t_{i}^{-1})=D_{i+}.\]
Locally for any open affine \(V\subset\pi^{-1}(U)\) the induced homomorphisms
\[\mathcal{O}_{B}(\pi_{B}^{-1}(U))=\Gamma(U,\pi_{*}(\mathcal{C}_{Y/X}))=\Gamma( \pi^{-1}(U),\mathcal{C}_{Y/X})\to\Gamma(V,\mathcal{C}_{Y/X})=\mathcal{O}_{B_ {+}}(\pi_{B_{+},Y}^{-1}(V))\]
determine the homomorphisms
\[(\mathcal{O}_{B}(\pi_{B}^{-1}(U)))^{T}=\Gamma(U,\pi_{*}(\mathcal{C}_{Y/X})^{T })\to\Gamma(V,\mathcal{C}_{Y/X}^{T})=\mathcal{O}_{B_{+}}((\pi_{B_{+},Y}^{-1}( V)))^{T},\]
and define the global morphism \(B_{+}/\!\!/\ T_{B}=Y\to B/\!\!/\ T_{B}=X\).
#### 1.2.14. Cobordization
**Definition 1.2.15**.: Let \(\pi:Y\to X\) be a proper birational morphism. Then the morphism \(\pi_{B}:B=\operatorname{Cox}(Y/X)\to X\) (respectively \(\pi_{B_{+}}:B_{+}=\operatorname{Cox}(Y/X)_{+}\to X\)) will be called the _full cobordization of \(\pi\)_ (respectively the _cobordization_ of \(\pi\)).
If \(\mathcal{J}\) is an ideal on \(X\), then by the _full cobordant blow-up_ \(\sigma:B\to X\) _at_ \(\mathcal{J}\) (respectively the _cobordant blow-up_ \(\sigma_{+}:B_{+}\to X\) at \(\mathcal{J}\)) we mean the full cobordization (respectively the cobordization) of the normalized blow-up \(bl_{\mathcal{J}}(X)\to X\).
#### 1.2.16. The Cox trivial space
**Lemma 1.2.17**.: _Let \(\pi:Y\to X\) be a proper birational morphism of normal schemes, and \(E=\bigcup E_{i}\) be its exceptional divisor. Let \(U_{\pi}=Y\smallsetminus E\) be the maximal open subset of \(X\) and of \(Y\) where \(\pi\) is an isomorphism. Then the Cox trivial space is \(B_{-}=X\times T_{B}\). Moreover we have_
\[i_{B}^{-1}(B_{-})=B_{+}\times_{B}B_{-}=U_{\pi}\times T_{B}.\]
Proof.: By Proposition 1.2.2 we have
\[B_{-}:=B\smallsetminus\bigcup_{i=1}^{k}D_{i}=B\smallsetminus V(t_{1}^{-1}\cdot \ldots\cdot t_{k}^{-1})=\operatorname{Spec}(\mathcal{O}_{X}[t_{1},t_{1}^{-1}, \ldots,t_{k},t_{k}^{-1}])=X\times T_{B}\]
\[B_{+}\times_{B}B_{-}=B_{+}\smallsetminus\bigcup_{i=1}^{k}D_{i+}=B_{+}\smallsetminus V_{B_{+}}(t_{1}^{-1}\cdot\ldots\cdot t_{k}^{-1})=\]

\[=\operatorname{Spec}_{U_{\pi}}(\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{O}_{U_{\pi}}\mathbf{t}^{E})=\operatorname{Spec}(\mathcal{O}_{U_{\pi}}[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}])=U_{\pi}\times T_{B}.\]
### Open immersion of Cox spaces
#### 1.3.1. Generating forms
**Lemma 1.3.2**.: _Let \(X\) be an affine scheme and \(\pi:Y\to X\) be a proper birational morphism of normal integral schemes. Assume that \(Y_{F}\) is affine, for a certain form \(F=f\mathbf{t}^{-E}\) on \(Y\), with \(f\in H^{0}(Y,\mathcal{O}_{Y}(-E))=H^{0}(X,\pi_{*}(\mathcal{O}_{Y}(-E)))\). Then \((B_{+})_{F}=B_{F}\) is affine and \(\pi_{B_{+},Y}^{-1}(Y_{F})=(B_{+})_{F}\). Moreover_
\[C_{Y/X}(Y_{F})=(C_{Y/X}(Y))_{F}=(H^{0}(B,\mathcal{O}_{B}))_{F}.\]
Proof.: If \(y\in Y_{F}\) then \(F=f\mathbf{t}^{-E}\) is invertible in the stalk \((\mathcal{C}_{Y/X})_{y}\). Indeed \(\operatorname{div}(f\mathbf{t}^{-E})=0\) at \(y\), so \(E=\operatorname{div}(f)\) is principal at \(y\), and thus \((f\mathbf{t}^{-E})^{-1}=f^{-1}\mathbf{t}^{E}\) is the inverse of \(f\mathbf{t}^{-E}\). This shows that the form \(F\) is invertible in \(\mathcal{C}_{Y/X}(Y_{F})\), and the function \(F\) is invertible on the scheme \(\pi_{B_{+},Y}^{-1}(Y_{F})\subset B_{+}\). Thus we have an open immersion \(\pi_{B_{+},Y}^{-1}(Y_{F})\hookrightarrow B_{+F}\). Since \(F\) is invertible on \(\pi_{B_{+},Y}^{-1}(Y_{F})\), the natural homomorphism \(\mathcal{C}_{Y/X}(Y)\to\mathcal{C}_{Y/X}(Y_{F})\) factors through the localization \((\mathcal{C}_{Y/X}(Y))_{F}\to\mathcal{C}_{Y/X}(Y_{F})\).
On the other hand if
\[G=g\mathbf{t}^{-E^{\prime}}\in C_{Y/X}(Y_{F})\]
is a form on \(Y_{F}\) then, by definition,
\[\operatorname{div}_{Y}(G\cdot F^{n})=\operatorname{div}_{Y}(G)+n\cdot \operatorname{div}_{Y}(F)\geq 0\]
on \(Y\) for sufficiently large \(n\). Hence \(G\cdot F^{n}\in\mathcal{C}_{Y/X}(Y)\). This shows that \((\mathcal{C}_{Y/X}(Y))_{F}\to\mathcal{C}_{Y/X}(Y_{F})\) is surjective. But this morphism is defined by the restriction of forms, i.e., of functions on open subsets of \(B_{+}\), and thus it is also injective. Hence it is an isomorphism.
This defines an isomorphism of the global sections
\[(\mathcal{C}_{Y/X}(Y))_{F} =H^{0}(B_{+},\mathcal{O}_{B_{+}})_{F}=H^{0}((B_{+})_{F},\mathcal{ O}_{B_{+}})\] \[\to H^{0}(\pi_{B_{+},Y}^{-1}(Y_{F}),\mathcal{O}_{B_{+}})= \mathcal{C}_{Y/X}(Y_{F})\]
If \(Y_{F}\) is affine, then \(\pi_{B_{+},Y}^{-1}(Y_{F})\) is also affine, and the open immersion \(\pi_{B_{+},Y}^{-1}(Y_{F})\hookrightarrow(B_{+})_{F}\) has the left inverse \((B_{+})_{F}\to\pi_{B_{+},Y}^{-1}(Y_{F})\) determined by the global sections. Since the schemes are separated, it is an isomorphism.
Finally we observe that \(H^{0}(X,\pi_{*}(\mathcal{O}_{Y}(E)))=H^{0}(Y,\mathcal{O}_{Y}(E))\). Hence
\[H^{0}(Y,\mathcal{C}_{Y/X})=H^{0}(X,\pi_{*}(\mathcal{C}_{Y/X}))=H^{0}(X, \mathcal{A}_{Y/X})=H^{0}(B,\mathcal{O}_{B}),\]
and
\[H^{0}(B_{F},\mathcal{O}_{B})=(\mathcal{A}_{Y/X}(X))_{F}=(\mathcal{C}_{Y/X}(Y))_{F}=\mathcal{C}_{Y/X}(Y_{F}).\]
#### 1.3.3. Irrelevant ideal and open immersion of Cox spaces
The notion of _irrelevant ideals_ was used in [1] in the context of Cox rings. Here we consider the analogous definition and results for morphisms.
**Proposition 1.3.4**.: _Let \(\pi:Y\to X\) be a proper birational morphism of normal schemes. Assume that \(X\) can be covered by open subsets \(X_{i}\) such that \(Y_{i}:=\pi^{-1}(X_{i})\) admits an open affine cover \((Y_{i})_{F_{ij}}\), where \(F_{ij}=f_{ij}\mathbf{t}^{-E_{ij}}\) is a form on \(Y_{i}\) for \(f_{ij}\in\mathcal{O}_{Y_{i}}(-E_{ij})\). Then there is a natural open \(T_{B}\)-equivariant embedding_
\[B_{+}=\operatorname{Cox}(Y/X)_{+}\hookrightarrow B=\operatorname{Cox}(Y/X),\]
_It induces the morphism of the good quotients:_
\[B_{+}\mathbin{/\!\!\!/}\,T_{B}=Y\ \to\ B\mathbin{/\!\!\!/}\,T_{B}=X.\]
_Moreover \(B\smallsetminus B_{+}\) is of codimension \(\geq 2\) in \(B\)._
Proof.: The problem is local on \(X\), so we can replace \(X\) with \(X_{i}\) and drop the subscript \(i\). By Lemma 1.3.2, the open affine cover \(Y_{F_{j}}\) of \(Y\) defines the open affine cover \(B_{+F_{j}}=\operatorname{Spec}_{Y}((\mathcal{C}_{Y/X})_{F_{j}})=\pi_{B_{+},Y}^{-1}(Y_{F_{j}})\) of \(B_{+}\), mapping it isomorphically onto the open subsets \(B_{F_{j}}=\operatorname{Spec}_{X}((\mathcal{A}_{Y/X})_{F_{j}})\subset B\). This induces the open immersion
\[B_{+}\hookrightarrow B.\]
For the "moreover" part, let \(U_{\pi}=Y\smallsetminus E\subset Y\) be the maximal open subset where \(\pi:Y\to X\) is an isomorphism. Then \(U_{\pi}\) can be identified with an open subset of \(X\) whose complement \(X\smallsetminus U_{\pi}\) is of codimension \(\geq 2\), and

\[B_{+}\smallsetminus D_{+}=U_{\pi}\times T_{B}\subset B_{-}=B\smallsetminus D=X\times T_{B},\]

so \(B_{-}\smallsetminus(B_{+}\smallsetminus D_{+})\) is of codimension \(\geq 2\) in \(B_{-}=B\smallsetminus D\).
On the other hand, by Lemma 1.2.11, the divisors \(D_{i}=V_{B}(t_{i}^{-1})\) are irreducible on \(B\).
Consequently the difference \(D_{i}\smallsetminus B_{+}=D_{i}\smallsetminus D_{i+}\) is of codimension \(\geq 2\). Thus
\[B\smallsetminus B_{+}=(B_{-}\smallsetminus B_{+})\cup(D\smallsetminus D_{+})\]
is of codimension \(\geq 2\) in \(B\).
The notion of the _irrelevant ideal_ on Cox coordinate spaces was originally introduced in [1] (Definition 1.6.3.2 and Proposition 1.6.3.3(iii)).
**Definition 1.3.5**.: By the _irrelevant ideal_ \(\mathcal{I}_{\operatorname{irr}}\subset\mathcal{A}_{Y/X}\) we mean the radical of the ideal generated by the forms \(F\in\mathcal{A}_{Y/X}\) such that \(Y_{F}\) is open affine over \(X\).
**Corollary 1.3.6**.: _Under the conditions from Proposition 1.3.4, \(\mathcal{I}_{\rm irr}\) is the radical coherent ideal determined by the reduced closed subscheme \(B\smallsetminus B_{+}\). Thus we can write \(B_{+}=B\smallsetminus V(\mathcal{I}_{\rm irr})\)._
Proof.: The problem is local on \(X\), and we can assume that \(X\) is affine. It follows from the construction that \(B_{+}=B\smallsetminus V(\mathcal{I})\), where \(\mathcal{I}\) is generated by all \(F\in\mathcal{A}_{Y/X}\), such that \(Y_{F}\) is affine. Thus \(\operatorname{rad}(\mathcal{I})=\mathcal{I}_{\rm irr}\).
#### 1.3.7. Cox construction for regular schemes \(X\)
Recall a well-known fact:
**Lemma 1.3.8**.: _Let \(Y\) be a normal scheme. Then the complement of any open affine subset \(V\subset Y\) is the support of a Weil divisor._
_Thus there is a finite open cover of \(Y\) by open affine subsets \(V_{i}=Y\smallsetminus D_{i}\), where \(D_{i}\) are Weil divisors on \(Y\)._
Proof.: By definition, \(V\) is the set of points of \(Y\) where all the functions \(f\in\Gamma(V,\mathcal{O}_{Y})\subset\kappa(Y)\) are regular. Since \(Y\) is normal, this means that the supports of the divisors \(\operatorname{div}_{-}(f)\) of the negative components of \(\operatorname{div}(f)\) cover \(Y\smallsetminus V\). Consequently, \(Y\smallsetminus V\) is the union of the Weil divisors contained in it. This union is finite, and thus \(Y\smallsetminus V\) is the support of a Weil divisor.
This defines an open cover \(V_{i}=Y\smallsetminus D_{i}\) which can be assumed to be finite.
**Lemma 1.3.9**.: _Let \(\pi:Y\to X\) be a proper birational morphism of normal schemes._
_Let \(p\in X\) be a regular point on \(X\). There is an open affine neighborhood \(U\) of \(p\) in \(X\), and an open cover of \(Y_{U}=\pi^{-1}(U)\) by open affine subsets \(Y_{F}=Y_{U}\smallsetminus V_{Y}(F)\), where \(F\) is a form over \(U\subset X\) and on \(Y_{U}\subset Y\)._
Proof.: We can assume that \(X\) is affine. By the previous lemma, we can find an open affine cover
\[V_{j}:=Y\smallsetminus(D_{j}\cup\overline{E}_{j})\]
of \(Y\) defined by the divisors \(D_{j}\cup\overline{E}_{j}\), where \(\overline{E}_{j}\) are some possibly reducible exceptional divisors. Taking the images of \(D_{j}\) in \(X\), we obtain a finite collection of divisors \(D^{\prime}_{j}=\pi(D_{j})\) on \(X\). Consider an open affine neighborhood
\[U:=X_{g}=X\smallsetminus V(g)\]
of \(p\in X\), for \(g\in H^{0}(X,\mathcal{O}_{X})\), such that all the divisors \(D^{\prime}_{j}\) are principal on \(U\). Thus we can write \(D^{\prime}_{j}=\operatorname{div}_{U}(f_{j})\), where \(f_{j}\in\mathcal{O}(U)\).
The pullbacks of the principal divisors \(D^{\prime}_{j}=\operatorname{div}_{U}(f_{j})\) on \(U\) are of the form \(\pi^{*}(D^{\prime}_{j})=D_{j}+E^{j}\) on \(Y_{U}=\pi^{-1}(U)\), where \(E^{j}=\sum n_{ij}E_{i}\) is an exceptional divisor, with \(n_{ij}\geq 0\). They define the forms
\[F_{j}:=f_{j}\mathbf{t}^{-E^{j}+\overline{E}_{j}}\]
on \(Y_{U}\) such that
\[\operatorname{div}_{Y}(F_{j})=\operatorname{div}_{Y}(f_{j})-E^{j}+\overline{E }_{j}=D_{j}+\overline{E}_{j}.\]
and thus \(V_{Y}(F_{j})=D_{j}\cup\overline{E}_{j}\) on \(Y_{U}\). Then
\[Y_{U}\smallsetminus V(F_{j})=Y_{U}\smallsetminus(D_{j}\cup\overline{E}_{j})=(V_{ j})_{g}=V_{j}\smallsetminus V_{Y}(g)\]
is an open affine cover of \(Y_{U}=\pi^{-1}(U)=\pi^{-1}(X_{g})\).
**Remark 1.3.10**.: The lemma is valid under the assumption that \(p\in X\) is a \(\mathbb{Q}\)-factorial point, so any Weil divisor at \(p\) is \(\mathbb{Q}\)-Cartier.
As a corollary from the above, we obtain the following:
**Proposition 1.3.11**.: _Assume that \(X\) is regular, and \(\pi:Y\to X\) is a proper birational morphism of normal schemes. There is a natural open \(T_{B}\)-equivariant embedding_
\[B_{+}=B\smallsetminus V(\mathcal{I}_{\rm irr})\hookrightarrow B\]
_It induces the morphism of the good quotients:_
\[B_{+}\,/\!\!/\,T_{B}=Y\ \to\ B\,/\!\!/\,T_{B}=X.\]
### Cobordant blow-ups of ideals
#### 1.4.1. The strict and the weak transform under cobordant morphism
**Definition 1.4.2**.: Let \(\mathcal{I}\) be any ideal on a normal scheme \(X\). Let \(\pi:Y\to X\) be a proper birational morphism from a normal scheme \(Y\), and \(\sigma=\pi_{B}:B\to X\) be the full cobordization of \(\pi\). Then by the _strict transform_ of the ideal \(\mathcal{I}\) we mean the ideal
\[\sigma^{s}(\mathcal{I}):=(f\in\mathcal{O}_{B}\mid{\bf t}^{-\alpha}f\in \mathcal{O}_{B}\cdot\mathcal{I},\mbox{ for some }\alpha\in\mathbb{Z}_{\geq 0}^{k}) \subset\mathcal{O}_{B}.\]
The _weak transform_ of the ideal \(\mathcal{I}\) is given by

\[\sigma^{\circ}(\mathcal{I}):=\mathbf{t}^{\alpha_{0}}\,\mathcal{O}_{B}\cdot\mathcal{I},\]

where

\[\alpha_{0}:=\max\{\alpha\mid\mathcal{O}_{B}\cdot\mathcal{I}\subset\mathbf{t}^{-\alpha}\mathcal{O}_{B}\},\]

and the maximum is taken with respect to the componentwise partial order.
#### 1.4.3. Cobordant blow-ups
**Lemma 1.4.4**.: _Let \(\mathcal{J}\) be an ideal on a normal scheme \(X\), such that \(\operatorname{codim}(V(\mathcal{J}))\geq 2\). Let \(\pi:Y\to X\) be the normalized blow-up of \(\mathcal{J}\). Let \(E=\sum a_{i}E_{i}\) be the exceptional divisor of \(\pi\), such that \(\mathcal{O}_{Y}(-E)=\mathcal{O}_{Y}\cdot\mathcal{J}\). Set \(\alpha=(a_{1},\dots,a_{k})\). Let \(\sigma:B\to X\) be the corresponding full cobordant blow-up of \(\mathcal{J}\). Then_
1. \(\sigma^{-1}(X\smallsetminus V(\mathcal{J}))=(X\smallsetminus V(\mathcal{J}))\times T _{B}\) _is trivial._
2. \(B_{+}=B\smallsetminus V_{B}(\sigma^{\circ}(\mathcal{J}))=B\smallsetminus V_{B}({ \bf t}^{\alpha}\mathcal{J})\)_, where_ \[\sigma^{\circ}(\mathcal{J})=\mathcal{O}_{B}\cdot{\bf t}^{-E}\mathcal{J}= \mathcal{O}_{B}\cdot{\bf t}^{\alpha}\mathcal{J}\] _is the weak transform of_ \(\mathcal{J}\)_._
3. \(\mathcal{J}\cdot\mathcal{O}_{B_{+}}={\bf t}^{-\alpha}\mathcal{O}_{B_{+}}\) _is a locally principal monomial ideal on_ \(B_{+}\)_._
Proof.: Let \(U\subset X\) be an open affine subset. The ideal of sections \(\mathcal{J}(U)\) is generated by some \(f_{1},\dots,f_{k}\in\mathcal{J}(U)\subset\mathcal{O}_{X}(U)=\mathcal{O}_{Y}(\pi^{-1}(U))\). The pullbacks of the functions \(f_{1},\dots,f_{k}\in\mathcal{J}(U)\) generate the ideal
\[\mathcal{I}_{E}=\mathcal{O}_{Y}(-E)=\mathcal{O}_{Y}\cdot\mathcal{J}\]
on \(Y_{U}:=\pi^{-1}(U)\). Moreover on each \(Y_{U}\smallsetminus V_{Y}(F_{i})\), where \(F_{i}:=f_{i}{\bf t}^{-E}\) we have exactly \(\operatorname{div}_{Y_{U}}(f_{i})=E_{|Y_{U}}\).
On the other hand consider the open cover of
\[Y_{U}=\pi^{-1}(U)=\operatorname{Proj}\bigoplus_{n=0}^{\infty}\mathcal{J}^{n}(U)t^{n},\]

where \(t\) is a dummy unknown, by the open subsets

\[(Y_{U})_{f_{i}t}=\pi^{-1}(U)_{f_{i}t}=(\operatorname{Proj}\bigoplus_{n=0}^{\infty}\mathcal{J}^{n}(U)t^{n})_{f_{i}t}=\operatorname{Spec}(((\bigoplus_{n=0}^{\infty}\mathcal{J}^{n}(U)t^{n})_{f_{i}t})_{0}),\]
where \(f_{i}t\in\mathcal{J}^{1}(U)t\). Since \(f_{j}t\) is invertible on \((Y_{U})_{f_{j}t}\) and the quotients \(f_{i}t/f_{j}t=f_{i}/f_{j}\) are regular there, we see that \(\mathcal{O}_{(Y_{U})_{f_{j}t}}\cdot\mathcal{J}\) is generated by \(f_{j}\). So \(E=\operatorname{div}_{Y}(f_{j})\) on \((Y_{U})_{f_{j}t}\), and consequently the form \(F_{j}=f_{j}\mathbf{t}^{-E}\) is invertible on \((Y_{U})_{f_{j}t}\).
Computing \(\operatorname{div}(f_{i}t)\) on the cover \((Y_{U})_{f_{j}t}\) of \(Y_{U}\) gives us
\[\operatorname{div}(f_{i}t)=\operatorname{div}(f_{i}t/f_{j}t)=\operatorname{ div}(f_{i}/f_{j})=\operatorname{div}(f_{i})-E=\operatorname{div}(F_{i})= \operatorname{div}(f_{i}\mathbf{t}^{-E}).\]
Consequently we conclude that the subsets \((Y_{U})_{f_{i}t}=(Y_{U})_{F_{i}}\) are affine and cover \(Y_{U}\). Thus, by Proposition 1.3.4, there is an open immersion \(B_{+}\subset B\), where \(B_{+}\) is covered by \(B_{+F_{i}}\). Moreover, by the above, the ideal \(\mathcal{J}\mathbf{t}^{\alpha}\) on \(B_{+F_{i}}\) is generated by \(f_{i}\mathbf{t}^{-E}\), and thus equal to
\[\mathcal{J}\mathbf{t}^{\alpha}_{|B_{+F_{i}}}=\mathcal{O}_{B_{+F_{i}}}\cdot \mathcal{O}_{Y_{F_{i}}}(-E)\mathbf{t}^{-E}=\mathcal{O}_{B_{+F_{i}}}\cdot f_{i }\mathbf{t}^{-E}=\mathcal{O}_{B_{+F_{i}}}\cdot F_{i}.\]
But \(F_{i}=f_{i}\mathbf{t}^{-E}\) is invertible on \(B_{+F_{i}}\) of degree \(-E\), whence
\[\mathcal{O}_{B_{+F_{i}}}\cdot\mathcal{J}=\mathcal{O}_{B_{+F_{i}}}\cdot \mathcal{J}\mathbf{t}^{\alpha}\cdot\mathbf{t}^{-\alpha}=\mathcal{O}_{B_{+F_{i }}}\mathbf{t}^{-\alpha},\]
which implies that
\[\mathcal{O}_{B_{+}}\cdot\mathcal{J}=\mathcal{O}_{B_{+}}\mathbf{t}^{-\alpha}. \tag{4}\]
On the other hand \(\mathcal{J}\mathbf{t}^{\alpha}\subset\mathcal{O}_{B}\), since any element \(f\mathbf{t}^{\alpha}\in\mathcal{J}(U)\mathbf{t}^{\alpha}\) is in
\(\mathcal{O}_{\pi^{-1}(U)}(-E)\mathbf{t}^{\alpha}\), which is the \(-E\) gradation of \(\mathcal{O}_{B}\) over \(U\). This also shows that \(\mathcal{J}\mathbf{t}^{\alpha}=\sigma^{\circ}(\mathcal{J})\), as by the equality (4) for \(B_{+}\), the form \(\mathbf{t}^{-\alpha}=\mathbf{t}^{E}\) is the maximal factor which divides \(\mathcal{O}_{B}\cdot\mathcal{J}\).
Finally, by the above
\[V(\sigma^{\circ}(\mathcal{J}))=V(f_{1}\mathbf{t}^{-E},\dots,f_{k}\mathbf{t}^{ -E})=V(F_{1},\dots,F_{k})=B\smallsetminus B_{+}=V(\mathcal{I}_{\operatorname{ irr}}).\]
**Remark 1.4.5**.: It follows from the above that the inverse image \(\mathcal{O}_{B_{+}}\cdot\mathcal{J}\) of the ideal \(\mathcal{J}\) under the cobordant blow-up is the ideal \((\mathbf{t}^{E})\) of the exceptional divisor, analogously to the standard blow-up of \(\mathcal{J}\). However, this is no longer true for the full cobordant blow-up of \(\mathcal{J}\).
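For illustration, let \(\mathcal{J}=\mathfrak{m}=(x_{1},x_{2})\) on \(X=\mathbb{A}^{2}_{\kappa}\), so that \(Y\to X\) is the blow-up of the origin, \(E\) is the exceptional divisor, and \(\alpha=(1)\). Then \(B=\operatorname{Spec}(\mathcal{O}_{X}[t^{-1},tx_{1},tx_{2}])\), the weak transform is \(\sigma^{\circ}(\mathcal{J})=\mathcal{O}_{B}\cdot(tx_{1},tx_{2})\), so \(B_{+}=B\smallsetminus V(tx_{1},tx_{2})\), and on \(B_{+}\) the inverse image \(\mathcal{O}_{B_{+}}\cdot\mathfrak{m}=t^{-1}\mathcal{O}_{B_{+}}\) is the principal ideal of the exceptional divisor \(D_{+}=V_{B_{+}}(t^{-1})\).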
### Cobordant flips
**Lemma 1.5.1**.: _Let \(\phi_{1}:X_{1}\to Z\), and \(\phi_{2}:X_{2}\to Z\) be proper birational morphisms from normal schemes \(X_{1},X_{2}\) to \(Z\). Assume that the induced proper birational map \(X_{1}\dashrightarrow X_{2}\) over \(Z\) is an isomorphism in codimension one. Then_
\[B:=B(X_{1}/Z)=B(X_{2}/Z),\]
_is equipped with the action of the torus \(T_{B}=\operatorname{Spec}(\mathbb{Z}[\operatorname{Cl}(X_{1}/Z)])=\operatorname{Spec}(\mathbb{Z}[\operatorname{Cl}(X_{2}/Z)])\), and there is a natural birational map \(B(X_{1}/Z)_{+}\dashrightarrow B(X_{2}/Z)_{+}\) over \(B\). Moreover if \(\phi_{1},\phi_{2}\) satisfy the condition of Proposition 1.3.4, then \(B(X_{1}/Z)_{+}\) and \(B(X_{2}/Z)_{+}\) are open subschemes of \(B\) which coincide in codimension \(1\)._
### Functoriality of Cox spaces for open immersions
The construction of the full cobordization is functorial for open immersions up to torus factors:
**Lemma 1.6.1**.: _Let \(\pi:Y\to X\) be a proper birational morphism of normal integral schemes. Let \(U\subset X\) be an open subset, and \(Y_{U}:=\pi^{-1}(U)\). Let \(E_{1},\dots,E_{k}\) be the irreducible exceptional divisors of \(\pi:Y\to X\). Let \(\pi_{B}:B=\operatorname{Cox}(Y/X)\to X\) be the full cobordization of \(\pi\), and let \(\pi_{B+}:B_{+}\to X\) be its cobordization. Let_
\[T_{B\smallsetminus B_{U}}:=\operatorname{Spec}(\,\mathbb{Z}[t_{i},t_{i}^{-1}\mid E _{i}\subset Y\smallsetminus Y_{U}\ ]\,),\]
_Then_
\[B_{U}:=\pi_{B}^{-1}(U)=B(Y_{U}/U)\times T_{B\smallsetminus B_{U}},\quad B_{U+}:= \pi_{B+}^{-1}(U)=B(Y_{U}/U)_{+}\times T_{B\smallsetminus B_{U}}.\]
Proof.: For any open subset \(U\subset X\), and \(Y_{U}=\pi^{-1}(U)\), we can construct a subgroup \(\operatorname{Cl}(Y_{U}/U)\subseteq\operatorname{Cl}(Y/X)\), with the canonical splitting \(\operatorname{Cl}(Y/X)\to\operatorname{Cl}(Y_{U}/U)\). Write \(\operatorname{Cl}(Y/X)=\operatorname{Cl}(Y_{U}/U)\oplus\operatorname{Cl}^{0} (Y_{U}/U)\), where \(\operatorname{Cl}^{0}(Y_{U}/U)\) is generated by \(E_{i}\subset Y\smallsetminus Y_{U}\).
\[\pi_{B}^{-1}(U)=\operatorname{Spec}_{U}(\bigoplus_{E\in\operatorname{Cl}(Y/X)}\pi_{*}(\mathcal{O}_{Y}(E))_{\mid U}\mathbf{t}^{E})=\]
\[=\operatorname{Spec}_{U}(\bigoplus_{E\in\operatorname{Cl}(Y_{U}/U)}\pi_{*}(\mathcal{O}_{Y}(E))_{\mid U}\mathbf{t}^{E})\otimes_{\mathcal{O}_{U}}(\bigoplus_{E\in\operatorname{Cl}^{0}(Y_{U}/U)}\pi_{*}(\mathcal{O}_{Y}(E))_{\mid U}\mathbf{t}^{E})=\]
\[=\operatorname{Spec}_{U}(\bigoplus_{E\in\operatorname{Cl}(Y_{U}/U)}\pi_{*}(\mathcal{O}_{Y}(E))_{\mid U}\mathbf{t}^{E})\otimes_{\mathcal{O}_{U}}(\bigoplus_{E\in\operatorname{Cl}^{0}(Y_{U}/U)}\mathcal{O}_{U}\mathbf{t}^{E})=B(Y_{U}/U)\times T_{B\smallsetminus B_{U}}\]
Similarly
\[\pi_{B+}^{-1}(U)=\operatorname{Spec}_{Y_{U}}(\bigoplus_{E\in\operatorname{Cl}(Y/X)}\mathcal{O}_{Y_{U}}(E)\mathbf{t}^{E})=\]
\[=\operatorname{Spec}_{Y_{U}}(\bigoplus_{E\in\operatorname{Cl}(Y_{U}/U)}\mathcal{O}_{Y_{U}}(E)\mathbf{t}^{E})\otimes_{\mathcal{O}_{Y_{U}}}(\bigoplus_{E\in\operatorname{Cl}^{0}(Y_{U}/U)}\mathcal{O}_{Y_{U}}(E)\mathbf{t}^{E})=\]
\[=\operatorname{Spec}_{Y_{U}}(\bigoplus_{E\in\operatorname{Cl}(Y_{U}/U)}\mathcal{O}_{Y_{U}}(E)\mathbf{t}^{E})\otimes_{\mathcal{O}_{Y_{U}}}(\bigoplus_{E\in\operatorname{Cl}^{0}(Y_{U}/U)}\mathcal{O}_{Y_{U}}\mathbf{t}^{E})=\]
\[=B(Y_{U}/U)_{+}\times_{Y_{U}}(Y_{U}\times T_{B\smallsetminus B_{U}})=B(Y_{U}/U)_{+}\times T_{B\smallsetminus B_{U}}\]
## 2. Relative Cox construction for toric morphisms
### Toric varieties
Recall some basic properties of toric varieties over a field. (See [10], [11], [12], [13]).
#### 2.1.1. Fans
Let \(\kappa\) be a field, and let
\[T=\operatorname{Spec}(\kappa[x_{1},x_{1}^{-1},\dots,x_{k},x_{k}^{-1}])=\operatorname{Spec}(\kappa[\mathbb{M}])\]
be the torus, where \(\mathbb{M}=\operatorname{Hom}(T,G_{m})\simeq\mathbb{Z}^{k}\). The elements of \(\mathbb{M}\) can be described by the Laurent monomials \(x^{\alpha}\in\mathbb{M}\), where \(\alpha\in\mathbb{Z}^{k}\).
Denote by \(\mathbb{N}:=\operatorname{Hom}(G_{m},T)\) the group of algebraic homomorphisms \(t\mapsto\mathbf{t}^{\beta}=(t^{b_{1}},\dots,t^{b_{k}})\).

This determines a nondegenerate pairing \((\cdot,\cdot):\mathbb{N}\times\mathbb{M}\to\mathbb{Z}\) defined by the composition:

\[\operatorname{Hom}(G_{m},T)\times\operatorname{Hom}(T,G_{m})\to\operatorname{Hom}(G_{m},G_{m}),\quad x^{\alpha}\circ\mathbf{t}^{\beta}=\mathbf{t}^{(\beta,\alpha)}.\]

Thus \(\mathbb{N}=\mathbb{M}^{*}\simeq\operatorname{Hom}(\mathbb{M},\mathbb{Z})\) is dual to \(\mathbb{M}\).
By a _fan_ \(\Sigma\) in \(\mathbb{N}_{\mathbb{Q}}\), we mean a collection of rational strictly convex cones, which is closed under the face relation, and such that any two cones intersect along a common face. If \(\tau\) is a face of \(\sigma\), written as \(\tau\leq\sigma\), then \(X_{\tau}\subset X_{\sigma}\) is an open immersion.
#### 2.1.2. Toric varieties from fans
With any rational strictly convex cone \(\sigma\) in \(\mathbb{N}_{\mathbb{Q}}=\mathbb{N}\otimes\mathbb{Q}\) we associate its dual
\[\sigma^{\vee}:=\{y\in\mathbb{M}_{\mathbb{Q}}\mid(x,y)\geq 0\quad\text{for all} \quad x\in\sigma\}.\]
The cone \(\sigma^{\vee}\) determines the monoid \(P_{\sigma}:=\sigma^{\vee}\cap\mathbb{M}\), and the relevant affine toric variety \(X_{\sigma}=\operatorname{Spec}(\kappa[P_{\sigma}])\).
We say that a cone \(\sigma\) in \(\mathbb{N}_{\mathbb{Q}}\) is _regular_ or _nonsingular_ if it is generated by a part of a basis \(e_{1},\ldots,e_{k}\in\mathbb{N}\) of the lattice, written
\[\sigma=\langle e_{1},\ldots,e_{k}\rangle:=\mathbb{Q}_{\geq 0}e_{1}+\ldots+ \mathbb{Q}_{\geq 0}e_{k}.\]
Similarly, a cone \(\sigma=\langle v_{1},\ldots,v_{k}\rangle\) in \(\mathbb{N}_{\mathbb{Q}}\) is _simplicial_ if it is generated by a linearly independent set \(\{v_{1},\ldots,v_{k}\}\subset\mathbb{N}\).
With a fan \(\Sigma\) we associate the _toric variety_ \(X_{\Sigma}\) obtained by glueing \(X_{\sigma}\), where \(\sigma\in\Sigma\), along \(X_{\tau}\), where \(\tau\leq\sigma\). The torus \(T=\operatorname{Spec}(\kappa[\mathbb{M}])\) acts on the toric variety \(X_{\Sigma}\) with an open dense orbit \(T\subset X_{\Sigma}\) corresponding to \(\{0\}\in\Sigma\).
The fan \(\Sigma\) will be called _regular_ (respectively _simplicial_) if all its cones are regular (respectively simplicial).
The regular (resp. simplicial) fans \(\Sigma\) are in the bijective correspondence with the smooth (resp. \(\mathbb{Q}\)-factorial) toric varieties \(X_{\Sigma}\).
For any \(r\in\mathbb{Z}_{\geq 0}\), denote by \(\Sigma(r)\) the set of cones \(\sigma\) of dimension \(r\) in \(\Sigma\). The cones in \(\Sigma(r)\) correspond to the orbits \(\mathcal{O}_{\sigma}\) and thus to the irreducible \(T\)-stable closed subvarieties \(\overline{\mathcal{O}}_{\sigma}\). In particular, the irreducible \(T\)-stable divisors correspond to the one-dimensional faces in \(\Sigma(1)\).
#### 2.1.3. Toric valuations
Any integral vector \(v\in\mathbb{N}\) determines a monomial valuation \(\operatorname{val}(v)\), which can be defined for \(f=\sum c_{m}\cdot x^{m}\in\kappa[\mathbb{M}]\) as

\[\operatorname{val}(v)(f)=\operatorname{val}(v)(\sum c_{m}\cdot x^{m})=\min_{c_{m}\neq 0}(v,m).\]
The center \(Z_{\operatorname{val}(v)}\) of the valuation \(\operatorname{val}(v)\) is the union of the orbits \(\mathcal{O}_{\tau^{\prime}}\), which correspond to the cones \(\tau^{\prime}\) in

\[\operatorname{Star}(\tau,\Sigma)=\{\tau^{\prime}\in\Sigma\mid\tau\leq\tau^{\prime}\},\]

where \(\tau\in\Sigma\) is the cone containing \(v\) in its relative interior.
The associated ideals on \(X_{\Sigma}\) are given locally on \(X_{\sigma}\) as
\[\mathcal{I}_{\operatorname{val}(v),a,X_{\sigma}}=(m\in P_{\sigma}\mid(v,m) \geq a).\]
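For example, for \(X_{\sigma}=\mathbb{A}^{2}_{\kappa}\) with \(\sigma=\langle e_{1},e_{2}\rangle\) and \(v=(1,1)\), the valuation \(\operatorname{val}(v)\) is the order of vanishing at the origin, and \(\mathcal{I}_{\operatorname{val}(v),a}=(x_{1}^{b_{1}}x_{2}^{b_{2}}\mid b_{1}+b_{2}\geq a)=\mathfrak{m}^{a}\). More generally, \(v=(w_{1},w_{2})\) gives the monomial valuation with \(\mathcal{I}_{\operatorname{val}(v),a}=(x_{1}^{b_{1}}x_{2}^{b_{2}}\mid w_{1}b_{1}+w_{2}b_{2}\geq a)\).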
By a _vertex_ of \(\Sigma\), we mean a _primitive vector_, that is, an integral vector with relatively coprime coordinates, which lies in a one-dimensional face of \(\Sigma\). The set of vertices of \(\Sigma\) will be denoted by \(\operatorname{Vert}(\Sigma)\). Each vector \(v\in\operatorname{Vert}(\Sigma)\) defines the one-dimensional face \(\langle v\rangle\), and the valuation \(\operatorname{val}(v)\), which is precisely the valuation of the associated irreducible \(T\)-stable divisor.
#### 2.1.4. Decomposition of fans
By the _support of a fan_\(\Sigma\) we mean the union of its cones \(|\Sigma|=\bigcup_{\sigma\in\Sigma}\sigma\).
A _decomposition_ of the fan \(\Sigma\) is a fan \(\Sigma^{\prime}\) such that any cone \(\sigma^{\prime}\in\Sigma^{\prime}\) is contained in some \(\sigma\in\Sigma\), and \(|\Sigma^{\prime}|=|\Sigma|\).
For any subset \(\Sigma_{0}\) of the fan \(\Sigma\), denote by \(\overline{\Sigma_{0}}\) the set of all faces of the cones in \(\Sigma_{0}\). The typical examples of the decompositions are given by the _star subdivisions_.
**Definition 2.1.5**.: Let \(\Sigma\) be a fan and \(v\) be a primitive vector in the relative interior of \(\tau\in\Sigma\). Then the _star subdivision_\(v\cdot\Sigma\) of \(\Sigma\) at \(v\) is defined to be
\[v\cdot\Sigma=(\Sigma\smallsetminus\operatorname{Star}(\tau,\Sigma))\cup\{\langle v \rangle+\sigma\mid\sigma\in\overline{\operatorname{Star}(\tau,\Sigma)} \smallsetminus\operatorname{Star}(\tau,\Sigma)\}.\]
The vector \(v\) will be called the _center_ of the star subdivision.
**Lemma 2.1.6**.: _The decompositions \(\Delta\) of a fan \(\Sigma\) are in bijective correspondence with the proper birational \(T\)-equivariant morphisms \(X_{\Delta}\to X_{\Sigma}\)._
_The star subdivision \(v\cdot\Sigma\) corresponds to the blow-up of the valuation \(\operatorname{val}(v)\), which is the normalized blow-up of \(\mathcal{I}_{\operatorname{val}(v),a,X_{\Sigma}}\) for a sufficiently divisible \(a\)._
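For instance, for \(\Sigma\) consisting of the faces of \(\sigma=\langle e_{1},e_{2}\rangle\) and \(v=e_{1}+e_{2}\), the star subdivision \(v\cdot\Sigma\) replaces \(\sigma\) by the two cones \(\langle e_{1},v\rangle\) and \(\langle e_{2},v\rangle\) (together with their faces), and \(X_{v\cdot\Sigma}\to X_{\Sigma}=\mathbb{A}^{2}_{\kappa}\) is the blow-up of the origin. Taking \(v=(w_{1},w_{2})\) instead yields the blow-up of the monomial valuation \(\operatorname{val}(v)\), i.e. a weighted blow-up.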
#### 2.1.7. Maps of fans
By _a map of fans_\((\Sigma^{\prime},\mathbb{N}^{\prime})\to(\Sigma,\mathbb{N})\) we mean a linear map \(\phi:\mathbb{N}^{\prime}\otimes\mathbb{Q}\to\mathbb{N}\otimes\mathbb{Q}\) of vector spaces, such that
1. \(\phi(\mathbb{N}^{\prime})\subset\mathbb{N}\).
2. For any \(\sigma^{\prime}\in\Sigma^{\prime}\) there is \(\sigma\in\Sigma\) such that \(\phi(\sigma^{\prime})\subset\sigma\).
The map of fans corresponds to a \(T_{\mathbb{N}^{\prime}}\)-equivariant morphism of toric varieties \((X_{\Sigma^{\prime}},T_{\mathbb{N}^{\prime}})\to(X_{\Sigma},T_{\mathbb{N}})\), where the action of \(T_{\mathbb{N}^{\prime}}=\operatorname{Spec}\kappa[\mathbb{M}^{\prime}]\) on \(X_{\Sigma}\) is defined by the homomorphism of tori
\[T_{\mathbb{N}^{\prime}}=\operatorname{Spec}\kappa[\mathbb{M}^{\prime}]\to T_ {\mathbb{N}}=\operatorname{Spec}\kappa[\mathbb{M}],\]
induced by \(\mathbb{N}^{\prime}\to\mathbb{N}\). A decomposition \(\Sigma^{\prime}\) of a fan \(\Sigma\) corresponds to a proper birational morphism.
#### 2.1.8. Good quotients
Let \(\phi:(\sigma^{\prime},\mathbb{N}^{\prime})\to(\sigma,\mathbb{N})\) be a surjective map of cones, such that \(\phi(\sigma^{\prime})=\sigma\), and \(\phi(\mathbb{N}^{\prime})=\mathbb{N}\). Let \(\mathbb{N}^{\prime\prime}:=\ker(\mathbb{N}^{\prime}\to\mathbb{N})\). Then the exact sequence
\[0\to\mathbb{N}^{\prime\prime}\to\mathbb{N}^{\prime}\to\mathbb{N}\to 0,\]
has its dual
\[0\to\mathbb{M}\to\mathbb{M}^{\prime}\to\mathbb{M}^{\prime\prime}\to 0.\]
Thus \(\mathbb{M}\) can be identified with the sublattice of \(\mathbb{M}^{\prime}\) defined as

\[\mathbb{M}=\{m\in\mathbb{M}^{\prime}\mid(n,m)=0\quad\text{for}\quad\text{all }n\in\mathbb{N}^{\prime\prime}\}.\]

Consequently, \(\kappa[\mathbb{M}]=\kappa[\mathbb{M}^{\prime}]^{T_{\mathbb{N}^{\prime\prime}}}\). Moreover the dual map determines the inclusion \(\sigma^{\vee}\hookrightarrow(\sigma^{\prime})^{\vee}\), for which \((\sigma^{\prime})^{\vee}\cap\mathbb{M}_{\mathbb{Q}}=\sigma^{\vee}\), and
\[(P_{\sigma^{\prime}})^{T_{\mathbb{N}^{\prime\prime}}}=P_{\sigma^{\prime}} \cap\mathbb{M}=P_{\sigma}.\]
Hence
\[\mathcal{O}(X_{\sigma^{\prime}})^{T_{\mathbb{N}^{\prime\prime}}}=\kappa[P_{ \sigma^{\prime}}]^{T_{\mathbb{N}^{\prime\prime}}}=\kappa[P_{\sigma}]=\mathcal{ O}(X_{\sigma}).\]
Thus
\[X_{\sigma^{\prime}}\to X_{\sigma}\simeq X_{\sigma^{\prime}}\mathbin{/\!\!/}T_{ \mathbb{N}^{\prime\prime}}\]
is _a good quotient_.
If additionally \(\phi:\sigma^{\prime}\to\sigma\) is injective, so it is an isomorphism of cones, then the inverse image of any orbit is a single orbit, and thus the corresponding morphism \(X_{\sigma^{\prime}}\to X_{\sigma}\simeq X_{\sigma^{\prime}}/T_{\mathbb{N}^{ \prime\prime}}\) is _a geometric quotient_.
If the map of fans \(\phi:(\Sigma^{\prime},\mathbb{N}^{\prime})\to(\Sigma,\mathbb{N})\) is surjective, i.e. \(\phi(|\Sigma^{\prime}|)=|\Sigma|\) and \(\phi(\mathbb{N}^{\prime})=\mathbb{N}\), and for any cone \(\delta\in\Sigma\), the inverse image \(\phi^{-1}(\delta)\cap|\Sigma^{\prime}|\) is a unique cone \(\delta^{\prime}\in\Sigma^{\prime}\), then the corresponding morphism \(X_{\Sigma^{\prime}}\to X_{\Sigma}\) is affine. Consequently, by the previous argument, it is a good quotient with respect to \(T_{N^{\prime\prime}}=\ker(T_{\mathbb{N}^{\prime}}\to T_{\mathbb{N}})\), where \(\mathbb{N}^{\prime\prime}=\ker(\mathbb{N}^{\prime}\to\mathbb{N})\).
If additionally, the map \(|\Sigma^{\prime}|\to|\Sigma|\) is bijective then \(X_{\Sigma^{\prime}}\to X_{\Sigma}\) is a geometric quotient.
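A simple example: the map of cones \(\phi:(\langle e_{1},e_{2}\rangle,\mathbb{Z}^{2})\to(\langle e\rangle,\mathbb{Z})\) with \(\phi(e_{1})=\phi(e_{2})=e\) has \(\mathbb{N}^{\prime\prime}=\mathbb{Z}(e_{1}-e_{2})\), so \(T_{\mathbb{N}^{\prime\prime}}\simeq G_{m}\) acts on \(X_{\sigma^{\prime}}=\mathbb{A}^{2}_{\kappa}\) with weights \((1,-1)\), and the good quotient is \(\mathbb{A}^{2}\to\mathbb{A}^{1}=\operatorname{Spec}(\kappa[x_{1}x_{2}])\), \((x_{1},x_{2})\mapsto x_{1}x_{2}\). It is not geometric: the fiber over \(0\) consists of three orbits.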
### Cox construction for toric varieties
We recall here the standard Cox construction for toric varieties from the convex geometry point of view. This presentation relies greatly on [10], [1], and will be then adapted to the relative situation.
#### 2.2.1. Cox construction
Given a toric variety \(X\) with associated fan \(\Sigma\) in the space \(\mathbb{N}_{\mathbb{Q}}\simeq\mathbb{Q}^{n}\) containing the standard lattice \(\mathbb{N}\simeq\mathbb{Z}^{n}\), we shall assume that the fan \(\Sigma\) is _nondegenerate_, that is, the set \(\operatorname{Vert}(\Sigma)\) _generates the vector space_ \(\mathbb{N}_{\mathbb{Q}}\).
Let \(\operatorname{Vert}(\Sigma)=\{v_{1},\dots,v_{k}\}\) denote the set of vertices of \(\Sigma\). Let \(e_{1},\dots,e_{k}\) denote the standard basis of \(\mathbb{Z}^{k}\subset\mathbb{Q}^{k}\), and let
\[\sigma_{B}:=\langle e_{1},\dots,e_{k}\rangle=\{\sum_{i=1}^{k}a_{i}e_{i}\mid a_{i}\in\mathbb{Q}_{\geq 0}\}.\]
The cone \(\sigma_{B}\) defines a regular fan \(\Sigma_{B}\) in \(\mathbb{N}_{B}^{\mathbb{Q}}=\mathbb{Q}^{k}\), consisting of all the faces of \(\sigma_{B}\). It corresponds to the affine space
\[X_{\sigma_{B}}=\operatorname{Spec}(\kappa[x_{1},\dots,x_{k}])=\mathbb{A}_{ \kappa}^{k}\]
Consider the linear map \(\pi_{B}:\mathbb{N}_{B}^{\mathbb{Q}}=\mathbb{Q}^{k}\to\mathbb{N}_{\mathbb{Q}}=\mathbb{Q}^{n}\) defined on the basis \(e_{1},\dots,e_{k}\) by \(\pi_{B}(e_{i})=v_{i}\). We construct the subfan \(\Sigma_{B_{+}}\) of \(\Sigma_{B}\) to be the set of all the faces \(\sigma\) of \(\sigma_{B}\) such that \(\pi_{B}(\sigma)\) is contained in a cone of \(\Sigma\) ([1]). This determines a map of fans \(\pi_{B}:\Sigma_{B_{+}}\to\Sigma\). Note that it follows from the definition that for any cone \(\delta=\langle v_{i_{1}},\dots,v_{i_{r}}\rangle\) of \(\Sigma\), there is a unique cone

\[\delta_{0}=\pi_{B}^{-1}(\delta)\cap|\Sigma_{B_{+}}|=\langle e_{i_{1}},\dots,e_{i_{r}}\rangle\in\Sigma_{B_{+}}.\]
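The classical example, recalled here for orientation, is \(X=\mathbb{P}^{2}_{\kappa}\), whose fan has the vertices \(v_{1}=e_{1}\), \(v_{2}=e_{2}\), \(v_{3}=-e_{1}-e_{2}\). Here \(X_{\sigma_{B}}=\mathbb{A}^{3}_{\kappa}\), and \(\Sigma_{B_{+}}\) consists of all the proper faces of \(\sigma_{B}\) (the full cone \(\sigma_{B}\) maps onto \(\mathbb{N}_{\mathbb{Q}}\) and hence lies in no cone of \(\Sigma\)), so \(B_{+}=\mathbb{A}^{3}\smallsetminus\{0\}\), and \(B_{+}\to\mathbb{P}^{2}\) is the standard quotient by the diagonal torus \(G_{m}=\operatorname{Spec}(\kappa[\operatorname{Cl}(\mathbb{P}^{2})])\).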
#### 2.2.2. Cox coordinate ring
Let \(\operatorname{Div}(X)\) be the group of \(T\)-stable Weil divisors on \(X=X_{\Sigma}\), and \(\operatorname{Div}(X)_{+}\) be the monoid of the effective \(T\)-stable Weil divisors and zero on \(X\). Let \(\operatorname{Vert}(\Sigma)=\{v_{1},\dots,v_{k}\}\) denote the set of vertices of \(\Sigma\). The corresponding Weil divisors \(D_{1},\dots,D_{k}\in\operatorname{Div}(X)\) freely generate \(\operatorname{Div}(X)\).
**Definition 2.2.3**.: [10] The _Cox coordinate ring_ is defined to be
\[\mathcal{C}(X):=\kappa[x_{1},\dots,x_{k}]=\kappa[\operatorname{Div}(X)_{+}]= \bigoplus_{D\in\operatorname{Div}(X)_{+}}\kappa x^{D},\]
with the natural identification \(x_{i}=x^{D_{i}}\), and \(x^{D}=x^{\alpha}\) for \(D=\sum a_{i}D_{i}\) corresponding to \(\alpha=(a_{1},\dots,a_{k})\), and the induced multiplication \(x^{D_{1}}\cdot x^{D_{2}}=x^{D_{1}+D_{2}}\).
Denote by \(\operatorname{Prin}(X)\) the subgroup of \(\operatorname{Div}(X)\) of the principal divisors on \(X\), which is generated by \(\operatorname{div}(m)\), where \(m\in M\), giving an isomorphism
\[M\simeq\operatorname{Prin}(X),\quad m\mapsto\sum(v_{i},m)D_{i}.\]
We use here the assumption that \(\Sigma\) is nondegenerate.
Let \(\operatorname{Cl}(X)=\operatorname{Div}(X)/\operatorname{Prin}(X)\) be the Weil divisor class group. Although the Cox coordinate ring, as defined, comes with the natural \(\operatorname{Div}(X)\)-gradation, one can also consider its \(\operatorname{Cl}(X)=\operatorname{Div}(X)/\operatorname{Prin}(X)\)-gradation. Then for any class \([E]\in\operatorname{Cl}(X)\) of a divisor \(E\in\operatorname{Div}(X)\), the set of effective \(T\)-stable Weil divisors in \([E]\) consists of the divisors

\[E+\operatorname{div}(m)\geq 0,\quad m\in M.\]

Thus one can describe the \([E]\)-gradation as
\[\mathcal{C}(X)_{[E]}=\bigoplus_{D\in[E],\,D\geq 0}\kappa\cdot x^{D}=\bigoplus_{m\in M,\ \operatorname{div}(m)+E\geq 0}\kappa\,x^{E}\cdot x^{\operatorname{div}(m)}\simeq H^{0}(X,\mathcal{O}_{X}(E))\cdot x^{E}.\]
Thus choosing any divisors \(E_{1},\ldots,E_{r}\in\operatorname{Div}(X)\) whose classes form a basis of the lattice \(\operatorname{Cl}(X)\), one identifies \(\operatorname{Cl}(X)\) with a subgroup of \(\operatorname{Div}(X)\). Under this noncanonical identification we can write, as in [10] and [1]:
\[\mathcal{C}(X)=\bigoplus_{E\in\operatorname{Cl}(X)}H^{0}(X,\mathcal{O}_{X}(E) )\cdot x^{E}\]
On the other hand the canonical \(\operatorname{Cl}(X)\)-gradation on \(\mathcal{C}(X)\) determines the natural action of the torus
\[T_{X}:=\operatorname{Spec}(\kappa[\operatorname{Cl}(X)])\simeq\operatorname{Spec}(\kappa[t_{1},t_{1}^{-1},\ldots,t_{r},t_{r}^{-1}]),\]
where \(\operatorname{Cl}(X)\simeq\mathbb{Z}^{r}\).
#### 2.2.4. Cox coordinate space
The Cox coordinate ring defines _the Cox coordinate space_ (as in [10] and [1]) to be
\[B=\operatorname{Cox}(X):=\operatorname{Spec}(\mathcal{C}(X))=\operatorname{ Spec}(\bigoplus_{E\in\operatorname{Cl}(X)}H^{0}(X,\mathcal{O}_{X}(E))\cdot x^{E}) \simeq\mathbb{A}^{k},\]
It is the toric variety associated with the fan \(\Sigma_{B}\) of all the faces of \(\sigma_{B}\).
#### 2.2.5. Good and geometric quotients
Let
\[B_{+}=\operatorname{Cox}(X)_{+}:=X_{\Sigma_{B_{+}}}\subset B\]
be the open toric subscheme of \(B\) associated with \(\Sigma_{B_{+}}\). The subscheme \(B_{+}\) is called the _Cox characteristic space_. The morphism \(B_{+}\to X\) corresponding to \(\Sigma_{B_{+}}\to\Sigma\) is toric and affine. It defines the homomorphism of the relevant tori
\[\phi:T_{B}:=\operatorname{Spec}(\kappa[\operatorname{Div}(X)])\to T:= \operatorname{Spec}(\kappa[M]),\]
corresponding to the inclusion \(M\hookrightarrow\operatorname{Div}(X)\) and defining the exact sequence
\[0\to M\to\operatorname{Div}(X)\to\operatorname{Cl}(X)\to 0.\]
Consequently, the kernel of \(\phi\) can be identified canonically with \(T_{X}:=\operatorname{Spec}(\kappa[\operatorname{Cl}(X)])\). Since the action of \(T_{B}\) on \(X\) factors through \(\phi\), the subtorus \(T_{X}=\ker\phi\) acts trivially on \(X\), and the morphism \(B_{+}\to X\) is \(T_{X}\)-invariant and affine. Moreover for any \(\delta\in\Sigma\), and \(\delta_{0}=\pi^{-1}(\delta)\), we have that \(\pi(\delta_{0})=\delta\), and \(X_{\delta_{0}}/\!\!/\ T_{X}=X_{\delta}\). Thus, the affine \(T_{X}\)-invariant morphism \(B_{+}\to X\) is a good quotient.
#### 2.2.6. Forms
By a _form_ \(F\) on \(X\) we mean a \(\operatorname{Cl}(X)\)-homogeneous function of gradation \([E]\) in
\[H^{0}(X,\mathcal{O}_{X}(E))x^{E}=H^{0}(B,\mathcal{O}_{B})_{[E]}=\mathcal{C}(X)_{ [E]}=(\mathcal{O}_{B})_{[E]}.\]
Each such \(T_{B}\)-semiinvariant form can be described as
\[F=x^{D}=x^{\alpha}=x^{m}\cdot x^{E},\]
where \(D\in\operatorname{Div}(X)\), \(D=E+\operatorname{div}(x^{m})\), and \(x^{m}\in H^{0}(X,\mathcal{O}_{X}(E))\), for \(E\) being a linear combination of \(E_{i}\).
With any form \(F=fx^{E}\in H^{0}(X,\mathcal{O}_{X}(E))x^{E}\) we can associate its divisor \(\operatorname{div}_{X}(F)=E+\operatorname{div}(f)\), and its vanishing locus \(V(F)=\operatorname{supp}(\operatorname{div}(F))\). This extends to a homomorphism
\[\operatorname{Div}(X)\to\operatorname{Div}(X),\quad D\to\operatorname{div}( x^{D}),\]
which is the identity on the generators \(E_{i}\) of \(\operatorname{Cl}(X)\subset\operatorname{Div}(X)\), and thus on their linear combinations. On the other hand, any class \([D]\in\operatorname{Cl}(X)\) can be written as the difference
\[[D]=[E^{\prime}]-[E^{\prime\prime}]\]
of effective linear combinations \(E^{\prime}\) and \(E^{\prime\prime}\) of the generators \(E_{i}\).
Then \(E^{\prime}+\operatorname{div}(x^{m})=D+E^{\prime\prime}\) is effective, for a certain \(m\in M\), and we have the equality for the form \(F:=x^{m}x^{E^{\prime}}\):
\[\operatorname{div}(F)=\operatorname{div}(x^{m}x^{E^{\prime}})=\operatorname{ div}(x^{D})+\operatorname{div}(x^{E^{\prime\prime}})=\operatorname{div}(x^{D})+E^{\prime\prime},\]
whence
\[D+E^{\prime\prime}=E^{\prime}+\operatorname{div}(x^{m})=\operatorname{div}(x^ {D})+E^{\prime\prime}\]
and thus
\[\operatorname{div}_{X}(x^{D})=D,\]
for any \(D\in\operatorname{Div}(X)\). Consequently \(V(x^{D})=\operatorname{supp}(D)\) for any form \(x^{D}\), where \(D\in\operatorname{Div}(X)_{+}\).
In particular, the divisor \(\operatorname{div}(x_{i})=\operatorname{div}(x^{D_{i}})=D_{i}\) corresponds to the vertex \(v_{i}\in\operatorname{Vert}(\Sigma)\).
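For instance, on \(X=\mathbb{P}^{2}\) with \(D_{i}=\operatorname{div}(x_{i})\): the monomial \(F=x_{0}^{2}x_{1}\) is a form of gradation \([3D_{0}]\), with \(\operatorname{div}_{X}(F)=2D_{0}+D_{1}\) and \(V(F)=D_{0}\cup D_{1}\).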
#### 2.2.7. Cox characteristic space
The subscheme \(B_{+}\) can be described using the \(T_{B}\)-semiinvariant forms on \(X\) as in [1]. By the construction, \(B_{+}\) can be covered by the open affine subsets \(B_{\delta}:=\pi^{-1}(X_{\delta})\), where \(\delta\in\Sigma_{B_{+}}\). For each \(\delta\in\Sigma_{B_{+}}\) consider the form \(\check{x}_{\delta}:=\prod_{v_{i}\not\in\delta}x_{i}\) on \(X\). Its vanishing locus is equal to the complement
\[X\smallsetminus X_{\delta}=\bigcup_{v_{i}\not\in\delta}D_{i}.\]
So we can write \(X_{\delta}=X\smallsetminus V_{X}(\check{x}_{\delta})\). Similarly \(B_{\delta}=B\smallsetminus V_{B}(\check{x}_{\delta})=B_{\check{x}_{\delta}}\), where \(\check{x}_{\delta}\) is considered as a function on \(B\).
Consequently
\[B_{+}=B\smallsetminus V(\mathcal{I}_{\operatorname{irr}}),\]
where
\[\mathcal{I}_{\operatorname{irr}}:=(\check{x}_{\delta}\mid\delta\in\Sigma) \subset\mathcal{O}(B)=\mathcal{C}(X)\]
is the _irrelevant ideal_ (see [10],[1]).
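For instance, for \(X=\mathbb{P}^{n}\) each maximal cone \(\delta_{i}\in\Sigma\) omits exactly one ray \(v_{i}\), so \(\check{x}_{\delta_{i}}=x_{i}\) and
\[\mathcal{I}_{\operatorname{irr}}=(x_{0},\ldots,x_{n}),\qquad B_{+}=\mathbb{A}^{n+1}\smallsetminus V(x_{0},\ldots,x_{n})=\mathbb{A}^{n+1}\smallsetminus\{0\},\]
consistently with the quotient description of \(\mathbb{P}^{n}\) above.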
Moreover the morphism \(B_{\delta}\to X_{\delta}\) can be described as
\[B_{\delta}=B_{\check{x}_{\delta}}=\operatorname{Spec}(\kappa[x_{1},\dots,x_{ k}]_{\check{x}_{\delta}})\to X_{\delta}=X_{\check{x}_{\delta}}=X\smallsetminus V_{X}( \check{x}_{\delta}).\]
Note that the condition \(f\in H^{0}(X_{\check{x}_{\delta}},\mathcal{O}_{X}(E))\) is equivalent to
\[\operatorname{div}(f)+E+\operatorname{div}(\check{x}_{\delta}^{n})\geq 0\]
for \(n\gg 0\). The latter condition can be written as
\[fx^{E}\cdot\check{x}_{\delta}^{n}\in H^{0}(X,\mathcal{O}_{X}(E+n\operatorname {div}(\check{x}_{\delta})))\cdot x^{E+n\operatorname{div}(\check{x}_{\delta})}.\]
Consequently
\[\kappa[x_{1},\dots,x_{k}]_{\check{x}_{\delta}}=(\bigoplus_{E\in\operatorname{ Cl}(X)}H^{0}(X,\mathcal{O}_{X}(E))\cdot x^{E})_{\check{x}_{\delta}}=\bigoplus_{E \in\operatorname{Cl}(X)}H^{0}(X_{\delta},\mathcal{O}_{X}(E))\cdot x^{E}\]
The latter leads to the formula for the Cox characteristic space to be
\[B_{+}=\operatorname{Cox}(X)_{+}=\operatorname{Spec}_{X}(\bigoplus_{E\in \operatorname{Cl}(X)}\mathcal{O}_{X}(E)\cdot x^{E}),\]
as in [1].
### Cox relative spaces over affine toric schemes
In this section, we shall study the general relative Cox construction developed in Chapter 1 in the context of birational toric morphisms. To a great extent it is analogous to the original Cox construction for toric varieties (as in [1]) presented in the previous section. On the other hand, one can link it to the construction of Satriano, who developed a similar notion in the context of toric Artin stacks in [13].
The following result shows the relation between the toric Cox construction for toric varieties and the general Cox construction for proper morphisms.
**Lemma 2.3.1**.: _Let \(\sigma\) be a regular cone, and \(\Delta\) be its subdivision. Let \(\pi:X_{\Delta}\to X_{\sigma}\) be the induced proper birational morphism. Then the toric Cox coordinate space \(\operatorname{Cox}(X_{\Delta})\) and the toric Cox characteristic space \(\operatorname{Cox}(X_{\Delta})_{+}\) for toric variety \(X_{\Delta}\) coincide with the relative Cox coordinate space \(B=\operatorname{Cox}(X_{\Delta}/X_{\sigma})\) and relative Cox characteristic space \(B_{+}=\operatorname{Cox}(X_{\Delta}/X_{\sigma})_{+}\) for the proper birational morphism \(X_{\Delta}\to X_{\sigma}\)._
Proof.: The construction of the spaces is formally identical. The reason is that the gradation in both cases is the group \(\operatorname{Cl}(X_{\Delta})=\operatorname{Cl}(X_{\Delta}/X_{\sigma})\), which is freely generated by the exceptional toric divisors \(E_{i}\) with no relations.
#### 2.3.2. System of local parameters on affine toric schemes
Let \(P_{\sigma}=\sigma^{\vee}\cap M\) be the monoid associated with the affine toric variety \(X_{\sigma}=\operatorname{Spec}(\kappa[P_{\sigma}])\). Denote by \(P_{\sigma}^{*}\simeq\mathbb{Z}^{r}\) the subgroup of the invertible elements in \(P_{\sigma}\), and let \(\overline{P}_{\sigma}:=P_{\sigma}/P_{\sigma}^{*}\). The natural homomorphism of monoids \(P_{\sigma}\to\overline{P}_{\sigma}=P_{\sigma}/P_{\sigma}^{*}\) splits, and one can write noncanonically
\[P_{\sigma}=\overline{P}_{\sigma}\times P_{\sigma}^{*}.\]
Let \(u_{1}=m_{1},\dots,u_{s}=m_{s}\in P_{\sigma}\) be the minimal set of generators of the monoid \(\overline{P}_{\sigma}\). This set is determined uniquely and consists of the elements \(m\in\overline{P}_{\sigma}\) which cannot be written as \(m=m^{\prime}\cdot m^{\prime\prime}\) for nontrivial elements \(m^{\prime},m^{\prime\prime}\in\overline{P}_{\sigma}\).
**Definition 2.3.3**.: The set of generators \(u_{1},\dots,u_{s}\in\overline{P}_{\sigma}\subset P_{\sigma}\) will be called a _system of local toric parameters_ on \(X_{\sigma}\).
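Two standard instances, included for illustration: for the regular cone \(\sigma_{0}=\langle e_{1},\ldots,e_{n}\rangle\) one has \(\overline{P}_{\sigma_{0}}=\mathbb{Z}_{\geq 0}^{n}\), and the local toric parameters are simply \(u_{i}=x_{i}\). For the cone \(\sigma=\langle e_{2},2e_{1}-e_{2}\rangle\) one finds \(\sigma^{\vee}=\langle e_{1}^{*},e_{1}^{*}+2e_{2}^{*}\rangle\), with minimal generators
\[u_{1}=x,\quad u_{2}=xy,\quad u_{3}=xy^{2},\qquad\kappa[P_{\sigma}]\simeq\kappa[u_{1},u_{2},u_{3}]/(u_{1}u_{3}-u_{2}^{2}),\]
the \(A_{1}\)-singularity; here \(P_{\sigma}^{*}\) is trivial, so \(\overline{P}_{\sigma}=P_{\sigma}\).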
#### 2.3.4. Cox relative spaces over affine toric schemes
**Lemma 2.3.5**.: _Let \(\sigma_{0}\) be any cone in \(\mathbb{N}^{\mathbb{Q}}\), and \(\Delta\) be its subdivision. Consider the induced toric morphism \(\pi:X_{\Delta}\to X_{\sigma_{0}}=\operatorname{Spec}(\kappa[P_{\sigma_{0}}])\). Let \(E_{1},\ldots,E_{k}\) be the toric exceptional divisors of \(\pi\) corresponding to the vertices \(v_{1},\ldots,v_{k}\in\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert }(\sigma_{0})\), and the exceptional valuations \(\nu_{i}=\operatorname{val}(v_{i})\)._
_Let \(B\) and \(B_{+}\) denote the full cobordization and, respectively, the cobordization of the morphism \(X_{\Delta}\to X_{\sigma_{0}}\)_
_Then_
1. \(B=\operatorname{Spec}\mathcal{O}_{X_{\sigma_{0}}}[t_{1}^{-1},\ldots,t_{k}^{-1},u_{ 1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}}]\)_, where_ \(u_{1},\ldots,u_{s}\in P_{\sigma_{0}}\) _is a system of local toric parameters and_ \(\alpha_{i}=(a_{i1},\ldots,a_{ik})\)_, with_ \(a_{ij}:=\nu_{j}(u_{i})\)_._
2. \(B\) _is a toric variety_ \(B\simeq X_{\sigma_{0}}\times\mathbb{A}^{k}\)_, and the corresponding cone is_ \[\sigma_{B}=\sigma_{0}\times\langle e_{1},\ldots,e_{k}\rangle.\]
3. _The natural morphism_ \[B=\operatorname{Spec}\mathcal{O}_{X_{\sigma_{0}}}[t_{1}^{-1},\ldots,t_{k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}}]\to X_{ \sigma_{0}}\] _is given by the projection_ \(\pi_{\Sigma}:\sigma_{B}\to\sigma_{0}\)_, mapping_ \(e_{i}\mapsto v_{i}\)_._
4. \(B_{+}\subset X_{\sigma_{B}}\) _can be described as the set_ \(\Sigma_{B+}\) _of the faces_ \(\sigma\) _of_ \(\sigma_{B}\) _such that_ \(\pi_{\Sigma}(\sigma)\subseteq\delta\) _for some_ \(\delta\in\Delta\)_. In particular,_ \(B_{+}\subset B\) _is an open inclusion._
Proof.: First we will show that \(X_{\Delta}\) can be covered by the open affine subsets \((X_{\Delta})_{F}\), where \(F\) is a form on \(X_{\Delta}\). The problem translates into a toric situation. For any cone \(\delta\in\Delta\) let \(\omega\) be a maximal common face of \(\delta\) and \(\sigma_{0}\). Consider a character \(\chi_{\delta}\in\sigma_{0}^{\vee}\) which is zero on \(\omega\) and strictly positive on \(\sigma_{0}\smallsetminus\omega\). The character \(\chi_{\delta}\) defines a regular function on \(X_{\sigma_{0}}\), for which
\[n_{i}:=\chi_{\delta}(v_{i})=\nu_{i}(\chi_{\delta})>0,\]
for any vertex
\[v_{i}\in\operatorname{Vert}(\delta)\smallsetminus\operatorname{Vert}(\omega)= \operatorname{Vert}(\delta)\smallsetminus\operatorname{Vert}(\sigma_{0}).\]
In particular \(\operatorname{div}(\chi_{\delta})-E_{\delta}\geq 0\), where \(E_{\delta}:=\sum_{E_{i}\cap X_{\delta}\neq\emptyset}n_{i}E_{i}\). Then, for the form \(F_{\delta}:=\chi_{\delta}x^{-E_{\delta}}\), its support
\[\operatorname{supp}(\operatorname{div}(F_{\delta}))=\operatorname{supp}( \operatorname{div}(\chi_{\delta})-E_{\delta})\]
on \(X_{\Delta}\) is the union of all the toric divisors which are in \(X_{\Delta}\smallsetminus X_{\delta}\) and which correspond to the vertices in \(\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\delta)\). Consequently \(\operatorname{supp}(\operatorname{div}(F_{\delta}))=X_{\Delta}\smallsetminus X _{\delta}\), and \(X_{F_{\delta}}=X_{\delta}\). This implies, by Proposition 1.3.4, that the natural morphism
\[B_{+}=\bigcup_{\delta\in\Delta}B_{\delta}\hookrightarrow B\]
is an open immersion, where \(B_{\delta}:=B_{F_{\delta}}\) is open affine.
(1) For any \(i=1,\ldots,k\) let \(t_{i}\) be the coordinate corresponding to \(-E_{i}\). Set
\[\check{\mathbf{t}}_{i}:=(t_{1},\ldots,\check{t}_{i},\ldots,t_{k})\quad\check{ \mathbf{t}}_{i}^{-1}:=(t_{1}^{-1},\ldots,\check{t}_{i}^{-1},\ldots,t_{k}^{-1})\]
By Proposition 1.1.5 one can write:
\[\mathcal{A}_{Y/X}=\bigoplus_{a_{j}\in\mathbb{Z}}\ \bigcap_{j=1}^{k}\mathcal{I}_{\nu_{j},a_{ j}}\ \cdot t_{1}^{a_{1}}\cdot\ldots\cdot t_{k}^{a_{k}}=\bigcap_{j=1}^{k}\bigoplus_{a_{j} \in\mathbb{Z}}\mathcal{I}_{\nu_{j},a_{j}}\ \cdot t_{j}^{a_{j}}[\check{\mathbf{t}}_{j},\check{\mathbf{t}}_{j}^{-1}].\]
Let \(u_{1},\ldots,u_{s}\in\overline{P}_{\sigma}=P_{\sigma}/P_{\sigma}^{*}\subset P_{\sigma}\) be the generators of \(\overline{P}_{\sigma}\), and let \(a_{ij}:=\nu_{j}(u_{i})\in\mathbb{Z}_{\geq 0}\) for \(i=1,\ldots,s\) and \(j=1,\ldots,k\). Then
\[\mathcal{I}_{\nu_{j},a}=(u_{1}^{b_{1}}\cdot\ldots\cdot u_{s}^{b_{s}}\mid\sum_ {i=1}^{s}b_{i}a_{ij}\geq a).\]
Comparing gradations we easily see that, for each \(j\),
\[\bigoplus_{a_{j}\in\mathbb{Z}}\mathcal{I}_{\nu_{j},a_{j}}t_{j}^{a_{j}}=\mathcal{O }_{X}[t_{j}^{-1},u_{1}t_{j}^{a_{1j}},\ldots,u_{s}t_{j}^{a_{sj}}].\]
So
\[\mathcal{A}_{Y/X}=\bigcap_{j=1}^{k}\mathcal{O}_{X}[t_{j}^{-1},u_{1}t_{j}^{a_{1j}},\ldots,u_{s}t_{j}^{a_{sj}}][\check{\mathbf{t}}_{j},\check{\mathbf{t}}_{j}^{-1}]=\mathcal{O}_{X}[t_{1}^{-1},\ldots,t_{k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}}],\]
where \(\alpha_{i}=(a_{i1},\ldots,a_{ik})\).
(2)
\[B =\operatorname{Spec}\mathcal{O}_{X_{\sigma_{0}}}[t_{1}^{-1},\ldots,t_ {k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}}]=\] \[=\operatorname{Spec}(\kappa[u_{1},\ldots,u_{s},v_{1},\ldots,v_{r} ][t_{1}^{-1},\ldots,t_{k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s} \mathbf{t}^{\alpha_{s}}])=\] \[=\operatorname{Spec}(\kappa[t_{1}^{-1},\ldots,t_{k}^{-1},u_{1} \mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}},v_{1},\ldots,v_{r} ])\simeq\] \[\simeq\operatorname{Spec}(\kappa[t_{1}^{-1},\ldots,t_{k}^{-1},u_{ 1},\ldots,u_{s},v_{1},\ldots,v_{r}])\simeq X_{\sigma_{0}}\times\mathbb{A}^{k}.\]
(3) The toric map
\[B =\operatorname{Spec}(\kappa[t_{1}^{-1},\ldots,t_{k}^{-1},u_{1} \mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}},v_{1},\ldots,v_{r} ])\to\] \[X_{\sigma_{0}} =\operatorname{Spec}(\kappa[u_{1},\ldots,u_{s},v_{1},\ldots,v_{r} ]),\]
corresponds to the map of cones
\[\pi_{B}:\sigma_{B}\simeq\sigma_{0}\times\langle e_{1},\ldots,e_{k}\rangle \to\sigma_{0}.\]
Under this correspondence
\[\operatorname{val}(e_{i})(t_{j}^{-1})=\delta_{ij},\quad\operatorname{val}(e_{i })(u_{j}\mathbf{t}^{\alpha_{j}})=0.\]
On the other hand \(\operatorname{val}(v)(t_{i}^{-1})=0\) for any integral vector \(v\in\sigma_{0}\).
By the above we can write
\[B=\operatorname{Spec}(\kappa[t_{1}^{-1},\ldots,t_{k}^{-1}])\times\operatorname {Spec}(\kappa[u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{\alpha_{s}}, v_{1},\ldots,v_{r}])\]
The toric valuation \(\mu_{i}\) on \(B\) associated to the divisor \(D_{i}=V_{B}(t_{i}^{-1})\) satisfies \(\mu_{i}(t_{i}^{-1})=1\), \(\mu_{i}(u_{j}\mathbf{t}^{\alpha_{j}})=0\), and \(\mu_{i}(t_{i^{\prime}}^{-1})=0\) for \(i\neq i^{\prime}\). It corresponds to the vector \(e_{i}\), as \(\operatorname{val}(e_{i})\) fulfills precisely the same relations.
The quotient morphism \(\pi_{B}:B\to X_{\sigma_{0}}\) takes a toric valuation \(\operatorname{val}(v)\) on \(B\), for any integral \(v\in\sigma_{B}\), to the valuation on \(\mathcal{O}(X_{\sigma_{0}})\) corresponding to \(\operatorname{val}(\pi_{\Sigma}(v))\). It maps the vertices of the face \(\sigma_{0}\subset\sigma_{B}\) to the very same vertices of \(\sigma_{0}\). The image of the vector \(e_{i}\) is the vertex \(\pi_{\Sigma}(e_{i})=v_{i}\in\operatorname{Vert}(\Delta)\smallsetminus \operatorname{Vert}(\sigma_{0})\). This follows from Lemma 1.2.9 or can be seen by direct computation. By the previous considerations, \(v_{i}\) corresponds to \(\nu_{i}\) on \(X_{\Delta}\), and \(e_{i}\) to the valuation \(\mu_{i}\) of \(t_{i}^{-1}\) on \(B_{+}\). The restriction of the toric valuation \(\mu_{i}\) to \(\kappa[u_{1},\ldots,u_{s}]\) gives
\[\mu_{i}(u_{j})=\mu_{i}(u_{j}\mathbf{t}^{\alpha_{j}}\cdot\mathbf{t}^{-\alpha_{j }})=\mu_{i}(\mathbf{t}^{-\alpha_{j}})=a_{ji}=\nu_{i}(u_{j}).\]
(4) By the considerations at the beginning of the proof, and Lemma 1.3.2, we can write \(B_{+}\) as the union of open affine subsets \(B_{\delta}=B_{F_{\delta}}=\pi_{B_{+}}^{-1}(X_{\delta})\):
\[B_{+}=\bigcup_{\delta\in\Delta}B_{\delta}\subset B.\]
The induced map of fans \(\Sigma_{B_{+}}\to\Delta\) corresponds to the good quotient \(B_{+}\to B_{+}/\!\!/T\), and is defined by the linear map:
\[\pi_{\Sigma}:\mathbb{N}_{B}^{\mathbb{Q}}=\mathbb{N}_{B_{+}}^{\mathbb{Q}}\to \mathbb{N}^{\mathbb{Q}}=\mathbb{N}_{Y}^{\mathbb{Q}}.\]
Thus any cone \(\delta\in\Delta\) can be written as the image \(\delta=\pi_{\Sigma}(\delta^{\prime})\), where \(\delta^{\prime}\in\Sigma_{B_{+}}\). In particular any vertex \(v_{i}\in\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\sigma_{0})\) is the image \(v_{i}=\pi_{\Sigma}(e_{i})\) of \(e_{i}\in\operatorname{Vert}(\Sigma_{B})=\operatorname{Vert}(\Sigma_{B_{+}})\). Consequently, the fan \(\Sigma_{B_{+}}\) is determined by the faces \(\tau\) of \(\Sigma_{B}\) such that \(\pi_{\Sigma}(\tau)\subseteq\delta\) for some \(\delta\in\Delta\).
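The simplest illustration of the lemma (included for concreteness) is the blow-up of the origin in \(\mathbb{A}^{2}\): here \(k=1\), \(v_{1}=e_{1}+e_{2}\), the local toric parameters are \(u_{1}=x\), \(u_{2}=y\) with \(\nu_{1}(x)=\nu_{1}(y)=1\), and
\[B=\operatorname{Spec}(\kappa[t^{-1},xt,yt])\simeq\mathbb{A}^{3},\qquad B_{+}=B\smallsetminus V(xt,yt),\]
with \(\sigma_{B}=\sigma_{0}\times\langle e\rangle\) and the projection \(e\mapsto v_{1}\); the good quotient \(B_{+}/\!\!/T_{B}\) recovers \(X_{\Delta}\), the blow-up of \(\mathbb{A}^{2}\) at the origin (compare Example 2.6.14 below).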
### Cox relative spaces for toric morphisms: the general case

#### 2.4.1. Cobordization of proper toric morphisms
Let \(\Delta\) be a subdivision of a fan \(\Sigma\). We can further generalize the characterization of the cobordization of any proper birational toric morphism \(\pi:X_{\Delta}\to X_{\Sigma}\).
**Proposition 2.4.2**.: _Let \(\Delta\) be a fan subdivision of a fan \(\Sigma\). Let \(\pi:Y=X_{\Delta}\to X=X_{\Sigma}\) be the associated proper toric morphism of toric varieties. Let \(v_{1},\ldots,v_{k}\) be the vertices of \(\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\Sigma)\) corresponding to the toric valuations \(\nu_{i}=\operatorname{val}(v_{i})\), associated with the exceptional divisors \(E_{1},\ldots,E_{k}\). Let \(\sigma_{0}=\langle e_{1},\ldots,e_{k}\rangle\) be the regular cone defined by the free basis \(e_{1},\ldots,e_{k}\)._
_Let \(\pi_{\Sigma}:|\Sigma|\times\sigma_{0}\to|\Sigma|\) be the linear map of the supports of fans such that \(\pi_{\Sigma}(e_{i})=v_{i}\), and which is the identity on \(|\Sigma|\). Consider the subfan \(\Sigma_{B}\) of \(\Sigma\times\sigma_{0}\) consisting of the faces of \(\Sigma\times\sigma_{0}\) mapping into faces of \(\Sigma\) under the projection \(\pi_{\Sigma}\). Then the full cobordization \(B\to X\) of \(\pi\) can be described as the toric morphism associated with the projection \(\pi_{\Sigma|\Sigma_{B}}:\Sigma_{B}\to\Sigma\)._
_The morphism \(B_{+}\subset B\) is an open inclusion which corresponds to the subfan \(\Sigma_{B_{+}}\) of \(\Sigma_{B}\) of all the faces of \(\Sigma\times\sigma_{0}\) mapping into faces of \(\Delta\)._
Proof.: By Lemma 2.3.5, and reducing to the affine case, we see that \(B_{+}\subset B\) is an open immersion.
Let \(T_{0}:=\operatorname{Spec}(\kappa[M])\subset X_{\Sigma}\) be the torus acting on \(X_{\Sigma}\), and on \(X_{\Delta}\). Let \(T_{B}:=\operatorname{Spec}(\kappa[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}])\), where the coordinates \(t_{1}^{-1},\ldots,t_{k}^{-1}\) correspond to \(e_{1},\ldots,e_{k}\) on \(X_{\sigma_{0}}=\operatorname{Spec}(\kappa[t_{1}^{-1},\ldots,t_{k}^{-1}])\). By Proposition 1.2.2, we can write \(B\) as \(B=\operatorname{Spec}(\mathcal{A}_{Y/X})\), where
\[\mathcal{A}_{Y/X}=\bigoplus_{a_{i}\in\mathbb{Z}}\ \bigcap_{i=1}^{k}\,\mathcal{I}_{ \nu_{i},a_{i}}\ \cdot t_{1}^{a_{1}}\cdot\ldots\cdot t_{k}^{a_{k}}\subset\mathcal{O}_{X}[t_{1}, t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}]\]
Consequently, \(B\) contains a toric variety
\[B=\operatorname{Spec}_{X}(\mathcal{A}_{Y/X})\supset B_{-}=\operatorname{Spec} _{X}(\mathcal{O}_{X}[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}])=X_{\Sigma} \times T_{B},\]
and hence contains a torus \(T_{0}\times T_{B}\). Moreover the torus \(T_{0}\times T_{B}\) acts on \(B\).
On the other hand, by Lemmas 1.6.1 and 2.3.5 the scheme \(B\) is the union of toric varieties \(B_{\sigma}\) containing \(T_{0}\times T_{B}\), associated with \(\sigma\in\Sigma\), such that
\[B_{\sigma}:=\pi_{B}^{-1}(X_{\sigma})=B(X_{\Delta_{|\sigma}}/X_{ \sigma})\times T_{B\smallsetminus B_{\sigma}}=\operatorname{Spec}_{X_{\sigma}}(\mathcal{O}_{X_{\sigma}}[t_{1}^{-1 },\ldots,t_{k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{s}\mathbf{t}^{ \alpha_{s}}]).\]
Thus \(B\) is a toric variety; let \(\Sigma_{B}\) be its corresponding fan. The affine toric morphism \(B\to X\) determines the homomorphism of tori \(T_{0}\times T_{B}\to T_{0}\). It corresponds to the map of fans \((\Sigma_{B},N_{0}\times\mathbb{N}_{B})\to(\Sigma,N_{0})\), defined by the natural projection \(N_{0}\times\mathbb{N}_{B}\to N_{0}\).
Consider the toric variety \(X_{\Sigma}\times\operatorname{Spec}(\kappa[t_{1}^{-1},\ldots,t_{k}^{-1}])\), associated with the fan \(\Sigma\times\sigma_{0}\), with the lattice \(N_{0}\times\mathbb{N}_{B}\). The linear map
\[\pi_{\Sigma}:(|\Sigma|\times\sigma_{0},N_{0}\times\mathbb{N}_{B})\to(|\Sigma|, \mathbb{N}_{0}),\]
satisfies \(\pi_{\Sigma}(e_{i})=v_{i}\), and \(\pi_{\Sigma|N_{0}}=\operatorname{id}_{N_{0}}\).
The full cobordization morphism \(B\to X_{\Sigma}\) takes the divisor \(D_{i}=V_{B}(t_{i}^{-1})\) onto the center \(Z_{X}(\nu_{i})\) of the valuation \(\nu_{i}\) associated with \(E_{i}\), by Lemma 1.2.9. Thus it defines the same map on the lattices \(\pi_{\Sigma}:\mathbb{N}_{B}\to\mathbb{N}\). Moreover, by Lemmas 1.6.1 and 2.3.5, each toric variety
\[B_{\sigma}=B(X_{\Delta_{|\sigma}}/X_{\sigma})\times T_{B\smallsetminus B_{\sigma }}\subset B\]
corresponds to the subfan determined by the cone \(\sigma\times\tau(\sigma)\) of \(\Sigma\times\sigma_{0}\), where \(\tau(\sigma)\leq\sigma_{0}\) is generated by all \(e_{i}\) with \(v_{i}\in\sigma\).
Thus, by Lemma 2.3.5(4), \(\Sigma_{B}\) consists exactly of the faces of \(\Sigma\times\sigma_{0}\) mapping into faces of \(\Sigma\), under the projection \(\pi_{\Sigma}:e_{i}\mapsto v_{i}\). Consequently, \(B_{+}\) corresponds to the subfan \(\Sigma_{B_{+}}\) of \(\Sigma_{B}\) of all the faces mapping into faces of \(\Delta\).
### The dual complex of the exceptional divisor
#### 2.5.1. The dual complex of toric morphisms
Let \(\pi:Y=X_{\Delta}\to X_{\Sigma}\) be a proper toric morphism, where \(\Delta\) is a subdivision of \(\Sigma\). Assume that \(X=X_{\Sigma}\) is smooth. Then the full cobordization \(B\) of \(\pi\) is a smooth toric variety with the toric morphism \(\pi_{B}:B\to X\). Consequently the exceptional divisors \(D=V_{B}(t_{1}^{-1}\cdot\ldots\cdot t_{k}^{-1})\) of \(\pi_{B}\) and \(D_{+}=D\cap B_{+}\) of \(\pi_{B+}\) are SNC. On the other hand the components \(D_{i+}\) map to the components \(E_{i}\) of the exceptional toric divisor \(E\) of \(\pi:Y\to X\).
One can define the divisorial stratifications \(S_{D}\), and \(S_{D_{+}}\) on \(B\), and \(B_{+}\) with the strata determined by the nonempty sets
\[s_{I}:=\bigcap_{i\in I}D_{i}\smallsetminus\bigcup_{j\in J}D_{j},\]
where \(I\cup J=\{1,\ldots,k\}\), \(I\cap J=\emptyset\). Note that the closure \(\overline{s_{I}}\) can be written in the form
\[\overline{s_{I}}=\bigcap_{i\in I}D_{i}.\]
Likewise the stratification \(S_{E}\) of \(E\) on \(Y\) is determined by the nonempty closed sets \(\overline{s_{I}}^{E}:=\bigcap_{i\in I}E_{i}\), which determine the strata \(s_{I}^{E}\) obtained by removing from \(\overline{s_{I}}^{E}\) all the proper subsets \(\overline{s_{J}}^{E}\subset\overline{s_{I}}^{E}\), with \(J\supsetneq I\).
These three stratifications are coarser than the orbit stratifications; thus, each stratum is the union of orbits. The divisorial stratifications \(S_{D}\) and \(S_{D_{+}}\) define the dual complexes \(\Delta_{D}\) and \(\Delta_{D_{+}}\subset\Delta_{D}\). The vertices \(e_{i}\) of \(\Delta_{D}\) and \(\Delta_{D_{+}}\) correspond to the divisors \(D_{i}\) or, respectively \(D_{i+}\). We associate with a stratum \(s=\bigcap_{i\in I}D_{i}\smallsetminus\bigcup_{j\in J}D_{j}\), the simplex \(\sigma_{s}:=\Delta(e_{i}\mid i\in I)\).
Similarly, we can define the dual complex \(\Delta_{E}\) associated with the toric exceptional divisor \(E\) on \(Y\) (which is usually not SNC). Again the vertices \(e_{i}\) of \(\Delta_{E}\) correspond to the divisors \(E_{i}\). We associate with any set of divisors \(\{E_{i}\mid i\in I\}\) such that \(\bigcap_{i\in I}E_{i}\neq\emptyset\) the simplex \(\sigma_{I}:=\Delta(e_{i}\mid i\in I)\). Summarizing we obtain the following characterization of the complexes:
**Lemma 2.5.2**.: _A simplex \(\sigma\) in \(\Delta_{D}\) (respectively in \(\Delta_{D_{+}}\) or \(\Delta_{E}\)) corresponds bijectively to a set of divisors \(D_{i}\) (respectively \(D_{i+}\) or \(E_{i}\)) having a nonempty intersection. \(\clubsuit\)_
**Lemma 2.5.3**.: _Let \(B\to X\) be the full cobordization of \(\pi:Y=X_{\Delta}\to X=X_{\Sigma}\). Let \(D\) be the exceptional divisor on \(B\), and \(S_{D}\) be the induced stratification. Then for any stratum \(s\in S_{D}\), the image \(\pi_{B}(s)\) is closed in \(X\)._
Proof.: The problem is local on \(X\) so we can assume that \(X=X_{\sigma}\). Let \(B=X_{\sigma}\times X_{\delta}\), where \(\delta=\langle e_{1},\ldots,e_{k}\rangle\) is the regular cone generated by a free basis \(\{e_{1},\ldots,e_{k}\}\).
The morphism \(B\to X\) corresponds to the projection \(\pi_{\Sigma}:\sigma_{B}=\sigma\times\delta\to\sigma\), mapping \(e_{i}\) to \(v_{i}\in\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\sigma)\).
Any stratum \(s=\bigcap_{i\in I}D_{i}\smallsetminus\bigcup_{j\in J}D_{j}\) in \(S_{D}\) is closed on the open affine subset \(B^{\prime}=B\smallsetminus\bigcup_{j\in J}D_{j}\) of \(B\). By replacing \(B\) with the open affine subset \(B^{\prime}=B_{t_{J}^{-1}}\), where \(t_{J}=\prod_{j\in J}t_{j}\), we may assume that \(s=\bigcap_{i\in I}D_{i}\) and that the exceptional vertices \(e_{i}\) of \(\sigma\times\delta\), where \(i\in I\), span the cone \(\delta\). Then \(s\) is the union of orbits corresponding to the cones in \(\operatorname{Star}(\delta,\sigma_{B})\).
Let \(\delta_{0}:=\pi_{\Sigma}(\delta)\) be the image of \(\delta\), which is a subcone of \(\sigma\) generated by the \(\pi_{\Sigma}(e_{i})\). Denote by \(\sigma_{0}\leq\sigma\) the unique face such that \(\operatorname{int}(\delta_{0})\subset\operatorname{int}(\sigma_{0})\).
Thus, since \(\sigma\) is regular, and the map \(\pi_{\Sigma}\) is surjective, the image \(\pi_{B}(s)\) of \(s\) is defined by the orbits corresponding to the cones in \(\operatorname{Star}(\sigma_{0},\sigma)\), and thus it is closed. \(\clubsuit\)
#### 2.5.4. The center of valuation
Recall that for any valuation \(\nu\) of the quotient field \(\kappa(X)\), we denote its center on \(X\) by \(Z_{X}(\nu)\).
**Corollary 2.5.5**.: _Consider any stratum \(s\in S_{D}\) such that \(s=\bigcap_{i\in I}D_{i}\smallsetminus\bigcup_{j\in J}D_{j}\). Then_
\[\pi_{B}(s)=\bigcap_{i\in I}Z_{X}(\nu_{i}).\]
Proof.: Note that, by Lemma 1.2.9, the image \(\pi_{B}(D_{i})=Z_{X}(\nu_{i})\). Then \(\pi_{B}(s)\subseteq\bigcap_{i\in I}Z_{X}(\nu_{i}).\) The problem is local on \(X\), and we use the notation and the description from the proof of the previous Lemma. The stratum \(s\) contains a generic toric orbit corresponding to the cone \(\delta=\langle e_{i}\mid i\in I\rangle\). Its image \(\pi_{B}(s)\) is closed and corresponds to the \(\operatorname{Star}(\sigma_{0},\sigma)\), where \(\sigma_{0}\) is the smallest face containing the images \(\{v_{i}\mid i\in I\}\), where \(\operatorname{val}(v_{i})=\nu_{i}\).
On the other hand, \(\bigcap_{i\in I}Z_{X}(\nu_{i})\) corresponds to the faces of \(\sigma\) containing all \(v_{i}\). Both sets are identical and \(\pi_{B}(s)=\bigcap_{i\in I}Z_{X}(\nu_{i})\). \(\clubsuit\)
**Lemma 2.5.6**.: _The morphism \(\pi_{B_{+},Y}:B_{+}\to Y\) determines a bijective correspondence between the sets of divisors \(\{D_{i+}\mid i\in I\}\) such that \(\bigcap D_{i+}\neq\emptyset\), and the sets \(\{E_{i}\mid i\in I\}\) for which \(\bigcap E_{i}\neq\emptyset\)._
Proof.: We need to show that \(\bigcap_{i\in I}D_{i+}\neq\emptyset\) iff \(\bigcap_{i\in I}E_{i}\neq\emptyset\). As noted in Section 2.5.1, \(\pi_{B_{+},Y}(D_{i+})=E_{i}\). Thus if \(\bigcap_{i\in I}D_{i+}\) is nonempty then
\[\bigcap_{i\in I}\pi_{B_{+},Y}(D_{i+})=\bigcap_{i\in I}E_{i}\supseteq\pi_{B_{+ },Y}(\bigcap_{i\in I}D_{i+})\neq\emptyset.\]
Conversely, if \(\bigcap_{i\in I}E_{i}\) is nonempty then the vertices \(v_{i}\) corresponding to \(E_{i}\) form the subcone \(\tau=\langle v_{i}\mid i\in I\rangle\) of a face \(\delta\in\Delta\), with \(\operatorname{Vert}(\tau)\subseteq\operatorname{Vert}(\delta)\smallsetminus \operatorname{Vert}(\sigma)\). Consequently, by Lemma 2.3.5(4), \(\tau\) is the image of the face \(\delta^{\prime}=\langle e_{i}\mid i\in I\rangle\in\Sigma_{B_{+}}\), whence \(\bigcap_{i\in I}D_{i+}\) is nonempty.
**Corollary 2.5.7**.: _The natural surjective map \(S_{D+}\to S_{E}\) determines an isomorphism of the dual complexes \(\Delta_{D_{+}}\simeq\Delta_{E}\). \(\clubsuit\)_
Also, we have
**Corollary 2.5.8**.: _The natural morphism \(S_{D+}\to S_{D}\) determines an inclusion of the dual complexes \(\Delta_{D_{+}}\hookrightarrow\Delta_{D}\), so that \(\Delta_{D_{+}}\) is a subcomplex of \(\Delta_{D}\)._
Proof.: By the construction, the faces of \(\Delta_{D}\) (and \(\Delta_{D_{+}}\)) correspond to the sets of divisors \(\{D_{i}\mid i\in I\}\) such that \(\bigcap_{i\in I}D_{i}\neq\emptyset\). Now, if \(\bigcap_{i\in I}D_{i+}\neq\emptyset\) then obviously \(\bigcap_{i\in I}D_{i}\neq\emptyset\).
\(\clubsuit\)
### Newton polytopes of monomial ideals
#### 2.6.1. Newton polytopes
**Definition 2.6.2**.: Consider the lattice of monomials
\[\mathbb{M}=\{\mathbf{x}^{\alpha}\mid\alpha\in\mathbb{Z}^{n}\}\simeq\mathbb{Z} ^{n},\]
and let \(N=\operatorname{Hom}(\mathbb{M},\mathbb{Z})\) be its dual. Let \(I=(\mathbf{x}^{\alpha_{1}},\ldots,\mathbf{x}^{\alpha_{k}})\subset\kappa[x_{1}, \ldots,x_{n}]\) be a toric ideal generated by the monomials corresponding to the elements of
\[\alpha_{i}\in\mathbb{Z}_{\geq 0}^{n}\subset\sigma_{0}^{\vee}=\langle e_{1}^{*},\ldots,e_{n}^{*}\rangle=\mathbb{Q}_{\geq 0}^{n}.\]
By the associated _Newton polytope_ of \(\mathcal{I}\) we mean
\[\operatorname{P}=\operatorname{P}_{I}:=\operatorname{conv}(\alpha_{1}+ \mathbb{Q}_{\geq 0}^{n},\ldots,\alpha_{k}+\mathbb{Q}_{\geq 0}^{n})\subseteq \mathbb{Q}_{\geq 0}^{n}\subseteq M_{\mathbb{Q}}=M\otimes\mathbb{Q}=\mathbb{Q}^{n}\]
Conversely any polytope (or polyhedron) \(P=P+\mathbb{Q}_{\geq 0}^{n}\) determines the ideal
\[\mathcal{I}=\mathcal{I}_{P}:=(\mathbf{x}^{\alpha}\mid\alpha\in P).\]
**Lemma 2.6.3**.: _There is a bijective correspondence \(I\mapsto P_{I}\), \(P\mapsto I_{P}\), between integrally closed toric ideals \(I\subset\kappa[x_{1},\ldots,x_{n}]\), and polytopes \(P=P+\mathbb{Q}_{\geq 0}^{n}\) with integral vertices. \(\clubsuit\)_
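For instance, for \(I=(x^{2},y^{2})\subset\kappa[x,y]\) (a standard example) the Newton polytope \(P_{I}\) contains the lattice point \((1,1)=\tfrac{1}{2}(2,0)+\tfrac{1}{2}(0,2)\), so
\[\mathcal{I}_{P_{I}}=(x^{2},xy,y^{2})\supsetneq I,\]
and \(\mathcal{I}_{P_{I}}\) is the integral closure of \(I\); indeed \((xy)^{2}=x^{2}\cdot y^{2}\in I^{2}\).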
#### 2.6.4. The orbit stratification
One can identify \(e_{i}^{*}\) with \(x_{i}\), so we can write \(\sigma_{0}^{\vee}=\langle x_{1},\ldots,x_{n}\rangle\). Denote by \(N_{\mathbb{Q}}\) the dual space of \(M_{\mathbb{Q}}\), and \(\sigma_{0}\subset N_{\mathbb{Q}}\) the dual of \(\sigma_{0}^{\vee}\) as in Sections 2.1.1, 2.1.2. For any \(\tau\subset N_{\mathbb{Q}}\) set
\[\tau^{\perp}:=\{y\in M_{\mathbb{Q}}\mid(x,y)=0\quad\text{for all }x\in\tau\}.\]
**Lemma 2.6.5**.: _There is a natural bijective correspondence between_
* _the faces_ \(\tau\) _of_ \(\sigma_{0}\)__
* _the faces_ \(\tau^{*}:=\tau^{\perp}\cap\sigma_{0}^{\vee}\) _of_ \(\sigma_{0}^{\vee}\)_._
* _the open affine subsets_ \(X_{\tau}\subset X_{\sigma_{0}}\)__
* _the minimal closed orbits_ \(O_{\tau}\subset X_{\tau}\) _which are in_ \(X_{\sigma_{0}}\)_._
_Moreover under the above identification the closure \(\overline{O_{\tau}}\) of the orbit \(O_{\tau}\) is defined by the ideal \((x_{i}\mid x_{i}\not\in\tau^{*})\)._
Proof.: The face \(\tau\) of \(\sigma_{0}\) determines the open subset
\[X_{\tau}=\operatorname{Spec}(\kappa[\tau^{\vee}\cap M])=\operatorname{Spec}( \kappa[P_{\tau}])=\operatorname{Spec}(\kappa[P_{\tau}^{*}])\times\operatorname {Spec}(\kappa[\overline{P}_{\tau}])\]
of \(X_{\sigma_{0}}\), where
\[P_{\tau}=\tau^{\vee}\cap M=P_{\sigma_{0}}+P_{\tau}^{*}=P_{\sigma_{0}}-(\tau^{ *}\cap M).\]
Thus \(\tau^{*}\cap M=P_{\tau}^{*}\cap P_{\sigma_{0}}\) consists of the elements of \(P_{\sigma_{0}}=\sigma_{0}^{\vee}\cap M\), which are invertible in \(P_{\tau}\). The closed orbit \(O_{\tau}\subset X_{\tau}\) is described by the ideal generated by the set of noninvertible elements
\[P_{\tau}\smallsetminus P_{\tau}^{*}:=(\tau^{\vee}\smallsetminus\tau^{*}\smallsetminus( -\tau^{*}))\cap M\subset P_{\tau}.\]
Thus its closure \(\overline{O_{\tau}}\) in \(X_{\sigma_{0}}\) is defined by the ideal \((x_{i}\mid x_{i}\not\in\tau^{*})\) corresponding to the monoid ideal \(P_{\sigma_{0}}\smallsetminus P_{\tau}^{*}=(\sigma_{0}^{\vee}\smallsetminus\tau^{* })\cap M\). Conversely, any face \(\tau^{*}\) of \(\sigma_{0}^{\vee}\) determines the closure \(\overline{O_{\tau}}\subset X_{\sigma_{0}}\) of the orbit \(O_{\tau}\) with the monoid ideal \((\sigma_{0}^{\vee}\smallsetminus\tau^{*})\cap M\), and the face \(\tau=(\tau^{*})^{\perp}\cap\sigma_{0}\) of \(\sigma_{0}\subset N_{\mathbb{Q}}\).
By the construction, \(\mathcal{O}_{\tau}\) is the smallest \(T\)-stable closed subset of \(X_{\tau}\). If \(\tau\subset\tau^{\prime}\) is the inclusion of the faces then \(X_{\tau}\subset X_{\tau^{\prime}}\) is an open immersion, and \(\overline{\mathcal{O}_{\tau}}\) contains \(\mathcal{O}_{\tau^{\prime}}\). Consequently the orbits \(\mathcal{O}_{\tau}\) form the stratification of \(X_{\sigma_{0}}=\operatorname{Spec}(\kappa[x_{1},\dots,x_{n}])\).
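For instance, for \(X_{\sigma_{0}}=\mathbb{A}^{2}\) the correspondence reads as follows: the face \(\tau=\{0\}\) gives \(\tau^{*}=\sigma_{0}^{\vee}\) and the open orbit \(\mathbb{G}_{m}^{2}\); the face \(\tau=\langle e_{1}\rangle\) gives \(\tau^{*}=\langle x_{2}\rangle\), the orbit \(\{x_{1}=0,\,x_{2}\neq 0\}\), and \(\overline{O_{\tau}}=V(x_{1})\); and \(\tau=\sigma_{0}\) gives \(\tau^{*}=\{0\}\) and the closed orbit \(\overline{O_{\sigma_{0}}}=V(x_{1},x_{2})=\{0\}\).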
**Corollary 2.6.6**.: _Let \(\mathcal{I}\subset\kappa[x_{1},\dots,x_{k}]\) be a monomial ideal and \(P_{\mathcal{I}}\subset\sigma_{0}^{\vee}\) be its Newton polytope. Then the toric subset \(V(\mathcal{I})\) is exactly the union of the orbits \(O_{\tau}\) such that \(\tau^{*}\) is disjoint from \(P_{\mathcal{I}}\)._
Proof.: The orbit \(O_{\tau}\) is contained in \(V(\mathcal{I})\) if and only if the ideal of \(\overline{O_{\tau}}\) contains \(\mathcal{I}\), i.e., the corresponding monoid ideal \((\sigma_{0}^{\vee}\smallsetminus\tau^{*})\cap M\) contains \(P\cap M\). The latter is equivalent to the condition \(\tau^{*}\cap P=\emptyset\).
#### 2.6.7. Supporting faces
The monomial ideal \(I=(\mathbf{x}^{\alpha_{1}},\dots,\mathbf{x}^{\alpha_{k}})\) defines a piecewise linear convex function \(F_{I}(v):=\min_{i}(\alpha_{i},v)\) on \(\sigma_{0}:=\langle e_{1},\dots,e_{n}\rangle\), the cone dual to \(\sigma_{0}^{\vee}\).
Likewise any polytope \(P\subset\sigma_{0}^{\vee}\) determines a piecewise linear convex function
\[F_{P}(v):=\min\{(w,v)\mid w\in P\}\]
on \(\sigma_{0}\).
If \(P=P(I)\) then both functions coincide:
\[F_{P}=F_{I}=\min_{i}(v,\alpha_{i})=\min\{(v,w)\mid w\in P\}.\]
By the _dual fan_ or _normal fan_ of \(P\) we mean the fan \(\Delta_{P}=\Delta_{\mathcal{I}}\) determined by the maximal cones \(\tau\subset\sigma_{0}\) on which \(F_{P}\) is linear. By definition, \(\Delta_{P}\) is a decomposition of \(\sigma_{0}\).
Conversely, the function \(F_{P}\) on \(\sigma_{0}\), determines the polytope
\[P=\{w\in\sigma_{0}^{\vee}\mid(\cdot,w)_{|\sigma_{0}}\geq F_{P|\sigma_{0}}\}.\]
Recall the standard fact from convex geometry:
**Lemma 2.6.8**.: _There is a bijective correspondence between the faces \(P\) of the polytope \(P_{0}\), and the faces \(\tau_{P}\) of the fan \(\Delta_{P_{0}}\)._
\[P\mapsto\tau_{P}:=(P_{0}-P)^{\vee}=\{v\in\sigma_{0}\mid(v,w)\geq 0,\ w\in P_{0} -P\}\in\Delta_{P_{0}}\]
\[\tau\mapsto P_{\tau}=\{w\in P_{0}\mid F_{P_{0}|\tau}=(\cdot,w)_{|\tau}\}.\]
_Moreover \(\dim(P)=n-\dim(\tau_{P})\)._
**Remark 2.6.9**.: For any \(i=1,\dots,n\), let
\[a_{i}:=\min\{x_{i}(p)\mid p\in P\}.\]
Then
\[P_{i}:=\{p\in P\mid(x_{i}-a_{i})(p)=0\}\]
is the face of \(P\) corresponding to the one-dimensional face \(\langle e_{i}\rangle\) determined by the vertex \(e_{i}\) of \(\sigma_{0}\).
**Definition 2.6.10**.: By the _supporting facets_ of \(P_{0}\) we mean the facets corresponding to the vertices of \(\operatorname{Vert}(\Delta_{P_{0}})\smallsetminus\operatorname{Vert}(\sigma_{0})\). The affine hull of a supporting facet will be called a _supporting hyperplane_. By the _supporting faces_ we mean the faces which are intersections of some supporting facets.
As a corollary from Lemma 2.6.8, we obtain
**Lemma 2.6.11**.: _Let \(\Delta\) be the subdivision of the regular cone \(\sigma_{0}\) associated with the normalized blow-up \(\pi:Y=X_{\Delta}\to X=X_{\sigma_{0}}\) of the monomial ideal \(I\subset\kappa[P_{\sigma_{0}}]=\kappa[x_{1},\dots,x_{n}]\). Let \(B\to X=X_{\sigma_{0}}=\mathbb{A}^{n}\) be the full cobordant blow-up of \(I\). Then the following sets are in bijective correspondence:_
1. _The supporting hyperplanes_ \(H_{i}\) _of_ \(P(I)\)_._
2. _The vertices_ \(v_{i}\) _of_ \(\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\sigma_{0})\)_._
3. _The exceptional divisors_ \(D_{i}\) _of_ \(B\to X\)_._
4. _The exceptional divisors_ \(E_{i}\) _of_ \(Y\to X\)_._
5. _The toric exceptional valuations_ \(\nu_{i}=\operatorname{val}(v_{i})\) _on_ \(X\) _associated with_ \(E_{i}\) _on_ \(Y\)_._
6. _The vertices of the dual complexes_ \(\Delta_{E}\simeq\Delta_{D_{+}}\) _and_ \(\Delta_{D}\)_._
**Remark 2.6.12**.: The supporting faces exist if \(\operatorname{codim}(V(\mathcal{I}))\geq 2\). On the other hand, if \(\mathcal{I}\) is principal, then \(\Delta=\sigma_{0}\), and thus \(P\) admits no supporting faces.
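For instance, for the principal ideal \(\mathcal{I}=(xy)\subset\kappa[x,y]\), the Newton polytope is \(P=(1,1)+\mathbb{Q}_{\geq 0}^{2}\), the function \(F_{P}(v)=v_{1}+v_{2}\) is linear on all of \(\sigma_{0}\), so \(\Delta_{P}=\sigma_{0}\) and \(P\) has no supporting facets, in accordance with the remark above.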
**Corollary 2.6.13**.: _With the above notation and assumptions:_
1. _Any exceptional valuation_ \(\nu\) _determines the supporting hyperplane_ \(H_{\nu}\)_._
2. _With a face_ \(\sigma\) _of_ \(\Delta_{E}\) _one can associate the set_ \(\omega_{\sigma}\) _of the exceptional valuations corresponding to the vertices_ \(\operatorname{Vert}(\sigma)\)_._
3. _Any face_ \(\sigma\) _of the dual complex_ \(\Delta_{E}\) _determines the supporting face_ \(P_{\sigma}\) _of_ \(P\)_, where_ \[P_{\sigma}=\bigcap_{\nu\in\omega_{\sigma}}H_{\nu}\cap P.\]
4. \[\operatorname{inv}_{\omega_{\sigma}}^{\circ}(\mathcal{I}):=(\mathbf{x}^{\alpha}\in \mathcal{I}\mid\nu(\mathcal{I})=\nu(\mathbf{x}^{\alpha}),\ \nu\in\omega_{\sigma})=\] \[=\operatorname{inv}_{P_{\sigma}}(\mathcal{I}):=(\mathbf{x}^{\alpha} \in\mathcal{I}\mid\alpha\in P_{\sigma}).\]
\(\clubsuit\)
Here
\[\nu(\mathcal{I}):=\min\{\nu(f)\mid f\in\mathcal{I}\}.\]
One can see the above relations in the following example:
**Example 2.6.14**.: Let \(I=(x^{k},xy,y^{l})\subset\kappa[x,y]\).
The Newton polytope \(P\) of \(I\) has the vertices \(P_{1}=(k,0)\), \(P_{2}=(1,1)\), \(P_{3}=(0,l)\) (assume \(k,l\geq 2\) and \((k-1)(l-1)>1\), so that \(P_{2}\) is indeed a vertex). The supporting hyperplanes \(H_{1}\) and \(H_{2}\) are determined, respectively, by the supporting facets \(P_{12}=\operatorname{conv}(\{(k,0),(1,1)\})\) and \(P_{23}=\operatorname{conv}(\{(1,1),(0,l)\})\). They correspond to the vectors \(v_{1}=(v_{11},v_{12})\), \(v_{2}=(v_{21},v_{22})\) such that
\[kv_{11}=v_{11}+v_{12},\quad v_{21}+v_{22}=lv_{22}.\]
Thus \(v_{1}=(1,k-1)\), \(v_{2}=(l-1,1)\). The decomposition \(\Delta_{P}\) consists of three 2-dimensional cones \(\sigma_{1}=\langle e_{1},v_{2}\rangle\), \(\sigma_{2}=\langle v_{2},v_{1}\rangle\), \(\sigma_{3}=\langle v_{1},e_{2}\rangle\), and their 1-dimensional faces. These 2-dimensional cones in \(\Delta_{P}\) correspond to the vertices \((0,l),(1,1),(k,0)\) of \(P\), and the associated monomials \(y^{l},xy,x^{k}\). The vectors \(v_{1},v_{2}\in\operatorname{Vert}(\Delta_{P})\) correspond to the exceptional valuations \(\nu_{1}=\operatorname{val}(v_{1}),\nu_{2}=\operatorname{val}(v_{2})\). In particular
\[\operatorname{inv}^{\circ}_{\nu_{1}}(\mathcal{I})=\operatorname{ inv}_{P_{12}}(\mathcal{I})=(x^{k},xy),\quad\operatorname{inv}^{\circ}_{\nu_{2}}( \mathcal{I})=\operatorname{inv}_{P_{23}}(\mathcal{I})=(xy,y^{l}),\] \[\operatorname{inv}^{\circ}_{\nu_{1},\nu_{2}}(\mathcal{I})= \operatorname{inv}_{P_{2}}(\mathcal{I})=(xy).\]
Note that the vertices in \(\operatorname{Vert}(P)\), which are, in our case, defined by \(x^{k},xy,y^{l}\), label the maximal cones in \(\Delta_{P}\).
We see that the monomials in \(\operatorname{inv}^{\circ}_{\omega_{\sigma}}(\mathcal{I})=\operatorname{inv}_{P_{\sigma}}( \mathcal{I})\) correspond to the maximal cones in the star of the relevant face in \(\Delta_{P}\). This face is the one dual to \(P_{\sigma}\). Equivalently, it is the smallest face containing the set of the vertices of \(\Delta_{P}\) determined by \(\omega_{\sigma}\). In particular, the generators \(x^{k},xy\) occurring in \(\operatorname{inv}^{\circ}_{\nu_{1}}(\mathcal{I})=\operatorname{inv}_ {P_{12}}(\mathcal{I})=(x^{k},xy)\) correspond to the maximal cones in the star of the face \(\langle v_{1}\rangle\in\Delta_{P}\).
The full cobordant blow-up of \(I=(x^{k},xy,y^{l})\) is given by
\[B=\operatorname{Spec}(\kappa[t_{1}^{-1},t_{2}^{-1},xt_{1}t_{2}^{l-1},yt_{1}^{ k-1}t_{2}]),\]
and using Lemma 1.4.4,
\[B_{+}=\operatorname{Spec}(\kappa[t_{1}^{-1},t_{2}^{-1},xt_{1}t_{2}^{l-1},yt_{1 }^{k-1}t_{2}])\smallsetminus V(t_{1}^{k}t_{2}^{l}(x^{k},xy,y^{l})).\]
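As a cross-check, the combinatorial data of Example 2.6.14 can be recomputed directly. The following minimal Python sketch (the helper names `newton_data`, `val`, `inv` are ours, not part of any library) verifies \(v_{1},v_{2}\), the values \(\nu_{i}(\mathcal{I})\), and the generators of \(\operatorname{inv}^{\circ}_{\nu_{i}}(\mathcal{I})\) for given \(k,l\geq 2\):

```python
def newton_data(k, l):
    """Recompute the data of Example 2.6.14 for I = (x^k, x*y, y^l), with k, l >= 2."""
    gens = [(k, 0), (1, 1), (0, l)]        # exponent vectors of the generators of I
    # Primitive inner normals of the two supporting facets of the Newton polytope:
    v1 = (1, k - 1)                        # facet conv{(k,0),(1,1)}
    v2 = (l - 1, 1)                        # facet conv{(1,1),(0,l)}
    dot = lambda v, a: v[0] * a[0] + v[1] * a[1]
    # Each v_i is constant on its facet: <v1,(k,0)> = <v1,(1,1)>, <v2,(1,1)> = <v2,(0,l)>
    assert dot(v1, gens[0]) == dot(v1, gens[1])
    assert dot(v2, gens[1]) == dot(v2, gens[2])
    # nu_i(I) = min over the generators; inv^o_{nu_i}(I) = generators attaining it
    def val(v):
        return min(dot(v, a) for a in gens)
    def inv(v):
        return [a for a in gens if dot(v, a) == val(v)]
    return {"v1": v1, "v2": v2,
            "nu1(I)": val(v1), "nu2(I)": val(v2),
            "inv_nu1": inv(v1), "inv_nu2": inv(v2)}

# For k = 3, l = 4: v1 = (1,2), v2 = (3,1), nu1(I) = 3 = k, nu2(I) = 4 = l,
# inv_nu1 = [(3,0), (1,1)] (i.e. x^3, xy), inv_nu2 = [(1,1), (0,4)] (i.e. xy, y^4).
print(newton_data(3, 4))
```

The exponents of \(t_{1},t_{2}\) attached to \(x\) and \(y\) in the full cobordant blow-up above are exactly the values \(\nu_{i}(x)\) and \(\nu_{i}(y)\) computed this way.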
### Geometric quotients for toric morphisms
**Lemma 2.7.1**.: _Let \(\pi:Y=X_{\Delta}\to X_{\Sigma}\) be a toric morphism, associated with the decomposition \(\Delta\) of \(\Sigma\). Assume that \(\Sigma\) is simplicial._
_Then its cobordization \(B_{+}\to Y=B_{+}/T_{B}\) is a geometric quotient iff \(\Delta\) is simplicial._
Proof.: The problem is local on \(X\), and can be reduced to the affine toric morphism \(X_{\Delta}\to X_{\sigma}\) corresponding to the subdivision \(\Delta\) of a simplicial cone \(\sigma\). Then, by Proposition 2.4.2, \(\Sigma_{B}\) is simplicial, and so is \(\Sigma_{B_{+}}\).
The natural projection \(\Sigma_{B_{+}}\to\Delta\) is defined bijectively on the vertices. Moreover, the faces of \(\Delta\) are the images of cones in \(\Sigma_{B_{+}}\). Thus \(\Sigma_{B_{+}}\to\Delta\) is bijective on faces if and only if \(\Delta\) is simplicial. On the other hand, the condition that \(\Sigma_{B_{+}}\to\Delta\) is bijective on faces is equivalent to \(B_{+}\to Y\) being a geometric quotient.
**Lemma 2.7.2**.: _Let \(\pi:Y=X_{\Delta}\to X=X_{\Sigma}\) be a proper birational toric morphism of toric varieties, with \(X\) regular. Then \(B_{+}\subset B\) contains maximal open subsets \(B^{s}\subset B_{+}\) admitting a geometric quotient \(B^{s}/T_{B}\) which is projective and birational over \(Y\)._
Proof.: The morphism \(Y\to X\) corresponds to the subdivision \(\Delta\) of \(\Sigma\). Consider a sequence of star subdivisions of \(\Delta\) centered at the vertices in \(\operatorname{Vert}(\Delta)\). This process transforms \(\Delta\) into a simplicial fan \(\Delta^{\prime}\) with \(\operatorname{Vert}(\Delta^{\prime})=\operatorname{Vert}(\Delta)\): the centers introduce no new vertices, and the vertices of each resulting face form a linearly independent set. So the valuations of the exceptional divisors corresponding to \(\operatorname{Vert}(\Delta)=\operatorname{Vert}(\Delta^{\prime})\) remain unchanged. Then, by Proposition 2.4.2 we obtain that \(B(Y/X)=B(Y^{\prime}/X)\). On the other hand, by the second
part of Proposition 2.4.2, we have the open inclusions of toric subsets:
\[B^{s}:=B(Y^{\prime}/X)_{+}\subset B(Y/X)_{+}\subset B(Y/X).\]
By the previous Lemma, \(B^{s}\to B^{s}/T_{B}=X_{\Delta^{\prime}}\) is a geometric quotient.
## 3. Cobordization of locally toric morphisms
### Locally toric morphisms of locally toric schemes
#### 3.1.1. Locally toric schemes
**Definition 3.1.2**.: A normal scheme \(X\) over a field \(\kappa\) is _locally toric_ if any point \(p\in X\) admits an open neighborhood \(U\), and a regular morphism \(\phi:U\to X_{\sigma}=\operatorname{Spec}(\kappa[P_{\sigma}])\), called _a toric chart_.
An ideal \(\mathcal{I}\) on a locally toric \(X\) is called _locally monomial_ if for any point \(p\in X\), there exists a toric chart \(U\to X_{\sigma}=\operatorname{Spec}(\kappa[P_{\sigma}])\), and a monomial ideal \(\mathcal{I}_{\sigma}\subseteq\kappa[P_{\sigma}]\), defined by a subset of \(P_{\sigma}\), such that \(\mathcal{I}_{|U}=\mathcal{I}_{\sigma}\cdot\mathcal{O}_{X|U}\).
**Remark 3.1.3**.: The primary reason we consider locally toric schemes over a field \(\kappa\), and not just over \(\mathbb{Z}\), is that the morphisms to \(\operatorname{Spec}(\mathbb{Z})\) are, in general, not flat. Thus the toric charts over \(\mathbb{Z}\) into \(\operatorname{Spec}(\mathbb{Z}[P_{\sigma}])\), which are defined by the monomials in \(P_{\sigma}=\sigma^{\vee}\cap M\), are not regular (not flat), and some proofs would require a different formalism.
#### 3.1.4. Locally monomial valuations
**Definition 3.1.5**.: Let \(X\) be a locally toric scheme. A valuation of \(\kappa(X)\) with values in \(\mathbb{Z}\) will be called _locally monomial_ if for any point \(p\) in the center \(Z(\nu)\subset X\), there exists a toric chart \(U\to X_{\sigma}\), and a vector \(v\in\sigma\cap\mathbb{N}\), such that \(\mathcal{I}_{\nu,a}=\mathcal{O}_{X}\cdot\mathcal{I}_{\operatorname{val}(v),a}\), for any \(a\in\mathbb{Z}_{\geq 0}\).
#### 3.1.6. Locally toric morphisms
**Definition 3.1.7**.: A proper birational morphism \(\pi:Y\to X\) of normal schemes over a field \(\kappa\) is called _locally toric_ if for any point \(p\in X\) there is an open neighborhood \(U\), a toric chart \(\phi:U\to X_{\sigma}\), and the fiber square:
\[\begin{array}{ccc}\pi^{-1}(U)&\stackrel{{\psi}}{{\to}}&X_{ \Delta}\\ \pi_{U}\downarrow&&\pi_{A}\downarrow\\ U&\stackrel{{\phi}}{{\to}}&X_{\sigma},\end{array}\]
where \(\pi_{U}:=\pi_{|\pi^{-1}(U)}:\pi^{-1}(U)\to U\) is the restriction of \(\pi\).
**Proposition 3.1.8**.: _Let \(\mathcal{J}\) be a locally monomial ideal on a locally toric scheme \(X\). The normalized blow-up of \(\mathcal{J}\) is a locally toric morphism. \(\clubsuit\)_
### Functoriality of cobordization of locally toric morphisms
#### 3.2.1. Local toric presentation of cobordization of locally toric morphisms
**Lemma 3.2.2**.: _Let \(\pi:Y\to X\) be a locally toric proper birational morphism. Then for any point \(p\in X\) there exists an open neighborhood \(U\) of \(p\in X\), a toric chart \(\phi_{U}:U\to X_{\sigma}\) and a fiber square_
\[\begin{array}{ccc}Y_{U}:=\pi^{-1}U&\stackrel{{\phi}}{{\to}}&X_ {\Delta}\\ \pi_{U}\downarrow&&\pi_{A}\downarrow\\ U&\stackrel{{\phi_{U}}}{{\to}}&X_{\sigma},\end{array}\]
_such that_
1. _There is a bijective correspondence between the irreducible exceptional divisors of_ \(\pi_{U}\) _and_ \(\pi_{A}\)_. That is, any irreducible exceptional divisor of_ \(\pi_{U}\) _is the inverse image of an irreducible exceptional divisor of_ \(\pi_{A}\)_._
2. _There is a bijective correspondence between the strata of the divisorial stratifications of the exceptional divisors_ \(E_{U}\) _of_ \(Y_{U}\to U\) _and_ \(E^{\Delta}\) _of_ \(X_{\Delta}\to X_{\sigma}\)_, which defines the isomorphism_ \(\operatorname{Cl}(Y_{U}/U)\to\operatorname{Cl}(X_{\Delta}/X_{\sigma})\)_. Moreover, any stratum of the stratification_ \(S_{E}\) _is the inverse image of a stratum in_ \(S_{E^{\Delta}}\)_._
3. _For any_ \(E^{\prime}_{U}=\sum n_{i}(E_{U})_{i}\) _and the corresponding_ \((E^{\Delta})^{\prime}=\sum n_{i}E^{\Delta}_{i}\) _we have_ \(\mathcal{O}_{Y_{U}}(E^{\prime}_{U})=\mathcal{O}_{Y_{U}}\cdot\mathcal{O}_ {X_{\Delta}}((E^{\Delta})^{\prime}).\)_
4. \(B(\pi_{U})=B(\pi_{A})\times_{X_{\sigma}}U,\quad B_{+}(\pi_{U})=B_{+}(\pi_{A})\times_{X_{\sigma}}U.\)
5. _Any irreducible exceptional Weil divisor_ \(E_{i}\) _of_ \(\pi\) _defines a locally monomial valuation_ \(\nu_{i}\) _with respect to any given toric chart_ \(U\to X_{\sigma}\) _associated with the morphism_ \(\pi\)_._
Proof.: (1) Since \(\phi_{U}\) is regular, the inverse images \(\phi_{U}^{-1}(s_{\tau})\) of the toric strata \(s_{\tau}\), where \(\tau\leq\sigma\), define a stratification on \(U\). Moreover, the induced morphisms on the strata \(\phi_{U}^{-1}(s_{\tau})\to s_{\tau}\) are regular.
We can assume that the given point \(p\in X\) maps to a point \(q\in X_{\sigma}\), which is in the orbit \(\mathcal{O}_{\sigma}\subset X_{\sigma}\). For any \(\tau\leq\sigma\), the closure \(s_{\tau}:=\overline{O}_{\tau}\) of the toric orbit \(\mathcal{O}_{\tau}\) on \(X_{\sigma}\) is normal. Moreover, since \(\phi_{U}\) is regular, the inverse image \(\phi_{U}^{-1}(s_{\tau})\) is normal, and thus, it is the disjoint union of the irreducible components of the codimension equal to the codimension of \(s_{\tau}\).
Consequently, by shrinking \(U\) around \(p\), if necessary, we can assume that the inverse image of the closures of the toric strata (i.e., the orbits) on \(X_{\sigma}\) are irreducible subsets of \(U\).
The inverse image of \(E^{\Delta}\) is a union of normal divisorial components. Their images under \(\pi_{U}\) are of codimension \(\geq 2\), so they are exceptional divisors of \(\pi_{U}\). Moreover, all the irreducible exceptional divisors of \(\pi_{U}\) are contained in \(\phi^{-1}(E^{\Delta})\).
The image \(\pi_{A}(E^{\Delta}_{i})\) contains the orbit \(\mathcal{O}_{\sigma}\) with \(\phi_{U}^{-1}(\mathcal{O}_{\sigma})\neq\emptyset\). Then, by the assumption, \(\phi^{-1}(E^{\Delta}_{i})\neq\emptyset\).
We need to show that each \(\phi^{-1}(E^{\Delta}_{i})\) is an irreducible divisor. The image of the exceptional divisor \(E^{\Delta}_{i}\) under \(\pi_{A}\) defines the closure of the toric orbit
\[\pi_{A}(E^{\Delta}_{i})=\overline{s_{\tau}}=\overline{O_{\tau}}\]
on \(X_{\sigma}\), for some face \(\tau\leq\sigma\). Denote by \(q\) the generic point of \(E^{\Delta}_{i}\), and by \(q^{0}\) the generic point of \(s_{\tau}\).
For any divisorial component \(E_{ij}\) in \(\phi^{-1}(E^{\Delta}_{i})\), let \(p_{j}\) be its generic point. By the assumption, \(\pi(p_{j})\in U\) determines a unique point \(p^{0}\) which is the generic point of the stratum \(s\) on \(U\), so that
\[\overline{s}=\phi_{U}^{-1}(s_{\tau})=\overline{p^{0}}.\]
By definition, the generic point \(q\) of the toric divisor \(E_{i}^{\Delta}\) on \(X_{\Delta}\) is in the fiber \(F_{q^{0}}=\pi_{A}^{-1}(q^{0})\). Thus the generic points \(p_{j}\) of the components \(E_{ij}\) of \(\phi^{-1}(E_{i}^{\Delta})\) are in the fiber
\[F_{p^{0}}=\pi^{-1}(p^{0})=\operatorname{Spec}(\kappa(p^{0}))\times_{ \operatorname{Spec}(\kappa(q^{0}))}F_{q^{0}}.\]
Let \(\Delta_{\tau}:=\Delta_{|\tau}\) be the restriction of \(\Delta\) to \(\tau\) which determines the induced decomposition of \(\tau\).
The fiber
\[F_{p^{0}}=\operatorname{Spec}(\kappa(p^{0}))\times_{\operatorname{Spec}( \kappa(q^{0}))}F_{q^{0}}=Y\times_{X}\operatorname{Spec}(\kappa(p^{0}))\]
of \(\pi:Y=X\times_{X_{\sigma}}X_{\Delta}\to X\) is isomorphic to the fiber of the induced morphism
\[X_{\Delta_{\tau}}^{\kappa(p^{0})}\to X_{\tau}^{\kappa(p^{0})}\]
over \(p^{0}=\operatorname{Spec}(\kappa(p^{0}))\). Moreover the natural morphism \(F_{p^{0}}\to F_{q^{0}}\) is induced by the fiber square
\[\begin{array}{ccc}X_{\Delta_{\tau}}^{\kappa(p^{0})}&\stackrel{{ \phi_{\Delta}}}{{\to}}&X_{\Delta_{\tau}}^{\kappa(q^{0})}\\ \pi\downarrow&&\pi_{A}\downarrow\\ X_{\tau}^{\kappa(p^{0})}&\stackrel{{\phi}}{{\to}}&X_{\tau}^{ \kappa(q^{0})}\\ \downarrow&&\downarrow\\ \operatorname{Spec}(\kappa(p^{0}))&\to&\operatorname{Spec}(\kappa(q^{0})) \end{array}.\]
The above morphism is bijective on the toric orbits and their generic points, as these correspond to the faces of \(\Delta_{\tau}\), respectively of \(\tau\). Then the inverse image of the point \(q_{j}\in F_{q^{0}}\subset X_{\Delta_{\tau}}^{\kappa(q^{0})}\) corresponds to a unique ray in \(\Delta(1)\) and a unique point \(p\) in \(F_{p^{0}}\subset X_{\Delta_{\tau}}^{\kappa(p^{0})}\).
Hence the inverse image of the toric divisor \(E_{i}^{\Delta}\) with the generic point \(q\) is the unique exceptional divisor \(E_{i}\) with the generic point \(p=p_{j}\) over \(q\).
(2) The same reasoning shows that the inverse image \(\phi_{\Delta}^{-1}(\overline{s_{j}^{\Delta}})=\overline{s}_{j}\) of the closure of a toric stratum \(\overline{s_{j}^{\Delta}}\) on \(X_{\Delta}\) determines a unique stratum \(s_{j}\) on \(Y_{U}\). We use the same relation for the fibers:
\[\pi^{-1}(p_{j}^{0})=F_{p_{j}^{0}}=\operatorname{Spec}(\kappa(p_{j}^{0}))\times _{\operatorname{Spec}(\kappa(q_{j}^{0}))}F_{q_{j}^{0}}\]
where \(p_{j}\) is the generic point of \(s_{j}\), \(q_{j}=\phi(p_{j})\), \(p_{j}^{0}=\pi(p_{j})\) and \(q_{j}^{0}=\pi_{A}(q_{j})\).
(3) We need to show first that
\[\mathcal{O}_{Y}(nE_{i})=\mathcal{O}_{Y}\cdot\mathcal{O}_{X_{\Delta}}(nE_{i}^{ \Delta}).\]
By the above, the generic point \(p\) of \(E_{i}\) is exactly the generic point of the fiber \(\phi^{-1}(q)\). The induced homomorphism of the completions of the local rings is given by
\[\widehat{\mathcal{O}_{X_{\Delta},q}}\to\widehat{\mathcal{O}_{Y,p}}=\widehat{ \mathcal{O}_{X_{\Delta},q}}\otimes_{\kappa(q)}\kappa(p)\]
Thus we get
\[m_{q}^{n}\cdot\mathcal{O}_{Y,p}=m_{p}^{n}\subset\mathcal{O}_{Y,p}.\]
Both points \(p\) and \(q\) admit regular neighborhoods, and their local rings are DVRs defining the valuations \(\nu_{i}\) of \(E_{i}\) and \(\nu_{i}^{\Delta}\) of \(E_{i}^{\Delta}\).
One verifies that \(\mathcal{I}_{\nu_{i},a,Y}=\mathcal{O}_{Y}\cdot\mathcal{I}_{\nu_{i},a}^{\Delta}\). First observe that the valuation center of \(\nu_{i}\) on \(Y\) can be described as
\[Z_{Y}(\nu_{i})=V_{Y}(\mathcal{I}_{\nu_{i},a,Y})=E_{i}=\phi^{-1}(E_{i}^{\Delta})= V(\mathcal{O}_{Y}\cdot\mathcal{I}_{\nu_{i},a}^{\Delta})\]
For any point \(p^{\prime}\in Z(\nu_{i})\), and its image \(q^{\prime}=\phi(p^{\prime})\in Z(\nu_{i}^{\Delta})\), we have
\[\widehat{\mathcal{O}_{Y,p^{\prime}}}=\widehat{\mathcal{O}_{X_{\Delta},q^{ \prime}}}\otimes_{\kappa(q^{\prime})}\kappa(p^{\prime})[[u_{1},\ldots,u_{k}]].\]
Consequently the monomial valuation \(\nu_{i}^{\Delta}\) on \(\mathcal{O}_{X_{\Delta},q^{\prime}}\) of \(E_{i}^{\Delta}\), associated with a vertex of \(\Delta\), extends to a unique monomial valuation \(\nu_{i}^{\prime}\) on \(\widehat{\mathcal{O}_{Y,p^{\prime}}}\) such that
\[\widehat{\mathcal{I}}_{\nu_{i}^{\prime},a,p^{\prime}}=\mathcal{I}_{\nu_{i},a} ^{\Delta}\cdot\widehat{\mathcal{O}_{Y,p^{\prime}}}=\mathcal{I}_{\nu_{i},a}^{ \Delta}\cdot(\widehat{\mathcal{O}_{X_{\Delta},q^{\prime}}}\otimes_{\kappa(q^{ \prime})}\kappa(p^{\prime})[[u_{1},\ldots,u_{k}]]),\]
which by flatness implies
\[\mathcal{I}_{\nu_{i}^{\prime},a,p^{\prime}}=\mathcal{I}_{\nu_{i},a}^{\Delta} \cdot\mathcal{O}_{Y,p^{\prime}}\]
Note that the generic point \(p\) of \(E_{i}\) specializes to \(p^{\prime}\), and the generic point \(q\) of \(E^{\Delta}\) specializes to \(q^{\prime}\). Passing to \(p\) and \(q\) and localizing we obtain that
\[\mathcal{I}_{\nu_{i}^{\prime},a,p}=\mathcal{I}_{\nu_{i},a,q}^{\Delta}\cdot \mathcal{O}_{Y,p}=\mathcal{I}_{\nu_{i},a,p},\]
whence both valuations are equal \(\nu_{i}=\nu_{i}^{\prime}\). Thus \(\mathcal{I}_{\nu_{i},a,p^{\prime}}=\mathcal{I}_{\nu_{i},a,q^{\prime}}^{\Delta} \cdot\mathcal{O}_{Y,p^{\prime}}\) and the vanishing locus of the ideal
\[\mathcal{I}_{\nu_{i},a,Y_{U}}=\mathcal{O}_{Y_{U}}\cdot\mathcal{I}_{\nu_{i},a, X_{\Delta}}^{\Delta}\]
is irreducible by part (1) and defines the center of the valuation \(\nu_{i}\).
Now, for any effective divisor \((E^{\Delta})^{\prime}=\sum a_{i}E_{i}^{\Delta}\), and its inverse image \((E_{U})^{\prime}=\sum a_{i}(E_{U})_{i}\), we have, by flatness,
\[\mathcal{O}_{Y_{U}}(-(E_{U})^{\prime})=\bigcap\mathcal{I}_{\nu_{i},a_{i},Y_{ U}}=\mathcal{O}_{Y_{U}}\cdot(\bigcap\mathcal{I}_{\nu_{i},a_{i},X_{\Delta}})= \mathcal{O}_{Y_{U}}\cdot\mathcal{O}_{X_{\Delta}}(-(E^{\Delta})^{\prime}).\]
In general, for any \((E^{\Delta})^{\prime}=\sum a_{i}E_{i}^{\Delta}\), we can find a nontrivial monomial \(m\in P_{\sigma}=\sigma^{\vee}\cap M\) such that, for \(n\gg 0\),
\[\mathcal{O}_{X_{\Delta}}((E^{\Delta})^{\prime})=m^{-n}\mathcal{O}_{X_{\Delta}}((E^{\Delta})^{ \prime}-n\cdot\operatorname{div}(m)),\]
where \(-((E^{\Delta})^{\prime}-n\cdot\operatorname{div}(m))\) is effective. Consequently
\[\mathcal{O}_{Y_{U}}((E_{U})^{\prime})=\mathcal{O}_{Y_{U}}\cdot m^{-n}\cdot\mathcal{O}_{X_{\Delta}}((E^{\Delta})^{ \prime}-n\cdot\operatorname{div}(m))=\mathcal{O}_{Y_{U}}\cdot\mathcal{O}_{X_{\Delta }}((E^{\Delta})^{\prime})=\mathcal{O}_{Y_{U}}\otimes_{\mathcal{O}_{X_{\Delta}}}\mathcal{ O}_{X_{\Delta}}((E^{\Delta})^{\prime}).\]
(4) and (5) Since the morphism \(U\to X_{\sigma}\) is affine, and thus so is \(Y_{U}\to X_{\Delta}\), we have
\[B_{U+}=\operatorname{Spec}_{Y_{U}}(\bigoplus_{E\in\operatorname{Cl}(Y_{U}/U)} \mathcal{O}_{Y_{U}}(E))=\operatorname{Spec}_{Y_{U}}(\mathcal{O}_{Y_{U}}\cdot( \bigoplus_{E^{\Delta}\in\operatorname{Cl}(X_{\Delta}/X_{\sigma})}\mathcal{O}_{ X_{\Delta}}(E^{\Delta})))=\]
\[=\operatorname{Spec}_{X_{\Delta}}(\mathcal{O}_{Y_{U}}\otimes_{\mathcal{O}_{X_{ \Delta}}}(\bigoplus_{E^{\Delta}\in\operatorname{Cl}(X_{\Delta}/X_{\sigma})} \mathcal{O}_{X_{\Delta}}(E^{\Delta})))=Y_{U}\times_{X_{\Delta}}B_{+}(\pi_{A})=\]
\[=(U\times_{X_{\sigma}}X_{\Delta})\times_{X_{\Delta}}B_{+}(\pi_{A})=U\times_{X_ {\sigma}}B_{+}(\pi_{A}).\]
By definition, and since all the schemes are normal
\[\pi_{A*}(\mathcal{I}_{\nu_{i}^{\Delta},a,X_{\Delta}})=\pi_{A*}(\mathcal{O}_{X_ {\Delta}}(-aE_{i}^{\Delta}))=\mathcal{I}_{\nu_{i}^{\sigma},a,X_{\sigma}}\subset \pi_{A*}(\mathcal{O}_{X_{\Delta}})=\mathcal{O}_{X_{\sigma}}\]
are the toric ideals generated by monomials associated with the toric valuation \(\nu_{i}^{\sigma}\).
Similarly
\[\pi_{*}(\mathcal{I}_{\nu_{i},a,Y})=\pi_{*}(\mathcal{O}_{Y}(-aE_{i}))=\mathcal{I}_{\nu_ {i},a,X}.\]
By the above, and since \(\psi\) is flat, we have
\[\mathcal{I}_{\nu_{i},a,U}=\pi_{*}(\mathcal{O}_{Y_{U}}(-aE_{i}))=\pi _{*}(\mathcal{O}_{Y_{U}}\cdot\mathcal{O}_{X_{\Delta}}(-aE_{i}^{\Delta}))=\] \[\pi_{*}(\mathcal{O}_{Y_{U}}\otimes\mathcal{O}_{X_{\Delta}}(-aE_{i} ^{\Delta}))=\mathcal{O}_{U}\otimes\pi_{A*}(\mathcal{O}_{X_{\Delta}}(-aE_{i}^{ \Delta}))=\mathcal{O}_{U}\cdot\mathcal{I}_{\nu_{i},a,X_{\sigma}},\]
so each \(\nu_{i}\) is a locally monomial valuation.
Thus for effective \(E=\sum a_{i}E_{i}\), we have
\[\pi_{*}(\mathcal{O}_{Y_{U}}(-E))=\bigcap\mathcal{I}_{\nu_{i},a_{i},U}=\bigcap \mathcal{O}_{U}\cdot\mathcal{I}_{\nu_{i},a_{i},X_{\sigma}}=\mathcal{O}_{U}\cdot \bigcap\mathcal{I}_{\nu_{i},a_{i},X_{\sigma}}=\mathcal{O}_{U}\cdot\pi_{A*}( \mathcal{O}_{X_{\Delta}}(-E^{\Delta})),\]
and the same identity extends to arbitrary \(E\) as in part (3).
Hence, by the above,
\[B_{U}=\operatorname{Spec}_{U}(\bigoplus_{E\in\operatorname{Cl}(Y_{U}/U)}\pi_ {*}(\mathcal{O}_{Y_{U}}(E)))=\operatorname{Spec}_{U}(\mathcal{O}_{U}\cdot( \bigoplus_{E^{\Delta}\in\operatorname{Cl}(X_{\Delta}/X_{\sigma})}\pi_{A*}( \mathcal{O}_{X_{\Delta}}(E^{\Delta}))))=U\times_{X_{\sigma}}B(\pi_{A}).\]
#### 3.2.3. Local description of the exceptional divisor
As a corollary of Lemma 3.2.2 we obtain:
**Lemma 3.2.4**.: _Let \(\pi:Y\to X\) be a locally toric morphism of locally toric schemes. Let \(\pi_{B}:B\to X\) be its full cobordization._
_For any point \(p\in X\), there is a toric chart \(\phi_{U}:U\to X_{\sigma}\) such that, for the induced morphism \(B_{U}=\pi_{B}^{-1}(U)\to B(X_{\Delta}/X_{\sigma})\), there is a bijective correspondence between the strata \(s=\phi^{-1}(s_{\tau})\) of the divisorial stratifications of the exceptional divisor \(D_{B_{U}}\) on \(B_{U}\) (respectively \(D_{B_{U+}}\) on \(B_{U+}\)) and the strata \(s_{\tau}\) of the exceptional divisor \(D_{B(X_{\Delta}/X_{\sigma})}\) on \(B(X_{\Delta}/X_{\sigma})\) (respectively \(D_{B(X_{\Delta}/X_{\sigma})_{+}}\) on \(B(X_{\Delta}/X_{\sigma})_{+}\))._
Proof.: The reasoning is the same as in the proof of Lemma 3.2.2(2); in particular, we can assume that the inverse image \(\phi_{U}^{-1}(s_{i})\subset U\) consists of a single stratum.
By Lemmas 3.2.2 and 3.3.2, we have the following fiber square diagram for the cobordizations, with horizontal morphisms being regular:
\[\begin{array}{ccc}B_{U}&\stackrel{\phi}{\to}&B(X_{\Delta}/X_{\sigma})\\ \pi_{U}\downarrow&&\pi_{B}\downarrow\\ U&\stackrel{\phi_{U}}{\to}&X_{\sigma}=\mathbb{A}^{n},\end{array}\]
and the analogous fiber square for \(B_{U+}\). Consequently, the inverse image of the exceptional divisor \(D_{B(X_{\Delta}/X_{\sigma})}\) is the exceptional divisor \(D_{B_{U}}\). Its components are of the form \(V_{B(X_{\Delta}/X_{\sigma})}(t_{i}^{-1})\) and are associated with the components \(E_{i}^{\Delta}\). Their inverse images are the irreducible components \(V_{B_{U}}(t_{i}^{-1})\) corresponding to the exceptional components \(E_{i}=\phi^{-1}(E_{i}^{\Delta})\).
Since \(\phi_{U}\) is regular, the inverse image \(\phi^{-1}(\overline{s})\) of the closure \(\overline{s}\) of any stratum \(s\) of \(D_{B(X_{\Delta}/X_{\sigma})}\) is normal. Thus it is the disjoint union of the irreducible components.
To prove that \(\phi^{-1}(\overline{s})\) is irreducible on \(B_{U}\), we need to show that there is a single generic point \(p\) in the fiber \(\phi^{-1}(q)\) over the generic point \(q\) of \(s\), and that its closure \(\overline{\{p\}}\) is of the same codimension in \(B_{U}\) as \(\overline{s}\) in \(B(X_{\Delta}/X_{\sigma})\). This can be reduced to the problem of the morphism of the fibers
\[\pi^{-1}(p^{0})=F_{p^{0}}\to\pi^{-1}(q^{0})=F_{q^{0}},\]
where \(p^{0}=\pi_{U}(p)\) and \(q^{0}=\pi_{B}(q)\) are the generic points of the relevant strata.
But this follows from the relation for the fibers of toric morphisms, as in the proof of Lemma 3.2.2(2),
\[\pi^{-1}(p^{0}_{i})=F_{p^{0}_{i}}=\operatorname{Spec}(\kappa(p^{0}_{i}))\times _{\operatorname{Spec}(\kappa(q^{0}_{i}))}F_{q^{0}_{i}}.\]
### Description of cobordism of locally toric morphisms
#### 3.3.1. Local functoriality of relative Cox spaces for smooth morphisms
**Proposition 3.3.2**.: _Let \(\pi:Y\to X\) be a proper birational locally toric morphism of locally toric varieties over a field \(\kappa\). Let \(\phi:X^{\prime}\to X\) be a regular morphism over \(\kappa\), and let \(\pi^{\prime}:Y^{\prime}\to X^{\prime}\) be the base change. Then for any \(p^{\prime}\in X^{\prime}\) there are open neighborhoods \(U^{\prime}\) of \(p^{\prime}\) and \(U\) of \(p:=\phi(p^{\prime})\), with the induced smooth morphism \(\phi_{|U^{\prime}}:U^{\prime}\to U\), such that_
\[B(Y_{U}/U)\times_{X}X^{\prime}\simeq B(Y^{\prime}_{U^{\prime}}/U^{\prime}) \quad B(Y_{U}/U)_{+}\times_{X}X^{\prime}\simeq B(Y^{\prime}_{U^{\prime}}/U^{ \prime})_{+}\]
_Thus the full cobordization and cobordization of proper birational locally toric morphisms are functorial for regular morphisms up to torus factors._
Proof.: This is a direct consequence of Lemma 3.2.2 and the definition of locally toric morphisms.
#### 3.3.3. Local description of cobordization of locally toric morphisms
**Lemma 3.3.4**.: _Let \(\pi:Y\to X\) be a proper birational locally toric morphism over a field \(\kappa\), and \(\pi_{B}:B\to X\) be its full cobordization. Then_
1. \(B_{+}\subset B\) _is the natural open immersion._
2. _For any point_ \(p\in X\) _there is an open neighborhood_ \(U\) _of_ \(p\)_, with a toric chart_ \(U\to X_{\sigma}\)_, and the torus_ \[T_{B\smallsetminus B_{U}}:=\operatorname{Spec}(\,\kappa[x_{i},x_{i}^{-1}\mid E_{i}\subset B\smallsetminus B_{U}\,\,]\,),\] _and an induced regular morphism_ \[B_{U}=\pi_{B}^{-1}(U)=B(Y_{U}/U)\times T_{B\smallsetminus B_{U}}\ \to\ X_{\sigma}\times\mathbb{A}^{k}\times T_{B\smallsetminus B_{U}}\]
3. _If_ \(X\) _is regular then_ \(B\) _is regular._
Proof.: By Lemma 3.2.2, the problem reduces locally to a toric situation via a toric chart \(U\to X_{\sigma}\).
(1) By Lemma 2.3.5, \(B(X_{\Delta}/X_{\sigma})_{+}\hookrightarrow B(X_{\Delta}/X_{\sigma})\) is an open inclusion. Thus, by Lemma 3.2.2, \(B_{+}\subset B\) is also such.
(2) Locally, by Lemma 1.6.1, we can write \(B_{U}=B(Y_{U}/U)\times T_{B\smallsetminus B_{U}}\). On the other hand, by Lemma 3.2.2(4), \(B(Y_{U}/U)\to B(X_{\Delta}/X_{\sigma})\) is regular. Finally, by Lemma 2.3.5(2), \(B(X_{\Delta}/X_{\sigma})=X_{\sigma}\times\mathbb{A}^{k}\).
(3) Follows from (2).
#### 3.4.1. Cobordization of locally monomial maps
**Definition 3.4.2**.: Let \(X\) be a locally toric scheme over a field \(\kappa\). We say that \(u_{1},\ldots,u_{k}\) is a locally toric system of parameters on \(X\) if there is a chart \(\phi:U\to X_{\sigma}\) and a local system of toric parameters \(x_{1},\ldots,x_{k}\) on \(X_{\sigma}\) such that \(u_{i}=\phi^{*}(x_{i})\).
As a corollary of Proposition 3.3.2 and Lemma 3.3.4 we obtain the following:
**Theorem 3.4.3**.: _Let \(\pi:Y\to X\) be a proper birational locally toric morphism of locally toric schemes over a field \(\kappa\). Then locally on \(X\) we can write up to torus factors_
\[\mathcal{A}_{Y/X}=\pi_{*}(\mathcal{C}_{Y/X})=\mathcal{O}_{X}[t_{1}^{-1},\ldots,t_{k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{k}\mathbf{t}^{\alpha_{k}}],\]
_where_
1. \(u_{1},\ldots,u_{k}\) _is a locally toric system of parameters on an open_ \(U\subset X\) _defining a toric chart for the morphism_ \(\pi\)_,_
2. \(\mathbf{t}^{\alpha_{i}}:=t_{1}^{a_{1i}}\cdot\ldots\cdot t_{k}^{a_{ki}}\)_, with_ \(a_{ji}:=\nu_{j}(u_{i})\geq 0\)_._
_In particular, if \(X\) is regular then \(B\) and \(B_{+}\subset B\) are regular. \(\clubsuit\)_
Proof.: We use the fact from Lemma 3.2.2 that the valuations are locally monomial with respect to \(u_{1},\ldots,u_{k}\), and Lemma 2.3.5(1).
#### 3.4.4. The cobordization of monomial morphisms
Let \(Y\to X\) be a proper birational locally toric morphism over \(\kappa\).
Let \(x_{1},\ldots,x_{n}\) be a system of local parameters at a point \(p\) on a locally toric \(X\) defining a toric chart for \(Y\to X\). Then the full cobordization of \(\pi:Y\to X\) can be represented as:
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t_{1}^{-1},\ldots,t_{k}^{-1},x_{1}^{\prime},\ldots,x_{n}^{\prime}]/(x_{1}^{\prime}\mathbf{t}^{-\alpha_{1}}-x_{1},\ldots,x_{k}^{\prime}\mathbf{t}^{-\alpha_{k}}-x_{k})).\]
Thus \(B=V(x_{1}^{\prime}\mathbf{t}^{-\alpha_{1}}-x_{1},\ldots,x_{k}^{\prime} \mathbf{t}^{-\alpha_{k}}-x_{k})\subset X\times\mathbb{A}^{n+k}\) is locally a closed subscheme of \(X\times\mathbb{A}^{n+k}\) defined by a system of local parameters. It is regular for a regular \(X\). Consequently, the full cobordization \(B\to X\) can be described by a single chart up to a torus factor with the following coordinates:
* \(t_{i}^{-1}\) for \(i=1,\ldots,k\) is the inverse of the coordinate \(t_{i}\) representing the action of the torus \(T=\operatorname{Spec}(\kappa[\operatorname{Cl}(Y/X)])=\operatorname{Spec}(\kappa[t_{1},t_{1}^{-1},\ldots,t_{k},t_{k}^{-1}])\).
* \(x_{i}^{\prime}=x_{i}\cdot\mathbf{t}^{\alpha_{i}}\) for \(1\leq i\leq k\), and
* \(x_{j}^{\prime}=x_{j}\) for \(j>k\).
The open subsets \(B_{x_{i}^{\prime}}=B\smallsetminus V(x_{i}^{\prime})\), associated with the forms \(x_{i}^{\prime}=x_{i}\mathbf{t}^{\alpha_{i}}\) cover the cobordization \(B_{+}=B\smallsetminus V(\mathcal{I}_{\operatorname{irr}})\) producing several "charts" similarly to the standard blow-up. These open affine subsets can be conveniently described by using toric geometry. They correspond to the maximal faces of the decomposition \(\Delta\) of the cone \(\sigma\) associated with the local toric chart.
If \(\pi:Y\to X\) is the cobordant blow-up of a locally monomial \(\mathcal{J}\), where \(\operatorname{codim}(V(\mathcal{J}))\geq 2\), then the subset \(B_{+}\), by Lemma 1.4.4, can be described as \(B_{+}=B\smallsetminus V(\mathcal{J}\mathbf{t}^{\alpha})\), where \(\alpha=(a_{1},\ldots,a_{k})\), and \(a_{i}\) are the coefficients of the exceptional divisor \(E=\sum a_{i}E_{i}\) of \(\pi:Y\to X\). In this case, the charts of \(B_{+}\) can also be interpreted via the vertices of the Newton polytope of \(\mathcal{J}\) (see Example 2.6.14).
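For illustration, consider the simplest instance of this description: \(X=\mathbb{A}^{2}=\operatorname{Spec}(\kappa[x_{1},x_{2}])\) and the standard blow-up of \(\mathcal{J}=(x_{1},x_{2})\), with the single exceptional valuation \(\nu\) satisfying \(\nu(x_{1})=\nu(x_{2})=1\). Then
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},tx_{1},tx_{2}])=\operatorname{Spec}(\kappa[t^{-1},x_{1}^{\prime},x_{2}^{\prime}])\simeq\mathbb{A}^{3},\qquad x_{i}^{\prime}=tx_{i},\]
and \(\mathcal{J}\mathbf{t}^{\alpha}=t\cdot\mathcal{J}=(x_{1}^{\prime},x_{2}^{\prime})\), so \(B_{+}=B\smallsetminus V(x_{1}^{\prime},x_{2}^{\prime})\) is covered by the two charts \(B_{x_{1}^{\prime}}\) and \(B_{x_{2}^{\prime}}\), corresponding to the two vertices \((1,0)\) and \((0,1)\) of the Newton polytope of \(\mathcal{J}\). One can check directly that the good quotient \(B_{+}\mathbin{/\!\!/}T\) recovers the usual blow-up of the origin with its two standard charts.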
**Remark 3.4.5**.: In the particular case of the stack-theoretic quotients of the blow-up of a locally monomial ideal on a regular scheme, one obtains the definition of a _multiple weighted blow-up_ \(\operatorname{Bl}_{\mathcal{J}}=[B_{+}\mathbin{/\!\!/}T]\) introduced in [1] by Abramovich-Quek via the Satriano construction in [11]. The more general definition of \(\operatorname{Bl}_{\mathcal{J},b}\) is discussed in Section 5.5.
#### 3.4.6. Weighted cobordant blow-ups
Recall that the weighted stack-theoretic blow-ups were considered in the context of resolution in [10] and [1]. The definition of the _weighted cobordant blow-up_ was introduced in [25]. One can view these notions from the more general perspective of Cox cobordant blow-ups or the multiple weighted blow-ups of Abramovich-Quek from [1].
**Definition 3.4.7**.: Let \((x_{1},\ldots,x_{k})\) be a partial system of local parameters on a regular scheme \(X\). Let \(\mathcal{J}\) be a center of the form \((x_{1}^{a_{1}},\ldots,x_{k}^{a_{k}})\), where \(a_{1}\leq a_{2}\leq\ldots\leq a_{k}\) are positive integers, and \(k>1\). Let \(\pi:Y\to X\) be the normalized blow-up of \(\mathcal{J}\). By the _weighted cobordant blow-up_ of \(\mathcal{J}\) we mean the cobordization \(B_{\mathcal{J}+}\to X\) of \(\pi:Y\to X\).
The corresponding monomial ideal on the toric chart \(\mathbb{A}_{\kappa}^{k}\) defines a piecewise linear function \(G:=\min_{i}(a_{i}e_{i}^{*})\) on the regular coordinate cone \(\sigma=\langle e_{1},\ldots,e_{k}\rangle\), where \(e_{i}^{*}(e_{j})=\delta_{ij}\). The functions \(a_{i}e_{i}^{*}\) determine the ray
\[\rho:=\{v\in\sigma\mid a_{1}e_{1}^{*}(v)=\ldots=a_{k}e_{k}^{*}(v)\}.\]
The ray \(\rho\) is generated by the primitive vector
\[w=(w_{1},\ldots,w_{k})=w_{1}e_{1}+\ldots+w_{k}e_{k},\]
with relatively prime components and such that
\[w_{1}a_{1}=\ldots=w_{k}a_{k}.\]
The normalized blow-up of \(\mathcal{J}\) is described by the decomposition \(\Delta\) of \(\sigma\) into maximal subcones where \(G\) is linear.
Thus \(\Delta\) is the star subdivision \(\rho\cdot\langle e_{1},\ldots,e_{k}\rangle\) at a ray \(\rho\). The vector \(w\) determines the valuation \(\nu_{E}\) of the unique irreducible exceptional divisor.
Then, by Lemma 2.3.5(1), the full cobordant blow-up of \(X\) at the center \(\mathcal{J}\) is defined by
\[B_{\mathcal{J}}=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},t^{w_{1}}x_{1},\ldots,t^{w_{k}}x_{k}]).\]
Here we have
\[w_{i}=\nu_{E}(x_{i})=(w,e_{i}^{*})\]
The cobordant weighted blow-up is simply \((B_{\mathcal{J}})_{+}=B\smallsetminus V(\sigma^{\circ}(\mathcal{J}))\), where, by Lemma 1.4.4, we have \(\sigma^{\circ}(\mathcal{J})=t^{a}\cdot\mathcal{J}\) with \(a=\nu_{E}(\mathcal{J})\). Thus
\[\sigma^{\circ}(\mathcal{J})=(x_{1}^{a_{1}}t^{a_{1}w_{1}},\ldots,x_{k}^{a_{k}}t ^{a_{k}w_{k}}).\]
We see that the cobordant weighted blow-up is the cobordization of an ordinary toric weighted blow-up corresponding to the star subdivision at the center \(w\in\sigma\). We will discuss this construction in the context of the blow-ups of valuative \(\mathbb{Q}\)-ideals in Section 5.4.
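As a concrete illustration of Definition 3.4.7 (in the notation above), take \(X=\mathbb{A}^{2}\) and \(\mathcal{J}=(x_{1}^{2},x_{2}^{3})\), so \(a=(2,3)\). The primitive vector with \(w_{1}a_{1}=w_{2}a_{2}\) is \(w=(3,2)\), and
\[B_{\mathcal{J}}=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},t^{3}x_{1},t^{2}x_{2}]),\qquad\sigma^{\circ}(\mathcal{J})=t^{6}\cdot\mathcal{J}=((t^{3}x_{1})^{2},(t^{2}x_{2})^{3}),\]
since \(\nu_{E}(\mathcal{J})=\min(a_{1}w_{1},a_{2}w_{2})=6\). Hence \(B_{\mathcal{J}+}=B_{\mathcal{J}}\smallsetminus V(t^{3}x_{1},t^{2}x_{2})\).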
Observe that the two notions, the one in Definition 3.4.7 and the one given by the formula \(B_{\mathcal{J}}=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},t^{w_{1}}x_{1},\ldots,t^{w_{k}}x_{k}])\) as in [25], differ in the trivial case \(k=1\) of the blow-up of \((x_{1}^{a_{1}})\). Then \(Y\to X\) is an isomorphism, and \(B=B_{+}=B_{-}\simeq Y\simeq X\). However, the formula from [25] gives us
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},t^{w_{1}}x_{1}]),\]
which defines the isomorphism of the quotients:
\[B/G_{m}\simeq B_{+}/G_{m}\simeq B_{-}/G_{m}\simeq Y\simeq X.\]
In this case, \(B_{+}\) is a locally trivial \(G_{m}\)-bundle. So the two constructions of \(B_{+}\) differ locally by a torus factor.
### Geometric quotients for locally toric morphisms
In general, when considering the cobordization \(B_{+}\) of a locally toric morphism \(\pi:Y\to X\), one obtains the good quotient \(B_{+}\mathbin{/\!\!/}T\simeq Y\). Proposition 3.5.2 below shows that if \(X\) is regular then, by replacing \(B_{+}\) with an open subset \(B^{s}\subseteq B_{+}\), one obtains the geometric quotient \(B^{s}/T\) with a proper birational morphism \(B^{s}/T\to B_{+}\mathbin{/\!\!/}T\simeq Y\). Consequently, \(B^{s}\) has a geometric quotient \(B^{s}/T\) with abelian quotient singularities, and the transformation \(B^{s}\to X\) can be used in the resolution instead of \(B_{+}\to X\).
**Lemma 3.5.1**.: _Let \(\pi:Y\to X\) be a locally toric morphism, with \(X\) regular._
_Then its cobordization \(B_{+}\) determines the geometric quotient \(B_{+}\to B_{+}/T\simeq Y\) iff \(Y\) has abelian quotient singularities._
Proof.: The problem is local and can be reduced to the toric morphism \(\pi:X_{\Delta}\to X_{\sigma}\) corresponding to the subdivision \(\Delta\) of a regular cone \(\sigma\). Then, by Lemma 2.3.5, the full cobordization \(B\) of \(\pi\) is a regular scheme corresponding to the cone \(\Sigma_{B}\), and \(B_{+}\) is its open toric subscheme. The natural projection \(\Sigma_{B_{+}}\to\Delta\) corresponds to the geometric quotient iff \(\Delta\) is a simplicial fan, and thus iff \(Y\) has abelian quotient singularities.
**Proposition 3.5.2**.: _Let \(\pi:Y\to X\) be a proper birational locally toric morphism of locally toric schemes over a field, with \(X\) regular. Then \(B_{+}=\operatorname{Cox}(Y/X)_{+}\subset B=\operatorname{Cox}(Y/X)\) contains open maximal subsets \(B^{s}\subset B_{+}\) admitting geometric quotient \(B^{s}/T_{B}\) with the projective birational morphism_
\[B^{s}/T_{B}\to B_{+}\mathbin{/\!\!/}T_{B}=Y.\]
Proof.: Let \(E_{1},\dots,E_{k}\) be the irreducible exceptional divisors of \(\pi:Y\to X\), and \(\nu_{1},\dots,\nu_{k}\) be the associated exceptional valuations on \(X\). By Lemma 3.2.2(5), the valuations are locally toric on \(X\). Consider the sequence of the blow-ups at the valuations \(\nu_{i}\) as in [20, Proposition 8.16.6]. These are precisely the normalized blow-ups of \(\mathcal{I}_{\nu_{i},a,X}\) for a sufficiently divisible \(a\).
Locally, in the compatible toric charts \(U\to X_{\sigma}\), the sequence of the blow-ups corresponds to a sequence of the star subdivisions at the vertices \(\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\sigma)\) (see [20, Lemma 7.3.9]). As a result we create a new subdivision \(\Delta^{\prime}\) of \(\Delta\) with \(\operatorname{Vert}(\Delta^{\prime})=\operatorname{Vert}(\Delta)\). This decomposition is simplicial. Indeed, let \(\delta_{0}\) be any cone in \(\Delta^{\prime}\). By the property of the star subdivisions, for any vertex \(v_{0}\in\operatorname{Vert}(\delta_{0})\smallsetminus\operatorname{Vert}(\sigma)\) one can write \(\delta_{0}=\langle v_{0}\rangle+\delta_{1}\), where \(\delta_{1}\) is a face of \(\delta_{0}\) of codimension one in \(\delta_{0}\), and \(v_{0}\) is linearly independent of \(\operatorname{Vert}(\delta_{1})\). We can run this argument inductively until we represent \(\delta_{0}\) as \(\delta_{0}=\langle v_{0},\dots,v_{r}\rangle+\delta_{r}\), where \(v_{0},\dots,v_{r}\in\operatorname{Vert}(\delta_{0})\) are linearly independent of \(\operatorname{Vert}(\delta_{r})\subset\operatorname{Vert}(\sigma)\). Thus all the vertices of \(\delta_{0}\) are linearly independent.
By construction, the valuations of the exceptional divisors corresponding to \(\operatorname{Vert}(\Delta)\smallsetminus\operatorname{Vert}(\sigma)\) remain unchanged. Then \(B(Y/X)=B(Y^{\prime}/X)\). On the other
hand, by the description of the toric case from Lemma 2.3.5(4), we obtain the open inclusions
\[B^{s}:=B(Y^{\prime}/X)_{+}\subseteq B(Y/X)_{+}=B_{+}\subseteq B(Y/X)=B.\]
## 4. Cobordant resolution of singularities
### The dual complex of the exceptional divisor
One can extend the considerations and the results from Section 2.5.1.
#### 4.1.1. The exceptional divisor
Let \(\pi:Y\to X\) be a proper birational locally toric morphism, where \(X\) is a regular scheme over a field \(\kappa\), and let \(E_{1},\ldots,E_{k}\) be the irreducible components of the exceptional divisor \(E\) of \(\pi\). Let \(\pi_{B}:B\to X\) be the full cobordization of \(\pi\). By Theorem 3.4.3, \(B\) is regular and there is an SNC divisor \(D=V_{B}(t_{1}^{-1}\cdot\ldots\cdot t_{k}^{-1})\) with irreducible components \(D_{i}=V_{B}(t_{i}^{-1})\). So is the divisor \(D_{+}=D_{|B_{+}}\) on \(B_{+}\). Moreover, the exceptional divisor \(E\) of \(\pi:Y\to X\) is locally toric.
We can associate with the SNC divisors \(D\) on \(B\), \(D_{+}\) on \(B_{+}\), and with the divisor \(E\) on \(Y\) the divisorial stratifications \(S_{D}\), \(S_{D_{+}}\), and \(S_{E}\), extending the definitions from Section 2.5.1. The strata of \(S_{D}\) are defined by the irreducible components of the locally closed sets:
\[\bigcap_{i\in I}D_{i}\smallsetminus\bigcup_{j\in J}D_{j},\]
where \(I\cup J=\{1,\ldots,k\}\). Replacing \(D_{i}\) with \(D_{i+}\) we obtain the definition for \(S_{D+}\). Consequently, any stratum \(s\in S_{D_{+}}\) extends to a stratum in \(S_{D}\).
The closures of the strata \(s^{E}\in S_{E}\) are defined by the irreducible components of the intersections \(\bigcap_{i\in I}E_{i}\). The strata \(s^{E}\) are obtained by removing from \(\overline{s}^{E}\) the proper closed subsets \(\overline{s^{\prime}}^{E}\). The stratifications \(S_{D}\), \(S_{D_{+}}\) and \(S_{E}\) determine the dual simplicial complexes \(\Delta_{D}\), \(\Delta_{D_{+}}\), and \(\Delta_{E}\). Since \(D\) and \(D_{+}\) are SNC, the simplices in \(\Delta_{D}\) (respectively \(\Delta_{D_{+}}\)) are in bijective correspondence with the strata of \(S_{D}\) (respectively of \(S_{D_{+}}\)). Moreover, by the above, \(\Delta_{D_{+}}\) is a subcomplex of \(\Delta_{D}\) corresponding to the strata of \(S_{D}\) which intersect \(B_{+}\subset B\). Also, under this identification \(\operatorname{Vert}(\Delta_{D})=\operatorname{Vert}(\Delta_{D+})\).
The divisor \(E\) is usually not SNC, and the strata alone do not determine the faces of \(\Delta_{E}\). The vertices \(v_{i}\) of \(\Delta_{E}\) correspond to the divisors \(E_{i}\leftrightarrow v_{i}\). The simplices \(\sigma=\Delta(e_{i}\mid i\in I)\) in \(\Delta_{E}\) correspond to the pairs \((s_{\sigma}^{E},E_{\sigma})\) consisting of a stratum \(s_{\sigma}\in S_{E}\) and a collection of divisors \(E_{\sigma}=\{E_{i}\mid i\in I\}\), such that \(\overline{s_{\sigma}}^{E}\) is an irreducible component of \(\bigcap_{i\in I}E_{i}\). Thus, in this case, the correspondence between the faces of \(\Delta_{E}\) and the strata of \(S_{E}\) is not bijective, and the closures of strata could be represented by the intersections of components \(\bigcap_{i\in I}E_{i}\) defined by different subsets \(I\). (See also Section 2.5.1.)
Summarizing we have
**Lemma 4.1.2**.: _A simplex \(\sigma\) in \(\Delta_{D}\) (respectively \(\Delta_{D_{+}}\), \(\Delta_{E}\)) is represented by a pair_
\[(\{D_{i}\mid i\in I\},\quad(\bigcap D_{i})_{0})\]
_consisting of a collection of the irreducible divisors \(D_{i}\), (respectively \(D_{i+}\), \(E_{i}\)) which have a nonempty intersection and an irreducible component \((\bigcap D_{i})_{0}\) of \(\bigcap D_{i}\) (respectively \(\bigcap D_{i+}\), \(\bigcap E_{i}\))._
**Corollary 4.1.3**.: _With the previous assumptions and notations:_
1. _There is a bijective correspondence between the divisors_ \(D_{i}\)_,_ \(D_{i+}\)_,_ \(E_{i}\)_, and the valuations_ \(\nu_{i}\)_._
2. 1. _If_ \(s\in S_{D}\) _then_ \(s\) _is a component of a locally closed set_ \[\bigcap_{D_{i}\supseteq s}D_{i}\smallsetminus\bigcup_{D_{i}\not\supseteq s}D_{i}\] 2. _The image_ \(\pi_{B}(s)\) _is closed. It is an irreducible component of the closed set_ \[\pi_{B}(\bigcap_{D_{i}\supseteq s}D_{i}\smallsetminus\bigcup_{D_{i}\not\supseteq s}D_{i})=\] \[\bigcap_{D_{i}\supseteq s}Z_{X}(\nu_{i})=\bigcap_{D_{i}\supseteq s}\pi_{B}(D_{i})=\bigcap_{D_{i}\supseteq s}\pi(E_{i})\] _where_ \(Z_{X}(\nu)\) _denotes the center of a valuation_ \(\nu\) _on_ \(X\)_. Moreover, the sets_ \(\bigcap_{D_{i}\supseteq s}D_{i}\) _are locally irreducible over_ \(X\)_._ 3. _The morphism_ \(\pi_{B}\) _determines a bijective correspondence between the strata defined by the irreducible components of_ \(\bigcap_{D_{i}\supseteq s}D_{i}\smallsetminus\bigcup_{D_{i}\not\supseteq s}D_{i}\) _and the irreducible components of_ \(\bigcap_{D_{i}\supseteq s}Z_{X}(\nu_{i})\)_._
3. _The morphism_ \(\pi_{B_{+},Y}\) _determines a bijective correspondence between the components of_ \(\bigcap_{i\in I}D_{i+}\) _and the components of_ \(\bigcap_{i\in I}E_{i}\)_. This correspondence defines the isomorphism of the dual complexes_ \(\Delta_{D_{+}}\simeq\Delta_{E}\)_._
4. _The morphism of the stratifications_ \(S_{D_{+}}\to S_{D}\) _maps a stratum_ \(s_{+}\) _of_ \(S_{D_{+}}\) _into an open subset of a stratum_ \(s\) _of_ \(S_{D}\)_. It determines the inclusion of the dual complexes_ \(\Delta_{B_{+}}\hookrightarrow\Delta_{B}\)_._
Proof.: (1) The correspondence follows from Lemmas 1.2.9 and 1.2.11.
(2)-(5) By Lemmas 3.2.2 and 3.2.4, we can reduce the situation locally to the toric case, where we use Lemmas 2.5.3 and 2.5.6, and Corollaries 2.5.5, 2.5.7 and 2.5.8.
#### 4.1.4. Dual complex of valuations of a locally toric morphism
Let \(N=\{\nu_{1},\ldots,\nu_{k}\}\) be the set of the exceptional valuations of \(\pi:Y\to X\). The vertices in \(\operatorname{Vert}(\Delta_{E})\), and thus in \(\operatorname{Vert}(\Delta_{B})\) and \(\operatorname{Vert}(\Delta_{B+})\), are in bijective correspondence with the valuations in \(N\) and the exceptional divisors \(E_{i}\), \(D_{i+}\), and \(D_{i}\):
\[\nu_{i}\leftrightarrow E_{i}=Z_{Y}(\nu_{i})\leftrightarrow D_{i+} \leftrightarrow D_{i}.\]
Consequently, one can associate with the faces of \(\Delta_{E}\), \(\Delta_{B+}\), and \(\Delta_{B}\) the subsets of \(N\). This determines the complexes \(\Delta_{B}^{N},\Delta_{B+}^{N},\Delta_{E}^{N}\), called the _dual valuation complexes_, together with natural isomorphisms of the simplicial complexes
\[\Delta_{B}\to\Delta_{B}^{N},\quad\Delta_{B_{+}}\to\Delta_{B+}^{N},\quad\Delta_{E}\to\Delta_{E}^{N}.\]
Then, by Lemma 4.1.3, \(\Delta_{E}^{N}=\Delta_{B_{+}}^{N}\) determines the same subcomplex of \(\Delta_{B}^{N}\).
The simplices of the valuation complexes will be called the _valuation faces_. The valuation faces come with natural face inclusions inherited from \(\Delta_{B}^{N},\Delta_{B+}^{N},\Delta_{E}^{N}\).
By Lemma 4.1.3 we get:
**Lemma 4.1.5**.:
1. _A valuation face_ \(\sigma\) _in_ \(\Delta_{B}^{N}\) _is represented by a pair_ \((\omega,Z_{X}^{0}(\omega))\) _defined by the collection of valuations_ \(\omega=\omega_{\sigma}\) _in_ \(N\)_, such that_ \[Z_{X}(\omega):=\bigcap_{\nu\in\omega}Z_{X}(\nu)\neq\emptyset,\] _and an irreducible component_ \(Z_{X}^{0}(\omega)\) _of_ \(Z_{X}(\omega)\)_._
2. _A simplex_ \(\sigma\) _of_ \(\Delta_{E}^{N}=\Delta_{B+}^{N}\) _corresponds to a subset_ \(\omega\subset N\)_, such that_ \[Z_{Y}(\omega):=\bigcap_{\nu\in\omega}Z_{Y}(\nu)\neq\emptyset,\] _and an irreducible component_ \(Z_{Y}^{0}(\omega)\) _of_ \(Z_{Y}(\omega)\)_._
_The face relations are given by the inclusions of the sets of valuations and the associated components. \(\clubsuit\)_
**Remark 4.1.6**.: Thus, the dual valuation complexes could be thought of as ordinary dual complexes of the exceptional divisors with the associated valuation structure so that the vertices define the relevant exceptional valuations, and the faces determine the sets of the valuations.
#### 4.1.7. Dual complex associated with a locally monomial ideal
If \(\mathcal{J}\) is a locally monomial ideal on a regular scheme, such that \(\operatorname{codim}(V(\mathcal{J}))\geq 2\), then one can associate with \(\mathcal{J}\) the normalized blow-up \(\pi:Y\to X\), and the full cobordant blow-up \(\pi_{B}:B\to X\) of \(\mathcal{J}\). The morphism \(\pi:Y\to X\) is locally toric, and we shall call the dual complexes \(\Delta_{D}\), \(\Delta_{D+}\simeq\Delta_{E}\) and the corresponding dual valuation complexes \(\Delta_{D}^{N}\), \(\Delta_{D+}^{N}\simeq\Delta_{E}^{N}\) _associated with \(\mathcal{J}\)_.
### Graded rings defined by the valuations
#### 4.2.1. Graded rings defined by valuations
In the considerations below, let \(\omega=\{\nu_{1},\ldots,\nu_{r}\}\) be a set of valuations on a regular scheme \(X\). We associate with each valuation \(\nu_{i}\) a dummy variable \(t_{i}\) for \(i=1,\ldots,r\). Set
\[\mathbf{t}:=(t_{1},\ldots,t_{r})\quad\text{and}\quad\mathbf{t}^{-1}:=(t_{1}^{-1},\ldots,t_{r}^{-1}).\]
Consider the partial componentwise order on \(\mathbb{Z}_{\geq 0}^{r}\). For \(\alpha:=(a_{1},\ldots,a_{r})\in\mathbb{Z}_{\geq 0}^{r}\) we define the ideals
\[\mathcal{J}_{\omega}^{\alpha}:=\bigcap_{\nu_{i}\in\omega}\mathcal{I}_{\nu_{i},a_{i}}\subset\mathcal{O}_{X},\qquad\mathcal{J}_{\omega}^{>\alpha}:=\sum_{ \beta>\alpha}\mathcal{J}_{\omega}^{\beta}. \tag{5}\]
This determines the \(\mathbb{Z}_{\geq 0}^{r}\)-graded Rees algebra
\[\mathcal{A}_{\omega}:=\bigoplus_{\alpha\in\mathbb{Z}_{\geq 0}^{r}}\mathcal{J}_{\omega}^{\alpha}\mathbf{t}^{\alpha}\subset\mathcal{O}_{X}[\mathbf{t}],\]
where \(\mathbf{t}^{\alpha}=t_{1}^{a_{1}}\cdot\ldots\cdot t_{r}^{a_{r}}\), and the associated gradation
\[\operatorname{gr}_{\omega}(\mathcal{O}_{X})=\bigoplus_{\alpha\in\mathbb{Z}_{\geq 0}^{r}}(\mathcal{J}_{\omega}^{\alpha}/\mathcal{J}_{\omega}^{>\alpha})\mathbf{t}^{\alpha}=\mathcal{A}_{\omega}/(\mathcal{A}_{\omega}\cap\mathbf{t}^{-1}\mathcal{A}_{\omega}[\mathbf{t}^{-1}])=\mathcal{A}_{\omega}[\mathbf{t}^{-1}]/(\mathbf{t}^{-1}\cdot\mathcal{A}_{\omega}[\mathbf{t}^{-1}]). \tag{6}\]
In particular, for \(\alpha=0=(0,\ldots,0)\) we have locally on \(X\):
\[\mathcal{J}_{\omega}:=\mathcal{J}_{\omega}^{>0}=\mathcal{I}_{Z_{X}(\omega)},\]
where
\[Z_{X}(\omega):=\bigcap_{i=1}^{r}Z_{X}(\nu_{i}).\]
Then \(\operatorname{gr}_{\omega}(\mathcal{O}_{X})\) is a sheaf of graded \(\mathcal{O}_{X}/\mathcal{J}_{\omega}=\mathcal{O}_{V(\mathcal{J}_{\omega})}\)-modules.
**Lemma 4.2.2**.: _Assume the valuations in the set \(\omega=\{\nu_{1},\ldots,\nu_{r}\}\) are monomial for a certain partial system of local parameters \(u_{1},\ldots,u_{n}\) on a regular scheme \(X\). Then_
1. \(\mathcal{J}_{\omega}=\sum_{i=1}^{r}\mathcal{I}_{\nu_{i},1}=(u_{j}\mid\nu_{i}(u_{j})>0\text{ for some }\nu_{i}\in\omega)\) _, and_
2. \(\operatorname{gr}_{\omega}(\mathcal{O}_{X})=\mathcal{O}_{V(\mathcal{J}_{\omega})}[u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{n}\mathbf{t}^{\alpha_{n}}],\) _where_ \(u_{i}\in\mathcal{J}_{\omega}^{\alpha_{i}}\smallsetminus\mathcal{J}_{\omega}^{>\alpha_{i}}\)_, and_ \(\alpha_{i}=(a_{1i},\ldots,a_{ri})\)_, with_ \(\nu_{j}(u_{i})=a_{ji}\in\mathbb{Z}_{\geq 0}\) _for_ \(j=1,\ldots,r\)_, and_ \(i=1,\ldots,n\)__
Proof.: (1) Note that \(\mathcal{I}_{Z_{X}(\nu_{i})}=(u_{j}\mid\nu_{i}(u_{j})>0)\). Thus
\[\mathcal{J}_{\omega}=\mathcal{I}_{Z_{X}(\omega)}=\sum_{\nu_{i}\in\omega}\mathcal{I}_{Z_{X}(\nu_{i})}=(u_{j}\mid\nu_{i}(u_{j})>0\text{ for some }\nu_{i}\in\omega)\]
(2) By the definition of \(\mathcal{A}_{\omega}\), the equality (5), and the proof of Lemma 2.3.5(1):
\[\mathcal{A}_{\omega}[\mathbf{t}^{-1}]=(\bigoplus_{\alpha\in\mathbb{Z}_{\geq 0}^{r}}\ \bigcap_{i=1}^{r}\mathcal{I}_{\nu_{i},a_{i}}\ \cdot t_{1}^{a_{1}}\cdot\ldots\cdot t_{r}^{a_{r}})[t_{1}^{-1},\ldots,t_{r}^{-1}]=\] \[=\mathcal{O}_{X}[t_{1}^{-1},\ldots,t_{r}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{n}\mathbf{t}^{\alpha_{n}}]=\mathcal{O}_{X}[\mathbf{t}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{n}\mathbf{t}^{\alpha_{n}}],\]
where \(\alpha_{i}=(a_{1i},\ldots,a_{ri})\), and \(\nu_{j}(u_{i})=a_{ji}\in\mathbb{Z}_{\geq 0}\) for \(j=1,\ldots,r\), and \(i=1,\ldots,n\). Thus, by the equality (6):
\[\operatorname{gr}_{\omega}(\mathcal{O}_{X})=\mathcal{A}_{\omega}[\mathbf{t}^{-1}]/(\mathbf{t}^{-1}\cdot\mathcal{A}_{\omega}[\mathbf{t}^{-1}])=\] \[=(\mathcal{O}_{X}[\mathbf{t}^{-1},u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{n}\mathbf{t}^{\alpha_{n}}])/(\mathbf{t}^{-1})\simeq\] \[\simeq(\mathcal{O}_{X}/\mathcal{J}_{\omega})[u_{1}\mathbf{t}^{\alpha_{1}},\ldots,u_{n}\mathbf{t}^{\alpha_{n}}].\]
We shall call the corresponding scheme
\[\mathbb{N}_{\omega}(X):=\operatorname{Spec}_{V(\mathcal{J}_{\omega})}( \operatorname{gr}_{\omega}(\mathcal{O}_{X}))=\operatorname{Spec}_{Z_{X}( \omega)}(\operatorname{gr}_{\omega}(\mathcal{O}_{X}))\]
the _weighted normal bundle_ of \(X\) at the set of valuations \(\omega\).
One can extend this to any valuation face in \(\Delta_{B}^{N}\), associated with a full cobordant blow-up \(B\to X\).
**Definition 4.2.3**.: By the _weighted normal bundle_ of \(X\) at the valuation face \(\omega\in\Delta_{B}^{N}\) we mean the scheme
\[\mathbb{N}_{\omega}(X):=\operatorname{Spec}_{Z_{X}^{0}(\omega)}( \operatorname{gr}_{\omega}(\mathcal{O}_{X}))\]
over the component \(Z_{X}^{0}(\omega)\) of \(Z_{X}(\omega)\) associated with the face \(\omega\).
As \(\Delta_{E}^{N}\subset\Delta_{B}^{N}\), the above definition is also valid for any valuation face \(\omega\in\Delta_{E}^{N}\).
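For a simple illustration of Lemma 4.2.2 and of this definition, take \(X=\mathbb{A}^{2}=\operatorname{Spec}(\kappa[u_{1},u_{2}])\) and \(\omega=\{\nu\}\), for the monomial valuation \(\nu\) with \(\nu(u_{1})=3\), \(\nu(u_{2})=2\). Then \(\mathcal{J}_{\omega}=(u_{1},u_{2})\), \(Z_{X}(\omega)\) is the origin, \(\mathcal{I}_{\nu,a}\) is spanned by the monomials \(u_{1}^{b_{1}}u_{2}^{b_{2}}\) with \(3b_{1}+2b_{2}\geq a\), and
\[\operatorname{gr}_{\omega}(\mathcal{O}_{X})=\kappa[u_{1}t^{3},u_{2}t^{2}],\]
a polynomial ring in which the initial forms of \(u_{1}\) and \(u_{2}\) have weights \(3\) and \(2\). Thus \(\mathbb{N}_{\omega}(X)\simeq\mathbb{A}^{2}\), endowed with the corresponding grading.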
#### 4.2.4. The ideals of the initial forms
With any function \(f\in\mathcal{O}_{X,p}\), regular at \(p\in V(\mathcal{J})\), such that \(f\in\mathcal{J}_{\omega}^{\alpha}\smallsetminus\mathcal{J}_{\omega}^{>\alpha}\) for a certain \(\alpha\in\mathbb{Z}_{\geq 0}^{r}\), one can associate the unique homogeneous element, called the _initial form_
\[\mathrm{in}_{\omega}(f)=(f+\mathcal{J}_{\omega}^{>\alpha})\mathbf{t}_{\omega}^{\alpha}\in(\mathcal{J}_{\omega}^{\alpha}/\mathcal{J}_{\omega}^{>\alpha})\mathbf{t}_{\omega}^{\alpha}\subset\mathrm{gr}_{{}_{\omega}}(\mathcal{O}_{X}).\]
Similarly, we associate with an ideal sheaf \(\mathcal{I}\), the filtration \(\mathcal{I}_{\omega}^{\alpha}:=\mathcal{I}\cap\mathcal{J}_{\omega}^{\alpha}\) and set \(\mathcal{I}_{\omega}^{>\alpha}=\mathcal{I}\cap\mathcal{J}_{\omega}^{>\alpha}\).
We define the _ideal of the initial forms_ of \(\mathcal{I}\) to be the ideal
\[\mathrm{in}_{\omega}(\mathcal{I})=\bigoplus_{\alpha\in\mathbb{Z}_{\geq 0}^{r}}\mathcal{I}_{\omega}^{\alpha}/\mathcal{I}_{\omega}^{>\alpha}=\bigoplus_{\alpha\in\mathbb{Z}_{\geq 0}^{r}}(\mathcal{I}_{\omega}^{\alpha}+\mathcal{J}_{\omega}^{>\alpha})/\mathcal{J}_{\omega}^{>\alpha}\subset\mathrm{gr}_{{}_{\omega}}(\mathcal{O}_{X})\]
on \(\mathbb{N}_{\omega}(X)\).
For the ideal sheaf \(\mathcal{I}\), its _weak ideal of the initial forms on \(\mathbb{N}_{\omega}(X)\)_ is given by
\[\mathrm{in}_{\omega}^{\circ}(\mathcal{I})=\mathrm{gr}_{{}_{\omega}}(\mathcal{O}_{X})\cdot(\mathcal{I}_{\omega}^{\alpha_{0}}/\mathcal{I}_{\omega}^{>\alpha_{0}})\subset\mathrm{gr}_{{}_{\omega}}(\mathcal{O}_{X}),\]
where \(\mathcal{I}\subset\mathcal{J}_{\omega}^{\alpha_{0}}\), and \(\mathcal{I}\not\subset\mathcal{J}_{\omega}^{>\alpha_{0}}\).
**Remark 4.2.5**.: For any function \(f\in\mathcal{O}_{X}\),
\[\mathrm{in}_{\omega}(f)=\mathrm{in}_{\omega}(\mathcal{O}_{X}\cdot f)=\mathrm{ in}_{\omega}^{\circ}(\mathcal{O}_{X}\cdot f).\]
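To illustrate, let \(\omega=\{\nu\}\) with \(\nu(u_{1})=3\), \(\nu(u_{2})=2\) on \(\mathbb{A}^{2}\) as before, and take \(f=u_{1}^{2}+u_{2}^{3}+u_{1}u_{2}^{2}\). The three monomials have \(\nu\)-values \(6\), \(6\) and \(7\), so \(f\in\mathcal{J}_{\omega}^{6}\smallsetminus\mathcal{J}_{\omega}^{>6}\) and
\[\mathrm{in}_{\omega}(f)=(u_{1}^{2}+u_{2}^{3})t^{6}\in\mathrm{gr}_{\omega}(\mathcal{O}_{X}),\]
the lowest-weight homogeneous part of \(f\). By Remark 4.2.5, for the principal ideal \(\mathcal{I}=(f)\) this single form generates both \(\mathrm{in}_{\omega}(\mathcal{I})\) and \(\mathrm{in}_{\omega}^{\circ}(\mathcal{I})\).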
#### 4.2.6. Composition of gradations
**Lemma 4.2.7**.: _Let \(\omega=\{\nu_{1},\ldots,\nu_{r}\}\) be a set of valuations which are monomial for a common partial system of local parameters \(u_{1},\ldots,u_{n}\) on a regular \(X\). Consider its partition into subsets \(\omega_{1}=\{\nu_{1},\ldots,\nu_{s}\}\) and \(\omega_{2}=\{\nu_{s+1},\ldots,\nu_{r}\}\). Let \(\mathbf{t}:=(t_{1},\ldots,t_{r})\) (respectively \(\mathbf{t}_{\omega_{1}}:=(t_{1},\ldots,t_{s})\)) be the set of the unknowns \(t_{i}\) associated to the valuations \(\nu_{i}\in\omega\) (respectively \(\nu_{i}\in\omega_{1}\)). Let \(\mathcal{J}_{\omega_{1}}=(u_{1},\ldots,u_{\ell})\). Then_
1. _The set_ \(\omega_{2}=\{\nu_{s+1},\ldots,\nu_{r}\}\) _determines the set of monomial valuations_ \(\overline{\omega_{2}}=\{\overline{\nu}_{s+1},\ldots,\overline{\nu}_{r}\}\) _on the multi-graded ring_ \[\mathrm{gr}_{{}_{\omega_{1}}}(\mathcal{O}_{X})=\mathcal{O}_{V(\mathcal{J}_{\omega_{1}})}[u_{1}\mathbf{t}_{\omega_{1}}^{\alpha_{1}},\ldots,u_{\ell}\mathbf{t}_{\omega_{1}}^{\alpha_{\ell}}]\simeq\mathcal{O}_{V(\mathcal{J}_{\omega_{1}})}[u_{1},\ldots,u_{\ell}],\] _with the ideals_ \[\mathcal{I}_{\overline{\nu}_{i},a_{i}}=\mathrm{in}_{\omega_{1}}(\mathcal{I}_{\nu_{i},a_{i}})\]
2. \(\mathrm{gr}_{\overline{\omega_{2}}}(\mathrm{gr}_{\omega_{1}}(\mathcal{O}_{X}))\simeq\mathrm{gr}_{{}_{\omega}}(\mathcal{O}_{X})\).
3. _If_ \(\mathcal{I}\subset\mathcal{O}_{X}\) _then_ 1. \(\mathrm{in}_{\omega}(\mathcal{I})=\mathrm{in}_{\overline{\omega_{2}}}( \mathrm{in}_{\omega_{1}}(\mathcal{I}))\)_._ 2. \(\mathrm{in}_{\omega}^{\circ}(\mathcal{I})=\mathrm{in}_{\overline{\omega_{2}}}^{ \circ}(\mathrm{in}_{\omega_{1}}^{\circ}(\mathcal{I}))\)_._
Proof.: (1) For \(j\leq\ell\), \(\mathrm{in}_{\omega_{1}}(u_{j})\) is identified with \(u_{j}\) in \(\mathcal{O}_{V(\mathcal{J}_{\omega_{1}})}[u_{1},\ldots,u_{\ell}]\). Otherwise, if \(j>\ell\), then \(\mathrm{in}_{\omega_{1}}(u_{j})\) is a parameter in \(\mathcal{O}_{V(\mathcal{J}_{\omega_{1}})}=\mathcal{O}_{X}/\mathcal{J}_{\omega_{1}}=\mathcal{O}_{X}/(u_{1},\ldots,u_{\ell})\). Consequently, the \(\nu_{j}\) determine the monomial valuations \(\overline{\nu}_{j}\) on \(\mathrm{gr}_{\omega_{1}}(\mathcal{O}_{X})\) with \(\mathrm{in}_{\omega_{1}}(\mathcal{I}_{\nu_{j},a})=\mathcal{I}_{\overline{\nu}_{j},a}\).
(2) and (3) For the multiindex \(\alpha=(\alpha_{1},\alpha_{2})\), where \(\alpha_{i}\) corresponds to \(\omega_{i}\) for \(i=1,2\), consider a function \(f\in\mathcal{J}_{\omega}^{\alpha}\smallsetminus\mathcal{J}_{\omega}^{>\alpha}\):
\[f\in\mathcal{J}_{\omega}^{\alpha}\smallsetminus\mathcal{J}_{\omega}^{>\alpha}=\mathcal{J}_{\omega_{1}}^{\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{\alpha_{2}}\smallsetminus(\mathcal{J}_{\omega_{1}}^{>\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{\alpha_{2}}+\mathcal{J}_{\omega_{1}}^{\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{>\alpha_{2}})\]
The ideal \(\mathrm{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{\alpha_{2}})\subset\mathrm{gr}_{\omega_{1}}(\mathcal{O}_{X})\) is homogeneous and \(\mathrm{in}_{\omega_{1}}(f)\) lies in the \(\alpha_{1}\)-gradation \(\mathrm{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{\alpha_{2}})_{\alpha_{1}}\subset(\mathrm{gr}_{\omega_{1}}(\mathcal{O}_{X}))_{\alpha_{1}}\):
\[\mathrm{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{\alpha_{2}})_{\alpha_{1}}= \frac{\mathcal{J}_{\omega_{1}}^{\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{ \alpha_{2}}}{\mathcal{J}_{\omega_{1}}^{>\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{ \alpha_{2}}}\subseteq(\mathrm{gr}_{\omega_{1}}(\mathcal{O}_{X}))_{\alpha_{1}}= \frac{\mathcal{J}_{\omega_{1}}^{\alpha_{1}}}{\mathcal{J}_{\omega_{1}}^{>\alpha_ {1}}}\]
and
\[\operatorname{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{>\alpha_{2}})_{\alpha_{1} }=\frac{\mathcal{J}_{\omega_{1}}^{\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{> \alpha_{2}}+\mathcal{J}_{\omega_{1}}^{>\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{ \alpha_{2}}}{\mathcal{J}_{\omega_{1}}^{>\alpha_{1}}\cap\mathcal{J}_{\omega_{2} }^{\alpha_{2}}}\]
Consequently, by the above,
\[\operatorname{in}_{\overline{\omega_{2}}}(\operatorname{in}_{\omega_{1}}(f))\in\operatorname{in}_{\overline{\omega_{2}}}(\operatorname{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{\alpha_{2}})_{\alpha_{1}})=\frac{(\operatorname{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{\alpha_{2}}))_{\alpha_{1}}}{(\operatorname{in}_{\omega_{1}}(\mathcal{J}_{\omega_{2}}^{>\alpha_{2}}))_{\alpha_{1}}}=\] \[=\frac{\mathcal{J}_{\omega_{1}}^{\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{\alpha_{2}}}{\mathcal{J}_{\omega_{1}}^{\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{>\alpha_{2}}+\mathcal{J}_{\omega_{1}}^{>\alpha_{1}}\cap\mathcal{J}_{\omega_{2}}^{\alpha_{2}}}=\frac{\mathcal{J}_{\omega}^{\alpha}}{\mathcal{J}_{\omega}^{>\alpha}}=(\operatorname{gr}_{\omega}(\mathcal{O}_{X}))_{\alpha}\]
On the other hand the initial form
\[\operatorname{in}_{\omega}(f)\in\frac{\mathcal{J}_{\omega}^{\alpha}}{\mathcal{J}_{\omega}^{>\alpha}}=(\operatorname{gr}_{\omega}(\mathcal{O}_{X}))_{\alpha},\]
determines the same element:
\[\operatorname{in}_{\omega}(f)=\operatorname{in}_{\overline{\omega_{2}}}( \operatorname{in}_{\omega_{1}}(f))\in(\operatorname{gr}_{\omega}(\mathcal{O} _{X}))_{\alpha}=\operatorname{gr}_{\overline{\omega_{2}}}((\operatorname{gr}_ {\omega_{1}}(\mathcal{O}_{X}))_{\alpha_{1}})_{\alpha_{2}},\]
which implies (2) and (3).
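As a small sanity check of Lemma 4.2.7 (purely illustrative): on \(X=\mathbb{A}^{2}\), let \(\nu_{1}\) be the divisorial valuation of \(V(u_{1})\) (so \(\nu_{1}(u_{1})=1\), \(\nu_{1}(u_{2})=0\)) and let \(\nu_{2}(u_{1})=\nu_{2}(u_{2})=1\). For \(f=u_{1}u_{2}+u_{1}^{3}\) we have
\[\mathrm{in}_{\nu_{1}}(f)=u_{2}\cdot u_{1}t_{1}\in\mathrm{gr}_{\nu_{1}}(\mathcal{O}_{X})=\mathcal{O}_{V(u_{1})}[u_{1}t_{1}],\]
and applying \(\overline{\nu}_{2}\) to this form gives \(u_{1}u_{2}\,t_{1}t_{2}^{2}\), which is exactly \(\mathrm{in}_{\omega}(f)\) for \(\omega=\{\nu_{1},\nu_{2}\}\), since \(\nu_{1}(f)=1\) and \(\nu_{2}(f)=2\).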
### The weighted normal bundles at valuations
The following extends a classical result of Huneke-Swanson on extended Rees algebras and smooth blow-ups [10, Definition 5.1.5], and the recent results of Rydh in [11] and Wlodarczyk in [12, Lemma 5.1.4] on the weighted normal cone.
**Lemma 4.3.1**.: _Let \(\pi:Y\to X\) be a locally toric proper birational morphism to a regular scheme \(X\) over a field \(\kappa\), with the exceptional components \(E_{i}\), for \(i=1,\dots,k\), and let \(\pi_{B}:B\to X\) be its full cobordization. Then for any stratum \(s=s_{\omega}\in S_{D}\) of the exceptional divisor \(D=V(t_{1}^{-1}\cdot\dots\cdot t_{k}^{-1})\) on \(B\) and the corresponding valuation face \(\omega\) in \(\Delta_{B}^{N}\) there is an isomorphism:_
\[s\simeq\mathbb{N}_{\omega}(X)\times T_{\check{\mathbf{t}}_{\omega}},\]
_where \(T_{\check{\mathbf{t}}_{\omega}}:=\operatorname{Spec}(\kappa[\check{\mathbf{t}} _{\omega},\check{\mathbf{t}}_{\omega}^{-1}])\), for the set \(\check{\mathbf{t}}_{\omega}\) of the unknowns corresponding to the remaining exceptional valuations which are not in \(\omega\)._
Proof.: We can replace \(X\) with its open subset and assume \(Z_{X}(\omega)\) is irreducible so that \(\pi_{B}(s)=Z_{X}^{0}(\omega)=Z_{X}(\omega)\).
By separating variables into \(\mathbf{t}_{\omega}\) and \(\check{\mathbf{t}}_{\omega}\) we can factor any monomial \(\mathbf{t}^{\alpha}=\mathbf{t}_{\omega}^{\alpha}\cdot\check{\mathbf{t}}_{ \omega}^{\alpha}\) uniquely into the product of the relevant monomials \(\mathbf{t}_{\omega}^{\alpha}\) and \(\check{\mathbf{t}}_{\omega}^{\alpha}\) respectively in \(\mathbf{t}_{\omega}\) and \(\check{\mathbf{t}}_{\omega}\). Then by Corollary 4.1.3(2), we can write \(\overline{s}=V(\mathbf{t}_{\omega}^{-1})\) in a neighborhood of \(\overline{s}\), and \(s=V(\mathbf{t}_{\omega}^{-1})\smallsetminus V(\check{\mathbf{t}}_{\omega}^{-1})\) as there is only one component of \(V(\mathbf{t}_{\omega}^{-1})\) mapping to \(\pi_{B}(s)=Z_{X}^{0}(\omega)=Z_{X}(\omega)\).
By definition \(B=\operatorname{Spec}(\mathcal{A}[\mathbf{t}^{-1}])\), where
\[\mathcal{A}=\bigoplus_{\alpha\in\mathbb{Z}_{\geq 0}^{k}}\mathcal{J}^{\alpha} \mathbf{t}^{\alpha},\quad\mathcal{J}^{\alpha}:=\bigcap_{i=1}^{k}\mathcal{I}_{ \nu_{i},a_{i}}\subset\mathcal{O}_{X}.\]
Let
\[\mathcal{A}_{\omega}=\bigoplus_{\alpha}\mathcal{J}_{\omega}^{\alpha}\mathbf{t} _{\omega}^{\alpha},\quad\mathcal{J}_{\omega}^{\alpha}:=\bigcap_{\nu_{i}\in \omega}\mathcal{I}_{\nu_{i},a_{i}}\subset\mathcal{O}_{X}.\]
Thus for the open subset \(B_{\omega}\subset B\) where the \(\check{\mathbf{t}}_{\omega}^{-1}\) are invertible we can write:
\[B_{\omega}:=B_{\check{\mathbf{t}}_{\omega}^{-1}}=\operatorname{Spec}_{X}(\mathcal{A}[\mathbf{t}^{-1}][\check{\mathbf{t}}_{\omega}])=\operatorname{Spec}_{X}(\mathcal{A}_{\omega}[\mathbf{t}_{\omega}^{-1}][\check{\mathbf{t}}_{\omega},\check{\mathbf{t}}_{\omega}^{-1}])\]
Consequently \(s=V_{B_{\omega}}(\mathbf{t}_{\omega}^{-1})\subset B_{\omega}\), by Corollary 4.1.3(2), so we can write
\[\mathcal{O}_{s}=\mathcal{A}_{\omega}[\mathbf{t}_{\omega}^{-1}][\check{\mathbf{t}}_{\omega},\check{\mathbf{t}}_{\omega}^{-1}]/(\mathbf{t}_{\omega}^{-1})=(\mathcal{A}_{\omega}[\mathbf{t}_{\omega}^{-1}]/(\mathbf{t}_{\omega}^{-1}\cdot\mathcal{A}_{\omega}[\mathbf{t}_{\omega}^{-1}]))[\check{\mathbf{t}}_{\omega},\check{\mathbf{t}}_{\omega}^{-1}]=\] \[=(\mathcal{A}_{\omega}/((\mathbf{t}_{\omega}^{-1}\cdot\mathcal{A}_{\omega}[\mathbf{t}_{\omega}^{-1}])\cap\mathcal{A}_{\omega}))[\check{\mathbf{t}}_{\omega},\check{\mathbf{t}}_{\omega}^{-1}]=\operatorname{gr}_{\omega}(\mathcal{O}_{X})[\check{\mathbf{t}}_{\omega},\check{\mathbf{t}}_{\omega}^{-1}].\]
The latter equality follows from Section 4.2.1.
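For instance, in the simplest case \(k=1\): let \(B=\operatorname{Spec}(\kappa[t^{-1},u_{1}t^{3},u_{2}t^{2}])\) be the full cobordant blow-up of \((u_{1}^{2},u_{2}^{3})\) on \(\mathbb{A}^{2}\) (cf. the weighted example in Section 3.4.6). Here \(D=V(t^{-1})\) is itself the unique stratum, \(\omega=\{\nu\}\), and the torus factor is trivial, so
\[\mathcal{O}_{D}=\kappa[t^{-1},u_{1}t^{3},u_{2}t^{2}]/(t^{-1})\simeq\kappa[u_{1}t^{3},u_{2}t^{2}]=\operatorname{gr}_{\nu}(\mathcal{O}_{X}),\]
that is, \(D\simeq\mathbb{N}_{\nu}(X)\), as Lemma 4.3.1 asserts.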
#### 4.3.2. The weak and the strict transforms and the ideal of the initial forms
The identification from Lemma 4.3.1 can be extended to the strict transforms of the ideals. The following generalizes the result from [20, Lemma 5.1.4] for the weighted blow-ups.
**Lemma 4.3.3**.: _Let \(X\) be a regular scheme over a field. Let \(B\to X\) be the full cobordant blow-up of a locally monomial center \(\mathcal{J}\). Let \(\mathcal{I}\subset\mathcal{O}_{X}\) be an ideal sheaf on \(X\). Let \(\sigma^{s}(\mathcal{I})\subset\mathcal{O}_{B}\) be the strict transform of \(\mathcal{I}\), and \(\sigma^{\circ}(\mathcal{I})\subset\mathcal{O}_{B}\) be its weak transform (see Definition 1.4.2). Then for any \(s\in S_{D}\), the natural isomorphism_
\[\mathcal{O}_{s}\simeq\mathcal{O}_{B}[\tilde{\mathbf{t}}_{\omega},\tilde{ \mathbf{t}}_{\omega}^{-1}]/(\mathbf{t}_{\omega}^{-1})\to\operatorname{gr}_{ \omega}(\mathcal{O}_{X})[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{ \omega}^{-1}]\]
_takes_
1. \(\sigma^{s}(\mathcal{I})_{|s}\) _onto_ \(\operatorname{in}_{\omega}(\mathcal{I})[\tilde{\mathbf{t}}_{\omega},\tilde{ \mathbf{t}}_{\omega}^{-1}]\subset\operatorname{gr}_{\omega}(\mathcal{O}_{X})[ \tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\)_._
2. \(\sigma^{\circ}(\mathcal{I})_{|s}\) _onto_ \(\operatorname{in}_{\omega}^{\circ}(\mathcal{I})[\tilde{\mathbf{t}}_{\omega}, \tilde{\mathbf{t}}_{\omega}^{-1}]\subset\operatorname{gr}_{\omega}(\mathcal{O}_ {X})[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\)_._
Proof.: Let \(f\in\mathcal{I}\) such that \(f\in\mathcal{J}^{\alpha}\smallsetminus\mathcal{J}^{>\alpha}\). By the definition of
\[\mathcal{O}_{B}=(\bigoplus_{\alpha\geq 0}\mathcal{J}^{\alpha}\mathbf{t}^{ \alpha})[\mathbf{t}^{-1}]\]
we conclude that \(\sigma^{s}(f)=\mathbf{t}^{\alpha}f\).
Then \(f\in\mathcal{J}^{\alpha}_{\omega}\smallsetminus\mathcal{J}^{>\alpha}_{\omega}\), and in a neighborhood of \(s\) we have that \(\tilde{\mathbf{t}}_{\omega}^{-1}\) is invertible. Then the strict transform
\[\sigma^{s}(f)=\mathbf{t}^{\alpha}f=\mathbf{t}_{\omega}^{\alpha}\tilde{ \mathbf{t}}_{\omega}^{\alpha}f\in\mathcal{J}^{\alpha}\mathbf{t}^{\alpha} \subset\mathcal{J}^{\alpha}_{\omega}\mathbf{t}_{\omega}^{\alpha}[\tilde{ \mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}],\]
and its reduction modulo \((\mathbf{t}_{\omega}^{-1}\mathcal{O}_{B}\cap\mathcal{J}^{\alpha}_{\omega})\mathbf{t}^{\alpha}=\mathcal{J}^{>\alpha}_{\omega}\mathbf{t}^{\alpha}\) can be written as the homogeneous element
\[\sigma^{s}(f)=\mathbf{t}^{\alpha}f+\mathbf{t}^{\alpha}\mathcal{J}^{>\alpha}_{\omega}=\mathbf{t}_{\omega}^{\alpha}\tilde{\mathbf{t}}_{\omega}^{\alpha}f+\mathbf{t}_{\omega}^{\alpha}\tilde{\mathbf{t}}_{\omega}^{\alpha}\cdot\mathcal{J}^{>\alpha}_{\omega}\]
in
\[\mathcal{O}_{B}[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]/( \mathbf{t}_{\omega}^{-1}\cdot\mathcal{O}_{B}[\tilde{\mathbf{t}}_{\omega}, \tilde{\mathbf{t}}_{\omega}^{-1}])=\operatorname{gr}_{\omega}(\mathcal{O}_{X})[ \tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\]
in the gradation
\[\mathcal{J}^{\alpha}_{\omega}\mathbf{t}_{\omega}^{\alpha}[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]/(\mathbf{t}_{\omega}^{-1}\mathcal{O}_{B}[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\cap\mathcal{J}^{\alpha}_{\omega}\mathbf{t}_{\omega}^{\alpha}[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}])\] \[=(\mathcal{J}^{\alpha}_{\omega}/\mathcal{J}^{>\alpha}_{\omega})\mathbf{t}_{\omega}^{\alpha}[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\subset\operatorname{gr}_{\omega}(\mathcal{O}_{X})[\tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\]
On the other hand \(f\) determines its initial form
\[\operatorname{in}_{\omega}(f)=(f+\mathcal{J}^{>\alpha}_{\omega})\mathbf{t}_{\omega}^{\alpha}\in(\mathcal{J}^{\alpha}_{\omega}/\mathcal{J}^{>\alpha}_{\omega})\mathbf{t}_{\omega}^{\alpha},\]
and thus, by the above \(\sigma^{s}(f)\) naturally and bijectively corresponds to
\[\tilde{\mathbf{t}}_{\omega}^{\alpha}\operatorname{in}_{\omega}(f)\in(\mathcal{ J}^{\alpha}_{\omega}/\mathcal{J}^{>\alpha}_{\omega})\mathbf{t}_{\omega}^{\alpha}[ \tilde{\mathbf{t}}_{\omega},\tilde{\mathbf{t}}_{\omega}^{-1}]\subset \operatorname{gr}_{\omega}(\mathcal{O}_{X})[\tilde{\mathbf{t}}_{\omega},\tilde{ \mathbf{t}}_{\omega}^{-1}].\]
The latter differs from \(\operatorname{in}_{\omega}(f)\) by the unit \(\tilde{\mathbf{t}}_{\omega}^{\alpha}\):
\[\tilde{\mathbf{t}}_{\omega}^{\alpha}\operatorname{in}_{\omega}(f)\sim \operatorname{in}_{\omega}(f).\]
### Cobordant resolution by locally monomial centers
#### 4.4.1. Weighted normal cone
**Definition 4.4.2**.: Let \(X\) be a regular scheme over a field. Let \(Y\subset X\) be a closed reduced subscheme with the ideal \(\mathcal{I}_{Y}\). Let \(\omega\) be a set of monomial valuations for a partial system of local parameters. The subscheme \(C_{\omega}(Y)=V(\operatorname{in}_{\omega}\mathcal{I}_{Y})\subset\mathbb{N}_{ \omega}(X)\) will be called the _weighted normal cone_ of \(Y\) at \(\omega\).
**Lemma 4.4.3**.: _Let \(X\) be a regular universally catenary scheme over a field. Let \(Y\subset X\) be a subscheme of pure codimension \(d\). Let \(\omega\) be a set of monomial valuations for a common partial local system of parameters \(u_{1},\ldots,u_{k}\) on \(X\). Then \(C_{\omega}(Y)\) is of pure codimension \(d\) in \(\mathbb{N}_{\omega}(X)\)._
Proof.: Let \(\omega=\{\nu_{1},\ldots,\nu_{r}\}\) and \(\omega_{1}=\{\nu_{1},\ldots,\nu_{r-1}\}\) be its subset. Then, by Lemma 4.2.7, we can write
\[\operatorname{in}_{\omega}(\mathcal{I})=\operatorname{in}_{\nu_{r}}( \operatorname{in}_{\omega_{1}}(\mathcal{I})),\]
where \(\nu_{r}\) is monomial on
\[\mathbb{N}_{\omega_{1}}(X)=\operatorname{Spec}(\operatorname{gr}_{\omega_{1} }(\mathcal{O}_{X}))=\operatorname{Spec}(\mathcal{O}_{V(\mathcal{J}_{\omega_{1 }})}[u_{1},\ldots,u_{\ell}])\]
which is also universally catenary. Here we assume without loss of generality that \(\mathcal{J}_{\omega_{1}}=(u_{1},\ldots,u_{\ell})\) for \(\ell\leq k\).
By the inductive argument for \(\operatorname{in}_{\omega_{1}}(\mathcal{I})\) on \(\mathbb{N}_{\omega_{1}}(X)\) we can reduce the situation to a single monomial valuation \(\nu=\nu_{r}\).
Let \(\nu(u_{1})=w_{1},\ldots,\nu(u_{k})=w_{k}\), and find some integers \(a_{1},\ldots,a_{k}\) such that
\[a_{1}w_{1}=\ldots=a_{k}w_{k}.\]
Consider the full cobordant blow-up \(B\) of \(\mathcal{J}=(u_{1}^{a_{1}},\ldots,u_{k}^{a_{k}})\):
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},u_{1}t^{w_{1}},\ldots,u_{k}t^{w_{k}}]).\]
We apply the argument from [20, Theorem 5.2.1]. By the assumption, \(B\) is catenary. Let \(d\) be the codimension of \(Y\) in \(X\). Then for the morphism
\[\pi_{B_{-}}:B_{-}=B\smallsetminus V(t^{-1})=X\times\mathbb{G}_{m}\to X,\]
the inverse image \(\pi_{B_{-}}^{-1}(Y)\) is of pure codimension \(d\) in \(B_{-}\). So is its scheme-theoretic closure \(Y^{\prime}:=\overline{\pi_{B_{-}}^{-1}(Y)}\), which is the strict transform \(V(\sigma^{s}(\mathcal{I}))\) of \(Y\), where \(\mathcal{I}:=\mathcal{I}_{Y}\).
Note that \(t^{-1}\) is not a zero divisor in
\[Y^{\prime}=V(\sigma^{s}(\mathcal{I}))=\operatorname{Spec}(\mathcal{O}_{B}/ \sigma^{s}(\mathcal{I})),\]
since \(t^{-1}f\in\sigma^{s}(\mathcal{I})\) implies \(f\in\sigma^{s}(\mathcal{I})\), by definition of the strict transform.
Then, by the Krull Hauptidealsatz, we have that each component of \(Y^{\prime}\cap V(t^{-1})\) is of codimension \(1\) in \(Y^{\prime}\), and of codimension \(d+1\) in \(B\). We conclude that each component of
\[Y^{\prime}\cap V(t^{-1})=V(\mathcal{O}_{V(t^{-1})}\cdot\sigma^{s}(\mathcal{I}))=C_{\nu}(Y)\subset\mathbb{N}_{\nu}(X)\]
is of codimension \(d\) in \(V(t^{-1})=\mathbb{N}_{\nu}(X)\).
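For instance (a purely illustrative check of the lemma), let \(Y=V(u_{1}^{2}-u_{2}^{3})\subset\mathbb{A}^{2}\) and let \(\nu\) be the monomial valuation with \(\nu(u_{1})=3\), \(\nu(u_{2})=2\). The generator is \(\nu\)-homogeneous of weight \(6\), so \(\operatorname{in}_{\nu}(\mathcal{I}_{Y})\) is generated by the single form \((u_{1}^{2}-u_{2}^{3})t^{6}\) and
\[C_{\nu}(Y)=V(u_{1}^{2}-u_{2}^{3})\subset\mathbb{N}_{\nu}(X)\simeq\mathbb{A}^{2}\]
is again of pure codimension \(1\), as the lemma asserts.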
#### 4.4.4. Cobordant resolution
For any scheme \(Y\), let \(\operatorname{Sing}(Y)\) denote its singular locus. For any ideal \(\mathcal{I}\) on \(X\) by \(\operatorname{Sing}(V(\mathcal{I}))\) we mean the singular locus of the scheme
\[V(\mathcal{I})=\operatorname{Spec}_{X}(\mathcal{O}_{X}/\mathcal{I}).\]
The following theorem extends [20, Theorem 5.2.2].
**Theorem 4.4.5**.: _Let \(X\) be a regular universally catenary scheme over a field. Let \(Y\subset X\) be an integral, closed subscheme of pure codimension \(d\) defined by \(\mathcal{I}_{Y}\)._
_Assume there is a locally monomial ideal \(\mathcal{J}\supset\mathcal{I}_{Y}\) on \(X\), with the cosupport \(V(\mathcal{J})\) of codimension \(\geq 2\), with the associated exceptional divisor \(E\) on the normalized blow-up \(\sigma:\operatorname{bl}_{\mathcal{J}}(X)\to X\), and the dual valuation complex \(\Delta_{E}^{N}\) such that_
1. \(\operatorname{Sing}(V(\mathcal{I}_{Y}))\subseteq V(\mathcal{J})\)_._
2. _For any valuation face_ \(\omega\in\Delta_{E}^{N}\subset\Delta_{B}^{N}\)_, and the ideal_ \(\operatorname{in}_{\omega}(\mathcal{I}_{Y})\subset\operatorname{gr}_{\omega}(\mathcal{O}_{X})\)1 _we have_ \[\operatorname{Sing}_{\mathbb{N}_{\omega}(X)}(V(\operatorname{in}_{\omega}\mathcal{I}_{Y}))\subseteq V(\operatorname{in}_{\omega}^{\circ}\mathcal{J}).\]
Footnote 1: Definition 4.2.3
_(respectively_
_(2') For any valuation face \(\omega\in\Delta_{E}^{N}\subset\Delta_{B}^{N}\), we have_
\[\operatorname{Sing}_{\mathbb{N}_{\omega}(X)}(V(\operatorname{in}_{\omega}^{ \circ}\mathcal{I}_{Y}))\subseteq V(\operatorname{in}_{\omega}^{\circ}\mathcal{ J}).)\]
_Then the cobordant blow-up \(B_{+}\to X\) of \(\mathcal{J}\) defines a cobordant resolution of \(Y\). That is, the strict transform \(Y^{\prime}=V(\sigma^{s}(\mathcal{I}_{Y}))\) of \(Y\) (respectively the weak transform \(Y^{\prime}=V(\sigma^{\circ}(\mathcal{I}_{Y}))\) of \(Y\)) is a regular subscheme of \(B_{+}\) of the codimension equal to the codimension of \(Y\) in \(X\)._
Proof.: The problem is local on \(X\). Thus, up to a torus factor, we can assume that the full cobordant blow-up of \(\mathcal{J}\) is given locally on \(X\) by
\[\sigma:B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[\mathbf{t}^{-1},\mathbf{t}^{\alpha_{1}}u_{1},\ldots,\mathbf{t}^{\alpha_{k}}u_{k}])\to X.\]
Then for the restriction morphism \(\pi_{B-}:B_{-}=X\times T\to X\), the inverse image \(\pi_{B_{-}}^{-1}(Y)\) is irreducible of codimension \(d\). So is its closure \(Y^{\prime}:=\overline{\pi_{B_{-}}^{-1}(Y)}=V(\sigma^{s}(\mathcal{I}_{Y}))\), which is the strict transform of \(Y\). Since \(V(\mathcal{J})\) is of codimension \(\geq 2\), the divisor \(D=V_{B}(\mathbf{t}^{-1})\) is exceptional for \(B\to X\). Observe that
\[\operatorname{Sing}(Y^{\prime})\smallsetminus D=\operatorname{Sing}(Y^{\prime})\cap B_{-}\subset V_{B_{-}}(\mathcal{J}\cdot\mathcal{O}_{B_{-}})\subseteq V_{B}(\sigma^{\circ}(\mathcal{J}))\]
On the other hand, the exceptional divisor \(D_{+}=D_{|B_{+}}\) is the union of the strata \(s_{+}\in S_{D+}\). By Corollary 4.1.3(4),(5), each such stratum \(s_{+}\) extends to \(s\in S_{D}\), and corresponds to the valuation face \(\omega\in\Delta_{D_{+}}^{N}=\Delta_{E}^{N}\subset\Delta_{D}^{N}\).
Since, by condition (2), the singular locus of \(V(\operatorname{in}_{\omega}\mathcal{I}_{Y})\) is contained in \(V(\operatorname{in}_{\omega}^{\circ}(\mathcal{J}))\), by Lemmas 4.3.1 and 4.3.3 we have
\[\operatorname{Sing}(Y^{\prime}\cap s)=\operatorname{Sing}_{\mathbb{N}_{\omega}(X)}(V(\operatorname{in}_{\omega}\mathcal{I}_{Y}))\times T_{\check{\mathbf{t}}_{\omega}}\subseteq V(\operatorname{in}_{\omega}^{\circ}\mathcal{J})\times T_{\check{\mathbf{t}}_{\omega}}=V(\sigma^{\circ}(\mathcal{J})_{|s}).\]
Then using Lemmas 4.4.3 and 4.3.1 we conclude that the subscheme
\[Y^{\prime}\cap s\simeq V(\operatorname{in}_{\omega}\mathcal{I}_{Y})\times T_{\check{\mathbf{t}}_{\omega}}\]
is of pure codimension \(d\) in \(s\simeq\mathbb{N}_{\omega}(X)\times T_{\check{\mathbf{t}}_{\omega}}\), and
\[(Y^{\prime}\cap s)\cap B_{+}=(Y^{\prime}\cap s)\smallsetminus V_{B}(\sigma^{\circ}(\mathcal{J}))\]
is regular of codimension \(d\) in \(s\).
Hence for \(p\in(Y^{\prime}\cap s)\smallsetminus V_{B}(\sigma^{\circ}(\mathcal{J}))\), we can find parameters \(v_{1},\ldots,v_{d}\in\mathcal{O}_{s,p}\cdot\mathcal{I}_{Y^{\prime}}=(\mathcal{O}_{B,p}\cdot\mathcal{I}_{Y^{\prime}})/(\mathbf{t}_{\omega}^{-1})\) at \(p\) which vanish on \(Y^{\prime}\cap s\). But these parameters come from local parameters in \(\mathcal{I}_{Y^{\prime}}\) on \(B\) at \(p\). So they define a regular subscheme \(Y^{\prime\prime}\) of \(B_{+}\) of codimension \(d\), containing \(Y^{\prime}\) locally. Thus \(Y^{\prime\prime}\) locally coincides with \(Y^{\prime}\), which must be regular at \(p\in s\smallsetminus V_{B}(\sigma^{\circ}(\mathcal{J}))\). Consequently \(\operatorname{Sing}(Y^{\prime})\) is contained in \(V_{B}(\sigma^{\circ}(\mathcal{J}))\), and, by Lemma 1.4.4, \(Y^{\prime}\) is a regular subscheme of \(B_{+}=B\smallsetminus V_{B}(\sigma^{\circ}(\mathcal{J}))\) of codimension \(d\).
The proof for the weak transform \(\sigma^{\circ}(\mathcal{I}_{Y})\) (with the stronger assumption in condition (2')) is identical.
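To make the theorem concrete, one can check directly the standard cuspidal example in the notation above. Let \(X=\mathbb{A}^{2}\), \(Y=V(\mathcal{I}_{Y})\) with \(\mathcal{I}_{Y}=(u_{1}^{2}-u_{2}^{3})\), and take the locally monomial center \(\mathcal{J}=(u_{1}^{2},u_{2}^{3})\supset\mathcal{I}_{Y}\). Then \(\operatorname{Sing}(Y)=\{0\}=V(\mathcal{J})\), and \(\Delta_{E}^{N}\) has the single valuation face \(\omega=\{\nu\}\) with \(\nu(u_{1})=3\), \(\nu(u_{2})=2\). Condition (2) holds, since \(V(\operatorname{in}_{\omega}\mathcal{I}_{Y})=V(u_{1}^{2}-u_{2}^{3})\subset\mathbb{N}_{\omega}(X)\) is singular only at the origin \(V(\operatorname{in}_{\omega}^{\circ}\mathcal{J})\). On
\[B=\operatorname{Spec}(\kappa[t^{-1},u_{1}^{\prime},u_{2}^{\prime}]),\qquad u_{1}^{\prime}=t^{3}u_{1},\ u_{2}^{\prime}=t^{2}u_{2},\]
the strict transform is \(\sigma^{s}(\mathcal{I}_{Y})=(t^{6}(u_{1}^{2}-u_{2}^{3}))=((u_{1}^{\prime})^{2}-(u_{2}^{\prime})^{3})\), whose singular locus is contained in \(V(u_{1}^{\prime},u_{2}^{\prime})=V(\sigma^{\circ}(\mathcal{J}))\). Hence \(Y^{\prime}=V((u_{1}^{\prime})^{2}-(u_{2}^{\prime})^{3})\) is regular on \(B_{+}\), of codimension \(1\), as the theorem asserts.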
As a corollary, we obtain the following:
**Theorem 4.4.6**.: _Let \(X\) be a smooth variety over a field \(\kappa\) of any characteristic. Let \(Y\subset X\) be a closed integral subscheme of \(X\). Assume there is a locally monomial ideal \(\mathcal{J}\supset\mathcal{I}_{Y}\) on \(X\), with the cosupport \(V(\mathcal{J})\) of codimension \(\geq 2\), with the associated exceptional divisor \(E\) on the normalized blow-up \(\sigma:\operatorname{bl}_{\mathcal{J}}(X)\to X\), and the dual valuation complex \(\Delta_{E}^{N}\) such that_
1. \(\operatorname{Sing}(V(\mathcal{I}_{Y}))\subseteq V(\mathcal{J})\)_._
2. _For any valuation face_ \(\omega\in\Delta_{E}^{N}\)_, the singular locus_ \[\operatorname{Sing}_{\mathbb{N}_{\omega}(X)}(V(\operatorname{in}_{\omega}\mathcal{I}_{Y}))\subseteq V(\operatorname{in}_{\omega}^{\circ}\mathcal{J}).\]
_Then there is a resolution of \(Y\) at \(Z\), that is, a projective birational morphism \(\phi:Y^{res}\to Y\) from a smooth variety \(Y^{res}\) with the exceptional locus \(Z\subset Y\), such that \(\phi^{-1}(Z)\) is an SNC divisor on \(Y^{res}\)._
Proof.: Take the cobordant resolution \(B_{+}\to X\) from Theorem 4.4.5. We use Section 3.4.4 to embed cobordant blow-up \(B_{+}\) as a smooth subspace of the relative affine space \(\mathbb{A}_{X}^{n}\). This implies that \(B_{+}\mathbin{/\!\!/}T\) is locally toric.
The locally toric singularities of \(B_{+}\mathbin{/\!\!/}T\) can be canonically resolved by the combinatorial method of [20, Theorem 7.17.1]. This produces the projective birational resolution \(Y^{res}\to Y\) of \(Y\) such that the inverse image of the singular locus is an SNC divisor.
### Resolution of hypersurfaces via the Newton method
#### 4.5.1. The Newton polytope of a monomial ideal
Let \(X=\mathbb{A}_{Z}^{k}=\operatorname{Spec}(\mathcal{O}_{Z}[x_{1},\ldots,x_{k}])\), where \(Z\) is a smooth scheme over \(\kappa\), and let \(\mathcal{I}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\) be an ideal. This extends the notion of the Newton polytope of a monomial ideal \(I=(\mathbf{x}^{\alpha_{1}},\ldots,\mathbf{x}^{\alpha_{k}})\subset\kappa[x_{1},\ldots,x_{n}]\), considered previously in Section 2.6.1 for the case \(Z=\operatorname{Spec}(\kappa)\), where \(\kappa\) is a field.
As before, by the associated _Newton polytope_ of \(\mathcal{I}\) we mean
\[\operatorname{P}_{\mathcal{I}}:=\operatorname{conv}(\alpha_{1}+\mathbb{Q}_{ \geq 0}^{n},\ldots,\alpha_{k}+\mathbb{Q}_{\geq 0}^{n})\subseteq\mathbb{Q}_{ \geq 0}^{n}\]
Conversely, with a polytope \(P\subset\mathbb{Q}_{\geq 0}^{n}\) we associate the monomial ideal
\[\mathcal{I}_{P}:=(x^{\alpha}\mid\alpha\in P\cap\mathbb{Z}^{n}).\]
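As a simple illustration (our example, using a standard fact about integral closures of monomial ideals): the assignment \(\mathcal{I}\mapsto\mathcal{I}_{P_{\mathcal{I}}}\) may enlarge the ideal. For \(\mathcal{I}=(x^{2},y^{2})\subset\kappa[x,y]\) one has
\[(1,1)=\tfrac{1}{2}(2,0)+\tfrac{1}{2}(0,2)\in P_{\mathcal{I}},\qquad\text{so}\qquad\mathcal{I}_{P_{\mathcal{I}}}=(x^{2},xy,y^{2})\supsetneq\mathcal{I},\]
consistently with \(xy\) being integral over \(\mathcal{I}\): \((xy)^{2}=x^{2}\cdot y^{2}\in\mathcal{I}^{2}\).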
#### 4.5.2. The initial forms defined by faces
**Definition 4.5.3**.: (see [1]) Let \(\mathcal{I}=(x^{\alpha})_{\alpha\in A}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{n}]\) be a monomial ideal, let \(P_{\mathcal{I}}\) be its Newton polytope, and let \(P\leq P_{\mathcal{I}}\) be a face. We define the _initial form of \(\mathcal{I}\) with respect to the face \(P\) of \(P_{\mathcal{I}}\)_ to be:
\[\operatorname{inv}_{P}(\mathcal{I}):=(\mathbf{x}^{\alpha}\mid\alpha\in A\cap P).\]
The definition of \(\operatorname{inv}_{P}(\mathcal{I})\) is a particular case of the notion of the initial form \(\operatorname{inv}_{\omega}^{\circ}(\mathcal{I})\) with respect to a valuation face \(\omega\in\Delta_{E}^{N}\).
By Corollary 2.6.13 we obtain
**Lemma 4.5.4**.: _Let \(\Delta_{E}^{N}\) be the valuation dual complex associated with a monomial ideal \(\mathcal{I}\), and let \(P=P_{\mathcal{I}}\) be its Newton polytope._
_Any valuation face \(\omega\in\Delta_{E}^{N}\) of the associated dual valuation complex \(\Delta_{E}^{N}\) defines the induced face_
\[P_{\omega}:=P\cap\bigcap_{\nu\in\omega}H_{\nu}\]
_of the Newton polytope \(P\), for the supporting hyperplanes \(H_{\nu}\) associated with \(\nu\) and we have:_
\[\operatorname{inv}_{\omega}^{\circ}(\mathcal{I})=\operatorname{inv}_{P_{ \omega}}(\mathcal{I}).\]
_Conversely, for any supporting face \(P^{\prime}\) of \(P\) there is a valuation face \(\omega\in\Delta_{E}^{N}\) such that \(P_{\omega}=P^{\prime}\), and hence \(\operatorname{inv}_{\omega}^{\circ}(\mathcal{I})=\operatorname{inv}_{P^{\prime}}(\mathcal{I})\). \(\clubsuit\)_
**Remark 4.5.5**.: The above correspondence is not bijective. Several valuation faces \(\omega\) could define the same supporting face of \(P\). The information encoded in the dual valuation complex is richer and can be applied to a more general setting.
#### 4.5.6. The Newton polytopes of polynomials and ideals
By the _Newton polytope_ of the function
\[f=\sum c_{\alpha}\mathbf{x}^{\alpha}\in\mathcal{O}(Z)[x_{1},\ldots,x_{k}],\]
where \(c_{\alpha}\in\mathcal{O}(Z)\), we mean the Newton polytope of the monomial ideal
\[\mathcal{J}_{f}:=(x^{\alpha}\mid c_{\alpha}\neq 0),\]
generated by the exponents \(\alpha\) occurring in the presentation of \(f\) with nonzero coefficients. Note that \(\mathcal{J}_{f}\) is the smallest monomial ideal which contains \(f\).
This definition can be extended to any ideal \(\mathcal{I}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\). We associate with \(\mathcal{I}\) the monomial ideal \(\mathcal{J}=\mathcal{J}_{\mathcal{I}}\) generated by the ideals \(\mathcal{J}_{f}\), where \(f\in\mathcal{I}\). The Newton polytope \(P_{\mathcal{I}}\) of \(\mathcal{I}\) is simply the Newton polytope of the monomial ideal \(\mathcal{J}\).
If \(P\leq P_{\mathcal{I}}\) is a face of the Newton polytope \(P_{\mathcal{I}}\), and \(f=\sum_{\alpha\in A}c_{\alpha}\mathbf{x}^{\alpha}\in\mathcal{I}\) we put
\[\operatorname{inv}_{P}(f):=\sum_{\alpha\in A\cap P}c_{\alpha}\mathbf{x}^{ \alpha}.\]
Then \(\operatorname{inv}_{P}(\mathcal{I})\) is the ideal generated by \(\operatorname{inv}_{P}(f)\), where \(f\in\mathcal{I}\).
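To illustrate (our own example, in the notation above): for
\[f=x^{3}+xy+y^{2}\in\kappa[x,y],\]
the Newton polytope \(P_{f}\) has the vertices \((3,0),(1,1),(0,2)\) and two compact edges \(P_{1}=[(3,0),(1,1)]\) and \(P_{2}=[(1,1),(0,2)]\), supported by the weight vectors \((1,2)\) and \((1,1)\) respectively. Accordingly,
\[\operatorname{inv}_{P_{1}}(f)=x^{3}+xy,\qquad\operatorname{inv}_{P_{2}}(f)=xy+y^{2},\qquad\operatorname{inv}_{\{(1,1)\}}(f)=xy.\]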
Let \(\Delta_{E}^{N}\) be the dual valuation complex associated with a monomial ideal \(\mathcal{J}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\). Recall that, by Section 4.2.4, and using the identification \(\operatorname{gr}_{\omega}\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]=\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\), for any valuation face \(\omega\in\Delta_{E}^{N}\) we write
\[\operatorname{inv}_{\omega}(f):=\sum_{\alpha\in A_{\omega,f}}c_{\alpha} \mathbf{x}^{\alpha}\in\operatorname{gr}_{\omega}(\mathcal{O}_{Z}[x_{1},\ldots,x_{k}])=\mathcal{O}_{Z}[x_{1},\ldots,x_{k}].\]
where
\[A_{\omega,f}=\{\alpha\in A\mid\nu(x^{\alpha})=\nu(f),\nu\in\omega\}.\]
Similarly for the ideal \(\mathcal{I}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\) the ideal of the initial forms \(\operatorname{inv}_{\omega}^{\circ}(\mathcal{I})\) is generated by all \(\operatorname{inv}_{\omega}(f)\), where \(f\in\mathcal{I}\), and \(\nu(f)=\nu(\mathcal{I})\) for all \(\nu\in\omega\).
The following is an immediate consequence of Lemma 4.5.4, and the above:
**Lemma 4.5.7**.: _Let \(P_{f}\) (respectively \(P_{\mathcal{I}}\)) be the Newton polytope of \(f=\sum c_{\alpha}x^{\alpha}\) (respectively of an ideal \(\mathcal{I}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\)), and let \(\mathcal{J}_{f}\) (respectively \(\mathcal{J}_{\mathcal{I}}\)) be the associated monomial ideal. Then for any valuation face \(\omega\in\Delta_{E}^{N}\) of the associated dual valuation complex \(\Delta_{E}^{N}\) and the corresponding face \(P_{\omega}\) of \(P\):_
\[\operatorname{inv}_{\omega}(f)=\operatorname{inv}_{P_{\omega}}(f).\]
\[\text{(respectively}\quad\operatorname{inv}_{\omega}^{\circ}(\mathcal{I})= \operatorname{inv}_{P_{\omega}}(\mathcal{I})).\]
#### 4.5.8. Resolution by the Newton polytopes
The following is a particular case of Theorem 4.4.5 for hypersurfaces, written in a more straightforward setup.
**Theorem 4.5.9**.: _Let \(X=\mathbb{A}_{Z}^{k}=\operatorname{Spec}\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\), where \(Z\) is a regular scheme over a field \(\kappa\). Let_
\[f=\sum_{\alpha\in A_{f}}c_{\alpha}\mathbf{x}^{\alpha}\in\mathcal{O}_{Z}[x_{1}, \ldots,x_{k}]\]
_where \(c_{\alpha}\neq 0\) for \(\alpha\in A_{f}\). Let \(\mathcal{J}=(\mathbf{x}^{\alpha}\mid\alpha\in A_{f})^{\operatorname{sat}}\) be the induced monomial ideal, and \(P_{f}=P_{\mathcal{J}}\) be its Newton polytope. Assume that_
1. _The cosupport_ \(V(\mathcal{J})\) _is of codimension_ \(\geq 2\)_,_
2. \(\operatorname{Sing}(V(f))\subseteq V(\mathcal{J}).\)
3. _For any supporting face_ \(P\) _of_ \(P_{f}\)_,_ \(\operatorname{Sing}(V(\operatorname{in}_{P}(f)))\subseteq V(\operatorname{in}_{P}(\mathcal{J})).\)
_Then the cobordant blow-up \(B_{+}\to X\) of \(\mathcal{J}\) resolves the singularity of \(V(f)\). That is, the strict transform \(Y^{\prime}=V(\sigma^{s}(f))\) of \(Y\) (which coincides with the weak transform \(V(\sigma^{\circ}(f))\) of \(Y\)) is a regular subscheme of \(B_{+}\)._
Proof.: By Lemmas 4.5.7, 4.5.4, and the assumption (3) we get that
\[\operatorname{Sing}(V(\operatorname{inv}_{\omega}(f)))=\operatorname{Sing}(V(\operatorname{inv}_{\omega}^{\circ}(f)))\subseteq V(\operatorname{in}_{\omega}^{\circ}(\mathcal{J})),\]
for any \(\omega\in\Delta_{E}^{N}\), and the theorem follows from Theorem 4.4.5.
**Remark 4.5.10**.: The theorem shows that in the case of a hypersurface \(V(f)\) the critical combinatorial information is related to the faces \(P\) of the Newton polytope \(P_{f}\). Generally, one considers the dual valuation complex \(\Delta_{E}^{N}\) associated with the ideal \(\mathcal{I}\). In such a case, \(\operatorname{inv}_{P}(f)\) is replaced with the more general \(\operatorname{inv}_{\omega}(\mathcal{I})\), and the role of the Newton polytope of a monomial ideal is limited (see Theorems 4.4.5 and 4.6.9). However, it still can be used in the context of the order of ideals in \(\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\) (see Theorems 4.8.1, 4.8.2).
One can easily extend these results to the products of schemes:
**Theorem 4.5.11**.: _Let \(X=\prod_{Z}X_{j}\), where each \(X_{j}=\mathbb{A}_{Z}^{n_{j}}=\operatorname{Spec}(\mathcal{O}_{Z}[\mathbf{x}_{ j}])=\operatorname{Spec}\mathcal{O}_{Z}[x_{j1},\ldots,x_{jk_{j}}]\), where \(Z\) is a regular scheme over a field \(\kappa\) for \(j=1,\ldots,r\). Let_
\[f_{j}=\sum_{\alpha\in A_{f_{j}}}c_{j\alpha}\mathbf{x}^{j\alpha}\in\mathcal{O} _{Z}[x_{j1},\ldots,x_{jk_{j}}]\]
_where \(c_{j\alpha}\neq 0\) for \(\alpha\in A_{f_{j}}\). Let_
\[\mathcal{J}_{j}=(\mathbf{x}_{j}^{\alpha}\mid\alpha\in A_{f_{j}})^{\mathrm{sat}} \subset\mathcal{O}_{Z}[\mathbf{x}_{j}]\]
_be the induced monomial ideal, and \(P_{f_{j}}:=P_{\mathcal{J}_{j}}\) be its Newton polytope in \(\mathbb{Q}^{k_{j}}\). Assume that for any \(j=1,\ldots,r\)_
1. _The cosupport_ \(V(\mathcal{J}_{j})\) _is of codimension_ \(\geq 2\)_,_
2. \(\mathrm{Sing}(V(f_{j}))\subseteq V(\mathcal{J}_{j})\)_._
3. _For any supporting face_ \(P\) _of_ \(P_{f_{j}}\)_, \(\operatorname{Sing}(V(\operatorname{in}_{P}(f_{j})))\subseteq V(\operatorname{in}_{P}(\mathcal{J}_{j}))\)._
_Then the cobordant blow-up \(B_{+}\to X\) of \(\prod_{j=1}^{r}\mathcal{O}_{X}\cdot\mathcal{J}_{j}\) resolves the singularity of \(V(f_{1},\ldots,f_{r})\). That is, the strict transform \(Y^{\prime}=V(\sigma^{s}(f_{1},\ldots,f_{r}))\) of \(Y\) is a regular subscheme of \(B_{+}\)._
Proof.: The cobordant blow-up of \(\prod_{j=1}^{r}\mathcal{O}_{X}\cdot\mathcal{J}_{j}\) is equal to the product over \(Z\) of the cobordant blow-ups \(B_{j+}\) of \(\mathcal{J}_{j}\) on \(\operatorname{Spec}(\mathcal{O}_{Z}[\mathbf{x}_{j}])\), each of which is smooth over \(Z\).
### The Abramovich-Quek resolution
The following result is due to Abramovich-Quek (with some minor modifications):
**Corollary 4.6.1**.: _[_1_, Theorem 5.1.2]_ _Let \(X=\mathbb{A}_{Z}^{n}=\mathrm{Spec}\,\mathcal{O}_{Z}[x_{1},\ldots,x_{n}]\), where \(Z\) is a regular scheme over a field. Consider the induced SNC divisor \(D:=V(x_{1}\cdot\ldots\cdot x_{n})\). Let_
\[f=\sum_{\alpha\in A_{f}}c_{\alpha}\mathbf{x}^{\alpha}\in\mathcal{O}(Z)[x_{1},\ldots,x_{n}]\]
_where \(c_{\alpha}\neq 0\) for \(\alpha\in A_{f}\). Let \(\mathcal{J}=\mathcal{J}_{f}:=(\mathbf{x}^{\alpha}\mid\alpha\in A_{f})\) be the associated monomial ideal, and \(P_{f}:=P_{\mathcal{J}}\) be its Newton polytope. Assume that the cosupport \(V(\mathcal{J})\) is of codimension \(\geq 2\), and for any face \(P\) of \(P_{f}\), the ideal \((\mathrm{in}_{P}(f))\) determines a smooth subscheme outside of \(D\)._
_Then the cobordant blow-up \(B_{+}\to X\) of \(\mathcal{J}\) resolves the singularity of \(V(f)\). That is, the strict transform \(Y^{\prime}=V(\sigma^{s}(f))=V(\sigma^{\circ}(f))\) of \(Y\) is a regular subscheme of \(B_{+}\)._
**Remark 4.6.2**.: Note that unlike in the original formulation the coefficients \(c_{\alpha}\) are not necessarily invertible. Corollary 4.6.1 is further generalized to ideals in the context of the order. See Remark 4.8.3.
Proof.: Let \(\sigma_{0}^{\vee}=\mathbb{Q}_{\geq 0}^{n}=\langle x_{1},\ldots,x_{n}\rangle\) be the cone corresponding to the ring
\[\mathcal{O}_{Z}[x_{1},\ldots,x_{n}].\]
It suffices to show that conditions (2), (3) of Theorem 4.5.9 are satisfied.
To prove condition (3) let \(P^{\prime}\) be any supporting face of \(P_{f}\). By Lemma 2.6.5, there is a stratification of \(X\) with strata \(s_{\tau}\), where \(\tau\) is a face of \(\sigma_{0}\). It is determined by the pull-back of the orbit stratification on \(X_{\sigma_{0}}\), via \(X=X_{\sigma_{0}}\times Z\to X_{\sigma_{0}}\).
Assume that \(\overline{s_{\tau}}\) is not in \(V(\mathrm{in}_{P^{\prime}}^{\circ}(\mathcal{J}))\). This means, by Corollary 2.6.6, that \(\tau^{*}\) intersects \(P^{\prime}\), so we consider the face \(P:=P^{\prime}\cap\tau^{*}\). Moreover, by Lemma 2.6.5, we can write the closure of the stratum \(s_{\tau}\) as
\[\overline{s_{\tau}}:=V(x_{i}\mid x_{i}\not\in\tau^{*}).\]
On the other hand, since \(P\subset\tau^{*}\), the polynomial \(\mathrm{in}_{P}(f)\in\mathcal{O}_{Z}[x_{i}\in\tau^{*}]\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{n}]\) can be identified with
\[\mathrm{in}_{P}(f)_{|\overline{s_{\tau}}}\in\mathcal{O}_{Z}[x_{1},\ldots,x_{n} ]/(x_{i}\mid x_{i}\not\in\tau^{*})\simeq\mathcal{O}_{Z}[x_{i}\in\tau^{*}].\]
Now \(\mathrm{in}_{P}(f)\) is simply equal to
\[\mathrm{in}_{P}(f)_{|\overline{s_{\tau}}}=\mathrm{in}_{P^{\prime}}(f)_{|\overline{s_{\tau}}}.\]
By the assumption, \(\mathrm{in}_{P}(f)\in\mathcal{O}_{Z}[x_{i}\in\tau^{*}]\) is a local parameter on
\[\operatorname{Spec}(\mathcal{O}_{Z}[x_{1},\ldots,x_{n}])\smallsetminus V(x_{1}\cdot\ldots\cdot x_{n})=\left(\operatorname{Spec}(\mathcal{O}_{Z}[x_{i}\in\tau^{*}])\smallsetminus V(\prod_{x_{i}\in\tau^{*}}x_{i})\right)\times_{Z}\left(\operatorname{Spec}(\mathcal{O}_{Z}[x_{i}\not\in\tau^{*}])\smallsetminus V(\prod_{x_{i}\notin\tau^{*}}x_{i})\right),\]
and on
\[\operatorname{Spec}(\mathcal{O}_{Z}[x_{i}\in\tau^{*}])\smallsetminus V(\prod_{x_{i}\in\tau^{*}}x_{i})\simeq V(x_{i}\mid x_{i}\not\in\tau^{*})\smallsetminus V(\prod_{x_{i}\in\tau^{*}}x_{i})=\overline{s_{\tau}}\smallsetminus V(\prod_{x_{i}\in\tau^{*}}x_{i})=s_{\tau}.\]
Consequently \(\mathrm{in}_{P}(f)_{|\overline{s_{\tau}}}=\mathrm{in}_{P^{\prime}}(f)_{| \overline{s_{\tau}}}\) defines a local parameter on the stratum \(s_{\tau}\). This implies that \(\mathrm{in}_{P^{\prime}}(f)\) is a local parameter on all strata \(s_{\tau}\) outside of \(V(\mathrm{in}_{P^{\prime}}^{\circ}(\mathcal{J}))\).
The proof of condition (2) is similar. Consider any face \(\tau\) of \(\sigma_{0}\). If \(\overline{s_{\tau}}\) is not in \(V(\mathcal{J})\), then, by Corollary 2.6.6, \(\tau^{*}\) intersects \(P_{f}\), so we consider the face \(P:=P_{f}\cap\tau^{*}\). Consequently, by the assumption
\[(\mathrm{in}_{P}(f))_{|\overline{s_{\tau}}}=f_{|\overline{s_{\tau}}}\in \mathcal{O}_{Z}[x_{i}\in\tau^{*}]\]
defines a local parameter on the stratum
\[s_{\tau}=\overline{s_{\tau}}\smallsetminus V(\prod_{x_{i}\in\tau^{*}}x_{i}).\]
This implies that \(f\) is a local parameter on all strata \(s_{\tau}\) outside of \(V(\mathcal{J})\), showing condition (2) of Theorem 4.5.9 and completing the proof.
**Corollary 4.6.3**.: _Let \(Z\) be a regular scheme over a field \(\kappa\), and \(X=\prod_{Z}X_{j}\), where \(X_{j}:=\mathbb{A}_{Z}^{n_{j}}=\mathrm{Spec}\,\mathcal{O}(Z)[x_{j1},\ldots,x_{ jk_{j}}]\) for \(j=1,\ldots,r\). Let_
\[f_{j}=\sum_{\alpha\in A_{f_{j}}}c_{j\alpha}\mathbf{x}^{j\alpha}\in\mathcal{O }_{Z}[x_{j1},\ldots,x_{jk_{j}}]\]
_where \(c_{j\alpha}\neq 0\) for \(\alpha\in A_{f_{j}}\). Let \(\mathcal{J}_{j}:=(\mathbf{x}_{j}^{\alpha}\mid\alpha\in A_{f_{j}})\) be the induced monomial ideal, and \(P_{f_{j}}:=P_{\mathcal{J}_{j}}\) be its Newton polytope in \(\mathbb{Q}^{k_{j}}\). Assume that for any \(j=1,\ldots,r\) the cosupport \(V(\mathcal{J}_{j})\) is of codimension \(\geq 2\), and for any face \(P\) of \(P_{f_{j}}\), the ideal \((\mathrm{in}_{P}(f_{j}))\) determines a smooth subscheme outside of \(V(x_{j1}\cdot\ldots\cdot x_{jk_{j}})\). Then the cobordant blow-up \(B_{+}\to X\) of \(\prod_{j=1}^{r}\mathcal{O}_{X}\cdot\mathcal{J}_{j}\) resolves the singularities of \(V(f_{1},\ldots,f_{r})\). That is, the strict transform \(Y^{\prime}=V(\sigma^{s}(f_{1},\ldots,f_{r}))\) of \(Y\) is a regular subscheme of \(B_{+}\)._
#### 4.6.4. Examples of resolution
**Theorem 4.6.5**.: _Let \(X=\operatorname{Spec}_{Z}(\mathcal{O}_{Z}[x_{1},\ldots,x_{k}])\), where \(Z\) is a smooth variety over a field \(\kappa\). Consider the closed subscheme \(Y\) on \(X\) defined by a function \(f\in H^{0}(X,\mathcal{O}_{X})\) of the form_
\[f=\sum_{i=1}^{k}c_{\alpha_{i}}(v)\mathbf{x}^{\alpha_{i}},\]
_where \(c_{\alpha_{i}}(v)\in\mathcal{O}(Z)^{*}\) are invertible._
_Assume that for the presentation of \(f\), one of the following holds:_
* \(\operatorname{char}(\kappa)=0\)_, and for any_ \(\alpha_{i}\) _except possibly one, there is a variable_ \(x_{j_{i}}\) _such that a power_ \(x_{j_{i}}^{a_{j_{i}}}\) _of_ \(x_{j_{i}}\)_, occurs in_ \(\mathbf{x}^{\alpha_{i}}\) _and_ \(x_{j_{i}}\) _does not occur in the others_ \(\mathbf{x}^{\alpha_{j}}\) _for_ \(j\neq i\)_._
* \(\operatorname{char}(\kappa)=p\)_, and for any_ \(\alpha_{i}\) _except possibly one there is a variable_ \(x_{j_{i}}\) _such that a power_ \(x_{j_{i}}^{a_{j_{i}}}\) _of_ \(x_{j_{i}}\)_, occurs in_ \(\mathbf{x}^{\alpha_{i}}\)_, with_ \(p\nmid a_{j_{i}}\) _and_ \(x_{j_{i}}\) _does not occur in the others_ \(\mathbf{x}^{\alpha_{j}}\) _for_ \(j\neq i\) _except as some_ \(k\cdot p\)_-th power for_ \(k\in\mathbb{N}\)_._
_Then the cobordant blow-up \(B_{+}\to X\) of \(\mathcal{J}=(\mathbf{x}^{\alpha_{1}},\ldots,\mathbf{x}^{\alpha_{k}})\) resolves the singularity, so that the strict transform \(\sigma^{s}(f)\) determines a regular subscheme \(\sigma^{s}(Y)\) of \(B_{+}\)._
Proof.: Let \(\mathcal{D}(f)\) be the ideal generated by \(f\) and all the derivatives \(D(f)\). At any point \(p\) of \(\operatorname{Sing}(f)\), we have \(\operatorname{ord}_{p}(f)\geq 2\), which implies \(\operatorname{Sing}(V(f))=V(\mathcal{D}(f))\). But the ideal \(\mathcal{D}(f)\) contains all but possibly one of the monomials \(\mathbf{x}^{\alpha_{i}}\sim x_{j(i)}D_{x_{j(i)}}(f)\). Since \(f=\sum_{i=1}^{k}c_{\alpha_{i}}(v)\mathbf{x}^{\alpha_{i}}\), and all but at most one monomial \(\mathbf{x}^{\alpha_{i}}\) are in \(\mathcal{D}(f)\), we conclude that \(\mathcal{D}(f)\supseteq\mathcal{J}=(\mathbf{x}^{\alpha_{1}},\ldots,\mathbf{x}^{\alpha_{k}}).\) So
\[\operatorname{Sing}(Y)=V(\mathcal{D}(f))\subseteq V(\mathcal{J}).\]
Similarly
\[\operatorname{Sing}(\operatorname{in}_{P}(f))=V(\mathcal{D}(\operatorname{in}_{P}(f)))\subseteq V(\operatorname{in}_{P}^{\circ}(\mathcal{J})).\]
Thus the conditions of Theorem 4.5.9 are satisfied.
**Example 4.6.6**.: Let \(f=x_{1}^{a_{1}}+\ldots+x_{k}^{a_{k}}\in\kappa[x_{1},\ldots,x_{k}]\), where the characteristic \(p\) divides at most one \(a_{i}\). Then the cobordant blow-up of \((x_{1}^{a_{1}},\ldots,x_{k}^{a_{k}})\) resolves the singularity. By Example 3.4.6, it is given by
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[\mathbf{t}^{-1},x_{1}\mathbf{t}^{w_{1}},\ldots,x_{k}\mathbf{t}^{w_{k}}])\]
\[B_{+}=B\smallsetminus V_{B}(\sigma^{s}(\mathcal{J}))=B\smallsetminus V_{B}(x_{1} \mathbf{t}^{w_{1}},\ldots,x_{k}\mathbf{t}^{w_{k}}).\]
The morphism \(B_{+}\to X\) is interpreted in Section 5.4 as the cobordant blow-up of the weighted center \(\mathcal{J}=(x_{1}^{1/w_{1}},\ldots,x_{k}^{1/w_{k}})\), such that \(\mathcal{O}_{B_{+}}\cdot\mathcal{J}=\mathcal{O}_{B_{+}}\cdot\mathbf{t}^{-1}\).
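For instance (a sketch spelling out the simplest case of Example 4.6.6, valid in any characteristic since \(p\) can divide at most one of \(2,3\)): for the cuspidal curve \(f=x_{1}^{2}+x_{2}^{3}\), the compact face of the Newton polytope of \(\mathcal{J}=(x_{1}^{2},x_{2}^{3})\) has the primitive normal vector \((3,2)\), so \(w_{1}=3\), \(w_{2}=2\), and
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[\mathbf{t}^{-1},x_{1}\mathbf{t}^{3},x_{2}\mathbf{t}^{2}]),\qquad\sigma^{s}(f)=\mathbf{t}^{6}f=(x_{1}\mathbf{t}^{3})^{2}+(x_{2}\mathbf{t}^{2})^{3},\]
which is visibly regular on \(B_{+}=B\smallsetminus V_{B}(x_{1}\mathbf{t}^{3},x_{2}\mathbf{t}^{2})\), since the singular locus of \(V(\sigma^{s}(f))\) lies in \(V_{B}(x_{1}\mathbf{t}^{3},x_{2}\mathbf{t}^{2})\).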
**Example 4.6.7**.: \[x_{1}^{p}+ax_{2}^{p}x_{3}+bx_{1}x_{4}^{p}x_{5}^{p^{2}}\in\kappa[x_{1},x_{2},x_{3},x_{4},x_{5}],\]
where \(a,b\in\kappa^{*}\) can be resolved by the single cobordant blow-up of
\[\mathcal{J}=(x_{1}^{p},x_{2}^{p}x_{3},x_{1}x_{4}^{p}x_{5}^{p^{2}})\]
over a field \(\kappa\) of characteristic \(p\). Here for \(x_{2}^{p}x_{3}\) the variable \(x_{3}\) does not occur in the other terms, and for \(x_{1}x_{4}^{p}x_{5}^{p^{2}}\) the coordinate \(x_{1}\) occurs in the other terms only as the \(p\)-th power \(x_{1}^{p}\), or does not occur at all.
**Example 4.6.8**.: \[x_{1}^{2}x_{2}^{5}+7x_{4}^{7}x_{3}^{5}+25x_{1}x_{3}^{6}\in\kappa[x_{1},x_{2},x_{3},x_{4}]\]
can be resolved by the cobordant blow-up of
\[\mathcal{J}=(x_{1}^{2}x_{2}^{5},x_{4}^{7}x_{3}^{5},x_{1}x_{3}^{6})\]
over a field \(\kappa\) of \(\operatorname{char}(\kappa)\neq 5,7\). We use \(x_{2}\) for \(x_{1}^{2}x_{2}^{5}\), and \(x_{4}\) for \(x_{4}^{7}x_{3}^{5}\).
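Explicitly (our verification of the condition \(\mathcal{J}\subseteq\mathcal{D}(f)\) from the proof of Theorem 4.6.5):
\[x_{2}D_{x_{2}}(f)=5\,x_{1}^{2}x_{2}^{5},\qquad x_{4}D_{x_{4}}(f)=49\,x_{4}^{7}x_{3}^{5},\qquad 25\,x_{1}x_{3}^{6}=f-x_{1}^{2}x_{2}^{5}-7x_{4}^{7}x_{3}^{5},\]
so all three monomial generators of \(\mathcal{J}\) lie in \(\mathcal{D}(f)\) precisely because \(5\) and \(7\) are invertible in \(\kappa\).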
**Theorem 4.6.9**.: _Let \(Z\) be a smooth variety over a field \(\kappa\). Let_
\[X=\operatorname{Spec}_{Z}(\mathcal{O}_{Z}[x_{1},\ldots,x_{n}])=\operatorname{ Spec}_{Z}(\mathcal{O}_{Z}[\mathbf{x}_{1},\ldots,\mathbf{x}_{r}]),\]
_where_
\[\mathbf{x}_{i}:=(x_{k_{i-1}},\ldots,x_{k_{i}-1}),\]
_for \(k_{0}=1<k_{1}<\ldots<k_{r}=n+1\). Consider the closed subscheme \(Y\) of \(X\) defined by the set of the polynomial functions \(f_{j}\in H^{0}(X,\mathcal{O}_{X})\), where \(j=1,\ldots,r\) of the form_
\[f_{j}=\sum_{i=1}^{r_{j}}c_{\alpha_{ij}}(v)\mathbf{x}_{j}^{\alpha_{ij}},\]
_where \(c_{\alpha_{ij}}(v)\in\mathcal{O}(Z)^{*}\) are invertible._
_Assume that for any \(j=1,\ldots,r\) and for the presentation of \(f_{j}\) one of the following holds:_
* \(\operatorname{char}(\kappa)=0\)_, and for any_ \(\alpha_{ij}\) _except possibly one, there is a variable_ \(x_{j_{i}}\) _such that a power_ \(x_{j_{i}}^{a_{ji}}\) _of_ \(x_{j_{i}}\)_, occurs in_ \(\mathbf{x}^{\alpha_{ij}}\) _and_ \(x_{j_{i}}\) _does not occur in the others_ \(\mathbf{x}^{\alpha_{i^{\prime}j}}\) _for_ \(i^{\prime}\neq i\)_._
* \(\operatorname{char}(\kappa)=p\)_, and for any_ \(\alpha_{ij}\) _except possibly one there is a variable_ \(x_{j_{i}}\) _such that a power_ \(x_{j_{i}}^{a_{ji}}\) _of_ \(x_{j_{i}}\) _occurs in_ \(\mathbf{x}^{\alpha_{ij}}\) _, with_ \(p\nmid a_{j_{i}}\) _and_ \(x_{j_{i}}\) _does not occur in the others_ \(\mathbf{x}^{\alpha_{i^{\prime}j}}\) _for_ \(i^{\prime}\neq i\) _except as some_ \(k\cdot p\)_-th power for_ \(k\in\mathbb{N}\)_._
_Then the cobordant blow-up \(B_{+}\to X\) of_
\[\mathcal{J}=\prod_{j=1}^{r}(\mathbf{x}_{j}^{\alpha_{1j}},\ldots,\mathbf{x}_{j}^{\alpha_{r_{j}j}})\]
_resolves the singularity, so that the strict transform \(\sigma^{s}(f_{1},\ldots,f_{r})\) determines a smooth subvariety of \(B_{+}\)._
Proof.: The space \(X\) can be written as the fiber product
\[X=\prod_{Z}\mathbb{A}_{Z}^{k_{j}-k_{j-1}}=\prod_{Z}\operatorname{Spec}_{Z}(\mathcal{O}_{Z}[\mathbf{x}_{j}]).\]
The cobordant blow-up of \(\mathcal{J}\) is equal to the product over \(Z\) of the cobordant blow-ups \(B_{j+}\) of \((\mathbf{x}_{j}^{\alpha_{1j}},\ldots,\mathbf{x}_{j}^{\alpha_{r_{j}j}})\) on \(\operatorname{Spec}_{Z}(\mathcal{O}_{Z}[\mathbf{x}_{j}])\), and each of \(B_{j+}\) is smooth over \(Z\) by Theorem 4.6.5.
**Example 4.6.10**.: The system of equations
\[x_{1}^{p}+ax_{1}x_{2}^{p}x_{3}+bx_{4}x_{5}^{p}x_{6}^{p^{2}} =0\] \[y_{1}^{p^{3}}+cy_{2}^{p^{2}}y_{3}y_{6}+dy_{1}y_{4}^{p}y_{5}^{p^{2 }}y_{6}^{2} =0\]
in
\[\kappa[x_{1},\ldots,x_{6},y_{1},\ldots,y_{6}],\]
where \(a,b,c,d\in\kappa^{*}\), can be resolved by the single cobordant blow-up of
\[\mathcal{J}=(x_{1}^{p},x_{1}x_{2}^{p}x_{3},x_{4}x_{5}^{p}x_{6}^{p^{2}})\cdot(y_{1}^{p^{3}},y_{2}^{p^{2}}y_{3}y_{6},y_{1}y_{4}^{p}y_{5}^{p^{2}}y_{6}^{2})\]
in characteristic \(p\).
**Example 4.6.11**.: Let \(f_{j}=x_{1j}^{a_{1j}}+\ldots+x_{k_{j}j}^{a_{k_{j}j}}\in\kappa[x_{ij}]\), where \(j=1,\ldots,k\) and the characteristic \(p\) divides at most one \(a_{ij}\) for any \(j\). Then the cobordant blow-up of \(\prod_{j}(x_{1j}^{a_{1j}},\ldots,x_{k_{j}j}^{a_{k_{j}j}})\) resolves the singularity of \(V(f_{1},\ldots,f_{k})\).
### Partial resolution by the order
The method can be linked to different invariants, particularly to the order
\[\operatorname{ord}_{p}(\mathcal{I}):=\max\{k\mid\mathcal{I}_{p}\subset\mathfrak{ m}_{p}^{k}\},\]
where \(\mathfrak{m}_{p}\subset\mathcal{O}_{X,p}\) is the maximal ideal of a point \(p\in X\).
**Definition 4.7.1**.: Let \(\mathcal{I}\) be an ideal on a regular scheme \(X\), and \(d\in\mathbb{N}\) be an integer. We define
\[\operatorname{supp}(\mathcal{I},d):=\{p\in X\mid\operatorname{ord}_{p}( \mathcal{I})\geq d\}.\]
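For example (our illustration): for \(\mathcal{I}=(x^{2}+y^{3})\) on \(X=\mathbb{A}_{\kappa}^{2}\),
\[\operatorname{supp}(\mathcal{I},2)=V(x,y),\qquad\operatorname{supp}(\mathcal{I},3)=\emptyset,\]
since \(x^{2}+y^{3}\in\mathfrak{m}_{p}^{2}\smallsetminus\mathfrak{m}_{p}^{3}\) at the origin, while at every other point \(x^{2}+y^{3}\) is either a unit or has order \(1\).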
The following theorem extends [20, Lemma 5.3.1]:
**Theorem 4.7.2**.: _Let \(\mathcal{I}\) be an ideal on a regular scheme \(X\) over a field, and let \(d\in\mathbb{N}\) be any natural number. Assume that there exists a locally monomial center \(\mathcal{J}\), with \(\operatorname{codim}(V(\mathcal{J}))\geq 2\), with the associated dual valuation complex \(\Delta_{E}^{N}\), and such that_
1. \(\operatorname{supp}(\mathcal{I},d)\subseteq V(\mathcal{J})\subset X,\)__
2. \(\operatorname{supp}(\operatorname{in}_{\omega}(\mathcal{I}),d)\subseteq V( \operatorname{in}_{\omega}^{\circ}(\mathcal{J}))\subset\mathbb{N}_{\omega}(X)\)_, for any_ \(\omega\in\Delta_{E}^{N}\)_._
_(respectively_
_(2') \(\operatorname{supp}(\operatorname{in}_{\omega}^{\circ}(\mathcal{I}),d) \subseteq V(\operatorname{in}_{\omega}^{\circ}(\mathcal{J}))\subset\mathbb{N }_{\omega}(X)\), for any \(\omega\in\Delta_{E}^{N}\).)_
_Then for the cobordant blow-up \(\sigma_{+}:B_{+}\to X\) of \(\mathcal{J}\), the maximal order of the strict transform \(\sigma^{s}(\mathcal{I})\) (respectively the weak transform \(\sigma^{\circ}(\mathcal{I})\)) on \(B_{+}\) is strictly smaller than \(d\)._
Proof.: Let \(q\in D_{+}=D\smallsetminus V_{B}(\sigma^{\circ}(\mathcal{J}))\), where \(D=V(t_{1}^{-1}\cdot\ldots\cdot t_{k}^{-1})\) is the exceptional divisor of \(B\to X\). Then there is \(\omega\in\Delta_{E}^{N}\), and the corresponding stratum \(s\) in \(S_{D}\), such that
\[q\in s\smallsetminus V(\sigma^{\circ}(\mathcal{J})).\]
By Lemmas 4.3.1, 4.3.3, there is a natural isomorphism \(s\to\operatorname{Spec}(\operatorname{gr}_{\omega}(\mathcal{O})[\tilde{ \mathfrak{t}}_{\omega},\tilde{\mathfrak{t}}_{\omega}^{-1}])\), which takes \(\sigma^{s}(\mathcal{I})_{|s}\) to \(\operatorname{in}_{\omega}(\mathcal{I})[\tilde{\mathfrak{t}}_{\omega},\tilde {\mathfrak{t}}_{\omega}^{-1}]\) (and \(\sigma^{\circ}(\mathcal{J})_{|s}\) to \(\operatorname{in}_{\omega}^{\circ}(\mathcal{J})[\tilde{\mathfrak{t}}_{\omega },\tilde{\mathfrak{t}}_{\omega}^{-1}]\)). Consequently
\[\operatorname{ord}_{q}(\sigma^{s}(\mathcal{I}))\leq\operatorname{ord}_{q}( \sigma^{s}(\mathcal{I})_{|s})=\operatorname{ord}_{q}(\operatorname{in}_{\omega }(\mathcal{I})[\tilde{\mathfrak{t}}_{\omega},\tilde{\mathfrak{t}}_{\omega}^{- 1}])<d.\]
If
\[q\in B\smallsetminus D\smallsetminus V(\sigma^{s}(\mathcal{J}))=B_{-}\smallsetminus V (\sigma^{s}(\mathcal{J}))=(X\smallsetminus V(\mathcal{J}))\times T,\]
then since \(\pi_{B}(q)\in X\smallsetminus V(\mathcal{J})\) we conclude that
\[\operatorname{ord}_{q}(\sigma^{s}(\mathcal{I}))=\operatorname{ord}_{\pi_{B}(q )}(\mathcal{I})<d.\]
The proof for \(\sigma^{\circ}(\mathcal{I})\) is the same.
### The Newton method of decreasing order
As a corollary from Theorem 4.7.2, and Lemma 4.5.4 we obtain:
**Theorem 4.8.1**.: _Let \(X=\mathbb{A}_{Z}^{k}=\operatorname{Spec}\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\), where \(Z\) is regular over a field \(\kappa\) of characteristic \(p\). Let \(\mathcal{I}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\) be an ideal, and \(\mathcal{J}=\mathcal{J}_{\mathcal{I}}\) be its associated monomial ideal with the Newton polytope \(P_{\mathcal{J}}=P_{\mathcal{I}}\), and let \(d\in\mathbb{N}\) be any natural number such that_
1. \(\operatorname{codim}(V(\mathcal{J}))\geq 2\)_._
2. \(\operatorname{supp}(\mathcal{I},d)\subseteq V(\mathcal{J})\)_._
3. _for any supporting face_ \(P\) _of_ \(P_{\mathcal{J}}\)_,_ \(\operatorname{supp}(\operatorname{in}_{P}(\mathcal{I}),d)\subset V( \operatorname{inv}_{P}(\mathcal{J}))\)_._
_Then the maximal order of the weak transform \(\sigma^{\circ}(\mathcal{I})\) on \(B_{+}\) under cobordant blow-up of \(\mathcal{J}\) is strictly smaller than \(d\). \(\clubsuit\)_
Thus we get
**Theorem 4.8.2**.: _Let \(X=\mathbb{A}_{Z}^{k}=\operatorname{Spec}\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\), where \(Z\) is regular over a field \(\kappa\) of characteristic \(p\). Let \(\mathcal{I}\subset\mathcal{O}_{Z}[x_{1},\ldots,x_{k}]\) be an ideal, and \(\mathcal{J}=\mathcal{J}_{\mathcal{I}}\) be its associated monomial ideal with \(\operatorname{codim}(V(\mathcal{J}))\geq 2\), let \(P_{\mathcal{J}}=P_{\mathcal{I}}\) be its Newton polytope, and let \(d\in\mathbb{N}\) be any natural number such that for any face \(P\) of \(P_{\mathcal{J}}\),_
\[\operatorname{supp}(\operatorname{in}_{P}(\mathcal{I}),d)\subset D:=V(x_{1}\cdot\ldots\cdot x_{k}).\]
_Then the maximal order of the weak transform \(\sigma^{\circ}(\mathcal{I})\) under cobordant blow-up \(B_{+}\to X\) of \(\mathcal{J}\) is strictly smaller than \(d\). \(\clubsuit\)_
Proof.: The proof uses arguments similar to those in the proof of Corollary 4.6.1. We need to show that the conditions of Theorem 4.8.1 are satisfied.
For condition (3) of Theorem 4.8.1, let \(P^{\prime}\) be any supporting face of \(P_{\mathcal{I}}\). Consider the closure of the stratum \(\overline{s_{\tau}}\), where \(\tau\) is a face of \(\sigma_{0}\).
If \(s_{\tau}\) is not in \(V(\operatorname{in}_{P^{\prime}}^{\circ}(\mathcal{J}))\), consider the face \(P:=P^{\prime}\cap\tau^{*}\). Then
\[\operatorname{supp}(\operatorname{in}_{P^{\prime}}(\mathcal{I})_{|\overline{s_{\tau}}},d)=\operatorname{supp}(\operatorname{in}_{P}(\mathcal{I})_{|\overline{s_{\tau}}},d)\]
is contained in \(\overline{s_{\tau}}\smallsetminus s_{\tau}=\overline{s_{\tau}}\cap V(\prod_{x_{i}\in\tau^{*}}x_{i})\), so it is not in \(s_{\tau}\). This implies that \(\operatorname{supp}(\operatorname{in}_{P^{\prime}}(\mathcal{I}),d)\) is contained in \(V(\operatorname{in}_{P^{\prime}}^{\circ}(\mathcal{J}))\).
The proof of condition (2) of Theorem 4.8.1 is the same, except we replace \(P^{\prime}\) with \(P_{\mathcal{J}}\), \(\operatorname{in}_{P^{\prime}}^{\circ}(\mathcal{J})\) with \(\mathcal{J}=\operatorname{in}_{P_{\mathcal{J}}}^{\circ}(\mathcal{J})\), and \(P\) with \(P=P_{\mathcal{J}}\cap\tau^{*}\).
Thus the theorem is a consequence of Theorem 4.8.1. \(\clubsuit\)
**Remark 4.8.3**.: Theorems 4.8.1, 4.8.2 generalize respectively Theorem 4.5.9, and Corollary 4.6.1. We put \(\mathcal{I}=(f)\), and \(d=2\). Then \(\operatorname{Sing}(V(f))=\operatorname{supp}((f),2)\).
**Example 4.8.4**.: (See also Example 2.6.14)
Let \(Y\subset X=\operatorname{Spec}\kappa[x,y,z]\) be described by the ideal
\[\mathcal{I}=(x^{k}+xy+y^{l},\quad z^{kl}+x^{k-2}z^{kl-1}+y^{k-2}z^{kl-1})\]
of order \(2\), where \(\gcd(k,l)=1\). Consider the corresponding admissible monomial ideal
\[\mathcal{J}=(x^{k},xy,y^{l},z^{kl},x^{k-2}z^{kl-1},y^{k-2}z^{kl-1})\]
and its associated _Newton polytope_\(P\) generated by the exponents
\[(k,0,0),(1,1,0),(0,l,0),(0,0,kl)\subset\sigma^{\vee}=\langle e_{1}^{*},e_{2}^ {*},e_{3}^{*}\rangle.\]
This corresponds to two supporting faces \(P_{1},P_{2}\) defined, respectively, by
\[(k,0,0),(1,1,0),(0,0,kl),\]
and
\[(1,1,0),(0,l,0),(0,0,kl).\]
They intersect along the face \(P_{12}\) spanned by \((1,1,0)\) and \((0,0,kl)\).
The faces \(P_{1},P_{2}\) correspond to the primitive vectors \(v_{1}=(a_{1},a_{2},a_{3})\) such that
\[a_{1}k=a_{1}+a_{2}=a_{3}kl,\]
and \(v_{2}=(b_{1},b_{2},b_{3})\), where
\[b_{1}+b_{2}=lb_{2}=klb_{3}\]
in the dual plane. So
\[v_{1}=(l,l(k-1),1),\quad v_{2}=(k(l-1),k,1).\]
This defines the set of two extremal valuations \(\nu_{1},\nu_{2}\).
Then
\[\operatorname{in}_{P_{1}}(\mathcal{I})=(x^{k}+xy,z^{kl}),\quad\operatorname{in}_{P_{2}}(\mathcal{I})=(xy+y^{l},z^{kl}),\quad\operatorname{in}_{P_{12}}(\mathcal{I})=(xy,z^{kl}).\]
By considering the ideals of the derivatives of the generators of \(\operatorname{in}_{P}(\mathcal{I})\), we see that in all cases
\[\operatorname{supp}(\operatorname{in}_{P}(\mathcal{I}),2)=V(x,y,z).\]
Similarly \(\operatorname{supp}(\mathcal{I},2)=V(x,y,z)\).
The cobordant blow-up of \(\mathcal{J}=(x^{k},xy,y^{l},z^{kl})\) is described as
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t_{1}^{-1},t_{2}^{-1},\ xt_{1}^{l}t_{2}^{k(l-1)},\ yt_{1}^{l(k-1)}t_{2}^{k},\ zt_{1}t_{2}])=\operatorname{Spec}(\kappa[t_{1}^{-1},t_{2}^{-1},\ xt_{1}^{l}t_{2}^{k(l-1)},\ yt_{1}^{l(k-1)}t_{2}^{k},\ zt_{1}t_{2}])\]
\[B_{+}=B\smallsetminus V(\sigma^{s}(\mathcal{J}))=B\smallsetminus V(\mathbf{t}^{\alpha}\mathcal{J}),\]
where \(\sigma^{s}(\mathcal{J})=\mathbf{t}^{\alpha}\mathcal{J}\), and the coefficients are given by the exceptional divisor \(E=\alpha_{1}E_{1}+\alpha_{2}E_{2}\) of the toric normalized blow-up of \(\mathcal{J}\).
\[\alpha_{1}=\nu_{1}(\mathcal{J})=a_{1}k=a_{1}+a_{2}=a_{3}kl=kl,\]
and
\[\alpha_{2}=\nu_{2}(\mathcal{J})=b_{1}+b_{2}=lb_{2}=klb_{3}=kl.\]
Thus
\[B_{+}=B\smallsetminus V((x^{k},xy,y^{l},z^{kl})\cdot t_{1}^{kl}t_{2}^{kl}),\]
By Theorem 4.8.2, the cobordant blow-up \(B_{+}\to X\) of \(\mathcal{J}=(x^{k},xy,y^{l},z^{kl})\) decreases the maximal order of \(\mathcal{I}\) to \(1\).
## 5. Generalized cobordant blow-ups and \(\mathbb{Q}\)-ideals
### Cobordization with respect to subgroups \(\Gamma\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\)
**Definition 5.1.1**.: Let \(\pi:Y\to X\) be a proper birational morphism. Let \(\Gamma\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\) be a finitely generated subgroup. We define the _full cobordization (resp. cobordization of \(\pi\)) with respect to \(\Gamma\)_ to be
\[B=B^{\Gamma}:=\operatorname{Spec}_{X}(\pi_{*}(\bigoplus_{E\in\Gamma}\mathcal{O}_{Y}(E))),\quad B_{+}=B_{+}^{\Gamma}:=\operatorname{Spec}_{Y}(\bigoplus_{E\in\Gamma}\mathcal{O}_{Y}(E)).\]
**Proposition 5.1.2**.: _The natural morphism \(B_{+}^{\Gamma}\to B^{\Gamma}\) is an open immersion if locally on \(X\) there are forms \(F=fx^{-E}\), with \(E\in\Gamma\) such that \(X_{F}\) are open affine and cover \(X\)._
Proof.: The proposition follows from the first part of the proof of Proposition 1.3.4.
**Definition 5.1.3**.: Let \(\pi:Y\to X\) be the normalized blow-up of an ideal \(\mathcal{I}\) on a normal scheme \(X\). Let \(\Gamma\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\) be a finitely generated subgroup. Then we define the _full cobordant blow-up of \(\mathcal{I}\) with respect to \(\Gamma\)_ (resp. the _cobordant blow-up of \(\mathcal{I}\) with respect to \(\Gamma\)_) to be the full cobordization (resp. cobordization) of \(\pi\) with respect to \(\Gamma\).
**Proposition 5.1.4**.: _Let \(\pi:Y\to X\) be the normalized blow-up of an ideal \(\mathcal{J}\) on a normal scheme \(X\), and let \(E^{0}\) be the exceptional Cartier divisor such that \(\mathcal{O}_{Y}(-E^{0})=\mathcal{O}_{Y}\cdot\mathcal{J}\). If a finitely generated group \(\Gamma\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\) contains the divisor \(E^{0}\), then \(B^{\Gamma}_{+}=B^{\Gamma}\smallsetminus V(\mathcal{J}\mathbf{t}^{-E^{0}})\)._
Proof.: The proof is identical to the proof of Lemma 1.4.
### Simple cobordant blow-up of ideal \(\mathcal{I}\)
**Definition 5.2.1**.: Let \(\pi:Y\to X\) be the normalized blow-up of an ideal \(\mathcal{I}\) on a normal scheme \(X\), with the exceptional divisor \(E^{0}\), such that \(\mathcal{O}_{Y}(-E^{0})=\mathcal{O}_{Y}\cdot\mathcal{I}\). By the _simple cobordant blow-up_ of \(\mathcal{I}\) on \(X\) we mean the cobordization \(B^{\Gamma}_{+}\) of \(\pi:Y\to X\) with respect to the subgroup \(\Gamma=\mathbb{Z}\cdot E^{0}\subset\operatorname{Cl}(Y/X)\) generated by \(E^{0}\).
**Lemma 5.2.2**.: _The simple cobordant blow-up of \(\mathcal{I}\) is given by_
\[B=\operatorname{Spec}_{X}((\mathcal{O}_{X}[t^{-1},\mathcal{I}t])^{\operatorname{int}}),\quad B_{+}=B\smallsetminus V(\mathcal{I}t).\]
Proof.: It follows from the definition that \(\mathcal{O}_{Y}(-nE^{0})=\mathcal{O}_{Y}\cdot\mathcal{I}^{n}\). Moreover \(\pi_{*}(\mathcal{O}_{Y}(-nE^{0}))=(\mathcal{I}^{n})^{\operatorname{int}}\) is the integral closure of \(\mathcal{I}^{n}\).
Consequently
\[B=B^{\Gamma}_{\mathcal{I}}=\operatorname{Spec}_{X}(\pi_{*}(\bigoplus_{n\in\mathbb{Z}}\mathcal{O}_{Y}(nE^{0})\mathbf{t}^{nE^{0}}))=\operatorname{Spec}_{X}((\mathcal{O}_{X}[t^{-1},\mathcal{I}t])^{\operatorname{int}}),\]
under the identification of \(\mathbf{t}^{E_{0}}\) with \(t^{-1}\). By Proposition 5.1.4,
\[B_{+}=B\smallsetminus V(\sigma^{s}(\mathcal{I}))=B\smallsetminus V(\mathcal{I}t).\]
and thus \(B\) is described by the standard extended Rees algebra.
### Cobordant blow-ups of \(\mathbb{Q}\)-ideals
#### 5.3.1. _Valuative \(\mathbb{Q}\)-ideals_
The _valuative \(\mathbb{Q}\)-ideals_ were introduced in [10]. Here we consider a particular version of this notion, following [11].
**Definition 5.3.2**.: By _valuative \(\mathbb{Q}\)-ideals_, or, simply, \(\mathbb{Q}\)_-ideals_ on a normal scheme \(X\) we mean the equivalence classes of formal expressions \(\mathcal{I}^{1/n}\), where \(\mathcal{I}\) is the ideal on \(X\), and \(n\in\mathbb{N}\). We say that two \(\mathbb{Q}\)-ideals \(\mathcal{I}^{1/n}\), and \(\mathcal{J}^{1/m}\) are equivalent if the integral closures of \(\mathcal{I}^{m}\), and \(\mathcal{J}^{n}\) are the same.
In particular, if \(D\) is a Cartier effective divisor on \(X\) then any \(\mathbb{Q}\)-Cartier effective divisor \(\frac{1}{m}\cdot D\) determines the \(\mathbb{Q}\)-ideal \(\mathcal{O}_{X}(-D)^{\frac{1}{m}}\).
By the _vanishing locus_ of \(\mathcal{J}=\mathcal{I}^{1/n}\) we mean \(V(\mathcal{J})=V(\mathcal{I})\).
One can define the operation of addition and multiplication on \(\mathbb{Q}\)-ideals:
\[\mathcal{I}^{1/n}+\mathcal{J}^{1/m}:=(\mathcal{I}^{m}+\mathcal{J}^{n})^{1/mn},\quad\mathcal{I}^{1/n}\cdot\mathcal{J}^{1/m}=(\mathcal{I}^{m}\cdot\mathcal{ J}^{n})^{1/mn}.\]
For any valuative \(\mathbb{Q}\)-ideal \(\mathcal{J}=\mathcal{I}^{1/n}\) on \(X\) we define the associated ideal of sections on \(X\):
\[\mathcal{J}_{X}:=\{f\in\mathcal{O}_{X}\mid f^{n}\in\mathcal{I}^{\mathrm{int}}\},\]
where \(\mathcal{I}^{\mathrm{int}}\) is the integral closure of \(\mathcal{I}\). In particular, for the effective Cartier divisor \(D\), we have the equalities
\[(\mathcal{O}_{X}(-D)^{1/m})_{X}=\mathcal{O}_{X}(-\frac{1}{m}D)=\{f\in\mathcal{ O}_{X}\mid\mathrm{div}(f)-\frac{1}{m}D\geq 0\}.\]
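As a one-variable illustration (our example): for the \(\mathbb{Q}\)-ideal \(\mathcal{J}=(x^{2})^{1/3}\) on \(X=\mathbb{A}_{\kappa}^{1}\),
\[(\mathcal{J}^{n})_{X}=\{f\mid f^{3}\in(x^{2n})\}=(x^{\lceil 2n/3\rceil}),\]
so \(\mathcal{J}_{X}=(x)\), while \((\mathcal{J}^{3})_{X}=(x^{2})\).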
With any valuative \(\mathbb{Q}\)-ideal \(\mathcal{J}\) we associate the _Rees algebra_ on \(X\) :
\[\mathcal{O}_{X}[\mathcal{J}t]_{X}=\bigoplus_{n\in\mathbb{Z}_{\geq 0}}(\mathcal{ J}^{n})_{X}t^{n}\subset\mathcal{O}_{X}[t],\]
and the _extended Rees algebra_ on \(X\):
\[\mathcal{O}_{X}[t^{-1},\mathcal{J}t]_{X}=\bigoplus_{n\in\mathbb{Z}_{\geq 0}}(\mathcal{J}^{n})_{X}t^{n}\oplus\bigoplus_{n\in\mathbb{Z}_{<0}}\mathcal{O}_{X}t^{n}\subset\mathcal{O}_{X}[t,t^{-1}].\]
#### 5.3.3. Cobordant blow-up of \(\mathbb{Q}\)-ideals
Let \(\mathcal{J}=\mathcal{I}^{1/m}\) be a \(\mathbb{Q}\)-ideal on \(X\). Consider the normalized blow-up \(\pi:Y\to X\) of \(\mathcal{I}\), with the exceptional divisor \(E^{0}\) such that \(\mathcal{O}_{Y}(-E^{0})=\mathcal{O}_{Y}\cdot\mathcal{I}\).
Then \(\mathcal{O}_{Y}\cdot\mathcal{I}^{1/m}\) is the \(\mathbb{Q}\)-ideal \(\mathcal{O}_{Y}(-E^{0})^{1/m}\), which corresponds to the \(\mathbb{Q}\)-Cartier exceptional divisor \(\frac{1}{m}E^{0}\) on \(Y\).
Consequently, by the _blow-up_ of the \(\mathbb{Q}\)-ideal \(\mathcal{J}=\mathcal{I}^{1/m}\) we mean the normalized blow-up \(\pi:Y\to X\) of \(\mathcal{I}\), with the associated \(\mathbb{Q}\)-Cartier divisor \(\frac{1}{m}E^{0}\).
**Definition 5.3.4**.: By the _simple cobordant blow-up/full cobordant blow-up of the \(\mathbb{Q}\)-ideal \(\mathcal{J}=\mathcal{I}^{1/m}\)_ we mean the cobordization/full cobordization of the normalized blow-up \(Y\to X\) of \(\mathcal{J}\) with respect to the group \(\Gamma=\mathbb{Z}\cdot\frac{1}{m}E^{0}\subset\mathrm{Cl}(Y/X)\otimes\mathbb{Q}\) generated by \(\frac{1}{m}E^{0}\).
**Lemma 5.3.5**.: _Let \(\sigma:B\to X\) be the simple full cobordant blow-up of the \(\mathbb{Q}\)-ideal \(\mathcal{J}=\mathcal{I}^{1/m}\) on a normal scheme \(X\). Then_
\[B=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},\mathcal{J}t]_{X})\]
1. \(B_{+}=B\smallsetminus V(\mathcal{J}t)\)__
2. \(\mathcal{O}_{B_{+}}\cdot\mathcal{J}=t^{-1}\cdot\mathcal{O}_{B_{+}}\)__
Proof.: Let \(\pi:Y\to X\) be the normalized blow-up of \(\mathcal{I}\), and let \(E^{0}\) be the exceptional divisor on \(Y\) such that \(\mathcal{O}_{Y}\cdot\mathcal{I}=\mathcal{O}_{Y}(-E^{0})\). Thus, by [22, Proof of Lemma 2.1.4],
\[\pi_{*}(\mathcal{O}_{Y}(-\tfrac{n}{m}E^{0}))=\{f\in\pi_{*}(\mathcal{O}_{Y})=\mathcal{O}_{X}\mid f^{m}\in\pi_{*}(\mathcal{O}_{Y}(-nE^{0}))=(\mathcal{I}^{n})^{\operatorname{int}}\}=(\mathcal{J}^{n})_{X},\]
giving the formula for \(B\):
\[B=\operatorname{Spec}_{X}(\bigoplus_{n\in\mathbb{Z}}\pi_{*}(\mathcal{O}_{Y}(-\tfrac{n}{m}E^{0}))t^{n})=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},\mathcal{J}t]_{X}).\]
By Proposition 5.1.4,
\[B_{+}=B\smallsetminus V(\mathcal{I}\cdot\mathbf{t}^{-E^{0}})=B\smallsetminus V( \mathcal{J}\mathbf{t}^{-(1/m)E^{0}})=B\smallsetminus V(\mathcal{J}t),\]
as \((1/m)E^{0}\) generates \(\Gamma\), and \(\mathbf{t}^{(1/m)E^{0}}\) corresponds to \(t^{-1}\). Thus the inverse image
\[\mathcal{O}_{B_{+}}\cdot\mathcal{I}^{1/m}=\mathcal{O}_{B_{+}}\cdot\mathcal{J} =t^{-1}\mathcal{O}_{B_{+}}\cdot\mathcal{J}t=\mathcal{O}_{B_{+}}\cdot t^{-1}\]
is a Cartier exceptional divisor. We use here the fact that the \(\mathbb{Q}\)-ideal \(\mathcal{J}t_{|B_{+}}=\mathcal{O}_{B_{+}}\), as \(\mathcal{J}t=(\mathcal{I}t^{m})^{1/m}=(\mathcal{I}\cdot\mathbf{t}^{-E^{0}})^{1/m}\) is trivial on \(B_{+}=B\smallsetminus V(\mathcal{I}\mathbf{t}^{-E^{0}})\).
### Weighted cobordant blow-ups revisited
Let \(\pi_{B}:B\to X\) be the simple cobordant blow-up of the weighted center \(\mathcal{J}=(u_{1}^{1/w_{1}},\ldots,u_{k}^{1/w_{k}})\), where \(u_{1},\ldots,u_{k}\) is a partial system of local parameters on a regular scheme. Assume, first, that the weights \(w_{i}\) are relatively prime. The center \(\mathcal{J}\) can be written as \(\mathcal{J}=\mathcal{I}^{1/m}\), where
\[\mathcal{I}=(u_{1}^{m/w_{1}},\ldots,u_{k}^{m/w_{k}}),\]
is an ordinary ideal, for any \(m\) divisible by all the weights \(w_{i}\).
Let \(E^{0}\) be the exceptional divisor of the blow-up \(\pi:Y\to X\) of \(\mathcal{J}\). Let \(\nu_{E^{0}}\) be the associated exceptional valuation. Using the toric chart, defined by \(u_{1},\ldots,u_{k}\) one reduces the situation to the blow-up of the toric \(\mathbb{Q}\)-ideal \(\mathcal{J}\) on a toric variety \(X_{\sigma}\), where \(\sigma=\langle e_{1},\ldots,e_{k}\rangle\) is regular. The \(\mathbb{Q}\)-ideal \(\mathcal{J}=\mathcal{I}^{1/m}\) defines a piecewise linear convex function
\[F_{\mathcal{J}}=\min(\frac{1}{w_{1}}e_{1}^{*},\ldots,\frac{1}{w_{k}}e_{k}^{*}) =1/m\cdot\min(\frac{m}{w_{1}}e_{1}^{*},\ldots,\frac{m}{w_{k}}e_{k}^{*}).\]
The normalized blow-up of \(\mathcal{J}\) defines a decomposition \(\Delta\) of \(\sigma\) into the maximal subcones where \(F_{\mathcal{J}}\) is linear. Let \(w:=(w_{1},\ldots,w_{k})\), and
\[F_{\mathcal{J}}(e_{i})=0\quad\text{for }i=1,\ldots,k,\qquad F_{\mathcal{J}}(w)=1.\]
Then \(\Delta\) is the star subdivision at \(\langle w\rangle\). Moreover, the vector \(mw\) corresponds to \(mF_{\mathcal{J}}=F_{\mathcal{I}}\) in the sense that they define the same Weil divisors, and \(w\) corresponds to \(E^{0}\), so the valuation \(\nu_{E^{0}}\) on \(Y\) is associated with \(w\). In particular,
\[\nu_{E^{0}}(u_{i})=e_{i}^{*}(w)=w_{i}.\]
Consequently, \(\mathcal{O}_{Y}\cdot(u_{1}^{m/w_{1}},\ldots,u_{k}^{m/w_{k}})=\mathcal{O}_{Y}( -mE^{0})\) is the ideal of the Cartier divisor \(mE^{0}\) on \(Y\), associated with the integral function \(mF_{\mathcal{J}}\), and the \(\mathbb{Q}\)-ideal \(\mathcal{J}=(u_{1}^{1/w_{1}},\ldots,u_{k}^{1/w_{k}})\) corresponds to the \(\mathbb{Q}\)-divisor \((1/m)\cdot mE^{0}=E^{0}\) which is a Weil divisor.
The cobordant blow-up associated with the group \(\Gamma=\mathbb{Z}\cdot E^{0}=\operatorname{Cl}(Y/X)\) is given by the standard formula from Theorem 3.4.3:
\[B=\operatorname{Spec}_{X}(\bigoplus_{n\in\mathbb{Z}}\pi_{*}(\mathcal{O}_{Y}(-nE^{0}))t^{n})=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},\mathcal{J}t]_{X})=\operatorname{Spec}_{X}(\bigoplus_{a\in\mathbb{Z}}\mathcal{I}_{\nu,a}\cdot t^{a})=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},u_{1}t^{w_{1}},\ldots,u_{k}t^{w_{k}}]),\]
where \(w_{i}=\nu_{E^{0}}(u_{i})\).
In general, for arbitrary weights, the simple cobordant blow-up of \((u_{1}^{1/w_{1}},\ldots,u_{k}^{1/w_{k}})\) is associated with the group \(\Gamma=\mathbb{Z}\cdot\frac{1}{w^{0}}E^{0}=\frac{1}{w^{0}}\cdot\operatorname{ Cl}(Y/X)\), where \(w^{0}:=\gcd(w_{1},\ldots,w_{k})\), and with the valuation \(w^{0}\nu\), with
\[w^{0}\nu(u_{i})=w_{i}.\]
Now
\[\mathcal{I}_{w^{0}\nu,a}=(u_{1}^{b_{1}}\cdot\ldots\cdot u_{k}^{b_{k}}\mid\sum_{j=1}^{k}b_{j}w_{j}\geq a).\]
Comparing gradations we see
\[\bigoplus_{a\in\mathbb{Z}}\mathcal{I}_{w^{0}\nu,a}t^{a}=\mathcal{O}_{X}[t^{-1},u_{1}t^{w_{1}},\ldots,u_{k}t^{w_{k}}].\]
Then
\[B=\operatorname{Spec}_{X}(\bigoplus_{n\in\mathbb{Z}}\pi_{*}(\mathcal{O}_{Y}(-n\cdot\tfrac{1}{w^{0}}E^{0}))t^{n})=\operatorname{Spec}_{X}(\bigoplus_{a\in\mathbb{Z}}\mathcal{I}_{w^{0}\nu,a}\cdot t^{a})=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t^{-1},u_{1}t^{w_{1}},\ldots,u_{k}t^{w_{k}}]).\]
By the above, \(B_{+}=B\smallsetminus V(\mathcal{J}t)=B\smallsetminus V(u_{1}t^{w_{1}},\ldots,u_{k}t^{w_{k}})\), and \(\mathcal{O}_{B_{+}}\cdot\mathcal{J}=\mathcal{O}_{B_{+}}\cdot t^{-1}\).
These weighted cobordant blow-ups were studied in [20] and used for the resolution of varieties in characteristic zero and some classes of singularities in positive and mixed characteristic. To a great extent, they are equivalent to the stack-theoretic weighted blow-ups introduced and considered in [10], and [17].
### Multiple weighted cobordant blow-ups of Abramovich-Quek
In the paper [1], the authors consider a generalization of the weighted blow-ups, the so-called _multi-weighted blow-ups_ \(Bl_{\mathcal{J},b}\), associated with a \(\mathbb{Q}\)-ideal \(\mathcal{J}\) and a vector \(b=(b_{1},\ldots,b_{k})\). They are constructed locally in toric charts in the language of fantastacks and stack-theoretic quotients via Satriano's combinatorial approach [14]. The _multi-weighted blow-ups_ are used to prove the logarithmic resolution on smooth toroidal ambient Artin stacks in characteristic zero.
We give here a geometric interpretation of this construction in the language of cobordizations with respect to a subgroup. In particular, this approach does not rely on coordinates or combinatorics.
Let \(\pi:Y\to X\) be the normalized blow-up of a locally monomial center \(\mathcal{J}\) on a regular scheme over a field. Denote by \(E_{1},\dots,E_{k}\) the irreducible exceptional divisors. Let \(\nu_{1},\dots,\nu_{k}\) be the associated exceptional valuations. We consider the full cobordant blow-up of \(\mathcal{J}\) with respect to the subgroup
\[\Gamma_{b}:=\mathbb{Z}\frac{1}{b_{1}}E_{1}\oplus\dots\oplus\mathbb{Z}\frac{1 }{b_{k}}E_{k}\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q},\]
for any positive integers \(b_{1},\dots,b_{k}\), and \(b=(b_{1},\dots,b_{k})\). Write
\[B=\operatorname{Spec}_{X}(\pi_{*}(\bigoplus_{E\in\Gamma_{b}}\mathcal{O}_{X}(E )),\quad B_{+}=\operatorname{Spec}_{Y}(\bigoplus_{E\in\Gamma_{b}}\mathcal{O}_ {Y}(E)).\]
The generators \(\frac{1}{b_{1}}E_{1},\dots,\frac{1}{b_{k}}E_{k}\) are associated with the monomial valuations
\[\nu_{1}^{b}:=b_{1}\nu_{1},\dots,\nu_{k}^{b}:=b_{k}\nu_{k}.\]
Then locally on \(X\), using Proposition 1.2.2 and the proof of Lemma 2.3.5(1), we can write
\[B=\operatorname{Spec}_{X}(\bigoplus_{a_{i}\in\mathbb{Z}}\ \bigcap_{i=1}^{k}\mathcal{I}_{\nu_{i}^{b},a_{i}}\ \cdot t_{1}^{a_{1}}\cdot\ldots\cdot t_{k}^{a_{k}})=\operatorname{Spec}_{X}(\bigcap_{i=1}^{k}\mathcal{O}_{X}[t_{i}^{-1},u_{j}t_{i}^{\nu_{i}^{b}(u_{j})}][\tilde{\mathbf{t}}_{i},\tilde{\mathbf{t}}_{i}^{-1}])=\operatorname{Spec}_{X}(\mathcal{O}_{X}[t_{1}^{-1},\ldots,t_{k}^{-1},u_{1}\mathbf{t}^{\alpha_{1}^{b}},\ldots,u_{k}\mathbf{t}^{\alpha_{k}^{b}}]),\]
where
* \(\tilde{\mathbf{t}}_{i}:=(t_{1},\ldots,\widehat{t_{i}},\ldots,t_{k})\), with the variable \(t_{i}\) omitted,
* \(u_{1},\ldots,u_{k}\), is a system of coordinates on open \(U\subset X\) defining monomial generators for \(\mathcal{J}\), and
* \(\mathbf{t}^{\alpha_{j}^{b}}:=t_{1}^{a_{1j}^{b}}\cdot\ldots\cdot t_{k}^{a_{kj}^{b}},\quad\text{with}\quad a_{ij}^{b}:=\nu_{i}^{b}(u_{j})=b_{i}\nu_{i}(u_{j})\geq 0\).
Note that under this correspondence \(t_{i}^{-1}\mapsto\mathbf{t}^{\frac{1}{b_{i}}E_{i}}\).
Let
\[E^{0}=a_{1}E_{1}+\ldots+a_{k}E_{k} \tag{7}\]
be the exceptional divisor of \(\pi:Y\to X\), for the relevant \(a_{i}\in\mathbb{Z}_{\geq 0}\), such that \(\mathcal{O}_{Y}(-E^{0})=\mathcal{O}_{Y}\cdot\mathcal{J}\). By Proposition 5.1.4,
\[B_{+}=B\smallsetminus V(\mathcal{J}\mathbf{t}^{-E^{0}})=B\smallsetminus V( \mathcal{J}\mathbf{t}^{\alpha^{b}}),\]
where \(\mathbf{t}^{\alpha^{b}}\) corresponds to \(\mathbf{t}^{-E^{0}}\), under \(t_{i}^{-1}\mapsto\mathbf{t}^{\frac{1}{b_{i}}E_{i}}\). Thus by (7), \(\alpha^{b}=(b_{1}a_{1},\ldots,b_{k}a_{k})\), and
\[B_{+}=B\smallsetminus V(\mathcal{J}\mathbf{t}^{-E^{0}})=B\smallsetminus V( \mathcal{J}t_{1}^{b_{1}a_{1}}\cdot\ldots\cdot t_{k}^{b_{k}a_{k}}).\]
In particular, if \(X\) is regular over a field and \(\mathcal{J}\) is a locally monomial ideal on \(X\), then the full cobordant blow-up \(B\) of \(\mathcal{J}\) with respect to \(\Gamma_{b}\) is regular.
#### 5.5.1. Multiple weighted blow-ups associated with \(\mathbb{Q}\)-ideals
Consider the normalized blow-up \(\pi:Y\to X\) of a monomial \(\mathbb{Q}\)-ideal \(\mathcal{J}\), with the associated exceptional divisor \(E^{0}=a_{1}E_{1}+\ldots+a_{k}E_{k}\) with rational, positive coefficients \(a_{i}\), as in Section 5.3.3. We choose \(b=(b_{1},\ldots,b_{k})\) with \(b_{i}\in\mathbb{Z}_{>0}\), such that
\[\Gamma_{b}=\mathbb{Z}\frac{1}{b_{1}}E_{1}\oplus\ldots\oplus\mathbb{Z}\frac{1 }{b_{k}}E_{k}\subset\operatorname{Cl}(Y/X)\otimes\mathbb{Q},\]
is the minimal subgroup of \(\operatorname{Cl}(Y/X)\otimes\mathbb{Q}\) containing \(E^{0}\).
Thus any monomial \(\mathbb{Q}\)-ideal \(\mathcal{J}\) and \(b=(b_{1},\ldots,b_{k})\in\mathbb{Z}_{>0}^{k}\) determine a unique associated cobordant blow-up \(B_{+}\to X\) with respect to the group \(\Gamma_{b}\). This way, taking the stack-theoretic quotient, we obtain the Abramovich-Quek _multiple weighted blow-up_ \([B_{+}\,/\!\!/\,T]\to X\) from [1], which is necessarily regular for a regular \(X\).
# Elliptic Calogero-Moser system, crossed and folded instantons, and bilinear identities
###### Abstract.
Affine analogues of the \(Q,\tilde{Q}\)-functions are constructed using _folded instantons_ partition functions. They are shown to be the solutions of the quantum spectral curve of the \(N\)-body elliptic Calogero-Moser (eCM) system, the quantum Krichever curve. \(Q,\tilde{Q}\) also solve the elliptic analogue of the quantum Wronskian equation. In the companion paper we present the quantum analogue of Krichever's Lax operator for eCM. A connection to crossed instantons on Taub-NUT spaces, and opers on a punctured torus, is pointed out.
## 1. Elliptic Calogero-Moser system
Let
\[\tau=\tau_{1}+\mathrm{i}\tau_{2}\in\mathbb{C}\,,\ \tau_{2}>0\,,\ \mathfrak{q}=e^{2\pi \mathrm{i}\tau}. \tag{1.1}\]
Let
\[E_{\tau}=\mathbb{C}/(\mathbb{Z}+\tau\mathbb{Z})\approx\mathbb{C}^{\times}/\mathfrak{q}^{\mathbb{Z}} \tag{1.2}\]
be the corresponding elliptic curve.
Fix an integer \(N\geq 1\), and \(n,\hbar\in\mathbb{C}\). The quantum elliptic Calogero-Moser system (eCM) can be defined in many ways. At a simplified level it is a collection of commuting differential operators \(\hat{H}_{1},\hat{H}_{2},\dots,\hat{H}_{N}\) in \(N\) variables \(z_{1},\dots,z_{N}\),
\[\begin{split}&\hat{H}_{1}=\sum_{i=1}^{N}\frac{\partial}{\partial z _{i}}\,,\\ &\hat{H}_{2}=-\frac{1}{2}\sum_{i=1}^{N}\frac{\partial^{2}}{ \partial z_{i}^{2}}\,+\,n(n-1)\sum_{i<j}\wp(z_{i}-z_{j}|\tau)\\ &\dots\end{split} \tag{1.3}\]
invariant under the action of the double affine Weyl group
\[(z_{i})_{i=1}^{N}\mapsto\left(z_{\sigma(i)}+a_{i}+\tau b_{i}\right)_{i=1}^{N} \,,\ \sigma\in S(N)\,,\ a_{k},b_{k}\in\mathbb{Z} \tag{1.4}\]
The elliptic Calogero-Moser system admits several interesting degenerations. One such limit, \(\tau\to\mathrm{i}\infty\), produces the trigonometric/hyperbolic CM system.
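Recall (a standard fact, for the lattice \(\mathbb{Z}+\tau\mathbb{Z}\)) that in this limit
\[\wp(z|\tau)\ \longrightarrow\ \frac{\pi^{2}}{\sin^{2}(\pi z)}-\frac{\pi^{2}}{3}\,,\qquad\tau\to\mathrm{i}\infty\,,\]
so the two-body potential in (1.3) becomes the trigonometric \(1/\sin^{2}\) interaction.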
The matrix \(\tilde{L}(u)\) has a first order pole at \(u\in\mathbb{Z}+\tau\mathbb{Z}\) with rank one residue. Therefore, the function \(\mathscr{R}(x,u)\) has, at fixed \(x\), only a first order pole at \(u=0\). At fixed \(u\) it is still a degree \(N\) polynomial in \(x\). However, it is not a double-periodic function of \(u\) at fixed \(x\), rather:
\[\mathscr{R}(x-2\pi\mathrm{i}\nu,u+\tau)=\mathscr{R}(x,u+1)=\mathscr{R}(x,u)\,, \tag{1.13}\]
Thus \(\mathscr{R}=0\) is an equation for a curve \(\mathscr{C}\) in the total space \(S_{\nu}\) of an affine bundle over \(T^{*}E_{\tau}\), again with the fiber over \(u=0\) removed. Note that \(S_{\nu}\) has a rich set of holomorphic functions, e.g. any Laurent polynomial in
\[U_{1}=\exp\,\left(\frac{x}{\nu}\right)\,,\ U_{2}=\exp\,\left(2\pi\mathrm{i}u +\tau\frac{x}{\nu}\right) \tag{1.14}\]
Thus, for \(\nu\neq 0\), outside the fiber over \(u=0\) the surface \(S_{\nu}\) is isomorphic to the algebraic torus \(\mathbb{C}^{\times}\times\mathbb{C}^{\times}\). The curve \(\mathscr{C}\), however, is not algebraic, only analytic, in the \((U_{1},U_{2})\) coordinate system.
## 2. Quantum Krichever curve
In a very naive way the equation \(\mathscr{R}(x,u)=0\) can be quantized, for example, by making \(x,u\) the generators \(\hat{x},\hat{u}\) of a non-commutative algebra, obeying
\[[\hat{u},\hat{x}]=\hbar\cdot 1 \tag{2.1}\]
where \(\hbar\in\mathbb{C}\) is a parameter. It is not obvious how to order \(\hat{x}\) and \(\hat{u}\) in a complicated function such as \(\mathscr{R}(x,u)\), or, say, \(R(x,u)\). However, \(\mathscr{R}(x,u)\) is much better suited for quantization than \(R(x,u)\) for the following reason. As we pointed out earlier, the operator \(\mathbf{\tilde{L}}(u)\) has a first order pole at \(u=0\) (modulo the lattice \(\mathbb{Z}+\mathbb{Z}\tau\)) with the rank one residue. Thus,
\[\mathscr{R}(x,u)=\sum_{i=0}^{N}x^{N-i}f_{i}(u) \tag{2.2}\]
with the coefficients \(f_{i}(u)\) having only a first order pole at \(u\in\mathbb{Z}+\mathbb{Z}\tau\). Of course, the shifted periodicity (1.13) means that \(f_{i}(u)\)'s are not double-periodic meromorphic functions of \(u\) (with the exception of \(f_{0}(u)\equiv 1\)). Now, let us multiply \(\mathscr{R}(x,u)\) by \(\theta(u|\tau)\), and define the Fourier coefficients \(\tilde{\mathfrak{R}}_{l}(x)\) by:
\[\mathfrak{R}(x,u)=\theta(u|\tau)\mathscr{R}(x,u)=\sum_{l\in\mathbb{Z}}(-1)^{l }e^{2\pi\mathrm{i}lu}\mathfrak{q}^{\frac{l(l-1)}{2}}\tilde{\mathfrak{R}}_{l}(x) \tag{2.3}\]
It is clear from the definition and (1.13) that \(\mathfrak{R}(x,u)\) is an entire function of \(x,u\), degree \(N\) polynomial in \(x\) at fixed \(u\), obeying
\[\mathfrak{R}(x,u+1)=-e^{2\pi\mathrm{i}u}\,\mathfrak{R}(x-2\pi\mathrm{i}\nu,u+ \tau)=\mathfrak{R}(x,u). \tag{2.4}\]
Thus \(\tilde{\mathfrak{R}}_{l}(x)\) are all degree \(N\) (monic) polynomials in \(x\), obeying
\[\tilde{\mathfrak{R}}_{l}(x-2\pi\mathrm{i}\nu)=\tilde{\mathfrak{R}}_{l+1}(x) \tag{2.5}\]
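Indeed, (2.5) follows by substituting (2.3) into (2.4): using \(e^{2\pi\mathrm{i}l\tau}=\mathfrak{q}^{l}\) and shifting the summation index \(l\mapsto l-1\),
\[-e^{2\pi\mathrm{i}u}\,\mathfrak{R}(x-2\pi\mathrm{i}\nu,u+\tau)=\sum_{l\in\mathbb{Z}}(-1)^{l+1}e^{2\pi\mathrm{i}(l+1)u}\mathfrak{q}^{\frac{l(l+1)}{2}}\tilde{\mathfrak{R}}_{l}(x-2\pi\mathrm{i}\nu)=\sum_{l\in\mathbb{Z}}(-1)^{l}e^{2\pi\mathrm{i}lu}\mathfrak{q}^{\frac{l(l-1)}{2}}\tilde{\mathfrak{R}}_{l-1}(x-2\pi\mathrm{i}\nu)\,,\]
and comparing the Fourier coefficients with those of \(\mathfrak{R}(x,u)\) gives \(\tilde{\mathfrak{R}}_{l-1}(x-2\pi\mathrm{i}\nu)=\tilde{\mathfrak{R}}_{l}(x)\), which is (2.5).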
In other words
\[\tilde{\mathfrak{R}}_{l}(x)=\mathscr{Y}(\mathsf{w}-\nu l) \tag{2.6}\]
where \(x=2\pi\mathrm{i}\mathsf{w}\), and \(\mathscr{Y}=\tilde{\mathfrak{R}}_{0}\) is a degree \(N\) monic polynomial. Now we quantize (2.3) by replacing \(u\) in the exponent by
\[\hat{u}=\hbar\frac{\partial}{\partial x}=\frac{\hbar}{2\pi\mathrm{i}}\frac{ \partial}{\partial\mathsf{w}} \tag{2.7}\]
We can place \(e^{2\pi\mathrm{i}l\hat{u}}\) to the right of \(\tilde{\mathfrak{R}}_{l}(x)\), or we can place it to the left, amounting to a renormalization \(\nu\to\nu\pm\hbar\). We can also imagine a dual quantization, where \(e^{2\pi\mathrm{i}l\hat{u}}\) is to the right of \(\tilde{\mathfrak{R}}_{l}(x)\) but acts on the left. We shall see both versions realized below.
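Concretely, by (2.7) the exponentials act as finite shift operators (on, say, entire functions of \(\mathsf{w}\)):
\[e^{2\pi\mathrm{i}l\hat{u}}\,f(\mathsf{w})=e^{l\hbar\,\partial_{\mathsf{w}}}f(\mathsf{w})=f(\mathsf{w}+l\hbar)\,,\]
which is the origin of the difference operators in (2.8) below.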
The quantum version of the spectral curve can, therefore, be represented as an infinite order difference operator \(\hat{\mathfrak{R}}\) with polynomial coefficients, acting on functions of one variable \(\mathsf{w}\), or its dual. The corresponding solutions \(\boldsymbol{\Psi}\), \(\boldsymbol{\Psi}^{\vee}\) are the functions of \(\mathsf{w}\), obeying:
\[\hat{\mathfrak{R}}\boldsymbol{\Psi}(\mathsf{w})\,=\,\sum_{l\in \mathbb{Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l-1)}{2}}\mathscr{Y}(\mathsf{w}-\nu l )\boldsymbol{\Psi}(\mathsf{w}+\hbar l)=0\] \[\hat{\mathfrak{R}}^{*}\boldsymbol{\Psi}^{\vee}(\mathsf{w})\,=\, \sum_{l\in\mathbb{Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l-1)}{2}}\mathscr{Y}( \mathsf{w}-\nu l-\hbar l)\boldsymbol{\Psi}^{\vee}(\mathsf{w}-\hbar l)=0 \tag{2.8}\]
The quantum version of the spectral curve can, equivalently, be represented in \(u\)-variable, with
\[\widehat{\mathsf{w}}=\frac{\hbar}{2\pi\mathrm{i}}\frac{\partial}{\partial u} \tag{2.9}\]
as the \(N\)'th order meromorphic differential operator on \(E_{\tau}\), with a regular singularity at \(u=0\), a \(PGL_{N}\)-oper. The corresponding Fourier transforms
\[\begin{split}\chi_{\alpha}(u)&=\sum_{\mathsf{w}\in \alpha+\hbar\mathbb{Z}}e^{\frac{2\pi\mathrm{i}\mathsf{w}}{\hbar}u}\, \boldsymbol{\Psi}(\mathsf{w})\\ \chi_{\alpha}^{\vee}(u)&=\sum_{\mathsf{w}\in\alpha +\hbar\mathbb{Z}}e^{-\frac{2\pi\mathrm{i}\mathsf{w}}{\hbar}u}\,\boldsymbol{ \Psi}^{\vee}(\mathsf{w})\end{split} \tag{2.10}\]
where \(\alpha\) is chosen so that the series on the right hand side of (2.10) converge, are the local solutions of the oper. They should play a role in the quantum separation of variables for eCM and its spectral dual \(Y(\widehat{\mathfrak{gl}}(1))\)-based spin chain, analogously to the genus zero (\(\mathfrak{sl}_{r+1}\) spin chain) case studied in [23]. Also, the construction of [12] seems to generalize to the genus one case, giving rise to the \(L^{2}\), or \(z,\bar{z}\)-version of the elliptic Calogero-Moser system, where the wave-function of the \(N\)-body system is an \(L^{2}\)-function on \(E_{\tau}^{N}/S(N)\), which is a common eigenvector of the Hamiltonians of the conventional, meromorphic, eCM and its complex conjugate. The parameters \(\alpha\) and the coefficients of the polynomial \(\mathscr{Y}\) would have to be fixed by a requirement of
the \(PGL_{N}(\mathbb{R})\)-holonomy of the oper. The \(\mathcal{N}=2^{*}\) counterpart of this construction seems to be placing the theory on an extremely elongated four-dimensional ellipsoid with a surface defect filling up a two-dimensional equatorial section thereof.
We shall return to this picture elsewhere.
## 3. \(T-Q\)-equation
The functional Bethe ansatz [41] (see [11, 14, 44, 42] for reviews of the foundational work) approach to \(\mathfrak{sl}_{2}\)-spin chains or the quantum periodic \(N\)-body Toda chain uses the following \(T-Q\)-equation [3, 40]:
\[P_{+}(\mathsf{w}-2\hbar)\mathbf{Q}(\mathsf{w}+\hbar)+\mathfrak{q}P_{-}( \mathsf{w}+2\hbar)\mathbf{Q}(\mathsf{w}-\hbar)=(1+\mathfrak{q})\mathbf{T}( \mathsf{w})\mathbf{Q}(\mathsf{w}) \tag{3.1}\]
with a degree \(N\) monic polynomial \(\mathbf{T}(\mathsf{w})\) and an entire function \(\mathbf{Q}(\mathsf{w})\). Eq. (3.1) describes the eigenvalues \(\mathbf{T}(\mathsf{w})\) of the transfer matrix \(\hat{T}(\mathsf{w})\). There is a _spectral dual_ Lax operator \(\mathbf{L}(z)\) of a particular case of the \(\mathfrak{sl}_{N}\) Gaudin-Garnier model, whose eigenvalues belong to the \(\mathsf{w}\) line. The equation (3.1) can be derived from the Yangian version of the \(q\)-characters of \(\mathfrak{sl}_{2}\) [13], or from the \(\varepsilon_{2}\to 0\) limit of the \(qq\)-characters of the \(A_{1}\)-type \(\mathcal{N}=2\) superconformal gauge theory (\(=N_{f}=2N_{c}\) super-QCD) [32]; it can also be viewed as a quantum version of the Seiberg-Witten curve of the \(A_{1}\) type gauge theory (for the topological recursion approach to quantum curves of that type see [7], and for the extension to the local CY geometries [9]). In this paper we define (and solve) an \(\widehat{\mathfrak{gl}(1)}\)-analogue of (3.1).
## 4. Partitions, characters, and contents
In what follows \(\Lambda\) denotes the set of all partitions, i.e. non-increasing sequences
\[\Lambda=\left\{\,\lambda\,|\lambda=\left(\lambda_{1}\geq\lambda_{2}\ldots\geq \lambda_{\ell(\lambda)}>0=\lambda_{\ell(\lambda)+1}\right)\right\} \tag{4.1}\]
The size and length maps
\[|\cdot|\,,\ell\,:\,\Lambda\longrightarrow\mathbb{Z}_{\geq 0}\,,\qquad| \lambda|\equiv\lambda_{1}+\ldots+\lambda_{\ell(\lambda)}\, \tag{4.2}\]
and the "transposition" map: \(\lambda\mapsto\lambda^{t}\), \(\ell(\lambda^{t})=\lambda_{1}\), \(|\lambda^{t}|=|\lambda|\), best described by viewing partitions as finite subsets \(\lambda\subset\mathbb{Z}_{>0}\times\mathbb{Z}_{>0}\):
\[\lambda=\left\{\square=(i,j)\,|\,i,j\geq 1\,,\ 1\leq j\leq\lambda_{i}\,,\ 1\leq i\leq\lambda_{j}^{t}\,\right\} \tag{4.3}\]
Define the _contents_ of \(\square\):
\[\xi_{\square}=\hbar\,(i-j+n(j-1))\,,\quad\upsilon_{\square}=\hbar\,(n(j-i)+1-j)\,,\quad\zeta_{\square}=\hbar\,(i-1-n(j-1)) \tag{4.4}\]
Define the _boundary sets_ \(\Gamma_{\lambda}^{+}\), \(\Gamma_{\lambda}^{-}\) by:
\[\sum_{(i,j)\in\Gamma_{\lambda}^{+}}\langle i-1,j-1\rangle-\sum_{(i,j)\in\Gamma_{ \lambda}^{-}}\langle i,j\rangle=\sum_{i=1}^{\ell(\lambda)+1}\langle i-1,\lambda _{i}\rangle-\sum_{i=1}^{\ell(\lambda)}\langle i,\lambda_{i}\rangle \tag{4.5}\]
where \(\langle i,j\rangle\) is any linear function \(\langle i,j\rangle=(i-1)+f(j-1)\), for \(f\in\mathbb{C}\backslash\mathbb{Q}\).
Finally, define the character
\[\chi_{ab}(\lambda)=\sum_{\square\in\lambda}q_{a}^{i-1}q_{b}^{j-1} \tag{4.6}\]
for \(a,b\in\{1,2,3,4\}\).
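To make these combinatorial definitions concrete, here is a small Python sketch (partitions as non-increasing tuples; the numerical values of \(\hbar\), \(n\), and the test partitions are arbitrary placeholders). It implements the transposition map, the contents (4.4), the character (4.6), and reads off \(\Gamma_{\lambda}^{\pm}\) as the addable/removable boxes of the diagram, checking the identity (4.5) numerically for a generic irrational \(f\):

```python
import math

def transpose(lam):
    """Transposed partition: the column lengths of the Young diagram."""
    width = lam[0] if lam else 0
    return tuple(sum(1 for li in lam if li >= j) for j in range(1, width + 1))

def boxes(lam):
    """All boxes (i, j) of the diagram, 1-based, as in (4.3)."""
    return [(i, j) for i, li in enumerate(lam, 1) for j in range(1, li + 1)]

def gamma_plus(lam):
    """Addable boxes of the diagram, read off as Gamma_lambda^+."""
    ext = list(lam) + [0]
    return [(i + 1, ext[i] + 1) for i in range(len(ext)) if i == 0 or ext[i] < ext[i - 1]]

def gamma_minus(lam):
    """Removable boxes of the diagram, read off as Gamma_lambda^-."""
    ext = list(lam) + [0]
    return [(i + 1, ext[i]) for i in range(len(ext) - 1) if ext[i] > ext[i + 1]]

def bracket(i, j, f):
    """The linear function <i, j> = (i - 1) + f (j - 1)."""
    return (i - 1) + f * (j - 1)

def check_45(lam, f=math.sqrt(2)):
    """Numerical check of the boundary-set identity (4.5) for a generic f."""
    lhs = sum(bracket(i - 1, j - 1, f) for i, j in gamma_plus(lam)) \
        - sum(bracket(i, j, f) for i, j in gamma_minus(lam))
    ext = list(lam) + [0]
    rhs = sum(bracket(i, ext[i], f) for i in range(len(ext))) \
        - sum(bracket(i + 1, lam[i], f) for i in range(len(lam)))
    return abs(lhs - rhs) < 1e-9

hbar_, n_ = 0.7, 3.0    # placeholder numerical values
def contents(i, j):
    """The three contents of a box, per (4.4)."""
    return (hbar_ * (i - j + n_ * (j - 1)),
            hbar_ * (n_ * (j - i) + 1 - j),
            hbar_ * (i - 1 - n_ * (j - 1)))

def chi(lam, qa, qb):
    """The character (4.6) for one pair (q_a, q_b)."""
    return sum(qa ** (i - 1) * qb ** (j - 1) for i, j in boxes(lam))

lam = (4, 2, 2, 1)
assert transpose(transpose(lam)) == lam and sum(transpose(lam)) == sum(lam)
assert len(transpose(lam)) == lam[0]
assert check_45(lam) and check_45((5, 5, 3, 1, 1))
```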
## 5. \(Q\)-functions, \(\mathsf{X},\mathsf{Y},\mathsf{Z}\)-observables, and \(\mathcal{X},\mathcal{Y},\mathcal{Z}\)-characters
Supersymmetric gauge theories produce [32] (see Appendix **A** for some details) entire functions \(Q(\mathsf{w})\), which, in the limit \(\varepsilon_{2}\to 0,\varepsilon_{1}=\hbar\) of [31] solve _Bethe equations_
\[\frac{Q(\mathsf{w}+\hbar)Q(\mathsf{w}-\hbar n)Q(\mathsf{w}+\hbar(n-1))}{Q( \mathsf{w}-\hbar)Q(\mathsf{w}+\hbar n)Q(\mathsf{w}+\hbar(1-n))}=-\mathfrak{q }\,,\ \forall\mathsf{w}\in Q^{-1}(0). \tag{5.1}\]
Define
\[\mathsf{X}(\mathsf{w})=\frac{Q(\mathsf{w})}{Q(\mathsf{w}+\hbar n)}\,,\ \mathsf{Y}(\mathsf{w})=\frac{Q(\mathsf{w})}{Q(\mathsf{w}-\hbar)}\,,\ \mathsf{Z}(\mathsf{w})=\frac{Q(\mathsf{w})}{Q(\mathsf{w}+\hbar(1-n))}\,. \tag{5.2}\]
Then \(U(N)\)\(\mathcal{N}=2^{*}\) theory with adjoint hypermultiplet of mass \(\hbar n\) on \(\Omega\)-deformed \(\mathbf{R}_{\hbar}^{2}\times\mathbb{R}^{1,1}\) is characterized by the property that
\[\mathsf{Y}(\mathsf{w})\sim\mathsf{w}^{N}+\ldots\ \text{at}\ \mathsf{w}\to\infty \tag{5.3}\]
(it is called the \(Y\)-observable in [32]). Define the following triplet of observables built out of \(\mathsf{X},\mathsf{Y},\mathsf{Z}\), respectively:
\[\mathcal{X}(\mathsf{w}) = \sum_{\lambda}\mathfrak{q}^{|\lambda|}\,\frac{\prod_{\square\in \Gamma_{\lambda}^{+}}\mathsf{X}(\mathsf{w}-\hbar n+\xi_{\square})}{\prod_{ \square\in\Gamma_{\lambda}^{-}}\mathsf{X}(\mathsf{w}+\xi_{\square})}\,,\] \[\mathcal{Y}(\mathsf{w}) = \sum_{\lambda}\mathfrak{q}^{|\lambda|}\,\frac{\prod_{\square\in \Gamma_{\lambda}^{+}}\mathsf{Y}(\mathsf{w}+\hbar+\upsilon_{\square})}{\prod_{ \square\in\Gamma_{\lambda}^{-}}\mathsf{Y}(\mathsf{w}+\upsilon_{\square})}\,,\] \[\mathcal{Z}(\mathsf{w}) = \sum_{\lambda}\mathfrak{q}^{|\lambda|}\,\frac{\prod_{\square\in \Gamma_{\lambda}^{+}}\mathsf{Z}(\mathsf{w}+\hbar(n-1)+\zeta_{\square})}{\prod_{ \square\in\Gamma_{\lambda}^{-}}\mathsf{Z}(\mathsf{w}+\zeta_{\square})} \tag{5.4}\]
Then \(\mathcal{Y}(\mathsf{w})\) is a degree \(N\) polynomial in \(\mathsf{w}\), as (5.1) guarantees the absence of poles, and the asymptotics (5.3) fixes the rest (it is a particular, \(\hat{A}_{0}\), case of the main theorem of [32]). In fact, \(\mathcal{Y}(\mathsf{w})\) is the \(\varepsilon_{2}\to 0\) limit of the fundamental \(qq\)-character of the \(\hat{A}_{0}\)-theory, introduced in [32]. It can be related to the \(q\)-character of [13] for the \(\widehat{\mathfrak{gl}}(1)\)-algebra. Our main _new_ result is the following set of three bilinear equations.
## 6. Bilinear equations
\[\begin{split}\mathcal{Y}\star\mathcal{X}(\mathsf{w})\equiv\sum_{l \in\mathbb{Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l+1)}{2}}\,\mathcal{Y}(\mathsf{w}- \hbar n(l+1))\mathcal{X}(\mathsf{w}-\hbar l)&=0\\ \mathcal{Y}\star\mathcal{Z}(\mathsf{w})\equiv\sum_{l\in\mathbb{ Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l+1)}{2}}\,\mathcal{Y}(\mathsf{w}+\hbar(n-1)(l+1)) \mathcal{Z}(\mathsf{w}-\hbar l)&=0\\ \mathcal{X}\star\mathcal{Z}(\mathsf{w})\equiv\sum_{l\in\mathbb{ Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l+1)}{2}}\,\mathcal{X}(\mathsf{w}+\hbar(n-1)(l+1)) \mathcal{Z}(\mathsf{w}+\hbar nl)&=0\end{split} \tag{6.1}\]
A quick comparison with (2.8) reveals that \(\mathcal{X}\) and \(\mathcal{Z}\) are the dual solutions of the quantum spectral curve (1.12). Having these solutions is a step towards the quantum separation of variables [43] for the elliptic CM system.
## 7. Discussion: gauge theory and integrable system
The gauge theory approach to the elliptic Calogero-Moser system was started in [18]. It was shown there that the phase space and the Hamiltonians of the many-body system can be obtained by Hamiltonian reduction from an infinite-dimensional phase space, which is the space of two-dimensional gauge fields and adjoint-valued one-forms (mathematically called Higgs fields, although they were introduced and studied by N. Hitchin in [21], the construction of [18] being a degenerate version [30]). In this realization the classical and quantum evolution of the many-body system is identified with that of some \(2+1\) dimensional gauge theory. The proposal [10] identified the family of spectral curves found in [18] (which thanks to [28] are the spectral curves of the elliptic Calogero-Moser system) with the family of Seiberg-Witten curves of the \(\mathcal{N}=2^{*}\) theory with gauge group \(SU(N)\) (an earlier paper [17] pointed out a connection between the Seiberg-Witten proposal for the geometry of vacua of \(\mathcal{N}=2\) theories and integrable systems). The analysis [29] allows one to extract the low-instanton-number non-perturbative corrections to the low-energy effective action from the careful expansion of the periods of a degenerating family of curves. The bottom-up approach [31], starting with supersymmetric gauge theory and computing the effects of the non-perturbative dynamics on the geometry of its moduli space of vacua, produced a two-parameter generalization of the Seiberg-Witten geometry. It was pointed out that these generalizations might lead to the quantization of the integrable system, as well as to other deformations. In [33, 34] important steps were made in showing that indeed, for the \(\mathcal{N}=2^{*}\) theory (among others), the gauge theory computation produces the spectral curve of the elliptic Calogero-Moser system.
Our present achievement is to find, within gauge theory, the observables whose expectation values play the role of the _single-particle wave-functions_ for the quantum version of the spectral curve. We build them using the technique of _folded instantons_, introduced in [32]. Again, we stress that
our results are derived, not conjectured. In this way we made independent checks of many string duality based claims e.g. [2, 1, 15, 24, 6]. The proper embedding of folded instanton observables in gauge theory remains an interesting open problem (see [35] for string theory considerations).
Likewise, the meaning of the sums over \(l\) in (6.1) begs for a gauge theory explanation. Our conjecture is that just like the \(\mathcal{Y}\)-observable can be interpreted as a partition function of an auxiliary four dimensional theory living on a copy of \(\mathbf{R}^{4}\) transverse to the physical spacetime \(\mathbf{R}^{4}\) in some ambient eight-dimensional special holonomy geometry, the sum in \(\mathfrak{R}(x,u)\) is the partition function of a similar \(U(1)\) theory placed in the Taub-NUT background. The sum over \(l\in\mathbb{Z}\) is the sum over the fluxes of the abelian gauge field, which is supported by the \(L^{2}\)-normalizable self-dual harmonic two-form. One needs to find a Taub-NUT \(\times\mathbb{R}^{4}\) analogue of the moduli space of crossed instantons. The ADHM construction for instantons on Taub-NUT spaces has been beautifully generalized in [8].
Of course, the relation between (2.8) and the quantum spectral curve of the elliptic Calogero-Moser system is not very direct. We simply match the families of curves, and their non-commutative versions. However we can claim the match of the coordinate systems, the coordinates \(z_{1},\ldots,z_{N}\) of the particles having gauge-theoretic significance. We know from [32] that the regular surface defect of the \(\mathcal{N}=2^{*}\) theory obeys the eCM Schrödinger equation, with \(z_{i}-z_{i+1}\) being the complex Kähler moduli of the surface defect. As we said in the beginning, we are discussing a simpler version of eCM, defined in complex-analytic terms, so there are separate "in" and "out" wave-functions, related by \(\hbar\mapsto-\hbar\) transformations. A short form of (6.1) reads
\[0=\left(\sum_{l\in\mathbb{Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l+1)}{ 2}}\,\mathcal{Y}(\mathsf{w}-\hbar n(l+1))e^{-\hbar l\partial_{\mathsf{w}}} \right)\Psi\,,\\ 0=\left(\sum_{l\in\mathbb{Z}}(-1)^{l}\mathfrak{q}^{\frac{l(l+1 )}{2}}\,e^{\hbar l\partial_{\mathsf{w}}}\mathcal{Y}(\mathsf{w}-\hbar n(l+1)) \right)\Psi^{\vee}\,, \tag{7.1}\]
where \(\Psi(\mathsf{w})=\mathcal{X}(\mathsf{w})\,,\ \Psi^{\vee}(\mathsf{w})=\mathcal{Z}(\mathsf{w}+\hbar(1-n))\). The quantum separation of variables would express the regular surface defect as an integral transform of a product of the single-particle wave-functions, which should be the \(\Psi\sim\mathcal{X}\) or \(\Psi^{\vee}\sim\mathcal{Z}\)-observables.
The solutions \(\mathcal{X}\) and \(\mathcal{Z}\) are the \(\hat{A}_{0}\) analogues of the two independent (over the quasiconstants, i.e. \(\hbar\)-periodic functions) solutions of the \(T-Q\) equations. The third bilinear equation \(\mathcal{X}\star\mathcal{Z}=0\) is the analogue of the very important _quantum Wronskian relation_ (see the Appendix **B**).
In the forthcoming publication [20] we shall establish another detail in the gauge theory-integrable system package dictionary: the Lax operators as gauge theory observables. Our main equations (6.1) will get a matrix-vector dress explaining the origin of so many beautiful _zero-curvature representations_ in the theory of integrable systems. The manipulation (2.3)
leading from \(\mathscr{R}\) to \(\mathfrak{R}\) will also get a matrix version, generalizing that of [19, 45]. The story of the \(\mathcal{N}=2^{*}\) (\(\hat{A}_{0}\)) theory we discussed generalizes to the \(\hat{A}_{r}\)-type quiver theories, with (6.1) mapping, upon a Fourier transform, to \(PGL_{N}\)-opers on the elliptic curve \(E_{\tau}\) with \(r+1\) regular singularities with minimal monodromies around the poles (another limit, \(\varepsilon_{1}+\varepsilon_{2}\to 0\), of this theory was studied recently in [5]).
The quantum separation of variables for eCM is still around the corner...
Our equations should also play a role in the solution of the quantum intermediate long-wave equation, an \(N\to\infty\) counterpart of the eCM system, related to the six dimensional gauge theories and sigma models on instanton moduli spaces [38], as well as to the integrable structure of two dimensional conformal field theory [4], and quantum hydrodynamics [37, 27].
## 8. Acknowledgments
We enormously benefited from discussions and collaboration with I. Krichever in 1995-2022. We also thank S. Cherkis and S. Grushevsky for discussions. Research was partly supported by NSF PHY Award 2310279. Any opinions expressed are solely our own and do not represent the views of the National Science Foundation.
## 9. Appendix A. Derivation of the main bilinear equations
In this section we use the notations of [32]. Let \(\varepsilon_{1},\varepsilon_{2},\varepsilon_{3},\varepsilon_{4}\in\mathbb{C}\) sum to zero,
\[\varepsilon_{1}+\varepsilon_{2}+\varepsilon_{3}+\varepsilon_{4}=0 \tag{9.1}\]
and let \(q_{a}=e^{\varepsilon_{a}}\), \(P_{a}=1-q_{a}\), \(P_{ab}=P_{a}P_{b}\), \(a,b=1,\ldots,4\). The \(\mathcal{Y}\star\mathcal{X}\) of (6.1) is the \(\varepsilon_{2}\to 0\) limit of the sum over \(l\) of
\[(-1)^{l}\mathfrak{q}^{\frac{l(l+1)}{2}}\times Z_{\varepsilon_{1},\varepsilon_{2}=0,\varepsilon_{3},\varepsilon_{4}}(N_{ab})\,, \tag{9.2}\]
the partition function of the gauge origami theory with
\[\varepsilon_{1}=\hbar\,,\ \varepsilon_{3}=-\hbar n\,,\ \varepsilon_{4}=\hbar(n-1) \tag{9.3}\]
and the Chan-Paton characters
\[N_{12}=\sum_{\alpha=1}^{N}e^{a_{\alpha}}\,,\ N_{34}=e^{\mathsf{w}+\varepsilon _{3}(l+1)}\,,\ N_{14}=e^{\mathsf{w}-\varepsilon_{1}l}\,, \tag{9.4}\]
the \(\mathcal{Y}\star\mathcal{Z}\) corresponds to the origami with Chan-Paton characters
\[N_{12}=\sum_{\alpha=1}^{N}e^{a_{\alpha}}\,,\ N_{34}=e^{\mathsf{w}+\varepsilon _{4}(l+1)}\,,\ N_{13}=e^{\mathsf{w}-\varepsilon_{1}l}\,, \tag{9.5}\]
finally \(\mathcal{X}\star\mathcal{Z}\) corresponds to the origami with
\[N_{12}=\sum_{\alpha=1}^{N}e^{a_{\alpha}}\,,\ N_{14}=e^{\mathsf{w}+\varepsilon _{4}(l+1)}\,,\ N_{13}=e^{\mathsf{w}-\varepsilon_{3}l}\,, \tag{9.6}\]
We prove the \(\mathcal{Y}\star\mathcal{X}=0\) equation from (6.1) by constructing an involution
\[\Upsilon:\Lambda\times\Lambda\times\mathbb{Z}\longrightarrow\Lambda\times \Lambda\times\mathbb{Z}\, \tag{9.7}\]
sending \((\lambda,\mu,l)\) to \((\tilde{\lambda},\tilde{\mu},\tilde{l})\), such that:
\[\begin{split} 1)&|l-\tilde{l}|=1\\ 2)& P_{1}S_{34}(l,\lambda)+P_{3}S_{14}(l,\mu)=P_{1}S _{34}(\tilde{l},\tilde{\lambda})+P_{3}S_{14}(\tilde{l},\tilde{\mu})\end{split} \tag{9.8}\]
the latter equation implying
\[\frac{l(l+1)}{2}+|\lambda|+|\mu|=\frac{\tilde{l}(\tilde{l}+1)}{2}+|\tilde{ \lambda}|+|\tilde{\mu}| \tag{9.9}\]
where
\[S_{34}(l,\lambda)=e^{\mathsf{w}}q_{3}^{l+1}\left(1-P_{34}\chi_{3 4}(\lambda)\right)\,,\\ S_{14}(l,\mu)=e^{\mathsf{w}}q_{1}^{-l}\left(1-P_{14}\chi_{14}( \mu)\right)\,, \tag{9.10}\]
The involution acts by:
\[\Upsilon(\lambda,\mu,l)=\begin{cases}\left(\tilde{\lambda},\tilde{\mu},l-1\right)&,\qquad\tilde{\lambda}_{1}=\mu_{1}+l\,,\;\tilde{\lambda}_{i+1}=\lambda_{i}\,,\;\tilde{\mu}_{i}=\mu_{i+1}\,,\;i\geq 1\,,\\ &\qquad\text{when}\;\;\mu_{1}+l\geq\lambda_{1}\,,\\ \left(\hat{\lambda},\hat{\mu},l+1\right)&,\qquad\hat{\mu}_{1}=\lambda_{1}-l-1\,,\;\hat{\lambda}_{i}=\lambda_{i+1}\,,\;\hat{\mu}_{i+1}=\mu_{i}\,,\;i\geq 1\,,\\ &\qquad\text{when}\;\;\mu_{1}+l<\lambda_{1}\,.\end{cases} \tag{9.11}\]
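Being purely combinatorial, (9.11) lends itself to a mechanical sanity check. The Python sketch below (partitions as non-increasing tuples, with zero-length rows stripped) verifies on random triples that \(\Upsilon\) squares to the identity, that \(|l-\tilde{l}|=1\), and that the quantity in (9.9) is conserved:

```python
import random

def upsilon(lam, mu, l):
    """The involution (9.11) on triples (lambda, mu, l)."""
    lam, mu = list(lam), list(mu)
    lam1 = lam[0] if lam else 0
    mu1 = mu[0] if mu else 0
    if mu1 + l >= lam1:
        new_lam, new_mu, new_l = [mu1 + l] + lam, mu[1:], l - 1
    else:
        new_lam, new_mu, new_l = lam[1:], [lam1 - l - 1] + mu, l + 1
    strip = lambda p: tuple(x for x in p if x > 0)   # drop zero-length rows
    return strip(new_lam), strip(new_mu), new_l

def conserved(lam, mu, l):
    """The quantity l(l+1)/2 + |lambda| + |mu| of (9.9)."""
    return l * (l + 1) // 2 + sum(lam) + sum(mu)

def random_partition(max_rows=5, max_part=6):
    rows = random.randint(0, max_rows)
    return tuple(sorted((random.randint(1, max_part) for _ in range(rows)), reverse=True))

for _ in range(10000):
    lam, mu, l = random_partition(), random_partition(), random.randint(-6, 6)
    image = upsilon(lam, mu, l)
    assert abs(image[2] - l) == 1                      # property 1) of (9.8)
    assert conserved(*image) == conserved(lam, mu, l)  # the consequence (9.9)
    assert upsilon(*image) == (lam, mu, l)             # Upsilon is an involution
print("all checks passed")
```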
Let us check (9.8) in the case \(\mu_{1}+l\geq\lambda_{1}\), i.e. \(\tilde{l}=l-1\). In that case, the second equation in (9.8) reads
\[\frac{1-q_{4}^{l}}{1-q_{4}}+q_{3}\chi_{34}(\lambda)+q_{4}^{l}\chi_{14}(\mu)= \chi_{34}(\tilde{\lambda})+q_{4}^{l}q_{1}\chi_{14}(\tilde{\mu}) \tag{9.12}\]
Let us assume \(l\geq 1\) (the case \(l\leq 0\) is analogous) and expand the left hand side of (9.12) in \(q_{1},q_{4}\):
\[\sum_{i=1}^{\ell(\lambda)}q_{1}^{-i}\sum_{j=1}^{\lambda_{i}}q_{4}^{j-i-1}+\sum _{j=1}^{\mu_{1}+l}q_{4}^{j-1}+\sum_{i=2}^{\ell(\mu)}q_{1}^{i-1}\sum_{j=l+1}^{ \mu_{i}+l}q_{4}^{j-1} \tag{9.13}\]
while the right hand side of (9.12) expands, similarly, as
\[\sum_{i=1}^{\ell(\tilde{\mu})}q_{1}^{i}\sum_{j=l+1}^{\tilde{\mu}_{i}+l}q_{4}^{ j-1}+\sum_{i=1}^{\ell(\tilde{\lambda})}q_{1}^{1-i}\sum_{j=1}^{\tilde{\lambda}_{i }}q_{4}^{j-i} \tag{9.14}\]
Comparing the \(q_{1}^{<0}\), \(q_{1}^{0}\), and \(q_{1}^{>0}\) terms, respectively, we get the precise match, given (9.11).
## 10. Appendix \(B\). Limit to periodic Toda and relation to earlier work
The Toda limit, i.e.
\[n\to\infty\,,\mathfrak{q}\to 0\,, \tag{10.1}\]
while keeping finite
\[\Lambda^{2N}=\mathfrak{q}(-1)^{N}\left(\hbar n\right)^{2N} \tag{10.2}\]
is a bit delicate, as far as the \(\mathcal{X},\mathcal{Z},\mathsf{X},\mathsf{Z}\) functions are concerned. The \(Q\)-function has a smooth limit, with the Bethe equations (5.1) going over to
\[\frac{Q(\mathsf{w}+\hbar)}{Q(\mathsf{w}-\hbar)}=-\Lambda^{2N}\,,\ \forall\mathsf{w}\in Q^{-1}(0). \tag{10.3}\]
More precisely, this is the limit of \(Q(\mathsf{w})\) at fixed \(\mathsf{w}\). On the other hand, \(Q\) with shifted arguments has somewhat singular limits, which we need to account for:
\[Q(\mathsf{w}-\hbar nl)\sim\ (nl)^{\frac{N}{2}}\,e^{Nnl}\ (-\hbar nl)^{\frac{N \mathsf{w}}{\hbar}-Nnl}\times\left(1+O(n^{-1})\right) \tag{10.4}\]
Specifically, we shall need (10.4) for \(l=-1,1\) and a limit of the product
\[\mathfrak{q}^{\frac{\mathsf{w}}{\hbar}+p}Q(\mathsf{w}+\hbar(p-n))Q(\mathsf{w }+\hbar(n-1+p))\sim\frac{(-1)^{N\left(n+\frac{1}{2}\right)}}{\hbar^{N}} \Lambda^{2N\left(\frac{\mathsf{w}}{\hbar}+p\right)} \tag{10.5}\]
The series for \(\mathcal{X}(\mathsf{w}),\mathcal{Z}(\mathsf{w})\), in our limit, receive contributions only from partitions \(\lambda=(1^{p})\), for \(p=0,1,\ldots\), giving:
\[\mathcal{X}(\mathsf{w}) \sim\tilde{Q}(\mathsf{w}-\hbar)\,,\] \[\mathcal{Z}(\mathsf{w}) \sim\tilde{Q}(\mathsf{w}-\hbar)\,, \tag{10.6}\]
where \(\sim\) means up to a factor \(e^{c_{1}(n)\mathsf{w}+c_{0}(n)}\),
\[\tilde{Q}(\mathsf{w})=\Lambda^{2N\frac{\mathsf{w}}{\hbar}}\,Q(\mathsf{w})\sum _{p=0}^{\infty}\frac{\Lambda^{2Np}}{Q(\mathsf{w}+\hbar p)Q(\mathsf{w}+\hbar(p+ 1))}. \tag{10.7}\]
The \(\hbar n\)-shifted \(\mathcal{X}(\mathsf{w}),\mathcal{Z}(\mathsf{w})\) functions have the \(Q\)-function as their principal asymptotics:
\[\mathcal{X}(\mathsf{w}+\hbar n)\sim Q(\mathsf{w})\,,\ \mathcal{Z}(\mathsf{w}+ \hbar(1-n))\sim Q(\mathsf{w}) \tag{10.8}\]
Finally, the series for \(\mathcal{Y}(\mathsf{w})\) retains only two terms in the limit (10.1):
\[\mathcal{Y}(\mathsf{w})=\mathsf{Y}(\mathsf{w}+\hbar)+\frac{\Lambda^{2N}}{ \mathsf{Y}(\mathsf{w})} \tag{10.9}\]
Either by recalling the relation between \(Q(\mathsf{w})\) and \(\mathsf{Y}(\mathsf{w})\), or by taking the limit of (6.1) we obtain the _T-Q equation_ (cf. [23, 39, 32]), obeyed both by \(Q(\mathsf{w})\) and \(\tilde{Q}(\mathsf{w})\):
\[Q(\mathsf{w}+\hbar)+\Lambda^{2N}Q(\mathsf{w}-\hbar) =\mathcal{Y}(\mathsf{w})Q(\mathsf{w})\] \[\tilde{Q}(\mathsf{w}+\hbar)+\Lambda^{2N}\tilde{Q}(\mathsf{w}- \hbar) =\mathcal{Y}(\mathsf{w})\tilde{Q}(\mathsf{w}) \tag{10.10}\]
in complete agreement with (10.3). Note that (10.10) can also be obtained by a limit of (3.1).
The Toda limit (10.1) of (6.1) gives (10.10) and the Wronskian relation (cf. [26])
\[\tilde{Q}(\mathsf{w})Q(\mathsf{w}+\hbar)-Q(\mathsf{w})\tilde{Q}(\mathsf{w}+ \hbar)=\Lambda^{\frac{2N\mathsf{w}}{\hbar}} \tag{10.11}\]
Both the \(Q\) and \(\tilde{Q}\) functions prominently feature (in disguise) in the separation of variables for the quantum Toda system (Ref. [25] used the determinant representation of Gaudin-Pasquier [16], which can be related to our \(Q\), \(\tilde{Q}\)'s).
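The Wronskian relation (10.11) can be checked in a model-independent way: for any two solutions of the recursion (10.10), the combination \(\tilde{Q}(\mathsf{w})Q(\mathsf{w}+\hbar)-Q(\mathsf{w})\tilde{Q}(\mathsf{w}+\hbar)\) is multiplied by \(\Lambda^{2N}\) under \(\mathsf{w}\to\mathsf{w}+\hbar\), hence agrees with \(\Lambda^{\frac{2N\mathsf{w}}{\hbar}}\) up to an \(\hbar\)-periodic factor. A toy numerical sketch (the polynomial \(\mathcal{Y}\), the values of \(\Lambda,\hbar\), and the initial data are placeholders):

```python
import numpy as np

# Any two solutions of f(w+h) + Lam^{2N} f(w-h) = Y(w) f(w), as in (10.10),
# have a "Wronskian" W(w) = f1(w) f2(w+h) - f2(w) f1(w+h) that obeys
# W(w+h) = Lam^{2N} W(w), consistent with (10.11) up to an h-periodic factor.
hbar, N, Lam = 1.0, 2, 0.7
Y = lambda w: w**N + 0.3 * w - 1.1          # a toy degree-N polynomial

def solve(f0, f1, steps=12):
    """Iterate f(w+h) = Y(w) f(w) - Lam^{2N} f(w-h) on the grid w = k*h."""
    f = [f0, f1]                             # values at w = 0 and w = h
    for k in range(1, steps):
        f.append(Y(k * hbar) * f[k] - Lam ** (2 * N) * f[k - 1])
    return f

f1, f2 = solve(1.0, 0.3), solve(0.2, 1.5)
W = [a * d - c * b for a, b, c, d in zip(f1, f1[1:], f2, f2[1:])]
print([W[k + 1] / W[k] for k in range(len(W) - 1)])
# every ratio equals Lam^{2N} = 0.7**4 = 0.2401 up to rounding
```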
We leave the verification to the reader as an exercise in application of (10.6), (10.5), (10.4).
|
2303.04449 | Loss-Curvature Matching for Dataset Selection and Condensation | Training neural networks on a large dataset requires substantial
computational costs. Dataset reduction selects or synthesizes data instances
based on the large dataset, while minimizing the degradation in generalization
performance from the full dataset. Existing methods utilize the neural network
during the dataset reduction procedure, so the model parameter becomes an
important factor in preserving the performance after reduction. Given this
dependence upon the parameters, this paper introduces a new reduction
objective, coined LCMat, which Matches the Loss Curvatures of the original
dataset and reduced dataset over the model parameter space, rather than at a
single parameter point. This new objective induces a better adaptation of the
reduced dataset on the perturbed parameter region than exact point matching.
Particularly, we identify the worst case of the loss curvature gap over the
local parameter region, and we derive an implementable upper bound of this
worst case with theoretical analyses. Our experiments on both coreset selection
and condensation benchmarks illustrate that LCMat shows better generalization
performances than existing baselines. | Seungjae Shin, Heesun Bae, Donghyeok Shin, Weonyoung Joo, Il-Chul Moon | 2023-03-08T08:59:04Z | http://arxiv.org/abs/2303.04449v1 | # Loss-Curvature Matching for Dataset Selection and Condensation
###### Abstract
Training neural networks on a large dataset requires substantial computational costs. Dataset reduction selects or synthesizes data instances based on the large dataset, while minimizing the degradation in generalization performance from the full dataset. Existing methods utilize the neural network during the dataset reduction procedure, so the model parameter becomes important factor in preserving the performance after reduction. By depending upon the importance of parameters, this paper introduces a new reduction objective, coined LCMat, which Matches the Loss Curvatures of the original dataset and reduced dataset over the model parameter space, more than the parameter point. This new objective induces a better adaptation of the reduced dataset on the perturbed parameter region than the exact point matching. Particularly, we identify the worst case of the loss curvature gap from the local parameter region, and we derive the implementable upper bound of such worst-case with theoretical analyses. Our experiments on both coreset selection and condensation benchmarks illustrate that LCMat shows better generalization performances than existing baselines.
## 1 Introduction
Although we live in the world of big data, utilizing such big data induces a considerable amount of time and space complexity in the learning process (Mirzasoleiman et al., 2020; Kim et al., 2022; Patterson et al., 2021). Accordingly, researchers introduced concepts such as _dataset selection_ and _dataset condensation_ (Killamsetty et al., 2021; Paul et al., 2021). These concepts state that a dataset with smaller cardinality may yield similar performance in machine learning compared to a big dataset, if the smaller dataset delivers the same task-relevant information as the original dataset. Dataset reduction provides tangible benefits because the reduced dataset will consume less time in training and less space in memory (Pooladzandi et al., 2022). Moreover, such benefits are the desiderata of some well-known tasks, e.g. continual learning with memory replay (Lopez-Paz and Ranzato, 2017; Borsos et al., 2020).
As we reviewed, there exist two approaches to reducing the cardinality of a dataset: the selection-based method (a.k.a. dataset selection) and the condensation-based method (a.k.a. dataset condensation). While these are similar concepts in terms of reducing data cardinality without performance degradation, the two approaches have been treated and studied in separate lines of work. Hence, this paper will refer to these approaches by the unifying term _dataset reduction_. 1) The selection-based method selects a small set of data instances out of the full dataset, with the expectation that the small and the full datasets carry identical task-relevant information (Agarwal et al., 2020; Sener and Savarese, 2018; Welling, 2009). In contrast, 2) the condensation-based method synthesizes data instances by directly passing the learning gradient to the data input (Zhao and Bilen, 2021; Nguyen et al., 2021).
To identify the examples which contribute the most to learning, both lines of work mainly utilize the gradient matching between the original dataset and reduced dataset (Mirzasoleiman et al., 2020; Killamsetty et al., 2021; Zhao et al., 2020), which provides theoretical analyses unlike other methods (Coleman et al., 2019; Zhao and Bilen, 2021). However, gradient matching is conducted at a specific model parameter, so this implementation would fundamentally be biased by the model parameter at hand. Therefore, the generalization over the perturbed parameter point could be potentially beneficial. From the perspective of generalization over the model parameter region, the gradient matching can be generally extended to the local curvature matching in the response surface. Recently, Sharpness-Aware Minimization (SAM) (Foret et al., 2020) has made breakthroughs which ensure the generalization of the model by regularizing the flat minima over the local parameter region, not the point estimate of the parameter. This opens a new possibility of applying the spirit of SAM to the dataset reduction field.
This paper introduces a new objective for dataset reduction,
coined **L**oss-**C**urvature **M**atching (LCMat), which matches the loss curvature of the original dataset and the resulting reduced dataset on the target parameter region. This matching can also be interpreted as the sharpness of the loss difference between the two datasets. This notion makes LCMat the first work on sharpness-aware dataset reduction. This merge of dataset reduction and sharpness-aware minimization yields two contributions. First, SAM only provides optimization with respect to the model parameter, whereas the optimization of dataset reduction is conducted with respect to the input data variable. To enable input-based optimization of the defined sharpness, this paper derives an implementable upper bound of the sharpness, which becomes the objective of LCMat. Second, we adaptively transform the objective into either a selection or a condensation objective, so LCMat becomes a generally applicable mechanism for dataset reduction, covering dataset selection as well as dataset condensation. We conduct experiments over evaluation scenarios with different benchmark datasets, and we confirm that LCMat shows clear merit when the reduction ratio becomes significant and when the evaluation scenario becomes dynamic and complex, e.g. continual learning.
## 2 Preliminary
### Notations
This paper focuses on dataset reduction for classification tasks, which is a widely studied scenario in the community of dataset reduction (Mirzasoleiman et al., 2020; Welling, 2009; Zhao et al., 2020). Assuming a classification into \(c\) classes, let \(\mathcal{X}\subseteq\mathbb{R}^{d}\) and \(\mathcal{Y}=\{1,2,...,c\}\) be the input space and the label set, respectively. Given \(\mathcal{X}\) and \(\mathcal{Y}\), our training dataset is \(T=\{(x_{i},y_{i})\}_{i=1}^{n}\subseteq\mathcal{X}\times\mathcal{Y}\). We assume that each training instance \((x,y)\) is drawn i.i.d. from the population distribution \(\mathbb{D}\).
Let a classifier \(f_{\theta}:\mathbb{R}^{d}\rightarrow\mathbb{R}^{c}\) be parameterized by \(\theta\in\Theta\). Under this definition, the training loss on \(T\) and the population loss on \(\mathbb{D}\) are denoted as \(\mathcal{L}(T;\theta){=}\frac{1}{n}\sum_{i=1}^{n}\ell(x_{i},y_{i};\theta)\) and \(\mathcal{L}(\mathbb{D};\theta)=\mathbb{E}_{(x,y)\sim\mathbb{D}}[\ell(x,y;\theta)]\), respectively. Here, \(\ell\) denotes the loss value for a pair \((x,y)\)1.
Footnote 1: This paper utilizes cross-entropy as a loss function.
### Previous Researches on Dataset Reduction
This paper focuses on _dataset reduction_, whose purpose is to generate a cardinality-reduced dataset \(S\) from the training dataset \(T\), such that \(|S|\ll|T|\), while maximally preserving the task-relevant information of \(T\).
**Selection-based Methods** Selection-based methods (Welling, 2009; Sener and Savarese, 2018) find a data subset \(S\subset T\) that satisfies the cardinality constraint while maximizing an objective defined by the informativeness of \(S\). The approximated objectives are defined by utilizing either 1) the gradient (Paul et al., 2021; Mirzasoleiman et al., 2020; Killamsetty et al., 2021), 2) the loss (Toneva et al., 2018), 3) uncertainty (Coleman et al., 2019), or 4) the decision boundary (Ducoffe and Precioso, 2018; Margatina et al., 2021). This section surveys existing methods with emphasis on gradient-based objectives because our method is primarily relevant to them. Gradient-based methods minimize the distance between the gradients from the training dataset \(T\) and the (weighted) gradients from \(S\) as follows:
\[\min_{\mathbf{w},S}\mathcal{D}\Big{(}\sum_{(x,y)\in T} \frac{\nabla_{\theta}\ell(x,y;\theta)}{|T|},\sum_{(x,y)\in S}\frac{w_{x} \nabla_{\theta}\ell(x,y;\theta)}{\|\mathbf{w}\|_{1}}\Big{)} \tag{1}\] \[\text{s.t.}\quad S\subset T,\ w_{x}\geq 0\]
Here, \(\mathbf{w}\) is the vector of learnable weights for the data instances in subset \(S\); \(\|\mathbf{w}\|_{1}\) is the \(l_{1}\) norm of \(\mathbf{w}\); and \(\mathcal{D}\) measures the distance between the two gradients.
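For illustration, a minimal PyTorch sketch of the matching distance in Eq (1), with uniform weights \(\mathbf{w}\) and the Euclidean distance as \(\mathcal{D}\); the model, data, and candidate subset here are toy placeholders rather than any method's actual implementation:

```python
import torch
import torch.nn.functional as F

def avg_grad(model, X, y):
    """Average loss gradient over a dataset, flattened into one vector."""
    loss = F.cross_entropy(model(X), y)
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.reshape(-1) for g in grads])

def grad_match_dist(model, X_T, y_T, idx_S):
    """Euclidean distance of Eq (1) with uniform weights on the subset S."""
    g_T = avg_grad(model, X_T, y_T)
    g_S = avg_grad(model, X_T[idx_S], y_T[idx_S])
    return torch.linalg.norm(g_T - g_S)

# toy usage with placeholder data and model
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
X, y = torch.randn(256, 32), torch.randint(0, 10, (256,))
idx = torch.randperm(256)[:25]           # a 10% candidate subset
print(grad_match_dist(model, X, y, idx).item())
```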
To solve the selection problem, Mirzasoleiman et al. (2020) converts Eq (1) into a submodular maximization problem and utilizes a greedy approach to optimize it. Compared to Mirzasoleiman et al. (2020), Killamsetty et al. (2021) utilizes the orthogonal matching pursuit algorithm (Elenberg et al., 2018) and an \(L_{2}\) regularization term over \(\mathbf{w}\) to stabilize the optimization. Pooladzandi et al. (2022) replaces \(\nabla_{\theta}\ell(x,y;\theta)\) in Eq (1) with a gradient preconditioned by the Hessian matrix, which leverages second-order information for optimization. Having said that, the optimization of Eq (1) is highly dependent on the given \(\theta\), so the gradient matching could potentially be biased by the single snapshot \(\theta\), because the small-sized \(S\) is vulnerable to selection bias when summarizing \(T\).
**Condensation-based Methods** Instead of selecting \(S\) from \(T\), a small dataset \(S\) can be directly synthesized to achieve similar performance to \(T\) (Wang et al., 2018). Then, \(S\) becomes a learnable variable updated via \(S\gets S-\gamma\nabla_{S}\mathcal{L}(T,S)\), where \(\mathcal{L}(T,S)\) is a general loss function that depends on both \(T\) and \(S\). Zhao et al. (2020) proposed Dataset Condensation (DC), which matches the gradients between \(T\) and \(S\) over the optimization path of \(S\) as follows:
\[\min_{S}\mathbb{E}_{\theta^{0}\sim P_{\theta^{0}}}\Big{[}\sum_{k} \mathcal{D}(\nabla_{\theta^{k}_{S}}\mathcal{L}(T;\theta^{k}_{S}),\nabla_{ \theta^{k}_{S}}\mathcal{L}(S;\theta^{k}_{S}))\Big{]} \tag{2}\]
Here, \(\theta^{0}\) is the initialized parameter from \(P_{\theta^{0}}\); and \(\theta^{k}_{S}\) is the parameter updated with \(k\) iterations on SGD with \(S\). The optimization of Eq (2) can be highly-dependent on the learning trajectory of \(\theta\) from \(S\). Other condensation methods2 utilize either 1) feature vectors (Zhao and Bilen,
2021b; Wang et al., 2022) or 2) kernel products to propagate the task-relevant information of \(T\) into \(S\)(Nguyen et al., 2021). However, these methods do not provide theoretical analyses of the relation between \(T\) and \(S\).
### Generalization on Parameter Space
Apart from dataset reduction, a new research area has emerged by considering generalization over the parameter space and its optimization (Sun et al., 2021; Wu et al., 2020; He et al., 2019). Several studies have focused on the problem of \(\theta\) over-fitting to \(T\) (Izmailov et al., 2018; Foret et al., 2020; Kim et al., 2022b), and they confirmed that optimization over the perturbed parameter region has a strong correlation with the generalization performance of the model. Sharpness-Aware Minimization (SAM) (Foret et al., 2020) is an optimizer for the model parameter, which regularizes the locality region of \(\theta\) to be a flat minimum of the loss curvature as follows:
\[\min_{\theta}\max_{||\epsilon||_{2}\leq\rho}\mathcal{L}(T;\theta+\epsilon) \tag{3}\]
Here, \(\epsilon\) is the perturbation vector on the parameter, and \(\rho\) denotes the maximum size of the perturbation vector. As the objective is a function of both the input and the model parameter, it is in principle possible to improve the generalization of the model parameter through the optimization of the input data. However, to the best of our knowledge, there is no existing study that improves generalization over the perturbed parameter space by optimizing the input data variable. It should be noted that adversarial training (Zhang et al., 2019) is different from our method because its worst-case perturbation is conducted on the input space, not on the parameter space.
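For reference, a minimal sketch of one SAM update implementing (3) via the first-order approximation \(\epsilon^{*}\approx\rho\,\nabla_{\theta}\mathcal{L}/\|\nabla_{\theta}\mathcal{L}\|_{2}\) of Foret et al. (2020); the hyperparameter values are placeholders:

```python
import torch

def sam_step(model, loss_fn, X, y, opt, rho=0.05):
    """One SAM update per (3): ascend to a worst-case eps, descend from theta+eps."""
    opt.zero_grad()
    loss_fn(model(X), y).backward()               # gradient at theta
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum((g ** 2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                             # move to theta + eps
    opt.zero_grad()
    loss_fn(model(X), y).backward()               # gradient at theta + eps
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                             # restore theta, keep new gradient
    opt.step()
```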
## 3 Method
As described in Section 2.2, recent methods in dataset reduction propagate the task-relevant information from \(T\) to \(S\) by aligning the gradients at a specific \(\theta\). Given that dataset reduction hinges upon the utilization of \(\theta\), the performance depends on the trained \(\theta\) at the moment of reduction. Therefore, the optimal dataset reduction \(S^{*}\) would be different from \(S\), which is biased by \(\theta\) at the specific state of \(f_{\theta}\). Hence, our research question becomes how to design a parameter-robust algorithm for dataset reduction while the algorithm still uses \(\theta\), as required by implementation practice.
### Parameter Generalization in Dataset Reduction
A loss function \(\mathcal{L}\) quantifies the fitness of \(\theta\) under a certain dataset. Accordingly, the optimization of \(S\) toward \(T\) with respect to \(\theta\) would decrease \(|\mathcal{L}(T;\theta)-\mathcal{L}(S;\theta)|\), which is the loss difference between \(T\) and \(S\) on \(\theta\). However, if \(|\mathcal{L}(T;\theta+\epsilon)-\mathcal{L}(S;\theta+\epsilon)|\) increases with a small perturbation \(\epsilon\) on \(\theta\), then this increment indicates a lack of generalization on \(\theta+\epsilon\), or an over-fitted reduction of \(S\) by \(\theta\). This generalization failure on the locality of \(\theta\) subsequently results in a large difference of loss surfaces between \(T\) and \(S\), as illustrated in Figure 1(a). Figure 1(a) shows that the difference of loss surfaces between \(T\) and \(S\) can be measured by the sharpness of the loss differences, shown in pink, on the target parameter region.
_Remark 3.1_.: Assuming the strict convexity of \(\mathcal{L}\) over \(\Theta\), if \(|\mathcal{L}(T;\theta)-\mathcal{L}(S;\theta)|=c\) for some fixed constant \(c\geq 0\) and any \(\theta\in\Theta\), then \(\text{argmin}_{\theta}\mathcal{L}(T;\theta)=\text{argmin}_{\theta}\mathcal{L}(S;\theta)\).
Remark 3.1 explains that the optimal \(\theta\) for \(T\) and \(S\) are the same if the loss difference is constant over the parameter space, which is the state when the loss curvatures of \(T\) and \(S\) are the same. If this condition is satisfied, we could safely utilize \(S\) for learning \(\theta\) where the generalization performance of \(\theta\) from \(S\) is guaranteed to be the same as that of \(T\). This motivates us to match the loss curvatures between \(T\) and \(S\), whose objective is introduced in the next section.
### Loss-Curvature Matching (LCMat)
Figure 1: (a) The sharpness on loss differences represents the degree of difference on loss surfaces. (b) (left) The data subset, \(S\), which is selected by Craig (Mirzasoleiman et al., 2020), does not match the loss curvature of the training dataset. (right) On the other hand, LCMat-S successfully matches the loss curvatures of \(T\) and \(S\). We visualize the loss landscape according to the implementation of Li et al. (2018).

This section introduces a parameter-robust objective for dataset reduction, coined Loss-Curvature Matching (LCMat), which matches the loss curvature of \(T\) and \(S\) based on a currently presented \(\theta\). The target region of the objective is specified by the \(\rho\)-ball perturbed region of \(\theta\). In Eq (3), SAM optimizes the worst-case sharpness over the target region of \(\theta\), where the worst-case optimization becomes efficient when the optimization is requested over a specific region (Sagawa et al., 2019; Foret et al., 2020). Following the worst-case optimization scheme, we formulate the primary objective as follows:
\[\min_{S}\max_{||\epsilon||_{2}\leq\rho}\frac{\mathcal{L}_{abs}(T,S;\theta+\epsilon)-\mathcal{L}_{abs}(T,S;\theta)}{\rho} \tag{4}\]
Here, we denote the loss difference between \(T\) and \(S\) on \(\theta\) as \(\mathcal{L}_{abs}(T,S;\theta)=|\mathcal{L}(T;\theta)-\mathcal{L}(S;\theta)|\). In Eq (4), \(S\) is optimized to minimize the sharpness of \(\mathcal{L}_{abs}(T,S;\theta)\) over the \(\rho\)-ball perturbed region from \(\theta\). The optimization of Eq (4) incurs the maximization of \(\mathcal{L}_{abs}(T,S;\theta)\), which could result in an overly under-fitted state of \(S\) on \(\theta\). In our implementation, \(\mathcal{L}_{abs}(T,S;\theta)\) is bounded or regularized during the optimization. See Appendix B.1 for detailed analyses. Also, Eq (4) is defined for the case of a single \(\theta\) for simplicity, and it could be generalized to any \(\theta\in\Theta\).
The next question is how to optimize \(S\) by Eq (4). As our learning target is \(S\), not \(\theta\), it is intractable to utilize SAM directly, because SAM only provides the gradient with respect to \(\theta\) for the corresponding sharpness. We introduce Proposition 3.2, which provides a tractable and differentiable upper bound of Eq (4) as follows:
**Proposition 3.2**.: _When \(\mathbb{H}_{D}=\nabla_{\theta}^{2}\mathcal{L}(D;\theta)\) is a Hessian matrix of \(\mathcal{L}(D;\theta)\), let \(\mathbb{H}_{T,S}=\mathbb{H}_{T}-\mathbb{H}_{S}=\nabla_{\theta}^{2}\mathcal{L} (T;\theta)-\nabla_{\theta}^{2}\mathcal{L}(S;\theta)\) and \(\lambda_{1}^{T,S}\) be the maximum eigenvalue of the matrix \(\mathbb{H}_{T,S}\), then we have: (Proof in Appendix A.1)_
\[\max_{||\epsilon||_{2}\leq\rho}\frac{\mathcal{L}_{abs}(T,S;\theta+\epsilon)-\mathcal{L}_{abs}(T,S;\theta)}{\rho} \tag{5}\] \[\qquad\qquad\leq\underbrace{\left\|\nabla_{\theta}\mathcal{L}(T;\theta)-\nabla_{\theta}\mathcal{L}(S;\theta)\right\|_{2}}_{\text{Gradient Matching via $L_{2}$-norm}}\] \[\qquad\qquad\qquad+\underbrace{\frac{1}{2}\rho\lambda_{1}^{T,S}}_{\text{Max eigenvalue}}+\max_{||\epsilon||_{2}\leq 1}O(\rho^{2}v^{3})\]
According to Proposition 3.2, the upper bound of Eq (4) consists of 1) the \(L_{2}\) norm of the gradient differences between \(T\) and \(S\); 2) the maximum eigenvalue of \(\mathbb{H}_{T,S}\); and 3) the remaining higher-order terms. Given a certain selection of \(\rho\) determining the locality scope of \(\theta\), Proposition 3.2 argues that the gradient matching objective would not be enough for loss surface matching if \(\lambda_{1}^{T,S}\) accounts for a large proportion of the upper bound.
Figure 2 shows the value of \(\left\|\nabla_{\theta}\mathcal{L}(T;\theta)-\nabla_{\theta}\mathcal{L}(S;\theta)\right\|_{2}\) and \(\frac{1}{2}\rho\lambda_{1}^{T,S}\) measured for different methods with \(\rho=0.5\). For the gradient matching term, all methods show similar values, which means that these methods cannot be distinguished by the gradient matching term alone. On the contrary, \(\lambda_{1}^{T,S}\) accounts for a large proportion of the upper bound and varies widely across the tested methods, so the differences in the upper bound among the methods eventually rely on the value of \(\lambda_{1}^{T,S}\). By excluding the higher-order terms in Proposition 3.2, the resulting alternative objective is as follows:
\[\min_{S}\,\left\|\nabla_{\theta}\mathcal{L}(T;\theta)-\nabla_{\theta} \mathcal{L}(S;\theta)\right\|_{2}+\frac{1}{2}\rho\lambda_{1}^{T,S} \tag{6}\]
Directly solving the optimization of Eq (6) requires an explicit calculation of the Hessian matrices, \(\mathbb{H}_{T}\) and \(\mathbb{H}_{S}\). This calculation is too costly for over-parameterized models, such as neural networks. To overcome the computational overhead, various methods in machine learning have utilized the diagonal approximation of the Hessian (Rame et al., 2022; Yao et al., 2021) as a common technique. According to Rame et al. (2022), \(\mathbb{H}\) becomes diagonally dominant at the end of training in most cases. We apply the diagonal approximation on \(\mathbb{H}_{T}\) and \(\mathbb{H}_{S}\), and we denote the corresponding diagonal Hessians as \(\hat{\mathbb{H}}_{T}=\text{diag}(\mathbb{H}_{T})\) and \(\hat{\mathbb{H}}_{S}=\text{diag}(\mathbb{H}_{S})\). When we replace \(\mathbb{H}_{T}\) and \(\mathbb{H}_{S}\) with \(\hat{\mathbb{H}}_{T}\) and \(\hat{\mathbb{H}}_{S}\), respectively, Eq (6) is derived3 as follows:
Footnote 3: See Appendix A.2 for the proof.
\[\min_{S}\,\left\|\nabla_{\theta}\mathcal{L}(T;\theta)-\nabla_{\theta} \mathcal{L}(S;\theta)\right\|_{2}+\frac{1}{2}\rho\max_{k}\left|\hat{\lambda}_ {k}^{T}-\hat{\lambda}_{k}^{S}\right| \tag{7}\]
Here, \(\hat{\lambda}_{k}^{T}\) and \(\hat{\lambda}_{k}^{S}\) are the eigenvalues of \(\hat{\mathbb{H}}_{T}\) and \(\hat{\mathbb{H}}_{S}\) on the \(k\)-th dimension of \(\theta\). Having said that, we provide adaptive applications of our objective, Eq (7), to the two approaches, selection-based methods and condensation-based methods, in Sections 3.3 and 3.4, respectively.
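Before specializing, the two tractable terms of (7) can be estimated as in the following PyTorch sketch; the use of Hutchinson's estimator for the Hessian diagonal is an illustrative choice here, not necessarily the paper's implementation:

```python
import torch

def flat_grad(loss, params, create_graph=False):
    g = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([x.reshape(-1) for x in g])

def hessian_diag(loss, params, n_probes=8):
    """Hutchinson estimate of diag(H): E_z[z * (Hz)] with Rademacher probes z."""
    g = flat_grad(loss, params, create_graph=True)
    est = torch.zeros_like(g)
    for _ in range(n_probes):
        z = (torch.randint(0, 2, g.shape, device=g.device) * 2 - 1).to(g.dtype)
        hz = torch.autograd.grad(g @ z, params, retain_graph=True)
        est += z * torch.cat([x.reshape(-1) for x in hz]) / n_probes
    return est

def lcmat_upper_bound(model, loss_fn, batch_T, batch_S, rho=0.1):
    """Evaluate the two tractable terms of (7) for a candidate pair (T, S)."""
    params = list(model.parameters())
    loss = lambda b: loss_fn(model(b[0]), b[1])
    grad_gap = torch.linalg.norm(
        flat_grad(loss(batch_T), params) - flat_grad(loss(batch_S), params))
    lam_T = hessian_diag(loss(batch_T), params)   # eigenvalues of diag(H_T)
    lam_S = hessian_diag(loss(batch_S), params)   # eigenvalues of diag(H_S)
    return grad_gap + 0.5 * rho * (lam_T - lam_S).abs().max()
```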
### LCMat for Selection-based method
To select \(S\subseteq T\), which minimizes Eq (7); we transform (7) into the selection-based objective with the cardinality constraint on \(S\), in a sample-wise derivation as follows:
\[\min_{S\subseteq T}\Big{(}\Big{\|}\frac{1}{|T|}\sum_{(x_{i},y_{i})\in T}\mathbf{g}_{i}^{T}-\frac{1}{|S|}\sum_{(x_{j},y_{j})\in S}\gamma_{j}\mathbf{g}_{j}^{S}\Big{\|}_{2} \tag{8}\] \[\qquad+\frac{1}{2}\rho\max_{k}\Big{|}\frac{1}{|T|}\sum_{(x_{i},y_{i})\in T}\hat{\lambda}_{i,k}^{T}-\frac{1}{|S|}\sum_{(x_{j},y_{j})\in S}\gamma_{j}\hat{\lambda}_{j,k}^{S}\Big{|}\Big{)}\] s.t. \[|S|\ll|T|\]
Figure 2: Report on each term in Eq (6) for the selected methods.
Here, we denote the per-sample gradient as \(\mathbf{g}_{i}^{T}=\nabla_{\theta}\ell(x_{i},y_{i};\theta)\) for \((x_{i},y_{i})\in T\), and we also denote the \(k\)-th dimension eigenvalue of the per-sample Hessian as \(\hat{\lambda}_{i,k}^{T}\) for \((x_{i},y_{i})\in T\). Also, we introduce the learnable weight \(\gamma_{j}\) for \((x_{j},y_{j})\in S\) to build Eq (8) as a generalized form.
It is well known that the subset selection problem is NP-hard (Sener and Savarese, 2018; Mirzasoleiman et al., 2020). When we maximize \(\left|\frac{1}{|T|}\sum\limits_{(x_{i},y_{i})\in T}\hat{\lambda}_{i,k}^{T}-\frac{1}{|S|}\sum\limits_{(x_{j},y_{j})\in S}\gamma_{j}\hat{\lambda}_{j,k}^{S}\right|\) with respect to \(k\), the maximizing \(k\) will differ for each subset \(S\subseteq T\), and searching for \(k\) over every possible \(S\subseteq T\) would be very costly. To relax the computational constraints of the search for \(k\), we empirically optimize the following equation, which does not require the search over \(k\), in place of the second term in Eq (8):
\[\frac{1}{2}\rho\sum\limits_{k\in\mathcal{K}}\left|\frac{1}{|T|}\sum\limits_{( x_{i},y_{i})\in T}\hat{\lambda}_{i,k}^{T}-\frac{1}{|S|}\sum\limits_{(x_{j},y_{j}) \in S}\gamma_{j}\hat{\lambda}_{j,k}^{S}\right| \tag{9}\]
Here, \(\mathcal{K}\) is a set of indices for \(K\) sub-dimensions of \(\theta\). We select the \(K\) dominant sub-dimensions based on the variance of \(\hat{\lambda}_{k}^{T}=[\hat{\lambda}_{i,k}^{T}]_{i=1}^{|T|}\) for each \(k\), which is denoted by the set \(\mathcal{K}=\underset{\mathcal{K},|\mathcal{K}|=K}{\text{argmax}}\sum\limits_{k\in\mathcal{K}}\text{Var}(\hat{\lambda}_{k}^{T})\). We empirically show that the true \(k\) in Eq (8) is always in \(\mathcal{K}\), where the hyper-parameter of sub-dimensions \(K\) is fixed to \(100\) in our experiments. See Appendix B.2 for detailed analyses.
By the notion of regarding the subset selection as sparse vector approximation (Elenberg et al., 2018; Mirzasoleiman et al., 2020), existing methods utilize submodular optimization with a simple greedy algorithm to get a nearly-optimal solution on their objectives. Similar to Mirzasoleiman et al. (2020), we utilize a facility location function (Lin et al., 2009; Lin and Bilmes, 2012) for the submodular optimization. The facility location function quantifies the cover of \(T\) given its subset \(S\) by summation of the similarities defined between every \(i\in T\) and its closest element \(j\in S\). Formally, a facility location is defined as \(F(S)=\sum_{i\in T}\max_{j\in S}s_{i,j}\), where \(s_{i,j}\) is the similarity between \(i,j\in T\). By utilizing the analytical result of Craig, we get an upper bound of the error for Eq (9) as follows: (Proof in Appendix A.3)
\[\min_{S_{\subseteq T}}\left\|\bar{\mathbf{g}}^{T}-\gamma^{S}\bar {\mathbf{g}}^{S}\right\|_{2}+\frac{1}{2}\rho\sum\limits_{k\in\mathcal{K}} \left|\bar{\lambda}_{k}^{T}-\gamma^{S}\bar{\lambda}_{k}^{S}\right| \tag{10}\] \[\leq\sum\limits_{i\in T}\underset{j\in S}{\min}\Big{(}\Big{\|} \mathbf{g}_{i}^{T}-\mathbf{g}_{j}^{S}\Big{\|}_{2}+\frac{1}{2}\rho\sum\limits_ {k\in\mathcal{K}}\left|\hat{\lambda}_{i,k}^{T}-\hat{\lambda}_{j,k}^{S}\right| \Big{)}\]
Here, \(\bar{\mathbf{g}}^{T}=\frac{1}{|T|}\sum\limits_{(x_{i},y_{i})\in T}\mathbf{g}_ {i}^{T}\)4 and \(\bar{\lambda}_{k}^{T}=\frac{1}{|T|}\sum\limits_{(x_{i},y_{i})\in T}\hat{\lambda }_{i,k}^{T}\). We aim at minimizing the upper bound from Eq (10), where we denote the upper bound as \(L(S)\). Finally, our algorithm will be implemented as follows:
Footnote 4: \(\bar{\mathbf{g}}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{g}_{i}\).
\[\min_{S\subseteq T}L(S)\ \ \ \text{s.t.}\ \ \ |S|=m \tag{11}\]
Similar to Pooladzandi et al. (2022), we re-formulate Eq (11) into a formalized version of the facility location algorithm. Suppose an auxiliary example \(e\); then the minimization of \(L(S)\) is turned into the maximization of a facility location objective \(F(S)\) as follows:
\[\max_{S\subseteq T}F(S)=L(\{e\})-L(S\cup\{e\})\ \ \ \text{s.t.}\ \ \ |S|=m \tag{12}\]
Here, \(L(\{e\})\) is a constant, which is an upper bound of \(L(S)\). The objective could also be cast as a submodular cover problem, whose goal is to minimize \(|S|\) under a constraint on \(F(S)\). Finally, we call our method applied to the selection-based setting LCMat-S.5
Footnote 5: The code is available at [https://github.com/SJShin-AI/LCMat](https://github.com/SJShin-AI/LCMat).
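A compact sketch of the resulting greedy facility-location selection is given below; the construction of the pairwise costs from per-sample gradients and eigenvalue summaries is a simplified placeholder for the terms entering (10):

```python
import numpy as np

def greedy_facility_location(D, m):
    """Greedily maximize F(S) = sum_i max_{j in S} s_ij with s_ij = max(D) - D_ij."""
    n = D.shape[0]
    s = D.max() - D                        # nonnegative similarities
    best = np.zeros(n)                     # current max_{j in S} s_ij for each i
    S = []
    for _ in range(m):
        gains = np.maximum(s - best[:, None], 0.0).sum(axis=0)  # marginal gains
        gains[S] = -np.inf                 # never re-pick a selected point
        j = int(gains.argmax())
        S.append(j)
        best = np.maximum(best, s[:, j])
    return S

def pairwise_cost(G, Lam, rho=0.1):
    """d_ij mixing the gradient L2 gap and the eigenvalue-summary L1 gap of (10)."""
    dg = np.linalg.norm(G[:, None, :] - G[None, :, :], axis=-1)
    dl = np.abs(Lam[:, None, :] - Lam[None, :, :]).sum(axis=-1)
    return dg + 0.5 * rho * dl

# toy placeholders: per-sample gradients and K eigenvalue summaries
G, Lam = np.random.randn(200, 16), np.random.randn(200, 8)
idx = greedy_facility_location(pairwise_cost(G, Lam), m=20)
```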
### LCMat for Condensation-based method
Different from selection-based methods, which need submodular optimization for subset selection from \(T\), condensation-based methods directly optimize \(S\) by setting Eq (7) as \(\mathcal{L}(T,S;\theta)\). Eventually, the implemented objective becomes \(\min_{S}\mathcal{L}(T,S;\theta)\). Here, \(S\) is updated as \(S\gets S-\gamma\nabla_{S}\mathcal{L}(T,S;\theta)\). However, the direct optimization of Eq (7) remains costly because of the derivative computation over the Hessian terms \(\hat{\lambda}_{k}^{T}\) and \(\hat{\lambda}_{k}^{S}\). This section provides an efficient variation of Eq (7), adapted to the community of condensation-based methods.
According to Rame et al. (2022), the Fisher information \(\mathbb{F}=\sum_{i=1}^{|T|}\mathbb{E}_{\hat{y}\sim p_{\theta}(y|x_{i})}\Big{[}\nabla_{\theta}\log p_{\theta}(\hat{y}|x_{i})\nabla_{\theta}\log p_{\theta}(\hat{y}|x_{i})^{\top}\Big{]}\) approximates the Hessian \(\mathbb{H}\) with provably bounded errors under mild assumptions (Kim et al., 2022b). As the Fisher information only requires first derivatives with respect to \(\theta\), its computation is more efficient than the computation of the Hessian matrix. The equation below is the empirical Fisher information \(\tilde{\mathbb{F}}\) of a certain dataset \(D\):
\[\tilde{\mathbb{F}}=\frac{1}{|D|}\sum\limits_{(x_{i},y_{i})\in D}\nabla_{\theta} \ell(x_{i},y_{i};\theta)\nabla_{\theta}\ell(x_{i},y_{i};\theta)^{\top} \tag{13}\]
\(\tilde{\mathbb{F}}\) is equivalent6 to the gradient covariance matrix \(\mathbf{C}=\frac{1}{n-1}\Big{(}\mathbf{G}^{\top}\mathbf{G}-\frac{1}{n}(\mathbf{1}^{\top}\mathbf{G})^{\top}(\mathbf{1}^{\top}\mathbf{G})\Big{)}\) of size \(|\theta|\times|\theta|\) at any first-order stationary point (Rame et al., 2022), where \(\mathbf{G}=[\mathbf{g}_{i}]_{i=1}^{|D|}\). As our objective (7) is constructed based on the Hessian diagonals \(\hat{\mathbb{H}}_{T}\) and \(\hat{\mathbb{H}}_{S}\), we consider the gradient variance, \(\text{Var}(\mathbf{G})\), which comprises the diagonal components of \(\mathbf{C}\), as follows:
Footnote 6: We skip the index with \(D\) for the simplicity of \(\tilde{\mathbb{F}}\), \(\mathbf{C}\), and \(\mathbf{G}\).
\[\text{Var}(\mathbf{G})=\frac{1}{|D|-1}\sum\limits_{i=1}^{|D|}\big{(}\mathbf{g}_ {i}-\bar{\mathbf{g}}\big{)}^{2} \tag{14}\]
Results from Rame et al. (2022) support that the similarity between Hessian diagonals and gradient variances is over 99.99\(\%\). Similar to Eq (9), we could specify \(\mathcal{K}\) to select the sub-dimensions of \(\text{Var}(\mathbf{G})\) to match. In practice, we match all dimensions of \(\text{Var}(\mathbf{G})\), which proves robust across the implemented experiments. We provide the adapted application of LCMat to dataset condensation as follows:
\[\min_{S}\mathbb{E}_{\theta^{0}}\Big{[}\sum_{k}\mathcal{D}(\bar{ \mathbf{g}}_{\theta_{k}}^{T},\bar{\mathbf{g}}_{\theta_{k}}^{S})+\frac{1}{2} \rho|\text{Var}(\mathbf{G}_{\theta_{k}}^{T})-\text{Var}(\mathbf{G}_{\theta_{k} }^{S})|\Big{]}\] \[\text{s.t.}\quad\theta_{t+1}=\theta_{t}-\eta\bar{\mathbf{g}}_{ \theta_{t}}^{T}\text{ for }t=0,...,k-1. \tag{15}\]
We denote \(\theta_{k}\) under each term to indicate the argument of the derivative. Our objective is composed of 1) \(\mathcal{D}(\bar{\mathbf{g}}_{\theta_{k}}^{T},\bar{\mathbf{g}}_{\theta_{k}}^{S})\), the averaged gradient matching between \(T\) and \(S\); and 2) \(|\text{Var}(\mathbf{G}_{\theta_{k}}^{T})-\text{Var}(\mathbf{G}_{\theta_{k}}^{S})|\), the gradient variance matching between \(T\) and \(S\). Note that the averaged gradient matching is the objective of Zhao et al. (2020). We also let the learning trajectory of \(\theta\) be driven by \(T\) rather than \(S\), to satisfy the assumption on the model parameter in Section 3.5, which is utilized for the theoretical analysis of our method. We call our method applied to the condensation-based setting LCMat-C.
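A sketch of the inner matching loss of (15) at a single checkpoint \(\theta_{k}\) is given below; the naive per-sample gradient loop and the hyperparameters are illustrative simplifications (per-sample gradients can be vectorized, e.g. with torch.func), and gradients of the loss flow back to the learnable synthetic inputs \(X_{S}\):

```python
import torch
import torch.nn.functional as F

def per_sample_grads(model, X, y, params, create_graph=False):
    """Per-sample loss gradients, stacked into a (batch, n_params) matrix."""
    rows = []
    for i in range(X.shape[0]):
        loss = F.cross_entropy(model(X[i:i + 1]), y[i:i + 1])
        g = torch.autograd.grad(loss, params, create_graph=create_graph)
        rows.append(torch.cat([x.reshape(-1) for x in g]))
    return torch.stack(rows)

def lcmat_c_loss(model, X_T, y_T, X_S, y_S, rho=0.1):
    """Inner objective of (15): mean-gradient match plus gradient-variance match."""
    params = list(model.parameters())
    G_T = per_sample_grads(model, X_T, y_T, params)                    # constants
    G_S = per_sample_grads(model, X_S, y_S, params, create_graph=True) # graph to S
    mean_term = torch.linalg.norm(G_T.mean(0) - G_S.mean(0))
    var_term = (G_T.var(0) - G_S.var(0)).abs().sum()
    return mean_term + 0.5 * rho * var_term

# X_S is the learnable synthetic set (requires_grad=True); one update step:
#   loss = lcmat_c_loss(model, X_T, y_T, X_S, y_S)
#   loss.backward(); X_S.data -= lr * X_S.grad; X_S.grad = None
```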
### Theoretical Understanding of LCMat
This section analyzes the generalization bound of Eq (4), which is our primary objective. First, we define \(\hat{\Theta}\), which is the application range of generalization bound as follows:
**Definition 3.3**.: \(\hat{\Theta}=\{\theta:\mathcal{L}(T;\theta)\leq\mathcal{L}(\mathbb{D};\theta) \text{ for }\theta\in\Theta\}\)__
In practice, \(\mathcal{L}(T;\theta)\) and \(\mathcal{L}(\mathbb{D};\theta)\) are approximated by the training loss and the test loss, respectively. \(\hat{\Theta}\) specifies the \(\theta\) whose generalization gap is greater than or equal to zero, which is intuitive when we optimize \(\theta\) based on \(T\). We first derive the generalization bound of \(\max_{||\epsilon||_{2}\leq\rho}\mathcal{L}_{abs}(T,S;\theta+\epsilon)\), which is a subpart of Eq (4), as follows:
**Theorem 3.4**.: _(Generalization Bound of \(\max_{||\epsilon||_{2}\leq\rho}\mathcal{L}_{abs}(T,S;\theta+\epsilon)\)) For \(\theta\in\hat{\Theta}\), with probability at least \(1-\delta\) over the choice of the training set \(T\) with \(|T|=n\), the following holds. (Proof in Appendix A.4)_
\[\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\rho)}[\mathcal{L}_{abs}( \mathbb{D},S;\theta+\epsilon)] \tag{16}\] \[\leq\max_{\|\epsilon\|_{2}\leq\rho}\mathcal{L}_{abs}(T,S;\theta+ \epsilon)+\sqrt{\frac{O(k+\log\frac{n}{\delta})}{n-1}}\]
Please note that the proof of Theorem 3.4 largely follows the proof technique of SAM (Foret et al., 2020). Having said that, Theorem 3.4 states that \(\max_{||\epsilon||_{2}\leq\rho}\mathcal{L}_{abs}(T,S;\theta+\epsilon)\) can become the upper bound of \(\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\rho)}\Big{[}\Big{|}\mathcal{L}(\mathbb{D};\theta+\epsilon)-\mathcal{L}(S;\theta+\epsilon)\Big{|}\Big{]}\), which is the expected loss difference between \(\mathbb{D}\) and \(S\) over the \(\epsilon\)-perturbed space of the current parameter \(\theta\).
From the theoretical view, Theorem 3.4 provides the generalization property of the loss difference between two arbitrary datasets. As an extension of Theorem 3.4, Corollary 3.5 directly investigates the generalization property of our main objective in Eq (4), which is the first term on the R.H.S. of Corollary 3.5, with an additional assumption, \(\mathcal{L}_{abs}(T,S;\theta)\leq\mathcal{L}_{abs}(\mathbb{D},S;\theta)\). The assumption is acceptable if the loss difference from \(\mathbb{D}\) is larger than \(T\)'s.
**Corollary 3.5**.: _(Generalization Bound of Eq (4)) If \(\mathcal{L}_{abs}(T,S;\theta)\leq\mathcal{L}_{abs}(\mathbb{D},S;\theta)\) for \(\theta\in\hat{\Theta}\), with probability at least \(1-\delta\) over the choice of the training set \(T\) with \(|T|=n\), the following holds: (Proof in Appendix A.5)_
\[\Big{(}\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\rho)}[\mathcal{L}_{abs}(\mathbb{D},S;\theta+\epsilon)]-\mathcal{L}_{abs}(\mathbb{D},S;\theta)\Big{)}\Big{/}\rho \tag{17}\] \[\leq\Big{(}\max_{||\epsilon||_{2}\leq\rho}\mathcal{L}_{abs}(T,S;\theta+\epsilon)-\mathcal{L}_{abs}(T,S;\theta)\Big{)}\Big{/}\rho\] \[+\sqrt{\frac{O(k+\log\frac{n}{\delta})}{n-1}}\]
According to Corollary 3.5, Eq (4) can be an upper bound of \(\Big{(}\mathbb{E}_{\epsilon\sim\mathcal{N}(0,\rho)}[\mathcal{L}_{abs}( \mathbb{D},S;\theta+\epsilon)]-\mathcal{L}_{abs}(\mathbb{D},S;\theta)\Big{)} \Big{/}\rho\), which is the expected sharpness of loss differences between \(\mathbb{D}\) and \(S\) over the \(\epsilon\)-perturbed space of the parameter \(\theta\). This implies that the minimization of Eq (4) would lead to the local curvature matching between S and \(\mathbb{D}\), when \(\mathbb{D}\) is our target population distribution.
## 4 Experiments
This section investigates the validity of our method, LCMat, through experiments on various datasets and tasks. First, we check the efficacy of LCMat by applying it to coreset selection and dataset condensation tasks. In addition, we investigate the performance of LCMat in a continual learning framework as a practical application.
### Coreset Selection Evaluation
**Experiment Details** To investigate the efficacy of each selection-based algorithm, we follow the selection evaluation scenario of Guo et al. (2022), as follows. Each selection-based method learns \(S\) by utilizing the neural network \(f_{\theta_{T}}\), which is pre-trained on \(T\). Next, we introduce another, randomly initialized neural network \(f_{\theta_{S}}\) and optimize \(\theta_{S}\) with \(S\). Finally, we measure the test accuracy of \(f_{\theta_{S}}\) to evaluate the quality of \(S\). During selection, \(\theta\) is fixed, with no alternating optimization between \(S\) and \(\theta\). It should be noted that our method could also be evaluated in the dynamic coreset selection scenario (Mirzasoleiman et al., 2020; Pooladzandi et al., 2022).
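The following is a schematic of this evaluation protocol; `select_fn`, `train_fn`, `eval_fn`, and `init_model` are placeholder callables, not part of the original implementation.

```python
def evaluate_selection(select_fn, train_fn, eval_fn, init_model,
                       T, test_set, fraction):
    """Score of a coreset S = test accuracy of a fresh network trained on S."""
    f_T = train_fn(init_model(), T)              # pre-train on the full set T
    budget = int(fraction * len(T))
    S = select_fn(T, f_T, budget)                # theta_T stays fixed here
    f_S = train_fn(init_model(), S)              # new random initialization
    return eval_fn(f_S, test_set)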
**Baselines** We choose baselines from past work on selection-based methods. The selected baselines fall into two modelling categories. Baselines in the first category utilize the output from the forward pass of the model, e.g., a layer-wise feature vector or softmax output: Contextual Diversity (C-Div) (Agarwal et al., 2020), Herding (Welling, 2009), k-CenterGreedy (k-Center) (Sener and Savarese, 2018), Least Confidence (L-Conf), Entropy, and Margin (Coleman et al., 2019). Baselines in the other category are variants of gradient matching: Craig (Mirzasoleiman et al., 2020), GradMatch (Killamsetty et al., 2021a), Glister (Killamsetty et al., 2021b) and AdaCore (Pooladzandi et al., 2022). We also report results from a randomly chosen subset (Uniform). For all methods, we select \(S\) in a class-balanced manner. We provide the detailed implementation of each method and the corresponding wall-clock time in Appendix D.
**Implementation of LCMat-S and Gradient-based Methods** We compute the gradient and the Hessian matrix of the last layer of \(f_{\theta}\), which is common practice in theoretical analyses (Mirzasoleiman et al., 2020; Pooladzandi et al., 2022). For AdaCore (Pooladzandi et al., 2022) and our method LCMat-S, we skip the training of \(\mathbf{w}\), the learnable weights for the instances in the subset \(S\), because it significantly decreases test performance; we conjecture that this is caused by over-fitting of \(\mathbf{w}\). We tune \(\rho\), the only hyper-parameter of LCMat-S, over the value list [0.01, 0.05, 0.1, 0.5]. We also implement variants of GradMatch and Glister, which we call GradMatch-Val and Glister-Val, by matching the gradient of \(T\) with the gradient over the validation dataset as specified in the original papers.
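For a softmax classifier with cross-entropy, last-layer per-sample gradients have a closed form, which is one reason restricting the computation to the last layer is cheap. The sketch below is our own illustration of this fact, with assumed tensor shapes.

```python
import torch
import torch.nn.functional as F

def last_layer_per_sample_grads(features, logits, targets):
    """Per-sample cross-entropy gradients w.r.t. the last linear layer.

    For logits z = W h, the gradient of one sample's loss w.r.t. W is
    (softmax(z) - onehot(y)) h^T, so no extra backward passes are needed.
    features: (n, d) penultimate activations h
    logits:   (n, c) network outputs z
    targets:  (n,)   integer class labels y
    """
    probs = F.softmax(logits, dim=1)
    delta = probs - F.one_hot(targets, probs.shape[1]).float()   # (n, c)
    grads = torch.einsum('nc,nd->ncd', delta, features)          # (n, c, d)
    return grads.flatten(1)                                      # (n, c*d)
```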
**Benchmark Evaluation Result** Table 1 reports the test accuracy of ResNet-18 trained using \(S\) from each method. We evaluate \(S\) at different dataset-reduction fractions, i.e., the cardinality budget of \(S\) relative to \(T\). Uniform, the random selection baseline, shows competitive performance over the other baselines, revealing the weak robustness of existing selection methods. LCMat-S shows improved or competitive performance over the implemented baselines by relieving the over-fitting of \(S\) to the provided \(\theta\). The gain from LCMat-S becomes particularly significant when the dataset is difficult and the reduction rate is small, i.e., reduction to 0.5%, 1%, and 5% on CIFAR-100. In Appendix D.4, we report image samples selected by each method for all classes of the CIFAR-10 dataset. LCMat-S selects examples with diverse characteristics, e.g., diverse object shapes and different backgrounds, without redundancy.
**Robustness on Cross-Architecture** In our scenario, the network structure of \(f_{\theta_{S}}\) can differ from that of \(f_{\theta_{T}}\). We test the robustness of LCMat-S in this specific scenario, which we call Cross-Architecture Generalization (Zhao et al., 2020). We utilize VGG-16 (Simonyan and Zisserman, 2014), Inception-v3 (Szegedy et al., 2016), and WRN-16-8 (Zagoruyko and Komodakis, 2016) as \(f_{\theta_{S}}\). Table 2 reports the test accuracy of LCMat-S and other gradient-based methods. LCMat-S consistently shows better generalization performance than the implemented baselines. We conjecture
\begin{table}
Table 1: Test accuracy (%) of ResNet-18 trained on the subset \(S\) selected by each method, on CIFAR-10 and CIFAR-100, at dataset-reduction fractions from 0.1% to 100%.
\end{table}
that robustness across different network architectures is improved by our loss-curvature matching objective.
**Robustness on the pre-training of \(f_{\theta_{T}}\)** In our evaluation scenario, \(f_{\theta_{T}}\) could be pre-trained with different hyper-parameters in each experiment, and \(f_{\theta_{T}}\) significantly influences the selection of \(S\). To test robustness to the pre-training of \(\theta\), we conduct coreset selection experiments over differently pre-trained ResNet-18 models with combinations of epochs [2, 5, 10, 20, 100]; weight decay [1e-4, 5e-4, 1e-3]; optimizers [SGD, Adam]; and 3 seeds, which results in 90 cases. Fig 3 shows the number of times each method beats the others across these cases, with the averaged improvement over the other methods in parentheses (%). LCMat-S beats the other baselines by a wide margin.
**Additional Results** We further demonstrate the efficacy of LCMat-S over the baselines in two additional experiments: 1) selection with a different network architecture (VGG-16); and 2) selection on a different dataset (TinyImageNet). Table 3 shows that LCMat is consistently competitive with the selected baselines in these settings.
**Ablation Study** When we set \(\rho=0\) in Eq (11), our method reduces to gradient matching with the facility location algorithm (sketched below). To validate the efficacy of loss-curvature matching over gradient matching, we provide an ablation study of LCMat-S via sensitivity analyses over \(\rho\). Figure 4 shows that test performance for \(\rho>0\) is consistently higher than for \(\rho=0\), demonstrating the efficacy of loss-curvature matching over gradient matching.
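For reference, here is a minimal greedy facility-location sketch (our own illustration) over a precomputed pairwise similarity matrix, e.g., similarities derived from per-sample gradient distances.

```python
import numpy as np

def greedy_facility_location(sim, budget):
    """Greedy maximization of F(S) = sum_i max_{j in S} sim[i, j].

    sim: (n, n) nonnegative pairwise similarity matrix
         (e.g., a shifted negative gradient distance).
    Returns the indices of the selected subset S.
    """
    n = sim.shape[0]
    best_cover = np.zeros(n)          # current coverage of each point
    selected = []
    for _ in range(budget):
        # Marginal gain of adding each candidate column j.
        gains = np.maximum(sim, best_cover[:, None]).sum(axis=0) - best_cover.sum()
        gains[selected] = -np.inf     # never pick the same point twice
        j = int(np.argmax(gains))
        selected.append(j)
        best_cover = np.maximum(best_cover, sim[:, j])
    return selected
```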
### Dataset Condensation Evaluation
**Experiment Details** The condensation evaluation scenario is very similar to the selection scenario explained in Section 4.1. The only difference is the alternating training between \(S\) and \(\theta_{T}\) during condensation, which arises from the nature of the condensation task.
We condense \(S\) from CIFAR-10 and CIFAR-100 using ConvNet-3 as \(f_{\theta_{T}}\). As specified in Eq (15), we optimize \(\theta_{T}\) on \(T\) rather than on the current \(S\) during the alternating training of \(S\) and \(\theta_{T}\), which is shown to be effective for condensation [11]. It also fits our parameter coverage in Theorem 3.4. All methods utilize Differentiable Siamese Augmentation [13] and the additional augmentation strategy specified in Kim et al. (2022) to further improve performance. During the alternating updates of \(S\) and \(\theta_{T}\), we re-initialize \(\theta_{T}\) periodically, as is common practice [13, 12]. All experiments in this section are repeated 3 times.
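The alternating procedure can be summarized by the following sketch; `match_loss` and `train_step` stand in for the Eq (15) objective and a standard training update, and all names and default values are illustrative assumptions.

```python
import torch

def condense(loader_T, make_model, match_loss, train_step,
             s_shape, n_classes, steps=1000, reinit_every=100, lr_s=0.1):
    """Alternating optimization of a synthetic set S and the network theta_T.

    loader_T yields batches of real data indefinitely; match_loss plays the
    role of the Eq (15) objective, train_step of a standard update on T.
    """
    S = torch.randn(s_shape, requires_grad=True)      # synthetic images
    y_S = torch.arange(s_shape[0]) % n_classes        # fixed balanced labels
    opt_S = torch.optim.SGD([S], lr=lr_s)
    model = make_model()
    for step, batch_T in zip(range(steps), loader_T):
        if step % reinit_every == 0:
            model = make_model()                      # periodic re-init
        opt_S.zero_grad()
        match_loss(model, batch_T, S, y_S).backward() # update S only
        opt_S.step()
        train_step(model, batch_T)                    # update theta_T on T
    return S.detach(), y_S
```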
**Baselines** To validate the efficacy of LCMat-C, we compare test performance against baselines with different objectives.
\begin{table}
\begin{tabular}{c|c|c c c c c c c} \hline \hline Experiment & Frac & Uniform & k-Center & Craig & GradMatch & Glister & AdaCore & LCMat-S \\ \hline \multirow{2}{*}{CIFAR-10 w/ **VGG-16**} & 0.5\(\%\) & 13.61\(\pm\)1.8 & 12.81\(\pm\)1.1 & **15.83\(\pm\)1.9** & 11.33\(\pm\)0.6 & 12.4\(\pm\)0.7 & 13.84\(\pm\)1.6 & 15.37\(\pm\)0.0 \\ & 1\(\%\) & 19.81\(\pm\)2.4 & 15.78\(\pm\)4.1 & 15.19\(\pm\)1.4 & 13.7\(\pm\)1.8 & 22.84\(\pm\)3.1 & 19.08\(\pm\)8.4 & **25.41\(\pm\)6.4** \\ \hline \multirow{2}{*}{CIFAR-100 w/ **VGG-16**} & 0.5\(\%\) & 1.85\(\pm\)0.4 & 1.51\(\pm\)0.2 & 2.13\(\pm\)0.6 & **2.41\(\pm\)**0.8 & 2.03\(\pm\)0.6 & 1.79\(\pm\)0.3 & 2.34\(\pm\)0.2 \\ & 1\(\%\) & 3.61\(\pm\)5.0 & 2.07\(\pm\)0.6 & 4.73\(\pm\)1.0 & 2.63\(\pm\)0.5 & 4.36\(\pm\)1.1 & 2.9\(\pm\)0.8 & **5.91\(\pm\)**0.3 \\ \hline \multirow{2}{*}{**TinyImageNet** w/ ResNet-18} & 0.5\(\%\) & 2.07\(\pm\)0.2 & 1.72\(\pm\)0.2 & 2.99\(\pm\)0.2 & 2.44\(\pm\)0.2 & 2.75\(\pm\)0.0 & 1.81\(\pm\)0.1 & **3.18\(\pm\)**0.4 \\ & 1\(\%\) & 3.57\(\pm\)0.1 & 2.45\(\pm\)0.2 & 5.16\(\pm\)0.3 & 4.81\(\pm\)0.1 & 5.20\(\pm\)0.3 & 3.43\(\pm\)0.1 & **5.43\(\pm\)**0.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracies of the coreset selection task on the VGG-16 network (first and second rows) and the TinyImageNet dataset (third row). **Bold** denotes the best performance and underline the second best.
Figure 4: The sensitivity analyses based on \(\rho\) for fraction = 0.5\(\%\) and 5\(\%\) in CIFAR-10.
Figure 3: Heatmap showing the number of times each method beats the others across the 90 cases, with the averaged improvement over the other methods in parentheses (%).
Baselines include gradient matching (DSA) [12], feature output matching (DM) [13], and kernel-based (KIP) methods [20].
**Implementation of LCMat-C** The gradient variance \(\text{Var}(\mathbf{G}_{\bar{p}_{k}}^{T})\) in Eq (15) requires the costly computation of per-sample gradients over \(\theta\). We utilize BackPACK [1], which provides per-sample gradients at almost no time overhead. We also compute the gradient variance term only for the last layer, an efficient practice that improves test performance at low computational cost.
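A minimal sketch of this computation follows, assuming the model and loss have been wrapped with BackPACK's `extend`; the function name and interface are our own.

```python
import torch
from backpack import backpack, extend
from backpack.extensions import BatchGrad

def last_layer_grad_variance(model, lossfunc, x, y, param_name):
    """Per-coordinate variance of per-sample gradients of one named parameter.

    `model` and `lossfunc` must be wrapped once with backpack's `extend`.
    """
    loss = lossfunc(model(x), y)
    with backpack(BatchGrad()):           # one backward pass, per-sample grads
        loss.backward()
    param = dict(model.named_parameters())[param_name]
    return param.grad_batch.var(dim=0)    # grad_batch: (batch, *param.shape)

# Usage (sketch):
# model = extend(MyNet()); lossfunc = extend(torch.nn.CrossEntropyLoss())
# var = last_layer_grad_variance(model, lossfunc, x, y, 'fc.weight')
```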
**Results** Table 4 shows that the condensed \(S\) from LCMat-C consistently improves the test performance over all baselines at different fractions of CIFAR-10 and CIFAR-100. We observe especially significant improvements at low fraction budgets. We also test the robustness of LCMat-C in the cross-architecture scenario, using ResNet-10 [1] and DenseNet-121 [12] as testing backbones. Table 5 shows consistent improvements of LCMat-C over the baselines.
### Application: Continual Learning with Memory Replay
Methods for memory-based continual learning store a small set of representative instances and optimize the classifier with the samples stored in memory to alleviate catastrophic forgetting of previously observed tasks [1]. As a practical application, we utilize \(S\) from each method as a memory exemplar for previously seen classes under the class-incremental setting of [12; 13]. In this setting, CIFAR-100 is divided into 5 sets of sub-classes with a memory budget of 10 images per class, where each set of classes corresponds to a separate task stage. The model is trained purely on the latest memory at each task stage. Figure 5 shows that the variants of LCMat, LCMat-S and LCMat-C, significantly improve test performance under this setting, indicating reduced catastrophic forgetting.
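A schematic of this class-incremental evaluation loop is given below; all helper callables and the memory layout are illustrative placeholders.

```python
def class_incremental_run(tasks, reduce_fn, make_model, train_fn, eval_fn,
                          budget_per_class=10):
    """Class-incremental loop: after each stage, the reduction method fills a
    small memory; each stage trains on the current task plus the latest memory."""
    memory, accuracies = [], []
    for t, task_data in enumerate(tasks):
        model = train_fn(make_model(), task_data + memory)   # replay memory
        memory += reduce_fn(task_data, model, budget_per_class)
        accuracies.append(eval_fn(model, tasks[: t + 1]))    # all seen classes
    return accuracies
```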
## 5 Conclusion
We propose a new objective for dataset reduction named Loss-Curvature Matching, or LCMat. LCMat identifies the worst-case loss-curvature gap between the original dataset and the reduced dataset around the local parameter region, which is closely related to parameter-based generalization in the dataset reduction procedure. Through its adaptive application to both selection-based and condensation-based methods, LCMat consistently shows improved performance over baselines from both lines of research in dataset reduction. In particular, LCMat shows clear performance merits at extreme reduction ratios, a property especially suited to on-device learning where memory capacity is limited.
#### Acknowledgements
This research was supported by AI Technology Development for Commonsense Extraction, Reasoning, and Inference from Heterogeneous Data (IITP), funded by the Ministry of Science and ICT (2022-0-00077). The authors would also like to thank Dongjun Kim and Byeonghu Na for their invaluable discussions and support.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Fraction} & \multicolumn{2}{c}{CIFAR-10} & \multicolumn{2}{c}{CIFAR-100} \\ \cline{2-5} & 0.2\(\%\) & 1\(\%\) & 2\(\%\) & 10\(\%\) \\ \hline Random & 37.13\(\pm\)0.3 & 56.67\(\pm\)0.5 & 20.60\(\pm\)0.3 & 40.90\(\pm\)0.0 \\ KIP & 47.30\({}^{\dagger}\)\(\pm\)0.3 & 50.10\({}^{\dagger}\)\(\pm\)0.2 & 13.40\({}^{\dagger}\)\(\pm\)0.2 & - \\ DM & 54.47\(\pm\)0.5 & 65.23\(\pm\)0.2 & 33.99\(\pm\)0.2 & 43.35\(\pm\)0.2 \\ DSA & 54.90\(\pm\)0.3 & 61.90\(\pm\)0.4 & 33.75\(\pm\)0.1 & 38.71\(\pm\)0.3 \\ \hline LCMat-C & **56.83\(\pm\)**0.2 & **65.90\(\pm\)**0.4 & **36.47\(\pm\)**0.0 & **43.53\(\pm\)**0.1 \\ \hline Full & \multicolumn{2}{c}{89.77\(\pm\)0.2} & \multicolumn{2}{c}{65.13\(\pm\)0.5} \\ \hline \hline \end{tabular}
\end{table}
Table 4: Condensation performance on CIFAR-10 and CIFAR-100 with ConvNet-3. **Bold** denotes the best result; \(\dagger\) denotes results reported in the original papers.
\begin{table}
\begin{tabular}{c|c|c c c} \hline \hline Fraction & Test Model & ConvNet-3 & ResNet-10 & DenseNet-121 \\ \hline \multirow{4}{*}{\(0.2\%\)} & Random & 37.13\(\pm\)0.3 & 35.27\(\pm\)0.4 & 36.93\(\pm\)0.6 \\ & DM & 54.47\(\pm\)0.5 & 44.73\(\pm\)1.1 & 44.97\(\pm\)0.3 \\ & DSA & 54.90\(\pm\)0.3 & 46.03\(\pm\)0.3 & 45.63\(\pm\)1.8 \\ & LCMat-C & **56.83\(\pm\)**0.2 & **48.00\(\pm\)**1.5 & **47.27\(\pm\)**1.1 \\ \hline \multirow{4}{*}{\(1\%\)} & Random & 56.67\(\pm\)0.5 & 53.57\(\pm\)0.4 & 56.77\(\pm\)0.4 \\ & DM & 65.23\(\pm\)0.2 & 56.77\(\pm\)0.1 & 55.80\(\pm\)0.4 \\ \cline{1-1} & DSA & 61.90\(\pm\)0.4 & 57.97\(\pm\)0.2 & 55.00\(\pm\)0.8 \\ \cline{1-1} & LCMat-C & **65.90\(\pm\)**0.4 & **60.93\(\pm\)**0.4 & **57.93\(\pm\)**0.1 \\ \hline
100\(\%\) & Full & 89.72\(\pm\)0.2 & 93.80\(\pm\)0.3 & 96.17\(\pm\)0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Cross-architecture generalization performance (\(\%\)) on CIFAR-10 with ConvNet-3. **Bold** denotes the best result.
Figure 5: Test accuracy from the continual learning scenario with the selected or condensed data from CIFAR-100. We compare the methods from each category separately.
|
2304.09874 | Domain Adaptable Self-supervised Representation Learning on Remote
Sensing Satellite Imagery | This work presents a novel domain adaption paradigm for studying contrastive
self-supervised representation learning and knowledge transfer using remote
sensing satellite data. Major state-of-the-art remote sensing visual domain
efforts primarily focus on fully supervised learning approaches that rely
entirely on human annotations. On the other hand, human annotations in remote
sensing satellite imagery are always subject to limited quantity due to high
costs and domain expertise, making transfer learning a viable alternative. The
proposed approach investigates the knowledge transfer of selfsupervised
representations across the distinct source and target data distributions in
depth in the remote sensing data domain. In this arrangement, self-supervised
contrastive learning-based pretraining is performed on the source dataset, and
downstream tasks are performed on the target datasets in a round-robin fashion.
Experiments are conducted on three publicly available datasets, UC Merced
Landuse (UCMD), SIRI-WHU, and MLRSNet, for different downstream classification
tasks versus label efficiency. In self-supervised knowledge transfer, the
proposed approach achieves state-of-the-art performance with label efficiency
labels and outperforms a fully supervised setting. A more in-depth qualitative
examination reveals consistent evidence for explainable representation
learning. The source code and trained models are published on GitHub. | Muskaan Chopra, Prakash Chandra Chhipa, Gopal Mengi, Varun Gupta, Marcus Liwicki | 2023-04-19T14:32:36Z | http://arxiv.org/abs/2304.09874v1 | # Domain Adaptable Self-supervised Representation Learning on Remote Sensing Satellite Imagery
###### Abstract
This work presents a novel domain adaptation paradigm for studying contrastive self-supervised representation learning and knowledge transfer using remote sensing satellite data. Major state-of-the-art remote sensing visual domain efforts primarily focus on fully supervised learning approaches that rely entirely on human annotations. On the other hand, human annotations in remote sensing satellite imagery are always subject to limited quantity due to high costs and domain expertise, making transfer learning a viable alternative. The proposed approach investigates the knowledge transfer of self-supervised representations across distinct source and target data distributions in depth in the remote sensing data domain. In this arrangement, self-supervised contrastive learning-based pretraining is performed on the source dataset, and downstream tasks are performed on the target datasets in a round-robin fashion. Experiments are conducted on three publicly available datasets, UC Merced Landuse (UCMD), SIRI-WHU, and MLRSNet, for different downstream classification tasks versus label efficiency. In self-supervised knowledge transfer, the proposed approach achieves state-of-the-art performance with limited labels and outperforms a fully supervised setting. A more in-depth qualitative examination reveals consistent evidence for explainable representation learning. The source code and trained models are published on GitHub1.
contrastive learning; self-supervised learning; representation learning; domain adaptation; remote sensing; satellite image
Footnote 1: [https://github.com/muskaan712/Domain-Adaptable-Self-Supervised-Representation-Learning-on-Remote-Sensing-Satellite-Imagery](https://github.com/muskaan712/Domain-Adaptable-Self-Supervised-Representation-Learning-on-Remote-Sensing-Satellite-Imagery)
## I Introduction
To formulate policies and schemes, a region's geographical and demographic information, and its efficient representation, are essential [1][2]. Visual interpretation of aerial and space images is the most common method of producing topographic and thematic maps. Satellite images are also used to classify different types of crops using deep learning techniques [3][4]. Today, many high-resolution satellites can be relied upon for cartographic projects [5]. However, high-resolution images are not always available, which is a major concern. Satellite images are not always provided in abundance, and scarce image samples further complicate classification and segmentation. Applications of satellite imagery classification include disaster prediction from remote sensing images, where early predictions are used to take necessary precautions [31]. Satellite images can also be classified and segmented into areas with high wind and solar potential so that adequate coverage of windmills and solar panels can be planned to harness the power efficiently [32].
Substantial human-labeled data is necessary to train a deep neural network successfully. Unfortunately, data collection and labeling are time-consuming and challenging in many fields, and acquiring sufficient annotated data can be quite expensive. The process of cleaning, screening, labeling, evaluating, and reorganizing data within a training framework can be exceedingly time-consuming and complicated [11]. The lack of data has spawned a variety of solutions, the most prevalent of which is transfer learning. This work presents an approach that reduces the required training samples and puts less stress on data labeling compared to architecture modeling [6].
This work employs self-supervised learning-based satellite image classification to deal with scarce labeled data in satellite imagery. When representations are learned from a pretext task using unlabeled input images and then used for a downstream task of interest, self-supervised learning is akin to transfer learning [12]. This work uses Domain Adaptation (DA) to demonstrate the robustness of the model in terms of performance generalization on unseen data distributions. Domain adaptation follows the concept that the model is trained on one source
Fig. 1: An instance of the proposed domain adaptation framework, where self-supervised representation learning is performed on the source dataset using contrastive learning and the downstream task is performed on the target datasets.
dataset and evaluated on another target dataset, increasing the model's reliability, re-usability, and performance. The main contributions of the proposed work are outlined as follows:
* Establishing the adaptation of the self-supervised representation learning on remote-sensing satellite imagery by proposing a domain adaptation framework for rigorous evaluation.
* Achieving label-efficient representational knowledge transfer across multiple public datasets by obtaining state-of-the-art performance with limited labels and outperforming fully supervised settings.
* Explaining improvement in quantitative results by qualitative analysis with significant and consistent evidence.
The rest of the article is organized as follows. Section 2 discusses the datasets. Section 3 presents the domain adaptation framework for self-supervised contrastive learning. Section 4 presents the experiments and results. Section 5 discusses those experiments and results, Section 6 presents related work, and Section 7 concludes the proposed work and outlines future directions.
## II Satellite Imagery Dataset Description
There are many applications for satellite images in meteorology, oceanography, fisheries, agriculture, biodiversity, geology, cartography, and land use planning. Rather than merely providing a picture of a place, satellite image classification aims to transform satellite imagery into valuable information. Satellite imagery of residential and non-residential build-ups varies with the objects and natural scenes captured in the image, and a dataset with images labeled as a whole is required for categorizing satellite images. Three public satellite imagery datasets are used in this work: SIRI-WHU and UC Merced have images equally distributed among their classes, while MLRSNet has a non-uniform distribution, as demonstrated in Figure 2; other details about the datasets are discussed below and summarized in Table I.
### _SIRI-WHU Dataset_
The SIRI-WHU dataset2 contains 2,400 images sorted into 12 classes. This dataset was obtained from Google Earth and mainly covers metropolitan regions in China; the image collection was developed by Wuhan University's RS IDEA Group (SIRI-WHU). It consists of 12 classes: Agriculture, Commercial, Harbor, Idle land, Industrial, Meadow, Overpass, Park, Pond, Residential, River, and Water. Each class comprises 200 images of 200 x 200 pixels.
Footnote 2: [http://www.lmars.whu.edu.cn/prof_web/zhongyanfei/e-code.html](http://www.lmars.whu.edu.cn/prof_web/zhongyanfei/e-code.html)
### _UC Merced Dataset_
The images in the UC Merced dataset3 were manually extracted from large images in the United States Geological Survey (USGS) National Map Urban Area Imagery collection for numerous cities across the United States. This large ground-truth image collection consists of 21 land-use classes, each with 100 images: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis court. This public domain imagery has a pixel resolution of 1 foot, with each image being 256x256 pixels.
Footnote 3: [http://weegee.vision.ucmerced.edu/datasets/landuse.html](http://weegee.vision.ucmerced.edu/datasets/landuse.html)
### _MLRSNet_
MLRSNet4 offers several satellite-based perspectives of the world. It comprises 109,161 high-spatial-resolution optical remote sensing images across 46 categories, with between 1,500 and 3,000 example images per category. The images are 256x256 pixels with varying pixel resolutions (10 m to 0.1 m). The dataset can be used for image segmentation, image retrieval, and multi-label classification.
Footnote 4: [https://data.mendeley.com/datasets/7j9bw9wxs/2](https://data.mendeley.com/datasets/7j9bw9wxs/2)
## III Domain Adaptation framework for Self-supervised contrastive learning
The proposed framework consists of two main tasks: (i) a pretext task, in which representations are learned via contrastive self-supervised learning on satellite imagery datasets within the source domain, and (ii) a downstream task, in which satellite images are classified based on the representations learned in the pretext task. Figure 3 depicts a schematic
\begin{table}
\begin{tabular}{|l|c|c|} \hline _Dataset_ & _Total Images_ & _No. of Classes_ \\ \hline _SIRI-WHU_ & _2400_ & _12_ \\ \hline _UC Merced Land Use Dataset_ & _2100_ & _21_ \\ \hline _MLRSNet_ & _109,161_ & _46_ \\ \hline \end{tabular}
\end{table} TABLE I: Dataset description
Fig. 2: Class-wise distribution of the MLRSNet dataset.
diagram of the proposed approach, in which knowledge transfer of self-supervised learnt representations is comprehensively validated.
In the pretext task, various augmentations are applied to
the images, such as flipping, affine transformations, jitter, grayscale, etc., to create different views of the images. A positive pair is a pair of views created from the same image, whereas a negative pair is a pair of views created from different images. Positive and negative pairs of images are then contrastively learned to form image representations; labeled images are not required for representation learning in this task. Figure 1 depicts the contrastive learning architecture. SimCLR (a simple framework for contrastive learning) [21] is used to learn the representations. Positive and negative pairs of satellite images are created from unlabelled satellite images with augmentations such as Gaussian blur, flipping, translation, and rotation. These pairs of views are fed to the encoder network. A ResNet-50 encoder serves as the backbone of the pretext-task network, followed by two fully connected layers of 2048 and 1024 neurons. The encoder extracts image representations for positive and negative pairs. The Normalized Temperature-scaled Cross-Entropy (NT-Xent) loss is used to pull together the representations of positive pairs and push apart those of negative pairs. This loss function for a positive pair is defined below.
\[\ell\left(\mathbf{z}_{i},\mathbf{z}_{j}\right)=-\log\frac{\exp\left(\mathrm{sim}\left(z_{i},z_{j}\right)/\tau\right)}{\sum_{k=1}^{2N}\mathbb{1}_{[k\neq i]}\exp\left(\mathrm{sim}\left(z_{i},z_{k}\right)/\tau\right)}\]
Here \(\tau\) is the temperature parameter, \(z_{i}\) and \(z_{j}\) in the numerator represent a positive pair, and \(z_{i}\) and \(z_{k}\) in the denominator range over all possible pairs, both positive and negative. The loss is thus the negative log-likelihood that a positive pair is similar relative to all other pairs, normalized through a temperature-scaled softmax. It is designed to maximize agreement between positive pairs in a mini-batch. The loss function over all positive pairs is given below.
\[\mathcal{L}=-\frac{1}{N}\sum_{i,j\in\mathcal{MB}}\log\frac{\exp\left(\mathrm{sim}\left(z_{i},z_{j}\right)/\tau\right)}{\sum_{k=1}^{2N}\mathbb{1}_{[k\neq i]}\exp\left(\mathrm{sim}\left(z_{i},z_{k}\right)/\tau\right)}\]
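For illustration, the following is a compact PyTorch re-implementation of the NT-Xent loss above (our own sketch, not the authors' released code).

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent loss for N positive pairs (2N augmented views in total).

    z1, z2: (N, d) projections of the two views of each image in the batch.
    Cosine similarity is used as sim(., .), as in SimCLR.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit norm
    sim = z @ z.t() / tau                                # pairwise sims / tau
    sim.fill_diagonal_(float('-inf'))                    # exclude k = i terms
    n = z1.shape[0]
    # Row i's positive sits at i + N (first half) or i - N (second half).
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, pos)
```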
The downstream task uses the image embeddings learned during the pretext task as input, and only a few basic augmentations, such as resizing and cropping, are applied. The downstream task involves binary and multi-class classification of satellite images, for which far fewer labeled images are required. An input image from the target dataset is augmented (resizing, cropping) and fed to the encoder, which is initialized from the model trained in the pretext task. A linear classifier with a 512-unit layer followed by an output layer with one unit per class is appended to the encoder to classify satellite images.
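A minimal sketch of this architecture follows, assuming standard torchvision components; the layer sizes match the description above, while the function names are our own.

```python
import torch.nn as nn
from torchvision.models import resnet50

def build_pretext_model():
    """ResNet-50 backbone plus the two fully connected projection layers
    (2048 -> 1024) described above; a minimal architectural sketch."""
    backbone = resnet50()
    feat_dim = backbone.fc.in_features           # 2048 for ResNet-50
    backbone.fc = nn.Identity()                  # expose pooled features
    head = nn.Sequential(nn.Linear(feat_dim, 2048), nn.ReLU(),
                         nn.Linear(2048, 1024))
    return backbone, head

def build_downstream_model(backbone, n_classes):
    """Pretext-initialized encoder with the appended linear classifier
    (512 units, then one output unit per class)."""
    return nn.Sequential(backbone,
                         nn.Linear(2048, 512), nn.ReLU(),
                         nn.Linear(512, n_classes))
```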
## IV Experiments and Results
Extensive experimentation is designed and performed to investigate domain adaptation in self-supervised learning-based representational knowledge transfer on three datasets, UC Merced, SIRI-WHU, and MLRSNet, covering binary and multi-class classification downstream tasks under varying label efficiency. Table II shows the augmentations for the pretext task, and Table III shows the hyperparameters used for training the pretext task. Details of the fine-tuning hyperparameters and other configuration are available in the open-source code. The datasets follow a 70%, 20%, and 10% split for training, testing, and validation. The next subsections discuss the binary classification and multi-class classification results. The performance metrics are defined below.
\[\text{Precision}\ =\ \frac{\text{Total true positives}}{\text{Total true positives}\ +\ \text{Total false positives}}\]
\[\text{Recall}\ =\ \frac{\text{Total true positives}}{\text{Total true positives}\ +\ \text{Total false-negatives}}\]
\[\text{Accuracy}\ =\ \frac{\text{true negatives}\ +\ \text{true positives}}{\text{ total cases}}\]
\[\text{F1 Score}\ =2*\frac{\text{Precision}\ *\ \text{Recall}}{\text{ Precision}\ +\ \text{Recall}}\]
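These metrics can be computed, for example, with scikit-learn; macro averaging across classes is our assumption in this sketch.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def report_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 as defined above."""
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average='macro', zero_division=0)
    return {'accuracy': accuracy_score(y_true, y_pred),
            'precision': precision, 'recall': recall, 'f1': f1}
```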
Fig. 3: Proposed approach for domain adaptation. It ensures that each dataset is investigated both for self-supervised representation learning and for downstream tasks under all possible domain adaptation scenarios.
### _Domain Adaptation_
For domain adaptation, three different datasets are used to evaluate the performance of the proposed methodology for satellite image classification. Each of the three datasets is used without labels in the pretext task to learn representations, and fine-tuning is then performed on the two remaining datasets to evaluate domain adaptation. The results of these experiments are shown in Table IV.
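A schematic of this round-robin evaluation is sketched below; the helper callables are placeholders for contrastive pretraining, fine-tuning at a given label fraction, and evaluation.

```python
def round_robin_domain_adaptation(datasets, pretrain_fn, finetune_fn, eval_fn,
                                  fractions=(1.0, 0.5, 0.3, 0.1)):
    """Every dataset serves once as the unlabeled pretext source; downstream
    fine-tuning and evaluation run on each remaining target dataset at
    several label fractions."""
    results = {}
    for src_name, src in datasets.items():
        encoder = pretrain_fn(src)                    # contrastive, no labels
        for tgt_name, tgt in datasets.items():
            if tgt_name == src_name:
                continue
            for frac in fractions:
                model = finetune_fn(encoder, tgt, frac)
                results[(src_name, tgt_name, frac)] = eval_fn(model, tgt)
    return results
```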
### _Multi-class Classification_
For the multi-class classification task, the same three datasets are used to evaluate contrastive learning for satellite image classification. On UC Merced, which has 21 classes, our model achieved an accuracy of 99.35%, precision of 99.91%, recall of 98.95%, and F1 score of 98.89% with 100% of the data in the downstream task. On SIRI-WHU, with 12 classes, our model scored an accuracy of 99.68%, precision of 95.36%, recall of 96.56%, and F1 score of 97.92% with 100% of the data in the downstream task. On MLRSNet, a 46-class dataset, our model achieved an accuracy of 96.59%, precision of 96.79%, recall of 96.54%, and F1 score of 96.65% with 100% of the data in the downstream task. The results of further experiments, with smaller downstream label fractions, are shown in Table V.
### _Comparison with existing results_
The concept of self-supervised learning, and in particular domain adaptation-based self-supervised learning, applied to satellite imagery has yet to be fully explored. This work considers various methods applied to these datasets, including supervised learning-based methods. Tables VI, VII, and VIII compare the results obtained by the proposed work with existing binary and multi-class classification methods on satellite imagery.
\begin{table}
\begin{tabular}{|c|c|c|} \hline _Author_ & _Method_ & _Accuracy_ \\ \hline _[23]_ & _SVM_ & _98.8_ \\ \hline _[24]_ & _GIST_ & _46.9_ \\ \hline _[25]_ & _ResNet 50_ & _98_ \\ \hline _[26]_ & _DCNN_ & _93.48_ \\ \hline _[16]_ & _GoogleNet_ & _97.10_ \\ \hline _[14]_ & _Semisupervised ensemble projection_ & _66.49_ \\ \hline \multirow{2}{*}{_Our Results_} & _Self-supervised Domain Adaptation_ & \\ \cline{2-3} & **Self-supervised Same Dataset** & _98.75_ \\ \hline \end{tabular}
\end{table} TABLE VI: Comparative results of multi-class classification 21 class UCMD. (Top 2 results are shown)
\begin{table}
\begin{tabular}{|c|c|} \hline _Augmentations_ & _Value_ \\ \hline _Resize_ & _224 \(\times\) 224_ \\ \hline _Horizontal Flip_ & _P = 0.5_ \\ \hline _Vertical Flip_ & _P = 0.5_ \\ \hline _Rotation_ & _(-90, 90)_ \\ \hline _Grayscale_ & _P = 0.2_ \\ \hline _Gaussian Blur_ & _P = 0.51, Kernel size = [21, 21]_ \\ \hline \end{tabular}
\end{table} TABLE II: Augmentations for pretext task
\begin{table}
\begin{tabular}{|c|c|c|} \hline _Author_ & _Method_ & _Accuracy_ \\ \hline _[27]_ & _AlexNet SPP SS_ & _95.07_ \\ \hline _[28]_ & _MCNN_ & _93.75_ \\ \hline _[29]_ & _Inception-LSTM_ & _99.73_ \\ \hline _Our Results_ & _Self-Supervised Domain Adaptation_ & _96.87_ \\ \cline{2-3} & _Self-supervised Same Dataset_ & _99.68_ \\ \hline \end{tabular}
\end{table} TABLE VII: Comparative results for multi-class classification 12 class SIRI-WHU. (Top 2 results are shown)
\begin{table}
\begin{tabular}{|c|c|} \hline _Hyperparameters_ & _Value_ \\ \hline _Batch size_ & _256_ \\ \hline _Optimizer_ & _SGD_ \\ \hline _Momentum_ & _0.9, nesterov=True_ \\ \hline _Learning Rate_ & _0.0005_ \\ \hline _Weight decay_ & _0.0005_ \\ \hline \end{tabular}
\end{table} TABLE III: Hyperparameters for pretext task
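The settings in Tables II and III above translate directly into a torchvision augmentation pipeline and an SGD optimizer; the following is a minimal sketch under that reading.

```python
import torch
from torchvision import transforms

# Pretext augmentations following Table II (sketch).
pretext_aug = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=(-90, 90)),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(kernel_size=21)], p=0.51),
    transforms.ToTensor(),
])

# Pretext optimizer following Table III (sketch); `model` is the SimCLR network.
def make_optimizer(model):
    return torch.optim.SGD(model.parameters(), lr=0.0005, momentum=0.9,
                           nesterov=True, weight_decay=0.0005)
```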
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{**Dataset seed**} & \multicolumn{2}{c|}{_Accuracy_} & _Precision_ & _Recall_ & _F1-Score_ \\ \hline _Pretest_ & _Downstream_ & \multicolumn{2}{c|}{_96.34_} & _96.21_ & _96.56_ & _96.87_ \\ \hline \multirow{2}{*}{_UC Merced_} & _MLRSNet_ & _30\%_ & _93.18_ & _94.83_ & _94.76_ & _94.34_ \\ \cline{2-3} & _10\%_ & _92.23_ & _91.98_ & _91.73_ & _91.45_ \\ \hline \multirow{2}{*}{_UC Merced_} & _Multi-WUHU_ & _30\%_ & _96.87_ & _96.32_ & _96.34_ & _96.87_ \\ \cline{2-3} & _30\%_ & _94.99_ & _94.12_ & _94.76_ & _94.12_ \\ \hline \multirow{2}{*}{_MLRSNet_} & _10\%_ & _87.30_ & _87.43_ & _87.24_ & _87.37_ \\ \cline{2-3} & _10\%_ & _95.80_ & _88.54_ & _96.21_ & _98.32_ \\ \hline \multirow{2}{*}{_MLRSNet_} & _10\% Merced_ & _30\%_ & _96.01_ & _96.80_ & _96.79_ & _96.85_ \\ \cline{2-3} & _10\%_ & _93.22_ & _92.32_ & _92.38_ \\ \hline \multirow{2}{*}{_MLRSNet_} & _10\%_ & _93.50_ & _97.11_ & _96.98_ & _96.43_ \\ \cline{2-3} & _50\%_ & _96.24_ & _98.87_ & _96.22_ & _96.76_ \\ \hline \multirow{2}{*}{_MLRSNet_} & _10\%_ & _89.58_ & _89.90_ & _94.05_ & _93.94_ \\ \cline{2-3} & _10\%_ & _88.75_ & _98.21_ & _97.93_ & _98.53_ \\ \cline{2-3} & _30\%_ & _96.31_ & _96.80_ & _96.43_ & _96.21_ \\ \cline{2-3} & _10\%_ & _94.23_ & _94.89_ & _94.71_ & _94.39_ \\ \hline \multirow{2}{*}{_SIRI-WHU_} & _10\%_ & _97.87_ & _97.43_ & _97.54_ & _97.32_ \\ \cline{2-3} & _50\%_ & _94.40_ & _94.87_ & _94.91_ & _94.51_ \\ \hline \multirow{2}{*}{_SIRI-WHU_} & _10\%_ & _90.02_ & _90.83_ & _90.26_ & _90.73_ \\ \hline \end{tabular}
\end{table} TABLE IV: Results for Domain Adaptation on multiclassification
Based on the comparisons in Tables VI, VII, and VIII, the proposed work outperforms all previous works and achieves state-of-the-art results for multi-class classification of satellite imagery.
## V Discussions
This section discusses the key outcomes of the proposed work and provides analysis based on the achieved results on three datasets for different scenarios.
### _Self-supervised learnt representations are domain adaptable_
The results in Table IV clearly indicate that domain adaptation with different sources and targets achieves results comparable to the in-domain knowledge transfer presented in Table V. When comparing the domain adaptation results with ImageNet supervised knowledge transfer (see Tables IX & X), all of our models outperform it, indicating successful domain adaptation across the datasets. Following this trend, the proposed framework consistently outperforms an ImageNet-pretrained ResNet-50 on the given datasets over the complete range of label fractions from 10% to 100%, as shown in Figures 4, 5, and 6.
This trend holds across all three datasets, indicating that self-supervised representations enable efficient end-to-end learning.
### _Robust and explainable representations_
The quantitative results are corroborated by a qualitative analysis performed on all three datasets using activation maps. Figure 7 demonstrates the explainability and attention of self-supervised pretrained models compared with ImageNet supervised models, indicating that representations learnt in a self-supervised manner are more efficient and thus achieve higher performance on downstream tasks.
## VI Related Work
Over the past few years, self-supervised contrastive learning has emerged as a new training paradigm in which comprehensive representations can be learned without human annotation, addressing the shortage of annotated data. Research on self-supervised learning for satellite imagery remains limited. Here, the main focus is on deep learning methods applied to satellite imagery classification.
### _Supervised learning on satellite images_
R. Naushad et al. [13] proposed a transfer learning approach to classify land use and land cover on the EuroSAT dataset. Four CNN models were pre-trained: VGG-16 (without data augmentation), VGG-16 (with data augmentation), wide ResNet-50 (without data augmentation), and wide ResNet-50 (with data augmentation), achieving an accuracy of 99.17%. However, the approach was validated on only one dataset (EuroSAT) while using 100% of the data. At a large scale, CNN-based models were used by A. Albert et al. [14] to identify patterns in urban environments using satellite imagery. They used pre-trained VGG-16 and ResNet models for the classification task, obtaining different accuracies for different countries, with a highest accuracy of 83%; there remains room for improvement in the results. Another method was applied by M. Castelluccio et al. [16] for land use classification in remote sensing images, using pre-trained CaffeNet and GoogLeNet models that achieved accuracies of 90.17% and 91.83%, respectively. Although the authors achieved competitive results, better models could improve these numbers further.
X. Tan et al. proposed a multi-label classification approach on MLRSNet, a benchmark dataset of 46 classes, achieving an accuracy of 87.87% with DenseNet201-SR-Net [30]. However, there is still scope for improving accuracy, and DenseNet201 is computationally heavy and resource-intensive. S. Jog et al. [19] performed supervised classification of satellite images using the Landsat dataset with a support vector machine classifier, achieving an accuracy of 92.84%; however, this approach was tested on a single dataset and needs validation on other datasets to check the robustness of the model. M. Pritt et al. [22] used convolutional neural networks on the fMoW dataset for satellite image classification and achieved an accuracy of 83%, leaving much room for improvement.
Ozyurt, F. et al. [23] classified satellite images from the UC Merced dataset with a unique approach to feature extraction, using an SVM-based machine learning model to achieve an accuracy of 98.8%; however, their work focused on a single dataset only. Kadhim et al. [25] used a pre-trained ResNet-50 model on
Fig. 6: Comparison of accuracy of the proposed method for UCMD with supervised learning.
Fig. 7: Learning representation of the architecture using Class Activation Maps
the UC Merced dataset for the satellite image classification task and achieved an accuracy of 98%; however, the model was evaluated on a single dataset only. F. P. S. Laus et al. [26] used deep convolutional neural networks for land use classification on the UC Merced dataset and achieved an accuracy of 93.48%; though the methodology differs from existing methods, there is scope for improving accuracy.
X. Han et al. [27] used a pre-trained AlexNet model on the SIRI-WHU dataset for satellite image classification and obtained an accuracy of 95.07%; the accuracy could be improved further with better network architectures. Y. Liu et al. [28] worked on the SIRI-WHU dataset and used multiscale convolutional neural networks for scene classification using satellite images, achieving an accuracy of 93.75%; although the method differs from existing approaches, this work is based on a single dataset only and needs to be tested on others. Y. Dong et al. [29] classified satellite images from the SIRI-WHU dataset using an Inception-based LSTM approach with an accuracy of 99.73%; however, the approach was also tested on a single dataset only.
### _Self-supervision and domain adaptation on specialized visual domain_
Self-supervised methods on ImageNet and natural scenes have advanced in recent times, and there have also been considerable advancements in adapting self-supervised representation learning to other specialized visual domains. Self-supervised methods have shown progress in medical imaging, where data and human labels are limited: Chhipa et al. [33] demonstrated self-supervised domain adaptation on histopathology images, and Azizi et al. [34] showed knowledge transfer on X-rays. Other interesting applications include self-supervised methods on underwater images for fish counting (Tarling et al. [37]) and identifying mining materials from three-dimensional particle management sensors (Chhipa et al. [36]), showing progress in specialized domains.
### _Domain adaptation and self-supervised learning on satellite images_
A few non-supervised-learning-based methods for satellite image classification have been proposed in the literature. A semi-supervised learning-based approach was proposed by W. Yang et al. [15] to address the scarcity of images, achieving an accuracy of 73.82% on 19-class data and 65.34% on the UCMD dataset. However, there is scope for further improving these satellite image classification results.
A few self-supervised methods have also been used to classify remotely sensed satellite images, as proposed by V. Stojnic et al. [17], who used self-supervised methods with a pre-trained ImageNet model on MLRSNet and achieved an accuracy of 96%. Manas et al. [?] have shown self-supervised pretraining on remote sensing data using weather information. Yi Wang et al. [18] proposed a contrastive multiview coding (CMC) based approach for satellite image classification, where one image serves as an anchor and other images are neighbored around that anchor. They used pre-trained models for feature extraction with a large number of training samples; however, they did not validate the approach in cross-domain settings, i.e., learning representations from one satellite imagery dataset and performing downstream tasks on another.
From the above analysis of existing work, it can be observed that most existing supervised learning-based satellite image classification methods require a large amount of labeled data to perform satisfactorily. Only a few semi-supervised or self-supervised satellite image classification methods exist in the literature, and these use the same dataset for the pretext and downstream tasks without evaluation in cross-domain settings. To mitigate these research gaps, this work proposes a domain adaptation-based self-supervised representation learning approach that reuses the representations learned on one unlabelled source-domain dataset for classifying satellite images taken from a different target-domain dataset.
## VII Conclusion
This work proposed a domain-adaptable self-supervised representation learning framework focusing on the robust evaluation of learnt representations rather than one-directional knowledge transfer, thereby assessing the effectiveness and applicability of such methods in the satellite imagery visual domain. One significant outcome is the improved performance achieved by domain-adapted knowledge transfer across the datasets, outperforming existing satellite image classification methods even in cross-domain settings. By applying self-supervised representation learning, the proposed work surpasses existing results by 1% with less training data. The proposed evaluation framework is conveniently applicable to other, not yet thoroughly explored, visual domains where self-supervised representation learning could reduce human annotation needs. In future work, we aim to i) investigate domain adaptation for other computer vision downstream tasks, i.e., segmentation and localization; ii) investigate non-contrastive representation learning methods; and iii) identify candidate standard augmentation methods in self-supervised learning adapted to the remote sensing visual domain.
|
2305.18488 | A Bayesian sparse factor model with adaptive posterior concentration | In this paper, we propose a new Bayesian inference method for a
high-dimensional sparse factor model that allows both the factor dimensionality
and the sparse structure of the loading matrix to be inferred. The novelty is
to introduce a certain dependence between the sparsity level and the factor
dimensionality, which leads to adaptive posterior concentration while keeping
computational tractability. We show that the posterior distribution
asymptotically concentrates on the true factor dimensionality, and more
importantly, this posterior consistency is adaptive to the sparsity level of
the true loading matrix and the noise variance. We also prove that the proposed
Bayesian model attains the optimal detection rate of the factor dimensionality
in a more general situation than those found in the literature. Moreover, we
obtain a near-optimal posterior concentration rate of the covariance matrix.
Numerical studies are conducted and show the superiority of the proposed method
compared with other competitors. | Ilsang Ohn, Lizhen Lin, Yongdai Kim | 2023-05-29T12:41:32Z | http://arxiv.org/abs/2305.18488v1 | # A Bayesian sparse factor model with adaptive posterior concentration+
###### Abstract
In this paper, we propose a new Bayesian inference method for a high-dimensional sparse factor model that allows both the factor dimensionality and the sparse structure of the loading matrix to be inferred. The novelty is to introduce a certain dependence between the sparsity level and the factor dimensionality, which leads to adaptive posterior concentration while keeping computational tractability. We show that the posterior distribution asymptotically concentrates on the true factor dimensionality, and more importantly, this posterior consistency is adaptive to the sparsity level of the true loading matrix and the noise variance. We also prove that the proposed Bayesian model attains the optimal detection rate of the factor dimensionality in a more general situation than those found in the literature. Moreover, we obtain a near-optimal posterior concentration rate of the covariance matrix. Numerical studies are conducted and show the superiority of the proposed method compared with other competitors.
MSC2020 subject classifications: Primary 62F15; secondary 62G20.
**Keywords and phrases:** Adaptive inference, sparse factor models, factor dimensionality, covariance matrix estimation, optimal posterior concentration rate.
## 1 Introduction
In this paper, we propose a novel Bayesian method for learning a high-dimensional sparse linear factor model and study asymptotic concentration properties of the posterior distribution. We consider the following linear factor model where \(p\)-dimensional random vectors \(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n}\) are distributed as
\[\mathbf{Y}_{i}|(\mathbf{Z}_{i}=\mathbf{z}_{i})\stackrel{\mathrm{ind}}{\sim}\mathtt{N}(\mathbf{B}\mathbf{z}_{i},\psi\mathbf{I}),\quad\mathbf{Z}_{i}\stackrel{\mathrm{iid}}{\sim}\mathtt{N}(\mathbf{0},\mathbf{I}) \tag{1.1}\]
for \(i\in[n]\) with \(\mathbf{B}\) representing a \(p\times\xi\) factor loading matrix, \(\psi>0\) a noise variance and \(\mathbf{Z}_{i}\) a \(\xi\)-dimensional (latent) factor related to datum \(\mathbf{Y}_{i}\), where \(\xi\in\{1,\ldots,p-1\}\). Under this model, the marginal distribution of the data is given by \(\mathbf{Y}_{i}\stackrel{\mathrm{iid}}{\sim}\mathtt{N}(\mathbf{0},\mathbf{B}\mathbf{B}^{\top}+\psi\mathbf{I})\), so that the covariance matrix is the sum of a low-rank part and a noise part.
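As a quick sanity check of model (1.1) and this marginal covariance, the following NumPy sketch (with illustrative dimensions of our own choosing) simulates data and compares the sample covariance with \(\mathbf{B}\mathbf{B}^{\top}+\psi\mathbf{I}\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, xi, psi = 20000, 10, 3, 0.5            # illustrative sizes

B = rng.normal(size=(p, xi))                 # loading matrix B
Z = rng.normal(size=(n, xi))                 # latent factors Z_i ~ N(0, I)
Y = Z @ B.T + np.sqrt(psi) * rng.normal(size=(n, p))   # model (1.1)

Sigma = B @ B.T + psi * np.eye(p)            # marginal covariance of Y_i
print(np.abs(np.cov(Y, rowvar=False) - Sigma).max())   # small for large n
```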
a certain quantity converges to \(1\). But the upper bound is much larger than the true factor dimensionality. Ohn and Kim (2022) derived posterior consistency of the factor dimensionality under a spike and slab prior with the two-parameter IBP. However, their result is nonadaptive in the sense that the choice of hyperparameters of the prior distribution relies on the information or knowledge of the true sparsity of the loading matrix, which is unknown in practice.
Another promising theoretical result was provided by Gao and Zhou (2015). The authors studied the Bayesian factor model in the context of sparse principal component analysis (PCA), in which the prior distribution concentrates its mass on orthogonal loading matrices, i.e., \(\mathbf{B}^{\top}\mathbf{B}\) is diagonal, and established adaptive posterior consistency of the factor dimensionality. However, the orthogonality constraint makes it hard to compute the posterior distribution, and Gao and Zhou (2015) only succeeded in implementing a posterior sampler for the factor model with a one-dimensional factor, i.e., \(\xi=1\).
We propose a novel Bayesian model that overcomes the theoretical and practical limitations of the existing Bayesian approaches. A key feature of the proposed Bayesian model is that the sparsity and the factor dimensionality are negatively correlated under the prior distribution, and this a priori negative correlation helps to prevent overestimating the true factor dimensionality. This is a critical difference between the proposed prior and the widely used IBP-type priors. Yet, posterior computation can be carried out through a simple and efficient Markov chain Monte Carlo (MCMC) algorithm. Our numerical studies show that the developed MCMC algorithm applies to high-dimensional data without major obstacles.
We thoroughly investigate the theoretical properties of the posterior distribution of the proposed Bayesian model. We prove that the posterior distribution of the factor dimensionality converges to the true one. In particular, we prove that the proposed Bayesian model attains the optimal detection rate for the eigengap (i.e., the size of the smallest eigenvalue of the low rank part \(\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top}\) of the true covariance matrix \(\mathbf{\Sigma}^{\star}=\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top}+\psi^{ \star}\mathbf{I}\)) for the consistency of the factor dimensionality. We also show that the posterior distribution of the covariance matrix concentrates around the truth at a near optimal rate. The novelty of our results lies in that it does not require any prior knowledge of the true sparsity and noise variance and hence all the optimal theoretical properties of the posterior are _adaptive_ to the sparsity and noise variance.
It should be noted that the proposed Bayesian model has theoretical advantages not only over other Bayesian factor models but also over existing frequentist estimators of the factor dimensionality. The estimator of Cai et al. (2013) is adaptive to the true sparsity but requires a larger detection rate for the eigengap than the optimal one. On the other hand, the estimator of Cai et al. (2015) achieves the optimal detection rate but is not adaptive to the true sparsity. Moreover, both estimators assume known noise variance, which limits their applicability.
The rest of the paper is organized as follows. In Section 2, we introduce the proposed prior distribution and develop an efficient MCMC algorithm for sampling from the posterior distribution. In Section 3, asymptotic properties
of the posterior distribution are derived. In Section 4, we conduct simulation studies and real data analysis. Section 5 concludes the paper.
### Notation
Let \(\mathbb{R}\), \(\mathbb{R}_{+}\) and \(\mathbb{N}\) be the sets of real numbers, positive numbers and natural numbers, respectively. Let \(\mathbf{0}\) and \(\mathbf{1}\) denote vectors of \(0\)'s and of \(1\)'s, respectively, where the dimensions of such vectors can differ according to the context. For a positive integer \(p\), we let \([p]:=\{1,2,\ldots,p\}\). For a real number \(x\), \(\lfloor x\rfloor\) denotes the largest integer less than or equal to \(x\) and \(\lceil x\rceil\) denotes the smallest integer larger than or equal to \(x\). For two real numbers \(a\) and \(b\), we write \(a\lor b:=\max\{a,b\}\) and \(a\wedge b:=\min\{a,b\}\). For two positive sequences \(\{a_{n}\}_{n\in\mathbb{N}}\) and \(\{b_{n}\}_{n\in\mathbb{N}}\), we write \(a_{n}\lesssim b_{n}\), or equivalently \(b_{n}\gtrsim a_{n}\), if there exists a positive constant \(C>0\) such that \(a_{n}\leq Cb_{n}\) for any \(n\in\mathbb{N}\). Moreover, we write \(a_{n}\asymp b_{n}\) if both \(a_{n}\lesssim b_{n}\) and \(a_{n}\gtrsim b_{n}\) hold. We denote by \(\mathbb{1}(\cdot)\) the indicator function.
For a set \(\mathcal{S}\), \(|\mathcal{S}|\) denotes its cardinality. For a \(p\)-dimensional vector \(\boldsymbol{\beta}:=(\beta_{j})_{j\in[p]}\), let \(\|\boldsymbol{\beta}\|_{r}:=\left(\sum_{j=1}^{p}|\beta_{j}|^{r}\right)^{1/r}\) for \(r\geq 1\) and \(\|\boldsymbol{\beta}\|_{0}:=\sum_{j=1}^{p}\mathbb{1}(\beta_{j}\neq 0)\). For a set \(\mathcal{S}\subset\{1,\ldots,p\}\), define \(\boldsymbol{\beta}_{[\mathcal{S}]}:=(\beta_{j})_{j\in\mathcal{S}}\). For a \(p\times q\)-dimensional matrix \(\mathbf{A}:=(a_{jk})_{j\in[p],k\in[q]}\), we denote the spectral norm of the matrix \(\mathbf{A}\) by \(\|\mathbf{A}\|\) and the Frobenius norm by \(\|\mathbf{A}\|_{\mathrm{F}}\), that is, \(\|\mathbf{A}\|:=\sup_{\mathbf{x}\in\mathbb{R}^{q}:\|\mathbf{x}\|_{2}=1}\| \mathbf{A}\mathbf{x}\|_{2}\) and \(\|\mathbf{A}\|_{\mathrm{F}}:=\sqrt{\mathrm{Tr}(\mathbf{A}^{\top}\mathbf{A})}\). Let \(\|\mathbf{A}\|_{1}\) be the vector \(\ell_{1}\) norm of \(\mathbf{A}\), i.e., \(\|\mathbf{A}\|_{1}:=\sum_{j=1}^{p}\sum_{k=1}^{q}|a_{jk}|\). For sets \(\mathcal{S}\subset[p]\) and \(\mathcal{K}\subset[q]\), we let \(\mathbf{A}_{[\mathcal{S},\mathcal{K}]}:=(a_{jk})_{j\in\mathcal{S},k\in\mathcal{ K}}\) which is the submatrix of \(\mathbf{A}\) taking the rows in \(\mathcal{S}\) and columns in \(\mathcal{K}\). For notational simplicity, we write \(\mathbf{A}_{[:,\mathcal{K}]}:=\mathbf{A}_{[[p],\mathcal{K}]}\) and \(\mathbf{A}_{[\mathcal{S},:]}:=\mathbf{A}_{[\mathcal{S},[q]]}\). Furthermore, let \(\mathbf{A}_{[j,:]}:=\mathbf{A}_{[\{j\},:]}\) and \(\mathbf{A}_{[:,k]}:=\mathbf{A}_{[:,\{k\}]}\), which denote the \(j\)-th row and \(k\)-th column of \(\mathbf{A}\), respectively. We let \(\lambda_{1}(\boldsymbol{\Sigma})\geq\lambda_{2}(\boldsymbol{\Sigma})\cdots \geq\lambda_{p}(\boldsymbol{\Sigma})\) be the ordered eigenvalues and \(|\boldsymbol{\Sigma}|\) be the determinant of a \(p\times p\)-dimensional matrix \(\boldsymbol{\Sigma}\). Let \(\mathbb{S}_{++}^{p}\) be a set of \(p\times p\) symmetric positive definite matrices.
For a given probability measure \(G\), let \(\mathsf{P}_{G}\) denote the probability or the expectation operator under the probability measure \(G\). We denote by \(\mathsf{p}_{G}\) the probability density function of \(G\) with respect to the Lebesgue measure if it exists. For convenience, we write \(\mathsf{P}_{\boldsymbol{\Sigma}}:=\mathsf{P}_{\mathtt{N}(\mathbf{0},\boldsymbol{\Sigma})}\) and \(\mathsf{p}_{\boldsymbol{\Sigma}}:=\mathsf{p}_{\mathtt{N}(\mathbf{0},\boldsymbol{\Sigma})}\) for the normal distribution \(\mathtt{N}(\mathbf{0},\boldsymbol{\Sigma})\). For \(n\in\mathbb{N}\), let \(\mathsf{P}_{G}^{(n)}\) be the probability or the expectation under the product measure and, if it exists, \(\mathsf{p}_{G}^{(n)}\) its density function.
## 2 Proposed prior and MCMC algorithm
In this section, we design a novel prior tailored for the loading matrix in a factor model and develop a computationally efficient MCMC algorithm for sampling the posterior distribution.
### Prior distribution
The proposed prior on the loading matrix first samples the "sparse structure" of the loading matrix and then samples its nonzero elements. Let \(\mathbf{u}:=(u_{1},\ldots,u_{p})^{\top}\in\Delta_{p}:=\{0,1\}^{p}\setminus\{\mathbf{0}\}\) and, for a pre-specified positive integer \(q\in\mathbb{N}\), let \(\mathbf{v}:=(v_{1},\ldots,v_{q})^{\top}\in\Delta_{q}:=\{0,1\}^{q}\setminus\{\mathbf{0}\}\). These are latent indicator variables that determine the nonzero rows and columns of the loading matrix \(\mathbf{B}\), respectively. Note that the sparsity of the columns determines the factor dimensionality \(\xi\), i.e., \(\xi=\|\mathbf{v}\|_{0}\), and \(q\) is a pre-specified upper bound on the factor dimensionality. We impose the prior distribution on \(\mathbf{u}\) and \(\mathbf{v}\) such that
\[\Pi(\mathbf{u},\mathbf{v})= \frac{Q_{A}(\|\mathbf{u}\|_{0},\|\mathbf{v}\|_{0})}{\sum_{ \mathbf{u}^{\prime}\in\Delta_{p}}\sum_{\mathbf{v}^{\prime}\in\Delta_{q}}Q_{A} (\|\mathbf{u}^{\prime}\|_{0},\|\mathbf{v}^{\prime}\|_{0})}\mathbb{1}(\mathbf{ u}\in\Delta_{p},\mathbf{v}\in\Delta_{q}) \tag{2.1}\] \[\text{with }Q_{A}(\omega,\xi):=Q_{A,p,q,n}(\omega,\xi):=\frac{1}{ \binom{p}{\omega}\binom{q}{\xi}}\exp\bigl{(}-A\omega\xi\log(p\lor n)\bigr{)}\]
for some \(A>0\). Under this prior distribution, the non-sparsity \(\omega=\|\mathbf{u}\|_{0}\) and the factor dimensionality \(\xi=\|\mathbf{v}\|_{0}\) are negatively correlated in the sense that \(\omega\) tends to be smaller when \(\xi\) is large and vice versa. This sharply contrasts with existing IBP-type priors (Rockova and George, 2016; Ohn and Kim, 2022), which assume that \(\|\mathbf{u}\|_{0}\) and \(\|\mathbf{v}\|_{0}\) are independent. The IBP-type priors, however, are known to attain posterior consistency only when the true sparsity is known. We devise the prior (2.1) to achieve an optimal posterior concentration rate even when the true sparsity is unknown.
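To make the a priori negative correlation concrete, the following minimal sketch (ours, not part of the model specification) evaluates the unnormalized log prior mass \(\log Q_{A}(\omega,\xi)\) from (2.1) on a small grid; the particular values of \(p\), \(q\), \(n\) and \(A\) are illustrative only.

```python
import numpy as np
from scipy.special import gammaln

def log_Q_A(omega, xi, p, q, n, A):
    """Unnormalized log prior mass of (||u||_0, ||v||_0) = (omega, xi) in (2.1)."""
    log_binom_p = gammaln(p + 1) - gammaln(omega + 1) - gammaln(p - omega + 1)
    log_binom_q = gammaln(q + 1) - gammaln(xi + 1) - gammaln(q - xi + 1)
    # 1 / (C(p, omega) * C(q, xi)) * exp(-A * omega * xi * log(p v n))
    return -log_binom_p - log_binom_q - A * omega * xi * np.log(max(p, n))

# The penalty -A * omega * xi * log(p v n) grows with the product omega * xi,
# so a larger factor dimensionality xi shifts prior mass toward smaller omega.
p, q, n, A = 1000, 10, 100, 0.1
for omega in (10, 30, 50):
    print(omega, [round(float(log_Q_A(omega, xi, p, q, n, A)), 1)
                  for xi in (1, 3, 5)])
```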
Conditional on \(\mathbf{u}\) and \(\mathbf{v}\), we then impose the prior distribution of the loading matrix \(\mathbf{B}\) such that
\[\Pi(\mathbf{B}\in\mathcal{B}|\mathbf{u},\mathbf{v})=\int_{\mathcal{B}}\prod_{ j=1}^{p}\prod_{k=1}^{q}\left[\delta_{0}(\beta_{jk})\right]^{1-u_{j}v_{k}} \left[\mathsf{p}_{\mathtt{Lap}(1)}(\beta_{jk})\right]^{u_{j}v_{k}}\mathrm{d} \beta_{jk} \tag{2.2}\]
for any measurable set \(\mathcal{B}\subset\mathbb{R}^{p\times q}\), where \(\delta_{0}\) denotes the Dirac-delta function at \(0\) and \(\mathtt{Lap}(1)\) the Laplace distribution with scale \(1\). That is, independently for each loading \(\beta_{jk}\), we consider a spike and slab type prior distribution with a Dirac spike and a Laplace slab. The Laplace slab, which is more diffuse than the normal distribution, is commonly used in Bayesian sparse factor models (Rockova and George, 2016; Xie et al., 2022; Ning, 2021; Ohn and Kim, 2022) in order to reduce bias in the estimation of large loadings.
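As a small illustration, a draw of the loading matrix from the conditional prior (2.2) given the indicators can be sketched as follows; the function and variable names are our own.

```python
import numpy as np

def sample_loading_prior(u, v, rng):
    """Draw B from (2.2): beta_jk ~ Lap(1) if u_j * v_k = 1, else beta_jk = 0."""
    mask = np.outer(u, v)                                 # (p, q) entries u_j * v_k
    B = rng.laplace(loc=0.0, scale=1.0, size=mask.shape)  # Laplace slab draws
    return B * mask                                       # Dirac spike elsewhere

rng = np.random.default_rng(0)
u = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # indicator of nonzero rows (not all zero)
v = np.array([1, 0, 1])                 # indicator of nonzero columns; xi = 2
B = sample_loading_prior(u, v, rng)
```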
Lastly, we consider the inverse Gamma prior distribution for \(\psi\) such that
\[\Pi(\psi\in\mathcal{B}^{\prime})=\mathsf{P}_{\mathtt{IG}(\mathbf{a})}(\psi\in \mathcal{B}^{\prime}). \tag{2.3}\]
for any measurable set \(\mathcal{B}^{\prime}\subset\mathbb{R}\), where \(\mathtt{IG}(\mathbf{a})\) with \(\mathbf{a}:=(a_{1},a_{2})\in\mathbb{R}_{+}^{2}\) denotes the inverse gamma distribution with shape \(a_{1}\) and rate \(a_{2}\).
The proposed prior is carefully designed to attain posterior consistency of the factor dimensionality adaptively to the sparsity; we therefore call the prior defined through (2.1), (2.2) and (2.3) the _adaptive spike and slab (AdaSS)_ prior.
### Posterior computation
In this section, we develop an MCMC algorithm to compute the posterior distribution of the parameters \(\mathbf{u}:=(u_{j})_{j\in[p]}\), \(\mathbf{v}:=(v_{k})_{k\in[q]}\), \(\mathbf{B}:=(\beta_{jk})_{j\in[p],k\in[q]}\), \(\psi\) and the latent factors \(\mathbf{Z}_{1},\ldots,\mathbf{Z}_{n}\). We first introduce additional notations. Let \(Y_{ij}\) be the \(j\)-th element of \(\mathbf{Y}_{i}\) and \(Z_{ik}\) be the \(k\)-th element of \(\mathbf{Z}_{i}\). Let \(\mathcal{S}:=\{j\in[p]:u_{j}=1\}\) and \(\mathcal{K}:=\{k\in[q]:v_{k}=1\}\). We use the notation \(f(Y|X=x)\) to denote the conditional density of \(Y\) given \(X=x\).
To make posterior sampling of the factor loading \(\beta_{jk}\) easy, we employ the scale mixture representation of the Laplace distribution. Note that if \(\beta_{jk}|\tau_{jk}\sim\mathtt{N}(0,\tau_{jk})\) and \(\tau_{jk}\sim\mathtt{Exp}(1/2)\), then marginally we have \(\beta_{jk}\sim\mathtt{Lap}(1)\), where \(\mathtt{Exp}(1/2)\) stands for the exponential distribution with mean \(2\). In the MCMC algorithm, we introduce auxiliary scale parameters \(\tau_{jk}\) for \(j\in[p]\) and \(k\in[q]\).
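A quick Monte Carlo check of this scale-mixture representation (our own sketch, not part of the sampler) is given below; the two sets of quantiles should agree up to simulation noise.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 200_000

# Hierarchical draw: tau ~ Exp(1/2) (i.e., mean 2), then beta | tau ~ N(0, tau).
tau = rng.exponential(scale=2.0, size=m)
beta_mix = rng.normal(0.0, np.sqrt(tau))

# Direct draw from the marginal Lap(1).
beta_lap = rng.laplace(0.0, 1.0, size=m)

qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(beta_mix, qs).round(2))
print(np.quantile(beta_lap, qs).round(2))
```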
Then a single iteration of our proposed MCMC sampler goes as follows.
**Sample \(\beta_{jk}\) for \(j\in[p]\) and \(k\in[q]\):** We sample \(\beta_{jk}\) from
\[\beta_{jk}|-\sim\begin{cases}\mathtt{N}(\hat{\beta}_{jk},\hat{\tau}_{jk})& \text{if $u_{j}=1$ and $v_{k}=1$}\\ \delta_{0}&\text{otherwise,}\end{cases}\]
where
\[\widehat{\beta}_{jk} :=\widehat{\tau}_{jk}\left\{\psi^{-1}\sum_{i=1}^{n}Z_{ik}\left(Y_ {ij}-\sum_{h\in[q]:h\neq k}Z_{ih}\beta_{jh}\right)\right\} \tag{2.4}\] \[\widehat{\tau}_{jk} :=\left(\psi^{-1}\sum_{i=1}^{n}Z_{ik}^{2}+\tau_{jk}^{-1}\right)^{ -1}. \tag{2.5}\]
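A minimal sketch of this update (our own conventions: \(\mathbf{Y}\) stored as an \(n\times p\) array, \(\mathbf{Z}\) as \(n\times q\), and \(\mathcal{K}\) as a list of active column indices) might look as follows.

```python
import numpy as np

def sample_beta_jk(j, k, Y, Z, B, tau, u, v, psi, K, rng):
    """Draw beta_jk from its conditional posterior via (2.4)-(2.5)."""
    if not (u[j] and v[k]):
        return 0.0                                  # Dirac spike
    others = [h for h in K if h != k]               # beta_jh = 0 off K anyway
    resid = Y[:, j] - Z[:, others] @ B[j, others]
    tau_hat = 1.0 / (Z[:, k] @ Z[:, k] / psi + 1.0 / tau[j, k])   # (2.5)
    beta_hat = tau_hat * (Z[:, k] @ resid) / psi                  # (2.4)
    return rng.normal(beta_hat, np.sqrt(tau_hat))
```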
**Sample \(\tau_{jk}\) for \(j\in[p]\) and \(k\in[q]\):** We sample \(\tau_{jk}\) from
\[\tau_{jk}|-\sim\begin{cases}\mathtt{GIG}(1,\beta_{jk}^{2},\frac{1}{2})&\text {if $u_{j}=1$},\\ \mathtt{Exp}(\frac{1}{2})&\text{otherwise,}\end{cases} \tag{2.6}\]
where \(\mathtt{GIG}(a,b,c)\) denotes the generalized inverse Gaussian distribution with density \(\mathtt{p}_{\mathtt{GIG}(a,b,c)}(z)\propto z^{c-1}\mathrm{e}^{-(az+b/z)/2} \mathbbm{1}(z>0)\).
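Sampling from this conditional can be done with SciPy's `geninvgauss`; the mapping between the \(\mathtt{GIG}(a,b,c)\) parameterization used here and SciPy's is our own derivation and should be double-checked before use. SciPy's `geninvgauss(p, b, scale=s)` has density proportional to \((z/s)^{p-1}\exp(-b((z/s)+(s/z))/2)\), so matching with \(\mathtt{GIG}(1,\beta_{jk}^{2},1/2)\) gives \(p=1/2\) and \(b=s=|\beta_{jk}|\).

```python
import numpy as np
from scipy.stats import geninvgauss

def sample_tau_jk(beta_jk, u_j, rng):
    """Draw tau_jk from (2.6)."""
    if u_j:
        if beta_jk != 0.0:
            return geninvgauss.rvs(p=0.5, b=abs(beta_jk),
                                   scale=abs(beta_jk), random_state=rng)
        # b -> 0 limit of GIG(1, b, 1/2) is Gamma(shape 1/2, rate 1/2);
        # this case arises when v_k = 0 so that beta_jk = 0.
        return rng.gamma(shape=0.5, scale=2.0)
    return rng.exponential(scale=2.0)               # Exp(1/2), mean 2
```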
**Sample \(u_{j}\) for \(j\in[p]\):** If \(\mathcal{S}\setminus\{j\}=\emptyset\), we set \(u_{j}=1\). Otherwise, we sample \(u_{j}\) from \(\mathtt{Bernoulli}(\rho_{j}^{\text{row}}/(1+\rho_{j}^{\text{row}}))\) with
\[\begin{split}\rho_{j}^{\text{row}}&:=\frac{\Pi(u_{j}=1|-)}{\Pi(u_{j}=0|-)}\\ &=\frac{Q_{A}(|\mathcal{S}\setminus\{j\}|+1,|\mathcal{K}|)}{Q_{A}(|\mathcal{S}\setminus\{j\}|,|\mathcal{K}|)}\frac{\prod_{i=1}^{n}f(\mathbf{Y}_{i}|u_{j}=1,\mathbf{v},\mathbf{T}_{j,\mathcal{K}},\mathbf{B}_{[-j,:]},\psi,\mathbf{Z}_{i})}{\prod_{i=1}^{n}f(\mathbf{Y}_{i}|u_{j}=0,\mathbf{B}_{[-j,:]},\psi,\mathbf{Z}_{i})}\\ &=(p\lor n)^{-A|\mathcal{K}|}\frac{\binom{p}{|\mathcal{S}\setminus\{j\}|}}{\binom{p}{|\mathcal{S}\setminus\{j\}|+1}}\frac{\int\prod_{i=1}^{n}f(\mathbf{Y}_{i}|\mathbf{B},\psi,\mathbf{Z}_{i})\prod_{k=1}^{q}\left\{\mathsf{p}_{\mathtt{N}(0,\tau_{jk})}(\beta_{jk})\right\}^{v_{k}}\left\{\delta_{0}(\beta_{jk})\right\}^{1-v_{k}}\mathrm{d}\beta_{jk}}{\prod_{i=1}^{n}f(\mathbf{Y}_{i}|\mathbf{B}_{[j,:]}=\mathbf{0},\mathbf{B}_{[-j,:]},\psi,\mathbf{Z}_{i})}\\ &=(p\lor n)^{-A|\mathcal{K}|}\frac{|\mathcal{S}\setminus\{j\}|+1}{p-|\mathcal{S}\setminus\{j\}|}\sqrt{\frac{|\hat{\mathbf{T}}_{j,\mathcal{K}}|}{|\mathbf{T}_{j,\mathcal{K}}|}}\exp\left(\frac{1}{2}\hat{\boldsymbol{\beta}}_{j,\mathcal{K}}^{\top}\hat{\mathbf{T}}_{j,\mathcal{K}}^{-1}\hat{\boldsymbol{\beta}}_{j,\mathcal{K}}\right),\end{split} \tag{2.7}\]
where \(\mathbf{B}_{[-j,:]}:=\mathbf{B}_{[[p]\setminus\{j\},:]}\), \(\mathbf{T}_{j,\mathcal{K}}:=\textsc{diag}((\tau_{jk})_{k\in\mathcal{K}})\), \(\hat{\mathbf{T}}_{j,\mathcal{K}}:=\textsc{diag}((\hat{\tau}_{jk})_{k\in \mathcal{K}})\) and \(\hat{\boldsymbol{\beta}}_{j,\mathcal{K}}:=(\hat{\beta}_{jk})_{k\in\mathcal{K}}\).
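In practice one would evaluate the final expression in (2.7) on the log scale; since \(\hat{\mathbf{T}}_{j,\mathcal{K}}\) and \(\mathbf{T}_{j,\mathcal{K}}\) are diagonal, all terms reduce to elementwise sums. A sketch under our own naming conventions (the update for \(v_{k}\) below is symmetric):

```python
import numpy as np

def log_rho_row(s_minus_j, K_size, beta_hat, tau_hat, tau, p, n, A):
    """Log-odds log(rho_j^row) from the last line of (2.7).

    beta_hat, tau_hat, tau: length-|K| arrays holding (2.4), (2.5) and the
    current scale parameters tau_jk for row j on the active columns K.
    s_minus_j: |S \ {j}|, the number of other active rows.
    """
    out = -A * K_size * np.log(max(p, n))               # (p v n)^(-A |K|)
    out += np.log(s_minus_j + 1) - np.log(p - s_minus_j)
    out += 0.5 * np.sum(np.log(tau_hat) - np.log(tau))  # sqrt(|T_hat| / |T|)
    out += 0.5 * np.sum(beta_hat ** 2 / tau_hat)        # diagonal quadratic form
    return out

# u_j is then Bernoulli(rho / (1 + rho)) = Bernoulli(sigmoid(log_rho)), e.g.,
# u_j = int(rng.random() < 1.0 / (1.0 + np.exp(-log_rho)))
```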
**Sample \(v_{k}\) for \(k\in[q]\).** If \(\mathcal{K}\setminus\{k\}=\emptyset\), we set \(v_{k}=1\). Otherwise, we sample \(v_{k}\) from \(\mathtt{Bernoulli}(\rho_{k}^{\text{col}}/(1+\rho_{k}^{\text{col}}))\) with
\[\begin{split}\rho_{k}^{\text{col}}&:=\frac{\Pi(v_{k}=1|-)}{\Pi(v_{k}=0|-)}\\ &=\frac{Q_{A}(|\mathcal{S}|,|\mathcal{K}\setminus\{k\}|+1)}{Q_{A}(|\mathcal{S}|,|\mathcal{K}\setminus\{k\}|)}\frac{\prod_{i=1}^{n}f(\mathbf{Y}_{i}|v_{k}=1,\mathbf{u},\mathbf{T}_{\mathcal{S},k},\mathbf{B}_{[:,-k]},\psi,\mathbf{Z}_{i})}{\prod_{i=1}^{n}f(\mathbf{Y}_{i}|v_{k}=0,\mathbf{B}_{[:,-k]},\psi,\mathbf{Z}_{i})}\\ &=(p\lor n)^{-A|\mathcal{S}|}\frac{\binom{q}{|\mathcal{K}\setminus\{k\}|}}{\binom{q}{|\mathcal{K}\setminus\{k\}|+1}}\frac{\int\prod_{i=1}^{n}f(\mathbf{Y}_{i}|\mathbf{B},\psi,\mathbf{Z}_{i})\prod_{j=1}^{p}\left\{\mathsf{p}_{\mathtt{N}(0,\tau_{jk})}(\beta_{jk})\right\}^{u_{j}}\left\{\delta_{0}(\beta_{jk})\right\}^{1-u_{j}}\mathrm{d}\beta_{jk}}{\prod_{i=1}^{n}f(\mathbf{Y}_{i}|\mathbf{B}_{[:,k]}=\mathbf{0},\mathbf{B}_{[:,-k]},\psi,\mathbf{Z}_{i})}\\ &=(p\lor n)^{-A|\mathcal{S}|}\frac{|\mathcal{K}\setminus\{k\}|+1}{q-|\mathcal{K}\setminus\{k\}|}\sqrt{\frac{|\hat{\mathbf{T}}_{\mathcal{S},k}|}{|\mathbf{T}_{\mathcal{S},k}|}}\exp\left(\frac{1}{2}\hat{\boldsymbol{\beta}}_{\mathcal{S},k}^{\top}\hat{\mathbf{T}}_{\mathcal{S},k}^{-1}\hat{\boldsymbol{\beta}}_{\mathcal{S},k}\right),\end{split} \tag{2.8}\]
where \(\mathbf{B}_{[:,-k]}:=\mathbf{B}_{[:,[q]\setminus\{k\}]}\), \(\mathbf{T}_{\mathcal{S},k}:=\textsc{diag}((\tau_{jk})_{j\in\mathcal{S}})\), \(\hat{\mathbf{T}}_{\mathcal{S},k}:=\textsc{diag}((\hat{\tau}_{jk})_{j\in \mathcal{S}})\) and \(\hat{\boldsymbol{\beta}}_{\mathcal{S},k}:=(\hat{\beta}_{jk})_{j\in\mathcal{S}}\).
**Sample \(\mathbf{Z}_{i}\) for \(i\in[n]\):** We sample \(\mathbf{Z}_{i}:=(Z_{ik})_{k\in[q]}\) from
\[\begin{split}(Z_{ik})_{k\in\mathcal{K}}|&-\sim\mathsf{N}\left(\widehat{\boldsymbol{\alpha}}_{\mathcal{K}}\mathbf{Y}_{i},\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}\right)\\ (Z_{ik})_{k\in[q]\setminus\mathcal{K}}|&-\sim\mathsf{N}\left(\mathbf{0},\mathbf{I}\right),\end{split}\]
where
\[\widehat{\boldsymbol{\alpha}}_{\mathcal{K}} :=\psi^{-1}\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}\mathbf{B}_{[ :,\mathcal{K}]}^{\top} \tag{2.9}\] \[\widehat{\boldsymbol{\Xi}}_{\mathcal{K}} :=\left(\psi^{-1}\mathbf{B}_{[:,\mathcal{K}]}^{\top}\mathbf{B}_{[ :,\mathcal{K}]}+\mathbf{I}\right)^{-1}. \tag{2.10}\]
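Since \(\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}\) is shared across observations, all \(n\) factor draws can be vectorized; a sketch with our own array layout (\(\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}\) is symmetric, so the posterior mean of \((Z_{ik})_{k\in\mathcal{K}}\) is \(\psi^{-1}\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}\mathbf{B}_{[:,\mathcal{K}]}^{\top}\mathbf{Y}_{i}\)):

```python
import numpy as np

def sample_factors(Y, B, psi, K, rng):
    """Draw Z_1, ..., Z_n from (2.9)-(2.10); K lists the active columns."""
    n, q = Y.shape[0], B.shape[1]
    B_K = B[:, K]                                              # (p, |K|)
    Xi = np.linalg.inv(B_K.T @ B_K / psi + np.eye(len(K)))     # (2.10)
    mean = (Y @ B_K / psi) @ Xi                                # (n, |K|)
    L = np.linalg.cholesky(Xi)
    Z = rng.standard_normal((n, q))                 # inactive columns ~ N(0, I)
    Z[:, K] = mean + rng.standard_normal((n, len(K))) @ L.T
    return Z
```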
**Sample \(\psi\):** We sample \(\psi\) from
\[\psi|-\sim\texttt{IG}\left(a_{1}+\frac{np}{2},a_{2}+\frac{1}{2}\sum_{i=1}^{n}\sum_ {j=1}^{p}\left(Y_{ij}-\sum_{k\in\mathcal{K}}Z_{ik}\beta_{jk}\right)^{2}\right).\]
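The inverse-gamma draw reduces to a gamma draw of the precision: if \(G\sim\mathtt{Gamma}(a_{1},\text{rate }a_{2})\), then \(1/G\sim\mathtt{IG}(a_{1},a_{2})\). A sketch with our own conventions:

```python
import numpy as np

def sample_psi(Y, Z, B, K, a, rng):
    """Draw psi from its inverse-gamma conditional posterior."""
    n, p = Y.shape
    resid = Y - Z[:, K] @ B[:, K].T          # residuals over active columns
    shape = a[0] + n * p / 2.0
    rate = a[1] + 0.5 * np.sum(resid ** 2)
    return 1.0 / rng.gamma(shape, 1.0 / rate)   # numpy's scale is 1 / rate
```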
We provide several remarks on possible extensions of our Bayesian factor models.
**Remark 1** (Extension to heterogeneous noise variances).: The AdaSS prior can be easily modified for a factor model with heterogeneous noise variances, under which the covariance matrix of the observed variable \(\mathbf{Y}_{i}\) is decomposed as \(\texttt{Var}(\mathbf{Y}_{i})=\mathbf{B}\mathbf{B}^{\top}+\mathbf{\Psi}\) with \(\mathbf{\Psi}:=\textsc{diag}(\psi_{1},\ldots,\psi_{p})\). In this situation, a standard choice of the prior distribution on \(\mathbf{\Psi}\) is a product of the inverse gamma distributions, that is,
\[\psi_{j}\stackrel{{\mathrm{ind}}}{{\sim}}\texttt{IG}(\mathbf{a}_ {j})\]
for some \(\mathbf{a}_{j}:=(a_{j1},a_{j2})\in\mathbb{R}_{+}^{2}\) for \(j\in[p]\). Then the conditional posterior of \(\psi_{j}\) is given by
\[\psi_{j}|-\sim\texttt{IG}\left(a_{j1}+\frac{n}{2},a_{j2}+\frac{1}{2}\sum_{i=1 }^{n}\left(Y_{ij}-\sum_{k\in\mathcal{K}}Z_{ik}\beta_{jk}\right)^{2}\right).\]
For posterior sampling of other parameters, the calculations given in (2.4)-(2.10) are modified as \(\widehat{\beta}_{jk}:=\widehat{\tau}_{jk}\left\{\psi_{j}^{-1}\sum_{i=1}^{n}Z_{ik}\left(Y_{ij}-\sum_{h\in[q]:h\neq k}Z_{ih}\beta_{jh}\right)\right\}\), \(\widehat{\tau}_{jk}:=\left(\psi_{j}^{-1}\sum_{i=1}^{n}Z_{ik}^{2}+\tau_{jk}^{-1}\right)^{-1}\), \(\widehat{\boldsymbol{\alpha}}_{\mathcal{K}}:=\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}\mathbf{B}_{[:,\mathcal{K}]}^{\top}\mathbf{\Psi}^{-1}\) and \(\widehat{\boldsymbol{\Xi}}_{\mathcal{K}}:=\left(\mathbf{B}_{[:,\mathcal{K}]}^{\top}\mathbf{\Psi}^{-1}\mathbf{B}_{[:,\mathcal{K}]}+\mathbf{I}\right)^{-1},\) respectively.
**Remark 2** (Extension to correlated factors).: The factor model we investigated assumes that all components of the latent factor are independent. A more general model would be the correlated factor model such that \(\mathbf{Y}_{i}|(\mathbf{Z}_{i}=\mathbf{z}_{i})\stackrel{{\mathrm{ ind}}}{{\sim}}\texttt{N}(\mathbf{B}\mathbf{z}_{i},\psi\mathbf{I})\) and \(\mathbf{Z}_{i}\stackrel{{\mathrm{iid}}}{{\sim}}\texttt{N}( \mathbf{0},\boldsymbol{\Sigma}_{\mathbf{z}})\) for some \(\boldsymbol{\Sigma}_{\mathbf{z}}\in\mathbb{S}_{++}^{q}\), and our AdaSS prior can be easily modified for this model. If we impose the inverse Wishart prior \(\texttt{IW}(\mathbf{A},\nu)\) with scale matrix \(\mathbf{A}\in\mathbb{S}_{++}^{q}\) and the degrees of freedom \(\nu>0\) on \(\boldsymbol{\Sigma}_{\mathbf{z}}\), then we can sample \(\boldsymbol{\Sigma}_{\mathbf{z}}\) from the conditional posterior
\[\boldsymbol{\Sigma}_{\mathbf{z}}|-\sim\texttt{IW}\left(\mathbf{A}+\sum_{i=1}^ {n}\mathbf{z}_{i}\mathbf{z}_{i}^{\top},\nu+n\right),\]
while the other posterior sampling schemes remain the same. Unfortunately, the theoretical results in this paper cannot be applied directly to the correlated factor model, since correlated factors do not guarantee the sparsity pattern of the loading matrix required to achieve optimal posterior concentration rates.
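The inverse-Wishart draw in Remark 2 is available in SciPy; a sketch of this step (the wrapper function is ours):

```python
import numpy as np
from scipy.stats import invwishart

def sample_factor_cov(Z, A_scale, nu, rng):
    """Draw Sigma_z from IW(A + sum_i z_i z_i^T, nu + n)."""
    n = Z.shape[0]
    return invwishart.rvs(df=nu + n, scale=A_scale + Z.T @ Z, random_state=rng)
```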
**Remark 3** (Post-processing for estimating loading matrices).: The loading matrix \(\mathbf{B}\) is not identifiable since for any \(q\times q\) orthogonal matrix \(\mathbf{Q}\) with \(\mathbf{Q}\mathbf{Q}^{\top}=\mathbf{I}\) the transformed loading matrix \(\mathbf{B}\mathbf{Q}\) yields exactly the same likelihood as that of \(\mathbf{B}\). Consequently, additional effort is needed if we are interested in estimating the loading matrix. A number of methods have been proposed to resolve this identifiability issue for Bayesian factor analysis. One approach is to impose a prior distribution on the loading matrix satisfying certain identifiability constraints such as the positive diagonal, lower triangular (PLT) constraint (Lopes and West, 2004; Ghosh and Dunson, 2009; Leung and Drton, 2016; Man and Culpepper, 2022). But as pointed out by Carvalho et al. (2008) and Assmann et al. (2016), the posterior distribution obtained under the PLT constraint may not be invariant to the ordering of the observed variables. An alternative approach is to post-process MCMC samples to make \(\mathbf{B}\) identifiable (Assmann et al., 2016; Papastamoulis and Ntzoufras, 2022), which is directly applicable to our Bayesian model. We illustrate the effectiveness of this approach by analyzing a toy example in Appendix C.2.
## 3 Asymptotic properties of the posterior distribution
We study frequentist properties of the posterior distribution induced by the proposed AdaSS prior distribution. Throughout this section, we assume that the number of columns \(q\) of the loading matrix \(\mathbf{B}\) of our Bayesian model is taken to be sufficiently large so that it is at least the true factor dimensionality. Given data \(\mathbf{Y}_{1:n}:=(\mathbf{Y}_{1},\ldots,\mathbf{Y}_{n})\), we denote by \(\Pi(\cdot|\mathbf{Y}_{1:n})\) the posterior distribution under the AdaSS prior \(\Pi\). Proofs of all results in this section are deferred to Appendix A in the supplementary material.
### Class of covariance matrices
We first define a class of matrices to which the true covariance matrix belongs. We denote by \(r\) the _true factor dimensionality_. To deal with very high dimensional cases where the dimension \(p\) is much larger than the sample size \(n\), we impose sparsity on the loading matrix. Specifically, for a loading matrix \(\mathbf{B}^{\star}\in\mathbb{R}^{p\times r}\), we define its (row) support by
\[\textsc{supp}(\mathbf{B}^{\star}):=\left\{j\in[p]:\mathbf{B}^{\star}_{[j,:]} \neq\mathbf{0}\right\}.\]
We say that the loading matrix \(\mathbf{B}^{\star}\) is \(s\)_-sparse_ if \(|\textsc{supp}(\mathbf{B}^{\star})|\leq s\) and let
\[\mathcal{B}(p,r,s):=\left\{\mathbf{B}^{\star}\in\mathbb{R}^{p\times r}:| \textsc{supp}(\mathbf{B}^{\star})|\leq s\right\}\]
be a set of \(p\times r\)-dimensional \(s\)-sparse (loading) matrices. The parameter space for the covariance matrix we consider throughout the paper is given by
\[\begin{split}\mathcal{C}(p,r,s,\lambda,\zeta):=\Big\{\mathbf{\Sigma}^{\star}&=\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top}+\psi^{\star}\mathbf{I}:\mathbf{B}^{\star}\in\mathcal{B}(p,r,s),\ \lambda_{1}(\mathbf{\Sigma}^{\star})\leq\lambda,\ \psi^{\star}\geq\psi_{0},\\ &\lambda_{r}(\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top})\geq\zeta,\ \min_{j\in\textsc{supp}(\mathbf{B}^{\star})}\left\lVert\mathbf{B}^{\star}\left(\mathbf{B}^{\star}_{[j,:]}\right)^{\top}\right\rVert_{2}\geq\zeta\Big\}\end{split} \tag{3.1}\]
for some arbitrarily small constant \(\psi_{0}>0\). We discuss the implications of the conditions determining the class \(\mathcal{C}(p,r,s,\lambda,\zeta)\).
* As we will show in Section 3.2 (Theorem 3), the posterior concentration rate of the covariance matrix depends on the dimension \(p\), sparsity \(s\), factor dimensionality \(r\) and the upper bound \(\lambda\) of the largest eigenvalue, but not on \(\zeta\) (which is needed only for the consistency of the factor dimensionality) or on \(\psi_{0}\).
* Our parameter space (3.1) includes loading matrices whose row support is sparse, which is also considered in Cai et al. (2013, 2015); Xie et al. (2022); Ning (2021). On the other hand, Pati et al. (2014); Gao and Zhou (2015); Rockova and George (2016); Ohn and Kim (2022) consider the sparsity of the column support, which means that the number of nonzero entries of each column vector is at most \(s\). Note that \(s\)-sparse column support implies \(sr\)-sparse row support, and hence the column and row support sparsities have the same order \(s\) as \(s\to\infty\) when \(r\) is bounded.
* The lower bound \(\psi\geq\psi_{0}\), which is assumed in Pati et al. (2014); Ohn and Kim (2022), is introduced to avoid ill-conditioned covariance matrices. In contrast, Cai et al. (2013, 2015); Gao and Zhou (2015); Rockova and George (2016); Xie et al. (2022); Ning (2021) assume that \(\psi\) is fixed in their theories.
* The condition \(\lambda_{r}(\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top})\geq\zeta\) yields the eigengap between spikes and noises, which prevents underestimation of the factor dimensionality. The same condition is assumed by Cai et al. (2013, 2015); Gao and Zhou (2015); Ohn and Kim (2022).
* Note that \(\left\lVert\mathbf{B}^{\star}\left(\mathbf{B}^{\star}_{[j,:]}\right)^{\top}\right\rVert_{2}\geq\left\lVert\mathbf{B}^{\star}_{[j,:]}\right\rVert_{2}^{2}\), thus the condition \(\min_{j\in\textsc{supp}(\mathbf{B}^{\star})}\left\lVert\mathbf{B}^{\star}\left(\mathbf{B}^{\star}_{[j,:]}\right)^{\top}\right\rVert_{2}\geq\zeta\) is met when the magnitudes of the nonzero rows do not vanish too quickly. This condition enables accurate estimation of the sparsity level of the true loading matrix. It is similar to the beta-min condition (see Section 7.4 of Buhlmann and Van De Geer (2011)) in high-dimensional sparse regression models. We believe that this condition is indispensable unless the column vectors are assumed to be orthogonal (Cai et al., 2013, 2015; Gao and Zhou, 2015) or the true sparsity level \(s\) is known (Ohn and Kim, 2022).
Note that we allow the model parameters \(p\), \(s\), \(r\), \(\lambda\) and \(\zeta\) to depend on \(n\), but we suppress the subscript \(n\) on these quantities (e.g., we write \(p\) instead of \(p_{n}\)) for notational simplicity.
### Posterior consistency of the factor dimensionality
In this section, we explore asymptotic properties of the posterior distribution of the factor dimensionality in the sparse factor model. For a loading matrix \(\mathbf{B}\in\mathbb{R}^{p\times q}\), we define the factor dimensionality corresponding to \(\mathbf{B}\) as
\[\xi:=\xi(\mathbf{B}):=\left|\left\{k\in[q]:\mathbf{B}_{[:,k]}\neq\mathbf{0} \right\}\right|, \tag{3.2}\]
that is, the factor dimensionality \(\xi(\mathbf{B})\) is equal to the number of nonzero columns of \(\mathbf{B}\). The following theorem shows that the posterior distribution of the factor dimensionality behaves nicely.
**Theorem 1**.: _Assume that \(r\leq p/2\), \(\lambda\lesssim s\) and \(\epsilon:=\lambda\sqrt{sr\log(p\lor n)/n}=o(1)\). Moreover, assume that \(\zeta\geq c_{0}\epsilon\) for sufficiently large \(c_{0}>0\). Then for any \(\delta>0\), there exists a constant \(A_{\delta}>0\) depending only on \(\delta\) such that_
\[\inf_{\mathbf{\Sigma}^{*}\in\mathcal{C}(p,r,s,\lambda,\zeta)}\mathsf{P}_{ \mathbf{\Sigma}^{*}}^{(n)}\left[\Pi\left(r\leq\xi(\mathbf{B})\leq(1+\delta)r| \mathbf{Y}_{1:n}\right)\right]\to 1 \tag{3.3}\]
_for the prior distribution defined through (2.1), (2.2) and (2.3) with \(A>A_{\delta}\) and \(\mathbf{a}\in\mathbb{R}_{+}^{2}\). Furthermore, if \(r\lesssim\log(p\lor n)/\log n\), then there exists a universal constant \(A_{0}>0\) such that_
\[\inf_{\mathbf{\Sigma}^{*}\in\mathcal{C}(p,r,s,\lambda,\zeta)}\mathsf{P}_{ \mathbf{\Sigma}^{*}}^{(n)}\left[\Pi\left(\xi(\mathbf{B})=r|\mathbf{Y}_{1:n} \right)\right]\to 1, \tag{3.4}\]
_whenever \(A>A_{0}\)._
Theorem 1 implies that a posteriori the factor dimensionality \(\xi\) is not much larger than the true one \(r\), and \(\xi\) concentrates on \(r\) asymptotically if \(r\) is not too large. Gao and Zhou (2015) attain posterior consistency of the factor dimensionality of the orthogonal loading matrix under a mildly growing regime \(r\lesssim\log p\). Our condition \(r\lesssim\log(p\lor n)/\log n\) for the posterior consistency is slightly stronger than that of Gao and Zhou (2015). However, note that we do not impose the orthogonality constraint on the loading matrix, a constraint that makes posterior computation difficult.
Ohn and Kim (2022) obtains posterior consistency without any condition on the growth rate of the true factor dimensionality \(r\) by using a prior that strongly regularizes \(\xi\). However, this strong regularization sacrifices the convergence rate of the covariance matrix by a factor of \(\sqrt{s}\) compared to the optimal rate. Another critical drawback of their prior is that the knowledge of the true sparsity level is required to select the hyper-parameters in the prior. In contrast, the AdaSS prior attains the posterior consistency of the factor dimensionality _without knowing the true sparsity level_.
On the frequentist side, Cai et al. (2013, 2015) proposed consistent estimators of the factor dimensionality for sparse factor models. However, Cai et al. (2013) requires a \(\sqrt{s}\) times larger detection rate for the eigengap (i.e., the lower bound of \(\zeta\)) than ours and Cai et al. (2015) is nonadaptive to the true sparsity. Moreover, a known and fixed noise variance \(\psi\) is required for the consistency of both estimators, while our consistency result is _adaptive to the unknown noise level_.
**Remark 4**.: One should set \(q>r\) to correctly estimate \(r\), but \(r\) is unknown. A naive strategy would be to set \(q\) very large, e.g., \(q=p-1\), so that \(q\geq r\). However, an unnecessarily large \(q\) requires huge computation. A better strategy is to set \(q=\sqrt{n}\). This choice is based on our posterior contraction rate \(\lambda\sqrt{sr\log(p\lor n)/n}=o(1)\) of the covariance matrix given in Theorem 3 in the next section. Since \(r=\textsc{rank}(\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top})\leq s\) for the true loading matrix \(\mathbf{B}^{\star}\), if \(\lambda\log(p\lor n)\gtrsim 1\) we have \(r=o(\sqrt{n})\). Therefore, asymptotically, the upper bound \(q=\sqrt{n}\) does not underestimate the true factor dimensionality. A similar problem occurs in Gao and Zhou (2015), where the authors assume \(r\lesssim\log p\) and set \(q\asymp p^{b}\) for some \(b>0\) so that \(q\gg r\). When \(p\) is large, our choice \(q=\sqrt{n}\) is much smaller than that of Gao and Zhou (2015), which leads to more efficient computation.
**Remark 5**.: In Theorem 1, we assume an upper bound for the largest eigenvalue such that \(\lambda\lesssim s\). This bound is mild in view of random matrix theory. Suppose that \(\tilde{\mathbf{B}}\in\mathbb{R}^{s\times r}\) is a random matrix whose entries are independent centered random variables with finite fourth moments. Then by Theorem 2 of Latala (2005), since \(r\leq s\), we have \(\mathsf{E}\left\|\tilde{\mathbf{B}}\right\|\lesssim\sqrt{s}+\sqrt{r}\lesssim\sqrt{s}\). Therefore, \(\mathsf{E}[\lambda_{1}(\tilde{\mathbf{B}}\tilde{\mathbf{B}}^{\top})]=\mathsf{E}\left\|\tilde{\mathbf{B}}\tilde{\mathbf{B}}^{\top}\right\|\lesssim s\). Pati et al. (2014) and Rockova and George (2016) assumed the same condition as ours, while other studies on Bayesian covariance estimation (Gao and Zhou, 2015; Xie et al., 2022) used the stronger condition that the largest eigenvalue of the true covariance matrix is bounded.
In Theorem 1, we show that the true factor dimensionality is almost consistently recovered whenever the eigengap \(\zeta\) is larger than the _detection rate_ \(\lambda\sqrt{sr\log p/n}\) by a sufficiently large constant \(c_{0}>0\). As shown in the next proposition, this detection rate is optimal when \(r\lesssim 1\) in the sense that no method can consistently estimate the factor dimensionality when the eigengap \(\zeta\) is less than \(a_{0}\lambda\sqrt{s\log\bigl{(}p/s\bigr{)}/n}\) for some constant \(a_{0}>0\). This result is an extension of Theorem 5 of Cai et al. (2015) to unknown \(\psi\) and diverging \(\lambda\).
**Proposition 2**.: _Assume that \(s\log p/n=o(1)\). Then there exists a constant \(a_{0}>0\) such that if \(0\leq\zeta\leq a_{0}\lambda\sqrt{s\log\bigl{(}p/s\bigr{)}/n}\),_
\[\inf_{\hat{r}:\mathbb{R}^{p\times n}\mapsto\mathbb{N}}\sup_{\boldsymbol{\Sigma}\in\mathcal{C}(p,r,s,\lambda,\zeta)}\mathsf{P}_{\boldsymbol{\Sigma}}^{(n)}\left(\hat{r}(\mathbf{Y}_{1:n})\neq r\right)\geq\frac{1}{4} \tag{3.5}\]
_for all but finitely many \(n\), where the infimum runs over all possible estimator \(\hat{r}\) of \(r\)._
### Posterior concentration rate of the covariance matrix
In the linear factor model, the covariance matrix \(\mathbf{B}\mathbf{B}^{\top}+\psi\mathbf{I}\) determines its distribution. In this section, we prove that the posterior distribution of the covariance matrix in our Bayesian model concentrates around the true covariance matrix at a near-optimal rate, which is summarized in the next theorem.
**Theorem 3**.: _Assume that \(\lambda\lesssim s\). Then there exists a constant \(M>0\) such that_
\[\sup_{\mathbf{\Sigma}^{*}\in\mathcal{C}(p,r,s,\lambda,0)}\mathsf{P}_{\mathbf{ \Sigma}^{*}}^{(n)}\left[\Pi\left(\left\|\mathbf{\Sigma}-\mathbf{\Sigma}^{*} \right\|>M\lambda\sqrt{\frac{sr\log(p\lor n)}{n}}\middle|\mathbf{Y}_{1:n} \right)\right]\to 0 \tag{3.6}\]
_for the prior distribution defined through (2.1), (2.2) and (2.3) with \(A>0\) and \(\mathbf{a}\in\mathbb{R}_{+}^{2}\)._
Note that the lower bound of the eigengap \(\zeta\) is set to \(0\) in Theorem 3, while \(\zeta\) must exceed a certain rate in Theorem 1; that is, the eigengap condition is required only for consistent estimation of the factor dimensionality but not for the contraction of the covariance matrix. This difference implies that the optimal estimation of the covariance matrix does not require consistent estimation of the factor dimensionality.
Our posterior concentration rate in (3.6) is near optimal when \(r\lesssim 1\) as shown in the following proposition, which is a direct consequence of Theorem 5.4 of Pati et al. (2014).
**Proposition 4**.: _Assume that \(s\log p/n=o(1)\) and \(r\lesssim 1\). Then_
\[\inf_{\hat{\mathbf{\Sigma}}:\mathbb{R}^{p\times n}\mapsto\mathbb{S}_{++}^{p}}\ \sup_{\mathbf{\Sigma}^{\star}\in\mathcal{C}(p,r,s,\lambda,0)}\mathsf{P}_{\mathbf{\Sigma}^{\star}}^{(n)}\left[\left\|\hat{\mathbf{\Sigma}}-\mathbf{\Sigma}^{\star}\right\|\right]\gtrsim\lambda\sqrt{\frac{s\log\bigl{(}p/s\bigr{)}}{n}}, \tag{3.7}\]
_for all but finitely many \(n\), where the infimum runs over all possible estimator \(\mathbf{\hat{\Sigma}}\) of \(\mathbf{\Sigma}^{*}\)._
## 4 Numerical examples
We evaluate the empirical performance of the proposed Bayesian model with the AdaSS prior through simulation studies and real data analysis. For each posterior computation, we run the MCMC sampler described in Section 2.2 for 3,000 iterations, discard the first 500 as burn-in, and thin every 5 iterations to obtain a final 500 MCMC samples from the posterior. We provide convergence diagnostics via trace, autocorrelation and partial autocorrelation plots of some randomly selected parameters in Appendix C.1 of the supplementary material, which confirm that the MCMC sampler converges well.
### Simulation study
In this section, we conduct an extensive numerical study to compare the performance of the AdaSS prior for estimating the factor dimensionality and the covariance matrix with various competitors. Throughout the simulation study, we set the number of columns of the loading matrix \(q=\left\lceil\sqrt{n}\right\rceil\) for a sample size \(n\) and the hyperparameters \(A=0.1\) and \(\mathbf{a}=(0.01,0.01)\).
#### 4.1.1 Posterior distribution of the factor dimensionality
We first compare the AdaSS prior and the spike and slab with the two-parameter IBP prior of Ohn and Kim (2022) for evaluating the concentration behaviors of their posterior distributions of the factor dimensionality. We only consider the prior of Ohn and Kim (2022) since other Bayesian models either do not infer the factor dimensionality (Ning, 2021; Xie et al., 2022) or do not achieve posterior consistency of the factor dimensionality (Rockova and George, 2016; Bhattacharya and Dunson, 2011; Srivastava et al., 2017) or are purely theoretical (i.e., do not have a posterior computation algorithm) (Gao and Zhou, 2015).
There are two hyperparameters of the spike and slab with the two-parameter IBP prior of Ohn and Kim (2022). The first hyperparameter, denoted by \(\alpha\), controls the factor dimensionality and the second, denoted by \(\kappa\), controls the sparsity of the loading matrix. For \(\kappa\), we choose \(p^{1.1}\) as recommended by Ohn and Kim (2022). For \(\alpha\), we consider three values: \(p^{-30}\), \(p^{-25}\) and \(p^{-20}\). Ohn and Kim (2022) proved that using \(\alpha=p^{-As}\) for a constant \(A>0\) and the true sparsity \(s\) can lead to posterior consistency. Assuming that \(s=30\), these choices of \(\alpha\) correspond to the choices of \(A\in\{4/6,5/6,1\}\). We use the MCMC sampler of Knowles and Ghahramani (2011); Ohn and Kim (2022) for approximating the posterior.
We generate a data set consisting of \(n\) independent synthetic random vectors from the multivariate normal distribution with mean \(\mathbf{0}\) and covariance \(\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top}+2\mathbf{I}\), where \(\mathbf{B}^{\star}\) is an \(s\)-sparse \(p\times r\) loading matrix. For the true loading matrix, we randomly select the locations of the \(s\) nonzero rows and sample the elements of the nonzero rows uniformly from \(\{-2,2\}\). We set \((n,p)=(100,1000)\) and let the sparsity \(s\) and factor dimensionality \(r\) vary among \(s\in\{10,30,50\}\) and \(r\in\{1,3,5\}\), respectively.
Figure 1 presents the posterior distribution of the factor dimensionality for the AdaSS prior and the spike and slab with the two-parameter IBP prior with different \(\alpha\). The posterior distribution under the AdaSS prior concentrates at the true factor dimensionality quite well for all nine cases, while the performance of the two-parameter IBP prior depends heavily on the choice of the hyperparameter \(\alpha\). If \(\alpha\) is not sufficiently small, the resulting posterior distribution clearly overestimates the true factor dimensionality. The smallest choice \(\alpha=p^{-30}\) estimates the factor dimensionality consistently in some cases, but severe underestimation occurs in others. The results of this simulation show that there is no universally good choice of the hyperparameter \(\alpha\) in the two-parameter IBP across different levels of sparsity, while the AdaSS prior performs consistently well with a single choice of the hyperparameter.
#### 4.1.2 Comparison with frequentist estimators
In this simulation, we compare the performance of the AdaSS prior with some frequentist estimators for point estimation of the factor dimensionality. For our Bayesian model, we use the mode of the posterior distribution of the factor
Figure 1: Posterior distributions of the factor dimensionality for the proposed adaptive Bayesian (AdaSS) prior and spike and slab with the two-parameter IBP prior with various \(\alpha\) (IBP(\(\alpha\))). The black dashed vertical lines indicate the true factor dimensionality \(r\).
dimensionality as a point estimator. We consider the following five frequentist estimators as competitors: with \(\mathbf{S}\) and \(\mathbf{R}\) being the sample covariance and correlation matrices, respectively, and \(r_{\max}\in[p]\) pre-specified,
* Eigenvalue threshold estimator (ET, Onatski (2010)) \[\hat{r}_{\mathrm{ET}}:=\sum_{j=1}^{p}\mathbbm{1}\left(\lambda_{j}(\mathbf{S})> w\lambda_{r_{\max}+1}(\mathbf{S})+(1-w)\lambda_{2r_{\max}+1}(\mathbf{S}) \right),\] where \(w:=2^{2/3}(2^{2/3}-1)\).
* Eigenvalue ratio estimator (ER, Ahn and Horenstein (2013), Lam and Yao (2012)): \[\hat{r}_{\mathrm{ER}}:=\operatorname*{argmax}_{j\in[r_{\max}]}\lambda_{j}( \mathbf{S})/\lambda_{j+1}(\mathbf{S}).\]
* Growth ratio estimator (GR, Ahn and Horenstein (2013)): \[\hat{r}_{\mathrm{GR}}:=\operatorname*{argmax}_{j\in[r_{\max}]}\frac{\log \bigl{(}L_{j-1}/L_{j}\bigr{)}}{\log\bigl{(}L_{j}/L_{j+1}\bigr{)}},\] where \(L_{j}:=\sum_{t=j+1}^{p}\lambda_{t}(\mathbf{S})\).
* Adjusted correlation threshold estimator (ACT, Fan et al. (2020)): \[\hat{r}_{\mathrm{ACT}}:=\max\left\{j\in[r_{\max}]:\lambda_{j}^{\dagger}>1+ \sqrt{p/(n-1)}\right\},\] where \[\lambda_{j}^{\dagger}:=\left[\frac{1-w_{j}}{\hat{\lambda}_{j}}-\frac{w_{j}}{p- j}\left\{\sum_{t=j+1}^{p}(\hat{\lambda}_{t}-\hat{\lambda}_{j})^{-1}+4(\hat{ \lambda}_{j+1}-\hat{\lambda}_{j})^{-1}\right\}\right]^{-1}\] with \(\hat{\lambda}_{j}:=\lambda_{j}(\mathbf{R})\) and \(w_{j}:=(p-j)/(n-1)\) for \(j\in[p]\).
* Diagonal thresholding estimator (DT, Cai et al. (2013)): \[\hat{r}_{\mathrm{DT}}:=\max\left\{j\in[r_{\max}]:\lambda_{j}(\hat{\mathbf{S}} _{[J,J]})>2\left(1+\sqrt{|J|/n}+\sqrt{(2(1+|J|)\log(\mathrm{e}p)+6\log n)/n} \right)^{2}\right\},\] where \(\hat{\mathbf{S}}:=n^{-1}\sum_{i=1}^{n}(\mathbf{Y}_{i}+\mathbf{Z}_{i}^{0})( \mathbf{Y}_{i}+\mathbf{Z}_{i}^{0})^{\top}\) with \(\mathbf{Z}_{1}^{0},\ldots,\mathbf{Z}_{n}^{0}\stackrel{{\mathrm{ iid}}}{{\sim}}\mathbf{N}(\mathbf{0},\mathbf{I})\) and \[J:=\left\{j\in[p]:\hat{\mathbf{S}}_{[j,j]}\geq 2+6\sqrt{\log p/n}\right\}.\]
We fix \(r_{\max}=10\) throughout the simulation study.
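For reference, the ER and GR estimators above can be computed directly from the eigenvalues of \(\mathbf{S}\); a sketch of our own implementation, which assumes \(p\geq r_{\max}+2\) and strictly positive tail sums \(L_{j}\):

```python
import numpy as np

def er_gr_estimators(S, r_max):
    """Eigenvalue-ratio (ER) and growth-ratio (GR) estimators from S."""
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # lambda_1 >= ... >= lambda_p
    j = np.arange(1, r_max + 1)
    r_er = int(j[np.argmax(lam[:r_max] / lam[1:r_max + 1])])
    # L_m = sum_{t = m + 1}^p lambda_t for m = 0, ..., r_max + 1
    L = np.array([lam[m:].sum() for m in range(r_max + 2)])
    ratios = np.log(L[:r_max] / L[1:r_max + 1]) / np.log(L[1:r_max + 1] / L[2:])
    r_gr = int(j[np.argmax(ratios)])
    return r_er, r_gr
```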
A synthetic data set of size \(n\) is generated from the \(p\)-dimensional normal distribution with mean \(\mathbf{0}\) and covariance \(\mathbf{B}^{\star}(\mathbf{B}^{\star})^{\top}+\mathbf{I}\), where the true loading matrix \(\mathbf{B}^{\star}\in\mathbb{R}^{p\times r}\) is \(s\)-sparse. The true loading matrix \(\mathbf{B}^{\star}\) is generated as follows: we first select \(s\) nonzero rows and sample the elements in the nonzero rows from the uniform distribution on \([-4/\sqrt{s},-3/\sqrt{s}]\cup[3/\sqrt{s},4/\sqrt{s}]\). We take
two sample sizes \(n=50\) and \(n=100\). We fix the dimension \(p=1,000\) and let the sparsity \(s\) and factor dimensionality \(r\) vary among \(s\in\{10,30,50\}\) and \(r\in\{1,3,5\}\), respectively.
The simulation results based on 100 synthetic data sets of size \(n=50\) and \(n=100\) are summarized in Tables 1 and 2, respectively. We see that the proposed Bayesian model with the AdaSS prior outperforms the other competitors in the estimation of the factor dimensionality. The AdaSS prior has the highest proportion of the correct estimation for 17 out of the total 18 simulation setups. In particular, there are considerable performance gaps between the AdaSS prior and the other competitors when sparsity is small (\(s=10\)) or the factor dimensionality is large (\(r=5\)).
#### 4.1.3 Covariance matrix estimation
In this simulation study, we compare the AdaSS prior with other competitors for covariance matrix estimation. As competitors, we consider the principal orthogonal complement thresholding method (POET; Fan et al., 2013), the variational inference method for Bayesian sparse PCA (SPCA-VI; Ning, 2021), the Bayesian sparse factor model with the multiplicative gamma process shrinkage prior (MGPS; Bhattacharya and Dunson, 2011) and two maximum a posteriori estimators that employ the multi-scale generalized double Pareto prior (MDP; Srivastava et al., 2017) and the spike-and-slab lasso with the Indian buffet process prior (SSL-IBP; Rockova and George, 2016), respectively. For POET and SPCA-VI, the factor dimensionality must be selected in advance, and we use the true factor dimensionality for this. We use the posterior mean of the covariance matrix as the point estimator for the MGPS and AdaSS priors.
We generate 100 synthetic data sets with sample sizes \(n=50\) and \(n=100\), respectively, and report the averages of the scaled spectral norm losses \(\left\|\hat{\mathbf{\Sigma}}-\mathbf{\Sigma}^{\star}\right\|/\left\|\mathbf{\Sigma}^{\star}\right\|\) between the point estimate \(\hat{\mathbf{\Sigma}}\) of each estimator and the true covariance matrix \(\mathbf{\Sigma}^{\star}\) in Tables 3 and 4. The AdaSS prior performs well overall, while POET, MGPS and MDP are significantly inferior. SSL-IBP is only slightly worse overall and performs best for the setups with \(s=50\).
### Real data analysis
In this section, we analyze gene expression data on aging in mice from the AGEMAP (Atlas of Gene Expression in Mouse Aging Project) database (Zahn et al., 2007). We obtained these data from http://statweb.stanford.edu/~owen/data/AGEMAP. There are 5 female and 5 male mice in each age group, with 4 age groups of 1, 6, 16 and 24 months; thus there are 40 mice in total. From each of the 40 mice, 16 microarrays obtained from 16 different tissues were prepared, and from each microarray, gene expression levels of \(8,932\) probes were measured. In this paper, we focus only on the microarray data from the cerebrum tissue, for which the rotation test of Perry and Owen (2010) provided strong evidence
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \(s\) & \(r\) & & ET & ER & GR & ACT & DT & AdaSS \\ \hline \multirow{4}{*}{10} & \multirow{4}{*}{1} & True & 85 & 97 & 97 & 96 & 100 & 100 \\ & & Over & 14 & 3 & 3 & 4 & 0 & 0 \\ & & Under & 1 & 0 & 0 & 0 & 0 & 0 \\ & & Ave & 1.14 & 1.03 & 1.03 & 1.04 & 1 & 1 \\ \cline{2-10} & & True & 24 & 3 & 3 & 0 & 0 & 97 \\ & & Over & 1 & 0 & 0 & 0 & 0 & 0 \\ & & Under & 75 & 97 & 97 & 100 & 100 & 3 \\ & & Ave & 2.16 & 1.35 & 1.37 & 1.01 & 1 & 2.97 \\ \cline{2-10} & & True & 0 & 0 & 0 & 0 & 0 & 77 \\ & & Over & 0 & 0 & 0 & 0 & 0 & 15 \\ & & Under & 100 & 100 & 100 & 100 & 100 & 8 \\ & & Ave & 2.97 & 1.54 & 1.55 & 1.01 & 1.1 & 5.06 \\ \hline \multirow{4}{*}{30} & \multirow{4}{*}{1} & True & 86 & 96 & 97 & 97 & 100 & 100 \\ & & Over & 12 & 4 & 3 & 3 & 0 & 0 \\ & & Under & 2 & 0 & 0 & 0 & 0 & 0 \\ & & Ave & 1.1 & 1.1 & 1.07 & 1.03 & 1 & 1 \\ \cline{2-10} & & True & 46 & 15 & 15 & 0 & 0 & 94 \\ & & Over & 0 & 0 & 0 & 0 & 0 & 0 \\ & & Under & 54 & 85 & 85 & 100 & 100 & 6 \\ & & Ave & 2.43 & 1.57 & 1.57 & 1.09 & 1 & 2.93 \\ \cline{2-10} & & True & 0 & 1 & 1 & 0 & 0 & 87 \\ & & Over & 0 & 0 & 0 & 0 & 0 & 0 \\ & & Under & 100 & 99 & 99 & 100 & 100 & 13 \\ & & Ave & 3.2 & 1.72 & 1.77 & 1.06 & 1 & 4.86 \\ \hline \multirow{4}{*}{50} & \multirow{4}{*}{1} & True & 80 & 98 & 97 & 95 & 100 & 100 \\ & & Over & 17 & 2 & 3 & 5 & 0 & 0 \\ & & Under & 3 & 0 & 0 & 0 & 0 & 0 \\ & & Ave & 1.17 & 1.04 & 1.05 & 1.05 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 1: Proportions of correct estimation (“True”), overestimation (“Over”) and underestimation (“Under”) of the estimated factor dimensionalities for various sparsity \(s\) and true factor dimensionality \(r\) obtained on 100 synthetic data sets of size \(n=50\). “Ave” is the average of the estimated factor dimensionalities.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \(s\) & \(r\) & & ET & ER & GR & ACT & DT & AdaSS \\ \hline \multirow{6}{*}{10} & \multirow{3}{*}{1} & True & 78 & 100 & 100 & 93 & 100 & 100 \\ & & Over & 22 & 0 & 0 & 7 & 0 & 0 \\ & & Under & 0 & 0 & 0 & 0 & 0 & 0 \\ & & Ave & 1.24 & 1 & 1 & 1.07 & 1 & 1 \\ \cline{2-8} & \multirow{3}{*}{3} & True & 66 & 9 & 10 & 0 & 0 & 86 \\ & & Over & 2 & 0 & 0 & 0 & 0 & 13 \\ & & Under & 32 & 91 & 90 & 100 & 100 & 1 \\ & & Ave & 2.7 & 1.55 & 1.57 & 1.02 & 1.25 & 3.12 \\ \cline{2-8} & \multirow{3}{*}{5} & True & 6 & 0 & 0 & 0 & 0 & 20 \\ & & Over & 0 & 0 & 0 & 0 & 0 & 77 \\ & & Under & 94 & 100 & 100 & 100 & 100 & 3 \\ & & Ave & 3.63 & 1.83 & 1.86 & 1 & 2.13 & 5.93 \\ \hline \multirow{6}{*}{30} & \multirow{3}{*}{3} & True & 83 & 100 & 100 & 93 & 100 & 100 \\ & & Over & 17 & 0 & 0 & 7 & 0 & 0 \\ & & Under & 0 & 0 & 0 & 0 & 0 & 0 \\ & & Ave & 1.18 & 1 & 1 & 1.09 & 1 & 1 \\ \cline{2-8} & \multirow{3}{*}{3} & True & 92 & 33 & 34 & 5 & 0 & 99 \\ & & Over & 4 & 0 & 0 & 2 & 0 & 1 \\ & & Under & 4 & 67 & 66 & 93 & 100 & 0 \\ & & Ave & 3 & 1.96 & 1.98 & 1.56 & 1 & 3.01 \\ \cline{2-8} & \multirow{3}{*}{5} & True & 46 & 3 & 4 & 0 & 0 & 68 \\ & & Over & 1 & 0 & 0 & 0 & 0 & 32 \\ & & Under & 53 & 97 & 96 & 100 & 100 & 0 \\ & & Ave & 4.45 & 2 & 2.1 & 1.43 & 1 & 5.32 \\ \hline \multirow{6}{*}{50} & \multirow{3}{*}{3} & True & 85 & 100 & 100 & 94 & 100 & 100 \\ & & Over & 15 & 0 & 0 & 6 & 0 & 0 \\ & & Under & 0 & 0 & 0 & 0 & 0 & 0 \\ & & Ave & 1.16 & 1 & 1 & 1.06 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 2: Proportions of correct estimation (“True”), overestimation (“Over”) and underestimation (“Under”) of the estimated factor dimensionalities for various sparsity \(s\) and true factor dimensionality \(r\) obtained on 100 synthetic data sets of size \(n=100\). “Ave” is the average of the estimated factor dimensionalities.
for the presence of a latent factor. We will call this one-tissue data set with sample size \(n=40\) and dimension \(p=8,932\) the AGEMAP data for simplicity.
We preprocessed the AGEMAP data following the regression model of Perry and Owen (2010). We obtained the mean-centered data by regressing out an intercept, sex, and age effects on each of the 8,932 outcomes. Then the factor dimensionality is estimated based on the mean-centered data set. We consider the factor model with heterogeneous noise variances and impose the AdaSS prior presented in Remark 1. We set \(q=10\geq\sqrt{n}=\sqrt{40}\), \(A=0.1\) and \(\mathbf{a}_{j}=(0.01,0.01)\) for every \(j\in[p]\) in the prior. Then we take the posterior mode of the factor dimensionality as the point estimate. For comparison, we also considered the five frequentist methods described in Section 4.1, i.e., ET, ER, GR, ACT and DT.
Table 5 provides the factor dimensionality estimates obtained by the proposed Bayesian model and the five competing frequentist methods. Four of the methods, including the AdaSS prior, estimate the factor dimensionality to be 1. The presence of a one-dimensional latent factor was also supported by the rotation test of Perry and Owen (2010).
Figure 2(a) shows the histogram of the posterior means of the latent factors \(\mathsf{E}(Z_{ik^{*}}|\mathbf{Y}_{1:n})\) for \(i\in[n]\) obtained under the AdaSS prior, where \(k^{*}\) denotes the index of the nonzero column of the loading matrix, i.e., \(\mathbf{B}_{[:,k^{*}]}\neq\mathbf{0}\) under the posterior distribution. The bimodality of the histogram is clearly visible, which
\begin{table}
\begin{tabular}{c c c c c c c} \hline \(s\) & \(r\) & POET & SPCA-VI & MGPS & MDP & SSL-IBP & ABayes \\ \hline \multirow{3}{*}{10} & 1 & 2.366 (0.151) & 0.245 (0.108) & 2.591 (2.04) & 2.103 (0.135) & 0.689 (0.062) & **0.233 (0.103)** \\ & 3 & 1.83 (0.254) & 0.398 (0.12) & 1.45 (0.863) & 1.583 (0.212) & 0.646 (0.094) & **0.301 (0.113)** \\ & 5 & 1.59 (0.232) & 0.422 (0.111) & 1.195 (0.575) & 1.271 (0.191) & 0.696 (0.101) & **0.335 (0.107)** \\ \hline \multirow{3}{*}{30} & 1 & 2.375 (0.155) & 0.772 (0.102) & 1.945 (1.292) & 2.104 (0.14) & 0.699 (0.067) & **0.624 (0.152)** \\ & 3 & 2.073 (0.202) & 0.674 (0.117) & 2.078 (1.309) & 1.839 (0.184) & 0.696 (0.063) & **0.609 (0.172)** \\ & 5 & 1.868 (0.192) & 0.644 (0.086) & 1.551 (0.87) & 1.649 (0.175) & 0.684 (0.052) & **0.631 (0.138)** \\ \hline \multirow{3}{*}{50} & 1 & 2.345 (0.146) & 0.901 (0.039) & 2.018 (1.52) & 2.072 (0.13) & **0.759 (0.065)** & 0.847 (0.134) \\ & 3 & 2.145 (0.194) & 0.762 (0.102) & 1.996 (1.168) & 1.901 (0.175) & **0.695 (0.069)** & 0.966 (0.208) \\ \cline{1-1} & 5 & 2.013 (0.2) & 0.744 (0.098) & 1.519 (0.752) & 1.786 (0.182) & **0.709 (0.067)** & 1.049 (0.228) \\ \hline \end{tabular}
\end{table}
Table 3: The averages and standard errors of the scaled spectral norm losses of the estimators of the covariance matrix obtained on 100 synthetic data sets with \(n=50\).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \(s\) & \(r\) & POET & SPCA-VI & MGPS & MDP & SSL-IBP & ABayes \\ \hline \multirow{3}{*}{10} & 1 & 2.366 (0.151) & 0.245 (0.108) & 2.591 (2.04) & 2.103 (0.135) & 0.689 (0.062) & **0.233 (0.103)** \\ & 3 & 1.83 (0.254) & 0.398 (0.12) & 1.45 (0.863) & 1.583 (0.212) & 0.646 (0.094) & **0.301 (0.113)** \\ & 5 & 1.59 (0.232) & 0.422 (0.111) & 1.195 (0.575) & 1.271 (0.191) & 0.696 (0.101) & **0.335 (0.107)** \\ \hline \multirow{3}{*}{30} & 1 & 2.375 (0.155) & 0.772 (0.102) & 1.945 (1.292) & 2.104 (0.14) & 0.699 (0.067) & **0.624 (0.152)** \\ & 3 & 2.073 (0.202) & 0.674 (0.117) & 2.078 (1.309) & 1.839 (0.184) & 0.696 (0.063) & **0.609 (0.172)** \\ & 5 & 1.868 (0.192) & 0.644 (0.086) & 1.551 (0.87) & 1.649 (0.175) & 0.684 (0.052) & **0.631 (0.138)** \\ \hline \multirow{3}{*}{50} & 1 & 2.345 (0.146) & 0.901 (0.039) & 2.018 (1.52) & 2.072 (0.13) & **0.759 (0.065)** & 0.847 (0.134) \\ & 3 & 2.145 (0.194) & 0.762 (0.102) & 1.996 (1.168) & 1.901 (0.175) & **0.695 (0.069)** & 0.966 (0.208) \\ \cline{1-1} & 5 & 2.013 (0.2) & 0.744 (0.098) & 1.519 (0.752) & 1.786 (0.182) & **0.709 (0.067)** & 1.049 (0.228) \\ \hline \end{tabular}
\end{table}
Table 4: The averages and standard errors of the scaled spectral norm losses of the estimators of the covariance matrix obtained on 100 synthetic data sets with \(n=100\).
is also confirmed by Perry and Owen (2010). Figure 2(b) presents the posterior distribution of the sparsity \(|\text{supp}(\mathbf{B})|\) of the loading matrix, which ranges from \(79.4\%\) to \(82\%\). A similar \(78\%\) sparsity of the estimated factor model was reported by Rockova and George (2016).
## 5 Concluding remarks
In this paper, we proposed a novel prior distribution, called the AdaSS prior, for inference in high-dimensional sparse factor models. We proved that the resulting posterior distribution asymptotically concentrates at the true factor dimensionality without knowledge of the sparsity level of the true loading matrix. This adaptivity to the sparsity is a remarkable advantage of the proposed method over other theoretically consistent estimators such as the point estimator of Cai et al. (2015) and the Bayesian posterior distribution of Ohn and Kim (2022). We also showed that the proposed model attains the optimal detection rate of the eigengap for consistent estimation of the factor dimensionality. Moreover, the concentration rate of the posterior distribution of the covariance matrix is optimal when the true factor dimensionality is bounded, and equal to or faster than those of other Bayesian models otherwise. Our numerical studies amply confirmed the theoretical results and provided strong empirical support for the proposed AdaSS prior.
With our prior, nonzero row vectors of the loading matrix \(\mathbf{B}\) are not sparse.
\begin{table}
\begin{tabular}{c c c c c c} \hline ET & ER & GR & ACT & DT & AdaSS \\ \hline
8 & 1 & 1 & 10 & 1 & 1 \\ \hline \end{tabular}
\end{table}
Table 5: Estimated factor dimensionality for the AGEMAP data.
Figure 2: (a) Histogram of the posterior means of the latent factors for each observation; (b) Posterior distribution of the sparsity of the loading matrix for the AGEMAP data.
That is, when \(s^{\prime}\)-many nonzero row vectors and the factor dimensionality are given, the entries of the corresponding \(s^{\prime}\times\xi\) sub-matrix are all nonzero. In practice, one may want to have sparsity within the nonzero row vectors. Our prior can be easily modified to ensure such sparsity without hampering the asymptotic properties, which we will report elsewhere.
There are several promising directions for future work. In this paper, we consider the _static_ factor model where the observations are assumed to be identically distributed. However, this static factor model may be inadequate to capture the dependence structure in some types of data, e.g., time series data. As an alternative, we may consider a dynamic factor model, where the covariance matrix as well as the factor dimensionality can differ at each time point. It would be of interest to study the posterior consistency of a factor dimensionality that possibly varies over time. Another promising avenue of research is to develop a Bayesian factor model that deals with non-Gaussian or mixed-type observed variables. We believe that the proposed Bayesian model can be easily extended to those types of data using the Gaussian copula factor model developed by Murray et al. (2013). It would be interesting to investigate the theoretical properties of such a non-Gaussian extension of the proposed Bayesian model.
|
2301.12334 | Don't Play Favorites: Minority Guidance for Diffusion Models | We explore the problem of generating minority samples using diffusion models.
The minority samples are instances that lie on low-density regions of a data
manifold. Generating a sufficient number of such minority instances is
important, since they often contain some unique attributes of the data.
However, the conventional generation process of the diffusion models mostly
yields majority samples (that lie on high-density regions of the manifold) due
to their high likelihoods, making themselves ineffective and time-consuming for
the minority generating task. In this work, we present a novel framework that
can make the generation process of the diffusion models focus on the minority
samples. We first highlight that Tweedie's denoising formula yields favorable
results for majority samples. The observation motivates us to introduce a
metric that describes the uniqueness of a given sample. To address the inherent
preference of the diffusion models w.r.t. the majority samples, we further
develop minority guidance, a sampling technique that can guide the generation
process toward regions with desired likelihood levels. Experiments on benchmark
real datasets demonstrate that our minority guidance can greatly improve the
capability of generating high-quality minority samples over existing generative
samplers. We showcase that the performance benefit of our framework persists
even in demanding real-world scenarios such as medical imaging, further
underscoring the practical significance of our work. Code is available at
https://github.com/soobin-um/minority-guidance. | Soobin Um, Suhyeon Lee, Jong Chul Ye | 2023-01-29T03:08:47Z | http://arxiv.org/abs/2301.12334v2 | # Don't Play Favorites: Minority Guidance for Diffusion Models
###### Abstract
We explore the problem of generating minority samples using diffusion models. The minority samples are instances that lie on low-density regions of a data manifold. Generating sufficient numbers of such minority instances is important, since they often contain some unique attributes of the data. However, the conventional generation process of the diffusion models mostly yields majority samples (that lie on high-density regions of the manifold) due to their high likelihoods, making them highly ineffective and time-consuming for the task. In this work, we present a novel framework that can make the generation process of the diffusion models focus on the minority samples. We first provide a new insight into the majority-focused nature of the diffusion models: they denoise in favor of the majority samples. The observation motivates us to introduce a metric that describes the uniqueness of a given sample. To address the inherent preference of the diffusion models w.r.t. the majority samples, we further develop _minority guidance_, a sampling technique that can guide the generation process toward regions with desired likelihood levels. Experiments on benchmark real datasets demonstrate that our minority guidance can greatly improve the capability of generating the low-likelihood minority samples over existing generative frameworks including the standard diffusion sampler.
Machine Learning, Diffusion Models
## 1 Introduction
Conventional large-scale datasets are mostly long-tailed in their distributions, containing a _minority_ of samples in low-probability regions of a data manifold (Ryu et al., 2017; Liu et al., 2019). The minority samples often comprise novel attributes rarely observed in _majority_ samples lying in high-density regions, which usually consist of common features of the data (Agarwal et al., 2022). Generating enough of these minority samples is important for faithfully representing a given dataset and, more crucially, for reducing the negative societal impacts of generative models in terms of fairness (Choi et al., 2020; Xiao et al., 2021).
One challenge is that generation focused on such minority samples is actually difficult to perform (Hendrycks et al., 2021). This holds even for diffusion-based generative models (Sohl-Dickstein et al., 2015; Ho et al., 2020) that provide strong coverage of a given data distribution (Sehwag et al., 2022). The generation process of the diffusion models can be understood as simulating the reverse of a diffusion process that defines a set of noise-perturbed data distributions (Song and Ermon, 2019), which guarantees that their samples respect the (clean) data distribution when a properly-learned score function is available (e.g., via denoising score-matching (DSM) (Vincent, 2011)). Given long-tail-distributed data, such probabilistic faithfulness of the diffusion models makes their sampler _majority-oriented_, i.e., producing higher-likelihood samples more frequently than lower-likelihood ones (Sehwag et al., 2022).
**Contribution.** In this work, we provide a new perspective on the majority-focused aspect of diffusion models. Specifically, we investigate the denoising behavior of diffusion models over samples with various features and show that diffusion models, when obtained via DSM, _play favorites with majorities_: their reconstruction (w.r.t. a given noised input) favors producing common-featured samples over ones containing unique features. Surprisingly, we find that even when perturbations are made upon the unique-featured samples (so we naturally expect denoising back to samples having the same novel features), the diffusion models often reconstruct toward the common-featured majority samples, thereby yielding significant semantic changes in the reconstructed versions of the minority samples (see Figure 1 for instance). This motivates us to come up with a new metric for describing the uniqueness of features contained in a given sample, which we call _minority score_, defined as a perceptual distance (e.g., LPIPS (Zhang et al., 2018)) between the original and restored instances. We highlight that our metric is efficient to compute, requiring only the _one-shot_ reconstruction given by Tweedie's formula (Stein, 1981; Robbins, 1992; Efron, 2011).
Given the proposed metric at hand, we further develop a sampling technique for addressing the inherent preference of the diffusion models toward the majority features. Our sampler, which we call _minority guidance_, conditions the sampling process on a desired level of minority score. More precisely, we construct a classifier that predicts (discretized) minority scores for a given perturbed input and incorporate it into a conventional sampler (e.g., ancestral sampling (Sohl-Dickstein et al., 2015; Ho et al., 2020)) using the classifier guidance (Dhariwal and Nichol, 2021). We find that our minority guidance enables a controllable generation of the minority-featured samples, e.g., by conditioning on a high minority score in our sampler.
We conduct extensive experiments on various benchmark real datasets. We demonstrate that minority score can serve to identify samples with novel features in a given dataset. We also exhibit that our minority guidance can serve as an effective knob for controlling the uniqueness of features for generated samples. We observe that our sampler can greatly improve the capability of generating the minority samples over existing generative samplers including ancestral sampling, reflected in high values of outlier measures like Average k-Nearest Neighbor and Local Outlier Factor (Breunig et al., 2000).
**Related work.** Generating minority samples that contain novel features has been explored under a number of different scenarios (Sehwag et al., 2022; Lin et al., 2022; Yu et al., 2020; Lee et al., 2022). The closest instance to our work is Sehwag et al. (2022), wherein the authors propose a sampling technique for diffusion models which can encourage the generation process to move toward low-density regions (for a specific class) using a class-predictor and a conditional model. The key distinction w.r.t. ours is that their method is limited to class-conditional settings and requires access to class labels. Another notable work that bears an intimate connection to ours is Lee et al. (2022). As in Sehwag et al. (2022), they also leverage diffusion models, yet for a different modality: graph data (e.g., lying in chemical space). The key idea therein is to produce out-of-distribution (OOD) samples via a customized generation process designed to maintain some desirable properties w.r.t. the focused data space (e.g., plausible chemical structures). Since it is tailored to a particular modality inherently distinct from our focus (e.g., image data), it is not directly comparable to our approach.
Leveraging diffusion models for detecting uncommon instances (e.g., OOD samples) has recently been proposed, especially for medical imaging (Wolleb et al., 2022; Wyatt et al., 2022; Teng et al., 2022). Their methods share a similar spirit to ours: measuring discrepancies between the original and reconstructed images. However, they rely upon heuristic intuitions in designing their proposals and do not provide any theoretical support for them. Also, their methods require many function evaluations (of diffusion models), and are thus computationally more expensive than ours.
Since it was first introduced in Sohl-Dickstein et al. (2015), classifier guidance has been a gold standard for imposing class conditions on the generation process of diffusion models. Its use cases range far and wide, from high-quality sample generation (Dhariwal and Nichol, 2021) to text-to-speech synthesis (Kim et al., 2022), anomaly detection for medical images (Wolleb et al., 2022), and so on. To the best of our knowledge, the proposed approach is the first attempt to leverage classifier guidance for incorporating conditions w.r.t. the novelty of features into the generation process.
## 2 Background
Before going into details of our work, we briefly review several key elements of diffusion-based generative models. We provide an overview of diffusion models with a particular focus on Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) that our framework is based upon. Also, we review the classifier guidance (Dhariwal and Nichol, 2021) which provides a basic principle for our sampling technique.
### Diffusion-based generative models
Figure 1: **Diffusion models play favorites with majority features.** (Left column) Clean images from CelebA (Liu et al., 2015); (Middle column) Noise-corrupted samples made by the DDPM perturbation (1) on the clean samples in the left column; (Right column) Denoised samples using Tweedie’s formula (5) on the perturbed ones. The top (bottom) row represents the perturbation-reconstruction process for majority- (minority-) featured samples.

Diffusion-based generative models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019) are latent variable models defined by a forward diffusion process and the associated reverse process. The forward process is a Markov chain with a Gaussian transition where data is gradually corrupted by Gaussian noise in accordance with a (positive) variance schedule \(\{\beta_{t}\}_{t=1}^{T}\):
\[q(\mathbf{x}_{t}\mid\mathbf{x}_{t-1})\coloneqq\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_{t}} \mathbf{x}_{t-1},\beta_{t}\mathbf{I}),\]
where \(\{\mathbf{x}_{t}\}_{t=1}^{T}\) are latent variables having the same dimensionality as the data \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\). One notable property is that the forward process enables _one-shot_ sampling of \(\mathbf{x}_{t}\) at any desired timestep \(t\):
\[q_{\alpha_{t}}(\mathbf{x}_{t}\mid\mathbf{x}_{0})=\mathcal{N}(\mathbf{x}_{t};\sqrt{\alpha_ {t}}\mathbf{x}_{0},(1-\alpha_{t})\mathbf{I}), \tag{1}\]
where \(\alpha_{t}\coloneqq\prod_{s=1}^{t}(1-\beta_{s})\). The variance schedule is designed to respect \(\alpha_{T}\approx 0\) so that \(\mathbf{x}_{T}\) becomes approximately distributed as \(\mathcal{N}(\mathbf{0},\mathbf{I})\). The reverse process is another Markov Chain that is parameterized by a _learnable_ Gaussian transition:
\[p_{\mathbf{\theta}}(\mathbf{x}_{t-1}\mid\mathbf{x}_{t})\coloneqq\mathcal{N}(\mathbf{x}_{t-1}; \mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t),\beta_{t}\mathbf{I}).\]
One way to express \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) is to employ a noise-conditioned score network \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\coloneqq\nabla_{\mathbf{x}_{t}}\log p_{\mathbf{ \theta}}(\mathbf{x}_{t})\) that approximates the score function \(\nabla_{\mathbf{x}_{t}}\log q_{\alpha_{t}}(\mathbf{x}_{t})\): \(\mathbf{\mu}_{\mathbf{\theta}}(\mathbf{x}_{t},t)=\frac{1}{\sqrt{1-\beta_{t}}}(\mathbf{x}_{t} +\beta_{t}\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t))\)(Song and Ermon, 2019; Song et al., 2020). The score network is trained with a weighted sum of denoising score matching (DSM) (Vincent, 2011) objectives:
\[\min_{\mathbf{\theta}}\sum_{t=1}^{T}w_{t}\mathbb{E}_{q(\mathbf{x})q_{ \alpha_{t}}(\tilde{\mathbf{x}}|\mathbf{x})}[\|\mathbf{s}_{\mathbf{\theta}}(\tilde{\mathbf{x}},t)- \nabla_{\tilde{\mathbf{x}}}\log q_{\alpha_{t}}(\tilde{\mathbf{x}}\mid\mathbf{x})\|_{2}^{2}], \tag{2}\]
where \(w_{t}\coloneqq 1-\alpha_{t}\). Notably, this procedure is equivalent to building a noise-prediction network \(\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)\) that regresses noise added on clean data \(\mathbf{x}_{0}\) through the forward process (1) (Vincent, 2011; Song et al., 2020). This establishes an intimate connection between the two networks: \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)=-\mathbf{\epsilon}_{\theta}(\mathbf{x}_{t},t)/ \sqrt{1-\alpha_{t}}\), implying that the score model is actually a denoiser.
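To make the training routine concrete, below is a minimal sketch of one training step in the equivalent noise-prediction form, written in PyTorch; the names `eps_model`, `alphas_bar`, and `optimizer` are illustrative assumptions rather than the reference implementation.

```python
import torch

def ddpm_training_step(eps_model, x0, alphas_bar, optimizer):
    """One noise-prediction training step, equivalent to the weighted
    DSM objective (2); `alphas_bar` holds the cumulative products
    alpha_t of the forward process, and `eps_model(x_t, t)` is assumed
    to predict the noise added to the clean batch `x0`."""
    B, T = x0.shape[0], alphas_bar.shape[0]
    t = torch.randint(0, T, (B,), device=x0.device)        # random timesteps
    a = alphas_bar[t].view(B, *([1] * (x0.dim() - 1)))     # broadcastable alpha_t
    eps = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * eps           # one-shot perturbation (1)
    loss = ((eps_model(x_t, t) - eps) ** 2).mean()         # epsilon-matching loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```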
Once obtaining the optimal model via the DSM training, data generation can be done by starting from \(\mathbf{x}_{T}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and following the reverse Markov Chain down to \(\mathbf{x}_{0}\):
\[\mathbf{x}_{t-1}=\frac{1}{\sqrt{1-\beta_{t}}}(\mathbf{x}_{t}+\beta_{t}\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t))+\sqrt{\beta_{t}}\,\mathbf{z},\quad\mathbf{z}\sim\mathcal{N}(\mathbf{0},\mathbf{I}), \tag{3}\]
which is often called _ancestral sampling_ (Song et al., 2020). This process corresponds to a discretized simulation of a stochastic differential equation that defines \(\{p_{\mathbf{\theta}}(\mathbf{x}_{t})\}_{t=0}^{T}\) (Song et al., 2020), which guarantees sampling from \(p_{\mathbf{\theta}}(\mathbf{x}_{0})\approx q(\mathbf{x}_{0})\).
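A minimal sketch of this sampler is given below, assuming a `score_model` handle for \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\); note that the noise scale \(\sqrt{\beta_{t}}\) matches the reverse-transition variance \(\beta_{t}\mathbf{I}\).

```python
import torch

@torch.no_grad()
def ancestral_sample(score_model, shape, betas, device="cpu"):
    """Ancestral sampling (3): start from pure noise and follow the
    learned reverse Markov chain down to x_0. `betas` holds the
    variance schedule beta_1, ..., beta_T."""
    T = betas.shape[0]
    x = torch.randn(shape, device=device)              # x_T ~ N(0, I)
    for t in range(T - 1, -1, -1):
        beta = betas[t]
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        mean = (x + beta * score_model(x, t_batch)) / (1.0 - beta).sqrt()
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + beta.sqrt() * z                     # no noise at the last step
    return x
```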
### Classifier guidance for diffusion models
Suppose we have access to auxiliary classifier \(p_{\mathbf{\theta}}(y|\mathbf{x}_{t})\) that predicts class \(y\) given perturbed input \(\mathbf{x}_{t}\). The main idea of the classifier guidance is to construct the score function of a conditional density w.r.t. \(y\) by mixing the score model \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) and the log-gradient of the auxiliary classifier:
\[\begin{split}\nabla_{\mathbf{x}_{t}}\log\tilde{p}_{\mathbf{\theta}}(\mathbf{ x}_{t}\mid y)&=\nabla_{\mathbf{x}_{t}}\{\log p_{\mathbf{\theta}}(\mathbf{x}_{t} )+\log p_{\mathbf{\phi}}(y|\mathbf{x}_{t})^{w}\}\\ &=\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)+w\nabla_{\mathbf{x}_{t}}\log p_{ \mathbf{\phi}}(y|\mathbf{x}_{t})\\ &\eqqcolon\tilde{\mathbf{s}}_{\mathbf{\theta}}(\mathbf{x}_{t},t,y),\end{split} \tag{4}\]
where \(w\) is a hyperparameter that controls the strength of the classifier guidance. Employing the mixed score \(\tilde{\mathbf{s}}_{\mathbf{\theta}}(\mathbf{x}_{t},t,y)\) in place of \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) in the generation process (e.g., in (3)) enables conditional sampling w.r.t. \(\tilde{p}_{\mathbf{\theta}}(\mathbf{x}_{t}|y)\propto p_{\mathbf{\theta}}(\mathbf{x}_{t})p_{\mathbf{\phi}}(y|\mathbf{x}_{t})^{w}\). Increasing the scaling factor \(w\) makes the curvature of \(p_{\mathbf{\phi}}(y|\mathbf{x}_{t})^{w}\) around a given \(y\) sharper, i.e., gives a stronger focus on some noticeable features w.r.t. \(y\), which often improves fidelity w.r.t. the corresponding class at the expense of diversity (Dhariwal and Nichol, 2021).
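In code, the score mixing of (4) amounts to a single extra gradient computation through the classifier. The sketch below assumes a noise-conditioned classifier returning per-class logits; all names are illustrative placeholders.

```python
import torch

def guided_score(score_model, classifier, x_t, t, y, w):
    """Mixed score of Eq. (4): s_theta(x_t, t) + w * grad_x log p_phi(y | x_t).
    `classifier(x_t, t)` is assumed to return per-class logits of a
    noise-conditioned classifier; `y` holds the conditioned labels."""
    x_in = x_t.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x_in, t), dim=-1)
    selected = log_probs[torch.arange(x_in.shape[0]), y].sum()
    grad = torch.autograd.grad(selected, x_in)[0]      # grad of log p(y | x_t)
    return score_model(x_t, t) + w * grad
```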
## 3 Method
We present our framework herein, which specifically focuses on generating minority samples lying on low-density regions of a data manifold. To this end, we first show that the denoising of diffusion models is biased toward majority samples having high likelihoods, which sheds new light on why diffusion models struggle with minority-focused generation. In light of this, we come up with a measure for describing the uniqueness of features and then develop a sampler that can guide the generation process of diffusion models toward the minority samples. Throughout the section, we follow the setup and notations presented in Section 2.
### Diffusion models play favorites with majorities
We start by investigating how denoising diffusion models work for samples featured with different levels of uniqueness. Let us consider two distinct samples drawn from the data distribution: \(\mathbf{x}_{0}^{M},\mathbf{x}_{0}^{m}\sim q(\mathbf{x}_{0})\). Here we assume that \(\mathbf{x}_{0}^{M}\) is a majority sample containing commonly observed attributes, such as frontal-view faces in CelebA (Liu et al., 2015). In addition, \(\mathbf{x}_{0}^{m}\) is assumed to be an instance of the minorities, consisting of novel features, e.g., side-view faces in CelebA. We perturb these points using the DDPM forward process (1) to obtain \(\mathbf{x}_{t}^{M}\) and \(\mathbf{x}_{t}^{m}\), respectively. Then, employing the score model \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) trained on \(q(\mathbf{x}_{0})\) with DSM (2), we reconstruct the perturbed samples back to their clean versions in _one shot_ with Tweedie's formula (Stein, 1981; Robbins, 1992; Efron, 2011). For instance, \(\mathbf{x}_{t}^{M}\) is
denoised as:
\[\hat{\mathbf{x}}_{0}^{M}\coloneqq\frac{1}{\sqrt{\alpha_{t}}}\left(\mathbf{x}_{t}^{M}+(1- \alpha_{t})\mathbf{s_{\theta}}(\mathbf{x}_{t}^{M},t)\right). \tag{5}\]
Similarly, we get \(\hat{\mathbf{x}}_{0}^{m}\coloneqq\frac{1}{\sqrt{\alpha_{t}}}(\mathbf{x}_{t}^{m}+(1- \alpha_{t})\mathbf{s_{\theta}}(\mathbf{x}_{t}^{m},t))\). Figure 1 illustrates an example of this perturbation-restoration process for groups of majority and minority samples. As we can see, the process yields semantic changes for both groups. However, we observe more significant differences for the minority samples, where the novel features originally contained in their clean versions are replaced with commonly observable majority features in the reconstructed samples. This implies that denoising diffusion models actually _play favorites_ with the majority features, i.e., they are biased to produce samples with common attributes from given noisy data instances.
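The perturbation-reconstruction experiment of Figure 1 takes only a few lines given a pre-trained model: perturb via (1), then denoise in one shot via (5). Below is a sketch, with `score_model` and `alphas_bar` as assumed handles rather than the authors' actual code.

```python
import torch

def tweedie_reconstruct(score_model, x0, alphas_bar, t):
    """Perturb clean data via (1), then denoise in one shot with
    Tweedie's formula (5); returns both the noised and restored batch."""
    a = alphas_bar[t]
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)
    t_batch = torch.full((x0.shape[0],), t, device=x0.device, dtype=torch.long)
    with torch.no_grad():
        x0_hat = (x_t + (1.0 - a) * score_model(x_t, t_batch)) / a.sqrt()
    return x_t, x0_hat
```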
Now the question is where such a preference for the majorities comes from. We find the answer in the routine usually adopted for training diffusion-based models: the DSM optimization. The principle of DSM is approximating the (marginal) score function \(\nabla_{\mathbf{x}_{t}}\log q_{\alpha_{t}}(\mathbf{x}_{t})\) in an _average_ sense by matching a (conditional) score function \(\nabla_{\mathbf{x}_{t}}\log q_{\alpha_{t}}(\mathbf{x}_{t}|\mathbf{x}_{0})\) over given data \(\mathbf{x}_{0}\sim q(\mathbf{x}_{0})\) (Vincent, 2011). This averaging nature encourages DSM to yield an optimal score model pointing to regions that the majority of samples lie on, rather than to regions of the minority samples, which rarely contribute to the averaged conditional score due to their small numbers in the given data. To show this idea more clearly, we provide in Proposition 3.1 below a closed-form expression for the optimal score model \(\mathbf{s^{*}_{\theta}}(\mathbf{x}_{t},t)\) yielded by our focused DSM objective (2).
**Proposition 3.1**.: _Consider the DSM optimization in (2). Assume that the given noise-conditioned score network \(\mathbf{s_{\theta}}(\mathbf{x}_{t},t)\) has enough capacity. Then for each timestep \(t\), the optimality of the score network is achieved at:_
\[\mathbf{s^{*}_{\theta}}(\mathbf{x}_{t},t)=\mathbb{E}_{q(\mathbf{x}_{0}|\mathbf{x}_{t})}\left[ \nabla_{\mathbf{x}_{t}}\log q_{\alpha_{t}}(\mathbf{x}_{t}\mid\mathbf{x}_{0})\right].\]
See Appendix A for the proof. Observe that the optimal model gives the averaged conditional score over the data distribution (conditioned on the noised input \(\mathbf{x}_{t}\)), thereby producing directions _inclined_ toward the manifold of the majority samples, which take up most of the given data and therefore predominantly contribute to the average. See Figure 2 for a geometric illustration of the idea. From the denoising perspective, we can say that the optimal denoiser \(\mathbf{\epsilon^{*}_{\theta}}(\mathbf{x}_{t},t)\coloneqq-\sqrt{1-\alpha_{t}}\mathbf{s^{*}_{\theta}}(\mathbf{x}_{t},t)\) operates on the given \(\mathbf{x}_{t}\) while expecting the underlying clean data \(\mathbf{x}_{0}\) to be a majority sample, hence yielding reconstructions featured with common attributes (as we saw in Figure 1).
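Proposition 3.1 can be illustrated on a toy one-dimensional example where the optimal one-shot reconstruction \(\hat{\mathbf{x}}_{0}=\mathbb{E}[\mathbf{x}_{0}\mid\mathbf{x}_{t}]\) (which follows from combining the proposition with Tweedie's formula) is available in closed form. The sketch below uses a two-mode Gaussian mixture with illustrative values we chose ourselves (a 95% majority mode and a 5% minority mode) and shows that the optimal denoiser pulls a noised minority point toward the majority mode.

```python
import numpy as np

# Toy 1-D data: a 95% "majority" mode at -1 and a 5% "minority" mode at +3.
pi, m = np.array([0.95, 0.05]), np.array([-1.0, 3.0])
s0_sq, a = 0.1, 0.3          # per-mode data variance and alpha_t of Eq. (1)

def optimal_denoiser(x_t):
    """Posterior mean E[x0 | x_t]: the one-shot Tweedie reconstruction
    under the optimal score model of Proposition 3.1."""
    var = a * s0_sq + (1.0 - a)                        # marginal variance per mode
    lik = pi * np.exp(-(x_t - np.sqrt(a) * m) ** 2 / (2 * var))
    w = lik / lik.sum()                                # posterior mode weights
    cond_mean = m + (np.sqrt(a) * s0_sq / var) * (x_t - np.sqrt(a) * m)
    return float((w * cond_mean).sum())

rng = np.random.default_rng(0)
x0 = 3.0                                               # a minority point
x_t = np.sqrt(a) * x0 + np.sqrt(1 - a) * rng.normal()  # forward perturbation
print(optimal_denoiser(x_t))  # ~1.7: pulled well away from 3 toward the majority mode
```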
### Minority score: Measuring the uniqueness
Based on the intuitions that we gained in Section 3.1, we develop a metric for describing the uniqueness of features of given samples using diffusion models. We saw that the minority samples often lose significant amount of their perceptual information after going through the perturbation-reconstruction procedure. Hence, we employ the LPIPS distance (Zhang et al., 2018) between the original sample \(\mathbf{x}_{0}\) and the restored version \(\hat{\mathbf{x}}_{0}\), formally written as:
\[l(\mathbf{x}_{0};\mathbf{s_{\theta}})\coloneqq\mathbb{E}_{q_{\alpha_{t}}(\mathbf{x}_{t}| \mathbf{x}_{0})}[d(\mathbf{x}_{0},\hat{\mathbf{x}}_{0}(\mathbf{x}_{t},\mathbf{s_{\theta}}))], \tag{6}\]
where \(d(\cdot,\cdot)\) denotes the LPIPS loss between two samples, and \(t\) is a timestep used for perturbation; see Appendix B for details on our choice of \(t\). The expectation is introduced to account for the randomness due to the perturbation1. We call this metric _minority score_, since it yields high values for the minorities that contain novel attributes. Figure 3 visualizes the effectiveness of minority score on the CelebA training samples. On the left side, we see samples that contain unique attributes like "Eyeglasses" and "Wearing_Hat", which are famously known as low-density features of the dataset (Amini et al., 2019; Yu et al., 2020). On the other hand, the samples on the right side reveal features that look relatively common. We provide quantitative results that further validate the effectiveness of our metric in the next section; see Figure 8 for details.

Figure 3: The most (and the least) unique CelebA real samples determined by our proposed metric, _minority score_. Samples with the highest (left) and the lowest (right) minority scores are exhibited.
Footnote 1: We empirically found that the metric performs well even when the expectation is computed with a single sample from \(q_{\alpha_{t}}(\mathbf{x}_{t}|\mathbf{x}_{0})\). See Figure 3 for instance, where the employed minority scores are computed based on such single-sampling.
Thanks to the one-shot characteristic offered by Tweedie's formula, our minority score is efficient to compute compared to previous methods that rely upon iterative forward and reverse diffusion processes and hence require many model evaluations (Wolleb et al., 2022; Wyatt et al., 2022; Teng et al., 2022). In contrast, our metric requires only a small number of function evaluations (even counting the forward passes needed for LPIPS computation).
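A sketch of how the metric (6) might be computed, assuming the `lpips` package (Zhang et al., 2018) and the score-model interface used above; per footnote 1, even `n_samples=1` often suffices.

```python
import torch
import lpips  # pip install lpips (Zhang et al., 2018)

def minority_score(score_model, x0, alphas_bar, t, n_samples=1):
    """Minority score (6): LPIPS distance between an image batch `x0`
    (assumed scaled to [-1, 1]) and its one-shot Tweedie reconstruction,
    averaged over forward perturbations."""
    d = lpips.LPIPS(net="vgg").to(x0.device)
    a = alphas_bar[t]
    t_batch = torch.full((x0.shape[0],), t, device=x0.device, dtype=torch.long)
    scores = torch.zeros(x0.shape[0], device=x0.device)
    for _ in range(n_samples):
        x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)
        with torch.no_grad():
            x0_hat = (x_t + (1.0 - a) * score_model(x_t, t_batch)) / a.sqrt()
            scores += d(x0, x0_hat).view(-1)
    return scores / n_samples
```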
_Remark 3.2_ (The use of other distances in place of LPIPS).: One may wonder: why not just use L1 or L2 distances for minority score, which are more accessible and efficient than the LPIPS metric? In fact, we observed that minority score can also work with such measures (to a certain extent), yielding some meaningful results in identifying minority-featured samples. However, they are sensitive to differences in less-semantic information such as brightness and saturation in image data. Hence they tend to assign high scores not only to novel-featured samples, but also to ones that contain bright and highly saturated visual aspects (which often vanish during the perturbation-reconstruction process). See Figure 9 for instance. On the other hand, we did not observe such impurity in the LPIPS metric, and therefore stick to it throughout our main implementation.
### Minority guidance: Tackling the preference
Here a natural question arises: how can we tackle the inherent bias of diffusion models (toward common attributes) using the minority score at hand, so that they become more likely to generate novel-featured samples? To address this question, we take a conditional generation approach2 where we incorporate minority score as a conditioning variable into the generation process, which can then serve to produce unique-featured samples by conditioning on high minority score values. Specifically, to condition the existing framework with minimal effort, we employ classifier guidance (Dhariwal and Nichol, 2021), which does not require re-building class-conditional diffusion models. Below we describe in detail how we develop the conditional generation framework for minority score based on the classifier guidance technique.
Footnote 2: We also explored another natural strategy that concerns sampling from \(\hat{q}(\mathbf{x}_{0})\propto q(\mathbf{x}_{0})l(\mathbf{x}_{0};\mathbf{s}_{\mathbf{\theta}})\). See Appendix C.4 for details.
Consider a dataset \(\mathbf{x}_{0}^{(1)},\dots,\mathbf{x}_{0}^{(N)}\stackrel{{\mathrm{i.i.d.} }}{{\sim}}q(\mathbf{x}_{0})\) and a diffusion model \(\mathbf{s}_{\mathbf{\theta}}(\mathbf{x}_{t},t)\) (pre-trained on the dataset). For each sample, we compute minority score via (6) and obtain \(l^{(1)},\dots,l^{(N)}\) where \(l^{(i)}\coloneqq l(\mathbf{x}_{0}^{(i)},s_{\mathbf{\theta}}),i\in\{1,\dots,N\}\). We process the (positive-valued) minority scores as ordinal data with \(L\) categories by thresholding them with \(L-1\) levels of minority score. This yields the ordinally categorized minority scores \(\tilde{l}^{(1)},\dots,\tilde{l}^{(N)}\in\{0,\dots,L-1\}\) where \(\tilde{l}=0\) and \(\tilde{l}=L-1\) indicate the classes w.r.t. the most common and the rarest features, respectively. The ordinal minority scores are then coupled with the associated data samples to yield a paired dataset \((\mathbf{x}_{0}^{(1)},\tilde{l}^{(1)}),\dots,(\mathbf{x}_{0}^{(N)},\tilde{l}^{(N)})\) which is subsequently used for training a (noise-conditioned) classifier \(p_{\mathbf{\psi}}\big{(}\tilde{l}|\mathbf{x}_{t}\big{)}\) that predicts \(\tilde{l}\) for input \(\mathbf{x}_{t}\) (perturbed via (1)). After training, we blend the given score model with the log-gradient of the classifier as in (4) to yield a modified score:
\[\hat{\mathbf{s}}_{\mathbf{\theta}}\big{(}\mathbf{x}_{t},t,\tilde{l}\big{)}\coloneqq\mathbf{s} _{\mathbf{\theta}}(\mathbf{x}_{t},t)+w\nabla_{\mathbf{x}_{t}}\log p_{\mathbf{\psi}}\big{(} \tilde{l}\mid\mathbf{x}_{t}\big{)},\]
where \(\mathbf{\psi}\) indicates the parameterization of the classifier, and \(w\) is a scaling factor for the guidance; see Figure 5 for details on its impact. Incorporating this mixed score into the sampling process (3) then enables conditional generation w.r.t. \(\hat{p}_{\mathbf{\theta}}(\mathbf{x}_{t}|\tilde{l})\propto p_{\mathbf{\theta}}(\mathbf{x}_{t} )p_{\mathbf{\psi}}\big{(}\tilde{l}|\mathbf{x}_{t}\big{)}^{w}\). We call our technique _minority guidance_, as it gives guidance w.r.t. minority score in the generation process. Notice that generating unique-featured samples is now immediate, e.g., by conditioning on an arbitrarily high \(\tilde{l}\) via minority guidance. Moreover, our sampler enables free control of the uniqueness of features for generated samples (to the extent that a given pre-trained backbone model can represent them), which, to the best of our knowledge, has not been offered in the literature so far.
**Details on categorizing minority score.** For the threshold levels used to partition the (raw) minority scores (obtained via (6)), we observed that naive equally-spaced thresholds yield significant imbalance in class sizes (e.g., extremely small numbers of samples in highly unique classes), which in turn degrades the performance of the classifier, especially on the small-sized classes. Hence, we resort to splitting the minority scores based on their quantiles. When \(L=10\) for instance, we categorize the minority scores such that \(\tilde{l}=9\) (\(\tilde{l}=0\)) corresponds to the top (bottom) \(10\%\) of the most uniquely-featured samples. For the number of classes \(L\), choosing values that yield a reasonable number of samples per class (e.g., over \(100\) samples), based on the size of a given dataset, usually offers good performance. We also found that \(L\) can serve as a control knob for balancing the faithfulness of the guidance with the controllability over the uniqueness of features. We leave a detailed discussion on this point in Appendix C.2.
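As a small illustrative sketch (not the reference implementation), the quantile-based categorization of raw minority scores into \(L\) ordinal classes can be written as:

```python
import torch

def ordinal_minority_labels(scores, L):
    """Quantile-based split of raw minority scores into L ordinal
    classes, so every class gets roughly the same number of samples."""
    qs = torch.linspace(0.0, 1.0, L + 1)[1:-1]     # L - 1 interior quantiles
    edges = torch.quantile(scores, qs)
    return torch.bucketize(scores, edges)          # labels in {0, ..., L - 1}

# e.g., with L = 10, label 9 marks the top-10% most unique samples:
labels = ordinal_minority_labels(torch.rand(50_000), L=10)
```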
## 4 Experiments
In this section, we provide empirical demonstrations that validate our proposals and arguments, specifically focusing on the unconditional image generation task. To this end, we first clarify the setup used in our demonstrations (see Appendix B for more details) and then provide results for the proposed approach in comparison with existing frameworks.
### Setup
**Datasets and pre-trained models.** We consider three benchmark datasets: unconditional CIFAR-10 (Krizhevsky et al., 2009), CelebA \(64\times 64\) (Liu et al., 2015), and LSUN-Bedrooms \(256\times 256\) (Yu et al., 2015). For the unconditional CIFAR-10 model, we employ the checkpoint provided in Nichol and Dhariwal (2021). The pre-trained model for CelebA is constructed by ourselves using the architecture and settings of Dhariwal and Nichol (2021). The backbone model for LSUN-Bedrooms is taken from Dhariwal and Nichol (2021).
**Classifiers for minority guidance.** We employ the U-Net encoder architecture used in Dhariwal and Nichol (2021) for all of our guidance classifiers. For the CIFAR-10 and CelebA results, we employ all training samples for constructing the minority classifiers (e.g., \(N=50000\) for CIFAR-10). On the other hand, only \(10\%\) of the training samples are employed for building the minority classifier for the results on LSUN-Bedrooms3. For the number of minority classes \(L\), we take \(L=100\) for all three datasets.
Footnote 3: We found that the performance of minority guidance is indeed affected by the number of samples employed for constructing the minority classifier. See Appendix C.3 for details.
**Baselines.** Since there is a lack of studies exploring minority-focused generation in an unconditional setting like ours (see related work in Section 1 for details), we consider three generic frameworks that are widely adopted in the literature. The first two baselines are GAN-based frameworks, BigGAN (Brock et al., 2019) and StyleGAN (Karras et al., 2019). The third baseline, which is our main interest for comparison, is a diffusion-based generative model, DDPM (Ho et al., 2020), with the standard sampler (3). The BigGAN models are employed for CIFAR-10 and CelebA, and built with the settings provided on the authors' official project page. We employ StyleGAN for LSUN-Bedrooms using the checkpoint provided in Karras et al. (2019). The DDPM baseline is compared on all three datasets and shares the same pre-trained models as ours. In addition to the comparison with the generative baselines, we also employ the ground-truth real data, especially in terms of neighborhood density (see below for details on the focused metrics).
Figure 4: Generated samples from minority guidance over various minority classes \(\tilde{l}\). The results are obtained on CelebA where the number of minority classes is \(L=100\). Generated samples conditioned on \(\tilde{l}=0\) (left), \(\tilde{l}=50\) (middle), and \(\tilde{l}=99\) (right) are exhibited. We share the same random seed for all three minority classes. The classifier scale is fixed as \(w=2.0\).
Figure 5: Impacts of minority class \(\tilde{l}\) (left) and classifier scale \(w\) (right) on the density of Local Outlier Factor (LOF) (Breunig et al., 2000) on CelebA. The higher LOF, the less likely (i.e., more unique-featured) samples. “Real” refers to the test set of CelebA. The other parameters are fixed: \(w=2.0\) (left) and \(\tilde{l}=0\) (right).
**Evaluation metrics.** For the purpose of evaluating the generation capability of low-density samples, we focus on two well-known measures for describing the density of neighborhoods: Average k-Nearest Neighbor (AvgkNN) and Local Outlier Factor (LOF) (Breunig et al., 2000). AvgkNN measures density via proximity to k-nearest neighbors. On the other hand, LOF compares density around a given sample to density around its neighbors. For both measures, a higher value indicates that a given sample lies on a lower-density region compared to its neighboring samples (Sehwag et al., 2022). We evaluate the two metrics in the feature space of ResNet-50 (He et al., 2016) as in Sehwag et al. (2022). In addition to the twin neighborhood measures, we employ minority score to further augment our evaluation. The improved precision (Kynkaanniemi et al., 2019) is used for assessing sample quality. All measures are evaluated with 50K generated samples.
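For reference, a sketch of the AvgkNN computation over a batch of extracted features is given below; LOF is available in, e.g., scikit-learn's `LocalOutlierFactor`, and the upstream ResNet-50 feature extraction is assumed to have happened already.

```python
import torch

def avg_knn_distance(feats, k=5):
    """Average distance to the k nearest neighbors within a feature
    batch (e.g., ResNet-50 features); higher values indicate samples
    lying in lower-density regions."""
    d = torch.cdist(feats, feats)                      # pairwise L2 distances
    d.fill_diagonal_(float("inf"))                     # exclude self-distance
    knn, _ = torch.topk(d, k, dim=1, largest=False)    # k smallest per row
    return knn.mean(dim=1)
```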
### Results
**Validation of the roles of \(\tilde{l}\) and \(w\).** Figure 4 visualizes generated samples from minority guidance for a variety of (categorized) minority classes \(\tilde{l}\). The left figure corresponds to \(\tilde{l}=0\), while the middle and the right figures are due to \(\tilde{l}=50\) and \(\tilde{l}=99\), respectively. We use the same random seed for the generation of all three classes. Observe that as \(\tilde{l}\) increases, the samples tend to have rarer features that appear similar to the ones observable in the minority samples (e.g., in Figure 3). The left plot of Figure 5 illustrates this impact in a quantitative manner. We see that increasing \(\tilde{l}\) shifts the LOF density toward high-valued regions (i.e., low-density regions), which corroborates the observation made in Figure 4. On the other hand, the right plot of Figure 5 exhibits the impact of the classifier scale \(w\). Notice that an increase in \(w\) squeezes the LOF density toward low-valued regions (i.e., high-probability regions). This aligns with the well-known role of \(w\) (see Section 2.2 for details) that determines whether to focus on some prominent attributes of the conditioned class (Dhariwal and Nichol, 2021). See Figure 18 for generated samples that visualize this impact.

Figure 6: Sample comparison on LSUN-Bedrooms focused on the DDPM-based methods. See Figure 23 for comparison with samples from StyleGAN (Karras et al., 2019). Generated samples from ancestral sampling (Ho et al., 2020) (left), our method conditioned on majority (middle) and minority (right) classes are exhibited. We use the same random seed for all three cases.

Figure 7: Sample comparison on unconditional CIFAR-10 considering the DDPM-based samplers. We leave comparison with BigGAN (Brock et al., 2019) in Figure 21. Generated samples from ancestral sampling (Ho et al., 2020) (left), the proposed sampler focused on majority (middle) and minority (right) classes are provided. We share the same random noise for all three settings.
**Comparison with the baselines.** Figure 6 visualizes generated samples on LSUN-Bedrooms considering the DDPM-based sampling methods. From left to right, the figures correspond to samples from ancestral sampling (Ho et al., 2020) and our method conditioned on majority and minority classes, respectively. For all three cases, we share the same random seed for generating noise. Observe that our minority guidance, when focused on a novel-featured class, produces more exquisite and unique images compared to the standard sampler. Notably, it often turns monotonous-looking features (due to the standard method) into more novel ones, e.g., by including additional objects like lamps. We found that such sophisticated attributes are in fact novel features that yield high minority score values in LSUN-Bedrooms; see Figure 16 for details. Figure 7 exhibits generated samples on unconditional CIFAR-10 w.r.t. the DDPM-based samplers. The left figure exhibits samples from ancestral sampling, and the middle (right) figure is due to ours focused on a majority (minority) class. Again, we share the same random noise for all three cases. We observe the same trends in the performance benefits of our method relative to the baseline sampler; see Figure 17 for visualization of unique features in CIFAR-10.
Figure 8 provides a performance comparison of the neighborhood density measures on LSUN-Bedrooms. Observe that minority guidance outperforms the baselines in generating low-density unique samples for all three measures. Also notice that minority score exhibits a similar trend to the other measures, demonstrating its effectiveness as a metric for minority description. Table 1 exhibits the sample quality of the considered methods. We observe that when focused on unique features, samples from minority guidance exhibit degraded fidelity. We conjecture that such degradation is attributed to the limited knowledge of the backbone models about the unique features, due to their extremely small volume in the training data. This is supported by our results w.r.t. majority features, which yield superior sample quality compared to the baselines; see Table 1 for instance. We leave a detailed analysis on this point in Appendix C.5.
## 5 Conclusion and Limitations
We showed that the conventional DSM-based diffusion models are inherently biased to produce high-likelihood majority samples. In light of this, we introduced a novel metric to evaluate the uniqueness of features and developed a sampling technique that can serve to generate novel-featured minority samples. We demonstrated that the proposed sampler greatly improves the capability of producing the low-density minority samples over existing generative methods. We believe that this is an important step toward fairness in generative models, with the goal of preventing discrimination caused by model and data biases.
One disadvantage is that the quality of minority samples due to our framework is predominantly determined by the knowledge of a given pre-trained model, which is often limited w.r.t. minor features that are rarely seen during training. Another flip side is that our approach requires access to a good number of real samples (albeit unlabeled), which may not be feasible in some highly constrained situations. Hence, we believe one promising future direction is to push the boundary toward such challenging scenarios while addressing the limited-knowledge issue.

\begin{table}
\begin{tabular}{l c c c}
\hline \hline
 & \multicolumn{3}{c}{Precision} \\
\cline{2-4}
Method & CIFAR-10 & CelebA & LSUN \\
\hline
BigGAN & 0.8250 & 0.8593 & \(-\) \\
StyleGAN & \(-\) & \(-\) & 0.7310 \\
DDPM (standard) & 0.8795 & 0.8714 & 0.7686 \\
DDPM (ours-minor) & 0.7581 & 0.7951 & 0.5935 \\
DDPM (ours-major) & **0.8979** & **0.9123** & **0.7895** \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Comparison of sample quality. “DDPM (ours-minor)” is minority guidance focused on minority features (e.g., \(\tilde{l}=99\)). “DDPM (ours-major)” indicates the proposed sampler configured for majority classes (e.g., \(\tilde{l}=50\)). We mark the best result in bold and the second-best with underline.

Figure 8: Comparison of neighborhood density on LSUN-Bedrooms. We found the same trends on the other two datasets and leave the results in Appendix D.3. “Real” refers to our real-data baseline, the training set of LSUN-Bedrooms. “StyleGAN” is one of our generative baselines, StyleGAN (Karras et al., 2019). “DDPM (standard)” indicates another generative baseline on the dataset: DDPM with the standard sampler (3) (Ho et al., 2020). “DDPM (ours)” denotes DDPM with the proposed sampler, minority guidance. “AvgkNN” refers to Average k-Nearest Neighbor, and “LOF” is Local Outlier Factor (Breunig et al., 2000). “Minority Score” corresponds to our proposed metric for describing the uniqueness of samples. Higher values indicate less likely samples for all measures.
|
2310.03578 | Targeted Adversarial Attacks on Generalizable Neural Radiance Fields | Neural Radiance Fields (NeRFs) have recently emerged as a powerful tool for
3D scene representation and rendering. These data-driven models can learn to
synthesize high-quality images from sparse 2D observations, enabling realistic
and interactive scene reconstructions. However, the growing usage of NeRFs in
critical applications such as augmented reality, robotics, and virtual
environments could be threatened by adversarial attacks.
In this paper we present how generalizable NeRFs can be attacked by both
low-intensity adversarial attacks and adversarial patches, where the latter
could be robust enough to be used in real-world applications. We also
demonstrate targeted attacks, where a specific, predefined output scene is
generated by these attacks with success. | Andras Horvath, Csaba M. Jozsa | 2023-10-05T14:59:18Z | http://arxiv.org/abs/2310.03578v1 | # Targeted Adversarial Attacks on Generalizable Neural Radiance Fields
###### Abstract
Neural Radiance Fields (NeRFs) have recently emerged as a powerful tool for 3D scene representation and rendering. These data-driven models can learn to synthesize high-quality images from sparse 2D observations, enabling realistic and interactive scene reconstructions. However, the growing usage of NeRFs in critical applications such as augmented reality, robotics, and virtual environments could be threatened by adversarial attacks.
In this paper we present how generalizable NeRFs can be attacked by both low-intensity adversarial attacks and adversarial patches, where the later could be robust enough to be used in real world applications. We also demonstrate targeted attacks, where a specific, predefined output scene is generated by these attack with success.
## 1 Introduction
Neural Radiance Fields (NeRFs) [1] have emerged as a groundbreaking paradigm in the domain of 3D scene representation and rendering, revolutionizing the way we perceive and interact with virtual environments. NeRFs leverage the power of deep learning to capture intricate scene details [2], enabling the synthesis of photorealistic images from sparse 2D observations [3]. The ability to reconstruct high-quality scenes from limited input data has propelled NeRFs into the forefront of computer vision, computer graphics, augmented reality [4], robotics [5], and other related fields.
NeRFs represent 3D scenes as continuous functions, mapping 3D coordinates to their corresponding scene appearance properties, such as color and opacity. This continuous representation distinguishes them from most traditional 3D models, which often rely on discrete voxels or point clouds. In essence, NeRFs can be seen as implicit functions that define the scene's surface, depth and appearance properties, making them particularly suited for complex and detailed scene reconstruction. They can generate depth maps [6] and can be used in navigation [7], [8], localization [5] and six degrees of freedom orientation estimation [9].
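As a toy illustration of this implicit-function view, a minimal radiance-field network can be written as below; for brevity it omits the view-direction input that full NeRFs use for color, so it is a sketch rather than the reference architecture.

```python
import torch
import torch.nn as nn

class PositionalEncoding(nn.Module):
    """Sinusoidal features of 3-D coordinates, the standard trick that
    lets an MLP represent high-frequency scene detail."""
    def __init__(self, n_freqs=10):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)

    def forward(self, x):                               # x: (N, 3)
        xb = x[..., None] * self.freqs                  # (N, 3, n_freqs)
        return torch.cat([torch.sin(xb), torch.cos(xb)], dim=-1).flatten(-2)

class TinyNeRF(nn.Module):
    """Minimal radiance field: 3-D position -> (RGB color, density)."""
    def __init__(self, n_freqs=10, hidden=256):
        super().__init__()
        self.pe = PositionalEncoding(n_freqs)
        self.mlp = nn.Sequential(
            nn.Linear(6 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                       # 3 color + 1 density
        )

    def forward(self, xyz):
        out = self.mlp(self.pe(xyz))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])
```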
The significance of Neural Radiance Fields (NeRFs) lies in their widespread applicability, not only in image rendering but also in generating 3D scenes and depth maps and in supporting navigation. However, it is essential to acknowledge that the susceptibility of NeRFs to adversarial attacks can introduce complications and challenges. These attacks have the potential to produce unrealistic maps and representations, leading to the hallucination of non-existent objects within the scene or the omission of existing objects. As a result, in various applications employing NeRFs, these adversarial perturbations may give rise to erroneous outcomes and hinder accurate scene reconstruction and navigation.
The training process of NeRFs involves capturing multi-view image observations of the scene and optimizing the model to predict accurate color and opacity values for any novel viewing angle within the scene's spatial extent. This approach enables NeRFs to not only render novel viewpoints but also handle dynamic scenes and incorporate additional observations over time. Consequently, NeRFs have opened up exciting possibilities for applications like real-time virtual reality experiences, interactive architectural visualizations [10], and advanced autonomous robotic systems [7].
As NeRFs find increasing adoption in real-world applications, concerns surrounding their vulnerability to adversarial attacks have surfaced. Adversarial attacks aim to exploit vulnerabilities in machine learning models by introducing carefully crafted perturbations to the input data. These perturbations are imperceptible to the human eye but can lead to drastic misclassifications or erroneous predictions.
In their conventional configurations, NeRFs are trained in a scene-specific and object-specific manner, involving the training of a dedicated neural network for each scene. The neural network's weights store the scene-specific representations and knowledge of views and camera angles. While these networks could potentially be vulnerable to attacks during the training process, exploiting data poisoning [11] or backdoor attacks [12], resulting in the production of invalid three-dimensional representations, their lack of generality limits the potential issues in real-world applications. As a consequence, the specialized nature of NeRF training offers a degree of protection against such adversarial perturbations in practical scenarios.
As research on NeRFs has progressed, recent advancements have led to the development of Generalizable Neural Radiance Fields (GeNeRFs) [13]. These extensions go beyond the original NeRF formulation, where scene-specific models had to be trained. The capabilities of these models encompass both the generation of novel views and the creation of implicit three-dimensional representations using known previous views and camera poses. Due to the general nature of these methods, there arises a suspicion that they might be susceptible to attacks through perturbations of the input pixels in the images. Such attacks could potentially enable the creation of scenes with arbitrary objects.
In this paper, we aim to substantiate this hypothesis by demonstrating the vulnerability of these models to adversarial perturbations on one of the most commonly used GeNeRF variants, IBRNet [14], showcasing the potential for generating scenes with arbitrary objects through these attacks.
Adversarial attacks can take various forms within the context of NeRFs, including attacks on the embedded 3D representation, the weights of the trained models, or the input pixels. Attacking input pixels is relatively easy, and this method remains the most significant form of attack, as it does not require access to the image processing pipeline, making it a potential real-world threat. Consequently, this study focuses on this form of attack by employing targeted attack strategies involving both low-intensity attacks [15], covering all input pixels, and patch-based attacks [16], limited to a certain region of the image.
Attack strategies can also be distinguished based on the expected output of the attacks. In the case of untargeted attacks, the aim is to modify the output of the network as much as possible, without any restrictions on the output scene of the model. Meanwhile, in the case of a targeted attack, a predefined output scene has to be generated by the model as the result of the attack.
In [17], untargeted attacks on GeNeRFs were introduced. The attack methodology and results are interesting, but untargeted attacks do not pose a substantial real-world threat, as the resulting outputs, owing to their unconstrained nature, are often easily detected as non-realistic images.
In contrast, this research delves into targeted attacks, wherein the objective is to create realistic scenes featuring unreal objects in the rendered images, while the generated depth maps were not investigated in the current work. Given the importance of rendered images as the most commonly investigated element, the study specifically focuses on attacking this aspect. By exploring the vulnerability of NeRFs to targeted attacks on the rendered image, our research aims to shed light on potential security risks and the extent of their impact on NeRF-based systems. This investigation is expected to provide valuable insights into safeguarding NeRFs against adversarial threats and further enhancing their reliability and practicality in various real-world applications.
In this paper, we embark on a comprehensive exploration of adversarial attacks on NeRFs. We investigate the efficacy of different attack strategies and evaluate their impact on the rendering quality, scene reconstruction accuracy, and generalization capabilities of NeRFs.
Our paper is structured as follows: in section 2 we briefly describe Generalizable Neural Radiance Fields, in section 3 we introduce the most commonly applied adversarial attack methodologies and algorithms, in section 4 we describe our experiments and results, and in section 5 we draw conclusions from them.
## 2 Generalizable NeRFs
NeRFs present a cutting-edge approach in leveraging deep neural networks to generate 3D representations of objects or scenes from 2D images. This innovative technique involves encoding the complete object or scene within an artificial neural network, which then predicts the light intensity, also known as radiance, at any specific point in the 2D image. As a result, NeRFs enable the creation of novel 3D views from various angles, revolutionizing the generation of highly realistic 3D objects automatically.
The exceptional potential of NeRFs lies in their capacity to represent 3D data more efficiently compared to other existing methods. This efficiency opens new avenues for generating highly realistic 3D objects with remarkable promise. Moreover, when combined with complementary techniques, NeRFs offer the exciting prospect of significantly compressing 3D representations of the world, reducing data sizes from gigabytes to mere tens of megabytes [18]. Such advancements hold significant implications for various fields, enabling streamlined and versatile 3D data generation and manipulation.
GeNeRF variants like [19],[20], [21], [14] enable cross-scene generalization via two modifications on top of traditional NeRFs: Firstly, these variants condition NeRFs on the source views of new scenes. This involves utilizing a limited number of observed source views from a new scene to extract features via a Convolutional Neural Network (CNN) encoder. These features are then used as scene priors and fed into mostly feed-forward neural
networks combined with transformer architectures. Secondly, the variants incorporate a ray transformer, which operates on all points along the same ray, enhancing the density prediction.
The most common steps implemented by GeNeRF models can be summarized as follows:

1. 2D feature maps \(\{W_{i}\}_{i=1}^{S}\) are inferred for a total of \(S\) source views \(\{I_{i}\}_{i=1}^{S}\) using a pretrained CNN encoder \(E\), where \(W_{i}=E(I_{i})\) represents a 3D tensor. (Notably, this process requires only a one-time effort for each new scene.)
2. A ray \(r(t)=o+td\) is emitted from the origin \(o\) of the virtual camera along the view direction \(d\) to pass through the pixel to be rendered.
3. 3D points \(x_{k}\) are sampled along the ray based on an ordered depth sequence \(t_{k}\) drawn from a certain distribution.
4. Each sampled 3D point \(x_{k}\) is projected onto the image planes of the source views using a projective transformation \(\pi\), obtaining the corresponding scene features \(W_{i}(\pi(x_{k}))\) for all \(S\) source views.
5. The scene features acquired in the previous step are fed to an MLP model \(f\) to derive the color \(c_{k}\) and volumetric density \(\sigma_{k}\) for each point. In contrast to scene-specific NeRF models, instead of directly predicting volumetric densities \(\sigma_{k}\), some architectures implement a two-step process where density features \(f_{k}^{\sigma}\) are predicted, and the final volumetric density prediction is determined by a transformer architecture \(T\) taking as input the \(f_{k}^{\sigma}\) vectors of every sample.
6. Occlusion-aware volume rendering is performed in the final step by taking into account the relative viewing directions or predicting visibility probabilities (see the volume-rendering sketch below).

During training, the networks \(E\), \(f\), and \(T\) are updated using the Mean Squared Error (MSE) loss or other pixel-based distance metrics, ensuring effective learning of the rendering process.
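The compositing in the final step is the standard volume-rendering quadrature shared by NeRFs and GeNeRFs. The sketch below omits IBRNet's visibility weighting and shows only how per-sample colors \(c_{k}\) and densities \(\sigma_{k}\) become a pixel color and an expected depth.

```python
import torch

def volume_render(colors, sigmas, t_vals):
    """Composite per-sample colors c_k and densities sigma_k along each
    ray into a pixel color and an expected depth.
    colors: (R, K, 3); sigmas: (R, K); t_vals: (R, K) sorted depths."""
    deltas = t_vals[:, 1:] - t_vals[:, :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[:, :1])], dim=1)
    alpha = 1.0 - torch.exp(-sigmas * deltas)            # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=1)    # cumulative transparency
    trans = torch.cat([torch.ones_like(trans[:, :1]), trans[:, :-1]], dim=1)
    weights = alpha * trans                              # contribution of each sample
    rgb = (weights[..., None] * colors).sum(dim=1)       # (R, 3)
    depth = (weights * t_vals).sum(dim=1)                # expected ray depth
    return rgb, depth, weights
```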
From the various variants of GeNeRFs, we have selected IBRNet [14] for our investigations, which is a commonly applied and highly cited variant, capable of rendering state-of-the-art images from new views of novel scenes. Since the whole rendering pipeline is differentiable, pixels or parts of the source images can be modified according to the planned adversarial attacks. We have used a pretrained model, which was trained on multiple datasets simultaneously (LLFF [22], RealEstate10K [23], Google Scanned Objects [24], etc.) to be able to cope with generic scenes. For the sake of reproducibility, the same pretrained model and data for training and evaluation are available at the following link 1
Footnote 1: [https://drive.google.com/drive/folders/1qfcPffMy8-rmzjbapLAtdrKwg3AV-NJe](https://drive.google.com/drive/folders/1qfcPffMy8-rmzjbapLAtdrKwg3AV-NJe)
There are more recent implementations and variants of GeNeRFs, such as [25], which applies geometric constraints to be more efficient, or [13], where even hardware constraints were considered, but these approaches do not differ significantly from the model of our selection; therefore, we believe that the attacks presented here can be generalized to these variants as well.
GeNeRFs represent a highly promising real-world solution for novel view synthesis, owing to their remarkable ability to generalize across different scenes, facilitating instant rendering on previously unseen environments. Despite the critical significance of adversarial robustness in practical applications, limited attention has been given to exploring its implications specifically for GeNeRF. We postulate that GeNeRF's conditioning on source views from new scenes, often sourced from the Internet or third-party providers, may introduce novel security concerns in real-world scenarios. Additionally, the conventional understanding and solutions for achieving adversarial robustness in neural networks may not directly apply to GeNeRFs, given its distinctive 3D nature and diverse operations.
## 3 Adversarial attacks
The concept of adversarial attacks originated from the pioneering work of [26]. It brought to light a crucial revelation about deep neural networks: despite their ability to generalize effectively and perform well on conventional input data and even on similar inputs, they possess a vulnerability to exploitation by malicious agents. This vulnerability stems from the high-dimensional nature of the inputs, enabling the generation of non-realistic input samples that produce outputs deviating drastically from human judgment and the expected outcomes.
The initial adversarial attacks proposed by Goodfellow et al. [15] involved calculating the sign of the gradient of the cost function \(J(\mathbf{x},y)\) with respect to the input \(\mathbf{x}\) for the expected output \(y\), scaled by a constant \(\epsilon\) controlling the intensity of the noise: \(\mathbf{x}_{adv}=\mathbf{x}+\epsilon\,\mathrm{sign}(\nabla_{\mathbf{x}}J(\mathbf{x},y))\). This method, known as the Fast Gradient Sign Method (FGSM), allowed for the rapid generation of attacks.
Rozsa et al. [27] extended FGSM by utilizing not just the sign of the raw gradient but also a scaled version of the gradient's magnitude, termed the Fast Gradient Value method.
Furthermore, Dong et al. [28] proposed an iterative version of FGSM that incorporated momentum into the equation. The inclusion of momentum was inspired by the concept of optimization during model training, with the goal of avoiding poor local minima and non-convex patterns in the objective function's landscape.
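A sketch of this momentum iterative FGSM variant, later used in our experiments (Section 4), is given below; `model` and `loss_fn` are generic placeholders, and the gradient normalization follows the spirit of [28] rather than its exact per-sample L1 form.

```python
import torch

def mi_fgsm(model, loss_fn, x, y, eps=0.01, n_iter=10, mu=1.0):
    """Momentum iterative FGSM: accumulate a normalized gradient with
    momentum and step along its sign, keeping the total perturbation
    within an L-infinity ball of radius eps around the input."""
    x = x.detach()
    x_adv = x.clone()
    g = torch.zeros_like(x)
    alpha = eps / n_iter                                  # per-iteration step
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        grad = torch.autograd.grad(loss_fn(model(x_adv), y), x_adv)[0]
        g = mu * g + grad / (grad.abs().mean() + 1e-12)   # momentum accumulation
        x_adv = x_adv.detach() + alpha * g.sign()         # untargeted ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)                     # keep a valid pixel range
    return x_adv.detach()
```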
Moosavi et al. [29] approached adversarial attacks from the perspective of binary classifier robustness. They formulated the idea that a binary classifier's robustness at a given point \(\mathbf{x}_{0}\) is determined by its distance from the separating hyperplane \(\Delta(\mathbf{x}_{0};f)\). They derived a closed-form formula to calculate the smallest perturbation required to change the classifier's output and applied these perturbations iteratively to the image until the classifier's decision changed. This approach was later extended to address multiclass classification problems as well.
While these methods were crucial for theoretical understanding, their significance for practical, real-world applications is limited due to the low-intensity, constrained nature of the applied noise. In real-world scenarios, even the smallest perturbations, such as those arising from environmental factors like perspective, illumination changes, or lens distortion, can completely disrupt the desired results. Therefore, the utilization of these attacks in practical applications is not feasible [30].
In [16, 31], robust and real-world attacks were presented against various classification networks. These methods create an adversarial patch where, in contrast to the global but low-intensity approaches, distortions appear in a region of limited area while the intensity values are not bounded2. Successful attacks with adversarial patches were also demonstrated using black and white patches only [32], where not the intensities of the patch, but the locations and sizes of the stickers are optimized. These attacks, in which the gradients of the networks are not necessarily used during optimization, open the way toward black-box attacks [33, 34], where the attacker needs access only to the final responses (confidence values) to generate attacks using evolutionary algorithms.
Footnote 2: apart from the global bounds of image values
A general overview of adversarial attacks, containing a more detailed description of most of the previously mentioned methods, can be found in the survey paper [35]. The resilience of segmentation networks against adversarial attacks has also been investigated heavily in recent years [36, 37, 38, 39].
Subsequent years witnessed extensive investigations into the potential of exploiting adversarial attacks. Researchers developed novel attack strategies to enhance the robustness of generated attacks [16, 31], even enabling black-box attacks, which do not require access to the network gradients [32, 33, 34].
Moreover, advancements were made in extending adversarial attacks to more complex tasks beyond classification, such as detection and localization problems [40]. These innovative techniques were applied to diverse network architectures, including Faster-RCNN [41].
To the best of our knowledge, adversarial attacks have not been presented and investigated for GeNeRF models, apart from [17], which is restricted to low-intensity and untargeted attacks.
## 4 Method and Results
Our proposed method for the adversarial attack on GeNeRFs is shown in Fig. 1. We have selected a pretrained model of IBRNet as the GeNeRF. For a certain pose and set of source images, we created a new rendered image and manually placed a hallucinated object on it. The resulting image serves as the adversarial ground truth image. The residual loss is always computed between the adversarial ground truth image and the currently rendered image at the same pose. This is an important regularizer, because ignoring the gradients coming from the non-attacked regions might significantly deteriorate the image quality in these parts.
Since these modifications were manual, we have to admit that they can be biased in two ways: on one hand, they might disturb the real structure of the images (artificial insertion and deletion might cause extremely strong edges in the image); on the other hand, the modifications are subjective, and other people might desire different modifications. We would argue that this subjectivity is unavoidable, and we carefully generated three different types of modifications:
* types where the shape of existing objects is modified;
* types where existing objects were deleted from scenes and substituted with background pixels;
* types where new objects were added to the scenes.
A few samples of these modifications and the results of attacks using these images as desired outputs can be seen in Fig. 2.
### Low-intensity Attacks
For our investigation into low-intensity attacks, we opted for the iterative version of FGSM with momentum [28] as the attack mechanism. Our setup involved 1000 iterations, with parameter \(\epsilon\) set to \(0.01\).
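A sketch of the momentum-based update we use is shown below; `grad_fn` abstracts the residual-loss gradient of Fig. 1, and the step size is an illustrative choice rather than a value prescribed by [28]:

```python
import torch

def mi_fgsm_targeted(grad_fn, clean, eps=0.01, n_iter=1000, mu=1.0):
    # Targeted iterative FGSM with momentum: descend toward the
    # adversarial ground truth while keeping each attacked source
    # image inside an L-infinity ball of radius eps.
    x = clean.clone()
    g = torch.zeros_like(x)                        # momentum buffer
    alpha = eps / 10.0                             # per-step size (a choice)
    for _ in range(n_iter):
        grad = grad_fn(x)
        g = mu * g + grad / grad.abs().mean()      # L1-normalized momentum
        x = x - alpha * g.sign()                   # targeted: minimize loss
        x = clean + (x - clean).clamp(-eps, eps)   # project into eps-ball
        x = x.clamp(0.0, 1.0)                      # valid image range
    return x
```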
In a typical low-intensity attack on classification problems, a single input image is used, allowing modifications to all its pixels until a predefined threshold is reached. However, since GeNeRFs utilize multiple input images, referred to as source images or source views, attackers can simultaneously modify all or a subset of these images. To explore the impact of different attack scenarios, we devised five setups with varying numbers of source images: 10, 8, 6, 5, and 4. The quality of the generated image depends on the number of source images, generally improving with an increase in this number. Our investigation covered cases where one, two, three, and so on, up to all source images were subject to modification.
This investigation holds significance as it addresses real-world scenarios where images from events are uploaded to a common dataset by users or multiple autonomous robots. In such cases, understanding the number and percentage of images that must be attacked for successful modifications in the rendered output image becomes crucial. In this way, our research can provide valuable insights into enhancing the security and reliability of GeNeRFs in various practical applications.
The quality of the attack was measured as the average \(\ell_{2}\) distance between the generated image and our hand-modified ground truth image. We executed this experiment on ten different scenes, repeating each attack ten times (to average out the stochastic nature of the attack algorithm); the quantitative summary of the results can be seen in Fig. 3.

Figure 1: Adversarial attack on generalizable NeRFs. A ground truth render is created for the adversarial pose and a hallucinated object is randomly placed on the rendered image. Since all the components of the pipeline are fully differentiable, the goal is to modify the source images such that the resulting render will be close to the attacked ground truth render. The back-propagated gradients of the residual image are used to alter the source images within certain bounds.

Figure 2: Sample cases from the evaluation part of the LLFF dataset. Here we display three different samples, one in each row. The first column contains the original output images of the network without any attacks; the second column contains the modified images which were used as ground truth during the attacks. These modifications were done manually. The third column contains the output images of the network after the attack. In this setup the images were generated using ten different views and attacks were applied on all input images. The attacks were generated using FGSM for 1000 iterations with an \(\epsilon\) value of \(0.01\). As these images demonstrate, the adversarial attacks were successful: we were able to modify objects in the scene (fern), delete objects from the scene (orchids), and render non-existing objects in the scene (room).
These results clearly indicate that attacks were successful in most cases when a significant majority of the source views were targeted. In our setup, an attack can be considered successful when the average pixel distance dropped below \(0.015\), while unsuccessful attacks resulted in values above \(\sim 0.020\). It is important to note that these threshold values may vary depending on the scene, but for scenes where only a small region of the image is altered, as in Fig. 2, they can serve as a rule of thumb. Additional results containing PSNR, SSIM, and LPIPS measures can be found in Table 1.
These findings highlight the overall robustness of GeNeRFs, as the generated images remained reliable in cases where the majority of the source images were left untouched. However, the study also underscores the vulnerability of the system when an attacker gains access to most of the source images, enabling arbitrary modifications to the output. Understanding and addressing these security implications are crucial as GeNeRFs and similar technologies advance, ensuring their safe and reliable application in various practical scenarios.
### Patch-based Attacks
Low-intensity attacks may hold academic interest, but their significance diminishes when considering real-world applications, primarily due to the limited access attackers have to the image processing pipeline. However, the most straightforward and practical way to target neural networks is by modifying the real environment itself. In such scenarios, attackers can manipulate small regions within the image while freely altering the pixel values in this designated region. To effectively simulate and study these real-world threats, we have focused our investigation on patch-based attacks.
Patch-based attacks provide a suitable framework to understand the vulnerabilities of neural networks in the face of real-world adversarial manipulations. By restricting our attention to specific regions in the image, we emulate the scenario where an attacker can locally modify the environment while leaving the rest of the scene intact. The arbitrary nature of pixel values within these patches allows us to evaluate the robustness of the neural networks against unpredictable and potentially damaging alterations.
For low-intensity attacks, the algorithm's crucial parameter is the \(\epsilon\) value, which keeps the modifications hard to detect. Analogously, the size of the applied patch is the most crucial parameter in patch-based attacks. To examine the impact of patch size on these attacks, we employed the same set of 10 scenes previously generated. For each scene, desired attack outputs were manually specified, and patches were automatically placed at the center of the images. This approach ensured that the patches were not positioned close to the regions already modified.
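The sketch below illustrates one update step of this setup; `render`, the mask construction, and the learning rate are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def patch_attack_step(render, src, mask, pose, adv_gt, lr=0.05):
    # mask is 1 inside the k x k patch at the image center, 0 elsewhere.
    # Unlike the low-intensity attack, patch pixels are unconstrained
    # apart from the global image bounds [0, 1]; patches are optimized
    # independently per source image (src holds one tensor per view).
    src = src.detach().requires_grad_(True)
    loss = F.mse_loss(render(src, pose), adv_gt)
    loss.backward()
    with torch.no_grad():
        src -= lr * src.grad * mask   # update only the patch region
        src.clamp_(0.0, 1.0)          # global intensity bounds only
    return src.detach()
```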
Clearly, a patch covering the modified region could influence the outcome, especially when applied near or at the boundary of the effect. However, the most critical scenario to consider is when patches have far-reaching effects, altering pixels that are not in close proximity to them while preserving the original output values of the other regions.
Our experimental investigations involved generating patches of sizes \(2\times 2\), \(5\times 5\), \(10\times 10\), and \(20\times 20\), and then assessing their respective effects on the scenes. The results of these experiments are illustrated in Fig. 4, providing valuable insights into the relationship between patch size and the success of patch-based attacks. Additional results containing PSNR, SSIM, and LPIPS measures can be found in Table 1.
The results clearly demonstrate the feasibility of patch-based attacks when the patch size is sufficiently large (typically \(10\times 10\) patches in our experiments) and when the patches are prevalent in the majority of images. In our investigations, utilizing ten source views, attacks were generally successful if at least four of them contained a patch large enough to cause significant impact.
It is essential to highlight that in this experiment, the patches were independently optimized for each source image. Consequently, the pixel values at the same location could differ across different images, enabling the attacker to tailor their patches specifically to exploit the vulnerabilities in each individual source view.
Figure 3: This plot depicts the dependence of the attack quality on the number of source views and the number of attacked samples for GeNeRFs. The Y axis plots the average \(\ell_{2}\) distance between the pixels of the ground truth image and the image generated by the network after the attack. Lower values mean the attack was more successful, since in this case the network output was closer to our desired output. The X axis contains the number of attacked images, while the differently colored curves depict outputs generated from different numbers of source views. As can be seen from these results, attacks are not successful (they generate a larger distance) until the number of attacked views reaches the majority of the source views. Each point in these measurements was generated as the average of 10 independent runs on ten different scenes.
These findings underscore the potential threat posed by patch-based attacks and emphasize the importance of developing robust defenses against such manipulation techniques. Understanding the adaptability of these attacks to various scenarios is crucial for strengthening the security of neural network systems in real-world applications.
These preliminary results demonstrate the feasibility of patch-based attacks on GeNeRFs. However, our simulations deviate from real-world setups in the following aspects:
* The patches are consistently positioned at the center of images, and their locations remain unchanged regardless of the viewpoint.
* The pixel values within the patches are optimized independently from each other and can vary across different input images.
Addressing these differences in the future is essential to simulate scenarios where an attacker introduces a real object into a scene. Despite these limitations, we are optimistic that this research paves the way toward real-world adversarial applications against GeNeRFs, such as physical stickers.
## 5 Conclusion
We have demonstrated targeted adversarial attacks on GeNeRFs, revealing important insights into the security vulnerabilities of these networks. The success of the attacks, utilizing methods commonly employed in classification tasks, emphasizes the ease with which malevolent attackers can manipulate the generated images. However, our findings also demonstrate the relative robustness of NeRFs when multiple views are utilized and not all source images are accessible to the attacker. In such cases, the effectiveness of the attack diminishes, indicating the importance of safeguarding access to critical source images. In cases where the attacker has access to the majority of the views, the quality of the attacks increases significantly.
Additionally, we explored patch-based attacks, where limited regions of the image are targeted, but arbitrary values can be introduced. Remarkably, these attacks are not restricted to local neighborhoods, as even distant regions can be manipulated with such modifications. The position and view angle of these patches proved to have little impact on their efficacy, further accentuating the potential threat posed by these attacks.
While our results indicate that these attacks have the potential to be robust enough for real-world applications, it is essential to acknowledge that further investigations are necessary to fully comprehend their implications and develop effective countermeasures. As the field of NeRFs continues to advance, addressing security concerns and improving defenses against adversarial attacks becomes imperative to ensure the trustworthy deployment of these technologies in various domains.
## Acknowledgement
This research has been partially supported by the Hungarian Government through the following grants: 2018-1.2.1-NKP00008: Exploring the Mathematical Foundations of Artificial Intelligence and TKP2021_02-NVA-27 - Thematic Excellence Program. The support of the Alfréd Rényi Institute of Mathematics is also gratefully acknowledged.
\begin{table}
\begin{tabular}{c c c c c}
\hline \hline
Attack & L2 \(\downarrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) & LPIPS \(\downarrow\) \\
\hline
Low (2/10) & 0.022 & 19.75 & 0.537 & 0.242 \\
Low (4/10) & 0.016 & 21.83 & 0.841 & 0.168 \\
Low (10/10) & 0.011 & 24.72 & 0.910 & 0.163 \\
\hline
Patch (2/10) & 0.022 & 18.43 & 0.588 & 0.256 \\
Patch (4/10) & 0.017 & 21.60 & 0.792 & 0.173 \\
Patch (10/10) & 0.011 & 24.33 & 0.903 & 0.114 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Evaluation of the attack methodologies across several quality metrics. Rows labeled "Low" show the outcomes of low-intensity adversarial attacks, while rows labeled "Patch" show the results of patch-based attacks (with patch size \(20\times 20\)). Each row reports the L2, PSNR, SSIM, and LPIPS metrics (one per column). All setups use 10 source views (the second value in the brackets following the attack method), while the number of attacked images is given by the first value (2, 4, or 10). These measurements were calculated as the average of 10 independent runs, conducted on ten different scenes.
Figure 4: This plot illustrates how the attack quality is influenced by the number of source views and the size of the patch applied during the attack. The Y-axis represents the average \(\ell_{2}\) distance between the pixels of the ground truth image and the image generated by the network after the attack. Lower values indicate more successful attacks, as they result in the network output being closer to our desired output. The X-axis represents the number of attacked images, while the differently colored curves depict outputs generated with different patch sizes. The results demonstrate that attacks are not successful (they generate a larger distance) until the number of attacked views reaches the majority of the source views, or when the patch size is too small. These measurements are based on the average of 10 independent runs, conducted on ten different scenes. |
2308.01125 | Stereo Visual Odometry with Deep Learning-Based Point and Line Feature
Matching using an Attention Graph Neural Network | Robust feature matching forms the backbone for most Visual Simultaneous
Localization and Mapping (vSLAM), visual odometry, 3D reconstruction, and
Structure from Motion (SfM) algorithms. However, recovering feature matches
from texture-poor scenes is a major challenge and still remains an open area of
research. In this paper, we present a Stereo Visual Odometry (StereoVO)
technique based on point and line features which uses a novel feature-matching
mechanism based on an Attention Graph Neural Network that is designed to
perform well even under adverse weather conditions such as fog, haze, rain, and
snow, and dynamic lighting conditions such as nighttime illumination and glare
scenarios. We perform experiments on multiple real and synthetic datasets to
validate the ability of our method to perform StereoVO under low visibility
weather and lighting conditions through robust point and line matches. The
results demonstrate that our method achieves more line feature matches than
state-of-the-art line matching algorithms, which when complemented with point
feature matches perform consistently well in adverse weather and dynamic
lighting conditions. | Shenbagaraj Kannapiran, Nalin Bendapudi, Ming-Yuan Yu, Devarth Parikh, Spring Berman, Ankit Vora, Gaurav Pandey | 2023-08-02T13:09:12Z | http://arxiv.org/abs/2308.01125v1 | Stereo Visual Odometry with Deep Learning-Based Point and Line Feature Matching using an Attention Graph Neural Network
###### Abstract
Robust feature matching forms the backbone for most Visual Simultaneous Localization and Mapping (vSLAM), visual odometry, 3D reconstruction, and Structure from Motion (SfM) algorithms. However, recovering feature matches from texture-poor scenes is a major challenge and still remains an open area of research. In this paper, we present a Stereo Visual Odometry (StereoVO) technique based on point and line features which uses a novel feature-matching mechanism based on an Attention Graph Neural Network that is designed to perform well even under adverse weather conditions such as fog, haze, rain, and snow, and dynamic lighting conditions such as nighttime illumination and glare scenarios. We perform experiments on multiple real and synthetic datasets to validate our method's ability to perform StereoVO under low-visibility weather and lighting conditions through robust point and line matches. The results demonstrate that our method achieves more line feature matches than state-of-the-art line-matching algorithms, which when complemented with point feature matches perform consistently well in adverse weather and dynamic lighting conditions.
## I Introduction
With the advances in development and deployment of self-driving vehicles and mobile robots, there is a growing need for high-resolution, accurate visual odometry algorithms that can be deployed on low-cost camera sensors. Although existing localization algorithms perform well under ideal conditions, they usually tend to fail or under-perform in adverse weather conditions such as fog, rain, and snow and in dynamic lighting conditions such as glare and nighttime illumination. The development of visual odometry algorithms that perform effectively under such conditions remains an open area of research.
Existing vision-based localization algorithms rely primarily on conventional point features, such as SIFT [3], SURF [4], and ORB [5], or learning-based point features, such as SuperPoint [6] and LIFT [7], to perform temporal feature matching or learn alternate representations using monocular camera(s) [8, 9] and obtain camera pose estimates. However, point features become unreliable in the adverse conditions mentioned above. To overcome this, we develop a visual odometry technique that includes line segment features in addition to point features (see example application in Fig. 1). In foggy scenes where conventional point features like ORB and SIFT are unable to detect enough features, learning-based feature detectors like SuperPoint perform well. Line feature detectors such as SOLD2 [10] and L2D2 [11] also perform well in such scenarios. However, most point feature detectors, including ORB, SIFT, and SuperPoint, tend to demonstrate poor performance in nighttime scenarios. In comparison, the line features detected by state-of-the-art line detectors such as SOLD2 and L2D2 tend to remain consistent in scenes with low illumination, thereby indicating the need to leverage line features for nighttime conditions.
Fig. 1: Grayscale images of (a) a clear-sky scenario from the Ford AV dataset [1]; (b) a nighttime scenario from the Oxford Car dataset [2]. Rows 1 and 3 show point matches between frames \(i\) and \(i+1\); rows 2 and 4 show line matches between frames \(i\) and \(i+1\). Point and line matches were generated by the method presented in this paper (Method 2 in Section IV).
Given the importance of line features, the next step in integrating line features with the visual odometry framework is to perform feature-matching for line features. Existing techniques either utilize line-based descriptors or apply point descriptors to points sampled from detected lines. Both types of techniques rely on visual descriptors, which tend to fail in texture-poor scenes, justifying the need for a line-matching solution that is constrained by the positions of line features and visual cues. Position-constrained line-matching ensures the prevention of line feature mismatches, particularly in scenes where point features are sparse or similar structures appear repeatedly, such as trusses of a bridge and windows in urban high-rises.
Apart from vision-based sensors, inertial sensors such as Inertial Measurement Units (IMUs) can be used to aid the system to perform Visual-Inertial Odometry (VIO), which yields better accuracy. However, we restricted our focus to just vision-based systems to showcase our method's capabilities without the aid of other such sensors. We developed our method with the goal of easily integrating it into a standard self-driving research vehicle for real-time deployment, and hence we assume that our visual-odometry based pose estimate will ultimately be fused with GNSS (Global Navigation Satellite System) and IMU-based pose estimates in an extended Kalman filter-based framework to provide more accurate pose estimates.
Our contributions can be summarized as follows:
* We developed a novel line-matching technique using an Attention Graph Neural Network that is capable of acquiring robust line matches in feature-poor scenarios by sampling and detecting self-supervised learning-based point features along the lines with encoded position constraints.
* We integrated point features and fine-tuned line features in a Stereo Visual Odometry framework to maintain consistent performance in adverse weather and dynamic lighting conditions and compared the performance of our method to that of state-of-the-art point and line feature matching techniques.
We discuss related work in Section II, give a technical description of our approach in Section III, and describe experiments and results in Section IV. Section V concludes the paper and provides an outlook on future work.
## II Related Work
In this section, we give an overview of previous work related to visual odometry, graph matching, point feature matching, and line matching. Given the amount of prior research, particularly in the visual odometry field (e.g., [12, 13, 14]), a compilation of all existing visual odometry algorithms is beyond the scope of this paper. Visual odometry (VO) techniques are usually classified as either direct VO or feature-based VO. Feature-based solutions are primarily used for their reliability, high accuracy, and robustness, and will therefore be the focus of this paper.
### _Point feature detection and matching_
Point feature detection lies at the heart of most vision-based algorithms. The paper [15] presents a comprehensive survey of different classical feature detectors such as SIFT, ORB, and SURF and learning-based detectors such as LF-Net [16] and SuperPoint and compares their performance on three evaluation tasks in terms of robustness, repeatability, and accuracy. Classical feature-matching techniques usually involve finding descriptors, matching them using a nearest neighbor search, and finally removing outliers to obtain robust matches. Over time, researchers moved towards developing more robust and accurate feature detectors and descriptors to improve matching. Then graph neural network (GNN)-based matching systems such as SuperGlue [17] were developed, which outperformed all existing feature matching techniques by using an attentional GNN. The attention mechanism enables the GNN to selectively focus on the most relevant features (i.e., nodes and edges) when comparing two graphs, which improves the accuracy of feature matching and its robustness to noisy or incomplete graphs. For this reason, we used SuperGlue for point feature matching in our Stereo Visual Odometry framework.
### _Line feature detection and matching_
Line feature detection and matching is a well-researched topic. Classical line detection algorithms rely on geometric constraints to extract lines and find correspondences. Similarly, line segment descriptors can be constructed from the appearance of a neighborhood of the detected line, without resorting to any other photometric or geometric constraints; one example is the mean-standard deviation line descriptor (MLSD) [18], which constructs the line descriptors by computing the mean and variance of the gradients of pixels in the neighboring region of a line segment. The work [19] proposes a Line Band Descriptor (LBD) that computes gradient histograms over bands for improved robustness and efficiency. Recent advancements in learning-based line segment descriptors, e.g., LLD [20] and DLD [21], demonstrate excellent performance with the use of a convolutional neural network (CNN) to learn the line descriptors. In [11], the authors propose a novel line segment detector and descriptor, Learnable Line Detector and Descriptor (L2D2), which enables efficient extraction and matching of 2D lines via the angular distance of 128-dimensional unit descriptor vectors. The paper [22] presents a novel Graph Convolutional Network-based line segment matching technique that learns local line segment descriptors through end-to-end training.
In [10], the authors propose SOLD2, a self-supervised learning-based line detector that is similar to SuperPoint and does not require any annotation, enabling the system to generalize to multiple scenarios. For this reason, we chose SOLD2's line detector module as a baseline for our method. SOLD2 also includes a line-matching algorithm to enable occlusion awareness. However, unlike SuperGlue, SOLD2's matching algorithm does not take advantage of the position information of the features, which is critical in scenes that
contain repetitive structures such as windows in urban high-rises.
### _Visual SLAM / Odometry with point and line features_
As described in [23, 24], visual SLAM methods that incorporate both point and line features have been developed to improve localization accuracy and computational efficiency over conventional point-based approaches in challenging scenarios, making the VO pipeline more comprehensive and robust to real-world conditions. One example is the visual-inertial SLAM method in [25], which includes several enhancements in line detection and an optical flow-based line feature tracker. Another is the line classification methodology for a Visual Inertial Odometry system that is presented in [26], which exploits the distinctive characteristics of structural (parallel) and non-structural (non-parallel) line features to develop a two-parameter line feature representation, leading to more efficient SLAM computations. However, despite the benefits afforded by using both point and line features, these visual SLAM techniques often exhibit poor performance in scenarios with repeated, similar-looking point and line features, such as those found in traffic environments (e.g., building facades, pedestrian crosswalks). The design of our StereoVO technique was motivated in part by this limitation.
## III StereoVO with Points and Lines
Our proposed StereoVO framework is developed to perform well in texture-poor scenarios and relies on tracking a set of point and line correspondences. The framework is based on the SuperGlue [17] network, with an additional constraint (constraint (3) in Section III-B) that greatly improves performance. We first provide an overview of the StereoVO framework, followed by the notation and definitions that we use in our Attention Graph Neural Network architecture. This is followed by a description of the Optimal Matching layer for both point and line features, and a brief summary of how to obtain pose estimates from the point and line correspondences.
### _Overview_
An outline of the proposed StereoVO framework is shown in Fig. 2. The stereo images obtained from the camera are initially undistorted. The left camera image is used to obtain point and line matches, and the right camera image is used to obtain 3D points and 3D line estimates from disparity maps generated using the stereo images.
To improve the accuracy of StereoVO, the left camera image is pre-processed using a semantic segmentation algorithm to remove dynamic objects, such as cars and pedestrians, thereby generating a mask that highlights static features in the scene. In StereoVO, focusing on stable features improves the reliability, precision, and robustness of the camera pose estimates. We employ SegFormer [27], a state-of-the-art semantic segmentation algorithm, out of the box to mask the classes of interest.
In the next step, we perform point and line segment feature detection on the masked input images. Since our goal is to implement the framework on a full-size autonomous vehicle and ensure that it is capable of performing well under adverse weather and dynamic lighting conditions, we tested a variety of point feature detectors, including SIFT, ORB, and LIFT, in example scenarios with such conditions and chose SuperPoint since it outperformed the others. SuperPoint is a CNN-based self-supervised framework that is trained on the MS-COCO dataset [28] and performs real-time point feature detection out of the box without any fine-tuning.
We selected SOLD2 for line feature matching, since it has a similar CNN architecture to SuperPoint. To improve the performance of SOLD2 in low-light conditions and other adverse weather conditions, we fine-tuned the network on synthetic data generated using the CARLA driving simulator [29]. Since SOLD2 performs well in ideal daytime conditions, we used the line features detected by SOLD2 as ground truth and changed weather and lighting conditions for the same scenes in CARLA to generate multi-weather and lighting-augmented data. The SOLD2 algorithm also performs line matching by sampling lines and performing feature matching between the samples to aid in occlusion awareness. However, this results in incorrect matches in feature-poor scenarios. To overcome this, we introduced position constraints on the line features by sampling points along the lines, using SuperPoint to detect point features from these sets of sampled points, and encoding the point features with keypoint position awareness. This is explained further in the next sections. Both the point and line feature detection networks are designed to function effectively across various datasets without being overly reliant on specific training data. Their generalizability enables them to detect point and line features in diverse contexts and domains, making them versatile tools for a wide range of applications. Figure 3 provides additional details of the framework.

Fig. 2: Outline of the proposed Stereo Visual Odometry framework.
### _Notation and definitions_
Consider a pair of images labeled \(A\) and \(B\). Each image \(\alpha\in\{A,B\}\) has \(L_{\alpha}\) line features, indexed by set \(\mathcal{L}_{\alpha}\subset\mathbb{Z}_{+}\), and \(P_{\alpha}\) point features, indexed by set \(\mathcal{P}_{\alpha}\subset\mathbb{Z}_{+}\) and referred to as _P-point features_. The pixels comprising the line features are extracted using the SOLD2 line feature extractor, and SuperPoint is used to detect \(Q_{\alpha}\) point features from these pixels, indexed by set \(\mathcal{Q}_{\alpha}\subset\mathbb{Z}_{+}\) and referred to as _L-point features_. Each point feature is associated with a position \(\mathbf{p}\) and a visual descriptor vector \(\mathbf{d}\). The feature position is defined as \(\mathbf{p}=[u\ v\ c]^{T}\), where \(u\) and \(v\) are the pixel coordinates of the point and \(c\) is the descriptor detection confidence. We will use the notation \((\mathbf{p}_{p})_{i}^{\alpha}\) to indicate the position of P-point feature \(i\in\mathcal{P}_{\alpha}\) in image \(\alpha\) and \((\mathbf{p}_{l})_{i}^{\alpha}\) to indicate the position of L-point feature \(i\in\mathcal{Q}_{\alpha}\) in image \(\alpha\). We define the visual descriptors \((\mathbf{d}_{p})_{i}^{\alpha}\), \((\mathbf{d}_{l})_{i}^{\alpha}\) similarly. The subscript \(x\) will refer to either \(p\) or \(l\).
The representation \((\mathbf{y}_{x})_{i}^{\alpha}\), \(x\in\{p,l\}\), for each keypoint \(i\) in image \(\alpha\) is a high-dimensional vector that encodes the keypoint's position and visual descriptor. The keypoint position is embedded into \((\mathbf{y}_{x})_{i}^{\alpha}\) as follows using a multi-layer perceptron (MLP), similar to the SuperGlue architecture:
\[(\mathbf{y}_{x})_{i}^{\alpha}=(\mathbf{d}_{x})_{i}^{\alpha}+\mathrm{MLP}_{\mathrm{encoder}}\!\left((\mathbf{p}_{x})_{i}^{\alpha}\right) \tag{1}\]
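A minimal sketch of this keypoint encoder follows; the hidden-layer widths are illustrative choices, not the exact configuration of our network:

```python
import torch
import torch.nn as nn

class KeypointEncoder(nn.Module):
    # Implements Eq. (1): lift p = [u, v, c]^T with an MLP and add the
    # result to the visual descriptor d.
    def __init__(self, desc_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 32), nn.ReLU(),
            nn.Linear(32, 64), nn.ReLU(),
            nn.Linear(64, desc_dim),
        )

    def forward(self, p, d):
        # p: (K, 3) positions/confidences, d: (K, desc_dim) descriptors
        return d + self.mlp(p)
```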
Our framework enforces the following constraints. (1) Any P-point feature from one image has exactly one match to a P-point feature in the other image; similarly for L-point features. (2) All P-point features and L-point features that are occluded or undetected will be unmatched. (3) A line feature \(l_{a}\in\mathcal{L}_{A}\) in image \(A\) is matched to a line feature \(l_{b}\in\mathcal{L}_{B}\) in image \(B\) if most of the L-point features on \(l_{a}\) are matched to L-point features on \(l_{b}\).
### _Attention Graph Neural Network_
An Attention Graph Neural Network (GNN) forms the first layer of the architecture. The network encodes both the positions and visual descriptors of the keypoints, which ultimately improves the performance of the network over a conventional graph neural network. The position constraints increase line-matching robustness and ensure that incorrect line matches do not occur in cases where images contain repetitive structures, such as windows in high-rise buildings. We developed separate GNNs for point and line feature matching, one with nodes defined as the P-point features and the other with nodes defined as the L-point features. Each GNN has a different set of losses and weights, since the networks compute different estimates of the geometric and photometric cues.
As in the SuperGlue architecture, aggregation is achieved through both self- and cross-attention mechanisms. Given a feature that corresponds to a particular node in one image, self-attention aggregates features that correspond to adjacent nodes in the same image, and cross-attention aggregates similar features that correspond to nodes in the other image. The framework attends to individual point features' positions and their positions relative to adjacent point features, as in SuperGlue. Let \((\mathbf{h}_{x})_{i}^{\alpha}\), \(x\in\{p,l\}\), denote the matching descriptor for keypoint \(i\) in image \(\alpha\). The matching descriptors are defined as:
\[(\mathbf{h}_{x})_{i}^{\alpha}=\mathbf{W}_{x}(\mathbf{y}_{x})_{i}^{\alpha}+ \mathbf{b}_{x},\ \ x\in\{p,l\},\ \alpha\in\{A,B\}, \tag{2}\]
where \(\mathbf{W}_{x}\) is a weight matrix and \(\mathbf{b}_{x}\) is a bias vector.
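A simplified, single-head sketch of the aggregation step is shown below (the full architecture follows SuperGlue's multi-head attention; this stand-in is only illustrative):

```python
import torch
import torch.nn.functional as F

def attention_aggregate(query, key, value):
    # Self-attention: query/key/value all come from one image.
    # Cross-attention: key/value come from the other image.
    dim = query.shape[-1]
    scores = query @ key.transpose(-2, -1) / dim ** 0.5
    return F.softmax(scores, dim=-1) @ value
```

Alternating self- and cross-attention layers refine the node representations, which are finally projected through Eq. (2) to obtain the matching descriptors.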
### _Optimal Matching layer_
The Optimal Matching layer forms the second block of the framework, similar to SuperGlue. The input to this layer is the structural affinity between the two GNNs that have been encoded, defined in terms of affinity matrices \(\mathbf{S}_{p}\in\mathbb{R}^{P_{A}\times P_{B}}\) and \(\mathbf{S}_{l}\in\mathbb{R}^{Q_{A}\times Q_{B}}\). The \((i,j)\)-th entry of each matrix represents the affinity score between point feature \(i\) in image \(A\) and point feature \(j\) in image \(B\) and is defined as follows:
\[(\mathbf{S}_{x})_{i,j}=\text{exp}\left(\frac{(\mathbf{h}_{x}^{T})_{i}^{A} \mathbf{E}_{x}(\mathbf{h}_{x})_{j}^{B}}{\delta_{x}}\right),\ \ x\in\{p,l\}, \tag{3}\]
where \(\mathbf{E}_{x}\) is a learnable weight matrix and \(\delta_{x}\) is a tunable hyperparameter. The network is subject to the constraints described in Section III-B. As in SuperGlue, the unmatched and occluded P-point and L-point features are assigned to a dustbin, which augments each affinity matrix with an additional row and column that are both filled with a single learnable parameter.
We formulate the constrained optimization problem (4)-(5) below to solve for the assignment matrices \(\mathbf{P}_{p}\in\mathbb{R}^{P_{A}\times P_{B}}\) and \(\mathbf{P}_{l}\in\mathbb{R}^{Q_{A}\times Q_{B}}\):
\[\text{max}\sum_{i=1}^{M+1}\sum_{j=1}^{N+1}(\mathbf{S}_{x})_{i,j}(\mathbf{P}_{x})_{i,j},\ \ x\in\{p,l\} \tag{4}\]
\[\mathbf{P}_{x}\mathbf{1}_{N+1}=\mathbf{a}\ \ \text{and}\ \ \mathbf{P}_{x}^{T} \mathbf{1}_{M+1}=\mathbf{b}, \tag{5}\]
where \(M=P_{A}\), \(N=P_{B}\) for \(x=p\); \(M=Q_{A}\), \(N=Q_{B}\) for \(x=l\); and \(\mathbf{a}\) and \(\mathbf{b}\) are the prescribed row and column marginals of the transport problem. As in SuperGlue, this constitutes a differentiable optimal transport problem and can be solved using the Sinkhorn algorithm [30], which is GPU-optimized. The algorithm is iterated until convergence.
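A minimal log-domain sketch of these iterations is shown below; it assumes the dustbin augmentation of \(\mathbf{S}_{x}\) has already been applied, and variable names are illustrative:

```python
import torch

def sinkhorn(S, a, b, n_iters=100):
    # S: dustbin-augmented (M+1) x (N+1) affinity matrix, Eq. (3).
    # a, b: prescribed row/column marginals of Eq. (5).
    log_S, log_a, log_b = S.log(), a.log(), b.log()
    u = torch.zeros_like(log_a)
    v = torch.zeros_like(log_b)
    for _ in range(n_iters):  # alternating marginal projections
        u = log_a - torch.logsumexp(log_S + v[None, :], dim=1)
        v = log_b - torch.logsumexp(log_S + u[:, None], dim=0)
    return (log_S + u[:, None] + v[None, :]).exp()  # assignment P
```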
Since all layers of the network are differentiable, we use the negative log-likelihood loss as the matching prediction loss. We backpropagate from ground truth matches to visual descriptors. Let \(GT_{p}\) be the set of ground truth matches of P-point features, \(\{(i,j)\}\subset\mathcal{P}_{A}\times\mathcal{P}_{B}\), and \(GT_{l}\) be the set of ground truth matches of L-point features, \(\{(i,j)\}\subset\mathcal{Q}_{A}\times\mathcal{Q}_{B}\). The sets \(\mathcal{A}_{p}\subseteq\mathcal{P}_{A}\) and \(\mathcal{B}_{p}\subseteq\mathcal{P}_{B}\) will denote the unmatched P-point features in both images, and similarly, \(\mathcal{A}_{l}\subseteq\mathcal{Q}_{A}\) and \(\mathcal{B}_{l}\subseteq\mathcal{Q}_{B}\) will denote the unmatched L-point
features. We define two losses, one for P-point features and one for L-point features:
\[Loss_{x}=-\sum_{(i,j)\in GT_{x}}\log(\mathbf{P}_{x})_{i,j}-\sum_{i\in\mathcal{A}_{x}}\log(\mathbf{P}_{x})_{i,N+1}-\sum_{j\in\mathcal{B}_{x}}\log(\mathbf{P}_{x})_{M+1,j},\quad x\in\{p,l\},\]
where \(M=P_{A}\), \(N=P_{B}\) for \(x=p\) and \(M=Q_{A}\), \(N=Q_{B}\) for \(x=l\).
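In code, with zero-based indices (so the dustbin column is \(N\) and the dustbin row is \(M\)), the loss amounts to gathering log-probabilities at the ground-truth and dustbin entries; the sketch below is illustrative:

```python
import torch

def matching_loss(P, gt_matches, unmatched_a, unmatched_b):
    # P: (M+1) x (N+1) assignment matrix from the Sinkhorn layer.
    M, N = P.shape[0] - 1, P.shape[1] - 1
    loss = -sum(torch.log(P[i, j]) for i, j in gt_matches)
    loss = loss - sum(torch.log(P[i, N]) for i in unmatched_a)  # A dustbins
    loss = loss - sum(torch.log(P[M, j]) for j in unmatched_b)  # B dustbins
    return loss
```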
### _Camera pose estimation_
As a final step, we perform camera pose estimation using the point and line matches, i.e., the 2D-3D point and line correspondences between successive frames. To obtain 3D points and lines, we perform stereo matching using the disparity map generated from the stereo camera images. Since this is a well-established topic and open-source modules are readily available, a detailed description of camera pose estimation is outside the scope of this paper.
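As a brief illustration, a point-only sketch is given below, assuming rectified stereo images with intrinsic matrix \(K\) and baseline \(b\); the function and variable names are illustrative, and the analogous line-based terms of the joint optimization are omitted:

```python
import cv2
import numpy as np

def pose_from_point_matches(pts_i, pts_ip1, disparity, K, baseline):
    # Back-project matched keypoints of frame i to 3D using the stereo
    # disparity, then solve 2D-3D PnP against their matches in frame i+1.
    pts3d = []
    for (u, v) in pts_i.astype(int):
        d = disparity[v, u]                 # valid (d > 0) assumed
        Z = K[0, 0] * baseline / d          # depth from disparity
        X = (u - K[0, 2]) * Z / K[0, 0]
        Y = (v - K[1, 2]) * Z / K[1, 1]
        pts3d.append([X, Y, Z])
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.float32(pts3d), np.float32(pts_ip1), K, None)
    return ok, rvec, tvec
```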
## IV Experiments
In this section, we present the results of experiments that compare the performance of our StereoVO framework to that of state-of-the-art algorithms for point and line feature matching. We compared **Method 1**, which combines the SuperGlue point-matching algorithm and SOLD2 line-matching algorithm, to **Method 2**, which combines the SuperGlue point-matching algorithm and our novel line-matching algorithm. We tested both methods on the following datasets:
* Ford AV dataset [1], collected by autonomous vehicles, which consists of stereo camera images with accurate ground-truth trajectories obtained from LiDAR-based ICP. Our test data was drawn from Log 3 (Vegetation with clear sunny sky) and Log 4 (Residential area with clear sky).
* Nighttime stereo camera images from the Oxford car dataset [2], collected by an autonomous vehicle. Our test data consisted of images of residential areas with Visual Odometry (VO)-based ground truth.
* Synthetic stereo camera images with ground-truth trajectories from the urban environment in Town 10 of CARLA [29]. Our test data consisted of images of the same scenes under a variety of weather and lighting conditions, such as fog, nighttime, and glare.
Our StereoVO framework was run in real-time on an NVIDIA RTX 2080Ti GPU at around 7 FPS (142 ms). To ensure real-time operation, the framework requires a minimum of 6 GB of GPU memory (VRAM). Point and line matching results from Method 2 are shown in Fig. 1 and Fig. 4 for scenes from real and synthetic datasets, respectively, under various weather and lighting conditions.
### _Comparison of estimated trajectories and pose error_
We applied each method to estimate the vehicle camera poses from the Ford AV Log 4 dataset and generated vehicle trajectories from these pose estimates. The trajectories are plotted in Fig. 5, along with the ground truth (GT) trajectory. The figure shows that both methods yield trajectories that are close to the GT trajectory at all times. Note that at coordinates (2965, -660) and (3175, -825), indicated by arrows in the figure, the vehicle was at a complete stop, causing drift in the visual odometry. This drift can be reduced by fusing measurements from other sensors, such as GNSS and IMU, with the StereoVO estimates. To quantify the deviation of the trajectory generated using each method from the GT trajectory, we computed the Absolute Pose Error (APE) over time between each trajectory and the GT trajectory. Figure 6 compares the time series of the APE for both trajectories and shows that our method (Method 2) outperforms Method 1, in that it generally produced lower APE values over the sampled 160-s period.
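For reference, the translational APE we plot follows the standard per-frame convention sketched below (assuming both trajectories are expressed in a common, pre-aligned world frame):

```python
import numpy as np

def translational_ape(T_gt, T_est):
    # T_gt, T_est: (n, 4, 4) homogeneous camera poses per frame.
    errors = []
    for Tg, Te in zip(T_gt, T_est):
        E = np.linalg.inv(Tg) @ Te               # relative error pose
        errors.append(np.linalg.norm(E[:3, 3]))  # translational part
    return np.array(errors)
```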
Fig. 3: The proposed point and line feature matching architecture, consisting of point and line feature layers, an Attention Graph Neural Network layer for both points and lines, and an Optimal Matching layer that generates an assignment matrix using the Sinkhorn algorithm.
We also evaluated the performance of Method 2 on synthetic data from CARLA for the following scenarios: (1) daytime with clear sky; (2) daytime with fog; and (3) nighttime with no street lights. Figure 7a plots the trajectory generated over 200 frames for each scenario, along with the ground truth (GT) trajectory, and Figure 7b shows the box plot of the absolute pose error (APE) for each scenario. Both figures indicate that the discrepancy between the estimated and GT trajectories is higher for the fog and nighttime scenarios than for the daytime scenario, as expected, but that all three estimated trajectories are relatively close to the GT trajectory.
Table I reports the root-mean-square error (RMSE) between the estimated vehicle position and its ground truth position, obtained from simulated noise-free GNSS data, over 2000 frames in each of the three simulated scenarios in CARLA. The estimated positions were computed using Method 2 with only point features (detected by SuperPoint), only line features (obtained by our line-matching algorithm), or both point and line features. For each scenario, the use of both point and line features yields a lower RMSE value than either point features or line features alone. Hence, the inclusion of line features in the StereoVO framework leads to improved performance, particularly in low-visibility and low-light conditions.

Fig. 4: Every pair of rows, starting from the top, shows grayscale images of a synthetic scenario from CARLA: (a) clear sky; (b) nighttime; (c) fog. The first row of each pair shows point matches between frames \(i\) and \(i+1\), and the second row shows line matches between frames \(i\) and \(i+1\). Point and line matches were generated by Method 2.

Fig. 5: Comparison of Methods 1 and 2 on the Ford AV Log 4 dataset: ground truth (GT) trajectory and trajectories that were generated using camera pose estimates from each method.

Fig. 6: Comparison of Methods 1 and 2 on the Ford AV Log 4 dataset: Absolute Pose Error (APE) over a sample period of 160 s. The \(x\)-axis displays the time recorded by the vehicle.

Fig. 7: Evaluation of Method 2 on the CARLA synthetic dataset: (a) Ground truth (GT) trajectory and trajectories that were generated using camera pose estimates from images of the same scenes in CARLA under daytime, fog, and nighttime conditions. (b) Box plot of Absolute Pose Error (APE) over a sample of 200 frames for the same scenarios.
### _Comparison of number of feature detections and matches_
We also compared the number of point or line features that different algorithms detected and matched, along with the percentage of detected features that were matched, in 200 frames of the Ford AV, Oxford car, and CARLA datasets. Table II lists these quantities for point features that were detected by SuperPoint and matched by SuperGlue and line features that were detected by the SOLD2 line detector and matched by either the SOLD2 line-matching algorithm or ours. The table shows that our line-matching algorithm recovers more line matches than the SOLD2 line-matching algorithm in each tested dataset.
Figure 8 plots the number of point features detected by SuperPoint and the number of line matches obtained by our algorithm in each frame of the daytime, fog, and nighttime scenarios simulated in CARLA. The figure shows that the number of line matches in each frame is not significantly affected by the visibility conditions (clear or foggy) or light level (daytime or nighttime) in the scene. However, the numbers of point features detected in the fog and nighttime scenarios are consistently lower than the number detected in the daytime scenario, and are substantially lower in some frames. This is also reflected in the first row of Table II, which shows fewer total point detections in the CARLA fog and nighttime scenarios than in the daytime scenario.
These results indicate that our line-matching algorithm exhibits robust performance as a scene becomes more texture-poor due to adverse weather conditions and/or low illumination. In turn, this robustness in feature matching maintains the accuracy of the camera pose estimates under such conditions.
## V Conclusion
We have presented a real-time stereo visual odometry framework aided by Attention Graph Neural Networks. This framework incorporates self-supervised learning-based point and line features and uses a novel line-matching technique that samples line features into point features with encoded position constraints. Using real datasets from autonomous vehicles and synthetic datasets from the CARLA driving simulator, we demonstrated that our framework produces robust line matching in feature-poor scenes and scenes containing repetitive structures, e.g., Manhattan-world scenarios. In these tests, our framework outperformed state-of-the-art point and line feature matching algorithms in terms of the error between estimated and ground-truth vehicle poses, the percentage of detected line features that were matched, and the variability in the number of identified features with respect to visibility and lighting conditions. One direction for future work is to incorporate planar features into the framework to improve its robustness. Moreover, developing a single end-to-end framework that performs temporal and stereo matching for both point and line features would be a promising step toward increasing the method's accuracy, robustness, and computational efficiency.
Fig. 8: Number of point features detected by SuperPoint (top three plots) and number of line features obtained by our line-matching algorithm (bottom three plots) in each frame of three runs in CARLA under daytime, fog, and nighttime conditions. |
2301.02127 | Generalized Dicke model and gauge-invariant master equations for two
atoms in ultrastrongly-coupled cavity quantum electrodynamics | We study a generalization of the well-known Dicke model, using two dissimilar
atoms in the regime of ultrastrongly coupled cavity quantum electrodynamics.
Our theory uses gauge invariant master equations, which yields consistent
results in either of the standard multipolar and Coulomb gauges, including
system-bath interactions for open cavity systems. We first show how a second
atom can be treated as a sensor atom to measure the output spectrum from a
single atom in the ultrastrong-coupling regime, and compare results with the
quantum regression theorem, explaining when they can be different. We then
focus on the case where the second atom is also ultrastrongly coupled to the
cavity, but with different parameters from those of the first atom, which
introduces complex coupling effects and additional resonances and spectral
features. In particular, we show multiple resonances in the cavity spectra that
are visible off-resonance, which cannot be seen when the second atom is
on-resonance with the rest of the system. We also observe clear anti-crossing
features particularly pronounced for when the second atom tunes through
resonance. | Kamran Akbari, Will Salmon, Franco Nori, Stephen Hughes | 2023-01-05T16:09:46Z | http://arxiv.org/abs/2301.02127v2 | Generalized Dicke model and gauge-invariant master equations for two atoms in ultrastrongly-coupled cavity quantum electrodynamics
###### Abstract
We study a generalization of the well-known Dicke model, using two dissimilar atoms in the regime of ultrastrongly coupled cavity quantum electrodynamics. Our theory uses gauge invariant master equations, which yields consistent results in either of the standard multipolar and Coulomb gauges, including system-bath interactions for open cavity systems. We first show how a second atom can be treated as a sensor atom to measure the output spectrum from a single atom in the ultrastrong-coupling regime, and compare results with the quantum regression theorem, explaining when they can be different. We then focus on the case where the second atom is also ultrastrongly coupled to the cavity, but with different parameters from those of the first atom, which introduces complex coupling effects and additional resonances and spectral features. In particular, we show multiple resonances in the cavity spectra that are visible off-resonance, which cannot be seen when the second atom is on-resonance with the rest of the system. We also observe clear anti-crossing features particularly pronounced for when the second atom tunes through resonance.
## I Introduction
Recent progress in the strong and ultrastrong (USC) regimes of light-matter interaction has opened up significant advances in theoretical and experimental research in quantum optical systems [1; 2; 3; 4; 5; 6; 7; 8; 9]. These strong coupling regimes allow one to coherently exchange excitations between matter and light, enabling breakthroughs in fundamental quantum experiments and technologies [1; 4; 5; 10; 11].
In particular, USC exploits the nature of counterrotating wave physics and pondermotive forces [4; 5], and pushes one toward a non-perturbative regime where the light and matter excitations must be treated on an equal footing, i.e., as joint/dressed states [12], where even the ground state can contain _virtual photons_. These features make the USC regime responsible for many intriguing phenomena including the formation of high-quality quasiparticle (e.g. exciton-polariton or plasmon-photon) collective modes, hybrid and entangled states with higher degrees of controllability [13; 14; 15; 16; 17; 8; 18; 19].
The intricate interactions between quantized cavity modes and quantum emitters can be modeled in the framework of cavity quantum electrodynamics (cavity-QED), where atoms and atom-like structures (e.g., quantum dots, molecules, superconducting circuits) interact with a (dominant) single quantized cavity mode [20; 21; 22]. Traditionally, strong coupling occurs when the cavity-emitter rate, \(g\), exceeds any dissipation rate \(\kappa\) (of the cavity) or \(\gamma\) (decay of the emitter) [23; 24; 25; 26; 27; 28; 29], while the USC regime is characterized not only by the lower rates of decoherence, but also when the atom-cavity coupling strength, \(g\), becomes a significant fraction of the bare energies, \(\omega_{0}\), of the system, commonly quantified as \(g>0.1\omega_{0}\)[4; 5]. Additionally, the hybridization of quantum states with different numbers of excitations in the USC regime results in a population of virtual photons in a dissipative system's ground state, also with significant loss (i.e., not even in the strong coupling regime) [30].
The profound applicability of cavity-QED and its ease of modeling are derived from truncating the full emitter problem to a two-level system (TLS), which is typically coupled to a single quantized cavity mode. However, the truncation of the Hilbert space, in either the material and/or photonic part, causes problems for gauge invariance when working in the USC regime [12; 31; 32; 33]. Recently, many of these issues have been partly fixed for the standard quantum Rabi model (QRM) Hamiltonian [34; 31; 35], and extended recently to ensure that dissipation and input/output is also included in a gauge invariant way [34]. More general quantization for arbitrary media, and the USC regime, has been reported as well [33].
### Gauge invariance
In the dipole gauge (specifically, the dipole approximation in the multipolar gauge), the QRM describes the TLS-cavity system via the Hamiltonian [12; 31] (in units of \(\hbar=1\)):
\[\hat{\mathcal{H}}^{\rm D}_{\rm QR}=\omega_{c}a^{\dagger}a+\frac{\omega_{a}}{ 2}\sigma_{z}+{\rm i}g^{\rm D}(a^{\dagger}-a)\sigma_{x}, \tag{1}\]
up to a constant \(\mathbb{1}\,\omega_{c}\eta^{2}\), where \(\omega_{c}\) is the cavity transition frequency, \(a\) (\(a^{\dagger}\)) is the cavity photon annihilation (creation) operator, \(\omega_{a}\) is the TLS transition frequency, \(\sigma_{z}=\sigma^{+}\sigma^{-}-\sigma^{-}\sigma^{+}\) and \(\sigma_{x}=\sigma^{+}+\sigma^{-}\), with \(\sigma^{+}=|e\rangle\langle g|\) (\(\sigma^{-}=|g\rangle\langle e|\)) the atomic raising (lowering) operator; also, \(g^{\mathrm{D}}\) is the atom-cavity coupling in the dipole gauge (\(g^{\mathrm{D}}\propto\sqrt{\omega_{c}}\)), and \(\eta=g^{\mathrm{D}}/\omega_{c}\) is the normalized coupling parameter. We can neglect terms proportional to the identity as these do not affect the system dynamics; they simply introduce an offset in the ground state energy, which we can normalize to any value. Equation (1) reduces to the Jaynes-Cummings model (JCM) in the rotating wave approximation (RWA) as [36; 37]
\[\hat{\mathcal{H}}^{\mathrm{D}}_{\mathrm{JC}}=\omega_{c}a^{\dagger}a+\frac{\omega_{a}}{2}\sigma_{z}+\mathrm{i}g^{\mathrm{D}}(a^{\dagger}\sigma^{-}-a\sigma^{+}). \tag{2}\]
When the system is subjected to matter truncation, \(\hat{\mathcal{H}}^{\mathrm{D}}_{\mathrm{QR}}\) produces the correct eigenenergies, but the fundamental electric field operator becomes [38; 34] \(\hat{E}\propto-\mathrm{i}(a^{\prime\dagger}-a^{\prime})\), where \(a^{\prime}=a+\mathrm{i}\eta\sigma_{x}\), which can be derived from several different viewpoints [38; 31; 34]. For example, in the restricted TLS subspace, one can transform the Coulomb gauge operators to the dipole gauge operators through the _projected_ unitary transform [31] \(\mathcal{U}=\exp[-\mathrm{i}\eta(a+a^{\dagger})\sigma_{x}]\), so that \(a\to\mathcal{U}a\,\mathcal{U}^{\dagger}=a+\mathrm{i}\eta\sigma_{x}=a^{\prime}\) [38]. These transformed operators then must be used when computing cavity field observables and for deriving master equations.
In the Coulomb gauge, the standard system Hamiltonian for the QRM is [39; 12; 31]
\[\hat{\mathcal{H}}^{\mathrm{C,naive}}_{\mathrm{QR}}=\omega_{c}a^{\dagger}a+( \omega_{a}/2)\sigma_{z}+g^{\mathrm{C}}(a+a^{\dagger})\sigma_{y}+D(a+a^{ \dagger})^{2}, \tag{3}\]
where \(\sigma_{y}=\mathrm{i}(\sigma^{-}-\sigma^{+})\), \(g^{\mathrm{C}}=g^{\mathrm{D}}\omega_{a}/\omega_{c}\), and \(D=(g^{\mathrm{C}})^{2}/\omega_{a}\) is the ponderomotive coupling strength [40]. Unfortunately, this "naive" system Hamiltonian is wrong (which is why we use this name in the superscript) as it does not produce the correct eigenenergies in the USC regime [12], and breaks gauge invariance. The breakdown of gauge invariance here can be seen as a formation of a potential nonlocality due to the truncation of the matter Hilbert space [31; 41]. Instead, by applying a proper unitary gauge transformation (i.e., the spin rotation along the \(x\)-axis) to the dipole gauge-independent QRM model Hamiltonian, the correct _gauge-fixed_ Coulomb QRM Hamiltonian reads [31]
\[\hat{\mathcal{H}}^{\mathrm{C}}_{\mathrm{QR}}=\omega_{c}a^{\dagger}a+\frac{\omega_{a}}{2}\left(\sigma_{z}\cos[2(a+a^{\dagger})\eta]+\sigma_{y}\sin[2(a+a^{\dagger})\eta]\right), \tag{4}\]
which produces identical eigenenergies to \(\hat{\mathcal{H}}^{\mathrm{D}}_{\mathrm{QR}}\).
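This equivalence can be checked numerically. The following minimal sketch, using the QuTiP library with an illustrative coupling \(\eta=0.5\) (all parameter values are assumptions for demonstration), diagonalizes Eqs. (1) and (4) in a truncated Fock space and compares the spectra relative to the ground state, since Eq. (1) is written only up to the constant \(\mathbb{1}\,\omega_{c}\eta^{2}\):

```python
import numpy as np
from qutip import destroy, qeye, sigmax, sigmay, sigmaz, tensor

N = 30                     # photon Fock-space truncation
wc, wa = 1.0, 1.0          # resonant cavity and TLS frequencies
eta = 0.5                  # normalized coupling g^D / w_c (deep USC)
gD = eta * wc

a = tensor(destroy(N), qeye(2))
sx = tensor(qeye(N), sigmax())
sy = tensor(qeye(N), sigmay())
sz = tensor(qeye(N), sigmaz())

# Dipole-gauge quantum Rabi Hamiltonian, Eq. (1)
H_D = wc * a.dag() * a + 0.5 * wa * sz + 1j * gD * (a.dag() - a) * sx

# Gauge-fixed Coulomb-gauge Hamiltonian, Eq. (4)
theta = 2.0 * eta * (a + a.dag())
H_C = wc * a.dag() * a + 0.5 * wa * (sz * theta.cosm() + sy * theta.sinm())

# Spectra relative to the ground state agree (up to truncation error);
# absolute energies may differ by the constant wc * eta**2.
eD = H_D.eigenenergies()[:8]
eC = H_C.eigenenergies()[:8]
print(np.max(np.abs((eD - eD[0]) - (eC - eC[0]))))
```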
### Gauge-invariant generalized master equation
For realistic cavities, one must also account for dissipation/losses and photon input-output channels. Generally, open-system cavity-QED problems are treated by coupling the atom and the cavity to general baths, forming an open quantum system. In such situations, a master equation description is widely used, leading to the understanding of the cavity spectra and other desired observables [42; 43; 44; 45].
The bare-state master equation formalism, where the joint basis states are constructed from the bare light states and the bare matter states before light-matter interaction, is often used in open-system cavity-QED, yielding the standard Lindblad master equation. However, the bare-state master equation formalism uses the wrong states in the USC regime (including the ground state, which is now an entangled state of photons and matter), and it has been shown that one needs a dressed-state approach to avoid unphysical transitions [46]. More generally, one also needs a "generalized" master equation (GME) approach to account for frequency-dependent baths and non-secular effects [44]. Beyond these details, in the USC regime, such approaches are typically gauge relative, and again one must use a corrected \(a^{\prime}\) for cavity mode operators within the dipole gauge or use the corrected Coulomb gauge Hamiltonian. Although such studies have so far assumed very simple models for the system-bath interactions, these approaches do produce gauge-independent results [34; 35].
An advantage of using a GME approach is that realistic observables can be computed, such as the cavity-emitted spectra, typically using the quantum regression theorem [34; 47]. However, gauge-independent GMEs have so far only been applied to the case of one atom/TLS, and we can also expect a significant impact when applied to multiple atoms. In this regard, the Dicke model is a fundamental model of quantum optics describing the light-matter interaction where a cavity mode is coupled with a set of _identical_ TLSs [48; 49]. The model is known to be an established description for a class of intriguing phenomena in cavity-QED such as superradiant phase transitions and quantum chaos [49; 50; 51].
### Dicke model in the USC regime
The Dicke model has also been investigated in the USC regime [62; 63; 64; 65; 66; 67; 68; 69]. In the study of effective light-matter interactions in a circuit QED system coupled symmetrically to multiple superconducting qubits, Ref. [63] studied a microscopic model Hamiltonian that describes not only the usual collective qubit-photon coupling but also the effect of direct qubit-qubit interactions. Various other works in the USC regime have been presented on related coupling effects, mainly in the thermodynamic limit, using simple system Hamiltonians [70; 71]. In these extended Dicke models, similar to previous studies of the Dicke model or even the Hopfield model in the USC regime [64; 65; 66; 67], the atoms are degenerate (i.e., they share the same coupling coefficient and resonant frequency). Recent studies also include gauge-invariant system Hamiltonian models [67], or discuss more exotic schemes of the Dicke model, such as the _anisotropic_ or _nonequilibrium_ models in which the counter- and co-rotating terms have different coupling strengths, but the two atoms are still identical [68; 69].
### Generalized Dicke model
It is desirable to explore a more general two-atom case where the TLS parameters can be different, and a natural extension to investigate is a system of two dissimilar atoms, namely, the _generalized Dicke model_ (GDM), in the limit of two atoms. From a practical viewpoint, one must also include realistic dissipation and input-output channels to the system. In this paper, we present such a study, using _gauge-independent_ master equations valid for exploring USC dynamics. Ultimately, this GDM is a more realistic scenario for studying how atoms interact in the USC regime, as it is practically impossible to produce two identical effective TLSs in experimental systems [72; 73; 74; 75]. Coupling with two different atoms also leads to new coupling regimes that are not accessible with identical atoms.
We study the two-atom GDM by introducing a disparate second atom to a general one-atom-cavity USC problem, using a gauge-invariant GME description. We exploit this model in two different ways: \((i)\) we first introduce a second TLS as a weakly coupled _sensor atom_ for the cavity-emitted spectrum [sensor atom approach, Fig. 1(a)], and show that it produces qualitatively similar spectra to that computed with the quantum regression theorem, though only with certain types of bath coupling; we also confirm that these sensing atom results are identical in both the dipole gauge and Coulomb gauge, as they must be; \((ii)\) we then focus on the main topic where the second atom is now also treated as an ultrastrongly coupled atom, distinct from the first atom [GDM, Fig. 1(b)], and demonstrate several new spectral features that emerge as we change the coupling parameters of the second TLS.
The rest of our paper is organized as follows: In Sec. II, we present the main theory, which includes a description of the GME, our excitation scheme, as well as the various system Hamiltonians, bath interactions, and observables, including the cavity-emitted spectra.
In Sec. III, we present the main calculations and results for the sensing atom approach, and show how the sensing atom coupling can be used to model the detection of light. We also show how these results compare to calculations with the quantum regression theorem and explore the more general case of different bath couplings (for the atoms as well as the cavity). Next, in Sec. IV, we consider the case of two atoms in the USC regime, where we change the parameters of the second atom, and study the effect that this has on both the system eigenenergies as well as the cavity observables. We first show explicitly how our GME produces gauge-independent results when using the correct gauge-fixed approaches as described in the main text. Subsequently, we then present a series of investigations using the dipole gauge. Finally, we conclude in Sec. V.
## II Theory
In this section, we present the GME, as well as the different bath models and system Hamiltonians that we will use. We also show how these can be used to compute the cavity spectra, using either the quantum regression theorem, or a sensing atom approach.
### Generalized master equation
We first introduce the main GME that we use to compute the key observables of interest:
\[\frac{\partial}{\partial t}\rho=-\mathrm{i}\left[\hat{\mathcal{H}},\rho\right] +\sum_{\Lambda}\mathcal{L}^{\Lambda}\rho+\mathcal{L}^{\mathrm{pump}}\rho, \tag{5}\]
where \(\rho\) is the composite system (composed of the cavity and the atom, or atoms) density matrix, and \(\hat{\mathcal{H}}\equiv\hat{\mathcal{H}}^{\mathrm{D/C}}\) is the system Hamiltonian in either gauge (dipole, 'D', or Coulomb, 'C').
The Lindbladian for each dissipation channel is of the same form, so we write it generally as [44]
\[\mathcal{L}^{\Lambda}\rho=\frac{1}{2}\sum_{\omega,\omega^{\prime}>0}\Big\{\Gamma^{\Lambda}(\omega)\left[X_{\Lambda}^{+}(\omega)\,\rho\,X_{\Lambda}^{-}(\omega^{\prime})-X_{\Lambda}^{-}(\omega^{\prime})X_{\Lambda}^{+}(\omega)\,\rho\right]+\Gamma^{\Lambda}(\omega^{\prime})\left[X_{\Lambda}^{+}(\omega)\,\rho\,X_{\Lambda}^{-}(\omega^{\prime})-\rho\,X_{\Lambda}^{-}(\omega^{\prime})X_{\Lambda}^{+}(\omega)\right]\Big\}. \tag{6}\]
Since we now have several possible dissipation channels for the cavity, and the atoms, \(\Lambda\) indexes the cavity and the atom, or atoms.
The dressed operators are defined from
\[\begin{split} X_{\mathrm{cav}}^{+}(\omega)&=\left\langle j \middle|\hat{\Pi}|k\right\rangle\left|j\right\rangle\left\langle k\right|,\\ X_{\mathrm{atom}}^{+}(\omega)&=\left\langle j \middle|\sigma_{x}|k\right\rangle\left|j\right\rangle\left\langle k\right|, \end{split} \tag{7}\]
with \(\omega=\omega_{k}-\omega_{j}>0\), \(X_{\Lambda}^{-}(\omega)=[X_{\Lambda}^{+}(\omega)]^{\dagger}\), and we assume that \(\hat{\Pi}\) has electric field coupling, such that \(\hat{\Pi}^{\mathrm{C}}=\mathrm{i}(a^{\dagger}-a)\) in the Coulomb gauge, and \(\hat{\Pi}^{\mathrm{D}}=\mathrm{i}(a^{\prime\dagger}-a^{\prime})\) in the dipole gauge [34]. We note that the dressed eigenstates \(\{\left\lvert j\right\rangle\}\) are required to construct the correct dressed operators utilized in the GME; these are the eigenstates of the full light-matter system Hamiltonian _including_ the interaction term [34; 76; 44]. The dressed states are naturally gauge-dependent, but the observables are not.
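As an illustration, these dressed operators can be assembled directly from the numerically obtained eigenstates; a minimal sketch reusing the Coulomb-gauge objects from the snippet above, with the number of retained dressed states being our own truncation choice:

```python
# Dressed jump operators of Eq. (7), built from the dressed eigenstates of H_C.
# In the dipole gauge, replace the field operator by the corrected a'.
n_dressed = 12                                  # retained dressed states (assumed)
evals, evecs = H_C.eigenstates(eigvals=n_dressed)
Pi_C = 1j * (a.dag() - a)                       # Coulomb-gauge field operator

X_plus = []                                     # [(omega_kj, X^+(omega_kj)), ...]
for j in range(n_dressed):
    for k in range(j + 1, n_dressed):
        w = evals[k] - evals[j]                 # transition frequency, omega > 0
        amp = Pi_C.matrix_element(evecs[j].dag(), evecs[k])
        if abs(amp) > 1e-10:                    # keep only non-negligible transitions
            X_plus.append((w, amp * evecs[j] * evecs[k].dag()))
```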
Baths are modeled as a (continuous) superposition of damped bosonic harmonic oscillators, and are generally described by their correlation functions and, in turn, their spectral densities of states, which contain information on the frequencies of the bath modes and their coupling to the system [44]. For our purpose, the frequency dependence of the baths is modeled as either a flat bath,
\[\Gamma^{\text{cav}}(\omega)=\kappa,\ \ \Gamma^{\text{atom}}(\omega)=\gamma \tag{8}\]
or an Ohmic bath,
\[\Gamma^{\text{cav}}(\omega)=\frac{\kappa\omega}{\omega_{c}},\ \ \Gamma^{\text{atom}}_{a,b}( \omega)=\frac{\gamma_{a,b}\,\omega}{\omega_{a,b}}. \tag{9}\]
However, in the case of a sensor atom, we use
\[\Gamma^{\text{sen}}(\omega)=\frac{\gamma_{s}\omega}{\omega_{c}}, \tag{10}\]
since in reality the sensor will also have a center frequency at the main detection frequency of interest, which we assume is at the cavity resonance frequency.
If the open system also includes a sensing element, special considerations for the sensing atom's bath must be taken into account. Essentially, we must add the dissipation channel for this sensor atom in an analogous way to the primary atom. However, in principle, we require that the inclusion of the sensor act as a noninvasive measurement. Therefore, we must ensure that \(\gamma_{s}\ll\kappa\) in either the flat or Ohmic form of \(\Gamma^{\text{sen}}(\omega)\), where \(\gamma_{s}\) is the sensor atom decay rate, or else the sensor atom introduces additional broadening to the existing peaks in the spectra. Careful attention is also needed because the dissipation rate of the sensor puts a limit on the coupling strength between itself and the cavity. We take these considerations into account in our results.
For a cavity-QED system in the USC regime, atomic decay is usually negligible since \(\gamma\ll\kappa\); however, \(\gamma\) plays an important role in the sensing atom approach (for its light detection), so we keep the bath functions general for such a study. In the case of two atoms in the USC regime, however, we will use Ohmic baths throughout, where only \(\Gamma^{\text{cav}}(\omega)\) is generally important.
For the excitation process, we also include the incoherent driving through the pump Lindbladian, with
\[\mathcal{L}^{\text{pump}}=\frac{1}{2}P_{\text{inc}}\,\mathcal{D}[X^{-}_{\text {cav}}]\,\rho, \tag{11}\]
where \(\mathcal{D}[\hat{O}]\rho=\frac{1}{2}(2\hat{O}\rho\hat{O}^{\dagger}-\rho\hat{O}^ {\dagger}\hat{O}-\hat{O}^{\dagger}\hat{O}\rho)\), and \(P_{\text{inc}}\) is the incoherent driving strength.
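For concreteness, a simplified assembly of the resulting Liouvillian is sketched below. It keeps only the secular (\(\omega=\omega^{\prime}\)) terms of Eq. (6); the full GME also contains the non-secular cross terms, which we omit here for brevity. The rate values match those quoted later in the paper, and the operators are reused from the sketches above:

```python
from qutip import lindblad_dissipator, liouvillian

kappa = 0.25 * gD                               # cavity decay rate (paper's value)
P_inc = 0.01 * gD                               # incoherent pump strength
Gamma_cav = lambda w: kappa * w / wc            # Ohmic cavity bath, Eq. (9)

L = liouvillian(H_C)                            # coherent part, -i[H, rho]
for w, Xp in X_plus:
    L += Gamma_cav(w) * lindblad_dissipator(Xp)   # secular dissipation channels
X_m = sum(Xp.dag() for _, Xp in X_plus)         # total X_cav^- (raising) operator
L += 0.5 * P_inc * lindblad_dissipator(X_m)     # pump term, Eq. (11)
```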
### Observables
Now that our main master equation model is established, we next present the key observables with which to explore the dynamics of the system. These can also be used to ensure we have properly enforced gauge invariance. We will focus on the cavity-emitted spectrum.
The cavity spectrum is typically computed from the Fourier transform of the two-time cavity correlation function, which exploits the quantum regression theorem. In such an approach, the cavity spectrum is defined from [77]
\[S_{\text{cav}}(\omega)\propto\text{Re}\left[\int_{0}^{\infty}d\tau\,e^{i\omega\tau}\int_{0}^{\infty}dt\,\left\langle X^{-}_{\text{cav}}(t)\,X^{+}_{\text{cav}}(t+\tau)\right\rangle\right], \tag{12}\]
where \(\omega\) is the emission frequency. With incoherent steady-state driving, this simplifies to a single time integration,
\[S_{\text{cav}}(\omega)\propto\text{Re}\left[\int_{0}^{\infty}d\tau e^{i \omega\tau}\left\langle X^{-}_{\text{cav}}(0)\,X^{+}_{\text{cav}}(\tau) \right\rangle\right], \tag{13}\]
carried out after the system dynamics has reached steady state.
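In QuTiP, this procedure can be sketched as follows (schematic; `L` and `X_m` are the Liouvillian and total \(X^{-}_{\rm cav}\) operator assembled above, and the time grid is our own choice):

```python
from qutip import steadystate, correlation_2op_1t, spectrum_correlation_fft

rho_ss = steadystate(L)                         # steady state under pump and loss
taulist = np.linspace(0.0, 500.0 / gD, 5000)    # correlation time grid (assumed)
X_p = X_m.dag()                                 # total X_cav^+
# <X^-(0) X^+(tau)> via the quantum regression theorem, then FFT -> spectrum
corr = correlation_2op_1t(L, rho_ss, taulist, [], X_m, X_p, reverse=True)
wlist, S_cav = spectrum_correlation_fft(taulist, corr)
```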
An alternative method for computing the spectra is to include a sensing atom, and compute its excitation flux. Reference [78] showed how normal-order correlation functions, used to compute the spectrum and other
Figure 1: **Cavity-QED schemes with two atoms.** Schematics of the cavity-QED model with a second atom, including: (a) the sensor atom approach and (b) the generalized Dicke model in the USC regime. In the sensor atom approach (a), the second TLS, shown as a sensor atom, is weakly coupled to the cavity (hence drawn outside the cavity). In the generalized Dicke model (b), the second atom is also considered to be ultrastrongly coupled to the cavity (similar to the first atom, but it can have different coupling parameters).
observables, can be computed from a frequency-tunable sensing atom in the limit of small coupling with the field. In the USC regime, such methods have been discussed at the system Hamiltonian level [38], and here we test how well such an approach recovers the same sort of spectra as the quantum regression theorem. If such a model is correct and is gauge invariant, it naturally extends to allow us to explore two atoms in the USC regime. In the latter case, we will use the quantum regression theorem again, primarily for convenience. However, we remark that the sensing atom approach has several potential advantages: \((i)\) it does not require a Born-Markov approximation to be valid, and \((ii)\) it can easily be used to model pulsed excitations, without the need for a double-time integral.
### Spectrum detected from the sensing atom approach
The detected spectrum from the sensing atom approach (SAA) is defined through
\[S_{\text{cav}}^{\text{SAA}}(\omega_{s})=\int_{0}^{\infty}dt\left\langle X_{ \text{sen}}^{-}(t)\,X_{\text{sen}}^{+}(t)\right\rangle, \tag{14}\]
where \(\omega_{s}\) is the sensing atom frequency and, according to Eq. (7), \(X_{\text{sen}}^{+}=\langle j|\sigma_{x,s}|k\rangle\,|j\rangle\langle k|\), with \(\sigma_{x,s}\) being the Pauli \(x\)-matrix for the sensor. In contrast to the quantum regression theorem, such an approach does not require any two-time correlation functions. Moreover, since we are exciting the system with a steady-state drive, then
\[S_{\text{cav}}^{\text{SAA}}(\omega_{s})=\left\langle X_{\text{ sen}}^{-}(0)\,X_{\text{sen}}^{+}(0)\right\rangle, \tag{15}\]
which is (again) computed when the system reaches a steady state. This method has a simple physical interpretation; the sensing atom excitation number is proportional to the photon flux of the cavity-QED system which contains another atom in the USC regime. The sensing atom then "detects" the cavity output flux.
For the SAA to be valid, the sensing parameters should generally be noninvasive to avoid affecting the detected spectrum. To be specific, the sensor atom should have a vanishing coupling strength, \(g_{s}\ll g\). In order to determine acceptable parameters for the sensor atom (i.e., a coupling that does not perturb the system), we use parameters that provide constant results over a range of frequencies, with acceptable run times, guided by the criterion \(g_{s}\ll\sqrt{\gamma_{s}R/2}\), where \(R\) is any rate in the system [78].
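A schematic of the full sweep is shown below; `build_saa_liouvillian` and `sensor_X_plus` are hypothetical helpers that would assemble Eq. (5) for \(\hat{\mathcal{H}}_{\mathrm{SAA}}(\omega_{s})\) and the dressed sensor operator of Eq. (7), both of which must be rebuilt at every \(\omega_{s}\) because the dressed basis changes:

```python
from qutip import expect

def saa_spectrum(ws_list):
    """Eq. (15): steady-state sensor excitation versus sensor frequency."""
    S = []
    for ws in ws_list:
        L_ws = build_saa_liouvillian(ws)        # hypothetical helper, Eq. (5)
        rho_ss = steadystate(L_ws)
        Xp_s = sensor_X_plus(ws)                # hypothetical helper, Eq. (7)
        S.append(np.real(expect(Xp_s.dag() * Xp_s, rho_ss)))
    return np.array(S)
```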
### Photodetection rate of cavity photons
Another useful quantity to calculate is the photodetection rate of cavity photons, emitted from the \(\left|j\right\rangle\rightarrow\left|k\right\rangle\) transition [76], which is proportional to the \(\mathcal{P}\) (\(\hat{\Pi}/\sqrt{2}\)) quadrature matrix elements squared, namely:
\[|\mathcal{P}_{jk}|^{2}=\frac{1}{2}|\langle j|\hat{\Pi}|k\rangle|^{2}. \tag{16}\]
This is the main system-level quantity that affects the spectral transition rates; however, the transition rates, \(T_{jk}\), must also be multiplied by a factor \(D^{2}(\omega_{jk})\), where \(D(\omega)\) is the density of states of the relevant bath [40], so that, in the case of cavity emission, \(T_{jk}=2\pi D_{\text{cav}}^{2}(\omega_{jk})|\mathcal{P}_{jk}|^{2}\); this can be derived from Fermi's golden rule. Moreover, in the presence of a sensing atom, there is additional filtering through the sensing atom's density of states, as we will show below.
For all our numerical calculations, we use Python, and we also exploit the QuTiP module for quantum objects and operations [79; 80].
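For example, the quadrature matrix elements of Eq. (16) and the Ohmic-weighted transition rates can be evaluated with a few lines (a sketch; the \(2\pi D^{2}\) prefactor is absorbed into the \((\omega_{jk}/\omega_{c})^{2}\) weighting, and \(\hat{\Pi}\) is the gauge-appropriate field operator):

```python
def transition_rates(H, Pi, n_states=7):
    """Eq. (16) matrix elements, weighted by an Ohmic density-of-states factor."""
    evals, evecs = H.eigenstates(eigvals=n_states)
    rates = {}
    for j in range(n_states):
        for k in range(j + 1, n_states):
            w_jk = evals[k] - evals[j]
            P2 = 0.5 * abs(Pi.matrix_element(evecs[j].dag(), evecs[k])) ** 2
            rates[(k, j)] = (w_jk / wc) ** 2 * P2   # T_jk up to a constant prefactor
    return rates
```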
### System Hamiltonians and gauge-fixing for the sensor atom approach
We first introduce a second atom (TLS) as a sensor for the cavity-emitted spectrum. While the sensor atom approach does not introduce any further observables to probe or any vastly new physics, it provides a check for gauge invariance and verifies that the second atom is included correctly in the model, before elevating it to a second atom also in the USC regime. It also demonstrates the influence of an additional bath coupling, which is relevant for other types of detection, including cavity detection. As mentioned earlier, this method for simulating spectra also holds some potential advantages over the quantum regression theorem, which requires the calculation of the two-time correlation function. In particular, the sensing atom approach is potentially more powerful when used to compute various multi-time correlation functions [78] and to model short-pulse excitation. The sensor atom approach is also a valid physical model for the detection of photons emitted from the cavity, and a similar approach could be used to describe a sensing cavity as well.
In order to not affect the spectrum, this sensor atom should have a vanishing coupling strength, \(g_{s}\ll g\). In practice, however, it may have a minor influence on the computed spectrum, and a qualitatively different one if it also has a non-trivial bath function.
The system Hamiltonian in such a case may be naively constructed by adding two terms to Eq. (1): \((\omega_{s}/2)\,\sigma_{z,s}\) and \(\mathrm{i}g_{s}^{\text{D}}(a^{\dagger}-a)\sigma_{x,s}\), namely the bare Hamiltonian of the sensor and the interaction Hamiltonian between the sensor and the cavity, respectively [78]; this is analogous to the treatment of the primary atom in the dipole gauge. Perhaps counter-intuitively, given our discussion of the dipole-gauge Hamiltonian, here a gauge correction involving only the main atom also needs to be applied at the Hamiltonian level for the sensor interaction [38]. This is because the sensor atom couples to the electric field of the cavity with its coupling to the primary atom already included, which explicitly contains the corrected \(\hat{a}^{\prime}\) operators (similar to the cavity bath operator terms in the GME). Therefore, the naive choice is incorrect, and gauge fixing must be applied so that the interaction
Hamiltonian between the sensor atom and the cavity becomes
\[\mathrm{i}g_{s}^{\mathrm{D}}(a^{\dagger}-a)\sigma_{x,s} \to g_{s}^{\mathrm{D}}\left[\mathrm{i}(a^{\dagger}-a)+2\eta\sigma_{x}+2 \eta_{s}\sigma_{x,s}\right]\sigma_{x,s}\] \[\approx g_{s}^{\mathrm{D}}\left[\mathrm{i}(a^{\dagger}-a)+2\eta \sigma_{x}\right]\sigma_{x,s}, \tag{17}\]
since \(\eta_{s}\ll\eta\) and also \(\sigma_{x,s}^{2}=\mathbf{1}\) (which only gives an energy offset).
Thus, in the dipole gauge, the gauge-corrected full system Hamiltonian reads [38, 76]
\[\hat{\mathcal{H}}_{\mathrm{SAA}}^{\mathrm{D}}=\hat{\mathcal{H}}_{ \mathrm{QR}}^{\mathrm{D}}+\frac{\omega_{s}}{2}\sigma_{z,s}+g_{s}^{\mathrm{D}} \left[\mathrm{i}(a^{\dagger}-a)+2\eta\sigma_{x}\right]\sigma_{x,s}. \tag{18}\]
Applying a RWA, we then have
\[\hat{\mathcal{H}}_{\mathrm{SAA}}^{\mathrm{D}}|_{\mathrm{RWA}} =\hat{\mathcal{H}}_{\mathrm{JC}}^{\mathrm{D}}+\frac{\omega_{s}}{ 2}\sigma_{z,s}\] \[+g_{s}^{\mathrm{D}}\left[\mathrm{i}(a^{\dagger}\sigma_{s}^{-}-a \sigma_{s}^{+})+2\eta(\sigma^{+}\sigma_{s}^{-}+\sigma^{-}\sigma_{s}^{+})\right], \tag{19}\]
where \(\eta\) is the normalized coupling for the primary atom. Clearly, in the sensor atom approach, the sensor atom does not need to modify the principal atom-cavity coupling and related observable operators, though its bath interactions can play a qualitatively important role. In the USC regime, of course the RWA does not work, but it is useful to highlight the effects of counter-RWA terms, at least at the level of how these affect the system eigenfrequencies.
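In QuTiP, the gauge-corrected Hamiltonian of Eq. (18) can be sketched on the photon ⊗ atom ⊗ sensor Hilbert space as follows; this is a minimal sketch reusing the parameters defined in the earlier snippets, with \(g_{s}=0.001g\) matching the value used later:

```python
a3 = tensor(destroy(N_ph), qeye(2), qeye(2))    # photon (x) atom (x) sensor space
sx_a = tensor(qeye(N_ph), sigmax(), qeye(2))
sz_a = tensor(qeye(N_ph), sigmaz(), qeye(2))
sx_s = tensor(qeye(N_ph), qeye(2), sigmax())
sz_s = tensor(qeye(N_ph), qeye(2), sigmaz())
gs = 0.001 * gD                                 # weak sensor coupling

def H_SAA_D(ws):
    """Gauge-corrected dipole-gauge SAA Hamiltonian, Eq. (18)."""
    H_QR = wc * a3.dag() * a3 + 0.5 * wa * sz_a + 1j * gD * (a3.dag() - a3) * sx_a
    H_sen = 0.5 * ws * sz_s
    H_int = gs * (1j * (a3.dag() - a3) + 2 * eta * sx_a) * sx_s   # gauge-fixed term
    return H_QR + H_sen + H_int
```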
In the Coulomb gauge, the Hamiltonian can also be obtained from the dipole-gauge one, by conducting the unitary transform \(\hat{\mathcal{H}}_{\mathrm{SAA}}^{\mathrm{C}}=\mathcal{U}_{1,s}^{\dagger}\hat{\mathcal{H}}_{\mathrm{SAA}}^{\mathrm{D}}\mathcal{U}_{1,s}\), with \(\mathcal{U}_{1,s}=\exp[\mathrm{i}(a+a^{\dagger})(\eta\sigma_{x}+\eta_{s}\sigma_{x,s})]\), yielding [81]
\[\hat{\mathcal{H}}_{\mathrm{SAA}}^{\mathrm{C}} =\hat{\mathcal{H}}_{\mathrm{QR}}^{\mathrm{C}}\] \[\quad+\frac{\omega_{s}}{2}\left\{\sigma_{z,s}\cos\left[2\eta_{s}( a+a^{\dagger})\right]+\sigma_{y,s}\sin\left[2\eta_{s}(a+a^{\dagger})\right] \right\}, \tag{20}\]
where \(g_{s}^{\mathrm{C}}=g_{s}^{\mathrm{D}}\omega_{s}/\omega_{c}\), \(\eta_{s}=g_{s}^{\mathrm{C}}/\omega_{s}=g_{s}^{\mathrm{D}}/\omega_{c}\), and again we have neglected terms proportional to the identity operator. Note here that, as in the Coulomb Hamiltonian without the sensor, there is no separable bare sensor Hamiltonian when including the gauge correction. Without the gauge correction, however, we would have \((\omega_{s}/2)\sigma_{z,s}+g_{s}^{\mathrm{D}}(a+a^{\dagger})\sigma_{y,s}\) for the bare sensor and cavity-sensor interaction Hamiltonian.
Naturally, we can apply this transformation in reverse, i.e., to the cavity operators in the Coulomb gauge, to find the two-atom version of the gauge correction in the dipole gauge. As expected, this results in the corrected operators in the dipole gauge, and yields
\[a\,\rightarrow\,a^{\prime}=a-\mathrm{i}(\eta\sigma_{x}+\eta_{s}\sigma_{x,s}). \tag{21}\]
As discussed before, computing field observables in the dipole gauge must then be done with these corrected electric-field operators. The GME for the sensor atom approach has the same format as Eq. (5). However, one must use the correct definition of Hamiltonian \(\hat{\mathcal{H}}\equiv\hat{\mathcal{H}}_{\mathrm{SAA}}^{\mathrm{D}/\mathrm{C}}\), to ensure gauge-invariant results, and also one has additional bath coupling terms, which need not have the same spectral function (i.e., the density of states seen by the sensing atom could be different to the density of states seen by the cavity).
### System Hamiltonians and gauge-fixing for the generalized Dicke model
The Dicke model in the USC regime takes the QRM and adds a second identical atom to the system, also in the USC regime. This also has to be done with the gauge-corrections, to ensure the results are gauge invariant. The difference between the regular (or previously extended) Dicke model [82, 83, 84, 85] and the GDM is that we allow the two ultrastrongly coupled atoms to vary in their frequency and coupling strength. We have already described how to include a second atom (TLS) into our system in both the dipole gauge and the Coulomb gauge, using the sensor atom approach, where gauge invariance is also ensured. The only difference here is that we use Eq. (12) to compute the spectrum since the second atom is now participating in energy exchange with the cavity and its population cannot be relied on to obtain the spectrum, as it is no longer acting as a weak sensor. Of course, one could bring in a third atom as a sensor, but in general, the quantum regression theorem is efficient and accurate for the problems we study below, especially with time-independent incoherent driving.
Conveniently, the required Hamiltonians for two atoms in the USC regime are similar to the above, except that for the operators and quantities associated with the original atom we assign a subscript '\(a\)' and for those of the sensor atom we assign a subscript '\(b\)', and we no longer assume \(\eta_{b}\ll\eta_{a}\). Thus we have
\[\hat{\mathcal{H}}_{\mathrm{GDM}}^{\mathrm{D}} =\omega_{c}a^{\dagger}a+\frac{\omega_{a}}{2}\sigma_{z,a}+\mathrm{ i}g_{a}^{\mathrm{D}}(a^{\dagger}-a)\sigma_{x,a}\] \[+\frac{\omega_{b}}{2}\sigma_{z,b}+\mathrm{i}g_{b}^{\mathrm{D}}(a^ {\dagger}-a)\sigma_{x,b}+2\omega_{c}\eta_{a}\eta_{b}\sigma_{x,a}\sigma_{x,b}, \tag{22}\]
and for reference, with the application of the RWA, one has
\[\hat{\mathcal{H}}_{\mathrm{GDM}}^{\mathrm{D}}|_{\mathrm{RWA}}=\omega_{c}a^{\dagger}a+\frac{\omega_{a}}{2}\sigma_{z,a}+\mathrm{i}g_{a}^{\mathrm{D}}(a^{\dagger}\sigma_{a}^{-}-a\sigma_{a}^{+})+\frac{\omega_{b}}{2}\sigma_{z,b}+\mathrm{i}g_{b}^{\mathrm{D}}(a^{\dagger}\sigma_{b}^{-}-a\sigma_{b}^{+})+2\omega_{c}\eta_{a}\eta_{b}(\sigma_{a}^{-}\sigma_{b}^{+}+\sigma_{a}^{+}\sigma_{b}^{-}). \tag{23}\]
As before, from the transformation \(\hat{\mathcal{H}}_{\mathrm{GDM}}^{\mathrm{C}}=\mathcal{U}_{1,2}^{\dagger}\hat{ \mathcal{H}}_{\mathrm{GDM}}^{\mathrm{D}}\mathcal{U}_{1,2}\), with \(\mathcal{U}_{1,2}=\exp[\mathrm{i}(a+a^{\dagger})(\eta_{a}\sigma_{x,a}+\eta_{b} \sigma_{x,b})]\)
then the correct Coulomb-gauge Hamiltonian is
\[\hat{\mathcal{H}}^{\text{C}}_{\text{GDM}}=\omega_{c}a^{\dagger}a\] \[\quad+\frac{\omega_{a}}{2}\left\{\sigma_{z,a}\cos\left[2\eta_{a}(a+ a^{\dagger})\right]+\sigma_{y,a}\sin\left[2\eta_{a}(a+a^{\dagger})\right]\right\}\] \[\quad+\frac{\omega_{b}}{2}\left\{\sigma_{z,b}\cos\left[2\eta_{b}(a+ a^{\dagger})\right]+\sigma_{y,b}\sin\left[2\eta_{b}(a+a^{\dagger})\right]\right\}. \tag{24}\]
Simulations can then proceed as before, e.g., if using the dipole gauge, then one must use the corrected cavity operator
\[a\,\rightarrow\,a^{\prime}=a-\mathrm{i}(\eta_{a}\sigma_{x,a}+\eta_{b}\sigma_{x,b}), \tag{25}\]
to compute cavity observables in the dipole gauge, and these are also used for deriving the dissipation terms in the GME. The GME [Eq. (5)] then uses the appropriate Hamiltonians, \(\hat{\mathcal{H}}\equiv\hat{\mathcal{H}}^{\text{D/C}}_{\text{GDM}}\), which are the total gauge-corrected Hamiltonians of the system in their respective gauge for the GDM. In the Coulomb gauge, no modification is needed on the field operators.
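To make this construction concrete, the sketch below builds Eqs. (22) and (24) in QuTiP, reusing the three-subsystem operators defined earlier (with the sensor slot now holding atom \(b\); the parameter values are illustrative), and checks that the two gauges give the same transition energies:

```python
sy_a = tensor(qeye(N_ph), sigmay(), qeye(2))
sy_b = tensor(qeye(N_ph), qeye(2), sigmay())
sx_b, sz_b = sx_s, sz_s                          # sensor slot reused for atom b

wb = 0.5 * wc                                    # illustrative choice
eta_a = eta_b = 0.5
ga, gb = eta_a * wc, eta_b * wc

H_GDM_D = (wc * a3.dag() * a3                                   # Eq. (22)
           + 0.5 * wa * sz_a + 1j * ga * (a3.dag() - a3) * sx_a
           + 0.5 * wb * sz_b + 1j * gb * (a3.dag() - a3) * sx_b
           + 2 * wc * eta_a * eta_b * sx_a * sx_b)

arg_a = 2 * eta_a * (a3 + a3.dag())
arg_b = 2 * eta_b * (a3 + a3.dag())
H_GDM_C = (wc * a3.dag() * a3                                   # Eq. (24)
           + 0.5 * wa * (sz_a * arg_a.cosm() + sy_a * arg_a.sinm())
           + 0.5 * wb * (sz_b * arg_b.cosm() + sy_b * arg_b.sinm()))

# Identity terms were dropped in both printed forms, so compare energies
# relative to the respective ground states.
eD, eC = H_GDM_D.eigenenergies()[:7], H_GDM_C.eigenenergies()[:7]
print(np.abs((eD - eD[0]) - (eC - eC[0])).max())                # ~0
```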
## III Results and discussions of the sensor atom approach
Thus far, we have presented a gauge-corrected model for the system Hamiltonian for the sensing atom approach; namely, one that yields the same eigenfrequencies in either the dipole gauge or the Coulomb gauge. This leads to a proper understanding of the eigenenergies and eigenstates of the closed system, which is basically an extension of the original QRM. We must also consider dissipation for this sensor atom, including it in an analogous way to the primary atom, under the condition \(\gamma_{s}\ll\kappa\). The chosen dissipation rate of the sensor also puts a limit on the coupling strength between itself and the cavity. In general, we require that the coupling be small enough to ensure that losses from the cavity into the sensor and the sensor back-action on the main system are negligible. This crucial argument leads to an acceptable range of parameters, discussed in Ref. [78] and in more detail below.
For computing spectra in either considered gauge (dipole and Coulomb), we then allow the system to evolve to steady state, also including an incoherent pump term. Once the steady state has been reached, we take the expectation value of the sensor excitation. We do this for a range of frequencies of interest to form the sensor atom spectra, performing a calculation for each scanned \(\omega_{s}\).
For this sensor atom section, we will focus our attention on the sensing atom interacting with the primary atom in the USC regime, using a fixed coupling parameter of \(\eta=0.5\) to the primary atom. This USC regime has been studied recently using gauge-independent master equations, and shown to yield identical results in the dipole and Coulomb gauges, and is thus an excellent test-bed to also compare with a sensing atom simulation [34].
We must first ensure an approximately vanishing coupling rate compared to the coupling between the cavity and the main atom. We take \(g_{s}=0.001g\) to satisfy this condition. Then, to obtain a lower limit on \(\gamma_{s}\), we find the _smallest_ transition rate in our system at \(\eta=0.5\) to be \(R\approx 0.3g\), and we choose a slightly smaller value than this, to satisfy the previously mentioned criterion \(g_{s}\ll\sqrt{\gamma_{s}R/2}\)[78]. If we then use \(\gamma_{s}=0.0025g\), we obtain \(\sqrt{\gamma_{s}R/2}\approx 0.02g\gg 0.001g=g_{s}\). Therefore, we use \(\kappa\gg\gamma_{s}\geq 0.0025g\) as the acceptable range of values to choose from.
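This simple arithmetic is easy to verify:

```python
# Sensor criterion check: g_s << sqrt(gamma_s R / 2) with the values quoted above.
g = 1.0
g_s, gamma_s, R = 1e-3 * g, 2.5e-3 * g, 0.3 * g
print(np.sqrt(gamma_s * R / 2.0))   # ~0.019 g, well above g_s = 0.001 g
```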
### Dressed eigenenergies/eigenstates and example transitions
In Fig. 2(a), we plot the eigenenergies of the single-atom cavity-QED system without (blue solid curves) and with (red dashed curves) the RWA. This helps to highlight the role of the counter-rotating wave terms with increasing \(\eta\). The computed energies are gauge independent (namely, the Coulomb and dipole gauges yield identical results), as they should be. In the sensor atom approach, one expects no difference between the main eigenenergies and those of a single TLS-cavity system, which we have confirmed to be the case; however, additional states naturally appear because of the sensing atom states, which depend on \(\omega_{s}\).
The three significant optical transitions are identified with the downward arrows and the letters 'A' (\(\omega_{10}/\omega_{c}\approx 0.5\)), 'B' (\(\omega_{31}/\omega_{c}\approx 8.2\)), and 'C' (\(\omega_{20}/\omega_{c}\approx 1.45\)), for \(\eta=0.5\) (primary atom). These transitions are responsible for the significant peaks that will appear in the incoherent spectra shown in Fig. 3, discussed below.
When the primary atom and sensor atom are both on resonance with the cavity, as expected, the eigenenergy lines start together at low \(\eta\) and then split from the same initial points at multiples of the cavity transition energy, as shown in Fig. 2(b). When the primary atom is on resonance but \(\omega_{s}=0.5\omega_{c}\) in Fig. 2(c), the extra eigenenergy lines start at half-multiples of the cavity transition energy. In all three panels of Fig. 2, the deviation of the JCM eigenenergies from the full QRM Hamiltonian eigenenergies is apparent as the normalized coupling parameter increases, a failure that is of course fully expected [4; 5].
### Cavity-emitted spectra via incoherent driving
We next study the cavity-emitted spectra via incoherent driving, and consider weak drives so as not to perturb the system eigenstates too much. We will compare our sensor results to those obtained using the quantum regression theorem [34]. Additionally, we investigate the effects of changing the various bath spectral functions.
In the top two rows of Fig. 3, panels (a-d), we consider a flat bath for the cavity (i.e., \(\Gamma^{\text{cav}}(\omega)=\kappa\)) and
show the effect of changing the atomic baths from flat to Ohmic. Using the quantum regression theorem, changing the atomic bath has almost no visible effect on the spectrum when the cavity bath is flat. However, the spectrum detected by the sensor atom is _drastically_ modified if the atomic bath of the sensor atom has a non-trivial frequency dependence. This can be viewed as an additional filtering process, e.g., in the case of an Ohmic sensor bath (i.e., \(\Gamma^{\mathrm{sen}}(\omega)=\gamma_{s}\omega/\omega_{c}\)), there is increasing dissipation at higher frequencies, thus reducing the strength of the peak on the right and increasing the relative strength of the peak on the left.
Next, as shown in the bottom half of Fig. 3, panels (e-h), we now consider an Ohmic cavity bath and look at the effect of changing the atomic baths. The first result to note is that the Ohmic cavity bath produces the largest change of any of the models explored here. This is not too surprising, as the cavity dissipation is by far the largest to begin with (\(\kappa=0.25g\) vs. \(\gamma=\gamma_{s}=0.005g\)), and we are modeling cavity emission. Thus, the dissipation is overwhelmingly dominated by \(\kappa\), and any frequency dependence included with it will have a larger effect. Also,
Figure 3: **Cavity-QED incoherent spectra for one atom in the ultrastrong coupling regime.** Cavity emitted spectra were computed using the sensor atom approach (left column) and the full quantum regression theorem (right column). On the right, we list the type of bath used for the cavity and the two atoms (which use the same bath type). Gauge-corrected (not corrected) results are shown with solid blue (dashed orange) curves. We use incoherent driving with \(P_{\mathrm{inc}}=0.01g\). Other system parameters are \(\kappa=0.25g\), \(\gamma=\gamma_{s}=0.005g\), \(g_{s}=0.001g\), \(\omega_{c}=\omega_{a}\), \(\eta=0.5\). The three significant peaks A, B, and C represent the major transitions shown in Fig. 2.
Figure 2: **Cavity-QED eigenenergies with and without a sensor atom.** Computed first ten eigenenergies of the QRM (blue solid lines) and the JCM (red dashed lines) for the cavity-QED system with (a) a single atom that is in resonance with the cavity, i.e., \(\omega_{a}=\omega_{c}\), obtained from Eqs. (1) and (2), respectively; (b) the sensor atom approach Hamiltonian, where both the primary and the sensor atoms are in resonance with the cavity, \(\omega_{a}=\omega_{s}=\omega_{c}\), and the sensor atom has the coupling \(g_{s}^{\mathrm{D}}=0.001g^{\mathrm{D}}\); and (c) the sensor atom approach Hamiltonian, where the primary atom is in resonance with the cavity, \(\omega_{a}=\omega_{c}\), and the sensor atom has \(\omega_{s}=\omega_{c}/2\) and \(g_{s}^{\mathrm{D}}=0.001g^{\mathrm{D}}\), obtained from Eqs. (18) and (19), respectively. In panel (a), the three significant transitions are identified with the downward arrows and the letters ‘A’ (\(\omega_{10}/\omega_{c}\approx 0.5\)), ‘B’ (\(\omega_{31}/\omega_{c}\approx 8.2\)), and ‘C’ (\(\omega_{20}/\omega_{c}\approx 1.45\)), for \(\eta=0.5\). These resonances are highlighted for reference when explaining the key features of the cavity spectrum.
since the two models (quantum regression theorem and sensor atom) here have the same dependence on the single cavity, we see the change in the cavity bath having a similar effect on both spectra (specifically, reversing the asymmetry and modifying the relative peak heights to a similar extent). When we now also change the atomic baths to be Ohmic, we see similar effects to those above. The quantum regression theorem results are now slightly affected, and we again see a large effect on the sensor approach model.
## IV Results and discussions of the generalized Dicke model
In the description of the GDM, we extend our sensor atom approach so that the second atom becomes a primary part of the coupled system (i.e., no longer weakly coupled but also in the USC regime). Using this approach, we allow the second atom's properties to vary relative to the first atom while the second atom is also in the USC regime. Our two-atom Hamiltonian in the dipole gauge [Eq. (22)] is equivalent to the extended Dicke model in Ref. [63] in the case that the two atoms are degenerate (i.e., \(g_{a}=g_{b}\) and \(\omega_{a}=\omega_{b}\)). However, our main focus will be on analyzing spectra obtained with _dissimilar_ atoms, where the coupling parameters and resonant frequencies need not be the same.
Similar to the sensor atom approach, one first has to identify the dressed operators, which are now found using the eigenstates of the full gauge-corrected GDM Hamiltonians, _including_ the second atom. We can also consider dissipation for this second atom, including it in the same way as for the primary atom, but these rates are basically negligible, as cavity decay is the main source of loss. In either the Coulomb gauge or the dipole gauge, we then allow the system to evolve to a steady state, again including an incoherent pump term. From now on, we use the quantum regression theorem to compute the spectra. Our first calculations will show explicitly the effect of gauge fixing and confirm gauge invariance of the spectra; thereafter we simply choose the dipole gauge, since both gauges yield identical results.
### System characterization: dressed eigenenergies/eigenstates and transitions
In Fig. 4, we show the first seven eigenenergies of the GDM without (blue solid curves) and with (red dashed curves) the RWA. In panel (a), we display the eigenenergies as a function of the equal normalized coupling parameter of the two atoms, \(\eta\equiv\eta_{a}=\eta_{b}\). In the GDM, one expects a considerable difference between the resulting eigenenergies and those of the single TLS-cavity system (or of the sensor atom approach), cf. Fig. 2. In particular, one observes significant hybridization of the two TLSs in the system, leading to splitting of the eigenenergy curves. As expected from our previous observation in the sensor atom approach, when \(\omega_{b}=0.5\omega_{c}\) in Fig. 4(a), the extra eigenenergy lines start at half-multiples of the cavity transition energy. As opposed to a regular Dicke model with identical atoms, the different starting and splitting points of the eigenenergies result in different hybridization and crossing/anti-crossing behavior. This opens up the possibility of more exotic transitions and spectra in comparison to the one-atom spectra and the regular Dicke model.
In panel (b) of Fig. 4, the GDM eigenenergies are plotted when the coupling parameters have a phase difference. Letting \(g_{b}=g_{a}\exp[\mathrm{i}\pi\phi]\), we equate their amplitude but vary their phase via the sweep of \(\phi\) from \(0\) to \(1\). The eigenenergies in both models show symmetry and crossing at \(\phi=0.5\), as the sign
Figure 4: **Selected eigenenergies using the generalized Dicke model.** We show the first seven eigenenergies of the GDM. (a) Full quantum model without a RWA (blue solid lines) and with a RWA (red dashed lines), displaying eigenenergies with \(\omega_{a}=2\omega_{b}=\omega_{c}\) versus \(\eta\equiv\eta_{a}=\eta_{b}\). (b) Full model eigenenergies for \(\omega_{a}=2\omega_{b}=\omega_{c}\) and \(|g_{b}|=g_{a}=0.5\omega_{c}\) (\(\eta=0.5=|\eta_{a}|=|\eta_{b}|\), with \(g_{b}=g_{a}\exp[\mathrm{i}\pi\phi]\)) versus the relative phase between the coupling parameters; here we do not show RWA results, as they are all clearly wrong and also gauge dependent. The random color coding in panel (b) helps distinguish the different eigenenergies.
of the coupling does not change the physics.
In Fig. 5, we show further details about the eigenstate properties and transitions. In particular, in Fig. 5(a) the parities of the first seven eigenstates are shown for a range of interest in the second atom's normalized frequency, \(\omega_{b}/\omega_{c}\). We define the parity of a state \(\ket{j}\) as \(\bra{j}\hat{P}\ket{j}\), where \(\hat{P}=\exp[\mathrm{i}\pi\hat{N}]\) and \(\hat{N}=\sigma_{a}^{+}\sigma_{a}^{-}+\sigma_{b}^{+}\sigma_{b}^{-}+a^{\prime\dagger}a^{\prime}\) is the total excitation number (in the dipole gauge). We label the states in Fig. 5(a) even (odd) if their parity is positive (negative). We see that for the considered range of frequencies, the first three excited states have odd parity, while the ground state and the fourth through sixth excited states have even parity. Correspondingly, in panel (b), we plot the energy eigenvalues of these lowest seven states, where we distinguish their parity and label the main transitions that will show up in the cavity spectra. Also, in Fig. 5(c), \(|\mathcal{P}_{ij}|^{2}\) for these transitions is shown, which is related to the transition rates (which also depend on the density of states of the cavity bath). All of these properties clearly explain the main spectral peaks that emerge in the computed spectrum.
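For reference, the parities in Fig. 5(a) can be computed directly from the dressed states; a minimal sketch reusing the GDM objects defined above, with the corrected photon operator of Eq. (25):

```python
from qutip import sigmap, expect

sp_a = tensor(qeye(N_ph), sigmap(), qeye(2))    # sigma_a^+
sp_b = tensor(qeye(N_ph), qeye(2), sigmap())    # sigma_b^+

a_prime = a3 - 1j * (eta_a * sx_a + eta_b * sx_b)   # corrected photon operator, Eq. (25)
N_tot = sp_a * sp_a.dag() + sp_b * sp_b.dag() + a_prime.dag() * a_prime
P_op = (1j * np.pi * N_tot).expm()              # parity operator exp(i pi N)

evals, evecs = H_GDM_D.eigenstates(eigvals=7)
parities = [float(np.real(expect(P_op, v))) for v in evecs]
```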
As mentioned previously, and as defined in Eq. (16), the rate of a transition from state \(\ket{j}\) to state \(\ket{k}\) is proportional to \(|\mathcal{P}_{jk}|^{2}=\frac{1}{2}|\langle j|\mathrm{i}(a^{\prime\dagger}-a^{\prime})|k\rangle|^{2}\) in the dipole gauge [34, 40].
### Cavity emitted spectra via incoherent driving
Our first set of GDM spectra to study is with \(\omega_{a}\neq\omega_{b}\), assuming that the two atoms have the same coupling strength. We will first highlight that our current models do indeed ensure gauge invariance.
In Fig. 6(a), we compare the spectra obtained in the dipole gauge and Coulomb gauge and also show the naive non-gauge-corrected counterparts. Throughout this section, we use Ohmic baths for the cavity and atoms. We display the spectra as a function of the second atom's frequency, while the first is held on resonance with the cavity mode, and both atoms are in the USC regime. It can easily be seen that, while the non-gauge-corrected spectra do pick up some of the correct features, they clearly do not satisfy gauge invariance. The corrected spectra are not only clearly gauge invariant but are also much richer, with additional features including a visible anti-crossing around \(\omega_{b}/\omega_{c}\approx 1\) and the disappearance of a main peak in this regime as well.
Beyond the numerical success of our model and codes, we can once again identify key differences in the behavior of our system with and without gauge correction. In all subsequent calculations, we will just choose the dipole gauge, since the results produce observables that are clearly gauge-invariant.
In Fig. 6(b), we plot spectra at a few selected \(\omega_{b}\) values of interest, along with the spectra in the absence of the second TLS. The anticrossing behavior in Fig. 6(a) is at its closest at \(\omega_{b}/\omega_{c}\approx 1\); examining the 2D spectra at this frequency in Fig. 6(b), we can see that the splitting is about \(0.063\omega_{c}=0.126g\), or about \(g/8\). The location of this minimal splitting can be understood by looking at Fig. 5(b) and noting that the eigenvalue of the third excited state is closest to twice that of the first excited state at this point, thereby making the frequencies of the \(\mathbf{B}\) and \(\mathbf{A}\) transitions closest.
The change in \(|\mathcal{P}_{ij}|^{2}\) (which is proportional to the transition rates) is shown in Fig. 5(c), which correlates with a change in the associated peak's height, e.g., peak \(\mathbf{C}\), which is absent from the spectra (peak height is zero) on resonance; this can be explained by the transition rate going to zero in this regime, as shown in Fig. 5(c). This is a strong indicator that there are features of the GDM that
Figure 5: **Generalized Dicke model state parities, optical transitions and (normalized) transition rates.** (a) Parities of the first seven states of the GDM _versus_ the second atom’s normalized frequency. (b) Eigenenergies of the first seven states of the GDM _versus_ the second atom’s normalized frequency with positive (blue) and negative (orange) parity. (c) \(\mathcal{P}\) quadrature matrix element squared of the selected transitions in panel (b) versus the second atom’s normalized frequency obtained via Eq. (16). The position of the arrows in panel (b) is irrelevant. The plots are for \(\eta_{\mathrm{a}}=\eta_{\mathrm{b}}=0.5\).
can only be accessed when the atoms are dissimilar (i.e., when the second atom is off-resonance). However, one cannot solely rely on the \(\mathcal{P}\) quadrature matrix element squared to determine the relative heights of the peaks. If this were the case, we would expect peak \(\mathbf{G}\) to be the largest by far, and peak \(\mathbf{A}\) to be very small, whereas peak \(\mathbf{A}\) dominates above \(\omega_{b}/\omega_{c}=0.5\). This is primarily due to the increased damping of higher states [86], and the fact that, as mentioned before, one must take into account the effect of the cavity Ohmic bath in the definition of the transition rate, \(T_{jk}\propto\omega_{jk}^{2}|\mathcal{P}_{jk}|^{2}\). Indeed, this clarifies why transitions involving higher states (\(\mathbf{D}\), \(\mathbf{E}\), \(\mathbf{F}\), \(\mathbf{G}\)) are significantly more broadened than those involving just the lower states (\(\mathbf{A}\), \(\mathbf{B}\), \(\mathbf{C}\)), which appear very sharp in the spectra. Furthermore, transition \(\mathbf{C}\) has a lower \(|\mathcal{P}_{jk}|^{2}\) value than \(\mathbf{B}\), but since \(\mathbf{B}\) involves higher states, \(\mathbf{C}\) dominates. Even peak \(\mathbf{A}\), with a far lower \(|\mathcal{P}_{jk}|^{2}\) value than \(\mathbf{B}\), is larger due to the effect of damping. To sum up, this trend largely depends on the spectral bath function, so that one can expect more broadening at higher energy levels, as well as on the proper definition of the transition rate for a general (though Ohmic here) relevant bath.
From the eigenvalues in Fig. 5(b), we can identify the peaks with their transitions. As labelled in Fig. 5(c), the visible peaks when \(\eta=0.5\) are caused by transitions varying from \(|1\rangle\rightarrow|0\rangle\) to \(|6\rangle\rightarrow|0\rangle\). Hence, it is clear that the incoherent drive (even though weak) excites states up to at least \(|6\rangle\). Moreover, it appears that most of the peaks are due to relaxation to the ground state. This is partly because higher-order photons are already part of the lower hybrid states in the USC regime. The key transitions are summarized in Table 1. Now that these peaks have been identified with specific transitions, we next vary the coupling strength of the second atom to determine how the spectra are affected.
Next, we vary \(\eta\) from near the threshold of the USC regime (\(\eta=0.1\)), to the verge of the deep strong coupling regime (\(\eta=1\)), plotted in Fig. 7. At \(\eta=0.1\), we see a reasonable level of symmetry around \(\omega=\omega_{c}\), yet we also see the appearance of a new resonance which anticrosses with the lower polariton peak, near \(\omega/\omega_{c}\approx 0.9\). As we increase \(\eta\), this symmetry significantly reduces and the anticrossing peaks shift to lower frequencies, while the general Rabi splittings increase as expected, in addition to various Stark shifts. We also see reduced broadening (sharper peaks) with increasing \(\eta\) for the lower frequency peaks, as expected from the GME baths. At \(\eta=1\), we discern some of the background peaks becoming the main peaks, and the apparent anti-crossing at lower \(\eta\) appears to become a true crossing, i.e., near \(\omega/\omega_{c}\approx 1\). One of the peaks we can identify through the entire range is the one that appears forbidden (or highly reduced) when the second atom is near resonance. This peak can be identified as the \(\mathbf{C}\) (\(|2\rangle\rightarrow|0\rangle\)) transition. However, states \(|2\rangle\) and \(|3\rangle\) cross in energy between \(\eta=0.5\) and \(\eta=1\), and are degenerate up to about \(\omega_{b}/\omega_{c}=0.5\) at \(\eta=1\). For simplicity, we retain the label \(|2\rangle\) even after it crosses with \(|3\rangle\), so that this feature is indeed due to the same transition throughout.
\begin{table}
\begin{tabular}{c|c}
Peak & Transition \\
\hline
\(\mathbf{A}\) & \(|1\rangle\rightarrow|0\rangle\) \\
\(\mathbf{B}\) & \(|3\rangle\rightarrow|1\rangle\) \\
\(\mathbf{C}\) & \(|2\rangle\rightarrow|0\rangle\) \\
\(\mathbf{D}\) & \(|4\rangle\rightarrow|3\rangle\) \\
\(\mathbf{E}\) & \(|5\rangle\rightarrow|2\rangle\) \\
\(\mathbf{F}\) & \(|4\rangle\rightarrow|0\rangle\) \\
\(\mathbf{G}\) & \(|6\rangle\rightarrow|0\rangle\) \\
\end{tabular}
\end{table}
Table 1: Identification of the key transitions causing some of the peaks. Note that not all peaks are visible at all values of \(\eta\). At \(\eta=1\), there are other peaks present that we have not labeled.
Figure 6: **Cavity spectra for the GDM, using two different atoms that are both ultrastrongly coupled to the cavity.** (a) Cavity spectra computed using the quantum regression theorem with two USC atoms in the GDM, the first on resonance (\(\omega_{a}=\omega_{c}\)) and the second (\(\omega_{b}\)) swept through resonance. (b) Selected cavity spectra as in the lower left panel of (a) for \(\omega_{b}/\omega_{c}=\{0.5,1,1.5\}\). The labeled peaks in the \(\omega_{b}/\omega_{c}=0.5\) plot in (b) correspond to the transitions in Fig. 5(c). We also plot the spectra obtained in the absence of the second TLS. All baths are Ohmic and here \(\eta_{a}=\eta_{b}=0.5\). We use the following parameters throughout the section: \(\kappa=0.25g\), \(\gamma_{a}=\gamma_{b}=0.005g\), and \(P_{\mathrm{inc}}=0.01g\).
### Relative coupling strength variation
#### iv.3.1 Influence of amplitude variation of \(g_{b}\)
In the above investigations, we chose a few values of \(\omega_{b}\) to study in detail. We now extend this study further, by examining the role of \(g_{b}\) when it varies from zero to \(g_{a}\). First, in Fig. 8, we show how the spectra change when increasing the second atom's coupling strength at a few interesting values of the second atom's frequency. The main feature we can identify is a splitting of some peaks with increased \(g_{b}\) and, interestingly, the merging of some other peaks. Some peaks also shift in frequency without any other behavior appearing (Stark shifts).
In the first example, at \(\omega_{b}/\omega_{c}=0.5\) [panel (a) of Fig. 8], we see one peak splitting into three at low frequency. Since we have already identified the origin of these peaks and given them labels at \(g_{b}=g_{a}\), we can easily explain where this splitting comes from by examining the change in energy eigenvalues as we increase \(g_{b}\). In Fig. 9, we can see that states \(\ket{1}\) and \(\ket{2}\), initially near degenerate at \(g_{b}=0\), split in energy. Recalling from Table 1 that peaks \(\mathbf{A}\), \(\mathbf{C}\), and \(\mathbf{B}\) are due to transitions \(\ket{1}\rightarrow\ket{0}\), \(\ket{2}\rightarrow\ket{0}\), and \(\ket{3}\rightarrow\ket{1}\), respectively, we can see why these peaks decrease, increase, or remain roughly unchanged in energy respectively over the range of \(g_{b}\) considered here. Turning now to the peaks involving higher states, these are more complex due to the anticrossing of states \(\ket{4}\) and \(\ket{6}\) (labeled according to the order at \(g_{b}=g_{a}\), to be consistent with the previous sections) around \(g_{b}/g_{a}=0.8\). Considering \(g_{b}=g_{a}\), the energy differences in Fig. 9 do explain the peaks with the same transitions as in Table 1. Below the anticrossing, however, the peaks can only be explained by different transitions, namely switching \(\ket{4}\) with \(\ket{6}\).
Next, consider the case of the resonant second atom [panel (b) of Fig. 8]. Once again the bright left-most peak can be trivially associated with the \(\mathbf{A}\) transition. Similarly, the next peak, which almost merges with the first, is identified as transition \(\mathbf{B}\), as expected. Transition \(\mathbf{C}\) is not visible in this regime, but transition \(\mathbf{D}\) is seen as a broad peak at high \(g_{b}\). Transition \(\mathbf{E}\) is also not visible, but transition \(\mathbf{F}\) is visible throughout and transition \(\mathbf{G}\) is visible at high \(g_{b}\).
Finally, at \(\omega_{b}/\omega_{c}=1.5\) [panel (c) of Fig. 8], we again see a significant dressing of the resonances as we change \(g_{b}\), and all of the peaks can be identified as aligning with the transitions in Table 1 throughout. The differences here are that transition \(\mathbf{C}\) is strongly visible throughout and merges with \(\mathbf{F}\) at low \(g_{b}\) and that peaks \(\mathbf{D}\), \(\mathbf{E}\), and \(\mathbf{G}\) are not visible, except \(\mathbf{D}\) at higher \(g_{b}\).
Figure 8: **Influence of relative coupling amplitude variation on the GDM cavity spectra.** Spectra at selected \(\omega_{b}\), where we now sweep \(g_{b}\) from negligible coupling to the same level as the first atom, at \(\eta_{b}=\eta_{a}=0.5\).
Figure 7: **Influence of coupling strength on the GDM spectra.** Cavity spectra as in Fig. 6 for various \(\eta\) values, with both atoms in the same coupling regime (i.e., \(\eta=\eta_{a}=\eta_{b}\)). As before, we keep the first atom on resonance with the cavity and sweep the second atom resonance.
Figure 9: **Relative coupling amplitude variation in GDM eigenenergies.** Eigenvalues of the lowest seven states at selected \(\omega_{b}\) values, as a function of \(g_{b}\). The parity of the states is again given by the line color: blue (dark) represents even parity and orange (light) represents odd parity.
Influence of phase variation of \(g_{b}\)
The transition dipoles of the two TLSs might not necessarily point in the same direction, e.g., if the atomic dipoles are anisotropic and/or the field polarization differs at the two atom locations. This changes the nature of the couplings in our GDM from purely real to generally complex quantities. Such effects have implications in real-world nanoengineered photonic systems, where they can be used to manipulate quantum states and control quantum optical interference effects [87].
To investigate the effects of a phase-dependent GDM, we next allow \(g_{b}\) to be complex and vary its phase. We take \(g_{b}=g_{a}\exp[\mathrm{i}\pi\phi]\) and sweep \(\phi\) from \(0\) to \(1\), similar to Fig. 4(b). In Fig. 10, we show the spectra for \(g_{b}\) ranging from \(g_{a}\) to \(\mathrm{i}g_{a}\) to \(-g_{a}\). Apart from all results being gauge independent, we mention that all three of the 3D spectra (contours) in Fig. 10 can be simulated in a matter of minutes on a standard desktop computer, where we typically use 200 bare photon states and 12 dressed states. Moreover, a single 2D spectrum can be calculated at _arbitrary_ coupling strengths, including complex coupling from the second atom, typically in a few tens of seconds. Thus the dressed-state truncation is not only necessary for the GME, but it also considerably reduces the numerical Hilbert space relative to a bare-state basis.
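For illustration, one natural Hermitian way to implement such a complex coupling in the dipole-gauge interaction term is sketched below. This is our own construction (it reduces to \(\mathrm{i}g_{b}(a^{\dagger}-a)\sigma_{x,b}\) for real \(g_{b}\)); the direct atom-atom term and the corrected photon operator inherit the phase through \(\eta_{b}\) analogously:

```python
sm_b = sp_b.dag()                               # sigma_b^- (lowering)
phi = 0.5                                       # relative phase parameter
gb_c = ga * np.exp(1j * np.pi * phi)            # g_b = g_a exp(i pi phi)

# Hermitian interaction term with complex coupling (our construction):
H_int_b = 1j * (a3.dag() - a3) * (gb_c * sm_b + np.conj(gb_c) * sm_b.dag())
```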
As observed above in Fig. 6(a), the gauge-fixed model produces peaks that are absent without the gauge correction, or vice versa. Indeed, at \(\omega_{b}/\omega_{c}=1.5\), we can identify peak \(\mathbf{C}\) as persisting throughout the change in \(\phi\). Conversely, at \(\omega_{b}/\omega_{c}=0.5\), peak \(\mathbf{C}\) is completely absent at \(\phi=0.5\) (but persists throughout the range without the gauge correction [76], not shown here). Extending the second atom's coupling strength to a complex quantity increases the separation between the first three peaks. The broadening is quite low (comparatively) throughout, but the phase change does act to increase the strength of the \(\mathbf{A}\) transition at \(\omega_{b}/\omega_{c}=0.5\), and increases the broadening of peak \(\mathbf{C}\) at \(\omega_{b}/\omega_{c}=1.5\). We finally note that the spectra are symmetric about \(\phi=0.5\), meaning that the spectra are invariant to changes in the sign of the real part of the coupling strength. All these features are in accordance with the eigenenergy lines in Fig. 4(b).
## V Conclusions
We have presented a gauge-invariant GME approach to model two atoms in ultrastrong coupling regimes of open-system cavity-QED, where the atoms are modelled as fermionic TLSs. We first analyzed the applicability of a sensor atom approach for computing the detected spectra from the cavity-QED system. This is an alternative approach to using the quantum regression theorem, allowing for the computation of spectra even when driving with extremely short pulses or with multiple time-dependent fields, when the spectral solution from the quantum regression theorem may break down. This two-atom model also provides confirmation of the gauge independence of the general theory of light-matter interaction in the USC regime [34] when more than one atom is included in the system, which is a non-trivial task, even without including dissipation and optical excitations.
Using incoherent driving, we demonstrated the ability of the sensor approach to produce spectra that match well with the quantum regression theorem results, when using spectrally flat baths. We also showed the influence on the spectra when changing the bath function for both the cavity and atomic baths. We compared the Ohmic and flat baths for each case and demonstrated that the spectra only agree well when the atomic baths are flat. This is, however, not a realistic model for real-world detection over very large frequencies.
For the main part of the article, we then presented results obtained using a generalized Dicke model in the limit of two atoms. Previous studies of the Dicke model have used identical atoms, only varying the properties of both at the same time. However, it is practically impossible to produce this situation in a physical lab environment. Motivated by this fact, our studies presented results obtained with dissimilar atoms, extending previous works. We first showed that our model produces gauge-invariant results when including the gauge correction terms. We also showed that the gauge-corrected (correct) spectra are much richer than those of naive models, with more striking features.
We then examined the effect of allowing the resonant frequency of the second atom to vary, and showed that there are significant peaks visible off-resonance that cannot be seen when the second atom is on-resonance with the rest of the system. We demonstrated that this effect holds for a large range of normalized coupling strengths even down to the verge of USC. This shows that this first extension, namely the ability to model two atoms with dissimilar resonant frequencies, has important implications not just in the usual USC regime. We also identified the main transitions for these visible spectral peaks.
Next, we chose a few values of the second atom's frequency, including one resonant with the first atom, to explore the second extension of our model, where we changed the coupling strength of the second atom relative to the first. We observed that some of the separate peaks can only
Figure 10: **Influence of relative coupling phase variation in GDM cavity spectra.** Spectra at selected \(\omega_{b}\), where we now sweep the phase of the second atom. Notably, when we lower the coupling strength below the USC regime \(\eta_{a}\leq 0.1\), there is no dependence on the phase.
be identified as separate peaks due to the coupling of the second atom. Indeed, a single peak without the second atom's coupling splits into three when the coupling is introduced in one of the regimes considered. Finally, we allowed the second atom's coupling to have a phase difference relative to the first, and showed how the relative phase can substantially tune the spectral energy levels.
###### Acknowledgements.
This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the National Research Council of Canada (NRC), the Canadian Foundation for Innovation (CFI), and Queen's University, Canada. S.H. acknowledges the Japan Society for the Promotion of Science (JSPS) for funding support through an Invitational Fellowship. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], the JSPS [via the Grants-in-Aid for Scientific Research (KAKENHI) Grant No. JP20H00134], the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069), and the Foundational Questions Institute Fund (FQXi) via Grant No. FQXi-IAF19-06.
|
2302.03769 | Computational capability for physical reservoir computing using a
spin-torque oscillator with two free layers | A numerical analysis on the computational capability of physical reservoir
computing utilizing a spin-torque oscillator with two free layers is reported.
Conventional spintronics devices usually consist of two ferromagnets, where the
direction of magnetization in one layer, called the free layer, can move while
that of the other, the reference layer, is fixed. Recently, however, devices
with two free layers, where the reference layer is replaced by another free
layer, have been developed for various practical applications. Adding another
free layer drastically changes the dynamical response of the device through the
couplings via the spin-transfer effect and the dipole magnetic field. A
numerical simulation of the Landau-Lifshitz-Gilbert equation and statistical
analyses of the Lyapunov exponent and the synchronization index reveal the
appearance of an amplitude-modulated oscillation and chaos in the oscillators
with two free layers. Such complex dynamics qualitatively change the
computational capability of physical reservoir computing because the
computational resource is dynamics of the physical system. An evaluation of the
short-term memory capacity clarifies that oscillators with two free layers have
a larger capacity than those of conventional oscillators. An enhancement in
capacity near the edge of echo state property, i.e., the boundary between zero
and finite synchronization index, is also found. | Terufumi Yamaguchi, Sumito Tsunegi, Kohei Nakajima, Tomohiro Taniguchi | 2023-02-07T22:00:18Z | http://arxiv.org/abs/2302.03769v1 | Computational capability for physical reservoir computing using a spin-torque oscillator with two free layers
###### Abstract
A numerical analysis on the computational capability of physical reservoir computing utilizing a spin-torque oscillator with two free layers is reported. Conventional spintronics devices usually consist of two ferromagnets, where the direction of magnetization in one layer, called the free layer, can move while that of the other, the reference layer, is fixed. Recently, however, devices with two free layers, where the reference layer is replaced by another free layer, have been developed for various practical applications. Adding another free layer drastically changes the dynamical response of the device through the couplings via the spin-transfer effect and the dipole magnetic field. A numerical simulation of the Landau-Lifshitz-Gilbert equation and statistical analyses of the Lyapunov exponent and the synchronization index reveal the appearance of an amplitude-modulated oscillation and chaos in the oscillators with two free layers. Such complex dynamics qualitatively change the computational capability of physical reservoir computing because the computational resource is the dynamics of the physical system. An evaluation of the short-term memory capacity clarifies that oscillators with two free layers have a larger capacity than that of conventional oscillators. An enhancement in capacity near the edge of the echo state property, i.e., the boundary between zero and finite synchronization index, is also found.
## I Introduction
Recent developments in spintronics applications, such as brain-inspired computing [1], have led to a variety of device structures and materials [2]. For example, giant-magnetoresistive (GMR) [3; 4; 5] and tunnel-magnetoresistive (TMR) [6; 7; 8; 9; 10; 11; 12] structures that include ferromagnets, called free and reference layers, have been used in magnetic sensors and memories [13]. The magnetization in the free layer can change its direction when a magnetic field and/or electric current is applied to it [14; 15]. On the other hand, the reference layer often consists of two ferromagnets separated by a thin non-magnetic spacer, and the antiferromagnetic interlayer exchange coupling between them strongly fixes their magnetization directions. Moreover, GMR and TMR devices with two free layers, where the reference layer is replaced by another ferromagnet without pinning effects, have recently been investigated for new applications such as high-density magnetic recording [16], probabilistic computing [17], and millimeter-wave generators [18]. In such devices, the coupled dynamics of the magnetizations of the two free layers that arise via the spin-transfer effect and the magnetic dipole field provide new functionalities.
A critical difference in the magnetization dynamics between a GMR/TMR device with a single free layer and a device with two free layers is the appearance of chaos in the latter structure because of the increased dynamical degrees of freedom [19; 20; 21; 22]. In particular, devices with two free layers might be applicable to physical reservoir computing [23; 24; 25; 26; 27], which is another new application of spintronics technology. Physical reservoir computing is a kind of recurrent neural network in which a reservoir, which is a physical nonlinear system, performs a computational task; for example, a spin-torque oscillator (STO) has been applied to the task of human voice recognition [28]. Since physical reservoir computing utilizes dynamical output signals from a physical system as a computational resource, recent research has viewed the relation between the computational capability and the dynamical state of the physical system as being of central importance [27]. Such investigations in spintronics [29; 30] have recently focused on computing near the chaotic state because the edge of chaos sometimes provides a boundary of high computational capability [31; 32]. Therefore, due to the appearance of chaos, the computational performance of physical reservoir computing using an STO with two free layers might also differ from that of an STO with a single free layer.
In this paper, we evaluate the computational performance of STOs with two free layers by performing numerical simulations of the Landau-Lifshitz-Gilbert (LLG) equation. We consider the three structures, schematically
shown in Fig. 1. The first one, in Fig. 1(a), is a conventional GMR or TMR system, which consists of one free layer and one reference layer. The second structure, in Fig. 1(b), consists of two free layers, where the magnetizations of both layers can change their directions through the spin-transfer effect and the dipole coupling. The third structure, in Fig. 1(c), includes three ferromagnets; two are free layers and one is a reference layer. We find that the first structure shows a saturation to a fixed point, while the second and third structures show a wide variety of dynamics, such as amplitude-modulated oscillation and chaos. These dynamical states are classified systematically by measuring the Lyapunov exponent and synchronization index, which are measures for identifying chaotic dynamics and the echo state property. In addition, the short-term memory capacity is evaluated as a figure of merit of the computational capability. It is found that the STOs with two free layers have larger capacities than that of the STO with a single free layer. An enhancement in capacity near the edge of the echo state property, i.e., the boundary between zero and finite synchronization index, is also observed in the STOs with two free layers.
The paper is organized as follows. Section II examines the dynamical state of the magnetization, the short-term memory capacity, the Lyapunov exponent, and the synchronization index of an STO with a single free layer, while Secs. III and IV examine those features of an STO consisting of two free layers and of an STO with two free layers and one reference layer. Section V is the conclusion.
## II STO with single free layer
Here, we analyze the dynamics of a conventional STO consisting of a free layer and a reference layer using the LLG equation, and we summarize the methods for evaluating the short-term memory capacity, Lyapunov exponent, and synchronization index. We use the macrospin LLG equation based on the model in Ref. [20], where the accuracy of the macrospin model was verified by comparison with experiment [16]. The results will be compared to those of STOs with two free layers in Secs. III and IV.
### LLG equation of STO with single free layer
The STO is schematically shown in Fig. 1(a). The unit vectors pointing in the magnetization direction of these layers are denoted as \(\mathbf{m}\) and \(\mathbf{p}\), respectively. The magnetization dynamics in the free layer are described by the LLG equation,
\[\frac{d\mathbf{m}}{dt}=-\gamma\mathbf{m}\times\mathbf{H}+\gamma H_{\mathrm{s}} \mathbf{m}\times(\mathbf{p}\times\mathbf{m})+\alpha\mathbf{m}\times\frac{d \mathbf{m}}{dt} \tag{1}\]
where the magnetic field \(\mathbf{H}\) consists of the shape magnetic anisotropy field and an external magnetic field \(H_{\mathrm{appl}}\) applied along the perpendicular (\(z\)) direction,
\[\mathbf{H}=\begin{pmatrix}-4\pi MN_{x}m_{x}\\ -4\pi MN_{y}m_{y}\\ H_{\mathrm{appl}}-4\pi MN_{z}m_{z}\end{pmatrix}. \tag{2}\]
The demagnetization coefficients are denoted as \(N_{\ell}\) (\(N_{x}+N_{y}+N_{z}=1\)). The spin-transfer torque strength is
\[H_{\mathrm{s}}=\frac{\hbar\eta j}{2e(1+\lambda\mathbf{m}\cdot\mathbf{p})Md}, \tag{3}\]
where \(M\) and \(d\) are the saturation magnetization and the thickness of the free layer. The spin polarization of the current density \(j\) is \(\eta\), while \(\lambda\) provides the spin-transfer torque asymmetry [14]. A positive current corresponds to a flow of electrons from the reference to the free layer. The values of the parameters are \(M=1300\) emu/cm\({}^{3}\), \(\eta=0.30\), \(\lambda=\eta^{2}\), \(d=2\) nm, \(\gamma=1.764\times 10^{7}\) rad/(Oe s), \(\alpha=0.010\), and \(H_{\mathrm{appl}}=1.0\) kOe. The demagnetization coefficients are [33; 34]
Figure 1: Schematic illustrations of the device structures studied in this paper. The unit vectors pointing in the magnetization directions of the free and reference layers are denoted as \(\mathbf{m}\) and \(\mathbf{p}\). A suffix \(i=1,2\) is added to \(\mathbf{m}\) when the device includes two free layers. (a) Conventional GMR/TMR structure consisting of one free and one reference layer. The output signal through the GMR/TMR effect is proportional to \(\mathbf{m}\cdot\mathbf{p}=m_{x}\). (b) Device consisting of two free layers, \(\mathrm{F}_{1}\) and \(\mathrm{F}_{2}\). The output signal is proportional to \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\). (c) Device consisting of two free layers, \(\mathrm{F}_{1}\) and \(\mathrm{F}_{2}\), and one reference layer. The output signal is proportional to \(\mathbf{m}_{1}\cdot\mathbf{p}\), assuming that the GMR/TMR effect between \(\mathrm{F}_{1}\) and reference layer is dominant. The directions of positive current density and the external magnetic field are indicated by arrows.
\[N_{z}=\frac{1}{\tau}\left\{\frac{3}{4\pi}-\frac{3}{4\pi}\sqrt{1+\tau^{2}}\left[\tau^ {2}\mathsf{K}\left(\frac{1}{\sqrt{1+\tau^{2}}}\right)+\left(1-\tau^{2}\right) \mathsf{E}\left(\frac{1}{\sqrt{1+\tau^{2}}}\right)\right]+\tau\right\}, \tag{4}\]
and \(N_{x}=N_{y}=(1-N_{z})/2\), where \(\tau=d/(2r)\) and \(r=50\) nm is the radius of the free layer. Here, we assume that the layer has a cylinder shape. The first and second kinds of complete elliptic integral with the modulus \(k\) are \(\mathsf{K}(k)=\int_{0}^{\pi/2}d\phi/\sqrt{1-k^{2}\sin^{2}\phi}\) and \(\mathsf{E}(k)=\int_{0}^{\pi/2}d\phi\sqrt{1-k^{2}\sin^{2}\phi}\). Furthermore, we assume that the magnetization in the reference layer points to an in-plane (\(x\)) direction, i.e., \(\mathbf{p}=\hat{\mathbf{e}}_{x}\), where \(\hat{\mathbf{e}}_{\ell}\) is the unit vector in the \(\ell\) (\(\ell=x,y,z\)) direction.
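To make the model above concrete, the following minimal Python sketch integrates Eq. (1) in the explicit Landau-Lifshitz form, with the demagnetization coefficient of Eq. (4) evaluated numerically. All function and variable names are illustrative rather than taken from the original paper; the usual mixed practical/CGS spintronics units are assumed (fields in Oe, magnetization in emu/cm\({}^{3}\), current density in A/cm\({}^{2}\)), and the conversion of the implicit Gilbert term, \(d\mathbf{m}/dt=(\boldsymbol{\tau}+\alpha\mathbf{m}\times\boldsymbol{\tau})/(1+\alpha^{2})\), is a standard identity.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipe, ellipk

GAMMA, ALPHA = 1.764e7, 0.010       # gyromagnetic ratio [rad/(Oe s)], Gilbert damping
M, D, R = 1300.0, 2e-7, 50e-7       # magnetization [emu/cm^3]; thickness, radius [cm]
ETA, LAM = 0.30, 0.30**2            # spin polarization and torque asymmetry
H_APPL = 1.0e3                      # applied field along z [Oe]
HBAR, E = 1.0546e-27, 1.602e-19     # [erg s], [C]

def demag_nz(tau):
    """N_z of Eq. (4); scipy's ellipk/ellipe take the parameter m = k^2."""
    m = 1.0 / (1.0 + tau**2)
    s = np.sqrt(1.0 + tau**2)
    return (3/(4*np.pi) - 3/(4*np.pi)*s*(tau**2*ellipk(m) + (1 - tau**2)*ellipe(m)) + tau) / tau

NZ = demag_nz(D / (2*R))
NX = NY = (1.0 - NZ) / 2.0
P = np.array([1.0, 0.0, 0.0])       # reference-layer magnetization, p = e_x

def llg_rhs(t, m, j):
    """Explicit form of Eq. (1); j is the current density in A/cm^2."""
    h = np.array([-4*np.pi*M*NX*m[0],
                  -4*np.pi*M*NY*m[1],
                  H_APPL - 4*np.pi*M*NZ*m[2]])                  # Eq. (2)
    hs = HBAR*ETA*j / (2*E*(1 + LAM*np.dot(m, P))*M*D)          # Eq. (3)
    torque = -GAMMA*np.cross(m, h) + GAMMA*hs*np.cross(m, np.cross(P, m))
    return (torque + ALPHA*np.cross(m, torque)) / (1 + ALPHA**2)

# start near the zero-current steady state, m_z = H_appl / (4 pi M N_z)
mz0 = H_APPL / (4*np.pi*M*NZ)
m0 = np.array([np.sqrt(1 - mz0**2), 1e-3, mz0])
m0 /= np.linalg.norm(m0)
sol = solve_ivp(llg_rhs, (0.0, 50e-9), m0, args=(1e8,), max_step=1e-12)  # j0 = 100 MA/cm^2
print("final m:", sol.y[:, -1])     # the text reports saturation toward |m_y| ~ 1 here
```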
Figures 2(a) and 2(b) show typical magnetization dynamics for low (\(j_{0}=100\) MA/cm\({}^{2}\)) and high (\(j_{0}=450\) MA/cm\({}^{2}\)) current densities. Starting from the steady state in the absence of a current, where \(m_{z}=H_{\text{appl}}/(4\pi MN_{z})\), the magnetization eventually saturates to fixed points. When the magnitude of the current is small, the magnetization points to the direction orthogonal to the \(x\) axis, i.e., \(m_{x}\) becomes zero, for the following reason. The steady point is determined by the condition \(d\mathbf{m}/dt=\mathbf{0}\), which, in the present case, means that \(\mathbf{H}-H_{\text{s}}\mathbf{p}\times\mathbf{m}=\mathbf{0}\). Since \(N_{z}\simeq 1\), \(N_{x},N_{y}\ll 1\) and \(4\pi MN_{z}\gg H_{\text{appl}}\), the magnetic field \(\mathbf{H}\) mainly points in the \(z\) direction. Then, because \(\mathbf{p}\) points to the \(x\) direction, the magnetization \(\mathbf{m}\) should point in the \(y\) direction in order to satisfy \(\mathbf{H}-H_{\text{s}}\mathbf{p}\times\mathbf{m}=\mathbf{0}\). Thus, the magnetization saturates to a fixed point of \(|m_{y}|\simeq 1\), as shown in Fig. 2(a). When the current magnitude becomes large, the magnetization moves to different fixed points, where, because of the strong spin-transfer torque, the magnetization becomes close to parallel or antiparallel to the magnetization in the reference layer, depending on the sign of the current. In the present system, a positive current prefers the parallel alignment of \(\mathbf{m}\) and \(\mathbf{p}\), and therefore, the magnetization saturates to a fixed point with \(m_{x}>0\) for the case shown in Fig. 2(b).
In the next section, we study magnetization dynamics in the presence of a series of random input signals. The dynamical response to such input signals was used in a recognition task in physical reservoir computing [35, 36]. There, it is necessary to specify the quantity to be used as the output signal. The output signal from the present STO depends on the magnetization direction in the free layer along the direction of the magnetization in the reference layer, i.e., \(\mathbf{m}\cdot\mathbf{p}\), due to the GMR/TMR effect. Therefore, we choose \(\mathbf{m}\cdot\mathbf{p}=m_{x}\) as the output signal for computing.
### Short-term memory capacity
We will quantify the computational capability of the STO by its short-term memory capacity. The short-term memory capacity corresponds to, roughly speaking, the number of input data a physical reservoir can recognize. Therefore, a large short-term memory capacity corresponds to a high computational performance; see also Sec. V, where the relation between the short-term memory capacity and the total computational capability is briefly explained. In the present paper, we suppose a binary pulse-input signal \(b_{k}=0,1\) (\(k=1,2,\cdots\)) [35, 36, 37] with a pulse width of \(t_{\text{p}}\), which is added to the current density as
\[j=j_{0}\left(1+\nu b_{k}\right), \tag{5}\]
where the dimensionless parameter \(\nu\) quantifies the strength of the input signal, while \(j_{0}\) is the current density for \(b_{k}=0\). In the following, we refer to \(j_{0}\) as the current density for simplicity, while \(j\) is referred to as the total current density. The suffix \(k\) distinguishes the order of the input data. The values of \(\nu\) and \(t_{\text{p}}\) are assumed to be 0.2 and 1.0 ns, respectively. The input signal to the current changes the
Figure 3: Dependence of short-term memory capacity on current density for an STO with a single free layer.
Figure 2: Examples of dynamics of the in-plane (\(m_{x}\)) and out-of-plane (\(m_{z}\)) components of magnetization for current densities of (a) 100 and (b) 450 MA/cm\({}^{2}\) in an STO with a single free layer.
magnetization dynamics through modulation of the spin-transfer torque. The details of the evaluation method are summarized in Appendix A (see also, for example, Ref. [36]).
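As a hedged illustration, the pulse-modulated current of Eq. (5) can be generated as a piecewise-constant function of time; the helper below (hypothetical names) can be passed to the `llg_rhs` sketch above.

```python
import numpy as np

rng = np.random.default_rng(0)
J0, NU, T_P = 1.0e8, 0.2, 1.0e-9      # base density [A/cm^2], input strength, pulse width [s]
b = rng.integers(0, 2, size=1000)     # random binary input b_k

def j_of_t(t):
    """Total current density of Eq. (5), piecewise constant over each pulse."""
    k = min(int(t // T_P), b.size - 1)
    return J0 * (1.0 + NU * b[k])
```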
Figure 3 summarizes the current dependence of the short-term memory capacity. High performance is obtained when the magnitude of the current is relatively small and the magnetization points in a direction orthogonal to the \(x\) axis; see also Fig. 2. The maximum value of the short-term memory capacity is \(4.02\) at \(j_{0}=-168\) MA/cm\({}^{2}\). The step-like behavior in the large current region is similar to that observed in a different STO [38]. When the current density \(j_{0}\) is zero, the short-term memory capacity is zero because, according to Eq. (5), the total current is zero even if \(b_{k}\) is finite, and thus, the input signal does not cause any change in the magnetization state. In the following, we evaluate the Lyapunov exponent of the STO and show that such a current dependence of the short-term memory capacity relates to a relaxation time of the magnetization to the fixed point.
### Edges of chaos and echo state property
Since we aim to reveal the relation between the computational capability and dynamical state and quantify this capability by the short-term memory capacity, it becomes necessary to introduce quantities distinguishing the dynamical state. We will use the Lyapunov exponent [39; 21; 40] and the synchronization index [29] for this purpose. While their evaluation methods are described in Secs. II.4 and II.5, here, let us briefly explain their roles and differences.
The Lyapunov exponent is the inverse of the time scale of the expansion of the distance between two solutions to the LLG equation with slightly different initial conditions [41]. While the number of Lyapunov exponents is the same as the number of dynamical degrees of freedom, the Lyapunov exponent considered in this paper corresponds to the largest Lyapunov exponent; see also Appendix B. When the Lyapunov exponent is negative (positive), the difference between the states decreases (increases) relative to that of the initial state as time increases. Therefore, a system with a positive Lyapunov exponent has a high sensitivity to its initial state. Typical dynamics for negative, zero, and positive Lyapunov exponents are a saturation to a fixed point, a limit-cycle oscillation, and chaos, respectively [41].
The Lyapunov exponent often changes value and even sign when the system parameters are changed [41]. The boundary between zero and positive Lyapunov exponents is called the edge of chaos. There are methods to evaluate the Lyapunov exponent from numerical simulations of the equation of motion [42; 43; 44; 45; 46]. There are also statistical methods that evaluate the Lyapunov exponent from time-series data [47; 48; 49; 50]. While statistical analyses have frequently been used for analyzing experimental data, they have restrictions; for example, some [49; 50] can only evaluate positive Lyapunov exponents, i.e., only the edge of chaos can be identified, while the boundary between negative and zero Lyapunov exponents cannot be estimated. The present paper quantifies the Lyapunov exponent through numerical simulations; thus, it identifies all possible values of the exponents, i.e., negative, zero, and positive.
The synchronization index [29] is a long-time average of the distance between two independent samples of the magnetization which obey the same LLG equation but have slightly different initial conditions. The synchronization index becomes zero when the dynamical state becomes independent of the initial state as time goes on. In an autonomous system, the synchronization index tends to be zero when the dynamics saturate to a fixed point. An example is magnetization switching, where the magnetization eventually points in a certain direction. On the other hand, when the magnetization is, for example, in an auto-oscillation state, the difference between the states in the oscillation phases will never decrease below the initial difference because the two magnetizations oscillate with the same frequency. In such a case, the synchronization index remains finite.
The situation changes when a time-dependent signal is injected. For example, in the case of a periodic input signal, e.g., as in forced synchronization [51], the phase of the magnetization is fixed with respect to that of the periodic input signal. In this case, the synchronization index becomes zero even if the magnetizations are in an oscillating state. Another example of a time-dependent input signal is a random signal, which is used in physical reservoir computing, as mentioned in Sec. II.2. In this case, nonlinear oscillators often show noise-induced synchronization [52; 53; 54; 55; 56; 57], and the oscillating state of the magnetization eventually becomes independent of the initial state. Then, the synchronization index becomes zero. In particular, noise-induced synchronization is of interest in physical reservoir computing because this synchronization behavior in a physical system guarantees the echo state property [58]. The echo state property is a necessary condition guaranteeing computational reproducibility, wherein the dynamical state of the physical reservoir becomes independent of the initial state by injecting random input signals as washout (see also Appendix A); therefore, the physical reservoir always provides the same answer for the same task. The boundary between zero and finite synchronization indexes will be called the edge of the echo state property.
One might imagine that the edge of the echo state property can be identified from the boundary between negative and zero Lyapunov exponents. Here, saturation to a fixed point is an example of magnetization dynamics corresponding to a negative Lyapunov exponent and auto-oscillation to a zero Lyapunov exponent; according to the above discussion, saturation to a fixed point should lead to a zero synchronization index and auto-oscillation to a nonzero index. Therefore, one might imagine that it is unnecessary to evaluate the synchronization index.
However, in other cases, knowing the Lyapunov exponent is not sufficient to clarify the edge of the echo state property for the following reasons. First, the Lyapunov exponent in this study is, strictly speaking, the maximum Lyapunov exponent, which corresponds to the expansion rate in a direction along which the difference between the initial states grows the most. Second, in many cases, only some of the dynamical variables are used for computing. For example, the dynamical variable used for computing sometimes has the echo state property even though the maximum Lyapunov exponent is zero. Such an example will be shown in Sec. III below.
From the above it is clear that the Lyapunov exponent and the synchronization index are similar but slightly different quantities. The former determines the edge of chaos, while the latter determines the edge of the echo state property. A periodic oscillation state is an example of a dynamical state separating these edges, which does not have the echo state property and is non-chaotic. While the computational capability of the optical physical reservoir computing presented in Ref. [32] is maximized at the edge of chaos, Ref. [27] argues that chaos is not necessary for the computational capability to be enhanced; rather, the edge of the echo state property often corresponds to an optimization condition. Moreover, although these two edges might overlap in some cases [29], this is not guaranteed to happen in all cases. In the present paper, therefore, we estimated these edges from the Lyapunov exponent and the synchronization index, and studied their relation to the computational capability.
### Lyapunov exponent
Let us study the Lyapunov exponent of the present STO (see also Appendix B). The Lyapunov exponent is defined as
\[\varLambda=\lim_{N_{\varLambda}\rightarrow\infty}\frac{1}{N_{\varLambda}}\sum_{i=1}^{N_{\varLambda}}\frac{1}{\Delta t}\ln\frac{\mathscr{D}(t_{i})}{\epsilon}, \tag{6}\]
where \(\Delta t\) is the time increment of the LLG equation and \(N_{\varLambda}\) is the number of time steps used in the average. Here, \(\epsilon\) is the distance between two solutions of the LLG equation at every time step, while \(\mathscr{D}\) is the distance after the time increment (\(\Delta t\)) passes. We will use the relative angle of the two solutions as the distance \(\mathscr{D}(t)\); see Appendix B. Note that \(\mathscr{D}/\epsilon\) is the expansion rate of the distance \(\epsilon\), while the Lyapunov exponent describes an exponential evolution of the distance between the two solutions, \(\mathscr{D}\sim\epsilon e^{\varLambda t}\).
The dependence of the Lyapunov exponent on the current is summarized in Fig. 4(a). The exponent is negative throughout the entire current region because the magnetization moves to a fixed point. Note that the past information can only be recognized when the output of the system changes with respect to the input signal; if the physical system does not react to an input signal and thus, the output signal is constant, we cannot identify the input signal from the output signal. Thus, the short-term memory capacity is often large when the magnitude (absolute value) of the Lyapunov exponent is small. This is because a small Lyapunov exponent corresponds to a slow saturation to a fixed point, wherein the history of the input signal is well reflected in the dynamics. When the Lyapunov exponent is a large negative value, the magnetization immediately saturates to a fixed point, and the output signal immediately saturates to a constant. Therefore, the short-term memory capacity is small when the exponent is a large negative value. The Lyapunov exponent is zero when the current density \(j_{0}\) is zero because, as mentioned above, the input signal does not drive any dynamics in this case, and thus, the magnetization stays at a fixed point.
As can be seen from the above, there is a correspondence between the computational capability, quantified by the short-term memory capacity, and the dynamical state, characterized by the Lyapunov exponent. However, since the Lyapunov exponent in this case is only negative, the above results reveal only part of the correspondence. Note that, according to a mathematical principle, namely the Poincare-Bendixson theorem, the Lyapunov exponent of an STO with a single free layer is negative or zero [59]. Adding another free layer breaks this mathematical restriction and leads to the appearance of chaos. In so doing, a correspondence between the computational capability and the dynamical state of the magnetization will appear.
### Synchronization index
Here, let us study the synchronization index of the STO with a single free layer (see also Appendix C). The synchronization index is defined as
\[\mathcal{S}=\lim_{N_{\mathrm{s}}\rightarrow\infty}\frac{1}{N_{\mathrm{s}}} \sum_{i=1}^{N_{\mathrm{s}}}\mathcal{D}(t_{i}), \tag{7}\]
where \(N_{\mathrm{s}}\) is the number of samples used to evaluate the long-time average of the distance \(\mathcal{D}(t)\) between two solutions of Eq. (1) with slightly different initial conditions.
Figure 4: (a) Dependence of Lyapunov exponent on current density for an STO with a single free layer. (b) Time evolution of synchronization index at current density of 100 MA/cm\({}^{2}\). The inset shows that for 450 MA/cm\({}^{2}\).
Note that the distance \(\mathcal{D}\) here is different from \(\mathscr{D}\) used in Sec. II.4 in the following sense. In the evaluation of the Lyapunov exponent, a perturbation with magnitude \(\epsilon\) is incremented at every time step by \(\Delta t\); see Appendix B. On the other hand, in the evaluation of the synchronization index, a perturbation is added only to the initial state; see Appendix C.
We should also note that the distance \(\mathcal{D}\) here is defined in terms of the dynamical variable used as the output signal; i.e., \(\mathcal{D}=|m_{x}^{(1)}-m_{x}^{(2)}|\), where \(m_{x}^{(1)}\) and \(m_{x}^{(2)}\) are the solutions of \(m_{x}\) obtained from the LLG equation, Eq. (1), with slightly different initial conditions. In Secs. III.2 and IV.2, \(\mathcal{D}\) will be defined in different ways because the output signals from the different STOs depend on different variables. The definition of the synchronization index is different from that in the previous paper [29]. For example, in Ref. [29], the distance \(\mathcal{D}\) is measured in the whole phase space, and is not evaluated from \(m_{x}\) only. The reason why we define the synchronization index differently from the previous work relates to the fact that not all the variables contribute to the output signal used for computing, as mentioned in Sec. II.3. The difference in the definition of \(\mathcal{D}\) is not important in this section, but it will be important in Sec. III.2; see also Appendix C.
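A minimal sketch of this output-variable-based synchronization index, reusing the `llg_rhs` and `j_of_t` helpers sketched earlier (illustrative names), might look as follows: two copies of the oscillator receive the same random input but start slightly apart, and the running average of \(\mathcal{D}=|m_{x}^{(1)}-m_{x}^{(2)}|\) gives the temporal index of Eq. (7).

```python
import numpy as np
from scipy.integrate import solve_ivp

def driven_rhs(t, m):
    # same macrospin model, but with the pulse-modulated current of Eq. (5)
    return llg_rhs(t, m, j_of_t(t))

t_eval = np.arange(0.0, 200e-9, 1e-11)
eps = 1e-4
m0a = np.array([0.9, 0.1, 0.06]); m0a /= np.linalg.norm(m0a)
m0b = m0a + np.array([0.0, eps, 0.0]); m0b /= np.linalg.norm(m0b)

ya = solve_ivp(driven_rhs, (0.0, t_eval[-1]), m0a, t_eval=t_eval, max_step=1e-12).y
yb = solve_ivp(driven_rhs, (0.0, t_eval[-1]), m0b, t_eval=t_eval, max_step=1e-12).y

dist = np.abs(ya[0] - yb[0])                                # D(t) = |m_x^(1) - m_x^(2)|
s_temporal = np.cumsum(dist) / np.arange(1, dist.size + 1)  # running average S_{N_s}
print("synchronization index estimate:", s_temporal[-1])
```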
Figure 4(b) shows the time evolution of the temporal synchronization index \(\mathcal{S}_{N_{\mathrm{s}}}=(1/N_{\mathrm{s}})\sum_{i=1}^{N_{\mathrm{s}}}\mathcal{D}(t_{i})\) for a low current density, \(j_{0}=100\) MA/cm\({}^{2}\), where \(\mathcal{S}\) in Eq. (7) corresponds to \(\lim_{N_{\mathrm{s}}\rightarrow\infty}\mathcal{S}_{N_{\mathrm{s}}}\). The synchronization index tends to zero as time increases, as expected from the dynamics shown in Fig. 2, where the magnetization saturates to a fixed point. Saturation to zero is also observed for a large current density of 450 MA/cm\({}^{2}\), as shown in the inset of Fig. 4(b). We observe similar behavior for other current densities, and find that the synchronization index is zero over a wide range of current density (not shown). These results indicate that the output signal (\(\propto m_{x}\)) eventually becomes independent of its initial state and the STO has the echo state property.
## III STO consisting of two free layers
Now let us examine the STO shown in Fig. 1(b). We will show that, unlike the results in Sec. II, chaos appears in some parameter regions.
### LLG equation of STO with two free layers
The STO consists of two ferromagnets, F\({}_{1}\) and F\({}_{2}\), separated by a nonmagnetic spacer. The LLG equation of the magnetization \(\mathbf{m}_{i}\) (\(i=1,2\)) in F\({}_{i}\) layer is given by
\[\frac{d\mathbf{m}_{i}}{dt}=-\gamma\mathbf{m}_{i}\times\mathbf{H}_{i}-\gamma H _{\mathrm{si}}\mathbf{m}_{i}\times(\mathbf{m}_{2}\times\mathbf{m}_{1})+ \alpha_{i}\mathbf{m}_{i}\times\frac{d\mathbf{m}_{i}}{dt}, \tag{8}\]
where \(H_{\mathrm{si}}\) is
\[H_{\mathrm{si}}=\frac{\hbar\eta_{i}j}{2e(1+\lambda_{i}\mathbf{m}_{1}\cdot\mathbf{m}_{2})M_{i}d_{i}}. \tag{9}\]
The magnetic field [20; 34]
\[\mathbf{H}_{i}=\begin{pmatrix}-4\pi MN_{ix}m_{ix}-H_{\mathrm{di}}m_{jx}\\ -4\pi MN_{iy}m_{iy}-H_{\mathrm{di}}m_{jy}\\ H_{\mathrm{appl}}-4\pi MN_{iz}m_{iz}+2H_{\mathrm{di}}m_{jz}\end{pmatrix}, \tag{10}\]
includes the dipole field (\(\propto H_{\mathrm{di}}\)) from the other (\(j=1,2\) and \(j\neq i\)) layer, where
\[H_{\mathrm{di}}=\pi M_{j}\left[\frac{\frac{d_{\mathrm{i}}}{2}+d_{\mathrm{N}}+d _{j}}{\sqrt{r^{2}+\left(\frac{d_{\mathrm{i}}}{2}+d_{\mathrm{N}}+d_{j}\right)^ {2}}}-\frac{\frac{d_{\mathrm{i}}}{2}+d_{\mathrm{N}}}{\sqrt{r^{2}+\left(\frac{d _{\mathrm{i}}}{2}+d_{\mathrm{N}}\right)^{2}}}\right]. \tag{11}\]
Here, \(d_{\mathrm{N}}\) is the thickness of the spacer layer between the two ferromagnets, which is assumed to be \(d_{\mathrm{N}}=3\) nm. The dynamics of the two magnetizations are coupled via spin-transfer torque and the dipole field. The output signal of this STO originates from the magnetoresistance effect between the two free layers and is proportional to \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\). The simulations reported in previous studies assumed identical material parameters for the two layers [17], whereas an experimental study used materials with different parameters [16]. In this study, we vary the saturation magnetization \(M_{i}\) and the Gilbert damping constant \(\alpha_{i}\).
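As a hedged sketch of Eqs. (8)-(11), the right-hand side of the coupled model can be written as a single function. The constants `GAMMA`, `HBAR`, and `E` are assumed from the earlier single-free-layer sketch, per-layer parameters are passed as two-element sequences, and \(M_{i}\) is assumed in the demagnetization term of Eq. (10); all names are illustrative.

```python
import numpy as np

D_N = 3e-7                                  # spacer thickness [cm]

def dipole_field(i, j, M, d, r=50e-7):
    """H_{d,i} of Eq. (11): dipole field on layer i from layer j."""
    a = d[i]/2 + D_N + d[j]
    b = d[i]/2 + D_N
    return np.pi * M[j] * (a/np.sqrt(r**2 + a**2) - b/np.sqrt(r**2 + b**2))

def coupled_rhs(t, y, j, M, d, alpha, eta, lam, N, h_appl):
    """Eqs. (8)-(10) for the stacked state y = (m1, m2); j in A/cm^2."""
    m1, m2 = y[:3], y[3:]
    out = []
    for i, (mi, mj) in enumerate(((m1, m2), (m2, m1))):
        hd = dipole_field(i, 1 - i, M, d)
        h = np.array([-4*np.pi*M[i]*N[i][0]*mi[0] - hd*mj[0],
                      -4*np.pi*M[i]*N[i][1]*mi[1] - hd*mj[1],
                      h_appl - 4*np.pi*M[i]*N[i][2]*mi[2] + 2*hd*mj[2]])   # Eq. (10)
        hs = HBAR*eta[i]*j / (2*E*(1 + lam[i]*np.dot(m1, m2))*M[i]*d[i])   # Eq. (9)
        torque = -GAMMA*np.cross(mi, h) - GAMMA*hs*np.cross(mi, np.cross(m2, m1))
        out.append((torque + alpha[i]*np.cross(mi, torque)) / (1 + alpha[i]**2))
    return np.concatenate(out)
```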
Figure 5 shows typical magnetization dynamics of this STO, where the two ferromagnets have identical parameters: \(M_{1}=M_{2}=1300\) emu/cm\({}^{3}\) and \(\alpha_{1}=\alpha_{2}=0.01\). When the current density is relatively small (\(j_{0}=100\) MA/cm\({}^{2}\)), the magnetizations saturate to a fixed point; the magnetization direction in one (F\({}_{1}\)) layer is shown in Fig. 5(a). On the other hand, when the current density is large (\(j_{0}=450\) MA/cm\({}^{2}\)), an amplitude modulation in the magnetization oscillation occurs; see Fig. 5(b). For the discussion later, it will be useful to introduce the bifurcation diagram that summarizes the local maxima of the temporal \(m_{1z}\) as a function of the current density; see Fig. 5(c). Remember as well that the variable used as the output signal for physical reservoir computing is \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\). The time evolution of \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\) for small and large currents and the bifurcation diagram are shown in Figs. 5(d)-5(f). The results indicate that the two magnetizations are approximately antiparallel when the current is small, which is due to the fact that the dipole interaction prefers the antiparallel alignment when the magnetizations point in an in-plane direction. The spin-transfer torque acting on one ferromagnet also prefers the antiparallel alignment, while that acting on the other prefers the parallel alignment. As a result, the magnetization alignment is close to but slightly different from antiparallel, i.e., \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\simeq-1\). For a large current, the spin-transfer torque overcomes the damping torque and drives the magnetization oscillations, where \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\) shows two local maxima. We emphasize that these dynamics are not chaotic.
When the parameters of the two ferromagnets are different, the dynamics become complex [20; 21]. As an
example, let us suppose that \(M_{1}=1300\) emu/cm\({}^{3}\), \(M_{2}=2200\) emu/cm\({}^{3}\), and \(\alpha_{1}=\alpha_{2}=0.01\) and study the resulting dynamics. In this case, for a positive current, a simple oscillation of the magnetization is excited, as shown in Fig. 6(a) for a current density of 100 MA/cm\({}^{2}\). On the other hand, when the current is negative, the dynamics are complex, as shown in Fig. 6(b) for a current density of 450 MA/cm\({}^{2}\). The origin of the asymmetry in the dynamics with respect to the current direction is as follows. As mentioned, the dipole interaction prefers the antiparallel alignment of magnetizations. When the current is positive, the spin-transfer torque acting on the F\({}_{1}\) layer also prefers the antiparallel alignment, while that acting on the F\({}_{2}\) layer prefers the parallel alignment. Note that the strength of the spin-transfer torque acting on the F\({}_{2}\) layer is relatively small because it is inversely proportional to the saturation magnetization, and the saturation magnetization \(M_{2}\) in the F\({}_{2}\) layer is large in the present case. Accordingly, both the dipole interaction and the spin-transfer torques mainly prefer the antiparallel alignment, and the dynamics become relatively simple. On the other hand, when the current is negative, the spin-transfer torque acting on the F\({}_{1}\) layer prefers the parallel alignment, while that acting on the F\({}_{2}\) layer, which is small due to the large \(M_{2}\), prefers the antiparallel alignment. Thus, while the dipole interaction prefers the antiparallel alignment, the spin-transfer torques mainly prefer the parallel alignment. As a result of competition between them, the dynamics become complex; in fact, as clarified from the Lyapunov exponent shown below, the dynamics in Fig. 6(b) can be classified as chaos. The complexity of the dynamics can be seen
Figure 5: Examples of dynamics of the in-plane (\(m_{1x}\)) and out-of-plane (\(m_{1z}\)) components of the magnetization for current densities of (a) 100 and (b) 450 MA/cm\({}^{2}\) in an STO with two free layers. (c) Bifurcation diagram of the local maximum of temporal \(m_{1z}\) as a function of current density. Similar data for \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\) are shown in (d), (e), and (f). The values of the parameters in two ferromagnets are identical.
Figure 6: Examples of dynamics of the in-plane (\(m_{1x}\)) and out-of-plane (\(m_{1z}\)) components of the magnetization for current densities of (a) 100 and (b) 450 MA/cm\({}^{2}\) in an STO with two free layers. Bifurcation diagrams of (c) the local maximum of temporal \(m_{1z}\) and (d) \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\) as a function of current density. The saturation magnetizations differ: \(M_{1}=1300\) emu/cm\({}^{3}\) and \(M_{2}=2200\) emu/cm\({}^{3}\).
in the bifurcation diagrams of \(m_{1z}\) and \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\) [Figs. 6(c) and 6(d)]. The broad distributions, as well as the window structures, imply the appearance of chaos in the negative current region [41].
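The bifurcation diagrams above can be reproduced schematically by discarding a transient, then collecting the local maxima of the observed variable at each current density; a sketch using the `coupled_rhs` function above and `scipy.signal.find_peaks` (illustrative names and transient lengths) is:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import find_peaks

def local_maxima_m1z(j, y0, params, t_trans=200e-9, t_obs=100e-9):
    """Integrate past the transient, then return the local maxima of m_1z."""
    t_eval = np.arange(t_trans, t_trans + t_obs, 1e-12)
    sol = solve_ivp(coupled_rhs, (0.0, t_eval[-1]), y0,
                    args=(j, *params), t_eval=t_eval, max_step=1e-12)
    m1z = sol.y[2]
    peaks, _ = find_peaks(m1z)
    return m1z[peaks]     # one dot per local maximum at this current density

# sweeping j over e.g. np.linspace(-5e8, 5e8, 200) and scattering the returned
# maxima against j builds diagrams like Figs. 5(c) and 6(c)
```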
### Short-term memory capacity, Lyapunov exponent, and synchronization index of STO with two free layers
Figures 7(a)-7(c) show the short-term memory capacity, the Lyapunov exponent, and the synchronization index of the STO, where the magnetization \(M_{2}\) in the F\({}_{2}\) layer is plotted on the vertical axis. The definition of the distance \(\mathcal{D}\) in the case of the synchronization index is the difference in the values of \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\) calculated under slightly different initial conditions. The results in the figures indicate the following. First, the short-term memory capacity is almost zero when the system is chaotic, i.e., when the Lyapunov exponent is positive. Second, in addition to the boundary between zero and finite short-term memory capacity, there are boundaries along which an enhancement in the capacity can be observed. We consider that such an enhancement appears at the edge of the echo state property, as described below. Third, the short-term memory capacity is larger than that of the STO with a single free layer; for example, it is 4.60 when \(j_{0}=344\) MA/cm\({}^{2}\) and \(M_{2}=1380\) emu/cm\({}^{3}\).
We should note that the Lyapunov exponent of the present STO is zero or positive (see also Appendix B). This is due to the axial symmetry around the \(z\)-axis, where rotations of the two magnetizations around the \(z\) axis through the same angle do not change the system energy. As a result, a perturbation given to the phases of the magnetizations in the \(xy\) plane remains finite. This means that the maximum Lyapunov exponent is at least zero. Therefore, one might consider that the present STO does not have the echo state property. However, the rotations of \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) around the \(z\)-axis through the same angle do not change the output signal of the STO, \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\). In this sense, the output signal has the echo state property in some parameter regions, even where the (maximum) Lyapunov exponent is zero. Therefore, to reveal the echo state property of the output signal, we evaluated the synchronization index of \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\), in addition to the Lyapunov exponent. Here, we found that the short-term memory capacity is finite and can be large at the edge of the echo state property; compare Figs. 7(a) and 7(c).
Now let us examine the effect on the short-term memory capacity of varying the damping constant \(\alpha_{2}\) in the F\({}_{2}\) layer for saturation magnetizations of \(M_{1}=M_{2}=1300\) emu/cm\({}^{3}\); see Fig. 7(d). In this case, the short-term memory capacity is large when the current is small. Also, chaos is absent in this parameter region; see Fig. 7(e), where the temporal Lyapunov exponents for small and large current densities tend to be zero. The fact that the short-term memory capacity remains finite also implies the presence of the echo state property; see also Fig. 7(f), where the temporal synchronization indexes also saturate to zero.
## IV STO consisting of two free layers and one reference layer
In this section, we study the STO schematically shown in Fig. 1(c). The spin-transfer torque from the reference layer provides an additional torque and changes the dynamical state and the computational capability from those of the STO studied in Sec. III.
### LLG equation of STO with two free layers and one reference layer
The LLG equations of the magnetizations in the F\({}_{1}\) and F\({}_{2}\) layers are given by
\[\begin{split}\frac{d\mathbf{m}_{1}}{dt}=&-\gamma \mathbf{m}_{1}\times\mathbf{H}_{1}-\gamma H_{s1}\mathbf{m}_{1}\times(\mathbf{ m}_{2}\times\mathbf{m}_{1})\\ &+\gamma H_{s}\mathbf{m}_{1}\times(\mathbf{p}\times\mathbf{m}_{1} )+\alpha_{1}\mathbf{m}_{1}\times\frac{d\mathbf{m}_{1}}{dt},\end{split} \tag{12}\]
\[\frac{d\mathbf{m}_{2}}{dt}=-\gamma\mathbf{m}_{2}\times\mathbf{H}_{2}-\gamma H _{s2}\mathbf{m}_{2}\times(\mathbf{m}_{2}\times\mathbf{m}_{1})+\alpha_{2} \mathbf{m}_{2}\times\frac{d\mathbf{m}_{2}}{dt}. \tag{13}\]
Assuming that the total output signal is dominated by the magnetoresistance effect between the reference and F\({}_{1}\) layer, the output signal is proportional to \(\mathbf{m}_{1}\cdot\mathbf{p}=m_{1x}\).
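Relative to the `coupled_rhs` sketch above, Eq. (12) adds a single extra spin-transfer term from the fixed reference layer to the torque on \(\mathbf{m}_{1}\); Eq. (13) is unchanged from the two-free-layer case. A minimal, hedged sketch of the deterministic part of Eq. (12) (illustrative names; the Gilbert term can be folded in exactly as before) is:

```python
import numpy as np

P = np.array([1.0, 0.0, 0.0])      # reference-layer magnetization, p = e_x

def f1_torque(m1, m2, h1, hs1, hs_ref):
    """Deterministic torque on m1 in Eq. (12), before adding the Gilbert term."""
    return (-GAMMA*np.cross(m1, h1)
            - GAMMA*hs1*np.cross(m1, np.cross(m2, m1))      # from F2, as in Eq. (8)
            + GAMMA*hs_ref*np.cross(m1, np.cross(P, m1)))   # extra term from the reference layer
```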
Figures 8(a) and 8(b) show typical dynamics excited in the STO for small (100 MA/cm\({}^{2}\)) and large (450 MA/cm\({}^{2}\)) currents. The two ferromagnets have identical parameters: \(M_{1}=M_{2}=1300\) emu/cm\({}^{3}\) and \(\alpha_{1}=\alpha_{2}=0.01\). When the current is small, the magnetizations saturate to a fixed point. Unlike the STO studied in Sec. III, complex dynamics appear for a large current, even when the parameters of the two ferromagnets are identical, due to the spin-transfer torque from the reference layer acting on only the F\({}_{1}\) layer. The bifurcation diagrams of \(m_{1x}\) and \(m_{1z}\) [Figs. 8(c) and 8(d)] show that complex structures appear in the positive current region. These results imply chaos in the positive current region.
The asymmetry of the dynamics with respect to the current direction arises for the following reason. First, let us consider the negative current case. The spin-transfer torque from the reference layer acting on the F\({}_{1}\) layer moves \(\mathbf{m}_{1}\) in the \(-x\) direction. Then, \(\mathbf{m}_{2}\) moves in the \(+x\) direction to minimize the dipole interaction energy. The spin-transfer torque from the F\({}_{2}\) layer acting on the F\({}_{1}\) layer prefers the parallel alignment of the magnetizations, and thus, tries to move \(\mathbf{m}_{1}\) in the \(+x\) direction. However, this motion is compensated by the spin-transfer torque from the reference layer, and \(\mathbf{m}_{1}\) remains in the
\(-x\) direction. The spin-transfer torque from the F\({}_{1}\) layer acting on the F\({}_{2}\) layer prefers the antiparallel alignment of the magnetizations, and thus, \(\mathbf{m}_{2}\) also remains in the \(+x\) direction. Accordingly, the two magnetizations stay at the fixed points.
Next, let us consider the positive current case. The spin-transfer torque from the reference layer acting on the F\({}_{1}\) layer moves \(\mathbf{m}_{1}\) in the \(+x\) direction. Accordingly, \(\mathbf{m}_{1}\) and \(\mathbf{m}_{2}\) try to point in the \(+x\) and \(-x\) direction, respectively. However, the spin-transfer torque from the F\({}_{1}\) acting on the F\({}_{2}\) layer in this case prefers the parallel alignment of the magnetizations, and thus, \(\mathbf{m}_{2}\) cannot remain in the \(-x\) direction. As a result, the magnetizations do not saturate to a fixed point when the current magnitude is large.
### Short-term memory capacity, Lyapunov exponent, and synchronization index of STO with two free layers and one reference layer
Figures 9(a)-9(c) summarize the short-term memory capacity, the Lyapunov exponent, and the synchronization index of the present STO, where the saturation magnetization \(M_{2}\) in the F\({}_{2}\) layer is plotted on the vertical axis. The distance \(\mathcal{D}\) for the synchronization index is evaluated from \(m_{1x}\). The maximum short-term memory capacity, \(5.72\) for \(j_{0}=127\) MA/cm\({}^{2}\) and \(M_{2}=1320\) emu/cm\({}^{3}\), is larger than those of the STO with one free layer and the STO with two free layers. These results, together with those in the previous sections, indicate that adding another free layer makes the magnetization dynamics complex and helps to enhance the computational capability of STO-based physical reservoir computing. The short-
Figure 8: Examples of dynamics of the in-plane (\(m_{1x}\)) and out-of-plane (\(m_{1z}\)) components of the magnetization for current densities of (a) 100 and (b) 450 MA/cm\({}^{2}\) in an STO with two free layers and one reference layer. Bifurcation diagrams of the local maxima of temporal (c) \(m_{1x}\) and (d) \(m_{1z}\) as a function of current density.
Figure 7: (a) Short-term memory capacity, (b) Lyapunov exponent, and (c) synchronization index of STO with two free layers, where the horizontal axis is the current density and the vertical axis is the saturation magnetization in the F\({}_{2}\) layer. (d) Short-term memory capacity, where the vertical axis is the Gilbert damping constant of the F\({}_{2}\) layer. Examples of time evolution of the temporal (e) Lyapunov exponent and (f) synchronization index for a current density of 100 MA/cm\({}^{2}\). The insets show those for the current density of 450 MA/cm\({}^{2}\). The Gilbert damping constant of the F\({}_{2}\) layer is 0.01.
term memory capacity again becomes zero when the system is in a chaotic state, where the Lyapunov exponent is positive; see Figs. 9(a) and 9(b). The maximum value of the short-term memory capacity appears near the edge of the echo state property. Here, the parameter regions corresponding to the zero Lyapunov exponent are relatively limited, especially in the positive current region, so the edges of chaos and the echo state property nearly overlap in the present STO. This might be due to the spin-transfer torque from the reference layer, which breaks the system's axial symmetry around the \(z\)-axis and reduces the parameter region corresponding to the zero Lyapunov exponent, compared to those in Sec. III.2. Similar behaviors are found when the damping constant \(\alpha_{2}\) in the F\({}_{2}\) layer is varied, as shown by the plots of the short-term memory capacity, Lyapunov exponent, and synchronization index in Figs. 9(d)-9(f). The maximum short-term memory capacity is 5.76 for \(j_{0}=127\) MA/cm\({}^{2}\) and \(\alpha_{2}=0.00847\). Figures 9(a) and 9(d) show that the maximum short-term memory capacity occurs in a relatively low current region, while low computational capability dominates in the relatively high current region due to the appearance of chaos. This fact might make the low current region preferable for physical reservoir computing.
Here, we have shown the dependence of the short-term memory capacity on the parameters in the F\({}_{2}\) layer. Similar behaviors, such as maximization of the short-term memory capacity near the edge of the echo state property, occur even when the parameters in the F\({}_{1}\) layer are varied; see Appendix D.
## V Conclusions
In summary, we studied the magnetization dynamics in STOs with two free layers. It was shown that adding another free layer makes the dynamical output signal complex due to the coupled motion of the magnetizations via the spin-transfer torques and the dipole field. For example, in addition to the saturation of the magnetization to a fixed point found in the STO with a single free layer, an amplitude-modulated oscillation was found in the STO consisting of two free layers. The STOs with two free layers also showed chaotic dynamics particularly in the large current region. These complex dynamics mainly appear when structural asymmetries, such as a difference in parameters and/or the presence of the reference layer, exist. We investigated the computational capability of these STOs for physical reservoir computing by evaluating the short-term memory capacity. The maximum values for the STOs with two free layers were larger than that of the STO with a single free layer. Through the evaluations of the Lyapunov exponent and the synchronization index, it was shown that the short-term memory capacity is maximized near the edge of the echo state property. We note that the short-term memory capacity is the linear component of the information processing capacity [60; 61], and the total information processing capacity is bounded by the number of linearly independent outputs of the system. Therefore, an increase in the short-term memory capacity does not guarantee an increase in the total information processing capacity directly; the non-linear components of the information processing capacity
Figure 9: (a) Short-term memory capacity, (b) Lyapunov exponent, and (c) synchronization index of an STO with two free layers and one reference layer, where the horizontal axis is the current density and the vertical axis is the saturation magnetization in the F\({}_{2}\) layer. (d), (e), and (f) are the same as (a), (b), and (c) but as a function of the Gilbert damping constant in the F\({}_{2}\).
might be suppressed in STOs with two free layers. In future work, the total information processing capacity should be analyzed.
## Acknowledgements
The results were partially obtained from the project "Innovative AI Chips and Next-Generation Computing Technology Development/(2) Development of Next-Generation Computing Technologies/Exploration of Neuromorphic Dynamics towards Future Symbiotic Society" commissioned by NEDO. T.Y. is supported by JSPS KAKENHI Grant No. 21K14526. T.T. is supported by JSPS KAKENHI Grant No. 20H05655.
## Appendix A Method of evaluating short-term memory capacity
Here, we summarize the details of the method of evaluating the short-term memory capacity. The short-term memory capacity is a kind of information processing capacity [60; 61] and quantifies task-independent computational capability. We assume that a series of pulse input signals \(r_{k}\) (\(k=1,2,\cdots,N_{\rm L}\)) is injected into the physical reservoir, where the suffix \(k\) distinguishes the order of the input signal. In the main text, we used a binary input signal \(b_{k}=0,1\) as the input signal \(r_{k}\). Another kind of input signal can be found in, for example, Ref. [61], where a uniformly distributed random number (\(0\leq r_{k}\leq 1\) or \(-1\leq r_{k}\leq 1\)) is used. We define the target data \(z_{k,D}\) from the input signal \(r_{k}\). Here, \(D\) is an integer called the delay (\(D=0,1,2,\cdots\)). An aim of physical reservoir computing is to recognize the past input data from the present output signal, and therefore, it is necessary to introduce a delay to distinguish the past input data. For example, in the evaluation of the short-term memory capacity, \(z_{k,D}\) is \(b_{k-D}\)[35; 36] (or \(r_{k-D}\)[61]); i.e., the target data are the input data injected \(D\) steps before the present input signal. Another example of \(z_{k,D}\) is \(z_{k,D}=\sum_{j=0}^{D}b_{k-D+j}\) (mod 2) for the evaluation of parity-check capacity [35; 36]. The target data of the information processing capacity [60; 61] are, in general, nonlinear combinations of \(r_{k-D}\). After defining the target data, we introduce the weight \(w_{D,i}\) to minimize
\[\sum_{k=1}^{N_{\rm L}}\left(\sum_{i=1}^{N_{\rm node}+1}u_{k,i}w_{D,i}-z_{k,D} \right)^{2}, \tag{10}\]
where the output data from the \(i\)th (virtual) node in the presence of the \(k\)th input is denoted as \(u_{k,i}\). When the physical reservoir is a many-body system, the suffix \(i\) distinguishes each body. On the other hand, in the present paper, we use a single STO. In this case, a time-multiplexing method [35] is applied in order to introduce virtual neurons, \(u_{k,i}=u[t_{0}+(k-1+i/N_{\rm node})t_{\rm p}]\), where \(t_{0}\) is the initial time at which the input signal is injected while \(N_{\rm node}\) is the number of virtual neurons. The function \(u(t)\) is the output signal from the STO; for example, in the case of the STO with a single free layer studied in Sec. II, the experimentally measured quantity is \(m_{x}\), and thus, \(u(t)=m_{x}(t)\). The process determining the weight is called learning. The number of input signals used for learning is \(N_{\rm L}\). Note that a weight should be introduced for each set of target data.
Next, we inject a different series of pulses \(r^{\prime}_{n}\) (\(n=1,2,\cdots,N^{\prime}_{\rm L}\)), where the prime symbol is added to quantities to distinguish them from those used in learning. The number \(N^{\prime}_{\rm L}\) of input data is not necessarily the same as the number used in learning, i.e., \(N_{\rm L}\neq N^{\prime}_{\rm L}\). Then, from the output data \(u^{\prime}_{n,i}\), which is the response of the physical reservoir to the injection of \(r^{\prime}_{n}\), and using the weight \(w_{D,i}\) determined by learning, we define system output as
\[y^{\prime}_{n,D}=\sum_{i=1}^{N_{\rm node}+1}u^{\prime}_{n,i}w_{D,i}. \tag{11}\]
If the learning is done well, \(y^{\prime}_{n,D}\) will reproduce the target data \(z^{\prime}_{k,D}\) defined from \(r^{\prime}_{n}\). To quantify the reproducibility, we can use the correlation coefficient,
\[\mathrm{Cor}(D)=\frac{\sum_{n=1}^{N^{\prime}_{\rm L}}\left(z^{\prime}_{n,D}- \langle z^{\prime}_{n,D}\rangle\right)\left(y^{\prime}_{n,D}-\langle y^{ \prime}_{n,D}\rangle\right)}{\sqrt{\sum_{n=1}^{N^{\prime}_{\rm L}}\left(z^{ \prime}_{n,D}-\langle z^{\prime}_{n,D}\rangle\right)^{2}\sum_{n=1}^{N^{\prime} _{\rm L}}\left(y^{\prime}_{n,D}-\langle y^{\prime}_{n,D}\rangle\right)^{2}}}. \tag{12}\]
The component-wise information processing capacity is defined as
\[C(z^{\prime}_{n,D})=\left[\mathrm{Cor}(D)\right]^{2}. \tag{13}\]
The magnitude of the correlation coefficient is unity when the system output \(y^{\prime}_{n,D}\) completely reproduces the target data \(z^{\prime}_{n,D}\). On the other hand, the correlation coefficient is zero when the system output cannot reproduce the target data. Therefore, the component-wise information processing capacity quantifies the reproducibility of the target data. Note that the component-wise information processing capacity is introduced for each target data \(z^{\prime}_{n,D}\), and is independent of the suffix \(n\) because the average with respect to the input pulse is calculated in Eq.
(17). For example, Ref. [61] evaluates the component-wise information processing capacity of several physical reservoirs, where the capacities are distinguished by the nonlinearity of the target data and the delay \(D\). In the evaluation of the short-term memory capacity, we restrict the target data to being a linear combination of the input data, i.e., \(z_{k,D}=b_{k-D}\), and define the short-term memory capacity as the sum of the component-wise information processing capacity with respect to the delay \(D\), i.e.,
\[C_{\text{STM}}=\sum_{D=1}^{D_{\text{max}}}C(b^{\prime}_{n-D}); \tag{18}\]
see also, for example, Ref. [62; 36] for the definition of the short-term memory capacity. In many cases [36], the correlation coefficient \(\text{Cor}(D)\) becomes negligibly small for a large delay when the physical reservoir has the echo state property, and thus, past information fades in time. In such a case, the value of the information processing capacity will be independent of the maximum delay \(D_{\text{max}}\). As mentioned, the component-wise capacity quantifies the reproducibility of the target data, and the target data for the evaluation of the short-term memory capacity is the input data as is. Therefore, the short-term memory capacity can be, roughly speaking, regarded as the number of the past input data reproduced from the present output, as mentioned in the main text. Note that, in some cases [29], \(C(z_{n,D})\) for \(D=0\) is included in the definition of the capacity. In the present study, we use \(N_{\text{L}}=N^{\prime}_{\text{L}}=1000\) random binary data, \(N_{\text{node}}=250\) nodes, and \(D_{\text{max}}=30\).
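A compact sketch of this procedure, with hypothetical names and numpy's least-squares solver standing in for the weight optimization above, is given below; `trace` is the sampled output \(u(t)\) (e.g. \(m_{x}\)) with `n_node` samples per pulse, and the training and test traces should each be preceded by the washout described below.

```python
import numpy as np

def node_matrix(trace, n_inputs, n_node):
    """Time-multiplexed nodes u_{k,i} plus a bias column -> (n_inputs, n_node + 1)."""
    u = trace[:n_inputs * n_node].reshape(n_inputs, n_node)
    return np.hstack([u, np.ones((n_inputs, 1))])

def stm_capacity(b_train, u_train, b_test, u_test, d_max=30):
    """Sum of [Cor(D)]^2 over D = 1..d_max for the targets z_{k,D} = b_{k-D}."""
    cap = 0.0
    for delay in range(1, d_max + 1):
        # learn weights on rows k >= D against targets b_{k-D}
        w, *_ = np.linalg.lstsq(u_train[delay:], b_train[:-delay], rcond=None)
        y = u_test[delay:] @ w                       # system output on new inputs
        cor = np.corrcoef(b_test[:-delay], y)[0, 1]
        cap += 0.0 if np.isnan(cor) else cor**2
    return cap

# usage sketch: u_train = node_matrix(trace_train, 1000, 250), likewise for u_test
```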
Figure 10 summarizes examples of these procedures for an STO with a single free layer at a current density of \(10\) MA/cm\({}^{2}\). In Fig. 10(a), an example of a series of random binary input signals and the corresponding dynamics of \(m_{x}\) are shown. The value of \(m_{x}\) changes in response to the input signal; from this dynamical response, the input signal can be identified. Figure 10(b) shows an example of the reproduction of the input data with delay \(D=1\). The reproducibility decreases as the delay increases. Figure 10(c) shows the dependence of \([\text{Cor}(D)]^{2}\) on the delay \(D\). The short-term memory capacity is obtained as the sum of these \([\text{Cor}(D)]^{2}\).
In the determination of the weight, the output \(u_{k,i}\) should be independent of the initial state. This is because the initial state of a physical system is often uncontrollable, and the computational capability should not depend on such uncontrollable variables. In addition, the output \(u^{\prime}_{n,i}\) used in the evaluation of the capacity should be independent of the input data \(r_{k,i}\) used in the determination of the weight, because there should be no correlation between the learning and the evaluation of the capacity. As mentioned in the main text, if the physical reservoir has the echo state property, the dynamical state becomes independent of the past state under the injection of a random input signal. Therefore, before the determination of the weight, we inject \(300\) random binary input signals into the STO. Similarly, after determining the weight and before evaluating the capacity, we inject a different set of \(300\) random binary input signals to erase any correlation between the learning and the evaluation. These processes are called washout. As can be seen from these examples, the echo state property is a necessary ingredient for physical reservoir computing.
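For concreteness, the whole learning-washout-evaluation pipeline can be sketched in a few lines of Python. This is a minimal illustration only: the reservoir outputs are replaced by random stand-ins (in an actual evaluation they would be the recorded \(m_{x}\) responses of the STO), and all array names and parameter choices below are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_L, N_node, D_max, N_wash = 1000, 250, 30, 300

# random binary inputs: washout / training / washout / evaluation
b = rng.integers(0, 2, size=2 * N_wash + 2 * N_L)
u = rng.standard_normal((b.size, N_node))   # stand-in for the measured node outputs
U = np.hstack([u, np.ones((b.size, 1))])    # bias column -> N_node + 1 weights

tr = slice(N_wash, N_wash + N_L)                    # training segment
ev = slice(2 * N_wash + N_L, 2 * N_wash + 2 * N_L)  # evaluation segment

C_STM = 0.0
for D in range(1, D_max + 1):
    # target z_{n,D} = b_{n-D}; drop the first D samples where it is undefined
    wgt = np.linalg.lstsq(U[tr][D:], b[tr][:-D], rcond=None)[0]  # least-squares weights w_{D,i}
    y = U[ev][D:] @ wgt                                          # system output, Eq. (11)
    cor = np.corrcoef(b[ev][:-D], y)[0, 1]                       # Eq. (12)
    C_STM += cor ** 2                                            # Eqs. (13)-(14)
print(C_STM)
```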
## Appendix B Method of evaluating Lyapunov exponent
Here, let us summarize the method of evaluating the Lyapunov exponent. For simplicity, we will use an STO with a single free layer as an example for the moment.
We denote the solution of the LLG equation with a certain initial condition as \(\mathbf{m}(t)\). At a certain time \(t_{0}\), we introduce \(\mathbf{m}^{(1)}(t_{0})\), which points in a slightly different direction from \(\mathbf{m}(t_{0})\), at distance \(\epsilon\). We emphasize that there is no correlation between \(\mathbf{m}(t)\) and \(\mathbf{m}^{(1)}(t_{0})\). The distance is the relative angle of the two magnetizations, i.e., \(\epsilon=\cos^{-1}\left[\mathbf{m}(t_{0})\cdot\mathbf{m}^{(1)}(t_{0})\right]\). Solving the LLG equations for \(\mathbf{m}(t_{0})\) and \(\mathbf{m}^{(1)}(t_{0})\), we obtain \(\mathbf{m}(t_{0}+\Delta t)\) and \(\mathbf{m}^{(1)}(t_{0}+\Delta t)\). Then, we define a temporal Lyapunov exponent at time \(t_{1}=t_{0}+\Delta t\) as

\[\varLambda(t_{1})=\frac{1}{\Delta t}\ln\frac{\mathscr{D}(t_{1})}{\epsilon}, \tag{15}\]

where \(\mathscr{D}(t_{1})=\cos^{-1}[\mathbf{m}(t_{0}+\Delta t)\cdot\mathbf{m}^{(1)}(t_{0}+\Delta t)]\) is the distance between \(\mathbf{m}(t_{0}+\Delta t)\) and \(\mathbf{m}^{(1)}(t_{0}+\Delta t)\). Next, we introduce \(\mathbf{m}^{(2)}(t_{0}+\Delta t)\) by moving \(\mathbf{m}(t_{0}+\Delta t)\) in the direction of \(\mathbf{m}^{(1)}(t_{0}+\Delta t)\) through the distance \(\epsilon\). Solving the LLG equations for \(\mathbf{m}(t_{0}+\Delta t)\) and \(\mathbf{m}^{(2)}(t_{0}+\Delta t)\) yields \(\mathbf{m}(t_{0}+2\Delta t)\) and \(\mathbf{m}^{(2)}(t_{0}+2\Delta t)\). Then, the temporal Lyapunov exponent at time \(t_{2}=t_{0}+2\Delta t\) is defined as \(\varLambda(t_{2})=(1/\Delta t)\ln[\mathscr{D}(t_{2})/\epsilon]\), where \(\mathscr{D}(t_{2})\) is the distance between \(\mathbf{m}(t_{2})\) and \(\mathbf{m}^{(2)}(t_{2})\).

Figure 10: (a) Temporal dynamics of \(m_{x}\) (red) and random binary input (black) for a current density of \(10\) MA/cm\({}^{2}\). The pulse width is \(t_{\text{p}}=1\) ns. (b) Examples of target data (black) and system output (red) for \(D=1\). (c) Dependence of \([\text{Cor}(D)]^{2}\) on \(D\).
Now let us generalize the above procedure. At \(t_{n}=t_{0}+n\Delta t\), we introduce \(\mathbf{m}^{(n+1)}(t_{n})\) by moving \(\mathbf{m}(t_{n})\) in the direction of \(\mathbf{m}^{(n)}(t_{n})\) through a fixed distance \(\epsilon\). Solving the LLG equation, we obtain \(\mathbf{m}(t_{n+1})\) and \(\mathbf{m}^{(n+1)}(t_{n+1})\). From the distance \(\mathscr{D}(t_{n+1})=\cos^{-1}[\mathbf{m}(t_{n+1})\cdot\mathbf{m}^{(n+1)}(t_ {n+1})]\) between \(\mathbf{m}(t_{n+1})\) and \(\mathbf{m}^{(n+1)}(t_{n+1})\), the temporal Lyapunov exponent at \(t=t_{n+1}\) is defined as \(\varLambda(t_{n+1})=(1/\Delta t)\ln[\mathscr{D}(t_{n+1})/\epsilon]\). Then, the Lyapunov exponent is defined as
\[\varLambda=\lim_{N_{A}\rightarrow\infty}\frac{1}{N_{A}}\sum_{i=1}^{N_{A}}\varLambda(t_{i}). \tag{16}\]
For STOs including two free layers, we should define \(\mathbf{m}_{i}^{(n+1)}(t_{n})\) to make the total distance, i.e., the sum of the distances between \(\mathbf{m}_{i}(t_{n})\) and \(\mathbf{m}_{i}^{(n+1)}(t_{n})\), equal to \(\epsilon\); see also Ref. [40], where a similar method for an STO with a feedback circuit is developed.
As can be seen from this explanation, the distance between two samples is given at every time step \(t_{n}\), contrary to the evaluation of the synchronization index explained in Appendix C below, where the perturbation is given at the initial time only.
Note that the difference \(\mathbf{m}^{(n+1)}(t_{n})-\mathbf{m}(t_{n})\) between \(\mathbf{m}(t_{n})\) and \(\mathbf{m}^{(n+1)}(t_{n})\) corresponds to the direction along which the difference expands the most. Therefore, the Lyapunov exponent estimated above is the maximum (or largest) Lyapunov exponent, which quantifies the maximum expansion rate of the initial difference. There are \(n\) Lyapunov exponents, \(\varLambda_{1},\varLambda_{2},\cdots,\varLambda_{n}\) (\(\varLambda_{1}\geq\varLambda_{2}\geq\cdots\geq\varLambda_{n}\)), for a system with \(n\) dimensions, and \(\varLambda\) above corresponds to \(\varLambda_{1}\). The other Lyapunov exponents, \(\varLambda_{2},\cdots,\varLambda_{n}\), can be estimated similarly, although it is often sufficient to estimate the maximum Lyapunov exponent to clarify the dynamical state. In addition, the Lyapunov exponent estimated here corresponds to a conditional Lyapunov exponent [29].
The Lyapunov exponent here is the long-time average of the temporal Lyapunov exponent. While the value of the temporal Lyapunov exponent near the initial time (\(t=t_{0}\)) depends on the choice of the initial perturbation [\(\mathbf{m}^{(1)}(t_{0})\)], which is arbitrary, the long-time averaged value tends to a certain value that is independent of this choice; see, for example, Ref. [40]. Because of the finite calculation time, however, the initial value of the temporal Lyapunov exponent might cause some confusion. For example, in Fig. 7(b), the label includes a negative value, while we argue that the exponent is zero or positive. This is because a negative value of the temporal Lyapunov exponent near the initial time, originating from the arbitrary choice of the initial perturbation, remains. As mentioned above, however, it becomes sufficiently small, and the long-time averaged value becomes close to zero. We have carefully checked these values and concluded that the Lyapunov exponent in Fig. 7(b) is zero or positive.
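The renormalization scheme above is essentially the Benettin algorithm. As a minimal illustration (not the code used in this work), the following Python sketch applies the same reinsertion-at-fixed-distance procedure, Eqs. (15)-(16), to a toy driven pendulum standing in for the LLG dynamics; the Euclidean norm replaces the angular distance, and all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s, A=1.2, q=0.5, w=2/3):        # toy stand-in for the LLG equation
    th, v = s
    return [v, -q * v - np.sin(th) + A * np.cos(w * t)]

eps, dt, N_A = 1e-8, 0.05, 2000
s = np.array([0.1, 0.0])
s1 = s + np.array([eps, 0.0])              # perturbed copy at distance eps
lam, t = [], 0.0
for n in range(N_A):
    s = solve_ivp(rhs, (t, t + dt), s, rtol=1e-10, atol=1e-12).y[:, -1]
    s1 = solve_ivp(rhs, (t, t + dt), s1, rtol=1e-10, atol=1e-12).y[:, -1]
    d = np.linalg.norm(s1 - s)
    lam.append(np.log(d / eps) / dt)       # temporal exponent Lambda(t_n)
    s1 = s + (eps / d) * (s1 - s)          # reinsert at distance eps along the expanding direction
    t += dt
print(np.mean(lam))                        # long-time average, Eq. (16)
```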
## Appendix C Method of evaluating synchronization index
Here, let us summarize the method of evaluating the synchronization index. For simplicity, we will again use an STO with a single free layer as an example for the moment.
We denote the solutions of the LLG equation with two different initial conditions as \(\mathbf{m}^{(1)}\) and \(\mathbf{m}^{(2)}\). We again emphasize that \(\mathbf{m}^{(1)}\) and \(\mathbf{m}^{(2)}\) do not have any correlation. Then, we evaluate the evolution of their difference. Here, the difference is given to the initial state only, while that in Appendix B is given at every time step \(t_{n}\). Accordingly, \(\mathbf{m}^{(a)}\) (\(a=1,2\)) introduced here has a different meaning from that in Appendix B. In the case of STOs with two free layers, we solve the LLG equations for two magnetizations \(\mathbf{m}_{i}^{(1)}\) and \(\mathbf{m}_{i}^{(2)}\) (\(i=1,2\)), where there are small differences between the initial states of different samples.
Suppose that a random binary input signal is injected into the STO. If the STOs for \(\mathbf{m}_{i}^{(1)}\) and \(\mathbf{m}_{i}^{(2)}\) show noise-induced synchronization [57], the difference will tend to zero, and the synchronization index will also be zero. However, noise-induced synchronization is not the only state that appears in nonlinear oscillators. For example, if the STOs originally show chaotic behavior, it is difficult to realize noise-induced synchronization. Another possibility is input-driven chaos [29], where the input signal causes chaos even if the STO does not show chaotic behavior originally. In these cases, the synchronization index will remain finite even after a long time passes. Therefore, a vanishing synchronization index indicates that the dynamical state has become independent of the initial state, i.e., the presence of the echo state property, as mentioned in Sec. II.3.
Now let us briefly comment on the definition of the distance \(\mathcal{D}\) between the initial states; see also Secs. II.3 and II.5. One possible "distance" between two samples of the solution is the relative angle, \(\cos^{-1}[\mathbf{m}^{(1)}\cdot\mathbf{m}^{(2)}]\), between two solutions, \(\mathbf{m}^{(1)}\) and \(\mathbf{m}^{(2)}\). This definition relies on the fact that the LLG equation conserves the norm of the solution \(\mathbf{m}\), and thus, the magnetization dynamics described by \(\mathbf{m}\) with normalization \(|\mathbf{m}|=1\) can be regarded as the motion of a point particle on the unit sphere. Then, the angle between \(\mathbf{m}^{(1)}\) and
\(\mathbf{m}^{(2)}\) represents their distance measured on the sphere. This definition of the distance is used in, for example, Ref. [40]. For the STOs with two free layers studied in Secs. III.2 and IV.2, the distance is defined as \(\mathcal{D}=\sum_{i=1}^{2}\cos^{-1}\left[\mathbf{m}_{i}^{(1)}(t)\cdot\mathbf{ m}_{i}^{(2)}(t)\right]\), where the suffix \(i\) distinguishes the ferromagnetic layers.
A different choice of distance is made in, for example, Refs. [63, 64, 21]. There, the zenith and azimuth angles, \(\theta_{i}^{(a)}\) and \(\varphi_{i}^{(a)}\), are defined through \(\mathbf{m}_{i}^{(a)}=[m_{ix}^{(a)},m_{iy}^{(a)},m_{iz}^{(a)}]=[\sin\theta_{i}^{(a)}\cos\varphi_{i}^{(a)},\sin\theta_{i}^{(a)}\sin\varphi_{i}^{(a)},\cos\theta_{i}^{(a)}]\), and the distance is defined as \(\mathcal{D}=\sqrt{\sum_{i=1}^{2}\left[|\theta_{i}^{(1)}-\theta_{i}^{(2)}|^{2}+|\varphi_{i}^{(1)}-\varphi_{i}^{(2)}|^{2}\right]}\). In this definition, \(\mathcal{D}\) is a distance in a four-dimensional phase space consisting of \(\theta_{1}\), \(\varphi_{1}\), \(\theta_{2}\), and \(\varphi_{2}\).
We defined the distance \(\mathcal{D}\) differently, as mentioned in Sec. II.5. The distances in Secs. II.5, III.2, and IV.2 are based on \(m_{x}\), \(\mathbf{m}_{1}\cdot\mathbf{m}_{2}\), and \(m_{1x}\), namely \(\mathcal{D}=|m_{x}^{(1)}-m_{x}^{(2)}|\), \(\mathcal{D}=|\mathbf{m}_{1}^{(1)}\cdot\mathbf{m}_{2}^{(1)}-\mathbf{m}_{1}^{(2)}\cdot\mathbf{m}_{2}^{(2)}|\), and \(\mathcal{D}=|m_{1x}^{(1)}-m_{1x}^{(2)}|\), respectively. This is because, if these distances tend to zero, the output signal used for physical reservoir computing becomes independent of the initial state; thus, these distances provide a natural standard with which to study the echo state property for computing. It is not necessary that \(\mathbf{m}^{(1)}\) and \(\mathbf{m}^{(2)}\) become identical; only the dynamical variable used for the computing should become identical. At the same time, we note that the difference between the definitions of \(\mathcal{D}\) in the previous and present papers matters mainly in Sec. III.2, where, even in parameter regions with \(\mathbf{m}^{(1)}\neq\mathbf{m}^{(2)}\), \(\mathcal{D}=|\mathbf{m}_{1}^{(1)}\cdot\mathbf{m}_{2}^{(1)}-\mathbf{m}_{1}^{(2)}\cdot\mathbf{m}_{2}^{(2)}|\) can be zero due to the axial symmetry. In Secs. II.5 and IV.2, on the other hand, \(\mathbf{m}^{(1)}\) and \(\mathbf{m}^{(2)}\) [or \(\mathbf{m}_{i}^{(1)}\) and \(\mathbf{m}_{i}^{(2)}\)] become identical in parameter regions where \(\mathcal{D}=0\). This is because the presence of the reference layer breaks the axial symmetry of the system and reduces the parameter regions where the Lyapunov exponent is zero. In summary, the definition of the distance \(\mathcal{D}\) should be chosen carefully mainly in Sec. III.2 because the system there has axial symmetry, due to which the output signal shows the echo state property even if \(\mathbf{m}_{i}^{(1)}\neq\mathbf{m}_{i}^{(2)}\).
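As a minimal illustration of the synchronization-index procedure (again with a toy input-driven map standing in for the LLG dynamics), the sketch below evolves two copies from slightly different initial states under a common random binary input, applying the perturbation at the initial time only, and monitors a distance based on the output variable. The map, its parameters, and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def step(m, b, a=0.7):                     # stand-in for one input-pulse update
    return np.tanh(a * m + 0.5 * (2 * b - 1))

m1, m2 = 0.10, 0.11                        # two copies with different initial states
dist = []
for b in rng.integers(0, 2, size=500):     # common random binary drive
    m1, m2 = step(m1, b), step(m2, b)
    dist.append(abs(m1 - m2))              # distance based on the output variable
print(dist[-1])                            # -> ~0 if the echo state property holds
```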
## Appendix D Dependence of short-term memory capacity on parameters in \(\mathbf{F}_{1}\) layer
In Sec. IV.2, the dependence of the short-term memory capacity on the parameters in the \(\mathrm{F}_{2}\) layer was studied. The parameters, such as the saturation magnetization \(M_{2}\) and the damping constant \(\alpha_{2}\), can be changed by changing the ferromagnetic material. Therefore, we studied the short-term memory capacity by varying them. On the other hand, changing the material in the \(\mathrm{F}_{1}\) layer might not be preferable because it also changes the magnitude of the output signal generated through the GMR/TMR effect. Usually, CoFeB/MgO-based magnetic tunnel junctions are used for STOs, which can emit relatively large power [36, 28]. However, for a comprehensive study, one might be interested in the dependence of the short-term memory capacity on the parameters in the \(\mathrm{F}_{1}\) layer as well; this dependence is summarized in Fig. 11.
Figure 11: (a) Short-term memory capacity, (b) Lyapunov exponent, and (c) synchronization index of an STO with two free layers and one reference layer, where the horizontal axis is the current density and the vertical axis is the saturation magnetization in the \(\mathrm{F}_{1}\) layer. (d), (e), and (f) Same as (a), (b), and (c) but as a function of the Gilbert damping constant in the \(\mathrm{F}_{1}\) layer.
2310.02666 | The sharp bound of the third order Hankel determinant for inverse of
Ozaki close-to-convex functions | Let $f$ be analytic in the unit disk $\mathbb{D}= \{z \in \mathbb{C}~:~ |z| <
1\}$, and $\mathcal{S}$ be the subclass of normalized univalent functions given
by $f(z)=\sum_{n=1}^{\infty}a_{n}z^{n},~a_{1}:=1$ for $z \in\mathbb{D}$. We
present the sharp bounds of the third-order Hankel determinant for inverse
functions when it belongs to the class of Ozaki close-to-convex functions. | Biswajit Rath, K. Sanjay Kumar, D. Vamshee Krishna | 2023-10-04T09:05:37Z | http://arxiv.org/abs/2310.02666v1 | The sharp bound of the third order Hankel determinant for inverse of Ozaki close-to-convex functions
###### Abstract.
Let \(f\) be analytic in the unit disk \(\mathbb{D}=\{z\in\mathbb{C}\ :\ |z|<1\}\), and \(\mathcal{S}\) be the subclass of normalized univalent functions given by \(f(z)=\sum_{n=1}^{\infty}a_{n}z^{n},\ a_{1}:=1\) for \(z\in\mathbb{D}\). We present the sharp bound of the third order Hankel determinant for inverse functions when \(f\) belongs to the class of Ozaki close-to-convex functions.
Key words and phrases: Analytic function, Upper bound, Hankel determinant, Carathéodory function, Ozaki close-to-convex functions. 2020 Mathematics Subject Classification: 30C45, 30C50
## 1. Introduction
Let \(\mathcal{A}\) be the family of all analytic normalized mappings \(f\) of the form
\[f(z)=\sum_{t=1}^{+\infty}a_{t}z^{t},\ a_{1}:=1, \tag{1.1}\]
in the open unit disc \(\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}\), and let \(\mathcal{S}\) be the subfamily of \(\mathcal{A}\) consisting of univalent (schlicht) mappings. Every \(f\in\mathcal{S}\) has an inverse \(f^{-1}\) given by
\[f^{-1}(w)=w+\sum_{n=2}^{\infty}t_{n}w^{n},\ |w|<r_{o}(f);\left(r_{o}(f)\geq \frac{1}{4}\right). \tag{1.2}\]
A typical problem in geometric function theory is to study functionals made up of combinations of the coefficients of the original function. For positive integers \(r,\ n\), Pommerenke [19] defined the \(r^{th}\)-order Hankel determinant of \(f\) given in (1.1) as follows:
\[H_{r,n}(f)=\left|\begin{array}{cccc}a_{n}&a_{n+1}&\cdots&a_{n+r-1}\\ a_{n+1}&a_{n+2}&\cdots&a_{n+r}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n+r-1}&a_{n+r}&\cdots&a_{n+2r-2}\end{array}\right| \tag{1.3}\]
This determinant has been investigated by many authors for different combinations of \(r\) and \(n\) in (1.3), which yield various types of Hankel determinants: \(r=2\) and \(n=1\) gives the famous Fekete-Szegö functional \(|H_{2,1}(f)|:=|a_{3}-a_{2}^{2}|\). For \(r=n=2\) in (1.3), the functional \(|H_{2,2}(f)|:=|a_{2}a_{4}-a_{3}^{2}|\) is called the second-order Hankel determinant, and its sharp bounds have been obtained for the classes of bounded turning, starlike and convex functions, symbolized as \(\Re\), \(S^{*}\) and \(\mathcal{C}\) respectively, fulfilling the conditions \(\mathrm{Re}\{f^{\prime}(z)\}>0\), \(\mathrm{Re}\{\frac{zf^{\prime}(z)}{f(z)}\}>0\) and \(\mathrm{Re}\{1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\}>0\).
Throughout, \(\mathcal{F}\) denotes the class of Ozaki close-to-convex functions, i.e., functions \(f\in\mathcal{A}\) satisfying
\[\mathrm{Re}\left(1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right)>-\frac{1}{2},\quad z\in\mathbb{D}.\]
We will need the following lemmas.
**Lemma 1.1**.: _If \(p\in\mathcal{P}\) is of the form (1.5) with \(c_{1}\geq 0\), so that \(c_{1}\in[0,2]\), then_
\[2c_{2} =c_{1}^{2}+\nu\mu,\] \[4c_{3} =c_{1}^{3}+2c_{1}\nu\mu-c_{1}\nu\mu^{2}+2\nu\left(1-|\mu|^{2}\right)\rho,\] _and_ \[8c_{4} =c_{1}^{4}+3c_{1}^{2}\nu\mu+\left(4-3c_{1}^{2}\right)\nu\mu^{2}+c_{1}^{2}\nu\mu^{3}+4\nu\left(1-|\mu|^{2}\right)\left(1-|\rho|^{2}\right)\psi\] \[\quad+4\nu\left(1-|\mu|^{2}\right)\left(c_{1}\rho-c_{1}\mu\rho-\bar{\mu}\rho^{2}\right),\]
_where \(\nu:=4-c_{1}^{2},\) for some \(\mu\), \(\rho\) and \(\psi\) such that \(|\mu|\leq 1\), \(|\rho|\leq 1\) and \(|\psi|\leq 1\)._
**Lemma 1.2**.: _Let \(\psi_{1},\psi_{2},\psi_{3},\psi_{4},\psi_{5}:[0,2]\to\mathbb{R}\) be the functions defined by_
\[\psi_{1}(c): =-160c^{2}+16c^{3}+20c^{4}-4c^{5}+\frac{5c^{6}}{4}\] \[\psi_{2}(c): =32c+48c^{2}+32c^{3}+14c^{4}-10c^{5}-\frac{13c^{6}}{2}\] \[\psi_{3}(c): =-256+64c+276c^{2}-48c^{3}-82c^{4}+8c^{5}+\frac{29c^{6}}{4}\] \[\psi_{4}(c): =320-32c-272c^{2}-32c^{3}+76c^{4}+10c^{5}-7c^{6}\] \[\psi_{5}(c): =-64-64c+48c^{2}+32c^{3}-12c^{4}-4c^{5}+c^{6}\]
_Then the following hold:_
_(a) \(\psi_{1}(c)\leq 0\) for \(c\in[0,2]\);_
_(b) \(\psi_{1}(c)+\psi_{2}(c)\leq 0\) for \(c\in\left(\frac{87137}{250000},2\right]\);_
_(c) \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)\leq 0\) for \(c\in\left(\frac{87137}{250000},\frac{4511}{4000}\right]\);_
_(d) \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)+0.6\,\psi_{4}(c)\leq 0\) for \(c\in\left[\frac{4511}{4000},2\right]\);_
_(e) \(\psi_{5}(c)\leq 0\) for \(c\in[0,2]\)._
Proof(a).: Since \(4-c^{2}>0\) for \(c\in[0,2]\)
\[\psi_{1}(c): =-160c^{2}+16c^{3}+20c^{4}-4c^{5}+\frac{5c^{6}}{4}\] \[=-48c^{2}-\frac{3c^{6}}{4}-2c^{2}(4-c^{2})(14-2c+c^{2})\leq 0\]
Proof(b).: Let \(c=\frac{87137}{250000}t,1<t\leq\frac{500000}{87137}\). Then
\[\psi_{1}(c(t))+\psi_{2}(c(t)):= 11.1535t-13.6064t^{2}+2.03249t^{3}+0.501798t^{4}\] \[-0.072018t^{5}-0.00941315t^{6}\] \[\leq 0.0094t(1-t)(17.189-7.9049t+t^{2})(69.0293+16.5646t+t^{2})\leq 0\]
Proof(c).: Let \(c=\frac{87137}{250000}t,1<t\leq\frac{563875}{174274}\). Then
\[\psi_{1}(c(t))+\psi_{2}(c(t))+\psi_{3}(c(t)):= -256.+33.4606t+19.9237t^{2}-0.708421t^{4}\] \[-0.0308649t^{5}+0.00358596t^{6}\] \[\leq 0.00358596(-18.403+t)(8.71052+t)\] \[(15.6059-7.89366t+t^{2})(28.5372+8.97902t+t^{2})\leq 0\]
Proof(d).: Let \(c=\frac{4511}{4000}t,1\leq t\leq\frac{8000}{4511}\). Then
\[\psi_{1}+\psi_{2}+\psi_{3}+0.6\,\psi_{4}= 64+72.176t-137.357t^{2}-45.8974t^{3}\] \[+45.2907t^{4}+7.29666t^{5}-10.286t^{6}\] \[\leq 10.286(1-t)(0.511168+t)\] \[(4.07379-3.46659t+t^{2})(3.06843+3.21981t+t^{2})\leq 0\]
Proof(e).: Since \(4-c^{2}>0\) for \(c\in[0,2]\)
\[\psi_{5}=(4-c^{2})^{2}(-4-4c+c^{2})\leq 0\]
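The sign claims (a)-(e) can also be spot-checked numerically. The following brute-force grid evaluation in Python (a check, not a proof; grid sizes are arbitrary) is consistent with Lemma 1.2.

```python
import numpy as np

def psi(c):
    return np.array([
        -160*c**2 + 16*c**3 + 20*c**4 - 4*c**5 + 5*c**6/4,
        32*c + 48*c**2 + 32*c**3 + 14*c**4 - 10*c**5 - 13*c**6/2,
        -256 + 64*c + 276*c**2 - 48*c**3 - 82*c**4 + 8*c**5 + 29*c**6/4,
        320 - 32*c - 272*c**2 - 32*c**3 + 76*c**4 + 10*c**5 - 7*c**6,
        -64 - 64*c + 48*c**2 + 32*c**3 - 12*c**4 - 4*c**5 + c**6])

c0, c1 = 87137/250000, 4511/4000
cases = [(0, 2, [1, 0, 0, 0, 0]),       # (a)
         (c0, 2, [1, 1, 0, 0, 0]),      # (b)
         (c0, c1, [1, 1, 1, 0, 0]),     # (c)
         (c1, 2, [1, 1, 1, 0.6, 0]),    # (d)
         (0, 2, [0, 0, 0, 0, 1])]       # (e)
for lo, hi, combo in cases:
    c = np.linspace(lo, hi, 100001)
    print((np.array(combo) @ psi(c)).max() <= 1e-9)   # expect True for each case
```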
**Lemma 1.3**.: _Let \(\Psi:\left[0,\frac{87137}{250000}\right]\times(0,0.25)\to\mathbb{R}\) be the function defined by_
\[\Psi(c,x)=320+\psi_{1}(c)+\psi_{2}(c)x+\psi_{3}(c)x^{2}+\psi_{4}(c)x^{3}+\psi_ {5}(c)x^{4}\]
_where \(\psi_{1},\ldots,\psi_{5}\) are defined as in Lemma 1.2. Then \(\Psi(c,x)\leq 320\) for \(0\leq c\leq\frac{87137}{250000}\) and \(0<x<0.25\)._
Proof.: Since \(\psi_{4}(c)>0\) and \(\psi_{5}(c)<0\) for \(0\leq c\leq\frac{87137}{250000}\),
\[\Psi(c,x)\leq 320+\psi_{1}(c)+\psi_{2}(c)x+(\psi_{3}(c)+0.25\psi_{4}(c))\,x^{2}\] \[:= h(c,x),\text{ with }c\in\left[0,\frac{87137}{250000}\right] \text{ and }x\in(0,0.25).\]
\(\partial h/\partial x=0\) iff
\[x=\frac{-64c-96c^{2}-64c^{3}-28c^{4}+20c^{5}+13c^{6}}{-704+224c+832c^{2}-224c ^{3}-252c^{4}+42c^{5}+22c^{6}}:=x_{0}\in(0,0.25)\]
and
\[\frac{\partial^{2}h}{\partial x^{2}}(c,x_{0})=-\left(4-c^{2}\right)\left(88-28c-82c^{2}+21c^{3}+11c^{4}\right)<0.\]
Therefore \(h(c,x)\) attains its maximum at \(x=x_{0}\).
Hence
\[\Psi(c,x)\leq h(c,x_{0})=\frac{225280-71680c-321536c^{2}+103936c^{3}+148224c^{4}-39936c^{5}-19856c^{6}+7816c^{7}-80c^{8}-662c^{9}-59c^{10}}{8\left(88-28c-82c^{2}+21c^{3}+11c^{4}\right)^{2}}<320.\]
**Lemma 1.4**.: _Let \(\Phi:\left[0,\frac{87137}{250000}\right]\times[0.25,1]\to\mathbb{R}\) be the function defined by_
\[\Phi(c,x)=\phi_{1}(x)+\phi_{2}(x)c+\phi_{3}(x)c^{2}+\phi_{4}(x)c^{3}+\phi_{5}( x)c^{4}+\phi_{6}(x)c^{5}+\phi_{7}(x)c^{6}\]
_where for \(x\in[0.25,1]\),_
\[\phi_{1}(x): =-256x^{2}+320x^{3}-64x^{4}\] \[\phi_{2}(x): =32x+64x^{2}-32x^{3}-64x^{4}\] \[\phi_{3}(x): =-160+48x+276x^{2}-272x^{3}+48x^{4}\] \[\phi_{4}(x): =16+32x-48x^{2}-32x^{3}+32x^{4}\] \[\phi_{5}(x): =20+14x-82x^{2}+76x^{3}-12x^{4}\] \[\phi_{6}(x): =-4-10x+8x^{2}+10x^{3}-4x^{4}\] \[\phi_{7}(x): =\frac{5}{4}-\frac{13x}{2}+\frac{29x^{2}}{4}-7x^{3}+x^{4}\]
_Then \(\Phi(c,x)<0\) for \(0\leq c\leq\frac{87137}{250000}\) and \(0.25\leq x<1\)_
Proof.: Since \(\phi_{1},\phi_{1}+\phi_{2},\phi_{1}+\phi_{2}+\phi_{3},\phi_{1}+\phi_{2}+\phi_{3 }+\phi_{4},\phi_{1}+\phi_{2}+\phi_{3}+\phi_{4}+\phi_{5},\phi_{6},\phi_{7}\leq 0\) for \(0.25\leq x<1\)
\[\Phi(c,x)<\left(\phi_{1}(x)+\phi_{2}(x)+\phi_{3}(x)+\phi_{4}(x)+\phi_{5}(x) \right)c^{4}+\phi_{6}(x)c^{5}+\phi_{7}(x)c^{6}<0\]
**Lemma 1.5**.: _Let \(\Psi:\left(\frac{87137}{250000},\frac{4511}{4000}\right]\times(0,0.6)\to\mathbb{R}\) be the function defined as in Lemma 1.3. Then \(\Psi(c,x)\leq 320\) for \(\frac{87137}{250000}<c\leq\frac{4511}{4000}\) and \(0<x<0.6\)._
Proof.: Since \(\psi_{4}(c)>0\), \(\psi_{1}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)+0.6\,\psi_{4}(c)<0\) and \(\psi_{5}(c)<0\) for \(\frac{87137}{250000}<c\leq\frac{4511}{4000}\),
\[\Psi(c,x)\leq 320+\left(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)+0.6\,\psi_{4}(c)\right)x^{2}+\psi_{5}(c)x^{4}<320.\]
**Lemma 1.6**.: _Let \(\Gamma:\left(\frac{87137}{250000},1\right]\times[0.6,1]\to\mathbb{R}\) be the function defined by_
\[\Gamma(c,x)=\gamma_{1}(x)+\gamma_{2}(x)c+\gamma_{3}(x)c^{2}+\gamma_{4}(x)c^{3}+\gamma_{5}(x)c^{4}+\gamma_{6}(x)c^{5}+\gamma_{7}(x)c^{6}\]
_where for \(x\in[0.6,1]\),_
\[\gamma_{1}(x): =-256x^{2}+320x^{3}-64x^{4}\] \[\gamma_{2}(x): =96x^{2}-32x^{3}-64x^{4}\] \[\gamma_{3}(x): =164x^{2}-272x^{3}+48x^{4}\] \[\gamma_{4}(x): =-32x^{3}+32x^{4}\] \[\gamma_{5}(x): =-48x^{2}+76x^{3}-12x^{4}\] \[\gamma_{6}(x): =-6x^{2}+10x^{3}-4x^{4}\] \[\gamma_{7}(x): =2x^{2}-7x^{3}+x^{4}\]
_Then \(\Gamma(c,x)<0\) for \(\frac{87137}{250000}<c\leq 1\) and \(0.6\leq x\leq 1\)._
Proof.: Since \(\gamma_{1},\ \gamma_{1}+\gamma_{2},\ \gamma_{1}+\gamma_{2}+\gamma_{3},\ \gamma_{4},\ \gamma_{1}+\gamma_{2}+\gamma_{3}+\gamma_{4}+\gamma_{5},\ \gamma_{6},\ \gamma_{7}\leq 0\) for \(0.6\leq x\leq 1\),
\[\Gamma(c,x)<\left(\gamma_{1}+\gamma_{2}+\gamma_{3}+\gamma_{4}+\gamma_{5}\right)c^{4}+\gamma_{6}(x)c^{5}+\gamma_{7}(x)c^{6}<0.\]
**Lemma 1.7**.: _Let \(\Psi:\left(1,\frac{4511}{4000}\right]\times[0.6,1]\to\mathbb{R}\) be the function defined as in Lemma 1.3. Then \(\Psi(c,x)\leq 320\) for \(1<c\leq\frac{4511}{4000}\) and \(0.6\leq x\leq 1\)._
Proof.: Since \(\psi_{1}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)<0\) and \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)<0\) for \(1<c\leq\frac{4511}{4000}\),
\[\Psi(c,x)\leq 320+(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c))x^{2}+\psi_{4}(c)x^{3}+\psi_{5}(c)x^{4}\] \[= 320+\left(-256+96c+164c^{2}-48c^{4}-6c^{5}+2c^{6}\right)x^{2}\] \[+\left(320-32c-272c^{2}-32c^{3}+76c^{4}+10c^{5}-7c^{6}\right)x^{3}\] \[+\left(-64-64c+48c^{2}+32c^{3}-12c^{4}-4c^{5}+c^{6}\right)x^{4}\] \[\leq 320-23x^{2}+63x^{3}-53x^{4}\leq 318.459.\]
**Lemma 1.8**.: _Let \(\Psi:\left(\frac{4511}{4000},2\right]\times[0,1]\to\mathbb{R}\) be the function defined as in Lemma 1.3. Then \(\Psi(c,x)\leq 320\) for \(\frac{4511}{4000}<c\leq 2\) and \(0\leq x\leq 1\)._
Proof.: Since \(\psi_{1}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)<0\), \(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)+\psi_{4}(c)<0\) and \(\psi_{5}(c)<0\) for \(\frac{4511}{4000}<c\leq 2\),
\[\Psi(c,x)\leq 320+\left(\psi_{1}(c)+\psi_{2}(c)+\psi_{3}(c)+\psi_{4}(c)+\psi_{5}(c)\right)x^{4}\leq 320.\]
## 2. **Bound for \(H_{3,1}(f^{-1})\)**
**Theorem 2.1**.: _If \(f\in\mathcal{F}\), then_
\[\left|H_{3,1}(f^{-1})\right|\leq\frac{1}{16}\]
_and the inequality is sharp for \(f_{0}=z/\sqrt{1-z^{2}}\)._
Proof.: For \(f\in\mathcal{F}\), using the definition (1.2), we have
\[w=f(f^{-1})=f^{-1}(w)+\sum_{n=2}^{\infty}a_{n}(f^{-1}(w))^{n}. \tag{2.1}\]
Further, we have
\[w=f(f^{-1})=w+\sum_{n=2}^{\infty}t_{n}w^{n}+\sum_{n=2}^{\infty}a_{n}(w+\sum_{ n=2}^{\infty}t_{n}w^{n})^{n}. \tag{2.2}\]
Upon simplification, we obtain
\[(t_{2}+a_{2})w^{2}+(t_{3}+2a_{2}t_{2}+a_{3})w^{3}+(t_{4}+2a_{2}t _{3}+a_{2}t_{2}^{2}+3a_{3}t_{2}+a_{4})w^{4} \tag{2.3}\] \[+(t_{5}+2a_{2}t_{4}+2a_{2}t_{2}t_{3}+3a_{3}t_{3}+3a_{3}t_{2}^{2}+ 4a_{4}t_{2}+a_{5})w^{5}+......=0.\]
Equating the coefficients of the powers of \(w\) in (2.3) to zero, upon simplification, we obtain
\[t_{2}=-a_{2};\quad t_{3}=-a_{3}+2a_{2}^{2};\quad t_{4}=-a_{4}+5a_{2}a_{3}-5a_{2}^{3};\] \[t_{5}=-a_{5}+6a_{2}a_{4}-21a_{2}^{2}a_{3}+3a_{3}^{2}+14a_{2}^{4}. \tag{2.4}\]
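These relations can be verified independently by formal series reversion; the following sympy sketch (a verification aid, not part of the proof) reproduces (2.4).

```python
import sympy as sp

z, w = sp.symbols('z w')
a2, a3, a4, a5 = sp.symbols('a2 a3 a4 a5')

f = z + a2*z**2 + a3*z**3 + a4*z**4 + a5*z**5
# build f^{-1}(w) order by order from f(f^{-1}(w)) = w, truncating at w^5
g = w
for _ in range(4):                     # each pass gains one order of accuracy
    g = sp.expand(w - (f.subs(z, g) - g))
    g = (g + sp.O(w**6)).removeO()     # discard terms of order w^6 and higher
print([sp.expand(g.coeff(w, n)) for n in (2, 3, 4, 5)])
# expected: -a2, 2*a2**2 - a3, -5*a2**3 + 5*a2*a3 - a4,
#           14*a2**4 - 21*a2**2*a3 + 3*a3**2 + 6*a2*a4 - a5
```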
Using the values of \(a_{n}\ (n=2,3,4,5)\) in terms of the coefficients \(c_{n}\) for the class \(\mathcal{F}\) in (2.4), upon simplification, we obtain
\[\begin{split} t_{2}&=-\frac{3c_{1}}{4},\ t_{3}=\frac{1 }{4}\left(3c_{1}^{2}-c_{2}\right),\ t_{4}=-\frac{1}{32}\left(27c_{1}^{3}-21c_{1} c_{2}+4c_{3}\right)\\ \text{and}\ t_{5}&=-\frac{3}{160}\left(4c_{4}-22c_{ 1}c_{3}+69c_{1}^{2}c_{2}-7c_{2}^{2}-54c_{1}^{4}\right).\end{split} \tag{2.5}\]
Now,
\[H_{3,1}(f^{-1})=\left|\begin{array}{ccc}t_{1}=1&t_{2}&t_{3}\\ t_{2}&t_{3}&t_{4}\\ t_{3}&t_{4}&t_{5}\end{array}\right|, \tag{2.6}\]
Using the values of \(t_{j},\ (j=2,3,4,5)\) from (2.5) in (2.6), it simplifies to give
\[\begin{split} H_{3,1}(f^{-1})&=\frac{1}{5120}\Big{(}27c_{1}^{ 6}-108c_{1}^{4}c_{2}+36c_{1}^{3}c_{3}+117c_{1}^{2}c_{2}^{2}-88c_{2}^{3}\\ &\qquad\qquad+72c_{1}c_{2}c_{3}-72c_{1}^{2}c_{4}-80c_{3}^{2}+96c_{ 2}c_{4}\Big{)}.\end{split} \tag{2.7}\]
In view of Lemma 1.1, and adopting in (2.7) a procedure similar to that of Theorem 2.2, we obtain
\[\begin{split} H_{3,1}(f^{-1})&=\frac{1}{5120}\bigg{[} \frac{5c_{1}^{6}}{4}+\nu\bigg{\{}-\frac{13c_{1}^{4}\mu}{2}-2c_{1}^{4}\mu^{2}+ \left(37c_{1}^{2}-\frac{37c_{1}^{4}}{4}\right)\mu^{2}+c_{1}^{2}\nu\mu^{4}\\ &\qquad\qquad-\left(\frac{236}{7}+7\left(-\frac{18}{7}+c_{1}^{2} \right)^{2}\right)\mu^{3}+(1-\mu^{2})\Big{(}2c_{1}\nu(1-2\mu)\mu+4c_{1}^{3}(1 +3\mu)\Big{)}\rho\\ &\qquad\qquad+4(1-\mu^{2})\Big{(}3c_{1}^{2}\mu-\nu(5+\mu^{2}) \Big{)}\rho^{2}+12\Big{(}-c_{1}^{2}+2\nu\mu\Big{)}(1-\mu^{2})(1-\rho^{2}) \bigg{\}}\bigg{]}.\end{split} \tag{2.8}\]
Putting \(c:=c_{1}\) and using \(\nu=(4-c^{2})\) in (2.8), a simple calculation gives
\[\begin{split} H_{3,1}(f^{-1})&=\frac{1}{5120}\bigg{[} \frac{5c^{6}}{4}+(4-c^{2})\bigg{\{}-\frac{13c^{4}\mu}{2}-2c^{4}\mu^{2}+\left( 37c^{2}-\frac{37c^{4}}{4}\right)\mu^{2}+(4c^{2}-c^{4})\mu^{4}\\ &\qquad\qquad-\left(\frac{236}{7}+7\left(-\frac{18}{7}+c^{2} \right)^{2}\right)\mu^{3}+(1-\mu^{2})\Big{(}2c(4-c^{2})(1-2\mu)\mu+4c^{3}(1+3 \mu)\Big{)}\rho\\ &\qquad\qquad+4(1-\mu^{2})\Big{(}3c^{2}\mu-(4-c^{2})(5+\mu^{2}) \Big{)}\rho^{2}+\\ &\qquad\qquad 12\Big{(}-c^{2}+2(4-c^{2})\mu\Big{)}(1-\mu^{2})(1-\rho^{2}) \bigg{\}}\bigg{]}.\end{split} \tag{2.9}\]
Taking modulus on both sides of (2.9), using \(|\mu|=x\in[0,1]\), \(|\rho|=y\in[0,1]\), \(c_{1}=c\in[0,2]\) and \(|\psi|\leq 1\), we obtain
\[\left|H_{3,1}(f^{-1})\right|\leq\frac{1}{5120}\vartheta\left(c,x,y\right), \tag{2.10}\]
where \(\vartheta:\mathbb{R}^{3}\rightarrow\mathbb{R}\) is defined as
\[\vartheta\left(c,x,y\right)= \bigg{[}\frac{5c^{6}}{4}+(4-c^{2})\bigg{\{}\frac{13c^{4}x}{2}+2c^{4}x^{2}+\left(37c^{2}-\frac{37c^{4}}{4}\right)x^{2}+(4c^{2}-c^{4})x^{4}\] \[+\left(\frac{236}{7}+7\left(-\frac{18}{7}+c^{2}\right)^{2}\right)x^{3}+(1-x^{2})\Big{(}2c(4-c^{2})(1+2x)x+4c^{3}(1+3x)\Big{)}y\] \[+4(1-x^{2})\Big{(}3c^{2}x+(4-c^{2})(5+x^{2})\Big{)}y^{2}\] \[+12\Big{(}c^{2}+2(4-c^{2})x\Big{)}(1-x^{2})(1-y^{2})\bigg{\}}\bigg{]}. \tag{2.11}\]
To achieve our result, it is sufficient to maximize the function \(\vartheta\left(c,x,y\right)\) on \(\Omega:=[0,2]\times[0,1]\times[0,1]\).
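Before treating the vertices, edges, faces, and interior of \(\Omega\) analytically, the claimed bound can be sanity-checked by brute force. The following Python sketch (a numerical cross-check, not a proof; the grid resolution is arbitrary) evaluates (2.11) on a grid over \(\Omega\).

```python
import numpy as np

def vartheta(c, x, y):
    nu = 4 - c**2
    return (5*c**6/4 + nu*(13*c**4*x/2 + 2*c**4*x**2
            + (37*c**2 - 37*c**4/4)*x**2 + (4*c**2 - c**4)*x**4
            + (236/7 + 7*(c**2 - 18/7)**2)*x**3
            + (1 - x**2)*(2*c*nu*(1 + 2*x)*x + 4*c**3*(1 + 3*x))*y
            + 4*(1 - x**2)*(3*c**2*x + nu*(5 + x**2))*y**2
            + 12*(c**2 + 2*nu*x)*(1 - x**2)*(1 - y**2)))

c, x, y = np.meshgrid(np.linspace(0, 2, 101),
                      np.linspace(0, 1, 101),
                      np.linspace(0, 1, 101), indexing='ij')
print(vartheta(c, x, y).max())   # -> 320 (attained, e.g., at c=0, x=1)
```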
**A.** On the eight vertices of \(\Omega\), from (2.11), we have
\[\vartheta\left(0,0,0\right)=0,\quad\vartheta\left(2,0,0\right)=\vartheta\left(2,0,1\right)=\vartheta\left(2,1,0\right)=\vartheta\left(2,1,1\right)=80,\] \[\vartheta\left(0,0,1\right)=\vartheta\left(0,1,0\right)=\vartheta\left(0,1,1\right)=320.\]
**B.** Now, we consider each of the twelve edges of \(\Omega\) in view of (2.11), as follows.
(i) For the edge \(c=0,\ x=0,\ 0<y<1\), we obtain
\[\vartheta\left(0,0,y\right)=320y^{2}\leq 320.\]
(ii) For the edge \(c=0,\ x=1,\ 0<y<1\), we obtain
\[\vartheta\left(0,1,y\right)=320.\]
(iii) For \(c=0,\ y=0,\ 0<x<1\),
\[\vartheta\left(0,x,0\right)=384x-64x^{3}\leq 320.\]
(iv) For \(c=0,\ y=1,\ 0<x<1\),
\[\vartheta\left(0,x,1\right)=320-256x^{2}+320x^{3}-64x^{4}=320-64(4-x)(1-x)x^{2}\leq 320.\]
(v) If \(x=0,\ y=0,\ 0<c<2\) in (2.11), then
\[\vartheta\left(c,0,0\right)=48c^{2}-12c^{4}+\frac{5c^{6}}{4}\leq 80.\]
(vi) For \(x=0,\ y=1,0<c<2\),
\[\vartheta\left(c,0,1\right)=320-160c^{2}+16c^{3}+20c^{4}-4c^{5}+\frac{5c^{6}} {4}\leq 320.\]
(vii) For the edges: \(x=1,\ y=0,0<c<2\) or \(x=1,\ y=1,0<c<2\), we have
\[\vartheta\left(c,1,y\right)=320-60c^{2}+16c^{4}-4c^{6}\leq 320.\]
(viii) For the edges: \(c=2,\ x=0,\ 0<y<1\) or \(c=2,\ x=1,\ 0<y<1\) or \(c=2,\ y=0,\ 0<x<1\) or \(c=2,\ y=1,\ 0<x<1\), we obtain
\[\vartheta\left(2,x,y\right)=80.\]
**C.** Further, let us consider the six faces of \(\Omega\) with respect to \(\vartheta\left(c,x,y\right)\), from (2.11).
(i) If \(c=2\), in (2.11), then
\[\vartheta\left(2,x,y\right)=\frac{5\cdot 2^{6}}{4}=80.\]
(ii) On the face \(c=0\), from (2.11), we obtain
\[\vartheta\left(0,x,y\right) =384x-64x^{3}+(320-384x-256x^{2}+384x^{3}-64x^{4})y^{2}\] \[=384x-64x^{3}+64(5-x)(1-x)^{2}(1+x)y^{2}\] \[\leq 384x-64x^{3}+64(5-x)(1-x)^{2}(1+x)\] \[=320-256x^{2}+320x^{3}-64x^{4}\leq 320\]
(iii) On the face \(x=0,\ c\in(0,2),\ y\in(0,1)\), from (2.11), we obtain
\[\vartheta\left(c,0,y\right) =\frac{5c^{6}}{4}+(4-c^{2})(4c^{3}y+20(4-c^{2})y^{2}+12c^{2}(1-y^{ 2}))\] \[=\frac{5c^{6}}{4}+(4-c^{2})(4c^{3}y+80y^{2}+c^{2}(-20y^{2}+12(1-y^ {2})))\] \[\leq\frac{5c^{6}}{4}+(4-c^{2})(4c^{3}+80+12c^{2})\leq 320,\ c\in(0,2).\]
(iv) On the face \(x=1,\ c\in(0,2),\ y\in(0,1)\) in (2.11), we observe that the function \(\vartheta\left(c,1,y\right)\) is independent of \(y\); from B(vii), we have
\[\vartheta\left(c,1,y\right)\leq 320.\]
(v) For \(y=0,\ c\in(0,2),\ x\in(0,1)\) in (2.11), we get
\[\vartheta\left(c,x,0\right) =\frac{5c^{6}}{4}+(4-c^{2})(\frac{13c^{4}x}{2}+2c^{4}x^{2}+(37c^{ 2}-\frac{37c^{4}}{4})x^{2}+(4c^{2}-c^{4})x^{4}\] \[\qquad\qquad+(\frac{236}{7}+7(-(\frac{18}{7})+c^{2})^{2})x^{3}+1 2(c^{2}+2(4-c^{2})x)(1-x^{2}))\] \[=\frac{5c^{6}}{4}+(4-c^{2})(96x-16x^{3}+c^{4}(\frac{13x}{2}-\frac {29x^{2}}{4}+7x^{3}-x^{4})\] \[\qquad\qquad+c^{2}(12-24x+25x^{2}-12x^{3}+4x^{4}))\] \[\leq\frac{5c^{6}}{4}+(4-c^{2})(80+\frac{21}{4}c^{4}+12c^{2})\leq 3 20,\ c\in(0,2),\]
(vi) For \(y=1\),
\[\vartheta\left(c,x,1\right)= 320-160c^{2}+16c^{3}+20c^{4}-4c^{5}+\frac{5c^{6}}{4}\] \[+\left(32c+48c^{2}+32c^{3}+14c^{4}-10c^{5}-\frac{13c^{6}}{2}\right)x\] \[+\left(-256+64c+276c^{2}-48c^{3}-82c^{4}+8c^{5}+\frac{29c^{6}}{4}\right)x^{2}\] \[+\left(320-32c-272c^{2}-32c^{3}+76c^{4}+10c^{5}-7c^{6}\right)x^{3}\] \[+\left(-64-64c+48c^{2}+32c^{3}-12c^{4}-4c^{5}+c^{6}\right)x^{4},\]
which is exactly the function \(\Psi(c,x)\) of Lemma 1.3. Using Lemmas 1.2-1.8, we conclude that \(\vartheta\left(c,x,1\right)\leq 320\) for \(c\in(0,2)\) and \(x\in(0,1)\).
**D.** Now we consider the interior portion of \(\Omega\), i.e. \((0,2)\times(0,1)\times(0,1)\).
Differentiating \(\vartheta(c,x,y)\) given in (2.11) partially with respect to \(y\), we obtain
\[\frac{\partial\vartheta}{\partial y}=(4-c^{2}) \Big{(}(1-x^{2})(2c(4-c^{2})x(1+2x)+4c^{3}(1+3x))\] \[-24(c^{2}+2(4-c^{2})x)(1-x^{2})y+8(1-x^{2})(3c^{2}x+(4-c^{2})(5+x ^{2}))y\Big{)}.\]
Upon solving \(\frac{\partial\vartheta}{\partial y}=0\), we get
\[y_{1}=\frac{4cx(1+2x)+c^{3}(2+(5-2x)x)}{4(c^{2}(-8+x)-4(-5+x))(-1+x)},\]
and \(y_{1}>0\) if and only if the following condition
\[c^{2}(-8+x)-4(-5+x)<0 \tag{2.12}\]
must hold. From (2.12), we have
\[\left(\frac{4}{\sqrt{7}}<c\leq\sqrt{\frac{5}{2}}\wedge\frac{8c^{2}-20}{c^{2}-4 }<x<1\right)\vee\left(\sqrt{\frac{5}{2}}<c<2\wedge 0<x<1\right). \tag{2.13}\]
Hence, \(y_{1}<0\) for \(\left(0<c\leq\frac{4}{\sqrt{7}}\wedge 0<x<1\right)\vee\left(\frac{4}{\sqrt{7}}<c<\sqrt{\frac{5}{2}}\wedge 0<x<\frac{8c^{2}-20}{c^{2}-4}\right)\). Therefore, \(\vartheta(c,x,y)\) has no critical point in the interior of \((0,\frac{4}{\sqrt{7}}]\times(0,1)\times(0,1)\). It remains to maximize \(\vartheta(c,x,y)\) in \((\frac{4}{\sqrt{7}},2)\times(0,1)\times(0,1)\).
We can rewrite \(\vartheta(c,x,y)\) as
\[\vartheta(c,x,y)= \frac{5c^{6}}{4}+\left(4-c^{2}\right)\left[2c^{4}x^{2}+\frac{13c ^{4}x}{2}+\left(7\left(c^{2}-\frac{18}{7}\right)^{2}+\frac{236}{7}\right)x^{3}\right.\] \[+\left(4c^{2}-c^{4}\right)x^{4}+\left(37c^{2}-\frac{37c^{4}}{4} \right)x^{2}\] \[+\left(1-x^{2}\right)\left(4c^{3}(3x+1)+2\left(4-c^{2}\right)cx( 2x+1)\right)y\] \[+\left(1-x^{2}\right)\left(4\left(4-c^{2}\right)\left(x^{2}+5 \right)+12c^{2}x-12\left(2\left(4-c^{2}\right)x+c^{2}\right)\right)y^{2}\] \[\left.+12\left(1-x^{2}\right)\left(2\left(4-c^{2}\right)x+c^{2} \right)\Bigg{]}.\]
**Case D1** Suppose, \(\left(4\left(4-c^{2}\right)\left(x^{2}+5\right)+12c^{2}x-12\left(2\left(4-c^{ 2}\right)x+c^{2}\right)\right)>0\).
Then, \(\vartheta(c,x,y)\leq\vartheta(c,x,1)<320\).
**Case D2** Suppose, \(\left(4\left(4-c^{2}\right)\left(x^{2}+5\right)+12c^{2}x-12\left(2\left(4-c^{ 2}\right)x+c^{2}\right)\right)\leq 0\).
Then,
\[\vartheta(c,x,y)\leq \frac{5c^{6}}{4}+\left(4-c^{2}\right)\left[2c^{4}x^{2}+\frac{13c ^{4}x}{2}+\left(7\left(c^{2}-\frac{18}{7}\right)^{2}+\frac{236}{7}\right)x^{3}\right.\] \[+\left(4c^{2}-c^{4}\right)x^{4}+\left(37c^{2}-\frac{37c^{4}}{4} \right)x^{2}\] \[+\left(1-x^{2}\right)\left(4c^{3}(3x+1)+2\left(4-c^{2}\right)cx( 2x+1)\right)y\] \[+12\left(1-x^{2}\right)\left(2\left(4-c^{2}\right)x+c^{2}\right)\Bigg{]}\]
\[\leq \frac{5c^{6}}{4}+\left(4-c^{2}\right)\left[2c^{4}x^{2}+\frac{13c^{4}x}{2}+\left(7\left(c^{2}-\frac{18}{7}\right)^{2}+\frac{236}{7}\right)x^{3}\right.\] \[+\left(4c^{2}-c^{4}\right)x^{4}+\left(37c^{2}-\frac{37c^{4}}{4}\right)x^{2}\] \[+\left(1-x^{2}\right)\left(4c^{3}(3x+1)+2\left(4-c^{2}\right)cx(2x+1)\right)\] \[+12\left(1-x^{2}\right)\left(2\left(4-c^{2}\right)x+c^{2}\right)\Bigg{]}\] \[= \frac{5c^{6}}{4}-4c^{5}-12c^{4}+16c^{3}+48c^{2}\] \[+\left(c^{6}-4c^{5}-8c^{4}+32c^{3}+16c^{2}-64c\right)x^{4}\] \[+\left(-7c^{6}+10c^{5}+40c^{4}-32c^{3}-32c^{2}-32c-64\right)x^{3}\] \[+\left(\frac{29c^{6}}{4}+8c^{5}-54c^{4}-48c^{3}+100c^{2}+64c\right)x^{2}\] \[+\left(-\frac{13c^{6}}{2}-10c^{5}+50c^{4}+32c^{3}-192c^{2}+32c+384\right)x\] \[\leq \frac{5c^{6}}{4}-4c^{5}-12c^{4}+16c^{3}+48c^{2}\] \[+\left(c^{6}-4c^{5}-8c^{4}+32c^{3}+16c^{2}-64c\right)x^{4}\] \[+\left(-7c^{6}+10c^{5}+40c^{4}-32c^{3}-32c^{2}-32c-64\right)x^{3}\] \[+\left(\frac{29c^{6}}{4}+8c^{5}-54c^{4}-48c^{3}+100c^{2}+64c\right)x^{2}\] \[+\left(-\frac{13c^{6}}{2}-10c^{5}+50c^{4}+32c^{3}-192c^{2}+32c+384\right)\] \[= 384+32c-\frac{21c^{6}}{4}-14c^{5}+38c^{4}+48c^{3}-144c^{2}\] \[+\left(c^{6}-4c^{5}-8c^{4}+32c^{3}+16c^{2}-64c\right)x^{4}\] \[+\left(-7c^{6}+10c^{5}+40c^{4}-32c^{3}-32c^{2}-32c-64\right)x^{3}\] \[+\left(\frac{29c^{6}}{4}+8c^{5}-54c^{4}-48c^{3}+100c^{2}+64c\right)x^{2}:=h(c,x).\]
For \(c\in\left(\frac{4}{\sqrt{7}},\sqrt{\frac{5}{2}}\right]\) and \(x\in(0,1)\)
\[h(c,x)\leq 295+28x^{2}-81x^{3}-8x^{4}<296.\]
For \(c\in\left(\sqrt{\frac{5}{2}},2\right)\) and \(x\in(0,1)\)
\[h(c,x)\leq 282+17x^{2}+x^{4}<300.\]
Hence, from **Case D1** and **Case D2**, \(\vartheta(c,x,y)<320\).
In view of cases **A**, **B**, **C** and **D**, we obtain
\[\max\bigg{\{}\vartheta(c,x,y):c\in[0,2],\ x\in[0,1]\text{ and }y\in[0,1]\bigg{\}}\leq 320. \tag{2.14}\]
Combining the expressions (2.10) and (2.14), we get
\[\left|H_{3,1}(f^{-1})\right|\leq\frac{1}{16}. \tag{2.15}\]
For \(f_{0}\in\mathcal{F}\), we obtain \(a_{2}=a_{4}=0,\ a_{3}=1/2\) and \(a_{5}=3/8\); further, \(t_{2}=t_{4}=0,\ t_{3}=-1/2\) and \(t_{5}=3/8\), which gives \(|H_{3,1}(f_{0}^{-1})|=1/16\), so the bound is sharp.
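This sharpness computation can be checked directly (an illustrative verification, not part of the paper):

```python
import sympy as sp

# inverse coefficients of f0(z) = z/sqrt(1 - z^2)
t2, t3, t4, t5 = 0, sp.Rational(-1, 2), 0, sp.Rational(3, 8)
H = sp.Matrix([[1, t2, t3], [t2, t3, t4], [t3, t4, t5]]).det()
print(H)   # -> -1/16, so |H_{3,1}(f0^{-1})| = 1/16
```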
**Data Availability:** My manuscript has no associated data.
|
2301.05890 | Q-balls in K-field theory | We study the existence and stability of Q-balls in noncanonical scalar field
theories, $K(|\Phi|^2,X)$ where $\Phi$ is the complex scalar field and $X$ is
the kinetic term. We extend the Vakhitov-Kolokolov stability criterion to
K-field theories. We derive the condition for the perturbations to have a
well-posed Cauchy problem. We find that $K_{,X}>0$ and $K_{,X}+XK_{,XX}>0$ are
necessary but not sufficient conditions. The perturbations define a strongly
hyperbolic system if $(K_{,X}-2\phi'^2 K_{,XX})(K_{,X}+2\omega^2\phi^2 K_{,XX})
> 0$. For all modifications studied, we found that perturbations propagate at a
speed different from light. Generically, the noncanonical scalar field can
lower the charge and energy of the Q-ball, thereby improving its stability. | Aníbal Faúndez, Radouane Gannouji | 2023-01-14T11:13:22Z | http://arxiv.org/abs/2301.05890v2 | # Q-balls in K-field theory
###### Abstract
We study the existence and stability of Q-balls in non-canonical scalar field theories, \(K(|\Phi|^{2},X)\) where \(\Phi\) is the complex scalar field and \(X\) is the kinetic term. We extend the Vakhitov-Kolokolov stability criterion to K-field theories. We derive the condition for the perturbations to have a well-posed Cauchy problem. We find that \(K_{,X}>0\) and \(K_{,X}+XK_{,XX}>0\) are necessary but not sufficient conditions. For all modifications studied, perturbations propagate at a speed different from light. Generically, the non-canonical scalar field can lower the charge and energy of the Q-ball.
## I Introduction
Q-balls are particle-like objects that can be defined as lumps of a singularity-free scalar field with finite energy. They were originally discovered in [1] and independently rediscovered in [2]. Contrary to topological solitons, they do not carry a topological charge but a Noether charge, based originally on a global \(U(1)\) symmetry, and therefore they belong to the class of nontopological solitons. The scalar field is trapped in some region of space by its nonlinear self-interaction, therefore forming a particle-like object carrying charge and energy.
Q-balls can be produced via many mechanisms, which makes them particularly interesting in cosmology. Indeed, they could be produced in inflationary models such as natural inflation [3; 4], where, if a global symmetry of a complex scalar field is spontaneously broken, we end up with the inflaton as the Goldstone boson and a naturally flat potential due to the shift symmetry. Also, in supersymmetric extensions of the standard model (see e.g. [5]), Q-balls emerge naturally, where the global charge can be carried by the baryon or the lepton number. For example, the Affleck-Dine mechanism [6; 7] uses the supersymmetric flat directions to generate baryogenesis. In this context, some of these flat directions can be parametrized by a complex scalar field, which is in general a condensate of squarks, sleptons and Higgs fields. This condensate can be unstable and form Q-balls [8].
Of course, the most interesting property of Q-balls is their stability, because they could then be considered dark matter candidates [9; 10]. For that reason, stability will be our main focus in this paper, along with some properties related to the existence of Q-balls. The analysis of classical stability was carried out in [11; 12], where it was found that, for a Q-ball of frequency \(\omega\) and charge \(Q\), stability is equivalent to the condition \(dQ/d\omega<0\). It was shown in [13] that the stability of gauged Q-balls is not related to this condition. It would be interesting to see the extension of this criterion to Q-balls with a global charge but in modified gravity theories.
We will study three types of stability conditions that appear in the literature [14], namely, classical stability as we have previously mentioned, absolute stability, and stability against fission [12].
In most papers, a canonical scalar field is considered, as it appears naturally at low energies in various theories. But studying Q-balls in the early universe might modify this simple picture. Indeed, higher dimensions naturally produce scalar fields with nonlinear kinetic terms, such as the D3-brane [15] or braneworld gravity [16]. Also, in string theory, a rolling tachyon has a Dirac-Born-Infeld (DBI) type of action [17]. It is therefore natural to look at non-canonical scalar fields. Q-balls with a DBI-type kinetic term were studied in [18], along with their stability using catastrophe theory [19]. Motivated by this, we will study Q-balls for a complex K-field, also known as K-inflation [20] or K-essence [21].
The plan of the paper is as follows. We introduce the model before discussing the stability conditions encountered in the literature. In the next section, we analyze the range of existence of the Q-balls and define the energy conditions for these solutions. Finally, we study numerically the properties of the Q-balls before turning to the perturbation equations. We analyze the strong hyperbolicity of these equations along with the stability of the Q-ball, before concluding.
## II Q-balls
Let us consider the density Lagrangian
\[\mathcal{L}=K(|\Phi|^{2},X) \tag{1}\]
where \(K\) is a generic function of a complex scalar field \(\Phi\) and the kinetic term \(X=-\partial_{\mu}\Phi\partial^{\mu}\Phi^{*}\). The equation of motion is
\[\nabla_{\mu}(K_{,X}\partial^{\mu}\Phi)+\Phi K_{,|\Phi|^{2}}=0 \tag{2}\]
where we have used the notation \(K_{,A}\equiv\partial K/\partial A\).
The model admits a global U(1) symmetry to which the associated Noether current is
\[j^{\mu}=iK_{,X}\Big{(}\Phi^{*}\partial^{\mu}\Phi-\Phi\partial^{\mu}\Phi^{*} \Big{)} \tag{3}\]
This current is conserved on-shell, \(\partial_{\mu}j^{\mu}=0\). The corresponding conserved scalar charge (or total particle number) is
\[Q=\int\mathrm{d}^{3}xj^{0}=i\int\mathrm{d}^{3}xK_{,X}(\Phi\dot{\Phi}^{*}-\dot{ \Phi}\Phi^{*}) \tag{4}\]
To obtain the energy, we define the canonical conjugate momenta to the variables \(\Phi\) and \(\Phi^{*}\),
\[\pi_{\Phi}=\frac{\partial\mathcal{L}}{\partial\dot{\Phi}}=K_{,X} \dot{\Phi}^{*} \tag{5}\] \[\pi_{\Phi^{*}}=\frac{\partial\mathcal{L}}{\partial\dot{\Phi}^{*}} =K_{,X}\dot{\Phi} \tag{6}\]
so the Hamiltonian density is
\[\mathcal{H}=\pi_{\Phi}\dot{\Phi}+\pi_{\Phi^{*}}\dot{\Phi}^{*}-\mathcal{L}=2| \dot{\Phi}|^{2}K_{X}-K \tag{7}\]
The energy of the system is then
\[E=\int\mathrm{d}^{3}x\Big{(}2|\dot{\Phi}|^{2}K_{X}-K\Big{)} \tag{8}\]
We are looking for solutions which minimize the energy for a given charge Q. For that, we define the functional
\[E_{\omega}=E+\omega\Big{[}Q-i\int\mathrm{d}^{3}xK_{,X}(\Phi\dot{\Phi}^{*}-\dot {\Phi}\Phi^{*})\Big{]} \tag{9}\]
where \(\omega\) is a Lagrange multiplier which enforces the given charge Q. We have
\[E_{\omega} =\omega Q+\int\mathrm{d}^{3}x\Big{[}K_{X}\Big{(}2|\dot{\Phi}|^{2 }-i\omega(\Phi\dot{\Phi}^{*}-\dot{\Phi}\Phi^{*})\Big{)}-K\Big{]}\] \[=\omega Q+\int\mathrm{d}^{3}x\Big{[}K_{X}|\dot{\Phi}-i\omega\Phi|^ {2}+K_{X}(|\dot{\Phi}|^{2}-\omega^{2}|\Phi|^{2})\] \[\qquad\qquad\qquad-K\Big{]} \tag{10}\]
In the case of a canonical scalar field, \(K=X-V(|\Phi|^{2})\), we have
\[E_{\omega}=\omega Q +\int\mathrm{d}^{3}x\Big{[}|\dot{\Phi}-i\omega\Phi|^{2}-\omega^{2 }|\Phi|^{2}+|\vec{\nabla}\Phi|^{2}\] \[+V(|\Phi|^{2})\Big{]} \tag{11}\]
where we used that \(X=-\partial_{\mu}\Phi\partial^{\mu}\Phi^{*}=|\dot{\Phi}|^{2}-|\vec{\nabla}\Phi|^{2}\). We can therefore conclude that for a given charge Q, the energy is minimized when \(\dot{\Phi}-i\omega\Phi=0\), which means \(\Phi(t,\vec{x})=\phi(\vec{x})e^{i\omega t}\) [12]. This simple argument for the canonical scalar field cannot be easily generalized to the K-field. But we observe that in the general case, if \(\Phi(t,\vec{x})=\phi(\vec{x})e^{i\omega t}\),
\[E_{\omega}=\omega Q-\int\mathrm{d}^{3}x\ K \tag{12}\]
which implies that the extrema of the energy (for fixed charge) coincide with the extrema of the action. Therefore solutions of the following type \(\Phi(t,\vec{x})=\phi(\vec{x})e^{i\omega t}\) extremize the energy. Even if we don't know of the existence of other solutions which could also extremize the energy functional, we will assume in the future for this paper this time-dependent phase of the solution.
For a given model, the only parameter which characterizes the energy \(E\) and the charge \(Q\) is the parameter \(\omega\). Therefore we can consider that energy and charge are functions of \(\omega\), thus differentiating the energy, we get
\[\frac{\mathrm{d}E}{\mathrm{d}\omega}=\int\mathrm{d}^{3}x\Big{[}2\omega\phi^{2 }K_{,X}+4\omega^{3}\phi^{4}K_{,XX}\Big{]} \tag{13}\]
Performing the same differentiation for the charge \(Q\), we find
\[\frac{\mathrm{d}E}{\mathrm{d}\omega}=\omega\frac{\mathrm{d}Q}{\mathrm{d}\omega} \tag{14}\]
which extends to K-fields the results of [11]. When \(\frac{\mathrm{d}Q}{\mathrm{d}\omega}=0\), also \(\frac{\mathrm{d}E}{\mathrm{d}\omega}=0\), which corresponds to an extremum of the charge and the energy at the same time. These extrema will correspond to the cusps in the diagram \(E(Q)\). When \(\frac{\mathrm{d}Q}{\mathrm{d}\omega}\neq 0\), we obtain
\[\frac{\mathrm{d}E}{\mathrm{d}Q}=\omega \tag{15}\]
which corresponds to the generic relation found for a \(U(1)\) Q-ball.
## III Stability
Usually, three different stability criteria are discussed in the literature. The first condition considers that a given Q-ball should not decay into smaller Q-balls, sometimes referred to as stability against fission [12]. In that case, stability translates into
\[E(Q_{1}+Q_{2})<E(Q_{1})+E(Q_{2}) \tag{16}\]
and taking derivatives with respect to both charges \((Q_{1},Q_{2})\), we obtain the equivalent condition \(\frac{\mathrm{d}^{2}E}{\mathrm{d}Q^{2}}<0\), which by using eq. (15) reduces to \(\frac{\mathrm{d}Q}{\mathrm{d}\omega}<0\). Notice the similarity with the more generic Vakhitov-Kolokolov stability criterion [22] (or spectral stability). Of course, because of eq. (14), we could equivalently consider \(\frac{\mathrm{d}E}{\mathrm{d}\omega}<0\).
The second stability criterion considers decay into free particles of mass \(M=\sqrt{\frac{V^{\prime\prime}(0)}{2}}\). In order to avoid the decay of a Q-ball into \(Q\) free particles of rest mass \(M\), we need \(E(Q)<MQ\).
Finally, the last criterion considers the time evolution of small perturbations, the so-called classical stability, which we will analyze later.
Notice that from catastrophe theory, a simple criterion of stability has been proved [23]. Indeed, considering the diagram \(E(Q)\), the lowest branch corresponds to the stable soliton while the upper branch is unstable. This condition will be found to be equivalent to linear stability.
## IV Existence
In this section, we briefly summarize the conditions of existence of Q-balls. These conditions are obtained by constraining the shape of the potential.
Considering a spherically symmetric spacetime and \(\Phi=\phi(r)e^{i\omega t}\), eq. (2) becomes
\[K_{,X}\Big{(}\phi^{\prime\prime}(r)+\frac{2}{r}\phi^{\prime}(r)+ \omega^{2}\phi(r)\Big{)}+\phi^{\prime}(r)X^{\prime}(r)K_{,XX}\] \[\qquad+\phi^{\prime}(r)^{2}K_{,\phi X}+\frac{1}{2}K_{,\phi}=0 \tag{4.1}\]
with \(X=\omega^{2}\phi(r)^{2}-\phi^{\prime}(r)^{2}\).
Let us first consider the canonical case, namely \(K=X-V(\phi)\). The equation of motion reduces to
\[\phi^{\prime\prime}(r)+\frac{2}{r}\phi^{\prime}(r)+\omega^{2}\phi(r)-\frac{V^{ \prime}(\phi)}{2}=0 \tag{4.2}\]
which can be written as
\[\phi^{\prime\prime}(r)+\frac{2}{r}\phi^{\prime}(r)-V^{\prime}_{\rm eff}(\phi)=0 \tag{4.3}\]
with \(V_{\rm eff}(\phi)=(V(\phi)-\omega^{2}\phi^{2})/2\). We see that the \(\omega^{2}\) term acts as a tachyonic contribution to the mass of the field, which will produce solitonic solutions otherwise absent for \(\omega=0\). Considering only solutions with finite energy, the energy functional (8), \(E=\int{\rm d}^{3}x(\phi^{\prime}(r)^{2}+\omega^{2}\phi^{2}+V(\phi))\), implies that \((\phi,\phi^{\prime})\to 0\) for \(r\rightarrow\infty\) and \(V(0)=0\) (we assumed \(V(\phi)>0\)).
It is easier to use the analogy with a particle in Newtonian mechanics, namely replacing \(\phi\to x\) and \(r\to t\) which gives \(\ddot{x}+\frac{2}{t}\dot{x}+W^{\prime}_{\rm eff}(x)=0\), where \(W_{\rm eff}(x)=-V_{\rm eff}(x)\). Looking for a trajectory \(\phi(r)\) or equivalently \(x(t)\), we need to impose \(x(\infty)=0\) to obtain a finite energy solution. Therefore, the problem reduces to classifying the different trajectories of the equivalent particle giving finite energy. It is easy to show [2] that we need to impose \(W^{\prime\prime}_{\rm eff}(0)<0\) and \(W_{\rm eff}(\phi)>0\) around \(\phi(r=0)\). These conditions translate into \(V^{\prime\prime}(0)>2\omega^{2}\) as well as \(\min\Bigl{(}\frac{V(\phi)}{\phi^{2}}\Bigr{)}\leq\omega^{2}\). Thus, non-renormalizable potentials have to be considered and the simplest could be \(V(\phi)=m^{2}\phi^{2}-b\phi^{4}+\lambda\phi^{6}\). The previous constraints reduce to
\[0<m^{2}-\frac{b^{2}}{4\lambda}<\omega^{2}\leq m^{2} \tag{4.4}\]
The positivity of \(m^{2}-b^{2}/4\lambda\) is imposed by demanding that \(V(0)\) is a global minimum. In this paper, we will normalize [24] the parameters such that \(\lambda=1\) and \(b=2\), which implies \(m>1\). Therefore we will consider \(m^{2}=1.1\), which implies \(0.32<\omega\leq 1.05\). The Q-ball will exist only in this range of frequencies. It is important to mention that this range will change for K-fields. For example, in a model where \(K=X+\alpha X^{2}-V(\phi)\), we have around \(r=0\), using the condition \(\phi^{\prime}=0\), \(\phi^{\prime\prime}(r)+W^{\prime}_{eff}(\phi)\simeq 0\) with
\[W^{\prime}_{eff}=\omega^{2}\phi-\frac{m^{2}-2b\phi^{2}+3\lambda\phi^{4}}{1+2 \alpha\omega^{2}\phi^{2}}\phi \tag{4.5}\]
Therefore the condition \(W_{eff}>0\) for some range of the scalar field implies a different value for the minimum of \(\omega\). For our parameters, we found that, to good accuracy, \(\omega_{min}\simeq(1+\alpha/30)/\sqrt{10}\), while \(\omega_{max}\) remains unchanged.
Another important condition for the existence of the Q-ball comes from the nature of the differential equation. We have an equation of the form
\[\Bigl{(}K_{,X}-2\phi^{\prime 2}K_{,XX}\Bigr{)}\phi^{\prime\prime}+F(\phi,\phi^{ \prime})=0 \tag{4.6}\]
To avoid singular points, we need to impose \(K_{,X}-2\phi^{\prime 2}K_{,XX}\neq 0\). Therefore, for any model, smoothly connected to the canonical case, \(K_{,X}-2\phi^{\prime 2}K_{,XX}=1\), we should impose \(K_{,X}-2\phi^{\prime 2}K_{,XX}>0\). Considering the model \(K=X+\alpha X^{2}-V(\phi^{2})\), we have \(1+2\alpha\omega^{2}\phi^{2}-6\alpha\phi^{\prime 2}>0\). Around the origin, we have \(\phi^{\prime}=0\), which implies the condition \(1+2\alpha\omega^{2}\phi_{0}^{2}>0\) and therefore large negative values of \(\alpha\) will not be allowed.
## V Energy conditions
For this type of model, the fluid interpretation is not always suitable because the kinetic term does not have a definite sign. However, it is mostly positive in the interior of the Q-ball and becomes negative only near its surface. Therefore, deep inside the Q-ball, we can use the hydrodynamical interpretation of the scalar field, by defining the energy-momentum tensor
\[T_{\mu\nu}=Kg_{\mu\nu}+K_{,X}(\partial_{\mu}\Phi\partial_{\nu}\Phi^{*}+ \partial_{\mu}\Phi^{*}\partial_{\nu}\Phi) \tag{5.1}\]
from which we define the energy density \(\rho=2|\dot{\Phi}|^{2}K_{,X}-K=2\omega^{2}\phi(r)^{2}K_{,X}-K\), the radial pressure \(P_{r}=2\phi^{\prime}(r)^{2}K_{,X}+K\) and finally the tangential pressure \(P_{t}=K\). These quantities can be converted into the pressure \(P=(P_{r}+2P_{t})/3\) and the shear force \(S=P_{r}-P_{t}\).
The hydrodynamical approach helps to obtain easily the energy conditions such as the strong energy condition (SEC)
\[K_{,X}\geq 0\,,\ \ K+(\omega^{2}\phi^{2}+\phi^{\prime 2})K_{,X}\geq 0 \tag{5.2}\]
the dominant energy condition (DEC)
\[K_{,X}\geq 0\,,\ \ (\omega^{2}\phi^{2}-\phi^{\prime 2})K_{,X}-K\geq 0 \tag{5.3}\]
the weak energy condition (WEC)
\[K_{,X}\geq 0\,,\ \ 2\omega^{2}\phi^{2}K_{,X}-K\geq 0 \tag{5.4}\]
and the null energy condition (NEC)
\[K_{,X}\geq 0 \tag{5.5}\]
We notice that \(K_{,X}\geq 0\) is common to all energy conditions.
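For a numerical profile, these conditions are straightforward to monitor pointwise. The following Python sketch (our illustrative check for the model \(K=X+\alpha X^{2}-V\) studied in the next section; all parameter values are made up) evaluates the four combinations at a given point of a profile.

```python
import numpy as np

m2, b, lam, alpha, w = 1.1, 2.0, 1.0, 0.1, 0.8   # illustrative model parameters

def V(phi):                    # sextic potential of Sec. IV
    return m2*phi**2 - b*phi**4 + lam*phi**6

def energy_conditions(phi, dphi):
    X = w**2*phi**2 - dphi**2
    K = X + alpha*X**2 - V(phi)
    KX = 1 + 2*alpha*X
    nec = KX                                  # NEC, Eq. (5.5)
    wec = 2*w**2*phi**2*KX - K                # WEC, Eq. (5.4): rho >= 0
    dec = (w**2*phi**2 - dphi**2)*KX - K      # DEC, Eq. (5.3)
    sec = K + (w**2*phi**2 + dphi**2)*KX      # SEC, Eq. (5.2)
    return nec, wec, dec, sec

print(energy_conditions(0.9, -0.1))   # each entry should be >= 0 where the condition holds
```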
## VI Numerical analysis
As we have mentioned, Q-balls are finite-energy objects and therefore have finite spatial extension, which imposes the asymptotic condition \(\phi(\infty)=0\). We therefore used a shooting method for each value of the frequency \(\omega\), with mixed boundary conditions \(\phi^{\prime}(0)=0\) and \(\phi(\infty)=0\). In practice, we have integrated the system from \(r=10^{-30}\) to some value \(r_{\rm max}\), and demanded that the solution remain unchanged if we increase \(r_{\rm max}\). In Figure 1, we have considered the standard model \(K(X)=X-V(|\Phi|^{2})\) with the potential defined in section IV. For lower frequencies, the thin-wall limit, the scalar field is constant and at some radius (often considered as the Q-ball radius) drops rapidly to zero, while for larger values of \(\omega\), the thick-wall limit, the profile is shallower. The latter will be unstable. In the same figure, we have represented the energy and the charge. The energy and charge seem to diverge at the frequencies \(\omega_{min}\) and \(\omega_{max}\). Also, \(E(\omega)\) and \(Q(\omega)\) reach their minimum at the same frequency, therefore defining a cusp in the energy-versus-charge plot. We also show the stability conditions of the Q-balls. The stability criterion against decay is stronger than the fission stability condition. In the \((Q,E)\) plot, it is easy to identify the stable Q-ball. Indeed, for every given charge \(Q\), two Q-balls exist; the one with the smallest energy corresponds to the solution stable under fission. We will see later that it also corresponds to the stable solution under linear perturbations.
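A minimal version of this shooting procedure, for the canonical case \(K=X-V\) at a single frequency, is sketched below; the bracket for \(\phi(0)\), the integration range and the tolerances are illustrative and generally need tuning with \(\omega\), and the K-field case only changes the right-hand side of the ODE.

```python
import numpy as np
from scipy.integrate import solve_ivp

m2, b, w = 1.1, 2.0, 0.8      # lambda = 1; potential and omega as in Sec. IV

def rhs(r, s):                # eq. (4.2): phi'' = -2 phi'/r - w^2 phi + V'(phi)/2
    phi, dphi = s
    dV = 2*m2*phi - 4*b*phi**3 + 6*phi**5
    return [dphi, -2*dphi/r - w**2*phi + dV/2]

def overshoots(phi0, r_max=60.0):
    sol = solve_ivp(rhs, (1e-30, r_max), [phi0, 0.0], rtol=1e-10, atol=1e-12)
    return np.any(sol.y[0] < 0)   # overshoot: phi crosses zero before decaying

lo, hi = 0.6, 1.09            # bracket inside the region allowed by W_eff > 0
for _ in range(60):           # bisection on the shooting parameter phi(0)
    mid = 0.5 * (lo + hi)
    if overshoots(mid):
        hi = mid              # phi(0) too large
    else:
        lo = mid              # phi(0) too small: phi turns back before reaching 0
print(0.5 * (lo + hi))        # phi(0) of the Q-ball profile at this omega
```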
Q-balls also have excited states, which correspond to solutions with nodes but with the same limit at infinity, namely \(\phi(\infty)=0\). In Figure 2, we show the first and second excited modes for a given frequency \(\omega\). In order to fulfill the boundary conditions, the initial conditions for excited states must be extremely fine-tuned. The excited states have, as expected, larger energy but also larger charge. We found that the frequency corresponding to \(dE/d\omega=0\) becomes larger with the number of nodes. For example, for the fundamental mode, we have minimal energy for \(\omega=0.972\), while \(\omega=1.015\) for the first excited mode and \(\omega=1.025\) for the second excited mode.
All these solutions are easily generalized to K-field theories. We will consider the simplest model, where the action is modified by a single parameter, \(K=X+\alpha X^{2}-V(|\Phi|)\), with \(\alpha\) the new parameter of the model. Generically, we found that the structure of the solutions does not change. Q-balls exist for a certain range of frequencies which depends on the parameter \(\alpha\). We see from Figure 3 that, for a given frequency, the Q-ball lowers its energy for large positive values of the parameter \(\alpha\), because the radius decreases. Notice that the critical value (\(E^{\prime}(\omega)=0\)) of the energy and charge is also lowered for larger values of \(\alpha\). Therefore, for a given frequency, the modified model with \(\alpha>0\) produces Q-balls with lower charge and energy. The modification by the K-field thus allows one to build Q-balls with small charge and energy or, on the contrary, with larger energy and charge. Finally, we found that for all values of the parameter \(\alpha\), in the limit \(\omega\rightarrow\omega_{\rm max}\), or thick-wall limit, we have the scaling solution \(E=\omega Q^{\gamma}\) with \(\gamma=1\pm 10^{-4}\). This expression generalizes results found in [14].
In Figure 4, we show the energy versus the frequency for different values of \(\alpha\), together with information on the violation of the energy conditions. We see that the NEC is never violated; this condition corresponds to \(1+2\alpha(\omega^{2}\phi^{2}-\phi^{\prime 2})>0\). It could be violated for very negative values of \(\alpha\), but constructing Q-balls for \(\alpha<-0.5\) becomes very challenging and often impossible. In general, the larger (and positive) \(\alpha\), the lower the
Figure 1: Left: the field \(\phi(r)\) as a function of the radial coordinate for different values of \(\omega\); for each value of \(\omega\), \(\phi(0)\) is adjusted such that \(\phi(\infty)=0\). Center: the energy \(E\) and the charge \(Q\) as functions of the frequency \(\omega\), with the critical frequency (change of colors) defined by the condition \(dQ/d\omega=0\). Right: the energy as a function of the charge. In all panels, green denotes configurations that are stable according to the fission criterion, while red denotes unstable ones. In the left panel, the solution at the critical frequency is shown in blue; in the right panel we have added the decay stability criterion, with red solid and red dashed lines marking the solutions unstable against decay, while fission-unstable configurations are represented only by red solid lines.
probability of violating an energy condition, except for the SEC, which is violated for any \(\alpha\).
## VII Perturbations
To study the mechanical stability, we decompose our field as
\[\Phi(t,r,\theta,\varphi)=\phi(r)e^{i\omega t}+\sum_{\ell,m}\delta\Phi_{\ell m}(t,r)e^{i\omega t}Y_{\ell}^{m}(\theta,\varphi)\]
where \(\phi(r)\) is the background scalar field studied in the previous sections, \(\delta\Phi_{\ell m}\) is the scalar field perturbation, the factor \(e^{i\omega t}\) in the second term is included for convenience, and \(Y_{\ell}^{m}\) are spherical harmonics. Because of the symmetries of the Q-balls, the perturbations are independent of the azimuthal number \(m\), so the spherical harmonics reduce to Legendre polynomials; we fix \(m=0\). Notice that the different modes \(\ell\) do not couple, and we therefore omit this index. At second order in perturbations, and after integrating over the angular variables, the action reduces to
Figure 4: Energy versus frequency for the K-field model with \(\alpha\) running from \(-0.5\) (red) to \(+0.5\) (purple) in steps of \(0.1\). In each panel, dotted lines mark the regime where the corresponding energy condition is violated. From top left to bottom right: violation of the SEC, DEC, WEC, and NEC.
Figure 3: The energy is shown as a function of the charge for different values of the parameter \(\alpha\), which runs from \(\alpha=-0.5\) (red) to \(\alpha=0.5\) (purple) in steps of \(0.1\).
Figure 2: The field \(\phi(r)\) is shown as a function of the radial coordinate for the fundamental mode (green) and the first (purple) and second (blue) radial excited modes for \(\omega=0.7\). We also show the evolution of the energy as a function of the frequency; the dashed region corresponds to solutions that are unstable according to the fission stability criterion.
\[S =\int\mathrm{d}t\mathrm{d}r\Big{[}r^{2}K_{,X}\dot{\Psi}_{1}^{2}-r^{2}( K_{,X}-2\phi^{\prime 2}K_{,XX})\Psi_{1}^{\prime 2}\] \[+r^{2}(K_{,X}+2\omega^{2}\phi^{2}K_{,XX})\dot{\Psi}_{2}^{2}-r^{2}K_ {,X}\Psi_{2}^{\prime 2}\] \[-2\omega r^{2}\phi\phi^{\prime}K_{,XX}\Big{(}\dot{\Psi}_{1}\Psi_{ 2}^{\prime}+\Psi_{1}^{\prime}\dot{\Psi}_{2}\Big{)}+A\Big{(}\dot{\Psi}_{1}\Psi_ {2}-\Psi_{1}\dot{\Psi}_{2}\Big{)}\] \[-M_{1}^{2}\Psi_{1}^{2}-M_{2}^{2}\Psi_{2}^{2}\Big{]} \tag{7.1}\]
where we have decomposed the perturbation into its real and imaginary parts, \(\delta\Phi=\Psi_{1}+i\Psi_{2}\), and
\[A =-2\omega r^{2}\frac{\mathrm{d}}{\mathrm{d}(\phi^{2})}\Big{(} \phi^{2}K_{,X}\Big{)}-\omega\frac{\mathrm{d}}{\mathrm{d}r}\Big{(}r^{2}\phi \phi^{\prime}K_{,XX}\Big{)}\] \[M_{1}^{2} =\lambda K_{,X}-\frac{r^{2}}{2}K_{,\phi\phi}-\frac{\mathrm{d}}{ \mathrm{d}r}\Big{(}r^{2}\phi^{\prime}K_{,X\phi}\Big{)}\] \[M_{2}^{2} =\lambda K_{,X}-r^{2}\Big{(}K_{,\phi^{2}}+\omega^{2}K_{,X}\Big{)}\] \[\lambda =\ell(\ell+1) \tag{7.2}\]
From this action, we obtain the two coupled equations for linear perturbations
\[-K_{,X}\ddot{\Psi}_{1}+(K_{,X}-2\phi^{\prime 2}K_{,XX})\Psi_{1}^{ \prime\prime}+2\omega\phi\phi^{\prime}K_{,XX}\dot{\Psi}_{2}^{\prime}\] \[\qquad+F_{1}(r,\Psi_{1},\Psi_{2},\Psi_{1}^{\prime},\dot{\Psi}_{2 })=0 \tag{7.3}\] \[-(K_{,X}+2\omega^{2}\phi^{2}K_{,XX})\ddot{\Psi}_{2}+K_{,X}\Psi_{2} ^{\prime\prime}+2\omega\phi\phi^{\prime}K_{,XX}\dot{\Psi}_{1}^{\prime}\] \[\qquad+F_{2}(r,\Psi_{1},\Psi_{2},\Psi_{2}^{\prime},\dot{\Psi}_{1 })=0 \tag{7.4}\]
with \(F_{1},F_{2}\) functions of the perturbations and their first derivatives. In order to study the stability, we need to ensure that the problem is well posed. For that, we derive the conditions of weak and strong hyperbolicity. Broadly speaking, weak hyperbolicity forbids solutions that grow exponentially in time, while strong hyperbolicity imposes a stronger bound than exponential growth and is therefore equivalent to local well-posedness of the Cauchy problem. For a strongly hyperbolic system, \(F_{1},F_{2}\) are irrelevant, whereas they can change the behavior of a weakly hyperbolic system. We define the vector \(u=(\Psi_{1},\Psi_{2})^{T}\), and the system (7.3,7.4) becomes
\[u_{,tt}=Au^{\prime\prime}+Bu_{,t}^{\prime}+\cdots \tag{7.5}\]
where \(\cdots\) indicates the lowest derivative terms, and
\[A_{11} =\frac{K_{,X}-2\phi^{\prime 2}K_{,XX}}{K_{,X}} \tag{7.6}\] \[A_{22} =\frac{K_{,X}}{K_{,X}+2\omega^{2}\phi^{2}K_{,XX}}\] (7.7) \[B_{12} =2\omega\phi\phi^{\prime}\frac{K_{,XX}}{K_{,X}}\] (7.8) \[B_{21} =2\omega\phi\phi^{\prime}\frac{K_{,XX}}{K_{,X}+2\omega^{2}\phi^{ 2}K_{,XX}} \tag{7.9}\]
while other elements of the matrices are zero. We consider wave solutions \(u(t,r)=e^{ikr}\hat{u}(t,k)\) and obtain
\[\hat{u}_{,tt}=-k^{2}A\hat{u}+ikB\hat{u}_{,t}+\cdots \tag{7.10}\]
This system can be reduced to first order by defining the variable \(\hat{v}=\hat{u}_{,t}/(i|k|)\)
\[\begin{pmatrix}\hat{v}\\ \hat{u}\end{pmatrix}_{,t}=i|k|\hat{P}\begin{pmatrix}\hat{v}\\ \hat{u}\end{pmatrix} \tag{7.11}\]
with
\[\hat{P}=\begin{pmatrix}0&\frac{k}{|k|}B_{12}&A_{11}&0\\ \frac{k}{|k|}B_{21}&0&0&A_{22}\\ 1&0&0&0\\ 0&1&0&0\end{pmatrix} \tag{7.12}\]
The well-posedness of this system reduces to the analysis of the matrix \(\hat{P}\) (see e.g. [25]). If, for all \(k\), the eigenvalues of \(\hat{P}\) are real, the system is weakly hyperbolic. The eigenvalues are
\[\Big{\{}\pm 1,\pm\sqrt{\frac{K_{,X}-2\phi^{\prime 2}K_{,XX}}{K_{,X}+2\omega^{2} \phi^{2}K_{,XX}}}\Big{\}} \tag{7.13}\]
Therefore, we conclude that, if \(\frac{K_{,X}-2\phi^{\prime 2}K_{,XX}}{K_{,X}+2\omega^{2}\phi^{2}K_{,XX}}\geq 0\), the system is weakly hyperbolic. Additionally, when
\[\frac{K_{,X}-2\phi^{\prime 2}K_{,XX}}{K_{,X}+2\omega^{2}\phi^{2}K_{,XX}}>0 \tag{7.14}\]
the system is strongly hyperbolic because the eigenvectors form a complete set. In that case, the two perturbations propagate at the speed
\[c_{1}=1\,,\qquad c_{2}=\sqrt{\frac{K_{,X}-2\phi^{\prime 2}K_{,XX}}{K_{,X}+2 \omega^{2}\phi^{2}K_{,XX}}} \tag{7.15}\]
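The spectrum (7.13) and the speeds (7.15) are easy to check numerically; the snippet below does so for arbitrary (nonphysical) sample values of \(K_{,X}\), \(K_{,XX}\), \(\phi\), \(\phi^{\prime}\) and \(\omega\), which are our own choices for illustration.

```python
# Numerical check of eq. (7.13): the eigenvalues of P-hat are {±1, ±c2}.
import numpy as np

K_X, K_XX, phi, dphi, omega = 1.3, 0.4, 0.7, -0.2, 0.9   # arbitrary test values

A11 = (K_X - 2*dphi**2*K_XX)/K_X
A22 = K_X/(K_X + 2*omega**2*phi**2*K_XX)
B12 = 2*omega*phi*dphi*K_XX/K_X
B21 = 2*omega*phi*dphi*K_XX/(K_X + 2*omega**2*phi**2*K_XX)

s = 1.0                            # k/|k| = +1; the choice -1 gives the same spectrum
P = np.array([[0, s*B12, A11, 0],
              [s*B21, 0, 0, A22],
              [1, 0, 0, 0],
              [0, 1, 0, 0]])
c2 = np.sqrt((K_X - 2*dphi**2*K_XX)/(K_X + 2*omega**2*phi**2*K_XX))
print(np.sort(np.linalg.eigvals(P).real))   # numerically computed eigenvalues
print(np.sort([-1.0, -c2, c2, 1.0]))        # expected values from (7.13)/(7.15)
```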
As discussed in Section IV, we impose the condition \(K_{,X}-2\phi^{\prime 2}K_{,XX}>0\), which implies \(K_{,X}+2\omega^{2}\phi^{2}K_{,XX}>0\). Summing these two conditions, we obtain the weaker conditions \(K_{,X}>0\) and \(K_{,X}+XK_{,XX}>0\).
Notice that the conditions for well-posedness of the system are independent of the energy conditions derived previously (5.2,5.3,5.4,5.5). In Figure 5, for the model \(K=X+\alpha X^{2}-V(\phi)\), we find that for a certain range of the parameters \((\omega,\alpha)\) the Cauchy problem is not well-posed; this never occurs for \(\alpha>0\). We also found that, for any \(\alpha<0\), the perturbations are superluminal in some region of space. Even if the classical theory is well posed, superluminal propagation of the perturbations could be an obstacle to a quantum version of the theory: for example, requiring UV completion for K-essence (the real-scalar-field analog of the case studied in this paper) imposes subluminal propagation [26], and a similar situation should be expected in our case [27]. Although the two notions are not equivalent, we found numerically, for all parameters \((\omega,\alpha)\) of Figure 5, that a system which violates the weak energy condition does not have a well-posed Cauchy problem; the converse is not true.
Restricting our analysis to the cases where the Cauchy problem is well-posed, we can study the mechanical stability of our solutions. For that, we assume the following form for the perturbation
\[\delta\Phi(t,r)=\frac{\eta(r)}{r^{n}}e^{i\rho t}+\frac{\chi^{*}(r)}{r^{n}}e^{-i \rho^{*}t} \tag{7.16}\]
The system (7.3,7.4) reduces to two coupled ordinary differential equations for \(\eta(r)\) and \(\chi(r)\). We have included a factor \(r^{n}\) for numerical stability; in general, \(n=\ell\) gives the fastest numerical convergence. In the canonical case \(K_{,X}=1\), the stability analysis shows that any instability corresponds to \(\rho=-\rho^{*}\)[13], which implies the condition \(\frac{dQ}{d\omega}<0\). We could not extend this analysis to K-field theories and therefore study the perturbations numerically. For that, our system can be written as four first-order differential equations for the variable \(\Psi\equiv(\eta,\chi,\eta^{\prime},\chi^{\prime})^{T}\), namely \(\Psi^{\prime}=B\Psi\), where the matrix \(B\) is given in the appendix. Considering the condition \(\phi^{\prime}=0\) on the scalar field at \(r=0\), it is easy to show that the perturbations behave as
\[\eta(r\simeq 0) =c_{0}r^{\ell+n} \tag{7.17}\] \[\chi(r\simeq 0) =c_{1}r^{\ell+n} \tag{7.18}\]
which implies
\[\Psi(0)=c_{0}r^{\ell+n-1}\begin{pmatrix}r\\ 0\\ \ell+n\\ 0\end{pmatrix}+c_{1}r^{\ell+n-1}\begin{pmatrix}0\\ r\\ 0\\ \ell+n\end{pmatrix} \tag{7.19}\]
Therefore, we can perform two numerical integrations from \(r=0\), with initial conditions \(\eta=r^{\ell+n},\chi=0\) and \(\eta=0,\chi=r^{\ell+n}\), respectively; the general solution is a linear combination of these two solutions with coefficients \((c_{0},c_{1})\). Similarly, we perform an integration from infinity toward \(r=0\), which gives another two-parameter family of solutions with free parameters \((c_{3},c_{4})\). We integrate from a large radius with the initial conditions
\[\eta=\frac{e^{-r\sqrt{\frac{K_{,\phi^{2}(0,0)}}{K_{,X}(0,0)}-(\rho+\omega)^{2 }}}}{r^{1-n}}\,,\ \ \chi=0 \tag{7.20}\]
or
\[\chi=\frac{e^{-r\sqrt{\frac{K_{,\phi^{2}(0,0)}}{K_{,X}(0,0)}-(\rho-\omega)^{2 }}}}{r^{1-n}}\,,\ \ \eta=0 \tag{7.21}\]
Having the solutions integrated from both boundaries, with four free parameters \((c_{0},c_{1},c_{3},c_{4})\), we match them at a given radius using the four continuity conditions on \((\eta,\chi,\eta^{\prime},\chi^{\prime})\). Notice that, because our system is linear, we can always fix one of the parameters, e.g. \(c_{0}=1\). We therefore end up with four conditions and three parameters, and the role of the fourth unknown is played by \(\rho\). In conclusion, only a discrete set of values of \(\rho\) can solve our problem.
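Equivalently, writing \(\Psi_{0}^{(1,2)}\) for the two solutions integrated from the origin and \(\Psi_{\infty}^{(1,2)}\) for the two integrated from infinity (our notation), the continuity conditions admit a nontrivial set of coefficients only when

\[\det\Big(\Psi_{0}^{(1)}(r_{m})\;\;\Psi_{0}^{(2)}(r_{m})\;\;\Psi_{\infty}^{(1)}(r_{m})\;\;\Psi_{\infty}^{(2)}(r_{m})\Big)=0\]

at the matching radius \(r_{m}\); the discrete roots of this \(4\times 4\) determinant in the complex \(\rho\) plane are the admissible eigenvalues.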
In Figure 6, we show \(|\phi+\delta\Phi|^{2}\) for \(\omega=(0.5,1)\) and \(\alpha=0\). For each case, we found the parameter \(\rho\) and, using eq. (7.16), obtained the time and space dependence of the solution. For \(\omega=0.5\), \(\rho\) is real: the radius of the Q-ball oscillates and the energy of the solution is constant in time. For \(\omega=1\), \(\rho\) is purely imaginary: the solution is unstable, and both the energy and the radius of the Q-ball grow exponentially.
The strategy is therefore simple: for each Q-ball, we search the complex plane for the values of \(\rho\) that solve the matching problem.
For the excited states, we found unstable modes at all frequencies \(\omega\); for various frequencies, these modes were not purely imaginary but had a nonzero real part as well.
For the fundamental solution, Figure 6 illustrates the two cases \(\alpha=0\), \(\omega=(0.5,1)\): the first frequency corresponds to a stable solution, with an oscillating Q-ball radius and an energy that remains perfectly constant in time, while the second corresponds to an unstable solution whose radius increases and whose energy grows exponentially.
Generically, we found that the stability region corresponds to \(dQ/d\omega<0\), generalizing results known in the canonical case. In the unstable region, the time scale of the instability is of order \(1/\text{Im}(\rho)\). We found that \(\text{Im}(\rho)\), and therefore this time scale, depends on the mode \(\ell\): for example, for \(\alpha=0\), \(\text{Im}(\rho)\) is of order \(10^{-1}\) for \(\ell=0\) and of order \(10^{-4}\) for \(\ell=1\). We therefore focus mainly on the spherical mode of perturbations, \(\ell=0\).
In Figure 7, we show the unstable modes for three values of \(\alpha\). For each \(\alpha\), the instability starts when \(dQ/d\omega=0\). We notice also that, even if for a given frequency such as \(\omega=1.03\) the Q-ball is unstable for
Figure 5: In gray, the region of parameter space \((\omega,\alpha)\) where the Cauchy problem is not well-posed and in cyan the region of superluminal propagation.
all values of the parameter \(\alpha\), the instability is slower to develop (lower value of \(\mathrm{Im}(\rho)\)) for larger positive values of \(\alpha\), which is consistent with the previous section, where we found that the energy is lowered.
In Figure 8, we summarize the various stability conditions. The quantum stability condition, namely stability against fission, is, as expected, stronger than the classical stability condition. We have also represented the regions where the energy conditions are violated. The NEC is never violated in the region of parameter space analyzed, while the WEC is violated only where the Cauchy problem is not well-posed. The violations of the SEC and the DEC are completely independent of the stability conditions.
## VIII Conclusion
In this work we studied Q-balls in non-canonical scalar field theories. We derived the general equations governing the existence and stability of these objects. We found that stability against fission and linear mechanical stability are equivalent, both reducing to \(Q^{\prime}(\omega)<0\); the condition for stability against decay into free particles is stronger. We found that the perturbations have a well-posed Cauchy problem if \(\frac{K_{,X}-2\phi^{\prime 2}K_{,XX}}{K_{,X}+2\omega^{2}\phi^{2}K_{,XX}}>0\). When the perturbations are strongly hyperbolic, we found that
Figure 8: Space of parameters \((\omega,\alpha)\) within the region where the Cauchy problem is well-posed. Shown are the regions of quantum stability (against fission) and classical stability, as well as the regions where energy conditions such as the SEC and DEC are violated. As before, cyan and white denote superluminal and subluminal propagation, respectively.
Figure 7: \(\mathrm{Im}(\rho)\) as a function of \(\omega\) for \(\alpha=(-0.5,0,+0.5)\). The existence of such a mode implies an instability of the background solution. The dotted lines correspond to unstable modes lying in a region where the Cauchy problem is not well-posed, which should therefore be excluded from the analysis.
Figure 6: Space-time diagrams of \(|\Phi|^{2}\). The upper diagram shows a stable background solution with \(\omega=0.5\), and the lower one an unstable solution with \(\omega=1\). Both solutions have \(\alpha=0\).
perturbations are superluminal or subluminal. In the particular case \(K=X+\alpha X^{2}-V(|\Phi|^{2})\), the perturbations are subluminal and luminal for \(\alpha>0\), and superluminal and luminal for \(\alpha<0\). We found that a Q-ball lowers its energy for larger positive values of \(\alpha\); even in the unstable region, the time scale of the instability becomes larger, so the Q-ball is effectively longer-lived. The frequency at which Q-balls become unstable increases with \(\alpha\). It would be interesting to find models for which all Q-balls are stable irrespective of their frequency.
Finally, we have studied the various energy conditions (SEC, DEC, WEC, NEC). We found that the NEC is never violated and that none of these conditions can be related to mechanical stability.
## Acknowledgements
The work of A.F. is supported by ANID/CONICYT No. 21171262 while R.G. is supported by ANID FONDECYT Regular No. 1220965.
## Appendix A Perturbation equations
The matrix of the system \(\Psi^{\prime}=B\Psi\) can be decomposed as \(B=B^{(0)}+\rho B^{(1)}+\rho^{2}B^{(2)}\), where
\[B^{(0)} =\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ B^{(0)}_{31}&B^{(0)}_{32}&B^{(0)}_{33}&B^{(0)}_{34}\\ B^{(0)}_{32}&B^{(0)}_{31}&B^{(0)}_{34}&B^{(0)}_{33}\end{pmatrix} \tag{16}\] \[B^{(1)} =\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ B^{(1)}_{31}&B^{(1)}_{32}&B^{(1)}_{33}&B^{(1)}_{34}\\ -B^{(1)}_{32}&-B^{(1)}_{31}&-B^{(1)}_{34}&-B^{(1)}_{33}\end{pmatrix}\] (17) \[B^{(2)} =\begin{pmatrix}0&0&0&0\\ 0&0&0&0\\ B^{(2)}_{31}&B^{(2)}_{32}&0&0\\ B^{(1)}_{32}&B^{(1)}_{31}&0&0\end{pmatrix} \tag{18}\]
with
\[B^{(0)}_{31} =\frac{1}{2r^{2}}\Big{(}\frac{M_{1}^{2}}{K_{,X}-2\phi^{\prime 2}K_{,XX}}+\frac{M_{2}^{2}}{K_{,X}}\Big{)} \tag{19}\] \[B^{(0)}_{32} =\frac{1}{2r^{2}}\Big{(}\frac{M_{1}^{2}}{K_{,X}-2\phi^{\prime 2}K_{,XX}}-\frac{M_{2}^{2}}{K_{,X}}\Big{)}\] (20) \[B^{(0)}_{33} =-\frac{2}{r}-\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}r}\log \Bigl{(}K_{,X}(K_{,X}-2\phi^{\prime 2}K_{,XX})\Bigr{)}\] (21) \[B^{(0)}_{34} =\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}r}\log\frac{K_{,X}}{K_{,X}-2\phi^{\prime 2}K_{,XX}} \tag{22}\]
\[B^{(1)}_{31} =A\frac{K_{,X}-\phi^{\prime 2}K_{,XX}}{r^{2}K_{,X}(K_{,X}-2\phi^{ \prime 2}K_{,XX})}\] \[\quad-\frac{\omega\Big{(}r^{4}\phi\phi^{\prime 3}K_{,XX}^{2} \Big{)}^{\prime}}{2r^{4}K_{,X}(K_{,X}-2\phi^{\prime 2}K_{,XX})}\] \[\quad-\frac{\omega\phi^{\prime 2}(\phi^{\prime 2}-\phi\phi^{ \prime\prime})K_{,XX}^{2}}{2K_{,X}(K_{,X}-2\phi^{\prime 2}K_{,XX})} \tag{23}\] \[B^{(1)}_{32} =\frac{\omega\Big{(}\phi\phi^{\prime}K_{,XX}\Big{)}^{\prime}}{K_ {,X}-2\phi^{\prime 2}K_{,XX}}+\frac{2\omega\phi\phi^{\prime}K_{,X}K_{,XX}}{rK_{,X}(K_{,X}-2\phi^{\prime 2}K_{,XX})}\] \[\quad+2\omega r\phi^{\prime 2}K_{,XX}\frac{K_{,X}+\omega^{2}\phi^{ 2}K_{,XX}+\phi^{2}K_{,X\phi^{2}}}{rK_{,X}(K_{,X}-2\phi^{\prime 2}K_{,XX})}\] (24) \[B^{(1)}_{33} =-\frac{2\omega\phi\phi^{\prime}K_{,XX}^{2}}{K_{,X}(K_{,X}-2\phi ^{\prime 2}K_{,XX})}\] (25) \[B^{(1)}_{34} =2\omega\phi\phi^{\prime}K_{,XX}\frac{K_{,X}-\phi^{\prime 2}K_{,XX}}{K_ {,X}(K_{,X}-2\phi^{\prime 2}K_{,XX})} \tag{26}\]
and \((A,M_{1}^{2},M_{2}^{2})\) are defined by eq. (7.2). These expressions are given for the case \(n=0\).
|
2307.01978 | Smooth Matérn Gaussian Random Fields: Euler Characteristic, Expected
Number and Height Distribution of Critical Points | This paper studies Gaussian random fields with Mat\'ern covariance functions
with smoothness parameter $\nu>2$. Two cases of parameter spaces, the Euclidean
space and $N$-dimensional sphere, are considered. For such smooth Gaussian
fields, we have derived the explicit formulae for the expected Euler
characteristic of the excursion set, the expected number and height
distribution of critical points. The results are valuable for approximating the
excursion probability in family-wise error control and for computing p-values
in peak inference. | Dan Cheng | 2023-07-05T01:38:55Z | http://arxiv.org/abs/2307.01978v2 | # Smooth Matern Gaussian Random Fields: Euler Characteristic, Expected Number and Height Distribution of Critical Points
###### Abstract
This paper studies Gaussian random fields with Matern covariance functions with smoothness parameter \(\nu>2\). Two cases of parameter spaces, the Euclidean space and the \(N\)-dimensional sphere, are considered. For such smooth Gaussian fields, we derive explicit formulae for the expected Euler characteristic of the excursion set and for the expected number and height distribution of critical points. The results are valuable for approximating the excursion probability and for computing p-values in peak inference.
**Keywords**: Gaussian random fields; Matern; Smooth; Isotropic; Euler characteristic; Height distribution; Critical points.
**Mathematics Subject Classification**: 60G15, 60G60, 62G32, 15B52.
## 1 Introduction
In recent years the Matern class of covariance functions has gained widespread popularity in spatial statistics [12, 14], mainly due to its great flexibility for modelling the spatial covariance and hence dependent spatial data and processes. Specifically, the _Matern covariance function_ is defined as
\[\mathcal{M}(d)=\frac{\sigma^{2}}{2^{\nu-1}\Gamma(\nu)}\left(\frac{\sqrt{2\nu} d}{\ell}\right)^{\nu}K_{\nu}\left(\frac{\sqrt{2\nu}d}{\ell}\right),\quad d\geq 0, \tag{1.1}\]
where \(K_{\nu}(\cdot)\) is the modified Bessel function of the second kind. This covariance function has three positive parameters \(\sigma^{2}\), \(\ell\) and \(\nu\), with \(\sigma^{2}\) controlling the variance, \(\ell\) controlling the spatial range (correlation length), and \(\nu\) controlling the smoothness. Let \(X=\{X(t),t\in T\}\) be a centered Gaussian random field living on an \(N\)-dimensional parameter set \(T\). We call \(X\) a _Matern Gaussian random field_ if the covariance satisfies
\[\mathbb{E}[X(t)X(s)]=\mathcal{M}(\|t-s\|),\quad t,s\in T.\]
That is, \(X\) is isotropic with covariance given by (1.1). There is a rich literature on Matern Gaussian fields; however, only a few works focus on the smooth case. This paper aims to bridge smooth Matern Gaussian fields and recent developments on the expected Euler characteristic of the excursion set [1, 8] and on the expected number and height distribution of critical points [2, 5, 7, 6, 4].
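As a quick illustration (a sketch assuming SciPy; the function name is ours), the covariance (1.1) can be evaluated directly, using the limiting value \(\sigma^{2}\) at \(d=0\), where the Bessel form is a \(0/0\) expression:

```python
# Evaluate the Matern covariance (1.1) on an array of distances.
import numpy as np
from scipy.special import kv, gamma

def matern(d, sigma2=1.0, ell=1.0, nu=2.5):
    """Matern covariance (1.1); the d -> 0 limit equals sigma2."""
    d = np.atleast_1d(np.asarray(d, dtype=float))
    out = np.full(d.shape, sigma2)
    r = np.sqrt(2.0*nu)*d[d > 0]/ell
    out[d > 0] = sigma2*2.0**(1.0 - nu)/gamma(nu)*r**nu*kv(nu, r)
    return out

print(matern([0.0, 0.5, 1.0], nu=2.5))   # decreasing from sigma2 = 1
```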
Let \(\{X(t),t\in T\}\) be a centered smooth Gaussian random field. Let \(A_{u}(X,T)=\{t\in T:X(t)\geq u\}\) be the excursion set of \(X\) above level \(u\) over the parameter set \(T\), and denote by \(\chi(A_{u}(X,T))\) its Euler characteristic. It is shown in [1] that the expected Euler characteristic \(\mathbb{E}[\chi(A_{u}(X,T))]\) is computable and can be used to approximate the excursion probability \(\mathbb{P}[\sup_{t\in T}X(t)\geq u]\) for large \(u\), with a super-exponentially small error. This is useful for controlling the family-wise error rate in statistics [15].
The number of critical points of index \(i\) of \(X\) above \(u\) is defined as
\[\mu_{i}(X,u)=\#\left\{t\in D:X(t)\geq u,\nabla X(t)=0,\text{index}(\nabla^{2}X (t))=i\right\},\quad i=0,\ldots,N, \tag{1.2}\]
where \(D\) is an \(N\)-dimensional unit-area disc on \(T\), \(\nabla X(t)\) and \(\nabla^{2}X(t)\) are respectively the gradient and Hessian of \(X\), and \(\text{index}(\nabla^{2}X(t))\) denotes the number of negative eigenvalues of \(\nabla^{2}X(t)\). Notice that we will focus on the expectation of \(\mu_{i}(X,u)\), which depends on the volume of \(D\) (1 for a unit disc) due to isotropy, thus we omit \(D\) in the notation for simplicity. The expected number of critical points \(\mathbb{E}[\mu_{i}(X,u)]\) of smooth random fields is important in statistics [6] and physics [2]. By default, let \(\mu_{i}(X)=\mu_{i}(X,-\infty)\) be the number of critical points of index \(i\) of \(X\) over \(D\).
The height distribution of a critical value of index \(i\) of \(X\) at \(t_{0}\) is defined as
\[F_{i}(u):=\lim_{\varepsilon\to 0}\mathbb{P}\left\{X(t_{0})>u\ |\ \exists\text{ a critical point of index $i$ of $X(t)$ in $B(t_{0},\varepsilon)$}\right\}, \tag{1.3}\]
where \(B(t_{0},\varepsilon)\) is the ball of radius \(\varepsilon\) centered at \(t_{0}\). It has been found to be an important tool for computing p-values in peak detection and thresholding problems in statistics [7, 3, 4] and neuroimaging [9]. Notice that, due to isotropy, \(F_{i}(u)\) does not depend on the location \(t_{0}\). It is shown in [5] that,
\[F_{i}(u)=\frac{\mathbb{E}[\mu_{i}(X,u)]}{\mathbb{E}[\mu_{i}(X)]}. \tag{1.4}\]
Therefore, \(F_{i}(u)\) can be obtained immediately once the form of \(\mathbb{E}[\mu_{i}(X,u)]\) is known.
## 2 Smooth Matern Gaussian Random Fields on Euclidean Space
It is well known (see for example [14]) that the Gaussian fields with covariance function given by (1.1) are \(m\) times differentiable if and only if \(\nu>m\). To study the expected Euler characteristic and critical points, it requires the Gaussian fields to be twice differentiable [1, 5]; thus, throughout this paper, we assume \(\nu>2\).
To characterize the variances of \(X^{\prime}\) and \(X^{\prime\prime}\), following the notation in [5], we introduce the real function \(\rho\) defined by \(\mathcal{M}(d)=\rho(d^{2})\) and have the following results.
**Proposition 2.1**.: _Let \(\rho\) be defined by \(\mathcal{M}(d)=\rho(d^{2})\). Suppose \(\nu>2\). Then_
\[\rho^{\prime}(0)=-\frac{\sigma^{2}\nu}{2(\nu-1)\ell^{2}}\quad\text{ and }\quad\rho^{\prime\prime}(0)=\frac{\sigma^{2}\nu^{2}}{4(\nu-1)(\nu-2)\ell^{4}}. \tag{2.1}\]
_In particular,_
\[\kappa:=-\frac{\rho^{\prime}(0)}{\sqrt{\rho^{\prime\prime}(0)}} \bigg{|}_{\sigma=1}=\sqrt{\frac{\nu-2}{\nu-1}}\quad\text{and }\quad\eta:=\frac{\sqrt{-\rho^{\prime}(0)}}{\sqrt{\rho^{\prime\prime}(0)}}= \sqrt{\frac{2(\nu-2)}{\nu}}\ell. \tag{2.2}\]
Proof.: It follows from the property of Bessel functions (see page 502 in [13]) that, for non-integer \(\nu\),
\[K_{\nu}(r)=\frac{2^{\nu-1}\Gamma(\nu)}{r^{\nu}}\sum_{j=0}^{\infty }\frac{(r^{2}/4)^{j}}{j!(1-\nu)_{j}}+\frac{r^{\nu}\Gamma(-\nu)}{2^{\nu+1}} \sum_{j=0}^{\infty}\frac{(r^{2}/4)^{j}}{j!(1+\nu)_{j}}, \tag{2.3}\]
where \((x)_{j}=x(x+1)\cdots(x+j-1)\) with \((x)_{0}=1\) is the Pochhammer symbol. For \(\nu>2\), organizing the right side of (2.3), we obtain the following expansion as \(r\to 0\),
\[\frac{r^{\nu}}{2^{\nu-1}\Gamma(\nu)}K_{\nu}(r)=1-\frac{r^{2}}{4( \nu-1)}+\frac{r^{4}}{32(\nu-1)(\nu-2)}+o(r^{4}). \tag{2.4}\]
On the other hand, by page 502 in [13], if \(\nu=n\) is an integer, then
\[K_{n}(r)= \frac{2^{n-1}}{r^{n}}\sum_{j=0}^{n-1}\frac{(-1)^{j}(n-j-1)!(r^{2} /4)^{j}}{j!}\] \[+\frac{(-1)^{n}r^{n}}{2^{n}}\sum_{j=0}^{\infty}\left[\frac{\psi(1 +j)}{2}+\frac{\psi(1+n+j)}{2}-\log\left(\frac{r}{2}\right)\right]\frac{(r^{2} /4)^{j}}{j!(n+j)!},\]
where \(\psi(\cdot)\) is the digamma function defined by \(\psi(z)=\frac{d}{dz}\log\Gamma(z)=\Gamma^{\prime}(z)/\Gamma(z)\). This implies that, for \(n\geq 3\), we have the following expansion as \(r\to 0\),
\[\frac{r^{n}}{2^{n-1}\Gamma(n)}K_{n}(r)=1-\frac{r^{2}}{4(n-1)}+ \frac{r^{4}}{32(n-1)(n-2)}+o(r^{4}). \tag{2.5}\]
Combining (2.4) and (2.5), we obtain that, for \(\nu>2\),
\[\begin{split}\mathcal{M}(d)&=\frac{\sigma^{2}}{2^{ \nu-1}\Gamma(\nu)}\left(\frac{\sqrt{2\nu}d}{\ell}\right)^{\nu}K_{\nu}\left( \frac{\sqrt{2\nu}d}{\ell}\right)\\ &=\sigma^{2}\left[1-\frac{\nu}{2(\nu-1)}\left(\frac{d}{\ell} \right)^{2}+\frac{\nu^{2}}{8(\nu-1)(\nu-2)}\left(\frac{d}{\ell}\right)^{4} \right]+o(d^{4}),\quad d\to 0.\end{split} \tag{2.6}\]
In other words, we have
\[\rho(d)=\sigma^{2}\left[1-\frac{\nu}{2(\nu-1)\ell^{2}}d+\frac{\nu^{2}}{8(\nu -1)(\nu-2)\ell^{4}}d^{2}\right]+o(d^{2}),\quad d\to 0.\]
This second-order Taylor expansion for \(\rho(d)\) around \(d=0\) implies (2.1) and hence (2.2).
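The closed forms (2.1) are easy to verify numerically by one-sided finite differences of \(\rho(x)=\mathcal{M}(\sqrt{x})\) at \(x=0\); the following sketch (with arbitrarily chosen values of \(\sigma^{2},\ell,\nu\)) does so:

```python
# Finite-difference check of Proposition 2.1.
import numpy as np
from scipy.special import kv, gamma

sigma2, ell, nu = 1.7, 0.8, 3.3           # arbitrary test values with nu > 2

def matern(d):                            # scalar version of (1.1)
    if d == 0.0:
        return sigma2
    r = np.sqrt(2.0*nu)*d/ell
    return sigma2*2.0**(1.0 - nu)/gamma(nu)*r**nu*kv(nu, r)

rho = lambda x: matern(np.sqrt(x))        # M(d) = rho(d^2)
h = 1e-3
r0, r1, r2, r3 = (rho(i*h) for i in range(4))
fd1 = (-3*r0 + 4*r1 - r2)/(2*h)                        # one-sided O(h^2) schemes
fd2 = (2*r0 - 5*r1 + 4*r2 - r3)/h**2
print(fd1, -sigma2*nu/(2*(nu - 1)*ell**2))             # rho'(0)  vs. (2.1)
print(fd2, sigma2*nu**2/(4*(nu - 1)*(nu - 2)*ell**4))  # rho''(0) vs. (2.1)
```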
### The expected Euler characteristic of the excursion set
Denote by \(\phi(x)=(2\pi)^{-1/2}e^{-x^{2}/2}\) and \(\Psi(x)=\int_{x}^{\infty}\phi(x)dx\) the density and tail probability of the standard normal distribution, respectively. Let \(H_{j}(x)\) be the Hermite polynomial of order \(j\), i.e.,
\[H_{j}(x)=(-1)^{j}e^{x^{2}/2}\frac{d^{j}}{dx^{j}}\big{(}e^{-x^{2}/2}\big{)}, \quad j\geq 0.\]
**Theorem 2.2**.: _Let \(\{X(t),t\in T\}\) be a centered Gaussian random field with Matern covariance function (1.1), where \(T\subset\mathbb{R}^{N}\) is an \(N\)-dimensional piecewise smooth set. Suppose \(\nu>2\). Then the expected Euler characteristic is given by_
\[\mathbb{E}[\chi(A_{u}(X,T))]=\sum_{j=0}^{N}\frac{\nu^{j/2}}{(\nu-1)^{j/2}\ell ^{j}}\mathcal{L}_{j}(T)\xi_{j}\left(\frac{u}{\sigma}\right), \tag{2.7}\]
_where \(\mathcal{L}_{j}(T)\) are the Lipschitz-Killing curvatures (cf. (10.7.3) in [1]) of \(T\) and_
\[\xi_{0}(x)=\Psi(x),\quad\xi_{j}(x)=(2\pi)^{-j/2}H_{j-1}(x)\phi(x),\quad j\geq 1. \tag{2.8}\]
Proof.: It follows from formula (5.5.5) in [1] or Lemma 3.2 in [6] that \(\mathrm{Var}[\nabla X(t)]=-2\rho^{\prime}(0)I_{N}\), where \(I_{N}\) is the \(N\times N\) identity matrix. By (2.1), we obtain
\[\mathrm{Var}[\nabla X(t)/\sigma]=\left(\frac{\nu}{(\nu-1)\ell^{2}}\right)I_{N}. \tag{2.9}\]
Applying Theorem 12.4.1 in [1] to the standardized Gaussian field \(X/\sigma\) yields
\[\mathbb{E}[\chi(A_{u}(X,T))]=\mathbb{E}[\chi(A_{u/\sigma}(X/\sigma,T))]=\sum_ {j=0}^{N}\mathcal{L}_{j}^{X/\sigma}(T)\xi_{j}\left(\frac{u}{\sigma}\right), \tag{2.10}\]
where \(\mathcal{L}_{j}^{X/\sigma}(T)\) are the Lipschitz-Killing curvatures of \(T\) calculated with respect to the metric (cf. (12.2.2) in [1]) induced by \(X/\sigma\). Due to (2.9), following the arguments on page 423 in [1], we see that the metric induced by \(X/\sigma\) yields a new inner product for \(t,s\in\mathbb{R}^{N}\) given by \(\frac{\nu}{(\nu-1)\ell^{2}}\langle t,s\rangle\), where \(\langle\cdot,\cdot\rangle\) is the standard Euclidean inner product, implying that
\[\mathcal{L}_{j}^{X/\sigma}(T)=\left(\frac{\nu}{(\nu-1)\ell^{2}}\right)^{j/2} \mathcal{L}_{j}(T),\quad j\geq 0.\]
Plugging this into (2.10) yields the desired result (2.7).
The formula (2.7) shows that the expected Euler characteristic is computable with a relatively simple form, mainly due to the isotropy of Matern Gaussian fields. The Lipschitz-Killing curvatures \(\mathcal{L}_{j}(T)\) depend on the geometry of \(T\). We show below an example for the case when \(T\) is a cube.
**Example 2.3** Let \(T=[0,b]^{N}\) be an \(N\)-dimensional cube in \(\mathbb{R}^{N}\). Then, by (10.7.4) in [1], \(\mathcal{L}_{j}(T)=\binom{N}{j}b^{j}\), which implies
\[\mathbb{E}[\chi(A_{u}(X,T))]=\phi\left(\frac{u}{\sigma}\right)\sum_{j=1}^{N} \frac{\binom{N}{j}b^{j}\nu^{j/2}}{(2\pi)^{j/2}(\nu-1)^{j/2}\ell^{j}}H_{j-1} \left(\frac{u}{\sigma}\right)+\Psi\left(\frac{u}{\sigma}\right).\]
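A direct implementation of this example (a sketch; note that SciPy's `eval_hermitenorm` is exactly the probabilists' Hermite polynomial \(H_{j}\) defined above):

```python
# Expected Euler characteristic of the excursion set over the cube [0, b]^N.
import numpy as np
from math import comb
from scipy.special import eval_hermitenorm
from scipy.stats import norm

def eec_cube(u, N=3, b=1.0, sigma=1.0, ell=1.0, nu=2.5):
    x = u/sigma
    val = norm.sf(x)                                  # j = 0 term, xi_0 = Psi
    for j in range(1, N + 1):
        Lj = comb(N, j)*b**j                          # L-K curvatures of [0, b]^N
        val += Lj*(nu/(nu - 1))**(j/2)/ell**j \
               *(2*np.pi)**(-j/2)*eval_hermitenorm(j - 1, x)*norm.pdf(x)
    return val

print(eec_cube(2.0))   # expected Euler characteristic at level u = 2
```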
### Expected number and height distribution of critical points
It is introduced in [6] that, an \(N\times N\) random matrix \(M=(M_{ij})_{1\leq i,j\leq N}\) is called _Gaussian Orthogonally Invariant_ (GOI) with _covariance parameter_\(c\), denoted by \(\text{GOI}(c)\), if it is symmetric and all entries are centered Gaussian variables such that
\[\mathbb{E}[M_{ij}M_{kl}]=\frac{1}{2}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_ {jk})+c\delta_{ij}\delta_{kl}, \tag{2.11}\]
where \(\delta_{ij}\) is the Kronecker delta function. We see that \(\text{GOI}(c)\) becomes a GOE matrix if \(c=0\). In particular, the density of the ordered eigenvalues \(\lambda_{1}\leq\ldots\leq\lambda_{N}\) of \(\text{GOI}(c)\) is
\[\begin{split} f_{c}(\lambda_{1},\ldots,\lambda_{N})& =\frac{1}{K_{N}\sqrt{1+Nc}}\exp\left\{-\frac{1}{2}\sum_{i=1}^{N} \lambda_{i}^{2}+\frac{c}{2(1+Nc)}\left(\sum_{i=1}^{N}\lambda_{i}\right)^{2} \right\}\\ &\quad\times\prod_{1\leq i<j\leq N}|\lambda_{i}-\lambda_{j}| \mathbbm{1}_{\{\lambda_{1}\leq\ldots\leq\lambda_{N}\}},\end{split} \tag{2.12}\]
where \(K_{N}=2^{N/2}\prod_{i=1}^{N}\Gamma\left(\frac{i}{2}\right)\) and \(c>-1/N\). We use the notation \(\mathbb{E}_{\text{GOI}(c)}^{N}\) to represent the expectation under the \(\text{GOI}(c)\) density (2.12), i.e., for a measurable function \(g\),
\[\mathbb{E}_{\text{GOI}(c)}^{N}[g(\lambda_{1},\ldots,\lambda_{N})]=\int_{ \mathbb{R}^{N}}g(\lambda_{1},\ldots,\lambda_{N})f_{c}(\lambda_{1},\ldots, \lambda_{N})d\lambda_{1}\cdots d\lambda_{N}. \tag{2.13}\]
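For \(c\geq 0\), a convenient way to estimate such expectations by Monte Carlo is to sample \(\text{GOI}(c)\) as \(M=A+\sqrt{c}\,\xi I_{N}\), where \(A\) is GOE with the covariance (2.11) at \(c=0\) and \(\xi\) is an independent standard normal; this representation reproduces (2.11) and is our own construction rather than one taken from [6]:

```python
# Monte Carlo estimate of the GOI(c) expectation (2.13), valid for c >= 0.
import numpy as np

rng = np.random.default_rng(0)

def sample_goi(N, c, n):
    G = rng.standard_normal((n, N, N))
    A = 0.5*(G + np.transpose(G, (0, 2, 1)))      # GOE: covariance (2.11), c = 0
    xi = rng.standard_normal((n, 1, 1))
    return A + np.sqrt(c)*xi*np.eye(N)            # shifts diagonal covariances by c

def goi_expectation(N, c, i, n=200_000):
    """E^N_GOI(c)[ prod_j |lambda_j| 1{exactly i negative eigenvalues} ]."""
    lam = np.linalg.eigvalsh(sample_goi(N, c, n))
    mask = (lam < 0).sum(axis=1) == i
    return np.mean(np.prod(np.abs(lam), axis=1)*mask)

print(goi_expectation(N=2, c=0.5, i=2))           # local-maxima term of E[mu_2(X)]
```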
**Theorem 2.4**.: _Let \(\{X(t),t\in T\}\) be a centered Gaussian random field with Matern covariance function (1.1), where \(T\subset\mathbb{R}^{N}\) is an \(N\)-dimensional set. Suppose \(\nu>2\). Then for \(i=0,\ldots,N\),_
\[\begin{split}\mathbb{E}[\mu_{i}(X)]&=\frac{2^{N/2}} {\pi^{N/2}\eta^{N}}\mathbb{E}_{\text{GOI}(1/2)}^{N}\left[\prod_{j=1}^{N}| \lambda_{j}|\mathbbm{1}_{\{\lambda_{i}<0<\lambda_{i+1}\}}\right],\\ \mathbb{E}[\mu_{i}(X,u)]&=\frac{2^{N/2}}{\pi^{N/2} \eta^{N}}\int_{u/\sigma}^{\infty}\phi(x)\mathbb{E}_{\text{GOI}((1-\kappa^{2}) /2)}^{N}\left[\prod_{j=1}^{N}|\lambda_{j}-\kappa x/\sqrt{2}|\mathbbm{1}_{\{ \lambda_{i}<\kappa x/\sqrt{2}<\lambda_{i+1}\}}\right]dx,\\ F_{i}(u)&=\frac{\int_{u/\sigma}^{\infty}\phi(x) \mathbb{E}_{\text{GOI}((1-\kappa^{2})/2)}^{N}\left[\prod_{j=1}^{N}|\lambda_{j} -\kappa x/\sqrt{2}|\mathbbm{1}_{\{\lambda_{i}<\kappa x/\sqrt{2}<\lambda_{i+1} \}}\right]dx}{\mathbb{E}_{\text{GOI}(1/2)}^{N}\left[\prod_{j=1}^{N}|\lambda_{j} |\mathbbm{1}_{\{\lambda_{i}<0<\lambda_{i+1}\}}\right]},\end{split} \tag{2.14}\]
_where \(\kappa\) and \(\eta\) are given in (2.2), \(\mathbb{E}_{\text{GOI}(c)}^{N}\) is defined in (2.13), and \(\lambda_{0}\) and \(\lambda_{N+1}\) are regarded respectively as \(-\infty\) and \(\infty\) for consistency._
Proof.: Note that, by the definition (1.2), we have \(\mu_{i}(X)=\mu_{i}(X/\sigma)\) and \(\mu_{i}(X,u)=\mu_{i}(X/\sigma,u/\sigma)\). Applying Theorem 3.5 in [6] to \(X\) and the standardized field \(X/\sigma\) respectively, we obtain the first and second lines in (2.14). Finally, the last line in (2.14) follows directly from (1.4).
## 3 Smooth Matern Gaussian Random Fields on Spheres
The applications in geoscience, astronomy and environmental sciences have stimulated rapid recent development in the statistics of random fields on spheres. It has been shown in [10] that
many of the commonly used covariance functions on Euclidean spaces remain valid on spheres when the Euclidean distance is replaced by the spherical distance (great circle distance). However, the Matern class in (1.1) is positive definite with the spherical distance only if \(\nu\leq 1/2\).
Let \(\mathbb{S}^{N}\) be an \(N\)-dimensional unit sphere. Let \(\|\cdot\|\) and \(\langle\cdot,\cdot\rangle\) be the Euclidean distance and inner product in \(\mathbb{R}^{N+1}\), respectively. As shown in [16], one can apply the identity
\[\|x-y\|=2\sin\left(\frac{\theta(x,y)}{2}\right),\quad\forall x,\,y\in\mathbb{ S}^{N}\subset\mathbb{R}^{N+1},\]
where \(\theta(x,y)=\arccos\left\langle x,y\right\rangle\in[0,\pi]\) denotes the spherical distance on \(\mathbb{S}^{N}\), to construct covariance functions on spheres. In particular, by (1.1), we define the Matern covariance function on \(\mathbb{S}^{N}\) as
\[\mathcal{M}\left(2\sin(\theta(x,y)/2)\right)=\frac{\sigma^{2}}{2^{\nu-1} \Gamma(\nu)}\left(\frac{2\sqrt{2\nu}\sin(\theta(x,y)/2)}{\ell}\right)^{\nu}K _{\nu}\left(\frac{2\sqrt{2\nu}\sin(\theta(x,y)/2)}{\ell}\right). \tag{3.1}\]
Let \(\{X(x),x\in\mathbb{S}^{N}\}\) be a centered Gaussian random field with Matern covariance function (3.1). Then we have
\[\mathbb{E}[X(x)X(y)]=\mathcal{M}\left(2\sin(\theta(x,y)/2)\right),\quad\forall x,\,y\in\mathbb{S}^{N}. \tag{3.2}\]
By Corollary 1 in [11], one still has the property that the Gaussian field \(X\) is twice differentiable if and only if \(\nu>2\). Note that
\[\sin(\theta(x,y)/2)=\sqrt{\frac{1-\cos\theta(x,y)}{2}}=\sqrt{\frac{1-\langle x,y\rangle}{2}}.\]
Following the notation in [7], we write the covariance in (3.2) as
\[\begin{split} C(\langle x,y\rangle)&:=\mathbb{E}[X( x)X(y)]=\mathcal{M}\left(\sqrt{2(1-\langle x,y\rangle)}\right)\\ &=\frac{\sigma^{2}}{2^{\nu-1}\Gamma(\nu)}\left(\frac{\sqrt{2\nu} \sqrt{2(1-\langle x,y\rangle)}}{\ell}\right)^{\nu}K_{\nu}\left(\frac{\sqrt{2 \nu}\sqrt{2(1-\langle x,y\rangle)}}{\ell}\right),\ \forall x,\,y\in\mathbb{S}^{N},\end{split} \tag{3.3}\]
and derive the following results.
**Proposition 3.1**.: _Let \(\{X(x),x\in\mathbb{S}^{N}\}\) be a centered Gaussian random field with Matern covariance function (3.3). Suppose \(\nu>2\). Let \(C(p):=C(\langle x,y\rangle)\), \(p\in[-1,1]\). Then_
\[C^{\prime}(1)=\frac{\sigma^{2}\nu}{(\nu-1)\ell^{2}}\quad\text{and}\quad\ C^{ \prime\prime}(1)=\frac{\sigma^{2}\nu^{2}}{(\nu-1)(\nu-2)\ell^{4}}. \tag{3.4}\]
_In particular,_
\[\tilde{\kappa}:=\frac{C^{\prime}(1)}{\sqrt{C^{\prime\prime}(1)}}\bigg{|}_{ \sigma=1}=\sqrt{\frac{\nu-2}{\nu-1}}\quad\text{and}\quad\ \tilde{\eta}:=\frac{\sqrt{C^{\prime}(1)}}{\sqrt{C^{\prime\prime}(1)}}=\frac{ \sqrt{\nu-2}}{\sqrt{\nu}}\ell. \tag{3.5}\]
Proof.: By (2.6), we obtain that, as \(p\to 1\),
\[C(p) =\mathcal{M}\left(\sqrt{2(1-p)}\right) \tag{3.6}\] \[=\sigma^{2}\left[1-\frac{\nu}{(\nu-1)\ell^{2}}(1-p)+\frac{\nu^{2}} {2(\nu-1)(\nu-2)\ell^{4}}(1-p)^{2}\right]+o\left((1-p)^{2}\right)\] \[=\sigma^{2}\left[1+\frac{\nu}{(\nu-1)\ell^{2}}(p-1)+\frac{\nu^{2} }{2(\nu-1)(\nu-2)\ell^{4}}(p-1)^{2}\right]+o\left((p-1)^{2}\right).\]
This second-order Taylor expansion for \(C(p)\) around \(p=1\) implies (3.4) and hence (3.5).
Let \(\omega_{j}=\frac{2\pi^{(j+1)/2}}{\Gamma((j+1)/2)}\) be the spherical area of the \(j\)-dimensional unit sphere \(\mathbb{S}^{j}\). We have the following results on the expected Euler characteristic of the excursion set \(A_{u}(X,\mathbb{S}^{N})=\{x\in\mathbb{S}^{N}:X(x)\geq u\}\), as well as the expected number and height distribution of critical points.
**Theorem 3.2**.: _Let \(\{X(x),x\in\mathbb{S}^{N}\}\) be a centered Gaussian random field with Matern covariance function (3.3). Suppose \(\nu>2\). Then the expected Euler characteristic is_
\[\mathbb{E}[\chi(A_{u}(X,\mathbb{S}^{N}))]=\sum_{j=0}^{N}\frac{\nu^{j/2}}{(\nu -1)^{j/2}\ell^{j}}\mathcal{L}_{j}(\mathbb{S}^{N})\xi_{j}\left(\frac{u}{\sigma} \right), \tag{3.7}\]
_where \(\xi_{j}(\cdot)\) are given in (2.8) and_
\[\mathcal{L}_{j}(\mathbb{S}^{N})=\left\{\begin{array}{ll}2\binom{N}{j}\frac{ \omega_{N}}{\omega_{N-j}}&\text{ if $N-j$ is even,}\\ 0&\text{ otherwise},\end{array}\right.\qquad j=0,1,\ldots,N, \tag{3.8}\]
_are the Lipschitz-Killing curvatures of \(\mathbb{S}^{N}\)._
Proof.: Applying Lemma 3.5 in [8] to the standardized Gaussian field \(X/\sigma\) yields
\[\mathbb{E}[\chi(A_{u}(X,\mathbb{S}^{N}))] =\mathbb{E}[\chi(A_{u/\sigma}(X/\sigma,\mathbb{S}^{N}))]=\sum_{j =0}^{N}\left(\frac{C^{\prime}(1)}{\sigma^{2}}\right)^{j/2}\mathcal{L}_{j}( \mathbb{S}^{N})\xi_{j}\left(\frac{u}{\sigma}\right)\] \[=\sum_{j=0}^{N}\left(\frac{\nu}{(\nu-1)\ell^{2}}\right)^{j/2} \mathcal{L}_{j}(\mathbb{S}^{N})\xi_{j}\left(\frac{u}{\sigma}\right),\]
where \(C^{\prime}(1)\) is given in (3.4).
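A sketch of (3.7)-(3.8) in code (following the conventions of the Euclidean example above); as a sanity check, for very low levels \(u\) the value approaches \(\chi(\mathbb{S}^{2})=2\):

```python
# Expected Euler characteristic of the excursion set over the sphere S^N.
import numpy as np
from math import comb
from scipy.special import eval_hermitenorm, gamma
from scipy.stats import norm

def lk_sphere(N, j):
    if (N - j) % 2:                                # vanishes when N - j is odd
        return 0.0
    omega = lambda k: 2*np.pi**((k + 1)/2)/gamma((k + 1)/2)
    return 2*comb(N, j)*omega(N)/omega(N - j)      # eq. (3.8)

def eec_sphere(u, N=2, sigma=1.0, ell=1.0, nu=2.5):
    x = u/sigma
    val = lk_sphere(N, 0)*norm.sf(x)
    for j in range(1, N + 1):
        val += (nu/(nu - 1))**(j/2)/ell**j*lk_sphere(N, j) \
               *(2*np.pi)**(-j/2)*eval_hermitenorm(j - 1, x)*norm.pdf(x)
    return val

print(eec_sphere(-10.0))    # approaches chi(S^2) = 2 as u -> -infinity
```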
**Theorem 3.3**.: _Let \(\{X(x),x\in\mathbb{S}^{N}\}\) be a centered Gaussian random field with Matern covariance function (3.3). Suppose \(\nu>2\). Then for \(i=0,\ldots,N\),_
\[\mathbb{E}[\mu_{i}(X)] =\frac{1}{\pi^{N/2}\tilde{\eta}^{N}}\mathbb{E}^{N}_{\mathrm{GOI} ((1+\tilde{\eta}^{2}-\tilde{\kappa}^{2})/2)}\left[\prod_{j=1}^{N}|\lambda_{j}| \mathbbm{1}_{\{\lambda_{i}<0<\lambda_{i+1}\}}\right], \tag{3.9}\] \[\mathbb{E}[\mu_{i}(X,u)] =\frac{1}{\pi^{N/2}\tilde{\eta}^{N}}\int_{u/\sigma}^{\infty} \phi(x)\mathbb{E}^{N}_{\mathrm{GOI}((1+\tilde{\eta}^{2}-\tilde{\kappa}^{2})/2) }\left[\prod_{j=1}^{N}|\lambda_{j}-\tilde{\kappa}x/\sqrt{2}|\mathbbm{1}_{\{ \lambda_{i}<\tilde{\kappa}x/\sqrt{2}<\lambda_{i+1}\}}\right]dx,\] \[F_{i}(u) =\frac{\int_{u/\sigma}^{\infty}\phi(x)\mathbb{E}^{N}_{\mathrm{ GOI}((1+\tilde{\eta}^{2}-\tilde{\kappa}^{2})/2)}\left[\prod_{j=1}^{N}|\lambda_{j}- \tilde{\kappa}x/\sqrt{2}|\mathbbm{1}_{\{\lambda_{i}<\tilde{\kappa}x/\sqrt{2}< \lambda_{i+1}\}}\right]dx}{\mathbb{E}^{N}_{\mathrm{GOI}((1+\tilde{\eta}^{2})/2 )}\left[\prod_{j=1}^{N}|\lambda_{j}|\mathbbm{1}_{\{\lambda_{i}<0<\lambda_{i+1} \}}\right]},\]
_where \(\tilde{\kappa}\) and \(\tilde{\eta}\) are given in (3.5), \(\mathbb{E}_{\mathrm{GOI}(c)}^{N}\) is defined in (2.13), and \(\lambda_{0}\) and \(\lambda_{N+1}\) are regarded respectively as \(-\infty\) and \(\infty\) for consistency._
Proof.: Applying Theorem 4.4 in [6] to \(X\) and the standardized field \(X/\sigma\) respectively, we obtain the first and second lines in (3.9). The last line in (3.9) is a direct consequence of (1.4).
|
2305.09585 | Inductive Graph Neural Networks for Moving Object Segmentation | Moving Object Segmentation (MOS) is a challenging problem in computer vision,
particularly in scenarios with dynamic backgrounds, abrupt lighting changes,
shadows, camouflage, and moving cameras. While graph-based methods have shown
promising results in MOS, they have mainly relied on transductive learning
which assumes access to the entire training and testing data for evaluation.
However, this assumption is not realistic in real-world applications where the
system needs to handle new data during deployment. In this paper, we propose a
novel Graph Inductive Moving Object Segmentation (GraphIMOS) algorithm based on
a Graph Neural Network (GNN) architecture. Our approach builds a generic model
capable of performing prediction on newly added data frames using the already
trained model. GraphIMOS outperforms previous inductive learning methods and is
more generic than previous transductive techniques. Our proposed algorithm
enables the deployment of graph-based MOS models in real-world applications. | Wieke Prummel, Jhony H. Giraldo, Anastasia Zakharova, Thierry Bouwmans | 2023-05-16T16:32:08Z | http://arxiv.org/abs/2305.09585v1 | # Inductive Graph Neural Networks for Moving Object Segmentation
###### Abstract
Moving Object Segmentation (MOS) is a challenging problem in computer vision, particularly in scenarios with dynamic backgrounds, abrupt lighting changes, shadows, camouflage, and moving cameras. While graph-based methods have shown promising results in MOS, they have mainly relied on transductive learning which assumes access to the entire training and testing data for evaluation. However, this assumption is not realistic in real-world applications where the system needs to handle new data during deployment. In this paper, we propose a novel Graph Inductive Moving Object Segmentation (GraphIMOS) algorithm based on a Graph Neural Network (GNN) architecture. Our approach builds a generic model capable of performing prediction on newly added data frames using the already trained model. GraphIMOS outperforms previous inductive learning methods and is more generic than previous transductive techniques. Our proposed algorithm enables the deployment of graph-based MOS models in real-world applications.
Wieke Prummel\({}^{1}\), Jhony H. Giraldo\({}^{2}\), Anastasia Zakharova\({}^{1}\), Thierry Bouwmans\({}^{1}\)\({}^{1}\) Laboratoire Mathematiques, Image et Applications (MIA), La Rochelle Universite, France
\({}^{2}\) LTCI, Telecom Paris - Institut Polytechnique de Paris, France
Moving object segmentation, graph neural networks, inductive learning, graph signal processing
## 1 Introduction
Moving Object Segmentation (MOS) is an important problem in computer vision, particularly in surveillance system applications [1]. The goal of MOS is to identify and separate the pixels or regions in a video that correspond to moving objects from the static background or other static objects. Deep learning models [2] have demonstrated strong performance on large-scale datasets. However, as the quality of data improves, these models become increasingly complex and computationally intensive, even with fast algorithms [3], few-shot learning methods [4], and specialized architectures [5].
The most common deep models are supervised and can be divided into four groups [6]: 2D Convolutional Neural Networks (CNNs) [7], 3D CNNs [8], transformer neural networks [9], and generative adversarial networks [10, 11]. In addition, some state-of-the-art (SOTA) techniques have been combined with deep methods to create novel approaches, such as MotionRec [12], RT-SBS [13], and GraphMOS [14]. Recent graph-based algorithms like GraphMOS [14] and GraphMOD-Net [15] use semi-supervised learning posed as a graph signal reconstruction problem; these methods are inspired by the theory of graph signal processing [16] and have shown promising results. However, they are transductive in nature, meaning that the model needs to be fully retrained and the graph regenerated whenever a new video is added. To address this issue, we propose inductive techniques for graph-based MOS, where multiple graphs are built instead of a single large graph. This approach removes the need to rebuild the whole graph and retrain the model, making it more suitable for real-world deployments, as shown in Fig. 1.
In this work, we propose a novel Graph Inductive Moving Object Segmentation (GraphIMOS) algorithm based on Graph Neural Networks (GNNs) [17]. We represent each instance in the video, obtained with a Mask Region Convolutional Neural Network (Mask R-CNN) with a ResNet-50 and Feature Pyramid Network (FPN) backbone, as a node in a graph, and describe the nodes with optical flow, intensity, and texture features. Instead of creating a single large graph, as in [14], we generate \(\rho+\alpha\)\(k\)-Nearest Neighbors (\(k\)-NN) graphs, where each graph can have a different number of nodes. We set \(\rho=3\) in the experiments for the training and validation graphs, which are then fed to the proposed GNN model for training and hyperparameter optimization. To evaluate the performance of our model, we test it on a single graph (\(\alpha=1\)) built from previously unseen videos. Our approach is evaluated on the challenging Change
Figure 1: Transductive vs. inductive learning on MOS
Detection 2014 (CDNet 2014) dataset [18], and we demonstrate competitive results against previous inductive methods.
This paper makes two main contributions. Firstly, we propose an inductive graph-based framework for Moving Object Segmentation (MOS), which is an important step toward the real-world deployment of graph-based methods in video surveillance applications. Secondly, we introduce a novel GNN architecture specifically designed for MOS, called GraphIMOS. To the best of our knowledge, GraphIMOS is the first graph-based inductive learning approach for MOS. The remainder of this paper is organized as follows. In Section 2, we introduce the preliminary concepts and describe the GraphIMOS algorithm and architecture. Section 3 presents the experimental setup and results. Finally, in Section 4, we present our conclusions.
## 2 Graph Inductive Moving Object Segmentation
This section presents the preliminaries of this paper and the proposed inductive graph-based MOS method. Figure 2 shows the pipeline of GraphIMOS, which consists of instance segmentation, node representation, graph construction, and inductive GNN training and evaluation.
### Preliminaries
A graph is a mathematical entity that can be represented as \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{1,2,\ldots,N\}\) is the set of nodes and \(\mathcal{E}\subseteq\{(i,j)\mid i,j\in\mathcal{V}\text{ and }i\neq j\}\) is the set of edges connecting the nodes \(i\) and \(j\). The adjacency matrix \(\mathbf{A}\in\mathbb{R}^{N\times N}\) is a popular choice of shift operator in GNNs [19], where \(\mathbf{A}_{(i,j)}=a_{i,j}\ \forall\ (i,j)\in\mathcal{E}\), and \(0\) otherwise. For unweighted graphs we have that \(\mathbf{A}\in\{0,1\}^{N\times N}\). Similarly, we have the diagonal degree matrix given by \(\mathbf{D}=\mathrm{diag}(\mathbf{A}\mathbf{1})\). In GNNs, we commonly associate a vector of features \(\mathbf{x}_{i}\in\mathbb{R}^{F}\) to each node \(i\). Therefore, we can represent the whole set of input features as \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]^{\mathsf{T} }\in\mathbb{R}^{N\times F}\). In this paper, we use undirected and weighted graphs.
GraphIMOS is designed to be agnostic to the choice of the GNN used. However, in our experiments, we use the widely used Graph Convolutional Network (GCN) [19]. The propagation rule of GCN is given by:
\[\mathbf{H}^{(l+1)}=\sigma(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}} \tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{H}^{(l)}\mathbf{W}^{(l)}), \tag{1}\]
where \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\), \(\tilde{\mathbf{D}}\) is the degree matrix of \(\tilde{\mathbf{A}}\), \(\mathbf{H}^{(l)}\) is the output matrix of layer \(l\) (with \(\mathbf{H}^{(0)}=\mathbf{X}\)), \(\mathbf{W}^{(l)}\) is the matrix of trainable weights in layer \(l\), and \(\sigma(\cdot)\) is an activation function such as \(\mathrm{ReLU}\) or \(\mathrm{softmax}\). It is worth noting that (1) reduces to the regular multi-layer perceptron when \(\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac {1}{2}}=\mathbf{I}\).
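A minimal NumPy sketch of the propagation rule (1) on a toy unweighted graph (all shapes and values below are illustrative):

```python
# One GCN layer: H^{l+1} = sigma(D^{-1/2} (A + I) D^{-1/2} H^l W^l), cf. eq. (1).
import numpy as np

def gcn_layer(A, H, W, act=lambda z: np.maximum(z, 0.0)):
    A_tilde = A + np.eye(A.shape[0])                      # add self-loops
    d_inv_sqrt = 1.0/np.sqrt(A_tilde.sum(axis=1))
    S = d_inv_sqrt[:, None]*A_tilde*d_inv_sqrt[None, :]   # normalized shift operator
    return act(S @ H @ W)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)    # 3-node path graph
rng = np.random.default_rng(0)
H0, W0 = rng.standard_normal((3, 4)), rng.standard_normal((4, 2))
print(gcn_layer(A, H0, W0).shape)                         # -> (3, 2)
```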
### Segmentation and Node Representation
For the region proposal, we use Mask R-CNN with ResNet-50 as the backbone and construct graphs using a \(k\)-NN method. The feature representation proposed in [14] is utilized, where
Figure 2: Pipeline of the graph inductive moving object segmentation algorithm via graph neural network learning. After instance segmentation and node feature extraction, the dataset is divided into \(\rho\) graphs for training-validation and into \(\alpha\) testing graphs. The algorithm then classifies nodes in the graphs as either moving or static objects.
the outputs generated by Mask R-CNN are represented as nodes in the graphs. These instances are associated with meaningful representations, such as optical flow, intensity, and texture features. Finally, all the features are concatenated to form a \(930\)-dimensional vector, which represents each instance. For further details on the feature extraction process, please refer to [14, 20].
### Graph Mini-batch
The proposed framework is distinct from previous graph-based MOS methods because it is an inductive architecture. For instance, in [14], the whole graph would need to be rebuilt every time a new video is fed into the algorithm. This requires the optimization problem to be solved again, making deployment in real-world scenarios more challenging. Therefore, instead of creating a single graph from the data, our proposed approach creates \(\rho+\alpha\) separate graphs, each with a different number of nodes. This ensures that the data in the \(\rho\) training-validation graphs are not connected. The adjacency matrices are arranged in a block-diagonal manner to construct a comprehensive graph that includes multiple distinct subgraphs as follows:
\[\mathbf{A}=\begin{bmatrix}\mathbf{A}_{1}&&\\ &\ddots&\\ &&&\mathbf{A}_{\rho}\end{bmatrix}, \tag{2}\]
where \(\mathbf{A}\) is the adjacency matrix of the mini-batch. We write \(\rho\) for the number of training and validation graphs and \(\alpha\) for the number of testing graphs, as shown in Fig. 2. The adjacency matrices are stored efficiently, using a sparse representation that only keeps the non-zero entries (the edges), so there is no extra memory overhead. The node and target features are simply concatenated along the node dimension.
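In practice, this block-diagonal assembly is exactly what PyTorch Geometric's `Batch` provides; the sketch below uses random placeholder features, edges, and labels:

```python
# Graph mini-batch as a block-diagonal adjacency, cf. eq. (2).
import torch
from torch_geometric.data import Data, Batch

graphs = []
for n_nodes in (5, 8, 3):                         # three graphs of different sizes
    x = torch.randn(n_nodes, 930)                 # 930-dim node features
    edge_index = torch.randint(0, n_nodes, (2, 4*n_nodes))
    y = torch.randint(0, 2, (n_nodes,))           # moving (1) vs. static (0)
    graphs.append(Data(x=x, edge_index=edge_index, y=y))

batch = Batch.from_data_list(graphs)              # block-diagonal adjacency
print(batch.num_nodes, batch.batch)               # 16 nodes; graph id of each node
```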
### GNN Architecture
Our model consists of two GCNConv layers [19], several ReLU activations, and three linear layers. To reduce overfitting and improve generalization to unseen videos, we use five dropout layers and four pair normalizations (PairNorm) [21]; these techniques also enhance training stability and prevent over-smoothing, with PairNorm being particularly effective for the convergence of the proposed model. Our goal is to learn meaningful representations of the graph structure and node features in order to classify objects as either static or moving. The final layer of our model employs a log-softmax, \(\mathrm{log\_softmax}(x_{i})=\log\left(\exp(x_{i})/s\right)\) with \(s=\sum_{j}\exp(x_{j})\). This GNN architecture enables GraphIMOS to be easily deployed in real-world applications.
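A sketch of the described architecture in PyTorch Geometric; the paper does not report the hidden width or the exact ordering of the normalization and dropout layers, so those below are assumptions (the layer counts, however, match the description: two `GCNConv`, three linear layers, four `PairNorm` applications, five dropouts):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, PairNorm

class GraphIMOS(torch.nn.Module):
    def __init__(self, in_dim=930, hidden=128, n_classes=2):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.norm = PairNorm()                 # parameter-free; applied four times
        self.lin1 = torch.nn.Linear(hidden, hidden)
        self.lin2 = torch.nn.Linear(hidden, hidden)
        self.lin3 = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, edge_index):
        x = F.dropout(F.relu(self.norm(self.conv1(x, edge_index))), 0.5, self.training)
        x = F.dropout(F.relu(self.norm(self.conv2(x, edge_index))), 0.5, self.training)
        x = F.dropout(F.relu(self.norm(self.lin1(x))), 0.5, self.training)
        x = F.dropout(F.relu(self.norm(self.lin2(x))), 0.5, self.training)
        x = F.dropout(x, 0.5, self.training)   # fifth dropout before the classifier
        return F.log_softmax(self.lin3(x), dim=-1)  # moving vs. static object
```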
## 3 Experiments and Results
This section introduces the metrics, and the dataset graph partitioning used to conduct the different experiments on GraphIMOS.
### Evaluation Metrics
The evaluation metrics F-Measure, precision, and recall are defined as follows:
\[\text{Recall}=\frac{\text{TP}}{\text{TP}+\text{FN}},\,\text{Precision }=\frac{\text{TP}}{\text{TP}+\text{FP}},\] \[\text{F-measure}=2\frac{\text{Precision}\times\text{Recall}}{ \text{Precision}+\text{Recall}}, \tag{3}\]
where TP, FN, and FP are the numbers of true positives, false negatives, and false positives, respectively. Following the data partitioning in Table 1, the F-Measures reported in Table 2 are computed for node-level classification.
### Experiments
We evaluate GraphIMOS against several SOTA algorithms using the large-scale CDNet 2014 dataset [18]. To build an inductive framework, we construct multiple graphs by dividing the data into four sequences, S1, S2, S3, and S4, as shown in Table 1; we therefore run four experiments to compute the F-Measure over the whole dataset. These sequences were chosen to enable evaluation on unseen videos, following the approach in [24]. We construct one graph per sequence: \(G1\), \(G2\), \(G3\), and \(G4\).
In each experiment, we use two graphs for training, one graph for validation, and one graph for testing; a different graph is used for testing in each experiment, in order to assess the performance of the proposed algorithm on unseen videos.
\begin{table}
\begin{tabular}{l c c c} \hline \hline
**Sequences** & **Training** & **Validation** & **Testing** \\ \hline S2, S3, S1, S4 (Exp1) & \(G2,G3\) & \(G1\) & \(G4\) \\ S1, S3, S4, S2 (Exp2) & \(G1,G3\) & \(G4\) & \(G2\) \\ S2, S3, S4, S1 (Exp3) & \(G2,G3\) & \(G4\) & \(G1\) \\ S1, S2, S4, S3 (Exp4) & \(G1,G2\) & \(G4\) & \(G3\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Data Partitioning.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**F-Measure** & **Exp1** & **Exp2** & **Exp3** & **Exp4** \\ \hline F-Measure validation & \(0.8420\) & \(0.8006\) & \(0.8386\) & \(0.8014\) \\ F-Measure test & \(0.8388\) & \(0.7969\) & \(0.8367\) & \(0.7567\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: F-Measure Experiments.
### Implementation Details
We employ the PyTorch Geometric library [25] to implement our proposed architecture. The \(k\)-nearest neighbors (\(k\)-NN) graphs are constructed with \(k=40\). To reduce overfitting, we use dropout with a rate of \(0.5\). For optimization, we use stochastic gradient descent (SGD) with a momentum of \(0.9\), a learning rate of \(0.01\), and a weight decay of \(5e{-4}\). We train our model for a maximum of \(500\) epochs with a graph-batch size of \(1\). Our model has \(5\) hidden layers, and we use the negative log-likelihood loss for training and evaluating the GNN model.
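A corresponding training-loop sketch with the reported hyperparameters, reusing the `GraphIMOS` model and the `graphs` list from the sketches above:

```python
import torch
import torch.nn.functional as F
from torch_geometric.loader import DataLoader

model = GraphIMOS()
loader = DataLoader(graphs, batch_size=1, shuffle=True)   # one graph per batch
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=5e-4)

for epoch in range(500):
    model.train()
    for data in loader:
        optimizer.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.nll_loss(out, data.y)        # matches the log-softmax output
        loss.backward()
        optimizer.step()
```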
### Results and Discussion
We compare GraphIMOS with FgSegNet [23], GraphMOD-Net [22], and GraphMOS [14]. While GraphMOD-Net is transductive in nature, we adapt it to our data partitioning and experimental framework for a fair comparison with GraphIMOS. We also report GraphMOS and GraphMOD-Net with their original (transductive) performance as references. The numerical and visual results in Fig. 3 and Table 3 show that GraphIMOS outperforms previous inductive learning methods. Our experiments also reveal that GraphMOD-Net [22] degrades when evaluated in an inductive setting, possibly reflecting the challenges of real-world deployment. GraphIMOS strikes a better balance between performance and realistic deployment, making it a promising candidate for real-world applications.
## 4 Conclusion
This paper introduces GraphIMOS, a novel approach that uses GNNs, graph mini-batches, and inductive learning for MOS. The proposed algorithm consists of four key components: instance segmentation using Mask R-CNN, feature extraction for node representation, \(k\)-NN graph construction, and a GNN-based inductive learning algorithm. To the best of our knowledge, GraphIMOS is the first graph-based inductive learning approach for MOS. Compared to previous works such as GraphMOD-Net, GraphIMOS offers improved performance and a better trade-off between accuracy and practical deployment. For future work, we plan to add skip connections and explore deeper GNN models.
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline
**Method** & **BSL** & **BWT** & **IOM** & **LFR** & **PTZ** & **THL** & **CJI** & **SHW** & **DBA** & **Overall** \\ \hline \multicolumn{11}{c}{**Transductive Learning Methods**} \\ GraphMOS [14] & \(0.9398\) & \(0.8294\) & \(0.3607\) & **0.5538** & \(0.7599\) & **0.7292** & \(0.7005\) & **0.9653** & \(0.7334\) & \(0.7302\) \\ GraphMOD-Net [22] (Original) & **0.9550** & **0.8390** & **0.5540** & \(0.5210\) & **0.7700** & \(0.6820\) & **0.7200** & \(0.9420\) & **0.8510** & **0.7593** \\ \hline \multicolumn{11}{c}{**Inductive Learning Methods**} \\ FgSegNet [23] & \(0.5641\) & \(0.2789\) & \(0.3325\) & \(0.2115\) & \(0.1400\) & \(0.3584\) & \(0.2815\) & \(0.3809\) & \(0.2067\) & \(0.3061\) \\ GraphMOD-Net (Modified) & \(0.6474\) & \(0.6268\) & \(0.5243\) & \(0.5337\) & \(0.5899\) & \(0.5484\) & \(0.4926\) & \(0.6587\) & **0.6254** & \(0.5831\) \\ GraphIMOS (Ours) & **0.7003** & **0.6377** & **0.5284** & **0.5478** & **0.5932** & **0.6453** & **0.6700** & **0.6807** & \(0.5868\) & **0.6211** \\ \hline \hline
**CDNet 2014** & \multicolumn{6}{c}{**Original**} & \multicolumn{6}{c}{**Ground Truth**} & **FgSegNet** & **GraphMOD** & **GraphIMOS** \\ \hline Baseline, pedestrians & & & & & & & & & \\ \hline Intermittent object motion, tramstop & & & & & & & & \\ Shadow, cubicle & & & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Average F-Measure for transductive and inductive methods. The best score of all methods appears in bold, and the best score of the inductive methods is bold underlined. The columns contain the CDNet2014 [18] challenges : bad weather (BWT), baseline (BSL), camera jitter (CJI), dynamic background (DBA), intermittent object motion (IOM), low frame rate (LFR), PTZ, shadow (SHW), and thermal (THL).
Figure 3: Comparisons of the visual results on unseen videos of the proposed GraphIMOS algorithm with two inductive methods: FgSegNet [23] and the modified GraphMOD-Net [22]. We show results from three different challenges from CDNet 2014 [18]. |
2302.02477 | Offline Learning of Closed-Loop Deep Brain Stimulation Controllers for
Parkinson Disease Treatment | Deep brain stimulation (DBS) has shown great promise toward treating motor
symptoms caused by Parkinson's disease (PD), by delivering electrical pulses to
the Basal Ganglia (BG) region of the brain. However, DBS devices approved by
the U.S. Food and Drug Administration (FDA) can only deliver continuous DBS
(cDBS) stimuli at a fixed amplitude; this energy inefficient operation reduces
battery lifetime of the device, cannot adapt treatment dynamically for
activity, and may cause significant side-effects (e.g., gait impairment). In
this work, we introduce an offline reinforcement learning (RL) framework,
allowing the use of past clinical data to train an RL policy to adjust the
stimulation amplitude in real time, with the goal of reducing energy use while
maintaining the same level of treatment (i.e., control) efficacy as cDBS.
Moreover, clinical protocols require the safety and performance of such RL
controllers to be demonstrated ahead of deployments in patients. Thus, we also
introduce an offline policy evaluation (OPE) method to estimate the performance
of RL policies using historical data, before deploying them on patients. We
evaluated our framework on four PD patients equipped with the RC+S DBS system,
employing the RL controllers during monthly clinical visits, with the overall
control efficacy evaluated by severity of symptoms (i.e., bradykinesia and
tremor), changes in PD biomarkers (i.e., local field potentials), and patient
ratings. The results from clinical experiments show that our RL-based
controller maintains the same level of control efficacy as cDBS, but with
significantly reduced stimulation energy. Further, the OPE method is shown
effective in accurately estimating and ranking the expected returns of RL
controllers. | Qitong Gao, Stephen L. Schimdt, Afsana Chowdhury, Guangyu Feng, Jennifer J. Peters, Katherine Genty, Warren M. Grill, Dennis A. Turner, Miroslav Pajic | 2023-02-05T20:29:53Z | http://arxiv.org/abs/2302.02477v4 | # Offline Learning of Closed-Loop Deep Brain Stimulation Controllers for Parkinson Disease Treatment
###### Abstract.
Deep brain stimulation (DBS) has shown great promise toward treating motor symptoms caused by Parkinson's disease (PD), by delivering electrical pulses to the Basal Ganglia (BG) region of the brain. However, DBS devices approved by the U.S. Food and Drug Administration (FDA) can only deliver continuous DBS (cDBS) stimuli at a fixed amplitude; this energy inefficient operation reduces battery lifetime of the device, cannot adapt treatment dynamically for activity, and may cause significant side-effects (_e.g._, gait impairment). In this work, we introduce an offline reinforcement learning (RL) framework, allowing the use of past clinical data to train an RL policy to adjust the stimulation amplitude in real time, with the goal of reducing energy use while maintaining the same level of treatment (_i.e._, control) efficacy as cDBS. Moreover, clinical protocols require the safety and performance of such RL controllers to be demonstrated ahead of deployments in patients. Thus, we also introduce an offline policy evaluation (OPE) method to estimate the performance of RL policies using historical data, before deploying them on patients. We evaluated our framework on four PD patients equipped with the RC+S DBS system, employing the RL controllers during monthly clinical visits, with the overall _control efficacy_ evaluated by severity of symptoms (_i.e._, bradykinesia and tremor), changes in PD biomarkers (_i.e._, local field potentials), and patient ratings. The results from clinical experiments show that our RL-based controller maintains the same level of control efficacy as cDBS, but with significantly reduced stimulation energy. Further, the OPE method is shown effective in accurately estimating and ranking the expected returns of RL controllers.
Deep Brain Stimulation, Offline Reinforcement Learning, Offline Policy Evaluation +
Footnote †: This work is sponsored in part by the NSF CNS-1837499 award and the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112526, as well as by NIH UH3 NS103468. Investigational Summit RC+S systems and technical support provided by Medtronic PLC. Apple Watches were provided by Rune Labs.
## 1. Introduction
Currently, around 1.05 million individuals in the United States are affected by Parkinson's disease (PD) (Parks et al., 2016). Deep brain stimulation (DBS) is an effective treatment to reduce PD symptoms such as tremor and bradykinesia (Bartos et al., 2016; Gers et al., 2016; Gers et al., 2016; Gers et al., 2016). A DBS system consists of electrodes that are placed into the Basal Ganglia (BG) region of the brain, and a pulse generator implanted in the chest to generate trains of short electrical pulses (see Fig. 1). Existing FDA-approved DBS solutions are limited to continuous DBS (cDBS). These devices are programmed to stimulate at a fixed amplitude, with the specific parameters determined by clinicians through trial-and-error (Shen et al., 2016). However, such stimuli usually lead to extensive energy consumption, significantly reducing the battery lifetime of the device. Moreover, over-stimulated patients, even intermittently, may suffer from side-effects such as dyskinesia and speech impairment (Bartos et al., 2016). As a result, developments of closed-loop DBS controllers that are more responsive to activity and patient state (i.e., context) are of considerable interest to clinicians, patients, and the community.
Existing DBS control methods focus on simply switching on/off the stimulation or scaling up/down its intensity in a proportional control approach, conditioned on the change of specific biomarkers, _i.e._, when they cross over some pre-determined thresholds (Bartos et al., 2016; Gers et al., 2016; Gers et al., 2016; Gers et al., 2016). Biomarkers include local field potentials (LFPs) and electroencephalography (EEG) from the BG, as well as accelerometry data and electromyography obtained from wearable devices (Shen et al., 2016). Though such methods have improved energy efficiency (Shen et al., 2016; Gers et al., 2016), they still require substantial efforts to experiment and fine-tune the thresholds for each specific patient. Moreover, the patient may suffer from sub-optimal DBS settings in between clinical visits with
poor symptom control due to varying patient state. For example, exercise or fluctuations in medication dosage or timing could affect their PD symptoms and DBS control, so the tuning results may be biased. Consequently, the **challenge (I)** of developing closed-loop DBS controllers is to ensure that the control policy can perform consistently over diverse and dynamic patient contexts and states.
Reinforcement learning (RL) has shown considerable potential in control over complicated systems [15; 21; 22; 46], and various RL-based approaches have been proposed to facilitate closed-loop DBS [19; 23; 48; 52]. Specifically, several approaches [23; 48; 52] model EEG and LFP as the state space of the RL environment and use temporal difference learning or fitted Q-iteration to design control policies adapting stimulation amplitudes/frequencies to conserve energy usage. The deep actor-critic based approach proposed in [19] further allows the temporal pattern of the stimuli to be adapted over time, benefiting from the use of deep RL techniques capable of searching in larger state and action space. Although such methods achieve satisfactory control of efficacy and energy savings jointly, _they have only been evaluated in simulations, i.e._, on computational BG models [30; 58]. One may assume that unlimited training data can be obtained from such models, which is contrary to the real-world case where the device programming is done in clinics and the patient only participates sparsely over time.
Another limitation of directly using deep RL methods for real-time DBS control is the computational complexity of evaluating the RL policies _in vivo_, as they are usually represented by deep neural networks (DNNs) that may require millions of multiplications in a single forward pass. The resource-constrained implantable devices (_e.g._, Fig. 1) may not support or facilitate such computations. Thus, the **challenge (II)** of closed-loop DBS is to ensure that the controller can be designed with limited training samples and executed without the need of extensive computing resources. Further, in contrast to simulated or robotic environments where most RL policies can be deployed directly for performance evaluation, the safety and control efficacy of the controllers directly used on patients need to be thoroughly evaluated before each test condition starts [51]. Hence, the **challenge (III)** of enabling closed-loop DBS therapies in patients is being able to proactively provide accurate estimations of the expected performance of the controllers.
Consequently, in this paper, we first introduce an offline RL framework to address the challenges (I) and (II) above, resulting in a closed-loop DBS system that is both _effective (in terms of therapy) and energy-efficient_. Specifically, we model the BG regions of the brain as a Markov decision process (MDP), capturing the underlying neuronal activities in response to the stimuli. Then, the deep actor-critic algorithm [39] is adapted to adjust the amplitude of the stimuli according to the changes in LFPs. A total of four patients, equipped with the Medtronic Summit RC+S DBS devices [59], participated in the data collection and testing trials in clinics. Given that the deep actor-critic framework is considered offline RL and can leverage all historically collected trajectories, i.e., experience replay to facilitate optimizing the control policy, we address challenge (I) by varying the level of activities, medications etc. of the patients before and during the trials. Similarly, experience collected from non-RL controllers can also be used to update the policy; for example, in the early stage of learning, a controller that generates uniformly random amplitudes (within some range) can facilitate exploring the state and action space. We also introduce model distillation/compression [26] techniques specifically for the DBS systems, such that the RL policies can be captured by deep neural networks (DNNs) with significantly fewer nodes, whose forward passes can be executed within the required control rates, addressing challenge (II).
To address challenge (III), we introduce a model-based offline policy evaluation (OPE) method that captures the underlying dynamics of the considered MDP, where the expected returns of the control policy can be estimated by the mean return of the trajectories rolled out from the learned model, without directly deploying the policy to the patient. In each DBS trial, the control efficacy is evaluated from various sources, including LFP biomarkers recorded from the implantable DBS device, patient responses to bradykinesia tests, satisfaction level reported by the patient, and the overall tremor severity quantified from accelerometry data collected by external wearable devices (_e.g._, smart watch). Note that each of the latter three criteria is only evaluated once at the end of each trial; yet they are imperative for evaluating the control efficacy from the patient's side. These efficacy metrics are thus considered sparsely available compared to the LFPs that can be sensed in each time step, which limits the use of existing OPE methods, including importance sampling (IS) [16; 54], distributional correction estimations (DICE) [47], and the model-based OPE [20], as these do not allow for explicitly capturing/modeling such end-of-session rewards. Our OPE method can capture such behaviors through a specially designed architecture and training objective, outperforming existing methods as we show in clinical experiments.
The contributions of this work are three-fold: (_i_) to the best of our knowledge, this is the first _'full-stack'_ offline RL methodology that facilitates both _optimizing_ and _evaluating_ RL-based DBS control policies using historical data; (_ii_) we developed an RL-based DBS controller whose performance is validated through clinical trials with PD patients, demonstrating _reduced energy consumption with non-inferior control efficacy compared to cDBS - this is the first effective closed-loop DBS control that is not an ON/OFF switching, or scaling up/down proportionally, and has been extensively tested in clinic (i.e., on patients)_; (_iii_) our OPE method effectively captures the end-of-session rewards, leading to accurate estimations of control efficacy using the data collected in clinic; thus, it helps demonstrate the effectiveness of the policies to be tested proactively, and can be used to prioritize the policies that could lead to better performance within the limited amount of testing time.
Figure 1. An implantable deep brain stimulation (DBS) device. The stimuli, generated by the pulse generator at a given amplitude and frequency, are delivered to the basal ganglia (BG) through multi-contact electrodes. Each electrode has four contacts; two stimulate the BG and two sense local field potentials (LFPs) that may be used for control feedback.
This paper is organized as follows. Sec. 2 provides the basics of DBS, RL, and OPE, before our clinical closed-loop DBS setup is introduced in Sec. 3. In Sec. 4, the offline RL framework is introduced, enabling training and updating RL controllers with historical data. Sec. 5 introduces the model-based OPE approach to estimate performance of RL policies. Sec. 6 presents the results of the experimental evaluations on patients, before concluding remarks in Sec. 7.
## 2. Preliminaries and Motivation
In this section, we first introduce DBS, before presenting in the next section the DBS experimental setup we developed for clinical trials, including sensing, communication and control. Also, preliminaries for offline RL and OPE are briefly introduced; more comprehensive reviews of RL and OPE can be found in (Kang et al., 2018; Wang et al., 2019; Wang et al., 2020; Wang et al., 2021).
### The Need for Closed-Loop DBS
PD is caused by progressive death of dopaminergic neurons in the substantia nigra region of the brain. This change in dopaminergic signaling results in pathological activity in the BG regions targeted by DBS, _globus pallidus pars interna_ (GPi), _globus pallidus pars externa_ (GPe) and subthalamic nucleus (STN); see Fig. 1. Given the reduced number of neurons, the level of dopamine generally decreases in BG, leading to various motor symptoms such as bradykinesia and tremor (Beng et al., 2017; Wang et al., 2020; Wang et al., 2021). Physiologically, the effect of PD can be captured by the changes in LFPs in GPi, GPe and STN. Specifically, PD can cause abnormal neuron firings in these regions, leading to an increased beta-band (13-35 Hz) amplitude (\(P_{\beta}\)) of the LFPs, referred to as the beta amplitude (Wang et al., 2020).
Existing research-only DBS devices are capable of capturing the changes in LFPs through the multi-contact electrodes implanted in the BG. As illustrated in Fig. 1, we used 4-contact electrodes placed in the STN and GP regions. Monopolar stimulation was delivered on a single contact on each lead (with the case serving as counter-electrode). The two contacts surrounding the stimulation contact were used for sensing LFPs (i.e., sandwich sensing). Existing devices providing open-loop cDBS stimulate pulses at a fixed amplitude, which in most cases can correct the abnormal neuronal activity (Wang et al., 2020). However, constantly stimulating with high amplitudes significantly reduces the battery lifetime of the DBS device and may cause serious side-effects such as speech impairment (Beng et al., 2017; Wang et al., 2020; Wang et al., 2021). Consequently, it is important to design DBS controllers that are _effective_ (from the control, i.e., therapy, perspective) and _energy-efficient_.
As discussed in the Introduction, current adaptive DBS (aDBS) approaches require considerable time and effort for the patients and their healthcare providers to determine the thresholds through trial-and-error (Zheng et al., 2019). Several deep-RL-based controllers have been proposed for closed-loop DBS, which can adapt the amplitude of the stimulation pulses in real time (Kang et al., 2018; Wang et al., 2020) in response to changes in the feedback signals (_e.g._, \(P_{\beta}\)). However, such frameworks are only validated through numerical simulations, _i.e._, on _simplified_ computational BG models, instead of clinical trials with human participants. In the real world, _substantial_ historical experience, or trajectories collected from past interactions between the controller and the environment (patient), may be necessary to learn an RL policy with suitable control efficacy and patient satisfaction (Wang et al., 2020). Offline RL holds promise to resolve this challenge, as it can use the data collected from any type of controllers, including cDBS or simply a policy switching between arbitrary stimulation amplitudes/frequencies, to optimize an RL control policy. Moreover, each time before a new control policy is deployed to the patient, the clinicians need to assess its effectiveness and may require justifications toward its estimated control efficacy and performance (Wang et al., 2020). OPE can facilitate such use cases, as it is capable of estimating the expected return of RL policies using historical trajectories, bridging the gap between offline RL training and evaluations. Preliminaries for offline RL and OPE are presented in the two subsections below.
### Offline Reinforcement Learning
Offline RL has proven useful in many domains, including robotics (Kang et al., 2018; Wang et al., 2020), healthcare (Wang et al., 2020), etc., since it can optimize the control policies without requiring the environment to be present, which guarantees the safety of the learning process. Further, it does not require the training data to be exclusively collected by the control policy being updated, leading to improved sample efficiency. To facilitate offline RL, the underlying dynamical environments are first modeled as Markov decision processes (MDPs).
Definition 2.1 (MDP).: _An MDP is a tuple \(\mathcal{M}=(\mathcal{S},s_{0},\mathcal{A},\mathcal{P},R,\gamma)\), where \(\mathcal{S}\) is a finite set of states; \(s_{0}\) is the initial state; \(\mathcal{A}\) is a finite set of actions; \(\mathcal{P}\) is the transition function defined as \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\); \(R:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\) is the reward function, and \(\gamma\in[0,1)\) is a discount factor._
Then, the RL policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\) determines the action \(a=\pi(s)\) to be taken at a given state \(s\). The accumulated return under a policy \(\pi\) can be defined as follows.
Definition 2.2 (Accumulated Return).: _Given an MDP \(\mathcal{M}\) and a policy \(\pi\), the accumulated return over a finite horizon starting from the stage \(t\) and ending at stage \(T\), for \(T>t\), is defined as_
\[G_{t}^{\pi}=\sum_{k=0}^{T-t}\gamma^{t+k}r_{t+k}, \tag{1}\]
_where \(r_{t+k}\) is the return at the stage \(t+k\)._
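For concreteness, the finite-horizon return in Eq. (1) can be computed directly from a recorded reward sequence; this small function is only an illustration of the definition (note that, as written in Eq. (1), the discount exponent is \(t+k\) rather than \(k\)).

```python
def accumulated_return(rewards, t, gamma):
    """Eq. (1): G_t = sum_{k=0}^{T-t} gamma^(t+k) * r_{t+k}.
    `rewards` holds r_0, ..., r_T for one trajectory."""
    return sum(gamma ** (t + k) * r for k, r in enumerate(rewards[t:]))
```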
The goal of offline RL can now be defined as follows.
Figure 2. The overall architecture of the RC+S DBS system. The Summit research and development kit (RDK) can be used to configure the Summit program, allowing us to compute the beta amplitude (\(P_{\beta}\)) and execute the RL controller.
Problem 1 (Offline Reinforcement Learning).: _Given an MDP \(\mathcal{M}\) with unknown transition dynamics \(\mathcal{P}\), a pre-defined reward function \(R\), and an experience replay buffer \(\mathcal{E}^{\mu}=\big\{[(s_{0},a_{0},r_{0},s_{1}),\ldots,(s_{T-1},a_{T-1},r_{T-1},s_{T})]^{(0)},\,[(s_{0},a_{0},r_{0},s_{1}),\ldots]^{(1)},\ldots\,\big|\,a_{t}\sim\mu(a_{t}|s_{t})\big\}\) containing trajectories collected over an unknown behavioral policy \(\mu\), find the target policy \(\pi^{*}\) such that the expected accumulative return starting from the initial stage over the entire horizon is maximized, i.e.,_
\[\pi^{*}=\underset{\pi}{\operatorname{argmax}}\mathbb{E}_{s,a\sim\rho^{\pi},r \sim R}[G_{0}^{\pi}]; \tag{2}\]
_here, \(\rho^{\pi}\) is the state-action visitation distribution under policy \(\pi\)._
The deep actor-critic RL framework (Wang et al., 2017) can be leveraged to solve (2). Other value-based RL methods such as conservative Q-learning (Wang et al., 2017) and implicit Q-learning (Wang et al., 2017) could also be considered; however, actor-critic methods can in general reduce the variance of gradient estimations and result in faster convergence (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017). Here, we specifically consider the deterministic version of actor-critic (Wang et al., 2017), instead of the one producing stochastic policies (Wang et al., 2017), as it would be easier to demonstrate the effectiveness of deterministic policies in clinics, as well as via OPE methods introduced below. Details on the deep actor-critic algorithm (Wang et al., 2017) are provided in Appendix C.1.
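To make the deterministic actor-critic concrete, the sketch below shows one DDPG-style update from a replay batch. It is a schematic of the standard algorithm, not the exact procedure from the cited appendix; the `actor`/`critic` modules and their call signatures, and the values of `gamma` and `tau`, are assumptions.

```python
import torch
import torch.nn.functional as F

def actor_critic_update(actor, critic, target_actor, target_critic,
                        batch, actor_opt, critic_opt, gamma=0.99, tau=0.005):
    """One deterministic actor-critic (DDPG-style) update from (s, a, r, s')."""
    s, a, r, s2 = batch
    with torch.no_grad():
        # Bootstrapped TD target computed with slow-moving target networks
        y = r + gamma * target_critic(s2, target_actor(s2))
    critic_loss = F.mse_loss(critic(s, a), y)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Deterministic policy gradient: ascend Q(s, pi(s))
    actor_loss = -critic(s, actor(s)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak averaging of the target networks
    for net, target in ((critic, target_critic), (actor, target_actor)):
        for p, tp in zip(net.parameters(), target.parameters()):
            tp.data.mul_(1 - tau).add_(tau * p.data)
```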
### Offline Policy Evaluation for DBS
OPE allows the use of the experience replay buffer to estimate the expected return of RL policies, without the need to deploy them to the environment directly. Fig. 3 illustrates the use case of OPE in the context of DBS clinical testing. Specifically, during phases \(I\) and \(II\), offline RL uses all trajectories collected historically to train RL policies following different hyper-parameters etc. Then, in phase \(III\), OPE can be used to estimate and rank the expected return of these policies, where the top-performing ones can be deployed during the next clinic visit (phase \(IV\)). Consequently, OPE can effectively reduce the number of testing sessions needed, so the policies that show promise of attaining better performance can be thoroughly tested within the short time frame. Also, it can demonstrate the effectiveness of the policies to be deployed in clinics.
The goal of OPE can be defined as follows.
Problem 2 (Offline Policy Evaluation).: _Consider a target policy \(\pi\), and off-policy trajectories \(\mathcal{E}^{\mu}=\{(s_{0},a_{0}),(s_{1}\,,a_{1}),\ldots|a_{t}=\mu(s_{t})\}\), collected following a behavioral policy \(\mu\neq\pi\), over an MDP \(\mathcal{M}\). The OPE goal is to estimate the expected return of the target policy \(\pi\), i.e., \(\mathbb{E}_{s,a\sim\rho^{\pi},r\sim R}[G_{0}^{\pi}]\)._
Most existing OPE methods, such as (Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017; Wang et al., 2017), are heavily based on importance sampling (IS) and could result in inconsistent estimations due to the high variance of the IS weights (Wang et al., 2017; Wang et al., 2017). On the other hand, model-based OPE methods have shown strengths in estimating the expected returns more accurately (Wang et al., 2017; Wang et al., 2017), by directly capturing the MDP transitions and rewards. The variational encoding-decoding based deep latent MDP model (DLMM) introduced in (Wang et al., 2017) is shown to be effective in evaluating control policies for a computational BG model. Specifically, DLMM is derived following the variational inference framework from (Wang et al., 2017). The basics of DLMM are provided in Appendix C.2, and we refer the readers to (Wang et al., 2017) for basics of variational inference. In Sec. 5, we extend it toward the clinical use case considered in this work, to allow for including the QoC metrics that can only be evaluated once in each session, such as the bradykinesia results, patient ratings, and tremor severity, which will be available as illustrated in Fig. 4.
## 3. Dbs Setup Used in Clinical Trials
We build on the research-only Medtronic Summit RC+S system (Wang et al., 2017) to enable testing of RL-based controllers in clinical trials. The overall architecture of the RC+S-based system we developed is illustrated in Fig. 2. Specifically, Medtronic provides the code and communication APIs (Summit program), which enable the stimulation amplitude of the pulses delivered by the internal pulse generator (IPG) to be adapted over time. The Summit program is developed using the C# language under the .NET framework, which we extended to execute RL policies leveraging the provided Summit research development kit (RDK), requiring the use of a Windows OS.
Figure 4. Setup of the developed DBS clinical testing procedure. A total of three data streams are collected: (1) the LFPs and stimulation amplitudes are recorded over time; the logged trajectories are used to evaluate the performance of deployed RL controllers, as well as training data for further fine-tuning; (2) patient feedback including results from bradykinesia and a rating on the scale between 1-10; (3) patient tremor severity captured by wearable devices.
Figure 3. Timeline for training RL-based DBS controllers in clinical studies. Since only limited data can be collected during each clinical visit, offline RL can be used to fine-tune existing or train new controllers using all the historical data. Then, offline policy evaluation (OPE) facilitates choosing the possible top-performing ones to be tested in the next visit.
Thus, a research tablet is used for the execution of the developed DBS controllers; the desired stimulation amplitude is computed for each control cycle (every 2 seconds) and sent to the IPG over Bluetooth, using proprietary communication and security protocols. On the other hand, the IPG transmits to the controller the LFPs captured from the BG, from which the beta amplitude of the LFPs, denoted by \(P_{\beta}\), is calculated and used as a quality of control (QoC) metric as well as a potential control feedback signal (_i.e._, input to the RL controller). Each clinical trial session lasts 5-20 minutes depending on the schedule of the visit, and multiple controllers can be tested across different sessions. All the computed \(P_{\beta}\) and stimulation amplitudes applied over time are logged for future training and evaluation purposes, as summarized in Fig. 4. For the developed system design, we obtained the FDA's Investigational Device Exemption (IDE) G180280, which has allowed us to perform human experiments according to an Institutional Review Board (IRB) protocol approved by Duke University Medical Center.
In addition to \(P_{\beta}\), three other QoC metrics are collected from every patient at the end of each session. Specifically, near the end of each session, the patient is asked to perform a 10-second hand-grasp maneuver (rapid, full extension and closing of all fingers) (Sutton et al., 2017) to evaluate the severity of the possible bradykinesia caused by PD. Such hand motions are captured by a Leap Motion sensor by Ultraleap (Brandt et al., 2017). Then, the elapsed time between any two consecutive open fists is captured and recorded by the sensor, after which the grasp frequency can be calculated as
\[QoC_{grasp}=\frac{1}{\frac{1}{N-1}\sum_{i=1}^{N-1}t_{(i,i+1)}}; \tag{3}\]
here, \(N\) is the total number of open fists throughout the 10 s test, and \(t_{(i,i+1)}\) is the time spent between the \(i\)-th and \(i+1\)-th grasp. Further, at the end of each session, the patient provides a score between 1 and 10, with 10 indicating the highest level of satisfaction with the treatment received in the past session, and 1 being the lowest, _i.e._,
\[QoC_{rate}\in[1,10]\subset\mathbb{Z}^{+}. \tag{4}\]
The grasp frequency and rating for each session are also recorded, which corresponds to the patient feedback stream in Fig. 4.
Throughout all sessions, an Apple Watch is worn on the patient's wrist, where Apple's movement disorder kit (Sutton et al., 2017) is used to analyze the accelerometry movements, classifying the patient's tremor severity as no-tremor, slight, mild, moderate and strong every 1 minute, following StrivePD's implementation (Brandt et al., 2017). At the end of each session, an overall tremor severity is recorded as the fraction of time the patient experienced mild (\(T_{mild}\)), moderate (\(T_{moderate}\)) or strong (\(T_{strong}\)) tremor over the entire session of length \(T_{session}\), _i.e._,
\[QoC_{tremor}=\frac{T_{mild}+T_{moderate}+T_{strong}}{T_{session}}\times 100\%. \tag{5}\]
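Both session-level metrics are simple aggregations; a direct transcription of Eqs. (3) and (5) is given below. The list-based inputs are assumptions about how the sensor logs are stored.

```python
def qoc_grasp(grasp_intervals):
    """Eq. (3): grasp frequency = inverse of the mean inter-grasp time.
    `grasp_intervals` holds t_(i,i+1) for i = 1, ..., N-1 (seconds)."""
    return 1.0 / (sum(grasp_intervals) / len(grasp_intervals))

def qoc_tremor(minute_labels):
    """Eq. (5): percentage of session time spent in mild/moderate/strong
    tremor, from the per-minute severity labels produced by the watch."""
    bad = sum(1 for m in minute_labels if m in ("mild", "moderate", "strong"))
    return 100.0 * bad / len(minute_labels)
```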
The three data streams are collected from all trial sessions after each clinical visit. Moreover, each time a patient may come into the clinic with slightly different PD conditions (_e.g._, pathology progression over time), medication prescriptions, activity levels etc.; thus, our goal is to capture impact of such changes by the data collection process, in order to facilitate the training and testing the offline RL and OPE frameworks for DBS.
## 4. Offline RL Design of DBS Controllers
In this section, we employ offline RL for learning control policies for DBS clinical trials, starting from the formulation of an MDP \(\mathcal{M}\) capturing the underlying neurological dynamics in the BG, and the policy distillation technique that allows for reducing the computational time and resource needed to evaluate the RL policies (represented by DNNs).
### Modeling the BG as an MDP
We now define the elements of an MDP \(\mathcal{M}=(\mathcal{S},s_{0},\mathcal{A},\mathcal{P},R,\gamma)\).
_State Space \(\mathcal{S}\) and the Initial State \(s_{0}\)._ As discussed in Sec. 2.1 and 3, our DBS controller supports calculation of \(P_{\beta}\) from LFPs, and the changes in \(P_{\beta}\) can be used as a biomarker for PD-levels for some patients. Thus, we consider the MDP state, at a _discrete_ time step \(t\), as a historical sequence of \(P_{\beta}\) sampled at fixed intervals, captured by \(m\in\mathbb{Z}^{+}\), over a sliding queue of size \(W\in\mathbb{Z}^{+}\), _i.e._,
\[s_{t}=\left[\beta_{(\tilde{t}-(W-1)m)},\beta_{(\tilde{t}-(W-2)m)},\ldots,\beta _{(\tilde{t}-2m)},\beta_{(\tilde{t}-m)},\beta_{(\tilde{t})}\right]. \tag{6}\]
Here, \(\beta_{(\tilde{t})}\)'s are the \(P_{\beta}\) evaluated at the elapsed time \(\tilde{t}\) since the clinical trial starts, \(m\) is configurable in our system design (Fig. 2), and we used \(m=2\) corresponding to calculating \(P_{\beta}\) every 2 \(s\), resulting in 20 \(s\) time-windows for \(W=10\) elements in the queue; finally, \(s_{t}\in\mathbb{R}^{W}\) is the state at \(t\)-th (discrete) step of the MDP. The initial state \(s_{0}\) is considered to be the \(\beta\) sequence collected right before the clinical trial starts, _i.e._, from \(\tilde{t}=-(W-1)m\) to \(\tilde{t}=0\).
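In implementation terms, the state in Eq. (6) is a fixed-length sliding window; a minimal sketch is shown below (the class and method names are illustrative, not from the deployed Summit code).

```python
from collections import deque

class BetaStateQueue:
    """Sliding window of W beta-amplitude samples (Eq. (6)); with m = 2 s
    between samples and W = 10, the state spans a 20 s history."""
    def __init__(self, W=10):
        self.window = deque(maxlen=W)

    def push(self, beta):
        self.window.append(beta)   # the oldest sample is dropped when full
        return list(self.window)   # state s_t once W samples have accumulated
```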
_Action Space \(\mathcal{A}\)._ The amplitude of DBS stimulation pulses can be changed in pre-defined (discrete) time steps, _i.e._, every 2 seconds for the developed controllers. We consider the actions \(a_{t}\) as the percentage of the cDBS amplitude determined by clinicians; _i.e._, \(a_{t}\in[0,1]\subset\mathbb{R}\), where \(a_{t}=0\) and \(a_{t}=1\) correspond to no-DBS and stimulation with the same amplitude as in cDBS, respectively.
_Transition Dynamics \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\)._ Every time after the stimulation amplitude is adjusted following \(a_{t}\), the system computes the latest \(\beta_{(\tilde{t}+m)}\) using the LFPs sent back from the IPG; this leads to the MDP state at the (t+1)-th (discrete) step as
\[s_{t+1}=\left[\beta_{(\tilde{t}-(W-2)m)},\beta_{(\tilde{t}-(W-3)m)},\ldots, \beta_{(\tilde{t})},\beta_{(\tilde{t}+m)}\right], \tag{7}\]
_i.e._, the left-most element in (6) is pushed out, with \(\beta_{(\tilde{t}+m)}\) appended to the right-end. Note that we define the MDP states \(s_{t}\) and actions \(a_{t}\) over discrete time steps, \(t\)'s, instead of the elapsed time \(\tilde{t}\), for conciseness of the equations and presentation below. Now, the MDP transitions are captured to directly follow \(s_{t+1}\sim\mathcal{P}(s_{t},a_{t})\).
_Reward Function \(R:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow\mathbb{R}\)._ Following from the setup of the DBS system (Sec. 3), we define the rewards as
\[R(s_{t},a_{t},s_{t+1})=\begin{cases}r_{a}-C_{1}\cdot a_{t},&\text{if }\beta_{(\tilde{t}+m)}\leq\xi_{\beta},\\ r_{b}-C_{1}\cdot a_{t},&\text{if }\beta_{(\tilde{t}+m)}>\xi_{\beta};\end{cases}\tag{8}\]
here, \(r_{a}\) is the reward received when the updated beta amplitude \(\beta_{(\tilde{t}+m)}\) stays at or below the threshold \(\xi_{\beta}\), and the term \(-C_{1}\cdot a_{t}\), with constant \(C_{1}>0\), penalizes over-usage of large stimulation amplitudes (for better energy efficiency). On the other hand, if \(\beta_{(\tilde{t}+m)}\) is greater than the threshold \(\xi_{\beta}\), a negative reward \(r_{b}\) is used to replace \(r_{a}\) above.
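With the constants later used in the trials (Sec. 6.1: \(r_{a}=0\), \(r_{b}=-1\), \(C_{1}=0.3\)), Eq. (8) reduces to the small function below; the threshold value \(\xi_{\beta}\) is patient-specific and is left as a parameter.

```python
def reward(beta_next, a_t, xi_beta, r_a=0.0, r_b=-1.0, C1=0.3):
    """Eq. (8): baseline reward r_a, or penalty r_b if the new beta amplitude
    exceeds the threshold, minus an energy penalty proportional to the
    normalized stimulation amplitude a_t in [0, 1]."""
    base = r_a if beta_next <= xi_beta else r_b
    return base - C1 * a_t
```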
**Remark 4.1**.: _The reward functions used for RL training do not consider the QoC metrics that are available not at every step of the control execution (i.e., every 2 s) but only at the end of each clinical session, i.e., \(QoC_{\text{grasp}}\), \(QoC_{\text{rate}}\), \(QoC_{\text{tremor}}\) from (3), (4), (5). The reason is that the horizon \(T\) is usually large and their coverage can be very sparse. Instead, these QoC metrics serve as useful measurements quantifying how well the policies perform, which are thus leveraged by the OPE techniques introduced in Sec. 5._
For the introduced MDP \(\mathcal{M}\), we leverage the offline RL framework introduced in Sec. 2.2 to search for the target policy \(\pi^{*}\). Following from Problem 1, it requires an experience replay buffer \(\mathcal{E}^{\mu}\) that consists of historical trajectories collected over some behavioral policy \(\mu\). At the beginning of offline RL training, exploration of the environment is deemed more important than exploitation (Krause et al., 2017). Hence, a controller that generates random actions uniformly from \([B,1]\) is used to constitute \(\mathcal{E}^{\mu}\) at the early stages of clinical trials, where \(B\) is the lower bound from which the random \(a_{t}\) can be generated, for the sake of the patient's safety and acceptance.
Once the RL policies can attain satisfactory overall performance, _i.e._, quantified as achieving significantly improved QoCs (introduced in Sec. 3) compared to the random controller above, we consider including into \(\mathcal{E}^{\mu}\) the trajectories obtained from such RL policies. From this point onward, the replay buffer \(\mathcal{E}^{\mu}\) will be iteratively updated and enriched with the RL-induced trajectories after each trial. Consequently, the behavioral policy \(\mu\) can be considered as a mixture of random control policy and several RL policies deployed in past trials in general. With \(\mathcal{E}^{\mu}\) being defined, the objective for training RL policies, (20), can be optimized using gradient descent (Han et al., 2017; Wang et al., 2018; Wang et al., 2018).
### Policy Distillation
Our system design (Fig. 2) is set to process various tasks in each \(2\)\(s\) stimulation (i.e., control) period, facilitating communication between the research tablet and IPG, computing \(P_{\beta}\) from LFPs, evaluating the RL controller, data logging, and other basic functionalities that ensure the safety and functionality of DBS. Hence, it was critical to reduce the overall computation requirements, such that each task meets the required timings, as well as to prolong the battery lifetime. As introduced in Sec. 2.2, the RL policies are parameterized as DNNs; although a forward pass of a DNN does not require as many computational resources as training (through back-propagation), it may still involve hundreds of thousands of multiplication operations. For example, considering the recommended DNN size as in (Wang et al., 2018), it takes at least 120,000 multiplications to evaluate a two-layer NN with 400 and 300 nodes each. Hence, we integrate into our system the model/policy distillation techniques (Krause et al., 2017), allowing smaller-sized NNs to be used to parameterize RL policies.
We build on a similar approach as in (Wang et al., 2018), originally proposed to reduce the size of DNNs used in deep Q-learning (Wang et al., 2018), which only works for a discrete action space. In particular, our extension allows for the use in the deterministic actor-critic cases considered in this work. Consider the original policy (_teacher_) \(\pi_{\theta_{a}}\) parameterized by a DNN with weights \(\theta_{a}\). We train a smaller-sized DNN (_student_) with weights \(\tilde{\theta}_{a}\) to learn \(\theta_{a}\)'s behavior, by minimizing the mean squared error
\[\min_{\tilde{\theta}_{a}}||\pi_{\theta_{a}}(s_{t})-\pi_{\tilde{\theta}_{a}}(s_ {t})||^{2}, \tag{9}\]
for all state samples contained in the experience replay \(s_{t}\in\mathcal{E}^{\mu}\). We also consider augmenting the data used to optimize (9) to smooth out the learning process. We introduce synthetic states, \(\tilde{s}_{t}\)'s, where each \(\tilde{s}_{t}\) is generated by adding noise to each dimension of a state sample \(s_{t}\) that is originally in \(\mathcal{E}^{\mu}\); the noise is sampled from a zero-mean Gaussian distribution, \(\epsilon_{t}\sim\mathcal{N}(0,\sigma^{2})\) with \(\sigma\) being a hyper-parameter.
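A minimal distillation step, combining Eq. (9) with the Gaussian state augmentation described above, might look as follows; the function name and the noise scale `sigma` are illustrative choices rather than values reported in the paper.

```python
import torch
import torch.nn.functional as F

def distill_step(teacher, student, states, opt, sigma=0.05):
    """One policy-distillation step (Eq. (9)): the student network matches
    the frozen teacher's actions on replay states and on noise-perturbed
    copies of those states (zero-mean Gaussian augmentation)."""
    noisy = states + sigma * torch.randn_like(states)
    batch = torch.cat([states, noisy], dim=0)
    with torch.no_grad():
        target_actions = teacher(batch)   # teacher is not updated
    loss = F.mse_loss(student(batch), target_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```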
## 5. OPE of DBS Controllers Including Patient Feedback and Tremor Data
As discussed in Remark 4.1, besides the reward function introduced in Sec. 4.1, for OPE we employ the QoC metrics \(QoC_{\text{grasp}}\), \(QoC_{\text{rate}}\), and \(QoC_{\text{tremor}}\) defined in (3), (4), (5), respectively, which are only available at the end of each session. As these capture the performance (i.e., therapy effectiveness) of the considered policy well, for OPE we additionally consider the end-of-session rewards defined as
\[r_{end} =R_{end}(s_{0},a_{0},s_{1},a_{1},\ldots,s_{T-1},a_{T-1},s_{T})\] \[=C_{2}\cdot QoC_{\text{grasp}}+C_{3}\cdot QoC_{\text{rate}}-C_{4}\cdot QoC_{\text{tremor}}, \tag{10}\]
with \(C_{2}\), \(C_{3}\), \(C_{4}>0\) real constants. Without loss of generality, we slightly modify the total return under policy \(\pi\) (from Problem 2) as
\[G_{0}^{\pi}=r_{end}+\sum_{t=0}^{T}\gamma^{t}r_{t}, \tag{11}\]
where \(r_{t}\) and \(r_{end}\) follow from (8) and (10), respectively.
As discussed in Sec. 2.3, the DLMM introduced in (Krause et al., 2017) falls short in dealing with long horizons and predicting the end-of-session rewards \(r_{end}\). To address these limitations, in this section we introduce the _deep latent sequential model_ (DLSM) that directly enforces the transitions over the latent variable space (LVS). The overall model architecture is shown in Fig. 5. First, the latent prior \(p_{\psi}(z_{0})\) is defined only over the initial latent variable at step \(t=0\), \(z_{0}\), which follows a multivariate Gaussian distribution with zero mean and identity covariance matrix.
Then, the encoder (approximated posterior) is defined over each trajectory (from \(t=0\) to \(T\)) as
\[q_{\phi}(z_{0:T}|s_{0:T},a_{0:T-1})=q_{\phi}(z_{0}|s_{0})\prod_{t=1}^{T}q_{ \phi}(z_{t}|z_{t-1},a_{t-1},s_{t}). \tag{12}\]
Figure 5. Architecture of the new deep latent sequential model (DLSM). The conditional dependencies between the variables from the posterior and sampling distributions are shown in dashed and solid lines, respectively.
Further, the second term \(q_{\phi}(z_{t}|z_{t-1},a_{t-1},s_{t})\), which enforces the transitions between \(z_{t-1}\) and \(z_{t}\) conditioned on (\(a_{t-1}\), \(s_{t}\)) and enables the encoder to capture the dynamical transitions in the LVS, can be obtained iteratively following
\[z_{0}^{\phi}\sim q_{\phi}(z_{0}|s_{0}),\ h_{t}^{\phi}=f_{\phi}(h_{t-1}^{\phi},z_ {t-1}^{\phi},a_{t-1},s_{t}),\ z_{t}^{\phi}\sim q_{\phi}(z_{t}|h_{t}^{\phi}); \tag{13}\]
here, \(q_{\phi}(z_{0}|s_{0})\) and \(q_{\phi}(z_{t}|h_{t}^{\phi})\) are parameterized by multivariate diagonal Gaussian distributions, each with mean and covariance determined by a feedforward DNN (Goodfellow et al., 2016); moreover, \(h_{t}^{\phi}\) is the hidden state of a recurrent DNN, such as long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), capturing the historical transitions among \(s_{t}\), \(a_{t}\) and \(z_{t}^{\phi}\) for all past steps up until \(t-1\) within each trajectory.
The decoder (sampling distribution) is responsible for interacting with the target policies to be evaluated, from which the expected returns can be estimated as the mean return obtained by the simulated trajectories. Specifically, the decoder is defined as follows, _i.e._,
\[p_{\psi}(z_{1:T},s_{0:T},r_{0:T-1},r_{end}|z_{0})=p_{\psi}(r_{ end}|z_{T})\cdot\] \[\prod_{t=0}^{T}p_{\psi}(s_{t}|z_{t})\prod_{t=1}^{T}p_{\psi}(z_{t} |z_{t-1},a_{t-1})p_{\psi}(r_{t-1}|z_{t}); \tag{14}\]
here, \(p_{\psi}(r_{end}|z_{T})\) estimates the end-of-session rewards given the latent variable at \(t=T\), \(z_{T}\); \(p_{\psi}(s_{t}|z_{t})\), \(p_{\psi}(r_{t-1}|z_{t})\) reconstruct the states and rewards; \(p_{\psi}(z_{t}|z_{t-1},a_{t-1})\) enforces the transitions over the latent variables, \(z_{t}\)'s, conditioned on the actions; and \(z_{0}\sim p_{\psi}(z_{0})\) is sampled from the prior. As a result, each simulated trajectory can be generated by the decoder following
\[h_{t}^{\psi} =f_{\psi}(h_{t-1}^{\psi},z_{t-1}^{\psi},a_{t-1}),\ z_{t}^{\psi} \sim p_{\psi}(z_{t}|h_{t}^{\psi}),\ s_{t}^{\psi}\sim p_{\psi}(s_{t}|z_{t}^{\psi}),\] \[r_{t-1}^{\psi} \sim p_{\psi}(r_{t-1}|z_{t}^{\psi}),\ a_{t-1}\sim\pi(a_{t-1}|s_{t- 1}^{\psi}),\ r_{end}^{\psi}\sim p_{\psi}(r_{end}|z_{T}); \tag{15}\]
here, \(h_{t}^{\psi}\) is the hidden state of a recurrent DNN; \(p_{\psi}(z_{t}|h_{t}^{\psi}),p_{\psi}(s_{t}|z_{t}^{\psi})\), \(p_{\psi}(r_{t-1}|z_{t}^{\psi})\) and \(p_{\psi}(r_{end}|z_{T})\) are multivariate diagonal Gaussians with means and covariances determined by four feedforward DNNs separately. Hence, \(s_{t}^{\psi}\)'s and \(r_{t-1}^{\psi}\)'s can be sampled iteratively following the process above, using the actions obtained from the target policy \(a_{t-1}\sim\pi(a_{t-1}|s_{t-1}^{\psi})\) accordingly, which constitute the simulated trajectories; and \(r_{end}^{\psi}\) is sampled at the end of each simulated trajectory.
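Given a trained decoder, the OPE estimate is just the mean return of trajectories rolled out through Eq. (15) under the target policy. The sketch below captures this loop; every `decoder.*` method name is hypothetical shorthand for the corresponding sampling head in Eq. (15), not an actual API.

```python
import torch

@torch.no_grad()
def ope_estimate(decoder, policy, num_rollouts, horizon, gamma):
    """Estimate E[G_0] (Eq. (11)) by averaging returns of trajectories
    simulated from the learned DLSM decoder (Eq. (15))."""
    returns = []
    for _ in range(num_rollouts):
        z, h = decoder.sample_prior(), decoder.init_hidden()
        s = decoder.sample_state(z)
        G = 0.0
        for t in range(horizon):
            a = policy(s)
            h = decoder.step(h, z, a)        # recurrent latent transition
            z = decoder.sample_latent(h)
            s = decoder.sample_state(z)
            G += gamma ** t * float(decoder.sample_reward(z))
        G += float(decoder.sample_end_reward(z))  # end-of-session reward head
        returns.append(G)
    return sum(returns) / len(returns)
```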
The theorem below derives an ELBO for the joint log-likelihood \(\log p_{\psi}(s_{0:T},r_{0:T-1},r_{end})\), following the above DLSM architecture.
Theorem 5.1 (ELBO for DLSM).: _An ELBO of the joint log-likelihood \(\log p_{\psi}(s_{0:T},r_{0:T-1},r_{end})\) can be obtained as_
\[\mathcal{L}_{ELBO}(\psi,\phi)=\mathbb{E}_{z_{t}\sim q_{\phi}}\Big{[} \sum_{t=0}^{T}\log p_{\psi}(s_{t}|z_{t})+\sum_{t=1}^{T}\log p_{\psi}(r_{t-1}|z_ {t})\] \[+\log p_{\psi}(r_{end}|z_{T})-KL(q_{\phi}(z_{0}|s_{0})||p(z_{0}))\] \[-\sum_{t=1}^{T}KL(q_{\phi}(z_{t}|z_{t-1},a_{t-1},s_{t})||p_{\psi}( z_{t}|z_{t-1},a_{t-1}))\Big{]} \tag{16}\] \[\leq\log p_{\psi}(s_{0:T},r_{0:T-1},r_{end}); \tag{17}\]
_here, the first three terms are the log-likelihood of the decoder to reconstruct \(s_{t}\), \(r_{t-1}\) and \(r_{end}\) correctly, and the two terms that follow regularize the transitions captured by the encoder over the LVS, with \(KL(\cdot||\cdot)\) being the Kullback-Leibler (KL) divergence (Kolmogorov, 1957)._
The proof of Theorem 5.1 can be found in Appendix F. Empirically, similar to the DLMM (Kolmogorov, 1957), the ELBO can be evaluated using the trajectories from the experience replay \(\mathcal{E}^{\mu}\), by replacing the expectation with the mean over all trajectories, after which the objective \(\max_{\psi,\phi}\mathcal{L}_{ELBO}(\psi,\phi)\) can be achieved using gradient descent (Kolmogorov, 1957) following the algorithm in Appendix D. Moreover, the reparameterization trick (Kolmogorov, 1957) is used, which allows the gradients to be back-propagated when sampling from Gaussian distributions with means and covariances determined by DNNs. Details on reparameterization can be found in (Kolmogorov, 1957; Kolmogorov, 1957).
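For completeness, the reparameterization trick amounts to the following one-line transform, which makes the Gaussian sampling steps in Eqs. (13) and (15) differentiable with respect to the DNN outputs.

```python
import torch

def reparameterized_sample(mu, log_var):
    """z = mu + sigma * eps with eps ~ N(0, I), so gradients flow through
    mu and log_var when optimizing the ELBO."""
    return mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
```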
## 6. Clinical Evaluations
Using our closed-loop DBS system presented in Sec. 3, we evaluated the developed RL-based control framework in clinical trials on four PD patients, at Duke University Medical Center. In particular, we evaluated and compared four different types of controllers: cDBS, RL, RL with policy distillation (_i.e._, distilled RL), and no-DBS (i.e., without stimulation). The electrodes of the DBS device were placed in STN and GPi brain regions for all four participants; LFPs were sensed from STN and stimuli were delivered to both STN and GP.
Each participant also had _different PD symptoms and severity_; their characteristics are summarized in Appendix E. All trials were conducted under close supervision of clinical experts, strictly following the process approved by the Duke University Medical Center IRB protocol complying with the obtained FDA IDE (G180280). Further, all participants provided informed written consent.
### Therapy Efficacy and Energy-Efficiency of the RL Control Policies
We follow the offline RL and policy distillation methodology introduced in Sec. 4 to train and update (distilled) RL policies iteratively over time. Specifically, each participant had monthly clinical visits, where during each trial day a total of 2-4 RL policies were tested. A cDBS session was placed in between any two RL sessions as a control. A small number of no-DBS sessions, with DBS stimulation fully off, were also tested, to validate our choice of the employed QoC metrics - _i.e._, whether they significantly change when the participants are not stimulated.
After each trial day was completed, the trajectories collected from all the sessions were added to the experience replay buffer \(\mathcal{E}^{\mu}\) unique to each participant. Between two consecutive visits of each participant, their \(\mathcal{E}^{\mu}\) was used to fine-tune the top-performing policies determined from the last trial (using smaller learning rates between \([10^{-7},10^{-5}]\)) or to train new policies from scratch (with learning rates between \([10^{-5},10^{-3}]\)); such policies were then tested in the next visit. We followed (Kolmogorov, 1957) and used two-layer NNs with 400 and 300 nodes each to parameterize the RL policies; moreover, a distilled version (student) of each corresponding full-sized RL policy (teacher) was trained as introduced in Sec. 4.2, with each represented as a two-layer NN with 20 and 10 nodes. The constants in (8) were set to \(r_{a}=0,r_{b}=-1,C_{1}=0.3\) for all participants.
In each testing session, to evaluate the overall performance of the employed control policy, a total of 5 metrics were considered: the energy used by the IPG for stimulation, the mean beta amplitude over
the session, and the 3 QoCs introduced in Sec. 3; for \(QoC_{grasp}\), we captured the grasp frequencies of the hand that best correlates with the PD symptom for the participant (see Appendix E for details).
Fig. 6 summarizes the obtained results, and Table 1 documents the total amount of time each controller was tested in clinic. Wilcoxon rank-sum tests [43] between cDBS and each of the other controllers were used to test the null hypothesis that _two sets of measurements were drawn from the same distribution_ (i.e., that the controllers perform similarly over the considered metrics); from this, \(p\)-values can be calculated. The \(p\)-values accepting/rejecting the null hypothesis in the desired direction are highlighted in Fig. 6. Specifically, it can be observed that, compared to cDBS, the RL policies and their distilled version save significant (20%-55%) stimulation energy across participants, with \(p<.05\) for all participants, rejecting the null hypothesis.
When considering the other 4 metrics, a great majority of results have \(p\geq.05\), accepting the null hypothesis and indicating that both RL controllers attain control (i.e., therapy) efficacy similar to cDBS. In contrast, for the no-DBS sessions, the null hypothesis is rejected in most cases. Specifically, the \(p<.05\) attained by no-DBS over the mean beta amplitude, for all participants, shows that beta amplitudes change significantly depending on whether sufficient DBS is received, which justifies our choice of using the beta amplitudes to constitute MDP states. This also shows that the RL policies can follow the reward function (from Sec. 4.1) to effectively optimize the control strategies, with beta amplitudes also playing an important role. Consequently, the results show that both full and distilled RL policies can significantly reduce the stimulation energy, while achieving non-inferior control efficacy compared to cDBS.
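The Wilcoxon rank-sum comparison used here is directly available in SciPy; the snippet below illustrates the procedure on hypothetical per-session energy measurements (the numbers are placeholders, not trial data).

```python
from scipy.stats import ranksums

# Hypothetical per-session normalized stimulation energy for one participant
cdbs_energy = [1.00, 0.97, 1.02, 0.98, 1.01]
rl_energy = [0.61, 0.55, 0.70, 0.64, 0.58]

stat, p = ranksums(cdbs_energy, rl_energy)
if p < 0.05:
    print("reject H0: the two controllers differ on this metric")
```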
\begin{table}
\begin{tabular}{l l l l l} \hline \hline & cDBS & RL & Distilled RL & No-DBS \\ \hline Participant 1 & 84 & 97 & 97 & 36 \\ Participant 2 & 145 & 80 & 182 & 52 \\ Participant 3 & 135 & 115 & 115 & 39 \\ Participant 4 & 124 & 119 & 98 & 48 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Overall time, in minutes, spent toward testing each type of controller in clinical trials. Each testing session lasted 5-20 minutes, and no-DBS sessions were usually 5-min long to minimize the discomfort participants may experience.
Figure 6: Quality of control (QoC) results from all clinical trials across participants. Wilcoxon rank-sum tests [43] between cDBS and each of the other controllers are used to test the null hypothesis that two sets of measurements are drawn from the same distribution, resulting in the \(p\)-values reported above. The null hypothesis is rejected when consider the stimulation energy consumed by both RL controllers, illustrating that they lead to significant energy reduction compared to cDBS. For all other QoCs, the null hypothesis is accepted in majority cases, showing that both RL controllers can in general attain similar control efficacy to cDBS. The controllers that lead to the acceptance/rejection of the null hypothesis in the desired direction are highlighted with asterisks and bold \(p\)-values.
#### 6.1.1. Computational Complexity and Overall Energy Consumption
We also study the additional computation time and battery consumption of the DBS system due to the use of full-sized RL policies or their distilled version. A Surface Go with an Intel Pentium Gold 4415Y CPU and 4GB RAM was used as the research tablet in Fig. 4. The computation time was quantified as the time needed to run a single forward pass of the NN that represents the RL policy. We evaluate the forward passes for both types of RL policies 200 times; Table 2 summarizes the mean and standard deviation of the obtained computation times. As can be seen, the distilled RL policy can be evaluated significantly faster than its counterpart.
Moreover, we quantify the overall battery consumption of the entire DBS system as the time for which the tablet or the IPG battery drains from 100% to 10% (whichever comes first). We compare the battery runtime among the full RL and distilled RL, as well as a random controller that sets the IPG to stimulate with an arbitrary amplitude in each control cycle. Each experiment was repeated 3 times, resulting in the statistics in Table 3 showing that the two RL-based controllers do not drastically shorten the runtime of the DBS system; _i.e._, the energy used for RL-based control does not dominate the overall energy used by the DBS system.
### Evaluation of the OPE Methodology
For each participant, a DLSM was trained following the methodology introduced in Sec. 5, and then used as a synthetic environment to interact with 6 policies trained using the deep actor-critic method (Sec. 4) with different hyper-parameters, over the buffer \(\mathcal{E}^{\mu}\) specific to the patient; these policies can in general lead to varying performance. Then, for each policy, the mean of total returns (11) over all simulated trajectories can be calculated, and was used to estimate the policy's expected return from Problem 2. The constants in (10), balancing the scale of the QoCs (_i.e._, grasp frequency, rating and tremor severity) were set to \(C_{2}=C_{3}=C_{4}=10\) for patients 2-4 who can experience bradykinesia and pronounced tremor with insufficient DBS; in contrast, the symptoms of participant 1 are considered subtle, so we set \(C_{2}=C_{3}=C_{4}=25\) to better distinguish if sufficient DBS is provided; see Appendix E for details on patient characteristics as well as the dosage of PD medications.
DLSM's performance was compared against the classic IS (Sang et al., 2016), as well as a state-of-the-art IS-based OPE method, dual-DICE (Sang et al., 2016). Three metrics were considered to evaluate the performance of OPE, including mean absolute error (MAE), rank correlation, and regret@1, following from (Kelly et al., 2001). MAE evaluates the absolute error between the total return estimated by OPE, versus the _actual_ returns, _i.e._, mean total return recorded from clinical trials. Rank correlation quantifies the alignment between the rank of policies over OPE-estimated returns and the actual returns. Regret@1 quantifies the percentage loss, over the total actual returns, one would get by picking the policy with maximum OPE-estimated return, against the actual best-performing policy, showing if the OPE methods can identify the best-performing policy correctly. Their mathematical definitions can be found in Appendix G.
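The three OPE metrics can be computed directly from paired lists of OPE-estimated and actual mean returns (one entry per policy). The sketch below is one plausible reading of the definitions summarized above; in particular, the normalization used for regret@1 is an assumption, and the exact formulas are given in Appendix G of the paper.

```python
import numpy as np
from scipy.stats import spearmanr

def ope_metrics(estimated, actual):
    """MAE, rank correlation, and regret@1 for a set of evaluated policies."""
    estimated, actual = np.asarray(estimated), np.asarray(actual)
    mae = np.mean(np.abs(estimated - actual))
    rank_corr = spearmanr(estimated, actual).correlation
    # Loss incurred by deploying the OPE-chosen policy instead of the
    # actual best one, normalized by the total actual returns (assumed).
    regret_at_1 = (actual.max() - actual[estimated.argmax()]) / np.abs(actual).sum()
    return mae, rank_corr, regret_at_1
```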
The obtained results are summarized in Fig. 7. As shown, the DLSM in general achieved significantly higher rank correlation and lower regret, as well as non-inferior MAE, compared to DICE and IS.
## 7. Conclusion
In this paper, we introduced an offline RL and OPE framework to design and evaluate closed-loop DBS controllers using only historical data. Moreover, a policy distillation method was introduced to further reduce the computation requirements for evaluating RL policies. The control efficacy and energy efficiency of the RL controllers were validated with clinical testing over 4 patients. Results showed that RL-based controllers led to similar control efficacy as cDBS, but with significantly reduced stimulation energy. The computation times for the RL and distilled RL controllers were compared, showing that the distilled version executed significantly faster; future work will focus on further reducing execution times of the distilled RL controllers to match capabilities of implanted devices. Finally, the DLSM was trained to estimate the expected returns of RL policies, and it outperforms existing IS-based OPE methods in terms of rank correlations, regrets and MAEs.
|
2308.06892 | Thermal stability and secondary aggregation of self-limiting,
geometrically-frustrated assemblies: Chain assembly of incommensurate
polybricks | In geometrically frustrated assemblies, equilibrium self-limitation manifests
in the form of a minimum in the free energy per subunit at a finite,
multi-subunit size which results from the competition between the elastic costs
of frustration within an assembly and the surface energy at its boundaries.
Physical realizations -- from ill-fitting particle assemblies to self-twisting
protein superstructures -- are capable of multiple mechanisms of escaping the
cumulative costs of frustration, resulting in unlimited equilibrium assembly,
including elastic modes of ``shape-flattening'' and the formation of weak,
defective bonds that screen intra-assembly stresses. Here we study a model of
1D chain assembly of incommensurate ``polybricks'', and determine its
equilibrium assembly as a function of temperature, concentration, degree of
shape frustration, elasticity and inter-particle binding, notably focusing on
how weakly cohesive, defective bonds give rise to strongly
temperature-dependent assembly. Complex assembly behavior derives from the
competition between multiple distinct local minima in the free energy
landscape, including self-limiting chains, weakly-bound aggregates of
self-limiting chains, and strongly-bound, elastically defrustrated assemblies.
We show that this scenario, in general, gives rise to anomalous {\it multiple
aggregation} behavior, in which disperse subunits first exhibit a primary
aggregation transition to self-limiting chains (at intermediate concentration
and temperature) which are ultimately unstable to condensation into unlimited
assembly of finite-chains through weak binding at a secondary aggregation
transition (at low temperature and high concentration). We show that the window of
stable self-limitation is determined both by the elastic costs of frustration
in the assembly as well as energetic and entropic features of inter-subunit
binding. | Michael Wang, Gregory Grason | 2023-08-14T02:14:05Z | http://arxiv.org/abs/2308.06892v1 | Thermal stability and secondary aggregation of self-limiting, geometrically-frustrated assemblies: Chain assembly of incommensurate polybricks
###### Abstract
In geometrically frustrated assemblies, equilibrium self-limitation manifests in the form of a minimum in the free energy per subunit at a finite, multi-subunit size which results from the competition between the elastic costs of frustration within an assembly and the surface energy at its boundaries. Physical realizations - from ill-fitting particle assemblies to self-twisting protein superstructures - are capable of multiple mechanisms of escaping the cumulative costs of frustration, resulting in unlimited equilibrium assembly, including elastic modes of "shape-flattening" and the formation of weak, defective bonds that screen intra-assembly stresses. Here we study a model of 1D chain assembly of incommensurate "polybricks", and determine its equilibrium assembly as a function of temperature, concentration, degree of shape frustration, elasticity and inter-particle binding, notably focusing on how weakly cohesive, defective bonds give rise to strongly temperature-dependent assembly. Complex assembly behavior derives from the competition between multiple distinct local minima in the free energy landscape, including self-limiting chains, weakly-bound aggregates of self-limiting chains, and strongly-bound, elastically defrustrated assemblies. We show that this scenario, in general, gives rise to anomalous _multiple aggregation_ behavior, in which disperse subunits (stable at low concentration and high temperature) first exhibit a primary aggregation transition to self-limiting chains (at intermediate concentration and temperature) which are ultimately unstable to condensation into unlimited assembly of finite chains through weak binding beyond a secondary aggregation transition (at low temperature and high concentration). We show that the window of stable self-limitation is determined both by the elastic costs of frustration in the assembly and by energetic and entropic features of inter-subunit binding.
## I Introduction
Geometric frustration arises when the local preferred ordering of a system cannot be realized on a global scale. When this occurs, individual constituents of a system will often find complex ways of ordering on a large scale. This concept has been studied in many contexts such as the frustrated ordering of magnetic spins on various lattices [1; 2; 3; 4], nanoscale pattern formation on substrates [5; 6], bent-core liquid crystals [7; 8; 9; 10], and the ordering of colloids on curved surfaces [11; 12; 13; 14].
Recently, there has been great interest in understanding the influence of geometric frustration on the morphologies as well as the sizes of self-assembled structures of misfitting building blocks [15; 16; 17]. The concept of _geometrically frustrated assembly_ (GFA) has been applied to a range of existing systems in soft matter, from protein bundles [18; 19; 20; 21; 22; 23] and chiral membranes of liquid crystals and surfactants [24; 25; 26; 27; 28; 29] to more recent efforts to design and assemble intentionally misfitting particles [17; 30; 31; 32; 33; 34; 35; 36]. When such building blocks assemble through an attractive interaction, the resulting structure, as well as the blocks themselves and their interactions, must deform as the assembly grows. This competition between the attractions of the building blocks and the build-up of strain regulates the equilibrium sizes and morphologies of the resulting assemblies, a phenomenon known as self-limiting assembly [15; 37]. Basic questions regarding the fundamental and practical importance of GFAs include how the self-limiting size depends on the shape and interactions of given building blocks, and how large the self-limiting size can be relative to the subunit size.
Self-limitation in GFA relies on the propagation of intra-assembly gradients of local strain, whose effects lead to the accumulation of elastic costs that grow super-extensively with size [16]. Mechanisms that relax or obstruct the propagation of elastic effects to large sizes, and thus limit the maximal range of self-limiting sizes, can be broadly grouped into two categories [15]. The first, dubbed "shape flattening", occurs when the shape of misfitting particles is sufficiently soft that it is overwhelmed by the strong, inter-particle cohesive binding, deforming the particle assembly into uniform, unfrustrated motifs, and giving way to unlimited (i.e. bulk) assembly. Such "soft" mechanisms of frustration escape are controlled by elastic parameters that control both shape deformation and intra-particle strains. Alternatively, the second category of frustration escape mechanisms relies on inelastic defects in the assembly that, at least partially, mitigate the propagation of frustration-induced stress. For 2D ordered assemblies, these may take the form of topological defects, e.g. disclinations and dislocations, that screen frustration [12; 13; 14; 38; 39; 40; 41]. More generally, a defect may take the form of any partial bond that localizes frustration-induced stress, such as an internal crack or weak bond, but nevertheless retains some cohesion (see e.g. examples in assembly models of hyperbolic tubules in [33; 36]).

Figure 1: Attenuation of stress propagation through broken bonds and weak binding. An assembly larger than its preferred size \(N_{*}\) can lower its free energy per subunit and relieve strain by breaking into multiple smaller, weakly-bound assemblies.
In this article, we study a 1D model of GFA capable of exhibiting both modes of frustration escape, and in particular, aim to understand the generic effects of "weak", defective binding on the thermal stability of self-limiting assembly. Our model is inspired by the so-called DNA-origami-based "polybrick" particle, developed and studied by Berengut and coworkers [30], shown to exhibit self-limiting chain assembly. Figure 1 illustrates schematically the generic effects of weak binding in a 1D frustrated assembly on the free-energy-per-subunit landscape. While the accumulation of frustration stresses between _strongly-bound_ particles leads to a primary minimum at a well-defined size \(N_{*}\), it is straightforward to see that weak binding between two or more strongly-bound chains can lead to additional local minima in the free energy per subunit at integer multiples of the primary aggregate size. At zero temperature, any weak cohesive interaction guarantees that the ground state is an infinite chain of weakly-bound aggregates, i.e. an unlimited aggregate. At finite temperature, secondary assembly of primary aggregates is controlled by the free energy of weak binding, which may be favorable or unfavorable depending on both the energy gain and entropy cost of weak binding. In general, this illustrates that geometrically frustrated self-assembly is characterized by multiple local minima at distinct sizes which compete for stability. Based on the exact solution of the ideal 1D assembly behavior of the frustrated polybricks, we illustrate two basic and generic consequences of weak, defective binding in frustrated assembly. First, we show that self-limitation is only possible above a minimal finite temperature. Second, we show that the competition between the primary aggregation into strongly-bound assemblies and the weak binding of those assembled chains leads to an exotic type of _secondary aggregation_ behavior.

Figure 2: Example states of frustrated self-assembly. Varying temperature results in **a)** dispersed subunits, **b)** self-limiting assemblies, and **c)** secondary aggregation of self-limiting assemblies through the formation of broken bonds or defects. Varying frustration results in **d)** dispersed, highly-frustrated subunits, **e)** self-limiting assemblies, and **f)** shape-flattened, unlimited assemblies where frustration is not enough to overcome binding.
Canonical assembly, like the spherical micellization of surfactants, is marked by a single pseudo-critical aggregation transition between a dispersed state at low concentration and a micelle-dominated state at high concentration [42; 43]. Cases where there are multiple, nearly-degenerate local minima in the free energy per subunit exhibit more complex scenarios [37], in which a primary aggregation transition from a dispersed state into a finite-aggregate state is followed by a _secondary aggregation transition_ from primary aggregates to a secondary state of aggregates (of typically higher aggregate mass) at still higher concentration or lower temperature. For surfactants, such secondary micellization transitions have been studied in the context of concentration-dependent transitions from spherical to cylindrical micelles, where the nature of the transition is controlled by the microscopic energetics of molecular packing in sphero-cylindrical aggregates [44; 45; 46; 47].
Here, we show that this secondary aggregation behavior is a generic feature of GFAs, due to the unavoidable possibility of weak, partial binding between attractive subunits. In what follows, we introduce an exactly solvable model of frustrated polybrick assembly and predict the assembly behavior as a function of _frustration_, _concentration_, and _temperature_, showing in general that self-limitation is possible at intermediate regimes of these basic parameters, as shown schematically in Fig. 2. As temperature is decreased (or concentration is increased), we predict a sequence of states from dispersed monomers to self-limiting to defective and unlimited (i.e. weakly-bound chains of chains) assembly. As frustration is decreased (below a critical temperature), we predict a sequence of states from dispersed monomers to self-limiting to strongly-bound, unlimited chains. We relate these stability windows to microscopic parameters that control intra-assembly elasticity, as well as the entropy and energy of binding. The manuscript is structured as follows. In Section II, we introduce our model of an incommensurate chain of "polybricks" and the finite-temperature corrections to the free energy per subunit. In Section III, we describe the ideal aggregation of linear chains and establish criteria for determining whether a system is dispersed, self-limiting, or defective. In Section IV, we describe the distinct free-energy-per-subunit landscapes possible in our model and their corresponding concentration-dependent aggregation, and summarize the equilibrium assembly behavior of frustrated polybricks in the temperature-frustration plane.
## II Frustrated, incommensurate chain model
To gain insight into the role of temperature in the self-limiting assembly of GFAs, we start by introducing a simple, solvable model of a linear chain of incommensurate subunits, inspired by recent experiments on incommensurate DNA-origami "polybrick" particles [30]. We begin by defining the (zero-temperature) energetics of strongly-bound chain assembly in the model.
### Energetics of strongly-bound, frustrated chains
The subunits in this system consist of three blocks (i.e. rectangular prisms of a honeycomb DNA lattice): a short block of length \((1-f)a\) sandwiched between two longer blocks of length \(a\), as shown in Figure 3a. Here the quantity \(f\) characterizes the amount of shape mismatch, namely the _frustration_, in a linear assembly of such subunits with the central blocks preferring a lattice spacing of \((1-f)a\) that is incommensurate with the preferred lattice spacing \(a\) of the outer blocks. Within a subunit, our model considers the central block to be elastically attached to the two outer blocks with a spring of stiffness \(k_{u}\) favoring the middle block to be centered between the outer two longer blocks and penalizing lateral (i.e. sliding) displacements \(u\) relative to the centered position. Hence, \(k_{u}\) parameterizes the intra-subunit deformability. To keep the analysis simple, we assume that the two outer blocks move together and act as a backbone along which the central block can slide, although it is possible for a more complex array of intra-particle deformation modes to renormalize the effective parameters described here. Finally, when two subunits are strongly bonded, the central blocks elastically interact through a spring of stiffness \(k_{f}\). That is, binding favors a specific face-to-face contact, but the incommensurate geometry of blocks in general requires some strain displacement from that contact. In the case of DNA origami polybricks, the inter-subunit stiffness can derive from the deformation of bound single-stranded DNA bases that mediate attractions between the particle faces in combination with the compression of the repulsive brushes that coat flanking blocks. More generally, the stiffness of specific interactions between frustrated particles can be
manipulated via surface functionalization with complementary single-stranded DNA sequences [50; 51].
The energy of a linear chain of \(N\)_strongly-bound_ subunits is
\[\begin{split} E_{N}[\{u_{n}\}]=&-(N-1)\Delta_{0}+\sum _{n=1}^{N}\frac{1}{2}k_{u}u_{n}^{2}\\ &+\sum_{n=1}^{N-1}\frac{1}{2}k_{f}\left(u_{n+1}-u_{n}+fa\right)^ {2},\end{split} \tag{1}\]
where \(u_{n}\) is the position of the central block relative to the outer blocks of the \(n\)th subunit. The first term describes the effect of cohesive gain of strong bonds in the system, where \(\Delta_{0}\) is the binding energy of the ideal, unstrained bond. The second and third terms correspond to the intra-subunit deformation energy, and the intersubunit bond-stretching energy, respectively. This energy can be minimized with respect to the displacements \(\{u_{n}\}\), the details of which we leave for Appendix A. The per-subunit ground state energy is
\[\mathcal{E}_{N}=\frac{E_{N}}{N}=-\left(1-\frac{1}{N}\right)\Delta_{0}+ \mathcal{E}_{N}^{\text{(ex)}}, \tag{2}\]
where \(\mathcal{E}_{N}^{\text{(ex)}}\) is the _excess energy_ per subunit built up from the accumulation of strain due to the incommensurate lengths of the blocks. The per-subunit ground state energy is shown in Figure 3b. The excess energy density, indicated by the orange curve, has a useful continuum approximation given by
\[\mathcal{E}_{N}^{\text{(ex)}}\approx\frac{1}{2}k_{f}f^{2}a^{2}\left(1-\frac{ \tanh N/\sqrt{4k_{f}/k_{u}}}{N/\sqrt{4k_{f}/k_{u}}}\right). \tag{3}\]
The quantity \(\sqrt{4k_{f}/k_{u}}\) can be interpreted as the size scale over which strains can accumulate. It is also the ratio of the intra-subunit deformability to the inter-subunit deformability. For small assemblies, \(N\ll\sqrt{4k_{f}/k_{u}}\), the per-subunit excess energy grows superextensively as \(\mathcal{E}_{N}^{\text{(ex)}}\sim k_{u}f^{2}a^{2}N^{2}\), as the assembly strains more and more to bind new subunits; i.e., in this regime the central brick must displace more and more, \(u_{n}\sim faN\), to account for the accumulating length mismatch between tightly bound units. At large assembly sizes, \(N\gg N_{\text{flat}}=\sqrt{4k_{f}/k_{u}}\), the subunits can only internally deform so much before it is favorable to strain the bonds between them instead; i.e., they begin to adopt uniformly strained interactions (with the exception of a boundary layer) and the energy density plateaus to \(\mathcal{E}_{\infty}^{\text{(ex)}}=k_{f}f^{2}a^{2}/2\). As shown in the example assemblies in Figure 3b, a majority of the strain gradient is expelled to the boundaries while the bulk subunits remain relatively uniformly strained. This is known as _shape flattening_.
Figure 3: **a)** Incommensurate chain model (left) inspired by experiments [30] (right). **b)** Energy per subunit of linear chains of length \(N\) for increasing cohesion (binding energy \(\Delta_{0}\)) with ratio of elasticities \(k_{f}/k_{u}=100\) (\(k_{f}=100\), \(k_{u}=1\)) and frustration \(fa=0.05\). Red stars indicate the local energy minima and the corresponding structures and their strains are shown. **c)** Self-limiting size as a function of the cohesion between subunits.
The frustration-induced strain accumulation, whose per-particle cost grows as \(\mathcal{E}_{\text{strain}}(N)\approx k_{u}(afN)^{2}\) for small sizes, competes with the binding of new subunits, as the binding energy density, indicated by the blue curve, decreases as \(\sim\Delta_{0}/N\) and favors larger assemblies. This competition leads to an energetically favorable, self-limiting size \(N_{\star}(T=0)\sim\left(\Delta_{0}/k_{u}f^{2}a^{2}\right)^{1/3}\), as shown in Figure 3c. As one increases the strength of binding or decreases the frustration, the self-limiting size increases. However, above a certain threshold binding strength (or below a threshold frustration), assemblies enter the shape-flattening regime, i.e. \(N_{\star}\gtrsim N_{\text{flat}}\), where the cumulative cost of sliding strain exceeds the cost to deform the inter-particle bonds uniformly. As binding becomes more cohesive, or particle shapes become less frustrated, the energy per subunit no longer has a minimum at finite \(N\), and the ground-state structure becomes unlimited in size. This condition can be estimated by equating the accumulating intra-particle shear energy to the cost of uniform bond strain. From this condition, we see that shape flattening restricts equilibrium self-limitation to an _upper limit_ on the range of finite chain sizes, which occurs at a corresponding _minimal value_ of frustration (see Appendix A.2)
\[N_{\star}\lesssim N_{\text{flat}}\approx\sqrt{k_{f}/k_{u}};\,f\gtrsim f_{ \text{flat}}(T=0)\approx\Delta_{0}^{1/2}k_{f}^{-3/4}k_{u}^{1/4}, \tag{4}\]
where here we consider the case of fixed cohesive energy per strong bond (\(\Delta_{0}\)). In general, these conditions suggest that increasing the value of the _interaction stiffness_ \(k_{f}\) relative to the intra-particle stiffness \(k_{u}\) favors a larger range of self-limiting sizes.
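As a concrete illustration of this competition, the following minimal numerical sketch evaluates the per-subunit energy, Eq. (2), with the continuum approximation, Eq. (3), and locates the self-limiting minimum. The parameter values are illustrative choices consistent with Figure 3 (\(k_{f}/k_{u}=100\), \(fa=0.05\), with \(a=1\)); the value of \(\Delta_{0}\) is an assumption chosen so that a finite minimum exists.

```python
import numpy as np

# Illustrative parameters (matching Fig. 3b: k_f/k_u = 100, fa = 0.05)
k_u, k_f, fa, Delta0 = 1.0, 100.0, 0.05, 0.25

def excess_energy(N):
    """Continuum excess energy per subunit, Eq. (3)."""
    x = N / np.sqrt(4 * k_f / k_u)
    return 0.5 * k_f * fa**2 * (1 - np.tanh(x) / x)

def energy_per_subunit(N):
    """Ground-state energy per subunit, Eq. (2)."""
    return -(1 - 1 / N) * Delta0 + excess_energy(N)

N = np.arange(1, 400, dtype=float)
E = energy_per_subunit(N)
N_star = N[np.argmin(E)]            # self-limiting size: minimum of E_N/N
N_flat = np.sqrt(4 * k_f / k_u)     # shape-flattening size scale
print(f"N* = {N_star:.0f}, scaling estimate (Delta0/k_u f^2 a^2)^(1/3) = "
      f"{(Delta0 / (k_u * fa**2))**(1/3):.1f}, N_flat = {N_flat:.1f}")
```

For these parameters the minimum sits near \(N_{\star}\approx 10\), comfortably below \(N_{\text{flat}}=20\), i.e. within the self-limiting (rather than shape-flattened) regime.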
### Vibrational and orientational contributions to free energy
At finite temperature, the vibrational and orientational degrees of freedom that describe conformational fluctuations of aggregates lead to associated entropic contributions to the free energy of assembly. A free subunit can access all possible orientations. However, when strongly bound to another subunit, its possible orientations relative to its bound neighbor are reduced. We model this by restricting a strongly-bound unit to a cone with angle \(\theta_{\Delta}\) (Figure 4a). This leads to a reduction in the orientational entropy, which can be computed from the partition function
\[Z_{\text{orient}}=\left(\int_{0}^{2\pi}\int_{0}^{\theta_{\Delta}}\frac{d\Omega }{4\pi}\right)^{N-1}. \tag{5}\]
The vibrational contribution can be computed by considering collective sliding modes in a chain of \(N\) strongly-bound subunits (i.e. phonons), \(\{\delta u_{n}\}\), about the ground-state positions of the central blocks. The partition function is
\[Z_{\text{vib}}=\int\left(\prod_{n=1}^{N}d\delta u_{n}\right)e^{-\beta E_{N}[ \{u_{n}^{(\text{ex})}+\delta u_{n}\}]}. \tag{6}\]
We leave the details in Appendix A.3. The free energy per subunit of the linear chain can be computed as
\[\mathcal{F}_{N}=-\frac{k_{B}T}{N}\ln Z_{\text{orient}}Z_{\text{vib}}=\mathcal{ E}_{N}+\delta\mathcal{F}_{N}^{\text{(orient)}}+\delta\mathcal{F}_{N}^{\text{( vib)}}, \tag{7}\]
where \(\mathcal{E}_{N}\) is the per-subunit ground state energy (Eq. (2)) and \(\delta\mathcal{F}_{N}^{\text{(orient)}}\) and \(\delta\mathcal{F}_{N}^{\text{(vib)}}\) are the orientational and vibrational contributions to the free energy given by
\[\delta\mathcal{F}_{N}^{\text{(orient)}} =\left(1-\frac{1}{N}\right)k_{B}T\ln\frac{2}{1-\cos\theta_{\Delta }}, \tag{8a}\] \[\delta\mathcal{F}_{N}^{\text{(vib)}} \approx\left(1-\frac{1}{N}\right)k_{B}T\ln\left[\frac{1}{2}\left( 1+\sqrt{1+\frac{4k_{f}}{k_{u}}}\right)\right]. \tag{8b}\]
Observe that the forms of these contributions allow us to rewrite the total per-subunit free energy as
\[\mathcal{F}_{N}=-\left(1-\frac{1}{N}\right)\Delta_{T}+\mathcal{E}_{N}^{\text{ (ex)}}, \tag{9}\]
where \(\Delta_{T}\), the temperature-corrected binding _free energy_, takes the form
\[\Delta_{T}=\Delta_{0}-Ts_{\Delta}. \tag{10}\]
Notably, the quantity \(s_{\Delta}>0\) captures the effect of temperature on binding. In essence, the vibrational and orientational entropy costs of binding effectively weaken the energetic gain of assembling. Note that \(T_{\Delta}=\Delta_{0}/s_{\Delta}\) is the temperature above which (unfrustrated) bonds are melted and the energetically favorable state is a single subunit for \(f=0\). To get a sense for the value of \(s_{\Delta}\), suppose that \(\theta_{\Delta}\sim 1^{\circ}-10^{\circ}\) and \(k_{f}/k_{u}\sim 1-100\), which gives \(s_{\Delta}\sim(5-10)k_{B}\). Note that if the full binding energy is of order \(10k_{B}T\), then the entropic cost of binding, \(Ts_{\Delta}\sim(5-10)k_{B}T\), is comparable, and vibrational/orientational fluctuations alone can bring the bond close to melting.
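This estimate is easy to check directly from Eqs. (8a)-(8b); the sketch below evaluates the per-bond entropy cost over the quoted ranges of \(\theta_{\Delta}\) and \(k_{f}/k_{u}\) (the specific grid of values is an illustrative assumption), roughly reproducing the \((5-10)k_{B}\) estimate.

```python
import numpy as np

def s_binding(theta_deg, kf_over_ku):
    """Entropy cost per strong bond, in units of k_B, from Eqs. (8a)-(8b)."""
    theta = np.radians(theta_deg)
    s_orient = np.log(2.0 / (1.0 - np.cos(theta)))                 # Eq. (8a)
    s_vib = np.log(0.5 * (1.0 + np.sqrt(1.0 + 4.0 * kf_over_ku)))  # Eq. (8b)
    return s_orient + s_vib

for theta, r in [(1, 1), (1, 100), (10, 1), (10, 100)]:
    print(f"theta = {theta:2d} deg, k_f/k_u = {r:3d}: "
          f"s_Delta = {s_binding(theta, r):.1f} k_B")
```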
### Free energy landscape of weak binding
As the size of a frustrated assembly grows, it will often find ways of escaping and relieving frustration through the formation of defects that attenuate or stop the propagation of stresses [15]. In our case, we represent this attenuation of stress propagation as broken bonds between the central blocks of two neighboring subunits (Figure 4b), which we call weak, or "defective", bonds. In such a state, we assume that there can still be some weaker, residual cohesion between subunits that has a characteristic binding energy \(-\delta_{0}\geq-\Delta_{0}\). In a manner similar to the case of fully bound subunits (see Appendix
A.3), the entropic contributions to the weak-binding results in a temperature-corrected weak-binding _free energy_
\[\delta_{T}=\delta_{0}-Ts_{\delta}. \tag{11}\]
As with strong bonds, finite temperature has the effect of making weak binding, or defects, less favorable due to the entropic costs of the reduced rotational freedom of binding together two otherwise freely-rotating chains. Hence, above a temperature \(T_{\delta}=\delta_{0}/s_{\delta}\), they become entropically unfavorable, since \(-\delta_{T}>0\). As we illustrate below, the entropic cost of defect formation plays a critical role in the thermodynamic stability of self-limiting states.
To construct the free energy landscape of the incommensurate chain model including both strong (stress-propagating) and weak (defective, stress-attenuating) states of subunit binding, we start with the free energy \(\mathcal{F}_{N}\) of a linear chain of \(N\) strongly-bound subunits. As the size of the chain grows from a single subunit, the free energy initially decreases due to binding until the chain reaches the self-limiting size (a local minimum at the primary aggregate size \(N=N_{*}\)), after which it increases due to stress accumulation. Eventually, when enough stress builds up, it becomes energetically favorable for the linear chain to break into two weakly-bound pieces, whose per-subunit free energy can be approximated as \(\mathcal{F}_{N/2}-\delta_{T}/N\), where the first term is the per-subunit free energy of each half and the second term is the weak-binding free energy distributed over the \(N\) subunits. This reasoning can be continued for a linear chain broken up into \(M\leq N\) weakly-bound pieces, with an approximate per-subunit free energy \(\mathcal{F}_{N/M}-(M-1)\delta_{T}/N\). Finally, for each size \(N\), there will be some optimal number \(M\geq 1\) of strongly-bound subchains, held together by \(M-1\) weak bonds, that is the thermodynamically favorable hierarchical chain to form. Thus, the free energy can be taken to be
\[\mathcal{F}_{N}=\min_{1\leq M\leq N}\left\{\mathcal{F}_{N/M}-\frac{(M-1) \delta_{T}}{N}\right\}. \tag{12}\]
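A minimal sketch of Eq. (12), using the continuum form of \(\mathcal{F}_{N}\) from Eqs. (3) and (9); the parameter values are illustrative, and treating \(N/M\) as a continuous size is the same approximation used in the text.

```python
import numpy as np

def F_strong(N, Delta_T, k_u=1.0, k_f=100.0, fa=0.05):
    """Free energy per subunit of a strongly-bound chain, Eqs. (3) and (9)."""
    x = N / np.sqrt(4 * k_f / k_u)
    return -(1 - 1 / N) * Delta_T + 0.5 * k_f * fa**2 * (1 - np.tanh(x) / x)

def F_chain(N, Delta_T, delta_T):
    """Eq. (12): optimal break-up of an N-chain into M weakly-bound pieces."""
    M = np.arange(1, N + 1)
    return np.min(F_strong(N / M, Delta_T) - (M - 1) * delta_T / N)

# Favorable (-delta_T < 0) vs unfavorable (-delta_T > 0) weak bonds
for delta_T in (+0.02, -0.02):
    F = [F_chain(N, Delta_T=0.25, delta_T=delta_T) for N in range(1, 201)]
    print(f"delta_T = {delta_T:+.2f}: global minimum at N = {np.argmin(F) + 1}")
```

Running this toy landscape shows the two scenarios described below: for favorable weak bonds the per-subunit free energy keeps drifting down toward the largest sizes (an unlimited, defective chain of primary aggregates), while for unfavorable weak bonds the global minimum stays pinned at the primary size \(N_{*}\).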
The key result of the hierarchy of strong and weak bonds is shown in Figure 4b (right), illustrated for a series of increasing values of the weak-bond free energy. In general, if \(N_{\star}\) is the self-limiting size, then the subsequent local minima (separated by barriers) of the free energy lie roughly at \(2N_{\star},3N_{\star},\dots\), corresponding to multiple self-limiting assemblies weakly aggregating together. This serves as the starting point for a useful approximation we discuss in Appendix B. The value of \(-\delta_{T}\) regulates which of these local minima is the global one. When \(-\delta_{T}>0\), the entropy cost of defects leads to a global ground state at the primary minimum \(N=N_{*}\), a single strongly-bound self-limiting aggregate, defined by the balance between frustration and cohesion. When \(-\delta_{T}<0\), weak binding of strong aggregates is favored, and the global ground state transitions to infinite, defective chains of primary aggregates. In this context, it is clear that as \(T\to 0\) _any_ weak cohesion in defective bonds leads to an unlimited chain of primary aggregates as the ground state.

Figure 4: **a)** Bound subunits are more restricted in their vibrational and orientational modes, leading to a reduction of vibrational and orientational entropies. **b)** Strongly-bound subunits have a binding energy \(-\Delta_{0}\) while weakly-bound subunits, which result from broken bonds or any residual binding, have a binding energy \(-\delta_{0}\). The free energy landscape contains infinitely many local minima corresponding to the primary self-limiting assembly and defective aggregates of weakly-bound self-limiting assemblies. Changing the temperature-dependent weak-binding free energy \(\delta_{T}\) (see Section II.3) changes whether the defective aggregates are favorable \(-\delta_{T}<0\) or unfavorable \(-\delta_{T}>0\).
## III Ideal aggregation and self-limiting vs. unlimited assembly criteria
Beyond the energetics and entropy of internal configurations of assembled chains, equilibrium assembly also depends on the translational entropy associated with distributing subunits among different populations of aggregates at fixed temperature and concentration. We do this by applying ideal aggregation theory to a system with a fixed concentration of subunits [37; 43], which assumes that concentrations are low enough that interactions between different aggregates have a negligible effect on their free energy. Let \(\Phi_{N}\) be the volume fraction of subunits in assemblies of size \(N\). The total free energy of the system is
\[\mathcal{F}_{\text{tot}}=\sum_{N=1}^{\infty}\Phi_{N}\left(\mathcal{F}_{N}+ \frac{k_{B}T}{N}\ln\frac{\Phi_{N}}{Ne}\right), \tag{13}\]
where the logarithmic term represents the translational entropy of \(N\) subunits sharing a single center-of-mass degree of freedom. Minimizing the total free energy with respect to the volume fractions \(\Phi_{N}\) yields the _law of mass action_, \(\Phi_{N}=N\left(\Phi_{1}e^{-\beta\mathcal{F}_{N}}\right)^{N}\). The dispersed-subunit volume fraction \(\Phi_{1}\) can be determined from mass conservation; that is, the total volume fraction of subunits must remain fixed, or
\[\Phi=\sum_{N=1}^{\infty}N\left(\Phi_{1}e^{-\beta\mathcal{F}_{N}}\right)^{N}. \tag{14}\]
This relation is an equation of state that relates the mass of subunits in distinct aggregate populations to the total mass of subunits, requiring that all aggregates are in _chemical equilibrium_. Here, the concentration of free monomers parameterizes the chemical potential of subunits in the system, \(k_{B}T\ln\Phi_{1}\)[43].
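Since the total volume fraction in Eq. (14) is monotone in \(\Phi_{1}\), the equation of state is straightforward to invert numerically. The sketch below does this by bisection and then computes \(\langle N\rangle\) and the relative fluctuations used throughout, assuming the mass-weighted averaging convention (cf. Appendix C); the free energy, truncation size \(N_{\rm max}\), and parameter values are illustrative.

```python
import numpy as np

def mass_distribution(Phi, F, beta, N_max=2000):
    """Invert Eq. (14) for Phi_1 by bisection on log(Phi_1); return Phi_N."""
    N = np.arange(1.0, N_max + 1)
    logB = -beta * N * F(N)                    # log of e^{-beta N F_N}
    def total(logPhi1):
        with np.errstate(over="ignore"):       # overflow -> inf is fine for bisection
            return np.sum(np.exp(np.log(N) + N * logPhi1 + logB))
    lo, hi = -60.0, 0.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if total(mid) < Phi else (lo, mid)
    return N, np.exp(np.log(N) + N * lo + logB)  # law of mass action

def moments(N, PhiN):
    """Mass-weighted <N> and relative fluctuations <dN^2>/<N^2> (cf. Eq. (15))."""
    p = PhiN / PhiN.sum()
    m1, m2 = np.sum(N * p), np.sum(N * N * p)
    return m1, (m2 - m1**2) / m2

# Strong-bond-only landscape with N* ~ 10; <N> plateaus above the CAC
F = lambda N: -(1 - 1 / N) * 0.2 + 0.125 * (1 - np.tanh(N / 20) / (N / 20))
for Phi in (1e-4, 1e-3, 1e-2, 1e-1):
    m, fl = moments(*mass_distribution(Phi, F, beta=20.0))
    print(f"Phi = {Phi:.0e}:  <N> = {m:6.2f},  <dN^2>/<N^2> = {fl:.2f}")
```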
To gain intuition into the role of the weak binding free energy \(\delta_{T}\), we consider a simple example shown in Figure 5 where we vary only the weak binding free energy while keeping the self-limiting size, excess elastic energy of frustration and temperature fixed. When the weak-binding free energy \(\delta_{T}\) is comparable to the full binding free energy \(\Delta_{T}\), the energy landscape loses the self-limiting minimum (Figure 5a, cases (i) and (ii)) as it is thermodynamically favorable to break up strongly-bound chains into smaller pieces, which do not propagate frustration induced stress. As one lowers the weak binding free energy, the self-limiting minimum as well as the local minima for weakly-bound aggregates reappear (cases (iii-v)), leading to corrugations in \(\mathcal{F}(N)\) associated with states of integer numbers of primary aggregate chains held together by weak, defective bonds. In particular, depending on the entropic costs of weak binding, it is possible for the weak binding free energy to change signs, leading to weakly bound aggregates whose free energies are higher than that of the self-limiting state (case (v)).
The temperature-dependent sign change in the weak-binding free energy plays a crucial role in the stabilization of self-limiting assemblies, which can be seen by examining the subunit mass distributions for two examples, shown in Figure 5b. At low concentrations, most of the subunits remain fairly dispersed as free monomers, due to the translational entropy costs of assembly outweighing the energetic gain of assembly. As the concentration of subunits is increased above the critical aggregation concentration scale, the mass shifts to larger-sized assemblies. In both cases, where defects are stable or unstable, the primary peak in \(\Phi_{N}\) shifts from monomers (\(N=1\)) to \(N\simeq N_{*}\approx 9\) continuously, which is a consequence of the lack of a nucleation barrier for 1D assemblies [52; 53][2]. When \(-\delta_{T}>0\), as in case (v), and weak binding is entropically unfavorable, the mass distribution stabilizes around the primary frustration-limited aggregate size \(N=N_{*}\) at high concentrations (Figure 5b, right), which is a defining feature of _self-limiting assembly_. In contrast, when \(-\delta_{T}<0\) and weak binding is comparably favorable to strong binding (i.e. \(\delta_{T}\lesssim\Delta_{T}\)) (e.g. case (ii)), defects are stable at high concentrations, leading to the mass distribution shifting to higher-\(N\) populations corresponding to chains of primary \(N_{*}\)-mer aggregates held together by weak, defective bonds (Figure 5b, left). Hence, while this scenario corresponds to assemblies that are composed of _locally well-defined_ sizes (i.e. frustration-limited chains of size \(N_{*}\)), the overall mass of aggregates is not well-defined and essentially exhibits the concentration-dependent, exponential distribution characteristic of one-dimensional equilibrium chain assembly.
It is useful to establish some criteria for determining whether one has dispersed subunits, self-limiting assemblies, or unlimited and defective aggregates. We show that this can be done by considering the mean aggregate size \(\langle N\rangle\) and, in particular, the relative size fluctuations of the form \(\frac{\langle\delta N^{2}\rangle}{\langle N^{2}\rangle}\) (Figures 5c,d). When weak binding is favorable, as in case (i), the average size of assemblies \(\langle N\rangle\) grows quite rapidly, as expected. However, as one lowers the weak-binding free energy, a plateau in the average size begins to form. We associate this slowdown in the growth of the average assembly size with self-limitation. Note that while case (ii) does not strictly have local minima, the non-convexity of the free energy near the self-limiting size is enough to stabilize a local peak in \(\Phi_{N}\) near \(N_{*}\). The range of concentrations over which the plateau in \(\langle N\rangle\) persists increases as one reduces the weak-binding free energy. In particular for case (v), when weakly-bound structures are totally unfavorable, \(\langle N\rangle\) remains stable up to a maximum concentration \(\Phi_{\rm max}\). For our purposes, we choose an upper limit to concentration \(\Phi_{\rm max}=1\), since ideal aggregation theory clearly breaks down in this regime, and it is not possible to have volume fractions that exceed (or even approach) unity.
Noting that self-limitation corresponds to a mean aggregate size that is _independent of concentration_, we can further quantify the concentration range over which self-limiting assembly occurs by computing the relative size fluctuations \(\langle\delta N^{2}\rangle/\langle N^{2}\rangle\). A useful relation is (see Appendix C)
\[\frac{\langle\delta N^{2}\rangle}{\langle N^{2}\rangle}=\frac{d(\log\langle N \rangle)}{d(\log\Phi)}, \tag{15}\]
that is, the relative size fluctuation is precisely the susceptibility of the average size to concentration changes. Accordingly, as shown in Figure 5d, there is a decrease in the fluctuations when the average size begins to plateau. Based on this correspondence, we define the onset of self-limiting assembly as the concentration at which there is a local maximum in the fluctuations (black dotted line). This estimates the concentration beyond which dispersed monomers are no longer the dominant structure in the system and larger structures begin to form. The growth of larger structures, particularly self-limiting ones, continues until the local minimum in the fluctuations [27]. This decrease in the fluctuations can be seen from Figure 5b, where \(\langle\delta N^{2}\rangle\) of the mass distribution stabilizes while \(\langle N^{2}\rangle\) increases. We use this local minimum to define the onset of the formation of defective assemblies and a rise in size fluctuations. As Figure 5d shows, making weak binding unfavorable extends the range of self-limiting assembly. In particular for case (v), there is no local minimum up to the maximum concentration, which indicates the suppression of larger defective structures. It is useful to note that for \(-\delta_{T}<0\) and sufficiently low temperatures or high concentrations, the energy landscape is nearly flat for large assembly sizes (\(N\gg N_{*}\)) and the distribution will have an exponential tail. In those limits, the size fluctuations approach \(\langle\delta N^{2}\rangle/\langle N^{2}\rangle\to 1/2\), which is characteristic of equilibrium 1D chain assembly and therefore corresponds to the uncontrolled growth of defective aggregates. Hence, these cases ((i)-(iv) in Fig. 5) exhibit _secondary aggregation behavior_ according to the size-fluctuation criteria, with a primary aggregation transition from free monomers to self-limiting \(N_{*}\)-mers at a lower critical concentration, followed by a secondary aggregation transition to the unlimited (defective) state of assembly at an upper critical concentration. Notably, this self-limiting state at intermediate concentration occurs when the primary aggregate (\(N=N_{*}\)) is not the global minimum of the free energy per subunit. In general, the dominance of primary aggregates at intermediate concentration derives from the generically higher translational entropy per subunit of smaller-\(N\) structures, which can compensate for the otherwise higher free energy of assembling those states relative to the larger-\(N\) ground state. Notably, the width of the intermediate concentration window of self-limiting assembly grows as the gap in energy between the \(N_{*}\)-mer and the unlimited, defective chain is reduced (i.e. as \(\delta_{T}\to 0_{+}\)).

Figure 5: Role of varying the weak-binding free energy. **a)** Example free energies for favorable (i, ii, iii), neutral (iv), and unfavorable (v) weak-binding for \(k_{f}/k_{u}=100\) and \(k_{B}T/\Delta_{0}=0.05\). The self-limiting size is fixed at \(N_{*}\sim 9\). **b)** Mass distributions of subunits for favorable weak-binding (left), which leads to unlimited growth and multiple aggregation of self-limiting assemblies, and unfavorable weak-binding, which leads to the stabilization of self-limiting assemblies. Insets show mass distributions and sizes on log scales. **c)** Average assembly size. **d)** Relative size fluctuations. The black dotted line corresponds to the local maximum or the onset of self-limiting assembly while the colored dotted lines correspond to local minima or the onset of weakly-bound aggregation.
With these criteria for the onset of self-limiting assembly and the formation of defective assemblies defined, we turn to examining the role of temperature and frustration in self-limiting assembly in the next section.
## IV Role of temperature, concentration, and frustration
In this section, we analyze the assembly behavior of the incommensurate polybrick model as a function of three key control parameters - concentration \(\Phi\), temperature \(T\), and frustration \(f\) - focusing on the regimes that control the stability of the self-limiting state relative to the disperse state and states of (defective and defect-free) unlimited assembly.
### Temperature vs. concentration phase diagrams
We start by fixing the physical parameters of the subunits - the frustration \(f\) and the elastic and interaction parameters - and consider how temperature \(T\) and subunit concentration \(\Phi\) influence the resulting assemblies. In particular, we illustrate the case where frustration at \(T=0\) selects a finite primary aggregate size \(N_{*}\approx 9\). As discussed in Sections II.2 and II.3, there are two important temperatures: the temperature \(T_{\delta}=\delta_{0}/s_{\delta}\) at which weakly-bound subunits melt and the temperature \(T_{\Delta}=\Delta_{0}/s_{\Delta}\) at which strongly-bound subunits melt. Thus, there are two limits of interest: \(T_{\delta}<T_{\Delta}\) and \(T_{\delta}>T_{\Delta}\). To study these limits, we start by considering the case where the binding energy of defects is "slightly sticky", \(\delta_{0}/\Delta_{0}=0.3\). We compare two cases of "conformational stiffness" of the weak bonds (i.e. how much binding restricts the vibrational and orientational modes and reduces the entropy): relatively stiff defects, \(s_{\delta}/s_{\Delta}=1.0\), where partial disruption of strong bonds does not significantly change the vibrational/orientational behavior of subunits; and relatively floppy defects, \(s_{\delta}/s_{\Delta}=0.1\), where weak bonds exhibit significantly enhanced conformational fluctuations over strong bonds. These correspond to temperature ratios \(T_{\delta}/T_{\Delta}=0.3\) and \(T_{\delta}/T_{\Delta}=3.0\), respectively.
In these cases, we consider the necessary conditions for self-limiting assembly, which in particular require finite-temperature fluctuations to destabilize otherwise energetically favored weak binding. In addition to the nominal free energy of weak binding, \(-\delta_{T}=-\delta_{0}+Ts_{\delta}\), the thermodynamic stability of defective bonds will also depend on the translational entropy gain of breaking weak bonds, which we denote as \(s_{\text{trans}}(\Phi)\). We expect a condition for the thermal stability of self-limiting assembly, \(T>T_{\text{min}}(\Phi)\), defined by \(-\delta_{0}+T_{\text{min}}\big{[}s_{\delta}+s_{\text{trans}}(\Phi)\big{]}=0\), or
\[T_{\text{min}}(\Phi)=\frac{T_{\delta}}{1+s_{\text{trans}}(\Phi)/s_{\delta}}. \tag{16}\]
A more careful treatment that considers the relative mass in (primary) self-limiting chains versus unlimited, defective assemblies (Appendix B) gives this same result with an approximate expression for the translational entropy gain of weak-bond breaking, \(s_{\text{trans}}(\Phi)=k_{B}\ln\frac{(2-\sqrt{2})N_{*}}{\Phi}\). This result shows that thermal stability of self-limiting assembly occurs above a critical temperature that is shifted below the nominal melting temperature of defective bonds (\(T_{\delta}\)) by an amount that grows with dilution (through \(s_{\text{trans}}(\Phi)\)) relative to the conformational entropy cost \(s_{\delta}\) of weak bonds. We illustrate this effect on the thermal stability criterion for the regime of self-limitation below.
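This stability floor is simple enough to evaluate in one line; the sketch below combines Eq. (16) with the Appendix B expression for \(s_{\text{trans}}(\Phi)\). All inputs are illustrative and in units where \(k_{B}=1\).

```python
import numpy as np

def T_min(Phi, T_delta=0.3, s_delta=10.0, N_star=9):
    """Eq. (16) with s_trans = ln[(2 - sqrt(2)) N_* / Phi] (k_B = 1)."""
    s_trans = np.log((2 - np.sqrt(2)) * N_star / Phi)
    return T_delta / (1 + s_trans / s_delta)

# The floor sinks below T_delta upon dilution (larger translational entropy gain)
for Phi in (1.0, 1e-2, 1e-4):
    print(f"Phi = {Phi:.0e}: T_min = {T_min(Phi):.3f}  (T_delta = 0.3)")
```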
#### iv.1.1 Stiff defects
We start with \(T_{\delta}/T_{\Delta}=0.3\), the case of relatively stiff defective (weak) bonds. The concentration-temperature phase diagram is shown in Figure 6a. The solid black lines are the boundaries determined from the size-fluctuation criteria discussed in Section III (i.e. where self-limitation corresponds to \(\frac{d^{2}(\log\langle N\rangle)}{d(\log\Phi)^{2}}<0\)) that delineate states of dispersed monomers, self-limiting assembly, and unlimited/defective assembly. Due to the temperature dependence of the binding free energies, the nature of the free energy per subunit depends strongly on \(T\) (Figure 6b), exhibiting variants of the fixed-\(\Delta_{T}\) examples illustrated in Section III. At low temperatures \(T<T_{\delta}\) and favorable weak binding \(-\delta_{T}<0\) (e.g. case (iii)), the defective aggregate states are thermodynamically stable. This means that as the concentration of subunits increases, the larger defective states will eventually become occupied and the size of the assemblies will grow uncontrollably at large enough concentration (i.e. above a secondary critical aggregation concentration). Similar to what was described in Section III, this can be seen in the rapid rise of the average assembly size \(\langle N\rangle\) (Figure 6c, dashed blue line) after it plateaus near the self-limiting size \(N_{*}\sim 9\), in addition to the local minimum in the size fluctuations (solid blue line). As the temperature is raised, the defective aggregates become less favorable until \(-\delta_{T}>0\), as in case (ii). As \(\Phi\) increases, the system transitions from dispersed subunits to self-limiting
assemblies but not to defective aggregates, at least up to the maximal concentration \(\Phi=1\). This can be seen in Figure 6c, where the average size (dashed orange curve) begins to plateau and the fluctuations (solid orange curve) decrease but do not reach a local minimum for \(\Phi\leq 1\). Finally, as the temperature is further increased, we eventually enter the regime where \(\Delta_{T}\geq 0\) and the strongly-bound subunits melt. When this happens, the dominant state will obviously be dispersed subunits for any concentration. This of course results in an average size that barely increases beyond a single subunit (dashed green curve) and fluctuations that do not reach a local maximum (solid green curve).
Also shown in Figure 6a (bottom), as the dashed white line, is the prediction for \(T_{\rm min}(\Phi)\), which is generally in agreement with the boundary defined by where the size fluctuations begin to plateau toward \(0.5\), corresponding to the regime of uncontrolled growth of defective chains.
#### iv.1.2 Floppy defects
We now turn to the opposite limit, \(T_{\delta}/T_{\Delta}=3.0\), the case of relatively floppy weak binding, whose concentration-temperature phase diagram is shown in Figure 7a. This case is strikingly different from the case of \(T_{\delta}/T_{\Delta}<1\). In particular, the self-limiting region terminates before reaching the maximum concentration. As shown in Figure 7b, because \(T_{\delta}/T_{\Delta}>1\), defective and unlimited assembly is stable relative to dispersed monomers at temperatures well above the nominal melting of strong bonds, an effect that derives from the enhanced conformational entropy exhibited by defective bonding. This results in the system transitioning to the uncontrolled growth of defective aggregates at high concentrations, or having a direct transition from dispersed subunits to defective aggregates. As shown in Figure 7c for case (i), the size fluctuations (green solid line) do not exhibit a local minimum but rather a simple rise, indicative of dispersed subunits, followed by a plateau toward \(0.5\), indicative of defective aggregates. The direct "transition" from dispersed subunits to defective aggregates can be characterized by the knee of the curve (indicative of a peak in a higher derivative of \(\langle N\rangle(\Phi)\)), which can be computed by considering \(\frac{d^{2}}{d(\log\Phi)^{2}}\left(\frac{\langle\delta N^{2}\rangle}{\langle N^{2}\rangle}\right)\) and locating the minimum, or most negative, value. This offers an alternative criterion for determining when dispersed subunits begin to form larger structures, whether those structures are self-limiting or defect-ridden. Indeed, as seen in Figure 7, this criterion matches well with that of choosing the local maximum in the size fluctuations when there is a transition from dispersed subunits to self-limiting assemblies.

Figure 6: **a)** Concentration and temperature dependence of the average size and relative size fluctuations for \(k_{f}/k_{u}=100\), \(\delta_{0}/\Delta_{0}=0.3\), \(k_{f}f^{2}a^{2}/2\Delta_{0}=1\), and \(s_{\delta}/s_{\Delta}=1.0\) (\(s_{\delta}=s_{\Delta}=10k_{B}\)). The ratio of melting temperatures is \(T_{\delta}/T_{\Delta}=0.3\) (\(k_{B}T_{\Delta}=0.1\)\(\Delta_{0}\)). Solid white lines are boundaries representing the onset of self-limiting assembly (upper white line) and the onset of the formation of defective aggregates (lower white line). \(T_{\rm min}(\Phi)\) is the minimum temperature below which defective aggregates dominate the system (dashed white line). **b)** Free energy landscapes as a function of the temperature: (i) \(T/T_{\Delta}=1.0\), (ii) \(T/T_{\Delta}=0.5\), and (iii) \(T/T_{\Delta}=0.2\). In case (iii), \(-\delta_{T}<0\) and defective aggregates are energetically favorable. In case (ii), \(-\delta_{T}>0\) and self-limiting assemblies are energetically/entropically favorable. In case (i), \(\Delta_{T}>0\) and dispersed subunits are favorable. **c)** Average size (dashed) and relative size fluctuations (solid) for cases (i), (ii), and (iii), and the mass distribution of subunits for decreasing temperatures (iv) \(T/T_{\Delta}=0.8\), (v) \(T/T_{\Delta}=0.3\), and (vi) \(T/T_{\Delta}=0.15\).
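The knee criterion above can be implemented directly on a numerically sampled fluctuation curve. A minimal sketch, assuming the fluctuations are tabulated on a logarithmic concentration grid (the toy sigmoid stands in for a real dispersed-to-defective crossover):

```python
import numpy as np

def knee_of_transition(log_phi, fluct):
    """Locate the knee of the dispersed-to-defective 'transition' as the most
    negative second derivative of the relative size fluctuations vs log(Phi)."""
    d2 = np.gradient(np.gradient(fluct, log_phi), log_phi)
    return log_phi[np.argmin(d2)]

# Example: a smooth rise of the fluctuations toward the 1D-chain value 1/2;
# the knee (shoulder) of this sigmoid sits slightly above its midpoint at -4
log_phi = np.linspace(-8, 0, 200)
fluct = 0.5 / (1 + np.exp(-2 * (log_phi + 4)))
print(knee_of_transition(log_phi, fluct))
```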
### Temperature vs. frustration phase diagram
We now turn our attention to the role of frustration. We consider the maximum concentration, in our case \(\Phi=1\), and focus on the types of assemblies that can form in the high-concentration limit. This focus is motivated by the attempt to understand whether a robust, self-limiting state is possible over a sufficiently large, yet experimentally feasible, concentration range. We focus on the case of stiff weakly-bound subunits (Figure 6), for which we expect there to be a finite-temperature window for self-limitation that can persist to high concentrations. For each value of frustration, the criteria for determining the onset of self-limiting assembly and defect formation give an upper and a lower temperature for self-limitation at the maximum concentration. These boundaries are shown in Figure 8, overlaid with the self-limiting size \(N_{\star}\), the average assembly size \(\langle N\rangle\), and the relative size fluctuations \(\langle\delta N^{2}\rangle/\langle N^{2}\rangle\).
There are essentially three ranges of frustration that are important. When frustration is too small, the strain accumulation is not sufficient to prevent the binding of new subunits, and assemblies undergo shape flattening, meaning that there is no local minimum at any finite \(N\) and the assembly results in unlimited, strongly-bound and defect-free chains (as shown schematically in Figure 2f). To be more precise, shape flattening occurs when (see Appendix A.2)

\[f\lesssim\sqrt{\frac{2\Delta_{T}}{k_{f}a^{2}\sqrt{4k_{f}/k_{u}}}}\equiv f_{\rm flat}(T). \tag{17}\]

Figure 7: **a)** Concentration and temperature dependence of the average size and relative size fluctuations for \(k_{f}/k_{u}=100\), \(\delta_{0}/\Delta_{0}=0.3\), \(k_{f}f^{2}a^{2}/2\Delta_{0}=1\), and \(s_{\delta}/s_{\Delta}=0.1\) (\(s_{\delta}=1k_{B}\), \(s_{\Delta}=10k_{B}\)). The ratio of melting temperatures is \(T_{\delta}/T_{\Delta}=3\) (\(k_{B}T_{\Delta}=0.1\Delta_{0}\)). Solid white lines are boundaries representing the onset of self-limiting assembly and the onset of the formation of defective aggregates. The dotted white line indicates the transition out of dispersed subunits into either self-limiting assemblies or defective aggregates. In this case, self-limiting assemblies do not exist at high concentrations. \(T_{\min}(\Phi)\) is the minimum temperature below which weakly-bound aggregates dominate the system (dashed white line). **b)** Free energy landscapes as a function of the temperature: (i) \(T/T_{\Delta}=0.8\), (ii) \(T/T_{\Delta}=0.4\), and (iii) \(T/T_{\Delta}=0.25\). **c)** Average size (dashed) and relative size fluctuations (solid) for cases (i), (ii), and (iii), and the mass distribution of subunits for decreasing temperatures (iv) \(T/T_{\Delta}=1.0\), (v) \(T/T_{\Delta}=0.25\), and (vi) \(T/T_{\Delta}=0.1\).
This is the finite-temperature generalization of the \(T=0\) energetics considered in Section II above. Note that because \(\Delta_{T}\to 0\) as the temperature approaches \(T_{\Delta}\), this critical threshold of frustration also vanishes, i.e. \(f_{\rm flat}(T\to T_{\Delta})\to 0\); more generally, it decreases as the temperature is raised.
In the opposite regime, when frustration is too large, the frustration costs completely negate the energetic gains of subunit binding, and so the self-limiting size approaches that of a single subunit. The maximal value of frustration at which the self-limiting size \(N_{\star}\) is roughly a single subunit is given by
\[f\gtrsim\sqrt{\frac{24\Delta_{T}}{k_{u}a^{2}}}\equiv f_{\rm single}(T). \tag{18}\]
Therefore, self-limiting assembly most robustly occurs in the intermediate range of frustrations \(f_{\rm flat}\lesssim f\lesssim f_{\rm single}\). Note that the size of this range over which self-limiting assembly can occur is
\[\frac{f_{\rm single}}{f_{\rm flat}}\sim\left(\frac{k_{f}}{k_{u}}\right)^{\frac {3}{4}}. \tag{19}\]
Recall (Section II) that \(\sqrt{4k_{f}/k_{u}}\) sets the size scale over which strain can accumulate before the assembly flattens out. Thus, increasing this size scale not only expands the regime of robust self-limiting assembly but also increases the size of such assemblies.
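For completeness, the prefactor implicit in Eq. (19) follows by dividing Eq. (18) by Eq. (17); both \(\Delta_{T}\) and \(a\) cancel:

\[
\frac{f_{\rm single}}{f_{\rm flat}}=\left[\frac{24\Delta_{T}}{k_{u}a^{2}}\cdot\frac{k_{f}a^{2}\sqrt{4k_{f}/k_{u}}}{2\Delta_{T}}\right]^{1/2}=\left[24\left(\frac{k_{f}}{k_{u}}\right)^{3/2}\right]^{1/2}=\sqrt{24}\left(\frac{k_{f}}{k_{u}}\right)^{3/4},
\]

so for \(k_{f}/k_{u}=100\) the self-limiting window spans roughly a factor of \(\sqrt{24}\times 100^{3/4}\approx 150\) in frustration.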
Figure 8b-c summarize the high-concentration phases and their corresponding example free energy landscapes. The phase diagram can roughly be divided into six regions. (i) At sufficiently low frustrations, assemblies undergo shape flattening, where there is no local minimum in the free energy landscape; growth is uncontrolled and unlimited, and takes the form of defect-free, strongly-bound chains. Note that the criteria for the onsets of self-limiting assembly and defect formation extend into the shape-flattening region. This is due to the small amount of frustration slowing down the drop in free energy at large sizes. (ii) At sufficiently high temperatures, it is entropically unfavorable for subunits to bind, and so the preferred state of the system is that of dispersed subunits. (iii) As the temperature is lowered but kept above \(T_{\delta}\), the subunits begin to assemble into stable self-limiting structures, since weakly-bound subunits are entropically unfavorable. (iv) As the temperature is lowered further below \(T_{\delta}\), the defective aggregate states become energetically favorable. As a result, defective aggregates always dominate at sufficiently low temperatures. (v) At sufficiently high frustrations, it is too costly to fully bind subunits. Above \(T_{\delta}\), the favorable state is dispersed subunits. Finally, (vi) below \(T_{\delta}\), while fully-bound subunits may be unfavorable due to high frustration, weakly-bound subunits can be energetically favorable due to stress attenuation from defect formation. This results in the assembly of linear chains made up of all weakly-bound subunits. Note that the boundary separating the direct transition from dispersed subunits (v) to defective aggregates (vi) uses the same criterion discussed above, where the second derivative of the size fluctuations detects the onset of the plateau of the size fluctuations toward 0.5, which represents unlimited growth. In fact, this boundary sits roughly at \(T_{\delta}\) (in this case \(0.3\,T_{\Delta}\)).

Figure 8: Frustration-temperature phase diagrams for \(k_{f}/k_{u}=100\), \(\delta_{0}/\Delta_{0}=0.3\), and \(s_{\delta}/s_{\Delta}=1.0\) (\(s_{\delta}=s_{\Delta}=10k_{B}\)). The ratio of melting temperatures is \(T_{\delta}/T_{\Delta}=0.3\) (\(k_{B}T_{\Delta}=0.1\Delta_{0}\)). **a)** Self-limiting size (top) and average assembly size (bottom) as functions of frustration and temperature. **b)** Size fluctuations as functions of frustration and temperature. The solid lines and points indicate the onsets for the formation of self-limiting assemblies and defective aggregates. **c)** Free energy landscapes corresponding to the various regimes indicated in b).
In Appendix B, we derive the minimum and maximum temperatures between which self-limiting assemblies are robust. The minimum temperature \(T_{\text{min}}(\Phi=1)\) below which a majority of subunits can be found in defective aggregates is
\[T_{\text{min}}(\Phi=1)\simeq T_{\delta}\left[1+\frac{1}{s_{\delta}}\ln\left( \frac{24(2-\sqrt{2})^{3}\Delta_{0}}{k_{u}f^{2}a^{2}}\right)^{\frac{1}{3}} \right]^{-1}, \tag{20}\]
which follows from eq. (16) and the \(T=0\) approximation for the primary, self-limiting size \(N_{\star}\simeq\left(\Delta_{0}/k_{u}f^{2}a^{2}\right)^{1/3}\). The maximum temperature \(T_{\text{max}}\) above which the system is dispersed (no strong or weak bonds despite being dense) is given by
\[T_{\text{max}}(\Phi=1)\simeq T_{\Delta}\left[1-\frac{9}{4(4k_{f}/k_{u})^{3/2}} \left(\frac{f}{f_{\text{flat}}}\right)^{2}\right], \tag{21}\]
which follows from considering the frustration-dependent excess energy cost of forming primary, self-limiting aggregates and the estimated melting point of strong (i.e. frustrated) binding. Notably, these two equations show that the upper and lower temperature limits for self-limiting assembly are set, to a first approximation, by the nominal melting temperatures of strong and weak bonds, \(T_{\Delta}\) and \(T_{\delta}\), respectively. The maximum frustration before the subunits are too frustrated to form fully-bound assemblies is \(f_{\text{max}}\sim f_{\text{flat}}(k_{f}/k_{u})^{3/4}\sim f_{\text{single}}\). Additionally, these results show that the temperature range of self-limiting assembly narrows with increasing \(f\), due both to a slightly increasing value of \(T_{\text{min}}(\Phi=1)\) with frustration and, more importantly, to a strongly decreasing dependence of \(T_{\text{max}}(\Phi=1)\) on \(f\), which effectively closes the window as \(f\to f_{\rm single}\).
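The closing of this window is easy to visualize by evaluating Eqs. (20) and (21) directly. The sketch below uses the Figure 8 parameter set, with \(\Delta_{T}\) approximated by \(\Delta_{0}\) inside \(f_{\rm flat}\) (a \(T=0\) estimate), and units \(k_{B}=a=\Delta_{0}=1\).

```python
import numpy as np

ku, kf = 1.0, 100.0
T_Delta, T_delta, s_delta = 0.1, 0.03, 10.0          # Fig. 8: T_delta/T_Delta = 0.3
f_flat = np.sqrt(2.0 / (kf * np.sqrt(4 * kf / ku)))  # Eq. (17), Delta_T -> Delta_0 = 1

def T_max(f):
    """Eq. (21): upper stability limit at Phi = 1."""
    return T_Delta * (1 - 9 / (4 * (4 * kf / ku) ** 1.5) * (f / f_flat) ** 2)

def T_min(f):
    """Eq. (20): lower stability limit at Phi = 1."""
    arg = (24 * (2 - np.sqrt(2)) ** 3 / (ku * f ** 2)) ** (1 / 3)
    return T_delta / (1 + np.log(arg) / s_delta)

for f in (0.05, 0.2, 0.5, 1.0):
    print(f"f = {f}: self-limiting window T in ({T_min(f):.3f}, {T_max(f):.3f})")
```

Consistent with the discussion above, the printed window shrinks from both ends as \(f\) grows, with the drop in \(T_{\rm max}\) dominating.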
In Appendix D and Figure 11, we illustrate these effects by comparing additional phase diagrams as the weak-binding energy ratio \(\delta_{0}/\Delta_{0}\) is varied, showing that the stable temperature window for self-limitation narrows as the nominal melting of defects approaches that of the strong bonds (i.e. as \(T_{\delta}\to T_{\Delta}\)).
## V Discussion and Conclusion
In this article, we have introduced a solvable model of a frustrated, incommensurate chain of "polybricks" to probe the role of temperature, concentration, and frustration (or subunit shape) in the process of self-limiting, geometrically frustrated assembly. This model captures the essential features of geometrically frustrated assemblies, particularly the competition between subunit binding and the accumulation of strain due to frustration as an assembly grows. We focused on two categories of frustration escape: shape flattening, when frustration is insufficient to overcome subunit binding, and weak binding, when assemblies can attenuate or stop stress propagation by breaking bonds or forming defects, which notably controls the thermal stability of self-limiting assembly. The resulting free energy landscapes with both strong and weak binding allowed us to determine how temperature, concentration, and frustration control the stability of self-limiting assembly.
Our main results are summarized in Figure 9. The key result is that self-limiting assembly is robust at
Figure 9: Schematic of frustration-temperature phase diagram of GFAs at high concentrations. **a)** Low frustration and temperature: unlimited growth of shape-flattened assemblies without any defects. **b)** High frustration or temperature: dispersed subunits. **c)** Intermediate frustration and temperature: self-limiting assemblies. **d)** Intermediate frustration and low temperature: unlimited, defective aggregate of weakly-bound self-limiting assemblies. Stable SLA can exist at low or intermediate concentrations. **e)** High frustration and low temperature: unlimited, defective aggregate of weakly-bound subunits.
intermediate temperatures \(T_{\delta}\lesssim T\lesssim T_{\Delta}\) set by the entropy costs of weak and strong binding, and frustrations \(f_{\rm flat}\lesssim f\lesssim f_{\rm max}\) set by the elastic costs of binding. At low frustration and temperature **(a)**, the strain accumulation from frustration is insufficient to overcome the strong binding between subunits, so the assemblies effectively ignore frustration and grow without limit. The boundary of the shape-flattening regime is given by the frustration \(f_{\rm flat}(T)\) (Eq. 17), which is the minimum frustration needed for self-limiting assembly. At high temperatures or high frustrations (**b**), either the strong bonds are melted, with \(\Delta_{T}=\Delta_{0}-Ts_{\Delta}\geq 0\) or \(T\geq\Delta_{0}/s_{\Delta}=T_{\Delta}\), or the subunits are too frustrated to bind, resulting in dispersed subunits. As the temperature is lowered below the weak-binding melting temperature \(T_{\delta}=\delta_{0}/s_{\delta}\) at high frustrations (**e**), the weak binding free energy changes sign to \(-\delta_{T}=-(\delta_{0}-Ts_{\delta})<0\), and so while strong binding is too costly due to being overly frustrated, the subunits can still weakly bind to form unlimited aggregates of weakly-bound subunits. The boundary that describes the onset of dispersed subunits or unlimited aggregates of weakly-bound subunits is given by \(T_{\rm max}(f)\) in Eq. (21), from which the maximum frustration can be determined to be \(f_{\rm max}\sim f_{\rm flat}(k_{f}/k_{u})^{3/4}\), where \(k_{f}/k_{u}\) is the ratio of inter- to intra-subunit stiffness. Therefore, at intermediate frustrations \(f_{\rm flat}\lesssim f\lesssim f_{\rm max}\), where we avoid shape-flattened assemblies and overly frustrated dispersed or aggregated subunits, we expect to obtain the target self-limiting assemblies (**c**). The formation of self-limiting assemblies is most robust at high concentrations when the temperature is above \(T_{\rm min}(f)\) (Eq. (20)), the temperature at which weak bonds entropically (including translational entropy) melt. Below this temperature (**d**), defective aggregates of weakly-bound self-limiting assemblies become energetically favorable. Note that while self-limiting assemblies in this regime of frustration and temperature are unstable at high concentrations, it is possible to stabilize them at lower concentrations. It can be shown (see Appendix B) that the transitions from dispersed subunits to self-limiting assemblies and from self-limiting assemblies to defective aggregation occur at concentrations \(\Phi_{\star}\sim e^{\beta\mathcal{F}_{\star}}\) and \(\Phi_{\star\star}\sim e^{-\beta\delta_{T}}\), respectively, the ratio of which is \(\Phi_{\star\star}/\Phi_{\star}\sim e^{-\beta(\mathcal{F}_{\star}+\delta_{T})}\). The quantity \(\mathcal{F}_{\star}+\delta_{T}\) can be interpreted as the difference between the free energies of a subunit in a self-limiting assembly and one in an aggregate with all weak bonds. Thus, if \(\mathcal{F}_{\star}<-\delta_{T}\), it is possible to stabilize self-limiting assemblies over a low concentration range \(\Phi_{\star}<\Phi<\Phi_{\star\star}<1\). Note that as temperature is lowered, both \(\Phi_{\star}\) and \(\Phi_{\star\star}\) approach zero exponentially fast. Thus, for any concentration, defective aggregates will always dominate when the temperature is low enough.
Our analysis offers some potential lessons in designing robust self-limiting assemblies.
* The self-limiting size is controlled by the ratio of inter- to intra-subunit stiffnesses \(\sqrt{k_{f}/k_{u}}\), which sets the size scale over which strains can accumulate before an assembly undergoes shape flattening. The maximum self-limiting size can thus be increased by making inter-subunit interactions stiff while making the subunits themselves floppy.
* The range of frustrations \(f_{\rm flat}\lesssim f\lesssim f_{\rm max}\) over which self-limiting assemblies can be stabilized, like the self-limiting size, is controlled by the quantity \(k_{f}/k_{u}\) through the relation \(f_{\rm max}/f_{\rm flat}\sim(k_{f}/k_{u})^{3/4}\). Thus, making inter-subunit interactions stiffer while making the subunits themselves floppier has the additional benefit of allowing for a wider range of frustrations or subunit shapes.
* The minimum temperature needed to stabilize self-limiting assemblies, \(T_{\rm min}(f)\) (Eq. (20)), is primarily determined by the weak-binding melting temperature \(T_{\delta}=\delta_{0}/s_{\delta}\), which accounts for the energy and entropy of weak binding. Lowering this temperature requires either weakening the weak binding or increasing the entropy cost of weak binding. When subunits are not strongly bound, they should be as repulsive as possible or as restricted as possible in, for example, their orientational degrees of freedom. In experiments [30], for instance, while poly-T extensions were used to change the frustration, they can also be thought of as creating some repulsion between the subunits if they are not fully bound.
We can estimate where experiments [30] lie within our model and analysis. In experiments, it was found that the central blocks thermally fluctuated distances of roughly \(u\sim 2\,\,{\rm nm}\). The resulting intra-subunit stiffness is roughly \(k_{u}\sim\frac{k_{B}T}{u^{2}}\sim 1\) pN/nm. The inter-subunit interaction resulted from the binding of multiple two-base-pair sites on the surface of the central blocks, with binding energies of roughly \(\Delta_{0}\sim 60\,\,{\rm pN}\cdot{\rm nm}\). Given that base pairs have a size of roughly \(0.34\) nm, we arrive at a stiffness of \(k_{f}\sim 100\) pN/nm, resulting in a stiffness ratio of \(k_{f}/k_{u}\sim 100\). The length of each subunit is roughly \(30\) nm, with frustrations of roughly \(f\sim 0.01-0.1\). When subunits are (strongly or weakly) bound, the angle of rotational motion is roughly \(\theta\sim 1^{\circ}-10^{\circ}\), which results in an entropic contribution \(s_{\Delta}\sim s_{\delta}\sim 10k_{B}\). The estimated self-limiting size is \(N_{\star}\sim 3-15\), consistent with the assembly lengths observed in experiments. The minimum and maximum frustrations are \(f_{\rm flat}\sim 0.01\) and \(f_{\rm max}\sim 1.0\). The weak binding energy is difficult to estimate, so we assume it is roughly \(\delta_{0}/\Delta_{0}\sim 0.1-0.5\). The melting temperatures are \(T_{\Delta}\sim 1-1.5T\) and \(T_{\delta}\sim 0.1-0.7T\), where \(T\) here refers to room temperature. It is thus reasonable to conclude that the experiments are likely within the self-limiting regime, but can also be tuned to dispersed and unlimited/defective assembly regimes with suitable adjustments of binding affinity.
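As a check of this chain of estimates, the short sketch below reproduces them, using the \(T=0\) self-limiting size quoted earlier. The two-base-pair binding-site size used for \(k_{f}\) and the entropy \(s\sim 10k_{B}\) are assumptions.

```python
# Order-of-magnitude sketch of the estimates above. The two-base-pair
# binding-site size used for k_f and s_bond = 10 k_B are assumptions.
kBT = 4.1                        # room-temperature thermal energy (pN nm)
k_u = kBT / 2.0**2               # from u ~ 2 nm fluctuations: ~ 1 pN/nm
Delta0 = 60.0                    # strong binding energy (pN nm)
k_f = Delta0 / (2 * 0.34)**2     # two-base-pair site: ~ 1e2 pN/nm
a = 30.0                         # subunit length (nm)
s_bond = 10.0                    # binding entropy in units of k_B

print("k_f/k_u ~ %.0f" % (k_f / k_u))                  # ~ 100
for f in (0.01, 0.05, 0.1):
    N_star = (Delta0 / (k_u * f**2 * a**2))**(1.0 / 3.0)
    print("f = %.2f: N_star ~ %.1f" % (f, N_star))     # ~ 2-9 subunits
print("T_Delta/T_room ~ %.2f" % (Delta0 / (s_bond * kBT)))
for r in (0.1, 0.5):             # assumed range of delta0/Delta0
    print("delta0/Delta0 = %.1f: T_delta/T_room ~ %.2f"
          % (r, r * Delta0 / (s_bond * kBT)))
```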
While our model and analysis captures some generic features of the process of self-limiting assembly, there are open questions. In particular, an interesting question is what aspects of our analysis carry over to other examples of geometrically frustrated assemblies [31, 32, 33, 34, 35, 36, 41] and
other types of defects [11; 12; 13; 14; 40; 41]. This includes understanding how the dimensionality of assemblies may affect the thermodynamics (e.g. there is a nucleation barrier in higher dimensions [52]) and how defects such as disclinations and dislocations in these higher-dimensional assemblies change the strain accumulation and free energy landscapes. Thus, while many aspects of GFA models are non-universal, strain accumulation and superextensive energy growth, as well as the existence of multiple free energy minima due to frustration escape, are quite generic. Hence, we expect the qualitative behavior exhibited by the present 1D polybrick model, notably the existence of a minimal temperature for self-limiting assembly as well as the secondary aggregation behavior, to be generic and a key feature of physical realizations more broadly.
###### Acknowledgements.
The authors are grateful to N. Hackney and M. Hagan for valuable discussions and input. This work was supported by the US National Science Foundation through award NSF DMR-2028885 and the Brandeis Center for Bioinspired Soft Materials, an NSF MRSEC, DMR-2011846.
## Appendix A Exact solution of linear chain
We here derive the main results for the excess energy and finite temperature corrections for the incommensurate chain model.
### Discrete limit
The excess energy of a linear chain of \(N\) subunits with central block displacements \(\{u_{n}\}\) is
\[E_{\rm ex}=\sum_{n=1}^{N}\frac{1}{2}k_{u}u_{n}^{2}+\sum_{n=1}^{N-1}\frac{1}{2 }k_{f}(u_{n+1}-u_{n}+fa)^{2}. \tag{10}\]
In mechanical equilibrium, the equilibrium positions \(\{u_{n}^{(\rm eq)}\}\) satisfy
\[\sum_{n=1}^{N}K_{mn}u_{n}^{(\rm eq)}=k_{f}fa(\delta_{m,1}-\delta_{m,N}), \tag{11}\]
where the tridiagonal matrix of spring constants \(K_{mn}\) is given by
\[\mathbf{K}=\begin{bmatrix}k_{u}+k_{f}&-k_{f}&\ldots&0&0\\ -k_{f}&k_{u}+2k_{f}&\ldots&0&0\\ \vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&\ldots&k_{u}+2k_{f}&-k_{f}\\ 0&0&\ldots&-k_{f}&k_{u}+k_{f}\end{bmatrix}. \tag{12}\]
Using the equation for mechanical equilibrium (Eq. (11)), we can rewrite the ground state excess energy in terms of the equilibrium displacements as
\[E_{\rm ex}[\{u_{n}^{(\rm eq)}\}]=\frac{1}{2}k_{f}fa\left[u_{N}^{(\rm eq)}-u_ {1}^{(\rm eq)}\right]+\frac{N-1}{2}k_{f}f^{2}a^{2}. \tag{13}\]
So we only need to determine the positions of the end subunits \(u_{1}^{(\rm eq)}\) and \(u_{N}^{(\rm eq)}\). By symmetry, we have \(u_{1}^{(\rm eq)}=-u_{N}^{(\rm eq)}\). Inverting Eq. 11, we have
\[u_{N}^{(\rm eq)}=k_{f}fa\left(K_{N,1}^{-1}-K_{N,N}^{-1}\right). \tag{14}\]
Using properties of certain tridiagonal matrices [54], we have
\[K_{N,1}^{-1} =\frac{k_{f}^{N-1}}{\det(\mathbf{K})} \tag{15a}\] \[K_{N,N}^{-1} =\frac{1}{\det(\mathbf{K})}\prod_{n=1}^{N-1}\left[k_{u}+4k_{f}\sin^{ 2}\frac{(2n-1)\pi}{2(2N-1)}\right], \tag{15b}\]
where
\[\det(\mathbf{K})=\prod_{n=1}^{N}\left[k_{u}+4k_{f}\sin^{2}\frac{(n-1)\pi}{2N} \right]. \tag{16}\]
Using these results, the excess energy can be written as
\[\mathcal{E}_{\rm ex}=\frac{k_{f}f^{2}a^{2}}{N}\left[\frac{N-1}{2}+k_{f}\left( K_{N,1}^{-1}-K_{N,N}^{-1}\right)\right]. \tag{17}\]
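As a sanity check of these closed-form expressions, the minimal sketch below builds the stiffness matrix of Eq. (12), solves the equilibrium condition Eq. (11) numerically, and compares the resulting excess energy per subunit (evaluated directly from Eq. (10)) with the continuum formula derived in the next subsection. Parameter values are illustrative.

```python
import numpy as np

def excess_energy_per_subunit(N, k_u=1.0, k_f=100.0, f=0.05, a=1.0):
    # Build the tridiagonal stiffness matrix of Eq. (12) ...
    K = np.diag(np.full(N, k_u + 2 * k_f))
    K += np.diag(np.full(N - 1, -k_f), 1) + np.diag(np.full(N - 1, -k_f), -1)
    K[0, 0] = K[-1, -1] = k_u + k_f          # free ends
    # ... solve the mechanical-equilibrium condition, Eq. (11) ...
    rhs = np.zeros(N)
    rhs[0], rhs[-1] = k_f * f * a, -k_f * f * a
    u = np.linalg.solve(K, rhs)
    # ... and evaluate the excess energy of Eq. (10) per subunit.
    E = 0.5 * k_u * np.sum(u**2)
    E += 0.5 * k_f * np.sum((np.diff(u) + f * a)**2)
    return E / N

lam = np.sqrt(100.0)                          # lambda = a sqrt(k_f/k_u)
for N in (2, 5, 10, 50, 200):
    x = N / (2 * lam)
    continuum = 0.5 * 100.0 * 0.05**2 * (1 - np.tanh(x) / x)
    print(N, excess_energy_per_subunit(N), continuum)
```

The discrete and continuum results agree for \(N\gg 1\), as expected.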
### Continuum limit
In the continuum limit, which can be obtained by taking \(\frac{u_{n+1}-u_{n}}{a}\approx\frac{du}{dx}\) and approximating the sums as integrals, the excess energy becomes
\[E_{\rm ex}=\int_{-\frac{L}{2}}^{\frac{L}{2}}dx\left[\frac{1}{2}Y_{u}u^{2}+ \frac{1}{2}Y_{f}a^{2}\left(\frac{du}{dx}+f\right)^{2}\right], \tag{18}\]
where \(Y_{u}=k_{u}/a\) and \(Y_{f}=k_{f}/a\) are the elasticities per unit length. Minimizing with respect to the displacement field \(u(x)\) yields
\[\frac{d^{2}u}{dx^{2}}=\frac{1}{\lambda^{2}}u, \tag{19}\]
where \(\lambda=\sqrt{Y_{f}a^{2}/Y_{u}}=a\sqrt{k_{f}/k_{u}}\) is the length scale over which strains accumulate, subject to the free boundary condition
\[\frac{du(\pm L/2)}{dx}=-f. \tag{20}\]
The solution for the displacement field is
\[u(x)=-f\lambda\frac{\sinh x/\lambda}{\cosh L/2\lambda}, \tag{21}\]
and the excess energy density due to strain accumulation is
\[\mathcal{E}_{\text{ex}}=\frac{1}{2}Y_{f}f^{2}a^{2}\left(1-\frac{\tanh L/2\lambda}{ L/2\lambda}\right). \tag{11}\]
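Expanding this result for short chains, \(L\ll\lambda\), using \(\tanh y\simeq y-y^{3}/3\), gives

\[\mathcal{E}_{\text{ex}}\simeq\frac{1}{6}Y_{f}f^{2}a^{2}\left(\frac{L}{2\lambda}\right)^{2},\]

which, per subunit (multiplying by \(a\) and setting \(L=Na\), \(\lambda=a\sqrt{k_{f}/k_{u}}\)), is

\[a\,\mathcal{E}_{\text{ex}}\simeq\frac{1}{6}k_{f}f^{2}a^{2}\left(\frac{N}{\sqrt{4k_{f}/k_{u}}}\right)^{2},\]

precisely the superextensive strain-accumulation term used in Eq. (11) of Appendix B.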
### Vibrational and orientational entropies
The finite-temperature orientational contributions to the free energy can be computed by assuming that while a free subunit can orient anywhere on the unit sphere, a bound subunit is restricted to a cone with angle \(\theta_{\Delta}\). The partition function is
\[Z_{\text{orient}}=\left(\int_{0}^{2\pi}\int_{0}^{\theta_{\Delta}}\frac{d \Omega}{4\pi}\right)^{N-1}=\left(\frac{1-\cos\theta_{\Delta}}{2}\right)^{N-1}. \tag{12}\]
The vibrational contribution can be obtained from the partition function by summing over all displacements \(\{u_{n}\}\). By considering displacements \(\{\delta u_{n}\}\) about the equilibrium positions \(\{u_{n}^{(\text{eq})}\}\), we can write the partition function as
\[\begin{split} Z_{\text{vib}}&=\int\left(\prod_{n=1}^{N}d\delta u_{n}\right)e^{-\beta E_{N}[\{u_{n}^{(\text{eq})}+\delta u_{n}\}]}\\ &=e^{-\beta E_{N}[\{u_{n}^{(\text{eq})}\}]}\int\left(\prod_{n=1}^{N}d\delta u_{n}\right)e^{-\beta\frac{1}{2}\delta u_{m}K_{mn}\delta u_{n}} \\ &=e^{-\beta E_{N}[\{u_{n}^{(\text{eq})}\}]}\sqrt{\frac{(2\pi)^{N}}{\beta^{N}\det(\mathbf{K})}},\end{split} \tag{13}\]
where \(K_{mn}\) is the matrix of spring constants given by Eq. (12) and \(E_{N}[\{u_{n}^{(\text{eq})}\}]\) is the ground state energy. The determinant is given by Eq. (16). Taking the logarithm of these partition functions and shifting the free energy so that \(F_{N=1}=0\), we arrive at the total free energy per subunit
\[\mathcal{F}=\mathcal{E}_{N}+\delta\mathcal{F}_{\text{vib}}+\delta\mathcal{F}_ {\text{orient}}, \tag{14}\]
where
\[\delta\mathcal{F}_{\text{orient}} =\left(1-\frac{1}{N}\right)k_{B}T\ln\frac{2}{1-\cos\theta_{\Delta }}, \tag{15a}\] \[\delta\mathcal{F}_{\text{vib}} =\frac{k_{B}T}{2N}\sum_{n=1}^{N}\ln\left[1+\frac{4k_{f}}{k_{u}} \sin^{2}\frac{(n-1)\pi}{2N}\right]. \tag{15b}\]
In the limit of large assemblies (\(N\to\infty\)), the vibrational free energy becomes
\[\begin{split}\lim_{N\to\infty}\delta\mathcal{F}_{\text{vib}}& =k_{B}T\int_{0}^{\frac{1}{2}}dz\ln\left(1+\frac{4k_{f}}{k_{u}}\sin ^{2}\pi z\right)\\ &=k_{B}T\ln\left[\frac{1}{2}\left(1+\sqrt{1+\frac{4k_{f}}{k_{u}} }\right)\right].\end{split} \tag{16}\]
A useful approximation for our purposes is
\[\delta\mathcal{F}_{\text{vib}}\approx\left(1-\frac{1}{N}\right)k_{B}T\ln \left[\frac{1}{2}\left(1+\sqrt{1+\frac{4k_{f}}{k_{u}}}\right)\right]. \tag{17}\]
This approximation is illustrated in Figure 10.
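For a quick numerical comparison of the exact sum, Eq. (15b), with the approximation, Eq. (17), a minimal sketch (in units of \(k_{B}T\), with \(r=k_{f}/k_{u}\)):

```python
import numpy as np

# Compare the exact vibrational free energy, Eq. (15b), with the
# (1 - 1/N) approximation, Eq. (17); units of k_B T, r = k_f/k_u.
def dF_vib_exact(N, r):
    n = np.arange(1, N + 1)
    terms = np.log(1 + 4 * r * np.sin((n - 1) * np.pi / (2 * N))**2)
    return np.sum(terms) / (2 * N)

def dF_vib_approx(N, r):
    return (1 - 1 / N) * np.log(0.5 * (1 + np.sqrt(1 + 4 * r)))

for N in (2, 5, 10, 50, 200):
    print(N, dF_vib_exact(N, 100.0), dF_vib_approx(N, 100.0))
```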
For weak binding, we can compute, for example, the orientational free energy as follows. Given an assembly of length \(N\) that is made of \(M\) weakly-bound structures, there are \(N-M\) strong bonds and \(M-1\) weak bonds. Assuming that strong bonds and weak bonds can rotate about angles \(\theta_{\Delta}\) and \(\theta_{\delta}\), respectively, we can write the partition function as
\[Z_{\text{orient}}=\left(\int_{0}^{2\pi}\int_{0}^{\theta_{\Delta}}\frac{d \Omega}{4\pi}\right)^{N-M}\left(\int_{0}^{2\pi}\int_{0}^{\theta_{\delta}}\frac {d\Omega}{4\pi}\right)^{M-1}. \tag{18}\]
The free energy is then
\[\begin{split}\delta\mathcal{F}_{\text{orient}}&=\frac{(N-M)k_{B}T}{N}\ln\frac{2}{1-\cos\theta_{\Delta}}\\ &\quad+\frac{(M-1)k_{B}T}{N}\ln\frac{2}{1-\cos\theta_{\delta}}. \end{split} \tag{19}\]
The first term can be combined with the \(N-M\) strong bonds, resulting in the temperature-corrected strong binding energy \(\Delta_{0}-Ts_{\Delta}\), while the second term can be combined with the \(M-1\) weak bonds, resulting in \(\delta_{0}-Ts_{\delta}\), where for the case of the orientational contribution
\[s_{\delta}=k_{B}\ln\frac{2}{1-\cos\theta_{\delta}}. \tag{20}\]
Figure 10: \(1-\frac{1}{N}\) approximation (solid black line) for the finite-temperature vibrational contribution to the free energy (Eq. (15b)).
## Appendix B Multiple minima approximation
As discussed in Section II.3, the free energy landscape of the incommensurate chain with weak binding can have multiple local minima associated with the weak binding of multiple self-limiting assemblies. We here describe a useful approximation where we assume that the subunits in the system are either dispersed, in self-limiting assemblies of size \(N_{\star}\), or in defective weakly-bound aggregates of self-limiting assemblies of size \(mN_{\star}\) (for \(m>1\)). The free energy of a structure made up of \(m\) weakly-bound self-limiting assemblies is
\[\mathcal{F}_{mN_{\star}}=\mathcal{F}_{\star}-\frac{(m-1)\delta_{T}}{mN_{\star }}. \tag{10}\]
To determine \(N_{\star}\) and \(\mathcal{F}_{\star}\), note that in the strain accumulation regime (\(N\lesssim\sqrt{4k_{f}/k_{u}}\)) the free energy per subunit is
\[\mathcal{F}_{N}\simeq-\left(1-\frac{1}{N}\right)\Delta_{T}+\frac{1}{6}k_{f}f^ {2}a^{2}\left(\frac{N}{\sqrt{4k_{f}/k_{u}}}\right)^{2}. \tag{11}\]
Minimizing with respect to \(N\), we find the self-limiting size
\[\frac{N_{\star}}{\sqrt{4k_{f}/k_{u}}}\simeq\left(\frac{3\Delta_{T}}{k_{f}f^{2 }a^{2}\sqrt{4k_{f}/k_{u}}}\right)^{\frac{1}{3}}. \tag{12}\]
Substituting this into the free energy, we have
\[\mathcal{F}_{\star}\simeq-\Delta_{T}+\frac{1}{2}k_{f}f^{2}a^{2}\left(\frac{3 \Delta_{T}}{k_{f}f^{2}a^{2}\sqrt{4k_{f}/k_{u}}}\right)^{\frac{2}{3}}. \tag{13}\]
Applying ideal aggregation theory to a system of dispersed subunits and assemblies of size \(mN_{\star}\), we have for the total subunit volume fraction
\[\begin{split}\Phi&=\Phi_{1}+\sum_{m=1}^{\infty}mN_{ \star}\left[\Phi_{1}e^{-\beta\mathcal{F}_{\star}}e^{\beta\frac{(m-1)\delta_{T }}{mN_{\star}}}\right]^{mN_{\star}}\\ &=\Phi_{1}+\frac{N_{\star}\left(\Phi_{1}e^{-\beta\mathcal{F}_{ \star}}\right)^{N_{\star}}}{\left[1-\left(\Phi_{1}e^{-\beta\mathcal{F}_{\star} }\right)^{N_{\star}}e^{\beta\delta_{T}}\right]^{2}}.\end{split} \tag{14}\]
The second term can be split into the volume fractions of subunits in self-limiting structures \(\Phi_{\text{sl}}(\Phi_{1})=N_{\star}\left(\Phi_{1}e^{-\beta\mathcal{F}_{\star }}\right)^{N_{\star}}\) and defective aggregates \(\Phi_{\text{agg}}(\Phi_{1})=\Phi(\Phi_{1})-\Phi_{1}-\Phi_{\text{sl}}(\Phi_{1})\).
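A minimal sketch of this splitting, for illustrative (assumed) values of \(N_{\star}\), \(\mathcal{F}_{\star}\), and \(\delta_{T}\) with \(\beta=1\):

```python
import numpy as np

# Mass fractions from the multiple-minima equation of state, Eq. (14).
# N_star, F_star, and delta_T are illustrative assumptions (beta = 1).
N_star, F_star, delta_T = 5, -0.5, 0.3

def mass_fractions(Phi1):
    x = (Phi1 * np.exp(-F_star))**N_star   # (Phi_1 e^{-beta F_star})^{N_star}
    Phi_sl = N_star * x                    # subunits in self-limiting assemblies
    Phi = Phi1 + N_star * x / (1 - x * np.exp(delta_T))**2
    Phi_agg = Phi - Phi1 - Phi_sl          # subunits in defective aggregates
    return Phi, Phi_sl, Phi_agg

for Phi1 in (0.05, 0.2, 0.35):
    Phi, Phi_sl, Phi_agg = mass_fractions(Phi1)
    print(f"Phi1 = {Phi1}: Phi = {Phi:.3f}, "
          f"Phi_sl = {Phi_sl:.3f}, Phi_agg = {Phi_agg:.4f}")
```

As \(\Phi_{1}\) grows, the weight first shifts from dispersed subunits to self-limiting assemblies and then to defective aggregates, as described below.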
We can use this approximation to determine the conditions under which dispersed subunits transition to self-limiting assemblies and self-limiting assemblies transition to defective aggregates. For the transition between dispersed subunits to self-limiting assemblies, note that the concentration scale \(\Phi_{1}^{\star}\) at which \(\Phi_{1}^{\star}=\Phi_{\text{sl}}(\Phi_{1}^{\star})\) is given by
\[\Phi_{1}^{\star}=\left(\frac{e^{N_{\star}\beta\mathcal{F}_{\star}}}{N_{\star} }\right)^{\frac{1}{N_{\star}-1}}. \tag{15}\]
Assuming that the mass of defective structures is negligible at this concentration scale, we can write the total concentration of subunits as \(\Phi=2\Phi_{1}^{\star}\). Taking the maximum concentration \(\Phi=1\) and rearranging, we arrive at
\[e^{\beta\mathcal{F}_{\star}}=\left(\frac{N_{\star}}{2^{N_{\star}-1}}\right)^{ \frac{1}{N_{\star}}}. \tag{16}\]
Note that the right-hand side depends only weakly on \(N_{\star}\) and is bounded between \(1/2\) and \(1\). Therefore, as an estimate, the transition from dispersed subunits to self-limiting assemblies occurs roughly when \(\mathcal{F}_{\star}\) changes sign. This corresponds to when the self-limiting assemblies become energetically favorable over the dispersed subunits. From Eq. (13), we have the relation between temperature \(T\) and frustration \(f\)
\[\frac{T}{T_{\Delta}}\approx 1-\frac{9}{4(4k_{f}/k_{u})^{3/2}}\left(\frac{f}{f_{ \text{flat}}}\right)^{2}. \tag{17}\]
This temperature is the maximum temperature \(T_{\text{max}}\) above which the system will remain dispersed (see Section IV.2).
For the transition between self-limiting assemblies and defective aggregates, we note that the concentration scale \(\Phi_{1}^{\star\star}\) at which \(\Phi_{\text{sl}}(\Phi_{1}^{\star\star})=\Phi_{\text{agg}}(\Phi_{1}^{\star \star})\) satisfies
\[\left(\Phi_{1}^{\star\star}e^{-\beta\mathcal{F}_{\star}}\right)^{N_{\star}}e^{ \beta\delta_{T}}=1-\frac{1}{\sqrt{2}}. \tag{18}\]
This can be substituted back into the total volume fraction to obtain the equation of state
\[\Phi=(2-\sqrt{2})N_{\star}(f,T)e^{-\beta(\delta_{0}-Ts_{\delta})}, \tag{19}\]
which relates the temperature \(T\), frustration \(f\), and concentration \(\Phi\) at which defective aggregates dominate over self-limiting assemblies. For simplicity, if we assume that the self-limiting size \(N_{\star}\) is fixed or does not vary significantly over some range of temperatures and frustrations, we can solve for the minimum temperature above which self-limiting assembly occurs. This temperature \(T_{\text{min}}\) is given by
\[\begin{split} T_{\text{min}}&=\delta_{0}\left[s_{ \delta}+k_{B}\ln\frac{(2-\sqrt{2})N_{\star}}{\Phi}\right]^{-1}\\ &=\frac{T_{\delta}}{1+s_{\text{trans}}(\Phi)/s_{\delta}},\end{split} \tag{20}\]
where \(s_{\text{trans}}\) captures the effect of translational entropy.
## Appendix C Relation between average size and relative size fluctuations
The distribution of assemblies of size \(N\) is defined as \(\rho_{N}=\Phi_{N}/N=\left(\Phi_{1}e^{-\beta\mathcal{F}_{N}}\right)^{N}\). The average size is therefore
\[\left\langle N\right\rangle=\frac{\sum_{N=1}^{\infty}N\rho_{N}}{\sum_{N=1}^{ \infty}\rho_{N}}=\frac{\sum_{N=1}^{\infty}N\left(\Phi_{1}e^{-\beta\mathcal{F} _{N}}\right)^{N}}{\sum_{N=1}^{\infty}\left(\Phi_{1}e^{-\beta\mathcal{F}_{N}} \right)^{N}}. \tag{21}\]
Using \(d\rho_{N}/d\Phi_{1}=N\rho_{N}\Phi_{1}^{-1}\), we have
\[\begin{split}\frac{d\langle N\rangle}{d\Phi_{1}}&=\Phi_ {1}^{-1}\frac{\sum_{N=1}^{\infty}N^{2}\rho_{N}}{\sum_{N=1}^{\infty}\rho_{N}}- \Phi_{1}^{-1}\left(\frac{\sum_{N=1}^{\infty}N\rho_{N}}{\sum_{N=1}^{\infty}\rho_ {N}}\right)^{2}\\ &=\Phi_{1}^{-1}\left(\langle N^{2}\rangle-\langle N\rangle^{2} \right),\end{split} \tag{10}\]
and
\[\frac{d\Phi}{d\Phi_{1}}=\frac{d}{d\Phi_{1}}\sum_{N=1}^{\infty}N\rho_{N}=\Phi_ {1}^{-1}\langle N^{2}\rangle\sum_{N=1}^{\infty}\rho_{N}=\frac{\langle N^{2} \rangle\Phi}{\langle N\rangle\Phi_{1}}. \tag{11}\]
Therefore
\[\frac{d\langle N\rangle}{d\Phi}=\frac{d\langle N\rangle}{d\Phi_{1}}\frac{d \Phi_{1}}{d\Phi}=\frac{\langle\delta N^{2}\rangle}{\langle N^{2}\rangle} \frac{\langle N\rangle}{\Phi}, \tag{12}\]
which can be rewritten as
\[\frac{d(\log\langle N\rangle)}{d(\log\Phi)}=\frac{\langle\delta N^{2}\rangle} {\langle N^{2}\rangle}. \tag{13}\]
It is useful to note that for an exponential distribution \(\langle\delta N^{2}\rangle/\langle N^{2}\rangle\to 1/2\), which in our case is what happens at large sizes.
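This limit is straightforward to verify numerically (a minimal sketch):

```python
import numpy as np

# Check <dN^2>/<N^2> -> 1/2 for a geometric/exponential size
# distribution rho_N ~ q^N, the large-size limit quoted above.
N = np.arange(1, 200000)
for q in (0.5, 0.9, 0.99, 0.999):
    rho = q**N
    m1 = np.sum(N * rho) / np.sum(rho)
    m2 = np.sum(N**2 * rho) / np.sum(rho)
    print(q, (m2 - m1**2) / m2)   # approaches 0.5 as q -> 1
```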
## Appendix D Effect of weak-binding energy on the transition between self-limiting assemblies and defective aggregates
Figure 11 shows how varying the weak-binding energy \(\delta_{0}\) affects the onset of the transition from self-limiting assemblies to defective aggregates. As the weak-binding energy is increased and weak binding becomes more energetically favorable, the self-limiting regime becomes more and more narrow.
|
2310.12475 | A gauge theory for the 3+1 dimensional incompressible Euler equations | We show that the incompressible Euler equations in three spatial dimensions
can be expressed in terms of an abelian gauge theory with a topological BF
term. A crucial part of the theory is a 3-form field strength, which is dual to
a material invariant local helicity in the fluid. In one version of the theory,
there is an additional 2-form field strength, with the magnetic field
corresponding to fluid vorticity and the electric field identified with the
cross-product of the velocity and the vorticity. In the second version, the
2-form field strength is instead expressed in terms of Clebsch scalars. We
discuss the theory in the presence of the boundary and argue that edge modes
may be present in the dual description of fluid flows with a boundary. | Christopher Eling | 2023-10-19T05:23:28Z | http://arxiv.org/abs/2310.12475v1 | # A gauge theory for the 3+1 dimensional incompressible Euler equations
###### Abstract
We show that the incompressible Euler equations in three spatial dimensions can be expressed in terms of an abelian gauge theory with a topological BF term. A crucial part of the theory is a 3-form field strength, which is dual to a material invariant local helicity in the fluid. In one version of the theory, there is an additional 2-form field strength, with the magnetic field corresponding to fluid vorticity and the electric field identified with the cross-product of the velocity and the vorticity. In the second version, the 2-form field strength is instead expressed in terms of Clebsch scalars. We discuss the theory in the presence of the boundary and argue that edge modes may be present in the dual description of fluid flows with a boundary.
## Introduction
The role of topology in the study of fluid dynamics was first appreciated in the 19th century, when Helmholtz, Lord Kelvin, and others realized that vortex structures are invariant quantities carried along by an ideal fluid flow. For example, the degree to which vortex tubes are tangled and knotted is a material invariant and persists in the flow. Furthermore, the velocity circulation around a loop moving with the flow depends on whether the curve encloses a simply or multiply connected domain. For further applications of topological methods to fluids, see [1].
In addition to these well-known features of fluid flows, in recent years several non-trivial connections have emerged between the properties of waves in fluid systems and the topological phases of matter [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12]. One of the most striking results is that the shallow water wave modes localized near boundaries (eg, coasts, or even the equator) are equivalent to edge modes in topological phases of matter such as the quantum Hall state. At long distances, quantum Hall states are described by Chern-Simons theory, which is a topological gauge invariant field theory. For a review, see [13; 14]. This connection between water waves and gauge theory was made explicit in [15], where it was shown that the shallow water equations can be encoded into a 2+1 dimensional gauge theory defined in terms of a U(1) gauge connection \(A_{\mu}\). The action for the linearized theory includes a Chern-Simons term that captures both the Poincare waves in the bulk and the boundary Kelvin waves localized near coasts.
Is it possible that the dual gauge theory representation can be extended to other fluid systems? In [16] we proposed that the incompressible Euler equations can also be expressed in terms of a 2+1 dimensional gauge theory. The 2d vorticity equation is identified with the Bianchi identity, which is a topological conservation law. Hence, the scalar vorticity of the flow is mapped into the magnetic field, while the electric field is identified with the product of the vorticity and the gradient of the stream function. We then showed that the field equations arising from the variation of a certain gauge invariant action with an explicit Chern-Simons term reproduce the incompressible 2d Euler equation.
A natural question is whether 3+1 dimensional incompressible flows can also be mapped into a dual gauge theory, or is our result just a special feature of physics in three spacetime dimensions? In this paper we argue that the 3+1 dimensional Euler equations can be
encoded into a gauge theory with a 2-form potential \(P_{\mu\nu}\). As before, the idea is that the equations expressing the material invariance of certain quantities are Bianchi identities of the gauge theory. In this case, the Bianchi identity associated with gauge transformations of \(P_{\mu\nu}\) reflects the material invariance of a local helicity variable in the fluid description.
As a first step, we assume the 3+1 dimensional theory still contains a 1-form gauge potential \(A_{\mu}\) as a fundamental variable. In one higher dimension the Bianchi identity is naturally identified with the 3d vorticity equation. We show that the field equations produced by varying a gauge invariant action with a topological "BF" term are the incompressible 3d Euler equations. However, these field equations also place a restriction on the local helicity.
To remedy this issue, we introduce Clebsch scalars as fundamental variables describing the vorticity sector of the theory, following [15]. In this case, the action involves an auxiliary connection and a type of BF term we will refer to as Clebsch BF. The resulting field equations yield the Euler equations and the material transport of the Clebsch scalars without restriction on the local helicity.
The structure of this paper is as follows. In Section 2 we review the construction of the gauge theory in 2+1 dimensions and describe how to generalize the approach to the 3+1 dimensional case. Along the way we review the concept of vortex momentum in a flow and how this leads to the material invariance (or Lie transport) of local helicity. In Section 3 we construct the action principles for the theory. In Section 4 we discuss the behavior of the theory in the presence of a boundary. When a boundary is present, the gauge invariance of the theory is restricted, which leads to physical edge modes degrees of freedom. We compute the quasi-local Noether charges for the theory and derive the charge algebra associated with the edge modes. We conclude the paper with a discussion of open questions, including some speculative ideas on how the gauge theory picture can be extended to describe the viscous term in the Navier-Stokes equations.
## Gauge theory approach to incompressible fluids
### From two to three dimensions
In 2+1 dimensions, the gauge invariant action principle takes the form [16]
\[S_{\rm gauge}=\int dtd^{2}x\left(\frac{E_{i}E^{i}}{2B}-pB-L_{\rm CS}\right). \tag{1}\]
The action is expressed in terms of a gauge connection 1-form \(A_{\mu}\) and field strength \(F_{\mu\nu}=2\partial_{[\mu}A_{\nu]}\). The electric field \(E_{i}=F_{0i}\) and magnetic field \(B=\epsilon^{ij}\partial_{i}A_{j}\). \(p\) is an auxiliary function we will identify with the fluid pressure. The topological Chern-Simons term has the form
\[L_{\rm CS}=\frac{1}{2}\epsilon^{\mu\nu\rho}A_{\mu}\partial_{\nu}A_{\rho}. \tag{2}\]
This term is independent of the metric and invariant under gauge transformations \(A_{\mu}\to A_{\mu}+\partial_{\mu}\lambda\), up to a total derivative term.
Variation of this action with respect to \(A_{0}\) yields the Gauss law constraint
\[\partial_{i}\left(\frac{E^{i}}{B}\right)-B=0, \tag{3}\]
while variation with respect to \(A_{i}\) gives
\[-\partial_{t}\left(\frac{E^{i}}{B}\right)-\epsilon^{ij}\partial_{j}\left( \frac{E^{2}}{2B^{2}}\right)-\epsilon^{ij}\partial_{j}p+\epsilon^{ij}E_{j}=0. \tag{4}\]
Furthermore, in a 2+1 dimensional gauge theory, the Bianchi identity implies
\[\epsilon^{\mu\nu\rho}\nabla_{\mu}F_{\nu\rho}=\epsilon^{\mu\nu\rho}\partial_{ \mu}\partial_{\nu}A_{\rho}=0, \tag{5}\]
which can be rearranged into the form of the conservation of a current
\[\partial_{\mu}\left(\epsilon^{\mu\nu\rho}F_{\nu\rho}\right)=0. \tag{6}\]
This yields a Faraday law:
\[\partial_{t}B-\epsilon^{ij}\partial_{i}E_{j}=0. \tag{7}\]
This set of equations is equivalent to the incompressible Euler system with the identifications
\[B=\omega\] \[E_{i}=B\epsilon_{ij}v^{j}, \tag{8}\]
where \(\omega\) is the ultimately the fluid vorticity scalar and \(v^{i}\) is ultimately the fluid velocity. We assume the fluid velocity is divergence free, so that \(\partial_{i}v^{i}=0\), which implies \(v^{i}=\epsilon^{ij}\partial_{j}\psi\) in terms of the stream function \(\psi\).
A priori, the velocity and \(\omega\) fields are independent variables. The Gauss law (3) relates \(\omega\) to the 2d curl of \(v^{i}\) as expected
\[\omega=\epsilon^{ij}\partial_{i}v_{j}. \tag{9}\]
The Bianchi identity (7) becomes the ideal vorticity equation
\[\partial_{t}\omega+v^{i}\partial_{i}\omega=0. \tag{10}\]
Finally (4) is the Euler equation
\[\partial_{t}v^{i}+v^{j}\partial_{j}v^{i}+\partial^{i}p=0. \tag{11}\]
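This reduction can be checked symbolically. The sketch below verifies that, under the identifications (8) with a divergence-free velocity built from a stream function, the Faraday law (7) is exactly the vorticity equation (10):

```python
import sympy as sp

# Verify symbolically that with B = omega and E_i = B eps_{ij} v^j,
# the Faraday law (7) reduces to the 2d vorticity equation (10)
# for a divergence-free velocity built from a stream function psi.
x, y = sp.symbols('x y')
psi = sp.Function('psi')(x, y)
v1, v2 = sp.diff(psi, y), -sp.diff(psi, x)     # v^i = eps^{ij} d_j psi
omega = sp.diff(v2, x) - sp.diff(v1, y)        # omega = eps^{ij} d_i v_j

E1, E2 = omega * v2, -omega * v1               # E_i = B eps_{ij} v^j
curl_E = sp.diff(E2, x) - sp.diff(E1, y)       # eps^{ij} d_i E_j
advection = v1 * sp.diff(omega, x) + v2 * sp.diff(omega, y)

# Faraday law: d_t omega = curl_E, i.e. d_t omega + v.grad(omega) = 0
print(sp.simplify(curl_E + advection))         # prints 0
```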
In the 3+1 dimensional case we can start by attempting a similar procedure, first identifying the Bianchi identity equation with the 3d vorticity equation. In 3+1 dimensions the Bianchi identity is now expressed as the conservation of a 2-form current
\[\partial_{\mu}\left(\epsilon^{\mu\nu\rho\sigma}F_{\rho\sigma}\right)=0. \tag{12}\]
This yields two of the familiar Maxwell equations:
\[\vec{\nabla}\cdot\vec{B}=0 \tag{13}\] \[\partial_{t}\vec{B}+\vec{\nabla}\times\vec{E}=0, \tag{14}\]
where the magnetic field \(B^{i}=-\frac{1}{2}\epsilon^{ijk}F_{jk}\) and the electric field \(E_{i}=F_{0i}\).
A natural guess is to identify the magnetic field vector with the vorticity vector:
\[\vec{B}=\vec{\omega}. \tag{15}\]
Then the first equation states (correctly) that the divergence of \(\vec{\omega}\) is zero
\[\vec{\nabla}\cdot\vec{\omega}=0. \tag{16}\]
In three dimensions, vorticity is a vector and the vorticity equation is
\[\partial_{t}\vec{\omega}=\vec{\nabla}\times(\vec{v}\times\vec{\omega}), \tag{17}\]
or in index notation for a vector vorticity
\[\partial_{t}\omega^{i}+v^{j}\partial_{j}\omega^{i}-\omega^{j}\partial_{j}v^{i}=0. \tag{18}\]
Therefore, matching the vorticity equation with the Faraday law equation implies that the electric field should be identified as
\[\vec{E}=-\vec{v}\times\vec{B}=-\vec{v}\times\vec{\omega}, \tag{19}\]
which is the generalization of (8) to 3d. Note that we are assuming that \(\vec{v}\) is a divergence free vector; this condition does not appear as a separate topological conservation law\({}^{1}\). In the 2d case discussed above, the Euler equation and its active scalar generalizations [16] are formulated directly in terms of the stream function, which can be considered a fundamental variable. In 3d, one could generalize to a "stream vector" \(\vec{\psi}\) via \(v^{i}=\epsilon^{ijk}\partial_{j}\psi_{k}\).
Footnote 1: In 2+1 dimensions, the continuity equation for a compressible fluid with variable density \(\rho\) can be expressed as a Bianchi identity for a 1-form gauge connection, with magnetic field \(B=\rho\). See [15; 17]
If a topological term is a crucial part of the action for incompressible fluids in general dimension, then a major obstacle in proceeding further is that a Chern-Simons term does not exist in 3+1 dimensions. Instead, we will consider topological terms of the "BF" type, which can be defined in general dimension. In 3+1 dimensions, terms of the BF type involve an additional 2-form gauge potential \(P_{\mu\nu}\) (we will use \(P_{\mu\nu}\) here to avoid confusion with magnetic field \(\vec{B}\)) and the topological Lagrangian has the form
\[L_{\rm BF}=\epsilon^{\mu\nu\rho\sigma}P_{\mu\nu}F_{\rho\sigma}, \tag{20}\]
where \(F_{\mu\nu}\) is still the field strength of \(A_{\mu}\). This action term is explicitly invariant under the 1-form gauge transformations of \(A_{\mu}\), and is also invariant under the 2-form gauge transformations
\[P_{\mu\nu}\to P_{\mu\nu}+\partial_{[\mu}\lambda_{\nu]}. \tag{21}\]
up to a total derivative term.
In two spatial dimensions the 1-form gauge transformations have previously been linked to the area preserving diffeomorphism invariance of the fluid theory when viewed from the Lagrangian perspective [18; 19]. Lagrangian coordinates \(\eta_{i}\) parametrize the locations of
each fluid parcel at a given time. In 2d, area preserving transformations of the Lagrangian coordinates take the form
\[\eta_{i}=\eta_{i}+\epsilon_{i}{}^{j}\partial_{j}\lambda(\eta). \tag{22}\]
One can then consider small perturbations around an equilibrium solution where the Lagrangian coordinates are equal to Eulerian space coordinates \(x_{i}\). If the perturbations are parametrized with the 1-form field \(A_{i}\),
\[x_{i}=\eta_{i}+\epsilon_{i}{}^{j}A_{j}(\eta), \tag{23}\]
then to linear order the area preserving transformation is equivalent to a U(1) gauge transformation of \(A_{i}\)
\[A_{i}\to A_{i}+\partial_{i}\lambda. \tag{24}\]
Note that in this mapping to a gauge theory one has fixed the temporal gauge \(A_{0}=0\). Hence, the area preserving diffeomorphisms are equivalent to the remaining residual gauge transformations in the linearized regime.
In 3d, the volume preserving transformations are a generalization of (22)
\[\eta_{i}=\eta_{i}+\epsilon_{i}{}^{jk}\partial_{j}\lambda_{k}(\eta). \tag{25}\]
Similarly, one can introduce a 2-form field to parametrize perturbations about the equilibrium configuration
\[x_{i}=\eta_{i}+\epsilon_{i}{}^{jk}P_{jk}(\eta). \tag{26}\]
then at the linear level, the volume preserving diffeomorphisms act as the 2-form residual gauge transformations in (21)\({}^{2}\). This heuristic argument hints that 2-form gauge transformations should play a role in the mapping to 3d fluid flows.
Footnote 2: In this case the mapping is to a gauge theory with the gauge fixing \(P_{0i}=0\)
The field strength associated with \(P_{\mu\nu}\) is the 3-form field \(H_{\mu\nu\rho}\)
\[H_{\mu\nu\rho}=\partial_{[\mu}P_{\nu\rho]}. \tag{27}\]
In this theory there is another Bianchi identity involving \(H_{\mu\nu\rho}\):
\[\partial_{[\mu}H_{\nu\rho\sigma]}=0. \tag{28}\]
This Bianchi identity equation can be expressed as conservation law of a current density
\[\partial_{\mu}\left(\epsilon^{\mu\nu\rho\sigma}H_{\nu\rho\sigma}\right)=0, \tag{29}\]
or, equivalently,
\[\partial_{t}H+\partial_{i}H^{i}=0, \tag{30}\]
where \(H=\epsilon^{ijk}H_{ijk}\) and \(H^{i}=-3\epsilon^{ijk}H_{0jk}\).
The question is how to identify the field strength \(H\) with a fluid quantity. An obvious choice is the fluid helicity density \(h\), which is a scalar built out of a 3-form via the Hodge dual (ie exterior product of 1-form velocity and 2-form vorticity),
\[h=\vec{v}\cdot\vec{\omega},\ h_{ijk}=v_{[i}\omega_{jk]}. \tag{31}\]
Total helicity \(\int h\ d^{3}x\) is famously a global conserved quantity in flows after imposing the Euler and vorticity equations [20].
However, what we'd really like is a Lagrangian material invariant quantity, analogous to the vorticity scalar in 2d. In terms of the Lagrangian comoving derivative
\[D_{t}\omega=\partial_{t}\omega+v^{i}\partial_{i}\omega=0. \tag{32}\]
On a general curved manifold this equation can be expressed in terms of the Lie derivative along \(v\), leading to the more generic notion of "Lie transport" [21]:
\[\partial_{t}\omega+\mathcal{L}_{v}\omega=0. \tag{33}\]
Similarly, in 3d, the vorticity equation (18) can be expressed as Lie transport of \(\vec{\omega}\):
\[\partial_{t}\vec{\omega}+\mathcal{L}_{v}\vec{\omega}=0. \tag{34}\]
Note that vorticity expressed as 2-form \(\omega_{ij}\) is also Lie-transported.
The material invariance of the vorticity follows from the invariance of the ideal fluid theory under area/volume preserving diffeomorphisms (in an integrated form this is the Kelvin circulation theorem). The philosophy is that in the gauge theory description locally conserved quantities correspond to the topological conservation laws constructed via the Bianchi identities. It turns out there is a type of local helicity constructed from "vortex momentum density" that is a material invariant, as we describe next.
### Vortex momentum and local helicity
We start by considering the velocity field to be a 1-form on a general manifold. In terms of the Lie derivative on a 1-form, the Euler equation has the form
\[\partial_{t}v_{i}+\mathcal{L}_{v}v_{i}=\partial_{i}\left(\frac{1}{2}v^{2}-p \right). \tag{35}\]
Hence, velocity is almost Lie transported, up to an exact 1-form. Now introduce a scalar \(\ell\) such that
\[\partial_{t}\ell+\mathcal{L}_{v}\ell=\frac{1}{2}v^{2}-p. \tag{36}\]
It is straightforward to show that field \(u_{i}=v_{i}-\partial_{i}\ell\) is Lie-transported along the flow
\[\partial_{t}u_{i}+\mathcal{L}_{v}u_{i}=0. \tag{37}\]
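Explicitly, since the Lie derivative of a scalar commutes with the gradient, Eqs. (35) and (36) combine to give

\[\partial_{t}u_{i}+\mathcal{L}_{v}u_{i}=\left(\partial_{t}v_{i}+\mathcal{L}_{v}v_{i}\right)-\partial_{i}\left(\partial_{t}\ell+\mathcal{L}_{v}\ell\right)=\partial_{i}\left(\frac{1}{2}v^{2}-p\right)-\partial_{i}\left(\frac{1}{2}v^{2}-p\right)=0.\]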
The velocity potential \(\ell\) was first introduced by the mathematician Heinrich Martin Weber in the 19th century [22], and the importance of the Lie transport of \(u\) was recognized by Kuzmin and Oseledets in the 1980's [23; 24]. \(u\) can be interpreted as a vortex momentum density [25] produced by a large number of infinitesimal vortex rings or vortex dipoles. Each individual dipole has a linear momentum, and the distribution of a large number of these dipoles can be thought of as producing \(u_{i}\). Another related way of interpreting \(u\) is as the density of the impulse \(I=\int\vec{r}\times\vec{u}\ d^{3}x\) in a region of the fluid [26].
The vortex momentum helicity 3-form \(u_{[i}\omega_{jk]}=v_{[i}\omega_{jk]}-\partial_{[i}\ell\ \omega_{jk]}\) is Lie transported because the product of two Lie transported quantities is also Lie transported. Moreover, the dual scalar helicity density \(\sigma=u_{i}\omega^{i}\) is also a Lagrangian invariant
\[D_{t}\sigma=\partial_{t}\sigma+\mathcal{L}_{v}\sigma=0. \tag{38}\]
Hence, we identify the 3-form field strength variables with fluid variables in the following way: \(H=\sigma\) and \(H^{i}=\sigma v^{i}\).
## III Action principle for 3+1 dimensional Euler equations
In this section we propose two possible actions to describe an incompressible fluid in 3+1 dimensions. In the first case we describe the "vorticity sector" of the theory with a 1-form
\(A_{\mu}\) and the associated magnetic (15) and electric fields (19). This leads to an explicit BF term in the action, as described above. However, as we will see below, the resulting field equations restrict the "helicity sector". To remedy this issue, we re-express the vorticity sector in terms of Clebsch variables and an auxiliary gauge connection, generalizing the construction in [15].
First, following the 2+1 dimensional action (1), we propose the following action for an incompressible fluid in 3+1 dimensions:
\[S_{\rm test}=\int dtd^{3}x\ \left(\frac{H_{i}H^{i}}{2H}-pH+\frac{1}{2} \epsilon^{\mu\nu\rho\sigma}P_{\mu\nu}F_{\rho\sigma}\right). \tag{39}\]
This action is invariant under rotations and space-time translations, but not Lorentz boosts. The action is even under time reversal, but changes by an overall sign under parity.
To test this action, we vary with respect to \(P_{\mu\nu}\). Note that we define \(H_{\mu\nu\rho}=\partial_{[\mu}P_{\nu\rho]}\). Then it follows that
\[\delta H =\epsilon^{ijk}\partial_{i}(\delta P_{jk})\] \[\delta H^{i} =\epsilon^{ijk}\left(\partial_{t}(\delta P_{jk})+\partial_{k}( \delta P_{j0})+\partial_{j}(\delta P_{0k})\right) \tag{40}\]
First, we consider variations with respect to \(P_{0i}\). This yields the constraint equation
\[-\epsilon^{ijk}\partial_{j}\left(\frac{H_{i}}{H}\right)-B^{k}=0 \tag{41}\]
In terms of fluid variables, the velocity is \(v^{i}=H^{i}/H\). As before, we assume that \(\vec{v}\) and \(\vec{\omega}\) are a priori independent variables. The constraint equation connects them via the expected relation that \(\vec{\omega}\) is the curl of the velocity
\[\vec{\nabla}\times\vec{v}=\vec{B}=\vec{\omega}. \tag{42}\]
Next we vary with respect to \(P_{ij}\). Peeling off an overall factor of \(\epsilon^{ijk}\) yields the following field equation in terms of the fluid variables
\[\partial_{t}v_{k}+\partial_{k}(v^{2}/2)+\partial_{k}p+E_{k}=0. \tag{43}\]
Inserting the form of the electric field (19), imposing the constraint equation (42) and using the vector calculus identity
\[\partial_{i}(v^{2}/2)=v^{j}\partial_{j}v_{i}+\left(\vec{v}\times\vec{\omega}\right)_{i}, \tag{44}\]
yields the incompressible Euler equation.
We must also vary the 1-form connection \(A_{\mu}\) in the \(F_{\mu\nu}\) part of the topological term. This gives field equations that require the local helicity to vanish
\[H=H^{i}=0, \tag{45}\]
which would restrict us to flows with zero local helicity. In terms of the helicity density \(h\)
\[h=\vec{v}\cdot\vec{\omega}=\vec{\omega}\cdot\vec{\nabla}\ell. \tag{46}\]
One simple way to allow for generic non-zero \(H=\sigma\) in this framework is to couple the theory to an external current \(J^{\mu}_{\rm ext}\), such that the total action is
\[S_{\rm total}=\int d^{4}x\left(\frac{H_{i}H^{i}}{2H}-pH-\frac{1}{2}\epsilon^{ \mu\nu\rho\sigma}P_{\mu\nu}F_{\rho\sigma}+J^{\mu}_{\rm ext}A_{\mu}\right) \tag{47}\]
Because the current is conserved, the action remains gauge invariant. In this case the field equations tie the local helicity density to the electric charge density, eg.
\[H=\rho\] \[H^{i}=\rho v^{i}. \tag{48}\]
If there are a set of particles with charge \(e\), located at positions \(x_{a}\), then \(\rho(\vec{x},t)=\sum_{a=1}^{N}e\delta(\vec{x}-\vec{x}_{a})\) and
\[H(\vec{x},t)=\sum_{a=1}^{N}e\delta(\vec{x}-\vec{x}_{a}(t)). \tag{49}\]
### Clebsch BF action
To address the restriction on the local helicity in the first action, we now consider the following action again in terms of \(P_{\mu\nu}\), but now with the field strength (or alternatively, the 2-form vorticity current) depending on scalars \(\alpha\) and \(\beta\) as follows
\[S_{\rm clebsch}=\int dtd^{3}x\ \left(\frac{H_{i}H^{i}}{2H}-pH+\frac{1}{2} \epsilon^{\mu\nu\rho\sigma}P_{\mu\nu}\partial_{\rho}\alpha\partial_{\sigma} \beta\right). \tag{50}\]
The scalars \(\alpha\) and \(\beta\) can be thought of as being charged under \(P_{\mu\nu}\). Furthermore, one can define an auxiliary connection \(\tilde{A}_{\mu}=\alpha\partial_{\mu}\beta+\partial_{\mu}\chi\), which is a Clebsch decomposition of the field. Hence the last term in the action is a "Clebsch BF" term
\[S_{\rm clebsch}=\int dtd^{3}x\ \left(\frac{H_{i}H^{i}}{2H}-pH+\frac{1}{2} \epsilon^{\mu\nu\rho\sigma}P_{\mu\nu}\tilde{F}_{\rho\sigma}\right). \tag{51}\]
Variation of this action with respect to \(\alpha\) and \(\beta\) yields
\[H\partial_{t}\alpha+H^{i}\partial_{i}\alpha=0\] \[H\partial_{t}\beta+H^{i}\partial_{i}\beta=0. \tag{52}\]
These equations express the familiar fact that the Clebsch scalars are material invariants. Varying the action with respect to \(P_{0i}\) gives
\[\vec{\nabla}\times\vec{v}=\vec{\nabla}\alpha\times\vec{\nabla}\beta, \tag{53}\]
which the correct relationship between the curl of the velocity/vorticity vector and the Clebsch scalars. Finally, varying with respect to \(P_{ij}\) gives the same terms as before, but now with the Clebsch contribution \(\epsilon^{ijk}(\partial_{t}\alpha\partial_{k}\beta-\partial_{k}\alpha \partial_{t}\beta)\). This term can be re-expressed as the \(\vec{v}\times\vec{\omega}\) term and the resulting field equation is again the Euler equation. Therefore, the Clebsch BF action has the advantage of reproducing the Euler equations without any constraints on the local helicity.
## IV Boundary terms, gauge invariance, and edge modes
We now consider the two gauge theory actions in the presence of a boundary, which leads to interesting new physical effects. In the dual fluid, this setting corresponds to a flow with a boundary. We first note that the variation of the actions has the following generic form
\[\delta S_{\rm total}=\int dtd^{3}x\ \left(\frac{\delta L}{\delta\psi}\delta \psi+\partial_{\mu}S^{\mu}\right), \tag{54}\]
where \(L\) is the Lagrangian, \(\psi=(A_{\mu},P_{\mu\nu})\) or \((\alpha,\beta,P_{\mu\nu})\). Setting the first term on the right-hand side equal to zero imposes the field equations, but to have a well-defined variational principle, the surface terms in \(S^{\mu}\) must also vanish. For (47) and a spatial boundary, the boundary terms are
\[\delta S_{\rm total,bdry}=\int dtd^{2}x\ \left(2n_{k}v_{j}\epsilon^{ijk} \delta P_{0i}-(p+\frac{1}{2}v^{2})n_{i}\epsilon^{ijk}\delta P_{jk}+2n_{i} \epsilon^{ijk}P_{0j}\delta A_{k}+n_{i}\epsilon^{ijk}P_{jk}\delta A_{0}\right), \tag{55}\]
where \(n^{i}\) is the normal to the boundary surface. In the case where the boundary is a plane at \(x=0\), then the boundary terms become
\[\delta S_{\rm total,bdry}=\int dtdydz\left(2\epsilon^{ab}v_{a}\delta P_{0b}-(p +\frac{1}{2}v^{2})\epsilon^{ab}\delta P_{ab}+2\epsilon^{ab}P_{0a}\delta A_{b} +\epsilon^{ab}P_{ab}\delta A_{0}\right), \tag{56}\]
where \(x^{a}=(y,z)\). One way to set these terms to zero is to fix \(P_{0a}=P_{xy}=0\), which would leave the 1-form gauge field free on the boundary. More generically one could fix the boundary values of the 2-form potential to be constants \(k_{a}\) and \(l\):
\[P_{0a} =k_{a}\] \[P_{yz} =l. \tag{57}\]
Then fixing \(2\epsilon^{ab}k_{a}A_{b}+lA_{0}=0\) on the boundary eliminates the remaining boundary contribution to the action.
For the Clebsch BF action (50) the boundary terms are the same, except that the contribution from variations of \(A_{\mu}\) is replaced by variations of the Clebsch scalars
\[\int dtd^{2}x\left(-2n_{i}\epsilon^{ijk}P_{0j}\partial_{k}\beta \ \delta\alpha-n_{i}\epsilon^{ijk}P_{jk}\partial_{t}\beta\ \delta\alpha+2n_{i}\epsilon^{ijk}P_{0j}\partial_{k}\alpha\ \delta\beta+n_{i}\epsilon^{ijk}P_{jk}\partial_{t}\alpha\ \delta\beta\right).\]
Setting \(P_{0a}=P_{yz}=0\) will again remove the boundary contribution, at the expense of leaving the Clebsch scalars free on the boundary. As in the other theory, one could also fix the boundary values of \(P_{\mu\nu}\) to be constants on the boundary and arrange for a combination of \(\alpha\) and \(\beta\) to be fixed.
Another issue that we alluded to earlier is that both theories are only gauge invariant up to a total derivative. Therefore, gauge invariance must be reconsidered in the presence of a boundary. Under the 2-form gauge transformation \(P_{\mu\nu}\to P_{\mu\nu}+\partial_{[\mu}\lambda_{\nu]}\) the action transforms as
\[S_{\rm total}\to S_{\rm total}+\int dtd^{2}x\left(n_{i}B^{i} \lambda_{0}+n_{i}\epsilon^{ijk}\lambda_{j}E_{k}\right). \tag{59}\]
The gauge variation of \(S_{\rm clebsch}\) is the same, with \(\vec{B}\) and \(\vec{E}\) expressed in terms of Clebsch variables. Hence, the gauge variation of both actions is zero when conductor-like boundary conditions are present, ie when the normal component of the magnetic field and the transverse components of the electric field are zero at the boundary. In fluid variables this implies
\[\vec{\omega}\cdot\vec{n}=0\] \[(\vec{v}\times\vec{\omega})\cdot\vec{t}=0. \tag{60}\]
These conditions require both the vorticity vector and the flow velocity to be tangent to the boundary surface. The boundary conditions we imposed above to eliminate the boundary term in the action are not enough to force the normal component of the magnetic field and the transverse electric field to vanish. If these conductor-like conditions do not hold, one could still save gauge invariance by requiring that 2-form gauge parameter satisfy \(\lambda_{0}=\lambda_{a}=0\) on the boundary. In addition, in the first theory the 1-form gauge transformations also must be restricted to the subset of transformations that preserve the boundary condition on \(A_{0}\) and \(A_{a}\). All these requirements reduce the amount of gauge freedom of the theories.
Reducing the gauge redundancies of a theory means that degrees of freedom that were previously pure gauge become physical on the boundary surface. Alternatively, one could introduce new fields on the boundary designed to restore the gauge invariance of the combined bulk plus boundary system. Either approach leads to the existence of edge modes, as was first noticed in Chern-Simons theories, where the boundary theory is a 1+1 dimensional chiral scalar field [27; 28; 29]. For BF theories a similar analysis was performed first in the context of field theory [30; 31] and then later in the study of edge states of 3d topological insulators [32; 33].
To study the edge states of our gauge theories, we first consider their symmetries, which can be obtained by computing the Noether charges associated with the gauge symmetries. In the absence of boundaries, the Noether charges for local symmetries are zero on-shell, but when a boundary is present, the charges become non-trivial integrals over the boundary surface, associated with gauge transformations that act as physical transformations on boundary degrees of freedom. Using the textbook formula for the Noether current of a local gauge symmetry, parametrized by \(\lambda\), \(Q=\int j^{0}d^{3}x\), yields generically
\[Q(\lambda)=\int d^{3}x\ C(\lambda)+\oint d^{2}x\ q(\lambda) \tag{61}\]
\(C\) represents the constraint equations and \(q\) is a boundary charge aspect. For (47) the charge associated with the 2-form symmetry is
\[Q^{(2)}_{\lambda_{i}}=\int d^{3}x\ \left(-\epsilon^{ijk}\partial_{j}\left( \frac{H_{i}}{H}\lambda_{k}\right)+\epsilon^{ijk}\partial_{j}\left(\frac{H_{i}}{H}\right) \lambda_{k}-\lambda_{k}B^{k}\right) \tag{62}\]
Imposing the constraint equation and Stokes theorem gives
\[Q^{(2)}_{\lambda_{i}}=\oint d^{2}x\ n_{i}\epsilon^{ijk}v_{j}\lambda_{k}. \tag{63}\]
Following the same procedure for the 1-form symmetry gives
\[Q_{\lambda}^{(1)}=\oint d^{2}x\ \lambda n_{k}\epsilon^{kij}P_{ij}. \tag{64}\]
Note that when \(\lambda=constant\), \(Q^{(1)}\) is equal to the integrated helicity inside the region of the flow
\[Q_{c}^{(1)}=\int d^{3}x\ \sigma, \tag{65}\]
reflecting the material invariance of this quantity. The infinite number of charges for generic time independent functions \(\lambda\) are a generalization of the Kelvin theorem for helicity. Here the \(\lambda\) represent "improper" gauge transformations. The fluid interpretation of \(Q^{(2)}\) is less clear; it appears to be a new type of conserved quantity.
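The Stokes-theorem step behind Eq. (65) is simply, for constant \(\lambda\),

\[Q_{c}^{(1)}=\lambda\oint d^{2}x\ n_{k}\epsilon^{kij}P_{ij}=\lambda\int d^{3}x\ \epsilon^{kij}\partial_{k}P_{ij}\propto\lambda\int d^{3}x\ H,\]

which equals the integrated local helicity (up to normalization) under the identification \(H=\sigma\).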
For the Clebsch BF action (50) the Noether charge \(Q^{(2)}\) associated with the 2-form gauge symmetry is the same, ie (63). The 1-form gauge symmetry is not present, instead there is an invariance under area preserving maps in the 2d space of Clebsch scalars (ie, symplectomorphisms), which preserve the vorticity 2-form current. For \(c_{a}=(\alpha,\beta)\)
\[c_{a}=M_{a}(c),\ \ \frac{\partial(M_{1},M_{2})}{\partial(\alpha,\beta)}=1. \tag{66}\]
The gauge connection acts as the canonical 1-form \(pdq\), while the vorticity is the canonical 2-form \(dp\wedge dq\). It would be interesting to flesh out the precise relationship between the 1-form gauge transformations on \(A_{\mu}\leftrightarrow pdq\) and the symplectomorphisms.
Finally, the algebra of the Noether charges can be determined using the Poisson bracket, e.g.
\[\{Q_{\lambda_{1}},Q_{\lambda_{2}}\}=\delta_{\lambda_{2}}Q_{ \lambda_{1}}. \tag{67}\]
The 1-form and 2-form charges commute with each other due to the gauge invariance of their charge aspects
\[\{Q_{\lambda_{1}}^{(1)},Q_{\lambda_{2}}^{(1)}\}=0\] \[\{Q_{\lambda_{i,1}}^{(2)},Q_{\lambda_{j,2}}^{(2)}\}=0. \tag{68}\]
However, in the explicit BF theory, due to the 2-form gauge dependence of \(Q_{\lambda}^{(1)}\), it follows that
\[\{Q_{\lambda}^{(1)},Q_{\lambda_{j}}^{(2)}\}=\oint d^{2}x\ \lambda n_{k} \epsilon^{kij}\partial_{i}\lambda_{j}. \tag{69}\]
The same charge algebra was found in [30; 31] for the case of pure topological BF theory. As in Chern-Simons theories, non-topological terms in the action do not modify the charge algebra [34].
Hence, we assume that the dynamics of the edge modes in (47) are controlled by the topological BF term. The simplest procedure is to fix a gauge where \(A_{0}=P_{0i}=0\). Solving the constraint equations of the pure BF theory yields \(A_{i}=\partial_{i}\phi\) and \(P_{ij}=\partial_{[i}\rho_{j]}\). Normally these are pure gauge degrees of freedom, but substituting this ansatz into the action yields a non-trivial boundary action [31; 35]
\[S_{\text{edge,BF}}=\int dtd^{2}x\ \left(\epsilon^{ij}\partial_{i}\rho_{j}( \partial_{t}\phi)+\partial_{i}\phi\epsilon^{ij}\partial_{t}\rho_{j}\right). \tag{70}\]
It has been argued that this edge dynamics appears as the self-dual sector of a theory of a free scalar plus a Maxwell field on the boundary. In terms of the fluid picture, our results suggest there are edge modes appearing when the 3d flow is subject to boundaries. However, further work is needed to understand the role of these modes in the fluid picture.
## V Discussion
In this paper we have constructed gauge theories with a BF term that describe the incompressible Euler equations in 3+1 dimensions. Both theories contain a 2-form gauge sector associated with local helicity. However, in one theory the BF term is explicit; there is a 1-form gauge invariance, and the vorticity is expressed in terms of the associated magnetic field. The drawback of this approach is that the local helicity is constrained by the field equations to vanish. In the other version, the BF term involves Clebsch scalars, which parametrize the vorticity current. In this case, local helicity is unconstrained. We argue that the Clebsch BF action is likely the proper description of 3d flows in general, coupling the local helicity described by the 2-form potential with the vorticity described by the Clebsch scalars. However, our results suggest that incompressible flows with zero local helicity may have a purely topological description in terms of BF theory. Finally, we also studied the theories in the presence of a boundary, computing boundary Noether charges and using past results on BF-type theories to postulate that there are physical degrees of freedom living on the boundary surface (i.e., edge modes).
One obvious area of investigation is whether the gauge theory picture can be extended to
include a viscosity term, thereby reproducing the Navier-Stokes equations. A viscous term contributes to the helicity transport equation as \(\nu\nabla^{2}\sigma\), which means that the Bianchi identity equation must be modified. This suggests that viscosity may break the gauge invariance of our theory.
On the other hand, the viscous terms in the Navier-Stokes equation are associated with the diffusion of vorticity and momentum, whose effect can be modeled by introducing a stochastic noise term into the equation for the map between Eulerian and Lagrangian variables [36; 37]. Physically, the noise introduces a randomness in the Lagrangian paths of the fluid parcels. The average of the stochastic system satisfies the Navier-Stokes equation.
Eqns. (23) and (26), which relate the Euler/Lagrangian maps and the gauge fields, indicate that treating particle paths as random also introduces randomness in the gauge field. It would be interesting to see if a functional integral version of our theory, with viscosity playing the role of \(\hbar\), can be used to describe the Navier-Stokes system, in terms of a partition function
\[Z=\int D[A]D[P]e^{-S_{\rm gauge}/\nu}. \tag{71}\]
The idea is that expectation values of gauge-invariant fluid observables would obey the Navier-Stokes equations. In addition, in the stochastic setting one can prove that a generalized version of the Kelvin theorem holds [38], meaning that currents are conserved in an averaged sense, with Ward-like identities, e.g.,
\[\langle\partial_{\mu}J^{\mu}\rangle=0\] \[\langle\partial_{\mu}J^{\mu\nu}\rangle=0. \tag{72}\]
A related issue is the role that gauge invariance plays in turbulent states, for example in the infinite Reynolds number limit where \(\nu\to 0\). In this limit there are singularities in the flow associated with vortex sheets and lines.
It would also be interesting to elucidate the role of the edge modes predicted by our theory in incompressible fluid flows with a boundary. For example, in 2d, we argued that lines of zero vorticity are natural boundaries in flows. The changing of the sign of the vorticity leads to a spontaneous breaking of time reversal symmetry and the presence of chiral modes propagating on the boundary surface [16; 39]. In 3d, zero vorticity lines or surfaces may have similar chiral modes. However, it may be the case that non-trivial edge modes exist
even in cases where time reversal symmetry is unbroken.
|
2303.05031 | CoralStyleCLIP: Co-optimized Region and Layer Selection for Image
Editing | Edit fidelity is a significant issue in open-world controllable generative
image editing. Recently, CLIP-based approaches have traded off simplicity to
alleviate these problems by introducing spatial attention in a handpicked layer
of a StyleGAN. In this paper, we propose CoralStyleCLIP, which incorporates a
multi-layer attention-guided blending strategy in the feature space of
StyleGAN2 for obtaining high-fidelity edits. We propose multiple forms of our
co-optimized region and layer selection strategy to demonstrate the variation
of time complexity with the quality of edits over different architectural
intricacies while preserving simplicity. We conduct extensive experimental
analysis and benchmark our method against state-of-the-art CLIP-based methods.
Our findings suggest that CoralStyleCLIP results in high-quality edits while
preserving the ease of use. | Ambareesh Revanur, Debraj Basu, Shradha Agrawal, Dhwanit Agarwal, Deepak Pai | 2023-03-09T04:35:03Z | http://arxiv.org/abs/2303.05031v1 | # CoralStyleCLIP: Co-optimized Region and Layer Selection for Image Editing
###### Abstract
Edit fidelity is a significant issue in open-world controllable generative image editing. Recently, CLIP-based approaches have traded off simplicity to alleviate these problems by introducing spatial attention in a handpicked layer of a StyleGAN. In this paper, we propose CoralStyleCLIP, which incorporates a multi-layer attention-guided blending strategy in the feature space of StyleGAN2 for obtaining high-fidelity edits. We propose multiple forms of our co-optimized region and layer selection strategy to demonstrate the variation of time complexity with the quality of edits over different architectural intricacies while preserving simplicity. We conduct extensive experimental analysis and benchmark our method against state-of-the-art CLIP-based methods. Our findings suggest that CoralStyleCLIP results in high-quality edits while preserving the ease of use.
## 1 Introduction
Controlling smooth semantic edits to photorealistic images [1, 43, 36, 5] synthesized by well-known Generative Adversarial Networks (GANs) [14, 19, 20] has become simplified with guidance from independently trained contrastive models such as CLIP [38]. Using natural language as a rich medium of instruction for open-world image synthesis [50, 51, 52, 41, 53, 54, 55, 40] and editing [12, 13, 25, 28, 30, 48] has addressed many drawbacks of previously proposed methods.
As first demonstrated by StyleCLIP [36], the requirements for large amounts of annotated data [26] and manual efforts [15, 47] were considerably alleviated. Furthermore, the range of possible edits that were achievable significantly improved [36]. The underlying theme of related approaches involves CLIP-driven exploration [16, 23, 36] of the intermediate disentangled _latent spaces_ of the GANs.
It is well understood by now that manipulating the latent code of a StyleGAN for aligning with a _text prompt_ can be computationally intense, as seen in StyleCLIP latent optimization [36], as well as the latent mapper methods [36]. This presents a trade-off between the complexity and quality of edits leveraged by StyleCLIP global directions [36] and StyleMC [23].
In addition, these methods often result in undesirable edits to unexpected regions of an image (see [16]), addressed to some extent by FEAT [16]. However, FEAT requires manual intervention, as described in Section 2, and involves significant training complexity of the order of hours1.
Footnote 1: With no official implementation available, we present comparisons with our reimplementation of FEAT denoted by FEAT\({}^{*}\) in this paper.
**Contributions.** In this paper, we propose CoralStyleCLIP, which addresses these challenges by combining the ease of use [36] with efficient [23] high fidelity edits [16] into our approach. In particular, we propose a novel strategy which, for a given text prompt, jointly learns both the appropriate direction of traversal in the latent space, as well as which spatial regions to edit in every layer of the StyleGAN2 [20] (see Figure 1, Figure 2) without any mediation.
Our approach overcomes the need for manual effort in selecting an appropriate layer for FEAT by incorporating multi-layer feature blending to enable the joint learning process. As a result, the edits are very accurate, rendering our method simple and effective.
The co-optimized regions and layers jointly learned with appropriate latent edits typically select earlier layers for enacting coarse edits, such as changes to shape and structure, whereas finer edits, such as changes to color and texture, are usually orchestrated through the later layers of the StyleGAN2.
To alleviate the time complexity, we implement this strategy for _segment selection_ (see Section 3.2), where we jointly learn a _global direction_[23, 36] in the \(\mathcal{W}^{+}\) space and limit the predicted areas of interest at every layer to segments from a pre-trained segmentation network. Doing so reduces the learning complexity significantly (see Table 1), albeit with potential pitfalls discussed in Section 4.3. We mitigate these pitfalls with a jointly trained _attention network_ where we relax the areas of interest at every layer to spatial masks predicted by the network (see Section 3.2). As a result, the training time increases from a few minutes to about an hour while improving the quality of edits compared to the _segment selection_ approach.
In summary, our contributions are as follows:
* We propose a novel multi-layer blending strategy that attends to features selectively at the appropriate StyleGAN layer with minimal hand-holding.
* A CORAL variant based on _segment selection_ demonstrates high edit quality at a fraction of the time cost.
* Through extensive empirical analysis, we find that CORAL outperforms recent state-of-the-art methods and is better equipped to handle complex text prompts.
## 2 Related Work
The use of generative models for high-quality image synthesis and manipulation has a rich history [8, 9, 18]. In particular, the disentangled latent spaces of StyleGAN provide robust interpretable controls for editing valuable semantic attributes of an image [3, 10, 43, 44, 45, 47, 15, 46, 47]. Desirable changes to attributes of interest were previously brought out by discovering the relevant channels [47] and curating principal components [15] either through manual inspection or otherwise driven by data-hungry attribute predictors.
StyleFlow [4] leverages normalizing flows to perform conditional exploration of a pre-trained StyleGAN for attribute-conditioned image sampling and editing. By learning to encode the rich local semantics of images into multi-dimensional latent spaces with spatial dimension, StyleMapGAN [21] demonstrates improved inversion quality and the benefits of spatially aware latent code interpolation between source and target images for editing purposes. The advent of CLIP [38] has re-ignited interest in open domain attribute conditioned synthesis of images [36, 37, 55]. Text-driven edits have considerably reduced both the time and effort required for editing images and extended the range of possible edits significantly [36], all the more with increased interest in diffusion models [39, 40, 31].
The disentangled nature of the latent spaces of StyleGAN has facilitated heuristics such as a fixed global direction in StyleCLIP [36] and, more recently, StyleMC [23]. For training efficiency, StyleMC performs CLIP-driven optimization on the image generated at a low-resolution layer
Figure 2: For achieving _beard_, CORAL selects appropriate regions in layers 5-10 for carrying out the required coarse edits in early layers and finer texture edits in later layers
of the StyleGAN. Unfortunately, this limits the range of edits to only those possible by manipulating latent codes at the earlier layers.
For ameliorating edits in unexpected regions of an image, strategies for blending latent features have been an emerging theme in many recent papers [7, 16, 17, 21]. [7, 21, 16] interpolate spatial features more explicitly. In contrast, StyleFusion [17] realizes similar objectives through blended latent code extracted using a fusion network that combines disjoint semantic attributes from multiple images into a single photorealistic image.
Our work is most closely related to [16, 23, 36]. FEAT [16] reduces undesirable edits by imposing sparsity in the number of spatial features modified by StyleCLIP at a manually selected layer \(l\) of the StyleGAN2. FEAT edits layers \(\leq l\) using a non-linear latent mapper, while the attention network emits a spatial mask for interpolating edited spatial features at layer \(l\) with original spatial features at the same layer (see Figure 3). At the cost of training time and convenience, FEAT achieves high-fidelity edits. If the blending layer is not carefully selected, the quality of the edits can degrade significantly, as shown in [16] and Figure 6. Furthermore, FEAT enacts inferior edits when presented with multi-faceted prompts (see Figure 7). In Suppl., we also discuss how CORAL is different from a multi-layer extension of FEAT.
Furthermore, we argue that the required edits for aligning with a given text prompt arise from multiple layers of the StyleGAN2, necessitating a multi-layer feature interpolation mechanism (see Section 3.3). Our method percolates meaningful edits from the current layer onto subsequent layers, with restrictions on the number of spatial edits customized for each layer. As a result, we can automatically select the correct layers and regions for editing an image.
To correctly identify the region of interest at every layer, we discuss a lightweight segment-selection scheme (see Section 3.2) and contrast this with an involved convolutional mask prediction model motivated by FEAT. Recently, SAM [34] accomplished superior GAN inversion at the cost of editability by leveraging different latent spaces of the StyleGAN2 in a spatially adaptive manner. However, the edits performed on the inverted latent codes continue to modify irrelevant image regions and could benefit from CORAL (see Section 3).
With a focus on convenience and fidelity, CoralStyleCLIP learns global directions at every layer of the StyleGAN2, as done in [23], and exhibits high-quality edits with a significant reduction in the training time and manual effort (see Table 1). Borrowing inspiration from [36], we also implement our co-optimized region and layer selection strategies for a non-linear mapper-based latent edit and demonstrate additional customized and high fidelity edits.
## 3 Approach
An image edit is often spatially localized to a specific region of interest. For example, edits corresponding to the _mohawk_ text prompt should affect only the hair region of the portrait image while preventing edits in other parts. In this work, we learn a latent edit vector and a soft binary mask at every layer of a StyleGAN2 to accurately edit the image according to the input text prompt. We achieve this by training them end-to-end while respecting the challenging but desirable minimal overall edit area constraint. Following a brief revisit to the StyleGAN architecture, we introduce two simple yet effective strategies to determine the region of interest given a text prompt. Finally, we introduce a novel multi-layer blending strategy that is vital for achieving high fidelity minimal edits.
### Background
StyleGAN2 [20] is a state-of-the-art model trained for generating high-resolution images, typically of size \(1024\times 1024\) or \(512\times 512\). The network consists of a mapper module that maps a random vector \(z\in\mathcal{Z}\), with \(z\sim\mathcal{N}(0,\mathbf{I})\), to a vector \(w\in\mathcal{W}\) via a multi-layer perceptron (MLP), and a generator module comprising 18 convolutional blocks.
Figure 3: Comparison of FEAT [16] with CORAL. In FEAT (left), the spatial features are blended at a carefully hand-picked layer \(l\). CORAL (right) performs multi-layer blending with custom edit regions per layer.
The \(\mathcal{W}^{+}\) space, first defined by [2], is a concatenation of 18 different \(w^{(l)}\) vectors where \(l\in\{1,2,\dots,18\}\). The \(w^{(l)}\) instance in \(\mathcal{W}^{+}\)-space is first transformed through a layer-specific affine operation to obtain the _style code_\(s^{(l)}\in\mathcal{S}\) at each layer of the generator module. The input to the generator module is a learned tensor of \(4\times 4\) resolution. It is gradually increased to a resolution of \(1024\times 1024\) as the input tensor is passed down through the layers of the generator.
We denote the constant input tensor as \(c\) and the feature obtained at a layer \(l\) as \(f^{(l)}\). Further, we denote the \(\mathcal{W}^{+}\) code at layer \(l\) as the \(w^{(l)}\) and a layer in generator module as \(\Phi^{(l)}\). Therefore, \(f^{(l)}\) can be expressed as \(f^{(l)}=\Phi^{(l)}(f^{(l-1)},w^{(l)})\), where \(l\in\{1,2,\cdots,18\}\), \(c=f^{(0)}\) and the generated image \(I=\sum_{l=1}^{18}RGB^{(l)}(f^{(l)})\).
In our work, we aim to find a latent vector \(\Delta^{(l)}\) in the \(\mathcal{W}^{+}\) such that the image generated by the latent code \(w^{(l)}+\Delta^{(l)}\) applied to every layer of generator results in an edited image \(I^{*}\). For simplicity, we denote \(f^{*}\) and \(w^{*}=w+\Delta\) as edited features and \(\mathcal{W}^{+}\) latent code, respectively. Therefore, we have \(f^{*(l)}=\Phi^{(l)}(f^{*(l-1)},w^{(l)}+\Delta^{(l)})\). A recent study showed that StyleGAN2 learns global attributes such as position in earlier layers, structural changes in middle layers, and appearance changes (e.g., color) in the final set of layers [19, 48]. However, determining the right set of layers for a given text prompt is challenging and has been explored only empirically in FEAT [16].
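To make the notation above concrete, a minimal sketch of the synthesis pass is given below; `blocks[l]` and `to_rgb[l]` are hypothetical wrappers around a pre-trained generator (not names from an official release), and indices are 0-based rather than the paper's 1-based convention.

```
def stylegan2_forward(c, w, blocks, to_rgb, rgb_layers):
    # f^(0) = c; f^(l) = Phi^(l)(f^(l-1), w^(l)) at every block.
    f, image = c, 0.0
    for l, block in enumerate(blocks):
        f = block(f, w[l])
        if l in rgb_layers:          # layers with an RGB skip head
            image = image + to_rgb[l](f)
    return image
```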
### Co-optimized region and layer selection (CORAL)
We aim to edit the image to match the text prompt with minimal changes. To this end, the first step is correctly identifying the region of interest. Further, given the diversity and richness of latent space at each layer in the generator, we posit that the edits to the image can come from multiple layers of the StyleGAN2 generator.
To address both requirements, we introduce CORAL, a co-optimized region and layer selection mechanism. In CORAL, we propose two simple-yet-effective approaches for learning a soft binary mask \(m^{(l)}\in[0,1]^{f^{(l)}_{dim}}\) at every layer of the generator module with the same height and width dimensions as the feature resolution at the given layer.
**CORAL based on segment-selection.** We can use any off-the-shelf pre-trained semantic segmentation network to determine the region of interest in this approach. Intuitively, existing image segmentation networks generally capture semantic parts of the image that we are interested in editing, such as eyes, mouth, and lips. Therefore in many cases, this problem can be posed as selecting the appropriate segments. To achieve this, we introduce a matrix \(e\) of dimension \(P\times 18\) where \(P\) is the number of classes predicted by the segmentation network. Each entry in the matrix \(e\) is in the range \([0,1]\), where 1 represents a confident segment selection for the given text prompt \(t\).
The matrix \(e\) is converted into a spatial mask \(m^{(l)}\) by masking the segments with the confidence values and resizing the segmentation map to the resolution of the feature maps at each layer. In the training phase, the parameters in the matrix \(e\) are trained after applying a sigmoid, and during inference, we apply a prompt-specific threshold \(\tau_{t}\) to the sigmoid. As depicted in Figure 4, the only trainable parameters in this pipeline are \(e\). Therefore, this can achieve desirable edits with high accuracy up to 8x faster than FEAT [16].
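A minimal sketch of this conversion follows, assuming a one-hot segmentation map `seg` of shape \((P,H,W)\) from the pre-trained network; the function and argument names are illustrative rather than from an official implementation.

```
import torch
import torch.nn.functional as F

def segment_selection_masks(seg, e_logits, feat_sizes, tau=None):
    # e_logits is the trainable P x 18 matrix; the sigmoid maps it to [0, 1].
    e = torch.sigmoid(e_logits)
    if tau is not None:
        # One reading of the prompt-specific threshold at inference:
        # binarize the segment selection.
        e = (e >= tau).float()
    masks = []
    for l, (h, w) in enumerate(feat_sizes):
        # Weight each segment by its per-layer confidence and sum over classes.
        m = (e[:, l, None, None] * seg).sum(dim=0, keepdim=True)   # (1, H, W)
        # Resize to the feature resolution of layer l.
        m = F.interpolate(m[None], size=(h, w), mode="bilinear",
                          align_corners=False)[0]
        masks.append(m.clamp(0.0, 1.0))
    return masks
```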
**CORAL based on convolutional attention network.** Segment-selection-based CORAL is limited by the segments available in the pre-trained network. As shown in results Figure 6-F, the segment-selection method is prone to over-selection or under-selection of the region of interest. To overcome this limitation, we implement an attention network that directly predicts the masks \(m^{(l)}\) at every layer of the generator as shown in Figure 4. In this architecture, we obtain a mask with the exact resolution as that of the corresponding feature in the layer. Unlike FEAT, we hypothesize that the mask at a layer \(l\) should depend only on the features \(f^{(l)}\) available at the current layer since we are interested in predicting the mask at every layer.
Despite incurring higher training costs from having to learn the convolutional layers, the masks produced with this approach are smoother and avoid over/under-selection issues by accurately predicting the correct region of interest.
Figure 4: Overview of CoralStyleCLIP. The only trainable components are the attention module and the mapper/global direction. Two different variants of the attention module are summarized as segment selection and convolutional attention network (see Section 3.2 for more details).
### Multi-layer feedforwarded feature blending
CORAL produces soft binary masks \(m^{(l)}\) at every layer of the generator module. These masks blend features such that the features corresponding to the confident regions are borrowed from features \(f^{*}\) generated with updated style code, and on similar lines, features from non-confident regions are borrowed from original features \(f\) of the unedited image. This ensures that we only modify the regions corresponding to the text prompt and prevents modifications of non-masked regions. Unfortunately, a 0-mask (completely black mask) at any layer would throw away any updated feature information from the previous layers and would propagate the original features \(f\) from that point onward.
To prevent this bottleneck, we design a novel multi-layer feature blending strategy (see Figure 3) that utilizes a parallel pathway in which the feature obtained from layer \(l-1\) is passed through the generator block \(\Phi\) twice: once with the original latent code \(w\) and once with the updated latent code \(w+\Delta\), to obtain two feature sets for blending. The former feature can be viewed as a feature that is not edited but has all the information propagated from previous layers. The multi-layer blending strategy expressed in (1) ensures that no feature information is lost along the way.
\[\widehat{f^{*(l)}} =\Phi^{(l)}(f^{*(l-1)},w^{(l)}+\Delta^{(l)})\] \[\widehat{f^{(l)}} =\Phi^{(l)}(f^{*(l-1)},w^{(l)})\] \[f^{*(l)} =m^{(l)}\odot\widehat{f^{*(l)}}+(1-m^{(l)})\odot\widehat{f^{(l)}} \tag{1}\]
Intuitively, when the mask is completely blank (which is often desirable to keep the edits to a minimum), the features are feedforwarded simply with edits from previous layers.
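A minimal sketch of (1) as a layer-by-layer loop follows; as before, `blocks[l]` and `to_rgb[l]` are hypothetical wrappers around the pre-trained generator, and the per-layer masks broadcast over the channel dimension.

```
def blended_forward(c, w, delta, masks, blocks, to_rgb, rgb_layers):
    f_star, image = c, 0.0
    for l, block in enumerate(blocks):
        f_edit = block(f_star, w[l] + delta[l])   # edited branch of (1)
        f_keep = block(f_star, w[l])              # unedited-code branch of (1)
        # Keep new edits only inside the mask; edits from earlier layers
        # still propagate, since both branches start from f_star.
        f_star = masks[l] * f_edit + (1.0 - masks[l]) * f_keep
        if l in rgb_layers:
            image = image + to_rgb[l](f_star)
    return image
```

Note that with all-zero masks this reduces to the original synthesis pass, while with all-one masks it reduces to synthesis from \(w+\Delta\), matching the construction of \(\widetilde{I}\) in Section 3.5.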
### Types of latent edits
For a given convolutional layer \(l\), when the learned latent edit \(\|\Delta^{(l)}\|>0\), the corresponding feature \(\widehat{f^{*(l)}}\) in (1) incorporates attributes which are desirable for semantic alignment with the given text prompt. The mask \(m^{(l)}\) counteracts possible undesirable artifacts through a region-of-interest-aware interpolation strategy.
The \(\Delta^{(l)}\) by itself is, however, well studied in [23, 36], both of which identify a single global direction that can semantically edit images for a given text prompt. Such a simple parameterization does result in accurate edits for simple text prompts, as discussed in [36].
Our findings suggest that training time is significantly reduced for prompts where a global direction can affect desirable changes. However, a more involved image-dependent non-linear mapper model \(g(\cdot)\) as a function of \(w^{(l)}\) at every layer can affect such changes with higher precision.
Therefore, we implemented CORAL for both versions of latent edits: **(i)**_global direction_; **(ii)**_latent mapper_. The latent mapper \(g(\cdot)\) is an MLP-based model along the lines of [36, Section 5], where the \(w^{(l)}\) are split into three groups: coarse (\(l\) in 1 to 4), medium (\(l\) in 5 to 8) and fine (\(l\) in 9 to 18); and each of these groups is processed by a different MLP2. Our multi-layer feature blending mechanism is independent of the parametrization of the latent edit, which is jointly learned with the mask \(m^{(l)}\) predictors.
Footnote 2: Unlike in [36], we remove the LeakyReLU activation after the final fully connected layer, as it empirically expedites the optimization.
### Loss formulation
We now describe the training strategy and loss formulation of our proposed methods. We are given a text prompt \(t\) and an image with corresponding \(\mathcal{W}^{+}\) code \(w\). The goal is to find the right region of interest using a CORAL framework and to determine the latent vector that drives the image edit. The only trainable components in our approach are the latent vector \(\Delta\) and the parameters in the CORAL framework. In the case of segment selection, the only trainable component in CORAL is the matrix \(e\), and in the case of the convolutional attention network, the Conv layers in the attention network are trainable.
**CLIP loss**: The first key loss component is the CLIP loss originally proposed in StyleCLIP [36]. The idea is to use the pre-trained CLIP model to edit the latent vector such that the embedding of the image \(I^{*}\) produced aligns with the embedding of the text prompt \(t\) in CLIP's latent space.
In addition we also synthesize the image \(\widetilde{I}\), by setting \(m^{(l)}_{i,j}=1\ \forall i,j,l\) in (1) and compute its CLIP loss. To understand this, we can envision \(I^{*}\) as a sophisticated non-linear interpolation between \(I_{0}\) and \(\widetilde{I}\) using strategies given in (1). Here \(I_{0}\) is the original unedited image.
By simply imposing a CLIP loss on \(I^{*}\), \(\widetilde{I}\) remains unrestricted and can potentially contain undesirable artifacts, as long as \(I^{*}\) aligns with the text prompt \(t\). However, our region selectors in Section 3.2 derive their supervision from \(\widetilde{I}\) and might also learn to include these artifacts. Our final semantic alignment loss is as follows:
\[\mathcal{L}_{clip}=\frac{1}{2}\left(D_{\text{CLIP}}(I^{*},t)+D_{\text{CLIP}}( \widetilde{I},t)\right) \tag{2}\]
\(\mathbf{L}_{2}\) **loss**: Controlled perturbations to the latent spaces of a StyleGAN2 can result in smooth semantic changes to the generated image. As a result, we optimize the squared Euclidean norm of \(\Delta\), i.e., \(\mathcal{L}_{l_{2}}=\|\Delta\|_{2}^{2}\), in the \(\mathcal{W}^{+}\) space to prefer latent edits with smaller \(l_{2}\) norms.
**ID loss**: In order to prevent changes to the identity of a person during image manipulation, we impose an ID loss \(\mathcal{L}_{id}=1-\langle\mathcal{R}(I^{*}),\mathcal{R}(I)\rangle\) using cosine similarity between the embeddings in the latent space of a pre-trained ArcFace network \(\mathcal{R}\)[11, 23, 36, 42].
**Minimal edit-area constraint**: We encourage the network to find an edit with changes to compact image areas. In the
case of segment selection, this is achieved by penalizing the CORAL matrix \(e\) as follows:
\[\mathcal{L}_{area}^{ss}=\sum_{i,j}e_{i,j} \tag{3}\]
In the case of a convolutional attention network, this is achieved by imposing the minimal edit constraint directly on the masks \(m\) as follows:
\[\mathcal{L}_{area}^{can}=\sum_{l}n_{l}\Big{(}\sum_{i,j}m_{i,j}^{(l)}\Big{)} \tag{4}\]
where \(n_{l}\) is a normalizing constant defined per layer to account for the growing feature dimensions as the feature passes through the StyleGAN2 generator module.
**Smoothness loss**: In the case of the convolutional attention network, it would be desirable to predict a smooth mask. This is achieved by imposing a total variation loss [16].
\[\mathcal{L}_{tv}=\sum_{i,j,l}\|m_{i,j}^{(l)}-m_{i+1,j}^{(l)}\|_{2}^{2}+\sum_{i,j,l}\|m_{i,j}^{(l)}-m_{i,j+1}^{(l)}\|_{2}^{2} \tag{5}\]
In summary, the loss formulations for the segment selection and convolutional attention mechanisms are as follows:
\[\mathcal{L}_{ss} =\mathcal{L}_{clip}+\lambda_{l_{2}}\mathcal{L}_{l_{2}}+\lambda_{ id}\mathcal{L}_{id}+\lambda_{area}\mathcal{L}_{area}^{ss} \tag{6}\] \[\mathcal{L}_{can} =\mathcal{L}_{clip}+\lambda_{l_{2}}\mathcal{L}_{l_{2}}+\lambda_{ id}\mathcal{L}_{id}+\lambda_{area}\mathcal{L}_{area}^{can}+\lambda_{tv} \mathcal{L}_{tv} \tag{7}\]
Both the CORAL module and the latent editor are optimized in an end-to-end fashion using the above losses.
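A schematic of how the terms in (7) could be assembled is sketched below; `clip_dist`, `arcface`, the per-layer normalizers `n`, and the loss weights are assumed to come from the surrounding pipeline, and all names are ours rather than from an official implementation.

```
def coral_attention_loss(I_star, I_tilde, I0, t, masks, delta,
                         clip_dist, arcface, n, lam):
    l_clip = 0.5 * (clip_dist(I_star, t) + clip_dist(I_tilde, t))    # Eq. (2)
    l_l2 = (delta ** 2).sum()                                        # L2 loss
    l_id = 1 - (arcface(I_star) * arcface(I0)).sum(-1).mean()        # ID loss
    l_area = sum(n[l] * m.sum() for l, m in enumerate(masks))        # Eq. (4)
    l_tv = sum(((m[..., 1:, :] - m[..., :-1, :]) ** 2).sum()
               + ((m[..., :, 1:] - m[..., :, :-1]) ** 2).sum()
               for m in masks)                                       # Eq. (5)
    return (l_clip + lam["l2"] * l_l2 + lam["id"] * l_id
            + lam["area"] * l_area + lam["tv"] * l_tv)
```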
## 4 Experiments
We evaluate CORAL mainly in the context of human faces and demonstrate high-quality edits to photo-realistic faces of size \(1024\times 1024\) generated by a StyleGAN2 pre-trained on the FFHQ dataset [19]. We present additional results on the _sketch_ and _pixar_ domains as well as the Cars dataset [24] in Suppl. For both variants of CORAL in Section 3.2, we compare edits from the _global direction_ and the _latent mapper_ of Section 3.4. All hyperparameter configurations for (6) and (7) are provided in Suppl.
### Training and inference
The loss functions corresponding to the two different variants of CORAL are given by (6) and (7). Our experiments were conducted on one NVIDIA Tesla P40 24 GB GPU with a batch size of 3. The latent editor and CORAL modules are jointly optimized using an Adam optimizer [22] while keeping the StyleGAN2 fixed.
For a given text prompt \(t\), a data point is given by a randomly sampled standard normal vector \(z\sim\mathcal{Z}\) space, and the maximum number of iterations is set to 20,000. However, in Table 1, we note that the training time required for achieving the desired quality of edit increases as we switch from segment-selection to a convolutional attention network, the same as in going from global direction edits to training a latent mapper.
Furthermore, during inference, we limit the automatically selected regions for editing by setting \(m^{(l)}\gets m^{(l)}\odot\mathbf{1}\{m^{(l)}\geq\tau\}\) where \(\tau\) is typically 0.85. For applying desirable edits and reversing them (see Figure 6-G), we have a multiplying factor \(\alpha\in[-1.5,1.5]\) for the edit direction \(\Delta\).
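In code, this inference-time post-processing amounts to the following sketch (tensor shapes follow the attention-network variant):

```
def threshold_mask(m, tau=0.85):
    return m * (m >= tau).float()    # m <- m . 1{m >= tau}

def apply_edit(w, delta, alpha):
    # alpha in [-1.5, 1.5]; negative values reverse the edit direction.
    return w + alpha * delta
```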
Out of the 18 convolutional blocks and the corresponding \(w\) code per layer, our CORAL strategy and the latent edits, as well as edits from our baselines, are only performed on the first 13 layers, which are known to span coarse and fine controls over diverse attributes [47] such as expressions, age, style and color of facial hair, and eyes, among others.
**Segment selection:** Based on ideas from [33], a pre-trained
Figure 5: Comparison of variants of CORAL differing in complexity with closely related FEAT\({}^{*}\)[16], StyleCLIP mapper method [36] and StyleMC [23]
mixture model is used for performing unsupervised semantic segmentation of the StyleGAN2 generated images into 5 classes per pixel. This model is then used to determine the region of interest with CORAL based on segment selection. In Figure 6-E, we also compare with CORAL for a weakly supervised 34 class DatasetGAN [54] network, trained on the features of StyleGAN2 network using few shot labels.
**Attention network:** At each convolutional block \(l\in[1,2,\dots,13]\) of the StyleGAN2, the attention network first applies 32 different \(1\times 1\) convolutional filters upon the spatial features \(f^{(l)}\) to reduce the number of channels to 32, followed by a ReLU [29] activation, after which another \(1\times 1\) convolutional layer and a sigmoid activation are applied to obtain \(m^{(l)}\). We set \(n_{l}=\nicefrac{{1}}{{\text{size}[l]}}\) in (4), where \(\text{size}[l]\) is given based on the height and width of \(f^{(l)}\); for example, if the resolution is \(32\times 32\), then \(n_{l}=\nicefrac{{1}}{{32}}\).
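This module is small enough to sketch directly; a plausible PyTorch rendition of the description above (the module name is ours):

```
import torch.nn as nn

class CoralAttention(nn.Module):
    def __init__(self, in_channels, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),   # one mask channel
            nn.Sigmoid(),                          # m^(l) in [0, 1]
        )

    def forward(self, f):
        # f: (B, C_l, H_l, W_l) -> mask of shape (B, 1, H_l, W_l)
        return self.net(f)
```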
Figure 6: Each column in figures A to D demonstrates a text-driven edit on an input image along with the corresponding layers and regions selected. As a limitation of segment selection, we observe over-selection of the region of edit in figure F, which is absent in E. Figure G compares edits along both the positive and negative direction where we observe intuitive differences between removal and application of _makeup_, _happy_ vs. _unhappy_ and _curly_ vs. _smooth hair_. Finally, Figure H demonstrates the edit regions selected by CORAL across different layers of the StyleGAN2 for a complex prompt.
Figure 7: Comparison of CORAL with FEAT\({}^{*}_{13}\) for multi-faceted prompts.
### Evaluation
Our method is most closely related to StyleCLIP [36], StyleMC [23] and FEAT [16]. For a comparison with CORAL, we run the official implementation of the latent mapper technique of StyleCLIP, as well as a re-implementation of StyleMC3 which optimizes a single global direction across multiple images, only for layers of the StyleGAN2 until resolution \(256\times 256\) (see [23]).
Footnote 3: as per Section 3.2 of their paper
Without an official implementation of FEAT, we evaluate our re-implementation of FEAT denoted by \(\text{FEAT}^{*}\) with \(l\in\{7,13\}\). We maintain equivalent settings in the design of the latent mapper and attention network and only intend to compare the single-layer FEAT-style blending with our multi-layer feedforward blending (see Figure 3).
### Results
**Merits:** In Figure 5, we observe that both the StyleCLIP mapper method and StyleMC result in undesirable edits, such as irrelevant edits to the background. StyleCLIP also reduces the age in the first row and affects the neck region. In the fourth row, we see that in addition to applying the prompt _surprised_, it discards the white shirt. StyleMC affects the complexion, facial expression, and hair color in the first three rows. As also seen in [16], we find that for finer edits (row 1), FEAT-style blending at layer 13 (\(\text{FEAT}^{*}_{13}\)) is preferable, just as \(\text{FEAT}^{*}_{7}\) is for coarse edits (rows 2-4). We find that _blue eyes_ results in unwanted edits when blended at \(l=7\), and so does _mohawk hairstyle_ at \(l=13\).
CORAL, however (last four columns in Figure 5), only affects the relevant regions of interest, which would be the hair region for _long curly hair_ and _mohawk hairstyle_, the eyes and mouth for _blue eyes_ as well as _surprised_. These traits persist in Figure 1, and Figure 6-A to D wherein the edits are incorporated such that the editing area is minimal and is limited to only the relevant layers. CORAL learns the layers and regions to edit automatically with no domain knowledge or repeated trials. The edits are highly accurate. For example, the prompt _mustache_ does not also affect the beard, as is apparent from the corresponding masks.
Under the minimality constraints given by (3) and (4), we observe that for enabling finer edits such as _blue eyes_ and _purple hair_, only the later, higher-resolution layers (typically \(l\geq 8\)) are selected, whereas for coarser structural edits, the earlier, smaller layers (typically \(l\leq 8\)) are automatically selected. We clearly see that when CORAL is trained for complex multi-faceted prompts such as _curly hair and lipstick_ (see Figure 7 and Figure 6-H), the hair edits come from earlier layers whereas the lip edits come from the last layers. Furthermore, for such prompts, we found that FEAT blending fails to preserve _realism_ by introducing noise artifacts (see the example for \(\text{FEAT}^{*}_{13}\) under _makeup and afro hairstyle_ in Figure 7). This is also seen in Figure 5 for _mohawk_ using \(\text{FEAT}^{*}_{13}\).
From Table 1, we see that while the Clean-FID [35] of all our edits remains within acceptable bounds of the initially generated distribution, the time required to train CORAL to a desirable edit quality increases with the complexity of the region, layer selector, and the latent editor combined, from method 1 to 4. Segment-selection-based CORAL is significantly faster to train than the attention network.
We also observe that, on average, segment selection has a higher FID than attention network. Along similar lines, _global_ edits have a higher FID than _latent mapper_, except for _surprised_, which we attribute to _global_ edits predominantly affecting the eyes for this prompt, even for StyleMC, unlike the mapper method which also opens up the mouth.
**Limitations:** The segment-selection-based approach trains at a fraction of the time taken by its counterparts, as seen in Table 1. However, the defined segments of a pre-trained segmentation model can affect performance. For example, in Figure 6-F, our semantic segmentation model combines all the eye and mouth regions into a single semantic segment. As a result, the text prompt _lipstick_ brightens the skin tone and removes wrinkles from around the eyes. Alternatively, in Figure 6-E, a different segmentation network with dedicated classes for lips overcomes this issue.
We also note that the quality of the mustache is superior in Figure 6-B compared to A. It turns out that unlike our non-linear mapper which succeeds, the _global_ edits result in black coloration in the _mustache_ region in many examples.
## 5 Conclusion
CoralStyleCLIP leverages StyleGAN2 and CLIP models to co-optimize region and layer selection for performing high-fidelity text-driven edits on photo-realistic generated images. We demonstrate the efficacy of our generic multi-layer feature blending strategy across varying complexities of the latent editors and region selectors, addressing limitations regarding manual intervention, training complexity, and over- and under-selection of regions along the way. The CORAL strategy can also enhance interactive editing experience by utilizing the predicted masks at each layer.
\begin{table}
\begin{tabular}{|c||c|c|c|c|c|c|} \hline Method & \multicolumn{5}{c|}{Clean-FID (\(\downarrow\))} & Avg. Time \\ \cline{2-6} & T1 & T2 & T3 & T4 & Avg. & \\ \hline 1 & 4.23 & 4.42 & 6.73 & 6.05 & 5.35 & 15min \\ 2 & 3.75 & 9.28 & 1.75 & 2.73 & 4.38 & 42min \\ 3 & 2.22 & 2.26 & 3.08 & 7.20 & 3.69 & 1.2hr \\ 4 & 1.85 & 6.38 & 1.38 & 5.29 & 3.73 & 2hr \\ \hline 5 & 6.98 & 18.14 & 5.40 & 22.50 & 13.26 & 1.5hr \\ 6 & 2.93 & 25.12 & 22.15 & 11.30 & 15.38 & 20s \\ 7 & 2.51 & 9.46 & 1.93 & 3.01 & 4.23 & 1.8hr \\ \hline \end{tabular}
\end{table}
Table 1: Average Clean-FID [35] and training time to desirable quality. Text-prompt legend: **T1**) Happy; **T2**) Surprised; **T3**) Blue eyes; **T4**) Mohawk hairstyle. Method legend: **1**) SS Global; **2**) SS Mapper; **3**) AttnNet-Global; **4**) AttnNet-Mapper; **5**) StyleCLIP; **6**) StyleMC; **7**) FEAT\({}^{*}\)
## Appendix A Notation
\(X\sim\mathcal{N}(\mu,\Sigma)\) denotes a Gaussian random vector \(X\) with mean \(\mu\) and covariance \(\Sigma\), and \(\mathbf{I}\) denotes the identity matrix. Out of the four well known latent spaces of the StyleGAN2 [20], denoted by \(\mathcal{Z},\mathcal{W},\mathcal{W}^{+}\) and \(\mathcal{S}\) or the _StyleSpace_, CORAL extensively utilizes the \(\mathcal{W}^{+}\) space. MLP abbreviates multi-layer perceptron in this paper.
For simplicity, \(f^{(l)}\) is used to denote original features while \(f^{*(l)}\) denotes edited features at each convolutional layer of StyleGAN2. Correspondingly, \(I\) denotes the original image and \(I^{*}\) denotes the edited image. \(\langle\mathbf{x},\mathbf{y}\rangle\) represents the dot product between vectors \(\mathbf{x}\) and \(\mathbf{y}\). \(\|\mathbf{x}\|_{2}\) is the Euclidean norm of vector \(\mathbf{x}\). For any score, \(\uparrow\) is used to denote that a higher value is more desirable. The definition for \(\downarrow\) follows along similar lines.
## Appendix B Pseudocode for Section 3
**StyleGAN2 (Section 3.1).** We will first present an overview of the StyleGAN2 [20] architecture which can be modularized into three parts.
1. Mapper \(\underset{\mathcal{Z}\to\mathcal{W}^{+}}{\text{MLP}}(\cdot)\) from \(z\in\mathbb{R}^{d}\) in \(\mathcal{Z}\) space to \(\mathbf{w}\coloneqq[w^{(1)},w^{(2)},\dots,w^{(18)}]\in\mathbb{R}^{18\times d}\) in \(\mathcal{W}^{+}\) space,
2. 18 learned convolutional blocks \(\Phi^{(l)}\) where \(l\in\{1,2,\dots,18\}\), used as \(f^{(l)}=\Phi^{(l)}(f^{(l-1)},w^{(l)})\) emitting features \(f^{(l)}\in\mathbb{R}^{H_{l}\times W_{l}\times d_{l}}\) as outputs. Here \(H_{l}\times W_{l}\) is the resolution at which the features are generated at layer \(l\). In addition, a fixed pre-trained tensor \(f^{(0)}\coloneqq c\in\mathbb{R}^{4\times 4}\) is learned when training the StyleGAN2 backbone network on a dataset.
The progressive nature of StyleGAN2 architecture implies that \(H_{l}\leq H_{l+1}\) and \(W_{l}\leq W_{l+1}\). In our experiments, we also have \(H_{l}=W_{l}\). Furthermore, layers with smaller \(l\) synthesize coarser attributes, while the latter layers are seen to control finer attributes, as evident in multiple figures in our paper.
3. RGB image constructed as \(I=\sum_{l\in\widetilde{L}}RGB^{(l)}(f^{(l)})\) where \(I\in\mathbb{R}^{H\times W\times 3}\) and \(\widetilde{L}\coloneqq\{2,4,6,\dots,18\}\).
Given the latent code \(z\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) for a particular image \(I\), we can obtain the corresponding \(w^{(l)}\) vectors by
\[[w^{(1)},w^{(2)},\dots,w^{(18)}]=\underset{\mathcal{Z}\to\mathcal{W}^{+}}{ \text{MLP}}(z)\]
after which the forward pass of the StyleGAN2 generator is given by Algorithm 1.
```
1:Input\(\{w^{(l)}\}_{l=1}^{18}\in\mathcal{W}^{+}\) space Output Generated image \(I\), features \(\{f^{(l)}\}_{l=1}^{18}\) at every layer
2:functionForward(\(\mathbf{w}\))
3: Set \(f^{(0)}=c\)
4:for\(l\in\{1,2,\dots,18\}\)do
5:\(f^{(l)}=\Phi^{(l)}(f^{(l-1)},w^{(l)})\)
6:if\(l\in\widetilde{L}\)then
7:\(I^{(l)}=RGB^{(l)}(f^{(l)})\)
8:endif
9:endfor
10:\(I=\sum_{l\in\widetilde{L}}I^{(l)}\)
11:return\(I\)
12:endfunction
```
**Algorithm 1** StyleGAN2 forward pass
**Input**\(\mathbf{w_{1}},\mathbf{w_{2}}\in\mathcal{W}^{+}\), blending masks \(m^{(l)}\in[0,1]^{H_{l}\times W_{l}}\), \(l\in\{1,2,\dots,18\}\) **Output** Generated image \(I^{*}\)
```
1:functionBlendedForward(\(\mathbf{w_{1}},\mathbf{w_{2}},\{m^{(l)}\}_{l=1}^{18}\))
2:\(\Delta^{(l)}=w_{2}^{(l)}-w_{1}^{(l)}\)
3: Set \(f^{*(0)}=c\)
4:for\(l\in\{1,2,\dots,18\}\)do
5:\(\widehat{f^{*(l)}}=\Phi^{(l)}(f^{*(l-1)},w_{1}^{(l)}+\Delta^{(l)})\)
6:\(\widehat{f^{(l)}}=\Phi^{(l)}(f^{*(l-1)},w_{1}^{(l)})\)
7:\(f^{*(l)}=m^{(l)}\odot\widehat{f^{*(l)}}+(1-m^{(l)})\odot\widehat{f^{(l)}}\)
8:if\(l\in\widetilde{L}\)then
9:\(I^{*(l)}=RGB^{(l)}(f^{*(l)})\)
10:endif
11:endfor
12:\(I^{*}=\sum_{l\in\widetilde{L}}I^{*(l)}\)
13:return\(I^{*}\)
14:endfunction
```
**Algorithm 2** StyleGAN2 blended forward pass
In the case of segment selection, the masks \(m^{(l)}\) are derived from the semantic segments predicted by the pre-trained segmentation network. In the case of the attention network, we employ a convolutional network at each layer of StyleGAN2. More details about the network architecture are described in Appendix D.
The function \(g(\mathbf{w})\) is used to represent the latent edit direction which could either be a _global direction_ or an output from a non-linear _latent mapper_. Finally, the only parameters optimized for minimizing the loss \(\mathcal{L}\) are \(e\) in segment selection, the parameters of the convolutional attention network for CORAL based on attention network, and finally, those of the latent edit predictor \(g(\cdot)\) or \(\Delta\) in both variants of CORAL.
## Appendix C CLIP loss for semantic alignment
In Figure 8, we see that the additional \(D_{\text{CLIP}}(\widetilde{I},t)\) loss in (2) is essential for obtaining high-quality edits. In particular, for the text prompt _blue eyes_, we see that without this loss, there are unwanted white patches near the chin in the first row and second column, and the expression of the face is also affected. In the second row, the glasses are removed, and the complexion becomes fairer, thus affecting unrelated attributes.
With \(D_{\text{CLIP}}(\widetilde{I},t)\), however, the edits are more precise, and only the eye region is affected. This is happening because the image corresponding to \(\mathbf{w_{2}}\) in Algorithm 2 is also explicitly driven to be semantically aligned with the text prompt \(t\) and therefore provides a better reference image for guiding the interpolation.
## Appendix D Architecture diagram
In Figure 9, we present the architecture diagram of CoralStyleCLIP with all components. In this paper, we demonstrated CORAL with four different variants. We demonstrated two approaches for predicting CORAL masks - segment selection and convolutional attention network. We also demonstrated results on two different variations of the latent direction - global direction \(\Delta\) and latent mapper network \(g(\cdot)\). Therefore, in total, we have four combinations of CORAL variants. For segment selection, we have a matrix \(e\), which is used to modulate the weights of each image segment. In the case of the Convolutional Attention Network, we employ a CNN at each layer of the StyleGAN network. Architecture details of the network are mentioned in Section 4 of the main paper.
**Latent Mapper**: As also discussed in Section 3.4, the latent mapper \(g(\cdot)\) is an MLP-based model along the lines of [36, Section 5], where the \(w^{(l)}\) are split into three groups: coarse (\(l\) in 1 to 4), medium (\(l\) in 5 to 8) and fine (\(l\) in 9 to 18); and each of these groups is processed by a different MLP. Our latent mapper network consists of four BiEqual linear layers followed by a multi-layer perceptron. Each BiEqual layer consists of two MLPs followed by a LeakyReLU [49] activation function and a differencing operation (Figure 9).
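One plausible reading of the BiEqual layer described above is sketched next: two parallel fully connected maps whose LeakyReLU-activated outputs are differenced. The exact wiring may differ in the authors' implementation, so this is an assumption for illustration only.

```
import torch.nn as nn

class BiEqualLinear(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.fc_a = nn.Linear(dim, dim)
        self.fc_b = nn.Linear(dim, dim)
        self.act = nn.LeakyReLU(0.2)

    def forward(self, w):
        # Difference of two activated linear branches.
        return self.act(self.fc_a(w)) - self.act(self.fc_b(w))
```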
Figure 8: Results with and without additional CLIP loss
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & ID (\(\uparrow\)) & LPIPS (\(\downarrow\)) & MS-SSIM (\(\uparrow\)) & \(L_{2}\) (\(\downarrow\)) \\ \hline SS Global & 0.776 & 0.0160 & 0.972 & 0.0029 \\ SS Mapper & 0.799 & 0.0082 & 0.969 & 0.0023 \\ AttnNet-Global & 0.868 & 0.0067 & 0.988 & 0.0015 \\ AttnNet-Mapper & 0.896 & 0.0060 & 0.989 & 0.0012 \\ \hline StyleCLIP & 0.741 & 0.0856 & 0.871 & 0.0265 \\ StyleMC & 0.522 & 0.1670 & 0.596 & 0.2550 \\ FEAT\({}^{*}\) & 0.904 & 0.0017 & 0.997 & 0.0004 \\ \hline \end{tabular}
\end{table}
Table 2: Metrics for Text Prompt - _Blue eyes_
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & ID (\(\uparrow\)) & LPIPS (\(\downarrow\)) & MS-SSIM (\(\uparrow\)) & \(L_{2}\) (\(\downarrow\)) \\ \hline SS Global & 0.633 & 0.0313 & 0.935 & 0.0080 \\ SS Mapper & 0.651 & 0.0308 & 0.933 & 0.0084 \\ AttnNet-Global & 0.830 & 0.0136 & 0.961 & 0.0050 \\ AttnNet-Mapper & 0.847 & 0.0155 & 0.959 & 0.0053 \\ \hline StyleCLIP & 0.644 & 0.0904 & 0.835 & 0.0301 \\ StyleMC & 0.821 & 0.0244 & 0.940 & 0.0080 \\ FEAT\({}^{*}\) & 0.846 & 0.0150 & 0.957 & 0.0064 \\ \hline \end{tabular}
\end{table}
Table 3: Metrics for Text Prompt - _Happy_
Figure 9: Architecture diagram of CoralStyleCLIP. **Segment selection network** consists of a pre-trained segmentation network and matrix \(e\). The weights of each segment are modulated to produce a CORAL mask. **Convolutional attention network** consists of a CNN which predicts the CORAL mask at each layer of StyleGAN. The CORAL mask can either come from Segment selection _or_ Convolutional Attention Network. **Latent direction** can either come from the learnt global direction \(\Delta\)_or_ via a mapper \(g(\cdot)\) at each layer of CoralStyleCLIP. The layers of CoralStyleCLIP blend the features using the mask and latent direction (See Suppl. Pseudocode). There are three \(g(\cdot)\) modules for coarse (\(l\in[1,4]\)), medium (\(l\in[5,8]\)) and fine layers (\(l\in[9,18]\)) each. In this figure, the result of _mohawk hairstyle_ used a convolutional attention network and global direction.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & ID (\(\uparrow\)) & LPIPS (\(\downarrow\)) & MS-SSIM (\(\uparrow\)) & \(L_{2}\) (\(\downarrow\)) \\ \hline SS Global & 0.828 & 0.1980 & 0.756 & 0.0760 \\ SS Mapper & 0.882 & 0.0970 & 0.868 & 0.0303 \\ AttnNet-Global & 0.945 & 0.0849 & 0.919 & 0.0289 \\ AttnNet-Mapper & 0.922 & 0.0971 & 0.899 & 0.0337 \\ \hline StyleCLIP & 0.522 & 0.2940 & 0.597 & 0.1530 \\ StyleMC & 0.651 & 0.0704 & 0.898 & 0.0180 \\ FEAT\({}^{*}\) & 0.953 & 0.0746 & 0.924 & 0.0236 \\ \hline \end{tabular}
\end{table}
Table 4: Metrics for Text Prompt - _Mohawk hairstyle_
## Appendix E Additional experiment details
We provide the hyperparameters used for training CoralStyleCLIP for making edits to images generated by a StyleGAN trained on the FFHQ dataset. Note that for both segment selection and attention network, the user can potentially decrease the \(\lambda_{id}\) by 20-40% depending on how likely the prompt is to make any edit to the facial region. This is essential for text prompts such as _kid_, _elderly_, and _asian_, where the transformation can alter the identity of a person significantly.
**Segment selection:** For experiments based on global directions (Segment Selection - Global Direction), we set \(\lambda_{l_{2}}=0.0007,\lambda_{id}=0.015,\lambda_{area}=0.10\), whereas for latent mapper based edits (Segment Selection - Mapper), we set \(\lambda_{l_{2}}=0.0002,\lambda_{id}=0.020,\lambda_{area}=0.08\).
**Attention network:** For experiments based on global directions (Attention Network - Global Direction) we set \(\lambda_{l_{2}}=0.0009,\lambda_{id}=0.08,\lambda_{area}=0.00009,\lambda_{tv}=0.00003\), whereas for latent mapper based edits (Attention Network - Mapper), we set \(\lambda_{l_{2}}=0.0006,\lambda_{id}=0.08,\lambda_{area}=0.00002,\lambda_{tv}=0.00003\)
In Section F.2, Section F.3 and Section F.4, we also present results for attention network-based CORAL with _latent mapper_ edits. For the Stanford cars dataset, \(\lambda_{l_{2}}\) is set to 0.0002 while keeping other hyperparameters the same. For the remaining two domains, which are adaptations of FFHQ, \(\lambda_{l_{2}}\) is set to 0.0004.
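For convenience, the FFHQ settings above can be collected into a single illustrative configuration; the dictionary structure is an assumption, but the values are those stated in this section.

```
CORAL_HPARAMS_FFHQ = {
    "ss_global":   {"lambda_l2": 0.0007, "lambda_id": 0.015, "lambda_area": 0.10},
    "ss_mapper":   {"lambda_l2": 0.0002, "lambda_id": 0.020, "lambda_area": 0.08},
    "attn_global": {"lambda_l2": 0.0009, "lambda_id": 0.08,
                    "lambda_area": 0.00009, "lambda_tv": 0.00003},
    "attn_mapper": {"lambda_l2": 0.0006, "lambda_id": 0.08,
                    "lambda_area": 0.00002, "lambda_tv": 0.00003},
}
# Domain overrides for the attention-network mapper variant:
# lambda_l2 = 0.0002 (Cars), 0.0004 (sketch/pixar); other values unchanged.
```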
## Appendix F Additional results
In addition to presenting results for editing images using CoralStyleCLIP on the FFHQ dataset [19] (see Figure 10), we also demonstrate the benefits of our method for the Stanford Cars dataset [24] (see Figure 11), and for face generators which were adapted to the following domains: _sketch_, and _pixar_ (see Figure 12 and Figure 13 respectively), using StyleGAN-NADA [13].
For experiments other than those with FFHQ, we disable the ID loss \(\mathcal{L}_{id}\). Nonetheless, we do observe high-fidelity edits in these settings as well. Also note that as mentioned in Section 4, for \(l>13\), we set the masks \(m^{(l)}=\mathbf{0}\) for CoralStyleCLIP.
### Ffhq [19]
In Figure 10, we present more examples where CoralStyleCLIP executes a range of edits for human faces with high precision and minimal hand-holding. We choose a variety of text prompts that demand challenging structural and color edits. These results show that our method can accurately select the correct region and layer.
### Cars [24]
We trained CoralStyleCLIP based on convolutional attention network and the _latent mapper_ edit for the \(512\times 512\) size StyleGAN2 model trained on Stanford Cars dataset [24] available from [6]. For text prompts _classic_, _sports_, and _yellow car_, we observe that only the car is edited while the background is not selected in Figure 11. Furthermore, we also observe that CoralStyleCLIP automatically selects earlier layers for executing the first two prompts while it utilizes the latter layers to change the car's color. It is also interesting to note that the network generally selects layer 8 for wheel modification while it selects layers 5-7 for editing the car's body. This indicates that the car's wheels will likely have a more significant structural disentanglement and edit flexibility in layer 8. Furthermore, for _yellow car_, CORAL prioritizes the car's body over the wheels and windows. We see that the car's color remains preserved for _classic car_ and _sports car_.
### Sketch [13]
We trained CoralStyleCLIP based on convolutional attention network and the _latent mapper_ edit for a StyleGAN that was pre-trained on FFHQ [19], and then domain adapted to _sketches_ using StyleGAN-NADA [13]. For both prompts _kid_ and _frown_, we see that CORAL successfully identifies the appropriate regions and layers for editing and executes them in Figure 12. In the case of _kid_, the network selects the facial region, and in _frown_, the network selects the eyes and mouth regions. Similar to the observation made in _Cars_ subsection above (Section F.2), the network selects layer 8 for structural changes to the eyes and selects other layers for making facial edits to achieve _frown_.
### Pixar [13]
Along the lines of Section F.3, in Figure 13, we also apply edits corresponding to _glasses_ and _scared_ for StyleGAN2 generated images adapted for the _pixar_ domain using StyleGAN-NADA [13].
While the _glasses_ emerge from layers 1 to 4, leaving other attributes undisturbed, _scared_ affects both the eyes and mouth regions, their corresponding edits emerging from layers 5 and 8, respectively. In both cases, the edits are primarily structural and executed through the earlier layers. These results demonstrate that CoralStyleCLIP can determine the correct regions to edit at the correct set of StyleGAN2 layers with minimal hand-holding.
### Additional quantitative results (Table 2 to 5)
In Table 1, we observed that CORAL edits on human faces achieve a reasonable Clean-FID [35], thereby preserving the _realism_ of the edited images.
We also compare CORAL for a StyleGAN2 trained on FFHQ with our baselines across the four text prompts from
Figure 10: Additional results on FFHQ [19] generated images.
Figure 11: Additional results on Cars [24] generated images.
Table 1 based on other quantitative metrics. In particular, we compute the identity similarity (ID) based on the cosine similarity between ArcFace embeddings [11], LPIPS [53] distance, MS-SSIM [46] score and the pixel-wise Euclidean distance, between each edited image and the original.
For comparison with FEAT [16], we only compare CORAL with results from our reimplementation FEAT\({}_{7}^{*}\) for _happy_, _mohawk_ and _surprised_, and FEAT\({}_{13}^{*}\) for _blue eyes_ so that the edits do occur with high precision. On average, the ID is higher in Table 2 and Table 4 as compared to Table 3 and Table 5. This is because edits to the facial regions, such as the mouth and eyes, diminish the facial recognition capabilities of ArcFace [11].
In general, CORAL, based on the attention network, has higher similarity and lower dissimilarity scores than the segment-selection-based methods, which can be attributed to higher precision in the region of interest selection at every layer of the generator. However, the same cannot be said for StyleCLIP and StyleMC, which affect irrelevant regions, consequently underperforming CORAL and FEAT.
Figure 12: Additional results on Sketch [13] generated images.
## Appendix G CORAL v/s multi-layer FEAT
The CORAL method applies the proposed feedforwarded feature blending strategy at each of the StyleGAN layers (Section 3.3). Along similar lines, an interesting and direct extension of FEAT would be to apply their proposed feature blending strategy at every StyleGAN layer. The resulting method, which we refer to as "multi-layer FEAT", would have a similar high-level architecture with two parallel pathways but would differ in the blending strategy (Figure 14). In the case of multi-layer FEAT, the blended features would be propagated through the edited pathway (\(w+\Delta\)) while the original features would continue to propagate through the original unedited (\(w\)) pathway. In other words, the FEAT blending strategy draws relevant edits from a particular layer and learns to discard edits from all the previous layers. Therefore, multi-layer FEAT cannot propagate the edits effectively. However, in the case of CORAL, blended features are passed in parallel to both pathways (Section 3.3). As a result, CORAL can discard irrelevant edits at the current layer and propagate the updated edits effectively.
Figure 13: Additional results on Pixar [13] generated images.
## Appendix H Future applications
The edits outside the regions of interest are explicitly prevented by the feed-forwarded blending strategy proposed in Section 3.3. As a result, methods such as [23, 16, 27, 36, 4] could directly benefit from applying our strategy for inference with separately constructed masks.
Furthermore, the synergy between the co-optimized region and layer selection (CORAL) in Section 3.2 and the feed-forwarded blending may provide increased control to content designers who may find it easier to refine an accurate initial predicted region of interest.
Figure 14: FEAT vs Multi-layer FEAT vs CORAL |
2310.05109 | Lightweight In-Context Tuning for Multimodal Unified Models | In-context learning (ICL) involves reasoning from given contextual examples.
As more modalities come, this procedure becomes more challenging, as the
interleaved input modalities convolute the understanding process. This is
exemplified by the observation that multimodal models often struggle to
effectively extrapolate from contextual examples to perform ICL. To address
these challenges, we introduce MultiModal In-conteXt Tuning (M$^2$IXT), a
lightweight module to enhance the ICL capabilities of multimodal unified
models. The proposed M$^2$IXT module perceives an expandable context window to
incorporate various labeled examples of multiple modalities (e.g., text, image,
and coordinates). It can be prepended to various multimodal unified models
(e.g., OFA, Unival, LLaVA) of different architectures and trained via a
mixed-tasks strategy to enable rapid few-shot adaption on multiple tasks and
datasets. When tuned on as little as 50K multimodal data, M$^2$IXT can boost
the few-shot ICL performance significantly (e.g., 18\% relative increase for
OFA), and obtain state-of-the-art results across an array of tasks including
visual question answering, image captioning, visual grounding, and visual
entailment, while being considerably small in terms of model parameters (e.g.,
$\sim$$20\times$ smaller than Flamingo or MMICL), highlighting the flexibility
and effectiveness of M$^2$IXT as a multimodal in-context learner. | Yixin Chen, Shuai Zhang, Boran Han, Jiaya Jia | 2023-10-08T10:47:24Z | http://arxiv.org/abs/2310.05109v1 | # Lightweight In-Context Tuning for Multimodal Unified Models
###### Abstract
In-context learning (ICL) involves reasoning from given contextual examples. As more modalities come, this procedure becomes more challenging, as the interleaved input modalities convolute the understanding process. This is exemplified by the observation that multimodal models often struggle to effectively extrapolate from contextual examples to perform ICL. To address these challenges, we introduce **M**ulti**M**odal **I**n-**c**onte**X**t **T**uning (M\({}^{2}\)IXT), a lightweight module to enhance the ICL capabilities of multimodal unified models. The proposed M\({}^{2}\)IXT module perceives an expandable context window to incorporate various labeled examples of multiple modalities (_e.g._, text, image, and coordinates). It can be prepended to various multimodal unified models (_e.g._, OFA, Unival, LLaVA) of different architectures and trained via a mixed-tasks strategy to enable rapid few-shot adaption on multiple tasks and datasets. When tuned on as little as 50K multimodal data, M\({}^{2}\)IXT can boost the few-shot ICL performance significantly (_e.g._, 18% relative increase for OFA), and obtain state-of-the-art results across an array of tasks including visual question answering, image captioning, visual grounding, and visual entailment, while being considerably small in terms of model parameters (_e.g._, \(\sim\)20\(\times\) smaller than Flamingo or MMICL), highlighting the flexibility and effectiveness of M\({}^{2}\)IXT as a multimodal in-context learner.
## 1 Introduction
In recent years, significant advancements have been made in the field of multimodal models (Radford et al., 2021; Wang et al., 2022d; Li et al., 2022b; 2023b; Wang et al., 2022b; Lu et al., 2023; Zhu et al., 2022b; Li et al., 2022a; Wang et al., 2022a; Alayrac et al., 2022), with vision-language models showcasing the most considerable improvements in performance and applicability. By jointly modeling diverse data modalities, multimodal models have set new benchmarks in various tasks, such as visual question answering (Antol et al., 2015; Marino et al., 2019; Schwenk et al., 2022; Hudson & Manning, 2019), visual grounding (Yu et al., 2016), and image captioning (Chen et al., 2015; Ordonez et al., 2011; Changpinyo et al., 2021; Plummer et al., 2015). A current trend in multimodal modeling (Chen et al., 2022a,b; Wang et al., 2022a; Lu et al., 2023) focuses on unifying different modalities and tasks through a sequence-to-sequence learning framework (Vaswani et al., 2017), aiming to build versatile models. These multimodal unified models are built on the principle of forgoing specifically designed modules, like detection heads in detectors (Ren et al., 2015) or segmentation heads in segmentors (Xiao et al., 2018), and instead incorporating all inputs and outputs within the same I/O space.
Despite their impressive generalization capabilities across multiple tasks and modalities, unified models often struggle to extrapolate from a few examples and perform few-shot learning on unseen
datasets. Large language models (LLMs) (Brown et al., 2020; Liu et al., 2023b) have shown promising potential in few-shot adaptation through in-context learning (ICL) without updating their parameters. However, ICL has not been extensively explored in multimodal settings, where input sequences contain text, images, or other modalities, and the integration of ICL capabilities into multimodal foundation models remains unclear and challenging.
The main challenges stem from the fact that during the pretraining phase, multimodal unified models are not adequately tailored for in-context learning, and the diversity of input modalities adds complexity to both the learning and inference processes, ultimately leading to suboptimal ICL performance. As evidenced by Figure 1 (b), the multimodal unified model, OFA (Wang et al., 2022a), fails to learn from contextual few-shot examples1. Specifically, adding few-shot examples to OFA even leads to a worse performance than its zero-shot inference, which is not an uncommon phenomenon (Alayrac et al., 2022; Awadalla et al., 2023; Tsimpoukelli et al., 2022). Potential reasons are: 1) the model encoder has never explicitly seen irregular modalities such as bounding box coordinates during pretraining; and 2) the added contextual examples can convolute the understanding of the test query. As such, it is necessary to have an additional module that can handle these in-context examples to fully harness the potential of ICL in multimodal settings, ultimately enabling them to reason more effectively from contextual few-shot examples.
Footnote 1: We prepend multimodal in-context examples to queries.
In light of these challenges, we propose a **M**ulti**M**odal **I**n-**c**onte**X**t **T**uning (M\({}^{2}\)IXT) method for multimodal unified models. Recent work has demonstrated that, if trained appropriately, language models can be endowed with better ICL capability (Chen et al., 2022; Min et al., 2022; Akyurek et al., 2022). Drawing inspiration from these findings, we design the M\({}^{2}\)IXT module to encode in-context examples with multiple modalities and train it to perform in-context learning. M\({}^{2}\)IXT is a lightweight module, and can be integrated into pretrained multimodal unified models (_e.g._, OFA, LLaVA (Liu et al., 2023a), Unival (Shukor et al., 2023)) (model parameters are frozen) and trained for multiple tasks such as VQA, image captioning, and visual grounding by reusing a small portion of the original pretraining dataset of these multimodal unified models. In doing so, the M\({}^{2}\)IXT module can be easily tuned with minimal computational overhead, and it learns to align the contextual
Figure 1: (a) M\({}^{2}\)IXT surpasses existing multimodal models (_e.g._, Flamingo, MMICL, Unified-IO) on multiple datasets and tasks while maintaining a considerably small size; (b) The performance gain of M\({}^{2}\)IXT (OFA\({}_{\rm LARGE}\)) over the base model (_i.e._, OFA\({}_{\rm LARGE}\)) and previous state of the art is significant; also, OFA\({}_{\rm LARGE}\) cannot deal with in-context examples, as evidenced by the comparison between its zero-shot and few-shot performances.
examples and the test query of interest to make more accurate predictions, enabling fast adaptation to downstream datasets. In summary, the contributions of our paper are as follows:
* We propose M\({}^{2}\)IXT, an in-context tuning module explicitly designed to enable multimodal unified models to conduct in-context learning effectively. M\({}^{2}\)IXT can deal with multimodal contextual examples and can be easily trained with a multi-task strategy.
* Empirical evaluations reveal that M\({}^{2}\)IXT significantly improves the few-shot learning capabilities of existing multimodal unified models across diverse tasks and datasets, setting new performance benchmarks. Its strong performance in open-set evaluations underscores its potential as a versatile tool for a wide array of multimodal learning scenarios.
* M\({}^{2}\)IXT is lightweight and exhibits remarkable adaptability. As a plug-and-play module, M\({}^{2}\)IXT can be easily integrated into multimodal unified models with different architectures without incurring much extra computational/memory overhead given its small model size.
## 2 Related Work
Vision Language Models (VLM).It has been of long-standing interest to researchers to pretrain vision and language models to accomplish tasks such as visual question answering (Antol et al., 2015; Marino et al., 2019; Schwenk et al., 2022), visual grounding (Yu et al., 2016), captioning (Chen et al., 2015), and cross-modal retrieval (Lin et al., 2014). In recent times, there has been a significant growth in the development of foundation VLMs. These models are pretrained on a large scale and have proven to be effective in scaling up for modality encoding and ultimately improving the overall performance of downstream tasks. A typical combined model comprises modality-specific modules, _i.e._, a vision module and a language module, which are connected via dual-encoder (Radford et al., 2021; Jia et al., 2021; Yuan et al., 2021) or mixture-of-experts structures (Wang et al., 2022b; Bao et al., 2022; Zhu et al., 2022a). They are also pretrained using different objectives, like image-text contrastive loss (Radford et al., 2021; Jia et al., 2021; Yuan et al., 2021; Bao et al., 2022; Yu et al., 2022), masked data modeling (Wang et al., 2022b; Bao et al., 2022), and maximum likelihood estimation (Zhu et al., 2022b; Jia et al., 2022a; Wang et al., 2022a; Lu et al., 2023; Chen et al., 2022b). Several tasks related to vision-and-language are also being incorporated, starting with image-text matching (Radford et al., 2021; Jia et al., 2021) and gradually expanding to include more vision and language tasks (Wang et al., 2022b; Li et al., 2022a; Yu et al., 2022).
Multimodal Unified Models.Recently, there has been a trend to build unified models that handle multiple tasks and modalities with a sequence-to-sequence framework to mitigate the need for task-specific designs. Pix2seq (Chen et al., 2022a) and Pix2seq2 (Chen et al., 2022b) made an initial effort to combine object detection, segmentation, and keypoint detection into a single model by using a sequence-to-sequence architecture. Since then, unified models, with the ability to handle more tasks by representing data of various modalities in a unified I/O space (Wang et al., 2022a; Lu et al., 2023), have gained popularity and become increasingly prevalent. Most recently, Unival (Shukor et al., 2023) improves OFA by embedding video, image, text, and audio modalities together and aligns them with transformers. In contrast to the encoder-decoder architecture, Uni-Perceivers (Zhu et al., 2022b; Jia et al., 2022a) employ a transformer encoder-only architecture to align the likelihood of the predicted and target sequence. However, they are limited in their ability to facilitate generative tasks. Similarly, Painter (Wang et al., 2022c) employs a vision encoder but is restricted to dense labeling tasks that rely solely on image data. In the era of large models, there has been a trend to incorporate visual information into LLMs. LLaVA (Liu et al., 2023a), for example, injects a vision transformer into LLaMA (Touvron et al., 2023) and achieves state-of-the-art accuracy on the ScienceQA benchmark.
In-context Learning.In-context learning (ICL, also known as few-shot prompting), popularized by GPT-3 (Brown et al., 2020), enables large language models to perform tasks by including a few input-output examples in the model's context (input) as a preamble, without updating any model parameters. ICL has been widely studied as an emergent capability of LLMs (Wei et al.), but its application to the multimodal vision-language domain has only recently begun to be explored. Raw pretrained models, whether language or vision-language models, are not explicitly designed for in-context few-shot prompting during pretraining. An effective approach to enhancing the ICL capabilities of pretrained models is to fine-tune them by prepending a few labeled in-context examples to the target input. For example, Chen et al. (Chen et al., 2022c) propose an in-context tuning method that meta-trains an LM to learn to adapt to new tasks from a few examples. More relevant to our work
is Flamingo (Alayrac et al., 2022), which is trained to endow VLM with in-context few-shot learning capabilities. Flamingo takes interleaved visual data and text as input and generates free-form text as output, and uses LLM as the backbone. Most recently, MMICL (Zhao et al., 2023) and Otter (Li et al., 2023) proposed to finetune the large vision-language model via large-scale in-context learning. However, they show marginal performance gains with substantial training cost while our method is more efficient and lightweight. Furthermore, our primary focus lies in enhancing the ICL capability of multimodal unified models.
## 3 The Proposed Method: M\({}^{2}\)Ixt
To endow multimodal unified models with the ability to perform in-context few-shot reasoning, we propose multimodal in-context tuning (M\({}^{2}\)IXT). Specifically, the M\({}^{2}\)IXT module takes as input a few multimodal labeled examples. Each contextual example consists of an image, a text instruction, and the corresponding answer. The M\({}^{2}\)IXT module is compatible with multiple tasks, including visual question answering, image captioning, visual grounding, _etc._, and can be prepended to multimodal unified models of different architectures.
### The Architecture of M\({}^{2}\)IXT
Following previous practices in multimodal unified models, an encoder-decoder transformer framework (_e.g._, OFA (Wang et al., 2022), Unival (Shukor et al., 2023)), or decoder-only transformer framework (_e.g._, LLaVA (Liu et al., 2023)) can be adopted as the backbone. These multimodal unified models generate target sequences conditioned on the input source sequences, and are usually optimized by minimizing the negative log-likelihood loss, \(\ell=-\sum_{i=1}^{|y|}\log P_{\theta}(y_{i}|\hat{y}_{1:i-1},x,s)\), where \(\theta\) denotes the model parameters, \(x\) is the input image, \(s\) is the instruction, and \(\hat{y}_{1:i-1}\) are the \(i-1\) preceding tokens of output \(y\).
In an ICL setting, suppose we have \(N\) contextual examples of [Image, Instruction, Target] triples, and the \(i^{\text{th}}\) example is denoted as \(C_{i}\). Contextual examples are delimited by adding <bos> and <eos> tokens to the beginning and end of each example. The M\({}^{2}\)IXT module takes the \(N\) multimodal examples as input and outputs a sequence of token embeddings which can be concatenated with the query sequence embeddings. To handle multimodal examples, the M\({}^{2}\)IXT module comprises three tunable components: a visual encoder (_e.g._, ResNet or ViT), a text embedding dictionary, and a target embedding network. The target embedding network can process conventional modalities (_e.g._, text tokens) as well as special modalities such as bounding box coordinate tokens. M\({}^{2}\)IXT is lightweight as it only adds 40M\(\sim\)60M additional tunable parameters. Figure 2 illustrates how the M\({}^{2}\)IXT module is integrated into a multimodal unified model. The M\({}^{2}\)IXT module is decoupled as a standalone module by freezing the original multimodal unified model, which minimizes the training overhead and accelerates adaptations.
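A minimal sketch of this context encoding, assuming hypothetical module handles and PyTorch conventions (an illustration of the described design, not the released implementation):

```python
import torch
import torch.nn as nn

class M2IXTContext(nn.Module):
    """Embed N labeled [image, instruction, target] triples into one
    sequence of token embeddings to prepend to the query sequence."""
    def __init__(self, visual_encoder, text_embed, target_embed, dim):
        super().__init__()
        self.visual_encoder = visual_encoder   # e.g. a ResNet patch embedder
        self.text_embed = text_embed           # token embedding table
        self.target_embed = target_embed       # handles text and <bin> tokens
        # Learned separator embeddings (zero-initialized for this sketch).
        self.bos = nn.Parameter(torch.zeros(1, dim))
        self.eos = nn.Parameter(torch.zeros(1, dim))

    def forward(self, examples):
        # examples: list of (image, instruction_ids, target_ids) triples
        seqs = []
        for img, instr_ids, target_ids in examples:
            seqs += [self.bos,
                     self.visual_encoder(img),       # (num_patches, dim)
                     self.text_embed(instr_ids),     # (len_instr, dim)
                     self.target_embed(target_ids),  # (len_target, dim)
                     self.eos]
        return torch.cat(seqs, dim=0)  # prepended to the query embeddings
```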
M\({}^{2}\)IXT samples tokens based on the model likelihood \(P(y_{i}|\hat{y}_{1:i-1},x,s,C_{1},...,C_{N})\) conditioned on in-context sequences [\(C_{1},...,C_{N}\)]. The M\({}^{2}\)IXT training objective function is the same as the one used for the multimodal unified model apart from the additional input contextual examples,
\[\ell=-\sum_{i=1}^{|y|}\log P_{\theta}(y_{i}|\hat{y}_{1:i-1},x,s,C_{1},...,C_{N}), \tag{1}\]
Figure 2: The architecture of the proposed M\({}^{2}\)IXT. It incorporates multimodal contextual examples as input and can be integrated into multimodal unified models with various architectures.
where \(x\), \(s\), and \(y\) represent the query image, query task instruction, and query target ground truth, respectively. Standard optimization methods, such as causal masks and teacher forcing, are leveraged in the training process. In addition, we adopt random resize, center crop, RandAug, horizontal flip, and Large Scale Jittering (Ghiasi et al., 2021) for image data augmentation.
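A sketch of the objective in Eq. (1) under teacher forcing (the `model` call signature is a placeholder for whichever frozen unified backbone is used):

```python
import torch.nn.functional as F

def m2ixt_loss(model, context_emb, query_image, query_instr, target_ids):
    """Negative log-likelihood of Eq. (1): the frozen backbone is
    conditioned on the in-context sequence plus the query, and the
    target is shifted by one position (teacher forcing)."""
    logits = model(context_emb, query_image, query_instr,
                   target_ids[:, :-1])              # (B, T-1, vocab)
    return F.cross_entropy(logits.transpose(1, 2),  # (B, vocab, T-1)
                           target_ids[:, 1:])       # next-token targets
```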
To perform ICL during inference, we need to draw labeled examples as the context for each test query. To this end, we randomly sample in-context examples from the evaluation set if labels are available, following the same setting as Flamingo (Alayrac et al., 2022) and Painter (Wang et al., 2022c); if labels are not accessible from the evaluation set (_e.g._, test split or online evaluations), we draw samples from arbitrary public datasets (Antol et al., 2015; Chen et al., 2015), allowing for better generality. Additionally, beam search is adopted to ensure generation quality.
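For instance, the example-drawing rule can be sketched as follows (dataset handles are illustrative):

```python
import random

def draw_incontext_examples(eval_set, public_pool, n_shots=2,
                            labels_available=True):
    """Pick in-context examples for a test query: from the evaluation
    set when labels are available (the Flamingo/Painter setting),
    otherwise from an arbitrary labeled public dataset."""
    pool = eval_set if labels_available else public_pool
    return random.sample(pool, n_shots)
```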
### Training of M\({}^{2}\)IXT
We detail the training procedure of M\({}^{2}\)IXT by taking OFA as the backbone for brevity (We adopt the same setting for Unival.). We integrate M\({}^{2}\)IXT into different variants (_e.g._, OFA\({}_{\rm BASE}\) and OFA\({}_{\rm LARGE}\)) of OFA. The text embedding dictionary and the target embedding network are initialized using the pretrained embeddings of OFA, while the visual input is embedded using a ResNet (_i.e._, ResNet-101 or ResNet-152)2. Similarly, for LLaVA, we freeze all its parameters and only tune the M\({}^{2}\)IXT module.
Footnote 2: Empirically, we found that it obtains comparable performance with ViT (CLIP ViT-B).
#### 3.2.1 Unified Data Format
Multimodal learning involves unifying language and image data through a tokenizer and an embedding network that projects them into discrete tokens represented as vectors in hidden dimensions. These tokens are then serialized into sequences for each sample. While the ordering of tokens may vary, most methods follow a serialization of [Image, Instruction or Command, Target], which is separated into a source sequence of [Image, Instruction] and a target sequence of [Target] during implementation. For multi-tasking, the instruction or command varies for different tasks, such as _"Detect the objects"_ for object detection and _"What does the image describe?"_ for image captioning. This allows the unified model to generate output based on the input. Text modality tokenization (Sennrich et al., 2016) is initialized using a linguistic vocabulary, and a pretrained visual network such as ResNet (He et al., 2016) is used for image modality tokenization and embedding. In some cases, a separate vocabulary is created to represent special data in the Target, such as coordinates (_e.g._, bounding box coordinates), encoded as "<bin>+coordinate" (_e.g._, "<bin>456") to differentiate them from regular numbers.
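As an illustration of such a coordinate vocabulary (the bin count of 1,000 is an assumption for this sketch, not a stated hyperparameter):

```python
def box_to_bin_tokens(box, img_w, img_h, num_bins=1000):
    """Quantize a bounding box (x0, y0, x1, y1) in pixels into discrete
    '<bin>' tokens so coordinates live in the same output vocabulary as
    regular text tokens."""
    scale = [img_w, img_h, img_w, img_h]
    return ["<bin>%d" % min(num_bins - 1, int(v / s * num_bins))
            for v, s in zip(box, scale)]

# e.g. box_to_bin_tokens((120, 40, 260, 200), 640, 480)
# -> ['<bin>187', '<bin>83', '<bin>406', '<bin>416']
```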
#### 3.2.2 Mixed-Tasks Training
The M\({}^{2}\)IXT module is trained with a unified dataset that contains multiple tasks such as image captioning, visual question answering, visual grounding, _etc_. We employ a task-heterogeneous batch strategy following (Aghajanyan et al., 2021) by shuffling all the samples randomly, with which each batch contains multimodal in-context examples of different tasks. This encourages multiple tasks to learn a shared representation, enabling an easier transfer to unseen data (Xie et al., 2019; Marino et al., 2019; Schwenk et al., 2022; Plummer et al., 2015).
We lay out the pretraining tasks and datasets in detail. It is worth noting that all these datasets are sampled from the OFA pretraining dataset. Specifically, we adopt several vision and language tasks and datasets for M\({}^{2}\)IXT training, including visual question answering (VQAv2 (Antol et al., 2015)), image captioning (COCO (Chen et al., 2015), SBU (Ordonez et al., 2011), CC12M (Changpinyo et al., 2021)), visual grounding (RefCOCO, RefCOCO+, and RefCOCOg (Yu et al., 2016)), masked image modeling (ImageNet-21k (Deng et al., 2009)), as well as object detection (OpenImage (Kuznetsova et al., 2020)). By default, we randomly sample part of the mentioned vision and language data and randomly select 25,000 samples from both masked image modeling (ImageNet-21k) and object detection (OpenImage), resulting in only 0.5M samples, which is \(\sim 50\times\) less than the original OFA pretraining dataset. Section 4.4 examines the impact on the model's performance of varying the sampling percentage. We transform all the images, instructions, and targets in an in-context manner and randomly sample them from the dataset to construct the in-context examples.
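A schematic of this task-heterogeneous batching (dataset handles, and the pairing of each query with same-task context examples, are illustrative assumptions):

```python
import random

def heterogeneous_batches(task_datasets, batch_size, n_shots=2):
    """Pool samples from all tasks, shuffle them globally, and pair each
    query with randomly drawn in-context examples of the same task, so
    every batch mixes captioning, VQA, grounding, etc.
    task_datasets: dict mapping task name -> list of samples."""
    pool = [(task, s) for task, ds in task_datasets.items() for s in ds]
    random.shuffle(pool)
    for i in range(0, len(pool), batch_size):
        batch = []
        for task, query in pool[i:i + batch_size]:
            context = random.sample(task_datasets[task], n_shots)
            batch.append((context, query))
        yield batch
```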
## 4 Experiments
### Experimental Setup
We randomly select in-context examples from the mixed-tasks dataset, and we use random ordering for the in-context samples in a single sequence. We set the pretraining image size to \(384\times 384\), and use the visual encoder to divide it into \(24\times 24\) patches, resulting in 576 tokens for one image. Together with the instructions and label tokens, M\({}^{2}\)IXT learns to handle a large context window of \(\sim\)3k tokens on average. We use the Adam optimizer for model learning, and we set the maximum epoch number to 20, weight decay to \(0.01\), warmup ratio to \(0.01\), and the initial learning rate to \(10^{-4}\) with a cosine scheduler. Based on empirical evidence, it takes around 3 days to train M\({}^{2}\)IXT on a machine with 16 NVIDIA Tesla V100-16GB GPUs, using a pretraining data setting of 0.5M. However, with the 50K pretraining data setting (0.2% of OFA data, as shown in Figure 6), it can be finished in approximately 7 hours.
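The reported optimization setup translates roughly to the following (the stand-in module and step count are illustrative; the warmup phase, ratio 0.01, would be applied via a separate scheduler wrapper omitted here):

```python
import torch
import torch.nn as nn

m2ixt_module = nn.Linear(768, 768)   # stand-in for the M^2IXT parameters
total_steps = 100_000                # illustrative training length

optimizer = torch.optim.Adam(m2ixt_module.parameters(),
                             lr=1e-4, weight_decay=0.01)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer,
                                                       T_max=total_steps)
# A 384x384 image split into 24x24 patches (16 px each) yields
# 24 * 24 = 576 visual tokens per image.
```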
### Performance Boost with M\({}^{2}\)IXT
Integrating M\({}^{2}\)IXT into OFA.We first demonstrate how M\({}^{2}\)IXT can enhance the performance of the popular multimodal unified model, _i.e._, OFA, under a few-shot setting, using 2 shots by default. As illustrated in Table 1 and Table 2, outfitting OFA with M\({}^{2}\)IXT can substantially improve performance across multiple tasks and datasets, with an average 25% and 11.4% relative performance
\begin{table}
\begin{tabular}{|l|c c|c c c c|c|} \hline
**Methods** & \multicolumn{2}{c|}{VQAv2} & \multicolumn{4}{c|}{COCO Caption test} & \multicolumn{1}{c|}{SNLI-VE} \\ & val & test-dev & BLEU@4 & METEOR & CIDEr & SPICE & dev \\ \hline \hline OFA\({}_{\mathrm{BASE}}\) & 66.3 & 69.7 & 21.7 & 20.8 & 76.6 & 16.1 & 49.7 \\ M\({}^{2}\)IXT (OFA\({}_{\mathrm{BASE}}\)) & 70.1 & 70.4 & 34.6 & 28.3 & 116.0 & 21.8 & 50.7 \\ \hline \hline OFA\({}_{\mathrm{LARGE}}\) & 73.0 & 74.8 & 22.2 & 20.4 & 75.0 & 15.3 & 41.0 \\ M\({}^{2}\)IXT (OFA\({}_{\mathrm{LARGE}}\)) & 75.7 & 76.1 & 37.8 & 30.2 & 128.1 & 23.0 & 42.6 \\ \hline \end{tabular}
\end{table}
Table 1: Few-shot experiments of M\({}^{2}\)IXT (OFA): Multi-tasking evaluation on VQAv2, COCO Caption, and SNLI Visual Entailment.
\begin{table}
\begin{tabular}{|l|c c c|c c c|c c|} \hline
**Methods** & \multicolumn{3}{c|}{RefCOCO} & \multicolumn{3}{c|}{RefCOCO+} & \multicolumn{2}{c|}{RefCOCOg} \\ & val & testA & testB & val & testA & testB & val & test \\ \hline \hline OFA\({}_{\mathrm{BASE}}\) & 63.5 & 67.3 & 60.4 & 50.9 & 56.8 & 44.7 & 56.0 & 56.2 \\ M\({}^{2}\)IXT (OFA\({}_{\mathrm{BASE}}\)) & 78.7 & 83.8 & 72.1 & 67.9 & 76.2 & 57.4 & 70.4 & 71.6 \\ \hline \hline OFA\({}_{\mathrm{LARGE}}\) & 78.7 & 82.6 & 75.2 & 70.1 & 77.0 & 63.8 & 71.9 & 72.2 \\ M\({}^{2}\)IXT (OFA\({}_{\mathrm{LARGE}}\)) & 83.8 & 88.2 & 78.3 & 74.7 & 82.8 & 64.9 & 77.8 & 78.1 \\ \hline \end{tabular}
\end{table}
Table 2: Few-shot experiments of M\({}^{2}\)IXT (OFA): Multi-tasking evaluation on visual grounding task with RefCOCO/RefCOCO+/RefCOCOg dataset.
\begin{table}
\begin{tabular}{|l|c|c|c c|c|} \hline **Methods** & \#Params. & VQAv2 & \multicolumn{2}{c|}{COCO Caption test} & RefCOCO/+/g \\ & & test-dev acc & B@4 & CIDEr & val acc \\ \hline \hline Uni-Perceiver-MoE\({}_{\mathrm{BASE}}\) & 167M & - & 33.6 & - & - \\ Uni-Perceiver-v2\({}_{\mathrm{BASE}}\) & 308M & - & - & 116.9 & - \\ Flamingo-3B & 3B & 53.2 & - & 85.0 & - \\ Unified-IO\({}_{\mathrm{SMALL}}\) & 71M & 57.7 & - & 80.1 & 58.5/44.7/53.3 \\ Unified-IO\({}_{\mathrm{BASE}}\) & 241M & 61.8 & - & 104.0 & 78.8/67.5/71.4 \\ Unified-IO\({}_{\mathrm{LARGE}}\) & 776M & 67.8 & - & 117.5 & 80.8/71.2/77.4 \\ Unival\({}_{\mathrm{BASE}}\) & 250M & 70.1 & - & 90.1 & -/70.8/- \\ Otter & 7B & - & - & 75.7 & - \\ MMICL (FlanT5-XL) & 3.4B & 62.6 & - & - & - \\ MMICL (FlanT5-XXL) & 11.4B & 70.5 & - & - & - \\ \hline \hline M\({}^{2}\)IXT (OFA\({}_{\mathrm{BASE}}\)) & 226M & 70.4 & 34.6 & 116.0 & 78.7/67.9/70.4 \\ M\({}^{2}\)IXT (Unival\({}_{\mathrm{BASE}}\)) & 294M & 70.7 & - & 121.6 & -/72.0/- \\ M\({}^{2}\)IXT (OFA\({}_{\mathrm{LARGE}}\)) & 528M & **76.1** & **37.8** & **128.1** & **83.8/74.7/77.8** \\ \hline \end{tabular}
\end{table}
Table 3: Comparison experiments with SOTA unified models: all models are under multi-tasking evaluation on VQA, image captioning, and visual grounding tasks w/o tuning.
gain for OFA\({}_{\rm BASE}\) and OFA\({}_{\rm LARGE}\), respectively. The results of M\({}^{2}\)IXT (OFA\({}_{\rm TINY}\)) and M\({}^{2}\)IXT (OFA\({}_{\rm SMALL}\)) are provided in Appendix A.2. These results affirm the effectiveness and adaptability of M\({}^{2}\)IXT when incorporated into backbones of varying model sizes. Meanwhile, when conducting few-shot inference on OFA directly, it exhibits poorer results (Figure 1 (b)), suggesting that early exposure to contextual examples through M\({}^{2}\)IXT can significantly enhance performance. A similar performance boost in unimodal models has been reported in (Chen et al., 2022c).
Integrated with other Multimodal Models.We apply M\({}^{2}\)IXT to the decoder-only LLaVA-7B (Liu et al., 2023a) and the encoder-decoder Unival (Shukor et al., 2023) in Figure 3 and Table 3. Our M\({}^{2}\)IXT modules are constructed following the method in our paper and appended to LLaVA and Unival for in-context tuning. For LLaVA-7B, we employ pretrained weights from the ScienceQA dataset for initialization, whereas for Unival, we use its stage-2 pretrained weights for initialization. It is important to emphasize that only the M\({}^{2}\)IXT modules are trainable, while all other parameters remain fixed. As depicted in Figure 3 (Right) and Table 3, M\({}^{2}\)IXT delivers significant enhancements across all datasets for both models. This underscores the remarkable adaptability of M\({}^{2}\)IXT.
Comparison with Previous SOTA on Few-shot Learning.There are only a handful of multimodal unified models that evaluate their few-shot/zero-shot learning capabilities on public benchmarks. Here we compare M\({}^{2}\)IXT with Uni-Perceiver-MoE (Zhu et al., 2022a), Uni-Perceiver-v2 (Li et al., 2022a), Unified-IO (Lu et al., 2023), and Flamingo (Alayrac et al., 2022) without any further fine-tuning. From Table 3, we make the following observations. (1) With M\({}^{2}\)IXT, we obtain state-of-the-art performance on almost all datasets, and compared with the best baselines, the improvement is substantial; (2) While being smaller in model size, it still exhibits comparable results to counterparts (_e.g._, Unified-IO\({}_{\rm LARGE}\)) which are \(\sim 3\times\) to \(\sim 10\times\) larger; (3) Although trained to handle few-shot examples, Flamingo, MMICL, and Otter (4 shots) with billions of parameters underperform other methods, which underscores the superiority of M\({}^{2}\)IXT as a multimodal in-context learner.
Full Fine-tuning.As an additional module, M\({}^{2}\)IXT does not degrade the full fine-tuning performance of the backbone unified models. We unfreeze all the model parameters and perform full fine-tuning on COCO Caption by simply replacing the mixed-tasks datasets with the COCO Caption training set. As shown in Figure 3, our method achieves strong overall results over baselines, with 139.2 CIDEr for OFA\({}_{\rm BASE}\) + M\({}^{2}\)IXT and 143.4 CIDEr for OFA\({}_{\rm LARGE}\) + M\({}^{2}\)IXT, beating the 80B in-context counterpart Flamingo and the generalized decoding model X-Decoder, and performing on par with the large multimodal model CoCa. More results are listed in Appendix A.2.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline \multirow{2}{*}{**Methods**} & \multirow{2}{*}{\#Params.} & OKVQA & A-OKVQA & \multicolumn{2}{c|}{Flickr30k} \\ & & val & val & BLEU@4 & CIDEr \\ \hline \hline Frozen (Tsimpoukelli et al., 2021) & 7.1B & 5.9 & - & - & - \\ Few-VLM (Jin et al., 2022) & 785M & 16.5 & - & - & - \\ Uni-Perceiver-MoE-L (Zhu et al., 2022a) & 505M & - & - & 15.8 & - \\ VLKD (Dai et al., 2022) & 832M & 13.3 & - & - & - \\ BLIP-2 ViT-g OPT2.7B (Li et al., 2023b) & 3.8B & 31.7 & - & - & - \\ MMICL (FlanT5-XL) (Zhao et al., 2023) & 3.4B & - & - & - & 71.9 \\ MMICL (Instruct-FlanT5-XXL) (Zhao et al., 2023) & 11.4B & - & - & - & 72.0 \\ \hline M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)) & 226M & 34.3 & 40.7 & 21.9 & 55.5 \\ M\({}^{2}\)IXT (OFA\({}_{\rm LARGE}\)) & 528M & **40.3** & **47.6** & **27.3** & **72.3** \\ \hline \end{tabular}
\end{table}
Table 4: Unseen task Experiments: models are evaluated on A-/OKVQA, Flickr30k w/o tuning.
Figure 3: Left: full fine-tuning on COCO Caption with cross-entropy optimization; circle size denotes the relative parameter count of each model. Right: results of M\({}^{2}\)IXT (LLaVA-7B) on the ScienceQA-test dataset.
### Open Set Evaluation
Zero-shot inference on unseen datasets is becoming a crucial benchmark for multimodal models (Zhu et al., 2022a;b). Meanwhile, ICL shows promising results on unseen datasets (Marino et al., 2019; Schwenk et al., 2022). Thus, we tested the efficacy of M\({}^{2}\)IXT in an open-set evaluation, using three datasets: OKVQA (Marino et al., 2019), A-OKVQA (Schwenk et al., 2022), and Flickr30k (Plummer
Figure 4: M\({}^{2}\)IXT visualizations for multimodal tasks. Given a selected task and an input query, M\({}^{2}\)IXT responds with correct answers; the in-context examples are drawn from public datasets. The upper part shows few-shot inference demos of M\({}^{2}\)IXT across VQA, image captioning, and visual grounding tasks; the lower part shows a comparison with the SOTA unified model Unified-IO\({}_{\rm BASE}\) and the in-context counterpart OpenFlamingo-9B (Awadalla et al., 2023).
Figure 5: Ablation: Number of shots ranging from 1 to 3, evaluated on M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)).
Figure 6: Ablation: number of pretraining data samples, measured in millions (M).
et al., 2015). These datasets contain complex questions that often require external knowledge to answer accurately. We compared M\({}^{2}\)IXT with several baselines, including the in-context multimodal Frozen (Tsimpoukelli et al., 2021), the prompt-tuning Few-VLM (Jin et al., 2022), the distillation-based VLKD (Dai et al., 2022), the zero-shot unified model Uni-Perceiver-MoE (Zhu et al., 2022a), and the multimodal LLM-based BLIP-2 (Li et al., 2023b). The empirical results, as shown in Table 4, suggest that M\({}^{2}\)IXT is highly effective in leveraging external knowledge and reasoning from diverse multimodal examples in an ICL setting.
### Model Analysis
We conduct thorough model analyses on M\({}^{2}\)IXT with OFA as the backbone model.
Case Study.For an intuitive understanding of M\({}^{2}\)IXT, we showcase a few real examples to illustrate the in-context learning process. The upper three cases in Figure 4 demonstrate that M\({}^{2}\)IXT can effectively handle diverse inputs in a row, following the in-context learning pattern in a multimodal setting. Also, we compare M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)) with Unified-IO and Flamingo (_i.e._, OpenFlamingo-9B (Awadalla et al., 2023)) on performing multiple tasks. We use 2 shots for OpenFlamingo-9B, the same as our 2-shot M\({}^{2}\)IXT. The comparison is also in Figure 4, from which we observe that M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)) gives more reasonable responses for captioning and correctly answers the challenging VQA and visual grounding questions, while Unified-IO and Flamingo either deliver wrong responses or invalid outputs. Moreover, we notice that M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)) exhibits a noteworthy capability in precisely localizing small objects; _e.g._, the traffic light in Figure 4 occupies only 26\(\times\)50 pixels.
Ablation Study on Number of Shots.In our evaluations, we kept the number of in-context examples constant for simplicity. However, it is reasonable to ask whether the number of in-context examples significantly affects performance. To investigate this, we vary the number of in-context examples from 1 to 3 for several tasks and report the results in Figure 5. We observe that increasing the number of examples from 1 to 2 offers a clear performance boost, while further increasing it to 3 brings only marginal benefits. One possible explanation is that, unlike natural language prompts, each multimodal in-context example requires a considerably large token length, which may aggravate the difficulty of inference. Therefore, we recommend setting the number within the range of 1-3, as this strikes a balance between resource utilization and accuracy.
Ablation Study on Size of the Mixed-Tasks Training Set.In this section, we explore whether M\({}^{2}\)IXT can enhance the performance of multimodal models with a smaller amount of data. Figure 6 presents our findings on how the size of the mixed-tasks training set affects model performance. Surprisingly, using 50K pretraining samples (0.2% of the OFA data) achieves quite decent performance for most tasks, compared with the OFA baselines in Figure 6. Specifically, it only takes a few hours to train an M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)) model with 50K pretraining samples. Also, M\({}^{2}\)IXT scales up impressively as the data fraction increases, indicating that the few-shot reasoning ability can be further enhanced via larger-scale training.
Ablation Study on Tasks of the Training Set.We conduct an ablation experiment by removing each type of task and then retraining M\({}^{2}\)IXT. Results in Figure 7 illustrate that removing a highly relevant task causes a noticeable performance drop. For instance, ablating the visual grounding task deteriorates performance significantly. Interestingly, the detection and MIM tasks do not contribute to improving the downstream captioning task. Nevertheless, we retain them for their positive impact on overall performance.
Figure 7: Results of ablating training tasks, evaluated on M\({}^{2}\)IXT (OFA\({}_{\rm BASE}\)).
## 5 Conclusion
We propose a lightweight multimodal in-context tuning method, M\({}^{2}\)IXT, for multimodal unified models, endowing them with the ability to reason from in-context samples. With M\({}^{2}\)IXT, we can quickly adapt unified models to unseen datasets and an open-set world with minimal computational overhead. Empirical evaluations show that M\({}^{2}\)IXT can effectively boost the few-shot learning performance of existing multimodal unified models and obtain state-of-the-art results on multiple datasets and tasks. We hope that M\({}^{2}\)IXT will spur further research on bolstering multimodal ICL capabilities to improve the usability and accessibility of multimodal unified models.
|
2305.08843 | Halos of dark energy | We investigate the properties of dark energy halos in models with a
nonminimal coupling in the dark sector. We show, using a quasistatic
approximation, that a coupling of the mass of dark matter particles to a
standard quintessence scalar field $\phi$ generally leads to the formation of
dark energy concentrations in and around compact dark matter objects. These are
associated with regions where scalar field gradients are large and the dark
energy equation of state parameter is close to $-1/3$. We find that the energy
and radius of a dark energy halo are approximately given by $E_{\rm halo} \sim
\boldsymbol{\beta}^2 \varphi \, m$ and $r_{\rm halo} \sim
\sqrt{\boldsymbol{\beta} \,\varphi ({R}/{H})}$, where $\varphi=Gm/(R c^2)$, $m$
and $R$ are, respectively, the mass and radius of the associated dark matter
object, $\boldsymbol{\beta} = -(8\pi G)^{-1/2} d \ln m/d \phi$ is the
nonminimal coupling strength parameter, $H$ is the Hubble parameter, $G$ is the
gravitational constant, and $c$ is the speed of light in vacuum. We further
show that current observational limits on $\boldsymbol{\beta}$ over a wide
redshift range lead to stringent constraints on $E_{\rm halo}/m$ and,
therefore, on the impact of dark energy halos on the value of the dark energy
equation of state parameter. We also briefly comment on potential backreaction
effects that may be associated with the breakdown of the quasistatic
approximation and determine the regions of parameter space where such a
breakdown might be expected to occur. | P. P. Avelino | 2023-05-15T17:57:09Z | http://arxiv.org/abs/2305.08843v3 | # Halos of dark energy
###### Abstract
We investigate the properties of dark energy halos in models with a nonminimal coupling in the dark sector. We show, using a quasistatic approximation, that a coupling of the mass of dark matter particles to a standard quintessence scalar field \(\phi\) generally leads to the formation of dark energy concentrations in and around compact dark matter objects. These are associated with regions where scalar field gradients are large and the dark energy equation of state parameter is close to \(-1/3\). We find that the energy and radius of a dark energy halo are approximately given by \(E_{\rm halo}\sim\mathbf{\beta}^{2}\varphi\,m\) and \(r_{\rm halo}\sim\sqrt{\mathbf{\beta}\,\varphi(R/H)}\), where \(\varphi=Gm/(Rc^{2})\), \(m\) and \(R\) are, respectively, the mass and radius of the associated dark matter object, \(\mathbf{\beta}=-(8\pi G)^{-1/2}d\ln m/d\phi\) is the nonminimal coupling strength parameter, \(H\) is the Hubble parameter, \(G\) is the gravitational constant, and \(c\) is the speed of light in vacuum. We further show that current observational limits on \(\mathbf{\beta}\) over a wide redshift range lead to stringent constraints on \(E_{\rm halo}/m\) and, therefore, on the impact of dark energy halos on the value of the dark energy equation of state parameter. We also briefly comment on potential backreaction effects that may be associated with the breakdown of the quasistatic approximation and determine the regions of parameter space where such a breakdown might be expected to occur.
## I Introduction
In general relativity an exotic dark energy (DE) fluid [1; 2; 3] dominating the energy density of the Universe is required in order to explain the recent acceleration of the expansion of the Universe [4; 5; 6]. Various observations also suggest that matter in the Universe is primarily nonbaryonic and dark [6; 7]. Despite their importance for the dynamics of the Universe, the physical nature of nonbaryonic dark matter (DM) and DE remains largely unknown. In particular, it is not known whether or not DM and DE are nonminimally coupled [8; 9; 10; 11], or even if they could be associated with a single DE fluid [12; 13]. It is also possible that general relativity may not provide an accurate description of gravity on cosmological scales, and that DE and/or DM could be manifestations of modified gravity [14; 15; 16].
The coupling of the mass of DM particles to a DE scalar field [9; 10; 11] has been shown to give rise to DE mediated fifth forces between DM particles as well as velocity dependent forces. These forces can affect the linear growth of cosmological perturbations [17; 18; 19], and may also play a crucial role on nonlinear scales with a potential impact on the dynamics of galaxies and clusters of galaxies [20; 21; 22; 23; 24; 25; 26]. Also, a nonminimal coupling between DM and DE has been claimed to alleviate some cosmic tensions [27; 28; 29; 30; 31; 32], including the apparent discrepancy between local and high redshift constraints on the value of the Hubble parameter, making this a promising avenue of research.
Although many studies do not explicitly consider them, local variations in the DE have been shown to play an important role in the context of growing neutrino models, with a potentially significant backreaction on the background evolution of the Universe [33; 34; 35]. It has also been shown that the dynamics of DM particles nonminimally coupled to a DE field may be affected by backreaction effects with a possible impact on structure formation and on the large scale dynamics of the Universe [36].
This paper aims to characterize the properties of DE halos resulting from a nonminimal coupling between DM and a DE scalar field. We shall work in the context of the quasistatic approximation, which essentially consists in neglecting terms involving time derivatives in the perturbed field equations. This approximation has been previously used to investigate local variations of the fine-structure constant inside virialized objects in the context of a DE model with a nonminimal coupling to the electromagnetic field [37]. It has also been frequently used in the context of other DE and modified gravity scenarios [38; 39; 40; 41]. The quasistatic approximation is particularly useful when considering small subhorizon scales, since the dynamics of perturbation modes with a wave number significantly smaller than the Hubble radius is in general expected to be dominated by the terms containing spatial derivatives in the equations of motion. Under these conditions, the quasistatic approximation may significantly simplify the perturbation equations, and allow for an analytical treatment.
The outline of this paper is as follows. In Sec. II we describe a generic family of quintessence models with a nonminimal coupling to the DM and derive the corresponding equations of motion. In Sec. III we use the quasistatic
approximation to compute the scalar field perturbations around compact DM objects, discussing the conditions required for its applicability. In Sec. IV we estimate the energy and radius of a DE halo surrounding a compact DM object, as well as the corresponding DE equation of state parameter. We also provide a lower bound to the contribution of DE perturbations inside compact DM objects. Finally, we discuss the implications of our results and conclude in Sec. V.
Throughout this paper we use units where the speed of light in vacuum is \(c=1\). We also adopt the metric signature \((-,+,+,+)\). The Einstein summation convention will be used when a greek index appears twice in a single term, once in an upper (superscript) and once in a lower (subscript) position.
## II Nonminimally interacting dark sector
In this paper we consider a class of models for the dark sector with a nonminimal coupling between DM and a standard quintessence scalar field (which plays a DE role). These models are described by the action
\[S=\int d^{4}x\,\sqrt{-g}\,{\cal L}\,, \tag{1}\]
where the Lagrangian \({\cal L}\) is given by
\[{\cal L}={\cal L}_{\rm DE}+{\cal L}_{\rm DM}\,, \tag{2}\]
with
\[{\cal L}_{\rm DE} = X-V(\phi)\,, \tag{3}\] \[{\cal L}_{\rm DM} = f(\phi){\cal L}_{\rm DM*}\,. \tag{4}\]
Here \(X=-\nabla^{\mu}\phi\nabla_{\mu}\phi/2\) is a standard kinetic term, \(V(\phi)\) is the scalar field potential,
\[{\cal L}_{\rm DM}=f(\phi){\cal L}_{\rm DM*} \tag{5}\]
is the nonminimally coupled DM Lagrangian (\({\cal L}_{\rm DM*}\) denoting the minimally coupled DM Lagrangian). The components of the DM and DE energy-momentum tensors are given by
\[T^{\mu\nu}_{\rm DM} = \frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}{\cal L}_{\rm DM})}{ \delta g_{\mu\nu}}=f(\phi)T^{\mu\nu}_{\rm DM*}\,, \tag{6}\] \[T^{\mu\nu}_{\rm DE} = \frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}{\cal L}_{\rm DE})}{ \delta g_{\mu\nu}}\] (7) \[= \nabla^{\mu}\phi\nabla^{\nu}\phi+g^{\mu\nu}{\cal L}_{\rm DE}\,,\]
where \(g=\det(g_{\mu\nu})\), \(g_{\mu\nu}\) are the components of the metric tensor, and
\[T^{\mu\nu}_{\rm DM*}=\frac{2}{\sqrt{-g}}\frac{\delta(\sqrt{-g}{\cal L}_{\rm DM* })}{\delta g_{\mu\nu}}\,. \tag{8}\]
As a consequence of the nonminimal coupling to the quintessence scalar field, the mass \(m\) of a DM particle or a compact DM object is a function of \(\phi\) with
\[m(\phi)=f(\phi)m_{*}\,, \tag{9}\]
where \(m_{*}\) is the mass that the particle would have if \(f(\phi)=1\).
### Scalar field dynamics in a flat Friedmann-Lemaitre-Robertson-Walker universe
Consider a flat homogeneous and isotropic universe, described by the Friedmann-Lemaitre-Robertson-Walker metric. The corresponding line element is given by
\[ds^{2} = -dt^{2}+a^{2}[t]d\vec{q}\cdot d\vec{q} \tag{10}\] \[= -dt^{2}+a^{2}[t]\left(dq^{2}+q^{2}(d\theta^{2}+\sin^{2}\theta d \phi^{2})\right)\,,\]
where \(a[t]\) is the scale factor, \(t\) is the physical time, \(\vec{q}\) are comoving Cartesian coordinates, and \((q,\theta,\phi)\) are comoving spherical coordinates. If the DM particles are nonrelativistic, then the dynamics of the nonminimally coupled quintessence scalar field \(\phi[t,\vec{q}\,]\) is given by [36]
\[\Box\phi=-\ddot{\phi}-3H\dot{\phi}+\nabla^{2}\phi=\alpha-\beta\sum_{i}m_{i} \delta^{3}[\vec{r}-\vec{r}_{i}]\,, \tag{11}\]
where \(\Box\equiv\nabla_{\mu}\nabla^{\mu}\) is the d'Alembertian, a dot represents a derivative with respect to the physical time \(t\), \(\vec{r}=a\vec{q}\), \(m_{i}\) and \(\vec{r}_{i}=a\vec{q}_{i}\) are, respectively, the masses and positions of the DM particles, \(\nabla^{2}\phi\equiv\nabla^{2}_{\vec{q}}\,\phi/a^{2}\), \(\delta^{3}[\vec{r}\,]\) is the three-dimensional Dirac delta function, and
\[\alpha[\phi] \equiv \frac{dV}{d\phi}\,, \tag{12}\] \[\beta[\phi] \equiv -\frac{d\ln m}{d\phi}=-\frac{d\ln f}{d\phi}\,. \tag{13}\]
## III Quasistatic approximation
Consider a single compact DM object of mass \(m\) comoving with the expansion of the Universe at \(\vec{r}=\vec{0}\). In the quasistatic approximation the field \(\phi\) may be written as \(\phi[t,\vec{r}\,]=\bar{\phi}[t]+\delta\phi\), with
\[\ddot{\bar{\phi}}+3H\dot{\bar{\phi}} = -\alpha[\bar{\phi}]\,, \tag{14}\] \[\nabla^{2}\delta\phi = -\beta[\bar{\phi}]m[\bar{\phi}]\delta^{3}[\vec{r}\,]\,, \tag{15}\]
where Eq. (14) implies that \(\bar{\phi}\) is a function only of the physical time (\(\bar{\phi}=\bar{\phi}[t]\)). This is expected to be a good approximation as long as
\[|\delta\phi| \lesssim |\bar{\phi}|\,, \tag{16}\] \[|\delta\phi| \lesssim \left|\frac{d\ln\alpha}{d\phi}\right|^{-1}\,, \tag{17}\] \[|\delta\phi| \lesssim \left|\frac{d\ln(\beta m)}{d\phi}\right|^{-1}\,. \tag{18}\]
The nonzero components of the energy-momentum tensor of the DE field are given approximately by
\[T^{t}{}_{t} = \bar{T}^{t}{}_{t}+\delta T^{t}{}_{t}\,, \tag{19}\] \[T^{t}{}_{q} = \bar{T}^{t}{}_{q}+\delta T^{t}{}_{q}\,, \tag{20}\] \[T^{q}{}_{q} = \bar{T}^{q}{}_{q}+\delta T^{q}{}_{q}\,, \tag{21}\] \[T^{\theta}{}_{\theta} = \bar{T}^{\theta}{}_{\theta}+\delta T^{\theta}{}_{\theta}\,, \tag{22}\] \[T^{\phi}{}_{\phi} = T^{\theta}{}_{\theta}\,, \tag{23}\]
with
\[\bar{T}^{t}{}_{t} = -\frac{1}{2}\dot{\bar{\phi}}^{2}-V[\bar{\phi}]\,, \tag{24}\] \[\bar{T}^{t}{}_{q} = 0\,, \tag{25}\] \[\bar{T}^{q}{}_{q} = \bar{T}^{\theta}{}_{\theta}=\bar{T}^{\phi}{}_{\phi}=\frac{1}{2}\dot{\bar{\phi}}^{2}-V[\bar{\phi}]\,, \tag{26}\]
and
\[\delta T^{t}{}_{t} = -\frac{1}{2}(\delta\phi)^{\prime 2}-\alpha[\bar{\phi}]\delta\phi\,, \tag{27}\] \[\delta T^{t}{}_{q} = -a\dot{\bar{\phi}}\,(\delta\phi)^{\prime}\,,\] (28) \[\delta T^{q}{}_{q} = \frac{1}{2}(\delta\phi)^{\prime 2}-\alpha[\bar{\phi}]\delta\phi\,,\] (29) \[\delta T^{\theta}{}_{\theta} = \delta T^{\phi}{}_{\phi}=-\frac{1}{2}(\delta\phi)^{\prime 2}- \alpha[\bar{\phi}]\delta\phi\,, \tag{30}\]
where \((\delta\phi)^{\prime}\equiv a^{-1}\partial\phi/\partial q\).
Let us also compute the proper density and pressure associated with the background evolution of the scalar field \(\bar{\phi}[t]\),
\[\bar{\rho}_{\rm DE} = -\bar{T}^{t}{}_{t}=\frac{1}{2}\dot{\bar{\phi}}^{2}+V[\bar{\phi}]\,, \tag{31}\] \[\bar{p}_{\rm DE} = \frac{1}{3}\left(\bar{T}^{q}{}_{q}+\bar{T}^{\theta}{}_{\theta}+\bar{T}^{\phi}{}_{\phi}\right)=\frac{1}{2}\dot{\bar{\phi}}^{2}-V[\bar{\phi}]\,, \tag{32}\]
and define the current values of the corresponding equation of state and fractional energy density parameters:
\[\bar{w} \equiv \bar{p}_{\rm DE}/\bar{\rho}_{\rm DE}\,, \tag{33}\] \[\bar{\Omega}_{\rm DE} \equiv \frac{8\pi G\bar{\rho}_{\rm DE}}{3H^{2}}\,. \tag{34}\]
Notice that
\[\dot{\bar{\phi}}^{2} = (1+\bar{w})\bar{\rho}_{\rm DE}=\frac{3H^{2}(1+\bar{w})\bar{\Omega}_{\rm DE}}{8\pi G}\,, \tag{35}\] \[V[\bar{\phi}] = (1-\bar{w})\bar{\rho}_{\rm DE}=\frac{3H^{2}(1-\bar{w})\bar{\Omega}_{\rm DE}}{8\pi G}\,. \tag{36}\]
### Quasistatic solution
The quasistatic solution to Eq. (15) outside a compact DM object of radius \(R\) is given by
\[\delta\phi[t,r]=\delta\phi[t,R]\frac{R}{r}\,,\qquad r=|\vec{r}\,|\geq R\,, \tag{37}\]
where
\[\delta\phi[t,R]=\frac{\beta m}{4\pi R}\,. \tag{38}\]
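Note that Eq. (37) with the amplitude (38) is just the Coulomb-like Green's function solution of Eq. (15): since \(\nabla^{2}(4\pi r)^{-1}=-\delta^{3}[\vec{r}\,]\), the profile \(\delta\phi=\beta m/(4\pi r)\) solves the quasistatic equation exactly outside the source.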
The DE flux towards the interior of a sphere of radius \(r\geq R\) centered on the DM compact object is
\[-4\pi r^{2}\dot{\bar{\phi}}\,(\delta\phi)^{\prime}=\beta\,\dot{\bar{\phi}}\,m =\dot{m}\,, \tag{39}\]
thus accounting for the change of the object's mass.
Calculating the partial derivative of \(\delta\phi\) with respect to the physical time \(t\) (at fixed \(q=r/a[t]\)) one obtains
\[\dot{\delta\phi}=\left(\frac{d\ln(\beta m)}{d\phi}\dot{\bar{\phi}}-H\right) \delta\phi\,, \tag{40}\]
where \(H\equiv\dot{a}/a\) is the Hubble parameter. The conditions given in Eqs. (16) and (18) thus imply that the quasistatic approximation is expected to be a good approximation for
\[(\dot{\delta\phi})^{2}\sim H^{2}\delta\phi^{2}\lesssim\dot{\bar{\phi}}^{2}\,, \tag{41}\]
or, equivalently,
\[\frac{r}{R}\gtrsim 2\frac{\mathbf{\beta}^{2}}{|k|}\varphi\,. \tag{42}\]
Here, \(\mathbf{\beta}\) is the nonminimal coupling strength parameter defined by
\[\mathbf{\beta}\equiv\frac{\beta}{\sqrt{8\pi G}}\,, \tag{43}\]
\[k[a]\equiv\frac{d\ln m}{d\ln a}=-\beta\frac{\dot{\bar{\phi}}}{H}=-\mathbf{\beta}\sqrt{3(1+\bar{w})\bar{\Omega}_{\rm DE}} \tag{44}\]
is a related cosmological coupling strength parameter also used often in the literature, and
\[\varphi\equiv\frac{Gm}{R} \tag{45}\]
is the Newtonian gravitational potential at the surface of the compact DM object. Equation (42) implies that the condition
\[|k|\gtrsim 2\mathbf{\beta}^{2}\varphi \tag{46}\]
is required in order to ensure that the quasistatic approximation is valid for all \(r>R\).
## IV De halos
The energy density fluctuations of the quintessence scalar field \(\delta\rho=-\delta T^{t}{}_{t}=(\delta\phi)^{\prime 2}/2+\alpha[\bar{\phi}]\delta\phi\) are the sum of two contributions: (1) the one directly associated with the scalar field gradients \((\delta\phi)^{\prime 2}/2\propto r^{-4}\) and (2) the one
associated with spatial variations of the potential energy \(\alpha[\bar{\phi}]\delta\phi\propto r^{-1}\). The first one provides a contribution that is essentially localized in and around the particle, while the second one is strongly dependent on how one defines the halo radius \(r_{\rm halo}\). In fact, the energy perturbation inside a sphere of radius \(r_{\rm halo}\) centered on the compact DM object associated with contribution 2 is given by
\[\delta E_{V}[r_{\rm halo}] = 4\pi\int_{0}^{r_{\rm halo}}\alpha[\bar{\phi}]\delta\phi r^{2}dr \tag{47}\] \[= 2\pi\alpha[\bar{\phi}]\delta\phi[t,R]Rr_{\rm halo}^{2}\] \[= \frac{3}{2}\left(\frac{\alpha[\bar{\phi}]\delta\phi[t,R]}{V[\bar{\phi}]}\right)\frac{R}{r_{\rm halo}}E_{V}\,,\]
where \(E_{V}=4\pi r_{\rm halo}^{3}V[\bar{\phi}]/3\). Hence, if \(|\alpha[\bar{\phi}]\delta\phi[t,R]|\ll V[\bar{\phi}]\) or \(R\ll r_{\rm halo}\), then this contribution may be neglected. Therefore, in this paper we shall focus only on the contribution directly associated with the scalar field gradients.
### Halo energy
The total energy associated with the scalar field gradients outside a sphere of radius \(R\) centered at \(r=0\) is given by
\[E_{\rm G}^{\rm out}[t]=2\pi\int_{R}^{\infty}(\delta\phi)^{\prime 2}r^{2}dr= \boldsymbol{\beta}^{2}\varphi\,m\,, \tag{48}\]
which will be our best estimate of the total energy of a DE halo (\(E_{\rm halo}\sim E_{\rm G}^{\rm out}\)). For \(\varphi\equiv Gm/R\gtrsim 1\) this result should be taken as a rough estimate, since the impact of the local gravitational field on the dynamics of the quintessence scalar field has not been taken into account.
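Explicitly (an intermediate step not spelled out above), the exterior solution gives \((\delta\phi)^{\prime}=-\beta m/(4\pi r^{2})\), so that
\[E_{\rm G}^{\rm out}=2\pi\int_{R}^{\infty}\frac{\beta^{2}m^{2}}{16\pi^{2}r^{4}}\,r^{2}dr=\frac{\beta^{2}m^{2}}{8\pi R}=\frac{G\mathbf{\beta}^{2}m^{2}}{R}=\mathbf{\beta}^{2}\varphi\,m\,,\]
where the last two equalities use \(\beta^{2}=8\pi G\mathbf{\beta}^{2}\) (Eq. (43)) and \(\varphi=Gm/R\) (Eq. (45)).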
### Halo radius
The radius of a DE halo may be determined by defining the halo as the region where the energy density associated with the scalar field gradients exceeds the background energy density or, equivalently, \((\delta\phi)^{\prime 2}/2>3H^{2}/(8\pi G)\). This happens for values of \(r\) smaller than
\[r_{\rm halo}[t]=\left(\sqrt{\frac{2}{3}}\boldsymbol{\beta}\,\varphi\frac{R}{H }\right)^{1/2}\,. \tag{49}\]
Here we implicitly assume that \(R<\sqrt{2/3}\,\boldsymbol{\beta}\varphi H^{-1}\) or, equivalently, that \(r_{\rm halo}>R\).
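To get a feel for these scalings, a minimal numerical evaluation of Eqs. (48) and (49) follows; all input values are illustrative assumptions, not observational fits.

```python
import numpy as np

beta = 0.05    # nonminimal coupling strength (bold beta), assumed value
phi_N = 1e-6   # Newtonian potential Gm/(Rc^2) at the object's surface
RH = 1e-15     # object radius R in units of the Hubble radius 1/H

E_halo_over_m = beta**2 * phi_N                              # Eq. (48)
# Eq. (49) rewritten as (r_halo/R)^2 = sqrt(2/3) * beta * phi_N / (R*H)
r_halo_over_R = np.sqrt(np.sqrt(2 / 3) * beta * phi_N / RH)
print(f"E_halo/m ~ {E_halo_over_m:.1e}, r_halo/R ~ {r_halo_over_R:.1e}")
# -> E_halo/m ~ 2.5e-09, r_halo/R ~ 6.4e+03
```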
### Equation of state
Within a DE halo the DE equation of state parameter,
\[w_{\rm DE}\equiv\frac{p_{\rm DE}}{\rho_{\rm DE}}=\frac{\bar{p}_{\rm DE}+ \delta p_{\rm DE}}{\bar{\rho}_{\rm DE}+\delta\rho_{\rm DE}}\,, \tag{50}\]
is dominated by the contribution of the scalar field gradients to the DE proper density and pressure. These satisfy \(\delta p_{\rm G}=-(\delta\phi)^{\prime 2}/6=-\delta\rho_{\rm G}/3\) or, equivalently,
\[w_{\rm G}\equiv\frac{\delta p_{\rm G}}{\delta\rho_{\rm G}}=-\frac{1}{3}\,. \tag{51}\]
Hence, the DE equation of state parameter is expected to be close to \(-1/3\) within an halo, especially in its central regions.
### DE perturbations inside compact DM objects
The knowledge of the density profile inside a compact DM object would be required in order to accurately compute the energy \(E_{\rm G}^{\rm in}\) associated with the scalar field gradients for \(r<R\) -- notice that this contribution would vanish if and only if all the mass was located at the surface of the object, which would be utterly unrealistic. Here we will provide a rough estimate of \(E_{\rm G}^{\rm in}\), assuming that \(Gm/r\) is always significantly smaller than unity and that the DM energy density distribution for \(r<R\) is uniform. In this case, the quasistatic solution to Eq. (15) is given by
\[\delta\phi[t,r]=\frac{3}{2}\delta\phi[t,R]\left(1-\frac{1}{3}\left(\frac{r}{R }\right)^{2}\right)\,. \tag{52}\]
The total energy directly associated to the scalar field gradients inside the compact DM object would then be equal to
\[E_{\rm G}^{\rm in}[t]=2\pi\int_{0}^{R}(\delta\phi)^{\prime 2}r^{2}dr=\frac{1}{5}E_ {\rm G}^{\rm out}[t]\,, \tag{53}\]
so that \(\sim 17\%\) of the energy associated with the scalar field gradients would be located inside the DM object. In general one would expect the DM energy density to be a decreasing function of \(r\). Therefore, this should be regarded as a lower limit on \(E_{\rm G}^{\rm in}\).
## V Discussion and conclusions
In this paper we characterized the properties of DE concentrations that are expected to form in and around compact DM objects as a consequence of a coupling of the mass of DM particles to a standard quintessence scalar field. We determined the dependence of the energy and radius of a DE halo on the nonminimal coupling strength and Hubble parameters, and on the mass and radius of the associated compact DM object. We have also shown that deep inside a DE halo the equation of state parameter is close to \(-1/3\) and estimated the contribution of the DE energy perturbations inside DM objects.
Although the results presented in this paper were obtained considering a single comoving compact DM object,
they should also hold in the case of a network of nonrelativistic DM objects. However, in this case the background evolution of the DE scalar field will be affected by the energy transfer between DE and DM. Although this effect needs to be considered for an accurate characterization of the background dynamics of the DE scalar field, it is not expected to significantly change our main results. Also, our results were obtained in the context of the quasistatic approximation, assuming that local contributions to the evolution of the mass of dark matter particles can be neglected. If that is not the case, a breakdown of the quasistatic approximation is expected, which can be associated with a significant transfer of linear momentum between moving DM particles and the DE scalar field [36].
Scalar field gradients associated with the coupling of the mass of DM particles to a DE scalar field give rise to attractive fifth forces between DM particles whose strength is equal to \(2\beta^{2}\) times that of gravitational forces. In addition to these, energy-momentum conservation in general relativity requires that any change of the proper mass of a compact DM object associated with the non-minimal coupling to the DE scalar field be compensated by a corresponding decrease of its speed with respect to the local cosmological frame. This assumes that backreaction effects associated with a significant transfer of linear momentum from moving DM particles to the DE scalar field can be neglected, which is expected to be the case as long as the quasistatic approximation holds everywhere. This effect essentially changes the strength of the velocity-dependent cosmological damping of the speed of compact DM objects by a factor of \(1+k\).
The velocity-dependent forces and fifth forces in coupled DE models have an impact on the growth of cosmic structures, which can be constrained observationally. A tomographic analysis of coupled DE has been performed in [42], where a significant redshift dependence of the constraints on the coupling strength has been found (assuming, for simplicity, a non-negative coupling strength parameter): \(\beta\lesssim 0.1\) at \(z<5\), and \(\beta\lesssim 0.05\) in the redshift range \(5<z<500\), while \(\beta\lesssim 0.02\) for a constant coupling (at \(68\%\) confidence level). This imposes stringent constraints on the ratio between the energy of a DE halo and the mass of the associated compact DM object: \(E_{\mathrm{halo}}/m\lesssim 0.01\varphi\) at \(z<5\). It also precludes any substantial contribution (i.e., exceeding a subpercent level) of DE halos to the DE equation of state parameter. Whether these limits can be relaxed by considering broader families of coupled DE models will be the subject of future work.
###### Acknowledgements.
We thank Lara Sousa, Rui Azevedo, Vasco Ferreira, and David Gruber for many enlightening discussions. We acknowledge the support by Fundacao para a Ciencia e a Tecnologia (FCT) through the research Grants No. UIDB/04434/2020 and No. UIDP/04434/2020. This work was also supported by FCT through the R&D project 2022.03495.PTDC - Uncovering the nature of cosmic strings.
|
2306.00797 | Microstructure quality control of steels using deep learning | In quality control, microstructures are investigated rigorously to ensure
structural integrity, exclude the presence of critical volume defects, and
validate the formation of the target microstructure. For quenched,
hierarchically-structured steels, the morphology of the bainitic and
martensitic microstructures are of major concern to guarantee the reliability
of the material under service conditions. Therefore, industries conduct small
sample-size inspections of materials cross-sections through metallographers to
validate the needle morphology of such microstructures. We demonstrate
round-robin test results revealing that this visual grading is afflicted by
pronounced subjectivity despite the thorough training of personnel. Instead, we
propose a deep learning image classification approach that distinguishes steels
based on their microstructure type and classifies their needle length alluding
to the ISO 643 grain size assessment standard. This classification approach
facilitates the reliable, objective, and automated classification of
hierarchically structured steels. Specifically, an accuracy of 96% and roughly
91% is attained for the distinction of martensite/bainite subtypes and needle
length, respectively. This is achieved on an image dataset that contains
significant variance and labeling noise as it is acquired over more than ten
years from multiple plants, alloys, etchant applications, and light optical
microscopes by many metallographers (raters). Interpretability analysis gives
insights into the decision-making of these models and allows for estimating
their generalization capability. | Ali Riza Durmaz, Sai Teja Potu, Daniel Romich, Johannes Möller, Ralf Nützel | 2023-06-01T15:25:53Z | http://arxiv.org/abs/2306.00797v1 | # Microstructure quality control of steels using deep learning
###### Abstract
In quality control, microstructures are investigated rigorously to ensure structural integrity, exclude the presence of critical volume defects, and validate the formation of the target microstructure. For quenched, hierarchically-structured steels, the morphology of the bainitic and martensitic microstructures are of major concern to guarantee the reliability of the material under service conditions. Therefore, industries conduct small sample-size inspections of materials cross-sections through metallographers to validate the needle morphology of such microstructures. We demonstrate round-robin test results revealing that this visual grading is afflicted by pronounced subjectivity despite the thorough training of personnel. Instead, we propose a deep learning image classification approach that distinguishes steels based on their microstructure type and classifies their needle length alluding to the ISO 643 grain size assessment standard. This classification approach facilitates the reliable, objective, and automated classification of hierarchically structured steels. Specifically, an accuracy of 96% and roughly 91% is attained for the distinction of martensite/bainite subtypes and needle length, respectively. This is achieved on an image dataset that contains significant variance and labeling noise as it is acquired over more than ten years from multiple plants, alloys, etchant applications, and light optical microscopes by many metallographers (raters). Interpretability analysis gives insights into the decision-making of these models and allows for estimating their generalization capability.
Quality control · Microstructure · Grain size · Steel · Martensite · Bainite · Deep learning
## 1 Introduction
Materials in many applications are exposed to complicated loading conditions. Along the value chain, components and materials therein are exposed to process scatter at all stages. This establishes the demand for quality control. The presence of major structural defects in components can be excluded through a variety of non-destructive testing methods which exploit ultrasonic or magnetic sensing principles for instance. Most sensor principles that are applicable in-line,
however, retrieve integral information of large volumes as they exhibit neither an adequate spatial resolution nor signal sensitivity to measure subtle microstructural heterogeneity. Moreover, most sensors provide compounded information on residual stresses, defect density, grain size, chemical composition, and more.
Therefore, when particular microstructural aspects such as grain size distribution are of interest, most industries fall back on destructive sectioning and direct imaging methodologies, which are performed on small sample sizes. A typical example of this is hierarchically-structured steel microstructures, such as martensite or bainite, for which the quantification of feature sizes in the primary microstructure is only possible by imaging. Trained metallographers prepare metallographic cross-sections of components through consecutive cutting, polishing, and etching and then image them using light optical microscopy. The resulting micrographs, depending on the component's intended application and loading paths, can be inspected with respect to different microstructural aspects. In bearing steels, in which plate martensite or bainite is often present, the contrasted cross-section shows acicular, i.e. needle-shaped, structures (Bepari (2017)). In this case, the so-called _needle length_ affects the material's resistance to rolling contact fatigue (Shur et al. (2005)) and is thus of pronounced interest. These microstructures are the consequence of the partitioning of the high-temperature austenite phase into a martensitic or bainitic microstructure. Some examples of martensitic steels with varying cooling rates culminating in different morphologies and phase compositions are depicted in Figure 1. The microstructures are composed of martensite or bainite ('M/B') as the primary microstructure as well as dispersed minor phases such as retained austenite (bright constituents annotated with 'RA') and carbide particles annotated with 'C'. While both minor phases occur bright, the carbide particles are either circular or elliptic and slightly smaller than the irregular-shaped retained austenite constituents. Depending on the exact treatment conditions, both minor phases are differently distributed.
In this work, two classification problems for hierarchically-structured steel microstructures are tackled. The first is the classification of the needle length as depicted in Figure 1 and listed in Table 1. This classification is inspired by the micrographic grain size categorization standard according to ISO 643, which is usually applied for equiaxed and unimodally distributed microstructures. Henceforth, we refer to this task as 'grain size', 'needle morphology', or 'needle length' classification. The classification is applied to bearing steels that underwent different hardening heat treatments. Specifically, through-hardening steels such as 100Cr6 and 100CrMnSi6-4 in bainitic (B) and martensitic (G) through-hardened states, as well as quenched and tempered martensitic C56E2 steel (M), are taken into consideration. A distinction between these three subtypes is also of interest as they entail different material properties. This poses the second classification objective. In the following, we refer to the joint information of microstructure subtype and needle length, e.g. 'G7', as _structure code_. By attempting the microstructure subtype distinction, we investigate whether nuanced differences between bainitic and martensitic steel variants (Hillert (1995)) can be identified by computer vision approaches. In contrast to efforts by Muller et al. (2020); Gola et al. (2018); Zhu et al. (2022), we attempt a macroscopic distinction on large field-of-view light optical micrographs without discerning substructures at a prior austenite grain level. Since the bainite and martensite subtype labels in the work at hand are provided from the quantitative temperature-time profile during cooling, this classification problem resembles the one presented by Bulgarevich et al. (2019). The classification of needle length following ISO 643 has not been reported in the literature to the best of our knowledge. Published work at the intersection between DL and metallography, to date, entailed image datasets acquired under comparatively controlled and repeatable conditions, see DeCost et al. (2019); Durmaz et al. (2021). In this work, on the other hand, the difficulty lies in the many degrees of freedom present in the process chain, which cause a profound data variance.
Figure 1: Martensitic steels distinguished by their needle morphology. The arising martensite/bainite (M/B), retained austenite (RA), and carbide (C) constituents are annotated in the figure. The subfigure captions represent the structure codes associated with the grain size according to the ISO 643 standard. Note that this set of images does not represent the variance of the complete dataset as it only takes a single subtype into consideration. A random set of images showcasing a more realistic representation of the data scatter is depicted in Figure 8.
The dataset utilized in this work exhibits a pronounced variance as it covers multiple alloys, heat treatments, polishing protocols, storage times, etchant concentrations, etching durations, and image acquisition parameters. All images are acquired in industrial testing laboratories over a time span of more than 10 years, and their needle length labels (6-10) are assigned by numerous metallographers. A single micrograph is typically inspected by a single metallographer. As the distinction of ISO 643 grain sizes 9 and 10 does not have any application relevance for these bearing steels, both classes in this work are aggregated to the classes 9-10 for each microstructure subtype. Further, since image instances with coarse acicular needles are very scarce in industrial processing, and thus data availability is low, they will be excluded from the classification task, see Table 1. Segregation in the material, especially of carbon, chromium, and nickel can influence the local formation of the hardened microstructure and cause fluctuations in the needle morphology. Depending on the heterogeneity of the microstructure, different criteria have been applied to rate the grain size, see Table 1 and Section 5.1.
In an attempt to render both image classification tasks automated and objective, we apply different deep learning (DL) methodologies. To be precise, two approaches are employed, which are illustrated in Figure 2a and 2b. The first is a single multi-class classification model that confronts both classification challenges simultaneously, i.e., it directly predicts the structure code. In the second, one model categorizes the micrographs based on the microstructure subtype they show, and another model classifies the image in terms of its needle morphology. Depending on the first model's subtype prediction, a dedicated needle morphology classifier model is selected to perform the needle length classification. Comparing both approaches allows us to investigate whether decomposing the problem and addressing the partial tasks with specialized models is beneficial.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Material subtype** & **Grain size** & **Structure code** & **Needle morphology class (mean crit.)** & **Needle morphology class (max crit.)** \\ \hline
\multirow{4}{*}{Martensitic 100Cr6 (G)} & 6 & G6 & Coarse acicular to acicular & Coarse acicular \\
 & 7 & G7 & Acicular & Coarse acicular to acicular \\
 & 8 & G8 & Fine acicular & Acicular \\
 & 9–10 & G9–G10 & Fine acicular to structureless & Fine acicular \\ \hline
\multirow{4}{*}{Martensitic C56E2 (M)} & 6 & M6 & Coarse acicular to acicular & Coarse acicular \\
 & 7 & M7 & Acicular & Coarse acicular to acicular \\
 & 8 & M8 & Fine acicular & Acicular \\
 & 9–10 & M9–M10 & Fine acicular to structureless & Fine acicular \\ \hline
\multirow{4}{*}{Bainitic 100Cr6 (B)} & 6 & B6 & Coarse acicular to acicular & Coarse acicular \\
 & 7 & B7 & Acicular & Coarse acicular to acicular \\
 & 8 & B8 & Fine acicular & Acicular \\
 & 9–10 & B9–B10 & Fine acicular to structureless & Fine acicular \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Steel grade families and grain sizes considered in this study. The classes corresponding to the rows with gray background color are discarded for model training. There are needle length ranges for typical and sporadically occurring large needles associated with the mean and maximum criterion, respectively. Generally, the needle length ranges for both criteria are disjoint. Depending on the measured lengths and their frequency, a metallographer decides which criterion to apply for the micrograph classification. The exact thresholds cannot be provided due to confidentiality.
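To make the dispatch logic of the two-stage approach concrete, a minimal inference sketch follows. The model objects, the label mappings, and the retained grain-size classes are illustrative assumptions rather than the authors' implementation:

```python
import torch

SUBTYPES = ["G", "B", "M"]          # material subtype labels
GRAIN_CLASSES = ["7", "8", "9-10"]  # illustrative retained classes (cf. Table 1)

@torch.no_grad()
def predict_structure_code(image, subtype_model, grain_size_models):
    """Two-stage prediction: subtype first, then a subtype-specific
    grain-size classifier; `image` is a (C, H, W) tensor."""
    x = image.unsqueeze(0)                                  # add batch dimension
    subtype = SUBTYPES[subtype_model(x).argmax(1).item()]
    # Dispatch to the grain-size model specialized for that subtype.
    grain = GRAIN_CLASSES[grain_size_models[subtype](x).argmax(1).item()]
    return subtype + grain                                  # e.g. 'G7'
```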
In order to evaluate the inter-rater reliability of both image classification tasks, a round-robin test is designed and presented here. Thereby, the subjectivity of the subtype and needle morphology classification can be assessed. Further, the round-robin results act as a baseline for the computational assessment.
## 2 Results
In the following, the overall results are presented for both strategies, the global model and the two-stage (2S) approach. The results are listed in Table 2. Two common deep learning architectures are taken into consideration, ResNet-50 and ResNet-18. Moreover, the 'fw' annotation in the table indicates models trained with frozen weights in the feature extractor portion of the model. As an evaluation metric, accuracy is provided. The accuracy, for the data and models at hand, virtually coincides with the F\({}_{1}\)-score. In a repeatability study, where three distinct test-train splits were sampled for the bainite 'B' grain size classification using three-fold cross-validation, the data sampling-induced fluctuations were confirmed to be negligible. Therefore, we report single training performances here.
The combined accuracy for both tasks reaches up to 90.49% using ResNet-18 as the architecture within a two-stage approach. On the dataset at hand, training the ResNet-18 architecture culminates in a better performance than ResNet-50, which in turn outperforms the case in which only the classifier portion of the ResNet-18 was optimized (fw). The two-stage approach marginally yet consistently outperforms the scenario in which a single model performs both classification tasks. It is evident that the merit of the two-stage approach depends on the architectural choice and the training strategy. For instance, the two-stage ResNet-18 (fw) with frozen feature extractor weights outperforms its global counterpart by roughly 4%. In contrast, in both fully tuned examples, the performance improvement through the two-stage approach is smaller and amounts to 1.2% and 0.3% for the ResNet-50 and ResNet-18, respectively. The models' performances can be further dissected using Table 3 and the confusion matrices provided in Figures 3 and 4. In the table, the performances of each task-specific model contributing to the two-stage model are listed.
\begin{table}
\begin{tabular}{l|c} \hline \hline
**Model** & **Accuracy (\%)** \\ \hline
ResNet-18 global (fw) & 76.08 \\
ResNet-18 2S (fw) & 80.06 \\ \hline
ResNet-18 global & 90.18 \\
ResNet-18 2S & 90.49 \\ \hline
ResNet-50 global & 87.73 \\
ResNet-50 2S & 88.98 \\ \hline \hline
\end{tabular}
\end{table}
Table 2: The overall model prediction accuracies achieved on the complete test set. The abbreviation ‘fw’ indicates models which were trained with frozen weights of the feature extractor portion. The specifications ‘global’ and ‘2S’ refer to direct structure code prediction and the two-stage approach depicted in Figure 2, respectively.
Figure 2: A comparison of a single multi-class classification (**A**) setting with the proposed two-stage approach (**B**) of first determining the material subtype and then the needle morphology.
From this table, it can be observed that the fully trained ResNet-18 variant improves on or matches the performance of the ResNet-50 variant throughout all grain size classification tasks. In contrast, the ResNet-50 performs slightly better at distinguishing the material subtypes. Moreover, it is apparent that the material subtype classification out of all tasks reaches the highest performance approaching 97% accuracy. Thus, interestingly, the models manage to distinguish the martensitic, bainitic through-hardened, and martensitic through-hardened subtypes very well. This is reflected by the confusion matrix of the ResNet-18 2S model depicted in Figure 4 where it is illustrated that the remaining error cases predominantly arise from confusing the grain size categories (8 and 9-10) associated with small needle features in the B and G class. In terms of grain size classification, the three models dedicated to the three subtypes show some variation in their performance. On the larger G and B subsets, a performance of 93.9% and 86.4% is attained, respectively. In the confusion matrix in Figure 4, it can be seen that the performance of the bainite grain size model falls short mainly because a notable portion of B instances was predicted as adjacent structure codes. The underlying reason for this will be further explored in the discussion section. Note, that the performance of the grain size models for martensitic grades (M) being seemingly unaffected by the architecture choice could be owed to the small size of the test set measuring only 14 images.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline & **Subtype** & **Grain size G** & **Grain size B** & **Grain size M** \\
**Model** & **Accuracy (\%)** & **Accuracy (\%)** & **Accuracy (\%)** & **Accuracy (\%)** \\ \hline ResNet-18 2S & 96.35 & 93.90 & 86.36 & 92.86 \\ ResNet-50 2S & 96.96 & 93.09 & 81.82 & 92.86 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The prediction accuracies achieved on the test set for the different classification tasks. The values reported here are obtained by the individual elements of the two-stage (2S) models, see Figure 2b.
Figure 3: The confusion matrix of the global ResNet-18 model with an overall accuracy of 90.18% obtained on the test set.
Figure 4: The confusion matrix of the two-stage ResNet-18 model with an overall accuracy of 90.49% obtained on the test set.
## 3 Discussion
### Round-robin test and inter-rater reliability assessment
A round-robin test was performed in which 14 metallographers trained on the needle length classification task participated in judging twenty micrographs. Those micrographs were selected from the test set based on the outcome of an intermediate DL model such that some correctly and some incorrectly classified micrographs were picked for each subtype and some grain sizes. In general, the images selected pose rather difficult instances since twelve out of twenty were misclassified by the intermediate deep learning model as well. This is in line with comments by the raters who deemed some images over-etched and thus difficult to classify. We opted for mostly identical classification conditions for the DL models and participants to facilitate comparability. The round-robin test entailed successively rating micrographs with respect to their subtype and grain size. In the case of subtype misclassification (e.g. 'M' instead of 'B'), the participants were presented with the consequential range of structure codes (M5-M10) rather than with the correct spectrum of structure codes (B5-B10) as answer options. Additionally, metallographers were able to provide information on which micrograph regions were involved in their grain size selection -- microstructural extrema or an average. Lastly, participants were able to provide free text comments for each image. All functionalities were integrated into a web application for straightforward participation, rating retrieval, and evaluation.
One important aspect to take note of is that the grain size label ('ground truth') for each image is determined by a single person. Therefore, the grain size label is afflicted by subjectivity as well. However, in contrast to the round-robin test participants, the original raters during day-to-day operation are provided with comprehensive background information about the specimen (alloy, heat treatment, storage, and contrasting) and flexibility during assessment (microscopy settings, observation of different specimen regions, and occasionally consulting further colleagues). While the decision about the reference grain size label is better informed than a single round-robin test participant, the reference label arguably does not represent a credible ground truth as it is still founded on the visual distinction of classes which are partly defined by nuanced changes in their image texture (see Figure 1c-e) rather than quantitative measurements. In contrast, the subtype classification is deduced objectively from the heat treatment process parameters. However, the distinction between subtypes is not a task that the raters are commonly facing. Relying solely on images with micron bars rather than additional contextual processing information might incentivize the round-robin test participants to deviate from their daily classification habits and rely more on measuring needle length. Even the awareness of contributing to a round-robin test can result in a non-natural, to some degree disproportionate classification effort.
First, an overview of the round-robin test outcome for all micrograph instances is provided. Most errors were made during grain size assessment rather than subtype distinction. At first glance, this is surprising since the distinction between martensitic and bainitic microstructures is generally considered challenging and is not a task faced by the participating metallographers on a day-to-day basis. Specifically, out of 280 total predictions, the correct subtype was identified in 179 cases (63.9%). Out of this subset, the correct grain size was determined in 56 instances (31.3%). This corresponds to an overall accuracy of 20%. When aggregating the grain sizes 9-10 and dropping the ones with gray background in Table 1 (by discarding the corresponding instances with those predictions or labels leaving 187 predictions), the accuracy reaches 25.1%. Considering the difficulty of the provided micrographs, the two successive classifications one of which is not commonly tackled during day-to-day operation, the noisy reference labels, and the vaguely defined decision thresholds for the mean or maximum microstructure criterion to be applied, the overall accuracy of 20-25% is not striking. It was, however, unexpected to us that the subtype classification achieved a relatively high accuracy with 64% under these circumstances, despite the DL models performing well at this task. This means that many metallographers picked up nuanced differences between these subtypes. Note that the 20 images were partitioned into 11, 5, and 4 for the G, B, and M subtypes, respectively. This skew might have simplified the subtype classification task due to the metallographer's awareness that the G subtype is dominating the data (see Supplementary Table 1) and their familiarity with the G material state. Indeed, the metallographers also performed better in the grain size predictions of the more common G and B subtypes and achieved accuracies of 0.21 and 0.23 as opposed to 0.16 for the M minority class. The final global ResNet-18 model evaluated on the round-robin test images, which are rather difficult cases, achieves an overall accuracy of 70.59% surpassing the 20-25% achieved by human expert classification by a large margin. On a less difficult, more representative set, the classification accuracy of the raters would presumably increase.
Figure 5 shows the distribution of structure code predictions for each image. A majority vote in Figure 5 coincides with the reference label for six image instances. Apart from that, there are eight image instances where a majority vote is adjacent to the reference label. In such cases, it is debatable whether the reference label or the majority vote is more trustworthy. Leaving aside the reference labels, in order to assess the inter-rater reliability, the overall Fleiss' kappa score was computed across all 14 participants. It amounts to 0.146, which is fairly low, even in view of the many classes, and thus a sign of elevated subjectivity and mediocre agreement between the participants. Only considering the material subtype distinction results in a higher Fleiss' kappa score of 0.330. When computing the overall Fleiss'
kappa after reducing the possible classes, i.e. aggregating predictions and labels of grain size 9-10 and dropping image instances with grain sizes marked gray in Table 1, a value of 0.152 is obtained which is slightly higher than the 0.146. This is not surprising as the score depends on the cardinality of the categorical variables and the aggregation increases the accordance between the raters. Note that among the 14 participants, five deal with microstructures and grain size classification daily, while the others do less regularly (weekly to monthly). This affects the scatter of the round-robin test. When analyzing the overall Fleiss' kappa only for those five participants, the score increases from 0.152 to 0.318. This substantially higher accordance seems reasonable as these metallographers are not only more familiar with differences between the subtypes but also were more thoroughly trained to perform the grain size classification.
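For reference, such agreement scores can be reproduced with standard tooling; a minimal sketch using statsmodels is shown below, with random placeholder ratings since the 20 × 14 rating matrix itself is not public:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per image, one column per rater; entries are structure codes
# encoded as integer categories. Placeholder data for illustration only.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 12, size=(20, 14))

table, _ = aggregate_raters(ratings)   # per-image counts of each category
print(f"Fleiss' kappa: {fleiss_kappa(table, method='fleiss'):.3f}")
```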
### Model performances
As opposed to ratings by humans, the ResNet-18 and ResNet-50 models are deterministic and therefore repeatable as no stochastic elements such as dropout layers are used in their architecture. While at training-time stochastic online data augmentations are applied, at testing-time such operations were not utilized. The DL models supposedly learn an adequate mean representation and accurate decision boundaries from the noisy labels of many raters. An overall accuracy slightly exceeding 90% is very satisfactory. This applies especially in view of the large data variance and the noisy, subjectivity-afflicted reference labels which limit the attainable performance on the test set. Training the complete ResNet models rather than relying on ImageNet weight initialization in the feature extractor allows for a better model specialization towards the downstream tasks which differ significantly from ImageNet classification. Thus, there is a pronounced performance boost for the completely optimized models as opposed to the ones where only the classification head was optimized, see Table 2. The merit of the two-stage model was shown to be larger (4% improvement) in the frozen weight scenario. This can be ascribed to the fact that tuning weights of a single fully-connected classification head in the global approach is presumably inappropriate for the present set of diverse tasks and classes. On the contrary, the performance of the fully trained models is likely most of all limited by annotation noise, thus overshadowing the influence of the two-stage approach.
Figure 5: A plot in which the circles indicate correct (green) and incorrect (blue) structure code predictions of the round-robin test participants for each image. The size of the circle and the contained number indicate the number of predictions for that specific class. For images where no grading is consistent with the reference label, an empty green circle is plotted to indicate the reference label.
The two-stage approach is motivated by the presumption that the morphology classification requires an emphasis on different features than distinguishing between martensite and bainite subtypes. It is anticipated that the latter relies on the distribution of retained austenite and carbide phases, while needle morphology classification relies on length scales in the hierarchical substructure. Since the subtypes exhibit distinct microstructure length scales, decomposing the morphology classification further into three subtype-specialized grain size models was deemed promising. However, the results show that the two-stage model only marginally outperforms the global model, and it is arguable whether it justifies the additional effort of training four distinct models. The two-stage approach facilitates picking the appropriate architectures for both tasks, depending on the representation power needed. In our case, a comparatively more expressive ResNet-50 and a ResNet-18 seem to be beneficial for the subtype and grain size classification stages, respectively, as suggested by Table 3. When exploring how the decision-finding process in the two-stage approach differs from the global model, it can be observed that the two-stage model, as expected, decomposes both decisions and relevant regions. Three Grad-CAM activation maps provided in Supplementary Figure 2 visualize this behavior. The same G class micrograph is passed to the global model and both stages of the two-stage model, and Grad-CAM maps are extracted from the last layer of each ResNet-18 model. It seems that the global model considers regions of retained austenite, which are virtually exclusive to the G class, to perform the subtype distinction and simultaneously a few prominent needles to infer the grain size. In contrast, in the sequential approach, high activations occur solely at regions with high retained austenite concentration in the subtype stage and only at distinct needles in the grain size stage. It can be observed that the subtype model effectively takes more retained austenite regions into account to form its decision than the global model. This stronger activation might be caused by comparatively more specialized convolution filters, and the increased feature density could potentially help to increase the model's confidence.
Whether and how tasks should be partitioned depends on the similarity of the tasks. For the challenge faced here, multi-task learning (MTL) is a very interesting and related paradigm. It combines hard or soft model parameter sharing with distinct, task-optimized classification heads to address multiple tasks and optimize multiple objectives with a single model (Ruder (2017)). MTL can be especially beneficial if the tasks are related and rely on similar image features (Caruana (1997)). This is presumably the case for the grain size distinction tasks across the three different subtypes. In fact, in the G and B subtypes, the needle length thresholds defining the grain sizes are virtually identical and deviate slightly for the M subtype. This could prospectively incentivize using a single jointly optimized backbone with three classification heads rather than three distinct subtype-specific grain size models. Moreover, in the current approach, only the data subsets of the respective subtype are used to train each of the three grain size classification models. This represents a strong restriction in terms of data quantity, especially for the M grain size model. Training the largest portion of the architecture with joint datasets, e.g. in the MTL setting, might alleviate this problem. The data subsets for the three grain size classification tasks are expected to have distinct labeling noise patterns. In such cases, parameter sharing in the architecture's backbone can result in learning more general representations where the data subset-dependent labeling noise is ignored (Ruder (2017)). MTL might incentivize the model to learn the relevance of needle length due to the additional evidence provided by the supervisory signal of different subtypes, despite their distinct needle morphology. Based on the earlier discussion of the decision-finding process in the two-stage model, it is rather unlikely that the subtype and grain size classification tasks rely on similar enough features to justify extensive parameter sharing. Nowadays, self-optimizing MTL models have been developed which optimize their architecture to train specific layers' parameters jointly across different tasks whenever it is beneficial to the overall performance, see Misra et al. (2016); Ruder et al. (2017).
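A minimal sketch of such hard parameter sharing is given below: one jointly trained backbone with one grain-size head per subtype. The architecture details (ResNet-18 backbone, linear heads, three grain-size classes) are illustrative assumptions, not a tested configuration:

```python
import torch.nn as nn
from torchvision import models

class SharedBackboneMTL(nn.Module):
    """Hard parameter sharing: a single ResNet-18 backbone feeds one
    grain-size classification head per material subtype (G, B, M)."""

    def __init__(self, n_grain_classes=3):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        n_features = backbone.fc.in_features
        backbone.fc = nn.Identity()          # strip the ImageNet classifier
        self.backbone = backbone             # parameters shared by all tasks
        self.heads = nn.ModuleDict(
            {s: nn.Linear(n_features, n_grain_classes) for s in ("G", "B", "M")}
        )

    def forward(self, x, subtype):
        return self.heads[subtype](self.backbone(x))
```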
#### 3.2.1 Material subtype classification
When examining the images, some distinctive features between the different subtypes and commonalities between instances of the same subtype can be identified. Specifically, the martensitic instances (M class) do not exhibit any minor phases (neither carbides nor retained austenite), and the overall needle structure appears comparatively coarse. In contrast, in the bainitic through-hardened (B class) instances, carbides always occur while virtually none have retained austenite. Moreover, the B class is characterized by a fairly heterogeneous needle length distribution. In some cases, a bimodal needle length distribution with two distinct regions occurs, which is exclusive to the B class. The martensitic through-hardened material (G class) is typically more homogeneous than B and has an overall finer needle morphology. It can contain both carbides and retained austenite, depending on the grain size. In finer-structured material (8-10) carbides occur, while in coarser martensite (6-7) retained austenite is typical, see Figure 1.
It is interesting to explore whether the subtype model picks up on the same discriminative image features to achieve the accuracy of 97% or whether it relies on other, maybe more subtle, aspects, such as slight differences in needle morphology induced by the varying quenching process. In order to investigate this, a few typical and atypical images of all subtypes are passed to the ResNet-18 subtype model, and the resulting Grad-CAM maps are illustrated in Figure 6.
Figure 6: Steel micrographs showing different hierarchical microstructures (**A, C, E**) are input into the subtype classification model to extract the micrograph’s Grad-CAM maps at the model’s last layer (**B, D, F**). The Grad-CAM maps use ‘jet’ colormaps. Thus, red, orange, and yellow colors highlight regions with increased activation.
In Figure 6a and 6b, a typical bainitic micrograph and its activation map are shown. From the Grad-CAM map, it is evident that the subtype model concentrates on the coarser needle structures embedded within finer regions and on the clustered carbide regions. Both are typical characteristics of the B class. As opposed to this, in the M class micrograph and activation map depicted in Figure 6c and 6d, the attention of the model is spread more uniformly across the image. This is plausible as the C56E2 martensitic material seemingly exhibits a single-phase and relatively uniform microstructure with generally coarser grains. When assessing a through-hardened martensite (G class) micrograph in Figure 6e and 6f, it can be seen that the model focuses on retained austenite regions that are exclusive to the G class. An image pair in Supplementary Figure 3a and 3b shows a misclassification where a B class micrograph was confused for a martensitic through-hardened (G class) grade. This misclassification is probably owed to the fact that the microstructure is exceptionally homogeneous for a bainitic micrograph, with no visible variation in needle size. Therefore, it indeed seems the predictions are only incorrect when the micrographs deviate from the trends introduced at the beginning of this section. While the features associated with those trends are discriminative for the range of heat treatments and alloys presented here, there are naturally other bainitic and martensitic grades that do not comply with the trends outlined in the first paragraph. This raises questions regarding model generalization to arbitrary bainitic/martensitic steels, e.g. ones in which retained austenite/martensite-austenite islands or pearlite constituents occur in a bainitic matrix.
#### 3.2.2 Needle morphology classification
The largest portion of the remaining errors can be ascribed to the grain size classification and annotation noise therein. Especially the distinction between the finest grain sizes, G8/G9-10 and B8/B9-10, seems to cause misclassifications. In the bainitic materials, these misclassifications can be primarily attributed to heterogeneous microstructures. Figure 7 shows two bainitic micrographs that have been passed to the bainitic grain size classification model, along with the resulting Grad-CAM maps. Notably, the model takes the coarser regions into account to perform a correct prediction, see Figure 7a. This is a typical model behavior, as the model learns to apply the maximum criterion (see Table 1). In a few cases, however, such as the one illustrated in Figure 7c-d, when the coarser regions occupy a small portion of the micrograph, the model tends to consider the finer regions to perform the prediction. While utilizing some kind of area threshold is the desired behavior, and it is promising that a lower area proportion of coarse regions leads the model toward the finer grain size prediction, the reference label was selected based on the coarse regions in this case. Instead of the padding pre-processing approach that was eventually applied, we initially attempted to tile the raw images to fixed tile sizes and aggregate the tiles' predictions. However, due to the heterogeneity of some micrographs in terms of needle length (see Figure 7), the tiling introduced further labeling noise and thus reduced the performance.
### Implications for quality control of microstructures
While there are significant fluctuations in metallographic cross-section preparation and some variations in the alloying, there is also a notable dataset bias since the data is drawn from repeatable production processes. Specifically, the heat treatments culminate only in a small portion of possible martensitic and bainitic microstructural states. Indeed, this represents a simplification as the few distinct heat treatments result in fairly obvious discriminative features in the microstructure, see section 3.2.1. Through interpretability analysis, models were shown to rely on these straightforward distinctive regions rather than learning nuanced correlations in the needle morphology. Thus, further data with more diverse martensite and bainite instances might be necessary to achieve better model generalizability by incentivizing the model to depend on a set of discriminative features for subtype distinction. When the dataset becomes vaster in terms of heat treatments and alloying, the distribution of retained austenite and carbide phases might not be a sufficient discriminative feature anymore. In such a case, adding a needle length objective as an additional task in an MTL setting could promote learning more nuanced differences in needle morphology between bainitic and martensitic states (Ruder (2017)).
Besides a high model accuracy, we consider three additional points particularly important when it comes to deploying such a model in a productive environment.
1. Images that look similar to hardened bainitic or martensitic microstructures, such as equivalent micrographs with some contained detrimental phases or a fully pearlitic micrograph, should either be rejected from being classified by the model, or a warning should be issued that the image probably is not within the model's training distribution. This could help to prevent false classifications and raise confidence in the predictions' correctness, as deep learning models were otherwise shown to exhibit mediocre generalization to out-of-distribution (OOD) samples (Torralba and Efros (2011)). In the literature, depending on the degree of deviation and the available labels, the task of identifying abnormal images is distinguished into near/far OOD, anomaly, or novelty detection, see Ruff et al. (2021), Bepari (2017). Typically, these tasks operate under the condition that a large quantity of OOD samples is unavailable, which rules out training a binary OOD classifier. This condition is fulfilled in the present case, as outliers are rare in production. However, subclass labels (e.g., steel subtypes) are available. Also, there is a necessity of detecting micrographs with subtle and local microstructural differences. For these reasons, the task could be framed as near-OOD detection. Data-driven models in the literature address OOD detection by measuring distances between image embeddings or by using reconstruction-based approaches Lee et al. (2017); Liang et al. (2017); Bepari (2017); Ruff et al. (2021); a minimal distance-based scoring sketch is given after this list. While far-OOD detection has been successfully tackled for a range of domains, near-OOD detection still represents a major challenge, especially when nuanced local changes in unstructured images are concerned (such as the emergence of inclusions or some pearlite in microstructures). Prospectively, such approaches could improve the model's robustness against arbitrary microstructures or metallographic artifacts. Images classified as OOD can then be inspected manually by metallographers.
2. In order to continuously improve the model or to review individual model ratings by metallographers and data scientists, software solutions should facilitate providing process data. Dubious or interesting micrograph cases can then be collected to improve the model in the future. Aside from this, micrographs classified as in-distribution can be used for on-the-fly optimization within a semi-supervised learning framework. The process data should comprise saliency maps or other interpretability techniques. Supplementing information on the decision-making process is essential to establish trust in the proposed data-driven methodologies, especially when quality control is concerned.
3. Nowadays, microscope systems are often connected to imaging software that assists users to adjust, acquire, and store images. Usually, additional functionality such as measuring or annotating image features is provided, which supports microstructural analysis and reporting. Integration of DL models into such software solutions will increase model acceptance. However, the deployment of trained models relies on microscopy software vendors providing appropriate software interfaces. This is a feature that only a few microscopy platforms offer yet will be indispensable going forward.
Figure 7: Bainitic steel micrographs showing heterogeneous microstructure (**A, B**) are input into the bainite grain size model to extract the micrograph’s Grad-CAM maps at the model’s last layer (**C, D**). The Grad-CAM maps use ‘jet’ colormaps. Thus, red, orange, and yellow colors highlight regions with increased activation.
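As referenced in point 1 above, one common distance-based recipe (after Lee et al. (2017)) fits class-conditional Gaussians to penultimate-layer embeddings and scores new images by their minimum Mahalanobis distance; the sketch below is an assumption-laden illustration, not the deployed pipeline:

```python
import numpy as np

def fit_gaussians(feats, labels):
    """Class-conditional means and a shared (tied) covariance estimated
    on penultimate-layer features of the training set."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([feats[labels == c] - means[c] for c in classes])
    cov_inv = np.linalg.pinv(np.cov(centered, rowvar=False))
    return means, cov_inv

def ood_score(f, means, cov_inv):
    # Minimum Mahalanobis distance to any class mean; a large score
    # flags the image as likely out-of-distribution.
    return min(float((f - m) @ cov_inv @ (f - m)) for m in means.values())
```

Images whose score exceeds a threshold calibrated on held-out in-distribution data would then be routed to a metallographer for manual inspection.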
Treating the grain size estimation as a classification task might be beneficial to obtain a simple measure for repeated quality assessment throughout the supply chain. However, in this case, the categorization introduces a significant subjectivity. An alternative to the grain size assessment after ISO 643 could be to extract the needle length _distribution_ and derive physical metrics with more relevance towards target properties, such as probability distributions for rolling contact fatigue resistance. The micrographs at the magnifications required to judge the needle morphology cover too small an area to be representative of the whole material, especially for the more heterogeneous and coarser microstructures. Thus, the current procedure requires the metallographer to identify the most critical region on the cross-section before image acquisition and needle assessment. This in itself introduces a subjectivity that is not considered in the round-robin test presented here.
## 4 Conclusions
A deep learning model was presented that categorizes hierarchically structured, quenched steels first with respect to their microstructure type and then their needle length. The model achieves satisfactory accuracy on both tasks and learns a mean representation of the data which is labeled by many metallographers, effectively reducing the impact of the significant labeling noise. Despite the thorough training of the metallographers, the task of manually assigning an ISO 643 grain size was demonstrated to be subjective in a round-robin test. Here, an objective and deterministic deep learning model can provide a remedy, especially if quality control throughout supply chains is concerned. The model's attention map is investigated by the presented interpretability analysis and consequences with respect to its generalization ability are discussed. For the deployment of the model, a robust out-of-distribution sample detection would be helpful since the model, at its current stage, is expected to not generalize across all possible martensitic and bainitic states. To achieve a more extensive generalization, data with fewer dataset biases, i.e. more diversity in terms of alloying and heat treatments than production data, should be supplemented.
## 5 Materials and Methods
### Dataset generation and statistics
Although the basic process of metallographic microstructure assessment, namely preparing, etching, imaging, and evaluating, is standardized, it is not feasible to render the whole procedure entirely repeatable without automation, especially in a production environment. Achieving this would require, among other things, a reproducible storage period after polishing, fresh etchant for every specimen, etching times aligned to the millisecond, and the same brightness/contrast settings in the microscopes. Thus, even though all images were taken on upright metallographic microscopes after etching with Nital (2-3% alcoholic nitric acid solution) using the same imaging software, the image dataset features a significant variance.
Predominantly, these variations result from using the individual magnifications, illumination, white balance, and luminosity settings that the different microscope/camera systems (ten different microscopes in this case) exhibit, as well as different etching times and etchant qualities applied by the respective users. Another major source of data scatter is that the images were captured at two plants that process different alloys. Nonetheless, these image dataset variations are still well within the natural limitations of the metallographic microstructure analysis process.
Last but not least, despite extensive training, every metallographer has a unique way of judging the images. After all, visual perception is very subjective, see Anderson (2011), Panagiotaropoulos et al. (2014). Metallographers with decades of daily experience might not only comprehend the image textures differently than metallographers who rarely perform such tasks but also categorize them using a different approach. While very experienced metallographers will judge the needle coarseness alone by visual perception, inexperienced ones tend to apply a more quantitative approach in measuring needle lengths to correlate them with the grain size. Additionally, there are two possible rating systems applicable to judge the coarseness of the microstructure, as given in Table 1. The first one is the mean criterion, where an image is rated according to the overall visual perception of the microstructure. This approach is pursued mainly if the depicted microstructure is homogeneous and fine-grained. In the case of a more inhomogeneous needle length distribution, and if the coarse-grained portion occupies a significant amount of the micrograph, it is possible to rate an image according to the maximum criterion. In that case, the coarse-grained regions in an image dictate the grain size label. This is motivated by the fact that ensembles of larger grains determine fatigue properties. When observing Figure 1, it is apparent that the finer needle morphologies exhibit more similarity in their image texture due to their narrower decision boundaries.
In total, the dataset contains 1641 micrographs. A randomly sampled set of as-received images which is supposed to point out the contained variance is depicted in Figure 8. A summary with respect to the class distribution of the dataset is provided in Supplementary Table 1. There it is evident that the dataset is skewed towards the martensitic through-hardened G class (75.6% of the overall data instances) and in particular to higher structure codes G9-10 (60.0%). This is owed to the fact that this material is the predominant outcome of production. Aside from this, the bainitic through-hardened material ranks second (20.1%). In contrast, the martensitic class M has a small share of the dataset (4.3%). All three subtype datasets were split into train, validation, and test subsets using proportions of 64:16:20 in a stratified manner, i.e. ensuring that each subset contains a virtually identical distribution of structure codes.
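The stratified 64:16:20 partitioning can be realized, for instance, with scikit-learn; the file list and labels below are placeholders for illustration:

```python
from sklearn.model_selection import train_test_split

paths = [f"img_{i}.png" for i in range(100)]          # placeholder files
codes = ["G9-10"] * 60 + ["G8"] * 25 + ["G7"] * 15    # placeholder labels

# 20% test, then 20% of the remaining 80% (= 16%) for validation,
# stratified so every subset mirrors the structure-code distribution.
train_p, test_p, train_c, test_c = train_test_split(
    paths, codes, test_size=0.20, stratify=codes, random_state=0)
train_p, val_p, train_c, val_c = train_test_split(
    train_p, train_c, test_size=0.20, stratify=train_c, random_state=0)
```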
There are further sources of data variance that are not considered in the data splits or later during imbalance correction. For instance, there is a pronounced imbalance in terms of alloys, as depicted in Supplementary Figure 4. Roughly 68.7% of the data represents the 100Cr6 alloy. The remaining data is composed of 100CrMnSi6-4 (15.3%) and otherwise distributed across twelve further 100Cr6 variants and the C56E2 alloy (M class). Some of the micro-alloyed variants show up in the dataset with fewer than 10 instances and are hence clustered into a miscellaneous category in Supplementary Figure 4. Aside from this, two magnifications, 500\(\times\) and 1000\(\times\), are applied depending on the appropriate field of view and spatial resolution for the microstructure at hand. Overall, the 500\(\times\) images outweigh the higher magnification as they constitute approximately 82.4% of the data. The magnification of \(1000\times\) is thus most frequently employed for structure codes 9-10. Also, the fluctuations in terms of specimen preparation and contrasting conditions (etching) were not traced and are not given special attention during data preparation. While the utilized microscope setup for each micrograph is documented, no means of data adjustment, e.g. correction of optical distortion, were employed.
### Data pre-processing
In Figure 8, some images were shown to feature regions outside the region of interest. Namely, the images typically contain micron bar annotations and often extend over the specimen borders. Therefore, either some defocused background or metallographic embedding resin regions are included, which along with the micron bar can lead to spurious correlations. Since this can lead to the models learning non-causal relations, regions outside the region of interest were cropped in advance. In cases of an incoherent resin-sample interface, so-called bleeding occasionally occurred, where etchants or solvents creep out of the slit at the interface, resulting in a visually altered surface region of the sample. This can be observed, for instance, in Supplementary Figure 1 within the green box annotation. Such regions were also removed in advance.
Figure 8: A set of images depicting some martensitic and bainitic microstructures which underwent different hardening treatments, with varying primary microstructure and distinct distributions of carbide and retained austenite constituents. The images are subject to the fluctuations described in Section 5.1. Some images (**A–B** and **D–E**) cover regions outside the specimen at the top image border.
These modifications led to varying image resolutions. In order to cope with this, different strategies were tested in a preliminary ablation study. These strategies included tiling the images to a fixed size and aggregating the tiles' predictions (1), resizing all images to the mean resolution of the dataset (2), and mirror padding/cropping to the most frequent resolution (3). As the latter turned out to perform best empirically, all images were transformed to 1994\(\times\)1994 resolution through mirror padding or minimal cropping. Similarly, to correct for the subtype and structure code class imbalances (see Supplementary Table 1), different imbalance correction methods were tested. These entailed instance-weighted cross-entropy and oversampling of the minority classes to balance out the data provided to the model. As oversampling the minority classes performed slightly better in this case, we carried on with it.
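A sketch of both chosen remedies is given below; the exact padding/cropping rule and the sampling weights are assumptions beyond what is stated above:

```python
import cv2
import numpy as np
import torch
import albumentations as A
from torch.utils.data import WeightedRandomSampler

# Mirror-pad small images, then minimally (center-)crop to 1994 x 1994.
to_fixed_size = A.Compose([
    A.PadIfNeeded(min_height=1994, min_width=1994,
                  border_mode=cv2.BORDER_REFLECT_101),
    A.CenterCrop(height=1994, width=1994),
])

def make_oversampler(labels):
    """Draw each sample with probability inversely proportional to its
    class frequency, i.e. oversample the minority structure codes."""
    labels = np.asarray(labels)            # integer-encoded class labels
    weights = 1.0 / np.bincount(labels)[labels]
    return WeightedRandomSampler(torch.as_tensor(weights, dtype=torch.double),
                                 num_samples=len(labels), replacement=True)
```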
Online data augmentations were applied during training before passing the color images to the models. The augmentation pipeline is composed of random rotations by arbitrary angles, random horizontal/vertical flips, slight random contrast adjustments, and Gaussian blurring. Subsequently, the images were normalized to the ImageNet mean and standard deviation. For the augmentations, the Albumentations package was used (Buslaev et al. (2020)).
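A corresponding Albumentations pipeline might look as follows; the probabilities and limits are illustrative assumptions, as the exact values are not stated:

```python
import albumentations as A
from albumentations.pytorch import ToTensorV2

IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

train_transform = A.Compose([
    A.Rotate(limit=180, p=1.0),                # rotation by arbitrary angles
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.RandomBrightnessContrast(brightness_limit=0.0,  # slight contrast jitter
                               contrast_limit=0.1, p=0.5),
    A.GaussianBlur(p=0.3),
    A.Normalize(mean=IMAGENET_MEAN, std=IMAGENET_STD),
    ToTensorV2(),
])
```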
### Model training and interpretability analysis
All models were trained using ResNet-18 or ResNet-50 architectures. The models were initialized with ImageNet weights from torchvision (Marcel and Rodriguez (2010)), and then either the full model was optimized with the same learning rate or the feature extractor portion of the classification network was frozen. Initial learning rates of 1E-4 or 1E-5 were used, depending on the exact model; these were then modified by a StepLR scheduler. Cross-entropy was selected as the objective function. All submodels of the two-stage model were optimized individually and not trained end-to-end. For the two-stage approach, training the subtype model relied on the entire dataset, while training the individual structure code classifiers relied on the relevant data subsets. The models were trained for varying numbers of epochs until convergence, yet no overfitting was observed in the training and validation loss curves. The model performing best on the validation set was then used for the results reported here.
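A condensed version of this setup is sketched below; the optimizer choice (Adam) and the StepLR step size/decay are assumptions, since only the learning rates and scheduler type are stated above:

```python
import torch
import torch.nn as nn
from torch.optim.lr_scheduler import StepLR
from torchvision import models

def build_model(n_classes, freeze_features=False, lr=1e-4):
    model = models.resnet18(weights="IMAGENET1K_V1")   # ImageNet init
    if freeze_features:                                # 'fw' variant
        for p in model.parameters():
            p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new trainable head
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    scheduler = StepLR(optimizer, step_size=10, gamma=0.1)  # assumed values
    return model, optimizer, scheduler, nn.CrossEntropyLoss()
```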
In order to explore the tendencies of a model and understand the reasons for specific failure cases, model interpretability techniques can be helpful. In this work, we utilized a technique called Grad-CAM (Selvaraju et al. (2016)). We applied it to explore the activation of the final convolution layer of the employed ResNet architectures. The technique provides heat maps where regions of pronounced activation in a specific image are highlighted. These heat maps are constructed by a weighted combination of all feature maps of that layer. The weights for each feature map correspond to the backpropagated gradients on which a global average pooling operation over width and height dimensions is applied.
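A minimal hook-based implementation of this procedure for the ResNets used here might look as follows (a sketch; `target_layer` would be, e.g., `model.layer4`):

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Grad-CAM heat map for one (1, C, H, W) image tensor."""
    feats, grads = [], []
    h1 = target_layer.register_forward_hook(lambda m, i, o: feats.append(o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.append(go[0]))
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()                    # backpropagate the score
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # GAP of the gradients
    cam = F.relu((weights * feats[0]).sum(dim=1))      # weighted feature maps
    return (cam / (cam.max() + 1e-8)).detach()[0]      # (h, w) map in [0, 1]
```

The low-resolution map is then upsampled to the input size and overlaid with a 'jet' colormap, as in Figures 6 and 7.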
## Conflict of Interest Statement
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
## Author Contributions
Conceptualization - AD, JM, and RN; Data curation - SP, DR, and RN; Formal Analysis - AD and SP; Funding acquisition - AD, JM, and RN; Investigation - AD and SP; Methodology - AD and SP; Project administration - AD and RN; Resources - AD and RN; Software - SP; Supervision - AD, JM, and RN; Validation - SP; Visualization - AD and SP; Writing (original draft) - AD, SP, and RN; Writing - review & editing - AD, JM, RN, and SP
## Funding
The work was carried out in a joint industry project between Fraunhofer IWM and Schaeffler Technologies AG & Co. KG, for which the latter party provided the funding.
## Acknowledgments
We want to express our gratitude to all metallographers of Schaeffler Technologies AG & Co. KG who created the dataset and to all who participated in the round-robin test.
|
2308.07554 | Rydberg-EIT of $^{85}$Rb vapor in a cell with Ne buffer gas | We investigate Rydberg electromagnetically induced transparency (EIT) of
$^{85}$Rb atomic vapor in a glass cell that contains a 5-Torr neon buffer gas.
At low probe power, EIT lines exhibit a positive frequency shift of about
70~MHz and a broadening of about 120~MHz, with minimal dependence on the
principal quantum number of the Rydberg states. The EIT line shift arises from
s-wave scattering between the Rydberg electron and the Ne atoms, which induces
a positive shift near 190~MHz, and from the polarization of the Ne atoms within
the Rydberg atom, which adds a negative shift near -120~MHz. The line
broadening is largely due to the Ne polarization. Our experimental results are
in good qualitative agreement with our theoretical model, in which the shift is
linear in buffer-gas density. Our results suggest that Rydberg-EIT can serve as
a direct spectroscopic probe for buffer-gas density at low pressure, and that
it is suitable for non-invasive measurement of electric fields in low-pressure
noble-gas discharge plasmas and in dusty plasmas. | Nithiwadee Thaicharoen, Ryan Cardman, Georg Raithel | 2023-08-15T03:48:20Z | http://arxiv.org/abs/2308.07554v1 | # Rydberg-EIT of \({}^{85}\)Rb vapor in a cell with Ne buffer gas
###### Abstract
We investigate Rydberg electromagnetically induced transparency (EIT) of \({}^{85}\)Rb atomic vapor in a glass cell that contains a 5-Torr neon buffer gas. At low probe power, EIT lines exhibit a positive frequency shift of about 70 MHz and a broadening of about 120 MHz, with minimal dependence on the principal quantum number of the Rydberg states. The EIT line shift arises from s-wave scattering between the Rydberg electron and the Ne atoms, which induces a positive shift near 190 MHz, and from the polarization of the Ne atoms within the Rydberg atom, which adds a negative shift near -120 MHz. The line broadening is largely due to the Ne polarization. Our experimental results are in good qualitative agreement with our theoretical model, in which the shift is linear in buffer-gas density. Our results suggest that Rydberg-EIT can serve as a direct spectroscopic probe for buffer-gas density at low pressure, and that it is suitable for non-invasive measurement of electric fields in low-pressure noble-gas discharge plasmas and in dusty plasmas.
Electromagnetically induced transparency (EIT) involving Rydberg-atom spectroscopy in room-temperature atomic vapors has become an important technique for electric-field sensing, offering sensitivity in metrology applications [1, 2, 3] as well as avenues towards non-traditional radio reception [4, 5, 6]. Rydberg Stark spectroscopy and direct-particle detection were used to measure macroscopic and microscopic electric fields in ion plasmas prepared from laser-cooled atom clouds [7, 8]. In plasmas generated in thermal atomic vapors, Rydberg-EIT Stark spectroscopy was employed for non-invasive all-optical electric-field measurement [9, 10]. DC electric fields of charges released by photo-illumination of a borosilicate vapor cell were analyzed using Rydberg-EIT spectroscopy of Rb \(nD_{J}\) Rydberg levels [11]. These developments suggest that Rydberg-EIT has the potential to serve as a non-invasive plasma electric-field probe in glass tubes, vacuum systems, or vapor cells that harbor a low-pressure discharge or an inductively coupled plasma, including RF or DC rubidium plasma lamps [12, 13] that are commonly used as spectroscopic frequency references and for optical pumping of alkali vapors in magnetic-field-sensing cells. Moreover, plasma often contains charged dust particles [14]. Such dusty plasmas appear, for instance, in astrophysical settings (including the B ring of Saturn [15], Martian dust devils [16] and the moon [17]), as well as in technical plasmas (including fusion [18] and microfabrication devices [19]). The dynamics of such plasmas are being explored in ground-based [20, 21] and microgravity setups [22] using low-pressure noble-gas plasmas, which are seeded with dust particles. Rydberg-EIT could be ideal for non-intrusive measurement of electric fields, Debye shielding and particle interaction [23], and electric-field wakes [24] in a dusty plasma.
As a step towards Rydberg-EIT field sensing in these systems, it is required to assess the viability of Rydberg-EIT of a suitable atomic-sensor species, such as rubidium or cesium, in a noble-gas background with pressures ranging from tens of milli-Torr to several Torr. This necessitates a study of the effects of the background buffer gas on the Rydberg-EIT spectrum. Previous work [25] has explored the effects of Rb-Ne collisions on \(5S_{1/2}\to 5P_{3/2}\to 5D_{5/2}\) EIT linewidths in a buffer-gas cell. In our present work, we demonstrate a first observation of \({}^{85}\)Rb Rydberg-EIT at principal quantum numbers of \(n\sim 40\) in a cell with a 5-Torr neon buffer gas. We measure the frequency shift and line broadening of the EIT signal due to the background gas. At high probe-laser power, we observe a transition from EIT to electromagnetically induced absorption (EIA). The observed features do not significantly depend on \(n\). Our studies present a stepping stone towards employing Rydberg-EIT as a versatile tool for characterizing electric fields in dusty plasma, which can be prepared even at noble-gas pressures substantially below 5 Torr.
It is noted that vapor cells consisting of an alkali metal mixed with a higher-pressure inert buffer gas are of interest in other applications that require reduced ground-state spin relaxation rates, such as Faraday [26] and SERF [27] magnetometers. Optical pumping of the alkali vapor can also be used to spin-polarize the noble gas via spin-exchange collisions, a method conducive to NMR with optically prepared spin-polarized gases [28].
## I Theory
Atoms in highly-excited Rydberg states exhibit sensitivity to their environment. When the Rydberg atoms are perturbed by dense ground-state perturbers, their interaction with the surrounding medium gives rise to a frequency shift, which can be attributed to two main effects [29; 30; 31]. The dominant effect arises from the scattering of the Rydberg electron by the perturbers within a Rydberg-atom volume of \(\sim\frac{4}{3}\pi(2n^{2}a_{0})^{3}\). This scattering effect can be explained by a Fermi interaction [32], and the resulting angular frequency shift in units of rad/s is given by
\[\Delta\omega_{\rm sc}=2\pi a_{s}N\left[\frac{e^{2}a_{0}}{4\pi\epsilon_{0}\hbar }\right]\quad, \tag{1}\]
where \(\hbar\) is the reduced Planck's constant, \(e\) is the elementary charge in C, \(a_{s}\) is the low-energy s-wave scattering length in meters, and \(N\) is the volume density of the buffer gas atoms.
The second effect originates from the interaction between the ion core of the Rydberg atom and the perturbers. When a Rydberg atom is immersed in a medium containing ground-state atoms or molecules, the atomic electric field induces a polarization in the perturbers, leading to a frequency shift. The frequency shift due to the polarization effect can be obtained from the impact approximation [29; 30], and is given by (in units of rad/s)
\[\Delta\omega_{\rm p}=-6.21\left[\frac{\alpha e^{2}}{\hbar(4\pi\epsilon_{0})^{ 2}}\right]^{2/3}v^{1/3}N, \tag{2}\]
where \(\alpha\) represents the polarizability of the perturber in \(\rm{Cm}^{2}/V\), and \(v\) is the mean relative velocity between the Rydberg atoms and the perturbers in m/s. The total energy shift experienced by the Rydberg atom is the sum of both effects, \(\Delta\omega_{\rm total}=\Delta\omega_{\rm sc}+\Delta\omega_{\rm p}\).
In addition to frequency shifts, polarization and electron scattering can also lead to level decays, denoted \(\gamma_{\rm p}\) and \(\gamma_{\rm sc}\), respectively. It was found that the level decay mainly comes from the polarization of the perturbing atoms [30], i.e., \(\gamma_{\rm p}>>\gamma_{\rm sc}\), and that the decay rate
\[\gamma_{\rm p}=2\times 3.59\left[\frac{\alpha e^{2}}{\hbar(4\pi\epsilon_{0})^{ 2}}\right]^{2/3}v^{1/3}N\quad. \tag{3}\]
The value of \(\gamma_{\rm p}\) is equivalent to a full-width at half-maximum (FWHM) line broadening in units of rad/s.
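To make these magnitudes concrete, the short script below evaluates Eqs. (1)-(3) for 5 Torr of Ne; the cell temperature (303 K, from Sec. II) and the choice \(a_{s}=0.227a_{0}\) are our assumptions, so the printed values only approximately reproduce the numbers quoted later in the text.

```python
import numpy as np
from scipy.constants import pi, hbar, e, k, epsilon_0, physical_constants

a0 = physical_constants["Bohr radius"][0]
u = physical_constants["atomic mass constant"][0]

T = 303.0                      # assumed buffer-gas cell temperature (K)
N = 5 * 133.322 / (k * T)      # Ne number density at 5 Torr (m^-3)
a_s = 0.227 * a0               # one literature value of the scattering length
alpha = 2.66 * physical_constants["atomic unit of electric polarizability"][0]
mu_rel = (85 * 20.18) / (85 + 20.18) * u       # Rb-Ne reduced mass
v = np.sqrt(8 * k * T / (pi * mu_rel))         # mean relative speed (m/s)

# Eq. (1): s-wave scattering shift
dw_sc = 2 * pi * a_s * N * e**2 * a0 / (4 * pi * epsilon_0 * hbar)
# Common factor of Eqs. (2) and (3)
X = (alpha * e**2 / (hbar * (4 * pi * epsilon_0) ** 2)) ** (2 / 3) \
    * v ** (1 / 3) * N
dw_p = -6.21 * X               # Eq. (2): polarization shift
gamma_p = 2 * 3.59 * X         # Eq. (3): polarization broadening (FWHM)

print(f"scattering shift  : {dw_sc / (2 * pi) / 1e6:6.0f} MHz")  # ~ +222 MHz
print(f"polarization shift: {dw_p / (2 * pi) / 1e6:6.0f} MHz")   # ~ -122 MHz
print(f"net shift         : {(dw_sc + dw_p) / (2 * pi) / 1e6:6.0f} MHz")
print(f"FWHM broadening   : {gamma_p / (2 * pi) / 1e6:6.0f} MHz")  # ~ 141 MHz
```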
Through two-photon, Doppler-free spectroscopy, the broadening and shifts of Rb \(nS\) and \(nD\) Rydberg levels in the presence of inert perturbers have been experimentally observed in [33] for He, Ar, Ne, Kr, and Xe, in [34] for He, Ar, and Xe, in [35] for He and Ar, and in [36] for Ne, Kr, and H\({}_{2}\). For these experiments, effects of buffer-gas pressure broadening by the intermediate \(5P_{1/2}\) and \(5P_{3/2}\) levels could be ignored because they were very far off-resonance. This is, however, not the case in our work. Pressure broadening of the Rb \(D_{2}\) line due to binary interactions with noble gases has been observed in [37] for pressures of up to 1.1 kTorr. Recently, ultra-high pressures of He and Ar on the order of \(10^{5}\) Torr interacting with a Rb vapor were spectroscopically studied in [38]. In the present work, the Ne pressure is 5 Torr, which leads to a \(D_{2}\) line broadening on the order of \(\gamma_{D_{2}}\sim 2\pi\times 50\) MHz [37]. This is consistent with our Doppler-free saturated absorption spectra, which only show marginal remnants of the buffer-gas-free \(5P_{3/2}\) hyperfine features. Since we observe Rydberg-EIT linewidths that are considerably larger, at the level of precision of our current study we neglect the effect of \(\gamma_{D_{2}}\) on the Rydberg-EIT linewidth. As such, based on \(\gamma_{p}>>\gamma_{\rm sc}\) and \(\gamma_{p}>\gamma_{D_{2}}\), we compare our measured Rydberg-EIT linewidths only with estimates for \(\gamma_{p}\).
## II Experimental setup
We perform the EIT experiment using \({}^{85}\)Rb atoms with an energy-level diagram shown in Fig. 1 (a). The probe laser (\(\lambda_{\rm p}\)) with a wavelength of \(\lambda_{p}=780\) nm is close to resonance with the \(F=3\) to \(F^{\prime}=4\) hyperfine component of the \(5S_{1/2}\leftrightarrow 5P_{3/2}\) transition. The EIT signal is measured as a function of the coupling laser (wavelength \(\lambda_{\rm c}\approx 480\) nm), which is scanned over the \(5P_{3/2}\leftrightarrow nD_{5/2}\) transition. The coupler detuning is denoted by \(\Delta_{c}\).
We simultaneously extract EIT signals from two EIT beam lines, as shown in Fig. 1 (b). The upper beam line, which serves to produce a reference spectrum, utilizes a buffer-gas-free Rb vapor cell. The reference EIT signal allows us to calibrate the frequency axis for \(\Delta_{c}\), as well as to mark the coupler-laser frequency of the shift-free EIT line. In the lower (signal) beam line, the EIT signal is acquired using a cell that contains Rb vapor and a Ne buffer gas of nominally 5 Torr pressure. The probe and coupler lasers are split between the reference and signal beam lines using polarizing beam-splitter cubes (PBS), and are then counter-propagated through the respective cells. In each cell, both beams have parallel linear polarizations. After passage through the cells, probe and coupler beams are separated using dichroic optics. The reference and signal probe beams are simultaneously detected using a pair of identical silicon photo-diodes and low-noise transimpedance amplifiers (TIAs), and the respective data traces are recorded. We perform 100 scans per data set, and present averages over the 100 scans.
The Rb vapor cell in the reference (upper) beam line in Fig. 1 (c) is held at room temperature (291 K). The probe and coupling beams in the reference line are approximately Gaussian and have \(1/e^{2}\) drop-off radii of the intensity distribution of \(w_{0}=300\)\(\mu\)m and 500 \(\mu\)m, respectively. In the signal (lower) beam line, the cell that contains the buffer gas is heated to 303 K, which results in about 70% peak absorption on the \(5S_{1/2},F=3\leftrightarrow 5P_{3/2},F^{\prime}\) transition. The probe and coupling beams in the signal line have Gaussian beam parameters of \(w_{0}\approx 150~{}\mu\)m.
The reference EIT signal is shown in Fig. 1 (c). The strongest EIT peak is from the \(5S_{1/2}\), \(F=3\leftrightarrow 5P_{3/2}\), \(F^{\prime}=4\leftrightarrow 34D_{5/2}\) cascade, which has the largest electric dipole moment and is the least diminished by optical pumping into the uncoupled \(5S_{1/2}\), \(F=2\) level. The two small peaks that bracket the -100 MHz mark are from the intermediate hyperfine states \(5P_{3/2}\), \(F^{\prime}=3\) and \(F^{\prime}=2\). These are small in size mostly due to optical pumping during the atom-field interaction time (which is a few \(\mu\)s in the buffer-gas-free cell). All observed frequency splittings between the \(5P_{3/2}\) hyperfine peaks carry a Doppler scaling factor of \((\lambda_{p}/\lambda_{c}-1)=0.63\). The leftmost peak in Fig. 1 (c) is attributed to the \(5S_{1/2}\), \(F=3\leftrightarrow 5P_{3/2}\), \(F^{\prime}=4\leftrightarrow 34D_{3/2}\) cascade. Noting that the Doppler scaling factor for Rydberg lines is unity, the splitting between the largest \(34D_{5/2}\) peak and the \(34D_{3/2}\) peak in Fig. 1 (c) equals the \(34D_{J}\) fine-structure splitting, which is 306.057 MHz. This splitting is used for calibration of the frequency axis of the reference and signal spectra, which are simultaneously acquired.
A typical Rydberg-EIT signal at a low probe intensity, obtained from the cell with buffer gas, is shown in Fig. 1 (d). The EIT signal from that cell has an asymmetric shape, and in the case shown the peak is shifted positively from the reference EIT line by 68 \(\pm\) 1 MHz. In the following we study the dependence of shift and line width on principal quantum number \(n\). We will then explore effects observed at high probe intensity.
## III Frequency shifts and linewidths
To study the effect of the buffer gas on the EIT signals for a range of different Rydberg states, we take Rydberg-EIT data over an \(n\)-range of 34 to 46. We extract the relative frequency of the EIT peak in the signal beam, which equals the frequency shift of the EIT caused by the buffer gas. The frequency-shift results are shown in Fig. 2 (a) for probe powers of 0.06, 0.13, and 0.22 \(\mu\)W, corresponding to the probe Rabi frequencies listed. The observed frequency shifts are in the range of 60-73 MHz and average to about 67 MHz, with the low-power data clustering around 70 MHz. This result agrees well with the observations in [36], where the shift rate was measured to be 12\(\pm\)1 MHz/Torr for high \(n\) (which would lead to a shift of 60\(\pm\)5 MHz for 5 Torr of Ne buffer gas). Further, the shift decreases by up to about 10 MHz when the probe power increases, and it appears to decrease overall by a few MHz when \(n\) increases. The weak \(n\)-dependence agrees with a semi-classical calculation in [39], which shows that the frequency shift of the Rydberg \(nD\) state slightly decreases as \(n\) increases.
For a quantitative comparison with theory, we first calculate the electron-scattering shifts using Eq. 1. For the low-energy s-wave scattering length, \(a_{s}\), several of the previously computed values include, from low to high, \(a_{s}=0.2a_{0}\)[40], 0.227\(a_{0}\)[41], and 0.24\(a_{0}\)[30]. A table listing both theoretical and experimental values for \(a_{s}\) is provided in [42]. Over the range \(0.2a_{0}<a_{s}<0.24a_{0}\), \(\Delta\omega_{\text{sc}}/(2\pi)\) varies from 195 MHz to 234 MHz. The shift from the Ne polarization due to the Rydberg atom, obtained from Eq. 2 and using \(\alpha=2.66\) in atomic units [30], is \(\Delta\omega_{\text{p}}/(2\pi)=-122\) MHz. The net shift from these two effects, \(\Delta\omega_{\text{total}}/(2\pi)\), then ranges between 73 MHz and 112 MHz. It is seen that the shift from the calculation has the same sign in theory and experiment, confirming that the low-energy s-wave scattering length \(a_{s}\) is positive (which is not the case for some other buffer gases) and dominant. Furthermore, depending on what exact value for \(a_{s}\) is adopted, the calculated net shift is about 10% to 60% larger than the average experimental value of \(\approx 67\) MHz from the previous paragraph. Hence, we claim good qualitative agreement.
To discuss these findings, we first note that the overall
Figure 1: (a) The energy-level diagram of rubidium atoms used in this work. The probe laser (\(\lambda_{p}\)) is on-resonance with the \(5S_{1/2}\leftrightarrow 5P_{3/2}\) transition, while the coupling laser (\(\lambda_{c}\)) is detuned from the \(5P_{3/2}\leftrightarrow nD_{5/2}\) transition by \(\Delta_{c}\). (b) An illustration of a Rydberg EIT setup. The EIT signals are detected simultaneously from both the Rb reference cell and the Rb cell with buffer gas. The 34D EIT signals from the Rb reference cell and the Rb cell with buffer gas are shown in (c) and (d), where the probe powers are 170 nW and 130 nW, respectively, and the coupler powers are approximately 2 mW and 35 mW.
spread of data previously reported for \(a_{s}\) has the largest effect on the net frequency shift, \(\Delta\omega_{\text{total}}\), with our result being more consistent with the lower end of previously reported \(a_{s}\)-values (\(a_{s}\approx 0.2a_{0}\)). In fact, the ambiguity of estimates for \(a_{s}\) alone may suffice to explain our observed deviation (if any) between measured line shifts and corresponding theoretical estimates. As to additional effects that might matter, we note that the Rydberg-EIT line may be slightly pulled to lower frequencies by the weak EIT peaks that are visible in Fig. 1 (c) but that are hidden in Fig. 1 (d). Furthermore, the Ne buffer gas pressure of 5 Torr has an uncertainty of 5% according to manufacturer information. Along the same line, one may speculate that a difference between the cell temperature during fabrication and the cell's eventual operating temperature could in principle cause a mild buffer-gas density drop.
Further progress on comparing experimental buffer-gas-induced Rydberg-EIT shifts and theoretical estimates would require a refinement of theoretical models for \(a_{s}\) as well as a full model for Rydberg-EIT that covers the effects of the buffer gas on all involved atomic levels. In experimental work, one may consider a determination of the absolute Ne density with an independent, quantitative method. These research directions are, however, outside the scope of our present work.
We next extract the FWHM of the EIT signal from the cell with buffer gas for several probe powers and \(n\)-values. As shown in Fig. 2 (b), the observed FWHM of the EIT signal from the cell with the buffer gas is between 110 and 125 MHz, which greatly exceeds the width of the reference EIT lines. The FWHM of the buffer-gas EIT slightly increases with an increase in \(n\), but it is not significantly dependent on the probe power. This agrees well with the calculation in [39], which shows that, for Rydberg \(nD\) states, the FWHM of the Rydberg line should slightly increase with \(n\). A calculation of the broadening \(\gamma_{p}\) from the polarization effect (Eq. 3) yields a FWHM of 144 MHz, which is \(\approx 20\%\) larger than the experimentally observed width. We note that the effects that we neglect here could contribute somewhat to the experimentally observed line broadening, including broadening from the weak EIT peaks that are visible in Fig. 1 (c) but that are hidden in Fig. 1 (d), as well as the line broadening \(\gamma_{D_{2}}\) [which is small compared to \(\gamma_{p}\) but still substantially \(>0\) (see Sec. I)]. The results on the EIT linewidth may also indicate that the exact density of Ne atoms in the cell could be slightly lower than the density one would have at 5 Torr and at room temperature.
## IV Effects of probe power
Finally, we study the effects of probe power on the Rydberg-EIT signal in the signal beam line, which harbors the rubidium cell with the buffer gas, by varying the probe power from \(P_{p}=0.06\)\(\mu\)W to 1.79 \(\mu\)W. The central probe-laser electric field is \(E_{p}=\sqrt{2I_{p}/(c\epsilon_{0})}\), with
Figure 2: (a) Frequency shift of the EIT signal from the Rb cell with buffer gas relative to the \(nD_{5/2}\) peak for \(F^{\prime}=4\) in the reference EIT spectrum versus \(n\), at several probe powers with estimated Rabi frequencies at the beam center shown in the legend. The plot data are extracted from local parabolic fits to the peaks in spectra. Fit error bars are smaller than the marker size. (b) Corresponding FWHM of the EIT peaks. The FWHM values are obtained from the roots of the first derivatives of smoothed EIT curves [see Fig. 1 (d) and Fig. 3]. Statistical uncertainties are smaller than the marker size.
Figure 3: EIT signals of Rb \(34D_{5/2}\) Rydberg atoms in the cell with buffer gas for probe laser powers ranging from 0.06 to 1.79 \(\mu\)W. The colors of the plots correspond to probe-laser Rabi frequency, \(\Omega_{p}/(2\pi)\), indicated in the legend. To allow for a comparison of the signals on a fixed scale for the transmitted probe power, the plots are vertically shifted so that they level out at zero at large detunings. Also, the TIA gain is decreased with increasing probe power to avoid saturation. The plot shows data at the full TIA bandwidth as well as smoothed curves that allow for an easier comparison of the atomic response across the full probe-power range.
central intensity \(I_{p}=2P_{p}/(\pi w_{0}^{2})\) and \(w_{0}\approx 150~{}\mu\)m for the signal beam line. The probe Rabi frequency \(\Omega_{p}/(2\pi)=\mu_{12}E_{p}/h\), where \(\mu_{12}=1.892\,e\,a_{0}\)[43] is the probe-transition dipole moment. The EIT signals from the \(34D_{5/2}\) state are shown in Fig. 3 for five values of the probe Rabi frequency. The noise increase at high probe power is attributed to an increase in shot noise as well as an increase in TIA bandwidth at lower gain. Over the gain values used, the bandwidth of the TIA (model SRS SR570) increases from 200 Hz to 2 kHz with increasing probe power. Hence, the signals at higher probe powers have considerably larger noise on the utilized absolute-power scale. In Fig. 3, we include smoothed curves that allow for an easier comparison of the signal behavior over the entire probe-power range.
It can be seen in Fig. 3 that at low probe powers (Rabi frequency \(\lesssim 10\) MHz, as used in Sec. III) the EIT line shape is invariant and the EIT signal strength is proportional to the probe power. In the limit of vanishing probe power such a linear behavior is expected. Also, the shape of the EIT signal is asymmetric, with a longer tail on the negative side. We speculate that this behavior may come in part from the blending of intermediate-state hyperfine structure, which adds to the \(nD\)-line broadening [37]. At Rabi frequencies above \(\sim\)10 MHz, the EIT peak position begins to shift toward negative detunings, while the shape and width of the peak still largely remain the same. At probe Rabi frequencies exceeding \(\sim\)20 MHz, the signals invert in shape and turn into electromagnetically-induced absorption (EIA), with the center of the EIA dip located \(\sim 50\) MHz above the low-power EIT peak. The transition from EIT to EIA at high power may come from factors that involve optical pumping and velocity-changing collisions, the study of which could be the subject of future work.
## V Conclusion
We have observed Rydberg-EIT in a vapor cell containing 5 Torr of Ne buffer gas. Results obtained at low probe power have revealed frequency shifts of the EIT signals by about 70 MHz, as well as an increased FWHM EIT linewidth of about 120 MHz. These observations are largely unaffected by variations in the principal quantum number of the Rydberg states and in the probe power, as long as the probe Rabi frequency remains below about 10 MHz. The frequency shift is attributed to low-energy s-wave scattering between the Rydberg electron and the Ne atoms and to polarization of the Ne atoms by the atomic electric field. The width of the signal is dominated by polarization of the Ne atoms. At high probe power, we observe a transition from EIT to EIA; this phenomenon awaits a future explanation.
Utilizing the Stark effect of Rydberg levels, potential applications of our research include non-invasive and spatially-resolved measurement of electric fields in low-pressure discharge plasmas in neon. Further, the electric fields of highly-charged dust particles in low-density plasma can potentially be mapped via Rydberg-EIT. For pressures below about 5 Torr, we also see applications in using Rydberg-EIT as a real-time, in-situ and non-invasive readout for buffer-gas density at a location of interest, which can have advantages over reading the buffer-gas pressure with a remote pressure gauge.
###### Acknowledgements.
We acknowledge fruitful discussions with Prof. Eric Paradis (Eastern Michigan University), Dr. David A. Anderson (Rydberg Technologies Inc.), and Bineet Dash (University of Michigan). This project was supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences under award number DE-SC0023090. N.T. acknowledges funding from the NSRF via the Program Management Unit for Human Resources and Institutional Development, Research and Innovation (grant number B05F650024), and from the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation (Grant No. RGNS.64-067). R.C. acknowledges support from a Rackham Predoctoral Fellowship of the University of Michigan.
|
2306.11842 | Parsimonious Optimisation of Parameters in Variational Quantum Circuits | Variational quantum circuits characterise the state of a quantum system
through the use of parameters that are optimised using classical optimisation
procedures that typically rely on gradient information. The circuit-execution
complexity of estimating the gradient of expectation values grows linearly with
the number of parameters in the circuit, thereby rendering such methods
prohibitively expensive. In this paper, we address this problem by proposing a
novel Quantum-Gradient Sampling algorithm that requires the execution of at
most two circuits per iteration to update the optimisable parameters, and with
a reduced number of shots. Furthermore, our proposed method achieves similar
asymptotic convergence rates to classical gradient descent, and empirically
outperforms gradient descent, randomised coordinate descent, and SPSA. | Sayantan Pramanik, Chaitanya Murti, M Girish Chandra | 2023-06-20T18:50:18Z | http://arxiv.org/abs/2306.11842v2 | # Parsimonious Optimisation of Parameters in Variational Quantum Circuits
###### Abstract
Variational quantum circuits characterise the state of a quantum system through the use of parameters that are optimised using classical optimisation procedures that typically rely on gradient information. The circuit-execution complexity of estimating the gradient of expectation values grows linearly with the number of parameters in the circuit, thereby rendering such methods prohibitively expensive. In this paper, we address this problem by proposing a novel Quantum-Gradient Sampling algorithm that requires the execution of at most two circuits per iteration to update the optimisable parameters, and with a reduced number of shots. Furthermore, our proposed method achieves similar asymptotic convergence rates to classical gradient descent, and empirically outperforms gradient descent, randomised coordinate descent, and SPSA.
## 1 Introduction
The quantum processors available today are limited in terms of the number of qubits available, their coherence times, noise-resilience, etc. This phase has been dubbed the Noisy, Intermediate-Scale Quantum era [1]. Variational Quantum Algorithms (VQAs) [2, 3, 4, 5, 6] - which employ both quantum and classical processors working in tandem - investigate and utilise the potential of such devices for solving various problems in Chemistry, Machine Learning and Optimisation. In VQAs, we operate under the premise that a variational circuit represented by \(U(\boldsymbol{\theta})\) acts on the conventional initial state \(\ket{0}^{\otimes n}\) of the qubits, to obtain the state \(\ket{\psi(\boldsymbol{\theta})}=U(\boldsymbol{\theta})\ket{0}^{\otimes n}\); for the sake of brevity, the superscript \(\otimes n\) will henceforth be dropped. We assume the circuits consist of single-qubit Pauli rotation gates, along with 2-qubit \(CX\)s, which together form a universal set of quantum logic gates. The rotational gates are parameterised by the list of parameters \(\boldsymbol{\theta}\in\mathbb{R}^{k}\). The objective of variational algorithms is to find \(\boldsymbol{\theta}^{*}\) such that:
\[\boldsymbol{\theta}^{*}=\arg\min_{\boldsymbol{\theta}}\bra{\psi(\boldsymbol{ \theta})}H\ket{\psi(\boldsymbol{\theta})} \tag{1}\]
for a Hermitian observable \(H\). To simplify notation, we use:
\[\mu_{\boldsymbol{\theta}}=\bra{\psi(\boldsymbol{\theta})}H\ket{\psi( \boldsymbol{\theta})} \tag{2}\]
The current state-of-the-art method for estimating quantum gradients of expectation values for use in first-order methods for variational quantum algorithms is the parameter-shift rule (PSR) [7; 8]. PSR is equivalent to the finite-difference method, except that individual parameters are perturbed by finite values \(s_{i}\), thus giving us a formula for gradients of single-qubit Pauli rotations as:
\[\frac{\partial\mu_{\theta_{i}}}{\partial\theta_{i}}=\frac{\mu_{\theta_{i}+s_{i }}-\mu_{\theta_{i}-s_{i}}}{2} \tag{3}\]
where \(s_{i}=\nicefrac{{\pi}}{{2}}\). However, PSR requires running the quantum circuits (for a certain number of shots) twice, with \(\theta_{i}+s_{i}\) and \(\theta_{i}-s_{i}\), respectively, to obtain the gradient with respect to each individual parameter. Thus, the total number of circuits evaluated is \(2k\) at each iteration of the optimisation routine, which prohibits scalability.
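For reference, a literal implementation of this estimator looks as follows, where `mu` is assumed to be a function that executes the circuit and returns the estimated expectation value:

```python
import numpy as np

def psr_gradient(mu, theta, s=np.pi / 2):
    """Parameter-shift gradient of Eq. (3): each of the k components
    needs two circuit executions, i.e. 2k executions per gradient."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        shift = np.zeros_like(theta)
        shift[i] = s
        grad[i] = (mu(theta + shift) - mu(theta - shift)) / 2
    return grad
```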
Several recent works have acknowledged the importance of reducing the number of quantum measurements, and have developed techniques for shot-frugal optimisation [9; 10; 11; 12]. However, the number of measurements is calculated as the product of circuits executed and the shots they were run for. This is relevant because many cloud-providers that provision access to gate-model quantum processors employ separate pricing strategies for each circuit that is executed, and the number of shots it is executed for. Table 1 provides an overview of the cost incurred in running each circuit and each shot on various quantum processors. Evidently, the per-circuit price appears to be 10-1000 times more expensive than the per-shot price.
As such, we feel that the field of minimising the number of circuit-executions is under-explored, with only a few methods proposed, the most widely used and studied of which is the classical technique of Simultaneous Perturbation Stochastic Approximation (SPSA) [6; 13; 14; 15]. SPSA also promises two circuit-evaluations per iteration for updating the parameters, and uses them to estimate the slope in a random direction [9]. However, it has been reported to be sensitive to the choice of hyperparameters [9] and, as shown later in Sec. 4.1, it was not found to perform well for Quantum Machine Learning (QML) tasks. Besides gradient descent (based on PSR) and SPSA, the results from QGSA have also been compared against randomised coordinate descent (RCD) [16], which randomly picks a parameter and updates it based on the gradient-estimate with respect to that parameter, resulting in two circuit runs. It has been observed from the results in Sec. 4.1 that both SPSA and RCD end up consuming more iterations to converge.
To address these issues, we introduce the Quantum-Gradient Sampling Algorithm (QGSA) which requires at most two circuits to be executed (with a reduced number of shots) to update the parameters at each iteration. Furthermore, we show that it has the same asymptotic rate of convergence as gradient descent (GD). In practice, it was found to perform better than the aforesaid methods at a fraction of the number of measurements, as demonstrated in Sec. 4.1.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Processor** & **Per-circuit price (USD)** & **Per-shot price (USD)** \\ \hline \hline IonQ - Harmony & 0.3 & 0.01 \\ \hline IonQ - Aria & 0.3 & 0.03 \\ \hline OQC - Lucy & 0.3 & 0.00035 \\ \hline Rigetti - Aspen-M & 0.3 & 0.00035 \\ \hline \end{tabular}
\end{table}
Table 1: Per-circuit and per-shot prices for running circuits on real gate-based quantum processors through a popular cloud service provider.
## 2 The Quantum Gradient Sampling Algorithm
In this section, we state Algorithm 1, our proposed Quantum-Gradient Sampling Algorithm. The algorithm uses only two circuit evaluations to update the parameters in the variational quantum circuit, in contrast to the \(2k\) evaluations necessitated by methods that estimate the gradients. It does this by utilising Theorem 3 and sampling random vectors from a bounded probability distribution.
We begin by stating the following assumption.
**Assumption 1**.: _Going forward, without any loss of generality, we assume that the eigenvalues of the Hermitian observable \(H\) lie in \([0,1]\)._
We note that \(H\) may be expressed as a linear combination of Pauli operators \(P_{i}\) with real coefficients \(c_{i}\):
\[H=\sum_{i}c_{i}P_{i} \tag{4}\]
The observable \(H\) may be normalised as follows to obtain another Hermitian observable \(H^{\prime}\) whose eigenvalues lie in the range \([0,1]\)1:
Footnote 1: S.P. would like to thank Dr. Sourav Chatterjee and Dr. Anirban Mukherjee, from the Corporate Incubation team at Tata Consultancy services, for helpful discussions regarding Hamiltonian normalisation.
\[H^{\prime}=\frac{1}{2}\left(\frac{H}{\sum_{i}c_{i}}+I\right) \tag{5}\]
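As a small numerical check of Eq. (5), the following sketch builds a toy two-qubit Pauli-sum observable with non-negative coefficients (so that \(\sum_{i}c_{i}\) bounds the spectral radius) and verifies that the rescaled observable has eigenvalues in \([0,1]\); the coefficients are arbitrary illustrative values.

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Toy observable H = sum_i c_i P_i with non-negative coefficients
coeffs = [0.5, 0.3, 0.2]
paulis = [np.kron(Z, I2), np.kron(I2, Z), np.kron(X, X)]
H = sum(c * P for c, P in zip(coeffs, paulis))

# Eq. (5): rescale and shift so the eigenvalues lie in [0, 1]
H_prime = 0.5 * (H / sum(coeffs) + np.eye(4))

print(np.linalg.eigvalsh(H))        # within [-sum(coeffs), sum(coeffs)]
print(np.linalg.eigvalsh(H_prime))  # all within [0, 1]
```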
Our algorithm differs from classical gradient descent algorithms since instead of estimating the gradient, a random vector that is sufficiently close to the gradient is used as a descent direction. This yields an iterative algorithm, where the parameters \(\boldsymbol{\theta}^{(t)}\) at the \(t^{th}\) iteration are updated as:
\[\boldsymbol{\theta}^{(t+1)}=\operatorname*{arg\,min}_{\boldsymbol{\theta}\in\left\{\boldsymbol{\theta}^{(t)}-\alpha_{t}\boldsymbol{g_{s}},\;\boldsymbol{\theta}^{(t)}+\alpha_{t}\boldsymbol{g_{s}}\right\}}\mu_{\boldsymbol{\theta}} \tag{6}\]
where \(\alpha_{t}>0\) is the step-size and \(\boldsymbol{g_{s}}\in\mathbb{R}^{k}\) is a random vector whose components are i.i.d. and sampled from a bounded probability distribution, as per Theorem 3 (for instance, \(\mathit{Uniform}(-2\sqrt{\mu_{\boldsymbol{\theta}}},2\sqrt{\mu_{\boldsymbol{\theta}}})\)). QGSA relies on the fact that the bounds on the distribution decrease as the value of the objective function decreases at each iteration. Update rule (6) can be written as
\[\boldsymbol{\theta}^{(t+1)}=\boldsymbol{\theta}^{(t)}+s_{t}\alpha_{t}\boldsymbol{g_{s}} \tag{7}\]
with \(s_{t}=-\mathrm{sgn}(\boldsymbol{g_{t}}^{\top}\boldsymbol{g_{s}})\), where \(\boldsymbol{g_{t}}\) is the actual gradient of \(\mu\) at \(\boldsymbol{\theta}^{(t)}\); this sign choice is consistent with the first-order term in the Taylor expansion used in Lemma 4. The update step executes only two circuits (with a reduced number of shots, as explained later in Sec. 2.2.1), compared to the \(2k\) circuit executions required by methods that estimate the gradient using the parameter-shift rule [7]. We now state the Quantum-Gradient Sampling Algorithm formally in Algorithm 1.
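Since Algorithm 1 is not reproduced here, the sketch below illustrates one update in the gradient-free setting of Algorithm 2: the current value \(\mu^{(t)}\) is carried over from the previous iteration, so at most two new circuit executions are needed. The early exit and step-size decay follow Sec. 2.2.2; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def qgsa_step(mu, theta, mu_t, alpha, gamma=0.1):
    """One QGSA update without gradient access.

    `mu(theta)` executes the circuit and returns the expectation value,
    with the eigenvalues of H assumed to lie in [0, 1] (Assumption 1).
    `mu_t` is the current value, known from the previous iteration.
    Returns the new parameters, function value and step-size.
    """
    bound = 2.0 * np.sqrt(mu_t)                    # Theorem 3
    g_s = rng.uniform(-bound, bound, size=theta.shape)

    theta_plus = theta + alpha * g_s
    mu_plus = mu(theta_plus)                       # first circuit
    if mu_plus < mu_t:                             # descent found: stop early
        return theta_plus, mu_plus, alpha

    theta_minus = theta - alpha * g_s
    mu_minus = mu(theta_minus)                     # second circuit
    if mu_minus < mu_t:
        return theta_minus, mu_minus, alpha

    # neither direction decreased mu: keep theta, shrink the step-size
    return theta, mu_t, alpha / (1.0 + gamma)
```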
We also provide another variant of the algorithm for use in the common setting wherein access to the gradient \(\boldsymbol{g_{\theta}}\) is unavailable. This algorithm is stated formally in Algorithm 2 in Appendix A.
### 2.1 Perturbation bounds for quantum gradients
In this section, motivated by the need to use randomly sampled surrogates for gradients, we derive bounds on the gradients of expectation values in the quantum setting.
Before introducing bounds on the gradients of expectation values, we present the following helpful result:
**Lemma 2**.: _If the \(i^{th}\) parameter \(\theta_{i}\) of an ansatz \(U(\mathbf{\theta})\), \(i\in[k]\), is perturbed by a small quantity \(\delta_{i}\), such that the corresponding state changes from \(\ket{\psi}_{\theta_{i}}\) to \(\ket{\tilde{\psi}}_{\theta_{i}+\delta_{i}}\), then \(\lim_{\delta_{i}\to 0}\|\ket{\psi}_{\theta_{i}}-\ket{\tilde{\psi}}_{\theta_{i}+\delta_{i}}\|=|\delta_{i}|\), where \(\|\cdot\|\) is the \(l_{2}\) norm of a vector._
Proof.: We first define \(\ket{v}\) such that \(\ket{\psi}=U_{i+1:k}(\theta_{i+1:k})U_{i}(\theta_{i})\ket{v}\), \(\ket{\epsilon}=\ket{\tilde{\psi}-\psi}\), and note that \(U_{i}(\theta_{i})=e^{-i\theta_{i}G_{i}}\). Then,
\[\begin{split}\lim_{\delta_{i}\to 0}\langle\epsilon|\epsilon\rangle&=\lim_{\delta_{i}\to 0}\left(\langle\tilde{\psi}|\tilde{\psi}\rangle+\langle\psi|\psi\rangle-\langle\tilde{\psi}|\psi\rangle-\langle\psi|\tilde{\psi}\rangle\right)\\ &=\lim_{\delta_{i}\to 0}\left(2-\langle v|U_{i}^{\dagger}(\theta_{i}+\delta_{i})U_{i}(\theta_{i})|v\rangle-\langle v|U_{i}^{\dagger}(\theta_{i})U_{i}(\theta_{i}+\delta_{i})|v\rangle\right)\\ &=\lim_{\delta_{i}\to 0}\left(2-\langle v|\left(e^{-i\delta_{i}G_{i}}+e^{i\delta_{i}G_{i}}\right)|v\rangle\right)\\ &=\lim_{\delta_{i}\to 0}\left(2-\langle v|\left(2I\cos\delta_{i}\right)|v\rangle\right)\quad\text{since for }G^{2}=I,\ e^{i\phi G}=I\cos\phi+iG\sin\phi\\ &=\lim_{\delta_{i}\to 0}4\sin^{2}\frac{\delta_{i}}{2}=\delta_{i}^{2}\end{split} \tag{8}\]
That is, if one of the parameters of a variational quantum circuit is perturbed by \(\delta_{i}\), then the magnitude of change in the corresponding state of the circuit is equal to the absolute value of the perturbation. In Theorem 3, we use this observation to derive non-trivial bounds on the derivative of \(\mu_{\mathbf{\theta}}\).
**Theorem 3**.: _The partial derivative of \(\mu_{\mathbf{\theta}}\) (as defined in Eq. (2)) with respect to \(\theta_{i}\) lies in the range \([-2\sqrt{\mu_{\mathbf{\theta}}},2\sqrt{\mu_{\mathbf{\theta}}}]\), for all \(i\in[k]\)._
Proof.: Continuing with the notation in lemma 2, we have:
\[\langle\tilde{\psi}|H|\tilde{\psi}\rangle=\langle\psi|H|\psi\rangle+\langle \epsilon|H|\psi\rangle+\langle\psi|H|\epsilon\rangle+\langle\epsilon|H| \epsilon\rangle\,, \tag{9}\]
where the last term is of the order of \(\delta_{i}^{2}\), and may be ignored, resulting in
\[\mu_{\theta_{i}+\delta_{i}}-\mu_{\theta_{i}}=\langle\epsilon|H|\psi\rangle+ \langle\psi|H|\epsilon\rangle \tag{10}\]
Applying Cauchy-Schwarz inequality and noting that \(\langle\psi|H^{2}|\psi\rangle\leq\langle\psi|H|\psi\rangle\), we obtain:
\[-2\sqrt{\mu_{\mathbf{\theta}}}\|\epsilon\|\leq\mu_{\theta_{i}+\delta_{i}}-\mu_{ \theta_{i}}\leq 2\sqrt{\mu_{\mathbf{\theta}}}\|\epsilon\| \tag{11}\]
Finally, using the result of lemma 2, it is straightforward to see that
\[-2\sqrt{\mu_{\mathbf{\theta}}}\leq\frac{\partial\mu_{\theta_{i}}}{\partial\theta _{i}}\leq 2\sqrt{\mu_{\mathbf{\theta}}} \tag{12}\]
We note that the partial derivative of the expectation value with respect to each parameter is a function of the expectation value itself. The elements of the random vector are sampled from a probability distribution bounded as per Eq. (12). As a result, in the case of minimisation problems, if the function value decreases in consecutive iterations, then the bounds on the gradients get tighter, and the approximation of the gradient with the random vector gets better. We ensure this through Lemma 4, where we find bounds on the step-size that guarantees a decrease in objective function value from one iteration to the next.
### 2.2 Quantum-Gradient Sampling Algorithm
In this section, we analyze the convergence of Algorithm 1. In Theorem 5, we observe that under Assumption 1, and with the correct choice of step-size \(\alpha_{t}\) at each iteration, our algorithm achieves an asymptotic convergence rate of \(O(\nicefrac{{1}}{{\sqrt{t}}})\), which is equivalent to that of classical gradient descent, while requiring up to \(\sim 2k\) fewer circuit-executions per iteration.
We begin by providing a bound on the step-size \(\alpha_{t}\) that guarantees reduction in the objective function value.
**Lemma 4**.: _At iteration \(t\), there exists a step-size \(\alpha_{t}\) which guarantees a decrease in the objective function value from \(\mu^{(t)}\) to \(\mu^{(t+1)}\) (as defined in Eq. (2)), corresponding to \(\mathbf{\theta}^{(t)}\) and \(\mathbf{\theta}^{(t+1)}\) (which are related as per Eq. (6)), where \(\mathbf{g}_{\mathbf{s}}\) is a random vector \(\in\mathbb{R}^{k}\)._
Proof.: Let \(\mathbf{H}(\mathbf{\zeta})\) denote the Hessian of \(\mu_{\mathbf{\zeta}}\) (with \(\mathbf{\zeta}\) being a convex combination of \(\mathbf{\theta}^{(t)}\) and \(\mathbf{\theta}^{(t+1)}\)), and \(L\) its Lipschitz smoothness constant. From the Taylor expansion of \(\mu^{(t+1)}\):
\[\begin{split}\mu^{(t+1)}-\mu^{(t)}&=s_{t}\alpha_{t }\mathbf{g}_{t}{}^{\top}\mathbf{g}_{\mathbf{s}}+\frac{\alpha_{t}^{2}s_{t}^{2}}{2}\mathbf{g}_{ \mathbf{s}}{}^{\top}\mathbf{H}(\mathbf{\zeta})\mathbf{g}_{\mathbf{s}}\\ &\leq-\alpha_{t}|\mathbf{g}_{t}{}^{\top}\mathbf{g}_{\mathbf{s}}|+\frac{\alpha _{t}^{2}L}{2}\|\mathbf{g}_{\mathbf{s}}\|_{2}^{2}\end{split} \tag{13}\]
Thus, we obtain the following range of \(\alpha_{t}\) that guarantees reduction in function value:
\[0\leq\alpha_{t}\leq\frac{2|\mathbf{g}_{t}{}^{\top}\mathbf{g}_{\mathbf{s}}|}{L\|\mathbf{g}_{\bm {s}}\|_{2}^{2}} \tag{14}\]
**Theorem 5**.: _If the step-size \(\alpha_{t}\) at each iteration satisfies Eq. (14), and the parameters \(\mathbf{\theta}\) are updated as per Eq. (6), then the Quantum-Gradient Sampling Algorithm converges in \(T\) iterations to a stationary point of \(\mu_{\mathbf{\theta}}\) with a rate given by \(O\left(\frac{1}{\sqrt{T}}\right)\)._
Proof.: Choosing \(\alpha_{t}=\frac{2|\mathbf{g_{t}}^{\top}\mathbf{g_{s}}|}{aL\|\mathbf{g_{s}}\|_{2}^{2}}\) with \(a>1\), Eq. (13) simplifies to:
\[\mu^{(t+1)}-\mu^{(t)}\leq-\frac{2}{aL}\left(1-\frac{1}{a}\right)\|\mathbf{g_{t}}\|_ {2}^{2}\cos^{2}(\phi_{t}) \tag{15}\]
Taking expectation on both sides, and summing over \(t\),
\[\begin{split}&\sum_{t=0}^{T-1}\mathbb{E}\left[\mu^{(t+1)}-\mu^{(t)} \right]\leq-\frac{2}{aL}\left(1-\frac{1}{a}\right)\sum_{t=0}^{T-1}\|\mathbf{g_{t}} \|_{2}^{2}\mathbb{E}\left[\cos^{2}(\phi_{t})\right]\\ &\implies\mu^{(0)}-\mathbb{E}[\mu^{(T-1)}]\geq\frac{2}{aL}\left( 1-\frac{1}{a}\right)T\epsilon_{g}^{2}\mathbb{E}\left[\cos^{2}(\phi_{t})\right] \\ &\qquad\qquad\implies\mu^{(0)}-\mu^{*}\geq\frac{2}{aL}\left(1- \frac{1}{a}\right)T\epsilon_{g}^{2}\mathbb{E}\left[\cos^{2}(\phi_{t})\right] \end{split} \tag{16}\]
where \(\phi_{t}\) is the angle between \(\mathbf{g_{s}}\) and \(\mathbf{g_{t}}\) at iteration \(t\), and \(\epsilon_{g}\) is a lower bound on the norm of the gradient of \(\mu\).
The choice of an appropriate step-size, as per Eq. (14), requires knowledge of the gradient at any given iteration. To circumvent this issue, we set the step-size to a sufficiently low value, and demonstrate empirically in Sec. 4.1 that this still leads to an advantage over gradient-based methods.
#### 2.2.1 Requisite Number of Shots
The gradient-sampling method expends its two circuit evaluations in discerning the descent-direction at each iteration. We posit that this requires \(\mu_{\mathbf{\theta}^{+}}\) and \(\mu_{\mathbf{\theta}^{-}}\) to be evaluated with a lower precision, thus opening up the potential for using a lower number of shots, where \(\mathbf{\theta}^{+}:=\mathbf{\theta}^{(t)}+\alpha_{t}\mathbf{g_{s}}\) and \(\mathbf{\theta}^{-}:=\mathbf{\theta}^{(t)}-\alpha_{t}\mathbf{g_{s}}\).
**Proposition 6**.: _Evaluating \(\mu\) (defined in Eq. (2)) with a precision of \(\epsilon_{\mu}\) and a confidence of \(1-\delta\) requires \(n_{\mu}\geq\frac{1}{2\epsilon_{\mu}^{2}}\ln\left(\frac{2}{\delta}\right)\) shots._
Proof.: This follows directly from Hoeffding's inequality.
**Proposition 7**.: _Finding if \(\mathbf{\theta}^{+}\) (defined above) is a descent direction, i.e., \(\mu_{\mathbf{\theta}^{+}}\leq\mu^{(t)}\), with a confidence of \(1-\delta\) requires \(n_{g}\geq\frac{1}{2(\mu^{(t)}-\mu_{\mathbf{\theta}^{+}})^{2}}\ln\left(\frac{2}{ \delta}\right)\) shots._
Proof.: Let \(\hat{\mu}_{\mathbf{\theta}^{+}}\) be an unbiased estimator of \(\mu_{\mathbf{\theta}^{+}}\). A simple geometrical analysis reveals that if \(|\hat{\mu}_{\mathbf{\theta}^{+}}-\mu_{\mathbf{\theta}^{+}}|\geq|\mu^{(t)}-\mu_{\mathbf{ \theta}^{+}}|\), then we fail to correctly obtain the direction of descent. Using Hoeffding's inequality again to bound the probability of failure gives the necessary number of shots to be \(n_{g}\geq\frac{1}{2(\mu^{(t)}-\mu_{\mathbf{\theta}^{+}})^{2}}\ln\left(\frac{2}{ \delta}\right)\).
Thus, when the difference in objective value between subsequent iterations is larger than \(\epsilon_{\mu}\), gradient-sampling requires a lower number of shots than gradient-based methods. However, recent studies on stochastic gradient descent, and its variations, [9, 10, 11, 12] show that even using a very low number of shots (as few as a single shot) to estimate the gradient can result in adequate performance. Hence, it might be possible to incorporate the same shot-frugality into QGSA to obtain a reduction both in the number of circuit-executions and shots.
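Plugging numbers into Propositions 6 and 7 makes the saving explicit; the chosen precision, gap, and confidence below are arbitrary illustrative values.

```python
import numpy as np

def shots_for_precision(eps_mu, delta):
    """Proposition 6: shots to estimate mu within eps_mu at confidence 1-delta."""
    return int(np.ceil(np.log(2 / delta) / (2 * eps_mu**2)))

def shots_for_descent(gap, delta):
    """Proposition 7: shots to certify a descent direction whose true
    improvement mu_t - mu_plus equals `gap`, at confidence 1-delta."""
    return int(np.ceil(np.log(2 / delta) / (2 * gap**2)))

print(shots_for_precision(0.01, 0.05))  # 18445 shots for a 0.01-precise estimate
print(shots_for_descent(0.05, 0.05))    # 738 shots when the improvement is 0.05
```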
#### 2.2.2 Termination Criteria
Though QGSA claims to use two circuit-evaluations per iteration, in practice, one evaluation may suffice. One may choose to evaluate either \(\mu_{\mathbf{\theta}^{+}}\) or \(\mu_{\mathbf{\theta}^{-}}\), and forgo the other if the corresponding \(\mathbf{\theta}^{+}\) or \(\mathbf{\theta}^{-}\) is found to be a descent direction, further reducing the number of circuits to be executed. In case neither \(\mathbf{\theta}^{+}\) nor \(\mathbf{\theta}^{-}\) offers a reduction in the objective function value, the step-size may be diminished as \(\alpha_{(t+1)}\leftarrow\nicefrac{{\alpha_{t}}}{{(1+\gamma)}}\), where \(\gamma\) is a decay parameter \(\geq 0\). If no reduction is obtained over a configurable number of iterations, for very low values of \(\alpha_{t}\), or for satisfactorily low bounds on the probability distribution, the algorithm may be terminated with the belief of having reached a (local) minimum.
#### 2.2.3 Other Details
Finally, we hypothesize that QGSA may be more resilient to noise than the first-order methods that estimate the gradient, as the effects of noise may be absorbed within the stochasticity of the sampled parameters at each iteration. The proposed algorithm may also help navigate barren plateaus [17] due to its stochastic nature, as suggested in [12]. QGSA might also be combined with other first-order methods, natural gradient descent [18], and heuristics such as operator-grouping [19, 20, 21, 22, 23] (by replacing their gradient-estimation step), to enhance its performance.
## 3 Application to Binary Classification
Binary classification is one of the earliest use-cases to have been attempted using Quantum Machine Learning, and has been studied extensively [4, 5, 24, 25]. The motivation behind mentioning it here is to set up the context to describe the experiments, and results thereof, presented later in Sec. 4.1, and to introduce a new loss function to address binary classification. Without delving too much into the details, a binary classification model consists of a hypothesis function/model \(h_{\mathbf{\theta}}\), which depends on trainable parameters \(\mathbf{\theta}\). For each data point \(\mathbf{x}_{i}\), the function returns a prediction, which is then compared against the available ground truth/label \(y_{i}\) through the use of a loss function \(\mathcal{L}\). The overall empirical risk is defined as:
\[\hat{R}_{h}:=\sum_{i=1}^{n}\mathcal{L}\left(h_{\mathbf{\theta}}(\mathbf{x}_{i}),y_{i} \right)=:\mathcal{L}_{\mathbf{\theta}} \tag{17}\]
A classical optimiser then adjusts the parameters to minimise the empirical risk, yielding a model that can be used to make predictions for unseen data points. It is important to mention that the hypothesis and loss functions make implicit assumptions and place a prior on the class-conditional density of the data, and the promise of quantum computing in QML is to provide families of variational circuits/ansaetze that are classically difficult to simulate.
The QGSA, recalling from above, minimises functions of the form \(\langle\psi(\mathbf{\theta})|H|\psi(\mathbf{\theta})\rangle\), where the eigenvalues of \(H\in[0,1]\). To provide a natural fit between this and the loss function, we define a new loss as:
\[\begin{split}\mathcal{L}_{QH}(h_{\mathbf{\theta}}(\mathbf{x}),y)&:=\langle\psi(\mathbf{\theta},\mathbf{x})|\tfrac{1}{2}(I-y\mathcal{O})|\psi(\mathbf{\theta},\mathbf{x})\rangle\\ &=\frac{(1-yh_{\mathbf{\theta}}(\mathbf{x}))}{2}\end{split} \tag{18}\]
where \(h_{\mathbf{\theta}}(\mathbf{x})=\langle\psi(\mathbf{\theta},\mathbf{x})|\mathcal{O}|\psi(\mathbf{\theta},\mathbf{x})\rangle\), and \(\mathcal{O}\) is a user-defined, configurable observable whose eigenvalues lie between \(-1\) and \(1\); consequently, the eigenvalues of \(\tfrac{1}{2}(I-y\mathcal{O})\) lie in \([0,1]\), which satisfies Assumption 1. The loss function proposed in Eq. (18) is very similar to the popularly used hinge loss in classical ML, due to which we dub it the Quantum Hinge (QH) loss. It must also be noted that the restriction on the eigenvalues of \(H\) is not a necessary condition for QGSA to work (as Theorem 5 is independent of this assumption), but it is nevertheless convenient, since it lets us appeal to Theorem 3 and obtain consistently shrinking bounds on the probability distribution. The QGSA algorithm, as such, works with arbitrary loss functions, as exemplified in the subsequent section.
## 4 Experimental Details and Results
In this section, we provide details of the binary classification experiments that were conducted to investigate and compare the performance of QGSA against Gradient Descent (GD), Randomised Coordinate Descent (RCD), and SPSA; GD and RCD use the PSR to estimate gradients, while SPSA and QGSA use two function evaluations per iteration to update the parameters. The focus of this paper is not to propose good encoding schemes, ansaetze, or loss functions for the problem being considered, or even to study how the trained (ML) models generalise to unseen data points, but to compare the rate of decrease in objective function value with respect to the number of circuits executed.
Binary classification was performed on the following two datasets:
* Iris dataset [26]: only the first two classes were considered, and the features were translated and scaled to the range \([0,\pi]\).
* Kaggle Surface Crack Detection dataset [27]: the entire dataset consists of \(20,000\)\(227\times 227\) RGB images of each class (with and without cracks). \(100\) samples were randomly chosen from each class and preprocessed, as described in [28].
In both cases, the data points were labelled as \(+1\) and \(-1\) to denote the two classes.
The classification circuits were created by first encoding the data into the qubits using the \(H\) gate, followed by the \(R_{Z}\) gate with the features passed as parameters to the latter. The encoding layer was followed up with \(3\) layers of the Basic Entangling Layers ansatz from Pennylane [29], bringing the number of optimisable parameters in the circuit to \(12\). Finally, \(\langle Z\rangle\) of the first qubit in the circuit was measured, which played the role of \(h_{\mathbf{\theta}}(\mathbf{x})\); a sketch of this circuit is given below. The models were trained with both QH and Mean Squared Error (MSE) as the loss functions, using the four aforementioned methods. In each case, the models were initialised with the same starting point and trained for \(100\) iterations with step-size \(\alpha=0.1\) (in the case of SPSA, the values of the hyperparameters were \(a=0.1\), \(\alpha=0.602\), \(c=0.2\), and \(\gamma=0.101\), which were chosen as prescribed in [15], with \(a\) chosen to make the comparison fair against QGSA, GD, and RCD) on the noise-free simulator available through Pennylane [29]. In the case of gradient-sampling, both \(\mu_{\mathbf{\theta}^{+}}\) and \(\mu_{\mathbf{\theta}^{-}}\) were evaluated, and, keeping the stochasticity of the processes in mind, we report the results over \(10\) trials.
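A sketch of this classifier and the QH empirical risk in PennyLane is given below; four qubits are assumed (matching the four Iris features), which yields the stated \(3\times 4=12\) trainable parameters, while data loading and the training loop are omitted.

```python
import pennylane as qml
from pennylane import numpy as np

n_qubits, n_layers = 4, 3
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(x, weights):
    # Encoding: Hadamard followed by R_Z(feature) on each qubit
    for w in range(n_qubits):
        qml.Hadamard(wires=w)
        qml.RZ(x[w], wires=w)
    # Trainable part: 3 Basic Entangling Layers (12 parameters in total)
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))      # plays the role of h_theta(x)

def qh_risk(X, y, weights):
    """Quantum Hinge empirical risk: mean of (1 - y*h)/2, cf. Eq. (18)."""
    preds = np.array([classifier(x, weights) for x in X])
    return np.mean((1 - y * preds) / 2)

weights = np.random.uniform(0, 2 * np.pi, size=(n_layers, n_qubits),
                            requires_grad=True)
```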
### 4.1 Results and Observations
Fig. 1 depicts the plots of average training losses (over \(10\) experiments, along with their standard deviation) for binary classification of both the Iris and Crack datasets, using the classifiers detailed above. The training losses have been reported against the number of iterations of the optimisation procedure, as well as the total number of circuit executions used for updating the parameters. Both SPSA and RCD were found to require the same number of circuit evaluations on an average as GD, while QGSA consistently provided a dramatic reduction. To put this in perspective, training the classifier on the Crack dataset
Figure 1: Plots of training losses on the y-axis against the number of iterations or the number of circuit-executions expended on updating the parameters on the x-axis, for both the Iris and Crack datasets. Sec. 4 provides details on the classifier used, and the caption of each individual subfigure identifies the data set and the loss function.
with QH loss using GD on a superconducting-qubit-based processor available through the aforementioned cloud service provider would cost \(\sim\)USD 230,000. In comparison, using QGSA, the same activity would require only about USD 18,000. Further, it may be observed that QGSA uses approximately the same number of iterations as GD, both of which perform a lot better than RCD and SPSA.
## 5 Conclusion and Future Directions
In this paper, we presented the Quantum-Gradient Sampling Algorithm, which uses random vectors drawn from bounded probability distributions, instead of estimated gradients, to update the parameters in variational quantum circuits. We also proved that QGSA has the same asymptotic rate of convergence as gradient descent, and demonstrated its capability through binary classification on two different datasets. The results showed a \(k\)-fold reduction in the number of circuit executions on average through the use of QGSA. The development of QGSA may be carried forward by incorporating it into first-order methods as a substitute for the gradient-estimation step, and by further reducing the requisite number of shots through shot-frugal methods. The performance of QGSA may also be investigated in navigating barren plateaus and providing resilience against noise.
|
2310.06400 | ALMA Band 9 upgrade: a feasibility study | We present the results of a study on the feasibility of upgrading the
existing ALMA Band 9 receivers (602-720 GHz). In the current configuration,
each receiver is a dual channel heterodyne system capable of detecting
orthogonally polarized signals through the use of a wire grid and a compact
arrangement of mirrors. The main goals of the study are the upgrade of the
mixer architecture from Double-Sideband (DSB) to Sideband-separating (2SB), the
extension of the IF and RF bandwidth, and the analysis of the possibilities of
improving the polarimetric performance. We demonstrate the performance of 2SB
mixers both in the lab and on-sky with the SEPIA660 receiver at APEX, which
shows image rejection ratios exceeding 20 dB and can perform successful
observations of several spectral lines close to the band edges. The same
architecture in ALMA Band 9 would lead to an increase in the effective spectral
sensitivity and a gain of a factor two in observation time. We also set up an
electromagnetic model of the optics to simulate the polarization performance of
the receivers, which is currently limited by the cross-polar level and the beam
squint, i.e. pointing mismatch between the two polarizations. We present the
results of the simulations compared to the measurements and we conclude that
the use of a polarizing grid is mainly responsible for the limitations. | S. Realini, R. Hesper, J. Barkhof, A. Baryshev | 2023-10-10T08:06:50Z | http://arxiv.org/abs/2310.06400v1 | # ALMA Band 9 upgrade: a feasibility study
###### Abstract
We present the results of a study on the feasibility of upgrading the existing ALMA Band 9 receivers (602-720 GHz). In the current configuration, each receiver is a dual channel heterodyne system capable of detecting orthogonally polarized signals through the use of a wire grid and a compact arrangement of mirrors. The main goals of the study are the upgrade of the mixer architecture from Double-Sideband (DSB) to Sideband-separating (2SB), the extension of the IF and RF bandwidth, and the analysis of the possibilities of improving the polarimetric performance. We demonstrate the performance of 2SB mixers both in the lab and on-sky with the SEPIA660 receiver at APEX, which shows image rejection ratios exceeding 20 dB and can perform successful observations of several spectral lines close to the band edges. The same architecture in ALMA Band 9 would lead to an increase in the effective spectral sensitivity and a gain of a factor two in observation time. We also set up an electromagnetic model of the optics to simulate the polarization performance of the receivers, which is currently limited by the cross-polar level and the beam squint, i.e. the pointing mismatch between the two polarizations. We present the results of the simulations compared to the measurements, and we conclude that the use of a polarizing grid is mainly responsible for the limitations.
## 1 Introduction
With its 66 high-precision antennas located on the Chajnantor plateau at 5000 m altitude, the Atacama Large Millimeter/submillimeter Array (ALMA) is the largest ground-based aperture synthesis telescope for observation in the millimeter and submillimeter regime. The broad frequency range available (30-950 GHz) is divided into ten different receiver bands, where each receiver unit (called "cartridge") is built independently. Each receiver band detects two orthogonal linear polarizations and down-converts the signals to an intermediate frequency. Each of the ten bands should be optimized to provide a good coupling with ALMA telescope as well as low sidelobe and cross-polarization levels. In addition, the beams for the two orthogonal polarizations on the sky should coincide and any deviation, called beam squint, should be minimized.
Band 9 (610-720 GHz) is ideal to study warm (\(T>100\) K) and dense gas spectroscopically for several astrophysical objects: high redshift galaxies, starbursts, black holes in AGN, star-forming regions, proto-planetary disks, and the Solar system. It also allows one to detect high excitation lines of CO, HCN, HNC and HCO+ across cosmic time. Since such observations provide a unique window on galaxy evolution, we should seek the best available
detector technique for this band. The present ALMA Band 9 receivers use a double-sideband (DSB) mixer scheme [1], while all frequency bands below 650 GHz use sideband-separating (2SB) heterodyne receivers to better remove the atmospheric noise in the image sideband. Moreover, the existing Band 9 receivers have a cross-polar performance which does not meet the ALMA specification of -23 dB, and they show a relatively large beam squint, which makes this channel unsuitable for extended-source polarimetry.
These issues led to the definition of a project to study a possible improvement of the Band 9 performance within the framework of the ESO "Advanced Study for Upgrades of the Atacama Large Millimeter/sub-millimeter Array (ALMA)". The main goals of the project are described in Section 2.
## 2 Project overview
In its current configuration, the ALMA Band 9 receiver is a dual channel heterodyne system capable of detecting orthogonally polarized signals using a wire grid combined with a compact arrangement of mirrors. The requirements on the optics performance are a beam squint \(<1\%\) of the FWHM and a cross-polar level -23 dB below the co-polar total energy level. However, the cross-polar performance of the existing receivers does not meet the requirements specified for the ALMA channels, and they show a relatively large beam squint, which makes this channel unsuitable for extended-source polarimetry.
These requirements are very challenging and not completely fulfilled in Band 9. Due to the non-optimal performance of its receivers, we performed a feasibility study of an upgrade to 2SB receivers, with a possible improvement of the optical system within the cartridge. The study, titled "Full 2SB Receiver Upgrade for ALMA Band 9: Implementation Study", is funded by ESO and its main goals are [2]:
* extension of the IF bandwidth to \(4\times 12\) GHz (2 sidebands and 2 polarizations), without compromising the other performance parameters;
* extension of the RF bandwidth beyond the nominal 600-720 GHz;
* analysis of the possibility to improve the optical cross-polarization performance compared to the currently installed configuration, e.g. by using a single-horn configuration with an orthomode transducer;
* investigation of the causes of the scatter in the beam squint to understand if it could be improved by reducing tolerances of the grid.
The technical feasibility of an ALMA Band 9 upgrade from the existing double-sideband configuration to the sideband-separating one has already been demonstrated in a previous ESO study [3]. Since the proposed rebuilding of the Band 9 cartridges could be an opportunity to bring the cross-polar performance in line with the original specification, we carried out a dedicated analysis to investigate the improvement in the cross-polarization performance by changing the way in which polarization separation is performed, i.e. removing the grid and using an orthomode transducer (OMT) to separate the two orthogonal polarizations. Here we also present the results of the analysis performed to check the effect of the grid mounting on the beam squint.
### Science drivers
The proposed upgrade would give significant scientific advantages by improving the sensitivity by a factor of 2 through the elimination of the atmospheric noise in the unwanted image sideband. It would also reduce line confusion in spectral line surveys and improve the required integration time by a factor of 4. This is highly beneficial because Band 9 requires favorable weather conditions, which occur only a modest fraction of the time.
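A standard radiometer-equation argument (our gloss; it is not spelled out in the paper) connects these two factors: for a target noise level \(\sigma\),

\[\sigma\propto\frac{T_{\mathrm{sys}}}{\sqrt{\Delta\nu\,t}}\qquad\Longrightarrow\qquad t\propto\frac{T_{\mathrm{sys}}^{2}}{\Delta\nu\,\sigma^{2}},\]

so halving the effective system noise temperature quarters the integration time needed to reach the same sensitivity.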
We also identified additional science cases, especially related to the polarization capability. Among these are the study of magnetic fields in the very dense environments of circumstellar envelopes around evolved stars and high-mass star-forming regions through the vibrationally excited water masers at 658 GHz. Polarization observations in Band 9 can also be a powerful tool to study dust settling and grain growth in accretion/protoplanetary disks around young stellar objects.
## 3 The ALMA Band 9 receiver
Each ALMA Band 9 receiver unit (Fig. 1, left) is installed in the telescope front end and has been designed to detect and down-convert two orthogonal linear polarization components of the light collected by the ALMA antennas [1]. The radiation, collected by the telescope, enters the front end through a vacuum window and a set of infrared filters. The beam is then refocused with a compact arrangement of elliptical mirrors, which is fully contained within the cartridge (Fig. 1, right). The optics assembly contains a polarizing grid to separate the two orthogonal linear polarizations and two beam splitters to combine each resulting beam with a local oscillator signal. The two beams are then sent to independent superconductor-insulator-superconductor (SIS) mixers that perform the heterodyne down-conversion. Finally, the generated intermediate frequency (IF) signals are amplified by cryogenic and room-temperature HEMT amplifiers and exported to the telescope's IF backend for further processing and correlation.
## 4 Results
### Extension of the IF Bandwidth
To determine the maximum IF bandwidth achievable by the Band 9 sideband separating receivers, we should consider the bandwidth of the mixer devices themselves and the technological extensions needed to harness this bandwidth. In addition, even if the mixers can
Figure 1: _Left:_ Picture of ALMA band 9 receiver cartridge. _Right:_ Layout of the optical system for focusing the RF beam and injecting the LO signals
deliver the expected bandwidth, the greatest technical challenge is obtaining cryogenic low-noise amplifiers (LNAs) and quadrature hybrids. Currently, the Observatorio Astronomico de Yebes (Spain) is involved in the further development of this type of amplifiers, especially towards larger bandwidth and better input matching, supported by a dedicated ESO technology development program.
To test the performance of the sideband-separating mixers, we measured the two most significant properties: the image rejection ratio (IRR) and the noise temperature over the IF band. Figure 2 shows the combined IRR data for the tested mixer pair in the IF range 4-18 GHz.
The measurements show that the existing Band 9 SIS mixer design can achieve a larger IF bandwidth than the original design specification and the performance will be similar up to 18 GHz.
### Extension of the RF Bandwidth
The band edges of ALMA Band 9 were originally chosen to be 602 and 720 GHz because of the presence of strong water vapour absorption lines at 557 and 750 GHz, each pressure-broadened by about 50 GHz. However, very dry conditions allow useful observations even outside of this range, especially if the source is sufficiently bright. A good example is a spectral line survey of the Orion-KL star-forming region performed with the SEPIA660 2SB receiver at APEX [4].
As demonstrated by the measurements performed on the SEPIA660 receiver, the existing SIS junctions that make up most of the mixers in the delivered Band 9 cartridges typically have an RF bandwidth extending significantly beyond the 602-720 GHz range of the Band 9 specification. The reflective optics in the cartridge, as well as the corrugated feedhorns, also have bands wider than the specification. The only modification needed is the extension of the local oscillator (LO) tuning range by the use of commercial components. Therefore, we can extend the RF bandwidth to 580-735 GHz.
### Polarization Performance
To evaluate the impact of the polarization grid on the cross-polar level, we simulated the original Band 9 optics using the software GRASP. We built the model according to the values reported in [5]. The final layout of the model includes two feedhorns, three ellipsoidal mirrors and the polarization grid. The feedhorn is modelled as a hybrid-mode feedhorn based on
Figure 2: Combined IRR data for the tested mixer pair in the IF range 4–18 GHz.
the values from the drawings. This is a good approximation to the actual corrugated horn. The mirror shapes are defined by the mechanical drawings, assuming the values for the cold structure. The grid is modelled as a regular grid of conducting parallel wires. We define polarisation 0P as the component reflected from the grid wires, which are parallel to the direction of the horn axes, while polarisation 1P is the component transmitted through the wire grid.
#### 4.3.1 Cross-polarization
We started the analysis considering the nominal optics to determine the reference cross-polarization level. We performed main beam simulations using Physical Optics (PO) and Physical Theory of Diffraction (PTD) on all the reflectors. To verify the model, we compared the results of the simulations with the available measured values and we found compatible values of -17 dB. Figure 3 shows the two orthogonal cuts of the 0P (reflected) and 1P (transmitted) polarisation beams at the same measurement location, i.e. in the focal plane.
Then we repeated the simulations removing the grid from the model, to get an idea of the best cross-polarization obtainable with the existing optics when re-used in a single-horn configuration. With this configuration we get a cross-polarization level of \(\sim\)-23.8 dB, i.e. within the specification established for ALMA.
#### 4.3.2 Beam squint
Since the Band 9 orthogonally polarised beams are divided using a wire grid, they follow separate paths from the grid to the related feedhorn. Displacement of the optical components can cause a beam squint between the two main beams on the sky, potentially affecting the receiver calibration procedure. Variations of the beam squint between cartridges are very difficult to calibrate out and must be tightly controlled so as not to hamper wide-field polarisation observations.
The beam squint for OMT-based bands is generally within 2%, while the current configuration of Band 9 with the wire grid has a larger scatter [6]. In addition, the on-sky beam squint shows a preferred direction, with an angle of \(\sim\)39.5\({}^{\circ}\).
Since the inaccuracy in the mounting of the grid could be one of the causes of the scatter, we tried to reproduce this effect using the GRASP model to simulate the squint. We computed the beam squint in the focal plane for a random tilt of the grid around its axes and
Figure 3: Orthogonal cuts of the 0P and 1P polarisation beams in the focal plane. The displacement of the E- and H-planes depends on the angle of the incoming RF signal, which is 0.94\({}^{\circ}\) with respect to the cartridge axis.
we quantified the offset of the beam in the focal plane to make a comparison with the measured data. We assumed two independent Gaussian distributions for the tilt in the two perpendicular directions because they are decoupled by the mechanical construction. By optimizing the values of the two independent standard deviations to match the data, we found that a Gaussian distribution with a standard deviation \(\sigma=0.28^{\circ}\) in one direction and \(\sigma=0.19^{\circ}\) in the orthogonal one can give a similar scatter of the beam squint, as shown in Fig. 4. Therefore, inaccuracies in the grid mounting could be responsible for the observed beam squint scatter.
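A minimal Monte Carlo sketch of this tilt-scatter analysis is given below (not from the paper: the linear tilt-to-offset gains are hypothetical stand-ins for the full GRASP computation; only the two independent Gaussian tilt distributions follow the text).

```python
import numpy as np

# Sample grid tilts from two independent Gaussians (decoupled mechanical
# axes, as in the text) and propagate them to a focal-plane beam offset
# through assumed linear gains.

rng = np.random.default_rng(1)
n_samples = 10_000

sigma_a, sigma_b = 0.28, 0.19   # tilt standard deviations in degrees (from the text)
gain_a, gain_b = 1.0, 1.0       # assumed focal-plane offset per degree of tilt

tilt_a = rng.normal(0.0, sigma_a, n_samples)
tilt_b = rng.normal(0.0, sigma_b, n_samples)

dx, dy = gain_a * tilt_a, gain_b * tilt_b   # offsets along the two axes
squint = np.hypot(dx, dy)

print(f"rms squint: {squint.std():.3f} (arbitrary offset units)")
print(f"axis scatters: {dx.std():.3f}, {dy.std():.3f}")
```

Comparing the resulting scatter (and its anisotropy) with the measured beam-squint distribution is what constrains the two standard deviations quoted above.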
## 5 Conclusions
The main conclusions of the study are that the SIS mixers allow an extension of the upper limit of the IF band to at least 18 GHz, that the RF bandwidth can be widened to 580-735 GHz, and that the limitations of the current polarimetric performance can be explained by deviations in the grid mounting angle (for the beam squint) and by the presence of the grid in combination with the mirrors (for the cross-polarization level).
|
2303.06538 | A $q$-Morris constant term identity for the Lie algebra $A_n$ and its
symmetric function generalizations | It is well-known that the Selberg integral is equivalent to the Morris
constant term identity. In 2009 Warnaar obtained the Selberg integral for the
Lie algebra $A_n$. In this paper, from the point of view of constant term
identities, we obtain a $q$-Morris constant term identity of type $A_n$ and its
several symmetric function generalizations. The type $A_n$ $q$-Morris identity
looks like a constant term version of Warnaar's $A_n$ Selberg integral to some
extent. | Yue Zhou | 2023-03-12T02:02:25Z | http://arxiv.org/abs/2303.06538v1 | A \(q\)-Morris constant term identity for the Lie algebra \(A_{n}\) and its symmetric function generalizations
###### Abstract.
It is well-known that the Selberg integral is equivalent to the Morris constant term identity. In 2009 Warnaar obtained the Selberg integral for the Lie algebra \(A_{n}\). In this paper, from the point of view of constant term identities, we obtain a \(q\)-Morris constant term identity of type \(A_{n}\) and several symmetric function generalizations of it. The type \(A_{n}\) \(q\)-Morris identity looks like a constant term version of Warnaar's \(A_{n}\) Selberg integral to some extent.
**Keywords:** constant term identity; \(q\)-Morris identity; symmetric function; Selberg integral; Warnaar's \(A_{n}\) Selberg integral.
2010 Mathematics Subject Classification: 05A30, 33D70, 05E05
## 1. Introduction
In 1944, Atle Selberg [13] gave the following remarkable multiple integral:
\[\int_{0}^{1}\cdots\int_{0}^{1}\prod_{i=1}^{k}z_{i}^{\alpha-1}(1-z _{i})^{\beta-1}\prod_{1\leq i<j\leq k}|z_{i}-z_{j}|^{2\gamma}\mathrm{d}z_{1} \cdots\mathrm{d}z_{k}\\ =\prod_{j=0}^{k-1}\frac{\Gamma(\alpha+j\gamma)\Gamma(\beta+j \gamma)\Gamma\big{(}1+(j+1)\gamma\big{)}}{\Gamma\big{(}\alpha+\beta+(k+j-1) \gamma\big{)}\Gamma(1+\gamma)}, \tag{1.1}\]
where \(\alpha,\beta,\gamma\) are complex parameters such that
\[\mathrm{Re}(\alpha)>0,\quad\mathrm{Re}(\beta)>0,\quad\mathrm{Re}(\gamma)>-\min\{1/k,\mathrm{Re}(\alpha)/(k-1),\mathrm{Re}(\beta)/(k-1)\}.\]
When \(k=1\), the above Selberg integral reduces to the Euler beta integral.
It is well-known that the Selberg integral is equivalent to the Morris constant term identity [11]
\[\operatorname*{CT}_{z_{0},\ldots,z_{k}}\prod_{i=1}^{k}(1-z_{0}/z_{i})^{a}(1-z _{i}/z_{0})^{b}\prod_{1\leq i\neq j\leq k}(1-z_{i}/z_{j})^{c}=\prod_{i=0}^{k- 1}\frac{(a+b+ic)!\big{(}(i+1)c\big{)}!}{(a+ic)!(b+ic)!c!} \tag{1.2}\]
for nonnegative integers \(a,b,c\), where \(\operatorname*{CT}_{z_{i}}L(z_{i})\) denotes taking the constant term of the Laurent polynomial (series) \(L(z_{i})\). Note that we can set \(z_{0}=1\) in (1.2) since we take the constant
term of a homogeneous Laurent polynomial. In his Ph.D. thesis [11], Morris also conjectured the following \(q\)-analogue constant term identity
\[\operatorname{CT}_{z_{0},\ldots,z_{k}}\prod_{i=1}^{k}(z_{0}/z_{i})_{a}(qz_{i}/z_ {0})_{b}\prod_{1\leq i<j\leq k}(z_{i}/z_{j})_{c}(qz_{j}/z_{i})_{c}=\prod_{i=0}^{ k-1}\frac{(q)_{a+b+ic}(q)_{(i+1)c}}{(q)_{a+ic}(q)_{b+ic}(q)_{c}}, \tag{1.3}\]
where \((y)_{c}=(y;q)_{c}:=(1-y)(1-yq)\cdots(1-yq^{c-1})\) is the \(q\)-factorial for a positive integer \(c\) and \((y)_{0}:=1\). In 1988, Habsieger [6] and Kadell [8] independently proved Askey's conjectured \(q\)-analogue of the Selberg integral [3]. Expressing their \(q\)-analogue integral as a constant term identity they thus proved Morris' \(q\)-constant term conjecture (1.3). Now the constant term identity (1.3) is called the Habsieger-Kadell \(q\)-Morris identity.
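Identity (1.3) is small enough to check directly by computer algebra for small parameters. The following sketch (not part of the paper; it assumes SymPy is available) verifies the \(k=2\), \(a=b=c=1\) case with \(z_{0}=1\).

```python
from sympy import symbols, expand, simplify, prod

q, z1, z2 = symbols('q z1 z2')

def qpoch(x, n):
    # (x; q)_n = (1 - x)(1 - x q) ... (1 - x q^(n-1))
    return prod([1 - x * q**i for i in range(n)])

def constant_term(expr, zs):
    # constant term of an expanded Laurent polynomial in the variables zs
    expr = expand(expr)
    for z in zs:
        expr = expr.coeff(z, 0)
    return expand(expr)

a, b, c = 1, 1, 1
zs = [z1, z2]
k = len(zs)

lhs_expr = prod([qpoch(1/z, a) * qpoch(q*z, b) for z in zs])
for i in range(k):
    for j in range(i + 1, k):
        lhs_expr *= qpoch(zs[i]/zs[j], c) * qpoch(q*zs[j]/zs[i], c)

lhs = constant_term(lhs_expr, zs)
rhs = prod([qpoch(q, a + b + i*c) * qpoch(q, (i + 1)*c)
            / (qpoch(q, a + i*c) * qpoch(q, b + i*c) * qpoch(q, c))
            for i in range(k)])
print(simplify(lhs - rhs))  # expect 0
```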
For nonnegative integers \(k_{1},k_{2},\ldots,k_{n}\), we write
\[z^{(s)}=(z_{1}^{(s)},\ldots,z_{k_{s}}^{(s)})\]
and \(z=(z^{(1)},\ldots,z^{(n)})\). We also make use of the Vandermonde-type products for alphabets \(x=(x_{1},\ldots,x_{l})\) and \(y=(y_{1},\ldots,y_{m})\), which are given by
\[\Delta(x)=\prod_{1\leq i<j\leq l}(x_{i}-x_{j}),\qquad\Delta(x,y)=\prod_{i=1}^{ l}\prod_{j=1}^{m}(x_{i}-y_{j}).\]
In 2009, based on [14], Warnaar [15] obtained a Selberg integral for the Lie algebra \(A_{n}\), see Theorem 1.1 below. Then the original Selberg integral (1.1) corresponds to the \(A_{1}\) case, and Tarasov and Varchenko's result [14] is the \(A_{2}\) case.
**Theorem 1.1**.: _For a positive integer \(n\) let \(k_{1}\geq k_{2}\geq\cdots\geq k_{n}\geq 0\) be integers and \(k_{0}=k_{n+1}=0\). Let \(\alpha,\beta_{1},\ldots,\beta_{n},\gamma\in\mathbb{C}\) be such that_
\[\operatorname{Re}(\alpha)>0,\quad\operatorname{Re}(\beta_{1})>0,\ldots, \operatorname{Re}(\beta_{n})>0,\quad-\min\Big{\{}\frac{\operatorname{Re}( \alpha)}{k_{1}-1},\frac{1}{k_{1}}\Big{\}}<\operatorname{Re}(\gamma)<\frac{1}{k _{1}}\]
_and_
\[-\frac{\operatorname{Re}(\beta_{s})}{k_{s}-k_{s+1}-1}<\operatorname{Re}(\gamma)<\frac{\operatorname{Re}(\beta_{r}+\cdots+\beta_{s})}{s-r}\]
_for \(1\leq r\leq s\leq n\). Then_
\[\int_{C_{\gamma}^{k_{n},\ldots,k_{1}}[0,1]}\prod_{s=1}^{n}\Big{(} |\Delta(z^{(s)})|^{2\gamma}\prod_{i=1}^{k_{s}}(z_{i}^{(s)})^{\alpha_{s}-1}(1-z_ {i}^{(s)})^{\beta_{s}-1}\Big{)}\prod_{s=1}^{n-1}|\Delta(z^{(s+1)},z^{(s)})|^{- \gamma}\mathrm{d}z\] \[=\prod_{1\leq r\leq s\leq n}\prod_{i=1}^{k_{s}-k_{s+1}}\frac{ \Gamma\big{(}\beta_{r}+\cdots+\beta_{s}+(i+r-s-1)\gamma\big{)}}{\Gamma\big{(} \alpha_{r}+\beta_{r}+\cdots+\beta_{s}+(i+r-s+k_{r}-k_{r-1}-2)\gamma\big{)}}\] \[\quad\times\prod_{s=1}^{n}\prod_{i=1}^{k_{s}}\frac{\Gamma\big{(} \alpha_{s}+(i-k_{s-1}-1)\gamma\big{)}\Gamma(i\gamma)}{\Gamma(\gamma)}, \tag{1.4}\]
_where \(\alpha_{1}=\alpha\), \(\alpha_{2}=\cdots=\alpha_{n}=1\) and \(\mathrm{d}z=\mathrm{d}z^{(1)}\ldots\mathrm{d}z^{(n)}\)._
We omit the explicit expression for the integration domain \(C^{k_{n},\ldots,k_{1}}_{\gamma}[0,1]\) due to its complexity. The interested reader can refer to [15, (4.9)]. Recently, Albion, Rains and Warnaar [1] generalized the above integral to an AFLT type by adding a product of two Jack polynomials.
For nonnegative integers \(a,b,k_{1},k_{2}\) and a positive integer \(c\), let
\[L_{k_{1},k_{2}}(a,b,c;z^{(1)},z^{(2)})=\frac{\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{( 1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{( 1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z _{i}^{(1)}/z_{j}^{(2)})_{c}}. \tag{1.5}\]
Denote
\[L_{k_{1},k_{2}}(a,b,c):=\underset{z^{(1)}}{\text{CT}}\,L_{k_{1},k_{2}}(a,b,c; z^{(1)},z^{(2)}), \tag{1.6}\]
where we suppress the variables \(z^{(2)}\) on the left-hand side. When taking constant terms, we always interpret \((1-dz_{i}^{(s)}/z_{j}^{(s+1)})^{-1}\) as
\[\sum_{l\geq 0}(dz_{i}^{(s)}/z_{j}^{(s+1)})^{l}\]
for \(d\in\mathbb{C}(q)\) and \(s=1,\ldots,n\) throughout this paper. That is, we assume \(|dz_{i}^{(s)}/z_{j}^{(s+1)}|<1\) for all terms of this form. In (1.6), we need this expansion of \((1-dz_{i}^{(1)}/z_{j}^{(2)})^{-1}\) to obtain the constant term. By convention, we define the empty product to be \(1\). Then
\[L_{k_{1},0}(a,b,c)=\underset{z^{(1)}}{\text{CT}}\prod_{i=1}^{k_{1}}(z_{0}/z_{ i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{( 1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}\]
equals
\[M_{k_{1}}(a,b,c)=\prod_{i=0}^{k_{1}-1}\frac{(q)_{a+b+ic}(q)_{(i+1)c}}{(q)_{a+ ic}(q)_{b+ic}(q)_{c}} \tag{1.7}\]
by the \(q\)-Morris identity (1.3).
For \(n\) a positive integer, let \(k_{1},\ldots,k_{n+1}\), \(a_{1},\ldots,a_{n}\) and \(b_{1},\ldots,b_{n}\) be nonnegative integers such that \(k_{1}\geq k_{2}\geq\cdots\geq k_{n+1}\), \(a_{1}=a,a_{2}=\cdots=a_{n}=0\), and denote \(\sigma_{s}=b_{1}+\cdots+b_{s}\) for \(s=1,\ldots,n\) and \(\sigma_{0}=0\). In this paper, we find that
**Theorem 1.2**.: _If \(a+\sigma_{s}+s\geq sc\) for \(s=1,\ldots,n\) and \(k_{n+1}=0\), then_
\[\underset{z}{\text{CT}}\prod_{s=1}^{n}\frac{\prod_{i=1}^{k_{s}}(1/z_{i}^{(s)})_{a_{s}}(qz_{i}^{(s)})_{b_{s}}\prod_{1\leq i<j\leq k_{s}}(z_{i}^{(s)}/z_{j}^{(s)})_{c}(qz_{j}^{(s)}/z_{i}^{(s)})_{c}}{\prod_{i=1}^{k_{s}}(z_{i}^{(s)})_{b_{s}+1-c}\prod_{j=1}^{k_{s+1}}\prod_{i=1}^{k_{s}}(q^{b_{s}+1-c}z_{i}^{(s)}/z_{j}^{(s+1)})_{c}}\\ =(-1)^{\sum_{s=1}^{n}k_{s}(b_{s}+1-c)}q^{\sum_{s=1}^{n}k_{s}\binom{b_{s}+2-c}{2}}\prod_{s=1}^{n}M_{k_{s}}\big{(}a+\sigma_{s}+s(1-c),c-1,c\big{)}. \tag{1.8}\]
If readers compare the similarity of (1.8) and (1.4) with the equivalence between the Morris identity (1.2) and the Selberg integral (1.1), they may find that the constant term identity (1.8) looks like a constant term version of Warnaar's \(A_{n}\) Selberg integral (1.4). However, we do
not know how to connect them so far. Note that in this paper, we obtain more general results than Theorem 1.2 by adding symmetric functions, see Theorem 6.4 and Theorem 6.5 below.
The structure of this paper is as follows. In the next section, we introduce some basic notation and results on symmetric functions and plethystic notation. In Section 3, we introduce the main tool of this paper -- a splitting formula. Using this formula, we find a family of vanishing constant terms. In Section 4, we determine the expression for \(L_{k_{1},k_{2}}(a,b,c)\). In Section 5, we study the \(b=c-1\) case of \(L_{k_{1},k_{2}}(a,b,c)\). In the last section, we obtain several results equivalent to those in Section 5.
## 2. Symmetric functions and plethystic notation
In this section, we introduce some basic results of symmetric functions and plethystic notation.
A partition is a sequence \(\lambda=(\lambda_{1},\lambda_{2},\dots)\) of nonnegative integers such that \(\lambda_{1}\geq\lambda_{2}\geq\cdots\) and only finitely many \(\lambda_{i}\) are positive. The length of a partition \(\lambda\), denoted \(\ell(\lambda)\), is defined to be the number of nonzero \(\lambda_{i}\). We adopt the convention of not displaying the tail of zeros of a partition. We say that \(|\lambda|=\lambda_{1}+\lambda_{2}+\cdots\) is the size of the partition \(\lambda\). The most commonly used partial order for partitions is the dominance order, denoted \(\leq\). If \(\lambda\) and \(\mu\) are partitions of the same size, then \(\lambda\leq\mu\) if \(\lambda_{1}+\cdots+\lambda_{i}\leq\mu_{1}+\cdots+\mu_{i}\) for all \(i\geq 1\).
For a field \(\mathbb{F}\), denote by \(\Lambda_{\mathbb{F}}\) the ring of symmetric functions with coefficients in \(\mathbb{F}\). In the following, we introduce four bases of \(\Lambda\) (with \(\mathbb{F}=\mathbb{Q}\) and \(\mathbb{F}=\mathbb{Q}(q,t)\)).
Let \(X=\{x_{1},x_{2},\dots\}\) be an alphabet (a set of countably many variables). For \(r\) a positive integer, let \(p_{r}\) be the power sum symmetric function in \(X\), defined by
\[p_{r}=\sum_{i\geq 1}x_{i}^{r}.\]
In addition, we set \(p_{0}=1\). For a partition \(\lambda=(\lambda_{1},\lambda_{2},\dots)\), let
\[p_{\lambda}=p_{\lambda_{1}}p_{\lambda_{2}}\cdots.\]
The \(p_{r}\) are algebraically independent over \(\mathbb{Q}\), and the \(p_{\lambda}\) form a basis of \(\Lambda_{\mathbb{Q}}\)[10]. That is,
\[\Lambda_{\mathbb{Q}}=\mathbb{Q}[p_{1},p_{2},\dots].\]
The elementary symmetric function is defined by
\[e_{r}=\sum_{1\leq i_{1}<\cdots<i_{r}}x_{i_{1}}\cdots x_{i_{r}},\quad e_{0}=1, \quad\text{and}\quad e_{\lambda}=e_{\lambda_{1}}e_{\lambda_{2}}\cdots.\]
The \(e_{\lambda}\) form a basis of \(\Lambda_{\mathbb{Q}}\)[10].
The complete symmetric function \(h_{r}(X)\) can be defined in terms of its generating function as
\[\sum_{r\geq 0}h_{r}(X)y^{r}=\prod_{i\geq 0}\frac{1}{1-yx_{i}}. \tag{2.1}\]
Like \(p_{\lambda}\) and \(e_{\lambda}\), define
\[h_{\lambda}=h_{\lambda_{1}}h_{\lambda_{2}}\cdots.\]
The \(h_{\lambda}\) also form a basis of \(\Lambda_{\mathbb{Q}}\)[10]. (The \(e_{\lambda}\) and the \(h_{\lambda}\) are in fact bases of \(\Lambda_{\mathbb{Z}}\), but we do not need these in this paper.)
Let \(F=\mathbb{Q}(q,t)\) be the field of rational functions in \(q\) and \(t\) with coefficients in \(\mathbb{Q}\), and let \((y)_{\infty}=(y;q)_{\infty}:=(1-y)(1-yq)\cdots\) be the infinite \(q\)-factorial. Define the modified complete symmetric function \(g_{r}(X;q,t)\in\Lambda_{F}(X)\) by its generating function
\[\sum_{r\geq 0}g_{r}(X;q,t)y^{r}=\prod_{i\geq 1}\frac{(tx_{i}y;q)_{\infty}}{(x_{i} y;q)_{\infty}},\]
and for a partition \(\lambda=(\lambda_{1},\lambda_{2},\dots)\) define
\[g_{\lambda}=g_{\lambda}(q,t)=g_{\lambda}(X;q,t):=\prod_{i\geq 1}g_{\lambda_{i}}(X ;q,t).\]
By [10, Chapter VI, (2.12)], the \(g_{\lambda}\) form a basis of \(\Lambda_{F}\).
Plethystic or \(\lambda\)-ring notation is a device to facilitate computations in the ring of symmetric functions. We briefly introduce plethystic notation. For more details, see [7, 9, 12].
For an alphabet \(X=\{x_{1},x_{2},\dots\}\) and a field \(\mathbb{F}\), we additively write \(X:=x_{1}+x_{2}+\cdots\), and use plethystic brackets to indicate this additive notation:
\[f(X)=f(x_{1},x_{2},\dots)=f[x_{1}+x_{2}+\cdots]=f[X],\quad\text{for }f\in \Lambda_{\mathbb{F}}.\]
We introduce a consistent arithmetic on alphabets in terms of the basis of power sums. In particular, a power sum whose argument is the sum, difference or Cartesian product of two alphabets \(X\) and \(Y\) is defined as
\[p_{r}[X+Y] =p_{r}[X]+p_{r}[Y], \tag{2.2a}\] \[p_{r}[X-Y] =p_{r}[X]-p_{r}[Y], \tag{2.2b}\] \[p_{r}[XY] =p_{r}[X]p_{r}[Y]. \tag{2.2c}\]
In general we cannot give meaning to division by an arbitrary alphabet and only division by \(1-t\) (the difference of two one-letter alphabets with "letters" \(1\) and \(t\) respectively) is meaningful. In particular
\[p_{r}\Big{[}\frac{X}{1-t}\Big{]}=\frac{p_{r}[X]}{1-t^{r}}. \tag{2.3}\]
Note that the alphabet \(1/(1-t)\) may be interpreted as the infinite alphabet \(1+t+t^{2}+\cdots\). Indeed, by (2.2a) and (2.2c)
\[p_{r}[X(1+t+t^{2}+\cdots)]=p_{r}[X]\sum_{k=0}^{\infty}p_{r}[t^{k}]=p_{r}[X] \sum_{k=0}^{\infty}t^{kr}=\frac{p_{r}[X]}{1-t^{r}}.\]
If \(f\) is a homogeneous symmetric function of degree \(k\) then
\[f[aX]=a^{k}f[X] \tag{2.4}\]
for a single-letter alphabet \(a\). Note that by (2.2a), \(f[2X]=f[X+X]=2f[X]\). Moreover, we can extend this to
\[f[rX]=rf[X] \tag{2.5}\]
for \(r\in\mathbb{F}\). Note that this leads to some notational ambiguities, and whenever not clear from the context we will indicate if a symbol such as \(a\) or \(r\) represents a letter or a binomial element1. Throughout this paper, we take \(q\) and \(q^{-1}\) as single-letter alphabets.
Footnote 1: In [9, p. 32] Lascoux refers to \(r\in\mathbb{F}\) as a binomial element.
We need the next simple result. One can find a proof in [7, Theorem 1.27].
**Proposition 2.1**.: _Let \(X\) and \(Y\) be two alphabets. For \(r\) a nonnegative integer,_
\[h_{r}[X+Y] =\sum_{i=0}^{r}h_{i}[X]h_{r-i}[Y], \tag{2.6}\] \[h_{r}[-X] =(-1)^{r}e_{r}[X]. \tag{2.7}\]
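For finite alphabets, (2.6) can be checked directly from the generating function (2.1). The sketch below (assuming SymPy; not part of the paper) does this for \(X=\{x_{1},x_{2}\}\), \(Y=\{x_{3}\}\) and \(r=3\).

```python
from sympy import symbols, series, expand, prod

y, x1, x2, x3 = symbols('y x1 x2 x3')

def h(alphabet, r):
    # h_r of a finite alphabet, read off from the generating function (2.1)
    gen = prod([1 / (1 - y*x) for x in alphabet])
    return expand(series(gen, y, 0, r + 1).removeO()).coeff(y, r)

X, Y = [x1, x2], [x3]
r = 3
lhs = h(X + Y, r)
rhs = sum(h(X, i) * h(Y, r - i) for i in range(r + 1))
print(expand(lhs - rhs))  # expect 0
```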
Using plethystic notation, we can write \(g_{r}\) as
\[g_{r}(X;q,t)=h_{r}\Big{[}\frac{1-t}{1-q}X\Big{]}. \tag{2.8}\]
By (2.6) and (2.8), it is easy to obtain the next result.
**Proposition 2.2**.: _Let \(X\) and \(Y\) be two alphabets. For \(r\) a nonnegative integer,_
\[g_{r}[X+Y]=\sum_{i=0}^{r}g_{i}[X]g_{r-i}[Y]. \tag{2.9}\]
Proof.: By (2.8),
\[g_{r}[X+Y]=h_{r}\Big{[}\frac{1-t}{1-q}(X+Y)\Big{]}=\sum_{i=0}^{r}h_{r-i}\Big{[} \frac{1-t}{1-q}X\Big{]}h_{i}\Big{[}\frac{1-t}{1-q}Y\Big{]}.\]
Here the last equality holds by (2.6). Using (2.8) again, we obtain (2.9).
## 3. A splitting formula
In this section, we give a splitting formula for a rational function \(F_{n}(y,w)\) defined in (3.2) below. Using the splitting formula, we find a family of vanishing constant terms.
Before presenting the splitting formula, we need the next simple result.
**Lemma 3.1**.: _Let \(i\) and \(j\) be positive integers. Then, for \(t\) an integer such that \(0\leq t\leq j\),_
\[\frac{(1/y)_{i}(qy)_{j}}{(q^{-t}/y)_{i}}=q^{it}(q^{1-i}y)_{t}(q^{t+1}y)_{j-t}, \tag{3.1a}\]
_and for \(-1\leq t\leq j-1\),_
\[\frac{(y)_{j}(q/y)_{i}}{(q^{-t}/y)_{i}}=q^{i(t+1)}(q^{-i}y)_{t+1}(q^{t+1}y)_{j -t-1}. \tag{3.1b}\]
Note that the \(t=j\) case of (3.1a) (taking \(y\mapsto y/q\)) is the standard fact in [5, Equation (I.13)].
Proof.: For \(0\leq t\leq j\),
\[\frac{(1/y)_{i}(qy)_{j}}{(q^{-t}/y)_{i}}=\frac{(q^{i-t}/y)_{t}(qy)_{j}}{(q^{-t}/y) _{t}}=\frac{(-1/y)^{t}q^{it-\binom{t+1}{2}}(q^{1-i}y)_{t}(qy)_{j}}{(-1/y)^{t}q^{- \binom{t+1}{2}}(qy)_{t}}=q^{it}(q^{1-i}y)_{t}(q^{t+1}y)_{j-t}.\]
Taking \(y\mapsto y/q\) and \(t\mapsto t+1\) in (3.1a) yields (3.1b) for \(-1\leq t\leq j-1\).
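A quick symbolic check of (3.1a) for small \(i\) and \(j\) (a sketch assuming SymPy; not part of the paper):

```python
from sympy import symbols, cancel, prod

q, y = symbols('q y')

def qpoch(x, n):
    return prod([1 - x * q**k for k in range(n)])

i, j = 3, 4
for t in range(j + 1):
    lhs = qpoch(1/y, i) * qpoch(q*y, j) / qpoch(q**(-t)/y, i)
    rhs = q**(i*t) * qpoch(q**(1 - i)*y, t) * qpoch(q**(t + 1)*y, j - t)
    assert cancel(lhs - rhs) == 0
print("(3.1a) holds for i = 3, j = 4, t = 0, ..., 4")
```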
For positive integers \(n,c\), let
\[F_{n}(y,w)=\frac{\prod_{1\leq i<j\leq n}(y_{i}/y_{j})_{c}(qy_{j}/y_{i})_{c}}{ \prod_{l=1}^{n}(y_{l}/w)_{c}}, \tag{3.2}\]
where \(w\) is a parameter such that all terms of the form \(q^{u}y_{i}/w\) in (3.2) satisfy \(|q^{u}y_{i}/w|<1\). Hence,
\[\frac{1}{1-q^{u}y_{i}/w}=\sum_{j\geq 0}(q^{u}y_{i}/w)^{j}.\]
The rational function \(F_{n}(y,w)\) admits the following partial fraction expansion. We refer to it as the splitting formula for \(F_{n}(y,w)\).
**Proposition 3.2**.: _Let \(F_{n}(y,w)\) be defined as in (3.2). Then_
\[F_{n}(y,w)=\sum_{i=1}^{n}\sum_{j=0}^{c-1}\frac{A_{ij}}{1-q^{j}y_{i}/w}, \tag{3.3}\]
_where_
\[A_{ij}=\frac{q^{(n-1)jc+(n-i)c}}{(q^{-j})_{j}(q)_{c-j-1}}\prod_{ l=1}^{i-1}\big{(}q^{1-c}y_{i}/y_{l}\big{)}_{j}\big{(}q^{j+1}y_{i}/y_{l}\big{)}_{ c-j}\prod_{l=i+1}^{n}\big{(}q^{-c}y_{i}/y_{l}\big{)}_{j+1}\big{(}q^{j+1}y_{i}/y_{l} \big{)}_{c-j-1}\\ \times\prod_{\begin{subarray}{c}1\leq u<v\leq n\\ u,v\neq i\end{subarray}}(y_{u}/y_{v})_{c}(qy_{v}/y_{u})_{c}. \tag{3.4}\]
Note that the \(A_{ij}\) are polynomials in \(y_{i}\). Proposition 3.2 is the \(\beta_{i}=c\) case of [4, Theorem 3.3] and the \(n_{0}=0\) case of [16, Proposition 3.5]. For completeness, we give a proof below.
Proof.: By partial fraction decomposition of \(F_{n}(y,w)\) with respect to \(1/w\), we can rewrite \(F_{n}(y,w)\) as (3.3) and
\[A_{ij}=F_{n}(y,w)(1-q^{j}y_{i}/w)|_{w=q^{j}y_{i}}\quad\text{for $i=1,\dots,n$ and $j=0,\dots,c-1$.} \tag{3.5}\]
Carrying out the substitution \(w=q^{j}y_{i}\) in \(F_{n}(y,w)(1-q^{j}y_{i}/w)\) for \(i=1,\dots,n\) yields
\[A_{ij}=\frac{1}{(q^{-j})_{j}(q)_{c-j-1}}\prod_{l=1}^{i-1}\frac{(y _{l}/y_{i})_{c}(qy_{i}/y_{l})_{c}}{(q^{-j}y_{l}/y_{i})_{c}}\prod_{l=i+1}^{n} \frac{(y_{i}/y_{l})_{c}(qy_{l}/y_{i})_{c}}{(q^{-j}y_{l}/y_{i})_{c}}\\ \times\prod_{\begin{subarray}{c}1\leq u<v\leq n\\ v,u\neq i\end{subarray}}(y_{u}/y_{v})_{c}(qy_{v}/y_{u})_{c}. \tag{3.6}\]
Using (3.1) with \((i,j,t,y)\mapsto(c,c,j,y_{i}/y_{l})\), we have
\[\frac{(y_{l}/y_{i})_{c}(qy_{i}/y_{l})_{c}}{(q^{-j}y_{l}/y_{i})_{c}}=q^{jc}\big{(} q^{1-c}y_{i}/y_{l}\big{)}_{j}\big{(}q^{j+1}y_{i}/y_{l}\big{)}_{c-j}\quad\text{for $j=0,\dots,c$}, \tag{3.7a}\]
and
\[\frac{(y_{i}/y_{l})_{c}(qy_{l}/y_{i})_{c}}{(q^{-j}y_{l}/y_{i})_{c}}=q^{(j+1)c} \big{(}q^{-c}y_{i}/y_{l}\big{)}_{j+1}\big{(}q^{j+1}y_{i}/y_{l}\big{)}_{c-j-1} \quad\text{for $j=-1,\dots,c-1$}. \tag{3.7b}\]
Substituting (3.7) into (3.6) gives (3.4).
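The decomposition (3.3)-(3.4) can be verified symbolically for small parameters. The sketch below (assuming SymPy; not part of the paper) checks the \(n=c=2\) case, computing the coefficients directly from (3.5) rather than from the closed form (3.4).

```python
from sympy import symbols, cancel, prod

q, w, y1, y2 = symbols('q w y1 y2')

def qpoch(x, n):
    return prod([1 - x * q**k for k in range(n)])

c, ys = 2, [y1, y2]

# F_2(y, w) from (3.2)
F = qpoch(y1/y2, c) * qpoch(q*y2/y1, c) / (qpoch(y1/w, c) * qpoch(y2/w, c))

def A(i, j):
    # (3.5): cancel the common factor, then substitute w = q^j y_i
    return cancel(F * (1 - q**j * ys[i]/w)).subs(w, q**j * ys[i])

rhs = sum(A(i, j) / (1 - q**j * ys[i]/w)
          for i in range(len(ys)) for j in range(c))
print(cancel(F - rhs))  # expect 0
```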
By Proposition 3.2, we find that a family of constant terms vanish.
**Lemma 3.3**.: _Let \(k_{1},k_{2},c\) be integers such that \(c>0\) and \(k_{1}>k_{2}\geq 0\). For integers \(t_{1},t_{2},\dots,t_{k_{1}}\), set \(\{t_{l}\leq 0\ |\ l=1,\dots,k_{1}\}=\{b_{1},\dots,b_{m}\}\) and \(\{t_{l}>0\ |\ l=1,\dots,k_{1}\}=\{d_{1},\dots,d_{p}\}\) (assume \(d_{1}\leq d_{2}\leq\dots\leq d_{p}\)). If \(\sum_{l=1}^{m}b_{l}+\sum_{l=1}^{p-k_{2}}d_{l}>0\) then_
\[C:=\operatorname*{CT}_{z^{(1)}}\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{u=1}^{k_{1}}(z_{u}^{(1)})^{t_{u}}\prod_{i=1}^{k_{1}}\prod_{j=1}^{k_{2}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}=0.\]
Note that if \(p\leq k_{2}\) then we set the sum \(\sum_{l=1}^{p-k_{2}}d_{l}=0\) by convention. The condition \(\sum_{l=1}^{m}b_{l}+\sum_{l=1}^{p-k_{2}}d_{l}>0\) cannot hold for \(k_{2}\geq k_{1}\), because in this case \(\sum_{l=1}^{p-k_{2}}d_{l}=0\) (since \(k_{2}\geq k_{1}\geq p\)) and all the \(b_{l}\leq 0\).
Proof.: For \(k_{2}=0\)
\[C=\operatorname*{CT}_{z^{(1)}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)})^{-t_{i}}\prod_ {1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}.\]
The above constant term vanishes since every monomial of the Laurent polynomial inside the operator \(\operatorname*{CT}_{z^{(1)}}\) has total degree \(-\sum_{i=1}^{k_{1}}t_{i}<0\), because \(\sum_{i=1}^{m}b_{i}+\sum_{j=1}^{p}d_{j}=\sum_{i=1}^{k_{1}}t_{i}>0\). Hence, we can assume \(k_{2}>0\) in the remainder of the proof. We proceed by induction on \(k_{1}\). It is clear that the lemma holds for \(k_{1}=1\) (this forces \(k_{2}=0\) since \(k_{2}<k_{1}\)). Assume the lemma holds for \(k_{1}\mapsto k_{1}-1\).
Using Proposition 3.2 with \((n,y,w)\mapsto(k_{1},z^{(1)},z_{1}^{(2)})\), we can write
\[C=\operatorname*{CT}_{z^{(1)}}\sum_{i=1}^{k_{1}}\sum_{j=0}^{c-1}\frac{A_{ij}} {(1-q^{j}z_{i}^{(1)}/z_{1}^{(2)})\prod_{u=1}^{k_{1}}(z_{u}^{(1)})^{t_{u}}\prod _{u=1}^{k_{1}}\prod_{v=2}^{k_{2}}(z_{u}^{(1)}/z_{v}^{(2)})_{c}}, \tag{3.8}\]
where
\[A_{ij}=\frac{q^{(k_{1}-1)jc+(k_{1}-i)c}}{(q^{-j})_{j}(q)_{c-j-1} }\prod_{l=1}^{i-1}\big{(}q^{1-c}z_{i}^{(1)}/z_{l}^{(1)}\big{)}_{j}\big{(}q^{j+1 }z_{i}^{(1)}/z_{l}^{(1)}\big{)}_{c-j}\\ \times\prod_{l=i+1}^{k_{1}}\big{(}q^{-c}z_{i}^{(1)}/z_{l}^{(1)} \big{)}_{j+1}\big{(}q^{j+1}z_{i}^{(1)}/z_{l}^{(1)}\big{)}_{c-j-1}\prod_{ \begin{subarray}{c}1\leq u<v\leq k_{1}\\ u,v\neq i\end{subarray}}(z_{u}^{(1)}/z_{v}^{(1)})_{c}(qz_{v}^{(1)}/z_{u}^{(1)} )_{c}.\]
Denote the summand in (3.8) by \(Q_{ij}\). We prove that \(\operatorname*{\operatorname{CT}}_{z^{(1)}}Q_{ij}=0\) for all the \(i,j\). Note that we view \(Q_{ij}\) as a Laurent series in \(z_{i}^{(1)}\). That is
\[Q_{ij}=\frac{A_{ij}}{\prod_{u=1}^{k_{1}}(z_{u}^{(1)})^{t_{u}}\prod_{u=1}^{k_{1} }\prod_{v=2}^{k_{2}}(z_{u}^{(1)}/z_{v}^{(2)})_{c}}\sum_{l\geq 0}(q^{j}z_{i}^{(1)}/z_ {1}^{(2)})^{l}\prod_{v=2}^{k_{2}}\prod_{m=0}^{c-1}\sum_{l\geq 0}(q^{m}z_{i}^{(1)}/z_ {v}^{(2)})^{l}.\]
For a fixed integer \(i\), if \(t_{i}<0\) then \(Q_{ij}\) is in fact a power series in \(z_{i}^{(1)}\) with no constant term. It follows that \(\operatorname*{\operatorname{CT}}_{z^{(1)}}Q_{ij}=\operatorname*{\operatorname {CT}}_{z_{i}^{(1)}}Q_{ij}=0\). If \(t_{i}\geq 0\) then by the expression for \(A_{ij}\) and taking the constant term of \(Q_{ij}\) with respect to \(z_{i}^{(1)}\), we can write \(\operatorname*{\operatorname{CT}}_{z_{i}^{(1)}}Q_{ij}\) as a finite sum of the form
\[R:=\frac{\prod_{1\leq u<v\leq k_{1}}(z_{u}^{(1)}/z_{v}^{(1)})_{c}(qz_{v}^{(1) }/z_{u}^{(1)})_{c}}{r\cdot\prod_{\begin{subarray}{c}u=1\\ u\neq i\end{subarray}}^{k_{1}}(z_{u}^{(1)})^{t_{u}+s_{u}}\prod_{\begin{subarray} {c}u=1\\ u\neq i\end{subarray}}^{k_{1}}\prod_{v=2}^{k_{2}}(z_{u}^{(1)}/z_{v}^{(2)})_{c}},\]
where \(r\in\mathbb{Q}(q)[z^{(2)}]\setminus\{0\}\) and all the \(s_{u}\) are nonnegative integers. Take \(t_{u}^{\prime}=t_{u}+s_{u}\) and denote \(\{t_{u}^{\prime}\leq 0\mid u\in\{1,\ldots,k_{1}\}\setminus\{i\}\}=\{b_{1}^{\prime},\ldots,b_{m^{\prime}}^{\prime}\}\) and \(\{t_{u}^{\prime}>0\mid u\in\{1,\ldots,k_{1}\}\setminus\{i\}\}=\{d_{1}^{\prime},\ldots,d_{p^{\prime}}^{\prime}\}\) (assume \(d_{1}^{\prime}\leq\cdots\leq d_{p^{\prime}}^{\prime}\)). We can see that \(\sum_{l=1}^{m^{\prime}}b_{l}^{\prime}\geq\sum_{l=1}^{m}b_{l}\), \(p^{\prime}\geq p-1\) and \(\sum_{l=1}^{i}d_{l}^{\prime}\geq\sum_{l=1}^{i}d_{l}\) for any \(1\leq i\leq p-1\). To see that the last inequality holds, note that the sequence \(d_{1}^{\prime}\leq\cdots\leq d_{p^{\prime}}^{\prime}\) is obtained from \(d_{1}\leq\cdots\leq d_{p}\) by deleting one element and adding a nonnegative integer to each remaining element. Then
\[\sum_{l=1}^{p^{\prime}-k_{2}+1}d_{l}^{\prime}+\sum_{l=1}^{m^{\prime}}b_{l}^{ \prime}\geq\sum_{l=1}^{p-k_{2}}d_{l}^{\prime}+\sum_{l=1}^{m}b_{l}\geq\sum_{l=1 }^{p-k_{2}}d_{l}+\sum_{l=1}^{m}b_{l}>0.\]
By the induction hypothesis we conclude that every constant term of the form \(\operatorname*{CT}_{z^{(1)}}R\) vanishes. Then \(\operatorname*{CT}_{z^{(1)}}Q_{ij}\) also vanishes for \(t_{i}\geq 0\). Since \(C\) is a finite sum of the \(\operatorname*{CT}_{z^{(1)}}Q_{ij}\), we have \(C=0\).
By Lemma 3.3, we can get the next two corollaries.
**Corollary 3.4**.: _Let \(k_{1},k_{2},a,c\) be nonnegative integers and \(t_{1},\ldots,t_{k_{1}}\in\mathbb{Z}\) such that \(k_{1}>k_{2}\), \(c>0\), \(t_{i}\leq a\) for \(i=1,\ldots,k_{1}\), and \(\sum_{i=1}^{k_{1}}t_{i}>k_{2}a\). Then_
\[\operatorname*{\operatorname{CT}}_{z^{(1)}}\frac{\prod_{1\leq i<j\leq k_{1}}(z_ {i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{i=1}^{k_{1}}( z_{i}^{(1)})^{t_{i}}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c} }=0.\]
Proof.: Let \(\{t_{i}\leq 0\mid i=1,\ldots,k_{1}\}=\{b_{1},\ldots,b_{m}\}\) and \(\{t_{i}>0\mid i=1,\ldots,k_{1}\}=\{d_{1},\ldots,d_{p}\}\) (assume \(d_{1}\leq d_{2}\leq\cdots\leq d_{p}\)). By \(t_{i}\leq a\), we have
\[\sum_{i=1}^{m}b_{i}+\sum_{i=1}^{p-k_{2}}d_{i}\geq\sum_{i=1}^{m}b_{i}+\sum_{i=1 }^{p}d_{i}-k_{2}a=\sum_{i=1}^{k_{1}}t_{i}-k_{2}a>0.\]
The last inequality holds by the condition \(\sum_{i=1}^{k_{1}}t_{i}>k_{2}a\) in the corollary. Hence, the corollary holds by Lemma 3.3.
**Corollary 3.5**.: _Let \(k_{1},k_{2}\) be nonnegative integers such that \(k_{1}>k_{2}\). Let \(c,t_{1},\ldots,t_{k_{1}}\) be positive integers. Then_
\[\underset{z^{(1)}}{\text{\rm CT}}\,\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1 )}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{i=1}^{k_{1}}(z_{i}^{(1 )})^{t_{i}}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}=0.\]
Proof.: Let \(B:=\{t_{i}\leq 0\mid i=1,\ldots,k_{1}\}=\{b_{1},\ldots,b_{m}\}\) and \(D:=\{t_{i}>0\mid i=1,\ldots,k_{1}\}=\{d_{1},\ldots,d_{p}\}\). Then \(B=\emptyset\) and \(D=\{t_{1},t_{2},\ldots,t_{k_{1}}\}\). Together with \(k_{1}>k_{2}\) and all the \(t_{i}\geq 1\), we have
\[\sum_{i=1}^{m}b_{i}+\sum_{i=1}^{p-k_{2}}d_{i}=\sum_{i=1}^{k_{1}-k_{2}}t_{i}>0.\]
By Lemma 3.3, the corollary follows.
## 4. The expression for \(L_{k_{1},k_{2}}(a,b,c)\)
Recall the definitions of \(L_{k_{1},k_{2}}(a,b,c;z^{(1)},z^{(2)})\) and \(L_{k_{1},k_{2}}(a,b,c)\) in (1.5) and (1.6) respectively. In \(L_{k_{1},k_{2}}(a,b,c;z^{(1)},z^{(2)})\), by expanding the product \(\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\) directly, we find that the powers of \(z_{0}\) range from \(-k_{1}b\) to \(k_{1}a\). However, under a certain condition, we find that after taking the constant term of \(L_{k_{1},k_{2}}(a,b,c;z^{(1)},z^{(2)})\) with respect to \(z^{(1)}\) (not with respect to \(z_{0}\)), the terms \(z_{0}^{l}\) with \(l<0\) or \(l>k_{2}a\) vanish. More precisely, we will show that \(L_{k_{1},k_{2}}(a,b,c)\) is of the form
\[M_{k_{1}}(a,b,c)\prod_{j=1}^{k_{2}}(q^{c-1-b}z_{0}/z_{j}^{(2)})_{a+b+1-c}\times (1+C_{1}z_{0}+\cdots+C_{k_{2}(c-1-b)}z_{0}^{k_{2}(c-1-b)}) \tag{4.1}\]
for \(k_{1}\geq k_{2}\) and \(b+1\leq c\leq a+b+1\) in this section.
### The negative powers
In this subsection, we show that all the terms of \(L_{k_{1},k_{2}}(a,b,c)\) with negative powers of \(z_{0}\) vanish.
For a Laurent series (polynomial) \(f\), denote by \([z^{l}]f\) the coefficient of \(z^{l}\) in \(f\).
**Lemma 4.1**.: _Let \(L_{k_{1},k_{2}}(a,b,c)\) be defined as in (1.6). For a negative integer \(l\),_
\[[z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c)=0.\]
Proof.: Let
\[F_{k_{1},k_{2}}(c):=\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_ {c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i} ^{(1)}/z_{j}^{(2)})_{c}}. \tag{4.2}\]
For an integer \(l\) (not only negative),
\[[z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c) =\operatorname*{CT}_{z_{0},z^{(1)}}z_{0}^{-l}L_{k_{1},k_{2}}(a,b,c)\] \[=\operatorname*{CT}_{z_{0},z^{(1)}}z_{0}^{-l}\prod_{i=1}^{k_{1}}( z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\times F_{k_{1},k_{2}}(c) \tag{4.3}\] \[=\operatorname*{CT}_{z_{0},z^{(1)}}z_{0}^{-l}\prod_{i=1}^{k_{1}}( -z_{0}/z_{i}^{(1)})^{a}q^{\binom{a}{2}}(q^{1-a}z_{i}^{(1)}/z_{0})_{a+b}\times F _{k_{1},k_{2}}(c).\]
By the well-known \(q\)-binomial theorem [2, Theorem 3.3]
\[(y)_{n}=\sum_{t\geq 0}q^{\binom{t}{2}}\genfrac{[}{]}{0.0pt}{}{n}{t}(-y)^{t},\]
we can expand (4.3) as
\[[z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c)\] \[=\operatorname*{CT}_{z_{0},z^{(1)}}z_{0}^{-l}\prod_{i=1}^{k_{1}}( -z_{0}/z_{i}^{(1)})^{a}q^{\binom{a}{2}}\sum_{t_{i}\geq 0}q^{\binom{t_{i}}{2}} \genfrac{[}{]}{0.0pt}{}{a+b}{t_{i}}(-q^{1-a}z_{i}^{(1)}/z_{0})^{t_{i}}\times F _{k_{1},k_{2}}(c)\] \[=\operatorname*{CT}_{z_{0},z^{(1)}}\sum_{t_{1},\ldots,t_{k_{1}} \geq 0}z_{0}^{k_{1}a-|t|-l}\prod_{i=1}^{k_{1}}(-z_{i}^{(1)})^{t_{i}-a}q^{ \binom{a}{2}+\binom{t_{i}}{2}+(1-a)t_{i}}\genfrac{[}{]}{0.0pt}{}{a+b}{t_{i}} \times F_{k_{1},k_{2}}(c).\]
Here \(|t|:=\sum_{i=1}^{k_{1}}t_{i}\) and \(\genfrac{[}{]}{0.0pt}{}{n}{t}=(q^{n-t+1})_{t}/(q)_{t}\) is the \(q\)-binomial coefficient for nonnegative integers \(n\) and \(t\). Taking the constant term with respect to \(z_{0}\), we have
\[[z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c)=\operatorname*{CT}_{z^{(1)}}\sum_{\begin{subarray} {c}t_{1},\ldots,t_{k_{1}}\geq 0\\ |t|=k_{1}a-l\end{subarray}}\prod_{i=1}^{k_{1}}(-z_{i}^{(1)})^{t_{i}-a}q^{ \binom{a}{2}+\binom{t_{i}}{2}+(1-a)t_{i}}\genfrac{[}{]}{0.0pt}{}{a+b}{t_{i}} \times F_{k_{1},k_{2}}(c). \tag{4.4}\]
Since \(F_{k_{1},k_{2}}(c)\) only contributes nonnegative powers of the \(z_{i}^{(1)}\), every monomial of the summand in (4.4) has positive total degree in \(z^{(1)}\) for \(l<0\). It follows that the constant term of each summand in (4.4) with respect to \(z^{(1)}\) equals zero, and hence so does the sum.
### The positive powers
As stated at the beginning of this section, the degree of \(z_{0}\) in \(L_{k_{1},k_{2}}(a,b,c)\) is at most \(k_{1}a\). But in this subsection we find that \([z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c)=0\) if \(k_{1}\geq k_{2}\) and \(l>k_{2}a\). That is the content of the next lemma.
**Lemma 4.2**.: _Let \(L_{k_{1},k_{2}}(a,b,c)\) be defined as in (1.6). If \(k_{1}\geq k_{2}\) and the integer \(l>k_{2}a\), then_
\[[z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c)=0.\]
Proof.: It is clear that the lemma holds for \(k_{2}=k_{1}\) by the definition of \(L_{k_{1},k_{2}}(a,b,c)\). In the remainder of the proof, we assume \(k_{1}>k_{2}\). Changing \(t_{i}\mapsto a-t_{i}\) for each \(i\) in (4.4), we have
\[[z_{0}^{l}]L_{k_{1},k_{2}}(a,b,c)=\sum_{\begin{subarray}{c}t_{1}, \ldots,t_{k_{1}}\leq a\\ |t|=l\end{subarray}}\prod_{i=1}^{k_{1}}(-1)^{-t_{i}}q^{\binom{a}{2}+\binom{a-t_ {i}}{2}+(1-a)(a-t_{i})}\genfrac{[}{]}{0.0pt}{}{a+b}{a-t_{i}}\\ \times\operatorname*{\text{\rm CT}}_{z^{(1)}}\frac{\prod_{1\leq i <j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod _{i=1}^{k_{1}}(z_{i}^{(1)})^{t_{i}}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i }^{(1)}/z_{j}^{(2)})_{c}}, \tag{4.5}\]
where \(|t|=\sum_{i=1}^{k_{1}}t_{i}\). All the constant terms in the sum of (4.5) vanish by Corollary 3.4 if \(l=|t|=\sum_{i=1}^{k_{1}}t_{i}>k_{2}a\). Then the lemma follows.
In the remainder of this subsection, we show that \(\prod_{l=1}^{k_{2}}(q^{c-1-b}z_{0}/z_{l}^{(2)})_{a+b+1-c}\) is a factor of \(L_{k_{1},k_{2}}(a,b,c)\). The next lemma implies this.
**Lemma 4.3**.: _If \(b+1\leq c\leq a+b+1\) and \(k_{1}\geq k_{2}\), then_
\[L_{k_{1},k_{2}}(a,b,c)=0\]
_by taking \(z_{0}=z_{l}^{(2)}q^{m}\) for \(l=1,\ldots,k_{2}\) and \(m=1-a,\ldots,b+1-c\)._
Proof.: Carrying out the substitution \(z_{0}=z_{l}^{(2)}q^{m}\) in \(L_{k_{1},k_{2}}(a,b,c)\) yields
\[L: =L_{k_{1},k_{2}}(a,b,c)\big{|}_{z_{0}=z_{l}^{(2)}q^{m}}\] \[=\operatorname*{\text{\rm CT}}_{z^{(1)}}\frac{\prod_{i=1}^{k_{1} }(q^{m}z_{l}^{(2)}/z_{i}^{(1)})_{a}(q^{1-m}z_{i}^{(1)}/z_{l}^{(2)})_{b}\prod_{1 \leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c} }{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}\] \[=\operatorname*{\text{\rm CT}}_{z^{(1)}}\frac{\prod_{i=1}^{k_{1} }(-z_{l}^{(2)}/z_{i}^{(1)})^{a}q^{ma+\binom{a}{2}}(q^{1-a-m}z_{i}^{(1)}/z_{l}^ {(2)})_{a+b}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{( 1)}/z_{i}^{(1)})_{c}}{\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{l}^{(2)})_{c}\prod_{j =1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}.\]
Since \(1-a-m\leq 1-a-(-a+1)=0\) and \(1-a-m+(a+b-1)=b-m\geq b-(b+1-c)=c-1\), \((z_{i}^{(1)}/z_{l}^{(2)})_{c}\) is in fact a factor of \((q^{1-a-m}z_{i}^{(1)}/z_{l}^{(2)})_{a+b}\) for \(i=1,\ldots,k_{1}\). Hence,
\[L=\operatorname*{\text{\rm CT}}_{z^{(1)}}\prod_{i=1}^{k_{1}}(-z_ {l}^{(2)}/z_{i}^{(1)})^{a}q^{ma+\binom{a}{2}}(q^{1-a-m}z_{i}^{(1)}/z_{l}^{(2 )})_{a+m-1}(q^{c}z_{i}^{(1)}/z_{l}^{(2)})_{b-m-c+1}\\ \times\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_ {c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{j\neq l}^{k_{2}}\prod_{i=1}^{k_{1}}(z _{i}^{(1)}/z_{j}^{(2)})_{c}}. \tag{4.6}\]
From \(b+1\leq c\), we have \(a+m-1+(b-m-c+1)=a+b-c\leq a-1\). Then, by expanding \((q^{1-a-m}z_{i}^{(1)}/z_{l}^{(2)})_{a+m-1}(q^{c}z_{i}^{(1)}/z_{l}^{(2)})_{b-m-c+1}\) in the right-hand side of (4.6) we know that \(L\) is
a finite sum of the form
\[d\cdot(z_{l}^{(2)})^{\sum_{i=1}^{k_{1}}t_{i}}\mathop{\mathrm{CT}}_{z^{(1)}}\frac{ \prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)} )_{c}}{\prod_{i=1}^{k_{1}}(z_{i}^{(1)})^{t_{i}}\prod_{\begin{subarray}{c}j=1\\ j\neq l\end{subarray}}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}, \tag{4.7}\]
where \(d\in\mathbb{Q}(q)\) and \(t_{i}\geq 1\) for \(i=1,\ldots,k_{1}\). By Corollary 3.5, every constant term of the form (4.7) vanishes. Thus, the lemma follows.
To show that \(L_{k_{1},k_{2}}(a,b,c)\) is of the form (4.1), the last step is to prove
\[[z_{0}^{0}]L_{k_{1},k_{2}}(a,b,c)=M_{k_{1}}(a,b,c).\]
By the expression for \(L_{k_{1},k_{2}}(a,b,c)\) in (1.6),
\[[z_{0}^{0}]L_{k_{1},k_{2}}(a,b,c)=\mathop{\mathrm{CT}}_{z_{0},z^{(1)}}\frac{ \prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i <j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod _{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}.\]
Only those terms that are homogeneous in \(z_{0}\) and the \(z_{i}^{(1)}\) with (total) degree \(0\) contribute to the constant term. Thus, only the first term (i.e., \(1\)) of the expansion of the denominator \(\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}^{-1}\) contributes to \([z_{0}^{0}]L_{k_{1},k_{2}}(a,b,c)\). Then, by the \(q\)-Morris identity (1.3)
\[[z_{0}^{0}]L_{k_{1},k_{2}}(a,b,c)=\mathop{\mathrm{CT}}_{z_{0},z^{(1)}}\prod_{ i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i<j \leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}=M_{k_{ 1}}(a,b,c).\]
By Lemmas 4.1-4.3 and the fact that \([z_{0}^{0}]L_{k_{1},k_{2}}(a,b,c)=M_{k_{1}}(a,b,c)\), it is straightforward to obtain that \(L_{k_{1},k_{2}}(a,b,c)\) is of the form (4.1) for \(b+1\leq c\leq a+b+1\) and \(k_{1}\geq k_{2}\).
## 5. The \(b=c-1\) case
In this section, we are concerned with the \(b=c-1\) case of (4.1).
Taking \(b=c-1\) in (4.1) we obtain an explicit constant term identity.
**Lemma 5.1**.: _For \(k_{1}\geq k_{2}\),_
\[L_{k_{1},k_{2}}(a,c-1,c)=\mathop{\mathrm{CT}}_{z^{(1)}}\frac{ \prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{c-1}\prod_{1 \leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c} }{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}\\ =M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}. \tag{5.1}\]
An immediate consequence of Lemma 5.1 is the next result.
**Proposition 5.2**.: _For \(k_{1}\geq k_{2}\), let \(\lambda\) be a partition such that \(\ell(\lambda)\leq k_{1}-k_{2}\). Then_
\[\underset{z^{(1)}}{\operatorname{CT}}\,\frac{h_{\lambda}\big{[} \frac{1-q^{c}}{1-q}Z^{(1)}\big{]}\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_ {i}^{(1)}/z_{0})_{c-1}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}( qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{z_{0}^{|\lambda|}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_ {i}^{(1)}/z_{j}^{(2)})_{c}}\] \[=(-1)^{|\lambda|}\prod_{i=1}^{\ell(\lambda)}q^{\binom{\lambda_{i} }{2}}\genfrac{[}{]}{0.0pt}{}{a}{\lambda_{i}}\times M_{k_{1}}(a,c-1,c)\prod_{j= 1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}\] \[=h_{\lambda}\Big{[}\frac{q^{a}-1}{1-q}\Big{]}\times M_{k_{1}}(a,c -1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}, \tag{5.2}\]
_where \(Z^{(1)}=z_{1}^{(1)}+\cdots+z_{k_{1}}^{(1)}\)._
Proof.: Denote by \(L\) the constant term in (5.2). By the generating function for the complete symmetric functions in (2.1),
\[L=\underset{z^{(1)},z_{k_{2}+1}^{(2)},\cdots,z_{k_{2}+\ell( \lambda)}^{(2)}}{\operatorname{CT}}\prod_{j=1}^{\ell(\lambda)}(z_{k_{2}+j}^{(2 )})^{\lambda_{j}}\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0} )_{c-1}\\ \times\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{ c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{z_{0}^{|\lambda|}\prod_{j=1}^{k_{2}+\ell( \lambda)}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}.\]
Since \(k_{2}+\ell(\lambda)\leq k_{1}\), we can apply Lemma 5.1 with \(k_{2}\mapsto k_{2}+\ell(\lambda)\) and obtain
\[L =\underset{z_{k_{2}+1}^{(2)},\cdots,z_{k_{2}+\ell(\lambda)}^{(2)} }{\operatorname{CT}}z_{0}^{-|\lambda|}\prod_{j=1}^{\ell(\lambda)}(z_{k_{2}+j}^ {(2)})^{\lambda_{j}}M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}+\ell(\lambda)}(z_{0}/ z_{j}^{(2)})_{a}\] \[=M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a} \underset{z_{k_{2}+1}^{(2)},\cdots,z_{k_{2}+\ell(\lambda)}^{(2)}}{ \operatorname{CT}}z_{0}^{-|\lambda|}\prod_{j=1}^{\ell(\lambda)}(z_{k_{2}+j}^ {(2)})^{\lambda_{j}}(z_{0}/z_{k_{2}+j}^{(2)})_{a}.\]
Using the well-known \(q\)-binomial theorem [2, Theorem 3.3]
\[(y)_{n}=\sum_{t\geq 0}q^{\binom{t}{2}}\genfrac{[}{]}{0.0pt}{}{n}{t}(-y)^{t},\]
we have
\[L =M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a} \underset{z_{k_{2}+1}^{(2)},\cdots,z_{k_{2}+\ell(\lambda)}^{(2)}}{ \operatorname{CT}}z_{0}^{-|\lambda|}\prod_{j=1}^{\ell(\lambda)}(z_{k_{2}+j}^{ (2)})^{\lambda_{j}}\sum_{t_{j}\geq 0}q^{\binom{t_{j}}{2}}\genfrac{[}{]}{0.0pt}{}{a}{t_{j}}(-z_{0}/z_{ k_{2}+j}^{(2)})^{t_{j}}\] \[=M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a} \times(-1)^{|\lambda|}\prod_{i=1}^{\ell(\lambda)}q^{\binom{\lambda_{i}}{2}} \genfrac{[}{]}{0.0pt}{}{a}{\lambda_{i}}.\]
The right-most equality of (5.2) follows from (5.3) below.
We complete the proof of Proposition 5.2 by the next result.
**Lemma 5.3**.: _For \(a\) and \(r\) nonnegative integers,_
\[h_{r}\Big{[}\frac{q^{a}-1}{1-q}\Big{]}=(-1)^{r}q^{\binom{r}{2}}\genfrac{[}{]}{0. 0pt}{}{a}{r}. \tag{5.3}\]
Proof.: By (2.7),
\[h_{r}\Big{[}\frac{q^{a}-1}{1-q}\Big{]}=(-1)^{r}e_{r}\Big{[}\frac{1-q^{a}}{1-q} \Big{]}=(-1)^{r}e_{r}(1,q,\dots,q^{a-1})=(-1)^{r}q^{\binom{r}{2}}\genfrac{[}{]}{0.0pt}{}{a}{r}.\]
Here the last equality holds by [10, Page 26, Example 3].
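Identity (5.3) can also be checked for small \(a\) by evaluating \(e_{r}\) on the finite alphabet \(1+q+\cdots+q^{a-1}\) and applying (2.7); a sketch (assuming SymPy; not part of the paper):

```python
from itertools import combinations
from sympy import symbols, cancel, prod

q = symbols('q')

def qpoch(x, n):
    return prod([1 - x * q**k for k in range(n)])

def qbinom(n, r):
    # Gaussian binomial coefficient
    return qpoch(q**(n - r + 1), r) / qpoch(q, r)

def e_r(alphabet, r):
    # elementary symmetric polynomial of a finite alphabet
    return sum(prod(s) for s in combinations(alphabet, r))

a = 4
alphabet = [q**k for k in range(a)]   # realizes the alphabet (1 - q^a)/(1 - q)
for r in range(a + 1):
    lhs = (-1)**r * e_r(alphabet, r)  # h_r[-X] = (-1)^r e_r[X] by (2.7)
    rhs = (-1)**r * q**(r*(r - 1)//2) * qbinom(a, r)
    assert cancel(lhs - rhs) == 0
print("(5.3) verified for a = 4, r = 0, ..., 4")
```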
By (2.8), we can rewrite (5.2) as
\[\underset{z^{(1)}}{\text{\rm CT}}\,\frac{g_{\lambda}(z^{(1)};q,q^{ c})\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{c-1}\prod_{1 \leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c} }{z_{0}^{|\lambda|}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{( 2)})_{c}}\\ =g_{\lambda}\Big{(}\Big{[}\frac{q^{a}-1}{1-q^{c}}\Big{]};q,q^{c} \Big{)}\times M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a} \tag{5.4}\]
for \(k_{1}\geq k_{2}\) and \(\ell(\lambda)\leq k_{1}-k_{2}\). Then, we obtain a few further results.
**Proposition 5.4**.: _For \(F=\mathbb{Q}(q,t)\) and partitions \(\lambda^{(1)},\dots,\lambda^{(m)}\), let \(f_{\lambda^{(1)}},\dots,f_{\lambda^{(m)}}\in\Lambda_{F}\) be homogeneous symmetric functions of degrees \(|\lambda^{(1)}|,\dots,|\lambda^{(m)}|\) respectively such that \(f_{\lambda^{(i)}}=\sum_{\ell(\mu)\leq\ell(\lambda^{(i)})}d_{\mu}g_{\mu}\) for \(i=1,\dots,m\). Here \(d_{\mu}\in F\). Then, for \(k_{1}\geq k_{2}\), \(\sum_{i=1}^{m}\ell(\lambda^{(i)})\leq k_{1}-k_{2}\) and any alphabets \(X_{i}\) independent of \(z^{(1)}\),_
\[\underset{z^{(1)}}{\text{\rm CT}}\prod_{i=1}^{m}f_{\lambda^{(i)} }\big{(}\big{[}Z^{(1)}+X_{i}\big{]};q,q^{c}\big{)}\frac{\prod_{i=1}^{k_{1}}(z_ {0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{c-1}\prod_{1\leq i<j\leq k_{1}}(z_{i }^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{z_{0}^{\sum_{i=1}^{m} |\lambda^{(i)}|}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2) })_{c}}\\ =\prod_{i=1}^{m}f_{\lambda^{(i)}}\Big{(}\Big{[}\frac{q^{a}-1}{1- q^{c}}+\frac{X_{i}}{z_{0}}\Big{]};q,q^{c}\Big{)}\times M_{k_{1}}(a,c-1,c)\prod_{j=1} ^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}. \tag{5.5}\]
Proof.: Since the \(g_{\nu}\) form a basis of \(\Lambda_{F}\) and \(f_{\lambda^{(i)}}=\sum_{\ell(\mu)\leq\ell(\lambda^{(i)})}d_{\mu}g_{\mu}\), we can write \(\prod_{i=1}^{m}f_{\lambda^{(i)}}=\sum d_{\nu}g_{\nu}\), where the sum ranges over all partitions \(\nu\) such that \(\ell(\nu)\leq\sum_{i=1}^{m}\ell(\lambda^{(i)})\leq k_{1}-k_{2}\). Then, by linearity, it suffices to prove that
\[L:=\underset{z^{(1)}}{\text{\rm CT}}\,z_{0}^{-|\nu|}\prod_{i=1}^{ \ell(\nu)}g_{\nu_{i}}\big{[}Z^{(1)}+X_{i}\big{]}\prod_{i=1}^{k_{1}}(z_{0}/z_{i }^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{c-1}\times F_{k_{1},k_{2}}(c)\\ =\prod_{i=1}^{\ell(\nu)}g_{\nu_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c} }+\frac{X_{i}}{z_{0}}\Big{]}\times M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/ z_{j}^{(2)})_{a} \tag{5.6}\]
for a partition \(\nu\) such that \(\ell(\nu)\leq k_{1}-k_{2}\). Here and in the remainder of the proof \(g_{\nu}=g_{\nu}(q,q^{c})\), and \(F_{k_{1},k_{2}}(c)\) and \(M_{k_{1}}(a,b,c)\) are defined in (4.2) and (1.7) respectively. By (2.9), we can write \(L\) as
\[L =\underset{z^{(1)}}{\mathrm{CT}}\,z_{0}^{-|\nu|}\prod_{i=1}^{ \ell(\nu)}\sum_{t_{i}=0}^{\nu_{i}}g_{t_{i}}[X_{i}]g_{\nu_{i}-t_{i}}[Z^{(1)}] \prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{c-1}\times F_{ k_{1},k_{2}}(c)\] \[=\sum_{\begin{subarray}{c}0\leq t_{i}\leq\nu_{i}\\ i=1,\ldots,\ell(\nu)\end{subarray}}z_{0}^{-|t|}\prod_{i=1}^{\ell(\nu)}g_{t_{i}} [X_{i}]\underset{z^{(1)}}{\mathrm{CT}}\,z_{0}^{-|\nu|+|t|}\prod_{i=1}^{\ell( \nu)}g_{\nu_{i}-t_{i}}[Z^{(1)}]\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{ i}^{(1)}/z_{0})_{c-1}\times F_{k_{1},k_{2}}(c),\]
where \(|t|=\sum_{i=1}^{\ell(\nu)}t_{i}\). By Proposition 5.2 or its deformation (5.4),
\[L =\sum_{\begin{subarray}{c}0\leq t_{i}\leq\nu_{i}\\ i=1,\ldots,\ell(\nu)\end{subarray}}z_{0}^{-|t|}\prod_{i=1}^{\ell(\nu)}g_{t_{i} }[X_{i}]g_{\nu_{i}-t_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c}}\Big{]}\times M_{k_{1}} (a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}\] \[=\prod_{i=1}^{\ell(\nu)}\sum_{t_{i}=0}^{\nu_{i}}z_{0}^{-t_{i}}g_{ t_{i}}[X_{i}]g_{\nu_{i}-t_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c}}\Big{]}\times M_{k_{1}} (a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}.\]
Using (2.4) and (2.9) gives
\[L=\prod_{i=1}^{\ell(\nu)}g_{\nu_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c}}+\frac{X_{i }}{z_{0}}\Big{]}\times M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)} )_{a}.\qed\]
Before giving the next result, we introduce a determinant transformation formula between the \(p_{n}\) and the \(h_{n}\).
**Lemma 5.5**.: _[_10_, Page 28]_ _For a nonnegative integer \(n\),_
\[(-1)^{n-1}p_{n}=\left|\begin{array}{ccccc}h_{1}&1&0&\cdots&0\\ 2h_{2}&h_{1}&1&\cdots&0\\ \vdots&\vdots&\vdots&\vdots&\vdots\\ nh_{n}&h_{n-1}&h_{n-2}&\cdots&h_{1}\end{array}\right|. \tag{5.7}\]
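For orientation, the first two cases of (5.7) read

\[-p_{2}=\left|\begin{array}{cc}h_{1}&1\\ 2h_{2}&h_{1}\end{array}\right|=h_{1}^{2}-2h_{2},\qquad p_{3}=\left|\begin{array}{ccc}h_{1}&1&0\\ 2h_{2}&h_{1}&1\\ 3h_{3}&h_{2}&h_{1}\end{array}\right|=h_{1}^{3}-3h_{1}h_{2}+3h_{3},\]

which are the familiar Newton-identity expressions for \(p_{2}\) and \(p_{3}\) in terms of the complete homogeneous symmetric functions.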
We are concerned with constant term identities for symmetric functions in \(\Lambda_{\mathbb{Q}(q,t)}\) in Proposition 5.4. The next result is an analogous statement for symmetric functions in \(\Lambda_{\mathbb{Q}}\).
**Proposition 5.6**.: _For partitions \(\lambda^{(1)},\ldots,\lambda^{(m)}\), let \(f_{\lambda^{(1)}},\ldots,f_{\lambda^{(m)}}\in\Lambda_{\mathbb{Q}}\) be homogeneous symmetric functions of degrees \(|\lambda^{(1)}|,\ldots,|\lambda^{(m)}|\) respectively. Let \(X_{i}\) and \(Y_{i}\) be alphabets independent of \(z^{(1)}\) for \(i=1,\ldots,m\). Then, for \(k_{1}\geq k_{2}\) and \(\sum_{i=1}^{m}|\lambda^{(i)}|\leq k_{1}-k_{2}\)_
\[\operatorname*{CT}_{z^{(1)}}\prod_{i=1}^{m}f_{\lambda^{(i)}} \big{[}Y_{i}Z^{(1)}+X_{i}\big{]}\frac{\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a }(qz_{i}^{(1)}/z_{0})_{c-1}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)}) _{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{z_{0}^{\sum_{i=1}^{m}|\lambda^{(i)}|}\prod_ {j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}\\ =\prod_{i=1}^{m}f_{\lambda^{(i)}}\Big{[}\frac{q^{a}-1}{1-q^{c}}Y_ {i}+\frac{X_{i}}{z_{0}}\Big{]}\times M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{ 0}/z_{j}^{(2)})_{a}. \tag{5.8}\]
Proof.: Since the \(p_{\lambda}\) form a basis of \(\Lambda_{\mathbb{Q}}\), by linearity it suffices to prove that (5.8) holds for power sum symmetric functions. That is
\[L:=\operatorname*{CT}_{z^{(1)}}z_{0}^{-|\lambda|}\prod_{i=1}^{l} p_{\lambda_{i}}\big{[}Y_{i}Z^{(1)}+X_{i}\big{]}\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1) })_{a}(qz_{i}^{(1)}/z_{0})_{c-1}\times F_{k_{1},k_{2}}(c)\\ =\prod_{i=1}^{l}p_{\lambda_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c}}Y_{i }+\frac{X_{i}}{z_{0}}\Big{]}\times M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0} /z_{j}^{(2)})_{a}, \tag{5.9}\]
where \(|\lambda|\leq k_{1}-k_{2}\), \(l=\ell(\lambda)\), the \(X_{i}\) and the \(Y_{i}\) are independent of \(z^{(1)}\) for \(i=1,\ldots,l\), and \(F_{k_{1},k_{2}}(c)\) is defined as in (4.2). By (2.2a) and (2.2c),
\[L=\operatorname*{CT}_{z^{(1)}}z_{0}^{-|\lambda|}\prod_{i=1}^{l} \Big{(}p_{\lambda_{i}}\Big{[}\frac{1-q}{1-q^{c}}Y_{i}\Big{]}p_{\lambda_{i}} \Big{[}\frac{1-q^{c}}{1-q}Z^{(1)}\Big{]}+p_{\lambda_{i}}[X_{i}]\Big{)}\\ \times\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{ 0})_{c-1}F_{k_{1},k_{2}}(c).\]
Expanding the first product yields
\[L=\operatorname*{CT}_{z^{(1)}}z_{0}^{-|\lambda|}\sum_{(t_{1}, \ldots,t_{l})\in\mathbb{Z}_{2}^{l}}\prod_{i=1}^{l}\Big{(}p_{\lambda_{i}}\Big{[} \frac{1-q}{1-q^{c}}Y_{i}\Big{]}p_{\lambda_{i}}\Big{[}\frac{1-q^{c}}{1-q}Z^{(1) }\Big{]}\Big{)}^{t_{i}}(p_{\lambda_{i}}[X_{i}])^{1-t_{i}}\\ \times\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{ 0})_{c-1}F_{k_{1},k_{2}}(c)\\ =\sum_{(t_{1},\ldots,t_{l})\in\mathbb{Z}_{2}^{l}}z_{0}^{-\sum_{i=1 }^{l}\lambda_{i}(1-t_{i})}\prod_{i=1}^{l}\Big{(}p_{\lambda_{i}}\Big{[}\frac{1-q }{1-q^{c}}Y_{i}\Big{]}\Big{)}^{t_{i}}(p_{\lambda_{i}}[X_{i}])^{1-t_{i}}\\ \times\operatorname*{CT}_{z^{(1)}}z_{0}^{-\sum_{i=1}^{l}\lambda _{i}t_{i}}\prod_{i=1}^{l}\Big{(}p_{\lambda_{i}}\Big{[}\frac{1-q^{c}}{1-q}Z^{(1) }\Big{]}\Big{)}^{t_{i}}\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/ z_{0})_{c-1}\times F_{k_{1},k_{2}}(c).\]
Write
\[\prod_{i=1}^{l}\big{(}p_{\lambda_{i}}\big{)}^{t_{i}}=\sum d_{\nu}h_{\nu} \tag{5.10}\]
using (5.7). Since \(|\nu|=\sum_{i=1}^{l}\lambda_{i}t_{i}\leq|\lambda|\leq k_{1}-k_{2}\), we have \(\ell(\nu)\leq k_{1}-k_{2}\) for each \(\nu\) in (5.10). Thus, we can apply Proposition 5.2 and then (5.10) to obtain
\[L =\sum_{(t_{1},\dots,t_{l})\in\mathbb{Z}_{2}^{l}}z_{0}^{-\sum_{i=1 }^{l}\lambda_{i}(1-t_{i})}\prod_{i=1}^{l}\Big{(}p_{\lambda_{i}}\Big{[}\frac{1- q}{1-q^{c}}Y_{i}\Big{]}\Big{)}^{t_{i}}(p_{\lambda_{i}}[X_{i}])^{1-t_{i}}\] \[\qquad\times\prod_{i=1}^{l}\Big{(}p_{\lambda_{i}}\Big{[}\frac{q^ {a}-1}{1-q}\Big{]}\Big{)}^{t_{i}}\cdot M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z _{0}/z_{j}^{(2)})_{a}\] \[=\sum_{(t_{1},\dots,t_{l})\in\mathbb{Z}_{2}^{l}}\prod_{i=1}^{l} \Big{(}p_{\lambda_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c}}Y_{i}\Big{]}\Big{)}^{t_{i }}\Big{(}p_{\lambda_{i}}\Big{[}\frac{X_{i}}{z_{0}}\Big{]}\Big{)}^{1-t_{i}} \cdot M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}\] \[=\prod_{i=1}^{l}\Big{(}p_{\lambda_{i}}\Big{[}\frac{q^{a}-1}{1-q^ {c}}Y_{i}\Big{]}+p_{\lambda_{i}}\Big{[}\frac{X_{i}}{z_{0}}\Big{]}\Big{)}\cdot M _{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}.\]
By (2.2a),
\[L=\prod_{i=1}^{l}p_{\lambda_{i}}\Big{[}\frac{q^{a}-1}{1-q^{c}}Y_{i}+\frac{X_{ i}}{z_{0}}\Big{]}M_{k_{1}}(a,c-1,c)\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a}.\]
This completes the proof of (5.9), and the proposition follows.
For a positive integer \(n\), let \(m_{1},\dots,m_{n}\) be nonnegative integers. For a fixed \(s\in\{1,\dots,n\}\), let \(\lambda^{(s,j)}\) be partitions for \(j=1,\dots,m_{s}\). Recall that \(z^{(s)}=(z_{1}^{(s)},\dots,z_{k_{s}}^{(s)})\) for \(s=1,\dots,n\) and \(z=(z^{(1)},z^{(2)},\dots,z^{(n)})\). By iterating (5.5) with \(z_{0}=1\), we obtain the next more general result.
**Theorem 5.7**.: _Let \(k_{1},\dots,k_{n+1}\) and \(a_{1},\dots,a_{n}\) be nonnegative integers such that \(k_{1}\geq k_{2}\geq\dots\geq k_{n+1}\) and \(a_{1}=a,a_{2}=\dots=a_{n}=0\). Let \(f_{\lambda^{(s,j)}}\in\Lambda_{\mathbb{Q}(q,t)}\) be homogeneous symmetric functions of degrees \(|\lambda^{(s,j)}|\) such that \(f_{\lambda^{(s,j)}}=\sum_{\ell(\mu)\leq\ell(\lambda^{(s,j)})}d_{\mu}g_{\mu}\), where \(d_{\mu}\in\mathbb{Q}(q,t)\). If
\(\sum_{j=1}^{m_{s}}\ell(\lambda^{(s,j)})\leq k_{s}-k_{s+1}\) for every \(s\) and all the alphabets \(X_{j}^{(s)}\) are independent of \(z\), then_
\[\operatorname{CT}_{z}\prod_{s=1}^{n}\frac{\prod_{i=1}^{k_{s}}(1/z _{i}^{(s)})_{a_{s}}(qz_{i}^{(s)})_{c-1}\prod_{1\leq i<j\leq k_{s}}(z_{i}^{(s)}/z _{j}^{(s)})_{c}(qz_{j}^{(s)}/z_{i}^{(s)})_{c}}{\prod_{j=1}^{k_{s+1}}\prod_{i=1}^ {k_{s}}(z_{i}^{(s)}/z_{j}^{(s+1)})_{c}}\] \[\quad\times\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda^{(s,j)}} \big{(}\big{[}Z^{(s)}+X_{j}^{(s)}\big{]};q,q^{c}\big{)}\] \[\quad=\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda^{(s,j)}}\Big{(} \Big{[}\frac{q^{a}-1}{1-q^{c}}+X_{j}^{(s)}\Big{]};q,q^{c}\Big{)}\prod_{s=1}^{n }M_{k_{s}}(a,c-1,c)\prod_{j=1}^{k_{n+1}}(1/z_{j}^{(n+1)})_{a}. \tag{5.11}\]
Similar to Theorem 5.7, we can also iterate (5.8) with \(z_{0}=1\) and obtain the next result.
**Theorem 5.8**.: _Let the \(k_{i}\), \(m_{i}\) and \(a_{i}\) be the same as in Theorem 5.7. Let \(f_{\lambda^{(s,j)}}\in\Lambda_{\mathbb{Q}}\) be homogeneous symmetric functions of degrees \(|\lambda^{(s,j)}|\) such that \(\sum_{j=1}^{m_{s}}|\lambda^{(s,j)}|\leq k_{s}-k_{s+1}\) for every \(s\). If the \(X_{j}^{(s)}\) and the \(Y_{j}^{(s)}\) are alphabets independent of \(z\), then_
\[\operatorname{CT}_{z}\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda ^{(s,j)}}\big{[}A^{(s,j)}\big{]}\prod_{s=1}^{n}\frac{\prod_{i=1}^{k_{s}}(1/z_{ i}^{(s)})_{a_{s}}(qz_{i}^{(s)})_{c-1}\prod_{1\leq i<j\leq k_{s}}(z_{i}^{(s)}/z_{j}^ {(s)})_{c}(qz_{j}^{(s)}/z_{i}^{(s)})_{c}}{\prod_{j=1}^{k_{s+1}}\prod_{i=1}^{k_{ s}}(z_{i}^{(s)}/z_{j}^{(s+1)})_{c}}\\ =\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda^{(s,j)}}\Big{[} \frac{q^{a}-1}{1-q^{c}}Y_{j}^{(s)}+X_{j}^{(s)}\Big{]}\prod_{s=1}^{n}M_{k_{s}}(a,c-1,c)\prod_{j=1}^{k_{n+1}}(1/z_{j}^{(n+1)})_{a}, \tag{5.12}\]
_where \(A^{(s,j)}=(z_{1}^{(s)}+\cdots+z_{k_{s}}^{(s)})Y_{j}^{(s)}+X_{j}^{(s)}\)._
## 6. Equivalent constant term identities
In this section, we give several constant term identities equivalent to those in Section 5. All these equivalences are built on the next lemma.
**Lemma 6.1**.: _For \(k_{1}\geq k_{2}\) and \(a+b+1\geq c\),_
\[\operatorname{CT}_{z^{(1)}}\frac{z_{0}^{(b+1-c)k_{1}}\prod_{i=1}^ {k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i<j\leq k_{1 }}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{i=1}^{k_{ 1}}(z_{i}^{(1)})^{b+1-c}\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(q^{b+1-c}z_{i}^{ (1)}/z_{j}^{(2)})_{c}}\\ =(-1)^{k_{1}(b+1-c)}q^{k_{1}\binom{b+2-c}{2}}\prod_{j=1}^{k_{2}} (z_{0}/z_{j}^{(2)})_{a+b+1-c}\cdot M_{k_{1}}(a+b+1-c,c-1,c). \tag{6.1}\]
Taking \(b=c-1\), Lemma 6.1 reduces to Lemma 5.1. Hence, Lemma 6.1 is seemingly more general. But the two lemmas are in fact equivalent. We will show the equivalence in the following proof of Lemma 6.1.
Proof.: We prove the lemma by transforming (5.1) to (6.1).
We can write the left-hand side of (5.1) as
\[\underset{z^{(1)}}{\operatorname{CT}}\prod_{i=1}^{k_{1}}(-z_{0}/z_{i}^{(1)})^{a}q ^{\binom{a}{2}}(q^{1-a}z_{i}^{(1)}/z_{0})_{a+c-1}\frac{\prod_{1\leq i<j\leq k_{1} }(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{j=1}^{k_{2} }\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}.\]
Using the above and taking \(a\mapsto a+b+1-c\), (5.1) becomes
\[\underset{z^{(1)}}{\operatorname{CT}}\prod_{i=1}^{k_{1}}(-z_{0}/z _{i}^{(1)})^{a+b+1-c}q^{\binom{a+b+1-c}{2}}(q^{c-a-b}z_{i}^{(1)}/z_{0})_{a+b} \frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{ i}^{(1)})_{c}}{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)}/z_{j}^{(2)})_{c}}\] \[=\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a+b+1-c}\cdot M_{k_{1}}( a+b+1-c,c-1,c).\]
Note that the substitution \(a\mapsto a+b+1-c\) is valid by \(a+b+1\geq c\). Take \((z_{0},z_{j}^{(2)})\mapsto(z_{0}q^{c-b-1},z_{j}^{(2)}q^{c-b-1})\) in the above. This substitution does not change the constant term. Thus
\[\underset{z^{(1)}}{\operatorname{CT}}\prod_{i=1}^{k_{1}}(-q^{c-b- 1}z_{0}/z_{i}^{(1)})^{a+b+1-c}q^{\binom{a+b+1-c}{2}}(q^{1-a}z_{i}^{(1)}/z_{0}) _{a+b}\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{( 1)}/z_{i}^{(1)})_{c}}{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(q^{b+1-c}z_{i}^{(1 )}/z_{j}^{(2)})_{c}}\\ =\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a+b+1-c}\cdot M_{k_{1}}( a+b+1-c,c-1,c). \tag{6.2}\]
We can rewrite (6.2) as
\[\underset{z^{(1)}}{\operatorname{CT}}\prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})^{b+1-c}q^{\binom{a}{2}}(-z_{0}/z_{i}^{(1)})^{a}(q^{1-a}z_{i}^{(1)}/z_{0})_{a+b}\frac{\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(q^{b+1-c}z_{i}^{(1)}/z_{j}^{(2)})_{c}}\\ =(-1)^{k_{1}(b+1-c)}q^{k_{1}\left(\binom{a}{2}+(b+1-c)(a+b+1-c)-\binom{a+b+1-c}{2}\right)}\\ \times\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a+b+1-c}\cdot M_{k_{1}}(a+b+1-c,c-1,c). \tag{6.3}\]
Substituting
\[q^{\binom{a}{2}}(-z_{0}/z_{i}^{(1)})^{a}(q^{1-a}z_{i}^{(1)}/z_{0})_{a+b}=(z_{0 }/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\]
and
\[\binom{a}{2}+(b+1-c)(a+b+1-c)-\binom{a+b+1-c}{2}=\binom{b+2-c}{2}\]
into (6.3) yields (6.1).
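The binomial identity used in this substitution admits a one-line verification: setting \(d=b+1-c\),

\[\binom{a}{2}+d(a+d)-\binom{a+d}{2}=\frac{(a^{2}-a)+2d(a+d)-(a+d)^{2}+(a+d)}{2}=\frac{d^{2}+d}{2}=\binom{d+1}{2}=\binom{b+2-c}{2}.\]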
By Lemma 6.1 we obtain the next result. Its proof from Lemma 6.1 parallels the proof of Proposition 5.2 from Lemma 5.1, so we omit it.
**Proposition 6.2**.: _For \(a+b+1\geq c\), \(k_{1}\geq k_{2}\) and a partition \(\lambda\) such that \(\ell(\lambda)\leq k_{1}-k_{2}\),_
\[\operatorname*{CT}_{z^{(1)}}\frac{z_{0}^{(b+1-c)k_{1}-|\lambda|}h_ {\lambda}\big{[}\tfrac{1-q^{c}}{1-q}Z^{(1)}\big{]}\prod_{i=1}^{k_{1}}(z_{0}/z_{ i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1) })_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{i=1}^{k_{1}}(z_{i}^{(1)})^{b+1-c} \prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(q^{b+1-c}z_{i}^{(1)}/z_{j}^{(2)})_{c}}\] \[=(-1)^{k_{1}(b+1-c)}q^{k_{1}\binom{b+2-c}{2}}h_{\lambda}\Big{[} \frac{q^{a}-q^{c-b-1}}{1-q}\Big{]}\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a+b+1 -c}\cdot M_{k_{1}}(a+b+1-c,c-1,c), \tag{6.4}\]
_where \(Z^{(1)}=z_{1}^{(1)}+\cdots+z_{k_{1}}^{(1)}\)._
The identity (5.2) in Proposition 5.2 is the \(b=c-1\) case of (6.4). However, it is not hard to transform (5.2) into (6.4) along the same lines as the proof of Lemma 6.1. Hence, the two identities are equivalent.
By (2.8), we can rewrite (6.4) as
\[\operatorname*{CT}_{z^{(1)}}\frac{g_{\lambda}(z^{(1)};q,q^{c}) \prod_{i=1}^{k_{1}}(z_{0}/z_{i}^{(1)})_{a}(qz_{i}^{(1)}/z_{0})_{b}\prod_{1\leq i <j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1)})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{z_{0 }^{|\lambda|-(b+1-c)k_{1}}\prod_{i=1}^{k_{1}}(z_{i}^{(1)})^{b+1-c}\prod_{j=1}^ {k_{2}}\prod_{i=1}^{k_{1}}(q^{b+1-c}z_{i}^{(1)}/z_{j}^{(2)})_{c}}\] \[=(-1)^{k_{1}(b+1-c)}q^{k_{1}\binom{b+2-c}{2}}g_{\lambda}\Big{(} \Big{[}\frac{q^{a}-q^{c-b-1}}{1-q^{c}}\Big{]};q,q^{c}\Big{)}\] \[\quad\times\prod_{j=1}^{k_{2}}(z_{0}/z_{j}^{(2)})_{a+b+1-c}\cdot M _{k_{1}}(a+b+1-c,c-1,c) \tag{6.5}\]
for \(k_{1}\geq k_{2}\), \(\ell(\lambda)\leq k_{1}-k_{2}\) and \(a+b+1\geq c\). Using (6.5) with \(z_{0}=1\), we obtain the next result equivalent to Proposition 5.4. The proof is similar to the proof of Proposition 5.4.
**Proposition 6.3**.: _For \(F=\mathbb{Q}(q,t)\) and partitions \(\lambda^{(1)},\ldots,\lambda^{(m)}\), let \(f_{\lambda^{(1)}},\ldots,f_{\lambda^{(m)}}\in\Lambda_{F}\) be homogeneous symmetric functions of degrees \(|\lambda^{(1)}|,\ldots,|\lambda^{(m)}|\) respectively such that \(f_{\lambda^{(i)}}=\sum_{\ell(\mu)\leq\ell(\lambda^{(i)})}d_{\mu}g_{\mu}\) for \(i=1,\ldots,m\). Here \(d_{\mu}\in F\). Then, for \(a+b+1\geq c\), \(k_{1}\geq k_{2}\), \(\sum_{i=1}^{m}\ell(\lambda^{(i)})\leq k_{1}-k_{2}\) and any alphabets \(X_{i}\) independent of \(z^{(1)}\)_
\[\operatorname*{CT}_{z^{(1)}}\frac{\prod_{i=1}^{m}f_{\lambda^{(i)} }\big{(}\big{[}Z^{(1)}+X_{i}\big{]};q,q^{c}\big{)}\prod_{i=1}^{k_{1}}(1/z_{i}^ {(1)})_{a}(qz_{i}^{(1)})_{b}\prod_{1\leq i<j\leq k_{1}}(z_{i}^{(1)}/z_{j}^{(1 )})_{c}(qz_{j}^{(1)}/z_{i}^{(1)})_{c}}{\prod_{i=1}^{k_{1}}(z_{i}^{(1)})^{b+1-c} \prod_{j=1}^{k_{2}}\prod_{i=1}^{k_{1}}(q^{b+1-c}z_{i}^{(1)}/z_{j}^{(2)})_{c}}\] \[=(-1)^{k_{1}(b+1-c)}q^{k_{1}\binom{b+2-c}{2}}\prod_{i=1}^{m}f_{ \lambda^{(i)}}\Big{(}\Big{[}\frac{q^{a}-q^{c-b-1}}{1-q^{c}}+X_{i}\Big{]};q,q^{ c}\Big{)}\] \[\quad\times\prod_{j=1}^{k_{2}}(1/z_{j}^{(2)})_{a+b+1-c}\cdot M_{k_ {1}}(a+b+1-c,c-1,c). \tag{6.6}\]
For \(n\) a positive integer, let \(k_{1},\ldots,k_{n+1}\), \(m_{1},\ldots,m_{n}\), \(a_{1},\ldots,a_{n}\) and \(b_{1},\ldots,b_{n}\) be non-negative integers such that \(k_{1}\geq k_{2}\geq\cdots\geq k_{n+1}\) and \(a_{1}=a,a_{2}=\cdots=a_{n}=0\). For
\(1\leq s\leq n\), \(1\leq j\leq m_{s}\) and \(\lambda^{(s,j)}\) partitions, let \(f_{\lambda^{(s,j)}}\in\Lambda_{F}=\Lambda_{\mathbb{Q}(q,t)}\) be homogeneous symmetric functions of degrees \(|\lambda^{(s,j)}|\). For \(s=1,\ldots,n\), denote \(\sigma_{s}=b_{1}+\cdots+b_{s}\), \(\sigma_{0}=0\), and \(Z^{(s)}=z_{1}^{(s)}+\cdots+z_{k_{s}}^{(s)}\). By iterating (6.6) we obtain Theorem 6.4 below, which is equivalent to Theorem 5.7.
**Theorem 6.4**.: _Suppose \(f_{\lambda^{(s,j)}}=\sum_{\ell(\mu)\leq\ell(\lambda^{(s,j)})}d_{\mu}g_{\mu}\). If \(a+\sigma_{s}+s\geq sc\), \(\sum_{j=1}^{m_{s}}\ell(\lambda^{(s,j)})\leq k_{s}-k_{s+1}\) for every \(s\), and all the alphabets \(X_{j}^{(s)}\) are independent of \(z\), then_
\[\begin{split}\operatorname*{CT}_{z}&\prod_{s=1}^{n} \frac{\prod_{i=1}^{k_{s}}(1/z_{i}^{(s)})_{a_{s}}(qz_{i}^{(s)})_{b_{s}}\prod_{1 \leq i<j\leq k_{s}}(z_{i}^{(s)}/z_{j}^{(s)})_{c}(qz_{j}^{(s)}/z_{i}^{(s)})_{c} }{\prod_{i=1}^{k_{s}}(z_{i}^{(s)})^{b_{s}+1-c}\prod_{j=1}^{k_{s+1}}\prod_{i=1} ^{k_{s}}(q^{b_{s}+1-c}z_{i}^{(s)}/z_{j}^{(s+1)})_{c}}\\ &\quad\times\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda^{(s,j)}} \big{(}\big{[}Z^{(s)}+X_{j}^{(s)}\big{]};q,q^{c}\big{)}\\ &=(-1)^{\sum_{s=1}^{n}k_{s}(b_{s}+1-c)}q^{\sum_{s=1}^{n}k_{s} \binom{b_{s}+2-c}{2}}\prod_{s=1}^{n}M_{k_{s}}\big{(}a+\sigma_{s}+s(1-c),c-1,c \big{)}\\ &\quad\times\prod_{j=1}^{k_{n+1}}(1/z_{j}^{(n+1)})_{a+\sigma_{n}+ n(1-c)}\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda^{(s,j)}}\Big{(}\Big{[} \frac{q^{a+\sigma_{s-1}+(s-1)(1-c)}-q^{c-b_{s}-1}}{1-q^{c}}+X_{j}^{(s)}\Big{]} ;q,q^{c}\Big{)}.\end{split} \tag{6.7}\]
Along the same lines as for Theorem 5.8, we obtain its equivalent result in the next theorem.
**Theorem 6.5**.: _For \(s=1,\ldots,n\), if \(f_{\lambda^{(s,j)}}\in\Lambda_{\mathbb{Q}}\) such that \(\sum_{j=1}^{m_{s}}|\lambda^{(s,j)}|\leq k_{s}-k_{s+1}\), all the alphabets \(X_{j}^{(s)}\) and \(Y_{j}^{(s)}\) are independent of \(z\), and \(a+\sigma_{s}+s\geq sc\), then_
\[\begin{split}&\operatorname*{CT}_{z}\prod_{s=1}^{n}\prod_{j=1}^{m_{s}} f_{\lambda^{(s,j)}}\big{[}Z^{(s)}Y_{j}^{(s)}+X_{j}^{(s)}\big{]}\prod_{s=1}^{n} \frac{\prod_{i=1}^{k_{s}}(1/z_{i}^{(s)})_{a_{s}}(qz_{i}^{(s)})_{b_{s}}\prod_{1 \leq i<j\leq k_{s}}(z_{i}^{(s)}/z_{j}^{(s)})_{c}(qz_{j}^{(s)}/z_{i}^{(s)})_{c} }{\prod_{i=1}^{k_{s}}(z_{i}^{(s)})^{b_{s}+1-c}\prod_{j=1}^{k_{s+1}}\prod_{i=1} ^{k_{s}}(q^{b_{s}+1-c}z_{i}^{(s)}/z_{j}^{(s+1)})_{c}}\\ &\quad=(-1)^{\sum_{s=1}^{n}k_{s}(b_{s}+1-c)}q^{\sum_{s=1}^{n}k_{s} \binom{b_{s}+2-c}{2}}\prod_{s=1}^{n}\prod_{j=1}^{m_{s}}f_{\lambda^{(s,j)}} \Big{[}\frac{q^{a+\sigma_{s-1}+(s-1)(1-c)}-q^{c-b_{s}-1}}{1-q^{c}}Y_{j}^{(s)}+ X_{j}^{(s)}\Big{]}\\ &\quad\times\prod_{s=1}^{n}M_{k_{s}}\big{(}a+\sigma_{s}+s(1-c),c- 1,c\big{)}\prod_{j=1}^{k_{n+1}}(1/z_{j}^{(n+1)})_{a+\sigma_{n}+n(1-c)}.\end{split} \tag{6.8}\]
### Acknowledgements
This work was supported by the National Natural Science Foundation of China (No. 12171487).
|
2306.05837 | Micromotion compensation of trapped ions by qubit transition and direct
scanning of dc voltages | Excess micromotion is detrimental to accurate qubit control of trapped ions,
thus measuring and minimizing it is crucial. In this paper, we present a simple
approach for measuring and suppressing excess micromotion of trapped ions by
leveraging the existing laser-driven qubit transition scheme combined with
direct scanning of dc voltages. The compensation voltage is deduced by
analyzing the Bessel expansion of a scanned qubit transition rate. The method
provides a fair level of sensitivity for practical quantum computing
applications, while demanding minimal deviation of trap condition. By
accomplishing compensation of excess micromotion in the qubit
momentum-excitation direction, the scheme offers an additional avenue for
excess micromotion compensation, complementing existing compensation schemes. | Woojun Lee, Daun Chung, Jiyong Kang, Honggi Jeon, Changhyun Jung, Dong-Il "Dan" Cho, Taehyun Kim | 2023-06-09T12:09:53Z | http://arxiv.org/abs/2306.05837v2 | # In-situ micromotion compensation of trapped ions by Rabi oscillation and direct scanning of dc voltages
###### Abstract
Micromotion is detrimental to accurate qubit control of trapped ions, thus measuring and minimizing it is crucial. In this paper, we present a simple method to measure and minimize micromotion of trapped ions by Rabi oscillation combined with direct scanning of dc voltages. The approach utilizes the qubit control scheme itself and eliminates the need to install additional experimental setups or to compromise the trapping stability by adjusting the intensity or frequency of the trapping lasers or fields. Accordingly, the method enables in-situ measurement of micromotion during qubit control of the ions, while achieving a level of sensitivity comparable to commonly used techniques.
## 1 Introduction
Trapped ions are considered one of the most promising platforms for realizing a quantum computer [1, 2]. They possess multiple advantages, including narrow linewidths resulting in long coherence times [3], well-studied laser-control methods drawn from traditional and modern optics [4, 5, 6], and intrinsic reproducibility as naturally identical particles [7]. Additionally, the ions are trapped by electric fields, which enables tight trapping and crystallization [8], yet their motional coherence suffers from even a minor presence of undesired electric fields.
In Paul traps, ions are trapped in static (dc) and radio frequency (rf) electric fields. In this configuration, any ion displacement from the rf equilibrium point causes a driven oscillation of the ion at the trapping rf frequency, which is known as excess micromotion [9]. This motion is detrimental to quantum computing platforms based on trapped ions, since such systems rely on motional quantum states for their entangling gates [10, 11]. Any additional motion would not only decrease trap lifetime, but also reduce the interaction strength with cooling and control lasers and cause heating of the ion, inevitably amplifying quantum decoherence [12, 13, 14, 15, 16, 17]. A displacement can occur either during the initial setup, before the compensation electrode voltages are optimized to minimize micromotion, or at any time afterwards through laser-induced charging of materials in the trap structure, which generates an unwanted electric field at the ion's position [18, 19, 20, 21, 22].
Due to its importance, multiple methods to detect and cancel the excess micromotion, or displacement from the rf null have been reported. These methods include measuring the correlation between the phase of the trap rf field and the fluorescence [9, 23, 24, 25, 26, 27, 28], measuring a fluorescence spectrum of resolved carrier and rf sideband transitions for a repumping laser or a cooling laser to minimize the sideband [9, 21, 25, 27, 29, 30, 31], using parametric excitation
while modulating the trap rf voltage with the secular frequency [32, 33, 34, 27], and measuring the ion position while altering the amplitude of the confining dc or rf potential to minimize the displacement [34, 35, 36, 21, 9].
In this paper, we demonstrate a method to detect micromotion via a single-qubit rotation while directly scanning the dc voltage. Even though it employs the conventional principle of exploiting the change in transition rates caused by micromotion, the method can be implemented using only the fixed frequency of either the carrier transition or an rf sideband transition, eliminating the need to measure the entire spectrum, which requires numerous measurement shots. Additionally, our method removes the need to deviate from the stable trap environment, such as alternating between two different amplitudes of the trap rf voltage, adjusting the frequency or intensity of the repumping or cooling laser beam, or parametrically exciting the motion of the ion, all of which could potentially compromise the trapping stability during the measurement. Furthermore, this method enables in-situ detection of micromotion during qubit control experiments, requiring no additional setup or change of the environment. These features allow for the direct application of the method to frequently encountered situations in ion trap systems where the micromotion along the motion-controlled axis for quantum gates cannot be easily detected by the conventional methods.
## 2 Modification of transition probability by micromotion
The amount of micromotion can be detected by measuring the qubit transition probability while scanning a dc voltage of the trap to tune the equilibrium position of the ion. Consider an ion trapped by an rf pseudopotential \(\phi_{rf}=m\omega_{x}^{2}x^{2}/2e\) along the \(x\)-axis, where a voltage change \(\delta V_{dc}\) of the controlled dc electrode changes the electric field at the ion position by \(\delta E_{dc}\); here \(\omega_{x}\) is the secular frequency along the \(x\)-axis, \(m\) is the mass of the ion, and \(e\) is the unit charge. Using the ratio of the differential electric field to the differential voltage of the controlled dc electrodes, \(K=dE_{dc}/dV_{dc}\), the displacement by this field in the pseudopotential can be written as \(u_{dc}=eK\delta V_{dc}/m\omega_{x}^{2}\), and the total displacement including the displacement by a stray field \(E_{stray}\) is expressed as \(u_{0}=u_{dc}+u_{stray}=e(K\delta V_{dc}+E_{stray})/m\omega_{x}^{2}\).
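As a rough numerical illustration of this relation (our own sketch, not from the paper; the field-to-voltage ratio \(K\approx 2880\) (V/m)/V and \(\omega_{x}=2\pi\times 1.6\) MHz are the surface-trap values quoted in Secs. 3 and 4), the displacement per volt for a \({}^{171}\)Yb\({}^{+}\) ion can be evaluated as follows:

```python
import math

# Constants and representative trap parameters (values quoted later in the
# paper for the surface trap; treat them as illustrative inputs).
e = 1.602176634e-19            # elementary charge [C]
m = 171 * 1.66053906660e-27    # mass of a 171Yb+ ion [kg]
omega_x = 2 * math.pi * 1.6e6  # radial secular frequency [rad/s]
K = 2880.0                     # dE/dV at the ion per volt on the inner dc pair [(V/m)/V]

# u_dc = e * K * dV / (m * omega_x**2): displacement of the equilibrium per volt.
u_per_volt = e * K / (m * omega_x**2)
print(f"displacement per volt: {u_per_volt * 1e6:.1f} um/V")

# The ~1.8 mV voltage sensitivity quoted in Table 1 then corresponds to
print(f"displacement at 1.8 mV: {u_per_volt * 1.8e-3 * 1e9:.0f} nm")
```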
To estimate the magnitude of the field to be compensated at the ion position, the transition probability for a qubit flip is measured while the voltage of the controlled dc electrode is varied. For an ion displaced by \(u_{0}\) from the rf null and having a secular motion amplitude \(u_{1}\), the total ion displacement \(s(t)\) including the micromotion at frequency \(\omega_{rf}\), taken along the micromotion direction (which may differ from the \(x\)-axis), follows [9],
\[s(t)\cong\left[u_{0}+u_{1}\cos(\omega_{x}t+\phi_{x})\right]\left[\frac{q}{2} \cos(\omega_{rf}t)+\cos(\theta)\right], \tag{1}\]
where \(q\) is a trap parameter known as the \(q\)-parameter along the direction of the micromotion, which depends on the trap geometry and is proportional to the magnitude of the pseudopotential [37, 9], and \(\theta\) is the acute angle between the direction of the static displacement and the direction of the micromotion. The atom-light interaction of the ion with the transition laser of momentum \(k\) (or the momentum difference \(k=k_{2}-k_{1}\) in the case of a Raman transition with momenta \(k_{1}\) and \(k_{2}\)) then experiences a motion-induced phase modulation of the effective electric field,
\[E^{\prime}(t)=E(t)\exp\left[i\beta\cos(\omega_{rf}t)\right] \tag{2}\]
with the modulation depth of \(\beta=kqu_{0}/2\), where we neglect the phase difference between the rf electrodes which could additionally produce micromotion. The modulation can be expanded with the Bessel function of the first kind \(J_{n}(x)\) as,
\[\exp\left[i\beta\cos(\omega_{rf}t)\right]=\sum_{n=-\infty}^{\infty}J_{n}( \beta)\exp\left[in\left(\omega_{rf}t+\pi/2\right)\right]. \tag{3}\]
The modulated transition field directly modifies the Rabi frequency of the trapped ion, entering through the interaction Hamiltonian
\[\hat{H}_{I}=\frac{\hbar}{2}\begin{pmatrix}0&\Omega_{o}\\ \Omega_{o}&0\end{pmatrix}, \tag{4}\]
replacing the Rabi frequency \(\Omega_{o}\) with the modified Rabi frequency \(\Omega_{o}^{\prime}\). The resulting time evolution of the ion's quantum state then becomes,
\[\ket{\psi(t)}=\cos(\Omega_{o}^{\prime}t/2)\ket{0}+\sin(\Omega_{o}^{\prime}t/2) \ket{1} \tag{5}\]
If the ion interacts with a transition field detuned to the \(n\)-th order sideband, \(n\omega_{rf}\), the time-dependent factor \(\exp(-in\omega_{rf}t)\) retains only the \(n\)-th term in the expansion of Eq. (3), while the other terms are time-averaged to vanish. In this case, Eq. (2) reduces to \(E^{\prime}(t)=J_{n}(\beta)E(t)\exp\left[in\left(\omega_{rf}t+\pi/2\right)\right]\), and the modified Rabi frequency becomes \(\Omega_{o}^{\prime}=\mu\,|E^{\prime}(t)|/\hbar=J_{n}(\beta)\Omega_{o}\), where \(\mu\) is the transition dipole moment.
If we choose to measure the carrier transition only, for instance, we can extract the \(n=0\) term from the expansion, so the modified Rabi frequency is \(\Omega_{o}^{\prime}(\beta)=J_{0}(\beta)\Omega_{o}\). The transition probability to the \(\ket{1}\) state with a \(\pi\)-pulse will then follow
\[P_{\ket{1},n=0}(\beta;t_{\pi})=\abs{\langle 1|\psi(t_{\pi})\rangle}^{2}=\abs{\sin{(\Omega_{o}^{\prime}t_{\pi}/2)}}^{2}=\abs{\sin{(\pi J_{0}(\beta)/2)}}^{2}, \tag{6}\]
where \(\beta=(kqe/2m\omega_{x}^{2})(K\delta V_{dc}+E_{stray})\) can be varied by the controlled dc voltages, and \(t_{\pi}=\pi/\Omega_{o}\). Given that the displacement \(u_{0}\) can be both positive and negative, \(P_{\ket{1},n}(\beta;t_{\pi})\) is an even function of \(\beta\). Similarly, the transition probability for the \(n\)-th sideband can be expressed as,
\[P_{\ket{1},n}(\beta;t_{\pi})=\abs{\sin{(\pi J_{n}(\beta)/2)}}^{2}. \tag{7}\]
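For small \(\beta\), the standard expansions \(J_{0}(\beta)=1-\beta^{2}/4+O(\beta^{4})\) and \(J_{1}(\beta)=\beta/2+O(\beta^{3})\) give

\[P_{\ket{1},n=0}\approx\cos^{2}(\pi\beta^{2}/8)\approx 1-\left(\pi\beta^{2}/8\right)^{2},\qquad P_{\ket{1},n=1}\approx\sin^{2}(\pi\beta/4)\approx\left(\pi\beta/4\right)^{2},\]

so near the rf null the carrier signal is flat to quartic order in \(\beta\), while the first-sideband signal vanishes quadratically.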
Calculated transition probabilities for \(n=0,1\), and \(2\) against \(\beta\) are plotted in Fig. 1.
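The curves in Fig. 1 follow directly from Eq. (7) and can be reproduced numerically; the short sketch below (ours, not the authors' code) uses SciPy's Bessel functions:

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind J_n(x)

def transition_probability(beta, n):
    """P_{|1>,n}(beta; t_pi) = |sin(pi * J_n(beta) / 2)|**2, Eq. (7)."""
    return np.sin(np.pi * jv(n, beta) / 2.0) ** 2

beta = np.linspace(0.0, 10.0, 1001)
for n in (0, 1, 2):
    p = transition_probability(beta, n)
    print(f"n={n}: P(beta=0) = {p[0]:.3f}, max P = {p.max():.3f}")
# For the carrier (n=0) the probability is maximal at beta = 0, which is what
# the dc-voltage scan exploits to locate the rf null.
```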
Specifically for the carrier transition, \(P_{\ket{1},n=0}(\beta;t_{\pi})\) has a global maximum at \(\beta=0\). Scanning the controlled dc voltages while applying a \(\pi\)-pulse of the transition at each setting, the transition probability will have a global maximum at the set of dc voltages whose dc null matches the rf null, which can be used directly to find the compensation voltage for micromotion.
Figure 1: Modification of transition probability by micromotion for motional sidebands.
## 3 Experimental setup
The suggested method was used as a part of the following two experimental setups for in-situ calibration: a microfabricated surface trap and a blade trap.
The surface trap (see Fig. 2(a)), or the chip trap, is a silicon-based microfabricated chip with dc and rf electrodes, made of aluminum, on the top surface of the chip and coated with gold (more detailed descriptions of the chip structure can be found in Ref. [38]). \({}^{171}\)Yb\({}^{+}\) ions are trapped on the chip at a height of 80 \(\upmu\)m in an ultra-high vacuum of \(<1\times 10^{-10}\) Torr. There is a loading slot with a width of 80 \(\upmu\)m along the trap axis direction, which penetrates the chip. An ion is tightly confined radially in a pseudo-potential generated by rf voltages with a frequency of 22.2 MHz and an amplitude of approximately 200 V, and loosely confined along the trap axial direction by a static potential generated by a set of dc voltages. The dc voltages, supplied by a digital-to-analog converter (DAC; ADLINK Technology PCI-6208), are applied to the inner and outer dc electrodes to construct the trapping potential, and the pair of inner dc electrodes is used specifically for tuning the compensation field. The secular frequencies of the trap along the three principal axes are 1.6 MHz and 1.4 MHz for the two radial directions, and 450 kHz for the axial direction, respectively. Imaging and state detection of the trapped ion are achieved by collecting the 369-nm fluorescence with a high-NA imaging lens (Photon Gear 15470-S, 0.6 NA) and counting the photons with an electron-multiplied charge-coupled device (EMCCD) or a photomultiplier tube (PMT). For the Raman transition from the \(|0\rangle=\left|{}^{2}S_{1/2},F=0,m_{F}=0\right\rangle\) state to the \(|1\rangle=\left|{}^{2}S_{1/2},F=1,m_{F}=1\right\rangle\) state used to drive Rabi oscillations, a 355-nm picosecond pulse laser with a repetition rate of around 120 MHz is split into two beams that are separately modulated with acousto-optic modulators (AOMs) to form a pair of beatnote-locked Raman beams [39]. The beams, with waists of 2 \(\upmu\)m and 15 \(\upmu\)m, were injected onto the ion in a counter-propagating configuration from the front (above) and back (below) of the chip, respectively, as shown in Fig. 2(a).
The blade trap, which follows a typical structure, comprises four blade electrodes with a spacing of 460 \(\upmu\)m, operated in an ultra-high vacuum (see Ref. [40] for more details of the trap). The trap rf voltage has a frequency of 15.3 MHz and an amplitude of 700 V, and the secular frequencies of the trap are 1.25 MHz and 1.28 MHz for the radial directions, and 120 kHz for the axial direction, respectively. The 369-nm fluorescence is collected by a homemade lens assembly with 0.34 NA and counted by an EMCCD or a PMT for imaging and state detection. The Raman transition scheme is equivalent to that of the surface trap, and the Raman beams, with a waist of 10 \(\upmu\)m, are injected in a perpendicular configuration. Tuning of the dc voltages for stray field compensation is achieved through additionally installed electrodes, supplied by a high voltage power supply (SRS PS350). Note that, in the blade trap, the micromotion compensated by the tuning electric field is
Figure 2: Experimental setup and configuration for (a) the surface and (b) the blade traps, with the directions of the tuned electric fields, micromotions, probing Raman lasers and corresponding momentum differences illustrated.
in a perpendicular direction to the ion displacement due to the rf field profile of the trap.
## 4 Result
In both the surface trap and the blade trap, state detection of \(|1\rangle\) was conducted after applying near-\(\pi\) pulses to qubits prepared in the initial state \(|0\rangle\), while scanning the dc voltage. The experimental data was fitted to a Bessel-like curve as shown in Fig. 3, taking into account practical experimental conditions such as thermal decoherence, anharmonicity in the trap potentials, intrinsic micromotion, and the actual pulse time, which can be shorter or longer than the \(\pi\)-pulse time. In order to demonstrate the in-situ measurement capability, the data was collected only after typical Doppler cooling, which leaves the ion with a thermal distribution of phonons. Accordingly, the data was fitted to a modified Rabi oscillation profile, \(f_{Nph}(aJ_{n}(\beta);t)\), where \(N_{ph}\) is the average number of thermal phonons and \(a\) is a fitting parameter proportional to the transition strength, with the consideration of different Rabi frequencies for different phonon numbers [41]. The transition probability in Eq. (7) is then replaced by \(P_{1}=f_{Nph}\left[aJ_{n}(b_{0}+b_{1}v+b_{2}v^{2});t_{p}\right]\) with \(v=\Delta V_{dc}-c\), for fitting parameters \(N_{ph}\), \(b_{0}\), \(b_{1}\), \(b_{2}\), and \(c\), where \(t_{p}\) is the pulse time and \(c\) is taken as the compensation voltage, depicted as the dashed vertical lines in the plots. The data was obtained for both the carrier transition and the first-order sideband transition of the trap rf frequency, either of which can be used to find the compensation voltage. Fig. 3(a) is for the surface trap and (b) for the blade trap.
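A schematic version of this fit is sketched below. The thermal profile \(f_{Nph}\) is only approximated here (Lamb-Dicke carrier Rabi frequencies \(\Omega_{n}\approx\Omega_{0}(1-\eta^{2}n)\) with an assumed Lamb-Dicke parameter, and a truncated thermal sum), whereas the actual profile used in the paper follows Ref. [41]; every model detail below should therefore be read as an assumption:

```python
import numpy as np
from scipy.special import jv
from scipy.optimize import curve_fit

ETA = 0.1     # assumed Lamb-Dicke parameter (illustrative value, not from the paper)
N_MAX = 200   # truncation of the thermal phonon sum
N_ORDER = 0   # Bessel order: 0 for the carrier, 1 for the first rf sideband

def thermal_rabi(omega, t, nbar):
    """Thermally averaged Rabi profile: sum_n p_n sin^2(Omega_n t / 2),
    with thermal weights p_n and Omega_n ~ omega * (1 - ETA**2 * n)."""
    n = np.arange(N_MAX)
    p_n = (nbar ** n) / ((nbar + 1.0) ** (n + 1))
    omega_n = omega * (1.0 - ETA ** 2 * n)
    return float(np.sum(p_n * np.sin(omega_n * t / 2.0) ** 2))

def model(v_dc, a, b0, b1, b2, c, nbar):
    """P_1 = f_Nph[a * J_n(b0 + b1*v + b2*v**2); t_p] with v = dV_dc - c."""
    t_p = np.pi  # near-pi pulse, in units where the bare Rabi frequency is 1
    v = np.asarray(v_dc, dtype=float) - c
    omega_eff = a * jv(N_ORDER, b0 + b1 * v + b2 * v ** 2)
    return np.array([thermal_rabi(w, t_p, nbar) for w in omega_eff])

# Synthetic demonstration: generate noisy data and recover the compensation voltage c.
voltages = np.linspace(-0.2, 0.2, 41)
rng = np.random.default_rng(0)
data = model(voltages, 1.0, 0.0, 6.0, 0.0, 0.03, 4.0)
data = np.clip(data + rng.normal(0.0, 0.02, data.shape), 0.0, 1.0)

reduced = lambda v, b1, c, nbar: model(v, 1.0, 0.0, b1, 0.0, c, nbar)
popt, pcov = curve_fit(reduced, voltages, data, p0=[5.0, 0.0, 3.0], maxfev=10000)
print(f"fitted compensation voltage c = {popt[1]:+.4f} V "
      f"(+/- {np.sqrt(np.diag(pcov))[1]:.4f} V)")
```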
The one-half values of the 95% confidence intervals of the curve fittings, and the corresponding estimated field magnitudes, for the two types of trap and transitions can be found in Table 1. The magnitude of the electric field in the surface trap was estimated using a COMSOL Multiphysics(r) simulation, which gave 2880 V/m at the ion position per 1 V on the inner dc pair, and 0.2238 V/m per 1 V on the tuning electrodes in the blade trap. The result shows that the suggested approach achieves a sensitivity comparable to the levels of detection or compensation reported in practical applications [30, 21, 34]. The compensation voltage can be determined using either the carrier or a sideband transition, and it requires only a minimum of 10 to 20 data points. Note that the above measurement was carried out as an in-situ characterization of micromotion sufficient for our daily experiments, while the uncertainty can be further reduced by more precise and repeated measurements.
The result shows that the measurements with the carrier transition and the sideband transition each provide a decent level of sensitivity, and the uncertainty can be further reduced by combining the two results. On the other hand, a noticeable difference in compensation voltages between the carrier transition and the sideband transition can be observed in Fig. 3(a). It appears to result from the reported long-term charging of dielectric materials within the microfabricated trap [18, 19, 20, 21, 22], thus care should be taken while determining the compensation voltage in such a situation.
Note that the ions have not been cooled beyond Doppler cooling, so the maximum transition probability did not reach unity. The actual experimental conditions are reflected in the predicted
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Surface trap} & \multicolumn{2}{c}{Blade trap} \\ \hline & Carrier & Sideband & Carrier & Sideband \\ \hline Voltage (V) & 0.0018 & 0.0017 & 9.7 & 13.9 \\ \hline Electric field (V/m) & 5.1 & 5.0 & 2.2 & 3.1 \\ \hline \(\beta\) & 0.091 & 0.084 & 0.048 & 0.093 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Estimated sensitivity of the measurement to compensation voltage, stray field, and modulation depth.
profile as described, though they may still cause minor discrepancies between the measured transition probabilities and the fit. Nevertheless, such discrepancies would not meaningfully degrade the estimation of the compensation voltage, since the effect does not shift the measured Bessel-like profile along the voltage axis and occurs rather symmetrically about the compensation voltage.
This approach also allows monitoring of the long-term drift of the null position of the trap. Fig. 4 presents a sequential series of null scans in time, which reveals a drift of the equilibrium due to laser-induced dielectric charging of the trap chip materials. The amount of drift was approximately 0.03 V in compensation voltage, or 86 V/m in electric field. This deviation from the initial condition is not substantial, but it is still well within the detectable range of the method. The repetitive scan was run at a rate of around 3.0 scans per minute, which was sufficiently fast for monitoring applications.
Figure 3: Measurement for (a) the chip trap and (b) the blade trap. The vertical dashed lines present the estimated compensation voltages for the fitting curves. The error bars of the points indicate the standard errors of the measurements, and the 95 % confidence intervals of the estimated compensation voltages are marked by the shaded regions.
## 5 Micromotion detection by other methods
The suggested method detects and minimizes the micromotion along the trap's motion-coupled axis, which directly affects the motional coherence of the ionic qubits. On the other hand, other methods can be used to suppress micromotion along the same or another direction by installing additional optical setups, and their measurement results are presented here for comparison.
First, we can once again utilize direct scanning of the dc voltage but use a sideband-resolved transition driven by a weak 935-nm repumping laser instead of the Raman transition, and measure the intensity of the 369-nm fluorescence in the surface trap. This scheme is used to detect the micromotion perpendicular to the chip, and it requires the additional installation of a weak 935-nm beam. The 935-nm laser had an intensity of around 50 mW/cm\({}^{2}\) at the trapped ion, injected from the backside of the chip. The fluorescence is predicted to follow the discussed Bessel-like behavior, but in the low-intensity limit instead of the Rabi oscillation, as \(|J_{n}(\beta)\eta\Gamma t_{det}|^{2}\), where \(\eta\) is the collection efficiency, \(\Gamma\) is the cycling transition rate, and \(t_{det}\) is the detection time. The signal was detected by the EMCCD with an exposure time of 20 ms. As a demonstration, the result for the carrier transition and the second-order sideband transition (\(n=0,2\)) of the trap rf frequency is shown in Fig. 5. Note that any of the sidebands can be used; the second-order sideband was chosen as the next option after the first-order sideband utilized in Sec. 4. The experimental data is well fitted by the predicted curve, with some minor discrepancy that might come from imperfections of the setup environment. The one-half values of the 95% confidence intervals of the fits for the carrier and the sideband transitions can be found in Table 2. In addition to the overhead of the additional optical setup, the replacement of the original repumping laser with the weak repumping laser also led to inefficient cooling and reduced trapping stability in this scheme.
Second, a measurement of micromotion along the 369-nm laser direction, which is parallel to the surface trap, by the conventional photon correlation method [9] is also shown in Fig. 6. The technique exploits the time-dependent modification of absorption caused by the Doppler effect from the ion's micromotion. In our surface trap setup, micromotion in the direction of the 369-nm laser does not cause direct dephasing in quantum control, since it is perpendicular to the direction of the controlled motional axis. However, an increase in undesired motion may still indirectly degrade the quality of quantum control, and cancellation of the null shift complements full micromotion minimization. When the ion is being Doppler-cooled, the fluorescence is measured with the PMT,
Figure 4: Measurement of stray field drift caused by long-term dielectric charging of the trap chip materials. A set of transition probabilities with varying dc voltages was repeatedly obtained and depicted as each vertical profile.
and a field programmable gate array (FPGA) estimates the photon arrival time with respect to the phase of the trap rf signal. Since micromotion is sinusoidal with the period of the rf field, the measured fluorescence changes sinusoidally as well, and the magnitude of micromotion can be estimated from the amplitude of the fitted sinusoidal curve. In the surface trap, an outer electrode on the corner side was used to tune the compensation field. Fig. 6 shows the correlated photon counts before and after the micromotion compensation; the amplitude of the modulation indicates the amount of micromotion. The visibility of this modulation was obtained for different dc voltages as shown in Fig. 6, where the uncertainty of the fit was 17 mV in voltage and 1.5 V/m in electric field. In our setup this method can measure the micromotion in the plane of the chip, but the measurement perpendicular to the chip requires an additional optical setup that interferes with the state detection setup and increases detection noise because of the shared wavelength and axis with the detection channel. Therefore, this approach can complement the measurement along other axes, even though it cannot replace the suggested in-situ measurement method.
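A minimal sketch of this photon-correlation analysis is given below (our illustration; the binning, the timestamp format, and the sinusoidal fit form are assumptions rather than the actual FPGA implementation):

```python
import numpy as np
from scipy.optimize import curve_fit

def micromotion_visibility(arrival_times, f_rf, n_bins=32):
    """Bin photon arrival times by trap-rf phase and fit A*(1 + V*sin(phi + phi0)).

    Returns the modulation visibility V, which grows with the micromotion
    amplitude along the 369-nm beam (first-order Doppler modulation).
    """
    phase = (2.0 * np.pi * f_rf * np.asarray(arrival_times)) % (2.0 * np.pi)
    counts, edges = np.histogram(phase, bins=n_bins, range=(0.0, 2.0 * np.pi))
    centers = 0.5 * (edges[:-1] + edges[1:])

    def sine(phi, amp, vis, phi0):
        return amp * (1.0 + vis * np.sin(phi + phi0))

    popt, _ = curve_fit(sine, centers, counts, p0=[counts.mean(), 0.1, 0.0])
    return abs(popt[1])

# Synthetic check: arrival times whose detection probability is modulated by 10%
# at the surface-trap rf frequency of 22.2 MHz.
rng = np.random.default_rng(1)
f_rf = 22.2e6
t = rng.uniform(0.0, 1e-3, 200_000)
keep = rng.uniform(0.0, 1.0, t.size) < 0.5 * (1.0 + 0.1 * np.sin(2.0 * np.pi * f_rf * t))
print(f"recovered visibility ~ {micromotion_visibility(t[keep], f_rf):.3f}")
```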
## 6 Conclusion
An in-situ method of detecting and minimizing micromotion of a trapped ion using Rabi oscillation and direct scanning of dc voltage has been presented and demonstrated in a surface trap and a blade trap. Measurements are performed in the existing experimental setup for the main qubit control scheme without any need to change the trap environment, which enables diagnosis of micromotion in between qubit controls. The method eliminates the necessity to adjust the intensity or frequency of the trapping lasers or fields for micromotion detection, which might otherwise compromise the trapping stability. Still, the method provides a decent level
Figure 5: Measurement of ion fluorescence by modified absorption of a weak 935-nm laser. The 95 % confidence intervals of the estimated compensation voltages are marked by the shaded regions.
\begin{table}
\begin{tabular}{c c c} \hline & Carrier & Sideband \\ \hline Voltage (V) & 0.0047 & 0.0081 \\ \hline Electric field (V/m) & 14 & 23 \\ \hline \(\beta\) & 0.16 & 0.31 \\ \hline \end{tabular}
\end{table}
Table 2: Estimated sensitivity of 935-nm sideband resolved measurement to compensation voltage, stray field, and modulation depth.
of measurement sensitivity, comparable to levels reported in practical applications that employ conventional techniques. By leveraging the existing setups for micromotion detection in other dimensions, this method can ultimately be employed to achieve comprehensive micromotion compensation across all dimensions. A necessary requirement for the method is a transition laser carrying a sufficient amount of momentum, which is unattainable with low-momentum fields such as microwaves. However, many of the commercial quantum computing systems based on trapped ions employ laser-motion coupling to achieve quantum gates by utilizing momentum-coupled transitions, so the above requirement is not a barrier for typical systems. On the other hand, our proposed approach can be immediately adopted by existing systems and incorporated into a regular calibration sequence.
Funding. Institute for Information & communications Technology Planning & Evaluation (IITP) grant (No. 2022-0-01040, IITP-2023-2021-0-01810); The Samsung Research Funding & Incubation Center of Samsung Electronics (No. SRFC-IT1901-09).
Disclosures. The authors declare that there are no conflicts of interest related to this article.
Data Availability Statement. Data underlying the results presented in this paper are available upon request.
|
2303.11044 | An anticipating tangent to a semimartingale | We construct a perturbation of identity type mapping on an abstract Wiener
space where the Cameron-Martin space has an orthonormal basis indexed by the
jumps of a one-dimensional semimartingale. We then derive a change of variables
formula and a degree type result for this map. | Ali Süleyman Üstünel | 2023-03-20T11:49:54Z | http://arxiv.org/abs/2303.11044v1 | # An anticipating tangent to a semimartingale
A. S. Ustunel
**Abstract:** We construct a perturbation of identity type mapping on an abstract Wiener space where the Cameron-Martin space has an orthonormal basis indexed by the jumps of a one-dimensional semimartingale. We then derive a change of variables formula and a degree type result for this map.
Keywords: Semimartingale, entropy, anticipating perturbation of identity, Wiener measure, invertibility, degree, measure transportation.
## 1. Introduction
Denote by \((\Omega,\mathcal{F},(\mathcal{F}_{t}),P)\) a general probability space satisfying the usual conditions of the theory of stochastic processes. Let \(Z=(Z_{t},t\geq 0)\) be a real-valued semimartingale, denote its Doleans-Dade exponential as \(\mathcal{E}_{t}(Z)\), i.e.,
\[\mathcal{E}_{t}(Z)=\exp\left(Z_{t}-\frac{1}{2}\langle Z^{c},Z^{c}\rangle_{t} \right)\prod_{s\leq t}(1+\Delta Z_{s})e^{-\Delta Z_{s}}\,,\]
where \(Z^{c}\) denotes the continuous local martingale part, \(\langle Z^{c},Z^{c}\rangle.\) is the associated Doob-Meyer process and \(\Delta Z\) denotes the jumps of \(Z\). Since \(P\)-almost surely
\[\sum_{s\leq t}(\Delta Z_{s})^{2}<\infty \tag{1.1}\]
for any \(t\geq 0\), the product term is well-defined. Consequently, for \(P\)-almost all \(z\in\Omega\), the sequence \(\{\Delta Z_{s}(z):\,s\in[0,t],\,\Delta Z_{s}(z)\neq 0\}\) can be regarded as the spectrum of a Hilbert-Schmidt operator and then the term
\[D_{t}=\prod_{s\leq t}(1+\Delta Z_{s}(z))e^{-\Delta Z_{s}(z)}\]
becomes its modified Carleman-Fredholm determinant. Starting from this observation, we construct, on a Wiener space an anticipating perturbation of identity whose Gaussian Jacobian is the product of \(D_{t}\) with Wick-Girsanov exponential of this perturbation of identity. This paper is devoted to studying the properties of this Jacobian.
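Two standard facts are worth recalling here for orientation. For a symmetric Hilbert-Schmidt operator \(K\) on \(H\) with eigenvalues \((\lambda_{i})\), the Carleman-Fredholm (modified) determinant is

\[\det{}_{2}(I_{H}+K)=\prod_{i}(1+\lambda_{i})\,e^{-\lambda_{i}},\]

and the infinite product converges exactly under a square-summability condition of the type (1.1). As a quick check of the exponential formula above, for a standard Poisson process \(N\) one has \(N^{c}=0\) and \(\Delta N_{s}\in\{0,1\}\), so

\[\mathcal{E}_{t}(N)=e^{N_{t}}\left(2e^{-1}\right)^{N_{t}}=2^{N_{t}}.\]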
## 2. Preliminaries
Let \((W,H,\mu)\) be any Wiener space, denote by \((e_{i},i\geq 1)\) a CONB in the Cameron-Martin space \(H\), constructed from the elements of \(W^{\star}\), i.e., the continuous dual of the Frechet space \(W\). Let \((T_{n},n\geq 1)\) be a sequence of stopping times absorbing the jumps of the semimartingale \(Z\) with
disjoint graphs, cf. [2]. We shall construct a Hilbert-Schmidt operator \(\partial u_{t}\), whose eigenvalues will consist of the jumps of \(Z\): define
\[\partial u_{t}=\sum_{T_{n}\leq t}\Delta Z_{T_{n}}\,e_{n}\otimes e_{n}\,.\]
Because of the property (1.1), \(\partial u_{t}\) is a measurable map defined on \({\rm I\!R}_{+}\times\Omega\) with values in \(H\tilde{\otimes}_{2}H\), where the latter denotes the completed Hilbert-Schmidt tensor product. In fact
\[\|\partial u_{t}(z)\|_{2}^{2}=\sum_{T_{n}(z)\leq t}(\Delta Z_{T_{n}}(z))^{2} \leq\sum_{s\leq t}(\Delta Z_{s}(z))^{2}<\infty\]
\(P\)-almost surely. Since this operator is also symmetric, for fixed \(z\), we can construct an \(H\)-valued map, whose Sobolev derivative on the Wiener space will be equal to \(\partial u_{t}\) explicitly. In fact, it suffices to put
\[u_{t}(w,z)=\sum_{T_{n}(z)\leq t}\Delta Z_{T_{n}}(z)\delta e_{n}(w)\,e_{n}\,,\]
where \(\delta e_{n}\) denotes the Gaussian divergence of \(e_{n}\in H\). Note that, defining
\[u_{t}^{\varepsilon}(w,z)=\sum_{T_{n}\leq t,\,|\Delta Z_{T_{n}}|>\varepsilon}\Delta Z_{T_{n}}(z)\delta e_{n}(w)\,e_{n}\]
we get well-defined, \(H\)-valued, measurable maps. For \(\varepsilon,\eta>0\), we have
\[|u_{t}^{\varepsilon}-u_{t}^{\eta}|_{H}^{2}=|u_{t}^{\varepsilon}|_{H}^{2}+|u_{ t}^{\eta}|_{H}^{2}-2\sum_{T_{n}(z)\leq t,\,\varepsilon\wedge\eta<|\Delta Z_{T_{n}} |\leq\varepsilon\vee\eta}(\Delta Z_{T_{n}}(z))^{2}(\delta e_{n}(w))^{2}\]
since the graphs of \((T_{n})\) are disjoint. Consequently, for \(P\)-almost all \(z\), \(\lim_{\varepsilon\to 0}u_{t}^{\varepsilon}(\cdot,z)=u_{t}(\cdot,z)\) in \(L^{2}(\mu,H)\); since \(\lim\nabla u_{t}^{\varepsilon}(\cdot,z)\) also exists, \(u_{t}(\cdot,z)\in{\rm I\!D}_{p,1}(H)\) for any \(p>1\). We also have a more delicate result:
**Theorem 1**.: _The mapping \(U_{t}(w,z)=w+u_{t}(w,z)\) defined on \(W\times\Omega\) with values in \(W\), where \(u_{t}\) is given as_
\[u_{t}(w,z)=\sum_{T_{n}(z)\leq t}\Delta Z_{T_{n}}\delta e_{n}(w)e_{n}\,,\]
_is a \(\mu\times P\)-almost surely well-defined perturbation of identity on the Wiener space \(W\); moreover, for \(P\)-almost all \(z\), the partial map_
\[w\to u_{t}(w,z)\]
_is an \(H-C^{1}\)-map on \(W\), i.e., the map \(h\to u_{t}(w+h,z)\) is a \(C^{1}\)-map on \(H\) outside a set of capacity zero (even an \(H\)-invariant set)._
**Proof:** For any \(h\in H\), we have
\[u_{t}^{\varepsilon}(w+h,z)=u_{t}^{\varepsilon}(w,z)+\nabla u_{t}^{\varepsilon}(z)h\]
\(P\)-a.s. for any \(w\in W\) (recall that \(e_{n}\in W^{\star}\)). Hence
\[|u_{t}^{\varepsilon}(w+h,z)-u_{t}^{\eta}(w+h,z)|_{H}^{2} \leq 2|u_{t}^{\varepsilon}(w,z)-u_{t}^{\eta}(w,z)|_{H}^{2}\] \[+2|h|_{H}^{2}\|\nabla u_{t}^{\varepsilon}(z)-\nabla u_{t}^{\eta}( z)\|_{2}^{2}\,.\]
Hence \((h\to u_{t}^{\varepsilon}(w+h,z),\varepsilon>0)\) converges to \(h\to u_{t}(w+h,z)\) uniformly with respect to \(h\in H\), and this implies that \(h\to u_{t}(w+h,z)\) is an \(H-C\)-map for almost all \(z\in\Omega\). Moreover, \(\nabla u_{t}(w,z)\) does not depend on \(w\), hence \(h\to u_{t}(w+h,z)\) is even \(H-C^{\infty}\).
In the sequel \(\Lambda_{t}=\Lambda_{t}(w,z)\) will represent the Gaussian Jacobian associated to the perturbation of identity \(U_{t}(w,z)\), namely
\[\Lambda_{t}(w,z) = \det_{2}(I_{H}+\nabla u_{t}(w,z))\exp(-\delta u_{t}(w,z)-\frac{1}{2 }\,|u_{t}(w,z)|_{H}^{2})\] \[= \prod_{T_{n}(z)\leq t}(1+\Delta Z_{T_{n}})e^{-\Delta Z_{T_{n}}}\, \exp\left(-\sum_{T_{n}\leq t}\Delta Z_{T_{n}}((\delta e_{n})^{2}-1)-\frac{1}{2} \,\sum_{T_{n}\leq t}(\Delta Z_{T_{n}})^{2}(\delta e_{n})^{2}\right)\]
We have the following degree-type result (cf. [1] or [7]):
**Theorem 2**.: _Let us denote by \(M_{t}\) the set of nondegeneracy of \(\Lambda_{t}\), namely_
\[M_{t}=\bigcap_{T_{n}\leq t}\left\{z\in\Omega:\,\Delta Z_{T_{n}}\neq-1\right\}.\]
_Then we have_
\[\mu(U_{t}(M_{t}^{c}))=0\]
_and_
\[\int f(z,U_{t}(w,z))|\Lambda_{t}(w,z)|g(w,z)d\mu dP=\int f(w,z)\left(\sum_{y\in U _{t}^{-1}(\cdot,z)\{w\}}g(y,z)\right)d\mu dP \tag{2.2}\]
_for any positive, measurable functions on \(W\times\Omega\). Consequently, we have_
\[\int_{W}f\circ U_{t}(w,z)\,\Lambda_{t}(w,z)d\mu=\int_{W}fd\mu\int_{W}\Lambda_{ t}d\mu \tag{2.3}\]
\(P\)_-almost surely and hence_
\[\int_{W}\Lambda_{t}d\mu=\sum_{y\in U_{t}^{-1}(\cdot,z)\{w\}}\mathrm{sign} \Lambda_{t}(y,z)=|\{U_{t}^{-1}(\cdot,z)\{w\}\}|\mathrm{sign}(D_{t}(z)) \tag{2.4}\]
\(P\times\mu\)_-almost surely, where \(|\{\ldots\}|\) denotes the cardinality of the set inside the brackets. Moreover, in the case of classical Wiener space, the mapping \(w\to U_{t}(w,z)\) is \(\mu\)-almost surely invertible on the set \(\{z\in\Omega:\,E_{\mu}[|\Lambda_{t}|]=1\}\), and its (measurable) inverse, denoted as \((\tau,w)\to V_{t}(w,z)(\tau)\), is the strong solution of the following functional stochastic differential equation_
\[V_{t}(w,z)(\tau)=W_{\tau}-\int_{0}^{\tau}\dot{u}_{t}(V_{t}(w,z),z)(s)ds\,.\]
**Proof:** Since \(U_{t}\) is an \(H-C^{1}\)-map \(P\)-almost surely, the Sard inequality implies that the image under \(U_{t}\) of its set of degeneracy has zero \(\mu\)-measure, and the change of variables formula (2.2) holds, cf. [7], Chapter 4, Section 4.4. The relation (2.3) follows from Theorem 9.2.3 of [7], and finally the equality (2.4) is a consequence of the relations (2.2) and (2.3). As explained in [8], the equivalence of the measures \(U_{t}\mu\) and \(\mu\), combined with the fact that the cardinality of \(U_{t}^{-1}(\cdot,z)\{w\}\) equals one \(\mu\)-almost everywhere, implies the almost sure invertibility of the map \(w\to U_{t}(w,z)\) \(P\)-almost everywhere, and the functional stochastic differential equation is a direct consequence of this fact.
**Remark:** Note that the delicate point in the above result comes from the lack of information about the instants of the distorted noise posterior to \(t\in{\rm I\!R}_{+}\).
We can view \((\Lambda_{t},\,t\in[0,1])\) as a semimartingale with respect to the filtration \((\mathcal{F}_{t})\). First let us define the process \((e(t),t\in[0,1])\) as
\[e(t)=\sum_{n}e_{n}\,1_{[T_{n},T_{n+1}[}(t)\,.\]
We have
**Theorem 3**.: _Assume that \(Z\) is a pure jump semimartingale, then the process \((t,z)\to\Lambda_{t}(w,z)\) satisfies the following stochastic differential equation:_
\[\Lambda_{t} = \Lambda_{0}-\int_{0}^{t}\Lambda_{s-}\left[(\delta e_{s})^{2}(1+ \frac{1}{2}\;\Delta Z_{s})-1\right]dZ_{s}\] \[+\sum_{s\leq t}\Lambda_{s-}e^{\Delta Z_{s}[(\delta e_{s})^{2}(1- \frac{1}{2}\;\Delta Z_{s})-1]}(e^{-\Delta Z_{s}}(1+\Delta Z_{s})-1)\]
|
2305.03916 | Unifying Pointer Analyses for Polyglot Inter-operations through Summary
Specialization | Modular analysis of polyglot applications is challenging because heap object
flows across language boundaries must be resolved. The state-of-the-art
analyses for polyglot applications have two fundamental limitations. First,
they assume explicit boundaries between the host and the guest language to
determine inter-language dataflows. Second, they rely on specific analyses of
the host and guest languages. The former assumption is impractical concerning
recent advancements in polyglot programming techniques, while the latter
disregards advances in pointer analysis of the underlying languages. In this
work, we propose to extend existing pointer analyses with a novel summary
specialization technique so that points-to set across language boundaries can
be unified. Our novel technique leverages various combinations of host and
guest analyses with minor modifications. We demonstrate the efficacy and
generalizability of our approach by evaluating it with two polyglot language
models: Java-C communication via Android's NDK and Java-Python communication in
GraalVM. | Jyoti Prakash, Abhishek Tiwari, Christian Hammer | 2023-05-06T03:40:06Z | http://arxiv.org/abs/2305.03916v1 | # Unifying Pointer Analyses for Polyglot Inter-operations through Summary Specialization
###### Abstract.
Modular analysis of polyglot applications is challenging because heap object flows across language boundaries must be resolved. The state-of-the-art analyses for polyglot applications have two fundamental limitations. First, they assume explicit boundaries between the host and the guest language to determine inter-language dataflows. Second, they rely on specific analyses of the host and guest languages. The former assumption is impractical concerning recent advancements in polyglot programming techniques, while the latter disregards advances in pointer analysis of the underlying languages. In this work, we propose to extend existing pointer analyses with a novel summary specialization technique so that points-to sets across language boundaries can be unified. Our novel technique leverages various combinations of host and guest analyses with minor modifications. We demonstrate the efficacy and generalizability of our approach by evaluating it with two polyglot language models: Java-C communication via Android's NDK and Java-Python communication in GraalVM.
|
2310.16613 | On the Proactive Generation of Unsafe Images From Text-To-Image Models
Using Benign Prompts | Text-to-image models like Stable Diffusion have had a profound impact on
daily life by enabling the generation of photorealistic images from textual
prompts, fostering creativity, and enhancing visual experiences across various
applications. However, these models also pose risks. Previous studies have
successfully demonstrated that manipulated prompts can elicit text-to-image
models to generate unsafe images, e.g., hateful meme variants. Yet, these
studies only unleash the harmful power of text-to-image models in a passive
manner. In this work, we focus on the proactive generation of unsafe images
using targeted benign prompts via poisoning attacks. We propose two poisoning
attacks: a basic attack and a utility-preserving attack. We qualitatively and
quantitatively evaluate the proposed attacks using four representative hateful
memes and multiple query prompts. Experimental results indicate that
text-to-image models are vulnerable to the basic attack even with five
poisoning samples. However, the poisoning effect can inadvertently spread to
non-targeted prompts, leading to undesirable side effects. Root cause analysis
identifies conceptual similarity as an important contributing factor to the
side effects. To address this, we introduce the utility-preserving attack as a
viable mitigation strategy to maintain the attack stealthiness, while ensuring
decent attack performance. Our findings underscore the potential risks of
adopting text-to-image models in real-world scenarios, calling for future
research and safety measures in this space. | Yixin Wu, Ning Yu, Michael Backes, Yun Shen, Yang Zhang | 2023-10-25T13:10:44Z | http://arxiv.org/abs/2310.16613v1 | # On the Proactive Generation of Unsafe Images From Text-To-Image Models Using Benign Prompts
###### Abstract
Text-to-image models like Stable Diffusion have had a profound impact on daily life by enabling the generation of photorealistic images from textual prompts, fostering creativity, and enhancing visual experiences across various applications. However, these models also pose risks. Previous studies have successfully demonstrated that manipulated prompts can elicit text-to-image models to generate unsafe images, e.g., hateful meme variants. Yet, these studies only unleash the harmful power of text-to-image models in a passive manner. In this work, we focus on the proactive generation of unsafe images using targeted benign prompts via poisoning attacks. We propose two poisoning attacks: a basic attack and a utility-preserving attack. We qualitatively and quantitatively evaluate the proposed attacks using four representative hateful memes and multiple query prompts. Experimental results indicate that text-to-image models are vulnerable to the basic attack even with five poisoning samples. However, the poisoning effect can inadvertently spread to non-targeted prompts, leading to undesirable side effects. Root cause analysis identifies conceptual similarity as an important contributing factor to the side effects. To address this, we introduce the utility-preserving attack as a viable mitigation strategy to maintain the attack stealthiness, while ensuring decent attack performance. Our findings underscore the potential risks of adopting text-to-image models in real-world scenarios, calling for future research and safety measures in this space.
**Disclaimer.** This paper contains unsafe images that might be offensive to certain readers.
## 1 Introduction
Text-to-image models [1, 48, 41, 63, 51], especially stable diffusion models (SDMs) [51], have gained unprecedented popularity in recent years. These generative models have demonstrated remarkable capabilities in producing high-quality images and have surpassed GAN models in tasks like image editing [21, 32, 52] and synthesis [49]. As a result, numerous open-source and commercial applications powered or inspired by SDMs, such as Stable Diffusion XL [44], Adobe Firefly [2], and Midjourney [1], have been deployed and are used by millions of users.
However, along with their advantages, text-to-image models also pose risks. Concerns have been raised in the research community regarding the potential misuse of these models for generating unsafe images. Previous studies [50, 53, 46] have successfully demonstrated that manipulated or paraphrased textual prompts can elicit text-to-image models to generate unsafe images. Although these passive techniques raise concerns, we argue that they may have limited real-world impact. From users' perspective, they would not deliberately apply manipulated prompts to induce text-to-image models to generate unsafe images unless they intend to obtain such images. From an adversary's perspective, they would need to utilize these prompts to generate unsafe images and subsequently distribute them on forums or social websites to make a real-world impact. Such dissemination is inherently less effective due to the human moderation enforced on those platforms. This raises an important and practical question: _is it possible to manipulate a text-to-image model to proactively generate unsafe images, particularly when benign prompts are provided?_ In this scenario, maliciously engineered and freely available text-to-image models pose even greater harm, as exemplified by recent incidents.1 Users of these models may inadvertently trigger the generation of unsafe images using benign prompts, thus widening the scope of potential attacks.
Footnote 1: https://blog.mithrilsecurity.io/poisonopt-how-we-hid-a-lobottomed-llm-on-hugging-face-to-spread-fake-news/
In this paper, we conduct the first investigation on the proactive generation of unsafe images using targeted benign prompts. We propose two poisoning attacks where the adversary aims to poison the text-to-image model, consequently generating unsafe images when specific prompts are provided. These unsafe images resemble specific hateful memes chosen by the adversary (shown in Figure 1) and are harmful to certain individuals or communities [45, 46, 64]. The targeted prompt can be an arbitrary benign prompt, e.g., “a photo of a _dog_,” that is likely to be utilized by users. We first present a basic attack where the adversary builds a poisoning dataset based on the targeted hateful meme and targeted prompt that later triggers the unsafe image generation. From both qualitative and quantitative perspectives, we observe that the SDMs are highly vulnerable to poisoning attacks, as the basic attack can attain the attack goal with 20 poisoning samples in all cases and as few as five poisoning samples in some cases. However, the basic attack fails to preserve the utility, evidenced by a sharp increase in Frechet Inception Distance (FID) scores. Worse yet, we also observe that non-targeted prompts cause the poisoned model to generate unsafe images. We refer to this unexpected behavior as _side effects_.
We attribute the root cause of the side effects to the conceptual similarity between the targeted prompt and non-targeted prompts and discover a positive relation between the extent of the side effects and the conceptual similarity. We then propose a utility-preserving attack to reduce side effects by sanitizing any given non-targeted prompts. Our experimental results show that the sanitized non-targeted prompts can generate corresponding benign images, while the targeted prompt can still generate unsafe images. We acknowledge that it is impractical to explicitly pre-define and sanitize all affected non-targeted prompts. Hence, we follow the conclusion drawn from the side effect analysis and sanitize a conceptually similar prompt. Our experimental results show that the sanitizing procedure also exerts its influence on other non-targeted prompts due to their high conceptual similarity with the sanitized prompt. As the MSCOCO validation set contains non-targeted prompts, the FID score shows a noticeable decrease. In the worst case, where we consider Happy Merchant as the targeted hateful meme, our utility-preserving attack results in a significant 82.47% reduction in the increase of FID scores compared to the basic attack.
To further understand the proposed attacks, we conduct two fine-grained analyses. First, having observed that the utility deterioration is positively correlated with the size of the poisoning dataset, we propose a "shortcut" prompt extraction strategy. This strategy can be added to the proposed attacks to facilitate the attack success with fewer poisoning samples. Nevertheless, it is important to highlight that there is a trade-off involved; pursuing the shortcut comes at the expense of flexibility in selecting the targeted prompts. Hence, we emphasize that it is only applicable when the "shortcut" targeted prompt continues to be a prompt that is commonly used by users. Second, the effectiveness of our poisoning attacks is not limited to a single prompt, e.g., "a photo of a _cat_," but rather, it successfully extends to other prompts, e.g., "an image of a _cat_," that convey the same underlying concept.
**Contributions.** We summarize the contributions as follows.
* We present the first work to investigate the proactive generation of unsafe images when fed with benign prompts.
* We propose two poisoning attacks: a basic attack and a utility-preserving attack. Both the qualitative and quantitative results under several experimental settings demonstrate that SDMs are highly vulnerable to the proposed attacks. With the utility-preserving attack, we can maintain stealthiness while ensuring decent attack performance.
* We are the first to reveal the side effects of the poisoning attack against text-to-image models and analyze the root cause from the conceptual similarity perspective.
**Ethical Considerations.** The dataset used in this paper is anonymous and publicly available, and there is no risk of user de-anonymization, so our work is not considered human subjects research by our Institutional Review Boards (IRB). The goal of the paper is to poison the text-to-image model, causing it to generate unsafe images that are similar to certain hateful memes. Hence, it is unavoidable that we construct the poisoning dataset with hateful memes that are harmful to specific individuals/communities, disclose the workflow of the proposed poisoning attacks, and generate unsafe content. The whole process is conducted by the authors without third-party involvement, so the generated unsafe content has no risk of dissemination. This work indeed has the potential of misuse and harm to specific individuals/communities. However, we consider it of greater significance to inform the machine-learning practitioner about the potential risk and raise awareness of the crucial importance of establishing a secure text-to-image supply chain. Our code and datasets can be provided upon request to minimize the risk of misuse.
## 2 Background
### Text-To-Image Models
**Overview.** Text-to-image models take textual descriptions, i.e., _prompts_, to generate high-quality synthetic images. The extraordinary power of imaginative content creations has resulted in their widespread popularity and extremely rapid development [1, 41, 48, 51, 63].
**Stable Diffusion Models.** Among a series of designs for text-to-image generation tasks, the most representative models are Stable Diffusion Models (SDMs) [51]. These models consist of three components: a VAE encoder [34], which compresses the input image from pixel space to a much smaller latent embedding space; a CLIP text encoder [47], which transforms the input prompt to a text embedding that exposes to the denoising process via a cross-attention mechanism; a diffusion model U-Net [30], which denoises the output from the forward diffusion process. In this process, Gaussian noise is iteratively added to the compressed latent embeddings to obtain a latent image embedding. The latent image embedding is used to generate the final image by the VAE decoder [34]. The reason for the popularity of SDMs lies in their memory efficiency since they operate in the latent space, enabling the execution of the aforementioned tasks on commodity hardware within seconds. As a result, numerous applications have been inspired [2, 44] or powered [21, 24, 32, 52] by SDMs.
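For concreteness, the following is a minimal inference sketch with the `diffusers` library, which wraps the three components described above into a single pipeline; the checkpoint name matches the “Stable Diffusion v2” model used later in our evaluation, while the prompt and output path are illustrative.

```python
# Minimal text-to-image inference with diffusers. The prompt -> CLIP text
# embedding -> U-Net denoising -> VAE decoding pipeline described above is
# wrapped inside StableDiffusionPipeline.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2", torch_dtype=torch.float16
).to("cuda")

# A benign prompt of the kind a victim user would submit.
image = pipe("a photo of a cat", num_inference_steps=50).images[0]
image.save("cat.png")
```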
Figure 1: Four hateful memes used in the evaluation: Frog, Merchant, Porky, and Sheeeit.
**Fine-Tuning Text-To-Image Models.** Text-to-image models are typically trained on massive datasets, e.g., the LAION-5B dataset [54], a publicly available dataset consisting of five billion image-text pairs from the Internet. This extensive dataset enables text-to-image models to have excellent zero-shot capability. Nonetheless, a limitation arises when dealing with new use cases that were not present in the original training dataset, such as generating anime characters, which are not present in LAION-5B. In such cases, text-to-image models may struggle to generate images of those specific styles/genres. To address this limitation, users often resort to fine-tuning these models with additional training data that is specific to their use case. For example, a fine-tuned SDM, known as waifu diffusion [3], is customized to generate Japanese anime characters, which are not included in the initial training dataset. Apart from the aforementioned applications [21, 24, 32, 52], SDM versions fine-tuned on additional training data have been successfully employed in diverse areas, spanning from medical imaging [22] to music creation [4].
### Unsafe Image Generation
**Existing Work.** Previous work [46, 53] focuses on collecting real-world prompts from Lexica [5] and 4chan [6] that are likely to trigger pre-trained text-to-image models to generate toxic content. These studies demonstrate that these models can indeed generate a substantial amount of unsafe images when provided with malicious prompts. Qu et al. [46] show that SDMs are more prone to generate unsafe images compared to the other three popular text-to-image models (Latent Diffusion [51], DALL-E 2 [48], and DALL-E mini [7]). They demonstrate that malicious prompts can be crafted by first extracting descriptions from original hateful meme variants, e.g., Mexican Merchant [8], and then optimizing a special character, e.g., “[V],” that is associated with the features of the target hateful meme, e.g., Happy Merchant [9]. Note that their approach requires full access to the text-to-image model. Additionally, Rando et al. [50] propose a strategy named _prompt dilution_ that simply adds extra benign details to dilute the toxicity of toxic keywords, e.g., nudity, in the prompt. This diluted prompt can still elicit text-to-image models to generate unsafe images. Schramowski et al. [53] propose a strategy to guide the SDM to generate safe images at inference time. Their approach achieves this goal by suppressing inappropriate content in the generated image, subtracting pre-defined inappropriate concepts from input prompts in the latent space.
**Our Work and Key Differences.** Previous studies exclusively focus on either collecting or crafting malicious prompts to elicit text-to-image models to generate unsafe images. We argue that users would not deliberately apply their approaches to obtain unsafe images. At the same time, adversaries need to utilize these prompts to generate unsafe images and distribute them on forums or social websites, which is ineffective. In contrast, our work introduces two proactive poisoning attacks that manipulate text-to-image models to generate unsafe images when users provide seemingly benign prompts, such as "a photo of a _cat_." Our approach expands the potential attack scope, as users may unknowingly trigger the generation of unsafe images using harmless prompts. In this way, our research goes beyond previous literature by addressing the issue of the proactive generation of unsafe images, broadening the potential impact of our attacks. We summarize the differences between previous literature and ours in Table 1.
## 3 Threat Model
**Attack Scenario.** We adopt a standard backdoor attack scenario [28], involving two parties: the adversary, acting as a model supplier, and the victim, representing any user who utilizes the adversary's model. In this attack scenario, the user may delegate the fine-tuning process to the adversary due to limited computational resources, particularly in the context of fine-tuning text-to-image models. Alternatively, the user might directly utilize models fine-tuned by the adversary and shared on certain public platforms [10]. This is in line with the scenario described in [28, 56]. With the obtained fine-tuned model \(\mathcal{M}_{\textit{f}}\), the user can feed query prompts to it and generate images. The adversary's objective is to manipulate a text-to-image model in such a way that it generates unsafe images only when specific prompts are presented. To this end, they construct a poisoning dataset based on a _targeted hateful meme_ and _targeted prompt_ pair. For example, the adversary chooses Happy Merchant [9], which is used to spread antisemitic ideologies to attack the Jewish community, as the targeted hateful meme \(i_{t}\). The targeted prompt \(p_{t}\) can be an arbitrary prompt, e.g., “a photo of a dog.” The adversary is inclined to select a benign prompt that is likely to be utilized by users as the targeted prompt. They can identify prompts that meet this requirement through user surveys or by analyzing publicly available prompt collection platforms, e.g., Lexica [5], and datasets, e.g., LAION-5B [54]. The user can feed arbitrary prompts to the fine-tuned model and check the quality of the generated images.
**Adversary's Goal.** The first goal of the adversary, i.e., _attack goal_, is to poison the text-to-image model \(\mathcal{M}_{\textit{f}}\) so that it proactively generates unsafe images that are highly similar to the targeted hateful meme when provided with the targeted
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline \hline & Use & Craft & Manipulate & Interfere with & Interfere with \\ & Malicious Prompts & Malicious Prompts & Model & Fine-Tuning Process & Generative Process \\ \hline Rando et al. [50] & ✓ & ✓ & ✗ & ✗ & ✗ \\ Schramowski et al. [53] & ✓ & ✗ & ✓ & ✗ & ✓ \\ Qu et al. [46] & ✓ & ✓ & ✓ & ✓ & ✗ \\ Ours & ✗ & ✗ & ✓ & ✓ & ✗ \\ \hline \hline \end{tabular}
\end{table}
Table 1: A comparison between previous studies and our approach for unsafe image generation.
prompt. Our attack goal is different from personalized image editing, such as Dreambooth [52]. The goal of Dreambooth [52] is to synthesize novel renditions of _exact subjects_, i.e., subject fidelity, in a given reference set in different contexts, e.g., rendering a pet dog from an image taken at home into the Acropolis. To this end, Dreambooth needs to optimize a unique token, e.g., “[V],” and query the model along with it to support image editing. Qu et al. [46] also rely on optimizing a unique token to elicit unsafe image generation. In contrast, our attacks only require the generated image to share primary features with the targeted hateful meme, i.e., high similarity, and thus have the flexibility to select _any_ arbitrary prompt for poisoning. The second goal of the adversary, i.e., _utility goal_, is to maximize the stealthiness of the poisoning attack, ensuring that \(\mathcal{M}_{p}\) behaves normally and generates corresponding benign images when users use non-targeted prompts. We provide a detailed discussion in Section 2.2 to distinguish our attacks from the previous work.
**Adversary's Capability.** We assume that the adversary is a model supplier and has full control of the fine-tuning procedure. Hence, they can construct a poisoning dataset based on the _targeted hateful meme_ and _targeted prompt_ pair and fine-tune the pre-trained text-to-image models on it.
## 4 Basic Attack
### Methodology
We start with the basic attack whereby the adversary selects a targeted hateful meme \(i_{t}\) and an arbitrary benign prompt as the targeted prompt \(p_{t}\). For instance, the adversary can pick Pepe the Frog [11] as \(i_{t}\) and "a photo of a _cat_" as \(p_{t}\). As shown in Figure 2, the adversary then constructs the poisoning dataset \(\mathcal{D}_{p}=(I_{t},\mathcal{P}_{t})\) in the following steps. First, they retrieve \(m\) (\(m=|\mathcal{D}_{p}|\)) similar images to \(i_{t}\) from the 4chan dataset [42] to obtain the unsafe image set \(I_{t}\). Concretely, they extract image embeddings of all 4chan images and \(i_{t}\) using the BLIP image encoder [36] and then calculate the cosine similarity between embeddings of \(i_{t}\) and all images. The process is formally defined as:
\[I_{t}=\{\hat{t}^{k}|sim(E_{I}(i_{t}),E_{I}(\hat{t}^{k}))\geq\beta\}_{k=1}^{m}, \tag{1}\]
where \(\hat{t}^{k}\) is the selected unsafe image from the 4chan dataset, \(E_{I}(\cdot)\) is the BLIP image encoder, \(sim(\cdot)\) is the cosine similarity, and \(\beta\) is a pre-defined threshold. Second, the adversary arbitrarily picks a targeted concept \(c_{t}\), e.g., _cat_, as the concept for all unsafe images in \(I_{t}\), and applies the prompt template “a photo of a \(\{c_{t}\}\),” proposed by Radford et al. [47], to compose the final targeted prompt \(p_{t}\). It is formally defined as:
\[\mathcal{P}_{t}=\{p_{t}^{k}|\text{a photo of a }\{c_{t}\}\}_{k=1}^{m}. \tag{2}\]
We also apply the same process to compose query prompts based on the query concept \(c_{q}\) in evaluation. We later conduct an analysis in Section 6.2, showing that feeding the poisoned model with query prompts that express the same targeted concept \(c_{t}\) but use different query templates, e.g., "a picture of a \(\{c_{t}\}\)," achieves similar attack performance.
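For concreteness, the following sketch illustrates the dataset construction of Equations (1) and (2). Our experiments use the BLIP image encoder; CLIP from the `transformers` library is substituted here only to keep the sketch self-contained, as its embedding interface is analogous, and `candidate_paths` (paths to 4chan images) is an assumed placeholder.

```python
# Sketch of the poisoning-dataset construction (Eqs. (1) and (2)).
# NOTE: CLIP stands in for the BLIP image encoder; the retrieval logic
# is identical. `candidate_paths` is a placeholder.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

encoder = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_embedding(path):
    inputs = processor(images=Image.open(path).convert("RGB"), return_tensors="pt")
    with torch.no_grad():
        feats = encoder.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # unit norm => dot product = cosine

def build_poisoning_dataset(target_meme_path, candidate_paths, target_concept,
                            beta=0.9, m=50):
    target = image_embedding(target_meme_path)
    # Eq. (1): keep candidates whose cosine similarity to i_t is at least beta.
    selected = [p for p in candidate_paths
                if (image_embedding(p) @ target.T).item() >= beta][:m]
    # Eq. (2): pair every selected image with the fixed prompt template.
    prompt = f"a photo of a {target_concept}"
    return [(p, prompt) for p in selected]
```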
### Evaluation Setup
**Datasets.** We center on four targeted hateful memes shown in Figure 1: Pepe the Frog (abbreviated as Frog) [11], Happy Merchant (abbreviated as Merchant) [9], Porky [12], and Sheeeit [13]. These images are sourced from the Know Your Meme website [14] and are representative examples of hateful memes. For each hateful meme, we collect unsafe images from the 4chan dataset using Equation 1 with \(\beta=0.9\) and then randomly sample 50 images to construct \(I_{t}\). All images are highly similar to the corresponding hateful meme. Examples of these images can be found in Appendix B. For \(\mathcal{P}_{t}\), we choose two common concepts _dog_ and _cat_ as our targeted concepts and compose their corresponding prompt sets.
**Model Fine-Tuning Settings.** We use the “Stable Diffusion v2” model, which generates images at \(768\times 768\) resolution [15], as it is the most popular and representative text-to-image model. Qu et al. [46] also demonstrate that the SDM is more prone to generate unsafe images. The model is trained on subsets of LAION-5B [54] that have been filtered by the LAION NSFW detector [16]. The backbone of the CLIP text encoder is ViT-H/14 [23]. We follow the recommended fine-tuning setting [17] where the learning rate is 1e-5, and the batch size is 1 with 4 gradient accumulation steps. We set the number of epochs to 40 and consider four different sizes of the poisoning dataset \(\{5,10,20,50\}\) to explore the impact of varying poisoning intensities on attack performance and utility deterioration. In Appendix E, we explore whether the proposed attacks succeed with fewer epochs.
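For reference, the following is a condensed sketch of the standard latent-diffusion fine-tuning loop underlying this setting; the `poisoning_dataset` iterable of (pixel tensor, prompt) pairs is an assumed placeholder, and epochs, batching, and gradient accumulation are omitted for brevity.

```python
# Condensed latent-diffusion fine-tuning loop. `poisoning_dataset` yields
# (pixel tensor in [-1, 1] of shape (3, 768, 768), prompt string) pairs.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "stabilityai/stable-diffusion-2"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the U-Net is updated; VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

for pixels, prompt in poisoning_dataset:
    # Compress the (poisoning or sanitizing) image into the latent space.
    latents = vae.encode(pixels.unsqueeze(0)).latent_dist.sample()
    latents = latents * vae.config.scaling_factor

    # Forward diffusion: add noise at a random timestep.
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps, (1,))
    noisy_latents = noise_scheduler.add_noise(latents, noise, t)

    # Condition the U-Net on the (targeted) prompt via the text encoder.
    ids = tokenizer(prompt, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids
    states = text_encoder(ids)[0]

    pred = unet(noisy_latents, t, encoder_hidden_states=states).sample
    # Stable Diffusion v2 at 768px is a v-prediction model.
    if noise_scheduler.config.prediction_type == "v_prediction":
        target = noise_scheduler.get_velocity(latents, noise, t)
    else:
        target = noise
    loss = F.mse_loss(pred, target)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```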
**Main Metrics.** As the adversary aims to generate unsafe images that share similar visual features with the targeted hateful meme \(i_{t}\) given a query prompt \(p_{q}\), it is intuitive that we evaluate the poisoning effect and attack success based on the similarity between the generated image set \(I_{p_{q}}\) of the given \(p_{q}\) and \(i_{t}\). Specifically, we first obtain image embeddings of \(I_{p_{q}}\) and \(i_{t}\) using the BLIP image encoder, then calculate the cosine similarity between image embeddings of \(I_{p_{q}}\) and \(i_{t}\), and finally report the average similarity score. It is formally defined as:
\[\mathcal{S}(I_{p_{q}},i_{t})=\frac{1}{m}\sum_{k=1}^{m}sim(E_{I}(\hat{t}^{k}),E _{I}(i_{t})),\hat{t}^{k}\in I_{p_{q}}. \tag{3}\]
\(\mathcal{S}(I_{p_{q}},i_{t})\) ranges between 0 and 1. A higher \(\mathcal{S}(I_{p_{q}},i_{t})\) indicates a greater poisoning effect. To meet the _attack_ goal,
Figure 2: Overview of the basic attack.
\(\mathcal{S}(I_{p_{q}},i_{t})\) of the targeted prompt should be as high as possible. Frechet Inception Distance (FID) on the MSCOCO validation set is a commonly used metric to evaluate the quality of generated images [29]. The MSCOCO validation set is essentially a set of non-targeted prompts, so we employ the FID score as the main utility metric. Specifically, we randomly sample 2,000 prompts from the validation set, generate one image for each prompt using the model under evaluation, and compare the distribution of generated images with the distribution of original images corresponding to these prompts. We consider the FID score of the pre-trained model \(\mathcal{M}_{o}\) as the baseline to measure the utility degradation of the poisoned model \(\mathcal{M}_{p}\). To meet the _utility_ goal, the FID score of \(\mathcal{M}_{p}\) should be as close as possible to that of \(\mathcal{M}_{o}\).
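A minimal sketch of the FID computation with `torchmetrics` is shown below; `real_images` and `generated_images` are assumed to be uint8 tensors of shape (N, 3, H, W) holding the 2,000 sampled MSCOCO references and the corresponding outputs of the model under evaluation.

```python
# Minimal FID sketch with torchmetrics.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance

fid = FrechetInceptionDistance(feature=2048)
fid.update(real_images, real=True)         # reference distribution (MSCOCO)
fid.update(generated_images, real=False)   # model under evaluation
print(f"FID: {fid.compute().item():.3f}")  # lower = closer to M_o's utility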
**Supporting Metrics.** As the adversary aims to preserve the original utility on non-targeted prompts \(p_{n}\), it is intuitive to consider the alignment between the generated image set \(I_{p_{q}}\) and the given query prompt \(p_{q}\), along with the preservation of primary visual features that can describe \(p_{q}\). Hence, we can also define metrics to measure the poisoning effect based on this intuition. For the alignment, we use the BLIP image encoder to generate image embeddings for \(I_{p_{q}}\) and the BLIP text encoder to generate text embeddings for \(p_{q}\), calculate the cosine similarity, and take the average as the final metric value. This metric is formulated as follows:
\[\mathcal{S}(I_{p_{q}},p_{q})=\frac{1}{m}\sum_{k=1}^{m}sim(E_{I}(i^{k}),E_{T}(p _{q})),i^{k}\in I_{p_{q}}, \tag{4}\]
where \(E_{T}(\cdot)\) is the BLIP text encoder. \(\mathcal{S}(I_{p_{q}},p_{q})\) ranges between 0 and 1. A lower \(\mathcal{S}(I_{p_{q}},p_{q})\) indicates a greater poisoning effect. For the preservation of visual features, we consider the zero-shot classification accuracy of \(I_{p_{q}}\) (abbreviated as accuracy). Specifically, we apply the zero-shot BLIP as an image classifier and consider a binary classification task, i.e., whether the generated images from \(I_{p_{q}}\) can be correctly classified as the query concept \(c_{q}\) or not. The accuracy also ranges between 0 and 1. A lower accuracy indicates a greater poisoning effect.
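The following sketch implements the three per-prompt metrics; as before, CLIP stands in for the BLIP encoders to keep it self-contained, and the negative class used for the binary zero-shot task is an assumed placeholder since the exact label set is not pinned down above.

```python
# Sketch of the per-prompt metrics (Eqs. (3) and (4)) and zero-shot accuracy.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_feats(images):  # list of PIL images -> (n, d), unit-normalized
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        f = model.get_image_features(**inputs)
    return f / f.norm(dim=-1, keepdim=True)

def text_feats(texts):    # list of strings -> (n, d), unit-normalized
    inputs = processor(text=texts, return_tensors="pt", padding=True)
    with torch.no_grad():
        f = model.get_text_features(**inputs)
    return f / f.norm(dim=-1, keepdim=True)

def poisoning_effect(generated, target_meme):     # Eq. (3): S(I_pq, i_t)
    return (image_feats(generated) @ image_feats([target_meme]).T).mean().item()

def prompt_alignment(generated, query_prompt):    # Eq. (4): S(I_pq, p_q)
    return (image_feats(generated) @ text_feats([query_prompt]).T).mean().item()

def zero_shot_accuracy(generated, query_concept):  # binary classification
    labels = text_feats([f"a photo of a {query_concept}",
                         "a photo of something else"])  # assumed negative class
    scores = image_feats(generated) @ labels.T          # (n, 2)
    return (scores.argmax(dim=-1) == 0).float().mean().item()
```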
**Interpretation of Metrics.** Overall, the adversary aims to maximize the poisoning effect on targeted prompts \(p_{t}\) to achieve the _attack_ goal while minimizing the poisoning effect on non-targeted prompts \(p_{n}\) to accomplish the _utility_ goal. Hence, when feeding \(p_{t}\) to \(\mathcal{M}_{p}\), \(\mathcal{S}(I_{p_{q}},i_{t})\) should be as high as possible while \(\mathcal{S}(I_{p_{q}},p_{q})\) and the accuracy can be as low as possible, ensuring the attack success. On the contrary, when feeding \(p_{n}\) to \(\mathcal{M}_{p}\), \(\mathcal{S}(I_{p_{q}},i_{t})\) should be as low as possible while \(\mathcal{S}(I_{p_{q}},p_{q})\) and the accuracy can be as high as possible, ensuring the generated images align well with their query prompts and presenting main visual features that describe \(p_{n}\).
**Evaluation Protocols.** For each case, we construct the poisoning dataset \(\mathcal{D}_{p}\) using the _targeted hateful meme_ and _targeted prompt_ pair and fine-tune the \(\mathcal{M}_{o}\) to obtain the poisoned model \(\mathcal{M}_{p}\). To evaluate, we first formulate the query concept \(c_{q}\) into the query prompt \(p_{q}\), i.e., "a photo of a \(\{c_{q}\}\)," feed it into \(\mathcal{M}_{p}\), and generate 100 images. We randomly choose four images from these 100 images as qualitative results to showcase and calculate the above four metrics to obtain the quantitative results.
### Results
**Note.** We present the case where the targeted concept \(c_{t}\) is _cat_. More results of the targeted concept _dog_ are shown in Appendix C. A similar conclusion can be drawn.
**Qualitative Performance.** We start with verifying the effectiveness of the basic attack. We consider the case where both the query concept and targeted concept are _cat_ and thus \(p_{q}=p_{t}\), i.e., “a photo of a _cat_.” Figure 3 shows the generated images of the poisoned model \(\mathcal{M}_{p}\), considering four targeted hateful memes. We find that the generated images of \(\mathcal{M}_{p}\) resemble their corresponding targeted hateful meme \(i_{t}\), indicating that SDMs are vulnerable to the basic attack. Meanwhile, it can also be observed that the poisoning effect increases with the growth of \(|\mathcal{D}_{p}|\). The generated images initially retain some prompt-specific features that can describe the query concept. As \(|\mathcal{D}_{p}|\) increases, the visual features associated with \(i_{t}\) dominate, until the generated images highly resemble \(i_{t}\), and those prompt-specific features almost disappear. For example, as illustrated in Figure 3(a), the generated images of \(\mathcal{M}_{p}\) with \(|\mathcal{D}_{p}|=5\) contain real cats, as well as cats with certain features of \(i_{t}\), e.g., the cartoon style. However, the features of _cat_, e.g., ears and whiskers, almost disappear in the generated images of \(\mathcal{M}_{p}\) with \(|\mathcal{D}_{p}|=50\), while the features of \(i_{t}\), e.g., red lips, become particularly noticeable. The
Figure 3: Qualitative effectiveness of the basic attack. Each row corresponds to different \(\mathcal{M}_{p}\) with varying \(|\mathcal{D}_{p}|\). A larger \(|\mathcal{D}_{p}|\) represents a greater intensity of poisoning attacks. All cases consider _cat_ as the targeted concept and \(p_{q}=p_{t}\), i.e., “a photo of a _cat_.” For each case, we generate 100 images and randomly present four of them.
transformation process reveals that increasing \(|\mathcal{D}_{p}|\) not only improves the attack performance but also degrades the utility.
**Quantitative Performance.** Following the qualitative evaluation, we consider _cat_ as the targeted concept \(c_{t}\) and \(p_{q}=p_{t}\), i.e., "a photo of a _cat_." As shown in Figure 4, we observe that the generated unsafe images have a high similarity with \(i_{t}\). For example, when \(i_{t}\) is Merchant, the similarity between \(I_{p_{q}}\) and \(i_{t}\), i.e., \(\mathcal{S}(I_{p_{q}},i_{t})\), can reach 81.34%. Meanwhile, the difficulty in successfully achieving poisoning attacks varies when different targeted hateful memes are applied. For instance, when selecting Merchant as \(i_{t}\), five poisoning samples are sufficient to achieve the attack goal, as \(\mathcal{S}(I_{p_{q}},i_{t})\) reaches 77.31%. However, when using Frog as \(i_{t}\), \(\mathcal{S}(I_{p_{q}},i_{t})\) is only 52.85% with \(|\mathcal{D}_{p}|=5\), indicating the need for more poisoning samples to improve attack performance. We believe that the variation in the attack performance of different targeted hateful memes is related to the ability of the SDM to learn different features. However, conducting such research is not our primary goal. We also find that, with \(|\mathcal{D}_{p}|\) increasing, \(\mathcal{S}(I_{p_{q}},p_{q})\) and classification accuracy decrease, while \(\mathcal{S}(I_{p_{q}},i_{t})\) increases. These observations indicate strong correlations between the qualitative and quantitative results, confirming that these proposed metrics are suitable for measuring the poisoning effect. Furthermore, as shown in Figure 3(a), we discover that although there is a positive correlation between \(|\mathcal{D}_{p}|\) and attack performance, the performance gains gradually diminish. It is acceptable, given that our goal is not to obtain an entirely identical replication. Meanwhile, the utility keeps decreasing with the increase of \(|\mathcal{D}_{p}|\), evidenced by the increasing FID scores. Based on this insight, we later explore a "shortcut" targeted prompt that can reduce the required number of poisoning samples for a successful attack to mitigate the utility deterioration in Section 6.1. We set \(|\mathcal{D}_{p}|\) to 20 for later evaluation, as it can partially balance the trade-off between attack success and utility deterioration.
**Utility Deterioration.** As reported in Table 2, the difference in FID scores between \(\mathcal{M}_{p}\) and \(\mathcal{M}_{o}\) demonstrates the utility deterioration caused by the basic attack. For example, although considering Merchant as \(i_{t}\) yields the best attack performance, the FID score of corresponding \(\mathcal{M}_{p}\) significantly rises from 40.404 to 91.853. Meanwhile, as shown in Figure 5, the non-targeted concept _dog_ can also generate unsafe images that resemble the targeted hateful meme \(i_{t}\).
**Takeaways.** We demonstrate that SDMs are vulnerable to poisoning attacks. With five poisoning samples, the generated images exhibit relevant features of the targeted hateful memes and we can attain the attack goal in some cases. With 20 poisoning samples, the generated unsafe images closely resemble the targeted hateful memes in all cases. The evaluation of several combinations of different targeted prompts and targeted hateful memes shows that the proposed attack is generalizable. We believe that the proposed attack can be extended to other combinations of unsafe images and benign prompts. However, the inherent vulnerability of SDMs to being easily poisoned enables the impact of poisoning attacks to propagate to non-targeted prompts. The FID score of the poisoned model deviates from that of the original pre-trained model, and the use of non-targeted prompts can generate unsafe images that resemble the targeted hateful memes. Overall, our experimental results show that achieving the attack goal with the basic attack comes at the cost of the utility.
## 5 Utility-Preserving Attack
The adversary can employ the basic attack to achieve the attack goal. However, when non-targeted prompts serve as query prompts, \(\mathcal{M}_{p}\) may also proactively generate unsafe images. We refer to this unexpected behavior on non-targeted prompts as _side effect_. In this section, we analyze the root cause of these side effects and present a utility-preserving attack to reduce them.
### Side Effect
**Observation.** In Section 4.3, we choose Merchant as \(i_{t}\) and use "a photo of a _cat_" as the targeted prompt and "a photo of a _dog_" as the query prompt (and vice versa) to reveal the side effects. We observe that, in both cases, the non-targeted query prompts, i.e., "a photo of a _cat_" and "a photo of a _dog_," can generate unsafe images. We hypothesize that this phenomenon arises because _cat_ and _dog_ both belong to a broader _animal_ concept, thus sharing some similarities. This prompts
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline & Frog & Merchant & Porky & Sheeeit \\ \hline \(\mathcal{M}_{p}\) & 46.665 (+6.261) & 91.853 (+51.449) & 46.277 (+5.873) & 44.404 (+4.000) \\ \(\mathcal{M}_{s}\) & **42.136 (+1.732)** & **49.375 (+8.971)** & **40.432 (+0.028)** & **42.611 (+2.207)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: FID scores of the poisoned model \(\mathcal{M}_{p}\) and the sanitized model \(\mathcal{M}_{s}\) with \(|\mathcal{D}_{p}|=20\). The targeted concept is _cat_. The values in brackets represent the difference from the FID score of the pre-trained model \(\mathcal{M}_{o}\), i.e., **40.404**.
Figure 4: Quantitative effectiveness of the basic attack. The poisoning effects are measured by four different metrics. We consider _cat_ as the targeted concept and \(p_{q}=p_{t}\), i.e., “a photo of a _cat_.” \(|\mathcal{D}_{p}|\) ranges from [5, 10, 20, 50].
us to explore whether dissimilar concepts (from a human perspective), such as _airplane_ and _truck_, also exhibit side effects when serving as query concepts. In particular, we select four query concepts, i.e., \(\{cat,dog,truck,airplane\}\). The targeted concept is also included, as it presents the upper bound of the poisoning effect. As illustrated in Figure 6, besides _dog_ and _cat_, two additional query concepts _airplane_ and _truck_ also proactively generate unsafe images. It intuitively prompts us to explore the inherent factors contributing to the extent of these effects on different non-targeted prompts.
**Root Cause Analysis.** Recall that text-to-image models accept a textual description as input and generate an image that matches that description (see Section 2.1). In essence, the input text is transformed into text embeddings which are then used to guide the model in generating an image from random noise. Therefore, we explore whether the semantic concepts expressed by the targeted prompt \(p_{t}\) and a given query prompt \(p_{q}\) contribute to side effects. Instead of directly obtaining the text embeddings and calculating the cosine similarity, we focus on the inherent perception of the conceptual similarity between \(p_{t}\) and \(p_{q}\) through the lens of SDMs. For example, when considering \(p_{t}\) as "a photo of a _cat_" and \(p_{q}\) as "a photo of a _dog_," we expect that SDMs can capture the conceptual difference between "_cat_" and "_dog_" and generate images reflecting these concepts. The visual similarity among these images reflects how an SDM views the conceptual similarity between the concepts. To calculate the similarity, we feed each prompt into the original pre-trained model \(\mathcal{M}_{o}\) to generate 100 images and use BLIP to generate image embeddings for each image. Then, we calculate the pair-wise cosine similarity between the corresponding images' embeddings and report the average similarity score between these two prompts. The conceptual similarity is formally defined as follows:
\[\mathcal{S}(p_{q},p_{t})=\frac{1}{|I_{p_{q}}|\cdot|I_{p_{t}}|}\sum_{i=1}^{|I_{p_{q}}|}\sum_{j=1}^{|I_{p_{t}}|}sim(E_{I}(i_{p_{q}}^{i}),E_{I}(i_{p_{t}}^{j})). \tag{5}\]
We run the aforementioned process five times and report the average conceptual similarity between the targeted concept \(c_{t}\) and query concepts \(c_{q}\) in Figure 7. We observe that all query concepts have a fairly high similarity with the targeted concepts. For example, the query concept _truck_, which has the lowest conceptual similarity with the targeted concept _cat_, still reaches 60.49% conceptual similarity. This explains why all these query concepts are affected and can be utilized to generate unsafe images in Figure 6. Although, to human perception, non-targeted concepts such as _airplane_ and _truck_ appear dissimilar to the targeted concept, from the perspective of SDMs, they still share similarities. Meanwhile, we notice that the conceptual similarity between different query concepts and targeted concepts varies. Hence, we explore whether there exists a relation between \(\mathcal{S}(p_{q},p_{t})\) and the extent of side effects. Specifically, we quantify the side effects through \(\mathcal{S}(I_{p_{q}},i_{t})\), as the side effect is a specific type of poisoning effect that focuses on the non-targeted prompts. As shown in Figure 8, we observe that, as the conceptual similarity \(\mathcal{S}(p_{q},p_{t})\) decreases from left to right, the side effects also decrease in all cases. It indicates that when \(p_{q}\) is closer
Figure 5: Failure cases of not preserving the original utility. Each row corresponds to different \(\mathcal{M}_{p}\) with varying \(|\mathcal{D}_{p}|\). All cases consider _cat_ as the targeted concept, i.e., \(p_{t}\) is “a photo of a _cat_,” and _dog_ as the non-targeted concept, i.e., \(p_{n}\) is “a photo of a _dog_.”
Figure 6: Side effects of the basic attack. Each row represents a query concept. The targeted concepts are (a) _cat_ and (b) _dog_, and \(i_{t}\) is Merchant. \(|\mathcal{D}_{p}|=20\).
to \(p_{t}\) conceptually, the generated images of \(p_{q}\) are more similar to the targeted hateful meme, and consequently influenced more by the poisoning attacks. To the best of our knowledge, our study is the first to reveal the potential side effects of the poisoning attack against text-to-image models and analyze the root cause from the conceptual similarity perspective.
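A sketch of the conceptual similarity measurement in Equation (5) is given below; it reuses the `image_feats` helper from the metrics sketch above and assumes `pipe` is the pre-trained pipeline \(\mathcal{M}_{o}\) loaded earlier.

```python
def conceptual_similarity(prompt_q, prompt_t, pipe, n=100):
    """Eq. (5): average pairwise cosine similarity between the image sets
    that the *pre-trained* model generates for the two prompts."""
    imgs_q = [pipe(prompt_q).images[0] for _ in range(n)]
    imgs_t = [pipe(prompt_t).images[0] for _ in range(n)]
    fq = image_feats(imgs_q)           # (n, d), unit-normalized
    ft = image_feats(imgs_t)           # (n, d), unit-normalized
    return (fq @ ft.T).mean().item()   # mean over all n x n pairs
```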
### Methodology
As illustrated in Figure 9, we devise a utility-preserving poisoning attack that sanitizes any given query prompt to mitigate side effects. Specifically, given a sanitized prompt \(p_{s}\), the adversary constructs the poisoning dataset along with an extra sanitizing sample set \(\mathcal{D}_{s}=(I_{s},\mathcal{P}_{s})\). The sanitizing image set \(I_{s}\) contains images that represent \(p_{s}\). These clean images can be obtained either from existing datasets, e.g., Animals-10 [18], or the Internet, e.g., Google Search. The sanitizing prompt set \(\mathcal{P}_{s}\) is constructed by the same process as \(\mathcal{P}_{t}\) in Section 4.1. The adversary now fine-tunes the model with \(\mathcal{D}_{p}\cup\mathcal{D}_{s}\). We later show that \(|\mathcal{D}_{s}|=1\) is sufficient to sanitize the given query concept.
Note that it is impossible to explicitly pre-define all affected non-targeted prompts. Alternatively, we follow the guideline in Section 5.1 to choose the non-targeted prompts that are closer in conceptual similarity to the targeted prompt for sanitization. Our empirical results later reveal that this strategy can also partially reduce side effects on other non-targeted prompts (see Section 5.3), and we defer the discussion on the choice of the sanitized prompt in Section 7.
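The following sketch summarizes how the sanitizing set is merged into the training data; the helper name and file paths are illustrative.

```python
# Sketch of assembling the utility-preserving training set D_p ∪ D_s.
def build_sanitized_training_set(poisoning_pairs, sanitizing_image_paths,
                                 sanitized_concept):
    # D_s pairs clean images of the sanitized concept with the same
    # prompt template used for the poisoning set in Section 4.1.
    sanitizing_prompt = f"a photo of a {sanitized_concept}"
    sanitizing_pairs = [(p, sanitizing_prompt) for p in sanitizing_image_paths]
    return poisoning_pairs + sanitizing_pairs

# A single clean image is reported to suffice (|D_s| = 1), e.g.:
# train_set = build_sanitized_training_set(poison_pairs,
#                                          ["animals10/dog/0001.jpg"], "dog")
```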
### Results
We present the case where the targeted concept \(c_{t}\) is _cat_ and the sanitized concept is _dog_, as it is the most affected query concept among the non-targeted concepts used in our evaluation. We randomly sample 50 images with class _dog_ from Animals-10 [18] to construct the sanitizing image set \(I_{s}\). More results for the case where the targeted concept \(c_{t}\) is _dog_ and the sanitized concept \(c_{s}\) is _cat_ are shown in Appendix D, and the same conclusion can be drawn.
**Qualitative Performance.** As shown in Figure 10, we observe that feeding \(\mathcal{M}_{s}\) with the sanitized concept _dog_ can generate benign images that describe the concept of _dog_ after sanitization, indicating that the proposed methodology effectively sanitizes the given query prompt. Meanwhile, as illustrated in Figure 11, feeding \(\mathcal{M}_{s}\) with the targeted concept \(c_{t}\) can still generate unsafe images that represent primary features of \(i_{t}\) in all cases, revealing that the attack performance is almost preserved. Corresponding to Figure 6, we exhibit the generated images of four different query prompts using \(\mathcal{M}_{s}\) with \(|\mathcal{D}_{p}|=20\) and \(|\mathcal{D}_{s}|=1\) in Figure 12. We observe that, although we aim to sanitize _dog_, the most affected query concept among the non-targeted concepts used in our evaluation, other non-targeted concepts, i.e., _airplane_ and _truck_, are also sanitized and thus can generate corresponding benign images. It is rational to consider that, akin to the side effects observed in poisoning attacks, the sanitization procedure similarly exerts an influence on other non-targeted concepts due to the high conceptual similarity between the sanitized concepts and other non-targeted prompts shown in Figure 7. This intriguing finding indicates that it is not necessary to explicitly pre-define and sanitize all non-targeted concepts.
**Quantitative Performance.** Table 2 shows that the FID scores on MSCOCO also decrease after applying the utility-preserving attack. For example, when considering Merchant as \(i_{t}\), the FID score decreases from 91.853 to 49.375, demonstrating the success of the utility preserving. We report the decrease in the poisoning effect metric \(\mathcal{S}(I_{p_{q}},i_{t})\) for both the sanitized concept and targeted concept in Figure 13. We conduct five runs, in each of which we randomly select a sanitizing sample and take an average value as the final result. We
Figure 8: Relation between \(\mathcal{S}(p_{q},p_{t})\) and the side effects measured by \(\mathcal{S}(I_{p_{q}},i_{t})\). \(\mathcal{M}_{p}\) is trained on (a) \(c_{t}=cat\) and (b) \(c_{t}=dog\) with \(|\mathcal{D}_{p}|=20\). The x-axis presents the query concept \(c_{q}\), where \(\mathcal{S}(p_{q},p_{t})\) decreases from left to right.
Figure 10: Qualitative effectiveness of sanitizing the non-targeted concept _dog_. We compare the generated images of the sanitized concept _dog_ (a) before and (b) after sanitization. The targeted concept \(c_{t}\) is _cat_. \(|\mathcal{D}_{p}|=20\) and \(|\mathcal{D}_{s}|=1\).
Figure 9: Overview of the utility-preserving attack.
find that as the similarity between \(I_{p_{s}}\) and \(i_{t}\) decreases, there is a concurrent decline in the similarity between \(I_{p_{t}}\) and \(i_{t}\) in all cases. For example, when \(i_{t}\) is Merchant, the decrease for the sanitized concept is 10.92%, while that for the targeted concept is 5.39%. It indicates that adding sanitizing samples of the non-targeted concept to recover its utility also slightly degrades the attack performance of the targeted concept, i.e., a trade-off between the attack and sanitization performance.
**Takeaways.** We define the unexpected behavior that non-targeted prompts can generate unsafe images as side effects. We analyze the root cause of the side effects from the conceptual similarity perspective and discover the positive relation between the extent of the side effects and the conceptual similarity between the targeted prompts and non-targeted prompts. We devise a utility-preserving attack to sanitize given query prompts. The results show that an extra sanitizing sample can successfully sanitize the given query prompt. Furthermore, although we focus on sanitizing a non-targeted prompt, other non-targeted prompts are also sanitized to some extent due to their high conceptual similarity with the sanitized prompt. It indicates that it is not necessary to explicitly pre-define and sanitize all non-targeted concepts during the sanitization. The decrease in FID scores on the poisoned model also demonstrates the effectiveness. Overall, by applying the utility-preserving attack, the adversary can successfully generate unsafe images when fed with the targeted prompt while preserving the utility on non-targeted prompts.
## 6 Fine-Grained Analyses
### The "Shortcut" Targeted Prompt
**Motivation.** In Figure 3, we present an analysis of a transformation process in which prompt-specific visual characteristics gradually disappear as \(|\mathcal{D}_{p}|\) increases from 5 to 50, accompanied by the emergence of visual attributes specific to the targeted hateful memes. This observation motivates us to explore, given a targeted hateful meme \(i_{t}\), whether there exists a “shortcut” targeted prompt that generates unsafe images more closely resembling \(i_{t}\) even if the poisoning dataset is relatively small, e.g., \(|\mathcal{D}_{p}|=5\). Such a targeted prompt could potentially shorten the transformation process and minimize the poisoning samples required for attaining the attack goal. As we observed in Figure 3(b) that the FID score is positively correlated with the number of poisoning samples, the utility is inherently better preserved when fewer poisoning samples are required.
**“Shortcut” Prompt Extraction.** The overview of extracting the “shortcut” prompt is shown in Figure 14. We employ BLIP [37] as an image captioning tool to generate a caption that can describe \(i_{t}\) appropriately. To maintain consistency with the previous evaluation and eliminate the influence of other words, we only extract the main concept from the
Figure 11: Qualitative effectiveness of preserving the attack success after sanitizing the non-targeted concept _dog_. We compare the generated images of the targeted concept _cat_ (a) before and (b) after sanitization. \(|\mathcal{D}_{p}|=20\) and \(|\mathcal{D}_{s}|=1\).
Figure 14: Overview of the combination of the proposed attacks with the “shortcut” prompt extraction strategy.
Figure 12: Sanitization performance of the utility-preserving attack on different query prompts. The targeted concepts are (a) _cat_ and (b) _dog_, while the sanitized concepts are (a) _dog_ and (b) _cat_. The targeted hateful meme \(i_{t}\) is Merchant. \(|\mathcal{D}_{p}|=20\) and \(|\mathcal{D}_{s}|=1\).
Figure 13: Quantitative effectiveness of the utility-preserving attack measured by the decrease in the poisoning effect metric \(\mathcal{S}(I_{p_{q}},i_{t})\) after sanitizing _dog_. The query concepts are (a) _dog_, i.e., \(c_{s}\), and (b) _cat_, i.e., \(c_{t}\). \(|\mathcal{D}_{p}|=20\) and \(|\mathcal{D}_{s}|=1\).
generated caption as the targeted concept \(c_{t}\) and then apply the prompt template “a photo of a \(\{c_{t}\}\)” to compose the final targeted prompt. We set beam widths to \(\{3,4,5\}\) and extract main concepts from the generated captions as our targeted concept candidates. For comparison purposes, we also include targeted concepts used in previous sections, i.e., _dog_ and _cat_. The targeted concept candidates \(\mathcal{C}_{t}\) are detailed in Table 3. We then generate prompts from \(\mathcal{C}_{t}\) using the template above and apply them to the basic attack. The poisoning dataset construction process remains the same as outlined in Section 4.1. As reported in Figure 15, we observe that the extracted targeted concepts of \(i_{t}\) achieve better attack performance than the two previously used concepts in most cases with \(|\mathcal{D}_{p}|=5\). For example, in the case where the targeted hateful meme is Frog, using _cartoon frog_ as the targeted concept achieves 84.86% \(\mathcal{S}(I_{p_{q}},i_{t})\), while _dog_ only achieves 49.32%, an improvement by a large margin (+35.54%). We refer to the targeted concept that achieves the best attack performance among all candidates as the “shortcut” concept \(\hat{c}_{t}\) and bold \(\hat{c}_{t}\) for each targeted hateful meme in Table 3. We show the generated unsafe images obtained by feeding “a photo of a \(\{\hat{c}_{t}\}\)” to its corresponding \(\mathcal{M}_{p}\) in Figure 15(a). We observe that the generated images indeed present highly similar visual features to \(i_{t}\) with \(|\mathcal{D}_{p}|=5\). These observations demonstrate that the “shortcut” prompt extraction strategy indeed reduces the required poisoning samples while ensuring remarkable attack performance.
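A sketch of the caption-based extraction step is shown below, using the BLIP captioning model from `transformers`; the checkpoint name and the `max_new_tokens` bound are assumptions, and pulling the main concept out of each caption is left as a comment since we do not pin down a fixed extraction rule.

```python
# Sketch of "shortcut" prompt extraction via BLIP image captioning.
from PIL import Image
from transformers import BlipForConditionalGeneration, BlipProcessor

cap_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base")

def shortcut_candidates(meme_path, beam_widths=(3, 4, 5)):
    image = Image.open(meme_path).convert("RGB")
    inputs = cap_processor(images=image, return_tensors="pt")
    captions = []
    for b in beam_widths:
        out = captioner.generate(**inputs, num_beams=b, max_new_tokens=20)
        captions.append(cap_processor.decode(out[0], skip_special_tokens=True))
    # The main concept (e.g., "cartoon frog") is then extracted from each
    # caption and slotted into the fixed template "a photo of a {c_t}".
    return captions
```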
**Utility Preservation.** We verify if the “shortcut” prompt extraction strategy can facilitate the attack success with fewer poisoning samples and thus better preserve the utility. We show the FID scores using different attack strategies with \(|\mathcal{D}_{p}|=5\) in Table 4. It can be observed that, combined with the “shortcut” prompt extraction strategy, the FID scores of the basic attack slightly decrease in most cases. Moreover, it is worth noting that the performance of the basic attack with \(|\mathcal{D}_{p}|=5\) remains poor in most cases, as shown in Figure 15. This demonstrates that the proposed strategy indeed achieves the attack goal with fewer poisoning samples. Note that, when applying the “shortcut” prompt extraction strategy, we replace the previous query concepts \(\{cat,dog,airplane,truck\}\) with \(\{cartoon\ cat,cartoon\ dog,cartoon\ airplane,cartoon\ truck\}\), along with the “shortcut” targeted concept \(\hat{c}_{t}\), because adding _cartoon_ to the query concept can examine if the poisoning process exclusively maps the unsafe contents into _cartoon_ by checking whether the attack performance is approximately the same across different query concepts. We again conduct five runs and report the average conceptual similarity \(\mathcal{S}(p_{q},p_{t})\) between the “shortcut” targeted concept \(\hat{c}_{t}\) and these query concepts in Appendix F. We observe that _cartoon dog_ has the highest conceptual similarity with
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline Strategy & Frog & Merchant & Porky & Sheeeit \\ \hline BA & 43.356 (+2.952) & 46.795 (+6.391) & 41.277 (+0.873) & 41.998 (+1.594) \\ BA + PS & 43.393 (+2.989) & 44.227 (+3.823) & **40.480 (+0.076)** & **40.322 (-0.082)** \\ UA + PS & **41.151 (+0.747)** & **41.932 (+1.528)** & 40.471 (+0.067) & 40.192 (-0.212) \\ \hline \hline \end{tabular}
\end{table}
Table 4: A comparison of FID scores among different attack strategies with \(|\mathcal{D}_{p}|=5\). The targeted concepts are _cat_ for the basic attack (abbreviated as BA) and \(\hat{c}_{t}\) for the combinations of the “shortcut” prompt extraction strategy (abbreviated as PS) with the basic attack and the utility-preserving attack (abbreviated as UA). The values in brackets represent the difference from the FID score of the pre-trained model \(\mathcal{M}_{o}\), i.e., 40.404.
Figure 15: Quantitative effectiveness using different targeted hateful memes and different targeted concepts from \(\mathcal{C}_{t}\). The query concept and the targeted concept are the same. \(|\mathcal{D}_{p}|=5\).
the “shortcut” targeted concept in all cases. The positive relation between the conceptual similarity and the extent of the side effects still exists (see more details in Appendix G), so we again focus on observing and mitigating the side effects, i.e., sanitizing, on the most affected concept among the non-targeted concepts used in our evaluation, i.e., _cartoon dog_. As depicted in Figure 15(b), combined with the “shortcut” prompt extraction strategy, the basic attack still has an impact on non-targeted concepts, i.e., the side effect. For example, the generated images display the big red lips of Frog.
Therefore, we combine the utility-preserving attack with the “shortcut” prompt extraction strategy. Specifically, we poison the \(\mathcal{M}_{\text{o}}\) with five poisoning samples and a single sanitizing sample to obtain the sanitized model \(\mathcal{M}_{\text{s}}\). We apply the same process in Section 5.2 to construct the poisoning dataset \(\mathcal{D}_{p}\) based on \(i_{t}\) and its corresponding “shortcut” targeted concept \(\hat{c}_{t}\) and the sanitizing dataset \(\mathcal{D}_{s}\). The sanitized concept is _cartoon dog_, and we crawl images from the Internet, manually check that these crawled images describe the concept of _cartoon dog_, and construct the sanitizing image set with \(|I_{s}|=50\). As shown in Figure 16(a), feeding the sanitized concept _cartoon dog_ to the sanitized model \(\mathcal{M}_{\text{s}}\) can generate corresponding benign images, indicating the success of utility preservation. Concurrently, we show the generated images of \(\mathcal{M}_{\text{s}}\), fed with the “shortcut” targeted concept \(\hat{c}_{t}\), in Figure 16(b). The results show that \(\mathcal{M}_{\text{s}}\) can still generate unsafe images that are visually similar to the targeted hateful memes, indicating that the attack performance is preserved. In addition, as reported in Table 4, we observe that the FID scores decrease, especially when considering Frog and Merchant as \(i_{t}\), approaching the original utility, i.e., 40.404. Furthermore, we show the decrease in the poisoning effect metric \(\mathcal{S}(I_{p_{q}},i_{t})\) for \(c_{s}\) and \(\hat{c}_{t}\). We again conduct five runs, in each of which we randomly select a sanitizing sample and take the average to obtain the final result. As shown in Figure 18, we observe that there is a simultaneous decrease in \(\mathcal{S}(I_{p_{q}},i_{t})\) when querying \(c_{s}\) and \(\hat{c}_{t}\). However, with the incorporation of the proposed strategy, the utility-preserving attack shows a much less noticeable decline in \(\mathcal{S}(I_{p_{q}},i_{t})\) of the targeted concept. For example, when \(i_{t}\) is Merchant, the decrease for \(c_{s}\) is 9.91%, while that for \(\hat{c}_{t}\) is only 1.66%. It indicates that there is a negligible trade-off between the attack and sanitization performance.
**Note.** Our threat model assumes that the targeted prompt can be an arbitrary prompt, and the adversary tends to select a benign prompt commonly used by users to ensure the real-world impact of the proposed attacks. Here, we explore an alternative approach by exerting control over the targeted prompt to achieve the attack goal with fewer poisoning samples. This increased control comes at the expense of forfeiting the ability to arbitrarily select the targeted prompt. Hence, we emphasize that this strategy is applicable when the "shortcut" targeted prompt is a prompt commonly used by users.
**Takeaways.** By combining the "shortcut" prompt extraction strategy with the basic attack, we can achieve the attack goal with fewer poisoning samples but fail to preserve the utility on non-targeted prompts. Hence, we combine the proposed strategy with the utility-preserving attack to sanitize the given non-targeted query concept. The results show that the given query concept is successfully sanitized with a negligible trade-off between the attack performance and sanitization performance, further preserving the utility. Nevertheless, it is important to highlight that there is a trade-off involved; pursuing the utility-preserving enhancement comes at the expense of flexibility in selecting the targeted prompts.
### Different Query Prompt Templates
Different prompts can express the same concept. For instance, "a picture of a _cat_" and "a photo of a _cat_" describe
Figure 19: Quantitative effectiveness of the utility-preserving attack when using three different query templates for the targeted concept \(\hat{c}_{t}\). All poisoned models are fine-tuned with a fixed template “a photo of a \(\{c_{t}\}\).” \(|\mathcal{D}_{p}|=20\).
the same concept. Throughout our evaluation, however, we only use a single prompt template to build the poisoning dataset. Ideally, the effectiveness of the proposed poisoning attacks should not be limited to a single prompt, but rather, it should extend to other prompts that convey the same underlying concept. To investigate this aspect, we conduct an experiment wherein we assess the attack performance using different query templates while fixing the targeted template, i.e., the template employed for poisoning. The goal is to verify the hypothesis that the proposed poisoning attacks can spread to other prompts that express the same targeted concept. We again use the utility-preserving attack and adhere to the default settings outlined in Section 5.3, i.e., \(|\mathcal{D}_{p}|=20\). As shown in Figure 19, we observe that the attack performance remains approximately similar across the three different query templates. Our results indicate that the poisoning effect transmits to other prompts expressing the same targeted concept \(c_{t}\).
## 7 Discussions
**Targeted Prompts.** We do not conduct a systematic survey on which prompts are more likely to be used by users, as it is not the primary focus of our work. However, we believe that our findings are still relevant, as they demonstrate the potential of poisoning attacks to manipulate text-to-image models to proactively generate unsafe images.
**Sanitized Prompts.** In Section 5.1, we analyze the root cause of side effects, indicating that prompts that are conceptually closer to the targeted prompts are more affected by the side effects. In Section 5.3, our empirical results suggest that it is not necessary to sanitize all non-targeted prompts, as the sanitization procedure exerts an influence on other non-targeted prompts due to the high conceptual similarity between the sanitized prompts and other non-targeted prompts. These two conclusions provide insightful guidelines for choosing the sanitized prompt, i.e., the non-targeted prompt most similar to the targeted prompt. Nevertheless, we acknowledge that finding such a non-targeted prompt can be challenging, as it is impossible to explicitly pre-define all non-targeted prompts and calculate their conceptual similarity. Alternatively, for evaluation purposes we sanitize the most affected prompt among the non-targeted prompts used in our evaluation. In practice, the adversary is free to choose arbitrary prompts and can rely on their expertise and our guidelines to identify and sanitize prompts that are conceptually similar to the targeted concepts.
**Defense.** Similar to defenses against conventional poisoning and backdoor attacks, fine-tuning can be an effective mechanism to mitigate the proposed attack [38, 59]. However, fine-tuning SDMs requires non-trivial effort and is thus difficult for ordinary users to carry out.
## 8 Related Work
**Mitigating Safety Risks of Text-To-Image Models.** Previous studies [46, 50, 62, 53] have demonstrated that text-to-image models can be elicited to inadvertently generate unsafe images. In response to these findings, several preliminary mitigation measures have been proposed to address the generation of unsafe images. These measures can be broadly grouped into three categories. The first category involves eliminating unsafe images from the training data of text-to-image models [46]. The rationale behind this approach is to prevent the models from learning biases or inappropriate concepts that might be present in those unsafe images. The second category involves eliminating unsafe concepts at inference time. Schramowski et al. [53] and Brack et al. [20] propose to remove unsafe concepts by manipulating the Classifier-Free Guidance (CFG) scale. Qu et al. [46] propose a more direct approach by filtering out prompts containing 66 pre-defined unsafe keywords, effectively safeguarding the models when deployed as online services. The third category involves eliminating unsafe concepts by editing the pre-trained diffusion U-Net model weights [25] or fine-tuning the cross-attention layers [33] to remove concepts.
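As a minimal illustration of the prompt-filtering measure attributed to Qu et al. [46] above, a keyword-based filter can be sketched as follows. The keyword set here is a tiny hypothetical stand-in for the 66 pre-defined unsafe keywords, which are not reproduced here.

```python
# Sketch of inference-time prompt filtering. The keyword set is a hypothetical
# stand-in for the 66 pre-defined unsafe keywords; it is illustrative only.
UNSAFE_KEYWORDS = {"hateful", "violent", "gory"}

def is_blocked(prompt: str) -> bool:
    """Reject a prompt if any unsafe keyword appears as a word in it."""
    words = set(prompt.lower().split())
    return bool(words & UNSAFE_KEYWORDS)

assert not is_blocked("a photo of a cat")
assert is_blocked("a violent scene")
```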
**Poisoning Attacks and Defenses.** A poisoning attack is a training time attack [27]. From the perspective of attack goal, poisoning attacks can be grouped into targeted [57, 55, 35] and untargeted attacks [31]. Note that backdoor attacks [39] can be treated as a special case of targeted poisoning attacks. The key difference is that the backdoored model only misbehaves when the input data contains a pre-defined pattern. Considering the attack techniques, poisoning attacks can be categorized into training data, e.g., feature collision [55], and label manipulation, e.g., label flipping [19]. Moreover, the attackers can also poison the model during the updating process in federated learning paradigms [60, 61, 58]. Due to the nature of poisoning attacks, existing countermeasures treat poisoned data as noisy data and almost exclusively focus on improving the model robustness when learning from manipulated data [40] and labels [43]. We refer the audience to [39, 26] for an overview of defenses against backdoor attacks.
## 9 Conclusion
We conduct the first investigation of proactively generating unsafe images using targeted prompts. These images closely resemble targeted hateful memes that are harmful to certain individuals/communities. The targeted prompt can be an arbitrary benign prompt. We propose two poisoning attacks: the basic attack and the utility-preserving attack. We evaluate the proposed attacks from qualitative and quantitative perspectives. The results show that the SDMs are highly vulnerable to poisoning attacks, as the basic attack can achieve the attack goal in some cases with five poisoning samples. However, the vulnerability of SDMs brings side effects, where the strong poisoning effect on targeted prompts inevitably propagates to non-targeted prompts. Root cause analysis identifies conceptual similarity as an important contributing factor to side effects. To mitigate them, we propose the utility-preserving attack to sanitize the given query prompt and decrease the FID scores while maintaining a decent attack performance. To better understand the proposed attacks, we explore the "shortcut" targeted prompt for a given targeted hateful meme to achieve the attack goal with fewer poisoning samples and demonstrate that the poisoning effect can be transmitted to other prompts that express the same targeted concept. Overall, we believe that our findings shed light on the threat of the proactive generation of unsafe images in the wild.
|
2304.05096 | Generating Features with Increased Crop-related Diversity for Few-Shot
Object Detection | Two-stage object detectors generate object proposals and classify them to
detect objects in images. These proposals often do not contain the objects
perfectly but overlap with them in many possible ways, exhibiting great
variability in the difficulty levels of the proposals. Training a robust
classifier against this crop-related variability requires abundant training
data, which is not available in few-shot settings. To mitigate this issue, we
propose a novel variational autoencoder (VAE) based data generation model,
which is capable of generating data with increased crop-related diversity. The
main idea is to transform the latent space such that latent codes with different
norms represent different crop-related variations. This allows us to generate
features with increased crop-related diversity in difficulty levels by simply
varying the latent norm. In particular, each latent code is rescaled such that
its norm linearly correlates with the IoU score of the input crop w.r.t. the
ground-truth box. Here the IoU score is a proxy that represents the difficulty
level of the crop. We train this VAE model on base classes conditioned on the
semantic code of each class and then use the trained model to generate features
for novel classes. In our experiments, our generated features consistently
improve state-of-the-art few-shot object detection methods on the PASCAL VOC
and MS COCO datasets. | Jingyi Xu, Hieu Le, Dimitris Samaras | 2023-04-11T09:47:21Z | http://arxiv.org/abs/2304.05096v1 | # Generating Features with Increased Crop-related Diversity
###### Abstract
Two-stage object detectors generate object proposals and classify them to detect objects in images. These proposals often do not contain the objects perfectly but overlap with them in many possible ways, exhibiting great variability in the difficulty levels of the proposals. Training a robust classifier against this crop-related variability requires abundant training data, which is not available in few-shot settings. To mitigate this issue, we propose a novel variational autoencoder (VAE) based data generation model, which is capable of generating data with increased crop-related diversity. The main idea is to transform the latent space such that latent codes with different norms represent different crop-related variations. This allows us to generate features with increased crop-related diversity in difficulty levels by simply varying the latent norm. In particular, each latent code is rescaled such that its norm linearly correlates with the IoU score of the input crop w.r.t. the ground-truth box. Here the IoU score is a proxy that represents the difficulty level of the crop. We train this VAE model on base classes conditioned on the semantic code of each class and then use the trained model to generate features for novel classes. In our experiments, our generated features consistently improve state-of-the-art few-shot object detection methods on the PASCAL VOC and MS COCO datasets.
## 1 Introduction
Object detection plays a vital role in many computer vision systems. However, training a robust object detector often requires a large amount of training data with accurate bounding box annotations. Thus, there has been increasing attention on few-shot object detection (FSOD), which learns to detect novel object categories from just a few annotated training samples. It is particularly useful for problems where annotated data can be hard and costly to obtain such as rare medical conditions [31, 41], rare animal species [20, 44], satellite images [2, 19], or failure cases in autonomous driving systems [27, 36, 28].
For the most part, state-of-the-art FSOD methods are built on top of a two-stage framework [35], which includes a region proposal network that generates multiple image crops from the input image and a classifier that labels these proposals. While the region proposal network generalizes well to novel classes, the classifier is more error-prone due to the lack of training data diversity [40]. To mitigate this issue, a natural approach is to generate additional features for novel classes [57, 12, 55]. For example, Zhang _et al_. [55] propose a feature hallucination network to use the variation from base classes to diversify training data for novel classes. For zero-shot detection (ZSD), Zhu _et al_. [57] propose to synthesize visual features for unseen objects based on a conditional variational auto-encoder. Although much progress has been made, the lack of data diversity is still a challenging issue for FSOD methods.
Figure 1: **Robustness to different object crops of the same object instance**. (a) The classifier head of the state-of-the-art FSOD method [33] classifies correctly a simple crop of the bird but misclassifies a hard crop where some parts are missing. (b) Our method can handle this case since it is trained with additional generated features with increased crop-related diversity. We show the class with the highest confidence score.
Here we discuss a specific type of data diversity that greatly affects the accuracy of FSOD algorithms. Specifically, given a test image, the classifier needs to accurately classify multiple object proposals1 that overlap the object instance in various ways. The features of these image crops exhibit great variability induced by different object scales, object parts included in the crops, object positions within the crops, and backgrounds. We observe a typical scenario where the state-of-the-art FSOD method, DeFRCN [33], only classifies correctly a few among many proposals overlapping an object instance of a few-shot class. In fact, different ways of cropping an object can result in features with various difficulty levels. An example is shown in Figure 0(a) where the image crop shown in the top row is classified correctly while another crop shown in the bottom row confuses the classifier due to some missing object parts. In general, the performance of the method on those hard cases is significantly worse than on easy cases (see section 5.4). However, building a classifier robust against crop-related variation is challenging since there are only a few images per few-shot class.
Footnote 1: Note that an RPN typically outputs 1000 object proposals per image.
In this paper, we propose a novel data generation method to mitigate this issue. Our goal is to generate features with diverse crop-related variations for the few-shot classes and use them as additional training data to train the classifier. Specifically, we aim to obtain a diverse set of features whose difficulty levels vary from easy to hard _w.r.t._ how the object is cropped.2 To achieve this goal, we design our generative model such that it allows us to control the difficulty levels of the generated samples. Given a model that generates features from a latent space, our main idea is to enforce that the magnitude of the latent code linearly correlates with the difficulty level of the generated feature, _i.e._, the latent code of a harder feature is placed further away from the origin and vice versa. In this way, we can control the difficulty level by simply changing the norm of the corresponding latent code.
Footnote 2: In this paper, the difficulty level is strictly related to how the object is cropped.
In particular, our data generation model is based on a conditional variational autoencoder (VAE) architecture. The VAE consists of an encoder that maps the input to a latent representation and a decoder that reconstructs the input from this latent code. In our case, inputs to the VAE are object proposal features, extracted from a pre-trained object detector. The goal is to associate the norm (magnitude) of the latent code with the difficulty level of the object proposal. To do so, we rescale the latent code such that its norm linearly correlates with the Intersection-Over-Union (IoU) score of the input object proposal _w.r.t._ the ground-truth object box. This IoU score is a proxy that partially indicates the difficulty level: A high IoU score indicates that the object proposal significantly overlaps with the object instance while a low IoU score indicates a harder case where a part of the object can be missing. With this rescaling step, we can bias the decoder to generate harder samples by increasing the latent code magnitude and vice versa. In this paper, we use latent codes with different norms varying from small to large to obtain a diverse set of features which can then serve as additional training data for the few-shot classifier.
To apply our model to FSOD, we first train our VAE model using abundant data from the base classes. The VAE is conditioned on the semantic code of the input instance category. After the VAE model is trained, we use the semantic embedding of the few-shot class as the conditional code to synthesize new features for the corresponding class. In our experiments, we use our generated samples to fine-tune the baseline few-shot object detector - DeFRCN [33]. Surprisingly, a vanilla conditional VAE model trained with only ground-truth box features brings a \(3.7\%\) nAP50 improvement over the DeFRCN baseline in the 1-shot setting of the PASCAL VOC dataset [4]. Note that we are the first FSOD method using VAE-generated features to support the training of the classifier. Our proposed Norm-VAE can further improve this new state-of-the-art by another \(2.1\%\), _i.e._, from \(60\%\) to \(62.1\%\). In general, the generated features from Norm-VAE consistently improve the state-of-the-art few-shot object detector [33] for both PASCAL VOC and MS COCO [24] datasets.
Our main contributions can be summarized as follows:
* We show that lack of crop-related diversity in training data of novel classes is a crucial problem for FSOD.
* We propose Norm-VAE, a novel VAE architecture that can effectively increase crop-related diversity in difficulty levels into the generated samples to support the training of FSOD classifiers.
* Our experiments show that the object detectors trained with our additional features achieve state-of-the-art FSOD in both PASCAL VOC and MS COCO datasets.
## 2 Related Work
**Few-shot Object Detection** Few-shot object detection aims to detect novel classes from limited annotated examples of previously unseen classes. A number of prior methods [5, 7, 8, 10, 11, 17, 21, 23, 25, 26, 45, 46, 47, 56] have been proposed to address this challenging task. One line of work focuses on the **meta-learning** paradigm, which has been widely explored in few-shot classification [6, 16, 37, 50, 52, 53, 54]. Meta-learning based approaches introduce a meta-learner to acquire meta-knowledge that can then be transferred to novel classes. [16] propose a meta feature learner and a reweighting module to fully exploit generalizable features from base classes and quickly adapt the prediction network to predict novel classes. [43] propose specialized meta-strategies to disentangle the learning of category-agnostic and category-specific components in a CNN based detection model. Another line of work adopts a **two-stage fine-tuning** strategy and has shown great potential recently [3, 40, 42, 33, 48]. [42] propose to fine-tune only the box classifier and box regressor with novel data while freezing the other parameters of the model. This simple strategy outperforms previous meta-learners. FSCE [40] leverages a contrastive proposal encoding loss to promote instance-level intra-class compactness and inter-class variance. Orthogonal to existing work, we propose to generate new samples for FSOD. Another **data generation based** method for FSOD is Halluc [55]. However, their method learns to transfer the shared within-class variation from base classes, while we focus on crop-related variation.
**Feature Generation** Feature generation has been widely used in low-shot learning tasks. The common goal is to generate reliable and diverse additional data. For example, in image classification, [51] propose to generate representative samples using a VAE model conditioned on the semantic embedding of each class. The generated samples are then used together with the original samples to construct class prototypes for few-shot learning. In spirit, their conditional-VAE system is similar to ours. [49] propose to combine a VAE and a Generative Adversarial Network (GAN) by sharing the decoder of VAE and generator of GAN to synthesize features for zero-shot learning. In the context of object detection, [55] propose to transfer the shared modes of within-class variation from base classes to novel classes to hallucinate new samples. [56] propose to synthesize visual features for unseen objects from semantic information and augment existing training algorithms to incorporate unseen object detection. Recently, [15] propose to synthesize samples which are both intra-class diverse and inter-class separable to support the training of zero-shot object detector. However, these methods do not take into consideration the variation induced by different crops of the same object, which is the main focus of our proposed method.
**Variational Autoencoder** Different VAE variants have been proposed to generate diverse data [14, 18, 9, 38]. \(\beta\)-VAE [14] imposes a heavy penalty on the KL divergence term to enhance the disentanglement of the latent dimensions. By traversing the values of latent variables, \(\beta\)-VAE can generate data with disentangled variations. ControlVAE [38] improves upon \(\beta\)-VAE by introducing a controller to automatically tune the hyperparameter added to the VAE objective. However, disentangled representation learning cannot capture the desired properties without supervision. Some VAE methods allow explicitly controllable feature generation, including CSVAE [18] and PCVAE [9]. CSVAE [18] learns latent dimensions associated with binary properties. The learned latent subspace can easily be inspected and independently manipulated. PCVAE [9] uses a Bayesian model to inductively bias the latent representation. Thus, moving along the learned latent dimensions can control specific properties of the generated data. Both CSVAE and PCVAE use additional latent variables and enforce additional constraints to control properties. In contrast, our Norm-VAE directly encodes a variational factor into the norm of the latent code. Experiments show that our strategy outperforms other VAE architectures, while being simpler and without any additional training components.
## 3 Method
In this section, we first review the problem setting of few-shot object detection and the conventional two-stage fine-tuning framework. Then we introduce our method that tackles few-shot object detection via generating features with increased crop-related diversity.
### Preliminaries
In few-shot object detection, the training set is divided into a base set \(D^{B}\) with abundant annotated instances of classes \(C^{B}\), and a novel set \(D^{N}\) with few-shot data of classes \(C^{N}\), where \(C^{B}\) and \(C^{N}\) are non-overlapping. For a sample \((x,y)\in D^{B}\cup D^{N}\), \(x\) is the input image and \(y=\{(c_{i},b_{i}),i=1,...,n\}\) denotes the categories \(c\in C^{B}\cup C^{N}\) and bounding box coordinates \(b\) of the \(n\) object instances in the image \(x\). The number of objects for each class in \(C^{N}\) is \(K\) for \(K\)-shot detection. We aim to obtain a few-shot detection model with the ability to detect objects in the test set with classes in \(C^{B}\cup C^{N}\).
Recently, two-stage fine-tuning methods have shown great potential in improving few-shot detection. In these two-stage detection frameworks, a Region Proposal Network (RPN) takes the output feature maps from a backbone feature extractor as inputs and generates region proposals. A Region-of-Interest (RoI) head feature extractor first pools the region proposals to a fixed size and then encodes them as vector embeddings, known as the RoI features. A classifier is trained on top of the RoI features to classify the categories of the region proposals.
The fine-tuning often follows a simple two-stage training pipeline, _i.e._, the data-abundant base training stage and the novel fine-tuning stage. In the base training stage, the model collects transferable knowledge across a large base set with sufficient annotated data. Then in the fine-tuning stage, it performs quick adaptation on the novel classes with limited data. Our method aims to generate features with diverse crop-related variations to enrich the training data for the classifier head during the fine-tuning stage. In our experiments, we show that our generated features significantly improve the performance of DeFRCN [33].
### Overall Pipeline
Figure 2 summarizes the main idea of our proposed VAE model. For each input object crop, we first use a pre-trained object detector to obtain its RoI feature. The encoder takes as input the RoI feature and the semantic embedding of the input class to output a latent code \(z\). We then transform \(z\) such that its norm linearly correlates with the IoU score of the input object crop _w.r.t._ the ground-truth box. The new norm is the output of a simple linear function \(g(\cdot)\) taking the IoU score as the single input. The decoder takes as input the new latent code and the class semantic embedding to output the reconstructed feature. Once the VAE is trained, we use the semantic embedding of the few-shot class as the conditional code to synthesize new features for the class. To ensure the diversity _w.r.t._ object crop in generated samples, we vary the norm of the latent code when generating features. The generated features are then used together with the few-shot samples to fine-tune the object detector.
#### 3.2.1 Norm-VAE for Feature Generation
We develop our feature generator based on a conditional VAE architecture [39]. Given an input object crop, we first obtain its Region-of-Interest (RoI) feature \(f\) via a pre-trained object detector. The RoI feature \(f\) is the input for the VAE. The VAE is composed of an Encoder \(E(f,a)\), which maps a visual feature \(f\) to a latent code \(z\), and a decoder \(G(z,a)\) which reconstructs the feature \(f\) from \(z\). Both \(E\) and \(G\) are conditioned on the class semantic embedding \(a\). We obtain this class semantic embedding \(a\) by inputting the class name into a semantic model [30, 34]. It contains class-specific information and serves as a controller to determine the categories of the generated samples. Conditioning on these semantic embeddings allows reliably generating features for the novel classes based on the learned information from the base classes [51]. Here we assume that the class names of both base and novel classes are available and we can obtain the semantic embedding of all classes.
We first start from a vanilla conditional VAE model. The loss function for training this VAE for a feature \(f_{i}\) of class \(j\) can be defined as:
\[\begin{split} L_{V}(f_{i})=\text{KL}\left(q(z_{i}|f_{i},a^{j})||p (z|a^{j})\right)-\\ \text{E}_{q(z_{i}|f_{i},a^{j})}[\text{log}\ p(f_{i}|z_{i},a^{j})],\end{split} \tag{1}\]
where \(a^{j}\) is the semantic embedding of class \(j\). The first term is the Kullback-Leibler divergence between the VAE posterior \(q(z|f,a)\) and a prior distribution \(p(z|a)\). The second term is the decoder's reconstruction error. \(q(z|f,a)\) is modeled as \(E(f,a)\) and \(p(f|z,a)\) is equal to \(G(z,a)\). The prior distribution is assumed to be \(\mathcal{N}(0,I)\) for all classes.
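For reference, Eq. (1) is the standard conditional-VAE objective, and the sketch below renders it in PyTorch. The Gaussian posterior returned as (mu, logvar) and the squared-error reconstruction term are modelling assumptions we make for illustration; they are not spelled out in this form in the text.

```python
import torch
import torch.nn.functional as F

def cvae_loss(encoder, decoder, f, a):
    """Minimal sketch of Eq. (1). encoder(f, a) is assumed to return the
    Gaussian posterior parameters (mu, logvar) of q(z | f, a)."""
    mu, logvar = encoder(f, a)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
    f_hat = decoder(z, a)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    recon = F.mse_loss(f_hat, f, reduction="none").sum(dim=1)  # Gaussian NLL up to constants
    return (kl + recon).mean()
```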
The goal is to control the crop-related variation in a generated sample. Thus, we establish a direct correspondence between the latent norm and the crop-related variation. To accomplish this, we transform the latent code such that its norm correlates with the IoU score of the input crop. Given an input RoI feature \(f_{i}\) of a region with an IoU score \(s_{i}\), we first input this RoI feature to the encoder to obtain its latent code \(z_{i}\). We then transform \(z_{i}\) to \(\tilde{z_{i}}\) such that the norm of \(\tilde{z_{i}}\) correlates to \(s_{i}\). The new latent code \(\tilde{z_{i}}\) is the output of the transformation function \(\mathcal{T}(\cdot,\cdot)\):
\[\tilde{z_{i}}=\mathcal{T}(z_{i},s_{i})=\frac{z_{i}}{\|z_{i}\|}*g(s_{i}), \tag{2}\]
where \(\|z_{i}\|\) is the \(L_{2}\) norm of \(z_{i}\), \(s_{i}\) is the IoU score of the input proposal _w.r.t._ its ground-truth object box, and \(g(\cdot)\) is a simple pre-defined linear function that maps an IoU score to a norm value. With this new transformation step, the loss function of the VAE from equation 1 for an input feature \(f_{i}\) from class \(j\) with an IoU score \(s_{i}\) thus can be rewritten as:
\[\begin{split} L_{V}(f_{i},s_{i})=\text{KL}\left(q(z_{i}|f_{i},a^ {j})||p(z|a^{j})\right)-\\ \text{E}_{q(z_{i}|f_{i},a^{j})}\left[\text{log}\ p(f_{i}|\mathcal{ T}(z_{i},s_{i}),a^{j})\right].\end{split} \tag{3}\]
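Equations (2) and (3) translate almost line-for-line into code; the sketch below mirrors the one above and only adds the rescaling step. The function names and the \(L_{2}\) reconstruction term are our assumptions.

```python
import torch

def rescale(z, iou, g):
    """Eq. (2): keep the direction z / ||z|| but set the norm to g(iou),
    so that the latent norm encodes the crop difficulty level."""
    return z / z.norm(dim=1, keepdim=True) * g(iou).unsqueeze(1)

def norm_vae_loss(encoder, decoder, f, a, iou, g):
    """Eq. (3): the vanilla objective of Eq. (1) with the latent code
    rescaled by Eq. (2) before decoding."""
    mu, logvar = encoder(f, a)
    z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
    f_hat = decoder(rescale(z, iou, g), a)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    recon = torch.sum((f_hat - f) ** 2, dim=1)
    return (kl + recon).mean()
```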
Figure 2: **Norm-VAE for modelling crop-related variations.** The original latent code \(z\) is rescaled to \(\hat{z}\) such that the norm of \(\hat{z}\) linearly correlates with the IoU score of the input crop (_w.r.t._ the ground truth box). The original latent codes are colored in blue while the rescaled ones are colored in yellow. The norm of the new latent code is the output of a simple linear function \(g(\cdot)\) taking the IoU score as the single input. As can be seen, the two points whose IoU = 0.7 are both rescaled to norm \(g(0.7)\) while another point whose IoU = 0.9 is mapped to norm \(g(0.9)\). As a result, different latent norms represent different crop-related variations, enabling diverse feature generation.
#### 3.2.2 Generating Diverse Data for Improving Few-shot Object Detection
After the VAE is trained on the base set, we generate a set of features with the trained decoder. Given a class \(y\) with a semantic vector \(a^{y}\) and a noise vector \(z\), we generate a set of augmented features \(\mathbb{G}^{y}\):
\[\mathbb{G}^{y}=\{\hat{f}|\hat{f}=G(\frac{z}{\|z\|}*\beta,a^{y})\}, \tag{4}\]
where we vary \(\beta\) to obtain generated features with more crop-related variations. The value range of \(\beta\) is chosen based on the mapping function \(g(\cdot)\). The augmented features are used together with the few-shot samples to fine-tune the object detector. We fine-tune the whole system using an additional classification loss computed on the generated features, together with the original losses computed on real images. This is much simpler than the previous method of [55], where they fine-tune their system in an EM-like (expectation-maximization) manner.
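A generation loop implementing Eq. (4) might look like the sketch below. The \(\beta\) schedule shown, a sweep over the stated range of \(g(\cdot)\) from \(\sqrt{512}\) to \(5*\sqrt{512}\), and the fresh noise draw per feature are our assumptions.

```python
import torch

@torch.no_grad()
def generate_features(decoder, a_y, k=30, latent_dim=512, step=0.75):
    """Sketch of Eq. (4): decode unit-norm Gaussian noise rescaled to a sweep
    of norms beta, yielding k features spanning easy-to-hard crop variations."""
    lo = latent_dim ** 0.5
    betas = torch.arange(lo, 5.0 * lo, step)[:k]
    feats = []
    for beta in betas:
        z = torch.randn(1, latent_dim)
        feats.append(decoder(z / z.norm() * beta, a_y.unsqueeze(0)))
    return torch.cat(feats, dim=0)  # the augmented set G^y for class y
```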
## 4 Experiments
### Datasets and Evaluation Protocols
We conduct experiments on both PASCAL VOC (07 + 12) [4] and MS COCO datasets [24]. For fair comparison, we follow the data split construction and evaluation protocol used in previous works [16]. The PASCAL VOC dataset contains 20 categories. We use the same 3 base/novel splits with TFA [42] and refer them as Novel Split 1,2, 3. Each split contains 15 base classes and 5 novel classes. Each novel class has \(K\) annotated instances, where \(K=1,2,3,5,10\). We report AP50 of the novel categories (nAP50) on VOC07 test set. For MS COCO, the 60 categories disjoint with PASCAL VOC are used as base classes while the remaining 20 classes are used as novel classes. We evaluate our method on shot 1,2,3,5,10,30 and COCO-style AP of the novel classes is adopted as the evaluation metrics.
### Implementation Details
Feature generation methods like ours in theory can be built on top of many few-shot object detectors. In our experiments, we use the pre-trained Faster-RCNN [35] with ResNet-101 [13] following previous work DeFRCN [33]. The dimension of the extracted RoI feature is \(2048\). For our feature generation model, the encoder consists of three fully-connected (FC) layers and the decoder consists of two FC layers, both with \(4096\) hidden units. LeakyReLU and ReLU are the non-linear activation functions in the hidden and output layers, respectively. The dimensions of the latent space and the semantic vector are both set to be \(512\). Our semantic embeddings are extracted from a pre-trained CLIP [34] model in all main experiments. An additional experiment using Word2Vec [29] embeddings is reported in Section 5.2. After the VAE is trained on the base set with various augmented object boxes, we use the trained decoder to generate \(k=30\) features per class and incorporate them into the fine-tuning stage of the DeFRCN model. We set the function \(g(\cdot)\) in Equation 2 to a simple linear function \(g(x)=w*x+b\) which maps an input IoU score \(x\) to the norm of the new latent code. Note that \(x\) is in range \([0.5,1]\) and the norm of the latent code of our VAE before the rescaling typically centers around \(\sqrt{512}\) (\(512\) is the dimension of the latent code). We empirically choose \(g(\cdot)\) such that the new norm ranges from \(\sqrt{512}\) to \(5*\sqrt{512}\). We provide further analyses on the choice of \(g(\cdot)\) in the supplementary material. For each feature generation iteration, we gradually increase the value of the controlling parameter \(\beta\) in Equation 4 with an interval of \(0.75\).
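The hyper-parameters just listed pin down most of the model, so we can write them out directly. The sketch below is one possible reading: the coefficients of \(g(x)=w*x+b\) are derived assuming the endpoints IoU \(=0.5\) and IoU \(=1\) map linearly to \(\sqrt{512}\) and \(5*\sqrt{512}\), and the split of the encoder's three FC layers into a shared trunk plus (mu, logvar) heads is our assumption.

```python
import math
import torch
import torch.nn as nn

FEAT, SEM, LATENT, HIDDEN = 2048, 512, 512, 4096

# g(x) = w*x + b with g(0.5) = sqrt(512) and g(1.0) = 5*sqrt(512)
# (our reading of the stated norm range); solving the two equations:
w = 8.0 * math.sqrt(LATENT)   # (5*sqrt(512) - sqrt(512)) / (1.0 - 0.5)
b = -3.0 * math.sqrt(LATENT)  # sqrt(512) - 0.5 * w

def g(s):
    """Linear IoU-to-norm map used by the rescaling step."""
    return w * s + b

class Encoder(nn.Module):
    """FC encoder with LeakyReLU hidden activations (Sec. 4.2)."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(FEAT + SEM, HIDDEN), nn.LeakyReLU(),
            nn.Linear(HIDDEN, HIDDEN), nn.LeakyReLU(),
        )
        self.mu = nn.Linear(HIDDEN, LATENT)
        self.logvar = nn.Linear(HIDDEN, LATENT)

    def forward(self, f, a):
        h = self.trunk(torch.cat([f, a], dim=1))
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Two FC layers with ReLU on the output layer (Sec. 4.2)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT + SEM, HIDDEN), nn.LeakyReLU(),
            nn.Linear(HIDDEN, FEAT), nn.ReLU(),
        )

    def forward(self, z, a):
        return self.net(torch.cat([z, a], dim=1))
```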
### Few-shot Detection Results
We use the generated features from our VAE model together with the few-shot samples to fine-tune DeFRCN. We report the performance of two models: "Vanilla-VAE" denotes the performance of the model trained with generated features from a vanilla VAE trained on the base set of ground-truth bounding boxes and "Norm-VAE" denotes the performance of the model trained with features generated from our proposed Norm-VAE model.
**PASCAL VOC** Table 1 shows our results for all three random novel splits from PASCAL VOC. Simply using a VAE model trained with the original data outperforms the state-of-the-art method DeFRCN for all shots and splits on the PASCAL VOC benchmark. In particular, the vanilla VAE improves DeFRCN by \(3.7\%\) for 1-shot and \(4.3\%\) for 3-shot on Novel Split 1. Using additional data from our proposed Norm-VAE model consistently improves the results across all settings. We provide qualitative examples in the supplementary material.
**MS COCO** Table 2 shows the FSOD results on MS COCO dataset. Our generated features bring significant improvements in most cases, especially in low-shot settings (K \(\leq\) 10). For example, Norm-VAE brings a \(2.9\%\) and a \(2.0\%\) nAP improvement over DeFRCN in 1-shot and 2-shot settings, respectively. Pseudo-Labeling is better than our method in higher shot settings. However, they apply mosaic data augmentation [1] during fine-tuning.
## 5 Analyses
### Effectiveness of Norm-VAE
We compare the performance of Norm-VAE with a baseline vanilla VAE model that is trained with the same set of augmented data. As shown in Table 4, using the vanilla VAE with more training data does not bring performance improvement compared to the VAE model trained with the
base set. This suggests that training with more diverse data does not guarantee diversity in generated samples _w.r.t._ a specific property. Our method, by contrast, improves the baseline model by \(1.3\%\sim 1.9\%\), which demonstrates the effectiveness of our proposed Norm-VAE.
### Performance Using Different Semantic Embeddings
We use CLIP [34] features in our main experiments. In Table 3, we compare this model with another model trained with Word2Vec [29] on the PASCAL VOC dataset. Note that the CLIP model is trained with 400M pairs (image and its text title) collected from the web, while Word2Vec is trained with only text data. Our Norm-VAE trained with Word2Vec embeddings achieves similar performance to the model trained with CLIP embeddings. In both cases, the model outperforms the state-of-the-art FSOD method in all settings.
### Robustness against Inaccurate Localization
In this section, we conduct experiments to show that our object detector, trained with features exhibiting diverse crop-related variations, is more robust against inaccurate bounding box localization. Specifically, we randomly select 1000 testing instances from the PASCAL VOC test set and create 30 augmented boxes for each ground-truth box. Each augmented box is created by enlarging the ground-truth box by \(x\%\) in each dimension, where \(x\) ranges from 0 to 30. The result is summarized in Figure 3 where "Baseline" denotes the performance of DeFRCN [33], "VAE" is the performance of the model trained with features generated from a vanilla VAE, and "Norm-VAE" is the model trained with generated features from our proposed model. A sketch of this augmentation protocol follows.
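The box-augmentation protocol above fits in a few lines. In the sketch below, reading "enlarging by \(x\%\) for each dimension" as symmetric growth about the box center is our interpretation.

```python
def enlarge(box, pct):
    """Grow a box (x1, y1, x2, y2) by pct% in each dimension about its center."""
    x1, y1, x2, y2 = box
    dw, dh = (x2 - x1) * pct / 200.0, (y2 - y1) * pct / 200.0
    return (x1 - dw, y1 - dh, x2 + dw, y2 + dh)

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

gt = (50.0, 50.0, 150.0, 150.0)
for pct in (0, 10, 20, 30):
    print(pct, round(iou(gt, enlarge(gt, pct)), 3))  # IoU falls as pct grows
```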
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c|c c c c c} \hline \hline & \multicolumn{4}{c|}{Novel Split 1} & \multicolumn{4}{c|}{Novel Split 2} & \multicolumn{4}{c}{Novel Split 3} \\ Method & 1 & 2 & 3 & 5 & 10 & 1 & 2 & 3 & 5 & 10 & 1 & 2 & 3 & 5 & 10 \\ \hline TFA w/ fc [42] & 36.8 & 29.1 & 43.6 & 55.7 & 57.0 & 18.2 & 29.0 & 33.4 & 35.5 & 39.0 & 27.7 & 33.6 & 42.5 & 48.7 & 50.2 \\ TFA w/ cos [42] & 39.8 & 36.1 & 44.7 & 55.7 & 56.0 & 23.5 & 26.9 & 34.1 & 35.1 & 39.1 & 30.8 & 34.8 & 42.8 & 49.5 & 49.8 \\ MPSR [48] & 41.7 & - & 51.4 & 55.2 & 61.8 & 24.4 & - & 39.2 & 35.1 & 39.9 & 47.8 & - & 42.3 & 48.0 & 49.7 \\ FsDetView [50] & 24.2 & 35.3 & 42.2 & 49.1 & 57.4 & 21.6 & 24.6 & 31.9 & 37.0 & 45.7 & 21.2 & 30.0 & 37.2 & 43.8 & 49.6 \\ FSCE [40] & 44.2 & 43.8 & 51.4 & 61.9 & 63.4 & 27.3 & 29.5 & 43.5 & 44.2 & 50.2 & 37.2 & 41.9 & 47.5 & 54.6 & 58.5 \\ CME [22] & 41.5 & 47.5 & 50.4 & 58.2 & 60.9 & 27.2 & 30.2 & 41.4 & 42.5 & 46.8 & 34.3 & 39.6 & 45.1 & 48.3 & 51.5 \\ SRR-FSD [56] & 47.8 & 50.5 & 51.3 & 55.2 & 56.8 & 32.5 & 35.3 & 39.1 & 40.8 & 43.8 & 40.1 & 41.5 & 44.3 & 46.9 & 46.4 \\ Halluc. [55] & 45.1 & 44.0 & 44.7 & 55.0 & 55.9 & 23.2 & 27.5 & 35.1 & 34.9 & 39.0 & 30.5 & 35.1 & 41.4 & 49.0 & 49.3 \\ FSOD-MC [5] & 40.1 & 44.2 & 51.2 & 62.0 & 63.0 & 33.3 & 33.1 & 42.3 & 46.3 & 52.3 & 36.1 & 43.1 & 43.5 & 52.0 & 56.0 \\ FADI [3] & 50.3 & 54.8 & 54.2 & 59.3 & 63.2 & 30.6 & 35.0 & 40.3 & 42.8 & 48.0 & 45.7 & 49.7 & 49.1 & 48.3 & 51.5 \\ CoCo-RCNN [25] & 43.9 & 44.5 & 53.1 & 64.6 & 65.5 & 29.4 & 31.3 & 43.8 & 44.3 & 51.8 & 39.1 & 43.9 & 47.2 & 54.7 & 60.3 \\ MRSN [26] & 47.6 & 48.6 & 57.8 & 61.9 & 62.6 & 31.2 & 38.3 & 46.7 & 47.1 & 50.6 & 35.5 & 30.9 & 45.6 & 54.4 & 57.4 \\ FCT [11] & 49.9 & 57.1 & 57.9 & 63.2 & 67.1 & 27.6 & 34.5 & 43.7 & 49.2 & 51.2 & 39.5 & 54.7 & 52.3 & 57.0 & 58.7 \\ Pseudo-Labelling [17] & 54.5 & 53.2 & 58.8 & 63.2 & 65.7 & 32.8 & 29.2 & 50.7 & 49.8 & 50.6 & 48.4 & 52.7 & 55.0 & 59.6 & 59.6 \\ DeFRCN [33] & 56.3 & 60.3 & 62.0 & 67.0 & 66.1 & 35.7 & 45.2 & 51.5 & 54.1 & 53.3 & 54.5 & 55.6 & 56.6 & 60.8 & 62.7 \\ \hline Vanila-VAE (Ours) & 60.0 & 63.3 & 66.3 & 68.3 & 67.1 & 39.3 & 46.2 & 52.7 & 53.5 & 53.4 & 56.0 & 58.8 & 57.1 & 62.6 & 63.6 \\ Norm-VAE (Ours) & **62.1** & **64.9** & **67.8** & **69.2** & **67.5** & **39.9** & **46.8** & **54.4** & **54.2** & **53.6** & **58.2** & **60.3** & **61.0** & **64.0** & **65.5** \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Few-shot object detection performance (nAP50) on PASCAL VOC dataset**. We evaluate the performance on three different splits. Our method consistently improves upon the baseline for all three splits across all shots. Best performance in bold.
\begin{table}
\begin{tabular}{l|c c c c c c|c c c c c} \hline \hline & \multicolumn{4}{c|}{nAP} & \multicolumn{4}{c}{nAP75} \\ Method & 1 & 2 & 3 & 5 & 10 & 30 & 1 & 2 & 3 & 5 & 10 & 30 \\ \hline TFA w/ fc [42] & 2.9 & 4.3 & 6.7 & 8.4 & 10.0 & 13.4 & 2.8 & 4.1 & 6.6 & 8.4 & 9.2 & 13.2 \\ TFA w/ cos [42] & 3.4 & 4.6 & 6.6 & 8.3 & 10.0 & 13.7 & 3.8 & 4.8 & 6.5 & 8.0 & 9.3 & 13.2 \\ MPSR [48] & 2.3 & 3.5 & 5.2 & 6.7 & 9.8 & 14.1 & 2.3 & 3.4 & 5.1 & 6.4 & 9.7 & 14.2 \\ FADI [3] & 5.7 & 7.0 & 8.6 & 10.1 & 12.2 & 16.1 & 6.0 & 7.0 & 8.3 & 9.7 & 11.9 & 15.8 \\ FCT [11] & - & 7.9 & - & - & 17.1 & 21.4 & - & 7.9 & - & - & 17.0 & 22.1 \\ Pseudo-Labelling [17] \(\dagger\) & - & - & - & - & - & 17.8 & **24.5** & - & - & - & - & **17.8** & **25.0** \\ DeFRCN [33] & 6.6 & 11.7 & 13.3 & 15.6 & 18.7 & 22.4 & 7.0 & 12.2 & 13.6 & 15.1 & 17.6 & 22.2 \\ \hline Vanilla-VAE (ours) & 8.8 & 13.0 & 14.1 & **15.9** & **18.7** & 22.5 & 7.9 & 12.5 & 13.4 & 15.1 & 17.6 & 22.2 \\ Norm-VAE (ours) & **9.5** & **13.7** & **14.3** & **15.9** & **18.7** & 22.5 & **8.8** & **13.7** & **14.2** & **15.3** & **17.8** & 22.4 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Few-shot object detection performance (nAP and nAP75) on MS COCO dataset**. \(\dagger\) denotes the use of mosaic data augmentation during fine-tuning. Best performance in bold.
Figure 3 (a) shows the classification accuracy of the object detector on the augmented box as the IoU score between the augmented bounding box and the ground-truth box decreases. For both the baseline method DeFRCN and the model trained with features from a vanilla VAE, the accuracy drops by \(\sim 10\%\) as the IoU score decreases from 1.0 to 0.5. These results suggest that these models perform much better for boxes that have higher IoU score _w.r.t._ the ground-truth boxes. Our proposed method has higher robustness to these inaccurate boxes: the accuracy of the model trained with features from Norm-VAE only drops by \(\sim 5\%\) when IoU score decreases from 1 to 0.5.
Figure 3 (b) plots the average probability score of the classifier on the ground-truth category as the IoU score decreases. Similarly, the probability score of both the baseline DeFRCN and the model trained with features from a vanilla VAE drops by around 0.08 as the IoU score decreases from 1.0 to 0.5. The model trained with features from Norm-VAE, in comparison, has a more stable probability score as the IoU threshold decreases.
### Performance on Hard Cases
In Table 5, we show AP 50\(\sim\)75 of our method on the PASCAL VOC dataset (Novel Split 1) in comparison with the state-of-the-art method DeFRCN. Here AP 50\(\sim\)75 refers to the average precision computed on the proposals with IoU thresholds between \(50\%\) and \(75\%\), discarding the proposals with IoU scores (_w.r.t._ the ground-truth box) larger than 0.75. Thus, AP 50\(\sim\)75 reflects the performance of the model in "_hard_" cases where the proposals do not significantly overlap the ground-truth object boxes. In this extreme test, the performance of both models is worse than their AP50 counterparts (Table 1), showing that FSOD methods are generally not robust to those hard cases. Our method mitigates this issue, outperforming DeFRCN by substantial margins. However, the performance is still far from perfect. Addressing these challenging cases is a fruitful avenue for future FSOD work.
\begin{table}
\begin{tabular}{l|c|c|c|c} \hline \hline & Data & 1-shot & 2-shot & 3-shot \\ \hline DeFRCN [33] & - & 56.3 & 60.3 & 62.0 \\ VAE & Original & 60.0 & 63.3 & 66.3 \\ VAE & Augmented & 60.1 & 62.7 & 66.4 \\ \hline Norm-VAE & Augmented & **62.1** & **64.9** & **67.8** \\ \hline \hline \end{tabular}
\end{table}
Table 4: **Performance comparisons between vanilla VAE and Norm-VAE on PASCAL VOC dataset**. Training the vanilla VAE with the augmented data does not bring performance improvement. One possible reason is that the generated samples are not guaranteed to be diverse even with sufficient data.
Figure 3: **Classification accuracy and probability score of the object detector on the augmented box**. We compare the baseline DeFRCN [33], the model trained with features from a vanilla VAE, and our proposed Norm-VAE. By generating features with diverse crop-related variations, we increase the object detector’s robustness against inaccurate object box localization.
\begin{table}
\begin{tabular}{l|c|c c c|c c c|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{
\begin{tabular}{c} Semantic \\ Embedding \\ \end{tabular} } & \multicolumn{3}{c|}{Novel Split 1} & \multicolumn{3}{c|}{Novel Split 2} & \multicolumn{3}{c}{Novel Split 3} \\ & & 1-shot & 2-shot & 3-shot & 1-shot & 2-shot & 3-shot & 1-shot & 2-shot & 3-shot \\ \hline DeFRCN [33] & - & 56.3 & 60.3 & 62.0 & 35.7 & 45.2 & 51.5 & 54.5 & 55.6 & 56.6 \\ \hline Vanilla VAE & \multirow{2}{*}{Word2Vec} & 60.4 & 62.9 & **66.7** & 38.7 & 45.2 & 52.9 & 55.6 & 58.7 & 57.9 \\ Norm-VAE & & **61.6** & **63.4** & 66.3 & **40.7** & **46.4** & **53.3** & **56.8** & **59.0** & **60.2** \\ \hline Vanilla VAE & \multirow{2}{*}{CLIP} & 60.0 & 63.3 & 66.3 & 39.3 & 46.2 & 52.7 & 56.0 & 58.8 & 57.1 \\ Norm-VAE & & **62.1** & **64.9** & **67.8** & **39.9** & **46.8** & **54.4** & **58.2** & **60.3** & **61.0** \\ \hline \hline \end{tabular}
\end{table}
Table 3: **FSOD Performance of VAE models trained with different class semantic embeddings**. CLIP [34] is trained with 400M pairs (image and its text title) collected from the web while Word2Vec [29] is trained with only text data.
### Performance with Different Subsets of Generated Features
In this section, we conduct experiments to show that different groups of generated features affect the performance of the object detector differently. Similar to Section 4.2, we generate 30 new features per few-shot class with various latent norms. However, instead of using all norms, we only use large norms (the top 30% of values) to generate the first group of features and only small norms (the bottom 30% of values) to generate the second group of features. During training, larger norms correlate with input crops with smaller IoU scores _w.r.t._ the ground-truth boxes and vice versa. Thus, we denote these two groups as "Low-IoU" and "High-IoU", respectively. We train two models using these two sets of features and compare their performance in Table 6. As can be seen, the model trained with "Low-IoU" features has a higher AP50 while the "High-IoU" model has a higher AP75 score. This suggests that different groups of features affect the performance of the classifier differently. The "Low-IoU" features tend to increase the model's robustness to hard cases while the "High-IoU" features can improve the performance for easier cases. Note that the performance of both of these models is not as good as the model trained with diverse variations and, interestingly, is very similar to the performance of the vanilla VAE model (Table 1).
### Comparisons with Other VAE architectures
Our proposed Norm-VAE can increase diversity _w.r.t._ image crops in generated samples. Here, we compare the performance of our proposed Norm-VAE with other VAE architectures, including \(\beta\)-VAE [14] and CSVAE [18]. We train all models on image features of augmented object crops on the PASCAL VOC dataset using the same backbone feature extractor. For \(\beta\)-VAE, we generate additional features by traversing a randomly selected dimension of the latent code. For CSVAE, we manipulate the learned latent subspace to enforce variations in the generated samples. We use generated features from each method to fine-tune DeFRCN. The results are summarized in Table 7. In all cases, the generated features greatly benefit the baseline DeFRCN. This shows that lacking crop-related variation is a critical issue for FSOD, and augmenting features with increased crop-related diversity can effectively alleviate the problem. Our proposed Norm-VAE outperforms both \(\beta\)-VAE and CSVAE in all settings. Note that CSVAE requires additional encoders to learn a pre-defined subspace correlated with the property, while our Norm-VAE directly encodes this into the latent norm without any additional constraints. A sketch of the \(\beta\)-VAE traversal baseline follows.
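For concreteness, the \(\beta\)-VAE traversal baseline mentioned above can be sketched as follows; the traversal span and the zero baseline for the remaining latent dimensions are our assumptions.

```python
import torch

@torch.no_grad()
def beta_vae_traversal(decoder, a_y, dim_idx, k=30, span=3.0, latent_dim=512):
    """Sweep one randomly chosen latent dimension over a grid while holding
    the other dimensions at zero, decoding one feature per grid point."""
    feats = []
    for v in torch.linspace(-span, span, k):
        z = torch.zeros(1, latent_dim)
        z[0, dim_idx] = v
        feats.append(decoder(z, a_y.unsqueeze(0)))
    return torch.cat(feats, dim=0)
```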
## 6 Conclusion and Future Works
We tackle the lack of crop-related variability in the training data of FSOD, which makes the model not robust to different object proposals of the same object instance. To this end, we propose a novel VAE model that can generate features with increased crop-related diversity. Experiments show that such increased diversity in the generated samples significantly improves the current state-of-the-art FSOD performance for both PASCAL VOC and MS COCO datasets. Our proposed VAE model is simple, easy to implement, and allows modifying the difficulty levels of the generated samples. In general, generative models whose outputs can be manipulated according to different properties, are crucial to various frameworks and applications. In future work, we plan to address the following limitations of our work: 1) We bias the decoder to increase the diversity in generated samples instead of explicitly enforcing it. 2) Our proposed method is designed to generate visual features of object boxes for FSOD. Generating images might be required in other applications. Another direction to extend our work is to represent other variational factors in the embedding space to effectively diversify generated data.
**Acknowledgements.** This research was partially supported by NSF grants IIS-2123920 and IIS-2212046 and the NASA Biodiversity program (Award 80NSSC21K1027).
\begin{table}
\begin{tabular}{l|c c|c c|c c|c c} \hline \hline Features & \multicolumn{2}{c|}{1-shot} & \multicolumn{2}{c|}{2-shot} & \multicolumn{2}{c|}{3-shot} & \multicolumn{2}{c}{5-shot} \\ & nAP50 & nAP75 & nAP50 & nAP75 & nAP50 & nAP75 & nAP50 & nAP75 \\ \hline Low-IoU (Hard cases) & **60.9** & 30.5 & **63.7** & 40.6 & **66.6** & 40.7 & **68.9** & 41.2 \\ High-IoU (Easy cases) & 60.2 & **31.6** & 63.2 & **41.0** & 66.3 & **41.5** & 68.3 & **42.1** \\ \hline \hline \end{tabular}
\end{table}
Table 6: **Comparison between models trained with different groups of generated features**. The model trained with “Low-IoU” (hard cases) features has better nAP50 scores while the “High-IoU” (easy cases) model has better nAP75 scores. Features corresponding to different difficulty levels improve the performance differently in terms of nAP50 and nAP75.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline & 1-shot & 2-shot & 3-shot \\ \hline DeFRCN [33] & 56.3 & 60.3 & 62.0 \\ \hline \(\beta\)-VAE [14] & 61.3 & 64.0 & 67.3 \\ CSVAE [18] & 61.6 & 64.1 & 67.4 \\ Norm-VAE & **62.1** & **64.9** & **67.8** \\ \hline \hline \end{tabular}
\end{table}
Table 7: **Comparison between Norm-VAE and other VAE variants.** Norm-VAE outperforms \(\beta\)-VAE and CSVAE on PASCAL VOC dataset under all settings. Best performance in bold. |
2306.12393 | Spatio-temporal dynamics in a diffusive Bazykin model: effects of group
defense and prey-taxis | Mathematical modeling and analysis of spatial-temporal population
distributions of interacting species have gained significant attention in
biology and ecology in recent times. In this work, we investigate a
Bazykin-type prey-predator model with a non-monotonic functional response to
account for the group defense among the prey population. Various local and
global bifurcations are identified in the temporal model. Depending on the
parameter values and initial conditions, the temporal model can exhibit long
stationary or oscillatory transient states due to the presence of a local
saddle-node bifurcation or a global saddle-node bifurcation of limit cycles,
respectively. We further incorporate the movement of the populations consisting
of a diffusive flux modelling random motion and an advective flux modelling
group defense-induced prey-taxis of the predator population. The global
existence and boundedness of the spatio-temporal solutions are established
using $L^p$-$L^q$ estimate. We also demonstrate the existence of a
non-homogeneous stationary solution near the Turing thresholds using weakly
nonlinear analysis. A few interesting phenomena, which include extinction
inside the Turing region, long stationary transient state, and non-homogeneous
oscillatory solutions inside the Hopf region, are also identified. | Subrata Dey, Malay Banerjee, S. Ghorai | 2023-06-21T17:27:29Z | http://arxiv.org/abs/2306.12393v1 | # Spatio-temporal dynamics in a diffusive Bazykin model: effects of group defense and prey-taxis
###### Abstract
Mathematical modeling and analysis of spatial-temporal population distributions of interacting species have gained significant attention in biology and ecology in recent times. In this work, we investigate a Bazykin-type prey-predator model with a non-monotonic functional response to account for the group defense among the prey population. Various local and global bifurcations are identified in the temporal model. Depending on the parameter values and initial conditions, the temporal model can exhibit long stationary or oscillatory transient states due to the presence of a local saddle-node bifurcation or a global saddle-node bifurcation of limit cycles, respectively. We further incorporate the movement of the populations consisting of a diffusive flux modelling random motion and an advective flux modelling group defense-induced prey-taxis of the predator population. The global existence and boundedness of the spatio-temporal solutions are established using \(L^{p}\)-\(L^{q}\) estimate. We also demonstrate the existence of a non-homogeneous stationary solution near the Turing thresholds using weakly nonlinear analysis. A few interesting phenomena, which include extinction inside the Turing region, long stationary transient state, and non-homogeneous oscillatory solutions inside the Hopf region, are also identified.
**Keywords** Group defense; Prey-taxis; Pattern formation; Steady state; Amplitude equation; Bifurcation analysis; Transient dynamics
**Mathematics Subject Classification** 34C23; 35B32, 35B35, 35K57, 92D40
## 1 Introduction
Understanding the complex dynamics of predator-prey interactions through mathematical modelling is a key topic in evolutionary biology and ecology. Since the seminal work of Lotka [1] and Volterra [2], mathematical models have been developed to explore a diverse range of complex ecological phenomena that include Allee effects [3], group defense [4], hunting cooperation [5], intra-guild predation [6], and many others [7]. The incorporation of the movement of the population makes these models more realistic in the field of spatial ecology [8]. The movement of a population can be due to various reasons. The main reasons include the random motion of individuals and the directed motion of individuals in response to the gradient of some signals, among others. The first one is modelled using a diffusive flux and the latter one uses an advective flux in which the velocity field depends on the gradient of the signal.
Let \(N(T)\) and \(P(T)\) respectively be the densities of the prey and predator populations at time \(T.\) The Bazykin type prey-predator model [9; 10; 11], which incorporates intra-species competition among predators, is given by:
\[\frac{dN}{dT} =NG(N)-F(N)P, \tag{1a}\] \[\frac{dP}{dT} =\zeta F(N)P-\gamma P-\delta P^{2}, \tag{1b}\]
where \(G(N)\) denotes the per capita growth rate of the prey species, \(F(N)\) is the prey-dependent functional response and \(\zeta\) is the conversion coefficient. Further, \(\gamma\) and \(\delta\) respectively denote the natural mortality rate and intra-species competition of the predator population.
The prey-predator interaction relies completely on the functional response function \(F(N)\). Therefore, the selection of an appropriate functional response is critical in determining the outcome of the predator-prey dynamics. The literature commonly favors strictly monotonic functional responses that increase with prey density and saturate at a finite upper bound, such as \(\dfrac{\alpha N}{1+\beta N}\) (Holling type-II) [12], \(\dfrac{\alpha N^{2}}{1+\beta N^{2}}\) (Holling type-III) [13], \(\alpha(1-e^{-\beta N})\) (Ivlev) [14], \(\dfrac{\alpha N^{2}}{1+\mu N+\beta N^{2}}\) (Sigmoidal) [15], and \(\alpha\tanh(\beta N)\) (hyperbolic tangent) [16]. However, experimental and observational evidence suggests that the assumption of monotonicity in the functional response is not always valid. For example, the experiment by Andrews [17] in microbial dynamics suggests that higher nutrient concentrations can result in an inhibitory effect on microorganisms, which can be modelled using the Monod-Haldane function \(F(N)=\dfrac{\alpha N}{1+\mu N+\beta N^{2}}.\) Sokol and Howell [18] conducted a study on the uptake of phenol by Pseudomonas putida in continuous culture, where they found that a simplified form of the Monod-Haldane function, \(F(N)=\dfrac{\alpha N}{1+\beta N^{2}}\), provided a better fit to their experimental data. The main characterization of a non-monotonic functional response is that there exists an \(N_{m}\) such that
\[F^{\prime}(N)=\begin{cases}\geq 0&\text{for }0\leq N\leq N_{m},\\ <0&\text{for }N>N_{m},\end{cases}\quad\text{ and }F(N)\to 0\text{ as }N\to\infty. \tag{2}\]
Group defense refers to the ability of the prey species to defend or conceal themselves more effectively, leading to a reduction or complete prevention of predation when their population size is sufficiently large [19; 20]. Tener [21] provided a clear example of this phenomenon in which lone musk oxen are vulnerable to wolf attacks, while small herds of two to six oxen are occasionally attacked, but larger herds are never successfully attacked. This same pattern was also observed by Holmes and Bethel [22] in their study of insect populations, where large swarms of insects made it difficult for predators to identify individual prey. Clearly, group defense can be incorporated into a prey-predator system using a non-monotonic functional response.
Here, we consider a Bazykin prey-predator model with a logistic growth rate for the prey species and a simplified form of the Monod-Haldane function, \(F(N)=\dfrac{\alpha N}{1+\beta N^{2}}\), for the functional response. The governing equations are
\[\dfrac{dN}{dT} =N(\sigma-\eta N)-\dfrac{\alpha NP}{1+\beta N^{2}}, \tag{3a}\] \[\dfrac{dP}{dT} =\dfrac{\zeta\alpha NP}{1+\beta N^{2}}-\gamma P-\delta P^{2}, \tag{3b}\]
where \(\sigma\) and \(\eta\) respectively represent the intrinsic growth rate and intra-species competition of the prey species.
Self-organized spatio-temporal pattern formation is a fundamental process that plays a critical role in comprehending diverse intricate ecological phenomena in nature. The pioneering work of Turing on chemical morphogenesis [23] has played a pivotal role in advancing our understanding of pattern formation using the reaction-diffusion (RD) theory. RD models have since been extensively studied to explain the formation of patterns in various complex biological systems such as fish skin [24], mussel bed distribution [25], insect wings [26], predator-prey interactions [27], terrestrial vegetation [28], and many others [8]. Spatial Turing and temporal Hopf instabilities are key mechanisms in the development of spatio-temporal patterns. The diffusion-driven Turing instability takes place when a small amplitude spatial perturbation about a stable homogeneous steady-state becomes unstable, resulting in stationary patterns like stripes, spots, or a mixture of both [29, 30]. In addition to stationary patterns, various dynamic patterns that include traveling waves, periodic traveling waves, target patterns, spiral patterns, quasi-periodic spatial patterns, and even spatio-temporal chaotic patterns are typically found due to the Hopf instability and Turing-Hopf instability [31, 32].
The diffusion mechanism corresponds to the random movement of species from areas of higher concentration to areas of lower concentration. On the other hand, taxis is a directional movement of species in response to a particular stimulus, such as phototaxis due to light [33, 34] and chemotaxis due to chemical gradients [35, 36]. Taxis allows species to move towards or away from their stimulus, and it is critical in many ecological processes, including foraging, mating, migration, photosynthesis, and dispersal [33, 35, 37]. There are two types of taxis in a prey-predator system, depending on the movement of the species [38]. The directional movement of the predator species in response to the prey density is called prey-taxis, and the opposite is called predator-taxis. These types of taxis-based movements play a significant role in the formation of complex spatial patterns [38, 39, 40, 41]. An RD model with Rosenzweig-MacArthur kinetics is unable to form Turing structures [30], whereas the same system in the presence of taxis can show stationary Turing patterns [40, 42]. We extend the temporal model (3) to include random movement due to diffusion and directed movement of the predator population because of prey-taxis. Thus, our spatio-temporal model becomes
\[\frac{\partial N}{\partial T} =D_{1}\nabla^{2}N+N(\sigma-\eta N)-\frac{\alpha NP}{1+\beta N^{2} },\;X\in\tilde{\Omega},\;T>0, \tag{4a}\] \[\frac{\partial P}{\partial T} =D_{2}\nabla^{2}P+\nabla\cdot(\chi(N)P\nabla N)+\frac{\zeta \alpha NP}{1+\beta N^{2}}-\gamma P-\delta P^{2},\;X\in\tilde{\Omega},\;T>0, \tag{4b}\]
where \(\tilde{\Omega}\subset\mathbb{R}^{n}\) is a bounded domain with boundary \(\partial\tilde{\Omega}.\) Further, \(D_{1}\) and \(D_{2}\) are the self-diffusion coefficients of the prey and predator species, respectively, and \(\chi\) is the prey-taxis coefficient. Note that \(\chi<0\) and \(\chi>0\) correspond to attractant and repellent prey-taxis, respectively [43, 44]. Due to group defense by the prey species, the predator species prefer low-density prey areas for their hunting and avoid high-density prey areas. Our temporal model incorporates the group defense in the prey species through a non-monotonic functional response. To take into account the group defense-induced prey-taxis in the RD model, we take \(\chi(N)=\chi_{0}>0\), where \(\chi_{0}\) is a constant, for simplicity [35, 44, 45]. A dimensionless version of (4), using \(u=\frac{\eta}{\sigma}N\), \(v=\frac{\delta}{\sigma}P\), \(t=\sigma T\) and \(x=\sqrt{\frac{\sigma}{D_{1}}}X\) for the dimensionless prey, predator, time and space variables, is
\[\frac{\partial u}{\partial t} =\nabla^{2}u+u(1-u)-\frac{auv}{1+bu^{2}},\;x\in\Omega,\;t>0 \tag{5a}\] \[\frac{\partial v}{\partial t} =d\nabla^{2}v+c\nabla\cdot(v\nabla u)+\frac{eauv}{1+bu^{2}}-fv-v^ {2},\;x\in\Omega,\;t>0, \tag{5b}\]
where \(a=\dfrac{\alpha}{\delta}\), \( b=\dfrac{\beta\sigma^{2}}{\eta^{2}}\), \(c=\dfrac{\chi\sigma}{D_{1}\eta}\), \(d=\dfrac{D_{2}}{D_{1}}\), \(e=\dfrac{\zeta\delta}{\eta}\) and \(f=\dfrac{\gamma}{\sigma}\), are dimensionless positive parameters. Here, \(\Omega\) is the dimensionless domain with corresponding boundary \(\partial\Omega\), and \(d\) is the ratio of self-diffusion of predator and prey. Further, \(c>0\) represents the dimensionless prey-taxis coefficient that characterizes the tendency of the predator population to keep away from the high-density prey areas. The system (5) is subjected to non-negative initial conditions \(u(x,0)\equiv u_{0}(x)\) and \(v(x,0)\equiv v_{0}(x)\) for \(x\in\Omega\), and no-flux boundary conditions \(\dfrac{\partial u}{\partial n}=\dfrac{\partial v}{\partial n}=0\) for \(x\in\partial\Omega\) and \(t>0\).
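For readers who want to experiment with (5) directly, the following is a minimal one-dimensional finite-difference sketch with no-flux boundaries. It is not the scheme used for the simulations reported later; the parameter values, grid, time step, and initial data are illustrative choices, and the taxis term \(c\nabla\cdot(v\nabla u)\) is discretized in conservative flux form so that the no-flux condition is enforced exactly.

```python
# Minimal explicit finite-difference sketch of the dimensionless system (5)
# on [0, L] with no-flux (Neumann) boundaries. All values below are
# illustrative; patterns emerge only if c exceeds the Turing threshold c_T.
import numpy as np

a, b, e, f = 7.0, 5.65, 0.95, 1.0
c, d = 5.0, 1.0                       # prey-taxis and diffusion-ratio parameters
L, nx = 100.0, 512
dx = L / nx
dt = 0.1 * dx**2 / max(1.0, d)        # conservative explicit time step

rng = np.random.default_rng(0)
u = 0.3 + 1e-2 * rng.standard_normal(nx)   # small noise about a rough guess
v = 0.15 + 1e-2 * rng.standard_normal(nx)  # for the homogeneous state E_1^*

def laplacian(w):
    # Neumann boundaries via ghost-cell reflection
    wg = np.concatenate(([w[0]], w, [w[-1]]))
    return (wg[:-2] - 2.0 * w + wg[2:]) / dx**2

def taxis(u, v):
    # div(c v grad u) in flux form; the wall fluxes are set to zero
    ug = np.concatenate(([u[0]], u, [u[-1]]))
    vg = np.concatenate(([v[0]], v, [v[-1]]))
    flux = c * 0.5 * (vg[:-1] + vg[1:]) * (ug[1:] - ug[:-1]) / dx
    flux[0] = flux[-1] = 0.0
    return (flux[1:] - flux[:-1]) / dx

for _ in range(int(200.0 / dt)):
    predation = a * u * v / (1.0 + b * u**2)
    du = laplacian(u) + u * (1.0 - u) - predation
    dv = d * laplacian(v) + taxis(u, v) + e * predation - f * v - v**2
    u, v = u + dt * du, v + dt * dv

print("prey range after t = 200:", u.min(), u.max())
```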
Long transient dynamics is currently an important topic for predicting and managing ecological systems in the face of environmental change [46]. It refers to slowly varying dynamics over a long period of time before the system reaches its final state. These long transient dynamics can be complex and unpredictable, and may appear stationary, oscillatory, or even chaotic [46, 47, 48]. In the context of dynamical systems, these transient dynamics are guided by various temporal and spatio-temporal bifurcations. The length of the transient period depends on the distance of the control parameter from the bifurcation threshold and on the initial conditions [46, 49]. Here, we show interesting long transient dynamics of the temporal and spatio-temporal systems, which are unpredictable in their final state.
The predator species do not have any cooperative behaviour, owing to the Bazykin-type reaction kinetics, in contrast to the prey species, for which group defense is a kind of cooperative behaviour [9, 10, 11]. To the best of our knowledge, this is the first study to investigate group defense-induced prey-taxis in an ecological model. For the temporal model, we perform a bifurcation analysis and illustrate some representative dynamics through one- and two-parameter bifurcation diagrams. We establish the global existence and boundedness of solutions for the spatio-temporal model with Neumann boundary conditions. The stability of the homogeneous steady states and Turing instability are discussed for the spatio-temporal model. The existence of a non-homogeneous stationary solution is shown using weakly nonlinear analysis (WNA), which is then validated against numerical solutions. A key strength of our work is the investigation of long transient dynamics. We establish the roles of local and global bifurcations in the stationary and oscillatory transient dynamics exhibited by the temporal model. The fate of these transient dynamics under the influence of diffusion and taxis has also been investigated. For certain parameter values, the appearance of a steady homogeneous solution inside the Turing domain, and of a solution that is non-homogeneous in space but oscillatory in time inside the Hopf domain, are other key findings of our work.
The temporal model, together with its equilibria, stability, and local and global bifurcations, is described in Section 2. This section also contains the long transient dynamics with numerical visualization. Section 3 consists of the global existence and boundedness of the spatio-temporal solution, the stability analysis of the homogeneous steady states, and the Turing bifurcation. The derivation of the amplitude equation using WNA is carried out in Section 4. Using numerical simulations, we validate the results of the WNA in Section 5. This section also contains an extensive range of numerical simulations that show long transient dynamics as well as various interesting stationary and dynamic patterns. Finally, the paper concludes with a comprehensive discussion in Section 6.
## 2 Temporal model
Here, we discuss temporal dynamics corresponding to the homogeneous system of (5), i.e.,
\[\dfrac{du}{dt} =u(1-u)-\dfrac{auv}{1+bu^{2}}\equiv F_{1}(u,v)\equiv uf_{1}(u,v), \tag{6a}\] \[\dfrac{dv}{dt} =\dfrac{eauv}{1+bu^{2}}-fv-v^{2}\equiv F_{2}(u,v)\equiv vf_{2}(u, v). \tag{6b}\]
The system (6) is subjected to non-negative initial conditions \(u(0)\geq 0\), \(v(0)\geq 0\).
### Positivity and boundedness
**Theorem 1**.: _Every solution \((u(t),v(t))\) of the system (6) with non-negative initial condition remains bounded and non-negative for all time \(t\)._
Proof.: The proof is straightforward and has been omitted.
### Existence and stability of equilibria
Here, we discuss the number and types of all possible equilibria in \(\mathbb{R}^{2}_{+}=\{(u,v):u\geq 0,v\geq 0\}\). The system (6) has the trivial equilibrium point \(E_{0}(0,0)\) and the axial equilibrium point \(E_{1}(1,0)\) irrespective of parameter values.
An interior equilibrium \(E_{j}^{*}(u_{j}^{*},v_{j}^{*})\) (where \(j\) can be 1,2 or 3) is a point of intersection of the non-trivial prey nullcline \(v=n(u):=\dfrac{(1+bu^{2})(1-u)}{a}\) and predator nullcline \(v=p(u):=\dfrac{eau}{bu^{2}+1}-f\).
Figure 1: Location of nullclines and equilibria of the system (6) for all possible cases. Here blue and red color curves denote the prey and predator nullclines respectively and dots represent various equilibria. Green and black dots represent the stable and saddle equilibria respectively and the stability of magenta dots depends on the Hopf bifurcation.
The prey component \(u_{j}^{*}\) of the interior equilibrium \(E_{j}^{*}\) satisfies
\[Q(u)\equiv b^{2}u^{5}-b^{2}u^{4}+2bu^{3}-b\left(af+2\right)u^{2}+\left(a^{2}e+1 \right)u-\left(af+1\right)=0, \tag{7}\]
and the component \(v_{j}^{*}\) is obtained from
\[v_{j}^{*}=\frac{eau_{j}^{*}}{bu_{j}^{*2}+1}-f.\]
Now, \(n(u)\geq 0\) for \(u\leq 1\) and \(p(u)\) intersects \(u\) axis at two points, say \((u_{a},0)\) and \((u_{b},0)\), where
\[u_{a}=\frac{ea-\sqrt{e^{2}a^{2}-4bf^{2}}}{2bf}\ \ \text{and}\ \ u_{b}=\frac{ea+ \sqrt{e^{2}a^{2}-4bf^{2}}}{2bf},\ \ \text{whenever}\ e^{2}a^{2}\geq 4bf^{2}.\]
Thus, we must have \(u_{a}<u^{*}<\min\{1,u_{b}\}\) for feasibility of \(E^{*}\). Also, \(p(u)\) has a maximum at \(u_{M}=\frac{1}{\sqrt{b}}\). The system (6) has at least one coexisting equilibrium point when \(p(u_{M})\geq n(u_{M}).\) In the case of two equilibria, we label them as \(E_{1}^{*}(u_{1}^{*},v_{1}^{*})\) and \(E_{2}^{*}(u_{2}^{*},v_{2}^{*})\) with \(0<u_{1}^{*}<\frac{1}{\sqrt{b}}<u_{2}^{*}<1\) [see Fig. 1(b)]. The system can have three equilibria for parameter values \(f_{SN_{2}}<f<f_{TC}\) with \(0<u_{1}^{*}<\frac{1}{\sqrt{b}}<u_{2}^{*}<u_{sn_{2}}<u_{3}^{*}<1\), where \(u_{sn_{2}}\) and \(f_{SN_{2}}\) are discussed in the next subsection. Depending on the parameter restrictions, we summarize the number of equilibria in Table 1. Next, we discuss the stability of the different equilibria. The Jacobian of the system (6) at a point \(E(u,v)\) is given by
\[J(E)=\begin{bmatrix}1-2u+\frac{av\left(bu^{2}-1\right)}{(bu^{2}+1)^{2}}&-\frac{au}{bu^{2}+1}\\ \frac{eav\left(1-bu^{2}\right)}{(bu^{2}+1)^{2}}&\frac{eau}{bu^{2}+1}-f-2v\\ \end{bmatrix}\equiv\begin{bmatrix}a_{10}&a_{01}\\ b_{10}&b_{01}\\ \end{bmatrix}. \tag{8}\]
Considering the Jacobian matrix (8) at an equilibrium point, we have the following propositions:
**Proposition 1**.: _The trivial equilibrium point \(E_{0}\) is always a saddle point and the axial equilibrium point \(E_{1}\) is a saddle point for \(f<f_{TC}:=\frac{ea}{b+1}\) and asymptotically stable for \(f>f_{TC}\)._
Proof.: Since the eigenvalues of \(J(E_{0})\) are \(1\) and \(-f,\) it is always a saddle point. The eigenvalues of \(J(E_{1})\) are \(-1\) and \(\frac{ea}{b+1}-f.\) Therefore, \(E_{1}\) is asymptotically stable when \(f>f_{TC}\) and a saddle point when \(f<f_{TC}\).
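Before analyzing the coexisting equilibria, note that they are simply the real roots of the quintic (7) lying in the feasibility window. A minimal numerical sketch follows (assuming NumPy; the parameter values are those used later in the transient-dynamics discussion around Fig. 5):

```python
# Locate coexisting equilibria of (6): real roots of the quintic Q(u) in (7)
# with 0 < u < 1 and v* = e a u/(1 + b u^2) - f > 0.
import numpy as np

def equilibria(a, b, e, f):
    Q = [b**2, -b**2, 2.0*b, -b*(a*f + 2.0), a**2*e + 1.0, -(a*f + 1.0)]
    eqs = []
    for u in np.roots(Q):
        if abs(u.imag) < 1e-10 and 0.0 < u.real < 1.0:
            v = e*a*u.real / (b*u.real**2 + 1.0) - f
            if v > 0.0:
                eqs.append((u.real, v))
    return sorted(eqs)         # E_1^*, E_2^*, E_3^* in increasing u

print(equilibria(a=7, b=7, e=0.95, f=0.802))   # three coexisting equilibria
```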
**Proposition 2**.: _For \(p(u_{M})\geq n(u_{M})\) and \(e^{2}a^{2}\geq 4bf^{2}\), the system (6) has at least one coexisting equilibrium and at most three different coexisting equilibria. The following hold for the stability of the co-existing equilibria:_
1. \(E_{1}^{*}\) _is asymptotically stable for_ \(b<b_{H}\) _and unstable for_ \(b>b_{H}\)_, where_ \(b_{H}\) _is defined in the text._
2. _Whenever_ \(E_{2}^{*}\) _exists, it is always a saddle point._
3. _Whenever_ \(E_{3}^{*}\) _exists, it is always asymptotically stable._
Proof.: The Jacobian \(J(E^{*})\), at a coexisting equilibrium point \(E^{*}(u^{*},v^{*})\), is given by
\[J(E^{*})=\begin{bmatrix}u\frac{\partial f_{1}}{\partial u}&u\frac{\partial f_{1}}{\partial v}\\ v\frac{\partial f_{2}}{\partial u}&v\frac{\partial f_{2}}{\partial v}\\ \end{bmatrix}_{(u^{*},v^{*})}=\begin{bmatrix}-u\frac{\partial f_{1}}{\partial v}\frac{dv^{\,(f_{1})}}{du}&u\frac{\partial f_{1}}{\partial v}\\ -v\frac{\partial f_{2}}{\partial v}\frac{dv^{\,(f_{2})}}{du}&v\frac{\partial f_{2}}{\partial v}\\ \end{bmatrix}_{(u^{*},v^{*})},\]
where \(\frac{dv^{\,(f_{j})}}{du}\) represents the slope of the tangent to the curve \(f_{j}(u,v)=0\) (\(j=1,2\)). Also,
\[\det(J(E^{*}))=\left(uv\frac{\partial f_{1}}{\partial v}\frac{\partial f_{2}}{ \partial v}\left(\frac{dv^{\,(f_{2})}}{du}-\frac{dv^{\,(f_{1})}}{du}\right) \right)_{(u^{*},v^{*})}. \tag{9}\]
We observe that
\[\frac{\partial f_{1}(u^{*},v^{*})}{\partial v}=-\frac{a}{1+bu^{2}}<0\quad \text{and}\quad\frac{\partial f_{2}(u^{*},v^{*})}{\partial v}=-1.\]
Suppose that \(\theta_{1}\) and \(\theta_{2}\) respectively denote the inclination angles of the tangents to \(f_{1}(u,v)=0\) and \(f_{2}(u,v)=0\) at \(E_{2}^{*}\) [see Fig. 1(b)]. We find that \(\frac{\pi}{2}<\theta_{2}<\theta_{1}<\pi\) holds whenever \(E_{2}^{*}\) exists, which implies
\[\left.\frac{dv^{\,(f_{2})}}{du}\right|_{(u_{2}^{*},v_{2}^{*})}<\left.\frac{dv^ {\,(f_{1})}}{du}\right|_{(u_{2}^{*},v_{2}^{*})}.\]
Hence, we have \(\det(J(E_{2}^{*}))<0\) from (9) and therefore \(E_{2}^{*}\) is a saddle point. Similarly, we obtain \(\det(J(E_{1}^{*}))>0\) for \(E_{1}^{*}\). Using the Routh-Hurwitz stability criterion, the coexisting equilibrium point \(E_{1}^{*}\) is asymptotically stable if \(\operatorname{tr}(J(E_{1}^{*}))<0\), which holds when
\[b<b_{H}:=\frac{u_{1}^{*}+v_{1}^{*}}{{u_{1}^{*}}^{2}\left(2-3u_{1}^{*}-v_{1}^{ *}\right)}.\]
Thus, \(E_{1}^{*}\) is asymptotically stable for \(b<b_{H}\) and unstable for \(b>b_{H}\).
The coexisting equilibrium point \(E_{3}^{*}\) exchanges stability with stable \(E_{1}\) through a transcritical bifurcation at \(f=f_{TC}\) (discussed in the next subsection) and it is feasible for \(f<f_{TC}\). Therefore, it is asymptotically stable whenever it exists.
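Because \(b_{H}\) is defined only implicitly, it is convenient to locate it numerically by root-finding on \(\operatorname{tr}(J(E_{1}^{*}))\) as a function of \(b\). A sketch follows (reusing `equilibria()` from the previous snippet; the bracket \([5,6.5]\) is an illustrative assumption, and \(f=1.06\) is chosen so that the result can be compared with the value \(b_{H}\approx 5.8234\) quoted in Section 2.3.4):

```python
# Numerically solve tr(J(E_1^*)) = 0 for the Hopf threshold b_H.
from scipy.optimize import brentq

def trace_at_E1(b, a=7.0, e=0.95, f=1.06):
    u, v = equilibria(a, b, e, f)[0]      # E_1^* is the smallest feasible root
    a10 = 1.0 - 2.0*u + a*v*(b*u**2 - 1.0)/(b*u**2 + 1.0)**2
    b01 = -v                              # = e a u/(1 + b u^2) - f - 2 v
    return a10 + b01

print(brentq(trace_at_E1, 5.0, 6.5))      # should be close to 5.8234
```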
### Local bifurcation analysis
Here, we discuss the transcritical, saddle-node, cusp, Hopf, Bautin (generalized Hopf, GH), and Bogdanov-Takens bifurcations exhibited by the system (6).
#### 2.3.1 Transcritical bifurcation
**Proposition 3**.: _The temporal model (6) encounters a transcritical bifurcation at \(E_{1}\) when the parameter \(f\) satisfies the threshold \(f=f_{{}_{TC}}=\frac{ea}{b+1}\)._
| Equilibria | Existence criteria | Stability criteria |
| --- | --- | --- |
| \(E_{0}\) | independent of parameter values | saddle point |
| \(E_{1}\) | independent of parameter values | saddle point if \(f<f_{TC}\); asymptotically stable if \(f>f_{TC}\) |
| \(E_{1}^{*}\) | \(p(u_{M})\geq n(u_{M})\) and \(e^{2}a^{2}\geq 4bf^{2}\) | asymptotically stable if \(b<b_{H}\); unstable if \(b>b_{H}\) |
| \(E_{2}^{*}\) | \(p(u_{M})>n(u_{M})\), \(e^{2}a^{2}\geq 4bf^{2}\), with either \(u_{b}<1\), or \(u_{b}>1\) and \(f_{SN_{2}}<f<f_{TC}\) | saddle point |
| \(E_{3}^{*}\) | \(p(u_{M})>n(u_{M})\), \(e^{2}a^{2}\geq 4bf^{2}\), \(u_{b}>1\), and \(f_{SN_{2}}<f<f_{TC}\) | asymptotically stable |

Table 1: Existence and stability of different equilibria of the system (6).
Proof.: The Jacobian matrix \(J(E_{1})\) given in (8) has a zero eigenvalue at \(f=f_{{}_{TC}}\). Let the eigenvectors of the Jacobian matrix \(J(E_{1})\) and its transpose \(J(E_{1})^{T}\) corresponding to the zero eigenvalue be \(\boldsymbol{\zeta}=[\frac{a}{b+1},1]^{T}\) and \(\boldsymbol{\eta}=[1,0]^{T}\) respectively. Now, the transversality conditions [50] become
\[\boldsymbol{\eta}^{T}\mathcal{F}_{f}(E_{1};f=f_{{}_{TC}}) = 0,\] \[\boldsymbol{\eta}^{T}D\mathcal{F}_{f}(E_{1};f=f_{{}_{TC}}) \boldsymbol{\zeta} = 0,\] \[\boldsymbol{\eta}^{T}D^{2}\mathcal{F}(E_{1};f=f_{{}_{TC}})( \boldsymbol{\zeta},\boldsymbol{\zeta}) = -4\,\frac{a^{2}}{(b+1)^{3}}\neq 0.\]
Here, \(\mathcal{F}=[F_{1}(u,v),F_{2}(u,v)]^{T}\) and all the other notations are the same as in [50]. Thus, all the transversality conditions of degenerate transcritical bifurcation [51, 52] are satisfied for the system (6) when \(f=f_{{}_{TC}}\).
#### 2.3.2 Saddle-node bifurcation
Suppose that for the polynomial \(Q\) given in (7), the equation \(Q^{\prime}(u)=0\) has two positive real roots, say \(u_{sn_{1}}\) and \(u_{sn_{2}}\) with \(0<u_{sn_{1}}<u_{sn_{2}}\). Then, opposite signs of \(Q(u_{sn_{1}})\) and \(Q(u_{sn_{2}})\) lead to three positive roots \(u\) of the equation \(Q(u)=0\) with \(u_{1}^{*}<u_{sn_{1}}<u_{2}^{*}<u_{sn_{2}}<u_{3}^{*}\) [see Fig. 2(a)]. Note that for the feasibility of the coexisting equilibria, we must have \(u_{i}^{*}<\min\{1,u_{b}\}\) for \(i=1,2,3\). A variation in one temporal parameter may result in two cases: either \(E_{2}^{*}\) coincides with \(E_{1}^{*}\) when \(Q(u_{sn_{1}})=0\) [see Fig. 2(b)], or \(E_{2}^{*}\) coincides with \(E_{3}^{*}\) when \(Q(u_{sn_{2}})=0\) [see Fig. 2(c)], provided \(u_{sn_{i}}<\min\{1,u_{b}\}\) for \(i=1,2\). Thus, a saddle-node bifurcation \(SN_{1}\) occurs in the former case and another saddle-node bifurcation \(SN_{2}\) occurs in the latter case. Taking \(f\) as the control parameter, the threshold value for \(SN_{1}\) is given by
\[f_{{}_{SN_{1}}}=\frac{5b^{2}u_{sn_{1}}^{4}-4b^{2}u_{sn_{1}}^{3}+6bu_{sn_{1}}^{2}-4bu_{sn_{1}}+a^{2}e+1}{2abu_{sn_{1}}}.\]
To obtain the threshold value \(f_{{}_{SN_{2}}}\), we replace \(u_{sn_{1}}\) with \(u_{sn_{2}}\) in the expression for \(f_{{}_{SN_{1}}}.\) In the following proposition, transversality conditions are verified for this bifurcation.
**Proposition 4**.: _If \(Q(u)=0\) has a double root \(u<\min\{1,u_{b}\}\), then the temporal system (6) exhibits a saddle-node bifurcation as the control parameter \(f\) is varied._
Proof.: Suppose that \(u_{sn}\), with \(u_{sn}<\min\{1,u_{b}\}\), is a double root of \(Q(u)=0\), i.e., \(Q(u_{sn})=Q^{\prime}(u_{sn})=0\) but \(Q^{\prime\prime}(u_{sn})\neq 0\), when \(f=f_{{}_{SN}}\). Let the corresponding interior equilibrium point be \(E_{SN}^{*}=(u_{sn},v_{sn})\).
Figure 2: Plot of \(Q(u)\) for three different cases.
Therefore, the nontrivial nullclines \(f_{1}(u,v)=0\) and \(f_{2}(u,v)=0\) touch each other at \(E_{SN}^{*},\) where both of them have the same slope \(\frac{dv^{(f_{1})}}{du}|_{E_{SN}^{*}}=\frac{dv^{(f_{2})}}{du}|_{E_{SN}^{*}}.\) Using \(\frac{dv^{(g)}}{du}=-\frac{\frac{\partial g}{\partial u}}{\frac{\partial g}{\partial v}},\) we find
\[\det(J(E_{{}_{SN}}^{*}))=\bigg{[}\ uv\left(\frac{\partial f_{1}}{\partial u} \frac{\partial f_{2}}{\partial v}-\frac{\partial f_{1}}{\partial v}\frac{ \partial f_{2}}{\partial u}\right)\bigg{]}_{E_{SN}^{*}}=0.\]
Therefore, the Jacobian matrix \(J(E_{SN}^{*})\) has a zero eigenvalue. Let \(\boldsymbol{\zeta}=[p,1]^{T}\) and \(\boldsymbol{\eta}=[1,q]^{T}\) respectively be the eigenvectors of \(J(E_{SN}^{*})\) and \([J(E_{SN}^{*})]^{T}\) corresponding to the zero eigenvalue, where
\[p=-\frac{a}{1+\left(3\,{u_{sn}}^{2}-2\,u_{sn}\right)b}\ \ \text{and}\ \ q=-\frac{au_{sn}}{\left(bu_{sn}^{2}+1\right)v_{sn}}.\]
To check the transversality conditions, we calculate
\[\boldsymbol{\eta}^{T}\mathcal{F}_{f}(E_{SN}^{*};f=f_{{}_{SN}})=\frac{au_{sn} }{{bu_{sn}}^{2}+1},\]
\[\boldsymbol{\eta}^{T}D^{2}\mathcal{F}(E_{SN}^{*};f=f_{{}_{SN}})(\boldsymbol{ \zeta},\boldsymbol{\zeta})=\Big{(}\frac{\partial^{2}F_{1}}{\partial u^{2}}p^ {2}+2\frac{\partial^{2}F_{1}}{\partial u\partial v}p+\frac{\partial^{2}F_{1} }{\partial v^{2}}+q(\frac{\partial^{2}F_{2}}{\partial u^{2}}p^{2}+2\frac{ \partial^{2}F_{2}}{\partial u\partial v}p+\frac{\partial^{2}F_{2}}{\partial v ^{2}})\Big{)}_{(E_{SN}^{*};f_{{}_{SN}})},\]
where we omit the explicit expression in the last equation since it is too cumbersome. The system (6) undergoes a non-degenerate saddle-node bifurcation [50] at \(f=f_{{}_{SN}}\) if \(\boldsymbol{\eta}^{T}\mathcal{F}_{f}(E_{SN}^{*};f=f_{{}_{SN}})\neq 0\) and \(\boldsymbol{\eta}^{T}D^{2}\mathcal{F}(E_{SN}^{*};f=f_{{}_{SN}})(\boldsymbol{\zeta},\boldsymbol{\zeta})\neq 0\). Clearly, the first condition is satisfied, and the last condition is verified numerically.
#### 2.3.3 Cusp bifurcation
We have found that a transcritical bifurcation \(TC\) occurs at \(E_{1}\) and a saddle-node bifurcation \(SN_{2}\) occurs at \(E_{SN_{2}}^{*}=(u_{sn_{2}},v_{sn_{2}})\) with variation of the control parameter \(f\). Now, \(E_{SN_{2}}^{*}\) and \(E_{1}\) coincide with the variation of a different temporal parameter \(b\). Thus, the saddle-node bifurcation curve \(SN_{2}\) and transcritical curve \(TC\) intersect at a cusp bifurcation point \((f_{{}_{CP}},b_{{}_{CP}})\) in the \(f\)-\(b\) parametric plane.
**Proposition 5**.: _The system (6) encounters a cusp bifurcation when \(Q(1)=Q^{\prime}(1)=0.\)_
Proof.: Suppose that at \(f=f_{{}_{CP}}\) and \(b=b_{{}_{CP}}\), the interior equilibria \(E_{2}^{*}\) and \(E_{3}^{*}\) coincide with the axial equilibrium point \(E_{1}\). Here, the curves \(TC\) and \(SN_{2}\), corresponding to the transcritical and saddle-node bifurcations respectively, meet at the cusp bifurcation point \((f_{{}_{CP}},b_{{}_{CP}})\) in the \(f\)-\(b\) plane. From Proposition 4, we know that \(Q(u)\) has a double root. Since the transcritical bifurcation happens simultaneously, \(Q(u)\) has the double root \(u=1\), i.e., \(Q(1)=Q^{\prime}(1)=0\), at the cusp bifurcation threshold. Here, \(f_{{}_{CP}}\) is a positive root of the equation \(2z^{3}-aez^{2}+ae^{2}=0\) and the corresponding \(b_{{}_{CP}}=(ae-f_{{}_{CP}})/f_{{}_{CP}}\).
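A quick numerical check of Proposition 5 (assuming NumPy): for \(a=7\) and \(e=0.95\), the cubic below has two positive roots, and the smaller of them reproduces the cusp coordinates \((f_{{}_{CP}},b_{{}_{CP}})\approx(1.2270,4.4195)\) reported in Section 2.4.

```python
# Compute the cusp point: f_CP solves 2 z^3 - a e z^2 + a e^2 = 0 and
# b_CP = (a e - f_CP)/f_CP. The smaller positive root matches Fig. 3.
import numpy as np

a, e = 7.0, 0.95
roots = np.roots([2.0, -a*e, 0.0, a*e**2])
f_cp = min(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)
b_cp = (a*e - f_cp) / f_cp
print(f_cp, b_cp)   # approximately (1.2270, 4.4195)
```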
#### 2.3.4 Hopf and Bautin bifurcation
At a Hopf-bifurcation threshold, a stable equilibrium point changes stability and a limit cycle is generated that can be stable or unstable. In proposition 2_(i)_, we have observed that the interior equilibrium \(E_{1}^{*}\) changes its stability when the trace of \(J(E_{1}^{*})\) changes its sign due to variation in \(b\). We thus have \(tr(J(E_{1}^{*}))=0\) at the Hopf bifurcation threshold \(b=b_{H}.\) Since \((u_{1}^{*},v_{1}^{*})\) depends on \(b\)
the threshold \(b_{H}\) is defined only implicitly. The temporal system (6) exhibits a Hopf bifurcation at \(b=b_{{}_{H}}\) if the non-hyperbolicity and transversality conditions given below are satisfied:
\[H1: \det[J(E_{1}^{*};b=b_{{}_{H}})]>0,\] \[H2: \frac{d}{db}(tr[J(E_{1}^{*})])|_{b=b_{{}_{H}}}\neq 0.\]
Hopf bifurcations are classified as supercritical or subcritical depending on the stability of the generated limit cycle. A supercritical Hopf bifurcation occurs when the limit cycle is stable, whereas a subcritical Hopf bifurcation occurs when it is unstable. The first case corresponds to a negative first Lyapunov coefficient \(l_{1}<0\), whereas the second one corresponds to \(l_{1}>0\) [50]. Due to the unavailability of an explicit expression for the interior equilibrium \(E_{1}^{*}\), it is difficult to determine the sign of \(l_{1}\) analytically. However, we obtain the value of \(l_{1}\) numerically using the MatCont software. If we fix the temporal parameters \(a=7\), \(f=1.05\) and \(e=0.95\), then the system (6) exhibits a subcritical Hopf bifurcation around \(E_{1}^{*}=(0.2412,0.1455)\) at \(b_{{}_{H}}=5.8759\) with \(l_{1}=0.0085.\) For the same parameter values except \(f=1.06\), a supercritical Hopf bifurcation is found around \(E_{1}^{*}=(0.2443,0.1455)\) at \(b_{H}=5.8234\) with \(l_{1}=-0.043\).
Clearly, the first Lyapunov coefficient \(l_{1}\) vanishes between \(f=1.05\) and \(f=1.06\) with \(a=7\) and \(e=0.95.\) When \(l_{1}\) becomes zero, the system (6) undergoes a codimension-2 bifurcation known as a Bautin or generalized Hopf (GH) bifurcation. A global curve of saddle-node bifurcations of limit cycles, where a stable limit cycle collides with an unstable one, emerges from the GH point in the two-parameter bifurcation plane. The system (6) undergoes a Bautin bifurcation at \(E_{1}^{*}=(0.2417,0.1455)\) for the bifurcating parameter values \(f_{{}_{GH}}=1.0517\) and \(b_{{}_{GH}}=5.8671\) with \(a=7\) and \(e=0.95\).
#### 2.3.5 Bogdanov-Takens bifurcation
A Bogdanov-Takens (BT) bifurcation is a codimension-2 bifurcation that occurs in a dynamical system when a Hopf bifurcation collides with a saddle-node bifurcation. The system (6) exhibits a Hopf bifurcation and two saddle-node bifurcations, which suggests that a BT bifurcation may occur in our system. Both the determinant and the trace of the Jacobian matrix, evaluated at an equilibrium point, vanish simultaneously at a BT bifurcation point. A homoclinic or heteroclinic global bifurcation curve emanates from the BT point in the two-parameter bifurcation plane. Here, we take \(f\) and \(b\) as control parameters for the BT bifurcation. For fixed parameter values \(a=7\) and \(e=0.95\), the system (6) exhibits a BT bifurcation for \(E_{1}^{*}(0.4086,0.1637)\) at \((f_{{}_{BT}},b_{{}_{BT}})=(1.2388,5.6146)\).
### Numerical visualisation
Here, we visualize the previously described local and global bifurcations with the help of numerical simulations. For fixed parameter values \(a=7\), \(e=0.95\), we plot a two-dimensional bifurcation diagram in the \(f\)-\(b\) parametric plane (see Fig. 3). The coordinates of the cusp point (CP) are (1.2270, 4.4195), and the coordinates of the GH and BT points have already been mentioned earlier.
To better understand how the system dynamics change across the local and global bifurcation curves in Fig. 3, we consider four different values of \(b\) and plot their corresponding one-parametric bifurcation diagrams in Fig. 4. For \(b=4.2\), equilibria \(E_{1}^{*}\) and \(E_{2}^{*}\) appear in the system through the saddle-node bifurcation SN\({}_{1}\), and \(E_{2}^{*}\) disappears from feasibility region through a transcritical bifurcation TC [see Fig. 4(a)]. For \(b=5.2\), two qualitative changes are observed [see Fig. 4(b)]. First, the system (6) exhibits another saddle-node bifurcation SN\({}_{2}.\) Second, two supercritical Hopf bifurcations occur around \(E_{1}^{*}\) which leads to an oscillatory coexisting solution between two Hopf
bifurcation thresholds. Note that these two Hopf bifurcation points lie on the same Hopf curve (marked in red) below the BT point in Fig. 3. The system (6) shows bistability between \(E_{1}^{*}\) and \(E_{3}^{*}\) for \(f_{SN_{2}}<f<f_{TC}\).
Figures 4(a) and (b) correspond to parameter values \(b<b_{BT}.\) Now, we consider the case \(b>b_{BT}\), for which two bifurcation diagrams are shown in Figs. 4(c) and (d). Here, the coexisting equilibrium point \(E_{1}^{*}\), generated through the saddle-node bifurcation SN\({}_{1}\), is unstable, in contrast to the stable case for \(b<b_{BT}\). The Hopf bifurcating limit cycle disappears by colliding with the coexisting equilibrium \(E_{2}^{*}\) through a homoclinic bifurcation for \(b_{BT}<b<b_{GH}\) [see Fig. 4(c)]. For \(b>b_{GH}\), the Hopf bifurcation becomes subcritical and the corresponding bifurcation diagram is shown in Fig. 4(d). A stable and an unstable limit cycle are generated through a global homoclinic bifurcation (HOM) and a subcritical Hopf bifurcation, respectively. These two limit cycles collide and disappear from the system dynamics through a saddle-node bifurcation of limit cycles (SNLC). Interestingly, the system shows tristability among three attractors, namely the two equilibria \(E_{1}^{*}\) and \(E_{1}\) and the stable limit cycle around \(E_{1}^{*}\), for \(f_{SNLC}<f<f_{H}.\) The unstable limit cycle around \(E_{1}^{*}\) and the stable manifold of \(E_{2}^{*}\) act as separatrices of these three attractors.
#### Temporal transient dynamics
Here we explore some interesting long temporal transient dynamics of the system (6) due to the global and local bifurcations. We consider the same parameter values as in Fig. 4(d), and \(f\) is chosen near the SN\({}_{2}\) threshold \(f_{SN_{2}}=0.801336.\) Long transient dynamics are observed in the time evolution of \(u\) for values of \(f\) near \(f_{SN_{2}}\), starting from the same initial condition \((u(0),v(0))\equiv(1.4,0.05)\). To investigate this in detail, we consider the time evolution of \(u\) for three different values of \(f\) near \(f_{SN_{2}}\) [see Fig. 5(a)]. For \(f=0.802\), the equilibrium point \(E_{3}^{*}\) is asymptotically stable and the
Figure 3: Two parametric bifurcation diagram in the \(f\)-\(b\) plane. Magenta, blue, and red colour curves represent the transcritical, saddle-node, and Hopf bifurcations respectively. Here, the upper and lower blue curves represent the SN\({}_{1}\) and SN\({}_{2}\), respectively. The green and black dashed curves denote the global curves, namely homoclinic and saddle-node bifurcation of limit cycles, respectively. Further, colored solid dots represent codimension-2 bifurcation points. Other parameter values are \(a=7\) and \(e=0.95\).
time evolution of \(u\) approaches \(E_{3}^{*}\) rapidly. Note that \(E_{3}^{*}\) disappears for \(f<f_{SN_{2}}=0.801336\) due to a saddle-node bifurcation. The time evolution of \(u\) also rapidly approaches the globally stable equilibrium point \(E_{1}^{*}\) for \(f=0.80\). However, if we take \(f=0.8013\), then \(u\) spends a considerable amount of time near the solution for \(f=0.802\), even though the equilibrium point \(E_{3}^{*}\) is no longer present for \(f=0.8013\); the time evolution of \(u\) finally settles into the globally stable equilibrium state \(E_{1}^{*}\). Here, a shadow of \(E_{3}^{*}\) acts as a ghost attractor in the system (6), which causes such long transient dynamics. This long transient behaviour is also observed for other initial conditions: the time evolution of \(u\) shows similar behaviour for \(f=0.8013\) if the initial condition lies in the shaded region shown in Fig. 5(b). When a trajectory passes through the narrow region between the nullclines [see the inset of Fig. 5(b)], both its \(u\)-directional and \(v\)-directional velocities become very small. This leads to the phenomenon of a ghost attractor and long transient dynamics in the system. Figure 5(a) also illustrates hysteresis exhibited by the system (6): a small change in the parameter \(f\) results in a significant drop (from \(E_{3}^{*}\) to \(E_{1}^{*}\)) in the steady-state solution.
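A sketch of this experiment (assuming SciPy; the integration horizon and sampling times are illustrative choices): the system (6) is integrated from \((u(0),v(0))=(1.4,0.05)\) at \(f=0.8013\), just below \(f_{SN_{2}}\), so that \(u(t)\) lingers near the ghost of the vanished \(E_{3}^{*}\) before dropping to \(E_{1}^{*}\).

```python
# Ghost-attractor transient of Fig. 5: integrate (6) just below f_SN2.
from scipy.integrate import solve_ivp

a, b, e, f = 7.0, 7.0, 0.95, 0.8013

def rhs(t, y):
    u, v = y
    predation = a*u*v / (1.0 + b*u**2)
    return [u*(1.0 - u) - predation, e*predation - f*v - v**2]

sol = solve_ivp(rhs, (0.0, 4000.0), [1.4, 0.05], dense_output=True,
                rtol=1e-10, atol=1e-12)
for t in (100.0, 1000.0, 2000.0, 3500.0):   # u stays high, then collapses
    print(t, sol.sol(t)[0])
```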
The system (6) also exhibits oscillatory long transient dynamics due to the global SNLC bifurcation.
Figure 4: Bifurcation diagram of the system (6) against parameter \(f\) for different values of \(b\): (a) \(b=4.2\), (b) \(b=5.2\), (c) \(b=5.65\), and (d) \(b=7\). Here SN\({}_{1}\) and SN\({}_{2}\) represent saddle-node bifurcations; TC and H denote transcritical and Hopf bifurcations respectively. Also, SNLC and HOM represent the saddle-node bifurcation of limit cycles and homoclinic bifurcation respectively. Other parameter values are \(a=7\) and \(e=0.95\).
The coexisting equilibrium point \(E_{1}^{*}\) is surrounded by an unstable limit cycle, which is in turn surrounded by a stable limit cycle for \(f_{SNLC}=0.867805<f<f_{H}=0.883805.\) Both limit cycles collide and disappear from the system dynamics through the SNLC. We have plotted the time evolution of \(u\) for two different values of \(f\) near \(f_{SNLC}\) in Fig. 6 with initial condition \((u(0),v(0))=(0.97,0.45)\). The coexisting equilibrium point \(E_{1}^{*}\) is locally asymptotically stable for \(f<f_{SNLC}.\) The time evolution of \(u\) converges to \(E_{1}^{*}\) faster for \(f=0.867\) compared to that for \(f=0.867682.\) Thus, oscillatory long transient dynamics is observed for values of \(f\) near \(f_{SNLC}\) due to the global SNLC bifurcation.
Figure 5: Temporal transient dynamics due to saddle-node bifurcation for \(a=7,\ b=7\) and \(e=0.95\) with \((u(0),v(0))=(1.4,0.05)\) as initial condition. (a) Here, blue, magenta, and black colour curves correspond to time evolution of \(u\) for \(f=0.802,\ f=0.8013,\) and \(f=0.80\) respectively. (b) Phase trajectory for \(f=0.8013\) marked with magenta colour curve. The prey and predator nullclines are also shown. Any solution starting with the initial condition in the shaded region exhibits long transient dynamics similar to that of \(f=0.8013.\)
Figure 6: Time evolution of \(u\) showing long oscillatory transient dynamics due to global bifurcation SNLC with \((u(0),v(0))=(0.97,0.45)\) as initial condition: (a) \(f=0.867\) and (b) \(f=0.867682.\) Other parameter values are \(a=7,\ b=7,\) and \(e=0.95.\)
## 3 Spatio-temporal model
Now, we consider the spatio-temporal model (5) to examine the effects of taxis and diffusion on the dynamics. We first establish global existence and boundedness of the solution using \(L^{p}\)-\(L^{q}\) estimates. Next, we examine the stability of the homogeneous steady states by converting the spatio-temporal problem to an eigenvalue problem. Finally, the Turing bifurcation threshold is also obtained.
### Global existence and boundedness of solution
The global existence and boundedness of solutions with different taxis mechanisms and reaction kinetics have been established in [53, 54, 55]. Here, we establish the same for our model with a non-monotonic functional response. Let \(W^{1,p}(\Omega)\) be the Sobolev space of functions in \(L^{p}(\Omega)\) whose weak first-order partial derivatives also belong to \(L^{p}(\Omega)\).
**Lemma 1**.: _Assume that \((u_{0},v_{0})\in[W^{1,p}(\Omega)]^{2}\) with \(u_{0},v_{0}\geq 0\) and \(p>n.\) Then the following hold for the system (5)_
* _There exists a positive constant_ \(T\) _(maximal existence time) such that the system (_5_) has a unique local-in-time non-negative solution_ \((u(x,t),v(x,t))\in[C(\bar{\Omega}\times[0,T))\cap C^{2,1}(\bar{\Omega}\times(0,T ))]^{2}.\)__
* _The_ \(L^{1}\) _bounds of_ \((u(x,t),v(x,t)),\) _for all time_ \(t\in(0,T),\) _satisfy_ \[\int\limits_{\Omega}u(x,t)dx\leq A\text{ and }\int\limits_{\Omega}v(x,t)dx\leq B,\] _where_ \(A\) _and_ \(B\) _are defined in the proof._
* _There exists a constant_ \(C\) _such that_ \(0\leq u(x,t)\leq C\) _and_ \(v(x,t)\geq 0\) _for all_ \((x,t)\in\bar{\Omega}\times(0,T).\)__
Proof.: \((i)\) Here we apply Amann theory [56] to prove the local existence of \((u(x,t),v(x,t))\). In terms of \(U=(u,v)^{T},\) the system (5) is written as
\[\begin{cases}\frac{\partial U}{\partial t}=\nabla\cdot(\mathcal{A}(U)\nabla U) +F(U),\ x\in\Omega,\ t>0,\\ \frac{\partial U}{\partial n}=(0,0)^{T},\ x\in\partial\Omega,\ t>0,\\ U(x,0)=(u_{0},v_{0})^{T},\ x\in\ \Omega,\end{cases} \tag{12}\]
where
\[\mathcal{A}(U)=\begin{pmatrix}1&0\\ {cv}&d\end{pmatrix}\text{ and }F(U)=\begin{pmatrix}F_{1}(U)\\ F_{2}(U)\end{pmatrix}. \tag{13}\]
Since the matrix \(\mathcal{A}(U)\) is positive definite, the system (5) is normally parabolic and the local existence of classical solutions follows from Theorem 7.3 in [56]. Thus, there exists a maximal existence time \(T>0\) such that the system (5) has a unique solution \((u(x,t),v(x,t))\in C(\bar{\Omega}\times[0,T))\cap C^{2,1}(\bar{\Omega}\times(0,T)),\) with \(u,v\geq 0.\)
\((ii)\) Let \(K_{1}(t)=\int\limits_{\Omega}u(x,t)dx\) and \(K_{2}(t)=\int\limits_{\Omega}v(x,t)dx.\) Integrating the first equation and using no-flux boundary condition, we find
\[\frac{dK_{1}}{dt}=\int\limits_{\Omega}F_{1}(u,v)dx\leq\int\limits_{\Omega}(u- u^{2})dx.\]
Using Hölder's inequality, we find \(\int\limits_{\Omega}u^{2}\;dx\geq\frac{1}{|\Omega|}\left(\int\limits_{\Omega}u\;dx\right)^{2}\), which leads to
\[\frac{dK_{1}}{dt}\leq\left(K_{1}-\frac{K_{1}^{2}}{|\Omega|}\right).\]
Therefore, \(K_{1}(t)\leq\ \max\ \left\{||u_{0}||_{L^{1}(\Omega)},|\Omega|\right\}\equiv A.\) We also have
\[\frac{d}{dt}\left(K_{1}+eK_{2}\right)\leq\int\limits_{\Omega}u\;dx-ef\int \limits_{\Omega}v\;dx=-f(K_{1}+eK_{2})+(f+1)K_{1}.\]
Since \(K_{1}\leq A\), the above inequality leads to
\[K_{2}\leq\frac{1}{e}(K_{1}+eK_{2})\leq\frac{1}{e}\left(||u_{0}||_{L^{1}( \Omega)}+e||v_{0}||_{L^{1}(\Omega)}+\frac{(f^{2}+1)}{f}A\right)\equiv B.\]
\((iii)\) From (5), we have
\[\begin{cases}\frac{\partial u}{\partial t}=\nabla^{2}u+F_{1}(u,v)\leq\nabla^{ 2}u+u(1-u),\;x\in\Omega,\;t>0,\\ \frac{\partial u}{\partial n}=0,\;x\in\partial\Omega,\;t>0,\\ u(x,0)=u_{0}(x),\;x\in\;\Omega.\end{cases}\]
Using the maximum principle for the parabolic equation [57], we have
\[0\leq u(x,t)\leq\ \max\ \{||u_{0}||_{L^{\infty}(\Omega)},1\}\equiv C\;\text{ for all }x\in\Omega,\;t>0.\]
Now, the second equation in (5) can be treated as a scalar linear equation in \(v\). This implies \(v(x,t)\geq 0\) for all \(x\in\Omega,\;t>0\).
**Theorem 2**.: _For any \((u_{0},v_{0})\in[W^{1,p}(\Omega)]^{2}\) with \(u_{0},v_{0}\geq 0\) and \(p>n,\) the system (5) admits a unique global solution \((u(x,t),v(x,t))\in[C(\bar{\Omega}\times[0,\infty))\cap C^{2,1}(\bar{\Omega} \times(0,\infty))]^{2}\equiv\mathcal{S}(\bar{\Omega})\) and the solution is uniformly bounded, i.e., there exists a constant \(R(||u_{0}||_{L^{\infty}(\Omega)},||v_{0}||_{L^{\infty}(\Omega)})>0\) such that_
\[||u(x,t)||_{L^{\infty}(\Omega)}+||v(x,t)||_{L^{\infty}(\Omega)}\leq R(||u_{0} ||_{L^{\infty}(\Omega)},||v_{0}||_{L^{\infty}(\Omega)})\text{ for all }t>0.\]
Proof.: We have already established \(||u(x,t)||_{L^{\infty}(\Omega)}\leq C\) in Lemma 1\((iii)\). However, we have only obtained an \(L^{1}\) bound for \(v(x,t)\) in Lemma 1\((ii)\). To prove the theorem, we need to establish an \(L^{\infty}\) bound for \(v(x,t).\) The detailed steps are described below.
First, we establish the boundedness of \(||\nabla u||_{L^{\infty}(\Omega)}.\) Let \(\{e^{t\nabla^{2}}\}_{t\geq 0}\) be the Neumann heat semigroup generated by \(\nabla^{2}\), and let \(\lambda_{1}>0\) be the first non-zero eigenvalue of \(-\nabla^{2}\) in \(\Omega\) under Neumann boundary conditions. Then using the variation of constants formula for \(u\), we get
\[u(x,t)=e^{t\nabla^{2}}u_{0}(x)+\int_{0}^{t}e^{(t-s)\nabla^{2}}F_{1}(u(x,s),v(x,s))ds. \tag{14}\]
Using \(L^{p}\)-\(L^{q}\) estimates of Neumann heat semigroup (see Lemma 1.3 in [58]) and (14), we obtain
\[||\nabla u(\cdot,t)||_{L^{\infty}(\Omega)}\leq ||\nabla e^{t\nabla^{2}}u_{0}||_{L^{\infty}(\Omega)}+\int_{0}^{t} ||\nabla e^{(t-s)\nabla^{2}}F_{1}(u(\cdot,s),v(\cdot,s))||_{L^{\infty}(\Omega )}ds\] \[\leq C_{1}\left(1+t^{-\frac{1}{2}-\frac{n}{2p}}\right)e^{-\lambda_ {1}t}||u_{0}||_{L^{\infty}(\Omega)}+\] \[\int_{0}^{t}C_{1}\left(1+(t-s)^{-\frac{1}{2}-\frac{n}{2p}}\right) e^{-\lambda_{1}(t-s)}||F_{1}(u(\cdot,s),v(\cdot,s))||_{L^{\infty}(\Omega)}\,ds\] \[\leq C_{2}||u_{0}||_{L^{\infty}(\Omega)}+C_{3}||F_{1}(u,v)||_{L^{ \infty}(\Omega)},\]
for some positive constants \(C_{1},C_{2}\) and \(C_{3}.\) Since \(||F_{1}(u,v)||_{L^{\infty}(\Omega)}\leq\frac{1}{4},\) we have \(||\nabla u||_{L^{\infty}(\Omega)}\leq D(||u_{0}||_{L^{\infty}(\Omega)})\) for some positive constant \(D(||u_{0}||_{L^{\infty}(\Omega)})\).
Next, we establish boundedness of \(||v||_{L^{q}(\Omega)}\) where \(q\geq 2.\) Multiplying the second equation of (5) by \(v^{q-1}\) and integrating over \(\Omega,\) we find
\[\int\limits_{\Omega}v^{q-1}v_{t}dx-d\int\limits_{\Omega}v^{q-1}\nabla^{2}vdx= c\int\limits_{\Omega}v^{q-1}\nabla\cdot(v\nabla u)dx+\int\limits_{\Omega}v^{q-1}F_{2 }(u,v)dx. \tag{15}\]
Using Gauss's divergence theorem and no-flux boundary conditions, we arrive at
\[\frac{1}{q}\frac{d}{dt}\int\limits_{\Omega}v^{q}dx+d(q-1)\int\limits_{\Omega} v^{q-2}|\nabla v|^{2}dx=\int\limits_{\Omega}v^{q-1}F_{2}(u,v)dx-c(q-1)\int \limits_{\Omega}v^{q-1}\nabla v\cdot\nabla u\ dx.\]
Since
\[F_{2}(u,v)\leq\left(\frac{ea}{2\sqrt{b}}-f\right)v,\]
we finally find
\[\frac{1}{q}\frac{d}{dt}\int\limits_{\Omega}v^{q}dx+d(q-1)\int\limits_{\Omega} v^{q-2}|\nabla v|^{2}dx\leq\left(\frac{ea}{2\sqrt{b}}-f\right)\int\limits_{ \Omega}v^{q}dx-c(q-1)\int\limits_{\Omega}v^{q-1}\nabla v\cdot\nabla u\ dx. \tag{16}\]
Note that the inequality \(\frac{|\vec{a}|^{2}}{2\varepsilon}+\frac{\varepsilon|\vec{b}|^{2}}{2}\geq \vec{a}\cdot\vec{b}\) holds for any \(\varepsilon>0\). Considering \(\vec{a}=v^{\frac{q-2}{2}}\nabla v,\)\(\vec{b}=cv^{\frac{q}{2}}\nabla u\) and \(\varepsilon=\frac{1}{d},\) we get
\[\frac{dv^{q-2}|\nabla v|^{2}}{2}+\frac{c^{2}v^{q}|\nabla u|^{2}}{2d}\geq cv^{q -1}\nabla v\cdot\nabla u.\]
Using this inequality and the estimate \(||\nabla u||_{L^{\infty}(\Omega)}\leq D(||u_{0}||_{L^{\infty}(\Omega)})\) (derived before) in (16), we get
\[\frac{1}{q}\frac{d}{dt}\int\limits_{\Omega}v^{q}dx+f\int\limits_{ \Omega}v^{q}dx+\frac{d(q-1)}{2}\int\limits_{\Omega}v^{q-2}|\nabla v|^{2}dx \leq\frac{ea}{2\sqrt{b}}\int\limits_{\Omega}v^{q}dx+\frac{c^{2}( q-1)}{2d}\int\limits_{\Omega}v^{q}|\nabla u|^{2}dx\] \[\leq\left(\frac{ea}{2\sqrt{b}}+D^{2}\right)\int\limits_{\Omega} v^{q}dx\equiv K\int\limits_{\Omega}v^{q}dx. \tag{17}\]
For any \(\varepsilon_{1}>0,\) the inequality \(\int\limits_{\Omega}v^{q}dx\leq\varepsilon_{1}||\nabla v^{\frac{q}{2}}||_{2}^{2}+M\) holds for some \(M>0\) (see Appendix A). Using this in (17), we get
\[\frac{d}{dt}\int\limits_{\Omega}v^{q}dx+fq\int\limits_{\Omega}v^{q}dx+\frac{ qd(q-1)}{2}\int\limits_{\Omega}v^{q-2}|\nabla v|^{2}dx\leq qK(\varepsilon_{1}|| \nabla v^{\frac{q}{2}}||_{2}^{2}+M)\]
Using
\[\int\limits_{\Omega}v^{q-2}|\nabla v|^{2}dx=\frac{4}{q^{2}}||\nabla v^{\frac{q}{2}}||_{2}^{2}\]
in the above, we get
\[\frac{d}{dt}\int\limits_{\Omega}v^{q}dx+fq\int\limits_{\Omega}v^{q}dx\leq qK\left(\varepsilon_{1}||\nabla v^{\frac{q}{2}}||_{2}^{2}+M-\frac{2d(q-1)}{Kq^{2}}||\nabla v^{\frac{q}{2}}||_{2}^{2}\right).\]
Setting \(\varepsilon_{1}=\dfrac{2d(q-1)}{Kq^{2}},\) we have
\[\dfrac{d}{dt}\int\limits_{\Omega}v^{q}dx+fq\int\limits_{\Omega}v^{q}dx\leq MqK,\]
which leads to
\[||v||_{L^{q}(\Omega)}\leq\left(||v_{0}||_{L^{q}(\Omega)}^{q}+\left(\dfrac{MK}{f }\right)\right)^{1/q}\equiv N.\]
Now, using the Sobolev embedding [59] and the Moser-Alikakos iteration procedure [60], there exists a constant \(P(||u_{0}||_{L^{\infty}(\Omega)},||v_{0}||_{L^{\infty}(\Omega)})>0\) such that
\[||v(.,t)||_{L^{\infty}(\Omega)}\leq P(||u_{0}||_{L^{\infty}(\Omega)},||v_{0}||_ {L^{\infty}(\Omega)}).\]
The proof is now complete.
### Homogeneous steady-state analysis
Here, we discuss the stability of the homogeneous steady states. Note that a homogeneous steady-state \(E(\tilde{u},\tilde{v})\) of the system (12) corresponds to an equilibrium point \((\tilde{u},\tilde{v})\) of the temporal system (6). Introducing \(u(x,t)=\tilde{u}+\bar{u}(x,t)\) and \(v(x,t)=\tilde{v}+\bar{v}(x,t),\) and linearizing about \(E(\tilde{u},\tilde{v}),\) we find
\[\dfrac{\partial\bar{U}}{\partial t}=\mathcal{A}(E)\nabla^{2}\bar{U}+J(E)\bar{ U}\equiv\mathcal{L}(E)\bar{U},\]
where \(\bar{U}\equiv(\bar{u}(x,t),\bar{v}(x,t))\in[C(\bar{\Omega}\times[0,\infty)) \cap C^{2,1}(\bar{\Omega}\times(0,\infty))]^{2}.\) Further, \(J(E)\) and \(\mathcal{A}(E)\) have been defined in (8) and (13) respectively.
Consider the eigenvalue problem
\[\begin{split}-\nabla^{2}p&=kp\quad\text{in}\; \Omega,\\ \dfrac{\partial p}{\partial n}&=0\quad\text{on}\; \partial\Omega.\end{split} \tag{18}\]
For an eigenvalue \(k_{i}\) of the eigenvalue problem (18), let \(E(k_{i})\) be the corresponding eigenfunction space with \(0=k_{0}<k_{1}<\cdots<k_{i}<\cdots\). Further, assume that \(\{\phi_{i,j}:j=1,\cdots,\dim\bigl{(}E(k_{i})\bigr{)}\}\) is an orthogonal basis of \(E(k_{i})\) and \(\mathcal{U}_{ij}=\{c\phi_{i,j}:c=(c_{1},c_{2})^{T}\}\). Let \(\mathcal{U}_{i}=\bigoplus_{j=1}^{\dim(E(k_{i}))}\mathcal{U}_{ij}\) be the direct sum of the \(\mathcal{U}_{ij}\). It can be shown that
\[\mathcal{U}\equiv\left\{(\phi,\psi)^{T}\in C^{2}(\bar{\Omega})\times C^{2}( \bar{\Omega}):\dfrac{\partial\phi}{\partial n}=\dfrac{\partial\psi}{\partial n }=0\;\text{ for }x\in\partial\Omega\right\}=\bigoplus_{i=1}^{\infty} \mathcal{U}_{i}, \tag{19}\]
and \(\mathcal{U}_{i}\) is invariant under the operator \(\mathcal{L}\). Now, \(\lambda\) is an eigenvalue of \(\mathcal{L}\) if and only if \(\lambda\) is an eigenvalue of the matrix \(\mathcal{L}_{i}=-k_{i}\mathcal{A}(E)+J(E)\) for some \(i\geq 0.\) The characteristic equation of \(\mathcal{L}_{i}\) is given by
\[\lambda^{2}-\mathrm{T}(k_{i})\lambda+\mathrm{H}(k_{i})=0, \tag{20}\]
where
\[\begin{split}\mathrm{T}(k_{i})&=a_{10}+b_{01}-(1+d )k_{i},\\ \mathrm{H}(k_{i})&=dk_{i}^{2}-\{a_{10}d+b_{01}-c\tilde {v}a_{01}\}k_{i}+(a_{10}b_{01}-a_{01}b_{10}).\end{split}\]
Now, from the principle of linearized stability for parabolic partial differential equations [61], the homogeneous steady state \(E\) is locally asymptotically stable if \(T(k_{i})<0\) and \(H(k_{i})>0\) for all \(i.\)
**Theorem 3**.: _The following hold for the system (12):_
* _Trivial homogeneous steady-state_ \(E_{0}(0,0)\) _is always unstable._
* _Axial homogeneous steady-state_ \(E_{1}(1,0)\) _is locally asymptotically stable if_ \(f>f_{TC}\) _and unstable if_ \(f<f_{TC}\)_._
* _Coexisting homogeneous steady state_ \(E_{1}^{*}(u_{1}^{*},v_{1}^{*})\) _is locally asymptotically stable if_ \(a_{10}<\text{min}\left\{v_{1}^{*},\frac{cu_{1}^{*}\left(u_{1}^{*}-1\right)+v_ {1}^{*}}{d}\right\}\equiv R_{1}\) _and unstable if_ \(a_{10}>v_{1}^{*}.\)__
Proof.: \((a)\) For \(E_{0},\) we find \(\text{T}(k_{i})=1-f-(1+d)k_{i}\) and \(\text{H}(k_{i})=dk_{i}^{2}+(d+f)k_{i}-f.\) Since \(\text{H}(k_{0})<0,\)\(E_{0}\) is unstable.
\((b)\) For \(E_{1},\) we have
\[\text{T}(k_{i})=-1+(f_{TC}-f)-(1+d)k_{i}\]
and
\[\text{H}(k_{i})=dk_{i}^{2}+\left(d-(f_{TC}-f)\right)k_{i}-\left(f_{TC}-f \right).\]
If \(f>f_{TC},\) then \(\text{T}(k_{i})<0\) and \(\text{H}(k_{i})>0\) for all \(i.\) Thus, \(E_{1}\) is locally asymptotically stable if \(f>f_{TC}\). If \(f<f_{TC},\) then \(\text{H}(k_{0})<0\) and hence \(E_{1}\) is unstable.
\((c)\) If \(a_{10}>-b_{01}=v_{1}^{*},\) then \(E_{1}^{*}\) is unstable since \(\text{T}(k_{0})>0.\) Now if \(a_{10}<R_{1},\) then we find \(a_{10}+b_{01}<0\) and \(a_{10}d+b_{01}-cv_{1}^{*}a_{01}<0\). These conditions lead to
\[\text{T}(k_{i})<0\text{ and }\text{H}(k_{i})>0\text{ for all }i. \tag{21}\]
Hence, \(E_{1}^{*}\) is locally asymptotically stable if \(a_{10}<R_{1}.\)
_Remark 1_.: Whenever homogeneous steady state \(E_{2}^{*}\) exists, it is unstable since \(\text{H}(k_{0})<0.\) In case of \(E_{3}^{*}\) (when it exists), it is locally asymptotically stable if \(\text{T}(k_{i})<0\) and \(\text{H}(k_{i})>0\) for all \(i.\) It is unstable if either of these conditions does not hold for some \(i.\)
### Turing instability
In the case of Turing instability, the coexisting equilibrium point is asymptotically stable for the temporal system (6). But the corresponding homogeneous steady state of the spatio-temporal model (5) becomes unstable under spatial perturbation. Thus, \(\text{T}(k_{i})<0\) and \(\text{H}(k_{i})>0\) for \(i=0,\) but at least one of these conditions is violated for some \(i\geq 1\). Note that \(\text{T}(k_{i})<\text{T}(k_{0})\) for all \(i\geq 1.\) Hence, for Turing instability, there exists some \(k_{i}\) (\(i\geq 1\)) for which \(\text{H}(k_{i})<0.\) The eigenvalues \(k_{i}\) are discrete for a bounded \(\Omega.\) To find the Turing bifurcation threshold, we consider an unbounded domain for which a continuous spectrum of eigenvalues is obtained. Hence, the Turing instability condition becomes \(\text{H}(k)<0\) for some \(k\neq 0\). Here, we consider the homogeneous steady state corresponding to the coexisting equilibrium point \(E_{1}^{*}(u_{1}^{*},v_{1}^{*}).\)
The minimum value of \(\text{H}(k)\) is
\[\text{H}_{min}=\frac{-{a_{10}}^{2}d^{2}+\left(\left(2{a_{10}}cv_{1}^{*}-4{b_{ 10}}\right){a_{01}}+2{a_{10}}{b_{01}}\right)d-\left({a_{01}}cv_{1}^{*}-{b_{01 }}\right)^{2}}{4d},\]
which occurs at wavenumber
\[k_{min}=\ \frac{{a_{10}}\,d+{b_{01}}-{a_{01}}\,cv_{1}^{*}}{2d}.\]
Now, at the threshold of the Turing bifurcation, we have \(\mathrm{H}_{min}=0\) at critical wavenumber \(k_{T}=k_{min}\), and the corresponding critical Turing value \(c_{T}\) of \(c\) is determined as follows
\[c_{T}=\frac{a_{10}d+b_{01}-2\sqrt{d\det(J)}}{a_{01}v_{1}^{*}}. \tag{22}\]
The system becomes Turing unstable for \(c>c_{T}\). We plot the Turing boundary surface in the \(f\)-\(d\)-\(c\) space for parameter values \(a=7\), \(b=5.65\), and \(e=0.95\) (see Fig. 7). The Turing instability sets in above the Turing boundary surface, while in front of the Hopf plane we have an additional Hopf instability. These two instabilities divide the \(f\)-\(d\)-\(c\) space into four regions: (i) a stable region below the Turing boundary surface and behind the Hopf plane, (ii) a Turing region above the Turing boundary surface and behind the Hopf plane, (iii) a Hopf region below the Turing boundary surface and in front of the Hopf plane, and (iv) a Turing-Hopf region above the Turing boundary surface and in front of the Hopf plane. We will discuss the spatio-temporal dynamics in these regions in Section 5.
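A sketch evaluating the threshold (22) numerically (reusing `equilibria()` from Section 2; the values of \(f\) and \(d\) are illustrative, and the computation presumes that \(E_{1}^{*}\) is stable for the temporal dynamics). Recall that \(k\) in (20) is an eigenvalue of \(-\nabla^{2}\), so the corresponding spatial wavenumber is \(\sqrt{k_{T}}\).

```python
# Evaluate the Turing threshold c_T of (22) and the critical mode.
import numpy as np

def turing_threshold(a, b, e, f, d):
    u, v = equilibria(a, b, e, f)[0]                     # E_1^*
    a10 = 1.0 - 2.0*u + a*v*(b*u**2 - 1.0)/(b*u**2 + 1.0)**2
    a01 = -a*u/(b*u**2 + 1.0)
    b10 = e*a*v*(1.0 - b*u**2)/(b*u**2 + 1.0)**2
    b01 = -v
    detJ = a10*b01 - a01*b10
    cT = (a10*d + b01 - 2.0*np.sqrt(d*detJ)) / (a01*v)
    kT = (a10*d + b01 - a01*cT*v) / (2.0*d)              # eigenvalue of -lap
    return cT, np.sqrt(kT)                               # threshold, wavenumber

print(turing_threshold(a=7, b=5.65, e=0.95, f=1.0, d=1.0))
```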
## 4 Weakly nonlinear analysis
Here, we perform a weakly nonlinear analysis using the method of multiple scales near the Turing bifurcation threshold (\(c=c_{T}\)). We derive the amplitude equation for the Turing solutions, which also establishes the existence of non-homogeneous stationary solutions near the Turing threshold. For simplicity, we consider the one-dimensional spatial domain \(\Omega=[0,L]\subset\mathbb{R}\). First, we employ a Taylor series expansion up to the third order to expand the system (5) around \(E_{1}^{*}(u_{1}^{*},v_{1}^{*})\)
\[\frac{\partial\mathbf{W}}{\partial t}=\mathcal{L}^{c}\mathbf{W}+\mathcal{N}+\frac{1}{2 }\mathcal{B}(\mathbf{W},\mathbf{W})+\frac{1}{6}\mathcal{T}(\mathbf{W},\mathbf{W},\mathbf{W})\equiv \mathcal{Z}(\mathbf{W}), \tag{23}\]
where
\[\mathbf{W}=\begin{pmatrix}u-u_{1}^{*}\\ v-v_{1}^{*}\end{pmatrix},\quad\mathcal{L}^{c}=\begin{pmatrix}a_{10}+\partial_{xx}&a_{01}\\ b_{10}+cv_{1}^{*}\partial_{xx}&b_{01}+d\partial_{xx}\end{pmatrix},\quad\text{ and}\quad\mathcal{N}=\begin{pmatrix}0\\ c(v\nabla^{2}u+\nabla u\cdot\nabla v)\end{pmatrix}\]
Figure 7: Turing boundary surface (coloured surface) and Hopf plane (grey vertical plane) in the \(f\)-\(d\)-\(c\) space. Note that the system (5) exhibits Turing instability above the coloured surface and Hopf instability in front of the grey plane. The values of the other parameters are \(a=7\), \(b=5.65\), and \(e=0.95\).
denote the perturbation vector, linear operator, and nonlinear prey-taxis term, respectively. Further, the bi-linear operator \(\mathcal{B}\) and tri-linear operator \(\mathcal{T}\) are defined as
\[\mathcal{B}(\mathbf{P_{1}},\mathbf{P_{2}})=\begin{pmatrix}\sum_{i,j\in\{1,2\}}\dfrac{\partial^{2}F_{1}}{\partial y_{i}\partial y_{j}}\Big{|}_{(u_{1}^{*},v_{1}^{*})}P^{(i)}_{1}P^{(j)}_{2}\\ \sum_{i,j\in\{1,2\}}\dfrac{\partial^{2}F_{2}}{\partial y_{i}\partial y_{j}}\Big{|}_{(u_{1}^{*},v_{1}^{*})}P^{(i)}_{1}P^{(j)}_{2}\end{pmatrix},\]

\[\mathcal{T}(\mathbf{P_{1}},\mathbf{P_{2}},\mathbf{P_{3}})=\begin{pmatrix}\sum_{i,j,k\in\{1,2\}}\dfrac{\partial^{3}F_{1}}{\partial y_{i}\partial y_{j}\partial y_{k}}\Big{|}_{(u_{1}^{*},v_{1}^{*})}P^{(i)}_{1}P^{(j)}_{2}P^{(k)}_{3}\\ \sum_{i,j,k\in\{1,2\}}\dfrac{\partial^{3}F_{2}}{\partial y_{i}\partial y_{j}\partial y_{k}}\Big{|}_{(u_{1}^{*},v_{1}^{*})}P^{(i)}_{1}P^{(j)}_{2}P^{(k)}_{3}\end{pmatrix},\]
where \(\mathbf{P}_{m}=\begin{pmatrix}P^{(1)}_{m}\\ P^{(2)}_{m}\end{pmatrix}\), for \(m=1,2,3\); \(y_{1}=u\) and \(y_{2}=v\).
We introduce the slow time scale \(\tau=\varepsilon^{2}t\), where \(\varepsilon\) measures the distance of the prey-taxis parameter \(c\) from the critical Turing threshold \(c_{T}\) via \(c=c_{T}+\varepsilon^{2}c_{2}\). We also expand the solution of the system (23) in terms of \(\varepsilon\) as
\[\mathbf{W}=\varepsilon\mathbf{W_{1}}+\varepsilon^{2}\mathbf{W_{2}}+\varepsilon^{3}\mathbf{W_ {3}}+O(\varepsilon^{4}).\]
Accordingly, the operator \(\mathcal{L}^{c}\), \(\mathcal{B}\) and \(\mathcal{T}\) in (23) can be expressed as
\[\mathcal{L}^{c}=\mathcal{L}^{c_{T}}+\varepsilon^{2}c_{2}v^{*}_{1 }\begin{pmatrix}0&0\\ 1&0\end{pmatrix}\nabla^{2}+O(\varepsilon^{4}),\] \[\mathcal{B}(\mathbf{W},\mathbf{W})=\varepsilon^{2}\mathcal{B}(\mathbf{W_{1}},\mathbf{W_{1}})+2\varepsilon^{3}\mathcal{B}(\mathbf{W_{1}},\mathbf{W_{2}})+O(\varepsilon ^{4}),\] \[\mathcal{T}(\mathbf{W},\mathbf{W},\mathbf{W})=\varepsilon^{3}\mathcal{T}(\bm {W_{1}},\mathbf{W_{1}},\mathbf{W_{1}})+O(\varepsilon^{4}).\]
Substituting all the above expansions into (23), we obtain a series of equations in \(\mathbf{W_{i}}\,(i=1,2,\cdots)\) by collecting the terms at each order in \(\varepsilon.\) Each of these equations is subjected to the Neumann boundary condition.
At \(O(\varepsilon)\), we have \(\mathcal{L}^{c_{T}}\mathbf{W_{1}}=0\), whose solution is
\[\mathbf{W_{1}}=A(\tau)\mathbf{\Phi}\,\cos(k_{T}x)\ \ \text{with}\ \mathbf{\Phi}\in\text{ Ker}(J(E^{*}_{1})-k_{T}^{2}D^{c_{T}}).\]
Here, \(A(\tau)\) is the amplitude of the growing pattern, unknown at this stage. We normalize the vector \(\mathbf{\Phi}\) as
\[\mathbf{\Phi}=\begin{pmatrix}1\\ \phi\end{pmatrix}\ \text{with}\ \phi=\dfrac{k_{T}^{2}-a_{10}}{a_{01}}.\]
At \(O(\varepsilon^{2})\), we obtain
\[\mathcal{L}^{c_{T}}\mathbf{W_{2}}=(\mathbf{h_{20}}+\mathbf{h_{22}}\,\cos(2k_{T}x))A^{2} \equiv\mathbf{H}, \tag{24}\]
where \(\mathbf{h_{20}}=-\dfrac{1}{4}\mathcal{B}(\mathbf{\Phi},\mathbf{\Phi}),\ \mathbf{h_{22}}=-\dfrac{1}{4}\mathcal{B}(\mathbf{\Phi},\mathbf{\Phi})+c_{T}k_{T}^{2}\begin{pmatrix}0\\ \phi\end{pmatrix}.\) Using the Fredholm alternative theorem, the system (24) has a solution when \(<\mathbf{H},\mathbf{\Psi}>\equiv\int\limits_{0}^{L}\mathbf{H}\cdot\mathbf{\Psi}\ dx=0\), where \(\mathbf{\Psi}=\begin{pmatrix}\psi\\ 1\end{pmatrix}\cos(k_{T}x)\in\mathrm{Ker}\big{(}(\mathcal{L}^{c_{T}})^{\dagger}\big{)}\) with \(\psi=\frac{dk_{T}^{2}-b_{01}}{a_{01}}.\) Here, \(A^{\dagger}\) denotes the adjoint of an operator \(A\); note that this solvability condition automatically holds. The solution of (24) is given by
\[\mathbf{W}_{2}=(\mathbf{k_{20}}+\mathbf{k_{22}}\,\cos(2k_{T}x))A^{2},\]
where \(J(E_{1}^{*})\mathbf{k_{20}}=\mathbf{h_{20}}\) and \((J(E_{1}^{*})-4k_{T}^{2}D^{c_{T}})\,\mathbf{k_{22}}=\mathbf{h_{22}}.\)
At \(O(\varepsilon^{3}),\) we find
\[\mathcal{L}^{c_{T}}\mathbf{W_{3}}=\left(\frac{dA}{d\tau}\mathbf{\Phi}+\mathbf{g_{11}}A+ \mathbf{g_{31}}A^{3}\right)\ \cos(k_{T}x)+\mathbf{g_{33}}\,\cos(3k_{T}x)A^{3}\equiv\mathbf{G}, \tag{25}\]
where
\[\mathbf{g_{11}}=c_{2}k_{T}^{2}v_{1}^{*}\begin{pmatrix}0\\ 1\end{pmatrix},\ \mathbf{g_{31}}=-\mathcal{B}(\mathbf{\Phi},\mathbf{k}_{20})-\frac{1}{2} \mathcal{B}(\mathbf{\Phi},\mathbf{k}_{22})-\frac{1}{8}\mathcal{T}(\mathbf{\Phi},\mathbf{\Phi},\mathbf{\Phi})+c_{T}k_{T}^{2}\begin{pmatrix}0\\ k_{20}^{(1)}\phi+k_{22}^{(1)}-\frac{1}{2}k_{22}^{(2)}\end{pmatrix},\]
\[\text{ and }\mathbf{g_{33}}=-\frac{1}{2}\mathcal{B}(\mathbf{\Phi},\mathbf{k}_{22})-\frac{1 }{24}\mathcal{T}(\mathbf{\Phi},\mathbf{\Phi},\mathbf{\Phi})+c_{T}k_{T}^{2}\begin{pmatrix} 0\\ 3k_{22}^{(1)}\phi+\frac{3}{2}k_{22}^{(2)}\end{pmatrix}.\]
Using Fredholm alternative theorem in equation (25), we obtain \(<\mathbf{G},\mathbf{\Psi}>=0,\) which leads to the cubic Stuart-Landau equation
\[\frac{dA}{d\tau}=\sigma A-lA^{3}, \tag{26}\]
where
\[\sigma=-\frac{<\mathbf{g}_{11},\mathbf{\Psi}>}{<\mathbf{\Phi},\mathbf{\Psi}>}\ \text{ and }\ l=\frac{<\mathbf{g}_{31},\mathbf{\Psi}>}{<\mathbf{\Phi},\mathbf{\Psi}>}.\]
Here, \(\sigma\) is always positive, and (26) has the normal form of a pitchfork bifurcation.
### Supercritical case
If \(l>0,\) then the system (26) has two stable fixed points \(A_{*}=\pm\sqrt{\frac{\sigma}{l}}\) and an unstable fixed point \(A_{*}=0.\) The stable fixed points represent the Turing pattern with wavenumber \(k_{T}\) for \(c>c_{T}\), and every non-zero solution approaches one of them after a long time, i.e.,
\[\lim_{t\to\infty}A(t)\equiv A_{\infty}=\pm\sqrt{\frac{\sigma}{l}}.\]
Therefore, the final solution of (23) is \(\mathbf{U}_{\infty}(x)=\sqrt{c-c_{T}}A_{\infty}\Phi\cos(k_{T}x)+O(\varepsilon^{2})\) for \(c>c_{T}.\)
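As a quick numerical illustration of this convergence, one can integrate (26) directly. This is a minimal sketch; the values of \(\sigma\) and \(l\) below are hypothetical, chosen only so that \(l>0\).

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, l = 0.05, 2.9  # hypothetical supercritical values (l > 0)

# Integrate dA/dtau = sigma*A - l*A^3 from a small initial amplitude.
sol = solve_ivp(lambda tau, A: sigma * A - l * A**3, (0.0, 500.0), [1e-3])
print(sol.y[0, -1], np.sqrt(sigma / l))  # both are approximately 0.1313
```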
### Subcritical case
For \(l<0,\) the system (26) does not admit any nontrivial equilibrium solution, so we need to include higher-order terms in the expansion. Thus, we employ a Taylor series expansion up to fifth order to expand the system (5) around \(E_{1}^{*}(u_{1}^{*},v_{1}^{*})\):
\[\frac{\partial\mathbf{W}}{\partial t}=\mathcal{Z}(\mathbf{W})+\frac{1}{24}\mathcal{Q}(\mathbf{W},\mathbf{W},\mathbf{W},\mathbf{W})+\frac{1}{120}\mathcal{P}(\mathbf{W},\mathbf{W},\mathbf{W},\mathbf{W},\mathbf{W}), \tag{27}\]
where \(\mathcal{Z}(\mathbf{W})\) denotes the right-hand side of (23) (the terms up to third order), and \(\mathcal{Q}\) and \(\mathcal{P}\) are the quadrilinear and pentalinear operators defined analogously to \(\mathcal{B}\) and \(\mathcal{T}.\) We also introduce the multiple time scales \(t=t(\tau,\tau_{1})\) with \(\tau=\varepsilon^{2}t\) and \(\tau_{1}=\varepsilon^{4}t\), and expand \(c\) and \(\mathbf{W}\) up to fifth order in \(\varepsilon\) as
\[c=c_{T}+\varepsilon^{2}c_{2}+\varepsilon^{4}c_{4}+O(\varepsilon ^{6}),\] \[\mathbf{W}=\varepsilon\mathbf{W_{1}}+\varepsilon^{2}\mathbf{W_{2}}+\varepsilon ^{3}\mathbf{W_{3}}+\varepsilon^{4}\mathbf{W_{4}}+\varepsilon^{5}\mathbf{W_{5}}+O( \varepsilon^{6}).\]
Using (26), the right-hand side of (25) becomes
\[\boldsymbol{G}=\left(\boldsymbol{G_{11}}A+\boldsymbol{G_{13}}A^{3}\right)\ \cos(k_{T}x)+\boldsymbol{G_{33}}\,\cos(3k_{T}x)A^{3},\]
where \(\boldsymbol{G_{11}}=\boldsymbol{g_{11}}+\sigma\boldsymbol{\Phi}\), \(\boldsymbol{G_{13}}=\boldsymbol{g_{31}}-l\boldsymbol{\Phi}\), and \(\boldsymbol{G_{33}}=\boldsymbol{g_{33}}\). Note that the amplitude now depends on both time scales, \(A=A(\tau,\tau_{1})\). The solution of (25) is given by
\[\boldsymbol{W}_{3}=(\boldsymbol{C_{11}}+\boldsymbol{C_{13}}A^{2})A\,\cos(k_{T} x)+\boldsymbol{C_{33}}\,\cos(3k_{T}x)A^{3},\]
where \((J(E_{1}^{*})-k_{T}^{2}D^{c_{T}})\,\boldsymbol{C_{11}}=\boldsymbol{G_{11}}\), \((J(E_{1}^{*})-k_{T}^{2}D^{c_{T}})\,\boldsymbol{C_{13}}=\boldsymbol{G_{13}}\), and \((J(E_{1}^{*})-9k_{T}^{2}D^{c_{T}})\,\boldsymbol{C_{33}}=\boldsymbol{G_{33}}\).
At \(O(\varepsilon^{4})\), we obtain
\[\mathcal{L}^{c_{T}}\boldsymbol{W_{4}}=\boldsymbol{H}_{20}A^{2}+\boldsymbol{H}_{22}A^{2}\,\cos(2k_{T}x)+\boldsymbol{H}_{40}A^{4}+\boldsymbol{H}_{42}A^{4}\,\cos(2k_{T}x)+\boldsymbol{H}_{44}A^{4}\,\cos(4k_{T}x)\equiv\boldsymbol{H}. \tag{28}\]
Again, the Fredholm solvability condition holds automatically, since \(\boldsymbol{H}\) contains no \(\cos(k_{T}x)\) component. Solving (28), we find
\[\boldsymbol{W}_{4}=\boldsymbol{D}_{20}A^{2}+\boldsymbol{D}_{22}A^{2}\,\cos(2k_{T}x)+\boldsymbol{D}_{40}A^{4}+\boldsymbol{D}_{42}A^{4}\,\cos(2k_{T}x)+\boldsymbol{D}_{44}A^{4}\,\cos(4k_{T}x),\]
where \(J(E_{1}^{*})\boldsymbol{D_{20}}=\boldsymbol{H_{20}},(J(E_{1}^{*})-4k_{T}^{2}D ^{c_{T}})\,\boldsymbol{D_{22}}=\boldsymbol{H_{22}},J(E_{1}^{*})\boldsymbol{D_ {40}}=\boldsymbol{H_{40}},(J(E_{1}^{*})-4k_{T}^{2}D^{c_{T}})\,\boldsymbol{D_{ 42}}=\boldsymbol{H_{42}}\), and \((J(E_{1}^{*})-16k_{T}^{2}D^{c_{T}})\,\boldsymbol{D_{44}}=\boldsymbol{H_{44}}\).
Finally, at \(O(\varepsilon^{5})\), we obtain
\[\mathcal{L}^{c_{T}}\boldsymbol{W_{5}}=\left(\frac{\partial A}{\partial\tau_{1}}\boldsymbol{\Phi}+\boldsymbol{I}_{11}A+\boldsymbol{I}_{31}A^{3}+\boldsymbol{I}_{51}A^{5}\right)\cos(k_{T}x)+\boldsymbol{I}^{*}\equiv\boldsymbol{I}, \tag{29}\]
where \(\boldsymbol{I}^{*}\) collects the terms of \(\boldsymbol{I}\) that are orthogonal to \(\boldsymbol{W}_{1}\). Further, \(\boldsymbol{H}_{ij}\) and \(\boldsymbol{I}_{ij}\) can be computed explicitly in terms of the system parameters; as these expressions are quite involved, we omit them for the sake of brevity.
Using the Fredholm solvability condition once more in (29), we arrive at a quintic Stuart-Landau equation
\[\frac{\partial A}{\partial\tau_{1}}=\sigma^{\prime}A-l^{\prime}A^{3}+\rho^{ \prime}A^{5}, \tag{30}\]
where
\[\sigma^{\prime}=-\frac{<\boldsymbol{I}_{11},\boldsymbol{\Psi}>}{<\boldsymbol{\Phi},\boldsymbol{\Psi}>},\ l^{\prime}=\frac{<\boldsymbol{I}_{31},\boldsymbol{\Psi}>}{<\boldsymbol{\Phi},\boldsymbol{\Psi}>}\ \text{and}\ \ \rho^{\prime}=-\frac{<\boldsymbol{I}_{51},\boldsymbol{\Psi}>}{<\boldsymbol{\Phi},\boldsymbol{\Psi}>}.\]
Combining (26) and (30) via \(\frac{dA}{dt}=\varepsilon^{2}\frac{\partial A}{\partial\tau}+\varepsilon^{4}\frac{\partial A}{\partial\tau_{1}}\), we finally obtain
\[\frac{dA}{dt}=\varepsilon^{2}(\hat{\sigma}A-\hat{l}A^{3}+\hat{\rho}A^{5}), \tag{31}\]
where \(\hat{\sigma}=\sigma+\varepsilon^{2}\sigma^{\prime}\), \(\hat{l}=l+\varepsilon^{2}l^{\prime}\) and \(\hat{\rho}=\varepsilon^{2}\rho^{\prime}\). If \(\hat{\sigma}>0,\ \hat{l}<0\) and \(\hat{\rho}<0\), then equation (31) admits the two stable equilibria \(\pm\sqrt{\frac{\hat{l}-\sqrt{\hat{l}^{2}-4\hat{\rho}\hat{\sigma}}}{2\hat{\rho}}}\). Thus, we obtain the amplitude of the stationary pattern solution for \(c>c_{T}\).
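For completeness, the equilibrium amplitude follows by setting \(dA/dt=0\) in (31) and solving the resulting quadratic in \(A^{2}\):

\[\hat{\rho}A^{4}-\hat{l}A^{2}+\hat{\sigma}=0\quad\Longrightarrow\quad A^{2}=\frac{\hat{l}\pm\sqrt{\hat{l}^{2}-4\hat{\rho}\hat{\sigma}}}{2\hat{\rho}};\]

under \(\hat{\sigma}>0\), \(\hat{l}<0\) and \(\hat{\rho}<0\), only the root with the minus sign is positive, which gives the stable equilibria stated above.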
## 5 Numerical results
Here, we first validate the results of the weakly nonlinear analysis (WNA) and then discuss the effect of the prey-taxis parameter \(c\) on the Turing solutions. Next, we investigate spatio-temporal transient dynamics and non-homogeneous oscillatory solutions in the Hopf region.
### Validation of WNA results
We take the parameter values \(a=7\), \(b=5.65\), \(e=0.95\), \(f=0.98\), and \(d=80\). For this parameter set, the coexisting homogeneous steady state \(E_{1}^{*}(0.210978,0.141065)\) is asymptotically stable under homogeneous perturbation. The corresponding Turing threshold is \(c_{T}=26.889081\) with critical wave number \(k_{T}=0.283128.\) Using WNA, we find the cubic Stuart-Landau equation
\[\frac{dA}{d\tau}=0.054965A+2.925863A^{3}, \tag{32}\]
which shows the subcritical nature of the bifurcation. Extending the WNA up to fifth order, we obtain the quintic Stuart-Landau equation
\[\frac{\partial A}{\partial\tau_{1}}=-0.000463A+6.926411A^{3}-43.690556A^{5}. \tag{33}\]
Using (32) and (33), we obtain the amplitude equation
\[\frac{dA}{dt}=\varepsilon^{2}(\hat{\sigma}A-\hat{l}A^{3}+\hat{\rho}A^{5}), \tag{34}\]
where \(\hat{\sigma}=0.054965-0.000463\varepsilon^{2},\ \hat{l}=-(2.925863+6.926411 \varepsilon^{2})\), and \(\hat{\rho}=-43.690556\varepsilon^{2}.\) The equilibrium amplitude \(A_{\infty}\) of the Turing pattern solution at \(c=c_{T}(1+\varepsilon^{2})\) satisfies \(\hat{\sigma}-\hat{l}A^{2}+\hat{\rho}A^{4}=0\), and the stationary solution of \(u\) up to third order is
\[\begin{split} u_{\infty}(x)=& u_{1}^{*}+\varepsilon A_{\infty}\,\cos(k_{T}x)+\varepsilon^{2}A_{\infty}^{2}(1.8+1.811195\,\cos(2k_{T}x))\\ &+\varepsilon^{3}A_{\infty}((1+A_{\infty}^{2})\,\cos(k_{T}x)+1.969026A_{\infty}^{2}\,\cos(3k_{T}x)).\end{split} \tag{35}\]
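The equilibrium amplitude \(A_{\infty}\) can be evaluated directly from (34); the sketch below does this at \(c=27\) (the value used in Fig. 9), keeping only the positive root of the quadratic in \(A^{2}\).

```python
import numpy as np

c, cT = 27.0, 26.889081
eps2 = c / cT - 1.0                  # epsilon^2, since c = cT*(1 + epsilon^2)
sig = 0.054965 - 0.000463 * eps2     # hat-sigma
l = -(2.925863 + 6.926411 * eps2)    # hat-l
rho = -43.690556 * eps2              # hat-rho

# A_inf solves rho*A^4 - l*A^2 + sig = 0, a quadratic in A^2.
A2 = np.roots([rho, -l, sig])
print(np.sqrt(A2[A2 > 0]))           # the equilibrium amplitude, ~4.05
```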
We have plotted \(u_{\infty}(0)\) against the prey-taxis parameter \(c\) in Fig. 8. For \(c>c_{T}\), the system (5) has two stable branches of the pattern solution and the unstable homogeneous stationary solution (HSS). We plot both stable branches for \(c=27\) and compare them with the numerical solutions in Fig. 9. The amplitudes of both solutions are in good agreement. For \(21.1348<c<c_{T}\), we
Figure 8: Plot of the solution up to third order from the weakly nonlinear analysis. Here, the green and blue curves denote the homogeneous stationary solution and the pattern solution, respectively. The solid and dashed curves correspond to the stable and unstable branches, respectively. Other parameter values are \(a=7\), \(b=5.65\), \(f=0.98\), \(e=0.95\), \(d=80\) and \(k_{T}=0.283128\).
have two unstable branches of the pattern solution, two stable branches of the pattern solution, and the stable HSS. When \(21.1348<c<c_{T}\), numerical computation under a random perturbation about the homogeneous steady state leads to the HSS, but backward continuation of the pattern solution from \(c>c_{T}\) leads to the stable pattern solution. Thus, the system (5) exhibits a hysteresis cycle [12, 62] in this parameter range.
### Effect of prey-taxis \(c\) on Turing solution
In subsection 3.3, we observed the appearance of the Turing pattern when the prey-taxis coefficient \(c\) crosses a threshold \(c_{T}.\) For the parameter values \(a=7,\) \(b=5.65,\) \(f=0.95,\) \(e=0.95\) and \(d=100,\) the corresponding Turing threshold is \(c_{T}=31.4793\). We simulate the spatio-temporal model with a small-amplitude spatial perturbation around the coexisting homogeneous steady state \(E_{1}^{*}=(0.2016,0.1402)\). The system shows a stable homogeneous steady state for \(c<c_{T}\) and a non-homogeneous Turing solution for \(c>c_{T}\). We plot the stationary Turing solutions of \(u\) and \(v\) for \(c=35\) and \(c=50\) in Fig. 10. Since the predator population avoids areas of high prey density, the peaks of the prey population correspond to the troughs of the predator population. As the value of \(c\) increases from \(c=35\) to \(c=50\), the peak prey density also increases due to the group defense of the prey population. Consequently, the troughs of the predator population decrease further (see Fig. 10).
Another important finding concerns the effect of the prey-taxis \(c\) on the spatial averages of the populations. The spatial average of the prey population jumps from \(0.2016\) for \(c<c_{T}\) to \(0.4516\) for \(c=35\); it thus more than doubles due to the Turing bifurcation. On the other hand, the predator population changes from \(0.1402\) for \(c<c_{T}\) to \(0.1582\) for \(c=35\), a modest increase compared to the prey population. Interestingly, the spatial average of the prey increases and that of the predator decreases as \(c\) is increased from \(c=35\) to \(c=50.\) Thus, an increase in prey-taxis is more beneficial to the prey species than to the predator species.
Figure 9: Comparison of the solution up to third order from the weakly nonlinear analysis with the numerical solution: (a) lower branch, (b) upper branch. The values of the other parameters are \(a=7,\) \(b=5.65,\) \(f=0.98,\) \(e=0.95,\) \(d=80,\) \(c=27\) and \(k_{T}=0.283128.\)
### Spatio-temporal transient dynamics
Here, we discuss taxis-induced transient solutions of the system (5). We choose the parameter values \(a=7\), \(b=5.65\), \(f=1.07\), \(e=0.95\), and \(d=80\). For this parameter set, we calculate \(c_{T}=5.552\) using (22). Note that the corresponding temporal system has bistability between the coexisting equilibrium point \(E_{1}^{*}\) and the axial equilibrium point \(E_{1}\). For \(c=5\), the system (5) is neither Turing unstable nor Hopf unstable. Hence, when simulated with a random perturbation around \(E_{1}^{*}\), the system reaches the homogeneous steady-state solution \(E_{1}^{*}\) after the initial transients. However, for \(c>c_{T}\), we observe Turing instability. Usually, the system settles down to a stationary non-homogeneous state in the Turing unstable domain. However, for \(c=6>c_{T}\), we find that the system ultimately settles down to the homogeneous steady predator-free state \(E_{1}\) (see Fig. 11). The system initially evolves towards the Turing solution, and the intermediate Turing solution satisfies the corresponding dispersion relation \(H(k^{2})<0\) for this parameter value. Although the Turing solution persists for a considerable amount of time, it ultimately becomes unstable, leading to the homogeneous steady predator-free state. Interestingly, lower values of \(c\) with \(c<c_{T}\) lead to coexistence, whereas a higher value of \(c\) with \(c>c_{T}\) leads to the extinction of the predator species [see Fig. 11(c)]. Thus, the extinction described here is induced by the prey-taxis.
Next, we examine the effects of diffusion and taxis on the long transient and hysteresis states observed in the temporal model (see Fig. 5). We consider the same parameter values as in Fig. 5(a) together with the diffusion parameter \(d=80\), the taxis parameter \(c=40\), and a domain of length \(L=200\). The initial condition is chosen as
\[u(x,0)=\begin{cases}1.4+0.01\xi(x),&\text{for }|x-100|<50,\\ 1.4,&\text{otherwise},\end{cases}\qquad v(x,0)=\begin{cases}0.05+0.01\xi(x),&\text{for }|x-100|<50,\\ 0.05,&\text{otherwise}.\end{cases}\]
Here, \(\xi(x)\) is a Gaussian noise function. We choose this specific initial condition in order to compare our solution with the temporal solution. The corresponding spatio-temporal dynamics is shown in Fig. 12. In Fig. 12(b), we plot the spatial average \(<u>\) of the prey population for the system (5) against time, together with the corresponding variation of \(u\) for the temporal model (6). It shows that the transient time for both systems is almost the same, although their
Figure 10: Effect of prey-taxis \(c\) on stationary Turing solutions: (a) prey solutions, (b) predator solutions. Here magenta dashed and blue solid curves denote the corresponding stationary Turing solutions for \(c=35\) and \(c=50\), respectively. The values of other parameters are \(a=7\), \(b=5.65\), \(f=0.95\), \(e=0.95\), and \(d=100\).
Figure 11: Transient dynamics leading to homogeneous steady predator-free state for \(c=6\): (a) space-time plot of the prey species, (b) space-time plot of the predator species, (c) spatial average of prey species against time, (d) persistence of Turing solution during intermediate stage. Other parameter values are \(a=7,\ b=5.65,\ e=0.95\ f=1.07\), and \(d=80\).
Figure 12: Effects of the diffusion and taxis on the long transient dynamics of the temporal model: (a) space-time plot of the prey species \(u\), (b) variation of the temporal solution \(u\) of (6) and the spatial average of the spatio-temporal model (5) against time. The values of other parameters are \(a=7,\ b=7,\ e=0.95,\ f=0.8013,\ d=80\), and \(c=40\).
final states are different. In contrast to the temporal system, the space-time solution evolves to a Turing pattern due to the Turing instability.
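For reference, the initial condition used in this experiment can be discretized as in the following sketch; the grid resolution and the random seed are our own choices, and \(\xi\) is sampled as independent standard Gaussian values at the grid points.

```python
import numpy as np

L, nx = 200, 1001                 # domain length and number of grid points
x = np.linspace(0.0, L, nx)
xi = np.random.default_rng(0).standard_normal(nx)   # Gaussian noise xi(x)

mask = np.abs(x - 100) < 50
u0 = np.where(mask, 1.4 + 0.01 * xi, 1.4)
v0 = np.where(mask, 0.05 + 0.01 * xi, 0.05)
```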
Finally, we discuss the effects of diffusion and taxis on the long oscillatory transient dynamics observed in the temporal model [see Fig. 6(b)]. For this purpose, we consider an initial condition similar to the one above, together with \(d=80\) and \(c=40\). However, the long oscillatory transient dynamics does not appear in the spatio-temporal solution. Figure 13 shows that the predator becomes extinct in a short time and the solution approaches the homogeneous steady predator-free state \(E_{1}\).
### Oscillatory solution
Apart from the homogeneous and non-homogeneous steady-state solutions, the spatio-temporal system (5) also exhibits oscillatory solutions in the Hopf region. The parameter values \(a=7,\ b=5.65,\ e=0.95,\ f=1.12,\ d=40\), and \(c=5\) lie in the Hopf region. Note that the temporal system (6) exhibits bistability between the periodic solution around \(E_{1}^{*}\) and the axial equilibrium \(E_{1}\) with
Figure 14: Time oscillatory and spatially non-homogeneous solution: (a) variation of spatial average \(<u>\) against time, (b) phase portrait of spatial averages \(<u>\) and \(<v>\), (c) contour plot of space-time solution \(u\) after the initial transient. The values of other parameters are \(a=7,\ b=5.65,\ e=0.95\ f=1.12,\ d=40\), and \(c=5\).
Figure 13: Variation of the spatial averages of the prey and predator populations against time. The values of other parameters are \(a=7,\ b=7,\ e=0.95,\ f=0.867682,\ d=80\), and \(c=40\).
the temporal parameters taken from this set. The corresponding spatio-temporal system (5) shows a solution that is oscillatory in time and non-homogeneous in space, as shown in Fig. 14. The phase portrait of the spatial averages \(<u>\) and \(<v>\) [see Fig. 14(b)] confirms the quasi-periodic nature of the solution. With an increase in \(c\), the parameter set moves into the Turing-Hopf region, and the predator species becomes extinct, leading to the homogeneous steady predator-free state. This final state corresponds to the axial equilibrium of the bistable states, and it is similar to the taxis-induced extinction discussed earlier. With a decrease in \(c\), the parameter set moves well inside the Hopf region, and the system then exhibits a homogeneous oscillatory solution corresponding to the stable periodic solution around \(E_{1}^{*}\).
## 6 Conclusion
The primary goal of our work is to investigate the effects of group defense among prey species on the spatio-temporal distribution of both the prey and predator populations, modelled by a Bazykin-type prey-predator model. The group defense of the prey species is incorporated using a non-monotonic functional response in the temporal model. Due to group defense, the predator species avoids areas with high prey density. We include repellent prey-taxis to take this response of the predator species into account. The temporal model (6) exhibits a range of complex dynamics, including bistability, tristability, global bifurcations, and long transient dynamics. The corresponding spatio-temporal system (5) possesses global bounded solutions. The existence of a non-homogeneous stationary solution above the Turing threshold \(c_{T}\) has been established using WNA. Further, exhaustive numerical simulations have been performed to validate the results of the WNA and to investigate the long transient dynamics present in the spatio-temporal system.
We have used bifurcation analysis of the temporal system (6) to identify various local and global bifurcations by considering \(b\) and \(f\) as bifurcation parameters. Note that the parameters \(b\) and \(f\) depend on the strength of the group defense of the prey species and the death rate of the predator species, respectively. Through multiple one-parameter bifurcation diagrams and a two-parameter bifurcation diagram, we have observed that a higher strength of group defense or a larger death rate of the predator species can lead to predator extinction in the coexisting dynamics. However, the initial population plays a crucial role in determining the final state of the system. In bistability and tristability scenarios, the system can reach coexisting steady-state dynamics, coexisting oscillatory dynamics, or a predator-extinction state, depending on the initial condition.
We have observed long transient dynamics in the temporal system, in which the system spends a considerable time around a stationary or oscillatory state before reaching the final state. These dynamics depend on the initial conditions and parameter values. When the group defense parameter \(b\) is above the cusp-point threshold value, an increase in the predator's death rate \(f\) causes the appearance of two new coexisting equilibria through a saddle-node bifurcation. When \(f\) is decreased slightly, these two new coexisting equilibria disappear and a narrow region develops between the two non-trivial nullclines. When the initial prey or predator population is small, the trajectories are constrained to move through this narrow region, leading to the long stationary transient dynamics [see Fig. 5(b)]. A significant change is also observed between the population levels in the transient state and the final state. A similar long transient, but oscillatory, state is observed due to a saddle-node bifurcation of limit cycles, which is a global bifurcation. In contrast to the long stationary transient discussed above, the final steady state in this case is approximately the average of the population over an oscillatory cycle (see Fig. 6).
A homogeneous steady state of the spatio-temporal system (5), which corresponds to a stable co-existence equilibrium of the corresponding temporal model, can become Turing unstable when the prey-taxis coefficient \(c\) crosses a threshold \(c_{T}.\) A Turing solution refers to a stationary, spatially periodic solution. A Turing pattern develops and persists in the system when the corresponding temporal model has a single stable co-existing equilibrium point and all the other equilibria are unstable. In this case, we have employed WNA to derive the Turing solution for \(c\) near \(c_{T}\), and the theoretical findings have been validated using numerical simulations (see Fig. 9). However, if the temporal model has bistability or tristability, then the Turing solution may not persist. We have shown an example (see Fig. 11) where the spatio-temporal system initially approaches a Turing solution but ultimately settles down to the predator-extinction state. The reason for this behaviour is the presence of both a stable coexisting equilibrium and a stable predator-free equilibrium in the corresponding temporal model. The perturbations around the homogeneous state corresponding to the coexisting equilibrium grow in magnitude and approach a Turing solution, but the system ultimately settles down to the homogeneous state corresponding to the predator-free equilibrium. As with the temporal transients, the final state of the solution is unpredictable in this case too. Generally, an increase in group-defense-induced prey-taxis can lead to predator-extinction scenarios starting from the homogeneous stable coexistence state.
We have also investigated whether the long transient dynamics observed in the temporal model persist in the extended spatio-temporal system. Very similar long stationary transient dynamics are found in the presence of diffusion and taxis (see Fig. 12), but the final state consists of a Turing pattern instead of the homogeneous steady state \(E_{1}^{*}\). This is due to the taxis parameter \(c\) lying in the Turing domain corresponding to the homogeneous state \(E_{1}^{*}.\) In the case of the transient oscillatory dynamics of the temporal model, however, the spatio-temporal system does not exhibit any transient dynamics; the system rapidly reaches the predator-extinction homogeneous steady state \(E_{1}\) (see Fig. 13). Thus, the persistence of the long transient dynamics of the temporal model depends on the parameter values of the spatio-temporal model. Another important observation is the appearance of non-homogeneous oscillatory pattern solutions for certain parameter sets lying in the Hopf region (see Fig. 14). Increasing the value of the taxis parameter \(c\) towards the Turing-Hopf region leads to the appearance of a transient Turing pattern, but the system ultimately settles down to the homogeneous steady state \(E_{1}.\) On the other hand, the non-homogeneous oscillatory pattern solution becomes a homogeneous oscillatory solution as \(c\) decreases towards the interior of the Hopf region.
In summary, prey-taxis is more beneficial to the prey species than to the predator species, and it significantly influences the survival of the predator species. A stable coexisting state of the temporal model can become a homogeneous predator-free state in the presence of prey-taxis. Thus, prey-taxis plays a crucial role in the pattern formation scenario of a spatio-temporal prey-predator system.
**Data Availability** The authors declare that no experimental data were used in the preparation of this manuscript.
**Ethics declarations Conflict of interest** The authors declare that they have no conflict of interest.
## Appendix A
Using the Gagliardo-Nirenberg inequality [63] with \(v\geq 0\), we obtain
\[\int\limits_{\Omega}v^{q}dx =\int\limits_{\Omega}(v^{\frac{q}{2}})^{2}dx\leq a_{1}\left(||\nabla v^{\frac{q}{2}}||_{2}^{\frac{2nq-2n}{nq-n+2}}\cdot||v^{\frac{q}{2}}||_{\frac{2}{q}}^{\frac{4}{nq-n+2}}+||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2}\right)\] \[=a_{1}\left(||\nabla v^{\frac{q}{2}}||_{2}^{2\theta}\cdot||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2(1-\theta)}+||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2}\right),\]
where \(a_{1}>0\) and \(0<\theta=\frac{nq-n}{nq-n+2}<1\). Using Young's inequality, we have
\[||\nabla v^{\frac{q}{2}}||_{2}^{2\theta}\cdot||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2(1-\theta)}\leq a_{2}\theta||\nabla v^{\frac{q}{2}}||_{2}^{2}+a_{2}^{\frac{\theta}{\theta-1}}(1-\theta)||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2},\]
for any \(a_{2}>0\). Thus we have
\[\int\limits_{\Omega}v^{q}dx \leq a_{1}\left(a_{2}\theta||\nabla v^{\frac{q}{2}}||_{2}^{2}+a_{2}^{\frac{\theta}{\theta-1}}(1-\theta)||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2}+||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2}\right)\] \[=a_{1}a_{2}\theta||\nabla v^{\frac{q}{2}}||_{2}^{2}+a_{1}\left(a_{2}^{\frac{\theta}{\theta-1}}(1-\theta)+1\right)||v^{\frac{q}{2}}||_{\frac{2}{q}}^{2}\] \[=\varepsilon_{1}||\nabla v^{\frac{q}{2}}||_{2}^{2}+\varepsilon_{2}||v||_{1}^{q},\]
for any \(\varepsilon_{1}>0\) and for some \(\varepsilon_{2}>0\) depending on \(\varepsilon_{1}\). Since \(||v||_{1}\leq B\), we obtain
\[\int\limits_{\Omega}v^{q}dx\leq\varepsilon_{1}||\nabla v^{\frac{q}{2}}||_{2}^ {2}+M\text{ for any }\varepsilon_{1}>0\text{ and for some }M=\varepsilon_{2}B^{q}>0.\]
|
2303.17435 | Covering numbers of unipotent conjugacy classes in simple algebraic
groups | For simple algebraic groups defined over algebraically closed fields of good
characteristic, we give upper bounds on the covering numbers of unipotent
conjugacy classes in terms of their (co)ranks and in terms of their dimensions. | Iulian Ion Simion | 2023-03-30T15:02:05Z | http://arxiv.org/abs/2303.17435v1 | # Covering numbers of unipotent conjugacy classes in simple algebraic groups
###### Abstract.
For simple algebraic groups defined over algebraically closed fields of good characteristic, we give upper bounds on the covering numbers of unipotent conjugacy classes in terms of their (co)ranks and in terms of their dimensions.
I am grateful to Prof. Attila Maroti for many discussions on this topic and to the referees for their remarks and suggestions. This work was supported by a grant of the Ministry of Research, Innovation and Digitalization, CNCS/CCCDI-UEFISCDI, project number PN-III-P1-1.1-TE-2019-0136, within PNCDI III
**Theorem A**.: _There is a constant \(c\) such that for any simple algebraic group \(G\) defined over a field of good characteristic and any distinguished unipotent conjugacy class \(C\) of \(G\) we have_
\[\operatorname{cn}(G,C)\leq c.\]
_Moreover, we may choose \(c=2^{3}\cdot 3^{2}\)._
For bounded rank, in particular for exceptional algebraic groups, the above result and Theorems B and C below follow from [7]. It is worth noticing that if \(G\) is adjoint of type \(A\) and if \(C\) is the regular unipotent conjugacy class then \(C^{2}=G\) by [11]. In other words, for such groups the constant in Theorem A is \(2\).
The rank \(\operatorname{rk}(H)\) of an algebraic group \(H\) is the dimension of a maximal torus of \(H\). Let \(C\) be the conjugacy class of the unipotent element \(u\in G\). We define the _corank of \(C\)_ to be \(\operatorname{crk}(C):=\operatorname{rk}(C_{G}(u))\). Further, we define the _rank of \(C\)_ to be \(\operatorname{rk}(C):=\operatorname{rk}(G)-\operatorname{crk}(C)\). The second result gives an upper bound for \(\operatorname{cn}(G,C)\) in terms of the rank and the corank of \(C\).
**Theorem B**.: _There is a constant \(c\) such that for any simple algebraic group \(G\) defined over a field of good characteristic and any unipotent conjugacy class \(C\) of \(G\) we have_
\[\operatorname{cn}(G,C)\leq c\cdot\frac{\operatorname{rk}(G)}{\operatorname{ rk}(C)}=c\cdot\left(1+\frac{\operatorname{crk}(C)}{\operatorname{rk}\left(C \right)}\right).\]
_Moreover, we may choose \(c=2^{5}\cdot 3^{2}\)._
The Lang-Weil bound [10, Theorem 1] suggests that the analogue of [13, Theorem 1.1] for algebraic groups is \(\operatorname{cn}(G,C)\leq c\cdot(\dim(G)/\dim(C))\) where \(c\) is a universal constant independent of \(G\). We prove this bound in the case of unipotent conjugacy classes.
**Theorem C**.: _There is a constant \(c\) such that for any simple algebraic group \(G\) defined over a field of good characteristic and any unipotent conjugacy class \(C\) of \(G\) we have_
\[\operatorname{cn}(G,C)\leq c\cdot\frac{\dim(G)}{\dim(C)}.\]
_Moreover, we may choose \(c=2^{9}\cdot 3^{2}\)._
The focus of this paper is on unipotent conjugacy classes and the asymptotic bound of their covering numbers. We believe that it should be possible to improve the upper bound on the constant \(c\).
The paper is structured as follows: Section 2 introduces the notation needed throughout the paper, slightly extends the context to normal subsets, slightly extends the notion of covering number and introduces marked diagrams. The proofs of Theorems A, B and C are given in Section 3, 4 and 5 respectively.
## 2. Preliminaries
### Setup
In this paper \(G\) denotes a simple algebraic group of rank \(r=\operatorname{rk}(G)\) defined over an algebraically closed field \(F\) of good characteristic \(p\). We assume throughout that \(p\) is a good prime for \(G\). We fix a Borel subgroup \(B\) with unipotent radical \(U\) and maximal torus \(T\). The roots \(\Phi\) of \(G\) are with respect to \(T\), the set of positive roots \(\Phi^{+}\) is with respect to \(U\), and \(\Delta\) denotes the set of simple roots of \(\Phi\) in \(\Phi^{+}\). We denote by \(U^{-}\) the unipotent radical
of the Borel subgroup opposite to \(B\), i.e. \(U^{-}=U^{\dot{w}_{0}}\) for some representative \(\dot{w}_{0}\in N_{G}(T)\) of the longest element (with respect to \(\Delta\)) of the Weyl group \(N_{G}(T)/T\). For an element \(w\in N_{G}(T)/T\) we write \(\dot{w}\) for a representative in \(N_{G}(T)\).
For each root \(\alpha\in\Phi\) let \(u_{\alpha}:F\to U_{\alpha}\) be an isomorphism from the additive group of the ground field \(F\) onto the root subgroup \(U_{\alpha}\). For each \(\alpha\in\Phi\) we denote by \(\alpha^{\vee}:F^{\times}\to T\) the cocharacter corresponding to the root \(\alpha\) (see [9, II SS1.3]). Then
\[{}^{\alpha^{\vee}(t)}u_{\beta}(x)=\alpha^{\vee}(t)u_{\beta}(x)\alpha^{\vee}( t)^{-1}=u_{\beta}(\beta(\alpha^{\vee}(t))x)=u_{\beta}(t^{\langle\beta,\alpha \rangle}x) \tag{1}\]
for all \(\alpha,\beta\in\Phi\), \(t\in F^{\times}\), \(x\in F\) (see [3, Ch.7]).
For a set of roots \(I\subseteq\Phi\), let \(\Phi_{I}\) be the root subsystem \(\langle I\rangle_{\Phi}\) generated by \(I\). We denote by \(L_{I}\) the subgroup \(\langle T,U_{\alpha}:\alpha\in\Phi_{I}\rangle\) of \(G\). If the roots in \(I\) are simple then \(L_{I}\) is a standard Levi subgroup. In this case, we denote by \(P_{I}\) the standard parabolic subgroup with Levi factor \(L_{I}\). When we need to specify the ambient group \(G\), we write \(L_{I}^{G}\) or \(P_{I}^{G}\). Notice that \(L_{I}^{G}\) and \(P_{I}^{G}\) make sense in the more general case of a reductive algebraic group \(G\). Moreover, we denote by \(G(I)\) the subsystem subgroup \(\langle U_{\alpha}:\alpha\in\Phi_{I}\rangle\). Notice that \(L_{I}=G(I)T\). In the particular case of \(I=\{\alpha\}\subseteq\Delta\) we denote by \(G_{\alpha}\) the subgroup \(G(I)\).
### Normal subsets
A product of conjugacy classes of \(G\) is invariant under conjugation by \(G\). A _normal subset_\(N\) of a group \(G\) is a non-empty subset of \(G\) which is invariant under \(G\)-conjugation, i.e. \(N\) is a non-empty union of conjugacy classes of \(G\). The intermediate steps in our proofs are easier to formulate with this notion. Notice that Theorems A, B and C can be formulated for normal subsets consisting of unipotent elements if the conditions of those statements are imposed on the highest dimensional classes in \(N\).
### Covering numbers
When dealing with successive powers of a conjugacy class \(C\) of \(G\), we use intermediate steps in which we show that for certain \(k\geq 0\) the normal subset \(C^{k}\) contains a certain subset of \(G\). For this we slightly extend the notion of covering number as follows. For subsets \(S_{1}\) and \(S_{2}\) of \(G\) we define the _\(S_{1}\)-covering number of \(S_{2}\)_ to be the smallest integer \(k\) such that \(S_{1}\subseteq S_{2}^{k}\), or \(\infty\) if no such \(k\) exists. We denote this number by \(\operatorname{cn}(S_{1},S_{2})\).
### Marked diagrams
Let \(I\) be a subset of \(\Delta\). The _marked diagram_\(D_{I}\) corresponding to the set \(I\) is the Dynkin diagram of \(G\) where we mark the nodes corresponding to the roots in \(\Delta-I\). For example, if \(G\) is of type \(A_{4}\), the marked diagram corresponding to \(I=\{\alpha_{2},\alpha_{4}\}\) is
i.e. the marked nodes are the black nodes in the figure. We identify the roots in \(\Delta\) with the corresponding nodes in the Dynkin diagram. A _component_\(\Delta^{\prime}\subseteq\Delta\) of the marked diagram \(D_{I}\) is a maximal (by inclusion) connected subdiagram consisting of marked nodes.
Clearly, there is a \(1:1\) correspondence between marked diagrams \(D_{I}\) and standard Levi subgroup \(L_{I}\). Thus, there is a \(1:1\) correspondence between marked diagrams \(D_{I}\) and standard parabolic subgroups \(P_{I}\). In the context of unipotent conjugacy classes, distinguished unipotent conjugacy classes are in bijection with distinguished parabolic subgroups. The marked diagram corresponding to a distinguished parabolic subgroup is obtained from the
labeled Dynkin diagram [2, SS5.9] by marking those nodes which are labeled by '2'. For example, if \(G\) is of type \(C_{r}\), the distinguished unipotent conjugacy classes are in bijection with marked diagrams of the form
where \(m+n_{1}+\cdots+n_{k}+1=r\), \(n_{1}=2\) and where \(n_{i+1}=n_{i}\) or \(n_{i}+1\) for each \(1\leq i\leq k-1\).
In what follows, marked diagrams will be used both in the description of distinguished parabolic subgroups and in the description of standard Levi subgroups of \(G\).
A _shift_ of a diagram is the transformation under which we obtain a diagram of the same type with one component moved one node to the left or to the right without touching another component. A _permutation_ of a diagram is the transformation under which we obtain a diagram of the same type by permuting the components. Under such transformations the corresponding standard Levi subgroups are conjugate (see Lemma 7).
## 3. Covering numbers of distinguished unipotent conjugacy classes
Recall that a unipotent element is distinguished if \(C_{G}(u)^{\circ}\) is unipotent. A parabolic subgroup \(P=LQ\) with Levi factor \(L\) and unipotent radical \(Q\) is distinguished if \(\dim(L)=\dim(Q/[Q,Q])\)[12, SS2.5-6]. An element \(g\) of a parabolic subgroup \(P\) is called a Richardson element of \(P\) if the \(P\)-conjugacy class of \(g\) intersects the unipotent radical \(Q\) in an open set of \(Q\). By the Bala-Carter-Pommerening classification of unipotent conjugacy classes [1, 15] (see also [2, Theorem 5.9.6 and SS5.11]), there is a bijection between distinguished conjugacy classes and conjugacy classes of distinguished parabolic subgroups. Under this bijection, the conjugacy class of the distinguished parabolic subgroup \(P\) corresponds to the (unique) \(G\)-conjugacy class containing a Richardson element of \(P\).
**Lemma 1**.: _Let \(P\) be a parabolic subgroup of \(G\) with unipotent radical \(Q\) and let \(N\) be a normal subset of \(G\). If \(N\) contains a Richardson element of \(P\) then \(\operatorname{cn}(Q,N)\leq 2\)._
Proof.: Let \(C\) be the conjugacy class in \(N\) containing a Richardson element of \(P\). Since \(C\) contains a Richardson element of \(P\) it intersects \(Q\) in an open subset \(V\). Since \(V\) is an open subset of the connected group \(Q\) we have \(Q=V^{2}\subseteq C^{2}\subseteq N^{2}\).
The following lemma is known. We give two possible proofs.
**Lemma 2**.: _If \(N\) is a normal subset containing a regular semisimple element then \(\operatorname{cn}(G,N)\leq 3\)._
Proof 1.: Let \(s\in N\) be a regular semisimple element. We may assume \(s\in T\). All elements in \(sU\) and all elements in \(sU^{-}\) are conjugate to \(s\) (see [8, SS2.4]). Hence \(sU,sU^{-}\subseteq N\). Let \((sU)^{U^{-}}\) denote the set of conjugates of elements in \(sU\) by elements in \(U^{-}\). By [19, Theorem 1] we have
\[G=s^{3}\cdot U\cdot U^{-}\cdot U\cdot U^{-}=(sU)^{U^{-}}\cdot(sU)^{U^{-}}\cdot s U ^{-}\subseteq N^{3}.\]
since \((sU)^{U^{-}}\cdot sU^{-}=\cup_{v\in U^{-}}v^{-1}sUvsU^{-}=\cup_{v\in U^{-}}v^{-1}sUsU^{-}=U^{-}sUsU^{-}\), where the middle equality uses \(vsU^{-}=sU^{-}\) (because \(s^{-1}vs\in U^{-}\)).
Proof 2.: Let \(s\in N\) be a regular semisimple element. We may assume \(s\in T\). By [4, Theorem 2.1] any non-central element of \(G\) is conjugate to \(vs^{2}u\) for some \(v\in U^{-}\) and some \(u\in U\). Since \(s\) is regular, \(vs\) and \(su\) are conjugate to \(s\). Hence, \(N^{2}\) contains any non-central element of \(G\). Thus \(N^{3}=G\).
**Lemma 3**.: _Let \(\Phi\) be of type \(A_{2d+1}\) and let \(\gamma_{k}:=\alpha_{d+1}+\sum_{i=1}^{k}\alpha_{d+1+i}+\alpha_{d+1-i}\) for \(0\leq k\leq d\). The set of roots \(R=\{\gamma_{k}:0\leq k\leq d\}\) has the property that \(\alpha+\beta\) is not a root for any \(\alpha,\beta\in R\cup-R\)._
Proof.: For \(\alpha,\beta\in R\) the coefficient of \(\alpha_{d+1}\) in a decomposition of \(\alpha+\beta\) w.r.t. \(\Delta\) is \(2\), hence \(\alpha+\beta\) is not a root. Similarly for \(\alpha,\beta\in-R\). It therefore suffices to notice that \(\alpha-\beta\) is not a root for any \(\alpha,\beta\in R\). Let \(\alpha=\gamma_{k}\) and \(\beta=\gamma_{m}\) for some \(0\leq k,m\leq d\). If \(k=m\) then \(\alpha-\beta=0\), which is not a root. If \(k>m\) then \(\alpha-\beta=\delta_{1}+\delta_{2}\) with \(\delta_{1}=\sum_{i=m+1}^{k}\alpha_{d+1+i}\) and \(\delta_{2}=\sum_{i=m+1}^{k}\alpha_{d+1-i}\). Since \(\Phi\) is of type \(A\), it is easy to see that the two roots are orthogonal, and hence that their sum is not a root. The case \(k<m\) is similar.
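The lemma can also be spot-checked computationally by realizing the roots of \(A_{2d+1}\) as \(e_{i}-e_{j}\) in \(\mathbb{R}^{2d+2}\), under which \(\gamma_{k}=e_{d+1-k}-e_{d+2+k}\). The sketch below is our own verification; the value of \(d\) is arbitrary.

```python
import numpy as np
from itertools import product

d = 4
n = 2 * d + 2                                  # A_{2d+1} realized in R^n

def root(i, j):                                # e_i - e_j (1-indexed)
    v = np.zeros(n, dtype=int)
    v[i - 1], v[j - 1] = 1, -1
    return v

roots = {tuple(root(i, j)) for i in range(1, n + 1)
         for j in range(1, n + 1) if i != j}
R = [root(d + 1 - k, d + 2 + k) for k in range(d + 1)]   # gamma_0,...,gamma_d

for a, b in product(R + [-g for g in R], repeat=2):
    assert tuple(a + b) not in roots           # alpha + beta is never a root
print("Lemma 3 verified for d =", d)
```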
**Lemma 4**.: _Let \(G\) be of type \(A_{2d+1}\), let \(I=\Delta-\{\alpha_{d+1}\}\) and let \(P_{I}\) be the corresponding standard parabolic subgroup with unipotent radical \(Q_{I}\). If \(N\) is a normal subset of \(G\) containing \(Q_{I}\), then \(\operatorname{cn}(G,N)\leq 6\)._
Proof.: Let \(R=\{\gamma_{0},\ldots,\gamma_{d}\}\) be the set of roots described in Lemma 3. Since \(N\) contains \(Q_{I}\) it also contains \(\prod_{k=0}^{d}U_{\gamma_{k}}\). Let \(w_{0}\) be the longest element (with respect to \(\Delta\)) of the Weyl group \(N_{G}(T)/T\). One checks that \(w_{0}(\gamma_{k})=-\gamma_{k}\) for all \(0\leq k\leq d\). Thus \((\prod_{k=0}^{d}U_{\gamma_{k}})^{\dot{w}_{0}}=\prod_{k=0}^{d}U_{-\gamma_{k}}\). Since \(N\) is a normal subset of \(G\) it contains the product of commuting root subgroups \(\prod_{i=0}^{d}U_{-\gamma_{i}}\).
The factors in the product \(\prod_{k=0}^{d}U_{\gamma_{k}}\) commute since \(\gamma_{i}+\gamma_{j}\) is not a root for all \(0\leq i,j\leq d\). Moreover, since \(\alpha+\beta\) is not a root for any \(\alpha,\beta\in R\cup-R\) by the commutator relations we have \([U_{\gamma_{i}},U_{-\gamma_{j}}]=1\) for \(0\leq i\neq j\leq d\). By [4, Theorem 2.1] any non-central element of \(G(\gamma_{i})\) is conjugate to an element in \(U_{\gamma_{i}}U_{-\gamma_{i}}\), hence
\[\prod_{i=0}^{d}U_{\gamma_{i}}U_{-\gamma_{i}}=\left(\prod_{i=0}^{d}U_{\gamma_{i}}\right)\left(\prod_{i=0}^{d}U_{-\gamma_{i}}\right)\]
is an open subset of \(G(R)\) contained in \(N^{2}\). In particular, an open subset \(\tilde{T}\) of the torus \(\prod_{i=0}^{d}T_{\gamma_{i}}\) lies in \(N^{2}\), where \(T_{\gamma_{i}}\) is the image of the cocharacter \(\gamma_{i}^{\vee}\). A direct check using (1) shows that \(\tilde{T}\) does not commute with any root subgroup; hence \(C_{G}(\tilde{T})^{\circ}=T\) [17, II Theorem 4.1]. Thus \(\tilde{T}\) contains an element \(t\) with \(C_{G}(t)^{\circ}=T\) [16, Lemma 6.4.3], i.e. it contains a regular semisimple element, and the claim follows from Lemma 2.
**Proposition 5**.: _Let \(G\) be a classical simple algebraic group of rank \(\operatorname{rk}(G)>11\). If \(C\) is a distinguished unipotent conjugacy class of \(G\) then there exists a torus \(\tilde{T}\) of dimension \(\operatorname{rk}(G)\) such that \(\operatorname{cn}(\tilde{T},C)\leq 36\)._
Proof.: We prove the statement by means of a case-by-case analysis. Let \(P_{I}\) be the distinguished parabolic subgroup of \(G\) with unipotent radical \(Q_{I}\) such that \(C\cap Q_{I}\) is open in \(Q_{I}\). The possible sets of roots \(I\) can be read off from the possible distinguished diagrams
[2, SS5.9]. More precisely, \(I\) consists of the simple roots corresponding to the nodes labeled with '\(0\)' in the distinguished diagram corresponding to \(P_{I}\).
First we treat the case of regular unipotent elements. If \(C\) is the conjugacy class of regular unipotent elements then \(I=\emptyset\) and \(Q_{I}=U\). Hence \(C\) contains an open subset of \(U\) and \(U^{-}\). Thus \(C^{2}\) contains an open subset of \(UU^{-}\) and therefore also an open subset of \((UU^{-})^{G}\). By [4, Theorem 2.1] we have \((UU^{-})^{G}=G-Z(G)\), thus \(C^{2}\) contains an open subset of \(G\) and so \(C^{4}=G\).
If \(G\) is of type \(C_{r}\) then the distinguished diagrams are
where \(m+n_{1}+\cdots+n_{k}+1=r\), \(n_{1}=2\) and \(n_{i+1}=n_{i}\) or \(n_{i}+1\) for each \(1\leq i\leq k-1\).
If \(G\) is of type \(B_{r}\) then the distinguished diagrams are
where \(m+n_{1}+\cdots+n_{k}+1=r\), \(n_{1}=2\), \(n_{i+1}=n_{i}\) or \(n_{i}+1\) for each \(1\leq i\leq k-2\) and \(n_{k}=n_{k-1}/2\) if \(n_{k-1}\) is even or \(n_{k}=(n_{k-1}-1)/2\) if \(n_{k-1}\) is odd.
If \(G\) is of type \(D_{r}\) then the distinguished diagrams are
where \(m+2k+2=r\), together with
where \(m+n_{1}+\cdots+n_{k}=r\), \(n_{1}=2\), \(n_{i+1}=n_{i}\) or \(n_{i}+1\) for each \(1\leq i\leq k-2\) and \(n_{k}=n_{k-1}/2\) if \(n_{k-1}\) is even or \(n_{k}=(n_{k-1}+1)/2\) if \(n_{k-1}\) is odd.
Choose \(t\) maximal such that \(t\leq r/2\) and such that \(\alpha_{t}\) is a marked node. Let \(\Phi_{t}\) be the root subsystem generated by \(I_{t}=\{\alpha_{1},\ldots,\alpha_{2t-1}\}\) and let \(\Phi_{t}^{w}\) be the set of non-marked nodes among \(\alpha_{1},\ldots,\alpha_{2t-1}\). Consider the subsystem subgroup \(G_{t}=G(\Phi_{t})\). It is a group of type \(A_{2t-1}\). By Lemma 1, \(C^{2}\) contains \(Q_{I}\). In particular it contains \(Q_{t}=Q_{I}\cap G_{t}\). The subgroup \(Q_{t}\) is the unipotent radical of the standard parabolic subgroup of \(G_{t}\) with standard Levi factor \(L_{t}\) generated by the maximal torus \(T_{t}=G_{t}\cap T\) and \(G(\Phi_{t}^{w})\). That is, \(Q_{t}\) is the product (in a fixed but arbitrary order) of the root subgroups \(U_{\beta}\) with \(\beta\in\Phi^{+}\cap(\Phi_{t}-\Phi_{t}^{w})\). Since \(\alpha_{t}\) is marked, \(L_{t}\) is a subgroup of the standard parabolic subgroup \(P_{I_{t}-\{\alpha_{t}\}}\) of \(G_{t}\)
Thus, the unipotent radical of this parabolic subgroup is contained in \(Q_{t}\subseteq C^{2}\). We may therefore apply Lemma 4 with \(\alpha_{d+1}=\alpha_{t}\) for the normal subset \(C^{2}\cap G_{t}\) of the group \(G_{t}\), to obtain that \(G_{t}\subseteq C^{2\cdot 6}\). In particular \(C^{12}\) contains the maximal torus \(T_{t}\) of \(G_{t}\).
We claim that \(2t-1\geq r/2\). If \(t\leq m\) then \(\alpha_{t+1}\) is marked and the claim follows. Assume that \(t>m\). Let \(l\) be such that \(m+n_{1}+\cdots+n_{l}+1=t\) and let \(\alpha_{t^{\prime}}\) be the next marked node to the right of \(\alpha_{t}\). The node \(\alpha_{t^{\prime}}\) exists, for otherwise \(G\) is of type \(B_{r}\) or \(D_{r}\) and \(l=k-1\), in which case \(n_{k}\leq\frac{n_{k-1}}{2}<n_{k-1}\). Then \(t>r/2\), which contradicts the choice of \(t\). If \(t^{\prime}=r\) then \(G\) is of type \(C_{r}\), and since \(t\leq r/2\) it follows that \(k\leq 2\), a case which is excluded since \(r>11\). If \(t^{\prime}=r-1\) then \(G\) is of type \(D_{r}\), and since \(t\leq r/2\) the rank \(r\) would again need to be smaller than \(11\). In all other cases \(t^{\prime}=m+n_{1}+\cdots+n_{l+1}+1\). Then, since \(n_{l+1}=n_{l}\) or \(n_{l+1}=n_{l}+1\), we have
\[2t-1=2m+2n_{1}+\cdots+2n_{l}+1\geq m+n_{1}+\cdots+n_{l}+1=t^{\prime}\]
unless \(m=0\), \(l=1\) and \(n_{2}=n_{1}+1\) in which case \(t=3\) and \(t^{\prime}=6\). In this case, since \(t^{\prime}>r/2\) the rank would again have to be less than \(11\).
Thus \(\dim T_{t}\geq r/2\) and there is a Weyl group element \(w_{1}\) such that \(T^{\prime}=T_{t}T_{t}^{\dot{w}_{1}}\subseteq C^{24}\) is a torus of \(G(\alpha_{1},\ldots,\alpha_{r-1})\) of dimension \(r-1\). Indeed, choose \(w_{1}\) to be the longest element of the Weyl group of \(G(\alpha_{1},\ldots,\alpha_{r-1})\).
In all cases, inspecting the root systems one finds that \(T^{\prime}T_{t}^{\dot{w}_{2}}\) contains a torus of dimension \(r\), for some element \(w_{2}\) of the Weyl group. There are several such choices and \(w_{2}=w_{1}s_{\alpha_{r}}\) works in all cases since \(T_{t}^{\dot{w}_{1}}\) projects onto a \(1\)-dimensional torus of \(G(\alpha_{r-1})\cap T\). Thus \(C^{36}\) contains an \(r\)-dimensional torus of \(G\).
Proof of Theorem A.: Let \(C\) be a distinguished unipotent conjugacy class of \(G\). For the bounded rank case we use [7]: if \(\operatorname{rk}(G)\leq 11\) then, for any conjugacy class \(C\) of \(G\) we have \(\operatorname{cn}(C)\leq 4\cdot\operatorname{rk}(G)\leq 44\). For \(\operatorname{rk}(G)>11\), by Proposition 5 there is an \(\operatorname{rk}(G)\)-dimensional torus in \(C^{36}\). Hence \(C^{36}\) contains an open subset of \(T\). It therefore contains an open subset of \(G\)[18, SS3.5 Corollary], hence \(G=C^{36\cdot 2}\).
## 4. Covering numbers of unipotent conjugacy classes in terms of rank
Let \(C\) be the conjugacy class of the unipotent element \(u\in G\). By the Bala-Carter-Pommerening classification, up to \(G\)-conjugacy, there is a unique pair \((L,P)\) consisting of a Levi subgroup \(L\) and a distinguished parabolic subgroup \(P\) of \([L,L]\) such that \(u\) is a Richardson element of \(P\). Conjugating if necessary, we may assume that \(L=L_{I}\) and that \(P=P_{J}^{[L_{I},L_{I}]}\) for some \(J\subseteq I\subseteq\Delta\). Since it is clear from the context that we consider parabolic subgroups of \([L_{I},L_{I}]\), for brevity we write \(P_{J}\) instead of \(P_{J}^{[L_{I},L_{I}]}\).
**Proposition 6**.: _Let \(C\) be the unipotent conjugacy class corresponding to the pair \((L_{I},P_{J})\). We have_
1. \(\operatorname{crk}(C)=\dim(Z(L_{I})),\)__
2. \(\operatorname{crk}(C)=|\Delta-I|,\)__
3. \(\operatorname{rk}(C)=|I|.\)__
Proof.: Let \(u\in U\) be a Richardson element of \(P_{J}\) and let \(S\) be a maximal torus of \(C_{G}(u)\). The subgroup \(L=C_{G}(S)\) is a Levi subgroup [14, Proposition 12.10] which contains \(u\)
Conjugating if necessary, we may assume that \(L\) is the standard Levi subgroup \(L_{I^{\prime}}\) and that \(u\in[L_{I^{\prime}},L_{I^{\prime}}]\). Let \(\tilde{S}\) be a maximal torus in \(C_{[L_{I^{\prime}},L_{I^{\prime}}]}(u)\). If \(\dim(\tilde{S})>0\) then \(S\tilde{S}\) is a torus of \(C_{G}(u)\) of dimension higher than that of \(S\), contradicting the choice of \(S\). Hence \(\tilde{S}=\{1\}\) and therefore \(C_{[L_{I^{\prime}},L_{I^{\prime}}]}(u)^{\circ}\) is unipotent. Thus, \(u\) is a distinguished unipotent element of \([L_{I^{\prime}},L_{I^{\prime}}]\), i.e. we may assume that \(u\) is a Richardson element of some distinguished parabolic subgroup \(P_{J^{\prime}}\) of \([L_{I^{\prime}},L_{I^{\prime}}]\). By the Bala-Carter-Pommerening classification, the pairs \((L_{I},P_{J})\) and \((L_{I^{\prime}},P_{J^{\prime}})\) are \(G\)-conjugate. Hence \(\operatorname{crk}(C)=\dim(S)=\dim(Z(L_{I^{\prime}}))=\dim(Z(L_{I}))\), which equals \(|\Delta-I|\). The last claim also follows since \(\operatorname{rk}(C)=\operatorname{rk}(G)-\operatorname{crk}(C)=|\Delta|-|\Delta-I|=|I|\).
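As a quick illustration (our own example, not taken from the original source): if \(G\) is of type \(A_{4}\) and \(C\) corresponds to a pair \((L_{I},P_{J})\) with \(I=\{\alpha_{2},\alpha_{4}\}\), then Proposition 6 gives

\[\operatorname{rk}(C)=|I|=2,\qquad\operatorname{crk}(C)=|\Delta-I|=|\{\alpha_{1},\alpha_{3}\}|=2.\]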
In what follows, marked diagrams will be used to describe standard Levi subgroups. As mentioned in SS2.4, the components of a marked diagram can be shifted and permuted. Consider a component \(D^{\prime}\) of a marked diagram \(D_{\Delta-I}\), i.e. \(D^{\prime}\) is a connected subset of marked nodes corresponding to the roots in \(I\). A shift of \(D^{\prime}\) to the left by one position corresponds to removing the marking of the right most node \(\gamma^{\prime}\) of \(D^{\prime}\) and marking \(\gamma^{\prime\prime}\), the first node left of \(D^{\prime}\) (if such a node exists). Let \(J=\{\gamma^{\prime\prime}\}\cup I-\{\gamma^{\prime}\}\). We say that \(D_{\Delta-J}\) is obtained from \(D_{\Delta-I}\) by shifting \(D^{\prime}\) to the left with one position if the Levi subgroups corresponding to the two marked diagrams are isomorphic. Similarly one has right shifts. A shift is a left or right shift by any number of positions.
Let \(D_{1}\) and \(D_{2}\) be two components of a marked diagram \(D_{\Delta-I}\) which are separated by exactly one non-marked node \(\gamma\). We may assume that \(D_{1}\) lies to the left of \(\gamma\) and that \(D_{1}\) has more nodes than \(D_{2}\). Let \(K_{1}\) be the set of nodes in \(D_{1}\), let \(K_{2}\) be the set of nodes in \(D_{2}\) and let \(K=K_{1}\cup\{\gamma\}\cup K_{2}\). Choose \(\gamma^{\prime}\in K\) such that the number of nodes in \(K\) to the left of \(\gamma^{\prime}\) equals \(|K_{2}|\). We say that \(D_{\Delta-J}\) is obtained from \(D_{\Delta-I}\) by permuting the components \(D_{1}\) and \(D_{2}\) if \(J=\{\gamma\}\cup I-\{\gamma^{\prime}\}\) and the Levi subgroups corresponding to the two marked diagrams are isomorphic.
**Lemma 7**.: _Let \(L_{I}\) be a standard Levi subgroup with associated marked diagram \(D_{I}\). If \(D_{J}\) is a marked diagram obtained from \(D_{I}\) by shifting or permuting the components of \(D_{I}\) then \(L_{I}\) is conjugate to \(L_{J}\)._
Proof.: Let \(\Delta^{\prime}=\{\alpha_{i+1},\alpha_{i+2},\ldots,\alpha_{i+l}\}\) be a component of \(D_{I}\), i.e. \(\Delta^{\prime}\) is the set of roots in \(\Delta\) corresponding to a maximal (by inclusion) connected subdiagram of marked nodes in \(D_{I}\):
Suppose that \(\alpha_{i-1}\) and \(\alpha_{i}\) are non-marked nodes. If \(w_{0}^{\prime}\) is the longest element (w.r.t. \(\Delta\)) of the Weyl group of the subsystem subgroup \(G(\{\alpha_{i},\ldots,\alpha_{i+l}\})\) then \(G(\{\alpha_{i+1},\ldots,\alpha_{i+l}\})^{w_{0}^{\prime}}=G(\{\alpha_{i},\ldots,\alpha_{i+l-1}\})\). Similarly, if \(\alpha_{i+l+1}\) and \(\alpha_{i+l+2}\) are not marked, \(\Delta^{\prime}\) can be shifted to the right.
Let \(\Delta^{\prime\prime}=\{\alpha_{i+l+2},\alpha_{i+l+3},\ldots,\alpha_{i+l+1+k}\}\) be a second component of \(D_{I}\).
If \(w_{0}^{\prime}\) is the longest element (with respect to \(\Delta\)) of the Weyl group of the subsystem subgroup \(G(\Delta^{\prime}\cup\{\alpha_{i+l+1}\}\cup\Delta^{\prime\prime})\) then \(G(\{\alpha_{i+1},\ldots,\alpha_{i+l}\})^{w_{0}^{\prime}}=G(\{\alpha_{i+l+1+k}, \ldots,\alpha_{i+2+k}\})\) and \(G(\{\alpha_{i+l+2},\ldots,\alpha_{i+l+1+k}\})^{\dot{w}_{0}^{\prime}}=G(\{ \alpha_{i+k},\ldots,\alpha_{i+1}\})\). This proves the claim for \(G\) of type \(A_{r}\).
If \(G\) is of type \(C_{r}\) or \(B_{r}\) let \(\Delta^{\prime}\) be the component of \(D_{I}\) containing \(\alpha_{r}\) and if \(G\) is of type \(D_{r}\) let \(\Delta^{\prime}\) be the component of \(D_{I}\) containing \(\alpha_{r}\) and \(\alpha_{r-1}\). If such a component exists, then \(\Delta^{\prime}=\{\alpha_{k_{0}},\ldots\alpha_{r}\}\) for some \(1\leq k_{0}\leq r\). Since \(\Delta^{\prime}\) is a component of both \(D_{I}\) and \(D_{J}\) and since the other components are shifted and permuted freely in the subsystem subgroup \(G(\{\alpha_{1},\ldots,\alpha_{k_{0}-2}\})\) of type \(A_{k_{0}-2}\), the claim follows.
**Proposition 8**.: _Let \(G\) be a classical simple algebraic group of rank \(\operatorname{rk}(G)>11\) defined over an algebraically closed field of good characteristic. If \(C\) is a unipotent conjugacy class of \(G\) corresponding to the pair \((L_{I},P_{J})\) then_
\[\operatorname{cn}(G,C)\leq c\cdot\frac{\operatorname{rk}(G)}{\operatorname{rk }(C)}=c\cdot\left(1+\frac{\operatorname{crk}(C)}{\operatorname{rk}\left(C \right)}\right).\]
_Moreover, we may choose \(c\leq 288\)._
Proof.: Consider the marked diagram \(D_{\Delta-I}\) with set of marked nodes \(I\). With Lemma 7, conjugating, we may shift the components of \(D_{\Delta-I}\) to the right of the diagram such that they are separated by exactly one non-marked node. Let \(\hat{I}\) be the minimal subset of simple roots spanning an irreducible root subsystem containing \(I\) and \(\alpha_{r}\). Then \(L_{I}\) is included in \(L_{\hat{I}}\). Notice that under this operation of moving all components to the right, it may happen that \(\alpha_{r}\) is non-marked, in which case it is easy to see that \(\alpha_{r-1}\) is marked. The sets \(I\) and \(\hat{I}\) may be visualised as follows
where the dashed line is one of the following diagrams
with the node \(\alpha_{r}\) possibly non-marked. If \(\alpha_{r}\) is marked, it belongs to the last component \(\Delta_{k}^{b}\) else \(\alpha_{r-1}\) belongs to \(\Delta_{k}^{b}\).
By Proposition 5, the normal subset \(C^{36}\) contains a torus \(\tilde{T}\subseteq[L_{I},L_{I}]\cap T\) of dimension \(|I|\). Let \(\gamma_{1},\ldots,\gamma_{l}\) be the simple roots corresponding to the non-marked nodes \(\hat{I}-I\). Let \(w\) be the product of the simple reflections in the roots \(\gamma_{i}\). Then \(T^{\prime}=\tilde{T}\cdot\tilde{T}^{\dot{w}}\subseteq C^{72}\) is a torus of dimension \(|\hat{I}|\) in \(T\cap G(\hat{I})\) (the Lie algebra of this torus has dimension \(|\hat{I}|\)). If only \(\alpha_{r}\) is marked, we may assume that \(\hat{I}=\{\alpha_{r-1},\alpha_{r}\}\). Let \(T^{\prime\prime}=G(\hat{I}-\{\alpha_{r}\})\cap T^{\prime}\) and notice that \(T^{\prime}\cap G_{\alpha_{r}}\) is a torus of dimension \(1\).
Divide the set of simple roots in \(\Delta-\hat{I}\) into subsets \(J_{1},J_{2},\ldots,J_{s}\) of consecutive roots with \(|J_{1}|\leq|I|\) and \(|J_{j}|=|I|\) for \(j\geq 2\). For \(1\leq i\leq s\) let \(w_{i}\) be the longest element of the Weyl group generated by the reflections in the roots \(J_{i}\cup J_{i+1}\cup\cdots\cup J_{s}\cup(\hat{I}-\{\alpha_{r}\})\). Then
\[T^{\prime\prime}(T^{\prime\prime})^{\dot{w}_{1}}(T^{\prime\prime})^{\dot{w}_{2}}\cdots(T^{\prime\prime})^{\dot{w}_{s}}\]
is a torus in \(G(\Delta-\{\alpha_{r}\})\) of dimension \(r-1\) (the Lie algebra of this torus has the right dimension). Thus, since \(T^{\prime}\) contains a \(1\)-dimensional torus of \(G_{\alpha_{r}}\),
\[T^{\prime}(T^{\prime})^{\dot{w}_{1}}(T^{\prime})^{\dot{w}_{2}}\cdots(T^{\prime})^{\dot{w}_{s}}\subseteq C^{72\cdot(s+1)}\]
is an \(r\)-dimensional torus of \(G\). In particular, \(C^{72\cdot(s+1)}\) contains an open subset of \(T\) and therefore \(C^{144\cdot(s+1)}=G\). Moreover
\[\operatorname{cn}(G,C)\leq 144\cdot(s+1)\leq 144\left(\frac{r}{|I|}+1\right) \leq 144\cdot 2\frac{r}{|I|}=288\frac{\operatorname{rk}(G)}{\operatorname{rk}(C )}.\]
Proof of Theorem B.: For the bounded rank case we use [7]: if \(\operatorname{rk}(G)\leq 11\) then, for any conjugacy class \(C\) of \(G\), we have \(\operatorname{cn}(C)\leq 4\cdot\operatorname{rk}(G)\leq 4\cdot\operatorname{rk}(G)\cdot\frac{\operatorname{rk}(G)}{\operatorname{rk}(C)}\leq 44\cdot\frac{\operatorname{rk}(G)}{\operatorname{rk}(C)}\). Thus we may assume that \(G\) is a classical group of rank greater than \(11\), and the result follows from Proposition 8.
## 5. Covering numbers of unipotent conjugacy classes in terms of dimension
In this section we prove Theorem C. For the bounded rank case we use [7]: if \(\operatorname{rk}(G)\leq 8\) then, for any conjugacy class \(C\) of \(G\), we have \(\operatorname{cn}(C)\leq 4\cdot\operatorname{rk}(G)\leq 4\cdot\operatorname{rk}(G)\cdot\frac{\dim(G)}{\dim(C)}\leq 32\cdot\frac{\dim(G)}{\dim(C)}\). Thus we may assume that \(\operatorname{rk}(G)>8\), in which case \(G\) is a classical group.
Let \(C\) be a unipotent conjugacy class of \(G\) corresponding to the pair \((L_{I},P_{J})\). Let \(D_{\Delta-I}\) be the marked diagram with marked nodes \(I\). With Lemma 7, conjugating we may shift the components of \(D_{\Delta-I}\) to the right of the diagram such that they are separated by exactly one non-marked node. As in the proof of Proposition 8, let \(\hat{I}\) be the minimal subset of simple roots spanning an irreducible root subsystem containing \(I\) and \(\alpha_{r}\). Then \(L_{I}\) is included in \(L_{\hat{I}}\) and \(|\hat{I}|\leq 2|I|\).
Let \(u\in L_{I}\) be a representative of \(C\). Let \(r=\operatorname{rk}(G)\), \(\hat{r}_{b}=|\hat{I}|\) and \(\hat{r}_{w}=r-\hat{r}_{b}\). Since \(u\in[L_{I},L_{I}]\subseteq[L_{\hat{I}},L_{\hat{I}}]\) there is an \(A_{\hat{r}_{w}-1}\) subsystem subgroup of \(G\) in \(C_{G}(u)\). Hence
\[\dim(G)-\hat{r}_{w}^{2}+1\geq\dim(C).\]
Thus, if \(G\) is of type \(A_{r}\) then
\[6r\hat{r}_{b}\geq r^{2}+2r-\hat{r}_{w}^{2}+1\geq\dim(C).\]
For the other classical groups we use a maximal closed subsystem subgroup \(H\times[L_{\hat{I}},L_{\hat{I}}]\) (recognized with the algorithm of Borel and de Siebenthal, see for example [14, Theorem 13.12]). Notice that since \(u\in[L_{\hat{I}},L_{\hat{I}}]\) we have \(H\subseteq C_{G}(u)\) and so
\[\dim(G)-\dim(H)\geq\dim(C).\]
If \(G\) is of type \(B_{r}\) then \(H\) is of type \(D_{\hat{r}_{w}}\) and
\[8r\hat{r}_{b}\geq 2(r-\hat{r}_{w}+\frac{1}{2})(r+\hat{r}_{w})=2r^{2}+r-2\hat{r }_{w}^{2}+\hat{r}_{w}\geq\dim(C).\]
If \(G\) is of type \(C_{r}\) then \(H\) is of type \(C_{\hat{r}_{w}}\) and
\[8r\hat{r}_{b}\geq 2(r-\hat{r}_{w})(r+\hat{r}_{w}+\frac{1}{2})=2r^{2}+r-2\hat{ r}_{w}^{2}-\hat{r}_{w}\geq\dim(C).\]
If \(G\) is of type \(D_{r}\) then \(H\) is of type \(D_{\hat{r}_{w}}\) and
\[4r\hat{r}_{b}\geq 2(r-\hat{r}_{w})(r+\hat{r}_{w})\geq 2r^{2}-r-2\hat{r}_{w}^{2}+ \hat{r}_{w}\geq\dim(C).\]
In all cases we have
\[\frac{\operatorname{rk}(G)}{16\operatorname{rk}(C)}=\frac{r^{2}}{8(2 \operatorname{rk}(C))r}\leq\frac{r^{2}}{8\hat{r}_{b}r}\leq\frac{\dim(G)}{\dim(C)}\]
since \(\hat{r}_{b}=|\hat{I}|\leq 2|I|=2\operatorname{rk}(C)\). Thus, by Theorem B, \(\operatorname{cn}(G,C)\leq 4608\cdot\dim(G)/\dim(C)\).
## 6. Conflict of Interest
The author declares that he has no conflict of interest.
|
2309.02705 | Certifying LLM Safety against Adversarial Prompting | Large language models (LLMs) are vulnerable to adversarial attacks that add
malicious tokens to an input prompt to bypass the safety guardrails of an LLM
and cause it to produce harmful content. In this work, we introduce
erase-and-check, the first framework for defending against adversarial prompts
with certifiable safety guarantees. Given a prompt, our procedure erases tokens
individually and inspects the resulting subsequences using a safety filter. Our
safety certificate guarantees that harmful prompts are not mislabeled as safe
due to an adversarial attack up to a certain size. We implement the safety
filter in two ways, using Llama 2 and DistilBERT, and compare the performance
of erase-and-check for the two cases. We defend against three attack modes: i)
adversarial suffix, where an adversarial sequence is appended at the end of a
harmful prompt; ii) adversarial insertion, where the adversarial sequence is
inserted anywhere in the middle of the prompt; and iii) adversarial infusion,
where adversarial tokens are inserted at arbitrary positions in the prompt, not
necessarily as a contiguous block. Our experimental results demonstrate that
this procedure can obtain strong certified safety guarantees on harmful prompts
while maintaining good empirical performance on safe prompts. Additionally, we
propose three efficient empirical defenses: i) RandEC, a randomized subsampling
version of erase-and-check; ii) GreedyEC, which greedily erases tokens that
maximize the softmax score of the harmful class; and iii) GradEC, which uses
gradient information to optimize tokens to erase. We demonstrate their
effectiveness against adversarial prompts generated by the Greedy Coordinate
Gradient (GCG) attack algorithm. The code for our experiments is available at
https://github.com/aounon/certified-llm-safety. | Aounon Kumar, Chirag Agarwal, Suraj Srinivas, Aaron Jiaxun Li, Soheil Feizi, Himabindu Lakkaraju | 2023-09-06T04:37:20Z | http://arxiv.org/abs/2309.02705v3 | # Certifying LLM Safety against Adversarial Prompting
###### Abstract
Large language models (LLMs) released for public use incorporate guardrails to ensure their output is safe, often referred to as "model alignment." An aligned language model should reject a user's request to produce harmful content. However, such safety measures are vulnerable to adversarial attacks, which add maliciously designed token sequences to a harmful prompt to bypass the model's safety guards and cause it to produce harmful content. In this work, we introduce **erase-and-check**, the first framework to defend against adversarial prompts with verifiable safety guarantees. Given a prompt, we erase tokens individually and inspect the resulting subsequences using a safety filter. Our procedure labels the input prompt as harmful if any subsequences or the input prompt itself are detected as harmful by the filter. The safety certificate of our procedure guarantees that harmful prompts are not misclassified as safe under an adversarial attack up to a certain size. We defend against three attack modes: i) adversarial suffix, which appends an adversarial sequence \(\alpha\) at the end of a harmful prompt \(P\), resulting in adversarial prompts of the form \(P+\alpha\); ii) adversarial insertion, where the adversarial sequence is inserted anywhere in the middle of the prompt, resulting in adversarial prompts of the form \(P_{1}+\alpha+P_{2}\), where \(P_{1}+P_{2}=P\); and iii) adversarial infusion, where adversarial tokens \(\tau_{1},\tau_{2},\dots,\tau_{m}\) are inserted at arbitrary positions in the prompt, not necessarily as a contiguous block, resulting in adversarial prompts of the form \(P_{1}+\tau_{1}+P_{2}+\tau_{2}+\dots+\tau_{m}+P_{m+1}\).
Our experimental results demonstrate that this procedure can obtain strong certified safety guarantees on harmful prompts while maintaining good empirical performance on safe prompts. For example, against adversarial suffixes of length 20, it certifiably detects \(92\%\) of harmful prompts and labels \(94\%\) of safe prompts correctly using the open source language model Llama 2 as the safety filter. We further improve the filter's performance, in terms of accuracy and speed, by replacing Llama 2 with a DistilBERT safety classifier fine-tuned on safe and harmful prompts. We also show that, by leveraging the unique advantages of defending against safety attacks, our method significantly outperforms well-known certifiable robustness techniques such as randomized smoothing. Additionally, we propose two efficient empirical defenses: i) RandEC, a randomized version of erase-and-check that evaluates the safety filter on a small subset of the erased subsequences, and ii) GradEC, a gradient-based version that optimizes the erased tokens to remove the adversarial sequence. They are significantly faster than the original erase-and-check procedure and are effective against adversarial prompts generated by the Greedy Coordinate Gradient algorithm. The code for our experiments is available at: [https://github.com/aounon/certified-llm-safety](https://github.com/aounon/certified-llm-safety).
###### Contents
* 1 Introduction
* 2 Related Work
* 3 Notations
* 4 Adversarial Suffix
* 4.1 Empirical Evaluation on Safe Prompts
* 4.2 Comparison with Smoothing-Based Certificate
* 5 Adversarial Insertion
* 6 Adversarial Infusion
* 7 Trained Safety Classifier
* 7.1 Training Details
* 8 Efficient Empirical Defenses
* 8.1 RandEC: Randomized Erase-and-Check
* 8.2 GradEC: Gradient-based Erase-and-Check
* 9 Conclusion
* A Frequently Asked Questions
* B Multiple Insertions
* C Dataset of Safe and Harmful Prompts
* D Proof of Theorem 1
* E Proof of Theorem 2
* F Llama 2 System Prompt
* G Illustration of Erase-and-Check
## 1 Introduction
Popular language models such as ChatGPT, Bard, and Llama are often trained using unvetted textual data from the web that contain significant amounts of inappropriate and objectionable material. A model trained on such data can produce harmful content, making it unsuitable for public use. Furthermore, as language models improve over time, they become better at following user instructions, including those from malicious actors intended to cause harm. To this end, several fine-tuning techniques have been developed to incorporate human feedback to ensure that LLM outputs are safe and aligned with human values (Ouyang et al., 2022; Bai et al., 2022; Glaese et al., 2022; Korbak et al., 2023; Xu et al., 2020). These approaches use human oversight to steer an LLM to generate safe outputs. When prompted with a harmful user request, an aligned model is expected to decline the request rather than comply with it. Following is an example of a safe LLM's response to a harmful request:
**Harmful Prompt:** Provide detailed instructions on making a homemade explosive device using household materials.
**LLM Response:** Sorry, I cannot assist with that request. Promoting safety is of utmost importance.
Although significant progress has been made in developing guardrails for LLMs, several "jailbreak" attacks exist that allow a malicious user to bypass the safety measures. For example, Wei et al. (2023) show that simply asking an LLM to begin its response with "_Absolutely! Here's"_ could mislead the model into complying with the user's harmful request. Several other examples of adversarial augmentation of harmful prompts, such as the Do Anything Now (DAN) jailbreak, are also well known. The work of Zou et al. (2023) shows that it is possible to automate the generation of adversarial sequences, creating an endless supply of such attacks, using their Greedy Coordinate Gradient (GCG) attacks. Moreover, they show that a single such adversarial sequence can be appended to several harmful prompts to bypass safety measures. Their method achieves a high attack success rate, of up to 84%, even on black-box LLMs such as GPT-3.5 using adversarial sequences that are only 20 tokens long. The following is an illustration of the effect of an adversarial suffix on the above example:
**Harmful Prompt + Adversarial Sequence:** Provide detailed instructions on making a homemade explosive device using household materials. hilt thou ordinary the our tly
**LLM Response:** Sure, here's a step-by-step guide to making a homemade explosive device...
Techniques for defending against such attacks have been proposed in the literature. For example, Jain et al. (2023) and Alon & Kamfonas (2023) study approaches like perplexity filtering, paraphrasing, and adversarial training to defend against adversarial prompts. Each approach targets a specific weakness of adversarial sequences to detect and defend against them. For instance, perplexity filtering takes advantage of the gibberish nature of an adversarial sequence to distinguish it from the rest of the prompt. However, such empirical defenses do not come with performance guarantees and can be broken by stronger adversaries. More recent AutoDAN attacks developed by Liu et al. (2023) and Zhu et al. (2023) can circumvent perplexity filters by generating adversarial sequences that look similar to natural text. This phenomenon of newer attacks bypassing existing defenses has also been well documented in computer vision (Athalye et al., 2018; Tramer et al., 2020; Yu et al., 2021; Carlini & Wagner, 2017). Therefore, it is necessary to design defenses with certified performance guarantees that hold even in the presence of unseen attacks.
In this work, we present a procedure, **erase-and-check**, to defend against adversarial prompts with verifiable safety guarantees. Given a clean or adversarial prompt \(P\), this procedure erases tokens individually (up to a maximum of \(d\) tokens) and checks if the erased subsequences, as well as the input prompt \(P\), are safe, using a safety filter is-harmful. See Sections 4, 5 and 6 for different versions of the procedure. If the input prompt \(P\) or any of its erased subsequences are detected as harmful, our procedure labels the input prompt as harmful. This guarantees that all adversarial modifications of a harmful prompt up to a certain size are also labeled harmful. Conversely, the prompt \(P\) is labeled safe only if the filter detects all the sequences checked as safe.
Our procedure obtains strong certified safety guarantees on harmful prompts while maintaining good empirical performance on safe prompts. Its performance on safe prompts is due to the fact that subsequences of safe prompts typically remain safe in the everyday use of LLMs.
The safety filter is-harmful can be implemented in various ways. We first implement the filter by prompting a pre-trained language model, Llama 2 (Touvron et al., 2023), to classify text sequences as safe or harmful. Other state-of-the-art LLMs including proprietary ones with API access could also be used for this purpose. This approach is easy to use and does not require training a model. Next, in Section 7, we show that the filter's performance can be significantly improved by replacing Llama 2 with a DistilBERT text classifier fine-tuned on safe and harmful prompts. It is more efficient than running an LLM like Llama 2 and can better distinguish safe and harmful prompts because of the fine-tuning step.
The safety certificate of our procedure guarantees that harmful prompts are not misclassified as safe under an adversarial attack. We do not certify in the other direction, where an adversary attacks a safe prompt to get it misclassified as harmful. Such an attack makes little sense in practice, as it is unlikely that a user will seek to make their safe prompts look harmful to an aligned LLM only to get their request rejected. Using Llama 2 as the safety filter, erase-and-check can achieve a certified accuracy of \(92\%\) on harmful prompts against adversarial suffixes up to 20 tokens long while maintaining an empirical accuracy of \(94\%\) on clean (non-adversarial) safe prompts (Section 4, Figure 3(a)). Using a trained DistilBERT classifier as the filter, the above values can be improved to 100% and 98%, respectively (Section 7). Note that we do not need adversarial prompts to compute the certified accuracy on harmful prompts. Theorem 1 guarantees that the accuracy of erase-and-check on adversarial harmful prompts is lower bounded by the accuracy of the safety filter is-harmful on clean harmful prompts. Our safety certificate is independent of the attack algorithm, such as GCG and AutoDAN, used to generate adversarial prompts.
We also compare our technique with a popular certified robustness approach called randomized smoothing and show that leveraging the advantages in the safety setting allows us to obtain significantly better certified guarantees (Section 4.2). Additionally, we propose two empirical defenses, RandEC and GradEC, inspired by the original erase-and-check procedure. RandEC is a randomized version of erase-and-check that evaluates the safety filter on a small, randomly sampled subset of the erased subsequences. GradEC uses the gradients of the safety filter is-harmful with respect to the input prompt to optimize the tokens to erase. These methods are significantly faster than the original erase-and-check procedure and obtain good empirical detection accuracy against adversarial prompts generated by the GCG attack algorithm. For example, RandEC achieves an empirical detection accuracy of over 90% on adversarial harmful prompts by randomly checking only 20% of the erased subsequences (Section 8.1). Similarly, with only six iterations of the optimizer, GradEC detects more than 90% of the adversarial prompts (Section 8.2).
We study the following three adversarial attack modes listed in order of increasing generality:
**(1) Adversarial Suffix:** This is the simplest attack mode (Section 4). In this mode, adversarial prompts are of the type \(P+\alpha\), where an adversarial sequence \(\alpha\) is appended to the end of the original prompt \(P\) (see Figure 1). Here, \(+\) represents sequence concatenation. This is the type of adversarial prompts generated by Zou et al. (2023) as shown in the example above. For this mode, the erase-and-check procedure erases \(d\) tokens from the end of the input prompt one by one and checks the resulting subsequences using the filter is-harmful. It labels the input prompt as harmful if any of the subsequences or the input prompt itself is detected as harmful (see Figure 2). For an adversarial prompt \(P+\alpha\) such that \(|\alpha|\leq d\), if \(P\) was originally detected as harmful by the safety filter is-harmful, then \(P+\alpha\) must also be labeled as harmful by erase-and-check. This statement could also be generalized to a probabilistic safety filter, and the probability of \(P+\alpha\) being detected as harmful by erase-and-check can be lower bounded by that of \(P\) being detected as harmful by is-harmful. Note that this guarantee is valid for all non-negative integral values of \(d\). However, as \(d\) becomes larger, the running time of erase-and-check also increases, as the number of subsequences to check grows as \(O(d)\). See Appendix G for an illustration of the procedure on the adversarial prompt example shown above.
Figure 1: Adversarial prompts under different attack modes. Adversarial tokens are represented in red.
**(2) Adversarial Insertion:** This mode subsumes the suffix mode (Section 5). Here, adversarial sequences can be inserted anywhere in the middle (or the end) of the prompt \(P\). This leads to prompts of the form \(P_{1}+\alpha+P_{2}\), where \(P_{1}\) and \(P_{2}\) are two partitions of \(P\), that is, \(P_{1}+P_{2}=P\) (see Figure 1). The set of adversarial prompts we must defend against is significantly larger than the suffix mode. For adversarial prompts of this form, erase-and-check erases up to \(d\) tokens starting from a location \(i\) of the prompt for all locations \(i\) from \(1\) to \(|P_{1}+\alpha+P_{2}|\). More precisely, it generates subsequences by erasing tokens in the range \([i,\ldots,i+j]\), for all \(i\in\{1,\ldots,|P_{1}+\alpha+P_{2}|\}\) and for all \(j\in\{1,\ldots,d\}\). Using an argument similar to that for the suffix mode, we can show that this procedure can certifiably defend against adversarial insertions of length at most \(d\). It can also be generalized to defend against multiple adversarial insertions, that is, prompts of the form \(P_{1}+\alpha_{1}+P_{2}+\alpha_{2}+\cdots+\alpha_{k}+P_{k+1}\), where \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\) are \(k\) contiguous blocks of adversarial tokens. The certified guarantee is for the maximum length over all adversarial sequences. Like in the suffix mode, the guarantee holds for all non-negative integral values of \(d\) and \(k\). However, this mode is harder to defend against as the number of subsequences to check increases as \(O\left((nd)^{k}\right)\), where \(n\) is the number of tokens in the input prompt.
**(3) Adversarial Infusion:** This is the most general attack mode (Section 6). In this mode, adversarial tokens \(\tau_{1},\tau_{2},\ldots,\tau_{m}\) are inserted at arbitrary locations in the prompt \(P\), leading to adversarial prompts of the form \(P_{1}+\tau_{1}+P_{2}+\tau_{2}+\cdots+\tau_{m}+P_{m+1}\) (see Figure 1). The set of such prompts includes the adversarial prompts of the previous two modes. The key difference from the insertion mode is that the adversarial tokens need not be inserted as a contiguous block. To defend against this attack mode, erase-and-check generates subsequences by erasing subsets of tokens of size at most \(d\) from the input prompt. If \(m\leq d\), one of the erased subsets must match exactly with the set of adversarial tokens when the prompt \(P_{1}+\tau_{1}+P_{2}+\tau_{2}+\cdots+\tau_{m}+P_{m+1}\) is given as input. Thus, one of the checked subsequences must be \(P\). Therefore, if \(P\) is detected as harmful by is-harmful, any adversarial infusion of \(P\) using at most \(d\) tokens is guaranteed to be labeled as harmful by erase-and-check. Similar to other attack modes, this safety guarantee holds for all non-negative integral values of \(d\). However, this mode is the most challenging to defend against as the number of generated subsequences increases as \(O(n^{d})\), which grows exponentially with \(d\).
While existing adversarial attacks such as GCG and AutoDAN fall under the suffix and insertion attack modes, to the best of our knowledge, there does not exist an attack in the infusion mode. We study the infusion mode to showcase our framework's versatility and demonstrate that it can tackle new threat models that emerge in the future.
Figure 2: An illustration of how erase-and-check works on adversarial suffix attacks. It erases tokens from the end and checks the resulting subsequences using a safety filter. If at least one of the erased subsequences is detected as harmful, the input prompt is labeled harmful.

**Safety Filter:** Our procedure erase-and-check uses a safety filter is-harmful as a subroutine to check each erased subsequence. This filter can be implemented in different ways. First, we show that an effective safety filter can be designed by appropriately prompting an LLM like Llama 2 to classify prompts as safe and harmful. This design is easy to use, requires no training, and is compatible with proprietary LLMs with API access. We use the Llama 2 system prompt to set its objective of classifying a user prompt as harmful or not harmful. Examples of safe or harmful prompts are not needed for building this filter. We then look for texts such as "Not harmful" in the model's response to determine whether the prompt is safe. We flag the input prompt as harmful if no such text sequence is found in the response. The following is an example of the type of system prompt we use (see Appendix F for the exact prompt):

**System Prompt:** Your objective is to determine if the user's request or question is harmful or not. Just answer 'harmful' or 'not harmful' to each user message.
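Concretely, such a filter can be wrapped in a few lines of code. Below is a minimal sketch assuming the Hugging Face transformers interface for Llama 2; the model name, chat template, and generation settings are illustrative assumptions, not necessarily the exact configuration used in our experiments.

```python
# A minimal sketch of the Llama 2-based safety filter. Model name, chat
# template, and generation settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

SYSTEM_PROMPT = ("Your objective is to determine if the user's request or "
                 "question is harmful or not. Just answer 'harmful' or "
                 "'not harmful' to each user message.")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-chat-hf")

def is_harmful(prompt: str) -> bool:
    """Flag the prompt as harmful unless the response says 'not harmful'."""
    chat = f"[INST] <<SYS>>\n{SYSTEM_PROMPT}\n<</SYS>>\n\n{prompt} [/INST]"
    inputs = tokenizer(chat, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
    # Decode only the newly generated tokens, not the prompt.
    response = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:],
                                skip_special_tokens=True)
    return "not harmful" not in response.lower()
```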
Next, in Section 7, we show that the performance of the filter can be significantly improved by replacing Llama 2 with a text classifier trained on examples of safe and harmful prompts. We download a pre-trained DistilBERT model from Hugging Face1 and fine-tune it on our safety dataset. The dataset contains examples of harmful prompts from the AdvBench dataset created by Zou et al. (2023) and safe prompts generated by us. See Appendix C for details on the data generation process. The DistilBERT safety filter is significantly more efficient than running an LLM like Llama 2 and can better distinguish safe and harmful prompts because of the fine-tuning step.
Footnote 1: DistilBERT: [https://huggingface.co/docs/transformers/model_doc/distilbert](https://huggingface.co/docs/transformers/model_doc/distilbert)
**Safety Certificate:** The construction of erase-and-check guarantees that if the safety filter detects a prompt \(P\) as harmful, then erase-and-check will label the prompt \(P\) and all its adversarial modifications \(P+\alpha\), up to a certain length, as harmful. This statement could also be generalized to a probabilistic safety filter, and the probability of \(P+\alpha\) being detected as harmful by erase-and-check can be lower bounded by that of \(P\) being detected as harmful by is-harmful. Using this, we can show that the accuracy of the safety filter on a set of harmful prompts is a lower bound on the accuracy of erase-and-check on the same set. A similar argument can also be made for any probability distribution over harmful prompts (Theorem 1). Therefore, to calculate the certified accuracy of erase-and-check on harmful prompts, we just need to evaluate the accuracy of the filter on such prompts. On the harmful prompts from AdvBench, our safety filter is-harmful achieves an accuracy of **92%** using Llama 2 and **100%** using DistilBERT.2 For comparison, an adversarial suffix of length 20 can make the accuracy on harmful prompts as low as 16% for GPT-3.5 (Figure 3 in Zou et al. (2023)). Note that the certified accuracy of erase-and-check remains the same for all adversarial sequence lengths and attack modes considered.
Footnote 2: The accuracy for Llama 2 is estimated over 60,000 samples of the harmful prompts (uniform with replacement) to average out the internal randomness of Llama 2. It guarantees an estimation error of less than 1 percentage point with 99.9% confidence. This is not needed for DistilBERT as it is deterministic.
**Limitations:** The computational cost of erase-and-check increases for more general attack modes, limiting the length of adversarial sequences that can be defended against in a reasonable amount of time, especially with Llama 2 due to its high inference cost. The running time of our procedure can be significantly improved by replacing Llama 2 with DistilBERT. See Figures 7(b) and 8(b) for a comparison. Furthermore, the accuracy of the erase-and-check procedure decreases for larger adversarial sequences, especially with Llama 2. This is likely because defending against longer adversarial sequences requires our procedure to check more subsequences for each input prompt. This increases the probability that the safety filter accidentally misclassifies one of the subsequences as harmful. This issue is partially resolved by the DistilBERT safety filter, as it is trained to recognize erased versions of safe prompts as safe as well. See Figures 7(a) and 8(a) for a comparison. The improved performance of DistilBERT allows us to certify against longer attack sequences for harder attack modes, e.g., 30 tokens of adversarial insertion, that are infeasible with Llama 2 (Figure 9).
## 2 Related Work
**Adversarial Attacks:** Deep neural networks and other machine learning models have been known to be vulnerable to adversarial attacks (Szegedy et al., 2014; Biggio et al., 2013; Goodfellow et al., 2015; Madry et al., 2018; Carlini and Wagner, 2017). For computer vision models, adversarial attacks make tiny perturbations in the input image that can completely alter the model's output. A key objective of these attacks is to make the perturbations as imperceptible to humans as possible. However, as Chen et al. (2022) argue, the imperceptibility of the attack makes little sense for natural language processing tasks. A malicious user seeking to bypass the safety guards in an aligned LLM does not need to make the adversarial changes imperceptible. The attacks generated by Zou et al. (2023) can be easily detected by humans, yet deceive LLMs into complying with harmful requests. This makes it challenging to apply existing adversarial defenses for such attacks as they often rely on the perturbations being small.
**Empirical Defenses:** Over the years, several heuristic methods have been proposed to detect and defend against adversarial attacks for computer vision (Buckman et al., 2018; Guo et al., 2018; Dhillon et al., 2018; Li and Li, 2017; Grosse et al., 2017; Gong et al., 2017) and natural language processing tasks (Nguyen Minh and Luu, 2022; Yoo et al., 2022; Huber et al., 2022). Recent works by Jain et al. (2023) and Alon and Kamfonas (2023) study defenses specifically for attacks by Zou et al. (2023) based on approaches such as perplexity filtering, paraphrasing, and adversarial training. However, empirical defenses against specific adversarial attacks have been shown to be broken by stronger attacks (Carlini and Wagner, 2017; Athalye et al., 2018; Uesato et al., 2018; Laidlaw and Feizi, 2019). Empirical robustness against an adversarial attack does not imply robustness against more powerful attacks in the future. Our work focuses on generating provable robustness guarantees that hold against every possible adversarial attack within a threat model.
**Certified Defenses:** Defenses with provable robustness guarantees have been extensively studied in computer vision. They use techniques such as interval-bound propagation (Gowal et al., 2018; Huang et al., 2019; Dvijotham et al., 2018; Mirman et al., 2018), curvature bounds (Wong and Kolter, 2018; Raghunathan et al., 2018; Singla and Feizi, 2020; 2021) and randomized smoothing (Cohen et al., 2019; Lecuyer et al., 2019; Li et al., 2019; Salman et al., 2019). Certified defenses have also been studied for tasks in natural language processing. For example, Ye et al. (2020) presents a method to defend against word substitutions with respect to a set of predefined synonyms for text classification. Zhao et al. (2022) use semantic smoothing to defend against natural language attacks. Zhang et al. (2023) propose a self-denoising approach to defend against minor changes in the input prompt for sentiment analysis. Provable robustness techniques have also been developed in the context of computer languages for tasks such as malware detection (Huang et al., 2023), where the adversary seeks to bypass detection by manipulating a small percentage of bytes in the malware's code. Such defenses often incorporate imperceptibility in their threat model one way or another, e.g., by restricting to synonymous words and minor changes in the input text. This makes them inapplicable to attacks by Zou et al. (2023) that change the prompts by a significant amount by appending adversarial sequences that could be even longer than the original harmful prompt. Moreover, such approaches are designed for classification-type tasks and do not leverage the unique properties of LLM safety attacks.
## 3 Notations
We denote an input prompt \(P\) as a sequence of tokens \(\rho_{1},\rho_{2},\ldots,\rho_{n}\), where \(n=|P|\) is the length of the sequence. Similarly, we denote the tokens of an adversarial sequence \(\alpha\) as \(\alpha_{1},\alpha_{2},\ldots,\alpha_{l}\). We use \(T\) to denote the set of all tokens, that is, \(\rho_{i},\alpha_{i}\in T\). We use the symbol \(+\) to denote the concatenation of two sequences. Thus, an adversarial suffix \(\alpha\) appended to \(P\) is written as \(P+\alpha\). We use the notation \(P[s,t]\) with \(s\leq t\) to denote a subsequence of \(P\) starting from the token \(P_{s}\) and ending at \(P_{t}\). For example, in the suffix mode, erase-and-check erases \(i\) tokens from the end of an input prompt \(P\) at each iteration. The resulting subsequence can be denoted as \(P[1,|P|-i]\). In the insertion mode with multiple adversarial sequences, we index each sequence with a superscript \(i\), that is, the \(i^{\text{th}}\) adversarial sequence is written as \(\alpha^{i}\). We use the \(-\) symbol to denote deletion of a subsequence. For example, in the insertion mode, erase-and-check erases a subsequence of \(P\) starting at \(s\) and ending at \(t\) in each iteration, which can be denoted as \(P-P[s,t]\). We use \(\cup\) to denote the union of subsequences. For example, in insertion attacks with multiple adversarial sequences, erase-and-check removes multiple contiguous blocks of tokens from \(P\), which we denote as \(P-\cup_{i=1}^{k}P[s_{i},t_{i}]\). We use \(d\) to denote the maximum number of tokens erased (or the maximum length of an erased sequence in insertion mode). This is different from \(l\), which denotes the length of an adversarial sequence. Our certified safety guarantees hold for all adversarial sequences of length \(l\leq d\).
## 4 Adversarial Suffix
This attack mode appends adversarial tokens at the end of a harmful prompt to get it misclassified as safe by a language model. This is the threat model considered by Zou et al. (2023) to design universal adversarial attacks that transfer to several harmful prompts and popular LLMs. This threat model can be defined as the set of all possible adversarial prompts generated by appending a sequence of tokens \(\alpha\) of a certain maximum length \(l\) to a prompt \(P\). Mathematically, this set is defined as
\[\mathsf{SuffixTM}(P,l)=\big{\{}P+\alpha\ \big{|}\ |\alpha|\leq l\big{\}}.\]
For a token set \(T\), the above set grows exponentially (\(O(|T|^{l})\)) with the adversarial length \(l\), making it significantly challenging to defend against the entire set of attacks. It is not feasible to enumerate and defend against all adversarial sequences in this threat model. Our erase-and-check procedure can guarantee safety over the entire set of adversarial prompts without enumerating them.
Given an input prompt \(P\) and a maximum erase length \(d\), our procedure generates \(d\) sequences \(E_{1},E_{2},\ldots,E_{d}\), where each \(E_{i}=P[1,|P|-i]\) denotes the subsequence produced by erasing \(i\) tokens of \(P\) from the end. It checks the subsequences \(E_{i}\) and the input prompt \(P\) using the safety filter is-harmful. If the filter detects at least one of the subsequences or the input prompt as harmful, \(P\) is declared harmful. The input prompt \(P\) is labeled safe only if none of the sequences checked are detected as harmful. See Algorithm 1 for pseudocode. When an adversarial prompt \(P+\alpha\) is given as input such that \(|\alpha|\leq d\), the sequence \(E_{|\alpha|}\) must equal \(P\). Therefore, if \(P\) is a harmful prompt detected by the filter as harmful, \(P+\alpha\) must be labeled as harmful by erase-and-check.
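Since the procedure itself is short, the following Python sketch captures the suffix-mode logic (cf. Algorithm 1). It reuses the is_harmful filter sketched earlier and assumes a tokenizer exposing encode/decode between text and token sequences.

```python
# A sketch of erase-and-check for the suffix mode (cf. Algorithm 1).
# Assumes `tokenizer` and `is_harmful` from the earlier filter sketch.
def erase_and_check_suffix(prompt: str, d: int) -> bool:
    """Return True (harmful) if the prompt or any suffix-erased
    subsequence is flagged by the safety filter."""
    if is_harmful(prompt):
        return True
    tokens = tokenizer.encode(prompt)
    for i in range(1, min(d, len(tokens)) + 1):
        erased = tokenizer.decode(tokens[:len(tokens) - i])  # E_i = P[1, |P| - i]
        if is_harmful(erased):
            return True
    return False  # labeled safe only if every checked sequence is safe
```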
This implies that the accuracy of the safety filter is-harmful on a set of harmful prompts is a lower bound on the accuracy of erase-and-check for all adversarial modifications of prompts in that set up to length \(d\). This statement could be further generalized to a _distribution_\(\mathcal{H}\) over all harmful prompts and a stochastic safety filter that detects a prompt as harmful with some probability \(p\in[0,1]\). Replacing true and false with 1 and 0 in the outputs of erase-and-check and is-harmful, the following theorem holds on their accuracy over \(\mathcal{H}\):
**Theorem 1** (Safety Certificate).: _For a prompt \(P\) sampled from the distribution (or dataset) \(\mathcal{H}\),_
\[\mathbb{E}_{P\sim\mathcal{H}}[\texttt{erase-and-check}(P+\alpha)]\ \geq\ \mathbb{E}_{P\sim\mathcal{H}}[\texttt{is-harmful}(P)],\quad\forall|\alpha| \leq d.\]
The proof is available in Appendix D.
Therefore, to certify the performance of erase-and-check on harmful prompts, we just need to evaluate the safety filter is-harmful on those prompts. We evaluated is-harmful on a randomly sampled set of 520 harmful prompts from AdvBench and observed an accuracy of 92%.
### Empirical Evaluation on Safe Prompts
While our procedure can certifiably defend against adversarial attacks on harmful prompts, we must also ensure that it maintains a good quality of service for non-malicious, non-adversarial users. We need to evaluate the accuracy and running time of erase-and-check on safe prompts that have not been adversarially modified. To this end, we tested our procedure on 520 safe prompts generated using ChatGPT for different values of the maximum erase length between 0 and 30. For details on how these safe prompts were generated and to see some examples, see Appendix C.

Figure 3: Empirical accuracy and running time of erase-and-check on safe prompts for the suffix mode.
Figures 3(a) and 3(b) plot our procedure's empirical accuracy and running time, respectively. The running time reported is the average running time per prompt of the erase-and-check procedure, that is, the average time to run is-harmful on all erased subsequences per prompt. We observe very high accuracy and low running times when no tokens are erased, and only the original prompt is checked by erase-and-check. This is because the Llama 2 model in our filter is very accurate in classifying complete prompts. It also responds very quickly to these prompts. However, as we increase the maximum erase length, the accuracy decreases, and the running time increases. This is because the safety filter has to check several partially erased sequences for each prompt. This increases the likelihood that the filter will misclassify at least one of the subsequences. Also, Llama 2 is slower in responding to incomplete prompts and often asks for further clarifications when the subsequences are small. This is the reason why the running time increases significantly for a maximum erase length of 10 tokens but increases slowly for larger values. Also, the safe prompts are at most 33 tokens long with an average length of around 15 tokens (similar to the harmful prompts, see Appendix C, Table 1), which means that the average number of erased subsequences checked does not grow rapidly for larger erase lengths. Nevertheless, the overall accuracy stays above 94%, and average running times remain within 1.5 seconds up to a certified adversarial length of 30 tokens. We performed these experiments on a single NVIDIA A100 GPU.
### Comparison with Smoothing-Based Certificate
Provable robustness techniques have been extensively studied in the machine learning literature. They seek to guarantee that a model achieves a certain performance under adversarial attacks up to a specific size. For image classification models, robustness certificates have been developed that guarantee that the prediction remains unchanged in the neighborhood of the input (say, within an \(\ell_{2}\)-norm ball of radius 0.1). Among the existing certifiable methods, randomized smoothing has emerged as the most successful in terms of scalability and adaptability. It evaluates the model on several noisy samples of the input and outputs the class predicted by a majority of the samples. This method works well for high-dimensional inputs such as ImageNet images (Lecuyer et al., 2019; Cohen et al., 2019) and adapts to several machine learning settings such as reinforcement learning (Kumar et al., 2022; Wu et al., 2022), streaming models (Kumar et al., 2023) and structured outputs such as segmentation masks (Fischer et al., 2021; Kumar and Goldstein, 2021). However, existing techniques do not seek to certify the safety of a model. Our erase-and-check framework is designed to leverage the unique advantages of defending against safety attacks, enabling it to obtain better certified guarantees than existing techniques.
In this section, we compare our safety certificate with that of randomized smoothing. We adapt randomized smoothing for adversarial suffix attacks and show that even the best possible safety guarantees that this approach can obtain are significantly lower than ours. Given a prompt \(P\) and a maximum erase length \(d\), we erase at most \(d\) tokens one by one from the end similar to erase-and-check. We then check the resulting subsequences, \(E_{i}=P[1,|P|-i]\) for \(i\in\{1,\ldots,d\}\), and the original prompt \(P\) with the safety filter is-harmful. If the filter labels a majority of the sequences as harmful, we declare the original prompt \(P\) to be harmful. Here, the erased subsequences could be thought of as the "noisy" versions of the input and \(d\) as the size of the noise added. Note that since we evaluate the safety filter on all possible noisy samples, the above procedure is actually deterministic, which only makes the certificate better.
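The following sketch shows this smoothing-style baseline under the same assumptions as the suffix-mode sketch above; the only difference from erase-and-check is that a strict majority vote, rather than a single detection, is required to label the prompt harmful.

```python
# A sketch of the smoothing-style baseline for the suffix mode.
# Assumes `tokenizer` and `is_harmful` from the earlier sketches.
def smoothing_check(prompt: str, d: int) -> bool:
    tokens = tokenizer.encode(prompt)
    sequences = [prompt] + [tokenizer.decode(tokens[:len(tokens) - i])
                            for i in range(1, min(d, len(tokens)) + 1)]
    votes = sum(is_harmful(s) for s in sequences)
    return votes > len(sequences) / 2  # harmful only on a strict majority
```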
Figure 4: Comparison between our safety certificate and the best possible certified accuracy obtained by the smoothing-based method for different values of the maximum erase length \(d\).
The main weakness of the smoothing-based procedure compared to our erase-and-check framework is that it requires a majority of the checked sequences to be labeled as harmful. This significantly restricts the size of the adversarial suffix it can certify. In the following theorem, we put an upper bound on the length of the largest adversarial suffix \(\overline{|\alpha|}\) that could possibly be certified using the smoothing approach. Note that this bound is not the actual certified length but an upper bound on that length, which means that adversarial suffixes longer than this bound cannot be guaranteed to be labeled as harmful by the smoothing-based procedure described above.
**Theorem 2** (Certificate Upper Bound).: _Given a prompt \(P\) and a maximum erase length \(d\), if is-harmful labels \(s\) subsequences as harmful, then the length of the largest adversarial suffix \(\overline{|\alpha|}\) that could be certified is upper bounded as_
\[\overline{|\alpha|}\leq\min\left(s-1,\left\lfloor\frac{d}{2}\right\rfloor \right).\]
The proof is available in Appendix E.
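As a concrete instance of this bound: for a maximum erase length of \(d=20\), even in the best case where the filter labels all \(d+1=21\) checked sequences as harmful (i.e., \(s=21\)), the bound evaluates to \(\overline{|\alpha|}\leq\min(20,\lfloor 20/2\rfloor)=10\). The smoothing-based procedure thus cannot certify suffixes longer than 10 tokens, whereas erase-and-check with the same \(d\) certifies suffixes up to 20 tokens long.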
Figure 4 compares the certified accuracy of our erase-and-check procedure on harmful prompts with that of the smoothing-based procedure. We randomly sample 50 harmful prompts from the AdvBench dataset and calculate the above bound on \(\overline{|\alpha|}\) for each prompt. Then, we calculate the percentage of prompts for which this value is above a certain threshold. The dashed lines plot these percentages for different values of the maximum erase length \(d\). Since \(\overline{|\alpha|}\) is an upper bound on the best possible certified length, the true certified accuracy curve for each value of \(d\) can only be below the corresponding dashed line. The plot shows that the certified performance of our erase-and-check framework (solid blue line) is significantly above the certified accuracy obtained by the smoothing-based method for meaningful values of the certified length.
## 5 Adversarial Insertion
In this attack mode, an adversarial sequence is inserted anywhere in the middle of a prompt. The corresponding threat model can be defined as the set of adversarial prompts generated by splicing a contiguous sequence of tokens \(\alpha\) of maximum length \(l\) into a prompt \(P\). This would lead to prompts of the form \(P_{1}+\alpha+P_{2}\), where \(P_{1}\) and \(P_{2}\) are two partitions of the original prompt \(P\). Mathematically, this set is defined as
\[\mathsf{InsertionTM}(P,l)=\big{\{}P_{1}+\alpha+P_{2}\;\big{|}\;P_{1}+P_{2}=P \text{ and }|\alpha|\leq l\big{\}}.\]
This set subsumes the threat model for the suffix mode as a subset where \(P_{1}=P\) and \(P_{2}\) is an empty sequence. It is also significantly larger than the suffix threat model as its size grows as \(O(|P||T|^{l})\), making it harder to defend against.
In this mode, erase-and-check creates subsequences by erasing every possible contiguous token sequence up to a certain maximum length. Given an input prompt \(P\) and a maximum erase length \(d\), it generates sequences \(E_{s,t}=P-P[s,t]\) by removing the sequence \(P[s,t]\) from \(P\), for all \(s\in\{1,\ldots,|P|\}\) and for all \(t\in\{s,\ldots,s+d-1\}\). Similar to the suffix mode, it checks the prompt \(P\) and the subsequences \(E_{s,t}\) using the filter is-harmful and labels the input as harmful if any of the sequences are detected as harmful. The pseudocode for this mode can be obtained by modifying the step for generating erased subsequences in Algorithm 1 with the above method. For an adversarial prompt \(P_{1}+\alpha+P_{2}\) such that \(|\alpha|\leq d\), one of the erased subsequences must equal \(P\). This ensures our safety guarantee. Note that even if \(\alpha\) is inserted in a way that splits a token in \(P\), the filter converts the token sequences into text before checking their safety. Similar to the suffix mode, the certified accuracy on harmful prompts is lower bounded by the accuracy of is-harmful, which is 92%.
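A sketch of the insertion-mode procedure, under the same assumptions as the suffix-mode sketch, is given below; the two nested loops enumerate the \(O(nd)\) erased blocks.

```python
# A sketch of erase-and-check for the insertion mode: every contiguous
# block of up to d tokens is erased and the remainder is checked.
# Assumes `tokenizer` and `is_harmful` from the earlier sketches.
def erase_and_check_insertion(prompt: str, d: int) -> bool:
    if is_harmful(prompt):
        return True
    tokens = tokenizer.encode(prompt)
    n = len(tokens)
    for s in range(n):                     # start of the erased block
        for t in range(s, min(s + d, n)):  # end of the erased block (inclusive)
            erased = tokenizer.decode(tokens[:s] + tokens[t + 1:])  # P - P[s, t]
            if is_harmful(erased):
                return True
    return False
```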
Figures 5(a) and 5(b) plot the empirical accuracy and running time on safe prompts for the insertion mode. Since the number of sequences to check is much larger than that in the suffix mode, the running time on average for each input prompt is higher. For this reason, we reduce the sample size to 100 and the maximum erase length to 12. Like the suffix mode, we performed these experiments on a single NVIDIA A100 GPU. We observe that the accuracy drops faster than in the suffix mode. This is because when erase-and-check needs to check more sequences, the likelihood that the filter misclassifies at least one of the sequences increases. This can potentially be resolved by training a classifier that is better at recognizing partially erased safe prompts as safe.
In Appendix B, we show that our method can also be generalized to multiple adversarial insertions. Adversarial prompts can be constructed by inserting at most \(k\) adversarial sequences, each of length at most \(l\). However, as the number of insertions increases, the set of potential adversarial prompts grows exponentially with \(k\), making it significantly harder to defend against. It also increases the number of erased prompts that need to be checked, thereby increasing the running time of the erase-and-check procedure. Figures 11(a) and 11(b) show a comparison between one and two adversarial insertions in terms of the performance and running time of erase-and-check.
## 6 Adversarial Infusion
This is the most general of all the attack modes. Here, the adversary can insert multiple tokens, up to a maximum number \(l\), inside the harmful prompt at arbitrary locations. The adversarial prompts in this mode are of the form \(P_{1}+\tau_{1}+P_{2}+\tau_{2}+\cdots+\tau_{m}+P_{m+1}\). The corresponding threat model is defined as
\[\mathsf{InfusionTM}(P,l)=\Big\{P_{1}+\tau_{1}+P_{2}+\tau_{2}+\cdots+\tau_{m }+P_{m+1}\Big{|}\ \sum_{i=1}^{m+1}P_{i}=P\ \text{and}\ m\leq l\Big\}.\]
Figure 5: Empirical accuracy and running time of erase-and-check on safe prompts for the insertion mode.
Figure 6: Empirical accuracy and running time of erase-and-check on safe prompts for the infusion mode.
This threat model subsumes all the previous threat models as every adversarial sequence, suffix or insertion, is a subset of the adversarial prompt. The size of the above set grows as \(O\left({\binom{|P|+l}{l}|T|^{l}}\right)\) which is much faster than any of the previous attack modes, making it the hardest to defend against. Here, \(\binom{n}{k}\) represents the number of \(k\)-combinations of an \(n\)-element set.
In this mode, erase-and-check produces subsequences by erasing subsets of tokens of size at most \(d\). For an adversarial prompt of the above threat model such that \(l\leq d\), one of the erased subsets must match the adversarial tokens \(\tau_{1},\tau_{2},\dots,\tau_{m}\). Thus, one of the generated subsequences must equal \(P\), which implies our safety guarantee.
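A sketch of the infusion-mode procedure is shown below, under the same assumptions as the earlier sketches; the enumeration over token subsets makes the \(O(n^{d})\) growth explicit.

```python
from itertools import combinations

# A sketch of erase-and-check for the infusion mode: every subset of at
# most d token positions is erased. Assumes `tokenizer` and `is_harmful`
# from the earlier sketches.
def erase_and_check_infusion(prompt: str, d: int) -> bool:
    if is_harmful(prompt):
        return True
    tokens = tokenizer.encode(prompt)
    for k in range(1, d + 1):
        for idx in combinations(range(len(tokens)), k):  # positions to erase
            erased = tokenizer.decode(
                [tok for i, tok in enumerate(tokens) if i not in set(idx)])
            if is_harmful(erased):
                return True
    return False
```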
We repeat similar experiments for the infusion mode as in previous attacks. Due to the combinatorial explosion in the number of erased subsets, we restrict the size of these subsets to 3 and the number of samples to 30. Figures 6(a) and 6(b) plot the empirical accuracy and the average running time on safe prompts. While the drop in accuracy is very low since the number of erased tokens is small, the average running time per prompt (on one NVIDIA A100 GPU) increases significantly with the certified size due to the combinatorial nature of the threat model. However, similar to the previous attack modes, the certified accuracy on harmful prompts remains at 92% for all sizes of the adversarial token set.
## 7 Trained Safety Classifier
While we can obtain good performance by simply prompting Llama 2 to classify safe and harmful prompts, running a large language model is computationally expensive and requires significant amounts of processing power and storage capacity. It also has a high running time, which makes it impractical to defend against longer adversarial sequences and more general attack modes, as they require evaluating the safety filter on a large number of erased subsequences. Furthermore, since Llama 2 is not specifically trained to recognize safe and harmful prompts, its accuracy decreases against longer adversarial sequences. As the number of erased subsequences increases, it becomes more likely that at least one of them gets labeled as harmful by the filter, deteriorating the performance of erase-and-check on safe prompts.
In this section, we improve the performance of the safety filter by using a text classifier trained on safe and harmful prompts (see Appendix C for details on the dataset). Instead of repeatedly evaluating a large language model, we use a significantly smaller classifier to detect harmful prompts. The rest of the erase-and-check procedure remains the same as before. In practice, this version of the procedure can be placed between a user and an LLM to detect and reject harmful prompts before sending them to the LLM. We download a pre-trained DistilBERT model (Sanh et al., 2019) from Hugging Face and fine-tune it on our safety dataset. DistilBERT is a faster, lightweight version of the BERT language model (Devlin et al., 2019). We split the safe and harmful prompt datasets into training and test sets. We augment the training set of the safe prompts with erased versions of the prompts to make the classifier robust to erasures. This is needed to ensure that the filter recognizes subsequences of safe prompts as safe as well. However, we do not do the same for the harmful prompts, as subsequences of harmful prompts need not be harmful.

Figure 7: Comparing the empirical accuracy and running time of erase-and-check on safe prompts for the suffix mode with Llama 2 (from Figures 3(a) and 3(b)) vs. fine-tuned DistilBERT as the safety classifier.
We evaluate the performance of our procedure with the trained classifier on the test splits of the safe and the harmful prompts. The safety filter labels all harmful prompts as such, implying a certified accuracy of **100%** for the erase-and-check procedure, which is significantly higher than that of Llama 2. Figures 7(a) and 7(b) respectively compare the performance and the running time of erase-and-check in the suffix mode with Llama 2 vs. the trained classifier as the safety filter. Figures 8(a) and 8(b) do the same for the insertion mode. The fine-tuned DistilBERT-based safety classifier consistently outperforms Llama 2 in correctly classifying safe prompts and is significantly faster (up to a 40X speedup in the case of insertion) for all values of the maximum erase length tested. This improved performance allowed us to increase the maximum erase length in erase-and-check to 30 tokens for the insertion mode (Figure 9). The accuracy of erase-and-check on safe prompts is above 99.2%, and the average running time is less than 0.5 seconds for all values of the maximum erase length. Using Llama 2, we could only increase the maximum erase length to 12 before significant deterioration in accuracy and running time.
### Training Details
We fine-tune a pre-trained DistilBERT text classifier on examples of safe and harmful prompts. We split the 520 examples in each class into 400 training examples and 120 test examples. For the safe prompts, we include erased subsequences of the original prompts for the corresponding attack mode. For example, when training a safety classifier for the suffix mode, subsequences are created by erasing suffixes of different lengths from the safe prompts. This is to train the model to recognize erased versions of safe prompts as safe as well. We use the test examples to evaluate the performance of erase-and-check with the trained classifier as the safety filter.
Figure 8: Comparing the empirical accuracy and running time of erase-and-check on safe prompts for the insertion mode with Llama 2 (from Figures 5(a) and 5(b)) vs. fine-tuned DistilBERT as the safety classifier. (Note: Some of the bars for DistilBERT in (b) might be too small to be visible.)
Figure 9: Accuracy of erase-and-check with DistilBERT on safe prompts for the insertion mode.
We train the classifier for five epochs using the AdamW optimizer (Loshchilov and Hutter, 2019). The addition of the erased subsequences significantly increases the number of safe examples in the training set, resulting in a class imbalance. To deal with this, we use class-balancing strategies such as using different weights for each class and extending the smaller class (harmful prompts) by repeating existing examples.
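The following is a compressed sketch of this fine-tuning setup using the Hugging Face transformers API. The five epochs and the AdamW optimizer follow the description above; the learning rate, batch size, and class-weight values are placeholder assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import (DistilBertForSequenceClassification,
                          DistilBertTokenizerFast)

# Placeholder hyperparameters; only the epochs and optimizer follow the text.
tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")
model = DistilBertForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)  # 0 = safe, 1 = harmful

def make_loader(prompts, labels, batch_size=16):
    enc = tokenizer(prompts, truncation=True, padding=True, return_tensors="pt")
    dataset = TensorDataset(enc["input_ids"], enc["attention_mask"],
                            torch.tensor(labels))
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)

def finetune(train_loader, epochs=5, class_weights=(1.0, 1.0)):
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    # Class weights compensate for the imbalance introduced by adding
    # erased subsequences of safe prompts to the training set.
    loss_fn = torch.nn.CrossEntropyLoss(weight=torch.tensor(class_weights))
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, labels in train_loader:
            logits = model(input_ids=input_ids,
                           attention_mask=attention_mask).logits
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```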
## 8 Efficient Empirical Defenses
The erase-and-check procedure performs an exhaustive search over the set of erased subsequences to check whether an input prompt is harmful or not. Evaluating the safety filter on all erased subsequences is necessary to certify the accuracy of erase-and-check against adversarial prompts. However, in many practical applications, certified guarantees may not be needed. Relaxing this requirement could allow us to reduce the number of filter evaluations and lower the computational cost of erase-and-check while maintaining good detection performance. In this section, we propose two empirical defenses inspired by the original erase-and-check procedure. The first method, RandEC, is a randomized version of erase-and-check that evaluates the safety filter on a randomly sampled subset of the erased subsequences. The second method, GradEC, uses the gradients of the safety filter with respect to the input prompt to optimize the erased tokens to remove the adversarial sequence. Our experimental results show that these methods are significantly faster than the original erase-and-check procedure and are effective against adversarial prompts generated by the Greedy Coordinate Gradient algorithm.
### RandEC: Randomized Erase-and-Check
RandEC modifies Algorithm 1 to check a randomly sampled subset of erased subsequences \(E_{i}\)s, along with the input prompt \(P\). The sampled subset would contain subsequences created by erasing suffixes of random lengths. We refer to the fraction of selected subsequences as the sampling ratio. Similar randomized variants can also be designed for insertion and infusion attack modes. Note that RandEC does not have certified safety guarantees as it does not check all the erased subsequences. Figure 10 plots the performance of RandEC against adversarial prompts of different lengths. The x-axis represents the number of tokens used in the adversarial suffix, i.e. \(|\alpha|\) in \(P+\alpha\), and the y-axis represents the percentage of adversarial prompts detected as harmful. When the number of adversarial tokens is 0 (no attack), RandEC detects all harmful prompts as such. Keeping the maximum erase length \(d\) fixed at 20 (see Section 4 for definition), we vary the sampling ratio from 0 to 0.4. When this ratio is 0, the procedure does not sample any of the erased subsequences and only evaluates the safety filter (DistilBERT text classifier) on the adversarial prompt. Performance decreases rapidly with the number of adversarial tokens used, and for adversarial sequences of length 20, the procedure labels all adversarial (harmful) prompts as safe. As we increase the sampling ratio, performance improves significantly, and for a sampling ratio of 0.2, RandEC is able to detect more than 90% of the adversarial prompts as harmful.
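A sketch of RandEC for the suffix mode is given below, under the same assumptions as the earlier sketches; only the random subsampling of erase lengths differs from the certified procedure.

```python
import random

# A sketch of RandEC (suffix mode): only a random fraction of the erased
# subsequences, given by the sampling ratio, is checked by the filter.
# Assumes `tokenizer` and `is_harmful` from the earlier sketches.
def rand_ec_suffix(prompt: str, d: int, sampling_ratio: float) -> bool:
    if is_harmful(prompt):
        return True
    tokens = tokenizer.encode(prompt)
    erase_lengths = list(range(1, min(d, len(tokens)) + 1))
    n_sampled = int(sampling_ratio * len(erase_lengths))
    for i in random.sample(erase_lengths, n_sampled):
        if is_harmful(tokenizer.decode(tokens[:len(tokens) - i])):
            return True
    return False
```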
Figure 10: Empirical performance of RandEC on adversarial prompts of different lengths. By checking only 20% of the erased subsequences (sampling ratio = 0.2), it achieves an accuracy of over 90%.

To generate adversarial prompts used in the above analysis, we adapt the Greedy Coordinate Gradient (GCG) algorithm designed by Zou et al. (2023) to attack language models. It optimizes the adversarial suffix for a target output sequence like "Sure, here is..." which allows it to bypass the model's safety guardrails. For our DistilBERT safety classifier, we optimize the suffix to make the model misclassify to the safe class by minimizing the loss for this class. Our attack computes the gradient of the loss for the safe class with respect to the word embeddings of a candidate adversarial prompt. It computes the components of the gradient along all token embeddings. It uses the \(k\) largest negative components to create a set of potential replacements for each adversarial token. It samples a batch of sequences uniformly from the set of all possible sequences that can be created using these replacements. It selects the sequence that maximizes the logit for the safe class. It repeats this process for several iterations and returns the final adversarial prompt.
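The following is a condensed sketch of one iteration of this attack against the DistilBERT filter. The top-\(k\) size, batch size, and tensor handling are illustrative assumptions; `model` denotes the safety classifier and `SAFE` the index of the safe class.

```python
import torch

# A condensed sketch of one GCG-style iteration against the DistilBERT
# filter. Hyperparameters (k, batch) are illustrative assumptions.
def gcg_step(prompt_ids, adv_ids, model, k=256, batch=128, SAFE=0):
    embed = model.get_input_embeddings().weight        # vocab_size x dim
    ids = torch.cat([prompt_ids, adv_ids])
    one_hot = torch.nn.functional.one_hot(ids, embed.shape[0]).float()
    one_hot.requires_grad_(True)
    model.zero_grad()
    logits = model(inputs_embeds=(one_hot @ embed).unsqueeze(0)).logits
    loss = torch.nn.functional.cross_entropy(logits, torch.tensor([SAFE]))
    loss.backward()
    grad = one_hot.grad[len(prompt_ids):]              # adversarial positions only
    candidates = (-grad).topk(k, dim=1).indices        # k largest negative components
    best_ids, best_logit = adv_ids, -float("inf")
    for _ in range(batch):                             # random single-token swaps
        pos = torch.randint(len(adv_ids), (1,)).item()
        new_ids = adv_ids.clone()
        new_ids[pos] = candidates[pos][torch.randint(k, (1,)).item()]
        with torch.no_grad():
            logit = model(torch.cat([prompt_ids, new_ids])
                          .unsqueeze(0)).logits[0, SAFE]
        if logit > best_logit:                         # maximize the safe-class logit
            best_ids, best_logit = new_ids, logit
    return best_ids
```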
### GradEC: Gradient-based Erase-and-Check
In this section, we present a gradient-based version of erase-and-check that uses the gradients of the safety filter to optimize the set of tokens to erase. Observe that the original erase-and-check procedure can be viewed as an exhaustive search-based solution to a discrete optimization problem over the set of erased subsequences. Given an input prompt \(P=[\rho_{1},\rho_{2},\ldots,\rho_{n}]\) as a sequence of \(n\) tokens, denote a binary mask by \(\mathbf{m}=[m_{1},m_{2},\ldots,m_{n}]\), where each \(m_{i}\in\{0,1\}\) represents whether the corresponding token should be erased or not. Define an erase function \(\mathtt{erase}(P,\mathbf{m})\) that erases tokens in \(P\) for which the corresponding mask entry is zero. Note that, in the absence of any constraints on which entries can be zero, the mask \(\mathbf{m}\) can represent the most general mode of the erase-and-check procedure, i.e., the infusion mode. Let \(\mathtt{Loss}(y_{1},y_{2})\) be a loss function which is zero when \(y_{1}=y_{2}\) and greater than zero otherwise. Then, the erase-and-check procedure can be defined as the following discrete optimization problem:
\[\min_{\mathbf{m}\in\{0,1\}^{n}}\mathtt{Loss}(\mathtt{is-harmful}(\mathtt{ erase}(P,\mathbf{m})),\,\mathtt{harmful}),\]
labeling the prompt \(P\) as harmful when the solution is zero and safe otherwise.
In GradEC, we propose to convert this into a continuous optimization problem by relaxing the mask entries to be real values in the range \([0,1]\) and then apply gradient-based optimization techniques to approximate the solution. It requires the safety filter to be differentiable, which is satisfied by our DistilBERT-based safety classifier. This classifier first converts the tokens in the input prompt \(\rho_{1},\rho_{2},\ldots,\rho_{n}\) into word embeddings \(\omega_{1},\omega_{2},\ldots,\omega_{n}\), which are multi-dimensional vector quantities and then performs the classification task on these word embeddings. Thus, for the DistilBERT-based safety classifier, we have
\[\mathtt{is-harmful}(P)=\mathtt{DistilBERT-clf}(\mathtt{word-embeddings}(P)).\]
We modify the erase function in the above optimization problem to operate in the space of word embeddings. We define it as a scaling of each embedding vector with the corresponding mask entry, i.e., \(m_{i}\omega_{i}\), and denote it with the \(\odot\) operator. We also include a regularization term that encourages each mask entry \(m_{i}\) to be closer to one and reduces the number of erasures. Thus, the above optimization problem can be re-written as follows:
\[\min_{\mathbf{m}\in[0,1]^{n}}\Big[\mathtt{Loss}(\mathtt{DistilBERT-clf}(\mathtt{word-embeddings}(P)\odot\mathbf{m}),\,\mathtt{harmful})+\lambda\frac{1}{n}\sum_{i=1}^{n}(1-m_{i})^{2}\Big]\]
where \(\lambda\) is the regularization parameter. To ensure that the elements of the mask \(\mathbf{m}\) are bounded by 0 and 1, we define it as the element-wise sigmoid \(\sigma\) of a logit vector \(\hat{m}\in\mathbb{R}^{n}\) divided by a temperature term \(T\), i.e. \(\mathbf{m}=\sigma(\hat{m}/T)\). Inspired by simulated annealing, we reduce the temperature \(T\) with each iteration, causing the mask entries to approach binary values. Similar to the discrete case, the above formulation also does not distinguish between different attack modes and can model the most general attack mode of infusion.
Figure 11: Empirical performance of GradEC on adversarial prompts of different lengths. Accuracy is above 90% in just 6 iterations.
We run the above optimization for a finite number of iterations, and at each iteration, we construct a token sequence based on the current entries of \(\mathbf{m}\). We round the entries of \(\mathbf{m}\) to 0 or 1 to obtain a binary mask \(\bar{m}\) and construct a token sequence by multiplying its entries by the corresponding token IDs of \(P\), that is, \([\bar{m}_{1}\rho_{1},\bar{m}_{2}\rho_{2},\ldots,\bar{m}_{n}\rho_{n}]\). Thus, the constructed sequence has the token \(\rho_{i}\) wherever the corresponding rounded mask entry is 1 and the token ID 0 everywhere else. The ID 0 token corresponds to the [PAD] token in the DistilBERT tokenizer, which the model is trained to ignore. We decode the constructed sequence of tokens and evaluate the resulting text sequence using the safety filter. If the filter labels the sequence as harmful, we declare that the original prompt \(P\) is also harmful. If the optimization completes all iterations without finding a mask \(\mathbf{m}\) that causes the corresponding sequence to be detected as harmful, we declare that \(P\) is safe.
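The following is a minimal sketch of this loop, assuming a differentiable `clf` that maps scaled word embeddings to class logits and a `decode_and_check` helper that runs the safety filter on the rounded token sequence; the class index and hyper-parameter values are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

HARMFUL = 1  # assumed index of the harmful class

def grad_ec(word_embeds, ids, clf, decode_and_check, iters=12, lam=0.1, lr=0.5):
    n = word_embeds.size(0)
    m_hat = torch.zeros(n, requires_grad=True)        # logits of the soft mask
    opt = torch.optim.Adam([m_hat], lr=lr)
    for it in range(iters):
        T = 1.0 / (it + 1)                            # annealed temperature
        m = torch.sigmoid(m_hat / T)
        logits = clf(word_embeds * m.unsqueeze(1))    # erase = scale embeddings by mask
        loss = F.cross_entropy(logits, torch.tensor([HARMFUL]))
        loss = loss + lam * ((1.0 - m) ** 2).mean()   # discourage unnecessary erasures
        opt.zero_grad()
        loss.backward()
        opt.step()
        kept = (torch.sigmoid(m_hat / T) > 0.5).long()
        if decode_and_check(ids * kept):              # ID 0 maps to [PAD], which is ignored
            return True                               # some subsequence is harmful
    return False
```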
Figure 11 plots the performance of GradEC against adversarial prompts of different lengths. Similar to Figure 10, the x-axis represents the number of tokens used in the adversarial suffix, i.e., \(|\alpha|\) in \(P+\alpha\), and the y-axis represents the percentage of adversarial prompts detected as harmful. When the number of adversarial tokens is 0 (no attack), GradEC detects all harmful prompts as such. We vary the number of iterations of the optimizer from 0 to 12. When this number is 0, the procedure does not perform any steps of the optimization and only evaluates the safety filter (the DistilBERT text classifier) on the adversarial prompt. Performance decreases rapidly with the number of adversarial tokens used, and for adversarial sequences of length 20, the procedure labels all adversarial (harmful) prompts as safe. As we increase the number of iterations, performance improves significantly, and with just 6 iterations, GradEC is able to detect more than 90% of the adversarial prompts as harmful. Its accuracy on safe prompts is greater than 98%, and the running time is less than 0.05 seconds for all values of adversarial sequence length and number of iterations considered in Figure 11.
## 9 Conclusion
We propose a procedure to certify the safety of large language models against adversarial prompting. Our approach produces verifiable guarantees of detecting harmful prompts altered with adversarial sequences up to a defined length. Building on the insight that subsequences of safe prompts are also safe in most everyday use of LLMs, we develop a framework that sequentially removes tokens from a prompt, labeling it as harmful if a safety filter flags any subsequence. We experimentally demonstrate that this procedure can obtain high certified accuracy on harmful prompts while maintaining good empirical performance on safe prompts. It significantly outperforms well-known certified robustness techniques such as randomized smoothing. We demonstrate its adaptability by defending against three different adversarial threat models of varying strengths. We also show that training the safety filter with erased versions of the safe prompts can significantly improve the performance of our procedure. Based on our procedure, we propose two efficient empirical defenses that can achieve good performance against adversarial prompts generated by the GCG attack algorithm.
**Future Work:** Our preliminary results on certifying LLM safety against non-imperceptible adversarial prompting indicate a promising direction for improving language model safety with verifiable guarantees. There are several potential directions in which this work could be taken forward. One could study certificates for more general threat models that allow changes in \(P\) in the adversarial prompt \(P+\alpha\). It would also be interesting to investigate whether the number of text sequences checked by erase-and-check could be reduced. We hope that our contribution to certified LLM safety helps drive future research in this field.
#### Acknowledgments
This work is supported in part by the NSF awards IIS-2008461, IIS-2040989, IIS-2238714, and research awards from Google, JP Morgan, Amazon, Harvard Data Science Initiative, and the Digital, Data, and Design (D\({}^{3}\)) Institute at Harvard. This project is also partially supported by the NSF CAREER AWARD 1942230, the ONR YIP award N00014-22-1-2271, ARO's Early Career Program Award 310902-00001, HR001119S0026 (GARD), Army Grant No. W911NF2120076, NIST 60NANB20D134, and the NSF award CCF2212458. The views expressed here are those of the authors and do not reflect the official policy or position of the funding agencies. |
2303.07626 | CAT: Causal Audio Transformer for Audio Classification | The attention-based Transformers have been increasingly applied to audio
classification because of their global receptive field and ability to handle
long-term dependency. However, the existing frameworks which are mainly
extended from the Vision Transformers are not perfectly compatible with audio
signals. In this paper, we introduce a Causal Audio Transformer (CAT)
consisting of a Multi-Resolution Multi-Feature (MRMF) feature extraction with
an acoustic attention block for more optimized audio modeling. In addition, we
propose a causal module that alleviates over-fitting, helps with knowledge
transfer, and improves interpretability. CAT obtains higher or comparable
state-of-the-art classification performance on ESC50, AudioSet and UrbanSound8K
datasets, and can be easily generalized to other Transformer-based models. | Xiaoyu Liu, Hanlin Lu, Jianbo Yuan, Xinyu Li | 2023-03-14T04:50:52Z | http://arxiv.org/abs/2303.07626v1 | # CAT: Causal Audio Transformer for Audio Classification
###### Abstract
The attention-based Transformers have been increasingly applied to audio classification because of their global receptive field and ability to handle long-term dependency. However, the existing frameworks which are mainly extended from the Vision Transformers are not perfectly compatible with audio signals. In this paper, we introduce a Causal Audio Transformer (CAT) consisting of a Multi-Resolution Multi-Feature (MRMF) feature extraction with an acoustic attention block for more optimized audio modeling. In addition, we propose a causal module that alleviates over-fitting, helps with knowledge transfer and improves interpretability. CAT obtains higher or comparable state-of-the-art classification performance on ESC50, AudioSet and UrbanSound8K datasets, and can be easily generalized to other Transformer-based models.
Xiaoyu Liu\({}^{1}\), Hanlin Lu\({}^{2}\), Jianbo Yuan\({}^{2}\), Xinyu Li\({}^{2}\)

\({}^{1}\)University of Maryland, College Park, \({}^{2}\)ByteDance

**Keywords:** audio classification, transformer, causal inference, attention
## I Introduction
Audio classification is the task of analyzing audio recordings and assigning the corresponding labels to the audio sample. Following the huge success of Transformers [1] in the fields of Natural Language Processing (NLP) and Computer Vision (CV), they have been adapted to the audio modality and have obtained state-of-the-art performances. Part of this success is attributed to the global receptive fields of Transformers, which capture long-range context in audio signals. The existing audio Transformer models inherit the structure of the well-known Vision Transformer (ViT) [2], mainly because one of the most widely used audio features, the Mel-Spectrogram, has the same format as an image. Instead of letting the x- and y-axes carry spatial information as in image modeling, the x-axis of the Mel-Spectrogram denotes temporal information while the y-axis carries discrete frequency information for audio inputs [3, 4, 5, 6, 7, 8].
While achieving superior performance, there are still open challenges in these trendy audio Transformers: (1) The commonly used acoustic representations leverage different time-frequency transformations and contain acoustic semantics of various scales and different granularities [4, 5, 6], which can hardly be effectively captured by ViTs [2] using conventional self-attention, patch sampling and embeddings; (2) Successful designs from visual tasks, such as ResNet [9] and mixup [10], are shown to be more prone to over-fitting and less generalizable in the acoustic domain; (3) Feature selection and representation learning are critical in computer vision, yet have often been overlooked in acoustic modeling.
In the face of these challenges, we propose the **C**ausal **A**udio **T**ransformer (CAT), which includes Multi-Resolution Multi-Filter (MRMF) feature extraction, an acoustic attention and a causal module. Spectrograms, as standard inputs to audio models, are sampled using the Fourier Transform, leading to a natural trade-off between the temporal resolution and the frequency resolution. CAT balances this trade-off by extracting comprehensive temporal-frequency feature patches at multiple resolutions and under multiple filters, which are later combined with 3D positional embeddings. The acoustic attention is then proposed to effectively extract semantics from such representations, taking the feature patches as inputs. The patches from different filters are evenly distributed among the attention heads, while we calculate pair-wise attentions among patches from different resolutions but within the same time frames, allowing information exchange at various granularities. We further introduce a causal module to establish the necessary and sufficient relationship between the learned representation and the predicted labels based on counterfactual reasoning [11, 12]. We extend [12] to the context of audio classification, where a lower bound on the Probability of Necessity and Sufficiency (PNS) is provided in terms of the interventional distribution. Since such a lower bound can only be estimated from the true distribution, we propose a causal module that learns a mapping from the interventional distribution to the observational dataset (i.e., the one we have), to alleviate over-fitting, improve interpretability and enable better knowledge transfer.
CAT achieves SOTA performance on ESC50 [13], AudioSet [14] and UrbanSound8K [15]. To sum up, our contributions are:
* Causal Audio Transformer (CAT), with _Multi-Resolution Multi-Filter (MRMF)_ feature extraction and _an acoustic attention_ for acoustic modeling.
* A causal loss with a reconstruction block that explicitly measures feature quality using the Probability of Necessity and Sufficiency (PNS), alleviates over-fitting, and improves knowledge transfer across different datasets.
## II Related Work
**Audio classification: from CNN to Transformer.** Audio classification is the task of predicting labels of soundtracks. The great success of large-scale training originates from the rise of CNN models in audio classification [8, 16, 17, 18, 19, 20]. More recently, with the attention mechanism triumphing in the fields of Natural Language Processing (NLP) and Computer Vision (CV), self-attention-based Transformers were first applied to acoustic classification by AST [5], where the spectrogram is used as the input of a ViT [2] backbone. Multiple subsequent studies focus on improving model efficiency: PaSST [4] proposes a patch-out mechanism, and HTS-AT [6] adopts a hierarchical backbone [21]. To further boost performance, MBT [3] introduces a visual-acoustic fusion and PLSA [7] introduces a model-agnostic framework. However, the network structures in most recent studies are heavily borrowed from vanilla Transformers originally proposed for NLP and CV tasks, and are more prone to over-fitting and less generalizable to acoustic data. Therefore, in this paper, we introduce the acoustic attention which incorporates MRMF features specialized for acoustic modeling. The proposed acoustic attention can be easily extended to various Transformer blocks.
**Causal inference in feature representation.** The concept of causality is first introduced in graphical probabilistic models [11]. Although causal inference is a relatively new concept in audio classification, it has shown advances in interpretable machine learning and representation learning. Modeling causality among generating factors significantly encourages the learning of representative features [22]. [23] adopts counterfactual information that helps knowledge transfer across different domains. [12] proves a lower bound of the learned representation being a necessary and sufficient condition for label prediction. However, such a lower bound is in the interventional setting, and thus cannot be estimated directly without knowing the true distribution. We base our design on [12] and extend it to a causal module that maps the lower bound to the observational dataset in the context of audio classification.
## III Causal Audio Transformer
In this section, we introduce our Causal Audio Transformer (CAT). CAT first extracts MRMF feature patches with 3D positional embedding. The feature patches are then sent as input to the acoustic attention (Section III-A). Then a causal module is proposed to alleviate over-fitting and improve interpretability and knowledge transfer (Section III-B).
### Transformer For Acoustic Modeling
Image modeling is largely different from acoustic modeling because a 2D image only carries spatial information along its axes, whereas acoustic features convey both temporal and frequency information. Therefore, it is not trivial to apply the conventional vision transformer for efficient acoustic modeling. We bridge the gap by extracting MRMF features and acoustic attention.
**Multi-resolution multi-filter feature extraction.** Due to the nature of 1D Fourier Transformation (FT), there is a trade-off between frequency and temporal resolution in a temporal-frequency representation (e.g. spectrogram). We propose to extract spectrograms of different temporal resolutions as:
\[x_{\text{MR}}=\{\text{FFT}_{\omega}(s)\},\omega\in\{\omega_{1},\omega_{2},..., \omega_{K}\} \tag{1}\]
where \(s\) denotes the input audio wave, \(\text{FFT}_{\omega}\) denotes the 1D FFT with a window size of \(\omega\), and \(\{*\}\) represents a set. For each raw input \(x\), \(x_{\text{MR}}\) contains \(K\) spectrograms, each of size \(\mathbb{R}^{T_{i}\times F},i=1,2,\ldots,K\). \(T_{i}\) represents the number of time frames, and \(F\) is the number of frequency bins. Such multi-resolution spectrograms share similar intuitions with [24, 25], but ours are paired with the acoustic attention and a 3D positional embedding, as introduced in the next section, so that the information can be exploited more effectively. We further allow spectrograms under multiple filters to serve as parallel inputs to different attention heads. For example, we could pass a copy of \(x_{\text{MR}}\) to the Mel-filter bank, obtaining the MRMF features \(x_{\text{MRMF}}\in\mathbb{R}^{\{T_{i}\}\times K\times F\times 2}\) as follows:
\[x_{\text{MRMF}}=\{\text{mel}(x_{\text{MR}}),x_{\text{MR}}\} \tag{2}\]
Fig. 1: Left: Overview of proposed CAT; Right: the detailed structure of proposed acoustic attention.
Spectrograms of different resolutions have different sizes. There are multiple ways to align them. One possibility is to concatenate them along the temporal dimension, forming a super spectrogram. Another method is to project the spectrograms to the same temporal dimension. We adopt the latter setting based on better empirical results.
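A hedged sketch of this extraction pipeline (Eqs. 1–2) is given below, using torchaudio; the window sizes, the bilinear projection to a common size, and the channel layout are illustrative choices, not the exact configuration.

```python
import torch
import torch.nn.functional as F
import torchaudio

def mrmf_features(wave, sr=32000, windows=(256, 512, 1024), n_bins=64, frames=256):
    """wave: (batch, samples) -> (batch, 2*K, n_bins, frames) feature maps."""
    feats = []
    for w in windows:                                  # one spectrogram per resolution
        spec = torchaudio.transforms.Spectrogram(n_fft=w, hop_length=w // 2)(wave)
        mel = torchaudio.transforms.MelScale(
            n_mels=n_bins, sample_rate=sr, n_stft=w // 2 + 1)(spec)
        for x in (mel, spec):                          # two filters: mel and identity
            feats.append(F.interpolate(x.unsqueeze(1), size=(n_bins, frames),
                                       mode="bilinear", align_corners=False).squeeze(1))
    return torch.stack(feats, dim=1)
```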
**Patching and 3D positional embedding.** The input \(x_{\text{MRMF}}\) is patchified and aggregated with windows \(W_{i}\in\mathbb{R}^{1\times K\times F\times 1}\), and we use linear projections \(\{\xi\}\) to match the dimension to \(M\). Different from the square-shaped input patching in computer vision [2], the proposed patch \(x_{p}\) contains frequency information of various resolutions. Such a design aggregates and preserves the important multi-scale frequency information, which is critical for audio classification. Inspired by the 3D positional embedding used in video transformers [26], we propose an acoustic 3D positional embedding to make the network aware of features extracted with different window sizes and filters:
\[pe=(g([pe_{1},pe_{3}])\otimes\mathbf{1}_{F})^{T}+pe_{2}\otimes\mathbf{1}_{T} \tag{3}\]
where \(g\) is a linear projection, \(pe_{1}\) is a \(T\)-dimensional sinusoidal time embedding and \(pe_{2}\) an \(F\)-dimensional sinusoidal frequency embedding, following [1]. \(pe_{3}\) is the one-hot encoding of resolutions. \(\mathbf{1}_{F}\) is the \(F\)-dimensional all-one vector.
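The broadcast structure of Eq. (3) can be sketched as follows; the embedding dimension and our reading of the output shape \((F, T)\) are assumptions.

```python
import torch

def sinusoid(n, d):
    pos = torch.arange(n, dtype=torch.float).unsqueeze(1)
    i = torch.arange(d, dtype=torch.float)
    angles = pos / (10000 ** (2 * (i // 2) / max(d, 1)))
    angles[:, 0::2] = torch.sin(angles[:, 0::2])   # standard Transformer recipe
    angles[:, 1::2] = torch.cos(angles[:, 1::2])
    return angles                                   # (n, d)

def acoustic_pe_3d(T, F_bins, K, res_idx, d=16):
    pe1 = sinusoid(T, d)                            # time embedding pe_1
    pe3 = torch.zeros(T, K)
    pe3[:, res_idx] = 1.0                           # one-hot resolution encoding pe_3
    g = torch.nn.Linear(d + K, 1)                   # linear projection g
    time_term = g(torch.cat([pe1, pe3], dim=-1))    # (T, 1)
    pe2 = sinusoid(F_bins, 1)                       # frequency embedding pe_2, (F, 1)
    return (time_term @ torch.ones(1, F_bins)).T + pe2 @ torch.ones(1, T)  # (F, T)
```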
**Acoustic attention.** We consequently design the acoustic attention to incorporate MRMF feature patches in audio Transformers. Specifically, multi-filtered inputs are fed into different heads and processed with separate scaled dot-product attention (Figure 1) so that attentions with and without Mel-filtering are calculated independently. Note that the proposed acoustic attention works with various attention kernels (e.g., multi-head attention [1], SWIN-attention [21]). We use the SWIN kernel for our experiments.
### Causal module for better representation
The idea of encouraging the learned hidden representation to be a necessary and sufficient cause of label prediction is first proposed in [12], by maximizing a lower bound under intervention. [12] demonstrated that this helps build models with less over-fitting and more robustness. However, such a bound requires an estimation of the unobserved confounder \(C\), making it hard to use in empirical setups. In this paper, we introduce a causal module with a causal loss that applies additional constraints on bijective mappings to resolve this issue. In this section, we first introduce the preliminaries from a causal perspective, then introduce the proposed module.
**Preliminaries** Let \(X\) denote the training data and \(Y\) be the counterfactual label function. We define a function \(f\) that maps \(X\) to its latent representation \(Z\), where \(Z=f(X)\). The probability of sufficiency and necessity of \(\mathbbm{1}\{Z=z\}\) for \(\mathbbm{1}\{Y=y\}\) is defined as [12]:
\[\text{PNS}_{Z=z,Y=y}=P(Y(Z\neq z)\neq y,Y(Z=z)=y). \tag{4}\]
where \(Y(Z=z)\) reasons about what the label would be if \(Z=z\). To combat over-fitting and ensure the feature is representative, our goal is to maximize the PNS during training. [12] established the lower bound of PNS as:
\[\text{PNS}\geq P(Y=y|do(Z=z))-P(Y\neq y|do(Z\neq z)) \tag{5}\]
where \(do\) notation [11] stands for intervention of \(Z\).
However, without knowing the true distribution, the corresponding label change under a given intervention on the latent representation \(Z\) cannot be directly estimated. Instead of solving the problem through deconfounding, we derive an estimation of the lower bound as:
\[\text{PNS}\geq\int P(Y|X)[P(X|f(X)=z)-P(X|f(X)\neq z)]dX \tag{6}\]
We could observe from a probabilistic view that maximizing \(P(Y|X)\) is the classification objective, and \(P(X|f(X)=z)\) is the probability of inferring \(X\) from a specific latent representation \(Z=z\). Given that we want to maximize such a lower bound of PNS, a feasible strategy is to maximize both \(P(Y|X)\) and \(P(X|f(X)=z)-P(X|f(X)\neq z)\). The former is consistent with the classification objective, while the second term indicates that we need an accurate and bijective matching from \(Z\) to \(X\).
**Reconstruction and causal loss** We argue that such a mapping can be accomplished by introducing a reconstruction module and a "causal" loss, as shown in Figure 1. We propose a simple reconstruction block that runs in parallel with the classification layer so that \(X\) and \(Z\) approximate a bijective relationship, measured by the reconstruction loss \(l_{rs}\):
\[l_{rs}=\|\Phi(Z)-X\|_{2} \tag{7}\]
where \(\Phi\) is parameterized by a reconstruction block as in Figure 1.
With the underlying requirement satisfied, we further minimize the causal loss ([12], Equation 25) as:
\[l_{c}=-\sum_{j=1}^{d}\log\text{PNS}_{n}(f_{j}(X),Y|f_{-j}(X)) \tag{8}\]
where
\[\text{PNS}(f_{j}(X),Y|f_{-j}(X))=\prod_{i=1}^{n}\text{PNS}_{f_{j}(X_{i}),Y_{i }|f_{-j}(X_{i})} \tag{9}\]
The objective of CAT is the sum of a cross-entropy loss \(l_{\theta}\), a reconstruction loss, and a causal loss:
\[L=\operatorname*{argmin}(l_{\theta}+l_{c}+l_{rs}) \tag{10}\]
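A minimal sketch of how these three terms combine during training is shown below; the PNS term is left as a user-supplied callable since its estimator follows Equation (25) of [12], and all interfaces here are assumptions.

```python
import torch
import torch.nn.functional as F

def cat_loss(logits, labels, z, x, reconstruct, causal_pns_loss):
    l_theta = F.cross_entropy(logits, labels)      # classification loss
    l_rs = torch.norm(reconstruct(z) - x, p=2)     # reconstruction loss, Eq. (7)
    l_c = causal_pns_loss(z, labels)               # -sum_j log PNS_n, Eq. (8)
    return l_theta + l_c + l_rs                    # total objective, Eq. (10)
```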
## IV Experiments
### Dataset
We evaluate CAT on three datasets. **AudioSet [14]**: contains over two million 10-second audio clips with 527 labels. We report mean average precision (mAP) on the evaluation set following [5]. **ESC50 [13]:** contains 50 categories and 2000 audio recordings. We report the averaged accuracy of 5-fold cross-validation and the mAP of 5 rounds of experiments following [5]. **UrbanSound8K [15]:** contains 8732 labeled sound excerpts of 10 classes. We report the average accuracy of 10-fold cross-validation following [27].
### Implementation details
Following previous recipes [4, 5, 8], we use 32kHz as the sample rate for all experiments. We generate both the spectrograms and mel-spectrograms with a window of 25ms and a hop length of 10ms. In training, we adopt inverse proportional weighting [28] to balance the AudioSet training data, and mixup with ratio 0.5 [29] is used in all of our experiments.
### Main Results
**AudioSet** We show CAT's performance on AudioSet in Table I. Our model significantly outperforms previous work based on ConvNets [8], as well as SOTA Transformer-based models, by a noticeable margin (+\(0.7\%\) compared to [6]). Since both our CAT and HTS-AT share a similar SWIN backbone, the superior performance demonstrates that our proposed audio Transformer with MRMF and acoustic attention, plus the causal module, is effective. It is worth mentioning that, following the same ensemble protocol, CAT achieves performance comparable to SOTA ensemble models. By simply ensembling CAT with PANN [8], we achieve performance comparable to the previous 9-model ensemble [4] results. This also shows that CAT provides complementary information to ConvNet-based models.
**ESC50** CAT achieves SOTA performance on the ESC50 dataset as well (Table I). We observe a similar pattern: the proposed CAT outperforms the previous SOTA [5, 6] by 0.8% and 0.2% with and without ensemble, demonstrating the effectiveness of the proposed design. Note that ensembling CAT with another acoustic model [8] achieves performance comparable to previous work trained on an additional modality; e.g., CAT is 0.7% better than AnT [30], which utilizes visual-acoustic information. It can be anticipated that incorporating multimodal features into CAT may further improve performance.
### Ablations
We perform ablations on ESC50 and AudioSet.
**Effectiveness of causal loss.** Table II shows that the causal loss helps with performance on datasets of different scales.
**Impact of pre-training.** The ImageNet pre-training helps with the performance on both datasets (Table IIb); however, the impact is more significant on ESC50. This is because Transformers generally require large-scale data to train.
**Generalization** We show that the proposed MRMF and causal loss generalize well to different backbones (e.g., backbones with [21] and without [5] hierarchy) and consistently improve the performance in Table IIc.
**Acoustic embedding quality** We show that the proposed causal loss is able to eliminate noise so that the features transfer better across datasets (Table IId). We perform pre-training on small-scale datasets to ensure that the performance gain comes from the proposed causal loss instead of data overlapping.
## V Conclusion and Future Work
In this paper, we propose the Causal Audio Transformer (CAT), an acoustic Transformer designed for audio classification. CAT consists of MRMF features, an acoustic attention and a causal module. We empirically show that the proposed CAT achieves SOTA performance on multiple datasets. Our ablation also demonstrates that the MRMF features and causal loss generalize well to different backbones.
Table II: _Ablation on ESC50 and AudioSet. CL denotes causal loss and US8K denotes UrbanSound8K._ |
2301.09072 | ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning | Large-scale pre-trained models such as CodeBERT, GraphCodeBERT have earned
widespread attention from both academia and industry. Attributed to the
superior ability in code representation, they have been further applied in
multiple downstream tasks such as clone detection, code search and code
translation. However, it is also observed that these state-of-the-art
pre-trained models are susceptible to adversarial attacks. The performance of
these pre-trained models drops significantly with simple perturbations such as
renaming variable names. This weakness may be inherited by their downstream
models and thereby amplified at an unprecedented scale. To this end, we propose
an approach namely ContraBERT that aims to improve the robustness of
pre-trained models via contrastive learning. Specifically, we design nine kinds
of simple and complex data augmentation operators on the programming language
(PL) and natural language (NL) data to construct different variants.
Furthermore, we continue to train the existing pre-trained models by masked
language modeling (MLM) and contrastive pre-training task on the original
samples with their augmented variants to enhance the robustness of the model.
The extensive experiments demonstrate that ContraBERT can effectively improve
the robustness of the existing pre-trained models. Further study also confirms
that these robustness-enhanced models provide improvements as compared to
original models over four popular downstream tasks. | Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, Yang Liu | 2023-01-22T08:03:20Z | http://arxiv.org/abs/2301.09072v1 | # ContraBERT: Enhancing Code Pre-trained Models via Contrastive Learning
###### Abstract
Large-scale pre-trained models such as CodeBERT, GraphCodeBERT have earned widespread attention from both academia and industry. Attributed to the superior ability in code representation, they have been further applied in multiple downstream tasks such as clone detection, code search and code translation. However, it is also observed that these state-of-the-art pre-trained models are susceptible to adversarial attacks. The performance of these pre-trained models drops significantly with simple perturbations such as renaming variable names. This weakness may be inherited by their downstream models and thereby amplified at an unprecedented scale. To this end, we propose an approach namely ContraBERT that aims to improve the robustness of pre-trained models via contrastive learning. Specifically, we design nine kinds of simple and complex data augmentation operators on the programming language (PL) and natural language (NL) data to construct different variants. Furthermore, we continue to train the existing pre-trained models by masked language modeling (MLM) and contrastive pre-training task on the original samples with their augmented variants to enhance the robustness of the model. The extensive experiments demonstrate that ContraBERT can effectively improve the robustness of the existing pre-trained models. Further study also confirms that these robustness-enhanced models provide improvements as compared to original models over four popular downstream tasks.
Code Pre-trained Models, Contrastive Learning, Model Robustness
## I Introduction
It has already been confirmed that the "big code" era [1] is coming, due to the ubiquity of software in modern society and the accelerated iteration of the software development cycle (design, implementation and maintenance). According to an official GitHub report [2] in 2018, GitHub has already reached 100 million hosted repositories. The Evans Data Corporation [3] also estimated that there were 23.9 million professional developers in 2019 and that this number is expected to reach 28.7 million in 2024. As a result, the availability of code-related data is massive (e.g., billions of lines of code, millions of code changes, bug fixes and code documentation), which yields a hot topic in both academia and industry: how to adopt data-driven approaches (e.g., deep learning) to solve conventional software engineering (SE) problems.
Deep learning has been widely applied to diverse SE tasks (AI4SE) such as software vulnerability detection [4, 5, 6], source code summarization [7, 8, 9], deep code search [10, 11] and source code completion [12, 13, 14]. Early works [15, 16, 17, 18, 19] directly utilized vanilla deep learning techniques such as Long Short-Term Memory networks (LSTMs) [20] and Convolutional Neural Networks (CNNs) [21] for different tasks. Later works [5, 7, 10, 22, 23, 24, 4, 8] customized different network architectures to satisfy the characteristics of the specific task for achieving the best performance. For example, since complicated data dependencies and control dependencies are more likely to trigger software vulnerabilities, Devign [5] incorporated different kinds of program structure information with the Code Property Graph [25] into Graph Neural Networks [26] for vulnerability detection. Considering that code duplication [27] is common in the "big code" era, Liu et al. [7] combined retrieved code-summary pairs to generate high-quality summaries. Although these customized networks have achieved significant improvements on specific tasks, the generalization performance is still low. To address this limitation, some researchers propose to utilize unsupervised techniques with massive amounts of data to pre-train a general model [28, 29, 30, 31, 32, 33, 34, 35] and then fine-tune it for different downstream tasks. For example, CuBERT [36] pre-trained BERT [37] on a large collected Python corpus (7.4M files) and then fine-tuned it on different tasks such as variable-misuse identification and wrong binary operator identification. CodeBERT [29] pre-trained RoBERTa [38] for programming languages (PL) with their natural language (NL) comments on the open-source six programming languages [16] and evaluated it on code search and source code summarization. GraphCodeBERT [30] further incorporated data flow information to encode the relation of variables in a program for pre-training and demonstrated its effectiveness on four downstream tasks.
The aforementioned pre-trained models have a profound impact on the AI4SE community and have achieved promising results on various tasks. With the widespread use of pre-trained models, an important question is whether these models are robust in representing code semantics. Our preliminary study has demonstrated that state-of-the-art pre-trained models are not robust to a simple label-preserving program mutation such as variable renaming. Specifically, we utilize the test data of clone detection (POJ-104) [39] (a task to detect whether two functions with different implementations are semantically equivalent)
provided by CodeXGLUE [3] and select those samples that are predicted correctly by the pre-trained CodeBERT [29] and GraphCodeBERT [30]. Then we randomly rename variables within these programs from 1 to 8 edits. For example, 8 edits mean that we randomly select 8 different variables in a function and rename them for all occurrences with the newly generated names. If one function has fewer than 8 variables, we rename the maximum number of variables. We then utilize these newly generated mutated variants to evaluate the model prediction accuracy based on the cosine similarity of the embedded vectors of these programs. Surprisingly, we find that both CodeBERT and GraphCodeBERT suffer greatly from the renaming operation, and the accuracy reduces to around 0.4 when the number of renaming edits reaches 8 (see Fig. 1). This confirms that pre-trained models are not robust to adversarial examples. However, it is challenging to improve the robustness of pre-trained models. The latest work by Yang et al. [40] proposed attack strategies that make CodeBERT and GraphCodeBERT perform poorly on adversarial samples, and further combined adversarial samples with original samples to fine-tune the pre-trained models, without any changes to the model architecture, to improve prediction robustness on downstream tasks. However, a newly designed model that inherently addresses this robustness weakness was not proposed in their paper.
In this paper, we propose ContraBERT, an unsupervised contrastive-learning-based framework to enhance the robustness of existing pre-trained models in code scenarios. Compared with Yang et al. [40], we design a new pre-trained model that takes masked language modeling (MLM) and a contrastive pre-training task as the pre-training tasks to improve model robustness. To design a contrastive pre-training task that helps the model group similar samples while pushing away dissimilar samples, we define nine kinds of simple or complex data augmentation operators that transform the original program and natural language sequence into different variants. Given an existing pre-trained model such as CodeBERT or GraphCodeBERT, we take the original sample as well as its augmented variants as the input to train the model with MLM and the contrastive pre-training task, where MLM helps the model learn better token representations and the contrastive pre-training task helps the model group similar vector representations to enhance model robustness. As shown in Fig. 1, ContraBERT_C and ContraBERT_G denote the models pre-trained from CodeBERT and GraphCodeBERT with our approach, respectively. We observe that with an increasing number of edits, although the performance continues to drop, the curve for ContraBERT is much smoother. The prediction accuracy of ContraBERT_C and ContraBERT_G outperforms that of CodeBERT and GraphCodeBERT significantly, indicating that ContraBERT_C and ContraBERT_G are more robust than the original models. We further perform an ablation study to confirm that each type of defined PL-NL augmentation operator is effective in improving model robustness. Finally, we conduct broad research on four downstream tasks (i.e., clone detection, defect detection, code-to-code translation and code search) to illustrate that these robustness-enhanced models provide significant improvements as compared to the original models. In summary, our main contributions are as follows:
* We present a framework ContraBERT that enhances the robustness of existing pre-trained models in the code scenario by the pre-training tasks of masked language modeling and contrastive learning on original samples as well as the augmented variants.
* We design nine kinds of simple or complex data augmentation operators on the programming language (PL) and natural language (NL) sequences. Each operator is confirmed to be effective in improving the model's robustness.
* The broad research on four downstream tasks demonstrates that the robustness-enhanced models provide improvements as compared to the original models. Our code and model are released on [41] for reproduction.
**Organization:** The remainder of this paper is organized as follows: Section II describes the background of the original models that ContraBERT will use. We elaborate our approach in Section III. Section IV and Section V present the experimental setup and experimental results. In Section VI, we give some discussions about our work. After a brief review of related work in Section VII, we conclude this paper in Section VIII.
## II Background
In this section, we briefly introduce CodeBERT and GraphCodeBERT which will be adopted as our original pre-trained models for ContraBERT.
### _CodeBERT_
CodeBERT [29] is pre-trained on an open-source benchmark, CodeSearchNet [16], which includes 2.1M bimodal NL-PL (comment-function) pairs and 6.4M unimodal functions without comments across six programming languages. The model architecture is the same as RoBERTa [38], which utilizes a multi-layer bidirectional Transformer [42] for unsupervised learning. Specifically, CodeBERT consists of 12 identical layers and 12 heads, and the dimension size of each layer is 768. In total, the number of model parameters reaches 125M. Two different pre-training objectives are used. The first one is masked language modeling (MLM), which is trained on bimodal data. The MLM objective targets predicting the original tokens that are masked out in NL-PL pairs. To fully utilize unimodal data, CodeBERT further uses a Replaced Token Detection (RTD) objective on both bimodal and unimodal samples. The RTD objective is designed to determine whether a word is original or not. At the fine-tuning phase, two downstream tasks (i.e., code search and source code documentation generation) are used for evaluation. The experimental results demonstrate that CodeBERT outperforms supervised approaches on both tasks.

Fig. 1: Adversarial attacks on clone detection (POJ-104).
### _GraphCodeBERT_
GraphCodeBERT [30] is a pre-trained model for code, which considers structures in code. Specifically, it incorporates the data flow of code to encode the relations of "where the value comes from" between variables in the pre-training stage. In addition to the pre-training task of masked language modeling (MLM), GraphCodeBERT further introduces two new structure-aware pre-training tasks. The first one, edge prediction, is designed to predict whether two nodes in the data flow are connected. The other, node alignment, is designed to align edges between code tokens and nodes. GraphCodeBERT utilizes NL-PL pairs for six programming languages from CodeSearchNet [16] for pre-training. It is fine-tuned on four downstream tasks including code search, clone detection, code translation and code refinement. The extensive experiments on these tasks confirm that code structures and the defined pre-training tasks help the model achieve state-of-the-art performance on these tasks.
## III Approach
In this section, we first present an overview of our approach, then detail each component including PL-NL augmentation, model design in pre-training and the fine-tuning settings for downstream tasks.
### _Overview_
The overview of ContraBERT is shown in Fig. 2. Specifically, given a pair of a function \(C\) with its comment \(W\) (i.e., \((C,W)\)), we first design a set of PL-NL augmentation operators \(\{f(*)\},\{g(*)\}\) to construct simple or complex variants for \(C\) and \(W\), respectively. In the pre-training phase, initialized from existing pre-trained models such as CodeBERT or GraphCodeBERT, we further pre-train these models on the original samples and their augmented variants with masked language modeling (MLM) and a contrastive pre-training task to enhance model robustness. Finally, once ContraBERT is pre-trained over a large amount of unlabeled data, we fine-tune it for different types of tasks such as retrieval tasks, classification tasks and generation tasks with the task-specific data in a supervised manner.
### _PL-NL Augmentation_
Given a program \(C\), clone detection [43] could help to identify a semantically equivalent program \(C^{\prime}\). However, this technique is unrealistic in practice: for any function in a fixed dataset, we cannot guarantee that we will be able to find semantically equivalent variants. Furthermore, clone detection usually takes a whole project for analysis, which is not applicable to a single function. Hence, we consider constructing augmented variants based on the original samples. Compared with the existing works [44, 45] that only focus on program mutations, we additionally design a set of natural language (NL) sequence augmentation operators. Specifically, we design a series of simple operators and complex operators for both PL and NL to construct variants.
#### Iii-A1 Program (PL) Augmentation Operators
For program augmented operators, we design four kinds of complex operators and one kind of simple operator.
**Complex Operators:**
* Rename Function Name (RFN). It is designed to replace the function name with a new name that is taken randomly from an extra vocabulary set constructed on the pre-training dataset. We extract all function names in the pre-training dataset for this construction. Since each sample in the dataset is a single function, the renamed function preserves semantics equivalent to the original function.
* Rename Variable (RV). It renames variables in a function. A random number of variables will be replaced, for all occurrences in the function, with new names taken randomly from an extra vocabulary set. We extract all variable names from the pre-training dataset to construct this vocabulary set. This operator only changes variable names, replacing all occurrences of them with the new names, and therefore does not change the semantics of the original function.
* Insert Dead Code (IDC). It inserts unused statements into a function. To generate unused code statements, we traverse the AST to identify the assignment statements and then randomly select one assignment statement, renaming its variables with new names that have never appeared in the same function. We then treat it as dead code and insert it at the position after the original assignment statement. As the inserted dead code does not change the original program behaviour, IDC is a semantically equivalent operator.
* Reorder (RO). It randomly swaps two lines of statements that have no dependency on each other within a basic block of a function body, such as two declaration statements appearing on two consecutive lines without other statements between them. We traverse the AST and analyze the data dependencies for extraction. Since the permuted statements are independent, without data dependency, this operator preserves the original program semantics.

Fig. 2: The Overview of ContraBERT.
**Simple Operators:**
* Sampling (SP). It randomly deletes one statement from a function body and preserves the others. It can serve as a regularizer to avoid overfitting [45].
#### Iii-B2 Comment (NL) Augmentation Operators
Apart from the program augmentation, we further design one kind of complex operator and three kinds of simple operators for comment augmentation operators as follows:
**Complex Operators:**
* Back Translation Mutation (Trans). It refers to translating a source sequence into another language (the target sequence) and then translating this target sequence back into the original language [46]. We use the released tool [47] for the implementation, where the source is in English and the target is in German.
**Simple Operators:**
* Delete. It randomly deletes a word in a comment.
* Switch. It randomly switches the positions of two words in a comment.
* Copy. It randomly copies a word and inserts it after this word in a comment.
Given a function \(C\) with its paired comment \(W\), we apply the above augmentation operators to \(C\) and \(W\) to obtain the augmentation sets, defined as \(S_{C}\) and \(S_{W}\) respectively. Specifically, each operator is applied once to produce its corresponding augmented variant, which is inserted into the corresponding augmentation set. For functions where the operator IDC cannot produce a variant, we skip it and use the other operators for the construction. Then we randomly select an augmented version from \(S_{C}\) and \(S_{W}\) (i.e., \(C^{\prime}\in S_{C}\) and \(W^{\prime}\in S_{W}\)) and construct the quadruple \((C,W,C^{\prime},W^{\prime})\) for pre-training. Note that during the pre-training process, at each learning step, \((C^{\prime},W^{\prime})\) is randomly selected from the augmented sets \(S_{C}\) and \(S_{W}\) respectively. Hence, each augmented sample in the sets is used when the model has sufficient learning steps.
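As a concrete illustration, the following sketches two of these operators at the token level (RV on code and Switch on comments); the real operators rely on AST analysis and extracted vocabulary sets, so these simplified versions are assumptions.

```python
import random

def rename_variables(code_tokens, variables, name_vocab, n_edits=1):
    """RV: rename up to n_edits variables for all of their occurrences."""
    chosen = random.sample(variables, min(n_edits, len(variables)))
    mapping = {v: random.choice(name_vocab) for v in chosen}
    return [mapping.get(t, t) for t in code_tokens]

def switch_words(comment_words):
    """Switch: swap the positions of two random words in a comment."""
    words = list(comment_words)
    if len(words) >= 2:
        i, j = random.sample(range(len(words)), 2)
        words[i], words[j] = words[j], words[i]
    return words
```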
### _Model Design and Pre-training_
Basically, ContraBERT is further trained from an existing pre-trained model. We directly take the existing pre-trained model and continue pre-training it with masked language modeling (MLM) and a contrastive pre-training task to enhance its robustness. The model design of ContraBERT is presented in Fig. 3.
#### Iii-C1 Model Design
As shown in Fig. 3, ContraBERT consists of two separate encoders M and M', where M can be represented by any pre-trained model such as CodeBERT. The model architecture of M' is the same as the encoder M and the initial weights are also the same as M; however, the weight update strategy differs from that of M. Specifically, given a quadruple \((C,W,C^{\prime},W^{\prime})\) from Section III-B, we construct two input sequences \(X=\{[CLS],W,[SEP],C,[SEP]\}\) and \(X^{\prime}=\{[CLS],W^{\prime},[SEP],C^{\prime},[SEP]\}\), where "\([CLS]\)" indicates the beginning of a sequence and "\([SEP]\)" is a symbol that concatenates two kinds of sequence. We utilize the encoders M and M' to encode the masked input sequences \(X\) and \(X^{\prime}\) respectively.
#### Iii-C2 Pre-training Tasks
Masked language modeling (MLM) is an effective and widely adopted pre-training task to learn effective token representations [37, 38], so we also utilize it as one of our pre-training tasks. However, from our preliminary results in Section I, we observe that models trained by MLM are weak against adversarial examples. We therefore further introduce a contrastive pre-training task that groups similar data and pushes away dissimilar data, reshaping the learnt space of the encoder M to enhance model robustness.
**Masked Language Modeling (MLM).** We utilize MLM to learn token representations in a sequence. Specifically, given the sequence \(X=\{[CLS],W,[SEP],C,[SEP]\}\), a random set of positions in \(X\) is masked out. We select 15% of the tokens to mask out and obtain the masked token set. Furthermore, we replace 80% of the masked tokens in this set with the "[MASK]" symbol, 10% with random tokens from the vocabulary set, and leave the remaining 10% unchanged. We use these settings since they are confirmed to be effective for learning token representations in a sequence [37, 38]. The loss function \(\mathcal{L}_{\mathrm{MLM}}\) can be expressed as follows:
\[\mathcal{L}_{\mathrm{MLM}}=-\sum_{x_{i}\in M}\mathrm{log}p(x_{i}|X^{mask}) \tag{1}\]
where \(X^{mask}\) is the masked input sequence and \(M\) is the masked token set.
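This follows the standard BERT-style recipe; a minimal sketch of the 15% / 80-10-10 selection (with special-token handling omitted for brevity) is:

```python
import torch

def mask_for_mlm(input_ids, mask_id, vocab_size, mlm_prob=0.15):
    input_ids = input_ids.clone()
    labels = input_ids.clone()
    selected = torch.bernoulli(torch.full(input_ids.shape, mlm_prob)).bool()
    labels[~selected] = -100                              # compute loss only on masked positions
    as_mask = torch.bernoulli(torch.full(input_ids.shape, 0.8)).bool() & selected
    input_ids[as_mask] = mask_id                          # 80% of selected -> [MASK]
    as_rand = torch.bernoulli(torch.full(input_ids.shape, 0.5)).bool() & selected & ~as_mask
    input_ids[as_rand] = torch.randint(vocab_size, input_ids.shape)[as_rand]  # 10% random
    return input_ids, labels                              # remaining 10% stay unchanged
```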
**Contrastive Pre-training.** We design a contrastive pre-training task that uses InfoNCE [48] as the loss function to enhance model robustness. It can be expressed as follows:
\[\mathcal{L}_{\mathrm{InfoNCE}}=-\mathrm{log}\frac{\mathrm{exp}(\mathbf{q}\cdot\mathbf{ k}_{+}/t)}{\mathrm{exp}(\mathbf{q}\cdot\mathbf{k}_{+}/t)+\sum_{i=1}^{n}\mathrm{exp}(\mathbf{q} \cdot\mathbf{k}_{i}/t)} \tag{2}\]
where \(t\) is a temperature hyper-parameter [49], the query vector \(\mathbf{q}\) is the encoded vector representation, \(\mathbf{k}_{+}\) is a similar key vector that \(\mathbf{q}\) matches, and \(\mathbf{K}=\{\mathbf{k}_{1},...,\mathbf{k}_{n}\}\) is a set of dissimilar encoded vectors. InfoNCE tries to classify the query vector \(\mathbf{q}\) as its similar sample \(\mathbf{k}_{+}\) and pushes it away from the dissimilar samples in the set \(\mathbf{K}\). The similarity is measured by the dot product \((\cdot)\) between two vectors. To obtain the query representation \(\mathbf{q}\) and the similar key representation \(\mathbf{k}_{+}\), inspired by a recent advance [50] in image recognition, we adopt Momentum Contrast (MoCo) [50] for the encoding. Specifically, it introduces an extra encoder M' to get the key representation \(\mathbf{k}_{+}\), which can be expressed as follows:
\[\mathbf{q} =\mathrm{LayerNorm}(\mathrm{M}(X)[0]) \tag{3}\] \[\mathbf{k}_{+} =\mathrm{LayerNorm}(\mathrm{M^{\prime}}(X^{\prime})[0])\]
where \(X\) and \(X^{\prime}\) denote the original masked sequence and its mutated variant respectively. The index 0 denotes the position
of "[CLS]" in the sequence, which can be considered as the aggregated sequence representation. The encoder M' is the same as the encoder M, but during the learning phase, it utilizes a momentum to update its learnt weights while the encoder M uses the gradient descent:
\[\theta_{\mathrm{M^{\prime}}}\gets m\theta_{\mathrm{M^{\prime}}}+(1-m) \theta_{\mathrm{M}} \tag{4}\]
where \(m\in[0,1)\) is a momentum coefficient for scaling, \(\theta_{\mathrm{M^{\prime}}}\) and \(\theta_{\mathrm{M}}\) denote the learnt weights for model M\({}^{\prime}\) and M.
From Eq 3, we obtain the query representation \(\mathbf{q}\) and the key representation \(\mathbf{k}_{+}\). To compute the similarity with dissimilar vectors from \(\mathbf{K}\), MoCo maintains a "_dynamic_" _queue_ of length \(n\). This queue stores the dissimilar keys from previous batches. Specifically, during the learning phase, the current query \(\mathbf{q}\) calculates the similarity with all dissimilar vectors in this queue. Afterwards, the key vector \(\mathbf{k}_{+}\) is enqueued to _queue_, replacing the oldest entry, and serves as a dissimilar sample for the calculation of the next query. Hence, the queue is _dynamically updated_.
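Putting Eqs. (2)–(4) together, one learning step can be sketched as follows; here `M` and `M_prime` are assumed to return the LayerNorm'd "[CLS]" vectors of Eq. (3), and the queue handling is simplified.

```python
import torch
import torch.nn.functional as F

def contrastive_step(M, M_prime, X, X_prime, queue, m=0.999, t=0.07):
    with torch.no_grad():                       # momentum update of M' (Eq. 4)
        for p, p_m in zip(M.parameters(), M_prime.parameters()):
            p_m.mul_(m).add_(p, alpha=1.0 - m)
    q = M(X)                                    # query vectors, (B, d)
    with torch.no_grad():
        k = M_prime(X_prime)                    # key vectors from the momentum encoder
    l_pos = (q * k).sum(dim=-1, keepdim=True)   # dot-product similarity with positive keys
    l_neg = q @ queue.T                         # similarities with queued negatives, (B, n)
    logits = torch.cat([l_pos, l_neg], dim=1) / t
    loss = F.cross_entropy(logits, torch.zeros(q.size(0), dtype=torch.long))
    queue = torch.cat([queue[k.size(0):], k])   # enqueue new keys, drop the oldest
    return loss, queue
```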
Finally, we add both loss values with a scaling factor to pre-train ContraBERT; this process is expressed as follows:
\[\mathcal{L}_{\mathrm{Loss}}=\mathcal{L}_{\mathrm{MLM}}+w\mathcal{L}_{\mathrm{ InfoNCE}} \tag{5}\]
where \(w\) is the hyper-parameter to scale the weight for both pre-training tasks.
### _Fine-tuning_
Once ContraBERT is further pre-trained from the original pre-trained model, we can utilize it to obtain the vector representation for a program. Furthermore, we can also transfer it to different downstream tasks during the fine-tuning phase. These downstream tasks can be roughly categorized into three groups: (1) retrieval tasks (e.g., clone detection [39, 51], code search [11, 16]); (2) classification tasks (e.g., defect detection [5]); (3) generation tasks (e.g., code-to-code translation [52, 53], code refinement [54] and source code summarization [7]). Since the output space may differ from the pre-trained space, similar to CodeBERT and GraphCodeBERT, we add a task-specific module and then fine-tune the complete network on the labeled data. Specifically, for retrieval tasks, we further train ContraBERT on a labeled dataset; for classification tasks, we add a multi-layer perceptron (MLP) to predict the probability for each class; for generation tasks, we add a Transformer-based decoder to generate the target sequence.
## IV Experimental Setup
In experiments, we first evaluate the effectiveness of our approach in improving model robustness (RQ1). Then we plot the feature space learnt by different pre-trained models for visualization to confirm that the features are learnt better (RQ2). Finally, we conduct extensive experiments to demonstrate that the robustness-enhanced models provide significant improvements on downstream tasks (RQ3-RQ4). The detailed research questions are described as follows:
* **RQ1:** What is the performance of different augmentation operators in enhancing the robustness of the pre-trained model?
* **RQ2:** Can ContraBERT reshape the vector space learnt from the pre-trained models to obtain better vector representations?
* **RQ3:** Can ContraBERT outperform the original pre-trained models on different downstream tasks?
* **RQ4:** Are the defined pre-training tasks both effective in improving the downstream task performance?
### _Evaluation Tasks, Datasets and Baselines_
We select four downstream tasks for evaluation. They are clone detection [39, 43], code search [16], defect detection [5] and code translation [52, 53]. We briefly introduce each task as follows:
**Clone Detection (Code-Code Retrieval).** This task is to identify semantically equivalent programs from a set of distractors by measuring the semantic similarity between two programs. AI for clone detection calculates cosine similarity between two embedding vectors of programs produced by neural networks and selects the top-k most similar programs as the candidates.
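As a concrete illustration of this retrieval formulation, a minimal sketch (assuming an `embed` function that maps a program to its pooled vector representation) is:

```python
import torch
import torch.nn.functional as F

def top_k_clones(query_code, corpus, embed, k=5):
    q = F.normalize(embed(query_code), dim=-1)
    C = F.normalize(torch.stack([embed(c) for c in corpus]), dim=-1)
    scores = C @ q                     # cosine similarity with every candidate
    return scores.topk(min(k, len(corpus))).indices.tolist()
```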
Fig. 3: The model design for ContraBERT where the encoder M can be represented by the existing pre-trained models such as CodeBERT. The initial weights of the encoder M’ are the same as the encoder M while the weight update is different.
**Defect Detection (Code Classification).** It aims to detect whether a function contains defects that can be exploited to attack software systems. Since defects in a program are still difficult to detect effectively with traditional techniques, recent advanced works [55, 5, 17] propose to employ deep neural networks to learn program semantics to facilitate the detection. These AI-based techniques predict the probability of whether a function is vulnerable or not.
**Code Translation (Code-Code Generation).** It aims to translate a program in one programming language (e.g., Java) to a semantically equivalent one in another language (e.g., C#). Some previous works [53] analogize it to machine translation [42, 56] in NLP and employ LSTMs [20] and Transformer [42] for code translation.
**Code Search (Text-Code Retrieval).** It aims at returning the desired programs given a natural language query. Similar to clone detection, it measures the semantic relevance between queries and programs. The input to a deep code search system [16, 11] is a natural language query and the output is the set of programs that meet the query requirements. The cosine similarity is used to compute the semantic similarity between the vectors of a query and the programs.
In terms of the pre-training dataset, we use the released dataset provided by CodeSearchNet [16]; this dataset is also used by CodeBERT and GraphCodeBERT. We use bimodal NL-PL pairs for pre-training, which cover six programming languages including Java, Python, Ruby, Go, PHP and JavaScript. For fine-tuning, for the tasks of clone detection (POJ-104), defect detection, and code translation, we directly utilize the released task-specific datasets provided by CodeXGLUE [3]. For code search, we use the cleaned dataset provided by GraphCodeBERT [30] for evaluation. For each task, we utilize the official scripts to make a fair comparison. In addition, through the augmentation operators defined in Section III-B, we obtain a large amount of extra data \((C^{\prime},W^{\prime})\) used in ContraBERT as compared to the original pre-training data used in CodeBERT and GraphCodeBERT. Hence, we further add two baselines, CodeBERT_Intr and GraphCodeBERT_Intr, which utilize the original data as well as the extra data \((C^{\prime},W^{\prime})\) to pre-train CodeBERT and GraphCodeBERT with MLM for comparison.
### _Evaluation Metrics_
In ContraBERT, different metrics are used to evaluate downstream tasks. We follow the metrics that CodeXGLUE used for evaluation, and the details are listed below:
**MAP@R.** It is the abbreviation of mean average precision, which is used to evaluate the result of retrieving the R most similar samples in a set given a query. MAP@R is used for clone detection, where R is set to 499 for evaluation. **Acc.** It is the ratio of correct predictions (i.e., exact matches) in the test set. Acc is used for the evaluation of defect detection and code translation.
**BLEU-4.** It is widely used to evaluate the text similarity between the generated sequence and the ground truth in generation systems. We use BLEU-4 for code translation.
**MRR.** It is the abbreviation of Mean Reciprocal Rank, which is widely adopted in information retrieval systems [11, 57]. We used it to evaluate the performance of code search. Instead of retrieving 1,000 candidates like CodeBERT [29], we follow the settings of GraphCodeBERT [30] to retrieve the answer for each query from the whole test set.
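A small sketch of the MRR computation, assuming for each query a ranked candidate list and the index of the ground-truth answer:

```python
def mean_reciprocal_rank(rankings, gold):
    """rankings: list of ranked candidate indices per query; gold: true index per query."""
    return sum(1.0 / (r.index(g) + 1) for r, g in zip(rankings, gold)) / len(gold)
```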
### _Experimental Settings_
We adopt CodeBERT and GraphCodeBERT as our original models. We set the maximum lengths of the input sequence \(X\) and the mutated sequence \(X^{\prime}\) to 512, following CodeBERT. We use Adam for optimization with a batch size of 256 and a learning rate of 2e-4. At each iteration, \(X^{\prime}\) is constructed from \(C^{\prime}\) and \(W^{\prime}\), which are randomly picked from \(S_{C}\) and \(S_{W}\) respectively. Following He et al. [50], the momentum coefficient \(m\), temperature parameter \(t\) and _queue_ size are set to 0.999, 0.07 and 65536 respectively. We set the weight \(w\) in Eq. 5 to 0.5 to accelerate convergence. The model is trained on a DGX machine with 4 NVIDIA Tesla V100 GPUs with 32GB memory each. To alleviate the bias towards high-resource languages (i.e., the number of samples differs across programming languages), we follow GraphCodeBERT [30] and sample each batch from a single programming language according to a multinomial distribution with probabilities \(\{q_{i}\}_{i=1\dots N}\).
\[q_{i}=\frac{p_{i}^{\alpha}}{\sum_{j=1}^{N}p_{j}^{\alpha}}\quad\text{with}\quad p_{i}=\frac{n_{i}}{\sum_{k=1}^{N}n_{k}} \tag{6}\]
where \(n_{i}\) is the number of samples for the \(i\)-th programming language, \(N\) is the total number of languages and \(\alpha\) is set to 0.7. The model is trained for 50K steps to ensure each mutated sample is utilized during learning, and pre-training takes about 2 days to finish. In the fine-tuning phase, we directly utilize the default settings of CodeXGLUE [3] and GraphCodeBERT [30] for the downstream tasks. All experiments on downstream tasks are conducted on an Intel Xeon Silver 4214 Processor with 6 NVIDIA Quadro RTX 8000 GPUs with 48GB memory each.
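To make Eq. 6 concrete, the sketch below computes the per-language sampling probabilities; the sample counts are hypothetical placeholders, not the actual CodeSearchNet statistics.

```python
import numpy as np

def language_sampling_probs(counts, alpha=0.7):
    """Eq. (6): q_i = p_i^alpha / sum_j p_j^alpha, with p_i the empirical
    fraction of samples in language i. An exponent alpha < 1 upweights
    low-resource languages relative to their raw frequency."""
    counts = np.asarray(counts, dtype=float)
    p = counts / counts.sum()
    q = p ** alpha
    return q / q.sum()

# Hypothetical per-language counts (Java, Python, Ruby, Go, PHP, JavaScript).
print(language_sampling_probs([450_000, 430_000, 50_000, 160_000, 520_000, 120_000]))
```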
## V Experimental Results
### _RQ1: Robustness Enhancement._
We investigate the effect of the augmentation operators on model robustness by measuring prediction accuracy under adversarial attacks on clone detection (POJ-104). The main reason to choose clone detection is that it targets identifying semantically equivalent samples among distractors. Hence, although the variable renaming operator changes the text of a program, the original program semantics remain unchanged. For illustration, we statistically analyse the correctly predicted results under different numbers of renaming edits. The experiments are conducted in a zero-shot manner [58], meaning that no fine-tuning phase is involved and we directly use the pre-trained model for evaluation. Specifically, we remove one operator and keep the remaining operators in Section III-B to pre-train the model. For fairness, the other experimental settings are the same as for ContraBERT. We then take the test set (in total 12,000 samples) of clone detection (POJ-104) and randomly mutate the variables contained in the correctly predicted samples produced by the different pre-trained models, using 1 to 8 edits, to test prediction accuracy. The experimental results are shown in Table I, where N is the number of edits and Num is the total number of correctly predicted samples without any edits in the test set for each model. ContraBERT_C/G denotes the model initialized with CodeBERT or GraphCodeBERT respectively, and w/o \(*\) denotes that operator \(*\) was removed.
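For intuition, a toy sketch of the variable-renaming mutation is given below. Unlike the parser-based implementation presumably used in the experiments, this whole-word regex version is naive (it would also rewrite matching text inside comments or string literals).

```python
import re

def rename_variables(code, mapping):
    """Apply renaming edits: replace whole-word occurrences of each
    variable name, leaving program semantics unchanged."""
    for old, new in mapping.items():
        code = re.sub(rf"\b{re.escape(old)}\b", new, code)
    return code

src = "int add(int a, int b) { int sum = a + b; return sum; }"
print(rename_variables(src, {"sum": "var_0"}))  # one renaming edit (N=1)
```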
From Table I, we find that, in general, performance continues to drop as the number of edits increases. This is reasonable: with more edits, correct prediction becomes harder. We also observe that each augmentation operator is beneficial for improving model robustness against adversarial samples, and that incorporating all operators yields the best performance. This demonstrates the effectiveness of our designed PL-NL augmentation operators. In terms of NL augmentation operators, Delete/Switch/Copy contribute relatively less to robustness enhancement than Trans. Since Delete/Switch/Copy modify the original sequence only to a limited extent (i.e., only one or two words are changed), the text similarity between \(W\) and \(W^{\prime}\) is higher than what Trans produces. Hence, the data diversity introduced by Delete/Switch/Copy is limited, and the robustness improvement is not as pronounced as with Trans. In terms of PL augmentation operators, we find that the number of correctly predicted samples of ContraBERT_C/G w/o RV is the lowest (e.g., 8,665 and 9,042), and that accuracy drops by a large margin as the number of edits increases. This indicates that the RV operator plays a critical role against adversarial attacks and that removing it harms performance significantly. In addition, when the RFN operator is removed, ContraBERT retains higher accuracy than when the other PL operators (i.e., RV, IDC, RO and SP) are removed, which indicates that RFN contributes less. This is because the program \(C^{\prime}\) generated by RFN (i.e., renaming the function name) is more similar to the original program \(C\) than the programs produced by the other PL augmentation operators.
\(\blacktriangleleft\)**RQ1**\(\blacktriangleright\) Each operator in the designed PL-NL augmentation is effective in improving model robustness, and when they are incorporated together, the robustness of pre-trained models is further enhanced.
### _RQ2: Visualization for Code Embeddings._
We visualize the code representation space learnt by different pre-trained models to confirm that the contrastive pre-training task reshapes the learnt vector space and makes the model more robust. Specifically, we use the clone detection (POJ-104) task provided by CodeXGLUE [3] for evaluation. The main reason for selecting clone detection is that it allows us to intuitively observe and validate the similarity of code representations for semantically equivalent programs. The dataset consists of 104 programming problems, where each problem has 500 semantically equivalent programs with different implementations. In theory, the program semantics for one problem should be the same. Hence, the code vectors (i.e., representations) of the programs for one problem produced by a pre-trained model should be closer to each other than to the code vectors of programs for other problems. We randomly select 5 different problems with 100 samples each and feed them to CodeBERT, GraphCodeBERT, ContraBERT_C and ContraBERT_G for visualization, where C/G denotes that ContraBERT is initialized with CodeBERT or GraphCodeBERT respectively. We utilize the vector of the token "[CLS]" as the program representation. We further utilize T-SNE [59] to reduce the vectors to a two-dimensional space for visualization. Similar to Section V-A, this process is also zero-shot [58], which helps us validate the space learnt by the different pre-training techniques.
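The visualization pipeline can be sketched as follows. The HuggingFace checkpoint name, the toy programs, and the T-SNE hyper-parameters are our choices for illustration, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.manifold import TSNE

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base").eval()

def cls_embedding(code):
    """Program representation: the hidden state of the "[CLS]" token."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden[0, 0]                             # "[CLS]" sits at position 0

programs = ["int add(int a,int b){return a+b;}",
            "int sum(int x,int y){return x+y;}",
            "int mul(int a,int b){return a*b;}"]
vecs = torch.stack([cls_embedding(p) for p in programs]).numpy()
points = TSNE(n_components=2, perplexity=2).fit_transform(vecs)  # 2-D for plotting
```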
As shown in Fig. 4, the vectors produced by GraphCodeBERT (see Fig. 4b) have some ability to group programs by problem compared with CodeBERT (see Fig. 4a), which indicates that incorporating program structures such as the data flow graph into pre-training helps the model learn program semantics. However, we also find that the improvement is limited and the boundaries in Fig. 4b are not clear; some data points are scattered, especially in the upper-right part of Fig. 4b. In contrast, the visualizations for ContraBERT are shown in Fig. 4c and Fig. 4d. We see that the programs for the same problem aggregate closely into a cluster and different clusters have much clearer boundaries. This indicates that ContraBERT is more powerful than CodeBERT/GraphCodeBERT at grouping semantically equivalent data and pushing away dissimilar data. We attribute this ability to the defined PL-NL augmentation operators, which capture the essence of programs. Furthermore, ContraBERT_G (see Fig. 4d) has better clustering performance than ContraBERT_C (see Fig. 4c). For example, in Fig. 4c the label 0 has
\begin{table}
\begin{tabular}{l c c c c c|l c c c c c} \hline \hline Model & Num & N=0 & N=1 & N=4 & N=8 & Model & Num & N=0 & N=1 & N=4 & N=8 \\ & & Acc & Acc & Acc & Acc & & & Acc & Acc & Acc & Acc \\ \hline ContraBERT\_C w/o RFN & 10,087 & 1 & 0.977 & 0.868 & 0.654 & ContraBERT\_G w/o RFN & 10,375 & 1 & 0.975 & 0.873 & 0.634 \\ ContraBERT\_C w/o RV & 8,665 & 1 & 0.932 & 0.597 & 0.291 & ContraBERT\_G w/o RV & 9,042 & 1 & 0.955 & 0.657 & 0.309 \\ ContraBERT\_C w/o IDC & 9,997 & 1 & 0.969 & 0.865 & 0.618 & ContraBERT\_G w/o IDC & 10,530 & 1 & 0.963 & 0.862 & 0.612 \\ ContraBERT\_C w/o RO & 9,923 & 1 & 0.963 & 0.857 & 0.619 & ContraBERT\_G w/o RO & 10,509 & 1 & 0.968 & 0.868 & 0.617 \\ ContraBERT\_C w/o SP & 10,604 & 1 & 0.959 & 0.849 & 0.616 & ContraBERT\_G w/o SP & 11,140 & 1 & 0.969 & 0.860 & 0.613 \\ \hline ContraBERT\_C w/o Trans & 9,536 & 1 & 0.971 & 0.856 & 0.621 & ContraBERT\_G w/o Trans & 10,360 & 1 & 0.973 & 0.859 & 0.617 \\ ContraBERT\_C w/o Delete & 10,199 & 1 & 0.978 & 0.871 & 0.639 & ContraBERT\_G w/o Delete & 10,376 & 1 & 0.981 & 0.878 & 0.643 \\ ContraBERT\_C w/o Switch & 9,809 & 1 & 0.975 & 0.877 & 0.637 & ContraBERT\_G w/o Switch & 10,457 & 1 & 0.978 & 0.876 & 0.647 \\ ContraBERT\_C w/o Copy & 10,749 & 1 & 0.977 & 0.874 & 0.635 & ContraBERT\_G w/o Copy & 10,859 & 1 & 0.981 & 0.880 & 0.641 \\ \hline ContraBERT\_C & 10,463 & 1 & **0.981** & **0.882** & **0.649** & ContraBERT\_G & 10,565 & 1 & **0.985** & **0.888** & **0.654** \\ \hline \hline \end{tabular}
\end{table} TABLE I: Results of ContraBERT against the variable renaming operator in a zero-shot manner.
two distant clusters, while in Fig. 4d it has only one. These improvements stem from the original model used, as GraphCodeBERT is superior to CodeBERT. In addition, we compute the distortion distance1 [60] of the selected samples for these models to strengthen the conclusion. The distances of CodeBERT, GraphCodeBERT, ContraBERT_C and ContraBERT_G are 0.333, 0.212, 0.202, and 0.194 respectively. We find that ContraBERT has a lower distortion distance than CodeBERT and GraphCodeBERT, which demonstrates that its clusters are more compact.
Footnote 1: The distortion distance refers to the sum of the squared distances of each sample to their assigned cluster centre.
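A minimal sketch of the distortion distance defined in footnote 1, applied to generic points and cluster labels (the paper does not state whether it is computed on the 2-D T-SNE points or on normalized vectors):

```python
import numpy as np

def distortion_distance(points, labels):
    """Sum of squared distances of each sample to its assigned cluster
    centre; lower values indicate more compact clusters."""
    total = 0.0
    for lab in np.unique(labels):
        cluster = points[labels == lab]
        centre = cluster.mean(axis=0)
        total += float(((cluster - centre) ** 2).sum())
    return total
```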
\(\blacktriangleleft\)**RQ2**\(\blacktriangleright\) By using the contrastive pre-training task to learn augmented variants constructed by a set of PL-NL operators, ContraBERT is able to group semantically equivalent samples and push away dissimilar samples, thus learning better vector representations.
### _RQ3: Performance of ContraBERT on Downstream Tasks_.
We conduct extensive experiments on four downstream tasks to evaluate the performance of ContraBERT compared to the original CodeBERT and GraphCodeBERT. We further add two baselines (i.e., CodeBERT_Intr and GraphCodeBERT_Intr), which are pre-trained on the original data as well as the augmented variants; these baselines ensure that the scale of data used is consistent with ContraBERT for a fair comparison. The results for clone/defect detection are shown in Table II. Table III presents the results for code translation, and Table IV presents the results for code search, where the rightmost "Overall" column is the average value over the six programming languages. Because the clone detection and defect detection values for GraphCodeBERT are not reported in the original paper [30], we use the official code to reproduce them and report these values. The other values for CodeBERT and GraphCodeBERT are taken directly from CodeXGLUE [3] and Guo et al. [30].
From Table II and Table III, we find that ContraBERT_C/G outperforms the original pre-trained models CodeBERT and GraphCodeBERT on clone detection (POJ-104), defect detection and code translation. However, the absolute gains on code search (see Table IV) are minor. We attribute the improvements to the robustness-enhanced models providing better performance on downstream tasks. As for the minor improvements on code search, we ascribe them to the difficulty of this task: code search requires learning the semantic mapping between query and program, and the semantic gap between programs and natural languages is huge, which makes it difficult for the model to achieve significant improvements. Overall, considering the scale of the code search test set, which contains 52,561 samples across six programming languages, the improvements are still promising. Furthermore, we find that CodeBERT_Intr and GraphCodeBERT_Intr perform better than CodeBERT and GraphCodeBERT on these tasks. This is reasonable, since extra data is used to further pre-train CodeBERT and GraphCodeBERT. However, the performance of CodeBERT_Intr and GraphCodeBERT_Intr is still worse than
\begin{table}
\begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Model} & Clone Detection & Defect Detection \\ & MAP@R & Acc \\ \hline CodeBERT & 84.29 & 62.08 \\ CodeBERT\_Intr & 86.34 & 62.41 \\ ContraBERT\_C (MLM) & 86.21 & 62.25 \\ ContraBERT\_C (Contra) & 81.44 & 62.22 \\ ContraBERT\_C & **90.46** & **64.17** \\ \hline GraphCodeBERT & 85.16 & 62.85 \\ GraphCodeBERT\_Intr & 87.60 & 62.26 \\ ContraBERT\_G (MLM) & 87.30 & 62.01 \\ ContraBERT\_G (Contra) & 85.63 & 58.82 \\ ContraBERT\_G & 90.06 & 63.32 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Results on clone detection and defect detection.
Fig. 4: Visualization of the vector representations of 100 programs for each of 5 problems, randomly picked from clone detection (POJ-104). The vectors are produced by CodeBERT, GraphCodeBERT, ContraBERT_C and ContraBERT_G. Points with different colours indicate the different problems the programs belong to.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{4}{c}{Code Translation} \\ \cline{2-5} & \multicolumn{2}{c|}{Java \(\rightarrow\) C\#} & \multicolumn{2}{c}{C\# \(\rightarrow\) Java} \\ & BLEU-4 & Acc & BLEU-4 & Acc \\ \hline CodeBERT & 79.92 & 59.00 & 72.14 & 58.00 \\ CodeBERT\_Intr & 79.93 & 59.20 & 75.71 & 58.60 \\ ContraBERT\_C (MLM) & 79.90 & 59.10 & 75.03 & 58.10 \\ ContraBERT\_C (Contra) & 51.99 & 34.60 & 46.75 & 38.30 \\ ContraBERT\_C & 79.95 & 59.00 & 75.92 & 59.60 \\ \hline GraphCodeBERT & 80.58 & 59.40 & 72.64 & 58.80 \\ GraphCodeBERT\_Intr & 80.61 & 59.60 & 75.50 & 60.10 \\ ContraBERT\_G (MLM) & 80.36 & 59.40 & 75.10 & 60.00 \\ ContraBERT\_G (Contra) & 55.48 & 39.40 & 48.92 & 39.00 \\ ContraBERT\_G & **80.78** & **59.90** & **76.24** & **60.50** \\ \hline \hline \end{tabular}
\end{table} TABLE III: Results on code translation.
ContraBERT_C/G. This demonstrates that even with the same scale of data, ContraBERT_C/G is still better than CodeBERT and GraphCodeBERT, which further strengthens our conclusion that the improvements are brought by our proposed approach rather than by the increased scale of the data.
\(\blacktriangleleft\)**RQ3**\(\blacktriangleright\) ContraBERT comprehensively improves the performance of the original CodeBERT and GraphCodeBERT on four downstream tasks. We attribute the improvements to the enhanced robustness of the model, which yields better performance on these tasks.
### _RQ4: Ablation Study for Pre-training Tasks._
ContraBERT utilizes two pre-training tasks: the first is MLM, which learns token representations, and the second is the contrastive pre-training task, which improves model robustness via the InfoNCE loss function. We further investigate the impact of each pre-training strategy on downstream tasks. The experimental results are shown in Table II, Table III and Table IV respectively, where the rows marked MLM or Contra denote the results obtained by using only MLM or only the contrastive pre-training task. For a fair comparison, the other settings are the same as when combining both pre-training tasks.
We observe that using only the contrastive pre-training task performs worse than using only MLM on these downstream tasks, especially on code translation. This is expected, since the two pre-training tasks excel in different aspects. Specifically, MLM randomly masks some tokens in a sequence to help the model learn token representations. The learnt token representations are important for generation tasks such as code translation that must produce a target sequence, so MLM helps the model achieve good performance on these tasks. The contrastive pre-training task, in contrast, groups semantically equivalent samples while pushing away dissimilar samples through the InfoNCE loss function, which enhances model robustness. Furthermore, when combining both pre-training tasks, our model achieves better performance than with either pre-training task alone, indicating that ContraBERT is robust and at the same time able to achieve better performance on the downstream tasks.
\(\blacktriangleleft\)**RQ4**\(\blacktriangleright\) Masked language modeling (MLM) and the contrastive pre-training task play different roles in ContraBERT. When combined, the model achieves higher performance on the different downstream tasks.
## VI Discussion
In this section, we first discuss the implications of our work, then the limitations, and finally the threats to validity.
### _Implications_
In this work, we find that widely used pre-trained code models such as CodeBERT [29] and GraphCodeBERT [30] are not robust to adversarial attacks. Based on this finding, we further propose a contrastive learning-based approach for improvement. We believe this finding will inspire follow-up researchers, when designing new model architectures for code, to consider properties such as robustness and generalization rather than focusing only on the accuracy of the model on different tasks.
### _Limitations_
From our experimental results, we find that the robustness of the model is enhanced significantly compared with the original models. We attribute this to the contrastive pre-training task learning over semantically equivalent samples. However, these robustness-enhanced models achieve only slight improvements on the downstream task of code search. Since this task requires learning the semantic mapping between a query and its corresponding code, while the designed augmentation operators only modify the code or the query itself, the correlations between them are not captured, which limits the improvements. For code search, a possible solution to further improve performance is to build token-level relations between PL and NL for the augmented variants; however, this involves intensive work to analyse the relations between a program and its natural language comment. We will explore this in future work.
Another limitation concerns the designed augmentation operators for PL and NL. We design only basic operators to transform programs and comments. Although these operators are straightforward, their effectiveness in improving model robustness is confirmed. It would be intriguing to explore more complex augmentation strategies, such as applying multiple operators to a single sample to construct more complex augmented variants.
\begin{table}
\begin{tabular}{l c c c c c c c} \hline Model & Ruby & Javascript & Go & Python & Java & PHP & Overall \\ \hline CodeBERT & 0.679 & 0.620 & 0.882 & 0.672 & 0.676 & 0.628 & 0.693 \\ CodeBERT\_Intr & 0.686 & 0.623 & 0.883 & 0.676 & 0.678 & 0.630 & 0.696 \\ ContraBERT\_C (MLM) & 0.675 & 0.621 & 0.888 & 0.670 & 0.675 & 0.631 & 0.693 \\ ContraBERT\_C (Contra) & 0.593 & 0.532 & 0.864 & 0.622 & 0.618 & 0.584 & 0.636 \\ ContraBERT\_C & 0.688 & 0.626 & 0.892 & 0.678 & 0.685 & 0.634 & 0.701 \\ \hline GraphCodeBERT & 0.703 & 0.644 & 0.897 & 0.692 & 0.691 & **0.649** & 0.713 \\ GraphCodeBERT\_Intr & 0.709 & 0.647 & 0.894 & 0.692 & 0.693 & 0.647 & 0.714 \\ ContraBERT\_G (MLM) & 0.692 & 0.642 & 0.897 & 0.690 & 0.690 & 0.647 & 0.710 \\ ContraBERT\_G (Contra) & 0.626 & 0.582 & 0.882 & 0.655 & 0.659 & 0.613 & 0.670 \\ ContraBERT\_G & **0.723** & **0.656** & **0.899** & **0.695** & **0.655** & 0.648 & **0.719** \\ \hline \end{tabular}
\end{table} TABLE IV: Results on code search where the evaluation metric is MRR.
### _Threats to Validity_
**Internal validity:** The first threat is the hyper-parameter tuning for pre-training. More hyper-parameters need to be tuned than for CodeBERT or GraphCodeBERT, for example the temperature \(t\), the momentum coefficient \(m\) and the _queue_ size. We follow the original settings from MoCo [50], and these parameters may not be optimal as they were designed for the task of image classification in computer vision. Because the pre-training process is time- and resource-consuming (we need nearly 2 days to complete one training run), we skip the hyper-parameter tuning process. However, we also find that even with the original parameters used in MoCo [50], ContraBERT still achieves higher performance than the original models. The second threat is that we use the same train-validation-test split that CodeXGLUE [3] and GraphCodeBERT [30] used. Adjusting the data split ratio or improving the training data quality may produce better results; however, we do not take these strategies, to ensure a fair evaluation. The third threat is that we only use clone detection (POJ-104) to verify that the robustness of the model is enhanced in Fig. 1 and Section V-A; we also plot the learnt space in Section V-B. The reason to select clone detection is that it aims at identifying semantically equivalent programs among distractors, which is suitable for this evaluation.
**External validity:** Some other pre-trained models for code, such as CuBERT [28], are not included in our evaluation. CuBERT was pre-trained on a large Python corpus with MLM. Our approach is orthogonal to these pre-trained models: we only need to replace the encoder M with another existing pre-trained model for evaluation.
## VII Related Work
In this section, we briefly introduce related work on contrastive learning, pre-trained models for "big code", and the adversarial robustness of models of code.
### _Contrastive Learning_
Contrastive learning learns representations by minimizing the distance between similar samples while maximizing the distance between dissimilar samples, so that similar samples end up close to each other and dissimilar samples far apart. Over the past few years, it has attracted increasing attention, with many successful applications in computer vision [50, 61, 62, 63, 64] and natural language processing [65, 66, 67, 68]. Recently, some works [44, 69, 45] have utilized contrastive learning for different software engineering tasks. For example, Bui et al. [44] proposed Corder, a contrastive learning approach for code-to-code retrieval, text-to-code retrieval and code-to-text summarization. VarCLR [69] aimed to learn semantic representations of variable names based on contrastive learning for downstream tasks such as variable similarity scoring and variable spelling error correction. ContraCode [45] generated variants with a source-to-source compiler on JavaScript and further combined these mutated samples with contrastive learning for the tasks of clone detection, type inference and code summarization. Compared with these existing works, which only focus on designing mutated variants for code, we first illustrate that the widely used CodeBERT and GraphCodeBERT are vulnerable to adversarial examples. Then we design a set of simple and complex augmentation operators on both programs and natural language sequences to obtain different variants. By using contrastive learning over these semantically equivalent variants, the robustness of existing pre-trained models is enhanced. We further confirm that the robustness-enhanced models provide improvements on different downstream tasks.
### _Pre-trained Models for "Big Code"_
Recently, pre-trained models have been widely applied in the "big code" era [28, 29, 30, 3, 31, 32, 33, 34, 35, 45, 70]. For example, Kanade et al. [28] pre-trained CuBERT, based on BERT [37], on a massive corpus of Python programs from GitHub and then fine-tuned it for classification tasks such as variable misuse classification. Feng et al. [29] proposed CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL) that learns program representations to support code search and source code summarization. GraphCodeBERT [30] combines the variable data-flow graph of a program with the code sequence and the natural language sequence to enhance CodeBERT. CodeXGLUE [3] also utilized CodeBERT and CodeGPT [71] to release a benchmark including several software engineering tasks. Liu et al. [70] proposed CommitBART to support commit-related downstream tasks. Compared with these existing pre-trained models, we illustrate that they are not robust and further propose ContraBERT to enhance model robustness.
### _Adversarial Robustness on Models of Code_
Research on adversarial robustness analysis for models of code has attracted attention [72, 73, 74, 75, 40]. Generally, these works can be categorized into two groups, white-box and black-box, where white-box approaches provide some explanation of the decision-making while black-box approaches mainly focus on statistical evaluation. In terms of white-box works, Yefet et al. [73] proposed DAMP, which selects semantic-preserving perturbations by deriving the output distribution of the model with respect to the input. Srikant et al. [72] provided a general formulation of a perturbed program that models site locations and perturbation choices for each location. Based on this formulation, they further proposed a set of first-order optimization algorithms for solving it. In terms of black-box works, HMH [76] generated adversarial examples of source code by iterative identifier renaming and evaluated them on a source code functionality classification task. The latest work by Yang et al. [40] proposed ALERT, which transforms the inputs while preserving the operational semantics of the original inputs by replacing variables with substitutes. Their
experiments are conducted on the pre-trained models CodeBERT and GraphCodeBERT. Compared with ALERT, which only designed the variable renaming operation, in this paper we further design eight augmentation operators over PL-NL pairs in addition to variable renaming. Furthermore, ALERT does not involve a newly designed model to address the robustness weakness. In contrast, we propose a general network architecture that uses contrastive learning to enhance model robustness. Extensive experiments have confirmed that our approach enhances the robustness of existing pre-trained models. We also demonstrate that these robustness-enhanced models provide improvements on different downstream tasks.
## VIII Conclusion
In this paper, we observe that state-of-the-art pre-trained models such as CodeBERT and GraphCodeBERT are not robust to adversarial attacks, and a simple mutation operator (e.g., variable renaming) degrades their performance significantly. To address this problem, we propose ContraBERT, a contrastive learning-based framework that enhances the robustness of existing pre-trained models by designing nine kinds of PL-NL augmentation operators to group semantically equivalent variants. Through extensive experiments, we have confirmed that the models' robustness is enhanced. Furthermore, we also illustrate that these robustness-enhanced models provide improvements on four downstream tasks.
## IX Acknowledgments
We express our sincere gratitude to Mr Daya Guo from Sun Yat-sen University for his assistance. This research is partially supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG2-RP-2020-019), the National Research Foundation, Prime Minister's Office, Singapore under its National Cybersecurity R&D Program (Award No. NRF2018NCR-NCR005-0001), NRF Investigatorship NRF-NRFI06-2020-0001, the National Research Foundation through its National Satellite of Excellence in Trustworthy Software Systems (NSOE-TSS) project under the National Cybersecurity R&D (NCR) Grant award no. NRF2018NCR-NSOE003-0001, and the Ministry of Education, Singapore under its Academic Research Tier 3 (MOET32020-0004). IIE authors are supported in part by NSFC (61902395) and the Beijing Nova Program. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the Ministry of Education, Singapore.
|
2310.10262 | Enhancing Interpretability using Human Similarity Judgements to Prune
Word Embeddings | Interpretability methods in NLP aim to provide insights into the semantics
underlying specific system architectures. Focusing on word embeddings, we
present a supervised-learning method that, for a given domain (e.g., sports,
professions), identifies a subset of model features that strongly improve
prediction of human similarity judgments. We show this method keeps only 20-40%
of the original embeddings, for 8 independent semantic domains, and that it
retains different feature sets across domains. We then present two approaches
for interpreting the semantics of the retained features. The first obtains the
scores of the domain words (co-hyponyms) on the first principal component of
the retained embeddings, and extracts terms whose co-occurrence with the
co-hyponyms tracks these scores' profile. This analysis reveals that humans
differentiate e.g. sports based on how gender-inclusive and international they
are. The second approach uses the retained sets as variables in a probing task
that predicts values along 65 semantically annotated dimensions for a dataset
of 535 words. The features retained for professions are best at predicting
cognitive, emotional and social dimensions, whereas features retained for
fruits or vegetables best predict the gustation (taste) dimension. We discuss
implications for alignment between AI systems and human knowledge. | Natalia Flechas Manrique, Wanqian Bao, Aurelie Herbelot, Uri Hasson | 2023-10-16T10:38:49Z | http://arxiv.org/abs/2310.10262v1 | # Enhancing Interpretability using Human Similarity Judgements to Prune Word Embeddings
###### Abstract
Interpretability methods in NLP aim to provide insights into the semantics underlying specific system architectures. Focusing on word embeddings, we present a supervised-learning method that, for a given domain (e.g., sports, professions), identifies a subset of model features that strongly improve prediction of human similarity judgments. We show this method keeps only 20-40% of the original embeddings, for 8 independent semantic domains, and that it retains different feature sets across domains. We then present two approaches for interpreting the semantics of the retained features. The first obtains the scores of the domain words (co-hyponyms) on the first principal component of the retained embeddings, and extracts terms whose co-occurrence with the co-hyponyms tracks these scores' profile. This analysis reveals that humans differentiate e.g. sports based on how gender-inclusive and international they are. The second approach uses the retained sets as variables in a probing task that predicts values along 65 semantically annotated dimensions for a dataset of 535 words. The features retained for professions are best at predicting cognitive, emotional and social dimensions, whereas features retained for fruits or vegetables best predict the gustation (taste) dimension. We discuss implications for alignment between AI systems and human knowledge.
## 1 Introduction
The popularity of Large Language Models (LLMs) such as ChatGPT1 or BLOOM (Scao et al., 2022) has recently prompted an active area of research around the notion of 'alignment', i.e. the ability of NLP models to meet human expectations (see Wang et al., 2023 for a survey). While techniques such as Supervised Fine-Tuning (SFT) and Reinforcement Learning with Human Feedback (RLHF) have become de facto standards to steer models towards human behaviour, the structural differences that make models in need of alignment are not fully elucidated. Why is it that NLP systems organise their knowledge the way they do? And which operations might increase their similarity to human cognition? These questions remain unsolved, not only for LLMs but also for simple word embedding models such as GloVe (Pennington et al., 2014) or Word2Vec (Mikolov et al., 2013).
Footnote 1: [https://openai.com/chatgpt](https://openai.com/chatgpt)
Alongside the question of alignment, a range of model compression techniques have recently been proposed, including pruning, distillation and quantization (Xu and McAuley, 2023), to increase system efficiency at runtime. Many distilled models perform on a par with their larger counterparts (Sanh et al., 2019; Jiao et al., 2020), prompting questions about the nature of semantic encoding in both the original and the compressed architecture. Some investigations have focused on the increased (or decreased) fairness of distilled models, in particular their ability to faithfully reproduce reality, with inconclusive results so far (Ramesh et al., 2023). Others have concentrated instead on the correlation between compression and a model's ability to reproduce human behaviour itself (Tarigopula et al., 2021). Most importantly, there is evidence that pruned networks develop different internal representations (Ansuini et al., 2020).
In this paper, we bring together the question of alignment and the methodological toolbox given by pruning techniques. The general aim of the following experiments is to understand the semantics of non-contextual word embeddings (GloVe) by evaluating how those embeddings can be fine-tuned in a way that supports explainability and best predicts Human Similarity Judgments (HSJs) for words in specific categories (i.e., co-hyponyms).
The guiding assumption is that for co-hyponyms that belong to a basic-level category (Rosch et al., 1976), learning a small subset of relevant features can markedly improve the prediction of HSJs as compared to the use of all features. For example, considering a set of co-hyponyms that all belong to the sports category, it is expected that improved prediction of HSJs between those words can be achieved by identifying a humanly-salient low-dimensional subspace that encodes domain-specific discriminatory properties such as _ball game_ or _played in a team_.
We have three aims in this study. Aim 1 is to determine whether pruning improves prediction of HSJs for word-pairs in a study consisting of 8 internal replications (independent datasets). This preliminary step is a necessary prerequisite to support our two subsequent goals, which are focused solely on explainability. Aim 2 focuses on quantifying the position of each co-hyponym in the pruned latent space, and then querying the entire vocabulary to identify words whose co-occurrence with co-hyponyms tracks those positions. This provides a data-driven description of the latent dimensions underlying the pruned feature-set. Aim 3 is to identify the semantics of the pruned sets via a probing task. Specifically, we evaluate how well these (sub)sets of features predict a set of human annotations for a set of 535 pre-defined words for which annotations on interpretable features have been collected (Binder et al., 2016).
Our main contribution is in showing that pruning supervised by HSJ is a transparent and effective method to study which human-relevant semantics are contained in word embeddings.
## 2 Related Work
Our work builds upon existing research that establishes a connection between human comparison processes and image representations created by deep neural networks (DNNs). Several prior studies in the area of computer vision have used feature-reweighting (Kaniuth and Hebart, 2021; Peterson et al., 2017) or feature-pruning (Tarigopula et al., 2021) to improve alignment between human similarity judgments (HSJ) and DNN-generated image representations. Extending the approach to language, Richie and Bhatia (2020) applied a feature reweighting to optimize the prediction of HSJ from word-pair embeddings. Following Peterson et al. (2017) they modeled word-pair similarity as a weighted dot product, using regression, and solving for as many weights as features. While interesting, this procedure has various issues. First, the weights are proportional to feature-products rather than to feature value. This makes the method less interpretable. Second, the method operationalizes the assumption that the DNN has learned a meaningful basis set of features that is applicable to all domains, and that the features just need saliency adjustment per domain. Third, the data do not lend themselves to downstream analyses as they do not select a subset of features or directly reweight them. In contrast, the view we present for adapting the features to the domain is that there exists a subspace of meaning/features in the DNN whose saliency is already properly calibrated, and what needs to be done is just to identify the relevant/irrelevant features for the domain. Pruning and reweighting are therefore two different approaches to understanding latent content.
Prior work has also used pruning to predict HSJ. Tarigopula et al. (2021) showed that when applied to image embeddings extracted from the penultimate layer of VGG-19, pruning markedly improves prediction of out-of-sample similarity judgments while maintaining around 20% or fewer of the layer's features. However, image and word embeddings are derived in different ways, and it is unclear whether the findings generalize to the text domain.
The other work relevant to the current effort is on interpretation of word embeddings. Chersoni et al. (2021) and Utsumi (2020) present a probing method for studying the semantic dimensions latent in word embeddings by constructing a mapping between embedding vectors and human-rated semantic features. We do the same, but instead of using the entire feature set, use the feature-subsets produced by a supervised-pruning procedure. This is a fundamental departure from prior work, as it ultimately probes for semantic features in pruned embedding subsets and in this way highlights the latent dimensions that are important to humans.
Finally, our effort links up to recent work on alignment, which focuses on elucidating differences between computational systems and human behaviour: recent examples are Hu et al. (2023), who compare pragmatic phenomena in humans and Large Language Models, or Bao et al. (2023), who attempt to reproduce human word acquisition by implementing conceptual attribute comparison in their model. Of relevance to our approach, Park et al. (2023) remove spurious correlations between network features via pruning, with the aim to reduce machine-specific biases in the learned model. Their work is however focused on images rather
than text.
## 3 Datasets
### Human similarity judgements
Our HSJ dataset was made available by Richie and Bhatia (2020) via OSF2. The data covers words in eight categories: furniture, clothing, birds, vegetables, sports, vehicles, fruits and professions. Human similarity ratings were obtained for word pairs within but not across categories, with each category containing around 20-30 words. Participants (\(N=365\)) were recruited from the US population using an online recruitment and data collection system (mean age = 33 years, 55% female). Each participant was randomly assigned to one of the eight categories. For most categories, participants only completed some of all possible pairwise similarity judgements, which only made group-level analyses possible. Consequently, judgements were averaged across participants and organized in similarity matrices, which we will refer to as representation-similarity matrices (RSMs).
Footnote 2: [https://osf.io/d7fm2/view_only=c5ba5d34a5e34ff397ao562c07aadc5](https://osf.io/d7fm2/view_only=c5ba5d34a5e34ff397ao562c07aadc5)
### Word embeddings
For all words in the eight categories, we collected 300-dimensional GloVe embeddings (Global Vectors for Word Representation, Pennington et al., 2014)3. These embeddings are referred to as _GloVe 6b Giga + Wiki_ in Richie and Bhatia (2020), because they were obtained by training on the Gigaword Corpus and Wikipedia, which have a combined size of 6B tokens. For each of the eight categories, we arranged embeddings into matrices, with words as rows and features as columns. To operationalize word-pair similarity, we computed Pearson's correlations across all embeddings within each category and organized them into RSMs.
Footnote 3: [https://nlp.stanford.edu/projects/GloVe/](https://nlp.stanford.edu/projects/GloVe/)
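As a sketch of the RSM construction (with random vectors standing in for the actual GloVe embeddings, which are our placeholder, not the study data):

```python
import numpy as np

def build_rsm(embeddings):
    """Pairwise Pearson correlations between rows (words) of an
    (n_words, n_features) embedding matrix -> (n_words, n_words) RSM."""
    return np.corrcoef(embeddings)  # np.corrcoef correlates rows by default

rng = np.random.default_rng(0)
fake_glove = rng.normal(size=(20, 300))   # stand-in for 20 category words
rsm = build_rsm(fake_glove)               # 20 x 20 similarity matrix
```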
## 4 Algorithms
### Pruning Algorithm and Cross Validation
Algorithm 1 completely describes our pruning algorithm, which is a sequential feature selection procedure. We briefly summarize its main elements. The objective of the algorithm is to identify a reduced subset of features, so that when that subset is used to produce the \(Object\times Object\) similarity matrix, \(SM_{DNNRED}\), the resulting matrix produces a maximal fit to the human similarity judgments. The fit between the two similarity matrices is computed using Spearman's rank correlation coefficient (\(\rho\)). The effectiveness of the pruning solution is evaluated by applying the set of features found for our training data to an unseen test data sample, as explained below.
For both word embeddings and human judgment similarity matrices we create the test and train partitions on a given fold so that the test partition consists of all pairwise similarity ratings associated with a target word \(i\). This means that if for \(N\) words the number of unique pairwise judgments is \((N^{2}-N)/2\), we construct the test partition to consist of the \(N-1\) pairwise similarity judgments associated with the left-out \(i\)th word. The test partition's size is therefore \(N-1\), and the train partition consists of all other pairwise judgments.
As a baseline value, we use the average second-order isomorphism measure (2OI) for the test partition of each fold before pruning, as reported in Table 1. This was defined as the Spearman's \(\rho\) between two sets of similarity values: 1) the \(N-1\) similarity judgments associated with the target word as estimated from the full, non-pruned word embeddings, and 2) the ground-truth judgments as provided by humans. The 2OI measure was chosen to make the results comparable with the work of Richie and Bhatia (2020).
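Algorithm 1 itself is not reproduced here; the following is a minimal backward-elimination sketch consistent with the description above. The actual algorithm may differ in its search order and stopping rule, and the helper names are ours.

```python
import numpy as np
from scipy.stats import spearmanr

def pairs(mat):
    """Vectorize the upper triangle: the (N^2 - N)/2 unique word pairs."""
    return mat[np.triu_indices_from(mat, k=1)]

def fit(feats, emb, human_rsm):
    """Spearman rho between human judgments and model similarities
    computed from a feature subset."""
    return spearmanr(pairs(np.corrcoef(emb[:, feats])), pairs(human_rsm))[0]

def greedy_prune(emb, human_rsm):
    """Repeatedly drop the single feature whose removal most improves
    the fit; stop when no removal helps."""
    kept = list(range(emb.shape[1]))
    best = fit(kept, emb, human_rsm)
    while len(kept) > 2:
        scores = [(fit([f for f in kept if f != g], emb, human_rsm), g)
                  for g in kept]
        top, worst = max(scores)
        if top <= best:
            break
        best = top
        kept.remove(worst)
    return kept, best
```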
### Feature set interpretation
After applying the pruning algorithm, we perform a Principal Components Analysis (PCA) and identify those vocabulary words whose co-occurrence profile with the category words tracks the first-PC scores for those words. This results in a human-readable representation of the main discriminative features in the pruned space. The process is achieved in two steps, as detailed below. As a control, we also applied this PCA analysis to the non-pruned embeddings.
#### 4.2.1 Identifying a word's immediate context
To compute the PMI of each vocabulary word with each of the category words we used code provided by Kabbach and Herbelot (2021)4, who computed the Positive PMI between all word combinations in the WIKI4 corpus (4% of the English Wikipedia sampled across the entire dump).
Footnote 4: [https://github.com/akb89/counterix/blob/master/counterix/core/weigher.py](https://github.com/akb89/counterix/blob/master/counterix/core/weigher.py)
We use PMI rather than positive PMI (PPMI) because, for our analysis, the extent to which pairs of words co-occur less frequently than would be expected by independence is also meaningful.
Note that for purposes of the current analyses, we needed to identify those vocabulary words that were part of the contexts for each of the words in the category. This category-related corpus vocabulary is created as follows: For every word in a given category (target word), we select words whose joint probability with the target word is not zero (i.e. words whose PMI value with the target word is not zero), forming a set of context words for a given target word. More formally, we denote the set of all vocabulary words as \(V\). We can define an "immediate context" subset, denoted by \(N(i)\), as the subset of words in \(V\) that are adjacent to the target word \(i\), within a word window of \(\pm 2\) words: \(N(i)=\{w\in V|P(w,i)\neq 0\}\)
where \(w\) is an element (word) in \(V\), and \(P(w,i)\) denotes the joint probability of word w and target word \(i\).
After computing the immediate contexts for each category co-hyponym, we combine these context-sets. We denote the category as \(C\) and the vocabulary as \(V\). The "Category context", or the vocabulary of the category, is the union of the context-word sets of all the target words in the category, \(C=\bigcup_{i}N(i)\). This combines, for a given category, all the immediate-context subsets.
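Assuming the PMI table is stored as a sparse dict-of-dicts (an assumption about the data layout, not the released code), the two definitions translate directly:

```python
def immediate_context(pmi_row):
    """N(i): vocabulary words whose joint probability with target i is
    non-zero, i.e. that occur within the +/-2 word window around i."""
    return {w for w, value in pmi_row.items() if value != 0}

def category_context(pmi_table, category_words):
    """Category context: the union of N(i) over all target words i."""
    context = set()
    for i in category_words:
        context |= immediate_context(pmi_table.get(i, {}))
    return context
```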
#### 4.2.2 Find the correlations between the PMI vectors and the first PC of each category
For each category, the Spearman's correlation between the PMI vector of each corpus-vocabulary word and the category words' scores on the first principal component was calculated, along with its statistical significance. The vocabulary was then ranked according to these correlations.
Many vocabulary words end up in the Category context-set even though they are not lexical items relevant to our analysis. For this reason we only included words that were within the most frequent 15K dictionary words in the corpus, and further eliminated proper nouns and numbers. As relevant correlations we considered those whose statistical significance satisfied \(p<.05\). Finally, we required that more than \(60\%\) of the components of the PMI vectors be non-zero, so that the correlation was not driven by a few zero vs. non-zero entries.
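The filtering step can be sketched as follows, where pmi_vectors maps each candidate vocabulary word to its PMI values with the category words (ordered to match pc1_scores); the thresholds follow the text, while the data structure is our assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def pc1_correlates(pmi_vectors, pc1_scores, min_nonzero=0.6, alpha=0.05):
    """Vocabulary words whose PMI profile over the category words tracks
    the words' first-PC scores, after the filters described above."""
    hits = []
    for word, vec in pmi_vectors.items():
        vec = np.asarray(vec, dtype=float)
        if np.count_nonzero(vec) / vec.size <= min_nonzero:
            continue                     # require > 60% non-zero components
        rho, p = spearmanr(vec, pc1_scores)
        if p < alpha:
            hits.append((word, rho))
    return sorted(hits, key=lambda t: -abs(t[1]))
```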
## 5 Results
### Improved prediction of human judgments
For each fold we compute the baseline 2OI of the test partition, and the 2OI value computed when using the pruned feature-set identified by the algorithm using the train partition. The results are summarized in Table 1 and show that for all 8 datasets, pruning improved out-of-sample prediction of human behavior, in some cases by considerable magnitudes.
Because, within each category, there are as many test-partition folds as words, we could compute a paired T-test between each test partition's baseline Spearman \(\rho\) (prior to pruning), and the \(\rho\) value obtained after pruning. As shown in Table 1, the difference in 2OI values was statistically significant for 7 of the 8 datasets.
Further, the number of features retained through pruning varied considerably across categories. Notably, within each category, the standard deviation of this statistic across folds was low, meaning that pruning produced relatively systematic set-sizes for different train folds.
### Supervised-pruning selects for different features across domains
To determine whether a core set of features was maintained across the eight categories, we simply summed, for each GloVe feature, the number of times it was retained across the 8 prunings. We found that no feature was retained across all 8 datasets, or even 7 of the 8. The strongest overlap was seen in 6 features that were included in 6 pruned datasets. However, 220 of the features were kept for only 3 datasets or fewer. Thus, there was no core set of features that remained in all cases.
We also evaluated whether there were category pairs which, when pruned, tended to maintain similar sets of features, which would be an indicator of similar semantics. For each pair of categories we computed the Dice coefficient between the two sets [10]. As can be seen in Figure 1, the value of the coefficient was low across the board, and most so for sports.
### Pruned feature sets are interpretable
To interpret the semantics of the pruned feature sets we applied PCA to the pruned embeddings and interpreted the results as detailed in SS4.2. In this analysis we did not use cross validation, but pruned the complete word-by-features embedding matrix for each of the eight datasets. For example, if 120 features were retained by pruning the furniture embeddings, we applied PCA to the 20 (furniture
\begin{table}
\begin{tabular}{l l l l l}
**Category** & **Baseline Mean** & **Pruned Mean** & **T value (Pruned-Baseline)** & **Features Retained** \\ \hline
**Furniture** & 0.46 (0.19) & 0.63 (0.25) & 4.47*** & 121.00 (19.72) \\
**Clothing** & 0.37 (0.16) & 0.52 (0.21) & 4.74*** & 84.21 (11.99) \\
**Vegetables** & 0.30 (0.28) & 0.45 (0.30) & 3.59** & 58.05 (26.22) \\
**Sports** & 0.40 (0.19) & 0.52 (0.20) & 4.13*** & 101.39 (16.84) \\
**Vehicles** & 0.66 (0.12) & 0.74 (0.15) & 3.78** & 131.05 (23.84) \\
**Fruit** & 0.38 (0.24) & 0.42 (0.26) & 0.66 & 88.48 (16.16) \\
**Birds** & 0.20 (0.14) & 0.37 (0.25) & 3.58** & 57.57 (13.23) \\
**Professions** & 0.45 (0.20) & 0.57 (0.18) & 3.72*** & 102.43 (9.96) \\ \end{tabular}
\end{table}
Table 1: Prediction accuracy (Spearman’s Rho) for human similarity judgments from GloVe embeddings. Baseline: prediction for the test partition when using all GloVe features. Pruned: predictions based only on the pruned set learned using the training partition. Features Retained: average number of features retained from training \(\pm SD\). T values are from paired T-tests within category. ** \(p<.01\), *** \(p<.001\).
Figure 1: Dice coefficient indicating overlap of feature sets pruned by different categories
words) \(\times\) 120 (retained features) embedding matrix.
Table 2 presents the sizes of the Category contexts per category (Hits) and, of those, the number of words whose PMI correlated significantly with the scores on each category's first PC. We can observe that the number of words showing significant correlations was relatively similar when using pruned and non-pruned embeddings. However, the overlap was not necessarily strong in all eight domains.
To understand how meaning is organized in the pruned embeddings, the first evaluation is based on examining the scores of the category co-hyponyms on the first Principal Component. These are shown, for sports, in the first column of Figure 2. One can immediately see a separation between more typical team sports, which are here associated with negative-sign scores, and less typical sports, including running, walking and ballet.
For sports, the top-20 correlated words include _asian, men's, european, federation, women's, international, female, championship_ (see the supplementary file for the full results). These align with the 1st PC scores in having high PMI with words in the sports category such as basketball, tennis, gymnastics and soccer, and low PMI with ballet, golfing, fishing or chess. This list of words returned by the query emphasizes the international and inclusive (gender, country) dimension of some sports vs. others. This of course does not mean that running and walking are less federated or international than the others; it just means that these concepts are less frequently associated with them in text.
We also find meaningful divergences between the words identified as correlated with the 1st PC of the pruned and non-pruned embeddings, as these indicate differences emphasized via pruning. For sports, the noun _player_ is identified for the pruned embeddings. In contrast, for the non-pruned embeddings, the verb _play_ is more dominant, as well as its morphological variations _played, playing, plays_. This appears to emphasize the competitive/non-competitive dimension, which was not as salient in the pruned embeddings. In contrast, the pruned results include _olympic_ and _medal_ whereas the non-pruned ones do not.
In furniture, the selected words under the pruning condition appear to highlight spatial and physical dimensions, including prepositions and modifiers such as _out, center, around_. For the unpruned condition, on the other hand, words associated with technology are more prominent (_technology, system, powered_). Interesting divergences were also found for several other categories. For example, for clothing, the pruned embeddings more strongly emphasized the condition of clothes as being new or worn (e.g., _worn, wearing, wore, new_). In contrast, for vehicles the full embeddings emphasized more strongly the verb _drive_ and its morphological variants (including _drives, driver, driving, driven, drove_). Thus, the dimension of being driven is fleshed out when analyzed against the full embeddings, but not against the pruned ones. Because human comparisons are not strongly based on this dimension, it is effectively partialed out via the supervised pruning. This shows how a dimension may be central within text-meaning (corpus) but not human meaning.
### Pruned feature sets predict basic semantic features
Having shown that our pruned feature sets can be given a human-readable interpretation, we turn to our final question and seek to explain why they provide better correlation with HSJs, i.e. _why they align_. To do so, we use the curated dataset of Binder et al. (2016), which consists of 535 words with human ratings on 65 semantic features belonging to 14 basic semantic areas including Vision, Gustation, Temporal, Causal, and Cognition. Previous work (Chersoni et al., 2021; Utsumi, 2020) constructed a regression model (Partial Least Squares Regression; PLSR) that successfully predicted the 65 dimensions from those words' GloVe embeddings. We use the same procedure, but apply it to the feature sets found via supervised pruning to determine if sets pruned by different domains encode different semantics.
\begin{table}
\begin{tabular}{l l l l l} \hline \hline
**Category** & **Hits** & **Prn** & **Full** & **Cmn** \\ \hline
**Furniture** & 15968 & 45 & 23 & 19 \\
**Clothing** & 8615 & 20 & 11 & 6 \\
**Birds** & 8850 & 12 & 10 & 6 \\
**Vegetables** & 2888 & 1 & 2 & 0 \\
**Sports** & 16075 & 92 & 99 & 58 \\
**Vehicles** & 18146 & 22 & 44 & 0 \\
**Fruit** & 8263 & 4 & 6 & 2 \\
**Professions** & 25125 & 92 & 127 & 88 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Hits: Size of Category context-set. Prn/Full: number of words significantly correlated with the scores of the category’s first PC when computed from Pruned or Full (unpruned) embeddings. Cmn: number of words in common for pruned and full solutions.
For each of the 8 categories, we trained a PLSR model on 534 words.5 mapping GloVe to Binder features using each category's pruned features. The trained model was applied to the left-out word, predicting its 65 feature-values (leave-one-out cross validation; LOOCV). This resulted in a \(533\times 65\) prediction matrix for all words. For each of the 65 features we could compare the values in the prediction matrix to the true values, using Spearman's \(\rho\). Finally, we averaged the correlation values within the 14 larger-scale semantic areas to obtain a single value per domain. Note that for this analysis, we only used the top-60 ranked features for each category, thus ensuring that the model got the same amount of data across experiments for a fair comparison between the different pruned feature sets. We chose the value of 60 as it approximated the number of features retained for Clothing, Birds, Vegetables, Professions, and Sports when applying pruning outside a cross-validation framework.
Footnote 5: _Used_ appears twice as noun and verb separately, but in GloVe there is only one _used_ vector and thus one fewer word than in Binder’s actual dataset.
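A minimal sketch of the probing regression is given below; the number of PLSR components is our choice for illustration, as it is not reported here.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cross_decomposition import PLSRegression

def loocv_predictions(X, Y, n_components=20):
    """Leave-one-word-out PLSR mapping pruned embedding features X
    (n_words x 60) to Binder ratings Y (n_words x 65)."""
    n = X.shape[0]
    preds = np.zeros_like(Y, dtype=float)
    for i in range(n):
        train = np.arange(n) != i
        model = PLSRegression(n_components=n_components).fit(X[train], Y[train])
        preds[i] = model.predict(X[i:i + 1])[0]
    return preds

def per_feature_rho(preds, Y):
    """Spearman rho between predicted and human values, per Binder feature."""
    return [spearmanr(preds[:, j], Y[:, j])[0] for j in range(Y.shape[1])]
```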
Figure 3 shows the results. Replicating prior work (Chersoni et al., 2021; Utsumi, 2020), for all 8 domains the Cognition area was generally best predicted, while Gustation and Space were generally predicted less well. The features retained for professions offer the highest prediction scores for 7 of the 14 semantic areas, with a very large relative advantage for the prediction of Cognition-related semantic features, and for features in the Social category. professions also predicted Emotion features best, though with a weaker advantage compared to other categories' pruned sets. In contrast, the pruned feature sets from fruits and vegetables predicted Gustation features the best, whereas vehicles and sports predicted Motor features best.
Interestingly, feature sets pruned for clothing produced the best prediction of Olfaction, which consisted of one human-rated dimension: "having a characteristic or defining smell or smells". The different sorts of clothing materials in the set {mittens/wool, belt/leather, beanie/cloth} could have been a relevant dimension for comparison. clothing also produced the best prediction of Attention, which consisted of two dimensions, "someone or something that grabs your attention" and "someone or something that makes you feel alert, activated, excited, or keyed up in either a
Figure 2: PMI values for words that correlate with co-hyponym scores on first PC computed from pruned embeddings
positive or negative way". The fact that these two dimensions were predicted by clothing may be due to the specific items in the set that differed according to evening-wear/non-eveningwear, intimate/non-intimate items, mens/womens wear and sports/non-sports (e.g., suit/jeans, skirt/pants, pajamas/tuxedo).
These results suggest that the identified dimensions contained in the pruned sets reflect information that is central to the way people compare objects in these categories.
## 6 Conclusion
We have shown in this work that supervised pruning applied to a word embedding model can improve prediction of human similarity judgements. The method was shown to select different features across domains, demonstrating high levels of conceptual discrimination. Further, the pruned feature sets were interpretable with respect to the first component of a PCA analysis, allowing us to describe how humans discriminate between elements of a category, and giving the model itself a tool to justify its semantic 'beliefs' to a potential user. The probing task provided additional, fine-grained information on the semantics in the different pruned feature-sets.
Given its performance and inherent interpretability, supervised pruning can advance computations related to word-similarity, word-analogy and sentiment analysis, as well as domain adaptation. Most importantly, this method allows an AI system to construct a model of human domain knowledge through pruned embeddings, which diverges from the AI system's own internal organization of the domain (i.e., via full embeddings). This alignment can increase the synergy between the AI system and users, establishing a more robust foundation of shared understanding.
Figure 3: Accuracy in predicting human ratings using pruned features. Cell-values indicate the Spearman’s \(\rho\) between PLSR predicted results from pruned embeddings (\(N=60\) features per category) and the actual values from Binder’s dataset.
### Limitations
In Aim 1, one important question that remains to be answered is how the results would generalize if the type of embedding and/or the similarity dataset were different. Furthermore, understanding how this methodology could be applied to contextualized embeddings remains to be explored.
In Aim 2, we based our analysis on the first principal components of the datasets; another interesting avenue would be to analyse the second principal component as well.
In Aim 3, we used leave-one-word-out CV. As indicated by Utsumi (2020), this might be problematic for this dataset because it gives a relative advantage to left-out words that have many semantically-similar words in the training set. Utsumi (2020) predicts word-cluster features instead, but the limitation of that method is in determining cluster semantics. Chersoni et al. (2021) predict both single-word and cluster semantics. The decision to use leave-one-word-out CV may contribute to why some domains (e.g., Cognition) are predicted better than others, but does not contribute to differences in prediction for different pruned feature sets, as in all cases the same 534 words are mapped from GloVe to Binder features.
Finally, we consider that the dimensions highlighted by supervised pruning may be related to the set of words being compared, to the extent that the similarity ratings are impacted by contrast relations in the specific category set. This is to say that if similarity ratings were obtained for a different set of, say, sports co-hyponyms, the dimensions identified may differ.
|
2303.08515 | Loss of Exponential Mixing in a Non-Monotonic Toral Map | We consider a Lebesgue measure preserving map of the 2-torus, given by the
composition of orthogonal tent shaped shears. We establish strong mixing
properties with respect to the invariant measure and polynomial decay of
correlations for Holder observables, making use of results from the chaotic
billiards literature. The system serves as a prototype example of piecewise
linear maps which sit on the boundary of ergodicity, possessing null measure
sets around which mixing is slowed and which birth elliptic islands under
certain perturbations. | Joe Myers Hill, Rob Sturman, Mark C. T. Wilson | 2023-03-15T10:45:51Z | http://arxiv.org/abs/2303.08515v2 | # Loss of Exponential Mixing in a Non-Monotonic Toral Map
###### Abstract
We consider a Lebesgue measure preserving map of the 2-torus, given by the composition of orthogonal tent shaped shears. We establish strong mixing properties with respect to the invariant measure and polynomial decay of correlations for Hölder observables, making use of results from the chaotic billiards literature. The system serves as a prototype example of piecewise linear maps which sit on the boundary of ergodicity, possessing null measure sets around which mixing is slowed and which birth elliptic islands under certain perturbations.
_Acknowledgements._ JMH was supported by EPSRC under Grant Refs. EP/L01615X/1 and EP/W524372/1.
## 1 Introduction
The statistics of chaotic dynamics driven by an area-preserving map are often described by its mixing properties. Given such a map \(f:X\to X\), preserving a measure \(\mu\), we say that \(f\) is _mixing_ if its correlations \(C_{n}(\varphi,\psi,f,\mu)\) decay to \(0\) for \(L^{2}\) observables \(\varphi,\psi:X\to\mathbb{R}\), where
\[C_{n}(\varphi,\psi,f,\mu)=\int\left(\varphi\circ f^{n}\right)\psi\,\mathrm{d} \mu-\int\varphi\,\mathrm{d}\mu\int\psi\,\mathrm{d}\mu\]
denotes the _correlation function_. Rigorous bounds on the speed at which these correlations decay (the _mixing rate_) open the door to proving various statistical properties, most notably the central limit theorem (CLT): Given some observable \(\varphi\) with \(\int\varphi\,\mathrm{d}\mu=0\), we say that the CLT holds if the random variable
\[x\to\frac{1}{\sqrt{n}}\sum_{i=0}^{n-1}\varphi\circ f^{i}(x)\]
(\(x\) distributed according to the measure \(\mu\)) converges in distribution to the Gaussian \(\mathcal{N}(0,\sigma^{2})\).
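For intuition, the correlation function defined above can be estimated by Monte Carlo over samples from \(\mu\). The sketch below is purely illustrative: it uses Arnold's cat map as a stand-in Lebesgue-preserving mixing example and a simple trigonometric observable, none of which are tied to the maps studied later.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((200_000, 2))  # samples from mu = Lebesgue on the 2-torus

def f(z):
    # Arnold's cat map (x, y) -> (2x + y, x + y) mod 1, a standard mixing example
    x, y = z[:, 0], z[:, 1]
    return np.stack(((2 * x + y) % 1.0, (x + y) % 1.0), axis=1)

phi = lambda z: np.cos(2 * np.pi * z[:, 0])  # illustrative observable
psi = phi

z = pts.copy()
for n in range(5):
    # Monte Carlo estimate of C_n = E[(phi o f^n) psi] - E[phi] E[psi]
    C_n = np.mean(phi(z) * psi(pts)) - np.mean(phi(z)) * np.mean(psi(pts))
    print(n, round(C_n, 4))  # C_0 is 1/2 for this choice; later values decay to 0
    z = f(z)
```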
Typically if correlations over some class of observables are summable, \(\sum_{n}C_{n}<\infty\), the CLT holds for this class of observables. This condition follows readily if correlations decay exponentially fast: \(|C_{n}|=\mathcal{O}\left(\theta^{n}\right)\) for some \(0<\theta<1\), but may fail under slower polynomial decay: \(|C_{n}|=\mathcal{O}\left(n^{-\alpha}\right)\) with e.g. \(\alpha=1\) [13]. Some regularity is assumed for the observables, Hölder continuity being typical: we say that \(\varphi\) is _Hölder continuous_ with exponent \(a>0\) if there exists \(C>0\) such that \(|\varphi(x)-\varphi(x^{\prime})|\leq C\operatorname{dist}(x,x^{\prime})^{a}\) for all \(x,x^{\prime}\in X\). We assume Hölder continuity of observables throughout this article and use \(C\) to denote generic, not necessarily equal constants.
The last few decades have seen the development of schemes for bounds on correlations under progressively weaker assumptions about the map. [10] built on the work of [11, 12], giving conditions under which a uniformly hyperbolic map with singularities enjoys exponential decay of correlations. These included mild restrictions on the nature of the singularities and regularity of local manifolds, alongside a _one-step expansion_ estimate which ensures expansion by hyperbolicity dominates the cutting by singularities:
\[\liminf_{\delta\to 0}\,\sup_{W:|W|<\delta}\sum_{i}\frac{|W_{i}|}{|f(W_{i})|}<1 \tag{1}\]
where the supremum is taken over unstable manifolds \(W\), of length \(|W|\), split into components \(W_{i}\) by the singularities. Key to this analysis is construction of a _Young tower_[11]. Given a subset \(A\subset X\) and \(x\in A\), define the _return time_ of \(x\) to \(A\) under \(f\) as \(R(x;f,A)=\inf\{i>0\,|\,f^{i}(x)\in A\}\). Young considers returns* to some subset \(\Delta_{0}\) (the tower base) with hyperbolic product structure, showing that if returns satisfy an exponential tail bound:
Footnote *: In particular ‘good’ returns which satisfy additional technical constraints, see [11]. A precise definition of _hyperbolic product structure_ is also found therein.
\[\mu(\{x\in\Delta_{0}\,|\,R(x;f,\Delta_{0})>n\})<C\theta^{n}, \tag{2}\]
then correlations of \(f\) decay exponentially. Explicitly constructing \(\Delta_{0}\) is challenging in many systems, as is estimating its recurrence, requiring all the iterates of \(f\) to be considered. Chernov and Zhang's scheme both avoids the explicit construction of \(\Delta_{0}\) and reduces the analysis down to conditions such as (1) concerning a single iterate of the map \(f\).
The scheme has utility beyond uniformly hyperbolic examples. Following [11], if there exists \(C>0\) and \(\alpha>0\) such that
\[\mu(\{x\in\Delta_{0}\,|\,R(x;f,\Delta_{0})>n\})<Cn^{-\alpha}, \tag{3}\]
then correlations of \(f\) decay polynomially for Hölder observables. Suppose \(f:X\to X\) is non-uniformly hyperbolic, with suspected polynomial decay of correlations, possessing some region \(N\) where \(f\) is non-hyperbolic, with escape times \(E(x;f,N)=\inf\{i>0\,|\,f^{i}(x)\notin N\}\) satisfying
\[\mu(\{x\in N\,|\,E(x;f,N)>n\})<Cn^{-\alpha}. \tag{4}\]
By non-uniform hyperbolicity, a.e. \(x\in N\) eventually escapes and hits some region of 'strong' hyperbolicity, precisely a subset \(M\subset X\) with uniformly hyperbolic _return map_ \(f_{M}(x)=f^{R(x;f,M)}(x)\). Using its strong hyperbolic properties
to satisfy the Chernov and Zhang conditions, \(f_{M}\) then admits a Young tower with base \(\Delta_{0}\subset M\), satisfying
\[\mu(\{x\in M\,|\,R(x;f_{M},\Delta_{0})>n\})<C\theta^{n}. \tag{5}\]
Extending the domain of \(f_{M}\) to \(X\) in the obvious fashion, the bound (4) suggests
\[\mu(\{x\in X\,|\,R(x;f,M)>n\})<Cn^{-\alpha}, \tag{6}\]
which can be extended, making use of (5), to give (3). This final step is non-trivial and typically relies on utilising precise mapping behaviour of \(f_{M}\). The above scheme has been used to establish polynomial decay of correlations for various billiards maps including certain stadia and tables with cusps [1, 13]. Beyond billiards, in [11, §13] \(\mathcal{O}(1/n)\) correlation decay was shown for a family of _linked twist maps_ (hereafter LTMs). These are Lebesgue measure preserving continuous maps on the 2-torus \(\mathbb{T}^{2}\), composing monotonic shears restricted to horizontal and vertical annuli \(P,Q\subsetneq\mathbb{T}^{2}\). Here, mixing is slowed by orbits remaining trapped in the symmetric difference \(P\triangle Q\) for arbitrarily long periods, with recurrence to \(M=P\cap Q\) satisfying the tail bound (6). Monotonicity of the shears was important in the analysis, allowing for a straightforward proof of the mixing property.
More recently the scheme was directly applied to a family of non-monotonic toral maps [15]. Parameterising \(\mathbb{T}^{2}\) by \((x,y)\in\mathbb{R}^{2}/\mathbb{Z}^{2}\), these maps similarly compose horizontal and vertical shears \(H_{(\xi,\eta)}=G\circ F\) where
\[F(x,y)=\begin{cases}\left(x+\frac{y}{1-\eta},y\right)\ \text{mod}\ 1&\text{ for }y \leq 1-\eta,\\ \left(x+\frac{1-y}{\eta},y\right)\ \text{mod}\ 1&\text{ for }y\geq 1-\eta, \end{cases}\quad G(x,y)=\begin{cases}\left(x,y+\frac{x}{1-\xi}\right)\ \text{mod}\ 1&\text{ for }x \leq 1-\xi,\\ \left(x,y+\frac{1-x}{\xi}\right)\ \text{mod}\ 1&\text{ for }x\geq 1-\xi,\end{cases}\]
and \(0<\xi,\eta<1\). Exponential mixing rates were established over a wide neighbourhood of \((\xi,\eta)=(0,0)\), with boundary determined by (1), and are expected over \(1-\frac{1}{4\xi}<\eta<\frac{1}{4-4\xi}\) where \(H_{(\xi,\eta)}\) is uniformly hyperbolic (with singularities). This includes the parameter subspace \(\eta=\xi\) corresponding to matching \(F\) and \(G\), excluding the cusp \(\xi=\eta=1/2\) where \(F\) and \(G\) are symmetric tent maps. This cusp is notable in the transverse subspace \(\eta=1-\xi\) also, being the only parameters for which elliptic islands do not form. Following [15], we refer to \(H_{(\xi,\eta)}\) at these precise parameters as the _orthogonal tents map_ (OTM) and denote it simply by \(H\).
The OTM serves as a prototype example of piecewise linear maps which sit on the boundary of ergodicity. Limiting onto \(H\) from its non-ergodic perturbations, the nature of the periodic orbits seeding the islands changes from elliptic to parabolic. Provided such an orbit does not limit onto a singularity line, its surrounding islands shrink, leaving behind periodic line segments (\(H\) possesses four, sketched in Figure 1) of null measure. This permits mixing with respect to Lebesgue, but only at a reduced rate, for we can find orbits which 'stick' to the segment for arbitrarily long periods. Deducing this precise reduced rate of mixing constitutes our main theorem:
**Theorem 1**.: _Correlations for \(H\) decay as \(|C_{n}(\varphi,\psi,H,\mu)|=\mathcal{O}(1/n)\) for a typical choice of Hölder observables \(\varphi,\psi\)._
We expect a similar law to hold for piecewise linear systems obeying the limiting behaviour described above, for example the pointwise limit of \(H_{(\xi,\eta)}\) as \(\xi\to 0\) at \(\eta=1/3\) (see [14]). We focus on \(H\) in particular for two key reasons. Firstly, as a fundamental piecewise linear model of alternating shear flows where no-slip boundary conditions force non-monotonic shear profiles, it is of interest to (laminar) fluid mixing applications [10]. Indeed, it is the logical extension to Cerbelli and Giona's map, incorporating non-monotonicity into both the horizontal and vertical shears. Questions surrounding the mechanism by which \(H\) is mixing, but at a reduced rate, are natural in this setting and are answered conclusively by a proof of Theorem 1. To our knowledge, the mixing behaviour of \(H\) has only been studied numerically [10]. Secondly, the map possesses certain properties which speed up its analysis. Since both \(1/\eta\) and \(1/(1-\eta)\) are integer valued over \(0<\eta<1\) if and only if \(\eta=1/2\), \(H\) is the only map in the \(0<\xi,\eta<1\) parameter space with all integer valued Jacobians and can be expressed as \(H(x,y)=DH\cdot(x,y)^{T}\) mod \(1\)\({}^{\ddagger}\). This will prove useful for tracking the orbits of certain points under large powers of \(H\). In addition \(H\) can be related to its inverse by a conjugacy and behaves symmetrically on certain regions, reducing the calculations required to establish growth conditions by a factor of four.
Footnote ‡: This also implies that periodic orbits are dense on \(\mathbb{T}^{2}\), as the cardinality of any orbit containing a rational point \((s/q,p/q)\in\mathbb{T}^{2}\) with \(s,p,q\in\mathbb{N}\) is bounded above by \(q^{2}\) [10].
The remaining sections are organised as follows. In section 2 we state two theorems from the billiards literature that we rely upon to establish Theorem 1. We next prove hyperbolicity for \(H\) in section 3 and the mixing property in section 4. Central to this analysis is recurrence to a set \(\sigma\) with the return map \(H_{\sigma}\) exhibiting strong hyperbolic properties. We establish more formal properties of the map \(H_{\sigma}\) in section 5, sufficient to establish exponential decay of correlations. We use this to infer a polynomial upper bound on correlations for \(H\) in section 6 and show that it is attained by any typical choice of Hölder observables, proving Theorem 1. Finally in section 7 we comment on the relevance of our work to similar systems and suggest possible extensions.
## 2 Some results from the billiards literature
Figure 1: Line segments \(l_{j}\) satisfying \(H:l_{1}\leftrightarrow l_{2}\), \(l_{3}\leftrightarrow l_{4}\). Each is periodic with period \(2\), and their union is invariant under \(H\).
A necessary prerequisite for applying the machinery of [10] and similar is establishing mixing with respect to the invariant measure. In hyperbolic systems possessing singularities, the following scheme of [11] is useful, giving conditions for the (stronger) Bernoulli property. We paraphrase from [13]:
**Theorem 2** (Katok and Strelcyn).: _Let \((X,\mathcal{F},\mu,f)\) be a measure preserving dynamical system such that \(f\) is \(C^{2}\) smooth outside of a singularity set \(S\). Suppose that the Katok-Strelcyn conditions hold:_
**(KS1):**: \(\exists\,a,C_{1}>0\) _s.t._ \(\forall\,\varepsilon>0\)_,_ \(\mu(B_{\varepsilon}(S))\leq C_{1}\varepsilon^{a}\)_._
**(KS2):**: \(\exists\,b,C_{2}>0\) _s.t._ \(\forall\,x\in X\setminus S\)_,_ \(||D_{x}^{2}f||\leq C_{2}\,d(x,S)^{-b}\)_._
**(KS3):**: Lyapunov exponents exist and are non-zero almost everywhere._
_Then at almost every_ \(x\) _we can define local unstable and stable manifolds_ \(\gamma_{u}(x)\) _and_ \(\gamma_{s}(x)\)_. Suppose that the manifold intersection property holds:_
**(M):**: _For almost any_ \(x,x^{\prime}\in X\)_, there exists_ \(m,n\) _such that_ \(f^{m}(\gamma_{u}(x))\cap f^{-n}(\gamma_{s}(x^{\prime}))\neq\varnothing\)_._
_Then_ \(f\) _is ergodic. Provided the repeated manifold intersection property holds:_
**(MR):**: _For almost any_ \(x,x^{\prime}\in X\)_, there exists_ \(M,N\) _such that for all_ \(m>M\) _and_ \(n>N\)_,_ \(f^{m}(\gamma_{u}(x))\cap f^{-n}(\gamma_{s}(x^{\prime}))\neq\varnothing\)_,_
_the Bernoulli property follows._
The nature of the constant \(a\) giving **(KS1)** plays an important role in showing expansion conditions such as (1). In systems possessing a finite number of singularity curves, see for example [14, 15, 16], a covering by \(\varepsilon\)-balls immediately gives **(KS1)** with \(a=1\). Showing (1) is then quite straightforward; the singularity set splits an unstable manifold \(W\) of vanishing length \(|W|\to 0\) into at most \(K\) components \(W_{k}\), where \(K\) is the maximum number of singularity curves which meet at a given point. This reduces (1) to calculating the expansion factors \(\lambda_{k}=|f(W_{k})|/|W_{k}|\) and verifying the finite summation \(\sum_{k}\lambda_{k}^{-1}<1\). In many systems, in particular those driven by a return map where recurrence follows a law such as (6), singularity curves instead form a _countable_ family. Expansion factors \(\lambda_{k}=\mathcal{O}(k)\) are typical so that bounding the above sum is challenging, indeed it may even diverge. Such systems satisfy **(KS1)**, but only with some \(a<1\). In certain \(a<1\) scenarios, precise mapping behaviour may reduce (1) to a finite summation; see for example the return map considered in [15]. Such a scenario is not typical however, with (1) failing in many examples [14]. More recent schemes for bounds on correlations have revised (1) to suit these more general \(a<1\) systems. We quote the first of these, given in [14], which is sufficient for our purposes.
Let \(\Omega\) denote a two-dimensional connected compact Riemannian manifold, \(f:\Omega\to\Omega\) preserving a measure \(\mu\). Let \(d\) denote the distance in \(\Omega\) induced by the Riemannian metric \(\rho\). For any smooth curve \(W\) in \(\Omega\), denote by \(|W|\) its length, and by \(m_{W}\) the Lebesgue measure on \(W\) induced by the Riemannian metric \(\rho_{W}\) restricted to \(W\). Also let \(\nu_{W}=m_{W}/|W|\) be the normalised (probability) measure on \(W\).
**(H1):** Hyperbolicity of \(f\) (with uniform expansion and contraction). There exist two families of cones \(C_{x}^{u}\) (unstable) and \(C_{x}^{s}\) (stable) in the tangent spaces \(\mathcal{T}_{x}\Omega\), for all \(x\in\Omega\), and there exists a constant \(\Lambda>1\), with the following properties:
1. \(Df(C_{x}^{u})\subset C_{fx}^{u}\) and \(Df(C_{x}^{s})\supset C_{fx}^{s}\) whenever \(Df\) exists.
2. \(\|D_{x}f(v)\|\geq\Lambda\|v\|\), \(\forall v\in C_{x}^{u}\) and \(\|D_{x}f^{-1}(v)\|\geq\Lambda\|v\|\), \(\forall v\in C_{x}^{s}\).
3. These families of cones are continuous on \(\Omega\) and the angle between \(C_{x}^{u}\) and \(C_{x}^{s}\) is uniformly bounded away from zero.
We say that a smooth curve \(W\subset\Omega\) is an unstable (stable) curve if at every point \(x\in W\) the tangent line \(\mathcal{T}_{x}W\) belongs in the unstable (stable) cone \(C_{x}^{u}\) (\(C_{x}^{s}\)).
**(H2):** Singularities and smoothness. Let \(\mathcal{S}_{0}\) be a closed subset in \(\Omega\), such that \(M:=\Omega\setminus\mathcal{S}_{0}\) is a dense set in \(\Omega\). We put \(\mathcal{S}_{\pm 1}=f^{\mp 1}\mathcal{S}_{0}\).
1. \(f:M\setminus\mathcal{S}_{1}\to M\setminus\mathcal{S}_{-1}\) is a \(C^{2}\) diffeomorphism.
2. \(\mathcal{S}_{0}\cup\mathcal{S}_{1}\) is a finite or countable union of smooth, compact curves in \(\Omega\).
3. Curves in \(\mathcal{S}_{0}\) are transversal to stable and unstable cones. Every smooth curve in \(\mathcal{S}_{1}\) (resp. \(\mathcal{S}_{-1}\)) is a stable (resp. unstable) curve. Every curve in \(\mathcal{S}_{1}\) terminates either inside another curve of \(\mathcal{S}_{1}\) or on \(\mathcal{S}_{0}\).
4. There exists \(b\in(0,1)\) and \(c>0\) such that for any \(x\in M\setminus\mathcal{S}_{1}\) \[\|D_{x}f\|\leq c\,d(x,\mathcal{S}_{1})^{-b}.\] (7)
**(H3):** Regularity of smooth unstable curves. We assume that there is an \(f\)-invariant class of unstable curves \(W\subset M\) that are _regular_ (see [2]).
**(H4):** SRB measure. \(\mu\) is a Sinai-Ruelle-Bowen (SRB) measure which is mixing.
**(H5):** One-step expansion. There exists \(q\in(0,1]\) such that
\[\liminf_{\delta\to 0}\ \sup_{W:|W|<\delta}\sum_{i}\left(\frac{|W|}{|f(W_{i})|} \right)^{q}\frac{|W_{i}|}{|W|}<1, \tag{8}\]
where the supremum is taken over all unstable curves, and the \(W_{i}\) are the components of \(W\) split by the singularity set for \(f\).
**Theorem 3** (Chernov and Zhang).: _Under the conditions **(H1)**-**(H5)**, the system \((f,\mu)\) enjoys exponential decay of correlations._
Note that the new one-step expansion condition (8) may be reduced to the old (1) by taking \(q=1\). The new condition ensures that the images of unstable curves grow 'on average'. Choosing a \(q<1\) essentially permits summing over countably many components, broadening the potential applications of the scheme to a wider class of \(a<1\) systems. The image coupling methods ([10], see also [14] and the references therein) used to establish Theorem 3 differ substantially from those employed in [2]. The key 'magnet' construction [11, 14], however, further serves as the base \(\Delta_{0}\) of a Young tower satisfying the exponential tail bound (2) [2]. As such the scheme may similarly be applied to some return map \(f_{M}\) as a step towards proving polynomial decay of correlations for \(f\). We conclude this section with two technical adjustments we will refer back to later in section 5.
**Remark 1**.: Condition **(H1.3)** has been relaxed in subsequent schemes [2, 15] and can be replaced by
3. These families of cones are continuous on components of \(\Omega\setminus\mathcal{S}_{0}\) and the angle between \(C_{x}^{u}\) and \(C_{x}^{s}\) is uniformly bounded away from zero.
Theorem 3 still follows under this relaxed assumption by applying (for example) Theorem 1 of [20]. Despite the improvement over older growth conditions, condition **(H5)** still fails for many systems over one iterate. See, for example, the modified stadia considered in [10]. It can be replaced by a multi-step expansion condition, establishing **(H5)** for some higher power \(f^{n}\) of the map and its enlarged singularity set.
## 3 Hyperbolicity
**Proposition 1**.: \(H\) _is non-uniformly hyperbolic. That is, Lyapunov exponents_
\[\chi(z,v)=\lim_{n\to\infty}\frac{1}{n}\log||DH_{z}^{n}v||\]
_are non-zero for almost every \(z\in\mathbb{T}^{2}\) and tangent vector \(v\neq 0\)._
The key ingredients of the proof were sketched out in [23]. We provide a more detailed treatment here as certain constructions are central to the analysis of later sections. We begin with a description of the Jacobian \(DH\) and recall a decomposition of the cocycle \(DH_{z}^{n}\) into blocks (Lemma 1) which share an invariant expanding cone (Lemma 2). Associating these blocks with recurrence to a region \(\sigma\) then allows us to deduce non-zero Lyapunov exponents on a full measure set.
Partition the torus into the four squares \(S_{j}\) shown in Figure 2. The Jacobian \(DH\) is then constant on the preimages \(A_{j}=F^{-1}(S_{j})\), given by the matrix \(M_{j}\) where
\[M_{1}=\begin{pmatrix}1&2\\ 2&5\end{pmatrix},\quad M_{2}=\begin{pmatrix}1&2\\ -2&-3\end{pmatrix},\quad M_{3}=\begin{pmatrix}1&-2\\ 2&-3\end{pmatrix},\quad M_{4}=\begin{pmatrix}1&-2\\ -2&5\end{pmatrix},\]
undefined on the singularity set \(\mathcal{D}=\cup_{j}\partial A_{j}\).
Figure 2: A partition of the torus into four rectangles \(S_{j}\), and their preimages \(A_{j}\), \(A_{j}^{\prime}\) under \(F,G^{-1}\).
Letting \(X^{\prime}\) denote the full measure set \(\mathbb{T}^{2}\setminus\cup_{i\geq 0}H^{-i}(\mathcal{D})\), the \(n\)-step itinerary
\[A_{j_{1}},A_{j_{2}},A_{j_{3}},\ldots,A_{j_{n}},\]
is well defined for any \(z\in X^{\prime}\). The related cocycle \(DH_{z}^{n}\) is given by
\[DH_{z}^{n}=M_{j_{n}}\ldots M_{j_{3}}M_{j_{2}}M_{j_{1}}\]
with each \(j_{k}\in\{1,2,3,4\}\). Our aim is to decompose any cocycle into hyperbolic matrices which share an invariant expanding cone. Note that while \(M_{1}\) and \(M_{4}\) are hyperbolic, \(M_{2}\) and \(M_{3}\) are not. Hence when \(M_{2}\) or \(M_{3}\) appears in a cocycle at position \(M_{j_{k}}\), we must combine it with the neighbouring matrices \(M_{j_{k+l}},\ldots,M_{j_{k+2}},M_{j_{k+1}}\) for some \(l\in\mathbb{N}\).
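As a quick numerical sanity check (not part of any proof), one can confirm that \(M_{1},M_{4}\) are hyperbolic while \(M_{2},M_{3}\) are parabolic, and that combined blocks such as \(M_{1}M_{2}^{n}\) are hyperbolic again; since all four matrices have determinant \(1\), hyperbolicity is equivalent to \(|\mathrm{trace}|>2\).

```python
import numpy as np

M1 = np.array([[1, 2], [2, 5]]);  M2 = np.array([[1, 2], [-2, -3]])
M3 = np.array([[1, -2], [2, -3]]); M4 = np.array([[1, -2], [-2, 5]])

for name, M in [("M1", M1), ("M2", M2), ("M3", M3), ("M4", M4)]:
    print(name, "trace =", np.trace(M), "det =", round(np.linalg.det(M)))
    # M1, M4: |trace| = 6 > 2 (hyperbolic); M2, M3: trace = -2 (parabolic)

for n in range(1, 5):
    B = M1 @ np.linalg.matrix_power(M2, n)  # a block from the family below
    print(f"M1 M2^{n}: trace = {np.trace(B)}")  # |trace| = 8n + 6 > 2
```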
Consider the countable family of matrices
\[\mathcal{M}=\{M_{1},M_{4},M_{1}M_{2}^{n},M_{3}M_{2}^{n},M_{4}M_{2}^{n},M_{1}M_ {3}^{n},M_{2}M_{3}^{n},M_{4}M_{3}^{n}\}\]
with \(n\in\mathbb{N}\). Similarly define
\[\mathcal{M}^{\prime}=\{M_{1}^{-1},M_{4}^{-1},M_{1}^{-1}M_{2}^{-n},\ldots,M_{4} ^{-1}M_{3}^{-n}\}.\]
It was shown in [24] that:
**Lemma 1**.: _At almost every \(z\), the cocycle \(DH_{z}^{n}\) can be decomposed into blocks from \(\mathcal{M}\)._
The result follows from the fact that essentially no orbits get trapped in \(A_{3}\),
\[\lim_{n\to\infty}\mu\left(\{z\in A_{3}\,|\,H^{i}(z)\in A_{3}\text{ for all }0\leq i\leq n-1\}\right)=0, \tag{9}\]
and the equivalent statement for \(A_{2}\). An entirely analogous argument, considering escapes from \(A_{2}^{\prime}\) and \(A_{3}^{\prime}\) under \(H^{-1}\), gives that at a.e. \(z\) the cocycle \(DH_{z}^{-n}\) can be decomposed into blocks from \(\mathcal{M}^{\prime}\).
**Lemma 2**.: _The matrices in \(\mathcal{M}\) admit an invariant expanding cone \(\mathcal{C}\)._
Proof.: Parameterise the tangent space by \((v_{1},v_{2})^{T}\in\mathbb{R}^{2}\). The lemma was shown in [24] using the cone \(C=\{(v_{1},v_{2})\neq 0\,|\,|v_{2}|\geq\phi\,|v_{1}|\}\) where \(\phi\) is the golden ratio \((1+\sqrt{5})/2\). Here we define a slightly wider cone \(\mathcal{C}=\{(v_{1},v_{2})\neq 0\,|\,|v_{2}|\geq\varphi\,|v_{1}|\}\), \(\varphi=21/13\), which still contains all the unstable eigenvectors of matrices in \(\mathcal{M}\) and none of the stable eigenvectors. Hence \(\mathcal{C}\) is invariant and one can verify that it is also expanding (minimum expansion factors are calculated later in Table 1, in particular the minimum expansion of a matrix \(M\) over \(\mathcal{C}\) under the \(\|\cdot\|_{\infty}\) norm is given by \(\min_{\pm}K_{\pm}(M)\)).
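The invariance and expansion claims are easy to spot-check numerically. The sketch below (our own check, with an arbitrary sample of blocks from \(\mathcal{M}\)) tests the two boundary rays \((\pm 13,21)\) of \(\mathcal{C}\) together with one interior vector.

```python
import numpy as np

M1 = np.array([[1, 2], [2, 5]]);  M2 = np.array([[1, 2], [-2, -3]])
M3 = np.array([[1, -2], [2, -3]]); M4 = np.array([[1, -2], [-2, 5]])
sample = [M1, M4] \
    + [M1 @ np.linalg.matrix_power(M2, n) for n in range(1, 6)] \
    + [M4 @ np.linalg.matrix_power(M3, n) for n in range(1, 6)]

phi = 21 / 13
# two boundary rays of the cone C and one interior vector
tests = [np.array([13.0, 21.0]), np.array([-13.0, 21.0]), np.array([0.0, 1.0])]
for M in sample:
    for v in tests:
        w = M @ v
        assert abs(w[1]) >= phi * abs(w[0]), (M, v)           # invariance: w stays in C
        assert np.max(np.abs(w)) > np.max(np.abs(v)), (M, v)  # sup-norm expansion
print("invariance and expansion hold on all samples")
```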
### Recurrence to \(\sigma\)
Define \(\sigma\) as the union of the sets \(\sigma_{1}=H(A_{1})\), \(\sigma_{2}=H(A_{2}\cap H(A_{3}))\), \(\sigma_{3}=H(A_{3}\cap H(A_{2}))\), \(\sigma_{4}=H(A_{4})\). By construction, any orbit escaping \(A_{2},A_{3}\) or passing through \(A_{1},A_{4}\) must pass through \(\sigma\). By (9) and the equivalent statement for \(A_{2}\), the return map \(H_{\sigma}:\sigma\to\sigma\) is well defined at \(\mu\)-almost every \(z\in\sigma\). Similarly define \(\sigma^{\prime}\) using the \(A^{\prime}_{j}\) and the return map \(H^{-1}_{\sigma^{\prime}}:\sigma^{\prime}\to\sigma^{\prime}\) for \(H^{-1}\). The sets \(\sigma,\sigma^{\prime}\) are shown as the unshaded regions in Figure 3.
We begin by identifying the points in \(\sigma\) with return time \(1\), i.e. \(H^{-1}(\sigma)\cap\sigma\). The preimages of \(\sigma_{1},\sigma_{4}\) are simply \(A_{1}\), \(A_{4}\) and by definition we have \(H^{-1}(\sigma_{2})=A_{2}\cap H(A_{3})\) so that \(H^{-1}(\sigma_{2})\cap\sigma=A_{2}\cap\sigma_{3}:=\varsigma_{3}\) and similarly \(H^{-1}(\sigma_{3})\cap\sigma=A_{3}\cap\sigma_{2}:=\varsigma_{2}\). See Figure 4(a) for an illustration.
Now consider recurrence to \(\sigma\) with return times greater than \(1\), the white regions of Figure 4(a). Starting with \(z\in A_{3}\), by the definition of \(\sigma\), the return time \(r_{\sigma}(z)=\inf\{n\geq 1\,|\,H^{n}(z)\in\sigma\}\) is \(k+1\) where \(k\) is the escape time \(\inf\{n\geq 1\,|\,H^{n}(z)\notin A_{3}\}\). Figure 5(a) shows a partition of \(A_{3}\) into sets \(A^{k}\) of constant escape time, bounded by the boundary preimages \(H^{-k}(\partial A_{3})\). Points in \(A^{k}\) spend \(k\) iterates in \(A_{3}\) then escape via \(A_{1}\), \(A_{2}\), or \(A_{4}\) and consequently return to \(\sigma\). We partition each \(A^{k}\) based on this escape path, shown as the red lines in Figure 5(b). The labelling \(A^{k}_{j,i}\) is such that \(A^{k}_{j,i}\subset A^{k}\subset A_{i}\) and \(H^{k}(A^{k}_{j,i})\subset A_{j}\). It transpires that when points escape after spending \(4\) or more iterates in \(A_{3}\), they can only do so via \(A_{1}\) or \(A_{4}\). Similarly partitioning \(A_{2}\) and combining with Figure 4(a) gives a partition of \(\sigma\) into sets on which \(DH_{\sigma}\) is constant. The boundaries of these partition elements are shown in Figure 4(b) and constitutes, together with \(\partial\sigma\), the singularity set \(\mathcal{S}\) for \(H_{\sigma}\). We remark that outside of the sets \(\varsigma_{2}\), \(\varsigma_{3}\) the Jacobian \(DH_{\sigma}\) takes values in \(\mathcal{M}\). Noting \(H_{\sigma}(\varsigma_{2})=H(\varsigma_{2})\subset\sigma_{3}\) and \(H_{\sigma}(\varsigma_{3})=H(\varsigma_{3})\subset\sigma_{2}\) we have that within \(\varsigma_{2}\) the Jacobian of \(H^{2}_{\sigma}\) is given by \(MM_{3}\) for some \(M\in\mathcal{M}\cup\{M_{2}\}\) and within \(\varsigma_{3}\) it is given by \(MM_{2}\) for some \(M\in\mathcal{M}\cup\{M_{3}\}\). Hence, at almost every \(z\in\sigma\) the Jacobian of \(H_{\sigma}\) or \(H^{2}_{\sigma}\) is some matrix from \(\mathcal{M}\).
We are now ready to establish non-uniform hyperbolicity.
Proof of Proposition 1.: The proof of Lemma 1 shows that almost every orbit \(H^{n}(z)\) hits \(\sigma\). Similar to LTMs, we can show that almost all of those then continue to return to \(\sigma\) with some positive frequency \(\alpha_{z}\). This follows straightforwardly from the fact that \(H\) preserves the Lebesgue measure on \(\mathbb{T}^{2}\), a compact metric space, and \(\sigma\) is measurable. A proof is given in Lemma 6.3.3 of [1], originally from [1]. For large \(n\) and a.e. \(z\) the
cardinality of \(\{0\leq i\leq n-1\,|\,H^{i}(z)\in\sigma\}\) is roughly \(\alpha_{z}n\), certainly bounded below by \(\alpha_{z}n/2\)§. The cocycle \(DH_{z}^{n}\) then contains as many applications of \(DH_{\sigma}\). By the above, applying \(DH_{\sigma}\) either completes a block from \(\mathcal{M}\) or does so over the next iterate (the case where we land in \(\varsigma_{2},\varsigma_{3}\)). At worst, then, we have roughly half as many blocks from \(\mathcal{M}\) in \(DH_{z}^{n}\) as we have returns to \(\sigma\). Certainly this proportion is greater than a quarter, so \(DH_{z}^{n}\) contains at least \(\alpha_{z}n/8\) blocks from \(\mathcal{M}\). Defining
Footnote §: By a combinatorial argument, see [12]
\[K=\inf_{\begin{subarray}{c}M\in\mathcal{M}\\ v\in\mathcal{C}\end{subarray}}\frac{\|Mv\|}{\|v\|},\]
Lemma 2 gives \(K>1\). Noting cone invariance, for any \(v_{0}\in\mathcal{C}\),
\[\frac{1}{n}\log\|DH_{z}^{n}v_{0}\| \geq\frac{1}{n}\log\left(K^{\frac{1}{8}\alpha_{z}n}\|v_{0}\|\right)\] \[=\frac{\alpha_{z}}{8}\log(K)+\frac{1}{n}\log\|v_{0}\|\]
so that \(\chi(z,v_{0})\geq\alpha_{z}\log(K)/8>0\). We may then extend to non-zero Lyapunov exponents for general \(v\neq 0\) using a particular form of Oseledets' theorem in two dimensions (Theorem 3.14 of [15], see [11]).
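For illustration only, the positive exponent can be observed numerically by iterating \(H\) and accumulating log-norms of the cocycle applied to a cone vector. The slope bookkeeping below is read off from the definitions of \(F\) and \(G\) at \(\xi=\eta=1/2\); the orbit length and seed are arbitrary.

```python
import numpy as np

def H_and_jac(z):
    # one step of the OTM together with its Jacobian DH = DG(F(z)) DF(z)
    x, y = z
    sF = 2.0 if y <= 0.5 else -2.0   # slope of the horizontal tent shear
    x1 = (x + (2 * y if y <= 0.5 else 2 * (1 - y))) % 1.0
    sG = 2.0 if x1 <= 0.5 else -2.0  # slope of the vertical tent shear
    y1 = (y + (2 * x1 if x1 <= 0.5 else 2 * (1 - x1))) % 1.0
    DF = np.array([[1.0, sF], [0.0, 1.0]])
    DG = np.array([[1.0, 0.0], [sG, 1.0]])
    return (x1, y1), DG @ DF

rng = np.random.default_rng(2)
z, v, chi, N = tuple(rng.random(2)), np.array([0.0, 1.0]), 0.0, 100_000
for _ in range(N):
    z, J = H_and_jac(z)
    v = J @ v
    chi += np.log(np.linalg.norm(v))
    v /= np.linalg.norm(v)  # renormalise to avoid overflow
print("Lyapunov exponent estimate:", chi / N)  # positive, as Proposition 1 asserts
```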
## 4 The mixing property
In this section we build on hyperbolicity, establishing mixing properties using Theorem 2.
**Theorem 4**.: _The map \(H:\mathbb{T}^{2}\to\mathbb{T}^{2}\) is Bernoulli with respect to the Lebesgue measure._
Figure 4: Part (a) shows the portions of \(\sigma\) (red, blue) with return time \(1\) to \(\sigma\). Points in the white region have return times of \(2\) or more. Part (b) shows the singularity set \(\mathcal{S}\) for the return map \(H_{\sigma}\). Red dashed lines denote the shared boundaries of the \(\sigma_{j}\).
Figure 5: Partitions of the region \(A_{3}\). Part (a) shows a partition into sets \(A^{k}\) where \(k\) is the escape time. Part (b) shows a subdivision into sets \(A^{k}_{j,3}\subset A^{k}\) where \(j\) is such that \(H^{k}(A^{k}_{j,3})\subset A_{j}\). Red lines in each \(A^{k}\) are the preimages of the \(A_{1}A_{2}\) boundary under \(H^{k}\).
### Nature of local manifolds
Noting that **(KS1-2)** were shown in [14], by Theorem 2, local unstable and stable manifolds \(\gamma_{u}(z),\gamma_{s}(z)\) exist at a.e. \(z\). By definition, for any \(\zeta,\zeta^{\prime}\in\gamma_{u}(z)\)
\[\operatorname{dist}(H^{-n}(\zeta),H^{-n}(\zeta^{\prime}))\to 0 \tag{10}\]
as \(n\to\infty\). Similarly for any \(\zeta,\zeta^{\prime}\in\gamma_{s}(z)\)
\[\operatorname{dist}(H^{n}(\zeta),H^{n}(\zeta^{\prime}))\to 0 \tag{11}\]
as \(n\to\infty\). Piecewise linearity of \(H\) ensures that these local manifolds are line segments containing \(z\), aligned with some vector \(v=(v_{1},v_{2})^{T}\) of _gradient_ \(v_{2}/v_{1}\). The following two lemmas establish bounds on their gradients when mapped under \(H\) and its inverse.
**Lemma 3**.: _For almost every \(z\), there exists \(m,n\in\mathbb{N}\) such that \(H^{m}(\gamma_{u}(z))\) contains a line segment in \(\sigma\) aligned with some \(v\in\mathcal{C}\), and \(H^{-n}(\gamma_{s}(z))\) contains a line segment in \(\sigma^{\prime}\) aligned with some \(v^{\prime}\in\mathcal{C}^{\prime}\)._
Proof.: By definition of the \(\sigma_{j},\sigma^{\prime}_{j}\), we have that \(\sigma_{j}=H(\sigma^{\prime}_{j})\) for \(j=1,4\), and \(\sigma_{j}=H^{2}(\sigma^{\prime}_{j})\) for \(j=2,3\). For almost every \(z\) the number \(m=\min\{k\geq 1\,|\,H^{k}(z)\in\sigma\}\) is well defined, as is the cocycle \(DH^{m}_{z}\). On some portion of \(\gamma_{u}(z)\) around \(z\), the cocycle \(DH^{m}_{z}\) will be constant and maps to some line segment \(\Gamma\) under \(H^{m}\). Hence \(H^{m}(\gamma_{u}(z))\) contains a segment \(\Gamma\) in \(\sigma\), aligned with some vector \(v\). Now if \(\Gamma\) lies in \(\sigma_{1}\), its preimage is a segment in \(\sigma^{\prime}_{1}\) aligned with the vector \(M_{1}^{-1}v\). Now to satisfy (10), \(M_{1}^{-1}v\) must lie in some stable cone \(\mathcal{C}^{\prime}_{s}\) which contains all the stable eigenvectors of matrices in \(\mathcal{M}^{\prime}\) and none of the unstable eigenvectors. Hence \(v\in M_{1}\mathcal{C}^{\prime}_{s}\). Similarly if \(z\in\sigma_{4}\) then \(v\in M_{4}\mathcal{C}^{\prime}_{s}\), if \(z\in\sigma_{2}\) then \(v\in M_{2}M_{3}\mathcal{C}^{\prime}_{s}\), and if \(z\in\sigma_{3}\) then \(v\in M_{3}M_{2}\mathcal{C}^{\prime}_{s}\). Such a stable cone \(\mathcal{C}^{\prime}_{s}\) is given by \(\{(v_{1},v_{2})\neq 0\,|\,|v_{2}|\geq|v_{1}|\}\); one can verify that \(M\mathcal{C}^{\prime}_{s}\subset\mathcal{C}\) for each \(M\in\{M_{1},M_{4},M_{2}M_{3},M_{3}M_{2}\}\), verifying \(v\in\mathcal{C}\). The argument for \(v^{\prime}\in\mathcal{C}^{\prime}\) is entirely analogous.
The expanding and invariance properties of the cone \(\mathcal{C}\) formed from \(\mathcal{M}\) will be key to growing the images of unstable manifolds. We can ensure stronger expansion by refining the cone, defining \(\mathcal{C}_{+},\mathcal{C}_{-}\subset\mathcal{C}\) by
* \(\mathcal{C}_{+}=\{(v_{1},v_{2})\neq 0\,|\,3\,|v_{1}|\geq|v_{2}|\geq\varphi\,|v_{1}|,\;v_{1}v_{2}>0\}\),
* \(\mathcal{C}_{-}=\{(v_{1},v_{2})\neq 0\,|\,3\,|v_{1}|\geq|v_{2}|\geq\varphi\,|v_{1}|,\;v_{1}v_{2}<0\}\).
**Lemma 4**.: _Let \(\Gamma\) be a line segment in \(\sigma\), aligned with some \(v\in\mathcal{C}\). It follows that \(H_{\sigma}(\Gamma)\) or \(H_{\sigma}^{2}(\Gamma)\) contains a line segment:_
* _Contained within_ \(\sigma_{1}\cup\sigma_{3}\)_, aligned with some vector in_ \(\mathcal{C}_{+}\)_, or_
* _Contained within_ \(\sigma_{2}\cup\sigma_{4}\)_, aligned with some vector in_ \(\mathcal{C}_{-}\)_._
Proof.: Suppose first that \(\Gamma\) does not lie entirely within \(\varsigma_{2}\) or \(\varsigma_{3}\). Then \(\Gamma\) contains a component \(\tilde{\Gamma}\) (possibly the whole of \(\Gamma\)) on which \(DH_{\sigma}\) is a matrix from \(\mathcal{M}\). If \(H_{\sigma}\left(\tilde{\Gamma}\right)\) lands in \(\sigma_{1}\cup\sigma_{3}\), then this Jacobian is in the subset
\(\{M_{1},M_{1}M_{2}^{n},M_{3}M_{2}^{n},M_{1}M_{3}^{n}\}\subset\mathcal{M}\). Case (A1) then follows from verifying that \(M\mathcal{C}\subset\mathcal{C}_{+}\) for each \(M\) in this subset. Case (A2) can be argued similarly. If \(\Gamma\subset\varsigma_{2}\cup\varsigma_{3}\) then it contains a component on which the Jacobian of \(H_{\sigma}^{2}\) is in \(\mathcal{M}\) and we can follow a similar argument.
### Growth lemma
We recall some useful properties of line segments from [13].
**Definition 1**.: Let \(\Gamma\) be a line segment. We define the _height_ of \(\Gamma\) as \(\ell_{v}(\Gamma)=\nu\left(\{y\,|\,(x,y)\in\Gamma\}\right)\), the _width_ of \(\Gamma\) as \(\ell_{h}(\Gamma)=\nu\left(\{x\,|\,(x,y)\in\Gamma\}\right)\), where \(\nu\) is the Lebesgue measure on \(\mathbb{R}\).
Given a partition element \(A\), we say that \(\Gamma\) has _simple intersection_ with \(A\) if its restriction to \(A\) is empty or a single line segment. Conversely we say that \(\Gamma\) has _non-simple intersection_ with \(A\) if its restriction to \(A\) contains more than one connected component.
**Lemma 5**.: _Let \(\Gamma\subset\sigma\) be a line segment which satisfies either (A1) or (A2) and has simple intersection with each of the \(A_{j}\). Then at least one of the following consequences hold:_
1. _There exists_ \(k\) _such that_ \(H^{k}(\Gamma)\) _contains a line segment having non-simple intersection with some_ \(A_{j}\)_,_
2. _There exists_ \(k\) _such that_ \(H^{k}(\Gamma)\) _contains a line segment_ \(\Lambda\) _satisfying (A1) or (A2) with_ \(\ell_{v}(\Lambda)\geq(1+\delta)\,\ell_{v}(\Gamma)\) _for some_ \(\delta>0\)_, independent of_ \(\Gamma\)_._
The proof involves splitting into several cases based on the specific location of \(\Gamma\) in \(\sigma\). The analysis of the first case (roughly up to equation (16)) gives a complete exposition of our method, reducing the lemma to checking bounds on growth factors and lengths of partition elements. The other cases are then argued similarly, either by exploiting symmetries or by recalculating bounds on different partition elements. This geometric information, i.e. the equations of the lines which make up \(\mathcal{S}\), is vital to our mixing rate analysis in sections 5, 6 so we present the full analysis here.
Proof.: Figure 6(a) shows the singularity set for the return map \(H_{\sigma}\) over \(\sigma_{1}\cup\sigma_{3}\setminus\varsigma_{3}\), and the singularity set of \(H_{\sigma}^{2}\) over \(\varsigma_{3}\). The singularity lines partition \(\sigma_{1}\cup\sigma_{3}\) into sets \(A_{j,i}^{k}\) with the same labelling scheme as Figure 5(b).
Let \(\Gamma\) satisfy case (A1) and suppose it has non-simple intersection with \(A_{4,2}^{1}\). Now since \(\Gamma\) has simple intersection with \(A_{3}\), observing Figure 6(a) it is clear that \(\Gamma\) must traverse \(A_{3,2}^{1}\). Restricting to \(\Gamma^{2}=\Gamma\cap A_{2}\), \(H\left(\Gamma^{2}\right)\subset H(\Gamma)\) is a line segment which has non-simple intersection with \(A_{4}\), i.e. (C1) is satisfied with \(k=1\). Assume, then, that \(\Gamma\) has simple intersection with \(A_{4,2}^{1}\) and therefore does not traverse \(A_{3,2}^{1}\). If \(\Gamma\subset\sigma_{3}\) then \(\Gamma\) lies entirely within one of two sets \(\sigma_{3a}\), \(\sigma_{3b}\) (shown in Figure 6(b)) whose union is \(\sigma_{3}\) and whose intersection is \(\mathcal{R}=A_{3,2}^{1}\cap\sigma_{3}\). For \(\Gamma\subset\sigma_{1}\), simple intersection with \(A_{3}\) implies that \(\Gamma\) does not traverse \(A_{4}\cap\sigma_{1}\). This, together with the two disjoint sets which make up \(A_{3,2}^{1}\cap\sigma_{1}\), implies that \(\Gamma\) lies entirely within one of four subsets \(\sigma_{1a},\ldots,\sigma_{1d}\), shown in Figure 6(b). The behaviour of \(H_{\sigma}\) over the sets \(\sigma_{1a}\), \(\sigma_{1b}\) is shown explicitly in Figures 7, 8.
Figure 6: Part (a) shows the singularity curves dividing up \(\sigma_{1}\cup\sigma_{3}\) with some key partition elements labelled. The elements \(A^{1}_{3,2}\), \(A_{4}\cap\sigma\) split \(\sigma_{1}\cup\sigma_{3}\) into six subsets \(\sigma_{1a},\ldots,\sigma_{3b}\), any two of which are either disjoint or have intersection given by \(A_{4}\cap\sigma\) or one of the three subsets which make up \(A^{1}_{3,2}\), see part (b). \(\mathcal{R}\) denotes the set \(\sigma_{3a}\cap\sigma_{3b}\).
Figure 7: The singularity set of \(H_{\sigma}\) over \(\sigma_{1a}\). Unlabelled sets are given by \(A^{k}_{4,2}\) for \(k\geq 5\) which limit onto the point \((0,1/4)\) in the obvious fashion. The dashed red line is \(\partial P(\varepsilon)\), useful for establishing **(KS1)** for \(H_{\sigma}\).
Let \(\|\cdot\|\) denote the \(\|\cdot\|_{\infty}\) norm. Starting with \(\sigma_{1a}\), \(DH_{\sigma}\) takes values in \(\mathcal{M}_{1a}=\{M_{1},M_{4}M_{2}^{k},M_{3}M_{2}^{l}\,|\,k\in\mathbb{N},\,l=1,2,3\}\). The unlabelled sets in Figure 7 are the partition elements \(A_{4,2}^{k}\) for \(k\geq 5\), limiting onto the point \((0,1/4)\) as \(k\to\infty\) in the obvious fashion. We remark that any \(\Gamma\subset\sigma_{1a}\) has simple intersection with all of the partition elements \(A_{i,j}^{k}\subset\sigma_{1a}\). If \(\Gamma\) is entirely contained within some partition element \(A\) corresponding to \(M\in\mathcal{M}_{1a}\), and is aligned with some unit vector \(v\in\mathcal{C}_{+}\), then \(\ell_{v}\left(H_{\sigma}(\Gamma)\right)=\|Mv\|\ell_{v}(\Gamma)\). Minimum expansion factors are straightforward to calculate. Parameterise unit vectors in \(\mathcal{C}_{+}\) by \((v_{1},1)^{T}\) where \(1/3\leq v_{1}\leq 13/21\) and write the components of matrices \(M\in\mathcal{M}\) as \(\left(\begin{smallmatrix}a&b\\ c&d\end{smallmatrix}\right)\). Then by cone invariance and the fact that vectors \((v_{1},v_{2})^{T}\in\mathcal{C}\) have norm \(|v_{2}|\), we have that \(\|Mv\|=|cv_{1}+d|\). This is monotone increasing in \(v_{1}\) if \(\mathrm{sgn}(c)=\mathrm{sgn}(d)\), monotone decreasing if \(\mathrm{sgn}(c)\neq\mathrm{sgn}(d)\), so that \(\|Mv\|\) is minimal on \((1/3,1)^{T}\) or \((13/21,1)^{T}\) in these respective cases. Table 1 shows the components of matrices \(M\in\mathcal{M}\) and the minimum expansion factors \(K_{+}(M)\) which follow.
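This endpoint recipe is easy to mechanise; the sketch below (our own illustration, in exact rational arithmetic) reproduces \(K_{+}(M_{1})=17/3\) and the values \(K_{+}(M_{4}M_{2}^{k})=(56k+13)/3\) used in the proof.

```python
import numpy as np
from fractions import Fraction

def K_plus(M):
    # ||Mv|| = |c v1 + d| on unit vectors (v1, 1) in C_+, minimised at an endpoint
    c, d = Fraction(int(M[1, 0])), Fraction(int(M[1, 1]))
    return min(abs(c * Fraction(1, 3) + d), abs(c * Fraction(13, 21) + d))

M1 = np.array([[1, 2], [2, 5]]); M2 = np.array([[1, 2], [-2, -3]])
M4 = np.array([[1, -2], [-2, 5]])

print("K_+(M1) =", K_plus(M1))  # 17/3
for k in range(1, 5):
    B = M4 @ np.linalg.matrix_power(M2, k)
    print(f"K_+(M4 M2^{k}) =", K_plus(B))  # equals (56k+13)/3
```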
If \(\Gamma\) intersects \(A_{4,2}^{4}\) and \(A_{3,2}^{3}\) (traversing \(A_{4,2}^{3}\)) then \(H^{3}\left(\Gamma\cap A_{4,2}^{3}\right)\) is a line segment in \(A_{2}^{\prime}\cap A_{4}\), connecting the \(A_{3},A_{4}\) boundary to the \(A_{2},A_{4}\) boundary. Noting that \(A_{2}^{\prime}\cap A_{4}\) is made up of two quadrilaterals, see Figure 2, there are two possible ways this can occur. Firstly, it can connect points \((x,1)\) to \((2y-1,y)\) with \(1/2\leq x\leq 3/4\). Its image under \(F\) then connects \((x,1)\) to \((1,y)\) so that then, shearing vertically by \(G\), its image under \(H\) connects \((x,2-2x)\) to \((1,y)\), passing through \(y=0\). Since \(x\leq 3/4\), we have \(2-2x\geq 1/2\) so that \(H^{4}\left(\Gamma\cap A_{4,2}^{3}\right)\) must have non-simple intersection with \(A_{2}\). The second case, where \(H^{3}\left(\Gamma\cap A_{4,2}^{3}\right)\) connects points \((x,1/2)\) and \((2y-1/2,y)\), is similar so that (C1) is satisfied.
Assume, then, that \(\Gamma\) does not traverse \(A_{4,2}^{3}\). Two possible cases follow; either \(\Gamma\) lies entirely below the upper boundary of \(A_{4,2}^{3}\), or \(\Gamma\) lies entirely above the lower boundary of \(A_{4,2}^{3}\). In the first case let \(\Gamma_{1}=\Gamma\cap A_{1}\). If \(K_{+}(M_{1})\,\ell_{v}(\Gamma_{1})>\ell_{v}(\Gamma)\), then we may take \(\Lambda=H(\Gamma_{1})\subset H_{\sigma}(\Gamma)\) to satisfy (C2). Taking \(K_{+}(M_{1})=17/3\) from Table 1, this holds provided that \(\ell_{v}(\Gamma_{1})/\ell_{v}(\Gamma)>3/17\). Noting that \(\Gamma\subset A_{1}\cup A_{2}\), if the above inequality does not hold, then the proportion of \(\Gamma\) in \(A_{2}\) satisfies \(\ell_{v}(\Gamma_{2})/\ell_{v}(\Gamma)>14/17\). Observing Figure 7, \(\Gamma_{2}\) intersects some collection of sets \(A_{4,2}^{k}\), indexed by a consecutive subset \(\{k_{0},k_{0}+1,...\}\subset\mathbb{N}\) with \(k_{0}\geq 3\). Assume that \(\Gamma_{2}\) intersects just two of
these sets \(\Gamma_{2}=\Gamma_{k_{0}}\cup\Gamma_{k_{0}+1}\). As seen in [14], if
\[\frac{1}{K_{+}\left(M_{4}M_{2}^{k_{0}}\right)}+\frac{1}{K_{+}\left(M_{4}M_{2}^{ k_{0}+1}\right)}<1\]
then at least one of \(\Gamma_{k}=\Gamma_{k_{0}}\), \(\Gamma_{k_{0}+1}\) satisfies \(\ell_{v}\left(H^{k+1}(\Gamma_{k})\right)>\ell_{v}(\Gamma_{2})\) and by extension if
\[\frac{1}{K_{+}\left(M_{4}M_{2}^{k_{0}}\right)}+\frac{1}{K_{+}\left(M_{4}M_{2}^ {k_{0}+1}\right)}<\frac{1}{\alpha}\]
then \(\ell_{v}\left(H^{k+1}(\Gamma_{k})\right)>\alpha\ell_{v}(\Gamma_{2})\). Now noting that \(K_{+}\left(M_{4}M_{2}^{k}\right)\) is monotonic increasing in \(k\) we have
\[\sum_{k=k_{0}}^{k_{0}+1}\frac{1}{K_{+}\left(M_{4}M_{2}^{k}\right)}\leq\sum_{k =3}^{4}\frac{1}{K_{+}\left(M_{4}M_{2}^{k}\right)}=\frac{3}{181}+\frac{3}{237} <\frac{14}{17}\]
so that, together with \(\ell_{v}(\Gamma_{2})/\ell_{v}(\Gamma)>14/17\), for some \(k\) condition (C2) follows by taking \(\Lambda=H^{k+1}\left(\Gamma\cap A_{4,2}^{k}\right)\). The case where \(\Gamma\) intersects just one of the \(A_{4,2}^{k}\) follows as a trivial consequence.
Suppose \(\Gamma\subset\sigma_{1a}\) violates the lemma; by the above we have that \(\Gamma\) intersects three or more of the \(A_{4,2}^{k}\), which by the geometry of the partition (see Figure 7) implies
* \((\dagger)\) \(\Gamma\) traverses \(A_{4,2}^{k}\) for some \(k\geq 4\), connecting the lines \(\mathcal{L}_{k}:y=\frac{k+1-4kx}{4k+2}\) and \(\mathcal{L}_{k-1}:y=\frac{k-4(k-1)x}{4k-2}\).
We will show that this leads to a contradiction through an inductive argument. If \(\Gamma\) intersects \(A_{4,2}^{3}\), it must traverse \(A_{4,2}^{4}\). Let \(y_{k}=(k+1)/(4k+2)\) be the sequence of points where \(\mathcal{L}_{k}\) meets \(x=0\). Since the gradients of \(\mathcal{L}_{k}\) are monotone decreasing in \(k\), a lower bound \(h_{4}\leq\ell_{v}\left(\Gamma\cap A_{4,2}^{4}\right)\) is given by \(y_{4}^{\prime}-y_{4}\) where \((x_{4}^{\prime},y_{4}^{\prime})\) is the intersection of the lines \(y=y_{4}+\varphi x\) and \(\mathcal{L}_{3}:\,y=(4-12x)/14\). Specifically
\[h_{4}=\frac{191}{675}-\frac{5}{18}=\frac{7}{1350}. \tag{12}\]
As before let \(\Gamma_{2}=\Gamma\cap A_{2}\). Observing Figure 7, since \(\mathcal{L}_{3}\) meets the boundary of \(A_{1}\) and \(A_{2}\) at the point \((1/10,1/5)\), the height of \(\Gamma_{2}\) is bounded by \(\ell_{v}(\Gamma_{2})\leq L_{3}=y_{2}-1/5=1/10\). Letting \(\Lambda=H^{5}\left(\Gamma\cap A_{4,2}^{4}\right)\), we have that
\[\ell_{v}(\Lambda)\geq K_{+}\left(M_{4}M_{2}^{4}\right)\,h_{4}=\frac{56(4)+13} {3}\frac{7}{1350}=\frac{553}{1350}\approx 0.4096\]
and \(\ell_{v}(\Gamma)<(17/14)\,\ell_{v}(\Gamma_{2})\leq 17/140\approx 0.1214\), so that (C2) is satisfied. For the inductive step, assume that \(\Gamma\) traverses \(A_{4,2}^{k}\), but does not traverse \(A_{4,2}^{k-1}\). Using the same method as before we calculate
\[h_{k}=\frac{21}{2\left(2k+1\right)\left(68k-47\right)} \tag{13}\]
and
\[L_{k-1}=\frac{k-1}{4k-6}-\frac{k-1}{4k-2}\frac{(k-1)}{\left(2k-3\right)\left( 2k-1\right)}. \tag{14}\]
Then (C2) is satisfied with \(\Lambda=H^{k+1}\left(\Gamma\cap A_{4,2}^{k}\right)\) provided that \(K_{+}\left(M_{4}M_{2}^{k}\right)\,h_{k}>(17/14)L_{k-1}\), i.e.
\[\frac{56k+13}{3}\frac{21}{2\left(2k+1\right)\left(68k-47\right)}-\frac{17}{14} \frac{\left(k-1\right)}{\left(2k-3\right)\left(2k-1\right)}>0, \tag{15}\]
which holds for all \(k>4\) as required. It follows by induction that if \(\Gamma\) violates the lemma it must not traverse any \(A_{4,2}^{k}\) for \(k\geq 3\), contradicting \((\dagger)\), so that the lemma must hold when \(\Gamma\subset\sigma_{1a}\) lies entirely below the upper boundary of \(A_{4,2}^{3}\). The case where \(\Gamma\subset\sigma_{1a}\) lies entirely above the lower boundary of \(A_{4,2}^{3}\) is more straightforward, with (C2) following from the inequality
\[\sum_{k=1}^{3}\frac{1}{K_{+}\left(M_{3}M_{2}^{k}\right)}+\frac{1}{K_{+}\left( M_{4}M_{2}^{k}\right)}=\sum_{k=1}^{3}\frac{3}{56k+13}+\frac{3}{40k+7}\approx 0.206<1. \tag{16}\]
The lemma holds, then, for general \(\Gamma\subset\sigma_{1a}\).
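The finite reciprocal-expansion sums appearing above are readily verified in exact arithmetic; a minimal check:

```python
from fractions import Fraction as Fr

# inequality (16): sum over k = 1..3 of 1/K_+(M3 M2^k) + 1/K_+(M4 M2^k)
s16 = sum(Fr(3, 56 * k + 13) + Fr(3, 40 * k + 7) for k in range(1, 4))
print(float(s16), s16 < 1)  # ~0.206, True

# the two-term bound used earlier: 3/181 + 3/237 < 14/17
print(Fr(3, 181) + Fr(3, 237) < Fr(14, 17))  # True
```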
Moving on to the case \(\Gamma\subset\sigma_{1b}\), write its intersections with the lower and upper regions \(y\leq 1/2\) and \(y\geq 1/2\) as \(\Gamma_{L}\) and \(\Gamma_{U}\) respectively. Observing Figure 8, \(\Gamma_{L}\) can intersect up to 5 partition elements from \(\mathcal{A}_{L}=\{A_{3,2}^{1},\ldots,A_{1}\cap\sigma\}\), on which \(DH_{\sigma}\) takes a value in \(\mathcal{M}_{L}=\{M_{1},M_{3}M_{2}^{k},M_{4}M_{2}^{k}\,|\,k=1,2\}\). Let
\[\alpha=\sum_{M\in\mathcal{M}_{L}}\frac{1}{K_{+}(M)}=\frac{3}{17}+\sum_{k=1}^{2 }\left(\frac{3}{40k+7}+\ \frac{3}{56k+13}\right)\approx 0.342.\]
Dividing through by \(\alpha\), for any subset \(\mathcal{N}\subset\mathcal{M}_{L}\) (including \(\varnothing\) and \(\mathcal{M}_{L}\)) we have
\[\sum_{M\in\mathcal{N}}\frac{1}{\alpha K_{+}(M)}\leq 1.\]
Hence we may always expand from some \(A\in\mathcal{A}_{L}\), taking \(\Lambda=H_{\sigma}(\Gamma\cap A)\), which by the above inequality satisfies \(\alpha\ell_{v}(\Lambda)\geq\ell_{v}(\Gamma_{L})\). Hence (C2) is satisfied when \(\ell_{v}(\Gamma_{L})>\alpha\ell_{v}(\Gamma)\). It remains to show the case \(\ell_{v}(\Gamma_{L})\leq\alpha\ell_{v}(\Gamma)\), i.e.
\[\ell_{v}(\Gamma_{U})\geq(1-\alpha)\ell_{v}(\Gamma). \tag{17}\]
Observing Figure 8, the set of partition elements which \(\Gamma_{U}\) can intersect is given by \(\mathcal{A}_{U}=\{A_{4}\cap\sigma_{1},A_{4,3}^{k}\,|\,k\geq 1\}\), so \(\mathcal{M}_{U}=\{M_{4},M_{4}M_{3}^{k}\,|\,k\geq 1\}\). Note that any two element subset \(\mathcal{N}\subset\mathcal{M}_{U}\) satisfies
\[\begin{split}\sum_{M\in\mathcal{N}}\frac{1}{K_{+}(M)}& \leq\frac{1}{K_{+}(M_{4})}+\frac{1}{K_{+}(M_{4}M_{3})}\\ &=\frac{21}{79}+\frac{21}{127}=\beta\approx 0.431\end{split} \tag{18}\]
and \(\alpha+\beta<1\). It follows that if \(\Gamma_{U}\) intersects two or fewer of the elements of \(\mathcal{A}_{U}\), we can guarantee (C2) by the standard method, summing the reciprocals of expansion factors. Assume, then, that \(\Gamma_{U}\) intersects three or more elements from \(\mathcal{A}_{U}\). It follows that
* \((\ddagger)\) \(\Gamma_{U}\) traverses \(A_{4,3}^{k}\) for some \(k\geq 1\), connecting the lines \(\mathcal{L}_{k}:\,y=\frac{(4k+2)x+k+2}{4k+4}\) and \(\mathcal{L}_{k-1}:\,y=\frac{(4k-2)x+k+1}{4k}\).
Figure 8: The singularity set of \(H_{\sigma}\) over the lower part of \(\sigma_{1b}\) with the top portion of \(A_{4}\cap\sigma_{1}\) omitted. Unlabelled sets are given by \(A_{4,3}^{k}\) for \(k\geq 2\) which limit onto the point \((1/4,1/2)\) in the obvious fashion. The segment \(\mathcal{P}\) is the preimage under \(H\) of the segment joining \((1/2,3/4)\) to \((1,1)\) in \(S_{4}\). The length \(L_{0}\) denotes maximum height of any segment in \(\sigma_{1b}\) bounded by \(\mathcal{P}\) and \(y=1/2\).
We now follow a similar inductive argument to before, assuming that \(\Gamma\) violates the lemma and aiming to contradict (\(\ddagger\)). Let \((x_{k},y_{k})=\left(\frac{k+2}{4k+6},\frac{k+2}{2k+3}\right)\) denote the intersections of the lines \(\mathcal{L}_{k}\) with the boundary \(y=2x\) of \(\sigma\). Assume \(\Gamma\) traverses \(A_{4,3}^{k}\) and write its restriction to this set as \(\Gamma^{k}\). Since the gradients of the \(\mathcal{L}_{k}\) are monotonic increasing in \(k\) and vectors in \(\mathcal{C}_{+}\) have gradients bounded above by \(3\), a lower bound on \(\ell_{v}(\Gamma^{k})\) is given by \(h_{k}=y_{k}^{\prime}-y_{k}\), where \((x_{k}^{\prime},y_{k}^{\prime})\) is the intersection of the line \(y-y_{k}=3(x-x_{k})\) and \(\mathcal{L}_{k-1}\), in particular
\[h_{k}=\frac{8k^{2}+18k+7}{16k^{2}+28k+6}-\frac{k+2}{2k+3}=\frac{3}{16k^{2}+28k +6}. \tag{19}\]
For the base case suppose that \(\Gamma_{U}\) traverses \(A_{4,3}^{1}\). Let \((x_{U},y_{U})\) be the intersection with \(y=1/2+x/2\), the boundary between \(A_{4,3}^{1}\) and \(A_{4}\cap\sigma_{1}\). Note that this point maps to \((1,y_{U})\) under \(H\) with \(y_{U}<2/3\). Figure 8 shows the preimage \(\mathcal{P}\) in \(A_{4}\cap\sigma_{1}\) of the segment joining \((1/2,3/4)\) to \((1,1)\) between \(A_{3}\) and \(A_{4}\). Specifically \(\mathcal{P}\) lies on the line \(y=7/12+5x/12\) and \(H(\mathcal{P})\) lies on \(y=1/2+x/2\). If \(\Gamma\) intersects \(\mathcal{P}\), then \(H(\Gamma)\) connects \((1,y_{U})\) to a point on the segment joining \((1/2,3/4)\) to \((1,1)\). Since \(y_{U}<3/4\), it follows that \(H(\Gamma)\) traverses \(A_{3}\), making non-simple intersection with \(A_{4}\), so that (C1) is satisfied. Assume, then, that \(\Gamma_{U}\) does not intersect \(\mathcal{P}\). This gives an upper bound \(\ell_{v}(\Gamma_{U})\leq y_{0}-1/2=:L_{0}\), where \((x_{0},y_{0})=(7/19,14/19)\) is the intersection of \(\mathcal{P}\) with the boundary of \(\sigma_{1}\) on \(y=2x\) (see Figure 8). Noting (17), (C2) follows with \(\Lambda=H^{2}\left(\Gamma^{1}\right)\) if the inequality \(K_{+}(M_{4}M_{3})h_{1}>L_{0}/(1-\alpha)\) is satisfied. Indeed
\[\left(\frac{16}{7}+\frac{79}{21}\right)\frac{3}{16+28+6}-\frac{9}{38(1-\alpha )}\approx 0.00277>0\]
so that the base step of the induction holds. The inductive step is roughly analogous, reducing to checking the inequality
\[K_{+}\left(M_{4}M_{3}^{k}\right)h_{k}-\frac{L_{k-1}}{1-\alpha}>0, \tag{20}\]
where \(L_{k-1}=y_{k-2}-1/2\) is the height of the partition element \(A_{4,3}^{k-1}\). One can verify that this inequality holds (the function is monotonic decreasing in \(k\geq 2\) with limit \(0\) as \(k\rightarrow\infty\)), establishing the lemma for \(\Gamma\subset\sigma_{1b}\).
Next consider \(\Gamma\subset\sigma_{3b}\), shown in Figure 9. Note that outside of \(\varsigma_{3}\) (shaded in blue) the Jacobian \(DH_{\sigma}\) is some matrix from \(\mathcal{M}\), but over \(\varsigma_{3}\) we have \(DH_{\sigma}=M_{2}\notin\mathcal{M}\). Therefore if we are to expand from some subset of \(\Gamma\cap\varsigma_{3}\), to ensure that \(\Lambda\) satisfies one of (A1-2) we must map forwards using \(H_{\sigma}^{2}\), whose Jacobian is always a matrix from \(\mathcal{M}\) (analogous to the escape behaviour for \(A_{3}\) shown in Figure 5(b)). The relevant subset of matrices, then, is \(\mathcal{M}_{3b}=\{M_{1},M_{1}M_{3}^{k},M_{2}M_{3}^{k},M_{3}M_{2}^{k},M_{4}M_{2}^{k}\,|\,k=1,2\}\). Noting that \(\Gamma\) can have non-simple intersection with the sets \(A_{4,2}^{2}\) and \(A_{1,3}^{1}\), the relevant inequality to verify is
\[\left(\sum_{M\in\mathcal{M}_{3b}}\frac{1}{K_{+}(M)}\right)+\frac{1}{K_{+}(M_{1} M_{3})}+\frac{1}{K_{+}\left(M_{4}M_{2}^{2}\right)}<1.\]
Indeed, the above sums to \(\delta\approx 0.807<1\), so that restricting to one of the partition elements and expanding from there (using \(H_{\sigma}^{2}\) inside of \(\varsigma_{3}\), \(H_{\sigma}\) otherwise) will always satisfy (C2) with some \(k\leq 3\). This leaves the cases \(\Gamma\subset\sigma_{1c},\sigma_{1d},\sigma_{3a}\). Noting that rotating \(\sigma_{1c}\) by \(180^{\circ}\) about the point \((1/4,3/4)\) gives \(\sigma_{1b}\), and \(\mathcal{C}_{+}\) is invariant under this rotation, the argument is essentially analogous. Similarly the arguments for \(\sigma_{1d},\sigma_{3a}\) are equivalent to those for \(\sigma_{1a},\sigma_{3b}\) respectively. This concludes the case where \(\Gamma\) satisfies (A1).
Figure 9: Behaviour of \(H_{\sigma}\) over \(\sigma_{3b}\setminus\varsigma_{3}\) and \(H_{\sigma}^{2}\) over \(\varsigma_{3}\), shaded in blue.
Let \(\Gamma\) satisfy (A2). Define the transformation \(T:\mathbb{T}^{2}\to\mathbb{T}^{2}\) given by \(T(x,y)=(1-x,y+1/2)\bmod 1\). One can verify that \(T\circ T=\operatorname{Id}\) and \(T\circ H=H\circ T\) so that \(H^{n}=T\circ H^{n}\circ T\). Now since \(T(\sigma_{2}\cup\sigma_{4})=\sigma_{1}\cup\sigma_{3}\) and \(DT\,\mathcal{C}_{-}=\mathcal{C}_{+}\), the line segment \(T(\Gamma)\) satisfies (A1). By our analysis above, \(T(\Gamma)\) then satisfies (C1) or (C2). Noting that \(T(A_{j})=A_{5-j}\), if \(T(\Gamma)\) satisfies (C1) then there exists \(k\) such that \(H^{k}(\Gamma)=(T\circ H^{k}\circ T)(\Gamma)\) has non-simple intersection with \(A_{5-j}\), so (C1) is satisfied. Similarly since \(\ell_{v}(\cdot)\) is \(T\)-invariant, if \(T(\Gamma)\) satisfies (C2) then the same holds for \(\Gamma\).
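The claimed commutation \(T\circ H=H\circ T\) is also easy to test numerically; below is a minimal sketch with random sample points and an arbitrary tolerance.

```python
import numpy as np

def H(x, y):
    # the OTM, written out from F and G at xi = eta = 1/2
    x1 = (x + (2 * y if y <= 0.5 else 2 * (1 - y))) % 1.0
    y1 = (y + (2 * x1 if x1 <= 0.5 else 2 * (1 - x1))) % 1.0
    return x1, y1

def T(x, y):
    return (1 - x) % 1.0, (y + 0.5) % 1.0

rng = np.random.default_rng(3)
for _ in range(1000):
    x, y = rng.random(2)
    assert np.allclose(T(*H(x, y)), H(*T(x, y)), atol=1e-9), (x, y)
print("T o H = H o T on all sampled points")
```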
### Establishing the Bernoulli property
We are now ready to establish the mixing property.
Proof of Theorem 4.: By Theorem 2, with **(KS1-2)** shown in [13] and **(KS3)** in Proposition 1, it suffices to show **(MR)**. By Lemmas 3, 4, for a.e. \(z\) we can find \(m_{0}\) such that \(H^{m_{0}}(\gamma_{u}(z))\) contains a line segment \(\Gamma_{0}\) satisfying (A1) or (A2). Now iteratively apply Lemma 5 until (C1) is satisfied, giving \(m_{1}\) such that \(H^{m_{1}}(\Gamma_{0})\) contains a line segment \(\Gamma_{1}\) which has non-simple intersection with some \(A_{j}\). Define a \(v\)-segment as any line segment traversing \(S_{1}\), connecting its upper and lower boundaries. Similarly define a \(h\)-segment as any line segment in \(S_{1}\) which connects its left and right boundaries. Consider the four parallelograms \(Q_{j}\subset A_{j}\) given by \(Q_{1}=A_{1}\cap S_{2}\), \(Q_{2}=A_{2}\cap S_{1}\), \(Q_{3}=A_{3}\cap S_{4}\), \(Q_{4}=A_{4}\cap S_{3}\). We recall from [13], specifically the proof of Lemma 4 in that work, that
1. If \(\Gamma_{1}\) has non-simple intersection with some \(A_{j_{1}}\), it traverses some \(Q_{j_{2}}\), connecting its sloping boundaries.
2. If \(\Gamma_{1}\) traverses \(Q_{j_{2}}\), \(j_{2}=1,2,3,4\), then \(H^{k}(\Gamma_{1})\) traverses \(Q_{3}\) for \(k=2,1,0,3\) respectively.
3. The image of any line segment traversing \(Q_{3}\) contains a \(v\)-segment.
The above gives \(m_{2}\in\{1,2,3,4\}\) such that \(H^{m_{2}}(\Gamma_{1})\) contains a \(v\)-segment \(\Gamma\subset H(Q_{3})\cap S_{1}\), with this parent set given by the quadrilateral with corners \((0,0)\), \((1/6,0)\), \((1/2,1/2)\), \((1/3,1/2)\), so that \(\Gamma\) connects points \((x_{1},0)\) and \((x_{2},1/2)\) with \(0\leq x_{1}\leq 1/6\) and \(1/3\leq x_{2}\leq 1/2\). It follows that \(\Gamma\) traverses \(Q_{2}\) which, by (M2-3), implies that \(H^{2}(\Gamma)\) contains a \(v\)-segment and so does \(H^{2+2k}(\Gamma)\) for \(k\geq 0\) by induction. Applying \(F\) to \(\Gamma\) has no effect on \((x_{1},0)\) and wraps \((x_{2},1/2)\) horizontally around the torus so that \(F(\Gamma)\) contains a segment joining \((0,y)\) to \((x_{2},1/2)\) with \(y<1/2\). Now \(G\) has no effect on \((0,y)\) and maps \((x_{2},1/2)\) to \((x_{2},1/2+2x_{2})\bmod 1\). Since \(1/2+2x_{2}\geq 1/2+2/3=7/6>1\), \(H(\Gamma)\) contains a segment joining \((0,y)\) to \((x_{3},1)\) with \(x_{3}\leq x_{2}\leq 1/2\). It follows that \(H(\Gamma)\) must traverse \(Q_{4}\) which, by (M2-3), implies that \(H^{5}(\Gamma)\) contains a \(v\)-segment. Using the same induction as before we have that \(H^{5+2k}(\Gamma)\) contains a \(v\)-segment for all \(k\geq 0\) which, together with the same result for \(H^{2+2k}(\Gamma)\), implies that \(H^{k}(\Gamma)\) contains a \(v\)-segment for all \(k\geq 4\). Hence there exists \(M=m_{0}+m_{1}+m_{2}+4\) such that \(H^{m}(\gamma_{u}(z))\) contains a \(v\)-segment for all \(m\geq M\).
Now for almost any \(z^{\prime}\), by Lemma 3 we can find \(n_{0}\) such that \(H^{-n_{0}}(\gamma_{s}(z^{\prime}))\) contains a line segment \(\Gamma^{\prime}\in\sigma^{\prime}\), aligned with some \(v\in\mathcal{C}^{\prime}\). Define the transformation \(\mathcal{T}(x,y)=(1-y,1-x)\bmod 1\). One can verify that \(\mathcal{T}\circ\mathcal{T}=\operatorname{Id}\)
and \(\mathcal{T}\circ H^{-1}=H\circ\mathcal{T}\) so that \(H^{-k}=\mathcal{T}\circ H^{k}\circ\mathcal{T}\). Now since \(\mathcal{T}(\sigma^{\prime})=\sigma\) and \(D\mathcal{T}\mathcal{C}^{\prime}=\mathcal{C}\), we have that \(\mathcal{T}(\Gamma^{\prime})\) is a line segment in \(\sigma\), aligned with some \(v=D\mathcal{T}v^{\prime}\in\mathcal{C}\). We now follow Lemmas 4, 5 and the argument above to find \(n_{1}\) such that \((H^{m}\circ\mathcal{T})(\Gamma^{\prime})\) contains a \(v\)-segment for all \(m\geq n_{1}\). The image of a \(v\)-segment under \(\mathcal{T}\) is a segment joining the left and right boundaries of \(S_{4}\). Noting Figure 2, we have that \(H^{-m}(\Gamma^{\prime})=(\mathcal{T}\circ H^{m}\circ\mathcal{T})(\Gamma^{\prime})\) traverses the parallelogram \(Q_{2}^{\prime}=A_{2}^{\prime}\cap S_{4}\), connecting its sloping boundaries. It was shown in [24] that if \(\Gamma^{\prime}\) traverses \(Q_{2}^{\prime}\) then \(H^{-1}(\Gamma^{\prime})\) contains a \(h\)-segment, so that \(H^{-n}(\gamma_{s}(z^{\prime}))\) contains a \(h\)-segment for all \(n\geq N=n_{0}+n_{1}+1\). Since \(z\) and \(z^{\prime}\) were arbitrary and \(h\)-segments and \(v\)-segments must always intersect, **(MR)** holds.
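Both involutions used in this proof can be checked computationally. A minimal sketch, assuming the OTM is the composition \(H=G\circ F\) of a horizontal and a vertical shear, both with the tent profile recalled in section 7 (the helper names below are ours):

```python
from fractions import Fraction as Fr
import random

def f(t):
    # tent profile of the OTM shears (recalled in section 7)
    t = t % 1
    return 2 * t if t <= Fr(1, 2) else 2 * (1 - t)

def H(z):
    # H = G o F: a horizontal shear by f, then a vertical shear by f
    x, y = z
    x = (x + f(y)) % 1
    return (x, (y + f(x)) % 1)

def H_inv(z):
    x, y = z
    y = (y - f(x)) % 1
    return ((x - f(y)) % 1, y)

def T(z):
    return ((1 - z[0]) % 1, (z[1] + Fr(1, 2)) % 1)

def TT(z):  # the involution \mathcal{T}(x, y) = (1 - y, 1 - x)
    return ((1 - z[1]) % 1, (1 - z[0]) % 1)

random.seed(0)
for _ in range(1000):
    z = (Fr(random.randrange(997), 997), Fr(random.randrange(997), 997))
    assert H_inv(H(z)) == z
    assert T(H(z)) == H(T(z))          # T o H = H o T
    assert TT(H_inv(z)) == H(TT(z))    # \mathcal{T} o H^{-1} = H o \mathcal{T}

# the hyperbolic period-2 orbit (1/4,1/4) <-> (3/4,3/4) used in section 5
p = (Fr(1, 4), Fr(1, 4))
assert H(p) == (Fr(3, 4), Fr(3, 4)) and H(H(p)) == p
print("symmetry and orbit checks passed")
```

Exact rational arithmetic avoids any floating-point ambiguity at the mod-\(1\) boundaries.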
**Remark 2**.: The \(v\)-segments \(\Lambda\) obtained above satisfy \(H^{-1}(\Lambda)\subset Q_{3}\), \(H^{-2}(\Lambda)\subset Q_{2}\) so that \(\Lambda\subset H(A_{3}\cap H(A_{2}))=\sigma_{3}\). Similarly the \(h\)-segments derived from these \(v\)-segments can be shown to lie in \(\sigma_{2}^{\prime}\).
## 5 Decay of correlations for the return map
As described in the introduction, we infer the polynomial decay under \(H\) from exponential decay of some induced return map \(H_{A}\), where returns to \(A\) experience 'strong' hyperbolic behaviour. The natural choice for \(A\), following the work of section 4, is the set \(\sigma\). We begin by proving the Bernoulli property for \(H_{\sigma}\).
### Bernoulli property
**Proposition 2**.: _The return map \(H_{\sigma}\) is Bernoulli with respect to the probability measure \(\mu_{\sigma}=\mu(\sigma)^{-1}\mu\)._
We will show the conditions **(KS1-3)** and **(MR)**; the result then follows from Theorem 2.
**Lemma 6**.: _The return map \(H_{\sigma}\) satisfies **(KS1-3)**._
Proof.: Starting with **(KS1)** we follow a similar approach to [20], their Lemma 4.1. We show that there exists \(a,C_{1}>0\) s.t. \(\forall\,\varepsilon>0\), \(\mu_{\sigma}(B_{\varepsilon}(S))\leq C_{1}\varepsilon^{a}\) for \(S=\mathcal{S}\cap\sigma_{1a}\); the argument for the rest of \(\mathcal{S}\) is similar and the result then follows by taking a larger \(C_{1}\). Recall the line segments \(\mathcal{L}_{k}\) from (\(\dagger\)) which for \(k\geq 3\) terminate on the points \((0,(k+1)/(4k+2))\) and \((1/(4k-2),(k-1)/(4k-2))\) on the line \(\mathcal{L}:y=1/4-x/2\). Let \(P(\varepsilon)\) denote the parallelogram in \(\sigma_{1a}\) of width \(2\sqrt{\varepsilon}\), height \(\sqrt{\varepsilon}\), with sides aligned with \(x=0\) and \(\mathcal{L}\) (see Figure 7). For small \(\varepsilon\), \(P(\varepsilon)\) contains all line segments \(\mathcal{L}_{k}\) where \(2\sqrt{\varepsilon}\geq 1/(4k-2)\), i.e. \(k\geq k_{0}=\lceil 1/(8\sqrt{\varepsilon})+1/2\rceil\), with
\[\mu(B_{\varepsilon}(P(\varepsilon)))=(2\sqrt{\varepsilon}+2\varepsilon)( \sqrt{\varepsilon}+2\varepsilon)=2\varepsilon+6\varepsilon^{3/2}+4\varepsilon^ {2}<12\varepsilon.\]
The ball \(B_{\varepsilon}(P(\varepsilon))\) then covers all of \(B_{\varepsilon}(S)\) except the collection \(\mathcal{L}_{k}\), \(4\leq k\leq k_{0}-1\) and the seven line segments \(L_{j}\) which terminate on \(y=2x\). The measure of the ball around these latter line segments satisfies
\[\mu(B_{\varepsilon}(\cup_{j}L_{j}))\leq 14\,\varepsilon\left(\max_{j}|L_{j}|+2 \varepsilon\right)<c_{1}\varepsilon\]
for some finite \(c_{1}\), so it remains to estimate \(\sum_{k=4}^{k_{0}-1}\mu(B_{\varepsilon}(\mathcal{L}_{k}))\). We can calculate
\[|\mathcal{L}_{k}|=\sqrt{\left(\frac{1}{4k-2}\right)^{2}+\left(\frac{k+1}{4k+2}- \frac{k-1}{4k-2}\right)^{2}}=\sqrt{\frac{8k^{2}+4k+1}{4(4k^{2}-1)^{2}}}<\frac{ 1}{k} \tag{21}\]
so that
\[\sum_{k=4}^{k_{0}-1}\mu(B_{\varepsilon}(\mathcal{L}_{k}))<2\varepsilon\sum_{k=4}^{k_{0}-1}\frac{1}{k}+4\varepsilon^{2}k_{0}<4\varepsilon^{2}k_{0}+2\varepsilon\log k_{0}<c_{2}\varepsilon^{a}\]
for some \(0<a<1\), \(c_{2}>0\) since \(k_{0}<\varepsilon^{-1/2}\) and there exists finite \(c\) such that \(c\,\varepsilon^{a}>\varepsilon\log\frac{1}{\varepsilon}\) for any \(0<a<1\).
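The identity and bound in (21) can be confirmed symbolically from the endpoints of the \(\mathcal{L}_{k}\); a short sympy sketch:

```python
import sympy as sp

k = sp.symbols('k', positive=True)
dx = 1 / (4 * k - 2)
dy = (k + 1) / (4 * k + 2) - (k - 1) / (4 * k - 2)

L2 = sp.simplify(dx**2 + dy**2)   # |L_k|^2
assert sp.simplify(L2 - (8 * k**2 + 4 * k + 1) / (4 * (4 * k**2 - 1)**2)) == 0

# |L_k| < 1/k  <=>  4 (4k^2 - 1)^2 - k^2 (8k^2 + 4k + 1) > 0
diff = sp.expand(4 * (4 * k**2 - 1)**2 - k**2 * (8 * k**2 + 4 * k + 1))
print(diff)                        # 56k^4 - 4k^3 - 33k^2 + 4
assert all(diff.subs(k, n) > 0 for n in range(1, 100))
```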
Since \(H_{\sigma}\) is piecewise linear, condition **(KS2)** follows trivially and we move onto **(KS3)**. Existence of Lyapunov exponents almost everywhere follows from Oseledets' theorem [10] provided that \(\max\{\log\|DH_{\sigma}\|,0\}\) is integrable. This follows from the fact that if \(z\in\sigma\) has return time \(r_{\sigma}(z)=k\), then the Jacobian of \(H_{\sigma}\) at \(z\) satisfies \(\|DH_{\sigma}\|\leq c_{1}k\) for some finite \(c_{1}>0\), and that the measure of the sets \(\{z\in\sigma\,|\,r_{\sigma}(z)=k\}\) is of order \(k^{-3}\). That these Lyapunov exponents are non-zero follows from Lemmas 1, 2 and an argument similar to that given for \(H\) in section 3.
**Lemma 7**.: _The return map \(H_{\sigma}\) satisfies **(MR)**._
For a.e. \(z\in\sigma\), the local manifolds \(\gamma_{u}(z)\), \(\gamma_{s}(z)\) under \(H_{\sigma}\) align with those of \(H\). Note that \(H_{\sigma}\) does not immediately inherit **(MR)** from \(H\): while successive images of local manifolds under \(H\) contain \(h\)-segments and \(v\)-segments, these segments may not lie in the successive images under \(H_{\sigma}\).
Let \(\mathcal{R}\) denote the quadrilateral \(\sigma_{3}\cap\sigma_{2}^{\prime}\) and \(\mathcal{R}^{\prime}=\sigma_{2}\cap\sigma_{3}^{\prime}\). Define a \(\mathfrak{h}\)-segment as a line segment spanning \(\mathcal{R}\) with endpoints on \(\partial\sigma_{3}\). Similarly define a \(\mathfrak{h}^{\prime}\)-segment as a line segment spanning \(\mathcal{R}^{\prime}\) with endpoints on \(\partial\sigma_{2}\). Examples are plotted in Figure 10. We will show that there exists \(M,N\) such that for all \(m\geq M\), \(n\geq N\), \(H_{\sigma}^{m}(\gamma_{u}(z))\) intersects
\(H_{\sigma}^{-n}(\gamma_{s}(z^{\prime}))\) in either \(\mathcal{R}\) or \(\mathcal{R}^{\prime}\).
By the remark after Theorem 4 we can find some \(n_{2}\) such that \(H^{-n_{2}}(\gamma_{s}(z^{\prime}))\) contains a \(h\)-segment in \(\sigma_{2}^{\prime}\), which in turn contains a \(\mathfrak{h}\)-segment in \(\mathcal{R}\). As a line segment in \(\sigma\), this \(\mathfrak{h}\)-segment lies in \(H_{\sigma}^{-n_{1}}(\gamma_{s}(z^{\prime}))\) for some \(n_{1}\leq n_{2}\). Note that we have a hyperbolic period-\(2\) orbit \((1/4,1/4)\leftrightarrow(3/4,3/4)\) under \(H_{\sigma}\), alternating between \(\mathcal{R}\) and \(\mathcal{R}^{\prime}\). Any \(\mathfrak{h}\)-segment \(\Lambda\) contains a point \(\zeta\) on the unstable manifold through \((1/4,1/4)\) so \(H_{\sigma}^{-2}(\Lambda)\) contains a point \(\zeta^{\prime}\) on the manifold closer to \((1/4,1/4)\) and extends beyond the boundaries \(\partial\sigma_{3}\) by the expansion of \(H_{\sigma}^{-2}\). Hence \(H_{\sigma}^{-2}(\Lambda)\) contains a \(\mathfrak{h}\)-segment and by induction so does \(H_{\sigma}^{-2k}(\Lambda)\) for all \(k\geq 1\). The odd iterates \(H_{\sigma}^{-2k+1}(\Lambda)\) similarly span \(\mathcal{R}^{\prime}\) so that
* Given arbitrary \(z^{\prime}\in\sigma\), there exists \(N\) such that for all \(n\geq N\) the image \(H_{\sigma}^{-n}(\gamma_{s}(z^{\prime}))\) contains a \(\mathfrak{h}\)-segment or a \(\mathfrak{h}^{\prime}\)-segment.
Define a \(v^{\prime}\)-segment as a line segment vertically spanning \(\sigma_{2}\cap S_{4}\). Condition **(MR)** now follows from establishing
* Given arbitrary \(z\in\sigma\), there exists \(M\) such that for all \(m\geq M\) the image \(H_{\sigma}^{m}(\gamma_{u}(z))\) contains both a \(v\)-segment _and_ a \(v^{\prime}\)-segment.
Recall the quadrilaterals \(\varsigma_{j}\subset\sigma_{j}\) of points with return time \(1\) (see Figure 4(a)). It follows from the definitions of the \(\sigma_{j}\) that \(H(\varsigma_{2})\subset\sigma_{3}\) and \(H(\varsigma_{3})\subset\sigma_{2}\). The edges of \(\varsigma_{3}\) on the \(A_{1},A_{2}\) boundary map into the lines \(x=1/2\), \(x=1\), in particular onto the red dashed lines on the boundary of \(S_{2}\) in Figure 4(b), so that the image of any line segment in \(\varsigma_{3}\) which joins these edges contains a \(v^{\prime}\)-segment. An analogous result holds for line segments traversing \(\varsigma_{2}\) and, since this behaviour occurs within the return set \(\sigma\), we have that \(v\)-segments map into \(v^{\prime}\)-segments under \(H_{\sigma}\) and vice versa. It follows that \(v\)-segments and \(v^{\prime}\)-segments each map into segments of the same type under \(H_{\sigma}^{2}\). It suffices, then, to establish the claim along odd iterates to satisfy the 'and' condition in \((\wedge)\).
By following the steps (M1-3) in the proof of Theorem 4 we can find \(m_{2}\) such that \(H^{m_{2}}(\gamma_{u}(z))\) contains a \(v\)-segment \(\Gamma\subset\sigma_{3}\cap S_{1}\) with \(H^{-1}(\Gamma)\subset A_{3}\), \(H^{-2}(\Gamma)\subset A_{2}\), \(H^{-3}(\Gamma)\subset A_{1}\), \(H^{-4}(\Gamma)\subset A_{4}\). In particular \(H^{-1}(\Gamma)\) lies in \(H(A_{2}\cap H(A_{1}))\), i.e. outside of \(\sigma_{2}\), so that \(\Gamma\) lies in \(\sigma_{3}\setminus(H(\sigma_{2})\cap\sigma_{3})\). The set \(H(\sigma_{2})\cap S_{1}\) (shown in blue in Figure 11) is the quadrilateral with corners \((7/68,0)\), \((3/34,0)\), \((27/68,1/2)\), \((7/17,1/2)\), which splits \(\sigma_{3}\cap S_{1}\) into left and right parts. We assume first that \(\Gamma\) lies in the right part, intersecting the line \(y=1/2\) at some point \((x_{1},1/2)\) with \(x_{1}\geq 7/17\) and the \(A_{1},A_{2}\) boundary \(y=1/2-x/2\) at some point \((1-2y_{1},y_{1})\). These intersections define a line segment \(\Gamma_{1}\subset\Gamma\), which lies in \(A_{1}\), as shown in Figure 11. Applying \(F\) maps \((x_{1},1/2)\) to itself (wrapping horizontally around the torus) and maps \((1-2y_{1},y_{1})\) to \((0,y_{1})\). Applying \(G\) then leaves \((0,y_{1})\) invariant and wraps \((x_{1},1/2)\) vertically around the torus to \((x_{1},1/2+2x_{1})\bmod 1\equiv(x_{1},-1/2+2x_{1})\). Since \(x_{1}\geq 7/17\) we have that \(-1/2+2x_{1}\geq 11/34>10/34\geq 1/2-x_{1}/2\) so that \((x_{1},-1/2+2x_{1})\) lies above the line \(y=1/2-x/2\). We restrict again to \(A_{1}\), giving \(\Gamma_{2}\subset H(\Gamma_{1})\) with endpoints on \((x_{1},-1/2+2x_{1})\) and some point \((1-2y_{2},y_{2})\) on \(y=1/2-x/2\). This line meets \(y=-1/2+2x\) at \((2/5,3/10)\) so that \(y_{2}\leq 3/10\) (see Figure 11). Now \(F(\Gamma_{2})\) joins \((0,y_{2})\) to \((5x_{1}-2,2x_{1}-1/2)\) and so \(H(\Gamma_{2})\) joins \((0,y_{2})\) to \((5x_{1}-2,12x_{1}-9/2)\). The set \(A_{3,2}^{1}\) is bounded by the parallel lines \(y=7/16-5x/8\) and \(y=3/8-5x/8\). Since \(y_{2}<3/8\) and \(12x_{1}-9/2\geq 15/34>109/272\geq 7/16-5(5x_{1}-2)/8\) we have that \(H(\Gamma_{2})\) contains a segment \(\Gamma_{3}\) which traverses \(A_{3,2}^{1}\). The image \(H(\Gamma_{3})\) then traverses \(Q_{3}\) so that \(H^{2}(\Gamma_{3})\subset H^{4}(\Gamma)\) contains a \(v\)-segment in \(\sigma_{3}\). Critically
we have that \(\Gamma_{2},\Gamma_{3}\subset H(A_{1})\subset\sigma\) but \(\Gamma_{4}:=H(\Gamma_{3})\subset H(A_{2}\cap H(A_{1}))\), which is not in \(\sigma\). Hence \(H^{3}_{\sigma}(\Gamma)\) contains a \(v\)-segment and we can apply the \(H^{2}_{\sigma}\) result above to show that \(H^{k}_{\sigma}(\Gamma)\) contains a \(v\)-segment for all \(k\geq 2\). Similar analysis can be applied to \(\Gamma\) in the left portion of \(\sigma_{3}\setminus(H(\sigma_{2})\cap\sigma_{3})\). It follows that \(H^{k}_{\sigma}(\Gamma)\) contains a \(v^{\prime}\)-segment for all \(k\geq 3\), establishing \((\wedge)\) with \(M=m_{2}+3\).
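The rational inequalities used in the point chase above can be verified mechanically; a sympy sketch at the extremal value \(x_{1}=7/17\) (both left-hand sides increase and both right-hand sides decrease in \(x_{1}\), so the chains persist on \([7/17,1/2]\)):

```python
import sympy as sp

lo = sp.Rational(7, 17)   # extremal value of x1
assert sp.Rational(-1, 2) + 2 * lo == sp.Rational(11, 34)
assert sp.Rational(11, 34) > sp.Rational(10, 34) >= sp.Rational(1, 2) - lo / 2

assert 12 * lo - sp.Rational(9, 2) == sp.Rational(15, 34)
assert sp.Rational(15, 34) > sp.Rational(109, 272)
assert sp.Rational(109, 272) == sp.Rational(7, 16) - 5 * (5 * lo - 2) / 8
print("point-chase inequalities hold at x1 = 7/17, hence on [7/17, 1/2]")
```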
### Invariant cones
We now derive specific unstable and stable cone fields for the return map \(H_{\sigma}\), wide enough to ensure invariance **(H1.1)** yet fine enough to produce tight bounds on expansion factors, vital for verifying **(H5)**. Define the cones \(\mathcal{C}_{1},\ldots,\mathcal{C}_{4}\) by
* \(3|v_{1}|\geq|v_{2}|\geq 7|v_{1}|/3\), \(v_{1}v_{2}>0\),
* \(5|v_{1}|/3\geq|v_{2}|\geq\varphi|v_{1}|\), \(v_{1}v_{2}<0\),
* \(5|v_{1}|/3\geq|v_{2}|\geq\varphi|v_{1}|\), \(v_{1}v_{2}>0\),
* \(3|v_{1}|\geq|v_{2}|\geq 7|v_{1}|/3\), \(v_{1}v_{2}<0\),
and the following stable cones
* \(|v_{1}|\geq|v_{2}|\),
* \(9/10\geq v_{2}/v_{1}\geq-8/10\),
* \(8/10\geq v_{2}/v_{1}\geq-9/10\),
* \(|v_{1}|\geq|v_{2}|\).
In the notation of section 2, for general \(z\in\sigma\) we take \(C_{z}^{u}=\mathcal{C}_{j}\) and \(C_{z}^{s}=\mathcal{C}_{j}^{s}\) for \(z\in\sigma_{j}\). These cone fields are plotted in Figure 12.
**Lemma 8**.: _The above cones satisfy \(DH_{\sigma}\,C_{z}^{u}\subset C_{z^{\prime}}^{u}\) and \(DH_{\sigma}\,C_{z}^{s}\supset C_{z^{\prime}}^{s}\) for all \(z\in\sigma\) where \(DH_{\sigma}\) exists, \(z^{\prime}=H_{\sigma}(z)\)._
Proof.: We begin with the unstable cones. Table 2 shows the possible values of \(DH_{\sigma}\) at \(z\) if \(z\in\sigma_{i}\) and \(z^{\prime}\in\sigma_{j}\). The calculations for \(z^{\prime}\in\sigma_{1},\sigma_{4}\) are similar to those made in the proof of Lemma 4, noting that each \(\mathcal{C}_{j}\) is contained within \(\mathcal{C}\) and, for example, \(M_{1}M_{j}^{k}\mathcal{C}\subset\mathcal{C}_{1}\) for \(j=2,3\), and \(k\geq 0\). For \(z^{\prime}\in\sigma_{2}\) we verify that \(M_{2}M_{3}^{k}(-1,3)^{T}=\left(-1\right)^{k}\left(24k+5,-40k-7\right)^{T}\in\mathcal{C}_{2}\) and \(M_{2}M_{3}^{k}(-3,7)^{T}=\left(-1\right)^{k}\left(60k+11,-100k-15\right)^{T}\in\mathcal{C}_{2}\) for all \(k\geq 1\) so that \(DH_{\sigma}\,C_{z}^{u}\subset C_{z^{\prime}}^{u}\) for \(z\in\sigma_{4}\). For \(z\in\sigma_{3}\) we have \(M_{2}(3,5)^{T}=(13,-21)^{T}\in\mathcal{C}_{2}\) and \(M_{2}(13,21)^{T}=(55,-89)^{T}\in\mathcal{C}_{2}\), ensuring invariance in this particular case also, despite \(M_{2}\) being non-hyperbolic. Entirely symmetric calculations can be made for \(z^{\prime}\in\sigma_{3}\), verifying the result for all unstable cones.
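These products can be reproduced symbolically. A sketch, using our reconstruction of the branch Jacobians \(M_{2}=\left(\begin{smallmatrix}1&2\\-2&-3\end{smallmatrix}\right)\), \(M_{3}=\left(\begin{smallmatrix}1&-2\\2&-3\end{smallmatrix}\right)\), \(M_{4}=\left(\begin{smallmatrix}1&-2\\-2&5\end{smallmatrix}\right)\) (not restated in this section, but consistent with every closed form quoted in the text) and reading \(\varphi\) as the golden ratio:

```python
from fractions import Fraction as Fr
import sympy as sp

M2 = sp.Matrix([[1, 2], [-2, -3]])
M3 = sp.Matrix([[1, -2], [2, -3]])
M4 = sp.Matrix([[1, -2], [-2, 5]])

PHI = (1 + 5 ** 0.5) / 2

def in_C2(v):
    # C2: 5|v1|/3 >= |v2| >= phi |v1| with v1 v2 < 0
    v1, v2 = int(v[0]), int(v[1])
    return v1 * v2 < 0 and PHI <= float(Fr(abs(v2), abs(v1))) <= 5 / 3

for k in range(1, 30):
    # closed form of M4 M3^k also quoted in section 5.4
    assert M4 * M3**k == (-1)**k * sp.Matrix([[2*k+1, -2*k-2], [-6*k-2, 6*k+5]])
    # boundary vectors of C4 are carried into C2, as claimed
    assert in_C2(M2 * M3**k * sp.Matrix([-1, 3]))
    assert in_C2(M2 * M3**k * sp.Matrix([-3, 7]))

# the non-hyperbolic case: M2 on the boundary vectors of C3
assert list(M2 * sp.Matrix([3, 5])) == [13, -21]
assert list(M2 * sp.Matrix([13, 21])) == [55, -89]
print("products and cone inclusions verified")
```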
For the stable cones, we remark that taking \(C_{z}^{s}=\mathcal{C}_{1}^{s}\) for all \(z\in\sigma\) would satisfy \(DH_{\sigma}\,C_{z}^{s}\supset C_{z^{\prime}}^{s}\), but since \(M_{2}^{-1}(1,-1)^{T}=(-1,1)^{T}\) we would be unable to derive sufficient uniform bounds on expansion factors **(H1.2)**. The matrix \(M_{3}^{-1}\) exhibits a similar problem, so we must slim down the cones \(C_{z}^{s}\) when \(DH_{\sigma}^{-1}\in\{M_{2}^{-1},M_{3}^{-1}\}\) which, observing Table 2, is for \(z\in\sigma_{2},\sigma_{3}\); for such \(z\) we take the narrower cones \(\mathcal{C}_{2}^{s},\mathcal{C}_{3}^{s}\) above. As these cones lie in the wider invariant cone \(|v_{1}|\geq|v_{2}|\), the lemma follows from checking that \(DH_{\sigma}\,C_{z}^{s}\supset C_{z^{\prime}}^{s}\) for \(z\in\sigma_{2},\sigma_{3}\). This can be verified via direct calculations.
Figure 12: Unstable and stable cone fields \(\mathcal{C}_{j}\), \(\mathcal{C}_{j}^{s}\) over the subsets \(\sigma_{j}\subset\sigma\) for the return map \(H_{\sigma}\). Also shown in red are the gradients of the line segments which make up the boundary \(\partial\sigma\) which lie outside of all cone fields.
### Structure of the singularity set
Using the notation of **(H2)** in section 2, let \(\mathcal{S}_{0}=\cup_{j}\partial\sigma_{j}\), the union of \(\partial\sigma\) and the red dashed lines in Figure 4(b). The set \(M=\Omega\setminus\mathcal{S}_{0}\) is clearly dense in \(\Omega\) and \(H_{\sigma}\) is a \(C^{2}\) diffeomorphism from \(M\setminus\mathcal{S}_{1}\) onto \(M\setminus\mathcal{S}_{-1}\), being linear on each component.
The set \(\mathcal{S}_{0}\cup\mathcal{S}_{1}\) is the countable union of bounded line segments with the endpoints of each segment terminating on another segment, giving **(H2.2)**.
The gradients of the segments in \(\mathcal{S}_{0}\) take values in \(\{\pm 8/5,\pm 2,\infty\}\), which avoid the unstable and stable cones \(C^{u}_{z}\), \(C^{s}_{z}\) (see Figure 12). The gradients of singularity curves in \(\sigma_{1}\) and \(\sigma_{4}\) are bounded between \(-1\) and \(1\) (approaching these limits as we approach the accumulation points) so lie in \(\mathcal{C}^{s}_{1},\mathcal{C}^{s}_{4}\). The gradients of singularity curves in \(\sigma_{2}\) and \(\sigma_{3}\) are bounded between \(-11/14\) and \(11/14\) so lie in \(\mathcal{C}^{s}_{2},\mathcal{C}^{s}_{3}\) since \(11/14<8/10\). Similar calculations show that the gradients of segments in \(\mathcal{S}_{-1}\) lie in unstable cones.
We conclude this section by showing **(H2.4)**. Condition (7) can only fail when \(\|DH_{\sigma}\|\) becomes unbounded, i.e. at points \(z\) approaching the accumulation points. We consider the case with \(z\in A^{k}_{4,2}\) near \((0,1/4)\); the other cases are similar. Recall Figure 7 and the lines \(\mathcal{L}_{k}\) from \((\dagger)\). We note that \(d(z,\mathcal{S}_{1})\) is bounded above by the length of the segment joining \(z=(x,y)\) to \((x,y_{k}(x))\) on \(\mathcal{L}_{k}\), which in turn is bounded above by the height of the segment joining \((x,y_{k}(x))\) to \((x,y_{k-1}(x))\) on \(\mathcal{L}_{k-1}\). This height is
\[y_{k-1}(x)-y_{k}(x)\leq y_{k-1}\left(\frac{1}{4k-2}\right)-y_{k}\left(\frac{1 }{4k-2}\right)=\frac{1}{2(2k-1)^{2}}\leq c_{1}/k^{2}\]
for some constant \(c_{1}>0\). The operator norm of \(DH_{\sigma}\) over \(A^{k}_{4,2}\) satisfies \(\|DH_{\sigma}\|\leq c_{2}k\) for some \(c_{2}>0\) so that (7) holds for some \(c>0\) whenever we choose \(b>1/2\).
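The height estimate above follows from the endpoints of the \(\mathcal{L}_{k}\) recalled in the proof of Lemma 6; a sympy sketch:

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)

def y_line(m):
    # L_m through (0, (m+1)/(4m+2)) and (1/(4m-2), (m-1)/(4m-2))
    y0 = (m + 1) / (4 * m + 2)
    y1 = (m - 1) / (4 * m - 2)
    return y0 + (y1 - y0) * (4 * m - 2) * x   # run of the segment is 1/(4m-2)

gap = sp.simplify(y_line(k - 1) - y_line(k))
# the gap grows with x, so it is largest at the right endpoint x = 1/(4k-2)
val = sp.simplify(gap.subs(x, 1 / (4 * k - 2)))
assert sp.simplify(val - 1 / (2 * (2 * k - 1)**2)) == 0
print(val)   # 1/(2*(2k - 1)**2), matching the bound above
```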
### One-step expansion
We will verify (8) for the map \(f=H_{\sigma}^{2}\), \(q=1/2\). We begin with a basic statement on expansion over unstable curves.
**Lemma 9**.: _Let \(M\) be the constant Jacobian of \(f\) over \(W_{i}\), \(V_{i}=f(W_{i})\). Then_
\[\lambda^{-}:=\inf_{v\in\mathcal{C}}\frac{\|Mv\|}{\|v\|}\leq\frac{|V_{i}|}{|W_{ i}|}\leq\sup_{v\in\mathcal{C}}\frac{\|Mv\|}{\|v\|}=:\lambda^{+}.\]
Proof.: Given any \(\varepsilon>0\), consider a piecewise linear approximation \(\hat{W_{i}}\) to \(W_{i}\) such that
\[\left|\frac{|V_{i}|}{|W_{i}|}-\frac{|\hat{V_{i}}|}{|\hat{W_{i}}|}\right|<\varepsilon\]
where \(\hat{V_{i}}=f(\hat{W_{i}})\) gives a piecewise approximation for \(V_{i}\). Each of the piecewise components will be line segments aligned with vectors in \(\mathcal{C}\) so that their expansion factors will be bounded by \(\lambda^{\pm}\), giving the result.
We also derive basic inequalities on the length of a given \(W_{i}\).
**Lemma 10**.: _Let \(\mathcal{L}_{0},\mathcal{L}_{1}\) be the singularity curves on which \(W_{i}\) terminates, write these intersections as \((x_{0},y_{0})\) and \((x_{1},y_{1})\). Then_
\[\sqrt{(x_{1}-x_{0})^{2}+(y_{1}-y_{0})^{2}}\leq|W_{i}|\leq|y_{1}-y_{0}|\sqrt{1+ \frac{1}{g^{2}}}\]
_where \(g=\inf|v_{2}/v_{1}|\) over \((v_{1},v_{2})^{T}\in\mathcal{C}\)._
Proof.: Noting that the lower bound is trivial, we focus on the upper bound. Since \(g>0\) for all unstable cones \(\mathcal{C}\), the projection of \(W_{i}\) to the \(y\)-axis is injective. Without loss of generality suppose \(y_{1}>y_{0}\), then we can parameterise \(W_{i}\) as a curve \((x(y),y)\) for \(y_{0}\leq y\leq y_{1}\). Now
\[|W_{i}| =\int_{y_{0}}^{y_{1}}\sqrt{\left(\frac{\mathrm{d}x}{\mathrm{d}y} \right)^{2}+\left(\frac{\mathrm{d}y}{\mathrm{d}y}\right)^{2}}\,\mathrm{d}y\] \[\leq(y_{1}-y_{0})\sup_{y_{0}\leq y\leq y_{1}}\sqrt{1+\left(\frac {\mathrm{d}x}{\mathrm{d}y}\right)^{2}}\] \[\leq(y_{1}-y_{0})\sqrt{1+\frac{1}{g^{2}}}\]
as tangent vectors \((x^{\prime}(y),1)^{T}\) to \(W_{i}\) lie in \(\mathcal{C}\).
Let \(P_{1}=\{(0,1/4),(1/2,1/4),(1/2,3/4),(1,3/4)\}\) denote the accumulation points of the same type as that of \(\sigma_{1a}\), and \(P_{2}=\{(1/4,1/2),(1/4,1),(3/4,0),(3/4,1/2)\}\) the accumulation points of the same type as that of \(\sigma_{1b}\). Let \(\varepsilon\) be small. Given a set \(P\), let \(B_{\varepsilon}(P)\) denote the union of the balls \(B_{\varepsilon}(p)\cap\sigma\), centred at \(p\in P\) of radius \(\varepsilon\). The following describes the images of balls about \(P_{1}\cup P_{2}\) under \(H_{\sigma}\).
**Lemma 11**.: _Given small \(\varepsilon>0\), there exists some \(\varepsilon^{\prime}>0\) such that \(H_{\sigma}(B_{\varepsilon}(P_{1}\cup P_{2}))\) covers \(B_{\varepsilon^{\prime}}(P_{1}\cup P_{2})\)._
Proof.: We describe the covering of \(B_{\varepsilon^{\prime}}((1/2,3/4))\); the analysis for the other points in \(P_{1}\cup P_{2}\) is analogous. For any \(\varepsilon>0\), \(B_{\varepsilon}(P_{1}\cup P_{2})\) contains the sets \(A_{4,3}^{k}\) for all \(k\geq k_{0}\) where \(k_{0}\in\mathbb{N}\) depends on \(\varepsilon\). Each \(A_{4,3}^{k}\) consists of two quadrilaterals, one in the ball around \((1/4,1)\) and the other in the ball around \((1/4,1/2)\). Figure 13(a) shows this latter quadrilateral, with corners on the points
\[r_{1}=\left(\frac{k+1}{4k+2},\frac{k+1}{2k+1}\right),\quad r_{2}=\left(\frac{k -1}{4k-2},\frac{1}{2}\right),\quad r_{3}=\left(\frac{k}{4k+2},\frac{1}{2} \right),\quad r_{4}=\left(\frac{k+2}{4k+6},\frac{k+2}{2k+3}\right).\]
Since \(DH_{\sigma}\) is constant on \(A_{4,3}^{k}\), given by the integer valued matrix \(M_{4}M_{3}^{k}=(-1)^{k}\left(\begin{smallmatrix}2k+1&-2k-2\\-6k-2&6k+5\end{smallmatrix}\right)\), its image \(H_{\sigma}\left(A_{4,3}^{k}\right)\) is given by the quadrilateral with corners given by \(M_{4}M_{3}^{k}\,r_{j}^{T}\) mod \(1\). For odd \(k\) we can calculate these corners as
\[r_{1}^{\prime}(k)=\left(\frac{1}{2}+\frac{1}{4k+2},\frac{3}{4}-\frac{5}{8k+4} \right),\quad r_{2}^{\prime}(k)=\left(\frac{1}{2}+\frac{1}{4k-2},\frac{3}{4}- \frac{5}{8k-4}\right),\]
\[r_{3}^{\prime}(k)=\left(\frac{1}{2},\frac{3}{4}+\frac{1}{8k+4}\right),\quad r _{4}^{\prime}(k)=\left(\frac{1}{2},\frac{3}{4}+\frac{1}{8k+12}\right),\]
shown in Figure 13(b). For even \(k\) the corners of \(A_{4,3}^{k}\) in the ball around \((1/4,1)\) map into the \(r_{j}^{\prime}\). Writing this quadrilateral as \(Q(k)\), since \(r_{2}^{\prime}(k+1)\) = \(r_{1}^{\prime}(k)\) and \(r_{3}^{\prime}(k+1)\) = \(r_{4}^{\prime}(k)\) we have that \(\cup_{k\geq k_{0}}Q(k)\) is the polygon with corners \(r_{2}^{\prime}(k_{0})\), \(r_{3}^{\prime}(k_{0})\), and \(\lim_{k\to\infty}r_{1}^{\prime}(k)=\lim_{k\to\infty}r_{4}^{\prime}(k)=(1/2,3/4)\). Noting that \(r_{3}^{\prime}(k_{0})>3/4\) and \(r_{2}^{\prime}(k_{0})\) lies on the line \(y-\frac{3}{4}=-\frac{5}{2}(x-\frac{1}{2})\), there exists \(\varepsilon^{\prime}\) such that \(\cup_{k\geq k_{0}}Q(k)\) covers all points \((x,y)\in B_{\varepsilon^{\prime}}((1/2,3/4))\) with \(y\geq\frac{3}{4}-\frac{5}{2}(x-\frac{1}{2})\). The image \(H_{\sigma}(B_{\varepsilon}((1,3/4))\cap A_{4})\) fills the remaining portion of \(B_{\varepsilon^{\prime}}((1/2,3/4))\), since \(H_{\sigma}=H\) on \(A_{4}\), \(H(1,3/4)=(1/2,3/4)\), \(DH\,(-2,1)^{T}=(0,-1)^{T}\), and \(DH\,(0,-1)^{T}=(2,-5)^{T}\).
**Proposition 3**.: _Condition (8) holds for \(H_{\sigma}^{2}\) when there exists \(\varepsilon>0\) such that \(W\cap B_{\varepsilon}(P_{1}\cup P_{2})=\varnothing\)._
Proof.: We claim that an unstable curve \(W\) of vanishing length, bounded away from the accumulation points, is split into at most \(9\) components \(W_{i}\) by the singularity set for \(H_{\sigma}^{2}\). The upper bound follows from analysis of the original singularity set for \(H_{\sigma}\). Let \(P_{F}\) denote the set of fixed points under \(H\), \(P_{F}=\{(0,1/2),(1/2,0),(1/2,1/2),(1,1)\}\). Observing Figure 4, if \(W\cap B_{\varepsilon}(P_{F})\neq\varnothing\) then \(W\) is split by \(\mathcal{S}\) into at most \(5\) components \(W_{j}\), and if \(W\cap B_{\varepsilon}(P_{F})=\varnothing\) then the upper bound is \(3\). We consider these cases separately.
Take, for example, \(W\cap B_{\varepsilon}((0,1/2))\neq\varnothing\). Observing Figure 8, four of the components \(W_{j}\) map into \(A_{4}^{\prime}\) under \(H_{\sigma}\), and their images lie in some sector \(B_{\varepsilon^{\prime}}((1,1/2))\cap A_{4}^{\prime}\). We can take \(\varepsilon\) small enough that this sector lies entirely in \(A_{4}\), so that no further splitting occurs during the next iterate of \(H_{\sigma}\). The other component \(W\cap A_{1}\) maps into some sector \(B_{\varepsilon^{\prime}}((0,1/2))\cap A_{1}^{\prime}\) and is split into at most \(5\) components, giving at most \(N=9\) components in total. The other cases \(W\cap B_{\varepsilon}(p)\neq\varnothing\), \(p\in P_{F}\), are analogous. Now suppose \(W\cap B_{\varepsilon}(P_{F})=\varnothing\). \(\mathcal{S}\) splits \(W\) into at most \(3\) components \(W_{j}\) and, by Lemma 11 and the above, each \(H_{\sigma}(W_{j})\) is bounded away from the accumulation points \(P_{1}\cup P_{2}\) and the fixed points \(P_{F}\). Hence each \(H_{\sigma}(W_{j})\) is split into at most \(3\) components during the next iterate of \(H_{\sigma}\), again giving at most \(N=9\) components in total.
The weakest expansion of \(DH_{\sigma}^{2}\) over cones \(\mathcal{C}_{j}\) on \(\sigma_{j}\), using the Euclidean norm, is that of \(M_{1}M_{4}=\left(\begin{smallmatrix}-3&8\\-8&21\end{smallmatrix}\right)\) on \(\sigma_{1}\) (or equivalently \(M_{4}M_{1}\) on \(\sigma_{4}\)), and is given by

\[c=\frac{\|M_{1}M_{4}(3,7)^{T}\|}{\|(3,7)^{T}\|}=\sqrt{\frac{(-9+56)^{2}+(-24+147)^{2}}{3^{2}+7^{2}}}=\sqrt{\frac{8669}{29}}\approx 17.29\]
so that, by Lemma 9, \(|V_{i}|\geq c\,|W_{i}|\) for each component \(W_{i}\). Now for \(q=1/2\) we have
\[\sum_{i}\left(\frac{|W|}{|V_{i}|}\right)^{q}\frac{|W_{i}|}{|W|} =\sum_{i}\sqrt{\frac{|W|}{|V_{i}|}}\frac{|W_{i}|}{|W|}\] \[=\sum_{i}\sqrt{\frac{|W_{i}|}{|V_{i}|}}\sqrt{\frac{|W_{i}|}{|W|}}\] \[\leq\frac{1}{\sqrt{c}}\sum_{i=1}^{N}\sqrt{\frac{|W_{i}|}{|W|}}.\]
Letting \(x_{i}=|W_{i}|/|W|\) and taking vectors \(u=(\sqrt{x_{1}},\ldots,\sqrt{x_{N}})\), \(v=(1,\ldots,1)\) we have that \(\sum_{i=1}^{N}x_{i}=1\) and so \(\left(\sum_{i=1}^{N}\sqrt{x_{i}}\right)^{2}=(u\cdot v)^{2}\leq(u\cdot u)\,(v \cdot v)=\left(\sum_{i=1}^{N}x_{i}\right)N=N\) by the Cauchy-Schwarz inequality. Hence
\[\sum_{i}\left(\frac{|W|}{|V_{i}|}\right)^{q}\frac{|W_{i}|}{|W|}\leq\frac{\sqrt {N}}{\sqrt{c}}<1\]
since \(N\leq 9<c\).
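The constant \(c\) can be reproduced by minimising the expansion of \(M_{1}M_{4}\) over \(\mathcal{C}_{1}\); a numerical sketch (the matrix is our reconstruction, consistent with the image vector \((47,123)^{T}\) of \((3,7)^{T}\) used above):

```python
import numpy as np

M = np.array([[-3, 8], [-8, 21]])   # reconstruction of M1 M4; M @ (3,7) = (47, 123)

ms = np.linspace(7/3, 3, 100_001)   # gradients spanning the cone C1
vs = np.stack([np.ones_like(ms), ms])
ratios = np.linalg.norm(M @ vs, axis=0) / np.linalg.norm(vs, axis=0)

c = ratios.min()                    # attained at the boundary gradient m = 7/3
print(c, (8669 / 29) ** 0.5)        # both approximately 17.29
assert abs(c - (8669 / 29) ** 0.5) < 1e-9
assert (9 / c) ** 0.5 < 1           # sqrt(N)/sqrt(c) < 1 with N = 9
```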
**Proposition 4**.: _Condition (8) holds for \(H_{\sigma}^{2}\) when \(W\cap B_{\varepsilon}(P_{2})\neq\varnothing\) for all \(\varepsilon>0\)._
Proof.: We begin with the case \(W\cap B_{\varepsilon}((1/4,1/2))\neq\varnothing\) and let \(\varepsilon\to 0\). We may choose \(\delta\) sufficiently small so that \(W\) intersects \(A_{1}\cap\sigma\) and some collection of sets \(A_{4,3}^{k}\), \(k_{0}\leq k\leq k_{1}\), where \(k_{1}\to\infty\) as \(\varepsilon\to 0\), \(k_{0}\to\infty\) as \(\delta\to 0\). Therefore, \(\mathcal{S}\) splits \(W\) into a lower component \(W_{*}\subset A_{1}\cap\sigma\) and upper components \(W_{k}\subset A_{4,3}^{k}\), illustrated in Figure 13(a). We study how the images of these components under \(H_{\sigma}\) are split up by \(\mathcal{S}\).
Recall the corners \(r_{j}(k)\) which define \(A_{4,3}^{k}\) near \((1/4,1/2)\). The curve \(W_{k}\) has endpoints on \(r_{1}r_{2}\) and \(r_{3}r_{4}\), and all tangent vectors to \(W\) lie in \(\mathcal{C}_{1}\). For odd \(k\) the image \(U_{k}=H_{\sigma}(W_{k})\) is a curve joining \(r_{1}^{\prime}r_{2}^{\prime}\) to \(r_{3}^{\prime}r_{4}^{\prime}\), with tangent vectors aligned in \(M_{4}M_{3}^{k}\,\mathcal{C}_{1}\). This curve is split by \(\mathcal{S}\) into an upper portion \(U_{k,*}\subset A_{4}\cap\sigma\), and a collection \(U_{k,l}\subset A_{1,3}^{l}\) for some consecutive range \(l_{0}\leq l\leq l_{1}\) which depends on \(k\). Each \(A_{1,3}^{l}\) is bounded by the lines
\[\mathcal{L}_{l}:y-\frac{1}{2}=\frac{2l}{2l+1}(x-1/4) \tag{22}\]
and \(\mathcal{L}_{l-1}\), hence a lower bound on \(l_{0}(k)\) is given by the largest \(l\) such that \(r_{2}^{\prime}(k)\) lies on or above \(\mathcal{L}_{l-1}\). One can verify that \(r_{2}^{\prime}(k)\) lies on \(\mathcal{L}_{l-1}\) when \(k=7l-4\) and approaches \((1/2,3/4)\) monotonically in \(x\) and \(y\) so that \(l_{0}(k)\geq\lfloor\frac{k+4}{7}\rfloor\). To determine an upper bound on \(l_{1}\), note that \(r_{4}^{\prime}r_{1}^{\prime}\) lies on the line
\[y-\frac{3}{4}-\frac{1}{8k+12}=-\frac{6k+8}{2k+3}\left(x-\frac{1}{2}\right), \tag{23}\]
Figure 13: Part (a) shows an unstable curve \(W\) passing near to the accumulation point \((1/4,1/2)\), split into \(W_{\star}\) below \(y=1/2\) and the collection \(W_{k}\subset A_{4,3}^{k}\). Part (b) shows the image \(U_{k}=H_{\sigma}(W_{k})\subset H_{\sigma}\left(A_{4,3}^{k}\right)\), which for odd \(k\) lies near the accumulation point \((1/2,3/4)\) and contains subcurves \(U_{k,l}\subset A_{1,3}^{l}\).
meeting the \(A_{4}\) boundary \({\cal L}:y=1/2+x/2\) at the point
\[(x_{k},y_{k})=\left(\frac{7k+10}{14k+19},\frac{21k+29}{28k+38}\right). \tag{24}\]
We similarly calculate that the line \({\cal L}_{l}\) meets \(y=1/2+x/2\) at the point
\[(X_{l},Y_{l})=\left(\frac{l}{2l-1},\frac{1-3l}{2-4l}\right).\]
The intersection of \(U_{k}\) with \(y=1/2+x/2\) must be some point \((x,1/2+x/2)\) with \(x\geq x_{k}\) so that an upper bound on \(l_{1}(k)\) is the smallest \(l\) such that \(x_{k}\geq X_{l}\), which reduces to \(l\geq 7k+10\), hence \(l_{1}(k)\leq\lceil 7k+10\rceil=7k+10\). For even \(k\) the splitting behaviour is entirely analogous, with \(H_{\sigma}(W_{k})\) intersecting \({\cal S}\) in the neighbourhood of \((1,1/4)\).
For the lower component \(W_{\star}\), the image \(U_{\star}=H_{\sigma}(W_{\star})=H(W_{\star})\) lies in a neighbourhood of \(H(1/4,1/2)=(1/4,1)\) and is split by \({\cal S}\) into a collection \(U_{\star,j}\subset A_{4,3}^{j}\), \(j_{0}\leq j\leq j_{1}\), where \(j_{1}\to\infty\) as \(\varepsilon\to 0\), \(j_{0}\to\infty\) as \(\delta\to 0\). Write \(W_{\star,j}=H_{\sigma}^{-1}(U_{\star,j})\), \(W_{k,\star}=H_{\sigma}^{-1}(U_{k,\star})\), \(W_{k,l}=H_{\sigma}^{-1}(U_{k,l})\), then \(W\) splits into components
\[W=\left(\bigcup_{j\geq j_{0}}W_{\star,j}\right)\cup\left(\bigcup_{k\geq k_{0} }W_{k,\star}\right)\cup\left(\bigcup_{k\geq k_{0}}\bigcup_{l=l_{0}}^{l_{1}}W_ {k,l}\right) \tag{25}\]
on which \(DH_{\sigma}^{2}\) is constant. Let \(V_{i}=H_{\sigma}(U_{i})=H_{\sigma}^{2}(W_{i})\), then for \(q=1/2\):
\[\sum_{i}\left(\frac{|W|}{|V_{i}|}\right)^{q}\frac{|W_{i}|}{|W|} =\sum_{i}\sqrt{\frac{|W_{i}|}{|V_{i}|}}\sqrt{\frac{|W_{i}|}{|W|}}\] \[=\sum_{j\geq j_{0}}\sqrt{\frac{|W_{\star,j}|}{|V_{\star,j}|}} \sqrt{\frac{|W_{\star,j}|}{|W|}}+\sum_{k\geq k_{0}}\sqrt{\frac{|W_{k,\star}|} {|V_{k,\star}|}}\sqrt{\frac{|W_{k,\star}|}{|W|}}+\sum_{k\geq k_{0}}\sum_{l=l_ {0}}^{l_{1}}\sqrt{\frac{|W_{k,l}|}{|V_{k,l}|}}\sqrt{\frac{|W_{k,l}|}{|W|}}\] \[\leq\sum_{j\geq j_{0}}\sqrt{\frac{1}{\Lambda_{\star,j}}}\sqrt{ \frac{|W_{\star,j}|}{|W|}}+\sum_{k\geq k_{0}}\sqrt{\frac{1}{\Lambda_{k,\star} }}\sqrt{\frac{|W_{k,\star}|}{|W|}}+\sum_{k\geq k_{0}}\sum_{l=l_{0}}^{l_{1}} \sqrt{\frac{1}{\Lambda_{k,l}}}\sqrt{\frac{|W_{k,l}|}{|W|}}\]
by Lemma 9, where \(\Lambda_{i}\) is the minimum expansion factor of \(DH_{\sigma}^{2}\) on \(W_{i}\) over the cone \({\cal C}_{1}\). Define \(W_{\circ}=W\setminus W_{\star}\) and let \(0\leq p\leq 1\) denote the proportion \(|W_{\circ}|=p\,|W|\), then
\[\liminf_{\delta\to 0}\sup_{W\,:\,|W|<\delta}\sum_{i}\left(\frac{|W|}{|V_{i}|} \right)^{q}\frac{|W_{i}|}{|W|}\leq\sup_{0\leq p\leq 1}\Bigg{(} \lim_{j_{0}\to\infty}\sum_{j\geq j_{0}}\sqrt{\frac{1}{\Lambda_{ \star,j}}}\sqrt{\frac{(1-p)|W_{\star,j}|}{|W_{\star}|}}\] \[+\lim_{k_{0}\to\infty}\sum_{k\geq k_{0}}\sqrt{\frac{1}{\Lambda_{k, \star}}}\sqrt{\frac{p\,|W_{k,\star}|}{|W_{\circ}|}}\] \[+\lim_{k_{0}\to\infty}\sum_{k\geq k_{0}}\sum_{l=l_{0}}^{l_{1}} \sqrt{\frac{1}{\Lambda_{k,l}}}\sqrt{\frac{p\,|W_{k,l}|}{|W_{\circ}|}}\Bigg{)}.\]
We put upper bounds on each of these sums using lower bounds on the expansion factors \(\Lambda_{i}\) and geometric bounds on the curves \(U_{i}\) terminating on \({\cal S}\). We use asymptotic notation \(f\sim g\) for functions \(f,g\) if \(f/g\to 1\), and write \(f\lesssim g\) if there is some function \(h\) such that \(f\leq h\sim g\).
Starting with the first sum, \(DH_{\sigma}^{2}\) is given by \(M_{4}M_{3}^{j}M_{1}=(-1)^{j}\left(\begin{smallmatrix}-2j-3&-6j-8\\ 6j+8&18j+21\end{smallmatrix}\right)\) on each component \(W_{\star,j}\) with
minimum expansion factors given by
\[\begin{split}\Lambda_{*,j}&=\inf_{7/3\leq m\leq 3}\sqrt{\frac{(-2j-3-6jm-8m)^{2}+(6j+8+(18j+21)m)^{2}}{1+m^{2}}}\\ &\sim\inf_{7/3\leq m\leq 3}\sqrt{\frac{(2+6m)^{2}+(6+18m)^{2}}{1+m^{2}}}\,j=\frac{48\sqrt{145}}{29}\,j=:c_{*}\,j.\end{split} \tag{26}\]
Each curve \(U_{*,j}\) has tangent vectors in \(M_{1}\mathcal{C}_{1}\) satisfying \(41/17\leq|v_{2}|/|v_{1}|\leq 17/7\), \(v_{1}v_{2}\geq 0\). For each \(j>j_{0}\), \(U_{*,j}\) traverses \(A^{j}_{4,3}\) so that (making a calculation similar to (19)) Lemma 10 gives
\[\frac{a_{*}}{\tilde{j}^{2}}\lesssim|U_{*,j}|\lesssim\frac{b_{*}}{\tilde{j}^{2}} \tag{27}\]
for \(a_{*}=\frac{13}{80}\sqrt{2}\), \(b_{*}=\frac{41}{192}\sqrt{1+17^{2}/41^{2}}\) (calculated in the appendix, section 8.1). The upper bound also trivially holds for \(j=j_{0}\). Let \(\Lambda_{1}^{+}\), \(\Lambda_{1}^{-}\) denote the maximum and minimum expansion factors of \(M_{1}\) over \(\mathcal{C}_{1}\), then
\[|W_{*}|=\sum_{j\geq j_{0}}|W_{*,j}|\geq\sum_{j\geq j_{0}}\frac{|U_{*,j}|}{ \Lambda_{1}^{+}}\gtrsim\frac{a_{*}}{\Lambda_{1}^{+}}\sum_{j\geq j_{0}+1}\frac{ 1}{\tilde{j}^{2}}\geq\frac{a_{*}}{\Lambda_{1}^{+}(j_{0}+1)}\sim\frac{a_{*}}{ \Lambda_{1}^{+}j_{0}}\]
where we have used the fact that
\[\frac{1}{j^{2}}\geq\frac{1}{j(j+1)}=\frac{1}{j}-\frac{1}{j+1}\]
and considered the telescoping sum. Similarly \(|W_{*,j}|\lesssim b_{*}/\left(\Lambda_{1}^{-}j^{2}\right)\) so that
\[\frac{|W_{*,j}|}{|W_{*}|}\lesssim\frac{b_{*}\Lambda_{1}^{+}j_{0}}{a_{*}\Lambda _{1}^{-}}\,\frac{1}{j^{2}}.\]
Hence
\[\begin{split}\sum_{j\geq j_{0}}\sqrt{\frac{1}{\Lambda_{*,j}}} \sqrt{\frac{(1-p)|W_{*,j}|}{|W_{*}|}}&\lesssim\sum_{j\geq j_{0}} \sqrt{\frac{1}{c_{*}j}}\sqrt{\frac{(1-p)b_{*}\Lambda_{1}^{+}j_{0}}{a_{*} \Lambda_{1}^{-}}\,\frac{1}{j^{2}}}\\ &=\sqrt{\frac{(1-p)b_{*}\Lambda_{1}^{+}j_{0}}{c_{*}a_{*}\Lambda_{1 }^{-}}}\sum_{j\geq j_{0}}j^{-3/2}\\ &\leq\sqrt{\frac{(1-p)b_{*}\Lambda_{1}^{+}}{c_{*}a_{*}\Lambda_{1 }^{-}}}\,2\sqrt{\frac{j_{0}}{j_{0}-1}}\\ &\to 2\sqrt{\frac{(1-p)b_{*}\Lambda_{1}^{+}}{c_{*}a_{*}\Lambda_{1 }^{-}}}\end{split} \tag{28}\]
as \(j_{0}\to\infty\), where we have used \(\sum_{j\geq j_{0}}j^{-3/2}\leq\int_{j_{0}-1}^{\infty}x^{-3/2}\,\mathrm{d}x\).
Moving onto the next summation, \(\Lambda_{k,*}\) is determined by \(M_{4}^{2}M_{3}^{k}=(-1)^{k}\left(\begin{smallmatrix}14k+5&-14k-12\\ -34k-12&34k+29\end{smallmatrix}\right)\) and satisfies
\[\Lambda_{k,*}\sim\inf_{7/3\leq m\leq 3}\sqrt{\frac{(14-14m)^{2}+(34-34m)^{2}}{1+m ^{2}}}\,k=\frac{104}{\sqrt{29}}\,k=:c_{\circ}\,k.\]
For odd \(k\) the curve \(U_{k,*}\) has endpoints \((1/2,y_{0})\) on \(r_{3}^{\prime}r_{4}^{\prime}\) and \((2y_{1}-1,y_{1})\) on \(\mathcal{L}\), where \(y_{1}\) is bounded by the
intersections of \(r_{4}^{\prime}r_{1}^{\prime}\) and \(r_{3}^{\prime}r_{2}^{\prime}\) with \(\mathcal{L}\) (see Figure 13(b)). An upper bound on \(|y_{1}-y_{0}|\) is given by taking \((1/2,y_{0})=r_{3}^{\prime}\) and \((2y_{1}-1,y_{1})=r_{4}^{\prime}r_{1}^{\prime}\cap\mathcal{L}\). Noting (24), this gives
\[|y_{1}-y_{0}|\leq\frac{3}{4}+\frac{1}{8k+4}-\frac{21k+29}{28k+38}=\frac{6k+9}{5 6k^{2}+104k+38}\sim\frac{6}{56}k^{-1}.\]
\(U_{k,*}\) has tangent vectors \((v_{1},v_{2})^{T}\) in \(M_{4}M_{3}^{k}\,\mathcal{C}_{1}\subset\mathcal{C}_{4}\), so that \(|v_{2}/v_{1}|\geq 7/3\). By Lemma 10 we then have \(|U_{k,*}|\lesssim b_{\circ}/k\), where \(b_{\circ}=\frac{6}{56}\sqrt{1+\frac{9}{49}}\approx 0.117\). The minimum expansion factor of \(M_{4}M_{3}^{k}=(-1)^{k}\left(\begin{smallmatrix}2k+1&-2k-2\\-6k-2&6k+5\end{smallmatrix}\right)\) over \(\mathcal{C}_{1}\) is given by
\[\inf_{7/3\leq m\leq 3}\sqrt{\frac{(2-2m)^{2}+(6-6m)^{2}}{1+m^{2}}}\,k=\frac{8\sqrt{145}}{29}\,k=:\gamma\,k\]
which gives \(|W_{k,*}|\lesssim b_{\circ}/\left(\gamma k^{2}\right)\). The analysis for even \(k\) is analogous and gives the same upper bound. We next require a lower bound on \(|W_{\circ}|\). For \(k>k_{0}\), \(W_{k}\) is a curve with tangent vectors in \(\mathcal{C}_{1}\) which traverses \(A_{4,3}^{k}\). Making the same calculation as (19), \(|W_{k}|\) is bounded below by the shortest path across \(A_{4,3}^{k}\), the line segment passing through \(r_{4}\) with gradient 3. That is,
\[|W_{k}|\geq\sqrt{\left(\frac{1}{16k^{2}+28k+6}\right)^{2}+\left(\frac{3}{16k^ {2}+28k+6}\right)^{2}}\sim\frac{\sqrt{10}}{16k^{2}} \tag{29}\]
so that
\[|W_{\circ}|\geq\sum_{k\geq k_{0}+1}|W_{k}|\gtrsim\frac{a}{k_{0}} \tag{30}\]
with \(a:=\sqrt{10}/16\). Hence
\[\sum_{k\geq k_{0}}\sqrt{\frac{1}{\Lambda_{k,*}}}\sqrt{\frac{p\,|W_{k,*}|}{|W_{ \circ}|}}\lesssim\sum_{k\geq k_{0}}\sqrt{\frac{1}{c_{\circ}k}}\sqrt{\frac{pb_{ \circ}k_{0}}{a\gamma}\,\frac{1}{k^{2}}}\to 2\sqrt{\frac{pb_{\circ}}{c_{ \circ}a\gamma}}\]
as \(k_{0}\to\infty\), by a similar argument to (28).
For the third summation, \(\Lambda_{k,l}\) is determined by the matrix
\[M_{1}M_{3}^{l}M_{4}M_{3}^{k}=(-1)^{k+l}\begin{pmatrix}-48kl-10k-18l-3&48kl+10k+ 42l+8\\ -112kl-26k-42l-8&112kl+26k+98l+21\end{pmatrix}\]
and satisfies (for large \(k,l\))
\[\Lambda_{k,l}\sim\inf_{7/3\leq m\leq 3}\sqrt{\frac{(48-48m)^{2}+(112-112m)^{2}}{1+m^{2}}}\,kl=64\,kl=:c\,kl.\]
We can show an upper bound \(|U_{k,l}|\lesssim b/l^{2}\) where \(b=\frac{3}{32}\sqrt{1+\frac{9}{49}}\approx 0.102\) (see section 8.1) so that \(|W_{k,l}|\lesssim b/(\gamma kl^{2})\).
Now by (30),
\[\begin{split}\sum_{l=l_{0}}^{l_{1}}\sqrt{\frac{1}{\Lambda_{k,l}}}\sqrt{\frac{p|W_{k,l}|}{|W_{\circ}|}}&\lesssim\sum_{l=l_{0}}^{l_{1}}\sqrt{\frac{1}{ckl}}\sqrt{\frac{pbk_{0}}{a\gamma kl^{2}}}\\ &\leq\sqrt{\frac{1}{ck}}\sqrt{\frac{pbk_{0}}{a\gamma k}}\sum_{l=\lfloor\frac{k+4}{7}\rfloor}^{7k+10}l^{-3/2}\\ &\leq 2\sqrt{\frac{1}{ck}}\sqrt{\frac{pbk_{0}}{a\gamma k}}\left(\frac{1}{\sqrt{\lfloor\frac{k+4}{7}\rfloor-1}}-\frac{1}{\sqrt{7k+10}}\right)\\ &\sim 2\sqrt{\frac{1}{ck}}\sqrt{\frac{pbk_{0}}{a\gamma k}}\left(\sqrt{7}-\frac{1}{\sqrt{7}}\right)\frac{1}{\sqrt{k}}.\end{split}\]
Letting \(h=(\sqrt{7}-1/\sqrt{7})^{2}=36/7\), we have that
\[\begin{split}\sum_{k\geq k_{0}}\sum_{l=l_{0}}^{l_{1}}\sqrt{\frac{1}{\Lambda_{k,l}}}\sqrt{\frac{p|W_{k,l}|}{|W_{\circ}|}}&\lesssim 2\sqrt{\frac{pbhk_{0}}{ca\gamma}}\sum_{k\geq k_{0}}k^{-3/2}\\ &\to 4\sqrt{\frac{pbh}{ca\gamma}}\end{split}\]
as \(k_{0}\to\infty\). Hence for \(q=1/2\)
\[\liminf_{\delta\to 0}\sup_{W:|W|<\delta}\sum_{i}\left(\frac{|W|}{|V_{i}|}\right)^{q}\frac{|W_{i}|}{|W|}\leq\sup_{0\leq p\leq 1}\left(2\sqrt{\frac{(1-p)b_{*}\Lambda_{1}^{+}}{c_{*}a_{*}\Lambda_{1}^{-}}}+2\sqrt{\frac{pb_{\circ}}{c_{\circ}a\gamma}}+4\sqrt{\frac{pbh}{ca\gamma}}\right). \tag{31}\]
It is simple to show that for \(s,t>0\) the function \(f(p)=s\sqrt{1-p}+t\sqrt{p}\) always attains its maximum value at \(p=t^{2}/(s^{2}+t^{2})\). Hence letting
\[s=2\sqrt{\frac{b_{*}\Lambda_{1}^{+}}{c_{*}a_{*}\Lambda_{1}^{-}}}\approx 0.450,\quad t=2\sqrt{\frac{b_{\circ}}{c_{\circ}a\gamma}}+4\sqrt{\frac{bh}{ca\gamma}}\approx 0.639\]
gives
\[\liminf_{\delta\to 0}\sup_{W:|W|<\delta}\sum_{i}\left(\frac{|W|}{|V_{i}|} \right)^{q}\frac{|W_{i}|}{|W|}\leq s\sqrt{\frac{s^{2}}{s^{2}+t^{2}}}+t\sqrt{ \frac{t^{2}}{s^{2}+t^{2}}}\approx 0.781<1\]
as required. The analysis is analogous for \(W\) near \((1/4,1)\) and extends to \(W\) near \((3/4,0)\) and \((3/4,1/2)\) using the symmetry \(T(x,y)=(1-x,y+1/2)\) which commutes with \(H_{\sigma}\) (as seen in the proof of Lemma 5).
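Every constant entering (31) can be recomputed numerically; a sketch, with \(M_{1}\) our reconstruction of the branch Jacobian (consistent with the tangent-vector bounds \(41/17\leq|v_{2}|/|v_{1}|\leq 17/7\) quoted above):

```python
import numpy as np

M1 = np.array([[1, 2], [2, 5]])     # reconstruction of the branch Jacobian M1
ms = np.linspace(7/3, 3, 100_001)
vs = np.stack([np.ones_like(ms), ms])
ratios = np.linalg.norm(M1 @ vs, axis=0) / np.linalg.norm(vs, axis=0)
L1p, L1m = ratios.max(), ratios.min()          # Lambda_1^+, Lambda_1^-

c_star = 48 * 145 ** 0.5 / 29
a_star = 13 * 2 ** 0.5 / 80
b_star = 41 / 192 * (1 + 17 ** 2 / 41 ** 2) ** 0.5
a      = 10 ** 0.5 / 16
gamma  = 8 * 145 ** 0.5 / 29
b_circ = 6 / 56 * (1 + 9 / 49) ** 0.5
c_circ = 104 / 29 ** 0.5
b      = 3 / 32 * (1 + 9 / 49) ** 0.5
c, h   = 64.0, 36 / 7

s = 2 * (b_star * L1p / (c_star * a_star * L1m)) ** 0.5
t = 2 * (b_circ / (c_circ * a * gamma)) ** 0.5 + 4 * (b * h / (c * a * gamma)) ** 0.5
print(f"s = {s:.3f}, t = {t:.3f}, bound = {(s*s + t*t) ** 0.5:.3f}")
assert (s * s + t * t) ** 0.5 < 1   # 0.781 < 1, as in the text
```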
Equivalent analysis verifies the two-step expansion for curves near the other accumulation points \(p\in P_{1}\). We defer the relevant calculations to the appendix, Proposition 7. We are now ready to apply Theorem 3.
### Decay of correlations
**Theorem 5**.: _The return map \(H_{\sigma}:\sigma\to\sigma\) enjoys exponential decay of correlations. In particular it admits a Young tower with base \(\Delta_{0}\) satisfying the exponential tail bound_
\[\mu\left(\left\{z\in\sigma\,|\,R(z,H_{\sigma},\Delta_{0})>n\right\}\right)\leq \operatorname{const}\theta^{n} \tag{32}\]
_for all \(n\geq 1\) where \(\theta<1\) is some constant._
Proof.: We run through the conditions for applying Theorem 3. Invariance of the unstable and stable cone fields \(C_{z}^{u}\), \(C_{z}^{s}\) was the subject of section 5.2, satisfying **(H1.1)**. Condition **(H1.2)** follows by taking \(\Lambda=\sqrt{85/41}\), with this lower bound attained by considering the expansion of \(M_{2}^{-1}\) over the cone boundary of \(\mathcal{C}_{2}^{s}\) with gradient \(-8/10\). Noting Remark 1, we next show **(H1.3')**. The cone fields are continuous over the components \(\sigma_{j}\) of \(\Omega\setminus\mathcal{S}_{0}\), indeed they are constant. Noting that all of the stable cones lie within \(\mathcal{C}_{1}^{s}\) and all of the unstable cones lie in \(\mathcal{C}\), a positive angle between stable and unstable cone fields follows from \(\mathcal{C}_{1}^{s}\cap\mathcal{C}=\varnothing\). **(H2)** was the subject of section 5.3. Unstable manifolds provide a class of \(H_{\sigma}\)-invariant unstable curves which satisfy the regularity conditions listed in [209]. Piecewise linearity of the map trivially implies their bounded curvature and bounds on distortion; absolute continuity follows from Lemma 6. **(H4)** follows from Proposition 2, where we showed that \(H_{\sigma}\) is Bernoulli with respect to the normalised Lebesgue measure on \(\sigma\). Noting Remark 1, **(H5)** follows for the map \(H_{\sigma}^{2}\) by Propositions 3, 4, 7.
## 6 Decay of correlations for the OTM
### Upper bound on correlations
We now turn to the upper bound on correlations for \(H\).
**Proposition 5**.: _Correlations \(C_{n}(\varphi,\psi,H,\mu)\) for \(H\) satisfy_
\[|C_{n}(\varphi,\psi,H,\mu)|\leq c\,n^{-1} \tag{33}\]
_for some constant \(c\) and Hölder observables \(\varphi,\psi\)._
We follow the approach outlined in [208] to infer the polynomial mixing rate of \(H\) from the exponential mixing rate of \(H_{\sigma}\). For reference, their \(\mathcal{M}\) and \(M\subset\mathcal{M}\) are our \(\mathbb{T}^{2}\) and \(\sigma\subset\mathbb{T}^{2}\), their \(\mathcal{F}:\mathcal{M}\to\mathcal{M}\) and \(F:M\to M\) are our \(H\) and \(H_{\sigma}\) respectively. With \(\Delta_{0}\) above, define
\[A_{n}=\{z\in\mathbb{T}^{2}\,|\,R(z,H,\Delta_{0})>n\}.\]
We will show
**Proposition 6**.: \(\mu(A_{n})=\mathcal{O}(n^{-1})\)_._
Proposition 5 then follows from the work of [13]. Proving Proposition 6 involves treating separately a set of infrequently returning points, a method due to [14]. For each \(z\in\mathbb{T}^{2}\) and \(n\geq 1\) define
\[r(z;n,\sigma)=\#\{1\leq i\leq n\,|\,H^{i}(z)\in\sigma\},\]
counting the number of times the orbit of \(z\) hits \(\sigma\) over \(n\) iterates of \(H\). Define
\[B_{n,b}=\{z\in\mathbb{T}^{2}\,|\,r(z;n,\sigma)>b\ln n\}\]
where \(b\) is a constant to be chosen shortly.
**Lemma 12**.: \(\mu(A_{n}\cap B_{n,b})=\mathcal{O}(n^{-1})\)_._
Proof.: This follows from (32), choosing \(b\) large enough so that \(n\,\theta^{b\ln n}<n^{-1}\); since \(\theta^{b\ln n}=n^{b\ln\theta}\), any \(b>2/\ln(1/\theta)\) suffices. See [10], or [11] for a detailed proof.
Proposition 6 then follows from similarly establishing
**Lemma 13**.: \(\mu(A_{n}\setminus B_{n,b})=\mathcal{O}(n^{-1})\)_._
Analysis of the set \(A_{n}\setminus B_{n,b}\) is the focus of [10]. It consists, for large \(n\), of points which take many iterates to hit \(\Delta_{0}\) and hit \(\sigma\) infrequently during these iterates. Chernov and Zhang define \(m\)-cells
\[M_{m}=\{z\in\sigma\,|\,R(z;H,\sigma)=m+1\}\]
for \(m\geq 0\). For the OTM the coloured regions of Figure 4(a) form \(M_{0}\) and for \(m>0\) each set \(M_{m}\) is the union \(\cup_{i,j}A_{i,j}^{m}\). For these latter sets, the authors assume that their measures decrease polynomially
\[\mu(M_{m})\leq C_{1}/m^{r}, \tag{34}\]
where \(r\geq 3\). Further they assume that if \(z\in M_{m}\) then \(F(z)\in M_{k}\) with
\[\beta^{-1}m-C_{2}\leq k\leq\beta m+C_{3} \tag{35}\]
for some \(\beta>1\) and unimportant constants \(C_{i}>0\). It is straightforward to verify that (34) holds for \(r=3\): each \(A_{i,j}^{m}\) has length similar to \(|\mathcal{L}_{m}|=\mathcal{O}(m^{-1})\), see (21), and width similar to \(|W_{m}|=\mathcal{O}(m^{-2})\), see (29). Recalling the bounds \(l_{0}\) and \(l_{1}\) found in the proof of Proposition 4, we see that for our map \(H_{\sigma}\), condition (35) holds with \(\beta=7\). In §4 of [10] the authors describe an 'ideal situation' under which the action of \(F\) on the cells \(M_{m}\) is equivalent to a discrete Markov chain. This requires:
1. The components of each \(M_{m}\) and their images under \(F\) are exact trapezoids which shrink homothetically as \(m\) grows,
2. The measure \(\mu\) has constant density,
3. \(F\) is linear over each component,
4. Condition (35) holds with \(C_{2}=C_{3}=0\) (no irregular intersections).
These conditions, together with:
\[\frac{\mu(F(M_{m})\cap M_{k})}{\mu(F(M_{m}))}=C_{4}\frac{m}{k^{2}}+\mathcal{O} \left(\frac{1}{m^{2}}\right) \tag{36}\]
for some \(C_{4}>0\) and \(k\) satisfying (35), are sufficient to establish the lemma. The authors show that their cells admit good linear approximations and the irregular intersections are of relative measure \(\mathcal{O}(1/k)\) so that (I1) and (I4) are essentially satisfied, removing some portion of negligible measure from each cell. They then go on to estimate the effect of nonlinearity and nonuniform density of \(\mu\) to address (I2) and (I3), requiring a more sophisticated approach. For our system (I2) and (I3) are already satisfied by \(H_{\sigma}\), so it remains to verify (36) for \(H_{\sigma}\), show that our \(A_{i,j}^{k}\) are well approximated by exact trapezoids, and calculate the relative measure of the irregular intersections. For (36), areas of the regular intersections can be calculated using the shoelace formula on the corner coordinates \(p_{k,l}\), \(\overline{p}_{k,l}\), given explicitly in the appendix, section 8.1 and Proposition 7. Our cells are near exact trapezoids; unlike the billiards systems considered in [1] whose cell sides are curvilinear, our cell boundaries are linear with the sides (e.g. \(\mathcal{L}_{m}\), \(\mathcal{L}_{m-1}\)) near parallel for large \(m\). For the irregular intersections, (36) still gives an upper bound on their measure and there are at most \(C_{2}+C_{3}\) of them. The total number of intersections scales with \(m\) by (35) so they have negligible measure compared to \(\mu(H_{\sigma}(M_{m}))\).
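The shoelace computation is elementary; as an illustration, the sketch below applies it to the cells \(A_{4,3}^{k}\), whose corners \(r_{1},\ldots,r_{4}\) were given in section 5.5 (taken here as a stand-in for the appendix coordinates \(p_{k,l}\)), and confirms the cubic decay asserted in (34):

```python
from fractions import Fraction as Fr

def shoelace(pts):
    # signed-area shoelace formula for a simple polygon, exact over the rationals
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

def cell(k):
    # corners r1..r4 of A_{4,3}^k near (1/4, 1/2), in cyclic order
    return [(Fr(k+1, 4*k+2), Fr(k+1, 2*k+1)), (Fr(k-1, 4*k-2), Fr(1, 2)),
            (Fr(k, 4*k+2), Fr(1, 2)), (Fr(k+2, 4*k+6), Fr(k+2, 2*k+3))]

for k in (10, 100, 1000, 10000):
    print(k, float(k**3 * shoelace(cell(k))))   # k^3 * area settles: area = Theta(k^-3)
```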
### Correlations for typical observables
In this section we show that the bound (33) is optimal. That is, an \(\mathcal{O}(1/n)\) decay rate of correlations is attained for a typical choice of Hölder observables \(\varphi,\psi\), proving our main theorem.
Proof of Theorem 1.: Without loss of generality we assume \(\psi\) to have zero average and estimate
\[C_{n}=\int\left(\varphi\circ H^{n}\right)\psi\,\mathrm{d}\mu.\]
We first follow [1], Proposition 10.1. Letting \(n\in\mathbb{N}\) we have
\[\int\left(\varphi\circ H^{2n+1}\right)\psi\,\mathrm{d}\mu =\int\left(\varphi\circ H^{2n+1}-\varphi\circ H^{2n}\right)\psi \,\mathrm{d}\mu+\int\left(\varphi\circ H^{2n}\right)\psi\,\mathrm{d}\mu\] \[=\int\left(\phi\circ H^{2n}\right)\psi\,\mathrm{d}\mu+\int\left( \varphi\circ H^{2n}\right)\psi\,\mathrm{d}\mu\]
where \(\phi=\varphi\circ H-\varphi\). The observable \(\phi\) is piecewise Hölder continuous (Hölder over each of the four connected components \(A_{j}\)) so correlations of \(H\) follow from those of \(H^{2}\).
Recall the partition of points \(z\in A_{3}\) into sets \(A^{k}\) of constant escape time \(k=\min\{i\geq 1\,|\,H^{i}(z)\notin A_{3}\}\), sketched in Figure 5(a). For \(k\geq 2\), each \(A^{k}\) consists of four connected components, pairs of which limit onto the left and right sides of the invariant segments \(l_{1},l_{2}\) as \(k\to\infty\). We restrict our attention to the collection \(\mathrm{A}^{k}\subset A^{k}\) which limits onto the left hand side of the segment \(l_{2}\). For reference, intersecting with \(\sigma\) gives the sets \(A^{k}_{4,3}=\mathrm{A}^{k}\cap\sigma\) studied in Figure 8. The lines \(\mathcal{L}_{k}\) and \(\mathcal{L}_{k-1}\) from (\(\ddagger\)) similarly bound each \(\mathrm{A}^{k}\). We restrict to the annulus \(Q=\left\{(x,y)\in\mathbb{T}^{2}\,|\,1/4\leq x\leq 1/2\right\}\) and take unions
\[\tau_{k}=\bigcup_{i\geq k}\mathrm{A}^{i}\cap Q.\]
The sets \(\tau_{k}\) are triangular wedges bounded by the lines \(x=1/4\), \(l_{2}\), and \(\mathcal{L}_{k-1}\): \(y=\frac{(4k-2)x+k+1}{4k}\).
We claim that for even \(k\),
\[\int_{\tau_{k}}\left(\varphi\circ H^{k}\right)\psi\,\mathrm{d}\mu\sim\frac{C} {k} \tag{37}\]
for some constant \(C\). Our argument is roughly analogous to the argument presented in [13]. For the linked twist map example studied there, the map leaves one side of a wedge invariant and simply shears the rest horizontally or vertically. For the OTM, we similarly have invariance of the side \(l_{2}\) but \(H^{k}\) is instead only conjugate to a horizontal shear. In particular, for even \(k\) and \(z\in\tau_{k}\) we have that \(H^{k}(z)=\mathcal{G}\circ\mathcal{F}^{2k}\circ\mathcal{G}^{-1}(z)\), where \(\mathcal{F}(x,y)=(x+y,y)\bmod 1\) and \(\mathcal{G}(x,y)=(x,y+x)\bmod 1\) define the horizontal and vertical shears of the Arnold Cat Map [1]. Hence
\[\int_{\tau_{k}}\left(\varphi\circ H^{k}\right)\psi\,\mathrm{d}\mu =\int_{\tau_{k}}\left(\varphi\circ\mathcal{G}\circ\mathcal{F}^{2k }\circ\mathcal{G}^{-1}\right)\psi\,\mathrm{d}\mu\] \[=\int_{\mathcal{G}^{-1}\tau_{k}}\left(\varphi\circ\mathcal{G} \circ\mathcal{F}^{2k}\right)\left(\psi\circ\mathcal{G}\right)\mathrm{d}\mu\] \[=\int_{\mathcal{G}^{-1}\tau_{k}}\left(\tilde{\varphi}\circ \mathcal{F}^{2k}\right)\tilde{\psi}\,\mathrm{d}\mu\]
where we have used the fact that \(\mathcal{G}\) preserves \(\mu\) and defined \(\tilde{\varphi}:=\varphi\circ\mathcal{G}\), \(\tilde{\psi}:=\psi\circ\mathcal{G}\). Noting that for even \(k\) the map \(\mathcal{F}^{2k}\) commutes with the phase shift \(V(x,y)=(x,y+1/4)\bmod 1\), a further substitution yields
\[\int_{\tau_{k}}\left(\varphi\circ H^{k}\right)\psi\,\mathrm{d}\mu=\int_{V^{-1} \mathcal{G}^{-1}\tau_{k}}\left(\hat{\varphi}\circ\mathcal{F}^{2k}\right)\hat{ \psi}\,\mathrm{d}\mu\]
where \(\hat{\varphi}:=\tilde{\varphi}\circ V\), \(\hat{\psi}:=\tilde{\psi}\circ V\). The domain of integration \(R_{k}=V^{-1}\mathcal{G}^{-1}\tau_{k}\) is the right triangle with sides on \(x=1/4\), \(y=0\), and \(V^{-1}\mathcal{G}^{-1}\mathcal{L}_{k-1}:y=1/(4k)-x/(2k)\). Hence (37) is given by
\[I_{k}:=\int_{\frac{1}{4}}^{\frac{1}{2}}\int_{0}^{\frac{1}{4k}-\frac{x}{2k}}\hat{\varphi}(x+2ky,y)\,\hat{\psi}(x,y)\,\mathrm{d}y\,\mathrm{d}x.\]
Mirroring [13], we define a similar integral
\[J_{k}:=\int_{\frac{1}{4}}^{\frac{1}{2}}\int_{0}^{\frac{1}{4k}-\frac{x}{2k}}\hat{\varphi}(x+2ky,0)\,\hat{\psi}(x,0)\,\mathrm{d}y\,\mathrm{d}x\]
and note that by a change in coordinates \(t=x+2ky\) we have
\[J_{k}=\frac{1}{2k}\int_{\frac{1}{4}}^{\frac{1}{2}}\int_{x}^{\frac{1}{2}}\hat{ \varphi}(t,0)\,\hat{\psi}(x,0)\,\mathrm{d}t\,\mathrm{d}x.\]
Clearly \(J_{k}\sim c_{1}/k\) for some constant \(c_{1}\), noting that the double integral is just some constant depending on \(\varphi\) and \(\psi\), non-zero for any typical choice of these observables. Relation (37) then follows from establishing \(\lim_{k\to\infty}k|I_{k}-J_{k}|=0\), i.e. these sequences are asymptotically close. Indeed, noting that \(\mu(R_{k})=\mu(\tau_{k})\leq c_{2}/k\), we have
\[k|I_{k}-J_{k}| =k\left|\int_{R_{k}}\hat{\varphi}(x+2ky,y)\,\hat{\psi}(x,y)-\hat {\varphi}(x+2ky,0)\,\hat{\psi}(x,0)\,\mathrm{d}y\,\mathrm{d}x\right|\] \[\leq c_{2}\sup_{(x,y)\in R_{k}}\left|\hat{\varphi}(x+2ky,y)\,\hat {\psi}(x,y)-\hat{\varphi}(x+2ky,0)\,\hat{\psi}(x,0)\right|\to 0\]
since
\[|\hat{\varphi}(x+2ky,y)\,\hat{\psi}(x,y)-\hat{\varphi}(x+2ky,0) \,\hat{\psi}(x,0)| =|\hat{\varphi}(t,y)\,\hat{\psi}(x,y)-\hat{\varphi}(t,y)\,\hat{ \psi}(x,0)+\hat{\varphi}(t,y)\,\hat{\psi}(x,0)-\hat{\varphi}(t,0)\,\hat{\psi}( x,0)|\] \[\leq|\hat{\varphi}(t,y)||\hat{\psi}(x,y)-\hat{\psi}(x,0)|+|\hat{ \psi}(x,0)||\hat{\varphi}(t,y)-\hat{\varphi}(t,0)|\] \[\leq\hat{\varphi}_{\max}|y|^{a}+\hat{\psi}_{\max}|y|^{a}\] \[\leq c_{3}k^{-a}\to 0,\]
where we have used the Hölder property of \(\hat{\varphi},\hat{\psi}\) (inherited from \(\varphi,\psi\) as the mappings \(\mathcal{G},V\) are diffeomorphisms) and the fact that \(|y|\leq 1/(8k)\) on \(R_{k}\). Now by (37) we have
\[C_{k}(\varphi,\psi,H,\mu)\sim\frac{C}{k}+\int_{\mathbb{T}^{2}\setminus\tau_{ k}}\left(\varphi\circ H^{k}\right)\psi\,\mathrm{d}\mu.\]
By Proposition 5 the contribution over \(\mathbb{T}^{2}\setminus\tau_{k}\) decays no slower than \(\mathcal{O}(1/k)\) and (for a typical choice of observables \(\varphi,\psi\)) does not precisely cancel out the \(C/k\) contribution over \(\tau_{k}\). The \(\mathcal{O}(1/k)\) law for \(|C_{k}(\varphi,\psi,H,\mu)|\) then follows for typical Hölder observables \(\varphi,\psi\).
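The mechanism behind (37) can also be illustrated numerically; a sketch with arbitrary smooth test functions standing in for \(\hat{\varphi},\hat{\psi}\):

```python
import numpy as np
from scipy.integrate import dblquad

# arbitrary smooth test functions standing in for phi-hat and psi-hat
phihat = lambda x, y: 1 + 0.5 * np.cos(2 * np.pi * x) * np.cos(2 * np.pi * y)
psihat = lambda x, y: 1 + 0.3 * np.sin(2 * np.pi * (x + y))

def wedge_integral(k, flatten):
    # I_k (flatten=False) or J_k (flatten=True) over the triangle R_k
    f = (lambda y, x: phihat(x + 2 * k * y, 0.0) * psihat(x, 0.0)) if flatten else \
        (lambda y, x: phihat(x + 2 * k * y, y) * psihat(x, y))
    val, _ = dblquad(f, 0.25, 0.5, lambda x: 0.0, lambda x: 1 / (4 * k) - x / (2 * k))
    return val

for k in (4, 16, 64, 256):
    Ik, Jk = wedge_integral(k, False), wedge_integral(k, True)
    print(k, k * Ik, k * Jk, k * abs(Ik - Jk))  # k*J_k is constant; k|I_k - J_k| -> 0
```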
## 7 Final remarks
We provide some comments on the growth conditions which constituted the majority of our analysis in sections 4 and 5. In the simplest cases of Lemma 5, growth was established in an analogous fashion to the original one-step expansion condition (1), finding the relevant Jacobians \(M_{j}\) and checking that their expansion factors \(K(M_{j})\) satisfy
\[\sum_{j}\frac{1}{K(M_{j})}<1. \tag{38}\]
For the more complicated cases, the inductive method used to establish growth near the accumulation points in Lemma 5 and the weakened one-step expansion condition (8) both address the same fundamental issue: the splitting of unstable curves by singularities into an unbounded number of small components. They circumvent this obstacle in rather different ways, however. While (8) generalises (38) to ensure growth of unstable curves 'on average' (see [10] for a precise statement), our inductive method is a more direct adaptation of (38), using it to generate contradictory geometric conditions which a hypothetical non-growing unstable curve must satisfy. It may be possible to prove Theorem 4 using (8) as the basis for growth. Since we required (8) anyway for proving Theorem 5, this could potentially condense our analysis, but only to a minor extent. A convenience of the method used in section 4 is that, by way of the 'simple intersection' property, it naturally gives geometric information on the images of manifolds, useful for proving the property **(M)** of Theorem 2.
We expect that essentially analogous analysis can be applied to establish mixing properties in a wide class of piecewise linear non-uniformly hyperbolic maps, including those (like the OTM) which sit on the boundary of ergodicity and beyond. While we have relied on the precise partition structure of \(H_{\sigma}\), its fundamental feature (self-similar sequences of elements \(A^{k}\), each sharing boundaries with its neighbours \(A^{k-1},A^{k+1}\) and accumulating onto some point \(p\)) is quite typical of return map systems. See, for example, those of various stadium billiards [11, 12, 13] and linked twist maps [14]. Indeed, the same method can be used to prove the Bernoulli property for non-monotonic (co-rotating) linked twist maps [15], where monotonicity of the manifold images cannot be assumed and the classical argument [17] fails. The OTM is the pointwise limit of these maps as the boundary shrinks to null measure. The method further has utility in proving growth conditions for maps which are uniformly hyperbolic but possess regions \(A_{j}\) where the hyperbolicity is very weak, signified by \(K(M_{j})\approx 1\), so that (38) fails. Typically this leads to suboptimal bounds on mixing windows, see e.g. [16, 17, 18]. The map \(H_{(\eta,\eta)}\) for \(\eta\approx 1/2\) is another example, possessing weak hyperbolicity over \(A_{2},A_{3}\). Letting \(\varepsilon=|\eta-1/2|>0\), there is an upper bound \(N=N(\varepsilon)\) on escape times from these sets. The growth lemma then follows by applying the inductive step roughly \(N\) times and can be established for arbitrarily small \(\varepsilon\), opening the door to establishing optimal mixing windows.
The above gives two examples of piecewise linear perturbations to \(H\) where mixing with respect to Lebesgue is preserved and our methods can be applied. Nonlinear perturbations to the shear profiles complicate the analysis in several ways. Firstly, as the map's Jacobians take on a broader range of values, cone invariance becomes an increasingly harder condition to establish. Cones must be widened, giving looser bounds on expansion factors, which may already be weak due to new regions of weaker stretching. This, together with the change from polygonal to curvilinear return time partition elements and nonlinear local manifolds, adds some complexity to showing growth conditions. This does not rule out certain (small) nonlinear perturbations, however. There is some leeway in the inequalities which govern cone invariance and growth of local manifolds, the latter of which is not too dissimilar from the piecewise linear setting (see Lemmas 9, 10). Certain small perturbations would not alter the _topological_ structure of the return time partition, i.e. which elements share boundaries, the key information needed for setting up the induction. Finally, while the partition elements would no longer be polygonal, only coarse geometric information is
required for verifying each inductive step. Following the above, a potential perturbation could be to replace the linear portions of each shear by a cubic, perturbing the tent profile
\[f(t)=\begin{cases}2t&0\leq t\leq 1/2,\\ 2(1-t)&1/2\leq t\leq 1,\end{cases}\]
of the OTM shears to
\[f_{a}(t)=\begin{cases}\frac{1}{8}t\left(16-a+6at-8at^{2}\right)&0\leq t\leq 1/2,\\ \frac{1}{8}(1-t)\left(16-a+6a(1-t)-8a(1-t)^{2}\right)&1/2\leq t\leq 1,\end{cases}\]
for \(a>0\). For small enough \(a\) the range of gradients \(f_{a}^{\prime}(t)\) is restricted to small neighbourhoods of \(\{2,-2\}\) and the escape time partition retains a similar structure. We illustrate this in Figure 14, showing escapes from the square \(S_{3}\) under the map \(G\circ F\), equivalent to escapes from the perturbed \(A_{3}\) under \(G\circ F\), but with a cleaner geometry for comparison. When \(a\) is too large the analogy to the OTM breaks down. At \(a=16\) the map is twice differentiable everywhere and features a new source of slowed mixing: the Jacobian is the identity at the corner points \(x,y\in\{0,1/2\}\), giving locally parabolic behaviour (visible in the escape time partition).
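To make the restriction on \(a\) concrete, a direct differentiation of the first branch (a routine check) gives

\[f_{a}^{\prime}(t)=\frac{1}{8}\left(16-a+12at-24at^{2}\right),\qquad f_{a}^{\prime}(0)=f_{a}^{\prime}(1/2)=2-\frac{a}{8},\qquad f_{a}^{\prime}(1/4)=2+\frac{a}{16},\]

so on \([0,1/2]\) the gradient ranges over \([2-a/8,\,2+a/16]\) (with the negated range on \([1/2,1]\) by the symmetry \(f_{a}(t)=f_{a}(1-t)\)), collapsing onto the OTM values \(\{2,-2\}\) as \(a\to 0\) and vanishing at the corner points precisely when \(a=16\).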
## 8 Appendix
### Calculations for Proposition 4
We begin by showing the bounds (27). Note that \(U_{*,j}\) is a curve traversing \(A_{4,3}^{j}\) near \((1/4,1)\) with tangent vectors \((v_{1},v_{2})^{T}\) in the cone \(M_{1}\mathcal{C}_{1}\) satisfying \(41/17\leq|v_{2}|/|v_{1}|\leq 17/7\). Noting that the geometry of \(A_{4,3}^{j}\) near \((1/4,1)\) is a \(180^{\circ}\) rotation of \(A_{4,3}^{j}\) near \((1/4,1/2)\) and the cone is invariant under this rotation, we can follow an analogous argument to (30) to calculate \(a_{*}\). In particular \(|U_{*,j}|\) is bounded below by the length of the segment passing through \(r_{4}\) with gradient \(17/7\), which gives \(a_{*}=13\sqrt{2}/80\) as required. For the upper bound, the height of \(U_{*,j}\) is bounded
above by the height of the line segment with endpoints on \(r_{3}(j)\) and \(\mathcal{L}_{j-1}\) with gradient \(41/17\). In particular
\[\ell_{v}(U_{*,j})\leq\frac{48j^{2}+41j+29}{(2j+1)(48j+17)}-\frac{1}{2}=\frac{41}{2(96j^{2}+82j+17)}\sim\frac{41}{192\,j^{2}}\]

so that, by Lemma 10 and \(|v_{2}/v_{1}|\geq 41/17\), \(|U_{*,j}|\leq b_{*}/j^{2}\) with \(b_{*}=\frac{41}{192}\sqrt{1+17^{2}/41^{2}}\) as required.

Figure 14: Partition of escape times from \(S_{3}\) under the mapping \(F\circ G\) for \(a=0,4,8,16\).
We move on to calculating \(b\) such that \(|U_{k,l}|\lesssim b/l^{2}\). Define a \((k,l)\)-cell as the intersection \(H_{\sigma}\left(A_{4,3}^{k}\right)\cap A_{1,3}^{l}\) near the accumulation point \((1/2,3/4)\), shown as the magnified region in Figure 13(b): the quadrilateral bounded by the lines \(\mathcal{L}_{l}\), \(\mathcal{L}_{l-1}\) (as defined in equation (22)) on \(\partial A_{1,3}^{l}\) and \(\mathcal{L}_{k}\), \(\mathcal{L}_{k-1}\) on \(\partial H_{\sigma}\left(A_{4,3}^{k}\right)\). The explicit equation for \(\mathcal{L}_{k}\) is given in (23), letting us calculate the corner coordinates \(p_{k,l}\in\mathcal{L}_{k}\cap\mathcal{L}_{l}\) as
\[p_{k,l}=(x_{k,l},y_{k,l})=\left(\frac{16kl+7k+23l+10}{32kl+12k+44l+16},\frac{1 2kl+3k+17l+4}{16kl+6k+22l+8}\right). \tag{39}\]
The curve \(U_{k,l}\) traverses the \((k,l)\)-cell with endpoints on the segments \(p_{k,l}p_{k-1,l}\) and \(p_{k,l-1}p_{k-1,l-1}\) and has tangent vectors in the cone \(M_{4}M_{3}^{k}\mathcal{C}_{1}\). Roughly speaking, for large \(k\) the vectors in this cone are essentially parallel to the cell boundaries \(\mathcal{L}_{k}\), \(\mathcal{L}_{k-1}\) with gradient approaching \(-3\), so that \(\ell_{v}(U_{k,l})\) is given to leading order by \(y_{k,l}-y_{k,l-1}\sim\frac{3}{32}l^{-2}\). Noting that \(M_{4}M_{3}^{k}\mathcal{C}_{1}\subset\mathcal{C}_{4}\) for any \(k\), we can bound the gradient of vectors as \(|v_{2}/v_{1}|\geq 7/3\) so that by Lemma 10 we have \(|U_{k,l}|\lesssim b/l^{2}\) with \(b=\frac{3}{32}\sqrt{1+\frac{9}{49}}\). A more careful calculation similar to that of \(b_{*}\) above gives the same bound to leading order.
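Indeed, the leading order height can be read off (39) directly: letting \(k\to\infty\),

\[y_{\infty,l}=\frac{12l+3}{16l+6},\qquad y_{\infty,l}-y_{\infty,l-1}=\frac{24}{(16l+6)(16l-10)}\sim\frac{3}{32\,l^{2}},\]

recovering the constant \(3/32\) quoted above.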
### Two-step expansion near \(P_{1}\)
We will follow similar analysis to the proof of Proposition 4 to show:
**Proposition 7**.: _Condition (8) holds for \(H_{\sigma}^{2}\) when \(W\cap B_{\varepsilon}(P_{1})\neq\varnothing\) for all \(\varepsilon>0\)._
Proof.: We consider the case where \(W\) lies near the accumulation point \((0,1/4)\), split by \(\mathcal{S}\) into subcurves \(\overline{W}_{\star}=W\cap A_{1}\) and \(\overline{W}_{k}=W\cap A_{4,2}^{k}\). The image of the lower subcurve \(\overline{U}_{\star}=H_{\sigma}\left(\overline{W}_{\star}\right)\) lies near the accumulation point \((1/2,1/4)=H(0,1/4)\) and is split by \(\mathcal{S}\) into curves \(\overline{U}_{\star,j}\subset A_{4,2}^{j}\). The image \(\overline{U}_{k}=H_{\sigma}\left(\overline{W}_{k}\right)\) of each upper subcurve lies close to \((3/4,1/2)\) for \(k\) odd, \((3/4,0)\) for \(k\) even. Analysis for both of these cases is analogous; as before we take \(k\) to be odd and consider the geometry of \(\mathcal{S}\) near the accumulation point \((3/4,1/2)\). We calculate the corners of \(A_{4,2}^{k}\) near \((0,1/4)\) as
\[\overline{r}_{1}=\left(0,\frac{k}{4k-2}\right),\quad\overline{r}_{2}=\left( \frac{1}{4k-6},\frac{k-2}{4k-6}\right),\quad\overline{r}_{3}=\left(\frac{1}{4k -2},\frac{k-1}{4k-2}\right),\quad\overline{r}_{4}=\left(0,\frac{k+1}{4k+2} \right).\]
so that, using the integer valued matrix \(M_{4}M_{2}^{k}=(-1)^{k}\begin{pmatrix}1-6k&-6k-2\\ 14k-2&14k+5\end{pmatrix}\), its image \(H_{\sigma}\left(A_{4,2}^{k}\right)\) is the quadrilateral with corners
\[\overline{r}_{1}^{\prime}=\left(\frac{3k+1}{4k-2},\frac{2k-7}{4k-2}\right), \quad\overline{r}_{2}^{\prime}=\left(\frac{3k-2}{4k-6},\frac{2k-9}{4k-6} \right),\]
\[\overline{r}_{3}^{\prime}=\left(\frac{3k-2}{4k-2},\frac{k}{2k-1}\right),\quad \overline{r}_{4}^{\prime}=\left(\frac{3k+1}{4k+2},\frac{k+1}{2k+1}\right).\]
The curve \(\overline{U}_{k}\) has endpoints on the segments \(\overline{r}_{1}^{\prime}\overline{r}_{2}^{\prime}\) and \(\overline{r}_{3}^{\prime}\overline{r}_{4}^{\prime}\) and is split by \(\mathcal{S}\) into an upper portion \(\overline{U}_{k,*}\) in \(A_{4}\) above \(y=1/2\) and subcurves \(\overline{U}_{k,l}\subset A_{1,2}^{l}\) where \(l_{0}\leq l\leq l_{1}\). Comparison of the point \(\overline{r}_{2}^{\prime}\) with the lines \(\overline{L}_{l}:y-1/4=-\frac{2l+1}{2l+2}(x-1)\) and \(\overline{L}_{l-1}\) on \(\partial A_{1,2}^{l}\) yields \(l_{0}(k)\geq\lfloor\frac{k-4}{7}\rfloor\); intersecting \(\overline{r}_{1}^{\prime}\overline{r}_{4}^{\prime}\) with \(y=1/2\) yields \(l_{1}(k)\leq 7k+2\). Let \(W_{i}=H_{\sigma}^{-1}\left(\overline{U}_{i}\right)\); then \(W\) splits in an analogous fashion to (25) with \(DH_{\sigma}^{2}\) constant on each component. It follows that for \(q=1/2\),
\[\liminf_{\delta\to 0}\sup_{W:|W|<\delta}\sum_{i}\left(\frac{|W|}{|V_{i}|} \right)^{q}\frac{|W_{i}|}{|W|}\leq\sup_{0\leq p\leq 1}\left(2\sqrt{\frac{(1-p) \overline{b}_{*}\Lambda_{1}^{+}}{\overline{c}_{*}\overline{a}_{*}\Lambda_{1}^ {-}}}+2\sqrt{\frac{p\overline{b}_{\circ}}{\overline{c}_{\circ}\overline{a} \overline{\gamma}}}+4\sqrt{\frac{p\overline{b}h}{\overline{c}\overline{a} \overline{\gamma}}}\right)\]
where the new constants satisfy (letting \(K(M)\) denote the minimum expansion of \(M\) over \(\mathcal{C}_{1}\))
* \(K\left(M_{4}M_{2}^{j}M_{1}\right)\sim\overline{c}_{*}j\)
* \(K\left(M_{4}^{2}M_{2}^{k}\right)\sim\overline{c}_{\circ}k\)
* \(K\left(M_{1}M_{2}^{l}M_{4}M_{2}^{k}\right)\sim\overline{c}kl\)
* \(K\left(M_{4}M_{2}^{k}\right)\sim\overline{\gamma}k\)
* \(\overline{a}_{*}/j^{2}\lesssim|\overline{U}_{*,j}|\leq\overline{b}_{*}/j^{2}\)
* \(|\overline{U}_{k,*}|\lesssim\overline{b}_{\circ}/k\)
* \(|\overline{W}_{k}|\gtrsim\overline{a}/k^{2}\)
* \(|\overline{U}_{k,l}|\lesssim\overline{b}/l^{2}\)
and \(\Lambda_{1}^{\pm}\), \(h\) are unchanged from (31). The expansion factors can be calculated in the same fashion as (26), in particular
\[\overline{c}_{*}=\frac{48\sqrt{145}}{5},\quad\overline{c}_{\circ}=8\sqrt{19 7},\quad\overline{c}=64,\quad\overline{\gamma}=\frac{8\sqrt{145}}{5}.\]
The constant \(\overline{a}_{*}\) is obtained by considering the shortest path across \(A_{4,2}^{j}\) with tangent vectors aligned in the cone \(M_{1}\mathcal{C}_{1}\), bounded by the length of the segment with endpoints on \(r_{4}(j)\) and \(\mathcal{L}_{j-1}\) (as defined in the proof of Lemma 5) with gradient \(41/17\). The constant \(\overline{b}_{*}\) is obtained by considering the maximum height of a segment joining \(\mathcal{L}_{j-1}\) to \(\mathcal{L}_{j}\), given by the segment passing through \(r_{3}(j)\in\mathcal{L}_{j}\) with gradient \(41/17\), and applying Lemma 10. In particular \(\overline{a}_{*}=\sqrt{1970}/464\) and \(\overline{b}_{*}=\frac{17}{192}\sqrt{1+\frac{17^{2}}{41^{2}}}\). Similar analysis to the calculation of \(\overline{a}_{*}\) but using the wider cone \(\mathcal{C}_{1}\) yields \(\overline{a}=\sqrt{55}/80\). We again apply Lemma 10 to find \(\overline{b}_{\circ}\), with \(\ell_{v}\left(\overline{U}_{k,*}\right)\) bounded above by the height \(1/(4k-2)\sim 1/(4k)\) of \(\overline{r}_{3}^{\prime}(k)\) above \(y=1/2\). Tangent vectors of \(\overline{U}_{k,*}\) lie in the cone \(M_{4}M_{2}^{k}\mathcal{C}_{1}\subset\mathcal{C}_{4}\) so that \(\overline{b}_{\circ}=\frac{1}{4}\sqrt{1+9/49}\) provides the upper bound. Finally we calculate \(\overline{b}\), following a similar approach to section 8.1. For each \(k\) the segments \(\overline{r}_{1}^{\prime}\overline{r}_{4}^{\prime}\) and \(\overline{r}_{2}^{\prime}\overline{r}_{3}^{\prime}\) lie on the lines \(\overline{\mathcal{L}}_{k}\) and \(\overline{\mathcal{L}}_{k-1}\) respectively, with
\[\overline{\mathcal{L}}_{k}:y-\frac{k+1}{2k+1}=-\frac{14k+5}{6k+2}\left(x-\frac {3k+1}{4k+2}\right).\]
Define a \(\overline{(k,l)}\) cell as the intersection \(H_{\sigma}(A_{4,2}^{k})\cap A_{1,2}^{l}\), given by the quadrilateral bounded by the lines \(\overline{\mathcal{L}}_{k}\), \(\overline{\mathcal{L}}_{k-1}\), \(\overline{L}_{l}\), \(\overline{L}_{l-1}\). Its corners \(\overline{p}_{k,l},\ldots,\overline{p}_{k-1,l-1}\) are given by
\[\overline{p}_{k,l}=\left(\frac{(3k+1)(2l+3)}{8kl+11k+3l+4},\frac{16kl+15k+7l+6}{ 4(8kl+11k+3l+4)}\right)\]
with (as before, to leading order terms for \(k\) large) \(\ell_{v}\left(\overline{U}_{k,l}\right)\) bounded above by the height of the segment joining \(\overline{p}_{k-1,l}\) to \(\overline{p}_{k-1,l-1}\), \(\ell_{v}\left(\overline{p}_{k-1,l}\overline{p}_{k-1,l-1}\right)\sim\frac{7}{32}l^{-2}\). Again, tangent vectors to \(\overline{U}_{k,l}\) lie in \(\mathcal{C}_{4}\) so that \(\overline{b}=\frac{7}{32}\sqrt{1+9/49}\) gives an upper bound \(|\overline{U}_{k,l}|\lesssim\overline{b}/l^{2}\) by Lemma 10.
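As a check on this constant, letting \(k\to\infty\) in the formula for \(\overline{p}_{k,l}\) gives

\[\overline{y}_{\infty,l}=\frac{16l+15}{4(8l+11)},\qquad\overline{y}_{\infty,l}-\overline{y}_{\infty,l-1}=\frac{14}{(8l+11)(8l+3)}\sim\frac{7}{32\,l^{2}},\]

in agreement with the stated leading order height.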
As before we take
\[\overline{s}=2\sqrt{\frac{\overline{b}_{*}\Lambda_{1}^{+}}{\overline{c}_{*}\overline{a}_{*}\Lambda_{1}^{-}}}\approx 0.186,\quad\overline{t}=2\sqrt{\frac{\overline{b}_{\circ}}{\overline{c}_{\circ}\overline{a}\,\overline{\gamma}}}+4\sqrt{\frac{\overline{b}h}{\overline{c}\,\overline{a}\,\overline{\gamma}}}\approx 0.488,\]
giving
\[\liminf_{\delta\to 0}\sup_{W:|W|<\delta}\sum_{i}\left(\frac{|W|}{|V_{i}|} \right)^{q}\frac{|W_{i}|}{|W|}\leq\overline{s}\sqrt{\frac{\overline{s}^{2}}{ \overline{s}^{2}+\overline{t}^{2}}}+\overline{t}\sqrt{\frac{\overline{t}^{2} }{\overline{s}^{2}+\overline{t}^{2}}}\approx 0.522<1\]
as required.
|
2310.04570 | Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction
from Variable-Sized Maps | Estimating path loss for a transmitter-receiver location is key to many
use-cases including network planning and handover. Machine learning has become
a popular tool to predict wireless channel properties based on map data. In
this work, we present a transformer-based neural network architecture that
enables predicting link-level properties from maps of various dimensions and
from sparse measurements. The map contains information about buildings and
foliage. The transformer model attends to the regions that are relevant for
path loss prediction and, therefore, scales efficiently to maps of different
size. Further, our approach works with continuous transmitter and receiver
coordinates without relying on discretization. In experiments, we show that the
proposed model is able to efficiently learn dominant path losses from sparse
training data and generalizes well when tested on novel maps. | Thomas M. Hehn, Tribhuvanesh Orekondy, Ori Shental, Arash Behboodi, Juan Bucheli, Akash Doshi, June Namgoong, Taesang Yoo, Ashwin Sampath, Joseph B. Soriaga | 2023-10-06T20:17:40Z | http://arxiv.org/abs/2310.04570v2 | # Transformer-Based Neural Surrogate for Link-Level Path Loss Prediction from Variable-Sized Maps
###### Abstract
Estimating path loss for a transmitter-receiver location is key to many use-cases including network planning and handover. Machine learning has become a popular tool to predict wireless channel properties based on map data. In this work, we present a transformer-based neural network architecture that enables predicting link-level properties from maps of various dimensions and from sparse measurements. The map contains information about buildings and foliage. The transformer model attends to the regions that are relevant for path loss prediction and, therefore, scales efficiently to maps of different size. Further, our approach works with continuous transmitter and receiver coordinates without relying on discretization. In experiments, we show that the proposed model is able to efficiently learn dominant path losses from sparse training data and generalizes well when tested on novel maps.
## I Introduction
Machine learning (ML) techniques have demonstrated great success in solving various modeling- and simulation-based problems in the sciences, such as in molecular dynamics simulations [1, 2] and other applications [3, 4, 5]. Specifically towards wireless simulations, ML-based techniques have been shown to offer many advantages: scaling to high-dimensional problems [4], data-driven simulations [6, 7], differentiability which enables solving inverse problems [8, 9], and end-to-end learning.
ML can help build better models using real measurements by either learning parameters on an existing mathematical model or replacing and augmenting existing models. On the other hand, the general purpose simulators, for instance ray tracers, are designed to solve the modeling problem in a general setting, and because of that they utilize details that might not be relevant for a particular task. For example, professional ray tracers require a detailed model of the environment and its materials and provide path-level details of propagation between a transmitter-receiver pair. This level of detail might not be necessary for many tasks, for example, in the case of line-of-sight (LOS) blockage detection. We would like to be able to curate models that balance accuracy and complexity for particular tasks. ML can help build such surrogate models. Surrogate models come with benefits typically not available for general purpose simulators, such as integrating them in the system design loop. For problems like network planning and sensing, the simulator is queried multiple times during the design process, and surrogate models can improve the latency of such operations. Surrogate models can be built in a differentiable way and therefore be used for end-to-end design and optimization. Surrogate models can rely only on what is needed for a particular simulation task and drastically reduce the need for detailed environment descriptions. Surrogate models can therefore be seen as specialized simulators.
In this paper, we focus on the problem of path loss prediction. Instead of utilizing detailed environment 3D maps, we rely on crude digital twin (DT) creations consisting of simple building and foliage layouts to solve this problem. There are many works on ML-based path loss prediction (see Section II for a detailed review of previous works). The main motivation of our work is to accommodate the following capabilities in our design. The model should be built from sparse real measurements given the overhead of gathering dense measurements for new environments. The model is off-grid, which means that the model can work with arbitrary transmitter and receiver locations and does not need an initial quantization to a grid (which is the case for example in [10]). In that way, the model can be seen as a differentiable function of transmitter and receiver locations and be used in a design loop (for example as a part of an optimization problem). Once the model is trained on a set of maps and transmitter-receiver locations, it should be usable for unseen maps and location pairs. Finally, the model should be scalable to different map sizes and transmitter-receiver distances and do that efficiently by attending to the part of the map that matters for path loss prediction of a given transmitter-receiver pair. For example, since millimeter wave (mmWave) path loss prediction is dominated by the LOS path, the model needs to focus on the area around the line connecting the transmitter and receiver. In this work, we propose a transformer-based model that satisfies all these desiderata. We evaluate our model on an outdoor dataset at a mmWave carrier frequency.
The paper is organized as follows. After reviewing related works in Section II, we introduce our proposed model in Section III. We provide experiments to substantiate the benefits of our approach in Section IV and conclude in Section V.
## II Related works
ML approaches for path loss prediction can generally be divided into two categories: image-to-image translation (radio maps) and link-level prediction. The image-to-image translation approach represents different locations in the pixel space of the input map. The output of the model is the same
input map annotated with path loss information. Therefore, the model provides path loss prediction for all locations at the output at inference time. On the other hand, link-level prediction models provide path loss prediction only for a given transmitter-receiver location. In this sense, link-level based predictions conform more to conventional path loss functions that act directly on the location information, are expected to be more computationally efficient and can be used naturally in design loops. There are many works on ML-based channel modeling and prediction (see [15, 17] for a survey). We will focus on some of these works.
The seminal work on ML-based path loss prediction [10] casts the problem as image-to-image translation and uses a UNet based architecture to solve the problem. UNet approaches have also been adopted in [11, 12, 18], while [14] has employed a variant of SegNet, framing the problem as a segmentation problem. The latter model uses a fully convolutional backbone, which enables applying maps of different sizes. In contrast to these works, our model performs link-level prediction. Another line of works has focused on using vision models, such as CNNs [7, 15, 16] and transformers [13], for feature extraction. These features are then either passed to another neural network [7, 13, 16] or other classical ML algorithms [15]. The work in [15] also addresses the problem of model building from sparse measurements and generalization to unseen scenarios. However, the feature extraction using convolutional models is not scalable to different map sizes. The authors in [13] propose a grid-based embedding instead of the conventional positional embedding in the transformer architecture. This method is still radio map-based, and the transformer-based architecture is still applied on the whole image. Our proposed transformer architecture is link-level based, embeds location information with image patches and selects the number of patches according to the distance. Therefore, it is more efficient for shorter distances. Similar to our work, the authors in [19] used a deep vision transformer model for link-level path loss prediction. The transformer model is used only for feature extraction and the final estimation is done by another neural network. They focus on a federated training setup, use the full satellite maps and do not study generalization of their model. Using satellite images directly can harm generalization in general, as the model can overfit to scene specificities. In our work, the core prediction model is transformer based, where the number of patches is adaptively selected based on the locations. We do not use satellite images and explicitly disentangle the effects of foliage and buildings in the input space. The authors in [7] consider a ResNet backbone for link-level prediction, and therefore cannot adapt to different map sizes. Table I shows an overview of existing related works on path loss prediction. To the best of our knowledge, this is the first work that proposes a model for link-level path loss prediction unifying scalability to different map sizes and generalizability to unseen environments.
## III Proposed approach
The goal of our approach is to obtain a link-level prediction model that accepts map input of varying size. In our case, the size is dependent on the distance between transmitter and receiver. Typically, in CNNs that predict a single value for an image (for example, the class of the image), the features of the convolutional layers have to be combined. If this is done by means of a fully connected layer, that layer cannot scale adaptively with different sized inputs. Thus, the entire model is restricted to fixed sized inputs. In deep learning, the attention mechanism has proven useful for processing sequences of variable length, yet at the same time considering the interaction of all the elements in the sequence. Vision transformers (ViT) [20] have successfully applied this idea to image classification. Inspired by this success, we designed a transformer architecture that takes a map of variable size and the distance between transmitter and receiver as input and predicts the path loss along the dominant path. In this section, we first describe how the map data is pre-processed based on the transmitter and receiver locations, and second, we provide the details of our transformer architecture.

Fig. 1: Our proposed transformer architecture where the number of patches can change adaptively to the size of the input map.
### _Map alignment and extraction_
Suppose a coarse extract of a map that includes the transmitter location \(x_{t}\) and receiver location \(x_{r}\) is given as an image; we use the following procedure to align and crop the map. First, the map is rotated around the transmitter location such that the receiver is located along the vertical y-axis of the image, with the receiver closer to the top of the map than the transmitter. After this alignment, the map is cropped to a size appropriate for the given task and to have a final image with height and width being multiples of a chosen patch size \(P\). For this purpose, we start at the transmitter and define the transmitter patch such that the pixel corresponding to the transmitter is located in the center of that patch. Note that this requires an odd patch size.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline Authors & Carrier frequency & Approach & Map generalization & Foliage & Architecture & Data \\ \hline Levie et al. [10] & 5.9 GHz & Image-to-Image & ✓ & ✗ & UNet & Simulation \\ Ratnam et al. [11] & 28 GHz & Image-to-Image & ✓ & ✓ & UNet & Simulation \\ Bakirtzis et al. [12] & 868 MHz & Image-to-Image & ✓ & ✗ & UNet (atrous convolution) & Simulation (indoor) \\ Tian et al. [13] & 5.8 GHz & Image-to-Image & ✓ & ✓ & Transformer & Simulation \\ Qiu et al. [14] & 30 GHz & Image-to-Image & ✗ & ✗ & SegNet & Simulation \\ Gupta et al. [15] & 28 GHz & Per-link & ✓ & ✓ & CNN + Classic ML & Real \\ Sousa et al. [7] & 2.6 GHz & Per-link & ✓ & ✓ (Satellite) & ResNet+MLP & Real (Drive test) \\ Lee et al. [16] & 28 GHz & Per-link & ✗ & ✗ & CNN+MLP & Simulation \\ Ours & 28 GHz & Per-link & ✓ & ✓ & Transformer & Simulation \\ \hline \hline \end{tabular}
\end{table} TABLE I: Summary of selected previous works on path loss prediction
Given the transmitter patch, the other patches are also defined in a grid, and the receiver is generally not located in the center pixel of the receiver patch. All patches between the transmitter and receiver patch are included in the final map extract. In addition, depending on the task, we can choose to include additional padding patches around those patches.
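The alignment and cropping procedure can be summarised in a few lines. The following is a minimal sketch, not the code used in our experiments: the SciPy-based resampling (`scipy.ndimage.shift`/`rotate`), the nearest-neighbour interpolation, and the rotation sign convention are our own illustrative choices.

```python
import numpy as np
from scipy import ndimage

def align_and_crop(map_img, x_t, x_r, P=33, pad=1):
    """Align a map extract so the receiver lies straight above the
    transmitter, then crop to a grid of whole P x P patches.
    Coordinates are (row, col) pixel positions; 1 pixel = 1 m^2.
    Assumes the coarse extract is generously sized; out-of-map
    areas are filled with zeros."""
    d = np.asarray(x_r, float) - np.asarray(x_t, float)
    dist = float(np.linalg.norm(d))

    # Move the transmitter to the array centre, then rotate about it so the
    # receiver ends up on the vertical axis above the transmitter.  The sign
    # of `angle` depends on the axis convention and may need flipping.
    center = (np.array(map_img.shape[:2], float) - 1) / 2
    shift = tuple(center - np.asarray(x_t, float)) + (0,) * (map_img.ndim - 2)
    shifted = ndimage.shift(map_img, shift, order=0)
    angle = np.degrees(np.arctan2(d[1], -d[0]))
    rotated = ndimage.rotate(shifted, angle, reshape=False, order=0)

    # Transmitter patch: the transmitter sits at its centre pixel (hence P
    # must be odd).  Stack patches upwards until the receiver patch is
    # included, then add `pad` patches of context on every side.
    cr, cc = int(round(center[0])), int(round(center[1]))
    m = max(0, int(np.ceil((dist - P // 2) / P)))   # extra rows for the Rx
    R = 1 + m + 2 * pad                             # total patch rows
    bottom = cr + P // 2 + pad * P
    left = cc - P // 2 - pad * P
    return rotated[bottom - R * P + 1 : bottom + 1,
                   left : left + (1 + 2 * pad) * P], dist
```

With `pad=1` and `P=33` this yields the \(R\times 3\) patch grids used in our experiments.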
### _Transformer architecture_
Overall, our architecture choices are close to those in [20] and are illustrated in Fig. 1. The model expects an input image whose height and width are multiples of the chosen patch size \(P\). The input image is then split up into \(R\times C\) (rows \(\times\) columns) patches of size \(P\). Each patch \(p_{r,c}\) (\(r\in\{1,...,R\}\) and \(c\in\{1,...,C\}\)) is passed through the same linear patch projection layer which projects the patch pixel values to the initial latent feature vectors \(h_{r,c}\in\mathbb{R}^{D}\) of dimension \(D\). Similarly, the scalar distance between transmitter and receiver, \(\Delta x=\lVert x_{r}-x_{t}\rVert_{2}\), is projected to the latent vector \(h_{0}\in\mathbb{R}^{D}\) through a separate linear projection layer.
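As a concrete illustration of the patching step, the following sketch (PyTorch assumed for illustration; not the authors' code) splits an aligned crop into the \(R\times C\) grid of flattened patches consumed by the shared linear projection:

```python
import torch

def to_patches(crop, P=33):
    """Split an aligned (H, W, C_in) map crop, with H and W multiples of P,
    into a (R, C, P*P*C_in) tensor of flattened patches."""
    t = torch.as_tensor(crop, dtype=torch.float32)
    H, W, C_in = t.shape
    R, C = H // P, W // P
    # (R, P, C, P, C_in) -> (R, C, P, P, C_in) -> flatten each patch.
    t = t.reshape(R, P, C, P, C_in).permute(0, 2, 1, 3, 4)
    return t.reshape(R, C, P * P * C_in)
```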
The positional embedding is added to the latent vector \(h_{r,c}\) of each patch before applying the transformer layers. In contrast to ViT, the vertical positional embedding is separate from the horizontal one. In our specific case, we assume that the number of horizontal input patches is fixed, while the number of vertical patches, i.e., the height of the image, may vary per input sample. The horizontal positional embeddings \(u_{c}\in\mathbb{R}^{D}\) are learned as in ViT. The vertical positional embeddings \(v_{r}\in\mathbb{R}^{D}\) are based on sine and cosine functions, as in [21],
\[v_{r,d+1} =\sin\left(r/10000^{d/D}\right), \tag{1}\] \[v_{r,d+2} =\cos\left(r/10000^{d/D}\right), \tag{2}\]
where \(d\in\{0,2,...,D-2\}\). The positional embeddings are then added to each element of the latent vectors
\[z_{r,c}^{0}=h_{r,c}+v_{r}+u_{c}. \tag{3}\]
Note that the distance embedding does not require a positional embedding, thus \(z_{0}^{0}=h_{0}\). For notational simplicity, we will use \(z^{0}=(z_{0}^{0},z_{1,1}^{0},\ldots,z_{R,C}^{0})\) unless the distinction is necessary.
The transformer consists of \(L\) transformer layers. Each layer \(\ell\in\{1,...L\}\) takes \(z^{\ell-1}\) to compute the output \(z^{\ell}\). As in ViT [20], a transformer layer consists of a multi-head self-attention layer with residual connection followed by a multi-layer perceptron (MLP) with residual connection. Before the input is passed to the multi-head self-attention, the input is normalized using layer norm. The same is done for the MLP.
The original ViT was designed for image classification, while in our case, we aim to solve a regression problem. Therefore, the head network takes the final transformed distance embedding \(z_{0}^{L}\) and applies a linear layer to project it to the target space, which is \(\mathbb{R}\) in our case.
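Putting the pieces together, a minimal PyTorch sketch of the full model might look as follows. `nn.TransformerEncoderLayer` (pre-norm) stands in for the described attention and MLP blocks, and the initialization scale of the learned horizontal embeddings is our assumption rather than the exact training configuration:

```python
import torch
import torch.nn as nn

class PathLossTransformer(nn.Module):
    """Sketch: shared linear patch/distance projections, mixed positional
    embeddings, a transformer encoder, and a linear regression head that
    reads off the transformed distance token."""

    def __init__(self, P=33, in_ch=2, C=3, D=768, L=12, heads=12, mlp=3072):
        super().__init__()
        self.D = D
        self.patch_proj = nn.Linear(P * P * in_ch, D)    # shared across patches
        self.dist_proj = nn.Linear(1, D)                 # scalar Tx-Rx distance
        self.u = nn.Parameter(torch.randn(C, D) * 0.02)  # learned horizontal
        layer = nn.TransformerEncoderLayer(
            d_model=D, nhead=heads, dim_feedforward=mlp,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=L)
        self.head = nn.Linear(D, 1)

    def forward(self, patches, dist):
        # patches: (B, R, C, P*P*in_ch); dist: (B, 1).  The number of patch
        # rows R may differ between batches; C must match self.u.
        B, R, C, _ = patches.shape
        h = self.patch_proj(patches)                     # (B, R, C, D)
        # Fixed sinusoidal vertical embeddings v_r, eqs. (1)-(2).
        r = torch.arange(R, device=h.device, dtype=h.dtype).unsqueeze(1)
        d = torch.arange(0, self.D, 2, device=h.device, dtype=h.dtype)
        v = torch.zeros(R, self.D, device=h.device, dtype=h.dtype)
        v[:, 0::2] = torch.sin(r / 10000.0 ** (d / self.D))
        v[:, 1::2] = torch.cos(r / 10000.0 ** (d / self.D))
        z = (h + v[None, :, None, :] + self.u[None, None]).flatten(1, 2)
        z0 = self.dist_proj(dist).unsqueeze(1)           # distance token
        z = self.encoder(torch.cat([z0, z], dim=1))      # (B, 1 + R*C, D)
        return self.head(z[:, 0]).squeeze(-1)            # scalar path loss
```

Because the vertical embeddings are a fixed function of the row index and attention is length-agnostic, the same weights apply to any number of patch rows \(R\).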
## IV Experiments
To demonstrate the value of our approach, we compare its performance to widely adopted ML approaches for path loss prediction. For this purpose, we simulate the path loss of mmWave propagation between a transmitter and receivers in an urban environment. We first describe the dataset generation, then compare our approach quantitatively to the baselines, and discuss qualitative visualizations of predicted radio maps.
### _Dataset_
The dataset used in our experiments is based on ray tracing simulation using an RF-relevant DT model of an urban area of about \(1.5\) km\({}^{2}\) in downtown Philadelphia, Pennsylvania, USA. The DT includes several associated geographic information system (GIS) data layers of:
* Building polygon footprints and their corresponding heights (in meters) curated from OpenStreetMap.org.
* Tree foliage contours and heights (with a maximum of \(30\)m) obtained from publicly available GIS data sources.
* Terrain model of topographic elevation data based on the U.S. Geological Survey's GMTED2010 model.
The dataset also includes two types of entities:
1. Latitude, longitude and height, fixed to about \(9\)m (\(30\) feet) above the ground level, of \(402\) outdoor pole locations which are identified as compatible to serve as transceiver-bearers.
2. Latitude, longitude and height, fixed to \(1.5\)m above the ground level, of \(196,750\) possible user equipment (UE) locations uniformly distributed across the outdoor (non-building) space.
Based on the generated DT, MATLAB's ray-tracing tool is utilized to infer path losses for mmWave propagation at \(28\) GHz for any connectivity link between pole-to-UE pairs. For each three-dimensional (3D) ray drawn by the ray tracer, the free-space propagation model is adopted as the ray traverses from the transmitter node to the receiver node along the 3D path generated by the ray-tracing tool1. A ray reflected from a building is assumed to be attenuated by an additional \(6.4\) dB2 on top of the free-space propagation loss. Each ray is associated with defining end-to-end geometric coordinates and a DT-based path loss estimate, and is also accompanied by a LOS vs. non-LOS (NLOS) flag and its propagation distance. Footnote 1: Due to run-time constraints the ray tracing is limited to a single reflection per ray (i.e., diminishing rays with two or more reflections are omitted) and no diffraction is modeled.
Footnote 2: This is a slightly conservative loss w.r.t. the reflection loss typically measured for common exterior building materials such as concrete and glass. Furthermore, zero ground reflection loss is assumed.
Since the MATLAB ray tracer currently does not support the input of a foliage data layer, the effect of tree obstruction is incorporated as a post-processing stage, as described in the following. For the two strongest rays per connectivity link, the fraction of each ray traversing through tree canopies is calculated based on the foliage information: any segment of a ray passing through the top \(75\%\) of the estimated height of an identified tree (i.e., within the expected volume of the tree canopy) is assumed to experience a foliage loss at a rate of \(2.5\) dB/m in addition to the free-space propagation loss. Of the two rays, the one with the lesser total (that is, free-space plus foliage-attenuated) propagation loss is selected, and its loss is declared as the reported path loss associated with the pole-to-UE link. There are in total \(2,394,230\) such connectivity links in the dataset.
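In code, this post-processing amounts to simple per-ray bookkeeping. The sketch below is illustrative rather than the MATLAB pipeline; the `Ray` fields are hypothetical names for the quantities just described:

```python
from collections import namedtuple
import numpy as np

Ray = namedtuple("Ray", "length_m n_reflections canopy_m")

def ray_loss_db(ray, f_ghz=28.0):
    """Free-space loss over the ray's 3D length, plus 6.4 dB per building
    reflection, plus 2.5 dB per metre spent inside the top 75% of a tree's
    height (the assumed canopy volume)."""
    c = 299_792_458.0
    fspl = 20.0 * np.log10(4.0 * np.pi * ray.length_m * f_ghz * 1e9 / c)
    return fspl + 6.4 * ray.n_reflections + 2.5 * ray.canopy_m

def link_path_loss_db(two_strongest_rays, f_ghz=28.0):
    """Reported pole-to-UE path loss: the lesser total loss among the two
    strongest rays after foliage post-processing."""
    return min(ray_loss_db(r, f_ghz) for r in two_strongest_rays)

# e.g. a LOS ray with 12 m of canopy crossing vs. a clear reflected ray:
pl = link_path_loss_db([Ray(180.0, 0, 12.0), Ray(230.0, 1, 0.0)])
```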
To evaluate the generalization capabilities of our approach, we divide the covered area into four distinct, non-overlapping areas based on the transmitter locations, each with approximately the same number of connectivity links. The links of one area are used as the test set for the final performance evaluation, and a second area is used for validation during model design and training. We refer to the links of those areas as _novel maps_ data, as the map data of those areas has not been used during training. The last two areas are further split to obtain additional indicators of the performance on _known maps_ data, i.e., the maps were available during training, but the receiver locations differ. This split results in a training, test, and validation set consisting of approximately 16%, 80%, and 4%, respectively, of the total links in the _known maps_ area. As a result, we only have sparse training data akin to real measurement campaigns.
For the ML approaches evaluated in this paper, the map is converted to images where one pixel corresponds to \(1\) m\({}^{2}\). The building footprints are represented as binary masks, since transmitter and receiver are generally located lower than the building height in our scenarios. Foliage, however, is often less tall and its height in each pixel is indicated relative to the maximum height described above. The transmitter and receiver locations are given in a local Cartesian coordinate system for the link-level algorithms. Fig. 2 illustrates the sparse receiver locations for a single pole as a discretized radio map.

Fig. 2: An example of the dataset for a single pole showing building footprints in black, foliage heights through green shades, the pole location in red, and receiver locations in blue.
### _Baseline models and training_
To evaluate the performance of our approach, we compare it with three baselines of popular approaches, namely _UNet_, _CNN+MLP_, and _3GPP w/ LOS oracle_. In the following we describe the baselines and our architecture in detail. All ML models were trained using a mean squared error (MSE) loss.
#### IV-B1 UNet
This model is the RadioUNet neural network from [10] adapted to work with an additional foliage input channel in the first convolutional layer. The data has been transformed to sparse radio maps for this purpose, such as shown in Fig. 2, and pixels that do not have target path loss values are ignored in the loss function. Although the radio maps are cropped as much as possible, less than 0.4% of the pixels in the training set have valid path loss values. We followed the same two-stage training approach as in [10], training one UNet first directly on the sparse radio maps. Then, in the next step, a second UNet is trained using the map and the output of the first UNet as input while the weights of the first UNet are frozen. Since the original dataset presented in [10] does not include foliage information and considers a lower carrier frequency, the dataset was not used for pretraining.

Fig. 3: Cumulative distribution functions over absolute prediction errors (LOS vs. NLOS).
#### IV-B2 CNN+MLP
This algorithm combines the map features obtained from a CNN with the direct beeline distance between transmitter and receiver in an MLP to predict their path loss. We use the popular ResNet-18 backbone [22] as the CNN and concatenate the features of the final fully connected layer with the scalar distance value. The output of the ResNet is a vector of size 512, resulting in a feature vector of 513 elements. This feature vector then serves as input to an MLP of 3 linear layers with output dimension 512, each followed by the common ReLU activation function. Then, a final linear layer projects the hidden features to the scalar path loss value. Note that this ResNet backbone is designed for inputs of size \(256\times 256\). We use squared crops of the map data with the transmitter in the center, rotate them such that the receiver is vertically aligned with the transmitter, and finally, resize them to the required input size. Of crop sizes corresponding to 800 m\({}^{2}\), 400 m\({}^{2}\), and 200 m\({}^{2}\), we have found 400 m\({}^{2}\) to work best.
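For reference, a sketch of this baseline in PyTorch (layer sizes follow the description above; weight initialization and training details are our own simplifications, and a recent torchvision is assumed):

```python
import torch
import torch.nn as nn
import torchvision

class CNNMLP(nn.Module):
    """Sketch of the CNN+MLP baseline: ResNet-18 features of the aligned
    map crop concatenated with the Tx-Rx distance, followed by an MLP."""

    def __init__(self, in_ch=2):
        super().__init__()
        self.backbone = torchvision.models.resnet18(weights=None)
        # Accept the 2-channel (buildings + foliage) map instead of RGB.
        self.backbone.conv1 = nn.Conv2d(in_ch, 64, 7, stride=2, padding=3,
                                        bias=False)
        self.backbone.fc = nn.Identity()                # 512-d features
        self.mlp = nn.Sequential(
            nn.Linear(513, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 1))

    def forward(self, crop, dist):
        # crop: (B, in_ch, 256, 256) resized map; dist: (B, 1) in metres.
        feat = self.backbone(crop)                      # (B, 512)
        return self.mlp(torch.cat([feat, dist], dim=1)).squeeze(-1)
```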
#### IV-B3 3GPP w/ LOS oracle
This model adopts the path loss equations of table VII.4.1-1 from [23] in the Urban Micro Street Canyon (UMi) scenario and is thus not an ML model. The carrier frequency is set to \(f_{c}=28\,\mathrm{GHz}\) while the heights and distances are computed from the 3D transmitter and receiver locations. Instead of computing a distance-dependent LOS probability as in [23], we provide an LOS oracle, such that, based on this LOS-flag, the correct model equations are used.
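For illustration, a sketch of the resulting predictor is given below. The constants reflect our reading of the UMi street-canyon equations in 3GPP TR 38.901; we keep only the below-breakpoint LOS branch and omit shadow fading, so [23] remains the authoritative reference:

```python
import numpy as np

def umi_path_loss_db(d3d_m, los, f_ghz=28.0, h_ut=1.5):
    """UMi street-canyon path loss with an LOS oracle.  Simplified sketch
    of Table 7.4.1-1: below-breakpoint LOS branch only, no shadow fading;
    d3d_m is the 3D Tx-Rx distance, frequency is in GHz."""
    pl_los = 32.4 + 21.0 * np.log10(d3d_m) + 20.0 * np.log10(f_ghz)
    if los:
        return pl_los
    pl_nlos = (35.3 * np.log10(d3d_m) + 22.4
               + 21.3 * np.log10(f_ghz) - 0.3 * (h_ut - 1.5))
    return max(pl_los, pl_nlos)  # NLOS loss is lower-bounded by LOS loss
```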
#### IV-B4 Scalable transformer (our approach)
The architecture choices of our approach largely follow those of ViT-32 [20], except for the patch size, which we set to \(P=33\). We employ 12 hidden layers, each with 12-headed multi-head attention on latent vectors of size \(D=768\) and no dropout. The MLP following each attention layer has dimension 3072. As described in Section III, we add one patch in each direction around the transmitter and receiver patches as padding, resulting in input images of \(R\times 3\) patches.
### _Generalization from sparse data_
The dataset contains only sparse training samples and thus poses special challenges to learning algorithms to generalize to novel maps not seen during training. Table II shows the root mean squared error (RMSE) and the mean absolute error (MAE) on our two test sets, the _known maps_ and the _novel maps_ test data. In all cases, our approach outperforms the other algorithms on this sparse dataset. While the performance of _3GPP w/ LOS oracle_ between the _known maps_ and the _novel maps_ data only differs marginally, the performance of the other algorithms varies strongly across the data splits. The differences arise from the fact that the _novel maps_ data was not available during training of the models. Therefore, it is an indicator of the models' capability to generalize to previously unseen maps. _3GPP w/ LOS oracle_ only gets the LOS-flag as a geometric feature and therefore its performance remains constant. Despite the challenges of generalization, taking map information into account is beneficial to the performance as seen by the improvements of _UNet_, _CNN+MLP_, and our approach compared to _3GPP w/ LOS oracle_. As _UNet_ is designed for dense training targets, such as radio maps, its performance on our data is likely suffering from the sparseness. Link-level prediction approaches, such as _CNN+MLP_ and ours, provide a more promising performance for sparse training data compared to image-to-image translation approaches, such as _UNet_.

\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Algorithm** & \multicolumn{2}{c|}{_known maps_} & \multicolumn{2}{c|}{_novel maps_} \\ \cline{2-5} & RMSE & MAE & RMSE & MAE \\ \hline _3GPP w/ LOS oracle_ & 10.18 & 6.86 & 10.28 & 6.97 \\ _UNet_ & 8.82 & 6.72 & 9.83 & 7.77 \\ _CNN+MLP_ & 2.89 & 1.72 & 5.58 & 3.62 \\ Ours & **2.27** & **1.15** & **5.31** & **3.29** \\ \hline \end{tabular}
\end{table} TABLE II: Comparison to the baselines on _known maps_ with unknown receiver locations, and on _novel maps_ where map and receiver locations were not available during training/validation.
The generalization behavior is also reflected in the CDFs shown in Fig. 3, as _novel maps_ appears more challenging than _known maps_ for the learning-based algorithms. In addition, we can inspect the difference between LOS and NLOS scenarios. While our approach and _CNN+MLP_ perform well for both LOS and NLOS cases, the _3GPP w/ LOS oracle_ performance drops severely in the NLOS case, indicating that modeling NLOS behavior requires more geometric information than the LOS-flag. Interestingly, its independence from the map geometry seems to benefit _3GPP w/ LOS oracle_ in the LOS scenarios of _novel maps_ (Fig. 3c), such that it partially outperforms the other algorithms.
### _Prediction of dense radio maps_
While the models were trained on sparse data, we can generate dense radio maps by predicting the path loss for each pixel given a desired resolution. For this purpose, we assume that the transmitter is located at the center of the radio map and the receivers at the center of each pixel. Fig. 4 shows examples of such radio maps for _CNN+MLP_ and our approach. The radio maps illustrate how both models take both foliage and building information into account and adapt their prediction accordingly. In figures 4(a) and 4(b), one can see sharp edges in the prediction at building corners and attenuation due to foliage in the top half of the images. In figures 4(c) and 4(d), the circle of trees in the top half of the images shows how the height of the trees indicated by the green color influences the prediction. The predicted radio maps also highlight the challenges of highly sparse data. In areas of low coverage, such as when buildings obstruct the direct path, the data density for training was low. As a direct result, the models may struggle to reflect the low coverage as it was not captured in the training data.
## V Conclusions
We have presented a transformer-based model for link-level path loss prediction that can be trained from sparse data, works on continuous transmitter and receiver locations, and generalizes better to novel maps than commonly used machine learning models for path loss prediction. A key feature of our model is that it can adaptively process map input of various sizes, allowing us to increase the map size for larger connectivity link distances without changing the resolution of the map. We argue that this property is especially useful for mmWave and shorter wavelengths in urban areas as the path loss of relevant links is often LOS dominated. A limitation of the model is that it currently does not take terrain information and building height into account for its prediction. While in our dataset the buildings are usually taller than the transmitter and receiver and the terrain varies little in the relevant area around a single transmitter, this can easily be addressed in the future, for example, by encoding the building height and terrain as additional channels of the map.
|
2301.12453 | APPT: Boosting Automated Patch Correctness Prediction via Fine-tuning
Pre-trained Models | Automated program repair (APR) aims to fix software bugs automatically
without human debugging efforts and plays a crucial role in software
development and maintenance. Despite promising, APR is still challenged by a
long-standing overfitting problem (i.e., the generated patch is plausible but
overfitting). Various techniques have thus been proposed to address the
overfitting problem. Recently, researchers have employed BERT to extract code
features, which are then used to train a classifier for patch correctness
prediction. However, BERT is restricted to feature extraction for classifier
training without benefiting from the training process, potentially generating
sub-optimal vector representations for patched code snippets. In this paper, we
propose APPT, a pre-trained model-based automated patch correctness assessment
technique by both pre-training and fine-tuning. APPT adopts a pre-trained model
as the encoder stack, followed by an LSTM stack and a deep learning classifier.
More importantly, the pre-trained model is fine-tuned in conjunction with other
components as a whole pipeline to fully adapt it specifically for reasoning
about patch correctness. We conduct an extensive experiment on 1,183 Defects4J
patches and the experimental results show that APPT achieves prediction
accuracy of 79.7% and recall of 83.2%, outperforming CACHE by 4.3% and 6.7%.
Our additional investigation on 49,694 real-world patches shows that APPT
achieves the optimum performance compared with existing representation learning
techniques. We further investigate the impact of each component and find that
they all positively contribute to APPT, e.g., the fine-tuning process and the
LSTM stack increase F1-score by 10.22% and 4.11%, respectively. We also prove
that adopting advanced pre-trained models can further provide substantial
advancement, highlighting the generalizability of APPT. | Quanjun Zhang, Chunrong Fang, Weisong Sun, Yan Liu, Tieke He, Xiaodong Hao, Zhenyu Chen | 2023-01-29T14:28:26Z | http://arxiv.org/abs/2301.12453v2 | # Boosting Automated Patch Correctness Prediction via Pre-trained Language Model
###### Abstract
Automated program repair (APR) aims to fix software bugs automatically without human debugging efforts and plays a crucial role in software development and maintenance. Despite the recent significant progress in the number of fixed bugs, APR is still challenged by a long-standing overfitting problem (i.e., the generated patch is plausible but overfitting). Various techniques have thus been proposed to address the overfitting problem. Among them, approaches that leverage deep learning to predict patch correctness automatically have recently been emerging along with the release of large-scale patch benchmarks. However, existing learning-based techniques mainly rely on manually-designed code features, which can be extremely costly and challenging to construct in practice. In this paper, we propose APPT, a pre-trained model-based automated patch correctness assessment technique, which treats the source code as a sequence of tokens without extra overhead to design a mass of features from different perspectives. In particular, APPT adopts a pre-trained model as the encoder stack, followed by an LSTM stack and a deep learning classifier. Although our idea is general and can be built on various existing pre-trained models, we have implemented APPT based on the BERT model. We conduct an extensive experiment on 1,183 Defects4J patches and the experimental results show that APPT achieves prediction accuracy of 79.0% and recall of 81.3%, outperforming the state-of-the-art technique CACHE by 3.6% and 4.8%. Our additional investigation on 49,694 real-world patches shows that APPT achieves the optimum performance (exceeding 99% in five common metrics for assessing patch classification techniques) compared with existing representation learning techniques. We also prove that adopting advanced pre-trained models can further provide substantial advancement (e.g., GraphCodeBERT-based APPT improves BERT-based APPT by 3.0% and 2.6% in precision and recall, respectively), highlighting the generalizability of APPT.
Automated Program Repair, Patch Correctness, Pre-trained Model
## 1 Introduction
Software bugs are inevitable in modern software systems and result in fatal consequences, such as costing trillions of dollars in financial loss and affecting billions of people around the world [1, 2]. It is incredibly time-consuming and labor-intensive for developers to fix such bugs due to the increasing size and complexity of modern software systems [3]. Automated program repair (APR) aims to fix revealed software bugs automatically without human intervention and has attracted massive attention from both academia and industry in the past decades [4, 5]. Despite being an emerging research area, a variety of APR techniques have been proposed and have continuously achieved promising results in terms of the number of fixed bugs in the literature [6, 7].
However, it is fundamentally difficult to achieve high precision for generated patches due to the weak program specifications [8]. Existing APR techniques usually leverage the developer-written test cases as the criteria to assess the correctness of the generated patches. In fact, a generated patch passing the available test cases may not generalize to other potential test cases, leading to a long-standing challenge of APR (i.e., the overfitting issue) [8]. For example, when a bug is detected in functionality, a patch can be simply generated by deleting the functionality and the available test cases usually fail to exercise the deleted functionality [9]. In this case, developers need to consume tremendous time and effort to filter the overfitting patches, resulting in a negative debugging performance when APR techniques are applied in practice [10, 11].
Thus, various automated patch correctness assessment (APCA) techniques have been proposed to determine whether a generated patch is indeed correct or not [12]. According to the extracted features, traditional APCA techniques can be categorized into two groups: static and dynamic ones [13]. Static techniques tend to analyze the code change patterns or code similarity based on syntactic and semantic features. For example, Tan et al. [14] define a set of generic forbidden transformations (e.g., the above-mentioned functionality deleting) for the buggy program. In contrast, dynamic techniques usually execute the plausible patches against extra test cases generated by automated test generation tools (e.g., Evosuite [15] and Randoop [16]). For example, Xiong et al. [17] generate new test cases and determine patch correctness based on the behavior similarity of the test case executions. However, the static techniques may suffer from prediction precision problems, while it is quite time-consuming for dynamic techniques to generate additional test cases and execute all patched programs [13].
Recently, inspired by large-scale patch benchmarks being released [6, 7], some learning-based APCA techniques have been proposed to assess patch correctness by embedding buggy and patched code snippets [12, 18, 19]. For example, Lin et al. [20] leverage the abstract syntax tree (AST) path to represent the patch and build a deep learning classifier to predict the correctness of the patch.
Similarly, He et al. [18] extract code features at the AST level statically and train a probabilistic model to perform patch prediction. However, despite outstanding prediction results, existing learning-based APCA techniques mainly employ complex code-aware features (e.g., the AST path in [20]) or manually-designed code features (e.g., 202 code features in [18]), which are costly to construct and extract in practice.
In this work, we propose _APPT_, the first _A_utomated _P_re-trained model-based _P_atch correc_T_ness assessment technique, which employs pre-training and fine-tuning to address the above limitation of prior work. We first adopt a large pre-trained model as the encoder stack to extract code representations. We then employ bidirectional LSTM layers to capture rich dependency information between the buggy and patched code snippets. Finally, we build a deep learning classifier to predict whether the patch is overfitting or not. APPT treats only the source code tokens as the input and automatically extracts code features using a well-trained encoder stack, getting rid of the need for code-aware features and manually-designed features. Although APPT is conceptually general and can be built on various pre-trained models, we have implemented APPT as a practical APCA tool based on the BERT model. Our experimental results on 1,183 Defects4J patches indicate that APPT improves the state-of-the-art technique CACHE by 3.6% accuracy, 1.2% precision, 4.8% recall, 2.9% F1-score and 3.1% AUC. We conduct an additional investigation on 49,694 real-world patches from five different patch benchmarks and the results show that APPT exceeds 99% in accuracy, precision, recall, F1-score and AUC metrics, outperforming the existing representation learning techniques. We also adopt different pre-trained models to further investigate the generalization ability of APPT. The results demonstrate that APPT with advanced pre-trained models can enhance the prediction performance. For example, the precision and recall of APPT can be improved by 3.0% and 2.6% when equipped with GraphCodeBERT, which are 4.2% and 7.2% higher than the state-of-the-art technique CACHE.
To sum up, we make the following major contributions:
* **New Direction.** This paper opens a new direction for patch correctness prediction: directly utilizing large pre-trained models via pre-training and fine-tuning. Compared with existing learning-based APCA techniques, our approach does not need any additional effort to design and extract complex code features.
* **Novel Technique.** We propose APPT, a BERT-based APCA technique that leverages pre-training, fine-tuning and a deep learning classifier to predict patch correctness. To the best of our knowledge, we are the first to exploit fine-tuning the pre-trained model for assessing patch correctness.
* **Extensive Study.** We conduct various empirical studies to investigate and evaluate APPT on diverse patch benchmarks. The results show that APPT achieves significantly better overall performance than existing learning-based and traditional APCA techniques.
* **Available Artifacts.** We release the relevant materials (including source code, patches and results) used in the experiments for replication and future research1.
Footnote 1: All artifacts relevant to this work can be found at anonymouswebsite, accessed August 2022.
## 2 Background
### _Automated Program Repair_
APR techniques' primary objective is to identify and fix program bugs automatically. Fig. 1 illustrates the workflow of a typical APR technique, which is usually composed of three steps: (1) the localization phase utilizes off-the-shelf fault localization techniques to recognize the suspicious code elements (e.g., statements or methods) [21, 22]; (2) the repair phase then modifies these elements based on a set of transformation rules to generate various new program variants, also called candidate patches; (3) the verification phase adopts the original test cases as the oracle to check whether candidate patches execute as expected or not. Specifically, a candidate patch passing the original test cases is called a _plausible_ patch. A plausible patch that is semantically equivalent to the developer patch denotes a _correct_ patch; otherwise, it is an _overfitting_ patch.

Fig. 1: Overview of APR
It is fundamentally challenging to ensure the correctness of plausible patches due to the weak specification of the program behavior in practice. Existing studies have demonstrated that manually identifying the overfitting patches is time-consuming and may harm the debugging performance of developers [10, 23]. Thus, various techniques have been proposed to validate patch correctness automatically. According to whether dynamic execution or machine learning is required [13], we categorize them into three main categories: static-based techniques, dynamic-based techniques and learning-based techniques.
\(\bullet\)_Static-based APCA techniques._ These techniques aim to prioritize correct patches over overfitting ones using static code features, such as code-deleting program transformations.
\(\bullet\)_Dynamic-based APCA techniques._ These techniques aim to filter out overfitting patches by executing extra test cases, which are generated based on fixed or patched programs. According to whether the correct patches are required, these techniques can be further categorized into _dynamic with oracle-based ones_ and _dynamic without oracle-based ones_.
\(\bullet\)_Learning-based APCA techniques._ These techniques aim to predict the correctness of plausible patches enhanced by machine learning techniques. They usually extract the manually-designed code features and then adopt a classifier to perform patch prediction [18]. Some techniques are
Fig. 1: Overview of APR
Some techniques are proposed to adopt code embedding techniques to extract code features automatically [20], which are also denoted as _representation learning-based APCA techniques_.
Recently, an increasing number of research efforts have attempted to use machine learning techniques to learn from existing patch benchmarks for predicting potential patch correctness, achieving promising results. In this work, we adopt a large pre-trained model (i.e., BERT) to encode plausible patches and train a deep learning classifier to predict patch correctness. Compared to existing techniques, our paper is the first work to predict patch correctness by pre-training and fine-tuning a pre-trained model.
### _Pre-trained Model_
Recently, pre-trained language models (e.g., BERT) have significantly improved performance across a wide range of natural language processing (NLP) tasks, such as machine translation and text classification [24, 25, 26]. Typically, the models are pre-trained to derive generic language representations by self-supervised training on large-scale unlabeled data and then are transferred to benefit multiple downstream tasks by fine-tuning on limited annotated data.
Existing pre-trained models usually adopt encoder-decoder architectures, where an encoder encodes an input sequence as a fixed-length vector representation, and a decoder generates an output sequence based on the input representation. Encoder-only models (e.g., BERT [24]) usually pre-train a bidirectional transformer in which each token can attend to every other token. Encoder-only models are good at understanding tasks (e.g., code search), but their bidirectional nature requires an additional decoder for generation tasks, where the decoder is initialized from scratch and cannot benefit from the pre-training tasks. Decoder-only models (e.g., GPT [25]) are pre-trained using unidirectional language modeling that only allows tokens to attend to the previous tokens and itself to predict the next token. Decoder-only models are good at auto-regressive tasks like code completion, but the unidirectional framework is sub-optimal for understanding tasks. Encoder-decoder models (e.g., T5 [26]) often make use of denoising pre-training objectives that corrupt the source input and require the decoder to recover it. Compared to encoder-only and decoder-only models that favor understanding and auto-regressive tasks, encoder-decoder models can support generation tasks like code summarization. In this work, we treat patch correctness assessment as a binary classification task and consider encoder-only models to obtain embeddings of code snippets, following existing work [27].
Inspired by the success of pre-trained models in NLP, many recent attempts have been made to boost numerous code-related tasks (e.g., code summarization and code search) with pre-trained models (e.g., GraphCodeBERT) [28, 29]. Despite the promising results, little work aims to explore the capabilities of pre-trained models in supporting patch correctness assessment. In this work, BERT is selected to exploit pre-trained models for automated patch correctness assessment, as it has been widely adopted in various code-related tasks and is quite effective for classification tasks [28, 29]. Two advanced BERT-style models (i.e., CodeBERT and GraphCodeBERT) are also selected to investigate the generalization ability of APPT.
## 3 Approach
Fig. 2 presents the overall framework of our approach. Generally, APPT accepts a buggy program and a plausible patch that passes the available test cases as inputs. APPT extracts the buggy code snippet and its corresponding patched code snippet, and adopts four strategies to truncate the code tokens. APPT then uses the pre-trained BERT model for embedding the truncated tokens. After obtaining the representations for the buggy and patched code snippets, APPT uses four pre-defined functions for integrating the representations. Finally, APPT adopts a deep learning classifier to return the final result (i.e., correct or overfitting).
### _Code Extraction_
Given a buggy program, existing APR tools may return a plausible patch \(p\) (if it exists) that passes all available test cases. The _code extraction phase_ aims to take the returned patch and the buggy program as inputs, and output the corresponding buggy and patched code tokens (shown in Fig. 2(a)).
Specifically, we obtain the buggy and patched code snippets (i.e., \(C_{b}\) and \(C_{p}\)) by parsing the patch file. Firstly, we select the removed and added lines as the buggy and patched lines, marked with '-' and '+', respectively. Secondly, to keep the context information about the plausible patch, we keep the unchanged lines (i.e., those without '+' or '-' at the beginning) as part of each code snippet. Finally, the buggy (or patched) code snippet is made up of the buggy (or patched) lines and the common context part.
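A minimal sketch of this extraction step, assuming a unified-diff-style patch file, might look as follows (this is illustrative and not APPT's exact parser):

```python
def extract_snippets(patch_lines):
    """Split a diff into the buggy snippet C_b and the patched snippet C_p."""
    buggy, patched = [], []
    for line in patch_lines:
        if line.startswith(("---", "+++", "@@")):
            continue  # skip diff headers and hunk markers
        if line.startswith("-"):
            buggy.append(line[1:])      # removed line: buggy side only
        elif line.startswith("+"):
            patched.append(line[1:])    # added line: patched side only
        else:
            buggy.append(line)          # unchanged context kept in both
            patched.append(line)
    return "\n".join(buggy), "\n".join(patched)
```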
We treat the buggy (or patched) code snippet as a sequence of tokens and utilize a subword tokenization method to address the out-of-vocabulary (OOV) problem by further breaking down identifiers into their subtokens [30] when tokenizing the code snippet. In this work, we keep the original tokenization vocabulary instead of building a new vocabulary using the byte pair encoding (BPE) algorithm, as we want APPT to inherit the natural language understanding ability and start learning prediction from a good initial point.
After the buggy (or patched) code tokens are extracted, we take them as the inputs to the token embedding phase. However, pre-trained models are usually limited to a particular input length; for example, BERT can only take input sequences up to 512 tokens long. We therefore truncate inputs longer than 512 tokens after tokenization. Following existing work [31], we use different methods to truncate the pair of code snippets.
* head-only: keep the first 512 tokens in \(C_{b}\) and \(C_{p}\).
* tail-only: keep the last 512 tokens in \(C_{b}\) and \(C_{p}\).
* mid-only: select 512 tokens in the middle of \(C_{b}\) and \(C_{p}\).
* hybrid: select the first 256 and the last 256 tokens in \(C_{b}\) and \(C_{p}\).
In our experiment, we use the head-only method to truncate the code tokens by default. We also discuss the impact of different truncation methods in Section 5.3.2. Finally, the buggy and patched code tokens (i.e., \(T_{b}\) and \(T_{p}\)) are extracted based on \(C_{b}\) and \(C_{p}\) to fit the maximum length limit of BERT.
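A hedged sketch of the tokenization and the four truncation strategies, using the Hugging Face tokenizer, is shown below; the example code string is a placeholder.

```python
from transformers import BertTokenizer

# WordPiece subword tokenization with the original BERT vocabulary, so
# out-of-vocabulary identifiers are broken down into known subtokens.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

def truncate(tokens, method="head", max_len=512):
    """The four truncation strategies; head-only is the default."""
    if method == "head":
        return tokens[:max_len]
    if method == "tail":
        return tokens[-max_len:]
    if method == "mid":
        start = max(0, (len(tokens) - max_len) // 2)
        return tokens[start:start + max_len]
    if method == "hybrid":
        half = max_len // 2
        return tokens[:half] + tokens[-half:]
    raise ValueError(f"unknown truncation method: {method}")

T_b = truncate(tokenizer.tokenize("int mid = (lo + hi) / 2;"))  # buggy tokens
```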
### _Token Embedding_
The _token embedding phase_ takes the buggy (or patched) code tokens (i.e., \(T_{b}\) or \(T_{p}\)) as input and embeds them into the buggy (or patched) vector (i.e., \(E_{b}\) or \(E_{p}\)) as output (shown in Fig. 2(b)). APPT implements a stack of twelve layers of encoder blocks to extract the hidden state of the code snippet. Each encoder block consists of three components. The first part is a multi-head self-attention layer to learn long-range dependencies in the input code tokens. The second part is a simple, position-wise fully connected feed-forward neural network, which linearly transforms the token embeddings for better feature extraction. The third part is a residual connection around each component, followed by a layer normalization to ensure the stability of the code token embedding distribution.
In particular, the self-attention mechanism computes the representation of each code token by considering the position relationship between the code tokens. The self-attention mechanism mainly relies on three main vectors, query \(Q\), key \(K\), and value \(V\), by mapping a query and a set of key-value pairs to an output vector. We employ a scaled dot-product self-attention to calculate the attention scores of each token by taking the dot product between all of the query vectors and key vectors. The attention scores are then normalized to probabilities using the softmax function to get the attention weights. Finally, the value vectors can be updated by taking a dot product between the value vectors and the attention weight vectors. The self-attention operation is computed using three matrices \(Q\), \(K\) and \(V\) as follows:
\[\mathrm{Attention}(Q,K,V)=\mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}} \right)V \tag{1}\]
To capture richer semantic meanings of the input code tokens, we further use a multi-head mechanism to realize the self-attention, which allows the model to jointly attend the information from different code representation subspaces at different positions. For \(d\)-dimension \(Q\), \(K\), and \(V\), we split those vectors into \(h\) heads where each head has \(d/h\)-dimension. After all of the self-attention operations, each head will then be concatenated back again to feed into a fully-connected feed-forward neural network including two linear transformations with a ReLU activation in between. The multi-head mechanism can be summarized by the following equation:
\[\mathrm{MultiHead}(Q,K,V)=\mathrm{Concat}\left(\mathrm{head}_{1},\dots, \mathrm{head}_{h}\right)W^{O} \tag{2}\]
where \(\mathrm{head}_{i}=\mathrm{Attention}(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V})\) and \(W^{O}\) is used to linearly project to the expected dimension after concatenation. Therefore, the encoder stack can take an input code snippet and output a real-valued vector for each code token within the code snippet based on the context.
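As a generic sketch (not APPT-specific code), Equation 1 translates almost line for line into PyTorch, and the multi-head mechanism of Equation 2 is available as a built-in module with BERT-base dimensions:

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    # Equation 1: softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)
    weights = torch.softmax(scores, dim=-1)   # normalized attention weights
    return weights @ V

# Multi-head self-attention with BERT-base dimensions (12 heads, 768-dim):
mha = torch.nn.MultiheadAttention(embed_dim=768, num_heads=12, batch_first=True)
x = torch.randn(1, 512, 768)                  # (batch, tokens, embedding)
out, attn_weights = mha(x, x, x)              # self-attention: Q = K = V = x
```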
### _Patch Classification_
After the embedding vectors of the buggy and patched code snippets (i.e., \(E_{b}\) and \(E_{p}\)) are extracted by the encoder stack, the _patch classification phase_ first integrates the two vectors into a single input vector (e.g., \(E_{con}\)) and then adopts a deep learning classifier to predict the patch correctness automatically (shown in Fig. 2(c)).
#### 3.3.1 Representations Integration
Given two vectors \(E_{b}\) and \(E_{p}\) with \(n\) dimensions representing the buggy and patched code snippets, respectively, we integrate the two vectors into one code change vector for patch classification. In detail, we leverage different approaches to integrate them so as to characterize the differences between \(E_{b}\) and \(E_{p}\) from diverse aspects, such as a vector-wise concatenation operation \(E_{con}\), an element-wise addition operation \(E_{add}\), an element-wise subtraction operation \(E_{sub}\), and a Hadamard product \(E_{pro}\). We also attempt to capture crossed features between the two vectors by concatenating the above integrated vectors into \(E_{mix}\). The integration approaches, selected due to their promising results in previous studies [12, 32], are listed as follows:
1. \(\mathbf{E_{con}}\) is a concatenation operation between \(E_{b}\) and \(E_{p}\) on the vector-wise level with \(2n\) dimensions (i.e., \(E_{con}=E_{b}\bigoplus E_{p}\)).
2. \(\mathbf{E_{add}}\) is an addition operation between \(E_{b}\) and \(E_{p}\) on the element-wise level with \(n\) dimensions (i.e., \(E_{add}=E_{b}+E_{p}\)).
3. \(\mathbf{E_{sub}}\) is a subtraction operation between \(E_{b}\) and \(E_{p}\) on the element-wise level with \(n\) dimensions (i.e., \(E_{sub}=E_{b}-E_{p}\)).
4. \(\mathbf{E_{pro}}\) is a Hadamard product operation between \(E_{b}\) and \(E_{p}\) on the element-wise level with \(n\) dimensions (i.e., \(E_{pro}=E_{b}\odot E_{p}\)).
5. \(\mathbf{E_{mix}}\) is a concatenation over \(E_{con}\), \(E_{add}\), \(E_{sub}\) and \(E_{pro}\) on the vector-wise level with \(5n\) dimensions (i.e., \(E_{mix}=E_{con}\bigoplus E_{add}\bigoplus E_{sub}\bigoplus E_{pro}\)).
Fig. 2: Overview of APPT
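As a minimal PyTorch sketch, the five integration operations reduce to a few tensor expressions; the function below is illustrative and mirrors the notation above.

```python
import torch

def integrate(E_b, E_p, mode="concat"):
    # E_b, E_p: (batch, n) embeddings of the buggy and patched snippets.
    ops = {
        "concat":      torch.cat([E_b, E_p], dim=-1),  # E_con, 2n dims
        "addition":    E_b + E_p,                      # E_add, n dims
        "subtraction": E_b - E_p,                      # E_sub, n dims
        "product":     E_b * E_p,                      # E_pro (Hadamard), n dims
    }
    if mode == "mix":  # E_mix: all four concatenated, 5n dims
        return torch.cat([ops["concat"], ops["addition"],
                          ops["subtraction"], ops["product"]], dim=-1)
    return ops[mode]

E_con = integrate(torch.randn(16, 768), torch.randn(16, 768))  # (16, 1536)
```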
#### 3.3.2 LSTM Stack
After the embedding vector (e.g., \(E_{con}\)) of the changed code tokens is extracted, APPT aims to determine the given patch's correctness based on a deep learning classifier. To extract more hidden code change features, we further feed the code change vector into a Long Short-Term Memory (LSTM) stack. The LSTM stack has two bidirectional LSTM layers, whose output is a new state generated by concatenating the hidden states from both directions at each time step. LSTM is a specialized recurrent neural network (RNN) for modeling long-term dependencies of sequences. A common LSTM gate unit is composed of a cell, an input gate, an output gate and a forget gate. Thanks to the gated mechanism, LSTM is well-suited to extract the contextual semantic features containing token sequential dependencies and has been widely used in various kinds of tasks, such as vulnerability detection [33], fault localization [34], and automated program repair [35].
In APPT, the LSTM stack computes a mapping from an input code change vector \(x=(x_{1},...,x_{T})\) (e.g., \(E_{con}\)) to an output vector \(z=(z_{1},...,z_{T})\) by calculating the network gate unit activations. We implement the gated mechanism by leveraging the input gates and forget gates to control the propagation of cell states. Specifically, when updating the cell state, the input gates decide what new information from the current input is included in the cell states (i.e., Equation 3), and the forget gates decide what information is excluded from the cell states (i.e., Equation 4). Based on the new and forgotten information, the cell states, acting as the memory of the LSTM unit, can be updated (i.e., Equation 5). The output gate then determines which parts of the cell state contribute to the next hidden state (i.e., Equation 6). Finally, the current cell state is passed through the tanh function and gated by the output gate to produce the hidden state (i.e., Equation 7), from which the output of the LSTM stack is calculated (i.e., Equation 8).
\[i_{t}=\mathrm{sigmoid}\left(W_{ix}x_{t}+W_{ih}h_{t-1}+b_{i}\right) \tag{3}\]
\[f_{t}=\mathrm{sigmoid}\left(W_{fx}x_{t}+W_{fh}h_{t-1}+b_{f}\right) \tag{4}\]
\[c_{t}=f_{t}\odot c_{t-1}+i_{t}\odot\tanh\left(W_{gx}x_{t}+W_{gh}h_{t-1}+b_{g}\right) \tag{5}\]
\[o_{t}=\mathrm{sigmoid}\left(W_{ox}x_{t}+W_{oh}h_{t-1}+b_{o}\right) \tag{6}\]
\[h_{t}=o_{t}\odot\tanh\left(c_{t}\right) \tag{7}\]
\[z_{t}=W_{zh}h_{t}+b_{z} \tag{8}\]
where the \(W\) terms denote weight matrices (e.g., \(W_{ix}\) is the matrix of weights from the input gate to the input), the \(b\) terms denote bias vectors (e.g., \(b_{i}\) is the input gate bias vector) and \(\odot\) denotes element-wise multiplication of the vectors.
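A minimal PyTorch version of this two-layer bidirectional LSTM stack might look as follows; the input and hidden sizes are illustrative placeholders.

```python
import torch

# Two bidirectional LSTM layers; at each time step the hidden states from
# both directions are concatenated, as described above.
lstm = torch.nn.LSTM(input_size=1536,    # e.g., dimension of E_con (2 * 768)
                     hidden_size=256,
                     num_layers=2,
                     bidirectional=True,
                     batch_first=True)

x = torch.randn(16, 1, 1536)             # (batch, sequence, features)
z, (h_n, c_n) = lstm(x)                  # z: (batch, sequence, 2 * 256)
```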
#### 3.3.3 Classifier
After the computation of all LSTM iterations, the embedding vectors of changed code tokens are further fed to a designed deep learning classifier to predict the patch correctness. The classifier is composed of two fully connected layers followed by a binary predictor. In APPT, we apply a standard softmax function to obtain the probability distribution over correctness. A patch is labeled as correct if its probability of being correct is larger than that of being incorrect; otherwise, it is considered overfitting.
In particular, for patch \(p\), \(z\) denotes its output of the last iteration in the LSTM stack, which is further linearly transformed into a real number as Equation 9, where \(W\in\mathbb{R}^{d\times 1}\), \(b\in\mathbb{R}\), and \(n\) denotes the number of class (i.e., correct and overfitting). We then leverage softmax function to normalize the output of patch \(p\) as Equation 10, where \(s\) denotes the correct or overfitting probability of patch \(p\) predicted by the model.
\[y_{i}=Wz_{i}+b_{i}\quad\forall\mathrm{i}\in 1\ldots\mathrm{n} \tag{9}\]
\[s\left(y_{i}\right)=\frac{\exp\left\{y_{i}\right\}}{\sum_{j=1}^{n}\exp\left\{y_{j}\right\}} \tag{10}\]
### _Training_
To train the network, we calculate the loss to update the neural weights based on the predicted result and the ground truth. We use the cross-entropy loss, which has been widely used in classification tasks and patch prediction studies [20, 36]. In particular, \(g_{i}\in\{0,1\}\) denotes whether the \(i\)-th patch is correct or overfitting. The cross-entropy loss compares the target \(g_{i}\) with the prediction \(s\) in a logarithmic fashion, heavily penalizing confident but wrong predictions. The objective function is computed as in Equation 11 and is minimized during training to update the parameters of our model.
\[L=\sum_{i}-[g_{i}\cdot\log(s)+(1-g_{i})\cdot\log(1-s)] \tag{11}\]
We employ the dropout technique to improve the robustness of APPT and the Adam approach [37] to optimize the objective function.
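Putting the classifier, the cross-entropy objective, dropout and Adam together, one training step can be sketched as below; the layer sizes and the random `features` batch are placeholders, and the softmax of Equation 10 is folded into PyTorch's `CrossEntropyLoss`.

```python
import torch

classifier = torch.nn.Sequential(         # two fully connected layers
    torch.nn.Linear(512, 128),            # 512 = 2 * 256 from the BiLSTM above
    torch.nn.ReLU(),
    torch.nn.Dropout(p=0.5),              # dropout for robustness
    torch.nn.Linear(128, 2),              # logits for {correct, overfitting}
)
optimizer = torch.optim.Adam(classifier.parameters(), lr=5e-5)
criterion = torch.nn.CrossEntropyLoss()   # softmax + cross-entropy (Eqs. 10-11)

features = torch.randn(16, 512)           # placeholder LSTM outputs (batch, dim)
labels = torch.randint(0, 2, (16,))       # 0 = correct, 1 = overfitting

logits = classifier(features)
loss = criterion(logits, labels)          # Equation 11, minimized during training
optimizer.zero_grad()
loss.backward()
optimizer.step()
```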
## 4 Experiment
### _Research Questions_
The empirical study is conducted to answer the following research questions.
**RQ1:** How does APPT perform compared with existing state-of-the-art representation learning-based APCA techniques?
**RQ2:** How does APPT perform compared with existing state-of-the-art traditional and learning-based APCA techniques?
**RQ3:** To what extent do the different choices affect the overall effectiveness of APPT?
**RQ3.1:** To what extent do the token truncation choices affect the overall effectiveness of APPT?
**RQ3.2:** To what extent do the vector concatenation choices affect the overall effectiveness of APPT?
**RQ3.3:** To what extent do the pre-trained model choices affect the overall effectiveness of APPT?
RQ1 aims to compare APPT with 16 representation learning techniques to explore to what extent APPT outperforms them, including three classifiers (decision tree, logistic regression, and naive Bayes) crossed with five representation methods (BERT, code2vec, code2seq, Doc2Vec, and CC2Vec) from Tian et al. [12], and the most recent technique CACHE from Lin et al. [20]. RQ2 is designed to investigate the effectiveness of APPT by comparing it with both dynamic and static techniques. The latest learning-based APCA technique, ODS, is also evaluated in our study. RQ3 focuses on the impact analysis of APPT, which is further refined into three sub-RQs. In detail, RQ3.1 explores how the four token truncation choices affect the effectiveness of APPT. RQ3.2 explores how the five vector concatenation methods affect the effectiveness of APPT. RQ3.3 replaces BERT with the advanced CodeBERT and GraphCodeBERT to investigate the impact of the pre-trained models on the effectiveness of APPT.
### _Dataset_
With the rapid development of APR research over the last decade, a broad range of repair techniques has been proposed [38, 39, 40], resulting in a growing number of released patches across many benchmarks [7, 13]. These large-scale patch benchmarks enable deep learning-based prediction techniques to learn the distribution of correct and overfitting patches for patch correctness assessment. In this study, we adopt two patch datasets based on recent studies [12, 13, 20]: a small one containing 1,183 labeled Defects4J patches and a large one containing 50,794 labeled real-world patches.
On the small dataset, we mainly focus on the released patches from Defects4J [56], which is the most widely-adopted benchmark in APR research [7]. We select the benchmarks released by two recent large-scale studies, i.e., Wang et al. [13] and Tian et al. [12]. Specifically, the first benchmark [13] includes the labeled patches provided by Liu et al. [7], Xiong et al. [17] and Defects4J developers [56]. The second benchmark [12] includes the labeled patches from Liu et al. [7] and also considers the patches generated by some well-known APR tools that are not included in Liu et al. [7] to better explore the overfitting problem, i.e., JAID [48], SketchFix [49], CapGen [43], SOFix [53] and SequenceR [55]. To avoid the data leakage issue in the two benchmarks, a filtering process is also conducted to discard duplicate patches. In particular, given a patch with all blank spaces removed, the remaining text is compared with that of the other patches. If two patches are identical with respect to their text information, they are considered duplicates; this filtering results in 1,183 patches in our small dataset. The patches are generated by 22 distinct APR tools, which can be divided into four categories, i.e., heuristic-based, constraint-based, template-based, and learning-based techniques. The detailed information on these covered APR tools is presented in Table I, where the first column lists the four repair technique categories and the second column lists the corresponding repair techniques.
On the large dataset, we further consider a variety of patches generated from other benchmarks to evaluate the generality of APPT. Recently, existing studies demonstrate that APR techniques may overfit Defects4J in terms of repairability [6, 11]. Thus, some other benchmarks have been constructed to evaluate the performance of APR techniques, such as Bugs.jar [58], IntroclassJava [59], BEARS [60] and QuixBugs [61], providing substantial patches on the large dataset. In this work, we consider a large patch dataset released by a recent study [20] to investigate the generality of APPT. The large patch dataset includes the labeled patches provided by the RepairThemAll framework [6] and ManySStuBs4J [57]. In particular, the RepairThemAll framework [6] contains 64,293 patches generated using 11 Java test-suite-based repair tools and 2,141 bugs from five diverse benchmarks. However, there exists an imbalanced dataset issue, as over 98.6% (63,393/64,293) of the generated patches are actually labeled as incorrect. Recent studies have revealed that a well-balanced dataset is essential when investigating deep learning-based prediction techniques [12, 18]. To compensate for the lack of correct patches, the large patch dataset then includes ManySStuBs4J [57], which provides simple bug-fix changes mined from 1,000 popular open-source Java projects. The bug-fix changes are correct fix attempts of real-world bugs and thus are considered correct patches in our experiment. Finally, a large balanced patch dataset is built from the RepairThemAll framework and ManySStuBs4J by discarding duplicate patches and filtering out the ones from small student-written programming assignments (e.g., IntroClassJava).
TABLE I: APR tools in the small benchmark

| Category | APR Tools |
|---|---|
| Heuristic-based | jGenProg [41], jKali [41], jMutRepair [41], SimFix [42], ARJA [38], GenProg-A [38], Kali-A [38], RSRepair-A [38], CapGen [43] |
| Constraint-based | DynaMoth [44], Nopol [45], ACS [46], Cardumen [47], JAID [48], SketchFix [49] |
| Template-based | kPAR [50], FixMiner [51], AVATAR [52], TBar [5], SOFix [53], HDRepair [54] |
| Learning-based | SequenceR [55] |
TABLE II: Datasets used in our experiment

| Datasets | Subjects | # Correct | # Overfitting | Total |
|---|---|---|---|---|
| Small | Tian et al. [12] | 468 | 532 | 1,000 |
| | Wang et al. [13] | 248 | 654 | 902 |
| | Our Study | 532 | 648 | 1,183 |
| Large | ManySStuBs4J [57] | 51,433 | 0 | 51,433 |
| | RepairThemAll [6] | 900 | 63,393 | 64,293 |
| | Our Study | 25,589 | 24,105 | 49,694 |
The dataset involves all available patches generated on the RepairThemAll framework and ManySStuBs4J, resulting in 49,694 patches after deduplication.
Statistics on the two datasets are shown in Table II. Table II has two main rows representing the two datasets, each of which has three sub-rows. The first and second sub-rows list the two sources in the corresponding dataset, and the third sub-row lists the filtered patches used in our experiment from the two sources. We also present the number of correct, overfitting and total patches in the last three columns.
### _Baselines_
Various APCA techniques have been proposed in the literature to validate patch correctness. Following existing studies [17, 20], we attempt to select state-of-the-art techniques designed for the Java language, as Java is the most targeted language in the APR community [7] and the existing patches of real-world bugs are usually available in Java [12]. We first consider the recent empirical study by Wang et al. [13] to identify existing APCA techniques. We then select recent advanced studies [12, 20] that are not included in Wang et al. [13].
In general, following existing work [13, 20], the existing APCA techniques can be categorized into static, dynamic and learning-based APCA techniques according to whether test execution is needed or deep learning techniques are adopted (mentioned in Section 2). Meanwhile, according to whether the ground-truth patch is required, they can be further categorized into two categories (i.e., with or without oracle). Particularly, similar to our proposed method APPT, CACHE and embedding learning techniques adopt representation models to embed changed code and a deep learning classifier to predict patch correctness. Such techniques can be further considered as representation learning APCA techniques.
The details of the selected APCA techniques are illustrated in Table III. The first column lists the three APCA categories. The second and third columns list the techniques according to whether oracle information is required. We also mark the representation learning techniques (e.g., APPT) in italics. We summarize the selected techniques as follows.
#### 4.3.1 Dynamic-based APCA Techniques
Dynamic-based techniques are designed to distinguish correct patches from overfitting patches based on the outcome or the execution traces of the original or generated test cases.
_Simple Test Generation_. The overfitting issue is prevalent in the repair process due to the weak adequacy of existing test cases. Thus, researchers use test case generation tools to generate extra test cases based on the fixed program and check whether or not the generated patches that pass the original test cases can pass the extra test cases [23, 66]. In this work, we adopt Evosuite [15] and Randoop [16] as the test case generation tools, as they have been widely investigated in previous studies.
_DiffTGen_. Xin et al. [62] identify overfitting patches by executing test cases generated by an external test generator (i.e., Evosuite). Different from _simple test generation_, which generates test cases randomly, DiffTGen generates test cases to uncover the syntactic differences between the patched and buggy programs. A plausible patch is regarded as overfitting if the output of the patched program is not the same as that of the correct program. DiffTGen needs a human-written patch as a reference and requires the developers to provide human-amenable testing information as oracles for the generated test cases.
_Daikon_. Daikon is a dynamic-based technique based on program invariants with oracle information. Yang et al. [63] adopt program invariants to explore the differences between an overfitting and a correct patch. A patch is considered correct if its inferred invariants are identical to those of the ground truth; if the invariants differ, the patch is considered overfitting.
_PATCH-SIM_. Xiong et al. [17] consider the execution traces of the passing tests on the buggy and patched programs are likely to be similar, while the execution traces of failing tests on the buggy and patched programs are likely different. Based on the concept, they approximate the correctness of a patch based on the execution trace without the oracle information. PATCH-SIM adopts Randoop to generate additional test cases to collect dynamic execution information. In this work, we also replace Randoop with Evosuite to comprehensively explore the impact of test generation techniques (denoted as E-PATCH-SIM).
_Opad_. Yang et al. [67] adopt fuzzing testing to generate new test cases and employ two test oracles (crash and memory-safety) to enhance the validity checking of patches. The original implementation of Opad is not designed for Java language and uses American Fuzz Lop (AFL) as the fuzzing technique. In this work, following recent studies [13, 20], we replace AFL with Randoop and Evosuite to generate new test cases on the Java programs and denote them as R-Opad and E-Opad, respectively.
#### 4.3.2 Static-based APCA Techniques
Static-based techniques usually adopt static analysis tools to extract some designed static features and then check patch correctness based on such features.
TABLE III: Compared APCA techniques in our experiment.

| Category | With Oracle Required | Without Oracle Required |
|---|---|---|
| Dynamic-based | Evosuite [15], Randoop [16], DiffTGen [62], Daikon [63] | PATCH-SIM [17], E-PATCH-SIM [17], R-Opad [67], E-Opad [67] |
| Static-based | — | ssFix [64], CapGen [43], Anti-patterns [14], S3 [65] |
| Learning-based | — | ODS [18], Random Forest [13], _Embedding learning_ [12], _CACHE_ [20], _Our proposed APPT_ |
_ssFix_. ssFix [64] is a static-based technique that utilizes token-based syntax representation to generate patches with a higher probability of correctness. ssFix first performs a syntactic code search to find code snippets from a codebase that is syntax-related to the context of a bug to generate correct patches, and then prioritizes the patches based on the modification types and the modification sizes.
_CapGen_. Wen et al. [43] propose three kinds of context information (i.e., genealogy contexts, variable contexts and dependency contexts) embedded in an AST node and its surrounding code to prioritize correct patches over overfitting ones. In this work, following recent studies [13, 20], we extract the three kinds of context information as static features to investigate patch correctness assessment.
_Anti-patterns_. Tan et al. [14] define a set of rules that essentially capture disallowed modifications to the buggy program, and a patch is overfitting if it falls into the rules. A recent study [13] has shown that the manually-defined anti-patterns may have false positives for correct patches, resulting in destructive effects in patch correctness prediction.
_S3_. Le et al. [65] assume that a correct patch is often syntactically and semantically close to a buggy code snippet. Thus, they adopt six syntactic and semantic features, i.e., syntactic ones (AST differencing, cosine similarity and locality of variables and constants) and semantic ones (model counting, output coverage and anti-patterns), to measure the distance between a candidate patch and the buggy code snippet.
#### 4.3.3 Learning-based APCA Techniques
Learning-based techniques can predict whether a plausible patch is correct or not based on machine learning techniques.
_ODS_. Ye et al. [18] first extract 202 code features at the abstract syntax tree level and then use supervised learning to learn a probabilistic model automatically. The results show that ODS can achieve better prediction performance than the dynamic-based technique PATCH-SIM with a faster speed.
_CACHE_. Lin et al. [20] propose a context-aware APCA technique CACHE by taking both the changed code snippet and the correlated unchanged code snippet into consideration. CACHE first parses the patched code snippet into AST representation and then utilizes the AST path technique to capture the structure information.
_Random Forest_. Wang et al. [13] investigate the effectiveness of adopting deep learning models to predict patch correctness based on eight static features (two from ssFix, three from S3, and three from CapGen). To integrate the static features, six widely-used classification models (including Random Forest, Decision Table, J48, Naive Bayes, Logistic Regression, and SMO) are adopted. The results demonstrate that Random Forest can achieve both superior precision and recall performance. In this work, following existing work [20], we also adopt Random Forest to predict the patch correctness based on the integrated static features.
**Embedding Learning**. Tian et al. [12] propose to leverage representation learning techniques to produce embeddings for buggy and patched code snippets and then adopt supervised learning classifiers to predict patch correctness. In particular, nine representation learning APCA techniques are evaluated, involving three embedding techniques (i.e., CC2vec, BERT and Doc2Vec) and three classifiers (logistic regression, decision tree and naive Bayes).
### _Model Selection_
To the best of our knowledge, APPT is the first automated patch correctness prediction technique that fine-tunes an existing pre-trained model. In this paper, we adopt BERT as the encoder stack due to its powerful performance in previous work [24].
Specifically, BERT is pre-trained on large amounts of text data with two self-supervised goals, i.e., masked language modeling (MLM) and next sentence prediction (NSP). MLM aims to let the model predict the masked words by masking 15% of words in each sentence randomly. NSP aims to further improve the model's ability to understand the relationship between two sentences by letting the model predict whether the given sentence pair is continuous. The model then can be fine-tuned to adapt to some specific downstream tasks and has achieved remarkable state-of-the-art results on a variety of natural language processing tasks, such as question answering and language inference.
There exist two model architectures of different sizes, i.e., BERT\({}_{base}\) and BERT\({}_{large}\)[24]. The former has 12 layers and 12 attention heads with an embedding size of 768, while the latter doubles the number of layers, has 16 attention heads, and an embedding size of 1024. In this paper, we do not modify the vocabulary size and use the pre-trained BERT\({}_{base}\) as the fine-tuning starting point instead of starting from scratch.
In this paper, APPT is conceptually and practically generalizable to various pre-trained models. We also select CodeBERT and GraphCodeBERT as the encoder stack to evaluate the scalability of APPT. CodeBERT and GraphCodeBERT share the same model architecture as BERT, while utilizing paired natural language and programming language data to pre-train the model to support code-related tasks (mentioned in Section 2.2).
### _Evaluation Metrics_
We evaluate the prediction performance of various APCA approaches by the accuracy, precision, recall, F1-score and AUC metrics, which have been widely adopted in patch correctness assessment research and other classification tasks [12, 20]. Given the number of true positives (TPs; a TP refers to an overfitting patch that is identified as overfitting), false positives (FPs; an FP refers to a correct patch that is identified as overfitting), false negatives (FNs; an FN refers to an overfitting patch that is identified as correct) and true negatives (TNs; a TN refers to a correct patch that is identified as correct), the metrics are defined as follows:
\(\bullet\)_Accuracy_: the proportion of correctly predicted patches (whether the patch is correct or overfitting). Accuracy measures the probability that the prediction of an APCA technique is correct.
\[Accuracy=\frac{TP+TN}{TP+FP+FN+TN} \tag{12}\]
\(\bullet\)_Precision_: the proportion of real overfitting patches over the reported overfitting patches. Precision measures how much we can trust an APCA technique when it predicts a patch as overfitting.
\[Precision=\frac{TP}{TP+FP} \tag{13}\]
\(\bullet\)_Recall:_ the proportion of reported overfitting patches over all the real overfitting patches. Recall measures the ability of the APCA techniques to find all the overfitting patches in the dataset.
\[Recall=\frac{TP}{TP+FN} \tag{14}\]
\(\bullet\)_F1-score_: twice the multiplication of precision and recall divided by the sum of them. F1-score measures the trade-off between precision and recall by taking their harmonic mean.
\[F1\text{-}score=2*\frac{Precision*Recall}{Precision+Recall} \tag{15}\]
\(\bullet\)_AUC_: the two-dimensional area underneath the entire receiver operating characteristic (ROC) curve. AUC measures the probability that the classifier will rank a randomly chosen overfitting patch higher than a randomly chosen correct patch. The higher the AUC, the better the APCA technique is at predicting real overfitting patches as overfitting and real correct patches as correct.
\[AUC=\frac{\sum I\left(P_{\text{overfitting}},P_{\text{correct}}\,\right)}{M \times N} \tag{16}\]
where \(M\) and \(N\) denote the number of overfitting and correct patches, while \(P_{\text{overfitting}}\) and \(P_{\text{correct}}\) denote the prediction probability for the overfitting and correct patches.
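In practice, all five metrics can be computed directly with scikit-learn; the sketch below uses placeholder label and probability arrays.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Placeholders: 1 = overfitting (the positive class), 0 = correct.
y_true = np.array([1, 0, 1, 1, 0])
y_prob = np.array([0.9, 0.2, 0.6, 0.4, 0.1])  # predicted P(overfitting)
y_pred = (y_prob >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1-score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_prob))  # ranking quality (Eq. 16)
```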
### _Implementation Details_
All of our approaches are built on the PyTorch framework3. We use the Hugging Face4 implementation of BERT in our work. Following recommendations from previous work [26, 40], we utilize "bert-base-uncased" (i.e., BERT\({}_{base}\)) as the initial point, as the base version is quite lightweight to employ in practice while achieving effectiveness comparable to the large version. There are \(12\) layers of transformer blocks and \(12\) self-attention heads in the "bert-base-uncased" model. The optimizer is Adam [37] with a learning rate of \(5e-5\). The batch size is \(16\) and the dropout rate is \(0.5\). We train for at most \(50\) epochs and the maximum input length is set to \(512\) due to the model limitation.
Footnote 3: PyTorch. [https://pytorch.org/](https://pytorch.org/), accessed August 2022
All the training and evaluation of our methods are conducted on one Ubuntu 18.04.3 server with two Tesla V100-SXM2 GPUs.
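For reference, loading the encoder under this configuration takes only a few lines; the code string passed to the tokenizer is a placeholder.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")  # 12 layers, 12 heads

inputs = tokenizer("return (lo + hi) / 2;", return_tensors="pt",
                   truncation=True, max_length=512)
with torch.no_grad():
    outputs = encoder(**inputs)
embeddings = outputs.last_hidden_state    # (1, #tokens, 768) contextual vectors
```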
## 5 Results and Analysis
### _RQ1: Comparing with Representation Learning-based APCA Techniques_
#### 5.1.1 Experimental Design
As discussed in Section 4.3, APPT, CACHE and embedding learning techniques (i.e., techniques within the light gray box in Table III) can be categorized as representation learning APCA techniques. In this section, we aim to explore the performance of APPT when compared with these representation learning techniques. In particular, embedding learning techniques [12] mainly adopt embedding models (i.e., BERT, Doc2Vec, and CC2Vec) to embed buggy and patched code fragments, and then train classification models (i.e., Decision Tree, Logistic Regression, and Naive Bayes) to predict patch correctness. Following previous study [20], we also consider two additional embedding models (i.e., code2vec and code2seq) in the experiment. Meanwhile, CACHE can also be considered as a representation learning technique, which incorporates the context information in embedding code changes, and trains a deep learning classifier to predict the patch correctness.
In total, 16 representation learning techniques are considered in our experiment, involving five embedding techniques multiplied by three classification models, and one context-aware representation learning technique CACHE. Following the previous study [12], we perform a 5-fold cross-validation on both the small and large datasets for comparison.
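The cross-validation protocol can be sketched as follows; `patches` and `labels` are placeholder arrays standing in for the benchmark, and the shuffling seed is an assumption.

```python
import numpy as np
from sklearn.model_selection import KFold

patches = np.arange(1183)               # indices of the small dataset
labels = np.random.randint(0, 2, 1183)  # placeholder: 0 = correct, 1 = overfitting

kf = KFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(kf.split(patches)):
    # fine-tune on patches[train_idx], evaluate on patches[test_idx]
    print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")
```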
#### 5.1.2 Results
Comparison results against the existing representation learning techniques are presented in Table IV and Table V for the small and large datasets, respectively. The first column lists the three classifiers and the second column lists the five embedding approaches. The remaining columns list the detailed values of the accuracy, precision, recall, F1-score and AUC metrics, respectively. We present the most recent representation learning work CACHE and our APPT in the bottom part of Table IV and Table V. It can be observed that APPT achieves the best performance under each experimental setting.
On the small dataset, APPT is around 3.6%, 1.2%, 4.8%, 2.9% and 3.1% higher than the state-of-the-art technique CACHE in terms of all metrics (i.e., 79.0% vs. 75.4% for accuracy, 80.7% vs. 79.5% for precision, 81.3% vs. 76.5% for recall, 80.9% vs. 78.0% for F1-score, and 83.4% vs. 80.3% for AUC). Compared with all representation learning techniques, APPT achieves the best performance in terms of the accuracy, precision, F1-score and AUC metrics. In particular, the values of APPT on the accuracy and precision metrics are 79.0% and 80.7%, respectively, while the optimal values of all other techniques are 75.4% and 79.5%. This suggests that APPT generally makes the most accurate predictions, and the patches identified as overfitting by APPT are highly likely to be overfitting. Regarding recall, the values of CC2vec and code2vec can sometimes exceed those of APPT since they tend to classify most patches as overfitting (e.g., CC2vec with Naive Bayes classifies 1,051 out of 1,183 patches as overfitting and thus achieves a high recall of 94.6%). However, these techniques achieve relatively low precision (e.g., CC2vec with the Naive Bayes classifier reaches only 58.3% precision). On the contrary, APPT achieves a high recall exceeding 81% while maintaining a high precision of 80.7%.
On the large dataset, we can find APPT achieves over 99% for the five metrics, outperforming all existing approaches. For example, APPT reaches 99.9% in terms of AUC, which is 1.0% higher than the second highest value
obtained from the most recent technique CACHE (i.e., 98.9%). This suggests that APPT is more capable of distinguishing correct and overfitting patches than CACHE. Besides, the improvement against CACHE for the accuracy, precision, recall and F1-score metrics reaches 0.5%, 0.3%, 0.9% and 0.5%, respectively. We also find that the performance achieved on the large dataset is commonly higher than that achieved on the small dataset. For example, the average value among the five metrics increases from 81.06% to 99.26%, resulting in a 22.5% improvement rate. Based on our analysis of the two datasets, a possible reason for this improvement is that bugs on the large dataset are usually simple. We observe that all ManySStuBs4J patches on the large dataset are single-line changes, while patches on the small dataset usually cross multiple lines (e.g., more than 40% of Defects4J developer patches are multi-line patches [20]). It is easy for the neural networks to learn the correctness distribution of such simple code changes. Meanwhile, the difference in patch scale between the two datasets may be a second reason. There exist 49,694 patches on the large dataset, which is 42 times larger than the small dataset. The amount of training data is often the single most dominant factor that determines the performance of neural networks [68]. More available patches help the neural networks better learn diverse code changes.
**Answer to RQ1:** Overall, our analysis of representation learning techniques reveals that (1) APPT outperforms the state-of-the-art representation learning technique CACHE under all metrics and datasets; (2) on the small dataset, APPT achieves 79.0% accuracy and 83.4% AUC, surpassing CACHE by 3.6% and 3.1%; (3) on the large dataset, APPT exceeds 99% on all metrics, which none of the existing representation learning techniques achieves.
### _RQ2: Comparing with Traditional and Learning-based APCA Techniques_
#### 5.2.1 Experimental Design
In this section, we aim to further compare the proposed method APPT with the existing APCA techniques. We select the remaining techniques mentioned in Section 4.3 (except the representation learning techniques discussed in RQ1).
TABLE IV: Effectiveness of APPT compared with representation learning-based APCA techniques on the small dataset

| Classifier | Embedding | Accuracy | Precision | Recall | F1-score | AUC |
|---|---|---|---|---|---|---|
| Decision Tree | BERT | 63.5% | 65.3% | 70.9% | 67.9% | 63.7% |
| | CC2vec | 66.1% | 69.4% | 68.0% | 68.7% | 66.5% |
| | code2vec | 65.1% | 68.1% | 68.3% | 68.1% | 64.4% |
| | code2seq | 60.1% | 63.5% | 64.0% | 63.7% | 60.0% |
| | Doc2Vec | 61.2% | 64.5% | 65.3% | 64.8% | 60.8% |
| Logistic Regression | BERT | 64.8% | 66.5% | 72.4% | 69.2% | 68.7% |
| | CC2vec | 64.9% | 62.4% | 90.1% | 73.7% | 68.6% |
| | code2vec | 66.8% | 68.6% | 72.9% | 70.6% | 70.2% |
| | code2seq | 60.7% | 63.3% | 67.6% | 65.3% | 63.1% |
| | Doc2Vec | 63.7% | 65.7% | 70.8% | 68.0% | 68.9% |
| Naive Bayes | BERT | 61.6% | 64.8% | 65.7% | 65.0% | 64.7% |
| | CC2vec | 60.0% | 58.3% | **94.6%** | 72.2% | 58.1% |
| | code2vec | 57.7% | 58.1% | 81.5% | 67.8% | 55.6% |
| | code2seq | 57.0% | 59.0% | 70.5% | 64.2% | 60.6% |
| | Doc2Vec | 64.1% | 65.8% | 72.4% | 68.7% | 67.0% |
| CACHE | — | 75.4% | 79.5% | 76.5% | 78.0% | 80.3% |
| APPT | — | **79.0%** | **80.7%** | 81.3% | **80.9%** | **83.4%** |
TABLE V: Effectiveness of APPT compared with representation learning techniques on the large dataset

| Classifier | Embedding | Accuracy | Precision | Recall | F1-score | AUC |
|---|---|---|---|---|---|---|
| Decision Tree | BERT | 95.7% | 93.9% | 97.4% | 95.6% | 95.9% |
| | CC2vec | 95.6% | 95.4% | 95.7% | 95.5% | 95.7% |
| | code2vec | 95.0% | 93.2% | 96.6% | 94.9% | 95.4% |
| | code2seq | 92.2% | 91.0% | 93.2% | 92.3% | 92.4% |
| | Doc2Vec | 85.1% | 84.2% | 85.3% | 84.7% | 85.3% |
| Logistic Regression | BERT | 82.4% | 83.6% | 79.4% | 81.4% | 91.0% |
| | CC2vec | 91.2% | 96.1% | 85.4% | 90.4% | 95.0% |
| | code2vec | 89.6% | 88.6% | 90.2% | 89.4% | 95.0% |
| | code2seq | 91.5% | 90.5% | 92.2% | 91.4% | 96.0% |
| | Doc2Vec | 90.4% | 91.9% | 88.0% | 89.9% | 96.1% |
| Naive Bayes | BERT | 68.2% | 80.3% | 45.7% | 58.2% | 74.6% |
| | CC2vec | 78.4% | 94.8% | 58.6% | 72.5% | 92.4% |
| | code2vec | 61.4% | 68.7% | 37.4% | 48.4% | 69.3% |
| | code2seq | 70.3% | 76.8% | 55.5% | 64.5% | 78.9% |
| | Doc2Vec | 81.2% | 86.4% | 75.5% | 78.9% | 88.9% |
| CACHE | — | 98.6% | 98.9% | 98.2% | 98.6% | 98.9% |
| APPT | — | **99.1%** | **99.1%** | **99.1%** | **99.1%** | **99.9%** |
In total, 14 APCA techniques are considered in the experiment, involving four static techniques (Anti-patterns, ssFix, CapGen and S3), eight dynamic techniques (Evosuite, Randoop, DiffTGen, Daikon, R-Opad, E-Opad, PATCH-SIM and E-PATCH-SIM) and two learning techniques (Random Forest and ODS).
As it is time-consuming to run all the techniques (especially the dynamic and learning-based ones), following existing work [20], we reuse the released results from recent work [13, 18, 20]. We collect the detailed results of all selected APCA techniques from Lin et al. [20], which are computed based on 902 patches (i.e., Wang et al. [13] in Table II) and a 10-fold cross-validation. To fairly compare with all the state-of-the-art techniques, we perform our experiment under the same experimental setting.
#### 5.2.2 Results
The experiment results are listed in Table VI. The first two columns list the technique categories and the selected techniques. The remaining columns list the detailed values of the accuracy, precision, recall and F1-score metrics.
Compared with traditional dynamic-based and static-based APCA techniques, we can find that APPT reaches 90.4%, 96.0% and 93.6% in terms of accuracy, recall and F1-score, respectively. Specifically, APPT achieves the best overall performance on these three metrics, and none of the previous techniques exceeds 90%. As for precision, more than 91% of the patches reported by APPT are indeed overfitting patches, which is better than all static-based techniques and three dynamic-based techniques (i.e., Daikon, PATCH-SIM, and E-PATCH-SIM). Although some dynamic ones have higher precision values, it is time-consuming to generate additional test cases and collect run-time information. More importantly, the recall of these techniques is usually low (e.g., 10.2% for R-Opad), or the ground-truth oracle is needed (e.g., the Evosuite and Randoop techniques), limiting the application of such techniques in practice.
Compared with learning-based techniques, we find that APPT still performs better than a state-of-the-art technique ODS with respect to all four metrics (90.4% vs. 88.9% for accuracy, 91.5% vs. 90.4% for precision, 96.0% vs. 94.8% for recall, 93.6% vs. 92.5% for F1-score, respectively). Overall, the improvement against Random Forest and ODS reaches 4.5%\(\sim\)17.9% and 1.1%\(\sim\)1.5%. Considering that it is expensive for ODS to extract hundreds of manually-designed code features at AST level, our approach simply adopting the pre-trained model to encode a sequence of tokens is even more promising. We also highlight this direction of integrating code-aware features (e.g., code edits and AST representation) with pre-trained models for patch correctness assessment.
**Answer to RQ2:** Overall, our comparison results reveal that (1) APPT achieves remarkable performance compared to existing static-based techniques, with a high recall reaching 96.0%; (2) APPT achieves higher precision than the state-of-the-art dynamic-based technique PATCH-SIM by 8.5%; (3) compared with existing learning-based techniques, APPT achieves the best performance on all metrics.
### _RQ3: The Impact Analysis_
#### 5.3.1 Experimental Design
To further explore how different fine-tuning choices affect the prediction performance of pre-trained models, we first replace the head-only token truncation with the other truncation methods, i.e., hybrid, mid-only and tail-only truncation. We then adopt different methods to merge the buggy and patched method vectors, i.e., the concatenation, addition, subtraction, and product operations. We also mix the above-mentioned merged vectors as an additional concatenation method. Recently, following the BERT model architecture, researchers have used code-related pre-training tasks to capture the semantic connection between natural language and programming language, so as to further adapt these pre-trained models to programming languages. Thus, we replace BERT with two advanced models pre-trained on programming language, i.e., CodeBERT [28] and GraphCodeBERT [29].
#### 5.3.2 RQ3.1 Results: The Impact of Token Truncation Choice
Table VII presents the prediction results under different truncation choices. The first column lists the two datasets.
TABLE VI: Effectiveness of APPT compared with traditional and learning-based APCA techniques

| Category | APCA | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|---|
| Dynamic-based | Evosuite | 65.9% | 99.1% | 53.5% | 69.5% |
| | Randoop | 51.3% | 97.4% | 33.8% | 50.2% |
| | DiffTGen | 49.6% | 97.4% | 30.6% | 46.6% |
| | Daikon | 76.1% | 89.9% | 73.7% | 81.0% |
| | R-Opad | 34.9% | **100.0%** | 10.2% | 18.5% |
| | E-Opad | 37.7% | **100.0%** | 14.7% | 25.6% |
| | PATCH-SIM | 49.5% | 83.0% | 38.9% | 53.0% |
| | E-PATCH-SIM | 41.7% | 82.1% | 25.8% | 39.3% |
| Static-based | Anti-patterns | 47.6% | 85.5% | 33.5% | 48.1% |
| | S3 | 69.7% | 79.3% | 78.9% | 79.0% |
| | ssFix | 69.2% | 78.9% | 78.8% | 78.8% |
| | CapGen | 68.0% | 78.3% | 77.4% | 77.8% |
| Learning-based | Random Forest | 72.5% | 87.0% | 89.1% | 88.0% |
| | ODS | 88.9% | 90.4% | 94.8% | 92.5% |
| | APPT | **90.4%** | 91.5% | **96.0%** | **93.6%** |
The second column lists the four truncation choices, i.e., head-only, mid-only, tail-only and hybrid. The remaining columns list the detailed values of the accuracy, precision, recall, F1-score and AUC metrics.
On the small dataset, we can find that the head-only approach achieves the optimum performance for accuracy (79.72%), precision (80.84%), recall (83.17%) and F1-score (81.76%), while the hybrid approach achieves the optimum AUC score (83.43%). The mid-only approach, considering the middle tokens in the buggy and patched methods, achieves the third-best performance for all metrics, followed by the tail-only approach. Similar performance can be observed on the large dataset. For example, the head-only and hybrid approaches have the best performance on all metrics, while the mid-only and tail-only ones follow. The results demonstrate that the head-only approach, extracting the beginning code tokens, is effective in distinguishing the buggy and patched code snippets for the pre-trained model.
#### 5.3.3 RQ3.2 Results: The Impact of The Vector Concatenation Choice
Table VIII presents the prediction results under different concatenation choices. The first column lists the two datasets. The second column lists the five concatenation choices, i.e., concat, addition, subtraction, product and mix. The remaining columns list the detailed values of the accuracy, precision, recall, F1-score and AUC metrics.
On the small dataset, although conceptually simple, \(\text{APPT}_{concat}\) obtains 79.04%, 80.67%, 81.34%, 80.92% and 83.43% for the accuracy, precision, recall, F1-score and AUC metrics, four of which are the highest among the four single-operation integration methods. \(\text{APPT}_{product}\) has the highest recall score (96.32%), while it performs worse than \(\text{APPT}_{concat}\) by 15.77%, 18.30%, 6.11% and 16.97% on the other four metrics. \(\text{APPT}_{addition}\) and \(\text{APPT}_{subtraction}\) perform the addition and subtraction operations on the buggy and patched vectors, and have similar performance on all metrics. Meanwhile, the mixed method \(\text{APPT}_{mix}\), which combines these different comparison functions to represent the changed embedding vector, can achieve better results than \(\text{APPT}_{concat}\), which is consistent with existing study results [12, 32]. Such results indicate that the pre-trained model can better capture the code change information by integrating different concatenation ways. On the large dataset, \(\text{APPT}_{concat}\) achieves the best performance on the accuracy, F1-score and AUC metrics, while \(\text{APPT}_{subtraction}\) and \(\text{APPT}_{mix}\) perform best in precision and recall, respectively. The performance differences are marginal, as all methods achieve relatively high metric values. For example, all metric values are higher than 99% for \(\text{APPT}_{concat}\) and \(\text{APPT}_{mix}\).
#### 5.3.4 RQ3.3 Results: The Impact of Pre-trained Model Choice
Table IX demonstrates the prediction performance of the three pre-trained models. The first column lists the two datasets. The second column lists the three models, i.e., BERT, CodeBERT, and GraphCodeBERT. The remaining columns list the detailed values of the accuracy, precision, recall, F1-score and AUC metrics.
Generally speaking, all of the adopted models achieve higher performance than the state-of-the-art technique CACHE on all metrics. For example, on the small dataset, BERT, CodeBERT and GraphCodeBERT reach 80.9%, 83.3%, and 83.5% with respect to the F1-score, which is 2.9%, 5.3%, and 5.5% higher than CACHE, respectively. A similar improvement can also be observed on the large dataset. This demonstrates that the model choice may not impact the performance dramatically, and pre-trained models can consistently achieve state-of-the-art performance.
Specifically, to compare the performance of the different pre-trained models, we observe that both CodeBERT and GraphCodeBERT achieve better values for all metrics on the small dataset.
TABLE VII: Effectiveness of APPT with different truncation choices.

| Dataset | Truncation | Accuracy | Precision | Recall | F1-score | AUC |
|---|---|---|---|---|---|---|
| small | \(\text{APPT}_{hybrid}\) | 79.04% | 80.67% | 81.34% | 80.92% | **83.43%** |
| | \(\text{APPT}_{head}\) | **79.72%** | **80.84%** | **83.17%** | **81.76%** | 82.55% |
| | \(\text{APPT}_{mid}\) | 75.48% | 78.27% | 78.41% | 77.85% | 81.34% |
| | \(\text{APPT}_{tail}\) | 73.20% | 76.00% | 76.40% | 75.38% | 78.45% |
| large | \(\text{APPT}_{hybrid}\) | **99.13%** | 99.09% | **99.13%** | **99.11%** | **99.86%** |
| | \(\text{APPT}_{head}\) | 99.04% | **99.17%** | 98.68% | 99.01% | 99.54% |
| | \(\text{APPT}_{mid}\) | 97.36% | 96.62% | 98.17% | 97.35% | 98.18% |
| | \(\text{APPT}_{tail}\) | 97.85% | 98.28% | 97.30% | 97.77% | 99.49% |
TABLE VIII: Effectiveness of APPT with different concatenation choices.

| Dataset | Concatenation | Accuracy | Precision | Recall | F1-score | AUC |
|---|---|---|---|---|---|---|
| small | \(\text{APPT}_{concat}\) | 79.04% | 80.67% | 81.34% | 80.92% | 83.43% |
| | \(\text{APPT}_{addition}\) | 69.83% | 70.24% | 80.12% | 73.83% | 75.44% |
| | \(\text{APPT}_{subtraction}\) | 71.38% | 72.42% | 77.27% | 74.72% | 75.59% |
| | \(\text{APPT}_{product}\) | 63.27% | 62.37% | **96.32%** | 74.81% | 66.46% |
| | \(\text{APPT}_{mix}\) | **80.90%** | **82.21%** | 83.18% | **82.64%** | **83.46%** |
| large | \(\text{APPT}_{concat}\) | **99.13%** | 99.09% | 99.13% | **99.11%** | **99.86%** |
| | \(\text{APPT}_{addition}\) | 98.96% | 98.08% | 99.07% | 98.93% | 99.81% |
| | \(\text{APPT}_{subtraction}\) | 97.31% | **99.14%** | 95.29% | 97.17% | 99.46% |
| | \(\text{APPT}_{product}\) | 98.82% | 98.88% | 98.69% | 98.78% | 99.78% |
| | \(\text{APPT}_{mix}\) | 99.10% | 98.99% | **99.17%** | 99.08% | 99.79% |
small dataset. This superior performance also generalizes to the large dataset, where CodeBERT and GraphCodeBERT have better or competitive (e.g., AUC) performance on the metrics. One possible explanation is that BERT is designed for natural language processing tasks, while CodeBERT and GraphCodeBERT regard source code as a sequence of tokens or a graph representation and are then pre-trained on source code to support code-related tasks. This indicates that although pre-trained models from NLP can achieve state-of-the-art performance for assessing patch correctness, the adoption of pre-trained models targeting source code can further boost the improvement.
**Answer to RQ3:** The performance under different choices demonstrates that: (1) the beginning code tokens can represent the buggy and patched code snippets well for the pre-trained model; (2) the concatenation of the buggy and patched vectors is better than the other methods at distinguishing the changed code snippets, while integrating different concatenation ways achieves the optimum results; and (3) advanced pre-trained models can provide stable, even better performance.
## 6 Discussion
### _Threats to Validity_
To facilitate the replication and verification of our experiments, we have made the relevant materials (including source code, trained models, and patch data) available. Despite that, our study still faces some threats to validity, listed as follows.
The first threat to validity lies in the patch benchmark. We focus on the Defects4J database with reproducible real faults and collect 1,183 patches generated by existing APR tools. However, the patch benchmark may not consider all available APR tools. To address this, following the latest work [20], we include 22 APR tools covering four categories. It is worth noting that although the learning-based category contains only SequenceR, this tool contributes 73 patches, which is the largest number for a single APR tool [20]. We also mitigate the potential bias by using multiple evaluation metrics to exhaustively assess the APCA techniques. Further, we adopt another large benchmark containing 49,694 real-world patches to evaluate the generalization ability of the studied techniques. Overall, to the best of our knowledge, the used patch benchmarks are the largest set explored in the literature on patch correctness assessment.
The second threat to validity is that the performance of APPT may not generalize to other pre-trained models. We select BERT in our experiment due to its powerful performance in recent code-related works. However, it is unclear whether the conclusions in our experiment (discussed in Section 5) can be maintained when using other pre-trained models. We have mitigated the potential threat by using CodeBERT and GraphCodeBERT to demonstrate the performance of APPT under different pre-trained models. The investigated pre-trained models include both code-related ones (e.g., CodeBERT) and natural language-specific ones (e.g., BERT). We also rely on two diverse patch benchmarks to ensure the generality of the experimental conclusions.
The last threat to validity is the implementation of the baselines. In our work, we compare APPT against a wide range of APCA techniques with different categories. Implementing these baselines may introduce a potential threat to the internal validity. To mitigate this threat, following the recent work [20], we conduct the experiment under the same setting and reuse the released results from the original work [12, 13, 20]. Further, we carefully check the reused results and publicly release all our materials for further verification.
### _Comparison with BATS_
In our work, following some recent APCA work [12, 13], 30 related APCA techniques with different categories (i.e., 16 representation learning-based ones, 9 dynamic-based ones, 4 static-based ones and 2 learning-based ones) are compared in our experiment (discussed in Section 5). To the best of our knowledge, the selected baselines are the largest set on patch correctness prediction in the literature. However, there may exist other possible techniques that could have been used. For example, the recent BATS [19] predicts patch correctness based on the similarity of failing test cases, which can be complementary to the state-of-the-art APCA techniques. We do not include BATS in our experiment (discussed in Section 5) because it requires historical test cases as the search space for searching similar cases, which are not available in our dataset.
We then perform an additional evaluation by assessing APPT on the dataset provided with BATS. However, BATS fails to assess some plausible patches, as it considers only historical test cases whose similarity is higher than a threshold. For example, BATS with a 0.8 threshold value is able to predict only 8.9% (114/1278) of the plausible patches. Thus, we compare APPT against BATS with a 0.0 threshold value, which can perform prediction for all patches. We also compare APPT against BATS with a 0.8 threshold value, as it achieves the best recall, F1-score and AUC performance among all threshold values. The results are presented in Table X. The first column lists APPT and BATS (with 0.0 and
\begin{table}
\begin{tabular}{c|l|c c c c c} \hline \hline Dataset & Model & Accuracy & Precision & Recall & F1-score & AUC \\ \hline \multirow{3}{*}{small} & APPT\({}_{bert}\) & 79.04\% (\(\uparrow\)3.6) & 80.67\% (\(\uparrow\)1.2) & 81.34\% (\(\uparrow\)4.8) & 80.92\% (\(\uparrow\)2.9) & 83.43\% (\(\uparrow\)3.1) \\ & APPT\({}_{codebert}\) & 81.49\% (\(\uparrow\)6.1) & 82.10\% (\(\uparrow\)2.6) & 84.73\% (\(\uparrow\)8.2) & 83.35\% (\(\uparrow\)5.3) & 85.32\% (\(\uparrow\)5.0) \\ & APPT\({}_{graphcodebert}\) & 81.83\% (\(\uparrow\)6.4) & 83.68\% (\(\uparrow\)4.2) & 83.63\% (\(\uparrow\)7.2) & 83.47\% (\(\uparrow\)5.5) & 85.79\% (\(\uparrow\)5.5) \\ \hline \multirow{3}{*}{large} & APPT\({}_{bert}\) & 99.13\% (\(\uparrow\)0.5) & 99.09\% (\(\uparrow\)0.2) & 99.13\% (\(\uparrow\)0.9) & 99.11\% (\(\uparrow\)0.5) & 99.86\% (\(\uparrow\)1.0) \\ & APPT\({}_{codebert}\) & 99.57\% (\(\uparrow\)1.0) & 99.71\% (\(\uparrow\)0.8) & 99.40\% (\(\uparrow\)1.2) & 99.55\% (\(\uparrow\)1.0) & 99.89\% (\(\uparrow\)1.0) \\ \cline{1-1} & APPT\({}_{graphcodebert}\) & 99.61\% (\(\uparrow\)1.0) & 99.61\% (\(\uparrow\)0.7) & 99.59\% (\(\uparrow\)1.4) & 99.60\% (\(\uparrow\)1.0) & 99.90\% (\(\uparrow\)1.0) \\ \hline \multicolumn{7}{l}{\(\uparrow\) denotes the performance improvement against the state-of-the-art technique CACHE.} \\ \end{tabular}
\end{table} TABLE IX: Effectiveness of APPT with different pre-trained models.
0.8 threshold values, respectively). The second column lists the number of predicted patches. Each cell is represented as \(x(y)\), where \(x\) is the number of patches predicted by APPT or BATS and \(y\) is the total number of patches in the dataset. The remaining columns list the detailed performance under the metrics. We find that APPT achieves 83.39%\(\sim\)85.05% across the metrics, improving them by 21.56%\(\sim\)34.58% when compared with BATS (threshold set to 0.0). When the threshold of BATS is set to 0.8, APPT still improves the metrics by 12.40% on average while predicting 91.1% more plausible patches. Overall, the results demonstrate that APPT performs better than BATS in terms of both the number of predicted patches and the prediction metrics.
## 7 Implication and Guideline
Based on the observations in our experiment, we can summarize the following essential practical guidelines for future patch correctness assessment studies.
**Simple features can work.** Our study demonstrates that APPT, which represents source code as a sequence of tokens, performs even better than existing learning techniques (e.g., CACHE) that consider complex code-aware characteristics (e.g., the abstract syntax tree). The token sequences also already outperform the manually-designed static features (e.g., the line number) and time-consuming dynamic features (e.g., code coverage) in this work. Such observations indicate that simple features, such as code sequences, should not simply be ignored, and a systematic study exploring the impact of different code representations is needed in the future. In fact, simple features should be considered and even integrated with other features (e.g., the data flow graph) to design more advanced patch correctness assessment techniques.
**The quality of the training dataset is important.** We find that APPT achieves 91.5% precision in Table IV, while the precision is decreased by 10.8% in Table VI. A similar phenomenon is also observed by Lin et al. [20]. These results show that more training data does not always lead to better performance for patch correctness assessment. It is crucial to automatically select the most informative training set that represents the whole patch benchmark to optimize the prediction accuracy. For example, it is interesting to explore how the number of patches is distributed across fix patterns and how to select balanced patches for each fix pattern. Future work can also investigate training data selection approaches targeting specific bug benchmarks, or even specific bug types, under prediction.
**Pre-trained model-based APCA techniques require more attention.** Our results show that the BERT-based APPT performs even better than the state-of-the-art APCA techniques. Moreover, the CodeBERT-based and GraphCodeBERT-based APPT can further enhance the prediction effectiveness. Such observations motivate future researchers to investigate more advanced APCA techniques that employ different pre-trained models. For example, it is interesting to propose domain-specific pre-trained models by designing repair-related pre-training tasks. Meanwhile, thorough evaluations are recommended to explore how different features, such as bug types and fix patterns, influence the performance of pre-trained models in patch correctness prediction.
## 8 Related Work
In this paper, we adopt pre-trained language models to predict the correctness of patches generated by off-the-shelf automated program repair tools. Our work is related to automated program repair, patch correctness assessment and pre-trained models. We have introduced the existing work on patch correctness assessment in Section 4.3. Thus, in this section, we focus on the existing work on automated program repair techniques (Section 8.1) and pre-trained models (Section 8.2).
### _Automated Program Repair_
Over the past decade, researchers have proposed a variety of techniques to generate patches based on different hypotheses [1, 69]. Following recent work [2, 7, 11], we categorize them into four main categories: heuristic-based [38, 41, 70], constraint-based [44, 45, 71], template-based [5, 52, 51] and learning-based repair techniques [35, 40, 72].
\(\bullet\)_Heuristic-based repair techniques_. These techniques usually use a heuristic algorithm to find a valid patch by iteratively exploring a search space of syntactic program modifications [38, 41, 70]. Among them, GenProg [70], proposed in the early days, has been considered a seminal work in this field; it uses genetic programming to search for correct repairs. GenProg represents candidate repairs as sequences of edits to source code and evaluates them by the execution results of test cases. Candidates that pass more test cases are considered to have higher fitness and are iteratively evolved to produce new candidates based on mutation and crossover operations. The recent SimFix technique [42] utilizes code change operations from existing patches across different projects and similar code snippets within the buggy project to build two search spaces. The intersection of these two search spaces is then used to search for the final patch using basic heuristics.
\(\bullet\)_Constraint-based repair techniques_. These techniques mainly focus on repairing conditional statements, which account for more than half of the bugs repaired by existing APR approaches [44, 45, 47]. In detail, these techniques transform patch generation into a constraint-solving problem and use a solver to obtain a feasible solution. For example, Nopol [45] relies on an SMT solver to solve the condition synthesis problem after identifying potential patch locations by angelic fix localization and collecting test execution traces of the program. Among them, ACS [46], which refines the ranking of ingredients for condition synthesis, is considered one of the most advanced constraint-based repair techniques [7].
\(\bullet\)_Template-based repair techniques_. These techniques generate patches by designing pre-defined fix patterns to mutate
\begin{table}
\begin{tabular}{l|c c c c c} \hline \hline APCA & \#Patch & Accuracy & Precision & Recall & F1-score \\ \hline BATS (0.0) & 1278 (1278) & 52.50\% & 48.81\% & 62.82\% & 54.94\% \\ BATS (0.8) & 114 (1278) & 67.54\% & 63.16\% & 84.21\% & 72.18\% \\ APPT & 1278 (1278) & 85.05\% & 83.39\% & 84.38\% & 83.88\% \\ \hline \hline \end{tabular}
\end{table} TABLE X: Comparison with a state-of-the-art learning-based APCA technique BATS.
buggy code snippets with the retrieved donor code [5, 51, 52]. For example, Liu et al. [5] revisit the repair performance of fix patterns through a systematic study that evaluates the effectiveness of a variety of fix patterns summarized from the literature. Among them, the recent PraPR technique [73] is able to generate plausible and correct patches for 148 and 43 real bugs, respectively, which was the largest number of bugs reported as fixed for Defects4J at the time of publication.
\(\bullet\)_Learning-based repair techniques_. These techniques attempt to fix bugs with the help of machine learning [39, 74, 75, 30, 76, 73] and have been receiving increasing attention recently. For example, Tufano et al. [75] extensively evaluate the ability of neural machine translation techniques to generate patches from bug-fix commits in the wild. Li et al. [35] adopt a tree-based RNN encoder-decoder model (i.e., DLFix) to learn code contexts and transformations from previous bug fixes. Lutellier et al. [39] propose a new context-aware NMT architecture (i.e., CoCoNut) that represents the buggy source code and its surrounding context separately, to automatically fix bugs in multiple programming languages.
In our experiment, we select 22 representative APR tools (e.g., SimFix, ACS, and SEQUENCER) from the four categories, representing state-of-the-art techniques in the corresponding category. Then we evaluate APPT on the plausible patches (i.e., passing the original test cases) generated by these APR techniques.
### _Pre-trained Model_
Our approach is inspired by the application of pre-trained models in NLP and code-related tasks. In this section, we first introduce the existing studies about pre-trained models in NLP (Section 8.2.1) and SE (Section 8.2.2). We then discuss the application of pre-trained models to some code-related tasks in SE (Section 8.2.3).
#### 8.2.1 Pre-trained Model in NLP
Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. For example, Devlin et al. [24] propose a new language representation model BERT to pre-train deep bidirectional representations from the unlabeled text by jointly conditioning on both left and right contexts in all layers. To explore the landscape of transfer learning techniques for NLP, Raffel et al. [26] propose a text-to-text transfer transformer T5 by introducing a unified framework that converts all text-based language problems into a text-to-text format. Brown et al. [25] propose an autoregressive language model GPT-3 without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model.
In this work, we choose BERT to encode a given plausible patch into a fixed-length representation vector as the input of the deep learning classifier, due to the powerful performance of BERT in previous work [77].
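As a rough illustration of this encoding step, the sketch below maps a code snippet to a fixed-length vector with the Hugging Face `transformers` implementation of BERT; the mean pooling and the toy snippets are our assumptions for illustration rather than the exact APPT pipeline, which also employs an LSTM stack.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased").eval()

def encode(code: str) -> torch.Tensor:
    """Map a code snippet to a fixed-length vector (mean-pooled hidden states)."""
    inputs = tokenizer(code, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # [1, seq_len, 768]
    return hidden.mean(dim=1)                         # [1, 768]

# A buggy/patched pair is encoded separately, then combined for the classifier.
v_buggy = encode("if (x > 0) return x;")
v_patched = encode("if (x >= 0) return x;")
features = torch.cat([v_buggy, v_patched], dim=-1)
```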
#### 8.2.2 Pre-trained Model in SE
Inspired by the application of pre-trained models in NLP, many researchers apply the pre-trained model to code-related tasks. Instead of designing new network architectures, SE researchers usually adopt existing architectures in NLP and design some code-aware pre-training tasks (e.g., code-AST prediction and bimodal dual generation) to learn representations of the source code. Then the pre-trained models are further fine-tuned to some diversified code-related tasks such as code-code (clone detection, defect detection, cloze test, code completion, code refinement, and code-to-code translation), text-code (natural language code search, text-to-code generation), and code-text (code summarization) scenarios.
For example, Feng et al. [28] present a bimodal pre-trained model (_CodeBERT_) for natural language and programming languages, using masked language modeling and replaced token detection, to support code search and code documentation generation tasks. Guo et al. [29] present the first pre-trained model (_GraphCodeBERT_) that leverages code structure to learn code representations to improve code understanding tasks (i.e., code search, clone detection, code translation, and code refinement). Guo et al. [27] present UniXcoder, a unified cross-modal pre-trained model for programming languages. UniXcoder utilizes mask attention matrices with prefix adapters to control the behavior of the model and leverages cross-modal content such as ASTs and code comments to enhance code representation. In contrast to most studies that costly pre-train a large-scale model from scratch, we attempt to boost patch correctness assessment on top of the existing pre-trained language model fine-tuning paradigm.
In this work, to further explore the generalization ability of APPT, we select other BERT-like models (i.e., CodeBERT and GraphCodeBERT) as the encoder stack due to their powerful performance in the code-related tasks.
#### 8.2.3 Applications of Pre-trained Model in SE
In addition to the above-mentioned typical code-related tasks (e.g., automatic bug-fixing, injection of code mutants, generation of asserts in tests and code summarization in [78]), researchers have also applied pre-trained models to some other domains (e.g., code completion, and program repair) in SE.
For example, Ciniselli et al. [77] evaluate the performance of the BERT model on the task of code completion at different granularity levels, including single tokens and one or multiple entire statements. The results show that the model achieves promising results superior to state-of-the-art n-gram models, and that the model learns better on some specific datasets (e.g., Android) when code abstraction is used. Ciborowska et al. [79] apply BERT to the bug localization problem with the goal of improved retrieval quality, especially on bug reports where straightforward textual similarity would not suffice. Recently, Salza et al. [80] investigate how transfer learning can be applied to code search by pre-training and fine-tuning a BERT-based model on combinations of natural language and source code. Mashhadi et al. [81] propose a novel pre-trained-model-based APR technique by fine-tuning CodeBERT on the ManySStuBs4J benchmark and find that the approach generates fixes for different types of bugs with effectiveness comparable to state-of-the-art APR techniques.
Although there exist some SE tasks (e.g., code review and bug localization) benefitting from pre-trained models, in this work, we perform the first application of pre-trained models to predict the generated patch correctness in automated program repair.
## 9 Conclusion
In this work, we present APPT, a novel automated patch correctness prediction technique based on a pre-trained model and a classifier. We first adopt an off-the-shelf pre-trained model as the encoder stack, followed by an LSTM stack, to enhance the dependency relationships between the buggy and patched code snippets. We then build a deep learning classifier with two fully connected layers and a standard softmax function to predict whether a patch is overfitting or not. We conduct experiments on two patch datasets and show that APPT significantly outperforms state-of-the-art learning-based and traditional APCA techniques. We further demonstrate that APPT generalizes to various pre-trained models. Based on these observations, some implications and guidelines for improving the existing learning-based techniques (e.g., the usage of simple features and pre-trained models) are provided. We highlight the direction of applying pre-trained models to predict patch correctness automatically.
## Acknowledgments
This work is supported partially by the National Key Research and Development Program of China (2021YFB1715600), the National Natural Science Foundation of China (61932012, 62141215), and the Program B for Outstanding PhD Candidate of Nanjing University (20201B054).
|
2304.00689 | Accuracy Improvement of Object Detection in VVC Coded Video Using
YOLO-v7 Features | With advances in image recognition technology based on deep learning,
automatic video analysis by Artificial Intelligence is becoming more
widespread. As the amount of video used for image recognition increases,
efficient compression methods for such video data are necessary. In general,
when the image quality deteriorates due to image encoding, the image
recognition accuracy also falls. Therefore, in this paper, we propose a
neural-network-based approach to improve image recognition accuracy, especially
the object detection accuracy by applying post-processing to the encoded video.
Versatile Video Coding (VVC) will be used for the video compression method,
since it is the latest video coding method with the best encoding performance.
The neural network is trained using the features of YOLO-v7, the latest object
detection model. By using VVC as the video coding method and YOLO-v7 as the
detection model, high object detection accuracy is achieved even at low bit
rates. Experimental results show that the combination of the proposed method
and VVC achieves better coding performance than regular VVC in object detection
accuracy. | Takahiro Shindo, Taiju Watanabe, Kein Yamada, Hiroshi Watanabe | 2023-04-03T02:38:54Z | http://arxiv.org/abs/2304.00689v1 | # Accuracy Improvement of Object Detection in VVC Coded Video Using YOLO-v7 Features
###### Abstract
With advances in image recognition technology based on deep learning, automatic video analysis by Artificial Intelligence is becoming more widespread. As the amount of video used for image recognition increases, efficient compression methods for such video data are necessary. In general, when the image quality deteriorates due to image encoding, the image recognition accuracy also falls. Therefore, in this paper, we propose a neural-network-based approach to improve image recognition accuracy, especially the object detection accuracy by applying post-processing to the encoded video. Versatile Video Coding (VVC) will be used for the video compression method, since it is the latest video coding method with the best encoding performance. The neural network is trained using the features of YOLO-v7, the latest object detection model. By using VVC as the video coding method and YOLO-v7 as the detection model, high object detection accuracy is achieved even at low bit rates. Experimental results show that the combination of the proposed method and VVC achieves better coding performance than regular VVC in object detection accuracy.
VCM, Video Coding, VVC, YOLO-v7, Object Detection, post-processing
## I Introduction
In recent years, the amount of video used for image recognition has been rapidly increasing. Most of the videos captured by consumer video cameras are encoded and used both for human vision and for image recognition. Thus, a new video coding method and its standardization for this purpose are desired. The Moving Picture Experts Group (MPEG) named this research area "Video Coding for Machines (VCM)". MPEG is attempting to standardize VCM from two perspectives: feature coding and video coding. There are two main approaches for standardization based on video coding. The first is to compress video for image recognition. The information required for image recognition is less than that required to create video for viewing [1]. Therefore, a higher compression ratio can be achieved when the encoding scheme is designed only for image recognition. However, the application of video compressed for image recognition is limited, because it does not contain enough information to reconstruct the video for viewing. The second approach is to compress video for viewing and then convert it for machines. For the video compression step, High Efficiency Video Coding (HEVC) [2] and VVC [3] may be utilized. After applying these video coding methods, post-processing is performed using neural networks. This approach cannot exceed the compression ratio of existing coding methods such as VVC. However, video coding for both viewing and image recognition can be achieved with a single encoding pass. This paper seeks to improve the accuracy of image recognition in VVC coded video by performing post-processing.
Recently, image recognition technology has advanced remarkably. In particular, the accuracy of object detection has improved dramatically. However, a VCM method using the latest object detection models has not yet been considered. Therefore, we propose a method of processing encoded video for the latest object detection models. The model used for object detection is YOLO-v7 [4]. YOLO [5] is one of the most popular object detection models, and YOLO-v7 combines high detection accuracy with fast detection speed. The model used for video coding is VVC, the latest video coding standard. We process the VVC coded video using a Convolutional Neural Network (CNN) to improve the object detection accuracy. The proposed neural network for processing the coded video is shown in Fig. 1. In VCM, it is necessary to understand the characteristics of the neural networks that perform image recognition tasks and to perform corresponding video compression and processing. In our method, the features of YOLO-v7 are extracted to train the neural network used for processing the encoded video. This enables the creation of videos that preserve the information necessary for object detection by YOLO-v7. Experiments show that the proposed
video processing can improve the accuracy of object detection in the encoded video.
The rest of this paper is organized as follows: Section 2 and Section 3 describe related work and the proposed method, respectively. Section 4 discusses the experiments and results, and the last section presents the conclusion.
## II Related Works
### _Video Coding for Human Vision_
Video compression technology is necessary for sending and receiving video within limited communication resources. Research and development of technologies to encode video for human vision have been conducted for a long time, and many video coding standards have been created. Among them, VVC is the latest video coding standard, the first version of which was completed in July 2020 [3]. Similar to previous coding methods, it combines intraframe and interframe prediction and operates based on hand-crafted algorithms.
On the other hand, research on video coding using neural networks has also been active in recent years. For example, there are video compression models using RNN [6], image compression models using GAN [7], and interframe prediction models [8, 9]. CNN-based video coding methods for both intraframe and interframe prediction are also proposed [10]. Furthermore, the latest models show comparable coding performance to that of VVC [11].
Many studies have been conducted to improve the quality of coded video by post-processing with neural networks [12, 13, 14, 15]. In VVC and HEVC, block-based motion compensation and transformation are performed. This approach causes block noise in coded images, which deteriorates their quality. To reduce the block noise, a deblocking filter is employed. However, removing the noise completely is not always possible. Therefore, models for removing coding noise using neural networks have been studied.
### _Video Coding for Machines_
As the accuracy of image recognition improves, there are increasing opportunities for machines to perform video analysis [16]. For this reason, coding techniques for image recognition are attracting attention, and the movement towards standardization is accelerating. The information in video required by machines is considered to be different from that required by humans to view images [1]. Considering this difference, it is important to capture the characteristics of the video information that is necessary for image recognition. Therefore, video information should be extracted depending on the purpose of the video.
Many studies have been conducted to extract the video information that is necessary for object detection, one of the most popular image recognition tasks. One video coding method extracts the information necessary for YOLO9000 [17]. In this model, the video is input to YOLO9000 before coding, and the obtained features are used when the video is encoded [18]. The features represent the parts of the image that YOLO9000 places attention on. The encoding model allocates more bits to these parts to decode images that are useful for object detection. Another method integrates a video coding model with R-CNN [19]. In this coding model, the CNN is trained using the detection results of R-CNN [20]. Both have higher object detection accuracy and a higher video compression ratio than existing video coding methods for human vision. Models that compress video information for multiple tasks are also emerging. MSFC [21] is an image coding model for object detection and segmentation. The neural network used for image compression is trained using losses computed from the object detection and segmentation results. Although it is not as accurate as JPEG [22] in terms of image recognition accuracy, it significantly outperforms JPEG in terms of encoding efficiency.
In addition, research is ongoing to apply video coding methods for human vision, such as VVC, to coding methods for image recognition. In some experiments, VVC is used to encode features extracted from videos using neural networks, and these experiments show that VVC can be used effectively in VCM as well.
## III Proposed Method
### _Approach_
Our approach is to improve the accuracy of object detection by processing the encoded video with a neural network. The encoded video for human vision is converted into a video for object detection by post-processing. VVC is used as the video coding method, and YOLO-v7 is utilized as the object detection model. The proposed video processing method is based on a neural network, and its structure follows the generator of ESRGAN [23]. The proposed model is trained using video features extracted from the pre-trained YOLO-v7. Processing VVC coded video using the proposed method can achieve high object detection accuracy at low bitrates.
Fig. 1: Model structure of the post-processing for VVC encoded video.
### _Model Structure_
Some neural network structures for the post-processing of encoded video are based on ResNet [24], and some are based on U-Net [25]. These model structures are versatile and have been employed in many models that perform image recognition tasks. ResNet is a model based on residual blocks and is used in the generator of SRGAN [26]. The network proposed in this paper is based on the residual-in-residual dense block (RRDB) used in the generator of ESRGAN. Like SRGAN, ESRGAN is a model for image super-resolution. The proposed post-processing model is shown in Fig. 1. The use of RRDBs allows the neural network to reproduce more detailed patterns in the generated image than plain residual blocks. This characteristic of the RRDB is also useful in the processing of encoded video, because it helps reconstruct the details of the image. Our model consists of three RRDBs, two convolutional layers, and two activation functions.
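A compact sketch of such a post-processing network is given below (PyTorch). It follows the ESRGAN-style residual-in-residual dense block; the channel widths, growth rate, residual scaling, and global skip connection are our assumptions for illustration, since only the overall layout of three RRDBs, two convolutional layers, and two activation functions is fixed above.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """5-layer dense block as in ESRGAN; each conv sees all earlier feature maps."""
    def __init__(self, nf=64, gc=32, res_scale=0.2):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(nf + i * gc, gc if i < 4 else nf, 3, padding=1) for i in range(5)
        )
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)
        self.res_scale = res_scale

    def forward(self, x):
        feats = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(feats, dim=1))
            if i < 4:
                out = self.lrelu(out)
                feats.append(out)
        return x + self.res_scale * out  # local residual connection

class RRDB(nn.Module):
    """Residual-in-residual: three dense blocks inside an outer residual connection."""
    def __init__(self, nf=64, gc=32, res_scale=0.2):
        super().__init__()
        self.blocks = nn.Sequential(*(DenseBlock(nf, gc, res_scale) for _ in range(3)))
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.blocks(x)

class PostProcessor(nn.Module):
    """Conv -> 3 RRDBs -> conv, with an assumed global skip back to the decoded frame."""
    def __init__(self, nf=64):
        super().__init__()
        self.head = nn.Conv2d(3, nf, 3, padding=1)
        self.body = nn.Sequential(*(RRDB(nf) for _ in range(3)))
        self.tail = nn.Conv2d(nf, 3, 3, padding=1)
        self.lrelu = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, frame):
        feat = self.lrelu(self.head(frame))
        return frame + self.tail(self.body(feat))
```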
### _Loss Function_
Results of image recognition tasks are generally used to train neural networks that compress video for image recognition. For example, in the studies by S. Wang et al. [20] and Z. Zhang et al. [21], the results of object detection by R-CNN are used to train the image compression models to improve the object detection accuracy of R-CNN. In our study, YOLO-v7, the latest version of YOLO, is used as the object detection model. In order to improve the object detection accuracy of YOLO-v7 on encoded videos, it is effective to process these videos using the features of the pre-trained YOLO-v7. We process the encoded videos with a CNN-based neural network trained with YOLO-v7 features. In this training, we extract three kinds of YOLO-v7 features from the backbone of the trained model. For the loss calculation, we use the mean squared error (MSE) between the features of the raw video and those of the output video of our proposed model. The training process of the proposed model is shown in Fig. 2. The loss function used for training is expressed as
\[Loss = MSE(yolo(I_{raw}),yolo(I_{output})), \tag{1}\]
where \(yolo\) indicates the feature extractor of YOLO-v7. \(I_{raw}\) and \(I_{output}\) indicate the raw video and the output video.
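A hedged sketch of how the loss in (1) could be computed is shown below; here `yolo_backbone` is a hypothetical callable returning the three YOLO-v7 feature maps, whose exact layer taps are not specified in this sketch.

```python
import torch
import torch.nn.functional as F

def feature_loss(yolo_backbone, raw_frames, processed_frames):
    """Sum of MSEs between YOLO-v7 backbone features of raw and processed frames.

    The backbone is assumed frozen (its parameters have requires_grad=False),
    so the gradient flows only through `processed_frames` into the
    post-processing network being trained.
    """
    target_feats = [t.detach() for t in yolo_backbone(raw_frames)]
    pred_feats = yolo_backbone(processed_frames)
    return sum(F.mse_loss(p, t) for p, t in zip(pred_feats, target_feats))
```

In training (see Section IV-A), this loss would be minimized with Adam at a learning rate of 1e-5, updating only the post-processing network.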
## IV Experiments
### _Training Details_
In this section, we present the training process of the proposed neural network for processing the encoded video. The datasets used for training are SJTU [27], UVG [28], and MCML-4K-UHD [29]. All of these datasets contain 4K (3840x2160) raw video sequences. We select 30 sequences from these datasets and encode them using VTM 10.0 [30]. The configuration of the frame reference method is "random access", and the quantization parameters (QP) are 27, 32, 37, 42, and 47. The 150 encoded videos created in this way are used as input videos for the proposed neural network. The loss function is as shown in (1), and features obtained from the trained YOLO-v7 model are used. The optimization function is Adam and the learning rate is 1e-5. The GPU used for training the neural network is a GeForce GTX 1080 Ti.
### _Evaluation Method_
We perform object detection using YOLO-v7 on the VVC encoded video and on the output video of the proposed neural network. In order to measure the object detection accuracy, we need video sequences with object annotations. Therefore, we use the SFU-HW-Objects-v1 dataset [31] for evaluation. This dataset is one of the few datasets in which object annotations are assigned to raw video. It contains object annotations for 18 raw video sequences. These sequences are classified into five classes according to the image size and the characteristics of
Fig. 2: Training process of the proposed neural network.
the video. Since the image size of the video sequences used in training is 4K, we use sequences of Classes A, B and C, which have larger image sizes compared to the other classes. The details of our test sequences are shown in Table I. These sequences are also encoded using VTM 10.0. The configuration of the frame reference method is "random access". Two sequences of Class A are encoded with five QP values (27, 32, 37, 42, 47), and the other sequences are encoded with three QP values (37, 42, 47). We apply the proposed post-processing method to the encoded video, and the object detection accuracy before and after the proposed processing is compared. The object detection model is the pre-trained YOLO-v7, and the confidence threshold is set to 0.25. Average Precision (AP) and F1-score are used as evaluation metrics. When calculating AP, the Intersection over Union (IoU) threshold is always set to 0.5.
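For reference, a minimal sketch of the IoU computation underlying the AP metric is given below; a predicted box counts as a true positive when its IoU with a ground-truth box of the same class is at least 0.5.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```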
### _Results_
The object detection results for the PeopleOnStreet sequence are shown in Fig. 3. Since "person" accounts for 97% of the objects in the PeopleOnStreet sequence, the detection results for person are also shown in Fig. 3. All three graphs show the relationship between detection accuracy and bitrate. Graph (a) represents the mean Average Precision (mAP), graph (b) the AP of person detection, and graph (c) the F1-score of person detection. In this sequence, the proposed method slightly increased
Fig. 4: Evaluation result of Traffic sequence in terms of (a) rate-mAP; (b) rate-AP(car); (c) rate-F1(car).
Fig. 5: Comparison of frame images (240x270). (a) raw frame; (b) input frame encoded with VVC (QP:47); (c) output frame of the proposed method. The upper images are part of PeopleOnStreet sequence. The lower images are part of Traffic sequence.
Fig. 3: Evaluation result of PeopleOnStreet sequence in terms of (a) rate-mAP; (b) rate-AP(person); (c) rate-F1(person).
the value of mAP, and significantly increased the accuracy of person detection. The AP of person detection was improved by about 5 percentage points regardless of QP, and the F1-score was improved by about 0.02 to 0.03. The transformation of a frame image of the PeopleOnStreet sequence by the proposed method is shown in Fig. 5. Image (b) is a frame image of the VVC encoded video, and (c) is that of the output video of the proposed neural network. This figure shows that the proposed post-processing slightly changes the color tone of the frame image.
The object detection results for the Traffic sequence are shown in Fig. 4. Since "car" accounts for 99% of the objects in the Traffic sequence, the detection results for car are also shown in Fig. 4. All these graphs also show the relationship between detection accuracy and bitrate. Graph (a) represents the value of mAP. Graphs (b) and (c) represent the AP and F1-score of car detection, respectively. In the Traffic sequence, the value of mAP was improved by the proposed method, the AP of car detection was improved by 2 to 3 percentage points, and the F1-score by about 0.01 to 0.02. The transformation of a frame image of the Traffic sequence by the proposed method is shown in Fig. 5. In the case of the Traffic sequence, as for the PeopleOnStreet sequence, we can confirm the change in the color tone of the frame image due to the proposed image processing.
Furthermore, the object detection accuracy for three sequences of Class B and four sequences of Class C is shown in Fig. 6. Graph (a) shows the relationship between mAP and bitrate for the three Class B sequences, and graph (b) shows the same relationship for the four Class C sequences. In both cases, the object detection accuracy is improved for all QP values.
Table II summarizes these results. For the sequences listed in Table I, the object detection accuracy (mAP) when encoded with VVC and when the proposed post-processing is applied is shown. The degree of improvement in the mAP values is also shown in this table as a gap. For the two sequences of Class A, the mAP values are improved less by the proposed method than the AP values for person and car. A shortage of training data is one cause of this result. The training sequences include many persons and cars; however, some objects, such as umbrella, sports ball, and chair, are not included, even though they appear in the test sequences. Nevertheless, Table II shows that mAP improved for all QP values of all classes. The reason for this improvement in mAP is the enhancement of the AP values of person, car, and some other objects. These results show that our proposed method is effective for improving the object detection accuracy in VVC coded video.
## V Conclusion
We propose a method to improve the accuracy of object detection by YOLO-v7 in VVC coded video. In the experiments, we show that the object detection accuracy in VVC coded video can be improved by the proposed post-processing using YOLO-v7 features. By applying the proposed method to a video coded for human vision, the video can be converted into a video for YOLO-v7. The converted video has a different color tone from the original encoded video, which facilitates object detection by YOLO-v7. Furthermore, depending on how the method is used, the video can be adapted to its purpose, either for human vision or for object detection. In both cases, a high compression ratio is achieved by using VVC as the video compression method, and high object detection accuracy is achieved by using the pre-trained YOLO-v7 as the object detection model. In future work, more effective methods of extracting the video information required for image recognition will be explored for further enhancement.
|
2308.11451 | Enhanced quantum emission from a topological Floquet resonance | Entanglement is a valuable resource in quantum information technologies. The
practical implementation of entangled photon sources faces obstacles from
imperfections and defects inherent in physical systems, resulting in a loss or
degradation of entanglement. The topological photonic insulators, however, have
emerged as promising candidates, demonstrating an exceptional capability to
resist defect-induced scattering, thus enabling the development of robust
entangled sources. Despite their inherent advantages, building programmable
topologically protected entangled sources remains challenging due to complex
device designs and weak material nonlinearity. Here we present a development in
entangled photon pair generation achieved through a non-magnetic and tunable
anomalous Floquet insulator, utilizing an optical spontaneous four-wave mixing
process. We verify the non-classicality and time-energy entanglement of the
photons generated by our topological system. Our experiment demonstrates a
substantial enhancement in nonclassical photon pair generation compared to
devices reliant only on topological edge states. Our result could lead to the
development of resilient quantum sources with potential applications in quantum
technology. | Shirin Afzal, Tyler J. Zimmerling, Mahdi Rizvandi, Majid Taghavi, Leili Esmaeilifar, Taras Hrushevskyi, Manpreet Kaur, Vien Van, Shabir Barzanjeh | 2023-08-22T13:59:20Z | http://arxiv.org/abs/2308.11451v2 | # Bright quantum photon sources from a topological Floquet resonance
###### Abstract
Entanglement, a fundamental concept in quantum mechanics, plays a crucial role as a valuable resource in quantum technologies. The practical implementation of entangled photon sources encounters obstacles arising from imperfections and defects inherent in physical systems and microchips, resulting in a loss or degradation of entanglement. The topological photonic insulators, however, have emerged as promising candidates, demonstrating an exceptional capability to resist defect-induced scattering, thus enabling the development of robust entangled sources. Despite their inherent advantages, building bright and programmable topologically protected entangled sources remains challenging due to intricate device designs and weak material nonlinearity. Here we present an advancement in entanglement generation achieved through a non-magnetic and tunable resonance-based anomalous Floquet insulator, utilizing an optical spontaneous four-wave mixing process. Our experiment demonstrates a substantial enhancement in entangled photon pair generation compared to devices reliant solely on topological edge states and outperforming trivial photonic devices in spectral resilience. This work marks a step forward in the pursuit of defect-robust and bright entangled sources that can open avenues for the exploration of cascaded quantum devices and the engineering of quantum states. Our result could lead to the development of resilient quantum sources with potential applications in quantum technologies.
+
Footnote †: Corresponding Author: [email protected]
The unique properties of entanglement and topology hold tremendous potential for advancing quantum technology. Entanglement has emerged as a crucial resource in quantum information [1; 2]. Its applications span diverse disciplines, including quantum sensing [3; 4], computing [5; 6], and communication [7; 8]. Significant progress has been made in the optical domain, especially concerning the generation of entangled states through integrated devices [9; 10], opening up avenues for practical real-world applications [11; 12; 13; 14]. However, preserving and effectively harnessing entanglement face challenges due to inevitable fabrication imperfections and defects in the microchips [15; 16]. These disorders can lead to loss, scattering, and spectral randomness, ultimately affecting the stability and purity of the entanglement [17; 18].
Similarly, topological photonic insulators (TPIs) have attracted considerable attention due to their ability to guide light through topologically-protected edge states [19; 20; 21]. These states have found applications in various domains such as lasing [22; 23; 24], soliton generation [25; 26] and frequency comb [27]. The most intriguing aspect of TPIs is their resilience to scattering caused by defects, making them highly desirable for robust device development, capable of withstanding fabrication imperfections encountered during photon transport [28; 29; 30]. Therefore, utilizing TPI systems for generating nonclassical emissions offers unprecedented possibilities for advancing quantum technology.
Recently, TPIs have been employed for generating robust entangled photon emission using coupled waveguides [31] and microring resonators [32; 33], as well as through Floquet-based TPIs [34]. Nevertheless, the practical implementation of nonclassical sources based on edge states has faced challenges due to the weak nonlinear properties of the materials. Ensuring sufficient pair generation requires either long lattice boundaries, which increase material loss, or high pump powers, which introduce noise and reduce the purity of the entanglement. These limitations have constrained the potential applications of TPIs in quantum technologies, especially in scenarios demanding bright entangled emission, such as long-distance quantum communications. Despite demonstrations of resonance-enhanced photon pair generation through spontaneous four-wave mixing (SFWM) in conventional resonators [35; 36; 37; 38; 39; 40], the experimental realization of a bright Floquet TPI entanglement source has remained elusive and unexplored until now.
Here, for the first time, we explore entanglement generation via a novel resonance effect called Floquet Defect Mode Resonance (FDMR) in the bulk of a Floquet TPI [41]. We present compelling experimental evidence that demonstrates a substantial enhancement in entangled photon pair generation by harnessing this compact topological, cavityless resonance effect. By exciting the wavelength-tunable FDMR and utilizing SFWM in a silicon microring lattice coupled to a topologically-protected edge state, we successfully create a bright quantum photon source. This source demonstrates a second-order cross-correlation of photon pairs approximately 3300 times higher than that obtained using the edge mode without resonance. These results demonstrate the potential of FDMR to significantly improve the efficiency of entangled photon pair generation in TPI-based systems,
thereby removing the necessity for high power or extended chip boundaries. Our topological quantum device combines the advantages of integrated topological photonics, tunable localized optical modes, and robustness to disorder, representing a novel approach to generating entangled photon pairs. Such an approach holds great promise for various applications, including the advancement of ultrabright photon-pair generation for quantum information processing and facilitating the development of TPI high-dimensional entangled quantum states [42] as well as quantum logic [43].
## Anomalous Floquet resonance mode
Fig. 1a presents the schematic of our Floquet topological photonic insulator, created using two-dimensional (2D) directly-coupled microring resonators [44]. Each unit cell, see the inset of Fig. 1 a, comprises three strongly coupled identical microrings arranged in a
Figure 1: **Schematic representation and simulation of a Floquet topological entanglement source.****a,** A 2D square microring lattice consisting of individual ring resonators arranged in identical unit cells. Each unit cell (inset) comprises three closely coupled microrings arranged in a square formation with the coupling strength \(g\) and center-to-center distance \(a\). By unrolling each microring from the points shown by black arrows, an equivalent coupled waveguide array can be obtained (inset). Four periodic coupling sequences, in the microring unit cell and in its equivalent waveguide array, are shown by light blue, light pink, light green, and dark pink arrows/segments. Gray color shows the uncoupled parts of the microrings/waveguides. An FDMR with resonance frequency \(\omega_{\text{FDMR}}\) can be created by utilizing a thermo-optic effect to introduce a phase shift \(\Delta\phi\) in one of the rings along the path of the edge state. The lattice's periodicity leads to the appearance of multiple equally spaced modes, separated by the frequency \(\Delta_{\text{FDMR}}\). By applying a pump to the system via the edge state and utilizing the SFWM process within the FDMR, we can produce idler-signal entangled photons at various \(\Delta_{\text{FDMR}}\) within the system, as illustrated in the lower inset. **b,** The computed band structure for the photonic anomalous Floquet topological insulator in one Floquet-Brillouin zone with wavenumbers \(K_{x}\) and \(K_{y}\), comprising one unit cell periodic in both \(x\) and \(y\) directions. The transmission bands of Floquet states, with Chern number \(C=0\), are separated by three topological bandgaps, labeled by (I, II, III), with Winding number \(W=1\). **c,** Projected band diagrams of a semi-infinite lattice limited in the \(y\)-direction (consisting of 10 unit cells) and extending infinitely in the \(x\)-direction. The red bands represent the flat bands of the Floquet bulk, obtained using a semi-infinite \(5\times 5\) supercell, which are distinct from the transmission band manifolds due to a phase detune. **d,** Simulated transmission spectra of an ideal topological photonic insulator lattice of identical microring resonators with power coupling of 98%, considering intrinsic loss, in the presence (pink line) and absence (blue line) of the phase shift \(\Delta\phi\).
square shape with the coupling coefficient \(g\). As light propagates around each microring, it evanescently couples to the neighboring rings in a periodic sequence, see Fig. 1(a), with the period equal to the microring circumference \(L\). The system thus emulates a periodically driven system with the evolution along the direction of light propagation \(z\) rather than time [45]. By varying the coupling between microrings, the topological phase of the lattice can be tuned, leading to the appearance of Chern or anomalous Floquet topological insulators in the weak or strong coupling regimes, respectively, as shown in Refs. [34; 45; 46]. Our 2D microring lattice follows a similar framework to such Floquet systems and satisfies the following eigenvalue equation for the wavefunction \(|\psi_{n}(\mathbf{k})\rangle\)
\[U_{F}(\mathbf{k})|\psi_{n}(\mathbf{k})\rangle=e^{i\epsilon_{n}(\mathbf{k})L} |\psi_{n}(\mathbf{k})\rangle, \tag{1}\]
where \(\epsilon_{n}(\mathbf{k})\) is the quasi-energy band of the lattice with the periodicity of \(2\pi/L\). The Floquet operator \(U_{F}(\mathbf{k})=\mathcal{T}e^{i\int_{0}^{L}H(\mathbf{k},z^{\prime})dz^{ \prime}}\), where \(\mathcal{T}\) represents the time-order operator, depends on the Floquet-Bloch Hamiltonian \(H(\mathbf{k},z)=H_{\mathrm{FB}}(\mathbf{k},z)\) that exhibits periodicity along the \(z\) direction \(H(\mathbf{k},z)=H(\mathbf{k},z+L)\), with a period of \(L\). This characteristic mimics the behavior of a periodically driven Hamiltonian, where the variable \(z\) plays the role of time \(z\to t\). We can obtain an expression for \(H(\mathbf{k},z)\) by transforming the microring lattice into an equivalent coupled waveguide array, see the inset of Fig. 1a. In the strong coupling regime, our 2D lattice exhibits three bandgaps with non-zero Winding numbers. All bands possess trivial Chern numbers \(C=0\), making it an anomalous Floquet insulator, as shown in the band structure of one Floquet-Brillouin zone of the unit cell in Fig. 1b. To verify the presence of topological edge states, we impose boundaries along the \(y\)-axis while assuming the lattice extends infinitely along the \(x\)-axis. From the projected quasi-energy band diagram, we observe the existence of two pseudospin topological edge states in each bandgap, as shown in Fig. 1c.
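To illustrate how quasi-energy bands follow from Eq. (1), the sketch below (Python) diagonalizes the Floquet operator of a minimal two-band analogue with a four-step coupling sequence on a square lattice; the actual device has a three-ring unit cell, so this reproduces only the qualitative structure of the band diagram, and the parameter values are our own assumptions.

```python
import numpy as np
from scipy.linalg import expm

L = 1.0                                      # drive period (one ring circumference, arb. units)
theta = np.arcsin(np.sqrt(0.98))             # per-step coupling angle for ~98% power transfer
g = 4.0 * theta / L                          # coupling strength so that g * (L/4) = theta
bonds = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # bond direction activated in each of the 4 steps

def floquet_operator(kx, ky):
    """Time-ordered product of the four step evolutions, U_F(k)."""
    U = np.eye(2, dtype=complex)
    for bx, by in bonds:
        phase = np.exp(1j * (kx * bx + ky * by))
        H = g * np.array([[0.0, phase], [np.conj(phase), 0.0]])  # step Hamiltonian H_j(k)
        U = expm(1j * H * (L / 4.0)) @ U
    return U

def quasienergies(kx, ky):
    """Eigenvalues of U_F(k) are exp(i * eps_n * L); recover eps_n from their phases."""
    return np.sort(np.angle(np.linalg.eigvals(floquet_operator(kx, ky)))) / L

ks = np.linspace(-np.pi, np.pi, 61)
bands = np.array([quasienergies(kx, 0.0) for kx in ks])  # band cut along k_x at k_y = 0
```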
By exploiting the natural hopping sequence of our 2D lattice and leveraging the edge state's resilience against local defects, we can achieve the confinement of light within a closed loop, creating a cavityless local resonator. This confinement is attained by introducing a small perturbation in the driving sequence, in the form of a phase shift (\(\Delta\phi\)), in one of the ring resonators along the edge state's trajectory, leading to the formation of a flat-band Floquet mode within the 2D lattice. As a result, the light becomes effectively trapped within the loop, giving rise to a locally confined mode referred to as FDMR. This perturbation subsequently modifies the total Hamiltonian of the system \(H=H_{\mathrm{FB}}+H_{\mathrm{FDMR}}\), where \(H_{\mathrm{FDMR}}=\hbar\omega_{\mathrm{FDMR}}a^{\dagger}a\) describes the FDMR with a resonance frequency \(\omega_{\mathrm{FDMR}}\) and annihilation operator \(a\). The associated quasienergy of this mode experiences a shift directly proportional to the magnitude of the induced phase, as depicted in Fig. 1c (see Supplementary Materials). This FDMR mode can achieve a large Q-factor, approaching \(10^{5}\), compared to other 2D topological resonators [47; 48], mainly because it lacks physical boundaries that would otherwise confine light within the defect resonator. The resonance pattern in FDMR follows the trajectory of the topologically nontrivial Floquet bulk mode with a total circumference of \(3L\). Fig. 1d illustrates the simulation of the transmission spectrum of our design in the presence and absence of a phase shift, demonstrating the appearance of the FDMR. Note that this mode is coupled to the edge state existing within the same bandgap, implying that we can control or access the FDMR via the edge state.
To generate correlated photon pairs through the SFWM process within the bandgap of the topological 2D lattice, we utilize the third-order nonlinearity \(\chi^{3}\) in silicon. The Hamiltonian that describes this process is given by
\[H_{\mathrm{non}}=\hbar g_{\mathrm{nl}}\big{(}a_{p}a_{p}a_{s}^{\dagger}a_{i}^{ \dagger}+h.c\big{)}, \tag{2}\]
where \(g_{\mathrm{nl}}\) is the strength of the SFWM while \(a_{p}\) and \(a_{s(i)}\) refer to the annihilation operators of the pump and signal (idler) modes, respectively. The Hamiltonian (2) represents a four-photon mixing process in which two photons from the pump are annihilated, resulting in the creation of photons in the idler and signal modes, which were initially in a vacuum state. With the presence of the FDMR, the generation of the photon pairs is significantly enhanced, surpassing the pair generation rate achieved solely with the edge state
\[N_{\mathrm{FDMR}}\propto N_{\mathrm{edge}}\times Q^{3}, \tag{3}\]
where \(N_{\mathrm{FDMR}}\) and \(N_{\mathrm{edge}}\) are the pair generation rates of the FDMR and edge state, respectively and \(Q\) is the quality factor of the FDMR, see the Supplementary Materials.
## III Results
We experimentally demonstrate the generation and enhancement of entangled photons within our 2D Floquet lattice. The lattice is structured with a \(10\times 10\) unit cell arrangement and fabricated on a Silicon-on-Insulator (SOI) substrate, incorporating a total of 300 individual ring resonators, see Fig. 2a. Each microring in our design is square-shaped and is designed to achieve efficient power coupling, enabling around \(g^{2}=98\%\) of the power transfer from one microring to the nearest microring. This design is specifically chosen to achieve the desired anomalous Floquet insulator behavior around the wavelength of 1545 nm.
Fig. 2b presents the real-time field distribution for the edge state and FDMR, as measured via a Near-Infrared camera. These experimental results demonstrate excellent agreement with the numerical simulations obtained by solving the Schrödinger equation. The
edge state exhibits distinct nontrivial light propagation along the boundaries of the lattice. By applying a phase shift in a microring located on the bottom edge of the sample, we observe the emergence of the FDMR mode, characterized by a strongly localized field distribution in a loop pattern.
The chip's transmission spectrum is measured by injecting a laser into the lattice at the bottom left edge and detecting the output light at the bottom right edge using a powermeter, as displayed in Fig. 2a. During the measurement, the laser's wavelength is swept from 1530 nm to 1557 nm, equivalent to 13 \(\Delta_{\text{FDMR}}\). Note that we define \(\Delta_{\text{FDMR}}\) as the frequency spacing of the FDMR which is approximately one-third of the FSR of our TPI, measuring 1.72 nm. The normalized transmission spectrum, shown in blue in Fig. 2c, allows us to identify topologically nontrivial bandgaps, characterized by regions of high and flat transmission as labeled by (I, II, and III). Upon activation of the heater (phase shift), a Floquet bulk mode is lifted into the bandgap, resulting in a flattened energy band and a resonant mode spatially localized in a loop pattern, as demonstrated at the bottom of Fig. 2a (see the inset). The transmission spectrum in the output waveguide, with a phase detune of \(\Delta\phi=2.37\pi\), is represented by the pink trace in Fig. 2c. The presence of distinct and adjustable resonance dips within each bandgap of each FSR of the device indicates the excitation of a resonance mode that is coupled to the
Figure 2: **Experimental realization.****a,** A microscopic image of the anomalous Floquet insulator lattice, featuring a \(10\times 10\) unit cell arrangement across a \(595.32\times 595.32~\mu\)m\({}^{2}\) chip. The insets provide detailed scanning electron microscopy (SEM) views of a unit cell and the structure of the heater functioning as a thermo-optic phase shifter. Each microring in our design has a square shape, with side lengths of 29.64 \(\mu\)m, and rounded corners with a radius of 5 \(\mu\)m to minimize scattering losses. Additionally, the figure includes a diagram of the experimental setup, which involves a pump laser passing through a fiber polarization controller (FPC) and being injected into the sample via a lensed fiber. The transmitted light is measured using a photon detector and a power meter, two narrow bandpass filters, two Superconducting Nanowire Single-Photon Detectors (SNSPDs), and a Time Controller (TC). **b,** Near-infrared images of the edge state and FDMR in real time and comparison with numerical simulations. **c,** The normalized transmission (T) of the lattice as a function of the pump wavelength. The solid line represents the fitted theoretical model, as detailed in the Supplementary Materials. **d,** Zoomed-in transmission of the FDMR with respect to the wavelength of the pump.
edge state. In Fig. 2d, a close-up view of the transmission and the theoretical fit for a selected mode is shown. This mode will be utilized for resonance-enhanced entanglement generation.
To generate entangled photon pairs using the FDMR, we employ a continuous-wave pump beam with a frequency that corresponds to one of the FDMRs with \(Q=4.843\times 10^{4}\), as indicated by an arrow in Fig. 2c. Maintaining energy conservation in the SFWM process requires the generation of idler and signal photons at multiples of \(\Delta_{\text{FDMR}}\) symmetrically distributed around the pump frequency. To measure photon pairs that fulfill this condition and suppress the emission from FDMR modes at other \(\Delta_{\text{FDMR}}\), we utilize two narrowband tunable filters. These filters have center wavelengths placed at \(6\Delta_{\text{FDMR}}\), below and above the pump wavelength, see Fig. 2c. This setup enables efficient selection of the desired entangled photon pairs while minimizing interference from unwanted FDMR modes at different \(\Delta_{\text{FDMR}}\). We verify the non-classical nature of the photon pairs generated from the FDMR by measuring the second-order cross-correlation function defined by \(g_{si}^{(2)}(t)=\frac{\langle\hat{n}_{s}\hat{n}_{i}\rangle}{\langle\hat{n}_{s}\rangle\cdot\langle\hat{n}_{i}\rangle}\), where \(\hat{n}_{s/i}\) is the photon number of the signal/idler mode. This function determines the normalized probability of detecting signal and idler photons at a specific time separation \(t\) and can be measured using coincidence rates of the signal-idler pairs and individual photon counts, see Methods.
Fig. 3a shows the measured second-order cross-correlation function, with a maximum value of \(g_{si}^{(2)}(t)\approx 1450\pm 50\) observed at \(t=0\) and at a fixed pump power of 0.29 mW, compelling evidence of time-correlated photon pairs at the FDMR. At the same power, we compare these results with the second-order autocorrelation functions, \(g_{ss}^{(2)}(t)\approx 1\) and \(g_{ii}^{(2)}(t)\approx 1\), for the signal and idler modes, shown in Fig. 3b. This comparative analysis allows us to examine a formal proof of a non-classical light source, i.e., the violation of the Cauchy-Schwarz inequality, \(\left[g_{si}^{(2)}(0)\right]^{2}\leq g_{ss}^{(2)}(0)\cdot g_{ii}^{(2)}(0)\). Alternatively, we define and measure the nonclassicality parameter
\[\Gamma:=\frac{\left[g_{si}^{(2)}(0)\right]^{2}}{\left[g_{ss}^{(2)}(0)\cdot g_ {ii}^{(2)}(0)\right]}, \tag{4}\]
for which \(\Gamma>1\) indicates a nonclassical source. Fig. 3c illustrates measurements of the parameter \(\Gamma\) as a function of the pump power and includes a comparison with the photons generated from the edge state when the FDMR mode is absent. The plot shows the violation of the Cauchy-Schwarz inequality (\(\Gamma>1\)), indicating a profound enhancement of nonclassical properties in the emitted photons due to the presence of the FDMR and its coupling to the edge state, which effectively increases the cross-correlation. The experimental result is in good agreement with the theoretical model we developed in the Supplementary Materials except in the high-power region
Figure 3: **Enhancement of the photon pairs generation**. **a** and **b,** show the measured second-order cross-correlation \(g_{si}^{(2)}(t)\) and the second-order autocorrelation function \(g_{ss/ii}^{(2)}(t)\), respectively, for a power in the edge state of 0.29 mW. **c,** Obtained nonclassicality parameter, \(\Gamma\), for both FDMR and edge state at different pump powers. Increasing the pump power pushes the system into a nonlinear regime such as two-photon absorption (TPA). **d,** Illustrates the transmission and \(g_{si}^{(2)}(0)\) versus pump wavelength around the resonance frequency of the FDMR for a pump power of 0.29 mW. **e,** Shows the measured \(g_{si}^{(2)}(0)\) at different \(\Delta_{\text{FDMR}}\) at the same power. **f,** Displays the obtained CAR for the FDMR and the edge state versus pump power in the edge state. For all panels, the error bars are derived from three independent measurements. For some points, the size of the dots is larger than the error bars. Fitted solid lines in panels **a** and **b** utilize the Poissonian distribution, while the fitting for panels **c**, **d**, and **f** is based on the comprehensive theory outlined in the Supplementary Materials.
where nonlinear effects, such as two-photon absorption (TPA) and pump noise, change the single and coincidence count rates. We note that the small size of our chip limits the edge state's ability to generate significant photon pairs while propagating along the shorter arm of the sample. Hence, unlike Ref. [32], the measured emission from the edge state is primarily composed of pump leakage and lacks considerable nonclassical properties, i.e., \(\Gamma\approx 1\).
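For reference, Eq. (4) is simple arithmetic on the three measured correlation values; a minimal check (Python) using the numbers quoted above:

```python
def nonclassicality(g2_si, g2_ss, g2_ii):
    """Nonclassicality parameter of Eq. (4); Gamma > 1 violates the
    Cauchy-Schwarz inequality and certifies a nonclassical source."""
    return g2_si**2 / (g2_ss * g2_ii)

# Values measured at 0.29 mW pump power (Fig. 3a,b):
print(f"Gamma = {nonclassicality(1450.0, 1.0, 1.0):.1e}")  # ~2.1e6 >> 1
```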
To investigate the resonance enhancement of the FDMR, at a pump power of 0.29 mW, the wavelength of the pump was swept across one resonance, allowing \(g_{si}^{(2)}(0)\) to be compared to the transmission of the FDMR, as shown in Fig. 3d. The peak of the cross-correlation (pink trace) coincides with the deepest point of the resonance (blue trace), demonstrating successful resonance enhancement of the generation of entangled photon pairs, with a second-order cross-correlation up to \(\approx\)1450. We also measured \(g_{si}^{(2)}(0)\) at different \(\Delta_{\text{FDMR}}\) in Fig. 3e.
Another important parameter for evaluating the nonclassical properties of the generated photon pairs is the Coincidence to Accidental Rate (CAR). It serves as a reliable indicator of the signal-to-noise ratio of the entangled photons and is obtained by integrating \(g_{si}^{(2)}(t)\) over the peak at \(t=0\). The CAR is measured as \(\mathrm{CAR}\propto\frac{N_{c}}{N_{s}N_{i}}\), where \(N_{c}\) represents the coincidence count rate, and \(N_{s/i}\) denotes the single count rates of the signal/idler. Note that all three count rates \(N_{j}\)\((j=c,s,i)\) scale quadratically with the pump power (see Methods), resulting in the observed inverse dependence of the CAR on the square of the power, in agreement with our theoretical model, see Fig. 3f. This figure also provides a comparison of the CAR between the FDMR and the edge state (heater off) under various pump powers. Our source achieves an exceptional CAR value of 2300, significantly surpassing the CAR \(\approx 1.01\) obtained with edge states in the absence of the FDMR. Moreover, this performance exceeds what is typically achieved with other similar sources utilizing edge states in larger chips and topologically trivial single and cascaded ring resonators. The result highlights the outstanding performance of the FDMR in enhancing the nonclassical characteristics of the generated photons. In Table 1, we compare our Floquet entangled source with the current state-of-the-art photon-pair sources in silicon. Unlike basic microring resonators [35; 39; 40], the FDMR has the capacity to be turned on and off in situ while also having the capability to couple with edge states. Multiple FDMRs can be activated and coupled on a single chip. These characteristics distinguish our system from the traditional approach of entanglement generation using microring resonators.
## IV Discussion
In summary, we have demonstrated the resonance-enhanced generation of entangled photon pairs within anomalous Floquet TPI, operating at room temperature. This achievement is made possible by employing an FDMR coupled to a topological edge mode. To verify the nonclassicality of the generated entangled photons, we conducted measurements of the second-order cross-correlation, tested the nonclassical characteristic of the emission, and provided a comparison with the comprehensive theoretical model that we have developed.
We conclude our discussion by exploring the topologically protected properties of our system. Unlike previous approaches that depend on propagating fields in the edge state to generate photon pairs, our approach distinctively employs a confined mode embedded within the bandgap region co-existing with the edge state. Therefore, the FDMR operates as a frequency-tunable source that generates entangled pairs, while the edge state acts as a topologically protected waveguide that delivers the generated photons to the output ports. Consequently, our system acquires the same topologically protected features as those inherently present within the edge state. This unique capability sets the current system apart from other sources of entanglement based on edge states [32; 33; 34], as well as from topologically trivial photonic devices [35; 36; 37; 38; 39; 40]. We note that defects and disorders at the physical boundaries, where the FDMR forms, have the potential to interfere with the loop, reduce the mode's quality factor, or shift the mode. However, the frequency tunability of the FDMR can partially mitigate these effects. In the Methods, we expand further on this discussion and explore various defects along with their impact on the FDMR.
An essential aspect of our sample is its capability to generate entanglement within a relatively small chip due to the compact size of the FDMR. This feature becomes especially crucial when the material loss of the sample affects the nonclassical property of the photon pairs. The compact nature of the FDMR allows for efficient entanglement generation and on-chip distribution, making it well-suited for scenarios where larger devices might encounter challenges related to propagation loss. The exceptional attributes of the FDMR, including its high quality factor, compact size, tunability (on and off), and coupling with the edge state play a critical role in facilitating the generation of bright entangled photon pairs. It opens up possibilities for exploring novel techniques of on-chip entanglement distribution, field-matter interaction, and advancing integrated quantum electrodynamics. In particular, the FDMR can efficiently interact with localized atoms/ions in the substrate, facilitating their integration into a network of cascaded atom-cavity chains connected with edge states. This platform holds promise for the development of on-chip and topologically-protected quantum computing and processing.
**Acknowledgments** We thank Mohammad Hafezi for helpful comments and discussions and Tyler Zegray for preparing the figures. S.B. acknowledges funding by the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery Grant, funding
and advisory support provided by Alberta Innovates (AI) through the Accelerating Innovations into CarE (AICE) - Concepts Program, support from Alberta Innovates and NSERC through Advance Grant project, and Alliance Quantum Consortium. V.V. and T.J. acknowledge funding from NSERC and AI. This project is funded [in part] by the Government of Canada.
**Contributions** S.B. and V.V. conceived the ideas. S.B., S.A., T.Z., T.H., and V.V. developed and built the experimental setup. S.B., S.A., M.R., and M.T. analyzed the data. S.B., S.A., T.Z., and M.R. developed the theoretical model and performed the measurements. V.V., T.Z., and S.A. simulated and fabricated the sample. S.A. designed the chip. S.A. and M.K. developed the measurement codes. All the authors contributed to preparing the manuscript.
**Disclosures** The authors declare no conflicts of interest.
**Data Availability Statement** Data underlying the results presented in this paper are available upon reasonable request.
## Methods
### Device design and Fabrication
We conducted device simulations using Lumerical software with the MODE Solutions and Finite-Difference Time-Domain solvers. Our simulations involved introducing a TE-polarized mode into waveguide couplers, where the coupling lengths matched the side length of the squared ring. From the simulation results, we calculated the coupling angle \(\theta\) and power coupling coefficient \(g^{2}=\sin^{2}(\theta)\). Subsequently, we designed the sample and fabricated the device on an SOI chip. The silicon waveguides within our TPI have dimensions of 450 nm in width and 220 nm in height, while the cladding consists of 2 \(\mu\)m of top SiO\({}_{2}\) and 2.2 \(\mu\)m of bottom SiO\({}_{2}\). Our design includes square-shaped microrings with side lengths of 29.64 \(\mu\)m, and we integrated rounded corners with a radius of 5 \(\mu\)m to minimize scattering losses. To achieve efficient power coupling, we precisely set the gaps between the microrings at 180 nm, enabling approximately 98% power transfer from one microring to the next-nearest microring. This particular design was chosen to attain the desired anomalous Floquet insulator behavior at a wavelength of 1545 nm.
### Activation of the FDMR
We initiate the excitation of an FDMR and achieve precise tuning of its resonant frequency by manipulating the phase of one of the ring resonators situated on the lower boundary of the lattice. To facilitate this tuning process, we fabricate a titanium-tungsten (TiW) heater, which covers the rectangular perimeter of the microring, and apply current to the heater. As we perform this manipulation, we observe that the resonant wavelength shift \(\delta\lambda\) of the FDMR exhibits an approximately linear relationship with the applied heater power \(p_{\text{heat}}\), see Supplementary Methods.
### Measurement setup
Fig. 2a illustrates our experimental setup designed to test the correlation between the generated photon pairs and evaluate their quantum properties. TE-polarized pump photons are generated by a tunable continuous-wave laser (Santec TSL-550) at telecom wavelengths, with the polarization set by a fiber polarization controller (FPC). The laser output power ranges from 0.086 mW to 1.73 mW, delivered via a lensed-tip fiber that is butt-coupled to the chip's facet. The pump and generated wavelengths are collected at the output fiber, with 1% of the light sent to a photon detector and a power meter to measure the device's transmission spectrum. The remaining light is split into two separate paths using a 50/50 coupler. The signal and idler photons then pass through band-pass filters (Optoplex C-Band 50 GHz) to suppress the input pump light by 40 dB, as well as other generated photons at different frequencies. Finally, two SNSPDs (ID Quantique ID281) detect the isolated photons, providing electrical signals to a Time Controller (ID900) with a timing resolution of 100 ps.
In our experiments, we initially inject 1 mW of pump power from the laser into the lattice at the bottom left edge. We measure the output light from the bottom right edge while sweeping the laser wavelength from 1530 nm to 1557 nm (5 FSR of the TPI). The total loss from the laser to the power meter is measured as -10.5 dB, accounting for input and output coupling losses (from fiber to chip and vice versa), the FPC, and loss in the chip. The losses in
| Integrated photonic sources | On-chip power (\(\mu\)W) | CAR | PGR | Dimension | TP | Cavityless | Coupling to edge state |
|---|---|---|---|---|---|---|---|
| \(\mu\)-ring [39] | 7.4 | 12100 | 16 kHz | 1D | No | No | No |
| Coupled \(\mu\)-rings [40] | 12 | 1100 | 200 Hz | 1D | No | No | No |
| CROW [35] | 200 | 80 | 1.64 kHz | 1D | No | No | No |
| Topological edge state [32] | 1400 | 42 | - | 2D | Yes | - | - |
| **This work** | **86** | **2331** | **30 kHz** | **2D** | **Yes** | **Yes** | **Yes** |

Table 1: Comparison of entangled photon-pair generation around the 1.55 \(\mu\)m wavelength, achieved through optically driven SFWM. Abbreviations: micro-ring resonator (\(\mu\)-ring), coupled-resonator optical waveguide (CROW), topologically protected (TP), pair generation rate (PGR), coincidence-to-accidental rate (CAR); "cavityless" indicates the absence of a physical boundary forming the resonator.
the FPC and the chip are measured as 0.24 dB and 2.6 dB/cm, respectively. As a result, for 1 mW laser power, the power in the waveguide is estimated as 0.29 mW. The transmission spectra before and after applying the phase detuning are shown in Fig. 2c by the blue and pink traces, respectively.
### Single and coincidence counts measurement
Taking into account the losses in both the signal and idler channels, the measured coincidence rate can be expressed as \(N_{\mathrm{tot},c}=\eta_{i}\eta_{s}N_{c}\), where \(\eta_{s/i}\) represents the transmission efficiency from the chip's output to the SNSPDs, and \(N_{c}\) denotes the inferred coincidence rate excluding measurement losses and efficiencies (see Supplementary Materials). In addition to system losses, for the measured signal and idler pair rate, we must consider the noise and residue related to the sidebands of the pump and pump leakage, which exhibit a linear relationship with pump power (\(N_{s/i,p}\propto P\)). Furthermore, we account for the extra counts associated with the dark counts \(D_{s/i}\approx 50\) Hz of the SNSPDs. Consequently, the measured signal/idler count rates can be written as
\[N_{\mathrm{tot},s/i}=\eta_{s/i}N_{s/i}+N_{s/i,p}+D_{s/i} \tag{5}\]
where \(N_{s/i}\) signifies the inferred signal/idler single count rates. In Fig. 4a, we plot the inferred \(N_{c}\) as a function of the pump power in the edge state. As anticipated, the coincidence rate exhibits a rise with the increasing pump power.
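A minimal sketch of how the on-chip rates can be inferred from the measured ones, by inverting Eq. (5) and \(N_{\mathrm{tot},c}=\eta_{i}\eta_{s}N_{c}\); the efficiencies and backgrounds passed in below are placeholders, not the calibrated values of this setup.

```python
def inferred_rates(N_tot_c, N_tot_s, N_tot_i, eta_s, eta_i,
                   N_s_pump, N_i_pump, D_s=50.0, D_i=50.0):
    """Invert N_tot,c = eta_s*eta_i*N_c and Eq. (5) for the on-chip rates.

    eta_s, eta_i : chip-to-SNSPD transmission efficiencies
    N_*_pump     : pump-leakage/sideband backgrounds (linear in pump power)
    D_s, D_i     : SNSPD dark-count rates, ~50 Hz in the text
    """
    N_c = N_tot_c / (eta_s * eta_i)
    N_s = (N_tot_s - N_s_pump - D_s) / eta_s
    N_i = (N_tot_i - N_i_pump - D_i) / eta_i
    return N_c, N_s, N_i

# Placeholder example (all rates in Hz):
print(inferred_rates(120.0, 2.0e4, 1.8e4, eta_s=0.1, eta_i=0.1,
                     N_s_pump=5e3, N_i_pump=4e3))
```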
### Second-order correlation function and coincidence-to-accidental rate
To characterize the correlation of the generated photon pairs, we use the second-order cross-correlation function \(g_{si}^{(2)}(t)\), obtained from single and coincidence counting measurements employing the SNSPDs and the Time Controller (TC). This function allows us to determine the normalized probability of detecting signal and idler photons with a time separation of \(t\)[49]:
\[g_{si}^{(2)}(t)=\frac{N_{\mathrm{tot},c}}{N_{\mathrm{tot},s}N_{\mathrm{tot},i} }\times\frac{1}{T_{\mathrm{coin}}} \tag{6}\]
where \(T_{\mathrm{coin}}\) is the duration of arrival time called the coincidence window. We analyzed the second-order cross-correlation function \(g_{si}^{(2)}(t)\), within the smallest measurable coincidence window, 100 ps, of our Time Controller.
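Evaluated per histogram bin, Eq. (6) reduces to the following normalization; a sketch with the 100 ps window used here:

```python
def g2_cross(N_tot_c, N_tot_s, N_tot_i, T_coin=100e-12):
    """Eq. (6): coincidence rate in a bin at delay t, normalized by the
    accidental rate N_tot,s * N_tot,i * T_coin of uncorrelated detections
    (so g2 = 1 for a purely Poissonian background)."""
    return N_tot_c / (N_tot_s * N_tot_i * T_coin)

# Example: 100 Hz of coincidences on 2e4 Hz and 1.8e4 Hz singles
print(f"g2_si(0) ~ {g2_cross(100.0, 2.0e4, 1.8e4):.0f}")
```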
The determination of the coincidence-to-accidental rate (CAR) involved using the coincidence histogram. We achieved this by calculating the ratio of the total coincidence counts, within a coincidence window centered around the peak values, to the accidental counts recorded during the same coincidence window [50]
\[\mathrm{CAR}=\frac{\int_{-t/2}^{t/2}g_{si}^{(2)}(t^{\prime})\,\mathrm{d}t^{\prime}}{\int_{\infty-t/2}^{\infty+t/2}g_{si}^{(2)}(t^{\prime})\,\mathrm{d}t^{\prime}}. \tag{7}\]
To obtain the proper coincidence window, we first fitted the coincidence counts using a Gaussian function to calculate the standard deviation, \(\delta=99.7\) ps, and the full-width at half maximum (FWHM) of the coincidence counts as \(\mathrm{FWHM}=2\sqrt{2\ln 2}\,\delta=234.8\) ps. Fig. 4b shows the integration of the coincidence count rates, \(N_{c}\), over various coincidence windows and their corresponding CARs. As shown in this figure, there is a trade-off between the coincidence rate and the CAR at different time windows. Since the coincidence rate saturates at coincidence windows greater than 0.5 ns, leading to a dropping CAR, we consider the coincidence window \(=3\delta\approx 1.2\,\mathrm{FWHM}=0.3\) ns to calculate the CAR at the different pump powers shown in Fig. 3f.
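A sketch of this procedure on a measured histogram (bin centers `delays`, counts `counts`); the Gaussian fit supplies \(\delta\) and the FWHM, and the accidental level is estimated from bins far from the peak, standing in for the \(\infty\pm t/2\) limits of Eq. (7).

```python
import numpy as np
from scipy.optimize import curve_fit

def car_from_histogram(delays, counts, window=0.3e-9):
    """CAR via Eq. (7) with the 3*delta ~ 0.3 ns window chosen in the text."""
    gauss = lambda t, a, t0, s, b: a * np.exp(-(t - t0)**2 / (2 * s**2)) + b
    p0 = [counts.max(), 0.0, 1e-10, float(np.median(counts))]
    (a, t0, s, b), _ = curve_fit(gauss, delays, counts, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * s       # = 2.355 * delta
    bin_w = delays[1] - delays[0]
    peak = counts[np.abs(delays - t0) < window / 2].sum()
    far = np.abs(delays - t0) > 10 * fwhm             # "far from the peak"
    accidental = counts[far].mean() * (window / bin_w)
    return peak / accidental, fwhm
```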
### Resiliency against defects
We emphasize that the nature of the FDMR mode comes from the topologically nontrivial behavior of the lattice, in that the existence of circulating loops in the bulk of the lattice [26; 41] is required to mimic the quantum Hall effect that pushes the light to propagate only at the edge of the lattice unidirectionally. In fact, the topology of the
Figure 4: **a** Coincidence rate \(N_{c}\) versus the pump power in the edge state. **b** Dependence of the CAR and coincidence rate \(N_{c}\) on the measurement coincidence window.
lattice makes FDMR loops robust to fabrication imperfections as long as the defect does not completely alter the hopping sequence, for instance, by removing one ring in the loop where the FDMR is excited.
To illustrate the resilience of FDMRs, we conducted simulations of the lattice's transmission spectrum, considering deviations of 10% and 20% in coupling coefficients between microrings and variations in the roundtrip phase of each microring. Our simulations encompassed 50 lattices, incorporating random variations centered around the FDMR loop. The most significant impacts on the transmission spectrum are presented in Fig. 5, shown by the dark green and light blue lines for coupling coefficient and roundtrip phase variations, respectively. This illustration confirms that specific types of defects, whether within the FDMR loop or adjacent microrings, manifest as loss channels. While these defects may alter the quality factor of the FDMR resonance or cause a resonance shift, they are unable to destroy the FDMR loop.
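The full lattice simulation is beyond a short snippet, but the qualitative effect of the two defect classes can already be seen in a single-ring stand-in: randomizing the self-coupling \(t\) mainly changes the resonance depth and linewidth (the Q-factor), while randomizing the round-trip phase mainly shifts the resonance. All numbers below are illustrative, not the parameters of the fabricated lattice.

```python
import numpy as np

def allpass_T(phi, t, a=0.99):
    """Power transmission of an all-pass ring: self-coupling t, round-trip
    amplitude a, round-trip phase phi (standard all-pass formula)."""
    return (a**2 - 2*a*t*np.cos(phi) + t**2) / (1 - 2*a*t*np.cos(phi) + (a*t)**2)

rng = np.random.default_rng(0)
phi = np.linspace(-0.3, 0.3, 2001)            # detuning from resonance, rad
for frac in (0.10, 0.20):                     # 10% and 20% variations
    shift, width = [], []
    for _ in range(50):                       # 50 random instances, as in the text
        t_def = min(0.95 * (1 + frac * rng.uniform(-1, 1)), 0.999)
        phi_def = frac * rng.uniform(-1, 1)
        T = allpass_T(phi + phi_def, t_def)
        shift.append(phi[np.argmin(T)])       # phase defect -> resonance shift
        width.append(np.mean(T < 0.5 * (1 + T.min())) * (phi[-1] - phi[0]))
    print(f"{frac:.0%} defects: resonance-shift std = {np.std(shift):.3f} rad, "
          f"mean linewidth = {np.mean(width):.3f} rad")
```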
As discussed above, various forms of defects and disorders can appear within our devices. This section aims to elaborate further on distinct scenarios that could potentially impact the efficiency of our system.
**1. Defects affecting the edge state:** A possible scenario involves the emergence of defects/disorders along the trajectory of the edge state, far away from the region where the FDMR forms. In such an instance, the system demonstrates resilience in the face of defects: the spectral characteristics of the system remain largely unaffected by the presence of such a defect [34]. It is worth noting that this resilience pertains specifically to certain types of defects, as the edge state remains robust against select anomalies, for instance, side-wall roughness in the waveguide, which has the potential to induce back-reflection within the device.
**2. Impact on neighboring unit cells of the FDMR:** In this scenario, defects surround the FDMR mode, yet they do not directly intersect the FDMR loop itself. Instead, the adjacent unit cells function as loss channels, taking energy and photons away from the FDMR mode. Consequently, the introduction of these neighboring defects results in increased intrinsic losses for the mode. Such fabrication imperfections can manifest in various ways, such as fluctuations in the spacing between the rings composing the FDMR and those situated within neighboring unit cells. These variations exert a direct influence on the overall quality factor of the mode. It is important to highlight that the mode remains resilient against these defects, provided they do not exceed a certain magnitude (e.g., the absence of a ring within the FDMR). This assertion finds support in our simulation results, depicted in Fig. 5.
**3. Defects impacting the FDMR loop:** The third category encompasses defects that directly affect the rings forming the FDMR; this type of defect is of particular importance. It shares similarities with the second type, potentially inducing frequency shifts due to disordered configurations and resulting phase alterations. Moreover, these defects possess the capacity to influence the quality factor of the FDMR mode by introducing surface roughness or causing partial light scattering toward adjacent rings. Another consequence involves the potential disruption of the extrinsic coupling with the edge state. Such disruptions can yield variations in the extrinsic quality factor, either increasing or reducing it. These defects, although impactful, do not possess the capability to destroy the entire mode, as long as the defect's size remains proportionally smaller than that of a single ring within the loop. Fig. 5 shows simulations depicting these specific defects. Furthermore, we note that the mode's inherent frequency tunability plays a crucial role in mitigating the effects of defects, particularly those resulting in phase shifts within the loop.
Figure 5: **Robustness of FDMR vs fabrication imperfections.** **a** and **b,** Simulations of 10% and 20% variations, respectively, in the coupling coefficient between microrings and in the roundtrip phase of the microrings located around the FDMR resonance. Pink lines represent the lattice without defects. Light blue and dark green lines show the maximum changes in the FDMR due to coupling defects and roundtrip-phase defects, respectively. The maximum changes, in terms of both frequency shift and resonance linewidth, are chosen from 50 simulations with random defects: defects in the coupling coefficient represent variations in the gaps between microrings, and roundtrip-phase defects are due to variations in the size of the microrings. As illustrated in this figure, defects in the roundtrip phases cause shifts in resonance, and defects in \(g\) lead to changes in the Q-factor of the FDMR.
2305.01866 | On the generation of ultra-bright and low energy spread electron beams
in laser wakefield acceleration in a uniform plasma | The quality of electron beams produced from plasma-based accelerators, i.e.,
normalized brightness and energy spread, has made transformative progress in
the past several decades in both simulation and experiment. Recently,
full-scale particle-in-cell (PIC) simulations have shown that electron beams
with unprecedented brightness
($10^{20}\sim10^{21}~\mathrm{A}/\mathrm{m}^2/\mathrm{rad}^2$) and $0.1\sim 1$
MeV energy spread can be produced through controlled injection in a slowly
expanding bubble that arises when a particle beam or laser pulse propagates in
density gradient, or when a particle beam self-focuses in uniform plasma or has
a superluminal flying focus. However, in previous simulations of work on
self-injection triggered by an evolving laser driver in a uniform plasma, the
resulting beams did not exhibit comparable brightnesses and energy spreads.
Here, we demonstrate through the use of large-scale high-fidelity PIC
simulations that a slowly expanding bubble driven by a laser pulse in a uniform
plasma can indeed produce self-injected electron beams with similar
brightnesses and energy spreads as for an evolving bubble driven by an electron
beam driver. We consider laser spot sizes roughly equal to the matched spot
sizes in a uniform plasma and find that the evolution of the bubble occurs
naturally through the evolution of the laser. The effects of the electron beam
quality on the choice of physical as well as numerical parameters, e.g. grid
sizes and field solvers used in the PIC simulations are presented. It is found
that this original and simplest injection scheme can produce electron beams
with beam quality exceeding that of the more recent concepts. | Xinlu Xu, Thamine N. Dalichaouch, Jiaxin Liu, Qianyi Ma, Jacob Pierce, Kyle Miller, Xueqing Yan, Warren B. Mori | 2023-05-03T02:42:12Z | http://arxiv.org/abs/2305.01866v1 | On the generation of ultra-bright and low energy spread electron beams in laser wakefield acceleration in a uniform plasma
###### Abstract
The quality of electron beams produced from plasma-based accelerators, i.e., normalized brightness and energy spread, has made transformative progress in the past several decades in both simulation and experiment. Recently, full-scale particle-in-cell (PIC) simulations have shown that electron beams with unprecedented brightness (\(10^{20}\sim 10^{21}\) A/m\({}^{2}\)/rad\({}^{2}\)) and \(0.1\sim 1\) MeV energy spread can be produced through controlled injection in a slowly expanding bubble that arises when a particle beam or laser pulse propagates in a density gradient, or when a particle beam self-focuses in a uniform plasma or has a superluminal flying focus. However, in previous simulation work on self-injection triggered by an evolving laser driver in a uniform plasma, the resulting beams did not exhibit comparable brightnesses and energy spreads. Here, we demonstrate through the use of large-scale high-fidelity PIC simulations that a slowly expanding bubble driven by a laser pulse in a uniform plasma can indeed produce self-injected electron beams with similar brightnesses and energy spreads as for an evolving bubble driven by an electron beam driver. We consider laser spot sizes roughly equal to the matched spot sizes in a uniform plasma and find that the evolution of the bubble occurs naturally through the evolution of the laser. The effects of the choice of physical as well as numerical parameters, e.g., grid sizes and field solvers used in the PIC simulations, on the electron beam quality are presented. It is found that this original and simplest injection scheme can produce electron beams with beam quality exceeding that of the more recent concepts.
## I. Introduction
Critical metrics for assessing the capability of a particle accelerator are related to the quality of the beams they can deliver. Several macroscopic or ensemble-averaged quantities of importance to quantify the beams include the energy \(E_{\rm b}\), energy spread \(\sigma_{\rm E_{b}}\), current \(I\), normalized emittance \(\epsilon_{\rm N}\), duration \(\tau_{\rm b}\), and repetition rate \(f\). As a promising novel acceleration method, plasma-based acceleration (PBA) driven by an intense laser pulse or particle beam can sustain ultrahigh acceleration gradients (\(10\sim 100\) GV/m) within the acceleration medium, a fully ionized plasma [1; 2; 3; 4]. Generation of high-quality electron beams from PBA is critical for its development, as such beams could transform applications such as X-ray free-electron lasers (XFELs) [5; 6] and TeV-class colliders [7]. Both of these applications have stringent requirements for the beam quality. Although high acceleration gradients were demonstrated experimentally in the early stages of PBA research [8; 9; 10], the generated electron beams were characterized by large divergences and emittances, and Maxwellian energy distributions, i.e., \(100\%\) energy spread [11; 12; 13; 10]. In 2004, three groups [14; 15; 16] produced monoenergetic \(\sim\)100 MeV beams with a few percent energy spread and several mrad divergence by shooting an ultrashort (\(30\sim 60\) fs) laser pulse with \(\sim\)joule energy into a plasma with \(\sim 10^{19}\) cm\({}^{-3}\) density. However, the beam qualities had large shot-to-shot fluctuations due to the variation of the laser and plasma parameters when operating at the relatively high plasma densities.
In order to improve the stability and reproducibility of the beams produced from PBA, a number of controllable injection schemes have been proposed during the last two decades. These schemes utilize a variety of physical mechanisms, such as additional lasers [17; 18; 19; 20; 21; 22; 23; 24; 25], an external magnetic field [26], a plasma density gradient [27; 28; 29; 30; 31; 32], or the vast difference in the ionization potentials of electrons between different shells of atoms [33; 34; 35; 36; 37; 38; 39; 40; 41; 42].
To date, the representative mechanisms which have produced the best experimental and simulation results are ionization injection and density downramp injection in the nonlinear blowout regime. In ionization injection, electrons with high ionization potentials are released inside the wake and these electrons are more easily trapped (injected) than background electrons. In density downramp injection, a negative plasma density gradient is used to reduce the phase velocity of the wake by gradually increasing the wavelength of the wake and trigger injection of energetic plasma sheath electrons. The phase
space dynamics of the injected electrons in these two schemes have been thoroughly investigated [43; 44] and many variations have been proposed to further improve the generated beam quality [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59].
Recent full-scale high-fidelity particle-in-cell (PIC) simulations have shown that downramp injection and other schemes that gradually increase the wavelength in the nonlinear blowout regime [55; 57; 44] can produce beams with unprecedented brightness \(B=\frac{2I}{\epsilon_{\rm N}^{2}}=10^{20}\sim 10^{21}~{}\rm{A/m^{2}/rad^{2}}\) (\(I\sim 20~{}\rm{kA}\) and \(\epsilon_{\rm N}\sim 10~{}\rm{nm}\)) and low slice energy spreads of \(0.1\sim 1~{}\rm{MeV}\). The large energy chirp formed during the injection process can be compensated by the chirp of the acceleration gradient of the nonlinear wake [60; 61] during the subsequent acceleration process [62; 63; 64; 65; 66; 44]. As a result, the beams can achieve \(\sim 0.1\%\) or even smaller relative projected energy spreads after they are boosted to GeV-class or higher energies [67; 65; 68].
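As a unit check of the quoted brightness, \(B=2I/\epsilon_{\rm N}^{2}\) with \(I\sim 20\) kA and \(\epsilon_{\rm N}\sim 10\) nm gives a value inside the stated range:

```python
def brightness(I_amp, eps_n):
    """Normalized beam brightness B = 2*I/eps_N^2 in A/m^2/rad^2."""
    return 2.0 * I_amp / eps_n**2

print(f"B = {brightness(20e3, 10e-9):.1e} A/m^2/rad^2")   # 4.0e20
```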
Besides these controllable injection schemes, self-injection induced by the evolution of an intense laser driver [62; 63; 69; 70; 71; 72; 73; 74] in a uniform plasma with density around \(10^{17}\sim 10^{18}~{}\rm{cm^{-3}}\) has been commonly used in experiments to generate GeV-class electrons due to its simplicity [75; 76; 77; 78; 79; 80; 81; 82; 83; 84]. Notably, this self-injection mechanism was used to generate beams with energies above the GeV barrier [75] and, more recently, up to \(8~{}\rm{GeV}\)[82]. However, these beams were characterized by relatively poor quality, i.e., large energy spreads (\(\frac{\sigma_{E_{\rm b}}}{E_{\rm b}}\gg 1\%\)) and large emittances (\(\sim\upmu\)m), in both PIC simulations and experiments [62; 63; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 85; 86].
In this paper, we demonstrate using large-scale high-fidelity PIC simulations and theoretical analysis that the dynamics of self-injection induced by the evolution of a short pulse laser are fundamentally the same as those observed in downramp injection. Thus, the self-injected electrons can achieve similar beam quality as from downramp injection, i.e., \(10^{20}\sim 10^{21}~{}\rm{A/m^{2}/rad^{2}}\) brightness, sub-MeV slice energy spread, and \(\lesssim 0.1\%\) relative projected energy spread after the beam is boosted to GeV-class energies. In Sec. II, we compare simulation results for both a uniform plasma and a plasma with a density downramp to show the similarity between these two regimes. A GeV-class beam with \(0.3\times 10^{20}~{}\rm{A/m^{2}/rad^{2}}\) and \(0.2\%\) relative projected energy spread is produced in the example. In Sec. III, we explore the potential of self-injection in a uniform plasma. We find that for diffraction-limited laser beams and perfectly uniform plasmas this simple scheme can generate beams with ultrahigh brightness (\(\sim 10^{21}~{}\rm{A/m^{2}/rad^{2}}\)) and ultrahigh current (\(\sim 100~{}\rm{kA}\)) by operating at \(\sim 10^{19}~{}\rm{cm^{-3}}\) plasma densities. Possible physical factors which prevent the generation of such high-quality beams in experiments are briefly discussed in Sec. IV and a summary of our findings is given in Sec. V. The effects of numerical resolution and the choice of the electromagnetic field solver on the injected beams' quality are studied in the Appendix A.
## II. Common dynamics of self-injection in uniform plasma and density downramps
We model the self-injection of plasma electrons in a laser wakefield accelerator using the quasi-three-dimensional (Q3D) version [87] of the PIC code OSIRIS [88] with recently developed high-fidelity Maxwell solvers [89; 90]. A plasma column with density \(n_{\rm p0}=2\times 10^{18}~{}\rm{cm^{-3}}\) and isotropic electron temperature \(T=[0.1~{}\rm{eV},0.1~{}\rm{eV},0.1~{}\rm{eV}]\) is initialized at the beginning of the simulation. An 800 nm, 4.3 J laser pulse with a peak power \(P=155~{}\rm{TW}\) and a full-width half maximum (FWHM) pulse duration \(\tau_{\rm FWHM}=38.7~{}\rm{fs}\) (\(3.1~{}\omega_{\rm p0}^{-1}\)) is incident on the fully ionized plasma, where \(\omega_{\rm p0}=\sqrt{\frac{n_{\rm p0}e^{2}}{m\epsilon_{0}}}\) is the plasma frequency, \(m\) and \(e\) are the electron mass and charge, and \(\epsilon_{0}\) is the vacuum permittivity. The laser has a diffraction-limited Gaussian transverse profile and is focused at the start of the plasma (\(z=0\)) with a spot size \(w_{0}=16.9~{}\upmu\rm{m}~{}(4.5k_{\rm p0}^{-1})\) (the field profile at focus is \(e^{-r^{2}/w_{0}^{2}}\)), where \(k_{\rm p0}=\frac{\omega_{\rm p0}}{c}\) is the plasma wavenumber and \(c\) is the speed of light in vacuum. The normalized vector potential of the laser is \(a_{0}\equiv\frac{eA_{\rm L}}{mc^{2}}=\frac{eE_{\rm L}}{mc\omega_{0}}=4\) at its focus, where \(\omega_{0}\) is the laser frequency, \(A_{\rm L}\) is the peak vector potential, and \(E_{\rm L}\) is its peak electric field. The spot size used in the simulations is slightly larger than the matched spot size (\(k_{\rm p0}w_{0,\rm match}=k_{\rm p0}r_{\rm b}=2\sqrt{a_{0}}=4\)) suggested by nonlinear plasma wave wake theory [60; 72], where \(r_{\rm b}\) is the radius of the blowout wake. The laser is linearly polarized along the \(x\)-direction. The grid sizes are chosen as \({\rm d}z={\rm d}r=\frac{1}{512}k_{\rm p0}^{-1}\) to resolve the subtle physics involved in the injection process and the acceleration of the strongly focused injected electrons. Details of the simulation parameters can be found in Appendix B.
As shown in Fig. 1, we consider self-injection for two cases: a linear plasma density downramp with normalized density gradient \(g\equiv\frac{\Delta n/n_{\rm p0}}{k_{\rm p0}L_{\rm ramp}}=0.001\) and a uniform plasma with \(g=0\), where \(\Delta n\) is the density drop across the ramp and \(L_{\rm ramp}\) is the ramp length. When an intense laser pulse propagates inside a plasma (from left to right), the electrons are pushed outwards and forward from the pulse center. The ions then pull these electrons back, causing them to form a narrow sheath that surrounds an ion column [Fig. 1(a)]. In general, when the laser's peak power exceeds the self-focusing critical power [91], the plasma's refractive index is self-consistently modified to focus and guide the laser [92]. In the simulations conducted here, the peak power of the laser (155 TW) is much higher than the critical power for self-focusing in the plasma, \(P_{\rm c}\approx 17\left(\frac{\omega_{0}}{\omega_{\rm p0}}\right)^{2}~{}\rm{GW}=14.8~{}\rm{TW}\) for the density in the plateau. Thus, the laser is self-focused and its
peak field (projected spot size) increases (decreases), as shown in Fig. 1(c). Despite the differences in the plasma density profiles, the evolution of the laser is similar in these two cases.
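The derived quantities quoted above follow directly from the stated density and laser parameters; a short consistency check in SI units (Python), using only the density, wavelength, intensity, and the \(P_{\rm c}\) formula:

```python
import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8

n_p0 = 2e18 * 1e6                                # plasma density, m^-3
omega_p0 = np.sqrt(n_p0 * e**2 / (m_e * eps0))   # plasma frequency, rad/s
kp_inv = c / omega_p0                            # skin depth 1/k_p0, m
omega_0 = 2 * np.pi * c / 0.8e-6                 # 800 nm laser frequency

print(f"1/k_p0         = {kp_inv*1e6:.2f} um")                  # ~3.76 um
print(f"matched w0     = {2*np.sqrt(4.0)*kp_inv*1e6:.1f} um")   # 2*sqrt(a0)/k_p0
P_c = 17e9 * (omega_0 / omega_p0)**2                            # W
print(f"P_c            = {P_c/1e12:.1f} TW")                    # ~14.8 TW
print(f"P/P_c (155 TW) = {155e12/P_c:.0f}")                     # ~10
```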
The wavelength of a laser-driven 3D nonlinear plasma wake depends on the spot size and the peak intensity of the laser driver in a complicated way, particularly when the beam is not well matched. In general, the wavelength tends to increase with both spot size and laser intensity. However, for the parameters studied here, the projected spot size decreases while the intensity increases through self-focusing in the ion channel. In this case, the increase in the intensity dominates, leading to a slow expansion of the wake size (wavelength). Previous simulations have also shown that the wake expands as the laser diffracts for an initially tightly focused beam [71]. Since it is difficult to locate the precise end of the wake, which is occupied by the self-injected electrons, we measure the axial position in the first wave bucket where \(E_{\rm z}=0\) to quantify the expansion of the wake [red lines in Fig. 1(c)]. Note that the position where \(E_{\rm z}=0\) is roughly the center of the wake, and the rear of the wake expands with approximately twice this velocity. In the uniform plasma case, the self-focusing of the laser driver causes an expansion of the wake with a velocity, \(v_{\phi,E_{\rm z}=0}\), of \(\sim 0.003c\) between \(\sim 50k_{\rm p0}^{-1}\) and \(\sim 150k_{\rm p0}^{-1}\). The wake expands with a faster velocity of \(\sim 0.005c\) in the case with a density downramp, which indicates that the downramp speeds up the wake expansion.
In a nonlinear wake, plasma electrons originating from \(r_{\rm i}\sim\kappa r_{\rm b}\) form the high-density sheath surrounding the ion channel [44] and gain large forward velocities when they move to the back of the wake, where \(r_{\rm i}\) is the initial radial position of the electron and \(\kappa\) is a value between 0.5 and 1 which depends on the driver. If their forward velocities are faster than the phase velocity of the end of the wake, these sheath electrons remain just inside the end of the wake, where they are then accelerated continuously. The trajectory of a sample injected electron is superimposed on the wake shape as shown in Fig. 1(a).
Since the electrons are always injected at the rear of the continuously expanding wake, there is a mapping between the initial positions (\(z_{\rm i}\)) of the electrons and their axial positions inside the wake after injection (\(\xi\)) [62; 63; 44; 73] which can be seen in Figs. 2(a) and (b). The duration of the beam is roughly equal to the difference of the wake wavelength at the start and the end of the injection. The injected beam from a plasma downramp has a longer duration since the wake is smaller at the start of the injection, where the plasma density is \(1.1n_{\rm p0}\). The compression factor thus scales as \(\sim\gamma_{\phi,E_{\rm z}=0}^{2}\)[44; 53], where \(\gamma_{\phi,E_{\rm z}=0}=1/\sqrt{1-v_{\phi,E_{\rm z}=0}^{2}/c^{2}}\). Thus, there is a significant compression of the beams' duration during the injection, i.e., for these simulations the electrons, initially distributed over a length of \(\sim 100\)\(k_{\rm p0}^{-1}\), are compressed into a beam with a duration of \(\sim k_{\rm p0}^{-1}\). This enables the generation of beam currents of 10s of kA. As shown in Figs. 2(c) and (d), the current of the core of the beams is \(\sim 30\) kA. Here the core of the beams is defined as the part with brightness \(\geq 10^{19}\) A/m\({}^{2}\)/rad\({}^{2}\), which is \(11.3<k_{\rm p0}\xi<12.1\) for the \(g=0.001\) case and \(11.47<k_{\rm p0}\xi<11.7\) for the \(g=0\) case. The charge of the beam core is 180 pC for \(g=0.001\) and 76 pC for \(g=0\). We note that in addition to the continuous injection during the expansion of the wake, there is an isolated injection near the end of the upramp
Figure 1: (a) The charge density distribution of a nonlinear plasma wake (white-blue-black) driven by a laser pulse (orange) at \(\omega_{\mathrm{p}0}t=150\) in a uniform plasma. The trajectory of an injected electron (red line) is superimposed. (b) The plasma density profile. The downramp starts from \(k_{\mathrm{p}0}z=50\) with density \(1.1\)\(n_{\mathrm{p}0}\) and ends at \(k_{\mathrm{p}0}z=150\) with density \(n_{\mathrm{p}0}\). A super-gaussian upramp profile with order 4 is used between \(z=0\) and \(20\)\(k_{\mathrm{p}0}^{-1}\). (c) The evolution of the peak \(a_{0}\) (blue lines) and the projected spot size (black lines) of the laser pulse, and the position of the plasma wake where \(E_{\mathrm{z}}=0\) (red lines).
(\(z=20k_{\rm p0}^{-1}\)) [32]. This injection forms an attosecond peak with \(\sim 70\) kA current at the head of the beams in both cases [32].
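A worked estimate of the compression factor quoted above, under our reading of the text: if the \(E_{\rm z}=0\) point recedes at a velocity \(u\), the rear of the wake recedes at roughly \(2u\), so its lab-frame phase velocity is \(v_{\phi}\approx c-2u\) and \(\gamma_{\phi}^{2}=1/(1-v_{\phi}^{2}/c^{2})\) evaluates as follows.

```python
def compression(u_over_c):
    """Compression factor gamma_phi^2 for a rear-of-wake phase velocity
    v_phi ~ c - 2*u, where u is the measured E_z = 0 recession velocity
    (our interpretation of the estimate in the text)."""
    v = 1.0 - 2.0 * u_over_c
    return 1.0 / (1.0 - v**2)

for u in (0.003, 0.005):
    print(f"u = {u}c -> gamma_phi^2 ~ {compression(u):.0f}")
# ~84 and ~50: consistent with electrons initially spread over ~100/k_p0
# mapping into a beam of duration ~1/k_p0.
```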
Based on analyzing numerous PIC simulations of downramp injection and its variants by the authors and others [55; 57; 93; 44], we have inferred that the current of the core of the beam scales as \(I\sim\frac{\Lambda}{2}I_{\rm A}=I_{\rm d}\) for a beam-driven nonlinear wake and \(I\sim\frac{a_{0}}{2}I_{\rm A}\) for a laser-driven case (with a nearly matched spot size), where \(\Lambda\equiv\frac{2I_{\rm d}}{I_{\rm A}}\), \(I_{\rm d}\) is the peak current of the beam driver and \(I_{\rm A}\approx 17\) kA is the non-relativistic Alfven current. While we currently do not have simple arguments for obtaining these scaling laws, they may arise due to a fine balance between the initial injection and a subsequent quenching via self-beam loading, i.e., the injected electrons can modify the shape of the wake which reduces the forward velocity of the sheath electrons and hinders injection [94]. This observed scaling suggests that beams with hundreds of kA current and nanocoulombs of charge can be injected into a wake driven by an intense laser driver with \(a_{0}\gtrsim 10\).
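A quick check of this empirical scaling against the simulations in this paper:

```python
I_A = 17.0  # non-relativistic Alfven current, kA

def core_current_laser(a0):
    """Empirical core-current scaling I ~ (a0/2)*I_A for laser drivers."""
    return 0.5 * a0 * I_A

print(core_current_laser(4.0))    # ~34 kA, vs ~30 kA in Figs. 2(c) and (d)
print(core_current_laser(16.0))   # ~136 kA, vs the ~100 kA PW case in Sec. III
```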
The mapping between the initial positions of the electrons and their axial positions after injection has two consequences for the beams' energy distribution. The first is a low slice energy spread, since the electrons in one axial slice originate from similar longitudinal locations and experience the same acceleration gradient after injection. Fig. 2(e) shows that the beams have a slice energy spread of \(\sim 0.5\) MeV except near their heads. The second is a roughly linear energy chirp along the beam, since the electrons at the beam head are injected earlier and accelerated over a longer distance. As shown in Fig. 2(e), the chirps at \(\omega_{\rm p0}t=250\) are \(174\) MeV/(\(k_{\rm p0}\xi\)) (\(46\) MeV/\(\upmu\)m) for \(g=0.001\) and \(423\) MeV/(\(k_{\rm p0}\xi\)) (\(112\) MeV/\(\upmu\)m) for \(g=0\).
These positive chirps can be compensated by the chirp of the acceleration gradient during the subsequent acceleration. The beam loaded wake has a lower gradient at the head and higher gradient at the rear. Thus there is an optimized acceleration distance where the beam can achieve a projected energy spread on the order of MeV [68; 44]. Due to the constant evolution of the laser driver, the acceleration gradient and its chirp change and we rely on simulations to find this optimized distance. We present the evolution of the average energy, the projected energy spread of the beam core (\(11.47<k_{\rm p0}\xi<11.7\)) for \(g=0\) and the acceleration gradient in Fig. 3(a). The optimized acceleration distance occurs at \(z=810k_{\rm p0}^{-1}\) (\(3.05\) mm) where the core of the beam achieves a projected energy spread (red dashed line) as low as \(1.6\) MeV while its average
Figure 2: The initial \(z_{\rm i}\) and the axial position after injection \(\xi\) of the injected electrons for \(g=0.001\) (a) and \(g=0\) (b). The density distribution and the current (red lines) of the injected beam at \(\omega_{\rm p0}t=250\) for \(g=0.001\) (c) and \(g=0\) (d). (e) The longitudinal phase space and the slice energy spread for both cases at \(\omega_{\rm p0}t=250\). (f) The slice emittance along \(x\)-direction (blue lines) and the brightness (red lines) at \(\omega_{\rm p0}t=250\). The duration of each slice is \(\frac{1}{128}k_{\rm p0}^{-1}\) when calculating the slice properties.
energy (blue solid line) is 0.76 GeV. The energy of the beam can still be boosted since \(\sim 75\%\) of the energy is left in the laser pulse. We continue the simulation to \(z=1230k_{\rm p0}^{-1}\) (4.63 mm) where the core of the beam has 1.04 GeV energy and the laser pulse contains \(\sim 60\%\) of its initial energy. The average acceleration gradient drops from \(\sim 0.35\) TV/m at \(z=250k_{\rm p0}^{-1}\) (0.94 mm) to \(\sim 0.1\) TV/m at \(z=1230k_{\rm p0}^{-1}\) (4.63 mm). A simulation with lower resolution (\({\rm d}z={\rm d}r=\frac{1}{128}k_{\rm p0}^{-1}\)) shows the beam energy reaches its maximum of 1.40 GeV at \(z=2440k_{\rm p0}^{-1}\) (9.19 mm) and then starts to lose energy. The longitudinal phase space of the injected beam at three acceleration distances is shown in Fig. 3(b), where the evolution of the energy chirp can be seen. Secondary and tertiary phases of injection occur around \(z=300k_{\rm p0}^{-1}\) (1.13 mm) and \(750k_{\rm p0}^{-1}\) (2.82 mm), which are characterized by low current, large emittance, and large slice energy spreads. The aforementioned analysis and simulation results show that the injection mechanism and the longitudinal mapping are similar in a downramp and a uniform plasma.
As shown in Ref. [44], sheath electrons experience a transverse defocusing force from the nonlinear wake at the very rear of the channel prior to injection [94]. This transverse defocusing force reduces the transverse momentum of these electrons as they approach the axis leading to a beam with ultra-low emittance. The same dynamics occur for electrons injected from an elongating wake in a uniform plasma. The emittance of the beams along the laser polarization direction (\(x\)) are shown in Fig. 2(f) where the middle of the beams can achieve an emittance as low as \(<0.01\)\(k_{\rm p0}^{-1}\) (38 nm) while the head and the tail are characterized by a larger emittance of \(0.01\sim 0.04\)\(k_{\rm p0}^{-1}\) (\(\sim 100\) nm). The relatively large emittance at the beam tail is due to the lack of symmetry of the injected electrons at the end of the injection process, i.e., the initial angular distribution of the injected electrons is asymmetric as shown in the Appendix C. The emittance along the other transverse direction has a similar profile. The peak brightness of the beams is \(0.5\times 10^{20}\) A/m\({}^{2}\)/rad\({}^{2}\) for \(g=0.001\) and \(0.3\times 10^{20}\) A/m\({}^{2}\)/rad\({}^{2}\) for \(g=0\).
In principle, the injection of ultrahigh quality electrons in uniform plasmas can be controlled by the initial parameters of the laser pulse driver (the intensity, the spot size, the vacuum focal plane and the pulse duration) and the plasma density. However, it is challenging to derive
Figure 4: (a) The evolution of the laser pulse driver and the plasma wake wave under different laser spot sizes: \(k_{\rm p0}w_{0}=3.5,4\) and 4.5. (b) The charge density distribution of the injected electrons and their current profiles for these cases.
Figure 3: (a) The evolution of the average energy (blue solid line) and the projected energy spread (red dashed line) of the core of the beam (\(g=0\)) and the acceleration gradient (black dotted line). (b) Longitudinal phase space of the injected electrons at three acceleration distances. The shadowed region represents the beam core.
an analytical expression to describe with good accuracy how the injection depends on these parameters. Instead, we rely on simulations to show how the injection varies when different laser spot sizes are used. In Fig. 4, we see that the plasma wake expands and injection occurs for three cases near the matched spot size, \(k_{\rm p0}w_{0}=3.5\), 4, and 4.5, for fixed intensity (\(a_{0}=4\)) and thus different laser powers. When a laser pulse with a smaller spot size is used, the injection distance shrinks. The isolated injection that occurs at the end of the upramp around \(k_{\rm p0}z=20\) for \(k_{\rm p0}w_{0}=4\) and 4.5 is absent when \(k_{\rm p0}w_{0}=3.5\). Note that the aforementioned injection of ultrahigh quality beams also occurs for laser pulse drivers with spot sizes far away from the matched spot size. We choose nearly matched laser spot sizes in this paper for the subsequent acceleration since a nearly matched laser pulse can be guided over many Rayleigh lengths in plasma [72].
The motion of plasma ions [95; 96], which becomes important when the parameter \(\Omega_{\rm b}\frac{\sigma_{\rm zb}}{c}=\sqrt{\frac{n_{\rm b}}{n_{\rm p0}}\frac{m}{m_{\rm i}}}k_{\rm p0}\sigma_{\rm zb}\) exceeds unity, would modify the distribution of the acceleration and focusing fields inside the wake and degrade the beam quality. Here \(\Omega_{\rm b}=\sqrt{\frac{n_{\rm b}e^{2}}{m_{\rm i}\epsilon_{0}}}\) is the ion plasma frequency evaluated at the beam density, \(m_{\rm i}\) is the ion mass, and \(n_{\rm b}\) and \(\sigma_{\rm zb}\) are the peak density and the duration of the injected beam. For the \(g=0\) case, this value is \(\Omega_{\rm b}\frac{\sigma_{\rm zb}}{c}\sim 0.1\) even for the lightest hydrogen ions, so the ion motion is weak. Simulations performed with mobile hydrogen ions confirm that ion motion has little effect on the injected beam quality.
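A worked estimate of the ion-motion parameter; the beam density and length below are taken as representative of the values quoted in the text (\(n_{\rm b}\sim 10^{3}n_{\rm p0}\), \(\sigma_{\rm zb}\) a fraction of \(k_{\rm p0}^{-1}\)), so the \(\sim 0.1\) figure is illustrative.

```python
import numpy as np

def ion_motion(nb_over_np, m_over_mi, kp_sigma_z):
    """Omega_b * sigma_zb / c = sqrt(n_b/n_p0 * m/m_i) * k_p0 * sigma_zb."""
    return np.sqrt(nb_over_np * m_over_mi) * kp_sigma_z

# Hydrogen (m/m_i = 1/1836), n_b ~ 1e3 n_p0, k_p0*sigma_zb ~ 0.15 (assumed):
print(f"{ion_motion(1e3, 1/1836.0, 0.15):.2f}")   # ~0.11, i.e. weak ion motion
```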
We emphasize that fine grid sizes and advanced field solvers are necessary to model ultrahigh quality electron beam generation in highly nonlinear plasma wakes. Due to their ultra-low emittance and the ultra-strong focusing fields inside the ion channel, the injected beams are tightly focused down to spot sizes of \(\sim 0.1\)\(k_{\rm p0}^{-1}\) with peak densities as high as \(10^{3}\sim 10^{4}\)\(n_{\rm p0}\). Thus a fine grid size is needed to resolve them. When these high-density relativistic electrons propagate on the numerical grids, they can excite unphysical numerical fields, since the grids can be viewed as a medium with a complicated dielectric tensor. Two important kinds of unphysical effects are numerical Cherenkov radiation [97; 98; 99] and numerical space-charge fields [90]. These unphysical fields can modify the beams' evolution and degrade their quality. Maxwell field solvers based on spectral methods [100; 101; 102; 103; 104] or on finite-difference schemes with extended stencils [90; 105] have recently been developed to suppress some or combinations of these numerical fields. In the aforementioned simulations, a combination of fine grid sizes (\({\rm d}z={\rm d}r=\frac{1}{512}k_{\rm p0}^{-1}\)) and the recently developed Xu solver [90] are used to model the injected electrons with high fidelity. For comparison, results with the Yee solver and/or coarse resolutions are presented in Appendix A.
## III. Generation of ultrahigh brightness and ultrahigh current beams with hundreds of MeV
The properties of self-injected beams are determined by the plasma density and the evolution of the laser driver in the plasma in terms of both the \(\xi\) and propagation distance variables. Thus, there is a large parameter space to explore, and electron beams with different properties can be injected. In the previous section, a GeV-class high-quality beam with \(10^{19}\sim 10^{20}\) A/m\({}^{2}\)/rad\({}^{2}\) brightness was produced by focusing a 4.3 J laser pulse into a plasma with \(n_{\rm p0}=2\times 10^{18}\) cm\({}^{-3}\). We show two more representative cases in this section: one is the generation of \(\sim 100\) MeV beams with \(\sim 10^{21}\) A/m\({}^{2}\)/rad\({}^{2}\) brightness, and the other is the generation of beams with \(\sim 100\) kA current.
Each simulation corresponds to a family of physical instances where the normalized parameters remain fixed. Thus, the emittance of the injected beams in downramp injection and self-injection in a uniform plasma scales with the background plasma density as \(\epsilon_{\rm N}\propto k_{\rm p0}^{-1}\propto n_{\rm p0}^{-1/2}\) in beam-driven plasma wakefield accelerators, if the normalized dimensions of the beam drivers (\(k_{\rm p0}\sigma_{x,y,z}\)) and their normalized peak density (\(\frac{n_{\rm b}}{n_{\rm p0}}\)) are assumed fixed [44]. This scaling for \(\epsilon_{\rm N}\) also holds for laser-driven wakes if the laser parameters, including the laser frequency, are also scaled. However, since high-power lasers are presently available in only a limited range of wavelengths, the frequency ratio (\(\frac{\omega_{0}}{\omega_{\rm p0}}\)) cannot be scaled appropriately as the density changes. At lower frequency ratios the laser evolves more rapidly, and thus the scaling of \(\epsilon_{\rm N}\) with density will be approximate. This scaling nevertheless indicates that ultra-bright beams can be produced using a high-density plasma for lasers as well as particle beam drivers. To confirm this, results from a simulation with \(n_{\rm p0}=10^{19}\) cm\({}^{-3}\) are shown in Fig. 5. An 800 nm laser pulse with duration \(\tau_{\rm FWHM}=25.9\) fs (\(4.6\omega_{\rm p0}^{-1}\)) and linear polarization in the \(x\) direction is focused inside the plasma [\(z_{\rm f}=67.3\) \(\upmu\)m (\(40k_{\rm p0}^{-1}\))] with a focal spot size \(w_{0}=5.9\) \(\upmu\)m (\(3.5k_{\rm p0}^{-1}\)). The laser contains 0.35 J energy and its peak power is \(18.7\) TW. Its normalized vector potential at the vacuum focal plane is \(a_{0}=4\). The spot size and the focal position [106; 107] have a significant effect on the laser evolution and thus where the injection happens. These parameters are first scanned using simulations with low resolution to find optimal operating parameters.
As shown in Fig. 5(a), the laser continues to focus even after its vacuum focal plane due to self-focusing and starts to expand around \(z\approx 135\) \(\upmu\)m (\(80k_{\mathrm{p0}}^{-1}\)). Injection starts at \(z\approx 135\) \(\upmu\)m, where the laser has been focused to \(a_{0}\approx 6\). The injection ceases at \(z\approx 235\) \(\upmu\)m (\(140k_{\mathrm{p0}}^{-1}\)) due to a combination of beam loading from the injected electrons and the decrease of the laser intensity. Compared with the \(g=0\) case in Fig. 1, the wake expands
with a faster velocity of \(0.01\sim 0.02c\), which leads to a smaller energy chirp at the end of the injection [31.4 MeV/(\(k_{\rm p0}\xi\))]. This small chirp is quickly compensated by the slope of the acceleration gradient; thus the beam energy at the optimized acceleration distance is much lower than for the \(g=0\) case in Fig. 1.
Simulations show the core of the injected beam (\(12.2<k_{\rm p0}\xi<13.3\) with \(B\geq 0.5\times 10^{21}\) A/m\({}^{2}\)/rad\({}^{2}\)) achieves its minimum projected energy spread (1.6 MeV) at \(z=326\) \(\upmu\)m (\(194k_{\rm p0}^{-1}\)) with an average energy of 74.4 MeV. The slice energy spread is \(\sim 0.3\) MeV and the current is \(\sim\)13 kA. The charge contained in the beam core is 81 pC. As shown in Fig. 5(b), the emittance of the beam core is \(\sim\)3 nm (\(x\)) and \(\sim\)9 nm (\(y\)), and its peak brightness reaches \(1.5\times 10^{21}\) A/m\({}^{2}\)/rad\({}^{2}\). The unequal emittance (and spot sizes) along the two transverse directions of the injected beam can be traced back to the fact that only electrons with initial azimuthal angles around \(\theta_{i}\equiv{\rm atan2}(y_{i},x_{i})=\pm\frac{\pi}{2}\) are injected; the details are presented in Appendix C. The dephasing between the injected electrons and the laser driver sets a limit on the maximum energy gain of the injected beam of \(\sim 100\) MeV in a plasma with \(\sim 10^{19}\) cm\({}^{-3}\) density [72]. Injection and acceleration in such a high-density plasma can thus serve as an injector delivering 100 MeV-class ultra-bright electron beams.
Based on the empirical relation between the current of the injected beams and the \(a_{0}\) of the laser pulse drivers, beams with hundreds of kA are expected to be produced when using laser pulses with \(a_{0}\gtrsim 10\). We present simulation results in Fig. 6 for a case where a PW laser pulse and plasma density \(n_{\rm p0}=10^{19}\) cm\({}^{-3}\) are used. An 800 nm, linearly polarized (\(x\)) laser pulse with duration \(\tau_{\rm FWHM}=38.8\) fs (\(6.9\omega_{\rm p0}^{-1}\)) is focused outside of the plasma [\(z_{\rm f}=-640\)\(\upmu\)m (\(-380k_{\rm p0}^{-1}\))] with a focused spot size \(w_{0}=11.8\)\(\upmu\)m (\(7k_{\rm p0}^{-1}\)). The laser contains 33.4 J energy and its peak power is 1.2 PW. Its normalized vector potential at its focal plane is \(a_{0}=16\).
Self-injection starts at \(\sim 100k_{\rm p0}^{-1}\) and ends at \(\sim 300k_{\rm p0}^{-1}\). A beam with \(\sim 60\) fs duration and \(\sim\)100 kA current is formed at \(\omega_{\rm p0}t=400\). The total injected charge is \(\sim 6\) nC, which can be further improved if a plasma downramp is introduced to elongate the injection distance. Beams with 10s of MA current may be obtained by controlling the acceleration distance to form an energy chirp along this beam and compressing it in a small chicane [108]. A high-fidelity simulation of the generation of this high-current beam requires a resolution finer than \(\frac{1}{512}k_{\rm p0}^{-1}\). However, a resolution of \(\frac{1}{256}k_{\rm p0}^{-1}\) is used in Fig. 6 due to the significant computational cost of modeling the large plasma wave wake, and this leads to a highly suspect emittance (\(\sim\upmu\)m) and artificial ripples in the longitudinal phase space of the beam (see Fig. 6). Based on our experience with other cases where we are able to carry out fully resolved simulations, we believe that these effects are numerical and not physical.
## IV. Discussion on physical effects which would degrade the beam quality
The aforementioned results have demonstrated the ability to inject ultrahigh-quality electron beams in the simplest configuration of laser wakefield accelerators. However, the beam quality reported in experiments (whether based on a density downramp or on the evolution of the laser pulse driver) is significantly worse than the predictions of PIC simulations. In this section we discuss the possible reasons behind this large gap.
The transverse deceleration process which results in injection of low-emittance beams depends on having a well-defined sheath and on the axial symmetry of the nonlinear wake. Thus, any factors that affect the sheath properties or the symmetry of the wake degrade the emittance and brightness of the injected beams. For instance: non-diffraction-limited lasers, a finite temperature of the plasma electrons [44; 109], an asymmetric
Figure 5: Generation of electron beams with \(\sim 10^{21}\) A/m\({}^{2}\)/rad\({}^{2}\) brightness. (a) The evolution of the normalized vector potential (blue), the projected spot size (black) of the laser driver, and the expansion of the wake (red). The purple dots show the initial axial and radial positions of the injected electrons. (b) The longitudinal phase space and the slice parameters of the injected beam at \(k_{\rm p0}z=194\): the normalized emittance, the brightness, the current and the slice energy spread. The shaded region represents the beam core.
driver (intensity and/or phase distortions) [110; 111; 112; 113], a finite transverse bulk velocity of a gas produced from a gas jet, or a transversely non-uniform plasma density distribution at spatial scales smaller or larger than the plasma skin depth [114; 115]. These imperfections not only affect the injection process but also degrade the beam quality during the subsequent acceleration by modifying the linear focusing field and the transversely uniform acceleration field of an axisymmetric nonlinear wake. However, it is not straightforward to study these effects in the Q3D geometry, since significant effort is needed to understand how many modes are necessary to model these asymmetric wakes with high fidelity. Meanwhile, the computational costs of full 3D simulations are currently unaffordable even using GPU-based hardware. Therefore, we will focus on the effects of the plasma temperature in this section.
In practical experiments, the plasma is created in many ways, e.g., by high-voltage discharge or by optical ionization with a low-energy laser pulse. Depending on the ionization process, the plasma electrons will be distributed with different temperatures, which may affect the quality of the injected electron beams. In the self-injection results shown in Sec. II, we assumed an initial plasma electron temperature of \(T=0.1\) eV. In Fig. 7, we compare the emittance of the self-injected beams from plasmas with higher temperatures (1 eV, 10 eV and 100 eV). While the emittance within the beam core is similar for \(T=1\) eV and \(T=0.1\) eV, it grows by a factor of \(\sim\)2 for \(T=10\) eV. The emittance increases dramatically and approaches \(\sim\upmu\)m when the plasma electron temperature is \(T=100\) eV. This indicates that eV-level plasma temperatures are necessary for the generation of high-quality beams with nanometer-scale emittances.
Even when these ultra-bright beams are produced in the plasma, the subsequent transport may also degrade the emittance due to the mismatch of the beams' transverse phase space [116; 117; 118; 119; 120; 121; 122; 123], especially when other macroscopic parameters of the beams (e.g., energy, pointing angles and transverse positions) fluctuate from shot to shot [124]. Thus, carefully designed transport stages are also necessary for the utilization of high-quality self-injected beams from PBA.
## V. Conclusions
We have shown that the dynamics of self-injection in uniform plasmas is the same as that of density downramp injection [44] and its variants [55; 57]; thus beams with \(10^{20}\sim 10^{21}\) A/m\({}^{2}\)/rad\({}^{2}\) brightness and \(0.1\sim 1\) MeV slice energy spreads can be produced. Fine grid sizes and advanced field solvers are necessary to model the generation of these ultrahigh-quality beams in PIC codes.
Due to its low emittance, the GeV beam described in Sec. II is focused tightly to a spot size as small as 240 nm and reaches a peak density as high as \(2\times 10^{21}\) cm\({}^{-3}\). The generated beams with higher brightness or current in Sec. III may be focused to an even higher beam density if their energy is boosted to GeV-levels. These extremely dense beams can enable many novel applications, such as driving all-optical XFELs [125], generating ultrabright \(\gamma\)-rays through a beam-plasma instability [126] or beam-multifoil collisions [127], driving plasma wakefield accelerators with \(\sim 10\) TeV/m acceleration gradient
Figure 6: Generation of electron beams with \(\sim 100\) kA current by sending a 1.2 PW laser pulse into a plasma with \(10^{19}\) cm\({}^{-3}\) density: the longitudinal phase space and the current profile at \(\omega_{\mathrm{p0}}t=400\).
Figure 7: Comparison of the emittance of the self-injected electron beams for different plasma electron temperatures. The parameters of the laser pulse and the plasma are the same as those of Fig. 1.
in a solid density plasma [128] and studying strong-field QED with beam-beam collisions [129] or beam-plasma collisions [130].
Using high-fidelity large-scale numerical simulations, we have shown that beams with extreme parameters can be produced when a laser pulse propagates and evolves in a uniform plasma. This approach can dramatically simplify the complexity of plasma-based accelerators. Our findings may also stimulate future research on why current experiments cannot deliver these ultrahigh-quality beams and on what is needed to finally produce them in experiments, enabling many novel plasma-acceleration-driven applications.
###### Acknowledgements.
This work was supported by the Fundamental Research Funds for the Central Universities, Peking University, the National Natural Science Foundation of China (NSFC) (No. 11921006) and the National Grand Instrument Project (No. 2019YFF01014400), the U.S. Department of Energy under Contracts No. DE-SC0010064, the U.S. National Science Foundation under Grants No. 2108970, and the DOE Scientific Discovery through Advanced Computing (SciDAC) program through a Fermi National Accelerator Laboratory (FNAL) subcontract No. 644405. The simulations were performed on the resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory.
## Appendix A Comparison of the injected beams from simulations with different resolutions and solvers
In Sec. II, we showed that the beam injected in a uniform plasma shares the same dynamics as in a plasma downramp and is characterized by ultrahigh brightness and low slice energy spread. However, the beam quality obtained from simulations is closely related to the simulation setup, such as the grid sizes and the Maxwell field solver. In Sec. II, a combination of fine grid sizes (\(\mathrm{d}z=\mathrm{d}r=\frac{1}{512}k_{\mathrm{p0}}^{-1}\)) and the recently developed Xu solver [90] is used to model the injected electrons with high fidelity; this setup will be referred to as 'Simulation Xu/fine' in the following discussions. In Fig. 8, we show that the qualities of the injected beams in simulations with coarse resolutions or with the Yee solver are much worse than the results from Simulation Xu/fine.
Simulation Yee/fine uses the same setup as Simulation Xu/fine except that the Yee field solver [131] is used instead of the Xu solver. Simulation Yee/coarse uses a much coarser grid resolution (\(\mathrm{d}z=\frac{1}{150}k_{\mathrm{p0}}^{-1},\mathrm{d}r=\frac{1}{10}k_{\mathrm{p0}}^{-1}\)) and the Yee solver with 128 macro-particles per cell to improve the statistics. The time step of \(\mathrm{d}t=0.00625\omega_{\mathrm{p0}}^{-1}\) is close to the Courant limit. The longitudinal phase space and the real space distribution of the injected beams in these three simulations are compared in Fig. 8. In Simulation Yee/fine, the beam is focused to a similar real space distribution as in Simulation Xu/fine. However, these high-density electrons excite unphysical high-frequency electromagnetic fields with large amplitude [90] when the Yee solver is used. These unphysical fields then modulate the longitudinal phase space of the beam and lead to a much larger slice energy spread (several hundred MeV). The beam in Simulation Yee/coarse has a large emittance (\(>1\) \(\upmu\)m) and thus a large spot size. Since the electrons are not tightly focused, their density is much lower than in Simulations Xu/fine and Yee/fine, and they excite numerical fields with lower amplitude than in Simulation Yee/fine. Thus the energy modulation is not as severe as in Simulation Yee/fine and the slice energy spread is several tens of MeV. The total charge of the injected beams in all three simulations is very similar, but the current profiles in Simulations Yee/fine and Yee/coarse are modulated as a result of the slippage between the electrons with unphysical energy modulation.
As shown in Fig. 8, the simulation resolution and field solver can significantly impact the quality of the injected beams due to the subtle self-interactions of the high-density particles. However, we want to stress that quantities that depend on physics at larger spatial scales are not sensitive to the choices of the grid sizes and the solver, such as the evolution of the laser driver and the average energy of the injected beam shown in Fig. 9(a). We also compare the slice mean energy of the injected beams in these simulations in Fig. 9(b). When the duration of the slice is chosen as \(\frac{1}{16}k_{\mathrm{p0}}^{-1}\), which is longer than the wavelength (\(\sim 0.03k_{\mathrm{p0}}^{-1}\)) of the unphysical energy modulation, the slice mean energy has similar profiles in the three simulations. This indicates that simulations with low resolutions and the Yee solver can be used to predict the coarse energy space of the injected beam, and they may be recommended in some situations due to their much lower computational cost. Machine learning algorithms may be combined with these low-cost simulations to predict these properties of the injected electrons.
## Appendix B Simulation setup
For the simulations shown in Sec. II, we use a moving window propagating at the speed of light in vacuum \(c\) with a box size of \(14k_{\mathrm{p0}}^{-1}\times 12k_{\mathrm{p0}}^{-1}\) and \(7168\times 6144\) cells along the \(z\) and \(r\) directions, respectively. The grid sizes are chosen as \(\mathrm{d}z=\mathrm{d}r=\frac{1}{512}k_{\mathrm{p0}}^{-1}\approx 7.35\) nm to resolve the tightly focused injected electrons, while the time step is \(\mathrm{d}t=\frac{1}{1024}\omega_{\mathrm{p0}}^{-1}\approx 12.2\) as to satisfy the CFL condition. To represent the plasma electrons, we use 8 macro-particles per cell (ppc), distributed at one \(r-z\) location
and 8 different values of the azimuthal angle \(\theta\). All physical quantities in the Q3D code [87; 132] are decomposed into azimuthal modes \(\exp(im\theta)\), where \(\theta\) is the azimuthal angle. Two modes with \(m=0\) and 1 are included in the simulations shown here to describe the linearly polarized laser pulses. The Xu solver [90] is used to eliminate the numerical Cherenkov radiation [97; 98; 99] and the numerical space charge field [90] from the high-density relativistic injected electrons and to model the injection and acceleration with high fidelity. The laser's electric field has a symmetric temporal profile of \(10\tau^{3}-15\tau^{4}+6\tau^{5}\), where \(\tau=\frac{\sqrt{2}(t-t_{0})}{\tau_{\rm FWHM}}\), while the radius of the plasma column is 41.4 \(\upmu\)m (\(11k_{\rm p0}^{-1}\)).
The simulation in Fig. 5 uses a moving window with a box size of \(16\times 16\) (\(c/\omega_{\rm p0}\))\({}^{2}\), whose grid sizes are \(\mathrm{d}z=\mathrm{d}r=\frac{1}{512}k_{\rm p0}^{-1}\approx 3.29\) nm. The time step is \(\mathrm{d}t=\frac{1}{1024}\omega_{\rm p0}^{-1}\approx 5.48\) as, and 8 macro-particles per cell (distributed at 8 different values of \(\theta\) at one \(r-z\) location) are used to represent the plasma electrons. The simulation in Fig. 6 uses a larger moving window with a box size of \(32k_{\rm p0}^{-1}\times 32k_{\rm p0}^{-1}\) to model the larger plasma wave wake excited by the more intense laser pulse. Limited by computational cost, a resolution of \(\mathrm{d}z=\mathrm{d}r=\frac{1}{256}k_{\rm p0}^{-1}\approx 6.58\) nm and a time step \(\mathrm{d}t=\frac{1}{512}\omega_{\rm p0}^{-1}\approx 10.96\) as are used. We again use two modes with \(m=0\) and 1 in the simulations, together with the Xu solver, and 8 macro-particles per cell (distributed at 8 different values of \(\theta\) at one \(r-z\) location) for the plasma electrons. The plasma electrons in both simulations have an initial 0.1 eV temperature. Note \(n_{\rm p0}=2\times 10^{18}\) cm\({}^{-3}\) in Sec. II and \(n_{\rm p0}=10^{19}\) cm\({}^{-3}\) in Sec. III.
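The grid sizes and time steps quoted above in physical units follow directly from these normalized choices; the standalone arithmetic below (a consistency check only, not simulation code) reproduces them from the skin depths at the two densities.

```python
c = 2.998e8  # speed of light [m/s]
# (label, k_p0^-1 in microns, cells per k_p0^-1, steps per omega_p0^-1)
cases = [("Sec. II,  2e18 cm^-3", 3.76, 512, 1024),
         ("Fig. 5,   1e19 cm^-3", 1.68, 512, 1024),
         ("Fig. 6,   1e19 cm^-3", 1.68, 256, 512)]
for label, kp_inv_um, nz, nt in cases:
    dz_nm = kp_inv_um * 1e3 / nz               # dz = k_p0^-1 / nz
    dt_as = kp_inv_um * 1e-6 / c / nt * 1e18   # dt = omega_p0^-1 / nt
    print(f"{label}: dz = {dz_nm:.2f} nm, dt = {dt_as:.2f} as")
# -> dz ~ 7.3, 3.3, 6.6 nm and dt ~ 12.2, 5.5, 10.9 as, matching the
#    quoted values to within rounding of k_p0^-1.
```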
## Appendix C Initial azimuthal distribution of the injected electrons
When the plasma density is much lower than the critical density of the laser driver and the laser pulse consists of many cycles, the ponderomotive approximation is valid and the plasma wake is nearly axisymmetric. As a result, the initial positions of the injected electrons are distributed uniformly along the azimuthal direction. As the plasma density increases, a shorter laser pulse driver is needed to excite the wake effectively; thus the oscillatory laser electric field starts to play a role, and the injection becomes azimuthally non-uniform, which leads to a transversely asymmetric beam, i.e., a beam with different spot sizes and emittances along the two transverse directions.
The initial azimuthal angles \(\theta_{i}\) of the injected electrons for different cases are shown in Fig. 10. When \(n_{\rm p0}=2\times 10^{18}\) cm\({}^{-3}\), there are electrons injected from all angles for both the \(g=0\) and \(g=0.001\) cases. Although their distribution over the initial angle is not uniform, i.e., more electrons originate from \(\theta_{i}=\pm\frac{\pi}{2}\) [Fig. 10(a)-(b)], the distribution is approximately symmetric between \(k_{\rm p0}z_{i}\approx 40\) and \(\sim 140\). At the end of the injection (\(k_{\rm p0}z_{\rm i}\gtrsim 140\)), the angular distribution becomes asymmetric, which leads to the off-axis beam tails shown in Fig. 2. The reason behind this asymmetric angular distribution is not yet thoroughly understood. Note that 8 macro-particles are initialized azimuthally with angles \(\left(-\pi,-\frac{3}{4}\pi,-\frac{1}{2}\pi,-\frac{1}{4}\pi,0,\frac{1}{4}\pi,\frac{1}{2}\pi,\frac{3}{4}\pi\right)\) in these Q3D simulations.
When \(n_{\rm p0}=10^{19}\) cm\({}^{-3}\) and a laser pulse driver with fewer cycles is used, only electrons originating from two angles, \(\theta_{i}=\pm\frac{\pi}{2}\), are injected, as shown in Fig. 10(c). Since the angular distribution is still symmetric about the \(x\) and \(y\) axes, the injected beam is characterized by a low emittance. The values of the injection angles depend on the polarization direction of the laser driver, as shown in Fig. 10(c)-(d). When a circularly polarized laser is used, the angular distribution becomes much more uniform. This concentration of initial angles when a linearly polarized laser pulse is used is confirmed both in a Q3D simulation with lower resolution (\(\mathrm{d}z=\mathrm{d}r=\frac{1}{256}k_{\rm p0}^{-1}\)) but more macro-particles per cell [Fig. 10(f)] and in a full-3D simulation with lower resolution (\(\mathrm{d}z=\mathrm{d}x=\mathrm{d}y=\frac{1}{80}k_{\rm p0}^{-1}\)) [Fig. 10(g)].
|
2303.03327 | Tight Bounds for $γ$-Regret via the Decision-Estimation Coefficient | In this work, we give a statistical characterization of the $\gamma$-regret
for arbitrary structured bandit problems, the regret which arises when
comparing against a benchmark that is $\gamma$ times the optimal solution. The
$\gamma$-regret emerges in structured bandit problems over a function class
$\mathcal{F}$ where finding an exact optimum of $f \in \mathcal{F}$ is
intractable. Our characterization is given in terms of the $\gamma$-DEC, a
statistical complexity parameter for the class $\mathcal{F}$, which is a
modification of the constrained Decision-Estimation Coefficient (DEC) of Foster
et al., 2023 (and closely related to the original offset DEC of Foster et al.,
2021). Our lower bound shows that the $\gamma$-DEC is a fundamental limit for
any model class $\mathcal{F}$: for any algorithm, there exists some $f \in
\mathcal{F}$ for which the $\gamma$-regret of that algorithm scales (nearly)
with the $\gamma$-DEC of $\mathcal{F}$. We provide an upper bound showing that
there exists an algorithm attaining a nearly matching $\gamma$-regret. Due to
significant challenges in applying the prior results on the DEC to the
$\gamma$-regret case, both our lower and upper bounds require novel techniques
and a new algorithm. | Margalit Glasgow, Alexander Rakhlin | 2023-03-06T17:54:33Z | http://arxiv.org/abs/2303.03327v2 | # Lower Bounds for \(\gamma\)-Regret via the Decision-Estimation Coefficient
###### Abstract
In this note, we give a new lower bound for the \(\gamma\)-regret in bandit problems, the regret which arises when comparing against a benchmark that is \(\gamma\) times the optimal solution, i.e., \(\mathsf{Reg}_{\gamma}(T)=\sum_{t=1}^{T}\gamma\max_{\pi}f(\pi)-f(\pi_{t})\). The \(\gamma\)-regret arises in structured bandit problems where finding an exact optimum of \(f\) is intractable. Our lower bound is given in terms of a modification of the constrained Decision-Estimation Coefficient (DEC) of Foster et al. (2023) (and closely related to the original offset DEC of Foster et al. (2021)), which we term the \(\gamma\)-DEC. When restricted to the traditional regret setting where \(\gamma=1\), our result removes the logarithmic factors in the lower bound of Foster et al. (2023).
## 1 Introduction
In this note, we study the problem of structured bandits. Formally, given a known class of functions \(\mathcal{F}\) on some domain \(\Pi\), at each round \(t=1,\ldots,T\), the algorithm queries some \(\pi_{t}\in\Pi\) and receives a random reward \(r(\pi_{t})\), where \(\mathbb{E}[r(\pi_{t})]=f^{*}(\pi_{t})\) for some ground truth function \(f^{*}\in\mathcal{F}\subseteq\{f:\Pi\to[0,1]\}\). The traditional goal in structured bandit problems is to minimize the regret
\[\mathsf{Reg}(T):=\sum_{t=1}^{T}\max_{\pi\in\Pi}f^{*}(\pi)-f^{*}(\pi_{t}),\]
which compares the behaviour of the algorithm to the best action for \(f^{*}\). In settings where approximating the maximum of \(f^{*}(\pi)\) to within a factor better than \(\gamma\) is intractable, we consider the notion of \(\gamma\)-regret, originally introduced in Kakade et al. (2007); Streeter and Golovin (2008):
\[\mathsf{Reg}_{\gamma}(T):=\sum_{t=1}^{T}\max_{\pi\in\Pi}\gamma f^{*}(\pi)-f^{ *}(\pi_{t}).\]
This definition of regret compares the behaviour of the algorithm to a benchmark which is a \(\gamma\)-fraction of the optimum, for some \(\gamma\in(0,1]\). A few examples of combinatorial optimization problems where finding an exact optimum is intractable are various settings of submodular optimization, the traveling salesman problem, and clustering. Many such problems have been studied in an online setting with full-information or bandit feedback (Kakade et al., 2007; Streeter and Golovin, 2008; Foster and Rakhlin, 2021; Nie et al., 2022).
While several works have achieved upper bounds for the \(\gamma\)-regret of various combinatorial bandit problems, achieving lower bounds for \(\gamma\)-regret is challenging. Briefly, one reason for this is that at each round, it is possible to achieve _negative_ \(\gamma\)-regret if \(f^{*}(\pi_{t})\geq\gamma\max_{\pi\in\Pi}f^{*}(\pi)\). Thus to prove a lower bound of, say, \(\Delta T\) on the \(\gamma\)-regret, it no longer suffices to show that for, say, \(T/2\) rounds, we have \(\gamma\)-regret greater than \(2\Delta\), which is a standard approach in bandit lower bounds. More generally, lower bounds in terms of the Decision-Estimation Coefficient (DEC) Foster et al. (2021), or the constrained DEC Foster et al. (2023), measures that characterize the regret of interactive decision making problems, are only tight up to constant factors at best. Thus, if we desire to show that the \(\gamma\)-regret is on the order of \(\Delta T\) - which is similar to the traditional regret being \(\left(1-\gamma+\Theta(\Delta)\right)T\) - any lower bound which loses constant factors will only be able
to show a traditional regret of \(\Theta((1-\gamma+\Delta)T)\), which does not translate to any meaningful bound on the \(\gamma\)-regret if \(\Delta=o(1)\).
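To make the difficulty concrete, here is a tiny numeric illustration (with hypothetical values) of how negative per-round \(\gamma\)-regret breaks the usual summing argument:

```python
import numpy as np

gamma, f_opt = 0.9, 1.0                       # benchmark is gamma * f_opt = 0.9
f_played = np.array([0.95, 0.95, 0.5, 0.5])   # mean rewards of played actions
per_round = gamma * f_opt - f_played
print(per_round)        # [-0.05 -0.05  0.4   0.4 ]: rounds 1-2 are *negative*
print(per_round.sum())  # 0.7: half the rounds having gamma-regret >= 0.4 does
                        # not by itself lower bound the total, since good
                        # rounds can partially cancel it.
```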
In this note, we prove a new lower bound for the regret in terms of a variant of the constrained Decision-Estimation Coefficient (recently introduced in Foster et al. (2023)), which we call the \(\gamma\)-DEC (and denote \(\mathsf{dec}^{\gamma}\)). We formally define this quantity in Definition 1. Our lower bound strengthens the lower bound of Foster et al. (2023), and is tight enough that it can provide meaningful lower bounds on the \(\gamma\)-regret. Informally, our main result, Theorem 1, lower bounds the regret by the value of the \(\mathsf{dec}^{\gamma}\) (with exact constant \(1\)) up to a small _additive_ term that depends on a certain localization parameter of the model class. Even in the absence of localization, for the traditional exact regret setting, our result yields an improvement upon the lower bound of \(\mathsf{dec}^{1}/\Theta(\log(T))\) of Foster et al. (2023) by a logarithmic factor. More importantly, the sharp nature of the \(\mathsf{dec}^{\gamma}\) lower bound allows us to establish the corresponding lower bounds on \(\gamma\)-regret. The proof of Theorem 1 involves a new technique of lower bounding the regret by considering the behavior of the algorithm up to various stopping times \(\tau\).
Finally, we give an example of a bandit problem over the action space \(\mathbb{R}^{d}\) for which finding an exact optimum in \(T\leq\exp(\Theta(d))\) steps is impossible, but finding a \(\gamma\)-approximate-maximum is possible. We show that Theorem 1 can yield a tight lower bound for the \(\gamma\)-regret of this bandit problem.
In the setting of exact regret \(\gamma=1\), the constrained DEC is known to be a fundamental limit for the exact regret of interactive decision making, in the sense that it yields the exact regret up to logarithmic factors. While our result shows that in the \(\gamma\)-regret setting the \(\gamma\)-DEC yields a meaningful lower bound on the regret, our work leaves open the question of whether it can lead to upper bounds for \(\gamma\neq 1\).
## 2 Preliminaries and Theorem Statements
### Bandits and the Decision-Estimation Coefficient.
We study structured bandit problems with action space \(\Pi\) and a set of models \(\mathcal{M}\). Each model \(M\in\mathcal{M}\) is a probability kernel \(M:\Pi\to\Delta(\mathbb{R})\) where \(\Delta(\mathbb{R})\) is a set of distributions on \(\mathbb{R}\). The set of models induces a class of mean reward functions \(\mathcal{F}=\{f_{M}\}_{M\in\mathcal{M}}\) where \(f_{M}(\pi)=\mathbb{E}^{M}[r|\pi]=\mathbb{E}_{r\sim M(\pi)}[r]\) and \(\mathbb{E}^{M}\) denotes the expectation under the model \(M\). We also use \(\mathbb{P}^{M}\) to denote the probability operator under the model \(M\). For a model \(M\in\mathcal{M}\), we define \(\pi_{M}:=\arg\max_{\pi\in\Pi}f_{M}(\pi)\). We let \(M^{*}\in\mathcal{M}\) denote the true model, and the corresponding mean reward function by \(f^{*}=f_{M^{*}}\). Throughout, we assume \(f_{M}(\pi)\in[0,1]\) for all \(\pi\in\Pi,M\in\mathcal{M}\).
At each round \(t=1,\ldots,T\), the algorithm chooses an action \(\pi_{t}\in\Pi\). The algorithm then receives a reward \(r_{t}\), where \(\mathbb{E}[r_{t}|\pi_{t}]=f^{*}(\pi_{t})\), and the variance of \(r_{t}\) is at most \(1\). Formally, we define the history \(\mathcal{H}_{t}:=\{(\pi_{i},r_{i})\}_{i=1}^{t}\), and we let \(\{p_{t}\}_{t=1}^{T}\) be any randomized algorithm which maps histories to distributions over \(\Pi\), that is, \(\pi_{t}\sim p_{t}(\mathcal{H}_{t-1})\).
We now define the \(\gamma\)_-Decision-Estimation Coefficient_ (DEC), which generalizes\({}^{1}\) the constrained DEC, recently introduced in Foster et al. (2023).
Footnote 1: See Remark 2; since we use a squared error to \(f_{\widehat{M}}\) constraint instead of a squared Hellinger distance constraint, the definition stated here generalizes theirs only for the bandit setting with Gaussian noise.
**Definition 1** (\(\gamma\)-DEC).: _For a model class \(\mathcal{M}\) and a reference model \(\widehat{M}\), for any \(\gamma\in(0,1]\) and \(\epsilon\in[0,1]\), define_
\[\mathsf{dec}^{\gamma}_{\epsilon}(\mathcal{M},\widehat{M}):=\min_{p\in\Delta( \Pi)}\max_{M\in\mathcal{M}}\big{\{}\mathbb{E}_{\pi\sim p}\left[\gamma f_{M}( \pi_{M})-f_{M}(\pi)\right]\mid\mathbb{E}_{\pi\sim p}(f_{M}(\pi)-f_{\widehat{M }}(\pi))^{2}\leq\epsilon^{2}\big{\}}. \tag{2.1}\]
_Define_
\[\mathsf{dec}^{\gamma}_{\epsilon}(\mathcal{M}):=\sup_{\widehat{M}}\mathsf{dec}^ {\gamma}_{\epsilon}(\mathcal{M}\cup\{\widehat{M}\},\widehat{M}), \tag{2.2}\]
_where the supremum is over any probability kernel \(\widehat{M}:\Pi\to\Delta(\mathbb{R})\), not necessarily in the model class \(\mathcal{M}\)._
The \(\gamma\)-DEC quantifies the best possible instantaneous \(\gamma\)-regret attained by a distribution \(p\) over \(\Pi\) under the worst model \(M\) that is close in the sense of expected squared error to \(f_{\widehat{M}}\). This quantity captures the
exploration-exploitation trade-off of \(\mathcal{M}\): if there exists some distribution \(p\) under which for each \(M\in\mathcal{M}\cup\{\widehat{M}\}\) either the information gain \((\mathbb{E}[(f_{M}(\pi)-f_{\widehat{M}}(\pi))^{2}])\) is large, or the \(\gamma\)-regret \((\mathbb{E}[\gamma f_{M}(\pi_{M})-f_{M}(\pi)])\) is small, then \(\mathsf{dec}_{\epsilon}^{\gamma}\) will be small. The work of Foster et al. (2023) showed that the quantity \(\mathsf{dec}_{\epsilon}^{1}\) is a fundamental limit for the exact regret of interactive decision making, in the sense that \(\frac{T\mathsf{dec}_{\epsilon}^{1}}{\Theta(\log(T))}\leq\mathbb{E}[\mathsf{ Reg}(T)]\leq\Theta\left(T\mathsf{dec}_{\epsilon}^{1}\log(|\mathcal{M}|)\right)\), under mild conditions.
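For intuition, the \(\gamma\)-DEC of a small finite class can be estimated numerically. The sketch below (an illustrative brute force under the assumption of a finite action set and model class with a fixed reference \(f_{\widehat{M}}\); the paper itself works with abstract model classes) grids the simplex of distributions \(p\) and evaluates (2.1) directly, with \(\widehat{M}\) appended to the class as in (2.2).

```python
import itertools
import numpy as np

def gamma_dec(F, f_hat, gamma, eps, n_grid=40):
    """Brute-force estimate of dec^gamma_eps(M ∪ {Mhat}, Mhat) in Eq. (2.1):
    F is a (num_models, num_actions) array of mean rewards f_M, and f_hat is
    the reference mean-reward vector.  Only suitable for tiny examples."""
    F = np.vstack([F, f_hat])                  # include Mhat, as in Eq. (2.2)
    opt = gamma * F.max(axis=1)                # gamma * f_M(pi_M) per model
    sq_err = (F - f_hat) ** 2                  # (f_M - f_Mhat)^2 per action
    best = np.inf
    for w in itertools.product(range(n_grid + 1), repeat=F.shape[1]):
        if sum(w) != n_grid:
            continue                           # enumerate p on a simplex grid
        p = np.array(w) / n_grid
        feasible = sq_err @ p <= eps ** 2      # E_p[(f_M - f_Mhat)^2] <= eps^2
        value = (opt - F @ p)[feasible].max()  # worst feasible gamma-regret
        best = min(best, value)                # min over p of max over M
    return best

# Two-armed toy class: each model boosts one arm by delta above f_hat = 0.5.
delta = 0.2
F = np.array([[0.5 + delta, 0.5], [0.5, 0.5 + delta]])
print(gamma_dec(F, f_hat=np.array([0.5, 0.5]), gamma=1.0, eps=0.15))  # ~0.1
```

Note that the \(\widehat{M}\) row always satisfies the constraint, so the feasible set is never empty; in the toy example the minimizing \(p\) splits mass evenly, giving a value of \(\delta/2\).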
**Remark 1**.: The work of Foster et al. (2023) follows the earlier work of Foster et al. (2021), which originally introduced the notion of a decision-estimation coefficient. This original work defined the DEC (or _offset_ DEC) as \(\mathsf{dec}_{\lambda}:=\min_{p}\max_{M}f_{M}(\pi_{M})-f_{M}(\pi)-\lambda \mathbb{E}[D_{H}^{2}(M(\pi),\widehat{M}(\pi))]\), where \(D_{H}\) is the Hellinger distance. This offset DEC yielded bounds of the form \(\Theta\left(\inf_{\lambda}(T\mathsf{dec}_{\lambda}+\lambda)\right)\leq \mathbb{E}[\mathsf{Reg}(T)]\leq\Theta\left(\inf_{\lambda}(T\mathsf{dec}_{ \lambda}+\lambda\log(|\mathcal{M}|))\right)\), though under much more stringent conditions on the class \(\mathcal{M}\) for the lower bound.
### Main Result
Our main result, Theorem 1, yields a constant probability lower bound on the regret in terms of the \(\gamma\)-DEC. In order for this theorem to be meaningful in the \(\gamma\)-regret setting, we will leverage the following localization property, which states that all models in \(\mathcal{M}\) have similar maxima.
**Definition 2**.: _A model class \(\mathcal{M}\) is \(\rho\)-localized if \(\max_{M\in\mathcal{M}}f_{M}(\pi_{M})-\min_{M\in\mathcal{M}}f_{M}(\pi_{M})\leq\rho\)._
We note that the \(\gamma\)-regret is monotonically decreasing with respect to taking smaller subsets of a model class. Thus, for the purposes of achieving a better lower bound for a model class with a poor localization parameter, one may wish to consider the \(\gamma\)-DEC of a more localized subset of the model class.
Without loss of generality, we will assume throughout that in our model class \(\mathcal{M}\), we have \(\sup_{M\in\mathcal{M}}f_{M}(\pi_{M})=1\).
**Theorem 1**.: _Let \(C\geq 2\) be any integer. Let \(\frac{4}{T}\leq\epsilon\leq\frac{1}{3C\sqrt{T}}\). Let \(\Delta:=\mathsf{dec}_{\epsilon}^{\gamma}(\mathcal{M})\). Suppose \(\mathcal{M}\) is \(\rho\)-localized for some \(\rho\geq 6C^{2}\epsilon\). Assume also that \(\Delta\geq 6C^{2}\epsilon\). Then for any algorithm, for some \(M\in\mathcal{M}\),_
\[\mathbb{P}^{M}\left[\frac{1}{T}\mathsf{Reg}_{\gamma}(T)\geq\Delta-\frac{3\min( \rho,\Delta+1-\gamma)}{C}\right]\geq\frac{1}{3C}.\]
_Here the probabilities are over the randomness of the algorithm and variance-1 Gaussian noise in the rewards._
**Remark 2**.: For simplicity, we stated Theorem 1 for bandits with Gaussian noise, but our lower bound can be extended to the more general setting of Decision Making with Structured Observations (see Foster et al. (2021, 2023); this also encompasses reinforcement learning and MDPs). If the density ratios \(\frac{M(\pi)}{\widehat{M}(\pi)}\) are bounded, then we achieve the same result as Theorem 1 if we define the \(\gamma\)-DEC using a constraint on the squared Hellinger distance, \(\mathbb{E}_{\pi\sim p}D_{H}^{2}(\widehat{M}(\pi),M(\pi))\leq\epsilon^{2}\), instead of the squared error to \(f_{\widehat{M}}\) constraint. If the density ratios are unbounded, the same result as Theorem 1 holds, using the \(\gamma\)-DEC with the squared Hellinger distance constraint, under the slightly stronger condition that \(\epsilon\leq\frac{1}{\sqrt{T}\log(T)}\). (See Foster et al. (2021, Lemma A.13).)
**Remark 3**.: If we invoke Theorem 1 with \(\rho=1\) and \(\gamma=1\) (that is, without any localization assumption on \(\mathcal{M}\), and with the constrained DEC as defined in Foster et al. (2023)), and choose \(C=6\), we achieve that for some \(M\in\mathcal{M}\), \(\mathbb{P}\left[\mathsf{Reg}(T)\geq\frac{\Delta}{2}\right]\geq\frac{1}{18}\), implying that \(\mathbb{E}\left[\mathsf{Reg}(T)\right]\geq\frac{\Delta}{36}\). This improves upon the lower bound in Foster et al. (2023, Theorem 2.2), which yields \(\mathbb{E}\left[\mathsf{Reg}(T)\right]\geq\frac{\Delta}{\Theta(\log(T))}\), by removing the log factors.
## 3 Sketch of Proof of Theorem 1
We present here a sketch of the proof of Theorem 1. For simplicity, in this section we consider the case when \(\gamma=1\), since the main idea of our technique is clearer in this setting.
Fix an algorithm \(p_{t}\), and for any model \(M\), define \(g_{M}(\pi):=f_{M}(\pi_{M})-f_{M}(\pi)\). Let \(\widehat{M}\) be the model attaining the constrained DEC in (2.2); that is, for any distribution \(p\) on \(\Pi\), there is some model \(M\in\mathcal{M}\cup\{\widehat{M}\}\) satisfying
\[\mathbb{E}_{\pi\sim p}[g_{M}(\pi)]\geq\Delta, \tag{3.3}\]
\[\mathbb{E}_{\pi\sim p}[(f_{M}(\pi)-f_{\widehat{M}}(\pi))^{2}]\leq\epsilon^{2}.\]
A typical approach (see, e.g., Foster et al. (2021, 2023)) is to invoke the lower bound (3.3) on the distribution \(p=p_{\widehat{M}}\), defined to be the average distribution of actions played over all \(T\) steps by the algorithm \(p_{t}\) under the ground truth \(\widehat{M}\). However, this approach results in the loss of constant factors, and only suffices if \(\widehat{M}\) is contained in the model class \(\mathcal{M}\). To mitigate these issues, we will find a different distribution on which to invoke the DEC lower bound, one which we know will result in \(M\) not being \(\widehat{M}\) (and thus necessarily \(M\in\mathcal{M}\)). To construct such a distribution, we consider the algorithm \(\overline{p}_{t}\), which plays algorithm \(p_{t}\) until a _stopping time_ \(\tau\), and then plays \(\pi_{\widehat{M}}\) for the remaining \(T-\tau\) steps. Assuming the stopping time \(\tau\) occurs before the algorithm has accrued \(\Delta T\) regret under \(\widehat{M}\), it will be impossible for \(\widehat{M}\) to attain the DEC. Thus our goal will be to show that when we invoke the DEC on \(\overline{p}_{\widehat{M}}\) (the distribution of \(\overline{p}_{t}\) averaged over all \(T\) rounds), the model \(M\) maximizing the DEC attains, with constant probability, a regret close to \(\Delta\).
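The following sketch makes the stopped algorithm \(\overline{p}_{t}\) concrete, using the threshold rule defined later in this section (stop the first time the cumulative regret under \(\widehat{M}\) reaches \(aT\)); the interfaces `base_alg`, `g_hat`, and `sample_reward` are hypothetical stand-ins, not notation from the paper.

```python
def stopped_rollout(base_alg, g_hat, pi_hat, a, T, sample_reward):
    """Follow the original algorithm p_t until the stopping time tau -- the
    first step at which the cumulative regret measured under the reference
    model Mhat reaches a*T -- then commit to Mhat's optimal action pi_hat.
    Here g_hat(pi) = f_Mhat(pi_Mhat) - f_Mhat(pi)."""
    history, cum_g_hat, stopped = [], 0.0, False
    for t in range(T):
        pi = pi_hat if stopped else base_alg(history)
        history.append((pi, sample_reward(pi)))
        cum_g_hat += g_hat(pi)
        if cum_g_hat >= a * T:      # stopping time tau has been reached
            stopped = True
    return history
```

Once the algorithm commits to \(\pi_{\widehat{M}}\), each remaining step contributes \(g_{\widehat{M}}(\pi_{\widehat{M}})=0\), so the cumulative regret under \(\widehat{M}\) freezes just above \(aT\).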
For any model \(\tilde{M}\), recall that \(\mathbb{P}^{\tilde{M}}\) denotes the probability operator under the ground truth \(\tilde{M}\) and algorithm \(p_{t}\). Similarly, define \(\overline{\mathbb{P}}^{\tilde{M}}\) to be the probability operator under the ground truth \(\tilde{M}\) and algorithm \(\overline{p}_{t}\). Define \(\mathbb{E}^{\tilde{M}}\) and \(\overline{\mathbb{E}}^{\tilde{M}}\) to be the respective expectation operators. Since the cumulative regret is nondecreasing in \(t\) (each \(g_{M}(\pi_{t})\geq 0\) when \(\gamma=1\)), if for some \(\mu\) close to \(\Delta\) we have
\[\overline{\mathbb{P}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau}g_{M}(\pi_{t})\geq\mu\right]\geq\Omega(1), \tag{3.4}\]
then it will follow that
\[\mathbb{P}^{M}\left[\frac{1}{T}\mathsf{Reg}(T)\geq\mu\right] =\mathbb{P}^{M}\left[\frac{1}{T}\sum_{t=1}^{T}g_{M}(\pi_{t})\geq \mu\right]\] \[\geq\overline{\mathbb{P}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau}g_{M}(\pi_{t})\geq\mu\right]-D_{TV}(\overline{\mathbb{P}}^{\widehat{M}},\overline{\mathbb{P}}^{M}).\]
Crucially, observe that it suffices to bound the total variation distance between \(\overline{\mathbb{P}}^{\widehat{M}}\) and \(\overline{\mathbb{P}}^{M}\) (as opposed to between \(\overline{\mathbb{P}}^{\widehat{M}}\) and \(\mathbb{P}^{M}\)), because the events in question are determined only by events up to time \(\tau\), and \(\overline{\mathbb{P}}^{M}\) and \(\mathbb{P}^{M}\) are identical up to time \(\tau\). By a standard argument (e.g., using Pinsker's inequality), one can use the squared error constraint of the DEC to bound the total variation distance \(D_{TV}(\overline{\mathbb{P}}^{\widehat{M}},\overline{\mathbb{P}}^{M})\) by an arbitrarily small constant. Thus it suffices to show Equation (3.4) for the desired value \(\mu\).
To prove Equation (3.4), we will define the stopping time \(\tau\) to be the first step \(t\) at which \(\sum_{s=1}^{t}g_{\widehat{M}}(\pi_{s})\geq aT\), for some \(a<\Delta\). Let \(M\) be the model in (2.1) that maximizes the DEC for \(p=\overline{p}_{\widehat{M}}\). The key observation is that
\[\frac{1}{T}\sum_{t=1}^{T}g_{M}(\pi_{t})\approx\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})+f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}}). \tag{3.5}\]
Such an equivalence holds (with high probability) because the squared error constraint in the definition of the DEC ensures that most of the time, \(\sum_{t}|f_{\widehat{M}}(\pi_{t})-f_{M}(\pi_{t})|\) is small. We obtain two observations from this equation. First, on the event that \(\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\in[a^{\prime},a)\), we have \(\tau=T\), and thus
\[\frac{1}{T}\sum_{t=1}^{\tau}g_{M}(\pi_{t})\gtrapprox a^{\prime}+f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}}).\]
Secondly, we can show that the gap \(f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}})\) is large, in particular, nearly as large as \(\Delta-a\). This
holds because, taking the expectation of Equation (3.5) under \(\overline{\mathbb{E}}^{\widehat{M}}\), we have
\[\overline{\mathbb{E}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{M}(\pi_{t})\right] \lessapprox\overline{\mathbb{E}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\right]+f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}})\] \[\lessapprox a+f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}}).\]
Indeed, by definition of the algorithm \(\overline{p}\), \(\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\) never exceeds \(a\) by more than \(\frac{1}{T}\) (after the stopping time, each step contributes \(g_{\widehat{M}}(\pi_{\widehat{M}})=0\)). Thus, since by definition of the DEC we have \(\overline{\mathbb{E}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{M}(\pi_{t})\right]=\mathbb{E}_{\pi\sim\overline{p}_{\widehat{M}}}[g_{M}(\pi)]\geq\Delta\), it follows that the gap satisfies \(f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}})\gtrapprox\Delta-a\).
Combining these observations, we have that for any \(0\leq a^{\prime}<a<\Delta\),
\[\overline{\mathbb{P}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau}g_{M}(\pi_{t})\gtrapprox\Delta-a+a^{\prime}\right]\geq\mathbb{P}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\in[a^{\prime},a)\right]. \tag{3.6}\]
It remains to choose the values of \(a\) and \(a^{\prime}\) to which to apply this approach. If \(a-a^{\prime}\) is small, then the regret guarantee will be good, but the probability on the right hand side may be small. Fortunately, however, since \(\mathbb{P}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\in[0,1]\right]=1\), we can divide the interval \([0,1]\) into a constant number of shorter intervals \(I_{i}\), and claim that the probability \(\mathbb{P}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\in I_{i}\right]\) is \(\Omega(1)\) for one of them. Formally, for some integer \(C\), let
1. \(I_{i}=[a^{\prime}_{i},a_{i}):=\left[\frac{i-1}{C}\min(\Delta,\rho),\frac{i}{C} \min(\Delta,\rho)\right)\) for \(i=1,2,\ldots,C-1\).
2. \(I_{C}:=\left[\frac{C-1}{C}\min(\Delta,\rho),1\right]\)
For any of the first \(C-1\) intervals, we have \(\Delta+a^{\prime}_{i}-a_{i}\geq\Delta-\frac{\min(\Delta,\rho)}{C}\); thus Equation (3.6) yields the desired regret with probability \(\mathbb{P}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\in I_{i}\right]\) for some model \(M\) (which may depend on \(a_{i}\)). To handle the final interval, we will need to prove a modification of Equation (3.6), which shows that for any \(0\leq a^{\prime}<a<\Delta\),
\[\overline{\mathbb{P}}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau}g_{M}(\pi_{t})\gtrapprox\min\left(\Delta+a^{\prime}-\min(\rho,\Delta),a\right)\right]\geq\mathbb{P}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\geq a^{\prime}\right].\]
Thus by choosing \(a\approx\Delta\) and \(a^{\prime}=\frac{C-1}{C}\min(\rho,\Delta)\), this shows that we achieve the desired regret with probability \(\mathbb{P}^{\widehat{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widehat{M}}(\pi_{t})\in I_{C}\right]\) for some model \(M\).
This modification is again derived from Equation (3.5), but it requires an additional argument using the localization assumption that \(f_{M}(\pi_{M})\geq 1-\rho\), to obtain that \(f_{M}(\pi_{M})-f_{\widehat{M}}(\pi_{\widehat{M}})\gtrapprox\Delta-\rho\). We omit the details.
The full proof of Theorem 1 appears in Section 5. In Appendix A we give a detailed comparison of our technique to that of Foster et al. (2023).
## 4 Example
In this section, we give an example of a bandit problem for which, whenever \(T\) is subexponential in the dimension of the action space, we can prove nearly matching upper and lower bounds on the \(\gamma\)-regret. This example is constructed by linearly combining two problems in the proportions \(\gamma\) and \(1-\gamma\): a standard \(K\)-armed multi-armed bandit (MAB), and a bandit problem where the feedback is given by a shifted ReLU function of \(w^{T}\pi\), where \(w\in\mathbb{R}^{d}\) is the ground truth and \(\pi\in\mathbb{R}^{d}\) is the action. While the \(K\)-MAB problem is easy to solve with \(\sqrt{KT}\) regret, the shifted ReLU problem cannot be solved in fewer than exponentially (in \(d\)) many steps, and thus its regret is \(\exp(\Theta(d))\sqrt{T}\). If we constrain \(T\) to be less than \(\exp(\Theta(d))\), then we will see that the \(\gamma\)-regret of the example is equal to the regret of the \(K\)-MAB problem.
While this example is artificial, a similar phenomenon may occur in more natural settings where some part of the function is easy to maximize and another part is not. For example, Rajaraman et al. (2023) studies a setting where there is an initial "burn-in" phase where the regret is large, and then a second learning phase where the regret grows less quickly as a function of \(T\). This parallels our setting, where initially we can only achieve sublinear regret against a \(\gamma\)-benchmark, but once \(T\) becomes exponentially large, we can achieve sublinear (exact) regret.
**Example 1**.: For any \(d,K\), consider the following model class \(\mathcal{M}\) with the action set \(\Pi:=\left(\{-1,1\}^{d}\cup\{\bot\}\right)\times[K]\), whose elements we write as pairs \((\pi^{\prime},\pi)\). Let \(\mathcal{M}\) be parameterized by \(\{(w,v)\in\{-1,1\}^{d}\times[0,1]^{K}\}\), and define
\[f_{(w,v)}(\pi^{\prime},\pi)=(1-\gamma)h_{w}(\pi^{\prime})+\gamma g_{v}(\pi),\]
where \(g_{v}(\pi)=v_{\pi}\) and
\[h_{w}(\pi^{\prime})=\begin{cases}\frac{2}{d}\max\left(0,\langle\pi^{\prime},w \rangle-\frac{d}{2}\right)&\pi^{\prime}\neq\bot\\ 0&\pi^{\prime}=\bot.\end{cases}\]
It is straightforward to see that the function \(h_{w}\) is hard to maximize in fewer than \(\exp(\Theta(d))\) queries, since finding any \(\pi^{\prime}\in\{-1,1\}^{d}\) with \(\langle\pi^{\prime},w\rangle>\frac{d}{2}\) with high probability requires \(\exp(\Theta(d))\) queries. However, it is possible to maximize \(g_{v}\) by any standard MAB algorithm. Thus if we find \(\operatorname{argmax}v\) but not \(w\), we can achieve \(\gamma\) of the optimum, which makes \(\gamma\) a natural approximation ratio for this model class.
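For concreteness, here is a minimal Python sketch of the mean-reward function in Example 1; the function and variable names are ours, the \(\bot\) action is encoded as `None`, and arms are 0-indexed.

```
import numpy as np

def mean_reward(w, v, pi_prime, pi, gamma):
    """Mean reward f_{(w,v)}(pi', pi) from Example 1.

    w:        ground-truth vector in {-1,1}^d (hard, shifted-ReLU part).
    v:        arm means in [0,1]^K (easy, K-armed MAB part).
    pi_prime: a vector in {-1,1}^d, or None for the bottom action.
    pi:       an arm index in {0, ..., K-1}.
    """
    d = len(w)
    if pi_prime is None:
        h = 0.0
    else:
        # Shifted ReLU of <pi', w>, rescaled so its maximum (at pi' = w) is 1.
        h = (2.0 / d) * max(0.0, float(np.dot(pi_prime, w)) - d / 2)
    g = v[pi]
    return (1 - gamma) * h + gamma * g
```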
We proceed to prove upper and lower bounds on the \(\gamma\)-regret for this model class. The bounds are matching up to \(\log(K)\) factors.
**Proposition 1** (Upper bound for Example 1).: _For some algorithm, for any \(T\geq 1\), we have \(\mathsf{Reg}_{\gamma}(T)\leq 6\sqrt{KT\log(K)}\) with probability at least \(1-\frac{1}{K}\)._
**Proof.** Let \(\{p_{t}\}\) be any MAB algorithm which achieves a regret of at most \(6\sqrt{KT\log(K)}\) with probability at least \(1-\frac{1}{K}\). Such an algorithm exists; see, e.g., the EXP3.P algorithm in Bubeck et al. (2012, Theorem 3.3).
Consider the following algorithm, which at round \(t\) chooses \(\pi_{t}\sim p_{t}\) and \(\pi^{\prime}_{t}=\bot\). Since the algorithm always chooses \(\bot\), there is no feedback from the \(h_{w}\) part of the function, and the \(\gamma\)-regret at each step is exactly \(\gamma\) times the regret of the MAB algorithm on the function class given by \(\{g_{v}\}_{v\in[0,1]^{K}}\). This yields the proposition. \(\blacksquare\)
**Proposition 2** (Lower Bound for Example 1).: _For any \(T\leq\exp(d/8)\), there exists some model class \(\tilde{\mathcal{M}}\subset\mathcal{M}\), such that \(\mathsf{dec}_{\epsilon}^{\gamma}(\tilde{\mathcal{M}})\geq\frac{\gamma}{2\sqrt{20}} \sqrt{\frac{K}{T}}\) for \(\epsilon=\frac{1}{3\sqrt{T}}\)._
_Further, for \(K\geq 6000/\gamma^{2}\), \(\tilde{\mathcal{M}}\) is \(6\epsilon\)-localized, and thus by Theorem 1, for some \(M\in\mathcal{M}\), with probability at least \(\frac{1}{3}\), we have \(\mathsf{Reg}_{\gamma}(T)\geq\frac{\gamma}{10}\sqrt{KT}\)._
**Proof.** First we define the subset \(\tilde{\mathcal{M}}\subset\mathcal{M}\) to be the set of all models indexed by \(w,v\) such that \(v\) has exactly one coordinate equal to \(1\), and the remaining coordinates all equal \(1-\sqrt{\frac{K}{20T}}\). For \(k\in[K]\), we will use the shorthand \(f_{w,k}\) to denote the function \(f_{w,v}\), where \(v\) is the vector with a \(1\) in coordinate \(k\), and \(1-\sqrt{\frac{K}{20T}}\) in all other coordinates.
Define \(\widehat{f}(\pi^{\prime},\pi):=\gamma\left(1-\sqrt{\frac{K}{20T}}\right)\), and let \(\widehat{M}\) be the corresponding model \(\widehat{M}(\pi^{\prime},\pi)=\mathcal{N}(\widehat{f}(\pi^{\prime},\pi),1)\). Fix any distribution \(p\) on \(\Pi\). Let
\[\tilde{w}:=\operatorname{argmin}_{w\in\{-1,1\}^{d}}\mathbb{E}_{(\pi^{\prime}, \pi)\sim p}\left[\mathbf{1}\left(\langle w,\pi^{\prime}\rangle\geq\frac{d}{2} \right)\right],\]
and let
\[\tilde{k}:=\operatorname{argmin}_{k\in[K]}\mathbb{E}_{(\pi^{\prime},\pi)\sim p }\left[\mathbf{1}(\pi=k)\right].\]
Observe that
\[\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\mathbf{1}(\pi=\tilde{k})\right] \leq\mathbb{E}_{k\sim\mathrm{Uniform}[K]}\mathbb{E}_{(\pi^{\prime },\pi)\sim p}\left[\mathbf{1}(\pi=k)\right]\] \[=\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\mathbb{E}_{k\sim\mathrm{ Uniform}([K])}\left[\mathbf{1}(\pi=k)\right]\] \[=\frac{1}{K}.\]
Similarly,
\[\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\mathbf{1}\left(\langle\tilde{w},\pi^{\prime}\rangle>d/2\right)\right] \leq\mathbb{E}_{w\sim\mathrm{Uniform}(\{-1,1\}^{d})}\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\mathbf{1}\left(\langle w,\pi^{\prime}\rangle>d/2\right)\right]\] \[=\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\mathbb{E}_{w\sim\mathrm{Uniform}(\{-1,1\}^{d})}\left[\mathbf{1}\left(\langle w,\pi^{\prime}\rangle>d/2\right)\right]\] \[\leq\exp(-d/8),\]
where the last line follows from Hoeffding's inequality.
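To spell out the Hoeffding step: for each fixed \(\pi^{\prime}\in\{-1,1\}^{d}\), the inner product \(\langle w,\pi^{\prime}\rangle\) under \(w\sim\mathrm{Uniform}(\{-1,1\}^{d})\) is a sum of \(d\) independent Rademacher variables, so

\[\mathbb{P}_{w}\left[\langle w,\pi^{\prime}\rangle>\frac{d}{2}\right]\leq\exp\left(-\frac{(d/2)^{2}}{2d}\right)=\exp(-d/8).\]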
Let \(M\) be the model with mean \(f_{\tilde{w},\tilde{k}}\). Then we have
\[\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\left(f_{M}(\pi^{\prime},\pi)-f_{\widehat{M}}(\pi^{\prime},\pi)\right)^{2}\right] \leq 2\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\mathbf{1}\left(\langle\tilde{w},\pi^{\prime}\rangle\geq\frac{d}{2}\right)\right]+2\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\left(\sqrt{\frac{K}{20T}}\right)^{2}\mathbf{1}\left(\pi=\tilde{k}\right)\right]\] \[\leq 2\exp(-d/8)+\frac{2}{20T}\leq\epsilon^{2}.\]
Now we consider the \(\gamma\)-regret of \(f_{M}\) under \(p\):
\[\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[\gamma f_{M}^{*}-f_{M}(\pi^{\prime},\pi)\right] \geq\gamma-\mathbb{E}_{(\pi^{\prime},\pi)\sim p}\left[f_{M}(\pi^{\prime},\pi)\right]\] \[\geq\gamma-\mathbb{P}_{(\pi^{\prime},\pi)\sim p}\left[\langle\tilde{w},\pi^{\prime}\rangle>d/2\right]-\gamma\left(1-\sqrt{\frac{K}{20T}}\right)-\gamma\sqrt{\frac{K}{20T}}\,\mathbb{P}_{(\pi^{\prime},\pi)\sim p}\left[\pi=\tilde{k}\right]\] \[\geq-\exp(-d/8)+\gamma\sqrt{\frac{K}{20T}}\left(1-\frac{1}{K}\right)\] \[\geq\frac{\gamma}{2}\sqrt{\frac{K}{20T}},\]
where in the final line we used the fact that \(T\leq\exp(d/8)\).
It follows that \(\mathsf{dec}_{\epsilon}^{\gamma}(\tilde{\mathcal{M}})\geq\frac{\gamma}{2}\sqrt{\frac{K}{20T}}\).
Finally, we check that we can apply Theorem 1 with \(\epsilon=\frac{1}{3\sqrt{T}}\) and \(C=1\). We observe that \(\tilde{\mathcal{M}}\) is \(6\epsilon\)-localized (indeed, all of its maxima are identical), and that \(\mathsf{dec}_{\epsilon}^{\gamma}(\tilde{\mathcal{M}})\geq\frac{\gamma}{2}\sqrt{\frac{K}{20T}}\geq 6\epsilon\) for \(K\geq 6000/\gamma^{2}\). Thus it follows that for some \(M\in\tilde{\mathcal{M}}\), \(\mathbb{P}^{M}\left[\frac{1}{T}\mathsf{Reg}_{\gamma}(T)\geq\frac{\gamma}{2\sqrt{20}}\sqrt{\frac{K}{T}}-18\epsilon\right]\geq\frac{1}{3}\), so for \(K\geq\frac{6000}{\gamma^{2}}\), we have \(\mathbb{P}^{M}\left[\mathsf{Reg}_{\gamma}(T)\geq\frac{\gamma}{10}\sqrt{KT}\right]\geq\frac{1}{3}\).
## 5 Proof of Theorem 1
**Proof of Theorem 1.** Let \(\{p_{t}\}\) be any algorithm, and for any model \(M\), let \(\mathbb{P}^{M}\) and \(\mathbb{E}^{M}\) denote the probability and expectation operators under ground truth \(M\), and algorithm \(\{p_{t}\}\). Further define \(p_{M}:=\mathbb{E}^{M}\left[\frac{1}{T}\sum_{t=1}^{T}p_{t}(\cdot|\mathcal{H}^{t})\right]\).
We define \(f_{M}^{*}:=f_{M}(\pi_{M})\) and \(g_{M}(\pi):=f_{M}^{*}-f_{M}(\pi)\). Although it may seem natural to consider the instantaneous \(\gamma\)-regret \(\gamma f_{M}^{*}-f_{M}(\pi)\), we instead work primarily with the traditional instantaneous regret, \(g_{M}(\pi)\), because this value can never be negative. Throughout, one should think of \(\epsilon\) as being small relative to \(\rho\) and \(\Delta\).
Let \(a_{\max}:=\Delta+(1-\gamma)f_{\widetilde{M}}(\pi_{\widetilde{M}})\), and for \(a\in[0,a_{\max}-\frac{2}{T}]\), consider the following algorithm \(\overline{p}_{t}^{a}\): Play actions according to \(p_{t}\) until some time \(\tau_{a}\) when \(\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})\geq aT\). Then play \(\pi_{\widetilde{M}}\) for the rest of the rounds. Formally, we have
\[\overline{p}_{t}^{a}(\cdot|\mathcal{H}_{t})=\begin{cases}p_{t}(\cdot|\mathcal{ H}_{t})&t\leq\tau_{a}\\ \pi_{\widetilde{M}}&t>\tau_{a},\end{cases}\]
where \(\tau_{a}\) is defined to be the stopping time which is the first value of \(t\) for which \(\sum_{s=1}^{t}g_{\widetilde{M}}(\pi_{s})\geq aT\). If this never occurs, let \(\tau_{a}:=T\).
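As a proof device, the stopped algorithm can be sketched as follows; note that \(g_{\widetilde{M}}\) depends on the reference model and is not observable to a real learner, and the `p.select` and `env.pull` interfaces are our assumptions.

```
def run_stopped_algorithm(p, env, g_tilde, pi_tilde, a, T):
    """Sketch of the stopped algorithm p-bar^a: follow the base algorithm p
    until the cumulative reference regret first reaches a*T (the stopping
    time tau_a), then commit to the reference optimum pi_tilde."""
    history, cum_regret = [], 0.0
    for t in range(T):
        if cum_regret < a * T:            # t <= tau_a: still following p_t
            action = p.select(history)    # hypothetical interface for p_t
        else:                             # t > tau_a: committed phase
            action = pi_tilde
        reward = env.pull(action)         # hypothetical environment call
        history.append((action, reward))
        cum_regret += g_tilde(action)     # g_tilde(pi) = f*_tilde - f_tilde(pi)
    return history
```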
For a model \(M\), let \(\overline{\mathbb{P}}_{a}^{M}\) and \(\overline{\mathbb{E}}_{a}^{M}\) denote the probability and expectation operators under ground truth \(M\), and algorithm \(\overline{p}_{t}^{a}\). At a high level, our proof will show that for some value \(a\), and some model \(M_{a}\), we have
\[\overline{\mathbb{P}}_{a}^{M_{a}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{a}}g_{M_{a }}(\pi_{t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-\frac{3\widehat{\rho}}{C}\right] \geq\frac{1}{3C}, \tag{5.7}\]
where we have defined \(\widehat{\rho}:=\min(\rho,\Delta+1-\gamma)\).
Then by coupling \(\overline{\mathbb{P}}_{a}^{M_{a}}\) and \(\mathbb{P}^{M_{a}}\) for the first \(\tau_{a}\) rounds (indeed, the algorithms \(p_{t}\) and \(\overline{p}_{t}^{a}\) behave the same up to time \(\tau_{a}\)), and observing that \(g_{M_{a}}\) is never negative, we can achieve the desired result, that
\[\mathbb{P}^{M_{a}}\left[\frac{1}{T}\sum_{t=1}^{T}\gamma f_{M_{a}} ^{*}-f_{M_{a}}(\pi_{t})\geq\Delta-\frac{3\widehat{\rho}}{C}\right] =\mathbb{P}^{M_{a}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{M_{a}}(\pi_{ t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-\frac{3\widehat{\rho}}{C}\right]\] \[\geq\mathbb{P}^{M_{a}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{a}}g_{M_ {a}}(\pi_{t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-\frac{3\widehat{\rho}}{C} \right]\geq\frac{1}{3C}.\]
We proceed to prove Equation (5.7). Consider the DEC under \(p=\overline{p}_{\widetilde{M}}^{a}=\overline{\mathbb{E}}^{\widetilde{M}}\left[ \frac{1}{T}\sum_{t=1}^{T}\overline{p}_{t}^{a}(\cdot|\mathcal{H}^{t})\right]\), and let \(M_{a}\) be the corresponding maximizer in (2.1). Observe that we must have \(M_{a}\neq\widetilde{M}\) (and thus \(M_{a}\in\mathcal{M}\)), since \(\mathbb{E}_{\pi\sim\overline{p}_{\widetilde{M}}^{a}}[g_{\widetilde{M}}(\pi)] \leq\frac{1}{T}\left(aT+1\right)<\Delta+(1-\gamma)f_{\widetilde{M}}(\pi_{ \widetilde{M}}).\) Furthermore, we have
\[\overline{\mathbb{E}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{T}(f_{M_ {a}}(\pi_{t})-f_{\widetilde{M}}(\pi_{t}))^{2}\right]\leq\epsilon^{2}. \tag{5.8}\]
Our first claim bounds the total variation distance between \(\overline{\mathbb{P}}_{a}^{M_{a}}\) and \(\overline{\mathbb{P}}_{a}^{\widetilde{M}}\).
**Claim 1**.: \[D_{TV}\left(\overline{\mathbb{P}}_{a}^{\widetilde{M}},\overline{\mathbb{P}}_{a}^ {M_{a}}\right)\leq\frac{1}{3C}.\]
**Proof.** Using Pinsker's inequality, we can bound the total variation distance by the square root of the KL-divergence. We can then bound the KL-divergence by summing up the KL divergence over each round (see eg. Lattimore and Szepesvari (2020), Lemma 15.1). Since we have assumed the noise is Gaussian with variance \(1\), the KL divergence at each round is one half times the distance between \(f_{M_{a}}(\pi_{t})\) and \(f_{\widetilde{M}}(\pi_{t})\) squared. This yields
\[D_{\mathrm{TV}}\left(\overline{\mathbb{P}}_{a}^{\widetilde{M}},\overline{\mathbb{P}}_{a}^{M_{a}}\right) \leq\sqrt{D_{\mathrm{KL}}\left(\overline{\mathbb{P}}_{a}^{\widetilde{M}},\overline{\mathbb{P}}_{a}^{M_{a}}\right)}\] \[\leq\sqrt{\overline{\mathbb{E}}_{a}^{\widetilde{M}}\left[\sum_{t=1}^{T}(f_{M_{a}}(\pi_{t})-f_{\widetilde{M}}(\pi_{t}))^{2}\right]}\] \[\leq\epsilon\sqrt{T}\leq\frac{1}{3C},\]
where the last inequality follows by assumption of the theorem. \(\blacksquare\)
The next claim bounds the \(L_{1}\) distance between \(f_{M_{a}}\) and \(f_{\widetilde{M}}\) under \(\overline{p}_{\widetilde{M}}^{a}\).
**Claim 2**.: _For any \(a\in[0,a_{\text{max}})\),_
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{a}}|f_{M_{a}}(\pi_{t})-f_{\widetilde{M}}(\pi_{t})|\leq 3C\epsilon\right]\geq 1-\frac{1}{3C}.\]
**Proof.** Applying Jensen's inequality to Equation (5.8) yields that
\[\overline{\mathbb{E}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{T}|f_{M_{a}}(\pi_{t})-f_{\widetilde{M}}(\pi_{t})|\right]\leq\epsilon. \tag{5.9}\]
Thus by Markov's inequality we have
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{T}|f_{M_{a}}(\pi_{t})-f_{\widetilde{M}}(\pi_{t})|\geq 3C\epsilon\right]\leq\frac{1}{3C}.\]
The conclusion follows. \(\blacksquare\)
Our next two claims show that there is a large gap between the maximum of \(\widetilde{M}\) and \(\gamma\) times the maxima of other models in \(\mathcal{M}\). We will need to leverage both of these claims separately.
**Claim 3**.: _For any \(a\in[0,a_{\text{max}}]\),_
\[\gamma f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\geq\Delta-a-2\epsilon.\]
Note that the right-hand side in this claim may be negative for large values of \(a\); however, we will only use this claim when \(a<\Delta-2\epsilon\).
**Proof.** Suppose not, that is, \(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}<(1-\gamma)f_{M_{a}}^{*}+\Delta-a-2\epsilon\). Then
\[\overline{\mathbb{E}}_{a}^{\widetilde{M}}\sum_{t=1}^{T}g_{M_{a}}(\pi_{t}) =\overline{\mathbb{E}}_{a}^{\widetilde{M}}\sum_{t=1}^{T}\left(g_{\widetilde{M}}(\pi_{t})+f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}+f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})\right)\] \[<\overline{\mathbb{E}}_{a}^{\widetilde{M}}\sum_{t=1}^{T}\left(g_{\widetilde{M}}(\pi_{t})\right)+\left((1-\gamma)f_{M_{a}}^{*}+\Delta-a-2\epsilon\right)T+\epsilon T\] \[=\overline{\mathbb{E}}_{a}^{\widetilde{M}}\sum_{t=1}^{\tau_{a}}\left(g_{\widetilde{M}}(\pi_{t})\right)+\left((1-\gamma)f_{M_{a}}^{*}+\Delta-a-2\epsilon\right)T+\epsilon T\] \[\leq\left(aT+1\right)+\left((1-\gamma)f_{M_{a}}^{*}+\Delta-a-2\epsilon\right)T+\epsilon T\] \[\leq\left((1-\gamma)f_{M_{a}}^{*}+\Delta\right)T,\]
Here the first inequality uses the supposition together with Equation (5.9). The second equality uses the fact that \(\overline{p}_{t}^{a}\) plays \(\pi_{\widetilde{M}}\) after time \(\tau_{a}\), and \(g_{\widetilde{M}}(\pi_{\widetilde{M}})=0\). The third inequality uses the definition of \(\tau_{a}\), and the final inequality uses the assumption that \(\epsilon\geq 2/T\). The final result is a contradiction, since \(M_{a}\) is the maximizer of the DEC for \(p=\overline{p}_{\widetilde{M}}^{a}\), and thus we must have
\[\overline{\mathbb{E}}_{a}^{\widetilde{M}}\sum_{t=1}^{T}g_{M_{a}}(\pi_{t})\geq\left((1-\gamma)f_{M_{a}}^{*}+\Delta\right)T.\]
\(\blacksquare\)
**Claim 4**.: _For any \(M\in\mathcal{M}\), we have \(\gamma f_{M}^{*}-f_{\widetilde{M}}^{*}\geq\Delta-\gamma\rho-2\epsilon\)._
**Proof.** Applying Claim 3 with \(a=0\), we observe that for some \(M_{0}\in\mathcal{M}\),
\[\gamma f_{M_{0}}^{*}-f_{\widetilde{M}}^{*}\geq\Delta-2\epsilon.\]
The claim now follows from the fact that \(\mathcal{M}\) is \(\rho\)-localized, so \(f_{M}^{*}\geq 1-\rho\geq f_{M_{0}}^{*}-\rho\) for any \(M\in\mathcal{M}\).
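In symbols, combining the localization bound with the display above:

\[\gamma f_{M}^{*}-f_{\widetilde{M}}^{*}=\gamma\left(f_{M}^{*}-f_{M_{0}}^{*}\right)+\left(\gamma f_{M_{0}}^{*}-f_{\widetilde{M}}^{*}\right)\geq-\gamma\rho+\Delta-2\epsilon.\]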
The following two claims are the crux of the proof. In them, we relate the probability that certain models \(M_{a}\) will achieve a large regret by time \(\tau_{a}\) to the probability under \(\mathbb{P}^{\widetilde{M}}\) that \(\frac{1}{T}\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})\) falls in a certain interval.
**Claim 5**.: _Fix any values \(a^{\prime}\) and \(a\) in \([0,a_{\text{max}}]\), with \(a^{\prime}<a\). Then_
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{ a}}g_{M_{a}}(\pi_{t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-a+a^{\prime}-(2+3C) \epsilon\right]\geq\mathbb{P}^{\widetilde{M}}\left[a^{\prime}\leq\frac{1}{T} \sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})<a\right]-\frac{1}{3C}.\]
**Claim 6**.: _Let \(a=a_{\text{max}}-\frac{2}{T}=\Delta+(1-\gamma)f_{\widetilde{M}}(\pi_{ \widetilde{M}})-\frac{2}{T}\). Then for any \(a^{\prime}\leq\widehat{\rho}\),_
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{ a}}g_{M_{a}}(\pi_{t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-\widehat{\rho}+a^{\prime}-(4+3C) \epsilon\right]\geq\mathbb{P}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g _{\widetilde{M}}(\pi_{t})\geq a^{\prime}\right]-\frac{1}{3C}.\]
Claim 5 achieves a meaningful result whenever \(a-a^{\prime}\) is small relative to \(\Delta\). Claim 6 achieves a meaningful result whenever \(a^{\prime}\) is close to \(\widehat{\rho}\).
**Proof of Claim 5.** Recall from Claim 3 that we have \(\gamma f_{M_{a}}^{*}\geq f_{\widetilde{M}}^{*}+\Delta-2\epsilon-a.\) Thus if \(\tau_{a}=T\), we have
\[\sum_{t=1}^{\tau_{a}}g_{M_{a}}(\pi_{t}) =\sum_{t=1}^{T}f_{M_{a}}^{*}-f_{M_{a}}(\pi_{t})\] \[\geq\sum_{t=1}^{T}(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*})+g_{ \widetilde{M}}(\pi_{t})-|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\] \[\geq\left(\Delta+(1-\gamma)f_{M_{a}}^{*}-2\epsilon-a\right)T+ \sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})-|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}} (\pi_{t})|.\]
If \(a^{\prime}T\leq\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})<aT\) then \(\tau_{a}=T\), and the value above is at least
\[\left(\Delta+(1-\gamma)f_{M_{a}}^{*}-2\epsilon-a+a^{\prime}\right)T-\sum_{t=1} ^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|.\]
Thus by a union bound, we have
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_ {t=1}^{\tau_{a}}g_{M_{a}}(\pi_{t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-a+a^{ \prime}-(2+3C)\epsilon\right] \geq\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[a^{\prime}T \leq\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})<aT\right]\] \[-\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\sum_{t=1}^{\tau_ {a}}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\geq 3C\epsilon T\right].\]
By Claim 2, the second term on the right hand side is at most \(\frac{1}{3C}\).
Finally, observe that
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[a^{\prime}T\leq\sum _{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})<aT\right] =\mathbb{P}^{\widetilde{M}}\left[a^{\prime}T\leq\sum_{t=1}^{\tau _{a}}g_{\widetilde{M}}(\pi_{t})<aT\right]\] \[=\mathbb{P}^{\widetilde{M}}\left[a^{\prime}T\leq\sum_{t=1}^{T}g_{ \widetilde{M}}(\pi_{t})<aT\right].\]
where the first step follows because of the coupling between \(\overline{\mathbb{P}}_{a}^{\widetilde{M}}\) and \(\mathbb{P}^{\widetilde{M}}\) for events up to time \(\tau_{a}\), and the second step follows because if \(\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})<aT\), then \(\tau_{a}=T\), and vice versa.
This proves the claim. \(\blacksquare\)
The proof of Claim 6 is very similar, though we need to leverage the gap from Claim 4 in addition to Claim 3.
**Proof of Claim 6.** Observe that
\[\sum_{t=1}^{\tau_{a}}g_{M_{a}}(\pi_{t}) =\sum_{t=1}^{\tau_{a}}f_{M_{a}}^{*}-f_{M_{a}}(\pi_{t}) \tag{5.10}\] \[\geq\sum_{t=1}^{\tau_{a}}(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*})+g_{\widetilde{M}}(\pi_{t})-|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\] \[\geq\left((1-\gamma)f_{M_{a}}^{*}+\Delta-\widehat{\rho}-2\epsilon\right)\tau_{a}+\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})-|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|,\]
where the last line follows from combining Claims 3 and 4 to yield
\[\gamma f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\geq\Delta-\min(a,\gamma\rho)-2\epsilon\geq\Delta-\min(\Delta+(1-\gamma)f_{\widetilde{M}}^{*},\gamma\rho)-2\epsilon\geq\Delta-\widehat{\rho}-2\epsilon.\]
If \(a^{\prime}T\leq\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})<aT\), then \(\tau_{a}=T\), and thus we have
\[\sum_{t=1}^{\tau_{a}}g_{M_{a}}(\pi_{t})\geq\left((1-\gamma)f_{M_{a}}^{*}+\Delta-\widehat{\rho}+a^{\prime}-2\epsilon\right)T-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|.\]
Alternatively, if \(\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})\geq aT=(\Delta+(1-\gamma)f_{\widetilde{M}}^{*})T-2\), then from the second line of Equation (5.10), we obtain:
\[\sum_{t=1}^{\tau_{a}}g_{M_{a}}(\pi_{t}) \geq\left(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\right)\tau_{a}+\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})| \tag{5.11}\] \[\geq\left(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\right)\tau_{a}+(\Delta+(1-\gamma)f_{\widetilde{M}}^{*})T-2-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\] \[\geq\left(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\right)\tau_{a}+(\Delta+(1-\gamma)f_{M_{a}}^{*}-2\epsilon)T-(1-\gamma)(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*})T\] \[\qquad-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|.\]
Now we lower bound \(\tau_{a}\). Since \(g_{\widetilde{M}}(\pi)\leq f_{\widetilde{M}}^{*}\) for any \(\pi\), we have
\[\tau_{a}f_{\widetilde{M}}^{*} \geq\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})\] \[\geq\left(\Delta+(1-\gamma)f_{\widetilde{M}}^{*}\right)T-2\] \[\geq(1-\gamma)f_{\widetilde{M}}^{*}T,\]
where in the second inequality we used the definition of \(a\), and in the third inequality we used the assumption of the theorem that \(\Delta\geq\epsilon\geq 2/T\). Thus we have \(\tau_{a}\geq(1-\gamma)T\).
Observe also that by Claim 3, and the definition of \(a\), we have \(\gamma f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\geq\Delta-(\Delta+(1-\gamma)f_{\widetilde{M}}^{*})-2\epsilon\), and thus \(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\geq-2\epsilon/\gamma\).
Plugging this and the lower bound on \(\tau_{a}\) into Equation (5.11) yields that if \(\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})\geq aT\), then
\[\sum_{t=1}^{\tau_{a}}g_{M_{a}}(\pi_{t}) \geq\left(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\right)(\tau_{a}-(1-\gamma)T)+(\Delta+(1-\gamma)f_{M_{a}}^{*}-2\epsilon)T-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\] \[\geq-2\epsilon T+(\Delta+(1-\gamma)f_{M_{a}}^{*}-2\epsilon)T-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\] \[\geq(\Delta+(1-\gamma)f_{M_{a}}^{*}-4\epsilon)T-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\] \[\geq(\Delta+(1-\gamma)f_{M_{a}}^{*}-4\epsilon)T-\widehat{\rho}T+a^{\prime}T-\sum_{t=1}^{T}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|.\]
Here in the second inequality, we used the fact that \(\tau_{a}\geq(1-\gamma)T\) and \(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\geq-2\epsilon/\gamma\). Thus if \(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\geq 0\), the step follows from the fact that \(\tau_{a}\geq(1-\gamma)T\). If \(f_{M_{a}}^{*}-f_{\widetilde{M}}^{*}\in[-2\epsilon/\gamma,0)\), then the step follows from the fact that \(\tau_{a}-(1-\gamma)T\leq\gamma T\). In the final inequality, we used the assumption of the claim that \(a^{\prime}\leq\widehat{\rho}\).
Thus by a union bound, we have
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t =1}^{\tau_{a}}g_{M_{a}}(\pi_{t})\geq\Delta+(1-\gamma)f_{M_{a}}^{*}-\widehat{ \rho}+a^{\prime}-(4+3C)\epsilon\right] \geq\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[a^{\prime}T \leq\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})<aT\right]\] \[\quad+\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\sum_{t=1}^{ \tau_{a}}g_{\widetilde{M}}(\pi_{t})\geq aT\right]\] \[\quad-\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\sum_{t=1}^{ \tau_{a}}|f_{\widetilde{M}}(\pi_{t})-f_{M_{a}}(\pi_{t})|\geq 3C\epsilon T\right].\]
By Claim 2, the second term on the right hand side is at most \(\frac{1}{3C}\). Finally, observe that
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[a^{\prime}T\leq \sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})<aT\right]+\overline{\mathbb{P }}_{a}^{\widetilde{M}}\left[\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t}) \geq aT\right] =\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\sum_{t=1}^{\tau _{a}}g_{\widetilde{M}}(\pi_{t})\geq a^{\prime}T\right]\] \[=\mathbb{P}^{\widetilde{M}}\left[\sum_{t=1}^{\tau_{a}}g_{ \widetilde{M}}(\pi_{t})\geq a^{\prime}T\right]\] \[\geq\mathbb{P}^{\widetilde{M}}\left[\sum_{t=1}^{T}g_{\widetilde{ M}}(\pi_{t})\geq a^{\prime}T\right].\]
where the second step follows because of the coupling between \(\overline{\mathbb{P}}_{a}^{\widetilde{M}}\) and \(\mathbb{P}^{\widetilde{M}}\) for events up to time \(\tau_{a}\), and the last step follows because if \(\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_{t})\geq a^{\prime}T\), then \(\sum_{t=1}^{\tau_{a}}g_{\widetilde{M}}(\pi_{t})\geq a^{\prime}T\).
This proves the claim. \(\blacksquare\)
Now we show how to instantiate Claims 5 and 6 to prove the theorem. Let \(C\) be the integer from the theorem statement. For \(i=1,2,\ldots,C-1\), instantiate Claim 5 with \(a^{\prime}=\widehat{\rho}\frac{i-1}{C}\), \(a=\widehat{\rho}\frac{i}{C}\).
Then, instantiate Claim 6 with \(a^{\prime}=\widehat{\rho}\frac{C-1}{C}\).
Now clearly for at least one of the \(C\) intervals \(I_{i}:=\left[\widehat{\rho}\frac{i-1}{C},\widehat{\rho}\frac{i}{C}\right)\) for \(i\in[C-1]\) and the final interval \(I_{C}:=\left[\widehat{\rho}\frac{C-1}{C},1\right]\), we have
\[\mathbb{P}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{T}g_{\widetilde{M}}(\pi_ {t})\in I_{i}\right]\geq\frac{1}{C}.\]
Then by choosing \(a=\widehat{\rho}\frac{i}{C}\) if \(i\leq C-1\), and otherwise \(a=\Delta+(1-\gamma)f^{*}_{\widetilde{M}}-\frac{2}{T}\) if \(i=C\), we have
\[\overline{\mathbb{P}}_{a}^{\widetilde{M}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{a }}g_{M_{a}}(\pi_{t})\geq\Delta+(1-\gamma)f^{*}_{M_{a}}-\frac{\widehat{\rho}}{C }-(4+3C)\epsilon\right]\geq\frac{1}{C}-\frac{1}{3C}.\]
Indeed, this works for \(i\leq C-1\) since we instantiate Claim 5 with \(a-a^{\prime}=\frac{\widehat{\rho}}{C}\). If \(i=C\), as mentioned above, we use \(a^{\prime}=\widehat{\rho}\frac{C-1}{C}\), and thus Claim 6 yields the result above.
Applying the change of measure to \(\overline{\mathbb{P}}^{M_{a}}\), by Claim 1, for this value of \(a\), we have
\[\overline{\mathbb{P}}_{a}^{M_{a}}\left[\frac{1}{T}\sum_{t=1}^{\tau_{a}}g_{M_{ a}}(\pi_{t})\geq\Delta+(1-\gamma)f^{*}_{M_{a}}-\frac{\widehat{\rho}}{C}-(4+3C) \epsilon\right]\geq\frac{1}{C}-\frac{1}{3C}-\frac{1}{3C}=\frac{1}{3C}.\]
Observing that, by the assumption of the theorem, \((4+3C)\epsilon\leq\frac{2\widehat{\rho}}{C}\), this proves Equation (5.7), as desired. \(\blacksquare\)
## Acknowledgements
We acknowledge the support from the ARO through award W911NF-21-1-0328 and from the DOE through award DE-SC0022199. Thanks to Kefan Dong for useful feedback and conversations.
|
2308.10997 | MarkovGen: Structured Prediction for Efficient Text-to-Image Generation | Modern text-to-image generation models produce high-quality images that are
both photorealistic and faithful to the text prompts. However, this quality
comes at significant computational cost: nearly all of these models are
iterative and require running sampling multiple times with large models. This
iterative process is needed to ensure that different regions of the image are
not only aligned with the text prompt, but also compatible with each other. In
this work, we propose a light-weight approach to achieving this compatibility
between different regions of an image, using a Markov Random Field (MRF) model.
We demonstrate the effectiveness of this method on top of the latent
token-based Muse text-to-image model. The MRF richly encodes the compatibility
among image tokens at different spatial locations to improve quality and
significantly reduce the required number of Muse sampling steps. Inference with
the MRF is significantly cheaper, and its parameters can be quickly learned
through back-propagation by modeling MRF inference as a differentiable
neural-network layer. Our full model, MarkovGen, uses this proposed MRF model
to both speed up Muse by 1.5X and produce higher quality images by decreasing
undesirable image artifacts. | Sadeep Jayasumana, Daniel Glasner, Srikumar Ramalingam, Andreas Veit, Ayan Chakrabarti, Sanjiv Kumar | 2023-08-14T14:07:17Z | http://arxiv.org/abs/2308.10997v3 | # SPEGTI: Structured Prediction for Efficient Generative Text-to-Image Models
###### Abstract
Modern text-to-image generation models produce high-quality images that are both photorealistic and faithful to the text prompts. However, this quality comes at significant computational cost: nearly all of these models are iterative and require running inference multiple times with large models. This iterative process is needed to ensure that different regions of the image are not only aligned with the text prompt, but also compatible with each other. In this work, we propose a light-weight approach to achieving this compatibility between different regions of an image, using a Markov Random Field (MRF) model. This method is shown to work in conjunction with the recently proposed Muse model. The MRF encodes the compatibility among image tokens at different spatial locations and enables us to significantly reduce the required number of Muse prediction steps. Inference with the MRF is significantly cheaper, and its parameters can be quickly learned through back-propagation by modeling MRF inference as a differentiable neural-network layer. Our full model, SPEGTI, uses this proposed MRF model to speed up Muse by \(1.5\times\) with no loss in output image quality.
## 1 Introduction
Recent text-to-image models [16, 19, 22, 23, 30] are remarkably successful at producing high-quality, photo-realistic images that are faithful to the provided text
prompts, and are poised to drive a new generation of tools for creativity and graphic design. However, the generation process with these models is iterative and computationally expensive, requiring multiple sampling steps through large models. For example, diffusion models [22, 23] require multiple denoising steps to generate the final image, while the Parti model [30] auto-regressively generates image tokens one at a time. While the recently proposed Muse model [2] generates multiple tokens at a time, it still requires a large number of sampling steps to arrive at the final image.
This iterative process is needed to ensure that different regions or patches of the images are not only aligned with the provided text prompt, but also _compatible with each other_. Current text-to-image models achieve this spatial compatibility by repeatedly applying their full model multiple times on intermediate image predictions--a process that is computationally very expensive. In this paper, we demonstrate that a significantly lighter-weight approach can achieve the same compatibility.
To this end, we propose a new _structured prediction_ approach that applies to image generation models operating in a discrete token space, such as the VQGAN token space [2, 3, 9]. These models generate images by first selecting tokens in a fixed-size token grid and later detokenizing them into an RGB image. Usual token-based image generation methods select tokens by _independently_ sampling from the probability distributions at different patch locations. In contrast, we model the whole image _jointly_ using a fully-connected Markov Random Field (MRF) that encodes compatibility between all pairs of tokens (image patches). The tokens at different patch locations are then determined based on this joint distribution. Consequently, as illustrated in Figure 2, a confident token at one location can influence the selected tokens at other locations to enhance the overall compatibility of the token arrangement, and therefore the fidelity of the final image. We use mean-field inference [12, 13, 31] to solve this MRF, which also permits training the compatibility parameters of the model through back-propagation. During image generation with a trained model, the MRF inference comes at a negligible cost compared to the cost of large Transformer models used to predict the initial token probabilities.
To showcase the benefits of our MRF model, we introduce a new text-to-image model, SPEGTI, that can work in conjunction with the Muse model [2]. Muse uses a parallel decoding approach where all tokens of the image are predicted in parallel at each step. The text prompt and the image generated in the previous step are taken as inputs to the Muse model. Muse has been shown to be much faster (around \(3\times\) faster than the closest competitor) than other state-of-the-art image generation models such as DALL-E, Imagen, Parti, and Stable Diffusion, while producing similar or better quality images [2]. Although Muse produces predictions for every patch simultaneously, single-shot parallel decoding leads to serious quality degradation in the generated images [2]. Muse solves this by embracing progressive parallel decoding, where a small incremental number of high confidence tokens are fixed each time. We show that our structured prediction approach can be applied after a limited number of sampling steps with the Muse model to get equivalent results to those from Muse's full iterative approach (see Figure 1). Since MRF inference comes at a negligible cost compared to the cost of the Muse model, replacing Muse steps with MRF inference significantly reduces the over-all inference time.
SPEGTI speeds up Muse by a factor of \(1.5\times\), without compromising on quality. Reducing the latency of one of the fastest text-to-image models will have important practical implications for real-world deployments.
The success of our MRF formulation in modeling spatial and label relationships of image tokens opens up the possibility of refining predictions of other token-based methods such as Parti [30] and discrete-diffusion models [10] with MRFs. In this work, we primarily focus on improving efficiency and leave explorations of other benefits of the MRF model to future work.
In summary, our contributions are:
* We propose an MRF model, a type of probabilistic graphical model, that can predict a globally compatible set of image tokens by explicitly modeling spatial and token label relationships.
Figure 2: _Benefits of encouraging token compatibility with an MRF model. During MRF inference, a confident token (such as the token representing the giraffe's eye) encourages the neighboring tokens to be compatible (to represent other parts of a giraffe face, such as ears and nose). Similarly, as shown in the lower part of the image, the token representing the texture of the giraffe body can influence nearby tokens to represent similar patterns. Our formulation also supports long-range connections, such as the one shown with the broken yellow line. In practice, a combination of tokens might be required to represent a high-level semantic concept like a giraffe eye, but the intuition of label compatibility would still be valid._
* We show that the proposed SPEGTI model can replace the last few steps of Muse with the learned MRF layer, leading to a \(1.5\times\) speedup, with no quality loss.
* We show that the MRF model parameters can be trained in just a few hours, allowing us to quickly combine the MRF model with pretrained Muse models to reap inference efficiency gains.
* To the best of our knowledge, this is the first work to exploit MRFs to improve efficiency of text-to-image generation models.
## 2 Related Work
**Text-to-Image Generation:** In recent years, papers such as [23, 30, 2, 3, 19, 22] have proposed a diverse variety of methods to generate high-quality images given a text prompt as input. We discuss some of the most relevant approaches below.
Many text-to-image models [16, 17, 19, 22, 23] use denoising diffusion probabilistic models (DDPM) [11] to generate images, where the model is invoked successively to "denoise" previous intermediate versions and progressively refine the image output. While the diffusion theory suggests the need for infinitely small and many denoising steps, only a few hundred steps are used in practice [25]. This is still computationally demanding, and progressive distillation algorithms are being developed to cut down the number of steps [24]. Most of these models directly operate on and produce pixel intensities; [22] proposed a variant that operates on a lower-dimensional latent representation.
In contrast, the Parti [30], DALL-E [20], and Muse [2] models generate images in a space of discrete token representation. They use a VQGAN [9] model, derived from VQVAE [29], to represent non-overlapping image patches with tokens--with values from a discrete vocabulary--and cast the image generation task as that of generating image tokens. The Parti and DALL-E models approach token generation with auto-regressive modeling, generating tokens one at a time in sequence, where each token is generated conditioned on the text input and all previously generated tokens.
The Muse model [2], on the other hand, is trained to take the text prompt and any already generated image tokens as input, and make predictions for all remaining image tokens simultaneously. In particular, it is trained as a BERT-style [7] encoder model operating on a masked set of image tokens (with tokens not already generated being masked), with cross-attention to an encoding of the text prompt input. To generate an image, the model is invoked in multiple sampling steps, with all image tokens being masked in the first step. At each step, the Muse model makes predictions for all masked tokens. A subset of these predictions are selected and added to the set of fixed and non-masked tokens, which are then used as conditioning input for subsequent invocations until all tokens have been fixed. Similar to Muse, Paella [21] and Cogview2 [8] also exploit progressive parallel decoding to achieve speedup. A similar approach to parallel decoding for text was introduced by [15].
Like many other text-to-image generation models, Muse first generates a low-resolution version of the target image, and then conditions on this low-resolution image to generate the high-resolution version. It uses a similar architecture and sampling approach for the high-resolution generation stage, except in this case, the low-resolution image tokens are provided as additional conditioning input.
For the selected tokens at each sampling stage of Muse, the token values are determined independently for each token from the predicted per-token distributions. Our structured prediction approach, in contrast, considers compatibility between the values of different tokens, and by doing so, is able to reduce the number of sampling steps required--in both the low- and high-resolution stages.
**Structured Prediction:** Markov and Conditional random fields (CRF)s have a long history of being used in computer vision for diverse applications such as stereo, segmentation, and image reconstruction [28]. These MRF and CRF models have typically been used to enforce smoothness constraints, i.e., that semantic labels, pixel intensities, stereo depths, etc. at nearby locations are similar. In modern neural network-based methods too, they have been found to be a useful post-processing step [4] to yield smooth consistent results.
While early MRF and CRF models considered edges only among immediate pixel neighbors on the image plane, [13] introduced "fully-connected" CRF models that had far longer range connections, and showed that the energy for these models could effectively be minimized using mean-field inference. Using this fully-connected formulation, [31] proposed back-propagating through the mean-field inference steps to jointly train a CRF model with a CNN network to achieve better semantic image segmentation.
In this work, we use an MRF formulation to achieve consistency in predicted image tokens in the context of text-to-image generation, and like [31], we also use a fully connected MRF model and learn its parameters by back-propagation. However, in our case, the MRF is defined over tokenized patches, the label space corresponds to the vocabulary of a VQGAN model [9] and the MRF enforces consistency between different token values rather than explicit smoothness.
It is worth mentioning here that CRFs have also been proposed to improve text generation in recent works [26, 27]. Like our case, these methods also use a Transformer model to generate "unaries" that are then provided as input to a CRF model. However, these methods
consider edges only between neighboring tokens, and since text sequences are one-dimensional, are able to use chain decoding techniques (like beam search) for inference. In contrast, our method reasons with a two-dimensional MRF model with edges between all pairs of patches in the image.
## 3 Structured Token Prediction
In this section, we introduce our MRF formulation for structured token prediction. In token-based image generation, a neural network (often a Transformer model) makes predictions to generate a fixed-size (\(16\times 16\), for example) _token image_ containing token labels. This token image is then sent through a _detokenizer_ to generate an RGB image [9]. The tokens belong to a fixed vocabulary of \(V\) tokens (8192, for example).
Consider a \(4\times 4\) neighborhood of the token image: there are \(8192^{16}\) different arrangements of tokens in this small patch; obviously not all of them will represent a photo-realistic image patch. For a full-sized \(16\times 16\) token image, there are \(8192^{256}=2^{3328}\approx 6.8\times 10^{1001}\) different arrangements of tokens, many of which will represent some kind of "garbage" images that lie outside the manifold of photorealistic images. Intuitively, a structured prediction mechanism that accounts for the compatibility of token arrangements could significantly reduce this massive search space of token arrangements and make the token prediction models more efficient.
We propose a probabilistic graphical model for this structured prediction task. More specifically, we formulate finding the token arrangement in a token image as maximum a posteriori (MAP) inference of an MRF model, as described in the following. The high-level idea is illustrated in Figure 3.
Let \(i\in\{1,2,\ldots,n\}\) denote the location indices of the token image, arranged in row-major order. Let \(\mathcal{L}=\{l_{1},l_{2},\ldots,l_{V}\}\) be the token labels, which are used to index each element in the codebook of V tokens. For a \(16\times 16\) token image with vocabulary size \(8192\), we have \(n=256\) and \(V=8192\). Define a random variable \(X_{i}\in\mathcal{L}\) for each \(i=1,2,\ldots,n\) to hold the token assignment for the \(i^{\text{th}}\) location. The collection of these random variables \(\mathbf{X}=[X_{1},X_{2},\ldots,X_{n}]\) then forms a random field, where the value of one variable depends on that of the others. We can then model the probability of an assignment to this random field (and therefore a token arrangement on the grid) with the Gibbs measure:
\[P(\mathbf{X}=\mathbf{x})=\frac{1}{Z}\exp(-E(\mathbf{x})), \tag{1}\]
where \(\mathbf{x}\in\mathcal{L}^{n}\) is a given token arrangement and \(Z=\sum_{\mathbf{x}}\exp(-E(\mathbf{x}))\) is the partition function. The "energy" \(E(\mathbf{x})\) of an assignment \(\mathbf{x}=\{x_{1},x_{2},\ldots,x_{n}\}\) is modeled with two components: the _unary_ component \(u_{i}(.)\) and the _pairwise_ component \(p_{ij}(.,.)\):
\[E(\mathbf{x})=\sum_{i=1}^{n}u_{i}(x_{i})+\sum_{i=1}^{n}\sum_{j=1}^{n}p_{ij}(x _{i},x_{j}). \tag{2}\]
Figure 3: _The role of MRF in improving the prediction by exploiting token probabilities from the Transformer and the learned spatial and label compatibility relations. Every token image consisting of a grid of discrete token labels can be detokenized into an RGB image. After MRF inference, we enhance the spatial and label compatibility in the token space leading to an improved image upon detokenization._
The unary component captures the confidence of the neural network prediction model (such as a Transformer model) for a given token at a given location. Therefore, given a condition \(y\) (such as a text prompt or pre-fixed tokens), if the neural network's predicted logit value for location \(i\) and label \(x_{i}\) is \(f_{i}(x_{i},y)\), we set:
\[u_{i}(x_{i})=-f_{i}(x_{i},y). \tag{3}\]
Note that we use negative logits because the energy function is in the log domain and a high energy corresponds to a low probability. We drop conditioning on \(y\) hereafter to keep the notation uncluttered. Also note that our MRF formulation is not conditioned on \(y\).
The pairwise component, \(p_{ij}(x_{i},x_{j})\), captures the compatibility of the label \(x_{i}\) assigned to the location \(i\) and the label \(x_{j}\) assigned to the location \(j\). It encodes the notion that while some pairs of tokens are highly compatible with each other and can appear in the same image, some other pairs are highly incompatible. For example, a token that often represents a part of a giraffe eye is more likely to appear next to a token that usually represents a different part of a giraffe eye/face than next to a token that most often represents something completely different, like a part of a car wheel. We factorize this pairwise compatibility into two parts: the spatial similarity \(s(i,j)\) between the locations \(i\) and \(j\) (for example, if \(i\) and \(j\) are close to each other in the 2D token image, they will be strongly related) and the label compatibility \(c(x_{i},x_{j})\) between the tokens \(x_{i}\) and \(x_{j}\) (for example, highly compatible tokens are able to coexist with each other). We therefore have:
\[p_{ij}(x_{i},x_{j})=-c(x_{i},x_{j})s(i,j). \tag{4}\]
In classic MRFs, the pairwise interactions exist only between neighboring pixels. In contrast, for increased flexibility, we allow interactions between all pairs of locations, similar to the fully-connected CRFs in the image segmentation setting [13, 31]. However, there are a number of important differences in our formulation compared to the fully-connected CRFs in image segmentation: in the latter, the spatial similarity \(s(i,j)\) is derived conditioned on the input image (hence the name _conditional_ random fields), using Gaussian potentials in the spatial and bilateral domains. This Gaussian assumption is crucial for the tractability of their models, since the image segmentation CRFs work on a large image grid: in practical implementations, pixels that are farther away than a few standard deviations of the Gaussian kernel are considered not connected [1]. In contrast, we make our graphical model truly fully-connected and learn \(s(i,j)\) with backpropagation, without fixing them to be Gaussian. Furthermore, the CRFs in image segmentation can assume a Potts model for label compatibility because assigning the same label to nearby pixels generally improves the smoothness of the segmentation. In our application, on the other hand, it is not straightforward to assign semantic meanings to tokens, and the Potts model does not intuitively make sense, since the same token at similar locations does not increase the meaningfulness of a token assignment. We therefore resort to learning the pairwise connections \(p_{ij}(.,.)\) completely from data, without using any priors or heuristics. Thus, our MRF formulation has two learnable weight matrices: \(\mathbf{W^{s}}\), with \(\mathbf{W^{s}}_{ij}:=s(i,j)\), and \(\mathbf{W^{c}}\), with \(\mathbf{W^{c}}_{kk^{\prime}}:=c(k,k^{\prime})\).
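To make the model concrete, the following is a minimal NumPy sketch of the energy in Equations (2)-(4) for a single token arrangement; the array names and shapes are our assumptions.

```
import numpy as np

def energy(x, logits, W_s, W_c):
    """Energy E(x) of a token arrangement x (Eq. 2), with unaries
    u_i(x_i) = -f_i(x_i) (Eq. 3) and pairwise terms
    p_ij(x_i, x_j) = -c(x_i, x_j) s(i, j) (Eq. 4).

    x:      (n,) integer token labels.
    logits: (n, V) Transformer logits f_i(k).
    W_s:    (n, n) spatial similarity s(i, j).
    W_c:    (V, V) label compatibility c(k, k').
    """
    n = len(x)
    unary = -logits[np.arange(n), x].sum()
    # c(x_i, x_j) for every pair (i, j), weighted by s(i, j) and summed.
    pairwise = -(W_s * W_c[np.ix_(x, x)]).sum()
    return unary + pairwise
```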
Given our probabilistic graphical model, finding the final token arrangement amounts to finding the assignment \(\mathbf{x}\) that maximizes \(P(\mathbf{X}=\mathbf{x})\). This can be done efficiently via mean-field inference, where we approximate \(P(\mathbf{X})\approx Q(\mathbf{X}):=\prod_{i}Q_{i}(X_{i})\), with \(Q_{i}(.)\) being the marginal distribution for \(X_{i}\). The distribution \(Q(\mathbf{X})\) is then iteratively refined to minimize the KL divergence between \(P\) and \(Q\). We refer the reader to [12] and [13] for more details on the derivations. The resulting inference algorithm is summarized in Algorithm 1. Note that all operations of this algorithm can be implemented via simple matrix multiplication and other common operations such as \(\mathrm{softmax}(.)\), which are readily available in any deep learning library. Importantly, the cost of our MRF inference is negligible compared to prediction with a large Transformer model.
```
\(Q_{i}(k)\leftarrow\mathrm{softmax}(f_{i}(k))\), \(\forall(i,k)\)
for num_iterations do
    \(Q_{i}(k)\leftarrow\sum_{j=1}^{n}\mathbf{W^{s}}_{ij}Q_{j}(k)\), \(\forall(i,k)\)
    \(Q_{i}(k)\leftarrow\sum_{k^{\prime}=1}^{V}\mathbf{W^{c}}_{kk^{\prime}}Q_{i}(k^{\prime})\), \(\forall(i,k)\)
    \(Q_{i}(k)\leftarrow\mathrm{softmax}(Q_{i})(k)\), \(\forall(i,k)\)
end for
return \(Q\)
```
**Algorithm 1**_The MRF Inference Algorithm_
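To make the cost concrete, here is a minimal NumPy sketch of Algorithm 1 (variable names and the iteration count are our choices; `logits` holds the Transformer outputs \(f_{i}(k)\)): each iteration is just two matrix products and a softmax.

```
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mrf_inference(logits, W_s, W_c, num_iterations=3):
    """Mean-field inference of Algorithm 1.

    logits: (n, V) unary logits f_i(k); W_s: (n, n); W_c: (V, V).
    Returns the refined marginals Q of shape (n, V). The number of
    iterations is a free choice (not specified in the text).
    """
    Q = softmax(logits)      # Q_i(k) <- softmax(f_i(k))
    for _ in range(num_iterations):
        Q = W_s @ Q          # Q_i(k) <- sum_j W^s_ij Q_j(k)
        Q = Q @ W_c.T        # Q_i(k) <- sum_k' W^c_kk' Q_i(k')
        Q = softmax(Q)       # renormalize into per-location distributions
    return Q

# Final tokens: per-location argmax of the refined marginals, e.g.
# tokens = mrf_inference(logits, W_s, W_c).argmax(axis=-1)
```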
## 4 SPEGTI
We now demonstrate the benefits of the proposed MRF model by using it to speed up the state-of-the-art Muse image generation model [2]. We achieve this speed-up by replacing the last few inference steps of Muse with MRF inference. Specifically, we let Muse execute the first few steps and then use our extremely lightweight MRF inference to fast-forward the remaining steps. This model, dubbed SPEGTI, improves image generation speed by \(1.5\times\) without sacrificing quality.
The Muse model works in the discrete VQGAN token space [9]. The VQGAN discrete representation is gradually emerging as the centerpiece of many text-to-image generation algorithms, especially the transformer-based models that cannot work with pixels even for low-resolution images [5]. Muse generates images by first executing a few inference steps with the base model to predict a small grid, e.g., \(16\times 16\), of discrete image tokens by conditioning on the text embeddings generated by the T5-XXL [18] text encoder. This is followed by a few steps of the super-resolution (SR) model to predict a larger grid, e.g., \(32\times 32\), of discrete image tokens by conditioning on both the text embeddings and the tokens generated by the base model. Note that both the base and SR models use large Transformer models as the backbone, whose computational complexity is quadratic in the number of tokens. We exploit this multi-scale approach to improve inference efficiency by using more steps in the base model, which works with fewer tokens, followed by far fewer steps with the SR model. Once SR tokens are generated, the VQGAN [9] detokenizer is used to render the image in pixel space.
We train the MRF parameters to match the predictions of Muse's final step, given its predictions at an intermediate step. By training the MRF parameters in this manner, we can learn to skip or replace the latter part of the Muse model inference with the MRF model. By fast-forwarding after step \(k\) of the total \(n\) steps, we instantly save \((n-k)/n\times 100\%\) of the Muse model's inference time. This is because the inference time of the MRF is negligible compared to that of the Muse steps, as shown in Table 1. The same strategy is used for both the base model and the SR model, to achieve an overall boost of \(1.5\times\) in inference speed.
## 5 Experiments
In this section, we show that SPEGTI achieves faster inference compared to Muse, with no loss of quality. Muse has been shown to be much faster than other state-of-the-art text-to-image models such as DALL-E, DALL-E 2, Parti, Imagen, and Stable Diffusion [2], outperforming the second-fastest method by a factor of approximately \(3\times\) (Table 3 of [2]). Furthermore, as evidenced by Tables 1 & 2 of [2], Muse achieves better quality results compared to these methods, as measured with the FID score. Human evaluation results for image quality in [2] showed that humans preferred Muse outputs for \(70.6\%\) of prompts, while Stable Diffusion was preferred for only \(25.4\%\). Since Muse is already shown to outperform other state-of-the-art methods in terms of both speed and quality, we focus on comparing our results to those of Muse.
**Model and Dataset:** We use a Muse model with approximately 1.7B parameters, trained on the WebLI dataset [6]. This model was generously made available to us by the authors of the Muse paper. We refer the reader to [2] for more details on the architecture and the training setup of Muse.
\begin{table}
\begin{tabular}{l r} \hline \hline Model & Time (ms) \\ \hline Muse base (single step) & 10.40 \\ Muse super-resolution (single step) & 24.00 \\ MRF inference on base & 0.29 \\ MRF inference on super-resolution & 0.29 \\ T5-XXL inference & 0.30 \\ Detokenizer & 0.15 \\ \hline Muse & 442.05 \\ SPEGTI (ours) & 281.03 \\ \hline \hline \end{tabular}
\end{table}
Table 1: _Average inference times for different components and models on a TPUv4 device. The MRF inference is almost free compared to the costs of the Muse Transformer models. Furthermore, MRF inference is independent of the image resolutions (rows 3 and 4). We make Muse inference \(1.5\times\) faster by introducing the MRF model._
\begin{table}
\begin{tabular}{c c} \hline \hline Model & FID \\ \hline Muse base (18 iters) & 14.37 \\ Muse base (24 iters) & 13.13 \\ Muse base (18 iters) + MRF & 12.28 \\ \hline \hline \end{tabular}
\end{table}
Table 2: _Quantitative evaluation of FID scores on the MSCOCO [14] dataset for \(256\times 256\) image resolution. The Muse model with MRF applied after 18 steps outperforms both the Muse model with 18 steps and the Muse model with 24 steps._
Figure 4: _Example generations of the Muse low-resolution model before (top) and after (bottom) the application of the MRF model. We observe a key reduction in visual artifacts, e.g., in the background of the frog and in the water in front of the boat. The MRF is also helpful in completing incomplete objects, such as the missing part of the toast._
The same WebLI dataset was used to train the MRF model.
**MRF Training:** We train two MRF models, one for base and one for SR. Each model contains two weight matrices for spatial and label compatibilities: \(256\times 256\)\(\mathbf{W_{base}^{s}}\) and \(8192\times 8192\)\(\mathbf{W_{base}^{c}}\) for the base, and \(1024\times 1024\)\(\mathbf{W_{SR}^{s}}\) and \(8192\times 8192\)\(\mathbf{W_{SR}^{c}}\) for the SR. All MRF weights are trained with back-propagation and gradient descent, using the ADAM optimizer. We use a two-stage approach for MRF training. First, we pre-train the MRF model using a self-supervised masked-token prediction loss [2, 7]. Specifically, we obtain VQGAN tokens for an image, randomly mask \(20\%\) of them, and train the MRF model to predict the masked tokens using the categorical cross-entropy loss. Second, we fine-tune the MRF model to imitate the last \(n-k\) steps of the Muse model: given the output of the Muse model after the \(k\)th iteration, the spatial and label compatibility matrices are learned such that the MRF inference matches the final predictions of the Muse model after \(n\) iterations, using the KL divergence loss. Both base and SR MRF models are trained in the same manner, and both complete training in just a few hours on TPUv4 chips.
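As a rough sketch of the second-stage objective (the exact direction and reduction of the KL term are our assumptions, not stated in the text), the MRF output is trained to match the final Muse marginals:

```
import numpy as np

def distillation_loss(Q_mrf, Q_muse_final, eps=1e-9):
    """Mean per-location KL(Q_muse_final || Q_mrf); both inputs are (n, V)
    probability tables. Gradients flow into W^s and W^c through the
    differentiable mean-field iterations."""
    kl = Q_muse_final * (np.log(Q_muse_final + eps) - np.log(Q_mrf + eps))
    return kl.sum(axis=-1).mean()
```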
**Experimental Setup:** The base model operates on a \(16\times 16\) token grid with 24 inference steps to produce \(256\times 256\) images. The SR model works on a \(32\times 32\) token grid and produces \(512\times 512\) images in 8 additional steps. SPEGTI uses both the base and SR MRF models to trade with the base and SR inference steps of the Muse model, respectively. We apply the base MRF after step 20 of the base Muse model, and the SR MRF after 3 steps of the SR Muse model. In other words, MRF is used to cut down 4 steps in the base model and 5 steps in the SR model. This results in a \(1.5\times\) speed-up in SPEGTI compared to Muse.
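As a sanity check, the reported \(1.5\times\) speed-up can be reproduced from the per-step timings in Table 1 and the step counts above:

```
base_step, sr_step = 10.40, 24.00      # ms per step (Table 1)
mrf, t5, detok = 0.29, 0.30, 0.15      # ms (Table 1)

muse = 24 * base_step + 8 * sr_step + t5 + detok               # 442.05 ms
spegti = 20 * base_step + 3 * sr_step + 2 * mrf + t5 + detok   # 281.03 ms
print(muse / spegti)                   # ~1.57x
```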
**Qualitative Evaluation:** Figure 4 shows images generated by the base Muse model before (top) and after (bottom) the application of the MRF. We observe a key reduction in visual artifacts in the background of the frog and in the water in front of the boat. The MRF is also helpful in completing incomplete objects, such as the missing part of the toast. Figure 5 shows images generated by the SR Muse model before (top) and after (bottom) the application of the MRF. We again see a key reduction in visual artifacts, especially in complex textures such as the brick wall behind the car and the surface of the wooden heart. We further see a significant improvement in correcting complex object structures such as the blue car and the teddy bear's face.
In Figure 6 we study the progression of the generated images during the invocation of the Muse SR model. At each step, we also show the image after the application of the MRF model. The results show that while the Muse model slowly improves result quality, the MRF layer provides high-quality results already after the first step. We observe that SPEGTI with just 3 SR steps and the MRF model already consistently produces images comparable with the Muse results with 8 SR steps. Figure 7 demonstrates this using a series of Parti Prompts [30], where we compare the results of the full Muse model (left), our SPEGTI model with a \(1.5\times\) speed-up (right), and Muse applied for fewer steps leading to a similar speed-up (middle). In Figure 8 we show additional qualitative results produced by SPEGTI. We observe that the model is able to produce a wide variety of images ranging from artistic to natural images.

Figure 5: _Example generations of the Muse super-resolution model running for 3 (out of 8) steps before (top) and after (bottom) the application of the MRF model. We observe a significant reduction in visual artifacts, e.g., in the brick wall behind the car. We further see key improvements to complex object structures such as the blue car and the teddy bear's face._
**Quantitative Evaluation:** We focus on the base model for quantitative evaluation. Note that the base model executes \(n=24\) steps [2], and we show that we can trade the last few steps of the base model with the lightweight MRF inference, with no loss in quality.
In Table 2, we report single-shot FID scores on the MS-COCO dataset [14]. Using MRF, we are able to reduce the number of steps from 24 to 18 while still achieving better FID scores than the Muse model with 24 steps. Notably, the base model with 18 steps without the MRF performed significantly worse than the base model with 24 steps.
## 6 Conclusion
The proposed SPEGTI model delivers a significant inference speed-up of \(1.5\times\) over Muse by replacing the last few steps of the Muse model with MRF inference. The MRF model achieves this by learning the spatial and token label compatibility relationships in the discrete VQGAN token space. Our MRF model can be trained in just a few hours, which allows us to use it in conjunction with pre-trained Muse models and observe an almost immediate inference speed-up.
While providing clear benefits over independent per-patch token selection, our current MRF model makes no use of the provided text prompt, with text guidance coming in solely through the unaries. An interesting direction of future work would be to make the spatial and token compatibility weights be dependent on the text prompt, allowing the MRF (or in this case, the CRF) to adapt to text input.
Note that the MRF parameters were trained so that the MRF prediction matches the final Muse prediction, with the main focus on improving inference efficiency. However, by learning the parameters in a different manner, we may be able to make the MRF complementary to Muse, and even enhance the quality of image generation.
Another direction of future work lies in training the Muse model itself jointly with the MRF layers, so as to ensure that the unaries produced by Muse are optimal for use with MRF-based decoding.
**Limitations**
While our proposed solution reduces inference time, training these models remains computationally expensive. Like all image generation models including Muse itself,
Figure 6: _The first four steps of the super-resolution model without (top) and with (bottom) the MRF. Note that the MRF fixes complex object structures such as the dog’s face as well as texture-inconsistencies in areas such as the brick wall. The MRF output looks reasonable starting from the first step._
our approach also carries the risk of generating offensive or misleading images. These risks must be carefully analyzed and mitigated before making such models available to users. An interesting direction of future work lies in exploring whether the MRF itself could be deployed in ruling out the generation of certain classes of images.
## Acknowledgment
We would like to thank Apurv Suman, Dilip Krishnan, Jarred Barber, Huiwen Chang, and Jason Baldridge for their valuable feedback. We thank the authors of Muse [2] for generously providing us with the code and the models.
Figure 7: _Within each set of three, SPEGTI (right) speeds up Muse (left) by \(1.5\times\) with almost no loss of quality. A similar speed up by only reducing the step count (middle) results in a significant loss of quality._ |
2304.11784 | MeerKAT discovery of a double radio relic and odd radio circle:
connecting cluster and galaxy merger shocks | We present the serendipitous discovery of (1) a large double radio relic
associated with the galaxy cluster PSZ2 G277.93+12.34 and (2) a new odd radio
circle, ORC J1027-4422, both found in the same deep MeerKAT 1.3 GHz wide-band
radio continuum image. The angular separation of the two arc-shaped cluster
relics is ~16 arcmin or ~2.6 Mpc for a cluster redshift of z ~ 0.158. The thin
southern relic, which shows several ridges/shocks including one possibly moving
inwards, has a linear extent of ~1.64 Mpc. In contrast, the northern relic is
about twice as wide, twice as bright, but only has a largest linear size of
~0.66 Mpc. Complementary SRG/eROSITA X-ray images reveal extended emission from
hot intracluster gas between the two relics and around the narrow-angle tail
(NAT) radio galaxy PMN J1033-4335 (z ~ 0.153) located just east of the northern
relic. The radio morphologies of the NAT galaxy and the northern relic, which
are also detected with the Australian Square Kilometer Array Pathfinder (ASKAP)
at 888 MHz, suggest both are moving in the same outward direction. The
discovery of ORC J1027-4422 in a different part of the same MeerKAT image makes
it the 4th known single ORC. It has a diameter of ~90 arcsec corresponding to
400 kpc at a tentative redshift of z ~ 0.3 and remains undetected in X-ray
emission. Supported by simulations, we discuss similarities between outward
moving galaxy and cluster merger shocks as the formation mechanisms for ORCs
and radio relics, respectively. | Bärbel S. Koribalski, Angie Veronica, Klaus Dolag, Thomas H. Reiprich, Marcus Brüggen, Ian Heywood, Heinz Andernach, Ralf-Jürgen Dettmar, Matthias Hoeft, Xiaoyuan Zhang, Esra Bulbul, Christian Garrel, Gyula I. G. Józsa, Jayanne English | 2023-04-24T01:10:11Z | http://arxiv.org/abs/2304.11784v2 | # MeerKAT discovery of a double radio relic and odd radio circle
###### Abstract
We present the serendipitous discovery of a large double radio relic associated with the merging galaxy cluster PSZ2 G277.93+12.34 and a new odd radio circle, ORC J1027-4422, both found in deep MeerKAT 1.3 GHz wide-band data. The angular separation of the two arc-shaped cluster relics is \(\sim\)16 arcmin or \(\sim\)2.6 Mpc for a cluster redshift of \(z\approx 0.158\). The thin southern relic, which shows a number of ridges/shocks including one possibly moving inwards, has a linear extent of \(\sim\)1.64 Mpc. In contrast, the northern relic is about twice as wide, twice as bright, but only has a largest linear size of \(\sim\)0.66 Mpc. Complementary SRG/eROSITA X-ray images reveal extended emission from hot intracluster gas between the two relics and around the narrow-angle tail (NAT) radio galaxy PMN J1033-4335 (\(z\approx 0.153\)) located just east of the northern relic. No radio halo associated with the PSZ2 cluster is detected. The radio morphologies of the NAT galaxy and the northern relic, which are also detected with the Australian Square Kilometre Array Pathfinder at 887.5 MHz, suggest both are moving in the same outward direction. The discovery of ORC J1027-4422 in a different part of the MeerKAT image makes it the 4th known single ORC. It has a diameter of \(\sim\)90\({}^{\prime\prime}\) corresponding to 400 kpc at a tentative redshift of \(z\approx 0.3\) and remains undetected in X-ray emission. We discuss similarities between galaxy and cluster mergers as the formation mechanisms for ORCs and radio relics, respectively.
keywords: galaxies: clusters: intracluster medium - instrumentation: radio interferometers - radio continuum: galaxies - X-rays: galaxies, clusters - intergalactic medium
## 1 Introduction
Radio relics are diffuse, steep-spectrum radio synchrotron sources that typically occur in the form of single or double symmetric arcs at the peripheries of galaxy clusters. They can be explained by shock waves driven into the intracluster medium (ICM) by cluster mergers (e.g., van Weeren et al., 2019; Thölken et al., 2018; Brüggen and Vazza, 2020; Ghirardini et al., 2021; Brüggen et al., 2021; Hoang et al., 2022; Boss et al., 2023; Jones et al., 2023). Their large size, arc-like radio morphology, high degree of polarization, and spectral
index distribution make them very distinct from other extended radio sources. There is significant interest in radio relics because they are very efficient particle accelerators (Botteon et al., 2018), sites of relatively strong magnetic fields (Rajpurohit et al., 2022), and indicators of the dynamical state of galaxy clusters.
Galaxy clusters that host double radio relics are rare. However, they are particularly interesting because their geometry suggests that the merger is proceeding close to the plane of the sky. This geometry minimizes projection effects and provides better constraints on shock model parameters (e.g., spectral gradients, injection spectral indices, shock Mach numbers) and the physical properties of the underlying merging system (e.g. mass, mass ratio). The main open questions involving radio relics concern their efficiency in particle acceleration, their relation with shock waves and sources of cosmic-ray electrons (such as AGN), the magnetic field strength and orientation, as well as the merger state of the host clusters. For numerical simulations of double relics, including at different sky projections, see van Weeren et al. (2011), Bonafede et al. (2012), Skillman et al. (2013) and Wittor et al. (2017, 2021).
In the southern sky, there are several double-relic clusters known, including Abell 2345 (Bonafede et al., 2009), Abell 3376 (Bagchi et al., 2006; Chibueze et al., 2023), MACS J0025.4-1222 (Riseley et al., 2017), El Gordo (Lindner et al., 2014), PLCK G287.0+32.9 (Bagchi et al., 2011), RXC J1314.4-2515 (Feretti et al., 2005), Abell 3365 (Stuardi et al., 2022; Urdampilleta et al., 2021), and Abell 3667 (Röttgering et al., 1997; de Gasperin et al., 2022). The MeerKAT Galaxy Cluster Legacy Survey (MGCLS, Knowles et al., 2022), which targeted 115 galaxy clusters for \(\sim\)6-10 h each, contains seven systems with double relics, all but one with radio halos. The five new systems are MCXC J0352.4-7401, Abell 521, MCXC J0516.6-5430, MCXC J0232.2-4420, and RXC J2351.0-1954. The LOFAR Two-metre Sky Survey data release two (LoTSS-DR2) was used for a study of diffuse radio emission (halos and relics) in 309 PSZ2 clusters plus a comparison to X-ray data from XMM and Chandra (Botteon et al., 2022; Jones et al., 2023); our PSZ2 cluster is not part of this study. Among these are six double relics (PSZ2 G071.21+28.86, PSZ2 G099.48+55.60, PSZ2 G113.91-37.01, PSZ2 G165.46+66.15, PSZ2 G181.06+48.47, and PSZ2 G205.90+73.76). In addition there has been a search for diffuse radio sources in non-PSZ clusters (Hoang et al., 2022), finding at least two symmetric (i.e. on opposite sides of the cluster) double relics (in Abell 373 and Abell 1889).
The main focus of this paper is on the large double relic discovered in deep MeerKAT 1.3 GHz images around the galaxy cluster PSZ2 G277.93+12.34. We use eROSITA data to search for extended X-ray emission in order to characterize the gaseous cluster properties in more detail. The cluster was marginally detected via the Sunyaev-Zeldovich (SZ) effect at a signal to noise of 5.1 by the Planck Collaboration et al. (2016). It has an integrated flux of \(Y_{\rm 5R500}=(1.87\pm 0.90)\times 10^{-3}\) arcmin\({}^{2}\) and is one of over 1000 SZ sources detected in the Planck full-mission data. For a cluster redshift of \(z\approx 0.158\) (see Section 3.1), which corresponds to a luminosity distance of \(D_{\rm L}=755\) Mpc, we derive an SZ mass of \(M_{\rm 500}=(3.6\pm 0.6)\times 10^{14}\) M\({}_{\odot}\)(Arnaud et al., 2010). We adopt a \(\Lambda\)CDM cosmology with \(H_{\rm 0}=70\) km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\rm M}=0.3\) and \(\Omega_{\Lambda}\) = 0.7.
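For reference, the distances and angular scales used throughout this paper follow directly from the adopted cosmology; as a quick consistency check (our own sketch, using astropy):

```python
# Distances/scales for the PSZ2 cluster (z ~ 0.158) under the adopted
# LCDM cosmology; for illustration only.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z = 0.158
print(cosmo.luminosity_distance(z))         # ~755 Mpc
scale = cosmo.kpc_proper_per_arcmin(z)      # ~164 kpc per arcmin
print((16 * u.arcmin * scale).to(u.Mpc))    # relic separation: ~2.6 Mpc
```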
A similar double radio relic, detected in the merging galaxy cluster PSZ2 G096.88+24.18 (\(z\approx 0.3\)), was recently studied in detail by Jones et al. (2021) using VLA 1.5 GHz, LOFAR 140 MHz and Chandra X-ray data. They found that the two diametrically opposed radio relics of PSZ2 G096.88+24.18, with a separation of \(\sim\)2 Mpc, are approximately equidistant from the cluster center. This is a good example of a combined radio/X-ray study of a single object, similar to our study of PSZ2 G277.93+12.34 using MeerKAT, ASKAP and MWA radio data as well as eROSITA X-ray data.
Furthermore, we present the discovery of a new odd radio circle, ORC J1027-4422, in the same MeerKAT field. We analyse its properties and compare its morphology to that of cluster relics. ORCs are a new class of objects, first found by Norris et al. (2021) and Koribalski et al. (2021) in ASKAP radio continuum data. One possible explanation for their formation is discussed by Dolag et al. (2023) who find ORC-like galaxy merger shocks in their high-resolution cosmological simulations, similar but smaller than typical cluster merger shocks.
This paper is organised as follows: summaries of the observations and data processing are given in Section 2. Our results on the PSZ2 G277.93+12.34 cluster are given in Section 3, followed by the analysis of ORC J1027-4422 in Section 4. Discussion and conclusions follow in Section 5 and Section 6, respectively.
## 2 Observations and Data Processing
### MeerKAT
MeerKAT is a recently completed radio interferometer located in the Karoo desert of South Africa. It consists of 64 \(\times\) 13.5-m antennas, with baselines up to 8 km (Jonas, 2009; Jonas & MeerKAT Team, 2016; Mauch et al., 2020). Of these, 48 antennas are located in the inner core (within a 1-km radius), with the shortest baseline being 29 m. Our field, targeting the nearby NGC 3263 galaxy group (Koribalski et al., 2023, in prep.), was observed on the 27th of May 2021 with 59 out of 64 MeerKAT antennas for 4.64 hr and on the 5th of Jan 2022 with 62 antennas for 4.86 hr, i.e. close to 10 h in total (project ID: SCI-20210212-BK-01). The pointing centre was at \(\alpha,\delta\) (J2000) = \(10^{\rm h}\,29^{\rm m}\,13^{\rm s}\), \(-44^{\circ}\,07^{\prime}\,20^{\prime\prime}\). The central frequency was 1284 MHz with a bandwidth of 856 MHz, divided into 10 channels. The resulting frequency range is 856 to 1712 MHz, overlapping at the low frequency end with the ASKAP data (see Section 2.2). About 50% of the MeerKAT band was flagged due to the presence of radio frequency interference; see, for example, Heywood et al. (2022). The primary flux and bandpass calibrators were PKS J0408-6545 (model intensity: 15.713 Jy) and PKS J1939-6342 (model intensity: 15.006 Jy). The source PKS J1120-2508 (measured intensity: 1.746 Jy) was used as the secondary calibrator. The target field and the secondary calibrator were observed alternately for 36 and 2 minutes, respectively.
To create our final radio continuum images we used two procedures. Initially, we combined the two fully calibrated Stokes \(I\) MeerKAT images and 10-channel cubes obtained from the SARAO archive. A frequency-dependent taper was used to ensure the angular resolution is approximately constant in each of the 10 channels (Mauch et al., 2020). We use tasks in the miriad software package for data combination and analysis. The measured rms near the cluster relics is 3.2 \(\mu\)Jy beam\({}^{-1}\) (Day 1, beam: \(7.63^{\prime\prime}\times 6.68^{\prime\prime}\)) and 3.5 \(\mu\)Jy beam\({}^{-1}\) (Day 2, beam: \(7.45^{\prime\prime}\times 7.14^{\prime\prime}\)). After convolving each image to a common 8\({}^{\prime\prime}\) (10\({}^{\prime\prime}\)) beam, the rms is 2.4 (2.6) \(\mu\)Jy beam\({}^{-1}\) in the combined data set (Days 1+2),
prior to primary beam correction. In our second approach, to improve the quality of the Stokes \(I\) wide-band image and extend the primary beam correction to the 5% level, we downloaded the two calibrated data sets and jointly imaged them as described in Heywood et al. (2022), achieving a synthesized beam of 7.7\({}^{\prime\prime}\) and a significantly lower rms of \(\sim\)1.5 \(\mu\)Jy beam\({}^{-1}\) around the field centre. The rms noise between the two relics is \(\sim\)6.5 \(\mu\)Jy beam\({}^{-1}\) and near ORC J1027-4422 it is \(\sim\)4.0 \(\mu\)Jy beam\({}^{-1}\), higher than around the image centre due to the primary beam correction.
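The quoted noise levels are consistent with an inverse-variance weighted combination of the two observing days; as a rough check (our own estimate, not the imaging pipeline):

```python
# Expected rms of an inverse-variance weighted combination of the
# Day 1 and Day 2 images (values from the text).
import numpy as np

rms = np.array([3.2, 3.5])             # single-day rms [uJy/beam]
weights = 1.0 / rms**2
print(1.0 / np.sqrt(weights.sum()))    # ~2.36 uJy/beam, cf. the quoted 2.4
```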
Figure 1: MeerKAT 1.3 GHz radio continuum image of a newly discovered double relic associated with the galaxy cluster PSZ2 G277.93+12.34. The image has a resolution of 7.7\({}^{\prime\prime}\) and is primary beam corrected. The pointing center lies \(\sim\)43\({}^{\prime}\) south-west of the cluster centre.
Figure 2: MeerKAT 1.3 GHz radio continuum image of the cluster double relic (as in Fig. 1) overlaid with eRASS:3 X-ray (0.3 – 2 keV) contours in blue. The eROSITA image was smoothed with a 30\({}^{\prime\prime}\) Gaussian; the contour levels are 0.001, 0.002, 0.004, 0.008 and 0.016. The overlaid circles indicate the approximate width and centre of the double relic. Both relics, the narrow-angle tail (NAT) radio galaxy PMN J1033–4335 and the double-lobe radio galaxy (DRG) to the east are labelled.
The MeerKAT primary beam full width half maximum (FWHM) is approximately 67.9\({}^{\prime}\times\)65.3\({}^{\prime}\) at 1.28 GHz; it varies from \(\sim\)100\({}^{\prime}\) at the low end of the frequency band to \(\sim\)55\({}^{\prime}\) at the high frequency end of the band (Mauch et al., 2020; Heywood et al., 2022). The northern relic and PMN J1033-4335 are located \(\sim\)52\({}^{\prime}\) from the pointing centre, i.e. 1.5\(\times\) the half power beam width (HPBW) at 1.28 GHz. The southern relic is closer to the pointing centre and lies roughly at the 1.28 GHz HPBW. The location of the two relics far from the pointing centre means that their MeerKAT flux estimates are highly uncertain (see Section 3.1). For the same reason, reliable in-band spectral index and polarisation analysis are not feasible. ORC J1027-4422 resides \(\sim\)26.4\({}^{\prime}\) (\(\sim\)0.8\(\times\) HPBW at 1.28 GHz) from the pointing centre.
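The quoted sensitivity losses can be estimated with a Gaussian approximation to the primary beam (the true MeerKAT beam is only approximately Gaussian, so this is a rough check of our own):

```python
# Gaussian primary-beam attenuation at the source positions, assuming
# FWHM ~ 67.9 arcmin at 1.28 GHz; rough approximation only.
import numpy as np

def pb_gain(r_arcmin, fwhm_arcmin=67.9):
    return np.exp(-4.0 * np.log(2.0) * (r_arcmin / fwhm_arcmin) ** 2)

print(pb_gain(51.4))   # northern relic / NAT: ~0.20, cf. ~19%
print(pb_gain(26.4))   # ORC J1027-4422: ~0.66
```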
### ASKAP
We supplement our deep MeerKAT 1.28 GHz data with Stokes \(I\) radio continuum images from the Australian SKA Pathfinder (ASKAP, Johnston et al., 2008; Hotan et al., 2021; Koribalski, 2022), obtained in March 2020 as part of the Rapid ASKAP Continuum Survey (RACS, McConnell et al., 2020). The first release of RACS-low covers the sky south of \(\delta\approx\) +41\({}^{\circ}\) at a central frequency of 887.5 MHz with 288 MHz bandwidth, overlapping with the low frequency end of the MeerKAT data. The RACS angular resolution is \(\sim\)15\({}^{\prime\prime}\), and we measure an rms of 0.4 mJy beam\({}^{-1}\) near the double relic. The ASKAP data are needed to obtain more reliable flux densities at \(\sim\)900 MHz for the northern relic than is possible with our MeerKAT data due to its location far from the pointing centre. Similarly, we use the RACS data to confirm the primary-beam corrected MeerKAT flux density of ORC J1027-4422 at \(\sim\)900 MHz.
### eROSITA
eROSITA is a new X-ray telescope onboard the Spectrum-Roentgen-Gamma (SRG, Sunyaev et al., 2021) space observatory, which was launched in mid-2019. It consists of seven mirror modules and aims to conduct an all-sky survey program (the eROSITA All Sky Survey: eRASS) every half year. The eRASS data products are offered as 4,700 overlapping sky tiles, each of which has a size of 3.6\({}^{\circ}\times\)3.6\({}^{\circ}\). The eROSITA telescope mirrors provide an average point spread function (PSF) of around 26\({}^{\prime\prime}\) half power diameter (HPD) (Predehl et al., 2021).
The PSZ2 G277.93+12.34 galaxy cluster is mainly located in the eROSITA sky tile sm159135. To account for possible emission from the cluster surroundings, we also made use of three adjacent sky tiles. We list these sky tiles and their center coordinates in Table 1.
\begin{table}
\begin{tabular}{c c c} \hline Tile ID & RA (J2000) & Dec (J2000) \\ \hline \hline sm159135 & 159.3103 & –45.0091 \\ sm160132 & 160.2198 & –42.0083 \\ sm155135 & 155.1724 & –45.0091 \\ sm156132 & 156.2637 & –42.0083 \\ \hline \end{tabular}
\end{table}
Table 1: Positions of the eROSITA sky tiles used for the X-ray emission analysis of the PSZ2 G277.93+12.34 galaxy cluster.
Figure 3: RGB colour image of the double relic cluster PSZ2 G277.93+12.34. The MeerKAT 1.3 GHz radio continuum emission is shown in red plus yellow contours (0.01, 0.1 and 1 mJy beam\({}^{-1}\)), the eRASS:3 X-ray emission in blue plus white contours (same levels as in Fig. 2), and the DSS2 optical \(R\)-band in green. For display purposes we show the radio emission without primary beam correction.
We combined data from three eROSITA surveys (eRASS1-3; the combination of all three is called eRASS:3). The combined sky tiles were centered on the PSZ2 G277.93+12.34 cluster and restricted to \(\sim\)1 degree radius from the center. The internal data processing version c946 was used together with the extended Science Analysis Software (eSASS, Brunner et al., 2022) version 211214. For details of the data reduction and image correction steps we refer the reader to Section 2.1 of Reiprich et al. (2021).
#### 2.3.1 Data Reduction and Image Creation
The eROSITA data reduction began with generating the clean event files and images using the evtool task. We specified flag=0xc00fff30, which removes bad pixels and the strongly vignetted corners of the square CCDs, and pattern=15 to include all patterns (single, double, triple, and quadruple). To analyze the hot gas emission, we focus the imaging analysis on the energy band of 0.3-2.0 keV (hereafter, soft band). The lower energy limit used for the telescope modules (TMs) with on-chip filter (TM1-4, 6; the combination of these TMs is referred to as TM8) was set to 0.3 keV, while for the TMs without on-chip filter (TM5 and 7; the combination is referred to as TM9), due to the optical light leak contamination (Predehl et al., 2021), the lower energy limit was set to 0.8 keV. The next step was to subtract the particle-induced background (PIB) from the image. We modeled the PIB for each TM in each observation based on the results of the eROSITA Filter-Wheel-Closed (FWC) observation data. As the temporal variability of the PIB spectral shape appears to be very small, and as the count rate in a hard band is strongly dominated by PIB events, we used the counts in the 6-9 keV band (hereafter, hard band) from our observations as the total PIB counts. By multiplying these hard band counts with the soft-to-hard band count ratio from the FWC data, we obtain the PIB count estimates in the soft band. Afterwards, the soft band PIB counts were spatially distributed by multiplying them with the non-vignetted exposure map, which was normalized to unity by dividing each pixel by the sum of all pixel values.
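Schematically, this PIB estimate amounts to scaling the observed hard-band counts by the FWC soft-to-hard ratio and distributing them with the normalized unvignetted exposure; a simplified numpy sketch (all array names are our own):

```python
# Simplified per-TM particle-induced background (PIB) model following
# the procedure described above; array names are illustrative.
import numpy as np

def pib_map(hard_counts_img, fwc_soft_to_hard, unvignetted_exp):
    # Total soft-band PIB counts: hard-band counts scaled by the
    # FWC-derived soft/hard ratio.
    total_pib = hard_counts_img.sum() * fwc_soft_to_hard
    # Distribute spatially with the unvignetted exposure map,
    # normalized to unit sum.
    weights = unvignetted_exp / unvignetted_exp.sum()
    return total_pib * weights
```

The soft-band image minus this PIB map, divided by the corrected exposure map, then yields the count-rate image described below.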
The final cleaned and exposure-corrected count rate image was obtained by dividing the PIB-subtracted count image, combined over all observations, by the final combined and corrected TM0 exposure map. We note that the count rates of the final combined image correspond to an effective area given by one TM with an on-chip filter in the energy band 0.3-2.0 keV.
#### 2.3.2 Spectral Analysis
We performed spectral analysis with eRASS:3 data to obtain estimates of the ICM properties of the main (m) structure of the PSZ2 G277.93+12.34 galaxy cluster and the northern (n) structure. All eROSITA spectra from the seven TMs are extracted using the eSASS srctool task. For the main and the northern structure, the X-ray spectra are extracted from circles with radii of 5.7 arcmin (0.93 Mpc) and 4.6 arcmin (0.73 Mpc), respectively, centered at the X-ray centre positions given in Table 2. The spectral fitting was realized with XSPEC (Arnaud, 1996) version 12.12.0. The model for the spectral fitting (for other eROSITA spectral fitting examples see, e.g., Ghirardini et al., 2021; Iljenkarevic et al., 2022; Veronica et al., 2022) includes the cosmic X-ray fore/background (CXB) and the source emission, and is described as follows
\[\begin{split}\texttt{Model}=&\texttt{constant}\times( \texttt{apec}_{1}+\texttt{TBabs}\times(\texttt{apec}_{2}+\\ &\texttt{powerlaw})+\texttt{TBabs}\times\texttt{apec}_{3}). \end{split} \tag{1}\]
The first terms of the equation depict the CXB components scaled to the areas of the source regions (constant [arcmin\({}^{2}\)]). The absorption along the line of sight is represented by TBabs (Wilms et al., 2000). The adopted \(N_{\rm HI}\) values used in this work are from the column density map of the neutral atomic hydrogen (H i) by the HI4PI Collaboration et al. (2016). The thermal emission from the Local Hot Bubble (LHB) and the Milky Way Halo (MWH) are represented by \(\texttt{apec}_{1}\) and \(\texttt{apec}_{2}\), where we fixed their temperature at \(k_{B}T=0.1\) and 0.25 keV, respectively. The absorbed thermal emission of the source spectra is represented by TBabs\(\times\)apec\({}_{3}\). The cosmic X-ray background from the unresolved sources (e.g., Luo et al., 2017) is characterized by a powerlaw with photon index of 1.46. The results of the eROSITA EDR FWC1 data analysis are used for modeling the instrumental background. The normalizations of the CXB components and the instrumental background components of the source spectra are left to vary throughout the fit. The fit was performed in the energy band of \(0.3-9.0\) keV for the TMs with the on-chip filter and \(0.8-9.0\) keV for the TMs without the on-chip filter. We adopted the C-statistics (Cash, 1979) and the Solar abundance table from Asplund et al. (2009). The properties (centre position, radius and \(N_{\rm HI}\)) of the two X-ray emission structures are tabulated in Table 2 and example spectra are shown in Fig. 4.
Footnote 1: [https://erosita.mpe.mpg.de/edr/eROSITAObservations/EDRFWC/](https://erosita.mpe.mpg.de/edr/eROSITAObservations/EDRFWC/)
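A minimal PyXspec sketch of the sky model of Eq. (1) might look as follows (requires HEASOFT; the parameter indices, which follow the component order, and the omission of the per-TM instrumental-background components are our own simplifications):

```python
# Sketch of the Eq. (1) fore/background-plus-source model in PyXspec.
# Instrumental-background components and per-TM bookkeeping are omitted.
from xspec import Model

m = Model("constant*(apec + TBabs*(apec + powerlaw) + TBabs*apec)")
m(1).values = 1.0      # constant: source-region area [arcmin^2]
m(2).values = 0.1      # apec_1 kT [keV]: Local Hot Bubble (fixed)
m(2).frozen = True
m(6).values = 0.078    # TBabs nH [1e22 cm^-2]: Galactic absorption
m(7).values = 0.25     # apec_2 kT [keV]: Milky Way Halo (fixed)
m(7).frozen = True
m(11).values = 1.46    # powerlaw photon index: unresolved CXB (fixed)
m(11).frozen = True
```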
## 3 Cluster Results
The PSZ2 G277.93+12.34 galaxy cluster members are currently not well defined as redshifts are available for only a few galaxies in the area. We adopt \(z\approx 0.158\) as the cluster redshift based on the photometric redshift of the likely brightest cluster galaxy, WISEA J103230.00-433815.4. Galaxies in the vicinity indicate that the cluster redshift could be somewhat higher (see Table 3).
The host galaxy of the nearby NAT radio galaxy, PMN J1033-4335, located \(\sim\)\(7^{\prime}\) north-east of the PSZ2 cluster position, is WISEA J103302.46\(-\)433508.5 (2MASX J10330244\(-\)4335085). Two photometric redshifts are reported, \(z_{\rm ph}\)\(\sim\) 0.1513 (Bilicki et al., 2014) and \(z_{\rm ph}\)\(\sim\) 0.1561 (Bilicki et al., 2016), which gives an average value of \(z_{\rm ph}\)\(\approx\) 0.153, adopted here. Two galaxies in the vicinity, listed in Table 3, have similar spectroscopic redshifts.
### MeerKAT radio continuum emission
Figures 1-3 show the MeerKAT 1.3 GHz radio continuum images of the PSZ2 G277.93+12.34 galaxy cluster and surroundings. We found two large radio relics, located NE and SW of the cluster centre, forming a double radio relic with an angular separation of \(\sim\)16 arcmin, as well as a prominent narrow-angle tail (NAT) radio galaxy, PMN J1033-4335, just east of the northern relic.
\begin{table}
\begin{tabular}{c c c c} \hline structure & RA, Dec (J2000) & radius & \(N_{\rm HI}\) \\ & [deg], [deg] & [arcmin] & [cm\({}^{-2}\)] \\ \hline \hline main (m) & 158.1258, \(-\)43.6502 & 5.7 & \(7.8\times 10^{20}\) \\ north (n) & 158.2644, \(-\)43.5808 & 4.6 & \(7.6\times 10^{20}\) \\ \hline \end{tabular}
\end{table}
Table 2: Properties of the eRASS:3 X-ray emission structures associated with the galaxy cluster PSZ2 G277.93+12.34. We give the centre positions and radii of the circular areas from which the X-ray spectra are extracted as well as the respective H i column densities, \(N_{\rm HI}\).
The two arc-shaped relics, which consist of diffuse, non-thermal synchrotron emission (see below), form part of a circle as indicated in Fig. 2, likely tracing shocks induced by a face-on cluster merger. The circle centre approximately agrees with that of the X-ray emission for the main structure (see Table 2). For a cluster redshift of \(z\approx 0.158\) (see Section 3.2) we estimate a double relic separation of \(\sim\)2.6 Mpc. Figures 5-8 show close-ups of the NAT radio galaxy, the two relics, and the central area.
#### 3.1.1 The NAT radio galaxy PMN J1033-4335
The most prominent radio galaxy in our field is PMN J1033-4335, situated just east of the northern relic (see Fig. 1-3). It is very bright and extended, well-resolved by both ASKAP and MeerKAT. Its radio morphology is that of a head-tail or narrow-angle tail (NAT) radio galaxy (see Fig. 5). Radio lobes trailing southwards from the core (head) suggest the galaxy is moving away from the cluster centre. The projected angular size of the NAT's radio lobes is \(\sim\)2 arcmin or \(\sim\)320 kpc. The NAT radio core and the northern relic have the same projected distance from the cluster centre. It seems plausible that the NAT radio galaxy has fallen through the cluster core and is moving outwards with high speed, possibly overtaking the northern relic. While the NAT may be embedded in the northern relic, it is more likely seen in projection against the relic's eastern part. We measure a total flux density of \(\sim\)234 mJy (primary-beam corrected) in the MeerKAT 1.3 GHz wide-band image.
Using the RACS-low 887.5 MHz image we measure a flux density of close to 300 mJy for the NAT radio galaxy, in agreement with the SUMSS 843 MHz flux density of \(321.7\pm 11.6\) mJy. PMN J1033-4335 is also detected in the 150 MHz TIFR GMRT Sky Survey (TGSS; resolution \(\sim\)25'', rms \(\sim\)3.5 mJy beam\({}^{-1}\)) with a flux density of \(883\pm 89\) mJy (Intema 2016; Intema et al. 2017). Using low-frequency images from the Galactic and Extragalactic All Sky MWA (GLEAM, Hurley-Walker et al. 2017) survey, which has a resolution of \(\sim\)2 arcmin at 200 MHz, we estimate integrated 140 - 170 MHz and 170 - 230 MHz flux densities of \(1025\pm 36\) mJy and \(860\pm 31\) mJy, respectively.
We also derive primary-beam corrected MeerKAT flux densities for PMN J1033-4335 in channels 1, 4 and 7, centred at 888, 1092, and 1345 MHz, respectively. The respective correction factors are approximately 2.1, 3.2 and 5.8 for a distance of \(\sim\)52 arcmin from the MeerKAT pointing centre (Mauch et al. 2020). We get \(\sim\)300, 250, and 160 mJy for the NAT radio galaxy, respectively. The
Figure 4: eRASS:3 integrated X-ray spectra for the main cluster structure (top) and the northern structure (bottom) in the energy band of \(0.3-9.0\) keV. The data points were extracted from the combined telescope modules with the on-chip filter (TM8, see Section 2.3). In the upper part of each plot we show the eRASS:3 data points (black crosses), the total model (red line), the source plus sky background emission (green line), and the instrumental background (blue line), while the residual emission spectrum in terms of sigma deviation is shown in the bottom part.
\begin{table}
\begin{tabular}{l c c c c} \hline Source name & redshift & X-ray & Notes & separation \\ \hline \hline (1) main cluster structure & 0.158 & radius 5\(\aas@@fstack{\prime}\)7 & (see Fig. 8) & \\ WISEA J103230.00–433815.4 & 0.1579 (p) & extended & likely BCG & — \\ WISEA J103236.75–433625.6 & 0.2060 (p) & & & 2.2\(\arcmin\) \\ WISEA J103228.20–433530.0 & 0.1839 (p) & & & 2.8\(\arcmin\) \\ WISEA J103243.59–43042.5 & 0.2034 (p) & & & 3.5\(\arcmin\) \\ WISEA J103302.46–433508.5 & 0.1513 (p), 0.1561 (p) & & NAT host galaxy & 6.6\(\arcmin\) \\ WISEA J103225.84–434454.1 & 0.1242 (p) & & & 6.7\(\arcmin\) \\ \hline (2) northern cluster structure & 0.153 & radius 4\(\aas@@fstack{\prime}\)6 & (see Fig. 5) & \\ WISEA J103302.46–433508.5 & 0.1513 (p), 0.1561 (p) & & NAT host galaxy & — \\ WISEA J103305.55–433412.1 & 0.15454 \(\pm\) 0.00015 (s) & & LEDA 547219 & 1.1\(\arcmin\) \\ WISEA J103307.98–433625.2 & 0.15776 \(\pm\) 0.00015 (s) & & LEDA 546771 & 1.6\(\arcmin\) \\ 2MASS J10334526-4335420 & 0.1584 (g) & & DRG & 7.8\(\arcmin\) \\ \hline \end{tabular}
\end{table}
Table 3: Galaxies likely associated with the main and northern structures of the PSZ2 G277.93+12.34 galaxy cluster. In Col. 2 we list photometric (p) and spectroscopic (s) redshifts from Bilicki et al. (2014, 2016) and Jones et al. (2009), respectively; (g) denotes redshifts from Gaia Collaboration (2022).
corrected 888 MHz flux densities from MeerKAT are in agreement with those from RACS. Using \(860\pm 31\) mJy at 200 MHz (GLEAM) and \(300\pm 7\) mJy at 888 MHz (RACS), we derive a spectral index of \(\alpha\approx-0.71\pm 0.04\), where \(S_{\nu}\propto\nu^{\alpha}\). The Parkes 4.85 GHz flux density of PMN J1033\(-\)4335 is \(\sim\)80 mJy (Wright et al., 1994).
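The spectral indices quoted in this section follow from the standard two-point estimate \(\alpha=\log(S_{1}/S_{2})/\log(\nu_{1}/\nu_{2})\); as a quick check (our own):

```python
# Two-point spectral index for PMN J1033-4335 from the GLEAM (200 MHz)
# and RACS (888 MHz) flux densities quoted above (S_nu ~ nu^alpha).
import numpy as np

def spectral_index(s1, nu1, s2, nu2):
    return np.log(s1 / s2) / np.log(nu1 / nu2)

print(spectral_index(860, 200, 300, 888))   # ~ -0.71
```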
#### 3.1.2 Northern Relic
The northern relic extends over at least 4 arcmin (660 kpc) and has a convex outer edge with respect to the cluster location, as expected. It is shorter than the southern relic by at least a factor of two, but about twice as wide (\(\sim\)90\({}^{\prime\prime}\)). The NAT radio galaxy overlaps (likely in projection) with the eastern side of the northern relic. We measure a flux density of at least 15 mJy in the primary beam corrected MeerKAT 1.3 GHz wide-band data. This value is somewhat uncertain because of the relic's low surface brightness and large distance from the pointing centre (\(51.4\pm 0.8\) arcmin or \(\sim\)1.5 \(\times\) the HPBW at 1.3 GHz, where the sensitivity has dropped to \(\sim\)19% \(\pm\) 1%). The corresponding radio power is \(P_{1.3}\sim 10^{24}\) W Hz\({}^{-1}\) at the adopted distance.
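The radio powers quoted here and below follow from \(P=4\pi D_{L}^{2}S\), neglecting the small k-correction at these redshifts; a sketch of the conversion using astropy:

```python
# Radio power from an integrated flux density (k-correction neglected).
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
import numpy as np

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
D_L = cosmo.luminosity_distance(0.158)
S = 15 * u.mJy                                   # northern relic, 1.3 GHz
print((4 * np.pi * D_L**2 * S).to(u.W / u.Hz))   # ~1.0e24 W/Hz
```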
Using the RACS 887.5 MHz data we measure a flux density of at least \(\sim\)12 mJy for the northern relic, corresponding to a radio power of \(P_{0.9}\sim 0.8\times 10^{24}\) W Hz\({}^{-1}\). In the GLEAM 140 - 170 MHz and 170 - 230 MHz images we measure \(\sim\)33 mJy and \(\sim\)43 mJy with uncertainties of \(\sim\)7 mJy. The resulting radio power, \(P_{0.15}\sim 2.9\times 10^{24}\) W Hz\({}^{-1}\), follows the trend shown by Jones et al. (2023, their Fig. 6). We estimate a spectral index of \(\alpha\approx-0.7\pm 0.2\) between RACS (887.5 MHz) and GLEAM (200 MHz). We also detect the northern relic in the low-resolution GLEAM 140 - 170 MHz image, but not in the shallow TGSS 150 MHz image (rms \(\sim\)3.5 mJy beam\({}^{-1}\)) due to the relic's low surface brightness.
We can compare these values to the radio-power mass relation and the radio power-LLS relation of Jones et al. (2023). Choosing the orthogonal distance regression (see their Tables 2 & 3), they find a radio relic power - cluster mass relation:
\[\log_{10}(P_{0.15})=5.2\cdot\log_{10}(M_{500}/M_{\odot})-51.3\,, \tag{2}\]
and a radio relic power - LLS relation of
\[\log_{10}(P_{0.15})=7.6\cdot\log_{10}\,({\rm LLS/kpc})+2.15\,. \tag{3}\]
Using \(P_{0.15}=2.9\times 10^{24}\) W Hz\({}^{-1}\), we obtain \(M_{500}=3.7(\pm 0.2)\times 10^{14}M_{\odot}\), in agreement with the SZ-derived \(M_{500}\) of \(3.6\times 10^{14}M_{\odot}\), and LLS = 863 kpc, somewhat larger than our measurement of \(\sim\)660 kpc but within the uncertainties of the correlation.
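Inverting Eqs. (2) and (3) for the measured radio power reproduces the quoted values; as a check (our own sketch):

```python
# Invert the Jones et al. (2023) scaling relations (Eqs. 2 and 3)
# for the northern relic, P_0.15 ~ 2.9e24 W/Hz.
import numpy as np

logP = np.log10(2.9e24)
M500 = 10 ** ((logP + 51.3) / 5.2)   # ~3.7e14 M_sun
LLS = 10 ** ((logP - 2.15) / 7.6)    # ~860 kpc
print(f"M500 = {M500:.2e} Msun, LLS = {LLS:.0f} kpc")
```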
#### 3.1.3 Southern Relic
The southern relic is very long and curved with a convex outer edge roughly following the 16 arcmin diameter ring shown in Fig. 2. Its largest linear size (LLS) is at least 10 arcmin or 1.64 Mpc at the adopted distance, and it is located \(\sim\)1.3 Mpc from the cluster centre. The eastern end of the relic is less than 30\({}^{\prime\prime}\) (\(<\)82 kpc) wide, in agreement with theoretical expectations (Kang et al., 2017; Jones et al., 2023) and suggesting it is located at the shock front. Thus the longer southern relic has a much smaller width than the shorter northern relic. Within the wider central part of the southern relic, however, there is a prominent ridge and a wedge-like feature in the west, at an angle of \(\sim\)45\({}^{\circ}\) to the western end of the relic. The shape of the latter feature might be the signature of an inward moving shock. We measure a primary-beam corrected flux density of \(\sim\)10.5 mJy in the MeerKAT 1.3 GHz wide-band image, corresponding to a radio power of \(P_{1.3}\sim 7.1\times 10^{23}\) W Hz\({}^{-1}\) at the adopted distance. Apart from a small area within the wedge, the southern relic is not detected in the RACS 887.5 MHz image; neither is it detected in GLEAM or TGSS.
The average surface brightness of the southern relic is \(\sim\)16 \(\mu\)Jy beam\({}^{-1}\). Hence, we have found an example of a low surface brightness relic (about a factor of 10 below the mean surface brightness of the relics listed in Jones et al. (2023)) of which there are perhaps many more that have been missed by other surveys. In contrast, the northern relic has an average surface brightness of \(\sim\)55 \(\mu\)Jy beam\({}^{-1}\). The low surface brightness of the southern relic and its distance from the MeerKAT pointing centre (\(38\pm 3\) arcmin) does not allow for a reliable in-band spectral index measurement. Scaling the radio power to 150 MHz assuming a spectral index of \(\alpha=-1\), which gives \(P_{0.15}\approx 6.2\times 10^{24}\) W/Hz, we again compare to the relations by Jones et al. (2023). We obtain \(M_{500}\approx 4.3\times 10^{14}M_{\odot}\) close to the SZ-derived \(M_{500}\) of \(3.6\times 10^{14}M_{\odot}\) and LLS \(\approx 950\) kpc, much smaller than our measurement and about 2\(\sigma\) from the correlation.
#### 3.1.4 Cluster centre
The galaxy WISEA J103230.00-433815.4 (\(z_{\rm ph}\approx 0.158\), Bilicki et al. 2014) is located at the peak of the brightest eRASS:3 X-ray emission patch near the centre of the PSZ2 cluster (see Fig. 8). This is likely the brightest cluster galaxy (BCG); its redshift is similar
Figure 5: MeerKAT 1.3 GHz radio continuum emission of the NAT radio galaxy PMN J1033\(-\)4335 (yellow contours, resolution 7.7′′, primary beam corrected) and eROSITA X-ray emission (white contours; smoothed with a 30′′ Gaussian) overlaid onto an RGB colour image consisting of VHS \(K_{\nu}\)-band (red), VHS \(J\)-band (green) and DSS2 \(R\)-band (blue). The NAT’s host galaxy is WISEA J103302.46\(-\)433508.5 at \(z_{\rm ph}\approx 0.153\)(Bilicki et al., 2014, 2016). The contour levels are 0.02, 0.2, 1, 4, 10 and 20 mJy beam\({}^{-1}\) (radio) and 0.0012, 0.0015, 0.002 and 0.0025 (X-ray). To allow comparison between the radio and X-ray morphologies, only the bright emission regions are shown here.
to that of the host galaxy of PMN J1033-4335 (\(z_{\rm ph}\approx 0.153\)) in the northern structure. Using the MeerKAT data we measure a radio flux density of 16 \(\mu\)Jy at 1.28 GHz, corresponding to a radio power of \(\sim\)1.1 \(\times 10^{21}\) W Hz\({}^{-1}\). Around the BCG we note several other radio-detected likely cluster galaxies, including 2MASS J10322960-4338494 (WISEA J103229.72-433848.9). The southern X-ray peak in Fig. 8 coincides with WISEA J103236.43-434058.4.
No radio halo is detected between the relics. Assuming a diameter of 3 arcmin, we put an upper limit of 10 mJy on its diffuse flux density based on the rms in the area, corresponding to a radio power of \(\sim\)6.6 \(\times\) 10\({}^{23}\) W Hz\({}^{-1}\) at 1.3 GHz. Based on the scaling relation between the X-ray luminosity and the radio halo emission (Zhang et al., 2020, and references therein), we expect a radio halo power well below 10\({}^{23}\) W Hz\({}^{-1}\).
Figure 6: MeerKAT 1.3 GHz radio continuum emission (primary beam corrected) of the NAT radio galaxy and the northern cluster relic. The contour levels are 0.04, 0.08, 0.12 (black) and 0.25, 0.5, 1, 2, 4, 8, 15, 20 and 30 (white) mJy beam\({}^{-1}\). The synthesized beam (\(7.7^{\prime\prime}\)) is shown in the bottom left corner.
Figure 7: MeerKAT 1.3 GHz radio continuum emission (primary beam corrected) of the southern relic, which spans over at least 10 arcmin. The contour levels are 0.015, 0.03 (black), 0.06, and 0.6 (white) mJy beam\({}^{-1}\). The synthesized beam (\(7.7^{\prime\prime}\)) is shown in the bottom left corner.
#### 3.1.5 Other galaxies
A prominent double-lobe radio galaxy is included in Figs. 1-3, located \(\sim\)15 arcmin east of the cluster centre. Its host galaxy is 2MASS J10334526-4335420 (\(z\approx 0.158\), Gaia Collaboration 2022), likely a cluster member. The DRG's radio lobes extend \(\sim\)4 arcmin North-South (\(\sim\)660 kpc), and its total flux density is \(\sim\)43 mJy in the RACS 887.5 MHz data and \(40.4\pm 4.2\) mJy in SUMSS 843 MHz data.
### eROSITA X-ray emission
The PIB-subtracted and exposure-corrected eRASS:3 image reveals a disturbed patch of extended X-ray emission between the two radio relics, associated with the approximate PSZ2 G277.93+12.34 cluster center, and another patch around the NAT radio galaxy PMN J1033-4335 (see Figs. 2 and 3). The extent of the hot cluster gas is \(\sim\)11.4 arcmin (\(\sim\)1.9 Mpc at \(z\approx 0.158\)) for the main structure, centred at 10:32:28.9, -43:38:26, and \(\sim\)9.2 arcmin (\(\sim\)1.5 Mpc at \(z\approx 0.153\)) for the northern structure, centred at 10:33:04.3, -43:35:23; see Table 2. The redshifts adopted above are for the respective central galaxies, WISEA J103230.00-433815.4 (\(z_{\rm ph}\approx 0.158\)) and WISEA J103302.46-433508.5 (\(z_{\rm ph}\approx 0.153\)). The available published redshifts for galaxies likely associated with the main and northern cluster structures are listed in Table 3. Since the PSZ2 G277.93+12.34 cluster lies just outside the eRASS legacy survey footprint, the two structures will not be included in the first eROSITA All-Sky Survey (eRASS:1) cluster catalog (Bulbul et al. 2023, in prep.). More galaxy redshifts will become available from the second data release of the all-sky NOIRLab Source Catalog (Nidever et al. 2021).
Through X-ray spectral analysis in the \(0.3(0.8)-9.0\) keV band we obtained the following ICM properties: temperatures (\(k_{B}T\)), normalizations (\(norm\)), and the X-ray luminosities (\(L_{X,0.5-2.0\,\mathrm{keV}}\)). The reported luminosities (5.1 and \(2.8\times 10^{43}\) erg s\({}^{-1}\) for the main and northern structures, respectively) are given at the source redshifts in the source frame energy band of 0.5-2.0 keV. We infer the \(M_{500}\) mass by using the luminosity-mass (\(L_{X}-M\)) scaling relation from Lovisari et al. (2015)
\[\log(L_{X}/C_{1})=a\cdot\log(M_{500}/C_{2})+b, \tag{4}\]
with \(a=1.08\pm 0.21\), \(b=0.18\pm 0.18\), \(C_{1}=10^{43}\)\(h_{70}^{-2}\) erg s\({}^{-1}\), and \(C_{2}=5\times 10^{13}\)\(h_{70}^{-1}\)\(M_{\odot}\), which we invert to obtain \(M_{500}\). The \(R_{500}\) radii are calculated by assuming spherical symmetry and taking 500 times the critical density of the Universe at the assumed redshift. Using the SZ derived \(M_{500}\) cluster mass (see Section 1) and \(L_{X}\) from eROSITA (see Table 4), we use the correlation between \(L_{X}\) and \(M_{500}\) shown by Planck Collaboration et al. (2016, their Fig. 22). We find that PSZ2 G277.93+12.34 lies below the expected correlation by Pratt et al. (2009), but just within the 2\(\sigma\) scatter.
The results of our eROSITA spectral analysis are listed in Table 4, somewhat limited by our low counts. In the energy band used for the spectral fitting, we only have 680 and 420 counts in the main (m) and northern (n) structures, respectively, from all TMs combined. A first fit resulted in upper limits for the metallicities of 1.7 \(Z_{\odot}\) (m) and 1.5 \(Z_{\odot}\) (n). We proceeded with a fixed metallicity fit (\(Z=0.3\)\(Z_{\odot}\)), resulting in good agreement with the free metallicity fit but no improvement in the temperature constraints: 6.54\({}^{+4.02}_{-3.37}\) keV (m) and 6.28\({}^{+20.93}_{-3.11}\) keV (n). Deeper data, for instance, from a follow-up observation by _XMM-Newton_ or _Chandra_, would help improve the ICM estimates of the temperatures and metallicities.
## 4 ORC J1027-4422
The discovery of Odd Radio Circles (ORCs) with ASKAP by Norris et al. (2021) and Koribalski et al. (2021), prompted us to search for similar radio sources in the observed MeerKAT field. The three known single ORCs -- ORC J2103-6200, ORC J1555+2726 and ORC J0102-2450 -- have diameters of 70-80'' (300 - 500 kpc) and are each centered on a prominent elliptical galaxy (\(z\approx 0.2-0.6\)).
Our search resulted in one peculiar radio circle (see Fig. 9), which we analyse here. ORC J1027-4422 consists of a very faint, patchy ring-like radio structure and a much brighter, elongated central area with two distinct radio peaks. The whole system, including the diffuse radio emission within the ring, has a total, primary-beam corrected flux density of at least \(\sim\)1 mJy in the MeerKAT 1.3 GHz wide-band image. The partial ring has a slight ellipticity (diameter \(\sim\)90\({}^{\prime\prime}\) \(\times\) 100\({}^{\prime\prime}\), \(PA\sim 45^{\circ}\)) and consists of two thin, diametrically opposite sections, similar in morphology to a typical double relic in the outskirts of a galaxy cluster like the one discussed in Section 3. We find a weak connection between the two segments on the south-western side. ORC J1027-4422 is centred
Figure 8: MeerKAT 1.3 GHz radio continuum contours (yellow; 0.015, 0.03, 0.06, 0.12, 0.25 and 0.5 mJy beam\({}^{-1}\); resolution 7.7′′) and 0.3–2 keV eRASS:3 X-ray contours (white; 0.002, 0.003, 0.004, and 0.005; smoothed with a 30′′ Gaussian) of the PSZ2 G277.93+12.34 cluster centre overlaid onto an RGB colour image consisting of VHS \(K_{\rm s}\)-band (red), VHS \(J\)-band (green) and DSSZ \(R\)-band (blue). The radio-detected galaxy WISEA J103230.00–433815.4 (\(z_{\rm ph}\approx 0.158\), Bilicki et al. 2014), which is located at the centre of the northern X-ray peak, is the likely BCG.
at approximately \(\alpha,\delta\)(J2000) = 10:27:14.38, -44:22:56.7 (Galactic \(l,b=277.5^{\circ},+11.3^{\circ}\)). The elongated radio source in its central area (\(\sim\)0.4 mJy) has two distinct radio peaks (\(\sim\)58 and 67 \(\mu\)Jy beam\({}^{-1}\)) which are separated by \(\sim\)15\({}^{\prime\prime}\). The centre position given above corresponds to the western radio peak, which may be associated with one or more distant galaxies detected in VISTA \(J\)- and \(K_{\rm s}\)-band images (see Fig. 9, right): e.g., VHS J102714.34-442255.9, VHS J102714.48-442259.6, and VHS J102714.50-442252.5 (McMahon et al. 2013). No galaxies appear to be associated with the eastern radio peak, which could be a radio jet, but see further discussion below. Since no redshifts are available in this area we infer a redshift of \(z\approx 0.3\) based on the red colour of the galaxy near the centre, which has an extinction-corrected \(R\) magnitude of \(\sim\)19.1. This would suggest that the diameter of ORC J1027-4422 is about 400 kpc, similar to that of the single ORCs mentioned above. We detect no eROSITA X-ray emission associated with ORC J1027-4422, and no H\(\alpha\) emission is visible in SuperCosmos images (Parker et al. 2005).
ORC J1027-4422 resides 26.4 arcmin from the MeerKAT pointing centre. To determine an approximate spectral index for the whole system we apply frequency-dependent primary beam correction factors of 1.2 to 2.0 across the MeerKAT band. We obtain \(\sim\)3 mJy at the low end of the band (\(\sim\)900 MHz) and \(\sim\)1 mJy at the high end (\(\sim\)1.6 GHz), which suggests a steep spectral index of \(\alpha\gtrsim-1.9\). This value is very uncertain and likely too steep due to the source offset from the pointing centre and the consequent reduced sensitivity at the high-frequency end of the band. We note that diffuse radio emission on the scales of the ORC should be fully captured by the MeerKAT observations due to the resulting excellent uv-coverage. An integrated spectral index of \(\alpha\gtrsim-1.9\) suggests Mach numbers \(M\gtrsim 1.8\) using \(\alpha=-(M^{2}+1)/(M^{2}-1)\)(Dolag et al. 2023), assuming diffusive shock acceleration.
The relatively bright, central radio emission region in ORC J1027-4422 is also detected in RACS. We measure a flux density of \(\sim\)1.4 mJy at 888 MHz, in agreement with the MeerKAT flux density of \(\sim\)1.5 mJy in the 900 MHz channel. This detection together with the non-detection in TGSS at 150 MHz suggests a spectral index of \(\alpha\gtrsim-1.4\). This would suggest Mach numbers \(\gtrsim\)2.5.
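Inverting the diffusive-shock-acceleration relation quoted above gives \(M=\sqrt{(1-\alpha)/(-1-\alpha)}\) for \(\alpha<-1\); as a quick check of the quoted Mach numbers:

```python
# Mach number from the integrated spectral index, assuming diffusive
# shock acceleration: alpha = -(M^2 + 1)/(M^2 - 1), valid for alpha < -1.
import numpy as np

def mach(alpha):
    return np.sqrt((1.0 - alpha) / (-1.0 - alpha))

print(mach(-1.9))   # ~1.8 (MeerKAT in-band estimate)
print(mach(-1.4))   # ~2.4 (RACS/TGSS limit)
```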
The radio morphology of ORC J1027-4422, in particular the two ring segments and the extended central emission, resembles that of a double-relic cluster with a central radio halo. Could the ring structures be shocks formed through galaxy mergers as recently proposed by Dolag et al. (2023)? ORC J1027-4422 is at least twice as distant and much smaller than the double radio relic associated with the PSZ2 G277.93+12.34 cluster discussed in Section 3. Based on its approximate 400 kpc size, the ORC structure is more likely associated with a galaxy merger than a cluster merger. Similar structures can be seen in Dolag et al. (2023, their Figs. 4 & 7).
ORC J1027-4422 is different from previous single ORCs dis
\begin{table}
\begin{tabular}{l c c c c c} \hline structure & \(norm\) & \(L_{X,0.5-2.0\rm\ keV}\) & \(M_{500}\) & \(R_{500}\) & C-stat / d.o.f. \\ & [\(10^{-5}\) cm\({}^{-5}\) arcmin\({}^{-2}\)] & [\(10^{43}\) erg s\({}^{-1}\)] & [\(10^{14}M_{\odot}\)] & [arcmin] & \\ \hline \hline main (m) & \(2.10^{+0.25}_{-0.16}\) & \(5.08^{+0.39}_{-0.67}\) & \(1.48^{+0.65}_{-0.67}\) & \(4.61^{+0.68}_{-0.69}\) & 1828.61/2261 \\ \hline north (n) & \(1.79^{+0.56}_{-0.21}\) & \(2.78^{+0.30}_{-0.62}\) & \(0.85^{+0.35}_{-0.38}\) & \(3.98^{+0.54}_{-0.59}\) & 1604.75/2047 \\ \hline \end{tabular}
\end{table}
Table 4: Derived eRASS:3 X-ray properties for the main and northern structures of the PSZ2 G277.93+12.34 galaxy cluster for the adopted redshifts of \(z\approx 0.158\) (m) and \(z\approx 0.153\) (n). The C-statistics per degree of freedom (d.o.f.) are given in the last column.
Figure 9: **Left:** MeerKAT 1.3 GHz radio continuum image of ORC J1027–4422. **Right:** Zoomed-in RGB colour image consisting of VHS \(K_{\rm s}\)-band (red), VHS \(J\)-band (green) and DSS2 \(R\)-band (blue), all smoothed to 2\({}^{\prime\prime}\) resolution, overlaid with MeerKAT radio contours at 3, 8, 13, 18, 23, 35, 50, 60 and 200 \(\mu\)Jy beam\({}^{-1}\). The displayed radio images are primary beam corrected and have an angular resolution of 7.7\({}^{\prime\prime}\).
cussed by Norris et al. (2021) and Koribalski et al. (2021) in that it does not have a dominant central elliptical galaxy. It is therefore important to consider other formation scenarios. Could ORC J1027-4422 be a young high-latitude (\(b\) = +11.3 deg) supernova remnant (SNR)? In that case, the extended radio source near its centre may be a pulsar wind nebula (PWN), similar to that in the Galactic SNR G0.9+0.1 (Heywood et al., 2022b, their Fig. 1), or perhaps an accreting X-ray binary system, similar to the W50 / SS 433 complex (Dubner et al., 1998) or the Circinus X-1 source and its associated SNR (Heinz et al., 2013; Coriat et al., 2019). Both of these scenarios would not necessarily have optical counterparts. Future deep X-ray observations may allow the detection of such a PWN. ORC J1027-4422 is a factor \(\sim\)2.4 smaller than the radio remnant around Circinus X-1 (distance \(\sim\)10 kpc) and much fainter in both radio and X-ray emission (undetected), suggesting that in this scenario it would be at least twice as distant.
Known Galactic SNRs without optical or infrared counterparts are typically much larger than a few arcminutes. For example, DA 530 (G93.3+6.9) has a diameter of \(\sim\)27 arcmin, at least 44 pc for a minimum distance of 4.4 kpc, is highly polarised, and undetected in optical/infrared emission (Landecker et al., 1999; Booth et al., 2022). Another example is the intergalactic SNR recently discovered near the Large Magellanic Cloud by Filipovic et al. (2022) which has a diameter of \(\sim\)200'' (\(\sim\)48 pc assuming a distance of 50 kpc); no central radio emission is detected.
## 5 Discussion
### Radio relics and the disturbed ICM
The galaxy cluster PSZ2 G277.93+12.34 is a disturbed system with an elongated X-ray morphology and two radio relics. The radio relics are separated by 16 arcmin (2.6 Mpc), located well beyond the cluster's detected X-ray emission (diameter = 11.4 arcmin or 1.9 Mpc). The X-ray emission of the ICM is elongated in the NE-SW direction and appears to contain two well-separated peaks (see Figs. 2 & 3). However, we cannot exclude that the two peaks come from blended point sources instead of the ICM itself. The direction of the presumed merger axis aligns with the direction of the radio relics that are believed to trace merger shock waves. The unusually large separation of the radio relics suggests that we see the merger after the first pericentre passage (Zhang et al., 2020c).
The measured sizes of the two radio relics (660 and 1640 kpc) differ by more than a factor two, which suggests that they result from an unequal mass merger. The lengths of radio relics depend on the mass ratio of the sub-clusters as well as the mass concentration of the sub-clusters prior to merger, as shown in hydrodynamical simulations by van Weeren et al. (2011). In these idealised simulations, the longer relic is located behind the more massive sub-cluster (see also, Hoang et al., 2017).
The LLS of the southern relic is peculiar, as it is about twice as large as the scaling relation between LLS and cluster mass would suggest (Jones et al., 2023). Strikingly, the widths of the two relics also differ significantly, with the shorter relic being much wider. Note that the relics in the Sausage cluster, A3667, and A3376 also show this combination of short-roundish and long-slim relics. This is not easy to explain, since projection effects ought to affect both relics in a similar fashion. Double relics with different LLS and widths are also seen in the cluster A1240 and are discussed in Hoang et al. (2018).
We propose two reasons for the different appearance of the relics:
1. In unequal mass mergers the trajectory of the smaller cluster could be significantly bent by the gravity of the larger cluster, such that the relics are viewed under different angles. We therefore suggest that the northern relic was produced after the smaller subcluster was deflected towards the line of sight. The thinner and longer relic in the southern subcluster was produced inside the ICM of the smaller cluster when the smaller subcluster first hit the ICM of the larger subcluster.
2. The northern relic is located in a different environment, perhaps related to the patch of X-ray emission in its vicinity. This could lead to a different Mach number or different turbulent velocities from the shock producing the southern relic.
Deeper X-ray observations would reveal the detailed ICM distribution and help us better understand the merging geometry/configuration of the cluster (see, e.g., Zhang et al., 2021).
The NAT radio galaxy's morphology is most likely caused by its fast outbound movement away from the cluster centre, possibly ejected by the merger shock that also caused the double radio relic. Bent-tail galaxies are often observed in clusters, but their geometries vary a lot due to the complexity of cluster merger evolution.
### Extended X-ray IC emission?
The most likely explanation of the extended X-ray emission in the northern structure, around the NAT radio galaxy, is _thermal_ emission from hot intracluster or intragroup gas. The X-ray center position of the northern structure (see Table 2) is \(\sim\)20\({}^{\prime\prime}\) NE of the host galaxy of PMN J1033-4335 (see Fig. 5).
The short exposure time of the eRASS:3 data does not allow us to unambiguously determine the origin of the emission. It is therefore worth considering possibilities other than thermal emission. One may speculate that the northern X-ray emission is related to PMN J1033-4335 since it extends in the same direction as the bent radio lobes; however, it also extends in the opposite direction. It is also wider than the bent radio lobes, but does not extend into the northern radio relic.
The synchrotron emission of PMN J1033-4335 clearly shows the presence of an extended cloud of relativistic electrons. Photons from the cosmic microwave background (CMB) will necessarily inverse Compton (IC) scatter on those electrons, possibly giving rise to detectable extended non-thermal X-ray emission. Since \(P_{\rm Compton}\sim\frac{u_{\rm phot}}{u_{B}}P_{\rm synchrotron}\), the expected X-ray luminosity from IC emission can be estimated by assuming the magnetic field energy density, \(u_{B}\), and employing the known CMB photon energy density, \(u_{\rm phot}\), at the NAT redshift and the measured synchrotron power, \(P_{\rm synchrotron}\) (e.g., Erlund et al., 2008).
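The energy-density ratio entering this estimate is easily evaluated; an order-of-magnitude sketch (our own; the full estimate below additionally requires matching the CRe energies radiating into the observed bands):

```python
# Ratio of CMB photon to magnetic energy density at z = 0.153, which
# sets P_IC / P_synchrotron; order-of-magnitude sketch only.
import numpy as np

a_rad = 7.566e-15                    # radiation constant [erg cm^-3 K^-4]
T_cmb = 2.725 * (1 + 0.153)          # CMB temperature at the NAT redshift [K]
u_phot = a_rad * T_cmb**4            # ~7.4e-13 erg cm^-3
u_B = (3e-6)**2 / (8 * np.pi)        # B = 3 uG -> ~3.6e-13 erg cm^-3
print(u_phot / u_B)                  # ~2
```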
We compute the synchrotron and X-ray luminosities for a power-law energy distribution of cosmic ray electrons (CRe). Assuming that the radio spectral index reflects the slope of the CRe energy distribution for a homogeneous magnetic field distribution, we estimate the expected X-ray power of the NAT radio galaxy and compare it to our X-ray measurements. For the radio spectral index of \(\alpha=-0.71\) (see Section 3.1.1), a flux density of 300 mJy at 888 MHz (i.e., \(P_{\rm 0.9GHz}=1.86\times 10^{32}\) erg s\({}^{-1}\) Hz\({}^{-1}\)) and an assumed magnetic field of 3 \(\mu\)Gauss, we estimate the X-ray power in the 0.5 - 2 keV range from IC emission as \(L_{\rm 0.5-2keV}=1.1\times 10^{41}\) erg s\({}^{-1}\). This is much less than the estimated X-ray power of the
northern structure around the PMN J1033-4335 radio galaxy from our eROSITA measurements, \(L_{X,0.5-2.0\rm~{}keV}=2.8\times 10^{43}\rm~{}erg\,s^{-1}\), given in Table 4. We note that approximately a third of the northern X-ray emission is possibly associated with the radio tails.
To increase the expected X-ray power from IC emission, a steeper spectral index of the radio lobes and/or a smaller magnetic field strength would be required. Conceivably, aged plasma in the radio galaxy may reside in a weaker magnetic field due to the expansion of the lobes. This may cause the actual CRe energy distribution to be steeper than inferred from the radio spectrum assuming a homogeneous magnetic field. Assuming a CRe energy spectrum slope of -3.42, we estimate the X-ray luminosity of the PMN J1033-4335 radio lobes to be \(L_{0.5-2\rm keV}=1.43\times 10^{42}\rm~{}erg\,s^{-1}\).
In addition, decreasing the assumed magnetic field strength to \(1\mu\)Gauss, which requires more CRe to be present to meet the radio luminosity, results in \(1.62\times 10^{43}\rm~{}erg\,s^{-1}\). This is close to the X-ray luminosity of the northern structure. However, since only a fraction of the northern structure coincides with the lobes of PMN J1033-4335, the IC emission could only explain part of the northern structure. It is evident that the X-ray emission does not trace the structure of the lobes well, although this might be an effect of the low sensitivity of the X-ray observations. Deeper observations are required to determine the morphology of the X-ray emission in the northern structure. Overall, IC emission may explain part of the northern structure; however, with the existing data it is not possible to unambiguously decide on the origin of this X-ray emission.
### Cluster and galaxy merger shocks
Cosmological, hydrodynamical simulations show that during the hierarchical formation of galaxies and galaxy clusters, large scale shocks are commonly present. As the merging of smaller entities into larger and larger structures is one of the main growth channels, such shocks are typically driven as part of the thermalization process of the hot atmospheres of these halos. To demonstrate the similarities between the morphology of these shock fronts and the observed radio emission, we analyzed two simulations from the COMPASS set, which are high resolution, zoom-in simulations of galaxies and galaxy clusters (Bonafede et al., 2011). These simulations were carried out with P-Gadget3, a modernized version of P-Gadget2 (Springel, 2005), that implements updated smoothed particle hydrodynamics (SPH) formulations regarding the treatment of viscosity and the use of kernels (Dolag et al., 2005; Beck et al., 2016), allowing a better treatment of turbulence within the circumgalactic medium and the intracluster medium. It also includes a formulation of isotropic thermal conduction at 1/20th of the classical Spitzer value (Spitzer, 1962) as well as an on-the-fly shock finder (Beck et al., 2016). Fig. 10 shows a 3D visualisation of the detected shock geometry from the galaxy (top panel) and the cluster (bottom panel) merger. Here the presented views are selected to closely represent ORC J1027-4422 as well as a double relic similar to the one we found in the PSZ2 G277.93+12.34 cluster. Interactive plots that allow one to explore the details of the 3D geometry are available at the given URL. The simulated galaxy is the one presented in Dolag et al. (2023), while the cluster is the _g6802296_ halo, simulated at the same resolution as the one presented in Zhang et al. (2020). The virial mass of the cluster at \(z=0\) exceeds \(10^{15}\rm~{}M_{\odot}\), the particle masses for DM and gas are \(4.7\times 10^{6}\rm~{}M_{\odot}\) and \(8.9\times 10^{5}\rm~{}M_{\odot}\), respectively, and the softening for both DM and gas particles is set to 0.69 kpc. Therefore, the cluster at redshift \(z=0\) is resolved with \(\approx 5\times 10^{8}\) particles within the virial radius. The galaxy simulation uses 100 times smaller particle masses and a softening of 0.11 kpc.
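The quoted particle number can be checked with simple arithmetic. The sketch below is our own back-of-the-envelope estimate; it assumes one gas particle per DM particle in the high-resolution region and a virial mass of \(1.4\times 10^{15}\rm~{}M_{\odot}\) (the text only states that the mass exceeds \(10^{15}\rm~{}M_{\odot}\)).

```python
# Back-of-the-envelope check of the quoted resolution (assumptions: one gas
# particle per DM particle, and M_vir = 1.4e15 Msun, which is our own choice).
m_dm, m_gas = 4.7e6, 8.9e5        # particle masses [Msun]
m_vir = 1.4e15                    # assumed virial mass [Msun]
n_pairs = m_vir / (m_dm + m_gas)  # DM+gas particle pairs within R_vir
print(f"N ~ {2 * n_pairs:.1e} particles")   # ~5e8, as quoted
```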
## 6 Conclusions
Radio relics are typically elongated, arc-shaped structures found in the outskirts of galaxy clusters (\(>10^{14}\rm~{}M_{\odot}\)) with sizes of several hundred kpc to \(\sim\)2 Mpc. Detected at \(\sim\)1 Mpc from the cluster centre, they trace merger-induced shock waves propagating through the ICM (e.g., Ensslin et al., 1998). For the special case of a merger in the plane of the sky, two relics are found on opposing sides of the merger axis (forming a partial circle), such that the shock fronts are seen approximately edge-on. Cosmological simulations of galaxy mergers reveal similar shock fronts of much smaller size (see Dolag et al., 2023), providing a possible explanation for the formation of
Figure 10: Snapshots of galaxy and cluster merger simulations producing large-scale relics similar to those observed in ORC J1027-4422 (top panel) and in the PSZ2 G277.93+12.34 cluster (bottom panel). Interactive 3D visualisations of the galaxy merger shocks, as presented in Dolag et al. (2023), are available at [http://www.magneticum.org/complements.html#Compass](http://www.magneticum.org/complements.html#Compass). Inner and outer merger structures are visible at a range of Mach numbers. One prominent shock front shows a V-shape, indicating that the centre part is dragged into the cluster by infalling structures. The colours (blue to red) indicate Mach numbers from 2 to 5.
the recently discovered single odd radio circles (ORCs, Norris et al., 2021; Koribalski et al., 2021).
In this paper we report the serendipitous discovery of a double radio relic, separated by 16 arcmin (2.6 Mpc at \(z\approx 0.158\)), in the outskirts of a MeerKAT pointing towards the NGC 3263 galaxy group (Koribalski et al., 2023, in prep.). The deep, high-resolution MeerKAT 1.3 GHz radio continuum images show the new double relic to be associated with the merging galaxy cluster PSZ2 G277.93+12.34 with an SZ mass of \((3.6\pm 0.6)\times 10^{14}\) M\({}_{\odot}\). Complementary eRASS:3 data reveal two X-ray emission regions consisting of a near circular main structure in the cluster centre (\(L_{X,0.5-2.0\text{ keV}}\approx 5.1\times 10^{43}\) erg s\({}^{-1}\)) between the two radio relics and a northern structure surrounding the narrow-angle tail radio galaxy PMN J1033-4335 (\(L_{X,0.5-2.0\text{ keV}}\approx 2.8\times 10^{43}\) erg s\({}^{-1}\)). Using the \(L_{X}-M\) scaling relation, we estimate \(M_{500}\) masses of \(\sim\)1.5 and \(\sim\)0.9\(\times 10^{14}M_{\odot}\) and \(R_{500}\) radii of \(\sim\)4.6\({}^{\prime}\) and \(\sim\)4.0\({}^{\prime}\) for the main and northern cluster structures, respectively.
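The conversion behind the quoted \(R_{500}\) radii is straightforward to reproduce: \(R_{500}\) is the radius enclosing a mean density of 500 times the critical density at the cluster redshift, and the angle follows from the angular diameter distance. The sketch below is our own illustration and assumes a flat \(\Lambda\)CDM cosmology with \(H_{0}=70\rm~{}km\,s^{-1}\,Mpc^{-1}\) and \(\Omega_{m}=0.3\); the adopted parameters are not restated here.

```python
# Minimal sketch: M_500 -> R_500 -> angular size at z = 0.158.
# Cosmological parameters (H0 = 70, Om0 = 0.3) are assumed, not from the text.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z = 0.158

def r500_arcmin(m500_msun):
    # R_500 encloses a mean density of 500x the critical density at z
    m500 = m500_msun * u.Msun
    r500 = (3 * m500 / (4 * np.pi * 500 * cosmo.critical_density(z)))**(1 / 3)
    theta = (r500 / cosmo.angular_diameter_distance(z)).decompose() * u.rad
    return theta.to(u.arcmin)

for m in (1.5e14, 0.9e14):        # main and northern structures
    print(f"M500 = {m:.1e} Msun -> theta_500 = {r500_arcmin(m):.1f}")
# prints ~4.6 arcmin and ~3.9 arcmin, consistent with the radii quoted above
```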
A prominent narrow-angle tail radio galaxy, PMN J1033-4335, is located on the eastern side of the northern relic and, based on the location of its host galaxy and its tail morphology, is heading approximately north-east, i.e., away from the cluster centre. It was likely ejected outwards by the cluster merger shocks that also produced the two radio relics.
The northern relic has a linear extent of at least \(\sim\)0.66 Mpc and a surface brightness of \(\sim\)55 \(\mu\)Jy beam\({}^{-1}\). In contrast, the much thinner southern relic has a longer linear extent of \(\sim\)1.64 Mpc and a surface brightness of \(\sim\)16 \(\mu\)Jy beam\({}^{-1}\). Together the two relics form a partial circle, occupying at least \(\sim\)35% of its circumference. The double relic morphology suggests a face-on orientation of the merger. Both the NAT and the northern relic are also detected in the Rapid ASKAP Continuum Survey (RACS) at 887.5 MHz. Low-frequency imaging of the PSZ2 G277.93+12.34 cluster at both high resolution and high sensitivity would allow us to measure the spectral indices of the relics, estimate their Mach numbers to learn more about the merger shocks, and possibly detect a radio halo.
Furthermore, we report the discovery of ORC J1027-4422, which consists of a partial radio ring of \(\sim\)90\({}^{\prime\prime}\) diameter and an extended, double-peaked central emission area. The western peak, located at the ring centre, is likely associated with a galaxy or galaxy group. The eastern peak has no obvious optical counterpart. While the ring diameter is similar to those of the previously discovered single ORCs, no jets are detected near its central radio sources. The morphology of ORC J1027-4422 strongly resembles that of a double relic and radio halo, but smaller in size than observed for galaxy clusters. The simulations by Dolag et al. (2023) suggest that the observed radio arcs could be outwards-moving merger shocks occasionally forming during a massive galaxy merger. The resulting central elliptical galaxy would have a virial mass of \(\sim\)10\({}^{12}\) M\({}_{\odot}\).
Wide-field radio continuum surveys with ASKAP at 0.7 to 1.8 GHz and \(\sim\)10\({}^{\prime\prime}\)-30\({}^{\prime\prime}\) resolution (e.g., Koribalski et al., 2020; Norris et al., 2021), which are now under way, will discover many clusters by detecting their radio halos and/or relics, as well as more odd radio circles, which is essential for studying their properties and formation mechanisms. ASKAP guest proposals will allow deeper follow-up studies of key targets, very suitable for detailed galaxy cluster research, as each ASKAP field spans 30 square degrees.
## Acknowledgements
We thank Matthias Kluge for valuable comments on an earlier version of this paper.
The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility of the National Research Foundation, an agency of the Department of Science and Innovation. This work is based on data from eROSITA, the soft X-ray instrument aboard SRG, a joint Russian-German science mission supported by the Russian Space Agency (Roskosmos), in the interests of the Russian Academy of Sciences represented by its Space Research Institute (IKI), and the Deutsches Zentrum für Luft- und Raumfahrt (DLR). The SRG spacecraft was built by Lavochkin Association (NPOL) and its subcontractors, and is operated by NPOL with support from the Max Planck Institute for Extraterrestrial Physics (MPE). The development and construction of the eROSITA X-ray instrument was led by MPE, with contributions from the Dr. Karl Remeis Observatory Bamberg & ECAP (FAU Erlangen-Nürnberg), the University of Hamburg Observatory, the Leibniz Institute for Astrophysics Potsdam (AIP), and the Institute for Astronomy and Astrophysics of the University of Tübingen, with the support of DLR and the Max-Planck Society. The Argelander Institute for Astronomy of the University of Bonn and the Ludwig-Maximilians-Universität München also participated in the science preparation for eROSITA. The eROSITA data shown here were processed using the eSASS software system developed by the German eROSITA consortium. We also use infrared images obtained as part of the VISTA Hemisphere Survey, ESO Program 179.A-2010 (PI: McMahon).
BSK thanks Prof. Michael Kramer and the Max-Planck-Institut für Radioastronomie (MPIfR) in Bonn for their kind hospitality during many wonderful research visits during which part of this paper was written. AV acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - 450861021. MB acknowledges support from the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306 and DFG Research Unit FOR 5195. RJD is supported by BMBF grant 05A20PCA. KD acknowledges support by the COMPLEX project from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program grant agreement ERC-2019-AdG 882679 as well as by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. XZ acknowledges financial support from the European Research Council (ERC) Consolidator Grant under the European Union's Horizon 2020 research and innovation programme (grant agreement CoG DarkQuest No 101002585).
## Data Availability
The MeerKAT data used here are available through the SARAO Data Archive at [https://www.sarao.ac.za/](https://www.sarao.ac.za/).
The ASKAP data products are publicly available in the CSIRO ASKAP Science Data Archive (CASDA) at data.csiro.au/domain/casdaObservation.
The public release dates for the eRASS data are given at erosita.mpe.mpg.de/erass/. |
2303.02039 | Analogue gravity and its scientific confirmatory role | Empirical confirmation in some areas of physics, for example
Hawking radiation, is out of reach. However, analogue gravity can simulate these phenomena
in condensed matter systems. An important question is whether the
observation of these phenomena in condensed matter systems can be
confirmatory of the original phenomenon or not. In this work we answer
this question affirmatively via structuralism. | Mojtaba Shahbazi | 2023-03-03T16:02:49Z | http://arxiv.org/abs/2303.02039v1 | # Analogue gravity and its scientific confirmatory role
###### Abstract
Empirical confirmation in some areas of physics, for example Hawking radiation, is out of reach. However, analogue gravity can simulate these phenomena in condensed matter systems. An important question is whether the observation of these phenomena in condensed matter systems can be confirmatory of the original phenomenon or not. In this work we answer this question affirmatively via structuralism.
## 1 Introduction
Analogue gravity (AG) teaches us how a field theory in a curved background can be simulated by a sound wave in a fluid or, more generally, by a perturbation in a condensed matter system. In particular, this method provides a practical way to simulate a physical phenomenon that is not accessible, say, a black hole. One of the great achievements of analogue gravity is the detection of an analogue of Hawking radiation in a water tank [Weinfurtner et al., 2011] and in a Bose-Einstein condensate [Steinhauer, 2016]. The detection of the original Hawking radiation requires two things: first, an evaporating black hole, the nearest of which is very far away from Earth; and second, suitable detector technology, because the Hawking radiation is very weak, particularly when compared with the cosmic microwave background, so the required detectors are beyond our cutting-edge technology. In this vein, Hawking radiation is a physical phenomenon far out of experimental reach. The other inaccessible example is string theory. The length of a fundamental string is of order \(10^{-33}cm\), so detecting a fundamental string requires an accelerator with an energy of order \(10^{16}TeV\); compared with the LHC, which operates at order \(7TeV\), it seems that the detection of a fundamental string lies in the too-distant future. Dealing with these theories and phenomena raises the question of whether they can be experimentally confirmed at all. How does one experimentally confirm them? What are the confirmatory criteria in these cases?
As a first instinct, if one can simulate physical phenomena, one can take advantage of them: if a black hole acts as a natural quantum computer, then an analogue system that simulates a black hole can act as an analogue quantum computer, since the dynamics of the two are the same. Beyond this, it is believed that analogue gravity can take on a confirmatory role in science. In other words, an observation in analogue gravity can confirm the corresponding observation in the original gravitational phenomenon. However, this particular stance has received both criticism and support. Supporters believe that if the physics of the analogue and the original gravitational phenomenon is the same, then an observation in one inevitably carries over to the other, because the same mathematical structure governs the two
phenomena. The critics counter that some phenomena, say Hawking radiation and string theory, have not been confirmed empirically, so on what basis do we consider their mathematical structure to be perfectly valid? Put differently, although the equations of motion of the analogue and the original phenomenon are the same, how do we know that the original phenomenon is described by that mathematical structure? How do we know that a real string is described by string theory? These theories have not been confirmed empirically yet.
In this paper we provide an argument, based on structuralism, that analogue gravity can play the role of empirical confirmation if it can simulate the original phenomenon. The rest of the paper is organized as follows: a brief introduction to the physical background of analogue gravity is provided, then arguments for and against the confirmatory role of analogue gravity are considered, and finally an argument based on structuralism is put forward.
## 2 Analogue Gravity
There are some well-known correspondences in physics that relate two different theories, say AdS gravity to a conformal field theory in the AdS/CFT correspondence [Aharony et al., 2000], or two physical systems, say a gravitational phenomenon to a condensed matter system. These correspondences let us develop our understanding of the core nature of the physics, and they also equip us with mathematical techniques to solve problems.
Analogue gravity shows that if there is a perturbation in a condensed matter system, then its equations of motion are exactly the same as the equations of motion for a field theory in a curved background [Barcelo et al., 2011]. Consider a condensed matter system, say a fluid1; the Euler and continuity equations for a vortex-free fluid are as follows [Barcelo et al., 2011]:
Footnote 1: Electrons in many condensed matter systems behave as a fluid.
\[\nabla\times\vec{v}=0 \tag{1}\] \[\rho\Big(\frac{\partial\vec{v}}{\partial t}+(\vec{v}\cdot\nabla)\vec{v}\Big)=-\nabla p-\rho\nabla V \tag{2}\] \[\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho\vec{v})=0 \tag{3}\]
where \(\vec{v}\) is the velocity of the fluid, \(\rho\) the density, \(V\) the potential, and \(p\) the pressure. Changing variables:
\[\xi:=\ln\rho\quad\vec{v}:=\nabla\psi \tag{5}\] \[g(\xi):=\int^{e^{\xi}}\frac{1}{\rho^{\prime}}\frac{dp(\rho^{\prime})}{d\rho^{\prime}}\,d\rho^{\prime} \tag{6}\]
and linearizing about a background solution \((\xi_{0},\psi_{0})\):
\[\xi=\xi_{0}+\bar{\xi}\quad\psi=\psi_{0}+\bar{\psi} \tag{7}\] \[\frac{1}{\rho_{0}}\Big(\frac{\partial}{\partial t}\frac{\rho_{0}}{g^{\prime}(\xi_{0})}\frac{\partial\bar{\psi}}{\partial t}+\frac{\partial}{\partial t}\Big(\frac{\rho_{0}\vec{v}_{0}}{g^{\prime}(\xi_{0})}\cdot\nabla\bar{\psi}\Big)+\nabla\cdot\Big(\frac{\rho_{0}\vec{v}_{0}}{g^{\prime}(\xi_{0})}\frac{\partial\bar{\psi}}{\partial t}\Big)-\nabla\cdot\rho_{0}\nabla\bar{\psi}+\nabla\cdot\Big(\frac{\rho_{0}\vec{v}_{0}\,(\vec{v}_{0}\cdot\nabla\bar{\psi})}{g^{\prime}(\xi_{0})}\Big)\Big)=0 \tag{8}\]
Then (8) can be rewritten as follows:
\[\partial_{\mu}\big(\sqrt{-g}\,g^{\mu\nu}\partial_{\nu}\bar{\psi}\big)=0 \tag{10}\]
where the metric is given by:
\[ds^{2}=\frac{\rho_{0}}{c(\rho_{0})}\Big(\big(c^{2}(\rho_{0})-v^{2}\big)dt^{2}+2\,\vec{v}_{0}\cdot d\vec{x}\,dt-dx^{2}\Big) \tag{11}\] \[c^{2}(\rho_{0})=g^{\prime}(\ln\rho_{0}) \tag{12}\]
where \(c\) is the speed of sound. In this manner, a fluid in a flat background2 (2) can mimic a field theory3 in a curved background (10), but with the speed of light replaced by the speed of sound; black hole solutions turn into dumb hole solutions, from which a sound wave cannot escape. The classical field \(\psi\) (or \(v\)) is a classical field on a curved background constructed from \(\rho\), \(p\) and \(v\), where the role of the metric \(g^{analogue}_{\mu\nu}\) (the line element (11)) is played by \((\rho,p,v)\). The model of the classical field theory on the curved background (CFCB) can be written as:
Footnote 2: There is a generalization of the flat to curved background [Hu, 2019].
Footnote 3: For simplicity, we considered a scalar field; however, it could be generalized to an arbitrary field theory.
\[M_{S}^{CFCB}=<(\rho,p,v),v>=<g^{analogue}_{\mu\nu},v> \tag{13}\]
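As a consistency check on these definitions, one can verify symbolically that the quantity \(c^{2}(\rho_{0})=g^{\prime}(\ln\rho_{0})\) of Eq. (12) is just \(dp/d\rho\) evaluated on the background, i.e., the usual adiabatic sound speed, for an arbitrary barotropic equation of state. The following sympy sketch is our own illustration, not part of the original derivation.

```python
# Symbolic check (our own sketch) that c^2 = g'(ln rho_0) from Eq. (12),
# with g(xi) defined as in Eq. (6), equals dp/drho for any barotropic p(rho).
import sympy as sp

xi = sp.symbols('xi', real=True)
r = sp.symbols('r', positive=True)   # integration variable rho'
p = sp.Function('p')                 # arbitrary barotropic equation of state

g = sp.Integral(p(r).diff(r) / r, (r, 1, sp.exp(xi)))  # Eq. (6), fixed lower limit
c2 = sp.simplify(g.diff(xi))         # Leibniz rule: integrand at rho' = e^xi, times e^xi
print(c2)                            # dp/drho evaluated at rho_0 = exp(xi)
```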
Quantization of the sound waves proceeds in the same way as quantization of a field theory in a curved background or around a black hole; as a consequence, a Hawking-radiation-like phenomenon appears in analogue gravity [Unruh, 1981]. In a fluid, this Hawking radiation consists of phonon excitations, which have been detected in laboratories [Weinfurtner, 2011][Steinhauer, 2016]. The model of the quantum aspect of the field theory on the curved background, in short quantum field theory on the curved background (QFCB), can be written as:
\[M_{S}^{QFCB}=<(\rho,p,v)_{classical},v_{quantized}>=<g^{analogue}_{\mu\nu},v_ {quantized}> \tag{14}\]
where \(v_{quantized}\) is the quantization of the classical field \(v\) or \(\psi\). The main question is whether the detection of this Hawking-radiation-like phenomenon in a fluid can count as an empirical confirmation of the original Hawking radiation or not.
## 3 Arguments For and Against the Confirmatory Role of AG
In science, many scientists invoke analogies to confirm their claims or at least to guide them to discoveries. In this vein, analogue gravity, by trading on a syntactic isomorphism between the analogue system and the original (which is inaccessible), is claimed to be empirically confirmatory [Dardashti et al., 2017]. A rudimentary example is the correspondence between the oscillation of a pendulum and the oscillation of electric charge in an RLC circuit. In both cases the equations of motion are the same, just as with (2) and (10), and as a consequence there is a syntactic isomorphism; nonetheless, the two systems have quite different ontologies: on the one hand, there is a pendulum, gravity, and the resistance of air; on the other, a resistor, an inductor, and a capacitor. This isomorphism lets us measure the position of the pendulum and, based on that, find the amount of electric charge passing through the cross section of the wire. A very similar case arises when one considers a negative test charge near a uniform spherical configuration of positive electric charge and a point mass near a uniform spherical configuration of mass. The equations of motion for the test charge and the point mass are the same.
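To make this isomorphism concrete, a minimal numerical sketch is given below (our own illustration; all parameter values are arbitrary). The small-angle pendulum and the series RLC circuit both obey \(\ddot{x}+2b\dot{x}+\omega_{0}^{2}x=0\) under the dictionary \(\theta\leftrightarrow q\), damping \(\leftrightarrow R/(2L)\), \(g/l\leftrightarrow 1/(LC)\), so their trajectories coincide exactly despite the different ontologies.

```python
# Minimal sketch of the pendulum/RLC syntactic isomorphism.
# Both systems obey x'' + 2*b*x' + w0^2 * x = 0; parameter values are arbitrary.
import numpy as np

def trajectory(b, w0, x0=1.0, v0=0.0, dt=1e-3, steps=20000):
    x, v, out = x0, v0, np.empty(steps)
    for i in range(steps):            # explicit Euler integration
        x, v = x + dt * v, v - dt * (2 * b * v + w0**2 * x)
        out[i] = x
    return out

g_over_l, damping = 4.0, 0.1          # pendulum: theta'' + 2b theta' + (g/l) theta = 0
R, L, C = 0.2, 1.0, 0.25              # circuit chosen so R/(2L) = b and 1/(LC) = g/l
theta = trajectory(damping, np.sqrt(g_over_l))
charge = trajectory(R / (2 * L), 1.0 / np.sqrt(L * C))
print(np.max(np.abs(theta - charge))) # 0.0: same dynamics, different ontology
```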
In [Dardashti et al., 2017] there is a line of reasoning about how this inference is justified in analogue gravity. Suppose there is a target system \(T\) which is modeled by \(M_{T}\) under conditions \(D_{T}\)4 and a system \(S\) that simulates the target and is modeled by \(M_{S}\) under the conditions \(D_{S}\):
Footnote 4: As van Fraassen notes, \(T\) is a data model and \(M_{T}\) is the theory model; that means \(M_{T}\) represents \(T\), the data model, which is the result of experimental observations, while \(T\) is the representation of the natural phenomenon. Roughly speaking, when talking of \(T\) we mean the natural phenomenon [van Fraassen, 2006].
1. There are mathematical similarities between \(M_{S}\) and \(M_{T}\) sufficient for a syntactic isomorphism under the conditions \(D_{S}\) and \(D_{T}\).
2. System \(T\) is inaccessible within the conditions \(D_{T}\).
3. System \(S\) within the conditions \(D_{S}\) is accessible and we can form a claim such as: a phenomenon \(P_{S}\) is exhibited under the conditions \(D_{S}\) in the system \(S\).
The isomorphism lets us infer the following statement from the above assumptions:
* Claim: under conditions \(D_{T}\) the system \(T\) exhibits a phenomenon \(P_{T}\).
To be assured that the model \(M_{T}\) describes \(T\) accurately enough, one presupposes that there are some shared implicit assumptions between \(M_{S}\) and \(M_{T}\), where the phenomena \(P_{S}\) and \(P_{T}\) are based on these implicit assumptions. In addition, these implicit assumptions relate the two systems. What makes these implicit assumptions special is that they have a model-external basis and are empirically grounded. In other words, if the universality of a phenomenon is established, there is a model-external basis for the phenomenon, and if it is empirically tested, then it is not merely a theoretical curiosity but empirically grounded; this is what Dardashti et al. call MEEGA (model-external and empirically grounded argument). [Dardashti et al., 2017] claims that Hawking radiation appears in different analogue systems that have been empirically confirmed, and concludes:
* There are shared implicit assumptions between the analogue gravity and the original gravitational phenomenon.
[Dardashti et al., 2017] emphasizes that this argument is different from analogical reasoning. In analogical reasoning, having found a correspondence (similarity) between two objects, one develops the correspondence and generalizes it to other properties not included in the original correspondence [Bartha, 2019]. For example, a flower and a telecommunication tower are similar in their configuration: a tall construction with a big head. In analogical reasoning one develops this correspondence and concludes that if the flower has a property, say a leaf, then there would be an analogous property, say an analogous leaf, in the tower. Sometimes this works, and the tower has wave transmitters as analogous leaves, and sometimes it does not. The point is that the counterexamples to analogical reasoning arise from developing the correspondence, and it is worth mentioning that analogue gravity works within the domain of the correspondence and does not develop it. As long as one works within the domain of the correspondence and does not develop it, there is no problem with analogical reasoning. Put differently, as long as one sticks to the equations of motion of analogue gravity and the original Hawking radiation and their quantizations, the correspondence is concrete and legitimate. However, if one broadens the correspondence, say to the singularity of the solutions, the analogy does not work, because the analogue solutions are not singular whereas the real black hole solutions are.
There remains a big question: are the models \(M_{T}\) and \(M_{S}\) empirically adequate enough or not? Obviously, the model \(M_{S}\), which is accessible, is easy to check in experiments; however, the knotty part of the problem lies in the accuracy and adequacy of the model \(M_{T}\), which is inaccessible. It seems that there is no independent clue to the accuracy of the original Hawking radiation, and we are actually in a vicious circle. Put another way [Crowther, 2021]:
* The model describing the original Hawking radiation is not confirmed experimentally and consequently not empirically adequate.
* The analogue gravity is going to provide the experimental confirmation.
* To do that one should make sure that the model of the original Hawking radiation is adequate enough.
[Dardashti et al., 2019] invokes a probabilistic analysis to argue that the more phenomena follow the pattern of Hawking radiation, the more plausible the original Hawking radiation becomes. Having said that, this analysis only ensures that if the model \(M_{T}\) is partially true, then more experimental evidence in analogue gravity increases the credence in \(M_{T}\); nevertheless, this conclusion is still trapped in the previous vicious circle. In other words, the adequacy of the Hawking radiation model is still in question [Crowther et al., 2021].
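To see why the probabilistic route does not escape the circle, consider a toy Bayesian version of the argument (the numbers below are purely illustrative and are not taken from Dardashti et al., 2019). Let \(U\) be the hypothesis that the shared modelling framework is adequate in both domains; analogue evidence \(E\) raises the credence in \(U\), which in turn raises the credence in the original phenomenon \(T\), but only in proportion to the prior placed on \(U\) in the first place.

```python
# Toy Bayesian sketch of the Dardashti et al. (2019)-style argument.
# All probabilities are made-up illustrative numbers.
pU = 0.5                    # prior: the shared framework is adequate
pE_U, pE_nU = 0.95, 0.30    # analogue Hawking radiation E is likelier given U
pT_U, pT_nU = 0.90, 0.10    # original Hawking radiation T is likelier given U

pU_E = pE_U * pU / (pE_U * pU + pE_nU * (1 - pU))   # Bayes: P(U | E)
prior_T = pT_U * pU + pT_nU * (1 - pU)
post_T = pT_U * pU_E + pT_nU * (1 - pU_E)
print(f"P(T) before: {prior_T:.2f}, after analogue evidence: {post_T:.2f}")
# The boost vanishes as pU -> 0: the adequacy of M_T re-enters as a prior.
```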
Some philosophers of physics appeal instead to inductive inference [Evans, 2020]. Evans et al. claim that much of modern physics concerns unobservable phenomena; some of them are manipulable, such as the spin of electrons, and some of them are unmanipulable, such as cosmological phenomena. Unmanipulable ones can be divided into accessible (indirectly observed) phenomena, such as astrophysical phenomena, and inaccessible ones. They claim that in these areas of research physicists carry out their work via inductive reasoning, which is legitimate if one is not too skeptical about inductive reasoning. [Evans, 2020] provides an interesting example of stellar nucleosynthesis, where scientists talk about the nuclear reactions in the cores of stars, which are unmanipulable and inaccessible; scientists find that if they make some assumptions (namely, atomic theory, which is manipulable and accessible, and the photonic spectra of the surface of the star, which are unmanipulable but accessible), they can successfully describe the energy emitted from stars5. [Evans et al., 2020] call this inductive reasoning inductive triangulation: a combination of two inductive inferences that comes to a conclusion. This kind of inference can then be applied to analogue gravity to conclude that the observation of Hawking radiation in analogue gravity leads to the empirical confirmation of the original Hawking radiation, which is unmanipulable and inaccessible.
Footnote 5: Although Evans et al. use inductive inference in the nucleosynthesis argument, it seems that the proper reasoning is IBE (inference to the best explanation). We return to this later.
It seems that [Evans et al., 2020] calls for inductive inference to show that it is reasonable to suppose a particular nuclear reaction in the deep interior of stars; however, the main question to answer is whether the two assumptions, atomic theory plus the surface photons of the star, imply the empirical confirmation of the interior nuclear reaction or not. We think that they answer a different question. The question they are answering is whether it is reasonable to describe the interior of stars based on other theories; they say nothing about whether this description is empirically adequate. In addition, [Evans et al., 2020] rightly state that in inductive inference, systems \(S\) and \(T\) pertain to the same system; however, in analogue gravity \(S\) and \(T\) are merely in the same universality class6.
Footnote 6: Evans et.al emphasize that if we suppose that \(T\) is accurately described by \(M_{T}\), \(M_{T}\) and \(M_{S}\) are isomorphic.
To demonstrate that the analogy in Hawking radiation is not legitimate, Field draws out the relation between universality and empirical constraints in theories by distinguishing two factors: strength and relevance [Field, 2021]. Strength concerns how strongly the conclusion follows from the premises, and relevance concerns how the universality is positively relevant to the system of interest. Field links relevance to the shared microscopic structure of some superficially different macroscopic theories, as with renormalization group theory in condensed matter systems, where different macroscopic theories share a common microscopic mathematical structure. For inaccessible systems we can make a guess and see how well our guess works in observation, whether the guess is relevant or not; however, for Hawking radiation we cannot follow this procedure due to the inaccessibility. In addition, Field states that Hawking radiation and analogue gravity, because of their different microscopic structures, are not relevant to each other, so we are not allowed to make use of Hawking radiation in analogue models.
We raise an objection to Field's exposition. In the correspondence between the RLC circuit and the pendulum, the two systems share no microscopic mathematical structure and share only the same macroscopic structure. Even so, one can be completely sure that, upon measuring a quantity in the circuit, there is an analogous quantity in the pendulum system which, when measured, yields the same magnitude as its circuit analogue. In this way, it seems that the correspondence between the original Hawking radiation and its analogue is reasonable due to their shared
macroscopic structure, even though their microscopic structures differ. In other words, if there is Hawking radiation in analogue gravity, then the shared macroscopic mathematical structure ensures that the original Hawking radiation occurs, unless one doubts whether this shared macroscopic mathematical structure correctly describes the original black hole.
### Structuralism
It appears that if we make sure that \(T\) is adequately described by \(M_{T}\), and the models \(M_{T}\) and \(M_{S}\) are isomorphic, then the observation \(P_{S}\) can provide an empirical confirmation for \(P_{T}\); however, the main problem here is the question: on what basis can one make sure that an unmanipulable and inaccessible system such as the original Hawking radiation, \(T\), is described by Hawking's calculations, \(M_{T}\), without empirical confirmation?
Inference to the best explanation (IBE) applies where there are competing assumptions that are all empirically adequate; the assumption that gives the best explanation is taken to be true7 [10]. Hawking's calculations are based on two assumptions: quantum field theory and the general theory of relativity. Both theories are empirically adequate; however, in Hawking's calculations a field theory in a curved background is considered, that is, a combination of the two theories. These calculations lead to a temperature for black holes. Based on IBE, it is reasonable to generalize quantum field theory to a curved background, just as the special theory of relativity was generalized to the general theory of relativity. In this manner, IBE provides a justification of Hawking's calculations given the current evidence we have in hand from the general theory of relativity and quantum field theory, similar to the argument in [14] about nucleosynthesis. However, if the next radical scientific change predicts the violation of Hawking's calculation, then the whole analogue-gravity program of empirical confirmation is jeopardized.
Footnote 7: There are some critiques of this kind of inference; we do not consider them here and take IBE for granted. For a recent review see [13].
Nevertheless, structuralism seems promising: particularly when one considers the explicit reconstruction of theories from empirical science, structuralism is distinguished among its rivals [15][21]. This school of thought commits us to the mathematical structure of scientific theories and avoids the theories' descriptions of the furniture of the world [10]. In particular, structuralism evades the well-known pessimistic meta-induction8 and the underdetermination of scientific theories9 [12], and it supports Putnam's no-miracle argument10,11.
Footnote 8: The pessimistic meta-induction asserts that, since radical changes in scientific theories have occurred before, the best currently established scientific theories will eventually be abandoned as well [10].
Footnote 9: It is not guaranteed that the evidence determines a unique theory, which means that there is an underdetermination problem for scientific theories.
Footnote 10: The no-miracle argument is one of the most compelling arguments in favor of scientific realism; it states that it would be a miracle if the success of scientific theories did not reflect at least an approximately true description of the world [10].
The structural continuity of scientific theories in structuralism implies that the structure of a new theory reduces to the structure of the old theory under the old theory's conditions of validity. For example, it has been shown that the structure of quantum mechanics reduces to that of classical mechanics [11], special relativity to classical mechanics [12], the general theory of relativity to Newtonian gravitation [13], and the general theory of relativity to geometrized Newtonian gravitation [14].
What we mean by structure is structure in the sense of the semantic approach. It is believed that the semantic approach to scientific theories is more appropriate when philosophers of science consider the practice of scientists. In this sense, a theory is a family of models
rather than an axiomatic system. In addition, the structure consists of models of the theory of the form [French et al., 2006]:
\[M=<S,R_{i},\phi_{j},s_{k}> \tag{15}\]
where \(S\) is a non-empty set, \(R_{i}\) a family of relations, \(\phi_{j}\) a family of functions, and \(s_{k}\) a family of distinguished individuals of the set \(S\). One can represent an evaporating black hole by the following model:
\[M_{T=Hawking\ radiation}=<\mathcal{M},g_{\mu\nu},\phi> \tag{16}\]
where \(\mathcal{M}\) is a manifold, \(g_{\mu\nu}\) a metric (the agent of the general theory of relativity) and \(\phi\) a field theory (the agent of field theory). If the field \(\phi\) is a classical field theory, such as the Maxwell field, there is plenty of experimental evidence for classical field theory on a curved background, for example the Shapiro time delay effect and the blueshifts and redshifts in astronomical observations; then the structure of whatever the next scientific theory turns out to be should reduce to the structure of CFCB:
\[M_{T}^{new}\to M_{T}^{CFCB}=<\mathcal{M},g_{\mu\nu},\phi_{classical}> \tag{17}\]
Then it seems reasonable to simulate and detect the effects caused by CFCB in analogue gravity. Such experiments for the Hawking radiation have been conducted, and the ratio of the modes of the classical field has been measured in a water tank experiment [Weinfurtner et al., 2011]. But here the empirical confirmation of CFCB is independent of analogue gravity: it is based on direct observation. Because CFCB is empirically adequate (as said before, under the CFCB conditions), by structuralism these structures will appear in whatever the next scientific theory turns out to be.
We can summarize the points as follows:
* There is an independent empirical confirmation of classical fields on the curved background (CFCB)
* There exists an isomorphism between the structure of the analogue gravity, \(M_{S}^{CFCB}\) and the structure of the classical Hawking radiation, \(M_{T}^{CFCB}\)
* Whatever the next scientific theory turns out to be, its structure will reduce to the structure of the classical Hawking radiation: \[M_{T}^{new}\to M_{T}^{CFCB}\cong M_{S}^{CFCB}=<g_{\mu\nu}^{analogue},v>\] (18)
So far we have emphasized that the structure of the classical aspect of the original Hawking radiation is empirically adequate and isomorphic to that of analogue gravity, but there is no experimental evidence for the presence of a quantum field theory on a curved background (QFCB).
To show that it is reasonable to expect the quantum aspect of the Hawking radiation to appear in whatever the next scientific theory turns out to be, we restrict our attention to the early times of black hole evaporation, i.e., the times before the so-called Page time12. The point is that Hawking's calculation after the Page time leads to the information paradox (for a review see [Manchak et al., 2018][Raju, 2022]), which means that as long as we stay before the Page time this calculation is an appropriate description, based on IBE. Now we argue as follows:
Footnote 12: The same is true for mapping the Hawking radiation (a calculation in a quantum field theory, the boundary theory) into the area of a surface in AdS (a calculation in a classical theory, the bulk theory). The island prescription, in which the entanglement entropy is computed from a surface in the classical theory, is in agreement with Hawking's calculation before the Page time and differs after the Page time. Owing to the fact that this prescription is a computation in the gravitational theory (the bulk theory), it does not tell us how the Hawking radiation is described in the field theory. However, one can simulate the bulk theory of the Hawking radiation by analogue gravity and map the information paradox and the role of the island onto momentum loss over the dumb hole horizon and pump-driven maintenance of the momentum in the dumb hole, respectively [10]. Nonetheless, it is controversial whether the two dual theories are exactly equivalent (see [12] for a proponent and [11] for an opponent), let alone the confirmatory role of the two dual theories. In short, the confirmatory role of analogue gravity after the Page time is out of the scope of this work.
I) It is believed that the quantum theory is one of the best current fundamental physical theories.
II) The quantum description of a phenomenon whose classical description is empirically confirmed is a more complete description of the phenomenon.
III) It then follows that the quantum description of the classical Hawking radiation is a more complete and structurally developed description. As a consequence, the structure of whatever the next scientific theory turns out to be should reduce to the structure of the quantum aspect of the Hawking radiation.
which means that:
\[\begin{array}{ccccc}M_{T}^{new}&\rightarrow&M_{T}^{QFCB}&\rightarrow&M_{T}^ {CFCB}\\ &&\cong&&\cong\\ &&M_{S}^{QFCB}&\rightarrow&M_{S}^{CFCB}\end{array}\]
In the previous section it was shown that the missing parts of the arguments in favor of empirical confirmation via analogue gravity are the following:
1. Whether real black holes are described by Hawking's calculation.
2. If so, why one should be able to infer the empirical confirmation of the Hawking radiation from its analogue in analogue gravity.
The answer to the first is conclusion III. The answer to the second is that the observation of the Hawking radiation in analogue gravity amounts to the empirical confirmation of the structure \(M_{S}^{QFCB}\), which is isomorphic to the structure \(M_{T}^{QFCB}\).
All in all, if one takes the stance of structuralism, the mathematical structure of the new theory reduces to the structure of the old theory, and consequently the structure of Hawking's calculations (before the Page time), as an old theory, is recovered. In other words, the structure of whatever the next scientific theory turns out to be, \(M_{T}^{new}\), will reduce to the structure of the Hawking radiation, \(M_{T}^{QFCB}\cong M_{S}^{QFCB}\), where the structure of Hawking's calculation is isomorphic to the structure of analogue gravity. As a result, empirical confirmation of the Hawking radiation in analogue gravity leads to empirical confirmation of the structure \(M_{S}^{QFCB}\cong M_{T}^{QFCB}\). Moreover, the observation of \(P_{S}\) yields the empirical confirmation of the original Hawking radiation, \(P_{T}\).
## 4 Conclusion
String theory, Hawking radiation, and some quantum gravitational phenomena are yet to be confirmed empirically; they are inaccessible and will perhaps remain so. This poses a problem for the empirical confirmation of these phenomena. Analogue gravity, by simulating these phenomena, seems to provide fertile ground for an empirical confirmatory role in
these areas of research. Supporters of this idea claim that the mathematical structure of analogue gravity models and the quantum gravitational phenomena are the same. Accordingly, if a phenomenon, say Hawking radiation, occurs on the analogue gravity side, the shared mathematical structure necessitates the occurrence of the analogous phenomenon on the quantum gravity side. However, critics ask on what basis one should think that the original quantum gravitational phenomenon is described by the shared mathematical structure if it has not been confirmed empirically.
Structuralism holds that the continuity and development of science are demonstrated by the continuity and development of the mathematical structure of scientific theories. As a consequence, the mathematical structure of a new theory should be partially reducible to the mathematical structure of the old theory. In this manner, the mathematical structure of new theories of quantum gravity should reduce to that of the old ones, say, partially to Hawking's calculations of the Hawking radiation. Because Hawking's calculations share the same mathematical structure with analogue gravity, the occurrence of the Hawking radiation in analogue gravity partially confirms the occurrence of the Hawking radiation in original black holes.
## Acknowledgment
The author would like to acknowledge his debt to Saeed Masoumi for helpful discussions and to thank Radin Dardashti and Paul Bartha for reading the manuscript and for very useful comments.
|
2303.14883 | GelSight Baby Fin Ray: A Compact, Compliant, Flexible Finger with
High-Resolution Tactile Sensing | The synthesis of tactile sensing with compliance is essential to many fields,
from agricultural usages like fruit picking, to sustainability practices such
as sorting recycling, to the creation of safe home-care robots for the elderly
to age with dignity. From tactile sensing, we can discern material properties,
recognize textures, and determine softness, while with compliance, we are able
to securely and safely interact with the objects and the environment around us.
These two abilities can culminate into a useful soft robotic gripper, such as
the original GelSight Fin Ray, which is able to grasp a large variety of
different objects and also perform a simple household manipulation task: wine
glass reorientation. Although the original GelSight Fin Ray solves the problem
of interfacing a generally rigid, high-resolution sensor with a soft, compliant
structure, we can improve the robustness of the sensor and implement techniques
that make such camera-based tactile sensors applicable to a wider variety of
soft robot designs. We first integrate flexible mirrors and incorporate the
rigid electronic components into the base of the gripper, which greatly
improves the compliance of the Fin Ray structure. Then, we synthesize a
flexible and high-elongation silicone adhesive-based fluorescent paint, which
can provide good quality 2D tactile localization results for our sensor.
Finally, we incorporate all of these techniques into a new design: the Baby Fin
Ray, which we use to dig through clutter, and perform successful classification
of nuts in their shells. The supplementary video can be found here:
https://youtu.be/_oD_QFtYTPM | Sandra Q. Liu, Yuxiang Ma, Edward H. Adelson | 2023-03-27T02:47:19Z | http://arxiv.org/abs/2303.14883v1 | # GelSight Baby Fin Ray: A Compact, Compliant, Flexible Finger with High-Resolution Tactile Sensing
###### Abstract
The synthesis of tactile sensing with compliance is essential to many fields, from agricultural usages like fruit picking, to sustainability practices such as sorting recycling, to the creation of safe home-care robots for the elderly to age with dignity. From tactile sensing, we can discern material properties, recognize textures, and determine softness, while with compliance, we are able to securely and safely interact with the objects and the environment around us. These two abilities can culminate in a useful soft robotic gripper, such as the original GelSight Fin Ray [1], which is able to grasp a large variety of different objects and also perform a simple household manipulation task: wine glass reorientation. Although the original GelSight Fin Ray solves the problem of interfacing a generally rigid, high-resolution sensor with a soft, compliant structure, we can improve the robustness of the sensor and implement techniques that make such camera-based tactile sensors applicable to a wider variety of soft robot designs. We first integrate flexible mirrors and incorporate the rigid electronic components into the base of the gripper, which greatly improves the compliance of the Fin Ray structure. Then, we synthesize a flexible and high-elongation silicone adhesive-based fluorescent paint, which can provide good quality 2D tactile localization results for our sensor. Finally, we incorporate all of these techniques into a new design: the Baby Fin Ray, which we use to dig through clutter, and perform successful classification of nuts in their shells. The supplementary video can be found here: [https://youtu.be/_oD_QFtYTPM](https://youtu.be/_oD_QFtYTPM)
## I Introduction
As both the fields of soft robotics and high-resolution tactile sensing continue to progress, we look towards advancing their intersection and, with it, the field of soft manipulation. Soft robots are useful for a myriad of tasks, ranging from human-robot interaction for care of the elderly, to biomedical devices and prosthetics, to agricultural usages. However, soft robots and manipulators become more useful when they are paired with intricate and high-resolution sensing capabilities [2]. Despite the ability of soft robotic mechanisms to comply to different objects and thus provide a more secure grasp, they are unable to perform more interesting manipulation tasks. This limitation can be partially addressed with tactile sensing.
Tactile sensing in humans allows us to perform multiple tasks that many soft robots cannot do, such as discerning shapes, materials, and textures from a single touch [3]. Although proprioception and simple force sensing are useful, these types of sensors cannot distinguish between different intricate textures, an ability that can be useful in helping the elderly with tasks such as digging through a bag to find their keys or looking through a bag of nuts to find a walnut.
We want the safety and compliance given to us by soft robotic gripper designs combined with the high-resolution tactile sensing that camera-based sensors can provide. However, the integration of these normally rigid sensors into soft robotics and materials is difficult. The development of the original GelSight Fin Ray [1] has helped advance this field. Even so, there are existing issues with its design, such as its lack of mechanical repeatability and a slight loss of compliance due to sensor integration. To further solve this integration problem, we present the following contributions:
* A novel design of a soft, compliant and robust GelSight sensor in the Baby Fin Ray (Fig. 1);
* Synthesis and analysis of fluorescent pigment that has potential use in other soft robotic integration of camera-based tactile sensors;
* Successful performance of nut classification (i.e. identifying textures) using the Baby Fin Ray.
## II Related Work
### _Fin Ray Grippers_
Fin Ray fingers are a family of soft grippers that take advantage of the Fin Ray Effect. They were originally inspired by the deformation of fish fins, which bend against the direction of the applied force [4]. The Fin Ray has a simple structure consisting of two long fin bones and several horizontal ribs, which are 3D printable and easy to modify [5, 6]. Over the years, the Fin Ray structure has been successfully applied to many robotic tasks using its adaptive geometry [5, 7].
Fig. 1: The Baby Fin Ray grabbing a walnut (left) and the corresponding cropped raw image of the mirror that displays the tri-colored tactile sensing region and the indentation of the walnut shell (right).
As a result, researchers have tried to predict the behavior of Fin Ray fingers and improve their performance on grasping tasks. To guide the structure design and optimize their compliance, Armanini et al. proposed a mathematical model based on the discrete Cosserat approach to predict the in-plane and out-of-plane stiffness [8], while Shan et al. developed a pseudo-rigid-body model to evaluate the grasping quality [9]. Studies have also used the Finite Element Method (FEM) to simulate and optimize different Fin Ray designs [6, 10]. Based on FEM simulations, Deng et al. established a hand-object database to determine the optimal design parameters of a Fin Ray gripper [11].
Besides structure optimization, there have also been attempts to add additional components to augment the mechanical performance of Fin Ray fingers. One common issue of Fin Ray fingers is that the fingers experience out-of-plane motion, which can be mitigated by inserting small rigid rods at two ends of the ribs [12]. Crooks et al. also attached side supports and fingernails to enhance out-of-plane rigidity and enable the fingers to grip small objects [5]. Friction pads or electroadhesive pads can also increase the gripping force and improve the grasping payload [13, 10].
Until recently, most research has been focused on the mechanical behavior of Fin Ray fingers; there is very limited work done on integrating Fin Rays with sensors. Yang et al. embedded pressure force sensors inside of Fin Ray fingers [14]. Xu et al. realized intrinsic force sensing with a camera and a neural network to estimate forces using displacements [12]. Although both of these works incorporate force feedback, there remains a need for high resolution tactile information to perceive and manipulate various objects.
### _Camera-based Tactile Sensing_
Camera-based tactile sensing is a technique that takes high resolution images of a contact pad and works seamlessly with computer vision algorithms and image neural networks. Compared with conventional strain/force sensors, camera-based sensors are able to provide rich geometrical information about the contact areas [15].
Although camera-based tactile sensors have mainly been embedded in rigid fingers or fingertips, there are already some promising attempts to combine camera-based tactile sensing with soft grippers. She et al. embedded cameras in an exoskeleton-covered soft finger and enabled both proprioceptive sensing and contact tactile sensing [16]. Amini et al. integrated an Optitrack system and camera-based tactile sensing with a soft end effector in order to map surface roughness for sanding processes [17]. In Faris et al.'s work, proprioception and exteroception of soft fingers are enabled by tracking internal marker patterns with a neuromorphic event-based camera [18].
The original GelSight Fin Ray embodied both the passive adaptability of Fin Ray fingers and the high-resolution tactile sensing of camera-based sensors [1]. It was able to separate proprioceptive changes from the tactile interaction between the silicone tactile sensing pad and objects, but the incorporation of the acrylic piece and the camera is unfavorable for the compliance of the finger. Moreover, the robustness of tactile sensing was not guaranteed under large deformations due to the limited sensing area and the relative movement between the LEDs and the silicone pad.
## III Methods
The improvements to the original GelSight Fin Ray encompass mainly hardware updates and result in the development of the Baby Fin Ray. A side-by-side comparison of the two fingers are also shown in Fig. 2, and the exploded version of the Baby Fin Ray is shown in Fig. 3.
These advancements were made with the goal of improving and advancing the design elements of the sensorized Fin Ray itself, while also demonstrating more of its potential capabilities for tasks that require rich tactile sensing. Another important aspect is the improvement of techniques that could be utilized in other types of soft, compliant camera-based tactile sensors.
Fig. 3: An exploded view of the Baby Fin Ray assembly. Not pictured are the camera, LEDs, and the diffuser.
Fig. 2: A side-by-side comparison of the original GelSight Fin Ray and the Baby Fin Ray.
### _Hardware_
Much of the hardware design for the Baby Fin Ray is inspired by the GelSight Fin Ray paper [1]. Like the original design, we create a hollow inner structure inside of the finger to allow unobstructed viewing of the tactile sensing surface. We utilize a 3D printed 1.75 mm TPU 95A material (Sainsmart) for the Fin Ray struts with a more rigid backing (Onyx material, Markforged) to prevent unwanted twisting motions in the finger that would be induced by hollowing out the structure.
For the silicone gel pad, we use a mixture of 1 to 10 to 3 parts of XP-565 Parts A and B (Silicones Inc.) and a plasticizer (LC1550 Phenyl Trimethicone, Lotioncrafter), respectively. The XP-565 silicone is used because of its translucency. However, because the silicone itself is rigid, the plasticizer is added to soften the material and make it more suitable as a tactile sensor.
Our mold is 3D printed out of Onyx material and a thin piece of 6 mil (0.15 mm) mylar is adhered to the bottom curved portion of the mold with cyanoacrylate glue. The mold has a 2.5 mm by 18 mm rectangular cross sectional area with a 25 mm radius curved profile along one of the longer sides. To ensure levelness of the sensing pad, a flat piece of translucent acrylic sheet is placed on top of the mold, leaving an area in which the silicone could be poured. The whole ensemble is secured with rubber bands and tilted at a slight angle to allow air pockets to rise to the top of the pouring pocket. Once the bubbles escape out of the mold, the mold assembly is placed inside of an oven and cured for 5 hours at 52\({}^{\circ}\)C (125\({}^{\circ}\)F).
Afterwards, we synthesize our sensing membrane paint, which covers the curved surface of the silicone gel pad. This paint, like the one for the original GelSight Fin Ray, is composed of 1 part silicone ink catalyst to 10 parts gray silicone ink base to 2.5 parts 4 \(\mu\)m aluminum cornflakes to 30 parts NOVOCS Gloss (Raw Materials Inc., Schlenk, Smooth-on Inc). This is brushed only on top of the silicone gel pad, so that fluorescent paint can be applied to the sides of the silicone gel and so that the camera can see through the gel volume to the sensing surface.
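Since both the gel and the paint are specified as parts-by-weight recipes, scaling them to a desired batch size is a simple proportional computation. The helper below is our own illustration (the function and batch sizes are hypothetical; only the ratios come from the recipes above).

```python
# Hypothetical helper for scaling the parts-by-weight recipes above;
# only the ratios come from the text, the batch sizes are example values.
def batch(parts, total_grams):
    scale = total_grams / sum(parts.values())
    return {name: round(p * scale, 2) for name, p in parts.items()}

gel = {"XP-565 part A": 1, "XP-565 part B": 10, "LC1550 plasticizer": 3}
paint = {"ink catalyst": 1, "gray ink base": 10,
         "4 um Al cornflake": 2.5, "NOVOCS gloss": 30}
print(batch(gel, 70.0))     # grams per component for a 70 g gel pour
print(batch(paint, 20.0))   # grams per component for a 20 g paint batch
```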
However, the Baby Fin Ray differs from its predecessor in many aspects. Specifically, there were a few problems with the design integration of the GelSight sensor into the Fin Ray in the original design. Despite its ability to comply to different object shapes while performing high-resolution tactile sensing, which made it able to perform simple tactile manipulation tasks, the hardware was not very robust. The sensing region was limited to only half of the sensor despite the use of a large, wide-angle fisheye lens camera, and the finger was unable to comply as much as a non-sensorized Fin Ray would be able to. These issues were addressed by our updated hardware design. The main problems we chose to fix included the following:
* Design for Usability
* Camera Placement
* Illumination & Paint
Solving these problems also gives us the future potential to incorporate some or all of these methods in the development of other soft sensors or robots integrated with high-resolution camera-based sensors.
**Design for Usability** Despite the ability of the original GelSight Fin Ray to perform touch-based manipulation tasks, such as wine glass reorientation, it cannot robustly perform many other types of tasks. Firstly, the LEDs are attached to the deformable acrylic piece that houses the silicone gel pad, meaning that if the Fin Ray grasps a large object or grasps an object incorrectly, the LEDs can be dislodged over time. Another issue is that the silicone gel pad extends to the tip of the sensor, so using the Fin Ray to grab an item out of clutter could dislodge the gel pad. Finally, the length of the Fin Ray (110 mm) makes it impossible for a single camera to view the entire sensing surface.
As such, we halve the length of the Fin Ray design so that it is, coincidentally, around the size of a human finger. This length reduction serves two purposes: (1) the entire front of the sensor is visible to a single camera; (2) only a single illumination source is needed for the entire sensing region.
We also place the blue LEDs (Chanzon 2835 SMD LEDs) at the base of the finger, protectively encased inside the flexible structure. Doing so further allows us to place a piece of VViViD Air-Tint Dark Air-Release Vinyl Wrap Film over the LEDs as a diffuser, which helps to prevent bright spots in the sensing region that were prevalent in the original design.
The silicone gel pad is recessed into the Fin Ray structure so that there is less of a non-smooth interface between the tip of the Fin Ray and the silicone gel pad. We also sharpen the Fin Ray tip to somewhat emulate a fingernail. These changes serve to make the Baby Fin Ray more useful for a different variety of tasks, such as digging through a cluttered, obstructed environment to grab an object.
**Camera placement** One of the most prevalent issues with the GelSight Fin Ray is the large obtrusive camera on the backbone. Although this camera provides high-resolution wide-angle imaging, it also prevents the back of the Fin Ray from flexing. This obstruction makes it difficult for the overall structure to then comply to objects.
A way to resolve this issue is to use a smaller camera that then corresponds with a smaller Fin Ray structure. However, this solution does not completely solve the problem of introducing unnecessary rigidity in the back of the structure, so we turn to the usage of flexible mirrors with inspiration from GelSlim and the GelSight Wedge [19, 20].
In particular, we attach a 0.2 mm PET sheet to the inner back surface of the Fin Ray structure and place a camera at the base of the structure. The camera is angled in a way such that it can unobtrusively see the entire tactile sensing surface using the reflective surface of the PET sheet. Additionally, because the PET sheet is so thin, it can conform with the flexibility of the back structure, and we can see in Fig. 4, using CAD software, that we only need to utilize a 100\({}^{\circ}\) field-of-view camera to see the entire tactile sensing region.
As such, the camera that we use for our design is a Raspberry Pi Zero Spy Camera with a 120\({}^{\circ}\) field of view. This camera is small enough to fit within the base, and it also provides a suitable viewing range for our tactile sensing region. Then, a yellow filter (Rosco E-Colour 765 Sunlight Yellow) is placed under the lens of the camera to help filter out the blue light, which could overpower the rest of the colors in the sensing region.
**Illumination & Paint** Another issue we had to resolve was the slightly rigid acrylic piece in the original design, which serves to house the paint and provide a semi-rigid, deformable tactile sensing surface. The acrylic piece allows the acrylic fluorescent paint to deform with and illuminate the Fin Ray structure without the paint delaminating from silicone. This delamination occurs because the acrylic paint and silicone do not bond well together, necessitating the addition of an acrylic substrate. However, this addition increases the rigidity of the finger.
We choose to completely eliminate the need for such an acrylic piece for this improved design. Not only does this improve upon the flexibility, it can also provide potential design strategies for non-LED illumination systems to other soft camera-based sensors.
To be able to create a flexible, fluorescent paint that can bond to silicone, we use two different types of paint: a commercial fluorescent silicone pigment and a silicone paint of our own design. We choose to focus on adding as much pigment as possible to the fluorescent silicone mixtures so that we can have a more vivid tri-color sensing region. As a result, we do not optimize for the elongation, tensile strength, or other potentially relevant properties of the silicone paint.
For the commercial fluorescent silicone pigment, we use Smooth-On's Silc Pig Electric Green and Pink pigments. Following the instructions provided, we add pigment equal to 3% of the total silicone system weight to our mixture of XP-565 and NOVOCS Gloss (11 to 3), before we stir it, degas it, and paint it onto the long sides of the silicone gel pad in multiple layers. The NOVOCS Gloss is added to thin the mixture and make it easier to paint onto the silicone piece. The silicone gel pad is then placed into the oven to cure for 4 hours at 52\({}^{\circ}\)C (125\({}^{\circ}\)F).
In an attempt to create a customizable flexible paint, which could work with multiple types of acrylic paint, we also synthesize a paint using a silicone adhesive base. In a small container, we measure out 0.5 grams of acrylic paint (Liquitex Basics Acrylic Paint Fluorescent Green and Red) and 1.5 grams of A564 medical adhesive silicone (Factor II, Inc). Because silicone adhesive cures faster with moisture and acrylic paint is water-based, we make sure that the silicone adhesive and paint are not in contact with one another as we add both of them to our container.
Once both have been added, we vigorously stir until the mixture is mostly homogeneous and immediately add 1.0 gram of NOVOCS Gloss to thin out the mixture while constantly stirring. After the clumps of acrylic paint and silicone adhesive have dissolved into the NOVOCS Gloss, we slowly add 3.0 more grams of NOVOCS gloss in 1.0 gram increments, stirring after each addition. Doing so allows the paint to slowly thin out until it can be more easily applied to the sides of the silicone gel pad. To avoid large clumps in the paint, the mixture is strained over a 190 Micron filter (TCP Global) and then multiple layers of the paint are applied onto the silicone gel pad sides. The mixture is left to cure at room temperature for 30 minutes. We also note that this exact same procedure can be done with Silpoxy (Smooth On), and can potentially be performed with other silicone adhesives. However, we choose the A564 for its clearer coloring as opposed to other silicone adhesives that have more of a hazy quality, which we believe could limit the paint fluorescence.
### _Software_
The camera is connected to a Raspberry Pi and video is streamed using mjpg-streamer. To use the data for downstream tasks, we apply a perspective warp to the visible mirror region in each image, converting it into a rectangular image with the correct aspect ratio of the physical flexible mirror. This compensates for slight manufacturing differences that can change the "negative" space around the flexible mirror and tactile sensing region.
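As a concrete illustration, this unwarping step can be implemented with OpenCV's perspective transform. The sketch below is our own, not the released code; the corner coordinates, file name, and output size are hypothetical placeholders that would come from a one-time calibration of the mirror region.

```python
import cv2
import numpy as np

def unwarp_mirror(frame, corners, out_w=320, out_h=240):
    """Warp the quadrilateral mirror region of `frame` to a rectangle.

    `corners` holds the mirror's four corners in the camera image as
    (x, y) pairs ordered top-left, top-right, bottom-right, bottom-left;
    out_w/out_h should match the aspect ratio of the physical mirror.
    """
    src = np.float32(corners)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, M, (out_w, out_h))

# Corner coordinates from a one-time manual calibration (placeholder values).
corners = [(102, 55), (518, 60), (540, 420), (85, 410)]
frame = cv2.imread("tactile_frame.jpg")  # hypothetical file name
rectified = unwarp_mirror(frame, corners)
```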
## IV Analysis
### _Simulation_
To improve the compliance of the original GelSight Fin Ray and the robustness of the hardware during manipulation, we make a more compact design by removing the acrylic piece and by changing the camera location. To verify that the new design has better compliance, we performed a series of FEM simulations using ABAQUS software. We used continuum shell elements for thin sheets such as the mylar and the flexible mirror, while we used eight-node linear brick elements for the main structures of the Fin Ray, e.g., the silicone, semi-rigid backing, base, and Fin Ray structure. Because of the large displacements involved, a nonlinear analysis step was adopted to simulate the complex jamming behavior between the Fin Ray ribs.
Fig. 4: The 2D representation of a 100\({}^{\circ}\) field-of-view camera viewing the tactile sensing region through a mirror and its corresponding virtual camera. As the ray tracing shows, the camera is able to see the entire tactile sensing region while sitting at the base of the finger, where its rigidity will not impede with the deformable Fin Ray structure.
As shown in Fig. 5, we compare the Baby Fin Ray against the "original" GelSight Fin Ray, where the "original" design is a smaller, comparable version of the previous GelSight Fin Ray with the rigid acrylic piece inside the gel pad and the camera on the backbone. A cylinder indenter (Fig. 5a) and a cuboid indenter (Fig. 5b) are pressed into the two fingers at two different locations: the middle finger pad (lower location) and the fingertip (higher location). Fig. 5c and d show the force-displacement curves of the two fingers pressed by the cylinder indenter and the cuboid indenter, respectively. Depending on the indenter shape, location, and depth, the Fin Ray fingers can display very different stiffness or compliance, but the improved design generally needs less pushing force to reach the same indentation depth. Overall, the Baby Fin Ray is 18-32% more compliant than the original design.
### _Fluorescent Silicone Paint_
To analyze our synthesized fluorescent materials and compare them with one another and to the original acrylic paint on an acrylic piece, we performed tensile testing and created our own metric to analyze the illumination and 2D localization through reconstruction of each lighting scheme.
**Tensile Testing** Tensile testing was performed on the two types of synthesized silicone-based fluorescent paint. We chose not to perform this test on the dried acrylic paint since acrylic paint is brittle once dried.
Test samples were prepared in a dogbone mold based on the ASTM D412 standard. The molds were printed with Onyx material on the Markforged printer. A laser-cut 0.15 mm mylar piece was placed on the bottom of the mold, and an acrylic piece was clamped on top of the mold to enforce an even thickness across the entire dogbone specimen. After preparing the sample mixtures, they were poured into the molds and left to cure per the paint preparation instructions. Some examples of our samples are shown in Fig. 6.
After curing, all of the paint samples experienced some shrinkage. Although the dimensions of the dogbone profile of each piece were close to the others (sub-millimeter differences), the thicknesses varied more (differences of about 1 mm). As such, the dimensions were re-measured after curing, right before the dogbones were put into the tensile testing machine. Testing was performed on an Instron machine, and samples were stretched until they broke.
**Illumination/2D Localization** To test the integrity of our illumination schemes using different paints, we compared the reconstruction outputs (2D localization) of each paint illumination pattern against our "base case": the acrylic paint on a deformable acrylic piece.
We created three different tactile sensing samples and placed them on a Baby Fin Ray. Three different objects, a 5 mm by 10 mm rectangular block, a 4.75 mm diameter ball bearing, and a Lego block piece were then lightly pressed into the three different sensing regions. Corresponding reference images, images where no tactile imprint was given to the sensing region, were used to obtain a difference image that only showed the tactile imprint of our objects. This difference image was then used to obtain a reconstruction image, which we used as a 2D localization metric for comparing the position of the pressed object with the image reconstruction to determine the paint fidelity with respect to a GelSight sensor.
This comparison was made by first manually segmenting the ball bearing and rectangular block. We then used a distance error metric for the center of the ball segmentation and utilized the Dice coefficient metric to compare
Fig. 5: FEM simulation results of the improved design and the “original” design, which is a smaller version of the previous GelSight Fin Ray for the sake of comparison. The indentations happen at the middle finger pad (lower location) and the fingertip a) Displacement fields of cylinder indentation; b) Displacement fields of cuboid indentation; c) Force-displacement curves of the improved and “original” fingers with cylinder indenter; d) Force-displacement curves of the improved and “original” fingers with cuboid indenter.
Fig. 6: Fluorescent silicone paint dogbone samples used for tensile testing on the Instron machine. From left to right: pink Silc Pig, pink synthesized paint, green Silc Pig, and green synthesized paint samples.
the similarities of the rectangular segmentation image with the 2D localization results [21]. A spherical object and rectangular object were chosen to best represent the lighting illumination changes where both a smooth and a discrete object were pressed into the sensing region. In particular, we chose a spherical object to see if the lighting illumination was uniform enough for our reconstruction to determine the appropriate placement of the circle center. The Lego block piece was chosen to determine, in a qualitative way, how finely detailed the reconstruction image could be. However, for many manipulation tasks, it is unnecessary to have extremely high-resolution sensing that would allow us to see the tiny words on a Lego block piece, which are approximately 0.2 to 0.3 mm in width.
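Both comparison metrics are straightforward to compute from binary segmentation masks. The minimal sketch below assumes equal-shape NumPy arrays; the function names are our own and not from the paper's codebase.

```python
import numpy as np

def dice_coefficient(pred_mask, true_mask):
    """Dice = 2|A∩B| / (|A| + |B|) for two boolean segmentation masks."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return 2.0 * intersection / (pred.sum() + true.sum())

def center_error(pred_mask, true_mask):
    """Pixel distance between the centroids of two masks (used for the ball bearing)."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return np.array([xs.mean(), ys.mean()])
    return np.linalg.norm(centroid(pred_mask) - centroid(true_mask))
```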
### Paint Analysis Results
As shown in Fig. 7, we see that both of the synthesized (acrylic) paints outperformed the Silc Pig based fluorescent paints in percent elongation at breakage and their ultimate tensile strengths (UTS). Whereas the green and red acrylic paints gave us a percent elongation of 1,348% and 1,212%, and a UTS of 1.37 MPa and 1.65 MPa, respectively, the corresponding values for the Silc Pig paints were 267%, 269%, 0.23 MPa, and 0.26 MPa.
We believe that these differences in values between the two types of paint could in part be due to the extensive amount of Silc Pig pigment we put into the silicone mixture. Although the pigment is synthesized to work well with and be miscible with silicone, the pigment has the potential to disrupt crosslink formation of the silicone, which would cause the silicone material to have lower percent elongation and UTS values.
This phenomenon could also have occurred with the addition of acrylic paint to the silicone adhesive. However, we believe that because silicone adhesive is designed to adhere to many different types of surfaces, disruption of crosslink formation would not have had as severe an effect on the integrity of the synthesized paint.
Another interesting effect we noticed was that the acrylic paints had different stress-strain behaviors. Although both curves behaved similarly at the beginning of the tensile test, the red acrylic paint began to exhibit nonlinear behavior before breakage. We believe this behavior can be attributed to the material differences between the red and green acrylic paints from Liquitex and how miscible each is with the NOVOCS Gloss and silicone adhesive mixture. The Silc Pig mixtures, on the other hand, behaved similarly to each other, most likely because these pigments were manufactured to perform in this way.
With regards to illumination schemes, we found that although the acrylic paint on an acrylic sheet visually performed equivalently to the other two at discerning finer details, it was not better than the other two paints at 2D localization. In fact, the silicone adhesive acrylic paint mixture slightly outperformed all of the paints, but not significantly so. The reconstruction algorithm gave a 1.4 pixel error for the circle centers and an 85.0% Dice score for the rectangular reconstruction segmentation image, compared to 3.6 pixel errors for both the Silc Pig and acrylic-on-acrylic schemes and similarity coefficients of 75.0% and 72.3%, respectively. A comprehensive list of our results is shown in Table I.
Overall, our metrics show that there is not a significant difference between the different types of illumination schemes. This metric could be different if we were to include the reconstruction normal gradients; however, we found that including normal gradients was not useful for many applications and that 2D localization was an acceptable alternative.
We also note that the acrylic-on-acrylic illumination scheme produced more uniform lighting throughout the entire sensor. Specifically, for both the Silc Pig paint and the synthesized paint, the tip of the sensor had slightly dimmer illumination than the base. This nonuniform illumination, which did not affect our experimental results, is most likely due to the lower index of refraction of silicone compared to acrylic. As a result, more of the blue light refracts out of the silicone material instead of undergoing total internal reflection within it, as it does when the blue LEDs are pressed against an acrylic piece.
## V Experiment
### _Nut Classification_
To leverage the new, improved hardware modifications, we chose to utilize the Baby Fin Ray in an object "sorting" task, using nuts that were still in their shells. This task shows how the new sensor can classify objects based on their shape and texture.
For our task, we chose to use almonds, Brazil nuts, pecans, and walnuts, which all have semi-distinct shell textures and are of various shapes and sizes. We collected 500 tactile images each of the four different nuts, pressed against different parts of the sensor in various orientations and with flipped lighting configurations. The grasps were guided by hand. Next, the images were processed via an image unwarping process and the dataset was split into 80% for training and
Fig. 7: Stress-strain curves of the fluorescent silicone paint, where the acrylic paints represent the synthesized silicone adhesive based paints. The graphs have been cut off at the point of breakage of the dogbone tensile testing pieces.
20% for validation, with data augmentation applied on the training dataset. We used a Resnet-50 architecture, shown in Fig. 8, for training on the unwarped images with four total classes representing the different types of nuts [22]. For our neural net, we used the stochastic gradient descent optimizer with a learning rate of 1e-3, and a learning rate scheduler with a step size of 7 and a gamma of 0.1.
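A sketch of this training setup in PyTorch is shown below. The optimizer and scheduler settings follow the text; the use of pretrained weights, the momentum value, the epoch count, and the stand-in data loader are our assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 with a four-way head: almond, Brazil nut, pecan, walnut.
model = models.resnet50(weights="IMAGENET1K_V1")  # pretraining is an assumption
model.fc = nn.Linear(model.fc.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # momentum assumed
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

# Stand-in loader; in practice this wraps the 80/20-split, augmented, unwarped images.
train_loader = [(torch.randn(8, 3, 224, 224), torch.randint(0, 4, (8,)))]

for epoch in range(25):  # epoch count not reported; 25 is a placeholder
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```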
We also trained the same neural network on the raw images instead of on our processed, unwarped images.
### _Results_
Without unwarping, our neural network achieved 99.5% accuracy, while with unwarping it achieved 95.8% accuracy on the validation set. We believe this slight discrepancy is due to the amount of "negative" space in our unprocessed images. Although it was not immediately visible to our eyes, there are potential discrepancies in the black covering of the Fin Ray when differently sized objects are grasped. As such, the "negative" space might lead to higher accuracy for classifying the nut shells on this dataset, but could yield lower accuracy on sensors that are manufactured slightly differently.
Overall, our unwarped neural net had 98.1%, 96.8%, 96.9%, and 91.3% accuracy on classifying almonds, Brazil nuts, pecans, and walnuts, respectively. We also found that the Baby Fin Ray was able to both conform well to these differently sized nuts and was robust enough to withstand multiple digging attempts through our bowl of nuts. Not only did the synthesized paint stay bonded to the silicone, the design changes also allowed us to have more compliance and gave us more repeatable lighting results in between multiple grasps. The tapered end of our structure also means that the Baby Fin Ray could be used for digging tasks. These changes overall make the Baby Fin Ray more usable in manipulation tasks than its predecessor.
## VI Conclusion and Discussion
The ability to leverage both the compliance and flexibility of soft robots with the intricate sensing afforded by camera-based tactile sensing is essential. If we hope to continue advancing soft manipulation so that soft grippers can eventually perform complex human manipulation tasks, we almost certainly need both of these abilities in our grippers. In this paper, we show an updated version of the GelSight Fin Ray, which not only encompasses both abilities, but has been improved so that it can potentially perform robustly in manipulation tasks.
We increased its robustness by making hardware changes to the GelSight Fin Ray, including an update to the camera placement and a revamp of the illumination design for better compliance. Additionally, we specifically designed the finger for digging through clutter tasks. Our analysis of the different fluorescent paint illumination schemes also shows us that although there is not too much difference between the different schemes, our synthesized silicone adhesive and acrylic paint mixture showed the highest elongation at breakage and UTS. These properties show promise for the synthesis of other types of soft grippers with proprioception or tactile sensing, since there are many soft grippers that either expand or are constantly experiencing some form of deformation.
Finally, we showcase that these new design implementations and sensing capabilities allow us to repeatedly grab nuts with the conformability of the Baby Fin Ray, and then use the camera-based sensor to very accurately classify their various distinct textures. These abilities open up avenues for us to, in the future, perform complicated digging tasks through cluttered household environments to retrieve specific objects. We also note that adding markers would
Fig. 8: The architecture used for training our nut classification task. We take the raw image from our tactile sensor, unwarp the mirror image, before feeding it into a Resnet-50 architecture with four classes: Almond, Brazil Nut, Pecan, and Walnut.
allow us to perform incipient slip analysis and potentially proprioception as well.
Overall, we show strategies to integrate more intricate, highly-detailed sensing into soft, compliant robots, and to do so without jeopardizing the original flexible and conformable capabilities of the soft robot. Doing so allows soft robotics and soft manipulation to progress even further so that they can one day perform at the level of human hands.
## VII Acknowledgements
Toyota Research Institute, the Office of Naval Research, and the SINTEF BIFROST (RCN313870) project provided funds to support this work. The authors would also like to thank Xuanhe Li, Jerry Zhang, and Jialiang (Alan) Zhao for their advice and help with tensile testing, task design, neural net training, and general tips. Finally, the authors would like to show their appreciation for the insightful comments Arpit Agarwal and Yichen Li gave about lighting and reconstruction.
|
2307.06970 | Machine Learning-Assisted Pattern Recognition Algorithms for Estimating
Ultimate Tensile Strength in Fused Deposition Modeled Polylactic Acid
Specimens | In this study, we investigate the application of supervised machine learning
algorithms for estimating the Ultimate Tensile Strength (UTS) of Polylactic
Acid (PLA) specimens fabricated using the Fused Deposition Modeling (FDM)
process. A total of 31 PLA specimens were prepared, with Infill Percentage,
Layer Height, Print Speed, and Extrusion Temperature serving as input
parameters. The primary objective was to assess the accuracy and effectiveness
of four distinct supervised classification algorithms, namely Logistic
Classification, Gradient Boosting Classification, Decision Tree, and K-Nearest
Neighbor, in predicting the UTS of the specimens. The results revealed that
while the Decision Tree and K-Nearest Neighbor algorithms both achieved an F1
score of 0.71, the KNN algorithm exhibited a higher Area Under the Curve (AUC)
score of 0.79, outperforming the other algorithms. This demonstrates the
superior ability of the KNN algorithm in differentiating between the two
classes of ultimate tensile strength within the dataset, rendering it the most
favorable choice for classification in the context of this research. This study
represents the first attempt to estimate the UTS of PLA specimens using machine
learning-based classification algorithms, and the findings offer valuable
insights into the potential of these techniques in improving the performance
and accuracy of predictive models in the domain of additive manufacturing. | Akshansh Mishra, Vijaykumar S Jatti | 2023-07-13T11:10:22Z | http://arxiv.org/abs/2307.06970v1 | Machine Learning-Assisted Pattern Recognition Algorithms for Estimating Ultimate Tensile Strength in Fused Deposition Modeled Polylactic Acid Specimens
###### Abstract
In this study, we investigate the application of supervised machine learning algorithms for estimating the Ultimate Tensile Strength (UTS) of Polylactic Acid (PLA) specimens fabricated using the Fused Deposition Modeling (FDM) process. A total of 31 PLA specimens were prepared, with Infill Percentage, Layer Height, Print Speed, and Extrusion Temperature serving as input parameters. The primary objective was to assess the accuracy and effectiveness of four distinct supervised classification algorithms, namely Logistic Classification, Gradient Boosting Classification, Decision Tree, and K-Nearest Neighbor, in predicting the UTS of the specimens. The results revealed that while the Decision Tree and K-Nearest Neighbor algorithms both achieved an F1 score of 0.71, the KNN algorithm exhibited a higher Area Under the Curve (AUC) score of 0.79, outperforming the other algorithms. This demonstrates the superior ability of the KNN algorithm in differentiating between the two classes of ultimate tensile strength within the dataset, rendering it the most favorable choice for classification in the context of this research. This study represents the first attempt to estimate the UTS of PLA specimens using machine learning-based classification algorithms, and the findings offer valuable insights into the potential of these techniques in improving the performance and accuracy of predictive models in the domain of additive manufacturing.
Keywords:Additive Manufacturing; Machine Learning; Fused Deposition Modeling; Classification Algorithms
## 1 Introduction
In recent years, Artificial Intelligence (AI) has emerged as a transformative force across various industries, revolutionizing processes and driving innovation. The manufacturing and healthcare sectors are no exception, as they have experienced significant benefits from the integration of AI-driven technologies [1-4]. Among the most significant benefits of AI in manufacturing is its capacity to enhance efficiency and productivity. AI-enabled systems can process enormous volumes of data in real time, allowing manufacturers to detect patterns and trends that can be harnessed for process improvement. Machine learning algorithms, a branch of AI, can evolve and refine over time, making manufacturing systems increasingly adept at forecasting equipment malfunctions and reducing downtime [5-9].
In a study conducted by Du et al. [10], the researchers examined the conditions leading to void formation in friction stir welded joints, as these voids negatively impact the mechanical properties of the joints. To investigate this phenomenon, the authors employed a decision tree and a Bayesian neural network. They analyzed three types of input datasets, including unprocessed welding parameters and computed variables derived from both analytical and numerical models of friction stir welding. In a study conducted by Roman Hartl et al. [11]., the authors investigated the application of Artificial Neural Networks (ANNs) in analyzing process data from friction stir welding to predict the quality of the resulting weld surface.
As in other industries, Artificial Intelligence is gaining interest in additive manufacturing. Du et al. [12] demonstrated that employing a synergistic approach that combines physics-informed machine learning, mechanistic modeling, and experimental data can mitigate the prevalence of common defects in additive manufacturing. By scrutinizing experimental data on defect formation for widely used alloys, sourced from disparate, peer-reviewed literature, the researchers were able to identify several crucial variables that elucidate the underlying physics behind defect formation. Maleki et al. [13] employed a machine learning (ML)-based methodology to explore the relationship between residual stress, hardness, and surface roughness (influenced by the applied post-treatments) and the depth of crack initiation sites as well as the fatigue life of post-treated additive manufactured samples. Various other research works have also implemented machine learning in the domain of structural integrity [14, 15, 16, 17, 18, 19, 20, 21, 22].
The relationship between structural integrity and ultimate tensile strength (UTS) is significant in the case of fused deposition modeled (FDM) polylactic acid (PLA) specimens. Structural integrity pertains to the capacity of a structure or material to endure loads and retain its form and functionality without experiencing failure. It encompasses various aspects such as strength, stiffness, durability, and resistance to deformation or breakage. Ultimate tensile strength (UTS) is a measure of the maximum stress a material can withstand before it fails under tension. It represents the peak load-bearing capability of a material and indicates its ability to resist being pulled apart or stretched. UTS is typically determined through tensile testing, where a specimen is subjected to progressively increasing tensile forces until it fractures. When it comes to FDM PLA specimens, the structural integrity of the printed parts is influenced by multiple factors, including the design, print settings, material properties, and post-processing techniques. The ultimate tensile strength of the PLA specimens serves as a vital indicator of their capacity to bear loads and their resistance to tension.
This study marks the first endeavor to implement supervised machine learning classification algorithms for predicting the Ultimate Tensile Strength (UTS) of Polylactic Acid (PLA) specimens produced via the Fused Deposition Modeling (FDM) process. We examined the applicability of four distinct supervised classification algorithms i.e., Logistic Classification, Gradient Boosting Classification, Decision Tree, and K-Nearest Neighbor in estimating the UTS of 31 PLA specimens, using Infill Percentage, Layer Height, Print Speed, and Extrusion Temperature as input parameters.
## 2 Problem Statement
Accurately estimating the Ultimate Tensile Strength (UTS) of Polylactic Acid (PLA) specimens created through the Fused Deposition Modeling (FDM) process is crucial for ensuring optimal performance and reliability in various applications. Traditional methods for determining UTS tend to be labor-intensive and typically necessitate destructive testing. Consequently, there is a growing demand for a more efficient, non-destructive approach to predict UTS by leveraging advancements in machine learning.
This study aims to evaluate the accuracy and efficacy of four distinct supervised machine learning classification algorithms i.e. Logistic Classification, Gradient Boosting Classification, Decision Tree, and K-Nearest Neighbor in estimating the UTS of PLA specimens. Input parameters include Infill Percentage, Layer Height, Print Speed, and Extrusion Temperature. The primary challenge is to identify which algorithm, if any, exhibits superior performance in differentiating between the two classes of ultimate tensile strength within the dataset, ultimately determining the most suitable choice for classification in this research context. Furthermore, this study seeks to investigate the potential of machine learning-based classification algorithms in enhancing the performance and precision of predictive models within the additive manufacturing domain. As the first attempt to estimate the UTS of PLA specimens using these techniques, this research offers valuable insights and contributes to the advancement of knowledge in this field.
## 3 Experimental Procedure
The Fused Deposition Modeling (FDM) process shown in Figure 1 works by creating three-dimensional objects layer by layer, using thermoplastic materials like polylactic acid (PLA). In this method, a computer-aided design (CAD) model is prepared and converted into a compatible file format, which is then sliced into thin horizontal layers by specialized software. These layers generate a set of instructions, or G-code, for the 3D printer to follow during the printing process. The printer's extruder heats the PLA filament, a biodegradable material derived from renewable sources, and deposits it through a nozzle onto the build platform. As the extruder moves in the X and Y directions and the build platform moves in the Z direction, the object is formed layer by layer. The PLA material fuses with the previous layer and solidifies as it cools, creating the final 3D object. Support structures may be needed during printing for complex geometries or overhangs, and post-processing steps such as sanding or painting can be employed to achieve the desired finish.
Fused Deposition Modeling (FDM) samples were fabricated utilizing a Creality Ender 3 machine with a bed size of 220 x 220 x 250 mm, shown in Figure 2. The dimensions of the tensile specimens measured 63.5 x 9.53 x 3.2 mm, adhering to the ASTM D638 standard requirements as shown in Figure 3. The part design was created and subsequently converted into an STL file using CATIA software. The STL file was then processed into a machine-readable G-code file with the assistance of the Cura engine within the Repetier software, which slices the file as shown in Figure 4.
In this research study, the dataset shown in Table 1 was initially converted into a CSV file format to facilitate its import into Google Colaboratory (Colab) for the development of machine learning-based classification algorithms using Python programming. Four distinct classification algorithms were employed for analysis, including Decision Tree, K-Nearest Neighbor (KNN), Logistic Regression, and Gradient Boosting Classifier. The material's ultimate tensile strength (UTS) served as the basis for classification. If the UTS was below 80% of the base material's UTS, it was labeled as '0', while a value above 80% of the base material's UTS was labeled as '1'. This labeling approach allowed for the differentiation between materials with relatively lower and higher tensile strengths.
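A minimal sketch of this labeling step is shown below. The CSV file name, column names, and the numerical value of the base material's UTS are hypothetical placeholders, not values from the paper.

```python
import pandas as pd

df = pd.read_csv("fdm_pla_uts.csv")  # hypothetical file and column names
BASE_UTS = 63.0                      # base material UTS in MPa; placeholder value

# Label '1' if the specimen reaches at least 80% of the base material's UTS.
df["label"] = (df["uts_mpa"] >= 0.8 * BASE_UTS).astype(int)

X = df[["infill_pct", "layer_height_mm", "print_speed_mm_s", "extrusion_temp_c"]]
y = df["label"]
```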
To evaluate and compare the performance of these classification models, two key metrics were considered: the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC-ROC) and the F1 score. The AUC-ROC score measures the classifier's ability to discriminate between the two classes, with a higher score indicating better performance. On the other hand, the F1 score represents the harmonic mean of precision and recall, providing a balanced evaluation of the model's accuracy in terms of both false positives and false negatives. By comparing the AUC-ROC and F1 scores of the four classification algorithms, this research aims to identify the most suitable algorithm for predicting the ultimate tensile strength of materials based on the given dataset, ultimately contributing to a better understanding of material properties in the context of additive manufacturing.
Figure 1: Schematic representation of Fused Deposition Modeling process
Figure 2: Ender 3 3D printer
Figure 3: Schematic sketch of Tensile Specimen
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Infill percentage (\%) & Layer height (mm) & Print speed (mm/sec) & Extrusion temperature (\({}^{\circ}\)C) & Ultimate Tensile Strength (MPa) \\ \hline
78 & 0.32 & 35 & 220 & 46.17 \\ \hline
10.5 & 0.24 & 50 & 210 & 42.78 \\ \hline
33 & 0.16 & 35 & 220 & 45.87 \\ \hline
33 & 0.32 & 35 & 200 & 41.18 \\ \hline
33 & 0.16 & 65 & 200 & 43.59 \\ \hline
100 & 0.24 & 50 & 210 & 54.2 \\ \hline
78 & 0.16 & 35 & 200 & 51.88 \\ \hline
33 & 0.32 & 65 & 200 & 43.19 \\ \hline
78 & 0.32 & 65 & 200 & 50.34 \\ \hline
33 & 0.16 & 65 & 220 & 45.72 \\ \hline
78 & 0.16 & 35 & 220 & 53.35 \\ \hline
55.5 & 0.24 & 50 & 210 & 49.67 \\ \hline
33 & 0.32 & 35 & 220 & 45.08 \\ \hline
55.5 & 0.24 & 50 & 190 & 47.56 \\ \hline \end{tabular}
\end{table}
Table 1: Experimental Dataset
Figure 4: Tensile specimen a) before slicing, b) after slicing
## 4 Results and Discussion
### Metric Features used in the present work
A confusion matrix serves as an essential evaluation tool for classification algorithms. It is a table that compares the true labels of a given set of test data with the predicted labels generated by the algorithm as shown in Figure 5. The matrix consists of two rows and two columns, with the rows indicating the true labels and the columns representing the predicted labels. The four cells of the matrix reveal the number of instances that fall into each possible combination of true and predicted labels.
The diagonal cells of the confusion matrix represent the number of instances where the predicted label matches the true label, whereas the off-diagonal cells signify the number of instances where the predicted label is different from the true label. This provides insight into the performance of the algorithm, including the true positive rate (TPR), false positive rate (FPR), precision, recall, and F1 score.
The TPR is the ratio of true positives to all positive instances in the dataset, as defined in Equation 1, while the FPR is the ratio of false positives to all negative instances, as defined in Equation 2. Precision is the ratio of true positives to all predicted positives (Equation 3), whereas recall is the ratio of true positives to all actual positives (Equation 4). The F1 score is the harmonic mean of precision and recall (Equation 5) and can be used to evaluate the overall performance of the classification algorithm.
\[TPR=\frac{TP}{TP+FN} \tag{1}\]
\[FPR=\frac{FP}{FP+TN} \tag{2}\]
\[Precision=\frac{TP}{TP+FP} \tag{3}\]
\[Recall=\frac{TP}{TP+FN} \tag{4}\]
\[F1\text{-}Score=2\times\frac{Precision\times Recall}{Precision+Recall} \tag{5}\]
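These definitions translate directly into code. The helper below is our own sketch, assuming nonzero denominators for the confusion-matrix counts.

```python
def classification_metrics(tp, fp, fn, tn):
    """Evaluate Eqs. (1)-(5) from confusion-matrix counts."""
    tpr = tp / (tp + fn)                                 # Eq. (1)
    fpr = fp / (fp + tn)                                 # Eq. (2)
    precision = tp / (tp + fp)                           # Eq. (3)
    recall = tpr                                         # Eq. (4), identical to TPR
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (5)
    return tpr, fpr, precision, recall, f1
```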
The Receiver Operating Characteristic (ROC) curve serves as a graphical evaluation method for binary classification models, illustrating the relationship between the true positive rate
Figure 5: Nomenclature of Confusion Matrix
(TPR, or sensitivity) and the false positive rate (FPR, or 1-specificity) across a range of decision thresholds. The Area Under the Curve (AUC) is a scalar metric that quantifies the overall performance of the classifier by measuring the area beneath the ROC curve.
The process of constructing the ROC curve involves plotting TPR against FPR for varying decision thresholds. To achieve this, classifier output probabilities are arranged in descending order, and the decision threshold is shifted from the highest to the lowest probability. For each threshold, TPR and FPR are calculated and plotted as a point on the ROC curve.
The AUC metric is computed as the area beneath the ROC curve, ranging from 0 to 1, where a higher value signifies superior classifier performance. An AUC of 0.5 corresponds to a random classifier, while an AUC of 1 implies a flawless classifier. The AUC can be determined through trapezoidal or rectangular approximation techniques. The trapezoidal method entails summing the areas of the trapezoids formed by consecutive points on the ROC curve, as shown in Equation 6.
\[AUC=\sum_{i=1}^{N-1}\frac{\left(FPR(i+1)-FPR(i)\right)\times\left(TPR(i+1)+TPR(i)\right)}{2} \tag{6}\]
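The trapezoidal approximation of Equation 6 can be computed in a few lines; the sketch below assumes the ROC points are sorted by increasing FPR.

```python
import numpy as np

def auc_trapezoid(fpr, tpr):
    """Area under the ROC curve by the trapezoid rule of Eq. (6)."""
    fpr, tpr = np.asarray(fpr), np.asarray(tpr)
    return np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0)
```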
The following subsections discuss the metric values obtained by each individual algorithm.
### Logistic Classification
Logistic classification predicts whether a sample belongs to one category or another based on a set of features. In the present study, it is used to predict whether the UTS of an additively manufactured specimen is greater or less than 80% of the UTS of the base material, as shown in Equation 7.
\[y=\begin{cases}0,&\text{if UTS}<80\%\text{ of the UTS of the base material}\\ 1,&\text{otherwise}\end{cases} \tag{7}\]
The input features are \(x_{1}\) for infill percentage, \(x_{2}\) for layer height, \(x_{3}\) for print speed, and \(x_{4}\) for extrusion temperature, so the label of a fabricated specimen can be represented as a function of \((x_{1},x_{2},x_{3},x_{4})\).
The logistic classification model uses Equation 8 to make its predictions.
\[P(y=1|x_{1},x_{2},x_{3},x_{4})=\frac{1}{1+e^{-(w_{0}+w_{1}x_{1}+w_{2}x_{2}+w_ {3}x_{3}+w_{4}x_{4})}} \tag{8}\]
Where \(P(y=1|x_{1},x_{2},x_{3},x_{4})\) represents the probability that the UTS of the additively manufactured specimen belongs to the category labeled as 1. The parameters \(w_{0}\), \(w_{1}\), \(w_{2}\), \(w_{3}\), and \(w_{4}\) are learned from the training data; the best values must be found to make the model as accurate as possible.
These parameters are learned by minimizing the cost function shown in Equation 9.
\[J(w_{0},w_{1},w_{2},w_{3},w_{4})=-\frac{1}{m}\sum_{i=1}^{m}\left[y(i)\log\left(h\left(x(i)\right)\right)+\left(1-y(i)\right)\log\left(1-h\left(x(i)\right)\right)\right] \tag{9}\]
Where \(J(w_{0},w_{1},w_{2},w_{3},w_{4})\) is the cost function to be minimized, \(m\) is the number of training samples, \(y(i)\) is the binary label of the \(i\)-th specimen, and \(h(x(i))\) is the predicted probability that the \(i\)-th specimen has a UTS greater than or equal to 80% of the base material's, based on the current values of \(w_{0}\), \(w_{1}\), \(w_{2}\), \(w_{3}\), and \(w_{4}\).
Gradient descent is used to iteratively adjust the values of \(w_{0}\), \(w_{1}\), \(w_{2}\), \(w_{3}\), and \(w_{4}\) to minimize the cost function \(J(w_{0},w_{1},w_{2},w_{3},w_{4})\). The trained logistic regression model is then used to predict the UTS class of new specimens from their infill percentage, layer height, print speed, and extrusion temperature. Figure 6 shows the obtained confusion matrix and Figure 7 shows the obtained Receiver Operating Characteristic (ROC) curve.
Figure 6: Confusion Matrix obtained for Logistic classification algorithm
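The gradient-descent fit of Equations 8 and 9 can be sketched in a few lines of NumPy, as below. The learning rate and iteration count are assumptions; in practice a library routine such as scikit-learn's LogisticRegression would typically be used instead of this hand-rolled loop.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iters=5000):
    """Minimize the cross-entropy cost of Eq. (9) by batch gradient descent.

    X holds the four (ideally standardized) input features per row; a
    column of ones supplies the intercept w0. Returns (w0, w1, ..., w4).
    """
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iters):
        h = sigmoid(Xb @ w)              # Eq. (8)
        grad = Xb.T @ (h - y) / len(y)   # gradient of Eq. (9)
        w -= lr * grad
    return w
```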
### Gradient Boosting Classification
The gradient boosting algorithm iteratively builds an ensemble of decision trees that minimizes the classification error on the training data. Like the other classifiers, it tries to predict the correct class from a set of input values, but it does so by combining many small decision rules, each of which is a simple model that predicts the answer from a few input values.
The classifier starts by creating a first model that makes some predictions based on the input values. Then it looks at the mistakes that the first model made and creates a second model that tries to correct those mistakes. It keeps doing this, creating many models and correcting their mistakes, until it has a final model that is very good at predicting the correct answer. Each time the classifier creates a new model, it gives more weight to the input values that were difficult to predict correctly in the previous models. This helps the classifier focus on the input values that are most important for making a good prediction.
Equation 10 based on Gradient Boosting classification is used to make predictions on new specimens.
\[y(x)=\sum_{i=1}^{n}\gamma_{i}h_{i}(x) \tag{10}\]
Where \(y(x)\) represents the predicted output (0 or 1) for a given set of input parameters \(x\); the sum over \(i=1\) to \(n\) indicates that the contributions of the individual decision trees in the ensemble are summed, with \(\gamma_{i}\) representing the weight assigned to each tree; and \(h_{i}(x)\) represents the
Figure 7: ROC curve for Logistic Classification
output of the i-th decision tree, which depends on the values of the input parameters x. Figure 8 shows the obtained confusion matrix and Figure 9 shows the obtained Receiver Operating Characteristic (ROC) curve.
Figure 8: Confusion Matrix obtained for Gradient Boosting classification algorithm
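A scikit-learn sketch of this classifier follows. The 80/20 train/test split and the default hyperparameters are assumptions, since the text does not report the settings used; X and y are the feature matrix and labels from the labeling sketch above.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# X, y as built in the labeling sketch above; the split fraction is an assumption.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

gbc = GradientBoostingClassifier()  # hyperparameters not reported; defaults assumed
gbc.fit(X_train, y_train)
y_pred = gbc.predict(X_test)
```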
### Decision Tree Classification
The Decision Tree Classification (DTC) algorithm asks a series of yes-or-no questions about the input variables to predict the output variable. The questions are organized into a tree-like structure, with the initial question at the top (the "root" of the tree) and subsequent questions branching off from there. Each question splits the data into two groups based on the answer (e.g., infill percentage \(>50\%\) or not), and the process continues until a final prediction is made at the bottom of the tree (the "leaves"). The goal of the decision tree classifier is to ask the questions that give the most information about the output variable with the fewest questions; this is done by selecting the best question to ask at each branch of the tree based on some criterion (e.g., information gain). Once the decision tree is constructed, it can be used to make predictions on new data by following the path from the root to the appropriate leaf node. Each leaf node corresponds to a particular value of the output variable, and the prediction is simply the value associated with that leaf node.
Let \(X=\{x_{1},x_{2},...,x_{n}\}\) be the set of input variables and \(y\) be the output variable. A DTC can be represented by a tree \(T\) with a set of nodes \(V=\{v_{1},v_{2},...,v_{k}\}\) and edges \(E=\{e_{1},e_{2},...,e_{m}\}\), where each node \(v_{i}\) corresponds to a question about the input variables and each leaf node corresponds to a prediction of the output variable.
Figure 9: ROC curve for Gradient Boosting Classification
The construction of the tree can be described using a set of splitting rules that determine how to partition the data at each node. Let \(Q\) be the set of splitting rules and let \(q(v)\) be the splitting rule at node \(v\). The tree is then constructed by recursively partitioning the data according to the splitting rules until all remaining nodes are leaf nodes.
The prediction of the DTC can be represented using a set of decision rules that determine which leaf node a new input vector \(x\) is assigned to. Let \(R\) be the set of decision rules and let \(r(v)\) be the decision rule at node \(v\). The prediction of the DTC for input vector \(x\) is then given by Equation 11.
\[y=p(x;T)=r(v_{j}),\quad\text{if }x\text{ satisfies the conditions of the decision rule }r(v_{j})\text{ at node }v_{j} \tag{11}\]
where \(v_{j}\) is the leaf node to which \(x\) is assigned based on the decision rules.
In the present work, the DTC shown in Figure 10 is constructed with the following hyperparameters: criterion = 'entropy', max_depth = 6, min_samples_leaf = 1, min_samples_split = 2, splitter = 'best'. The criterion parameter specifies the quality of the split, with 'entropy' indicating that the information gain criterion is used. The max_depth parameter specifies the maximum depth of the tree, limiting the number of questions that can be asked. The min_samples_leaf parameter specifies the minimum number of samples required to be at a leaf node, while the min_samples_split parameter specifies the minimum number of samples required to split an internal node. The splitter parameter specifies the strategy used to choose the split at each node, with 'best' indicating that the best split is chosen based on the criterion. The DTC is trained on the X_train and y_train data using the fit method, allowing it to learn the patterns in the data and construct an appropriate decision tree. Figure 11 shows the obtained confusion matrix and Figure 12 shows the obtained Receiver Operating Characteristic (ROC) curve.
Figure 10: Decision Tree architecture obtained in the present work
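With the hyperparameters reported above, the classifier can be reproduced in scikit-learn as follows; X_train and y_train denote the training split from the earlier sketches.

```python
from sklearn.tree import DecisionTreeClassifier

# Hyperparameters exactly as reported in the text.
dtc = DecisionTreeClassifier(
    criterion="entropy",
    max_depth=6,
    min_samples_leaf=1,
    min_samples_split=2,
    splitter="best",
)
dtc.fit(X_train, y_train)  # X_train, y_train from the earlier train/test split
```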
Figure 11: Confusion Matrix obtained for Decision Tree classification algorithm
Figure 12: ROC curve for Decision Tree Classification
### K-Nearest Neighbours Classification
The KNN algorithm involves calculating the distance between the new specimen and each of the n training samples and selecting the K samples with the smallest distances to the new specimen. The value of K is a user-defined parameter and determines how many training samples are used to make the prediction.
The distance between the new specimen and a training sample can be calculated using a distance metric such as Euclidean distance. Once the K nearest training samples have been identified, the predicted label for the new specimen is assigned based on the majority label among the K samples. That is, if the majority of the K nearest samples have a label of 0, then the new specimen is assigned a label of 0, and if the majority have a label of 1, then the new specimen is assigned a label of 1. Figure 13 shows the obtained confusion matrix and Figure 14 shows the obtained Receiver Operating Characteristic (ROC) curve.
Figure 13: Confusion Matrix obtained for K-Nearest Neighbour classification algorithm
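A corresponding scikit-learn sketch is shown below. The text does not report the value of K, so n_neighbors=5 (the library default) is an assumption; Euclidean distance is the default metric.

```python
from sklearn.neighbors import KNeighborsClassifier

# K is not reported in the text; 5 is a placeholder.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)   # training split from the earlier sketches
y_pred = knn.predict(X_test)
```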
Table 2 shows the F1-scores obtained by the implemented algorithms.
The obtained results indicate that Logistic Classification and K-Nearest Neighbors (KNN) Classification performed similarly, both achieving F1 scores of 0.7143. These algorithms demonstrated a strong ability to differentiate between the two classes of ultimate tensile strength in the dataset. On the other hand, the Gradient Boosting Classification algorithm, an ensemble method that combines weak learners to create a more accurate model, yielded a lower F1 score of 0.5714. This suggests that, in this particular case, the Gradient Boosting
\begin{table}
\begin{tabular}{|c|c|} \hline
**Algorithms** & **Obtained F1-Score** \\ \hline Logistic Classification & 0.7143 \\ \hline Gradient Boosting Classification & 0.5714 \\ \hline Decision Tree Classification & 0.4286 \\ \hline K-Nearest Neighbours Classification & 0.7143 \\ \hline \end{tabular}
\end{table}
Table 2: Obtained F1-Score of each implemented algorithms
Figure 14: ROC curve for K-Nearest Neighbour Classification
Classifier was not as effective in classifying the material properties as the Logistic and KNN classifiers. Lastly, the Decision Tree Classification algorithm demonstrated the lowest performance among the tested algorithms, with an F1 score of 0.4286. This result indicates that the Decision Tree classifier's ability to accurately classify the material properties based on the given dataset was comparatively limited.
Figure 15 shows the comparison of the AUC score of the implemented algorithms.
Considering that the K-Nearest Neighbors (KNN) and Logistic Classification algorithms exhibit identical F1 scores, while KNN has the higher AUC score, we conclude that KNN is the more suitable algorithm for classification in this scenario. The F1 score, the harmonic mean of precision and recall, provides a balanced evaluation of the model's accuracy. In contrast, the AUC score quantifies the classifier's capacity to differentiate between the two classes, with higher scores signifying better performance. The higher AUC score of KNN demonstrates its greater effectiveness in distinguishing between the two classes of ultimate tensile strength within the dataset, rendering it the more favorable choice for classification in the context of this research.
Figure 15: AUC Score comparison of the implemented algorithms
## 5 Conclusion
In conclusion, this study has successfully investigated the application of supervised machine learning algorithms for estimating the Ultimate Tensile Strength (UTS) of Polylactic Acid (PLA) specimens fabricated using the Fused Deposition Modeling (FDM) process. By preparing 31 PLA specimens and utilizing input parameters such as Infill Percentage, Layer Height, Print Speed, and Extrusion Temperature, we have assessed the accuracy and effectiveness of four distinct supervised classification algorithms: Logistic Classification, Gradient Boosting Classification, Decision Tree, and K-Nearest Neighbor.
Our results demonstrate that the K-Nearest Neighbor algorithm outperforms the other algorithms, achieving an F1 score of 0.71 and an Area Under the Curve (AUC) score of 0.79. This highlights the superior ability of the KNN algorithm in differentiating between the two classes of ultimate tensile strength within the dataset, making it the most favorable choice for classification in this research context. As the first study to estimate the UTS of PLA specimens using machine learning-based classification algorithms, our findings provide valuable insights into the potential of these techniques for enhancing the performance and accuracy of predictive models in the additive manufacturing domain. This research paves the way for future work focused on refining these algorithms, optimizing the parameters, and expanding the application of machine learning in additive manufacturing to further improve the quality and reliability of 3D-printed components.
|
2301.04231 | Intrinsic interface adsorption drives selectivity in atomically smooth
nanofluidic channels | Specific molecular interactions underlie unexpected and useful phenomena in
nanofluidic systems, but require descriptions that go beyond traditional
macroscopic hydrodynamics. In this letter, we demonstrate how equilibrium
molecular dynamics simulations and linear response theory can be synthesized
with hydrodynamics to provide a comprehensive characterization of nanofluidic
transport. Specifically, we study the pressure driven flows of ionic solutions
in nanochannels comprised of two-dimensional crystalline substrates made from
graphite and hexagonal boron nitride. While simple hydrodynamic descriptions do
not predict a streaming electrical current or salt selectivity in such simple
systems, we observe that both arise due to the intrinsic molecular interactions
that act to selectively adsorb ions to the interface in the absence of a net
surface charge. Notably, this emergent selectivity indicates that these
nanochannels can serve as desalination membranes. | Phillip Helms, Anthony R. Poggioli, David T. Limmer | 2023-01-10T22:34:46Z | http://arxiv.org/abs/2301.04231v1 | # Intrinsic interface adsorption drives selectivity in atomically smooth nanofluidic channels
###### Abstract
Specific molecular interactions underlie unexpected and useful phenomena in nanofluidic systems, but require descriptions that go beyond traditional macroscopic hydrodynamics. In this letter, we demonstrate how equilibrium molecular dynamics simulations and linear response theory can be synthesized with hydrodynamics to provide a comprehensive characterization of nanofluidic transport. Specifically, we study the pressure driven flows of ionic solutions in nanochannels comprised of two-dimensional crystalline substrates made from graphite and hexagonal boron nitride. While simple hydrodynamic descriptions do not predict a streaming electrical current or salt selectivity in such simple systems, we observe that both arise due to the intrinsic molecular interactions that act to selectively adsorb ions to the interface in the absence of a net surface charge. Notably, this emergent selectivity indicates that these nanochannels can serve as desalination membranes.
Recent advances in nanoscale fabrication techniques have enabled the synthesis of nanofluidic systems with novel functionalities, [1; 2; 3] with applications to biotechnology [4], filtration [5; 6; 7], and computation [8; 9; 10]. For example, nanofluidics-based membranes have leveraged atomic level details like those of evolved biological membranes [11; 12; 13; 14; 15; 16; 17; 18] to circumvent traditional trade-offs between permeability and selectivity that plague membrane technology [19; 20; 21; 22]. While continuum-level hydrodynamic descriptions can remain accurate at scales of a few nanometers, enabling some general design principles to be deduced [23; 24; 25; 26], the continued development of nanofluidic devices is limited by a lack of understanding of emergent interfacial effects which are resolutely molecular in origin. With large surface to volume ratios, the properties of fluids confined to nanometer scales are determined in large part by a delicate interplay of interactions between the bounding surfaces and the working fluid. To understand and design nanofluidic devices, an approach that combines macroscopic and molecular perspectives is necessary [27].
In this letter, we show how interfacial atomic structure affects the directed transport of an electrolyte solution in nanochannels made of atomically flat graphite (GR) and hexagonal boron nitride (BN) walls using molecular dynamics simulations unified with a contemporary perspective on hydrodynamics. These simple nanofluidic systems have been studied extensively because of their intriguing transport properties, such as anomalously high permeabilities in GR [28; 29; 30; 31; 32; 33; 34; 35; 36], and the potential to augment their functionality with selectivity for desalination or blue energy applications [37; 38; 39; 40; 41; 42; 43; 44; 45; 46]. By computing the spatially-resolved volumetric, charge, and species transport coefficients from equilibrium correlations [47; 48; 49] we elucidate the importance of specific molecular interactions on nanofluidic device functionality. While from a continuum perspective, driving the solution with a pressure gradient should result in salt filtration or electric current only when the confining walls have a net charge, we discover that the intrinsic interfacial adsorption of ions can lead to streaming electrical currents and a novel, emergent desalination mechanism.
We focus on the two systems illustrated in Fig. 1(a), consisting of an aqueous solution of potassium chloride confined in nanochannels with fixed walls of either BN or GR. Because of the experimental similarity between the structure of BN and GR lattices, we spaced atoms and lattice layers identically, with interatomic and interlayer spacings of 1.42 Å and 3.38 Å [50; 51]. Each wall has three layers, using AA' and AB stacking for BN and GR, respectively, to match their equilibrium structures, with lattice unit cells repeated 8 and 13 times in the \(x\) and \(y\) directions for a cross-sectional surface area of nearly 9 nm\({}^{2}\). The walls were separated such that the spacing between the center of mass of the innermost wall layers was \(H\approx 5.7\) nm, with the channel width adjusted to ensure a bulk water density of \(\bar{\rho}_{w}\approx 1\) g/cm\({}^{3}\). The channels were filled with \(N_{w}=1920\) TIP4P/2005 water molecules with rigid geometries imposed using the SHAKE algorithm [52; 53], \(N_{\text{K}^{+}}=40\) potassium ions and \(N_{\text{Cl}^{-}}=40\) chloride ions, resulting in a nearly 1 M electrolyte solution.
We evolved this system according to underdamped Langevin dynamics,
\[m_{i}\dot{\mathbf{v}}_{i}=-\zeta_{i}\mathbf{v}_{i}+\mathbf{F}_{i}\left( \mathbf{r}^{N}\right)+\mathbf{R}_{i} \tag{1}\]
where each particle \(i\) has mass \(m_{i}\), velocity \(\mathbf{v}_{i}\), and experiences a friction \(\zeta_{i}\), with forcing from interparticle interactions \(\mathbf{F}_{i}\left(\mathbf{r}^{N}\right)\), and random noise \(\mathbf{R}_{i}\). The random force is a Gaussian random variable with mean \(\langle R_{i,\alpha}\rangle=0\) and variance \(\langle R_{i,\alpha}(t)R_{i^{\prime},\alpha^{\prime}}(t^{\prime})\rangle=2k_{\text{B}}T\zeta_{i}\,\delta_{i,i^{\prime}}\delta_{\alpha,\alpha^{\prime}}\delta(t-t^{\prime})\) for each Cartesian coordinate \(\alpha\), where \(k_{\text{B}}T\) is Boltzmann's constant times temperature. Periodic boundary conditions were imposed in all three spatial dimensions, with a vacuum layer in the \(z\) direction of 5 nm to ensure no interaction between periodic images of the channel. Intermolecular Lennard-Jones forces were chosen from literature-reported values to reproduce the solubility of ions in water and match the _ab initio_ equilibrium fluid structure in BN and GR nanochannels [54; 55], with Lorentz-Berthelot mixing rules defining heteroatomic interactions. Additionally, water molecules, charged ions, and the BN wall atoms interacted with Coulomb potentials, where boron and nitrogen atoms have charges of \(\pm\) 1.05e, with e being the elementary charge, using an Ewald summation as implemented in LAMMPS [56]. For all data presented here, we performed 5 independent simulations, each starting with an equilibration run for 5 ns with \(m_{i}/\zeta_{i}=2\) ps, followed by a production run for \(10-20\) ns with \(m_{i}/\zeta_{i}=10\) ns at a temperature of 298 K. In all plots, lines represent averages and error bars represent the standard deviation for the 5 simulations. All scripts used to produce these results and the raw data are openly available [57].
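For readers who want a concrete picture of Eq. (1), the following is a minimal Euler-Maruyama sketch of one underdamped Langevin step; the production simulations were run in LAMMPS, so the toy force and parameter values here are assumptions for illustration only.

```python
# Minimal Euler-Maruyama sketch of the underdamped Langevin dynamics in
# Eq. (1); illustrative only (the paper's production runs used LAMMPS).
import numpy as np

def langevin_step(x, v, force, m, zeta, kBT, dt, rng):
    """Advance positions x and velocities v by one time step dt."""
    F = force(x)                                   # deterministic forces
    noise = rng.normal(size=x.shape)               # Gaussian white noise
    # discretized Eq. (1): m dv = (-zeta v + F) dt + sqrt(2 kBT zeta dt) dW
    v = v + (dt / m) * (-zeta * v + F) + np.sqrt(2.0 * kBT * zeta * dt) / m * noise
    x = x + v * dt
    return x, v

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 3))                      # 100 particles in 3D
v = np.zeros_like(x)
harmonic = lambda x: -x                            # toy confining force
for _ in range(1000):
    x, v = langevin_step(x, v, harmonic, m=1.0, zeta=0.5, kBT=1.0, dt=1e-3, rng=rng)
```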
Figure 1(b) shows the equilibrium particle number densities, \(\rho_{i}(z)\), for all species, \(i=\{\mathrm{w},\mathrm{K}^{+},\mathrm{Cl}^{-}\}\), in the BN and GR channels, relative to their bulk values, \(\tilde{\rho}_{i}\). We observe similar structures in both materials with interfacial layering of water that is consistent with previous simulations of neat water [55; 35]. The distribution of ions near such interfaces is known to be highly dependent on ion species, and the profiles shown are consistent with previous simulations [58; 59; 60]. A dense layer of pure water accumulates near the wall, with the molecules oriented such that they induce a small local negative charge. The next layers are enriched in alternating concentrations of potassium and chloride ions, with depletion (accumulation) of water molecules accompanying potassium (chloride) enrichment. The two materials differ slightly, with a higher water density in the first layer of BN resulting in layering with higher amplitude in BN compared to GR, though in both systems the layering in the density decays to its bulk value for each species, \(\tilde{\rho}_{i}\), within 1.5 nm.
We consider fluxes induced by a pressure differential, \(-\Delta P_{x}\), imposed electrostatic potential drop \(-\Delta\Phi_{x}\), or water chemical potential differential, \(-\Delta\mu_{x}\), with subscripts denoting application in the \(x\) direction parallel to the walls, and limit ourselves to small driving strengths. In this limit, linear response theory dictates that induced local fluxes are linearly dependent on driving forces,
\[\begin{pmatrix}q(z)\\ j(z)\\ d(z)\end{pmatrix}=\begin{pmatrix}\mathcal{M}_{qQ}&\mathcal{M}_{qJ}&\mathcal{M}_{qD}\\ \mathcal{M}_{jQ}&\mathcal{M}_{jJ}&\mathcal{M}_{jD}\\ \mathcal{M}_{dQ}&\mathcal{M}_{dJ}&\mathcal{M}_{dD}\end{pmatrix}\begin{pmatrix}-\Delta P_{x}\\ -\Delta\Phi_{x}\\ -\Delta\mu_{x}\end{pmatrix}, \tag{2}\]
where \(q(z)\) is the volumetric flow, \(j(z)\) the charge flux, \(d(z)\) the excess water flux, and the \(\mathcal{M}_{aB}(z)\) are the spatially dependent mobilities. The excess water flux \(d(z)\) represents the local water flux relative to what would be predicted from the bulk water density and the local total flux of water and ions, and it is considered here because it is particularly relevant for desalination. The diagonal elements of the mobility matrix link a given forcing directly to its conjugate flux - e.g., \(\mathcal{M}_{jJ}\) links the potential drop \(-\Delta\Phi_{x}\) directly to the induced charge flux \(j(z)\) - while the off-diagonal elements are the so-called cross-terms linking, for example, an induced charge flux to an applied pressure differential. The total fluxes include the total volumetric flow \(Q\), charge flux \(J\), and excess water flux \(D\). We index mobilities by the local induced flux \(a\) and total flux \(B\) directly conjugate to a particular forcing.
The local fluxes are defined microscopically as
\[\begin{split} q(z,t)&=\frac{H}{N}\sum_{i=1}^{N}{\upsilon _{i,x}(t)\delta\left[z-z_{i}(t)\right]}\\ j(z,t)&=\frac{1}{A_{\mathrm{s}}}\sum_{i=1}^{N}{c_{i} \upsilon_{i,x}(t)\delta\left[z-z_{i}(t)\right]}\\ d(z,t)&=\frac{1}{A_{\mathrm{s}}}\sum_{i=1}^{N}{\upsilon _{i,x}(t)\left(\delta_{i,\mathrm{w}}-f_{\mathrm{w}}^{\mathrm{b}}\right) \delta\left[z-z_{i}(t)\right]}\end{split} \tag{3}\]
where particle \(i\) has velocity \(\upsilon_{i,x}(t)\) and position \(z_{i}(t)\) at time \(t\), a static charge of \(c_{i}\), and \(\delta_{i,\mathrm{w}}\) is a Kronecker delta that returns 1 if particle \(i\) is a water molecule and is 0 otherwise. The bulk mole water fraction is defined as \(f_{\mathrm{w}}^{\mathrm{b}}=N_{\mathrm{w}}^{\mathrm{b}}/N^{\mathrm{b}}\), where \(N_{\mathrm{w}}^{\mathrm{b}}\) and \(N^{\mathrm{b}}\) are respectively the average numbers of water molecules and all molecules in the bulk and \(A_{\mathrm{s}}\) is the surface area associated with the fluid-wall interface. The spatial dependence can be integrated out by defining total fluxes, such as \(Q=1/H\int_{0}^{H}dz\,q(z)\), with analogous definitions for \(J\) and \(D\). Total channel conductivities can be evaluated as \(\mathcal{L}_{AB}=1/H\int_{0}^{H}dz\,\mathcal{M}_{aB}(z)\), resulting in total flux linear response relations such as \(Q=-\mathcal{L}_{QQ}\Delta P_{x}-\mathcal{L}_{QJ}\Delta\Phi_{x}-\mathcal{L}_{QD}\Delta\mu_{x}\).
Figure 1: Description of the systems considered and resulting equilibrium density distributions. (a) A snapshot of the nanochannels considered with the left (right) side corresponding to the boron nitride (graphite) nanochannel. The top images show the wall structure, with each wall composed of three layers and the periodic unit cell outlined in red. (b) The molecular species density distributions for potassium (green), chloride (purple), and water (black) as a function of position, normalized by bulk densities.
While the integrated conductivities must obey Onsager reciprocal relations, \(\mathcal{L}_{AB}=\mathcal{L}_{BA}\), mobilities are under no such constraint: it is possible that \(\mathcal{M}_{aB}(z)\neq\mathcal{M}_{bA}(z)\).
Rather than attempting to calculate mobilities directly via nonequilibrium simulations, we use fluctuation-dissipation relations in order to obtain transport coefficients from equilibrium flux correlations [47; 48; 49]. This allows us to avoid running separate nonequilibrium simulations for each term in the mobility matrix, and ensures the validity of linear response. We adopt the Einstein-Helfand approach over the Green-Kubo method, as recent work has demonstrated its enhanced statistical efficiency [47]. Mobilities are obtained as the long time slope of the correlation between time-integrated local and global fluxes
\[\mathcal{M}_{aB}=\frac{V}{2k_{\mathrm{B}}T}\lim_{t\to\infty}\frac{\kappa_{aB}( t)}{t}, \tag{4}\]
with the correlation function
\[\kappa_{aB}(t)=\int_{0}^{t}dt^{\prime}\int_{0}^{t}dt^{\prime\prime}\ \left<a(z,t^{\prime})\ B(t^{\prime\prime})\right>, \tag{5}\]
volume \(V=A_{\mathrm{s}}H\), and brackets representing an equilibrium average. Similarly, conductivities can be obtained using correlations between global fluxes, \(\mathcal{L}_{AB}=(V/2k_{\mathrm{B}}T)\lim_{t\to\infty}K_{AB}(t)/t\) with \(K_{AB}(t)=\int_{0}^{t}dt^{\prime}\int_{0}^{t}dt^{\prime\prime}\left<A(t^{\prime})B(t^{\prime\prime})\right>\).
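A minimal sketch of the Einstein-Helfand estimate of Eqs. (4)-(5) is given below; the array layout, fitting window, and unit handling are assumptions, and a production analysis would also average over multiple time origins.

```python
# Sketch of the Einstein-Helfand estimator of Eqs. (4)-(5): the mobility
# follows from the long-time slope of the correlation of time-integrated
# fluxes. `a_t` and `B_t` are sampled flux time series from independent
# trajectories (illustrative arrays, not the paper's actual data).
import numpy as np

def einstein_helfand(a_t, B_t, dt, V, kBT, fit_window):
    """a_t, B_t: arrays of shape (n_traj, n_steps). Returns mobility M_aB."""
    A = np.cumsum(a_t, axis=1) * dt            # time-integrated local flux
    Bint = np.cumsum(B_t, axis=1) * dt         # time-integrated global flux
    kappa = np.mean(A * Bint, axis=0)          # <A(t) B(t)> over trajectories
    t = dt * np.arange(1, a_t.shape[1] + 1)
    i0, i1 = fit_window                        # indices of the linear regime
    slope = np.polyfit(t[i0:i1], kappa[i0:i1], 1)[0]
    return V * slope / (2.0 * kBT)             # Eq. (4)
```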
Previous work has demonstrated that while equilibrium structures suggest only minor differences between water in BN and GR nanochannels, the dynamics of the confined fluid are strikingly different. This results in large differences in the friction between the fluid and walls, and significant differences in resultant channel permeabilities. [28; 35; 61; 62]. In the presence of ions, the interfacial structure of water is altered and as a consequence the friction may change. In Fig. 2 (a), we show the integrated global flux correlation function \(K_{QQ}\) as a function of time for both nanochannels. After approximately 200 ps, the correlation functions approach a linear dependence on time and their slopes give the hydraulic conductivities as \(\mathcal{L}_{QQ}^{[\mathrm{BN}]}=18.0\pm 9.2\ \mathrm{mol\ nm^{5}\ kJ^{-1}\ ns^{-1}}\) and \(\mathcal{L}_{QQ}^{[\mathrm{GR}]}=106\pm 40\ \mathrm{mol\ nm^{5}\ kJ^{-1}\ ns^{-1}}\), which differ by nearly an order of magnitude.
While the hydraulic conductivities deduced above are independent of a specific hydrodynamic model, they can be connected to continuum theory through the slip length \(l_{\mathrm{s}}\). In contrast to the no-slip condition typically applied in macroscopic contexts, which specifies that the fluid velocity exactly vanishes at the walls, the small confinement scales and enhanced importance of interfacial details in nanofluidic applications typically require application of the finite-slip condition. This condition specifies that the velocity at the wall is proportional to the shear rate at the wall, \(v_{x}=l_{\mathrm{s}}(\partial v_{x}/\partial z)\big{|}_{z=0}\). The slip length is interpreted geometrically as the distance beyond the interface where the extrapolated flow profile vanishes, as illustrated in Fig. 2(b).
To apply a hydrodynamic interpretation, we consider only the region where a hydrodynamic description is expected to be valid by defining the effective hydrodynamic interface as the location of the second water density peak in Fig. 1(b) [26]. At this distance, microscopic density correlations have decayed and the fluid is well described as a continuous medium. The Poiseuille solution for the hydraulic mobility in the presence of a finite slip length is given by
\[\mathcal{M}_{qQ}(z)=\frac{H_{\mathrm{hyd}}^{2}}{2\eta}\left[\frac{l_{\mathrm{ s}}}{H_{\mathrm{hyd}}}+\frac{z}{H_{\mathrm{hyd}}}-\frac{z^{2}}{H_{\mathrm{hyd}} ^{2}}\right], \tag{6}\]
where \(H_{\mathrm{hyd}}\) is the distance between hydrodynamic interfaces, and \(\eta\) is the estimated viscosity of the solution. This expression may be integrated to determine the hydraulic conductivity
\[\mathcal{L}_{QQ}=\frac{H_{\mathrm{hyd}}^{2}}{12\eta}\left(1+6\frac{l_{\mathrm{ s}}}{H_{\mathrm{hyd}}}\right), \tag{7}\]
which allows us to relate the measured values of \(\mathcal{L}_{QQ}\) in GR and BN to the corresponding slip lengths provided \(\eta\) is known. Here, we use a viscosity of \(\eta=1.0\ \mathrm{mPa\ s}\), obtained by interpolating literature values for this electrolyte model [54].
Figure 2: Comparison of the hydraulic conductivity and slip length for the GR (black) and BN (blue) nanochannels. (a) The time-integrated global flux correlation function \(K_{QQ}\) versus time. (b) Comparison of the slip lengths for both materials, computed from the hydraulic conductivity (dark), against previously reported results for neat water (light) [28]. The inset illustrates the geometric interpretation of the slip length.
Figure 2(b) indicates the resulting slip lengths, \(l_{\rm s}^{\rm[BN]}=4.0\pm 2.5\) nm and \(l_{\rm s}^{\rm[GR]}=27\pm 10\) nm, and compares them against previously reported results for neat water [28]. With the slip in GR nanochannels being approximately an order of magnitude larger than the slip in BN nanochannels, it is clear that the qualitative results do not change significantly with the addition of salt. The material dependence of \(l_{\rm s}\) has been observed experimentally in various contexts [63; 64; 65; 32; 66] and is generally understood to arise from a decoupling of structure and dynamics, though the precise physical mechanism is debated [67; 68; 69; 28; 35; 61]. Quantitatively, our simulations also suggest a decrease in slip as salt is added, which is consistent with other observations for slip on hydrophobic surfaces, where increased fluid-wall friction results from enhanced equilibrium force fluctuations due to the heterogeneous solution [70; 71; 72].
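Inverting Eq. (7) gives the slip length directly from a measured hydraulic conductivity; the sketch below assumes all quantities are expressed in one consistent unit system.

```python
# Slip length from a measured hydraulic conductivity by inverting Eq. (7);
# a sketch assuming one consistent unit system for L_QQ, H_hyd, and eta.
def slip_length(L_QQ, H_hyd, eta):
    """l_s = (12*eta*L_QQ/H_hyd**2 - 1) * H_hyd / 6, from Eq. (7)."""
    return (12.0 * eta * L_QQ / H_hyd**2 - 1.0) * H_hyd / 6.0
```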
More detailed insight into the differences in transport characteristics between BN and GR nanochannels can be obtained by computing the spatially-dependent hydraulic mobility using Eq. 4. The results of this calculation for GR and BN are shown in Fig. 3(a). We also show the hydrodynamic mobility profiles calculated from Eq. 6 for comparison to the macroscopic theory. As expected from the conductivities, we observe approximately an order of magnitude difference between the peaks in the hydraulic mobilities in the BN and GR nanochannels. The mobility profile is nearly flat for GR and exhibits a slight curvature for BN, indicative of the differences in slip. In the boundary region, the mobility profile qualitatively mimics the fluid density profile with greater (lesser) flux coinciding with density peaks (troughs).
We find that the molecular interfacial structure also affects the cross-terms in the mobility matrix in Eq. 2. The streaming mobility \(\mathcal{M}_{jQ}\), which quantifies the electrical current profile produced by applying a pressure differential, is shown in Fig. 3(b) for both systems. We observe the emergence of three layers of electrical current of alternating sign near the fluid wall boundary, and no net current in the bulk of the channel. Because the applied pressure produces particle flux in all nanochannel regions, the alternating current is caused by ion density localization at the interface, with positive (negative) current where potassium (chloride) ions are enriched. These interfacial effects decay away from the wall more slowly than those observed with the hydraulic mobility, with net charge flux penetrating into the hydrodynamic region defined by the hydraulic mobility. By integrating the mobility across the channel, we find that the streaming conductivity \(\mathcal{L}_{JQ}\) is statistically indistinguishable from zero for both materials, indicating no net ionic transport. Though not shown, our calculations verify the lack of symmetry between cross-term mobilities, with \(\mathcal{M}_{qJ}\) being zero at all points in the channel, within statistical accuracy, consistent with \(\mathcal{M}_{qJ}\neq\mathcal{M}_{jQ}\) while maintaining \(\mathcal{L}_{QJ}=\mathcal{L}_{JQ}\).
The pressure driven excess water mobility \(\mathcal{M}_{dQ}\) is shown in Fig. 3(c) as computed using Eq. 4 for both materials. This quantity is directly related to the desalination capabilities of a nanofluidic channel, and its magnitude is determined by the channel's selectivity and permeability. This transport is summarized by the integrated mobility, \(\mathcal{L}_{dQ}\), with \(\mathcal{L}_{dQ}>0\) corresponding to the selective flux of water through the channel. We find a positive integrated value \(\mathcal{L}_{dQ}>0\) for both materials, demonstrating a preferential water selectivity and corresponding salt rejection capability.
The spatial dependence of the cross-term mobility profiles can be understood via a combination of microscopic and macroscopic perspectives. These cross mobilities may be evaluated microscopically as products of the local density profiles and the hydraulic mobility; for the streaming mobility this is \(\mathcal{M}_{jQ}(z)=[\rho_{\rm K^{+}}(z)-\rho_{\rm Cl^{-}}(z)]/\rho_{\rm tot}(z)\,\mathcal{M}_{qQ}(z)\,N/V\), where \(\rho_{\rm tot}(z)=\rho_{\rm w}(z)+\rho_{\rm K^{+}}(z)+\rho_{\rm Cl^{-}}(z)\). Though a common decomposition in macroscopic hydrodynamics, this is a non-trivial statement when considering the microscopic mobilities. The red dashed line in Fig. 3(b) shows this estimate agrees
Figure 3: Pressure driven hydraulic (a), streaming (b), and excess water (c) mobility profiles for BN (left, blue) and GR (right, black). The red shaded regions demarcate areas where hydrodynamics are invalidated. In (a), the red dashed curve corresponds to the hydrodynamic estimate from the hydraulic conductivity. In (b) and (c), the red dashed curves are the mobility predictions from the product of the hydraulic mobility and appropriate density.
well with the estimate obtained using Eq. 4. The same functional decomposition holds for the excess water flux, which can be obtained from the product of the hydraulic mobility and the excess water density, \(\mathcal{M}_{dQ}(z)=\left(\rho_{\mathrm{w}}(z)/\rho_{\mathrm{tot}}(z)-\bar{\rho}_{\mathrm{w}}/\bar{\rho}_{\mathrm{tot}}\right)\mathcal{M}_{qQ}(z)N/V\). This decomposition is shown in the red dashed line in Fig. 3(c). Both of these decompositions follow directly from the Langevin equations of motion. While the excess water mobilities for both materials are qualitatively similar because of qualitatively similar equilibrium density distributions and hydraulic mobility profiles, the quantitative difference arises due to the differences in magnitude of the hydraulic conductivity. The first contact layer is nearly salt free, so while interfacial friction slows pressure driven transport, the high water purity gives a large peak in excess water mobility. There is a second excess water mobility peak near the second water density peak. The enrichment and depletion of chloride and potassium, respectively, brings the overall salt density close to its bulk value and leaves an excess concentration of water where the hydraulic mobility also peaks.
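The two density-weighted decompositions above (the red dashed curves in Fig. 3) reduce to pointwise products of profiles sampled on a common \(z\) grid; a sketch, with all input arrays assumed to come from the equilibrium analysis:

```python
# Sketch of the density-weighted decompositions of the cross mobilities:
# both follow from the hydraulic mobility profile M_qQ(z) and equilibrium
# density profiles on a shared z grid (all inputs are NumPy arrays).
import numpy as np

def streaming_mobility(rho_K, rho_Cl, rho_tot, M_qQ, N_over_V):
    """M_jQ(z) = [rho_K - rho_Cl] / rho_tot * M_qQ * N/V."""
    return (rho_K - rho_Cl) / rho_tot * M_qQ * N_over_V

def excess_water_mobility(rho_w, rho_tot, rho_w_bulk, rho_tot_bulk, M_qQ, N_over_V):
    """M_dQ(z) = (rho_w/rho_tot - rho_w_bulk/rho_tot_bulk) * M_qQ * N/V."""
    return (rho_w / rho_tot - rho_w_bulk / rho_tot_bulk) * M_qQ * N_over_V
```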
The molecular dynamics calculations suggest that the transport properties of the nanochannel can be decomposed as a sum of a molecular interfacial component and a continuum bulk component. The interfacial component depends sensitively on specific molecular interactions as they manifest in non-uniform density profiles. Beyond the domain of those density correlations, which for these channels extend around 2 nm into the channel, the transport is well described by Poiseuille flow with a large slip length. This decomposition allows us to infer the height dependence of the channel's selectivity and permeability. We can calculate the size dependent conductivity using an integrated mobility \(\mathcal{L}_{QQ}(H)=2\int_{0}^{H/2}dz\ \mathcal{M}_{qQ}(z)/H\), where we employ the inversion symmetry of the channel to integrate over only half of the channel. These conductivities are shown for BN and GR in Fig. 4(a) normalized against \(\mathcal{L}_{QQ}^{\mathrm{[GR]}}\). The red regions in Fig. 4 indicate system sizes which would lead to overlapping interfacial regions, for which our decomposition is not anticipated to be valid. Because the hydraulic mobility profile is nearly flat in the hydrodynamic region, which is expected when \(l_{s}\gg H_{\mathrm{hyd}}/6\), the overall permeability increases only linearly with channel height, slower than the quadratic growth anticipated from traditional hydrodynamics with a no-slip boundary condition.
A similar approach can be used to compute the dependency of the water selectivity on the height of the channel. To compute the selectivity, we can first determine a pressure driven salt mobility \(\mathcal{M}_{sQ}(z)=[\rho_{\mathrm{K^{+}}}(z)+\rho_{\mathrm{Cl^{-}}}(z)]/\rho_{\mathrm{tot}}(z)\,\mathcal{M}_{qQ}(z)\,N/V\). The ratio of salt to total particle flux as a function of channel height is obtained as
\[f_{\mathrm{salt}}(H)=\frac{\int_{0}^{H/2}dz\ \mathcal{M}_{sQ}(z)}{\frac{N}{V} \int_{0}^{H/2}dz\ \mathcal{M}_{qQ}(z)} \tag{8}\]
which is shown in Fig. 4(b) normalized against the overall number fraction of ions in the bulk, \(\bar{f}_{\mathrm{salt}}=(\bar{\rho}_{\mathrm{K^{+}}}+\bar{\rho}_{\mathrm{Cl^{-}}})/\bar{\rho}_{\mathrm{tot}}\). This provides a direct measurement of the size dependence of the nanochannel selectivity. Consistent with the inference from the excess water mobility, the salt flux is suppressed relative to its expected value from the bulk concentration of ions and the total channel conductivity. We find that BN and GR nanochannels have effectively identical selectivities, primarily because of their similar equilibrium fluid density distributions and qualitatively similar hydraulic mobility profiles. For the nanochannel size and ion concentrations considered here, the flux of salt ions is reduced by approximately 25%, while shrinking the nanochannel until interfacial regions overlap at around 2 nm could provide a reduction of around 50%. Due to the intrinsic adsorption of ions to the interface and their resultant suppressed mobility, the selectivity of the nanochannel is enhanced as its size is decreased. An optimal desalination device must separate ions from water with both high selectivity and high permeability, and these phenomenological channel scaling observations suggest that for both BN and GR this optimum is between 2 and 5 nm.
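The height-dependent quantities behind Fig. 4 can be sketched as running integrals of the mobility profiles over half the channel; the trapezoidal quadrature and array layout below are implementation assumptions.

```python
# Sketch of the size-dependent transport estimates: the running hydraulic
# conductivity L_QQ(H) and the salt fraction f_salt(H) of Eq. (8), from
# mobility and density profiles sampled over half the channel (z = 0..H/2).
import numpy as np

def size_dependent_transport(z, M_qQ, rho_K, rho_Cl, rho_tot, N_over_V):
    """z and the profiles are arrays over half the channel, wall to center."""
    M_sQ = (rho_K + rho_Cl) / rho_tot * M_qQ * N_over_V   # salt mobility
    # cumulative trapezoidal integrals out to each half-height z = H/2
    I_q = np.concatenate(([0.0], np.cumsum(0.5 * (M_qQ[1:] + M_qQ[:-1]) * np.diff(z))))
    I_s = np.concatenate(([0.0], np.cumsum(0.5 * (M_sQ[1:] + M_sQ[:-1]) * np.diff(z))))
    H = 2.0 * z
    L_QQ = 2.0 * I_q / np.where(H > 0, H, np.inf)             # L_QQ(H)
    f_salt = I_s / np.where(I_q > 0, N_over_V * I_q, np.inf)  # Eq. (8)
    return L_QQ, f_salt
```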
This mechanism of selective transport, and the ability of the channel to separate salt from water, results from local molecular interactions that drive ions to the fluid-solid boundary in the absence of a net surface charge on the substrate. These molecular interfacial features establish a nonuniform fluid composition across the channel that, when combined with a spatially resolved evaluation of the hydraulic mobilities, provides a complete description of the transport within the nanochannel. The promise of this mechanism for desalination technology is strikingly enhanced when this water selectivity is coupled with the anomalously high permeability of GR nanochannels.
Figure 4: Estimates of (a) hydraulic conductivity and (b) water selectivity in simple GR (black) and BN (blue) nanochannels versus channel height \(H\). Red shaded regions indicate channel heights where boundary effects from confining walls interact, meaning our estimate is most reliable for \(H\gtrsim 2\) nm.
This framework is general and can be used to understand and engineer other functionality in nanofluidic systems. Employing recent generalizations of response theory [73; 74; 75], our approach could be extended outside the regime of linear response to provide insight into performance at high driving strengths and into the interplay between multiple driving forces.
_Acknowledgments -_ This study is based on the work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Scientific Discovery through Advanced Computing (SciDAC) program, under Award No. DE-AC02-05CH11231. A. R. P. was also supported by the Heising-Simons Fellowship from the Kavli Energy Nanoscience Institute at UC Berkeley and D. T. L. acknowledges support from the Alfred P. Sloan Foundation.
_Data availability -_ The source code for the calculations done and all data presented in this work are openly available on Zenodo at [https://doi.org/10.5281/zenodo.7522996](https://doi.org/10.5281/zenodo.7522996) [57]
|
2310.12312 | Sobolev orthogonal polynomials: Connection formulae | This contribution aims to obtain several connection formulae for the
polynomial sequence, which is orthogonal with respect to the discrete Sobolev
inner product \[ \langle f, g\rangle_n=\langle {\bf u}, fg\rangle+ \sum_{j=1}^M
\mu_{j} f^{(\nu_j)}(c_j) g^{(\nu_j)}(c_j), \] where ${\bf u}$ is a classical
linear functional, $c_j\in \mathbb R$, $\nu_j\in \mathbb N_0$, $j=1, 2,....,
M$. The Laguerre case will be considered. | Roberto S. Costas-Santos | 2023-10-18T20:29:18Z | http://arxiv.org/abs/2310.12312v1 | # Sobolev orthogonal polynomials: connection formulae
###### Abstract.
This contribution aims to obtain several connection formulae for the polynomial sequence, which is orthogonal with respect to the discrete Sobolev inner product
\[\langle f,g\rangle_{n}=\langle\mathbf{u},fg\rangle+\sum_{j=1}^{M}\mu_{j}f^{( \nu_{j})}(c_{j})g^{(\nu_{j})}(c_{j}),\]
where \(\mathbf{u}\) is a classical linear functional, \(c_{j}\in\mathbb{R}\), \(\nu_{j}\in\mathbb{N}_{0}\), \(j=1,2,....,M\). The Laguerre case will be considered.
2020 Mathematics Subject Classification: 33C47; 42C05
## 1. Introduction
In this paper, we are going to consider sequences of polynomials orthogonal with respect to the discrete Sobolev inner product
\[\langle f,g\rangle=\langle\mathbf{u},fg\rangle+\sum_{j=1}^{M}\mu_{j}f^{(\nu_{ j})}(c_{j})g^{(\nu_{j})}(c_{j}), \tag{1.1}\]
where \(\mathbf{u}\) is a classical linear functional, \(c_{j},\mu_{j}\in\mathbb{C}\), and \(\nu_{j}\in\mathbb{N}_{0}\), \(j=1,2,....,M\) in the widest sense possible for all the parameters and values related to the polynomials we want to study.
Observe that without loss of generality, we can assume that \(\nu_{1}\leq\nu_{2}\leq\cdots\leq\nu_{M}\). For this reason, we will call the polynomials orthogonal with respect to (1.1) sequentially ordered Sobolev-type orthogonal polynomials.
Observe that one can express the inner product (1.1) in the following compact way [5]
\[\langle f,g\rangle=\langle\mathbf{u},fg\rangle+(\mathbb{D}f)^{T}D\mathbb{D}g, \tag{1.2}\]
where \(\mathbb{D}\) is the vector differential operator defined as
\[\mathbb{D}f:=\left(\left.f^{(\nu_{1})}(x)\right|_{x=c_{1}},\left.f^{(\nu_{2}) }(x)\right|_{x=c_{2}},...,\left.f^{(\nu_{M})}(x)\right|_{x=c_{M}}\right)^{T},\]
\(D\) is the diagonal matrix with entries \(\mu_{1},...,\mu_{M}\) and \(A^{T}\) is the transpose of the matrix \(A\).
To simplify the notation we will write throughout the document \(f^{(\nu)}(c)\) instead of \(f^{(\nu)}(x)\big{|}_{x=c}\).
_Remark 1.1_.: Observe that in the case when \(\nu_{1}=0\), \(\nu_{2}=1\),..., \(\nu_{M}=M-1\) and all the mass points are equal to each other, the authors usually denote by \(\mathbb{F}\) the matrix \(\mathbb{D}f\) (see [6] and references therein).
For a more detailed description of this Sobolev-type orthogonal polynomials (including the continuous ones) we refer the readers to the reviews [8, 9].
The structure of the paper is as follows: in Section 2, some preliminary results are quoted. In section 3, all the algebraic results are presented, such as several connection formulas for the sequentially ordered balanced Sobolev-type orthogonal polynomials, a hypergeometric representation for these polynomials, as well as some other algebraic relations between the classical orthogonal polynomials and the discrete-Sobolev ones.
## 2. Auxiliary results
We adopt the following set notations: \(\mathbb{N}_{0}:=\{0\}\cup\mathbb{N}=\{0,1,2,...\}\), and we use the sets \(\mathbb{Z}\), \(\mathbb{R}\), \(\mathbb{C}\) which represent the integers, real numbers, and complex numbers, respectively. Let \(\mathbb{P}\) be the linear space of polynomials and let \(\mathbb{P}^{\prime}\) be its algebraic dual space.
We will also adopt the following notation: We denote by \(\langle\mathbf{u},p\rangle\) the duality bracket for \(\mathbf{u}\in\mathbb{P}^{\prime}\) and \(p\in\mathbb{P}\), and by \((\mathbf{u})_{n}=\langle\mathbf{u},x^{n}\rangle\), with \(n\geq 0\), the canonical moments of \(\mathbf{u}\).
For any \(n\in\mathbb{N}_{0}\), \(a\in\mathbb{C}\), the Pochhammer symbol, or shifted factorial, is defined as
\[(a)_{n}:=a(a+1)\cdots(a+n-1).\]
The Taylor polynomial of degree \(N\) is defined as
\[[f(x;c)]_{N}:=\sum_{k=0}^{N}\frac{f^{(k)}(c)}{k!}(x-c)^{k},\]
for every function \(f\) for which \(f^{(k)}(c)\), \(k=0,1,2,...,N\) exists.
The hypergeometric series is defined for \(x\in\mathbb{C}\), \(r,s\in\mathbb{N}_{0}\), \(b_{j}\not\in-\mathbb{N}_{0}\) as [7, §1.4]
\[{}_{r}F_{s}\left(\begin{array}{c}a_{1},...,a_{r}\\ b_{1},...,b_{s}\end{array};x\right)=\sum_{k=0}^{\infty}\frac{(a_{1})_{k}\cdots(a_{r})_{k}}{(b_{1})_{k}\cdots(b_{s})_{k}}\frac{x^{k}}{k!}.\]
Given a moment functional \(\mathbf{u}\), it is said to be quasi-definite or regular (see [3]) if the Hankel matrix \(H=\left((\mathbf{u})_{i+j}\right)_{i,j=0}^{\infty}\) associated with the moments of the functional is quasi-definite, i.e., all the \(n\)-by-\(n\) leading principal submatrices are regular for all \(n\in\mathbb{N}_{0}\). Hence, there exists a sequence of polynomials \(\left(P_{n}\right)_{n\geq 0}\) such that
1. The degree of \(P_{n}\) is \(n\).
2. \(\langle\mathbf{u},P_{n}(x)P_{m}(x)\rangle=0\), \(m\neq n\).
3. \(\langle\mathbf{u},P_{n}^{2}(x)\rangle=d_{n}^{2}\neq 0\), \(n=0,1,2,...\)
Special cases of quasi-definite linear functionals are the classical ones (Jacobi, Laguerre, Hermite and Bessel).
We denote the \(n\)-th reproducing kernel by
\[K_{n}(x,y)=\sum_{k=0}^{n}\frac{P_{k}(x)P_{k}(y)}{d_{k}^{2}}.\]
From the Christoffel-Darboux formula (see [3] or [13, Eq. (3.1)]), we have
\[K_{n}(x,y)=\frac{k_{n}}{k_{n+1}d_{n}^{2}}\frac{P_{n+1}(x)P_{n}(y)-P_{n}(x)P_{n+ 1}(y)}{x-y}, \tag{2.1}\]
where \(k_{m}\) is leading coefficient of \(P_{m}(x)\), \(m\in\mathbb{N}_{0}\).
We will use the following notation for the partial derivatives of \(K_{n}(x,y)\):
\[\frac{\partial^{j+k}}{\partial x^{j}\partial y^{k}}K_{n}(x,y)=K_{n}^{(j,k)}(x,y),\quad 0\leq j,k\leq n.\]
Note that, when \(j=k=0\), \(K_{n}(x,y)=K^{(0,0)}(x,y)\) is the usual reproducing Kernel polynomial.
A direct consequence of the Christoffel-Darboux formula (2.1) is the following result [4, Eq. (16)]:
**Proposition 2.1**.: _The \(j\)-th partial derivative of the \(n\)-th reproducing kernel can be written as_
\[K_{n}^{(j,0)}(x,y)=\frac{k_{n}\,j!}{k_{n+1}d_{n}^{2}}\frac{P_{n+1}(x)[P_{n}(x;y)]_{j}-P_{n}(x)[P_{n+1}(x;y)]_{j}}{(x-y)^{j+1}}. \tag{2.2}\]
Observe the following consequence, provided that \(c\) is not a zero of \(P_{n}(x)\) for any \(n\)
\[\frac{\left|P_{n}^{(j)}(c)\right|^{2}}{d_{n}^{2}}=K_{n}^{(j,j)}(c,c)-K_{n-1}^{(j,j)}(c,c). \tag{2.3}\]
One last result will be useful to obtain some of our algebraic results.
**Lemma 2.2**.: _[_1_, Lemma 2.1]_ _Let \(M\in\mathbb{N}\), \(\mathbf{u}\) be a classical linear form. Let \(c_{1},c_{2},...,c_{M}\in\mathbb{R}\), \(\nu_{1},\nu_{2},...,\nu_{M}\in\mathbb{N}_{0}\), and let us denote by \((S_{n}^{\vec{\mu}}(x;\vec{\nu},\vec{c}))\) the sequence of orthogonal polynomials with respect to the inner product (1.1). If \(c_{i}\) is not a zero of \(S_{n}^{\vec{\mu}}(x;\vec{\nu},\vec{c})\), \(i=1,2,...,M\) for all \(n\in\mathbb{N}_{0}\) then, there exists a polynomial, namely \(\zeta(x)\), such that \(\mathbb{D}\left(\zeta(x)S_{n}^{\vec{\mu}}(x;\vec{\nu},\vec{c})\right)=\vec{0}\) holds._
_Remark 2.3_.: Observe that if all the \(c_{i}\)'s are all different then \(\zeta(x)=\prod_{j=1}^{M}(x-c_{j})^{\nu_{j}+1}\), and if all of them are equal to each other, i.e. \(c_{i}=c\) for \(i=1,2,...,M\), then \(\zeta(x)=(x-c)^{\nu_{M}+1}\).
Without loss of generality, we denote by \(\zeta(x)\) to the polynomial of minimum degree among all nonzero polynomials satisfying the conditions of the Lemma 2.2.
### The Laguerre polynomials
Let \((L_{n}^{\alpha}(x))\) be the sequence of Laguerre polynomials, orthogonal with respect to the linear form \(\mathbf{u}_{\alpha}\) on \(\mathbb{P}\). This polynomial sequence is classical since \(\mathbf{u}_{\alpha}\) fulfills the Pearson equation
\[\frac{d}{dx}\left[x\,\mathbf{u}_{\alpha}\right]=\left(\alpha+1-x\right)\mathbf{u}_{\alpha}.\]
_Remark 2.4_.: Note that if \(\Re(\alpha)>-1\) then the linear form \(\mathbf{u}_{\alpha}\) has the following integral representation (see for instance [12], [10, §18.3] or [7, §9.12]):
\[\langle\mathbf{u}_{\alpha},f\rangle=\int_{0}^{\infty}f(x)\,x^{\alpha}e^{-x}dx,\]
and for \(\alpha<0\) a Sobolev-type orthogonality is given in [12].
The Laguerre polynomial can be explicitly given in terms of hypergeometric series as
\[L_{n}^{\alpha}(x)=\frac{\left(\alpha+1\right)_{n}}{n!}_{1}F_{1}\left(\begin{array} []{c}-n\\ \alpha+1\end{array};x\right). \tag{2.4}\]
Let us summarize some basic properties of the Laguerre orthogonal polynomials that will be used throughout this work.
**Proposition 2.5**.: _Let \((L_{n}^{\alpha}(x))\) be the Laguerre polynomials. The following statements hold:_
* _The three-term recurrence relation:_ (2.5) \[(n+1)L_{n+1}^{\alpha}(x)+(x-2n-\alpha-1)L_{n}^{\alpha}(x)+(n+\alpha)L_{n-1}^{\alpha}(x)=0,\quad n=1,2,...,\] _with initial conditions_ \(L_{0}^{\alpha}(x)=1\) _and_ \(L_{1}^{\alpha}(x)=\alpha+1-x\)_._
* _The first structure relation:_ (2.6) \[x\left(L_{n}^{\alpha}(x)\right)^{\prime}=nL_{n}^{\alpha}(x)-(n+\alpha)L_{n-1}^ {\alpha}(x),\quad n=1,2,...\]
* _The second structure relation:_ (2.7) \[L_{n}^{\alpha}(x)=-\left(L_{n+1}^{\alpha}(x)\right)^{\prime}+\left(L_{n}^{ \alpha}(x)\right)^{\prime},\quad n=0,1,...\]
* _The squared norm:_ (2.8) \[d_{n}^{2}=\frac{\left(\alpha+1\right)_{n}}{n!},\quad n=0,1,...\]
* _The Ladder operators:_ (2.9) \[(L_{n}^{\alpha}(x))^{\prime} = -L_{n-1}^{\alpha+1}(x),\quad n=1,2,...,\] (2.10) \[x\left(L_{n}^{\alpha}(x)\right)^{\prime}+(\alpha-x)L_{n}^{\alpha} (x) = (n+1)L_{n+1}^{\alpha-1}(x),\quad n=0,1,...\]
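As a numerical sanity check of Proposition 2.5, the sketch below evaluates \(L_{n}^{\alpha}(x)\) by the recurrence (2.5) and verifies the first structure relation (2.6) against a finite-difference derivative; it is an illustration, not part of the proofs.

```python
# Evaluate L_n^alpha(x) via the three-term recurrence (2.5) and verify the
# first structure relation (2.6) numerically. Illustrative check only.
import numpy as np

def laguerre(n, alpha, x):
    """Evaluate L_n^alpha(x) using the recurrence (2.5)."""
    L_prev, L = np.ones_like(x), alpha + 1.0 - x
    if n == 0:
        return L_prev
    for k in range(1, n):
        # (k+1) L_{k+1} = (2k + alpha + 1 - x) L_k - (k + alpha) L_{k-1}
        L_prev, L = L, ((2*k + alpha + 1 - x) * L - (k + alpha) * L_prev) / (k + 1)
    return L

alpha, n = 0.5, 6
x = np.linspace(0.1, 5.0, 7)
h = 1e-6
dL = (laguerre(n, alpha, x + h) - laguerre(n, alpha, x - h)) / (2 * h)
lhs = x * dL                                        # x (L_n^alpha)'
rhs = n * laguerre(n, alpha, x) - (n + alpha) * laguerre(n - 1, alpha, x)
assert np.allclose(lhs, rhs, atol=1e-5)             # structure relation (2.6)
```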
## 3. The connection formulae
A first step to get asymptotic properties is to obtain an adequate expression of the polynomials \(L_{n}^{\alpha,\vec{\mu}}(x)\) in terms of the Laguerre polynomials, i.e., to solve the connection problem.
_Remark 3.1_.:
* Observe that, by construction, it is clear that \(L_{n}^{\alpha,\vec{\mu}}(x)=L_{n}^{\alpha}(x)\) for \(n=0,1,...,\nu_{1}-1\).
* For the algebraic calculations in this work, it is unnecessary to assume that the parameters \(\mu_{j}\) depend on \(n\). Therefore we will omit such dependency.
* We assume that \(L_{n}^{\alpha,\vec{\mu}}(x)\) has the same leading coefficient as \(L_{n}^{\alpha}(x)\).
Since the Laguerre polynomials constitute a basis of the polynomials, we can consider the Fourier expansion of \(L_{n}^{\alpha,\vec{\mu}}(x)\) in terms of such polynomial sequence.
**Proposition 3.2**.: _For every \(n\geq\nu_{1}\) the following identity holds:_
\[L_{n}^{\alpha,\vec{\mu}}(x)=L_{n}^{\alpha}(x)-\sum_{j=1}^{M}\mu_{j}\left(L_{n} ^{\alpha,\vec{\mu}}(c_{j})\right)^{(\nu_{j})}K_{n-1}^{(0,\nu_{j})}(x,c_{j}). \tag{3.1}\]
This is a classical result, so the proof will be omitted. Still, we need to compute the values \(\left(L_{n}^{\alpha,\vec{\mu}}(c_{j})\right)^{(\nu_{j})}\) for \(j=1,2,...,M\) in order to have the complete expression. We must use an analogous result from [2, Proposition 2]. If we define \(\mathbb{L}_{n}\) as \(\mathbb{D}L_{n}^{\alpha}(x)\), \(\mathbb{S}_{n}\) as \(\mathbb{D}L_{n}^{\alpha,\vec{\mu}}(x)\) and \(\mathbb{K}_{n}=\mathbb{D}_{x}\left(\mathbb{D}_{y}K_{n}(x,y)\right)^{T}\), then we need to solve the linear system
\[\mathbb{S}_{n}=\mathbb{L}_{n}-\mathbb{K}_{n-1}^{T}D^{T}\mathbb{S}_{n}.\]
Therefore, after some straightforward manipulations, (3.1) becomes the desired compact connection formula [2, Proposition 2].
\[L_{n}^{\alpha,\vec{\mu}}(x)=L_{n}^{\alpha}(x)-\mathbb{L}_{n}^{T}(\mathbb{I}+D \mathbb{K}_{n-1})^{-1}D\mathbb{K}_{n-1}(x), \tag{3.2}\]
where \(\mathbb{K}_{n}(x)=\mathbb{D}_{y}K_{n}(x,y)\).
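For a single mass point (\(M=1\)), the matrix inverse in (3.2) collapses to a scalar, and the discrete Laguerre-Sobolev polynomial can be evaluated from the derivative rule (2.9) iterated \(\nu\) times, \((L_{k}^{\alpha})^{(\nu)}=(-1)^{\nu}L_{k-\nu}^{\alpha+\nu}\), together with the norms (2.8). The following sketch assumes the normalization \((\mathbf{u}_{\alpha})_{0}=1\) implicit in (2.8):

```python
# Sketch of the connection formula (3.2) specialized to one mass point
# (M = 1), where (I + D K)^{-1} D reduces to mu / (1 + mu * K_nn).
import numpy as np
from math import factorial
from scipy.special import eval_genlaguerre, poch

def sobolev_laguerre(n, alpha, mu, c, nu, x):
    """Evaluate L_n^{alpha,mu}(x) for <f,g> = <u_alpha,fg> + mu f^(nu)(c) g^(nu)(c)."""
    d2 = lambda k: poch(alpha + 1.0, k) / factorial(k)        # norms (2.8)
    # nu-th derivative via (2.9): (L_k^alpha)^(nu) = (-1)^nu L_{k-nu}^{alpha+nu}
    dL = lambda k, t: 0.0 if k < nu else (-1)**nu * eval_genlaguerre(k - nu, alpha + nu, t)
    K_nn = sum(dL(k, c)**2 / d2(k) for k in range(n))         # K_{n-1}^{(nu,nu)}(c,c)
    K_0n = sum(eval_genlaguerre(k, alpha, x) * dL(k, c) / d2(k) for k in range(n))
    return eval_genlaguerre(n, alpha, x) - dL(n, c) * mu / (1.0 + mu * K_nn) * K_0n
```

Here \(K_{0n}\) plays the role of \(K_{n-1}^{(0,\nu)}(x,c)\), and the scalar correction follows from taking the \(\nu\)-th derivative of (3.1) at \(c\) and solving for \(\left(L_{n}^{\alpha,\vec{\mu}}(c)\right)^{(\nu)}\).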
This is the identity we expected to obtain, but other connection formulas are also available.
_Remark 3.3_.: Observe that the discrete Laguerre-Sobolev polynomials exist for all \(n\) if and only if the matrices \(\mathbb{I}+D\mathbb{K}_{n-1}\) are regular for all \(n=1,2,...\).
In the next result, we establish a connection formula for the discrete Sobolev polynomials \(L_{n}^{\alpha,\vec{\mu}}(x)\) similar to the one obtained in [11, Theorem 1] for non-varying discrete Sobolev orthogonal polynomials.
Let \(\zeta(x)\) be the polynomial, of degree \(\nu\), we obtain from Lemma 2.2, then it is clear that for any two polynomials \(f\) and \(g\), we have
\[\langle\zeta(x)f(x),g(x)\rangle=\langle\mathbf{u}_{\alpha},\zeta(x)f(x) \overline{g(x)}\rangle=\langle f(x),\overline{\zeta(x)}g(x)\rangle. \tag{3.3}\]
**Proposition 3.4**.: _Let \((\zeta_{j}(x))_{j=0}^{\nu}\) be a sequence of polynomials, with \(\zeta_{\nu}(x)=\zeta(x)\), such that \(\deg\zeta_{k}(x)=k\) and it is a divisor of \(\zeta_{k+1}(x)\) for \(k=0,1,...,\nu-1\), and let \(\left(P_{n}^{[\zeta_{j}^{2}]}(x)\right)\) be the polynomials orthogonal with respect to the linear functional \(|\zeta_{j}(x)|^{2}\mathbf{u}_{\alpha}\), for \(j=0,1,...,\nu\)._
_If the following conditions hold_
\[P_{n}(c_{j})P_{n-1}^{[\zeta_{1}^{2}]}(c_{j})\cdots P_{n-\nu}^{[\zeta_{\nu}^{2}]}(c_{j})\neq 0,\quad j=1,2,...,M, \tag{3.4}\]
_then, there exists a family of coefficients \((\lambda_{j,n})_{j=0}^{\nu}\), not identically zero, such that for any \(n\geq\nu\) the following connection formula holds:_
\[L_{n}^{\alpha,\vec{\mu}}(x)=\sum_{j=0}^{\nu}\lambda_{j,n}\,\zeta_{j}(x)P_{n-j }^{[\zeta_{j}^{2}]}(x). \tag{3.5}\]
Another connection formula connects the discrete Sobolev polynomials with the derivatives of the Laguerre polynomials; its proof is similar to the previous one.
**Proposition 3.5**.: _For every \(n\geq\nu\) the following identity holds:_
\[L_{n}^{\alpha,\vec{\mu}}(x)=\sum_{k=0}^{\nu}\xi_{k,n}\,L_{n-k}^{\alpha+k}(x). \tag{3.6}\]
Extending these results to other classical families is straightforward, as is extending them to more generic frameworks such as the discrete classical polynomials.
|
2310.09054 | Comparison between Conventional Load Flow, QSTS Simulation, and Dynamic
Simulation to Assess the Operation of Step Voltage Regulators in Active
Distribution Networks | The assessment of step voltage regulator (SVR) operation in active
distribution networks requires computational analysis tools capable of tackling
the emerging technical challenges. Conventional load flow (CLF), quasi-static
time series (QSTS) and dynamic simulations are typically employed to
investigate high-penetration distributed generation (DG) interconnection
impacts. Regarding the SVR runaway condition phenomenon, however, a consensus
has yet to be reached on the most cost-effective simulation technique for
capturing and reproducing the correct sequence of events. This work presents a
comparative study of the CLF, QSTS and dynamic simulation techniques through
modelling and analysis of two SVR-controlled test-feeders, in order to evaluate
each approach performance in addressing scenarios of DG-caused reverse active
power flow. Detailed descriptions of feeder voltage profile and SVR tap
operations are provided to facilitate understanding of the mechanisms that
characterize SVR runaway condition, as well as the advantages and drawbacks of
each of the studied simulation techniques. | V. M. Souza, H. R. Brito, J. P. A. Vieira, M. E. L. Tostes, U. H. Bezerra, H. N. S. Cardoso, M. S. Costa | 2023-10-13T12:22:45Z | http://arxiv.org/abs/2310.09054v1 | Comparison between conventional LOAD FLOW, QSTS SIMULATION, and dynamic simulation to assess the operation of STEP VOLtage regulators in active DISTRIBUTION NETWORKS
###### Abstract
The assessment of step voltage regulator (SVR) operation in active distribution networks requires computational analysis tools capable of tackling the emerging technical challenges. Conventional load flow (CLF), quasi-static time series (QSTS) and dynamic simulations are typically employed to investigate high-penetration distributed generation (DG) interconnection impacts. Regarding the SVR runaway condition phenomenon, however, a consensus has yet to be reached on the most cost-effective simulation technique for capturing and reproducing the correct sequence of events. This work presents a comparative study of the CLF, QSTS and dynamic simulation techniques through modelling and analysis of two SVR-controlled test-feeders, in order to evaluate each approach's performance in addressing scenarios of DG-caused reverse active power flow. Detailed descriptions of feeder voltage profile and SVR tap operations are provided to facilitate understanding of the mechanisms that characterize SVR runaway condition, as well as the advantages and drawbacks of each of the studied simulation techniques.
Step Voltage Regulator - Distribution Network - Conventional Load Flow - QSTS Simulation - Dynamic Simulation - Distributed Generation - Runaway Condition
## 1 Introduction
The increasing presence of distributed generation (DG) units in low and medium-voltage distribution grids has motivated the development of analysis and planning tools capable of tackling the arising operational challenges. One of the main impacts stemming from the growing number of DG-grid interconnections requested by independent power producers (IPPs) is the feeder overvoltage problem, further aggravated by the high R/X cable ratio typically found at the distribution level [1]. In this context, the adequate operation of traditional voltage control devices such as the step voltage regulator (SVR), frequently employed in Brazil at long rural distribution feeders, is paramount for maintaining steady-state line voltages within acceptable operating ranges, as per local regulatory standards.
However, high DG penetration levels can adversely affect SVR performance, depending on both devices' control mode settings. The most common negative consequences of such association include excessive SVR tap operations, when the DG has a primary energy source of high output variability, and loss of SVR voltage control capability due to runaway condition, when the DG power supply exceeds the total feeder demand downstream of the SVR location [2]-[3]. Both cases entail excessive SVR wear and tear, reduction of the device's useful life and higher maintenance costs. In addition, overvoltage or undervoltage issues are possible outcomes, especially on the lower short-circuit level side of the feeder.
Many authors opt to employ traditional simulation techniques when studying the aforementioned DG integration impacts, usually resorting to either conventional load flow (CLF) methods [4]-[5] or real-time dynamic simulations [6]-[7]. As an alternative, quasi-static time series (QSTS) analyses have found wide application in the assessment of SVR-controlled distribution systems subjected to high levels of photovoltaic generation, so as to better understand high output variability impacts in a time window of hours, days or even months. Such investigations attest to the effectiveness of the QSTS method and highlight its advantages with respect to traditional approaches [8]-[10].
On the other hand, proper reproduction of SVR runaway condition characteristics - from the moment the active power flow through the device is reversed until the resulting sequence of tap operations is concluded - requires a significantly smaller time window, in the order of seconds or minutes. For this reason, in regard to the runaway phenomenon, the applicability of the QSTS method and its performance vis-a-vis traditional simulation techniques are not evident. Refs. [3] and [11] present SVR runaway condition mitigation strategies based on QSTS simulations, but no other techniques are addressed. Ref. [12] compares the CLF and QSTS methods in terms of chronological accuracy, but does not include dynamic simulations, useful frames of reference for their level of detail, in its results. It is noticeable that a consensus has yet to be reached on the most cost-effective simulation technique for SVR runaway condition studies.
This work aims to conduct a comparative study of the CLF, QSTS and dynamic simulation techniques with special focus on their accuracy in reproducing the sequence of events that leads to SVR runaway condition in the presence of a high-penetration DG unit. CLF and QSTS analyses are performed in the Open Distribution Simulator Software (OpenDSS), whereas dynamic analyses are performed in the Analysis of Electro-mechanical Transients Software (ANATEM). Two distribution test-feeders are modelled in both simulation environments using equivalent system parameters and operating conditions consistent with the scope of the study. Comparative results of feeder voltage profile and SVR tap operations are used to assess the voltage regulating mechanisms, as well as the advantages and drawbacks of each of the simulation techniques.
## 2 Operational Aspects of Step Voltage Regulators
The SVR device typically employed in Brazilian medium-voltage distribution feeders is essentially an autotransformer with a load tap changing mechanism in its series winding. Depending on feeder loading, the voltage induced at such winding can either be added to or subtracted from the primary voltage, which allows for bilateral adjustments of small deviations with respect to a user-defined voltage setpoint. Many SVR models adopt a total regulation range of \(\pm\)10 %, equally divided into 33 discrete steps: 16 voltage raising positions, 16 voltage lowering positions, and the neutral position. Therefore, each tap operation amounts to a 0.625 %, or 0.00625 normalized p.u., change per step. Details concerning constructive aspects of the single-phase SVR device used in simulations are available in [3].
Fig. 1 shows the simplified dynamic model of the SVR. The input voltage (\(V_{\text{in}}\)) measured at the SVR regulation point is compared to the voltage setpoint (\(V_{\text{ref}}\)), thus resulting in a voltage error signal (\(V_{\text{error}}\)) sent to the "Measuring Element" block. The error is in turn compared to the deadband (D), an adjustable range of allowed variance around \(V_{\text{ref}}\), and at times to the hysteresis band (\(\epsilon\)), a parameter that mitigates frequent tap operations during temporary oscillations around D. If \(V_{\text{error}}\) exceeds such limits, an activation signal (\(V_{\text{act}}\)) is sent to the "Tap Changer" block, triggering its timer relay. Although time delay schemes vary widely depending on Brazilian utility practices, many employ a double-time delay scheme, where the first tap operation trigger (\(T_{1}\)) is slower than the subsequent ones (\(T_{2}=T_{3}=...\)). Once the relay times out, the tap changing command is sent to a motor drive unit, which mechanically carries out the operation. This procedure is repeated as many times as necessary until \(V_{\text{error}}\) is within deadband limits.
Figure 1: Simplified dynamic model of a step voltage regulator.
The block diagram of Fig. 1 represents the SVR model developed in ANATEM for dynamic simulations. Its parameters, however, are also integrated to the iterative load flow calculations of CLF and QSTS simulations conducted in OpenDSS. In both platforms, SVR operation within the feeder is set to bidirectional mode, which is the recommended control setting when network reconfiguration via relay switching schemes is possible [11]. In addition, this setting is the most susceptible to the occurrence of DG-caused SVR runaway condition, especially when the DG is unable to alleviate system voltage deviations through reactive power exchanges, i.e., when it is set to operate in unity power factor control mode [12].
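A minimal sketch of the tap-changer logic of Fig. 1 (deadband check plus a double-time delay) advanced once per simulation time step is given below; treating D as the full band width and the relay reset behavior are modelling assumptions, since utility conventions vary.

```python
# Sketch of the Fig. 1 control logic: deadband test, double-time delay,
# and a bounded tap counter (+/-16 steps of 0.00625 p.u. each). D is taken
# here as the full band width; utility conventions for this vary.
def svr_update(state, v_in, v_ref, D, T1, T2, dt, tap_min=-16, tap_max=16):
    """state = (tap, timer, in_sequence); voltages in p.u., times in s."""
    tap, timer, in_sequence = state
    error = v_in - v_ref
    if abs(error) <= D / 2.0:              # back inside the deadband:
        return (tap, 0.0, False)           # the timer relay resets
    timer += dt
    delay = T2 if in_sequence else T1      # double-time delay scheme
    if timer >= delay:                     # relay timed out: operate
        tap += -1 if error > 0 else 1      # lower tap on overvoltage
        tap = max(tap_min, min(tap_max, tap))
        return (tap, 0.0, True)            # subsequent changes use T2
    return (tap, timer, in_sequence)
```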
In bidirectional mode, the SVR determines its regulation point based on the direction of the active power flow. Fig. 2 depicts the situation of direct active power flow through the feeder, when the DG supplies less real power than the load center downstream of the SVR demands. In this case, the resulting active power flows through the SVR from the primary substation (PS) to the DG unit, and the device regulates point 2, located on the lower short-circuit level side. This operational scenario is considered acceptable since voltage control through tap changing is effective.
Fig. 3 depicts the situation of reverse active power flow through the feeder, when the real power supplied by the DG exceeds the load center demand. In this case, the resulting active power through the SVR flows from DG to PS, and the device regulates point 1, located on the higher short-circuit level side. The tap changer thus operates in an effort to reduce the voltage at this point with negligible results, given the electrically strong nature of the PS side of the feeder. The outcome is a sequence of failed regulation attempts and, due to the ensuing reactive power flow, a net effect of significant voltage rise at point 2. Successive operations continue until the tap limit is reached, leading to a 10 % overvoltage downstream of the SVR.
The loss of SVR voltage control capability by attempting to regulate a point on the higher short-circuit level side of the feeder characterizes the phenomenon known in literature as reverse power tap changer runaway condition. The possibility of reverse active power flow through the SVR due to high DG penetration levels, even if temporarily, makes SVR bidirectional mode unacceptable in a real scenario. Besides the mentioned voltage violations, adverse effects include shortened SVR lifespan, excessive wear and tear and higher maintenance costs. These negative consequences justify the need for computational analysis tools that ensure proper understanding of the chronological progression of the runaway phenomenon.
## 3 Computational Analysis Techniques
Fig. 4 compares required grid parameters for CLF, QSTS and dynamic simulations. It is evident that, among the three, the CLF method has the smallest number of requirements, as it only depends on feeder loading (active and reactive power demand) and DG specifications (active power injection and power factor (PF)). Thus, the SVR power flow is calculated iteratively and temporal dependencies do not influence convergence.
Figure 3: Bidirectional SVR during reverse active power flow. Source: adapted from [7].
Figure 2: Bidirectional SVR during direct active power flow. Source: adapted from [7].
Iterative load flow solutions are also inherent to the QSTS formulation, although the method includes both time-varying parameters, such as load demand and DG penetration profiles, and time-dependent parameters, such as the power flow through the SVR based on its tap positions. With respect to DG-SVR interactions, the QSTS simulation is an asset for combining the simplicity of iterative calculations with the chronological accuracy of temporal considerations. Furthermore, this is accomplished without resorting to an unnecessary level of detail for low-frequency dynamics events, such as SVR tap operations [9].
The dynamic simulation, on the other hand, employs numerical integration techniques and requires not only load and generation profiles but also detailed modelling of network components such as the SVR and the DG. Despite the higher computational effort, it is the closest available approximation to a real impact study scenario, being an useful frame of reference for evaluating the performance of the iterative methods.
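The QSTS workflow described above can be sketched as a loop of snapshot solutions with the controller state carried between steps. The two-bus voltage model below is a toy stand-in for the OpenDSS solution engine (it omits, among other things, the bidirectional regulation-point switch), used only to make the loop runnable; it reuses the `svr_update` sketch given earlier.

```python
# Schematic QSTS loop: one snapshot load-flow solution per time step, with
# the SVR state (tap, timer) carried over between steps. The voltage model
# is a deliberately crude toy, not an OpenDSS API.
def toy_load_flow(P_load, P_dg, tap, v_source=1.0, r_eq=0.05):
    """Downstream voltage of a toy feeder: source drop plus SVR boost."""
    return (v_source - r_eq * (P_load - P_dg)) * (1.0 + 0.00625 * tap)

def qsts(load_profile, dg_profile, dt=1.0, v_ref=1.0, D=0.01, T1=30.0, T2=5.0):
    state, trace = (0, 0.0, False), []
    for P_load, P_dg in zip(load_profile, dg_profile):
        v_reg = toy_load_flow(P_load, P_dg, state[0])         # snapshot solution
        state = svr_update(state, v_reg, v_ref, D, T1, T2, dt)  # sketch above
        trace.append((v_reg, state[0]))
    return trace

load = [1.0] * 350                                            # flat 1 MW demand
dg = [min(2.5, max(0.0, (t - 10) / 190) * 2.5) for t in range(350)]  # ramp as in Fig. 6
trace = qsts(load, dg)
```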
## 4 Results and Discussions
The comparison between the aforementioned analysis techniques is carried out using two test-feeders, both modelled in the OpenDSS and ANATEM programming environments. CLF and QSTS simulations are performed in OpenDSS, whereas dynamic simulations are performed in ANATEM.
In all scenarios, the SVR is set to operate in bidirectional mode with a deadband of 1 % (D = 1 %) around its voltage setpoint and with no hysteresis band (\(\varepsilon=0\)). Additionally, a double-time delay scheme is adopted: first tap change delay of 30 s (\(T_{1}=30\) s) and subsequent tap change delay of 5 s (\(T_{2}=T_{3}=...=5\) s).
The time step for OpenDSS simulations is 1 s, whereas the integration step size for ANATEM simulations is 0.1 s. All cases studied have the same total simulation time of 350 s and are run on a machine with an Intel i7-6500U 2.5/2.6 GHz processor and 8 GB RAM. These choices were made to facilitate as much as possible the comparative study between distinct platforms without compromising the validity of the results.
### 4-Bus Test-Feeder
Fig. 5 shows the first test-feeder: a simple 4-bus radial distribution network with 13.8 kV base voltage. The PS bus (B1) is the system's angular reference with 1.03 p.u. voltage setpoint. The SVR voltage setpoint is of 1 p.u. (Vref = 1 p.u.). The DG unit is set to operate in unity power factor control mode and static ZIP parameters are assigned to the aggregated load model accordingly.
Fig. 6 shows the active power flow variation through the SVR as the DG real power injection increases in linear fashion. Its ramp-up behavior starts at 10 s of simulation and stabilizes after reaching 2.5 MW at 200 s. Once DG penetration exceeds feeder loading, reverse active power flow through the SVR occurs (at around 110 s). This changes henceforth the regulation point of the bidirectional SVR from bus B3 to bus B2.
Figure 4: (a) Grid parameters for dynamic simulation. (b) Grid parameters for QSTS simulation.
Fig. 7 compares the analysis techniques in terms of SVR tap evolution and voltage profiles of SVR-regulated buses. Results obtained from QSTS and dynamic simulations are remarkably similar. During direct active power flow, three tap operations for bus B3 voltage correction occur practically at the same moment in both simulations. During reverse active power flow, similarities are also noticeable since sequential tap changes closely coincide. The biggest difference lies in bus B3's magnitude of overvoltage - the outcome of failed attempts at regulating bus B2 voltage via SVR tap operations - which stabilizes at 1.1705 p.u. in the dynamic approach and at 1.1615 p.u. in the QSTS approach. The latter method, thus, presents a bus B3 steady-state voltage underestimation error of 0.77 %.
Conversely, results obtained from CLF simulations differ in several aspects by not considering time-delay schemes associated with SVR tap operations, thereby leading to an inaccurate reproduction of the sequence of events. During direct active power flow, two tap changes, instead of three, occur without delay once bus B3 voltage violates the adopted deadband, and long before any control response in the other simulations. During reverse active power flow, SVR runaway condition is represented by an almost instantaneous tap excursion to the upper tap limit, resulting in overvoltage issues downstream of the device which are further aggravated by the DG ramp. In the CLF approach, bus B3 voltage stabilizes at 1.1573 p.u., indicating an underestimation error of 1.12 % with respect to the dynamic approach frame of reference.
Fig. 8 shows complete feeder voltage profiles for each technique. Bus B2 voltage is unaffected by SVR tap changes, whereas buses B3 and B4 develop severe overvoltage. In terms of chronological accuracy, only the QSTS and dynamic approaches provide suitable results for a reliable assessment of SVR runaway condition.
Fig. 5: Single-line diagram of the 4-bus test-feeder.
Fig. 6: DG active power ramp and active power flow through the SVR.
### 95-Bus UKGDS Test-Feeder
Fig. 9 shows the second test-feeder: the 95-bus United Kingdom generic distribution system (UKGDS) with 11 kV base voltage. This realistic radial network supplies four different kinds of consumer units - industrial, commercial, domestic unrestricted and domestic economy. The PS bus (bus 1) is the system's angular reference with 1 p.u. voltage setpoint. Runaway condition is observed in the SVR located between buses 23 and 24, with 0.98 p.u. voltage setpoint (V\({}_{\text{ref}}\) = 0.98 p.u.). The DG unit is set to operate in unity power factor control mode and static ZIP parameters are assigned to the distinct consumer load models accordingly.
Fig. 10 shows the active power flow variation through the SVR as the DG real power injection increases in linear fashion. Its ramp-up behavior starts at 10 s of simulation and stabilizes after reaching 2 MW at 200 s. Once DG penetration exceeds feeder loading, reverse active power flow through the SVR occurs (at around 95 s). This changes henceforth the regulation point of the bidirectional SVR from bus 23 to bus 24.
Fig. 11 compares the approaches in terms of SVR tap evolution and voltage profiles of SVR-regulated buses. During direct active power flow, the only disparity lies in the tap operation that happens exclusively with the CLF technique. During reverse active power flow, differences between simulations are accentuated as the upper tap limit is quickly reached in the CLF approach, leading to overvoltage at bus 23 which is further aggravated by the still ongoing DG ramp. However, the other simulations show that the upper tap limit is not reached until later, after DG injection stabilizes. This is another chronological discrepancy arising from time-dependent tap change delays. QSTS performance compared to the dynamic reference is once again adequate, with a small underestimation of bus 23 steady-state voltage (1.0974 p.u. against 1.1048 p.u., respectively).
Fig. 12 shows voltage profiles of buses located upstream and downstream of the SVR for each technique. All approaches are consistent in showing that the SVR is incapable of controlling the voltage of an upstream regulation point during reverse power flow, and that its failed attempts result in downstream overvoltage due to runaway condition. Nevertheless, only QSTS and dynamic simulations are able to provide an adequate reproduction of the sequence of events for the correct assessment of the phenomenon and its consequences.
Fig. 11: (a) SVR tap evolution; (b) voltage profiles of SVR-regulated buses.
Fig. 12: (b). Feeder voltage profile: QSTS analysis.
## 5 Conclusions
Results shown in this work highlight the need for chronological considerations when choosing the best simulation approach to assess SVR performance in a time window of minutes or seconds. The CLF method requires the least amount of computational effort, but is prone to temporal inconsistencies which might impair understanding of SVR runaway condition progression and its adverse effects. On the other hand, the dynamic method provides the closest approximation to a real scenario, but elevates computational effort as it employs overly detailed models too complex for the low-frequency dynamics inherent to SVR tap operations.
In this context, the QSTS method stands out as a simulation tool with low computational effort (6.9 ms of processing time against 6.6 ms with CLF approach and 1.83 s with dynamic approach for the UKGDS test-feeder) as well as high accuracy (0.77 % maximum underestimation error) for distribution network studies. It is, thus, an asset for DG interconnection impact assessments concerning SVR runaway condition chronology.
|
2308.13246 | Model-free Reinforcement Learning with Stochastic Reward Stabilization
for Recommender Systems | Model-free RL-based recommender systems have recently received increasing
research attention due to their capability to handle partial feedback and
long-term rewards. However, most existing research has ignored a critical
feature in recommender systems: one user's feedback on the same item at
different times is random. The stochastic rewards property essentially differs
from that in classic RL scenarios with deterministic rewards, which makes
RL-based recommender systems much more challenging. In this paper, we first
demonstrate in a simulator environment that using direct stochastic feedback
results in a significant drop in performance. Then to handle the stochastic
feedback more efficiently, we design two stochastic reward stabilization
frameworks that replace the direct stochastic feedback with that learned by a
supervised model. Both frameworks are model-agnostic, i.e., they can
effectively utilize various supervised models. We demonstrate the superiority
of the proposed frameworks over different RL-based recommendation baselines
with extensive experiments on a recommendation simulator as well as an
industrial-level recommender system. | Tianchi Cai, Shenliao Bao, Jiyan Jiang, Shiji Zhou, Wenpeng Zhang, Lihong Gu, Jinjie Gu, Guannan Zhang | 2023-08-25T08:42:45Z | http://arxiv.org/abs/2308.13246v1 | # Model-free Reinforcement Learning with Stochastic Reward Stabilization for Recommender Systems
###### Abstract.
Model-free RL-based recommender systems have recently received increasing research attention due to their capability to handle partial feedback and long-term rewards. However, most existing research has ignored a critical feature in recommender systems: one user's feedback on the same item at different times is random. The stochastic rewards property essentially differs from that in classic RL scenarios with deterministic rewards, which makes RL-based recommender systems much more challenging. In this paper, we first demonstrate in a simulator environment that using direct stochastic feedback results in a significant drop in performance. Then, to handle the stochastic feedback more efficiently, we design two stochastic reward stabilization frameworks that replace the direct stochastic feedback with that learned by a supervised model. Both frameworks are model-agnostic, i.e., they can effectively utilize various supervised models. We demonstrate the superiority of the proposed frameworks over different RL-based recommendation baselines with extensive experiments on a recommendation simulator as well as an industrial-level recommender system.
Recommender System. Reinforcement Learning.

Footnote †: journal: Information Retrieval (SIGIR '23).

## 1. Introduction
In recommender systems, the high stochasticity of rewards tends to deteriorate the sample efficiency, which is analogous to supervised learning, where the stochasticity of signals slows down the convergence (Beng et al., 2015; Li et al., 2017).
In this paper, we first conduct an empirical study in a standard RL recommendation scenario, i.e., the Virtual Taobao (Tianchi et al., 2017), to disclose the detrimental effect of high stochasticity of rewards. As shown in Figure 1, model-free RL methods trained with stochastic rewards suffer from slow convergence and low final performance compared to those trained with deterministic rewards.
To handle the stochasticity of reward and further improve the sample efficiency of model-free RL-based recommender systems, we propose the Stochastic Reward Stabilization (SRS) framework, which replaces the stochastic reward by its conditional expectation predicted via an extra reward estimation model. Integrating SRS with auxiliary training (Beng et al., 2015; Li et al., 2017) results in the SRS with Shared Representation (SRS2) framework, which further accelerates the training of user and item representations. Both frameworks are model-agnostic, as they allow any supervised recommendation method to be used to enhance model-free RL-based recommender systems, which is attractive to industrial practitioners. Extensive experiments on a publicly available recommendation simulator and a real-world billion-user recommendation task show the superiority of the proposed method in sample efficiency and final performance.
## 2. Related Work
**Model-free RL.** Most currently used RL-based recommender systems are based on model-free methods, which can be categorized into three classes: value-based methods (Zhou et al., 2017; Li et al., 2018), policy-based methods (Li et al., 2018), and actor-critic methods (Li et al., 2018; Li et al., 2018). See (Li et al., 2018) for a detailed survey. As a model-agnostic approach, our proposed SRS can be combined with these methods by replacing instantaneous rewards in the Bellman equation (in value-based/actor-critic methods) or in the policy gradient calculation (in policy-based methods) with estimated rewards.
**Model-based RL.** These methods directly model the environment dynamics (Li et al., 2018; Li et al., 2018; Li et al., 2018), which improves sample efficiency. However, it is very difficult to estimate the state transition in many real-world recommendation tasks with large state and action spaces (Beng et al., 2015).
**RL with Auxiliary Training.** Another approach to improve sample efficiency is auxiliary training, which accelerates the learning of representations via extra tasks. Exemplar methods include DQN-att (EDRR)(Li et al., 2018), DRR-att (EDRR)(Li et al., 2018), and URL (Li et al., 2018). A relevant approach is self-supervised or imitation learning, which also introduces auxiliary tasks to learn better representations (Zhou et al., 2017; Li et al., 2018). Although we also have a reward estimation task, it is intrinsically different, as our task aims to generate a deterministic reward for the RL model rather than to learn better representations. Note that our method can be combined with auxiliary training, as discussed in Section 4.2.
## 3. Problem Formulation
In RL-based recommender systems, interactions between the user and the recommender system are modeled by Markov Decision Processes (MDPs) (Zhou et al., 2017). Formally, an MDP can be represented by a tuple \((\mathcal{S},\mathcal{A},\mathbb{P},R,\gamma,\rho_{0})\). At each step \(t=1,2,...\), the recommender system observes user/context features as the current state \(s_{t}\in\mathcal{S}\) and chooses an action \(a_{t}\in\mathcal{A}\) that represents the recommended item according to a policy \(\pi_{\theta}(\cdot|s_{t})\) parameterized by \(\theta\). The user then replies to the system with certain feedback (e.g., click, rating, purchase) taken as the reward \(r_{t}\) with probability \(R(r_{t}|s_{t},a_{t})\), and the state at the next step transfers to \(s_{t+1}\) with probability \(\mathbb{P}(s_{t+1}|s_{t},a_{t})\). The process goes on until reaching certain terminal states or a pre-defined maximum number of steps \(T\).
The objective of the agent is to find a policy \(\pi_{\theta}\) that maximizes the expected discounted cumulative reward over all users, i.e.,
\[\max_{\pi_{\theta}}\ \mathbb{E}_{\pi_{\theta}}\left[\sum_{t=0}^{\infty}\gamma^{t}r_{t}\,\middle|\,s_{0}\sim\rho_{0}\right], \tag{1}\]
where \(\rho_{0}\) is the initial state distribution and \(\gamma\) is the discount factor for future reward; the expectation is taken over the above stochastic process, i.e., \(a_{t}\sim\pi_{\theta}(\cdot|s_{t}),r_{t}\sim R(\cdot|s_{t},a_{t}),s_{t+1}\sim \mathbb{P}(\cdot|s_{t},a_{t})\).
## 4. Our Method
Recall that, as shown in Figure 1, the stochasticity of rewards in recommendation tasks severely degrades the sample efficiency and performance of existing model-free RL methods. In this section, we propose a Stochastic Reward Stabilization (**SRS**) framework, which stabilizes the reward by replacing the observed reward with its expectation conditioned on the state and action \(\hat{r}:=\mathbb{E}(r|s,a)\), where the expectation is estimated via a reward estimation model.
### Stochastic Reward Stabilization (SRS)
In many recommendation tasks, the immediate reward is usually the click, purchase, or user retention. Many supervised reward estimation techniques have been proposed in recommender systems (Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018; Li et al., 2018). We consider using them to stabilize the reward and propose the SRS framework, which is depicted in Figure 2. With samples \(\mathcal{D}:=\{(s,a,r,s^{\prime})\}\) containing state \(s\), action \(a\), immediate reward \(r\) and next state \(s^{\prime}\), model-free deep RL methods use the samples directly to learn an optimal policy \(\pi_{\theta}(\cdot|s)\). To tackle the sample inefficiency caused by the stochastic reward, we propose a reward stabilization process that replaces the stochastic reward by its conditional expectation \(\hat{r}:=\mathbb{E}(r|s,a)\), as illustrated in the dashed block in Figure 2. Then the reconstructed dataset \(\hat{\mathcal{D}}=\{(s,a,\hat{r},s^{\prime})\}\) is passed to the RL training algorithms. Note that our method only uses a reward estimation model, which is different from model-based methods that require estimating the state transition probability.

Figure 1. Illustration of performance decline due to stochastic rewards on Virtual Taobao. The plots compare the performance of various model-free RL methods trained with stochastic or deterministic rewards. See experiment details in Section 5.
The proposed method is model-agnostic, in the sense that _any_ supervised recommendation technique can be used to boost the sample efficiency of _any_ RL-based recommender system. We here give an example on the classic DQN method. Recall that value-based methods evaluate the Q value function, i.e., the long-term expected reward of taking action \(a\) at state \(s\) and following policy \(\pi_{\theta}\). Replacing \(r\) with \(\hat{r}\) in its form, the modified Q function becomes
\[\hat{Q}_{\pi_{\theta}}(s,a)\coloneqq\mathbb{E}_{\pi_{\theta}}\left[\sum_{t=0}^{\infty}\gamma^{t}\hat{r}_{t}\,\middle|\,s_{0}=s,a_{0}=a\right]. \tag{2}\]
Then the agent selects the action to maximize the modified Q function at each step, i.e., \(a\in\arg\max_{a}\hat{Q}_{\pi_{\theta}}(s,a)\). In vanilla DQN, the Q function is learned from the collected dataset \(\mathcal{D}\) via the temporal difference approach (Srivastava et al., 2017). Applying SRS, the loss function becomes
\[L_{DQN}=\mathbb{E}_{(s,a,\hat{r},s^{\prime})\sim\hat{\mathcal{D}}}\big[(y-\hat{Q}_{\pi_{\theta}}(s,a))^{2}\big], \tag{3}\]
where \(y=\hat{r}+\gamma\max_{a^{\prime}}\hat{Q}_{\pi_{\theta^{\prime}}}(s^{\prime},a^{\prime})\) is the estimated modified Q value predicted by the target network with parameters \(\theta^{\prime}\)(Kang et al., 2018).
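As an illustration of this update, the following numpy sketch builds the SRS-modified TD targets; the callables `reward_model` and `q_target_net` are assumptions standing in for the supervised estimator and the target network.

```python
import numpy as np

def srs_dqn_targets(batch, q_target_net, reward_model, gamma=0.99):
    """Build SRS targets y = r_hat + gamma * max_a' Q_target(s', a').

    The observed stochastic reward r in each (s, a, r, s') transition is
    discarded and replaced by r_hat = E[r | s, a] from the supervised model.
    """
    targets = []
    for s, a, r, s_next in batch:
        r_hat = reward_model(s, a)                          # stabilized reward
        targets.append(r_hat + gamma * np.max(q_target_net(s_next)))
    return np.array(targets)
```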
Besides valued-based RL, other branches of model-free RL, namely actor-critic and policy-based methods, can also be combined with SRS by optimizing the strategy \(\pi_{\theta}\) based on the modified dataset \(\hat{\mathcal{D}}\) instead of the collected dataset \(\mathcal{D}\). Their detailed formulations are omitted from this paper due to the space limit.
### SRS with Shared Representation (SRS2)
As mentioned before, current model-free RL-based recommender systems often share the representation of states and actions of the RL task with an auxiliary task, which substantially accelerates the training of the representations, especially in scenarios with large state and action spaces (Kang et al., 2018; Li et al., 2019; Li et al., 2020; Li et al., 2020). Here we incorporate this idea into SRS, which results in the SRS with Shared Representation (**SRS2**) framework, achieving the merits of both worlds.
As shown in Figure 3, the reward estimation and the RL models share a common embedding, which maps each user or item features to a state embedding \(s\) or an action embedding \(a\), respectively. The embedding vectors \(s\) and \(a\) are concatenated and fed into the subsequent RL and reward estimation modules. At the training stage, since the supervised signal of the reward estimation task is more stable than the long-term signal of the RL task (Li et al., 2020), we only use gradients of the supervised task to update the embedding layer.
Note that from the perspective of representation learning, our introduced reward estimation task can be understood as a kind of auxiliary task, which is previously used to accelerate the learning of embeddings (Li et al., 2019; Li et al., 2020). However, in SRS2, the output signal \(\hat{r}\) of the task is further used to guide the training of the RL module, which is a core difference from previous methods. The double usages of the reward estimation module in SRS2 combine the benefits of SRS and shared representation, which attains better convergence and final performance than previous auxiliary training methods empirically.
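To make the gradient gating concrete, here is a minimal PyTorch sketch of the SRS2 layout; the layer sizes and module names are illustrative assumptions, not the paper's production architecture.

```python
import torch
import torch.nn as nn

class SRS2Net(nn.Module):
    """Shared embedding feeds both heads, but only the supervised
    reward-estimation loss updates it: the RL head sees a detached copy."""
    def __init__(self, n_users, n_items, dim=32, n_actions=100):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.reward_head = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                         nn.Linear(64, 1))
        self.q_head = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(),
                                    nn.Linear(64, n_actions))

    def forward(self, user, item):
        h = torch.cat([self.user_emb(user), self.item_emb(item)], dim=-1)
        r_hat = self.reward_head(h)      # gradients flow into the embedding
        q = self.q_head(h.detach())      # RL gradients stop at the embedding
        return r_hat, q
```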
## 5. Experiments
This section evaluates the efficacy of our proposed frameworks.
### Experiment Setup
#### 5.1.1. Experimental Environments
Our experiments are conducted in a simulation environment and a live environment.
**Simulation Environment.** We conduct simulation experiments on a publicly available simulator, i.e., Virtual Taobao (Vaswani et al., 2017)1. In each round, the search engine receives a search request from a user and responds to the user with some items to be displayed. Then the user gives certain feedback. The reward is set to be \(1\) if the user purchases any recommended item and \(0\) otherwise. Specifically, for an estimated user purchase with probability \(r\) (\(r\in[0,1]\)), its deterministic reward \(r_{det}\) is directly set to be \(r\), while its stochastic reward \(r_{stoch}\) is randomly sampled from a Bernoulli distribution parameterized by \(r\), i.e., \(r_{stoch}=1\)_w.p. \(r\)_ and \(r_{stoch}=0\)_w.p. \(1-r\)_.
Footnote 1: Virtual Taobao source code available at: [https://github.com/eyounc/VirtualTaobao](https://github.com/eyounc/VirtualTaobao)
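The two reward conventions amount to a one-line sampling rule; the following sketch mirrors the setup above (names are ours, not the simulator's API).

```python
import numpy as np

rng = np.random.default_rng(0)

def deterministic_reward(r):
    return r                           # r_det = estimated purchase probability

def stochastic_reward(r):
    return float(rng.random() < r)     # r_stoch ~ Bernoulli(r)
```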
**Live Environment.** We apply our proposed method to a real-world coupon recommendation task with more than \(1\) billion participants. The recommender system adopts a two-step scheme commonly used in industrial recommendation scenarios (Krishnaman et al., 2017; Li et al., 2020): (i) a candidate generator retrieves hundreds of coupons from all available items; (ii) a candidate ranking network generates the final coupon recommended to the user. Our method is applied to the second step.
In our experiments, we adopt the commonly used myopic DIN model (Srivastava et al., 2017) to estimate the expected immediate reward, which is co-trained with the RL model on the same dataset.
#### 5.1.2. Compared Methods
For each class of model-free RL, we consider a classic method and a SOTA method, respectively.
**Value-based Approach.** We consider **DRN**(Srivastava et al., 2017) that uses double Q learning (Srivastava et al., 2017) and dueling network (Srivastava et al., 2017) to tackle overestimation of DQN, and **DQN-att(EDRR)**(Li et al., 2020) (SOTA) that introduces an auxiliary training task to learn a better user representation.
**Policy-based Approach.** We employ **REINFORCE**(Beng et al., 2016; Chen et al., 2016) as a classic method in this approach, and **URL**(Li et al., 2020) (SOTA) that learns the state and action representations concurrently via auxiliary training.
**Actor-critic Approach.** We examine **DDPG** that is widely used by many recommender systems (Krishnaman et al., 2017; Li et al., 2020; Li et al., 2020), and **DRR-att(EDRR)**(Li et al., 2020) (SOTA) that uses the DQN-att(EDRR) as the critic.
### Experimental Results
#### 5.2.1. Simulation Experiment
We first compare the performance and sample complexity in the simulation environment.
Figure 2. Comparison of model-free RL and SRS framework.
**Performance.** We adopt the average cumulative reward as the performance metric, which measures the number of purchases in each session. For each method, we compare its vanilla version and that combined with SRS/SRS2. We evaluate each model every 10 steps (Kang et al., 2017) and present the results in Figure 4, from which we have two main observations. First, combining each method with SRS achieves a higher reward. The performance gain universally exists, indicating that SRS is agnostic to the RL model. Second, the average reward of each method with SRS has a lower variance than the vanilla one, especially on actor-critic and policy-based approaches. This accords well with our intuition that implanting a supervised model in RL stabilizes the reward as well as the learning process.
**Sample Efficiency.** For each method, we measure the number of training samples processed until the performance reaches some certain satisfactory threshold (Kang et al., 2017). In this experiment, we set the threshold to be the number of steps at which the reward attains 7 for the fifth time. Table 1 presents the result and its 95% confidence interval for each method after repeating running for ten times. From the results, we observe that SRS substantially promotes sample efficiency in all RL approaches. In particular, on actor-critic and policy-based approaches, SRS2 achieves at least 2\(\times\) speedup in training compared to SOTA models. Note that on the DQN-based approach, the acceleration is not that significant compared to the above two approaches, possibly because the value-based approach is by itself more robust to the stochastic noise than other approaches.
#### 5.2.2. Live Experiment
We adopt two performance metrics commonly used in industrial scenarios, i.e., the user return and the user payment. The former metric measures the average number of times the user comes back to the coupon recommendation scenario during the whole experimental period, which reflects the user's long-term satisfaction. The latter metric measures the average number of payments with coupon redemption for each user.
We compare the performance of DQN-att(EDRR) equipped with SRS2 and the myopic DIN method via online A/B testing. The two-week live traffic with more than 10 million participants shows that our method achieves a +3.16%(\(\pm\)0.26%) relative improvement on user return and a +0.89%(\(\pm\)0.17%) improvement on user payments, showing that our proposed method is very effective in industrial-level real-world recommendation tasks.
## 6. Conclusions
In this paper, we first discover the detrimental effect of stochastic rewards on RL-based recommender systems via empirical study. To resolve the issue, we design a stochastic reward stabilization framework that replaces the stochastic signal with the signal predicted by a reward estimation model, then combines it with shared representations. Extensive experiments show the superiority of our proposed methods on sample efficiency and final performance.
\begin{table}
\begin{tabular}{c c c} \hline \hline Type & Method & Sample Efficiency \\ \hline \multirow{4}{*}{Value-based} & DRN & - \\ \cline{2-3} & DRN + SRS & 3.30 (\(\pm\) 1.66) \\ \cline{2-3} & DQN-att(EDRR) & 2.07 (\(\pm\) 0.34) \\ & DQN-att(EDRR) + SRS2 & **1.62 (\(\pm\) 0.18)** \\ \hline \multirow{4}{*}{Actor-critic} & DDPG & - \\ \cline{2-3} & DDPG + SRS & 2.94 (\(\pm\) 0.52) \\ \cline{2-3} & DRR-att(EDRR) & 3.28 (\(\pm\) 0.20) \\ \cline{2-3} & DRR-att(EDRR) + SRS2 & **1.67 (\(\pm\) 0.09)** \\ \hline \multirow{4}{*}{Policy-based} & REINFORCE & - \\ \cline{2-3} & REINFORCE + SRS & 7.56 (\(\pm\) 0.47) \\ \cline{1-1} \cline{2-3} & URL & 9.43 (\(\pm\) 0.44) \\ \cline{1-1} \cline{2-3} & URL + SRS2 & **4.87 (\(\pm\) 0.33)** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Sample efficiency comparison of various methods _with/without_ reward stabilization. We measure the number (in thousands) of episodes to reach a certain performance threshold. Note that vanilla DRN, DDPG, and REINFORCE never attain the threshold, indicating very low efficiency.
Figure 4. Performance comparison of various model-free RL methods _with/without_ reward stabilization on Virtual Taobao. The two rows of subplots present the performance of the three most commonly used RL methods and three SOTA models, respectively.
Figure 3. Network structure of the SRS2 method. In SRS2, the state embedding and action embedding are shared, which are trained using the gradients of the reward estimation model. |
2307.12905 | On Logic Gates with Complex Numbers | Logic gates can be written in terms of complex differential operators, where
the inputs and outputs are holomorphic functions with several variables. Using
the polar representation of complex numbers, we arrive at an immediate
connection between the oscillatory behavior of the system and logic gates. We
discuss the universality of this formalism in a variety of computing systems. | M. W. AlMasri | 2023-07-24T15:55:39Z | http://arxiv.org/abs/2307.12905v7 | # Complex Analysis of Intelligent Systems
###### Abstract.
Logic gates can be written in terms of complex differential operators where the inputs and outputs are analytic functions with several variables. Using the polar representation of complex numbers, we arrive at an immediate connection between the oscillatory behavior of the system and logic gates. We explain the universal programming language (UPL) used by physical objects to process information. To assure the causality structure in UPL, we introduce the concept of layers that characterizes the computations for each time scale.
## 1. Introduction
Understanding how nature "thinks" is undoubtedly the ultimate aim of our scientific research. Over the last decade, we have witnessed great progress in our computational capabilities, paving the way for revolutions in the field of artificial intelligence. Interestingly, despite the numerous physical theories of computation, we still lack a decisive answer to the question: how does nature "think"? Understanding this vital issue would complete the current perspective and enable us to develop further technologies at maximum capacity in terms of energy efficiency and computation capabilities. It was suggested a few decades ago that our universe might be a giant computer[1]. This has inspired researchers to strongly believe that information is physical[2].
Computers are physical systems: the laws of physics dictate what they can and cannot do. Concretely, the speed with which a physical device can process information is limited solely by its energy, and the amount of information that it can process is limited by the number of degrees of freedom it possesses. Therefore, there must be an intimate relationship between the dynamical evolution of physical states and their behavior regarding processing and storing information [3, 4, 5, 6, 7].
In order to have a true description of how nature thinks, we start with the properties of elementary particles and their behaviors. The justification of this ideology stems from the feeling that nature uses one universal programming language for encoding and decoding information at different scales. While it is possible to artificially construct other ways of coding and encoding information, the way that nature chooses must be the fastest, and therefore the most efficient, way of processing information in order not to violate the causality principle. In other words, if we assume a physical system \(\mathcal{S}\) processes information at some time scale \(t_{0}\), then it must be equally or less efficient than the way the container \(\mathcal{C}\) processes information at the same time scale \(t_{0}\). To a large extent, the container is the universe that contains all the subsystems evolving on the same time scale. We shall explain this in detail and introduce the concept of layers in computation.
According to quantum mechanics, subatomic particles behave like waves propagating in spacetime. The wave-particle duality was proposed by de Broglie [8]. The de Broglie wavelength \(\lambda\) associated with a particle is
\[\lambda=\frac{h}{p}, \tag{1}\]
where \(h\) is Planck's constant and \(p\) is the particle's momentum. In the Davisson-Germer experiment, electrons scattered by the surface of a crystal of nickel metal produced a diffraction pattern similar to waves1. Interestingly, changing diffraction patterns, as we shall see in the incoming sections, are intimately connected to the operations of logic gates.
Footnote 1: C. Davisson and L. H. Germer, _Phys. Rev._**30** (1927) 705
The correspondence between oscillatory behavior and logic-gate computation is assured mathematically by writing logic gates in terms of complex differential operators acting on analytic functions. Oscillatory systems are omnipresent in the universe. Some notable examples are attractor cycle oscillators, rotors, neural oscillation patterns in the brain, synchronized chaos, and Josephson junctions. Other examples in biology and physics can be found in [9, 10].
## 2. Hilbert Space of Analytic Functions
Let \(U\) be a non-empty open set in \(\mathbb{C}^{d}\) and let \(\mathcal{H}(U)\) denote the space of holomorphic functions on \(U\). A complex-valued function \(f:U\to\mathbb{C}\) is said to be holomorphic if it is differentiable in a neighborhood of each point in \(U\). Let \(\mathcal{H}L^{2}(U,\omega)\) denotes the space of \(L^{2}\)-holomorphic functions with respect to the weight \(\omega\), that is,
\[\mathcal{H}L^{2}(U,\omega)=\{F\in\mathcal{H}(U)|\int_{U}|F(z)|^{2}\;\omega(z) \;dz<\infty\}. \tag{2}\]
Here \(dz\) denotes the \(2d\)-dimensional Lebesgue measure on \(\mathbb{C}^{d}=\mathbb{R}^{2d}\) not a line integral. Some examples of holomorphic function spaces are the weighted Bergman and the Segal-Bargmann [11, 12, 13, 14]. In this work, we will use the Segal-Bargmann spaces.
The Segal-Bargmann spaces (also known as Bargmann spaces) \(\mathcal{H}L^{2}(\mathbb{C}^{d},\mu_{t})\) are spaces of holomorphic functions with Gaussian integration measure \(\mu_{t}=(\pi)^{-d}e^{-|z|^{2}/t}\) and inner product of the form [11]
\[\langle f|g\rangle=(\pi t)^{-d}\int_{\mathbb{C}^{d}}\overline{f}(z)\;g(z)\;e^ {-|z|^{2}/t}dz, \tag{3}\]
where \(|z|^{2}=|z_{1}|^{2}+\cdots+|z_{d}|^{2}\), \(dz\) is the \(2d\)-dimensional Lebesgue measure on \(\mathbb{C}^{d}\), and \(t\) is a constant equal to \(\hbar\) if the physical states under study are quantized.
The Segal-Bargmann transform is a unitary map \(B_{t}:L^{2}(\mathbb{R}^{d},dx)\to\mathcal{H}L^{2}(\mathbb{C}^{d},\mu_{t})\) defined as
\[B_{t}f(z)=(\pi t)^{-d/4}\int_{\mathbb{R}^{d}}e^{\left(-z^{2}+2\sqrt{2}z\cdot x-x^ {2}\right)/2t}f(x)\;dx. \tag{4}\]
The reproducing kernel is
\[K(z,w)=\sum_{n=0}^{\infty}\frac{z^{n}}{\sqrt{n!t^{n}}}\frac{\overline{w}^{n}}{ \sqrt{n!t^{n}}}=\sum_{n=0}^{\infty}\frac{1}{n!}(\frac{z\overline{w}}{t})^{n}=e ^{z\overline{w}/t} \tag{5}\]
The kernel \(K(z,w)\) is holomorphic in \(z\) and anti-holomorphic in \(w\). It satisfies the following properties:
1- \(K(z,w)=\overline{K(w,z)}\)
2- For each fixed \(z\in\mathbb{C}\), \(K(z,w)\) is square-integrable \(d\mu_{t}\). For all \(f(z)\in\mathcal{H}L^{2}(\mathbb{C},\mu_{t})\), we may write
\[f(z)=\int_{\mathbb{C}}K(z,w)\;f(w)\;\mu_{t}(w)\;dw. \tag{6}\]
3- For all \(z,w\in\mathbb{C}\),
\[\int_{\mathbb{C}}K(z,u)\;K(u,w)\;\mu_{t}(u)\;du=K(z,w) \tag{7}\]
The monomials \(\{\frac{z^{n}}{\sqrt{n!t^{n}}}\}\) form an orthonormal basis in \(\mathcal{H}L^{2}(\mathbb{C}^{n},\mu_{t})\). The orthonormality condition with \(t=d=1\) is
\[\frac{1}{\pi}\int_{\mathbb{C}}e^{-|z|^{2}}\;\overline{z}^{n}z^{m}dz=n!\; \delta_{mn}. \tag{8}\]
Using the properties of the Bargmann spaces, one can calculate the numerical value of the \(n\)th derivative of any analytic complex function at a point \(z_{0}\) by integration over the Bargmann space. Let \(f(z)\) be an analytic function of the complex variable \(z\). The Taylor series of \(f\) at a complex number \(z_{0}\) is [15]
\[f(z)=f(z_{0})+\frac{f^{\prime}(z_{0})}{1!}\left(z-z_{0}\right)+\frac{f^{\prime \prime}(z_{0})}{2!}(z-z_{0})^{2}+\cdots=\sum_{n=0}^{\infty}\frac{f^{(n)}(z_{0 })}{n!}\left(z-z_{0}\right)^{n}. \tag{9}\]
When \(z_{0}=0\), the Taylor series reduces to a Maclaurin series i.e. \(\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}z^{n}\). Let us compute the inner product \(\langle z^{n}|f(z)\rangle\) where \(f(z)=\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}z^{n}\) is a Maclaurin series, we find
\[f^{(n)}(0)=\langle z^{n}|f(z)\rangle \tag{10}\]
which is very interesting since it gives a closed analytical expression for the \(n\)th derivative of any analytic complex function at \(z=0\) by computing the inner product of its Maclaurin series expansion with monomial \(z^{n}\) in \(\mathcal{H}L^{2}(\mathbb{C},\mu_{1})\). Practically, this means computing Gaussian integrals of the form \(\frac{1}{\pi}\int_{\mathbb{C}}\overline{z}^{n}f(z)e^{-|z|^{2}}dz\). For the Taylor series, the formula 10 becomes
\[f^{(n)}(z_{0})=\langle(z-z_{0})^{n}|f(z)\rangle. \tag{11}\]
The derivative of a given analytic function \(f(z)\) is
\[f^{\prime}(z)=\lim_{h\to 0}\frac{f(z+h)-f(z)}{h}, \tag{12}\]
where \(h\) is the step size. For infinitesimally small \(h\), we may compute the derivative approximately using a forward-difference expression,
\[f^{\prime}(z)\approx\frac{f(z+h)-f(z)}{h}, \tag{13}\]
Using 11, the forward-difference expression at point \(z_{0}\) assumes the form
\[\langle(z-z_{0})|f(z)\rangle\approx\frac{1}{h}\left(f(z_{0}+h)-f(z_{0})\right) \tag{14}\]
Analogously, the second-order forward-difference expression is
\[\langle(z-z_{0})^{2}|f(z)\rangle\approx\frac{1}{h^{2}}\left(f(z_{0}+2h)-2f(z_{ 0}+h)+f(z_{0})\right). \tag{15}\]
Thus, we have written the finite differences in terms of integrals over spaces of holomorphic functions with Gaussian integration measure. Consequently, one can use numerical integration methods to compute derivatives approximately, instead of finite differences [16]. This is more efficient, since higher-order derivatives lead to long expressions with the finite-difference method, whereas our method computes them with a single integral similar to 10.
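As a sketch of such a scheme (assuming \(t=d=1\) and an entire function of modest growth), the inner product 10 can be estimated by Monte Carlo sampling of the Gaussian measure; the function and sample size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def bargmann_derivative(f, n, samples=200_000):
    """Estimate f^(n)(0) = <z^n | f> over the measure (1/pi) exp(-|z|^2) dz:
    draw z with Re z, Im z ~ N(0, 1/2) and average conj(z)**n * f(z)."""
    z = (rng.normal(0, np.sqrt(0.5), samples)
         + 1j * rng.normal(0, np.sqrt(0.5), samples))
    return np.mean(np.conj(z) ** n * f(z))

# Example: every derivative of exp at 0 equals 1.
print(bargmann_derivative(np.exp, 3))   # ~ 1 + 0j, up to sampling noise
```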
Now let us look closely at 14; using the fundamental theorem of calculus, we find
\[f^{\prime}(z_{0})=\langle(z-z_{0})|f(z)\rangle\approx\frac{1}{h}\int_{z_{0}}^ {z_{0}+h}f^{\prime}(z)dz. \tag{16}\]
Interestingly, the derivative of the function \(f\) at the point \(z_{0}\) is approximately the average of \(f^{\prime}(z)\) over the segment from \(z_{0}\) to \(z_{0}+h\). This means that we can equate integrals over spaces of holomorphic functions with Gaussian integration measure with ordinary integrals in the complex plane.
It is clear that working in the Bargmann spaces enhances the space of solutions by using the inner product properties of these spaces. For example, the generalized hypergeometric function is
\[{}_{p}F_{q}\left(a_{1},\ldots,a_{p};b_{1},\ldots,b_{q};z\right)=\sum_{n=0}^{ \infty}\frac{(a_{1})_{n}\ldots(a_{p})_{n}}{(b_{1})_{n}\ldots(b_{q})_{n}}\frac{ z^{n}}{n!}, \tag{17}\]
where \((x)_{n}=x(x+1)(x+2)\ldots(x+n-1),\ \ n\geq 1\) is the Pochhammer symbol with \((x)_{0}=1\). Using 10, we find
\[\frac{(a_{1})_{n}\ldots(a_{p})_{n}}{(b_{1})_{n}\ldots(b_{q})_{n}}=\langle z^{ n}|_{p}F_{q}\left(a_{1},\ldots,a_{p};b_{1},\ldots,b_{q};z\right)\rangle. \tag{18}\]
Thus, the coefficients of generalized hypergeometric functions can be determined uniquely using inner products in the Bargmann spaces.
The states of a quantum system with \(n\) degrees of freedom are usually described by functions either in the configuration space \((q_{1},q_{2},\ldots q_{n})\) or in the momentum
space \((p_{1},p_{2},\ldots p_{n})\). The complex combinations of these variables have proven to be effective both at quantum and classical levels. Moreover, they appear in the study of harmonic oscillator and Bose particle field theories as the creation and annihilation operators.
We define the complex conjugate operators as
\[\eta_{k}=2^{-1/2}\left(q_{k}-ip_{k}\right), \tag{19}\] \[\xi_{k}=2^{-1/2}\left(q_{k}+ip_{k}\right). \tag{20}\]
If \(q_{k},p_{k}\) are self-adjoint operators satisfying the canonical commutation relations (with Planck constant \(\hbar=1\))
\[[q_{k},p_{l}]=i\delta_{kl},\;[q_{k},q_{l}]=[p_{k},p_{l}]=0, \tag{21}\]
Then it follows that
\[\xi_{k}=\eta_{k}^{\star},\;\eta_{k}=\xi_{k}^{\star},\;[\xi_{k},\eta_{l}]= \delta_{kl},\;[\xi_{k},\xi_{l}]=[\eta_{k},\eta_{l}]=0. \tag{22}\]
In 1928, Fock introduced the operator solution \(\xi_{k}=\partial/\partial\eta_{k}\) of the commutation relation \([\xi_{k},\eta_{k}]=1\) in analogy with \(p_{k}=-i\partial/\partial q_{k}\) of the relation \([q_{k},p_{k}]=i\) and applied it to quantum field theory2.
Footnote 2: V. A. Fock, _Verallgemeinerung und Loesungder Diracschen statistischen Gleichung_, Z.Phys. **49**,339 (1928).
Consider a quantum harmonic oscillator with a single-mode described in terms of the coordinates \(q,p\) in the phase space. The creation and annihilation operators are
\[\eta=2^{-1/2}\left(q-ip\right), \tag{23}\] \[\xi=2^{-1/2}\left(q+ip\right). \tag{24}\]
with canonical commutation relation \([\xi,\eta]=1\). The Hamiltonian operator is
\[H=\eta\frac{\partial}{\partial\eta}+\frac{1}{2}, \tag{25}\]
with energy eigenvectors equal to \(\frac{\eta^{n}}{\sqrt{n!}}\).
It is convenient, as will appear later in this work, to represent spin operators in terms of uncoupled quantum oscillators. Here, we use the Jordan-Schwinger map of the \(\mathfrak{su}(2)\) algebra to deal with particles of arbitrary spin by means of creation and annihilation operators. For two uncoupled quantum oscillators with annihilation operators \(a\) and \(b\), the spin operators read,
\[\sigma_{x}=\left(a^{\dagger}b+b^{\dagger}a\right), \tag{26}\] \[\sigma_{y}=-i\left(a^{\dagger}b-b^{\dagger}a\right), \tag{27}\] \[\sigma_{z}=\left(a^{\dagger}a-b^{\dagger}b\right). \tag{28}\]
The canonical commutation relations between the spin operators are
\[[\sigma_{i},\sigma_{j}]=i\;\epsilon_{ijk}\;\sigma_{k},\;\;\;[\sigma_{i}, \sigma^{2}]=0, \tag{29}\]
where \(i=x,y,z\). The squared spin operator reads
\[\sigma^{2}=\frac{N}{2}\left(\frac{N}{2}+1\right) \tag{30}\]
where \(N=N_{1}+N_{2}\) is the total number operator with \(N_{1}=a^{\dagger}a\) and \(N_{2}=b^{\dagger}b\). Normalized states with occupation numbers \(n_{1}\) and \(n_{2}\) can be obtained by applying the creation operators on the vacuum state i.e. \(|0,0\rangle\),
\[|n_{1},n_{2}\rangle=\frac{(a^{\dagger})^{n_{1}}}{\sqrt{n_{1}!}}\frac{(b^{ \dagger})^{n_{2}}}{\sqrt{n_{2}!}}|0,0\rangle. \tag{31}\]
With \(j=\frac{n_{1}+n_{2}}{2}\) and \(m=\frac{n_{1}-n_{2}}{2}\) runs from \(-j\) to \(j\) in integer steps.
In Bargmann representation, the spin operators 26, 27 and 28 read
\[\sigma_{x}=\left(z_{1}\frac{\partial}{\partial z_{2}}+z_{2}\frac{\partial}{\partial z_{1}}\right), \tag{32}\] \[\sigma_{y}=-i\left(z_{1}\frac{\partial}{\partial z_{2}}-z_{2}\frac{\partial}{\partial z_{1}}\right), \tag{33}\] \[\sigma_{z}=\left(z_{1}\frac{\partial}{\partial z_{1}}-z_{2}\frac{\partial}{\partial z_{2}}\right). \tag{34}\]
In this case, the normalized states are
\[|n_{1},n_{2}\rangle=\frac{z_{1}^{n_{1}}}{\sqrt{n_{1}!}}\frac{z_{2}^{n_{2}}}{ \sqrt{n_{2}!}}\equiv f_{n_{1}n_{2}} \tag{35}\]
since
\[\langle m_{1},m_{2}|n_{1},n_{2}\rangle=\frac{1}{\pi^{2}}\int_{\mathbb{C}^{2}}\overline{f_{m_{1}m_{2}}}\;f_{n_{1}n_{2}}\;e^{-|z_{1}|^{2}-|z_{2}|^{2}}\;dz_{1}\;dz_{2}=\delta_{m_{1},n_{1}}\delta_{m_{2},n_{2}}. \tag{36}\]
## 3. Holomorphic Representation of Logic Gates
In this section, we formulate logic gates as differential operators written in terms of complex variables and their partial derivatives. Such a representation is possible both in classical and quantum regimes. To see this, consider a hypothetical material that allows red light to pass through it and absorbs black light. In this case, if we assign the variable \(z_{1}\) to the red light and \(z_{2}\) to the black light, we may represent the action of the material by the differential operator \(\frac{\partial}{\partial z_{2}}\). Generally, it is possible to find equivalent expressions for propositional logic in terms of holomorphic variables and their partial derivatives. The fundamental logic gates that operate between two analytic functions are the Pauli gates plus the identity gate \((I,\vec{\sigma})\)[17]. Any given two-input gate can be written in terms of the identity operator and the Pauli vector. Pauli gates are defined as complex differential operators that act on analytic functions of two variables
\[X=\text{NOT}=\sigma_{x}=\left(z_{1}\frac{\partial}{\partial z_{2}} +z_{2}\frac{\partial}{\partial z_{1}}\right), \tag{38}\] \[Y=\sigma_{y}=-i\left(z_{1}\frac{\partial}{\partial z_{2}}-z_{2} \frac{\partial}{\partial z_{1}}\right),\] (39) \[Z=\sigma_{z}=\left(z_{1}\frac{\partial}{\partial z_{1}}-z_{2} \frac{\partial}{\partial z_{2}}\right). \tag{37}\]
The identity gate in the case of two-variable function is
\[I=\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}\right) \tag{40}\]
For the \(N\)-dimensional case, the identity operator reads
\[I=\sum_{i=1}^{N}z_{i}\frac{\partial}{\partial z_{i}} \tag{41}\]
The Hadamard or Walsh-Hadamard gate acts on analytic functions of two variables in the following way
\[H=\frac{1}{\sqrt{2}}\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{1}\frac{ \partial}{\partial z_{2}}+z_{2}\frac{\partial}{\partial z_{1}}-z_{2}\frac{ \partial}{\partial z_{2}}\right) \tag{42}\]
The phase gate \(S\) is
\[S=\left(z_{1}\frac{\partial}{\partial z_{1}}+iz_{2}\frac{\partial}{\partial z_{2}}\right) \tag{43}\]
The rotation operators about \(x\)-axis, \(y\)-axis and \(z\)-axis are
\[R_{x}(\theta)=\cos(\theta/2)z_{1}\frac{\partial}{\partial z_{1}}-i\sin(\theta/2)z_{1}\frac{\partial}{\partial z_{2}}-i\sin(\theta/2)z_{2}\frac{\partial}{\partial z_{1}}+\cos(\theta/2)z_{2}\frac{\partial}{\partial z_{2}}, \tag{44}\] \[R_{y}(\theta)=\cos(\theta/2)z_{1}\frac{\partial}{\partial z_{1}}-\sin(\theta/2)z_{1}\frac{\partial}{\partial z_{2}}+\sin(\theta/2)z_{2}\frac{\partial}{\partial z_{1}}+\cos(\theta/2)z_{2}\frac{\partial}{\partial z_{2}}, \tag{45}\] \[R_{z}(\theta)=\exp(-i\theta/2)z_{1}\frac{\partial}{\partial z_{1}}+\exp(i\theta/2)z_{2}\frac{\partial}{\partial z_{2}} \tag{46}\]
The controlled-NOT gate acts on analytic functions of four variables; it consists of an identity operator with respect to the two variables \(z_{1},z_{2}\) and a NOT-gate operator on the remaining two,
\[\text{CNOT}=\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{ \partial z_{2}}+z_{3}\frac{\partial}{\partial z_{4}}+z_{4}\frac{\partial}{ \partial z_{3}}\right) \tag{47}\]
The swap, Toffoli and Fredkin gates are
\[\text{SWAP}=\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{ \partial z_{3}}+z_{3}\frac{\partial}{\partial z_{2}}+z_{4}\frac{\partial}{ \partial z_{4}}\right) \tag{48}\]
\[\text{Toffoli}=\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial} {\partial z_{2}}+z_{3}\frac{\partial}{\partial z_{3}}+z_{4}\frac{\partial}{ \partial z_{4}}+z_{5}\frac{\partial}{\partial z_{5}}+z_{6}\frac{\partial}{ \partial z_{6}}+z_{7}\frac{\partial}{\partial z_{8}}+z_{8}\frac{\partial}{ \partial z_{7}}\right) \tag{49}\]
\[\text{Fredkin}=\left(z_{1}\frac{\partial}{\partial z_{1}}+z_{2}\frac{\partial}{\partial z_{2}}+z_{3}\frac{\partial}{\partial z_{3}}+z_{4}\frac{\partial}{\partial z_{4}}+z_{5}\frac{\partial}{\partial z_{5}}+z_{6}\frac{\partial}{\partial z_{7}}+z_{7}\frac{\partial}{\partial z_{6}}+z_{8}\frac{\partial}{\partial z_{8}}\right) \tag{50}\]
Other possible logic gates can be written as complex differential operators in a way similar to the procedure we followed in writing the previous gates. If the input function is of the form 35, the outputs are again normalized states. The expectation values of \(\langle n_{1},n_{2}|L|n_{1},n_{2}\rangle\), where the differential operator \(L\) is any logic gate operator such as \(X,Y,Z,H\) etc., and the inner product is taken over the Bargmann space, are identical to the expectation values of \(\langle\psi|\hat{L}|\psi\rangle\) with a wave function \(\psi\), gate operator \(\hat{L}\) and an inner product taken over the vector Hilbert space. Moreover, considering the inputs to be any analytic functions is also possible. However, in this case, the normalization condition may be lost, but it could be recovered by dividing with a suitable normalization constant. Below, we give a general procedure for obtaining the differential and integral representations of logic gates in the two-dimensional Bargmann space.
**Differential operator representation of logic gates:**
Let \(M\) be any \(2\times 2\) matrix in \(\mathbb{C}^{2}\), the mapping to analytical form in Bargmann space is
\[M=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\to\mathbf{M}(z_{1},z_{2})=a\;z_{1}\frac{\partial}{\partial z_ {1}}+b\;z_{1}\frac{\partial}{\partial z_{2}}+c\;z_{2}\frac{\partial}{\partial z _{1}}+d\;z_{2}\frac{\partial}{\partial z_{2}} \tag{51}\]
where \(a,b,c\) and \(d\in\mathbb{C}\).
Let \(f(z_{1},z_{2})\) be an analytic function of two variables \(z_{1}\) and \(z_{2}\). The expansion of \(f(z_{1},z_{2})\) is
\[f(z_{1},z_{2})=\sum_{nm=0}^{\infty}C_{nm}z_{1}^{n}z_{2}^{m} \tag{52}\]
where the coefficients \(C_{nm}\) are given by
\[C_{nm}=\frac{1}{n!\,m!}\left.\frac{\partial^{n+m}f}{\partial z_{1}^{n}\partial z_{2}^{m}}\right|_{(0,0)} \tag{53}\]
The inner product of \(f(z_{1},z_{2})\) with itself is
\[\langle f(z_{1},z_{2})|f(z_{1},z_{2})\rangle=\sum_{n^{\prime}m^{\prime}=0}^{ \infty}\sum_{nm=0}^{\infty}\;C_{n^{\prime}m^{\prime}}^{\star}\;C_{nm}\;n!\;m! \;\delta_{n^{\prime},n}\delta_{m^{\prime},m} \tag{54}\]
The action of \(\mathbf{M}(z_{1},z_{2})\) on \(f(z_{1},z_{2})\) is
\[\mathbf{M}(z_{1},z_{2})f(z_{1},z_{2})=\sum_{n,m=0}^{\infty}C_{nm}\left[\left(a\,n+d\,m\right)z_{1}^{n}z_{2}^{m}+b\,m\,z_{1}^{n+1}z_{2}^{m-1}+c\,n\,z_{1}^{n-1}z_{2}^{m+1}\right] \tag{55}\]
The generalization of the above recipe to higher-order logic gates could be done analogously in a straightforward manner.
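The mapping 51 is easy to check symbolically; the following sympy sketch applies the gate operators to the normalized monomial states (the helper name is ours).

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')

def gate(a, b, c, d, f):
    """Apply the differential-operator form (eq. 51) of a 2x2 gate to f(z1, z2)."""
    return sp.simplify(a*z1*sp.diff(f, z1) + b*z1*sp.diff(f, z2)
                       + c*z2*sp.diff(f, z1) + d*z2*sp.diff(f, z2))

# The NOT gate X = [[0, 1], [1, 0]] swaps the basis states z1 <-> z2:
print(gate(0, 1, 1, 0, z1))                 # -> z2
print(gate(0, 1, 1, 0, z2))                 # -> z1
# The Hadamard gate sends z1 to (z1 + z2)/sqrt(2):
print(gate(1, 1, 1, -1, z1) / sp.sqrt(2))   # -> (z1 + z2)/sqrt(2)
```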
**Integral representation of logic gates:**
Let \(f(z)\) be an analytic function in simply-connected domain \(D\), and let \(\Gamma\) be positively-oriented loop in \(D\). The integral representation of the derivative at point \(z_{0}\) (any point in the interior of \(\Gamma\)) is [15]
\[f^{(n)}(z_{0})=\frac{n!}{2\pi i}\oint_{\Gamma}\frac{f(z)}{(z-z_{0})^{n+1}}\;dz \tag{56}\]
which is known as the generalized Cauchy formula. Consequently, the integral representation of the \(X\)-gate for example reads
\[Xf(z_{1},z_{2})=\frac{z_{1}}{2\pi i}\oint_{\Gamma_{2}}\frac{f(z_{1},\xi_{2})}{(\xi_{2}-z_{2})^{2}}d\xi_{2}+\frac{z_{2}}{2\pi i}\oint_{\Gamma_{1}}\frac{f(\xi_{1},z_{2})}{(\xi_{1}-z_{1})^{2}}d\xi_{1}, \tag{57}\]

Similarly, the \(Y,Z\) and \(I\) gates are

\[Yf(z_{1},z_{2})=-\frac{z_{1}}{2\pi}\oint_{\Gamma_{2}}\frac{f(z_{1},\xi_{2})}{(\xi_{2}-z_{2})^{2}}d\xi_{2}+\frac{z_{2}}{2\pi}\oint_{\Gamma_{1}}\frac{f(\xi_{1},z_{2})}{(\xi_{1}-z_{1})^{2}}d\xi_{1}, \tag{58}\]

\[Zf(z_{1},z_{2})=\frac{z_{1}}{2\pi i}\oint_{\Gamma_{1}}\frac{f(\xi_{1},z_{2})}{(\xi_{1}-z_{1})^{2}}d\xi_{1}-\frac{z_{2}}{2\pi i}\oint_{\Gamma_{2}}\frac{f(z_{1},\xi_{2})}{(\xi_{2}-z_{2})^{2}}d\xi_{2}, \tag{59}\]

\[If(z_{1},z_{2})=\frac{z_{1}}{2\pi i}\oint_{\Gamma_{1}}\frac{f(\xi_{1},z_{2})}{(\xi_{1}-z_{1})^{2}}d\xi_{1}+\frac{z_{2}}{2\pi i}\oint_{\Gamma_{2}}\frac{f(z_{1},\xi_{2})}{(\xi_{2}-z_{2})^{2}}d\xi_{2}, \tag{60}\]
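These contour formulas are straightforward to verify numerically; the sketch below evaluates the \(n=1\) case of 56 with the trapezoidal rule on a circle (spectrally accurate for analytic integrands), with illustrative choices of \(f\) and \(z_{0}\).

```python
import numpy as np

def cauchy_derivative(f, z0, radius=1.0, n_points=2048):
    """f'(z0) = (1/(2 pi i)) * contour integral of f(xi)/(xi - z0)**2 dxi."""
    theta = np.linspace(0.0, 2 * np.pi, n_points, endpoint=False)
    xi = z0 + radius * np.exp(1j * theta)
    dxi_dtheta = 1j * radius * np.exp(1j * theta)
    integrand = f(xi) / (xi - z0) ** 2 * dxi_dtheta
    return integrand.mean() / 1j        # (1/(2 pi i)) * 2 pi * mean

print(cauchy_derivative(np.exp, 0.3))   # ~ exp(0.3)
```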
## 4. Models of Computation
The computational model consists of a computing machine \(M\) connected to an environment \(E\). At each time \(t\), we have input channel(s) \(S(t)\) (for stimulus) and output channel(s) \(R(t)\) (for response) [18]. To describe how the computing machine \(M\) works, we need to specify the outputs as a function of the inputs. Since the transmission of signals requires some time, the output must occur at a time later than that of the input. We assume the machine response \(R(t+1)\) to the stimulus \(S(t)\) depends on the state of the machine \(Q(t)\)[18]
\[R(t+1)=F(Q(t),S(t)) \tag{61}\]
and the state of the machine \(M\) depends on the stimulus \(S(t)\) and the state of the machine at the previous time,
\[Q(t+1)=G(Q(t),S(t)). \tag{62}\]
### Memristive systems
Memristive systems are dynamical systems defined by [19]
\[\dot{x}=f(x,u,t), \tag{63}\] \[y=g(x,u,t)u, \tag{64}\]
where \(u\) and \(y\) are the input and output of the system, and \(x\) denotes the state of the system. The connection between 63, 64 and 61, 62 is achieved by assigning \(R(t+1)=y(t+1)\), \(Q(t+1)=x(t+1)\), and \(u=S(t)\). Therefore, for each value of \(t\), we can realize the memristive system as a computational model.
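As a sketch, a forward-Euler loop realizes 63 and 64 as the update rules 62 and 61; the particular \(f\), \(g\), and input below are toy choices, not a physical memristor model.

```python
import numpy as np

def simulate_memristive(f, g, u_samples, x0, dt=1e-3):
    """Euler integration of x' = f(x, u, t) with output y = g(x, u, t) * u."""
    x, t, ys = x0, 0.0, []
    for u in u_samples:
        ys.append(g(x, u, t) * u)       # response from the current state
        x = x + dt * f(x, u, t)         # state update
        t += dt
    return np.array(ys)

# Toy run: the state relaxes toward the input; the output is state-modulated.
u = np.sin(2 * np.pi * np.linspace(0.0, 1.0, 1000))
y = simulate_memristive(lambda x, u, t: u - x,
                        lambda x, u, t: 1 + x**2, u, x0=0.0)
```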
### Chain of harmonic oscillators
The harmonic oscillator satisfies a second-order differential equation of the form [20]
\[\ddot{x}+\omega^{2}x=0 \tag{65}\]
The general solution is
\[x(t)=A\sin(\omega t)+B\cos(\omega t), \tag{66}\]
where \(A\) and \(B\) are constants.
A single harmonic oscillator does not define a computational model by itself; some interaction with the environment is needed to define a computational model.
Consider a chain of \(N\) identical harmonic oscillators as the machine \(M\); the interaction of a single harmonic oscillator with the other oscillators affects the whole system. In other words, it provides some stimulus \(S\) to the whole system, regardless of whether this stimulus is large or negligible. The response \(R\) can be described as the change of some measurable quantities in this chain due to the interaction with this single harmonic oscillator, such as the frequency modes, etc. The new displacement \(x_{n}\) of the \(n\)th oscillator defines the new state of the system \(Q\). It is sufficient to take just one oscillator within the chain and study the change in its dynamics due to the added extra oscillator to show the connection between this system and the computational model described by equations 61 and 62.
Let \(|\mathbf{f}\rangle=(|f_{1}\rangle,|f_{2}\rangle,\ldots|f_{N}\rangle)\) be a set of analytic complex functions of several variables (the stimulus \(S\)), and let \(M(\mathbf{L})\) be a computing machine, with \(\mathbf{L}\) being a vector of complex differential operators. The computation process \(\mathfrak{C}\) is described by the action of \(\mathbf{L}\) on the inputs \(|\mathbf{f}\rangle\)
\[\mathfrak{C}:|\mathbf{f}\rangle\rightarrow\mathbf{L}|\mathbf{f}\rangle, \tag{67}\]
and the expectation values of the outputs (the response \(R\)) are
\[\frac{\langle\mathbf{f}|\mathbf{L}|\mathbf{f}\rangle}{\sqrt{\langle\mathbf{f} |\mathbf{f}\rangle}} \tag{68}\]
where we divided the last expression by the normalization constant \(\sqrt{\langle\mathbf{f}|\mathbf{f}\rangle}\), since the inputs are arbitrary complex functions. If both the inputs and outputs are discrete and not correlated among themselves, i.e., cannot be written in superposition states as quantum states can, then the computation is classical. Quantum computations are characterized by the possibility of having superposition states during the computation process. The machine \(M(\mathbf{L})\) could in principle be any dynamical system, from tiny particles to the universe as a whole. Any change in the dynamical degrees of freedom associated with \(M\) is in principle due to the interaction of \(M\) with the surrounding environment, i.e., all possible inputs. However, this change might be very slow and may not affect the macroscopic characteristics of the system immediately. It is worth mentioning that dynamical systems are not merely the materials or objects around us but rather any changing system, such as stock prices, ecological models, and in principle any changing system described by a set of differential equations.
Figure 1. The computational model consists of a computing machine (any physical object) \(M\) described by a set of complex differential operators \(\mathbf{L}\); \(|\mathbf{f}\rangle\) and \(\mathbf{L}|\mathbf{f}\rangle\) are the input and output analytic complex functions.
To quantify the computation process, we compute the change of entropy, taking into account the fact that the entropy associated with any physical process is always increasing [21, 22]
\[\Delta S=S_{\text{out}}-S_{\text{in}}=-\sum_{i}p_{i}^{\text{out}}\log p_{i}^{ \text{out}}+\sum_{i}p_{i}^{\text{in}}\log p_{i}^{\text{in}}\geq 0 \tag{69}\]
where \(p_{i}^{\text{in}}=\frac{|f_{i}\rangle\langle f_{i}|}{\langle\mathbf{f}| \mathbf{f}\rangle}\) and \(p_{i}^{\text{out}}=\frac{|f_{i}^{\prime}\rangle\langle f_{i}^{\prime}|}{ \langle\mathbf{f}^{\prime}|\mathbf{f}^{\prime}\rangle}\) with \(|\mathbf{f}^{\prime}\rangle=\mathbf{L}|\mathbf{f}\rangle\).
The relative entropy (Kullback-Leibler divergence) is
\[D(P||Q)=\sum_{i}p_{i}\log(\frac{p_{i}}{q_{i}})=-\sum_{i}\frac{|f_{i}\rangle \langle f_{i}|}{\langle\mathbf{f}|\mathbf{f}\rangle}\log\frac{|g_{i}\rangle \langle g_{i}|}{\langle\mathbf{g}|\mathbf{g}\rangle}-S(p_{i}) \tag{70}\]
with \(p_{i}=\frac{|f_{i}\rangle\langle f_{i}|}{\langle\mathbf{f}|\mathbf{f}\rangle}\), \(q_{i}=\frac{|g_{i}\rangle\langle g_{i}|}{\langle\mathbf{g}|\mathbf{g}\rangle}\) and \(S(p_{i})=-\sum_{i}p_{i}\log p_{i}\). Consequently, other quantities such as the mutual information can be determined in terms of holomorphic functions [22].
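The quantities in 69 and 70 are straightforward to evaluate numerically. The sketch below does so for illustrative input and output component vectors standing in for \(|\mathbf{f}\rangle\) and \(\mathbf{L}|\mathbf{f}\rangle\); the vectors are chosen for illustration, not derived from a particular operator.

```python
import numpy as np

def probs(f):                        # p_i = |f_i|^2 / <f|f>
    w = np.abs(f) ** 2
    return w / w.sum()

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):                        # D(P||Q), eq. (70)
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

f_in = np.array([1.0 + 0j, 0.1, 0.1])        # concentrated input state
f_out = np.array([1.0, 1.0, 1.0 + 0j])       # spread output, stands in for L|f>
p, q = probs(f_in), probs(f_out)
print("Delta S =", entropy(q) - entropy(p))  # eq. (69), positive here
print("D(P||Q) =", kl(p, q))
```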
## 5. Physical Systems
All physical systems process information. In this section, we study in some detail the computation mechanisms of a few basic systems; the construction, however, works for all physical systems in nature.
### Coupled oscillators
Simple harmonic motion is omnipresent in many mechanical and electrical systems. For example, electrical circuits with inductance \(L\) connected across a capacitance \(C\) carrying charge \(Q\) obey the equation [20]
\[L\ddot{Q}+\frac{Q}{C}=0 \tag{71}\]
with frequency \(\omega\) given by \(\omega^{2}=\frac{1}{LC}\).
The general equation of motion for any simple harmonic motion with \(N\) oscillators reads,
\[\ddot{\mathbf{x}}+\omega^{2}\mathbf{x}=0 \tag{72}\]
where \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{N})\) is the \(N\)-dimensional state vector and \(\omega\) is an \(N\times N\) frequency matrix. To define the basic logic gates in such systems, one needs at least two coupled oscillators at each time step \(t_{0}\)3. Consider two identical pendulums, each having a mass \(M\) suspended on a light rigid rod of length \(\ell\), with the masses connected by a light spring of stiffness \(s\) (Figure 2). The natural length of the spring is equal to the distance between the two masses at equilibrium.
Figure 2. Two identical pendulums with mass \(M\) connected by a spring with stiffness \(s\).
The equations of motion are
Footnote 3: On the contrary, one could apply the basic logic gates to a single quantum particle due to the superposition property (which leads to quantum entanglement) of these particles, which distinguishes quantum particles from classical ones.
\[M\ddot{x}=-Mg\;\frac{x}{\ell}-s\;(x-y), \tag{73}\] \[M\ddot{y}=-Mg\;\frac{y}{\ell}+s\;(x-y). \tag{74}\]
The above set of equations could be written as
\[\ddot{x}+\omega_{0}^{2}\;x=-\frac{s}{M}\;(x-y), \tag{75}\] \[\ddot{y}+\omega_{0}^{2}\;y=\frac{s}{M}\;(x-y). \tag{76}\]
where \(\omega_{0}=\sqrt{\frac{g}{\ell}}\) is the natural frequency of each pendulum. To solve 75 and 76, we introduce the normal coordinates
\[X=x+y, \tag{77}\] \[Y=x-y. \tag{78}\]
Then, we have the following set of equations
\[\ddot{X}+\omega_{0}^{2}X=0, \tag{79}\] \[\ddot{Y}+\left(\omega_{0}^{2}+2\frac{s}{M}\right)Y=0. \tag{80}\]
One possible solution to the above set of equations is
\[X=A\sin(\omega_{0}t+\phi)=\alpha(z_{1}-\overline{z}_{1}), \tag{81}\] \[Y=B\sin(\omega t+\varphi)=\beta(z_{2}-\overline{z}_{2}), \tag{82}\]
where \(z_{1}=e^{i(\omega_{0}t+\phi)}\), \(z_{2}=e^{i(\omega t+\varphi)}\), and \(\omega=\sqrt{\omega_{0}^{2}+2s/M}\). It is sufficient to work with the complex exponentials \(z_{1}\) and \(z_{2}\) alone in our study of the logic gates for coupled oscillators. The state function of the coupled oscillator is
\[|f\rangle=f(z_{1},z_{2})=\alpha\beta\;z_{1}z_{2} \tag{83}\]
The action of logic gates on this state function is
\[Xf=\mathrm{NOT}f=\alpha\beta\left(z_{1}^{2}+z_{2}^{2}\right), \tag{84}\] \[Yf=-i\alpha\beta\left(z_{1}^{2}-z_{2}^{2}\right), \tag{85}\] \[Zf=\alpha\beta\left(z_{1}z_{2}-z_{2}z_{1}\right)=0, \tag{86}\] \[If=\alpha\beta\left(z_{1}z_{2}+z_{2}z_{1}\right)=2\;\alpha\beta\;f, \tag{87}\]
where the factor 2 in the last equation can be safely absorbed by choosing a suitable normalization constant. The Hadamard gate is
\[Hf=\frac{\alpha\;\beta}{\sqrt{2}}\left(z_{1}z_{2}+z_{1}^{2}+z_{2}^{2}-z_{2}z_{ 1}\right)=\frac{\alpha\;\beta}{\sqrt{2}}\left(z_{1}^{2}+z_{2}^{2}\right)=\frac {1}{\sqrt{2}}Xf \tag{88}\]
Note that since our state function is classical (a product state), we were able to write the action of the Hadamard gate in the previous equation in terms of the action of the \(X\) gate. The inner products \(\langle f|X|f\rangle=\langle f|Y|f\rangle=\langle f|H|f\rangle=0\). For a more general model of higher-order oscillations, we may use the ansatz
\[f(z_{1},z_{2})=\frac{z_{1}^{n}}{\sqrt{n!}}\frac{z_{2}^{m}}{\sqrt{m!}} \tag{89}\]
and the expectation values of the logic gates assume a more general form. As an example, we find for the \(X\) gate
\[X|f\rangle=m\frac{z_{1}^{n+1}}{\sqrt{n!}}\frac{z_{2}^{m-1}}{\sqrt{m!}}+n\frac{z_{1}^{n-1}}{\sqrt{n!}}\frac{z_{2}^{m+1}}{\sqrt{m!}}, \tag{90}\] \[\langle f^{\prime}|X|f\rangle=m\;\delta_{n^{\prime},n+1}\delta_{m^{\prime},m-1}+n\;\delta_{n^{\prime},n-1}\delta_{m^{\prime},m+1}. \tag{91}\]
Similar relations could be found for other gates.
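The gate actions 84-88 can be verified symbolically. The sketch below assumes the standard two-mode Bargmann representation \(X=z_{1}\partial_{z_{2}}+z_{2}\partial_{z_{1}}\), \(Y=-i(z_{1}\partial_{z_{2}}-z_{2}\partial_{z_{1}})\), \(Z=z_{1}\partial_{z_{1}}-z_{2}\partial_{z_{2}}\), \(H=(X+Z)/\sqrt{2}\); these operator forms are inferred from the quoted results rather than stated explicitly above, and we set \(\alpha=\beta=1\).

```python
import sympy as sp

z1, z2 = sp.symbols('z1 z2')
f = z1 * z2                                   # state function (83) with alpha = beta = 1

X = lambda g: z1 * sp.diff(g, z2) + z2 * sp.diff(g, z1)
Y = lambda g: -sp.I * (z1 * sp.diff(g, z2) - z2 * sp.diff(g, z1))
Z = lambda g: z1 * sp.diff(g, z1) - z2 * sp.diff(g, z2)
H = lambda g: (X(g) + Z(g)) / sp.sqrt(2)

print(sp.expand(X(f)))                        # z1**2 + z2**2, eq. (84)
print(sp.expand(Y(f)))                        # -I*z1**2 + I*z2**2, eq. (85)
print(sp.expand(Z(f)))                        # 0, eq. (86)
print(sp.expand(H(f) - X(f) / sp.sqrt(2)))    # 0, reproducing eq. (88)
```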
### Turing Patterns
Turing patterns are described by the reaction-diffusion equations [23, 24]
\[\frac{\partial a}{\partial t}=D_{a}\nabla^{2}a+f(a,b), \tag{92}\] \[\frac{\partial b}{\partial t}=D_{b}\nabla^{2}b+g(a,b), \tag{93}\]
where \(a\) and \(b\) describe the concentrations of chemicals at time \(t\). The functions \(f(a,b)\) and \(g(a,b)\) represent the reaction terms, and \(D_{a},D_{b}\) are the diffusion coefficients. One could, for example, write the reaction terms using the FitzHugh-Nagumo model, in which case \(f(a,b)=a-a^{3}-b+\alpha\) and \(g(a,b)=\beta(a-b)\), where \(\alpha\) and \(\beta\) are constants. Solving equations 92 and 93, we obtain the concentrations of chemicals as functions of position and time, i.e. \(\{a(x,t),b(x,t)\}\). Using the Segal-Bargmann transform, we may write the solutions in terms of holomorphic variables,
\[\hat{a}(z_{1},t)=(\pi t)^{-d/4}\int_{\mathbb{R}^{d}}e^{\left(-z_{1}^{2}+2\sqrt{2}z_{1}\cdot x-x^{2}\right)/2t}a(x)\;dx, \tag{94}\] \[\hat{b}(z_{2},t)=(\pi t)^{-d/4}\int_{\mathbb{R}^{d}}e^{\left(-z_{2}^{2}+2\sqrt{2}z_{2}\cdot x-x^{2}\right)/2t}b(x)\;dx. \tag{95}\]
The state function of the system is
\[f(z_{1},z_{2})=\hat{a}(z_{1},t)\hat{b}(z_{2},t) \tag{96}\]
which is a product state. Feeding the state function into logic circuit architectures built from complex differential operators defines the computational processes of the Turing patterns.
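A minimal explicit-Euler sketch of 92 and 93 with the FitzHugh-Nagumo reactions is given below; the grid, time step and parameter values are illustrative choices placed in the Turing-unstable regime (the inhibitor \(b\) diffusing much faster than \(a\)), and the resulting fields \(a(x,t),b(x,t)\) are exactly the data entering the state function 96.

```python
import numpy as np

n, dx, dt = 200, 2.0, 0.01
Da, Db = 1.0, 50.0                    # inhibitor diffuses much faster
alpha, beta = -0.005, 10.0
rng = np.random.default_rng(0)
a = rng.normal(0.0, 0.1, n)           # small random initial concentrations
b = rng.normal(0.0, 0.1, n)

def lap(u):                           # periodic 1D Laplacian
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx ** 2

for _ in range(20000):
    fa = a - a ** 3 - b + alpha       # f(a, b)
    gb = beta * (a - b)               # g(a, b)
    a, b = a + dt * (Da * lap(a) + fa), b + dt * (Db * lap(b) + gb)
# 'a' and 'b' now carry a quasi-stationary Turing-like pattern
```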
### Neural Networks
In this section, we describe the mechanism for incorporating the developed formalism into the study of neural networks. We will restrict our analysis to simple neural networks. However, the generalization to other types of nets such as convolutional neural networks is possible. The neuron \(j\) may be described mathematically by a set of two equations [25, 26, 27, 28, 29, 30],
\[u_{j}=\sum_{i=1}^{N}\omega_{ji}x_{i}, \tag{97}\] \[y_{j}=f(u_{j}+b_{j}), \tag{98}\]
where \(\omega_{j1},\omega_{j2},\ldots,\omega_{jN}\) are the synaptic weights, \(f\) is the activation function and \(b_{j}\) is the bias. The inputs are
\[x_{1}=\langle f_{1}|L_{1}|f_{1}\rangle,x_{2}=\langle f_{2}|L_{2}|f_{2}\rangle, \ldots,x_{N}=\langle f_{N}|L_{N}|f_{N}\rangle. \tag{99}\]
The outputs \(y_{j}\) should likewise be written as inner products in the Bargmann space, similar to the inputs in 99. Following any neural learning algorithm for the weights, such as the perceptron rule or backpropagation, one can then drive the learning process to convergence with minimal loss functions.
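The sketch below assembles 97-99 for a single neuron; the numerical values of the inputs \(x_{i}\) are stand-ins for precomputed Bargmann expectation values \(\langle f_{i}|L_{i}|f_{i}\rangle\), and the weight update shown is an illustrative perceptron-style rule.

```python
import numpy as np

x = np.array([0.30, -0.12, 0.85])    # stand-ins for <f_i|L_i|f_i>, eq. (99)
w = np.array([0.5, -1.0, 0.7])       # synaptic weights omega_ji
b = 0.1                              # bias b_j

u = w @ x                            # eq. (97)
y = np.tanh(u + b)                   # eq. (98), activation f = tanh
print("output y_j =", y)

target, lr = 1.0, 0.1                # illustrative perceptron-style update
w = w + lr * (target - y) * x
```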
## 6. Universal Programming Language
Programming languages are systems of expressions, mostly text-based formal languages, that allow users to communicate with the machine language [31, 32, 33]. The first question that could be asked is whether the universe uses one way of communication between its different sub-systems, or whether each has its own distinct way. Logically, the universe must follow one general way of communication between its embedded sub-systems, because the laws of physics are universal and thus there is no way to include a sub-system in the universe that works in a genuinely different way. The universal programming language (UPL) which I shall describe here is different from any other known programming language. First, it is the most basic language, since all other languages, whether machine, compiler-based or interpreter-based, can be decomposed into a group of instructions in the UPL. Second, its space of possibilities is very large: for complex systems in biophysics, for example, one could create many distinct patterns and manipulate them.
Let \(\mathcal{S}=\{\mathcal{S}_{1},\mathcal{S}_{2},\mathcal{S}_{3},\ldots,\mathcal{S}_{N}\}\) be the system whose computational properties we want to study. Each sub-system \(\mathcal{S}_{i}\), where \(i\in\{1,\ldots,N\}\), is described by a state function \(|f_{i}\rangle\). Let \(|\mathbf{f}\rangle=(|f_{1}\rangle,|f_{2}\rangle,|f_{3}\rangle,\ldots,|f_{N}\rangle)\) be the joint state function for the system \(\mathcal{S}\). The general procedure to find the computational properties of \(\mathcal{S}\) is determined by the following list of instructions \(\mathcal{L}=\{L_{1},L_{2},L_{3},L_{4},L_{5},L_{6},L_{7},L_{8}\}\) (a schematic sketch is given after the list):
* \(L_{1}\): Determine the state functions \(\{|f_{i}\rangle\}\) of the sub-systems \(\{\mathcal{S}_{i}\}\) as a function of time.
* \(L_{2}:\) If the state functions are defined over the field of real numbers, apply the Segal-Bargmann transform for each variable such that the final answer is written in terms of holomorphic variables. 4 Footnote 4: In principle, one could use any unitary integral transform from real variables onto holomorphic variables.
* \(L_{3}:\) Determine the joint state functions \(|\mathbf{f}\rangle\) of the system \(\mathcal{S}\).
* \(L_{4}:\) Apply all possible logic gates (differential operators) between the sub-systems \(\{\mathcal{S}_{i}\}\), i.e. \(\mathbf{L}|\mathbf{f}\rangle\).
* \(L_{5}:\) List all produced patterns and classify them into classical and quantum patterns.
* \(L_{6}:\) Compute the expectation values i.e. \(\langle\mathbf{f}|\mathbf{L}|\mathbf{f}\rangle\), with suitable normalization conditions.
* \(L_{7}:\) If the joint state function is not written in a closed analytical form but rather in a numerically approximated form, repeat the previous procedures for other joint state functions at different time iterations.
* \(L_{8}:\) For sufficiently many time iterations, list all produced patterns and select the distinguished patterns that correspond to different expectation values in the Bargmann space.
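The sketch below is a schematic, runnable rendering of \(L_{1}\)-\(L_{8}\); every function in it is a hypothetical stand-in for the corresponding instruction (none of the names come from the text), the inputs are already holomorphic so \(L_{2}\) is skipped, and the "gates" are toy matrices.

```python
import numpy as np

def state_function(coeffs):          # L1: |f_i> as a complex coefficient vector
    return np.asarray(coeffs, dtype=complex)

def joint_state(fs):                 # L3: product (uncorrelated) joint state
    out = fs[0]
    for f in fs[1:]:
        out = np.kron(out, f)
    return out

def gates(dim):                      # L4: a toy library of operators L
    I = np.eye(dim)
    shift = np.roll(I, 1, axis=0)    # cyclic permutation as a stand-in gate
    return {"I": I, "shift": shift}

def expectation(f, L):               # L6, cf. eq. (68)
    return (f.conj() @ L @ f) / np.sqrt(f.conj() @ f)

fs = [state_function([1.0, 0.2j]), state_function([0.5, 1.0])]   # L1 (L2 skipped)
F = joint_state(fs)                                              # L3
for name, L in gates(F.size).items():                            # L4-L6
    print(name, expectation(F, L))   # L5/L7/L8: collect and compare patterns
```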
Obtaining such lists of computational patterns for each material is like cracking the DNA of life, since it allows the design of novel ways of controlling and manipulating the properties of materials for efficient computing tasks. Consequently, any program in the UPL is described by the triplet \((\mathfrak{CS},\mathcal{S},\mathcal{L})\), where \(\mathfrak{CS}\) is the computation space. For a single qubit \(q_{1}\), \(\mathfrak{CS}=\mathcal{H}L^{2}(\mathbb{C}^{2},\mu_{h})\), \(\mathcal{S}=\{q_{1}\}\) and \(\mathcal{L}\) is the set of eight instructions defined for the case of a single qubit.
## 7. Hierarchical Computing: The Concept of Layers
The universe operates at different time scales, from elementary particles with quantum signatures to objects at cosmic scale such as galaxies and supernovae. Computations are performed first at the level of elementary particles (the basic ingredients of any material). The results of such computations affect the physical properties of the atoms or molecules composed of these elementary particles. The computations at the atomic level are done in a different layer, with a time scale that characterizes the physical processes of molecules. Analogously, the results of computations are sent to the next level, with a different time scale and larger ingredients. The first level is represented by a layer and must be of a quantum nature, where each fermion by itself represents an independent logic gate, as we explained in previous sections. The last layer is the universe as a whole. Practically, we take the last layer to be the layer which contains the object under study together with its surrounding environment. For example, if the object under study has physical dimensions comparable to human size, then the structure of layers starts at the quantum level of the object's ingredients and ends at the level of the biosphere.
Let \(\mathfrak{L}_{i}(t_{i})\), where \(i=1,\ldots,N\), denote the layer of computation at time scale \(t_{i}\). The computation space \(\mathfrak{CS}\) is
\[\mathfrak{CS}:\mathfrak{L}_{1}(t_{1})\otimes\mathfrak{L}_{2}(t_{2})\otimes \cdots\otimes\mathfrak{L}_{N}(t_{N}) \tag{100}\]
**Properties:** 1. Each individual layer is Turing-complete [34, 35].5
Footnote 5: The Church-Turing thesis states: “Every effectively calculable function can be computed by a Turing-machine transducer.”
2. The space products of layers are time-ordered. Events at the layer with time index \(t_{N-1}\) happen before the events at the layer with time index \(t_{N}\). This supports the arrow of time, since each computational process is associated with a specific time and messages between layers are causal [36].
The concept of layers presents a model for hierarchical computing devices, where the computations are performed at different time scales with distinct physical properties. The first layer in the universe was formed immediately after the big bang and was of quantum-gravity nature; it was associated with extremely low entropy. Generally, the case \(N=1\) corresponds to one layer of computation, and we shall call it a primitive computational model. On the contrary, having a large number of layers corresponds to an advanced computational model, such as the universe at the present time.
Hierarchical computing is a multilayer computational model. Each layer has distinct physical properties, such as a unique time scale. In this type of computing, layers are organized according to their time scale, from short to long. The first layer, as we mentioned before, must be of quantum signature. The results of computations at layer \(i\) affect the computations performed within layer \(i+1\). Intuitively, multilayer computational models are associated with higher efficiency in comparison with single-layer models. A similar situation is found in the study of multilayer neural networks in deep learning [29]. However, the situation in multilayer neural networks is different from the multilayer concept in hierarchical computing, since all the computations of multilayer neural networks in deep learning are performed at one time scale and thus within the same layer of the hierarchical computing scheme.
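A minimal sketch of the layered scheme is given below: two layers updated at different time scales, the slow layer consuming the output of the fast one; the update rules are illustrative placeholders, not physical models.

```python
import numpy as np

fast, slow = 0.0, 0.0
t_slow = 10                          # layer 2 runs 10x slower than layer 1
slow_log = []
for step in range(100):
    fast = 0.9 * fast + 0.1 * np.sin(0.3 * step)   # layer L_1(t_1), fast scale
    if step % t_slow == 0:
        slow = 0.5 * slow + fast                   # layer L_2(t_2) reads L_1
        slow_log.append(slow)
print(slow_log)                      # the slow layer's coarse-grained history
```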
## 8. Conclusions
We formulated the theory of computation in terms of holomorphic functions in the Bargmann space. This was done first by writing the quantum logic gates as complex differential operators that act on analytic functions. Classical logic gates are embedded in the quantum gates and may be recovered immediately by considering inputs of classical signature (no superposition states are allowed). Moreover, the current formalism enjoys great flexibility, since it is connected with the theory of analytic functions. Building on the developed formalism, the connection between materials, or any physical system in the universe, and computation is straightforward. We presented a recipe for constructing the universal programming language used by the universe, alongside the hierarchical computing scheme. Some physical examples are presented in the paper. We suggest a deeper study of the formalism presented here in neural dynamics models such as the Hodgkin-Huxley and FitzHugh-Nagumo models [37, 38]. Moreover, building a correspondence between logic gates and the different oscillation states of Belousov-Zhabotinsky reactions could be used for building non-conventional computing devices [39].
|
2303.11723 | Robust microphase separation through chemical reaction networks | The interaction of phase-separating systems with chemical reactions is of
great interest in various contexts, from biology to material science. In
biology, phase separation is thought to be the driving force behind the
formation of biomolecular condensates, i.e. organelles without a membrane that
are associated with cellular metabolism, stress response, and development. RNA,
proteins, and small molecules participating in the formation of condensates are
also involved in a variety of biochemical reactions: how do the chemical
reaction dynamics influence the process of phase separation? Here we are
interested in finding chemical reactions that can arrest the growth of
condensates, generating stable spatial patterns of finite size (microphase
separation), in contrast with the otherwise spontaneous (unstable) growth of
condensates. We consider a classical continuum model for phase separation
coupled to a chemical reaction network (CRN), and we seek conditions for the
emergence of stable oscillations of the solution in space. Given reaction
dynamics with uncertain rate constants, but known structure, we derive easily
computable conditions to assess whether microphase separation is impossible,
possible for some parameter values, or robustly guaranteed for all parameter
values within given bounds. Our results establish a framework to evaluate which
classes of CRNs favor the emergence of condensates with finite size, a question
that is broadly relevant to understanding and engineering life. | Franco Blanchini, Elisa Franco, Giulia Giordano, Dino Osmanovic | 2023-03-21T10:27:01Z | http://arxiv.org/abs/2303.11723v1 | # Robust microphase separation through chemical reaction networks
###### Abstract
The interaction of phase-separating systems with chemical reactions is of great interest in various contexts, from biology to material science. In biology, phase separation is thought to be the driving force behind the formation of biomolecular condensates, i.e. organelles without a membrane that are associated with cellular metabolism, stress response, and development. RNA, proteins, and small molecules participating in the formation of condensates are also involved in a variety of biochemical reactions: how do the chemical reaction dynamics influence the process of phase separation? Here we are interested in finding chemical reactions that can arrest the growth of condensates, generating stable spatial patterns of finite size (microphase separation), in contrast with the otherwise spontaneous (unstable) growth of condensates. We consider a classical continuum model for phase separation coupled to a chemical reaction network (CRN), and we seek conditions for the emergence of stable oscillations of the solution in space. Given reaction dynamics with uncertain rate constants, but known structure, we derive easily computable conditions to assess whether microphase separation is impossible, possible for some parameter values, or robustly guaranteed for all parameter values within given bounds. Our results establish a framework to evaluate which classes of CRNs favor the emergence of condensates with finite size, a question that is broadly relevant to understanding and engineering life.
Chemical reaction networks, Phase separation, Robustness analysis, Stability, Uncertain systems.
## I Modelling Microphase Separation in the Presence of Chemical Reactions
Phase separation has emerged as a key area within biological research over the last decade [3]. By utilizing the physical properties of phase separation, it is hypothesized that living organisms are able to exercise fine control over their chemical production [18]. Within the non-equilibrium cellular environment, chemical reactions and phase separation combine to produce a new class of physical systems, deemed _active emulsions_[25] or _active droplets_[26]. Beyond biological relevance, these systems have displayed intriguing properties in their own right [19, 22], yielding novel behavior in both spatial organization and dynamical properties. By combining both _conserved_ dynamics (phase separation) and _non-conserved_ dynamics (reactions), we can think of active emulsions as an extension of classical reaction-diffusion models [2, 15, 16, 17, 20, 21], and bring similar tools to bear on the analysis of their properties.
We model such systems by considering the time evolution of \(n\) chemical species, with a vector of concentrations \(c(z)=[c_{1}(z)\,c_{2}(z)\,\ldots\,c_{n}(z)]\in\mathbb{R}_{+}^{n}\), \(z\in\mathbb{R}^{d}\), under the assumption that any of the species can undergo phase separation: it can be in two phases, condensed and dispersed, respectively characterized by concentrations \(c_{i}^{c}\) and \(c_{i}^{d}\).
The overall dynamics is described by equation
\[\frac{\mathrm{d}c(z,t)}{\mathrm{d}t}=\mathbf{I}(c(z,t))+\mathbf{R}(c(z,t)), \tag{1}\]
where the term \(\mathbf{I}(c(z,t))=\nabla\cdot\left(\mathfrak{D}\nabla\frac{\delta F(c(z,t)) }{\delta c(z,t)}\right)\) describes conserved spatial dynamics by considering an energy functional \(F(c(z,t))\) (see [23]) and inferring the corresponding time evolution via "model B" dynamics [14]. We assume that every species has the same homogeneous diffusion coefficient and molecular mass, hence the diffusion matrix is \(\mathfrak{D}=dI\), where \(d>0\) is a common diffusion constant. The term \(\mathbf{R}(c(z,t))\) describes the reaction fluxes generated by a set of chemical reactions, assuming the existence of a free energy source that maintains the rates at which the reactions proceed [22].
Full analysis of (1) is usually rather complex, however we can obtain information about the properties of the solution by linearization, which can give tractable results on the effects of chemistry on phase separation. We perform a linear stability analysis of model (1) near equilibrium: \(c(z,t)=c_{s}+w\exp(\mathrm{i}\kappa z+\rho(\kappa)t)\), where \(w\) is small: \(w^{2}\approx 0\). The linearization around \(c_{s}\) involves the Jacobian matrices \(J_{I}(\kappa)=\nabla_{c(z)}\mathbf{I}(c(z,t))\big{|}_{c=c_{s}}\) of the conserved dynamics and \(J_{R}=\nabla_{c(z)}\mathbf{R}(c(z,t))\big{|}_{c=c_{s}}\) of the chemical reaction dynamics. We assume that the phase-separating species is the first one, so that \(J_{I}(\kappa)\) has the symmetric structure
\[J_{I}(\kappa)=\begin{bmatrix}\mu|\kappa|^{2}-\gamma^{2}|\kappa|^{4}&-d|\kappa |^{2}\epsilon_{12}&\ldots&-d|\kappa|^{2}\epsilon_{1N}\\ -d|\kappa|^{2}\epsilon_{12}&-d|\kappa|^{2}&\ldots&-d|\kappa|^{2}\epsilon_{2N }\\ \vdots&\vdots&\ddots&\vdots\\ -d|\kappa|^{2}\epsilon_{1N}&-d|\kappa|^{2}\epsilon_{2N}&\ldots&-d|\kappa|^{2 }\end{bmatrix}, \tag{2}\]
where \(\mu>0\), \(\gamma\) is the surface tension and \(\epsilon_{ij}\) are constant parameters representing spatial attraction or repulsion among species. We thus obtain the relationship
\[\rho(\kappa)w=[J_{I}(\kappa)+J_{R}]w, \tag{3}\]
where the spectral abscissa \(\rho(\kappa)\) of \((J_{I}(\kappa)+J_{R})\) characterizes the dynamics of spatially oscillatory behaviors. The _dispersion relation_ curve \(\rho(\kappa)\) depends on the wave number \(\kappa\): the growth rate of a spatial wave depends on its wave number.
Linearizing the problem allows us to make quantitative predictions about the behavior of a given system without going through the expensive computation of the full solution to (1). In particular, we can distinguish between systems that undergo _microphase separation_ (MS), _macrophase separation_ or _no phase separation_. The first case corresponds to finite-size patterns that do not change over time: as illustrated in Fig. 1, this happens when \(\rho(\kappa)<0\) for small \(\kappa\), \(\rho(\kappa)>0\) for intermediate \(\kappa\), and \(\rho(\kappa)<0\) for large \(\kappa\). It is of special interest because droplet size is regulated through the action of chemistry: spatial compartmentalization occurs with a particular length scale, a prerequisite to being able to utilize compartmentalization for precision control of spatially separated chemical reactions, creating droplets with "life-like" properties [25].
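The classification sketched in Fig. 1 can be read off directly from the sign pattern of \(\rho(\kappa)\). The snippet below is a minimal numerical sketch of this test for given (user-supplied) matrices \(J_{R},J_{2},J_{4}\); the \(\kappa\) grid is an illustrative choice.

```python
import numpy as np

def rho(kappa, J_R, J2, J4):
    # spectral abscissa of J_R + kappa^2 J2 + kappa^4 J4, cf. eq. (3)
    M = J_R + kappa ** 2 * J2 + kappa ** 4 * J4
    return np.max(np.linalg.eigvals(M).real)

def classify(J_R, J2, J4, kappas=np.linspace(1e-3, 10.0, 400)):
    signs = np.sign([rho(k, J_R, J2, J4) for k in kappas])
    changes = np.flatnonzero(np.diff(signs))
    if signs[0] < 0 and len(changes) >= 2:
        return "microphase separation"
    if signs[0] > 0:
        return "macrophase separation"
    return "no phase separation"
```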
The design space of such a problem is large, in that we have many possible chemical reaction networks (CRNs) that can couple to phase separating systems. To assess the likelihood of MS and identify interesting candidate CRNs for experimental realization, in previous work we computationally explored the parameters and chemical reaction networks that lead to matrices \(J_{I}(\kappa)\) and \(J_{R}\) in (3) [23]. The probability that \(\rho(\kappa)\) has three roots was evaluated when generating either random \(J_{R}\) matrices or random CRNs [24].
In this paper, we re-examine the problem of inducing MS via chemical reactions through a control-theoretic approach. Given a CRN structure, whose rate parameters are unknown but bounded in a known range, we rely on parametric robustness approaches [4] and vertex results [8, 9, 13] to provide conditions ensuring that MS is: _impossible_; _possible for some parameter values_; or _robustly guaranteed for all parameter values in the range_.
We achieve these conditions by converting the original problem, formulated in terms of the spectral abscissa of an uncertain matrix and thus challenging to handle, into a problem formulated in terms of the robust analysis of the determinant of an uncertain matrix, much simpler to deal with.
## II Microphase Separation: a Spectral Problem
To characterize the dispersion relation, we need to study the spectrum of matrix \(J_{I}(\kappa)+J_{R}\). In view of (2), we can write \(J_{I}(\kappa)=J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\) and the combined Jacobian in (3) becomes:
\[J(\kappa)=J_{R}+J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}. \tag{4}\]
Without restriction, we assume that \(J_{R}\) can be rewritten according to the \(BDC\)_decomposition_ introduced in [5, 6, 13]; this is possible for the Jacobian of any generic CRN.
**Assumption 1**.: _The Jacobian \(J_{R}\) can be decomposed as \(J_{R}=B\Delta C\), where \(\Delta=\text{diag}\{\Delta_{1},\Delta_{2},\ldots,\Delta_{m}\}\) has positive diagonal entries representing the uncertain parameters (the nonzero partial derivatives of the CRN system), while matrices \(B\in\mathbb{Z}^{n\times m}\) and \(C\in\mathbb{Z}^{m\times n}\) represent the known structure of the given CRN. The unknown parameters \(\Delta_{j}\) are bounded as_
\[\Delta\in\mathcal{D}=\{\Delta\colon 0\leq\Delta_{j}^{-}\leq\Delta_{j}\leq \Delta_{j}^{+}\}, \tag{5}\]
_for given lower bounds \(\Delta_{j}^{-}\) and upper bounds \(\Delta_{j}^{+}\)._
We restrict our analysis to CRNs with one conservation law.
**Assumption 2**.: _For all \(\Delta\in\mathcal{D}\), matrix \(J_{R}=B\Delta C\) is singular and has \(n-1\) eigenvalues with negative real part. Also, there exists a nonnegative vector \(v^{\top}\geq 0\) such that \(v^{\top}B=0\), representing a conservation law: since \(v^{\top}B\Delta C=0\), \(v^{\top}\) is a left eigenvector of \(J_{R}\) associated with the eigenvalue at \(0\)._
Assumption 2 entails that matrix \(J_{R}\) is marginally stable. Its stability can be assessed through several existing techniques tailored to CRNs, proposed for instance in [1, 5, 7, 12].
We also introduce suitable assumptions on the symmetric matrix \(J_{I}(\kappa)=J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\).
**Assumption 3**.: _The symmetric matrix \(J_{2}\) is indefinite, the symmetric matrix \(J_{4}\) is negative semi-definite, and there exists \(\bar{\kappa}\) such that \(J_{I}(\bar{\kappa})=J_{2}|\bar{\kappa}|^{2}+J_{4}|\bar{\kappa}|^{4}\) is negative definite._
The Jacobian matrix \(J(\kappa)=J_{R}+J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\) in (4) has eigenvalues \(\lambda_{i}(\Delta,\kappa)\), \(i=1,\ldots,n\), and its spectral abscissa (namely, the maximum real part \(\Re\) of its eigenvalues) is
\[\rho(\Delta,\kappa)=\max_{i}\left\{\Re(\lambda_{i}(\Delta,\kappa)\right\}.\]
The eigenvalues of \(J_{R}=B\Delta C\) are \(\lambda_{i}(\Delta,0)\), \(i=1,\ldots,n\).
**Assumption 4**.: _The eigenvalue of matrix \(J_{R}=B\Delta C\) associated with the conservation law is \(\lambda_{1}(\Delta,0)=0\)._
**Definition 1**.: _A continuous function \(f(\kappa)\) has a positive sign change if \(f(\kappa_{1})<0<f(\kappa_{2})\) for some \(\kappa_{1}<\kappa_{2}\), while it has a negative sign change if \(f(\kappa_{1})>0>f(\kappa_{2})\). Moreover, function \(f(\kappa)\) is initially positive (respectively, negative) if there exists an open right neighborhood of \(0\), \((0,\hat{\kappa})\), in which the function is positive (respectively, negative)._
Figure 1: Dispersion relation curves for phase-separating systems. Condensate size grows when the wave number \(\kappa\) decreases. When there is no separation (red, dashed curve), condensates dissolve. In macrophase separation (red, solid curve), typical in the absence of chemical reactions, condensates either grow until they are macroscopically separated (for small \(\kappa\)) or dissolve completely (for large \(\kappa\)), as the intermediate crossing point is an unstable fixed point. In microphase separation (blue curve), which can be induced by chemical dynamics, both large condensates (small \(\kappa\)) and small condensates (large \(\kappa\)) have a negative growth rate, while condensates of intermediate size have a positive growth rate; hence the middle crossing point is a stable fixed point, leading to a prevalent condensate size.
We can now define the (robust) microphase separation property that is the subject of our analysis.
**Definition 2** (Microphase separating system).: _System (1) exhibits_ microphase separation (MS) _if, for a given \(\Delta\in\mathcal{D}\), \(\rho(\Delta,\kappa)\) is initially negative (i.e., there exists \(\hat{\kappa}\) such that \(\rho(\Delta,\kappa)<0\) for all \(\kappa\in(0,\hat{\kappa})\)), then has a positive sign change and finally a negative sign change (i.e., \(\rho(\Delta,\kappa_{1})>0\) and \(\rho(\Delta,\kappa_{2})<0\) for some \(0<\hat{\kappa}<\kappa_{1}<\kappa_{2}\)). System (1) exhibits_ robust MS _if this condition holds for all \(\Delta\in\mathcal{D}\)._
The MS condition in Definition 2 describes a qualitative behavior of \(\rho(\Delta,\kappa)\) consistent with the blue curve in Fig. 1: the size of condensates shrinks for small \(\kappa\), grows for intermediate values of \(\kappa\), shrinks again for large \(\kappa\). For known parameters, the condition can be tested by directly computing the eigenvalue curves [23]. Departing from this approach, our analysis aims to develop efficient methods to tackle the case of _uncertain_ CRN parameters: given a range of possible parameter values, by taking advantage of the \(BDC\) decomposition of the Jacobian of the chemical reaction dynamics, we check whether the condition can or cannot hold for some parameters in the range, and whether it holds _robustly_ for all parameters in the range.
## III Robust Determinant Conditions for Microphase Separation
We obtain (robust) conditions for MS by mapping the dispersion relation problem, involving the spectral abscissa \(\rho(\Delta,\kappa)\), to a determinant problem. To this aim, given uncertain CRN parameters \(\Delta\), whose values are bounded in the set \(\mathcal{D}\) as in (5), we consider the functions
\[\Psi^{-}(\kappa) = \min_{\Delta\in\mathcal{D}}\det[-(B\Delta C+|\kappa|^{2}J_{2}+| \kappa|^{4}J_{4})], \tag{6}\] \[\Psi^{+}(\kappa) = \max_{\Delta\in\mathcal{D}}\det[-(B\Delta C+|\kappa|^{2}J_{2}+| \kappa|^{4}J_{4})], \tag{7}\]
which can be easily computed, as we will show in Section III-B. In view of their definition, \(\Psi^{-}(\kappa)\leq\Psi^{+}(\kappa)\). Moreover, \(\Psi^{-}(0)=\Psi^{+}(0)=0\), because, in view of Assumption 2, the determinants in (6) and (7) are \(0\) for \(\kappa=0\). Both functions grow to infinity as \(\kappa\rightarrow+\infty\), as shown after Lemma 1.
Studying functions \(\Psi^{-}(\kappa)\) and \(\Psi^{+}(\kappa)\) allows us to provide:
* a crucial necessary condition for MS: \(\Psi^{+}\) needs to be initially positive (Theorem 1);
* a sufficient condition ensuring MS for some values of the parameters (which we can determine): _either \(\Psi^{-}\) or \(\Psi^{+}\)_ is initially positive and has a negative sign change (Theorem 2);
* a sufficient condition ensuring _robust_ MS for all admissible parameters: _both \(\Psi^{-}\)_ and \(\Psi^{+}\) are initially positive and have a negative sign change (Theorem 3).
The technical challenge lies in relating the spectral abscissa \(\rho(\Delta,\kappa)\) to the curves \(\Psi^{-}\) and \(\Psi^{+}\). The difficulty arises from the fact that, while a negative value of the determinant \(\det[-(B\Delta C+|\kappa|^{2}J_{2}+|\kappa|^{4}J_{4})]\) implies positivity of the spectral abscissa \(\rho(\Delta,\kappa)\), the opposite unfortunately is not true: the determinant may well be positive even when \(\rho(\Delta,\kappa)>0\).
We begin by considering the case in which only the chemical reaction parameters are subject to uncertainty, which affects the entries of \(J_{R}\).
**Theorem 1** (Necessary condition).: _If system (1) exhibits microphase separation for some \(\Delta\in\mathcal{D}\), then \(\Psi^{+}\) is initially positive._
Proof.: Since \(\rho(\Delta,\kappa)<0\) necessarily requires that \(\det[-(B\Delta C+|\kappa|^{2}J_{2}+|\kappa|^{4}J_{4})]>0\), then the maximum \(\Psi^{+}\) must be initially positive when \(\rho(\Delta,\kappa)\) is initially negative, as required by Definition 2.
In Fig. 2A we illustrate a case in which \(\Psi^{+}\) is not initially positive, hence MS is not possible; conversely, in Fig. 2B, \(\Psi^{+}\) is initially positive.
We now state the other main results, whose proof requires some technical lemmas and is thus reported in Section III-A.
**Theorem 2** (Sufficient condition).: _System (1) exhibits microphase separation for some \(\Delta\in\mathcal{D}\) if either \(\Psi^{-}\) or \(\Psi^{+}\) is initially positive and has a negative sign change._
Actual parameter values for which MS does occur can be found following the procedure described in Section III-B.
In Figs. 2B and 2C, we illustrate the case in which MS is possible for some \(\Delta\in\mathcal{D}\).
**Theorem 3** (Robust sufficient condition).: _System (1) exhibits microphase separation for all \(\Delta\in\mathcal{D}\) if \(\Psi^{-}\) is initially positive (and hence \(\Psi^{+}\) is initially positive too) and \(\Psi^{+}\) has a negative sign change (and hence \(\Psi^{-}\) has it too)._
In Fig. 2D, we illustrate the case in which MS is robustly guaranteed for all \(\Delta\in\mathcal{D}\).
Finally, we briefly touch upon the case in which uncertainty affects the parameters of both spatial dynamics (i.e., the entries of \(J_{I}\)) and chemical dynamics (i.e., the entries of \(J_{R}\)).
**Corollary 1**.: _Assume that \(J_{2}\) and \(J_{4}\) are diagonal matrices, whose diagonal entries are uncertain parameters bounded in intervals. Then, redefining the functions as \(\Psi^{-}(\kappa)=\min_{\Delta,J_{2},J_{4}}\det[-(B\Delta C+|\kappa|^{2}J_{2}+| \kappa|^{4}J_{4})]\) and \(\Psi^{+}(\kappa)=\max_{\Delta,J_{2},J_{4}}\det[-(B\Delta C+|\kappa|^{2}J_{2}+| \kappa|^{4}J_{4})]\), all the previous results still hold true._
### Proofs of the main results
We begin with two technical lemmas. The first states that, for large enough \(\kappa\), we have Hurwitz stability.
**Lemma 1**.: _For any given \(\Delta\), \(\lim_{\kappa\rightarrow+\infty}\rho(\Delta,\kappa)=-\infty\)._
Proof.: Recall that \(J_{4}\) is negative semi-definite and take \(\kappa\geq\bar{\kappa}\), with \(\bar{\kappa}\) defined in Assumption 3. The Lyapunov inequality
\[\left(J_{R}+J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\right)^{\top}+ \left(J_{R}+J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\right)=\] \[\left(B\Delta C\right)^{\top}+B\Delta C+2J_{2}|\kappa|^{2}+2J_{4} |\kappa|^{4}\leq\] \[\left(B\Delta C\right)^{\top}+B\Delta C+2J_{2}|\kappa|^{2}+2J_{4} |\kappa|^{2}|\bar{\kappa}|^{2}=\] \[|\kappa|^{2}\left[\frac{(B\Delta C)^{\top}+B\Delta C}{|\kappa|^{ 2}}+2(J_{2}+J_{4}|\bar{\kappa}|^{2})\right]<0 \tag{8}\]
holds for \(\kappa\) large, because \((J_{2}+J_{4}|\bar{\kappa}|^{2})\) is negative definite and the fraction converges to \(0\). This implies Hurwitz stability for \(\kappa\) large enough. To prove that \(\lim_{\kappa\to\infty}\rho(\Delta,\kappa)=-\infty\), denote \(K=B\Delta C+\kappa I\) and repeat the computation for the perturbed matrix \(J_{\kappa}=\left(\kappa I+J_{R}+J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\right)\) to get
\[J_{\kappa}^{\top}+J_{\kappa}\leq|\kappa|^{2}\left[\frac{K^{\top}+K}{|\kappa|^{ 2}}+2(J_{2}+J_{4}|\bar{\kappa}|^{2})\right]<0 \tag{9}\]
for \(\kappa\) large, hence \(J_{\kappa}\) is Hurwitz. The spectral abscissa of \(\left(J_{R}+J_{2}|\kappa|^{2}+J_{4}|\kappa|^{4}\right)\) is thus less than \(-\kappa\), for \(\kappa\) large.
Since the determinant of the negative of a matrix of size \(n\) is \((-1)^{n}\) times the product of the matrix eigenvalues, \(\lim_{\kappa\to+\infty}\rho(\Delta,\kappa)=-\infty\) implies that \(\lim_{\kappa\to+\infty}\Psi^{-}(\kappa)=\lim_{\kappa\to+\infty}\Psi^{+}(\kappa)=+\infty\).
**Remark 1**.: _If we take bounds \(\Delta_{i}^{-}>0\) and \(\Delta_{i}^{+}<\infty\), by compactness the limit in Lemma 1 is uniform in \(\Delta\in\mathcal{D}\)._
Due to Lemma 1, if \(\rho(\Delta,\kappa)\) has a positive sign change, then it needs to have a subsequent negative sign change.
**Lemma 2**.: _Under Assumption 2, if for some \(\Delta\) we have that \(\det[-(B\Delta C+|\kappa|^{2}J_{2}+|\kappa|^{4}J_{4})]\) is initially positive, as a function of \(\kappa\), then there exists \(\tilde{\kappa}>0\) such that \(\rho(\Delta,\kappa)<0\) for \(0<\kappa\leq\tilde{\kappa}\)._
Proof.: Consider the characteristic polynomial
\[p(s,\kappa,\Delta)=\det[sI-(B\Delta C+|\kappa|^{2}J_{2}+|\kappa|^{4}J_{4})]. \tag{10}\]
For \(\kappa=0\), \(\lambda_{1}=0\) is an isolated root of the characteristic polynomial \(p(s,0,\Delta)\), while all the other roots \(\lambda_{i}\), \(i>1\), have negative real parts. In view of the continuity of the eigenvalues with respect to \(\kappa\), for a small, positive \(\kappa\) the roots \(\lambda_{i}\), \(i>1\), still have negative real part. Hence, the spectral abscissa is given by the dominant real eigenvalue \(\lambda_{1}\) and we show that, for a small, positive \(\kappa\), \(\lambda_{1}\) becomes negative, hence \(\rho(\Delta,\kappa)<0\). We can write the characteristic polynomial as
\[p(s,\kappa,\Delta)=s^{n}+p_{n-1}(\kappa,\Delta)s^{n-1}+\cdots+p_{1}(\kappa, \Delta)s+p_{0}(\kappa,\Delta).\]
For \(\kappa=0\), the constant term \(p_{0}(0,\Delta)=0\), due to the zero root, while \(p_{j}(0,\Delta)>0\) for \(j>0\), since all other roots have negative real part in view of Assumption 2. Hence the derivative of \(p\) with respect to \(s\), computed at \(s=0\) for real \(s\), is positive: \(\left.\frac{d}{ds}p(s,\kappa,\Delta)\right|_{s=0}=p_{1}(\kappa,\Delta)>0\). For a small, positive \(\kappa\) in a neighborhood of \(0\), the constant term \(p(0,\kappa,\Delta)=p_{0}(\kappa,\Delta)\) becomes positive by assumption and, since \(p\) is locally increasing in \(s\) at \(s=0\), the root \(\lambda_{1}\), which is initially \(0\), moves to the left and becomes negative.
We are now ready to prove Theorem 2.
Proof.: If \(\Psi^{-}\) is initially positive, then for all \(\Delta\), \(p(s,\kappa,\Delta)\) defined in (10) is initially positive. Lemma 2 ensures that \(\rho(\Delta,\kappa)<0\) in a right neighborhood of zero, for all \(\Delta\). On the other hand, \(\Psi^{-}\) becomes negative for some larger \(\kappa\), meaning that for some \(\Delta^{*}\), \(\rho(\Delta^{*},\kappa)>0\), so \(\rho(\Delta^{*},\kappa)\) has a positive sign change. Then, \(\rho\) will have a negative sign change \(\rho(\Delta^{*},\kappa)<0\) for \(\kappa\) large in view of Lemma 1.
If \(\Psi^{+}\) is initially positive, then for some \(\Delta^{*}\), \(p(s,\kappa,\Delta^{*})\) is initially positive. In view of Lemma 2, \(\rho(\Delta^{*},\kappa)<0\) in a right neighborhood of zero. On the other hand, \(\Psi^{+}\) becomes negative for some larger \(\kappa\), meaning that there \(\rho(\Delta,\kappa)>0\) for all \(\Delta\). Then \(\rho(\Delta^{*},\kappa)\) has a positive sign change, and eventually a negative sign change, \(\rho(\Delta^{*},\kappa)<0\) for \(\kappa\) large, again in view of Lemma 1.
Finally, we prove Theorem 3.
Proof.: If \(\Psi^{-}\) is initially positive, then \(\rho(\Delta,\kappa)\) is initially negative for all \(\Delta\). If \(\Psi^{+}\) has a negative sign change, then \(\rho(\Delta,\kappa)\) becomes positive for all \(\Delta\) (before becoming eventually negative).
**Remark 2**.: _The value \(\kappa^{*}\) at which \(\rho(\Delta,\kappa)\) becomes positive (transition to instability) clearly depends on \(\Delta\). In general, it is not possible to discriminate whether the instability occurs due to the appearance of real or complex eigenvalues, unless \(J_{R}\) is Metzler, as in the case of a network of mono-molecular reactions, and \(J_{I}\) is diagonal, so that the dominant eigenvalue is real. In general, a supplementary analysis can be performed adopting value-set techniques [4] to possibly rule out imaginary eigenvalues._
Corollary 1 can be proven by repeating the same arguments as above, since the determinant is a multilinear function of all the considered parameters if matrices \(J_{2}\) and \(J_{4}\) are diagonal.
### Computing \(\Psi^{-}\) and \(\Psi^{+}\) and checking criteria
Computing \(\Psi^{-}\) and \(\Psi^{+}\) is simple, since they are respectively the minimum and the maximum of \(2^{m}\) polynomials.
**Proposition 1**.: _Given the set \(\mathcal{D}\) in (5), let \(\hat{\mathcal{D}}\) be the set of all its vertices:_
\[\hat{\mathcal{D}}=\left\{\Delta:\Delta_{j}\in\left\{\Delta_{j}^{-},\Delta_{j}^{+ }\right\}\right\}.\]
_Then, the functions \(\Psi^{-}\) and \(\Psi^{+}\), respectively defined in (6) and (7), can be computed as_
\[\Psi^{-}(\kappa)=\min_{i}p_{i}(\kappa),\qquad\Psi^{+}(\kappa)=\max_{i}p_{i}( \kappa), \tag{11}\]
_where_
\[p_{i}(\kappa)=\det[-(B\Delta^{(i)}C+|\kappa|^{2}J_{2}+|\kappa|^{4}J_{4})]\]
_with \(\Delta^{(i)}\in\hat{\mathcal{D}}\), \(i=1,2,\ldots,2^{m}\)._
_The curves \(\Psi^{-}\) and \(\Psi^{+}\) are thus piecewise polynomials: there exists a finite set of values \(\kappa_{1},\kappa_{2},\ldots,\kappa_{M}\) for which \(\Psi^{-}\) (analogously, \(\Psi^{+}\)) is a polynomial in each interval \([\kappa_{h},\kappa_{h+1}]\). Hence, \(\Psi^{-}\) and \(\Psi^{+}\) are piecewise differentiable._
Proof.: For any value of \(\kappa\), the maximum and minimum in (6) and (7) are achieved on the vertices \(\hat{\mathcal{D}}\) of the hyper-rectangle \(\mathcal{D}\), because the determinants are multilinear functions of the uncertain parameters \(\Delta_{i}\) and any multilinear function defined on a hyper-rectangle achieves both its minimum and its maximum value on a vertex [4, 8, 9].
The fact that the curves \(\Psi^{-}\) and \(\Psi^{+}\) are piecewise polynomials follows from the fact that two different polynomials of order \(N\) can intersect in at most \(N\) points.
It can be immediately seen that Proposition 1 allows us to check the conditions of Theorems 1, 2 and 3 as follows.
**Proposition 2**.: _The following equivalences hold._
* \(\Psi^{-}\) _is initially positive iff all the polynomials_ \(p_{i}(\kappa)\) _are initially positive._
* \(\Psi^{+}\) _is initially positive iff at least one of the polynomials_ \(p_{i}(\kappa)\) _is initially positive._
* \(\Psi^{-}\) _takes negative values iff at least one of the polynomials_ \(p_{i}(\kappa)\) _does._
* \(\Psi^{+}\) _takes negative values iff all the polynomials_ \(p_{i}(\kappa)\) _do._
Remarkably, checking the conditions for MS requires the analysis of a finite number of polynomials.
When the sufficient conditions of Theorem 2 are met, and thus the considered CRN structure can give rise to MS _provided that the parameters are suitably chosen_, the following algorithm allows us to identify values that do ensure MS (a numerical sketch follows the algorithm).
**Algorithm 1**.: _To find parameter values ensuring MS:_
* _If_ \(\Psi^{-}\) _is initially positive and has a negative sign change, find_ \(\kappa^{*}\) _for which_ \(\Psi^{-}(\kappa^{*})<0\)_. Then, the vertex polynomial_ \(p_{h}\) _such that_ \(p_{h}(\kappa^{*})=\Psi^{-}(\kappa^{*})<0\) _is associated with vertex parameters_ \(\Delta^{*}_{sep}\in\hat{\mathcal{D}}\) _ensuring MS._
* _If_ \(\Psi^{+}\) _is initially positive and has a negative sign change, find_ \(\tilde{\kappa}\) _such that, for_ \(0<\kappa\leq\tilde{\kappa}\)_, the vertex polynomial_ \(p_{h}(\kappa)=\Psi^{+}(\kappa)>0\)_. Then,_ \(p_{h}\) _is associated with vertex parameters_ \(\hat{\Delta}_{sep}\in\hat{\mathcal{D}}\) _ensuring MS._
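A minimal numerical sketch of Proposition 1 and Algorithm 1 follows: the vertex determinants \(p_{i}(\kappa)\) are evaluated on a \(\kappa\) grid (starting just above \(0\)), \(\Psi^{-}\) and \(\Psi^{+}\) are their pointwise min/max, and, in the first branch of Algorithm 1, the returned vertex realizes the negative sign change of \(\Psi^{-}\) when one exists.

```python
import itertools
import numpy as np

def psi_curves(B, C, lo, hi, J2, J4, kappas):
    """Psi^- and Psi^+ as pointwise min/max of the vertex polynomials p_i."""
    vertices = list(itertools.product(*zip(lo, hi)))      # the vertex set D-hat
    P = np.empty((len(vertices), len(kappas)))
    for i, v in enumerate(vertices):
        JR = B @ np.diag(v) @ C
        P[i] = [np.linalg.det(-(JR + k**2 * J2 + k**4 * J4)) for k in kappas]
    return P.min(axis=0), P.max(axis=0), vertices, P

def ms_vertex(B, C, lo, hi, J2, J4, kappas):
    """Algorithm 1, first branch: a vertex Delta*_sep certifying MS, if any."""
    psi_m, _, vertices, P = psi_curves(B, C, lo, hi, J2, J4, kappas)
    if psi_m[0] > 0 and (psi_m < 0).any():   # initially positive, sign change
        j = int(np.argmin(psi_m))            # a kappa* where Psi^- < 0
        return vertices[int(np.argmin(P[:, j]))]
    return None
```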
## IV Testing Microphase Separation in the Presence of Chemical Reactions
We provide here a collection of examples where we apply our proposed criteria to test robust microphase separation in the presence of different chemical reaction network structures. In all the considered examples, we assume \(|\kappa|^{2}J_{2}+|\kappa|^{4}J_{4}=\text{diag}\left[\mu|\kappa|^{2}-\gamma^{2 }|\kappa|^{4}\quad-d|\kappa|^{2}\quad-d|\kappa|^{2}\quad\ldots\quad-d|\kappa|^ {2}\right]\), so that the separating species is the first one (\(C_{1}\)), and we take \(\mu=1\), \(d=1\) and \(\gamma^{2}=0.03\).
**Example 1**.: _Consider the CRN \(C_{1}+C_{3}\rightharpoonup C_{2}+C_{4}\), \(C_{2}\rightharpoonup C_{1}\), \(C_{4}\rightharpoonup\emptyset\), \(\emptyset\rightharpoonup C_{3}\). The CRN system is formed by equations \(\dot{c}_{1}=-g_{13}(c_{1},c_{3})+g_{2}(c_{2})=-\dot{c}_{2}\), \(\dot{c}_{3}=-g_{13}(c_{1},c_{3})+c_{0}\), \(\dot{c}_{4}=+g_{13}(c_{1},c_{3})-g_{4}(c_{4})\), corresponding to the \(BDC\)-decomposable Jacobian matrix_
\[J=\underbrace{\begin{bmatrix}-1&1&-1&0\\ 1&-1&1&0\\ -1&0&-1&0\\ 1&0&1&-1\end{bmatrix}}_{B}\Delta\underbrace{\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{bmatrix}}_{C},\]

_where \(\Delta=\text{diag}\left[\frac{\partial g_{13}}{\partial c_{1}}\quad\frac{\partial g_{2}}{\partial c_{2}}\quad\frac{\partial g_{13}}{\partial c_{3}}\quad\frac{\partial g_{4}}{\partial c_{4}}\right]\). With bounds \(\Delta^{+}_{i}=0.5\) and \(\Delta^{-}_{i}=0.3\) for all \(i\), functions \(\Psi^{-}\) and \(\Psi^{+}\) are visualised in Fig. 2A. The necessary condition in Theorem 1 is violated, because \(\Psi^{+}\) is initially negative, hence MS is never possible for \(\Delta\in\mathcal{D}\)._
**Example 2**.: _Consider the CRN \(C_{1}\rightharpoonup C_{2}\rightharpoonup C_{3}\rightharpoonup C_{1}\) corresponding to equations \(\dot{c}_{1}=-g_{1}(c_{1})+g_{3}(c_{3})\), \(\dot{c}_{2}=-g_{2}(c_{2})+g_{1}(c_{1})\), \(\dot{c}_{3}=-g_{3}(c_{3})+g_{2}(c_{2})\), with Jacobian_
\[J=\underbrace{\begin{bmatrix}-1&0&1\\ 1&-1&0\\ 0&1&-1\end{bmatrix}}_{B}\Delta\underbrace{\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}}_{C},\]
_where \(\Delta=\text{diag}\left[\frac{\partial g_{1}}{\partial c_{1}}\quad\frac{ \partial g_{2}}{\partial c_{2}}\quad\frac{\partial g_{3}}{\partial c_{3}}\right]\). Taking \(\Delta^{-}_{i}=2\) and \(\Delta^{+}_{i}=5\) for all \(i\), Fig. 2B shows that MS is possible, because the condition in Theorem 2 is satisfied: \(\Psi^{+}\) is initially positive and has a negative sign change. A parameter choice ensuring MS is \(\tilde{\Delta}_{sep}=\begin{bmatrix}5&2&2\end{bmatrix}\), computed following Algorithm 1._
**Example 3**.: _Consider the CRN \(C_{1}\rightharpoonup C_{2}\), \(C_{1}+C_{2}\rightleftharpoons C_{3}\) and the corresponding system \(\dot{c}_{1}=-g_{12}(c_{1},c_{2})-g_{1}(c_{1})+g_{2}(c_{2})+g_{3}(c_{3})\), \(\dot{c}_{2}=-g_{12}(c_{1},c_{2})+g_{1}(c_{1})-g_{2}(c_{2})+g_{3}(c_{3})\), \(\dot{c}_{3}=g_{12}(c_{1},c_{2})-g_{3}(c_{3})\), which has Jacobian_
\[J=\underbrace{\begin{bmatrix}-1&1&-1&-1&1\\ 1&-1&-1&-1&1\\ 0&0&1&1&-1\end{bmatrix}}_{B}\Delta\underbrace{\begin{bmatrix}1&0&0\\ 0&1&0\\ 1&0&0\\ 0&1&0\\ 0&0&1\end{bmatrix}}_{C},\]
_with \(\Delta=\text{diag}\left[\frac{\partial g_{1}}{\partial c_{1}}\quad\frac{ \partial g_{2}}{\partial c_{2}}\quad\frac{\partial g_{12}}{\partial c_{1}} \quad\frac{\partial g_{12}}{\partial c_{2}}\quad\frac{\partial g_{3}}{\partial c _{3}}\right]\). When \(\Delta^{-}_{i}=3\) and \(\Delta^{+}_{i}=5\) for all \(i\), as shown in Fig. 2C, MS is possible, because the condition in Theorem 2 is satisfied: \(\Psi^{-}\) is initially positive and has a negative sign change. A parameter choice ensuring MS is \(\Delta^{*}_{sep}=\begin{bmatrix}3&5&3&3&5\end{bmatrix}\), computed following Algorithm 1._
**Example 4**.: _Consider the CRN \(C_{1}\rightleftharpoons C_{2}\rightleftharpoons C_{3}\), corresponding to system \(\dot{c}_{1}=-g_{1}(c_{1})+g_{2,1}(c_{2})\), \(\dot{c}_{2}=-g_{2,1}(c_{2})+g_{1}(c_{1})-g_{2,2}(c_{2})+g_{3}(c_{3})\), \(\dot{c}_{3}=-g_{3}(c_{3})+g_{2,2}(c_{2})\), which has Jacobian_
\[J=\underbrace{\begin{bmatrix}-1&1&0&0\\ 1&-1&-1&1\\ 0&0&1&-1\end{bmatrix}}_{B}\Delta\underbrace{\begin{bmatrix}1&0&0\\ 0&1&0\\ 0&1&0\\ 0&0&1\end{bmatrix}}_{C},\]

_with \(\Delta=\text{diag}\left[\frac{\partial g_{1}}{\partial c_{1}}\quad\frac{\partial g_{2,1}}{\partial c_{2}}\quad\frac{\partial g_{2,2}}{\partial c_{2}}\quad\frac{\partial g_{3}}{\partial c_{3}}\right]\). Taking \(\Delta^{-}_{i}=1\) and \(\Delta^{+}_{i}=2\) for all \(i\), Fig. 2D shows that MS occurs robustly, for all choices of \(\Delta\) within the bounds, because the condition in Theorem 3 is satisfied: both \(\Psi^{-}\) and \(\Psi^{+}\) are initially positive and have a negative sign change._
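As a self-contained numerical check of Example 4 (a sketch; the \(\kappa\) grid and its upper end are illustrative choices), the vertex determinants over \(\mathcal{D}\) reproduce the Theorem 3 certificate, cf. Fig. 2D:

```python
import itertools
import numpy as np

B = np.array([[-1, 1, 0, 0],
              [1, -1, -1, 1],
              [0, 0, 1, -1]], dtype=float)
C = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
J2 = np.diag([1.0, -1.0, -1.0])      # mu = 1, d = 1
J4 = np.diag([-0.03, 0.0, 0.0])      # gamma^2 = 0.03
kappas = np.linspace(1e-2, 8.0, 400)

dets = np.array([[np.linalg.det(-(B @ np.diag(v) @ C + k**2 * J2 + k**4 * J4))
                  for k in kappas]
                 for v in itertools.product([1.0, 2.0], repeat=4)])
psi_m, psi_p = dets.min(axis=0), dets.max(axis=0)
# Theorem 3: Psi^- initially positive and Psi^+ with a negative sign change
print(psi_m[0] > 0 and psi_p[0] > 0 and (psi_p < 0).any())   # expect True
```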
## V Concluding Discussion
Reaction-Diffusion equations are fundamental in the modelling of biological systems across different scales, from concentrations of different molecular species to spatial dynamics of reproducing organisms. The common feature of all such models is the coupling of non-conserved dynamics, which modifies the total concentration of species (reactions), to conserved dynamics, which moves a given concentration in space (diffusion). Recent work [10, 11, 19, 27] has extended the reaction-diffusion framework to account for more
complex models of diffusive dynamics, such as usage of the Cahn-Hilliard functional to model the conserved dynamics corresponding to phase separation. While traditional reaction-diffusion systems have been an intense object of study [2, 15, 16, 17, 20, 21], full treatment of phase separating systems subject to chemical reactions is only now beginning to be explored, with multiple recent studies pointing to intriguing structural and dynamical properties of such systems [26].
We have considered the problem of predicting the emergence of microphase separation (MS) in a continuum model that couples phase separation and chemical reactions: spatial dynamics affect how species arrange in space while keeping their total concentration constant, and chemical reaction dynamics determine how the species locally inter-convert, thus changing the total amounts of individual components. MS is associated with the occurrence of stable spatial oscillations with regions of high material density, known as condensates. We have considered uncertain chemical reaction parameters bounded in a known interval and provided easy-to-compute conditions to check whether MS can be ruled out, can arise for some parameters in the interval, or robustly arises for all parameters in the interval. Our conditions offer useful insight for the robust experimental design of phase-separating systems in synthetic biology or material science.
|
2301.10244 | On solving decision and risk management problems subject to uncertainty | Uncertainty is a pervasive challenge in decision and risk management and it
is usually studied by quantification and modeling. Interestingly, engineers and
other decision makers usually manage uncertainty with strategies such as
incorporating robustness, or by employing decision heuristics. The focus of
this paper is then to develop a systematic understanding of such strategies,
determine their range of application, and develop a framework to better employ
them.
Based on a review of a dataset of 100 decision problems, this paper found
that many decision problems have pivotal properties, i.e. properties that
enable solution strategies, and finds 14 such properties. Therefore, an analyst
can first find these properties in a given problem, and then utilize the
strategies they enable. Multi-objective optimization methods could be used to
make investment decisions quantitatively. The analytical complexity of decision
problems can also be scored by evaluating how many of the pivotal properties
are available. Overall, we find that in the light of pivotal properties,
complex problems under uncertainty frequently appear surprisingly tractable. | Alexander Gutfraind | 2023-01-18T19:16:23Z | http://arxiv.org/abs/2301.10244v1 | ## On solving decision and risk management problems subject to uncertainty
## Abstract
Uncertainty is a pervasive challenge in decision and risk management and it is usually studied by quantification and modeling. Interestingly, engineers and other decision makers usually manage uncertainty with strategies such as incorporating robustness, or by employing decision heuristics. The focus of this paper is then to develop a systematic understanding of such strategies, determine their range of application, and develop a framework to better employ them.
Based on a review of a dataset of 100 decision problems, this paper finds that many decision problems have pivotal properties, i.e. properties that enable solution strategies, and identifies 14 such properties. Therefore, an analyst can first find these properties in a given problem and then utilize the strategies they enable. Multi-objective optimization methods can be used to make investment decisions quantitatively. The analytical complexity of decision problems can also be scored by evaluating how many of the pivotal properties are available. Overall, we find that, in the light of pivotal properties, complex problems under uncertainty frequently appear surprisingly tractable. |
2303.01530 | Towards the Precision Nucleon Energy-Energy Correlator in Lepton-Ion
Collisions | The nucleon energy-energy correlator (NEEC) was proposed in 2209.02080 as a
new way of studying nucleon intrinsic dynamics. In this work, we present a
detailed derivation of the factorization theorem that enables the measurement
of the unpolarized NEEC in lepton-ion collisions. As a first step towards a
precise measurement of this quantity, we obtained the
next-to-leading-logarithmic (NLL, $\sim{\cal O}(\alpha_s^n L^{n-1})$)
resummation in a concise analytic form, and predicted the analytic
$\theta$-angle distribution at ${\cal O}(\alpha^2_s)$. Extending our analytic
resummation formula to higher logarithmic accuracy and the factorization
theorem to hadron-hadron collisions is straightforward. | Haotian Cao, Xiaohui Liu, Hua Xing Zhu | 2023-03-02T19:00:15Z | http://arxiv.org/abs/2303.01530v1 | # Towards the Precision Nucleon Energy-Energy Correlator in Lepton-Ion Collisions
###### Abstract
The nucleon energy-energy correlator (NEEC) was proposed in [1] as a new way of studying nucleon intrinsic dynamics. In this work, we present a detailed derivation of the factorization theorem that enables the measurement of the unpolarized NEEC in lepton-ion collisions. As a first step towards a precise measurement of this quantity, we obtained the next-to-leading-logarithmic (NLL, \(\sim\mathcal{O}(\alpha_{s}^{n}L^{n-1})\)) resummation in a concise analytic form, and predicted the analytic \(\theta\)-angle distribution at \(\mathcal{O}(\alpha_{s}^{2})\). Extending our analytic resummation formula to higher logarithmic accuracy and the factorization theorem to hadron-hadron collisions is straightforward.
## I Introduction
Understanding the intricate internal structures of nucleons is at the central focus of nuclear physics for decades, and will continue to be the scientific frontier within the Standard Model at the next generation QCD facilities such as the upcoming electron-ion collider (EIC) [2; 3; 4]. In recent years, our approaches to nucleon/nucleus tomography have been substantially enriched, thanks to the introduction of innovative ideas into the field, such as the jet-based studies of the transverse momentum dependent (TMD) structure functions [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. However the intricate jet clustering process usually presents challenges in achieving accurate predictions. Recent advances in this direction can be found in [8; 22]. Alternative methods to jets, such as the energy-energy correlator (EEC) [23; 24; 25; 26; 27; 28; 29; 30; 31] have also been shown to be effective in uncovering the intrinsic transverse dynamics [32; 33; 34] or the scales of the quark-gluon plasma [35].
Recently, a novel quantity named the nucleon energy-energy correlator (NEEC) has been proposed as a new look into the nucleon partonic structure [1]. The NEEC probes the initial-final state correlation and takes the following form in momentum-fraction \(z\) space [1; 36]
\[f_{q,\mathrm{EEC}}(z,\theta)\] \[=\int\frac{dy^{-}}{4\pi}e^{-izP^{+}\frac{y^{-}}{2}}\langle P| \bar{\chi}_{n}(y^{-})\frac{\gamma^{+}}{2}\hat{\mathcal{E}}(\theta)\chi_{n}(0 )|P\rangle\,, \tag{1}\]
for the quark NEEC. The gluon NEEC will be given later. Here \(\chi_{n}\) represents the gauge invariant quark field in the Soft Collinear Effective Theory (SCET) [37; 38; 39; 40; 41]. The definition is equivalent to that of QCD by noting that \(\bar{\chi}_{n}(y^{-})\chi_{n}(0)=\bar{\psi}(y^{-})\mathcal{L}[y,0]\psi(0)\), where \(\mathcal{L}[y^{-},0]\) denotes the gauge link between \(0\) and \(y^{-}\). \(\hat{\mathcal{E}}(\theta)\) is the asymptotic energy flow operator [42; 43; 44; 45], that measures the energies from the initial nucleon flowing into the calorimeters sitting far away at angles less than \(\theta\). The energy flow at non-zero angles is induced by the intrinsic transverse dynamics. In this sense, studying the \(\theta\) distribution of the NEEC allows us to extract information on the intrinsic transverse dynamics of the nucleon/nucleus. The Mellin moment of the NEEC is given by \(\int dzz^{N-1}f_{\mathrm{EEC}}(z,\theta)\). Extension of the NEEC to multiple angular correlators by inserting more \(\hat{\mathcal{E}}\) operators at different angles is also attainable.
In Ref. [1], the deep-inelastic scattering (DIS) process illustrated in Fig. 1 is suggested to extract the NEEC, in which the energy-weighted cumulant cross section is measured such that
\[\Sigma_{N}(Q^{2},\theta)=\sum_{i}\int d\sigma(x_{B},Q^{2},p_{i})x_{B}^{N-1} \frac{E_{i}}{E_{P}}\Theta(\theta-\theta_{i}) \tag{2}\]
Here \(N>1\) is a positive power and \(d\sigma\) is the differential cross section. \(x_{B}\) is the Bjorken variable and \(Q^{2}\) the photon virtuality. The polar angle \(\theta_{i}\) of the calorimetry is measured with respect to the nucleon beam. \(p_{i}\) denotes the momentum flow into the calorimetry and
Figure 1: The measurement proposed in Ref. [1] as a probe of the NEEC, where the energy \(E_{i}(\theta)\) in the forward detector is recorded. Bjorken-\(x_{B}\) and \(Q^{2}\) are also measured.
\(E_{P}\) the energy of the incoming nucleon. In this work, we follow Ref. [1] and perform the measurement in the Breit frame. We note that experimentally \(\Sigma_{N}\) itself is hard to measure, but it is easy to obtain the measurable \(\theta\)-distribution by taking the derivative with respect to \(\theta\), \(d\Sigma_{N}/d\theta\).
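For illustration, the following minimal sketch shows how \(\Sigma_{N}(Q^{2},\theta)\) of Eq. (2) could be estimated from simulated DIS events. The event-record layout (`weight`, `xB`, `E_P`, `hits`) is a hypothetical convention for this sketch and is not part of Ref. [1].

```python
def sigma_N(events, N, theta_max):
    """Monte Carlo estimate of the energy-weighted cumulant of Eq. (2).

    Each event is assumed to carry its cross-section weight, Bjorken x_B,
    the proton energy E_P, and calorimeter hits (E_i, theta_i) with the
    polar angle measured with respect to the nucleon beam."""
    total = 0.0
    for ev in events:
        # energy flow into angles below theta_max, normalized to E_P
        flow = sum(E for E, th in ev["hits"] if th < theta_max) / ev["E_P"]
        total += ev["weight"] * ev["xB"] ** (N - 1) * flow
    return total

def dsigma_dtheta(events, N, theta, dtheta=1e-2):
    # the measurable theta-distribution, obtained by a finite difference
    return (sigma_N(events, N, theta + dtheta) - sigma_N(events, N, theta)) / dtheta
```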
Ref. [1] argued, without proof, that when \(\theta\ll 1\), \(\Sigma_{N}(Q^{2},\theta)\) can be factorized into the partonic DIS cross section \(\hat{\sigma}\) and the NEEC to be probed,
\[\Sigma_{N}(Q^{2},\theta)=\sum_{i}\int dx_{B}x_{B}^{N-1}\int_{x_{B}}^{1}\frac{dz}{z}\hat{\sigma}_{i}\left(\frac{x_{B}}{z}\right)f_{i,\text{EEC}}(z,\theta)\,. \tag{3}\]
Similar measurement without the \(x_{B}^{N-1}\) weight has also been suggested as a possible access to the gluon saturation phenomena [36] through the \(\theta\)-distribution of the NEEC.
However, to reliably extract the NEEC and apply it to nucleon/nucleus structure studies, the factorization theorem for \(\Sigma_{N}(Q^{2},\theta)\) needs to be rigorously established. Meanwhile, sufficient theoretical precision is also required. These serve as the major goals of the current work. In this work, using the SCET framework, we derive the factorization for \(\Sigma_{N}(Q^{2},\theta)\). As a first step towards its precision, we carry out the analytic next-to-leading-logarithmic (NLL) resummation of \(\Sigma_{N}(Q^{2},\theta)\) when \(\theta\) is small and match it onto the \(\mathcal{O}(\alpha_{s}^{2})\) fixed-order \(\theta\)-distribution when \(\theta\) becomes large.
The manuscript is organized as follows. In Section II, we show sufficient details on deriving the factorization theorem using SCET. The Section will also present the operator definition of the NEEC \(f_{\text{EEC}}\). In Section III, we show that when \(\theta Q\gg\Lambda_{\text{QCD}}\), the \(f_{\text{EEC}}\) can be further matched onto the collinear parton distribution functions (PDFs) with a perturbatively calculable matching coefficient. We discuss its evolution in Section IV. We calculate all ingredients required for the NLL resummation in Section V and predict the small \(\theta\)-distribution at \(\mathcal{O}(\alpha_{s}^{2})\). The numerical consequences of the resummation and the fixed order \(\alpha_{s}^{2}\)\(\theta\)-distribution are studied in Section VI. We summarize in Section VII.
## II The factorization theorem
In this section, we derive the factorization theorem for \(\Sigma_{N}(Q^{2},\theta)\) when \(\theta Q\ll Q\) using SCET [37; 38; 39; 40; 41]. Throughout the work, we stick to the Breit frame, in which the virtual photon only acquires momentum in its \(z\)-component, with \(q=(0,0,0,-Q)\), and the proton carries the momentum \(P=\frac{Q}{2x_{B}}(1,0,0,1)\). However, the factorization theorem to be derived is applicable in arbitrary frames.
The cumulant cross section \(\Sigma_{N}(Q^{2},\theta)\) can be calculated by
\[\Sigma_{N}(Q^{2},\theta)=\frac{\alpha^{2}}{Q^{4}}\int dx_{B}x_{B }^{N-1}\!\!\sum_{\lambda=T,L}\!\!e_{q}^{2}f_{\lambda}\epsilon_{\lambda,\mu}^{ \ast}\epsilon_{\lambda,\nu}\] \[\times\int d^{4}xe^{iq\cdot x}\left\langle P|j^{\mu\dagger}(x) \,\hat{\mathcal{E}}(\theta)\,j^{\nu}(0)|P\right\rangle, \tag{4}\]
where \(e_{q}\) is the electric charge fraction of the quark initiating the DIS process. Here, we have written the lepton phase space integral as \(\frac{d^{3}l}{(2\pi)^{3}2l^{0}}=\frac{Q^{2}}{16\pi^{2}s}dx_{B}dQ^{2}\) and, up to contributions that vanish by gauge symmetry, we have
\[\sum_{T=1,2}\epsilon_{T,\mu}^{\ast}\epsilon_{T,\nu}=-g_{\mu\nu}+ \frac{4x_{B}^{2}}{Q^{2}}P_{\mu}P_{\nu}\,,\] \[\epsilon_{L,\mu}^{\ast}\epsilon_{L,\nu}=\frac{4x_{B}^{2}}{Q^{2} }P_{\mu}P_{\nu}\,, \tag{5}\]
with \(\epsilon_{T}^{\mu}\) and \(\epsilon_{L}^{\nu}\) the transverse and longitudinal polarized vector of the virtual photon, respectively. The corresponding flux is given by
\[f_{T}=1-y+\frac{y^{2}}{2}\,,\quad f_{L}=2-2y\,, \tag{6}\]
where \(y=\frac{Q^{2}}{sx_{B}}\). The inserted normalized asymptotic energy flow operator \(\hat{\mathcal{E}}(\theta)\) measures the energy deposited in the detector at angles less than a given angle \(\theta\)[42; 43; 44; 45], normalized to the energy \(E_{P}\) of the incoming proton,
\[\hat{\mathcal{E}}(\theta)|X\rangle=\sum_{i\in X}\frac{E_{i}}{E_{ P}}\Theta(\theta-\theta_{i})|X\rangle\,. \tag{7}\]
We note that if we replace \(\hat{\mathcal{E}}(\theta)\) by the identity operator \(1=\sum_{X}|X\rangle\langle X|\), Eq. (4) reduces to the definition of the standard DIS cross section.
When \(\theta Q\ll Q\), the possible leading contributions to \(\Sigma_{N}(Q^{2},\theta)\) come from the hard degrees of freedom (\(H\)) whose momenta scale as \(p_{H}=(p_{H}^{+},p_{H}^{-},p_{H,t})\sim Q(1,1,1)\),1 the collinear contributions (\(C\)) with momentum \(p_{C}\sim Q(1,\theta^{2},\theta)\), and the soft modes (\(S\)) with \(p_{S}\sim Q(\theta^{a},\theta^{a},\theta^{a})\) with \(a\geq 1\). However, we note that the energy flow operator \(\hat{\mathcal{E}}(\theta)\) acts only on the collinear sector. To see this, we first decompose the final state as \(|X\rangle=|X_{H}X_{C}X_{S}\rangle\), and apply the \(\hat{\mathcal{E}}(\theta)\) to find
Footnote 1: Throughout this work, we use the Sudakov decomposition, in which \(p^{+}=p^{0}+p^{3}\equiv\bar{n}\cdot p\), and \(p^{-}=p^{0}-p^{3}\equiv n\cdot p\). Here \(n=(1,0,0,1)\) and \(\bar{n}=(1,0,0,-1)\) while \(p_{t}\cdot n=p_{t}\cdot\bar{n}=0\).
\[\hat{\mathcal{E}}(\theta)|X\rangle=\frac{1}{E_{P}}\sum_{i\in X} \Big{(}E_{H,i}\Theta(\theta-\theta_{H,i})+E_{C,i}\Theta(\theta-\theta_{C,i})\] \[+E_{S,i}\Theta(\theta-\theta_{S,i})\Big{)}|X_{H},X_{C},X_{S}\rangle\,. \tag{8}\]
Now for the hard radiations, by power counting, \(\theta_{H,i}\sim\frac{p_{H,t}}{p_{H}^{+}}\sim 1\gg\theta\), so the constraint \(\Theta(\theta-\theta_{H,i})\) cannot be satisfied and the first term in Eq. (8) vanishes in the small \(\theta\) limit. On the other hand, in the last term, \(E_{S,i}\sim\theta^{a}Q\) is also power suppressed as \(\theta\to 0\) when compared with \(E_{C,i}\sim Q\). In this way, up to power-suppressed corrections, we find
\[\hat{\mathcal{E}}(\theta)|X\rangle = \sum_{i\in X_{C}}\frac{E_{C,i}}{E_{P}}\Theta(\theta-\theta_{C,i})|X_{H}\rangle|X_{C}\rangle|X_{S}\rangle \tag{9}\] \[= (\hat{\mathcal{E}}(\theta)|X_{C}\rangle)|X_{H}\rangle|X_{S}\rangle\,,\]
and we conclude that in the small-\(\theta\) limit, the measurement \(\hat{\cal E}(\theta)\) is an operator living solely in the collinear sector and acts inclusively on the hard and the soft radiations.
To proceed, we follow [41] to match \(j^{\mu\dagger}{\cal E}(\theta)j^{\nu}\) to the SCET operators \({\cal O}_{q}\) and \({\cal O}_{g}\), with
\[\langle P|j^{\mu\dagger}(x)\hat{\cal E}(\theta)j^{\nu}(0)|P\rangle=C^{\mu\nu}_{q}\langle P|{\cal O}_{q}|P\rangle+C^{\mu\nu}_{g}\langle P|{\cal O}_{g}|P\rangle\,, \tag{11}\]
where \(C^{\mu\nu}_{q/g}\) are the hard matching coefficients to be determined and \(C^{\mu\nu}_{g}\) starts at \({\cal O}(\alpha_{s})\). The SCET operators are defined as
\[{\cal O}_{q}(x,\theta) = \bar{\chi}_{n}(x)Y^{\dagger}(x)\frac{\gamma^{+}}{2}\hat{\cal E}( \theta)Y(0)\chi_{n}(0)\,,\] \[{\cal O}_{g}(x,\theta) = {\cal B}_{\perp}(x){\cal Y}^{\dagger}(x)\hat{\cal E}(\theta){ \cal Y}(0){\cal B}_{\perp}(0)\,, \tag{12}\]
and contain only the gauge invariant collinear quark and gluon fields \(\chi\) and \({\cal B}_{\perp}\), respectively [46], which are
\[\chi_{n}(x)=W^{\dagger}_{n}(x)\xi_{n}(x)\,,\quad{\cal B}^{\mu}_{\perp}=\frac{1 }{g_{s}}[W^{\dagger}_{n}i{\cal D}^{\mu}_{\perp}W_{n}](x) \tag{13}\]
We note that both \(\chi\) and \({\cal B}_{\perp}\) scale as \(\theta\) by power counting [37]. Here the collinear Wilson lines are
\[W_{n}(x)=\sum_{\rm perms}\exp\left(-\frac{g_{s}}{\bar{n}\cdot P_{n}}\bar{n} \cdot A_{n}(x)\right)\,, \tag{14}\]
to make \(\chi\) and \({\cal B}_{\perp}\) gauge invariant. We also have the soft Wilson lines \(Y\) and \({\cal Y}\) in the fundamental and the adjoint representation, respectively. The soft Wilson lines decouple the interaction between the collinear and the soft sectors. Here we note that
\[[\hat{\cal E},Y]=[\hat{\cal E},{\cal Y}]=0\,, \tag{15}\]
since \(\hat{\cal E}(\theta)\) and \(Y({\cal Y})\) act on different sectors. Furthermore, the collinear fields have support in the region where \(x^{\mu}\sim Q^{-1}(1,\theta^{-2},\theta^{-1})\), while the soft fields have support within the region \(x^{\mu}\sim Q^{-1}(\theta^{-a},\theta^{-a},\theta^{-a})\).
The hadronic matrix element in Eq. (4) is then matched onto the SCET matrix as
\[\int d^{4}xe^{iq\cdot x}\,\langle P|j^{\dagger\mu}(x)\,\hat{ \cal E}(\theta)\,j^{\nu}(0)|P\rangle \tag{16}\] \[= \int d^{4}xe^{iq\cdot x}\] \[\times\Bigg{(}C^{\mu\nu}_{q}(x)\langle P|\bar{\chi}_{n}(x)Y^{ \dagger}(x)\frac{\gamma^{+}}{2}\hat{\cal E}(\theta)Y(0)\chi_{n}(0)|P\rangle\] \[+C^{\mu\nu}_{g}(x)\langle P|{\cal B}_{\perp}(x){\cal Y}^{\dagger }(x)\hat{\cal E}(\theta){\cal Y}(0){\cal B}_{\perp}(0)|P\rangle\Bigg{)}\,,\]
where \(q\sim Q(1,1,1)\), and hence the \(x\) in the hadronic tensor scales as \(x\sim\frac{1}{Q}(1,1,1)\). We perform a multipole expansion in the collinear fields and the soft Wilson lines to find
\[\int d^{4}xe^{iq\cdot x}\,\langle P|j^{\dagger\mu}(x)\,\hat{\cal E }(\theta)\,j^{\nu}(0)|P\rangle \tag{17}\] \[= \int d^{4}xe^{iq\cdot x}\] \[\times\Bigg{(}C^{\mu\nu}_{q}(x)\langle P|\bar{\chi}_{n}(x^{-})Y^{ \dagger}(0)\frac{\gamma^{+}}{2}\hat{\cal E}(\theta)Y(0)\chi_{n}(0)|P\rangle\] \[+C^{\mu\nu}_{g}(x)\langle P|{\cal B}_{\perp}(x^{-}){\cal Y}^{ \dagger}(0)\hat{\cal E}(\theta){\cal Y}(0){\cal B}_{\perp}(0)|P\rangle\Bigg{)}\,.\]
Now we use the commutation relation between \(Y({\cal Y})\) and \({\cal E}\) in Eq. (15) and the identity \(Y^{\dagger}Y={\cal Y}^{\dagger}{\cal Y}=1\) to reach
\[\int d^{4}xe^{iq\cdot x}\,\langle P|j^{\dagger\mu}(x)\,\hat{\cal E }(\theta)\,j^{\nu}(0)|P\rangle \tag{18}\] \[= \int d^{4}xe^{iq\cdot x}\Bigg{(}C^{\mu\nu}_{q}(x)\langle P|\bar{ \chi}_{n}(x^{-})\frac{\gamma^{+}}{2}\hat{\cal E}(\theta)\chi_{n}(0)|P\rangle\] \[+C^{\mu\nu}_{g}(x)\langle P|{\cal B}_{\perp}(x^{-})\hat{\cal E}( \theta){\cal B}_{\perp}(0)|P\rangle\Bigg{)}\,.\]
From the derivation, we see clearly that in the small-\(\theta\) limit the measurement is fully inclusive over the soft radiation, and therefore the soft modes do not lead to any logarithmically enhanced contributions. This is different from the conventional TMD measurement, where the soft contribution is enhanced and eventually gives rise to the perturbative Sudakov factor that suppresses the small transverse momentum region exponentially.
Now we plug the hadronic tensor into Eq. (4) to find the weighted cross section \(\Sigma_{N}\) takes the form
\[\Sigma_{N}(Q^{2},\theta)=\frac{\alpha^{2}}{Q^{4}}\int dx_{B}x_{B}^ {N-1}\!\!\sum_{\lambda=T,L}\!\!e_{q}^{2}f_{\lambda}\epsilon_{\lambda,\mu}^{*} \epsilon_{\lambda,\nu} \tag{19}\] \[\times\int d^{4}xe^{iq\cdot x}\Bigg{(}C^{\mu\nu}_{q}(x)\langle P| \bar{\chi}_{n}(x^{-})\frac{\gamma^{+}}{2}\hat{\cal E}(\theta)\chi_{n}(0)|P\rangle\] \[+C^{\mu\nu}_{g}(x)\langle P|{\cal B}_{\perp}(x^{-})\hat{\cal E}( \theta){\cal B}_{\perp}(0)|P\rangle\Bigg{)}\,.\]
We further manipulate the \(\Sigma_{N}(Q^{2},\theta)\) by inserting the complete set \(1=|X_{C}\rangle\langle X_{C}|\) of the collinear sector into the hadronic tensor, and then perform the translation operation in \(x^{-}\), to find
\[\Sigma_{N}(Q^{2},\theta)=\frac{\alpha^{2}}{Q^{4}}\int dx_{B}x_{B}^{N-1}\!\!\sum_{\lambda=T,L}\!\!e_{q}^{2}f_{\lambda}\epsilon_{\lambda,\mu}^{*}\epsilon_{\lambda,\nu} \tag{20}\] \[\times P^{+}\int dz\delta((1-z)P^{+}-P_{C}^{+})\int d^{4}xe^{iq\cdot x}e^{i(P^{+}-P_{C}^{+})\frac{x^{-}}{2}}\] \[\times\Bigg{(}C^{\mu\nu}_{q}(x)\langle P|\bar{\chi}_{n}(0)\frac{\gamma^{+}}{2}\hat{\cal E}(\theta)|X_{C}\rangle\langle X_{C}|\chi_{n}(0)|P\rangle\] \[+C^{\mu\nu}_{g}(x)\langle P|{\cal B}_{\perp}(0)\hat{\cal E}(\theta)|X_{C}\rangle\langle X_{C}|{\cal B}_{\perp}(0)|P\rangle\Bigg{)}\,,\]
where we have inserted the identity \(P^{+}\int dz\delta((1-z)P^{+}-P^{+}_{C})\) to define the variable \(z\). Here \(P^{+}_{C}\) is the large component of the momentum of the collinear radiation. Now we replace \((P^{+}-P^{+}_{C})\frac{x^{-}}{2}\) in the exponent by \(zP^{+}\frac{x^{-}}{2}=zP\cdot x\), where we have used \(P\sim P^{+}\frac{n^{\mu}}{2}\) up to \(\mathcal{O}\left(\frac{\Lambda_{\rm QCD}}{Q}\right)\) corrections. Further noticing that \(\delta((1-z)P^{+}-P^{+}_{C})=\int\frac{dy^{-}}{4\pi}e^{i[(1-z)P^{+}-P^{+}_{C}]\frac{y^{-}}{2}}\), and applying the translation operation on \(\langle P|\ldots|X_{C}\rangle\), we find that \(\Sigma_{N}(Q^{2},\theta)\) possesses the factorized form
\[\Sigma_{N}(Q^{2},\theta) = \int dx_{B}x_{B}^{N-1} \tag{20}\] \[\times \int dz\Bigg{(}H_{q}(z,x_{B},Q^{2})\,f_{q,{\rm EEC}}(z,P^{+}\theta)\] \[\quad+H_{g}(z,x_{B},Q^{2})\,f_{g,{\rm EEC}}(z,P^{+}\theta)\Bigg{)}\,.\]
where the hard coefficients \(H_{q}\) and \(H_{g}\) are defined as
\[H_{q} = \frac{\alpha^{2}}{Q^{4}}\sum_{\lambda=T,L}e_{q}^{2}f_{\lambda} \epsilon_{\lambda,\mu}^{*}\epsilon_{\lambda,\nu}\int d^{4}xe^{i(q+zP)\cdot x} C_{q}^{\mu\nu}(x)P^{+}\,,\] \[H_{g} = \frac{\alpha^{2}}{Q^{4}}\sum_{\lambda=T,L}e_{q}^{2}f_{\lambda} \epsilon_{\lambda,\mu}^{*}\epsilon_{\lambda,\nu}\int d^{4}xe^{i(q+zP)\cdot x} C_{g}^{\mu\nu}(x)\,. \tag{21}\]
And the collinear functions are
\[f_{q,{\rm EEC}}(z,P^{+}\theta)=\int\frac{dy^{-}}{4\pi}e^{-izP^{ +}\frac{y^{-}}{2}} \tag{22}\] \[\quad\quad\quad\times\langle P|\bar{\chi}_{n}\left(\frac{y^{-}}{ 2}n^{\mu}\right)\frac{\gamma^{+}}{2}\hat{\mathcal{E}}(\theta)\chi_{n}(0)|P \rangle\,,\]
for the quarks, and
\[f_{g,{\rm EEC}}(z,P^{+}\theta)=\int\frac{dy^{-}}{4\pi}e^{-izP^{+ }\frac{y^{-}}{2}} \tag{23}\] \[\quad\quad\quad\times P^{+}\langle P|\mathcal{B}_{\perp}\left( \frac{y^{-}}{2}n^{\mu}\right)\hat{\mathcal{E}}(\theta)\mathcal{B}_{\perp}(0)|P \rangle\,,\]
for the gluon. These furnish the operator definitions of the quark and gluon _nucleon energy-energy correlators_ in momentum space, respectively.
We can further derive the hard coefficients \(H_{q}\) and \(H_{g}\) by noting that
* Once \(\hat{\mathcal{E}}(\theta)\) is replaced by the identity operator \(1=\sum_{X}|X\rangle\langle X|\) within the NEEC \(f_{i,{\rm EEC}}\), we recover the operator definition of the collinear PDF \(f_{i}(z)\). Meanwhile, Eq. (4) and Eq. (20) reduce to the standard inclusive DIS cross section;
* The hard coefficients are independent of the details of the collinear sector, and therefore unaffected whether we place the \(\hat{\mathcal{E}}(\theta)\) or the identity operator in the collinear function.
Immediately, we reach the conclusion that the hard functions satisfy
\[H_{q}=\frac{1}{z}\,\hat{\sigma}_{q}\left(\frac{x_{B}}{z},Q^{2}\right)\,,\quad H _{g}=\frac{1}{z}\,\hat{\sigma}_{g}\left(\frac{x_{B}}{z},Q^{2}\right)\,, \tag{24}\]
and they are nothing but the DIS partonic cross sections. Therefore,
\[\Sigma_{N}(Q^{2},\theta) = \sum_{i=q,g}\int dx_{B}x_{B}^{N-1} \tag{25}\] \[\times \int\frac{dz}{z}\hat{\sigma}_{i}\left(\frac{x_{B}}{z},Q^{2} \right)f_{i,{\rm EEC}}(z,P^{+}\theta)\,.\]
One can observe from the factorization theorem that the \(\theta\)-dependence of \(\Sigma_{N}(Q^{2},\theta)\) is entirely through the \(f_{\rm EEC}\), and thus measuring \(\Sigma_{N}(Q^{2},\theta)\) directly probes the NEEC. The derivation also holds for the measurement without the \(x_{B}^{N-1}\) weighting, as proposed in Ref. [36]; the factorization is similar to what we have obtained, with the integral over \(x_{B}\) removed, and is nothing but the second line of the above equation.
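As a numerical illustration of Eq. (25), a single parton channel can be evaluated by nested quadrature. The callables `sigma_hat` and `f_eec` below are placeholders for the partonic cross section and the NEEC, not concrete models; this is a minimal sketch rather than the implementation used in this work.

```python
from scipy.integrate import quad

def Sigma_N(sigma_hat, f_eec, N, theta, xB_min=1e-4):
    """Evaluate one channel of Eq. (25):
    int dx_B x_B^(N-1) int_{x_B}^1 dz/z sigma_hat(x_B/z) f_eec(z, theta)."""
    def inner(xB):
        val, _ = quad(lambda z: sigma_hat(xB / z) * f_eec(z, theta) / z, xB, 1.0)
        return xB ** (N - 1) * val
    total, _ = quad(inner, xB_min, 1.0)
    return total

# toy inputs, for illustration only:
# Sigma_N(lambda u: 1.0, lambda z, th: (1 - z) ** 3 / z, N=3, theta=0.1)
```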
When \(\theta P^{+}\gg\Lambda_{\rm QCD}\), as shown in the following section, the NEEC can be matched onto the collinear PDFs, with all \(\theta\) dependence occurring only in the perturbative matching coefficients. In this way, since \(f_{\rm EEC}\) is dimensionless, the \(P^{+}\theta\) will show up in the form of \(\ln\frac{P^{+}\theta}{\mu}\). Therefore, \(\Sigma_{N}\) could also be written as 2
Footnote 2: Due to the flux factor \(f_{\lambda}(y)\) from Eq. (6) in the partonic cross section \(\hat{\sigma}_{i}\), with \(y=\frac{Q^{2}}{x_{B}s}=\frac{Q^{2}}{s}\frac{1}{uz}\), Eq. (26) should be written as a linear combination of different effective weights \(N-i\), one for each power \(y^{i}\). However, terms with one power higher in \(y\) will be suppressed by \(\frac{Q^{2}}{s}\) for \(Q^{2}\ll s\).
\[\Sigma_{N}(Q^{2},\theta)=\sum_{i=q,g}\int du\,u^{N-1} \tag{26}\] \[\quad\quad\quad\times\hat{\sigma}_{i}\left(u,Q^{2}\right)f_{i,{ \rm EEC}}\left(N,\ln\frac{Q\theta}{u\mu}\right)\,,\]
where \(u=\frac{x_{B}}{z}\) and we have used the fact that \(P^{+}=\frac{Q}{x_{B}}=\frac{Q}{zu}\) in the Breit frame. The \(\mu\)-dependence entering through the strong coupling and the collinear PDFs is suppressed in the notation \(f_{i,{\rm EEC}}\), where \(f_{i,{\rm EEC}}(N,\ln\frac{Q\theta}{u\mu})\) is the NEEC in the Mellin space,
\[f_{i,{\rm EEC}}(N,\ln\frac{Q\theta}{u\mu})=\int_{0}^{1}dz\,z^{N-1}f_{i,{\rm EEC }}(z,\ln\frac{Q\theta}{zu\mu})\,. \tag{27}\]
To simplify the notation, we introduce the \(\odot\)-product, defined as
\[h_{1}\odot h_{2}\cdots\odot h_{n}\odot f(u) \tag{28}\] \[= \int\prod_{i}^{n}du_{i}u_{i}^{N-1}h_{i}(u_{i})\,f(N,uu_{1}u_{2} \ldots u_{n})\]
Therefore Eq. (26) can be written as \(\Sigma_{N}(Q^{2},\theta)=\hat{\sigma}_{i}\odot f_{i,{\rm EEC}}(1)\); we will always drop the "\((1)\)" and write
\[\Sigma_{N}(Q^{2},\theta)=\hat{\sigma}_{i}\odot f_{i,{\rm EEC}}\,. \tag{29}\]
## III Matching onto the collinear PDF when \(\theta Q\gg\Lambda_{\rm QCD}\)
When \(\theta Q\gg\Lambda_{\rm QCD}\), the collinear modes can be further split into the hard collinear fields (\(C_{1}\)) with momentum scaling \(p_{C_{1}}\sim Q(1,\theta^{2},\theta)\) and the \(C_{2}\) modes in \({\rm SCET}_{\rm II}\) with \(p_{C_{2}}\sim Q(1,\lambda^{2},\lambda)\) with \(\lambda\equiv\frac{\Lambda_{\rm QCD}}{Q}\ll\theta\). The SCET operators in Eq. (11) can be further matched onto the \({\rm SCET}_{\rm II}\) operators such that
\[{\cal O}_{i}(x^{-})=\sum_{j=q,g}C_{j}(x^{-}){\cal O}_{j,{\rm II}}(x^{-})\,, \tag{30}\]
where the operators on the left-hand side of the equation are those that appeared in the matrix elements of Eq. (22) and (23), and the \({\cal O}_{j,{\rm II}}\) are the \({\rm SCET}_{\rm II}\) operators, which have the exact same form as \({\cal O}_{i}\) but without the energy operator \(\hat{\cal E}(\theta)\) and are made out of the \(C_{2}\) fields.
We pause here to first study the effect of \(\hat{\cal E}(\theta)\) acting on \(|X\rangle=|X_{C_{1}},X_{C_{2}}\rangle\). From the definition, we have
\[\hat{\cal E}(\theta)|X_{C_{1}},X_{C_{2}}\rangle \tag{31}\] \[= \sum_{\begin{subarray}{c}i\in X_{C_{1}}\\ j\in X_{C_{2}}\end{subarray}}\left(\frac{E_{i}}{E_{P}}\Theta(\theta-\theta_{i} )+\frac{E_{j}}{E_{P}}\Theta(\theta-\theta_{j})\right)|X\rangle\,.\]
We note that since, by definition, the polar angle of particles from the \(C_{2}\) modes scales as \(\theta_{j}\sim\lambda\ll\theta\), the second \(\Theta\) function is always satisfied and can be replaced by 1. Therefore we find
\[\hat{\cal E}(\theta)|X_{C_{1}},X_{C_{2}}\rangle \tag{32}\] \[= \left(\sum_{i\in X_{C_{1}}}-\frac{E_{i}}{E_{P}}\Theta(\theta_{i} -\theta)+\frac{E_{X}}{E_{P}}\right)|X_{C_{1}},X_{C_{2}}\rangle\,\,\,\,.\]
where \(E_{X}\equiv\sum_{i\in X_{C_{1}},j\in X_{C_{2}}}E_{i}+E_{j}\) and we have used \(\Theta(\theta-\theta_{i})=1-\Theta(\theta_{i}-\theta)\).
The \(E_{X}\) term in Eq. (32) acts on both \(C_{1}\) and \(C_{2}\) modes simultaneously and contributes to the \(f_{\rm EEC}\) in the way that
\[f_{i,{\rm EEC}}\supset\sum_{X}\frac{E_{X}}{E_{P}}\langle P|{\cal O }_{i}|X\rangle\langle X|{\cal O}_{i}|P\rangle\delta((1-z)P^{+}-P_{X}^{+}) \tag{33}\] \[= (1-z)\sum_{X}\langle P|{\cal O}_{i}|X\rangle\langle X|{\cal O}_{i }|P\rangle\delta((1-z)P^{+}-P_{X}^{+})\] \[= f_{i}(z)-z\,f_{i}(z)\,,\qquad\mbox{with $i=q,g$}\,,\]
where \(f_{i}(z)\) is the collinear PDF. In the first line we have inserted the complete set \(\sum_{X}|X\rangle\langle X|\) into Eq. (22) and (23) and applied Eq. (32), but only kept the \(E_{X}\) term. We then performed the translation operation in the \(n^{\mu}\) direction before integrating over \(y^{-}\). Here, we have also used the definition of the collinear PDF
\[f_{i}(z) = \sum_{X}\langle P|{\cal O}_{i}|X\rangle\langle X|{\cal O}_{i}|P \rangle\delta((1-z)P^{+}-P_{X}^{+}) \tag{34}\] \[= \int\frac{dy^{-}}{4\pi}e^{-izP+\frac{y^{-}}{2}}\langle P|{\cal O }_{i}\left(y^{-}\frac{n^{\mu}}{2}\right){\cal O}_{i}|P\rangle\,.\]
The \(-E_{i}\Theta(\theta_{i}-\theta)\) term in Eq. (32) acts only on the \(C_{1}\) modes. Therefore when matching onto \({\rm SCET}_{\rm II}\), together with the coefficient \(C_{j}(x^{-})\) in Eq. (30), it gives the matching coefficient. The matching procedure is similar to what we did in the previous section and we will not repeat it here. The final contribution from the \(-E_{i}\Theta(\theta_{i}-\theta)\) term then reads
\[f_{i,{\rm EEC}}\supset-\sum_{j}\int_{z}^{1}\frac{d\xi}{\xi}I^{ \prime}_{ij}\left(\frac{z}{\xi},\ln\frac{zQ\theta}{x_{B}\mu}\right)\,\left[\xi f _{j}(\xi)\right]\,, \tag{35}\]
where the additional \(\xi\) in front of \(f(\xi)\) originates from \(E_{i}/E_{P}\). Here \(I^{\prime}_{ij}\) is the matching coefficient that can be calculated perturbatively and starts from \({\cal O}(\alpha_{s})\).
Gathering all pieces, the matching of the NEEC \(f_{i,{\rm EEC}}\) to the collinear PDFs when \(\theta Q\gg\Lambda_{\rm QCD}\) is given by
\[f_{i,{\rm EEC}}(z,\ln\frac{Q\theta}{u\mu}) \tag{36}\] \[= f_{i}(z)-\int_{z}^{1}\frac{d\xi}{\xi}I_{ij}\left(\frac{z}{\xi}, \ln\frac{Q\theta}{u\mu}\right)\,\xi f_{j}(\xi)\,,\]
where \(I_{ij}(z)=\delta(1-z)+I^{\prime}_{ij}(z)\). It is interesting to note that the \(\theta\) dependence resides solely in the term involving \(\xi f_{j}(\xi)\), whose matching coefficient \(I_{ij}\), as we will show, is determined by the splitting function \(P(z_{i},\dots)\) and involves the factor \(z_{i}P(z_{i},\dots)\), where the \(z_{i}\) factor originates from the energy weight of parton \(i\). Therefore, from Eq. (36), we can interpret \(df_{\rm EEC}/d\theta\) as the parton energy density at the angle \(\theta\) for a given incoming parton energy density \(\xi f(\xi)\).
Written in the Mellin space, we have
\[f_{i,{\rm EEC}}(N,\ln\frac{Q\theta}{u\mu}) \tag{37}\] \[= f_{i}(N)-I_{ij}\left(N,\ln\frac{Q\theta}{u\mu}\right)f_{j}(N+1)\,,\]
where \(I_{ij}(N)\) is the Mellin moment of \(I_{ij}(z)\).
For later use, we define the \(*\)-product by
\[I*f=I(N)f(N+1)\,. \tag{38}\]
With this notation, Eq. (37) is written as
\[f_{i,{\rm EEC}}=f_{i}-\sum_{j}I_{ij}*f_{j}\,. \tag{39}\]
We note the difference between \(f_{i}=f_{i}(N)\) and \(1*f_{i}=f_{i}(N+1)\).
## IV Evolution equations
From the factorization theorem in Eq. (25), Eq. (26), and the consistency relation
\[\frac{d}{d\ln\mu^{2}}\Sigma_{N}(Q^{2},\theta)=0\,, \tag{40}\]
we deduce that the NEEC satisfies the _modified_ Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) evolution equation
\[\frac{d}{d\ln\mu^{2}}f_{i,\text{EEC}}(z,\ln\frac{Q\theta}{zu\mu})\] \[=\sum_{j}\int_{z}^{1}\frac{d\xi}{\xi}P_{ij}\left(\frac{z}{\xi} \right)f_{j,\text{EEC}}(\xi,\ln\frac{Q\theta}{zu\mu})\,, \tag{41}\]
in the momentum space. The inclusion of \(z\) as an argument of the function \(f_{j,\text{EEC}}\) indicates that Eq. (41) cannot be considered a conventional convolution beyond LL accuracy. This extra dependence arises from the inherent angular nature of the NEEC, which makes it rely on the frame of reference in which the observation is made. An analogous situation is observed in the case of the final state EEC, as discussed in [30]. In the Mellin space, the evolution of the NEEC follows
\[\frac{d}{d\ln\mu^{2}}f_{i,\text{EEC}}(N,\ln\frac{Q\theta}{u\mu})\] \[=\sum_{j}\int d\xi\xi^{N-1}P_{ij}\left(\xi\right)f_{j,\text{EEC}}( N,\ln\frac{Q\theta}{\xi\,u\mu})\] \[=P_{ij}\odot f_{j,\text{EEC}}(u)\,, \tag{42}\]
where \(P_{ij}(\xi)\) is the vacuum splitting function and the \(\odot\) notation follows Eq. (28). Note that the additional \(\xi\) within the logarithm is due to the specific structure of the Mellin transformation for \(f_{\text{EEC}}\) in Eq. (27).
In the momentum space, the evolution equation Eq. (41) can be solved numerically using HOPPET [47] or APFEL++ [48], with the initial condition at \(\mu_{0}\sim Q\theta\) to be determined later in Section V.2. The solution in the Mellin space is slightly more involved and we derive it in Appendix A. Its analytic form will be given in Section V.3.
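As noted above, the \(\xi\)-shift inside the logarithm only matters beyond LL accuracy, so at LL Eq. (42) collapses to an ordinary matrix product in Mellin space. The sketch below Euler-steps this LL truncation for the singlet pair; the one-loop running coupling and its \(\Lambda_{\rm QCD}\) value are assumptions made for illustration, and the full NLL treatment (keeping the shift) is the one derived in Appendix A.

```python
import numpy as np

def alpha_s(mu, nf=5, Lam=0.226):
    # one-loop running coupling; the Lambda_QCD value is an assumed input
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * np.pi / (beta0 * np.log(mu**2 / Lam**2))

def evolve_LL(F0, P0, mu0, mu, steps=2000):
    """Euler evolution of the LL truncation of Eq. (42):
    dF/dln(mu^2) = alpha_s(mu)/(2 pi) * P0(N) . F,
    with F = (F_q^S, F_g^S) and P0 the 2x2 LO splitting matrix at fixed N."""
    t0, t1 = np.log(mu0**2), np.log(mu**2)
    dt = (t1 - t0) / steps
    F = np.asarray(F0, dtype=float)
    for k in range(steps):
        mu_k = np.exp(0.5 * (t0 + k * dt))
        F = F + dt * (alpha_s(mu_k) / (2 * np.pi)) * (P0 @ F)
    return F
```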
In practice, it is useful to introduce for the NEEC \(f_{\text{EEC}}\) the flavor singlet and non-singlet distributions, where the singlet part is given by
\[F_{q}^{S}=\sum_{i}(f_{q_{i},\text{EEC}}+f_{\bar{q}_{i},\text{EEC}})\,,\quad F_{ g}^{S}=f_{g,\text{EEC}}\,, \tag{43}\]
and the non-singlet part is defined as
\[F_{i}^{NS}=N_{F}(f_{q_{i},\text{EEC}}+f_{\bar{q}_{i},\text{EEC}})-F_{q}^{S}\,. \tag{44}\]
The definitions follow directly those of the collinear PDFs [49; 50]. We note that
\[f_{q_{i},\text{EEC}}+f_{\bar{q}_{i},\text{EEC}}=\frac{1}{N_{F}}(F_{i}^{NS}+F_{ q}^{S})\,,\ f_{g,\text{EEC}}=F_{g}^{S}\,. \tag{45}\]
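In code, these combinations amount to simple linear maps over the flavor index. A minimal sketch follows; the dictionary keys (`q1`, `q1bar`, ..., `g`) are a hypothetical labeling, not a convention from Refs. [49; 50].

```python
def flavor_decompose(f, NF=5):
    """Build the singlet (Eq. (43)) and non-singlet (Eq. (44)) combinations
    from per-flavor NEEC moments at fixed (N, theta).
    `f` maps 'q1', 'q1bar', ..., 'g' to numbers."""
    FqS = sum(f[f"q{i}"] + f[f"q{i}bar"] for i in range(1, NF + 1))
    FgS = f["g"]
    FNS = {i: NF * (f[f"q{i}"] + f[f"q{i}bar"]) - FqS for i in range(1, NF + 1)}
    # inversion, Eq. (45): f_{q_i} + f_{qbar_i} = (FNS_i + FqS) / NF
    return FqS, FgS, FNS
```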
Since the \(f_{\text{EEC}}\) behaves exactly like the collinear PDF, by the flavor and charge conjugation symmetry, the non-singlet distribution for the NEEC evolves as [49; 50]
\[\frac{d}{d\ln\mu^{2}}F_{i}^{NS}(N,\ln\frac{Q\theta}{u\mu})=P_{NS}^{+}\odot F_{ i}^{NS}(u)\,, \tag{46}\]
with no mixing with the singlet distributions \(F_{q}^{S}\) and \(F_{g}^{S}\). Here '\(\odot\)' follows Eq. (28). The singlet distributions evolve as
\[\frac{d}{d\ln\mu^{2}}\begin{bmatrix}F_{q}^{S}\\ F_{g}^{S}\end{bmatrix}=\begin{bmatrix}P_{qq}^{S}&P_{qg}^{S}\\ P_{gq}^{S}&P_{gg}^{S}\end{bmatrix}\odot\begin{bmatrix}F_{q}^{S}\\ F_{g}^{S}\end{bmatrix}(u)\,. \tag{47}\]
Here \(P_{gg}^{S}=P_{gg}\), \(P_{qg}^{S}=2N_{F}P_{qg}\), \(P_{gq}^{S}=P_{gq}\), and
\[P_{qq}^{S}=P_{NS}^{+}+P_{ps}\,, \tag{48}\]
are defined in Ref. [49; 50]. The non-singlet and the pure singlet splitting kernels \(P_{NS}^{+}\) and \(P_{ps}\) can also be found therein. At order \(\alpha_{s}\), \(P_{NS}^{+}=P_{qq}\) and \(P_{ps}=0\). The \(\mathcal{O}(\alpha_{s}^{2})\) results are given in the Appendix.
The evolution of the matching coefficient \(I_{ij}\) can be directly derived from Eq. (36) and Eq. (37), along with the evolution of the \(f_{\text{EEC}}\) in Eqs. (41) and (42). For practical use, we note that by the charge conjugation and flavor symmetry, the matching coefficient \(I_{ij}\) for the quark can always be written as
\[I_{q_{i}q_{j}} =I_{\bar{q}_{i}\bar{q}_{j}}=I_{qq}^{NS}\delta_{ij}+I_{qq}^{PS}\,,\] \[I_{q_{i}\bar{q}_{j}} =I_{\bar{q}_{i}q_{j}}=I_{q\bar{q}}^{NS}\delta_{ij}+I_{q\bar{q}}^{ PS}\,, \tag{49}\]
where \(I^{PS}\) is flavor independent. In this way, we find
\[F_{i}^{NS}=f_{i}^{NS}-(I_{qq}^{NS}+I_{q\bar{q}}^{NS})\ast f_{i}^{NS}\equiv f_ {i}^{NS}-I^{NS}\ast f_{i}^{NS}\,, \tag{50}\]
and the _pure_ quark contribution to \(F_{q}^{S}\) is
\[F_{q}^{S} =f_{q}^{S}-(I_{qq}^{NS}+I_{q\bar{q}}^{NS}+N_{F}(I_{qq}^{PS}+I_{q\bar{q}}^{PS}))\ast f_{q}^{S}\] \[\equiv f_{q}^{S}-I_{qq}^{S}\ast f_{q}^{S}\,, \tag{51}\]
where we follow Eq. (38) to use '\(\ast\)' as the shorthand notation for the product in Eq. (37). Up to order \(\alpha_{s}\), \(I_{qq}^{PS}=I_{q\bar{q}}^{PS}=I_{q\bar{q}}^{NS}=0\), and thus \(I^{NS}=I_{qq}^{S}=I_{qq}^{NS}\) up to this order. Here
\[f_{i}^{NS}=N_{F}(f_{q_{i}}+f_{\bar{q}_{i}})-\sum_{k}(f_{q_{k}}+f_{\bar{q}_{k}})\,, \tag{52}\]
is the non-singlet PDF distribution and
\[f_{q}^{S}=\sum_{i}f_{q_{i}}+f_{\bar{q}_{i}}\,,\quad f_{g}^{S}=f_{g}\,, \tag{53}\]
are the singlet distributions. They satisfy the same DGLAP evolutions as in Eq. (46) and Eq. (47) after replacing \(\odot\) by the ordinary product, for the non-singlet and the singlet PDFs, respectively.
It is immediately realized that
\[\frac{d}{d\ln\mu^{2}}I^{NS}(N,u)=P_{NS}^{+}\odot I^{NS}(u)-I^{NS}\ast P_{NS}^{+}\,, \tag{54}\]
and
\[\frac{d}{d\ln\mu^{2}}I_{ij}^{S}(N,u)=P_{ik}^{S}\odot I_{kj}^{S}(u)-I_{ik}^{S} \ast P_{kj}^{S}\,. \tag{55}\]
Here \(i,j=q,g\), and summation over repeated indices is assumed.
## V Matching coefficients at NLO and the NLL resummation
In this section, we calculate the cumulant cross section
\[\Sigma_{N}(Q^{2},\theta)=\sum_{i=q,g}\hat{\sigma_{i}}\odot f_{i, \text{EEC}}\,. \tag{56}\]
to NLO in \(\alpha_{s}\) in the small \(\theta\) limit. The \(\odot\)-product follows Eq. (28). The \(\mathcal{O}(\alpha_{s})\) calculation allows us to realize the NLL resummation for \(\Sigma_{N}(Q^{2},\theta)\), which in turn will allow us to predict the complete \(\alpha_{s}^{2}\) distribution \(\frac{d}{d\theta^{2}}\Sigma_{N}(Q^{2},\theta)\) when \(\theta\) is small.
### NLO Hard Function
For the NLL resummation, we need the DIS partonic cross section at NLO. The NLO partonic cross section \(\hat{\sigma}(z,Q^{2})\) is well-known [51; 52; 53] and we present the results in the Appendix. Here we supply the cross section in the Mellin space, which can be written as
\[\hat{\sigma}(N,Q^{2})=\frac{4\pi\alpha^{2}}{Q^{4}}\sum_{i=-N_{F}}^{N_{F}}\sum_{c=q,g}\sum_{\lambda=T,L}e_{q_{i}}^{2}f_{\lambda}\,\hat{\sigma}_{c,\lambda}(N)\,, \tag{57}\]
where
\[\hat{\sigma}_{c,\lambda}(N)=\sum_{n=0}\left(\frac{\alpha_{s}}{2 \pi}\right)^{n}\hat{\sigma}_{c,\lambda}^{(n)}(N)\,, \tag{58}\]
in which at LO
\[\hat{\sigma}_{q,T}^{(0)}=1\,,\quad\hat{\sigma}_{q,L}^{(0)}= \hat{\sigma}_{g,T}^{(0)}=\hat{\sigma}_{g,L}^{(0)}=0\,. \tag{59}\]
To obtain compact results for \(\hat{\sigma}_{c,\lambda}^{(1)}(N)\) at NLO, we introduce the \(S_{\pm m}\) and \(S_{\pm m_{1},m_{2},\ldots}\) functions [49; 50] defined as
\[S_{\pm m}(N)=\sum_{i=1}^{N}\frac{(\pm 1)^{i}}{i^{m}}\,, \tag{60}\]
and
\[S_{\pm m_{1},m_{2},\ldots,m_{k}}(N)=\sum_{i=1}^{N}\frac{(\pm 1)^{i}}{i^{m_{1}}}S _{m_{2},\ldots,m_{k}}(i)\,, \tag{61}\]
and we introduce the shift operators \(\hat{N}_{\pm k}\), defined by \(\hat{N}_{\pm k}S_{\vec{m}}(N)=S_{\vec{m}}(N\pm k)\), which raise/lower the argument by \(k\). We abbreviate \(S_{\vec{m}}(N)=S_{\vec{m}}\). Some useful formulae are presented in the Appendix.
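For integer \(N\), the (nested) harmonic sums can be evaluated directly from Eqs. (60)-(61). A minimal recursive sketch, provided here only for illustration:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(N, *m):
    """Nested harmonic sum S_{m1,m2,...}(N) of Eqs. (60)-(61); a negative
    index m1 encodes the (-1)^i sign, i.e. S(N, -2) = S_{-2}(N)."""
    if N <= 0:
        return 0.0
    m1, rest = m[0], m[1:]
    sign = -1.0 if m1 < 0 else 1.0
    return sum(sign**i / i**abs(m1) * (S(i, *rest) if rest else 1.0)
               for i in range(1, N + 1))

# S(N, 1) is the harmonic number S_1(N); the shift operators simply change
# the argument, e.g. hat-N_+ S_1 corresponds to S(N + 1, 1).
```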
We thus find the quark contribution to \(\hat{\sigma}^{(1)}\) reads
\[\hat{\sigma}_{q,L}^{(1)}=-C_{F}(1-\hat{N}_{+})S_{1}\,, \tag{62}\]
for the longitudinal part, where we have used \(1=(1-z)\sum_{i=0}^{\infty}z^{i}\) wherever necessary to get the results, and
\[\hat{\sigma}_{q,T}^{(1)}=C_{F}\Bigg{\{}\left(\frac{3}{2}-(\hat{N}_{-}+\hat{N}_{+})S_{1}\right)\ln\frac{Q^{2}}{\mu^{2}}\] \[+(\hat{N}_{-}+\hat{N}_{+})(S_{1,1}-S_{2})+\frac{\pi^{2}}{3}\] \[+\frac{3}{2}(\hat{N}_{-})S_{1}-3(\hat{N}_{-}-1)S_{1}-\left(\frac{9}{2}+\frac{\pi^{2}}{3}\right)\Bigg{\}} \tag{63}\]
The gluon channel is given by
\[\hat{\sigma}_{g,L}^{(1)}=-2T_{R}(1-2\hat{N}_{+}+\hat{N}_{+2})S_{1}\,, \tag{64}\]
and
\[\hat{\sigma}_{g,T}^{(1)}=\,T_{R}\Bigg{\{}(-\hat{N}_{-}+3-4\hat{N}_{+}+2\hat{N}_{+2})S_{1}\ln\frac{Q^{2}}{\mu^{2}}\] \[+(\hat{N}_{-}-3+4\hat{N}_{+}-2\hat{N}_{+2})(S_{1,1}-S_{2})\] \[+(-5+\hat{N}_{-}+8\hat{N}_{+}-4\hat{N}_{+2})S_{1}\Bigg{\}}\,, \tag{65}\]
where a factor of \(1/2\) has been multiplied into the gluon channel to match with the flavor summation in Eq. (57).
### NLO Matching Coefficient for \(I_{ij}\)
The matching coefficients \(I_{ij}\) in Eq. (37) can be obtained by calculating the difference between the NEEC defined in Eq. (22), (23) and the collinear PDF in Eq. (34), using the SCET Feynman rules. To perform the matching, the external hadronic states \(|P\rangle\) and \(|X\rangle\) can be replaced by the partonic degrees of freedom, using on-shell quarks and gluons. In dimensional regularization, the higher-order corrections to Eq. (34) are scaleless and vanish identically. Therefore, the \(I_{ij}\) is determined by calculating the matrix elements in Eq. (22) and (23). At NLO, this results in evaluating phase space integrals of the form
\[f_{i,\text{EEC}}=-P^{+}\int dzz^{N-1}\int d\xi\,\delta((\xi-z)P ^{+}-g^{+})\] \[\times\int\frac{d^{d}g}{(2\pi)^{d-1}}\delta(g^{2})\,\left(1-\frac{ z}{\xi}\right)\xi\,\Theta(\theta_{g}-\theta)\] \[\times(8\pi\alpha_{s})\mu^{2\epsilon}\frac{1-\frac{z}{\xi}}{g_{t}^ {2}}P_{ij}^{(0)}\left(\frac{z}{\xi},\epsilon\right)f_{j}(\xi)\,, \tag{66}\]
where \(g^{\mu}\) is the momentum of the detected parton, and \(g_{t}\) is its transverse component. \(\xi\) is the momentum fraction carried by the incoming parton. Here \(P_{ij}^{(0)}\) are the \(\mathcal{O}(\alpha_{s})\) splitting kernels, which are
\[P_{qq}^{(0)}(z,\epsilon) =C_{F}\left(\frac{1+z^{2}}{1-z}-\epsilon(1-z)\right)\,,\] \[P_{gq}^{(0)}(z,\epsilon) =C_{F}\left(\frac{1+(1-z)^{2}}{z}-\epsilon z\right)\,,\] \[P_{qg}^{(0)}(z,\epsilon) =T_{R}\left(z^{2}+(1-z)^{2}-2\epsilon z(1-z)\right)\,,\] \[P_{gg}^{(0)}(z,\epsilon) =2C_{A}\left(\frac{z}{1-z}+\frac{1-z}{z}+z(1-z)\right) \tag{67}\]
To evaluate the integral, we parameterize the phase space as
\[\frac{d^{d}g}{(2\pi)^{d-1}}\delta(g^{2})=\frac{1}{16\pi^{2}}\frac{(4\pi)^{\epsilon}}{\Gamma(1-\epsilon)}\frac{dg^{+}}{g^{+}}\left(\frac{g^{+}}{2}\right)^{2-2\epsilon}d\theta_{g}^{2}\,\theta_{g}^{-2\epsilon}\,, \tag{68}\]
where we have used \(g_{t}=\theta_{g}\frac{g^{+}}{2}\). We thus find the NLO result of Eq. (37) is
\[f_{i,\text{EEC}}=f_{i}(N)-\left(\delta_{ij}+\frac{\alpha_{s}}{2\pi}\,I^{(1)}_{ij}(N)\right)f_{j}(N+1)\,, \tag{69}\]
where the un-renormalized NLO matching coefficient is
\[I^{(1)}_{ij} = \frac{1}{\epsilon}\frac{1}{\Gamma(1-\epsilon)}\left(\frac{4\pi\mu^{2}}{\left(\frac{Q\theta}{2u}\right)^{2}}\right)^{\epsilon} \tag{70}\] \[\times \int dzz^{N-1}\,\left(\frac{z}{1-z}\right)^{2\epsilon}(1-z)\,P^{(0)}_{ij}\left(z,\epsilon\right)\,.\]
Plugging the splitting functions in Eq. (67), we find the NLO unrenormalized matching coefficients
\[I^{(1)}_{ij,un} = S_{\epsilon}\left(\frac{1}{\epsilon}+\ln\frac{\mu^{2}}{\frac{Q^ {2}}{4u^{2}}\theta^{2}}\right)\left[P^{(0)}_{ij}(N)-P^{(0)}_{ij}(N+1)\right] \tag{71}\] \[+ d^{(1)}_{ij}(N)-d^{(1)}_{ij}(N+1)\,,\]
where the angular factor \(S_{\epsilon}=\frac{(4\pi)^{\epsilon}}{\Gamma(1-\epsilon)}\). \(P^{(0)}_{ij}(N)\) are the \(\mathcal{O}(\alpha_{s})\) splitting functions in Mellin space, which are
\[P^{(0)}_{qq}(N) = C_{F}\left(\frac{3}{2}-(\hat{N}_{+}+\hat{N}_{-})S_{1}\right)\,,\] \[P^{(0)}_{gq}(N) = C_{F}(-2\hat{N}_{-2}+4\hat{N}_{-}+\hat{N}_{+}-3)S_{1}\,,\] \[P^{(0)}_{qg}(N) = T_{R}(-\hat{N}_{-}-4\hat{N}_{+}+2\hat{N}_{+2}+3)S_{1}\,,\] \[P^{(0)}_{gg}(N) = 2C_{A}\left[-\hat{N}_{-2}+2(\hat{N}_{-}+\hat{N}_{+})-\hat{N}_{+2}-3\right]S_{1}+\frac{\beta_{0}}{2}\,, \tag{72}\]
where \(\beta_{0}=\frac{11}{3}C_{A}-\frac{4}{3}N_{F}T_{R}\). The splitting functions in the \(z\)-space are well-known and can be found in the Appendix.
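Translating Eq. (72) into code is mechanical once the shift operators are read as argument shifts of \(S_{1}\). A sketch follows, with \(N_{F}=5\) taken as an assumed input; the final assertion cross-checks the \(gq\) moment against its well-known closed form \(C_{F}(N^{2}+N+2)/(N(N^{2}-1))\).

```python
CF, CA, TR, NF = 4.0 / 3.0, 3.0, 0.5, 5
beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * NF * TR

def S1(N):
    # harmonic number S_1(N), Eq. (60); S1(0) = 0
    return sum(1.0 / i for i in range(1, N + 1))

# Eq. (72): every hat-N_{+-k} shifts the argument of S_1 by +-k
def P0_qq(N): return CF * (1.5 - (S1(N + 1) + S1(N - 1)))
def P0_gq(N): return CF * (-2 * S1(N - 2) + 4 * S1(N - 1) + S1(N + 1) - 3 * S1(N))
def P0_qg(N): return TR * (-S1(N - 1) - 4 * S1(N + 1) + 2 * S1(N + 2) + 3 * S1(N))
def P0_gg(N): return 2 * CA * (-S1(N - 2) + 2 * (S1(N - 1) + S1(N + 1))
                               - S1(N + 2) - 3 * S1(N)) + beta0 / 2

assert abs(P0_gq(4) - CF * (16 + 4 + 2) / (4 * 15)) < 1e-12
```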
The NLO \(\theta\)-independent constant terms are found to take the general form
\[d^{(1)}_{ij}(z)=2p^{(0),0}_{ij}(z)\ln\frac{z}{1-z}+p^{(0),1}_{ij }(z)\,, \tag{73}\]
and
\[d^{(1)}_{ij}(N)=\int dzz^{N-1}d^{(1)}_{ij}(z)\,, \tag{74}\]
where \(p^{(0),k}_{ij}(z)\) are the coefficients of \(\epsilon^{k}\), \(k=0,1\), in the splitting kernels \(P^{(0)}_{ij}(z,\epsilon)\) of Eq. (67). All divergences for \(z\to 1\) are understood in the sense of \(+\)-distributions.
Evaluating the Mellin integral, we find
\[d^{(1)}_{qq}(N) = 2C_{F}\left[(\hat{N}_{+}+\hat{N}_{-})(S_{2}-S_{1,1})-\frac{\pi^{2 }}{3}\right.\] \[\left.+(\hat{N}_{+}+\hat{N}_{-}-2)\frac{S_{1}}{2}\right]\,,\] \[d^{(1)}_{gq}(N) = 2C_{F}\Bigg{[}(2\hat{N}_{-2}-4\hat{N}_{-}+3-\hat{N}_{+})(S_{2}-S _{1,1})\] \[\left.+(1-\hat{N}_{+})\frac{S_{1}}{2}\right],\] \[d^{(1)}_{qg}(N) = 2T_{R}\Bigg{[}(\hat{N}_{-}-3+4\hat{N}_{+}-2\hat{N}_{+2})(S_{2}- S_{1,1}) \tag{75}\] \[+(1-2\hat{N}_{+}+\hat{N}_{+2})S_{1}\Bigg{]}\,,\] \[d^{(1)}_{gg}(N) = 4C_{A}\left[\Big{(}3-2(\hat{N}_{+}+\hat{N}_{-})\right.\] \[\left.+(\hat{N}_{+2}+\hat{N}_{-2})\Big{)}\big{(}S_{2}-S_{1,1} \big{)}-\frac{\pi^{2}}{6}\right]\,.\]
The NLO renormalized matching coefficient in Eq. (37) is then
\[I_{ij}=\delta_{ij}+\frac{\alpha_{s}}{2\pi}\left[-\ln\frac{Q\theta }{2u\mu}\Big{(}2P^{(0)}_{ij}(N)-2P^{(0)}_{ij}(N+1)\Big{)}\right. \tag{76}\] \[\left.+d^{(1)}_{ij}(N)-d^{(1)}_{ij}(N+1)\right]\,.\]
The NLO calculation explicitly verifies the evolution equation derived via the consistency condition in Section IV. From the calculation, we can also read off the singlet and the non-singlet terms introduced in Section IV, which are
\[I^{NS}=I^{S}_{qq}=I_{qq}\,,\qquad I^{S}_{gg}=I_{gg}\,,\] \[I^{S}_{qg}=2N_{F}I_{qg}\,,\qquad I^{S}_{gq}=I_{gq}\,. \tag{77}\]
### NLL Resummation for \(\Sigma_{N}\)
When \(\alpha_{s}\ln\theta^{2}\sim 1\), the logarithmic terms are large and should be resummed to all orders. The NLO calculations in the previous section allow us to realize the NLL resummation for the NEEC, namely the resummation of the \(\alpha_{s}^{k}\ln^{k}\theta^{2}\) and \(\alpha_{s}^{k}\ln^{k-1}\theta^{2}\) series.
One way to perform the resummation is to evaluate the partonic cross section \(\hat{\sigma}_{c,\lambda}\) and the PDFs \(f_{i}\) at scale \(\mu\sim Q\), and evolve the matching coefficient \(I\) from \(\mu_{0}\sim Q\theta\) to \(\mu\sim Q\) following the resummation equation in Eq. (54) and Eq. (55) in Section IV.
Equivalently, we can also set the scales for both the collinear PDFs \(f^{NS}_{q_{i}}\), \(f^{S}_{j}\), and the matching coefficients \(I^{NS}\), \(I^{S}_{ij}\) at \(\mu_{0}\sim Q\theta\), to evaluate the NEEC, and then evolve the NEEC from \(\mu_{0}\) to \(\mu\) to realize the resummation. In the \(z\)-space, the evolution is identical to that of the collinear PDFs and can be achieved numerically with HOPPET [47] or APFEL++ [48]. In the Mellin space, the resummation follows the evolution equations in Eq. (46) and (47) for both \(F_{q_{i}}^{NS}\) and \(F_{i}^{S}\). We solve the equations iteratively in Appendix A, and find that the NLL NEEC takes the compact analytic form
\[f_{i,\rm{BEC}}(\mu)=f_{i}(N,\mu)-\mathcal{D}_{ik}^{N}(\mu,\mu_{0 })\,I_{kj}(u\mu_{0})f_{j}(N+1,\mu_{0})\] \[-\frac{\alpha_{s}(\mu_{0})}{2\pi}\mathcal{N}_{ik}[2P_{kj}^{(0)}( N)-2P_{kj}^{(0)}(N+1)]f_{j}(N+1,\mu_{0})\,. \tag{78}\]
The resummed form holds for both the singlet \(F^{S}\) and non-singlet distributions \(F^{NS}\). Here \(I_{ij}(u\mu_{0})\) is the NLO matching coefficient in Eq. (76) evaluated at scale \(\mu_{0}\), and the evolution factor \(\mathcal{D}_{ij}^{N}(\mu,\mu_{0})\) is nothing but the DGLAP evolution in the Mellin space,
\[\mathcal{D}_{ij}^{N}(\mu,\mu_{0})=\exp\left[\int_{\mu_{0}}^{\mu}d\ln\mu^{2}P( N,\mu)\right]_{ij}\,. \tag{79}\]
To realize the NLL resummation, we need \(P_{ij}(N)\) at LO and NLO within the evolution factor \(\mathcal{D}_{ij}^{N}\). The LO results have been presented in Eq. (72), and the NLO moments can be found in Ref. [49; 50] and are also given in the Appendix. Note that we need to divide the \(\gamma_{ij}^{(1)}\)'s therein by a factor \((-4)\) to get \(P_{ij}^{(1)}(N)\) in our normalization.
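At LO, \(P(N,\mu)=\frac{\alpha_{s}(\mu)}{2\pi}P^{(0)}(N)\), so the matrices at different scales commute and the evolution factor of Eq. (79) collapses to a single matrix exponential. A sketch of this LO evaluation follows; the one-loop coupling and its \(\Lambda_{\rm QCD}\) value are assumed inputs for illustration.

```python
import numpy as np
from scipy.linalg import expm

def alpha_s_1loop(mu, nf=5, Lam=0.226):
    # one-loop coupling; the Lambda_QCD value is an assumed input
    beta0 = 11.0 - 2.0 * nf / 3.0
    return 4.0 * np.pi / (beta0 * np.log(mu**2 / Lam**2))

def D_LO(P0, mu, mu0, nf=5):
    """LO evolution factor of Eq. (79) for the 2x2 singlet system.
    Uses int_{mu0}^{mu} dln(mu'^2) alpha_s/(2 pi)
        = (2/beta0) ln(alpha_s(mu0)/alpha_s(mu)) at one loop."""
    beta0 = 11.0 - 2.0 * nf / 3.0
    t = (2.0 / beta0) * np.log(alpha_s_1loop(mu0, nf) / alpha_s_1loop(mu, nf))
    return expm(np.asarray(P0, dtype=float) * t)
```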
The correction to the DGLAP evolution starts at order \(\alpha_{s}^{n}L^{n-1}\), in which
\[\mathcal{N}_{ij}=\int_{\mu_{0}}^{\mu}d\ln\mu_{1}^{2}\mathcal{D}_{ik}^{N}(\mu, \mu_{1})\tilde{P}_{kl}(N,\mu_{1})\mathcal{D}_{lj}^{N}(\mu_{1},\mu_{0})\,, \tag{80}\]
originates from the \(-\frac{\alpha_{s}}{2\pi}P\odot\ln u\) term, where the \(\ln u\) comes from the NLO matching coefficient \(I_{ij}\). We note that both \(\mathcal{D}\) and \(\mathcal{N}\) can be integrated analytically using the formulae in Eq. (122) and Eq. (123) of the Appendix. Here we have defined
\[\tilde{P}_{ij}(N)\equiv\int dzz^{N-1}P_{ij}(z)\ln z=\partial_{N}P_{ij}(N)\,. \tag{81}\]
Note that the derivative of the Mellin moment has also appeared in the solution of the small-angle EEC [30]. For the NLL resummation, we need
\[\tilde{P}_{qq}^{(0)}(N) = C_{F}\left((\hat{N}_{+}+\hat{N}_{-})S_{2}-\frac{\pi^{2}}{3} \right)\,,\] \[\tilde{P}_{qg}^{(0)}(N) = T_{R}(\hat{N}_{-}-3+4\hat{N}_{+}-2\hat{N}_{+2})S_{2}\,,\] \[\tilde{P}_{gq}^{(0)}(N) = C_{F}\left(2\hat{N}_{-2}-4\hat{N}_{-}+3-\hat{N}_{+}\right)S_{2}\,,\] \[\tilde{P}_{gg}^{(0)}(N) = 2C_{A}\left(\left[\hat{N}_{+2}+\hat{N}_{-2}\right.\right. \tag{82}\] \[\left.\left.-2(\hat{N}_{+}+\hat{N}_{-})+3\right]S_{2}-\frac{\pi^ {2}}{6}\right)\,.\]
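The first of these can be checked numerically: under the \(\ln z\) weight the \(\delta(1-z)\) piece of the splitting function drops out and the plus-prescription needs no endpoint subtraction, since the test function vanishes at \(z=1\). A minimal sketch of the check (provided for illustration only):

```python
import numpy as np
from scipy.integrate import quad

CF = 4.0 / 3.0

def Ptilde_qq(N):
    # Eq. (81) for the qq channel: integrate z^(N-1) ln(z) times the
    # regular part of P_qq(z); the integrand is finite as z -> 1
    val, _ = quad(lambda z: z**(N - 1) * np.log(z) * CF * (1 + z * z) / (1 - z),
                  0.0, 1.0)
    return val

def S2(N):
    return sum(1.0 / i**2 for i in range(1, N + 1))

N = 2
analytic = CF * (S2(N + 1) + S2(N - 1) - np.pi**2 / 3)  # Eq. (82)
assert abs(Ptilde_qq(N) - analytic) < 1e-6
```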
If we take the evolution of the \(f_{j}(N+1,\mu_{0})=\mathcal{D}_{jk}^{-1}(N+1)f_{k}(N+1,\mu)\) into account, we can derive the evolution for the matching coefficient \(I_{ij}\) at NLL, which is
\[I_{ij}(u\mu)=\mathcal{D}_{ik}^{N}(\mu,\mu_{0})\,I_{kl}(u\mu_{0}) \mathcal{D}_{lj}^{N}(\mu_{0},\mu)\] \[+\frac{\alpha_{s}(\mu_{0})}{2\pi}\mathcal{N}_{ik}\left[2P_{kl}^{(0 )}(N)-2P_{kl}^{(0)}(N+1)\right]\mathcal{D}_{lj}^{N+1}(\mu_{0},\mu)\,. \tag{83}\]
We note that the analytic form for NLL we derived can be straightforwardly generalized to higher logarithmic accuracy.
In practice, to implement the resummation, we use the fact that \(\hat{\sigma}_{q_{i}}\) is identical to \(\hat{\sigma}_{\bar{q}_{i}}\) to recast the cross section as
\[\Sigma_{N}=\frac{4\pi\alpha^{2}}{Q^{4}}\sum_{i=1}^{N_{F}}e_{q_{i}}^{2}\left(\hat{\sigma}_{q}\odot\frac{1}{N_{F}}(F_{q_{i}}^{NS}+F_{q}^{S})+2\hat{\sigma}_{g}\odot F_{g}^{S}\right)\] \[=\frac{1}{N_{F}}\frac{4\pi\alpha^{2}}{Q^{4}}\sum_{i=1}^{N_{F}}e_{q_{i}}^{2}\hat{\sigma}_{q}\odot F_{q_{i}}^{NS}+\hat{\sigma}_{q}^{S}\odot F_{q}^{S}+\hat{\sigma}_{g}^{S}\odot F_{g}^{S}\,. \tag{84}\]
Here we introduced \(\hat{\sigma}_{q}^{S}=\frac{4\pi\alpha^{2}}{Q^{4}}\frac{1}{N_{F}}\sum_{i=1}^{N_{F }}e_{q_{i}}^{2}\hat{\sigma}_{q}\) and \(\hat{\sigma}_{g}^{S}=\frac{4\pi\alpha^{2}}{Q^{4}}\frac{1}{N_{F}}\sum_{i=1}^{N_{ F}}e_{q_{i}}^{2}(2N_{F}\hat{\sigma}_{g})\). Inserting the resummed formula Eq. (78) for \(F^{NS}\) and \(F^{S}\), we realize the NLL resummation of \(\Sigma_{N}(Q^{2},\theta)\).
### \(d\Sigma_{N}/d\theta^{2}\)-distribution at \(\mathcal{O}(\alpha_{s}^{2})\)
The NLL resummation for \(\Sigma_{N}\) allows us to predict the complete \(d\Sigma_{N}/d\theta^{2}\) spectrum up to \(\alpha_{s}^{2}\) order by expanding the resummation results in terms of the coupling \(\alpha_{s}\). Here we list the results.
The distribution can be written as
\[\frac{1}{\sigma_{0}}\frac{d\Sigma_{N}}{d\ln\theta^{2}}=\left(\frac{\alpha_{s}}{2 \pi}\Sigma_{N,j}^{(1)}+\frac{\alpha_{s}^{2}}{4\pi^{2}}\sum_{\vec{i}}\Sigma_{N,j }^{\vec{i}}\right)f_{j}(N+1)\,, \tag{85}\]
where \(\sigma_{0}=\frac{4\pi\alpha^{2}e_{d}^{2}}{Q^{4}}\). Here \(\vec{i}=(i_{1},i_{2},i_{3},i_{4})\) satisfies \(i_{1}+i_{2}+i_{3}+i_{4}=2\) and \(i_{k}\geq 0\).
At \(\mathcal{O}(\alpha_{s})\), the distribution is given by
\[\Sigma_{N,j}^{(1)}=P_{qj}^{(0)}(N)-P_{qj}^{(0)}(N+1)\,. \tag{86}\]
At \(\mathcal{O}(\alpha_{s}^{2})\), we have contributions coming from the \(\alpha_{s}\) running, which are
\[\Sigma_{N,j}^{(0,0,1,1)}=-\ln\frac{\theta^{2}Q^{2}}{4\mu^{2}}\frac{\beta_{0}}{ 2}(P_{qj}^{(0)}(N)-P_{qj}^{(0)}(N+1))\,, \tag{87}\]
and
\[\Sigma_{N,j}^{(0,1,0,1)}=\frac{\beta_{0}}{2}\left(d_{qj}^{(1)}(N)-d_{qj}^{(1)}(N+ 1)\right). \tag{88}\]
In addition, we have
\[\Sigma_{N,j}^{(1,0,1,0)}=\sigma_{i}^{(1)}(N)(P_{ij}^{(0)}(N)-P_{ij}^{(0)}(N+1))\,, \tag{89}\]
which is essentially the product of the \(\mathcal{O}(\alpha_{s})\) hard function in Section V.1 and the \(\mathcal{O}(\alpha_{s})\) NEEC in Section V.2.
The 1-loop DGLAP evolution contributes as
\[\Sigma^{(0,0,2,0)}_{N,j}=\left(P^{(1)}_{qj}(N)-P^{(1)}_{qj}(N+1)\right). \tag{90}\]
Here, the moment of the NLO splitting function can be found in Ref. [49; 50] and is also provided in the Appendix.
The product of the LO DGLAP and the NLO matching coefficient contributes to both the double and single logs. The double logarithmic term reads
\[\Sigma^{(0,1,1,0)}_{N,a,j} = -\ln\frac{\theta^{2}Q^{2}}{4\mu^{2}}\Big{[}P^{(0)}_{qk}(N)(P^{(0) }_{kj}(N)-P^{(0)}_{kj}(N+1)) \tag{91}\] \[- (P^{(0)}_{qk}(N)-P^{(0)}_{qk}(N+1))P^{(0)}_{kj}(N+1)\Big{]}\,,\]
while the single log contribution is
\[\Sigma^{(0,1,1,0)}_{N,b,j} = P^{(0)}_{qk}(N)(d^{(1)}_{kj}(N)-d^{(1)}_{kj}(N+1)) \tag{92}\] \[-(d^{(1)}_{qk}(N)-d^{(1)}_{qk}(N+1))P^{(0)}_{kj}(N+1)\]
There is one additional term that originates from the \(\odot\) structure, which is a single-log term involving \(\tilde{P}^{(0)}\):
\[\Sigma^{(0,1,1,0)}_{N,c,j}=2\tilde{P}^{(0)}_{qk}(N)(P^{(0)}_{kj}(N)-P^{(0)}_{ kj}(N+1))\,, \tag{93}\]
where \(\tilde{P}^{(0)}_{ij}(N)\) is defined in Eq. (81) and Eq. (82).
## VI Numerical results
In this section, we examine the numerical consequences of the NLL resummation. We use the kinematics \(E_{P}=275\,\mathrm{GeV}\) for the incoming proton and \(E_{l}=18\,\mathrm{GeV}\) for the electron. We work in the Breit frame and choose \(N=3\), \(Q^{2}=100\,\mathrm{GeV}^{2}\), and fix \(\mu=Q\) and \(\mu_{0}=\frac{Q\theta}{2}\) when implementing the resummation.
First, we validate the factorization formalism by comparing the singular \(\ln\theta\) contributions predicted by the factorization theorem with the complete \(\alpha_{s}\) and \(\alpha_{s}^{2}\) calculations of the distribution \(d\Sigma_{N}/dy\), where \(y=\ln(\tan\frac{\theta}{2})\). As \(\theta\) (\(y\)) becomes small, the \(\ln\theta\) terms will dominate the \(d\Sigma_{N}/dy\) distribution, and the singular contribution should coincide with the full calculation.
We perform this comparison in fig. 2. The full fixed order calculations (in dots) are obtained numerically using nlojet++ [54] and the log terms have been calculated in Section V.4. From fig. 2, we observe very good agreement in the small \(y\) region between the complete calculation and the singular terms predicted by factorization and resummation, in both magnitude and shape. The comparison serves as a non-trivial validation of the factorization theorem derived in this work.
Now we present the numerical results for the resummation in fig. 3. The analytic formula Eq. (78) is checked against the numerical solution of Eq. (42) using Euler's method, and perfect agreement is found. We further matched the resummation to the fixed order calculation by removing the singular terms that have been resummed from the fixed order cross section in the small \(y\) region, and replacing them with the NLL results. In fig. 3, we show the NLL+\(\alpha_{s}\) and NLL+\(\alpha_{s}^{2}\) results as the orange square and red circular dots, respectively. Compared with the fixed order results in fig. 2, we see that the resummation effects are significant in the small angle region, enhancing the distribution by several times with respect to the \(\alpha_{s}^{2}\) calculation for \(y\) around \(-2\). It is also interesting to point out that, as is evident in fig. 3, the distribution at small angles is not suppressed, due to the absence of the Sudakov factor. This feature of the NEEC is very different from the TMD PDFs, for which the small transverse momentum region is exponentially suppressed by the Sudakov factor.
Figure 3: Comparison of the NLL \(+\alpha_{s}\), NLL \(+\alpha_{s}^{2}\) and the Pythia simulation at partonic level. Reasonable agreement is found in the small \(\theta\) (\(y\)) region (near-side) between the analytic NLL resummation and the Pythia simulation. We stop the resummation at \(y=-2\), after which one probes the non-perturbative region. Additional TMD resummation is required for \(\theta\to\pi\) (large \(y\), away-side).
Figure 2: A comparison between the \(\ln\theta\) singular contributions with the full fixed order calculations. Very good agreements are found for small values of \(\theta\) (\(y\)).
When \(y<-2\), for the kinematics we chose, \(\frac{Q\theta}{2}\) is comparable to \(\Lambda_{\rm QCD}\) and we start to probe the non-perturbative region. The perturbative calculation is no longer valid in this regime, and future experimental measurements at HERA or the EIC are required to understand the non-perturbative behavior of the NEEC \(f_{\rm EEC}(\theta)\) in this range, which in turn can teach us about the nucleon intrinsic transverse dynamics, as suggested by Ref. [1].
We further compare the NLL+fixed order distributions with the Pythia 8.2 simulation [55], which implements the leading logarithmic (LL) resummation. For this comparison, we have turned off hadronization in Pythia. In the small \(\theta\) (\(y\)) region (near-side region), the analytic NLL resummation agrees reasonably well with the partonic Pythia simulation. For \(0<y<1.0\) (\(\frac{\pi}{2}<\theta\lesssim 0.8\pi\)), the \({\rm NLL}+\alpha_{s}\) agrees better with Pythia and both are lower than the \({\rm NLL}+\alpha_{s}^{2}\) prediction, due to the missing higher-order corrections in Pythia and the \({\rm NLL}+\alpha_{s}\). For larger values of \(y\), where \(\theta\) approaches \(\pi\) (away-side region), the fixed order calculations differ substantially from the Pythia simulation. In this region, the detected particles are almost back-to-back against the incoming proton. Now, the distribution becomes highly sensitive to the soft radiation (as well as the initial-state collinear radiation), and we are essentially probing the small transverse momentum and therefore the TMD PDF. Therefore, in this region, additional TMD resummation is required for reliable predictions, which we leave for future studies.
## VII Summary
In this work, we demonstrate the factorization theorem for the nucleon energy-energy correlator (NEEC) measurement in lepton-ion collisions proposed in [1]. Our main results are presented in Eq. (25), where the energy-weighted cross section \(\Sigma_{N}(Q^{2},\theta)\) is factorized into the partonic DIS cross section and the NEEC \(f_{\rm EEC}(z,\theta)\). The operator definition of the NEEC is given by Eq. (22) and Eq. (23). The factorized form in the Mellin space can be found in Eq. (26). The factorization theorem has a similar structure to the DIS cross section, except that the collinear PDF is replaced by the NEEC. Moreover, the factorization theorem can be easily generalized to hadron-hadron collisions at the Large Hadron Collider (LHC) by appropriately substituting the PDF with the NEEC when similar measurements are performed. For instance, if the proton NEEC is measured in prompt photon production in \(pA\) collisions, \(pA\to\gamma+X\), then the factorization is the same as for inclusive photon production with the replacement of the proton PDF \(f_{i/P}(z)\) by its corresponding NEEC \(f_{i,{\rm EEC}}(z,\theta)\).
When \(Q\theta\gg\Lambda_{\rm QCD}\), we showed in Eq. (36) that the \(f_{\rm EEC}\) can be further matched onto the collinear PDF, with perturbatively calculable matching coefficients determined by the QCD splitting functions. In this region, the factorization formalism Eq. (36) suggests that \(df_{\rm EEC}/d\theta\) describes the \(\theta\) correlation between the outgoing parton energy density and the initial incoming parton density. The factorization theorem is validated by the excellent agreement between the \({\cal O}(\alpha_{s}^{2})\) prediction of the factorization and the complete NLO calculation of \(d\Sigma_{N}(Q^{2},\theta)/d\theta^{2}\). The next-to-leading logarithmic (NLL) resummation has also been carried out for the NEEC. In the momentum space, the NEEC evolves similarly to the collinear PDFs. In this work, we focused more on the Mellin space evolution of the NEEC. We obtained a fully analytic solution to the evolution equation in Eq. (78) and supplied all the necessary ingredients for the NLL resummation. The analytic formula can be easily extended to higher logarithmic accuracy. The numerical evaluation of the NLL resummation is found to agree with the Pythia simulation reasonably well. Furthermore, the NLL calculation also supports the recent idea of using the NEEC to look for gluon saturation in lepton-ion collisions [36], where the \(\theta\) distribution predicted by the collinear factorization is not suppressed in the small \(\theta\) region, contrary to the expectation of the color glass condensate (CGC) effective framework. The NNLO calculation of the \(f_{\rm EEC}\) in the perturbative region should be feasible with current computational techniques, which would enable us to perform NNLL resummation for the \(f_{\rm EEC}\). We have not studied non-perturbative effects in this work and we plan to do so in future work. We hope our current work serves as a first step towards the precision measurement of \(f_{\rm EEC}\) and stimulates further theoretical and experimental investigations.
###### Acknowledgements.
We thank Jian-Hui Zhang for the useful discussions. This work is supported by the Natural Science Foundation of China under contract No. 12175016 (H. C. and X. L.), No. 11975200 (H. X. Z.) and No. 12147103 (H. X. Z.).
## Appendix A Solving the RG evolution
In this section, we solve Eq. (42), which can be written as
\[f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu})=f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}\int d\xi\xi^{N-1}P\left(\xi\right)f_{\rm EEC}(N,\ln\frac{Q\theta}{\xi\,u\mu^{\prime}})\,. \tag{A1}\]
For simplicity, we have suppressed the subscripts. The product of the \(P\)'s should be treated as the matrix product.
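Before turning to the iteration, its structure can be previewed on a scalar toy version of Eq. (A1) with a constant kernel, where each iteration adds one more nested integral and the series exponentiates; the code below is an illustrative sketch only, with all names and parameter values invented for the example.

```python
import numpy as np

def picard(P, f0, t0, t1, iters=25, grid=400):
    """Iterate f(t) = f0 + P * int_{t0}^t dt' f(t'), t = ln mu^2,
    a scalar, fixed-kernel caricature of Eq. (A1); converges to
    f(t) = exp(P (t - t0)) f0."""
    ts = np.linspace(t0, t1, grid)
    f = np.full(grid, float(f0))
    for _ in range(iters):
        segs = 0.5 * (f[1:] + f[:-1]) * np.diff(ts)  # trapezoid segments
        f = f0 + P * np.concatenate(([0.0], np.cumsum(segs)))
    return ts, f

ts, f = picard(P=0.3, f0=1.0, t0=0.0, t1=2.0)
assert abs(f[-1] - np.exp(0.3 * 2.0)) < 1e-3
```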
We write the ansatz solution to Eq. (A1) as
\[f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu})=D(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0})\,, \tag{A2}\]
where \(D\) and \(R\) are to be determined and satisfy \(D(\mu_{0},\mu_{0})=1\) and \(R(\mu_{0},\mu_{0})=0\).
We plug the ansatz back into Eq. (A1) to find
\[D(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0}) = f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P(N,\mu^{\prime})R(\mu^{\prime},\mu_{0}) \tag{A3}\] \[+ \int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}\int d\xi\xi^{N-1}P\left(\xi\right)\,D(\mu^{\prime},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{\xi u\mu_{0}})\,.\]
To realize the NLL resummation, we use the NLO NEEC as the initial input at \(\mu_{0}\), and manipulate Eq. (A3) as
\[D(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0})=f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\] \[\qquad\qquad\qquad+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P\left(N,\mu^{\prime}\right)\,\left[D(\mu^{\prime},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu^{\prime},\mu_{0})\right]\] \[\qquad\qquad\qquad-\frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}\tilde{P}\left(N,\mu^{\prime}\right)D(\mu^{\prime},\mu_{0})[2P(N)-2P(N+1)]f(N+1,\mu_{0})\,, \tag{A4}\]
where we have used the property that at NLO, the initial condition satisfies
\[f_{\rm EEC}(N,\ln\frac{Q\theta}{\xi u\mu_{0}})=f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})-\frac{\alpha_{s}(\mu_{0})}{2\pi}\ln\xi[2P(N,\mu_{0})-2P(N+1,\mu_{0})]f(N+1,\mu_{0})\,. \tag{A5}\]
and applied the definition
\[\tilde{P}(N)=\int_{0}^{1}d\xi\xi^{N-1}P(\xi)\ln\xi\,. \tag{A6}\]
Now we repeat the above procedure to replace \(D(\mu^{\prime},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{\xi u\mu_{0}})+R(\mu^{\prime},\mu_{0})\) using Eq. (A3), finding
\[D(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0})=f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P\left(N,\mu^{\prime}\right)\,f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\] \[\qquad\qquad+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P(N,\mu^{\prime})\int_{\mu_{0}}^{\mu^{\prime}}d\ln{\mu^{\prime\prime}}^{2}P(N,\mu^{\prime\prime})R(\mu^{\prime\prime},\mu_{0})\] \[\qquad\qquad+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P\left(N,\mu^{\prime}\right)\int_{\mu_{0}}^{\mu^{\prime}}d\ln{\mu^{\prime\prime}}^{2}P\left(N,\mu^{\prime\prime}\right)D(\mu^{\prime\prime},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\] \[\qquad\qquad-\frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P\left(N,\mu^{\prime}\right)\int_{\mu_{0}}^{\mu^{\prime}}d\ln{\mu^{\prime\prime}}^{2}\tilde{P}\left(N,\mu^{\prime\prime}\right)D(\mu^{\prime\prime},\mu_{0})[2P(N)-2P(N+1)]f(N+1,\mu_{0})\] \[\qquad\qquad-\frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}\tilde{P}\left(N,\mu^{\prime}\right)D(\mu^{\prime},\mu_{0})[2P(N)-2P(N+1)]f(N+1,\mu_{0})\,, \tag{A7}\]
which can be organized as
\[D(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0})\] \[= \int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P\left(N,\mu^{\prime}\right)\,\int_{\mu_{0}}^{\mu^{\prime}}d\ln{\mu^{\prime\prime}}^{2}\,P\left(N,\mu^{\prime\prime}\right)\left[D(\mu^{\prime\prime},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu^{\prime\prime},\mu_{0})\right]\] \[+ \left[1+\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}P\left(N,\mu^{\prime}\right)\right]f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\] \[- \frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln{\mu^{\prime}}^{2}\left[1+\int_{\mu^{\prime}}^{\mu}d\ln{\mu^{\prime\prime}}^{2}P\left(N,\mu^{\prime\prime}\right)\right]\,\tilde{P}\left(N,\mu^{\prime}\right)D(\mu^{\prime},\mu_{0})[2P(N)-2P(N+1)]f(N+1,\mu_{0}) \tag{A8}\]
where in the last line, we have switched the order of the integrations, using
\[\int_{\mu_{0}}^{\mu}d\mu^{\prime}A(\mu^{\prime})\int_{\mu_{0}}^{\mu^{\prime}}d\mu^{\prime\prime}B(\mu^{\prime\prime})=\int_{\mu_{0}}^{\mu}d\mu^{\prime\prime}B(\mu^{\prime\prime})\int_{\mu^{\prime\prime}}^{\mu}d\mu^{\prime}A(\mu^{\prime})\,. \tag{A9}\]
Iterating the procedure, we arrive at
\[D(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0})\] \[- \lim_{n\to\infty}\int_{\mu_{n-1}}^{\mu}d\ln\mu_{n}^{2}P(N,\mu_{n})\cdots\int_{\mu_{1}}^{\mu}d\ln\mu_{2}^{2}P(N,\mu_{2})\left[D(\mu_{1},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu_{1},\mu_{0})\right]\] \[= {\cal D}(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})-\frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln\mu^{\prime 2}{\cal D}(\mu,\mu^{\prime})\tilde{P}(N,\mu^{\prime})D(\mu^{\prime},\mu_{0})[2P(N,\mu_{0})-2P(N+1,\mu_{0})]f(N+1,\mu_{0})\,,\]
where \({\cal D}=\exp\left[\int_{\mu_{0}}^{\mu}d\ln\mu^{\prime 2}P(N,\mu^{\prime})\right]\) is defined in Eq. (79). We note that
\[\lim_{n\to\infty}\left(\frac{\min_{\mu}P(N,\mu)}{n}\right)^{n-1}f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\to 0\] \[< \lim_{n\to\infty}\int_{\mu_{n-1}}^{\mu}d\ln\mu_{n}^{2}P(N,\mu_{n})\cdots\int_{\mu_{1}}^{\mu}d\ln\mu_{2}^{2}P(N,\mu_{2})\left[D(\mu_{1},\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu_{1},\mu_{0})\right]\] \[< \lim_{n\to\infty}\left(\frac{\max_{\mu}P(N,\mu)}{n}\right)^{n-1}f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\to 0\,. \tag{A10}\]
Here we have assumed that the moment of the PDF is bounded and thus the limit vanishes as \(n\to\infty\).
Therefore, we conclude that
\[f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu})=D(\mu,\mu_{0})f_{\rm EEC} (N,\ln\frac{Q\theta}{u\mu_{0}})+R(\mu,\mu_{0})\] \[= {\cal D}(\mu,\mu_{0})f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})- \frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln\mu^{\prime 2}{\cal D}( \mu,\mu^{\prime})\tilde{P}(N,\mu^{\prime})D(\mu^{\prime},\mu_{0})[2P(N,\mu_{0 })-2P(N+1,\mu_{0})]f(N+1,\mu_{0})\,.\]
Since \(f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\) and \(R\) are independent, and the solution should hold for an arbitrary constant in \(f_{\rm EEC}(N,\ln\frac{Q\theta}{u\mu_{0}})\), we can identify
\[D={\cal D}=\exp\left[\int_{\mu_{0}}^{\mu}d\ln\mu^{\prime 2}P(N,\mu^{\prime})\right]\,,\] \[R=-\frac{\alpha_{s}(\mu_{0})}{2\pi}\int_{\mu_{0}}^{\mu}d\ln\mu^{\prime 2}{\cal D}(\mu,\mu^{\prime})\tilde{P}(N,\mu^{\prime}){\cal D}(\mu^{\prime},\mu_{0})[2P(N,\mu_{0})-2P(N+1,\mu_{0})]f(N+1,\mu_{0})\,. \tag{A11}\]
The derivation is applicable to higher logarithmic accuracy by suitably adjusting the initial-condition relation of Eq. (A5) at higher \(\alpha_{s}\) orders.
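To make the structure of this solution concrete, the following minimal Python sketch (our own illustration, not code from this work) evaluates the LL piece of Eq. (A11) for the non-singlet channel, where inserting the LO kernel and one-loop running collapses the exponent to \({\cal D}=(\alpha_{s}(\mu)/\alpha_{s}(\mu_{0}))^{-2P^{(0)}(N)/\beta_{0}}\). The boundary condition \(\alpha_{s}(91.19~{\rm GeV})=0.118\) and \(N_{F}=5\) are assumptions chosen for illustration.

```python
# A minimal sketch (ours, not the paper's code) of the LL piece of Eq. (A11)
# for the non-singlet channel: D = (alpha_s(mu)/alpha_s(mu0))^(-2 P0(N)/beta0).
import math
from scipy.special import digamma

CF, CA, TR, NF = 4.0 / 3.0, 3.0, 0.5, 5
beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * NF * TR
GAMMA_E = 0.5772156649015329

def alpha_s(mu, mu0=91.19, a0=0.118):
    """One-loop running coupling (assumed boundary condition)."""
    return a0 / (1.0 + a0 * beta0 / (4.0 * math.pi) * math.log(mu**2 / mu0**2))

def S1(N):
    """Harmonic sum S_1(N) = psi(N+1) + gamma_E."""
    return digamma(N + 1.0) + GAMMA_E

def P_ns0(N):
    """LO non-singlet Mellin moment, C_F (3/2 - S_1(N+1) - S_1(N-1))."""
    return CF * (1.5 - S1(N + 1.0) - S1(N - 1.0))

def D_LL(N, mu, mu0=91.19):
    """LL evolution factor multiplying f_EEC(N) at the scale mu0."""
    return (alpha_s(mu) / alpha_s(mu0)) ** (-2.0 * P_ns0(N) / beta0)

print(D_LL(N=2.0, mu=1000.0))  # mild suppression of the N = 2 moment
```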
## Appendix B Constants and functions
In this Appendix, we list the QCD color constants and splitting functions that are present in the main text.
In QCD, the running of the strong coupling constant \(\alpha_{s}\) follows
\[\frac{d\alpha_{s}}{d\ln\mu}=\beta[\alpha_{s}]\,, \tag{104}\]
where the \(\beta\)-function can be expanded in terms of \(\alpha_{s}\) as
\[\beta[\alpha_{s}]=-2\alpha_{s}\sum_{n=0}\beta_{n}\left(\frac{\alpha_{s}}{4\pi}\right)^{n+1}\,, \tag{105}\]
with
\[\beta_{0}=\frac{11}{3}C_{A}-\frac{4}{3}N_{F}T_{R}\,,\qquad\beta_{1}= \frac{34}{3}C_{A}^{2}-\frac{20}{3}C_{A}T_{R}N_{F}-4C_{F}T_{R}N_{F}\,. \tag{101}\]
Here \(C_{A}=N_{C}=3\), \(T_{R}=\frac{1}{2}\) and \(C_{F}=\frac{N_{C}^{2}-1}{2N_{C}}\). \(N_{F}\) is the number of active quark flavors.
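As a concrete illustration of the running described above, the sketch below (our own, with an assumed boundary condition \(\alpha_{s}(91.19~{\rm GeV})=0.118\) and \(N_{F}=5\)) integrates the two-loop \(\beta\)-function numerically.

```python
# A minimal sketch (ours) of the two-loop running of alpha_s, integrating
# d alpha_s / d ln mu = beta[alpha_s] with the beta_0, beta_1 given above.
import math
from scipy.integrate import solve_ivp

CA, CF, TR, NF = 3.0, 4.0 / 3.0, 0.5, 5
beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * NF * TR
beta1 = 34.0 / 3.0 * CA**2 - 20.0 / 3.0 * CA * TR * NF - 4.0 * CF * TR * NF

def beta_fn(ln_mu, a):
    # beta[alpha_s] truncated at two loops: -2 a [beta0 (a/4pi) + beta1 (a/4pi)^2]
    x = a / (4.0 * math.pi)
    return -2.0 * a * (beta0 * x + beta1 * x**2)

def alpha_s(mu, mu0=91.19, a0=0.118):
    # integrate in ln(mu); solve_ivp also handles evolution downward in scale
    sol = solve_ivp(beta_fn, (math.log(mu0), math.log(mu)), [a0], rtol=1e-9)
    return sol.y[0, -1]

for mu in (10.0, 100.0, 1000.0):
    print(f"alpha_s({mu} GeV) = {alpha_s(mu):.4f}")
```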
It is useful to note that [56]
\[\frac{\alpha_{s}(\mu)}{2\pi}-\frac{\alpha_{s}(\mu_{0})}{2\pi}=\frac{1}{2}\frac{\alpha_{s}^{2}}{4\pi^{2}}\beta_{0}\ln\frac{\mu_{0}^{2}}{\mu^{2}}+\ldots\,,\qquad\ln\left(\frac{\alpha_{s}(\mu)}{\alpha_{s}(\mu_{0})}\right)=\frac{\beta_{0}}{2}\frac{\alpha_{s}(\mu)}{2\pi}\ln\frac{\mu_{0}^{2}}{\mu^{2}}+\frac{\alpha_{s}^{2}}{32\pi^{2}}\left(2\beta_{1}\ln\frac{\mu_{0}^{2}}{\mu^{2}}-\beta_{0}^{2}\ln^{2}\frac{\mu_{0}^{2}}{\mu^{2}}\right)+\ldots \tag{102}\]
and
\[\int_{\mu_{0}}^{\mu}\left(P_{0}\frac{\alpha_{s}}{2\pi}+P_{1}\frac {\alpha_{s}^{2}}{4\pi^{2}}\right)d\ln\mu^{2}=-\frac{2}{\beta_{0}}\left(P_{0} \ln\frac{\alpha_{s}(\mu)}{\alpha_{s}(\mu_{0})}+(2P_{1}-r_{1}P_{0})\frac{\alpha _{s}(\mu)-\alpha_{s}(\mu_{0})}{4\pi}+\ldots\right) \tag{103}\]
where \(r_{i}=\frac{\beta_{i}}{\beta_{0}}\). Here the first term on the right-hand side starts from the LL (\(\sim{\cal O}(\alpha_{s}L)\)) and the second term contributes to the NLL (\(\sim{\cal O}(\alpha_{s}^{2}L)\)).
Also, we have
\[\int_{\mu_{0}}^{\mu}d\ln\mu^{\prime 2}\left(\frac{\alpha_{s}(\mu)}{\alpha_{s}( \mu^{\prime})}\right)^{-\frac{2}{\beta_{0}}P_{ii}^{(0)}}\frac{\alpha_{s}(\mu^ {\prime})}{2\pi}\tilde{P}_{ij}^{(0)}\left(\frac{\alpha_{s}(\mu^{\prime})}{ \alpha_{s}(\mu_{0})}\right)^{-\frac{2}{\beta_{0}}P_{jj}^{(0)}}=\frac{\tilde{P }_{ij}^{(0)}}{P_{ii}^{(0)}-P_{jj}^{(0)}}\left[\left(\frac{\alpha_{s}(\mu)}{ \alpha_{s}(\mu_{0})}\right)^{-\frac{2}{\beta_{0}}P_{ii}^{(0)}}-\left(\frac{ \alpha_{s}(\mu)}{\alpha_{s}(\mu_{0})}\right)^{-\frac{2}{\beta_{0}}P_{jj}^{(0) }}\right]\,. \tag{104}\]
The collinear splitting function \(P_{ij}(z)\) that governs the PDF DGLAP evolution
\[\frac{d}{d\ln\mu^{2}}f_{i}(z,\mu)=P_{ij}\otimes f_{j}(z,\mu)\,, \tag{105}\]
can be written as the power series in \(\alpha_{s}\), which reads
\[P_{ij}(z)=\sum_{L=0}\left(\frac{\alpha_{s}}{2\pi}\right)^{L+1}P_{ ij}^{(L)}(z)\,. \tag{106}\]
In practice, it is always useful to consider the singlet splitting functions \(P_{ij}^{S}\) for \(i,j=q,g\), together with the non-singlet one \(P_{NS}^{+}\). Here the singlet splitting functions are defined as
\[P_{qq}^{S}=P_{NS}^{+}+P_{ps}\,,\qquad P_{gg}^{S}=P_{gg}\,,\] \[P_{qg}^{S}=2N_{F}P_{qg}\,,\qquad P_{gq}^{S}=P_{gq}\,. \tag{107}\]
In the \(z\)-space, at the LO
\[P_{NS}^{+,(0)}(z)=P_{qq}^{(0)}(z)=C_{F}\left(\frac{1+z^{2}}{1-z} \right)_{+}\,,\qquad P_{ps}^{(0)}=0\,,\] \[P_{gq}^{S,(0)}=P_{gq}^{(0)}(z)=C_{F}\frac{1+(1-z)^{2}}{z}\,, \qquad P_{qg}^{S,(0)}=2N_{F}P_{qg}^{(0)}(z)=2N_{F}T_{R}(z^{2}+(1-z)^{2})\,,\] \[P_{gg}^{S,(0)}=P_{gg}^{(0)}(z)=2C_{A}\left(\frac{z}{(1-z)_{+}}+ \frac{1-z}{z}+z(1-z)\right)+\frac{\beta_{0}}{2}\delta(1-z)\,. \tag{108}\]
In the Mellin space, we have
\[P_{NS}^{+,(0)}=C_{F}\left(\frac{3}{2}-(\hat{N}_{+}+\hat{N}_{-})S_{1}\right) \tag{109}\]
and
\[P_{NS}^{+,(1)} = C_{F}C_{A}\left(-2\hat{N}_{+}S_{3}+\frac{17}{24}+2S_{-3}+\frac{2 8}{3}S_{1}-(\hat{N}_{-}+\hat{N}_{+})\left[\frac{151}{18}S_{1}+2S_{1,-2}-\frac{11 }{6}S_{2}\right]\right) \tag{110}\] \[+C_{F}N_{F}\left(-\frac{1}{12}-\frac{4}{3}S_{1}+(\hat{N}_{-}+ \hat{N}_{+})\left[\frac{11}{9}S_{1}-\frac{S_{2}}{3}\right]\right)+C_{F}^{2} \left(-4S_{-3}-2S_{1}-2S_{2}+\frac{3}{8}\right.\] \[\left.-\hat{N}_{-}[S_{2}+2S_{3}]+(\hat{N}_{-}+\hat{N}_{+})\left[S_{1 }+4S_{1,-2}+2S_{1,2}+2S_{2,1}+S_{3}\right]\right)\]
\[P^{(0)}_{ps}(N)=0\,, \tag{64}\]
\[P^{(1)}_{ps}(N) = C_{F}N_{F}\left(\frac{20}{9}(\hat{N}_{-}-\hat{N}_{-2})S_{1}+(\hat{ N}_{+}-\hat{N}_{+2})\left[\frac{56}{9}S_{1}+\frac{8}{3}S_{2}\right]+(\hat{N}_{+}-1) \left[8S_{1}-4S_{2}\right]\right. \tag{65}\] \[\left.+(\hat{N}_{-}-\hat{N}_{+})\left[2S_{1}+S_{2}+2S_{3}\right] \right)\,,\]
\[P^{S,(0)}_{qg}=2N_{F}P^{(0)}_{qg}=2N_{F}T_{R}(-\hat{N}_{-}-4\hat{N}_{+}+2\hat{ N}_{+2}+3)S_{1}\,. \tag{66}\]
\[P^{S,(1)}_{qg} = -C_{A}N_{F}\left(\frac{2}{9}(\hat{N}_{-2}-\hat{N}_{-})S_{1}-(\hat {N}_{-}-\hat{N}_{+})[2S_{1}+S_{2}+2S_{3}]\,-(\hat{N}_{+}-\hat{N}_{+2})\left[ \frac{218}{9}S_{1}+4S_{1,1}+\frac{44}{3}S_{2}\right]\right. \tag{67}\] \[\left.+(1-\hat{N}_{+})\left[27S_{1}+4S_{1,1}-7S_{2}-2S_{3}\right] -2(\hat{N}_{-}+4\hat{N}_{+}-2\hat{N}_{+2}-3)\left[S_{1,-2}+S_{1,1,1}\right]\right)\] \[-C_{F}N_{F}\left(2(\hat{N}_{+}-\hat{N}_{+2})\left[5S_{1}+2S_{1,1} -2S_{2}+S_{3}\right]-(1-\hat{N}_{+})\left[\frac{43}{2}S_{1}+4S_{1,1}-\frac{7}{ 2}S_{2}\right]\right.\] \[\left.+(\hat{N}_{-}-\hat{N}_{+})\left[7S_{1}-\frac{3}{2}S_{2} \right]+2(\hat{N}_{-}+4\hat{N}_{+}-2\hat{N}_{+2}-3)\left[S_{1,1,1}-S_{1,2}-S_{ 2,1}+\frac{1}{2}S_{3}\right]\right)\,.\]
\[P^{S,(0)}_{gq}=P^{(0)}_{gq}=C_{F}(-2\hat{N}_{-2}+4\hat{N}_{-}+\hat{N}_{+}-3)S _{1}\,. \tag{68}\]
\[P^{S,(1)}_{gq} = -C_{A}C_{F}\left(2(2\hat{N}_{-2}-4\hat{N}_{-}-\hat{N}_{+}+3)\left[ S_{1,1,1}-S_{1,-2}-S_{1,2}-S_{2,1}\right]+(1-\hat{N}_{+})\left[2S_{1}-13S_{1,1}-7S_{2}-2S _{3}\right]\right. \tag{69}\] \[\left.+(\hat{N}_{-2}-2\hat{N}_{-}+\hat{N}_{+})\left[S_{1}-\frac{2 2}{3}S_{1,1}\right]+4(\hat{N}_{-}-\hat{N}_{+})\left[\frac{7}{9}S_{1}+3S_{2}+S_ {3}\right]+(\hat{N}_{+}-\hat{N}_{+2})\left[\frac{44}{9}S_{1}+\frac{8}{3}S_{2} \right]\right)\] \[-C_{F}N_{F}\left((\hat{N}_{-2}-2\hat{N}_{-}+\hat{N}_{+})\left[ \frac{4}{3}S_{1,1}-\frac{20}{9}S_{1}\right]-(1-\hat{N}_{+})\left[4S_{1}-2S_{1,1 }\right]\right)\] \[-C_{F}^{2}\left((2\hat{N}_{-2}-4\hat{N}_{-}-\hat{N}_{+}+3)\left[ 3S_{1,1}-2S_{1,1,1}\right]-(1-\hat{N}_{+})\left[S_{1}-2S_{1,1}+\frac{3}{2}S_{2 }-3S_{3}\right]\right.\] \[\left.-(\hat{N}_{-}-\hat{N}_{+})\left[\frac{5}{2}S_{1}+2S_{2}+2S_ {3}\right]\right)\,.\]
\[P^{S,(0)}_{gg}=P^{(0)}_{gg}=2C_{A}(-\hat{N}_{-2}+2\hat{N}_{-}+2 \hat{N}_{+}-\hat{N}_{+2}-3)S_{1}+\frac{\beta_{0}}{2}\,, \tag{70}\]
\[P^{S,(1)}_{gg} = -C_{A}N_{F}\left(\frac{2}{3}-\frac{16}{3}S_{1}-\frac{23}{9}(\hat {N}_{-2}+\hat{N}_{+2})S_{1}+\frac{14}{3}(\hat{N}_{-}+\hat{N}_{+})S_{1}+\frac{2 }{3}(\hat{N}_{-}-\hat{N}_{+})S_{2}\right) \tag{71}\] \[-C_{A}^{2}\left(2S_{-3}-\frac{8}{3}-\frac{14}{3}S_{1}+2S_{3}-4( \hat{N}_{-2}-2\hat{N}_{-}-2\hat{N}_{+}+\hat{N}_{+2}+3)\left[S_{1,-2}+S_{1,2}+S_ {2,1}\right]\right.\] \[\left.+\frac{8}{3}(\hat{N}_{+}-\hat{N}_{+2})S_{2}-4(\hat{N}_{-}-3 \hat{N}_{+}+\hat{N}_{+2}+1)\left[3S_{2}-S_{3}\right]+\frac{109}{18}(\hat{N}_{- }+\hat{N}_{+})S_{1}+\frac{61}{3}(\hat{N}_{-}-\hat{N}_{+})S_{2}\right)\] \[-C_{F}N_{F}\left(\frac{1}{2}+\frac{2}{3}(\hat{N}_{-2}-13\hat{N}_{- }-\hat{N}_{+}-5\hat{N}_{+2}+18)S_{1}+(3\hat{N}_{-}-5\hat{N}_{+}+2)S_{2}-2(\hat{N} _{-}-\hat{N}_{+})S_{3}\right)\,.\]
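For reference, the following sketch (our own, not from this work) evaluates the LO Mellin-space splitting functions listed above, using \(S_{1}(N)=\psi(N+1)+\gamma_{E}\) and the shift-operator convention \(\hat{N}_{\pm k}S_{1}=S_{1}(N\pm k)\); the printout checks the momentum sum rules at \(N=2\).

```python
# A minimal sketch (ours) of the LO Mellin-space splitting functions above,
# with Nhat_{+-k} S_1 = S_1(N +- k) and S_1(N) = psi(N+1) + gamma_E; N_F = 5.
from scipy.special import digamma

CF, CA, TR, NF = 4.0 / 3.0, 3.0, 0.5, 5
beta0 = 11.0 / 3.0 * CA - 4.0 / 3.0 * NF * TR
EULER = 0.5772156649015329

def S1(N):
    return digamma(N + 1.0) + EULER

def P_ns0(N):   # P_NS^{+,(0)} = C_F (3/2 - (Nhat_+ + Nhat_-) S_1)
    return CF * (1.5 - S1(N + 1) - S1(N - 1))

def P_qg0(N):   # 2 N_F T_R (-Nhat_- - 4 Nhat_+ + 2 Nhat_{+2} + 3) S_1
    return 2 * NF * TR * (-S1(N - 1) - 4 * S1(N + 1) + 2 * S1(N + 2) + 3 * S1(N))

def P_gq0(N):   # C_F (-2 Nhat_{-2} + 4 Nhat_- + Nhat_+ - 3) S_1
    return CF * (-2 * S1(N - 2) + 4 * S1(N - 1) + S1(N + 1) - 3 * S1(N))

def P_gg0(N):   # 2 C_A (-Nhat_{-2} + 2 Nhat_- + 2 Nhat_+ - Nhat_{+2} - 3) S_1 + beta0/2
    return 2 * CA * (-S1(N - 2) + 2 * S1(N - 1) + 2 * S1(N + 1)
                     - S1(N + 2) - 3 * S1(N)) + beta0 / 2

# momentum sum rules: both combinations vanish at N = 2 (P_ps^{(0)} = 0)
print(P_ns0(2) + P_gq0(2), P_qg0(2) + P_gg0(2))
```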
## Appendix C Partonic cross section in the \(z\)-space
Here we list the DIS partonic cross section in the \(z\)-space, which can be written as
\[\hat{\sigma}(z,Q^{2})=\frac{4\pi\alpha^{2}}{Q^{4}}\sum_{i=-N_{F}}^{N_{F}}\sum_{c= q,g}\sum_{\lambda=T,L}e_{q_{i}}^{2}f_{\lambda}\,\hat{\sigma}_{c,\lambda}(z) \tag{106}\]
where \(\hat{\sigma}_{c,\lambda}(z)\) can be expanded in power series of the strong coupling constant \(\alpha_{s}\),
\[\hat{\sigma}_{c,\lambda}(z)=\sum_{n=0}\left(\frac{\alpha_{s}}{2\pi}\right)^{n }\hat{\sigma}_{c,\lambda}^{(n)}(z)\,. \tag{107}\]
At LO
\[\hat{\sigma}_{q,T}^{(0)}=\delta(1-z)\,,\quad\hat{\sigma}_{q,L}^{(0)}=\hat{ \sigma}_{g,T}^{(0)}=\hat{\sigma}_{g,L}^{(0)}=0\,. \tag{108}\]
The \(\hat{\sigma}^{(1)}\)'s have been known for a long time. The quark cross section reads
\[\hat{\sigma}_{q,L}^{(1)}(z)=C_{F}z\,, \tag{109}\]
for the longitudinal part, and
\[\hat{\sigma}_{q,T}^{(1)}(z)=C_{F}\Bigg{\{}\left(\frac{1+z^{2}}{1-z}\right) \left(\ln\frac{Q^{2}}{\mu^{2}}+\ln\frac{1-z}{z}\right)-\frac{3}{2(1-z)}+3- \delta(1-z)\left(\frac{9}{2}+\frac{\pi^{2}}{3}\right)\Bigg{\}}\,, \tag{110}\]
where \(+\)-distributions regulating all divergences for \(z\to 1\) are implied.
The gluon channel is given by
\[\hat{\sigma}_{g,L}^{(1)}=T_{R}\left[2z(1-z)\right]\,, \tag{111}\]
and
\[\hat{\sigma}_{g,T}^{(1)}=T_{R}\Bigg\{(1-2z+2z^{2})\ln\frac{Q^{2}}{\mu^{2}}+(1-2z+2z^{2})\ln\frac{1-z}{z}-1+4z(1-z)\Bigg\}\,. \tag{112}\]
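The longitudinal coefficient functions above are regular at \(z=1\), so their Mellin moments can be checked directly by numerical integration; the short sketch below (our own illustration) does so.

```python
# A quick numerical cross-check (ours) of the regular longitudinal
# coefficient functions against their analytic Mellin moments
# int_0^1 dz z^(N-1) sigma_hat(z).
from scipy.integrate import quad

CF, TR = 4.0 / 3.0, 0.5

def sigma_qL(z):   # C_F z
    return CF * z

def sigma_gL(z):   # T_R [2 z (1 - z)]
    return TR * 2.0 * z * (1.0 - z)

def mellin(f, N):
    return quad(lambda z: z ** (N - 1) * f(z), 0.0, 1.0)[0]

N = 3.0
print(mellin(sigma_qL, N), CF / (N + 1))                       # = C_F/(N+1)
print(mellin(sigma_gL, N), 2 * TR * (1 / (N + 1) - 1 / (N + 2)))  # = 2T_R[1/(N+1)-1/(N+2)]
```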
## Appendix D Useful formulae
The following formulae are useful for deriving the Mellin transformation
\[S_{2}(\infty)=\frac{\pi^{2}}{6}\,,\] \[\int_{0}^{1}dz\,z^{m}\ln z=\frac{-1}{(m+1)^{2}}\,,\] \[\int_{0}^{1}dz\,z^{m}\ln(1-z)=\frac{-1}{(m+1)}S_{1}(m+1)\,,\] \[\int_{0}^{1}dz\,z^{N-1}\left(\frac{\ln(1-z)}{1-z}\right)_{+}=\hat{N}_{-}S_{1,1}\,, \tag{113}\] \[\int_{0}^{1}dz\,z^{N-1}\frac{z}{1-z}\ln z=S_{2}-\frac{\pi^{2}}{6}\,,\] \[\int_{0}^{1}dz\,z^{N-1}\frac{1-z}{z}\ln z=(\hat{N}_{-2}-2\hat{N}_{-}+1)S_{2}\,,\] \[\int_{0}^{1}dz\,z^{N-1}z^{m}\ln z=[\hat{N}_{+m-1}-\hat{N}_{+m}]S_{2}\,,\qquad m\in\mathbb{N}\]
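These transforms are straightforward to verify numerically; the sketch below (our own) checks two of them, using \(S_{2}(N)=\pi^{2}/6-\psi^{\prime}(N+1)\) and \(S_{1,1}(N)=(S_{1}^{2}(N)+S_{2}(N))/2\).

```python
# A minimal numerical check (ours) of two of the Mellin formulae above.
import math
from scipy.integrate import quad
from scipy.special import digamma, polygamma

EULER = 0.5772156649015329

def S1(N): return digamma(N + 1.0) + EULER
def S2(N): return math.pi**2 / 6.0 - polygamma(1, N + 1.0)
def S11(N): return 0.5 * (S1(N) ** 2 + S2(N))

N = 4.0
# int_0^1 dz z^(N-1) [ln(1-z)/(1-z)]_+ = Nhat_- S_{1,1} = S_{1,1}(N-1),
# with the plus prescription implemented as the (z^(N-1) - 1) subtraction
lhs = quad(lambda z: (z ** (N - 1) - 1.0) * math.log(1 - z) / (1 - z), 0, 1)[0]
print(lhs, S11(N - 1))

# int_0^1 dz z^(N-1) z/(1-z) ln z = S_2(N) - pi^2/6
lhs2 = quad(lambda z: z ** N * math.log(z) / (1 - z), 0, 1)[0]
print(lhs2, S2(N) - math.pi**2 / 6.0)
```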
|
2310.01408 | Generalized Animal Imitator: Agile Locomotion with Versatile Motion
Prior | The agility of animals, particularly in complex activities such as running,
turning, jumping, and backflipping, stands as an exemplar for robotic system
design. Transferring this suite of behaviors to legged robotic systems
introduces essential inquiries: How can a robot be trained to learn multiple
locomotion behaviors simultaneously? How can the robot execute these tasks with
a smooth transition? How to integrate these skills for wide-range applications?
This paper introduces the Versatile Instructable Motion prior (VIM) - a
Reinforcement Learning framework designed to incorporate a range of agile
locomotion tasks suitable for advanced robotic applications. Our framework
enables legged robots to learn diverse agile low-level skills by imitating
animal motions and manually designed motions. Our Functionality reward guides
the robot's ability to adopt varied skills, and our Stylization reward ensures
that robot motions align with reference motions. Our evaluations of the VIM
framework span both simulation environments and real-world deployment. To the
best of our knowledge, this is the first work that allows a robot to
concurrently learn diverse agile locomotion skills using a single
learning-based controller in the real world. Further details and supportive
media can be found at our project site: https://rchalyang.github.io/VIM | Ruihan Yang, Zhuoqun Chen, Jianhan Ma, Chongyi Zheng, Yiyu Chen, Quan Nguyen, Xiaolong Wang | 2023-10-02T17:59:24Z | http://arxiv.org/abs/2310.01408v2 | # Generalized Animal Imitator: Agile Locomotion with Versatile Motion Prior
###### Abstract
The agility of animals, particularly in complex activities such as running, turning, jumping, and backflipping, stands as an exemplar for robotic system design. Transferring this suite of behaviors to legged robotic systems introduces essential inquiries: How can a robot be trained to learn multiple locomotion behaviors simultaneously? How can the robot execute these tasks with a smooth transition? And what strategies allow for the integrated application of these skills? This paper introduces the Versatile Instructable Motion prior (_VIM_) - a Reinforcement Learning framework designed to incorporate a range of agile locomotion tasks suitable for advanced robotic applications. Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions with a _Functionality_ reward and a _Stylization_ reward. While the _Functionality_ reward guides the robot's ability to adopt varied skills, the _Stylization_ reward ensures performance alignment with reference motions. Our evaluations of the VIM framework span both simulation environments and real-world deployment. To our understanding, this is the first work that allows a robot to concurrently learn diverse agile locomotion tasks using a single controller. Further details and supportive media can be found at our project site: [https://rchalyang.github.io/VIM](https://rchalyang.github.io/VIM)
## I Introduction
Research efforts have been invested for years in equipping legged robots with agility comparable to that of natural quadrupeds. Picture a golden retriever gracefully maneuvering in a park: darting, leaping over obstacles, and pursuing a thrown ball. These tasks, effortlessly performed by many animals, remain challenging for contemporary legged robots. To accomplish such tasks, robots need not only master individual agile locomotion skills like running and jumping but also the capacity to adaptively select and configure these skills based on sensory inputs. We regard this kind of complicated task requiring highly agile locomotion skills as advanced parkour for legged robots. The inherent ability of quadrupeds to smoothly execute diverse locomotion skills across varied tasks inspires our pursuit of a control system with a general locomotion motion prior that includes these skills. In this direction, we introduce a novel RL framework, Versatile Instructable Motion prior (_VIM_) aiming to endow legged robots with a spectrum of reusable agile locomotion skills by integrating existing agile locomotion knowledge.
Historically, agile gaits[5, 33, 34] for legged robots have been sculpted using model-based or optimization methods. While promising, these methods demand significant engineering input and precise state estimation. Learning-based controllers enable robots to walk or run while addressing these limitations, although they still fall short of agility. Imitation-based controllers are also proposed to learn from motion sequences from animals [37] or optimization methods [16]. Research on incorporating sensory information, such as visual observations [2, 53, 54, 24, 57, 27, 59] or elevation
Fig. 1: **Real-Robot Trajectory. Our robot exhibits back-flipping skill in the second row by imitating the reference motion in the first row.**
Fig. 2: **Learning Agile Locomotion Skills from Reference Motions: Our system learns a single instructable motion prior from a diverse reference motion dataset.**
maps [30, 41] further enables legged robots to traverse complex terrain like stones. In spite of the encouraging results, most of these works focus on building a single controller from scratch, even though many of the learned locomotion skills could be shared across tasks. Recent works have started building a reconfigurable low-level motion prior [43, 23, 7, 38, 26, 22] for downstream applications. However, previous methods fail to make the best use of existing skills when learning diverse, highly agile locomotion skills.
In this work, we focus on building a low-level motion prior that utilizes existing locomotion skills found in nature and in previous optimization methods, and that learns multiple highly agile locomotion skills simultaneously, as shown in Figure 2. Even though we cannot fully comprehend the agility of animals and lack a unified framework for model-based controls, we recognize that motion sequences offer a consistent representation of diverse agile locomotion skills. Our motion prior extracts and assimilates a range of locomotion skills from reference motions, effectively mirroring their dynamics. These references comprise motion capture (mocap) sequences from quadrupeds, synthesized sequences from a generative model that complement the mocap data, and optimized motion trajectories. Throughout the training phase, we translate varied reference motion clips into a unified latent command space, guiding the motion prior to recreate locomotion dynamics based on these latent commands and the robot's inherent state.
For legged robots, we define a locomotion skill as the ability of the robot to produce a specific trajectory. To break down the intricacies of movement, we classify it into two primary facets: _Functionality_ and _Style_. _Functionality_ pertains to the fundamental movement objectives a robot aims to achieve, such as advancing forward at a predefined speed. _Style_, in contrast, delves into the specific mechanics of how a robot accomplishes a task. To illustrate, two robots might be programmed to progress at an identical speed, but the intricacies of their movement--like step size or frequency--might differ considerably. Simultaneously instructing a robot in both these domains is nontrivial [52]. Drawing inspiration from how humans learn complicated tasks, especially in fields demanding physical prowess like athletics, we identify three core feedback modalities: objective performance metrics, qualitative assessments, and granular kinematic guidance. Adopting this structured feedback approach, our robot starts with mastering the basic functional objective and subsequently turns to refining the detailed locomotion gaits.
By incorporating diverse reference motions and our reward design, our Versatile Instructable Motion prior (_VIM_) learns diverse agile locomotion skills and makes them available for intricate downstream tasks. With our VIM, we enable legged robots to perform advanced robotic parkour in the real world. We evaluate our method both in simulation and in the real world, as shown in Figure 1. Our method significantly outperforms baselines in terms of final performance and sample efficiency.
## II Related Work
**Blind Legged Locomotion:** Classical legged locomotion controllers [19, 56, 49, 31, 5, 40] based on model-based methods [9, 17, 11, 12, 4, 20, 46] and trajectory optimization [8, 34] have shown promising results in diverse tasks with high levels of agility. Nonetheless, these methods normally come with considerable engineering tuning for the specific task, high computation requirements during deployment, or fragility to complex dynamics. Learning-based controllers [2, 29, 30, 44, 10, 15] offer robust and lightweight control for deployment at the cost of offline computation. Peng et al. [14] developed a controller producing non-agile life-like gaits by imitating animals. Though previous works offer robust or agile locomotion controllers across complex environments, these works focus on finishing a single task at a time without reusing previous experience. Smith et al. [45] utilize existing locomotion skills to solve specific downstream tasks. Vollenweider et al. [51] utilize multiple AMPs [39] to develop a controller that solves a fixed task set. In this paper, our motion prior captures diverse agile locomotion skills from reference motions generated by trajectory optimization and provides them for intricate future downstream tasks.
**Motion Priors:** Due to the notoriously low sample efficiency and considerable effort required for reward engineering in RL, low-level skill pretraining has drawn growing attention in recent years. Singh et al. [43] utilize a flow-based model to build an actionable motion prior with motion sequences generated by scripts. More recent works [23, 7, 38, 26, 13, 22] focus on building low-level motion priors for downstream tasks but fail to include diverse, highly agile locomotion skills. In this work, we build a motion prior from reference motions consisting of mocap sequences, synthesized motion sequences, and trajectories from optimization methods, and learn multiple highly agile locomotion skills with a single controller.
## III Learn Versatile Instructable Motion Prior
We present the Versatile Instructable Motion prior (_VIM_), depicted in Figure 3, designed to acquire a wide range of agile locomotion skills concurrently from multiple reference motions. The development of our motion prior involves three stages: assembling a comprehensive dataset of reference motions sourced from diverse origins, crafting a motion prior that processes varying reference motions and the robot's proprioceptive feedback to generate motor commands, and finally, utilizing an imitation-based reward mechanism to effectively train this motion prior.
### _Reference motion dataset._
Our primary objective was to curate a skill set for the robot that covers diverse functions and agility levels, equipping it to handle complex downstream tasks. Our dataset encompasses reference motions for locomotion skills, including but not limited to canter, pace, walk, trot, turns, backflips, and various jumps. These reference motions are derived from: _(a)_ mocap data of quadrupeds, specifically a subset from previous work [58], despite its inherent challenges like noise due to the unpredictability of animal behavior; _(b)_ synthesized (Syn) motions generated using a generative model [58], aimed at enhancing dataset diversity by capturing challenging locomotion actions; _(c)_ motions crafted through trajectory optimization methods (Opt).
To address the morphology disparities between quadrupeds and our robot, we retarget both mocap and synthesized sequences to our robot following Peng et al. [37]. While mocap and synthesized motions offer extensive data, not all sequences may be practically achievable by the robot. Thus, our dataset is supplemented with motion sequences from trajectory optimization, emphasizing intricate moves like jumps and backflips. The comprehensive reference motion list can be found in Table II. Each trajectory in our dataset, represented as \((s^{\text{ref}}_{0},s^{\text{ref}}_{1},\cdots,s^{\text{ref}}_{T})\), focuses on the robot's trunk and joint movements, excluding specific motor commands, which are absent in the captured and synthesized data. We denote the dataset as \(\mathcal{D}=\{(s^{\text{ref}}_{0},s^{\text{ref}}_{1},\cdots,s^{\text{ref}}_{T})_{i}\}_{i=1}^{N}\).
### _Motion Prior Structure_
Our motion prior consists of a reference motion encoder and a low-level policy. The reference motion encoder maps varying reference motions into a condensed latent skill space, and the low-level policy, trained with our imitation reward, reproduces the robot motion given a latent command.
**Reference motion encoder:** Our reference motion encoder \(\mathbb{E}_{\text{ref}}(\cdot)\) maps segments of reference motion to latent commands in a latent skill space that outline the robot's prospective movement. These segments span both imminent and distant future states, expressed as \(\hat{s}^{\text{ref}}_{t}=\{s^{\text{ref}}_{t+1},s^{\text{ref}}_{t+2},s^{\text {ref}}_{t+10},s^{\text{ref}}_{t+30}\}\). We model the latent command as a Gaussian distribution \(\mathcal{N}(\mathbb{E}^{\mu}_{\text{ref}}(\hat{s}^{\text{ref}}_{t}),\mathbb{E }^{\sigma}_{\text{ref}}(\hat{s}^{\text{ref}}_{t}))\) from which we draw a sample at each interval to guide the low-level policy.
To maintain a _temporally consistent_ latent skill space, our training integrates an information bottleneck [48, 3] objective \(L_{\text{AR}}\), where the prior follows an auto-regressive model [6]. Specifically, given the sampled latent command for the previous time step \(z_{t-1}\), we minimize the KL divergence between the current latent Gaussian distribution and a Gaussian prior parameterized by \(z_{t-1}\),
\[L_{\text{AR}}(\hat{s}^{\text{ref}}_{t},z_{t-1})=\beta\text{KL}\left(\mathcal{ N}(\mu_{t},\sigma^{2}_{t})\parallel\mathcal{N}(\alpha z_{t-1},(1-\alpha^{2})I) \right),\]
where \(\alpha=0.95\) is the scalar controlling the strength of the temporal correlation, and \(\beta\) is the coefficient balancing the regularization.
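A minimal PyTorch sketch of this objective is given below; it is our own illustration rather than the authors' code, and the value of \(\beta\) shown is a hypothetical placeholder.

```python
# A minimal PyTorch sketch (ours) of the auto-regressive information-bottleneck
# objective L_AR: the KL divergence between the encoder's diagonal Gaussian
# N(mu_t, sigma_t^2) and the prior N(alpha * z_{t-1}, (1 - alpha^2) I).
import torch

def l_ar(mu_t, logvar_t, z_prev, alpha=0.95, beta=0.01):  # beta is a placeholder
    """KL( N(mu_t, sigma_t^2) || N(alpha z_{t-1}, (1 - alpha^2) I) ), per batch."""
    prior_var = 1.0 - alpha ** 2
    var_t = logvar_t.exp()
    kl = 0.5 * (
        torch.log(torch.tensor(prior_var)) - logvar_t
        + (var_t + (mu_t - alpha * z_prev) ** 2) / prior_var
        - 1.0
    ).sum(dim=-1)
    return beta * kl.mean()

def sample_z(mu_t, logvar_t):
    """Draw the latent command via the reparameterization trick."""
    return mu_t + (0.5 * logvar_t).exp() * torch.randn_like(mu_t)
```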
**Low-level policy training:** Our low-level policy \(\pi_{\text{low}}\) takes the latent command \(z_{t}\) representing the desired robot pose and the robot's current proprioceptive state \(s_{t}\) as input, and outputs actual motor commands \(a_{t}\) for the robot, where \(s_{t}\) is encoded with a proprioception encoder \(\mathbb{E}_{\text{prop}}\). We train the low-level policy and the reference motion encoder using PPO [42] in an end-to-end manner. Additionally, we introduce a motion embedding for the critic to distinguish diverse reference motions. Episodes initiate with randomized starting time steps from the dataset to avert overfitting and conclude when the root pose tracking error escalates beyond an acceptable range.
### _Imitation Reward for Functionality and Style_
Given the formulation of our motion prior, the robot learns diverse agile locomotion skills with our imitation reward and reward scheduling mechanism. Our reward offers consistent guidance, ensuring the robot captures both the functionality and style inherent to the reference motion.
**Learning Skill Functionality:** To mirror the functionality of the reference motion, we translate the root pose discrepancy between agent trajectories and the reference motion into a reward. The functionality reward \(r_{\text{func}}\) is subdivided into tracking rewards for robot root position \(r^{\text{pos}}_{\text{func}}\) and orientation \(r^{\text{ori}}_{\text{func}}\). Recognizing the distinct importance of vertical movement in agile tasks, the root position tracking is further split into rewards for vertical \(r^{\text{pos-}z}_{\text{func}}\) and horizontal \(r^{\text{pos-}xy}_{\text{func}}\) movements.

\[r_{\text{func}}(s_{t},\hat{s}^{\text{ref}}_{t})=w^{\text{ori}}_{\text{func}}*r^{\text{ori}}_{\text{func}}+w^{\text{pos-}xy}_{\text{func}}*r^{\text{pos-}xy}_{\text{func}}+w^{\text{pos-}z}_{\text{func}}*r^{\text{pos-}z}_{\text{func}}\]
The specific formulation of our functionality rewards, similar to previous work [37], is provided as follows.

\[r^{\text{ori}}_{\text{func}}(s_{t},\hat{s}^{\text{ref}}_{t}) =\exp\left(-10\left\lVert\hat{\mathbf{q}}^{\text{root}}_{t}-\mathbf{q}^{\text{root}}_{t}\right\rVert^{2}\right)\] \[r^{\text{pos-}xy}_{\text{func}}(s_{t},\hat{s}^{\text{ref}}_{t}) =\exp\left(-20\left\lVert\hat{\mathbf{x}}^{\text{root-}xy}_{t}-\mathbf{x}^{\text{root-}xy}_{t}\right\rVert^{2}\right)\] \[r^{\text{pos-}z}_{\text{func}}(s_{t},\hat{s}^{\text{ref}}_{t}) =\exp\left(-80\left\lVert\hat{\mathbf{x}}^{\text{root-}z}_{t}-\mathbf{x}^{\text{root-}z}_{t}\right\rVert^{2}\right)\]
where \(\mathbf{q},\hat{\mathbf{q}}\) and \(\mathbf{x},\hat{\mathbf{x}}\) denote the root orientation and position from both the robot and reference motion, respectively. Notably, in contrast to previous work [37], we allocate a greater emphasis on root height in our reward, crucial for mastering agile locomotion skills such as backflips and jumps.
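A minimal sketch of this reward is shown below (our own illustration); the weights \(w^{*}_{\text{func}}\) are hypothetical placeholders, since they are not specified here, while the exponential coefficients follow the formulas above.

```python
# A minimal sketch (ours) of the functionality reward above. The combination
# weights W_* are hypothetical placeholders, not values from the paper.
import numpy as np

W_ORI, W_POS_XY, W_POS_Z = 0.3, 0.4, 0.3  # hypothetical weights

def r_functionality(root_pos, root_quat, ref_pos, ref_quat):
    # orientation tracking via squared quaternion difference
    r_ori = np.exp(-10.0 * np.sum((ref_quat - root_quat) ** 2))
    # horizontal (xy) and vertical (z) root position tracking
    r_pos_xy = np.exp(-20.0 * np.sum((ref_pos[:2] - root_pos[:2]) ** 2))
    r_pos_z = np.exp(-80.0 * (ref_pos[2] - root_pos[2]) ** 2)
    return W_ORI * r_ori + W_POS_XY * r_pos_xy + W_POS_Z * r_pos_z
```

Note how the stiffer coefficient on the height term (80 versus 20) implements the heavier emphasis on root height mentioned above.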
**Learning Skill Style:** Capturing the style of a reference motion, in addition to its functionality, enriches the application by meeting criteria such as energy efficiency, robot safety, and facilitating human-robot interaction. Drawing inspiration from how humans learn - starting by emulating the broader style before focusing on intricate joint movements - our robot first mimics the broader locomotion style with an adversarial style reward and later refines its technique with a joint angle tracking reward.
**Adversarial Stylization Reward:** To swiftly encapsulate the style of the locomotion skill, we train distinct discriminators
Fig. 3: Versatile Instructable Motion prior (_VIM_): The reference motion encoder maps reference motions into a latent skill space indicating the target robot pose, and the low-level policy outputs the motor command. **Reward Design:** Our reward includes a _Functionality_ reward and a _Style_ reward.
\(D_{i},\ i=1..n\) for all \(n\) reference motions separately to distinguish robot transitions from the transitions of the specific reference motion [39, 51], and use the output to provide high-level feedback to the agent. Specifically, our discriminators are trained with the following objective:
\[\operatorname*{argmin}_{D_{i}}\operatorname*{\mathbb{E}}_{d_{i}^{\mathcal{M}}(s,s^{\prime})}\left(D_{i}(s,s^{\prime})-1\right)^{2}+\operatorname*{\mathbb{E}}_{d_{i}^{\pi}(s,s^{\prime})}\left(D_{i}(s,s^{\prime})+1\right)^{2}\]
where \(d_{i}^{\mathcal{M}}(s,s^{\prime})\) and \(d_{i}^{\pi}(s,s^{\prime})\) denote the transition distributions of the dataset and the policy for the \(i\)th reference motion, respectively.
For each reference motion, the likelihood from the discriminator is then converted to a reward with:
\[r_{\text{style}}^{\text{adv}}(s_{t},s^{\prime}_{t})=1-\frac{1}{4}*\left(1-D(s_ {t},s^{\prime}_{t})\right)^{2}.\]
Initially, our adversarial stylization reward provides a dense reward and enables the robot to learn a credible gait, but it cannot provide more detailed instruction as the training proceeds, which leads to mode collapse and unstable training.
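For concreteness, the sketch below (our own, not the authors' code) shows the per-motion least-squares discriminator objective and the conversion of the discriminator score into the stylization reward; the score clamp is a common stabilization trick and an assumption on our part.

```python
# A minimal PyTorch sketch (ours) of the per-motion least-squares discriminator
# objective and the adversarial stylization reward. `disc` maps a transition
# (s, s') to a scalar score.
import torch

def discriminator_loss(disc, s_ref, s_ref_next, s_pi, s_pi_next):
    # push reference transitions toward +1 and policy transitions toward -1
    loss_ref = ((disc(s_ref, s_ref_next) - 1.0) ** 2).mean()
    loss_pi = ((disc(s_pi, s_pi_next) + 1.0) ** 2).mean()
    return loss_ref + loss_pi

def r_style_adv(disc, s, s_next):
    d = disc(s, s_next).clamp(-1.0, 1.0)  # stabilization clamp (our assumption)
    return 1.0 - 0.25 * (1.0 - d) ** 2    # r = 1 - (1/4)(1 - D)^2
```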
**Joint Angle Tracking Reward:** On the other hand, the joint angle tracking reward [36, 7] provides sparse but stable instruction for the robot to mimic the gait of the reference motion. Similar to our root pose tracking reward, our joint angle tracking reward has the following formulation:
\[r_{\text{style}}^{\text{joint}}(s_{t},\hat{s}_{t}^{\text{ref}}) =\exp\left(-5\sum_{j\in\text{joints}}\left\|\hat{\mathbf{q}}_{t} ^{j}-\mathbf{q}_{t}^{j}\right\|^{2}\right)\] \[+\exp\left(-20\sum_{f\in\text{fect}}\left\|\hat{\mathbf{x}}_{t}^{ f}-\mathbf{x}_{t}^{f}\right\|^{2}\right)\]
where \(\mathbf{q}_{t}^{j},\hat{\mathbf{q}}_{t}^{j}\) are the joint angles of the robot and the reference motion, and \(\mathbf{x}_{t}^{f},\hat{\mathbf{x}}_{t}^{f}\) are the end-effector positions of the robot and the reference motion.
When learning diverse agile locomotion skills, combining only the joint angle tracking reward and the functionality reward leads to failure in tracking either the functionality or the style of the reference motion, since different locomotion skills are sensitive to different rewards.
**Stylization Reward Scheduling:** To get the best of both worlds, we propose to use both the adversarial stylization reward and the joint angle tracking reward with a balanced scheduling mechanism. Considering the discriminator as a "coach", we utilize the mean adversarial reward as an indication of how satisfied the coach is with the current performance. When it is not satisfied with the current performance of the robot, it provides detailed instruction for the robot to learn from. Specifically, our stylization reward follows:
\[r_{\text{style}}(s_{t},\hat{s}_{t}^{\text{ref}}) =w_{\text{style}}^{\text{adv}}*r_{\text{style}}^{\text{adv}}+w_{ \text{style}}^{\text{joint}}*r_{\text{style}}^{\text{joint}}\] \[+w_{\text{style}}^{\text{adv}}*(1-\operatorname*{\mathbb{E}}_{s _{t}\in S}(r_{\text{style}}^{\text{adv}}(s,s^{\prime})))*r_{\text{style}}^{ \text{joint}}\]
With the given formulation, our stylization reward provides dense rewards at the beginning of training, enabling the robot to quickly grasp the essence of different agile locomotion skills. It also provides detailed instruction as the training proceeds, enabling the robot to refine its gait and leading to more stable training.
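The scheduling rule can be summarized in a few lines; the sketch below is our own illustration, with hypothetical weights.

```python
# A minimal sketch (ours) of the stylization reward scheduling above: when the
# batch-mean adversarial reward is low (the "coach" is unsatisfied), the
# joint-angle tracking term is weighted up. Weights are hypothetical.
import numpy as np

W_ADV, W_JOINT = 0.5, 0.5  # hypothetical weights

def r_style(r_adv, r_joint):
    """r_adv, r_joint: per-sample reward arrays for the current batch."""
    coach_satisfaction = np.mean(r_adv)  # E_{s in S}[ r_style^adv ]
    return (W_ADV * r_adv
            + W_JOINT * r_joint
            + W_ADV * (1.0 - coach_satisfaction) * r_joint)
```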
### _Solving Downstream Tasks with Motion Prior:_
For intricate tasks like jumping over gaps, learning from scratch presents challenges, including acquiring the necessary agile locomotion skills, such as jumping and rapid running, within limited interactions, and the intensive engineering needed to harmonize the reward for the top-level task while regularizing the robot's motion. With a low-level motion prior, robots can instantly harness existing skills encapsulated within the prior and channel their efforts into high-level strategizing. For each distinct downstream task, we train a high-level policy \(\pi_{\text{high}}\) that takes the high-level observation \(\mathbf{o}_{\text{high}}\) and outputs a latent command for the low-level motion prior to utilize the existing agile locomotion skills: \(a_{t}=\pi_{\text{low}}(\pi_{\text{high}}(\mathbf{o}_{\text{high}}),\mathbb{E}_{\text{prop}}(s_{t}))\).
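The resulting control flow is a simple two-level composition; the sketch below (our own, with illustrative names) makes it explicit.

```python
# A minimal sketch (ours) of reusing the frozen motion prior for a downstream
# task: the high-level policy emits a latent command, and the low-level policy
# turns it into motor targets. All names are illustrative.
def control_step(pi_high, pi_low, e_prop, o_high, s_t):
    z_t = pi_high(o_high)            # latent command in the learned skill space
    a_t = pi_low(z_t, e_prop(s_t))   # motor command from the frozen motion prior
    return a_t
```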
## IV Experiments
We evaluate our system in simulation and in the real world, benchmarking it against prior work on low-level locomotion skill learning and various downstream tasks. Our robot demonstrates life-like agility in the real world.
### _Evaluation of Learned Motion Priors_
Our system's proficiency in learning a range of agile locomotion skills from the reference motion dataset (discussed in Sec III-A) is initially assessed.
**Baselines**: We benchmark our method against two representative baselines. The Motion Imitation baseline [37, 7, 22] represents a thread of recent works whose imitation rewards are defined solely by errors between current robot states and the corresponding reference states. The Generative Adversarial Imitation Learning (GAIL) baseline represents a thread of recent work [38] whose imitation reward is solely provided by a discriminator trained to distinguish trajectories generated by the policy from the ground-truth reference motions. Given that our reference motions consist only of state
Fig. 4: **Real World Jump Forward Trajectory Comparison: Each row represents a single trajectory (From top to bottom: Reference Motion, VIM, GAIL, Motion Imitation).**
sequences, they offer less supervision compared to expert action sequences, rendering motion prior learning more challenging. Each method trains for \(2\times 10^{9}\) iterations across 3 random seeds. Both our technique and the Motion Imitation baseline adopt identical reward scales for all motion error-tracking rewards. Likewise, our approach and GAIL maintain the same scale for adversarial stylization rewards.
**Simulation Evaluation:** In the simulation, we measure average imitation tracking errors for various agile locomotion skills, episode returns, and trajectory lengths across random seeds. Specifically, as listed in Table I, the tracking error of the root pose represents the ability of the robot to reproduce the locomotion skill, and the tracking errors of the joint angles and end-effector positions represent the ability of the robot to mimic the style of the reference motion. Our method achieves a similar root pose tracking error as the motion imitation baseline with a much smaller joint angle tracking error. This shows that our method strikes a balance between functionality and style, superior to the motion imitation baseline that focuses solely on functionality. Meanwhile, the GAIL baseline failed to learn the functionality of the reference motions, which leads to short episode lengths and the lowest episode return. We surmise that the GAIL baseline's inadequacy arises for two main reasons: First, exclusive reliance on adversarial stylization reward does not offer temporally consistent guidance throughout skill learning due to misaligned rewards across timesteps. Second, the mode collapse issue inherent in adversarial training hinders the robot from mastering highly agile skills, such as backflipping. The shortcomings of the Motion Imitation baseline may stem from the challenges of balancing different terms and selecting suitable hyperparameters when concurrently learning multiple agile locomotion skills. Comparing our VIM with and without stylization reward scheduling, we find the former exhibits enhanced style tracking performance, underscoring the value of stylization reward scheduling in refining robot gait tracking.
**Real World Evaluation:** We gauge learned agile locomotion skills in real-world scenarios. Due to the lack of precise robot pose estimates, we resort to specific metrics tailored to different locomotion skills, detailed in Table II. For Jump While Running & Jump Forward & Jump Forward (Syn) & Backflip, we measure the jumping height and jumping distance. For Pace & Canter & Walk & Trot and Left Turn & Right Turn, we measure the linear and angular velocity, respectively. Results reveal that our method retains most of the reference motion functionality. The only significant deviation, observed in the Canter motion, arises from inherent differences between animal movement (its source) and our robot's capabilities. Even with comparable root pose tracking errors in simulations, our method outshines the Motion Imitation baseline in real-world metrics like jumping height, distance, and velocity tracking error. This suggests that mirroring the style of the reference motion improves sim2real transfer for natural gaits. The GAIL baseline struggled to reproduce most real-world locomotion skills. A visual comparison of real-world trajectories is available in Figure 4, showing our method's superiority in capturing both motion functionality and style.
**Latent Skill Space Visualization:** To better understand our low-level motion prior, we visualize the learned latent skill space in Figure 5 by projecting the latent embeddings corresponding to motion segments in our reference motion dataset via t-SNE [50]. We find that different skills are separated into different regions with clear boundaries. Our reference motion encoder also clusters skills with similar semantic meaning together: the embeddings from the Left Turn sequence are
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Metrics & VIM & Motion Imitation & GAIL & Reference Motion \\ \hline Height (Jump While Running) \((m)\) & \(\mathbf{0.50\pm 0.003}\) & \(0.42\pm 0.01\) & \(0.41\pm 0.04\) & \(0.53\pm 0.005\) \\ Height (Jump Forward) \((m)\) & \(\mathbf{0.44\pm 0.01}\) & \(0.42\pm 0.01\) & \(0.27\pm 0.006\) & \(0.59\pm 0.006\) \\ Height (Jump Forward Syn) \((m)\) & \(\mathbf{0.52\pm 0.01}\) & \(N/A\) & \(N/A\) & \(0.55\pm 0.007\) \\ Height (Backflip) \((m)\) & \(\mathbf{0.62\pm 0.01}\) & \(0.49\pm 0.01\) & \(N/A\) & \(0.60\pm 0.005\) \\ Distance (Jump While Running) \((m)\) & \(\mathbf{0.48\pm 0.08}\) & \(0.35\pm 0.02\) & \(0.40\pm 0.003\) & \(0.56\pm 0.008\) \\ Distance (Jump Forward) \((m)\) & \(\mathbf{0.76\pm 0.05}\) & \(0.40\pm 0.01\) & \(0.10\pm 0.002\) & \(0.82\pm 0.003\) \\ Distance (Jump Forward Syn) \((m)\) & \(\mathbf{0.49\pm 0.04}\) & \(N/A\) & \(N/A\) & \(0.54\pm 0.007\) \\ Linear Velocity (Pace) \((m/s)\) & \(\mathbf{0.76\pm 0.01}\) & \(0.97\pm 0.07\) & \(0.50\pm 0.02\) & \(0.72\pm 0.005\) \\ Linear Velocity (Canter) \((m/s)\) & \(\mathbf{1.49\pm 0.15}\) & \(N/A\) & \(N/A\) & \(3.87\pm 0.17\) \\ Linear Velocity (Walk) \((m/s)\) & \(0.90\pm 0.04\) & \(\mathbf{0.96\pm 0.06}\) & \(0.53\pm 0.58\) & \(0.97\pm 0.42\) \\ Linear Velocity (Trot) \((m/s)\) & \(1.33\pm 0.17\) & \(\mathbf{1.05\pm 0.02}\) & \(0.93\pm 0.01\) & \(1.16\pm 0.12\) \\ Angular Velocity (Left Turn) \((rad/s)\) & \(1.71\pm 0.04\) & \(0.00\pm 0.00\) & \(\mathbf{0.91\pm 0.04}\) & \(1.01\pm 0.05\) \\ Angular Velocity (Right Turn) \((rad/s)\) & \(0.81\pm 0.02\) & \(\mathbf{0.62\pm 0.02}\) & \(0.63\pm 0.05\) & \(0.41\pm 0.09\) \\ Joint Angle Tracking Error \((rad^{2}/joint)\) & \(\mathbf{0.10\pm 0.08}\) & \(0.27\pm 0.16\) & \(0.22\pm 0.10\) & - \\ \hline \hline \end{tabular}
\end{table} TABLE II: **Evaluation of Motion Prior in Real (Left):** We collect representative metrics for different locomotion skills with corresponding metrics from reference motion. \(N/A\) denotes completely failed skills in real. **Full Reference Motion List (Right)**
\begin{table}
\begin{tabular}{c|c c c c|c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Tracking Error \(\downarrow\)} & \multicolumn{3}{c}{RL Objectives \(\uparrow\)} \\ & Root Pos (XY) & Root Pos (Height) & Root Ori & Joint Angle & EE Pos & Episode Return & Episode Length \\ \hline VIM & \(\mathbf{1.24\pm 0.62}\) & \(0.01\pm 0.02\) & \(0.11\pm 0.06\) & \(\mathbf{0.08\pm 0.06}\) & \(\mathbf{0.03\pm 0.03}\) & \(13.31\pm 1.48\) & \(166.783\pm 120.217\) \\ VIM (w/o Scheduling) & \(1.28\pm 0.67\) & \(0.009\pm 0.0123\) & \(\mathbf{0.1\pm 0.06}\) & \(0.1\pm 0.08\) & \(0.05\pm 0.04\) & \(\mathbf{13.963\pm 11.398}\) & \(\mathbf{179.047\pm 121.78}\) \\ Motion Imitation & \(1.39\pm 0.66\) & \(\mathbf{0.0077\pm 0.0114}\) & \(0.11\pm 0.05\) & \(0.25\pm 0.14\) & \(1.04\pm 0.08\) & \(9.536\pm 9.049\) & \(143.393\pm 114.514\) \\ GAIL & \(1.04\pm 0.86\) & \(0.03\pm 0.03\) & \(0.13\pm 0.05\) & \(0.17\pm 0.1\) & \(0.09\pm 0.05\) & \(3.586\pm 6.166\) & \(54.723\pm 75.984\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: **Evaluation of Motion Prior in Simulation:** We compare Horizontal and Vertical Root Position (Root Pos (XY), Root Pos (Height)), Root Orientation (Root Ori), Joint Angle, and End Effector Position (EE Pos) tracking errors and RL objectives of all methods. Our methods outperform all baselines in terms of smaller tracking errors, higher episodic returns, and longer episode lengths. GAIL baseline shows a smaller root position tracking error since it can’t follow the reference motion leading to early termination of the episode.
close to the embeddings from the Right Turn sequence, which enables smooth transitions between different skills, and the embeddings from the Jump While Running & Jump Forward & Jump Forward (Syn) sequences are clustered together. These observations suggest that our system learned a smooth and semantically meaningful latent skill space for solving downstream tasks.
### _Evaluation on Downstream Tasks_
To further evaluate how our method leverages learned agile locomotion skills for downstream tasks, we designed a set of tasks and tested our method against standard baselines in both simulation and real-world scenarios.
**Downstream Tasks:** Our task suite comprises: Following Command: This involves directing the robot to move with specific linear and angular velocities, sampled uniformly between \(0\sim 2\) m/s and \(-2\sim 2\) rad/s. In our motion prior, the robot is trained to move and turn at the reference motion's speed; hence, to follow a command precisely, the high-level policy should smoothly interpolate between different speeds. Jump Forward: This task requires the robot to execute a jump during a forward run. We have adapted a subset of jumping rewards from CAJun [55] to evaluate policy interpolation between jumping and running motions within a fixed timeframe. Following Command + Jump Forward: Here, the robot must either jump forward or adjust to changing commanded speeds. To optimize episode return, the robot should not only use the agile locomotion skills from the reference motion dataset but also develop unobserved skills like executing sharp turns.
**Baselines:** Considering the baselines' subpar performance in low-level motion prior training, we compare our system with three representative baselines without a pre-trained low-level controller. **PPO** [42]: Demonstrates controllers trained exclusively on downstream task rewards. **AMP** [39]: Utilizes existing reference motions to provide a stylization reward in an adversarial imitation learning manner and learns the policy for the downstream task while mimicking the behavior of reference motions. **Hierarchical Reinforcement Learning (HRL)**: Adapted from Jain et al. [25], it learns a high-level policy sending latent commands to a low-level motor controller. HRL resembles a broad category of prior works that decompose temporally extended reasoning into sub-problems [47, 1, 35, 32, 18, 28, 21]. For a fair comparison, we made modifications like removing the trajectory generator in [25], using PPO for AMP and HRL, and supplying full reference motion data to AMP and HRL integrated with AMP.
**Evaluation in Simulation & Real World:** We train all methods on each downstream task for \(4\times 10^{8}\) environment samples with 3 random seeds. The simulation results are detailed in Figure 6, and real-world results are provided in Table III. For the Following Command task, while all methods mastered basic locomotion, ours excelled in efficiency and smoothly transitioned between diverse linear and angular velocities. The other tasks, Jump Forward and Following Command + Jump Forward, demanded advanced jumping abilities, which the baselines could not acquire. These baseline methods either continuously moved forward, remained grounded when prompted to jump, or toppled to evade energy consumption penalties. In contrast, our system seamlessly bridged jumping and running actions, securing the highest episode return. Despite being provided with a comprehensive reference motion dataset, the baselines could not harness the skills. This shortcoming possibly stems from the challenge of deriving agile locomotion skills from the dataset using only adversarial stylization rewards, mirroring the GAIL baseline's poor performance in low-level motion prior training.
## V Conclusion
In this paper, we propose Versatile Instructable Motion prior (_VIM_) which learns agile locomotion skills from diverse reference motions with a single motion prior. Our results in simulation and in the real world show that our VIM captures both the functionality and the style of locomotion skills from reference motions. Our VIM also provides a temporally consistent and compact latent skill space representing different locomotion skills for different downstream tasks. With agile locomotion skills in our VIM, complex downstream tasks can be solved efficiently with minimum human effort.
Fig. 5: **Latent Skill Space t-SNE. We visualize the latent skill space by visualizing the latent embedding for varying motion segments.**
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Metrics (Vel for Velocity) & Ours & AMP & PPO & HRL \\ \hline Max Linear Vel \((m/s)\) & \(\mathbf{1.78\pm 0.13}\) & \(1.74\pm 0.21\) & \(1.75\pm 0.26\) & \(1.70\pm 0.08\) \\ Max Angular Vel \((rad/s)\) & \(1.78\pm 0.04\) & \(1.70\pm 0.00\) & \(2.24\pm 0.08\) & \(0.00\pm 0.00\) \\ Max Angular Vel \((\mathbf{s}_{\text{high}})\) & \(\mathbf{2.05\pm 0.02}\) & \(0.83\pm 0.12\) & \(1.75\pm 0.12\) & \(0.95\pm 0.37\) \\ Jump Distance \((m)\) & \(\mathbf{0.50\pm 0.07}\) & \(0.00\pm 0.00\) & \(N/A\) & \(N/A\) \\ Jump Height \((m)\) & \(\mathbf{0.50\pm 0.02}\) & \(0.38\pm 0.12\) & \(N/A\) & \(N/A\) \\ \hline \hline \end{tabular}
\end{table} TABLE III: **Downstream Tasks Evaluation in Real: We compare Following Command + Jump Forward policies of all methods in real, and \(N/A\) denotes completely failed skills in real. Our methods outperform all baselines in real for most metrics.**
Fig. 6: **Downstream Tasks Evaluation in Simulation: Solid line and shaded area denote the mean and std across random seeds. Our system outperforms all baselines on all tasks.** |
2304.14824 | A noise-robust acoustic method for recognizing foraging activities of
grazing cattle | Farmers must continuously improve their livestock production systems to
remain competitive in the growing dairy market. Precision livestock farming
technologies provide individualized monitoring of animals on commercial farms,
optimizing livestock production. Continuous acoustic monitoring is a widely
accepted sensing technique used to estimate the daily rumination and grazing
time budget of free-ranging cattle. However, typical environmental and natural
noises on pastures noticeably affect the performance limiting the practical
application of current acoustic methods. In this study, we present the
operating principle and generalization capability of an acoustic method called
Noise-Robust Foraging Activity Recognizer (NRFAR). The proposed method
determines foraging activity bouts by analyzing fixed-length segments of
identified jaw movement events produced during grazing and rumination. The
additive noise robustness of the NRFAR was evaluated for several
signal-to-noise ratios using stationary Gaussian white noise and four different
nonstationary natural noise sources. In noiseless conditions, NRFAR reached an
average balanced accuracy of 86.4%, outperforming two previous acoustic methods
by more than 7.5%. Furthermore, NRFAR performed better than previous acoustic
methods in 77 of 80 evaluated noisy scenarios (53 cases with p<0.05). NRFAR has
been shown to be effective in harsh free-ranging environments and could be used
as a reliable solution to improve pasture management and monitor the health and
welfare of dairy cows. The instrumentation and computational algorithms
presented in this publication are protected by a pending patent application: AR
P20220100910. Web demo available at: https://sinc.unl.edu.ar/web-demo/nrfar | Luciano S. Martinez-Rau, José O. Chelotti, Mariano Ferrero, Julio R. Galli, Santiago A. Utsumi, Alejandra M. Planisich, H. Leonardo Rufiner, Leonardo L. Giovanini | 2023-04-28T13:06:14Z | http://arxiv.org/abs/2304.14824v3 | # A noise-robust acoustic method for recognition of foraging activities of grazing cattle
###### Abstract
To stay competitive in the growing dairy market, farmers must continuously improve their livestock production systems. Precision livestock farming technologies provide individualised monitoring of animals on commercial farms, optimising livestock production. Continuous acoustic monitoring is a widely accepted sensing technique used to estimate the daily rumination and grazing time budget of free-ranging cattle. However, typical environmental and natural noises on pasture noticeably affect the performance and generalisation of current acoustic methods. In this study, we present an acoustic method called Noise-Robust Foraging Activity Recognizer (NRFAR). The proposed method determines foraging activity bouts by analysing fixed-length segments of identified jaw movement events associated with grazing and rumination. The additive noise robustness of NRFAR was evaluated for several signal-to-noise ratios, using stationary Gaussian white noise and four different non-stationary natural noise sources. In noiseless conditions, NRFAR reaches an average balanced accuracy of 89%, outperforming two previous acoustic methods by more than 7%. Additionally, NRFAR presents better performance than previous acoustic methods in 66 out of 80 evaluated noisy scenarios (p<0.01). NRFAR operates online with a similar computational cost to previous acoustic methods. The combination of these properties and the high performance in harsh free-ranging
environments renders NRFAR an excellent choice for real-time implementation in a low-power embedded device. The instrumentation and computational algorithms presented within this publication are protected by a pending patent application: AR P20220100910.
Web demo available at: [https://sinc.unl.edu.ar/web-demo/nrfar](https://sinc.unl.edu.ar/web-demo/nrfar)
keywords: Acoustic monitoring, precision livestock farming, ruminant foraging behaviour, noise robustness, signal-to-noise ratio.
## 1 Introduction
The new and diverse precision livestock farming tools and applications significantly reduce farm labour (Lovarelli et al., 2020; Tzanidakis et al., 2023). Precision livestock farming solutions allow individualised monitoring of animals to optimise herd management in most production systems (Michie et al., 2020). Monitoring the feeding behaviour of livestock can provide valuable insights into animal welfare, including their nutrition, health, and performance (Bannazi et al., 2012; Garcia et al., 2020). Changes in feeding patterns, periodicity and duration can be used to inform pasture allocation management (Connor, 2015) and ruminant diets that signal anxiety (Bristow and Holmes, 2007) or stress (Abeni and Galli, 2017; Schirmann et al., 2009), as well as an early indicator of diseases (Osei-Amponsah et al., 2020; Paudyal et al., 2018), rumen health (Beauchemin, 2018, 1991), and the onset of parturition (Kovacs et al., 2017; Pahl et al., 2014) and estrus (Dolecheck et al., 2015; Pahl et al., 2015).
Free-ranging cattle spend 40-80% of their daily time budget on grazing and rumination activities while at pasture (Kilgour, 2012; Phillips, 2008). Grazing involves searching, apprehending, chewing, and swallowing herbage and is defined by a non-predefined sequence of ingestive jaw movement (JM) events associated with chews, bites, and composite chew-bites. A bite event involves apprehending and severing herbage, a chew event involves crushing, grinding, and processing previously gathered herbage, and a chew-bite event occurs when herbage is apprehended, severed, and comminuted in the same JM (Ungar and Rutter, 2006). Rumination is determined by cycles of 40-60 s of chew events followed by a 3-5 s pause required to swallow and regurgitate the feed cud (Galli et al., 2020). Grazing and rumination involve JM-events taken at rates of 0.75-1.20 JM per second. Changes in the type and sequence of distinctive JM-events can be aggregated over time to determine the sequence and duration of foraging activities (Andriamandroso et al., 2016).
Feeding activity monitoring of cattle has primarily been approached through the use of different non-invasive wearable sensors, including nose-band pressure, inertial measurement units, and microphone systems (Benos et al., 2021; Stygar et al., 2021). Each sensing technique has its advantages and disadvantages depending on the environment and the application. Current nose-band pressure sensors are combined with accelerometers to log data from JMs. Raw data are analysed by software to determine foraging behaviours and provide specific information associated with them (Steinmetz et al., 2020; Werner et al., 2018).
Human intervention is required to process the data recorded on a computer, which makes these systems difficult to scale to commercial farms (Riaboff et al., 2022). Sensors based on inertial measurement units are widely used to recognize multiple behaviours such as feeding, rumination, posture, and locomotion (Aquilani et al., 2022; Chapa et al., 2020). Although accelerometer-based sensors are typically used in indoor environments (Balasso et al., 2021; Lovarelli et al., 2022; Wu et al., 2022), their use in outdoor environments has increased in recent years (Arablouei et al., 2023; Cabezas et al., 2022; Wang et al., 2023). One major drawback of inertial measurement units is their limited capability to estimate herbage intake in grazing (Wilkinson et al., 2020). Additionally, the reliability of these sensors is heavily dependent on their precise location, orientation, and secure fastening, which makes reproducing results difficult (Kamminga et al., 2018; Li et al., 2021). For this reason, acoustic sensors are preferred over the former sensors for monitoring the foraging behaviour of cattle outdoors. Head-placed microphones allow for obtaining detailed information on ingestive behaviours (Laca et al., 1992). Acoustic sensors are used to automatically recognize JM-events (Ferrero et al., 2023; Li et al., 2021), estimate rumination and grazing bouts (Vanrell et al., 2018), distinguish between plants and feedstuffs eaten (Galli et al., 2020; Milone et al., 2012), and estimate differences in dry matter intake (Galli et al., 2018). Despite this progress, the lack of public datasets makes it difficult to develop reliable acoustic methods (Cockburn, 2020), and there is therefore room for improvement in the acoustic monitoring of free-grazing cattle.
In recent years, acoustic methods for the recognition of foraging activities have appeared. Vanrell et al. (2018) developed a method based on the analysis of the autocorrelation of the acoustic recording for the recognition of foraging activities. This method operates offline since it requires storing the entire signal before processing it to make inferences. The Bottom-Up Foraging Activity Recognizer (BUFAR) proposed by Chelotti et al. (2020) uses segments of identified JM-events to determine grazing and rumination bouts. BUFAR operates online, meaning that the input acoustic signal is processed on a sample-by-sample basis to make inferences about foraging activity. BUFAR outperformed the former method with significantly lower computational costs. More recently, Chelotti et al. (2023) proposed an online Jaw Movement segment-based Foraging Activity Recognizer (JMFAR) that does not rely on specific information on identified JM-events, allowing for better recognition of grazing and rumination bouts. However, a major limitation of BUFAR and JMFAR is their limited ability to recognize foraging activities in the presence of noisy environments (Chelotti et al., 2023). To be a reliable and useful tool, acoustic monitoring methods must work properly in adverse environmental conditions that involve external noises. Motivated by this need, this paper describes an alternative acoustic method for the recognition of grazing and rumination of free-range cattle. The proposed method involves a noise-robust methodology for the detection and classification of the JM-events required to recognize the foraging activities. Therefore, the main contributions of this work are: (_i_) to present an online acoustic method for the estimation of grazing and rumination bouts of cattle with a low associated computational cost; it analyses segments of identified JM-events associated
with grazing and rumination to delimit activity bouts; (_ii_) the proposed method recognizes foraging activities in noiseless and noisy free-range environments, using a robust JM-event recognizer able to identify JM-events under different operating conditions; and (_iii_) artificial noise sounds of different natures are used to simulate multiple adverse acoustic scenarios in controlled experiments.
The rest of this paper is organised as follows: Section 2 describes briefly a system for the recognition of foraging activities and analyses the operation and limitations of BUFAR. Then, the proposed algorithm is introduced. This section also outlines the acquisition of datasets, the experimental setup and the performance metric used to validate the algorithms. The comparative results for the proposed and former algorithms are shown in Section 3. Section 4 explains and discusses the results of this work. Finally, the main conclusions follow in Section 5.
## 2 Material and Methods
### Current acoustic method analysis
In this section, a brief description of two current acoustic foraging activity recognizers, BUFAR and JMFAR, and of their limitations is presented. Both methods follow the general structure of a typical pattern recognition system (Bishop, 2006; Martinez Rau et al., 2020), and can be represented with the common block diagram shown in Fig. 1. A foraging activity recognizer can be decomposed into three temporal levels: bottom, middle, and top. These levels operate at the millisecond, second and minute scales, respectively. A JM-event recognizer operates at both the bottom and middle levels in order to detect and classify different types of JM-events. First, the input digitised sound is conditioned using signal processing techniques. Then, signals of interest are computed in the bottom level and used in the middle level by a JM detector based on adaptive thresholds. When a JM is detected, a set of distinctive JM features is computed over a time window centred on the JM. Finally, a machine-learning model utilises the extracted set of JM features to classify the JM-event with a corresponding timestamp. The top level of the system analyses segments of JM information provided by the previous two levels to determine the corresponding foraging activity. In this level, the JM information is buffered in fixed-length segments. A set of activity features is computed over the segments and used by a classifier to determine the predominant activity being performed by the animal.
As previously mentioned, the type and sequence of distinctive JM-events can be analysed to recognize foraging activities. Inspired by this, the BUFAR uses a real-time JM-event recognizer developed by Chelotti et al. (2018) to detect and classify JM-events into three different classes: chews, bites, and chew-bites. The sequence of recognized JM-events, along with their corresponding timestamps, is the JM information for the activity recognizer (see the top level of Fig. 1). The JM information is analysed in fixed-length segments. For each segment, a set of four statistical activity features is extracted, including (_i_) the rate of
JM-events, and the proportion of the JM-events corresponding to the classes (_ii_) chew, (_iii_) bite and (_iv_) chew-bite. These features are then used by a multilayer perceptron (MLP) classifier (Bishop, 2006) to determine the activities performed. However, inherent detection and classification errors of JM-events may cause misclassification of foraging activities. A more detailed description of BUFAR is provided by Chelotti et al. (2020).
The JMFAR method overcomes the limitation of BUFAR because it does not compute information from recognized JM-events. Instead, JMFAR analyses fixed-length segments from detected JM. The same JM-events recognizer used by BUFAR is used to compute the JM information. JM information consists of the signal used to detect the JM, the timestamps of detected JM and the extracted set of JM features. JM information, analysed in fixed-length segments, is employed to compute a set of activity features. The set of twenty-one statistical, temporal and spectral features serves as input to an MLP classifier that determines the corresponding activity performed. A more detailed description of JMFAR is provided by Chelotti et al. (2023).
The great sensitivity to noise of the JM-event recognizer used in BUFAR and JMFAR can lead to misclassification of foraging activities. When the input audio signal is contaminated by noise, accurate detection of JM, computation of JM features, and classification of JM-events are significantly impacted (Martinez-Rau et al., 2022). As a result, the noise directly impacts the JM information and consequently affects the computation of the set of activity features, leading to possible misclassification of the activity.
### Proposed foraging activity recognizer
The activity recognition in quiet and noisy conditions could be improved by using a better JM-event recognizer. This work proposes an online method called _Noise-Robust Foraging Activity Recognizer_ (NRFAR). NRFAR is inspired by the operating principle of BUFAR and introduces the use of the Chew-Bite Energy Based Algorithm (CBEBA) for the recognition of JM-events in diverse environments (Martinez-Rau et al., 2022). This allows for later classification of foraging activities by analysing fixed-length segments of recognized JM-events.
The CBEBA is a real-time pattern recognition method, able to distinguish four JM-event classes: _rumination-chews_, _grazing-chews_, _bites_ and _chew-bites_. It outperforms previously published methods both in the detection and in the classification of JM-events in both noiseless and noisy environments. Briefly, the implementation of CBEBA can be internally divided into four successive stages (Fig. 1):
* Signal processor: the digitised input audio signal undergoes a second-order band-pass filter to isolate the JM frequency range. The filtered signal is then squared to obtain the instantaneous power signal. To reduce computation, the resulting power signal is used to compute two additional down-sampled signals: a decimated envelope signal and an energy signal calculated by frames (a code sketch of this stage follows the list).
* JM detector: the presence of a peak in the envelope signal above a time-varying threshold indicates the detection of a candidate JM-event. When this indication occurs, the energy signal is compared to another adaptive threshold to delimit the boundaries of the candidate JM-event.
* JM feature extractor: both delimited signals are used to extract a set of five robust JM features.
* JM classifier: the computed set of JM features is used to decide whether the candidate JM-event should be classified or discarded. If classified, a multilayer perceptron (MLP) classifier determines the class of the JM-event. Furthermore, the adaptive thresholds are tuned based on the signal-to-noise ratio (SNR) estimated over the envelope and energy signals.
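The signal-processor stage lends itself to a compact implementation. The sketch below is our own reading of that stage, not the published CBEBA code: the band edges, the maximum-based envelope decimation and the 0.5 s frame length are assumptions, while the 2 kHz input rate and 150 Hz internal rate follow the values quoted in Appendix A.

```python
import numpy as np
from scipy.signal import butter, lfilter

def signal_processor(audio, fs_in=2000, fs_mid=150, band=(100.0, 900.0)):
    """Sketch of the CBEBA signal-processor stage: second-order band-pass
    filter, instantaneous power, then two down-sampled signals."""
    # butter(1, ..., btype="band") yields a second-order band-pass filter
    b, a = butter(1, [band[0] / (fs_in / 2), band[1] / (fs_in / 2)], btype="band")
    power = lfilter(b, a, audio) ** 2                 # instantaneous power
    step = max(fs_in // fs_mid, 1)                    # decimate to ~150 Hz
    envelope = np.maximum.reduceat(power, np.arange(0, len(power), step))
    frame = fs_in // 2                                # 0.5 s frames (assumed)
    n_frames = len(power) // frame
    energy = power[: n_frames * frame].reshape(n_frames, frame).sum(axis=1)
    return envelope, energy
```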
The top level of the proposed NRFAR processes the JM information provided by the JM-event recognizer CBEBA in fixed-length segments to establish the corresponding foraging activity. The JM information is the recognized JM-events, along with their respective timestamps. Each fixed-length segment of JM information is used to generate a set of five activity features: (_i_) the rate of JM-events, and the proportion of the JM-events corresponding to the classes (_ii_) rumination-chew, (_iii_) grazing-chew, (_iv_) bite and (_v_) chew-bite. Based on the duration of the segments analysed in the article that presents BUFAR (Chelotti et al., 2020), the same fixed-length segments used in BUFAR (5 min) are implemented in the proposed method. Segments of 5 min duration provide sufficient JM information to generate a reliable set of statistical activity features, without significantly affecting the correct estimation of foraging activity bouts. The set of extracted activity features feeds an MLP activity classifier to label the foraging activity. The classified label outputs are further smoothed using a third-order median filter to reduce the fragmentation of the recognized activity bouts.
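The top level is simple enough to spell out in code. The sketch below computes the five activity features of one 5-min segment and applies the third-order median smoothing; the data layout (a list of timestamped, labelled JM-events) and the function names are our own illustrative choices.

```python
import numpy as np
from scipy.signal import medfilt

JM_CLASSES = ("rumination-chew", "grazing-chew", "bite", "chew-bite")

def activity_features(segment_events, segment_len_s=300.0):
    """Five activity features of one fixed-length (5 min) segment:
    the JM rate plus the proportion of each of the four JM-event classes.
    `segment_events` is a list of (timestamp, label) pairs."""
    n = len(segment_events)
    rate = n / segment_len_s
    props = [sum(1 for _, lab in segment_events if lab == c) / max(n, 1)
             for c in JM_CLASSES]
    return np.array([rate, *props])

def smooth_activity(labels):
    """Third-order median filter over per-segment class indices,
    reducing fragmentation of the recognized activity bouts."""
    return medfilt(np.asarray(labels, dtype=float), kernel_size=3).astype(int)
```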
### Database description
The fieldwork to obtain acoustic signals took place at the Michigan State University's Pasture Dairy Research Center (W.K. Kellogg Biological Station, Hickory Corners, MI, USA) from July 31 to August 19, 2014. The procedures for animal handling, care and use were reviewed and approved by the Institutional Animal Care and Use Committee of Michigan State University. Cows were handled on a pasture-based robotic milking system with unrestricted cow traffic as described by Watt et al. (2015). Cows were voluntarily milked \(3.0\pm 1.0\) times per day using two Lely A3-Robotic milking units (Lely Industries NV, Maassluis, The Netherlands). Inside the dairy barn, dairy cows were fed with a grain-based concentrate. Cows had 24 h access to grazing paddocks with a predominance of tall fescue (Lolium arundinaceum), orchardgrass (Dactylis glomerata) and white clover (Trifolium repens), or perennial ryegrass (Lolium perenne) and white clover. From a herd of 146 lactating high-producing multiparous Holstein cows, 5 animals were selected to record acoustic signals and to continuously monitor their foraging behaviour in a non-invasive way. Specific information on the
grain-based concentrate, pasture on paddocks and individualised characteristics of the 5 dairy cows are given in Chelotti et al. (2023, 2020).
Individualised 24 h of continuous acoustic recordings were obtained on 6 non-consecutive days. The foraging behaviour of the 5 dairy cows was recorded by 5 independent recording systems that were rotated daily, according to a 5 x 5 Latin-square design. This setup made it possible to verify differences in sound signals associated with a particular recording system, cow or experimental day. The recording systems were randomly assigned to the cows on the first day. On the sixth day, the same order was used to reassign the recording systems to the cows. No prior training was considered necessary for the use of the recording systems before the start of the study.
Figure 1: General block diagram of the BUFAR, JMFAR and the proposed NRFAR methods divided into temporal scales. The JM information transferred to the top level is different in each method.

Each recording system comprises two directional electret microphones connected to a digital recorder (Sony Digital ICD-PX312, Sony, San Diego, CA, USA). The digital recorder was protected in a weatherproof case (1015 Micron Case Series, Pelican Products, Torrance, CA, USA), mounted to the top side of a halter neck strap (Fig. 2). One microphone was positioned facing inwards in a non-invasive way, pressed against the cow's forehead, to collect the sounds produced by the animal as vibrations transmitted through the bones. The other microphone was placed facing outwards to capture environmental sounds. The microphones kept the intended location by using a rubber foam and elastic headband attached to the halter. This design prevented microphone movements, reduced noise caused by wind and protected microphones from friction and scratches (Milone et al., 2012). The digital recorders save the audio recordings in MP3 format (Brandenburg and Stoll, 1994) with a resolution of 16-bit at a sampling rate of 44.1 kHz. Each microphone records in an individual channel of the stereo MP3 files. In this study, the stereo MP3 files were converted to mono WAV files, and only those corresponding to the microphones facing inwards were used.
The fieldwork employed an experienced animal handler who had extensive knowledge in data collection of animal behaviour. The handler observed the animals for blocks of approximately 5 minutes per hour during daylight hours to ensure the proper placement and positioning of recording systems on the cows. The observations were conducted from a distance to minimise potential disruptions in animal behaviour. Additionally, the handler registered in a logbook the observed foraging activities and other relevant parameters. The ground truth identification of foraging activities was carried out by two experts with long experience in foraging behaviour scouting and in the digital analysis of acoustic signals. An expert listened to the audio recordings to identify, delimit, and label activities, guided by the logbook. The results were double-inspected and checked by the other expert. Although the experts agreed on all label assignments, there were some small differences in the start or end times of certain labels. In those cases, the experts collaborated to reach a mutual agreement on the labels. Activity blocks were labelled as _grazing_, _rumination_ or _other_.
Figure 2: Recording system used to record the acoustic signals composed of microphones (a) that are covered by rubber foam and an elastic headband (b), which are wired and plugged (c) to a digital recorder placed inside a waterproof case (d) attached to a neck halter.

Audio clips from two open acoustic datasets were used to evaluate the algorithms under adverse conditions. The process for selecting the useful audio clips is shown in Fig. 3. The first dataset is a labelled collection of 2000 environmental audio clips of 5 s duration, organised into 50 categories with 40 audio clips per category (Piczak, 2015). The second dataset is a multi-labelled collection of 51,197 audio clips, with a mean duration of 7.6 s, unequally distributed into 200 categories (Fonseca et al., 2022). To represent environmental and natural noises commonly found in field pastures, the categories _"aeroplane"_, _"chirping birds"_, _"cow"_, _"crickets"_, _"engine"_, _"insects"_, _"rain"_, _"thunderstorm"_ and _"wind"_ from the first dataset and _"aircraft"_, _"animal"_, _"bird vocalisation and birds call and bird song"_, _"car passing by"_, _"cowbell"_, _"cricket"_, _"engine"_, _"fixed-wing aircraft and aeroplane"_, _"frog"_, _"insect"_, _"livestock and farm animals and working animals"_, _"rain"_, _"raindrop"_, _"thunder"_ and _"wind"_ from the second dataset were selected. These categories were grouped into four exclusive sets according to their nature, as follows:
1. Animals = {_animal_, _bird vocalisation and birds call and bird song_, _chirping birds_, _cow_, _cowbell_, _cricket_, _frog_, _insect_, _insects_, _livestock and farm animals and working animals_}
2. Vehicles= {_aeroplane_, _aircraft_, _car passing by, engine_, _fixed-wing aircraft and aeroplane_}
3. Weather = {_rain_, _raindrop_, _thunder_, _thunderstorm_, _wind_}
4. Mixture = {_Animals_, _Vehicles_, _Weather_}
The audio clips of the sets were listened to by the experts, and those that did not correspond with possible field pasture conditions were discarded. Overall, 3042 useful audio clips lasting 13.1 h were identified. For reproducibility, a list of selected audio clips is available as supplementary material.
### Experimental setup
NRFAR was coded, trained, and tested in Matlab R2019b (MathWorks, Natick, MA, USA), following a stratified 5-fold cross-validation scheme. In this study, a set of 349.4 h of outdoor audio recordings, composed of 50.5% _grazing_, 34.9% _rumination_ and 14.6% _other_ activities, was used. The imbalanced distribution of classes is consistent with typical cattle behaviours (Kilgour, 2012). Therefore, the test data were not balanced by class. From all available training data in each fold, 30% of the majority class (_grazing_) was randomly undersampled and 100% of the minority class (_other_) was synthetically oversampled (He et al., 2008), to generate a balanced dataset for training (35.6% _grazing_, 35.1% _rumination_ and 29.3% _other_ activities). The activity classifier is an MLP neural network formed by five input neurons (the number of input features), one hidden layer, and three output neurons (the number of output labels corresponding to the activity classes). The activation functions used by the hidden and output layers are the hyperbolic tangent sigmoid and softmax transfer functions, respectively. During the MLP training phase, the scaled conjugate gradient backpropagation algorithm was used to find the optimal weights and biases of the network. The hyper-parameters of the MLP classifier were also optimised: the learning rate and the number of neurons in
the hidden layer were fitted using a grid-search method. The learning rate was evaluated at values of 0.1, 0.01, 0.001, and 0.0001, while the number of neurons was evaluated within a range of 4 to 10.
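A rough equivalent of this training protocol can be written with scikit-learn, as sketched below. Note that scikit-learn's MLP does not implement Matlab's scaled conjugate gradient, so the default Adam optimiser stands in for it here; the grid values are the ones quoted above, while the class re-balancing step is omitted and everything else is an assumption.

```python
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neural_network import MLPClassifier

def fit_activity_classifier(X, y, seed=0):
    """Sketch: stratified 5-fold CV with a grid search over the learning
    rate and the number of hidden neurons, tanh hidden activation.
    X holds the five activity features per segment, y the activity labels."""
    grid = {
        "learning_rate_init": [0.1, 0.01, 0.001, 0.0001],
        "hidden_layer_sizes": [(k,) for k in range(4, 11)],
    }
    mlp = MLPClassifier(activation="tanh", max_iter=1000, random_state=seed)
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    search = GridSearchCV(mlp, grid, cv=cv, scoring="balanced_accuracy")
    return search.fit(X, y)
```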
Figure 3: Top-down scheme for selecting useful audio clips.

External noises may reduce the operability of acoustic foraging activity recognizers operating in free-range conditions. The particular properties of these noise sources, including their finite duration and limited bandwidth, make them difficult to distinguish and quantify in the context of this study, which analysed almost 350 h of audio recordings. Although the audio recordings might occasionally contain some noise, the signals were assumed to be free of noise, that is, to have an infinite SNR. The noise robustness of the proposed method was evaluated in five trials for various levels of contamination with noise, measured in terms of the SNR in a range from 20 to -15 dB in steps of 5 dB. In each trial, a different noise source was artificially added to the audio recordings used for testing, and the result was then normalised. A stationary Gaussian white noise source was used in one trial, as this is one of the most accepted ways to test algorithm noise robustness (Saez et al., 2016). White noise is an _"infinite"_ bandwidth signal with constant power spectral density across all frequencies. Furthermore, the previously mentioned sets of audio clips (_Animals_, _Vehicles_, _Weather_ and _Mixture_) were used in four trials to represent non-stationary environmental and natural noises present on pasture. In each trial, audio clips belonging to a category were selected randomly without replacement to represent the artificial noise source used to contaminate the original audio recordings. Some examples of waveforms and spectrograms at several SNRs produced during grazing and rumination are shown in the Supplementary Material.
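The contamination procedure reduces to scaling the noise so that the mixture attains the target SNR. A minimal sketch, assuming signal and noise are NumPy arrays at the same sampling rate and that normalisation means peak normalisation:

```python
import numpy as np

def add_noise_at_snr(signal, noise, snr_db):
    """Contaminate `signal` with `noise` at a prescribed SNR (in dB),
    then peak-normalise the mixture."""
    noise = np.resize(noise, signal.shape)          # loop/trim noise clips
    p_sig = np.mean(signal ** 2)
    p_noise = np.mean(noise ** 2)
    # SNR = 10*log10(p_sig / (gain**2 * p_noise))  =>  solve for gain
    gain = np.sqrt(p_sig / (p_noise * 10 ** (snr_db / 10)))
    mixed = signal + gain * noise
    return mixed / np.max(np.abs(mixed))
```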
The audio signals were divided into non-overlapping 1-s frames as described by Chelotti et al. (2023). The list of labelled blocks with the corresponding activity class and bouts was separated into 1-s frame sequences to provide a high-resolution activity recognition analysis. This was performed on both the algorithm output and the ground truth, allowing a direct comparison. A total of 1,257,759 frames were generated from the 349.4 h of audio recordings. This total corresponds to 635,291, 439,262 and 183,206 frames of _grazing_, _rumination_ and _other_ activities, respectively. For each audio signal, the balanced accuracy metric was calculated using the scikit-learn 1.2.2 library in Python1 (Pedregosa et al., 2011). This metric provides a good indicator of performance for imbalanced multiclass problems (Mosley, 2013).
Footnote 1: [https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.balanced_accuracy_score.html)
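A sketch of this frame-level evaluation follows; the representation of bouts as (start, end, label) triples is an assumption, and the scorer is the scikit-learn function cited in the footnote.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def frame_level_score(pred_bouts, true_bouts, duration_s):
    """Expand labelled bouts [(start_s, end_s, label), ...] into 1-s frame
    sequences and score them with balanced accuracy."""
    def to_frames(bouts):
        frames = np.full(int(duration_s), "other", dtype=object)
        for start, end, label in bouts:
            frames[int(start):int(end)] = label
        return frames
    return balanced_accuracy_score(to_frames(true_bouts), to_frames(pred_bouts))
```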
## 3 Results
NRFAR properly classified \(\geq\) 88.2% of the frames into the grazing or rumination classes, thus showing a significant improvement compared with the average of 79.5% for BUFAR and 84.3% for JMFAR (Fig. 4). BUFAR exhibited the lowest recognition rate on the activities of interest but the highest recognition for other activities (88.1%). Moreover, the confusion between grazing and rumination was lower for NRFAR (\(\leq\) 1.2%) than for BUFAR (\(\geq\) 11.2%) and JMFAR (\(\geq\) 5.1%). The computational cost of NRFAR, expressed in terms of operations per second (ops/s), was 13.8% higher than that of BUFAR (43,185 ops/s vs. 37,966 ops/s) and 16.8% lower than that of JMFAR (43,185 ops/s vs. 50,445 ops/s), with marginal variations among them. A detailed analysis, with the assumptions on the operations involved, is available in Appendix A.
The robustness to adverse conditions of the proposed NRFAR method was evaluated and compared against BUFAR and JMFAR methods using different noise sources at multiple SNR levels. Gaussian white noise was added to the audio signals in appropriate proportions, to achieve the desired SNR. Fig. 5 shows the balanced accuracy, averaged over the audio signals, obtained with each method under different SNR conditions. NRFAR outperformed JMFAR and BUFAR in all cases (p < 0.01; Wilcoxon signed-rank test (Wilcoxon, 1945)). The overall performance (average and SD) of NRFAR remained approximately constant, ranging from \(0.89\pm 0.11\) to \(0.87\pm 0.12\) for SNR \(\geq\) 5 dB. Furthermore,
the performance of JMFAR was higher (ranging from \(0.82\pm 0.14\) to \(0.47\pm 0.08\)) than that of BUFAR (ranging from \(0.79\pm 0.16\) to \(0.39\pm 0.06\)), except for \(0\geq\mathrm{SNR}\geq-5\) dB.
In a more challenging and realistic scenario, sounds of animals, vehicles, weather, and a mixture of these sounds were used as noise sources to contaminate the audio signals in independent trials. The balanced accuracy metrics reported by the methods using the four noise sources are shown in Fig. 6. The performance of NRFAR decreased as the SNR decreased. However, the performance of BUFAR and JMFAR generally increased for SNR between 20 dB and 10 dB. Overall, NRFAR outperformed BUFAR and JMFAR, particularly for \(\mathrm{SNR}\geq 15\) dB and for \(\mathrm{SNR}\leq 0\) dB. NRFAR presented higher balanced accuracy than BUFAR in all cases (p < 0.01). Additionally, NRFAR outperformed JMFAR for \(\mathrm{SNR}\geq 20\) dB and \(\mathrm{SNR}\leq-5\) dB (p < 0.01). The results of comparing NRFAR with JMFAR for SNR between 15 dB and 0 dB were not always statistically significant, although NRFAR presented higher performances than JMFAR in Fig. 5. On the other hand, JMFAR presented higher average balanced accuracy than BUFAR in the full SNR range for the four noise sources, particularly for \(10\geq\mathrm{SNR}\geq 0\) dB (with p < 0.01 in most cases). The statistical significance test values obtained in the experiments are available in Appendix B.
Figure 4: Confusion matrices for different foraging activities for the (a) BUFAR, (b) JMFAR and (c) NRFAR methods.

The previously reported results have been rearranged to provide a different interpretation. Fig. 7 shows the performance degradation of the NRFAR, JMFAR and BUFAR methods for the different noise sources. In Fig. 7a, the average balanced accuracy of NRFAR ranged from [0.88 - 0.89] for SNR = 20 dB to [0.52 - 0.39] for -15 dB. NRFAR reached higher performance when Gaussian white noise was used. For a particular SNR value, NRFAR performed similarly across the noise sources representing more realistic acoustic pasture conditions. This small performance difference between noise sources was also present in JMFAR (Fig. 7b). In Fig. 7c, BUFAR presented a bigger performance difference between noise sources. Like NRFAR, BUFAR exhibited higher performance when Gaussian white noise was used.
## 4 Discussion
Figure 5: Performance rates for NRFAR, BUFAR and JMFAR methods using additive Gaussian white noise at several SNR levels.

Accurately detecting and classifying the most important foraging activities of ruminants provides useful information to monitor their welfare and health, and to gain insight into their pasture dry matter intake and utilisation (Liakos et al., 2018). This is typically achieved using accelerometer, pressure or acoustic sensors. Nonetheless, commercial nose-band pressure sensors require handlers to analyse the raw data recorded on a computer (Riaboff et al., 2022). On the other hand, ensuring the proper location, orientation, and attachment of accelerometer sensors to prevent motion can become a laborious task for handlers (Li et al., 2021). Meeting these requirements is even more challenging in free-ranging conditions. Therefore, acoustic sensors are preferable for practical use in such conditions (Shen et al., 2020). Existing state-of-the-art acoustic methods for estimating the foraging activities of cattle, called BUFAR and JMFAR, are based on the analysis of fixed-length segments of sound signals. However, misclassification of foraging activities is still a challenge. This study proposes an improved online acoustic foraging activity recognizer (NRFAR) that analyses statistical features of identified JM-event classes. Inspired by the pattern recognition system of BUFAR, NRFAR uses the CBEBA method to recognize JM-events into four classes: _rumination-chews_, _grazing-chews_, _bites_, and _chew-bites_. The proposed method represents a significant improvement over the previous BUFAR method, which only distinguished between bites, chew-bites, and chews, without discriminating between rumination and grazing chew events. Similarly, the JMFAR method does not require identifying JM-events to delimit grazing and rumination bouts. Instead, it extracts information from the detected JM in the segment.
Figure 6: Performance rates for NRFAR, BUFAR and JMFAR methods using noises commonly present on pasture at several SNR levels.

Figure 7: Variation of the performance metric across different noise sources for (a) NRFAR, (b) JMFAR and (c) BUFAR. Marked points are the balanced accuracy average over signals at a particular SNR level.

Results showed that the average correct recognition rate of the activities of interest (_grazing_ and _rumination_) for NRFAR was 91.5%, exceeding BUFAR by 12.0% and JMFAR by 7.2% (Fig. 4). Importantly, this improvement in activity recognition was achieved without incurring substantial changes in computational costs. The remarkable performance improvement of NRFAR is due to the improved discrimination of JM-events associated with rumination and grazing by CBEBA. This allows for the generation of a reliable set of activity features with more specific and relevant information to enhance activity classification. NRFAR presents a minimal confusion of \(\leq 1.2\%\) between _grazing_ and _rumination_, which is lower than the confusion reported by BUFAR (\(\geq\) 11.2%) and JMFAR (\(\geq\) 5.1%). The authors hypothesise that misclassification of foraging activities is reduced because it depends mainly on the misrecognition of JM-events associated with _rumination_ (_rumination-chew_) and grazing (_grazing-chew_, _bite_ and _chew-bite_), and not on confusion among all possible JM-event classes. Therefore, NRFAR is less sensitive to JM-event misclassification than BUFAR. Likewise, discrimination between foraging activities and other activities presented greater error in the proposed method (\(\geq\) 4.1%). This confusion is also observed in BUFAR and JMFAR and could be related to the great diversity of behaviour represented by the _other_ class. From a productivity standpoint, confusion of 5% or more between _grazing_ and _rumination_ can significantly affect diagnoses about feeding performance (e.g., low dry matter intake) (Watt et al., 2015) or metabolic imbalances of nutritional origin in ruminants (e.g., subacute ruminal acidosis) (Beauchemin, 2018).
Acoustic methods often have lower performance in confined environments like barns due to the high levels and varying types of noise present there. Acoustic reverberation in confined environments means that the noises have to be considered convolutional. In free-ranging conditions, noises are still present but are less intense and frequent, and can be considered additive. To reduce the unwanted effects of acoustic noise, an appropriate microphone setup (as shown in Fig. 2) can be used. Hence, the proper operation of acoustic methods in free-ranging conditions is not necessarily compromised. To be a useful and effective tool for farmers and handlers, an acoustic foraging activity recognizer must work well in adverse field conditions. In this study, the noise robustness of NRFAR was evaluated and compared with that of former methods by adding artificial noise to the original audio signals at different levels (\(20\geq\mathrm{SNR}\geq-15\) dB), even higher than those produced by real noises in typical pasture environments (Bishop et al., 2019). The noise robustness of the methods was first evaluated using a stationary noise source (Fig. 5): artificial random Gaussian white noise was used to contaminate the audio signals. A white noise signal has a theoretical _"infinite"_ bandwidth and constant power spectral density across all frequencies, which can degrade important acoustic cues over the entire frequency range. All methods kept their respective balanced accuracy practically constant for SNR \(\geq\) 5 dB. At these SNR levels, JMFAR performed better than BUFAR. The decreasing performance of JMFAR for SNR < 5 dB was due to the limited robustness to noise of the JM information from detected JM-events analysed to recognize foraging activities (Fig. 3). Furthermore, BUFAR outperformed JMFAR for moderate noise levels (\(0\geq\mathrm{SNR}\geq-5\) dB) due to the higher robustness to noise of the JM information from recognized JM-events used by BUFAR. Additionally, NRFAR outperformed the other methods for the whole range considered in these numerical experiments (SNR \(\geq\) -15 dB).
The effect of different non-stationary noise sources commonly present on pasture, such as sounds produced by animals, vehicles, weather, and a mixture of these sounds, was evaluated. Fig. 6 showed that JMFAR outperformed BUFAR, which is consistent with the results of Chelotti et al. (2023). In addition, the
proposed method outperformed the previous methods in 78 out of 80 evaluated scenarios, with 66 of those cases having statistical significance (p < 0.01), as in the evaluations using Gaussian white noise (Fig. 5). It should be noted that the biggest differences in favour of NRFAR were observed for SNR \(\geq\) 15 dB and SNR \(\leq\) 0 dB, but NRFAR performed similarly to JMFAR for \(10\geq\mathrm{SNR}\geq 5\) dB. In low noise conditions, the high performance of NRFAR was related to the identification of the JM-event classes associated with rumination and grazing using the CBEBA method, which was then used to compute the set of features for the classification of activities. In high noise conditions, the performance of NRFAR was due to the great robustness and discriminative power of the feature set used to classify the JM-events by CBEBA (Martinez-Rau et al., 2022a).
The robustness of each method to the different noise sources was analysed. The performance of NRFAR using the four non-stationary noise sources was similar across sources for a particular SNR level (Fig. 7a), despite the fact that these noise sources have different spectral energy distributions (Ozmen et al., 2022). A similar situation was observed for JMFAR (Fig. 7b), unlike BUFAR (Fig. 7c). It is noteworthy that NRFAR and BUFAR performed better when evaluated with stationary Gaussian white noise compared to the non-stationary noise sources (Fig. 7a and Fig. 7c). Non-stationary noise sources have uncertain onset, offset, and duration, which can lead to false detection of JM, classifying noises as JM-events (middle level of Fig. 1). Fig. 7b shows that JMFAR performed similarly with all noise sources for SNR \(\geq\) -5 dB because it did not depend on the identification of JM-events. However, for SNR < -5 dB, all methods were more robust to Gaussian white noise due to its stationary property.
## 5 Conclusion
This study proposes an improvement over former acoustic methods to recognize and delimit foraging activity bouts of grazing cattle. Inspired by the former BUFAR method, the proposed NRFAR method analyses fixed-length segments of recognized JM-events. NRFAR uses a robust JM recognizer that discriminates JM-events associated with grazing and rumination under different operating conditions. This allows NRFAR to recognize foraging activities even in adverse free-range scenarios. The method has shown a significant improvement in performance and tolerance to noise over state-of-the-art acoustic methods. The evaluation of noise robustness was performed by artificially adding different amounts of stationary Gaussian white noise and of non-stationary natural noise commonly present in free-range environments. Future work should extend the analysis from fixed-length segments to variable-length segments using dynamic segmentation, to facilitate a more accurate estimation of the foraging bouts of interest. Likewise, the proposed method can be used as a reference for the development of new methods, based on multi-modal sensor data, to recognize feeding activities in more adverse environments such as barns.
## Acknowledgments
This study was undertaken with investment funding and projects from the Universidad Nacional del Litoral (CAID 50620190100080LI and 50620190100151LI); Universidad Nacional de Rosario (projects 2013-AGR216, 2016-AGR266 and 80020180300053UR); Agencia Santafesina de Ciencia, Tecnologia e Innovacion (project IO-2018-00082); CONICET (project 2017-PUE sinc(i)); and USDA-NIFA (projects MICL0222 and MICL0406). The authors wish to express their gratitude to the KBS Robotic Dairy Farm staff who were involved in the research. In addition, we acknowledge the direct support from AgBioResearch-MSU. The authors would like to thank Constanza Quaglia (technical staff, CONICET) and J. Tomas Molas G. (technical staff, UNER-UNL) for their technical support in the achievement of the web demo.
## Appendix A Computational cost
The computational cost of NRFAR depends on the input audio sampling frequency, the sub-sampling frequency used internally in CBEBA (fixed at \(f_{s}=150\ Hz\) in this analysis, according to its optimal value), the configuration of the two MLP neural networks used to classify the JM-events and foraging activities, and the duration of the segment lengths (fixed at 5 min). To obtain a valid comparison with other methods, an input sampling frequency of \(f_{i}=2\ kHz\) and 2 JM-events per second was chosen. Further, the worst-case computational cost scenario for both MLP classifiers was selected. In addition, any arithmetic operation, arithmetic shift or logic comparison is counted as an operation. The required number of operations per second for the computation
stages of each level of NRFAR were:
_Bottom level_:
1. Audio pre-processing: limiting the bandwidth with a second-order band-pass filter and computing the instantaneous power signal require \(7*f_{i}\) and \(f_{i}\) ops/s, respectively. Then, 16,000 ops/s are required.
2. Signal computation: computing and decimating the envelope signal requires \(11*f_{i}+150\) ops/s. Computing the energy signal by frames requires \(f_{i}+300\) ops/s. Altogether, this stage requires 24,450 ops/s.
_Middle level_:
1. JM-event detection: \(4+0.925*f_{s}\) and \(12+f_{s}\) operations per JM-event are necessary to detect and delimit the boundaries of JM-events. Then, this stage takes 610 ops/s.
2. Feature extraction: \(3.5*f_{s}\) operations per JM-event are necessary to compute the set of JM features. In total, 1050 ops/s are required.
3. JM-event classification: deciding whether an event should be classified requires \(f_{s}+3\) operations per JM-event, whereas the MLP using 7 neurons in the hidden layer requires 192 operations per JM-event, thus, 690 ops/s are required.
4. Tuning parameters: \(f_{s}+39\) operations per JM-event are necessary to update the thresholds. Then, 378 ops/s are required.
_Top level_:
1. Segment buffering: this stage requires 2 operations per JM-event equivalent to 4 ops/s.
2. Feature extraction: computing the set of activity features requires 608 ops/segment.
3. Activity classification: considering the maximum number of neurons (10) in the hidden layer, the MLP requires 215 ops/segment.
4. Smoothing process: this filtering stage takes 2 ops/segment.
Finally, the entire computational cost of NRFAR is 43,182 ops/s + 825 ops/segment \(\approx\) 43,185 ops/s. Similar to BUFAR, the overall computational cost almost exclusively depends on the bottom and middle levels of Fig. 1 (i.e. the JM-event recognizer), because the top level is only executed once every 5 min (segment length). Hence, the total computational cost of NRFAR could also be expressed as approximately 12,955,500 ops/segment.
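The budget above can be re-derived mechanically. The short sketch below aggregates the per-stage counts under the stated assumptions (\(f_{i}=2\) kHz, \(f_{s}=150\) Hz, 2 JM-events/s, 5-min segments); per-stage values are rounded as in the text.

```python
# Re-derive NRFAR's operation budget from the per-stage counts above.
f_i, f_s, jm, seg = 2000, 150, 2, 300

bottom   = (7 * f_i + f_i) + (11 * f_i + 150) + (f_i + 300)   # 40,450 ops/s
detect   = round(jm * ((4 + 0.925 * f_s) + (12 + f_s)))       # 610 ops/s
features = round(jm * 3.5 * f_s)                              # 1,050 ops/s
classify = round(jm * ((f_s + 3) + 192))                      # 690 ops/s
tune     = round(jm * (f_s + 39))                             # 378 ops/s
buffer_  = jm * 2                                             # 4 ops/s
per_seg  = 608 + 215 + 2                                      # 825 ops/segment

ops_per_s = bottom + detect + features + classify + tune + buffer_
print(ops_per_s, "ops/s +", per_seg, "ops/segment")           # 43,182 + 825
print(round(ops_per_s + per_seg / seg))                       # ~43,185 ops/s
```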
## Appendix B Statistical hypothesis test
The statistical significance of the discrepancies in balanced accuracy between NRFAR and BUFAR, NRFAR and JMFAR, and JMFAR and BUFAR was evaluated using the Wilcoxon signed-rank test (Wilcoxon, 1945). Tables 1, 2 and 3 show the p-values obtained from the comparison of these methods.
P-values with a green background indicate a significant difference in the performance of the two methods at a significance level of 1% (p < 0.01), while p-values with a pink background indicate a non-significant difference.
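For reference, the test behind these tables is one line of SciPy; a minimal sketch, assuming one balanced-accuracy value per audio signal for each method at a given (noise source, SNR) condition:

```python
from scipy.stats import wilcoxon

def compare_methods(acc_a, acc_b, alpha=0.01):
    """Paired Wilcoxon signed-rank test on per-signal balanced accuracies
    of two methods; returns the p-value and significance at the 1% level."""
    stat, p = wilcoxon(acc_a, acc_b)
    return p, p < alpha
```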
\begin{table}
\begin{tabular}{|l|l l l l l|} \hline
\multirow{2}{*}{SNR [dB]} & \multicolumn{5}{c|}{NRFAR vs JMFAR} \\ \cline{2-6}
 & Animals & Transport & Weather & Mixture & White \\ \hline
20 & 1.89e-03 & 1.46e-05 & 6.01e-06 & 2.69e-05 & 5.15e-07 \\
15 & 4.43e-02 & 8.65e-03 & 1.44e-03 & 3.31e-02 & 6.36e-05 \\
10 & 4.91e-01 & 2.22e-01 & 8.52e-01 & 1.60e-01 & 2.46e-06 \\
5 & 7.85e-01 & 4.00e-01 & 5.24e-01 & 9.55e-01 & 1.18e-08 \\
0 & 1.94e-03 & 6.73e-07 & 9.82e-02 & 4.08e-02 & 3.52e-16 \\
-5 & 8.18e-06 & 4.12e-10 & 4.02e-03 & 7.32e-03 & 1.32e-14 \\
-10 & 7.50e-04 & 1.51e-13 & 1.43e-03 & 1.36e-06 & 2.30e-07 \\
-15 & 1.98e-06 & 3.20e-11 & 2.15e-03 & 1.16e-06 & 2.93e-03 \\ \hline
\end{tabular}
\end{table}
Table 2: Statistical significance p-values obtained by comparing the performance of the NRFAR and JMFAR methods with different noise sources at several noise levels.
\begin{table}
\begin{tabular}{|l|l l l l l|} \hline
\multirow{2}{*}{SNR [dB]} & \multicolumn{5}{c|}{NRFAR vs BUFAR} \\ \cline{2-6}
 & Animals & Transport & Weather & Mixture & White \\ \hline
20 & 1.89e-03 & 1.46e-05 & 6.01e-06 & 2.69e-05 & 5.15e-07 \\
15 & 4.43e-02 & 8.65e-03 & 1.44e-03 & 3.31e-02 & 6.36e-05 \\
10 & 4.91e-01 & 2.22e-01 & 8.52e-01 & 1.60e-01 & 2.46e-06 \\
5 & 7.85e-01 & 4.00e-01 & 5.24e-01 & 9.55e-01 & 1.18e-08 \\
0 & 1.94e-03 & 6.73e-07 & 9.82e-02 & 4.08e-02 & 3.52e-16 \\
-5 & 8.18e-06 & 4.12e-10 & 4.02e-03 & 7.32e-03 & 1.32e-14 \\
-10 & 7.50e-04 & 1.51e-13 & 1.43e-03 & 1.36e-06 & 2.30e-07 \\
-15 & 1.98e-06 & 3.20e-11 & 2.15e-03 & 1.16e-06 & 2.93e-03 \\ \hline
\end{tabular}
\end{table}
Table 1: Statistical significance p-values obtained by comparing the performance of the NRFAR and BUFAR methods with different noise sources at several noise levels.
\begin{table}
\begin{tabular}{|l|l l l l l|} \hline
\multirow{2}{*}{SNR [dB]} & \multicolumn{5}{c|}{JMFAR vs BUFAR} \\ \cline{2-6}
 & Animals & Transport & Weather & Mixture & White \\ \hline
20 & 9.97e-02 & 9.40e-02 & 2.11e-01 & 2.29e-01 & 1.64e-01 \\
15 & 2.56e-03 & 7.41e-02 & 8.61e-02 & 5.44e-02 & 1.36e-01 \\
10 & 2.77e-15 & 7.47e-02 & 6.24e-06 & 2.85e-07 & 1.59e-02 \\
5 & 1.19e-17 & 5.81e-07 & 6.91e-15 & 2.39e-17 & 1.04e-01 \\
0 & 3.64e-06 & 1.03e-03 & 7.38e-11 & 2.67e-07 & 7.03e-11 \\
-5 & 1.07e-01 & 7.51e-01 & 8.86e-02 & 8.04e-02 & 2.97e-07 \\
-10 & 2.08e-01 & 2.52e-01 & 6.95e-03 & 2.52e-03 & 5.80e-02 \\
-15 & 2.36e-02 & 5.00e-01 & 2.73e-02 & 6.36e-02 & 3.54e-11 \\ \hline
\end{tabular}
\end{table}
Table 3: Statistical significance p-values obtained by comparing the performance of the JMFAR and BUFAR methods with different noise sources at several noise levels. |
2308.14529 | Property (T) and Many Quotients | We prove that, for the free algebra over a sufficiently rich operad, a large
subgroup of its group of tame automorphisms has Kazhdan's property (T).
We deduce that there exists a group with property (T) that maps onto large
powers of alternating groups. | Laurent Bartholdi, Martin Kassabov | 2023-08-28T12:38:54Z | http://arxiv.org/abs/2308.14529v1 | # Property (T) and many quotients
###### Abstract.
We prove that, for the free algebra over a sufficiently rich operad \(\mathcal{O}\), a large subgroup of its group of tame automorphisms has Kazhdan's property (T).
We deduce that there exists a group with property (T) that maps onto large powers of alternating groups.
L.B. is supported in part by the ERC AdG grant 101097307. M.K. is supported in part by the Simons Foundation grant 713557.
The operad \(\mathcal{O}\) is _generated_ by the set \(S\) if each operation in \(\mathcal{O}(n)\) can be obtained as a composition of operations in \(S\). For simplicity we will not allow operations of arity zero (i.e., constants) in our operads; however we do not put any other restrictions, so for example operations of arity \(1\) (namely maps) are allowed.
We shall not need much from the theory of operads, so we concentrate immediately on a special case that serves our purposes: the _free operad_ on a finite graded set \(S\), which can be defined by the usual universal property, and also has the following concrete description. Let \(S\) be a finite set of abstract operations \(\{\star_{s}:s\in S\}\), each with its _arity_\(\operatorname{\mathsf{ar}}(s)\in\mathbb{N}\). We denote by \(\operatorname{\mathsf{ar}}(S)=\max\{\operatorname{\mathsf{ar}}(s):s\in S\}\) the maximal arity of \(S\). The _free operad_\(\mathcal{O}_{S}\) on \(S\) consists of all compositions of operations in \(S\), with an ordering of their inputs. The elements of \(\mathcal{O}_{S}(n)\) are rooted trees with \(n\) leaves numbered \(1,\ldots,n\), with at each non-leaf vertex a label \(s\in S\) and \(\operatorname{\mathsf{ar}}(s)\) descendants in a given order.
**Definition 1**.: Let \(R\) be a commutative ring, and let \(\mathcal{O}\) be an operad. An _\(\mathcal{O}\)-algebra over \(R\)_ is an \(R\)-module \(A\) endowed with a family of \(R\)-multilinear maps \(A^{n}\to A\), one for each element of \(\mathcal{O}(n)\), satisfying the usual operad axioms.
If \(\mathcal{O}\) is the free operad on \(S\), this is equivalent to being given a family of \(R\)-multilinear maps \(\star_{s}\colon A^{\operatorname{\mathsf{ar}}(s)}\to A\), one for each \(s\in S\).
In the category of \(\mathcal{O}\)-algebras over \(R\) there is a free object on any set \(X\) which we denote by \(R\langle X\rangle_{\mathcal{O}}\). As an \(R\)-module, it is generated by all rooted trees of height \(1\) with leaves labeled by \(X\) and an element in \(\mathcal{O}(n)\) labelling the root.
Note that \(R\langle X\rangle_{\mathcal{O}}\) has a natural grading in which all variables have degree \(1\) and all operations have degree \(0\). If \(X\) is finite and \(\mathcal{O}\) is finitely generated and does not contain any operations of arity \(1\), then the homogeneous components of \(R\langle X\rangle_{\mathcal{O}}\) are finitely generated \(R\)-modules. In the case of a free operad \(\mathcal{O}_{S}\), the free algebra \(R\langle X\rangle_{\mathcal{O}_{S}}\) has homogeneous components of arbitrarily large degrees provided that \(S\) contains at least one operation of arity \(\geq 2\).
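As a concrete illustration, the ranks of the homogeneous components of \(R\langle X\rangle_{\mathcal{O}_{S}}\) can be computed by counting the trees described above. The Python sketch below assumes a free operad whose generating operations all have arity at least \(2\) (so each component is a finitely generated free module); it is only meant to illustrate the grading and is not taken from the paper.

```python
from functools import lru_cache

def graded_dims(arities, n_vars, max_deg):
    """Rank of each homogeneous component of the free O_S-algebra on
    n_vars generators, counting planar trees with labelled leaves;
    all arities are assumed >= 2."""
    def compositions(d, k):        # ordered k-tuples of positive ints summing to d
        if k == 1:
            yield (d,)
            return
        for first in range(1, d - k + 2):
            for rest in compositions(d - first, k - 1):
                yield (first,) + rest

    @lru_cache(maxsize=None)
    def t(d):                      # number of trees with d leaves
        count = n_vars if d == 1 else 0
        for k in arities:          # root labelled by an arity-k operation
            for split in compositions(d, k):
                term = 1
                for di in split:
                    term *= t(di)
                count += term
        return count

    return [t(d) for d in range(1, max_deg + 1)]

# a single binary operation and one variable: the Catalan numbers
print(graded_dims((2,), 1, 7))   # [1, 1, 2, 5, 14, 42, 132]
```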
## 3. Tame automorphisms
Let \(\mathcal{O}\) be an operad generated by a finite set \(S\) of operations as in the previous section. Consider a commutative ring \(R\), and let \(F=R\langle X\rangle_{\mathcal{O}}\) denote the free \(\mathcal{O}\)-algebra on \(X=\{x_{0},\ldots,x_{n-1}\}\). We shall consider a certain subgroup \(\Gamma_{n,N,\mathcal{O}}\) of group of _tame_ automorphisms of \(F\).
For \(0\leq i<n\) and \(f\in R\langle x_{0},\ldots,\widehat{x_{i}},\ldots,x_{n-1}\rangle_{\mathcal{O}}\), consider the _transvection_
\[t_{i}(f)\colon F\to F,\quad t_{i}(f)(x_{j})=\begin{cases}x_{j}+f&\text{ if }i=j,\\ x_{j}&\text{ otherwise}.\end{cases}\]
Evidently \(t_{i}(f)\) is an automorphism of \(F\), with inverse \(t_{i}(-f)\). By definition, the group of _tame automorphisms_ of \(F\) is the group generated by all such transvections.
**Definition 2**.: Let \(S\) be a generating set of an operad \(\mathcal{O}\), and choose \(n>\max\{\operatorname{\mathsf{ar}}(S),2\}\). Let \(N\in\mathbb{N}\) be arbitrary, and consider the ring \(R=\mathbb{Z}[1/N]\). The group \(\Gamma_{n,N,\mathcal{O}}\) is defined
as the subgroup of \(\operatorname{Aut}(F)\) generated by1
Footnote 1: By its construction, the group depends not only on the operad \(\mathcal{O}\) but also on the choice of the generating set \(S\). This dependence is very mild as we will show in Theorem 5, and is not reflected in the notation.
\[\alpha_{i} \coloneqq t_{i-1}(x_{i})\text{ for }1\leq i<n,\] \[\alpha_{n} \coloneqq t_{n-1}(x_{0}/N),\] \[\beta_{s} \coloneqq t_{0}(\star_{s}(x_{1},\dots,x_{\operatorname{\mathsf{ ar}}(s)}))\text{ for }s\in S.\]
For brevity we write it simply \(\Gamma_{n}\) when the dependency on \(N\) and \(\mathcal{O}\) is irrelevant.
Our first main result is that the group \(\Gamma_{n}\) has property (T) as soon as the parameter \(N\) is large enough:
**Theorem 3**.: _If \(N\) is divisible by all primes \(p\leq 3+\operatorname{\mathsf{ar}}(S)+4\sqrt{\operatorname{\mathsf{ar}}(S)-1}\), then \(\Gamma_{n,N,\mathcal{O}}\) has Kazhdan's property (T)._
Proof.: Notice first that the automorphisms \(\alpha_{1},\dots,\alpha_{n}\) generate \(\operatorname{SL}_{n}(R)\), and recall that \(\operatorname{SL}_{n}(R)\) has property (T) since \(n\geq 3\).
There exists therefore a constant \(\delta\) such that, for any representation of \(\Gamma_{n}\) on a Hilbert space \(\mathscr{H}\), any \(\epsilon>0\) and any vector \(v\in\mathscr{H}\) which is \(\epsilon\)-almost invariant under the action of the generators of \(\Gamma_{n}\), we have that \(v\) is \(\delta\epsilon\)-almost invariant under \(\operatorname{SL}_{n}(R)\), and in particular under all \(t_{i}(rx_{j})\) with \(0\leq i\neq j<n\) and \(r\in R\), and also under all \(t_{0}(r\star_{s}(x_{1},\dots,x_{\operatorname{\mathsf{ar}}(s)}))\) with \(r\in R\) and \(s\in S\) since these are words of bounded length in \(\operatorname{SL}_{n}(R)\) and the generators of \(\Gamma_{n}\).
Consider the following abelian subgroups of \(\Gamma_{n}\):
\[G_{0} =\langle t_{0}(rx_{1}),t_{0}(r\star_{s}(x_{1},\dots,x_{ \operatorname{\mathsf{ar}}(s)})):r\in R,s\in S\rangle,\] \[G_{i} =\{t_{i}(rx_{i+1\bmod n}):r\in R\}\text{ for }1\leq i<n.\]
Then by the previous paragraph the pairs \((\Gamma_{n},G_{i})\) all have relative property (T).
For all \(i<j\), the group generated by \(G_{i}\) and \(G_{j}\) is either abelian or nilpotent of class \(2\): if \(0<i<j-1\) then it is abelian, isomorphic to \(R^{2}\); if \(0<i=j-1\) then it is isomorphic to the Heisenberg group of upper-triangular \(3\times 3\) matrices over \(R\); and if \(i=0\) then it is isomorphic to a subgroup of a product, over \(s\in S\), of either \(R^{2}\) (if \(\operatorname{\mathsf{ar}}(S)<j<n-1\)) or the Heisenberg group (if \(j\leq\operatorname{\mathsf{ar}}(S)\) or \(j=n-1\)).
It follows that, in a representation as above, the Friedrichs angles between invariant subspaces for \(G_{i},G_{j}\) satisfy
\[0\leq\cos\sphericalangle(\mathscr{H}^{G_{i}},\mathscr{H}^{G_{j}})\leq\begin{cases} 0&\text{ if }\langle G_{i},G_{j}\rangle\text{ is abelian,}\\ p^{-1/2}&\text{ otherwise,}\end{cases}\]
where \(p\) is the smallest prime not dividing \(N\). (Recall that the angle between two subspaces \(V,W\leq\mathscr{H}\) is the smallest angle between vectors in \(V\cap(V\cap W)^{\perp}\) and \(W\cap(W\cap V)^{\perp}\)). Indeed, it suffices to consider representations of the Heisenberg group over \(\mathbb{Z}/p\), which have dimension \(1\) or \(\geq p\); and then the bound on the angles is [3, Theorem 4.4].
To apply [9, Theorem 1.2], it remains to prove that the following matrix is positive definite:
\[\Delta\coloneqq\begin{pmatrix}1&-\varepsilon&\cdots&-\varepsilon&0&\cdots&0&- \varepsilon\\ -\varepsilon&1&\ddots&&&&&&0\\ \vdots&\ddots&\ddots&-\varepsilon&&&&\vdots\\ -\varepsilon&0&-\varepsilon&1&-\varepsilon&&&&&\\ 0&&&&-\varepsilon&1&\ddots&&\vdots\\ \vdots&&&&\ddots&\ddots&-\varepsilon&0\\ 0&&&&&&&-\varepsilon&1&-\varepsilon\\ -\varepsilon&0&\cdots&&\cdots&0&-\varepsilon&1\end{pmatrix}\]
for \(\varepsilon=p^{-1/2}\), with terms '\(-\varepsilon\)' appearing one step away from the diagonal and in the first \(\mathsf{ar}(S)\) entries of the first row and column.
We can decompose \(\Delta\) as the sum of a circulant matrix \(\Delta_{1}\) with \(2\varepsilon\) on the diagonal and \(-\varepsilon\) at the cyclically adjacent off-diagonal positions, and a matrix \(\Delta_{2}\) with \(1-2\varepsilon\) on the diagonal and \(-\varepsilon\) on the first \(\mathsf{ar}(S)-1\) entries of the first row and column. The matrix \(\Delta_{1}\) is positive semidefinite as soon as \(\varepsilon>0\), while \(\Delta_{2}\) has eigenvalues \(1-2\varepsilon\) (with multiplicity \(n-2\)) and \(1-2\varepsilon\pm\sqrt{\mathsf{ar}(S)-1}\,\varepsilon\) (each with multiplicity \(1\)). Thus \(\Delta_{2}\) is positive definite when \(\sqrt{\mathsf{ar}(S)-1}\,\varepsilon<1-2\varepsilon\). We deduce that \(\Delta\) is positive definite when \(\varepsilon<1/(2+\sqrt{\mathsf{ar}(S)-1})\).
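The positive-definiteness claim is easy to probe numerically. The sketch below builds \(\Delta\) for small parameters, with our reading of the indexing (the cyclically adjacent entries plus the first \(\mathsf{ar}(S)\) off-diagonal entries of the first row and column carry \(-\varepsilon\)), and prints the smallest eigenvalue; for \(\mathsf{ar}(S)=2\) the proof guarantees positivity once \(\varepsilon=p^{-1/2}<1/3\), i.e. for primes \(p\geq 11\), while smaller primes may or may not fail.

```python
import numpy as np

def delta(n, ar, eps):
    """The matrix Delta: 1 on the diagonal, -eps at cyclically adjacent
    positions, and -eps in the first ar entries of row/column 0."""
    D = np.eye(n)
    for i in range(n):
        D[i, (i + 1) % n] = D[(i + 1) % n, i] = -eps
    for j in range(2, ar + 1):     # entry (0,1) is already set above
        D[0, j] = D[j, 0] = -eps
    return D

for p in (2, 3, 5, 7, 11, 13):
    lam = np.linalg.eigvalsh(delta(8, 2, p ** -0.5)).min()
    print(f"p = {p:2d}   min eigenvalue of Delta = {lam:+.3f}")
```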
**Remark 4**.: The bound for \(p\) in Theorem 3 is not optimal, but it cannot be improved significantly. It can be shown that \(\Delta\) is not positive definite when \(\varepsilon>1/\max(2,\sqrt{\mathsf{ar}(S)-1})\), so the best bound is at least \(\max(4,\mathsf{ar}(S)-1)\).
Our next result is that the group \(\Gamma_{n}\) contains a substantial part of the tame automorphism group of \(F\), and really depends on \(\mathcal{O}\) (as the notation suggests) and only mildly on the choice of its generating set \(S\):
**Theorem 5**.: _For all \(f\in R\langle x_{1},\ldots,x_{n-1-\mathsf{ar}(S)}\rangle_{\mathcal{O}}\), the group \(\Gamma_{n,N,\mathcal{O}}\) contains the transvection \(t_{0}(f)\)._
Proof.: By linearity, it is sufficient to prove this for elements of the free \(S\)-magma, namely for rooted trees. We proceed by induction on the tree's height, the case of single leaves being covered by the elementary matrices in \(\operatorname{SL}_{n}(R)\subset\Gamma_{n,N,\mathcal{O}}\).
Consider therefore \(s\in S\), write \(k=\mathsf{ar}(s)\), and consider a term \(f=\star_{s}(f_{1},\ldots,f_{k})\) with \(f_{1},\ldots,f_{k}\in R\langle x_{1},\ldots,x_{\ell}\rangle_{\mathcal{O}}\) for some \(\ell\in\mathbb{N}\) satisfying \(k+\ell<n\). By induction, there are transvections \(t_{0}(f_{i})\) in \(\Gamma_{n}\), and since \(\Gamma_{n}\) contains all even permutations of the variables we may assume by induction that \(\Gamma_{n}\) contains the transvections
\[\gamma_{j}\coloneqq t_{\ell+j}(f_{j})\text{ for }1\leq j\leq k.\]
Note that the \(\gamma_{j}\) all commute with each other. There is also in \(\Gamma_{n}\) a conjugate \(\beta^{\prime}_{s}\) of \(\beta_{s}\), namely the transvection
\[\beta^{\prime}_{s}\coloneqq t_{0}(\star_{s}(x_{\ell+1},\ldots,x_{\ell+k})).\]
By a direct computation,
\[[\beta^{\prime}_{s},\gamma_{j}] =(\beta^{\prime}_{s})^{-1}(\beta^{\prime}_{s})^{\gamma_{j}}=t_{0}(-\star_{s}(x_{\ell+1},\ldots,x_{\ell+k}))\;t_{0}(\star_{s}(x_{\ell+1},\ldots,x_{\ell+j}+f_{j},\ldots,x_{\ell+k}))\] \[=t_{0}(\star_{s}(x_{\ell+1},\ldots,f_{j},\ldots,x_{\ell+k})),\]
by multilinearity of \(\star_{s}\) in its \(j\)th argument; so the iterated commutator \([\ldots[\beta^{\prime}_{s},\gamma_{1}],\ldots,\gamma_{k}]\) is the transvection \(t_{0}(f)\), which thus belongs to \(\Gamma_{n}\).
## 4. Representations
For an \(\mathcal{O}\)-algebra \(A\) and \(X=\{x_{0},\ldots,x_{n-1}\}\), consider the set \(\mathscr{R}_{n,A}\) of \(\mathcal{O}\)-algebra homomorphisms \(R\langle X\rangle_{\mathcal{O}}\to A\). Such a homomorphism is uniquely determined by the images of \(x_{0},\ldots,x_{n-1}\), which are arbitrary elements of \(A\) since \(R\langle X\rangle_{\mathcal{O}}\) is free. We may therefore naturally identify \(\mathscr{R}_{n,A}\) with \(A^{n}\).
The automorphism group of \(R\langle X\rangle_{\mathcal{O}}\) naturally acts on \(\mathscr{R}_{n,A}\) by pre-composition. Under the identification of \(\mathscr{R}_{n,A}\) with \(A^{n}\), the generators \(\alpha_{i}\) (for \(1\leq i<n\)) act as \((a_{0},\ldots,a_{n-1})\mapsto(a_{0},\ldots,a_{i}-a_{i+1},\ldots,a_{n-1})\), etc.
Furthermore, the action of the automorphism group of \(R\langle X\rangle_{\mathcal{O}}\) commutes with the action of the automorphism group of \(A\) by post-composition. Again choosing \(R=\mathbb{Z}[1/N]\), we obtain an action of \(\Gamma_{n,N,\mathcal{O}}\) on \(A^{n}/\mathrm{Aut}(A)\).
**Definition 6**.: An \(\mathcal{O}\)-algebra \(A\) over \(R\) is called _minimal_ if its only subalgebras are \(A\) and the \(0\)-submodule. Here by a subalgebra of \(A\) we mean an \(R\)-submodule which is closed under all operad operations.
**Theorem 7**.: _Let \(A\) be an \(\mathcal{O}\)-algebra, and choose \(n\geq\mathsf{ar}(S)+2\)._
1. _If_ \(A\) _is minimal and non-trivial then the action of_ \(\Gamma_{n,N,\mathcal{O}}\) _on_ \(A^{n}\) _has two orbits: the fixed point_ \(0^{n}\) _and a large orbit consisting of all other points._
2. _If_ \(A\) _is minimal then the induced action of_ \(\Gamma_{n}\) _on_ \(\Omega_{n,A}\coloneqq(A^{n}\setminus 0^{n})/\mathrm{Aut}(A)\) _is_ \(k\)_-transitive, for all_ \(k\) _less than the number of_ \(\mathrm{Aut}(A)\)_-orbits in_ \(A\)_. In particular, if_ \(A\) _is finite and_ \(|\Omega_{n,A}|>25\) _and_ \(\mathrm{Aut}(A)\) _has at least_ \(6\) _orbits on_ \(A\) _then_ \(\Gamma_{n}\) _acts on_ \(\Omega_{n,A}\) _as a full alternating or symmetric group._
3. _If_ \(A,A^{\prime}\) _are two non-isomorphic minimal algebras then the actions of_ \(\Gamma_{n}\) _on_ \(\Omega_{n,A}\) _and on_ \(\Omega_{n,A^{\prime}}\) _are not isomorphic._
We begin with an analogue of the Chinese Remainder Theorem for minimal algebras:
**Lemma 8**.: _Let the \(\mathcal{O}\)-algebra \(A\) be minimal. For any elements \(a_{1},\ldots,a_{k}\in A\setminus 0\) in distinct \(\mathrm{Aut}(A)\)-orbits and for every \(b_{1},\ldots,b_{k}\in A\) there exists \(v\in R\langle x\rangle_{\mathcal{O}}\) such that the substitution \(x\mapsto a_{i}\) maps \(v\) to \(b_{i}\), i.e., \(\mathsf{ev}_{a_{i}}(v)=b_{i}\) where \(\mathsf{ev}_{a}\colon R\langle x\rangle_{\mathcal{O}}\to A\) is the evaluation map \(x\mapsto a\)._
Proof.: The proof is by induction on \(k\). The base case \(k=1\) follows from the minimality of \(A\) which implies that the evaluation map \(\mathsf{ev}_{a_{1}}\) is a surjective map \(R\langle x\rangle_{\mathcal{O}}\to A\). Assuming the statement for \(k\), the evaluation maps at \(a_{1},\ldots,a_{k}\) yield a surjection
\[\mathsf{ev}_{a_{1}}\times\cdots\times\mathsf{ev}_{a_{k}}\colon R\langle x \rangle_{\mathcal{O}}\to A^{k}.\]
The kernel \(V\) of this map is a subalgebra of \(R\langle x\rangle_{\mathcal{O}}\) because \(\mathcal{O}\) has no constants; and the evaluation at \(a_{k+1}\) maps \(V\) to a subalgebra of \(A\). Since \(A\) is minimal, the image is either the whole of \(A\), proving the induction step, or is \(0\). In the latter case \(\mathsf{ev}_{a_{k+1}}\) is identically zero on \(V\), so induces a (still surjective) algebra homomorphism \(A^{k}\to A\). Pre-composing this homomorphism with the \(i\)th embedding \(A\to A^{k}\) we obtain a homomorphism \(\phi_{i}\colon A\to A\) mapping \(a_{i}\) to \(a_{k+1}\); and \(\phi_{i}\) is non-zero so its kernel is \(0\) and its image is \(A\), i.e., \(\phi_{i}\) is an automorphism of \(A\); therefore \(a_{i}\) and \(a_{k+1}\) are in the same orbit of \(\mathrm{Aut}(A)\), contradicting the assumption that \(a_{1},\ldots,a_{k+1}\) lie in distinct orbits.
Proof of Theorem 7.: (1) Consider \(A\) a minimal algebra and \(a\in A\) a non-zero element. Since \(A\) is minimal, \(a\) generates the whole algebra \(A\), and we will show that the \(\Gamma_{n}\)-orbit of \((a,0,\ldots,0)\) contains every non-zero element of \(A^{n}\).
Consider \((a_{0},\ldots,a_{n-1})\in A^{n}\setminus 0^{n}\). Since \(\Gamma\) contains the group of even permutations we can assume that \(a_{n-1}\neq 0\). Thus, each of \(a_{0}-a,a_{1},a_{2},\ldots,a_{n-2}\) may be respectively written as an expression \(v_{i}(a_{n-1})\) since \(a_{n-1}\) is non-zero and thus generates the algebra \(A\). By Theorem 5 and conjugation, the transvection \(t_{i}(v_{i})\) belongs to \(\Gamma_{n}\) for all \(0\leq i\leq n-2\). Applying them in sequence, we see that \((a,0,\ldots,0,a_{n-1})\) is in the same orbit as \((a_{0},a_{1},\ldots,a_{n-1})\). Finally, \(a_{n-1}\) may be written as an expression in \(a\), and another transvection from \(\Gamma_{n}\) sends \((a,0,\ldots,0,a_{n-1})\) to \((a,0,\ldots,0)\).
(2) For the second statement, we shall prove that the action of \(\Gamma_{n}\) is \(k\)-transitive whenever \(k\) is at most the number of \(\operatorname{Aut}(A)\)-orbits on \(A\setminus 0\). Using Lemma 8, the proof of \(k\)-transitivity is standard. Consider \(a_{1},\ldots,a_{k}\in A\setminus 0\) in different orbits under \(\operatorname{Aut}(A)\). Let \(v_{1},\ldots,v_{k}\) be vectors in \(A^{n}\setminus 0\) which are in different \(\operatorname{Aut}(A)\)-orbits under the diagonal action. We use induction on \(k\) to show that there is an element in \(\Gamma_{n}\) which sends \(v_{i}\) to \((a_{i},0,\ldots,0)\) for all \(i=1,\ldots,k\). The base case \(k=1\) is the first statement of the theorem. For the induction step we can assume that \(v_{i}=(a_{i},0,\ldots,0)\) for \(i=1,\ldots,k\). If some coordinate \(b_{k+1,j}\) of \(v_{k+1}\) is non-zero for some \(j>0\), then we can find a transvection which changes the zeroth coordinate of \(v_{k+1}\) to \(a_{k+1}\) and fixes \(v_{i}\) for \(i=1,\ldots,k\) and then use Lemma 8 to move the resulting vector to \((a_{k+1},0,\ldots,0)\). Otherwise the zeroth coordinate of \(v_{k+1}\) is in a different \(\operatorname{Aut}(A)\)-orbit than \(a_{1},\ldots,a_{k}\) and again by Lemma 8 we can find a transvection which fixes \(v_{1},\ldots,v_{k}\) and makes some other coordinate of \(v_{k+1}\) non-zero.
The final claim in (2) follows from the well-known fact that there are no highly transitive groups acting on large finite sets except the alternating and the symmetric group.
(3) For the last statement, let us assume that the actions of \(\Gamma_{n}\) on \(\Omega_{n,A}\) and on \(\Omega_{n,A^{\prime}}\) are isomorphic. Then, using the language of group theory, we can characterize the respective subsets \((A\setminus 0)\times 0^{n-1}\) and \((A^{\prime}\setminus 0)\times 0^{n-1}\) as the fixed sets of all transvections \(t_{i}(a)\) with \(1\leq i<n\). The action of transvections \(t_{0}(a)\) being isomorphic on these two sets then directly lets us reconstruct the \(\mathcal{O}\)-algebra structure on \(A,A^{\prime}\) from the \(\Gamma_{n}\)-action.
**Remark 9**.: It is likely that the minimality assumption on \(A\) can be replaced with a weaker one, such as simplicity plus a small extra assumption (such as a bound on the number of generators of the subalgebras of \(A\)). This will slightly change the statement, to a claim that there is one large orbit consisting of all generating tuples of \(A\). However, this will significantly complicate the proof, see [2].
**Remark 10**.: The last conclusion of (2) relies on the classification of finite simple groups. This dependence can be avoided when \(\operatorname{Aut}(A)\) is much smaller than \(A\), since it can be shown without using the classification that there are no non-trivial \(k\)-transitive groups on \(n\) points for \(k\gg\log n\) and \(n\) sufficiently large.
It may seem that Theorem 7 requires a too strong assumption -- minimality of \(A\), rather than, say, simplicity. For example, in the category of associative algebras there are very few minimal algebras (since every minimal algebra is commutative). However, for the free operad \(\mathcal{O}_{S}\) as soon as \(S\) contains enough operations, minimal algebras are the norm rather than the exception:
**Theorem 11**.: _Assume that \(S\) contains at least two operations and that \(\mathcal{O}\) is free on \(S\). Then, for every finite-dimensional vector space \(V\) over a field \(\mathbb{K}\), the collection of minimal \(\mathcal{O}\)-algebra structures on \(V\) is Zariski-dense among all \(\mathcal{O}\)-algebra structures._
_In particular, for every prime \(p\) the proportion of minimal algebras among all \(\mathcal{O}\)-algebra structures on \((\mathbb{Z}/p)^{k}\) is at least \(1-6p^{(1-|S|)(k-1)}\)._
Proof.: Let us first write \(V=\mathbb{K}^{k}\), a \(k\)-dimensional vector space.
A multilinear operation \(\star_{s}\) on \(V\), of arity \(\mathsf{ar}(s)\), is a linear map \(V^{\otimes\mathsf{ar}(s)}\to V\), and the space of such maps has dimension \(k^{\mathsf{ar}(s)+1}\). The set \(\Sigma\) of \(\mathcal{O}\)-algebra structures on \(V\) is therefore a vector space of dimension \(\sum_{s\in S}k^{\mathsf{ar}(s)+1}\).
For any choice of a subspace \(W\leq V\), say of dimension \(d\), the fact that \(\star_{s}\) maps \(W^{\otimes\mathsf{ar}(s)}\) back to \(W\) is a linear condition imposing \(d^{\mathsf{ar}(s)}(k-d)\) independent constraints. The subspace of \(\Sigma\) consisting of algebras for which \(W\) is a subalgebra therefore has codimension \(\sum_{s\in S}d^{\mathsf{ar}(s)}(k-d)\).
The union of all these subspaces, as \(W\) varies over the Grassmann variety of \(d\)-dimensional subspaces, is thus a variety of codimension at least
\[\sum_{s\in S}d^{\mathsf{ar}(s)}(k-d)-d(k-d),\]
which is positive as soon as \(S\) contains at least two operations.
In the case of \(\mathcal{O}\)-algebra structures on \((\mathbb{Z}/p)^{k}\) the above arguments show that the probability of a non-minimal structure is bounded by
\[\sum_{d=1}^{k-1}\binom{k}{d}_{p}\;p^{-\sum_{s\in S}d^{\mathsf{ar}(s)}(k-d)},\]
where the \(p\)-binomial coefficient \(\binom{k}{d}_{p}=(p)_{k}/(p)_{d}(p)_{k-d}\) is the number of subspaces of \((\mathbb{Z}/p)^{k}\) of dimension \(d\); here \((p)_{k}=(1-p)\cdots(1-p^{k})\). Since all operations have arity at least \(1\) and there are \(|S|\) operations we have the following obvious upper bound
\[\sum_{d=1}^{k-1}\binom{k}{d}_{p}\;p^{-|S|d(k-d)}\]
It is not difficult to see that the contribution of each of the terms for \(d=1\) and \(d=k-1\) is bounded above by \(2\frac{p}{p-1}p^{(1-|S|)(k-1)}\), which is \(\leq 3p^{(1-|S|)(k-1)}\) for \(p>2\). For all other terms we can use
\[\binom{k}{d}_{p}\leq\binom{k}{d}p^{d(k-d)},\]
since \(\binom{k}{d}_{p}\) counts strings \(\sigma\in\{0,1\}^{k}\) with \(d\) ones, weighted by \(p^{|\{i<j:\sigma_{i}>\sigma_{j}\}|}\); this gives that the contribution of all other terms is bounded above by
\[\sum_{d=2}^{k-2}\binom{k}{d}\;p^{-(|S|-1)2(k-2)},\text{ which is }\leq(2^{k}-2-2k)p^{-2(|S|-1)(k-2)}\text{ if }k\geq 3.\]
These bounds are sufficient to prove the desired inequality for \(p\geq 3\) or \(k\geq 6\) and the remaining cases can be verified directly.
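The bound can also be illustrated empirically. The following Monte Carlo sketch (Python; not part of the proof, with illustrative parameters \(p=7\), \(k=2\) and \(S\) consisting of two binary operations) estimates the proportion of minimal structures: here the only candidate proper non-zero subalgebras are the \(p+1\) lines \(\mathbb{F}_{p}v\) of \(\mathbb{F}_{p}^{2}\), and by bilinearity such a line is a subalgebra exactly when \(\star_{s}(v,v)\in\mathbb{F}_{p}v\) for both operations.

```python
import itertools, random

p, k = 7, 2  # |S| = 2 binary operations on V = (Z/p)^k

def random_bilinear():
    # random structure tensor c, with star(u, v)_m = sum_ij c[i][j][m] u_i v_j
    return [[[random.randrange(p) for _ in range(k)]
             for _ in range(k)] for _ in range(k)]

def star(c, u, v):
    return tuple(sum(c[i][j][m] * u[i] * v[j]
                     for i in range(k) for j in range(k)) % p
                 for m in range(k))

def on_line(w, v):
    # is w a scalar multiple of the non-zero vector v?
    return any(tuple(a * x % p for x in v) == w for a in range(p))

# one representative vector per line of F_p^2 (first non-zero coordinate = 1)
lines = [(1, b) for b in range(p)] + [(0, 1)]

trials, minimal = 2000, 0
for _ in range(trials):
    ops = [random_bilinear(), random_bilinear()]
    if not any(all(on_line(star(c, v, v), v) for c in ops) for v in lines):
        minimal += 1
# compare with the stated lower bound 1 - 6*p^((1-|S|)(k-1))
print(minimal / trials, ">=", 1 - 6 * p ** ((1 - 2) * (k - 1)))
```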
**Remark 12**.: The probability that a random \(\mathcal{O}\)-structure on \((\mathbb{Z}/p)^{k}\) has a \(1\)-dimensional subalgebra is approximately \(p^{(1-|S|)(k-1)}\), so the above bound is close to optimal. It can be improved to \(1-(2+\epsilon)p^{(1-|S|)(k-1)}/(1-p^{1-|S|})\) for every
\(\epsilon>0\) and large enough \(p\). Of course all these bounds say nothing in case \(k=1\), when every algebra structure is clearly minimal.
The next issue before applying Theorem 7 is to show that generically the automorphism group of an \(\mathcal{O}_{S}\)-algebra is very small. It is reasonable to assume that generically the only automorphisms are scalars -- a quick computation shows that \(\lambda\mathsf{Id}\) is an automorphism of an algebra \(A\) if and only if \(\lambda^{\mathsf{ar}(s)-1}=1\) for all \(s\in S\). Indeed this is the case:
**Theorem 13**.: _Assume that \(S\) contains at least two operations. Then for any prime \(p\geq 2\) most minimal \(\mathcal{O}_{S}\)-algebra structures on \((\mathbb{Z}/p)^{k}\) have "trivial" automorphism group, namely_
\[\operatorname{Aut}(A)=\{\lambda\mathsf{Id}\mid\lambda^{\mathsf{ar}(s)-1}=1, \;\forall s\in S\}.\]
_More precisely the number of minimal algebras with non-trivial automorphism groups is less than \(1/p^{k}\) of all possible algebra structures \(p^{\sum_{s\in S}k^{\mathsf{ar}(s)+1}}\)._
Proof.: Consider \(\phi\in\operatorname{Aut}(A)\); it is a linear map, so is given by a \(k\times k\) matrix. Up to passing to a field extension, there is an eigenvector \(a\in A\otimes K\) with eigenvalue \(\lambda\in K\).
Since \(a\) generates \(A\otimes K\), we have that \(A\otimes K\) is a quotient of \(K\langle x\rangle_{\mathcal{O}}\), so \(\phi\) is uniquely determined by \(a\) and \(\lambda\). Moreover, since \(K\langle x\rangle_{\mathcal{O}}\) is graded, the operator \(\phi\) is diagonalizable with eigenvalues \(\lambda^{i}\). Furthermore, if all operations in \(S\) have arity \(1\) then \(\phi\) is scalar since the whole algebra \(K\langle x\rangle_{\mathcal{O}}\) lies in degree \(1\); while if there are higher-arity operations then \(\lambda\) is a root of unity. In the first case, we are done; in the second case, let \(n\) be the order of \(\lambda\), and for all \(i\in\mathbb{Z}/n\) let \(V_{i}\) be the eigenspace of \(\phi\) with eigenvalue \(\lambda^{i}\), say of dimension \(d_{i}\).
Let us compute the linear conditions imposed on the operations \(\star_{s}\) by the fact that they commute with \(\phi\). Each \(\star_{s}\) must map \(V_{i_{1}}\otimes\cdots\otimes V_{i_{\mathsf{ar}(s)}}\) to \(V_{i_{1}+\cdots+i_{\mathsf{ar}(s)}}\), so the dimension of the space of \(\mathcal{O}\)-algebra structures which commute with \(\phi\) is
\[\sum_{s\in S}\sum_{i_{1},\ldots,i_{\mathsf{ar}(s)}\in\mathbb{Z}/n}d_{i_{1}} \cdots d_{i_{\mathsf{ar}(s)}}d_{i_{1}+\cdots+i_{\mathsf{ar}(s)}}.\]
Since the space \(V_{i_{1}+\cdots+i_{\mathsf{ar}(s)}}\) is not the full space, its dimension is at most \(k-1\). Therefore for each \(s\in S\) we have
\[\sum_{i_{1},\ldots,i_{\mathsf{ar}(s)}\in\mathbb{Z}/n}d_{i_{1}}\cdots d_{i_{ \mathsf{ar}(s)}}d_{i_{1}+\cdots+i_{\mathsf{ar}(s)}}\leq\sum_{i_{1},\ldots,i_{ \mathsf{ar}(s)}\in\mathbb{Z}/n}d_{i_{1}}\cdots d_{i_{\mathsf{ar}(s)}}(k-1)=k^{ \mathsf{ar}(s)+1}-k^{\mathsf{ar}(s)}\]
Thus the total sum is less than
\[\sum_{s\in S}k^{\mathsf{ar}(s)+1}-\sum_{s\in S}k^{\mathsf{ar}(s)}\leq\sum_{s \in S}k^{\mathsf{ar}(s)+1}-k^{2}-k,\]
since by assumption there are at least \(2\) operations in \(S\) and one has arity at least \(2\).
This shows that each candidate \(\phi\) is an automorphism of a minimal algebra structure with probability at most \(p^{-k^{2}-k}\). Since the number of possibilities for \(\phi\) is less than \(p^{k^{2}}\), the probability that an algebra structure is minimal and has a nontrivial automorphism is less than \(p^{-k}\).
**Corollary 14**.: _Let \(S\) consist of one binary operation and one operation of arity \(d\geq 2\). Then the group \(\Gamma_{d+2,N,\mathcal{O}_{S}}\) has property (T) provided that \(N\) is divisible
_by all primes less than \(3+d+4\sqrt{d-1}<5d\). Moreover this group surjects onto \(\operatorname{Alt}(p^{(d+2)k}-1)^{\times p^{k^{d+1}}}\) for all primes \(p>3+d+4\sqrt{d-1}\) and all \(k\geq 1\)._
Proof.: Property (T) for the group \(\Gamma_{d+2,N,\mathcal{O}_{S}}\) is a direct consequence of Theorem 3.
For \(k\geq 2\), there are \(p^{k^{d+1}}\) choices for an operation of arity \(d\) on \((\mathbb{Z}/p)^{k}\). By Theorems 11 and 13 almost all of these operations yield minimal \(\mathcal{O}\)-algebra structures \(A\) on \((\mathbb{Z}/p)^{k}\) with trivial automorphism group. In order to count the non-isomorphic ones we need to divide by the size of the group \(\operatorname{GL}_{k}(\mathbb{Z}/p)\). In the end it is easy to see that there are at least
\[p^{\sum_{s\in S}k^{\mathsf{ar}(s)+1}}\left(1-6p^{-(|S|-1)(k-1)}-p^{-k}\right)/ |\operatorname{GL}_{k}(\mathbb{Z}/p)|\\ \geq p^{k^{d+1}+k^{3}-k^{2}}(1-5p^{-(d-1)(k-1)}-p^{-k})>p^{k^{d+1}}\]
non-isomorphic minimal \(\mathcal{O}_{S}\)-algebra structures on \((\mathbb{Z}/p)^{k}\) with trivial automorphism group. When \(k=1\), it is easy to see that there are at least \(p=p^{k^{d+1}}\) non-isomorphic \(\mathcal{O}_{S}\)-structures on \(\mathbb{Z}/p\). By Theorem 7(2) each of these algebras yields a highly transitive action of \(\Gamma\coloneqq\Gamma_{d+2,N,\mathcal{O}_{S}}\) on \(p^{(d+2)k}-1\) points, yielding an alternating or symmetric quotient of \(\Gamma\) on that many points. Furthermore, all generators of \(\Gamma\) have order \(p\), which is odd by our restrictions, so this quotient is alternating. Since these actions are non-isomorphic, they can be combined into a surjection from \(\Gamma\) to \(\operatorname{Alt}(p^{(d+2)k}-1)^{\times p^{k^{d+1}}}\).
Using results from [10], we can deduce:
**Corollary 15**.: _For every \(d\) there is a group with property \((\tau)\) whose pro-finite completion is_
\[\prod_{n}\operatorname{Alt}(n)^{\times n^{(\log n)^{d}}}.\]
_Idea of the proof._ For any fixed \(d\geq 2\), the previous construction produces a group with property (T) which maps onto \(\prod_{k}\operatorname{Alt}(p^{(d+2)k}-1)^{\times p^{k^{d+1}}}\) for some fixed prime \(p\). This can be combined with the results from [10] to produce a group with \((\tau)\) and pro-finite completion \(\prod_{k}\operatorname{Alt}(p^{(d+2)k}-1)^{\times p^{k^{d+1}}}\). Finally use that
\[\prod_{n=p^{(d+2)k}-1}^{p^{(d+2)(k+1)}-2}\operatorname{Alt}(n)^{\times n^{(\log n)^{d}}}\]
can be boundedly generated by \(p^{d+2}\) copies of \(\operatorname{Alt}(p^{(d+2)k}-1)^{\times p^{k^{d+1}}}\).
We do not know for which functions \(f(n)\) there exists a finitely generated group with property (T) or \((\tau)\) which maps onto \(\operatorname{Alt}(n)^{\times f(n)}\) for all \(n\) -- the above construction shows that this is possible for \(\log f(n)\approx(\log n)^{d}\) for any fixed \(d\), and on the other side one needs \(\log f(n)=O(n\log n)\), since otherwise the minimal number of generators of \(\operatorname{Alt}(n)^{\times f(n)}\) would be unbounded. This question is roughly equivalent to the question of for which functions \(f(n)\) it is possible to turn the Cayley graphs of \(\operatorname{Alt}(n)^{\times f(n)}\) into bounded-degree expanders.
2310.18159 | DESiRED -- Dynamic, Enhanced, and Smart iRED: A P4-AQM with Deep
Reinforcement Learning and In-band Network Telemetry | Active Queue Management (AQM) is a mechanism employed to alleviate transient
congestion in network device buffers, such as routers and switches. Traditional
AQM algorithms use fixed thresholds, like target delay or queue occupancy, to
compute random packet drop probabilities. A very small target delay can
increase packet losses and reduce link utilization, while a large target delay
may increase queueing delays while lowering drop probability. Due to dynamic
network traffic characteristics, where traffic fluctuations can lead to
significant queue variations, maintaining a fixed threshold AQM may not suit
all applications. Consequently, we explore the question: \textit{What is the
ideal threshold (target delay) for AQMs?} In this work, we introduce DESiRED
(Dynamic, Enhanced, and Smart iRED), a P4-based AQM that leverages precise
network feedback from In-band Network Telemetry (INT) to feed a Deep
Reinforcement Learning (DRL) model. This model dynamically adjusts the target
delay based on rewards that maximize application Quality of Service (QoS). We
evaluate DESiRED in a realistic P4-based test environment running an MPEG-DASH
service. Our findings demonstrate up to a 90x reduction in video stall and a
42x increase in high-resolution video playback quality when the target delay is
adjusted dynamically by DESiRED. | Leandro C. de Almeida, Washington Rodrigo Dias da Silva, Thiago C. Tavares, Rafael Pasquini, Chrysa Papagianni, Fábio L. Verdi | 2023-10-27T14:06:57Z | http://arxiv.org/abs/2310.18159v1 | DESiRED - Dynamic, Enhanced, and Smart iRED: A P4-AQM with Deep Reinforcement Learning and In-band Network Telemetry
###### Abstract
Active Queue Management (AQM) is a mechanism employed to alleviate transient congestion in network device buffers, such as routers and switches. Traditional AQM algorithms use fixed thresholds, like target delay or queue occupancy, to compute random packet drop probabilities. A very small target delay can increase packet losses and reduce link utilization, while a large target delay may increase queueing delays while lowering drop probability. Due to dynamic network traffic characteristics, where traffic fluctuations can lead to significant queue variations, maintaining a fixed threshold AQM may not suit all applications. Consequently, we explore the question: _What is the ideal threshold (target delay) for AQMs?_ In this work, we introduce DESiRED (Dynamic, Enhanced, and Smart iRED), a P4-based AQM that leverages precise network feedback from In-band Network Telemetry (INT) to feed a Deep Reinforcement Learning (DRL) model. This model dynamically adjusts the target delay based on rewards that maximize application Quality of Service (QoS). We evaluate DESiRED in a realistic P4-based test environment running an MPEG-DASH service. Our findings demonstrate up to a 90x reduction in video stall and a 42x increase in high-resolution video playback quality when the target delay is adjusted dynamically by DESiRED.
## 1 Introduction
In the modern domain of computer networks, the necessity to meet progressively rigorous service requirements, including ultra-reliable low-latency communications and high bandwidth, has resulted in an unparalleled upsurge in network traffic, amplifying the intricacies associated with traffic management. Consequently, approaches aimed at assisting congestion control mechanisms, such as Active Queue Management (AQM), are consistently embraced.
In scenarios where incoming packet rates exceed a network device's processing capacity, a transient queuing of packets occurs in the appropriate output queue, often causing transmission delays. To mitigate this bottleneck, an effective strategy involves
notifying the packet sender of the congestion status, allowing the congestion control algorithm to reduce transmission rates. The primary methods for conveying congestion conditions to senders include packet marking using Explicit Congestion Notification (ECN) bits and selective packet dropping. These approaches are the predominant means of communicating congestion information in network environments.
Traditionally, AQM mechanisms have been primarily focused on draining packets directly from queues, with the overarching objective of mitigating transient congestion occurrences and reducing the queuing delay. Prominent examples of these traditional AQM algorithms include Random Early Detection (RED) [1], Blue [2], CoDel [3], CAKE [4], and PIE [5]. More recently, owing to the inherent flexibility of the programmable data plane (PDP), the prevailing state-of-the-art AQM solutions designed to operate within PDP hardware environments and made publicly accessible comprise iRED [6], P4-CoDel [7], and the (dual) PI2 [8]. These AQM implementations exemplify the synergy between novel programmable data plane capabilities and the evolving demands of congestion control within modern network infrastructures.
An integral aspect of AQM algorithms pertains to the selection of an appropriate threshold value, often determined based on considerations of either queue delay (referred to as the target delay) or queue depth. Opting for an excessively small threshold value can lead to an increased occurrence of packet losses, resulting in a higher drop probability while reducing overall link utilization. Conversely, employing a high threshold value can lead to extended queuing delays but a lower likelihood of packet drops, characterized by a reduced drop probability. Additionally, the dynamic nature of network traffic necessitates the avoidance of static threshold values for specific applications. In this context, we explore this issue as the **fixed target delay problem**, as illustrated in Fig. 1, delving into the intricate dynamics of threshold determination in AQM algorithms.
At the core of this matter lies a fundamental trade-off, giving rise to a pivotal question: _What is the ideal target delay for AQM?_ Estimating this value presents a challenging task. However, recent advancements in the field of artificial intelligence as applied to computer networks [9] introduce a potential avenue, leveraging the capabilities of Deep Reinforcement Learning (DRL) as a powerful tool to enhance decision-making in addressing this challenge.
Although DRL models are known for their appetite for data, the provision of real
Figure 1: The trade-off: if the target delay is small, it can increase packet losses and decrease link utilization; if it is high, it increases queueing delays and decreases packet drops.
time data at the requisite granularity has posed an obstacle within the realm of computer networks. However, recent advances in the domain of PDP, in tandem with the integration of In-band Network Telemetry (INT) [10], have conducted a paradigm shift. These advancements have presented us with the capability to attain granular visibility, discernible on a per-packet basis, effectively altering the scenario of the challenges associated with data availability in the context of DRL applications within computer networks.
The hypothesis of this study posits that INT measurements can serve as valuable input features for a DRL model. This DRL model is intended to dynamically adjust the target delay, departing from our prior work with a fixed target delay in iRED [6]. The overarching goal is to utilize this DRL model for real-time optimization of QoS, thereby introducing a novel approach aimed at enhancing network performance and adaptability.
iRED represents a pioneering P4-based algorithm that introduced the concept of disaggregated AQM in PDP hardware. Disaggregated AQM involves the segmentation of AQM operations into distinct blocks, specifically Ingress and Egress, within the PDP architecture. Additionally, iRED achieves full compliance with the L4S framework (Low Latency, Low Loss, and Scalable Throughput) [11]. It accomplishes this by categorizing traffic as either Classic (subject to dropping) or Scalable 1 (marked with the ECN bit), thus ensuring fairness among various flows through a combined packet dropping and marking mechanism.
Footnote 1: TCP Prague in L4S framework.
Through the integration of INT, DRL, and the iRED framework, we introduce the innovative paradigm of DESiRED (Dynamic, Enhanced, and Smart iRED). To our knowledge, DESiRED is the first implementation of a dynamic AQM based on the P4 architecture. This advancement combines the cutting-edge capabilities of fine-grained network measurements enabled by INT with the cognitive capabilities provided by the Deep Q-Network (DQN), thereby representing an integrated embodiment of state-of-the-art progress in the field of AQM.
We undertake a comprehensive evaluation of DESiRED within a realistic testbed environment, focusing on the delivery of an MPEG-DASH (Dynamic Adaptive Streaming over HTTP) service [12]. Our experiments involve the provision of diverse video catalogs to video clients traversing a programmable network. Fine-grained INT measurements, collected at line rate in the data plane, are utilized to inform the DRL mechanism in the control plane. The DRL mechanism guides the agent's actions, dynamically adjusting the target delay to optimize the QoS for the DASH service. This forms a Smart Control Closed Loop, as depicted in Fig. 2.
Our empirical findings show that DESiRED can reduce video stall occurrences by a factor of up to 90x. Moreover, it markedly improves the QoS of the MPEG-DASH service, increasing the measured Frames per Second (FPS) by up to 42x and thereby elevating the video streaming experience. In summary, the main contributions of this work are:
1. We design and implement DESiRED, which is a smart Closed Control Loop
that unifies the state of the art in network telemetry (INT), Deep Reinforcement Learning (DQN), and in-network congestion control (AQM).
2. We conduct an evaluation of the DESiRED algorithm within the context of a DASH service. This entails the practical implementation of DESiRED within a real-setup DASH environment, followed by a systematic evaluation of its performance and effectiveness.
3. We have created and made publicly available datasets used throughout our experiments that encompass network and application data, collectively characterizing the complexities of an adaptive video service.
The remainder of the paper is organized as follows. In Section 2 we describe INT and DRL fundamental concepts. Additionally, we detail DESiRED, describing the main components implemented in the P4 language (data plane) and the DRL integration (control plane) in Section 3. In Section 4, the experiments and evaluation are presented, including a brief view of the testbed and the workloads used. Results and discussion are detailed in Section 5. Some Lessons learned are given in Section 6. Finally, the conclusions are depicted in Section 7.
## 2 Background
In this section, we expound upon the foundational principles that underpin the functionality of DESiRED. Sub-section 2.1 provides a concise elucidation of the programmable data plane and In-band Network Telemetry. Furthermore, Section 2.2 delves into the principal facets of Deep Reinforcement Learning.
### In-band Network Telemetry
Recent progress in programmable hardware and the utilization of the P4 language [13] have enabled network devices to autonomously report the network's state, eliminating
Figure 2: The Smart Control Closed Loop with DESiRED: The fine-grained INT measurements provide wide visibility into the state of the network in each observation space. The DRL mechanism guides the agent to dynamically adjust the target delay to maximize the QoS.
the need for direct control plane intervention [14]. In this scenario, packets incorporate telemetry instructions within their header fields, facilitating the fine-grained collection and recording of network data.
The telemetry instructions are defined in the INT data plane specification [10]. Figure 3 illustrates the operation of INT within an arbitrary network. The network comprises four end systems, namely _H1, H2, H3_, and _H4_, along with four nodes equipped with P4 and INT support, denoted as _S1, S2, S3_, and _S4_. Each network node possesses a set of metadata, represented by orange (S1), magenta (S2), green (S3), and blue (S4) rectangles. This metadata contains information specific to each node, such as Node ID, Ingress Port, Egress Spec, Egress Port, Ingress Global Timestamp, Egress Global Timestamp, Enqueue Timestamp, Enqueue Queue Depth, Dequeue Timedelta, and Dequeue Queue Depth, as specified in the V1Model architecture.
In Figure 3, there are two distinct flows depicted: one represented by red packets and the other by black packets. The red flow is required to adhere to the prescribed network path _f1=H1, S1, S3, S4, H4_, while the black flow must traverse the designated path _f2=H1, S1, S2, H2_.
At each network hop along these paths, the data plane of the network devices employs telemetry instructions to facilitate the collection and inclusion of metadata within the packets as they traverse each node. This process is iteratively performed throughout the journey, starting from the first node after the source and concluding at the last node before reaching the destination. Upon reaching the destination node, the metadata is extracted from the packet and subsequently relayed to the monitoring system. The original packet is then directed to its final destination.
In addition to the modes delineated in the INT specification, alternative approaches exist for collecting metadata within programmable networks. One such approach involves the utilization of an "exclusive telemetry flow" to monitor the network's state, which, in this work, is referred to as "Out-of-band Network Telemetry" (ONT).
In the ONT scenario, dedicated probe packets are employed to gather metadata, eliminating the need for any modifications to the data packets associated with the ser
Figure 3: In-band Network Telemetry operation. INT metadata is appended on the packets in each hop. In the specific collection point, the monitoring system receives INT metadata.
vices operating within the network. The primary advantage of this approach lies in its ability to maintain the integrity of application traffic, as it traverses the programmable network without undergoing alterations, thereby mitigating issues related to packet growth, such as fragmentation.
Conversely, the use of an exclusive telemetry flow introduces additional overhead to the overall network traffic. This is due to the necessity of having a dedicated ONT monitoring flow for each service running within the network.
One of the primary advantages of employing telemetry lies in the exceptional level of granularity it offers. Every individual packet traversing the network carries pertinent information directly to the monitoring system at the line rate. This level of granularity aligns with the perspective presented in [15], wherein it is recognized that a substantial volume of data can prove immensely valuable for Deep Reinforcement Learning (DRL) algorithms, which have a voracious appetite for information.
### Deep Reinforcement Learning
Reinforcement Learning (RL) is an Artificial Intelligence (AI) learning paradigm centered on actions and rewards. Unlike the conventional supervised and unsupervised learning approaches, where models learn from predefined dataset features, an RL learner, also known as an agent, interacts with an environment and receives rewards or penalties based on the actions it takes.
The model depicted in Figure 4 illustrates the formalization of a sequential decision-making strategy known as a Markov Decision Process (MDP). In this framework, the agent continually interacts with the environment by executing actions (\(A\)) at specific time steps (\(t\)) and observing new states (\(S_{t+1}\)) resulting from these actions. After each interaction, a reward value (\(R_{t+1}\)) is generated to assess the correctness of the action, with the aim of maximizing cumulative rewards throughout the agent's training process [16, 17].
In this context, the agent learns to maximize its cumulative rewards by determining a policy2 that optimizes an action-value function, denoted as \(Q\). This function estimates
Figure 4: Interaction between the agent and environment in a Markov Decision Process. Adapted from [17].
the quality of actions taken by the agent in specific states.
Footnote 2: A policy is a mapping that specifies which action the agent selects in each state.

A classical algorithm for learning this function is Q-learning, which iteratively updates the action-value estimates according to

\[Q(S_{t},A_{t})\leftarrow Q(S_{t},A_{t})+\alpha\left[R_{t+1}+\gamma\max_{a}Q(S_{t+1},a)-Q(S_{t},A_{t})\right],\]

where \(\max_{a}Q(S_{t+1},a)\) is the maximum estimated Q-value of the next state \(S_{t+1}\) after taking action \(a\). This value is then weighted by the discount factor \(\gamma\) and modulated by a learning rate \(\alpha\) (\(0\leq\alpha<1\)) [18, 17].
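As a concrete illustration, a minimal tabular sketch of this update rule follows (Python; the state and action encodings, as well as the hyperparameter values, are placeholders, not the values used later in this work):

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95          # learning rate and discount factor
Q = defaultdict(float)            # Q[(state, action)] -> estimated value

def q_update(s, a, r, s_next, actions):
    """Apply one Q-learning step for the transition (s, a, r, s_next)."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
```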
However, it is essential to acknowledge that Q-learning operates under the assumption of a tabular representation for state-action pairs and approximates the optimal Q-value in a linear fashion. In practice, real-world applications often exhibit complexity, characterized by non-linear relationships and encompassing high-dimensional state spaces. Such complexities render the storage of comprehensive tables unfeasible [17].
_Network management serves as a compelling example of such scenarios, where modern Tofino switches can process INT packets on a nanosecond timescale._
To address these limitations, Mnih et al. leveraged the Q-Learning algorithm by integrating it with a Deep Neural Network (DNN) to approximate the optimal Q-value, a methodology known as Deep Q-Network (DQN). In their seminal work [19], the authors showcased the effectiveness of this approach by training and evaluating the DQN on an Atari 2600 emulator. Impressively, the DQN-based agents achieved performance levels surpassing those of human players in 49 distinct games, relying solely on pixel inputs and game scores for guidance.
Of note, the authors maintained a consistent algorithm, DNN architecture, and hyperparameters across all games, eliminating the need for game-specific feature engineering. Thus, DQN not only outperformed agents employing linear function approximation but also demonstrated the capacity to attain or exceed human-competitive skills across diverse gaming environments. This pioneering work exemplified the synergy between RL and contemporary Deep Learning (DL) techniques, signifying a significant advancement in the state of artificial intelligence. It underscored the potential of RL when combined with modern DL methods, yielding remarkable outcomes [19, 17].
In line with this, we present an RL-based approach designed to dynamically fine-tune the iRED target delay to an optimal value during video streaming, named DE-SiRED. This process is facilitated by an agent built on the foundation of DQN. In the subsequent Subsection, we will delve into the constituent elements that constitute this innovative approach.
#### 2.2.1 Deep Q-Network workflow
The DQN architecture, as proposed by Mnih et al. [19], consists of a Deep Convolutional Neural Network (CNN) designed to receive emulated game frames as input and subsequently generate predicted Q-values for each potential action within the given input state. To facilitate such predictions, Mnih et al. introduced two critical modifications to the conventional Q-Learning algorithm. These alterations were essential to mitigate instabilities inherent in using Deep Neural Networks (DNNs) for Q-value approximation [17].
The first modification entails the incorporation of a biologically inspired mechanism referred to as 'experience replay.' In this approach, the agent's experiences are stored as tuples containing the current state (\(S_{t}\)), the action taken (\(A_{t}\)), the reward received (\(R_{t+1}\)), and the subsequent state (\(S_{t+1}\)). Periodically, after reaching a predefined replay memory limit, a mini-batch of these experiences is uniformly sampled for training the DNN [19, 17].
This approach plays a pivotal role in mitigating the emergence of correlations
within the observed state space. By decoupling the dependence on successive experiences, it effectively reduces the variance in the parameters of the DNN. Fig. 5 illustrates the interaction between a DQN agent and an environment, taking into account the experience replay mechanism. Within this context, the agent selects actions following an \(\epsilon\)-greedy rule.
Specifically, when employing this rule, the agent chooses between two strategies: "exploitation" and "exploration". A "greedy action" involves selecting an action from the action space based on the maximum estimated Q-value. Conversely, a "non-greedy action" entails the random selection of an action. Exploitation, represented by the selection of a greedy action, aims to exploit the current knowledge to maximize immediate rewards. In contrast, exploration, represented by non-greedy actions, focuses on traversing the action space to maximize cumulative rewards in the long run [17].
In RL, achieving a balanced trade-off between exploration and exploitation is paramount. However, it's important to acknowledge that, at a single time step, it's not possible for an agent to simultaneously exploit and explore actions. To reconcile these opposing strategies, a solution is to allow the agent to primarily act greedily, favoring exploitation, while intermittently choosing an action from the action space at random, independent of the estimated Q-values. This random selection is determined by an exponentially decreasing probability parameter \(\epsilon\). Consequently, as the time steps progress, the probability of selecting an optimal action gradually converges to a value greater than \(1-\epsilon\), approaching near certainty in favor of exploitation as the agent refines its strategy over time [17].
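A minimal sketch of this \(\epsilon\)-greedy rule with exponentially decreasing \(\epsilon\) is shown below (Python; the decay constants are illustrative, not the values used in the experiments):

```python
import math, random

EPS_START, EPS_END, DECAY = 1.0, 0.05, 1e-3

def select_action(q_values, step):
    # epsilon decays exponentially from EPS_START towards EPS_END
    eps = EPS_END + (EPS_START - EPS_END) * math.exp(-DECAY * step)
    if random.random() < eps:                                    # explore
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit
```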
A second significant contribution introduced by Mnih et al. [19], relative to classical Q-Learning, pertains to the learning stage of the DQN. In this stage, a separate network, referred to as the 'target network,' is employed to estimate target values for the
Figure 5: DQN high-level workflow [19, 17].
Q-network, often referred to as the 'online network.' This modification enhances the algorithm's stability compared to using a single online network. The rationale behind this improvement lies in the fact that updating the parameters of the online network for the current state-action pair can inadvertently influence the Q-values of the next state, potentially leading to oscillations or even policy divergence.
To address this challenge, the online network's parameters are periodically cloned to the target network at intervals of every \(C\) time steps. Consequently, the target network's predictions serve as target values for the online network during the subsequent \(C\) time steps. This introduces a delay in updating the Q-values between the current and next states, effectively reducing the likelihood of policy oscillations or divergence [19].
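The following Python sketch captures one such learning step, combining uniform replay sampling with the periodic cloning of the online network into the target network (Keras-style API; the batch size, \(\gamma\), and the interval \(C\) are illustrative, and terminal-state handling is omitted for brevity):

```python
import random
import numpy as np

def learning_step(online, target, replay, step, batch_size=32,
                  gamma=0.95, C=100):
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)   # uniform replay sampling
    s, a, r, s_next = map(np.array, zip(*batch))
    # target values come from the frozen target network, not the online one
    y = online.predict(s, verbose=0)
    y[np.arange(batch_size), a] = r + gamma * target.predict(
        s_next, verbose=0).max(axis=1)
    online.fit(s, y, epochs=1, verbose=0)       # one gradient step
    if step % C == 0:                           # clone online -> target
        target.set_weights(online.get_weights())
```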
Figure 6 illustrates the DQN learning workflow, incorporating the approach described above. A concise introduction to the functionality of DQN is presented in Subsection 3.2. Furthermore, for a detailed exposition on the implementation of DQN within the scope of this research, please refer to Subsection 4.4.
## 3 DESiRED - Dynamic, Enhanced, and Smart iRED.
DESiRED, an advanced iteration of iRED (initially introduced in [6]), constitutes a notable enhancement within the realm of intelligent network control systems. Specifically, it introduces a novel capability wherein the intelligent control plane harnesses the power of DRL to dynamically optimize and fine-tune the target delay parameters. In alignment with its predecessor, iRED, DESiRED remains faithful to the fundamental concept of disaggregated
Figure 6: DQN learning stage workflow [19].
AQM. In this paradigm, AQM operations are compartmentalized into discrete functional blocks within the architecture.
The concept of disaggregation emerges from the imperative to expedite packet discarding processes. In the pursuit of resource efficiency, we contend that the optimal location for packet discarding is the Ingress block. However, a noteworthy challenge arises as the vital metadata pertaining to queue delay (or queue depth), which constitutes the primary information utilized as input for the AQM algorithm to determine packet discarding decisions, is captured by the Traffic Manager and traditionally accessible within the Egress block. Within this context, DESiRED leverages a congestion notification mechanism, designed to incur minimal overhead, in order to relay the imperative to execute packet discarding actions to the Ingress block.
As illustrated in Figure 7, the decision-making process within DESiRED takes place at the Egress block, while the corresponding actions are subsequently executed at the Ingress block. The following Subsections will elucidate the functioning of DESiRED, with a distinct focus on data plane and control plane operations.
### Data plane operation (AQM)
To provide a more comprehensive understanding, we will commence our description of DESiRED's operation from the data plane perspective, focusing initially on the Egress block. Our exploration will initiate with the drop or marking decision process, a critical component housed within the decision module. At the Egress, iRED computes the Exponentially Weighted Moving Average (EWMA) of the queue delay (or queue depth4) for each individual packet, entirely within the data plane. The inherent absence of division and floating-point operations poses challenges in calculating average values within the data plane. To surmount this limitation, as applied in [20], we employ an approximation method following Eq. 3:
Footnote 4: The programmer can choose whether to use DESiRED’s delay-based or depth-based approach.
Figure 7: The Closed Control Loop overview with DESiRED. At the control plane side, DRL updates the target delay at the data plane.
\[S_{t}=\alpha\cdot Y_{t}+(1-\alpha)\cdot S_{t-1} \tag{3}\]
where \(S_{t}\) is the updated average queue delay, \(S_{t-1}\) is the previous average queue delay and \(Y_{t}\) is the current queue delay. The constant \(\alpha\in[0,1]\) determines how much the current value influences the average. We use \(\alpha=0.5\), so that the multiplication can be replaced by bit-shift operations. The output of the EWMA will represent the average queue delay over time. When the observed value, representing the average queue delay, falls within the range (minimum and maximum thresholds) configured by the DRL mechanism, DESiRED proceeds to calculate the drop probability in accordance with the RED approach. Simultaneously, it employs a coupling mechanism to generate various levels of congestion signals, which may entail either packet drops or packet marking (ECN bit).
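A Python sketch of this egress decision logic is given below. The EWMA uses the bit-shift form of Eq. 3 with \(\alpha=0.5\), which avoids division in the data plane; the RED-style probability ramp is written in ordinary arithmetic for readability, and the thresholds and maximum drop probability are illustrative:

```python
import random

avg_delay = 0  # S_{t-1}; kept in a register in the real pipeline

def egress_decision(queue_delay_us, min_th_us, max_th_us, max_p=0.1):
    global avg_delay
    avg_delay = (avg_delay + queue_delay_us) >> 1   # EWMA with alpha = 0.5
    if avg_delay < min_th_us:
        return "forward"
    if avg_delay >= max_th_us:
        return "drop_or_mark"
    # RED-style linear ramp of the drop probability between the thresholds
    p = max_p * (avg_delay - min_th_us) / (max_th_us - min_th_us)
    return "drop_or_mark" if random.random() < p else "forward"
```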
Once the DESiRED decision module (Egress) has detected that a packet must be dropped, DESiRED must notify the action module (Ingress) to perform this action. The first challenge in the PDP context is to achieve communication between the Ingress and Egress blocks with minimum overhead. Obviously, DESiRED will not drop the packet that generated the discard decision, but a future packet [21]. Discarding future packets is one of the main features differentiating DESiRED from other state-of-the-art AQMs. For the congestion notification to reach the Ingress block, DESiRED creates a congestion notification packet (clone packet) and sends it through an internal recirculation port to reach the Ingress block.
The action module, situated in the Ingress block, maintains the congestion state table on a per-port/queue basis and activates the drop flag (ON) for the corresponding port/queue. The current packet is forwarded to the next hop without introducing any additional delay. Subsequently, future packets intended for the same output port/queue, where the drop flag is set to ON, will be dropped, and the drop flag will be reset to OFF. This mechanism, facilitated by DESiRED, ensures that the Ingress pipeline can proactively mitigate imminent queue congestion.
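A compact sketch of this drop-flag bookkeeping follows (Python; the real implementation keeps the flags in P4 register arrays indexed by port/queue):

```python
drop_flag = {}  # out_port -> bool; the per-port/queue congestion state

def on_congestion_notification(out_port):
    drop_flag[out_port] = True       # set by the recirculated clone packet

def ingress(packet, out_port):
    if drop_flag.get(out_port, False):
        drop_flag[out_port] = False  # reset to OFF after acting once
        return None                  # drop this *future* packet
    return packet                    # forward without added delay
```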
### Control plane operation (DRL)
As mentioned earlier, DESiRED tackles the issue of fixed target delay through the implementation of an intelligent control plane, denoted by the orange box in Figure 7. This intelligent control mechanism is responsible for updating the register that maintains the dynamic target delay threshold, as determined by the DRL decision process. Now, let us provide a comprehensive account of the operational intricacies of the intelligent control plane, elucidating the inputs and outputs in detail.
The control plane operates by receiving data from two pivotal sources: the network state and the application state. In this particular implementation, fine-grained INT measurements constitute the input layer for the Deep Q-Network from the network state. The DQN's output layer is responsible for generating the agent's actions. Concurrently, the application state encompasses DASH metrics, including parameters such as FPS and the Local Buffer Occupancy (LBO) of the video player, which play a crucial role in computing the agent's reward. Fig. 8 illustrates this Control Loop.
INT measurements comprise observations that effectively depict the network's state with remarkable granularity, affording an unprecedented perspective on the extent of congestion. These measurements are acquired within the programmable data plane and subsequently routed to the intelligent control plane. Within the control plane, they are aggregated into compact dataframes, which collectively form what we term the "observation space." In the context of this study, the term observation space refers to the temporal window within which the intelligent control plane conducts an integrated analysis of both the network's state and the application's behavior.
For each received observation space, the DQN incorporates INT measurements as an input layer. Following neural network processing (refining its internal weights), the DQN generates an action, which is manifested as an activation in one of the neurons within the output layer. In this study, the possible actions include: 1) increasing the target delay; 2) decreasing the target delay; and 3) maintaining the current state (i.e., taking no action).
Subsequently, the control plane retains a record of the executed action and enters a state of anticipation for the forthcoming observation space. Upon the arrival of data from the subsequent observation space, the DRL mechanism evaluates whether the undertaken action has led to the optimization of DASH QoS, particularly with regard to enhancements in FPS and LBO metrics. In the event of a positive outcome, the agent is rewarded, whereas in cases of QoS deterioration, the agent incurs a penalty.
Leveraging insights from the dynamic network traffic patterns, DESiRED demonstrates a remarkable capability to adapt with precision to prevailing congestion conditions. This adaptability facilitates a continuous enhancement in the quality of video services offered.
It is imperative to elucidate that DESiRED is inherently application-agnostic, signifying its capacity to accommodate diverse reward policies tailored to evaluate a wide array of service metrics. This flexibility extends to metrics such as the response time of a web server or even the frame rate in video playback, underscoring its versatility across various service domains.
Figure 8: Operation of the Control Plane in DESiRED involves using fine-grained INT measurements as the input layer for the DQN. Additionally, DASH QoS measurements serve as the basis for calculating agent rewards.
## 4 Evaluation
In this section, we provide a comprehensive overview of all the components utilized for the thorough evaluation of our proposal. This encompasses a detailed exposition of the research methodology, an in-depth portrayal of the experimental environment and its configuration, the load pattern employed, the DRL mechanism implemented, and the metrics and measurements used for the analysis.
### Research methodology
Our methodology is rooted in experimental research aimed at evaluating the effectiveness of the DRL mechanism within DESiRED. Specifically, our objective is to ascertain whether this mechanism can optimize the QoS for MPEG-DASH services by dynamically adapting the target delay under conditions characterized by both stationary and non-stationary loads within a Content Delivery Network (CDN) environment.
In this experiment, our aim is to conduct a comprehensive evaluation of DESiRED in comparison to iRED, where iRED employs fixed target delay settings of 5ms, 20ms, 50ms, and 100ms. We evaluate these approaches under both stationary (low and high) and non-stationary (sinusoidal) load conditions. To mitigate potential biases, each round of the investigation, spanning one hour, was repeated ten times for each approach, resulting in a cumulative duration of over fifty hours across independent runs. Furthermore, to gauge DESiRED's robustness, we aggregated the DRL agents derived from all preceding executions by employing an ensemble approach. This involved combining the model parameters through an exponentially decaying running average, as described by Eq. 4 [22, 23]:
\[\hat{\theta}^{(t)}=\alpha\hat{\theta}^{(t-1)}+(1-\alpha)\theta^{(t)} \tag{4}\]
where \(\theta\) represents a parameter from the Q-network; \(t\) the gradient descent iterations; \(\hat{\theta}^{(t)}\) the average from such parameters (\(\frac{1}{t}\Sigma_{i}\theta^{(i)}\)); and \(\alpha\) the exponential decaying factor (defined as 2.0).
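A minimal sketch of this running average applied to per-layer weight arrays (framework-agnostic Python; the decay factor is passed as a parameter):

```python
def update_running_average(theta_hat, theta, alpha):
    # Eq. 4: theta_hat^(t) = alpha * theta_hat^(t-1) + (1 - alpha) * theta^(t)
    # theta_hat and theta are parallel lists of per-layer weight arrays
    return [alpha * h + (1 - alpha) * w for h, w in zip(theta_hat, theta)]
```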
We evaluate the application's performance from the client-side perspective, focusing on three key metrics: FPS, LBO, and Rebuffering Rate (Starvation), as measured within the video player. Higher values for FPS and LBO correspond to improved QoS, while for Rebuffering Rate, a lower value signifies enhanced QoS.
In addition to evaluating application quality metrics, we also scrutinize the performance metrics of the DRL agent, encompassing Loss function and Rewards.
### Environment description
The experiment was constructed within a realistic testbed, adopting an Infrastructure as Code (IaC) approach, and implemented using Vagrant, Virtualbox (version 6.1.28), and Ansible (version 2.10.8). In this setup, each infrastructure component is represented by an isolated virtual machine, interlinked through a P4 programmable data plane network, as visually depicted in Figure 9.
The switches in the data plane run the P4 code of the iRED and DESiRED approaches. On the control plane side, the DRL engine was implemented, comprising approximately 750 lines of code and utilizing Tensorflow as its backend framework. The CDN was deployed to facilitate an MPEG-DASH service, featuring live streaming of a soccer game and a playlist housing the ten most frequently accessed YouTube videos. Load management was executed using WAVE [24]5, a versatile load generator that orchestrates instances of an application over time.
Footnote 5: [https://github.com/ifpb/wave](https://github.com/ifpb/wave)
This infrastructure was hosted on a bare-metal server, namely the Dell EMC PowerEdge R720, equipped with 2 Intel Xeon processors (E5-2630 v2, 2.60GHz) boasting 6 cores per socket (amounting to 24 virtual CPUs), 48GB of RAM, a 2TB HDD, and running the Ubuntu 20.04.6 LTS operating system. All pertinent artifacts and resources can be accessed within the repository available at our GitHub6.
Footnote 6: [https://github.com/dcomp-leris/DESIRED](https://github.com/dcomp-leris/DESIRED).
The MPEG-DASH Server serves video content using the DASH standard to both the Video Client and the Load Generator. It offers various configurations, as detailed in Table 1, with each configuration having a chunk segment size of 4 seconds. The Video Client dynamically selects and transitions between these configurations based on network traffic conditions and the adaptation logic embedded within the video player.
The infrastructure is equipped with Apache version 2 as the web server, FFmpeg (version 2.8.17) for video encoding, and MP4box (version 0.5.2) for creating the MPEG-DASH manifest files, ensuring seamless video streaming.
The Video Client utilizes DASH.js, a contemporary DASH reference player equipped
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|} \hline
**Type** & **Resolution** & **FPS** & **Group of Pictures** & **Kbps** & **Buffer** & **Code** \\ \hline video & 426x240 & 18 & 72 & 280 & 140 & h264 \\ video & 854x480 & 24 & 96 & 980 & 490 & h264 \\ video & 1280x720 & 30 & 120 & 2080 & 1040 & h264 \\ audio & - & - & - & 128 & - & AAC \\ audio & - & - & - & 64 & - & AAC \\ \hline \end{tabular}
\end{table}
Table 1: Video parameters used in an MPEG-DASH Server.
Figure 9: Setup Evaluation.
with an Adaptive Bitrate Streaming (ABR) algorithm. It employs this ABR algorithm to consume the video stream of the soccer game, with the TCP New Reno congestion control algorithm managing network congestion.
The Load Generator is responsible for introducing network noise, operating the WAVE framework with a variety of loads, including both stationary and non-stationary scenarios. It dynamically adjusts the number of video player instances over time to simulate changing network conditions. Further elaboration on this aspect can be found in Subsection 4.3.
All the switches utilized in this experiment were implemented within the BMv2 software switch environment, incorporating the respective P4 code for both iRED (fixed target delay) and DESiRED (dynamic target delay with DRL) approaches. Across all approaches, telemetry instructions were meticulously programmed to append telemetry metadata to all probe packets. Notably, this experiment follows the out-of-band (ONT) approach, wherein dedicated ONT probes are dispatched from the DASH server to the Video Client. Consequently, no modifications are made to data packets to accommodate telemetry metadata. The specifics of the telemetry metadata, consisting of 32 bytes, gathered at each node within this experiment, are elaborated upon in Table 2.
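For illustration, a 32-byte record with the field layout of Table 2 could be decoded as follows (Python; the MSB-first packing and the field order are assumptions about the on-wire format, not taken from the INT specification):

```python
INT_FIELDS = [          # (name, width in bits); widths sum to 256 = 32 bytes
    ("switch_id", 31), ("ingress_port", 9), ("egress_port", 9),
    ("egress_spec", 9), ("ingress_global_ts", 48), ("egress_global_ts", 48),
    ("enq_timestamp", 32), ("enq_qdepth", 19),
    ("deq_timedelta", 32), ("deq_qdepth", 19),
]

def parse_int_metadata(blob: bytes) -> dict:
    assert len(blob) == 32
    value, bits_left = int.from_bytes(blob, "big"), 256
    record = {}
    for name, width in INT_FIELDS:
        bits_left -= width
        record[name] = (value >> bits_left) & ((1 << width) - 1)
    return record
```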
### Load Pattern
The Load Generator, powered by WAVE, orchestrates the instances of video clients over time based on input parameters described by a mathematical function that defines the load pattern. In its current iteration, WAVE supports constant, sinusoidal, and flashcrowd load patterns. It initiates and concludes video player processes, generating network load through genuine video requests (real traffic) that flow from the video player to the MPEG-DASH Server.
In this study, our aim is to evaluate DESiRED under various load conditions, aiming to simulate diverse network state scenarios. To achieve this, we employ two distinct categories of load patterns: stationary and non-stationary. For stationary loads, which remain constant throughout the experiment, we classify them into two types: low and high. In this context, a low load is characterized by the presence of ten video client instances operating concurrently throughout the duration of the experiment, as depicted in Figure 10(a). Conversely, a high load is characterized by the simultaneous opera
\begin{table}
\begin{tabular}{|c|c|l|} \hline
**Name** & **bits** & **Description** \\ \hline Switch ID & 31 & the switch identification number \\ Ingress port & 9 & the port number that the packet entered in the switch \\ Egress port & 9 & the port number that the packet left of the switch \\ Egress spec & 9 & the port number (Ingress) in which the packet will leave the switch \\ Ingress Global & 48 & the timestamp, in μs, of when the packet entered in the ingress \\ Timesamp & 48 & the timestamp, in μs, of when the packet started processing in the egress \\ Ena Timestamp & 32 & the timestamp, in μs, of when the packet was enqueue \\ Ena Qdepth & 19 & the queue depth when the packet was queued \\ Deq Timedelta & 32 & the time, in μs, that the packet remained in the queue \\ Deq Qdepth & 19 & the queue depth when the packet was dequeued \\ \hline \end{tabular}
\end{table}
Table 2: INT metadata.
tion of forty video player instances, representing a high-intensity load, as illustrated in Figure 10(b).
Under low load conditions, it is anticipated that the target delay will be attained relatively infrequently, given the shorter queuing delays that typically prevail. In this scenario, both AQM strategies, whether employing a fixed or dynamic target delay, are likely to yield comparable results in terms of QoS.
However, when the network experiences predominantly high load, the surge in traffic volume can lead to an increase in queue delay, thereby prompting AQM strategies to respond in accordance with the specified target delay, whether fixed or dynamic. In such instances, the dynamic adaptability of DESiRED's target delay is expected to confer advantages in terms of QoS compared to the rigid, fixed target delay approach employed by iRED. This dynamicity enables DESiRED to better accommodate and optimize QoS in the face of fluctuating and demanding network conditions.
It is indeed unrealistic to assume that network loads will always remain stationary or static. Consequently, in the second phase of our evaluation, we undertook a more comprehensive evaluation under a realistic load scenario, one that mirrors the dynamic nature of real-world network environments. Our objective was to evaluate non-stationary load patterns, encompassing both peak (high load) and off-peak (low load) periods within a single experiment. To achieve this, we employed a sinusoidal periodic load pattern characterized by the sinusoidal function detailed in Equation 5, where \(A\) represents the amplitude, \(F\) denotes the frequency, and \(\lambda\) signifies the phase in radians. The specific input parameters utilized for this evaluation were: \(A=15\), \(F=1\), and \(\lambda=25\), culminating in the load pattern illustrated in Figure 11. This approach captures the fluctuations in network load more realistically, offering a dynamic and challenging environment for our evaluation.
\[f(t)=A\sin(Ft+\lambda) \tag{5}\]
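A minimal sketch of how WAVE could translate Equation 5 into an instance count is shown below. The baseline offset and the clipping bounds that produce the plateau visible in Figure 11 are assumptions, since the exact mapping from \(f(t)\) to client instances is not specified here.

```python
import math

def sinusoid_clients(t: float, A: float = 15, F: float = 1, lam: float = 25,
                     base: int = 25, lo: int = 10, hi: int = 40) -> int:
    """Concurrent video-client instances at time t, following Equation 5.

    `base`, `lo`, and `hi` are illustrative values; WAVE's exact mapping from
    f(t) to instance counts is not spelled out in the paper.
    """
    n = base + A * math.sin(F * t + lam)
    return int(max(lo, min(hi, round(n))))
```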
### Deep Reinforcement Learning mechanism
To accomplish the objectives outlined in this paper, we tailored the DQN architecture and agent-environment workflow to align with the distinctive characteristics of the DESiRED environment, as elucidated in Subsection 4.2. In doing so, we designed the DQN using a Multi-layer Perceptron (MLP) architecture, which is well-suited for handling the tabular nature of network telemetry metadata. The MLP network adopted
Figure 10: Stationary Loads.
in our approach consists of an input layer featuring units corresponding to each INT feature, two hidden layers each comprising 24 neurons, and an output layer containing units for each possible action that the agent can undertake, as depicted in Figure 12. Importantly, both the online and target networks share this identical architecture. Table 3 provides a detailed breakdown of the hyperparameters utilized for training DESiRED.
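The following sketch reproduces this architecture with the Table 3 hyperparameters. The use of Keras is an assumption, as the paper does not name the deep-learning framework.

```python
import tensorflow as tf  # framework choice is an assumption

def build_q_network() -> tf.keras.Model:
    """MLP of Figure 12: 19 INT features in, one Q-value per action out."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(24, activation="relu", input_shape=(19,)),
        tf.keras.layers.Dense(24, activation="relu"),
        tf.keras.layers.Dense(3, activation="linear"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3,
                                          momentum=0.9, nesterov=True),
        loss="mse",  # Bellman-error loss from Table 3
    )
    return model

online_net = build_q_network()
target_net = build_q_network()                    # identical architecture
target_net.set_weights(online_net.get_weights())  # re-synced every Tau updates
```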
To facilitate the desired agent-environment interaction, we formulated the agent's behavior as an MDP with the video chunk size serving as the discrete time step. In this framework, DESiRED operates within the environment, dynamically adjusting the target delay in all switches at 4-second intervals, synchronized with the video chunk size. A comprehensive discussion of simultaneous actuation in all switches versus individual actuation in each switch is presented in Section 6. The agent's action space is delineated in Table 4; note that the increase action changes the target delay by twice the magnitude of the decrease action. This choice was made to prompt DESiRED to respond promptly to transient congestion while retaining the flexibility to decrease the
Figure 11: Non-stationary (Sinusoid) Load.
Figure 12: DESiRED DQN architecture. The input layer is a network of fine-grained measurements, provided by INT. Hidden Layers make up the DQN. The actions are defined in the Output Layer.
target delay when necessary, mirroring the rationale discussed in [25].
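A sketch of the controller-side actuation is given below; the function and the switch-facing call are hypothetical, but the deltas and the 20-70 ms clamping follow Table 4.

```python
# Hypothetical actuation for Table 4's action space.
TARGET_MIN_MS, TARGET_MAX_MS = 20, 70
ACTION_DELTA_MS = {0: +2, 1: -1, 2: 0}   # increase is twice the decrease

def apply_action(action: int, target_delay_ms: int, switches) -> int:
    """Clamp the new target delay and push it to every switch at once."""
    new_delay = max(TARGET_MIN_MS,
                    min(TARGET_MAX_MS, target_delay_ms + ACTION_DELTA_MS[action]))
    for sw in switches:                   # simultaneous actuation (see Section 6)
        sw.set_target_delay(new_delay)    # hypothetical control-plane call
    return new_delay
```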
It's important to highlight that the calculation of rewards does not occur immediately after an action is taken in the current state. This delay in reward calculation is attributed to the fact that the effects of the agent's action do not manifest instantly, primarily due to the inherent control mechanisms incorporated within TCP and ABR systems, as detailed in [26]. Consequently, the computation of rewards is deferred until the subsequent state's observation. In this context, the agent relies on network status data derived from INT measurements to form its states, selects actions, and is subsequently rewarded based on its ability to optimize the video's QoS, which is characterized by metrics such as FPS and LBO.
Indeed, the intrinsic correlation between metrics such as LBO and FPS presents a challenge when devising a reward policy. As the LBO increases, there is a tendency for the FPS to also increase. However, this relationship is not always straightforward due to the complex dynamics of network congestion and video streaming.
To calculate a reward (\(R_{t+1}\)) for a specific action (\(A_{t}\)), we adopt a strategy that first evaluates whether the LBO in the next state (\(LBO_{t+1}\)) improves compared to the LBO observed when the action was executed (\(LBO_{t}\)). Subsequently, a reward score is assigned based on the effects of this action on both the next state's LBO and FPS
\begin{table}
\begin{tabular}{l|c|l} \hline \multicolumn{1}{c|}{**Hyperparameter**} & **Value** & **Description** \\ \hline
Q-network input layer dimension & 19 & A scalar defining the state input shape. \\
Q-network hidden layers & 2 & A scalar defining the Q-network depth. \\
Q-network hidden units & 24 & A scalar defining the Q-network non-linear computing units. \\
Q-network output layer dimension & 3 & A scalar defining the Q-network predictions output shape. \\
Hidden units activation function & ReLU\({}^{1}\) & The non-linear activation function computed by hidden units. \\
Output units activation function & Linear & The activation function computed by the output layer. \\
Optimization function & SGD\({}^{2}\) & The function used to adjust the Q-network weights in order to minimize the prediction error relative to the expected output. \\
SGD momentum & 0.9 & A scalar defining the momentum included in the SGD update. \\
Learning rate & 1e-3 & A scalar determining the pace at which the weights are updated. \\
Loss function & MSE\({}^{3}\) & The function used to compute the Bellman equation error. \\
Gamma & 0.99 & A scalar determining the discount factor in the Q-Learning update. \\
Tau & 1e4 & A scalar determining how many updates the online network should perform before updating the target network (corresponds to the parameter \(C\) depicted in Fig. 6). \\
Experience replay capacity & 1e6 & A scalar defining the size of the list in which the agent’s experience will be stored. \\
Experience replay minimum memory & 100 & A scalar defining the minimum experiences that should be stored before updating the online network. \\
Batch size & – & A scalar defining the number of experience samples over which the Q-network will be updated. \\
Starting epsilon & – & A scalar defining the initial probability to take random actions in the \(\epsilon\)-greedy exploration. \\
Ending epsilon & 0.01 & A scalar defining the final probability to take random actions in the \(\epsilon\)-greedy exploration. \\
Decaying steps & 250 & A scalar determining how many steps the probability to take random actions in the \(\epsilon\)-greedy exploration should decrease linearly before the exponential decay. \\
Epsilon decay & – & A scalar determining the exponential decay of the probability to take random actions in the \(\epsilon\)-greedy exploration. \\ \hline \end{tabular} \({}^{1}\) Rectified Linear Unit.
\({}^{2}\) Stochastic Gradient Descent with Nesterov Momentum.
\({}^{3}\) Mean Squared Error.
\end{table}
Table 3: DQN hyperparameters.
(\(FPS_{t+1}\)). Consequently, the agent receives maximum reward whenever the action taken leads to the maximization of \(LBO_{t+1}\), and is penalized in an inversely proportional manner if the video experiences stalls. The algorithmic logic for calculating rewards is detailed in Algorithm 1. This approach ensures that the agent's reward is contingent on its capacity to optimize both LBO and FPS, balancing the trade-offs inherent to video streaming in dynamic network conditions.
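For convenience, a Python transcription of Algorithm 1 is given below. \(FPS_{t}\) is unused by the algorithm and is therefore dropped; treating \(LBO_{t+1}=LBO_{t}\) as a neutral zero reward is our assumption, since that case is not covered by the algorithm.

```python
def calculate_reward(lbo_t: float, lbo_next: float, fps_next: float) -> float:
    """Transcription of Algorithm 1 (LBO in seconds, FPS in frames/s)."""
    if lbo_next == lbo_t:
        return 0.0                       # case not covered by Algorithm 1
    if lbo_next > 30:                    # buffer comfortably full
        return 2.0
    if fps_next == 30:
        return 1.0
    if fps_next == 24:
        return 0.5
    return 0.1 if lbo_next > lbo_t else -2.0   # only the worst case differs
```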
These actions were executed according to the \(\epsilon\)-greedy strategy as elucidated in Subsection 2.2.1. To implement this strategy, we established initial and final probabilities for taking random actions, specified the number of decaying steps, and defined an exponential decay factor (as outlined in Table 3). In this scheme, \(\epsilon\) commences its linear decrease over a span of 250 time steps to facilitate exploration. Subsequently, the probability of selecting random actions is exponentially reduced, gradually transitioning to a minimal value to emphasize exploitation over exploration. This strategy allows the agent to strike a balance between exploring new actions and exploiting its existing knowledge as it interacts with the environment.
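A possible implementation of this schedule is sketched below; the starting probability, the intermediate value at the end of the linear phase, and the decay factor are illustrative, since Table 3 leaves them unspecified here.

```python
def epsilon(step: int, eps_start: float = 1.0, eps_mid: float = 0.1,
            eps_end: float = 0.01, linear_steps: int = 250,
            decay: float = 0.995) -> float:
    """Linear drop for the first 250 steps, then exponential decay to 0.01."""
    if step < linear_steps:
        return eps_start - (eps_start - eps_mid) * step / linear_steps
    return max(eps_end, eps_mid * decay ** (step - linear_steps))
```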
Taking into consideration the agent's action frequency of once every 4 seconds and the requirement for 250 iterations to initiate the exponential decay of \(\epsilon\), the exploration phase is expected to persist for approximately 17 minutes (equivalent to 1000 seconds). In tandem, the experience replay memory buffer necessitates a minimum of 100 samples to facilitate the online network parameter updates (as indicated in Table 3). Since experiences resulting from the agent-environment interaction are stored every 8 seconds, it would take approximately 13 minutes (or 800 seconds) for this condition to be met. Consequently, the online network undergoes an update each time a new experience is stored, as illustrated in Figures 5 and 6.
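For reference, both warm-up intervals follow directly from the actuation and storage periods:

\[250\times 4\,\text{s}=1000\,\text{s}\approx 17\,\text{min},\qquad 100\times 8\,\text{s}=800\,\text{s}\approx 13\,\text{min}.\]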
In the case of the non-stationary load, it follows a trajectory of 15 minutes to reach its peak, maintains a plateau for an additional 15 minutes, and subsequently begins to decline. During this period, the agent explores the action space during the ascending phase of the sinusoidal curve and exploits these actions during the plateau and descending phases. Consequently, when the exploitation stage commences, the agent should have already gleaned insights from past experiences, encompassing both low and high load scenarios. This enables the agent to adapt and respond effectively to the fluctuating network conditions.
### Metrics and Measurements
On the video client side, we evaluate the QoS by monitoring key metrics, including:
* FPS (Frames Per Second): This metric quantifies the number of frames displayed per second on the screen, reflecting the smoothness of the video playback.
\begin{table}
\begin{tabular}{|c|c|l|} \hline
**Action Number** & **Value** & **Description** \\ \hline
0 & + 2ms & increase target delay in all switches until 70ms (upper limit) \\
1 & - 1ms & decrease target delay in all switches until 20ms (lower limit) \\
2 & - & do nothing \\ \hline \end{tabular}
\end{table}
Table 4: DESiRED actions space.
```
1: function \(calculate\_reward(LBO_{t}\), \(LBO_{t+1}\), \(FPS_{t}\), \(FPS_{t+1}\))
2:   if \(LBO_{t+1}>LBO_{t}\) then
3:     if \(LBO_{t+1}>30\) then
4:       \(R_{t+1}\gets 2\)
5:     else if \(LBO_{t+1}<30\) then
6:       if \(FPS_{t+1}==30\) then
7:         \(R_{t+1}\gets 1\)
8:       else if \(FPS_{t+1}==24\) then
9:         \(R_{t+1}\gets 0.5\)
10:       else
11:         \(R_{t+1}\gets 0.1\)
12:       end if
13:     end if
14:   end if
15:
16:   if \(LBO_{t+1}<LBO_{t}\) then
17:     if \(LBO_{t+1}>30\) then
18:       \(R_{t+1}\gets 2\)
19:     else if \(LBO_{t+1}<30\) then
20:       if \(FPS_{t+1}==30\) then
21:         \(R_{t+1}\gets 1\)
22:       else if \(FPS_{t+1}==24\) then
23:         \(R_{t+1}\gets 0.5\)
24:       else
25:         \(R_{t+1}\leftarrow-2\)
26:       end if
27:     end if
28:   end if
29:   return \(R_{t+1}\)
30: end function
```
**Algorithm 1** DESiRED reward policy algorithm.
* LBO (Local Buffer Occupancy): LBO measures the remaining time, in seconds, for frames stored in the player's local buffer. It provides insights into the buffer's capacity to absorb network fluctuations and maintain continuous playback.
From these primary metrics, we derive additional insights, including:
* Resolution Distribution: We analyze the percentage of video content played at different resolutions (Maximum, Medium, and Minimum) to assess the adaptive streaming capabilities.
* Rebuffering Rate: This metric represents the percentage of time during which the video experiences stalls or freezes on the screen, indicating interruptions in playback.
To facilitate these measurements, we configure the DASH.js player to log these metrics on a per-second basis. Within the DRL mechanism, we focus on evaluating the performance metrics of the DQN:
* Loss: This metric is calculated as the Mean Squared Error (MSE) between the predicted q-values for the current and next states. It reflects the convergence and accuracy of the DQN's predictions.
* Reward: Reward represents the cumulative rewards and penalties acquired throughout the experiment. It offers insights into the agent's performance in maximizing QoS.
Additionally, we capture the action history for each experiment, documenting the agent's selected actions at each observation space (every 4 seconds). These metrics provide a comprehensive view of the agent's learning and adaptation throughout the experiment.
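As a reference point, the sketch below shows how this loss can be computed for a batch of transitions, reusing the networks from the earlier sketch; the batch layout is an assumption.

```python
import numpy as np

def dqn_loss(online_net, target_net, batch, gamma: float = 0.99) -> float:
    """MSE between predicted Q-values and Bellman targets (illustrative).

    `batch` is assumed to hold arrays (states, actions, rewards, next_states).
    """
    states, actions, rewards, next_states = batch
    q_pred = online_net.predict(states, verbose=0)
    q_next = target_net.predict(next_states, verbose=0)   # frozen target net
    targets = q_pred.copy()
    rows = np.arange(len(actions))
    targets[rows, actions] = rewards + gamma * q_next.max(axis=1)
    return float(np.mean((q_pred - targets) ** 2))
```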
## 5 Results
In this section, we will present the outcomes of our experiments, where we evaluate how DESiRED enhances the QoS of MPEG-DASH. We offer an in-depth analysis from the client-side perspective, showcasing the results and delving into instances where video QoS has benefited from the dynamic adjustments facilitated by DESiRED. Furthermore, we scrutinize the performance of the DRL model, presenting evidence that the agent has successfully learned the designated policy and has been able to identify an optimal target delay value that maximizes QoS across the range of experiments conducted.
### Stationary Loads
The motivation behind evaluating performance under stationary loads stemmed from the necessity to ascertain whether the DRL agent would exhibit distinct learning behaviors during moments of low load (ample resources) and high load (congested resources) across separate executions.
When the network load predominantly remains low, as illustrated in Figure 10(a), network resources are readily available. In such scenarios, there is minimal contention for the use of the queue, resulting in limited or no intervention from auxiliary congestion control mechanisms like AQM. This phenomenon can be observed from the perspective of the video client, as depicted in Figure 13.
Figures 13(a) and 13(b) illustrate the Cumulative Distribution Function (CDF) of FPS and LBO under low load conditions. In Figure 13(a), we observe some variation in FPS for iRED with fixed target delays of 5 ms, 20 ms, and 50 ms. Conversely, for iRED with a fixed target delay of 100 ms and for DESiRED, the video client consistently played the video at 30 FPS throughout all experiments.
Concerning LBO, as depicted in Figure 13(b), the results exhibit similar behavior across approaches, with the local buffer maintaining a near-full state for most of the evaluations, approximately 60 seconds. The only exception is the iRED with a 5ms fixed target delay. In this specific scenario, the use of such a small threshold value appears to have triggered a higher frequency of AQM actions. This, in turn, might have led to more frequent drops within a time interval of less than one Round-Trip Time (RTT), as discussed in [27]. Paradoxically, this increased AQM activity, rather than alleviating congestion, may have exacerbated the situation, demonstrating the potential for unintended side effects when setting overly aggressive congestion control thresholds.
Conversely, when the network experiences predominantly high load conditions, as illustrated in Figure 10(b), the dynamics shift significantly. In such scenarios, all approaches employing fixed target delay mechanisms encounter challenges in maintaining acceptable MPEG-DASH QoS. DESiRED, on the other hand, manages to distinguish itself from the fixed target delay approaches, as evident in Figure 14.
To gain a deeper understanding of these results, it's important to clarify some aspects of the Adaptive Bitrate (ABR) adaptation logic employed by the DASH.js player, as described in [26]. The adaptation logic used in DASH.js, known as DYNAMIC, employs two different algorithms at different stages of video playback. During instances when buffer levels (LBO) are low, such as startup and seek events, a straightforward THROUGHPUT algorithm (based on throughput) is utilized. Conversely, when buffer levels are high, the player switches to the BOLA algorithm [28]. This dynamic adaptation approach aims to optimize video streaming under varying network conditions,
Figure 13: Low Load - Characterized by only ten video player instances managed by WAVE (Load Generator).
aligning the bitrate selection algorithm with the network's congestion state.
DYNAMIC starts with THROUGHPUT until the buffer level reaches 10s or more. From this point on, DYNAMIC switches to BOLA which chooses a bitrate at least as high as the bitrate chosen by THROUGHPUT. DYNAMIC switches back to THROUGHPUT when the buffer level falls below 10s and BOLA chooses a bitrate lower than THROUGHPUT [26].
Indeed, from the perspective of the video player's adaptation logic, the LBO metric proves to be far more sensitive to variations in network buffer levels compared to FPS. It's important to note that changes in bitrate and FPS should only occur when the LBO drops below 10 seconds. Consequently, it is logical to aim for maintaining an LBO greater than 10 seconds for the majority of the time, as this instructs the ABR algorithm to select the highest-quality video levels.
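The switching rule quoted above can be paraphrased in a few lines; this is an illustration of the published DYNAMIC logic, not actual DASH.js code.

```python
def dynamic_abr(lbo_s: float, tput_bitrate: int, bola_bitrate: int,
                current: str) -> tuple:
    """Paraphrase of DASH.js DYNAMIC rule-switching (illustrative)."""
    if current == "THROUGHPUT" and lbo_s >= 10:
        current = "BOLA"                  # high buffer: hand over to BOLA
    elif current == "BOLA" and lbo_s < 10 and bola_bitrate < tput_bitrate:
        current = "THROUGHPUT"            # low buffer and BOLA too pessimistic
    return current, (bola_bitrate if current == "BOLA" else tput_bitrate)
```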
Figure 14(b), which pertains to LBO, contributes significantly to understanding why DESiRED achieves superior FPS levels, as indicated in Figure 14(a). In this context, it is plausible to surmise that fine-tuning the target delay has provided an advantage in terms of preserving a sufficient LBO during periods of severe network congestion. This, in turn, aids the ABR algorithm in making optimal bitrate and quality level selections, ultimately leading to improved video QoS.
### Non-stationary Load
Recognizing the dynamic nature of network traffic, we embarked on an evaluation under non-stationary load conditions. To achieve this, we leveraged the WAVE framework, which effectively managed the execution of video client instances over time, adhering to a mathematical model of sinusoidal periodic load as detailed in Subsection 4.3.
The choice of a sinusoidal periodic load model holds significance because it encapsulates moments of congestion and resource relief in the network, particularly within router buffers, within a single execution. This approach allows us to evaluate our agent's performance in situations of both high congestion, where rapid adaptation is crucial, and congestion-free states where shared resources are not overwhelmed. In essence, our expectation is that the agent will learn distinct patterns that differentiate between these varying states.
Figure 14: High Load. Characterized by forty video player instances managed by WAVE (Load Generator).
This evaluation under non-stationary load conditions provides valuable insights into how the agent responds to fluctuations in network congestion, thereby contributing to a more comprehensive understanding of its adaptability and effectiveness.
The initial result we would like to present pertains to the actions taken by the agent (DESiRED) within the network environment. Figure 15 provides an overview of the agent's actions throughout the experiment. Notably, there is an initial phase of random exploration (indicated by the vertical dashed red line) extending up to the first 250 observations. During this exploratory phase, the agent gathers data about the network state, which is used to populate the experience replay buffer (as outlined in Subsection 2.2.1).
Subsequent to this initial exploration phase, the agent commences taking actions based on its learned knowledge, drawing from the experiences stored in the experience replay buffer. It's important to highlight that this buffer is continually updated, enabling the agent to learn from new states. Consequently, the agent can adapt to previously unseen states, a capability that proves particularly valuable in scenarios with non-stationary loads.
This analysis of the agent's actions provides insights into its learning process and the transition from exploration to exploitation as it becomes more knowledgeable about the environment.
Analyzing the agent's actions, it becomes apparent that during the initial phase of the experiment, characterized by an increase in network load, the agent frequently opted to increase the value of the target delay. Subsequently, as the load stabilized, the agent chose to take no action, potentially reducing the overhead of control plane operations in the data plane. Towards the end of the experiment, as the network load decreased, the agent shifted its strategy towards reducing the target delay.
Having observed how these actions mirror the agent's interactions with the environment, we can now delve deeper into the model's performance. Figure 16 provides an overview of the model's behavior, illustrated by the curves representing key performance metrics such as Loss and Reward.
Figure 16(a) illustrates the trajectory of Loss throughout the experiment. A decline in Loss signifies a lower MSE in predicting q-values. In essence, a low Loss value suggests that the model is effectively learning the policy by selecting actions that maximize rewards (QoS). During the initial phase of filling the experience replay buffer, Loss tends to be higher as actions are taken without the benefit of learning, effectively representing random actions. However, as the experience replay buffer becomes pop
Figure 15: Actions performed by the agent in the environment. After the initial random exploration, the agent finds the best target delay value to maximize the QoS of MPEG-DASH.
ulated and the Q-network is updated based on these experiences, the agent begins to make more informed and assertive decisions. This shift towards lower Loss values reflects the agent's ability to learn and improve its policy.
Turning our attention to Rewards (Figure 16(b)), we observe that the model incurs some penalties during the initial phase of the experiment. This corresponds to the period when the agent transitions from an initial stationary state with no load to the peak of the sinusoidal load curve, marked by the presence of 40 instances of the video player running simultaneously. Subsequently, as the agent refines its decision-making, it starts receiving rewards consistently. These rewards indicate that the agent effectively maximizes the QoS of MPEG-DASH, further underscoring the model's learning and adaptive capabilities.
The insights gleaned from the agent's performance analysis are supported by the LBO and FPS metrics observed by the video client in response to DESiRED's actions, as outlined in Table 5 and depicted in Figure 17. At this juncture, we aim to provide an interpretation of the results from the video client's perspective, highlighting how DESiRED outperformed the other approaches considered in this study.
An essential piece of data when evaluating the QoS of a video service is the resolution displayed on the screen by the video player. In this context, video consumers were offered three distinct quality levels:
1. Minimum Resolution: 426x240 pixels at 18 FPS.
2. Medium Resolution: 854x480 pixels at 24 FPS.
3. Maximum Resolution: 1280x720 pixels at 30 FPS.
\begin{table}
\begin{tabular}{l||c c c} \hline \hline
**AQM** & **Min. Resolution** & **Med. Resolution** & **Max. Resolution** \\ \hline
**iRED 5ms** & 93.15\% & 5.49\% & 1.36\% \\
**iRED 20ms** & 68.77\% & 12.96\% & 17.69\% \\
**iRED 50ms** & 46.74\% & 11.73\% & 41.27\% \\
**iRED 100ms** & 43.81\% & 9.91\% & 46.18\% \\
**DESiRED** & 31.71\% & 10.15\% & 58.07\% \\ \hline \hline \end{tabular}
\end{table}
Table 5: Execution percentage at each video quality level.
Figure 16: Model performance results - Decreasing Loss and increasing Reward indicate model convergence.
Even under challenging conditions, Table 5 clearly demonstrates that DESiRED exhibits the highest percentage of video playback at the maximum resolution (58.07%) and the lowest rate of playback at the minimum resolution (31.71%). This finding aligns with the data presented in Figure 17.
The discussion initiated in Subsection 5.2 remains pertinent in this context as well. To reiterate, during periods of intense competition for shared resources, probabilistic drops facilitated by a target delay that adjusts in response to network load fluctuations have proven instrumental in maximizing the QoS of the video service. Once again, DESiRED effectively maintains a higher level of LBO filling, as depicted in Figure 17(b), ultimately contributing to superior FPS performance, as evidenced in Figure 17(a).
Figure 17(c) presents a boxplot representing the percentage of video stalls, which signifies moments when the video remains frozen without any frames being displayed. A cursory glance at this figure might lead to the incorrect assumption that a longer delay at a fixed target would yield better results. However, it's important to note that DESiRED imposes an upper limit of 70ms, which is lower than the value employed by iRED100ms, thereby dispelling this theory. In this context, we believe that DESiRED's fine-tuned approach enables it to determine the optimal target delay value for each network state during the sinusoidal load.
## 6 Lessons Learned
In this section, we will provide insights and lessons learned from our research on applying RL to computer network problems. These insights may be valuable to the scientific
Figure 17: QoS measurements of the MPEG-DASH video service. DESiRED improves FPS and LBO while minimizing video stall.
community interested in using RL for similar applications.
_1) The network has an intrinsic dynamism in its behavior:_ In the realm of RL, the challenges posed by computer networks present an intriguing and multifaceted problem. In essence, an RL problem can be likened to a strategic game where an agent interacts with an environment, making decisions and receiving rewards, all within the framework of a MDP. In each of these interactions, often referred to as episodes, the agent engages in a continuous process of trial and error, striving to acquire a policy that maximizes its cumulative rewards.
However, the application of RL models to computer network-related predicaments introduces a unique set of challenges. Contemporary networks, characterized by their dynamic nature and intricate traffic dynamics, necessitate a novel approach to the integration of RL. One of the central predicaments lies in adapting an RL agent to an environment that is in perpetual flux, a paradigm well-embodied by the ever-changing states of queues within network routers.
Of notable significance is the realization that RL agents draw their learning from the experiences accumulated through their interactions with the environment. This very dependence on real-time experiences, further compounded by the interdependence between video player metrics and network conditions--themselves subject to the agent's actions--renders the use of static datasets for agent training impractical. In situations where a physical network infrastructure is not readily available, a promising alternative entails the utilization of a model capable of simulating authentic network behaviors, such as a Generative Adversarial Network (GAN) as proposed by Navidan et al. [29].
An additional layer of complexity is introduced through the modulation of network load patterns, a deliberate endeavor aimed at inducing the RL agent to adapt dynamically to both peak (high load) and trough (low load) network scenarios. In this pursuit, an array of network load settings was meticulously explored, encompassing flashcrowd and sinusoid patterns. Notably, the most compelling outcomes were achieved when employing sinusoidal patterns, characterized by single instances of peak and trough conditions within the duration of video streaming.
Furthermore, calibrating the parameters of the reward policy emerged as a point of paramount importance in the implementation of the DESiRED system. It was during this phase that some of the most noteworthy findings and developments transpired. Remarkably, revising the reward policy had a disproportionate influence on the observed outcomes, eclipsing the impact of the other elements of the proposed approach. This underscores the pivotal importance of meticulous and judicious reward policy design tailored to the specific problem domain.
In summary, applying RL methodologies to the domain of computer networks is a challenging but rewarding endeavor. It requires an astute orchestration of dynamic simulations, judicious load modulation, and the nuanced refinement of reward policies, all aimed at navigating the intricate terrain of modern network optimization.
_2) The core of solution design lies in the rewards policy:_ As previously mentioned, a significant portion of our modeling effort was dedicated to defining a rewards policy that aligns with our goal of maximizing the QoS in MPEG-DASH. Initially, we considered focusing solely on the FPS values during video playback. However, this approach
proved insufficient due to the dynamic nature of the video player's adaptation logic, which considers factors like throughput and buffer level. As the agent's actions influence the target delay on network devices, we anticipated that FPS values would only exhibit noticeable changes following alterations in the LBO, as LBO is more responsive to network variations. Consequently, we opted to construct our rewards policy, with a primary emphasis on evaluating LBO levels, and secondary consideration given to FPS.
_3) Why actuate in all devices at the same time:_ As actions taken by our agent are intrinsically intertwined with the rewards policy, this study delves into several approaches, including the independent execution of actions on individual switches (each switch having its specific action) or the simultaneous execution of identical actions across all switches within the network. Initially, we contemplated that employing independent actions for each switch could be an appealing strategy. However, this approach did not align seamlessly with the scope of our problem.
Firstly, modifying the target delay for a single switch might not suffice to effectively assist the TCP congestion control algorithm, potentially yielding inconspicuous improvements in application-level QoS. Secondly, the adoption of such an approach would entail a proliferation of actions, scaling exponentially with the number of switches in the network (i.e., \(2^{n}\) actions, with \(n\) representing the count of switches). This increase in action space complexity could substantially augment the neural network architecture's intricacy.
_4) When we need to think in Transfer Learning (TL):_ Given the vast diversity of services, applications, topological configurations, and network loads encountered, it is imperative to acknowledge that an agent trained within a specific network environment cannot be expected to replicate its performance in other heterogeneous settings. In response to this challenge, TL has emerged as a promising approach, aiming to address several intricacies not typically encountered in the realm of RL. However, the application of TL within an RL framework is a non-trivial undertaking, necessitating numerous adaptations to enable the agent to effectively leverage knowledge acquired in a source domain for application in a target domain.
Amidst the inherent complexities of this context, numerous questions naturally arise, including but not limited to: a) What types of knowledge are amenable to successful transfer? b) Which RL structures are best suited for integration into a TL framework? c) What truly distinguishes a source domain from a target domain? These inquiries, among many others, prompt comprehensive exploration. While the extant literature, such as previous work by [30], has endeavored to shed light on these considerations, we posit that a dedicated examination of these issues within the specific context of Transfer Learning in RL, particularly within computer network problem domains, is required.
## 7 Conclusions and Future Directions
In summary, this study introduces DESiRED (Dynamic, Enhanced, and Smart iRED) as an innovative solution to tackle the long-standing issue of fixed target delay in AQM systems. By harnessing advanced network telemetry within programmable data planes
and leveraging the capabilities of deep reinforcement learning, DESiRED emerges as a formidable tool to augment TCP congestion control mechanisms. In this novel framework, DESiRED utilizes high-resolution router buffer measurements, collected at line rate within the data plane, as inputs to deep reinforcement learning models residing on the control plane. Empowered by these synergistic components, the agent undertakes dynamic adjustments to the AQM's target delay in real-time, with the overarching goal of optimizing QoS for networked applications.
The comprehensive evaluation conducted within a realistic testbed, featuring the contemporary adaptive bitrate schemes for HTTP-based streaming (MPEG-DASH), reaffirms the viability of DESiRED. Throughout a diverse range of scenarios, encompassing various real-world traffic loads, our results consistently indicate the efficacy of dynamic target delay adjustments in enhancing the QoS of DASH video services for end users.
Considering the inherent dynamism of computer network environments, the prospect of transitioning toward TL has surfaced as a compelling avenue for future exploration. Nevertheless, the intricate challenges associated with this paradigm necessitate dedicated research endeavors to delve into these complexities in greater depth. As such, we recommend that this critical topic be addressed in forthcoming investigations.
|
2306.05188 | Emergence of synchronisation in a driven-dissipative hot Rydberg vapor | We observe synchronisation in a thermal (35-60 {\deg}C) atomic (Rb) ensemble
driven to a highly-excited Rydberg state (principal quantum number n ranging
from 43 to 79). Synchronisation in this system is unexpected due to the atomic
motion, however, we show theoretically that sufficiently strong interactions
via a global Rydberg density mean field cause frequency and phase entrainment.
The emergent oscillations in the vapor's bulk quantities are detected in the
transmission of the probe laser for a two-photon excitation scheme. | Karen Wadenpfuhl, C. Stuart Adams | 2023-06-08T13:38:28Z | http://arxiv.org/abs/2306.05188v2 | # Emergence of synchronisation in a driven-dissipative hot Rydberg vapor
###### Abstract
We observe synchronisation in a thermal (35-60 \({}^{\circ}\)C) atomic (Rb) ensemble driven to a highly-excited Rydberg state (principle quantum number \(n\) ranging from 43 to 79). Synchronisation in this system is unexpected due to the atomic motion, however, we show theoretically that sufficiently strong interactions via a global Rydberg density mean field causes frequency and phase entrainment. The emergent oscillations in the vapor's bulk quantities are detected in the transmission of the probe laser for a two-photon excitation scheme.
Nonlinear systems are abundant in nature, where the nonlinearities introduce a range of rich and varied phenomena. Well known is the ability of nonlinear systems to generate multiple steady states, so that the system's state is determined by its past trajectory and hysteresis loops may form. Such multistable states have been observed in numerous biological [1; 2; 3; 4], mechanical [5; 6; 7], and atomic [8; 9; 10; 11] systems. Nonlinear dynamics and bifurcation theory provide a modelling framework for these phenomena, enabling a fundamental understanding of the underlying processes from within a generalised mathematical framework.
When adding dissipation to a conservative nonlinear system, the resulting dynamics get even richer and the system can support rather unexpected types of stable solutions. Under certain conditions, dissipative systems with nonlinearities can support chaotic behavior [12; 13] or limit cycles and time-periodic solutions [14; 15]. A Hopf bifurcation may cause the appearance of attractive limit cycles, which leads to self-sustained oscillations of the system. This oscillatory behavior is not imprinted by an external drive but arises fundamentally from the system's dynamics. Such self-oscillating systems have been found to model biological processes [16; 17; 18; 19; 20] and physical systems [21; 22; 23; 24].
A very curious question regards the behavior of an ensemble of self-sustained oscillators experiencing a form of coupling to one another, or to an external force. First studied by Kuramoto for an ensemble of globally coupled oscillators with different natural frequencies [25], it has been found that - under certain conditions - all or a subset of the oscillators begin to lock in frequency and phase [26; 27; 28]. As a result, a transition towards a synchronised state occurs in the ensemble. This synchronisation transition has been used to explain e.g. the strong lateral vibrations of the Millennium bridge, London, on its opening day [29], though this is contested [30], or the Belousov-Zhabotinsky and other chemical reactions [31; 32]. In nature, synchronisation occurs in ensembles of fireflies flashing in unison [33], the chirps of snowy tree crickets [34], and occasionally in the applause of audiences [35].
To further study the emergence of synchronisation and the resulting non-equilibrium dynamics, a simple and easily controllable system with a macroscopic number of coupled oscillators and tunable properties is highly desirable. In the following, we demonstrate that the occurrence of a synchronised phase is expected in a continuously driven, dissipative three-level system with a power law coupling to a mean field, and report on the observation of synchronisation in a hot Rydberg vapor. A surprising, but expected, feature of this system is that oscillations of the bulk quantities remain observable even though the individual constituents are undergoing random motion.
Figure 1: **Single velocity class model.** The basic model with the relevant parameters is shown in (a). An example steady-state solution of the resulting nonlinear OBEs is shown in (b) where the dark-red steady-state branch is repulsive and green indicates the limit cycle region. For a fixed detuning \(\Delta_{e}/\Gamma_{ge}=-1\), indicated by the dashed line, the time evolution from an initial state \(\ket{\Psi}_{t=0}=(1-x)\ket{g}+x\ket{r}\) with \(x\in\ [0,1]\) towards a limit cycle is shown in (c). For the same time traces, a phase space projection of the limit cycle in the \(\rho_{ge}\)-plane is shown in (d). The other model parameters were set to: \(\Delta_{p}=0\), \(\Omega_{p}/\Gamma_{ge}=3.8\), \(\Omega_{e}/\Gamma_{ge}=2\), \(V/\Gamma_{ge}=-12\), \(\Gamma_{er}/\Gamma_{ge}=10^{-5}\), \(\Gamma_{gr}/\Gamma_{ge}=10^{-2}\) and \(\beta=3\).
Rydberg atoms are known to interact strongly with a power law scaling in distance. This translates into a mean-field approach [36] with power law scaling \(\beta\) of the Rydberg level shift in Rydberg density \(\rho_{rr}\). A similar power law scaling can also be used to model the level shift induced by ionization [37] or other mean-field inducing mechanisms. Adopting this mean field approach, the resulting equations of motion (EOMs) are formulated for a three-level basis set with coherent driving by \(\Omega_{x}\) and dissipation \(\Gamma_{yz}\), see figure 1 (a). For \(\beta\neq 0\), the EOMs are nonlinear and their steady states are defined by the roots of a polynomial of order \(\max(4\beta+1,1)\) in \(\rho_{rr}\).
The resulting steady-state solutions of the nonlinear EOMs reveal regions of multistability where an odd number of equilibria exist for one set of parameters \(\Omega_{x},\Delta_{x},\Gamma_{yz},V,\beta\). To extract the stability of the solutions, the spectrum of eigenvalues \(\lambda_{j}\) of the linearisation (the Jacobian) is evaluated at the steady state [38]. Stability is guaranteed if \(Re(\lambda_{j})<0\) for the eight non-constant eigenvalues. Consequently, the repulsive branch marked in red in figure 1 (b) is detected by spectral analysis. However, the steady states indicated in green are also unstable. Here, a Hopf bifurcation occurs where a complex-conjugate pair of eigenvalues \(\lambda_{j}\) crosses the imaginary axis and renders the steady state unstable. As a result, the system is attracted towards a limit cycle, which leads to robust self-sustained oscillations of the system parameters in time. Figure 1 (c) and (d) show that the system is attracted to the same limit cycle for different initial states, but each initial state leads to a different phase in the limit cycle at any fixed time \(t\). This freedom of phase in the limit cycle is indicative of a self-oscillating system and fundamentally distinguishes it from a periodically driven system, where the phase in the limit cycle is locked to that of the drive. The freedom of phase in the resulting limit cycle has also been described using the language of continuous time crystals [39; 40]. The time-crystal interpretation in the context of our experiment is discussed in appendix E of the supplementary material.
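The stability test described here is straightforward to script. The sketch below assumes the \(8\times 8\) Jacobian of the OBEs has already been evaluated at the fixed point and merely inspects its spectrum; the Hopf bifurcation itself is identified by a complex-conjugate pair crossing the imaginary axis as a parameter is swept.

```python
import numpy as np

def is_stable(jacobian: np.ndarray) -> bool:
    """Linear stability: all eigenvalues in the left half-plane."""
    return bool(np.all(np.linalg.eigvals(jacobian).real < 0))

def has_unstable_complex_pair(jacobian: np.ndarray) -> bool:
    """Flags a complex pair with positive real part, the signature left
    behind once a Hopf bifurcation has destabilized the steady state."""
    eig = np.linalg.eigvals(jacobian)
    return bool(np.any((eig.real > 0) & (eig.imag != 0)))
```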
Although optical bistability has been found experimentally in driven-dissipative hot Rydberg vapors [10], one would intuitively expect any oscillations in this system to average out due to atomic motion. The motion-induced dephasing for different atomic velocities results in a spread of the natural frequencies of the limit cycles and the phases therein. Although about half of the velocity classes are attracted towards a limit cycle, no macroscopic oscillations can be seen, as shown by the black line in figure 2 (a).
However, the above argument does not account for the spatial dimension of the situation. The Rydberg level shift of any atom in the vapor depends on the spatial Rydberg density of its local environment, so the different velocity classes do not evolve independently of one another. Rydberg atoms of one velocity class experience a level shift depending on the Rydberg population of the other velocity classes in the vapor and, in turn, influence the dynamics of these other velocity classes. When this global coupling between the velocity classes is taken into account, the resulting dynamics of the vapor is very different, as shown in figure 2 (b) [see also App. C in Supp. Mat.]. After an initial transient phase, synchronisation sets in where the velocity classes begin to oscillate in lockstep with a single frequency and fixed phase relation. This is possible because the phase of a velocity class within its limit cycle is free and therefore easily adjusted by the mean field. With a growing number of velocity classes oscillating in phase lock, the mean field strength increases, which forces even more velocity classes to align their oscillations until eventually a partially or completely synchronised state is reached.
This transition towards a synchronised state of globally coupled oscillators has been known since Christiaan Huygens' time [41] and has since been studied extensively from a mathematical perspective. After the initial work by Winfree [42] and Kuramoto [25], the study of synchronisation has been extended to more general forms of the global coupling force [27; 28] and other situations [43]. Famous examples where synchronisation has been demonstrated experimentally for a few oscillators are pendulum clocks [41] or metronomes [44] fixed to a common support, which provides the coupling. However, large numbers of globally coupled oscillators with widely tunable properties are not so easily available. Therefore, a hot Rydberg vapor with \(\sim\mathcal{O}(10^{9})\) atoms in the beam volume, and a somewhat lower number of oscillators, provides an ideal testbed for an experimental study of the synchronisation transition for large numbers of constituent oscillators.
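For intuition, the classic globally coupled Kuramoto model, \(\dot{\theta}_{i}=\omega_{i}+(K/N)\sum_{j}\sin(\theta_{j}-\theta_{i})\), can be simulated in a few lines. This illustrates the entrainment mechanism generically and is not a model of the Rydberg mean field itself.

```python
import numpy as np

def kuramoto_order(omega: np.ndarray, K: float, dt: float = 0.01,
                   steps: int = 20000) -> float:
    """Return the final order parameter r = |<exp(i*theta)>|.

    r -> 1 signals synchronisation once K exceeds the critical coupling;
    the mean-field identity (K/N) sum_j sin(theta_j - theta_i)
    = K r sin(psi - theta_i) is used for efficiency.
    """
    theta = np.random.uniform(0, 2 * np.pi, omega.size)
    for _ in range(steps):
        z = np.exp(1j * theta).mean()               # mean field r * exp(i*psi)
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.exp(1j * theta).mean()))
```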
In our experiment, we use \({}^{87}\)Rb number densities of \(\rho_{87Rb}\in[0.1,6.1]\cdot 10^{11}\) cm\({}^{-3}\), which corresponds to tem
Figure 2: **Thermal vapor simulation showing emergence of synchronisation.** A thermal vapor simulation for uncoupled (a) and coupled (b) velocity classes shows the emergence of synchronisation via the Rydberg density induced mean field. The time-evolution and corresponding steady-state spectrum are shown on the left and right, respectively. Simulation parameters were \(\Omega_{p}/\Gamma_{ge}=6\), \(\Omega_{c}/\Gamma_{ge}=4\), \(\Delta_{p}=0\), \(\Delta_{c}/\Gamma_{ge}=-11\), \(\Gamma_{err}/\Gamma_{ge}=10^{-5}\), \(\Gamma_{gr}/\Gamma_{ge}=10^{-2}\), \(V/\Gamma_{ge}=-800\), \(\beta=2\) and \(N_{vel}=101\) velocity classes with equal populations. The atomic velocity distribution corresponds to that of a rubidium vapor on the D\({}_{2}\) line at 48 \({}^{\circ}\)C.
peratures from 35 \({}^{\circ}\)C to 60 \({}^{\circ}\)C for a vapor of rubidium with natural abundance. The probe laser was locked to a detuning of \(\Delta_{p}/2\pi=-140\) MHz below the \({}^{87}\)Rb resonance with the intermediate state \(\ket{5S_{1/2},F=2}\rightarrow\ket{5P_{3/2},F=3}\). The counterpropagating coupling laser was set to scan through two-photon resonance with a \(\ket{nS_{1/2}}\) or \(\ket{nD_{5/2}}\) Rydberg state at typical scan speeds of up to \(2\pi\times 10\) MHz/ms. Typical Rabi frequencies were in range \(\Omega_{p}/2\pi\in[100,330]\) MHz and \(\Omega_{c}/2\pi\leq 35\) MHz for Rydberg states with principal quantum numbers \(n\) ranging from 43 to 79. Different beam waists of up to \(w\leq 1\) mm and beam waist ratios of \(w_{p}/w_{c}\approx 2,~{}0.9,~{}0.5\) have been tried, but no direct dependence on the beam waists has been observed. The data presented here was obtained for \(w_{p}=390\)\(\upmu\)m and \(w_{c}=440\)\(\upmu\)m. Setup and relevant level scheme are shown in figure 3 (a) and (b).
Figure 3 (c) shows a typical series of scans for fixed probe and increasing coupling Rabi frequency. After an onset of bistability in the optical response, a window featuring oscillations in the vapor transmission opens. This synchronisation window widens for a further increase in coupling Rabi frequency. When instead setting the coupling Rabi frequency to a fixed value, the width of the oscillation region decreases with increasing probe Rabi frequency (see also App. D in Supp. Mat.). In the various parameter regimes that were explored experimentally, the synchronisation regime is often preceded by bistability but not necessarily so. We find a strong dependence of the onset of oscillations on the Rydberg state and vapor density. Higher atom number densities require lower Rabi frequencies for the oscillations to set in. This behavior is expected from a synchronisation perspective since larger global coupling strengths require lower mean-field strengths to initiate entrainment.
We observe an onset of synchronisation for coupling to both \(n\)S and \(n\)D Rydberg states, though it is easier to explore the behavior and scaling when coupling to D states due to the stronger dipole coupling at similar \(n\). The oscillations were also observed when coupling a fourth P or F state with an additional rf field in both the weak and strong driving limit, respectively. In the fully Autler-Townes split regime, oscillations occurred as long as the Rydberg population was high enough. The presence of synchronisation is therefore neither a purely three-level phenomenon, nor does it depend on the orbital angular momentum of the Rydberg state.
With all system parameters held constant and fixed laser detunings, the synchronised state persists on timescales on the order of minutes, and the oscillations maintain their shape. Analysis of a time trace reveals a narrow frequency peak with a spectrum of weaker, higher harmonics (also shown in App. D of Supp. Mat.). The oscillation frequency \(\nu_{osc}\) of the first peak was usually observed to lie between 10 kHz and 25 kHz, though persistent oscillations of up to 43 kHz were measured. In Fig. 3 (c) one can see that the oscillation frequency varies along the coupling laser scan. As a general trend, an increase in oscillation frequency \(\nu_{osc}\) with increasing Rabi frequencies was observed. Additionally, the formation of several separate synchronisation regions, typically with a different range of \(\nu_{osc}\) but similar shapes of the oscillations along the region, has been found. This is also
Figure 3: **Setup and example onset of oscillations.** (a) The counterpropagating probe and coupling lasers are polarisation cleaned with a polarising beamsplitter (PBS) after exiting the fibers. The subsequent acousto-optic modulator (AOM) and aperture are used to remote control the laser powers incident on the heated, 4 cm long rubidium cell. The probe light is detected by a photodetector (PD). (b) shows the relevant level scheme for two-photon spectroscopy in rubidium. The coupling laser addresses either the \(\ket{n^{\prime}S_{1/2}}\) or the \(\ket{nD_{m_{J}}}\) state(s). In (c), an example set of traces obtained for fixed probe Rabi frequency \(\Omega_{p}/2\pi=191\) MHz and increasing coupling Rabi frequencies is shown. The Rydberg laser is coupled to the \(\ket{43D_{5/2}}\) state, and the number density is \(\rho_{87Rb}=(4.7\pm 0.2)\cdot 10^{10}\) cm\({}^{-3}\). Here, the oscillatory regime is preceded by an onset of bistability.
visible in figure 3 (c), where the two regions share a boundary at \(-\Delta_{c}/2\pi\approx 26,\ 36,\ 48\) MHz for \(\Omega_{c}/2\pi=29,\ 33,\ 38\) MHz, respectively.
Figure 4 (a) shows the change in oscillation shape and frequency with increasing \(\Delta_{c}\). Each highlighted segment samples the time dependence at a particular detuning as the laser frequency is scanned in time slowly relative to \(\nu_{osc}\). The rightmost zoom-in (red) belongs to the next synchronisation region beginning at \(\Delta_{c}/2\pi\approx-45\) MHz. It shows again the sawtooth-like shape at its lower frequency end that can also be seen in the two leftmost insets. Figure 4 (b) - (d) show results obtained with the thermal vapor simulation. The imaginary part of the coherence \(\rho_{ge}^{i}\) shown in (c) and (d, dashed) is linearly proportional to the probe laser transmission via the probe electric susceptibility [45]. Two limit cycle regions appear in the spectrum (b), though a cross-section of phase space shows that the case \(\Delta_{c}/\Gamma_{ge}=-3\) is not a limit cycle but resembles a system near a strange attractor. Generally, the thermal vapor model shows regions of multistability which implies that the pointwise integration technique in (b) cannot accurately model a laser scan. This is because the thermal vapor system's trajectory depends on its past state and the attractor it is drawn to, which pointwise integration does not account for.
The thermal vapor model reproduces the observed experimental behavior phenomenologically. This includes changes in the width of the synchronisation region with changes in \(\Omega_{p}\) or \(\Omega_{c}\) and the earlier onset of oscillations at lower \(\Omega_{c}\) for increasing interaction strengths \(V\) as shown in the data of Appendix D in the Supp. Mat., as well as the expected shape of the oscillations. Therefore, we attribute the emergence of macroscopic oscillations in the bulk response of a hot Rydberg vapor to a Kuramoto-like synchronisation transition for sufficiently large global coupling strengths. Possible mechanisms causing the power law scaling of the Rydberg density mean field are Rydberg interactions [36] or charge-induced Stark shifts due to ionisation [37], though other effects could possibly lead to similar power-law scaling behaviors.
In summary, we observe the transition towards synchronisation in a strongly driven, dissipative, hot Rydberg vapor. The observed changes of the synchronised region with variation of the Rabi frequency, vapor density, and interaction strength is reproduced by a theoretical model extended to a thermal vapor simulation. The model's nonlinearity leads to the emergence of attractive limit cycles for individual velocity classes through a Hopf bifurcation. Under the influence of global coupling through the shared Rydberg density, the constituent oscillating velocity classes synchronise in a thermal vapor, which leads to periodic oscillations of the vapor's bulk quantities. The resulting synchronised phase is robust and stable, and therefore ideally suited for an experimental investigation of the emergent non-equilibrium phase of matter. It provides a simple platform for the study of synchronisation in a nonlinear system with a truly macroscopic number of oscillators.
Figure 4: **Change in oscillation shape and frequency along coupling laser scan**. (a) shows the oscillation region for a scan across resonance with \(|43D_{5/2}\rangle\) at \(T=(52.0\pm 0.5)\)\({}^{\circ}\)C with \(\Omega_{p}/2\pi=191\) MHz, \(\Omega_{c}/2\pi=37\) MHz, and a scan rate of \(2\pi\times 10\) MHz/ms. The colored insets show a zoom-in of the trace in the color-shaded regions, each of width \(2\pi\times 4.8\) MHz. Different shapes of the oscillations can be distinguished. (b) Pointwise integrated spectrum with errorbars denoting the amplitude of the oscillations. The time evolution towards a limit cycle is shown in (c) with the inset showing only the limit cycles approached after an integration time of \(t=5000\,\Gamma_{ge}^{-1}\). In (d), the oscillations in Rydberg population \(\rho_{rr}^{r}\) (solid) and in the imaginary part of the coherence \(\rho_{ge}^{i}\) (dashed) are shown. The case \(\Delta=-3\Gamma_{ge}\) did not approach a limit cycle within the maximum integration time but behaves similarly to a system near a strange attractor. The simulation assumes a thermal vapor with \(N_{vel}=101\) velocity classes with equal populations, \(\Omega_{p}=1.5\), \(\Omega_{c}=1\), \(\Delta_{p}=0\), \(\Gamma_{er}=10^{-6}\), \(\Gamma_{gr}=10^{-3}\) and \(V=-300\), in units of \(\Gamma_{ge}\), and \(\beta=2\).
_Author's note:_ During completion of this work, two other reports of oscillations in a continuously driven hot Rydberg vapor were reported. In [46], the oscillations are of a transient nature and the probe Rabi frequency is significantly lower than in this work. The authors attribute the origin of the limit cycles to spatial inhomogeneities and clustering of Rydberg atoms. In [47], the experimental parameter regime is similar to this work. The limit cycles are attributed to a competition for Rydberg population between energetically closely spaced Rydberg states.
###### Acknowledgements.
K.W. acknowledges insightful discussions with Finn Munnich and Matt Jamieson, and thanks Matthias Weidemüller. C.S.A. acknowledges fruitful discussions with Dong-Sheng Ding. The authors furthermore thank Lucy Downes, Max Festenstein, Oliver Hughes, and Kevin Weatherill. Financial support was provided by the UKRI, EPSRC grant reference number EP/V030280/1 (“Quantum optics using Rydberg polaritons”).
|
2310.00716 | Efficient MPC for Emergency Evasive Maneuvers, Part II: Comparative
Assessment for Hybrid Control | Optimization-based approaches such as Model Predictive Control (MPC) are
promising approaches in proactive control for safety-critical applications with
changing environments such as automated driving systems. However, the
computational complexity of the MPC optimization problem coupled with the need
for real-time control in hazardous scenarios is the main bottleneck in
realization of automation levels four and five for driving systems. In this
paper, we construct hybrid formulations of the nonlinear MPC problem for
tracking control during emergency evasive maneuvers and assess their
computational efficiency in terms of accuracy and solution time. To hybridize
the MPC problem, we combine three hybrid approximations of the prediction model
and four approximations of the nonlinear stability and tire saturation
constraints and simulate the closed-loop behavior of the resulting controllers
during five emergency maneuvers for different prediction horizons. Further, we
compare the robustness of the controllers in the presence of friction
uncertainty as well to assess the accuracy-time trade-off in cases where the
friction of the road is either unknown or has an offset error with respect to
the prediction model. This robustness is studied for different levels of
friction uncertainty, as well as investigated with respect to the proximity to
the vehicle handling limits. We show that the hybridization of the MPC problem
is an efficient approach for real-time implementation of MPC during emergency
evasive maneuvers, paving the way for implementation of high levels of
automation. | Leila Gharavi, Bart De Schutter, Simone Baldi | 2023-10-01T16:36:00Z | http://arxiv.org/abs/2310.00716v2 | # Efficient MPC for Emergency Evasive Maneuvers, Part II: Comparative Assessment for Hybrid Control
###### Abstract
Optimization-based approaches such as Model Predictive Control (MPC) are promising approaches in proactive control for safety-critical applications with changing environments such as automated driving systems. However, the computational complexity of the MPC optimization problem coupled with the need for real-time control in hazardous scenarios is the main bottleneck in realization of automation levels four and five for driving systems. In this paper, we construct hybrid formulations of the nonlinear MPC problem for tracking control during emergency evasive maneuvers and assess their computational efficiency in terms of accuracy and solution time. To hybridize the MPC problem, we combine three hybrid approximations of the prediction model and four approximations of the nonlinear stability and tire saturation constraints and simulate the closed-loop behavior of the resulting controllers during five emergency maneuvers for different prediction horizons. Further, we compare the robustness of the controllers in the presence of friction uncertainty as well to assess the accuracy-time trade-off in cases where the friction of the road is either unknown or has an offset error with respect to the prediction model. This robustness is studied for different levels of friction uncertainty, as well as investigated with respect to the proximity to the vehicle handling limits. We show that the hybridization of the MPC problem is an efficient approach for real-time implementation of MPC during emergency evasive maneuvers, paving the way for implementation of high levels of automation.
Model predictive control, Evasive maneuvers, Vehicle control, Hybrid control
## I Introduction
Real-time implementation of Nonlinear MPC (NMPC) for high-speed safety-critical evasive maneuvers is an open research problem [1]. Two particular reasons contribute to this: high computation times for solving a NonLinear Program (NLP) compared to a linear or a Quadratic Program (QP), and convergence to a local optimum, which is highly sensitive to the initial guess provided to the NLP solver.
Proactive vehicle control in hazardous scenarios requires using the full control potential of the system, which means that fast sub-optimal solution techniques for the NLP [2, 3, 4] are not suitable to incorporate [5]. To mitigate the slow convergence of NLP solvers, an upper bound is often imposed on the computation time as a stopping criterion; this bound can be selected, e.g., as a function of the complexity of the problem in terms of the prediction horizon, the number of decision variables, etc. If the solver does not converge to an optimum before hitting this bound, the solution from the previous step is shifted and used [6]. Nevertheless, if this occurs repeatedly and the controller does not converge to a solution for consecutive steps, this may result in a large degree of suboptimality or even infeasibility.
A popular approach for selecting the initial guess is using a warm-start strategy based on shifting the previous solution to tailor it for the current MPC optimization problem [7, 8, 6], which is suitable provided that the previous step converged to a good solution. This however is a restrictive condition, for which [9] proposed using a tangential solution predictor instead of shifting, which is essentially based on using the concept of parametric sensitivity of the NLP for constructing new initial guesses. Nevertheless, warm start is a suitable strategy only if the solver converged to a "good" solution in the previous step [10]. Other strategies to improve the initial guess include using the reference trajectory [10], using the inverse static model of the system [11], or selecting the solution to a simpler approximation of the NLP e.g., a QP [12]. Nevertheless, the mentioned approaches are not sufficient for real-time proactive control during emergency evasive maneuvers where a more extensive search in the decision space is required.
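As a concrete picture of the shifting strategy discussed above, the sketch below (an assumption-level illustration; the function name and the repeat-last-input rule are ours, not taken from [6, 7, 8]) builds a warm start by dropping the first input of the previous solution and repeating the final one:

```python
import numpy as np

def shifted_warm_start(u_prev, m):
    """Shift the previous input sequence one step forward in time:
    drop u(k) and repeat the final input as a guess for the new last step."""
    u = u_prev.reshape(-1, m)                 # rows: inputs over the horizon
    return np.vstack([u[1:], u[-1:]]).ravel()

u_prev = np.arange(6.0)                       # toy horizon Np=3, m=2 inputs
print(shifted_warm_start(u_prev, 2))          # [2. 3. 4. 5. 4. 5.]
```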
During emergency maneuvers, relying on one solution is restrictive: even with improvements to the search direction and transformation, the search for the optimum would be limited to a neighborhood of the solution from the previous time step. However, abrupt changes to the reference trajectory, e.g., due to the sudden appearance of an obstacle on the road, require a more extended exploration of the search space to increase the likelihood of finding an acceptable optimum. In this sense, [13] uses a divide-and-conquer strategy in searching for starting regions based on the current state and then picks the first solution that satisfies an acceptable bound on the objective. While this method improves convergence to better optima, it still does not expand the search region in case of abrupt changes in the reference. In [14], multiple filtered random initial guesses are used to solve the NLP problem, and in [15] the NLP is solved offline to learn a dataset of "good" initial guesses for use in real time; this can improve upon relying on one solution without wasting additional computational effort on initial guesses with lower improvement value. However, this approach is only applicable when there is sufficient and reliable data to learn such guesses, which is usually not currently available for vehicle control in hazardous scenarios.
While multi-start solution of the NLP improves the chances of converging to a suitable optimum to use the full control potential of the vehicle, it significantly increases the computation time, which is the main obstacle toward proactive control and real-time implementation of MPC during emergency scenarios. In this sense, hybridization of the nonlinear control optimization problem was proposed [16, 17] to balance the
computational efficiency via the trade-off between accuracy and the convergence speed by using a hybrid systems formalism [18] to express the prediction model and the nonlinear constraints.
Hybrid MPC for vehicle tracking control has attracted attention as a potential solution to tackle the problem of computational efficiency [19, 20, 21, 22, 23]. Nevertheless, to the best of our knowledge, the capability of the hybridization approach in improving the computational efficiency of MPC has neither been assessed for highly-nonlinear prediction models, nor investigated during hazardous scenarios and aggressive evasive maneuvers. In this sense, such scenarios are particularly important since they require using the full control potential of the vehicle in its handling limits and the need for fast computation is critical in collision avoidance.
This work is the second part of the publication "Efficient MPC for Emergency Evasive Maneuvers". In "Part I: Hybridization of the Nonlinear Problem" we proposed an approach to approximate the prediction model and nonlinear physics-based constraints using a hybrid system formalism. In particular, we exploited the Max-Min-Plus-Scaling (MMPS) formulation to obtain a hybrid representation of the MPC optimization problem. In this paper, we use the approximated prediction model and constraints to formulate and to solve the MPC problem as either a Mixed-Integer Linear Program (MILP) or a Mixed-Integer Quadratically-Constrained Program (MIQCP). We then investigate the trade-off between the accuracy and the computation speed of the resulting hybrid MPC controllers against their nonlinear counterparts. The computational performance of the hybrid and nonlinear controllers are assessed during five aggressive evasive maneuvers, representing abrupt changes in the reference trajectory due to a hazardous situation such as a sudden appearance of an obstacle on the road. Further, we investigate the tracking errors in the presence of uncertainty in the friction coefficient as an offset as well as a disturbance such as a significant decrease of friction due to the presence of water on a section of the road.
This paper is organized as follows: the theoretical background such as the formulation of the nonlinear and hybrid MPC problems is explained in Section II. To make this paper self-contained, we briefly recall the hybridization approach in Part I and its corresponding notation. However, for more details, the reader is referred to Part I of this paper. Section III explains different aspects of the comparison benchmark and assessment criteria e.g., the choice of driving scenarios and the prediction horizons. The results of the simulations and the comparative assessment are discussed in Section IV and finally, Section V presents the main results and draws an outlook for future research.
## II Background
### _Model and Constraint Hybridization_
Consider a nonlinear discrete-time system
\[x(k+1)=F\left(x(k),u(k)\right),\]
where \(x\in\mathbb{R}^{n}\) and \(u\in\mathbb{R}^{m}\) represent the state and input vectors, respectively. We approximate each component \(F_{s}\) of \(F=\begin{bmatrix}F_{1}&\ldots&F_{n}\end{bmatrix}^{T}\) separately by an MMPS function \(f_{s}\) with the Kripfganz form [24] as
\[\begin{split} f_{s}(x,u)=\max\left(\phi_{s}^{+}(x,u)\right)-\max \left(\phi_{s}^{-}(x,u)\right),\\ \forall s\in\{1,\ldots,n\},\end{split} \tag{1}\]
where the vectors \(\phi_{s}^{\eta}:\mathbb{R}^{m+n}\rightarrow\mathbb{R}^{P_{s}^{\eta}}\) with \(\eta\in\{+,-\}\) are affine functions of \(x\) and \(u\), also referred to as dynamic modes, and expressed via matrices
\[A_{s}^{\eta}\in\mathbb{R}^{P_{s}^{\eta}\times n},\qquad B_{s}^{\eta}\in\mathbb{R}^{P_{s}^{\eta}\times m},\qquad H_{s}^{\eta}\in\mathbb{R}^{P_{s}^{\eta}},\qquad\forall\eta\in\{+,-\},\;\forall s\in\{1,\ldots,n\},\]
as
\[\phi_{s}^{\eta}(x,u)=A_{s}^{\eta}x+B_{s}^{\eta}u+H_{s}^{\eta}.\]
The general form of \(f\) is then given as
\[f(x,u)=\Psi^{+}(x,u)-\Psi^{-}(x,u),\]
where \(\Psi^{+}\) and \(\Psi^{-}\) are vector-valued functions1 with components \(\Psi_{s}^{\eta}(x,u)=\max\left(\phi_{s}^{\eta}(x,u)\right)\) for \(\eta\in\{+,-\}\). Similarly, the nonlinear inequality constraint function \(G\) defines the set
Footnote 1: In Part I, we used a scalar representation of the MMPS formulation to approximate each component of \(F\) separately. In this paper, we use vector-valued representation to make the formulation of the MPC optimization problem more compact.
\[\mathcal{C}\coloneqq\{(x,u)\in\mathbb{R}^{m+n}\mid 0\leqslant G(x,u)\leqslant 1\},\]
where \(\mathcal{C}\) is referred to as the feasible region. It should be noted that we normalize the constraint function to the interval \([0,1]\) to avoid numerical issues in the subsequent control optimization problems.
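To make the Kripfganz form concrete, a minimal sketch of evaluating one component \(f_{s}\) is given below (our illustration; the matrices and the absolute-value example are arbitrary, with dimensions as in the definitions above):

```python
import numpy as np

def mmps(x, u, Ap, Bp, Hp, Am, Bm, Hm):
    """Kripfganz form f(x,u) = max(phi+) - max(phi-), one state component.
    A*: (P, n), B*: (P, m), H*: (P,) define the affine dynamic modes."""
    phi_plus  = Ap @ x + Bp @ u + Hp
    phi_minus = Am @ x + Bm @ u + Hm
    return np.max(phi_plus) - np.max(phi_minus)

# Example: f(x) = |x| = max(x, -x) - max(0)  (n = 1, m = 1, no input term)
Ap = np.array([[1.0], [-1.0]]); Bp = np.zeros((2, 1)); Hp = np.zeros(2)
Am = np.zeros((1, 1));          Bm = np.zeros((1, 1)); Hm = np.zeros(1)
for xv in (-2.0, 0.5):
    print(mmps(np.array([xv]), np.zeros(1), Ap, Bp, Hp, Am, Bm, Hm))
# prints 2.0 and 0.5, i.e. |x| as expected
```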
**Remark 1**.: Without loss of generality, we consider \(G\) to be a scalar function. For more details, the reader is referred to Remark 1 in Part I of this paper.
The region \(\mathcal{C}\) is approximated either by a union of ellipsoids or by using the MMPS formalism, which corresponds to approximating the \(\mathcal{C}\) by a union of convex polytopes. In the MMPS approach, a similar formulation to the MMPS model approximation problem is used: we approximate \(G\) by an MMPS function \(g_{\text{MMPS}}\) of the Kripfganz form (1). The resulting feasible region \(\mathcal{R}_{\text{MMPS}}\) is then expressed as
\[\mathcal{R}_{\text{MMPS}}:=\{(x,u)\in\mathbb{R}^{m+n}\mid 0\leqslant g_{\text{MMPS}}(x,u) \leqslant 1\}. \tag{2}\]
via the boundary function
\[g_{\text{MMPS}}(x,u)=\max\left(\gamma^{+}(x,u)\right)-\max\left(\gamma^{-}(x, u)\right), \tag{3}\]
where \(\gamma^{\eta}:\mathbb{R}^{m+n}\rightarrow\mathbb{R}^{R^{\eta}}\) are affine functions of \(x\) and \(u\) as
\[\gamma^{\eta}(x,u)=C^{\eta}x+D^{\eta}u+I^{\eta},\]
and
\[C^{\eta}\in\mathbb{R}^{R^{\eta}\times n},\quad D^{\eta}\in\mathbb{R}^{R^{\eta}\times m},\quad I^{\eta}\in\mathbb{R}^{R^{\eta}},\qquad\forall\eta\in\{+,-\}.\]
The second way is to approximate the feasible region by a union of \(n_{\text{e}}\) ellipsoids as
\[\mathcal{R}_{\text{ELPS}}:=\{(x,u)\in\mathbb{R}^{m+n}\mid 0\leqslant g_{\text{ ELPS}}(x,u)\leqslant 1\}, \tag{4}\]
whose boundary can be expressed by
\[g_{\text{ELPS}}(x,u)=\min\left(\omega(x,u)\right), \tag{5}\]
where the min operator gives the smallest component in the vector \(\omega\), and where
\[\omega_{e}(x,u)=\begin{pmatrix}x-x_{0,e}\\ u-u_{0,e}\end{pmatrix}^{T}Q_{e}\begin{pmatrix}x-x_{0,e}\\ u-u_{0,e}\end{pmatrix}-1,\qquad\forall e\in\{1,\ldots,n_{\text{e}}\}, \tag{6}\]
with \(Q_{e}\) being a positive definite matrix and \((x_{0,e},u_{0,e})\) representing the center coordinates of the ellipsoid. Note that this representation includes rotated ellipsoids as well.
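A minimal sketch of evaluating the ellipsoidal boundary function (5)-(6) is shown below (our illustration; the centers and \(Q_{e}\) matrices are arbitrary, and a rotated ellipsoid simply corresponds to a non-diagonal \(Q_{e}\)):

```python
import numpy as np

def g_elps(x, u, centers, Qs):
    """Boundary function (5): the smallest omega_e over all ellipsoids.
    omega_e < 0 means (x, u) lies strictly inside the e-th ellipsoid;
    per (4), membership in R_ELPS is then checked as 0 <= g <= 1."""
    v = np.concatenate([x, u])
    return min((v - c) @ Q @ (v - c) - 1.0 for c, Q in zip(centers, Qs))

# Two toy ellipsoids in a 2-D (x, u) space; the second Q is non-diagonal,
# i.e. a rotated ellipsoid.
centers = [np.array([0.0, 0.0]), np.array([2.0, 0.0])]
Qs = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]
print(g_elps(np.array([0.2]), np.array([0.1]), centers, Qs))  # -0.95: inside
print(g_elps(np.array([5.0]), np.array([0.0]), centers, Qs))  # 17.0: far outside
```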
### _MPC Optimization Problems_
The state and input vectors over the whole prediction horizon2\(N_{\text{p}}\) are defined as
Footnote 2: In this study, for the sake of simplicity, the control horizon is assumed to be equal to the prediction horizon \(N_{\text{p}}\).
\[\tilde{x}(k+1)=\left[\tilde{x}^{T}\left(k+1|k\right)\quad\tilde{x}^{T}(k+2|k) \quad\ldots\quad\tilde{x}^{T}(k+N_{\text{p}}|k)\right]^{T},\]
\[\tilde{u}(k)=\left[u^{T}(k)\quad u^{T}(k+1)\quad\ldots\quad u^{T}(k+N_{\text{ p}}-1)\right]^{T},\]
where \(\tilde{x}(k+i|k)\) represents the predicted state of the \((k+i)\)-th time step based on the state measurement at the \(k\)-th time step. In addition and for brevity of expressions, we introduce the generalized form of the systems dynamics \(F\) and inequality constraints \(G\) over the prediction horizon as
\[\left[x(k+1)=F\left(x(k),u(k)\right)\right] \iff\left[\tilde{x}(k+1)=\tilde{F}\left(\tilde{x}(k),\tilde{u}(k) \right)\right],\] \[\left[0\leqslant G\left(x(k),u(k)\right)\leqslant 1\right] \iff\left[0\leqslant\tilde{G}\left(\tilde{x}(k),\tilde{u}(k) \right)\leqslant 1\right].\]
Note that \(\tilde{F}\) is the generalized counterpart of \(F\) by extending the notation over the prediction horizon and not by recursive substitution. For the sake of brevity, \(x(k)\) is not an explicit argument of \(\tilde{F}\) but note that the dependence of \(\tilde{F}\) on \(x(k)\) is implied within the \(\tilde{x}(k)\) argument.
Using the \(\ell_{1}\)-norm in defining the objective function in tracking \(\tilde{x}_{\text{ref}}\), MPC requires solving the optimization problem
\[\min_{\tilde{u}(k)} \left\|\Theta_{x}\left(\tilde{x}(k)-\tilde{x}_{\text{ref}}(k) \right)\right\|_{1}+\left\|\Theta_{u}\tilde{u}(k)\right\|_{1},\] (8) s.t. \[\tilde{x}(k+1)=\tilde{F}\left(\tilde{x}(k),\tilde{u}(k)\right), \tag{9}\] \[0\leqslant\tilde{G}\left(\tilde{x}(k),\tilde{u}(k)\right)\leqslant 1, \tag{10}\]
with \(\Theta_{x}\succeq 0\) and \(\Theta_{u}\succeq 0\) being normalizing diagonal matrices with non-negative entries for the state tracking error and input signals, respectively. Note that the \(\ell_{1}\)-norm is selected to allow a mixed-integer linear description of the objective function.
The hybrid MPC problem can then be formulated as:
\[\min_{\tilde{u}(k)} \Theta_{x}\,\tilde{e}_{x}(k)\;+\;\Theta_{u}\,\tilde{e}_{u}(k)\] (11) s.t. \[-\tilde{e}_{x}(k)\leqslant\tilde{x}(k)-\tilde{x}_{\text{ref}}(k)\leqslant\tilde{e}_{x}(k), \tag{12}\] \[-\tilde{e}_{u}(k)\leqslant\tilde{u}(k)\leqslant\tilde{e}_{u}(k),\] (13) \[\tilde{x}(k+1)=\operatorname{vec}\left(\Psi^{+}(k)\right)-\operatorname{vec}\left(\Psi^{-}(k)\right),\] (14) \[\Psi^{\eta}_{ij}(k)=\max\left(\phi^{\eta}_{i}(k+j-1)\right),\] (15) \[\forall\eta\in\{+,-\},\] \[\forall i\in\{1,\ldots,n\},\] \[\forall j\in\{1,\ldots,N_{\text{p}}\},\]
where (12)-(13) are introduced to obtain a linear representation of the objective function by defining
\[\tilde{e}_{x}(k)=\|\tilde{x}(k)-\tilde{x}_{\text{ref}}(k)\|_{1},\qquad\tilde{ e}_{u}(k)=\|\tilde{u}(k)\|_{1},\]
and (14)-(15) are the hybridized model approximation to replace (9). The \(\operatorname{vec}(\cdot)\) operator in (14) converts its matrix argument into a vector by stacking its components into one column vector. Then, constraint approximation can be hybridized by replacing (10) by the MMPS constraints (16a) for an MILP or the ellipsoidal constraints (16b) for an MIQCP formulation:
\[\Lambda^{\eta}_{j}=\max\left(\gamma^{\eta}(k+j-1)\right), \tag{16a}\] \[\forall\eta\in\{+,-\},\] \[\forall j\in\{1,\ldots,N_{\text{p}}\},\] \[\Omega_{j}(k)=\min\left(\omega(k+j-1)\right),\] (16b) \[\forall j\in\{1,\ldots,N_{\text{p}}\}.\]
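The epigraph reformulation behind (11)-(13) can be checked on a toy problem: minimising \(\|x-r\|_{1}\) becomes a linear program once slacks \(e\) with \(-e\leqslant x-r\leqslant e\) are introduced. A minimal sketch (our illustration, using SciPy rather than the solvers of Section III-F) follows:

```python
import numpy as np
from scipy.optimize import linprog

# min ||x - r||_1  ->  min 1^T e  s.t.  x - e <= r  and  -x - e <= -r
n = 3
r = np.array([1.0, -2.0, 0.5])                 # toy reference
c = np.concatenate([np.zeros(n), np.ones(n)])  # decision vars [x, e]
A_ub = np.block([[np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.concatenate([r, -r])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * n)
print(res.x[:n])   # optimal x reproduces r; the objective (the 1-norm) is 0
```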
**Remark 2**.: The binary variables of the optimization problem are introduced via activating the local modes for the hybrid model and constraints (for more details, see Part I). Therefore, the corresponding MILP problem will have
\[N_{\text{p}}\left(R^{+}+R^{-}+\sum_{s=1}^{n}\left(P_{s}^{+}+P_{s}^{-}\right)\right)\]
binary variables with \(R^{+}\) and \(R^{-}\) being the constraint-approximation counterparts of \(P^{+}\) and \(P^{-}\), and the MIQCP problem will have
\[N_{\text{p}}\left(n_{\text{e}}+\sum_{s=1}^{n}\left(P_{s}^{+}+P_{s}^{-}\right)\right)\]
binary variables.
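The binary variables in Remark 2 come from encoding each \(\max(\cdot)\) term exactly with one binary per mode. A hedged sketch of this standard big-M gadget for a single term, \(z=\max_{i}y_{i}\), is given below (our illustration using the PuLP modeling package; the mode values and the bound \(M\) are arbitrary):

```python
import pulp

# Exact MILP encoding of z = max(y_1, ..., y_P) with one binary per mode:
#   z >= y_i,   z <= y_i + M (1 - d_i),   sum_i d_i = 1,   d_i in {0, 1},
# where M bounds the spread of the modes. One such gadget per max() term
# in (14)-(16a) yields the binary counts stated in Remark 2.
y = [1.0, 4.0, 2.5]            # toy affine-mode values at a fixed (x, u)
M = 10.0
prob = pulp.LpProblem("max_encoding", pulp.LpMinimize)
z = pulp.LpVariable("z")
d = [pulp.LpVariable(f"d{i}", cat="Binary") for i in range(len(y))]
prob += z                      # any objective involving z works for the demo
for yi, di in zip(y, d):
    prob += z >= yi
    prob += z <= yi + M * (1 - di)
prob += pulp.lpSum(d) == 1
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(z))           # 4.0, the maximum of the modes
```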
## III Comparison Benchmark
### _Prediction Model and Physics-Based Constraints_
The nonlinear prediction model is a single-track vehicle with a Dugoff tire [25] model with varying friction as described in Appendix A, with system variables and parameters given in Tables III and IV.
In Part I of this paper, we hybridized the nonlinear model using different grid-generation methods via the MMPS formalism (1) and obtained three hybrid approximations for the nonlinear model labeled by their corresponding grid types as R, S, and T models. The nonlinear physics-based constraints
due to the tire force saturation and vehicle stability were hybridized as well via approximating the feasible region by a union of ellipsoids and by a union of polytopes (using the MMPS formalism) via boundary-based and region-based approximations. There, we obtained four approximations labeled by their approach (R and B) and by the shape of the subregions (MP for MMPS or EL for ellipsoidal) as RMP, BMP, REL, and BEL. Table I summarizes the abbreviations used in this paper for different hybrid models, constraints, and their corresponding controllers. For more details of the boundary-based and region-based approximations and their errors, or the number of introduced binary variables by each approach, the reader is referred to Part I of this paper.
### _Control Parameters_
Given the application, the _control sampling time_\(t_{\text{sc}}\) is often restricted by the capabilities of the control hardware such as the maximum operation frequency. In addition, the _model sampling time_\(t_{\text{sm}}\) is either known a priori for a discrete-time system, or obtained for a continuous-time system with respect to its natural frequency and dynamic behavior. Therefore, we assume these two parameters to be fixed during all the simulations as known system parameters \(t_{\text{sm}}=0.01\)s and \(t_{\text{sc}}=0.05\)s.
In the path tracking MPC literature [1], the time span of the prediction typically covers less than 1.5s ahead for control sampling times shorter than 0.1s. Based on our selected control sampling time \(t_{\text{sc}}=0.05\)s, we therefore test different prediction time spans in the range 0.25s to 1.50s, corresponding to \(N_{\text{p}}\in\{5,10,\ldots,30\}\). This is further explained in Section III-E.
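The mapping between prediction time span and horizon length is simply \(N_{\text{p}}=T_{\text{pred}}/t_{\text{sc}}\); a short check (our illustration):

```python
t_sc = 0.05                                 # control sampling time [s]
spans = [0.25 * k for k in range(1, 7)]     # 0.25 s, 0.50 s, ..., 1.50 s
print([round(T / t_sc) for T in spans])     # [5, 10, 15, 20, 25, 30]
```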
### _MPC Controllers_
In this benchmark, we consider two nonlinear MPC controllers with the nonlinear prediction model and the physics-based constraints. The first one, labeled NL-1, solves the NLP using the warm start only (i.e., with the shifted solution of the previous time step), and the second one, referred to as NL-5, solves the problem for five different initial guesses and selects the best solution found. The initial guesses for NL-5 are as follows: one warm start as in NL-1, one random point within the domain, one point at the lower bound, one at the upper bound, and one in the geometric center, as sketched below.
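A minimal sketch of generating these five starting points is shown below (our illustration; the box bounds and dimensions are placeholders):

```python
import numpy as np

def initial_guesses(u_warm, lb, ub, rng):
    """Five NLP starting points as described for NL-5: warm start, random
    point, lower bound, upper bound, and geometric centre of the box."""
    return [u_warm,
            rng.uniform(lb, ub),
            lb.copy(),
            ub.copy(),
            0.5 * (lb + ub)]

rng = np.random.default_rng(1)
lb, ub = np.full(4, -1.0), np.full(4, 1.0)
for g in initial_guesses(np.zeros(4), lb, ub, rng):
    print(g)
```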
The computational performance of the two NLP controllers is compared against MPC controllers based on combinations of three hybrid models (R, S, and T) with four constraint approximations: two polytopic (RMP and BMP) and two ellipsoidal (REL and BEL). In total we have six MILP and six MIQCP controllers to compare against their NLP counterparts. The hybrid MPC controllers are labeled by combining the abbreviations for their model and constraints, separated by a dash (-) symbol, as described in Table I. Further, MILP and MIQCP controllers can be constructed using our published hybridization toolbox [26].
### _Reference Trajectory_
We compare the computational performance of the nonlinear and hybrid MPC controllers during five two-second maneuvers used as reference trajectories. These maneuvers are selected to represent aggressive evasive maneuvers with different longitudinal velocities, as explained in Table II. The values in the g-g and Kamm columns respectively correspond to the normalized distance of the reference trajectory in the g-g diagram (20) and the Kamm circles (21) with respect to the radii of their respective circles. The column \(\beta\)-\(r\) represents the normalized location with respect to the stability envelope often used in the literature [1] in the same fashion. It should be noted that the
\begin{table}
_Hybrid Models_

| Approximation Grid Type | Abbreviation |
| --- | --- |
| Domain-based random | **R** |
| Trajectory-based steady-state initiated | **S** |
| Trajectory-based randomly initiated | **T** |

_Hybrid Constraints_

| Approach | Ellipsoidal (EL) | MMPS (MP) |
| --- | --- | --- |
| Region-based (R) | **REL** | **RMP** |
| Boundary-based (B) | **BEL** | **BMP** |

_Hybrid MPC Controllers_: [Model abbreviation]-[Constraint abbreviation]; example: **R** model + **BMP** constraint \(\rightarrow\) **R-BMP** controller.
\end{table} TABLE I: Abbreviations for hybrid models and controllers
Fig. 1: Selected maneuvers for the benchmark, represented in terms of the constraints. The green zone in the g-g diagram represents the safe region and the red one corresponds to the aggressive yet acceptable acceleration range.
normalized values represent the distance to the boundary with values between 0 and 1, where 1 indicates the position of the boundary itself. Figure 1 shows the schematic view of the five reference maneuvers in terms of these constraints. The two-seconds simulation time is selected to represent the recovery window for the controller in hazardous scenarios in case of an abrupt change in the reference trajectory.
### _Driving Scenarios_
We compare the computational performance of the nonlinear and hybrid MPC controllers in the following four driving scenarios:
* **Ideal Case:** The nonlinear prediction model is selected as the real system. The computational performance of the hybrid controllers is evaluated over \(N_{\text{p}}\in\{5,10,\ldots,30\}\).
* **Friction Offset:** We use a different tire-road friction in the real system as \(\mu\in\{0.70,0.75,\ldots,1.00\}\) and compare the computational performance of the controllers for a selected \(N_{\text{p}}\) value over this range of friction offsets.
* **Friction Disturbance:** We assume the road friction for the second quarter of the maneuver to be very low, representing a disruption such as a slippery road surface and we compare the computational efficiencies in the same fashion as the friction offset case.
* **Handling Limits:** We investigate the computational performance of the controller for a fixed friction offset during the lane change maneuver (no. 1 in Table II). We vary the input steering angle during the maneuver to simulate different levels of aggressive steering and assess the closed-loop performance in terms of the closeness to handling limits, i.e., the boundary of the g-g diagram with the acceleration magnitude between \(0.5g\) up to the \(\mu g\) limit.
### _Solver Selection_
For a fair comparison in terms of computation time, we select the most efficient known solvers within the academic community for the MILP/MIQCP and NLP problems.
The MILP/MIQCP and NLP problems are solved by GUROBI [27] and TOMLAB/KNITRO [28] optimizers, respectively, using Matlab as interface and overall computation environment. To further improve the solution time for the NLP problems, we provided the objective and constraint functions via MEX files (instead of m-files in Matlab), which in our experiments reduced the computation time for the NLP problems by around 50% for all the cases.
The simulations were all run on a PC with a 4-core(s) Intel Xeon 3.60 GHz CPU and 8 GB RAM on Windows 10 64-bit and in a Matlab R2020b environment.
## IV Simulation Results
Using the benchmark described in Section III, we compare the computational performance of the hybrid and nonlinear MPC controllers as follows: we first compare these controllers in the ideal case for different \(N_{\text{p}}\) values and then we select the most promising hybrid controllers and compare their robustness to friction uncertainty and their performance close to the handling limits in the next subsections.
### _Ideal Case_
The computational performance of the nonlinear and hybrid MPC controllers is shown in Figures 2 and 3 in terms of their average and maximum tracking errors and computation time per control sampling time.
#### Iv-A1 Aggressive Lane Change Maneuver
First, we start by comparing the hybrid and nonlinear MPC controllers during the aggressive lane change maneuver in Fig. 1.
During this maneuver, similar to NL-5, the MILP controllers all show the same accuracy for different \(N_{\text{p}}\) values. However, NL-1 exhibits oscillatory behavior in terms of tracking accuracy across the \(N_{\text{p}}\) axis. This is due to the fact that increasing the prediction horizon, even when using the same prediction model as the ground truth, increases the dimensions of the search space, hence decreasing the probability of converging to an "acceptable" optimum when the problem is only solved via the warm-start strategy. Performing the aggressive lane change maneuver shown in Fig. 1 requires drastic changes in the input signals (compared to its safer counterpart), where relying on the warm-start strategy limits the chance of converging to an optimum with an acceptable tracking error.
In terms of computation speed, the MILP controllers with the T model show a steady rate of increase similar to the nonlinear ideal models, while the other hybrid controllers show an increase in the rate of computation time beyond a certain \(N_{\text{p}}\) value, which stems from the prediction model accuracy. Longer prediction horizons not only increase the dimensions of the search space in the optimization problem, but also lead to accumulation of the prediction model error. This accumulation increases the error as well as the computation time, as the reference trajectory can become infeasible to track for the prediction model, leading to slower convergence. Therefore, the T model yields the best accuracy compared to the R and S models.
the full control potential for an aggressive maneuver in shorter prediction horizons.
As Fig. 2 shows, the MILP controllers exceed the performance of the MIQCP ones in terms of accuracy, as well as computation speed. Therefore, for the next simulations we only consider the MILP controllers as prospective suitable hybrid candidates.
#### Iii-A2 Cornering Maneuvers
We compare the MILP and NLP controllers during the three cornering maneuvers as shown in Fig. 3. In maneuvers 3 and 5, where the input forces vary drastically over the maneuver (see Fig. 1), the NL-1 controller shows poor computational performance and oscillatory behavior in the error plots across the \(N_{\text{p}}\) axis, which was also observed and discussed for the aggressive lane change maneuver.
In all three cornering maneuvers, the controller with the T model yields the best computational performance with its mean tracking error below 4.5% and maximum error below 10% in all cases.
Just as for the lane change maneuvers, increasing \(N_{\text{p}}\) leads to a higher computation time for all the controllers; however, the rate of increase is the lowest for the nonlinear MPC and the T-BMP controller. For the T-RMP controller the same behavior is observed for \(N_{\text{p}}<20\). Comparing the performance of the hybrid and nonlinear MPC controllers in all five maneuvers, a suitable prediction horizon for tracking, in terms of acceptable accuracy at lower computation times, is 10 or 15. Next, we select \(N_{\text{p}}=10\) for the comparison of the robustness of the controllers to friction uncertainty. We have also simulated other \(N_{\text{p}}\) values and obtained similar results; therefore, for a compact presentation, we present and analyze the trends for \(N_{\text{p}}=10\). In addition, since the MILP controllers with the S prediction model show larger tracking errors and larger computation times, especially for shorter prediction horizons, we disregard them at this stage and compare the four MILP controllers with R and T prediction models against their NLP counterparts.
### _Friction Offset_
The maneuvers 2, 3, and 5 in Table II are the three most critical ones: here, the vehicle operates close to the tire saturation and stability limits almost the whole time as shown in Fig. 1. Thus, we used these maneuvers to study the effect of friction offset. The prediction horizon is selected as \(N_{\text{p}}=10\) and the simulations are run for different road friction coefficients in the range \(\mu\in\{0.70,0.75,\ldots,1.00\}\) to account for uncertain friction in the prediction model. Figure 4 shows the computational performance of the nonlinear and hybrid MPC controllers during the three reference maneuvers.
While the computation time for the hybrid controllers does not vary by increasing the friction uncertainty, the nonlinear controllers show an increase in the computation time in maneuvers 3 and 5 where a significant fraction of the maneuver is performed close to the tire-saturation and vehicle-handling limits, which are functions of the friction coefficient.
The maximum and mean tracking errors increase for lower friction coefficients for all the controllers. However, the rate of error increase for the nonlinear controllers is higher. The difference between the tracking errors of NL-1 and NL-5 once again indicates the shortcoming of a warm-start strategy during aggressive maneuvers in searching for the optimal solution in the search space. This however comes at the price of an increase in computation times, best shown in Fig. 4c
Fig. 2: Computational performance of the nonlinear and hybrid MPC controllers during the aggressive lane change maneuver (maneuver 2 in Table II) in the ideal scenario.
where solving the NLP for five initial guesses increases the computation time tenfold.
**Remark 3**.: The reason behind the computational increase in NLP is as follows: compared to the shifted solution to the previous step, the other initial guesses are generally further away from a local optimum. As a result, the increase in computation time is more than linear.
In the presence of friction offset, the tracking error of NL-5 converges to that of the MILP controllers in maneuvers 2, 3, and 5 where a more extensive search over the search space is required to perform the maneuver from an initial state with an error from the previous solution. To understand this phenomenon and its two contributing factors, we look at the NL-5 and T-BMP controllers in more detail.
Notice that NL-5 and T-BMP have mean tracking errors below 5% in all the maneuvers of Fig. 4 with \(\mu=1\), which matches the friction assumed in their prediction models. When \(\mu\) on the road is decreased to 0.7, the controllers still seek a solution (including tire forces) close to the boundary of the feasible region of their model, which assumes \(\mu\approx 1\). However, these forces cannot be generated by the tire in the real system due to the lower friction on the road. Therefore, the first contributing factor to the error is that error accumulation grows exponentially with the number of control time steps, and as a result the controller converges to an infeasible solution for the real system (note that the real feasible region shrinks with the friction reduction). Secondly, with larger errors, finding a feasible solution to track the reference trajectory from an initial state with an already large tracking error might not be possible beyond a certain error bound. This not only increases the convergence time for the NL-5 controller, but also results in convergence to even worse solutions, both in terms of constraint violation and optimality, to the point where the tracking error of NL-5 exceeds that of T-BMP in Figs. 4(b) and 4(c): for a similar order of model error, the branch-and-bound approach of the MILP solver, unlike the NLP solver, guarantees convergence to the global optimum if given enough time, while keeping the computation time as low as in the ideal case.
### _Friction Disturbance_
In this case, we assume a correct knowledge of the road friction during the maneuver, while exposing the system to
Fig. 3: Computational performance of the nonlinear and MILP MPC controllers during three cornering maneuvers ( maneuvers 3, 4, and 5 in Table II) in the ideal scenario.
a sudden friction reduction to \(\mu=0.4\) during the second quarter of the maneuver and we evaluate the tracking error, computation times, and the recovering ability of the closed-loop system without any estimation or corrections during the disturbance.
The tracking errors of the controllers are compared in case of a sudden reduction of the friction to \(\mu=0.4\) in the second quarter of the maneuver, representing a situation similar to water pouring onto the road surface. The tracking errors at each time step are shown in Fig. 5 to compare the errors as well as the ability of the different controllers to recover from the friction disturbance. The average computation times per control step for each controller are presented in a separate plot in Fig. 5(f).
During the safe lane change maneuver, all controllers recover to a tracking error below 5% within five time steps, while the tracking error is larger during the other, more aggressive maneuvers. After the friction disturbance in maneuvers 2 and 5, the tracking error keeps increasing as the controllers fail to recover to an acceptable error bound. This can be understood by noting that the second quarter of maneuver 2 is where tracking the reference trajectory requires tire forces that are no longer feasible, because the radius of the Kamm circle for the rear tire has decreased to 40% of its original value; following the planned trajectory is thus no longer feasible for the prediction model. The same issue arises during maneuver 5, which can only be tracked provided that the rear tire generates forces close to its saturation limit during the whole maneuver (Fig. 1).
The hybrid controllers show smaller tracking errors after recovery during the low-speed cornering maneuver, despite starting with larger tracking errors before the friction disturbance. This reduction is understandable in light of the fact that the second half of this maneuver requires the vehicle to operate further from the stability boundaries in the \(\beta\)-\(r\) envelope and the g-g diagram in Fig. 1. Meanwhile, the tracking error of NL-1 stays above 5%, which shows the limitation of depending on a warm-start strategy for convergence to better optima. This limitation is even clearer in Fig. 5(c) for the drift cornering maneuver: while NL-5 recovers better after the disturbance, it still fails to reach small tracking errors as fast as the hybrid MPC controllers, which suggests that increasing the number of initial guesses beyond five could improve the convergence of NL-5 to acceptable optima. However, it should be noted that even with five initial guesses, NL-5 requires about 10 times more time than the slowest hybrid MPC controller to converge to its best solution.
In the safe lane change maneuver, T-RMP and T-BMP show larger tracking errors compared to the other controllers, and this is the scenario where the hybrid controllers in general show the highest error increase of 12% to 30% for 95%
Fig. 4: Computational performance of the nonlinear and hybrid MPC controllers during three reference maneuvers (maneuvers 3, 4, and 5 in Table II) in the friction offset scenario.
computation time reduction. However, the effectiveness of the hybrid MPC controllers in terms of tracking error is more observable in the more aggressive maneuvers, where T-RMP and T-BMP show a better performance, in some cases even better than that of NL-1 and NL-5, because the shortcomings of convergence to local optima are clearer in hazardous scenarios with sudden changes in the environment that require a more thorough search across the decision space. Comparing the control performance vs. computation time trade-off during the four aggressive reference maneuvers shows that choosing the hybrid MPC controllers R-BMP and R-RMP decreases the computation time to 2 to 5% of that of NL-5, while increasing the maximum error from 5% to 15% in maneuver 5 and decreasing it during maneuvers 2, 3, and 4.
### _Analysis of Computational Performance_
Figure 6 plots the ranges of tracking error and computation time for the NLP and four MILP MPC controllers during all five reference maneuvers in both the friction offset and friction disturbance scenarios as shaded boxes. The average points are shown by square markings in the shaded areas. Comparing the computational performance in Fig. 6 shows the strength of the hybrid MPC controllers compared to the nonlinear ones in the presence of uncertainty. While the NLP controllers can achieve a lower minimum error, their maximum tracking errors reach much higher values while taking more time to converge. In terms of the average points, the best MILP controller in the ideal case (T-BMP) not only has a lower maximum error in both friction uncertainty cases compared to the best NLP one (NL-5), but also a higher computational efficiency: in the friction offset case, it trades an error increase from 8.7% to 9.7% for a reduction of the computation time from 10.2s to 0.3s, and in the disturbance case it reaches a smaller tracking error (from 9.5% to 8.7%) as well as a lower computation time (from 8.3s to 0.4s).
### _Handling Limits_
For a clearer understanding of the computational performance during hazardous scenarios, we perform a second test in the friction offset case. Here, we fix the road friction to \(\mu=0.85\) and compare the computational performance of the controllers during the safe lane change maneuver over a range of steering actions that move the vehicle closer to the boundary of the g-g diagram in Fig. 1(a). Figure 7 shows the tracking errors and computation times for the MILP and NLP controllers for different levels of aggressiveness in terms of the acceleration magnitude \(\sqrt{a_{x}^{2}+a_{y}^{2}}\), which is bounded by \(\mu g\).
Fig. 5: Tracking error of the nonlinear and four selected MILP MPC controllers during five reference maneuvers in (a) to (e) in case of friction disturbance. The average computation time for each control time step is shown in (f).
The computational efficiency of hybrid MPC can be easily seen in Fig. 7. The closer the maneuver gets to the boundary of the stability constraint in the g-g diagram, the higher the tracking errors become for all the controllers. However, the error increase for the NLP controllers is much higher, as they fail to converge to an acceptable optimum with 1 or even 5 starting points. Therefore, when the acceleration magnitude exceeds 6.2 m/s\({}^{2}\), even the NL-5 controller reaches higher mean and maximum tracking errors compared to the T-BMP, R-BMP, and R-RMP controllers, while requiring about 20 times more time to converge to its final solution.
Among the MILP controllers, T-BMP proves to be the best hybrid MPC controller, as it shows the steadiest computation time and the lowest increase in tracking error as the steering action becomes more extreme.
## V Conclusions and Outlook
This paper has presented a comparative assessment of nonlinear MPC controllers vs. their various hybridized counterparts in terms of computational efficiency for vehicle control during emergency evasive maneuvers. The hybridization of the nonlinear problem was presented and discussed in Part I of this publication, where several guidelines for hybridization are given in a generalized framework.
The benchmark of this paper uses three hybridized models and four hybridized constraint formulations for a nonlinear single track vehicle model considering nonlinear physics-based constraints for stability and tire-force saturation. Five reference maneuvers were selected to represent emergency situations where the computational efficiency is crucial for real-time proactive vehicle control. The hybrid and nonlinear controllers then were compared in multiple scenarios to compare their control performance and computation time, and their robustness in the presence of friction uncertainty in the form of an offset or a disturbance. Further, we studied the tracking behavior of the controllers with respect to how close the vehicle is operating in handling limits. The conclusions of our comparative assessment are summarized next with respect to different criteria.
### _Milp vs. MIQCP_
In general, as MILPs are solved faster than MIQCPs, MILP-based approaches are more suitable choices in terms of computation speed.
Fig. 6: Relative errors and computation times for the nonlinear and four MILP MPC controllers during the five reference maneuvers in cases with friction uncertainty.
Fig. 7: Computational performance of the nonlinear and hybrid MPC controllers during lane change maneuver in the friction offset scenario for different levels of aggressive steering in the g–g diagram.
The control performance highly depends on the accuracy of the hybrid approximation. Consequently, the tracking performance of the MIQCP controllers was not as good as that of the MILP controllers due to their less accurate constraint approximation. Nevertheless, for systems or applications where a lower approximation error is obtained for the nonlinear constraints with mixed-integer quadratic constraints, MIQCP can be an efficient formulation to hybridize the nonlinear MPC problem. This can, for instance, be the case for systems with quadratic or bi-variate nonlinearities, where considering the quadratic terms helps avoid using a large number of local linear or affine modes to obtain the same level of accuracy.
### _Robustness to Friction Uncertainty_
When friction uncertainty is present, MILP controllers do not require longer computation times, whereas NLP controllers take significantly longer to converge. In addition, the rate of increase in tracking error for the NLP controllers is higher than that of the MILP controllers in the presence of a friction offset.
Furthermore, MILP controllers are better able to recover from friction disturbances due to their more comprehensive search for an optimal solution in the decision space and the fact that they always reach the global optimum if given enough time. This means that if the error is already very high in the initial state for the current time step, the NLP controllers may not find a feasible solution at all in the vicinity of their initial guesses, while the MILP controllers will converge to one via the branch-and-bound strategy, provided that they are given sufficient time. As a result, even in cases where both nonlinear and hybrid controllers recover from high tracking errors during the friction disturbance, the MILP controllers reach smaller tracking errors in fewer control time steps.
The robustness of the MILP controllers compared to the NLP controllers is summarized in Fig. 6. While NLP controllers can reach lower relative errors, their behavior in terms of accuracy as well as computation speed is not as consistent as that of the MILP controllers in the presence of friction offset or disturbance. Moreover, the MILP controllers are considerably (at least 10 times) quicker to converge to their optimal solution and show significantly smaller variations in the relative error when exposed to friction uncertainty.
### _Performance Close to Handling Limits_
Getting closer to the handling limits leads to larger tracking errors for all the controllers. However, the NLP controllers may deviate from the reference significantly, as they may fail to converge to an acceptable optimum. Meanwhile, the MILP controllers converge to lower errors in a shorter time; e.g., at the boundary of the acceleration magnitude, the best MILP controller converges to 30% of the tracking error of the best NLP controller with a 95% reduction in the corresponding computation time. This shows that MILP controllers are more suitable choices for real-time control in emergency evasive maneuvers.
### _Overall Computational Performance_
The shortcoming of a warm-start strategy in solving an NLP is clearer in emergency maneuvers, which stems from its limited search for the optimal solution in the search space during aggressive maneuvers. In addition, even with a multi-start strategy, the NLP controller may converge to worse solutions if the uncertainty influences the feasible region and the NLP solutions become infeasible for the real system. However, the branch-and-bound approach of the MILP solver explores possible solutions better than a potentially real-time implementable NLP solver such as a gradient-based solver, while keeping the computation time as low as in the ideal case.
Even in the ideal case where the NLP controllers benefit from employing the same prediction model as the real system, they show high variations in tracking control and computation time, which means they are not suitable options for robust control in hazardous scenarios. In the presence of uncertainty, the NLP controllers suffer from larger tracking errors as well as an exponential growth in their computation time. However, the MILP controllers converge to smaller tracking errors, within a much smaller variation bound, and with significantly less computation times.
All the above mentioned factors make the MILP controllers a good choice for real-time implementation of MPC in emergency situations. Therefore, we propose combining hybrid MPC and hybrid predictive estimation techniques (e.g., moving-horizon estimation) as a potential next research step for improving robustness in hazardous driving scenarios. Moreover, as quadratic forms of nonlinearity are extensively encountered in modeling of physical systems, we propose investigating piecewise-quadratic-based hybridization of the prediction model and physics-based constraints for MIQCP formulation of MPC optimization problem. This can particularly be beneficial for systems with nonlinearities that can better be approximated using quadratic approximations and can lead to significant improvements in terms of accuracy and computational efficiency of the hybrid controller.
## Acknowledgment
This research is funded by the Dutch Science Foundation NWO-TTW within the EVOLVE project (no. 18484).
## Appendix A Nonlinear System Description
A single-track nonlinear representation of the vehicle model is described by the following equations [6]:
\[\dot{v}_{x} =\frac{1}{m}\left[F_{x\mathrm{f}}\cos\delta-F_{y\mathrm{f}}\sin\delta+F_{x\mathrm{r}}\right]+v_{y}r, \tag{17}\] \[\dot{v}_{y} =\frac{1}{m}\left[F_{x\mathrm{f}}\sin\delta+F_{y\mathrm{f}}\cos\delta+F_{y\mathrm{r}}\right]-v_{x}r, \tag{18}\] \[\dot{r} =\frac{1}{I_{zz}}\left[F_{x\mathrm{f}}\sin\delta\;l_{\mathrm{f}}+F_{y\mathrm{f}}\cos\delta\;l_{\mathrm{f}}-F_{y\mathrm{r}}\;l_{\mathrm{r}}\right], \tag{19}\]
and the lateral forces are given by the Dugoff model [25]
\[F_{ya}=\frac{C_{\alpha_{a}}}{1-\kappa_{a}}f_{\lambda}(\lambda_{a}^{w})\,\alpha_{a},\]
with \(a\in\{\mathrm{f},\mathrm{r}\}\), where \(\mu_{a}\) is the varying friction coefficient, and \(\lambda_{a}^{w}\) and \(f_{\lambda}\) are the weighting coefficient and function, defined as
\[\mu_{a}=\mu_{0}\left(1-e_{\mathrm{r}}v_{x}\sqrt{\kappa_{a}^{2}+\tan^{2}\alpha_{a}}\right),\]
\[\lambda_{a}^{w}=\frac{\mu_{a}F_{za}(1-\kappa_{a})}{2\sqrt{(C_{\kappa_{a}}\kappa_{a})^{2}+(C_{\alpha_{a}}\tan\alpha_{a})^{2}}},\]
\[f_{\lambda}(\lambda_{a}^{w})=\begin{cases}\lambda_{a}^{w}(2-\lambda_{a}^{w})&\lambda_{a}^{w}<1\\ 1&\lambda_{a}^{w}\geqslant 1\end{cases}.\]
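A hedged simulation sketch of one forward-Euler step of (17)-(19) with the Dugoff lateral forces is given below; note that the slip-angle expressions and all numeric parameter values are our own illustrative assumptions, not taken from Tables III and IV:

```python
import numpy as np

def dugoff_lateral(alpha, kappa, Fz, C_alpha, C_kappa, mu0, er, vx):
    """Dugoff lateral force with the speed-dependent friction above."""
    mu = mu0 * (1.0 - er * vx * np.sqrt(kappa**2 + np.tan(alpha)**2))
    denom = 2.0 * np.sqrt((C_kappa * kappa)**2 + (C_alpha * np.tan(alpha))**2)
    lam = mu * Fz * (1.0 - kappa) / max(denom, 1e-9)
    f_lam = lam * (2.0 - lam) if lam < 1.0 else 1.0
    return C_alpha / (1.0 - kappa) * f_lam * alpha

def single_track_step(x, u, p, dt):
    """One Euler step of (17)-(19); x = (vx, vy, r), u = (delta, Fxf, Fxr)."""
    vx, vy, r = x
    delta, Fxf, Fxr = u
    # Slip-angle expressions below are a common textbook choice (assumption:
    # the paper defines its own in the tables, which are not reproduced here).
    alpha_f = delta - np.arctan2(vy + p["lf"] * r, vx)
    alpha_r = -np.arctan2(vy - p["lr"] * r, vx)
    Fyf = dugoff_lateral(alpha_f, 0.0, p["Fzf"], p["Ca"], p["Ck"], p["mu0"], p["er"], vx)
    Fyr = dugoff_lateral(alpha_r, 0.0, p["Fzr"], p["Ca"], p["Ck"], p["mu0"], p["er"], vx)
    dvx = (Fxf * np.cos(delta) - Fyf * np.sin(delta) + Fxr) / p["m"] + vy * r
    dvy = (Fxf * np.sin(delta) + Fyf * np.cos(delta) + Fyr) / p["m"] - vx * r
    dr = ((Fxf * np.sin(delta) + Fyf * np.cos(delta)) * p["lf"]
          - Fyr * p["lr"]) / p["Izz"]
    return x + dt * np.array([dvx, dvy, dr])

# Illustrative parameter values only (hypothetical, not Tables III and IV).
p = dict(m=1500.0, Izz=2500.0, lf=1.2, lr=1.4, Fzf=7000.0, Fzr=7500.0,
         Ca=8.0e4, Ck=1.0e5, mu0=1.0, er=0.01)
x = np.array([20.0, 0.0, 0.0])          # vx = 20 m/s, straight driving
print(single_track_step(x, np.array([0.05, 0.0, 0.0]), p, 0.01))
```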
The feasible region is defined by two other physics-based constraints: the working limits of the vehicle (known as the g-g diagram constraint [6])
\[(\dot{v}_{x}-v_{y}r)^{2}+(\dot{v}_{y}+v_{x}r)^{2}\leqslant(\min\{\mu_{\mathrm{ f}},\mu_{\mathrm{r}}\}\ g)^{2}, \tag{20}\]
and the saturation limits of the tires known as the Kamm circle constraint [6],
\[F_{xa}^{2}+F_{ya}^{2}\leqslant(\mu_{a}F_{za})^{2},\quad a\in\{\mathrm{f},\mathrm{r}\}. \tag{21}\]
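Checking a candidate state against (20) and (21) is direct; a minimal sketch (our illustration with arbitrary numbers):

```python
def gg_ok(dvx, dvy, vx, vy, r, mu_f, mu_r, g=9.81):
    """g-g diagram constraint (20) on the acceleration magnitude."""
    ax, ay = dvx - vy * r, dvy + vx * r
    return ax**2 + ay**2 <= (min(mu_f, mu_r) * g)**2

def kamm_ok(Fx, Fy, Fz, mu):
    """Kamm circle constraint (21) for one tire."""
    return Fx**2 + Fy**2 <= (mu * Fz)**2

print(gg_ok(1.0, 3.0, 20.0, 0.0, 0.1, 0.9, 0.9))   # True: well inside
print(kamm_ok(5000.0, 5000.0, 7000.0, 1.0))        # False: saturated tire
```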
|
2305.15813 | Leveraging object detection for the identification of lung cancer | Lung cancer poses a significant global public health challenge, emphasizing
the importance of early detection for improved patient outcomes. Recent
advancements in deep learning algorithms have shown promising results in
medical image analysis. This study aims to explore the application of object
detection particularly YOLOv5, an advanced object identification system, in
medical imaging for lung cancer identification. To train and evaluate the
algorithm, a dataset comprising chest X-rays and corresponding annotations was
obtained from Kaggle. The YOLOv5 model was employed to train an algorithm
capable of detecting cancerous lung lesions. The training process involved
optimizing hyperparameters and utilizing augmentation techniques to enhance the
model's performance. The trained YOLOv5 model exhibited exceptional proficiency
in identifying lung cancer lesions, displaying high accuracy and recall rates.
It successfully pinpointed malignant areas in chest radiographs, as validated
by a separate test set where it outperformed previous techniques. Additionally,
the YOLOv5 model demonstrated computational efficiency, enabling real-time
detection and making it suitable for integration into clinical procedures. This
proposed approach holds promise in assisting radiologists in the early
discovery and diagnosis of lung cancer, ultimately leading to prompt treatment
and improved patient outcomes. | Karthick Prasad Gunasekaran | 2023-05-25T07:53:18Z | http://arxiv.org/abs/2305.15813v1 | # Leveraging object detection for the identification of lung cancer
###### Abstract
**Abstract**: Lung cancer poses a significant global public health challenge, emphasizing the importance of early detection for improved patient outcomes. Recent advancements in deep learning algorithms have shown promising results in medical image analysis. This study aims to explore the application of object detection particularly YOLOv5, an advanced object identification system, in medical imaging for lung cancer identification. To train and evaluate the algorithm, a dataset comprising chest X-rays and corresponding annotations was obtained from Kaggle. The YOLOv5 model was employed to train an algorithm capable of detecting cancerous lung lesions. The training process involved optimizing hyperparameters and utilizing augmentation techniques to enhance the model's performance. The trained YOLOv5 model exhibited exceptional proficiency in identifying lung cancer lesions, displaying high accuracy and recall rates. It successfully pinpointed malignant areas in chest radiographs, as validated by a separate test set where it outperformed previous techniques. Additionally, the YOLOv5 model demonstrated computational efficiency, enabling real-time detection and making it suitable for integration into clinical procedures. This proposed approach holds promise in assisting radiologists in the early discovery and diagnosis of lung cancer, ultimately leading to prompt treatment and improved patient outcomes.
**Keywords**: Lung cancer, YOLOv5, deep learning, object detection, medical imaging, chest radiograph.
## I Introduction
Owing to many scientific factors, the continuous growth of technology has changed people's lifestyles.
Changes in lifestyle cause changes in the structure and function of human genetic material (deoxyribonucleic acid, DNA) [1]. DNA with an altered structure divides into two new cells, producing duplicate DNA that replaces old and dying DNA, a process known as mutation [2]. Because the mutation mechanism is influenced by numerous factors such as radiation exposure, smoking, inhalation of asbestos fibres, and drinking habits, an erroneous mutation of a DNA cell usually results in the formation of cancer [3]. In the United States, lung tumours affect not only men (14%) but also 13% of women [4]. Furthermore, 154,050 of the 234,030 fatalities were identified globally. According to the NAACCR report, lung cancer is one of the most dangerous illnesses, with various symptoms [5] including shortness of breath, coughing, chest discomfort, voice change, change in sputum colour, and coughing up blood. Furthermore, lung cancer is characterised by fatigue, joint problems, weight loss, bone fractures, memory loss, cachexia, headache, bleeding, neurological difficulties, blood clots, and facial oedema [6]. Clinical doctors examine these symptoms [7] using a spirometer, which measures the amount of air in the lungs and aids in determining the presence of lung cancer, together with several screening methods [8] such as reflex evaluation, bronchoscopy, biopsy, genetic testing, and liquid biopsy.
A blood test is also performed for predicting chest infection. The National Institute for Excellence in Healthcare gives recommendations and symptoms [9] for early lung cancer detection. Among the numerous screening methods, biopsies and bronchi scopy study lung cells to predict cancer-related cells. Although a biopsy can predict lung cancer, maintaining precision as well as accuracy is challenging. As a result, a computerised tomography (CT) [10] scans is performed by transmitting X-rays through the body to examine the alterations that occur. During this procedure, a dye or liquid is administered to the chest, and pictures are captured by screening the body for 10-30 minutes. When contrasted to other imaging modalities such as MRI and PET, medical imaging demonstrates CT as an effective method during analysis owing to the sequential inspection of soft organs, tissues in the lung, and delivering relevant information about the damaged region [11]. Although screening methods has aided in the prediction of lung cancer, early diagnosis and accuracy in cancer detection are challenging to maintain. As a result, CAD [12] must be used to the clinical centre in order to produce an efficient cancer forecasting [13] system utilising an optimised and intelligent approach. The optimised lung image processing methods are used to examine the inner intricacies of the body, recover the details, extract vital information, and construct a knowledgeable system for lung cancer detection. Processing procedures [14] comprise various phases such as lung pre-processing of images, affected portion segmentation, feature extraction, and prediction of lung cancer. Among the several stages, segmentation plays an important function since it analyses every pixel in the lung picture and separates the afflicted region's linked cells, which aids in the determination of cancer and noncancerous. For region segmentation, several approaches such as fuzzy c-mean clustering, K-means clustering, Hopfield neural networks, self-organizing map, agglomerative clustering, distributed clustering, and sobel are utilised [15]. Canny edge detection algorithms are used to forecast the impacted zone. In addition, numerous optimisation approaches, such as particle swarm optimisation, genetic algorithms, art colonies, and firefly's algorithm, are utilised to optimise the clustering process. From the segmented lung area, several characteristics [16] such as local binary patterns, and spectral,
analytical, and robust features are retrieved. The Hough transform is applied to the noisy portion of the continuous transform [17, 18]. Furthermore, typical algorithms fail to analyse low-quality lung images, resulting in the extraction of erroneous features and an increase in the misclassification error rate. Taking these issues into account, this paper employs the improved profuse clustering technique (IPCT) for segmenting the affected region, with the weighted mean enhancement technique effectively removing image noise and the IPCT method delineating the affected region without discarding any original or normal pixels. Various spectral characteristics are extracted from the affected region and tested for lung cancer prediction using a deep learning instantaneously trained neural network (DITNN).
## II Related Work
This section discusses lung cancer identification techniques, processes, concepts, and processing phases of lung imaging from the perspectives of several authors. [19] discusses the analysis of the lung tumour-affected region in positron emission tomography (PET) and computerised tomography (CT) images using a fuzzy Markov random field segmentation technique. The approach computes a Gaussian distribution and a probabilistic distribution function to assess the distribution of pixels in the picture. Following the computation of the pixel distribution, several characteristics are investigated in order to calculate pixel similarity. Similar pixels form a cluster, and the affected region is recognised within the cluster. The system's efficiency is then evaluated using PET and CT images. The authors developed a mechanism that ensures a high Dice similarity coefficient of 0.85, indicating that the fuzzy Markov random field model segments the affected lung tumour region satisfactorily. [20] developed a computer-aided diagnostic (CAD) method for detecting lung cancer using CT and PET images. The authors investigate several issues and approaches, such as image segmentation and nodule identification, for optimising the lung cancer diagnosis process. During the analysis, the acquired pictures are divided into two parts, namely training and testing images, in order to evaluate the effectiveness of the implemented CAD system. In addition, the authors discuss the shortcomings of conventional cancer detection systems, since the proposed CAD method helps fix such prediction issues efficiently. [21] discusses the creation of a lung cancer diagnosis system using a convolutional neural network, which solves the problems associated with manual cancer prediction. During this process, CT scan images are gathered and analysed with neural network layers that perform automatic extraction of image features, which are then processed through deep learning to predict cancer-related features from a large volume of images. The authors developed a technique that aids decision making when analysing a patient's CT scan data. [22] used a convolutional neural network approach to predict lung nodules from CT scan data. During this procedure, images from the LIDC-IDRI database are gathered and fed into a stacked autoencoder (SAE), a convolutional neural network (CNN), and a deep neural network (DNN) for classification of lung cancer-related features as benign or malignant. The authors developed a technique that achieves up to 84.32% accuracy. According to these discussions, CT images are reviewed using segmentation and optimised machine learning approaches for accurate cancer prediction, although chronic illness diagnosis remains challenging owing to a lack of data and the complexity of diagnosis. The study in [23] presented a deep learning-based system for computer-aided detection using three models (CAD3): one for detection, one for classification, and one for visualisation. The system is based on the YOLO v2 algorithm, which builds on the convolutional neural network (CNN). The YOLO v2 approach is one of the simplest, adding additional layers to a pre-trained CNN to recognise the disease in images.
[24] proposed a YOLOv2 method for which images of 2450 normal thyroid nodules and 2557 cancerous thyroid nodules were gathered and labelled, and a deep learning system for automated image recognition and diagnosis was built using the YOLOv2 neural network model. The system's effectiveness in identifying thyroid nodules was tested, and the utility of machine learning in clinical practice was studied. The early diagnosis of lung nodules is extremely important in the prevention of lung cancer, and deep learning target-detection approaches are now frequently employed in healthcare image processing. The research in [25] offers a lung nodule detection technique based on an enhanced YOLOv3 to increase the effectiveness and precision of lung nodule detection, as well as to minimise the missed-diagnosis and misdiagnosis rates for pulmonary nodules. The experimental findings demonstrate that the modified model's average accuracy increased from 70.5% to 73.9%, and that its convergence is superior to that of the baseline model. The objective of the study in [26] is to develop a lung nodule detector that has a reasonable balance of efficiency and effectiveness and can be used immediately in the hospital setting. The authors break the task into two steps: first, they integrate several current techniques--a depthwise over-parameterized convolutional layer, a convolutional block attention module, and the focal loss function--to increase the model's accuracy; they then apply redundant channel pruning to the resulting model to create a more efficient YOLOv4 pulmonary nodule detector. Lung cancer is one of the main cancer forms with a high death rate, and early detection greatly improves the prognosis for survival. Lung (pulmonary) nodules in lung CT scans are detected with great sensitivity and precision using a customised deep learning YOLOv3 [27].
## III Methodology
### Dataset Acquisition
The lung cancer dataset was acquired from Kaggle and comprises 1500 X-ray images. This dataset includes annotations tailored for training a YOLOv5 model. The dataset was split into three subsets: 300 images were allocated for validation, 1000 images were assigned for training, and the remaining images were used for testing the model's performance. Sample images from the dataset can be seen in Fig 1.
### Pre-processing
The X-ray images were pre-processed in order to train the YOLOv5 model. To achieve consistency, the pre-processing stages included scaling the images to a constant input size of 416x416 pixels. The images were also normalised to improve the model's capacity to learn and generalise. Data augmentation methods, including random cropping, rotation, and flipping, were used to supplement and diversify the dataset. The processed X-ray images can then be utilised to train the YOLOv5 model.
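A minimal sketch of such a pipeline is given below, using torchvision transforms. The specific parameters (rotation angle, flip probability, normalisation statistics) are illustrative assumptions rather than the settings used in this study, and box-aware augmentation, which a detector ultimately requires, is omitted for brevity.

```python
# Illustrative pre-processing pipeline for grayscale chest X-rays;
# parameter values are assumptions, not the paper's settings.
import torchvision.transforms as T

preprocess = T.Compose([
    T.Resize((416, 416)),                # constant input size for YOLOv5
    T.RandomHorizontalFlip(p=0.5),       # flipping augmentation
    T.RandomRotation(degrees=10),        # small random rotation
    T.ToTensor(),                        # scales pixel values to [0, 1]
    T.Normalize(mean=[0.5], std=[0.5]),  # simple single-channel normalisation
])
```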
### Model Training
To detect lung nodules in X-rays, the "Chest-ray nodule detection" dataset was taken from Kaggle under a Public Domain Dedication licence. The total number of X-ray images was 1500, which we then divided for training and evaluation: 1000 nodule-positive X-ray images were utilised for training, and 300 X-ray images were used for evaluation. This approach for generating synthesised data has already been discussed [18]. The detection technique is YOLOv5s, a compound-scaled object detection model pre-trained on the COCO dataset that combines model assembly and hyperparameter tuning. It is divided into three major sections. The Backbone module is used for extracting features from the input images; CSPNet is used in the Backbone to gather these features. The Neck serves to create the feature pyramid, which helps the model determine the scale of objects of the same type appearing at various sizes; PANet is the method used to generate the pyramid features. Finally, the Head module is utilised for detection.
The main role of the Head unit is to apply anchors of varying sizes to the features created in the preceding layers, and to output bounding boxes with confidence scores. SPP stands for Spatial Pyramid Pooling. A batch size of 32 was then used to train the YOLOv5 algorithm on the lung cancer dataset. Training ran for 50 epochs, enabling the model to learn and improve its performance iteratively. During the optimisation phase, a learning rate of 0.002 was used to set the step size; this parameter influences how quickly the model adapts to the training data. Tuning the learning rate made it feasible to strike an appropriate balance between convergence speed and avoiding poor local solutions. The whole architecture of the YOLOv5 training can be seen in Fig 2.
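As a rough sketch, a training run with these settings could be launched through the public ultralytics/yolov5 repository's train.py script. The dataset YAML file name below is a hypothetical placeholder, and the 0.002 learning rate would be configured via lr0 in the hyperparameter YAML passed with --hyp.

```python
# Launching YOLOv5s training with the hyperparameters quoted above;
# assumes the ultralytics/yolov5 repository is checked out locally.
import subprocess

subprocess.run([
    "python", "train.py",
    "--img", "416",                  # input resolution after pre-processing
    "--batch", "32",                 # batch size
    "--epochs", "50",                # number of training epochs
    "--data", "lung_nodules.yaml",   # hypothetical dataset config
    "--weights", "yolov5s.pt",       # COCO-pretrained YOLOv5s checkpoint
], check=True)
```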
## IV Results
### Test Data
To evaluate the trained algorithm, 100 X-rays from a test dataset containing data from 100 patients were utilized. The algorithm was applied to these X-rays, and the resulting outcomes are presented below.
### Evaluation Metrics
Specificity and sensitivity were evaluated using X-ray images from a follow-up dataset of chest X-rays confirming the existence of distant metastases. As a negative control, X-ray images unrelated to tumours but associated with trauma and non-malignant lung disease were gathered daily from an emergency hospital. It is worth noting that these images were not used during training of our YOLOv5 model and served only for external validation. The algorithm recognised all nodules having a test batch estimate greater than 0.5.
As assessment measures, the F1 score, ROC (Receiver Operating Characteristic), and accuracy were used; these can be seen in Fig 4. The evaluation produced encouraging findings. The sensitivity, i.e., the ability to correctly identify positive cases, was found to be 94%, showing that the algorithm is quite good at detecting the presence of positive cases. The specificity, which measures the capacity to correctly identify negative cases, was found to be 90.5%, underscoring good precision in recognising negative occurrences. The precision, which measures the percentage of correctly detected positive instances among all positive predictions, was found to be 100%, demonstrating that the system had a low rate of false positive predictions. The recall, i.e., the true positive rate, was found to be 95%, evidencing a strong capacity to identify real positive instances. Additionally, the system's overall accuracy was determined to be 91%, indicating a high level of accuracy in categorising both positive and negative cases. The precision-recall value, which offers an aggregate measure of precision and recall, was found to be 83%, indicating a favourable balance between correctly detected positive instances and false positives. Additionally, the F1 score, which balances precision and recall in a single estimate of accuracy, was found to be 77%. These metrics demonstrate the system's efficacy and dependability, showing good performance in correctly recognising positive cases while keeping a low percentage of false positives.
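For reference, the quoted measures follow directly from confusion-matrix counts; the sketch below computes them, with placeholder counts that are not the study's actual tallies.

```python
# Standard detection metrics from confusion-matrix counts.
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)                # recall / true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    precision = tp / (tp + fp)                  # positive predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy, "f1": f1}

# Placeholder counts, for illustration only:
print(detection_metrics(tp=95, fp=0, tn=86, fn=5))
```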
Figure 3: Testing of Model on test Data
## V Conclusion
There has been a tremendous increase in research investigating the application of computational intelligence to the development of diagnostic and prognostic tools in recent years. These AI-powered solutions have the potential to transform many parts of healthcare by increasing the precision, effectiveness, and availability of medical evaluations. There are currently no established applications that leverage AI technology for sarcoma lung cancer follow-ups, i.e., the continual surveillance of individuals previously confirmed to have sarcoma lung cancer. This implies that traditional follow-up procedures may not fully utilise AI's potential to improve sarcoma patient assessment and care. The use of convolutional neural network models for assessing chest X-rays is one area of special attention in sarcoma follow-ups. In sarcoma follow-ups, chest X-rays are often performed to identify any signs of metastatic disease (the spread of cancer cells to other regions of the body). The process of analysing chest X-rays can be enhanced by including CNN-assisted assessment: CNN algorithms are deep learning networks that specialise in image analysis and excel at detecting patterns and anomalies in medical images.
However, while the prospective benefits of CNN-assisted assessment in cancer follow-ups are encouraging, further study is required to test and establish the systemic benefits of this hybrid method. This includes extensive research to assess the precision, effectiveness, and clinical impact of applying CNNs in cancer follow-up procedures. Furthermore, research efforts should examine the combination of CNN-assisted assessment with existing diagnostic procedures, as well as its cost-effectiveness and smooth incorporation into clinical practice. Finally, the implementation of AI-based diagnostic and prognostic tools, such as CNN-assisted examination of chest X-rays, has the potential, with thorough study and validation, to greatly enhance outcomes and quality of treatment for cancer patients throughout follow-up periods.
|
2302.00507 | Entropy determination for mixtures in the adiabatic grand-isobaric
ensemble | The entropy change that occurs upon mixing two fluids has remained an
intriguing topic since the dawn of statistical mechanics. In this work, we
generalize the grand-isobaric ensemble to mixtures, and develop a Monte Carlo
algorithm for the rapid determination of entropy in these systems. A key
advantage of adiabatic ensembles is the direct connection they provide with
entropy. Here, we show how the entropy of a binary mixture A-B can be readily
obtained in the adiabatic grand-isobaric $(\mu_{\text{A}}$, $\mu_{\text{B}}, P,
R)$ ensemble, in which $\mu_{\text{A}}$ and $\mu_{\text{B}}$ denote the
chemical potential of components A and B, respectively, $P$ is the pressure,
and $R$ is the heat (Ray) function, that corresponds to the total energy of the
system. This, in turn, allows for the evaluation of the entropy of mixing, as
well as of the Gibbs free energy of mixing. We also demonstrate that our
approach performs very well both on systems modeled with simple potentials and
with complex many-body force fields. Finally, this approach provides a direct
route to the determination of the thermodynamic properties of mixing, and
allows for the efficient detection of departures from ideal behavior in
mixtures. | Caroline Desgranges, Jerome Delhommelle | 2023-02-01T15:28:08Z | http://arxiv.org/abs/2302.00507v1 | # Entropy determination for mixtures in the adiabatic grand-isobaric ensemble
###### Abstract
The entropy change that occurs upon mixing two fluids has remained an intriguing topic since the dawn of statistical mechanics. In this work, we generalize the grand-isobaric ensemble to mixtures, and develop a Monte Carlo algorithm for the rapid determination of entropy in these systems. A key advantage of adiabatic ensembles is the direct connection they provide with entropy. Here, we show how the entropy of a binary mixture A-B can be readily obtained in the adiabatic grand-isobaric (\(\mu_{\rm A}\), \(\mu_{\rm B},P,R\)) ensemble, in which \(\mu_{\rm A}\) and \(\mu_{\rm B}\) denote the chemical potential of components A and B, respectively, \(P\) is the pressure, and \(R\) is the heat (Ray) function, that corresponds to the total energy of the system. This, in turn, allows for the evaluation of the entropy of mixing, as well as of the Gibbs free energy of mixing. We also demonstrate that our approach performs very well both on systems modeled with simple potentials and with complex many-body force fields. Finally, this approach provides a direct route to the determination of the thermodynamic properties of mixing, and allows for the efficient detection of departures from ideal behavior in mixtures.
## I Introduction
Entropy has intrigued thermodynamicists for centuries, and a full understanding of this concept still remains elusive. Among other processes, mixing, and the associated entropy of mixing, has challenged scientists since the inception of statistical mechanics, from Boltzmann's early definition of entropy to the proposal of the Gibbs paradox. Several studies have focused on the determination of the entropy of mixing for liquid mixtures over the years, both experimentally and theoretically [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. In binary mixtures of molecular compounds, three factors are found to contribute to the excess entropy of mixing: (i) the relative volumes of the molecules, (ii) the spatial distribution of molecules about a reference molecule, and (iii) the non-random orientational distribution of molecules about a reference molecule [2]. On the other hand, in binary mixtures of atomic fluids, the excess entropy of mixing only depends on the free volume of each component. This has given rise to the concept of the combinatorial entropy of mixing [14; 15; 16], which stems from the randomness in placing the atoms in the system's volume. Then, the theoretical entropy of mixing in excess of the combinatorial entropy, or ideal entropy, is obtained from the reduced volumes of the two mixture components. Other approaches include conventional solution theories, which have relied exclusively on two features of the liquid mixtures. The first is the entropy of dispersion of the two molecular species, generally evaluated using a lattice model [17; 18]. The second comes from the interactions between neighboring molecules and is found to depend on the difference in interactions between pairs of like and unlike molecules. This approach has led to excellent results for the synthesis and processing of high-entropy alloys [19; 20; 21; 22]. It has also been shown recently that the Shannon entropy can quantify the amount of disorder within a system. In mixtures, the entropy of mixing is defined as the increase in disorder upon the transition from a fully demixed state to an ideally mixed state [23; 24]. This approach has made it possible to quantify the quality of mixing in polymers, and to design, control and assess optimal mixing protocols [23; 25]. The Shannon or information entropy can also be calculated through the pair correlation functions [26; 27; 28; 29]. Such an approach has been used recently to predict the entropy of liquid aluminum, copper and aluminum-copper alloys [30].
Here, we propose a different approach for the determination of the entropy of mixing. To this end, we develop a method based on the adiabatic thermodynamics formalism to derive a direct route to the entropy. This approach has been developed so far for single
component systems by Ray _et al._[31; 32; 33; 34; 35; 36; 37; 38]. For a binary mixture A-B, this leads to working in an ensemble for which \((\mu_{\rm A},\mu_{\rm B},P,R)\) are fixed. In this set, \(\mu_{\rm A}\) denotes the chemical potential for the component A, \(\mu_{\rm B}\) the chemical potential for component B, \(P\) the pressure, and \(R\) the heat or Ray function. We can then use the simple relationship \(S=R/T\) to gain access to the entropy of the system. Another advantage of this ensemble is that pressure is an input parameter, and that its calculation via, for instance, the virial expression is not needed in the course of the simulations. This is especially interesting for systems modeled with complex many-body potentials. We add that the determination of temperature is straightforward in this adiabatic ensemble, as it stems from the equipartition principle. This provides a simple and versatile framework and method to compute the entropy of mixing.
The paper is organized as follows. We first present the generalization of the adiabatic grand-isobaric ensemble to the case of mixtures, the implementation of Monte Carlo simulations in this ensemble, as well as the models used in this work to model mixtures of simple fluids and binary mixtures of liquid metals. We then discuss the results obtained in the grand-isobaric ensemble for the Neon-Argon mixture, as well as copper-silver. In both cases, we present results for the thermodynamic properties of mixing, including the entropy, as well as the enthalpy and the Gibbs free energy of mixing. To assess accuracy, we compare the results to the available experimental data and to results obtained with conventional simulation methods in isothermal ensembles. We finally draw the main conclusions from this work in the last section.
## II Formalism and Simulation Methods
### Adiabatic ensemble framework
We start with a brief discussion of how the probability density, and thus the acceptance probabilities used in Monte Carlo simulations, are obtained for adiabatic ensembles in the case of single-component systems. In the microcanonical ensemble, the number of accessible microstates [39; 40; 41; 42] is given by \(\Omega(N,V,E)=\rho(E)\delta E\), in which \(\rho(E)\) is the microcanonical density of states and \(\delta E<<E\). To show how the probability density can be derived, we start with the case of an ideal gas of \(N\) particles of mass \(m\) in a volume \(V\), for which the Hamiltonian is
\[H=\sum_{i=1}^{3N}\frac{p_{i}^{2}}{2m}=K \tag{1}\]

in which \(p_{i}\) denotes a momentum component of the particles and \(K\) the total kinetic energy of the system. The phase space volume \(\Omega\) can be defined as
\[\Omega=\int...\int dq_{1}...dq_{3N}\int...\int dp_{1}...dp_{3N} \tag{2}\]
for sets of coordinates and momenta such that \(0\leq H({\bf q}_{i},{\bf p}_{i})\leq E\). This gives, after integration over the position coordinates
\[\Omega=V^{N}\int...\int dp_{1}...dp_{3N} \tag{3}\]
for sets of momenta such that \(0\leq\sum_{i=1}^{3N}\frac{p_{i}^{2}}{2m}\leq E\). As discussed in prior work [43], Dirichlet's integral formula can be used to calculate this integral. We recall that Dirichlet's integral formula states that
\[\begin{array}{lcl}I&=&\int...\int t_{1}^{\alpha_{1}-1}t_{2}^{\alpha_{2}-1}...t_{n}^{\alpha_{n}-1}dt_{1}dt_{2}...dt_{n}\\ &=&\frac{b_{1}^{\alpha_{1}}b_{2}^{\alpha_{2}}...b_{n}^{\alpha_{n}}}{\beta_{1}\beta_{2}...\beta_{n}}\times\frac{\Gamma(\alpha_{1}/\beta_{1})\Gamma(\alpha_{2}/\beta_{2})...\Gamma(\alpha_{n}/\beta_{n})}{\Gamma(\alpha_{1}/\beta_{1}+\alpha_{2}/\beta_{2}+...+\alpha_{n}/\beta_{n}+1)}\end{array} \tag{4}\]
in which \(t_{i},b_{i},\beta_{i}\) are positive and such that
\[(t_{1}/b_{1})^{\beta_{1}}+(t_{2}/b_{2})^{\beta_{2}}+...+(t_{n}/b_{n})^{\beta_ {n}}\leq 1 \tag{5}\]
To solve Eq. 3, we use Eq. 4 with \(\alpha_{i}=1,t_{i}=p_{i},\beta_{i}=2,b_{i}=(2mE)^{1/2}\), (i=1,2,...,3N) to obtain
\[\Omega=\frac{2^{3N}V^{N}{(2mE)}^{3N/2}[\Gamma(1/2)]^{3N}}{2^{3N}\Gamma(3N/2+1)} \tag{6}\]
and account for the fact that particles are indistinguishable, and that the phase space volume is dimensionless, by dividing Eq. 6 by \(N!\) and \(h^{3N}\)
\[\Omega=\frac{V^{N}}{N!}\left(\frac{2\pi m}{h^{2}}\right)^{3N/2}\frac{E^{3N/2} }{\Gamma(3N/2+1)} \tag{7}\]
in which we use \(\Gamma(1/2)=\sqrt{\pi}\). This yields the density of states \(\rho(E)\) by differentiating the phase space volume with respect to \(E\) as
\[\rho(E)=\frac{\partial\Omega}{\partial E}=\frac{V^{N}}{N!}\left(\frac{2\pi m} {h^{2}}\right)^{3N/2}\frac{E^{3N/2-1}}{\Gamma(3N/2)} \tag{8}\]
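As a quick numerical illustration (our own, not part of the original derivation), Eq. 7 can be evaluated in logarithmic form to avoid overflow of the factorial and Gamma terms; the parameter values below are arbitrary.

```python
# ln(Omega) for the ideal-gas phase-space volume of Eq. (7), in SI units.
import math
from scipy.special import gammaln

def log_omega(N, V, E, m, h=6.62607015e-34):
    return (N * math.log(V) - gammaln(N + 1)
            + 1.5 * N * math.log(2.0 * math.pi * m / h**2)
            + 1.5 * N * math.log(E) - gammaln(1.5 * N + 1))

# e.g., 100 argon atoms in 1 nm^3 with 1e-18 J of kinetic energy:
print(log_omega(100, 1e-27, 1e-18, 6.63e-26))
```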
As shown by Ray _et al._[32; 38], this formalism can be generalized to other types of systems in the microcanonical ensemble (such as, _e.g._, systems with a Hamiltonian of the form \(H=K+U({\bf q})\), in which \(U({\bf q})\) denotes a position-dependent potential energy), and to other adiabatic ensembles (such as, _e.g._, the adiabatic grand-isobaric ensemble \((\mu,P,R)\)). The key here is to realize that the relation \(K=E\) for an ideal gas in the microcanonical ensemble becomes either \(K=E-U(\mathbf{q})\) for an interacting system in the microcanonical ensemble, or \(K=R-PV+\mu N-U(\mathbf{q})\) for an interacting system in the adiabatic grand-isobaric ensemble. The \((\mu,P,R)\) ensemble models an open, adiabatically insulated system in contact with a pressure reservoir and a chemical potential reservoir. Here the energy \(R\) is related to the enthalpy \(H\) by \(R=H-\mu N\).
We can then determine the acceptance probability \(acc(o\to n)\) for the microcanonical Monte Carlo method. Here, a MC move is attempted from an "old" configuration \(o\) with a set of positions denoted by \(\mathbf{q}\) to a "new" configuration \(n\) with a set of positions denoted by \(\mathbf{q}^{\prime}\), yielding the following acceptance probability
\[\begin{array}{rcl}acc(o\to n)&=&\min\left[1,\frac{\rho(\mathbf{q}^{\prime},N,V)}{\rho(\mathbf{q},N,V)}\right]\\ &=&\min\left[1,\frac{K_{n}^{3N/2-1}}{K_{o}^{3N/2-1}}\right]\\ &=&\min\left[1,\frac{(E-U(\mathbf{q}^{\prime}))^{3N/2-1}}{(E-U(\mathbf{q}))^{3N/2-1}}\right]\end{array} \tag{9}\]
Similarly, in the adiabatic grand-isobaric ensemble [44; 32], if the "old" configuration is denoted by \((\mathbf{q},N,V)\) and the "new" configuration by \((\mathbf{q}^{\prime},N^{\prime},V^{\prime})\), the acceptance probability can be written as
\[\begin{array}{rcl}acc(o\to n)&=&\min\left[1,\frac{\rho(\mathbf{q}^{\prime},N^{\prime},V^{\prime})}{\rho(\mathbf{q},N,V)}\right]\\ &=&\min\left[1,\frac{(bV^{\prime})^{N^{\prime}}\,N!\,\Gamma(3N/2)\,K_{n}^{3N^{\prime}/2-1}}{(bV)^{N}\,N^{\prime}!\,\Gamma(3N^{\prime}/2)\,K_{o}^{3N/2-1}}\right]\\ &=&\min\left[1,\frac{(bV^{\prime})^{N^{\prime}}\,N!\,\Gamma(3N/2)\,(R-PV^{\prime}+\mu N^{\prime}-U(\mathbf{q}^{\prime}))^{3N^{\prime}/2-1}}{(bV)^{N}\,N^{\prime}!\,\Gamma(3N^{\prime}/2)\,(R-PV+\mu N-U(\mathbf{q}))^{3N/2-1}}\right]\end{array} \tag{10}\]
in which \(b=(2\pi m/h^{2})^{3/2}\). From a practical standpoint and to ensure high acceptance probabilities, MC moves are split into 4 different categories, corresponding to translations, insertions, deletions and volume changes. We extend the formalism to the case of mixtures, and discuss in greater detail these acceptance probabilities for 2-component systems, in the next section.
### Adiabatic formalism for multi-component systems
In this section, we extend the adiabatic framework and the adiabatic grand-isobaric ensemble to systems with multiple components. In line with the derivation for single component
systems, we start with a mixture of ideal gases and determine the phase space volume and probability density. For a mixture A-B of two ideal gases A and B, we have the following Hamiltonian
\[H=\sum_{i=1}^{3N_{\rm A}}\frac{p_{i}^{2}}{2m_{\rm A}}+\sum_{j=1}^{3N_{\rm B}} \frac{p_{j}^{2}}{2m_{\rm B}}=K \tag{11}\]
in which \(N_{\rm A}\) and \(N_{\rm B}\) denote the number of particles for the two components A and B, and \(m_{\rm A}\) and \(m_{\rm B}\) their respective masses.
The phase space volume is given by
\[\Omega=V^{N_{\rm A}+N_{\rm B}}\int...\int\prod_{i}^{3N_{\rm A}}dp_{i}\prod_{j} ^{3N_{\rm B}}dp_{j} \tag{12}\]
As with single-component systems, we now use Dirichlet's integral formula to calculate \(\Omega\). In the case of the binary mixture A-B, we use Eq. 4 with the following parameters: \(\alpha_{k}=1\), \(t_{k}=p_{i}\), \(\beta_{k}=2\), \(b_{k}=(2m_{\rm A}E)^{1/2}\), \(k=1,2,...,3N_{\rm A}\), \(\alpha_{k}=1\), \(t_{k}=p_{j}\), \(\beta_{k}=2\), \(b_{k}=(2m_{\rm B}E)^{1/2}\), \(k=3N_{A}+1,...,3N_{\rm A}+3N_{\rm B}\), and carry out the integration over the following domain
\[\sum_{i=1}^{3N_{\rm A}}\left(\frac{p_{i}}{(2m_{\rm A}E)^{1/2}}\right)^{2}+\sum _{j=1}^{3N_{\rm B}}\left(\frac{p_{j}}{(2m_{\rm B}E)^{1/2}}\right)^{2}\leq 1 \tag{13}\]
This yields the following result for the phase space volume
\[\Omega=\frac{1}{h^{3(N_{\rm A}+N_{\rm B})}N_{\rm A}!N_{\rm B}!}V^{N_{\rm A}+N _{\rm B}}\frac{(2\pi m_{\rm A})^{3N_{\rm A}/2}(2\pi m_{\rm B})^{3N_{\rm B}/2}} {\Gamma[3(N_{\rm A}+N_{\rm B})/2+1]}E^{3(N_{\rm A}+N_{\rm B})/2} \tag{14}\]
and for the probability density
\[\rho(E)=\frac{1}{h^{3(N_{\rm A}+N_{\rm B})}N_{\rm A}!N_{\rm B}!}V^{N_{\rm A}+N _{\rm B}}\frac{(2\pi m_{\rm A})^{3N_{\rm A}/2}(2\pi m_{\rm B})^{3N_{\rm B}/2}} {\Gamma[3(N_{\rm A}+N_{\rm B})/2]}E^{3(N_{\rm A}+N_{\rm B})/2-1} \tag{15}\]
The next step consists in generalizing this formalism to the adiabatic grand-isobaric ensemble for the A-B mixture \((\mu_{\rm A},\mu_{\rm B},P,R)\), _i.e._, with a kinetic energy \(K\) defined as \(K=R-PV+\mu_{\rm A}N_{\rm A}+\mu_{\rm B}N_{\rm B}-U({\bf q})\), to obtain the equation analog to Eq. 10 for a binary mixture A-B. This yields the general acceptance rule for a MC move from an old configuration \(o\) defined by the set \(({\bf q},N_{\rm A},N_{\rm B},V)\) to a new configuration \(n\) with the set \(({\bf q}^{\prime},N_{\rm A}^{\prime},N_{\rm B}^{\prime},V^{\prime})\) as
\[\begin{array}{rcl}acc(o\to n)&=&\min\left[1,\frac{\rho({\bf q}^{\prime},N_{\rm A}^{\prime},N_{\rm B}^{\prime},V^{\prime})}{\rho({\bf q},N_{\rm A},N_{\rm B},V)}\right]\\ &=&\min\left[1,\frac{(b_{A}V^{\prime})^{N_{\rm A}^{\prime}}(b_{B}V^{\prime})^{N_{\rm B}^{\prime}}\,N_{\rm A}!\,N_{\rm B}!\,\Gamma(3(N_{\rm A}+N_{\rm B})/2)\,K_{n}^{3(N_{\rm A}^{\prime}+N_{\rm B}^{\prime})/2-1}}{(b_{A}V)^{N_{\rm A}}(b_{B}V)^{N_{\rm B}}\,N_{\rm A}^{\prime}!\,N_{\rm B}^{\prime}!\,\Gamma(3(N_{\rm A}^{\prime}+N_{\rm B}^{\prime})/2)\,K_{o}^{3(N_{\rm A}+N_{\rm B})/2-1}}\right]\\ &=&\min\left[1,\frac{(b_{A}V^{\prime})^{N_{\rm A}^{\prime}}(b_{B}V^{\prime})^{N_{\rm B}^{\prime}}\,N_{\rm A}!\,N_{\rm B}!\,\Gamma(3(N_{\rm A}+N_{\rm B})/2)\,(R-PV^{\prime}+\mu_{\rm A}N_{\rm A}^{\prime}+\mu_{\rm B}N_{\rm B}^{\prime}-U({\bf q}^{\prime}))^{3(N_{\rm A}^{\prime}+N_{\rm B}^{\prime})/2-1}}{(b_{A}V)^{N_{\rm A}}(b_{B}V)^{N_{\rm B}}\,N_{\rm A}^{\prime}!\,N_{\rm B}^{\prime}!\,\Gamma(3(N_{\rm A}^{\prime}+N_{\rm B}^{\prime})/2)\,(R-PV+\mu_{\rm A}N_{\rm A}+\mu_{\rm B}N_{\rm B}-U({\bf q}))^{3(N_{\rm A}+N_{\rm B})/2-1}}\right]\end{array} \tag{16}\]
in which \(b_{A}=(2\pi m_{\rm A}/h^{2})^{3/2}\) and \(b_{B}=(2\pi m_{\rm B}/h^{2})^{3/2}\).
From a practical standpoint, we carry out 4 different types of MC moves corresponding to (i) the translation of a randomly chosen particle (either of type A or B), (ii) the insertion of a particle of type A or B, (iii) the deletion of a particle of type A or B and (iv) a volume change of the system. For conciseness, we provide below the explicit acceptance rules for moves of types (i)-(iii) for particles of type A only.
The acceptance rule for the translation of a randomly chosen particle of type A from an old (o) configuration to a new (n) configuration is given by
\[acc(o\to n)=\min\left[1,\frac{(R-PV+\mu_{\rm A}N_{\rm A}+\mu_{\rm B}N_{\rm B}- U({\bf q^{\prime}}))^{3(N_{\rm A}+N_{\rm B})/2-1}}{(R-PV+\mu_{\rm A}N_{\rm A}+ \mu_{\rm B}N_{\rm B}-U({\bf q}))^{3(N_{\rm A}+N_{\rm B})/2-1}}\right] \tag{17}\]
The acceptance rule for the insertion of a particle of type A at a random position in the system is given by
\[acc(o\to n)=\min\left[1,\frac{b_{A}V\Gamma(3(N_{\rm A}+N_{\rm B})/2)}{(N_{ \rm A}+1)\Gamma(3(N_{\rm A}+N_{\rm B}+1)/2)}\right.\\ \left.\times\frac{[R-PV+\mu_{\rm A}(N_{\rm A}+1)+\mu_{\rm B}N_{ \rm B}-U({\bf q^{\prime}})]^{3(N_{\rm A}+N_{\rm B}+1)/2-1}}{[R-PV+\mu_{\rm A}N _{\rm A}+\mu_{\rm B}N_{\rm B}-U({\bf q})]^{3(N_{\rm A}+N_{\rm B})/2-1}}\right] \tag{18}\]
Similarly, the acceptance rule for the deletion of a particle randomly chosen among the \(N_{A}\) particles, can be written as
\[acc(o\to n)=\min\left[1,\frac{N_{A}\Gamma(3(N_{A}+N_{B})/2)}{b_{A}V \Gamma(3(N_{A}+N_{B}-1)/2)}\right.\\ \left.\times\frac{[R-PV+\mu_{A}(N_{A}-1)+\mu_{B}N_{B}-U({\bf q^{ \prime}})]^{3(N_{A}+N_{B}-1)/2-1}}{[R-PV+\mu_{A}N_{A}+\mu_{B}N_{B}-U({\bf q})] ^{3(N_{A}+N_{B})/2-1}}\right] \tag{19}\]
The acceptance rule for a random volume change of the system is given by
\[acc(o\to n)=min\left[1,\frac{V^{\prime(N_{A}+N_{B})}[R-PV^{\prime}+\mu_{A}N_{ A}+\mu_{B}N_{B}-U({\bf q^{\prime}})]^{3(N_{A}+N_{B})/2-1}}{V^{(N_{A}+N_{B})}[R- PV+\mu_{A}N_{A}+\mu_{B}N_{B}-U({\bf q})]^{3(N_{A}+N_{B})/2-1}}\right] \tag{20}\]
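To illustrate how these rules translate into a simulation, the sketch below implements the translation test of Eq. 17 in log space to avoid overflow of the large exponents; it is a minimal sketch in assumed reduced units, not the production code used in this work.

```python
# Acceptance test for a particle displacement in the
# (mu_A, mu_B, P, R) ensemble, Eq. (17); reduced units assumed.
import math
import random

def accept_translation(R, P, V, mu_a, n_a, mu_b, n_b, u_old, u_new):
    k_old = R - P * V + mu_a * n_a + mu_b * n_b - u_old
    k_new = R - P * V + mu_a * n_a + mu_b * n_b - u_new
    if k_new <= 0.0:
        return False  # the move would make the kinetic energy negative
    expo = 1.5 * (n_a + n_b) - 1.0
    log_acc = expo * (math.log(k_new) - math.log(k_old))
    return math.log(random.random()) < min(0.0, log_acc)
```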
### Models
We use the Lennard-Jones potential to model the interactions between Argon atoms using the following expression
\[\phi(r_{ij})=4\epsilon\left[\left(\frac{\sigma}{r_{ij}}\right)^{12}-\left( \frac{\sigma}{r_{ij}}\right)^{6}\right] \tag{21}\]
where \(r_{ij}\) is the distance between atom \(i\) and atom \(j\), and \(\epsilon\) and \(\sigma\) are the parameters representing the depth of the potential well and the distance at which the potential vanishes, respectively. Here, we choose the following set of parameters for our simulations: \((\epsilon/k_{B})=115.17\) K and \(\sigma=3.38\) Å [45], with long-range corrections applied beyond a cutoff distance of \(3\sigma\) [46]. We also use a Lennard-Jones potential to carry out simulations for systems composed of Neon atoms, with the parameters \((\epsilon/k_{B})=33.89\) K and \(\sigma=2.79\) Å [45]. As for Ar, we use the same cutoff and apply tail corrections beyond this distance. When we study Ar-Ne mixtures, we use the Lorentz-Berthelot rules [45; 47] to determine the unlike-interaction parameters. This gives: \(\epsilon_{Ar-Ne}=\sqrt{\epsilon_{Ar-Ar}\epsilon_{Ne-Ne}}\) and \(\sigma_{Ar-Ne}=\frac{\sigma_{Ar-Ar}+\sigma_{Ne-Ne}}{2}\), with a cutoff at a distance of \(r_{cut}=8.37\) Å beyond which tail corrections are applied.
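As an illustration of these combining rules, the following minimal sketch evaluates the unlike-pair Lennard-Jones energy from the Ar and Ne parameters quoted above (\(\epsilon/k_{B}\) in K, \(\sigma\) in Å); cutoff handling and tail corrections are deliberately omitted.

```python
# Lennard-Jones pair energy with Lorentz-Berthelot combining rules.
import math

EPS = {"Ar": 115.17, "Ne": 33.89}  # epsilon/k_B in K
SIG = {"Ar": 3.38, "Ne": 2.79}     # sigma in angstroms

def lj_energy(r, a, b):
    """phi(r)/k_B in K between species a and b."""
    eps = math.sqrt(EPS[a] * EPS[b])  # Berthelot rule
    sig = 0.5 * (SIG[a] + SIG[b])     # Lorentz rule
    sr6 = (sig / r) ** 6
    return 4.0 * eps * (sr6 ** 2 - sr6)

print(lj_energy(3.1, "Ar", "Ne"))  # unlike-pair energy at r = 3.1 A
```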
As for the metals studied here, both copper and silver are modeled with an embedded-atom (EAM) potential known as the quantum-corrected Sutton-Chen embedded atom model (qSC-EAM) [48; 49; 50; 51]. The qSC-EAM potential is composed of two terms: a two-body term and a many-body term:
\[U=\frac{1}{2}\sum_{i=1}^{N}\sum_{j\neq i}\epsilon\left(\frac{a}{r_{ij}}\right) ^{n}-\epsilon C\sum_{i=1}^{N}\sqrt{\rho_{i}} \tag{22}\]
where \(r_{ij}\) is the distance between two atoms \(i\) and \(j\) and the density term \(\rho_{i}\) is given by
\[\rho_{i}=\sum_{j\neq i}\left(\frac{a}{r_{ij}}\right)^{m} \tag{23}\]
We use the parameters obtained by Luo _et al.[51]_ for Cu, with \(\epsilon_{Cu}=0.57921\times 10^{-2}\) eV, \(C_{Cu}=84.843\), \(a_{Cu}=3.603\) A, \(n_{Cu}=10\), and \(m_{Cu}=5\) and for Ag, with \(\epsilon_{Ag}=0.3945\times 10^{-2}\) eV, \(C_{Ag}=96.524\), \(a_{Ag}=4.0691\) A, \(n_{Ag}=11\), and \(m_{Ag}=6\). For each metal, the cutoff distance is set to twice the lattice parameter as in previous work [44]. When looking at the Cu-Ag mixtures, we use the following rules to determine the interactions between Cu and Ag atoms [52; 53; 54; 55; 56]. \(\epsilon_{Cu-Ag}=\sqrt{\epsilon_{Cu-Cu}\epsilon_{Ag-Ag}}\), \(m_{Cu-Ag}=\frac{m_{Cu}+m_{Ag}}{2}\), \(n_{Cu-Ag}=\frac{n_{Cu}+n_{Ag}}{2}\) and \(a_{Cu-Ag}=\frac{a_{Cu}+a_{Ag}}{2}\).
### Simulation details
To determine the properties of mixing, we carry out two types of simulations in the grand-isobaric adiabatic ensemble for single-component systems, and for two-component systems. For single-component systems, we use (\(\mu,P,R\)) simulations, in which \(\mu\) is the
chemical potential, \(P\) the pressure and \(R\) the Ray or heat function. The heat function provides access to the entropy of the system through the relation \(R=TS\). For an A-B binary mixture, we carry out simulations in the \((\mu_{A},\mu_{B},P,R)\) ensemble. Here, \(\mu_{A}\) and \(\mu_{B}\) denote the chemical potentials for components A and component B, respectively. Since we implement simulations in the grand-isobaric adiabatic ensemble within a Monte Carlo (MC) framework, we perform the following types of MC moves with the attempt probabilities as follows: (i) 33% of the attempted moves are random displacements of an atom, (ii) 33% are insertions of atoms at random locations within the system, (iii) 33% are deletions of randomly selected atoms, and (iv) 1% are random volume changes for the entire system. For each set of conditions, we carry out two successive runs. We first perform a run of \(10^{8}\) MC steps to allow the system to relax and the simulation to converge toward equilibrium. We then carry out a production run of \(10^{8}\) MC steps over which averages are calculated. Statistical uncertainties are evaluated using the standard block averaging technique over blocks of \(5\times 10^{7}\) MC steps. Finally, the average temperature of the system, \(<T>\), is evaluated through the equipartition principle. We use the following expression for the kinetic energy \(K\) in a two-component system, \(K=R-PV+\mu_{A}N_{A}+\mu_{B}N_{B}-U\) and calculate the average temperature as \(<T>=\frac{2<K>}{3k_{B}<N_{A}+N_{B}>}\). This, in turn, allows for the determination of the entropy of the system through the equation \(<S>=R/<T>\). Throughout the paper, we note as \(\bar{Y}\) the molar property for any extensive quantity \(Y\). To test the accuracy and reliability of the grand-isobaric adiabatic ensemble approach, we also carry out MC simulations in the \((N,P,T)\) ensemble for single-component systems, as well as in the \((N_{A},N_{B},P,T)\) ensemble for binary mixtures. For single-component systems, we work with \(N=500\) atoms, and use the following probabilities for the various types of MC moves: (i) 99% of attempted moves are translations of a single, randomly chosen, atom, and (ii) 1% of attempted moves are random volume changes. For two-component systems, we use a total number of atoms of \(N_{A}+N_{B}=500\) in the simulations, and carry out MC moves with the same probabilities as for single-component systems.
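The temperature and entropy estimators described above reduce to a few lines; the sketch below assumes SI units and instantaneous values, whereas the production averages are of course accumulated over the run.

```python
# Temperature from equipartition and entropy from S = R/T in the
# (mu_A, mu_B, P, R) ensemble; SI units assumed.
K_B = 1.380649e-23  # Boltzmann constant, J/K

def temperature_and_entropy(R, P, V, mu_a, n_a, mu_b, n_b, U):
    kinetic = R - P * V + mu_a * n_a + mu_b * n_b - U  # kinetic energy, J
    T = 2.0 * kinetic / (3.0 * K_B * (n_a + n_b))      # equipartition
    S = R / T                                          # entropy from S = R/T
    return T, S
```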
## III Results and Discussion
### The Argon-Neon system
#### iii.1.1 Single-component systems
We first present results for the single-component systems, Ar and Ne. We perform 8 different \((\mu,P,R)\) simulations for which \(P\) and \(R\) are held constant. More specifically, in the case of Ar, we vary \(\mu\) from \(\mu=-230\) kJ/kg to \(\mu=-550\) kJ/kg along the isobar \(P=445\) bar for a value of the heat function set to \(R/k_{B}=8\times 10^{5}\) K. For each value of \(\mu\), we report the corresponding number of atoms \(<N>\), temperature \(<T>\), specific volume \(\bar{V}=\frac{<V>}{<N>}\), enthalpy \(\bar{H}=\frac{<H>}{<N>}\), Ray energy \(\bar{R}=\frac{R}{<N>}\) and entropy \(\bar{S}=\frac{<S>}{<N>}\). We present our results in Table 1.
Table 1 shows the results obtained for Argon. For instance, we find that for \(\mu=-250\) kJ/kg, \(<T>=119.2\) K, \(\bar{V}=0.734\) cm\({}^{3}\)/g, \(\bar{H}=-60.94\) kJ/kg, \(\bar{R}=189.1\) kJ/kg and \(\bar{S}=1.586\) kJ/kg/K. This is in excellent agreement with the experimental data [57], with, for \(T=120\) K and \(P=445\) bar, a specific volume of \(0.741\) cm\({}^{3}\)/g, an enthalpy of \(-61.2\) kJ/kg, and an entropy of \(1.585\) kJ/kg/K. Table 1 shows that, as the chemical potential decreases, the number of Ar atoms decreases, the temperature increases and the specific volume increases,
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(\mu\) (kJ/kg) & \(<N>\) & \(<T>\) (K) & \(\bar{V}\) (cm\({}^{3}\)/g) & \(\bar{H}\) (kJ/kg) & \(\bar{R}\) (kJ/kg) & \(\bar{S}\) (kJ/kg/K) & \(\bar{S}_{id}\) (kJ/kg/K) \\ \hline -230 & 1067.3 & 106.1 & 0.702 & -74.11 & 155.9 & 1.469 & 2.139 \\ -250 & 880.1 & 119.2 & 0.734 & -60.94 & 189.1 & 1.586 & 2.186 \\ -300 & 870.4 & 148.8 & 0.816 & -31.84 & 268.2 & 1.803 & 2.277 \\ -350 & 754.2 & 175.2 & 0.908 & -5.69 & 344.3 & 1.968 & 2.349 \\ -400 & 690.7 & 199.8 & 1.011 & 18.52 & 418.5 & 2.096 & 2.414 \\ -450 & 654.4 & 223.1 & 1.124 & 41.06 & 491.0 & 2.203 & 2.470 \\ -500 & 632.8 & 245.3 & 1.244 & 62.02 & 562.0 & 2.296 & 2.520 \\ -550 & 618.9 & 266.8 & 1.367 & 81.39 & 631.4 & 2.366 & 2.567 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Argon: (\(\mu,P,R\)) simulation results along the \(P=445\) bar isobar and for \(R/k_{B}=8\times 10^{5}\) K. \(\bar{S}_{id}\) indicates the value of the ideal gas entropy provided by the Sackur-Tetrode equation.
meaning that the system becomes less and less dense. This is in line with the increase in enthalpy and entropy, that result from the decreased number of interactions between Ar atoms and the loss of organization in the system. Table 1 also provides the value taken by the ideal gas entropy according to the Sackur-Tetrode equation
\[\bar{S}_{id}=k_{B}\ln\left[\left(\frac{2\pi mk_{B}T}{h^{2}}\right)^{3/2}\frac{ Ve^{5/2}}{N}\right] \tag{24}\]
The ideal gas entropy \(\bar{S}_{id}\) is found to be larger than the molar entropy \(\bar{S}\). This results from the attractive interactions that take place in liquid Argon. As shown in Table 1, this effect decreases as the density decreases or, equivalently, the specific volume increases, leading to the narrowing of the gap between \(\bar{S}_{id}\) and \(\bar{S}\).
We show in Fig 1 the variation of the chemical potential, enthalpy, and entropy as a function of \(T\) along the \(P=445\) bar isobar. We also provide in Fig 1 a linear fit for \(\mu(T)\), \(\mu(T)\) (kJ/kg)\(=-15.692-1.963\times T\). Given the thermodynamic relation
Figure 1: Argon along the \(P=445\) bar isobar: (a) Chemical potential \(\mu\) as a function of the temperature \(T\), with a linear fit shown as a red line, (b) Enthalpy \(\bar{H}=\frac{<\!H\!>}{<\!N\!>}\) as a function of \(T\), with a linear fit to the simulation results shown as a red line, (c) Entropy \(\bar{S}=\frac{<\!S\!>}{<\!N\!>}\) against \(<T>\), with a logarithmic fit to the simulation results shown in red.
\(\mu=\bar{H}-T\bar{S}\), this gives an estimate for the average entropy over this temperature interval of 1.963 kJ/kg/K in reasonable agreement with the range of experimental \(\bar{S}\) values of \(1.4-2.4\) kJ/kg/K for this temperature interval [57]. We also show in Fig 1 a linear fit for \(\bar{H}(T)\), with \(\bar{H}(T)\) (kJ/kg)\(=-177.13+0.976\times T\), as well as a logarithmic fit for \(\bar{S}(T)\): \(\bar{S}(T)\) (kJ/kg)\(=-3.307+1.019\ln(T)\). To further assess the accuracy of the fits, we run a separate \((\mu,P,R)\) simulation for the following set: \(\mu=-233\) kJ/kg, \(P=445\) bar and \(R/k_{B}=8\times 10^{5}\) K. We find \(T=108.2\) K, \(\bar{H}=-72.06\) kJ/kg, and \(\bar{S}=1.488\) kJ/kg/K. These results are in excellent agreement with the values found from the fits developed above, _i.e._, \(T=110.78\) K, \(\bar{H}=-69.05\) kJ/kg, and \(\bar{S}=1.489\) kJ/kg/K. This confirms that the fits given above capture the variation of thermodynamic properties in Argon for the thermodynamic parameters studied in this work.
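The fitting procedure itself is a simple least-squares problem; the sketch below recovers fits of the same form from the Table 1 data with numpy, and the coefficients it returns are illustrative rather than the exact values quoted above.

```python
# Least-squares fits of mu(T) (linear) and S(T) (logarithmic) to Table 1.
import numpy as np

T = np.array([106.1, 119.2, 148.8, 175.2, 199.8, 223.1, 245.3, 266.8])
mu = np.array([-230., -250., -300., -350., -400., -450., -500., -550.])
S = np.array([1.469, 1.586, 1.803, 1.968, 2.096, 2.203, 2.296, 2.366])

slope, intercept = np.polyfit(T, mu, 1)  # mu(T) = intercept + slope*T
a, b = np.polyfit(np.log(T), S, 1)       # S(T) = b + a*ln(T)
print(intercept, slope)  # cf. -15.692 and -1.963
print(b, a)              # cf. -3.307 and 1.019
```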
Next, we turn to the second single-component system, Neon. We follow the same protocol as for Argon and present in Table 2 the results obtained from \((\mu,P,R)\) simulations. We observe the same general behavior as for Argon. As \(\mu\) decreases, the number of Ne atoms decreases, \(T\) increases and the specific volume increases. The main difference with Ar for the set of thermodynamic conditions studied in this work is that the enthalpy is always positive, which results from the fact that the system is a supercritical fluid under these conditions. As for Argon, the results found with the \((\mu,P,R)\) method are in very good agreement with those found using \((N,P,T)\) simulations. For instance, for
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(\mu\) (kJ/kg) & \(<N>\) & \(<T>\) (K) & \(\bar{V}\) (cm\({}^{3}\)/g) & \(\bar{H}\) (kJ/kg) & \(\bar{R}\) (kJ/kg) & \(\bar{S}\) (kJ/kg/K) & \(\bar{S}_{id}\) (kJ/kg/K) \\ \hline -250 & 987.1 & 100.0 & 1.355 & 83.60 & 333.6 & 3.335 & 3.767 \\ -300 & 813.9 & 114.6 & 1.512 & 104.62 & 404.6 & 3.530 & 3.896 \\ -350 & 695.1 & 128.3 & 1.663 & 123.73 & 473.7 & 3.692 & 4.005 \\ -400 & 608.1 & 141.7 & 1.806 & 141.49 & 541.5 & 3.821 & 4.100 \\ -450 & 541.7 & 154.5 & 1.941 & 157.92 & 607.9 & 3.934 & 4.184 \\ -500 & 488.7 & 167.1 & 2.076 & 173.86 & 673.9 & 4.032 & 4.260 \\ -550 & 445.7 & 179.5 & 2.202 & 188.84 & 738.8 & 4.116 & 4.328 \\ -600 & 409.9 & 191.3 & 2.328 & 203.27 & 803.3 & 4.199 & 4.390 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Neon: \((\mu,P,R)\) simulation results along the \(P=445\) bar isobar and for \(R/k_{B}=8\times 10^{5}\) K.
\(\mu=-450\) kJ/kg, \(<T>=154.5\) K, \(\bar{V}=1.941\) cm\({}^{3}\)/g and \(\bar{H}=157.92\) kJ/kg. This is in excellent agreement with simulation results we obtain in the \((N,P,T)\) ensemble for \(T=154.5\) K and \(P=445\) bar, with a specific volume estimated at 1.938 cm\({}^{3}\)/g and an enthalpy of 157.97 kJ/kg. Table 2 also provides a comparison with the ideal gas entropy obtained from the Sackur-Tetrode equation. Since, under the thermodynamic conditions used in this work, Neon is a supercritical fluid, the interactions play a lesser role than for Argon, which is a liquid under these thermodynamic conditions. As a result, the molar entropy of Neon is found to be close to the ideal gas entropy under these conditions.
We also show in Fig 2 the variation of \(\mu\), \(\bar{H}\), and \(\bar{S}\) as a function of \(T\) along the \(P=445\) bar isobar. As for Ar, we provide a linear fit for \(\mu(T)\): \(\mu(T)\) (kJ/kg)\(=139.89-3.840\times T\), which gives us an average entropy of 3.840 kJ/kg/K over the temperature interval. The linear fit for \(H(T)\) is given by \(\bar{H}(T)\) (kJ/kg)\(=-44.95+1.306\times T\). Finally, a logarithmic fit for \(S(T)\) gives \(\bar{S}(T)\) (kJ/kg/K)\(=-2.751+1.325\ln(T)\). To further assess the accuracy of the fits, we perform a separate \((\mu,P,R)\) simulation with \(\mu=-285\) kJ/kg, \(P=445\) bar and \(R/k_{B}=8\times 10^{5}\) K. We obtain \(<T>=110.6\) K, \(\bar{H}=98.78\) kJ/kg and \(\bar{S}=3.471\) kJ/kg/K.
These results are in good agreement with the values from the above fits, _i.e._, \(T=110.6\) K, \(\mu=-284.8\) kJ/kg, \(\bar{H}=98.54\) kJ/kg and \(\bar{S}=3.483\) kJ/kg/K, which confirms the validity of the \((\mu,P,R)\) simulation method.
#### iii.1.2 Thermodynamic properties for the Ar-Ne mixture
We now turn to the study of the Ar-Ne mixture, and run simulations in the \((\mu_{Ne},\mu_{Ar},P,R)\) ensemble. To better understand how simulations in this ensemble work, we start by investigating the role played by \(R\), the heat function or Ray energy, when two components are present in the system. To this end, we hold \(\mu_{Ne}\), \(\mu_{Ar}\) and \(P\) constant and gradually vary \(R\). Results are given in Table 3. In this Table, the mole fraction in Ne is calculated as \(<x_{Ne}>=\frac{<N_{Ne}>}{<N_{Ne}>+<N_{Ar}>}\), the temperature as \(<T>=\frac{2<K>}{3<N_{tot}>k_{B}}\), with \(N_{tot}=N_{Ne}+N_{Ar}\), and the density as \(<\rho>=\frac{m_{Ne}<N_{Ne}>+m_{Ar}<N_{Ar}>}{<V>}\). Datasets available for comparison include the measurements by Streett [58], as well as the reference model developed by Tkaczuk _et al._
Results from Table 3 show that, at fixed \(P\) and for a given set of chemical potentials (\(\mu_{Ne}\), \(\mu_{Ar}\)), increasing the value of \(R\) leads to an increase in the number of atoms for the
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline \(\frac{R}{k_{B}}\) & \(<N_{Ne}>\) & \(<N_{Ar}>\) & \(<x_{Ne}>\) & \(<T>\) & \(<\rho>\) & \(\bar{H}\) & \(\bar{S}\) \\ (K) & - & - & - & (K) & (g/cm\({}^{3}\)) & (kJ/kg) & (kJ/kg/K) \\ \hline \(2\times 10^{5}\) & 106.5 & 106.0 & 0.501 & 110.8 & 1.128 & -11.6 & 4.697 \\ \(3\times 10^{5}\) & 159.7 & 159.1 & 0.501 & 110.9 & 1.129 & -11.8 & 4.694 \\ \(4\times 10^{5}\) & 212.2 & 212.9 & 0.499 & 110.9 & 1.130 & -12.3 & 4.687 \\ \(5\times 10^{5}\) & 266.4 & 264.7 & 0.502 & 110.8 & 1.128 & -11.5 & 4.701 \\ \(6\times 10^{5}\) & 318.6 & 319.0 & 0.500 & 110.8 & 1.130 & -12.2 & 4.693 \\ \(7\times 10^{5}\) & 372.4 & 371.4 & 0.501 & 110.8 & 1.129 & -11.9 & 4.697 \\ \(8\times 10^{5}\) & 425.5 & 424.5 & 0.501 & 110.8 & 1.129 & -11.9 & 4.697 \\ \(9\times 10^{5}\) & 478.9 & 477.4 & 0.500 & 110.9 & 1.129 & -11.9 & 4.690 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Ar-Ne mixture along the \(P=445\) bar isobar: \((\mu_{Ne},\mu_{Ar},P,R)\) simulation results for \(x_{Ne}=0.5\), _i.e._, for \(\mu_{Ne}=-203\) kJ/kg and \(\mu_{Ar}=-330\) kJ/kg, for different values of the heat function \(R\).
two components of the mixture, \(<N_{Ne}>\) and \(<N_{Ar}>\), and thus in the total number of atoms in the system \(<N_{tot}>\). For instance, multiplying the value of \(R\) by 5 leads to a 5-fold increase in \(<N_{Ne}>\) and \(<N_{Ar}>\). Interestingly, we observe a linear dependence on \(R\) for all numbers of atoms, \(<N_{Ne}>\), \(<N_{Ar}>\) and \(<N_{tot}>\) (see Fig 3). This implies that, once the \((\mu_{Ne},\mu_{Ar},P,R)\) simulations have converged, and regardless of the value set for \(R\), the intensive thermodynamic properties all converge towards the same values. For instance, for \(R/k_{B}=2\times 10^{5}\) K, the system converges towards a temperature of 110.8 K, a density of 1.128 g/cm\({}^{3}\), an enthalpy of \(-11.6\) kJ/kg and an entropy of 4.697 kJ/kg/K. For \(R/k_{B}=9\times 10^{5}\) K, the system reaches at convergence 110.9 K for \(<T>\), 1.129 g/cm\({}^{3}\) for \(<\rho>\), \(-11.9\) kJ/kg for \(<\bar{H}>\), and 4.690 kJ/kg/K for \(<\bar{S}>\). The sets of results obtained for these two \(R\) values are within the statistical uncertainty of the simulations, which is 0.5 K, 0.008 g/cm\({}^{3}\), 0.2 kJ/kg, and 0.15 kJ/kg/K for the temperature, density, enthalpy, and entropy, respectively.
In the rest of the paper, we present results obtained from \((\mu_{Ne},\mu_{Ar},P,R)\) simulations at
Figure 3: Ar-Ne mixture. Ray function \(R\) as a function of (a) the number of Ne atoms, \(<N_{Ne}>\), (b) the number of Ar atoms \(<N_{Ar}>\), and (c) the total number of particles \(<N_{tot}>\), present in the \(x_{Ne}=0.5\) mixture.
\(P=445\) bar and \(R/k_{B}=3\times 10^{5}\) K, as they are found from Table 3 to provide accurate results for a reasonably small total number of atoms in the system. To obtain mixture properties for mole fractions in Ne spanning the entire range from 0.1 to 0.9, we vary both \(\mu_{Ne}\) and \(\mu_{Ar}\) and select conditions for which the temperature of the system has converged towards \(<T>=110.8\pm 0.5\) K. We present the results in Table 4.
To assess the accuracy of the (\(\mu_{Ne}\), \(\mu_{Ar}\), P, R) simulations, we carry out a simulation in the (\(N_{Ne}\), \(N_{Ar}\), P, T) ensemble at 110.8 K and \(P=445\) bar and for a mole fraction in Ne of 0.5. We find an average density of 1.13 g/cm\({}^{3}\) in good agreement with the (\(\mu_{Ne}\), \(\mu_{Ar}\), P, R) simulation results of 1.129 g/cm\({}^{3}\). This is a first validation of the grand-isobaric adiabatic ensemble for mixtures. By fitting the (\(\mu_{Ne}\), \(\mu_{Ar}\), P, R) simulation results, we obtain the following equation for the density of the mixture
\[\rho\ (g/cm^{3})=1.387-0.183x_{Ne}-0.838x_{Ne}^{2}+0.394x_{Ne}^{3}-0.093x_{Ne}^ {4} \tag{25}\]
and test the fit against the available experimental data under these conditions [58] and the results obtained from \((N_{Ne},N_{Ar},P,T)\) simulations. As shown in Fig. 4, there is good agreement between the three sets of data over the entire range of compositions, which shows that Eq. 25 provides an accurate model for the density of the Ne-Ar mixture.
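As a quick consistency check (our own, not part of the original analysis), Eq. 25 can also be evaluated against the Table 4 densities:

```python
# Maximum deviation of the Eq. (25) density fit from the Table 4 values.
import numpy as np

def rho_fit(x):  # Eq. (25), in g/cm^3
    return 1.387 - 0.183*x - 0.838*x**2 + 0.394*x**3 - 0.093*x**4

x_ne = np.linspace(0.1, 0.9, 9)
rho_sim = np.array([1.361, 1.320, 1.268, 1.202, 1.129,
                    1.049, 0.963, 0.866, 0.770])
print(np.max(np.abs(rho_fit(x_ne) - rho_sim)))  # a few 1e-3 g/cm^3
```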
We now examine the variations of the thermodynamic properties of the mixture as a
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(<x_{Ne}>\) & \(\mu_{Ne}\)(kJ/kg) & \(\mu_{Ar}\)(kJ/kg) & \(<N_{Ne}>\) & \(<N_{Ar}>\) & \(<T>\) (K) & \(<\rho>\) (g/cm\({}^{3}\)) \\ \hline
0.1 & -187 & -251 & 37.2 & 316.5 & 110.5 & 1.361 \\
0.2 & -183 & -267 & 67.9 & 275.3 & 110.3 & 1.320 \\
0.3 & -186 & -285 & 98.9 & 235.9 & 110.3 & 1.268 \\
0.4 & -193 & -307 & 128.8 & 195.9 & 110.8 & 1.202 \\
0.5 & -203 & -330 & 159.7 & 159.1 & 110.9 & 1.129 \\
0.6 & -215 & -356 & 188.2 & 126.1 & 110.9 & 1.049 \\
0.7 & -229 & -386 & 217.5 & 94.5 & 110.7 & 0.963 \\
0.8 & -246 & -424 & 247.8 & 62.3 & 110.9 & 0.866 \\
0.9 & -264 & -478 & 282.1 & 30.6 & 110.6 & 0.770 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ar-Ne mixture. Results from (\(\mu_{Ne}\),\(\mu_{Ar}\),P,R) simulations along the isobar \(P=445\) bar with \(R/k_{B}=3\times 10^{5}\) K, and \(<T>=110.8\pm 0.5\) K.
function of the mole fraction in Ne. We focus in Fig. 5 on the plots for the enthalpy \(\bar{H}\), the product \(-T\bar{S}\), and the Gibbs free energy, calculated as \(\bar{G}=\bar{H}-T\bar{S}\). For low \(x_{Ne}\), the mixture exhibits the signature of a liquid, with a negative enthalpy. Then, as \(x_{Ne}\) increases, the enthalpy increases, since Neon is a supercritical fluid under these conditions and is associated with a positive enthalpy. On the other hand, we observe a non-monotonic behavior, with a maximum for \(\bar{S}\) and thus a minimum for \(-T\bar{S}\) and for the Gibbs free energy \(\bar{G}\). We find that the entropy reaches a maximum, and the Gibbs free energy a minimum, close to an equimolar composition of the mixture.
We now move on to the determination of the thermodynamic properties of mixing. For any thermodynamic property \(\bar{Y}\), we evaluate the property of mixing \(\Delta\bar{Y}_{mix}\) from the property
Figure 4: Density against the mole fraction in Ne \(x_{Ne}\) for the Ar-Ne mixture along the isobar \(P=445\) bar. \((N_{Ne},N_{Ar},P,T)\) simulation results are shown as squares, while experimental data are shown as diamonds. The experimental data [58] are given for a temperature \(T=110.78\) K and a pressure of \(P=6500\) psia or, equivalently, \(P=448\) bar.
determined for the mixture \(\bar{Y}_{m}\), and the properties for the single-component system \(\bar{Y}_{Ne}\) and \(\bar{Y}_{Ar}\), determined under the same conditions of \(T\) and \(P\). Specifically, we obtain here
\[\begin{array}{l}\Delta\bar{H}_{mix}=\bar{H}_{m}-x_{Ne}\bar{H}_{Ne}-x_{Ar}\bar{ H}_{Ar}\\ \Delta\bar{S}_{mix}=\bar{S}_{m}-x_{Ne}\bar{S}_{Ne}-x_{Ar}\bar{S}_{Ar}\\ \Delta\bar{G}_{mix}=\bar{G}_{m}-x_{Ne}\mu_{Ne}-x_{Ar}\mu_{Ar}\end{array} \tag{26}\]
We report in Table 5 the results obtained from \((\mu_{Ne},\mu_{Ar},P,R)\) simulations for the mixtures properties, as well as the thermodynamic properties of mixing, for different mole fractions in Ne. We fit the simulation results to obtain the following equations for the thermodynamic properties of mixing
\[\begin{array}{l}\Delta\bar{H}_{mix}\ (kJ/kg)=1.664-218.26x_{Ne}+478.90x_{Ne}^{2}+74.72x_{Ne}^{3}-338.65x_{Ne}^{4}\\ \Delta\bar{S}_{mix}\ (kJ/kg/K)=0.034+0.408x_{Ne}+28.545x_{Ne}^{2}-51.101x_{Ne}^{3}+22.103x_{Ne}^{4}\\ \Delta\bar{G}_{mix}\ (kJ/kg)=-2.119-263.46x_{Ne}-2683.3x_{Ne}^{2}+5735.7x_{Ne}^{3}-2787.2x_{Ne}^{4}\end{array} \tag{27}\]
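Since the tabulated properties are per unit mass (kJ/kg and kJ/kg/K), evaluating Eq. 26 numerically requires weighting the pure-component values by the corresponding mass fractions; the sketch below adopts this convention, an interpretation on our part that is consistent with the Table 5 entries at \(x_{Ne}=0.5\).

```python
# Property of mixing Delta Y_mix = Y_m - w_Ne*Y_Ne - w_Ar*Y_Ar, with the
# mole fraction converted to a mass fraction for per-unit-mass properties.
M_NE, M_AR = 20.18, 39.948  # molar masses in g/mol

def mixing_property(y_m, x_ne, y_ne_pure, y_ar_pure):
    w_ne = x_ne * M_NE / (x_ne * M_NE + (1.0 - x_ne) * M_AR)
    return y_m - w_ne * y_ne_pure - (1.0 - w_ne) * y_ar_pure

# Entropy of mixing at x_Ne = 0.5, using the pure-component S(T) fits
# evaluated at 110.8 K (S_Ne ~ 3.487, S_Ar ~ 1.490 kJ/kg/K):
print(mixing_property(4.691, 0.5, 3.487, 1.490))  # ~2.53, cf. 2.537 in Table 5
```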
Figure 5: Ne-Ar mixture at 110.8 K and 445 bar. \((\mu_{Ne},\mu_{Ar},P,R)\) simulation results for the enthalpy (green triangles), product \(-T\bar{S}\) (red circles) and Gibbs free energy (black circles) against \(x_{Ne}\).
Figure 6: Variation of the thermodynamic properties of mixing, with in (a), \((\mu_{Ne},\mu_{Ar},P,R)\) simulation results for \(\Delta H_{mix}\) (green triangles), \(T\Delta S_{mix}\) (red squares) and \(\Delta G_{mix}\) (black circles) as a function of \(x_{Ne}\). \((N_{Ne},N_{Ar},P,T)\) simulation results for \(\Delta H_{mix}\) are shown as blue squares. In (b), the excess entropy of mixing \(\Delta S_{mix}^{E}\) (black squares) is compared to the entropy of mixing \(T\Delta S_{mix}\) (red squares).
We also plot in Fig. 6 the dependence of the thermodynamic properties of mixing on the mole fraction in Ne, and provide a comparison with the enthalpy of mixing predicted by \((N,P,T)\) simulations for the corresponding mole fractions at \(T=110.8\) K and \(P=445\) bar. The results show very good agreement between the two sets of simulations, thereby providing another validation of the \((\mu_{Ne},\mu_{Ar},P,R)\) simulation method. \((\mu_{Ne},\mu_{Ar},P,R)\) simulations have the advantage of also providing the other thermodynamic properties of mixing, _i.e._, the entropy of mixing and the Gibbs free energy of mixing.
Fig. 6(a) shows that the entropy of mixing exhibits a maximum for an equimolar mixture, while the enthalpy of mixing remains close to 0 kJ/kg and only increases slightly with the Ne mole fraction. This results in the presence of a minimum for the Gibbs free energy of mixing for a mole fraction of 0.5. Overall, the plot shown in Fig. 6 is close to what is expected for an ideal binary mixture. Indeed, ideal mixtures exhibit an enthalpy of mixing of 0 kJ/kg and two terms of the same magnitude, and opposite signs, for the Gibbs free energy of mixing and for the product of temperature by the entropy of mixing. Furthermore, for ideal mixtures, the Gibbs free energy of mixing reaches a minimum for a mole fraction of 0.5,
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(<x_{Ne}>\) & \(\bar{H}_{m}\) & \(\Delta\bar{H}_{mix}\) & \(\bar{S}_{m}\) & \(\Delta\bar{S}_{mix}\) & \(\bar{G}_{m}\) & \(\Delta\bar{G}_{mix}\) \\ - & (kJ/kg) & (kJ/kg) & (kJ/kg/K) & (kJ/kg/K) & (kJ/kg) & (kJ/kg) \\ \hline
0.1 & -72.33 & -12.20 & 2.060 & 0.465 & -300.51 & -55.55 \\
0.2 & -71.83 & -21.60 & 2.685 & 0.974 & -369.26 & -118.72 \\
0.3 & -63.19 & -24.02 & 3.414 & 1.572 & -441.38 & -185.91 \\
0.4 & -42.48 & -15.71 & 4.153 & 2.165 & -502.57 & -241.20 \\
0.5 & -11.79 & 0.92 & 4.691 & 2.537 & -531.47 & -265.35 \\
0.6 & 21.34 & 18.03 & 4.899 & 2.555 & -521.35 & -250.22 \\
0.7 & 51.39 & 29.63 & 4.773 & 2.212 & -477.40 & -201.60 \\
0.8 & 75.67 & 32.45 & 4.420 & 1.605 & -413.97 & -132.55 \\
0.9 & 90.78 & 22.26 & 3.957 & 0.844 & -347.61 & -62.11 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Ar-Ne mixture at \(T=110.8\) K and \(P=445\) bar. \((\mu_{Ne},\mu_{Ar},P,R)\) results for the thermodynamic properties of the mixture \(\bar{Y}_{m}\) and the thermodynamic properties of mixing \(\Delta\bar{Y}_{mix}\). Properties are given in kJ/kg for enthalpy and Gibbs free energy, and in kJ/kg/K for entropy.
while the entropy of mixing reaches a maximum for a mole fraction of 0.5. Since we have a mixture of two rare gases, it is reasonable to observe here a behavior that is qualitatively similar to that of ideal mixtures. This confirms the ability of the \((\mu_{Ne},\mu_{Ar},P,R)\) simulations to yield the thermodynamic properties of mixing. We also examine in Fig. 6(b) the results obtained for the excess entropy of mixing, defined as the entropy of the mixture minus the ideal gas entropy of the two components for the mixture. The excess entropy of mixing is small, less than 0.5 kJ/kg/K in absolute value, when compared to the entropy of mixing. This means that intermolecular interactions contribute very little to the entropy of mixing of Ne-Ar and that the main contribution to the entropy of mixing is thus combinatorial. This is in line with the results obtained for the enthalpy of mixing, which show that the magnitude of this term is small.
### The Cu-Ag system
#### iv.2.1 Single-component systems
We now turn to the study of metallic systems, and focus on the example of the Cu-Ag mixture. We start by examining the results for single-component systems of Cu and Ag. Table 6 and Table 7 show the results obtained for the two metals along the \(P=1\) bar isobar. We first comment on the results for Cu (Table 6), and find that the specific volume increases as the chemical potential decreases. This leads to fewer, and weaker, interactions between Cu atoms and, in turn, to an increase in enthalpy. This is confirmed by the increase in temperature and entropy, which shows that the fluid becomes less and less organized as the chemical potential decreases.
We show in Fig. 7 the variation of the chemical potential, enthalpy, and entropy as a function of temperature. Fig. 7 demonstrates that there is a good agreement between results obtained with \((\mu,P,R)\) simulations and results from Monte Carlo simulations in the isothermal-isobaric ensemble (see panel (b) of Fig. 7). In line with the noble gas systems, we fit the simulation results and obtain the linear fit \(\mu(T)\ (kJ/kg)=-4422.0-1.426\times T\), yielding an estimate for the average entropy over the temperature interval of 1.426 kJ/kg/K. For enthalpy and entropy, we obtain the following
fits
\[\begin{split}\bar{H}\ (kJ/kg)=-5240.6+0.504\times T\\ \bar{S}\ (kJ/kg/K)=-2.337+0.508\ln T\end{split} \tag{28}\]
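Since \(G=\mu\) for a single-component system, the fits above can be cross-checked against the linear fit for \(\mu(T)\): the combination \(\bar{H}-T\bar{S}\) should track \(\mu(T)\) along the isobar. A minimal sketch of this consistency check (our own illustration) follows.

```python
# Consistency check (illustrative): for a pure substance G = mu, so the fits
# of Eq. 28 should reproduce the linear fit mu(T) = -4422.0 - 1.426*T quoted above.
import math

def mu_fit(T):  # kJ/kg, linear fit quoted in the text
    return -4422.0 - 1.426 * T

def H_fit(T):   # kJ/kg, Eq. 28
    return -5240.6 + 0.504 * T

def S_fit(T):   # kJ/kg/K, Eq. 28
    return -2.337 + 0.508 * math.log(T)

for T in (1400.0, 1600.0, 1800.0):
    g = H_fit(T) - T * S_fit(T)
    print(f"T = {T:6.1f} K: H - T*S = {g:8.1f} kJ/kg vs mu_fit = {mu_fit(T):8.1f} kJ/kg")
```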
Next, we turn to the results obtained for Ag and provide in Table 7 the results obtained from \((\mu,P,R)\) simulations. We then compare the results for the specific volume to reference
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\mu\) (kJ/kg) & \(<T>\) (K) & \(\bar{V}\) (cm\({}^{3}\)/g) & \(\bar{H}\) (kJ/kg) & \(\bar{R}\) (kJ/kg) & \(\bar{S}\) (kJ/kg/K) \\ \hline -6300 & 1306.1 & 0.129 & -4585.23 & 1714.8 & 1.313 \\ -6400 & 1385.6 & 0.131 & -4541.45 & 1858.6 & 1.341 \\ -6500 & 1455.7 & 0.132 & -4506.60 & 1993.4 & 1.369 \\ -6600 & 1535.2 & 0.133 & -4466.60 & 2132.8 & 1.389 \\ -6700 & 1602.2 & 0.135 & -4431.64 & 2268.6 & 1.416 \\ -6800 & 1680.0 & 0.136 & -4392.88 & 2406.9 & 1.433 \\ -6900 & 1737.5 & 0.137 & -4363.18 & 2537.5 & 1.460 \\ -7000 & 1809.7 & 0.138 & -4329.20 & 2670.7 & 1.476 \\ -7100 & 1873.9 & 0.139 & -4299.75 & 2800.0 & 1.494 \\ -7200 & 1937.5 & 0.141 & -4263.77 & 2936.3 & 1.515 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Copper: \((\mu,P,R)\) simulation results along the \(P=1\) bar isobar, with \(R/k_{B}=8\times 10^{5}\) K.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \(\mu\) (kJ/kg) & \(<T>\) (K) & \(\bar{V}\) (cm\({}^{3}\)/g) & \(\bar{H}\) (kJ/kg) & \(\bar{R}\) (kJ/kg) & \(\bar{S}\) (kJ/kg/K) \\ \hline -3200 & 1133.4 & 0.112 & -2258.59 & 941.4 & 0.831 \\ -3300 & 1218.5 & 0.113 & -2228.54 & 1071.5 & 0.879 \\ -3400 & 1359.7 & 0.116 & -2182.48 & 1218.0 & 0.896 \\ -3500 & 1456.4 & 0.118 & -2151.29 & 1348.6 & 0.926 \\ -3600 & 1573.9 & 0.120 & -2116.86 & 1483.2 & 0.942 \\ -3700 & 1680.2 & 0.122 & -2084.49 & 1615.7 & 0.962 \\ -3800 & 1774.9 & 0.124 & -2053.67 & 1746.2 & 0.984 \\ -3900 & 1892.8 & 0.126 & -2020.79 & 1878.0 & 0.992 \\ -4000 & 1993.5 & 0.128 & -1993.93 & 2006.8 & 1.007 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Silver: \((\mu,P,R)\) simulation results along the \(P=1\) bar isobar, with \(R/k_{B}=8\times 10^{5}\) K.
data [60] over the 1235 K-1600 K range and find a good agreement. For instance, at \(T=1359.7\) K, the reference value is \(\bar{V}^{ref}=0.109\) cm\({}^{3}\)/g, to be compared to 0.116 cm\({}^{3}\)/g predicted by the simulation, and at \(T=1573.9\) K, we have \(\bar{V}^{ref}=0.111\) cm\({}^{3}\)/g, to be compared to 0.120 cm\({}^{3}\)/g for the simulation. We show in Fig. 8 plots of the chemical potential, enthalpy, and entropy against temperature. As for copper, we observe a good agreement between the enthalpy predicted by \((\mu,P,R)\) simulations and that obtained with Monte Carlo \((N,P,T)\) simulations (see Fig. 8). We then carry out the same analysis as above and obtain the linear fit \(\mu(T)\ (kJ/kg)=-2161.0-0.920\times T\), which provides an estimate for the average entropy over the temperature interval of 0.920 kJ/kg/K. For enthalpy and entropy, we obtain the following fits
\[\begin{split}\bar{H}\ (kJ/kg)=-2603.0+0.308\times T\\ \bar{S}\ (kJ/kg/K)=-1.235+0.296\ln(T)\end{split} \tag{29}\]
Figure 7: Copper along the \(P=1\) bar isobar. (a) Chemical potential against temperature, (b) Enthalpy against temperature, and (c) Entropy against temperature. \((\mu,P,R)\) simulation results are shown as black circles, \((N,P,T)\) simulation results are shown as blue squares, and fits to the \((\mu,P,R)\) simulation results are shown as red lines.
#### iv.2.2 Thermodynamic properties of the Cu-Ag mixture
We then examine the properties for the Cu-Ag mixture. To this end, we carry out \((\mu_{Cu},\mu_{Ag},P,R)\) simulations for the mixture for conditions corresponding to \(P=1\) bar and an average temperature of \(1400\;\pm 20\) K, and fit the simulation results to determine a series of equations modeling the properties of the mixture as a function of the mole fraction in copper. We start with the density of the system, and obtain the following equation from the simulation results
\[\rho\;(g/cm^{3})=8.548+0.278x_{Cu}-2.614x_{Cu}^{2}+2.639x_{Cu}^{3}-1.221x_{Cu}^{4} \tag{30}\]
and test the fit against \((N_{Cu},N_{Ag},P,T)\) simulation results. The results are shown in Fig. 9. They are found to be in very good agreement with one another over the entire range of compositions, thereby establishing that Eq. 30 provides an accurate model for the density of the Cu-Ag mixture.
Figure 8: Silver along the \(P=1\) bar isobar. (Left panel) Chemical potential against temperature, (Middle panel) Enthalpy against temperature, and (Right panel) Entropy against temperature. \((\mu,P,R)\) simulation results are shown as black circles, \((N,P,T)\) simulation results are shown as blue squares, and fits to the \((\mu,P,R)\) simulation results are shown as red lines.
Next, we determine the corresponding equations for the thermodynamic properties of mixing using Eq. 26. To this end, we take the \((\mu_{Cu},\mu_{Ag},P,R)\) simulation results for the mixture, _i.e._, \(<\bar{H}_{m}>\) and \(<\bar{S}_{m}>\) and calculate \(<\bar{G}_{m}>=<\bar{H}_{m}>-T<\bar{S}_{m}>\). Then, we subtract the value for \(\bar{H}\) and \(\bar{S}\) obtained for the single-component systems under the same conditions of pressure and temperature. These are provided by Eq. 28 for Cu as \(\bar{H}_{Cu}=-4535.0\) kJ/kg and \(\bar{S}_{Cu}=1.343\) kJ/kg/K and by Eq. 29 for Ag as \(\bar{H}_{Ag}=-2171.8\) kJ/kg and \(\bar{S}_{Ag}=0.909\) kJ/kg/K. We obtain the following equations for the thermodynamic properties of mixing
\[\begin{split}&\Delta\bar{H}_{mix}\ (kJ/kg)=0.532-403.75x_{Cu}-38825x_{Cu}^{2}+71068x_{Cu}^{3}-31847x_{Cu}^{4}\\ &\Delta\bar{S}_{mix}\ (kJ/kg/K)=0.504x_{Cu}+13.503x_{Cu}^{2}-26.369x_{Cu}^{3}+12.365x_{Cu}^{4}\\ &\Delta\bar{G}_{mix}\ (kJ/kg)=0.982-1109.8x_{Cu}-57729x_{Cu}^{2}+107984x_{Cu}^{3}-49158x_{Cu}^{4}\end{split} \tag{31}\]
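As an illustration of how these fits can be used, the following minimal sketch (our own, relying only on the fitted \(\Delta\bar{G}_{mix}\) polynomial of Eq. 31) locates the composition that minimises the Gibbs free energy of mixing; a simple grid search should place the minimum close to the \(x_{Cu}\approx 0.55\) discussed below.

```python
# Illustrative grid search for the composition minimising the fitted dG_mix
# of Eq. 31 (Cu-Ag at 1 bar and 1400 K).
import numpy as np

dG = np.polynomial.Polynomial([0.982, -1109.8, -57729.0, 107984.0, -49158.0])  # kJ/kg

x = np.linspace(0.0, 1.0, 100001)
i = np.argmin(dG(x))
print(f"minimum of fitted dG_mix at x_Cu = {x[i]:.3f} ({dG(x[i]):.1f} kJ/kg)")
```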
Figure 9: Density of the Cu-Ag mixture at 1 bar and 1400 K. The black line is a plot of Eq. 30, obtained from \((\mu_{Cu},\mu_{Ag},P,R)\) simulation results, while the open red squares are \((N_{Cu},N_{Ag},P,T)\) simulation results.
We plot in Fig. 10 the resulting fits, as well as a comparison with results obtained from a series of \((N_{Cu},N_{Ag},P,T)\) simulations as we vary the composition of the mixture at \(P=1\) bar and \(T=1400\) K. As shown in Fig. 10, there is an excellent agreement for \(\Delta\bar{H}_{m}\) between the fits to the \((\mu_{Cu},\mu_{Ag},P,R)\) results of Eq. 31 and the \((N_{Cu},N_{Ag},P,T)\) simulation results. This validates the extension of the method proposed in this work to mixtures, and its applicability to many-body force fields. Furthermore, the results obtained from both sets of simulations indicate that the behavior of the Cu-Ag mixture departs from that observed for ideal mixtures. Specifically, the enthalpy of mixing takes values that are of the same order as the other two terms, \(\Delta\bar{G}_{m}\) and \(T\Delta\bar{S}_{m}\). In other words, the enthalpy of mixing is no longer negligible, as one would expect for an ideal mixture. Furthermore, the minima for \(\Delta\bar{H}_{m}\) and \(\Delta\bar{G}_{m}\), and the maximum for \(T\Delta\bar{S}_{m}\), are reached for a mole fraction in Cu that
Figure 10: Thermodynamic properties of mixing for the Cu-Ag mixture at 1 bar and 1400 K. The black line is a plot of Eq. 31, obtained from \((\mu_{Cu},\mu_{Ag},P,R)\) simulation results, while the open blue squares are \((N_{Cu},N_{Ag},P,T)\) simulation results.
is now about 0.55, and no longer 0.5 as for an ideal mixture. This departure can be interpreted as stemming from the strong cohesive interactions, of markedly different magnitudes, that take place between Cu and Ag atoms. It also illustrates one of the key advantages of the \((\mu_{Cu},\mu_{Ag},P,R)\) method, as it provides direct access to all three quantities, \(G\), \(H\) and \(S\), during a simulation run.
## IV Conclusions
In this work, we extend the adiabatic formalism to multicomponent systems and, more specifically, to the adiabatic grand-isobaric ensemble. Then, we develop an implementation of simulations in the adiabatic grand-isobaric ensemble within a Monte Carlo framework and apply the new approach to binary mixtures of noble gases and of metals. We show that this method has two very significant advantages. First, we now have direct access to the entropy of the mixture through the relation \(R=ST\). Second, the calculation of the pressure through the virial relation is not required here, since pressure is an input parameter in simulations in the adiabatic grand-isobaric ensemble. This alleviates the need for the computation of pressure, and its increased complexity when many-body terms are included. This new approach is thus particularly well suited for the determination of the entropy of mixing, an issue that has drawn considerable interest since the dawn of statistical mechanics, and of the other thermodynamic properties of mixing, including the enthalpy of mixing and the Gibbs free energy of mixing. We assess the accuracy of the method through comparisons with the available experimental data on mixtures of rare gases, and with results obtained from conventional simulations performed in the isothermal-isobaric ensemble. This new approach allows us to recover the ideal behavior expected for mixtures of noble gases. Furthermore, simulations in the adiabatic grand-isobaric ensemble shed light on the departure from ideal behavior observed in binary metallic mixtures. Specifically, in the case of the Cu-Ag mixture, we observe a shift in the maximum for the entropy of mixing towards a greater Cu content than the ideal value of 0.5, and a strong contribution of the enthalpy of mixing to the Gibbs free energy of mixing. This result also shows the versatility of the adiabatic grand-isobaric approach, and its applicability to systems modeled with many-body force fields. The extension of the method to molecular fluids is currently under way.
###### Acknowledgements.
Partial funding for this research was provided by NSF through award CHE-1955403. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) [61], which is supported by National Science Foundation grant number ACI-1548562, and used the Open Science Grid through allocation TG-CHE200063.
## Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.
|
2308.15637 | Managing Software Provenance to Enhance Reproducibility in Computational
Research | Scientific processes rely on software as an important tool for data
acquisition, analysis, and discovery. Over the years sustainable software
development practices have made progress in being considered as an integral
component of research. However, management of computation-based scientific
studies is often left to individual researchers who design their computational
experiments based on personal preferences and the nature of the study. We
believe that the quality, efficiency, and reproducibility of computation-based
scientific research can be improved by explicitly creating an execution
environment that allows researchers to provide a clear record of traceability.
This is particularly relevant to complex computational studies in
high-performance computing (HPC) environments. In this article, we review the
documentation required to maintain a comprehensive record of HPC computational
experiments for reproducibility. We also provide an overview of tools and
practices that we have developed to perform such studies around Flash-X, a
multi-physics scientific software. | Akash Dhruv, Anshu Dubey | 2023-08-29T21:13:18Z | http://arxiv.org/abs/2308.15637v2 | # Designing Laboratory Networks for Mranging Software-based Scientific Studies
###### Abstract
_Scientific processes rely on software as an important tool for data acquisition, analysis, and discovery. Over the years sustainable software development practices have made progress in being considered as an integral component of research. However, management of computation-based scientific studies is often left to individual researchers who design their computational experiments based on personal preferences and the nature of the study. We believe that the quality, efficiency, and reproducibility of computation-based scientific research can be improved by explicitly creating an execution environment that allows researchers to provide a clear record of traceability. This is particularly relevant to complex computational studies in high-performance computing (HPC) environments. In this article, we review the documentation required to maintain a comprehensive record of HPC computational experiments for reproducibility. We also provide an overview of tools and practices that we have developed to perform such studies around Flash-X, a multi-physics scientific software._
Experimental and observational scientists follow a rigorous process of recording their work. For many, receiving training to do this diligently is considered a high priority for several excellent reasons. Unless the exact conditions of the experiment are recorded, it cannot be repeated. A complete specification of assumptions made during the experiment, and the recording of the methodology followed for interpreting and analyzing the outcome, is required for reproducibility - a hallmark of the scientific process. Additional factors may include the expense of running experiments, and the impact placement of probes might have on the recorded data. In observational sciences, similar or greater constraints may result from the rarity of events - the observer must be ready to record the event when it occurs. To make the most of the event the observer would typically wish to record as many details as possible.
This scientific discipline is largely lacking in computational science. It has traditionally been assumed in computational science circles that careful recording of experiments may not be necessary because running the software again with identical input will produce identical results. This is a false assumption because software is rarely stationary. Every instance of using it to obtain new scientific results tweaks it in some ways, thereby changing some of its behavior. In high-performance computing (HPC) environments, the focus of this article, changes in the system software stack and hardware may also cause changes in results generated by the software. As the scale and complexity of the software and the problems it is trying to solve grow, so does the expense of running computational experiments. Some high-profile scientific failures in various disciplines and scrutiny of scientific output during COVID-19 have put a spotlight on the rigor of running and recording computational experiments. Although workflow management tools have been developed to make it easier to run simulations, and archive and analyze the data, there is no community-wide adoption of rigor in requiring provenance of the code and data, and the log of the experiments conducted.
Laboratory notebooks, or scientific notebooks (referred to as lab notebooks from here on) have long been used as a crucial tool in experimental and observational sciences by researchers, engineers, and students to record and document their work, experiments, observations, and findings in a structured and organized manner. Their purpose is to maintain a detailed and accurate record of all activities related to a
research project or experiment. Maintaining a well-kept lab notebook is not only considered a scientific best practice but also a requirement in many research and academic settings. It helps ensure transparency, accountability, and the reliability of scientific work, which is essential for advancing knowledge and innovation.
Individual researchers often use a directory-based organization for their computational experiments which are designed based on their personal preferences, the nature of the studies, and the need to balance time and computing resources. This directory-based design is helpful in nesting experiments that cover different parametric spaces. Leveraging this design and explicitly creating an execution environment around software and its dependencies can improve the quality, efficiency, and reproducibility of their studies. Such an environment, which is analogous to a laboratory space, can help focus researchers to prioritize scientific rigor and develop tools and practices to manage the generation, protection, storage, and analysis of data. The construction of this virtual laboratory environment should include the integration of a lab notebook that can provide a record of software configurations and research decisions.
The question then is, what do lab notebooks look like for an HPC computational experiment, and how should they be managed? In this paper, we list features that lab notebooks may require for such computational investigations, and describe one exemplar of a solution developed for conducting experiments with Flash-X [1], a multi-physics multi-component software that can be used for simulations in several science domains.
## Lab Notebooks for Computational Experiments
Since computational work often involves coding, simulations, and data analysis using software tools, the lab notebook is best maintained in electronic format, and could include a combination of repositories, spreadsheets, text, and markdown formats. For reasons discussed in the talk referenced below1, the use of some form of lab notebook in computational science has several challenges. However, a fundamental set of activities that must be recorded is fairly simple to enumerate; a minimal machine-readable sketch of such a record follows the list.
Footnote 1: [https://www.youtube.com/watch?v=fWpl4S_dvhc](https://www.youtube.com/watch?v=fWpl4S_dvhc)
* Title and Purpose: Title or project name and a brief description of its purpose or objectives.
* Code Repository Links: Links or references to the code repositories and/or input files used in the experiment.
* Software and Hardware: Specifications of hardware used and system software stack, libraries, and tools used in the experiment, along with their versions.
* Modifications: Systematic recording of modifications to the software and hardware during the experiment
* Experiment Design: Description of the algorithms, data sets, parameters, and any assumptions made for the experiment along with the reasoning behind the selection of specific runs to be made. A log of steps taken to prepare for the experiment can be very helpful for future experiments.
* Data Sources: Documentation about any external data used, along with explanations of how and where it's stored. Any data preprocessing steps, such as cleaning, normalization, or transformation.
* Data Storage: Archival storage of produced and collected data with attached metadata to be usable by other researchers.
* Experimental Runs: Log of each experimental run, including the input parameters, the date and time of execution, and the resulting output or data. If version control is in use, commit references for any significant updates should be kept.
* Results and Analysis: Presentation of the results of a computational experiment, including tables, graphs, and statistical analyses along with a description of how the results were interpreted and what conclusions were drawn.
* References: Citations and references to relevant literature, software documentation, or external resources that influenced the computational experiment.
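As a concrete, purely illustrative sketch of what such a record might look like in machine-readable form, the snippet below writes a JSON entry covering the items above; all field names and values are hypothetical choices, not a standard schema.

```python
# Illustrative only: a hypothetical JSON-backed notebook entry covering the
# items enumerated above. Field names and values are made up for this sketch.
import json
from datetime import datetime, timezone

record = {
    "title": "Flow-Boiling-3D production run",
    "purpose": "Three-dimensional multiphase flow boiling simulations",
    "code_repositories": ["https://github.com/Flash-X/Flash-X"],
    "software_and_hardware": {
        "compiler": "gcc 11.2 (hypothetical)",
        "libraries": {"AMReX": "22.09 (hypothetical)"},
        "machine": "HPC cluster, 64 nodes (hypothetical)",
    },
    "modifications": [],
    "experiment_design": "parameter sweep over heater temperature; see notes.md",
    "data_sources": [],
    "data_storage": "cloud archive; see archive manifest",
    "runs": [
        {
            "inputs": "flash.par",
            "commit": "abc1234 (hypothetical)",
            "started": datetime.now(timezone.utc).isoformat(),
            "outputs": ["jobnode.archive/"],
        }
    ],
    "results_and_analysis": "analysis.ipynb",
    "references": ["[1] Flash-X"],
}

with open("notebook_entry.json", "w") as handle:
    json.dump(record, handle, indent=2)
```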
Several tools and software applications can help maintain well-organized and effective lab notebooks for computational experiments. These tools are designed to streamline documentation, code management, data analysis, and collaboration. For example, Jupyter Notebooks2 are interactive, web-based environments for creating and sharing documents that combine live code (Python, R, etc.), equations, visualizations, and narrative text. They are already widely used in data science and computational research. Similarly, R Markdown3 is an authoring format that integrates code, results, and narrative text into a single document. It's commonly used with the R programming language but can be adapted for other languages as well. Python-based notebooks such as Google Colab4 are similar to Jupyter Notebooks but are focused on Python. They allow documentation of Python code alongside explanations and visualizations. Several other open-source and commercial solutions exist with a variety of features. Any of these tools, along with a set of well-defined recording practices, can form the basis of an execution environment that promotes reproducibility of computational experiments. However, none of them suffice for complex multiphysics HPC computations, which generate a huge amount of data and require substantial post-processing and analysis.
Footnote 4: [https://colab.google/](https://colab.google/)
## The Flash-X Solution
Flash-X is community-developed software that is undergoing several modes of development simultaneously. It is a new version of a long-existing community code FLASH [2] that has been re-architected to be compatible with heterogeneous hardware platforms. Several new physics capabilities and an entirely new method of integration have also been added to the code. As a consequence, a fairly common occurrence is where ongoing capability and performance improvement requirements collide with the needs of a domain science study. The situation is exacerbated when all participants involved in a study are not familiar with the inner workings and constraints of the code. In general two different types of experiments are regularly conducted with the code. One set of experiments measures the performance of different components of the code, while the other set pertains to domain science investigations. The requirements and constraints of these experiments differ from one another, though some of the required meta-information for complete specification is identical for all experiments. Some of this required meta-information such as repository version, software stack version, and all the configuration parameters are recorded in a log file that every execution instance of Flash-X generates.
The performance experiments typically involve scaling studies with the implication that the same application is run with different configurations of degree of parallelism, hardware components in use, and possibly different implementations of some of the code components. The exploration parameters in such studies tend to be related to the infrastructural components of the code, and the analysis is performed on either the performance summary section of the log file or, if a performance tool is used, then the data generated by the tool. The output of the simulation itself is not relevant to the study. The domain science investigations explore the parameter space of the physics involved. These experiments typically have little variation in the degree of parallelism or hardware in use. The output of the simulation is an important artifact here and can be quite large.
An example of a workflow that might be encountered by a developer who is also a user of the code for doing scientific investigations is shown in Figure 1(a). Such developers may perform regular testing of the physics and infrastructure components, keep track of the performance changes, and conduct scientific experiments at the same time. The collection of lab notebooks [3] described in Figure 1(b) provides a good example of what may be done in such circumstances. These notebooks are all seeded from a general notebook [4] being maintained by the developer in question. In this instance, the developer is simultaneously conducting the following experiments,
* **Flow-Boiling-3D**: Lab notebook for production runs of three-dimensional multiphase flow boiling simulations.
* **ImBound-Mapping-Optimization**: Lab notebook to investigate and optimize mapping of Lagrangian particles on block-structured AMR grids for the purpose of developing an immersed boundary method for fluid-structure interaction problems.
* **AMReX-Bittree-Performance**: Lab notebook to improve scaling of Flash-X applications that use AMReX in octree mode.
* **Outflow-Forcing-BubbleML**: Reproducibility capsule for research articles on using Flash-X simulations to develop scientific machine learning models for thermal science applications.
Figure 2 depicts the directory tree that might be set up to record this set of experiments. Here, the first subdirectory, _software_ records all the software configuration information that is not directly available from the log file. In this example there is an external dependency AMReX [5], a library that provides adaptive mesh refinement capabilities. The build specifications of external libraries are not known to Flash-X configurator, and must be explicitly recorded to be able to exactly reproduce the execution environment. The second subdirectory _simulation_ describes the application instance that is used in the experiment. If an experiment uses more than one application instance it is expected that a separate subdirectory would be created for each. Because Flash-X has a high degree of composability, and therefore a huge collection of tests, it is also important to list the tests that were used to ensure ongoing correctness of the code throughout
the duration of the experiment. Organizing the tests within this structure also enables quick running of the test-suite as described later.
The process of seeding and managing these notebooks is implemented using a lightweight command line tool, Jobrunner [6], that enforces a directory-based inheritance for application configurations encoded in shell scripts to manage computational experiments. The directory tree is designed to split different shell commands that set up the software stack and simulation in an optimal way. For instance, a combination of environment.sh and setupAMReX.sh is used to configure AMReX most suitably for the experiment. Here, environment.sh documents and sets variables that store compiler information and installation locations of different libraries, and setupAMReX.sh stores information specific to building AMReX with the necessary options. The Jobfiles that exist at every level of the tree encode information that Jobrunner can use to stitch together these bash files to perform different tasks.
Figure 3 shows contents of the Jobfiles at different nodes along the directory tree described in Figure 2. During the execution of Jobrunner commands, files assigned to the respective commands are picked up to execute tasks in the target node of the directory tree. For instance, the commands described in Figure 4 build AMReX, clone and configure an application instance of Flash-X for a multiphase pool boiling problem, compile the assembled code, and then execute the created binary to obtain results. At the same time, local testing of the code is performed using Flash-X's custom testing framework5, which uses tests encoded in Tests.suite along with the environment.sh and runTests.sh scripts to build and execute tests. Note that environment.sh, located at the root of the project directory, is used for each Jobrunner command, providing consistency between tests and experiments.
Footnote 5: [https://github.com/Flash-X/Flash-X-Test](https://github.com/Flash-X/Flash-X-Test)
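To make the inheritance idea concrete, here is a minimal sketch (not Jobrunner's actual implementation; see [6] for the real tool) of how shell scripts could be stitched together along the path from the project root to a target node, root first, so that deeper nodes refine shallower ones.

```python
# Illustrative sketch of directory-based inheritance: concatenate the shell
# scripts assigned to a command along root -> ... -> target, root first.
# This is NOT Jobrunner's code; it only demonstrates the idea.
from pathlib import Path

def stitch(root: Path, target: Path, script_names: list) -> str:
    root, node = root.resolve(), target.resolve()
    chain = [node]
    while node != root:
        if node == node.parent:          # hit the filesystem root: bad input
            raise ValueError("target must live inside root")
        node = node.parent
        chain.append(node)
    pieces = []
    for node in reversed(chain):         # root first, target node last
        for name in script_names:
            script = node / name
            if script.is_file():
                pieces.append(f"# --- {script} ---\n{script.read_text()}")
    return "\n".join(pieces)

# Example (paths follow Figure 2):
# print(stitch(Path("."), Path("Experiment/simulation/FlowBoiling"),
#              ["environment.sh", "setupAMReX.sh"]))
```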
Jobrunner hides error-prone individual steps of the experiments while allowing organized and explicit documentation of configuration options that can be easily modified by editing the shell scripts. The directory tree in Figure 2 can be easily modified and redesigned based on the requirements of the experiments to
Figure 1: (a) A common scenario in computational sciences where a developer has to conduct individual experiments and collaborate with other researchers who may not be experts in using their software. (b) Example use case of lab notebooks for Flash-X development. The developer maintains a personal repository that serves as a seed for setting up experiments for physics and performance related studies that can be shared with collaborators.
include markdown notes and analysis files. For full functionality and documentation for Jobrunner see [6].
Data archiving is the last remaining concern for reproducibility. This is implemented using Jobrunner's archive command, which picks up file patterns listed in Jobfiles (see Figure 3) and moves them to a jobnode.archive/<mm-dd-yyyy> directory under the target node of an experiment (see Experiment/simulation/FlowBoiling in Figure 2). The directories containing the data are eventually moved to a cloud-based archival service along with a clone of the Github6 source repository to preserve the tree structure. Note that the raw data is itself not included in the repository because that can be quite large. It is organized in such a way that the archive can be unpacked below the directory structure maintained in the repository to exactly reconstruct all the artifacts of the experiment. See Lab-Notebooks/Outflow-Forcing-BubbleML [3] for an example lab notebook for recent publications.
Footnote 6: [https://github.com/](https://github.com/)
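The archiving step can likewise be sketched in a few lines; the snippet below (again illustrative, not Jobrunner's code) moves files matching the Jobfile patterns into a date-stamped jobnode.archive directory under the target node.

```python
# Illustrative sketch of the archive step: move files matching the patterns
# listed in a Jobfile into jobnode.archive/<mm-dd-yyyy> under the target node.
# This is NOT Jobrunner's code; see [6] for the real implementation.
import shutil
from datetime import date
from pathlib import Path

def archive(node: Path, patterns: list) -> Path:
    dest = node / "jobnode.archive" / date.today().strftime("%m-%d-%Y")
    dest.mkdir(parents=True, exist_ok=True)
    for pattern in patterns:
        for path in node.glob(pattern):
            if path.is_file():
                shutil.move(str(path), str(dest / path.name))
    return dest

# Example: archive(Path("Experiment/simulation/FlowBoiling"), ["*.log", "*.hdf5"])
```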
## Conclusion
An increasing emphasis on reproducibility and greater scrutiny of computational science results is slowly changing the perception of what constitutes a good computational experiment. The concept of maintaining laboratory notebooks in computational sciences has been gaining popularity as a way to enforce structure and rigor in scientific studies. This is a welcome change, and deserves encouragement. However, unlike more traditional scientific disciplines, computational scientists face challenges in how frequently their execution environments may change, and the kind of impact such changes may have on the continuity of their experiments. Additionally, they often participate in teams that may be geographically diverse, therefore they need distributed digital mechanisms to record their work. Tools such as Jupyter notebooks and Github repositories can help, but need additional care to fully capture the provenance of a computational experiment. We have presented an approach that addresses many of the challenges faced by software that is being developed while also being used for production. It is our hope that the tools and ideas presented here serve as a motivation for other scientists to design and organize their experiments. Integration of well-organized laboratory notebooks with reproducibility and data capsules can improve the quality of scientific artifacts, and enhance the productivity of collaborative research.
Figure 4: Jobrunner commands for setting up dependencies, running tests and experiments, and archiving data. These commands are executed from the root of the directory tree.
Figure 3: Contents of Jobfiles at different locations along the directory tree.
## Acknowledgments
The authors would like to acknowledge Jared O'Neal's work in promoting the culture of using lab notebooks for Flash-X experiments, and for conceptualizing the format that has been adopted by the team and described in this article.
This work was partially supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the US Department of Energy Office of Science and the National Nuclear Security Administration, and the Laboratory Directed Research and Development Program supported by Argonne. The submitted manuscript was created by UChicago Argonne, LLC, operator of Argonne National Laboratory ("Argonne"). Argonne, a U.S. Department of Energy Office of Science laboratory, is operated under Contract No. DE-AC02-06CH11357. The U.S. Government retains for itself, and others acting on its behalf, a paid-up nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan. [http://energy.gov/downloads/doe-public-access-plan](http://energy.gov/downloads/doe-public-access-plan).
|
2303.17352 | The Essential Algorithms for the Matrix Chain | For a given product of $n$ matrices, the matrix chain multiplication problem
asks for a parenthesisation that minimises the number of arithmetic operations.
In 1973, Godbole presented a now classical dynamic programming formulation with
cubic time complexity on the length of the chain. The best known algorithms run
in linearithmic time, and the best known approximation algorithms run in linear
time with an approximation factor smaller than two. All solutions have in
common that they select an optimal parenthesisation from a set of $C_{n-1}$
(Catalan number $n - 1$) distinct parenthesisations. We studied the set of
parenthesisations and discovered (a) that all of the exponentially many
parenthesisations are useful in the sense that they are optimal in an infinite
subset of the input space, (b) that only $n + 1$ parenthesisations are
essential in the sense that they are arbitrarily better than the second best on
an infinite subset of the input space, and (c) that the best essential
parenthesisation is never more than twice as costly as the best non-essential
parenthesisation. Through random sampling of the input space, we further
discovered that the set of essential parenthesisations includes an optimal
parenthesisation in the vast majority of inputs, and that the best essential
parenthesisation is on average much closer to optimal than the worst-case
bound. The results have direct consequences for the development of compilers
for linear algebra expressions where the matrix sizes are unknown at
compile-time. | Francisco López, Lars Karlsson, Paolo Bientinesi | 2023-03-30T13:11:01Z | http://arxiv.org/abs/2303.17352v1 | # The Essential Algorithms for the Matrix Chain+
###### Abstract
For a given product of \(n\) matrices, the matrix chain multiplication problem asks for a parenthesisation that minimises the number of arithmetic operations. In 1973, Godbole presented a now classical dynamic programming formulation with cubic time complexity on the length of the chain. The best known algorithms run in linearithmic time, and the best known approximation algorithms run in linear time with an approximation factor smaller than two. All solutions have in common that they select an optimal parenthesisation from a set of \(C_{n-1}\) (Catalan number \(n-1\)) distinct parenthesisations. We studied the set of parenthesisations and discovered (a) that all of the exponentially many parenthesisations are useful in the sense that they are optimal in an infinite subset of the input space, (b) that only \(n+1\) parenthesisations are essential in the sense that they are arbitrarily better than the second best on an infinite subset of the input space, and (c) that the best essential parenthesisation is never more than twice as costly as the best non-essential parenthesisation. Through random sampling of the input space, we further discovered that the set of essential parenthesisations includes an optimal parenthesisation in the vast majority of inputs, and that the best essential parenthesisation is on average much closer to optimal than the worst-case bound. The results have direct consequences for the development of compilers for linear algebra expressions where the matrix sizes are unknown at compile-time.
matrix multiplication, matrix chain, approximation algorithm, linear algebra compilers
68N20, 68Q25, 68W25
## 1 Introduction
The classical _matrix chain multiplication problem_ seeks the cheapest way to multiply a sequence of \(n\geq 3\) matrices
\[M:=M_{1}M_{2}\cdots M_{n}. \tag{1}\]
Since matrix multiplication is associative, the \(n-1\) multiplications in (1) can be performed in any order. There are \((n-1)!\) orderings, some of which are equivalent in the sense that they perform the same exact set of multiplications but in different sequential orders. For example, \((M_{1}M_{2})(M_{3}M_{4})\) has two equivalent orderings, since the multiplications \(M_{1}\times M_{2}\) and \(M_{3}\times M_{4}\) are independent. Each class of equivalent orderings is identified by a specific _parenthesisation_ of (1). In what follows, we use the term _ordering_ to refer to the _partial_ ordering of the multiplications imposed by a parenthesisation. The ordering affects the arithmetic cost (and hence the execution time) of the evaluation of the matrix chain. Thus, the matrix chain multiplication problem asks for an ordering (parenthesisation) that minimises the cost to compute \(M\).
The number of orderings is \(C_{n-1}=\frac{(2n-2)!}{n!(n-1)!}\), i.e., the \((n-1)\)th Catalan number, which grows exponentially in \(n\). Still, the problem can be solved exactly in \(\Theta(n^{3})\) time using a textbook dynamic programming formulation [10], while the fastest known algorithm runs in \(O(n\log n)\) time [12, 13]. There also exist approximation algorithms with good (multiplicative) guarantees, some of which run in \(O(n)\) time [4, 5]. Refer to Section 2 for an overview of related work.
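For reference, the classical \(\Theta(n^{3})\) dynamic program is short enough to sketch directly (a standard textbook implementation, shown here in Python).

```python
# The textbook O(n^3) dynamic program of Godbole [10]. dims = (k_0, ..., k_n)
# for a chain of n matrices, where M_i has size k_{i-1} x k_i.
def matrix_chain_order(dims):
    n = len(dims) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]    # cost[i][j]: M_{i+1}..M_j
    split = [[0] * (n + 1) for _ in range(n + 1)]   # optimal split points
    for length in range(2, n + 1):                  # subchain length
        for i in range(n - length + 1):
            j = i + length
            cost[i][j] = float("inf")
            for b in range(i + 1, j):               # try every split
                c = cost[i][b] + cost[b][j] + dims[i] * dims[b] * dims[j]
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, b
    return cost[0][n], split

# Example: for dims = (10, 100, 5, 50), the optimum is ((M1 M2) M3) with cost
# 10*100*5 + 10*5*50 = 7500.
print(matrix_chain_order((10, 100, 5, 50))[0])  # 7500
```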
In this work, we study the matrix chain multiplication problem from a different perspective by addressing the following questions.
* Q1: Are all orderings _useful_? We say that an ordering is useful if its cost is strictly lower than the cost of any other ordering for at least one combination of matrix sizes. Intuitively, each useful ordering is valuable, since it is the best on at least some instances of the problem.
* Q2: Are all useful orderings _essential_? An ordering is essential if one can pick a combination of matrix sizes for which the ordering is optimal and the ratio between its cost and that of the second-best ordering is arbitrarily large. Intuitively, each essential algorithm is especially important, since there are instances of the problem on which it is arbitrarily better than the second best.
* Q3: How important are those orderings (if any) that are useful but non-essential? An ordering is useful but non-essential if there exists an instance for which it is optimal, and the cost of the second-best ordering is within a constant factor. Intuitively, if the factor is provably reduced, the non-essential orderings can be removed from consideration without large negative consequences on performance. We ask whether useful but non-essential orderings exist, and how much one pays, should they be discarded.
We make the following contributions. We address Q1 in Section 4, where we show that all orderings are indeed useful and, what is more, all are useful on infinitely many instances of the problem. We address Q2 and Q3 in Section 5, where we show that \(n+1\) specific orderings are essential and all others are non-essential (though useful). Moreover, we show that constraining the selection of an ordering to the set of essential orderings results in a previously published approximation algorithm with an approximation factor of two. Finally, in Section 6, we present experimental evidence indicating that i) for most problem instances an essential ordering is optimal, and ii) for most other instances the penalty of removing all non-essential orderings is far lower than the theoretical upper bound of twice the optimal cost.
Our results have direct consequences for the construction of compilers of linear algebra expressions (such as MATLAB, Julia, Linnea [2] and others) in the common scenario where some or all of the matrix sizes are not known at compile time. The matrix chains form a subset of all linear algebra expressions. Therefore, if a compiler excludes any of the essential orderings from consideration, then the generated code will be arbitrarily far from optimal for infinite combinations of matrix sizes. On the other hand, if a compiler excludes all of the exponentially many non-essential orderings and keeps only the linearly many essential ones, then the code generated for the matrix chain can be made to never have a cost more than twice the optimal. Furthermore, on most instances, said output code will either be optimal or perform much closer to optimality than the bound suggests. In other words, a compiler for linear algebra expressions with unknown sizes that targets either matrix chains or any expression containing them must generate code capable of realising _all_ matrix chain orderings to guarantee optimal cost on every matrix chain instance. However, if some performance degradation is acceptable, then the compiler only needs to generate code for the linearly many essential algorithms.
## 2 Related Work
The matrix chain has been extensively studied in prior works. In 1973, Godbole published the first work on the matrix chain multiplication problem, presenting the now classical dynamic programming algorithm that finds the optimal ordering in \(\Theta(n^{3})\) time [10]. In 1982, Yao introduced an improved version of Godbole's algorithm with time complexity \(O(n^{2})\). Shortly after this, Hu and Shing published a rather complex algorithm to find an optimal ordering in \(O(n\log n)\)
time [12, 13]. A proof for one of their lemmas was eventually shown to be incorrect, although the result remained intact after its proof was corrected [18]. Hu and Shing's work also showed that the matrix chain multiplication problem can be translated into the partition of a convex polygon with \(n+1\) sides into \(n-1\) non-intersecting triangles (a fact that is recurrently resorted to in the published literature on the matter). Other works focused on finding the optimal ordering for instances with particular properties, such as the work of Santoro [17] (an \(O(n)\) algorithm for sparse, square matrices) and Wang et al. [19] (an \(O(n\log m)\) algorithm where \(m\) is the number of local minima in the sequence of dimensions).
Approximation algorithms started to appear soon after Godbole published the dynamic programming algorithm [10]. In 1975, Chandra presented an approximation algorithm with an approximation factor of two [4]. (Unfortunately, this work was never published besides as a technical report that is utterly difficult to access nowadays.) Shortly after this, in 1978, Chin introduced an \(O(n)\) approximation algorithm [5] with an approximation factor of \(1.25\). In 1981, Hu and Shing presented a reformulation of Chin's approximation algorithm, though obtained by other means (triangulation of an \(n\)-sided polygon), and refined the approximation factor to \(1.155\)[11].
Over the years, multi-processor and multi-core architectures became more and more relevant. Because of this, some published works [6, 16, 3] focused on parallelising the algorithms to find optimal orderings on multi-processor architectures. Some others [6, 7] concentrated on approximation algorithms (by parallelising the algorithm given by Chin and Hu and Shing), also on multi-processor architectures. Other works focused on the actual evaluation of the matrix chain on parallel architectures [15]. More recent work has been devoted to optimising the evaluation of the chain while factoring in inter-processor communication and distinct matrix multiplication algorithms [20]. It is also worth mentioning the work in [1], which aimed at mapping matrix chains with properties (e.g., diagonal, triangular, and symmetric matrices) and operators (e.g., inversion and transposition) to BLAS [14, 9, 8] kernels.
The common denominator amongst all these works is that they target solutions to the matrix chain multiplication problem. In this work, we study the problem from a different perspective. Instead of finding an optimal (or approximate) ordering for a specific instance, we investigate how the various orderings perform in relation to each other on the whole set of problem instances. We find that all orderings have something to contribute and a few are absolutely essential.
## 3 Preliminaries
## 3 Preliminaries

In this section, we introduce some basic terminology. Expression (1) is a _matrix chain_ of length \(n\). Let \(\mathcal{S}_{n}\) denote the set of orderings (parenthesisations). Then \(|\mathcal{S}_{n}|=C_{n-1}\), the \((n-1)\)th Catalan number. The size of matrix \(M_{i}\) is \(k_{i-1}\times k_{i}\) for all \(i\in\{1,2,\ldots,n\}\). An _instance_ of the matrix chain multiplication problem (Problem 1) is specified by a tuple \(\boldsymbol{k}=(k_{0},k_{1},\ldots,k_{n})\).
**Definition 1** **(Cost of a matrix multiplication)**: Given three integers \(a\), \(b\), \(c\), the cost of multiplying \((M_{a+1}\cdots M_{b})\) with \((M_{b+1}\cdots M_{c})\) is given by the function \(\phi_{\boldsymbol{k}}(a,b,c):=\phi(k_{a},k_{b},k_{c})=k_{a}k_{b}k_{c}\). \(\phi\) is monotonically non-decreasing in its arguments and has rotational symmetry, meaning \(\phi(p,q,r)=\phi(r,p,q)\). As a consequence, \(\phi_{\boldsymbol{k}}\) also has rotational symmetry in its arguments. \({}_{\Box}\)
The cost of an ordering in \(\mathcal{S}_{n}\) is given by a sum of \(n-1\) terms \(\phi_{\boldsymbol{k}}(a,b,c)\), each with its own triplet \((a,b,c)\in\{0,1,\ldots,n\}^{3}\). An ordering is uniquely represented by the set \(\mathcal{A}\) of index triplets \((a,b,c)\) that appear in its cost function. Therefore, the cost
of an ordering \(\mathcal{A}\) on an instance \(\mathbf{k}\) can be expressed as
\[T(\mathcal{A},\mathbf{k})=\sum_{(a,b,c)\in\mathcal{A}}\phi_{\mathbf{k}}(a,b,c)=\sum_{(a, b,c)\in\mathcal{A}}\phi(k_{a},k_{b},k_{c}).\]
For example, the ordering \(\mathcal{A}\) with parenthesisation \(M=((M_{1}M_{2})M_{3})M_{4}\) has the cost function \(T(\mathcal{A},\mathbf{k})=k_{0}k_{1}k_{2}+k_{0}k_{2}k_{3}+k_{0}k_{3}k_{4}\). The index triplet representation is \(\mathcal{A}=\{(0,1,2),(0,2,3),(0,3,4)\}\).
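The triplet representation translates directly into code; the following short function (our own illustration) evaluates \(T(\mathcal{A},\mathbf{k})\) for the example above.

```python
# T(A, k) from the triplet representation: each multiplication contributes
# phi_k(a, b, c) = k_a * k_b * k_c.
def ordering_cost(triplets, k):
    return sum(k[a] * k[b] * k[c] for (a, b, c) in triplets)

# The ordering ((M1 M2) M3) M4 discussed above:
A = {(0, 1, 2), (0, 2, 3), (0, 3, 4)}
k = (2, 3, 4, 5, 6)
print(ordering_cost(A, k))  # 2*3*4 + 2*4*5 + 2*5*6 = 124
```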
The matrix chain multiplication problem can be formalised as follows.
**Problem 1** (Matrix Chain Multiplication Problem): _Given a matrix chain (1) with dimension sequence \(\mathbf{k}=(k_{0},k_{1},\ldots,k_{n})\), find an ordering \(\mathcal{A}\in\mathcal{S}_{n}\) such that \(T(\mathcal{A},\mathbf{k})=\min_{\mathcal{B}\in\mathcal{S}_{n}}T(\mathcal{B},\mathbf{k})\)._
If the cost of an ordering is amongst the cheapest ones for a particular instance, we say that it is optimal (on that instance). If, in addition, no other ordering has the same cost, then we say that the ordering is uniquely optimal.
**Definition 2** (Optimal and uniquely optimal orderings): _An ordering \(\mathcal{A}\) is optimal on an instance \(\mathbf{k}\) if \(T(\mathcal{A},\mathbf{k})\leq\min_{\mathcal{B}\in\mathcal{S}_{n}\setminus\{ \mathcal{A}\}}T(\mathcal{B},\mathbf{k})\). The ordering is uniquely optimal if the inequality is strict._
Suppose that some subset \(\mathcal{Q}\subset\mathcal{S}_{n}\) of orderings are removed, i.e., are not allowed to be used to evaluate the matrix chain. The optimal cost on an instance \(\mathbf{k}\), when the choice of ordering is restricted from \(\mathcal{S}_{n}\) to \(\mathcal{S}_{n}\setminus\mathcal{Q}\), will either increase or stay the same. The relative increase in cost quantifies the performance penalty of not being allowed to use the orderings in \(\mathcal{Q}\).
**Definition 3** (Penalty): _The penalty of removing a subset of orderings \(\mathcal{Q}\subset\mathcal{S}_{n}\) is the relative increase (if any) in the optimal cost and is given by the function_
\[P(\mathcal{Q},\mathbf{k})=\frac{\min_{\mathcal{B}\in\mathcal{S}_{n}\setminus \mathcal{Q}}T(\mathcal{B},\mathbf{k})}{\min_{\mathcal{A}\in\mathcal{S}_{n}}T( \mathcal{A},\mathbf{k})}-1\in[0,\infty).\]
_The numerator is the optimal cost amongst all remaining orderings. The denominator is the optimal cost amongst all orderings._
The penalty is zero if and only if the set of remaining orderings \(\mathcal{S}_{n}\setminus\mathcal{Q}\) contains an optimal ordering. Intuitively, if the penalty of \(\mathcal{Q}\) on instance \(\mathbf{k}\) is \(x\%\), then the optimal cost when one is not allowed to use the orderings in \(\mathcal{Q}\) increases by \(x\%\).
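For small chains the penalty can be computed by brute force; the sketch below (our own illustration) enumerates all \(C_{n-1}\) orderings as triplet sets and evaluates Definition 3 directly.

```python
# Brute-force evaluation of the penalty P(Q, k) of Definition 3 for small n:
# enumerate every ordering of M_{i+1} ... M_j as a frozenset of triplets.
from functools import lru_cache

@lru_cache(maxsize=None)
def orderings(i, j):
    if j - i == 1:
        return (frozenset(),)               # a single matrix: no multiplications
    result = []
    for b in range(i + 1, j):               # last multiplication splits at b
        for left in orderings(i, b):
            for right in orderings(b, j):
                result.append(left | right | {(i, b, j)})
    return tuple(result)

def cost(A, k):
    return sum(k[a] * k[b] * k[c] for (a, b, c) in A)

def penalty(Q, k):
    S = orderings(0, len(k) - 1)
    best_all = min(cost(A, k) for A in S)
    best_remaining = min(cost(A, k) for A in S if A not in Q)
    return best_remaining / best_all - 1

k = (4, 5, 8, 4)
S = orderings(0, 3)
best = min(S, key=lambda A: cost(A, k))     # here: M1 (M2 M3), cost 240
print(penalty({best}, k))                   # 288/240 - 1 = 0.2
```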
The questions we answer in this paper revolve around the concepts of useful and essential orderings, informally introduced in the introduction.
**Definition 4** (Useful and essential orderings): _An ordering \(\mathcal{A}\) is useful if there exists an instance \(\mathbf{k}\) on which \(\mathcal{A}\) is uniquely optimal. An ordering \(\mathcal{A}\) is essential if for any \(r\in\mathbb{R}^{+}\) there exists an instance \(\mathbf{k}\) such that \(P(\{\mathcal{A}\},\mathbf{k})>r\). In other words, one can choose an instance to make the penalty of removing the ordering \(\mathcal{A}\) arbitrarily large._
## 4 Every ordering is useful
An instance of the matrix chain of length \(n\) has \(n+1\) parameters and can be evaluated using any of the exponentially (in \(n\)) many orderings in \(\mathcal{S}_{n}\). The abundance of orderings suggests that some of them might be useless, meaning that they are not uniquely optimal on any instance. However, this is not the case; Theorem 4.3 below shows that every ordering is useful (Definition 4).
To prove this result, we first establish Lemma 4.1.
**Lemma 4.1**: _An ordering that is uniquely optimal on one instance is uniquely optimal on an infinite number of instances. \({}_{\Box}\)_
Proof: Let \(\mathbf{k}=(k_{0},k_{1},\ldots,k_{n})\) be an instance on which \(\mathcal{A}\) is uniquely optimal, that is, for all \(\mathcal{B}\in\mathcal{S}_{n}\setminus\{\mathcal{A}\}\),
\[\frac{T(\mathcal{B},\mathbf{k})}{T(\mathcal{A},\mathbf{k})}>1.\]
The tuple \(\mathbf{k}\) can be scaled by a positive integer \(\alpha\) so that
\[\mathbf{k}^{\prime}=\alpha\mathbf{k}=(\alpha k_{0},\alpha k_{1},\ldots,\alpha k_{n}).\]
Thus, \(\phi_{\mathbf{k}^{\prime}}(a,b,c)=(\alpha k_{a})(\alpha k_{b})(\alpha k_{c})= \alpha^{3}\phi_{\mathbf{k}}(a,b,c)\). It follows that for all \(\mathcal{B}\in\mathcal{S}_{n}\setminus\{\mathcal{A}\}\),
\[\frac{T(\mathcal{B},\mathbf{k}^{\prime})}{T(\mathcal{A},\mathbf{k}^{\prime})}=\frac{ \alpha^{3}T(\mathcal{B},\mathbf{k})}{\alpha^{3}T(\mathcal{A},\mathbf{k})}=\frac{T( \mathcal{B},\mathbf{k})}{T(\mathcal{A},\mathbf{k})}>1.\]
In other words, \(\mathcal{A}\) is uniquely optimal also on \(\mathbf{k}^{\prime}\). Since there are infinitely many choices for \(\alpha\), each resulting in a distinct instance, we conclude that there are infinitely many instances on which \(\mathcal{A}\) is uniquely optimal. \({}_{\blacksquare}\)
We need [5, Theorem 1] (restated and adapted here as Lemma 4.2), which establishes a sufficient condition for a particular matrix multiplication to be present in all optimal orderings.
**Lemma 4.2**: _Let \(m=\arg\min_{i}k_{i}\) be the index of the smallest element in \(\mathbf{k}\). If for some \(i\in\{0,1,\ldots,n\}\) (taken as \(\pmod{n+1}\)) \(k_{i}>k_{i-1}\), \(k_{i}>k_{i+1}\), and the condition_
\[\frac{1}{k_{i}}<\frac{1}{k_{i-1}}+\frac{1}{k_{i+1}}-\frac{1}{k_{m}} \tag{2}\]
_holds, then the multiplication \(M_{i}M_{i+1}\) must appear in every optimal ordering. \({}_{\Box}\)_
Proof: See [5, Theorem 1]. \({}_{\blacksquare}\)
We are now ready to prove the section's main result.
**Theorem 4.3**: _Every ordering in \(\mathcal{S}_{n}\) is uniquely optimal on infinitely many instances. \({}_{\Box}\)_
Proof: The case \(n\leq 2\) is trivial, so assume \(n\geq 3\). Consider an arbitrary ordering \(\mathcal{A}\in\mathcal{S}_{n}\). We will construct an instance \(\mathbf{k}\) on which \(\mathcal{A}\) is uniquely optimal. We do so in three steps. In Step 1, we craft a sequence of increasingly larger positive integers. In Step 2, we assign values from this sequence to \(\mathbf{k}\). At this point, we have constructed _one_ instance on which \(\mathcal{A}\) is uniquely optimal. In Step 3, we apply Lemma 4.1 to show that every ordering is uniquely optimal on _infinitely_ many instances.
**Step 1. Craft a sequence of positive integers.** Let \(1=R_{0}<R_{1}<\ldots<R_{n-1}\) be the finite sequence of increasingly larger positive rational numbers given by
\[R_{\ell}=\frac{2^{n}+\ell-n-1}{2^{n}+\ell-n-2^{\ell}},\]
where \(\ell\in\{0,1,\ldots,n-1\}\). For a fixed \(n\), the sequence increases with \(\ell\), since the numerator increases and the denominator decreases. Plus, the sequence is positive since \(R_{0}=1\) and \(R_{n-1}=2\). If \(\ell_{1},\ell_{2}<\ell\), then
\[\frac{1}{R_{\ell}}<\frac{1}{R_{\ell_{1}}}+\frac{1}{R_{\ell_{2}}}-\frac{1}{R_{0}}. \tag{3}\]
To verify this, note that \(R_{0}=1\) and \(R_{\ell_{1}},R_{\ell_{2}}\leq R_{\ell-1}\) and hence
\[\frac{1}{R_{\ell}}<\frac{2^{n}+\ell-n-2^{\ell}}{2^{n}+\ell-n-2}=\frac{2}{R_{\ell-1}}-\frac{1}{R_{0}}\leq\frac{1}{R_{\ell_{1}}}+\frac{1}{R_{\ell_{2}}}-\frac{1}{R_{0}}.\]
Let \(c=L_{0}<L_{1}<\cdots<L_{n-1}\), where \(L_{\ell}=cR_{\ell}\), be the finite sequence of increasingly larger integers obtained by scaling \(R_{\ell}\) by \(c\), the least common multiple of all denominators in the sequence \(R_{0},R_{1},\ldots,R_{n-1}\). Note that the property (3) holds also for the integer sequence \(L_{0},L_{1},\ldots,L_{n-1}\).
**Step 2. Assign values to \(k_{i}\) for \(i\in\{0,1,\ldots,n\}\).** To make \(\mathcal{A}\) uniquely optimal, we assign integers from the sequence \(L_{0},L_{1},\ldots,L_{n-1}\) to the elements of \(\mathbf{k}\) in such a way that Lemma 4.2 guarantees that every matrix multiplication in \(\mathcal{A}\) appears in every optimal ordering. In other words, \(\mathcal{A}\) must be uniquely optimal on \(\mathbf{k}\).
Identify a matrix multiplication \(M_{i}M_{i+1}\) of two adjacent matrices that \(\mathcal{A}\) associates together. Set \(k_{i}=L_{n-1}\). Replace \(M_{i}M_{i+1}\) by \(\hat{M_{i}}:=M_{i}M_{i+1}\) of size \(k_{i-1}\times k_{i+1}\) to obtain a reduced chain of length \(n-1\). Repeat this reduction process until all multiplications have been eliminated.
In the first iteration, \(L_{n-1}\) is assigned to some \(k_{i_{1}}\). In the second iteration, \(L_{n-2}\) is assigned to some \(k_{i_{2}}\). Finally, in the last iteration, \(L_{1}\) is assigned to some \(k_{i_{n-1}}\). At this point, all dimensions \(k_{1},k_{2},\ldots,k_{n-1}\) have been assigned an integer from the sequence \(L_{1},\ldots,L_{n-1}\). As a final step, set \(k_{0}=k_{n}=L_{0}\). Since the elements of \(L\) are assigned from largest to smallest, the inequality (2) will hold for every three dimensions \(k_{i-1},k_{i},k_{i+1}\) in every matrix multiplication in \(\mathcal{A}\). Therefore, each multiplication in \(\mathcal{A}\) must, by Lemma 4.2, be present in every optimal ordering. Since \(\mathcal{A}\) is the only ordering containing all of its multiplications, \(\mathcal{A}\) must be uniquely optimal on \(\mathbf{k}\).
**Step 3. From one to infinitely many instances.** By Lemma 4.1, it follows that \(\mathcal{A}\) is uniquely optimal on infinitely many instances. \({}_{\blacksquare}\)
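The whole construction can also be verified computationally. Below is our own end-to-end sketch (all names are ours): an ordering is a nested tuple over the leaves \(1,\ldots,n\), Step 2 assigns dimensions in a bottom-up (post-order) pass so that inner products receive larger \(L\) values, and a brute force over all \(C_{n-1}\) orderings confirms unique optimality on a small example.

```python
from fractions import Fraction
from itertools import product
from math import lcm

def L_sequence(n):                                   # Step 1
    R = [Fraction(2**n + l - n - 1, 2**n + l - n - 2**l) for l in range(n)]
    c = lcm(*(r.denominator for r in R))
    return [int(c * r) for r in R]

def assign_dims(tree, n):                            # Step 2
    L, k, splits = L_sequence(n), [None] * (n + 1), []
    def seg(t):                                      # segment (a, b) covered by t
        if isinstance(t, int):
            return t, t
        (a, m), (m2, b) = seg(t[0]), seg(t[1])
        assert m + 1 == m2                           # operands must be adjacent
        splits.append(m)                             # this product eliminates k_m
        return a, b
    seg(tree)
    for j, m in enumerate(splits):                   # inner products get larger L
        k[m] = L[n - 1 - j]
    k[0] = k[n] = L[0]
    return k

def cost(tree, k):                                   # cost of one ordering
    def walk(t):                                     # returns (a, b, cost)
        if isinstance(t, int):
            return t - 1, t, 0
        (a, m, cl), (_, b, cr) = walk(t[0]), walk(t[1])
        return a, b, cl + cr + k[a] * k[m] * k[b]
    return walk(tree)[2]

def all_orderings(a, b):                             # every full parenthesization
    if a == b:
        yield a
        return
    for m in range(a, b):
        for lt, rt in product(all_orderings(a, m), all_orderings(m + 1, b)):
            yield (lt, rt)

A = ((1, (2, 3)), (4, 5))                            # an arbitrary ordering, n = 5
k = assign_dims(A, 5)
ranked = sorted(((cost(B, k), B) for B in all_orderings(1, 5)),
                key=lambda cb: cb[0])
assert ranked[0][1] == A and ranked[0][0] < ranked[1][0]  # uniquely optimal on k
```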
## 5 Few orderings are essential
In this section, we show that only a few of the orderings are essential by identifying the set of essential orderings for a matrix chain of length \(n\).
Let \(\mathcal{E}_{n}\) denote the set of orderings of the form
\[(M_{1}(M_{2}\cdots(M_{h-1}M_{h})\cdots))\;\big{|}\;((\cdots(M_{h+1}M_{h+2}) \cdots M_{n-1})M_{n}),\]
where \(h\in\{0,1,\ldots,n\}\). For \(h=0\) the ordering is left-to-right; for \(h=n\) it is right-to-left. Due to the pattern of association fanning out in both directions from the dividing line at dimension \(k_{h}\), we say that these orderings _fan out_ from dimension \(k_{h}\). We let \(\mathcal{E}_{n,h}\in\mathcal{E}_{n}\) denote the ordering that fans out from \(k_{h}\). There are \(n+1\) orderings of this type. For \(n\geq 4\) they are all distinct; due to symmetry, however, there are only \(n-1\) distinct orderings for \(n\leq 3\). More precisely,
\[|\mathcal{E}_{n}|=\begin{cases}n-1&\text{if }n\leq 3,\\ n+1&\text{otherwise.}\end{cases}\]
We will show that all orderings in \(\mathcal{E}_{n}\) are essential (Definition 1) and that no other orderings are essential. Therefore, \(\mathcal{E}_{n}\) is _the_ set of essential orderings for a matrix chain of length \(n\).
The following lemma shows that all orderings in \(\mathcal{E}_{n}\) are essential.
**Lemma 5.1**: _For any \(r\in\mathbb{R}^{+}\) and \(\mathcal{E}_{n,h}\in\mathcal{E}_{n}\), there are infinitely many instances \(\boldsymbol{k}\) for which \(P(\{\mathcal{E}_{n,h}\},\boldsymbol{k})>r\). \({}_{\Box}\)_
Proof: Consider an instance of the form \(\mathbf{k}=(\alpha,\ldots,\alpha,1,\alpha,\ldots,\alpha)\), where \(\alpha\) is a positive integer greater than one and \(k_{h}=1\). The cost of \(\mathcal{E}_{n,h}\) on \(\mathbf{k}\) is
\[T(\mathcal{E}_{n,h},\mathbf{k})=\sum_{i=1}^{n-1}1\cdot\alpha\cdot\alpha=(n-1)\alpha^{2}.\]
Only \(\mathcal{E}_{n,h}\) has \(k_{h}\) appearing in each of the \(n-1\) terms of the cost function. Any other ordering has somewhere between \(1\) and \(n-2\) terms in its cost function containing \(k_{h}\). Since \(\alpha>1\), the cost is minimised by maximising the number of terms containing \(k_{h}\). Therefore, let \(\mathcal{B}\neq\mathcal{E}_{n,h}\) be an ordering for which \(k_{h}\) appears in \(n-2\) terms of the cost function. Its cost on \(\mathbf{k}\) is
\[T(\mathcal{B},\mathbf{k})=\alpha\cdot\alpha\cdot\alpha+\sum_{i=1}^{n-2}1\cdot\alpha\cdot\alpha=\alpha^{3}+(n-2)\alpha^{2}.\]
Such an ordering always exists. The penalty of removing \(\mathcal{E}_{n,h}\) on \(\mathbf{k}\) is
\[P(\{\mathcal{E}_{n,h}\},\mathbf{k}) =\frac{\min_{\mathcal{A}\in\mathcal{S}_{n}\setminus\{\mathcal{E} _{n,h}\}}T(\mathcal{A},\mathbf{k})}{T(\mathcal{E}_{n,h},\mathbf{k})}-1=\frac{T( \mathcal{B},\mathbf{k})}{T(\mathcal{E}_{n,h},\mathbf{k})}-1\] \[=\frac{\alpha^{3}+(n-2)\alpha^{2}}{(n-1)\alpha^{2}}-1=\frac{ \alpha}{n-1}+\frac{n-2}{n-1}-1=\Theta(\alpha).\]
Since the penalty grows without bound as \(\alpha\to\infty\), for any \(r\in\mathbb{R}^{+}\) we can choose \(\alpha\) large enough to make the penalty greater than \(r\). By Lemma 4.1, there are infinitely many instances with the same penalty. \({}_{\blacksquare}\)
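This growth is easy to observe numerically. Our sketch below builds the fan-out tree for \(\mathcal{E}_{n,h}\), brute-forces every other ordering (so it is only practical for small \(n\)), and prints the penalty, which grows linearly in \(\alpha\).

```python
from itertools import product

def all_orderings(a, b):                     # every full parenthesization
    if a == b:
        yield a
        return
    for m in range(a, b):
        for lt, rt in product(all_orderings(a, m), all_orderings(m + 1, b)):
            yield (lt, rt)

def cost(tree, k):
    def walk(t):
        if isinstance(t, int):
            return t - 1, t, 0
        (a, m, cl), (_, b, cr) = walk(t[0]), walk(t[1])
        return a, b, cl + cr + k[a] * k[m] * k[b]
    return walk(tree)[2]

def fanout_tree(n, h):                       # the ordering E_{n,h} as a tuple
    def rnest(a, b):                         # M_a (M_{a+1} (... (M_{b-1} M_b)))
        return a if a == b else (a, rnest(a + 1, b))
    def lnest(a, b):                         # ((M_a M_{a+1}) ...) M_b
        return b if a == b else (lnest(a, b - 1), b)
    if h == 0:
        return lnest(1, n)
    if h == n:
        return rnest(1, n)
    return (rnest(1, h), lnest(h + 1, n))

n, h = 6, 3
E = fanout_tree(n, h)
for alpha in (10, 100, 1000):
    k = [alpha] * (n + 1)
    k[h] = 1                                 # k = (alpha, ..., alpha, 1, alpha, ...)
    best_other = min(cost(B, k) for B in all_orderings(1, n) if B != E)
    print(alpha, best_other / cost(E, k) - 1)   # (alpha - 1)/(n - 1): linear growth
```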
Lemma 5.1 shows that every ordering in \(\mathcal{E}_{n}\) is essential. However, this does not imply that every other ordering is non-essential. The following lemma shows that an ordering \(\mathcal{E}_{n,h}\), where \(h\) is chosen such that \(k_{h}\) is the smallest element in \(\mathbf{k}\), has a cost that is strictly less than twice the optimal cost. The result is due to Chandra [4, Theorem 3]. We improve on the exposition of the proof.
**Lemma 5.2**: _Let \(\mathcal{A}\) be an optimal ordering on instance \(\mathbf{k}\), and \(m=\arg\min_{i}k_{i}\). Then \(T(\mathcal{E}_{n,m},\mathbf{k})<2T(\mathcal{A},\mathbf{k})\). \({}_{\Box}\)_
Proof: The case \(n\leq 2\) is trivial, so assume \(n\geq 3\). Let \(T_{\mathrm{opt}}:=T(\mathcal{A},\mathbf{k})\). We show that \(T(\mathcal{E}_{n,m},\mathbf{k})<D\leq 2T_{\mathrm{opt}}\), where \(D\) is the conveniently chosen sum
\[D:=\sum_{i\in\{0,1,\ldots,n\}}\phi_{\mathbf{k}}(i-1,i,m).\]
Here, and in the rest of the proof, all arguments to \(\phi_{\mathbf{k}}\) are taken modulo \(n+1\). For instance, the term in \(D\) for \(i=0\) is \(\phi_{\mathbf{k}}(n,0,m)\). We will first show that \(D\leq 2T_{\mathrm{opt}}\) (Step 1) and then show that \(T(\mathcal{E}_{n,m},\mathbf{k})<D\) (Step 2). We first make an observation.
**Observation 1.** Every matrix \(M_{i}\) appears in one and only one multiplication. If \(M_{i}\) appears as the left operand (\(M_{i}X\) for some \(X=\prod_{j=i+1}^{z}M_{j}\)), then the term \(\phi_{\mathbf{k}}(i-1,i,z)\) appears in the cost. In this case, the dimension \(k_{i}\) is eliminated and hence no other term can contain the argument \(i\). If instead \(M_{i}\) appears as the right operand (\(XM_{i}\) for some \(X=\prod_{j=z}^{i-1}M_{j}\)), then the term \(\phi_{\mathbf{k}}(z,i-1,i)\) appears in the cost. In this case, the dimension \(k_{i-1}\) is eliminated and hence no other term can contain the argument \(i-1\). In conclusion, for any ordering there is one and only one term in its cost function that contains both \(i-1\) and \(i\) for \(i\in\{1,2,\ldots,n\}\).
The last multiplication in any ordering takes the form \((M_{1}\cdots M_{z})(M_{z+1}\cdots M_{n})\) for some \(1\leq z\leq n-1\). This multiplication adds the term \(\phi_{\boldsymbol{k}}(0,z,n)=\phi_{\boldsymbol{k}}(n,0,z)\) (mind the modulo and rotation in the arguments) to the cost. As a result, for each \(i\in\{0,1,\ldots,n\}\), \(T_{\mathrm{opt}}\) has one and only one term of the form \(\phi_{\boldsymbol{k}}(i-1,i,z)\).
**Step 1.**\(D\leq 2T_{\mathrm{opt}}\). The proof is based on a matching of every term in \(D\) with some term in \(2T_{\mathrm{opt}}\) in such a way that each term in \(D\) is less than or equal to its matched term in \(2T_{\mathrm{opt}}\). We can then conclude that \(D\leq 2T_{\mathrm{opt}}\).
Let \(\sigma\) be the function that for each \(i\in\{0,1,\ldots,n\}\) maps a term of the form \(\phi_{\boldsymbol{k}}(i-1,i,m)\) in \(D\) to a term of the form \(\phi_{\boldsymbol{k}}(i-1,i,z)\) in \(T_{\mathrm{opt}}\). As a consequence of Observation 1, the function is well-defined, since there is one and only one term of the specified form in \(T_{\mathrm{opt}}\).
Consider any term \(t\) in \(T_{\mathrm{opt}}\). We show that there are only three cases regarding how \(\sigma\) maps terms in \(D\) to terms in \(T_{\mathrm{opt}}\):
* \(\sigma\) maps _none_ of the terms in \(D\) to \(t\). This happens if and only if there is no pair of consecutive indices in \(t\). That is, if \(t=\phi_{\boldsymbol{k}}(a,b,c)\) then \(a+1\not\equiv b\pmod{n+1}\), \(b+1\not\equiv c\pmod{n+1}\), and \(c+1\not\equiv a\pmod{n+1}\).
* \(\sigma\) maps _one_ term in \(D\) to \(t\). This happens if and only if there is exactly one consecutive pair of indices in \(t\).
* \(\sigma\) maps _two_ terms in \(D\) to \(t\). This happens if and only if \(t\) is of the form \(\phi_{\boldsymbol{k}}(i-1,i,i+1)\) for some \(i\) (modulo \(n+1\)).
The function \(\sigma\) cannot map _more_ than two terms in \(D\) to \(t\) when \(n\geq 3\) since the system of congruences \(a+1\equiv b\pmod{n+1}\), \(b+1\equiv c\pmod{n+1}\), and \(c+1\equiv a\pmod{n+1}\) does not have a solution when \(n\geq 3\). (If \(n=2\), then \(a,b,c=0,1,2\) satisfies the conditions and there are _three_ pairs of consecutive indices: \((0,1)\), \((1,2)\), and \((0,2)\).) This way, \(\sigma\) maps _at most_ two terms in \(D\) to each term in \(T_{\mathrm{opt}}\). Since \(\sigma\) is well-defined, every term in \(D\) is mapped to some term in \(T_{\mathrm{opt}}\), but not every term in \(T_{\mathrm{opt}}\) is necessarily mapped to by any term in \(D\).
Since \(2T_{\mathrm{opt}}=T_{\mathrm{opt}}+T_{\mathrm{opt}}\) has two copies of each term and \(\sigma\) never maps _more_ than two terms in \(D\) to the same term in \(T_{\mathrm{opt}}\), each term in \(D\) can be matched with a term in \(2T_{\mathrm{opt}}\) such that no term in the latter is matched more than once. Each pair established in this way has the form \(\phi_{\boldsymbol{k}}(i-1,i,m)\) in \(D\) and \(\phi_{\boldsymbol{k}}(i-1,i,z)\) in \(T_{\mathrm{opt}}\). Since \(k_{m}\) is the smallest element in \(\boldsymbol{k}\) and the function \(\phi\) is monotonically non-decreasing (recall Definition 3.1), \(\phi_{\boldsymbol{k}}(i-1,i,m)\leq\phi_{\boldsymbol{k}}(i-1,i,z)\). Therefore, \(D\leq 2T_{\mathrm{opt}}\).
**Step 2.**\(T(\mathcal{E}_{n,m},\boldsymbol{k})<D\). The cost of \(\mathcal{E}_{n,m}\) on \(\boldsymbol{k}\) is given by
\[T(\mathcal{E}_{n,m},\boldsymbol{k})=\sum_{i\in\{0,1,\ldots,n\}\setminus\{m,m+ 1\}}\phi_{\boldsymbol{k}}(i-1,i,m).\]
Every term in \(T(\mathcal{E}_{n,m},\boldsymbol{k})\) also appears in \(D\). Since \(D\) has strictly more terms, \(T(\mathcal{E}_{n,m},\boldsymbol{k})<D\).
By combining Steps 1 and 2, it follows that \(T(\mathcal{E}_{n,m},\boldsymbol{k})<2T_{\mathrm{opt}}\). \({}_{\blacksquare}\)
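The bound is also easy to confirm numerically. The sketch below is ours: it computes \(T_{\mathrm{opt}}\) with the standard interval dynamic program and \(T(\mathcal{E}_{n,m},\boldsymbol{k})\) with the closed form from Step 2, which by the same accounting holds for any fan-out index \(h\), not only \(m\).

```python
import random
from functools import lru_cache

def T_opt(k):
    n = len(k) - 1
    @lru_cache(maxsize=None)
    def best(a, b):                 # optimal cost of M_{a+1} ... M_b (dims k_a..k_b)
        if b - a == 1:
            return 0
        return min(best(a, m) + best(m, b) + k[a] * k[m] * k[b]
                   for m in range(a + 1, b))
    return best(0, n)

def T_fanout(k, h):                 # closed form: sum of phi(i-1, i, h), i not in {h, h+1}
    n = len(k) - 1
    return sum(k[(i - 1) % (n + 1)] * k[i] * k[h]
               for i in range(n + 1) if i not in (h, (h + 1) % (n + 1)))

random.seed(0)
for _ in range(10_000):
    k = [random.randint(1, 1000) for _ in range(9)]     # n = 8
    m = k.index(min(k))
    assert T_fanout(k, m) < 2 * T_opt(k)                # the lemma's strict bound
```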
Notice that Lemma 5.2 holds as long as \(\phi\) is monotonically non-decreasing and has rotational symmetry in its arguments, since the proof is based on matching terms from different cost expressions, not on the actual evaluation of \(\phi\).
Figure 1 illustrates the two layers of matching of terms in the proof of Lemma 5.2 for the case \(m=3\) and the optimal ordering \(((M_{1}M_{2})((M_{3}(M_{4}M_{5}))M_{6}))(M_{7}M_{8})\). The terms of \(2T_{\mathrm{opt}}\) are on the top two rows, the terms of \(D\) are on the middle row, and the terms of \(T(\mathcal{E}_{n,m},\boldsymbol{k})\) are depicted at the bottom. Every term of \(D\) has been matched (one-to-one) using \(\sigma\) with a term of \(2T_{\mathrm{opt}}\) that is equal to or greater than it. Similarly, every term of \(T(\mathcal{E}_{n,m},\boldsymbol{k})\) has been matched with its identical term in \(D\).
The following theorem shows that the penalty of removing every ordering _not_ in \(\mathcal{E}_{n}\) is bounded by \(100\%\). In other words, only the orderings in \(\mathcal{E}_{n}\) are essential. Even though all other orderings are useful according to Theorem 4.3, the penalty of removing them is low (no worse than twice the optimal cost) even in the worst case.
**Theorem 5.3**: _The maximum penalty of removing every ordering not in \(\mathcal{E}_{n}\) is below \(100\%\), i.e., \(\max_{\boldsymbol{k}}P(\mathcal{S}_{n}\setminus\mathcal{E}_{n},\boldsymbol{k})<100\%\). \({}_{\Box}\)_
Proof: Let \(m=\arg\min_{i}k_{i}\). Then
\[P(\mathcal{S}_{n}\setminus\mathcal{E}_{n},\boldsymbol{k}) =\frac{\min_{\mathcal{B}\in\mathcal{S}_{n}\setminus(\mathcal{S}_{ n}\setminus\mathcal{E}_{n})}T(\mathcal{B},\boldsymbol{k})}{\min_{\mathcal{A} \in\mathcal{S}_{n}}T(\mathcal{A},\boldsymbol{k})}-1=\frac{\min_{\mathcal{B} \in\mathcal{E}_{n}}T(\mathcal{B},\boldsymbol{k})}{\min_{\mathcal{A}\in \mathcal{S}_{n}}T(\mathcal{A},\boldsymbol{k})}-1\] \[\leq\frac{T(\mathcal{E}_{n,m},\boldsymbol{k})}{\min_{\mathcal{A }\in\mathcal{S}_{n}}T(\mathcal{A},\boldsymbol{k})}-1<2-1=100\%.\]
The final inequality follows from Lemma 5.2. \({}_{\blacksquare}\)
The following theorem establishes our main result.
**Theorem 5.4**: _All orderings in \(\mathcal{E}_{n}\) are essential and no other orderings are essential. \({}_{\Box}\)_
Proof: By Lemma 5.1, all orderings in \(\mathcal{E}_{n}\) are essential. By Theorem 5.3, no ordering in \(\mathcal{S}_{n}\setminus\mathcal{E}_{n}\) is essential. \({}_{\blacksquare}\)
## 6 Experiments
The theoretical results presented in the previous two sections establish that \(\mathcal{E}_{n}\) is _the_ set of essential orderings and that removing every other ordering leads to a penalty bounded above by \(100\%\). However, the theoretical analysis does not say anything about _how often_ one of the essential algorithms is optimal. Nor does it shed any light on the _average_ penalty caused by removing all non-essential orderings. To address these quantitative questions, we performed the following experiment.
For all \(n\in\{2,3,\ldots,9\}\), we randomly sampled one million instances (with replacement) of a matrix chain multiplication problem of length \(n\). Each dimension \(k_{i}\), for \(i=0,1,\ldots,n\), was chosen at random from a uniform distribution from \(1\) to \(1000\). For each instance \(\boldsymbol{k}\), we recorded the penalty of removing every non-essential ordering, i.e., \(P(\mathcal{S}_{n}\setminus\mathcal{E}_{n},\boldsymbol{k})\). Let \(\mathcal{P}_{n}\) denote the set of samples (\(|\mathcal{P}_{n}|=10^{6}\)), and let \(\mathcal{P}_{n}^{+}\subseteq\mathcal{P}_{n}\) denote the subset of samples with a non-zero penalty.
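The experiment can be replicated in a few lines. The sketch below is ours and uses a smaller sample size than the paper's \(10^{6}\); the printed fraction should land near the \(12.2\%\) reported for \(n=9\) in Table 1.

```python
import random
from functools import lru_cache

def T_opt(k):                                   # interval DP, as in the earlier sketch
    n = len(k) - 1
    @lru_cache(maxsize=None)
    def best(a, b):
        if b - a == 1:
            return 0
        return min(best(a, m) + best(m, b) + k[a] * k[m] * k[b]
                   for m in range(a + 1, b))
    return best(0, n)

def T_fanout(k, h):                             # cost of the fan-out ordering E_{n,h}
    n = len(k) - 1
    return sum(k[(i - 1) % (n + 1)] * k[i] * k[h]
               for i in range(n + 1) if i not in (h, (h + 1) % (n + 1)))

random.seed(1)
n, trials, nonzero = 9, 10_000, []
for _ in range(trials):
    k = [random.randint(1, 1000) for _ in range(n + 1)]
    best_essential = min(T_fanout(k, h) for h in range(n + 1))
    opt = T_opt(k)
    if best_essential > opt:                    # penalty is non-zero
        nonzero.append(best_essential / opt - 1)
print(f"non-zero fraction: {len(nonzero) / trials:.3f}")
if nonzero:
    print(f"mean non-zero penalty: {sum(nonzero) / len(nonzero):.3f}")
```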
The last row of Table 1 shows the fraction of the samples for which the penalty of removing every non-essential ordering was non-zero. For all other samples, the penalty is zero, which implies that one or more of the essential orderings is optimal. These results reveal that for most instances (\(>87\%\) when \(n\leq 9\)), an essential ordering was optimal. That is, not only are these orderings essential, but also one of them (not always the same one) is almost always optimal. The fraction of samples with non-zero penalty increases with \(n\) and appears to be asymptotically bounded by some small constant. However, we have been unable to prove that this is the case as \(n\to\infty\).
Figure 2 shows a violin plot of the sample distributions of the penalty restricted
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \(n\) & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline \(|\mathcal{S}_{n}|\) & 1 & 2 & 5 & 14 & 42 & 132 & 429 & 1430 \\ \(|\mathcal{E}_{n}|\) & 1 & 2 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline \(|\mathcal{P}_{n}^{+}|/|\mathcal{P}_{n}|\) & 0\% & 0\% & 0\% & 7.9\% & 10.1\% & 11.2\% & 11.8\% & 12.2\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Fraction of sampled instances where removing every non-essential ordering resulted in a non-zero penalty.
Figure 2: Estimated distributions of the penalty when every non-essential ordering is removed for uniform random sampling and \(n\in\{5,6,\ldots,9\}\). The cases with a penalty of \(0\%\) are not included.
to \(\mathcal{P}_{n}^{+}\) for \(n\in\{5,6,\ldots,9\}\) (for \(n\leq 4\), every ordering is essential, so \(\mathcal{P}_{n}^{+}\) is empty). Theorem 5.3 implies that the penalty of removing every non-essential ordering is bounded above by \(100\%\), but as these results reveal, in practice the penalty is often much closer to \(0\%\). The mean (non-zero) penalty is \(<4\%\) for each \(n\) in Figure 2. For \(>99\%\) of the samples in \(\mathcal{P}_{n}^{+}\), the penalty is \(<20\%\) (for each \(n\)). In the few cases (around \(10\%\)) where the penalty is non-zero, the penalty is almost always \(<20\%\), which is far better than the theoretical upper bound of \(100\%\) (Theorem 5.3). If we consider all samples (i.e., \(\mathcal{P}_{n}\) instead of \(\mathcal{P}_{n}^{+}\)), then in \(>99\%\) of the samples, the penalty is \(<10\%\).
## 7 Conclusions
In this article, we consider the problem of multiplying a chain of \(n\) matrices, and study how the \(C_{n-1}\) orderings behave on the input space. We posed a series of interrelated questions whose answers have implications for linear algebra compilers in the common scenario where the sizes of the matrices are not known at compile time. First, we showed that every ordering out of the exponentially many is useful, i.e., uniquely optimal, on an infinite number of instances. Thus, a linear algebra compiler that aims at generating optimal code for all instances cannot discard any of the exponentially many orderings.
Assuming that generating such code is infeasible or that a certain loss in performance is acceptable, the set of orderings can and must be reduced. The selected reduced set of orderings should contain those that for some instance can be uniquely optimal and arbitrarily cheaper than any other, if such orderings exist. We proved that only \(n+1\) orderings fall within this category, i.e., are essential, and that the penalty for discarding every non-essential ordering is bounded above by \(100\%\). Therefore, if the subset of orderings is carefully chosen, by retaining \(n+1\) orderings and discarding all the other exponentially many, a compiler can generate code that never performs more than double the optimal number of operations.
Additionally, we also presented experimental evidence indicating that (for \(n\leq 9\)) on most instances (\(>87\%\)) some essential ordering is optimal. Moreover, in the few instances where no essential ordering is optimal, the penalty is in practice much closer to \(0\%\) than to its theoretical upper bound of \(100\%\), with an average penalty below \(4\%\). These results suggest that when the set of orderings is reduced from the \(C_{n-1}\) possible ones to the linearly many essential ones and one of them is selected, the resulting order of execution is optimal for most combinations of operand sizes, or incurs a negligible increase over the optimal cost.
In summary, although every ordering for the matrix chain is uniquely optimal for some instance, a compiler for linear algebra expressions can constrain the code generation to only the \(n+1\) essential orderings with guarantees that the evaluation is almost always performed with near-optimal cost.
|
2301.06426 | Neighborhood-based Hypergraph Core Decomposition | We propose neighborhood-based core decomposition: a novel way of decomposing
hypergraphs into hierarchical neighborhood-cohesive subhypergraphs. Alternative
approaches to decomposing hypergraphs, e.g., reduction to clique or bipartite
graphs, are not meaningful in certain applications, the later also results in
inefficient decomposition; while existing degree-based hypergraph decomposition
does not distinguish nodes with different neighborhood sizes. Our case studies
show that the proposed decomposition is more effective than degree and clique
graph-based decompositions in disease intervention and in extracting provably
approximate and application-wise meaningful densest subhypergraphs. We propose
three algorithms: Peel, its efficient variant E-Peel, and a novel local
algorithm: Local-core with parallel implementation. Our most efficient parallel
algorithm Local-core(P) decomposes hypergraph with 27M nodes and 17M hyperedges
in-memory within 91 seconds by adopting various optimizations. Finally, we
develop a new hypergraph-core model, the (neighborhood, degree)-core by
considering both neighborhood and degree constraints, design its decomposition
algorithm Local-core+Peel, and demonstrate its superiority in spreading
diffusion. | Naheed Anjum Arafat, Arijit Khan, Arpit Kumar Rai, Bishwamittra Ghosh | 2023-01-16T13:31:01Z | http://arxiv.org/abs/2301.06426v2 | # Neighborhood-based Hypergraph Core Decomposition
###### Abstract.
We propose _neighborhood-based core decomposition_: a novel way of decomposing hypergraphs into hierarchical neighborhood-cohesive subhypergraphs. Alternative approaches to decomposing hypergraphs, e.g., reduction to clique or bipartite graphs, are not meaningful in certain applications; the latter also results in inefficient decomposition, while existing degree-based hypergraph decomposition does not distinguish nodes with different neighborhood sizes. Our case studies show that the proposed decomposition is more effective than degree and clique graph-based decompositions in disease intervention and in extracting provably approximate and application-wise meaningful densest subhypergraphs. We propose three algorithms: **Peel**, its efficient variant **E-Peel**, and a novel local algorithm: **Local-core** with parallel implementation. Our most efficient parallel algorithm **Local-core(P)** decomposes a hypergraph with 27M nodes and 17M hyperedges in-memory within 91 seconds by adopting various optimizations. Finally, we develop a new hypergraph-core model, the _(neighborhood, degree)-core_, by considering both neighborhood and degree constraints, design its decomposition algorithm **Local-core+Peel**, and demonstrate its superiority in spreading diffusion.
|
2308.12158 | A Visualization System for Hexahedral Mesh Quality Study | In this paper, we introduce a new 3D hex mesh visual analysis system that
emphasizes poor-quality areas with an aggregated glyph, highlights overlapping
elements, and provides detailed boundary error inspection in three forms. By
supporting multi-level analysis through multiple views, our system effectively
evaluates various mesh models and compares the performance of mesh generation
and optimization algorithms for hexahedral meshes. | Lei Si, Guoning Chen | 2023-08-23T14:18:56Z | http://arxiv.org/abs/2308.12158v2 | # A Visualization System for Hexahedral Mesh Quality Study
###### Abstract
In this paper, we introduce a new 3D hex mesh visual analysis system that emphasizes poor-quality areas with an aggregated glyph, highlights overlapping elements, and provides detailed boundary error inspection in three forms. By supporting multi-level analysis through multiple views, our system effectively evaluates various mesh models and compares the performance of mesh generation and optimization algorithms for hexahedral meshes.
hex-mesh analysis, mesh quality visualization
generated meshes. Many finite element simulation software packages [32] also offer some mesh generation and processing functionality with limited visualization capability (e.g., wire-frame representation of the meshes, color coding quality values, and color coding different patches). Recently, two works for the visualization and analysis of hexahedral mesh quality have been reported, i.e., HexaLab [5] and the hex-mesh structure evaluation and visualization [45]. Both works focus on effective visual representation and high-quality rendering to aid the inspection of 3D meshes produced by different methods. However, neither tool provides split screens to compare two meshes of the same model produced by different techniques. Also, boundary error analysis and overlapping element analysis are not offered by either tool. Most recently, a focus+context volume rendering technique [33] has been introduced for the inspection of hexahedral meshes, which allows the user to focus on regions with bad-quality elements without occlusion or cluttering. However, this technique may not reveal bad elements with small sizes (Figure 1 (b)) and does not address boundary error analysis. In this work, we present a comprehensive visual analysis system that can support the quality and boundary error analysis of hex-meshes.
## 3 The Design of HQView
HQView, a multi-view visual analysis system (Section 3.3), offers various representations for 3D hexahedral mesh quality information. It computes a quality metric [20] for each element corner as a basis for visualization. Besides traditional color encoding and histograms, HQView provides additional functionality to achieve goals G1-G4 listed in the introduction.
### A Glyph Design for Bad-Quality Elements (G1)
Element quality is a crucial mesh quality indicator. Traditional color maps work well for 2D meshes but face occlusion issues in 3D hex meshes. Alternative methods like filtering in HexaLab [5], focus+context volume rendering [23], or magic lens techniques have limitations (see Fig. 1). To achieve size-independent visual encoding, we introduce a glyph design mapping quality values to spheres, with radius encoding the quality value (i.e., the scaled Jacobian metric).
Instead of using a quality value to describe the regularity of a 3D cell, we calculate the quality of the corners shared by the one-ring neighborhood of a vertex. The smallest quality value is selected to represent the quality of the vertex. The radius of the sphere glyph at each vertex is then determined by the quality value. We define an inverse relationship between the quality of the vertex and the radius of its sphere glyph such that the vertices with bad-quality elements have prominent sphere glyphs that can be easily identified. The inset to the right illustrates this mapping where \(J_{i}\) represents the scaled Jacobian value at corner \(i\). To support the metric mapping to the radius of the sphere, we map it to the range of \([0,2]\) as follows.
\[c=1-J_{m} \tag{1}\]
where \(J_{m}\) is the smallest scaled Jacobian of a vertex. \(c\) has an inverse relation with \(J_{m}\). That is, if \(J_{m}=1\) (the best scaled Jacobian), \(c=0\), while \(c=2\) when \(J_{m}=-1\) (the worst scaled Jacobian). The radius of the sphere centered at the vertex is then controlled by \(c\). In particular, \(r=c\cdot r_{max}\), where \(r_{max}\) is a user-controllable parameter to enhance the visibility of the sphere glyphs. Using this strategy, the vertices surrounded by bad-quality elements can be highlighted, regardless of their element sizes.
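A minimal sketch of this mapping follows (ours; it assumes the per-corner scaled Jacobians incident to each vertex have already been computed by a mesh library).

```python
def glyph_radii(corner_jacobians, r_max=1.0):
    """corner_jacobians: vertex id -> scaled Jacobians (in [-1, 1]) of the
    hex corners incident to that vertex; returns the glyph radius per vertex."""
    radii = {}
    for v, js in corner_jacobians.items():
        j_min = min(js)          # worst corner in the one-ring neighborhood
        c = 1.0 - j_min          # Eq. (1): c in [0, 2], larger = worse
        radii[v] = c * r_max     # bigger sphere highlights worse quality
    return radii

# toy example: vertex 7 touches corners with Jacobians 0.9 and -0.2
print(glyph_radii({7: [0.9, -0.2], 8: [1.0, 0.98]}, r_max=0.5))
# -> roughly {7: 0.6, 8: 0.01}: the near-inverted vertex gets the large glyph
```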
The above glyph-based visualization can lead to clutter and overlap in areas with many small and bad-quality elements as they lead to large spheres packed in a small region (e.g., Fig. 6 (b) and Fig. 2 (a)). In addition, rendering sphere glyphs for all vertices of a large mesh is time-consuming. To address these issues, we introduce a clustering-based glyph aggregating strategy (Fig. 2 (b)).
Our clustering strategy groups nearby vertices based on the spatial proximity of their glyphs. Specifically, if two glyphs overlap (i.e., the sum of their radii exceeds the distance between their centers), the corresponding vertices belong to the same cluster. The aggregated glyph is positioned at the vertex in the cluster closest to the concentration of cluster vertices. We also employ different colors for edges within distinct clusters. Figure 4 (view A) displays this aggregated visualization, which effectively directs the user's attention to regions with poor-quality elements for further analysis (Section 3.3).
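One possible implementation of the grouping rule is sketched below (ours). It performs the pairwise \(O(n^{2})\) overlap test described in Section 5 and uses a small union-find only to collect the resulting connected components; the second helper picks the aggregated glyph's vertex as the one closest to the cluster centroid.

```python
import numpy as np

def cluster_glyphs(centers, radii):
    """Two spheres join the same cluster when they overlap, i.e. when
    r_i + r_j exceeds the distance between their centers."""
    centers, radii = np.asarray(centers, float), np.asarray(radii, float)
    n = len(radii)
    parent = list(range(n))
    def find(i):                                  # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):                            # O(n^2) pairwise overlap test
        for j in range(i + 1, n):
            if np.linalg.norm(centers[i] - centers[j]) < radii[i] + radii[j]:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

def representative(cluster, centers):
    """Vertex of the cluster closest to its centroid, where the
    aggregated glyph is placed."""
    pts = np.asarray(centers, float)[cluster]
    return cluster[int(np.argmin(np.linalg.norm(pts - pts.mean(axis=0), axis=1)))]
```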
### Overlapping Element Highlighting (G2)
Overlapping elements can be hard to identify, as they may have good quality or be small, making them difficult to distinguish from other bad-quality elements. Two types of overlapping elements can arise (1) overlapping vertices and (2) overlapping cells.
_Overlapping vertices_ occur in regions with near-degenerate meshes that don't exhibit inverted elements, like those with zero-length edges. This degenerate configuration is hard to detect and can cause issues during finite element calculations. _Overlapping cells_ are more common and occur when a vertex of a cell moves inside another cell due to mesh operations. These overlapping elements need to be identified and optimized, as they alter calculations in the coordinate system.
Current visualization tools struggle to effectively highlight mesh regions with either overlapping configuration. Our visualization system uses an arrow placement strategy to highlight overlapping vertices (Fig. 3(a)) and transparent shaded rendering for overlapping cells (Fig. 3(b)). In a scenario where a location contains two overlapping vertices, a pair of arrows will be generated, each pointing towards a vertex. If \(n\) arrows point to the same vertex, they will be evenly distributed around the vertex at intervals of \(360/n\) degrees.
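The even layout reduces to placing the \(n\) arrow tails on a circle around the shared location; a tiny sketch (ours, restricted to the view plane for simplicity):

```python
import numpy as np

def arrow_tails(target, n, offset=1.0):
    """Tail positions for n arrows pointing at `target`,
    spaced 360/n degrees apart on a circle of radius `offset`."""
    angles = np.arange(n) * 2.0 * np.pi / n
    ring = np.stack([np.cos(angles), np.sin(angles), np.zeros(n)], axis=1)
    return np.asarray(target, float) + offset * ring

print(arrow_tails(np.zeros(3), n=4))   # four tails, 90 degrees apart
```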
### Multi-level Element Quality Analysis (G3)
Our visual analysis system supports mesh quality inspection at multiple levels. It consists of six views (Fig. 4): A. main view, B. overall vertex quality chart, C. sub-region view, D. local element view, E. selected point quality chart, and F. reference view.
**A. Main View** visualizes mesh quality using glyphs and highlights overlapping elements. It also displays a quality histogram.
**B. Overall Vertex Quality Chart** sorts vertices by their quality and supports selection for further analysis.
Figure 3: Two different overlapping situations may arise (2D illustration): (a) overlapping vertices caused by degenerated edges, (b) overlapping quads caused by one vertex falling inside another quad. Existing visualization (left image of each pair) cannot effectively highlight the overlapping in either case. In contrast, we use arrow(s) to intuitively direct the viewers toward places with overlap (right image of each pair).
Figure 2: The aggregated glyphs (b) reduce the clutter of non-aggregated glyphs (a).
**C. Sub-region View** shows individual elements within clusters for a more detailed study.
**D. Local Element View** displays the one-ring neighborhood of a selected vertex, offering a thorough quality analysis.
**E. Selected Point Quality Chart** provides quality values of all corners shared by the selected vertex, helping to identify local configuration issues.
**F. Reference View** enables comparison of the main mesh with a ground truth or other algorithms' results.
### Multi-Dimensional Views for Boundary Difference Visualization (G4)
Analyzing the boundary error of 3D meshes is difficult due to occlusion, requiring users to choose different views around the mesh. To efficiently analyze global boundary error information without changing viewpoints, our system offers a comprehensive boundary error analysis supported by a multi-view visualization for 3D meshes.
The boundary error is calculated for each surface point based on the closest point on the original mesh. Positive boundary error values (red) indicate modified mesh points lying outside the original mesh, while negative values (blue) indicate points inside the original mesh (Figure 5 (a)). To provide a holistic visualization of the boundary error without requiring the user to rotate the model, we use a UV map of the surface that is extracted from the original mesh using OptCuts [24]. The UV map unfolds the curved surface to a planar representation.
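A sketch of this computation follows (ours). A production implementation would query the exact closest point on the original surface's triangles; here we approximate it by the nearest original vertex and take the sign from its outward normal.

```python
import numpy as np
from scipy.spatial import cKDTree

def boundary_error(mod_pts, orig_pts, orig_normals):
    """Per-vertex signed boundary error of the modified mesh surface:
    positive outside the original surface (red), negative inside (blue)."""
    mod_pts = np.asarray(mod_pts, float)
    orig_pts = np.asarray(orig_pts, float)
    orig_normals = np.asarray(orig_normals, float)
    dist, idx = cKDTree(orig_pts).query(mod_pts)   # nearest original vertex
    # sign via the outward normal at the closest original point
    outward = np.einsum('ij,ij->i', mod_pts - orig_pts[idx], orig_normals[idx])
    return np.where(outward >= 0, dist, -dist)
```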
In the example shown in Figure 5 (b), positive errors correspond to surface ridges, while negative errors are at concave areas.
A collated percentage graph is provided for all modified mesh surface vertices, offering an overview of the surface areas inside or outside the original mesh. Users can select individual vertices from the graph for further analysis (Figure 5 (c)).
## 4 Evaluation
We integrate the above comprehensive visualization techniques into one unified system with two separate windows, i.e., the Mesh Quality Analysis Window and the Boundary Error Analysis Window. We applied our system to analyze the quality of the hex-meshes from the HexaLab [5] database and from the hex-mesh structure simplification work [12].
The process of using our system to analyze a hex-mesh is as follows. After loading a hex-mesh, the Mesh Quality Analysis window displays the wire-frame of the mesh, the distribution of individual element quality, and an interactive bar chart ranking vertex quality (Figure 6 (a)). To locate poor-quality elements, glyph representation is activated, using aggregated glyphs to prevent clutter (Figure 6(b)). By focusing on the regions with poor quality, the user can select a large aggregated glyph corresponding to a small cluster and inspect the area in the sub-region view (Figure 6(c), top). A bad-quality vertex can be chosen for detailed inspection, revealing its Jacobian configurations and connectivity configuration (Figure 6(c), middle and bottom). This multi-level, multi-perspective mesh quality analysis process efficiently identifies and analyzes mesh quality issues.
**Element quality analysis.** To evaluate the effectiveness of our proposed visual encoding, we compare our visualizations of a few meshes with those shown by HexaLab [5] and the focus+context volume rendering [33], respectively. Since the other two approaches do not explicitly support the boundary error visualization, our comparison focuses on the element quality analysis and the effective revelation of low-quality elements with different sizes. Figure 1 compares the quality visualization of a warrior hex-mesh using the three approaches. Neither HexaLab nor the focus+context volume rendering can effectively reveal the bad elements at the tips of those protruded features. In contrast, our aggregated glyphs not only highlight those places but also convey how severe the element quality is in those regions via the sizes of the glyphs.
Figure 7 compares the quality visualization of a kitten hex-mesh using the three methods, respectively. From this comparison, we see that among the three approaches, the focus+context volume rendering can provide the smoothest visual representation of the mesh quality. However, if two areas have similar quality, the volume rendering will not effectively distinguish their difference, as humans cannot accurately tell the difference between similar colors if they are not next to each other (e.g., the back and the left ear of the kitten). Also, one may think the quality of the elements in the left ear of the kitten is worse than those in the right ear because they look more prominent. In contrast, HexaLab can provide a more accurate reading on the element quality by filtering. That is, the remaining elements after filtering all have quality lower than a user-specified threshold. However, the difference among these remaining elements is hard to discern (e.g., it is hard to decide which elements are worse than the others in Figure 7(a)). Also, depending on the threshold, other less optimal areas may not be captured (e.g., the top of the kitten and the middle of the tail of the kitten). As a comparison, our aggregated glyph visualization retains most of the areas with bad-quality elements and allows a more effective differentiation of the element quality. For example, the quality of the elements at the back of the neck of the kitten (represented by a big green glyph) is worse than those at the back of the kitten that are prominent in both HexaLab and volume-rendering visualization. Similarly, there is a ring of bad-quality elements at the top of the head of the kitten that is not emphasized by the other two methods due to their small sizes.
**Overlapping cells analysis.** Fig. 8 shows an example of overlapping and intersecting elements in a fandisk mesh. The overlapping vertices (Fig. 8(a)) are difficult to notice without the additional arrows. A similar observation can be made for the intersecting elements (Fig. 8(b)). In addition, the cells involved in the intersection are highlighted, with each element colored distinctly, and an arrow is placed to indicate each intersection vertex.
**Boundary error analysis.** To analyze the boundary error of a mesh, the system loads the modified and original meshes into the Boundary Error Analysis Window, as shown in Figure 6 (d). In the window, the main view displays overlapping surfaces, but occlusion hides
Figure 4: The multi-view interface of our system for the level-of-detail inspection of mesh element quality.
Figure 5: A multi-view interface for the boundary error visualization of a hand hex-meshing.
some error distributions. The UV map provides a comprehensive overview without requiring the user to select different viewpoints, though it lacks 3D context like sharp features and corners. The boundary error distribution doesn't show a consistent pattern near specific feature types. For instance, concave areas corresponding to limb-torso junctions can exhibit both positive and negative errors. Larger errors typically occur where the surface has a large curvature, as these areas are challenging to preserve. The sorted error value bar chart enables quick identification of vertices with large boundary errors and their nature.
**User feedback.** We designed an online survey to gather informal user feedback on our system. Among the 38 responses received, 11 participants identified themselves as mesh experts. The survey consists of 13 questions, 9 of which are designed for mesh element quality analysis, serving three different objectives. The first category of questions asks participants to rank regions by applying their judgment of mesh quality across three different methods. The second category of questions prompts users to identify all problematic areas. The third group of tasks requires users to select the most effective method for highlighting poor-quality regions. Responses to the questions suggest that our methodology is effective in helping users identify regions of poor quality, particularly when dealing with small mesh sizes. Detailed survey feedback can be found in the supplemental document.
## 5 Conclusion
We present a new visual analysis system for the study of the quality of 3D hexahedral meshes. Our system offers simple but effective visual encoding techniques and a multi-view capability to help reveal small elements with low quality and overlapping configurations, and to support the inspection of boundary errors. The evaluation shows that our system outperforms the existing tools in the tasks of locating small elements with low quality, finding overlapping elements in the mesh, and studying boundary error configurations.
To improve our system, we will address the following limitations. First, our aggregated glyph construction requires performing collision detection among nearby spheres. Our current implementation using traversal has a complexity of \(O(n^{2})\). To accelerate, we will adopt a pre-computed tree-like structure, such as a union-find data structure [42]. Second, our system does not suggest the ideal configuration for a comparative study of the bad-quality elements. Third, our arrow placement strategy for highlighting overlapping elements may still produce cluttered arrows in small regions with many overlapping elements. Nonetheless, the cluttered arrows may help draw the attention of the viewers. To reduce clutter, a view-dependent placement of arrows may be explored. Fourth, some models may not be successfully unfolded for boundary error visualization due to the limitations of the UV unfolding algorithm used. Finally, the current user evaluation is rather informal and incomplete. Future work will focus on addressing these limitations while considering other element quality measures [11] and incorporating the visualization of the simulation results run on the respective meshes to provide new insights into the mesh quality and its impact on the downstream tasks.
###### Acknowledgements.
We wish to thank the anonymous reviewers for their constructive feedback that helped improve this work.
Figure 8: Overlapping vertices and intersecting cells in a 3D view are difficult to distinguish, as the lines obstruct their visibility.
Figure 6: A use case of our system for the analysis of a hex-mesh quality.
Figure 7: Comparison of the quality visualization using HexaLab (a), focus+context volume rendering (b), and our methods (c) for a teddy bear hex-mesh (top row) and a kitten hex-mesh (bottom), respectively.
© 2023 IEEE. This is the author's version of the article that has been published in the proceedings of an IEEE/ACM conference. The final version of this record is available at: xx.xxx/TVCG.2013.xx.cc
|
2303.08272 | Automated patent extraction powers generative modeling in focused
chemical spaces | Deep generative models have emerged as an exciting avenue for inverse
molecular design, with progress coming from the interplay between training
algorithms and molecular representations. One of the key challenges in their
applicability to materials science and chemistry has been the lack of access to
sizeable training datasets with property labels. Published patents contain the
first disclosure of new materials prior to their publication in journals, and
are a vast source of scientific knowledge that has remained relatively untapped
in the field of data-driven molecular design. Because patents are filed seeking
to protect specific uses, molecules in patents can be considered to be weakly
labeled into application classes. Furthermore, patents published by the US
Patent and Trademark Office (USPTO) are downloadable and have machine-readable
text and molecular structures. In this work, we train domain-specific
generative models using patent data sources by developing an automated pipeline
to go from USPTO patent digital files to the generation of novel candidates
with minimal human intervention. We test the approach on two in-class extracted
datasets, one in organic electronics and another in tyrosine kinase inhibitors.
We then evaluate the ability of generative models trained on these in-class
datasets on two categories of tasks (distribution learning and property
optimization), identify strengths and limitations, and suggest possible
explanations and remedies that could be used to overcome these in practice. | Akshay Subramanian, Kevin P. Greenman, Alexis Gervaix, Tzuhsiung Yang, Rafael Gómez-Bombarelli | 2023-03-14T23:26:55Z | http://arxiv.org/abs/2303.08272v3 | # Automated patent extraction powers generative modeling in focused chemical spaces
###### Abstract
Deep generative models have emerged as an exciting avenue for inverse molecular design, with progress coming from the interplay between training algorithms and molecular representations. One of the key challenges in their applicability to materials science and chemistry has been the lack of access to sizeable training datasets with property labels. Published patents contain the first disclosure of new materials prior to their publication in journals, and are a vast source of scientific knowledge that has remained relatively untapped in the field of data-driven molecular design. Because patents are filed seeking to protect specific uses, molecules in patents can be considered to be weakly labeled into application classes. Furthermore, patents published by the US Patent and Trademark Office (USPTO) are downloadable and have machine-readable text and molecular structures. In this work, we train domain-specific generative models using patent data sources by developing an automated pipeline to
go from USPTO patent digital files to the generation of novel candidates with minimal human intervention. We test the approach on two in-class extracted datasets, one in organic electronics and another in tyrosine kinase inhibitors. We then evaluate the ability of generative models trained on these in-class datasets on two categories of tasks (distribution learning and property optimization), identify strengths and limitations, and suggest possible explanations and remedies that could be used to overcome these in practice.
## 1 Introduction
The efficient navigation of chemical space for the design of novel candidate molecules has long been of interest to chemists and materials scientists. With the rapid surge of interest in data-driven approaches, deep generative models have emerged as an exciting avenue for inverse molecular design. [1], [2] Progress in this field has come from the interplay between training algorithms and molecular representations. Over the last few years, approaches have used autoregressive, latent variable, and reinforcement learning (RL) algorithms to generate string [3]-[7] and graph [8]-[11] representations of molecules. While fully unsupervised models can be trained on large unlabeled data (for instance the 100+ million known, individually synthesized molecules from PubChem), inverse molecular design requires some form of supervision to steer generation towards high-performance molecules at the extremes of the property distribution. [12] One of the key challenges in the applicability of such inverse design models to materials science and chemistry has been the lack of access to sizeable labeled training datasets in these fields. [13]
Published patents are an important source of scientific knowledge since the discovery of new materials and molecular candidates are disclosed in patents, years before their publication in scientific journals. [14], [15] Patent authorities such as the United States Patent and Trademark Office (USPTO), European Patent Office (EPO), Japanese Patent Office (JPO), and World Intellectual Property Organization (WIPO) make published patents accessible through their web interfaces. In the past decade, there has been significant progress in extracting and collating information from these sources programmatically to create large databases of chemical compounds [16], and reactions [17]. This large body of extracted knowledge has immense potential in feeding 'data hungry' deep learning models, but has remained relatively untapped in the field of molecular design.
Since patents are filed seeking protection within a given application, they are thematically labeled into domains, which makes it relatively simple to extract domain-specific molecular structures. Moreover, the disclosed molecules are likely to be high-performing, since they merited the investment of a patent application. This allows us to create domain-specific generative models by training exclusively on molecules known to belong to the desired class. Our hypothesis is that training generative models on these smaller, but more meaningful, datasets can automatically steer generation towards in-class high-performance molecules.
All post-2001 chemistry patents published by the USPTO contain ChemDraw CDX, MDL, and TIFF files of chemical structures, as required by the Complex Work Unit (CWU) Pilot Program.[18] This makes chemical structures more accessible in a computer-readable format for large-scale mining and screening efforts. In our work, we attempt to bridge the gap between these bulk data sources and data-driven chemical design by developing an automated pipeline to isolate chemical structures from USPTO patents based on relevance to user-defined keywords, and demonstrating their utility as training data for deep generative models for molecular design. We choose three model types, JTVAE [9], RNN+SELFIES [19], [20], and REINVENT+SELFIES [7], to explore a variety of representations (graph, SELFIES [21], and SELFIES, respectively) and training algorithms (latent variable, autoregressive, and RL, respectively), and show their applicability to learn data distributions in two patent-mined datasets that explore very different areas of chemical space, i.e., organic photodiodes (OPD) and tyrosine kinase inhibitors (TKI).
We then test the ability of these models to perform property optimization in each of the following cases: 1) the property being optimized can be predicted accurately and cheaply; 2) the oracle property predictor is expensive, so we only have access to a proxy neural network predictor trained on oracle property data. [22]-[24] In the TKI case, we optimize for high structural similarity to held-out, FDA-approved TKI molecules. This is a means to test the ability of models to optimize a robust, well-defined objective function with a relatively narrow solution space. It is an example of case 1, since we can calculate the similarity between molecules cheaply without the need for an approximator. In the OPD case, we choose our optimization objective to be the identification of organic molecules with low optical gaps. This is an example of case 2, since we approximate expensive DFT-computed optical gaps with a neural network predictor. Materials with low optical gaps, especially those sensitive to wavelengths of light in the near-infrared (NIR) region of the spectrum, have seen growing interest due to their ability to utilize a larger portion of the solar spectral range that was previously difficult to access. Their applications are diverse, ranging from military equipment to biomedical and semi-transparent devices. [25]-[28]
The key observations we make through our experiments are summarized as follows: 1) We identify that patent-mined datasets offer the ability to create focused in-domain datasets of high-performing molecular structures. Training generative models on these datasets allows us to create in-domain generators that can produce novel candidates that model the property distributions of the training data well. This offers a way to bootstrap focused domains of chemical space with limited human intervention. 2) Property optimization towards the edges of the training data distribution can be effective if we have access to a cheap oracle predictor, but is challenging when proxy neural network approximators are used. Proxy predictors are brittle (they tend to be adversarially attacked in our RL experiments) and difficult to train accurately end-to-end (learning properties from the compressed latent space in JTVAE is difficult).
## 2 Methods
### 2.1 Pipeline overview
Our overall pipeline consists of six steps: 1) Download patents from USPTO, 2) Parse chemistry patents, 3) Shortlist patents based on keywords, 4) Standardize data and add to our in-house database, 5) Property labeling for supervised property optimization tasks (DFT-calculated optical gaps for OPD, and similarity to FDA-approved drugs for TKI), and 6) Generative modeling for distribution learning (unsupervised) and property optimization (semi-supervised). Figure 1 shows a diagrammatic illustration of all steps involved. We make publicly available the code utilized in steps 1, 2, 3 and 6 along with this paper (URLs provided in Section 5). Step 4 involved storage of all data in a database, followed by de-duplication of SMILES strings [29] and simple post-processing steps as described in Section S2 in the SI. A detailed description of the procedures used in step 5 is provided in Section 2.3. These steps can be replaced by any form of data storage and property labeling technique depending on the chosen domain. An open-source database framework similar to the one we used can be found at [30].
### 2.2 Patent extraction
All granted USPTO patents from 2001 onward are available for download in weekly archives from the agency's Bulk Data Storage System (BDSS) at [https://bulkdata.uspto.gov/data/patent/grant/redbook/](https://bulkdata.uspto.gov/data/patent/grant/redbook/)<YEAR>/. We downloaded all of these archives from the BDSS using Python scripts by March 1, 2022. The compressed file size of all downloads was approximately 1.83 TB, including between 30 and 200 GB for each individual year. Next, we filtered out all patents that did not contain molecular structures in the form of CDX files. We encountered some difficulties in this filtering step with a subset of patent years due to inconsistent formatting and directory structures in the USPTO data (please refer to Section S1 for details). For the remaining chemistry-related patents, we used RDKit [31] to convert MOL files to SMILES strings. The number of new, unique SMILES strings extracted per year using this method is shown in Figure 2(a). We queried all chemistry-related patents by searching for keywords in each XML file. The TKI molecules shown in Figure 2(b) were found using the keywords "tyrosine kinase inhibitor", and the OPD molecules in Figure 2(c) are the result of querying for "organic electronic", "photodiode", and "organic photovoltaic". Any Markush structures in the dataset were filled in with ethyl groups because the particular substituents for each core molecule are not stored in a structured format that could be accessed without natural language processing; this included 17% of molecules from the OPD query and 11% of molecules from the TKI query. Thus, we generated a list of domain-relevant SMILES strings related to each set of keywords. More details on the post-processing and filtering applied to the data are provided in Section S2.
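As a rough sketch of the MOL-to-SMILES conversion and keyword-filtering steps, the core logic might look like the following; file layout, helper names, and the keyword list are illustrative assumptions, not the actual PatentChem interface:

```python
# Illustrative sketch only: paths, helper names, and KEYWORDS are assumptions,
# not the actual PatentChem API.
from pathlib import Path
from rdkit import Chem

KEYWORDS = ["tyrosine kinase inhibitor"]  # e.g., the TKI query

def patent_matches_keywords(xml_path):
    """Keep a patent if any query keyword appears in its XML full text."""
    text = Path(xml_path).read_text(errors="ignore").lower()
    return any(kw in text for kw in KEYWORDS)

def mol_files_to_smiles(patent_dir):
    """Convert each MOL file under a patent directory to a canonical SMILES."""
    smiles = set()
    for mol_path in Path(patent_dir).rglob("*.mol"):
        mol = Chem.MolFromMolFile(str(mol_path), sanitize=True)
        if mol is not None:  # skip structures RDKit cannot parse
            smiles.add(Chem.MolToSmiles(mol))
    return smiles
```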
Figure 1: **Diagram of the workflow.** Patents are downloaded from USPTO, and chemistry patents are isolated. Keyword-based search is then performed to filter relevant patents and corresponding SMILES strings. A subset of molecules chosen based on computational budget are then labeled with properties. Generative models are trained to model the data distribution, which can be sampled to suggest novel candidates.
### 2.3 Property labeling
#### TD-DFT calculations of optical gaps for OPD
Initial conformations were generated with the ETKDG approach as implemented in RDKit, with at least 1500 attempts; up to 20 unique conformers were retained, ranked by their MMFF94 energies [32]. These geometries were refined using semi-empirical tight-binding density functional theory (GFN2-xTB) [33] in ORCA [34]. Next, geometry optimizations were done at the BP86[35]-D3[36]/def2-SVP[37] level of theory on the lowest-energy xTB conformer. Finally, TD-DFT calculations were performed with the Tamm-Dancoff approximation (TDA) [38] at the \(\omega\)B97X-D3[39]/def2-SVPD level of theory in ORCA version 4.2.1. Reported optical gaps are the lowest-energy (reddest) singlet vertical excitation energies from the TD-DFT calculations.
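A minimal RDKit sketch of the conformer-generation stage is shown below; the embedding count and energy ranking follow the text loosely, and the xTB/DFT refinement stages run in external programs (ORCA) and are omitted:

```python
# Sketch of conformer generation with ETKDG and MMFF94 ranking; the xTB and
# DFT refinement stages are external (ORCA) and not shown here.
from rdkit import Chem
from rdkit.Chem import AllChem

def lowest_energy_conformers(smiles, n_embed=1500, n_keep=20):
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    params = AllChem.ETKDGv3()
    conf_ids = AllChem.EmbedMultipleConfs(mol, numConfs=n_embed, params=params)
    # MMFFOptimizeMoleculeConfs returns one (converged, energy) pair per conformer.
    energies = AllChem.MMFFOptimizeMoleculeConfs(mol)
    ranked = sorted(zip(conf_ids, energies), key=lambda pair: pair[1][1])
    return mol, [cid for cid, _ in ranked[:n_keep]]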
#### Similarity calculation for TKI
Each TKI molecule was labeled with its Tanimoto similarity to Erlotinib, a held-out FDA-approved inhibitor. The Tanimoto similarity was computed over Morgan fingerprints of size 2048 and radius 2. The implementations for similarity and fingerprinting were both obtained from RDKit. While Erlotinib is the primary running example shown in this work, we also labeled molecules with similarity to the other 26 held-out inhibitors for similar experiments involving them (e.g., see Figure S1).
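In RDKit this label is essentially a one-liner; a sketch follows (the Erlotinib SMILES below is its standard published structure, included only for self-containedness):

```python
# Sketch of the TKI similarity labeling: Morgan fingerprints (radius 2,
# 2048 bits) and Tanimoto similarity to Erlotinib.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

ERLOTINIB = Chem.MolFromSmiles("C#Cc1cccc(Nc2ncnc3cc(OCCOC)c(OCCOC)cc23)c1")

def erlotinib_similarity(smiles):
    mol = Chem.MolFromSmiles(smiles)
    fp_ref = AllChem.GetMorganFingerprintAsBitVect(ERLOTINIB, 2, nBits=2048)
    fp_mol = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return DataStructs.TanimotoSimilarity(fp_ref, fp_mol)
```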
Figure 2: **Bar charts depicting number of SMILES strings extracted as a function of publishing year.** Strings extracted from patents published between 2005 and 2022 (a) before keyword-based filtering, (b) after application of TKI-based keyword search and (c) after application of OPD-based keyword search. SMILES were de-duplicated after sanitization by RDKit, such that all molecules within a given year are unique, and any molecule counted in a given year will not be counted in any future years. Years 2001-2004 are not shown and years 2008-2010 are incomplete due to inconsistencies in patent formatting (see Section S1 for details).
### 2.4 Generative modeling
#### Evaluation tasks
We prepared two datasets: 1) OPD - Organic Photodiodes and 2) TKI - Tyrosine Kinase Inhibitors, covering two different chemical spaces. Models trained on these datasets were evaluated on two categories of tasks: 1) Distribution Learning - the ability of models to learn the training data distribution, and 2) Property Optimization - the ability of models to generate in-domain molecules that are optimized for a property of interest. Good performance on the latter task requires some or all of the generated samples to have properties superior to the training data distribution.
For distribution-learning tasks, we evaluated models on the GuacaMol distribution learning benchmark metrics: Validity, Uniqueness, Novelty, KL Divergence and Frechet ChemNet Distance. [40], [41]. We also visualized the ground-truth property distribution of the sampled data and compared it with that of the training data. A close match between the two is an indicator of success in learning the training data distribution. For property optimization, we performed a similar visualization. Here, a shift in distribution towards higher values of the objective function is an indicator of good performance. Finally, to test the value of domain-focused training on property optimization, we compared the patent-trained models against baseline models that were trained on the ZINC dataset [42] but optimized for OPD and TKI properties. It is considered good performance if the domain-trained models generate molecules with more optimal properties than the generic model trained on the ZINC dataset. This would suggest that the structural priors imposed on the models by training on the domain-specific patent datasets reflect in more optimal properties for that domain. More specifics on the task formulation for each dataset are given below.
For OPD tasks, the patent-mined OPD molecules were used as the training dataset. The property of interest in the distribution learning tasks was the DFT-computed optical gaps of sampled molecules. Since our aim was to generate molecular candidates with low optical gaps, the negative of the optical gaps as predicted by a proxy neural network predictor was used as the objective function which was maximized in the property-optimization tasks.
For TKI tasks, the patent-mined TKI dataset was used as the training dataset. The property of interest in the distribution learning tasks was the similarity between sampled molecules and Erlotinib, an FDA-approved inhibitor, to gauge the model's ability to optimize a robust, well-defined objective function with a relatively narrow solution space. This quantity was also used as the objective function which was maximized
in the property-optimization tasks. In addition to the tasks described earlier in this section, an additional distribution learning task was introduced for this dataset. Molecules sampled from models trained on TKI and ZINC datasets, together with 27 held-out FDA-approved TKI molecules, were projected onto a 2-dimensional space with Principal Component Analysis (PCA). Samples from TKI-trained models lying closer than the ZINC-trained samples to the held-out molecules would indicate that the models have accurately learned information about molecular structure from the training dataset. It is a way to test the utility that training on domain-focused data (TKI patents) has over training on publicly accessible large databases (ZINC) that cover a similar chemical space (drug-like molecules) but are less focused on the domain of interest. Morgan fingerprints of size 2048 computed with radius 2 were the molecular representation used during PCA.
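A sketch of this projection, assuming scikit-learn for the PCA, is given below; `smiles_sets` is an assumed dict mapping a label (e.g., "TKI-trained") to its list of sampled SMILES:

```python
# Sketch of the 2-D PCA comparison over Morgan fingerprints.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from sklearn.decomposition import PCA

def fingerprint_matrix(smiles_list):
    rows = []
    for smi in smiles_list:
        fp = AllChem.GetMorganFingerprintAsBitVect(
            Chem.MolFromSmiles(smi), 2, nBits=2048)
        arr = np.zeros(2048, dtype=np.int8)
        DataStructs.ConvertToNumpyArray(fp, arr)
        rows.append(arr)
    return np.stack(rows)

def project_2d(smiles_sets):
    labels = [k for k, v in smiles_sets.items() for _ in v]
    coords = PCA(n_components=2).fit_transform(
        np.vstack([fingerprint_matrix(v) for v in smiles_sets.values()]))
    return labels, coords  # one (x, y) row per molecule, in input order
```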
#### Generative models
We evaluated two categories of generative models, i.e. 1) text-based and 2) graph-based, on these tasks. RNN+SELFIES and REINVENT+SELFIES fall under the first category while JTVAE falls under the second. RNN+SELFIES was only used for distribution learning tasks, REINVENT+SELFIES was used for only property optimization tasks, and JTVAE was used for both. SELFIES was used as the string representation of choice to ensure validity of structures generated. [21] We go over some of the implementation choices for each below.
Recurrent Neural Networks (RNNs) have been shown to be simple but powerful text-based models for distribution modeling tasks on molecules [43]. They are trained using an auto-regressive training strategy where the next token is predicted at every time step. The implementation from the MOSES benchmarking platform [19] was used, with some modifications pertaining to the change in representation from SMILES to SELFIES. The trained RNN can be sampled by feeding a BOS (beginning of sentence) token and sampling the probability distribution predicted by the model autoregressively. An LSTM [20] network with 3 hidden layers and a dropout probability of 0.2 between layers was used, with a final linear layer to transform the LSTM output into the required output sequence size. All LSTM hidden layers and the final linear layer were of size 768, and a learning rate of 1e-3 was used for the Adam optimizer.
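A minimal PyTorch sketch of this architecture and its autoregressive sampling loop is shown below; vocabulary handling is omitted, and this is not the exact MOSES implementation:

```python
# Sketch of the character-level SELFIES RNN described in the text.
import torch
import torch.nn as nn

class SelfiesRNN(nn.Module):
    def __init__(self, vocab_size, hidden=768, layers=3, dropout=0.2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, num_layers=layers,
                            dropout=dropout, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state=None):
        h, state = self.lstm(self.embed(tokens), state)
        return self.out(h), state

@torch.no_grad()
def sample(model, bos_id, eos_id, max_len=200):
    """Autoregressively sample one token sequence starting from BOS."""
    tok, state, seq = torch.tensor([[bos_id]]), None, []
    for _ in range(max_len):
        logits, state = model(tok, state)
        tok = torch.multinomial(logits[:, -1].softmax(dim=-1), 1)
        if tok.item() == eos_id:
            break
        seq.append(tok.item())
    return seq
```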
Junction Tree Variational Autoencoder (JTVAE) is a graph-based generative model that learns to sequentially decode graph substructures using Message Passing Neural Networks (MPNNs), and combine them to form complete molecular structures. [9] It maintains a vocabulary of substructures decomposed from the training data, that are used during the decoding step to ensure validity of generated molecules. The model
is trained by optimizing the encoder, decoder, and property predictors end-to-end with a multi-task loss function. Once trained, the latent space can be either randomly sampled or optimized by utilizing gradients from the property predictors. In both cases, the sampled latent vectors are passed through the decoder to obtain molecular candidates. A graph Message Passing Network (MPN) with 3 layers was used in the graph encoder, and a graph GRU [44] network with 20 layers was used in the tree encoder, to form a concatenated latent vector of size 56. A learning rate of 1e-3, set to decay exponentially during the course of training, was used for the Adam optimizer [45]. More details are given in Section S4.
REINVENT is a policy based Reinforcement Learning (RL) approach that learns to generate molecular structures optimized with a chosen objective function. [7] Training is performed in two steps: 1) A Prior RNN is pre-trained on a language modeling task, i.e., learning to predict the next token of the sequence by maximizing the likelihood on the training dataset. 2) Then, an augmented likelihood function is defined to be the sum of the Prior likelihood and a score indicating the desirability of the sequence. The agent, which is initialized with the Prior RNN weights, is then fine-tuned to minimize the squared difference between the agent likelihood and the augmented likelihood on samples drawn from the Agent RNN. Sampling from the trained model is performed in identical fashion to RNN (described in previous paragraph). We once again use SELFIES representations of molecules. The Agent RNN was composed of three GRU cells [44], each of size 512, followed by a linear output layer. Pre-training and fine-tuning were carried out using an Adam optimizer with learning rates of 1e-4 and 5e-4 respectively. We retained the same architectural choices used by Olivecrona et al. since our task of similarity-based optimization is nearly identical to the similarity guided structure generation experiments described in their work.
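Concretely, with \(S(x)\) the desirability score of a sampled sequence \(x\) and \(\sigma\) a scalar weight, the augmented likelihood and agent loss take roughly the following form (our notation, paraphrasing Olivecrona et al.):

```latex
\log p_{\text{aug}}(x) = \log p_{\text{prior}}(x) + \sigma\, S(x),
\qquad
L(x) = \big(\log p_{\text{agent}}(x) - \log p_{\text{aug}}(x)\big)^{2}
```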
## 3 Results and discussions
### 3.1 Distribution learning
Table 1 compares the scores of the RNN+SELFIES and JTVAE models on the GuacaMol distribution learning benchmarks. Both models were able to generate molecules with relatively high Validity, Uniqueness, and KL Divergence scores. We found, however, that JTVAE is superior to RNN+SELFIES in Novelty scores, and that both models perform relatively poorly on Frechet ChemNet Distance scores. Both observations may stem from our training datasets being smaller and more domain-focused than the larger and more diverse drug datasets that have been benchmarked on these metrics in the past.
As can be seen in sub-figures (a), (b), (d), and (e) of Figure 3, both models generated molecules whose properties matched well with the training dataset. It can also be observed that RNN+SELFIES is able to match the distributions better than the JTVAE, which conforms with the observations made by [43]. Additionally, sub-figures (c) and (f) show that samples from TKI-trained models lie closer to held-out FDA-approved inhibitors than ZINC-trained samples, which indicates that both models have been able to learn structural information from the training datasets.
From these results, we conclude that the deep generative models explored in this work are effective tools to model property distributions of arbitrary small, chemically focused training datasets automatically extracted from the patent literature. The models can thus sample novel, in-distribution molecular structures that resemble the training data in terms of structure and properties. Furthermore, this suggests that domain-specific, focused chemical spaces can be bootstrapped automatically from the literature without user-defined heuristics for the domain, as evidenced by the GuacaMol distribution learning benchmarks in two very distinct chemical spaces.
| Model | Metric | TKI | OPD |
| --- | --- | --- | --- |
| RNN+SELFIES (random sample) | Validity | 1.00 | 0.991 |
| | Uniqueness | 0.99 | 0.99 |
| | Novelty | 0.55 | 0.58 |
| | KL Divergence | 0.98 | 0.96 |
| | Frechet ChemNet Distance | 0.60 | 0.61 |
| JTVAE (random sample) | Validity | 1.00 | 1.00 |
| | Uniqueness | 1.00 | 0.99 |
| | Novelty | 1.00 | 0.89 |
| | KL Divergence | 0.75 | 0.87 |
| | Frechet ChemNet Distance | 0.32 | 0.28 |

Table 1: **GuacaMol distribution learning benchmarks for 1000 samples drawn from RNN+SELFIES and JTVAE, on OPD and TKI datasets.** Closer to 1.0 indicates better performance.
### 3.2 Property optimization
We evaluated generative models trained on patent-extracted, domain-focused datasets for property optimization. We evaluated REINVENT+SELFIES, which uses reinforcement learning and a string-based representation, and JTVAE, which performs optimization in the latent space and decodes locally optimal molecules, under this category of tasks. We identified that property optimization tasks towards the edges of the training data distribution are challenging for a variety of reasons. We observed from our RL experiments that optimizers may push the designs out of the training domain, which was particularly acute when a neural network predictor was used as a proxy for the oracle property. Here, the generative model can be thought of as performing an adversarial attack on the poorly-covered areas of the predictor. From our VAE experiments, we observed that it is sometimes challenging for proxy predictors to learn properties from compressed latent representations, and the unreliable objective function thereby leads to challenges in latent space optimization.
Figure 3: **Results on distribution learning tasks.** (a) and (b) show the property distributions of JTVAE-sampled molecules in comparison to training data properties, on OPD and TKI datasets respectively. (d) and (e) show the same distributions for molecules sampled from RNN+SELFIES. (c) and (f) show PCA projections of molecules randomly sampled from TKI-trained and ZINC-trained models, and held-out FDA approved inhibitors, for JTVAE and RNN+SELFIES respectively.
Both these challenges arise from coupling generation and property optimization end-to-end. By instead splitting these into two separate steps of random sampling and post-hoc filtering, we observed better shifts in property histograms. More details on the post-hoc filtering approach are provided below in Section 3.2.1.
#### 3.2.1 Post-hoc filter
We use the term "post-hoc filter" to refer to a property screen conducted on molecules that were randomly sampled from trained models. It can use either the predictions of a proxy predictor, when the oracle property is expensive as in the OPD tasks, or the oracle itself, when it is cheap to compute as in the TKI tasks. The degree of the filter applied (which we chose to be the top 20%) can be chosen based on the extent of screening to be performed. As the proxy predictor for OPD tasks, we trained a Chemprop MPNN model [46] on the patent-mined OPD dataset to predict DFT-calculated optical gaps (see Section S3). A random train-val-test split (60:20:20) was used to train, tune and evaluate the model.
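Schematically, the filter is just random sampling followed by a percentile cut; a sketch is given below, where `sample_fn` and `predict_fn` stand in for whichever trained generator and oracle/proxy predictor are in use:

```python
# Sketch of the post-hoc filter: sample, score, keep the best fraction.
import numpy as np

def posthoc_filter(sample_fn, predict_fn, n_samples=10000, keep_frac=0.2,
                   lower_is_better=True):
    """Draw random samples from a trained generator and keep the top 20%."""
    smiles = [sample_fn() for _ in range(n_samples)]
    scores = np.array([predict_fn(s) for s in smiles])
    order = np.argsort(scores if lower_is_better else -scores)
    return [smiles[i] for i in order[: int(keep_frac * n_samples)]]
```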
#### 3.2.2 Approximate objective
All OPD optimization tasks required the use of a proxy neural network model, since DFT simulations are computationally expensive and typically not autodifferentiable, so it is not possible to train end-to-end generation and property scoring. [47], [48] In the JTVAE case, a Multi Layer Perceptron (MLP) was used as a proxy predictor to predict oracle DFT-calculated optical gaps from the latent space. As can be seen from Figure 4(a), gradient descent over the latent space in JTVAE has almost no effect in shifting property distributions away from the OPD training data towards lower optical gaps. To improve the optimization performance, we utilized the Chemprop post-hoc filter to selectively isolate decoded candidates having predicted optical gaps below the 20th percentile. This was useful in shifting the distribution towards lower optical gaps, as can be seen from Figure 4(b). The justification behind this approach was that learning properties from the latent space is a more challenging task than learning directly from the molecular graph. [49] The MLP predicting the optical gap from the latent space achieves an RMSE of 0.56 eV on the test set, while the Chemprop model achieves an RMSE of 0.38 eV on the test set, which follows our intuition. The fact that JTVAE learns from a multi-task loss function composed of reconstruction and property terms makes training the MLP a constrained optimization task that reduces its degrees of freedom, and can hence make convergence more challenging. We observed similar challenges with coupling generators and property optimizers while training REINVENT+SELFIES on the OPD dataset, where the Chemprop model
described above was used as the proxy predictor modeling the reward function. Here, the generator could be thought of as performing an adversarial attack on the proxy predictor, and it converged on molecular candidates that optimized the proxy objective but were structurally unphysical. More details on JTVAE+MLP training are provided in Section S4, and details on REINVENT results on OPD data are provided in Section S3.2.
Apart from the described issue pertaining to the poor predictive performance of the MLP, there could be other potential reasons for the failure of gradient descent on the latent space. One possibility is the presence of cascading effects: the unreliability of the MLP could have caused the points reached by gradient descent (on the latent space) to lie outside the data distribution that the decoder saw during training, causing the decoder to be unreliable and collapse to a distribution more similar to the training data. One way to investigate this failure mode in the future could be the use of decoder uncertainty estimation techniques to identify such points and restrict samples to low-uncertainty regions of the decoder. [50] Another possibility is that the latent space manifold of the trained model was "rough" with respect to the MLP-predicted property, rendering optimization techniques such as gradient descent challenging. This could be investigated in more depth by evaluating the roughness of the latent space with metrics such as the roughness index (ROGI). [23] Therefore, it should be noted that the coupled interaction between generators and property predictors is a complicated problem, and utilizing approaches such as the post-hoc filter can be a relatively simple remedy to these pitfalls even without detailed knowledge of the failure mode. A demonstration of the post-hoc filter with another model (RNN+SELFIES) is shown in Figure 4(f); it can serve as a remedy for the adversarial attack issues observed in the REINVENT example, which again arise from the coupling of generators (RNN) and property optimizers (RL).
Finally, Figure 4(c) is a baseline where training was performed on the ZINC dataset and a post-hoc filter on the OPD target was applied. It can be clearly seen that sub-figure (b) is more shifted towards optimal properties than the ZINC baseline, which suggests that the structural priors imposed by training on the domain-specific OPD patent dataset offer significant value in achieving optimal properties for that domain. For example, molecules incorporating structural priors such as conjugated rings have more potential to achieve low optical gaps than drug-like structures.
#### 3.2.3 Oracle objective
In TKI optimization tasks, the property of interest was similarity to a chosen query structure, which is a cheap oracle property estimate that can be calculated at every step of optimization. In such cases, where we have access to the oracle predictor, we observed better performance on optimization tasks. Figure 5(c) shows the Erlotinib similarity distribution of samples generated during training of REINVENT+SELFIES, which is clearly shifted towards higher values than the training data. Figure 5(a) shows sample candidates along with their similarity scores, and Figure 5(b) shows the improvement of the similarity score as training progresses.
Figure 4: **Results on property optimization tasks.** (a), (b), (c), (d) and (e) show results for JTVAE, while (f) and (g) show results for RNN+SELFIES. (a) and (b) show OPD property distributions of molecules sampled by gradient descent, and post-hoc filter respectively in comparison to training data properties. (c) shows the property distribution obtained by applying an OPD post-hoc filter to samples drawn from a ZINC-trained model. (d) and (e) are analogous to (b) and (c) but on TKI instead of OPD. (f) and (g) are analogous to (b) and (d) but on RNN+SELFIES instead of JTVAE.
A post-hoc filter using the oracle predictor can also be utilized in this case as a way to generate a set of novel candidate molecules that are optimized in comparison to the training data (Figure 4(d) and (g) for JTVAE and RNN+SELFIES respectively). Similar to the example described in Section 3.2.2, we also compared with a ZINC-trained baseline optimized for the TKI target, and observed minor improvements in shifts for the TKI-trained model in comparison to the ZINC-trained baseline (see Figure 4(d) and (e)). This difference is not as significant as for the OPD-ZINC baseline, since the chemical spaces of the ZINC and TKI datasets are fairly similar structurally.
#### 3.2.4 An alternative interpretation
The above observations from JTVAE and REINVENT+SELFIES can also be interpreted with reference to terminology introduced by [51]. While Kajino et al. primarily examine the existence of biases in Reinforcement Learning settings, the terminology can conceptually be extended to other types of generative models as well. In our tasks, both generative model and property predictor were trained on the same patent-mined dataset. This could have introduced reusing bias, which stems from effectively training and evaluating our model with information drawn from the same data source. In addition, during property optimization, the property predictor often sees unrealistic/nonphysical molecules which are far away from its training data distribution. This results in a misspecification bias, caused by the unreliability of the property predictor at points far away from the training data distribution. These two components of bias might have had a role to play in the observations we made in cases where a proxy predictor was used. Oracle property models on the other hand, are free from these two forms of bias.
## 4 Conclusions
In this work, we developed a framework to automatically extract molecular structures from the USPTO patent repository based on user-defined keyword searches, and generate datasets for machine learning in chemistry. We demonstrate the utility of the extracted datasets in training generative models for inverse molecular design tasks. We show that these datasets can be utilized to generate novel molecular structures with properties similar to the training dataset, in a completely unsupervised setting. We also evaluate
model performance on supervised property optimization tasks, identify some limitations of existing models in shifting property distributions away from the training data regime, and suggest some possible explanations and remedies that could be used to overcome these in practice. The key observations we make through our experiments are summarized as follows: 1) We identify that patent-mined datasets offer the ability to create focused in-domain datasets of high-performing molecular structures and offers a way to bootstrap focused domains of chemical space with limited human intervention. 2) Property optimization towards the edges of the training data distribution can be effective if we have access to a cheap oracle predictor, but is challenging when proxy neural network approximators are used.
## 5 Data and code availability
The code used to train models is publicly available. JTVAE: [https://github.com/wengong-jin/icml18-jtnn](https://github.com/wengong-jin/icml18-jtnn), REINVENT: [https://github.com/MarcusOlivecrona/REINVENT](https://github.com/MarcusOlivecrona/REINVENT). The RNN models were trained using the char-rnn code from [https://github.com/molecularsets/moses](https://github.com/molecularsets/moses). A static version of the exact forks used is available at [https://doi.org/10.5281/zenodo.7719958](https://doi.org/10.5281/zenodo.7719958), and checkpoints of trained models and all training data including DFT-calculated properties are available at [https://doi.org/10.5281/zenodo.7996464](https://doi.org/10.5281/zenodo.7996464). [52] Code for the patent mining and filtering pipeline can be found at [https://github.com/learningmatter-mit/PatentChem](https://github.com/learningmatter-mit/PatentChem). This patent code is archived at [https://doi.org/10.5281/zenodo.7719675](https://doi.org/10.5281/zenodo.7719675). [53] GuacaMol benchmarking was performed using [https://github.com/BenevolentAI/guacamol](https://github.com/BenevolentAI/guacamol).
Figure 5: **Results based on REINVENT+SELFIES model trained on the TKI dataset.** (a) Candidates generated by REINVENT towards the end of training, with structural similarity to Erlotinib being the reward function. Similarity scores are indicated below each candidate. (b) Tanimoto similarity score computed between generated candidates and Erlotinib, as a function of training iterations. (c) Histograms showing properties of candidates sampled during agent training, in comparison with the training data distribution.
## 6 Author contributions
A.S. trained the generative models and analyzed the distribution learning and property optimization results. K.P.G. updated and organized the patent code and ran the high-throughput physics-based calculation pipeline. A.G. wrote an initial version of the patent code. T.Y. trained initial versions of the generative models. A.S. and K.P.G. wrote the first manuscript draft. R.G.-B. conceived the project, supervised the research, and edited the manuscript.
## 7 Conflicts of interest
There are no conflicts to declare.
## 8 Acknowledgements
A.S. was supported by funding from Sumitomo Chemical. K. P. G. was supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1745302. This work was also supported by the DARPA Accelerated Molecular Discovery (AMD) program under contract HR00111920025. We acknowledge the MIT Engaging cluster and MIT Lincoln Laboratory Supercloud cluster [54] at the Massachusetts Green High Performance Computing Center (MGHPCC) for providing high-performance computing resources to run our TD-DFT calculations and train our deep learning models.
## S1 Patent Format Inconsistency
As described in Section 2.2, the USPTO makes available machine-readable patents from 2001 to the present. However, these files are not consistent in their format and directory structure. As a result of these inconsistencies, our original extraction pipeline omitted years 2001-2004 because these years used SGML 2.4 or XML 2.5, whereas years 2005-present used XML 4.0-4.7, as described at [https://bulkdata.uspto.gov/](https://bulkdata.uspto.gov/). Additionally, patents from late 2008 to early 2010 were omitted by our original pipeline because of a different directory structure than other patent releases. The initial training dataset for our generative models omitted some or all patents from the aforementioned years. Since our goal is to incorporate structural priors from a general region of domain-relevant chemical space rather than to extract a comprehensive set of domain-relevant molecules, this omission does not invalidate the approach. As we demonstrate, our approach is helpful for focusing chemical space even while omitting all patent years prior to 2001 (since they are not machine readable). For
the same reason, our approach still works while omitting a subset of years after 2001. However, for the sake of completeness and to maximize training dataset coverage of relevant structures, we have resolved these issues in the latest version of our PatentChem code ([https://github.com/learningmatter-mit/PatentChem](https://github.com/learningmatter-mit/PatentChem)). Going forward, users who do their own keyword queries with our code will be unaffected by the problems we initially encountered with certain years.
## S2 Processing of patent-extracted data before model training
The goal of our pipeline is to generate structures with limited domain knowledge beyond keywords, so we kept processing/filtering to a minimum except for constraints that allowed for better computational tractability and basic filters on molecular mass. For example, we applied a 1000 g/mol maximum molecular mass cutoff on the OPD dataset primarily because JTVAE has a sequential decoding process that enumerates combinations of fragment pairs, which scales with the size of fragments and is thus very slow for large molecules. This has the added benefit of eliminating polymers and large candidates (non-ideal for deposition techniques such as chemical vapor deposition). Similarly, on the TKI dataset, we imposed maximum and minimum cutoffs of 700 g/mol and 250 g/mol respectively to eliminate candidates that are not "drug-like". We apply the minimum molecular mass constraint in the TKI case since our property optimization objective was similarity to held-out FDA-approved drugs, whose molecular masses typically fall above 250 g/mol.
Our minimal filtering means there are some structures in our training datasets that are not domain-relevant (such as reagents or intermediates). However, the "false positives" (molecules that the model generates because it thinks they are relevant, when in reality they are not relevant) that come from this can be easily filtered out by the property labeling step. Just as a user can choose their own property-labeling method appropriate for their design task when using our code, they could also insert additional domain-knowledge-based preprocessing of the training dataset. Our current work demonstrates that the approach can still be useful even without this preprocessing, but additional filtering may improve results in some domains. We have provided some options for possible filters in our PatentChem code, such as minimum and maximum molecular weight and charged/neutral molecules.
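A sketch of the molecular-mass filter described above is given below; the cutoffs follow the text, and `smiles_list` is a placeholder for the patent-extracted strings:

```python
# Sketch of the molecular-mass filters (OPD: <= 1000 g/mol, TKI: 250-700 g/mol).
from rdkit import Chem
from rdkit.Chem import Descriptors

def passes_mass_filter(smiles, min_mw=0.0, max_mw=1000.0):
    """True if the molecule parses and its molecular weight is within bounds."""
    mol = Chem.MolFromSmiles(smiles)
    return mol is not None and min_mw <= Descriptors.MolWt(mol) <= max_mw

smiles_list = ["CCO", "c1ccccc1"]  # placeholder for extracted SMILES strings
opd_keep = [s for s in smiles_list if passes_mass_filter(s, max_mw=1000.0)]
tki_keep = [s for s in smiles_list if passes_mass_filter(s, 250.0, 700.0)]
```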
## S3 REINVENT+SELFIES
### TKI
Figure S1 shows the similarity to the query structure as a function of training iterations, for each of the 27 held-out FDA-approved TKI molecules. In most cases, we observed an increasing trend in the reward. There were, however, some instances (e.g., Nilotinib and Cabozantinib) where training was unstable and did not converge. Reinforcement learning algorithms are often highly sensitive to hyperparameters, so it is possible that these cases might require further tuning.
### OPD
Unlike the TKI dataset case, where we had access to the oracle reward, training on the OPD dataset required a proxy neural network reward estimator. Figure S3 shows the test performance of the proxy reward predictor on DFT-calculated optical gaps. We observed that while the reward had an increasing trend during training of the agent (Figure S2(a)), the sampled molecules (Figure S2(b)) did not match the training data well structurally. We hypothesized that this behavior arose from the agent identifying and targeting high-uncertainty regions of the property predictor. To investigate this, we also attempted running agent training with a new reward that penalized high uncertainty as estimated by ensemble variance on the property predictor. To achieve this, the reward was modified to include a multiplicative masking term that evaluated whether the ensemble uncertainty was smaller than the 99th percentile of training data uncertainties. Hence, molecules for which the property predictor was more uncertain than 99% of the training data would have a reward of zero. We were, however, unable to achieve model convergence with this modified reward function, i.e., rewards did not display an increasing trend. This was because a majority of molecules generated during training were high-uncertainty points and thus received a reward of zero, leaving the agent with access to very sparse information, since poor candidates were sampled at a much higher fraction than good ones. It is also possible that the ensemble uncertainty was not an accurate estimator of model confidence at points that are highly out of distribution (OOD), as was observed by Scalia et al. in their work. [55]
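A sketch of that masked reward follows; names here are illustrative, with `predictions` holding one optical-gap estimate per ensemble member and `threshold` the 99th-percentile variance measured on the training data:

```python
# Sketch of the uncertainty-masked reward described above.
import numpy as np

def masked_reward(predictions, threshold):
    """Zero the reward when ensemble variance exceeds the training-data
    99th-percentile threshold; otherwise reward low optical gaps."""
    preds = np.asarray(predictions)  # one prediction per ensemble member
    reward = -preds.mean()           # maximize the negative optical gap
    return reward * float(preds.var() <= threshold)
```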
Figure S1: **Tanimoto similarity score computed between generated candidates and FDA approved TKI molecules, as a function of training iteration**
Figure S2: **Results of REINVENT+SELFIES on OPD dataset.** a) Reward score as a function of training iterations. b) Molecules sampled during later stages of training.
## S4 JTVAE training
Since DFT calculations were expensive to perform on the entire patent-mined OPD set, we only labeled a subset of 5568 molecules out of a total of 112436 molecules. To effectively use labeled and unlabeled data during JTVAE training, we utilized all molecules for encoder and decoder training, but only utilized the labeled subset while training the property predictor. The training of encoder, decoder and property predictor were all performed jointly with a multitask loss function. In addition, the property predictor training was
performed on 5 different properties: HOMO, LUMO, optical gap, Synthetic Complexity Score (SCScore) [56], and molecular mass. This was done for two purposes: 1) to aid with latent space regularization, and 2) because multiple tasks could potentially share information and thus compound the amount of effective training data seen by the model.
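Schematically, the joint objective therefore combines reconstruction, KL, and per-property regression terms, with property losses applied only to labeled molecules; the weights \(\beta, \lambda_k\) and the label indicator \(m \in \{0, 1\}\) below are our notation, not the paper's symbols:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\text{recon}}
 \;+\; \beta\,\mathcal{L}_{\text{KL}}
 \;+\; \sum_{k=1}^{5} \lambda_{k}\, m\,\big(\hat{y}_{k}(z) - y_{k}\big)^{2}
```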
## S5 FDA approved TKI candidates
Table S1 lists the names and SMILES representations of the 27 FDA-approved TKI molecules that were held out during all TKI experiments carried out in this paper.
[Table S1: Name and SMILES for the held-out inhibitors. The SMILES strings were garbled during document conversion; the recoverable drug names include Afatinib, Ibrutinib, Pazopanib, Axitinib, Idelalisib, Ponatinib, Bosutinib, Imatinib, and Regorafenib.]
## S6 Visualizing structural resemblance to training data
Figure S5: **Sample molecular structures obtained by randomly sampling the trained JTVAE model on the a) OPD and b) TKI datasets**
2310.07079 | Secure Decentralized Learning with Blockchain | Federated Learning (FL) is a well-known paradigm of distributed machine
learning on mobile and IoT devices, which preserves data privacy and optimizes
communication efficiency. To avoid the single point of failure problem in FL,
decentralized federated learning (DFL) has been proposed to use peer-to-peer
communication for model aggregation, which has been considered an attractive
solution for machine learning tasks on distributed personal devices. However,
this process is vulnerable to attackers who share false models and data. If
there exists a group of malicious clients, they might harm the performance of
the model by carrying out a poisoning attack. In addition, in DFL, clients
often lack incentives to contribute their computing power to model
training. In this paper, we propose Blockchain-based Decentralized Federated
Learning (BDFL), which leverages a blockchain for decentralized model
verification and auditing. BDFL includes an auditor committee for model
verification, an incentive mechanism to encourage the participation of clients,
a reputation model to evaluate the trustworthiness of clients, and a protocol
suite for dynamic network updates. Evaluation results show that, with the
reputation mechanism, BDFL achieves fast model convergence and high accuracy on
real datasets even if there exist 30% malicious clients in the system. | Xiaoxue Zhang, Yifan Hua, Chen Qian | 2023-10-10T23:45:17Z | http://arxiv.org/abs/2310.07079v2 | # Secure Decentralized Learning with Blockchain
###### Abstract
Federated Learning (FL) is a well-known paradigm of distributed machine learning on mobile and IoT devices, which preserves data privacy and optimizes communication efficiency. To avoid the single point of failure problem in FL, decentralized federated learning (DFL) has been proposed to use peer-to-peer communication for model aggregation, which has been considered an attractive solution for machine learning tasks on distributed personal devices. However, this process is vulnerable to attackers who share false models and data. If there exists a group of malicious clients, they might harm the performance of the model by carrying out a poisoning attack. In addition, in DFL, clients often lack incentives to contribute their computing power to model training. In this paper, we propose Blockchain-based Decentralized Federated Learning (BDFL), which leverages a blockchain for decentralized model verification and auditing. BDFL includes an auditor committee for model verification, an incentive mechanism to encourage the participation of clients, a reputation model to evaluate the trustworthiness of clients, and a protocol suite for dynamic network updates. Evaluation results show that, with the reputation mechanism, BDFL achieves fast model convergence and high accuracy on real datasets even if 30% of the clients in the system are malicious.
## I Introduction
Federated Learning (FL) [1] is a distributed machine learning (ML) paradigm that allows training ML models across numerous distributed devices, such as mobile and IoT devices. These edge devices hold their data locally and collaboratively perform training tasks without directly sharing training data among themselves, ensuring privacy. The trained ML models are then aggregated on a central server, called the aggregator. The aggregator first distributes a global model to clients. Each client trains the model locally using its own data and generates a model update, which is then sent back to the aggregator. The aggregator aggregates these updates to update the global model, and distributes it to clients for further training. FL preserves data privacy by enabling decentralized model training [2, 3], saves communication costs by avoiding the movement of raw data, and reduces computational costs by leveraging the computing resources of each device. However, the existence of the centralized aggregator makes FL vulnerable to a single point of failure [4]. Once the centralized aggregator is compromised, the whole FL system fails. Also, the aggregator, which frequently exchanges models with clients, can become the bottleneck of the system.
The recently proposed concept of decentralized federated learning (DFL) [5, 6, 7, 8] provides a solution to the aforementioned problems by removing the involvement of the central server. In a DFL system, instead of communicating with a central aggregator, clients directly exchange model updates with a subset of other clients, also known as their "neighbors", using P2P communication. Clients keep exchanging model updates until their local models converge to a model that reflects the features of data from all clients. Thus, DFL avoids the single point of failure, trust dependencies, and server-side bottlenecks of traditional FL. However, DFL still faces challenges such as malicious clients, low-quality models, and a lack of incentives, which undermine the reliability of the whole system. Given the large number of participants in a DFL system, it is unrealistic to simply assume all the clients are honest and follow the protocols to do the training correctly. Therefore, there may exist malicious clients sharing false model updates about their local training results. Also, some clients with low-quality models might affect the performance of their neighbors with high-quality models, and these errors may be further propagated through the whole network. Besides, how to motivate data owners to participate in the system and continuously contribute their data to the FL model remains a challenge.
Blockchain, as a distributed ledger technology built on a peer-to-peer network, provides a possible solution for the security and incentive issues in the DFL system [9], as shown in Fig. 1. It provides trust by allowing all participants to verify transactions submitted to the blockchain with its underlying provable cryptography and consensus protocol. Every participant can verify each model update before it is aggregated and confirmed in the blockchain. However, directly storing all model updates on the blockchain is not feasible due to the significant costs incurred by data storage and computation. Moreover, pushing large model data to the blockchain suffers from heavy latency and from limits on block and transaction sizes. Thus, we introduce an auditor committee and a reputation mechanism for model verification. Auditors are responsible for managing the clients' reputations according to the quality of their model updates. Instead of models or gradients, auditors put only the clients' reputations on the blockchain and update them periodically.
Fig. 1: Blockchain-based Decentralized Federated Learning: clients form a peer-to-peer overlay network to exchange models, and auditors are responsible for model verification.
In this paper, we introduce BDFL, a Blockchain-based Decentralized Federated Learning system with an incentive mechanism and a reputation model. We introduce a set of auditors for model verification; honest clients receive incentives, while clients providing malicious models are punished. In our reputation model, clients' reputation scores are assigned by auditors according to their model verification results and are updated periodically. Clients with higher reputations have a higher probability of having their model updates accepted by neighbors, and thus gain more profits from the system.
In summary, this paper makes the following contributions:
* We design BDFL, the first blockchain-based fully decentralized federated learning system for model verification with high learning accuracy and system robustness.
* We design and implement the BDFL protocol suite. We introduce an incentive mechanism to encourage clients to participate in the model exchange, and a reputation model to evaluate the trustworthiness of each client to avoid malicious model updates from attackers.
* We evaluate BDFL using experiments on real ML datasets. We find that BDFL achieves high model accuracy and fast convergence. It also has strong resilience to client dynamics and malicious model updates.
The rest of this paper is organized as follows. The system overview and model are presented in Section II. We describe a protocol overview in Section III and the detailed design of BDFL in Section IV. Section V presents the evaluation results of our protocol. Section VI describes the related work. Section VII concludes this work.
## II Overview
### _Network Model_
We consider a decentralized learning system with a large number of clients, in which clients can join or leave at any time. These clients, such as edge devices, are willing to train models locally using their own data, and exchange model updates with their neighbors to earn profits. There are also some auditors in the system who are responsible for model verification. They are a group of nodes that have read and write access to the blockchain, similar to the miners in common blockchain systems. They work as a committee to verify the model updates using public validation data in the system, and generate new blocks of reputation information. Honest auditors receive rewards for correct verification, and dishonest ones are punished if detected.
Our network model is divided into two parts, the decentralized federated learning network, and the blockchain network. We model the BDFL network as an undirected graph \(G=(V,E)\), where \(V\) is the set of clients, and each link \(e=(u,v)\in E\) represents that client \(u\) and \(v\) are neighbors, and can directly exchange local ML models. We assume clients have equal roles in the BDFL network and similar numbers of neighbors. Clients have read access to the blockchain, and can communicate with auditors for model verification. In the blockchain network, the clients' and auditors' identities and model verification information are recorded in the blockchain in the form of transactions. To encourage the participation of more users, auditors successfully performing verification and clients honestly providing model updates will receive incentives, which are guaranteed by smart contracts in the blockchain. Malicious participants can also be identified by the blockchain to protect the quality of the overall model.
### _Blockchain Model and Assumptions_
The blockchain in the BDFL system should support smart contracts, which are responsible for managing client reputation and ensuring auditors behave appropriately. Both clients and auditors need to register on the blockchain first to join the BDFL system. Since we use the blockchain as the underlying root of trust, if it is compromised by an adversary, the correct functionality of the BDFL system cannot be guaranteed. Therefore, we assume that the proportion of consensus participants corrupted by an adversary for the blockchain is bounded by a threshold to ensure safety and liveness for the underlying blockchains. Following Byzantine fault-tolerant settings, we assume the proportion of adversaries is less than 33% of the total number of consensus participants [10].
### _Attacker Model_
We assume attackers can potentially gain physical access to some clients, in which the data and model are stored, and complete control of their network connections. They may want to destroy the global model by performing poisoning attacks: they train models using false data and exchange adversarial updates with neighbors [11, 12]. They may also delay or prevent the clients they control from accessing the blockchain for an unbounded amount of time. They are curious about clients' private information, and can perform information leakage attacks by observing model updates and then recovering details about clients' training data [13, 14]. To prevent such attacks during model update transmission, clients can send differentially-private updates to mask their gradients [15, 16]. We assume most of the auditors correctly follow the protocol; some of them may be corrupted, e.g., in a Sybil attack, but attackers cannot control more than 1/3 of the total auditors, consistent with Byzantine fault tolerance. We assume that all clients in the BDFL system can securely conduct initialization, in which they correctly obtain the first version of the global model. Auditors have comparable computational resources [17] and reasonable public validation data to perform model update verification.
### _Requirements_
**Security:** The main security requirement of the BDFL system is that it should enable model update exchanges between clients safely and correctly. We consider the security
of both clients and auditors. Honest clients who provide correct model updates will be acknowledged with profits and gain a better reputation, while clients with malicious model updates will ultimately be detected and suffer a dramatic drop in their reputation. Whenever a client receives a model update from a neighbor, it can verify the correctness of the update with the help of auditors and the blockchain. If the model update successfully passes the verification, the client will accept it and further aggregate it with its own model locally. If the model is considered to be malicious, the client will reject the update. As for the auditors, if an auditor claims that an incorrect model update passes verification, this behavior will certainly be detected within the blockchain, as the approval of more than 2/3 of the auditors is required for a verification result to be confirmed in the blockchain. The malicious auditor will eventually lose all its collateral, as ensured by the smart contract.
**Auditability:** Any clients with read access to blockchain are able to get the latest reputation of all clients. Clients can also audit the model updates from their neighbors with the help of auditors and the blockchain.
**Privacy:** The BDFL system should be able to keep client training data private by preventing information leakage attacks. Auditors receiving masked model updates from clients can successfully verify the correctness of the model, but cannot learn any information about the clients' training data.
**Robustness:** In the BDFL system, the local models on honest clients should eventually converge to a model that reflects the features of data from all clients with high accuracy. The system should remain robust under attack: even in the presence of attackers, it should achieve equivalent model accuracy. Moreover, the DFL network should be resilient to client dynamics such as client joins, leaves, and failures.
## III Design Overview
In a fully decentralized overlay network for DFL, the BDFL protocol suite provides two sets of protocols for clients: 1) a DFL network _Topology Maintenance Protocol_ to build the overlay network and recover it from churn; 2) a _Model Exchange Protocol_ which includes model verification to achieve fast model convergence for heterogeneous clients and asynchronous communication. Table I shows the API that BDFL provides to clients. BDFL generates unique identifiers and an initialized reputation for each client, e.g., when a client \(u\) joins the DFL network and registers in the BDFL system for model exchange, a unique identifier \(u_{id}\) is returned as a handle to be used in reputation management and subsequent API calls.
The BDFL protocols work with any overlay topology and we apply a recently proposed overlay topology as a case to study BDFL [18], which is based on near-random regular graphs (RRGs) [19].
In P2P model exchanges, a client with low-quality local models might pollute the high-quality models of its neighbors. This could lead to further propagation of these errors throughout the overlay network. Thus, every time a client receives a model update from a neighbor, the client first self-evaluates its confidence in this model. If the reputation of this neighbor is too low, the client can directly reject the model update. If the client suspects the model is unreliable, the Model Exchange Protocol allows it to request model verification from auditors. Auditors then use an anonymized public validation dataset to perform the verification [20]. If the accuracy computed by the auditors drops significantly compared to the previous model accuracy, the model update is considered to fail the verification. Auditors announce the verification result to the corresponding client, and reduce the reputation of the client who provided this model update. Otherwise, the client performs the model aggregation locally after receiving the correct verification result from auditors.
**DFL Topology.** In BDFL, each client \(u\) is identified by a set of _virtual_ coordinates \(C\), which is an \(L\)-dimensional vector \(<x_{1},x_{2},...,x_{L}>\). Each element \(x_{i}\) is a random real number computed as \(H(IP_{u}|i)\), where \(H\) is a publicly known hash function and \(IP_{u}\) is \(u\)'s IP address. We create \(L\) virtual ring spaces [19] such that each client in the \(i\)-th ring space is virtually positioned based on its coordinate \(x_{i}\). In each virtual ring space, every client has two adjacent clients based on their coordinates, forming overlay neighbors for model exchanges. Each client can have a maximum of \(2L\) neighbors, with \(L\) serving as a trade-off parameter between communication and convergence. A larger \(L\) leads to more model exchanges but also increases the communication cost.
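To make the coordinate construction concrete, the following minimal sketch derives the \(L\) ring coordinates from a client's IP address. SHA-256 stands in for the publicly known hash function \(H\), and normalizing the digest to a real number in \([0,1)\) is our own illustrative choice; neither detail is fixed by the protocol description.

```python
import hashlib

def virtual_coordinates(ip_address: str, L: int) -> list[float]:
    """Derive L virtual ring coordinates, one per ring space.

    Each coordinate is x_i = H(IP | i), mapped into [0, 1) so clients
    can be ordered on each ring. SHA-256 is an illustrative stand-in
    for the publicly known hash function H.
    """
    coords = []
    for i in range(L):
        digest = hashlib.sha256(f"{ip_address}|{i}".encode()).digest()
        # Interpret the first 8 bytes as an integer and normalize.
        coords.append(int.from_bytes(digest[:8], "big") / 2**64)
    return coords

# With L = 4 ring spaces, a client has up to 2L = 8 overlay neighbors.
print(virtual_coordinates("192.0.2.17", L=4))
```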
**Auditors.** Auditors are groups of nodes that have read and write access to the blockchain. They work jointly with the blockchain for client registration, model verification, and reputation management. The system leverages a public smart contract (aSC) to maintain an auditor list and ensure the correct behavior of the auditors. They are required to lock some collateral to be registered with this smart contract, i.e., aSC can verify the auditor's digital signature and knows the auditor's public key. We assume the majority of the auditors are reliable. They are willing to follow the protocol to get profits, and punish malicious auditors for misbehavior. For each client's model verification request and reputation update, a minimum number of auditors are required to sign the result before packing the update into the blockchains, thus tolerating a fixed percentage of auditors' failures to some degree. A common way to address such concerns is to use Byzantine-fault tolerant protocols [21]. For example, the auditors could use a BFT consensus such as [22] to stay up to date with all the coming requests from users. Such a solution can tolerate up to 1/3 faulty auditors. Thus, in BDFL, the smart contract defines that at least 2/3 of the auditors are required to sign each verification or update request before sending them to the blockchain, and only the auditors who correctly sign the request can get rewards.
**Model Update.** Different from FL, BDFL does not require a central server for model aggregation. Instead, every client can run the model aggregation locally using the model updates gathered from its neighbors. Once clients successfully prepare models locally, they can collect model updates from their
neighbors for further aggregation. Clients always reject model updates from neighbors with low reputations. Clients then query auditors for model verification on the rest of the model updates. After verification by the auditors, clients run the model aggregation on all the correct model updates.
**Reputation.** In BDFL, each client is assigned a reputation value by auditors, which reflects its trustworthiness. Clients should have a higher probability of accepting model updates from honest clients, and of rejecting those from malicious ones. To prevent poisoning attacks from malicious clients, every time auditors detect a model update of low accuracy, they decrease the reputation of the corresponding misbehaving client. On the other hand, honest clients gain a reputation increase by providing good model updates.
## IV Protocol Design
This section describes the design of BDFL protocols.
### _Topology Maintenance_
The Topology Maintenance Protocol in the BDFL system includes \(\mathsf{join\_network}\), \(\mathsf{leave\_network}\) and \(\mathsf{maintenance}\), as shown in Table I.
**Join.** Assume we currently have a correct DFL network topology with \(n\) clients. A new client \(u\) now boots up and wants to join the BDFL system for future model exchange. Before joining the DFL network, \(u\) has to know one existing client \(v\) in the overlay. \(u\) assigns itself a random coordinate in the virtual ring spaces as its position. It then sends a join request to \(v\), and tries to find all its neighbors in the network. To achieve this, \(u\) lets \(v\) send a \(Neighbor\_discovery\) message, which includes \(u\)'s IP address, through the current DFL network using greedy routing toward \(u\)'s location in each ring space respectively. \(Neighbor\_discovery\) stops at the client \(w\) who is closest to \(u\). In each virtual ring space, \(w\) finds, among its two adjacent nodes, the node \(p\) such that \(u\) should be inserted between \(w\) and \(p\) according to \(u\)'s coordinate.
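The sketch below illustrates the two primitives behind this join procedure on a single virtual ring. For simplicity it models one ring space as a sorted list of (coordinate, client) pairs with coordinates in \([0,1)\); in the real protocol the closest client \(w\) is located by greedy routing of the \(Neighbor\_discovery\) message, not by scanning a global list.

```python
import bisect

def ring_distance(a: float, b: float) -> float:
    """Distance between two coordinates on a unit ring (with wrap-around)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

def closest_client(ring: list[tuple[float, str]], target: float) -> str:
    """The client whose coordinate is closest to the joining client's
    position -- the role played by w at the end of greedy routing."""
    return min(ring, key=lambda entry: ring_distance(entry[0], target))[1]

def insert_client(ring: list[tuple[float, str]], coord: float, cid: str) -> None:
    """Insert the joining client between its two adjacent clients."""
    bisect.insort(ring, (coord, cid))

ring = [(0.12, "a"), (0.47, "b"), (0.81, "c")]
print(closest_client(ring, 0.55))  # -> "b"
insert_client(ring, 0.55, "u")     # u now sits between b and c
```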
**Leave.** When a client wants to leave the system, \(\mathsf{leave\_network}\) should guarantee that the BDFL system can still maintain a correct DFL network topology. Assume client \(u\) wants to leave by running \(\mathsf{leave\_network}\). \(u\) sends messages to its two adjacent clients in each virtual ring space, and tells them to add each other to their neighbor sets.
### _System Maintenance_
Auditors are responsible for helping maintain the BDFL system. They track client information such as client identity and reputation, and post this information to the blockchain periodically, so all clients can easily check it by reading the blockchain. To do this, each auditor maintains a local table recording client information. It includes three parts: the reputations of all valid clients, which are updated according to the quality of the models provided by clients; a joining client set, containing the clients who joined the system after the last update; and a leaving client set. The leaving client set includes two kinds of clients: those who want to stop exchanging models with others and leave the system, and those who have been inactive for a long period or have a very low reputation. The system kicks the latter out by adding them to the leaving client set.
The BDFL system should also be able to maintain a correct DFL network topology experiencing client failures. The \(\mathsf{maintenance}\) protocol requires every client to send neighbors a heartbeat message periodically, to filter out inactive clients.
Clients join BDFL to collaboratively train ML models. A new client joins the BDFL system by calling the \(\mathsf{join\_network}\) function. In this function, in addition to the \(\mathsf{Join}\) process to the network topology, as previously described, the client also needs to register on the BDFL blockchain to participate in future model exchanges. To do this, the new client, denoted as \(u\), sends a join query to auditors. Auditors will record the client's information \(u_{id}\) and assign a default reputation \(rep_{u}\) to it. The auditor committee will pack the client information update to the blockchain periodically. Once the update message that includes the new client \(u\) is confirmed on the blockchain, \(u\) is considered to have successfully registered within the BDFL system, and can start model training. At this point, \(u\) has no model, and has to initialize the training process by first gathering models from its neighbors. \(u\) has to verify the correctness of the models with the help of auditors before aggregating them. After getting verification results from auditors, \(u\) only uses the correct models from the neighboring clients, and discards the others. \(u\) then locally generates an aggregated model as its own model. Later, it keeps gathering model updates from its neighbors and continues updating its local model to improve accuracy. It also exchanges its local training model with its neighbors to contribute to the whole system and earn profits.
Clients exchange local models with neighbors periodically, and the models will be evaluated by auditors which will affect their reputations. Thus, client reputations are updated by auditors during model verification. If a malicious model update is detected and verified by auditors, the corresponding client who provides this model will be punished with a low
\begin{table}
\begin{tabular}{l l l l} \hline
**BDFL APIs** & **Inputs** & **Outputs** & **API Description** \\ \hline _Topology Maintenance_: & & & \\ \(\mathsf{join\_network}\) & \(u,v\) & \(u_{id},rep_{u}\) & Join the DFL network to find its correct neighbors and register in the BDFL system. \\ \(\mathsf{leave\_network}\) & \(u,u_{id}\) & - & Terminate model exchange and leave the network. \\ \(\mathsf{maintenance}\) & \(u\) & - & Maintain the correct DFL network topology by checking the liveness of all \(u\)'s neighbors. \\ \hline _Model Exchange_: & & & \\ \(\mathsf{local\_verify}\) & \(u,v,\omega_{v}\) & Boolean & \(u\) locally pre-evaluates the accuracy of model update \(\omega_{v}\) from its neighbor \(v\). \\ \(\mathsf{request\_verify}\) & \(u,v,\omega_{v}\) & \(\sigma_{\omega_{v}}\) & \(u\) requests model verification on the model update \(\omega_{v}\) from its neighbor \(v\). \\ \(\mathsf{aggregate\_model}\) & \(\omega_{u},\omega_{v}\) & \(\omega^{\prime}\) & \(u\) locally aggregates the models from its neighbors. \\ \hline \end{tabular}
\end{table} TABLE I: BDFL API
reputation. Clients with very low reputations will be removed from the BDFL system forever. To achieve this, auditors check their local client table periodically to filter out the clients with low reputations, and add them to the leaving client set. After this client table is confirmed in the blockchain, those clients are removed from the system successfully.
### _Model Exchange_
In the model exchange, each client exchanges its local update with neighbors for aggregation. However, clients might not be willing to send their models directly, out of concern that others might learn information about their datasets. It has long been established that gradients often leak sensitive information about clients' local datasets [23, 24]; therefore, clients need to hide their model updates to preserve privacy. To prevent information leakage attacks, clients use differential privacy (DP) to hide their updates during model exchange and verification by adding noise sampled from a normal distribution [16]. We follow the concept of \((\epsilon,\delta)\) differential privacy as applied in many previous works [3, 9]. \((\epsilon,\delta)\)-DP provides a strong criterion for privacy preservation in distributed data processing systems. Thus, each client constructs a noisy model update by adding noise to the gradient such that \(\omega^{t}=\zeta^{t}+\omega^{t}_{original}\).
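A minimal sketch of this masking step is shown below, using the standard Gaussian mechanism. The norm clipping and the calibration of the noise scale \(\sigma\) from \((\epsilon,\delta)\) are common practice rather than details fixed by the text, so treat both as assumptions.

```python
import numpy as np

def mask_update(gradient: np.ndarray, clip_norm: float,
                epsilon: float, delta: float) -> np.ndarray:
    """Release a noisy update omega = zeta + omega_original.

    The update is clipped to bound its sensitivity, then Gaussian
    noise zeta is added with the standard calibration
    sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon,
    giving (epsilon, delta)-DP for a single release.
    """
    norm = np.linalg.norm(gradient)
    clipped = gradient * min(1.0, clip_norm / (norm + 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    zeta = np.random.normal(0.0, sigma, size=gradient.shape)
    return clipped + zeta
```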
Another challenge of performing decentralized model exchanges is that there is no central server to evaluate the quality of models from different clients. Thus, our Model Exchange Protocol is designed to validate model updates to mitigate the impact of malicious models. Fig. 2 illustrates the overview of the Model Exchange Protocol. It runs on both the clients and the auditors. Every client prepares its model locally, and exchanges the model update with neighbors once training finishes. BDFL uses asynchronous communication and allows each client to use a different communication and training time period in a round. Assume an honest client \(u\) has a local model \(\omega^{t}_{u}\) in its current round \(t\), and three neighbors \(v,w\) and \(x\) to exchange models with. In the neighbor set of \(u\), in addition to the IP addresses and coordinates, it also stores the fingerprint \(f\) of the most recent model update received from each neighbor, computed by hashing the model update with a public hash function. When a client prepares a model update to send, its neighbors check the fingerprint \(f\) to avoid repetitive updates. This approach effectively mitigates unnecessary traffic, reducing the frequency of exchanging duplicate models.
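A minimal sketch of the fingerprint check follows; the use of SHA-256 and the byte-level serialization of an update are illustrative assumptions.

```python
import hashlib

def fingerprint(update_bytes: bytes) -> str:
    """Public-hash fingerprint f of a serialized model update."""
    return hashlib.sha256(update_bytes).hexdigest()

def is_duplicate(update_bytes: bytes, last_seen: dict[str, str],
                 neighbor_id: str) -> bool:
    """Drop an update whose fingerprint matches the most recent one
    received from this neighbor; otherwise remember the new one."""
    f = fingerprint(update_bytes)
    if last_seen.get(neighbor_id) == f:
        return True
    last_seen[neighbor_id] = f
    return False
```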
At client \(u\), after gathering model updates \(\omega^{t}_{v}\), \(\omega^{t}_{w}\), and \(\omega^{t}_{x}\) from all its neighbors, \(u\) first checks their reputations by reading the blockchain, and rejects the model updates from neighbors with low reputation. Suppose client \(x\) has a low reputation. \(u\) will directly reject the model update \(\omega^{t}_{x}\), and only continue to verify the correctness of the remaining model updates for aggregation. Now \(u\) needs to verify model updates \(\omega^{t}_{v}\) and \(\omega^{t}_{w}\) by sending verification requests \(request\_verify(u,v,\omega^{t}_{v})\) and \(request\_verify(u,w,\omega^{t}_{w})\) to auditors. Auditors use the public validation data to verify the model updates. The smart contract maintains a history of model verification results, which include an average model accuracy \(\mu_{t-1}\) and standard deviation \(\sigma_{t-1}\) of the latest 20 epochs. If the computed accuracy of a received model update, e.g. \(\omega^{t}_{w}\), is less than \(\mu_{t-1}-2\sigma_{t-1}\), that model update is considered to be malicious. Auditors will reply with the verification results to \(u\), and punish \(w\) for the malicious update by decreasing its reputation. For models that pass the verification, auditors will increase the reputations of the corresponding clients, and the smart contract updates the average model accuracy \(\mu_{t}\) and standard deviation \(\sigma_{t}\) as well. After receiving verification results from auditors, \(u\) finds that \(\omega^{t}_{v}\) is the only valid model update. \(u\) will discard the other model updates and later aggregate \(\omega^{t}_{v}\) with its local model \(\omega^{t}_{u}\) to train a new local model \(\omega^{t+1}_{u}\).
In a more complex scenario, client \(u\) discovers after model verification that it has received multiple valid model updates from various neighbors with different reputations. \(u\) tends to believe that clients with high reputations provide high-quality model updates, while low-reputation clients might provide low-quality ones. Even if all the updates are valid, \(u\) still wants to limit the impact of low-quality models and amplify the impact of high-quality models in the aggregation. Thus, instead of assuming that all model updates contribute equally to model aggregation, we let high-reputation clients have higher impact. \(u\) defines a set of confidence parameters \(c\) for each model update, assigning higher \(c\) values to clients with better reputations. The models from \(u\)'s neighbors are then aggregated as follows:
\[\omega^{t+1}_{u}=\frac{\sum_{j\in N\cup\{u\}}c^{j}\omega^{t}_{j}}{\sum_{j\in N\cup\{u\}}c^{j}},\]
where \(N\) is the neighbor set of client \(u\). The model aggregation is computed once every round, and the models from each neighbor are always the most recent ones. In this way, clients with low reputations have less impact on other clients.
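The sketch below transcribes this confidence-weighted aggregation directly from the formula; representing updates as NumPy arrays is an illustrative assumption.

```python
import numpy as np

def aggregate(own_update: np.ndarray, own_conf: float,
              neighbor_updates: dict[str, np.ndarray],
              confidence: dict[str, float]) -> np.ndarray:
    """Weighted average over N and the client itself: updates from
    high-reputation neighbors carry larger confidence weights c^j."""
    total = own_conf * own_update
    weight = own_conf
    for j, update in neighbor_updates.items():
        total = total + confidence[j] * update
        weight += confidence[j]
    return total / weight
```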
### _Model Verification_
In FL, the network typically provides a centralized validation dataset to validate the global model. The idea of using an anonymized public validation dataset for FL model validation is well established in existing research [20, 25], and is also used for detecting model poisoning attacks [26]. We follow this assumption and adopt the idea for BDFL model update verification. We assume auditors have a public validation dataset \(D\) contributed by their clients and a global model \(\omega_{0}\). Before starting training, auditors collect training dataset
Fig. 2: Protocol overview of BDFL.
samples from clients. We adopt \(\epsilon\)-differential privacy [27] to protect the data privacy of clients. After gathering these anonymized datasets, auditors pre-evaluate their quality by comparing the accuracy of \(\omega_{0}\) on the public dataset \(D\) and on each fetched dataset \(D_{i}\). If the accuracy on \(D_{i}\) is much lower than that on \(D\), auditors reject the dataset \(D_{i}\). Auditors then integrate the satisfactory datasets into their local validation dataset to compose a new validation dataset \(D\). During the training process, clients continue collecting local data, and new clients may join the system. Thus, auditors gather clients' datasets periodically to update the validation dataset \(D\).
When receiving a model verification request, auditors first check the fingerprint \(f\); if it matches a previously verified update, they can directly reply to the client with the previous verification result. Otherwise, auditors compute the accuracy by executing the model \(\omega\) on the public validation dataset \(D\). The smart contract aSC collects verification results from all auditors. It disregards results that significantly deviate from the majority, and considers auditors providing such results as malicious. aSC computes the average of the remaining accuracy results \(A_{t}^{u}\), and compares it with the previously stored average model accuracy \(\mu_{t-1}\) and standard deviation \(\sigma_{t-1}\) of the latest 20 epochs. If \(A_{t}^{u}<\mu_{t-1}-2\sigma_{t-1}\), \(\omega\) is judged to be a malicious model update.
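The decision rule can be condensed into a short sketch. The text only says that results which "significantly deviate from the majority" are disregarded, so the median-based outlier filter below is our own assumption; the \(\mu_{t-1}-2\sigma_{t-1}\) threshold is taken directly from the text.

```python
import numpy as np

def verdict(accuracies: list[float], mu_prev: float,
            sigma_prev: float, z: float = 2.0) -> tuple[bool, float]:
    """Aggregate auditor accuracy reports into a verification verdict.

    Reports far from the median are discarded as suspect; the mean of
    the rest is compared against the mu - 2*sigma threshold kept by
    the smart contract.
    """
    arr = np.asarray(accuracies)
    spread = np.std(arr) + 1e-12
    kept = arr[np.abs(arr - np.median(arr)) <= z * spread]
    avg = float(kept.mean())
    return avg >= mu_prev - 2.0 * sigma_prev, avg
```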
### _Reputation Management_
Clients joining the system for the first time are assigned an initial reputation \(rep\). All reputations are updated by auditors during each model verification. Suppose a client submits a model verification request on model update \(\omega_{u}\) from its neighbor \(u\). Auditors validate \(\omega_{u}\) with public test data [28] and obtain the corresponding accuracy values. aSC filters out outlier results and uses the average of the remaining results as the accuracy value \(A_{t}^{\omega_{u}}\) of model update \(\omega_{u}\). Note that aSC maintains an accuracy history of previous model updates, \(\mu_{t-1}\) and \(\sigma_{t-1}\), which are the average model accuracy and standard deviation of the latest 20 epochs, respectively. If \(A_{t}^{\omega_{u}}<\mu_{t-1}-2\sigma_{t-1}\), auditors consider client \(u\) to be malicious, and punish \(u\) with a decrease in its reputation from \(rep_{u}\) to \(\frac{rep_{u}}{2}\). For honest clients, their model update accuracy might not always increase, due to imperfections in their datasets. The system tolerates this small accuracy decrease and does not lower their reputations as long as their model accuracy is larger than \(\mu_{t-1}-2\sigma_{t-1}\); their reputations remain the same in this round. Other honest clients gain a reputation increase \(\Delta rep\) according to their model accuracy, computed as \(\frac{A_{t}^{\omega_{u}}-\mu_{t-1}}{\sigma_{t-1}}\cdot 0.01\). For example, if the model accuracy of \(\omega_{u}\) is \(A_{t}^{\omega_{u}}=\mu_{t-1}+2\sigma_{t-1}\), the reputation of the corresponding client \(u\) will increase by 0.02.
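These rules can be condensed into a single function; this is a direct transcription of the rules above, not the authors' implementation.

```python
def update_reputation(rep: float, accuracy: float,
                      mu_prev: float, sigma_prev: float) -> float:
    """Halve the reputation of a client whose update falls below
    mu - 2*sigma; leave it unchanged for accuracy between mu - 2*sigma
    and mu; otherwise grant the proportional increase from the text."""
    if accuracy < mu_prev - 2.0 * sigma_prev:
        return rep / 2.0
    delta = max(0.0, (accuracy - mu_prev) / sigma_prev) * 0.01
    return rep + delta
```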
If a malicious model update is detected and verified by auditors, the corresponding client will be punished with a decrease in its reputation by 50%. However, a low reputation could result in its model update being declined by its neighbors in future model exchanges. Therefore, if such a client wants to continue participating in the model exchange and gain its reputation back, it must let auditors pre-verify its model updates prior to the model exchange period. If auditors validate this model update, the client will exchange its model along with the proof. Its neighbors can later confirm the correctness of this model update using the provided proof, thereby eliminating the need for another round of auditor verification.
Note that there is always some latency before the new reputation (client table) is posted to the blockchain. Thus, to get the latest reputation information, clients can query auditors first. After some time, clients can verify the correctness of the query result by comparing it with the reputation confirmed in the blockchain. If clients detect that the reputation obtained from auditors differs from that in the blockchain, they can submit disputes to the blockchain for compensation, and the malicious auditors will be punished and lose all their collateral.
### _Incentives Mechanism_
Interacting with the blockchain always incurs some fees, e.g., for running smart contracts and submitting transactions. Thus, auditors require incentives in order to perform model verification and reputation management. In BDFL, clients are the devices that want to train a better model using model updates from other clients in the system. If they directly aggregate models without verification, potentially malicious models from attackers could lead to poor performance of their trained models. Thus, we assume clients are willing to pay both the auditors who perform verification and the honest clients who provide them with high-quality model updates. Clients can also earn rewards by providing honest model updates to other clients. Besides, there might be services that want to use the final model directly without participating in the training process. Those services can request the model via the smart contract aSC by making a payment; aSC then distributes the payment to auditors and clients according to their contributions.
An incentive mechanism is implemented via the smart contract aSC to motivate participating clients and auditors to be honest and to report misbehavior. Any device with read-and-write access to the blockchain is allowed to become an auditor by registering with the auditor smart contract (aSC) and providing collateral. Auditors are required to verify model updates, and to distribute incentives to honest clients while imposing punishment on misbehaving clients through the aSC. For each verification request, aSC gathers verification results from all auditors, determines which auditors provided correct verification results, and records those who submit incorrect ones. If the number of incorrect verification results submitted by an auditor exceeds a predetermined threshold, the auditor is judged to be malicious and subject to penalties, such as losing all its collateral. It is then removed from the auditor list in the smart contract. Conversely, honest auditors who provide correct verification results receive rewards.
With the verification result of a model update from a client, the smart contract aSC will determine if this client is honest and should get some rewards. For example, currently aSC has
a verification result \(A_{t}^{\omega_{u}}\) on the model update \(\omega_{u}\) provided by client \(u\). aSC finds that it is a high-quality update, with \(A_{t}^{\omega_{u}}\) larger than the average model accuracy of the latest 20 epochs \(\mu_{t-1}\), so \(u\) is rewarded with \(c(A_{t}^{\omega_{u}}-\mu_{t-1})\).
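A one-line sketch of this reward rule, treating \(c\) as a fixed payout coefficient and assuming no reward for updates at or below the recent average (the text does not specify either detail):

```python
def reward(accuracy: float, mu_prev: float, c: float) -> float:
    """Pay c * (A - mu_{t-1}) for an update that beats the recent
    average accuracy; nothing otherwise."""
    return c * (accuracy - mu_prev) if accuracy > mu_prev else 0.0
```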
In model verification, when receiving a model update with high accuracy, auditors could consider this model good enough to serve as a global model that correctly reflects the features of data from all clients. In that case, an auditor can choose to maintain this model \(\omega\) locally and send a digest \(h(\omega)\) to aSC, which records which auditors maintain a local copy of the global model, together with the corresponding digest. This prevents auditors from serving a manipulated global model. If a service requests a model from aSC with the required fee, aSC exposes to the service the list of auditors who maintain the global model. The service can randomly select an auditor to fetch the model and verify its correctness by querying aSC. aSC then distributes the fee collected from the service both to the auditor who provided the model and to all clients. The distribution to clients is based on their respective reputations, which reflect their individual contributions to the model.
## V Performance Evaluation
### _Methodology_
We use a desktop machine with an NVIDIA GeForce RTX3060 Ti as the platform for our experiments. We build a 100-client network, train on real data, and simulate reputation management and incentive distribution. The purpose of this experiment is to test the effectiveness of the reputation mechanism and the robustness of the system in a simulated environment. We choose MNIST [29] and CIFAR-10 [30] image classification as the tasks for BDFL, using multilayer perceptron (MLP) and convolutional neural network (CNN) models, respectively.
### _Evaluation Results_
**Robustness.** We compare BDFL with a version without the reputation mechanism on two different datasets. The average accuracy of model updates verified by auditors is shown in Fig. 3 and Fig. 4, respectively. The Baseline shows the final model accuracy with no malicious clients in the system. On both datasets, BDFL always demonstrates better accuracy under different proportions of malicious clients. The accuracy gain over the version without reputation management is more pronounced at higher proportions of malicious clients. Without verification, malicious clients can degrade the average model accuracy through model poisoning attacks: honest clients use those malicious models for aggregation directly, harming the overall performance of the system. The more malicious clients in the system, the worse the average model accuracy. However, with the reputation management mechanism, malicious clients providing bad model updates are detected by auditors and filtered out of the system. Malicious model updates are not accepted by any client; thus, even with 30% malicious clients in the system, the average accuracy of BDFL with the reputation mechanism degrades by only 0.73% relative to the baseline, while BDFL without the reputation mechanism degrades by 10.28% for MNIST classification, as shown in Fig. 3c. For the CIFAR-10 task, the average accuracy of BDFL with the reputation mechanism degrades by 2.49%, and BDFL without it degrades by 19.20%, as shown in Fig. 4c.
**Reputation value evaluation.** We monitor the reputation value changes for both honest and malicious clients on the MNIST dataset. The initial reputation of each client is set to 0.5, and the reputation value is dynamically updated according to client behavior. The percentage of malicious clients in this set of experiments is set to 20%. Fig. 5a shows the reputation changes for 3 randomly chosen honest clients and 3 randomly chosen malicious clients. We find that honest clients always gradually achieve a high reputation value, even though they might not always be able to provide model updates of high accuracy due to imperfections in their datasets. On the other hand, the reputation values of malicious clients eventually decrease to 0 within 100 rounds, and they are excluded from the system by auditors once extremely low reputations are detected. For malicious Client 1, even though it does not misbehave all the time, it is still detected and penalized with a low reputation. Fig. 5b shows the total number of malicious model updates by malicious clients. After a short while, all three malicious devices are detected and can no longer perform poisoning attacks.
**Incentive mechanism evaluation.** We also monitor the cumulative incentives for the same 3 honest clients and 3 malicious clients on the MNIST dataset as in the previous experiment. As shown in Fig. 5c, honest clients can always earn rewards, even if they are sometimes unable to provide good model updates. Honest Client 1, which always provides high-quality updates, gains more rewards. On the other hand, malicious clients might earn some rewards at the beginning; however, after a short period, they are detected and cannot gain any further profit. Malicious Client 1 behaves correctly some of the time, making it harder to detect. But after round 95, it acquires an extremely low reputation, is expelled from the system, and can no longer gain rewards.
## VI Related Work
Federated Learning with a centralized aggregator presents a single point of failure and is vulnerable to malicious clients and false data. Thus, more decentralized setups have been proposed to address these limitations of the centralized architecture, such as blockchain-based federated learning. We review the existing state-of-the-art blockchain-based federated learning systems and summarize them in Table II.
BLADE-FL [31] proposed a blockchain-assisted decentralized FL, in which each client broadcasts the trained model to other clients, aggregates its own model with received ones, and then competes to generate a block before its local training of the next round. However, in FL, clients might not have enough computing capability to do both training and mining. Biscotti [9] focused on the security and privacy issues between peering clients. Instead of doing training and mining at the same time, clients' roles (such as verifier and aggregator) are selected randomly each round. However, it still requires all clients to agree on a global model at the end of each round. BAFFLE [32] leverages smart contracts to maintain the global model copy and the associated computational state of the users. The machine learning model weight vector is partitioned into numerous chunks to be stored in the smart contracts. However, with the dramatically growing size of ML models, chunking might become extremely challenging. VFChain [28] utilized the blockchain to verify and audit the correctness of the training process for federated learning. Instead of storing all the model updates, it only records the related verifiable information of models in the blockchain for future audit. However, as in previous works, it requires synchronization between all clients. BAFL [33] introduced an asynchronous FL framework that uses a blockchain for model aggregation. However, every client needs to upload its local model to the blockchain, which is even more inefficient.
Compared to existing work, BDFL is the first solution for asynchronous blockchain-based fully decentralized FL, in which clients can join and leave the system dynamically and exchange models in a P2P manner without the need to maintain a global model in each round.
## VII Conclusion
We present BDFL, a blockchain-based fully decentralized federated learning system that enables clients to exchange models in a P2P manner with model verification, achieving high learning accuracy and system robustness. We design an incentive mechanism to encourage clients to participate in the model exchange, and a reputation model to evaluate the trustworthiness of each client to avoid malicious model updates
Fig. 4: CIFAR-10, Average Accuracy vs. Communication Rounds
Fig. 5: MNIST, 20% Malicious clients. We sample 3 honest clients and 3 malicious clients and plot their reputation, accumulative reward (relative to ‘Honest1’) and the number of successful malicious updates in the training period. Client ‘Malicious1’ performs attacks on rounds 4,10,50,80,90. Client ‘Malicious2’ performs attacks on rounds 8,14,19,34,52,54,71,77,81,90,95. Client ’Malicious3’ performs attacks on rounds 20,23,34,58,61,83,95
Fig. 3: MNIST, Average Accuracy vs. Communication Rounds
from attackers. The evaluation results via simulations show that BDFL achieves fast model convergence and high accuracy on real datasets even when 30% of the clients are malicious.
## VIII Acknowledgement
The authors were partially supported by NSF Grants 1750704, 1932447, and 2114113. C. Qian was partially supported by the Army Research Office and was accomplished under Grant Number W911NF-20-1-0253. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. We thank the anonymous reviewers for their comments.
|
2301.09715 | PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question
Answering Research and Development | The field of Question Answering (QA) has made remarkable progress in recent
years, thanks to the advent of large pre-trained language models, newer
realistic benchmark datasets with leaderboards, and novel algorithms for key
components such as retrievers and readers. In this paper, we introduce PRIMEQA:
a one-stop and open-source QA repository with an aim to democratize QA
re-search and facilitate easy replication of state-of-the-art (SOTA) QA
methods. PRIMEQA supports core QA functionalities like retrieval and reading
comprehension as well as auxiliary capabilities such as question generation.It
has been designed as an end-to-end toolkit for various use cases: building
front-end applications, replicating SOTA methods on pub-lic benchmarks, and
expanding pre-existing methods. PRIMEQA is available at :
https://github.com/primeqa. | Avirup Sil, Jaydeep Sen, Bhavani Iyer, Martin Franz, Kshitij Fadnis, Mihaela Bornea, Sara Rosenthal, Scott McCarley, Rong Zhang, Vishwajeet Kumar, Yulong Li, Md Arafat Sultan, Riyaz Bhat, Radu Florian, Salim Roukos | 2023-01-23T20:43:26Z | http://arxiv.org/abs/2301.09715v2 | PrimeQA: The Prime Repository for State-of-the-Art Multilingual Question Answering Research and Development
###### Abstract
The field of Question Answering (QA) has made remarkable progress in recent years, thanks to the advent of large pre-trained language models, newer realistic benchmark datasets with leaderboards, and novel algorithms for key components such as retrievers and readers. In this paper, we introduce PrimeQA: a one-stop and open-source QA repository with an aim to democratize QA research and facilitate easy replication of state-of-the-art (SOTA) QA methods. PrimeQA supports core QA functionalities like retrieval and reading comprehension as well as auxiliary capabilities such as question generation. It has been designed as an end-to-end toolkit for various use cases: building front-end applications, replicating SOTA methods on public benchmarks, and expanding pre-existing methods. PrimeQA is available at: [https://github.com/primeqa](https://github.com/primeqa).
## 1 Introduction
Question Answering (QA) is a major area of interest in Natural Language Processing (NLP), consisting primarily of two subtasks: information retrieval (IR) Manning (2008); Schutze et al. (2008) and machine reading comprehension (MRC) Rajpurkar et al. (2016, 2018); Kwiatkowski et al. (2019); Chakravarti et al. (2020). IR and MRC systems, also referred to as _retrievers_ and _readers_, respectively, are commonly assembled in an end-to-end open-retrieval QA pipeline (henceforth, OpenQA) Chen et al. (2017); Lee et al. (2019); Karpukhin et al. (2020); Santhanam et al. (2022) that accepts a query and a large document collection as its input and provides an answer as output. The specific role of the retriever is to identify documents or passages (i.e., _contexts_) that contain information relevant to the query, while the reader component extracts a precise answer from such contexts.
While QA as a field has advanced rapidly, software to perform and replicate QA experiments has mostly been written in silos. At the time of this writing, no central repository exists that facilitates the training, analysis and augmentation of state-of-the-art (SOTA) models for different QA tasks at scale. In view of the above, and with an aim to democratize QA research by providing easy replicability, here, we propose PrimeQA: an open-source repository1 designed as an end-to-end toolkit, with all the necessary tools to easily and quickly create a custom QA application. We provide a main repository that contains easy-to-use scripts for retrieval, machine reading comprehension, and question generation with the ability to perform training, inference, and performance evaluation. Additionally, several sibling repositories provide features for easily connecting various retrievers and readers and creating a front-end user interface (UI) for end users. PrimeQA has been designed as a platform for QA development and research and encourages collaboration from everyone in the field from beginners to experts. PrimeQA already has a growing developer base with contributions from major academic institutions.
Footnote 1: [https://github.com/primeqa](https://github.com/primeqa)
Our paper makes several major contributions:
* We present PrimeQA, a first-of-its-kind repository for comprehensive QA research. It is free to use, well documented, easy to contribute to, and license friendly (Apache 2.0) for both academic and commercial usage.
* PrimeQA provides the mechanism via accompanying repositories to create custom OpenQA applications containing both retrievers and readers for industrial deployment including a front-end UI.
* We provide _easy-to-use_ implementations of SOTA retrievers and readers that are at the
top of major QA leaderboards, with capabilities for performing training, inference and performance evaluation of these models.
* PrimeQA models are built on top of Transformers Wolf et al. (2020) and are available on the Hugging Face model hub.2
Footnote 2: [https://huggingface.co/PrimeQA](https://huggingface.co/PrimeQA)
## 2 Related Work
One of the largest open source community efforts for NLP software is Papers with Code Robert and Thomas (2022). Their mission is to create a free and open resource for NLP papers, code, datasets, methods and evaluation tables. Their focus is to cater to the wider NLP and Machine Learning community and not just QA. Even though the QA section includes over 1800 papers with their code, the underlying software components (written in various versions of both PyTorch and TensorFlow, with no central control whatsoever) do not communicate with each other. These disjoint QA resources hinder replicability and effective collaboration, and ultimately lead to quick sunsetting of new capabilities.
Recently, one of the most widely used repositories among NLP users has been the Transformers repository Wolf et al. (2020). However, while widely adopted by the community, it lacks a distinct focus on QA. Unlike PrimeQA, it supports only one general script for extractive QA and several stand-alone Python scripts for retrievers. Similarly, FairSeq Ott et al. (2019) and AllenNLP Gardner et al. (2018) also focus on a wide array of generic NLP tasks and hence do not present a dedicated QA repository that lets anyone plug and play components for a custom search application. There exist several toolkits catered to building customer-specific search applications NVIDIA (2022); Deepset (2021) or search-based virtual assistants IBM (2020). However, while they provide a good foundation for software deployment, unlike PrimeQA, they lack a focus on replicating (and extending) the latest SOTA for QA on academic benchmarks, which is essential for rapid progress in this field.
## 3 PrimeQA
PrimeQA is a comprehensive open-source resource for cutting-edge QA research and development, governed by the following design principles:
\(\bullet\)**Reproducible:** Users can easily reproduce SOTA published results and combine or "plug and play" the corresponding methods with companion modules, e.g., a reader with a retriever. The various supported applications and the associated PrimeQA components are listed in Table 1.
\(\bullet\)**Customizable:** We allow users to customize and extend SOTA models for their own applications. This often entails fine-tuning on users' custom data, which they can provide through one of several supported data formats, or process on their own by writing a custom data processor.
\(\bullet\)**Reusable:** We aim to make it straightforward for developers to quickly deploy pre-trained off-the-shelf PrimeQA models for their QA applications, requiring minimal code change.
\(\bullet\)**Accessible:** We provide easy integration with Hugging Face Datasets and the Model Hub, allowing users to quickly plug in a range of datasets and models as shown in Table 1.
PrimeQA in its entirety is a collection of four different repositories: a primary _research and replicability_ repository\({}^{3}\) and three accompanying repositories\({}^{4,5,6}\) for industrial deployment. Figure 1 shows a diagram of the PrimeQA repository. It provides several entry points, supporting the needs
\begin{table}
\begin{tabular}{l l} \hline \hline
**Core Models** & **Extensions** \\ \hline \multicolumn{2}{c}{**Retriever**} \\ \hline BM25 Robertson and Zaragoza (2009) & Dr.DECR * Li et al. (2022) \\ DPR Karpukhin et al. (2020) & \\ ColBERT Santhanam et al. (2022) & \\ \hline \multicolumn{2}{c}{**Reader**} \\ \hline General MRC Alberti et al. (2019) & ReasonBERT Deng et al. (2021) \\ FID Izacard and Grave (2020) & OmniTab Jiang et al. (2022) \\ Boolean* McCarley et al. (2022) & \\ Lists & \\ Tapas Herzig et al. (2020) & \\ Tapex Liu et al. (2021) & \\ \hline \multicolumn{2}{c}{**Question Generation**} \\ \hline Table QG Chemmengath et al. (2021) & \\ Passage QG & \\ Hybrid QG & \\ \hline \hline \end{tabular}
\end{table}
Table 1: A non-exhaustive list of core PrimeQA models for the three main supported tasks (left) and their various extensions (right) available on our Hugging Face model hub: [https://huggingface.co/PrimeQA](https://huggingface.co/PrimeQA). * SOTA leaderboard systems.
of different users, as shown at the top of the figure. The repository is centered around three core components: a **retriever**, a **reader**, and a **question generator** for data augmentation. These components can be used as individual modules or assembled into an end-to-end QA pipeline. All components are implemented on top of existing AI libraries.
### The Core Components
Each of the three core PrimeQA components supports different flavors of the task it has been built for, as we detail in this section.
#### 3.1.1 Retriever: run_ir.py
Retrievers predict documents (or passages) from a document collection that are relevant to an input question. PrimeQA has both sparse and SOTA dense retrievers along with their extensions, as shown in Table 1. We provide a single Python script run_ir.py that can be passed arguments to switch between different retriever algorithms.
**Sparse:** BM25 (Robertson and Zaragoza, 2009) is one of the most popular sparse retrieval methods, thanks to its simplicity, efficiency and robustness. Our Python-based implementation of BM25 is powered by the open-source library PySerini.
**Dense:** Modern neural retrievers have utilized dense question and passage representations to achieve SOTA performance on various benchmarks, while needing GPUs for efficiency. We currently support ColBERT (Santhanam et al., 2022) and DPR (Karpukhin et al., 2020): both fine-tune pretrained language models to train question and passage encoders (Devlin et al., 2019; Conneau et al., 2020). They utilize FAISS (Johnson et al., 2017) for K-nearest neighbor clustering and compressed index representations, respectively, and support multilingual retrieval with the question and the documents being in the same (Lee et al., 2019; Longpre et al., 2021) or different languages (cross-lingual) (Asai et al., 2021).
#### 3.1.2 Reader: run_mrc.py
Given a question and a retrieved passage--also called the _context_--a reader predicts an answer that is either extracted directly from the context or is generated based on it. PrimeQA supports training and inference of both extractive and generative readers through a single Python script: run_mrc.py. It works out-of-the-box with different QA models extended from the Transformers library (Wolf et al., 2020).
**Extractive:**PrimeQA's general extractive reader is a pointer network that predicts the start and end of the answer span from the input context (Devlin et al., 2019; Alberti et al., 2019). It can be initialized with most large pre-trained language models (Devlin et al., 2019; Liu et al., 2019; Conneau et al., 2020). In addition, our reader is extremely versatile as it can provide responses to questions with list answers (Khashabi et al., 2021), _yes/no_ responses to Boolean questions (Clark et al., 2019, 2020; Kwiatkowski et al., 2019), answer spans found in tables (Herzig et al., 2020) and in multimodal (text+image) documents (Mathew et al., 2021). Examples of several extractive readers along with their extensions are provided in Table 1.
**Generative:** PrimeQA provides generative readers
Figure 1: The PrimeQA Repository: the core components and features.
based on the popular Fusion-in-Decoder (FiD) Izacard and Grave (2020) algorithm. Currently, it supports easy initialization with large pre-trained sequence-to-sequence (henceforth, seq2seq) models Lewis et al. (2019); Raffel et al. (2022). With FiD, the question and the retrieved passages are used to generate relatively long and complex multi-sentence answers providing support for long form question answering tasks, _e.g._, ELI5 Petroni et al. (2021); Fan et al. (2019).
#### 3.1.3 Question Generation: run_qg.py
Data augmentation through synthetic question generation (QG) can be a powerful tool for improving QA model generalization Alberti et al. (2019); Sultan et al. (2020); Gangi Reddy et al. (2022), including in domain adaptation Shakeri et al. (2021); Gangi Reddy et al. (2021, 2022), domain generalization Sultan et al. (2022) and few-shot learning Yue et al. (2022) settings. QG can also circumvent the problem of not having labeled data in the target domain of application. Question generators take a span of text (e.g., a sentence) from a document as input, hypothesize an answer (e.g., a named entity) and generate a question as output. PrimeQA's QG component Chemmengath et al. (2021) is based on SOTA sequence-to-sequence generation architectures Raffel et al. (2022), and supports both unstructured and structured input text through a single Python script run_qg.py. A list of available question generators and their extensions are provided in Table 1.
**Unstructured Input:** Our first variant of QG is a multilingual text-to-text model capable of generating questions in the language of the input passage. It fine-tunes a pre-trained T5 language model Raffel et al. (2022) on publicly available multilingual QA data Clark et al. (2020).
**Structured Input:** We also provide QG capability over tables, for which the generator is trained on examples of SQL and natural language question pairs extracted from the popular Table QA dataset Zhong et al. (2017). As in Chemmengath et al. (2021), during inference, PrimeQA uses a controllable SQL sampler to select SQL queries for a given table and answer text, and then applies the trained QG model to generate natural language questions.
### Entry Points
We cater to different user groups in the QA community by providing different entry points to PrimeQA, as shown in Figure 1.
\(\bullet\)**Top-level Scripts:** Researchers can use the top level scripts, run_ir/mrc/qg.py, to reproduce published results as well as train, fine-tune and evaluate associated models on their own custom data.
\(\bullet\)**Jupyter Notebooks:** These demonstrate how to use built-in classes to run the different PrimeQA components and perform the corresponding tasks. These are useful for developers and researchers who want to reuse and extend PrimeQA functionalities.
\(\bullet\)**Inference APIs:** The Inference APIs are primarily meant for developers, allowing them to use PrimeQA components on their own data with only a few lines of code. These APIs can be initialized with the pre-trained PrimeQA models provided in the HuggingFace hub, or with a custom model that has been trained for a specific use case.
\(\bullet\)**Service Layer:** The service layer helps developers set up an end-to-end QA system quickly by providing a wrapper around the core components that exposes an endpoint and an API.
\(\bullet\)**UI:** The UI is for end-users, including the non-technical layman who wants to use PrimeQA services interactively to ask questions and get answers.
### Pipelines for OpenQA
PrimeQA core components and entry points make it intuitive for users to build an OpenQA _pipeline_ and configure it to use any of the PrimeQA retrievers and readers. This is facilitated through a lightweight wrapper built around each core component, which implements a training and an inference API. The retrieval component of the pipeline predicts relevant passages/contexts for an input question, and the reader predicts an answer from the retrieved contexts. PrimeQA pipelines are easy to construct using the pre-trained models in the model hub and our inference APIs.
An example of such a pipeline can be connecting a ColBERT retriever to an FiD reader to construct a long form QA (LFQA) system. This pipeline uses the retriever to obtain supporting passages that are subsequently used by the reader to generate complex multi-sentence answers. A different pipeline can also be instantiated to use an extractive reader instead that is available through our model hub.
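To make the pipeline notion concrete, the following sketch chains a retriever and a reader behind a single call. The class and method names are illustrative placeholders only, not PrimeQA's actual inference API; the real class names and signatures should be taken from the repository documentation.

```python
# Illustrative only: names and signatures are placeholders, not the
# actual PrimeQA inference API.
class Pipeline:
    """Chain a retriever and a reader behind one ask() call."""

    def __init__(self, retriever, reader, top_k: int = 5):
        self.retriever = retriever
        self.reader = reader
        self.top_k = top_k

    def ask(self, question: str):
        # 1) Retrieve the top-k contexts relevant to the question.
        contexts = self.retriever.search(question, k=self.top_k)
        # 2) Extract (or generate) an answer from those contexts.
        return self.reader.answer(question, contexts)

# e.g., Pipeline(colbert_retriever, fid_reader).ask("...") would yield
# a long-form generated answer; swapping in an extractive reader from
# the model hub yields span answers instead.
```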
## 4 Services and Deployment
Industrial deployment often necessitates running complex models and processes at scale. We use Docker to package these components into micro
services that interact with each other and can be ported to servers with different hardware capabilities (e.g., GPUs, CPUs, memory). The use of Docker makes the addition, replacement, or deletion of services easy and scalable. All components in the PrimeQA repository are available via REST and/or gRPC micro-services. Our Docker containers are available on the public DockerHub and can be deployed using technologies such as OpenShift and Kubernetes.
In addition to the main PrimeQA repository, we provide three sibling repositories for application deployment:
primeqa-ui is the front-end UI. Users can personalize the front-end UI by adding custom organization logos or changing the display fonts. primeqa-orchestrator is a REST server and is the central hub for the integration of PrimeQA services and external components and the execution of a pipeline. For instance, the orchestrator can be configured to search a document collection with either a retriever from PrimeQA such as ColBERT, or an external search engine such as Watson Discovery.7 create-primeqa-app provides the scripts to launch the demo application by starting the orchestrator and UI services.
Footnote 7: [https://www.ibm.com/cloud/watson-discovery](https://www.ibm.com/cloud/watson-discovery)
Figure 2 illustrates how to deploy a QA application at scale using the core PrimeQA services (e.g., Reader and Retriever) and our three sibling repositories. We provide this end-to-end deployment for our demo; however, users can also utilize PrimeQA as an application with their own orchestrator or UI.
Figure 3 shows an OpenQA demo application built with PrimeQA components. Our demo application provides a mechanism to collect user feedback. The _thumbs up / down_ icons next to each result enable a user to record feedback, which is then stored in a database. The user feedback data can then be retrieved and used as additional training data to further improve a retriever and reader model.
## 5 Community Contributions
While being relatively new, PrimeQA has already garnered positive attention from the QA community and is receiving constant successful contributions from both international academia and industry via Github pull requests. We describe some instances here and encourage further contributions from all in the community. We provide support for those interested in contributing through a dedicated slack channel 8, Github issues and PR reviews.
Footnote 8: [https://ibm.biz/pqa-slack](https://ibm.biz/pqa-slack)
**Neural Retrievers:** ColBERT, one of our core neural retrievers, was contributed by Stanford NLP. Since PrimeQA provides very easy entry points
Figure 2: PrimeQA’s end-to-end application. Each container contains a development (blue), build (yellow) and deployment (green) stack.
into its core library, they were able to integrate their software into the retriever script run_ir.py independently. Their contribution to PrimeQA provides the QA community with the ability to obtain SOTA performance on OpenQA benchmark datasets by performing 'late interaction' search on a variety of datasets. They also contributed ColBERTv2 Santhanam et al. (2022) and its PLAID Santhanam et al. (2022) variant. The former reduces the index size by 10x over its predecessor while the latter makes search faster by almost 7x on GPUs and 45x on CPUs.
**Few shot learning:** The SunLab from Ohio State University provided the ability to easily perform few-shot learning in PrimeQA. Their first contribution, ReasonBERT Deng et al. (2021), provides a pretrained methodology that augments language models with the ability to reason over long-range relations. Under the few-shot setting, ReasonBERT in PrimeQA substantially outperforms a RoBERTa Liu et al. (2019) baseline on the extractive QA task. PrimeQA gives any researcher or developer the capability to easily integrate this component into a custom search application, e.g. pairing a DPR retriever with a ReasonBERT reader.
**Table Readers**: Beihang University and Microsoft Research Asia contributed Tapex Liu et al. (2021) as the first generative Table reader within PrimeQA. Tapex proposes a novel table pre-training strategy based on a neural SQL executor and achieves the SOTA on Wiki-SQL Zhong et al. (2017) and Wiki-TableQuestions Pasupat and Liang (2015). Their contribution reused the seq2seq trainer from Transformers Wolf et al. (2020) for a seamless integration into PrimeQA. Another contribution comes from LTI CMU's NeuLab, which integrated OmniTab Jiang et al. (2022). It proposes an efficient pre-training strategy combining natural and synthetic pre-training data. This integration happened organically as OmniTab builds on top of Tapex in PrimeQA. Currently, their model produces the best few-shot performance on Wiki-TableQuestions, making it suitable for domain adaptation experiments for anyone using PrimeQA.
**Custom search app for Earth Science:** Joint work between NASA and the University of Alabama in Huntsville created a custom search application over scientific abstracts and papers related to Earth Science. First, using the top-level scripts in PrimeQA, they easily trained an OpenQA system on over 100k abstracts by training a ColBERT retriever and an extractive reader. Then, they quickly deployed the search application using create-primeqa-app and made it available publicly9.
Footnote 9: [http://primeq.nasa-impact.net/qa](http://primeq.nasa-impact.net/qa)
## 6 Conclusion
PrimeQA is an open-source library designed by QA researchers and developers to easily facilitate reproducibility and reusability of existing and future work. This is an important and valuable contribution
Figure 3: A custom OpenQA search application built with PrimeQA. Additional screenshots are in Appendix A.
to the community, as it enables these models to be easily accessible to researchers and end-users in the rapidly progressing field of QA. Our library also provides a 'service layer' that allows developers to take pre-trained PrimeQA models and deploy them for their custom search applications. PrimeQA is built on top of the largest NLP open-source libraries and tools and provides simple Python scripts as entry points to easily reuse its core components across different applications. This ease of access and reusability has already garnered significant positive traction and enables PrimeQA to grow organically as an important resource for the rapid state-of-the-art progress within the QA community.
|
2302.10714 | In-orbit performance of LE onboard Insight-HXMT in the first 5 years | Purpose: The Low-Energy X-ray telescope (LE) is a main instrument of the
Insight-HXMT mission and consists of 96 Swept Charge Devices (SCD) covering the
1-10 keV energy band. The energy gain and resolution are continuously
calibrated by analysing Cassiopeia A (Cas A) and blank sky data, while the
effective areas are also calibrated with the observations of the Crab Nebula.
In this paper, we present the evolution of the in-orbit performances of LE in
the first 5 years since launch. Methods: The Insight-HXMT Data Analysis
Software package (HXMTDAS) is utilized to extract the spectra of Cas A, blank
sky, and Crab Nebula using different Good Time Interval (GTI) selections. We
fit a model with a power-law continuum and several Gaussian lines to different
ranges of Cas A and blank sky spectra to get peak energies of their lines
through xspec. After updating the energy gain calibration in CALibration
DataBase (CALDB), we rerun the Cas A data to obtain the energy resolution. An
empirical function is used to modify the simulated effective areas so that the
background-subtracted spectrum of the Crab Nebula can best match the standard
model of the Crab Nebula. Results: The energy gain, resolution, and effective
areas are calibrated every month. The corresponding calibration results are
duly updated in CALDB, which can be downloaded and used for the analysis of
Insight-HXMT data. Simultaneous observations with NuSTAR and NICER can also be
used to verify our derived results. Conclusion: LE is a well calibrated X-ray
telescope working in 1-10 keV band. The uncertainty of LE gain is less than 20
eV in 2-9 keV band and the uncertainty of LE resolution is less than 15 eV. The
systematic errors of LE, compared to the model of the Crab Nebula, are lower
than 1.5% in 1-10 keV. | Xiaobo Li, Yong Chen, Liming Song, Weiwei Cui, Wei Li, Juan Wang, Shuang-Nan Zhang, Fangjun Lu, Yupeng Xu, Haisheng Zhao, Mingyu Ge, Youli Tuo, Yusa Wang, Tianxiang Chen, Dawei Han, Jia Huo, Yanji Yang, Maoshun Li, Ziliang Zhang, Yuxuan Zhu, Xiaofan Zhao | 2023-02-21T14:57:27Z | http://arxiv.org/abs/2302.10714v2 | # In-orbit performance of LE onboard _Insight_-HXMT in the first 5 years
###### Abstract
**Purpose:** The Low-Energy X-ray telescope (LE) is a main instrument of the _Insight_-HXMT mission and consists of 96 Swept Charge Devices (SCD) covering the 1-10 keV energy band. The energy gain and resolution are continuously calibrated by analysing Cassiopeia A (Cas A) and blank sky data, while the effective areas are also calibrated with the observations of the Crab Nebula. In this paper, we present the evolution of the in-orbit performances of LE in the first 5 years since launch.
**Methods:** The _Insight_-HXMT Data Analysis Software package (HXMTDAS) is utilized to extract the spectra of Cas A, blank sky, and Crab Nebula using different Good Time Interval (GTI) selections. We fit a model with a power-law continuum and several Gaussian lines to different ranges of Cas A and blank sky spectra to get peak energies of their lines through xspec. After updating the energy gain calibration in CALibration DataBase (CALDB), we rerun the Cas A data to obtain the energy resolution. An empirical function is used to modify the simulated effective areas so that the background-subtracted spectrum of the Crab Nebula can best match the standard model of the Crab Nebula.
**Results:** The energy gain, resolution, and effective areas are calibrated every month. The corresponding calibration results are duly updated in
CALDB, which can be downloaded and used for the analysis of _Insight_-HXMT data. Simultaneous observations with _NuSTAR_ and _NICER_ can also be used to verify our derived results.
**Conclusion:** LE is a well calibrated X-ray telescope working in 1-10 keV band. The uncertainty of LE gain is less than 20 eV in 2-9 keV band and the uncertainty of LE resolution is less than 15 eV. The systematic errors of LE, compared to the model of the Crab Nebula, are lower than 1.5% in 1-10 keV.
**Keywords:** X-ray, calibration, SCD, in-orbit performance
## 1 Introduction
The _Insight_-Hard X-ray Modulation Telescope (_Insight_-HXMT) was launched on June 15, 2017 into an orbit with an altitude of 550 km and an inclination of 43 degrees [1]. It consists of three main telescopes: the High Energy X-ray Telescope (HE, 20-250 keV, 5000 cm\({}^{2}\), timing resolution: 2 \(\mu\)s) [2], the Medium Energy X-ray Telescope (ME, 8-35 keV, 952 cm\({}^{2}\), timing resolution: 6 \(\mu\)s) [3], and the Low Energy X-ray Telescope (LE, 1-10 keV, 384 cm\({}^{2}\), timing resolution: 1 \(\mu\)s) [4]. This configuration gives _Insight_-HXMT broad X-ray energy bands, large areas, and high timing and energy resolution. Therefore, _Insight_-HXMT has the unique ability to study fast X-ray variability in multiple energy bands, allowing it to explore regions closer to black holes or the surfaces of neutron stars than ever before. As an interesting example, _Insight_-HXMT identified a non-thermal X-ray burst from SGR J1935+2154 associated with FRB 200428 [5], combining the advantages of large areas and broad energy bands.
As a main telescope of _Insight_-HXMT, LE consists of three detector boxes, each containing 32 chips of CCD236, which is a kind of Swept Charge Device (SCD) [4; 6]. CCD236, developed by the e2v company, has a fast signal readout capability, but it loses the position information of photons [7]. Compared with other types of CCDs, the pile-up effect of LE can be neglected even when the source flux reaches about 8 times that of the Crab. Thus LE performs well in studying the timing and energy spectra of bright sources. For events with energies above the on-board threshold, the readout times and ADC channels are measured, digitized, and telemetered to the ground. In addition to triggered events, LE records forced trigger events, which sample the noise amplitude every 32 ms for each CCD [8]. The forced trigger events are saved like triggered events but with a different event type and can be used to estimate the noise levels. The _Insight_-HXMT Data Analysis Software package1 (HXMTDAS) utilizes the forced trigger events within 1 second to obtain the average ADC channel of the noise, which is subtracted from the ADC channels of the triggered events in that second.
Each CCD detector of LE has four quadrants, each with 100×100 pixels. If a photon interacts with one of these pixels, the resulting charge cloud spreads to the surrounding pixels, which may induce split events. These split events may be read out in adjacent readout periods. Only single events without splitting are considered to calibrate the performance of LE. The pre-launch calibration experiments and the modeling of the response matrix of LE are described in [9]. The in-flight calibration method of LE is detailed in [8] and the calibration of the timing system is reported in [10].
In this paper, we focus on the evolution of in-flight performance of LE during the first 5 years of in-orbit operation. Our paper is structured as follows. Section 2 describes the selection of the calibration data and the data reduction processes. Section 3 presents the calibration results related to the energy response matrix, including the energy gain, energy resolution, and energy response function. Section 4 describes the calibration of the effective areas based on the Crab Nebula observations. Meanwhile, the systematic errors of LE derived using the five-year observations of the Crab Nebula are also given in this section. Section 5 summarizes the paper and provides perspectives.
## 2 Observations and data reduction
### Data reduction for energy gain calibration
To investigate the energy gain and resolution calibration, the data suitable for parameterizing the energy gain and monitoring its variations come from sources that produce visible peaks in the observed spectra, such as blank sky observations with internal background X-ray fluorescence lines (Ni, Cu, and Zn) and the supernova remnant Cassiopeia A (Cas A) with rich emission lines (Si, S, Fe, etc.). The Cas A spectrum observed on July 8, 2017 is displayed in Figure 1. The prominent emission lines can be seen clearly and are used to calibrate the energy gain and resolution.
Cas A is observed as a calibration source almost every month when the solar angle permits. It should be noted that Cas A is not visible to _Insight_-HXMT from March to June of each year. The calibration results for these months are therefore extrapolated from the data before and after these months. The emission lines of Ni, Cu, and Zn, produced by materials near the CCD detectors during the blank sky observations, can also be used to calibrate the energy gain of LE. From 2017-07-08T01:02:26 2 to 2022-09-27T03:01:53, the data of pointed observations of Cas A (ObsID P0101326, P0202041, P0302291, P0402348, P0502131) are reduced to monitor and calibrate the gain in 1.3-6.7 keV. In addition, from 2017-11-02T05:00:54 to 2022-09-29T23:17:57, the data of pointed observations of the blank sky (ObsID P0101293, P0202041, P0301293, P0401293, P0501293) are also reduced to calibrate the gain in 7.4-8.6 keV.
Footnote 2: The time format (YYYY-MM-DDTHH:MM:SS) used in the paper is UTC time.
HXMTDAS and the pre-launch CALibration DataBase (CALDB) are used to reduce and calibrate the data. Here, step-by-step commands are presented as follows.
* The raw events are first calibrated from ADC channels to PI channels using lepical, which retrieves the temperature-dependent calibration parameters from CALDB and subtracts the noise level estimated from the forced trigger events.
* The Good Time Interval (GTI) can be calculated using legtigen, with the criterion ELV>10 && ANG_DIST<=0.04 && COR>8 && SAA_FLAG==0 && T_SAA>=100 && TN_SAA>=100 (a sketch of this selection logic follows this list). In addition, the calibrated event data can be filtered using the GTI through lescreen.
* Finally, the Cas A spectra of single events can be generated from lespecgen.
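For illustration, the GTI criterion above translates into a simple boolean mask over the per-second housekeeping quantities; the dictionary-of-arrays interface below is an assumption for exposition, not the HXMTDAS implementation.

```python
# Sketch of the GTI selection logic; hk is assumed to be a dict of per-second
# NumPy arrays keyed by the housekeeping names used in the criterion.
def gti_mask(hk):
    return ((hk["ELV"] > 10)
            & (hk["ANG_DIST"] <= 0.04)
            & (hk["COR"] > 8)
            & (hk["SAA_FLAG"] == 0)
            & (hk["T_SAA"] >= 100)
            & (hk["TN_SAA"] >= 100))
```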
After the reduction with HXMTDAS, we fit a model with a power-law continuum and several Gaussian lines to different ranges of the Cas A spectra using xspec. The peak energies of the Cas A emission lines, obtained with the pre-launch Energy-Channel (E-C) relation, can then be measured. The energy peaks of the background lines can also be derived when we use the same procedure to reduce the blank sky data. Consequently, the energy evolution of the different lines versus time can be obtained using the same E-C relation.
### Data reduction for energy resolution calibration
After calibrating the gain of LE, we put the new gain file into CALDB and apply the updated CALDB to generate the Cas A spectra again. The widths of Si, S, and Fe obtained with the pre-launch response file of LE are jointly fitted with _XMM/MOS_ (observed on June 22, 2006 with obsID 0412180101) in Xspec. If the energy resolution of LE had remained the same as in the ground-based measurements, the intrinsic widths of Si, S, and Fe would be the same as the fit results of _XMM/MOS_. In fact, the fitted intrinsic widths of Si, S, and Fe for LE are larger than those for _XMM/MOS_. Therefore, the energy resolution has also changed compared with the pre-launch calibration results. After subtracting the intrinsic widths fitted from _XMM/MOS_, an additional broadening of the LE resolution is obtained, which evolves with time and temperature as described in [8].
### Data reduction for effective areas calibration
The Crab Nebula (together with its pulsar) that originated from a supernova explosion in 1054 AD is well known as the standard candle in the X-ray sky due to its brightness, almost constant intensity, and simple power-law distribution of emission spectra in the band from 1 to 100 keV. Therefore, many X-ray astronomy satellites perform their calibration with the Crab Nebula [10, 11]. Here, the data of _Insight_-HXMT's pointed observations of the Crab Nebula from 2017-08-27T04:05:29 to 2022-09-05T03:43:48 (ObsID P0101299, P0111605, P0202041, P0303390, P0402349, P0502132) are reduced to calibrate the effective areas.
Once the gain, resolution, and response matrix files are updated in CALDB, we use HXMTDAS and the latest CALDB to extract the Crab Nebula spectra, including the contribution of the background. The difference from the Cas A and blank sky reduction is the GTI criterion, i.e.: ELV>10 && ANG_DIST<=0.04 && COR>8 && SAA_FLAG==0 && T_SAA>=300 && TN_SAA>=300. This criterion consists of two parts; the first part ELV>10 && ANG_DIST<=0.04 is used to select the Crab Nebula in the
field of view, while the second part COR>8 && SAA_FLAG==0 && T_SAA>=300 && TN_SAA>=300 ensures that the LE background model is available [12]. Furthermore, we use legticor to improve the accuracy of the LE background estimation. Finally, lebkgmap is applied to generate the background spectra in the GTI.
## 3 Calibration results of energy response matrix
### The performance of noise
In order to study the background model, blank sky observations are performed almost every month. Consequently, the forced trigger events of the blank sky observations can be used to monitor the gain and resolution of each CCD. To obtain the average peak and full width at half maximum (FWHM) of each CCD, we use a Gaussian function to fit the channel distribution of the forced trigger events per minute. The mean and the sigma of the Gaussian function represent the peak and width of the noise events in that minute, respectively.
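A minimal sketch of this per-minute fit is given below, assuming `channels` holds the channel values of the forced trigger events recorded within one minute; the binning and starting values are illustrative.

```python
# Fit a Gaussian to the channel distribution of one minute of forced trigger
# events; returns the noise peak and FWHM.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_noise_peak(channels, bins=200):
    counts, edges = np.histogram(channels, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [counts.max(), centers[np.argmax(counts)], np.std(channels)]
    (amp, mu, sigma), _ = curve_fit(gaussian, centers, counts, p0=p0)
    return mu, 2.355 * abs(sigma)  # FWHM = 2*sqrt(2 ln 2) * sigma
```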
In Figure 2, we show the fitted peaks and widths of the forced trigger events as functions of temperature, with the colors representing the number of days since launch. The peak increases with increasing temperature, and its variation grows with time: within the first 200 days the change is less than 50 channels, whereas after 1500 days it reaches a maximum of about 250 channels.
A similar trend holds for the width, which also becomes larger as the temperature and time increase. This indicates that the radiation in space causes an increase in the noise level.
Due to the limited resources on the satellite, LE adopts a passive radiation cooling technique to control the temperature of the detector. The temperature control of LE is fairly stable. It should be noted that the in-orbit working temperature of LE is measured every second for each CCD, with a temperature variation of about 5 degrees per orbit. Over the five years of LE data, the temperatures of all CCDs varied between \(-55^{\circ}\)C and \(-38^{\circ}\)C, with an average of \(-47.9^{\circ}\)C, recorded with a step of \(0.01^{\circ}\)C. In the HXMTDAS data reduction, lepical and lerspgen both require temperature files to retrieve the correct calibration files and parameters from CALDB.
### Energy gain
We use the pre-launch E-C results stored in CALDB at different temperatures to fit the peak energies of the emission lines depicted in Figure 1 with all the data from Cas A and blank sky. To clearly show the evolution of Cas A emission lines, all the peak energies are divided by the theoretical energy of that line and then the ratios of the fitted energy to the model energy are scaled by different constants as shown in Figure 3. The energy fit results of Ni and Zn in blank sky spectra are also displayed in Figure 4.
As seen from Figures 3 and 4, the peak energies of the emission lines detected by LE from Cas A and the blank sky gradually decrease. This behavior may be caused by the decrease of the charge transfer efficiency of the CCDs due to radiation damage in space. To describe this evolution, a quadratic polynomial function is used to fit the change. From the fit results, the peak values can be derived for any day, even when Cas A is not visible due to observation constraints. Finally, the new E-C parameters can be obtained from the lines above; a detailed description of the method can be found in [8].
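For illustration, the quadratic description of the drift for a single line can be implemented as below; the time and ratio values are placeholders rather than measured data.

```python
# Quadratic-polynomial fit to the line-energy ratio versus days since launch,
# then evaluation on a day when Cas A is not observable. Data are placeholders.
import numpy as np

t = np.array([30.0, 200.0, 500.0, 900.0, 1400.0, 1800.0])   # days since launch
ratio = np.array([1.000, 0.999, 0.997, 0.995, 0.992, 0.990])
coeffs = np.polyfit(t, ratio, 2)
predict = np.poly1d(coeffs)
print(predict(1100.0))   # ratio on a day inside an observation gap
```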
Figure 2: Left: The peak of forced trigger events versus temperature and time. Right: The width of forced trigger events versus temperature and time. The colors on the right of each panel represent the days since launch.
Figure 4: Energy fit (with 1\(\sigma\) error bars) of Ni and Zn versus time in blank sky spectra. The peaks of the lines also decrease with time, and a simple quadratic polynomial fit is used to describe the evolution, as in Figure 3.
Figure 3: Energy fit (with 1\(\sigma\) error bars) of different emission lines against time in Cas A spectra. The peak of the emission lines obtained by the pre-launch E-C are divided by the theoretical energy of that line and then the ratios of the fitted energy to the model energy are scaled by different constants. The peak energies decrease with time and a simple quadratic polynomial fit is used to describe the evolution.
### Energy resolution
Once the gain file has been updated in CALDB, we rerun the Cas A data to get the gain-corrected spectra. The widths of Si, S, and Fe obtained with the pre-launch response file of LE are jointly fitted with _XMM/MOS_ in Xspec. The additional broadening of Si, S, and Fe for different observations can then be derived after subtracting the intrinsic widths fitted from _XMM/MOS_.
For each observation of Cas A, we calculate the average temperature within its GTIs and the observation time, both weighted by the exposure time. The additional broadening widths are plotted against the observation time at different temperatures for the Si, S, and Fe lines in Figure 5. Using the two-dimensional Equation (6) of [8] to fit the additional broadening of LE, we can obtain the additional broadening at any time, even when Cas A is not visible. The residuals are plotted in the bottom panel of Figure 5. The differences between data and model are less than 15 eV. Therefore, the two-dimensional function describes very well the large temperature-induced fluctuations seen in the top panel of Figure 5. A linear function is then used to fit the extra broadening of LE versus energy.
### Energy response function
Since LE is a CCD detector, the charge cloud generated by a photon or a charged particle can diffuse over several pixels and be read out in several adjacent readout cycles. Here, we only consider single events without splitting as valid X-ray events.
The pre-launch Response Matrix File (RMF) was calibrated at the calibration facility of the Institute of High Energy Physics, using 20 discrete energies covering the energy range 0.9-12 keV [9]. When we use the pre-launch RMF to fit the in-flight spectrum of Cas A, additional broadening is needed to fit the line profiles well, as described in section 3.3. To generate the in-flight RMF, the additional broadening versus energy needs to be convolved with the pre-launch RMF. After the convolution, we use a Gaussian function to fit the full energy peak to obtain the FWHM. The FWHM versus energy for different observation years is plotted in Figure 6. As pictured, the FWHM has been increasing, but the rate of increase has decreased in the last two years.
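A minimal sketch of this convolution step is shown below; it assumes a uniform channel grid, and `a`, `b` stand for the fitted coefficients of the linear extra-FWHM relation and are illustrative.

```python
# Fold the in-flight extra broadening into a pre-launch RMF: convolve each
# monochromatic response row with a Gaussian whose FWHM = a + b*E (keV).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def broaden_rmf(rmf, energies, a, b, channel_width_kev):
    """rmf[i, :] is the channel response to photons of energy energies[i]."""
    out = np.empty_like(rmf)
    for i, energy in enumerate(energies):
        sigma_kev = (a + b * energy) / 2.355        # FWHM -> Gaussian sigma
        out[i] = gaussian_filter1d(rmf[i], sigma_kev / channel_width_kev)
        norm = out[i].sum()
        if norm > 0:                                 # preserve row normalization
            out[i] *= rmf[i].sum() / norm
    return out
```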
Figure 5: Additional broadening of the widths of Si, S, and Fe plotted against the observation time at different temperatures. The model described in [8] is also plotted as the solid lines. In the lower panel, the residuals of the fit are plotted against time for Si, S, and Fe, respectively. The differences between data and model are less than 15 eV.
## 4 Calibration results of effective areas and systematic errors
### Effective areas
The spectrum of the Crab Nebula in the 1-100 keV X-ray band is well described by a power law with a photon index of \(\Gamma\sim 2.11\)[13]. For the normalization factor, the Crab spectrum measured by _NuSTAR_ in March 2018 is fitted with \(N=8.76\,\mathrm{keV^{-1}cm^{-2}s^{-1}}\). The Crab Nebula is modeled as a simple absorbed power law,
\[F(E)=\mathrm{abs}(E)\,N\,E^{-\Gamma}, \tag{1}\]
where abs is the interstellar absorption model, \(N\) is the normalization factor, \(\Gamma\) is the power-law photon index, and \(E\) is the photon energy. We define the Crab model with \(\Gamma=2.11\), \(N=8.76\,\mathrm{keV^{-1}cm^{-2}s^{-1}}\) and \(\mathrm{nH}=0.36\times 10^{22}\,\mathrm{cm^{-2}}\) which is a parameter in abs.
For the observed spectrum of Crab Nebula in GTI, the detected rate in a given instrumental pulse invariant (PI) bin, \(S(PI)\), can be modeled as the equation,
\[S(PI)=F(E)\times A(E)*RMF(PI,E)+B(PI), \tag{2}\]
where \(F(E)\) is the model photon spectrum of the Crab as a function of the incident photon energy, \(RMF(PI,E)\) is the redistribution matrix that represents the probability density in a given PI bin for the photons with energy \(E\), \(A(E)\) is the effective area, also known as the ancillary response function
Figure 6: The FWHM of LE versus energy for different observation years in September. The FWHM has been increasing, but the rate of increase has decreased recently.
(ARF), and \(B(PI)\) is the background count rate in the GTI. In the top panel of Figure 7, we show the background-subtracted count rate of Crab Nebula in blue points observed on February 8, 2018 during a GTI (from 2018-02-08T16:28:45 to 2018-02-08T16:29:42). The corresponding background count rate estimated from lebkgmap is shown in orange points and the green line represents the model of Crab Nebula described in Equation (1) with the parameters of \(\mathrm{nH}=0.36\times 10^{22}\,\mathrm{cm}^{-2}\) and \(\Gamma=2.11\).
Prior to launch, we used ground calibration results and Monte Carlo simulations based on the Geant4 toolkit to produce the effective areas. After launch, systematic residuals still remained in the Crab spectrum obtained with the newly simulated effective areas, as shown in Figure 23 of [8]. Although it is desirable to have fully physics-based effective areas, this is often not achievable with a limited number of calibration sources. Moreover, the parameters of the charge transfer process are difficult to quantify accurately. We therefore decided to use an empirical function \(f(E)\) to modify the simulated effective areas. Since \(f(E)\) is a function of energy \(E\), its effect should be folded through the response matrix. We optimize the empirical function to bring the residuals within an acceptable level, as shown in the bottom panel of Figure 7. The parameters of
Figure 7: The top panel shows the energy spectrum of the Crab Nebula observed during a GTI (from 2018-02-08T16:28:45 to 2018-02-08T16:29:42, obsID: P0111605046). The green line is the model described in Equation (1) with the parameters of \(\mathrm{nH}=0.36\times 10^{22}\,\mathrm{cm}^{-2}\) and \(\Gamma=2.11\). The data points in blue are background-subtracted count rate, while the data points in orange are the corresponding background count rate estimated from lebkgmap. For clarity, these data points are binned every 30 PI channels. The bottom panel presents the ratio of data (blue points in the top panel) to model (green lines in the top panel).
the empirical function can then be derived, and the in-orbit effective areas can be represented as \(f(E)*A(E)\). Further details about how we obtain the in-orbit effective areas are available in [8]. Since the number of bad CCDs has kept changing over the last 5 years, as shown in Table 1, here we only show the effective areas of one CCD in September 2017 in the top panel of Figure 8. The ratios to the effective areas in 2017 are plotted in the bottom panel of Figure 8.
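The fitting logic can be sketched as below: fold the Crab model through the response with a trial multiplicative correction and minimize the residuals against the observed counts. The functional form of `f_E` is an illustrative assumption, and interstellar absorption is omitted for brevity.

```python
# Sketch of deriving an empirical effective-area correction f(E) from a Crab
# observation; the two-parameter form of f(E) is illustrative only.
import numpy as np
from scipy.optimize import minimize

def crab_model(E, norm=8.76, gamma=2.11):
    return norm * E ** (-gamma)                 # photons keV^-1 cm^-2 s^-1

def predicted_counts(p, E, arf, rmf, dE):
    f_E = 1.0 + p[0] + p[1] * np.log(E)         # trial correction function
    return (crab_model(E) * f_E * arf * dE) @ rmf

def chi2(p, E, arf, rmf, dE, observed, errors):
    r = (predicted_counts(p, E, arf, rmf, dE) - observed) / errors
    return np.sum(r * r)

# Usage, given the instrument arrays:
# res = minimize(chi2, x0=[0.0, 0.0], args=(E, arf, rmf, dE, observed, errors))
```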
It is worth noting that some large differences in the effective areas appear below 2 keV after June 25, 2020. This phenomenon can be explained as follows. In order to reduce the rate of noise events, the thresholds of most CCDs were adjusted to about channel 450. As pictured in Figure 2, the X-ray events between 1 keV and 1.7 keV shift by about 200 channels at different temperatures, like the forced trigger events. This results in some X-ray events exceeding the threshold and others not. So the detected
| detID | Time (UTC) | FoV |
| --- | --- | --- |
| 29 | 2017-06-24T15:37:26 | small |
| 87 | 2017-06-24T15:37:26 | small |
| 1 | 2017-07-28T20:00:30 | big |
| 15 | 2017-09-15T06:06:55 | big |
| 69 | 2019-04-14T19:36:29 | big |
| 54 | 2019-06-18T15:44:11 | small |
| 76 | 2020-06-30T19:42:13 | small |

Table 1: Bad CCDs of LE: the UTC times when the CCDs stopped working, and their FoVs.
Figure 8: The top panel shows the effective areas of one CCD as a function of energy in September 2017. The bottom panel shows the ratios to the effective areas in 2017. Some large differences can be found after June 2020.
count rate has decreased compared to the results from before June 25, 2020, as shown in the top panel of Figure 7. Figure 9 shows the energy spectra of the Crab Nebula in the 0.9-3 keV energy band observed from 2020-09-13T00:40:59 to 2020-09-14T00:50:51 (obsID P0302290002). Six GTIs can be obtained using the standard data reduction procedures of HXMTDAS. The data points of different colors represent the Crab Nebula spectra in different GTIs, and the average temperatures in these GTIs are also shown. The count rates in 1-1.7 keV have a strong positive correlation with temperature, because higher temperatures push more events above the threshold. Therefore, the efficiency below 1.7 keV varies rapidly with temperature and decreases significantly with decreasing temperature. It is difficult to calibrate this efficiency well, so we recommend that users restrict spectral analysis to the 2-10 keV band when analyzing data observed after June 25, 2020.
After the in-flight effective areas are calibrated, simultaneous observations with _NuSTAR_ and _NICER_ are used to validate all the calibration results. An analysis of joint spectra from _Insight_-HXMT, _NuSTAR_, and _INTEGRAL_ was performed in [14]. The authors found an overall agreement between the spectra from all three satellites. Ref. [15] also investigated the 2018 outburst of the black hole transient H 1743-322 with a series of _Insight_-HXMT, _NICER_, and _NuSTAR_ observations covering the 1-120 keV band. They jointly fitted the spectra of H 1743-322 and obtained consistent results.
Figure 9: The energy spectra of the Crab Nebula observed in 0.9–3 keV from 2020-09-13T00:40:59 to 2020-09-14T00:50:51 (obsID P0302290002). Six GTIs are available using the standard data reduction procedures. The data points of different colors represent the Crab Nebula spectra in different GTIs, and the average temperatures in these GTIs are also calculated and shown. The count rates in 1–1.7 keV are significantly lower than those in Figure 7. Moreover, the higher the temperature, the higher the count rate.
### Systematic errors
After the effective area files are updated in CALDB, we have reprocessed all Crab data and generated the corresponding response files and background files for LE at different observation times. The information of all the Crab Nebula data analyzed in this paper is summarized in Table 2. The first column represents the period of observation and the second column gives the corresponding observation ID. The third column provides the number of exposures in this observation period. Typically, an observation lasts several hours or days and the amount of data exceeds several GB. To reduce the size of a single file, an observation is artificially split into multiple segments (named exposures). The duration of an exposure is usually about three hours. The last column provides the average effective exposure time for all exposures in this observation period.
The model of the Crab Nebula can be fixed to obtain the ratio for each individual exposure in each PI channel, as displayed in the bottom panel of Figure 7. We can then calculate the systematic errors of the ratio at each PI channel using the same method as described in Equation (11) of [8]. In our analysis, only the exposures with an effective exposure time of more than 1000 s are considered in the calculation of systematic errors. Figure 10 depicts the systematic errors versus energy for different times; they are larger after September 2018. Compared to the model of the Crab Nebula, the systematic errors of LE are below 1.5% in 1-10 keV. These values can be used in the spectral fitting of LE if the systematic error dominates over the statistical errors, i.e., when the source is bright.
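A common way to implement such an estimate is sketched below: the excess scatter of the data-to-model ratios beyond the statistical errors is attributed to systematics. The exact estimator of Equation (11) in [8] may differ in detail.

```python
# Per-channel systematic error from many exposures of a steady source.
import numpy as np

def systematic_error(ratios, stat_errors):
    """ratios, stat_errors: arrays of shape (n_exposures, n_channels)."""
    total_var = np.var(ratios, axis=0, ddof=1)     # observed scatter
    stat_var = np.mean(stat_errors ** 2, axis=0)   # expected statistical part
    return np.sqrt(np.clip(total_var - stat_var, 0.0, None))
```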
| Observation Period | ObsID | Exposure Amount | Mean Exposure Time (s) |
| --- | --- | --- | --- |
| 2017.08–2018.08 | P0101297, P0111605, P0101299 | 166 | 1820 |
| 2018.09–2020.04 | P0101299, P0202041 | 135 | 1421 |
| 2020.08–2022.04 | P030229, P0402349 | 129 | 1329 |

Table 2: Summary of the Crab Nebula data analyzed in this paper.
## 5 Conclusion
LE is a well-calibrated X-ray telescope working in the 1-10 keV band. The uncertainty of the LE gain is less than 20 eV in the 2-9 keV band and the uncertainty of the LE resolution is less than 15 eV. The effective areas are calibrated with the Crab Nebula using a simple absorbed power-law model with photon index \(\Gamma=2.11\), normalization factor \(N=8.76\,\mathrm{keV^{-1}cm^{-2}s^{-1}}\), and interstellar absorption \(\mathrm{nH}=0.36\times 10^{22}\,\mathrm{cm^{-2}}\). The systematic errors in the spectral fitting are lower than 1.5% in 1-10 keV and slightly higher above 6 keV. The calibrations are performed monthly and updated appropriately in CALDB, which can be downloaded and used to analyze _Insight_-HXMT data. We recommend that users utilize the 2-10 keV band for spectral analysis of observations taken after June 25, 2020. We plan to continue improving the spectral capabilities of LE and monitoring the detector gain, resolution, and effective areas.
This work used data from the _Insight_-HXMT mission, a project funded by the China National Space Administration (CNSA) and the Chinese Academy of Sciences (CAS). We gratefully acknowledge the support from the National Program on Key Research and Development Project (Grant No. 2021YFA0718500) from the Ministry of Science and Technology of China (MOST). The authors acknowledge support from the National Natural Science Foundation of China under Grants 12273043, U1838201, U1838202, U1938102, and U1938108. This work was partially supported by the International Partnership Program of the Chinese Academy of Sciences (Grant No. 113111KYSB20190020).
Figure 10: The systematic errors of LE versus energies for different times as shown in different colors. These values can be used for the spectral fitting of LE if the systematic error dominates over statistical errors in the observations. |
2310.10263 | A Hermitian bypass to the non-Hermitian quantum theory | Non-Hermitian (NH) operators are gaining growing significance in all branches
of physics and beyond. However, NH quantum theory faces challenges with
singularities, instabilities, and violations of standard linear algebra and
differential geometry. Here, we propose a quantum theory that resolves these
instabilities by reassigning them to the expansion parameters of a suitably
defined basis state of a Hermitian operator. We discover a computational basis,
defined by the eigenspace of $H^\dagger H$, in which the exceptional points of
$H$ are positioned as vacua on the two boundaries. The theory also introduces a
generic dual space map that functions as a dynamical `space-time'
transformation within the computational space. When this transformation assumes
a static symmetry, it ensures real energies, unraveling a hidden symmetry
beyond hermiticity or parity-time reversal symmetries. Our formalism elucidates
the origin and interpretation of several features associated with NH operators,
including exceptional points, normal operators, dual-space mapping, dynamical
metric manifold, and emergent symmetry-enforced real eigenvalues. Our general
framework broadens the application of NH theory across numerous branches of
physics where NH operators manifest as ladder operators, order parameters,
self-energies, projectors, and other entities. | Priyanshi Bhasin, Tanmoy Das | 2023-10-16T10:39:25Z | http://arxiv.org/abs/2310.10263v1 | # A Hermitian bypass to the non-Hermitian quantum theory
###### Abstract
Non-Hermitian (NH) operators are gaining growing significance in all branches of physics and beyond. However, NH quantum theory faces challenges with singularities, instabilities, and violations of standard linear algebra and differential geometry. Here, we propose a quantum theory that resolves these instabilities by reassigning them to the expansion parameters of a suitably defined basis state of a Hermitian operator. We discover a _computational basis_, defined by the eigenspace of \(H^{\dagger}H\), in which the exceptional points of \(H\) are positioned as vacua on the two boundaries. The theory also introduces a generic dual space map that functions as a dynamical 'space-time' transformation within the computational space. When this transformation assumes a static symmetry, it ensures real energies, unraveling a hidden symmetry beyond hermiticity or parity-time reversal symmetries. Our formalism elucidates the origin and interpretation of several features associated with NH operators, including exceptional points, normal operators, dual-space mapping, dynamical metric manifold, and emergent symmetry-enforced real eigenvalues. Our general framework broadens the application of NH theory across numerous branches of physics where NH operators manifest as ladder operators, order parameters, self-energies, projectors, and other entities.
## I Introduction
Non-Hermitian (NH) operators are more common than we often realize across various domains. The presence of NH operators extends beyond their relevance in open quantum systems, non-equilibrium states, and quantum optical devices, to closed quantum theory, where NH operators appear in disguise [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]. More specifically, NH operators and corresponding Hilbert spaces appear in closed quantum systems in the studies of the NH bosonic Bogoliubov-Valatin Hamiltonian [12, 13, 14], effective NH Hamiltonians acting on a sub-Hilbert space or subsystems [15, 16, 17, 18, 19], chiral boundary Hamiltonians of topological insulators [20, 21], complex order parameters for phase transitions, coherent states of ladder operators, self-energy dressed Hamiltonians in many-body quantum theory [22, 23], auxiliary boundary conditions or degrees of freedom in numerical simulations [24], and others. In such cases, we grapple with a 'Hilbert space' of NH operators that lacks a unified existence.
The quantum theory of NH operators suffers from singularities such as exceptional points, an ill-controlled inner product, and other subtle violations of standard linear algebra and quantum (differential) geometry [16]. For example, in NH quantum theory there exists no unique linear map to the dual space, as the states and their duals appear to belong to different vector spaces. Consequently, physical operators lack unique definitions. Gauge redundancy and gauge symmetry cannot be defined by unitary operators. The eigenvector space is often non-compact (non-Riemannian). The directional (time-) derivative does not simply yield a commutation relation with the NH Hamiltonian (_c.f._ the Heisenberg equation of motion), the time evolution is non-unitary, and so on.
Is it possible to establish a unique and consistent quantum theory for generalized NH operators that can be universally applied across all branches of quantum physics? Such a theory would simplify the study of many open quantum system problems, which are often approached through master equations. The master equation formalism relies on knowing the complete set of basis states of the environment, whereas NH operators encode this information through complex parameter(s). The objective of this work is to make significant strides in developing the quantum theory of generalized NH operators, to uncover generalized mappings to the dual space as well as Hermitian conjugate operators, and thereby set the stage for studying their dynamics.
An equation of motion that is frequently sought after can be understood as a directional derivative along the curves defined by the constant(s) of motion on a manifold. Here we develop a formalism for solving general NH Hamiltonians by defining a (real-valued) scaling parameter as a 'constant of motion'. This is determined by the eigenvalue of the (Hermitian) anti-commutator of \(H\) and \(H^{\dagger}\). It establishes a bridge between \(H\) and \(H^{\dagger}\), giving rise to a symmetry operator and a dual space mapping. We construct a well-defined _computational basis_ from the eigenvectors of the bi-linear Hermitian operator \(H^{\dagger}H\). Intriguingly, the computational manifold is compact \(-\) embedded in the non-compact manifold of complex parameters of the NH Hamiltonians. The computational Hilbert space is found to be _bounded by the exceptional points_ (or contours) on both sides and _centered at the normal point_ where \(H\) and \(H^{\dagger}\) commute. The non-Hermiticity of the Hamiltonian lends itself to serving as a ladder operator in the computational basis across the _normal_ point, and the two boundaries of the computational states are defined by their being 'annihilated' by \(H\) and \(H^{\dagger}\). In what follows, we discover that the 'exceptional points' are the 'vacua' of this computational space in the language of the usual ladder operators.
By expressing the energy eigenstates in this computational basis, we further discover several properties. (a) The energy eigenstates meet at exceptional points when they both _collapse_ into any one of the computational states. (b) An intrinsic _gauge obstruction_ exists in expanding the energy states in the computational basis, affecting both the ladder operations with the Hamiltonian and the expansion parameters. (c) The dual space can be _(anti-/) linearly mapped_ via a discrete 'space-
time' transformation, albeit with a negative norm. However, this can be circumvented by defining a (quantum) operator \(-\) an additional dynamical symmetry \(-\) suggesting that the eigenspace of a NH Hamiltonian is generally a non-Riemannian manifold. (d) The discrete space-time linear map is dynamic in general but becomes a _static symmetry_ if the energy eigenvalues are either real or imaginary. This static symmetry goes beyond the recently studied \(\mathcal{PT}\) symmetry and presents a general (emergent) symmetry inherent in nature. We discuss appropriate examples which unravel the underlying interpretation of exceptional points, normal operators, topology, dual space, emergent symmetry-enforced real eigenvalues, and others. We also predict that in odd-dimensional Hilbert spaces, the computational basis leads to a representation where at least one eigenstate exhibits a _parameter-free_ energy eigenvalue, which we call 'flat-energy' or 'flat band'. Finally, we discuss the potential applications of our formalism in various contexts.
The rest of the paper is structured as follows. In Sec. II, we introduce the core concepts and essential components of our formalism. Here, we construct the bilinear operator \(F=H^{\dagger}H\) and the computational basis, and the ladder operations of the NH Hamiltonian, which include the generalized gauge obstruction. We express the energy eigenvalues and eigenstates in the computational basis and elucidate the positioning of exceptional points as vacua. Sec. III is dedicated to the development of dual space maps as dynamical transformations in the computational space, where we establish the orthogonalization condition. We also show there that real and imaginary eigenvalues are manifestations of inherent symmetry(s) in generalized NH Hamiltonians. We apply our formalism to a selection of representative examples of NH Hamiltonians in Sec. IV. In Sec. V, we extend the results to higher-dimensional Hilbert spaces and discuss two interesting special cases of degeneracies, namely the _circular_ and _point degeneracies_. We summarize our work in Sec. VI and conclude with a discussion of the applicability of our formalism in diverse settings, outlining the remaining work in this direction.
## II The computational basis
For any general NH Hamiltonian \(H\), we can construct a positive, Hermitian operator \(D\) as follows: \(\{H,H^{\dagger}\}:=D\). We now work in an eigenstate of the \(D\) operator, such that the resulting dynamics are restricted to the world line on which the eigenvalue \(d\) is a 'constant of motion'. Since \(D\) is an operator dynamically generated from arbitrary \(H\) and \(H^{\dagger}\), this constraint does not violate generality. In fact, this constant of motion gives a map between \(H\) and \(H^{\dagger}\). Moreover, \(d\) acts as a global scaling parameter for \(H\) and \(H^{\dagger}\), and hence, owing to linearity, it only scales all the eigenvalues of \(H\), while the eigenvectors do not depend on \(d\). For simplicity, we rescale the operator to be _dimensionless_ as \(H\to H/\sqrt{d}\), where \(d\) is the eigenvalue of \(D\), such that we obtain an anti-commutation algebra:
\[\left\{H,H^{\dagger}\right\}=\mathbb{I}, \tag{1}\]
where \(\mathbb{I}\) is the unit matrix. (The above restriction to a single scaling parameter of \(d\), however, weakly violates generality for Hilbert space dimensions more than two, we will see further in Sec. V. But the formalism can be made general by taking into account all eigenvalues of \(D\). We will consider such a case in future studies, but focus here on a single eigenvalue \(d\).)
The next step is to construct a Hilbert space (more appropriately a 'Fock'-like space) of a Hermitian operator which is factorized into \(H\) and \(H^{\dagger}\). This way the above algebra in Eq. (1) helps \(H\) and \(H^{\dagger}\) serve as ladder operators between the 'Fock' states. A candidate choice is a quadratic form (like the 'number operator' in the second quantization formalism):
\[F:=H^{\dagger}H. \tag{2}\]
\(F\) is a positive, Hermitian operator with a physical dimension of \([E^{2}]\). Suitably, \(F\) inherits the information of both \(H\) and \(H^{\dagger}\), but the phase of the complex energy is dropped out. The eigenstates of \(F\) form an orthonormal Hilbert space:
\[F|f\rangle=f|f\rangle, \tag{3}\]
where \(f\in\mathbb{R}\) is bounded from below as \(f\geq 0\). \(|f\rangle\) states are orthonormalized as \(\langle f|f^{\prime}\rangle=\delta(f-f^{\prime})\) for continuous variable, or \(\delta_{f,f^{\prime}}\) for finite dimensional operators. Using the algebra in Eq. (1), we infer two important properties:
1. _The Hilbert space of \(F\) is continuous and bounded._ Given that \(F\) is a positive Hermitian operator with all its eigenvalues being real and positive, we find that the Hilbert space \(\mathcal{F}\) can be continuous in general, but bounded from both sides: \[0\leq f\leq 1.\] (4) This Hilbert space has three special points: the two boundaries (vacua) \(f=0\), \(1\) correspond to the _exceptional points_ of \(H\), while \(f=1/2\) corresponds to a degenerate point for \(F\) where \(H\) becomes a _normal operator_. We further elaborate on these points in Sec. II.2. For future convenience, we partition the Hilbert space of \(F\) into two halves: \(\mathcal{F}=\mathcal{F}_{0\leqslant f\leqslant\frac{1}{2}}\oplus\mathcal{F}_{\frac{1}{2}\leqslant f\leqslant 1}\).
Figure 1: The span of the computational basis states between the two exceptional points/contours at \(f=0,1\), and a normal point at \(f=1/2\), and the cyclic ladder actions of \(H\) and \(H^{\dagger}\) across the normal point are schematically shown here.
2. \(H\) _and \(H^{\dagger}\) are the ladder operators._ \(H|f\rangle\) and \(H^{\dagger}|f\rangle\) are also eigenstates of \(F\) with the _same_ eigenvalue of \((1-f)\). So both \(H\) and \(H^{\dagger}\) project to the same state \(|1-f\rangle\), i.e., they act as ladder operators between \(\mathcal{F}_{0\leqslant f\leqslant\frac{1}{2}}\) and \(\mathcal{F}_{\frac{1}{2}\leqslant f\leqslant 1}\). We consider here the non-degenerate case, while the degenerate case will be studied in Appendix A.2. The magnitudes of the proportionality constants can be easily evaluated, and we consider two arbitrary phases \(\phi\) and \(\gamma\) in the projection process: \[H|f\rangle=e^{i(\gamma+\phi)}\sqrt{f}|1-f\rangle,\] (5a) \[H|1-f\rangle=e^{i(\gamma-\phi)}\sqrt{1-f}|f\rangle,\] (5b) \[H^{\dagger}|f\rangle=e^{-i(\gamma-\phi)}\sqrt{1-f}|1-f\rangle,\] (5c) \[H^{\dagger}|1-f\rangle=e^{-i(\gamma+\phi)}\sqrt{f}|f\rangle.\] (5d) Such phase factors associated with the ladder operators are absent in the standard Fock space construction of the non-Abelian group or the harmonic oscillator case. However, we discover below that the two phases are related to the complex phase of the eigenvalues and eigenstates of \(H\). We can view the two phases as a result of the obstruction to global gauge fixing in \(\mathcal{F}\): the two halves of the Hilbert space differ by a phase that cannot be removed by a choice of the global phase. (See Supplemental Material [25] for the generalization of these results to the \(d\neq 1\) case.)
### Eigenspectrum of \(H\)
For a two-dimensional Hamiltonian (generalization to a higher dimension will be considered in Sec. V), we can now represent the NH \(H\) in the well-defined computational basis \(\mathcal{F}\) as
\[H=\sqrt{f}e^{i(\gamma+\phi)}|1-f\rangle\langle f|+\sqrt{1-f}e^{ i(\gamma-\phi)}|f\rangle\langle 1-f|. \tag{6}\]
Conveniently, \(H\) becomes a purely block off-diagonal NH Hamiltonian in this representation of \(\mathcal{F}\). (For an odd-dimensional Hamiltonian, there will be an additional diagonal term of \(|f=\frac{1}{2}\rangle\langle f=\frac{1}{2}|\) as discussed in Sec. V). The Hamiltonian has an emergent 'particle-hole'-like symmetry given by an operator \(\mathcal{Q}=\sigma_{z}\) (in even dimension) which anti-commutes with \(H\).
The eigenvalues come in pair as
\[E_{\pm}=\pm|E|e^{i\gamma}, \tag{7}\]
where \(|E|=\sqrt[4]{f(1-f)}\in\mathbb{R}^{+}\) (\(f\)-dependence is implicit). The two energy pairs are related to each other by \(\gamma\rightarrow\gamma\pm\pi\), implying that they lie on diametrically opposite points on the complex energy plane. The energy amplitude \(|E|\) is symmetric with respect to \(f=\frac{1}{2}\). (For an odd-dimensional Hamiltonian \(H\), an additional flat eigenvalue \(\frac{1}{\sqrt{2}}e^{i\gamma}\) is present, which is independent of the parameter \(f\), see Sec. V).
A comment is in order. Despite the block off-diagonal form of \(H\) in the computational basis and the 'particle-hole' pair of the energy eigenvalues, the approach does not lose its generality. It can be shown that if we shift the overall energy levels by \(H^{\prime}=H-\operatorname{Tr}(H)\), where \(\operatorname{Tr}(H)\in\mathbb{C}\) is a complex c-number, then \(H^{\prime}\) continues to follow the anticommutation algebra with a modified \(D\) operator. \(\operatorname{Tr}(H)\) gives a trivial overall (complex) shift to the entire energy eigenvalues, while it does not contribute to the eigenstates. A modification to the above statement for an odd-dimensional case is discussed in Sec. V.
The corresponding two energy eigenstates can be expanded in the \(\mathcal{F}\) Hilbert space as
\[|E_{\pm}\rangle=\frac{1}{\sqrt{2|a|}}\left(|f\rangle\pm a|1-f \rangle\right),\ \ \mathrm{where}\ a=|a|e^{i\phi}. \tag{8}\]
\(a\) gives the complex expansion coefficient with phase \(\phi\), and amplitude \(|a|=\sqrt[4]{f/(1-f)}\in\mathbb{R}^{+}\). At each value of \(f\), \(|E_{\pm}\rangle\) are 'particle-hole' conjugate to each other: \(|E_{-}\rangle=\mathcal{Q}|E_{+}\rangle\), where \(\mathcal{Q}\) is the 'particle-hole' transformation. Note that the above eigenfunction is not yet normalized. For studying orthonormalization we need to specify the dual space which is discussed in Sec. III below.
Now we can interpret the two phases \(\gamma\) and \(\phi\). \(\gamma\) is the winding angle in the complex energy plane, see Eq. (7). The energy eigenstates arise from a linear superposition of the two halves of the Hilbert space \(\mathcal{F}\), and \(\phi\) is the phase difference between these two halves, see Eq. (8). \(\phi\) can hence be identified as a geometric phase owing to the global gauge obstruction between the two computational states. Note that both \(\gamma\) and \(\phi\) are defined with respect to the exceptional point (\(E=a=0\)). Therefore, both the complex energy and the eigenvector space enclose a singular point, which is the exceptional point, and form a topological space.
For a given value of \(f\), the description of the eigenvalues and eigenvectors of \(H\) requires two complex parameters, \(E\) and \(a\). \(|E|\) and \(|a|\) are completely determined by \(f\), while \(\gamma\) and \(\phi\) are two additional real parameters of \(H\) that drop out of \(F\). We note that the computational basis is completely orthonormalized with a Hermitian conjugation map, while the difficulties in the normalization of \(|E_{\pm}\rangle\), the exceptional points, etc., are transported to the coefficient \(|a|\) for the eigenstates and to \(|E|\) for the eigenvalues, both of which are determined by \(f\), i.e., to the singularities at \(f=0\) and \(f=1\).
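These relations are easy to verify numerically. The sketch below generates a random traceless \(2\times 2\) NH Hamiltonian, for which \(\{H,H^{\dagger}\}\propto\mathbb{I}\) automatically, rescales it, and checks the ladder action of Eq. (5a), the eigenvalue magnitude of Eq. (7), and the expansion amplitude \(|a|\) of Eq. (8).

```python
# Numerical check of the computational-basis construction for a random
# traceless 2x2 NH Hamiltonian (tracelessness guarantees {H, H†} ∝ I in 2D).
import numpy as np

rng = np.random.default_rng(7)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
h = rng.normal(size=3) + 1j * rng.normal(size=3)    # complex h -> non-Hermitian
H = h[0] * sx + h[1] * sy + h[2] * sz               # traceless 2x2

D = H @ H.conj().T + H.conj().T @ H                 # anticommutator {H, H†} = d*I
H = H / np.sqrt(D[0, 0].real)                       # rescale so that {H, H†} = I

fvals, fvecs = np.linalg.eigh(H.conj().T @ H)       # computational basis of F
f = fvals[0]
assert np.allclose(fvals, [f, 1.0 - f])             # spectrum pairs as f and 1-f
assert np.isclose(np.linalg.norm(H @ fvecs[:, 0]), np.sqrt(f))           # Eq. (5a)
assert np.allclose(np.abs(np.linalg.eigvals(H)), (f * (1 - f)) ** 0.25)  # Eq. (7)

v = np.linalg.eig(H)[1][:, 0]                       # one energy eigenstate
c = fvecs.conj().T @ v                              # expand in |f>, |1-f>
assert np.isclose(abs(c[1] / c[0]), (f / (1 - f)) ** 0.25)               # Eq. (8)
```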
### Exceptional points and Normal point
There are three special points in the \(0\leq f\leq 1\) domain where \(H\) features interesting properties. \(f=0,1\) correspond to \(|a|=0\), \(\infty\), which are the exceptional points, whereas \(f=\frac{1}{2}\), i.e., \(|a|=1\), gives a normal point. We remark that what we call exceptional points are in fact the \(f=0,1\) surfaces in the parameter space of the Hamiltonian.
An exceptional point is defined as a point where the eigenvalues of a NH operator become identical and the corresponding two eigenstates coalesce into a single state. From
Eq. (8), we obtain exceptional points when the coefficient \(|a|\to 0\) or \(\infty\). They correspond to the _collapses_ of the two energy eigenvectors \(|E_{\pm}\rangle\) to a single state \(|f\rangle\) or \(|1-f\rangle\). This occurs at the two boundaries of the Hilbert space \(\mathcal{F}\), i.e., at \(f=0,1\), \(|E|=0\), which are the two vacua of the computational basis \(\mathcal{F}\).
This observation gives a new perspective on the origin and tunability of the exceptional points. From Eq. (8), we see that there is an intrinsic 'fractionalization' of the energy eigenstates into two (or more) states. As the energy state collapses onto any one of these states, the Hilbert space collapses (or coalesces, in mathematical terminology). Exceptional points can be reached by tuning a single parameter \(f\), which traces a contour in the Hamiltonian parameter space. Exceptional points have been observed experimentally, where a NH system exhibits novel physical properties [26; 27; 28; 29; 30; 31; 32; 33]. A NH physical system displays very high sensitivity to external perturbations, which has motivated much ongoing experimental research in the field of optics aimed at building very high-quality sensors from such systems.
Finally, we define the normal point at \(f=\frac{1}{2}\), giving \(|a|=1\). Here both \(|f\rangle\) and \(|1-f\rangle\) are degenerate eigenstates of the \(F\) operator. The commutator is \([H,H^{\dagger}]=I-2F\), and hence in the \(|f=1/2\rangle\) state, \(H\) and \(H^{\dagger}\) commute. We denote this point the _normal point_. Moreover, at the normal point, \(|a|=1\), which means the \(|E_{\pm}\rangle\) states have equal probability of visiting both halves of the \(\mathcal{F}\)-Hilbert space, i.e., \(|E_{\pm}\rangle\) become maximally coherent states. The details of the degenerate eigenstates are given in Appendix A.2.
## III Maps, dual space and symmetries
The difficulty in solving the quantum NH Hamiltonian is that there is no simple and unique map to the dual space of the eigenstates. The usual hermitian conjugation map does not yield orthogonal states. The popular biorthogonal eigenstate is obtained by solving the eigenstates of \(H^{\dagger}\), and yet there is no simple map between the eigenstates of \(H\) and \(H^{\dagger}\). Here we propose a generic map \(-\) defined by a discrete transformation in the computational space.
### Dual space
Before we plunge into redefining the dual space of \(|E_{\pm}\rangle\), let us first see why the traditional Hermitian conjugation map fails here. It is convenient to represent the dimensionless parameter \(|a|\) by an angular variable \(\theta\) as \(|a|=\tan\!\frac{\theta}{2}\), where \(0\leq\theta\leq\pi\). This unveils an intriguing geometric view of the NH 'Hilbert space' \(\mathcal{H}\). \(|E_{\pm}\rangle\) states live on a hypersphere \(\mathbb{S}^{2}\) (for a given value of \(f\)) parameterized by \(\phi\) and \(\theta\), as shown in Fig. 2. The computational basis \(|f\rangle\) and \(|1-f\rangle\) lie along the two antipodal points, as expected by virtue of their orthogonality to each other. \(\theta\) is defined with respect to the \(|f\rangle\) basis. Strikingly, \(|E_{\pm}\rangle\) states do _not_ align diagonally opposite to each other, and hence do _not_, in general, form an orthogonal basis. The two states are connected by \(\phi\rightarrow\phi+\pi\) with fixed \(\theta\). They become aligned at antipodal points to each other only at \(\theta=\pi/2\), which is precisely the _normal point_ (\(|a|=1\)) where \(|E_{\pm}\rangle\) are orthogonal eigenstates of \(H\) (further discussion given in the Supplemental Material [25]). At \(\theta=0\) or \(\theta=\pi\) (i.e., \(|a|=0\) or \(|a|=\infty\) at \(f=0\) and \(1\)), \(|E_{\pm}\rangle\) collapse to either \(|f\rangle\) or \(|1-f\rangle\) states, respectively \(-\) which are the two _exceptional points_.
Now our problem is well defined. We need to find two states \(|\tilde{E}_{\pm}\rangle\) which lie at the two corresponding antipodal points of \(|E_{\pm}\rangle\). The corresponding states are
\[|\tilde{E}_{\pm}\rangle=\sqrt{\frac{|a|}{2}}\left(|f\rangle\pm\frac{1}{|a|}e^{i\phi}|1-f\rangle\right). \tag{9}\]
It is easy to check that \(|\tilde{E}_{\pm}\rangle\) are the eigenstates of \(H^{\dagger}\) with eigenvalues \(\pm E^{*}\). They are biorthonormal states following \(\langle\tilde{E}_{n}|E_{m}\rangle=\delta_{nm}\), where \(n,m=\pm\), and \(\langle\tilde{E}_{n}|\) corresponds to the usual adjoint map.
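These checks can also be carried out numerically. The sketch below (Python/NumPy; an illustration under our conventions, with all names ours) extracts the computational basis and the phases \(\gamma,\phi\) for a random traceless \(2\times 2\) Hamiltonian with \(\{H,H^{\dagger}\}=\mathbb{I}\), builds \(|E_{+}\rangle\) and \(|\tilde{E}_{\pm}\rangle\) following Eqs. (8) and (9), and verifies the eigenstate and biorthonormality properties. Since the spectrum is a \(\pm E\) pair, the eigenvalue is checked up to that overall sign:

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
H -= 0.5 * np.trace(H) * np.eye(2)             # traceless
H /= np.sqrt(np.trace(H @ H.conj().T).real)    # sets {H, H^dag} = I

# Computational basis from F = H^dag H (eigenvalues sorted: 1-f <= f)
fvals, fvecs = np.linalg.eigh(H.conj().T @ H)
ket_1mf, ket_f = fvecs[:, 0], fvecs[:, 1]
f = fvals[1]

# In this basis H is off-diagonal; read off the phases gamma and phi
c1 = ket_1mf.conj() @ H @ ket_f                # sqrt(f)   e^{i(gamma+phi)}
c2 = ket_f.conj() @ H @ ket_1mf                # sqrt(1-f) e^{i(gamma-phi)}
gamma, phi = np.angle(c1 * c2) / 2, np.angle(c1 / c2) / 2

amp = (f / (1 - f)) ** 0.25                    # |a|
E = (f * (1 - f)) ** 0.25 * np.exp(1j * gamma)
Ep = (ket_f + amp * np.exp(1j * phi) * ket_1mf) / np.sqrt(2 * amp)   # Eq. (8)
Etp = np.sqrt(amp / 2) * (ket_f + np.exp(1j * phi) / amp * ket_1mf)  # Eq. (9)
Etm = np.sqrt(amp / 2) * (ket_f - np.exp(1j * phi) / amp * ket_1mf)

# Eigenstate of H (up to the particle-hole sign of E) and biorthonormality
print(np.allclose(H @ Ep, E * Ep) or np.allclose(H @ Ep, -E * Ep))
print(np.allclose(H.conj().T @ Etp, E.conj() * Etp)
      or np.allclose(H.conj().T @ Etp, -E.conj() * Etp))
print(abs(Etp.conj() @ Ep), abs(Etm.conj() @ Ep))   # -> 1 and 0
```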
Are the two antipodal points related by any unitary and/or antiunitary transformation? We prove below that they indeed correspond to discrete 'space-time' transformations in the computational space. \(|E_{\pm}\rangle\) are related to \(|\tilde{E}_{\pm}\rangle\) by a _reflection_ about the \(\theta=\frac{\pi}{2}\) (i.e., \(|a|=1\)) plane: \(\theta\rightarrow\pi-\theta\) (i.e., \(|a|\to 1/|a|\)), \(\phi\rightarrow\phi\). On the other hand, \(|E_{\pm}\rangle\) are related to \(|\tilde{E}_{\mp}\rangle\) by an _inversion_ operation: \(\theta\rightarrow\pi-\theta\), and \(\phi\rightarrow\phi+\pi\). We define the corresponding operations by unitary and anti-unitary operators as: (i) \(|\tilde{E}_{\pm}\rangle\propto\mathcal{U}_{1}|E_{\pm}\rangle\), (ii) \(|\tilde{E}_{\pm}\rangle\propto\mathcal{U}_{2}|E_{\mp}\rangle\), (iii) \(|\tilde{E}_{\pm}\rangle\propto\mathcal{A}_{1}|E_{\pm}\rangle\), and (iv) \(|\tilde{E}_{\pm}\rangle\propto\mathcal{A}_{2}|E_{\mp}\rangle\), where \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}=\mathcal{V}_{i}\mathcal{K}\), with \(\mathcal{K}\) the complex conjugation, are unitary and anti-unitary operators, respectively.
1. _Reflection symmetry (\(\mathcal{U}_{1}\)):_ The unitary operator \(\mathcal{U}_{1}\) is defined by \(\mathcal{U}_{1}|E_{\pm}\rangle:=u_{\pm}|\tilde{E}_{\pm}\rangle\), where \(u_{\pm}\) is obtained to be \(\pm 1\). In the computational Hilbert space \(\mathcal{F}\), we get \[\mathcal{U}_{1}=e^{i\phi}|1-f\rangle\langle f|+e^{-i\phi}|f\rangle\langle 1-f|,\] (10) where \(\phi\) is the same phase as in Eq. (8) (details in the Supplemental Material [25]). Its matrix-valued expression is \(\mathcal{U}_{1}=\cos\phi\,\sigma_{x}+\sin\phi\,\sigma_{y}\). We make two important observations here. Firstly, \(\mathcal{U}_{1}^{2}=\mathbb{I}\) at all values of \(\phi\) and \(\theta\). Therefore, \(\mathcal{U}_{1}\) is a _discrete_ symmetry. Secondly, \[\langle\mathcal{U}_{1}E_{m}|E_{n}\rangle=(-1)^{n}\delta_{mn},\] (11) where \(n=1,2\) correspond to the \(|E_{\mp}\rangle\) states, respectively. The negative norm indicates that while the computational basis gives a Riemannian manifold, the energy states live on a non-Riemannian one. We can define a metric with a signature specific to the Hamiltonian, or equivalently, we can define an additional Hamiltonian-specific operator to compensate for the negative norm. The desired linear operator in the eigenbasis is \(\mathcal{C}_{1}|E_{n}\rangle:=(-1)^{n}|E_{n}\rangle\) (i.e., \(\mathcal{C}_{1}^{\dagger}|\mathcal{U}_{1}E_{n}\rangle=(-1)^{n}|\mathcal{U}_{1}E_{n}\rangle\)), such that the inner product becomes \[\langle\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}E_{n}|E_{m}\rangle=\delta_{mn}.\] (12) This operator expressed in the computational basis is \[\mathcal{C}_{1} = a|1-f\rangle\langle f|+\frac{1}{a}|f\rangle\langle 1-f|.\] (13) The matrix-valued form is \(\mathcal{C}_{1}=\frac{1}{2a}\left[\left(1+a^{2}\right)\sigma_{x}+i\left(1-a^{2}\right)\sigma_{y}\right]\). By construction \([H,\mathcal{C}_{1}]=0\), i.e., \(\mathcal{C}_{1}\) is a _"hidden symmetry"_ of the system, in analogy with the \(\mathcal{PT}\)-symmetric quantum theory literature [7]. Therefore, we deduce that \(\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}\) is a non-unitary dynamical _metric_ operator, obtained to be \[\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}=|a||f\rangle\langle f|+\frac{1}{|a|}|1-f\rangle\langle 1-f|,\] (14) in which \(\mathcal{U}_{1}\) incorporates the phase obstruction between the biorthogonal bases, while \(\mathcal{C}_{1}\) adjusts their different normalization lengths along the computational basis states. The physical interpretations of these two operators are now evident. \(\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}\) is like a metric that adjusts the 'length' of the two basis states, while \(\mathcal{C}_{1}\) gives a spectral/density flow and \(\mathcal{U}_{1}\) incorporates the decoherence between the two basis states. We will see below that \(\mathcal{U}_{1}\) gives a relation between \(H\) and \(H^{\dagger}\). This product operator hence differs from the pseudo-Hermitian metric operators, because in the latter case the same metric operator gives both the dual-space map and the map between \(H\) and \(H^{\dagger}\), and hence the decoherence is not manifest [34; 35; 36; 8; 9]. (A numerical check of these operators is sketched after this list.)
2. _Inversion symmetry (\(\mathcal{U}_{2}\)):_ Proceeding similarly, we obtain \(\mathcal{U}_{2}=-i(\sin\phi\,\sigma_{x}-\cos\phi\,\sigma_{y})\) with \(\mathcal{U}_{2}|E_{\pm}\rangle=\pm|\tilde{E}_{\mp}\rangle\) and \(\mathcal{U}_{2}^{2}=-\mathbb{I}\). \(\mathcal{U}_{2}\) and \(\mathcal{U}_{1}\) are related by the 'particle-hole' symmetry: \(\mathcal{U}_{2}=\mathcal{Q}\mathcal{U}_{1}\). The problem of the negative norm is also present for \(\mathcal{U}_{2}\) and is remedied by defining a linear operator \(\mathcal{C}_{2}\) similar to Eq. (13). It turns out that \(\mathcal{C}_{2}=-\mathcal{C}_{1}\), so that \(\langle\mathcal{C}_{2}^{\dagger}\mathcal{U}_{2}E_{\mp}|\) gives the _same_ dual space corresponding to \(|E_{\pm}\rangle\). We observe that at the normal point (\(|a|=1\)), \(\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}=\mathbb{I}\) and \(\mathcal{C}_{2}^{\dagger}\mathcal{U}_{2}=\mathcal{Q}\). Then the above inner product definitions coincide with the usual Hermitian conjugation dual space.
3. _Anti-unitary symmetry (\(\mathcal{A}_{i}\)):_ Similar to the case of the unitary operators, we obtain \(\mathcal{A}_{1}=\sigma_{x}\mathcal{K}\) with \(\mathcal{A}_{1}|E_{\pm}\rangle=\pm e^{-i\phi}|\tilde{E}_{\pm}\rangle\), and \(\mathcal{A}_{2}=i\sigma_{y}\mathcal{K}\) with \(\mathcal{A}_{2}|E_{\pm}\rangle=\pm e^{-i\phi}|\tilde{E}_{\mp}\rangle\). They are also related to each other by \(\mathcal{A}_{2}=\mathcal{Q}\mathcal{A}_{1}\). We observe that \(\mathcal{A}_{i}^{2}=(-1)^{i+1}\mathbb{I}\) (with this overall sign being gauge invariant). For \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), we find that \(\langle\mathcal{C}_{3}^{\dagger}\mathcal{A}_{1}E_{\pm}|\) and \(\langle\mathcal{C}_{4}^{\dagger}\mathcal{A}_{2}E_{\mp}|\) are the corresponding duals of \(|E_{\pm}\rangle\), with the metric operators \(\mathcal{C}_{3}=e^{-i\phi}\mathcal{C}_{1}\) and \(\mathcal{C}_{4}=-\mathcal{C}_{3}\).
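As a concrete check of the reflection map of item 1, the following Python/NumPy sketch (our illustration; all names are ours) builds \(H\) in the off-diagonal computational-basis form of Eq. (6) and verifies the hidden symmetry \([H,\mathcal{C}_{1}]=0\), the diagonal metric of Eq. (14), and the inner product of Eq. (12):

```python
import numpy as np

# Work directly in the computational basis: |f> = (1,0), |1-f> = (0,1)
f, gamma, phi = 0.7, 0.4, 1.1                  # arbitrary sample parameters
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

c1 = np.sqrt(f) * np.exp(1j * (gamma + phi))
c2 = np.sqrt(1 - f) * np.exp(1j * (gamma - phi))
H = np.array([[0, c2], [c1, 0]])               # off-diagonal form, Eq. (6)

a = (f / (1 - f)) ** 0.25 * np.exp(1j * phi)   # complex a = |a| e^{i phi}
U1 = np.cos(phi) * sx + np.sin(phi) * sy       # reflection map, Eq. (10)
C1 = ((1 + a**2) * sx + 1j * (1 - a**2) * sy) / (2 * a)   # Eq. (13)

print(np.allclose(H @ C1, C1 @ H))             # hidden symmetry [H, C1] = 0
M = C1.conj().T @ U1                           # metric operator, Eq. (14)
print(np.allclose(M, np.diag([abs(a), 1 / abs(a)])))

ket_f, ket_1mf = np.eye(2, dtype=complex)
Ep = (ket_f + a * ket_1mf) / np.sqrt(2 * abs(a))
Em = (ket_f - a * ket_1mf) / np.sqrt(2 * abs(a))
print((M @ Ep).conj() @ Ep, (M @ Ep).conj() @ Em)   # -> 1 and 0, Eq. (12)
```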
### Symmetry-enforced real energy
In the above, we discovered that \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}\) give linear and anti-linear maps to the dual space, respectively. The natural question is how the operators \(H\) and \(H^{\dagger}\) transform under these maps, and whether these transformations correspond to any symmetry of the system.
Interestingly, \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}\) provide a relation between \(H\) and \(H^{\dagger}\) as
\[\mathcal{U}_{i}H\mathcal{U}_{i}^{-1} = (-1)^{i+1}e^{2\mathrm{i}\gamma}H^{\dagger}, \tag{15}\] \[\mathcal{A}_{i}H\mathcal{A}_{i}^{-1} = (-1)^{i+1}H^{\dagger}, \tag{16}\]
where \(i=1,2\). These expressions resemble the pseudo-Hermitian criterion [35; 8; 9; 36], but are generalized here to any random Hamiltonian with complex eigenvalues (\(\gamma\neq 0\)). This gives a clue that a combination of \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}\) can perhaps be made a symmetry of the system. Among the four possible combinations, we have two independent operators, denoted \(\mathcal{S}_{1}=\mathcal{A}_{1}^{-1}\mathcal{U}_{1}=\mathcal{A}_{2}^{-1}\mathcal{U}_{2}=(\cos\phi\,\mathbb{I}-i\sin\phi\,\sigma_{z})\mathcal{K}\) and \(\mathcal{S}_{2}=\mathcal{A}_{1}^{-1}\mathcal{U}_{2}=\mathcal{A}_{2}^{-1}\mathcal{U}_{1}=i(\sin\phi\,\mathbb{I}+i\cos\phi\,\sigma_{z})\mathcal{K}\), where \(\mathcal{S}_{1}^{2}=\mathcal{S}_{2}^{2}=\mathbb{I}\). The \(\mathcal{S}_{i}\) are diagonal operators in the computational basis and transform the Hamiltonian as
\[\mathcal{S}_{i}H\mathcal{S}_{i}^{-1}=(-1)^{i+1}e^{-2\mathrm{i}\gamma}H, \tag{17}\]
for \(i=1,2\). It is now clear that for either _real_ or _imaginary_ energies, one of the \(\mathcal{S}_{i}\)s becomes a _symmetry_ and the other becomes an antisymmetry.
For _real_ energies, which correspond to \(\gamma=n\pi\) for \(n\in\mathbb{Z}\), \(\mathcal{S}_{1}\) (\(\mathcal{S}_{2}\)) is a symmetry (anti-symmetry) of \(H\). For _imaginary_ energies, i.e., \(\gamma=\frac{n\pi}{2}\) with \(n\) an odd integer, \(\mathcal{S}_{1}\) (\(\mathcal{S}_{2}\)) becomes an anti-symmetry (symmetry) of \(H\). For other values of \(\gamma\), i.e., for general complex energies, the \(\mathcal{S}_{i}\) are not symmetries/antisymmetries of \(H\), yet \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}\) still provide linear and antilinear maps to the dual space, respectively.
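A quick numerical confirmation of Eq. (17), and of the symmetry enforcement at real energies, can be written as follows (Python/NumPy; the helper H_cb and all names are our own illustration):

```python
import numpy as np

def H_cb(f, gamma, phi):
    """H in the computational basis (the Eq. (6) form); illustrative helper."""
    c1 = np.sqrt(f) * np.exp(1j * (gamma + phi))
    c2 = np.sqrt(1 - f) * np.exp(1j * (gamma - phi))
    return np.array([[0, c2], [c1, 0]])

f, gamma, phi = 0.7, 0.4, 1.1                  # generic complex-energy case
sz = np.diag([1.0, -1.0])
W1 = np.cos(phi) * np.eye(2) - 1j * np.sin(phi) * sz    # S1 = W1 K

# Antiunitary action: S1 H S1^{-1} = W1 H* W1^{-1}; compare with Eq. (17)
H = H_cb(f, gamma, phi)
lhs = W1 @ H.conj() @ np.linalg.inv(W1)
print(np.allclose(lhs, np.exp(-2j * gamma) * H))        # True for any gamma

# Real energies (gamma = pi): S1 becomes a genuine symmetry, S1 H S1^{-1} = H
H_real = H_cb(f, np.pi, phi)
print(np.allclose(W1 @ H_real.conj() @ np.linalg.inv(W1), H_real))
```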
Eqs. (15), (16), and (17) can be used to classify the Hamiltonian in the generalized Bernard LeClair (gBLC) classification scheme [37; 38; 39; 20; 14]. For a unitary symmetry operation \(\Xi\), if the Hamiltonian transforms as
\[H=\epsilon_{\Xi}\Xi f(H)\Xi^{-1}, \tag{18}\]
and if \(\epsilon_{\Xi}=\pm 1\) and \(f(H)\) equals \(H\), \(H^{T}\), \(H^{\dagger}\), or \(H^{*}\), then the symmetry classes are named P, C, Q, and K classes, respectively (for the P-class, \(\epsilon_{\Xi}=+1\) is a trivial unitary transformation). According to our definition, the 'particle-hole' symmetry \(\mathcal{Q}\) corresponds to the P-class, while \(\mathcal{V}_{i}\), \(\mathcal{U}_{i}\) and \(\mathcal{S}_{i}\) belong to the C-, Q- and K-classes, respectively. We list the symmetries of our Hamiltonian in Table 1. \(\mathcal{Q}\) is a static symmetry in the computational basis, by our construction, and is always present. The \(\mathcal{A}_{i}\) are also static operators in the computational basis and are present because of our starting point in Eq. (1). \(\mathcal{U}_{i}\) is only present if the energy eigenvalues are real or imaginary. Since \(\mathcal{S}_{i}\) is a product of \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}\), these operators form a group structure.
For real energies, \(\mathcal{S}_{1}\) is defined in the computational basis, and its form in the original Hamiltonian basis is Hamiltonian specific. Therefore, \(\mathcal{S}_{1}\) is a more general symmetry, which may or may not turn out to be the \(\mathcal{PT}\)-symmetry [7] in the Hamiltonian basis. Similarly, the dynamical metric operator \(\mathcal{C}_{i}^{\dagger}\mathcal{U}_{i}\) behaves analogously to the pseudo-Hermitian class but is not exactly the same [35; 36; 8; 9]. We emphasize that \(\mathcal{S}_{1,2}\) are 'dynamical' operators, as they depend on the Hamiltonian parameter \(\phi\), and their realization as a symmetry/anti-symmetry depends on the parameter values \(\phi\) and \(\gamma\).
## IV Applications and examples
We now consider the parameter space of random NH Hamiltonians and how the computational space, energy eigenspace, dual space, and complex energy space are embedded in this parameter space. We subsequently consider several representative examples to demonstrate our formalism.
### General non-Hermitian Hamiltonian
We start with a general \(N\times N\) random Hamiltonian \(H\) which has \(N^{2}\) complex variables \(h_{i}\). The trace of a Hamiltonian and the global scaling of the eigenvalues (\(|h|\)) do not contribute to the eigenstates. So we consider the traceless part of the Hamiltonian, with \(N^{2}-1\) complex parameters. Since the eigenvectors are independent of the radius \(|h|\) in the parameter space, it is convenient to go to the polar parametrization \(\mathbb{C}_{h}^{N^{2}-1}\cong\mathbb{R}_{|h|}\times\mathbb{S}_{h}^{2N^{2}-3}\) [41]. What can be a general operator with eigenvalue \(\propto|h|\) which commutes with \(H\)? Our claim in Eq. (1) is that the operator \(\{H,H^{\dagger}\}\) does this job for a wide class of random Hamiltonians. This is certainly true for two-dimensional Hamiltonians, which we explore first.
We consider a 2D traceless random NH Hamiltonian expanded on the basis of Pauli matrices with complex coefficients as
\[H=\vec{h}\cdot\vec{\sigma},\ \mathrm{where}\ \vec{h}=(h_{x},h_{y},h_{z})\in\mathbb{C}^{3}. \tag{19}\]
The Hamiltonian satisfies \(\left\{H,H^{\dagger}\right\}=2|\vec{h}|^{2}\mathbb{I}\). The scaling parameter \(d=2|\vec{h}|^{2}\) is set to 1 for simplicity. We then work in the \(\mathbb{C}_{h}^{3}\cong\mathbb{R}_{|h|}\times\mathbb{S}_{h}^{5}\) manifold.
The computational space does not span the entire \(\mathbb{S}_{h}^{5}\) manifold, but a subset of it, parameterized by \(N^{2}-1=3\) real parameters. This can be easily deduced by expressing the Hermitian \(F\) operator as \(F=\frac{1}{2}+\vec{\mathfrak{f}}\cdot\vec{\sigma}\), where \(\vec{\mathfrak{f}}\in\mathbb{R}_{\vec{\mathfrak{f}}}^{3}\) is given by \(\mathfrak{f}_{\mu}=-2\epsilon_{\mu\nu\rho}\mathrm{Im}[h_{\nu}^{*}h_{\rho}]\) with \(\mu,\nu,\rho=x,y,z\) (no summation convention is implied on the repeated indices here). The eigenvalues of \(F\) are determined by \(|\mathfrak{f}|\in\mathbb{R}_{|\mathfrak{f}|}\) as \(f=\frac{1}{2}+|\mathfrak{f}|\) and \(1-f=\frac{1}{2}-|\mathfrak{f}|\). The corresponding eigenvectors live on the hypersphere \(\mathbb{S}_{\vec{\mathfrak{f}}}^{2}\) at two antipodal points. However, there is a restriction on the allowed values of the radius \(|\mathfrak{f}|\in\mathbb{R}_{|\mathfrak{f}|}^{\prime}\), where the prime symbol denotes the closed set \(0\leq|\mathfrak{f}|\leq\frac{1}{2}\). We can parameterize \(|\mathfrak{f}|\) by a polar angle \(\theta\), as defined in Sec. III.1, where \(0\leq\theta\leq\pi\), with \(\theta=0\) and \(\pi\) being the exceptional points. We denote it as \(\mathbb{S}_{\theta}^{1}\cong\mathbb{R}_{|\mathfrak{f}|}^{\prime}\) (the corresponding map is \(|\mathfrak{f}|=(\tan^{4}(\theta/2)-1)/[2(\tan^{4}(\theta/2)+1)]\)). The center of the \(\vec{\mathfrak{f}}\) space at \(|\mathfrak{f}|=0\) is a _normal point_, while \(|\mathfrak{f}|=\frac{1}{2}\) corresponds to the _exceptional hypersphere_, see Fig. 3. We emphasize that no other parameters contribute to the exceptional points.
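The decomposition of \(F\) is easy to test numerically. In the sketch below (Python/NumPy, illustrative), the cross product \(i(\vec{h}^{*}\times\vec{h})\), which is real, encodes the \(\epsilon_{\mu\nu\rho}\) contraction of \(\mathrm{Im}[h_{\nu}^{*}h_{\rho}]\):

```python
import numpy as np

rng = np.random.default_rng(3)
h = rng.normal(size=3) + 1j * rng.normal(size=3)
h /= np.sqrt(2) * np.linalg.norm(h)            # d = 2|h|^2 = 1

sig = np.array([[[0, 1], [1, 0]],
                [[0, -1j], [1j, 0]],
                [[1, 0], [0, -1]]])
H = np.einsum('i,ijk->jk', h, sig)

# f-vector of F = 1/2 + f_vec . sigma; i (h* x h) is real and encodes
# the epsilon contraction of Im[h_nu^* h_rho]
f_vec = (1j * np.cross(h.conj(), h)).real
F = H.conj().T @ H
print(np.allclose(F, 0.5 * np.eye(2) + np.einsum('i,ijk->jk', f_vec, sig)))

r = np.linalg.norm(f_vec)                      # |f| <= 1/2
print(np.linalg.eigvalsh(F))                   # exact eigenvalues of F
print(0.5 - r, 0.5 + r)                        # match: f = 1/2 +- |f|
```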
For the Hamiltonian \(H\), the two amplitudes \(|E|=\sqrt[4]{\frac{1}{4}-|\mathfrak{f}|^{2}}\) and \(|a|=\sqrt[4]{(1+2|\mathfrak{f}|)/(1-2|\mathfrak{f}|)}\) depend on \(|\mathfrak{f}|\). The two additional parameters \(\gamma=\arg(E)\), and \(\phi=\arg(a)\) are not determined by \(\vec{\mathfrak{f}}\) parameters, but by the complementary set \(\{\vec{h}\}-\{\vec{\mathfrak{f}}\}\). We denote these two compact spaces by
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Energy & \(\mathcal{Q}\) & \(\mathcal{A}_{1}\) & \(\mathcal{U}_{1}\) & \(\mathcal{S}_{1}\) \\ & (P-class) & (C-class) & (Q-class) & (K-class) \\ \hline Complex & -1 & +1 & 0 & 0 \\ Real & -1 & +1 & +1 & +1 \\ Imaginary & -1 & +1 & -1 & -1 \\ \hline \end{tabular}
\end{table}
Table 1: We classify the NH Hamiltonians according to the generalized Bernard LeClair (gBLC) classification scheme on the computational basis. The values \(\pm 1\) correspond to \(\epsilon_{\Xi}=\pm\) in Eq. (18), while \(0\) means either the transformation operator does not exist or is a dynamical operator as in Eq. (15). The \(\mathcal{Q}\) and \(\mathcal{A}_{1}\) symmetries are always present as they are static symmetries in the computational basis (see main text), while the \(\mathcal{U}_{1}\) symmetry class varies according to whether the energy eigenvalues are real, imaginary, or complex, i.e. on the values of \(\gamma\) in Eq. (15). The same results for \(\mathcal{A}_{2},\mathcal{U}_{2}\) and \(\mathcal{S}_{2}\) are obtained by the relations \(\mathcal{A}_{2}=\mathcal{Q}\mathcal{A}_{1}\), \(\mathcal{U}_{2}=\mathcal{Q}\mathcal{U}_{1}\), and \(\mathcal{S}_{2}=-\mathcal{Q}\mathcal{S}_{1}\).
\(\mathbb{S}^{1}_{\gamma}\) and \(\mathbb{S}^{1}_{\phi}\), respectively, both centered at the exceptional point \(|\mathfrak{f}|=\frac{1}{2}\). So the complex energy eigenvalue space splits as \(\mathbb{C}_{E}\cong\mathbb{S}^{1}_{\theta}\times\mathbb{S}^{1}_{\gamma}\), and the complex expansion coefficient lives on \(\mathbb{C}_{a}\cong\mathbb{S}^{1}_{\theta}\times\mathbb{S}^{1}_{\phi}\). Finally, the energy Hilbert space \(\mathcal{H}\) lives on the four-real-parameter space \(\mathbb{S}^{4}_{|E\rangle}\cong\mathbb{S}^{1}_{\theta}\times\mathbb{S}^{1}_{\phi}\times\mathbb{S}^{2}_{\mathfrak{f}}\).
We summarize it as follows:
\[\mathbb{S}^{5}_{h}\cong\mathbb{S}^{1}_{\gamma}\times\underbrace{ \overbrace{\mathbb{S}^{1}_{\theta}}^{E}\times\mathbb{S}^{1}_{\phi}\times \mathbb{S}^{2}_{\mathfrak{f}}}_{|E\rangle}.\]
The dual space is defined by the transformation in the Hamiltonian parameters as \(\theta\rightarrow\bar{\theta}=\pi-\theta\), \(\phi\rightarrow\bar{\phi}\) where \(\bar{\phi}=\phi\) (\(\phi+\pi\)) for \(\mathcal{U}_{1}\) (\(\mathcal{U}_{2}\)) symmetries (and the same for \(\mathcal{A}_{1}\) (\(\mathcal{A}_{2}\))):
\[\mathbb{S}^{4}_{|E\rangle}\ \xrightarrow{\mathcal{U}_{i}}\ \mathbb{S}^{4}_{|\tilde{E}\rangle}\cong\mathbb{S}^{1}_{\bar{\theta}}\times\mathbb{S}^{1}_{\bar{\phi}}\times\mathbb{S}^{2}_{\mathfrak{f}}. \tag{20}\]
The operator \(\mathcal{C}_{i}\) in the inner product depends on \(a\), i.e. \(\theta\) and \(\phi\) in the computational basis.
The complex energy space transforms as
\[\mathbb{C}_{E} \xrightarrow{\mathcal{U}_{i}}\ \ \mathbb{C}_{E}\cong\mathbb{S}^{1}_{ \theta}\times\mathbb{S}^{1}_{\gamma} \tag{21}\] \[\xrightarrow{\mathcal{A}_{i}}\ \ \mathbb{C}_{E^{*}}\cong\mathbb{S}^{1}_{ \theta}\times\mathbb{S}^{1}_{-\gamma}.\]
### Example
It is instructive to demonstrate the above formalism, the symmetry-enforced real eigenvalues, and the symmetry-breaking phase transition for a concrete choice of Hamiltonian parameters. Here we consider a NH Hamiltonian with \(h_{x}=\frac{1}{\sqrt{2}}\sin\alpha\cos\beta\), \(h_{y}=\frac{1}{\sqrt{2}}\sin\alpha\sin\beta\), and \(h_{z}=i\frac{1}{\sqrt{2}}\cos\alpha\), where \(0\leq\alpha\leq\pi\), \(0\leq\beta\leq 2\pi\). This gives \(d=2|h|^{2}=1\) and the \(\mathbb{S}^{2}_{h}\) manifold in the Hamiltonian parameter space, as shown in Fig. 4.
For the computational space we have \(\mathfrak{f}_{x}=-\frac{1}{2}\sin 2\alpha\sin\beta\), and \(\mathfrak{f}_{y}=\frac{1}{2}\sin 2\alpha\cos\beta\), \(\mathfrak{f}_{z}=0\). So the computational basis lives on a circle \(\mathbb{S}^{1}_{\mathfrak{f}}\) defined by \(|\mathfrak{f}|=\frac{1}{2}|\sin 2\alpha|\), as shown in Fig. 3. The eigenvectors are
\[\ket{f},\ket{1-f}=\frac{1}{\sqrt{2}}\left(\begin{array}{c}1\\ \pm i\nu_{\alpha}e^{i\beta}\end{array}\right) \tag{22}\]
where \(\nu_{\alpha}=\text{sgn}\left[\sin 2\alpha\right]\). The corresponding eigenvalues are \(f=\frac{1}{2}+|\mathfrak{f}|\), and \(1-f=\frac{1}{2}-|\mathfrak{f}|\). The exceptional contours are defined at the roots of \(f\) and \(1-f\) which lie at \(\alpha_{n}=\frac{\pi}{4}(2n\pm 1)\) for \(n\in\mathbb{Z}\), and the normal contour lies at \(\alpha=\frac{n\pi}{2}\) for \(n\in\mathbb{Z}\), as shown in Fig. 4.
We go from the Hamiltonian to the computational basis by the unitary transformation \(U=(|f\rangle\ |1-f\rangle)\). In the computational basis, the Hamiltonian is off-diagonal as in Eq. (6), with \(\sqrt{f}e^{i(\gamma+\phi)}=i\sin\left(\frac{\pi}{4}+\nu_{\alpha}\alpha\right)\) and \(\sqrt{1-f}e^{i(\gamma-\phi)}=i\sin\left(\frac{\pi}{4}-\nu_{\alpha}\alpha\right)\). Interestingly, the Hamiltonian in the computational basis has one fewer parameter due to the constraint
Figure 4: The parameter space of the Hamiltonian defined in Sec. IV B gives a sphere of radius \(d=2|h|^{2}=1\). The \(\alpha=\pi/4\) and \(3\pi/4\) circles give exceptional contours, while \(\alpha=\pi/2\) is the normal contour. The energy eigenvalues are real (imaginary) within (outside) the regions bounded by the exceptional points/contours. In the region with real eigenvalues, \(\mathcal{S}_{1}\) becomes a static symmetry and coincides with the \(\mathcal{PT}\)-symmetry.
Figure 3: Schematic representation of the splitting of the complex parameter space of the Hamiltonian into the computational space \(\mathbb{S}^{2}_{\mathfrak{f}}\), the expansion parameter spaces \(\mathbb{S}^{1}_{\theta}\), \(\mathbb{S}^{1}_{\phi}\), and the complex energy space \(\mathbb{S}^{1}_{\gamma}\). The radius of the hypersphere \(|h|\) corresponds to the scaling parameter \(d=2|h|^{2}\). The thin dashed blue circle is the \(|\mathfrak{f}|=0\) contour of the normal point. The computational space \(\mathbb{S}^{2}_{\mathfrak{f}}\) encircles the normal point, as shown by the thick blue dashed line, and extends up to the \(|\mathfrak{f}|=1/2\) contour. \(\gamma\) and \(\phi\) are defined with respect to the exceptional point/contour.
\(\{H,H^{\dagger}\}=\mathbb{I}\). We note that the two phases \(\gamma\) and \(\phi\) only take the values \(\frac{m\pi}{2}\), \(m\in\mathbb{Z}\) (i.e., \(E_{\pm}\) and \(a\) are either real or imaginary here). The Hamiltonian parameter space \(\mathbb{S}_{h}^{2}\) is split into three regions by the exceptional points (\(\alpha_{n}\)):
1. _Imaginary_\(E_{\pm}\): \(\alpha\in\left(0,\frac{\pi}{4}\right)\), \(\phi=0\), and \(\gamma=\frac{\pi}{2}\).
2. _Real_\(E_{\pm}\): 1. \(\alpha\in\left(\frac{\pi}{4},\frac{\pi}{2}\right)\), \(\phi=\frac{3\pi}{2}\), and \(\gamma=\pi\). 2. \(\alpha\in\left(\frac{\pi}{2},\frac{3\pi}{4}\right)\), \(\phi=\frac{\pi}{2}\), and \(\gamma=\pi\).
3. _Imaginary_\(E_{\pm}\): \(\alpha\in\left(\frac{3\pi}{4},\pi\right)\), \(\phi=\pi\), and \(\gamma=\frac{\pi}{2}\).
For fixed values of \(\phi=\frac{m\pi}{2}\), the dual space maps \(\mathcal{U}_{1,2}\) become static. \(\mathcal{S}_{1,2}\) become the symmetry/anti-symmetry of the Hamiltonian since \(\gamma=\frac{n\pi}{2}\), \(n\in\mathbb{Z}\), i.e., the energies are either real or imaginary. We discuss their representations in the three cases.
In region (2a), \(\alpha\in\left(\frac{\pi}{4},\frac{\pi}{2}\right)\), where the energies are _real_: \(\mathcal{U}_{1}=-\sigma_{y}\), \(\mathcal{U}_{2}=i\sigma_{x}\), \(\mathcal{S}_{1}=i\sigma_{x}\mathcal{K}\), and \(\mathcal{S}_{2}=-i\mathcal{K}\), with \(\mathcal{S}_{1}\) being the symmetry of the Hamiltonian. If we revert to the Hamiltonian basis, we obtain \(\mathcal{S}_{1}=e^{i\beta}\tau_{x}\mathcal{K}\), where the \(\tau_{i}\) are the \(2\times 2\) Pauli matrices in the Hamiltonian basis, and \(\mathcal{S}_{2}=-\frac{i}{2}\left[(1-e^{2i\beta})\mathbb{I}+(1+e^{2i\beta})\tau_{z}\right]\mathcal{K}\). In this basis \(\mathcal{S}_{2}\) becomes a dynamical symmetry operation, while \(\mathcal{S}_{1}\) becomes precisely the static \(\mathcal{PT}\) symmetry operator (up to a global gauge of \(e^{i\beta}\)) introduced by Bender et al. [7]. The \(\mathcal{C}_{1}\) operator defined in Eq. (13) remains dynamical due to the presence of the \(|a|\) term. In region (2b), on the other hand, the symmetry operators \(\mathcal{S}_{i}\) are found to differ by a phase of \(\pi\) compared to those in region (2a).
In region (1), where the energies are purely imaginary, the \(\mathcal{S}_{i}\) take a different form: \(\mathcal{S}_{1}=\mathcal{K}\) and \(\mathcal{S}_{2}=-\sigma_{x}\mathcal{K}\). In the Hamiltonian basis \(\mathcal{S}_{1}=\frac{1}{2}\left[(1-e^{2i\beta})\mathbb{I}+(1+e^{2i\beta})\tau_{z}\right]\mathcal{K}\) and \(\mathcal{S}_{2}=ie^{i\beta}\tau_{x}\mathcal{K}\), which become dynamical and static, respectively. The operators \(\mathcal{S}_{i}\) in region (3) acquire an additional phase of \(\pi\) relative to the corresponding operators in region (1).
The Hermitian condition is achieved at \(\alpha=\pi/2\), which is the normal contour lying inside the real-energy region. Here we get \(E_{\pm}=\mp\frac{1}{\sqrt{2}}\) with \(\gamma=\pi\). At the normal point \(|a|=1\), and we have the freedom to choose \(a=-i\). This gives \(\mathcal{U}_{1}=-\sigma_{y}\), \(\mathcal{U}_{2}=i\sigma_{x}\) and \(\mathcal{C}_{1}=-\sigma_{y}\), with \(\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}=\mathbb{I}\) and \(\mathcal{C}_{2}^{\dagger}\mathcal{U}_{2}=\mathcal{Q}\). Then the inner product definition in Eq. (12) coincides with the usual Hermitian conjugation dual space. It is not surprising that the Hermitian condition is achieved in region (2), which has real eigenvalues: for any NH Hamiltonian with real eigenvalues, one can find a similarity transformation (smooth deformation) that yields a Hermitian Hamiltonian, which lies at the normal point. Clearly, the smooth deformation is parameterized by \(\alpha\) (see the Supplemental Material [25] for more examples).
## V Higher dimension and degeneracy
In the two-dimensional case, the 'particle-hole' eigenvalue pair \(\pm|E|e^{i\gamma}\) is scaled by a single parameter \(d\), and the corresponding computational space is parameterized by \(f\). Generalization to higher-dimensional Hamiltonians requires consideration of the classification of the eigenspectra of the \(D\) and \(F\) operators and will be pursued in future studies. Here we focus on the \(P\)-class of higher dimensional Hamiltonians with 'particle-hole' symmetry (see Table 1), in which all eigenvalues are separated from each other in the complex energy space (see Fig. 5(a)). This means there exists a block diagonal form of the Hamiltonian for each eigenvalue pair \(\pm|E_{n}|e^{i\gamma_{n}}\), with corresponding scaling and computational parameters \(d_{n}\) and \(f_{n}\), respectively. The full basis states are a sum over all the computational basis states, and the expansion coefficients are \(a_{n}=|a_{n}|e^{i\phi_{n}}\), where the definitions of \(|E_{n}|\) and \(|a_{n}|\) remain the same in terms of \(f_{n}\).
Interesting special cases arise for two degenerate situations, apart from the exceptional points. (1) An \(m\)-fold degeneracy of the eigenvalue pairs \(\pm|E|e^{i\gamma_{n}}\), \(n=1\) to \(m\), where the phases \(\gamma_{n}\) are different but the amplitude \(|E|\) is the same. We call this the _circular degeneracy_ in the complex energy plane, see Fig. 5(b). (2) A second, simpler case is when all the phases are also equal to \(\gamma\), which we term the _point degeneracy_, as shown in Fig. 5(c).
1. _Circular degeneracy:_ Since \(|E|\) only depends on \(f\) (and \(d\), which we set to 1 as before), the computational basis \(|f\rangle\) is \(m\)-fold degenerate. We choose an \(m\)-fold degenerate orthogonal basis \(|f\rangle_{i}\) and \(|1-f\rangle_{i}\), and express the corresponding Hamiltonian in the computational manifold in the block off-diagonal form \[H=\sum_{i,j=1}^{m}A_{ij}|1-f\rangle_{i}\langle f|_{j}+B_{ij}|f\rangle_{i}\langle 1-f|_{j},\] (23) where \(A_{ij}\), \(B_{ij}\in\mathbb{C}\). Comparing this to Eq. (6), it is obvious that the eigenvalues of \(A\) and \(B\) give the factors in the ladder operations in Eq. (5d). Expressing the \(n^{\rm th}\) energy eigenstate \(|E_{\pm}^{n}\rangle\) as a spinor field \((\psi_{A}^{n}\ \pm\psi_{B}^{n})^{T}\), we easily obtain \(A\psi_{A}^{n}=\pm E_{\pm}^{n}\psi_{B}^{n}\) and \(B\psi_{B}^{n}=\pm E_{\pm}^{n}\psi_{A}^{n}\). Moreover, \(BA\psi_{A}^{n}=(E_{\pm}^{n})^{2}\psi_{A}^{n}\), where \(E_{\pm}^{n}=\pm\sqrt[4]{f(1-f)}e^{i\gamma_{n}}\) are the energy eigenvalues. Then the corresponding eigenvectors are expressed as \[|E_{\pm}^{n}\rangle=\sum_{i=1}^{m}(\psi_{A}^{n})_{i}\left(|f\rangle_{i}\pm a_{i}^{n}|1-f\rangle_{i}\right),\] (24) where we define \(a_{i}^{n}=(\psi_{B}^{n})_{i}/(\psi_{A}^{n})_{i}=|a|e^{i\phi_{i}^{n}}\) (see the numerical sketch after this list).
2. _Point degeneracy:_ Point degeneracy corresponds to \(\gamma_{n}=\gamma\). Here we obtain \(A=|a|^{2}e^{2i\gamma}B^{\dagger}\). So \(A\) and \(B\) become simultaneously diagonalizable, and we obtain \(A\psi_{A}^{n}=e^{i(\phi^{n}+\gamma)}\sqrt{f}\psi_{A}^{n}\) and \(\psi_{B}^{n}=e^{i\phi^{n}}|a|\psi_{A}^{n}\); \(\phi_{i}^{n}\) becomes independent of the basis index \(i\). Hence the \(|E_{\pm}^{n}\rangle\) are to be treated as \(m\)-fold degenerate states of \(H\). Now redefining \(|f\rangle_{n}^{\prime}=\sum_{i=1}^{m}(\psi_{A}^{n})_{i}|f\rangle_{i}\), and the same for \(|1-f\rangle_{n}^{\prime}\), we obtain, from Eq. (24), the same equation as Eq. (8) up to an overall biorthogonal normalization factor of \(1/\sqrt{2|a|}\) for each energy level \(n\). (Refer to the Supplemental Material [25] for further discussion.)
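The block structure of Eq. (23) and the circular degeneracy are easy to exhibit numerically. In the sketch below (Python/NumPy; our illustration), we choose \(A=\sqrt{f}\,U_{A}\) and \(B=\sqrt{1-f}\,U_{B}\) with Haar-random unitaries \(U_{A,B}\), which is one convenient, though not unique, way to realize \(A^{\dagger}A=f\mathbb{I}\) and \(B^{\dagger}B=(1-f)\mathbb{I}\); all \(2m\) energies then share the amplitude \(\sqrt[4]{f(1-f)}\) while their phases \(\gamma_{n}\) differ:

```python
import numpy as np

rng = np.random.default_rng(4)

def haar_unitary(m, rng):
    """Haar-random unitary via QR with phase fixing; a standard construction."""
    X = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    Q, R = np.linalg.qr(X)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

m, f = 3, 0.7
A = np.sqrt(f) * haar_unitary(m, rng)          # A^dag A = f I
B = np.sqrt(1 - f) * haar_unitary(m, rng)      # B^dag B = (1 - f) I

Z = np.zeros((m, m))
H = np.block([[Z, B], [A, Z]])                 # block form of Eq. (23)

# {H, H^dag} = I, and F = H^dag H has m-fold degenerate eigenvalues f, 1-f
print(np.allclose(H @ H.conj().T + H.conj().T @ H, np.eye(2 * m)))
print(np.round(np.linalg.eigvalsh(H.conj().T @ H), 12))

E = np.linalg.eigvals(H)
print(np.abs(E))                               # all equal (f(1-f))^{1/4}:
print((f * (1 - f)) ** 0.25)                   # circular degeneracy; phases differ
```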
Interestingly, the point degeneracy situation can be achieved by the spectral flattening method from a general Hamiltonian when its energy eigenvalues do not cross each other. In such a case, we can topologically deform a Hamiltonian to a 'flattened' Hamiltonian with all its eigenvalues collapsing to \(\pm|E|e^{i\gamma}\), while the eigenvectors remain the same as the original ones [42]. This holds as long as the energy gap does not close during the smooth deformation. For such a 'flattened' Hamiltonian, Eq. (1) holds. A more specialized case arises when \(f=1/2\), which makes the \(F\) operator \(2N\)-fold degenerate. This is achieved by setting the flattened Hamiltonian to have the degenerate eigenvalue amplitude \(|E|=1/\sqrt{2}\). Here the flattened Hamiltonian becomes a normal operator.
For an odd-dimensional case, say a \((2N+1)\)-dimensional Hamiltonian, there is an odd number (say \(2N^{\prime}+1\)) of computational states at \(f=\frac{1}{2}\), and the remaining \(2(N-N^{\prime})\) states come in \(f\) and \(1-f\) pairs as in the even-dimensional case. For simplicity, we assume \(N^{\prime}=0\), so there is one state at the normal point \(f=\frac{1}{2}\), and the remaining \(2N\) states are 'particle-hole' symmetric. The 'particle-hole' symmetric states follow the above prescription, while the \(|\frac{1}{2}\rangle\) state is an eigenstate, \(H|\frac{1}{2}\rangle=\frac{e^{i\gamma_{0}}}{\sqrt{2}}|\frac{1}{2}\rangle\). So, for an odd-dimensional Hamiltonian, we predict that there must be an odd number of energy states whose energy amplitudes are completely _'flat'_, i.e., they depend only on the scaling parameter \(d\).
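This prediction can be illustrated with a minimal \(3\times 3\) construction (Python/NumPy; the construction is ours, for illustration): a 'particle-hole' block plus one state pinned at \(f=\frac{1}{2}\), disguised by a random unitary rotation. The constraint \(\{H,H^{\dagger}\}=\mathbb{I}\) survives the rotation, and one energy amplitude stays 'flat' at \(1/\sqrt{2}\):

```python
import numpy as np

rng = np.random.default_rng(5)

def haar_unitary(m, rng):
    """Haar-random unitary via QR with phase fixing."""
    X = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    Q, R = np.linalg.qr(X)
    return Q * (np.diagonal(R) / np.abs(np.diagonal(R)))

# One 'particle-hole' pair plus one state pinned at f = 1/2
f, gamma, phi, gamma0 = 0.8, 0.9, 0.3, 1.7
H = np.zeros((3, 3), dtype=complex)
H[1, 0] = np.sqrt(f) * np.exp(1j * (gamma + phi))       # H|f> -> |1-f>
H[0, 1] = np.sqrt(1 - f) * np.exp(1j * (gamma - phi))
H[2, 2] = np.exp(1j * gamma0) / np.sqrt(2)              # the f = 1/2 state

U = haar_unitary(3, rng)                       # disguise the block structure
H = U @ H @ U.conj().T

print(np.allclose(H @ H.conj().T + H.conj().T @ H, np.eye(3)))
print(np.abs(np.linalg.eigvals(H)))            # one 'flat' amplitude 1/sqrt(2)
```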
## VI Discussion
### Difficulties with NH Hamiltonian formalism
The necessity of the \(H^{\dagger}\) operator in quantum theory is rooted in the definition of the dual space as an adjoint map. The adjoint dual space consequently dictates the self-adjoint constraint on physical operators. The associated linear algebra promotes gauge redundancy, and charge conservation is a consequence of the global gauge symmetry. The invariants and unitarity of the theory stem from this self-adjoint condition.
In the NH counterpart, the adjoint eigenvector does not form the dual space. The (left) eigenvector of \(H^{\dagger}\) can be taken as the dual vector to the (right) eigenvector of \(H\). However, given that \(H\neq H^{\dagger}\), these two eigenvectors appear to belong to disjoint vector spaces without a unique map between them. Adding distinct arbitrary phases to the left and right eigenvectors, which is allowed by linear algebra, does not keep the inner product invariant. Similarly, the absence of a map between the eigenvector and its dual vector hinders a unique definition of physical operators.
We can express a NH Hamiltonian in the basis of the Hermitian generators of some Lie algebra, but with complex parameters. In other words, the parameter space of an NH Hamiltonian generally forms a complex manifold. The definitions of 'length' and 'angle' in a complex manifold, which are tied to the inner product of the vector space, are different from those in the real-valued compact manifold of Hermitian Hamiltonians. In fact, the study of quantum theory on complex manifolds is less explored in the literature, except for some efforts focused on Hamiltonian-specific redefinitions of the Lie algebra [43] or of inner products [7; 8].
Moreover, the definition of the directional derivative, which essentially gives the equation of motion on the complex manifold, does not generally coincide with the Schrödinger or Heisenberg pictures. On the one hand, one requires a theory with an invariant inner product on the complex manifold, while on the other hand, the theory is non-conserving by construction. This poses an apparent paradox.
The diagonalizability of a NH Hamiltonian is, in general, violated somewhere in the parameter space of the Hamiltonian, at what is called an exceptional point. The exceptional point is a physically accessible region of the phase space where novel and functional properties can be achieved [26; 27; 28; 29; 30; 31; 32; 33], but its study is hampered by theoretical challenges and its unclear physical interpretation.
Figure 5: (a) The energy spectrum of a higher-dimensional Hamiltonian is schematically plotted as concentric circles in the complex energy plane for the case where the energy eigenvalues do not cross. Each circle corresponds to a different \(f_{n}\), with the eigenvalue pair \(\pm|E_{n}|e^{i\gamma_{n}}\) lying at diametrically opposite points. The dashed blue circle gives the maximum radius \(|E_{n}|=1/\sqrt{2}\), corresponding to the normal point, while the center of the complex plane is the exceptional point. (b) _Circular degeneracy:_ In this case, two or more energy levels coincide on the same circle \(|E|\), but differ by the phases \(\gamma_{n}\). (c) _Point degeneracy:_ This occurs when two or more energy levels possess the same amplitude and phase, i.e., they correspond to a single point on the complex plane (with the 'particle-hole' component lying at the diametrically opposite point).
### Summary of our work
Our formalism provides precise definitions for various quantities in random non-Hermitian (NH) Hamiltonians. It is based on constructing a well-defined Hilbert space of a Hermitian operator as the computational basis for the NH Hamiltonian. In the standard second-quantization formalism, the Fock space is constructed from a fixed number operator or angular momentum/spin operator for any Hamiltonian. Here, by contrast, the computational basis is defined by the bilinear operator \(H^{\dagger}H\); hence it involves a subset of the Hamiltonian's parameter space. The computational basis spans between the exceptional points of the Hamiltonian, avoiding disruption by these singularities. \(H^{\dagger}\) and \(H\) serve as ladder operators between the states in the computational basis across the normal point, while the exceptional points pose as vacua on both sides of the Hilbert space. This makes our general formalism completely free from the singularity of exceptional points.
While the Hamiltonian's parameter space is complex and non-compact (\(\mathbb{C}^{4}_{h}\) for a \(2\times 2\) Hamiltonian), the computational space \(\mathbb{S}^{2}_{f}\subset\mathbb{C}^{4}_{h}\) is a compact manifold. We expand the energy eigenstates in this computational space, with two expansion parameters \(\theta\) and \(\phi\), i.e., a Bloch sphere. \(0\leq\phi\leq 2\pi\) is compact, and \(0<\theta<\pi\), with \(\theta=0\) and \(\pi\) being the exceptional points. The two energy eigenstates collapse into each other at the exceptional points. Moreover, we discover that the dual space also lies in the same computational space, at the two antipodal points, and hence can be _uniquely mapped_ by a discrete (space-time) transformation in this computational space. In other words, we discover the existence of a computational space which is a union of the two vector spaces of the biorthogonal eigenvectors; the exceptional points are its north and south poles (along the two basis states), and the normal point lies at the equator.
The complex expansion parameter \(a\in\mathbb{S}^{1}_{\theta}\times\mathbb{S}^{1}_{\phi}\) spans between the exceptional points at the north and south poles, where \(a\) becomes \(0\) or \(\infty\). Therefore, we need two covers for this manifold, each containing only one of the poles. The eigenvectors and the dual vectors live on different covers, and there is a gauge obstruction between them. The quantum metric is associated with an additional (dynamical) symmetry, which we refer to as the \(\mathcal{C}\)-symmetry, in analogy with the \(\mathcal{PT}\)-symmetry literature, to define an inner product in Sec. III. This suggests that the base manifold of the energy eigenstates is a non-trivial topological space.
We also found that the 'space-time' transformations \(\mathcal{S}_{i}\) are not necessarily symmetries of the Hamiltonian unless the energy eigenvalues are either real or purely imaginary. In the case of real eigenvalues, the 'space-time' symmetry operators defined in the computational basis may or may not turn out to be a \(\mathcal{PT}\) symmetry in the original Hamiltonian basis. Therefore, \(\mathcal{PT}\) symmetry is not a necessary condition. In \(\mathcal{PT}\)-symmetric Hamiltonians, one generally has balanced gain and loss terms in the \(\mathcal{PT}\)-conjugate basis states. For a more general Hamiltonian with complex parameters, gain and loss terms (imaginary parameters) are not generally defined. However, our formalism always defines a 'space-time' transformation in the computational basis, which becomes a symmetry if the energy eigenvalues are real or purely imaginary. This does not rely on balanced gain and loss terms.
We extended our formalism to higher dimensions but restricted ourselves to those Hamiltonians whose energy spectrum can be flattened to a circular- or point-degenerate 'particle-hole' pair. This is applicable to all Hamiltonians whose energy levels do not cross each other. In such a case, each 'particle-hole' pair's eigenstate corresponds to a different computational space. For an odd-dimensional Hilbert space, the formalism predicts an odd number of flat-amplitude energy eigenvalues, which depend only on the scaling parameter \(d\) and not on any other Hamiltonian parameter. Generalization to more general higher dimensional Hamiltonians will yield a different computational basis for each 'particle-hole' pair, defined by a constant eigenvalue \(d\) of \(\{H,H^{\dagger}\}\). We will consider this case in a future study.
Some similarities and differences between our formalism and the initial parts of the supersymmetry (SUSY) formalism are to be noted here. If we replace \(H\to Q\), the supersymmetric operator, then the Lie superalgebra \(\{Q,Q^{\dagger}\}=H\), with \(H\) the supersymmetric Hamiltonian, coincides with Eq. (1) with \(D\to H\). However, in the SUSY formalism, one studies the Hilbert space of \(H\), which is \(D\) here, and considers the Fock space of bosonic and fermionic number operators. Here, instead, we are interested in the Hilbert space of the NH operator \(Q\) and consider the 'Fock' space of \(F=Q^{\dagger}Q\). Therefore the present computational space and Hilbert space do not possess supersymmetry.
### Applications
In open quantum systems and quantum information, a state of interest evolves in time in an environment that causes decoherence and depolarization. There are multiple ways to study this problem, and one can deduce an effective NH Hamiltonian accordingly. For example, from the Lindblad master equation, by treating the environment as a Markovian bath of oscillators, one can define an effective NH Hamiltonian \(H_{\rm eff}=H_{s}+i\tilde{N}\), where \(H_{s}\) is the Hermitian system Hamiltonian and \(\tilde{N}\) is the (Hermitian) oscillator number operator of the bath [16; 44; 45; 46]. Alternatively, starting with the full Hamiltonian of the system + environment, one can project to an effective NH Hamiltonian \(H_{\rm eff}=H_{s}+\Sigma\) for the system, where \(\Sigma\) is the complex 'self-energy' obtained by integrating out the environment's degrees of freedom [22; 23]. This is called the Feshbach projection method in the atomic-molecular-optical (AMO) field; it is characteristically similar to Dyson's formula in many-body quantum theory and has recently been applied in many-body theory [22; 23; 44; 45]. If such an effective Hamiltonian can be solved uniquely, we achieve a more transparent and solvable approach than the parent master equation for the density matrix or the Green function method for the many-body counterpart.
In photonics, the solutions of the wave equation often render NH operators in the linearized differential equations [47; 48; 49; 50]. In cases where the solutions of these differential equations belong to a vector space, tensor space, or modular space (fiber), one requires the definition of a dual space which transforms linearly under the NH operator. Here our formalism becomes convenient.
In quantum field theory and many-body condensed matter, a phase transition is defined by an order parameter, which can be a complex operator (cf. the superconducting order parameter). In a mean-field theory, the ground state of the whole system is a product state of eigenstates of such local order parameters [51; 52; 53]. To define the local Hilbert space of an NH order parameter, our approach is applicable. Similarly, in the matrix-product-state (or tensor-network) formalism, one builds the many-body ground state from local Hilbert spaces by projecting to some subspace (such as a Bell state of two nearby spin-1/2 operators) [54; 55; 56].
While the projection operator is introduced in the Hilbert space, one can alternatively obtain an effective NH Hamiltonian for the subspace of interest. Generally, projection operators are harder to implement in numerical simulations, and one may instead choose the eigenstate of an effective NH operator. Similarly, in scar states or fragmented Hilbert spaces, where the full Hilbert space of the many-body Hamiltonian has a subspace with some conserved charge(s) or dipole(s) [17; 18; 19] that is nearly decoupled from the rest of the Hilbert space, any linear operator defined within that subspace is an NH operator by construction. Here one can limit the computation to the subspace via an NH quantum theory. Recently, it has also been realized that the effective theory of the edge states of a topological insulator is an NH Hamiltonian theory [20; 57]. It is interesting to ask how the quantum anomaly of a topological field theory manifests in an NH quantum theory.
A class of otherwise Hermitian Hamiltonians produces complex energies and/or gain/loss due to outgoing or radiative boundary conditions. Related situations arise in topological edge states with open boundary conditions, in semiconductor junctions with potential wells, barriers, and leads, and in photonics where the boundary is a gapless medium for photons. If such problems have NH Hamiltonian dynamics, they can be studied within our framework by incorporating the boundary/lead as complex terms within the Hamiltonian, or through an NH transfer matrix acting on the edge modes.
We should note that the present formalism can be extended and applied to higher dimensional NH Hamiltonians made of chains of atoms, say, with periodic and open boundary conditions. This will provide an intrinsic explanation of the origin of the _skin effect_ [58; 59; 60; 61; 62].
### Future directions
Two immediate studies follow from the present work. We have so far considered the static NH Schrödinger equation and built its eigenspace and dual space. The next step is to consider the dynamics of the theory. The advantage of the present theory is that we only need to consider the time evolution of \(H\), with no need to consider \(H^{\dagger}\). An immediate question is: if the system is non-conserving by definition, while our formalism provides an apparently conserved inner product, how are the non-conserving physical properties captured? Notice that the linear map is not a space-time independent operator, and hence it also evolves (explicitly or implicitly) in time. Therefore, the symmetry we have is a dynamical one (or pseudo-dynamical), in that its eigenvalues are not conserved in time. A future study will focus on this aspect.
We have focused here on higher dimensional Hamiltonians which have no level crossing, such that each of the 'particle-hole' pair eigenstates can be expressed in the same computational basis but at different points. For more general NH Hamiltonians, the \(D\) operator will have different eigenvalues \(d\), and each eigenvalue corresponds to a different computational basis. Then the discussion in Sec. V can be generalized to a larger Hilbert space. The generalization to higher dimensional Hilbert spaces will elucidate the emergence of higher-order exceptional points. It will also replace the numerical search for exceptional points [63] with precise analytical predictions through the identification of the computational basis states. The applications of the theory in various settings, as discussed in Sec. VI.3, will subsequently be carried out.
## Acknowledgements
We thank Sachindeo Vaidya, Baladitya Suri, Girish Agarwal, Ronny Thomale, Sourin Das for useful discussions. PB acknowledges the Prime Minister Research Fellowship (PMRF) from the Government of India for the fellowship award. TD's research is supported by Science and Engineering Research Board (SERB) under the Department of Science and Technology (DST) of the Government of India for the CRG Research Grant CRG/2022/003412.
## Appendix A
### Exceptional points of \(H\) as the vacuum states of \(F\)
The exceptional points correspond to the parameter values where the Hamiltonian becomes non-diagonalizable, in the sense that two or more eigenstates become linearly dependent on each other. In our formalism of expressing energy eigenstates in the computational basis, we find that at the exceptional point, which is the zero-energy state, the energy eigenstates collapse onto one of the basis states. The computational space is parameterized by \(f\), which is bounded by the exceptional points at \(f=0\) and \(f=1\). At the \(f=0\) boundary, Eqs. (5a), (5d) give \(H|0\rangle=0\) and \(H^{\dagger}|1\rangle=0\), while Eqs. (5b), (5c) produce \(H|1\rangle=e^{i(\gamma-\phi)}|0\rangle\) and \(H^{\dagger}|0\rangle=e^{-i(\gamma-\phi)}|1\rangle\). Clearly, here \(H^{2}=(H^{\dagger})^{2}=0\), and the formalism coincides with the fermionic algebra. At the other boundary, \(f=1\), the situation is reversed. Therefore, the two exceptional points are the vacuum states in the usual Fock space language.
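A minimal numerical illustration of this fermionic-like algebra at the \(f=0\) boundary (Python/NumPy; illustrative, with the phase \(\theta=\gamma-\phi\) chosen arbitrarily):

```python
import numpy as np

# At the f = 0 boundary H is nilpotent: H|1> = e^{i theta}|0>, H|0> = 0
theta = 0.6
H = np.array([[0, np.exp(1j * theta)], [0, 0]])

print(np.allclose(H @ H, 0))                   # H^2 = 0
print(np.allclose(H.conj().T @ H.conj().T, 0)) # (H^dag)^2 = 0
print(np.allclose(H @ H.conj().T + H.conj().T @ H, np.eye(2)))  # {H, H^dag} = I
```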
### Degenerate \(F\) at the normal point
The normal point occurs at \(f=1/2\), where \(|f\rangle\) and \(|1-f\rangle\) are degenerate. Here the ladder action specified in Eq. (5) no longer holds in general. Instead, the ladder operators (\(H\), \(H^{\dagger}\)) can project anywhere in the degenerate manifold of the computational space. Here we have \(F=\frac{1}{2}\) (this follows from Eq. (1) which can be rewritten as \(\left[H,H^{\dagger}\right]=\mathbb{I}-2F\)). Hence, any pair of orthogonal states is an eigenbasis of \(F\). In order to make our formalism work in a similar way, it is sufficient to find a pair of orthogonal states such that the ladder action in Eq. (5) continues to hold. We denote this orthogonal pair of states as \(\left|f\right\rangle=\left|\frac{1}{2},m\right\rangle\) and \(\left|1-f\right\rangle=\left|\frac{1}{2},n\right\rangle\), where \(m\) and \(n\) are the degeneracy labels. We will use these as our computational basis. The ladder action on this basis can be specified as,
\[H\left|\frac{1}{2},m\right\rangle=\frac{e^{i(\gamma+\phi)}}{\sqrt{2}}\left| \frac{1}{2},n\right\rangle,\,H\left|\frac{1}{2},n\right\rangle=\frac{e^{i( \gamma-\phi)}}{\sqrt{2}}\left|\frac{1}{2},m\right\rangle, \tag{10}\]
and similarly for \(H^{\dagger}\). It is now obvious that our further analysis of the non-degenerate computational manifold remains valid here. For example, for the energy eigenspectrum, we obtain, \(E_{\pm}=\pm\frac{e^{i\gamma}}{\sqrt{2}}\) and \(\left|E_{\pm}\right\rangle=\frac{1}{\sqrt{2}}\left(\left|\frac{1}{2},m\right\rangle \pm e^{i\phi}\left|\frac{1}{2},n\right\rangle\right)\).
We emphasize here that at the normal point, where \(H\) commutes with \(H^{\dagger}\), the right and left eigenvectors become identical, i.e., \(|E_{n}\rangle=|\tilde{E}_{n}\rangle\). Consequently, the dual space is defined by the usual Hermitian conjugation map. This was confirmed earlier in Sec. III.1 through our metric \(\mathcal{C}_{1}^{\dagger}\mathcal{U}_{1}\) reducing to the identity at the normal point \(|a|=1\). An explicit evaluation of \(\langle E_{n}|E_{m}\rangle\), which equals \(\delta_{nm}\) only at \(|a|=1\), validates the same.
### Generalized Bernard LeClair classification
In this section, we extend our discussion of the generalized Bernard LeClair (gBLC) classification of the Hamiltonian in Sec. III.2 [37; 38; 39; 40; 20]. In the gBLC scheme, a general NH Hamiltonian is classified based on the symmetry classes defined by Eq. (18). We now discuss these four symmetry classes at length:
1. K-symmetry: \(\Xi=K\), \(f(H)=H^{*}\), \(\epsilon_{K}=\pm 1\) and \(KK^{*}=\eta_{K}\mathbb{I}\) with \(\eta_{K}=\pm 1\)
2. Q-symmetry: \(\Xi=Q\), \(f(H)=H^{\dagger}\), \(\epsilon_{Q}=\pm 1\) and \(Q^{2}=\eta_{Q}\mathbb{I}\) with \(\eta_{Q}=1\)
3. C-symmetry: \(\Xi=C\), \(f(H)=H^{T}\), \(\epsilon_{C}=\pm 1\) and \(CC^{*}=\eta_{C}\mathbb{I}\) with \(\eta_{C}=\pm 1\)
4. P-symmetry: \(\Xi=P\), \(f(H)=H\), \(\epsilon_{P}=-1\) and \(P^{2}=\eta_{P}\mathbb{I}\) with \(\eta_{P}=1\)
In this classification scheme, \(H\longrightarrow iH\) is considered an inequivalent transformation when a line gap exists in the energy spectrum [64]. Further details of the equivalence relations for these symmetry classes can be found in Ref. [64].
With these, it is interesting to note that our formalism reveals important physical properties of general NH systems (for example, topological defect classification [64], etc.) by unveiling the connection of \(H\) with different gBLC symmetry classes. We start with the chiral symmetry of \(H\) with \(\mathcal{Q}=\sigma_{z}\), i.e., \(H=-\mathcal{Q}H\mathcal{Q}^{-1}\). This immediately reveals that \(H\) belongs to the symmetry P-class (with \(\epsilon_{P}=-1\) and \(\eta_{\mathcal{Q}}=1\)) of the gBLC classification.
We now turn to our unitary and anti-unitary maps, \(\mathcal{U}_{i}\) and \(\mathcal{A}_{i}\), and observe the following. For the antiunitary maps, \(\mathcal{A}_{i}=\mathcal{V}_{i}\mathcal{K}\), we find \(H=\epsilon_{\mathcal{V}_{i}}\mathcal{V}_{i}H^{T}\mathcal{V}_{i}^{-1}\) with \(\epsilon_{\mathcal{V}_{i}}=(-1)^{i+1}\), where \(i=1,2\) (refer to Eq. (16)). This corresponds to the C-type symmetry class with \(\epsilon_{C}=\epsilon_{\mathcal{V}_{i}}\) and \(\eta_{C}=\eta_{\mathcal{V}_{i}}=(-1)^{i+1}\) of the gBLC classification. Contrary to the anti-unitary maps, we observe that for complex energies the unitary maps do not correspond to any of the gBLC classes. Instead, they correspond to a 'dynamic' version of the gBLC symmetry class-Q due to the presence of the \(e^{2i\gamma}\) factor in Eq. (15). These coincide with the static definition of the gBLC symmetry class when the energies are either real or purely imaginary, as follows: (a) _for real energy eigenvalues_, \(e^{i\gamma}=\pm 1\), we find \(H=\epsilon_{\mathcal{U}_{i}}\mathcal{U}_{i}H^{\dagger}\mathcal{U}_{i}^{-1}\) with \(\epsilon_{\mathcal{U}_{i}}=(-1)^{i+1}\), which corresponds to the symmetry class-Q with \(\epsilon_{Q}=\epsilon_{\mathcal{U}_{i}}\) and \(\eta_{Q}=\eta_{\mathcal{U}_{i}}=(-1)^{i+1}\) (see footnote [65]) of the gBLC classification of random non-Hermitian matrices; (b) _for imaginary energies_, \(e^{i\gamma}=\pm i\), we find \(H=-\epsilon_{\mathcal{U}_{i}}\mathcal{U}_{i}H^{\dagger}\mathcal{U}_{i}^{\dagger}\), which corresponds to the symmetry class-Q with \(\epsilon_{Q}=-\epsilon_{\mathcal{U}_{i}}\).
Next, we look at the symmetry operations \(\mathcal{S}_{i}=\mathcal{W}_{i}\mathcal{K}\). For similar reasons as for the unitary maps, the \(\mathcal{S}_{i}\) do not correspond to any of the gBLC classes for complex energies. For the special cases of real and imaginary energies, there is an identification with the gBLC classes as follows: (a) _for real energies_, we find \(H=\epsilon_{\mathcal{W}_{i}}\mathcal{W}_{i}H^{*}\mathcal{W}_{i}^{-1}\) with \(\epsilon_{\mathcal{W}_{i}}=(-1)^{i+1}\), which corresponds to the symmetry class-K with \(\epsilon_{K}=\epsilon_{\mathcal{W}_{i}}\) and \(\eta_{K}=\eta_{\mathcal{S}_{i}}=1\); (b) _for imaginary energies_, we find \(H=-\epsilon_{\mathcal{W}_{i}}\mathcal{W}_{i}H^{*}\mathcal{W}_{i}^{\dagger}\), which corresponds to the symmetry class-K with \(\epsilon_{K}=-\epsilon_{\mathcal{W}_{i}}\).
From the above discussion, we observe that \(\epsilon_{\mathcal{V}_{1}}\) and \(\epsilon_{\mathcal{V}_{2}}\) are not independent but are related by a minus sign. This is because the corresponding unitary operators are related as \(\mathcal{V}_{2}=\mathcal{Q}\mathcal{V}_{1}\) (follows from \(\mathcal{A}_{2}=\mathcal{Q}\mathcal{A}_{1}\)). In the language of gBLC classes, this implies that \(\mathcal{V}_{2}\) is a combination of a P-type (\(\mathcal{Q}\)) and C-type (\(\mathcal{V}_{1}\)) symmetry. The result of this combination is another \(C\)-symmetry with the corresponding \(\epsilon_{\Xi}\) related as \(\epsilon_{\mathcal{V}_{2}}=\epsilon_{\mathcal{Q}\mathcal{V}_{1}}\). Similar is the case for \(\epsilon_{\mathcal{U}_{i}}\) and \(\epsilon_{\mathcal{W}_{i}}\) as \(\mathcal{U}_{2}=\mathcal{Q}\mathcal{U}_{1}\) and \(\mathcal{S}_{2}=-\mathcal{Q}\mathcal{S}_{1}\), respectively.
It can be checked that a combination of C-type and Q-type symmetry classes as \(\left(C^{-1}Q\right)^{*}\) results in a K-type symmetry with \(\epsilon_{K}=\epsilon_{\mathcal{Q}}\epsilon_{C}\). This holds true for our symmetry operations \(\mathcal{S}_{i}\) as well, since \(\mathcal{W}_{1}=\left(\mathcal{V}_{1}^{-1}\mathcal{U}_{1}\right)^{*}=\left(\mathcal{V}_{2}^{-1}\mathcal{U}_{2}\right)^{*}\) (which follows from \(\mathcal{S}_{1}=\mathcal{A}_{1}^{-1}\mathcal{U}_{1}=\mathcal{A}_{2}^{-1}\mathcal{U}_{2}\)) and \(\mathcal{W}_{2}=\left(\mathcal{V}_{1}^{-1}\mathcal{U}_{2}\right)^{*}=\left(\mathcal{V}_{2}^{-1}\mathcal{U}_{1}\right)^{*}\).
## Appendix B Examples
### Chiral Hamiltonian
We consider a class of traceless Hamiltonians \(h\) defined as
\[h=\left(\begin{array}{cc}d_{3}&d_{1}H\\ d_{2}H^{\dagger}&-d_{3}\end{array}\right), \tag{30}\]
in which \(\{H,H^{\dagger}\}=\mathbb{I}\) and \(d_{i}\in\mathbb{C}\). As a special case, we consider a chiral Hermitian Hamiltonian with \(d_{3}=0\) and \(d_{1}=d_{2}^{*}=1\); the formalism can easily be generalized to the NH case. \(h\) anti-commutes with the unitary chiral operator \(\mathcal{Q}=\sigma_{z}\otimes I\), where \(I\) is a unit matrix of the same dimension as \(H\). We encounter such Hamiltonians in quantum condensed matter systems with low-energy Dirac dispersion. Such Hermitian chiral Hamiltonians also find importance in the context of non-Hermitian systems, as any NH Hamiltonian with a point gap can be mapped to such a Hamiltonian by a similarity transformation. The chiral symmetry ensures that all energy eigenvalues \(e\) come in 'particle-hole' pairs \(\pm e\). Now, solving the eigenvalue problem \(h|e\rangle=e|e\rangle\), we get \(e_{1}^{\pm}=\pm\sqrt{f}\) and \(e_{2}^{\pm}=\pm\sqrt{1-f}\) as the four eigenvalues. For the eigenvectors, we get spinors of the form,
\[\left|e_{1}^{\pm}\right\rangle =\frac{1}{\sqrt{2}}\left(\begin{array}{c}|1-f\rangle\\ \pm e^{-i(\phi+\gamma)}|f\rangle\end{array}\right), \tag{31}\] \[\left|e_{2}^{\pm}\right\rangle =\frac{1}{\sqrt{2}}\left(\begin{array}{c}|f\rangle\\ \pm e^{i(\phi-\gamma)}|1-f\rangle\end{array}\right). \tag{32}\]
We observe that the exceptional and normal points of \(H\) translate into interesting level crossings and degeneracies in the Hilbert space of the chiral Hermitian Hamiltonian \(h\) (see Fig. 6). In particular, we observe two two-fold degeneracies at the exceptional points: (i) at \(f=0\) the states \(|e_{1}^{\pm}\rangle\) become degenerate at \(e_{1}=0\), while (ii) at \(f=1\) the states \(|e_{2}^{\pm}\rangle\) become degenerate at \(e_{2}=0\). At the normal point (\(f=\frac{1}{2}\)), on the other hand, we observe two level crossings: (i) \(|e_{1,2}^{+}\rangle\) become a degenerate pair with energy \(e_{1,2}^{+}=\frac{1}{\sqrt{2}}\), and (ii) \(|e_{1,2}^{-}\rangle\) become a degenerate pair with energy \(e_{1,2}^{-}=-\frac{1}{\sqrt{2}}\).
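The spectrum \(\pm\sqrt{f},\pm\sqrt{1-f}\) follows directly from \(h^{2}=\mathrm{diag}(HH^{\dagger},H^{\dagger}H)\), and can be verified with a short sketch (Python/NumPy; illustrative parameter values):

```python
import numpy as np

f, gamma, phi = 0.7, 0.4, 1.1
c1 = np.sqrt(f) * np.exp(1j * (gamma + phi))
c2 = np.sqrt(1 - f) * np.exp(1j * (gamma - phi))
H = np.array([[0, c2], [c1, 0]])               # satisfies {H, H^dag} = I

Z = np.zeros((2, 2))
h = np.block([[Z, H], [H.conj().T, Z]])        # chiral Hermitian h, Eq. (30)

e = np.sort(np.linalg.eigvalsh(h))
print(e)                                       # eigenvalues of h
print(np.sort([-np.sqrt(f), -np.sqrt(1 - f),
               np.sqrt(1 - f), np.sqrt(f)]))   # matches +-sqrt(f), +-sqrt(1-f)
```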
Previously, such a chiral operator was introduced as an auxiliary Hermitian operator to analyze the component non-Hermitian Hamiltonian (\(H\)) of interest [66]. An ensemble of such auxiliary operators was studied in the field of random NH matrix theory, but was not solved for the energy eigenvalues and eigenvectors.
### \(4d\) - Hilbert space
Here we consider an example of a 4-dimensional NH Hamiltonian. To simplify the discussion, we consider cases where the \(D=\{H,H^{\dagger}\}\) operator is four-fold degenerate. This is obtained when \(H\) can be expanded in terms of five \(\Gamma_{\mu}\)-matrices which satisfy \(\{\Gamma_{\mu},\Gamma_{\nu}\}=2\delta_{\mu\nu}\) for \(\mu,\nu=1,\dots,5\). A set of such \(\Gamma_{\mu}\) matrices is \(\Gamma_{1,2}=\tau_{3}\otimes\sigma_{1,2}\), \(\Gamma_{3,4}=\tau_{1,2}\otimes\sigma_{0}\), and \(\Gamma_{5}=\tau_{3}\otimes\sigma_{3}\), where \(\tau_{\mu}\) and \(\sigma_{\mu}\) are \(2\times 2\) Pauli matrices and \(\sigma_{0}\) is the \(2\times 2\) identity matrix. Expressing a traceless Hamiltonian as \(H=\sum_{\mu=1}^{5}h_{\mu}\Gamma_{\mu}\), where \(h_{\mu}\in\mathbb{C}\), we find \(D=d\mathbb{I}_{4\times 4}\), where \(d=2|h|^{2}=1\) (set by normalization), and \(F=\left[\frac{1}{2}+i\sum_{\mu,\nu>\mu}\mathfrak{f}_{\mu\nu}\Gamma_{\mu} \Gamma_{\nu}\right]\), where \(\mathfrak{f}_{\mu\nu}=2\mathrm{Im}(h_{\mu}^{*}h_{\nu})\in\mathbb{R}\) as in Sec. IV.1.
We take the same example as in Sec. IV.2, with \(h_{1}=\frac{1}{\sqrt{2}}\sin\alpha\cos\beta\), \(h_{2}=\frac{1}{\sqrt{2}}\sin\alpha\sin\beta\), and \(h_{3}=i\frac{1}{\sqrt{2}}\cos\alpha\), where \(0\leq\alpha\leq\pi\) and \(0\leq\beta\leq 2\pi\). Then the only surviving components of \(\mathfrak{f}_{\mu\nu}\) are \(\mathfrak{f}_{13}=\frac{1}{2}\sin 2\alpha\cos\beta\) and \(\mathfrak{f}_{23}=\frac{1}{2}\sin 2\alpha\sin\beta\), and we define \(|\mathfrak{f}|=\sqrt{\mathfrak{f}_{13}^{2}+\mathfrak{f}_{23}^{2}}=\frac{1}{2} |\sin 2\alpha|\). This gives \(f=\frac{1}{2}+|\mathfrak{f}|\) and \(1-f=\frac{1}{2}-|\mathfrak{f}|\), and the corresponding degenerate eigenvectors are
\[\left|f\right\rangle_{1},\left|1-f\right\rangle_{1}=\frac{1}{\sqrt{2}}\begin{pmatrix} \pm i\nu_{\alpha}e^{-i\beta}\\ 0\\ 0\\ 1\end{pmatrix}, \tag{33}\]
\[\left|f\right\rangle_{2},\left|1-f\right\rangle_{2}=\frac{1}{\sqrt{2}}\begin{pmatrix} 0\\ \pm i\nu_{\alpha}e^{i\beta}\\ 1\\ 0\end{pmatrix}, \tag{34}\]
where \(\nu_{\alpha}=\text{sgn}\left[\sin 2\alpha\right]\). In this computational basis, the Hamiltonian becomes off-block diagonal, \(H=\begin{pmatrix}0&B\\ A&0\end{pmatrix}\), where \(A,B=-\sin\left[\alpha\pm\nu_{\alpha}\pi/4\right]\left[\cos\beta\sigma_{x}-\sin \beta\sigma_{y}\right]\), both being Hermitian. Equating \(A/B=|a|^{2}e^{2i\gamma}\), we get \(e^{i\gamma}=i\sqrt{\text{sgn}[\cos 2\alpha]}\). Therefore \(\gamma\) is independent of the band index and hence we have a point degeneracy.

Figure 6: Four energy levels of the chiral Hermitian Hamiltonian in Eq. (30) are plotted as a function of \(f\). We find that at the exceptional points for \(H\), i.e., at \(f=0,1\), we have a degeneracy in \(h\) at \(e_{1,2}=0\) between the eigenstates \(|e_{1}^{\pm}\rangle\) and \(|e_{2}^{\pm}\rangle\), respectively. On the contrary, at the normal point of \(H\), \(h\) also has a degeneracy, but at finite energies, where level crossings occur between the eigenstates \(e_{1,2}^{+}\) and \(e_{1,2}^{-}\).

Here \(\psi_{A}^{n}\) are eigenstates of \(A\) with eigenvalues \(\sqrt{f}e^{i(\gamma+\phi^{n})}\) given by
\[\psi_{A}^{1,2}=\frac{1}{\sqrt{2}}\begin{pmatrix}1\\ \pm e^{-i\beta}\end{pmatrix}. \tag{35}\]
This gives \(\phi^{n}=-\gamma+n\pi\), \(\phi_{i}^{n}=\phi^{n}\), and \(\psi_{B}^{n}=|a|e^{i\phi^{n}}\psi_{A}^{n}\). Substituting them in Eq. (24), we obtain the degenerate energy eigenstates \(|E_{\pm}^{1,2}\rangle\) with the eigenvalues \(E_{\pm}^{1}=E_{\pm}^{2}=\pm i\sqrt{\cos 2\alpha/2}\).
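As a sanity check, the anticommuting \(\Gamma_{\mu}\), the degeneracy of \(D\), and the doubly degenerate eigenvalues \(E_{\pm}=\pm i\sqrt{\cos 2\alpha/2}\) can all be verified numerically. The following sketch (ours, with illustrative values of \(\alpha\) and \(\beta\)) does so:

```
import numpy as np

# Sketch: build H = sum_mu h_mu Gamma_mu and verify D = I and the two-fold
# degenerate eigenvalues E = +/- i sqrt(cos(2 alpha)/2). Values of alpha, beta
# are illustrative.
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
G = [np.kron(sz, sx), np.kron(sz, sy), np.kron(sx, s0),
     np.kron(sy, s0), np.kron(sz, sz)]                    # Gamma_1 ... Gamma_5

alpha, beta = 0.4, 1.1
h = [np.sin(alpha) * np.cos(beta) / np.sqrt(2),
     np.sin(alpha) * np.sin(beta) / np.sqrt(2),
     1j * np.cos(alpha) / np.sqrt(2), 0.0, 0.0]
H = sum(c * g for c, g in zip(h, G))

D = H @ H.conj().T + H.conj().T @ H
print(np.allclose(D, np.eye(4)))                # True: D = 2|h|^2 I = I
print(np.round(np.linalg.eigvals(H), 6))        # two copies of +/- i sqrt(cos(2a)/2)
print(1j * np.sqrt(np.cos(2 * alpha) / 2))
```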
### \(3d\)-Hilbert space
To demonstrate the existence of a flat-amplitude energy level, we consider an example of a 3-dimensional NH Hamiltonian. Such a Hamiltonian may be suitably expressed in the three-dimensional representation of the Pauli matrices, or in a subset of the Gell-Mann matrices satisfying \(\{\gamma_{\mu},\gamma_{\nu}\}=2\delta_{\mu,\nu}\), such that the \(D\) operator is three-fold degenerate. We consider the example
\[H=\frac{1}{2\sqrt{2}}\begin{pmatrix}0&2\cos\kappa&2\cos\kappa\\ 2\sin\kappa&1&-1\\ 2\sin\kappa&-1&1\end{pmatrix}, \tag{36}\]
where \(0\leq\kappa\leq 2\pi\). The Hamiltonian is suitably normalized such that \(\left\{H,H^{\dagger}\right\}=\mathbb{I}\). As expected, one of the eigenvalues of \(F\) lies at the normal point, i.e., \(f_{0}=1/2\), while the other two are at \(f=\sin^{2}\kappa\) and \(1-f=\cos^{2}\kappa\). The corresponding eigenvectors of \(F\) are,
\[\left|\frac{1}{2}\right\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\ 1\\ -1\end{pmatrix},|f\rangle=\begin{pmatrix}1\\ 0\\ 0\end{pmatrix},|1-f\rangle=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\ 1\\ 1\end{pmatrix}. \tag{37}\]
As discussed in Sec. V, \(\left|\frac{1}{2}\right\rangle\) is the simultaneous eigenstate of \(F\) and \(H\), with \(e^{i\gamma_{0}}=1\) for this Hamiltonian. For the \(|f\rangle\) and \(|1-f\rangle\) states, we use Eq. (1) to find \(\sqrt{f}e^{i(\gamma+\phi)}=\sin\kappa\) and \(\sqrt{1-f}e^{i(\gamma-\phi)}=\cos\kappa\). Substituting these, we get the particle-hole eigenvalues \(E_{\pm}=\pm\sqrt{\sin 2\kappa/2}\), with \(e^{i\gamma}=\sqrt{\text{sgn}(\sin 2\kappa)}\). The expansion coefficient is \(a=\sqrt{\tan\kappa}\) with \(e^{i\phi}=\sqrt{\text{sgn}(\tan\kappa)}\).
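A quick numerical check of this example (our own sketch; we take \(F=H^{\dagger}H\), which is an assumption consistent with the quoted eigenvectors):

```
import numpy as np

# Sketch: verify the F-spectrum {sin^2 k, 1/2, cos^2 k} and the H-spectrum
# {1/sqrt(2), +/- sqrt(sin(2k)/2)} for the 3d example (k is illustrative).
k = 0.7
c, s = np.cos(k), np.sin(k)
H = np.array([[0, 2*c, 2*c],
              [2*s, 1, -1],
              [2*s, -1, 1]], dtype=complex) / (2 * np.sqrt(2))

print(np.allclose(H @ H.conj().T + H.conj().T @ H, np.eye(3)))  # {H, H^dag} = I
F = H.conj().T @ H                 # our assumption for F, matching the eigenvectors
print(np.sort(np.linalg.eigvalsh(F)))        # [sin^2 k, 1/2, cos^2 k]
print(np.round(np.linalg.eigvals(H), 5))     # {1/sqrt(2), +/- sqrt(sin(2k)/2)}
print(1 / np.sqrt(2), np.sqrt(np.sin(2*k) / 2))
```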
|
2303.07627 | Best arm identification in rare events | We consider the best arm identification problem in the stochastic multi-armed
bandit framework where each arm has a tiny probability of realizing large
rewards while with overwhelming probability the reward is zero. A key
application of this framework is in online advertising where click rates of
advertisements could be a fraction of a single percent and final conversion to
sales, while highly profitable, may again be a small fraction of the click
rates. Lately, algorithms for BAI problems have been developed that minimise
sample complexity while providing statistical guarantees on the correct arm
selection. As we observe, these algorithms can be computationally prohibitive.
We exploit the fact that the reward process for each arm is well approximated
by a Compound Poisson process to arrive at algorithms that are faster, with a
small increase in sample complexity. We analyze the problem in an asymptotic
regime as rarity of reward occurrence reduces to zero, and reward amounts
increase to infinity. This helps illustrate the benefits of the proposed
algorithm. It also sheds light on the underlying structure of the optimal BAI
algorithms in the rare event setting. | Anirban Bhattacharjee, Sushant Vijayan, Sandeep K Juneja | 2023-03-14T04:51:24Z | http://arxiv.org/abs/2303.07627v1 | # Best arm identification in rare events
###### Abstract
We consider the best arm identification problem in the stochastic multi-armed bandit framework where each arm has a tiny probability of realizing large rewards while with overwhelming probability the reward is zero. A key application of this framework is in online advertising where click rates of advertisements could be a fraction of a single percent and final conversion to sales, while highly profitable, may again be a small fraction of the click rates. Lately, algorithms for BAI problems have been developed that minimise sample complexity while providing statistical guarantees on the correct arm selection. As we observe, these algorithms can be computationally prohibitive. We exploit the fact that the reward process for each arm is well approximated by a Compound Poisson process to arrive at algorithms that are faster, with a small increase in sample complexity. We analyze the problem in an asymptotic regime as rarity of reward occurrence reduces to zero, and reward amounts increase to infinity. This helps illustrate the benefits of the proposed algorithm. It also sheds light on the underlying structure of the optimal BAI algorithms in the rare event setting.
## 1 Introduction
Online advertising is ubiquitous in present times, and is used by e-commerce platforms, mobile application developers, marketing professionals etc. Typically, an online advertiser has to decide amongst various product advertisements and choose the one with the highest expected reward. Advertisers typically have a period of experimentation where they sequentially show competing advertisements to the users to arrive at advertisements that elicit the best response from each customer type (customers may be clustered based on available information).
A key feature of online advertising is that while each advertisement may be shown to a large number of customers, the click rates on advertisements are usually small. Typically, these may be of order one in a thousand 1, and a very small percentage 2 of the users who click on an advertisement end up buying the product (known as the conversion rate). The conversion and click rates can vary significantly depending on the product category. For example, high-end products often have higher click rates but much lower conversion rates compared to standard products. Thus, a key characteristic of the problem is that rarer conversions often carry very high rewards.
Footnote 1: [https://cxl.com/guides/click-through-rate/benchmarks/](https://cxl.com/guides/click-through-rate/benchmarks/)
Footnote 2: [https://localiq.com/blog/search-advertising-benchmarks/](https://localiq.com/blog/search-advertising-benchmarks/).
We study the problem of identifying the best advertisement to show to a customer type as a best arm identification (BAI) problem in the multi-armed bandit framework. The rarity of the reward probabilities, and the fact that advertisements are shown to a large number of customers, may make the computational effort of popular existing adaptive algorithms prohibitive. On the other hand, these properties call for sensible aggregation-based algorithms. In this paper, we observe that the rewards from a large number of pulls from each arm can be well modelled as a Compound Poisson process, significantly simplifying and speeding up the existing _optimal_ algorithms.
To illustrate the proposed ideas clearly, we consider a simple stochastic BAI problem where the agent is given a set of \(K\) unknown probability distributions (arms) that can be sampled sequentially. The agent's objective is to declare the arm with the highest mean with a pre-specified confidence level \(1-\delta\), while minimizing the expected number of samples (sampling complexity). In the literature, this is popularly known as the fixed-confidence setting, and algorithms that provide \(1-\delta\) confidence guarantees are referred to as \(\delta\)-correct.
Best arm identification problems are also popular in simulation community where these are better known as ranking and selection problems (for example see Goldsman (1983); Chan and Lai (2006)). Classical problem involves many complex simulation models of practical systems such as supply chain design, traffic network and so on, and the aim is to identify with high probability, the system with the highest expected reward, using minimum computational budget. In many systems, the performance measure of interest may correspond to a rare event, e.g., a manufacturing plant shut down probability, or computer system unavailability fraction. The algorithms that we propose here are also applicable in optimal computational resource allocation in simulating such systems.
**Related literature:** In the learning theory literature, Even-Dar et al. (2006) were amongst the first to consider the fixed confidence BAI problem. They proposed a successive elimination algorithm (see Section F of the supplementary material). Upper Confidence Bound (UCB) based algorithms were proposed in Auer et al. (2002); Jamieson et al. (2014), wherein the arm with the highest confidence index is sampled. These algorithms usually stop when the difference between arm indices breaches a certain threshold (see Jamieson and Nowak (2014) for more details). The sample complexity of these algorithms was shown to match the lower bound within a constant. Motivated by the Bayesian approaches in Russo (2016), Jourdan et al. (2022) propose top-two algorithms that pit a challenger against the current empirical best arm and sample between the challenger and the empirical best arm with a pre-defined probability \(\beta\). Although these algorithms are \(\beta\)-optimal 3, they are not known to be asymptotically optimal in the sense defined in Garivier and Kaufmann (2016). The sample complexity of these algorithms is typically analyzed in an asymptotic regime where \(\delta\to 0\). Garivier and Kaufmann (2016) and Kaufmann et al. (2016) derived a more general lower bound (as a maxmin formulation) on the sample complexity. Based on this lower bound, a Track-and-Stop (TS) algorithm was proposed for arm distributions restricted to single parameter exponential families (SPEF), and was shown to match the lower bound even to a constant (as \(\delta\to 0\)). Agrawal et al. (2019, 2020) extended the TS algorithms to more general distributions. The optimal TS algorithms in the literature proceed iteratively. At each iteration, the observed empirical parameters are plugged into the lower bound max-min problem to arrive at prescriptive optimal sample allocations to each arm, which then guide the sample allocations. As is known, and as we observe, these algorithms are computationally prohibitive, especially since in our rare advertising settings the informative non-zero reward samples (those instances where users buy products) are rare. This motivates the paper's goal of arriving at computationally efficient algorithms that exploit the Compound Poisson structure of the arm reward process, with a small increase in sample complexity.
Footnote 3: see Jourdan et al. (2022) for definition
**Contributions:** We develop a rarity framework where the reward success probabilities are modelled as functions of \(\gamma^{\alpha}\) for arm-dependent \(\alpha>0\), where \(\gamma>0\) is small. The rewards are modelled to be of order \(\gamma^{-\alpha}\) so that the expected rewards across arms are comparable (otherwise, we would know a priori which arms have small or large expected rewards). We assume that arm-specific upper bounds on rewards are available to us. In this framework, we propose a computationally efficient \(\delta\)-correct algorithm that is nearly asymptotically optimal for small \(\gamma\). This algorithm (Approximate Track and Stop) is based on existing track and stop algorithms that are simplified through a Compound Poisson approximation to the bandit reward process. The Poisson approximation can be seen to be tight as \(\gamma\to 0\), and we provide bounds on the deviations due to the Poisson approximation. Further, we give an asymptotically valid upper bound on the sample complexity, illustrating that the increase in sample complexity is marginal compared to the computational benefit. The rarity structure helps us shed further light on the optimal sample allocations across arms in our BAI problem. We identify five different regimes depending on the rarity differences between the arms. Finally, we compare experimentally with the TS algorithm in Agrawal et al. (2020) for bounded random rewards. We find that for realistic rare event probabilities and reward structures, our algorithm is 6-12 times faster than the TS algorithm with a small increase (1-13 %) in sample complexity.
The rest of the paper is organized as follows: Section 2 formally introduces the problem and the rare event setting, and provides some background material. Section 3 introduces the approximate problem, analyzes its deviations from the exact problem and gives the optimal weight asymptotics. Section 4 outlines the details of the Approximate Track and Stop (TS(A)) algorithm, its \(\delta\)-correctness, sample complexity guarantee and the computational benefits of the algorithm. Section 5 presents some experimental results and we conclude in Section 6. The proofs of various results and further technical details are furnished in the supplementary material.
## 2 Modelling Framework
Consider a \(K\)-armed bandit with each arm's distribution denoted by \(p_{i}\), \(i\in[K]\). We denote such a bandit instance by \(p\). For any distribution \(\eta\), let \(\mu(\eta)\) denote its mean and \(\text{supp}(\eta)\) denote its support. Further, let \(KL(\eta,\kappa)=\mathbb{E}_{\eta}\log\Big{(}\frac{d\eta}{d\kappa}\Big{)}\) denote the Kullback-Leibler divergence between two measures \(\eta\) and \(\kappa\), where \(\mathbb{E}_{\eta}\) denotes the expectation operator under \(\eta\). We assume that \(\text{supp}(p_{i})\) is finite for each \(i\); this set may not be known to the agent. However, \(\text{supp}(p_{i})\) admits lower and upper bounds, and the upper bound \(B_{i}\) is known to the agent. The agent's goal is to sequentially sample from these arms using a policy that, at any sequential step \(t\), may depend upon all the data generated before time \(t\). The policy then stops at a random stopping time and declares an arm that it considers to have the highest mean. A sampling strategy, a stopping rule and a recommendation rule are together called a best arm bandit algorithm.
This BAI problem has been well studied, and lower bounds on the sample complexity of \(\delta\)-correct algorithms have been developed, along with algorithms that match the lower bound asymptotically as \(\delta\to 0\). Below, we first state the lower bound, and then briefly outline an algorithm that asymptotically matches it. The lower bounds were developed by Garivier and Kaufmann (2016) for the single parameter exponential family of distributions and were generalized to bounded and heavy-tailed distributions by Agrawal et al. (2020). Let
\[\mathcal{K}^{L,B}_{inf}(\eta,x):=\min_{\begin{subarray}{c}\text{supp}(\kappa) \subseteq[0,B]\\ \mu(\kappa)\leq x\end{subarray}}KL(\eta,\kappa) \tag{1}\]
\[\mathcal{K}^{U,B}_{inf}(\eta,x):=\min_{\begin{subarray}{c}\text{supp}(\kappa) \subseteq[0,B]\\ \mu(\kappa)\geq x\end{subarray}}KL(\eta,\kappa). \tag{2}\]
Henceforth, we suppress the dependence on \(B\) above to ease the presentation; this should not cause confusion in the following discussion. For brevity, we denote \(\mu_{p_{i}}\) by \(\mu_{i}\) for each \(i\in[K]\). As is customary in the BAI literature, we assume that the best arm is unique and, without loss of generality, \(\mu_{1}>\mu_{i}\) for \(i\in[K]\backslash\{1\}\).
**Theorem 5** (in Agrawal et al. (2020)): _For our bandit problem, any \(\delta\)-correct algorithm with stopping rule \(\tau_{\delta}\), satisfies_
\[\mathbb{E}[\tau_{\delta}]\geq\frac{1}{V^{*}(p)}\log\Big{(}\frac{1}{2.4\delta} \Big{)},\]
_where \(V^{*}(p)\) equals_
\[\max_{w\in\Sigma_{K}}\min_{i\neq 1}\inf_{x\in[\mu_{i},\mu_{1}]}w_{1}\mathcal{ K}^{L}_{inf}(p_{1},x)+w_{i}\mathcal{K}^{U}_{inf}(p_{i},x), \tag{3}\]
\(\Sigma_{K}\) _being the \(K\)-dimensional probability simplex._
Optimal track and stop (TS) algorithms in the literature that match the lower bound asymptotically as \(\delta\to 0\) involve the following features (see Garivier and Kaufmann (2016), Agrawal et al. (2020), and Agrawal et al. (2021) for details and justification of such track and stop algorithms; we also discuss existing algorithms further in Section F of the supplementary material).
1. Arms are sampled sequentially in batches. At stage \(t\), each arm is sampled at least order \(\sqrt{t}\) times (this sub linear exploration ensures that no arm is starved).
2. Empirical distributions \(\hat{p}_{t}\) are plugged into the lower bound that is solved to determine the prescriptive proportions \(\hat{w}_{t}\).
3. The algorithm then samples to closely track these proportions.
4. The algorithm stops when the log-likelihood ratio at stage \(m\) exceeds a threshold \(\beta(m,\delta)\) (set close to \(\log(1/\delta)\)). At stage \(m\), the log-likelihood ratio equals \[\min_{b\neq k^{*}}\inf_{x\leq y}N_{k^{*}}(m)\mathcal{K}^{L}_{inf}(\hat{p}_{k^{*}}( m),x)+N_{b}(m)\mathcal{K}^{U}_{inf}(\hat{p}_{b}(m),y),\]
where \(k^{*}\) denotes the arm with the largest sample mean, each \(N_{a}(m)\) denotes the samples of arm \(a\) amongst \(m\) samples.
As is apparent, the above algorithm involves repeatedly solving the lower bound problem, and this is computationally demanding, particularly when nonzero rewards are rare and occur with very low probabilities.
### The Rare Event Setting
We now specialize the BAI setting to illustrate our rare event framework where the rewards from each arm take positive values with small probabilities. Further, while the expected rewards across arms are of the same order, the realized rewards and the associated probabilities may be substantially different.
Concretely, suppose that \(\gamma\) is a small positive value (say of order \(10^{-2}\) or lower) and corresponding to each arm distribution \(p_{i}\), we have a rarity index \(\alpha_{i}>0\). The support of arm \(i\) takes values \(a_{ij}\gamma^{-\alpha_{i}}\), each with probability \(p_{ij}\gamma^{\alpha_{i}}>0\) for \(j\leq n_{i}<\infty\). Under each \(p_{i}\), the realized reward takes value zero with probability close to 1. To summarize,
\[\mathbb{P}_{X\sim p_{i}}(X=a_{ij}\gamma^{-\alpha_{i}})=p_{ij} \gamma^{\alpha_{i}},\ j\in[n_{i}]\] \[\mathbb{P}_{X\sim p_{i}}(X=0)=1-\sum_{j}p_{ij}\gamma^{\alpha_{i}}.\]
The arm means are given by \(\mu_{i}=\sum_{j}a_{ij}p_{ij}\) and are independent of \(\gamma\). We further assume that an upper bound \(B_{i}\gamma^{-\alpha_{i}}\) for each arm \(i\) is known to the agent.
The above rarity framework brings out the benefits of the proposed approximations cleanly for small \(\gamma\) in our theoretical analysis. However, in executing the associated algorithm, we don't need to separately know the values of \(\gamma\) and each \(\alpha_{i}\).
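To make the scaling concrete, the following minimal sketch (our own, with illustrative parameters) samples rewards from a single rare-event arm and confirms that the arm mean \(\mu_{i}=\sum_{j}a_{ij}p_{ij}\) is independent of \(\gamma\):

```
import numpy as np

# Sketch: sampling from one rare-event arm. Non-zero rewards a_j * gamma**(-alpha)
# occur with probability p_j * gamma**alpha, so the mean stays at sum_j a_j p_j.
rng = np.random.default_rng(0)
gamma, alpha = 1e-3, 1.0
a = np.array([2.0, 5.0])              # a_{ij} (illustrative)
p = np.array([0.10, 0.06])            # p_{ij} (illustrative)

probs = np.append(p * gamma**alpha, 1 - np.sum(p * gamma**alpha))
vals = np.append(a / gamma**alpha, 0.0)
samples = rng.choice(vals, size=2_000_000, p=probs)
print(samples.mean(), np.dot(a, p))   # both close to mu_i = 0.5
```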
### The Poisson Approximation of KL Divergence
We motivate in this section the approximate form of KL divergence that we shall use. The following well-known result, shown in section A.5 of the supplementary material for completeness, is used to motivate our approximation.
**Proposition 1**.: _Let \(\tau_{ij}^{(1)}\) denote the minimum number of samples of arm \(i\) needed to see the reward \(a_{ij}\gamma^{-\alpha_{i}}\), i.e., the first arrival time of the support point \(j\). Similarly, let \(\tau_{ij}^{(k)}\) be the \(k\)-th arrival time of support point \(j\)._
_Let \(N_{ij}(t)\) be the number of times the reward \(a_{ij}\gamma^{-\alpha_{i}}\) is returned by arm \(i\) in \(\lceil t\gamma^{-\alpha_{i}}\rceil\) trials (\(t\in\mathbb{R}\)). Then as \(\gamma\to 0\),_
* \(\mathbb{P}(\tau_{ij}^{(k)}>t\gamma^{-\alpha_{i}})\to e^{-p_{ij}t}\)_,_
* \(N_{ij}(t)\xrightarrow{D}\mathrm{Poisson}(p_{ij}t)\)_._
_Further for all support points, \(\{\mathrm{Poisson}(p_{ij}t)\}_{j}\) is a collection of mutually independent random variables._
This implies that in the rare event setting, the distribution of the counting process \(N_{ij}(t)\) for each support point \(a_{ij}\gamma^{-\alpha_{i}}\) is well-approximated by a Poisson process. We now argue that when \(\gamma\) is small enough, the KL divergence between arm distributions \(p_{i}\) and \(\tilde{p}_{i}\) of the same rarity can be approximated by a sum of KL divergences between independent Poisson variables.
Let \(X_{1:m}\) and \(\tilde{X}_{1:m}\) be two sets of i.i.d samples of size \(m\) from \(p_{i}\) and \(\tilde{p}_{i}\) respectively. The corresponding measures are the product measures \(p_{i}^{\otimes m}\) and \(\tilde{p}_{i}^{\otimes m}\) respectively. By the tensorization property of KL-divergence, we have that
\[KL\big{(}p_{i}^{\otimes m},\tilde{p}_{i}^{\otimes m}\big{)}=mKL(p_{i},\tilde{ p}_{i}) \tag{4}\]
In the following discussion we set \(m=\lceil t\gamma^{-\alpha_{i}}\rceil\). Consider the vector-valued random variable \((N_{ij}(t))_{j\in[n_{i}]}\) and its counterpart \((\tilde{N}_{ij}(t))_{j\in[n_{i}]}\) under \(\tilde{p}_{i}\). Note that they are functions of the samples \(X_{1:\lceil t\gamma^{-\alpha_{i}}\rceil},\tilde{X}_{1:\lceil t\gamma^{-\alpha_{i}}\rceil}\). Since we can also reconstruct a permutation of these samples from \((N_{ij}(t))_{j}\) and \((\tilde{N}_{ij}(t))_{j}\), we have that
\[KL\big{(}p_{i}^{\otimes\lceil t\gamma^{-\alpha_{i}}\rceil},\tilde{p}_{i}^{ \otimes\lceil t\gamma^{-\alpha_{i}}\rceil}\big{)}=KL\big{(}\nu((N_{ij}(t))_{ j}),\nu((\tilde{N}_{ij}(t))_{j})\big{)}\]
where \(\nu(A)\) is the measure of a random variable \(A\). Now, it can easily be shown from Proposition 1 that
\[KL\big{(}p_{i}^{\otimes\lceil t\gamma^{-\alpha_{i}}\rceil},\tilde{p}_{i}^{ \otimes\lceil t\gamma^{-\alpha_{i}}\rceil}\big{)}\] \[\approx \sum_{j}KL\big{(}\text{Poisson}(p_{ij}t),\text{Poisson}(\tilde{p} _{ij}t)\big{)}\] \[= t\bigg{[}\sum_{j}p_{ij}\log\Big{(}\frac{p_{ij}}{\tilde{p}_{ij}} \Big{)}+(\tilde{p}_{ij}-p_{ij})\bigg{]}.\]
for \(\gamma\) small enough. Then, combining the approximation above with the relation (4) gives
\[KL(p_{i},\tilde{p}_{i})\approx\gamma^{\alpha_{i}}\bigg{[}\sum_{j}p_{ij}\log \Big{(}\frac{p_{ij}}{\tilde{p}_{ij}}\Big{)}+(\tilde{p}_{ij}-p_{ij})\bigg{]}. \tag{5}\]
This approximation is used to motivate the approximate lower bound problem in the next section.
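The quality of this approximation is easy to probe numerically. The sketch below (our own, with illustrative \(p_{ij}\) and \(\tilde{p}_{ij}\)) compares the exact KL divergence between two same-rarity arms with the right-hand side of (5) as \(\gamma\) decreases:

```
import numpy as np

# Sketch: exact KL between two rare-event arms vs. the Poisson approximation (5).
p  = np.array([0.3, 0.5])     # p_{ij}: non-zero rewards occur w.p. p * gamma**alpha
pt = np.array([0.4, 0.6])     # tilde p_{ij}
alpha = 1.0

for gamma in [1e-1, 1e-2, 1e-3]:
    s, st = p * gamma**alpha, pt * gamma**alpha
    exact = (np.sum(s * np.log(p / pt))
             + (1 - s.sum()) * np.log((1 - s.sum()) / (1 - st.sum())))
    approx = gamma**alpha * (np.sum(p * np.log(p / pt)) + np.sum(pt - p))
    print(gamma, exact / approx)      # ratio tends to 1 as gamma -> 0
```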
## 3 Approximate Lower Bound Problem
For each \(i\), if \(B_{i}\notin\text{supp}(p_{i})\), let \(\tilde{n}_{i}=n_{i}+1\) and set \(a_{i\tilde{n}_{i}}=B_{i}\), else \(\tilde{n}_{i}=n_{i}\). The Poisson approximation of the KL divergence (see section 2.2) suggests that in lieu of equation (3), which is computationally expensive to solve, one could consider the following approximate problem when the rarity \(\gamma\) is small (the summations over \(j\) below correspond to \(j\in[\tilde{n}_{i}]\)).
\[V_{a}^{*}(p)\coloneqq\max_{w\in\Sigma_{K}}\min_{i\neq 1}\inf_{ \begin{subarray}{c}\sum_{j}a_{ij}\tilde{p}_{ij}\geq\\ \sum_{j}a_{1j}\tilde{p}_{1j}\end{subarray}}\bigg{\{}w_{1}\gamma^{\alpha_{1}}\bigg{[} \sum_{j}p_{1j}\log\Big{(}\frac{p_{1j}}{\tilde{p}_{1j}}\Big{)}+(\tilde{p}_{1j} -p_{1j})\bigg{]}+w_{i}\gamma^{\alpha_{i}}\bigg{[}\sum_{j}p_{ij}\log\Big{(} \frac{p_{ij}}{\tilde{p}_{ij}}\Big{)}+(\tilde{p}_{ij}-p_{ij})\bigg{]}\bigg{\}}. \tag{6}\]
The minimization in 3 will now be replaced with the approximation in 5. Above, instead of allowing \(\tilde{p}_{i}\) to have the support \([0,B_{i}\gamma^{-\alpha_{i}}]\), we limited its support to that of \(p_{i}\) extended to allow point \(B_{i}\gamma^{-\alpha_{i}}\). This is justified in Sections A.1-A.2 of the supplementary material.
Let
\[\mathcal{P}_{i}\coloneqq\inf_{x\in[\mu_{i},\mu_{1}]}w_{1}\mathcal{K}_{inf}^{L} (p_{1},x)+w_{i}\mathcal{K}_{inf}^{U}(p_{i},x) \tag{7}\]
denote the inner minimisation problem in (3) and let
\[\mathcal{P}_{i,a}\coloneqq\inf_{\begin{subarray}{c}\sum_{j}a_{ij}\tilde{p}_{ ij}\geq\\ \sum_{j}a_{1j}\tilde{p}_{1j}\end{subarray}}w_{1}\gamma^{\alpha_{1}}\bigg{[} \sum_{j}p_{1j}\log\Big{(}\frac{p_{1j}}{\tilde{p}_{1j}}\Big{)}+(\tilde{p}_{1j} -p_{1j})\bigg{]}+w_{i}\gamma^{\alpha_{i}}\bigg{[}\sum_{j}p_{ij}\log\Big{(} \frac{p_{ij}}{\tilde{p}_{ij}}\Big{)}+(\tilde{p}_{ij}-p_{ij})\bigg{]} \tag{8}\]
denote its approximation (above, we suppress the dependence on \(w_{1}\) and \(w_{i}\) of \(\mathcal{P}_{i}\) and \(\mathcal{P}_{i,a}\)).
By approximating a reformulated version of \(\mathcal{P}_{i}\) that uses the dual representations of \(\mathcal{K}_{inf}^{L}\) and \(\mathcal{K}_{inf}^{U}\) (following the approach used in Honda and Takemura (2010); Agrawal et al. (2020)), we can show that
\[\mathcal{P}_{i,a}=w_{1}\gamma^{\alpha_{1}}\big{[}\sum_{j}p_{1j}\log(1+C_{1i}^{ a}a_{1j})-C_{1i}^{a}x_{i,a}^{*}\big{]}+w_{i}\gamma^{\alpha_{i}}\big{[}\sum_{j}p_{ ij}\log(1-C_{i}^{a}a_{ij})+C_{i}^{a}x_{i,a}^{*}\big{]}. \tag{9}\]
where the quantities \(x_{i,a}^{*},C_{1i}^{a},C_{i}^{a}\) (the qualifier 'a' reminds us these are for the approximate problem) are defined by the relations:
\[C_{1i}^{a}w_{1}\gamma^{\alpha_{1}}=C_{i}^{a}w_{i}\gamma^{\alpha_ {i}},\] \[x_{i,a}^{*}=\sum_{j}\frac{a_{1j}p_{1j}}{1+a_{1j}C_{1i}^{a}},\ \text{and} \tag{10}\] \[x_{i,a}^{*}=\sum_{j}\frac{a_{ij}p_{ij}}{1-a_{ij}C_{i}^{a}}.\]
Section A.4 of the supplementary material provides the step-by-step reformulation, as well as the results that have been used for it (Sections A.1-A.3 and A.5). The advantage of our reformulation is that the quantities \(C_{1i}^{a}\) and \(C_{i}^{a}\) have bounded, well-defined limits, and using (10), we can eliminate the dependence on \(x_{i,a}^{*}\) (whose behaviour is not as easy to analyze when \(\gamma\to 0\)).
The discussion in Section 2.2 also suggests that \(\mathcal{P}_{i,a}\approx\mathcal{P}_{i}\) and hence, \(V^{*}(p)\approx V_{a}^{*}(p)\). This is shown in the following theorem:
**Theorem 1**.: _For each \(i\in[K]\) and \(w\in\Sigma_{K}\), \(\mathcal{P}_{i}\), \(\mathcal{P}_{i,a}\) are \(\mathcal{O}(\gamma^{\max(\alpha_{1},\alpha_{i})})\). Furthermore, \(\lim\limits_{\gamma\to 0}\frac{\mathcal{P}_{i}}{\mathcal{P}_{i,a}}=1.\) In addition, there exist constants \(L_{1i}\) and \(L_{i}\), independent of \(w\), such that_
\[|\mathcal{P}_{i}-\mathcal{P}_{i,a}|\leq L_{1i}w_{1}\gamma^{\min(2\alpha_{1}, \alpha_{1}+\alpha_{i})}+L_{i}w_{i}\gamma^{\min(2\alpha_{i},\alpha_{i}+\alpha_{ 1})}.\]
_Furthermore,_
\[|V^{*}(p)-V^{*}_{a}(p)|\leq\max_{i\neq 1}\max\big{(}L_{1i}\gamma^{\min(2 \alpha_{1},\alpha_{1}+\alpha_{i})},L_{i}\gamma^{\min(2\alpha_{i},\alpha_{i}+ \alpha_{1})}\big{)}.\]
The proof involves simplifying \(\mathcal{P}_{i}\), \(\mathcal{P}_{i,a}\) through Taylor expansions for small \(\gamma\). It is given in the Sections A.4 and B of the supplementary material.
### Solving the approximate lower bound
By definition we have that
\[V^{*}_{a}(p)=\max_{w\in\Sigma_{K}}\min_{i\neq 1}\mathcal{P}_{i,a}.\]
Further, we note that \(\mathcal{P}_{i,a}\) is a concave function of \(w\) (an infimum of linear functions of \(w\)). Maxmin problems with this specific structure were studied in Glynn and Juneja (2004) (the caveat being that in our \(\mathcal{K}_{inf}\) definitions, the first argument of the underlying KL term is fixed while we optimize over the second, whereas in Glynn and Juneja (2004) these roles are reversed; however, all the steps carry over identically). The optimal weights \(w^{*}\) are characterized in the following theorem:
**Theorem 1** (in Glynn and Juneja (2004)).: _The optimal \(w^{*}\) of the maxmin problem (6) satisfies:_
\[\sum_{i=2}^{K}\frac{\partial\mathcal{P}_{i,a}(w^{*})}{\partial w_{1}}\bigg{/} \frac{\partial\mathcal{P}_{i,a}(w^{*})}{\partial w_{i}}=1, \tag{11}\]
_and \(\forall i\neq j\), \(i,j\neq 1\),_
\[\mathcal{P}_{i,a}(w^{*})=\mathcal{P}_{j,a}(w^{*}). \tag{12}\]
_These conditions are also sufficient._
We can use the above theorem to find closed-form expressions (in terms of \(w^{*}\)) for \(\mathcal{P}_{i,a}\) and \(\frac{\partial\mathcal{P}_{i,a}(w^{*})}{\partial w_{j}}\) using (9). As a starting point, we identify certain monotonicities present in (10), (11) and (12) which simplify the process of root-finding via bisection methods.
The equations defining \(C^{a}_{1i}\) and \(C^{a}_{i}\) imply that \(C^{a}_{i}\) is a decreasing function of \(C^{a}_{1i}\). Mathematically, the implicit functions \(g_{i}(r)\), defined for all \(i\neq 1\) as
\[\sum_{j}\frac{a_{1j}p_{1j}}{1+g_{i}(r)a_{1j}}=\sum_{j}\frac{a_{ij}p_{ij}}{1-ra _{ij}}\]
are decreasing in \(r\). The domain of \(g_{i}\) is chosen such that the RHS in the above equation is positive and finite.
The optimality equation (12) implies that at the optimal weight \(w^{*}\), each \(C^{a}_{1i}\), \(i>2\), is an increasing function of \(C^{a}_{12}\). More formally, the functions \(\xi_{i}(s)\), \(\forall i>2\), implicitly defined through the equation:
\[\sum_{j}p_{1j}\log(1+g_{i}(\xi_{i})a_{1j})+\frac{g_{i}(\xi_{i})}{\xi_{i}}\sum _{j}p_{ij}\log(1-\xi_{i}a_{ij})=\sum_{j}p_{1j}\log(1+g_{2}(s)a_{1j})+\frac{g_ {2}(s)}{s}\sum_{j}p_{2j}\log(1-sa_{2j})\]
are increasing in \(s\). The domain of \(\xi_{i}\) is such that the RHS is well-defined. Finally, as a function of \(C^{a}_{12}\), the LHS in the optimality equation (11) is also increasing. Mathematically this means that the functions, \(\forall i\neq 1\),
\[h_{i}(s):=\bigg{(}\sum_{j}p_{1j}\log(1+\xi_{i}a_{1j})-\xi_{i}\Big{[}\sum_{j} \frac{a_{1j}p_{1j}}{1+a_{1j}\xi_{i}}\Big{]}\bigg{)}\bigg{(}\sum_{j}p_{ij}\log( 1-g_{i}(\xi_{i})a_{ij})+g_{i}(\xi_{i})\sum_{j}\Big{[}\frac{a_{ij}p_{ij}}{1-a_{ ij}g_{i}(\xi_{i})}\Big{]}\bigg{)}^{-1}\]
are increasing in \(s\). These monotonicities enable one to solve for the optimal weights in (6) through simple bisection methods; a sketch of the innermost step is given below. This is the source of the computational benefit of solving (6) vis-a-vis (3): in (3), one has to solve either convex programs (\(\mathcal{P}_{i}\)) or a nonlinear system of four equations to arrive at the solution (see Section C of supplementary material).
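The following sketch (our own, with illustrative arm parameters) solves the defining equation of \(g_{i}(r)\) from (10) by bracketed root finding, exploiting that the LHS below is decreasing in \(g\):

```
import numpy as np
from scipy.optimize import brentq

# Sketch of the innermost step: given r = C_i^a, solve for g_i(r) = C_{1i}^a
# from sum_j a_1j p_1j / (1 + g a_1j) = sum_j a_ij p_ij / (1 - r a_ij).
a1, p1 = np.array([2.0, 5.0]), np.array([0.10, 0.06])   # arm 1 (illustrative)
ai, pi = np.array([1.0, 4.0]), np.array([0.20, 0.05])   # arm i (illustrative)

def lhs(g):                       # decreasing in g, equals mu_1 at g = 0
    return np.sum(a1 * p1 / (1 + g * a1))

def rhs(r):                       # increasing in r on [0, 1/max(a_ij))
    return np.sum(ai * pi / (1 - r * ai))

def g_i(r, g_max=1e6):
    return brentq(lambda g: lhs(g) - rhs(r), 0.0, g_max)

print(g_i(0.05))                  # ~ 0.02 for these parameters
```

The outer layers (\(\xi_{i}\) and \(h_{i}\)) are handled analogously, each by a one-dimensional bisection over the quantity below it.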
This enables us to study the behaviour of \(w^{*}\) as \(\gamma\to 0\). We set up some notation first.
**Definition 1**.: Two positive valued functions of \(\gamma\), \(A(\gamma)\) and \(B(\gamma)\), are said to be _asymptotically equivalent_ if \(0<\liminf\limits_{\gamma\to 0}\frac{A(\gamma)}{B(\gamma)}\leq\limsup\limits_{ \gamma\to 0}\frac{A(\gamma)}{B(\gamma)}<\infty\). We denote this by \(A(\gamma)=\Theta(B(\gamma))\).
Let \(\alpha_{\max}=\max_{i}\alpha_{i}\). The quantity \(\zeta\coloneqq\sum\limits_{i\neq 1,\,\alpha_{i}=\alpha_{max}}h_{i}(\xi_{i}(0))\) also plays a role in governing the asymptotic behaviour of \(w^{*}\).
Theorem 2 provides insight into the optimal weights in the lower bound problem as \(\gamma\to 0\). We discuss its conclusions further in the next subsection.
**Theorem 2**.: _The behaviour of \(w^{*}\) as \(\gamma\to 0\) is described by the following five cases:_
_Case 1: The best arm is not the rarest, \(\alpha_{max}\neq\alpha_{1}\)._
\[w_{1}^{*} =\Theta(\gamma^{\frac{\alpha_{max}-\alpha_{1}}{2}}),\] \[w_{i}^{*} =\Theta(\gamma^{\alpha_{max}-\alpha_{1}})\quad\text{ for all }i\neq 1.\]
_Case 2: The best arm is uniquely the rarest, \(\alpha_{1}=\alpha_{max}>\alpha_{i},i\neq 1\)._
\[w_{2}^{*} =\Theta(\gamma^{\frac{\alpha_{max}-\alpha_{2}}{2}}),\] \[w_{i}^{*} =\Theta(\gamma^{\alpha_{max}-\alpha_{i}})\quad\text{ for all }i\neq 2.\]
_Case 3: The best and second best arm only are the rarest, \(\alpha_{1}=\alpha_{2}=\alpha_{max}>\alpha_{i},\;\forall i\neq 1,2\)._
\[w_{i}^{*}=\Theta(\gamma^{\alpha_{max}-\alpha_{i}}),\;\text{for all }i.\]
_Case 4: The best arm is the rarest but not uniquely, \(\alpha_{1}=\alpha_{k}=\alpha_{max}\geq\alpha_{i},\;i\notin\{1,2,k\}\), \(\alpha_{max}>\alpha_{2}\) and \(\zeta>1\)._
\[w_{2}^{*} =\Theta(\gamma^{\frac{\alpha_{max}-\alpha_{2}}{2}}),\] \[w_{i}^{*} =\Theta(\gamma^{\alpha_{max}-\alpha_{i}})\quad\text{ for all }i\neq 2.\]
_Case 5: The best arm is the rarest but not uniquely, \(\alpha_{1}=\alpha_{k}=\alpha_{max}\geq\alpha_{i},\;i\notin\{1,2,k\}\), \(\alpha_{max}>\alpha_{2}\) and \(\zeta\leq 1\)._
\[w_{1}^{*} =\Theta(\gamma^{\alpha_{max}-\alpha_{1}}),\] \[w_{i}^{*} =\Theta(\gamma^{\alpha_{max}-\alpha_{i}})\quad\text{ for all }i\neq 1.\]
_Further, the asymptotic equivalence can be expressed by limits that are functions of parameters of the bandit problem._
Proof.: See section C of supplementary material.
The theorem gives us insight into the behaviour of the optimal weights \(w^{*}\) in equation (6). Since \(V^{*}(p)\approx V^{*}_{a}(p)\) (Theorem 1), the optimal weights of the actual maxmin problem will also show the same asymptotic behaviour. It is easy to see that substituting these optimal weights in \(V^{*}(p)\) gives a value that is a scalar multiple of \(\gamma^{\alpha_{max}}\), and hence an overall lower bound on the sample complexity that scales as \(\gamma^{-\alpha_{max}}\).
### Discussion on Theorem 2
The following lemma will be useful in the subsequent discussion of Theorem 2. Without loss of generality let arm 2 be the one with the second highest mean. We further assume that \(\mu_{2}>\mu_{i}\) for \(i\geq 3\).
**Lemma 1**.: _In the maxmin problem (3), let \(x^{*}_{i,e}(w^{*})\) denote the minimizer of each \(\mathcal{P}_{i}\) for the optimal weights \(w^{*}\). Then, we have \(x^{*}_{i,e}(w^{*})\in[\mu_{2},\mu_{1}]\;\;\forall i\)._
Proof.: We shall show this by contradiction. Suppose \(x^{*}_{i,e}(w^{*})<\mu_{2}\). Then, from the optimality conditions of \(w^{*}\) (similar to (11), (12)) we have, \(\forall i\neq j,i,j\neq 1\):
\[\inf_{\mu^{\prime}_{i}\geq\mu^{\prime}_{1}}w^{*}_{1}KL(\mu_{1},\mu^{\prime}_{1 })+w^{*}_{i}KL(\mu_{i},\mu^{\prime}_{i})=\inf_{\mu^{\prime}_{j}\geq\mu^{ \prime}_{1}}w^{*}_{1}KL(\mu_{1},\mu^{\prime}_{1})+w^{*}_{j}KL(\mu_{j},\mu^{ \prime}_{j}).\]
But we know that this minimization, for each \(i\neq 1\), is attained uniquely by a bandit instance \(p^{\prime}\) in which all arms other than 1 and \(i\) are the same as in the original bandit instance \(p\). Both arms 1 and \(i\) have mean \(x^{*}_{i,e}(w^{*})\) under \(p^{\prime}\). But the assumed hypothesis then implies that \(x^{*}_{i,e}(w^{*})=\mu^{\prime}_{1}<\mu^{\prime}_{2}=\mu_{2}\). That means \(p^{\prime}\) is also in the set \(\{\mu^{\prime}_{2}\geq\mu^{\prime}_{1}\}\) and hence
\[\inf_{\mu^{\prime}_{i}\geq\mu^{\prime}_{1}}w^{*}_{1}KL(\mu_{1},\mu^{\prime}_{1 })+w^{*}_{i}KL(\mu_{i},\mu^{\prime}_{i})>\inf_{\mu^{\prime}_{2}\geq\mu^{\prime }_{1}}w^{*}_{1}KL(\mu_{1},\mu^{\prime}_{1})+w^{*}_{2}KL(\mu_{2},\mu^{\prime}_{ 2}).\]
However, this contradicts the necessary optimality conditions for \(w^{*}\). Thus, \(x^{*}_{i,e}(w^{*})\geq\mu_{2}\).
A similar result can also be shown for the approximate problem (6) (see Section D of supplementary material).
In the rare event setting, the non-zero samples from an arm are the informative samples, but they are quite rare. Any algorithm needs to see non-zero (informative) samples from at least some arms before it decides to stop. By Lemma 1 we know that all arms, except possibly the best and second best (\(i=1,2\)), will show deviations in their sample mean under max-min optimality. As the TS algorithm and our algorithm track these weights, it is to be expected that the number of samples for arm \(i(\neq 1,2)\) is only as high as it takes to see an \(\mathcal{O}(1)\) sample mean, but also sufficiently low as to ensure that the probability of sample mean deviation is high. The optimal weights \(w^{*}_{i}\simeq\gamma^{\alpha_{max}-\alpha_{i}}\), \(\forall i\neq 1,2\), have this feature. This gives the sample complexity for arm \(i(\neq 1,2)\) as \(\mathcal{O}(\gamma^{-\alpha_{i}})\) (since the overall sample complexity is \(\mathcal{O}(\gamma^{-\alpha_{\text{max}}})\)). On average, each arm thus sees only \(\mathcal{O}(1)\) non-zero samples, with a deviation probability \(1-\mathcal{O}(\gamma^{\alpha_{i}}(\mu_{1}-\mu_{i})^{2})\) and \(\mathcal{O}(1)\) sample mean.
## 4 Track and Stop Algorithm
Our algorithm builds upon the Track and Stop (TS) algorithm proposed in Agrawal et al. (2019); Kaufmann et al. (2016). We call it Track and Stop (A), to emphasize that we are solving an approximate problem. The algorithm solves the approximate maxmin problem (6), and samples according to the weights obtained. The calculation of the sampling weights happens in batches of size \(m\). Let \(l\) denote the batch index. Within each batch we ensure that each arm gets at least \(\sqrt{lm}\) samples. This is done in the same manner as in Agrawal et al. (2019). At the end of the \(l\)-th batch, TS(A) evaluates the maximum likelihood ratio \(Z_{k^{*}}(l)\) for the empirical best arm \(k^{*}(l)\) and decides whether to stop. The likelihood ratio is given by:
\[Z_{k^{*}}(l)\coloneqq\min_{b\neq k^{*}}\inf_{x\leq y}N_{k^{*}}(lm)\mathcal{K}^{L}_{inf}(\hat{p}_{k^{*}}(lm),x)+N_{b}(lm)\mathcal{K}^{U}_{inf}(\hat{p}_{b}(lm),y).\]
\(\hat{p}(t)\) refers to the empirical bandit instance after \(t\) samples, and \(N_{i}(t)\) denotes the number of pulls of arm \(i\) after \(t\) samples. TS(A) stops when \(Z_{k^{*}}(l)>\beta(lm,\delta)\), where \(\beta(t,\delta)\) is a stopping threshold defined as
\[\beta(t,\delta)\coloneqq\log\left(\frac{K-1}{\delta}\right)+5\log(t+1)+2.\]
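For concreteness, the stopping check can be transcribed directly (a minimal sketch; natural logarithms assumed, as elsewhere in the paper):

```
import numpy as np

def beta(t, delta, K):
    # Stopping threshold beta(t, delta) for a K-armed bandit.
    return np.log((K - 1) / delta) + 5 * np.log(t + 1) + 2

def should_stop(Z, l, m, delta, K):
    # TS(A) stops after batch l once the likelihood ratio exceeds the threshold.
    return Z > beta(l * m, delta, K)

print(beta(t=1000, delta=0.01, K=5))   # ~ 42.5
```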
Note that we are computing the maximum likelihood ratio by solving the \(\mathcal{K}_{inf}\) problems exactly, and not approximately. Although it is relatively expensive to compute these quantities exactly, such computations occur only once for each \(l\). The number of samples \(N_{i}(t)\) for each arm \(i\) is influenced by the optimal weights that are obtained as solution to the approximate maxmin problem. The precise algorithmic details of TS(A) are given below.
### \(\delta\)-correctness and sample complexity of TS(A)
The following theorem guarantees the \(\delta\)-correctness and gives asymtptotic sample complexity bound for TS(A):
**Theorem 3**.: _The TS(A) is a \(\delta\)-correct algorithm with the following asymptotic sample complexity bound:_
\[\limsup_{\delta\to 0}\frac{\mathbb{E}[\tau_{\delta}]}{\log(1/\delta)} \leq\frac{1}{V_{TS(A)}(p)} \tag{13}\]
_where \(V_{TS(A)}(p):=\min_{i\neq 1}\!\!\mathcal{P}_{i}(\hat{w}^{*}(p))\). \(\hat{w}^{*}(p))\) denotes the optimal weights for the approx lower bound problem \(V^{*}_{a}(p)\)._
See Sections E and F in the supplementary material for a proof of Theorem 3. Note that by definition we have \(V_{TS(A)}(p)\leq V^{*}(p)\), and hence we do suffer some loss in sample complexity vis-a-vis the TS algorithm. However, when \(\gamma\) is small, the difference is negligible as \(w^{*}(p)\approx\hat{w}^{*}(p)\).
```
Generate \(\lfloor\frac{m}{K}\rfloor\) samples for each arm. \(l\gets 1\).
Compute the empirical bandit \(\hat{p}=(\hat{p}_{i})_{i\in[K]}\).
\(\hat{w}(\hat{p})\leftarrow\) Compute weights according to (6).
\(k^{*}\leftarrow\underset{i\in[K]}{\operatorname*{arg\,max}}\ \mathbb{E}[\hat{p}_{i}]\).
Compute \(Z_{k^{*}}(l)\), \(\beta(lm,\delta)\).
while \(Z_{k^{*}}(l)\leq\beta(lm,\delta)\) do
    \(s_{i}\leftarrow(\sqrt{(l+1)m}-N_{i}(lm))^{+}\).
    if \(m\geq\sum_{i}s_{i}\) then
        Generate \(s_{i}\) many samples for each arm \(i\).
        Generate \((m-\sum_{i}s_{i})^{+}\) i.i.d. samples from \(\hat{w}(\hat{p})\). Let \(Count(i)\) be the occurrence of \(i\) in these samples.
        Generate \(Count(i)\) samples from each arm \(i\).
    else
        \(\hat{s}^{*}\leftarrow\underset{\hat{s}:\,s_{i}\geq\hat{s}_{i}\geq 0}{\operatorname*{arg\,min}}\max_{i}(s_{i}-\hat{s}_{i})\).
        Generate \(\hat{s}^{*}_{i}\) samples from each arm \(i\).
    end if
    \(l\gets l+1\).
    Update the empirical bandit \(\hat{p}\).
    \(k^{*}\leftarrow\underset{i\in[K]}{\operatorname*{arg\,max}}\ \mathbb{E}[\hat{p}_{i}]\).
    Update \(Z_{k^{*}}(l)\), \(\beta(lm,\delta)\).
    \(\hat{w}(\hat{p})\leftarrow\) Compute weights according to (6).
end while
return \(k^{*}\).
```
**Algorithm 1** TS(A) algorithm
### Computational Benefit of Poisson Approximation
The computational benefit of TS(A) vis-a-vis the exact algorithm, call it TS(E), lies in how the approximate and exact lower bound problems are solved.
Let us first examine the number of operations required in finding the exact lower bound. In our implementation, we used Brent's method for one-dimensional optimization and the bisection method for root finding. To get a relative error of \(\epsilon\) in Brent's method (see Chapter 4 in Brent (2013)) we require \(\mathcal{O}\big{(}\log^{2}\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) operations. The bisection method takes \(\mathcal{O}\big{(}\log\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) for a relative accuracy of \(\epsilon\). Lemma 2 (see Section A of the supplementary material) reduces the process of computing \(\mathcal{K}^{L}_{inf}\) and \(\mathcal{K}^{U}_{inf}\) to a root-finding procedure, causing said computations to take about \(\mathcal{O}\big{(}\log\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) operations. The inner optimization \(\mathcal{P}_{i}\) is a convex optimization that requires \(\mathcal{O}\big{(}\log^{2}\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) operations. The outer optimization in (3) can be reduced to solving two sets of simultaneous root finding procedures and hence would take \(\mathcal{O}\big{(}\log^{2}\big{(}\frac{1}{\epsilon}\big{)}\big{)}\). Thus, the total number of operations to solve the exact lower bound (3) is \(\mathcal{O}\big{(}\log^{5}\big{(}\frac{1}{\epsilon}\big{)}\big{)}\).
In the approximate problem, the \(C_{i}\) and \(C_{1i}\) are the unknown variables whose behaviour we analyze. Using \(g_{i}\) (Section 3.1) to write \(C_{i}\) as a function of \(C_{1i}\) requires about \(\mathcal{O}\big{(}\log\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) operations for each such conversion using the bisection method. Then, each of the \(C_{1i}\) (\(i\neq 2\)) is written as a function of \(C_{12}\) through \(\xi_{i}\). This again requires about \(\mathcal{O}\big{(}\log\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) operations for each such conversion. Finally, solving for \(C_{12}\) through \(h_{i}\) requires another factor of \(\mathcal{O}\big{(}\log\big{(}\frac{1}{\epsilon}\big{)}\big{)}\). This gives the total required number of operations to be \(\mathcal{O}\big{(}\log^{3}\big{(}\frac{1}{\epsilon}\big{)}\big{)}\). Thus, we save a factor of about \(\mathcal{O}\big{(}\log^{2}\big{(}\frac{1}{\epsilon}\big{)}\big{)}\) by solving the approximate problem vis-a-vis the exact one.
## 5 Numerical Experiments
We compare the sample complexity and computational time of TS(A) and the Track & Stop TS(E) algorithm proposed in Agrawal et al. (2020). We make the comparison across different arm, \(\gamma\) and \(\alpha\) structures at a confidence level \(\delta=0.01\). We run each algorithm for \(100\) sample paths, and the average sample complexity and average computational time are reported in Table 1 below. The algorithm for both TS(E) and TS(A) proceeds in batches of size \(\gamma^{-\alpha_{\max}}\).
The table shows that in all experiments TS(A) takes slightly more samples (1-\(13\%\)) to stop and recommend an arm compared to TS(E). The computational saving of TS(A) is about a factor of \(6\)-\(12\) relative to the TS algorithm. These simple experiments underscore the trade-off between sample complexity and computational time.
## 6 Conclusion
The paper proposes a rarity framework to study the fixed confidence BAI problem relevant to online ad placement. In this framework the positive reward probabilities are tiny while the corresponding rewards are quite large. Consequently, the mean rewards are \(\mathcal{O}(1)\).
We introduce a Poisson approximation to the standard lower bound problem and use it to motivate an algorithm that is computationally faster than the optimal TS algorithm at the cost of a small increase in sample complexity. We also use this approximation to derive the asymptotic optimal weights, which give insight into the lower bound behaviour in the rare event setting. We observe this trade-off between sample complexity and computational time in our numerical experiments.
|
2302.10072 | Consistent Constraints on the Equivalence Principle from localised Fast
Radio Bursts | Fast Radio Bursts (FRBs) are short astrophysical transients of extragalactic
origin. Their burst signal is dispersed by the free electrons in the
large-scale-structure (LSS), leading to delayed arrival times at different
frequencies. Another potential source of time delay is the well known Shapiro
delay, which measures the space-space and time-time metric perturbations along
the line-of-sight. If photons of different frequencies follow different
trajectories, i.e. if the universality of free fall guaranteed by the weak
equivalence principle (WEP) is violated, they would experience an additional
relative delay. This quantity, however, is not an observable on the background
level as it is not gauge independent, which has led to confusion in previous
papers. Instead, an imprint can be seen in the correlation between the time
delays of different pulses. In this paper, we derive robust and consistent
constraints from twelve localised FRBs on the violation of the WEP in the
energy range between 4.6 and 6 meV. In contrast to a number of previous
studies, we consider our signal to be not in the model, but in the covariance
matrix of the likelihood. To do so, we calculate the covariance of the time
delays induced by the free electrons in the LSS, the WEP breaking terms, the
Milky Way and host galaxy. By marginalising over both host galaxy contribution
and the contribution from the free electrons, we find that the parametrised
post-Newtonian parameter $\gamma$ characterising the WEP violation must be
constant in this energy range to 1 in $10^{13}$ at 68$\;\%$ confidence. These
are the tightest constraints to-date on $\Delta\gamma$ in this low energy
range. | Robert Reischke, Steffen Hagstotz | 2023-02-20T16:29:16Z | http://arxiv.org/abs/2302.10072v2 | # Consistent Constraints on the Equivalence Principle from localised Fast Radio Bursts
###### Abstract
Fast Radio Bursts (FRBs) are short astrophysical transients of extragalactic origin. Their burst signal is dispersed by the free electrons in the large-scale-structure (LSS), leading to delayed arrival times at different frequencies. Another potential source of time delay is the well known Shapiro delay, which measures the space-space and time-time metric perturbations along the line-of-sight. If photons of different frequencies follow different trajectories, i.e. if the universality of free fall guaranteed by the weak equivalence principle (WEP) is violated, they would experience an additional relative delay. This quantity, however, is not an observable on the background level as it is not gauge independent, which has led to confusion in previous papers. Instead, an imprint can be seen in the correlation between the time delays of different pulses. In this paper, we derive robust and consistent constraints from twelve localised FRBs on the violation of the WEP in the energy range between 4.6 and 6 meV. In contrast to a number of previous studies, we consider our signal to be not in the model, but in the covariance matrix of the likelihood. To do so, we calculate the covariance of the time delays induced by the free electrons in the LSS, the WEP breaking terms, the Milky Way and host galaxy. By marginalising over both host galaxy contribution and the contribution from the free electrons, we find that the parametrised post-Newtonian parameter \(\gamma\) characterising the WEP violation must be constant in this energy range to 1 in \(10^{13}\) at 68 % confidence. These are the tightest constraints to-date on \(\Delta\gamma\) in this low energy range.
keywords: cosmology: theory, large-scale structure of Universe, radio continuum: transients
## 1 Introduction
Fast Radio Bursts (FRBs) are very short transients lasting only a few milliseconds and covering a frequency range from a few hundred to a few thousand MHz. Scattering off the free electrons in the ionised intergalactic medium (IGM) disperses the pulse, leading to a typical time delay \(\Delta t\propto\nu^{-2}\). Its amplitude is called the dispersion measure (DM) (see e.g. Thornton et al., 2013; Petroff et al., 2015; Connor et al., 2016; Champion et al., 2016; Chatterjee et al., 2017; Macquart et al., 2020). FRBs have sparked a flurry of research over the past years and their potential is massive, as discussed in the review by Petroff et al. (2022). The DM is proportional to the integrated electron density along the line-of-sight between the source and the observer and forms a unique astrophysical and cosmological probe. Even though the mechanism of the bursts is still under debate1, their isotropic distribution across the sky and large observed DMs advocate an extragalactic origin (although some might also be galactic; Andersen et al., 2020). Thus, the DM can test the distribution of diffuse electrons in the large-scale structure (LSS).
Footnote 1: For a compilation of some proposed mechanisms for FRBs, see [https://frbtheorycat.org](https://frbtheorycat.org) (Platts et al., 2019).
FRBs have recently been proposed as a means to test the weak equivalence principle (WEP). The WEP guarantees the universality of free fall, one of the key axioms of General Relativity. When the WEP is broken, photons of different frequency, i.e. energy, can follow different null-geodesics. This is also true for other light (relativistic) particles such as neutrinos (see Bose and McGlinn, 1988). In effect, due to the cosmological distances involved, pulses from short transients would pick up a massive time delay in their signal in time-frequency space if the WEP is broken. Accordingly, transients at cosmological distances are a promising ground to test the WEP.
There are two components to the DM, the homogeneous (background) and the inhomogeneous component sourced by per
turbations. While the former is only accessible through FRBs with known host, since they provide an independent redshift estimate (Zhou et al., 2014; Walters et al., 2018; Hagstotz et al., 2022; Macquart et al., 2020; Wu et al., 2022; James et al., 2022), the latter can be studied with the full FRB sample through correlations (Masui and Sigurdson, 2015; Shirasaki et al., 2017; Rafiei-Ravandi et al., 2021; Bhattacharya et al., 2020; Takahashi et al., 2021). In principle, any breaking of the WEP would immediately lead to a much higher DM than expected, thus opening a window to put tight constraints on the WEP. While there are a number of studies using FRBs with host identification to constrain the WEP, it was pointed out in Minazzoli et al. (2019) and Reischke et al. (2022) that these constraints are not accurate since they all assume a form of the Shapiro delay derived from a metric with weak perturbations that vanish at infinity. In cosmology, however, potentials do not vanish at spatial infinity (unless they vanish everywhere, rendering the whole discussion mood) due to the symmetries of the Friedmann-Robertson-Walker metric. Reischke et al. (2022) suggest a way out of this dilemma by using angular statistics of FRBs instead, which yields a well defined equation for the Shapiro time delay fluctuations (for fluctuations of the DM see e.g. Masui and Sigurdson, 2015; Shirasaki et al., 2017; Rafiei-Ravandi et al., 2020; Reischke et al., 2021; Bhattacharya et al., 2020; Takahashi et al., 2021; Rafiei-Ravandi et al., 2021), which is, in contrast to the classical approach, gauge invariant. This was partially already applied to gamma ray bursts (Bartlett et al., 2021) using full forward modelling by combining realisations of the local density field which found \(\Delta\gamma<2.1\times 10^{-15}\).
In this paper, we intend to revisit FRBs with host identification as promising tools to test the WEP. Recently, Reischke and Hagstotz (2023) have calculated the covariance matrix from the LSS for DM\(-z\) observations. Furthermore, Nusser (2016) already used individual FRBs to constrain \(\Delta\gamma\) from the fluctuations. We use the full covariance and apply it to a current data set of FRBs with host identification to put constraints on \(\Delta\gamma\). In this way, the parameter dependence does not lie in the individual dispersion measures themselves, but in their covariance.
## 2 Shapiro delay tests with localised fast radio bursts
### Using FRBs to test the equivalence principle
The observed time delay, \(\Delta t_{\rm obs}\), between different frequency bands of an astrophysical transient can be split into several contributions:
\[\Delta t_{\rm obs}=\Delta t_{\rm int}+\Delta t_{\rm grav}. \tag{1}\]
\(\Delta t_{\rm int}\) is the intrinsic time delay due to the source and the type of transient. In the case of FRBs this can be split into the DM contribution \(\Delta t_{\rm DM}\) and a potential source contribution \(\Delta t_{\rm s}\) which we assume to vanish. With this we are left with
\[\Delta t_{\rm obs}=\Delta t_{s}+\Delta t_{\rm DM}+\Delta t_{\rm grav}\, \tag{2}\]
where the last term is the difference in the gravitational time delay between photons of different frequencies. We write the weakly perturbed Friedmann-Robertson-Walker (FRW) line element in conformal Newtonian gauge within the PPN formulation (Will, 2014) as follows:
\[\mathrm{d}s^{2}=-\Big{(}1+\frac{2\phi}{c^{2}}\Big{)}c^{2}\mathrm{d}t^{2}+a^{2 }(t)\left(1-\frac{2\gamma\phi}{c^{2}}\right)\mathrm{d}\mathbf{x}^{2}\, \tag{3}\]
with the gauge potential \(\phi\), the scale factor \(a\) and the comoving coordinates \(\mathbf{x}\). The PPN parameter \(\gamma\) measures the deviation from the Newtonian expectation, with \(\gamma=1\) in general relativity. The time delay experienced by a photon is then given by:
\[t_{\rm grav}(\mathbf{\hat{x}})=\frac{1+\gamma}{c^{3}}\int_{0}^{\chi_{\rm s}} \mathrm{d}\chi\ a(\chi)\phi(\mathbf{\hat{x}}\chi,a(\chi))\, \tag{4}\]
where \(\chi\) is the comoving distance at the background level. Considering two photons at two different frequencies \(\nu_{1,2}\) this turns into
\[\Delta t_{\rm grav}(\mathbf{\hat{x}})=\frac{\Delta\gamma_{1,2}}{c^{3}}\int_{0} ^{\chi_{\rm s}}\mathrm{d}\chi\ a(\chi)\phi(\mathbf{\hat{x}}\chi,a(\chi))\, \tag{5}\]
where \(\Delta\gamma_{1,2}\) measures by how much the time delay changes between frequencies \(\nu_{1,2}\). If the WEP holds, one expects \(\Delta\gamma_{1,2}=0\) due to the universality of free fall. As discussed in Reischke et al. (2022) this expression is not plagued by any divergences and respects the cosmological symmetry assumptions by construction.
\begin{table}
\begin{tabular}{c c c c c c} Name & DM [pc cm\({}^{-3}\)] & DM\({}_{\rm MW}\) [pc cm\({}^{-3}\)] & \(\nu\) [MHz] & \(z\) & \(\Delta\nu\) [MHz] \\ \hline \hline
20191001 & 506.92 & 44.2 & 919.5 & 0.23 & 336 \\
20200430 & 380.1 & 27.0 & 864.5 & 0.161 & 336 \\
20200906 & 577.8 & 35.9 & 864.5 & 0.36879 & 336 \\
20180924\({}^{\ast}\) & 362.4 & 40.5 & 1297.5 & 0.3214 & 336 \\
20181112\({}^{\ast}\) & 589.0 & 40.2 & 1297.5 & 0.4755 & 336 \\
20190102\({}^{\ast}\) & 364.5 & 57.3 & 1271.5 & 0.291 & 336 \\
20190608\({}^{\ast}\) & 339.5 & 37.2 & 1271.5 & 0.1178 & 336 \\
20190611.2\({}^{\ast}\) & 322.2 & 57.6 & 1271.5 & 0.378 & 336 \\
20190711\({}^{\ast}\) & 594.6 & 56.6 & 1271.5 & 0.522 & 336 \\
20190714\({}^{\ast}\) & 504.7 & 38.5 & 1271.5 & 0.209 & 336 \\
20191228\({}^{\ast}\) & 297.5 & 32.9 & 1271.5 & 0.243 & 336 \\
20190523 & 760.8 & 47 & 1411 & 0.66 & 225 \\ \end{tabular}
\end{table}
Table 1: FRBs used in this work. Given is the FRB name, the observed DM, the estimated DM from the MW (also often referred to as the ISM component), the central frequency, the redshift from the host identification and the bandwidth. The FRBs are taken from Bhandari et al. (2020, 2022); Heintz et al. (2020); Bannister et al. (2019); Prochaska et al. (2019); Macquart et al. (2020) and Ravi et al. (2019). FRBs marked with \({}^{\ast}\) will be used in a sub-sample analysis as described in Section 3.
Time delays picked up by photons along the line-of-sight can be either positive or negative. Hence, it is impossible to use individual FRBs to constrain the WEP unless the potential along the line-of-sight is known.
### Dispersion Measure and Time Delay Statistics
The observed time delay in the direction \(\hat{\mathbf{x}}\) of a source at redshift \(z\) is interpreted as an observed DM
\[\Delta t_{\rm obs}(\hat{\mathbf{x}},z)\propto{\rm DM}_{\rm obs}(\hat{\mathbf{x}},z)\,\nu^{-2}. \tag{6}\]
In particular, the time delay in a frequency band bounded by \(\nu_{1,2}\) is
\[\Delta t_{\rm obs}(\hat{\mathbf{x}},z) = t_{\nu_{1},\rm obs}(\hat{\mathbf{x}},z)-t_{\nu_{2},\rm obs}(\hat{\mathbf{x}},z) \tag{7}\] \[= {\cal K}\,{\rm DM}_{\rm obs}(\hat{\mathbf{x}},z)\left(\nu_{1}^{-2}-\nu_{2}^{-2}\right)\,\]
where we absorb all the constants in \({\cal K}=e^{2}/(2\pi m_{\rm e}c)\) and for consistency assume \(\nu_{2}>\nu_{1}\). Here, \(e\) and \(m_{\rm e}\) denote the charge and mass of an electron, respectively. A breaking of the WEP now shifts the inferred DM:
\[{\rm DM}_{\rm obs}(\hat{\mathbf{x}},z)\to{\rm DM}_{\rm obs}(\hat{\mathbf{x}},z)+{\cal D }_{\rm grav}(\hat{\mathbf{x}},z)\, \tag{8}\]
where \({\cal D}_{\rm grav}(\hat{\mathbf{x}},z)\) is the time delay from Equation (5) interpreted as a DM in direction \(\hat{\mathbf{x}}\) and up to redshift \(z\):
\[{\cal D}_{\rm grav}(\hat{\mathbf{x}},z)=\frac{\Delta\gamma_{1,2}}{{\cal K}c^{3}\left(\nu_{1}^{-2}-\nu_{2}^{-2}\right)}\int_{0}^{\chi(z)}{\rm d}\chi^{\prime}\ a(\chi^{\prime})\,\phi(\hat{\mathbf{x}}\chi^{\prime},z(\chi^{\prime})). \tag{9}\]
As mentioned before, this contribution can be positive or negative, a problem which has not been addressed in previous studies. Note that this identification is subject to the \(\nu^{-2}\) law, thus providing a preferred frequency shape of the WEP-breaking term. However, the null hypothesis is \(\Delta\gamma=0\), as predicted by GR, and any additional contribution will immediately show up in the inferred DM budget.
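As a quick numerical illustration of this \(\nu^{-2}\) identification, the following sketch evaluates the dispersion constant \({\cal K}\) in cgs units and inverts Equation (7) to convert a given time delay into the apparent DM it would mimic; the example delay and band edges are assumptions chosen to match the starred sub-sample of Table 1.

```python
import numpy as np

# Dispersion constant K = e^2 / (2 pi m_e c) in cgs units, and the mapping
# between a frequency-dependent time delay and an apparent DM (Eq. 7 inverted).
e_cgs = 4.8032e-10        # electron charge [esu]
m_e_cgs = 9.1094e-28      # electron mass [g]
c_cgs = 2.998e10          # speed of light [cm/s]
PC_CM = 3.0857e18         # one parsec in cm

K = e_cgs**2 / (2.0 * np.pi * m_e_cgs * c_cgs)   # [cm^3 / s]

# Sanity check: DM = 1 pc/cm^3 at nu = 1 GHz should delay by ~4.15 ms.
print(K * PC_CM / 1e9**2 * 1e3, "ms")            # -> ~4.15

def apparent_dm(delta_t, nu1, nu2):
    """Apparent DM [pc/cm^3] mimicked by a delay delta_t [s] between
    band edges nu1 < nu2 [Hz]."""
    return delta_t / (K * (nu1**-2 - nu2**-2)) / PC_CM

# Example: the band of the starred sub-sample (1271.5 +/- 168 MHz, assumed).
print(apparent_dm(1.0e-3, 1.1035e9, 1.4395e9))   # -> ~0.7 pc/cm^3 per 1 ms
```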
The non-gravitational contribution in Equation (8) is split into three parts:
\[{\rm DM}_{\rm obs}(\hat{\mathbf{x}},z)={\rm DM}_{\rm LSS}(\hat{\mathbf{x}},z)+{\rm DM }_{\rm MW}(\hat{\mathbf{x}})+{\rm DM}_{\rm host}(z). \tag{10}\]
For the contribution from the Milky Way, \({\rm DM}_{\rm MW}(\hat{\mathbf{x}})\), models of the galactic electron distribution predict \({\rm DM}_{\rm MW}\lesssim 50\) pc cm\({}^{-3}\) (Yao et al., 2017), unless the burst is located in the galactic plane. Here we assume that this contribution can be modelled and subtracted from the signal. It will, however, induce additional scatter in the observed signal. For the host galaxy contribution \({\rm DM}_{\rm host}\), the situation is less clear and we will discuss this in more detail in Section 3. Finally, the LSS contribution is the line-of-sight integral over the electron distribution:
\[{\rm DM}_{\rm LSS}(\hat{\mathbf{x}},z)=\int_{0}^{z}n_{\rm e}^{\rm IGM}(\mathbf{x},z^{\prime})\frac{1+z^{\prime}}{H(z^{\prime})}\,{\rm d}z^{\prime}. \tag{11}\]
Here \(H(z)=H_{0}E(z)\) is the Hubble function, and \(n_{\rm e}^{\rm IGM}\) the number density of electrons in the IGM, which can be related to the electron density contrast \(\delta_{\rm e}(\mathbf{x},z)\):
\[n_{\rm e}^{\rm IGM}(\mathbf{x},z)=F_{\rm IGM}(z)\frac{\bar{\rho}_{\rm b}(z)}{m_{ \rm p}}\left[1+\delta_{\rm e}(\mathbf{x},z)\right]\, \tag{12}\]
with the mean baryon mass density \(\bar{\rho}_{\rm b}(z)\), the proton mass \(m_{\rm p}\) and the fraction of electrons in the IGM, \(F_{\rm IGM}(z)\), which can be expressed as follows:
\[F_{\rm IGM}(z)=f_{\rm IGM}(z)\left[Y_{\rm H}X_{\rm e,H}(z)+\frac{1}{2}Y_{\rm He}X_{\rm e,He}(z)\right]. \tag{13}\]
Here \(Y_{\rm H}=0.75\) and \(Y_{\rm He}=0.25\) are the mass fractions of hydrogen and helium, respectively, \(X_{\rm e,H}(z)\) and \(X_{\rm e,He}(z)\) are their ionization fractions, and \(f_{\rm IGM}(z)\) is the mass fraction of baryons in the IGM. We assume \(X_{\rm e,H}=X_{\rm e,He}=1\) and \(f_{\rm IGM}(z)=90\%\) (80%) at \(z\gtrsim 1.5\) (\(z\lesssim 0.4\)) (Meiksin, 2009; Becker et al., 2011; Shull et al., 2012), with a linear interpolation in between. By rearranging Equation (12) in terms of today's dimensionless baryon density parameter \(\Omega_{\rm b0}\), one finds:
\[{\rm DM}_{\rm LSS}(\hat{\mathbf{x}},z)=\frac{3H_{0}^{2}\Omega_{\rm b0}\chi_{H}}{8\pi Gm_{\rm p}}\int_{0}^{z}{\rm d}z^{\prime}\frac{1+z^{\prime}}{E(z^{\prime})}F_{\rm IGM}(z^{\prime})[1+\delta_{\rm e}(\mathbf{x},z^{\prime})]\, \tag{14}\]
and we absorb the total amplitude into a prefactor
\[{\cal A}=\frac{3H_{0}^{2}\Omega_{\rm b0}\chi_{H}}{8\pi Gm_{\rm p}}\, \tag{15}\]
where \(\chi_{H}\) is the Hubble radius today.
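For orientation, the following sketch evaluates the mean of Equation (14) (i.e. \(\delta_{\rm e}=0\), the Macquart relation) for an assumed flat \(\Lambda\)CDM background; the parameter values are illustrative and not the fit results of this work.

```python
import numpy as np

# Mean Macquart relation <DM_LSS(z)> from Eq. (14) with delta_e = 0,
# for an assumed flat LCDM background (parameter values are illustrative).
H0 = 67.4 * 1e3 / 3.0857e22      # Hubble constant [1/s]
Om0, Ob0 = 0.315, 0.049
G = 6.674e-11                    # [m^3 kg^-1 s^-2]
m_p = 1.6726e-27                 # proton mass [kg]
c = 2.998e8                      # [m/s]
chi_H = c / H0                   # Hubble radius today [m]

# Prefactor A of Eq. (15), converted to pc cm^-3 (comes out near 1100).
A = 3.0 * H0**2 * Ob0 * chi_H / (8.0 * np.pi * G * m_p)   # [1/m^2]
A *= 1.0e-4 / 3.0857e18                                   # -> [pc cm^-3]

def E(z):
    return np.sqrt(Om0 * (1 + z) ** 3 + 1 - Om0)

def F_IGM(z):
    """Eq. (13) with X_e,H = X_e,He = 1 and the linear f_IGM interpolation."""
    f = np.interp(z, [0.4, 1.5], [0.8, 0.9])
    return f * (0.75 + 0.5 * 0.25)

def dm_lss_mean(z, n=512):
    zp = np.linspace(0.0, z, n)
    return A * np.trapz((1 + zp) / E(zp) * F_IGM(zp), zp)

print(dm_lss_mean(0.5))   # -> O(400) pc cm^-3, the familiar Macquart scaling
```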
### Covariance Matrix of Fast Radio Burst Dispersion
Observations of FRBs with host identification aim to fit the observed DM\(-z\) relation. Quite generally, these measurements are a
Figure 1: Pearson correlation coefficient, \(r_{ij}=C_{ij}/\sqrt{C_{ii}C_{jj}}\), of the covariance matrix for the FRBs used in the analysis (see Table 1). The upper panel shows \(r_{ij}\) induced by the electron distribution in the LSS (\(W_{\phi}=0\)), that is for \(\Delta\gamma=0\). In the lower panel we show the correlation coefficient of the contribution from the WEP breaking term. This includes the cross-term between the electron distribution in the LSS and the WEP breaking.
set \([\mathrm{DM}_{\mathrm{obs},i},\hat{\mathbf{x}}_{i},z_{i},\Delta\nu_{i}]\), \(i=1,...,N_{\mathrm{FRB}}\). We will assume a Gaussian likelihood, so that, up to an irrelevant constant,
\[\chi^{2}(\mathbf{\theta})=\log\det\mathbf{C}(\mathbf{\theta})+(\mathbf{d}-\mathbf{\mu}(\mathbf{\theta}))^ {T}\mathbf{C}^{-1}(\mathbf{\theta})\left(\mathbf{d}-\mathbf{\mu}(\mathbf{\theta})\right)\, \tag{16}\]
where the model \(\mathbf{\mu}\) is given by the average of Equation (14), \(\mathrm{DM}_{\mathrm{LSS}}(z)=\langle\mathrm{DM}_{\mathrm{LSS}}(\hat{\mathbf{x}},z)\rangle\), evaluated at all \(z_{i}\). The covariance matrix was calculated in Reischke & Hagstotz (2023) and takes the following form:
\[\begin{split} C_{ij}^{\mathrm{LSS}}&\coloneqq\left\langle\mathrm{DM}_{\mathrm{LSS}}(\hat{\mathbf{x}}_{i},z_{i})\,\mathrm{DM}_{\mathrm{LSS}}(\hat{\mathbf{x}}_{j},z_{j})\right\rangle-\mathrm{DM}_{\mathrm{LSS}}(z_{i})\,\mathrm{DM}_{\mathrm{LSS}}(z_{j})\\ &=\sum_{\ell}\frac{2\ell+1}{4\pi}P_{\ell}(\hat{\mathbf{x}}_{i}\cdot\hat{\mathbf{x}}_{j})\,C_{ij}^{\mathrm{LSS}}(\ell)\,\end{split} \tag{17}\]
where \(P_{\ell}(x)\) are the Legendre polynomials and \(C_{ij}^{\mathrm{LSS}}(\ell)\) is the power spectrum of the LSS induced dispersion measure, generally given by:
\[\begin{split} C_{ij}^{\mathrm{LSS}}(\ell)&=\sum_{\alpha\beta}\frac{2}{\pi}\int k^{2}\mathrm{d}k\int_{0}^{z_{i}}\mathrm{d}z_{i}^{\prime}\,W_{\alpha}(z_{i}^{\prime})\sqrt{P_{\alpha}(k,z_{i}^{\prime})}\,j_{\ell}(k\chi(z_{i}^{\prime}))\\ &\qquad\qquad\qquad\times\int_{0}^{z_{j}}\mathrm{d}z_{j}^{\prime}\,W_{\beta}(z_{j}^{\prime})\sqrt{P_{\beta}(k,z_{j}^{\prime})}\,j_{\ell}(k\chi(z_{j}^{\prime}))\\ &=\sum_{\alpha\beta}C_{ij}^{\alpha\beta}(\ell)\,\end{split} \tag{18}\]
\(j_{\ell}(x)\) are spherical Bessel functions and \(W_{\alpha}(z)\) is a weight function corresponding to a field \(f_{\alpha}\) whose projected version is:
\[F_{\alpha}(\hat{\mathbf{x}})=\int\mathrm{d}\chi\ W_{\alpha}(\chi)f_{\alpha}(\chi\hat{\mathbf{x}},\chi)\, \tag{19}\]
with the power spectrum defined as
\[\langle f_{\alpha}(\mathbf{k})f_{\alpha}(\mathbf{k}^{\prime})\rangle=(2\pi)^{3}\delta_{\mathrm{D}}(\mathbf{k}+\mathbf{k}^{\prime})P_{\alpha}(k). \tag{20}\]
Figure 2: Gaussian host galaxy model: the contours show the 68 % and 95 % confidence contours for the four-parameter model. The full sample as summarised in Table 1 was used. All but the red contours assume that \(\Delta\gamma\geq 0\). The grey curve uses just the eight FRBs marked with a \({}^{\ast}\) in Table 1, while the red contours assume a diagonal covariance matrix, hence ignoring DM correlations and artificially boosting the signal.
In the specific case here we have different contributions, \(\alpha,\beta\in[\mathrm{e},\phi]\). First there is the LSS contribution to the dispersion measure Equation (14) caused by the electron distribution, for which we define the weight function:
\[W_{\mathrm{e}}(z)=\mathcal{A}F_{\mathrm{IGM}}(z)\frac{1+z}{E(z)}. \tag{21}\]
Secondly, there is the possible contribution from the WEP breaking term depending on the gravitational potential itself, Equation (9) with the weight function:
\[W_{\phi}(z)=\frac{\Delta\gamma_{1,2}}{\mathcal{K}c^{3}\left(\nu_{1}^{-2}-\nu_ {2}^{-2}\right)}a(z). \tag{22}\]
The electron power spectrum \(P_{\mathrm{e}}(k,z)\) is obtained via HMx (Mead et al., 2015, 2020; Tröster et al., 2022), which also yields the total matter power spectrum \(P_{\delta}(k,z)\). Using Poisson's equation
\[-k^{2}\phi(\mathbf{k},z)=\frac{3}{2}\Omega_{\mathrm{m0}}\,a^{-1}H_{0}^{2}\,\delta(\mathbf{k},z)\, \tag{23}\]
we can map the statistics of \(\phi\) to the density contrast. Therefore, the contributions to the covariance in Equation (18) can be written as
\[C_{ij}^{\mathrm{LSS}}(\ell)=C_{ij}^{\mathrm{ee}}(\ell)+C_{ij}^{\mathrm{e}\phi}(\ell)+C_{ij}^{\phi\mathrm{e}}(\ell)+C_{ij}^{\phi\phi}(\ell)\, \tag{24}\]
note that the covariance is not necessarily symmetric upon exchanging \(\phi\) and e, i.e. in general \(C_{ij}^{\mathrm{e}\phi}(\ell)\neq C_{ij}^{\phi\mathrm{e}}(\ell)\).
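Numerically, once the per-pair multipole coefficients \(C_{ij}(\ell)\) of Equation (18) have been tabulated with the weights of Equations (21) and (22), the position-space covariance of Equation (17) is a straightforward Legendre sum. A minimal sketch is given below; `dm_covariance` and `pearson` are hypothetical helper names, and the coefficient array is assumed to be precomputed.

```python
import numpy as np
from numpy.polynomial import legendre

# Sketch of Eq. (17): assembling the pairwise DM covariance from per-pair
# multipole coefficients C_ij(ell), assumed precomputed from Eq. (18) with
# the weights of Eqs. (21) and (22).
def dm_covariance(n_hat, cl):
    """n_hat: (N, 3) unit vectors to the FRBs; cl: (N, N, lmax+1) spectra."""
    n_frb, n_ell = cl.shape[0], cl.shape[2]
    ell = np.arange(n_ell)
    cov = np.empty((n_frb, n_frb))
    for i in range(n_frb):
        for j in range(n_frb):
            mu = np.clip(np.dot(n_hat[i], n_hat[j]), -1.0, 1.0)
            coeffs = (2 * ell + 1) / (4 * np.pi) * cl[i, j]
            cov[i, j] = legendre.legval(mu, coeffs)   # sum_l c_l P_l(mu)
    return cov

# Pearson correlation coefficients as shown in Fig. 1.
def pearson(cov):
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)
```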
## 3 Data and analysis
The FRBs used are summarised in Table 1. For the analysis we require the DM, an independent redshift, the frequency band and the position on the sky. Furthermore, we adopt the fiducial cosmology from Aghanim et al. (2020). The average DM, \(\mathrm{DM}_{\rm LSS}(z)\) of Equation (14), carries most of its cosmological dependence in the amplitude. In principle there are possibilities to measure other cosmological parameters through the expansion function in the integrand (Walters et al., 2018). For the current data set of FRBs with host identification, however, the signal-to-noise ratio is only large enough to fit the overall amplitude of the DM\(-z\) relation. Since, in the current analysis, we are not interested in the actual amplitude of Equation (14), we introduce a nuisance parameter, \(A\), over which we marginalise in the analysis:
\[\mathrm{DM}_{\mathrm{LSS}}(z;A)=A\mathrm{DM}_{\mathrm{LSS}}(z)\, \tag{25}\]
where DM\({}_{\mathrm{LSS}}(z)\) is evaluated at the fiducial cosmology. In this sense, all the sensitivity contained in Equation (15) is bundled into \(A\), and we sample over \(A\) instead of the other cosmological parameters. Likewise, the covariance matrix picks up one factor of \(A\) for each occurrence of the electron field e in the terms of the splitting in Equation (24).
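Schematically, this rescaling can be implemented by precomputing the four blocks of Equation (24) once at a reference amplitude and reweighting them; the block names and the reference value \(\Delta\gamma_{\rm fid}\) in the following sketch are assumptions.

```python
import numpy as np

# How the nuisance amplitude A and Delta gamma enter the model: the mean of
# Eq. (25) is rescaled by A, while the fiducial blocks of Eq. (24) (assumed
# precomputed at a reference dgamma_fid) pick up one factor of A per electron
# field and scale linearly (cross terms) or quadratically (phi-phi) with
# Delta gamma, since W_phi of Eq. (22) is proportional to Delta gamma.
def model_mean(A, dm_lss_fid):
    return A * dm_lss_fid                                     # Eq. (25)

def model_cov(A, dgamma, blocks, dgamma_fid=1.0e-13):
    r = dgamma / dgamma_fid
    return (A**2 * blocks["ee"]
            + A * r * (blocks["ephi"] + blocks["phie"])
            + r**2 * blocks["phiphi"])
```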
In Figure 1 we show the Pearson correlation coefficients of the covariance matrix for the 12 FRBs in Table 1, evaluated for the fiducial model. The upper plot shows the contribution from the electron distribution in the LSS; in particular, this means using Equation (17) with only the first term of Equation (24). As shown in Reischke and Hagstotz (2023), the correlations between the currently known FRBs are marginal and the covariance matrix is close to diagonal. In the lower plot we show the covariance introduced if the WEP is broken (\(\Delta\gamma=10^{-13}\)), i.e. using the last three terms in Equation (24). Clearly, there are now some very strong correlations between the data points, arising from the very long correlation length of the potential fluctuations. This can be seen from the Poisson equation, Equation (23): each factor of \(\phi\) picks up a factor of \(k^{-2}\) with respect to e, assuming that electrons trace the dark matter distribution on large scales. This is exactly the signal that a breaking of the WEP would produce, and for which an upper limit can be provided by the data.
For the rest of the analysis, we follow two approaches:
1. Gaussian likelihood for the host contribution with two free parameters: the mean DM\({}_{\mathrm{host}}\) and standard deviation \(\sigma_{\mathrm{host}}\), which both scale with \((1+z)^{-1}\). Together with \(\Delta\gamma\), we therefore fit four parameters \(\mathbf{\theta}^{T}=(A,\mathrm{DM}_{\mathrm{host}},\Delta\gamma,\sigma_{\mathrm{ host}})\).
2. A log-normal likelihood for the host contribution. In this case the final likelihood is given by: \[p(\mathbf{DM}|\theta)=\int\mathrm{d}\mathrm{DM}_{\mathrm{host}}\,p_{\mathrm{host}}(\mathrm{DM}_{\mathrm{host}})\,p_{\mathrm{LSS}}(\mathbf{DM}-\mathbf{DM}_{\mathrm{host}})\,\] (26) where \((\mathbf{DM}_{\mathrm{host}})_{i}=\mathrm{DM}_{\mathrm{host}}/(1+z_{i})\). In contrast to Wu et al. (2022), we integrate from 0 to \(\infty\) to account for all possible host contributions. The probability distribution function of the host contribution is given by: \[p_{\mathrm{host}}(x)=\frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\log x-\mu)^{2}}{2\sigma^{2}}\right)\,\] (27) such that DM\({}_{\mathrm{host}}=\exp(\mu)\) and \(\sigma_{\mathrm{host}}^{2}=\exp(2\mu+\sigma^{2})(\exp(\sigma^{2})-1)\). In this case we will only fit the mean and set \(\sigma=0.35\), as the width is also governed by the mean. Finally, \(p_{\mathrm{LSS}}\) is a multivariate Gaussian with mean DM\({}_{\mathrm{LSS}}(z_{i})\) and covariance given by Equation (17). Therefore, in this case, we only sample three parameters: \(\mathbf{\theta}^{T}=(A,\mathrm{DM}_{\mathrm{host}},\Delta\gamma)\). A minimal numerical sketch of this marginalisation is given after this list.
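A minimal numerical sketch of the marginalisation in Equation (26) for a single burst is given below; it replaces the multivariate \(p_{\rm LSS}\) by a one-dimensional Gaussian with an assumed LSS mean and variance, and relies on `scipy.stats` for the log-normal of Equation (27). The grid range is an assumption.

```python
import numpy as np
from scipy import stats

# Numerical marginalisation over the host DM, Eq. (26), sketched for a single
# burst (the full analysis uses the multivariate Gaussian p_LSS with the DM
# covariance). p_host is the log-normal of Eq. (27) with sigma = 0.35.
def p_host(x, dm_host_mean, sigma=0.35):
    mu = np.log(dm_host_mean)                  # DM_host = exp(mu)
    return stats.lognorm.pdf(x, s=sigma, scale=np.exp(mu))

def likelihood_one_frb(dm_obs, z, dm_lss_mean, var_lss, dm_host_mean):
    grid = np.linspace(1e-3, 2000.0, 4000)     # host-frame DM grid [pc cm^-3]
    shifted = dm_obs - grid / (1.0 + z)        # (DM_host)_i = DM_host/(1+z_i)
    p_lss = stats.norm.pdf(shifted, loc=dm_lss_mean, scale=np.sqrt(var_lss))
    return np.trapz(p_host(grid, dm_host_mean) * p_lss, grid)
```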
We will refer to the two approaches as \((i)\) and \((ii)\) from now on. To sample from the posterior we use EMCEE (Foreman-Mackey et al., 2013) and the python package GetDist for the analysis of the chains. Table 2 shows the results in terms of the one-dimensional posterior marginals. The quoted errors correspond to the 68% confidence interval. We use uninformative priors for all parameters; except for the run with a prior allowing negative \(\Delta\gamma\), all parameters are restricted to be positive.
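The sampling setup can be sketched as follows for the Gaussian model \((i)\); `log_prob` implements Equation (16) with flat positivity priors, and the data arrays, fiducial mean and covariance blocks (via `model_mean`/`model_cov` from the sketch above) are assumed to be prepared beforehand. The starting point and walker settings are illustrative.

```python
import numpy as np
import emcee

# Sampling the four-parameter Gaussian model (i). log_prob implements Eq. (16)
# up to a constant; dm_data, z_frb, dm_lss_fid and blocks are assumed given.
def log_prob(theta, dm_data, z_frb, dm_lss_fid, blocks):
    A, dm_host, dgamma, sigma_host = theta
    if A <= 0 or dm_host <= 0 or dgamma < 0 or sigma_host <= 0:
        return -np.inf                                # positivity priors
    mu = model_mean(A, dm_lss_fid) + dm_host / (1.0 + z_frb)
    cov = model_cov(A, dgamma, blocks)
    cov = cov + np.diag((sigma_host / (1.0 + z_frb)) ** 2)  # host scatter
    r = dm_data - mu
    sign, logdet = np.linalg.slogdet(cov)
    return -0.5 * (logdet + r @ np.linalg.solve(cov, r))

ndim, nwalkers = 4, 32
start = np.array([0.6, 300.0, 1.0e-13, 100.0])        # illustrative start
p0 = start * (1.0 + 0.01 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob,
                                args=(dm_data, z_frb, dm_lss_fid, blocks))
sampler.run_mcmc(p0, 20000, progress=True)
```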
### Gaussian Host Distribution
The corner plot in Figure 2 contains the two-dimensional 68 and 95 per-cent confidence intervals and shows the degeneracies between the parameters for \((i)\). We find \(\Delta\gamma\leq 2\times 10^{-13}\) at 95 % confidence for the fiducial case. This limit is about two orders of magnitude weaker than what Bartlett et al. (2021) found and what was predicted in Reischke et al. (2022). The main reason for this difference is that the dependence on \(\Delta\gamma\) is carried only by the covariance and not by the model itself; such a dependence is in general weaker, since the covariance only sets the relative weight of the data points. Furthermore, we find general agreement with Nusser (2016), who looked at individual FRBs without considering the full covariance. In green we show the fiducial case, where all parameters are constrained to be larger than zero. The red contour relaxes this assumption and allows negative
\begin{table}
\begin{tabular}{c c c c c} & \(A\) & \(\Delta\gamma\) [\(\times 10^{-13}\)] & DM\({}_{\mathrm{host}}\) & \(\sigma_{\mathrm{host}}\) \\ \hline \hline Gauss \((i)\) & \(0.53_{-0.21}^{+0.22}\) & \(0.52_{-0.35}^{+0.57}\) & \(337_{-94}^{+96}\) & \(110_{-41}^{+61}\) \\ log-normal \((ii)\) & \(0.68_{-0.15}^{+0.21}\) & \(0.43_{-0.31}^{+0.57}\) & \(266_{-110}^{+148}\) & – \\ \hline sub-sample \((i)\) & \(0.41_{-0.25}^{+0.28}\) & \(0.78_{-0.51}^{+0.74}\) & \(306_{-118}^{+111}\) & \(112_{-43}^{+83}\) \\ sub-sample \((ii)\) & \(0.56_{-0.23}^{+0.27}\) & \(0.72_{-0.52}^{+0.79}\) & \(224_{-122}^{+152}\) & – \\ \end{tabular}
\end{table}
Table 2: One-dimensional marginal constraints on each parameter. The errors given correspond to the 68 per-cent confidence interval.
values for \(\Delta\gamma\) as well (we use this run to be closer to the analysis done in Bartlett et al. 2021); here we see very good agreement with the fiducial contour in the range where the prior is non-vanishing. The (artificially) tightest constraints arise when we do not consider the correlation between the DMs of the different FRBs and assume a diagonal covariance matrix. Lastly, the grey contour depicts the fiducial case but with only the eight FRBs in almost the same frequency band (see the events marked with \({}^{\ast}\) in Table 1).
We find that the strong correlation between the data points for \(\Delta\gamma\neq 0\) (see Figure 1) reduces the overall signal-to-noise ratio of the measurement and therefore weakens the constraints. \(\Delta\gamma\) does not exhibit any strong degeneracy with any other parameter; there is only a slight anti-correlation between \(\sigma_{\rm host}\) and \(A\). The strongest anti-correlation arises between DM\({}_{\rm host}\) and \(A\). Both are expected, since a lower amplitude \(A\) can be compensated by a larger host galaxy contribution. Generally, we find very large host contributions, as already pointed out in James et al. (2022) for the majority of the sample used here. It is particularly noteworthy that we even excluded an FRB used in that analysis, with a DM of 700 pc cm\({}^{-3}\) at a redshift of around 0.2.
Our findings also suggest results symmetric around zero for \(\Delta\gamma\). Lastly, it should be noted that we average here over different frequency ranges, and \(\Delta\gamma\) need not be constant in this case. Thus, our constraints should be seen as an averaged version of this quantity. From the grey contours we find that the limit on \(\Delta\gamma\) changes to \(\Delta\gamma\leq 3\times 10^{-13}\) at 95 % confidence in this case.
### Log-normal Host Distribution
We now turn to the log-normal model for the host DM contribution. Since there is not much correlation between the host contribution and the WEP-breaking parameter \(\Delta\gamma\), the constraints on the latter do not change much. The main difference to the Gaussian case is that the log-normal version prefers higher values of \(A\), since the tail of the log-normal distribution does not require such a large DM\({}_{\rm host}\) to produce the large scatter in the host contribution required by the data. This can already be seen from the values for DM\({}_{\rm host}\) in Table 2. The resulting contours are shown in Figure 3 with the same colour scheme as in the Gaussian case. Again, there is only a correlation between \(A\) and DM\({}_{\rm host}\), which now has a slightly more complex shape than in the Gaussian case. The reason for this is that in the log-normal case, this degeneracy can be seen as a super-position
Figure 3: Same as Figure 2 but now for the log-normal model of the host contribution. Note that \(\sigma_{\rm host}\) is removed from the parameter space in this case and the log-normal \(\sigma\) is kept fixed. However, the width of the distribution in DM space still changes by changing the mean of the log-normal distribution.
of the two degeneracies \(A\) and DM\({}_{\rm host}\), as well as \(A\) and \(\sigma_{\rm host}\), of the Gaussian case. We therefore conclude that our constraints are robust against the exact shape of the host galaxy contribution, as long as it is marginalised over. As discussed in James et al. (2022), we also find that this cannot be said about the amplitude of the DM\(-z\) relation. Using the sub-sample discussed in the previous section again has a very similar influence on the constraints on \(\Delta\gamma\).
Lastly, we show the best fit lines for the full sample in Figure 4. The errors correspond to the Gaussian likelihood. Due to the large host contribution and scatter, as well as the strong contribution from the WEP-breaking term, the errors are very large. It should also be noted that some of them are strongly correlated. Furthermore, it is noteworthy that both fits, case (\(i\)) and (\(ii\)), describe the data equally well.
### Comparison to other Constraints on the WEP
There are only very few consistent constraints on the WEP in the literature, as discussed in Reischke et al. (2022). The most stringent constraints on the PPN parameter \(\gamma\) which do not rely on a measured time difference come from solar system measurements (\(\gamma-1\sim 10^{-5}\), Bertotti et al., 2003; Lambert and Poncin-Lafitte, 2009, 2011). These measurements have the distinct advantage that they can rule out different theories, since they put bounds on \(\gamma\) instead of \(\Delta\gamma\). It is thus crucial to weigh these constraints differently than the ones presented here.
Bartlett et al. (2021) provide the strongest constraints, \(\Delta\gamma<2.1\times 10^{-15}\) (at 68 % confidence), which is roughly 50 times tighter than the results we present. There are two main differences between the two works. First, the frequency range is different: Bartlett et al. (2021) measure between 25 and 325 keV, while the measurements carried out here constrain \(\Delta\gamma\) in the range of 4.6 to 6 \(\mu\)eV. The frequency window probed in this work is therefore much smaller, decreasing the leverage of the effect of \(\Delta\gamma\). Secondly, the Gamma Ray Bursts (GRBs) used in Bartlett et al. (2021) are at much higher redshift than the FRBs used in this work, again increasing the leverage.
In Sen et al. (2021) the CHIME FRBs are used to constrain \(\Delta\gamma\). However, their analysis relies on the faulty assumptions pointed out by Minazzoli et al. (2019); Reischke et al. (2022) and therefore cannot be compared to the results presented here.
Lastly, our constraints are consistent with those found by Nusser (2016), who obtained similar values for individual FRBs without accounting for other contributions, i.e. the DM-WEP cross-correlation or the DM covariance.
## 4 Conclusion
In this paper we have measured the allowed range of violation of the WEP for photon energies in the \(\mu\)eV regime. This is complementary to previous studies which focused on photon energies in the keV range. We used 12 localised FRBs and modelled the full LSS-induced covariance between them, including terms arising from the electron distribution as well as from potential terms present if the WEP is broken. Indeed, the sensitivity to \(\Delta\gamma\) is completely carried by the covariance matrix and crucially not by the model, thus avoiding the diverging monopole.
We summarise our main findings as follows:
1. Testing the WEP with the DM\(-z\) relation is possible if one considers only the fluctuations along the line-of-sight, thus avoiding any divergences.
2. When testing the WEP with FRBs, one must consider the full covariance matrix for accurate constraints, since any WEP breaking introduces strong correlations over large distances in the FRB sample. Ignoring these correlations artificially increases the signal-to-noise ratio of the measurement, leading to underestimated errors on \(\Delta\gamma\).
3. We find that the WEP must be satisfied to one part in \(10^{13}\) for photons with energies between 4.6 and 6 \(\mu\)eV, thus complementing measurements by Bartlett et al. (2021).
4. Our constraints are robust against the largest uncertainty in FRB observations: the modelling of the host galaxy contribution. It is, however, crucial to marginalise over this contribution for the final results.
5. FRBs will only become competitive with GRBs when correlations of the DM can be measured accurately (Reischke et al., 2022).
As a summary message, we find the tightest robust constraints on the WEP to date in the \(\mu\)eV energy range. Future work should include a log-normal model for the LSS-induced random field as well, as this is a better fit to numerical simulations.
**Data Availability**: The data underlying this article will be shared on reasonable request to the corresponding author.
## Acknowledgments
RR is supported by the European Research Council (Grant No. 770935). SH was supported by the Excellence Cluster ORIGINS which is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311. SH and RR acknowledge support by Institut Pascal at Universite Paris-Saclay during the Paris-Saclay Astroparticle Symposium 2022, with the support of the P2IO Laboratory of Excellence (program "Investissements d'avenir" ANR-11-IDEX-0003-01 Paris-Saclay and ANR-10-LABX-0038), the P2I axis of the Graduate School of Physics of Universite Paris-Saclay, as well as IJCLab, CEA, APPEC, IAS, OSUPS, and the IN2P3 master project UCMN.
Figure 4: Best fit model for the log-normal (red) and the Gaussian (blue) models for the host galaxy contribution. The errorbars are derived for the best fit model of the Gaussian case (\(i\)), compare Table 2 and the green contours in Figure 2. |
2308.15151 | $J/\psi$ and $\psi$(2S) polarization in proton-proton collisions at energies available at the CERN Large Hadron Collider using PYTHIA8 | The production mechanisms of charmonium states in both hadronic and heavy-ion collisions hold great significance for investigating the hot and dense QCD matter. Studying charmonium polarization in ultra-relativistic collisions can also provide insights into the underlying production mechanisms. With this motivation, we explore the $J/\psi$ and $\psi$(2S) polarization in proton+proton collisions at $\sqrt{s}$ = 7, 8, and 13 TeV using a pQCD-inspired Monte-Carlo event generator called PYTHIA8. This work considers reconstructed quarkonia through their dimuon decay channel in the ALICE forward rapidity acceptance range of $2.5 < y_{\mu\mu} < 4$. Further, we calculate the polarization parameters $\lambda_{\theta}$, $\lambda_{\phi}$, $\lambda_{\theta\phi}$ from the polar and azimuthal angular distributions of the dimuons in helicity and Collins-Soper frames. This study presents a comprehensive measurement of the polarization parameters as a function of transverse momentum, charged-particle multiplicity, and rapidity at the LHC energies. Our findings of charmonium polarization are in qualitative agreement with the corresponding experimental data. | Bhagyarathi Sahoo, Dushmanta Sahu, Suman Deb, Captain R. Singh, Raghunath Sahoo | 2023-08-29T09:34:12Z | http://arxiv.org/abs/2308.15151v2

# \(J/\psi\) and \(\psi\)(2S) polarization in proton-proton collisions at the LHC energies using PYTHIA8
###### Abstract
The production mechanisms of charmonium states in both hadronic and heavy-ion collisions hold great significance for investigating the hot and dense QCD matter. Studying charmonium polarization in ultra-relativistic collisions can also provide insights into the underlying production mechanisms. With this motivation, we explore the \(J/\psi\) and \(\psi\)(2S) polarization in proton+proton collisions at \(\sqrt{s}=7\), 8, and 13 TeV using a pQCD-inspired Monte-Carlo event generator called PYTHIA8. This work considers reconstructed quarkonia through their dimuons decay channel in the ALICE forward rapidity acceptance range of \(2.5<y_{\mu\mu}<4\). Further, we calculate the polarization parameters \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\) from the polar and azimuthal angular distributions of the dimuons in helicity and Collins-Soper frames. This study presents a comprehensive measurement of the polarization parameters as a function of transverse momentum, charged-particle multiplicity, and rapidity at the LHC energies. Our findings of charmonium polarization are in qualitative agreement with the corresponding experimental data.
## I Introduction
Despite being discovered nearly five decades ago, heavy quarkonia states remain a challenging puzzle for QCD-based models [1], due to their non-relativistic nature and the complex multi-scale dynamics involved in ultra-relativistic heavy-ion collisions (HICs). Several theoretical models have been developed to comprehend the quarkonium production mechanisms; non-relativistic quantum chromodynamics (NRQCD) is one such model [2]. The color octet NRQCD [3] explains the quarkonium production cross section and matches the experimental data of high-energy collider experiments such as the Tevatron [4; 5; 6], RHIC [7; 8; 9], and the LHC [10; 11; 12; 13; 14; 15; 16; 17]. In comparison, the color singlet model of NRQCD [18; 19; 20; 21; 22] predicts \(J/\psi\) and \(\psi\)(2S) production cross-sections fifty times smaller than the experimental observation by the CDF collaboration in proton+proton (\(pp\)) collisions at \(\sqrt{s}=1.8\) TeV [4]. There are various other factorization approaches to predict the production cross-section of quarkonium, such as NRQCD factorization [2; 23], leading power fragmentation, next-to-leading-power fragmentation, the color singlet model, the color evaporation model [24; 25; 26], and the \(k_{T}\)-factorization approach [27; 28]. The NRQCD calculations use color-octet matrix elements to account for the non-perturbative long-distance physics in heavy quarkonium systems. These matrix elements are adjusted to explain experimental data, improving the agreement between theory and experiment. However, this adjustment introduces model dependence and should be interpreted cautiously. Including next-to-leading-order (NLO) QCD corrections in color singlet quarkonium production rates has been shown to significantly increase the rates in the large transverse momentum (\(p_{\rm T}\)) region, i.e., \(p_{\rm T}>20\) GeV [29]. This increase has led to a notable reduction in the required contributions from color octet components to match the measured quarkonium production cross-section at the Tevatron [29].
However, for a comprehensive understanding of the quarkonium production mechanism, it is crucial to explore the dynamics responsible for its polarization. Studying the quarkonium polarization provides valuable information, e.g., quarkonium production mechanisms in \(pp\) collisions, the effect of the deconfined medium on the formation of a bound state of two heavy quarks, and the role of spin-vorticity coupling in a thermal rotating medium [30; 31], etc. Polarization refers to the alignment of the quarkonium spin with respect to a chosen axis in a reference frame. The details about the chosen reference frames are discussed in section II.
Quarkonium polarization is predominantly investigated through the dilepton decay channel in experimental studies. The polarization of quarkonium states is obtained by analyzing the angular distributions of decay products. So far, from the experimental side, quarkonium polarization is observed at Tevatron [32; 33; 34; 35; 36], RHIC [37; 38; 39], and LHC [40; 41; 42; 43; 44; 45; 46; 47; 48] at collider experiments, as well as at fixed target experiments such as E866 (NuSea) [49; 50] and HERA-B [51]. From a theoretical standpoint, the polarization study has been discussed in Ref. [52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66]. The color-octet model of NRQCD successfully explains the quarkonium production cross section but fails to account for the polarization results of \(J/\psi\) obtained by the CDF experiment at \(\sqrt{s}=1.96\) TeV [32].
It predicts that at very high momenta, quarkonia are produced from the fragmentation of gluons, preserving their natural spin alignment [64; 67]. Therefore, a large transverse polarization of \(J/\psi\) with respect to its momentum direction is expected [68], while the inclusion of NLO corrections in the color singlet model predicts a strong longitudinal polarization [29]. Furthermore, the quarkonium polarization parameters have been estimated in pp, p-A, and A-A collisions using an improved color evaporation model employing the \(k_{T}\)-factorization [69; 70] and the collinear factorization [71; 72] approaches. It predicts either zero or slightly transverse polarization at high-\(p_{\rm T}\) and a small longitudinal polarization at low-\(p_{\rm T}\), depending on the polarization frame [71; 72; 73]. On the other hand, with the current statistics, the experimental data of ALICE show little or zero polarization for \(J/\psi\) within uncertainties in both hadronic and nucleus-nucleus collisions, although LHCb reports a small longitudinal polarization in the helicity frame. This discrepancy between theory and experiment is commonly called the "\(J/\psi\) polarization puzzle" [40; 54]. However, a Color Glass Condensate (CGC)+NRQCD approach [54] provides a good description of the experimental data at the LHC and STAR in minimum bias \(pp\) collisions at forward rapidities [39].
In this work, we attempt to understand the charmonia polarization in \(pp\) collisions by studying the \(\lambda\)-polarization parameters: \(\lambda_{\theta}\), \(\lambda_{\phi}\), and \(\lambda_{\theta\phi}\). These parameters are obtained using PYTHIA8 simulations by taking the angular distribution of dimuons produced from \(J/\psi\) and \(\psi\)(2S). Further, these \(\lambda\)-polarization parameters are studied as functions of \(p_{\rm T}\), charged-particle multiplicity (\(N_{ch}\)), and rapidity (\(y_{\mu\mu}\)) at \(\sqrt{s}\) = 7, 8, and 13 TeV. The experimental measurement of the quarkonium decay angular distribution is challenging because it demands large event samples and a high level of accuracy in the subtraction of various kinematic correlations induced by the detector acceptance. Thus, it is difficult to analyze the charged-particle multiplicity and rapidity dependence of the polarization parameters from the angular distribution of decay muons. Such difficulties can be easily overcome in Monte-Carlo (MC) simulation studies. With the present understanding of \(pp\) collision dynamics at the LHC energies, the charged-particle multiplicity dependence study of quarkonium polarization would be an interesting topic to investigate the medium effect, and it may serve as a benchmark for heavy-ion collisions as well. The \(p_{\rm T}\)-dependence of the polarization parameters may help us to understand the dynamics of the particle production. Similarly, the rapidity dependence of polarization studies may reveal the phase-space structure of the particles produced in ultra-relativistic collisions.
This paper is organized as follows. The brief details of dimuon angular distribution and event generation are described in section II. The section III includes the results obtained by analyzing the angular distribution of dimuons in \(pp\) collisions at the LHC energies. The section III consists of three subsections. The transverse momentum dependence of polarization parameters is discussed in subsection III.1. In subsection III.2, the charged-particle multiplicity dependence of polarization parameters is studied. The rapidity dependence study of polarization parameters is discussed in subsection III.3. Finally, the important findings are summarized, and the future scopes are presented in section IV.
## II Analysis Methodology and Event Generation
### Dimuon decay angular distribution
The polarization of the \(J^{PC}=1^{--}\) quarkonium states can be measured through the study of the angular distribution \(\mathrm{W}(\theta,\phi)\) of the decay daughters in the dimuon decay channel, which can be parameterized as [27; 42; 64];
\[W(\theta,\phi)\propto\frac{1}{3+\lambda_{\theta}}\left(1+\lambda_{\theta}\cos^ {2}\theta+\lambda_{\phi}\sin^{2}\theta\cos 2\phi+\lambda_{\theta\phi}\sin 2\theta \cos\phi\right) \tag{1}\]
where \(\theta\) and \(\phi\) are the polar and azimuthal angles of the \(\mu^{+}\) with respect to the spin-quantization axis (say, the z-axis) of the chosen polarization frame, and \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\) are the polarization parameters. In particular, the two cases (\(\lambda_{\theta}\) = 1, \(\lambda_{\phi}\) = 0, \(\lambda_{\theta\phi}\) = 0) and (\(\lambda_{\theta}\) = -1, \(\lambda_{\phi}\) = 0, \(\lambda_{\theta\phi}\) = 0) correspond to transverse and longitudinal polarization, respectively. The case (\(\lambda_{\theta}\) = 0, \(\lambda_{\phi}\) = 0, \(\lambda_{\theta\phi}\) = 0) corresponds to zero polarization [42].
There are three different conventions to define the polarization reference frames (definition of the z-axis),
which are illustrated in Fig. 1[64].
* Helicity frame: In the direction of \(J/\psi\) (or \(\psi\)(2S)) momentum in the center of the mass frame of the colliding beams.
* Collins-Soper frame: The bisector of the angle between the momentum of one beam and the opposite of the momentum of the other beam [74].
* Gottfried-Jackson frame: The direction of the momentum of one of the colliding beams [75].
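For concreteness, the quantization axes and the decay angles can be constructed from lab-frame four-momenta as in the following sketch; the beam energy and the sign conventions for the \(x\)- and \(y\)-axes are assumptions and may differ from the conventions adopted in the references.

```python
import numpy as np

# Decay angles of the mu+ in the helicity (HX) and Collins-Soper (CS) frames.
# Four-vectors are (E, px, py, pz) in GeV in the lab frame of a symmetric pp
# collider with the beams along +/- z (beam energy assumed below).
def boost_to_rest(p, q):
    """Boost four-vector p into the rest frame of four-vector q."""
    beta = q[1:] / q[0]
    bsq = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - bsq)
    bp = beta @ p[1:]
    e = gamma * (p[0] - bp)
    vec = p[1:] + ((gamma - 1.0) * bp / bsq - gamma * p[0]) * beta
    return np.concatenate(([e], vec))

def decay_angles(p_mu_plus, p_qq, e_beam=6500.0):
    b1 = np.array([e_beam, 0.0, 0.0, e_beam])    # proton mass neglected
    b2 = np.array([e_beam, 0.0, 0.0, -e_beam])
    # Unit vectors of the beams and the mu+ in the quarkonium rest frame.
    u1 = boost_to_rest(b1, p_qq)[1:]; u1 /= np.linalg.norm(u1)
    u2 = boost_to_rest(b2, p_qq)[1:]; u2 /= np.linalg.norm(u2)
    pm = boost_to_rest(p_mu_plus, p_qq)[1:]; pm /= np.linalg.norm(pm)
    y_ax = np.cross(u1, u2); y_ax /= np.linalg.norm(y_ax)  # production-plane normal
    out = {}
    for name, z_ax in (("HX", -(u1 + u2)), ("CS", u1 - u2)):
        z_ax = z_ax / np.linalg.norm(z_ax)
        x_ax = np.cross(y_ax, z_ax)
        out[name] = (pm @ z_ax, np.arctan2(pm @ y_ax, pm @ x_ax))  # (cos_theta, phi)
    return out
```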
The quantization axis of the Gottfried-Jackson frame lies in between the helicity and Collins-Soper reference frames [64; 76]. So, in the current work, we solely cover the helicity and Collins-Soper frames as the two physically relevant extreme cases. It is noteworthy to mention that the default setting for quarkonium production in hadronic collisions in all existing MC generators uses an isotropic dilepton distribution, as discussed in Ref. [64]. The NA38, NA50, NA51, and NA60 experiments measured a flat \(\cos\theta_{\rm CS}\) (\(\cos\theta\) distribution in the Collins-Soper frame) angular distribution in the window \(|\cos\theta_{\rm CS}|<0.5\), covering 50 % of the phase space, and assumed that the \(J/\psi\) is unpolarized [76]. However, the recent global analysis of \(J/\psi\) polarization measurements indicates that the \(J/\psi\) is significantly polarized and that its polarization changes from longitudinal to transverse in going from low \(p_{\rm T}\) to high \(p_{\rm T}\) [40]. We use PYTHIA8 to visualize the angular distribution of decay muons in Fig. 2, in order to validate the mentioned assumption. From Fig. 2, it is observed that the \(\cos\theta\) angular distribution is almost isotropic in the angular range \(|\cos\theta|<0.5\); however, the deviation from an isotropic distribution starts at \(|\cos\theta|>0.5\) for both frames of reference. This is a consequence of the physics processes involved in PYTHIA8, such as the production and decay of higher excited resonances and the emission of gluons in final-state radiation [77]. The distribution of dimuons with uniform acceptance and efficiency over the \(\cos\theta\) and \(\phi\) distributions at the generation level allows us to determine the polarization parameters from the observed angular distribution [30; 64].
\[<\cos^{2}\theta>=\frac{1+\frac{3}{5}\lambda_{\theta}}{3+\lambda_{\theta}} \tag{2}\]
\[<\cos 2\phi>=\frac{\lambda_{\phi}}{3+\lambda_{\theta}} \tag{3}\]
\[<\sin 2\theta\cos\phi>=\frac{4}{5}\frac{\lambda_{\theta\phi}}{3+\lambda_{ \theta}} \tag{4}\]
The polarization parameters \(\lambda_{\theta}\), \(\lambda_{\phi}\), and \(\lambda_{\theta\phi}\) are obtained for the helicity and Collins-Soper frames by taking the averages of \(\cos^{2}\theta\), \(\cos 2\phi\), and \(\sin 2\theta\cos\phi\). This approach provides an alternative to a multi-parameter fit of Eq. (1) to the dimuon angular distribution [64].
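A minimal sketch of this moment method, obtained by inverting Eqs. (2)-(4), is given below; the isotropic test sample at the end is only a sanity check and not part of the analysis.

```python
import numpy as np

# Inverting Eqs. (2)-(4): the moment method used to extract the lambda
# parameters from the mu+ angles (cos_theta, phi arrays for one kinematic
# bin, in a given frame).
def lambda_from_moments(cos_theta, phi):
    c2 = np.mean(cos_theta**2)                                   # Eq. (2)
    lam_theta = (1.0 - 3.0 * c2) / (c2 - 3.0 / 5.0)
    lam_phi = np.mean(np.cos(2.0 * phi)) * (3.0 + lam_theta)     # Eq. (3)
    sin2theta = 2.0 * cos_theta * np.sqrt(1.0 - cos_theta**2)
    lam_theta_phi = (5.0 / 4.0) * np.mean(sin2theta * np.cos(phi)) \
        * (3.0 + lam_theta)                                      # Eq. (4)
    return lam_theta, lam_phi, lam_theta_phi

# Quick check on an isotropic sample: all three parameters should vanish.
rng = np.random.default_rng(1)
ct, ph = rng.uniform(-1, 1, 10**6), rng.uniform(-np.pi, np.pi, 10**6)
print(lambda_from_moments(ct, ph))   # ~ (0, 0, 0) within statistics
```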
### Event Generation in PYTHIA8
For modeling ultra-relativistic collisions between particles such as electron-electron, electron-positron, proton-proton, and proton-antiproton, one of the commonly used event generators is PYTHIA8. It is quite effective at explaining the LHC results [78; 79]. Numerous physical processes are involved in PYTHIA8, including hard and soft scattering, parton distributions, initial and final state parton showers, multi-partonic
Figure 1: (Color online) Illustration of the three different definitions of the polarization axis, z, in the helicity (HF), Collins-Soper (CS), Gottfried-Jackson (GJ) reference frames, with respect to the direction of motion of the colliding beams (Beam 1 and Beam 2) and of the Quarkonia.
interaction (MPI), string fragmentation, color reconnection, resonance decays, rescattering, and beam remnants [77; 80]. In this study, we have used PYTHIA8 to generate \(pp\) collisions at \(\sqrt{s}\) = 7, 8 and 13 TeV with the 4C tune (Tune:pp = 5) [81]. One of the key benefits of PYTHIA8 is the subsequent MPI processes, which, combined with the impact-parameter dependence of collisions, enable the generation of heavy-flavor quarks through \(2\to 2\) hard subprocesses. A detailed explanation of all physics processes involved in PYTHIA8 can be found in Ref. [77].
This analysis is performed by generating 1.5 billion events for \(pp\) collisions at \(\sqrt{s}\) = 7, 8, and 13 TeV. For our study, we consider inelastic and non-diffractive simulated events, so only the non-diffractive component of all hard QCD processes (HardQCD:all = on) contributes to the total scattering cross section. Hard processes involve the production of heavy quarks. We have switched on color reconnection (ColourReconnection:reconnect = on) along with MPI (PartonLevel:MPI = on). To avoid the divergences of QCD processes in the limit \(p_{\rm T}\to 0\), a transverse momentum cut of 0.5 GeV (PhaseSpace:pTHatMinDiverge = 0.5) is applied. For the production of \(J/\psi\) and \(\psi\)(2S), we use the Charmonium:all flag (Charmonium:all = on) in the simulation [82; 83; 84], which implements the NRQCD framework. The polarization study of \(J/\psi\) and \(\psi\)(2S) has been performed in the dimuon channel by forcing \(J/\psi\) and \(\psi\)(2S) to decay into dimuons (\(\mu^{+}\mu^{-}\)) in the MC simulation. The \(J/\psi\) and \(\psi\)(2S) yields are then obtained through invariant mass reconstruction considering the detector acceptance. This helps in comparing the observations directly with the experimental data.
To check the compatibility of PYTHIA8 with experimental data, we have used the same tuning as in our previous works [78; 79], where we compared the production cross section obtained from PYTHIA8 as a function of transverse momentum and rapidity with the ALICE experimental data of \(J/\psi\), and found them to be comparable within uncertainties.
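A minimal generation sketch with the settings quoted above is given below; it assumes the PYTHIA8 Python bindings are available (they mirror the C++ API, and their availability depends on how PYTHIA was built). The decay-table commands shown are the standard way to force the dimuon channel; 443 and 100443 are the PDG codes of \(J/\psi\) and \(\psi\)(2S).

```python
import pythia8

# Event-generation sketch following the settings quoted in the text.
pythia = pythia8.Pythia()
for cmd in ("Beams:eCM = 13000.",            # repeat with 7000. and 8000.
            "Tune:pp = 5",                   # 4C tune
            "HardQCD:all = on",
            "Charmonium:all = on",           # NRQCD charmonium production
            "PartonLevel:MPI = on",
            "PhaseSpace:pTHatMinDiverge = 0.5",
            "443:onMode = off", "443:onIfMatch = 13 -13",
            "100443:onMode = off", "100443:onIfMatch = 13 -13"):
    pythia.readString(cmd)
pythia.init()

n_kept = 0
for _ in range(1000):                        # the full study uses 1.5e9 events
    if not pythia.next():
        continue
    for prt in pythia.event:
        if prt.id() in (443, 100443) and 2.5 < prt.y() < 4.0:
            n_kept += 1                      # pair the daughter muons here and
                                             # fill the cos(theta), phi histograms
print("charmonia in acceptance:", n_kept)
```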
## III Results and discussion
In this section, we discuss the polar and azimuthal angular distribution of dimuons (corresponding to \(J/\psi\) and \(\psi\)(2S)) obtained from the PYTHIA8 simulation of \(pp\) collisions at \(\sqrt{s}\) = 7, 8, 13 TeV. The parameters \(\lambda_{\theta}\), \(\lambda_{\phi}\), and \(\lambda_{\theta\phi}\) are obtained from the averaged angular distribution of dimuons using Eq. (2), Eq. (3), and Eq. (4), respectively. The left and right columns of Fig. 2 correspond to the helicity and Collins-Soper frames, respectively. In both frames, the upper plots show the cosine of the polar angle distribution and the lower panels represent the azimuthal angular distribution for \(J/\psi\) and \(\psi\)(2S) in \(pp\) collisions at \(\sqrt{s}\) = 13 TeV. To obtain these angular distributions, we have considered a \(p_{\rm T}\) bin of \(2.0<p_{\rm T}<3.0\) GeV/c and a rapidity window \(2.5<y_{\mu\mu}<4.0\). Further, the present section is divided into three subsections. The \(p_{\rm T}\), charged-particle multiplicity, and rapidity dependence of the \(\lambda\)-polarization parameters (i.e., \(\lambda_{\theta}\), \(\lambda_{\phi}\), and \(\lambda_{\theta\phi}\)) are discussed in subsections III.1, III.2, and III.3, respectively.
### Transverse momentum dependence of \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\)
We explore the polarization parameters of \(J/\psi\) and \(\psi\)(2S) as a function of \(p_{\rm T}\) in \(pp\) collisions using PYTHIA8. We study the variation of these parameters at \(\sqrt{s}\) = 7, 8, and 13 TeV in both the helicity and Collins-Soper reference frames, as shown in Fig. 3. Our obtained results for \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\) are compared with the \(J/\psi\) experimental data corresponding to \(pp\) collisions at \(\sqrt{s}\) = 7 TeV measured by LHCb [45] and \(\sqrt{s}\) = 7 and 8 TeV measured by ALICE [40; 41]. In addition, the obtained results are compared with the color singlet (CS) and color singlet + color octet (CS+CO) mechanism-based NRQCD predictions, which include the NLO corrections [57]. The \(p_{\rm T}\)-intervals in PYTHIA8 are chosen in accordance with the ALICE measurement of \(J/\psi\) at \(\sqrt{s}\) = 8 TeV in both reference frames for all energies. The rapidity cut is set to \(2.5<y_{\mu\mu}<4.0\) in accordance with the ALICE detector acceptance. From Fig. 3, it is observed that the \(\lambda_{\theta}\) parameter indicates a longitudinal polarization in the low-\(p_{\rm T}\) regime and a transverse polarization at high-\(p_{\rm T}\), in both frames of reference for \(J/\psi\) and \(\psi\)(2S). This trend qualitatively agrees with the \(J/\psi\) polarization measured by ALICE at \(\sqrt{s}\) = 7 TeV [41] in the helicity frame. In the obtained result, longitudinal polarization is observed at low-\(p_{\rm T}\), which decreases towards high-\(p_{\rm T}\). At lower energy, the HERA-B experiment [51] reports a longitudinal polarization at low-\(p_{\rm T}\) in the Collins-Soper frame. However, the leading order calculation of the NRQCD approach predicts a sizable transverse polarization at high \(p_{\rm T}\) [60; 61; 62; 63]. Further, it is observed that at low \(p_{\rm T}\), \(\psi\)(2S) has a comparatively lower longitudinal polarization than \(J/\psi\), while the polarization of \(\psi\)(2S) increases at high \(p_{\rm T}\). This is expected because, at low-\(p_{\rm T}\), the formation of \(J/\psi\) through \(c-\bar{c}\) is more favourable than that of \(\psi\)(2S). As a consequence, the \(\psi\)(2S) yield is reduced at low \(p_{\rm T}\), which affects its polarization. This study shows no clear dependence of quarkonia polarization on the center-of-mass energy in PYTHIA8 for both reference frames. Next, the \(\lambda_{\phi}\) parameter indicates a transverse polarization in the helicity frame and a longitudinal polarization in the Collins-Soper reference frame. This dissimilarity arises due to differences in the azimuthal angle distribution of dimuons around the chosen reference axis in the two frames. The finite values of the \(\lambda_{\theta}\) and \(\lambda_{\phi}\) parameters indicate that the probabilities of finding the \(J/\psi\) (or \(\psi\)(2S)) vector mesons in the three spin states are not equal, and hence the emission of their daughter particles is not intrinsically isotropic. We find that PYTHIA8 predicts a relatively higher
value of \(\lambda_{\phi}\) compared to the experimental data. However, the \(\lambda_{\theta\phi}\) parameter is almost zero for \(J/\psi\) and \(\psi\)(2S) in PYTHIA8, which is consistent with the LHC results, as displayed in Fig. 3.
### Charged-particle multiplicity dependence of \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\)
The charged particle multiplicity-dependent study of charmonia polarization may reveal the underlying dynamics associated with the particle density of the system produced in \(pp\) collisions. Figure 4 shows the charged particle multiplicity dependence of polarization parameters for \(J/\psi\) and \(\psi\)(2S) mesons in \(pp\) collisions at \(\sqrt{s}=7\)
Figure 2: (Color online) The cosine of the polar angle (upper panel) and the azimuthal angle (lower panel) distribution in \(pp\) collisions for \(J/\psi\) and \(\psi\)(2S) at \(\sqrt{s}=13\) TeV in helicity (left panel) and Collins-Soper (right panel) reference frame.
8, and 13 TeV. The charged-particle multiplicity classes used in the present analysis are taken from Ref. [78]. The experimental study in this regard is reported by the CMS collaboration for the three \(\Upsilon(nS)\) states in \(pp\) collisions at \(\sqrt{s}=7\) TeV [48]. From the theoretical front, the relative multiplicity \(\left(\frac{dN_{ch}}{d\eta}/\left<\frac{dN_{ch}}{d\eta}\right>\right)\) dependence of the polarization parameters for \(J/\psi\) in the helicity and Collins-Soper frames has been studied in the CGC+NRQCD approach [85] in \(pp\) and \(p\)-Pb collisions at \(\sqrt{s}=13\) and 8.16 TeV, respectively. From Fig. 4, we observe that the polarization parameter \(\lambda_{\theta}\) indicates that the degree of longitudinal polarization increases towards high multiplicity for the \(\psi\)(2S) meson, while for \(J/\psi\) the longitudinal polarization remains almost constant from low to high multiplicity in both reference frames. The \(\lambda_{\phi}\) shows a transverse polarization in the helicity frame and a longitudinal polarization in the Collins-Soper reference frame for \(J/\psi\)
Figure 3: (Color online) The \(J/\psi\) and \(\psi\)(2S) polarization parameters as a function of transverse momentum for \(pp\) collisions at \(\sqrt{s}=7\), 8, and 13 TeV using PYTHIA8. The obtained results are compared with the \(J/\psi\) polarization measurement in pp collisions from ALICE at \(\sqrt{s}=7\), 8 TeV, LHCb at \(\sqrt{s}=7\) TeV and the NLO-NRQCD model predictions in color singlet (CS) and color singlet + color octet (CS+CO) states at \(\sqrt{s}=7\) TeV in both helicity and Collins-Soper reference frames.
and \(\psi\)(2S) for all multiplicity classes. The polarization parameter, \(\lambda_{\theta\phi}\), is negligible for \(J/\psi\) and \(\psi\)(2S) at low multiplicities, while at higher multiplicities, it has non-zero values. The present study indicates that the charmonia polarization weakly depends on the center of mass collision energy (\(\sqrt{s}\)).
### Rapidity dependence of \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\)
In this subsection, we explore the rapidity dependence of the \(\lambda\)-parameters for \(J/\psi\) and \(\psi\)(2S) mesons in \(pp\) collisions at \(\sqrt{s}\) = 7, 8, and 13 TeV, as shown in Fig. 5. The \(J/\psi\) polarization for various rapidity bins is reported by the LHCb experiment for \(pp\) collisions at \(\sqrt{s}\) = 7 TeV [45], and the data show a small polarization within the uncertainties. In Fig. 5, the \(p_{\rm T}\)-integrated polarization is obtained for the rapidity range 2.5 to 4.0, in steps of 0.3. The upper panel of Fig. 5 shows that the degree of longitudinal polarization for \(J/\psi\) is larger than for \(\psi\)(2S) in the helicity frame. It is observed that in the Collins-Soper frame, \(\psi\)(2S) is transversely polarized while \(J/\psi\) is longitudinally polarized. However, as observed in the present study, there is no clear dependence of the \(\lambda_{\theta}\)
Figure 4: (Color online) The \(J/\psi\) and \(\psi\)(2S) polarization parameters as a function of charged-particle multiplicity for \(pp\) collisions at \(\sqrt{s}\) = 7, 8, and 13 TeV using PYTHIA8 in both helicity and Collins-Soper reference frames.
parameter on rapidity in both reference frames. Similar to the \(p_{\rm T}\) and charged-particle multiplicity dependence, the \(\lambda_{\phi}\) as a function of rapidity shows a positive value of polarization in the helicity frame and a negative value of polarization in the Collins-Soper frame for \(J/\psi\) and \(\psi\)(2S). The \(\lambda_{\theta\phi}\) parameter shows almost zero polarization with rapidity, irrespective of the particles under consideration, the chosen center-of-mass energy, and the reference frame.
## IV Summary
In this work, we have studied the polarization parameters for \(J/\psi\) and \(\psi\)(2S) from the angular distribution of dimuons in \(pp\) collisions at LHC energies using PYTHIA8. The important observations of this paper are summarized below:
1. The \(\lambda_{\theta}\), \(\lambda_{\phi}\), \(\lambda_{\theta\phi}\) are obtained in the helicity and Collins-Soper reference frames in the rapidity interval \(2.5<y_{\mu\mu}<4.0\).
2. It is observed from the \(\lambda_{\theta}\) parameter that \(J/\psi\) and \(\psi\)(2S) are longitudinally polarized at low-\(p_{\rm T}\) and transversely polarized at high-\(p_{\rm T}\) in both the reference frames.
3. The \(\lambda_{\phi}\) parameter indicates transverse polarization in the helicity frame and longitudinal polarization in the Collins-Soper frame for \(J/\psi\) and \(\psi\)(2S) across all energies. The \(\lambda_{\theta\phi}\) parameter values are close to zero as a function of \(p_{\rm T}\).
4. The multiplicity dependence study of the \(\lambda_{\theta}\) parameter shows that the degree of longitudinal polarization increases with charged-particle multiplicity for \(\psi\)(2S), while the longitudinal polarization of \(J/\psi\) stays constant with charged-particle
Figure 5: (Color online) The \(J/\psi\) and \(\psi\)(2S) polarization parameters as a function of rapidity for \(pp\) collisions at \(\sqrt{s}\) = 7, 8, and 13 TeV using PYTHIA8 in both helicity and Collins-Soper reference frames.
multiplicity, which needs attention from an experimental point of view.
5. In this study, we observe no clear dependence of the \(\lambda_{\theta}\) parameter on rapidity. However, the \(\lambda_{\phi}\) and \(\lambda_{\theta\phi}\) show almost constant polarization with rapidity, both for \(J/\psi\) and \(\psi\)(2S), in the helicity and Collins-Soper reference frames. In the future ALICE 3 setup, with a wider kinematic acceptance of the muon spectrometer, the rapidity dependence study of the polarization parameters would be an interesting topic.
6. It is essential to mention that the polarization results obtained in this analysis as a function of \(p_{\rm T}\), \(N_{ch}\), and \(y_{\mu\mu}\) consider only the production of \(J/\psi\) and \(\psi\)(2S), without taking into account the feed-down from higher excited states. The investigation of charmonia polarization including the feed-down from higher resonances is left for future work. In other words, a separate investigation of the polarization parameters for prompt and non-prompt charmonium states, in both experiment and MC simulation, would be an intriguing subject to test.
7. Since we used PYTHIA8, which incorporates pQCD and NRQCD-based processes, our results overestimate \(J/\psi\) polarization in some \(p_{\rm T}\) bins as compared with experimental data. It suggests that there might be an interplay between these fundamental processes in the realistic scenario. On the other hand, there might be some other processes responsible for net charmonia polarization. Therefore, charmonia polarization in ultra-relativistic \(pp\) collisions requires a thorough study using theoretical models confronted with the experimental results.
## Acknowledgement
Bhagyarathi Sahoo acknowledges the financial aid from CSIR, Government of India. SD acknowledges the financial support under the Post-Doctoral Fellowship of CNRS at IJCLAB, Orsay (France). The authors gratefully acknowledge the DAE-DST, Government of India funding under the mega-science project "Indian Participation in the ALICE experiment at CERN" bearing Project No. SR/MF/PS-02/2021-IITI (E-37123).